## Posts tagged ‘physics education’

### CS Teacher Interview: Emmanuel Schanzer on Integrating CS into Other Subjects

I love that Bootstrap is building on their great success with algebra to integrate CS into Physics and Social Studies. I’m so looking forward to hearing how this works out. I’m working on related projects, following Bootstrap’s lead.

Lots of governors, superintendents and principals made pledges to bring CS to every child, but discovered that dedicated CS electives and required CS classes were either incredibly expensive (hiring/retaining new teachers), logistically impossible (adding a new class given finite hours in the day and rooms in the building), or actively undermined equity (opt-in classes are only taken by students with the means and/or inclination). As a result, they started asking how they might integrate CS into other subjects — and authentic integration is our special sauce! Squeezing CS into math is something folks have been trying to do for decades, with little success. Our success with Bootstrap:Algebra means we’ve got a track record of doing it right, which means we’ve been approached about integration into everything from Physics to Social Studies.

Source: Computer Science Teacher: CS Teacher Interview: Emmanuel Schanzer–The Update

### Hake on MOORFAPs: Massive Open Online Repetitions of FAiled Pedagogy

I enjoy Richard Hake’s posts. He has done excellent empirical educational research, so he knows what he’s talking about. His posts are filled with links to all kinds of great research and other sources.

This post does a nice job of making an argument similar to mine — MOOCs don’t utilize what we know works best in teaching. Hake goes on to point out, “And they’re not measuring learning, either!”

1. “The online and blended education world, really the higher ed world where most of us spend our days, fails to make any appearance.”

2. “If in fact the real story is the rise of blended and online learning, then [that story] will go completely untold if MOOCs are the sole focus.”

In my opinion, two other problems are that “Laptop U”:

3. Fails to emphasize the fact that MOOCs, like most Higher Ed institutions, concentrate on DELIVERY OF INSTRUCTION rather than STUDENT LEARNING to the detriment of their effectiveness – – see “From Teaching to Learning: A New Paradigm for Undergraduate Education” [Barr and Tagg (1995)] at <http://bit.ly/8XGJPc>.

4. Ignores the failure of MOOC providers to gauge the effectiveness of their courses by pre-to-postcourse measurement of student learning gains utilizing “Concept Inventories” <http://bit.ly/dARkDY>. As I pointed out in “Is Higher Education Running AMOOC?” [Hake (2013)] at <http://yhoo.it/12nPMZB>, such assessment would probably demonstrate that MOOCs are actually MOORFAPs (Massive Open Online Repetitions of FAiled Pedagogy). There would then be some incentive to transform MOOCs into MOOLOs (Massive Open Online Learning Opportunities).

via Net-Gold : Message: Re: ‘Laptop U’ Misses the Real Story.

### New National Academies report on Discipline-Based Education Research

I’ve just started looking at this report — pretty interesting synthesis of work in physics education research, chemistry ed research, and others.

The National Science Foundation funded a synthesis study on the status, contributions, and future direction of discipline-based education research (DBER) in physics, biological sciences, geosciences, and chemistry. DBER combines knowledge of teaching and learning with deep knowledge of discipline-specific science content. It describes the discipline-specific difficulties learners face and the specialized intellectual and instructional resources that can facilitate student understanding.

Discipline-Based Education Research is based on a 30-month study built on two workshops held in 2008 to explore evidence on promising practices in undergraduate science, technology, engineering, and mathematics (STEM) education. This book asks questions that are essential to advancing DBER and broadening its impact on undergraduate science teaching and learning. The book provides empirical research on undergraduate teaching and learning in the sciences, explores the extent to which this research currently influences undergraduate instruction, and identifies the intellectual and material resources required to further develop DBER.

Discipline-Based Education Research provides guidance for future DBER research. In addition, the findings and recommendations of this report may invite, if not assist, post-secondary institutions to increase interest and research activity in DBER and improve its quality and usefulness across all natural science disciplines, as well as guide instruction and assessment across natural science courses to improve student learning. The book brings greater focus to issues of student attrition in the natural sciences that are related to the quality of instruction. Discipline-Based Education Research will be of interest to educators, policy makers, researchers, scholars, decision makers in universities, government agencies, curriculum developers, research sponsors, and education advocacy groups.

### Teaching CS in Schools with Meaning: Contexts and problems come first

Richard Hake relates a story from Alan Schoenfeld:

One of the problems on the NAEP [National Assessment of Educational Progress] secondary mathematics exam, which was administered to a stratified sample of 45,000 students nationwide, was the following: An army bus holds 36 soldiers. If 1128 soldiers are being bused to their training site, how many buses are needed?

Seventy percent of the students who took the exam set up the correct long division and performed it correctly. However, the following are the answers those students gave to the question of ‘how many buses are needed?’: 29% said…31 remainder 12; 18% said…31; 23% said…32, which is correct. (30% did not do the computation correctly).

It’s frightening enough that fewer than one-fourth of the students got the right answer. More frightening is that almost one out of three students said that the number of buses needed is ‘31 remainder 12’.
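The arithmetic the students performed, and the round-up the question actually asks for, take only a few lines of Python:

```python
# The NAEP bus problem: 1128 soldiers, 36 per bus.
soldiers, capacity = 1128, 36

quotient, remainder = divmod(soldiers, capacity)   # the long division: 31 remainder 12
buses_needed = quotient + (1 if remainder else 0)  # the 12 leftover soldiers still need a bus

print(quotient, remainder, buses_needed)           # 31 12 32
```

The computation is trivial; the hard part, as the NAEP results show, is knowing that the context demands rounding up.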

The problem that Hake and Schoenfeld are both pointing out is that we teach mathematics (and much else in our curriculum) completely divorced from the contexts in which the mathematics makes sense. The children taking the NAEP knew *how* to do the mathematics, but not *why*, and not nearly enough about how the mathematics helps to solve a problem. They knew mathematics, but not what it was *for*.

Hake relates this story in an article about Louis Paul Benezet, an educator who ran a radical experiment in the 1930s. Benezet saw how mindlessly young children were performing mathematics, so he made a dramatic change: almost entirely remove mathematics from grades 1-5, and start teaching mathematics in grade 6, with a focus on problem-solving (e.g., start from estimation, so that students have a sense of when an answer is reasonable). Sixth graders can understand the problems for which one should use mathematics. The point is *not* to introduce the *solution* until students understand the *problem*. Remarkably, the experimental 6th graders completely caught up in just four months to the 6th graders who had had mathematics all five previous years.

The experiment was radical then, and as far as I know, has not been replicated — even though evaluations suggest it worked well. It runs against our intuition about curriculum. Mathematics is important, right? We should do *more* of it, and as *early* as possible. How could you *remove* any of it? Benezet argued that, instead, young children should do more reading and writing, saving the mathematics for when it made sense.

Hake uses Benezet (and the evaluation of Benezet’s approach by Berman) to argue for a similar radical approach to physics education — teaching some things to kids to build up intuition, but with a focus on *using* physics to solve problems, and introducing the problems only when the students can understand them. There are lessons here for computing education, too.

- First, *problems and contexts* **always** come first! Teaching a FOR loop and arrays *before* teaching a problem in which they are useful just leads to rote learning: brittle knowledge which can’t be applied anywhere, let alone transferred.
- Second, the answer to the question “What should be removed from our overly-packed curriculum to squeeze computer science in?” may be “Get rid of the overly-packed curriculum.” There may be things that we’re teaching at the wrong time, in the wrong way, which really is just a waste of everyone’s time.
- Finally, just how young should we be teaching programming? Several people sent me the link to the report about Estonia teaching all first graders to program (quoted and linked below). Sure, you *can* teach first graders to program — but will they understand *why* they’re programming? What problems will first graders recognize as problems for which programming is a solution?

I do applaud the national will in Estonia to value computing education, but I do wonder if teaching programming so young leads to rote learning and the idea that “31 remainder 12” is a reasonable number of buses.

We’re reading today that Estonia is implementing a new education program that will have 100 percent of publicly educated students learning to write code.

Called ProgeTiiger, the new initiative aims to turn children from avid consumers of technology (which they naturally are; try giving a 5-year-old an iPad sometime) into developers of technology (which they are not; see downward-spiraling computer science university degree program enrollment stats).

ProgreTiiger education will start with students in the first grade, which starts around the age of 7 or 8 for Estonians. The compsci education will continue through a student’s final years of public school, around age 16. Teachers are being trained on the new skills, and private sector IT companies are also getting involved, which makes sense, given that these entities will likely end up being the long-term beneficiaries of a technologically literate populace.

via Guess who’s winning the brains race, with 100% of first graders learning to code? | VentureBeat.

### A lesson from physics: Even lucid lectures on abstractions don’t work

I used Arnold Arons’ work a lot when I did my dissertation, so I particularly liked this quote from a recent Richard Hake post. There are direct implications for us in CS, where just about everything (from FOR loops to linked lists) is an abstract idea. Lectures, even lucid ones on these topics, don’t work for most students.

“I point to the following unwelcome truth: much as we might dislike the implications, research is showing that didactic exposition of abstract ideas and lines of reasoning (however engaging and lucid we might try to make them) to passive listeners yields pathetically thin results in learning and understanding – except in the very small percentage of students who are specially gifted in the field.”

— Arnold Arons (1997)

REFERENCES [URLs shortened by <http://bit.ly/> and accessed on 06 March 2012.]

Arons, A.B. 1997. “Teaching Introductory Physics,” p. 362. Wiley; publisher’s information at <http://bit.ly/jBcyBU>. Amazon.com information at <http://amzn.to/bBPfop>; note the searchable “Look Inside” feature.

### Physics students grok that they need computing

Danny Caballero has started a blog at Boulder, and a recent post describes a survey he gave to upper-division undergraduates in Physics. They definitely grokked that they need computing in their education.

59 students responded to the survey. Most students were juniors and seniors. That’s because we targeted upper-division courses like Classical Mechanics, E&M and Stat. Mech.

Here’s a brief summary with more details located here:

75% of students said that computational modeling is “Essential” or “Important” to their development as a physicist.

Students mentioned these characteristics would be important for the tool they might be taught to use (in decreasing order): ease of use, support/resources for learning, efficiency and power, flexibility and adaptability, well-accepted in the field.

Students were neutral about using open-source software, but stated that it was important for the tool be free or cheap after they graduate.

As far as implementation, students wanted to see computational modeling as a complement to analytical problem-solving tasks. Ideas included solving more complex problems, helping with algebra, and visualizing problems.

via Think like a physicist… | Ideas and thoughts from an underpaid fool.

### Instruction makes student attitudes on computational modeling worse: Caballero thesis part 3

*Note: Danny’s whole thesis is now available on-line.*

In Danny Caballero’s first chapter, he makes this claim:

Introductory physics courses can shape how students think about science, how they believe science is done and, perhaps most importantly, can influence if they continue to pursue science or engineering in the future. Students’ attitudes toward learning physics, their beliefs about what it means to learn physics and their sentiments about the connection between physics to the “real world” can play a strong role in their performance in introductory physics courses. This performance can affect their decision to continue studying science or engineering.

Danny is arguing that physics plays a key role in retaining student interest in science and engineering. Computing plays a similar role. Computing is the new workbench for science and engineering, where the most innovative and ground-breaking work is going to happen. Danny realized that students’ attitudes about *computational modeling* are important, in terms of (a) student performance and learning in physics and (from above) all of science and engineering learning, and (b) influencing student decisions to continue in science and engineering. What we worry about are students facing programming and saying, “Real scientists and engineers do this? I *hate* this! Time to switch to a management degree!”.

There are validated instruments for measuring student attitudes toward computer science and physics, but not for measuring student attitudes toward computational modeling. So, Danny built one (included in an appendix to his thesis), which he calls “COMPASS,” for “Computational Modeling in Physics Attitudinal Student Survey.” He validated it with experts and through correlations with similar instruments for physics attitudes. It contains statements for students to agree or disagree with, like:

- I find that I can use a computer model that I’ve written to solve a related problem.
- Computer models have little relation to the real world.
- It is important for me to understand how to express physics concepts in a computer model.
- To learn how to solve problems with a computer, I only need to see and to memorize examples that are solved using a computer.

Danny gave this instrument to a bunch of experts in computational modeling, who generally had similar answers to all the statements, e.g., strongly agreed/strongly disagreed in all the same places. Then he measured student answers in terms of the percentage that were “favorable” (agreed with the experts) on computational modeling, and the percentage that were “unfavorable” (differed from the experts). A student’s COMPASS result is then a pair: %favorable and %unfavorable. He gave this to several cohorts at Georgia Tech and at North Carolina State University, in week 2 (just as the semester started) and in week 15 (as the semester was wrapping up). The direction of change from week 2 to week 15 was the same for every cohort:

The black square in each indicates the mean. **The answers after instruction shifted to more unfavorable attitudes toward computational modeling**. Instruction led to students being *more* negative about computational modeling.

Danny did an analysis of where the big shifts were in these answers. In particular, students after instruction had *less* personal interest in computational modeling, agreed *less* with the importance of sense-making (the third bullet above), and agreed *more* with the importance of rote memorization (the last bullet above).
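The favorable/unfavorable scoring described above can be sketched as follows. This is a hypothetical illustration: the function name and the "agree"/"disagree"/"neutral" encoding are my own, not from the thesis.

```python
# Hypothetical sketch of COMPASS-style favorable/unfavorable scoring.
# A response matching the expert consensus counts as favorable; a non-neutral
# response differing from the consensus counts as unfavorable; neutral
# responses count toward neither percentage.
def compass_scores(expert_consensus, student_responses):
    n = len(expert_consensus)
    favorable = sum(1 for e, s in zip(expert_consensus, student_responses)
                    if s == e)                        # agrees with the experts
    unfavorable = sum(1 for e, s in zip(expert_consensus, student_responses)
                      if s != e and s != "neutral")   # disagrees with the experts
    return 100.0 * favorable / n, 100.0 * unfavorable / n

expert_consensus  = ["agree", "disagree", "agree", "disagree"]
student_responses = ["agree", "agree", "neutral", "disagree"]
print(compass_scores(expert_consensus, student_responses))   # (50.0, 25.0)
```

The pair of percentages need not sum to 100, which is why Danny reports both.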

Danny chopped up these data in lots of ways. Does student grade influence the results? Gender? Year in school? The only thing that really mattered was major. Computing majors (thankfully!) did recognize more value for computational modeling after instruction.

These results are disappointing. Teaching students about computational modeling makes them like it less? Makes them see less value in it? Across *multiple* cohorts?!? But from a research perspective, this is an important result. **We can’t fix a problem that we don’t know is there.** Danny has not only identified a problem. He’s given us a tool to investigate it.

The value of COMPASS is in having a yardstick. We can use it to see how we can influence these attitudes. Danny wrote it so that “physics” could be swapped out for “biology” or “chemistry” easily, to measure attitudes towards computational modeling in those disciplines, too. I’ll bet that this is a useful starting place for many folks interested in measuring computational thinking, too.

**“So, Guzdial, you spent a lot of time on these three blog posts? Why?!?”**

I strongly believe that the future of computing education lies in teaching more than just those who are going to be software developers. Scaffidi, Shaw, and Myers estimate that there are four professionals who program but who are not software developers for every software developer in the US. We computing educators need to understand how people are coming to computing as a tool for thinking, not just a material for engineering. We need to figure out how to teach these students, what tools to provide them, and how to measure their attitudes and learning.

*Danny’s thesis is important in pursuing this goal*. Each of these three studies is important for computing education research, as well as for physics education research. Danny has shown that physics students’ learning is different with computational modeling, what their challenges are when building computational models in Python, and here, how their attitudes about computational modeling change. Danny has done a terrific job describing the issues of a non-CS programming community (physics learners) in learning to use computational modeling.

This is an important area of research, not just for computer science, but for STEM more generally. Computing is critical to all of STEM. We need to produce STEM graduates who *can* model with computers and who have *positive* attitudes about computational modeling.

The challenge for computing education researchers is that Danny’s thesis shows us *we don’t know how to do that yet*. Our tools are wrong (e.g., the VPython errors getting in the way), and our instructional practices are wrong (e.g., such that students are *more negative* about computational modeling after instruction than before). We have a long way to go before we can teach all STEM students how to use computing in a powerful way for thinking.

*We need to figure it out*. Computational modeling is critical for success in STEM today. We will only figure it out by continuing to try. We have to use curricula like *Matter and Interactions*. We have to figure out the pedagogy. We have to create new learning tools. The work to be done is not just for physics educators, but for computer scientists, too.

Finally, I wrote up these blog posts because I don’t think we’ll see work like this in any CS Ed conferences in the near term. Danny just got a job in Physics at U. Colorado-Boulder. He’s a physicist. Why should he try to publish in the SIGCSE Symposium or ICER? How would that help his tenure case? I wonder if his work could get in. His results don’t tell us anything about CS1 or helping CS majors become better software developers. Will reviewers recognize that computational modeling for STEM learning is important for CS Ed, too? I hope so, and I hope we see work like this in CS Ed forums. In the meantime, it’s important to *find* this kind of computing education work in non-CSEd communities and *connect* to it.

### What students get wrong when building computational physics models in Python: Caballero thesis part 2

Danny’s first study found that students studying *Matter and Interactions* didn’t do better on the FCI. That’s not a condemnation of M&I: the FCI is an older, narrow measure of physics learning. The *other* things that M&I covers are very important. In fact, computational modeling is a critical new learning outcome that science students need.

So, the next thing that Danny studied in his thesis was what problems students were facing when they built physics models in VPython. He studied one particular homework assignment, in which students were given a piece of VPython code that modeled a projectile.

The grading system gave students the program with variables filled in with randomly-generated values; the force-calculation portion was blank. It also gave them the correct answer for the given program, assuming the force calculation was provided correctly. Finally, the students were given the description of another situation. The students had to complete the force calculation (and could use the given, correct answer to check it), and then had to change the constants to model that new situation. They submitted the final program.

Danny studied about 1400 of these submitted programs. *Only about 60% of them were correct*.

He and his advisor coded the errors. *All* of them. And they had 91% inter-rater reliability, which is amazing! Danny then used cluster analysis to group the errors.

Here’s what he found (image below taken from his slides, not his thesis):

23.8% of the students couldn’t get the test case to work. 19.8% of the students got the mapping to the new test condition wrong. That last one is a common CS error — something which had to be inside the loop was moved before the loop. Worked once, but never got updated.
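The "moved before the loop" error can be made concrete with a minimal sketch (my own example, with made-up numbers, not code from the thesis): a force that depends on position must be recomputed every time step.

```python
# A position-dependent force must be recomputed inside the time-stepping loop.
# Here it is computed once, before the loop, so it "worked once, but never
# got updated."
dt = 0.1
pos, vel = 10.0, 0.0   # 1-D position and velocity (hypothetical values)
k = 0.5                # spring-like constant (hypothetical value)

force = -k * pos       # BUG: frozen at its initial value
for _ in range(3):
    # force = -k * pos     # the calculation belongs here, every iteration
    vel = vel + force * dt
    pos = pos + vel * dt
print(pos, vel)
```

With the calculation left outside the loop, the first step is right and every later step silently drifts from the correct model.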

Notice that a lot of the students got an “Error in Force Calculation.” Some of these were a sign error, which is as much a physics error as a computation error. But a lot of the students tried to raise a value to a vector power. VPython caught that as a type error — and the students couldn’t understand the error message. Some of these students plugged in something that got past the error, but wasn’t physically correct. That’s a pretty common strategy of students (that Matt Jadud has documented in detail), to focus on getting rid of the error message without making sure the program still makes sense.
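The vector-power error looks roughly like this sketch in plain Python (the `Vec` class below is my own stand-in; real VPython provides `vector`, `mag()`, and `norm()`):

```python
import math

# Minimal stand-in for a VPython-style 3-D vector, just enough for the example.
class Vec:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def mag(self):
        return math.sqrt(self.x**2 + self.y**2 + self.z**2)

r = Vec(3.0, 4.0, 0.0)                 # position vector, magnitude 5

# Correct: square the scalar *magnitude* of the vector.
inverse_square = 1.0 / r.mag() ** 2    # 1/25 = 0.04

# The student error: raising a value to a vector power.
err = None
try:
    bad = 2.0 ** r                     # TypeError: float ** vector is undefined
except TypeError as e:
    err = e
print(type(err).__name__)
```

The interpreter reports this as a type error about `float` and a vector type: a message about Python’s type system, with nothing in it to connect back to the physics of the force law.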

Danny suggests that these were physics mistakes. I disagree. I think that these are computation, or at best, computational modeling errors. Many students don’t understand how to map from a situation to a set of constants in a program. (Given that we know how much difficulty students have understanding variables in programs, I wouldn’t be surprised if they don’t *really* understand what the constants mean or what they do in the program.) They don’t understand Python’s error messages, which were about *types* not about *Physics*.

Danny’s results help us in figuring out how to teach computational modeling better.

- These results can inform our development of new computational modeling environments for students. Python is a language designed for developers, not for physics students creating computational models. Python’s error messages weren’t designed to be understood by that audience — they explain errors in terms of computational ideas, not in terms of modeling ideas.
- These results can also inform how we teach computational modeling. I asked, and the lectures never included *live coding*, which has been identified as a best practice for teaching computer science. This means that students never saw someone map from a problem to a program, nor saw anyone interpret error messages.

If we want to see computation used in teaching across STEM, we have to know about these kinds of problems, and figure out how to address them.

### Adding computational modeling in Python doesn’t lead to better Physics learning: Caballero thesis, part 1

This week, I was on a physics dissertation committee for the first time. Marcos Daniel “Danny” Caballero is the first Physics Education Research (PER) Ph.D. from Georgia Tech. (He’s getting a physics Ph.D., but his work is in PER.) His dissertation is available, and his papers can also be found on the GT PER website.

Danny’s dissertation was on “Evaluating and Extending a Novel Course Reform of Introductory Mechanics.” He did three different studies exploring the role of computation in physics learning in the mechanics class. Each of those studies is a real contribution to computing education research, too. (Yes, I enjoyed sitting on the committee!)

The first study is an analysis of physics learning students in their “traditional” mechanics class, vs. students in their “Matter & Interactions” course. (This study has already been accepted for publication in a physics journal.) I’ve mentioned M&I before — it’s an approach that uses programming in VPython for some of the labs and homework. Danny used the oft-admired, gold-standard Force Concept Inventory. The results weren’t great for the M&I crowd.

The traditional students did statistically significantly better than the M&I students on the FCI. Danny did the right thing and dug deeper. Which topics did the traditional students do better on? How did the classes differ?

The answer wasn’t really too surprising. The M&I class had different emphases than the traditional class: the traditional students did more homework on the FCI topics, the very topics they ended up doing better on. M&I students did homework on a lot of other topics (like programming in VPython) that the traditional students didn’t. Students learn from what they do and spend time on. Less homework on FCI topics meant less learning about FCI topics.

Jan Hawkins made this observation a long time ago: ** technology doesn’t necessarily lead to learning the same things better — it often leads to better learning about new things**. Yasmin Kafai showed new kinds of learning when she studied science learners expressing themselves in Microworlds Logo. We also know that learning two things takes more effort than learning one thing. Idit Harel was the first one to show synergy between learning programming and learning fractions, but it was through lots of effort (e.g., more than 9 months of course time) and lots of resources (e.g., Idit and two graduate students from MIT in the classroom, besides the teacher). In my own dissertation work on Emile, I found that students building physics simulations in HyperTalk learned a lot of physics in three weeks — but not so much CS.

There’s a bigger question here, from a computing education perspective. Is this really a test of whether computing helped students learn FCI kinds of physics? *Did these students really learn computing?* My bet, especially based on the findings in Danny’s other two studies that I will blog about, is that they didn’t. That’s not really surprising. Roy Pea and Midian Kurland showed in their studies that students studying Logo didn’t also develop metacognitive skills. But as they pointed out in their later papers, Pea and Kurland mostly showed that the condition wasn’t met: these kids didn’t really learn Logo! One wouldn’t expect any impact from the little bit of programming that they saw in their studies.

The real takeaway from this is: **Computation + X doesn’t necessarily mean better learning in X**. It’s hard to get students to learn about computation. If you get any impact from computing education, it may be on X’, not on X.

### It’s in Science: Interaction beats Lecture

AP, Washington Post, NYTimes, and NPR covered this story this week — Carl Wieman has an article in *Science* showing that two grad students with an interactive learner-engagement method beat out a highly-rated veteran lecturer in terms of student learning in a large class. This is a cool piece, and I buy it — that’s why I’m doing peer instruction in my class. While I still believe that lecture *can* work, the evidence is strong that learner engagement beats lecture, especially in large STEM classes. I think that this result is particularly disconcerting for the open learning movement. If lectures aren’t worth much for most learners, what is it that iTunes-U and MIT OpenCourseWare are offering?

Who’s better at teaching difficult physics to a class of more than 250 college students: the highly rated veteran professor using time-tested lecturing, or the inexperienced graduate students interacting with kids via devices that look like TV remotes? The answer could rattle ivy on college walls.

A study by Nobel Prize-winning physicist Carl Wieman at the university found that students learned better from inexperienced teachers using an interactive method — including the clicker — than from a veteran professor giving a traditional lecture. Student answers to questions and quizzes are displayed instantly on the professor’s presentation.

He found that in nearly identical classes, Canadian college students learned a lot more from teaching assistants using interactive tools than they did from a veteran professor giving a traditional lecture. The students who had to engage interactively using the TV remote-like devices scored about twice as high on a test compared to those who heard the normal lecture, according to a study published Thursday in the journal *Science*.

### How computing and physics learning differ

Allison Elliott Tew successfully defended her thesis proposal this morning. Hooray! You may recall from my pre-SIGCSE description of her work, that she’s attempting to build a language independent measure of CS1 learning. Allison talked about concept inventories in a way this morning that I found intriguing with respect to computing.

You may know that concept inventories are used to assess student knowledge about an area. They are based on an analysis of what students already think about an area, and those pre-conceptions/misconceptions appear as “distractors” in the multiple-choice questions. The most famous of these assessments is probably the Force Concept Inventory, which was developed over 25 years by David Hestenes. The FCI measures knowledge about Newtonian mechanics, and it includes all those deeply-held beliefs that students have about the world from living in it for 18 years before entering a College physics classroom. The FCI was used in a huge study (*n*>6000 students) by Richard Hake to show that instruction alone was ineffective in shaking those beliefs, and “interactive engagement” (like peer instruction) was necessary to get students to learn physics well.
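Hake’s comparison is usually expressed as the normalized gain, the fraction of the possible pre-to-post improvement on the FCI that a class actually achieved. A quick sketch (the benchmark values are the rough averages from Hake’s published survey):

```python
# Hake's normalized gain <g> = (post% - pre%) / (100 - pre%): the fraction of
# the possible pre-to-post improvement that a class actually achieved.
def normalized_gain(pre_pct, post_pct):
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# In Hake's survey, traditional lecture courses clustered near <g> ~ 0.23,
# while interactive-engagement courses averaged roughly <g> ~ 0.48.
print(normalized_gain(40.0, 70.0))   # a class moving 40% -> 70% has <g> = 0.5
```

Normalizing by the room to improve is what lets classes with very different incoming preparation be compared on one scale.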

There are efforts to build concept inventories for computer science, but they run into a problem when creating a direct mapping. Students enter our classes, for the most part, without any conceptions at all of computing. If they have conceptions (or misconceptions), they’ve only developed them recently. Students may have an idea about how computers work from years of working with those computers, but the challenging issues of variable types, defining classes, pointers, recursion, and essentially everything that students have trouble with in computing are all *totally new to their computer science classes.* They don’t have 18 years to develop preconceptions and misconceptions. That makes it hard to develop a CS concept inventory, because any wrongly-held beliefs that students have are due to their instruction, not due to naive reflection on experience.

Which leads me to my question: How deeply held are those misconceptions? Physics misconceptions are well-documented and very hard to shake. They have served 18 year olds very well! How about computer science misconceptions? Since they form over a short period of time, can we just correct them with, “No, that’s wrong”? Maybe if we taught things better, there would be no misconceptions to inventory. And if there are some, maybe they’re really easy to change. I don’t know how one would measure strength of misconception, but I’ll bet that it’s different between physics and computer science.

### The Learning Process for Education Research

One of the more influential projects in physics education (and the learning sciences overall) was the effort by Jill Larkin and Herb Simon to characterize how students and experts solve physics problems. They found that students tend to look at physics problems at a shallow level, while experts see deep structure. Students tend to look at what variables are present in the problem, match them to the equations given in class, and see what they can compute with those variables. Experts look at a problem and identify the *kind* of problem it is, then work out a process toward a solution.

My son is currently taking AP Physics, and I’m seeing this same process when he asks me for help. My dissertation work was about teaching students kinematics by having them build simulations, so I’m familiar with some of the content. I’m no expert, but I am a bit closer than my son. Matt brought me a problem and started with, “I can figure out delta-Y here, but can’t see why that’s useful.” He knew the equation that matched the variables. I drew a picture, then figured out what we needed to compute. I then *remembered the wrong equation* (evidence that I’m no expert) and came up with an answer that clearly couldn’t be right. (Kudos to Matt for realizing that!) Matt then figured out the right equation, and we came up with a more reasonable answer. I worked from the problem situation to an equation, while Matt started by looking for an equation.

I’ve been seeing this *same* process lately in how people come to understand education research. I’m teaching a joint undergraduate and graduate class on educational technology this semester. (We just started class last week.) In the first week, I had them read two chapters of Seymour Papert’s *Mindstorms*; the paper “Pianos, not Stereos” by Mitchel Resnick, Amy Bruckman, and Fred Martin; and Jeannette Wing’s “Computational Thinking.” I started the class discussion by asking for summary descriptions of the papers. A Ph.D. student described Jeannette’s position as “Programming is useful for everyone to understand, because it provides useful tools and metaphors for understanding the world.” I corrected that, explaining that Jeannette questions whether “programming” is necessary for gaining “computational thinking.” The student shrugged off my comment with a (paraphrased) “Whatever.” For those of us who care about computing education, that’s not a “whatever” issue at all — it’s a deep and interesting question whether someone can understand computing with little (or no?) knowledge of programming. At the same time, the student can be excused for not seeing the distinction. It’s the first week of class, and it’s hard to see deep structure yet. The surface level is still being managed. It’s hard to distinguish “learning programming” from “learning to think computationally,” especially for people who have learned to program. “How else would you come to think computationally?”

This last week, we’ve been reviewing the findings from the first year of our Disciplinary Commons for Computing Educators, where we had university and high school computer science teachers do Action Research in their own classrooms. Well, we *tried* to do Action Research. We found that the teachers had a hard time inventing researchable questions about their own classrooms. We ended up scaffolding the process by starting out with experimental materials from others’ studies, so that each teacher could simply pick the experiment that he or she felt would be most useful to replicate in his or her classroom. We then found that the teachers did not immediately see how the results had any implications for their own classrooms. It took us a while to get teachers to even ask the questions: “The results show X (e.g., most students in my classroom never read the book). What does that mean for my students? Does that mean X is true for all my students? Should I be doing something different in response?”

These results aren’t really surprising, either — at least in hindsight. High school and university teachers have their jobs *not* because they are expert at education research. University researchers typically are expert at *some* computing-related research, not computing education research, and a general “research perspective” doesn’t seem to transfer. Our teachers were looking at the surface level, and it does take particular knowledge to develop researchable questions and to interpret results into an action plan afterwards.

Education research is a field of study. I’ve been doing this kind of work for over 20 years, so you’d think I’d have realized that by now, but I still get surprised. Simply being a teacher doesn’t make you an expert in education research, and being a domain researcher doesn’t make you an expert in education research in that domain. It takes time and effort to see the deeper issues in education research, and everyone starts out just managing the surface features.

### Feynman lectures from Microsoft: A medium for active essays and computing ed

I got to see the Project Tuva videos at the MSR Faculty Summit during the last session of the day Tuesday. If you haven’t seen them, I recommend them to you. (Though, as Ian Bogost found out, you have to have the latest version of Microsoft Silverlight to watch them.) These are Richard Feynman’s “Messenger Lectures,” which he delivered at Cornell and which were recorded by the BBC. The Project Tuva site enhances the video with the ability to take notes, read others’ notes (synched to the video), and see links (that appear at the appropriate points in the video), including links into simulations and even the WorldWide Telescope. So, when Feynman talks about how stars are formed, you click and go see telescope imagery of new stars being formed. When Feynman talks about Tycho Brahe, you go to a simulation of planetary orbits so that you can make your own velocity measurements.

While I’m a Feynman fan (as are most scientists, engineers, mathematicians, and computer scientists I know), I was more excited about the medium than I was the Project Tuva lectures themselves. I’m still looking for the right medium (and authoring tools) to express ideas in computation. Books don’t cut it, and running programs go too far the other way.

Expressing computation to students is as pedagogically complex as expressing quantum mechanics or electrodynamics to physics students. Most physics educators whom I’ve asked list those subjects as the most challenging to teach, since the phenomena are impossible to observe directly and the behavior is non-intuitive. All of computing is like that! With a microscope, I can convince you that Biology is really about cells, and most of Physics and Chemistry is about explaining the phenomena that you see every day.

While we see computing every day, the computation behind those applications is difficult to see. Just how many **for** loops are necessary to write that email? Did you see all those linked lists behind your PowerPoint slide deck? Applications that we use daily are layers upon layers of computation, such that it’s nearly impossible to see the low-level computation that we want to teach in an introductory course. This is the problem of not having a microscope for computing.

So, we use books with source code in them. To imagine the execution of a piece of source code is perhaps the most important and most intellectually challenging goal of an introductory course. If human intelligence is computable, then the Halting Problem comes into effect, and for some level of complexity, we *cannot* figure out the execution just by looking at the source code. Books with source code are a reasonable way of talking about computation, once you have some level of ability to imagine execution. For rank beginners, it’s almost cruel — it’s like saying, “Let’s have you learn Russian by throwing you into Moscow without a coat in January. Better figure out what Russian means quickly!”
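To make concrete what “imagining execution” asks of a rank beginner, here is a hypothetical few-line exercise (my example, not from any textbook): there is no way to answer “what does this print?” except by simulating each loop iteration in your head.

```python
def mystery(values):
    # To predict the result, a novice must trace every iteration:
    # which element is compared, when 'biggest' changes, and why.
    biggest = values[0]
    for v in values[1:]:
        if v > biggest:
            biggest = v
    return biggest

print(mystery([3, 1, 4, 1, 5]))  # tracing the loop reveals: 5
```

An experienced programmer reads this as “find the maximum” at a glance; the beginner has no choice but to execute it mentally, step by step, which is exactly the skill the book alone cannot supply.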

Alan Kay has argued for “active essays,” a kind of dynamic book with simulations built in. Ted Kaehler, Mitchel Resnick, and Brian Silverman have all built some really interesting active essays. An active essay could expose the source code, with explanation, and allow for execution within the same medium. As a form of scaffolding, perhaps the source code could be tweaked in some meaningful ways, so that students could see the relationship between changes to the source code and the impact on dynamic behavior — *which is the most important thing to learn in an introductory class!*

The enhanced video mode of Project Tuva could offer a way of doing “active essays” where the base medium is video rather than text. Active essays have simulations embedded within text explanations. We could, however, have videos explaining concepts, with simulations embedded within the video. That’s an intriguing notion. As I’ve blogged previously, there is psychology evidence that lectures are better for explaining computing concepts than just reading a book. (Yes, I discovered that my old Amazon blog posts are still there, if I can find a direct link to individual posts.) Maybe with the Project Tuva enhanced video mode, we could finally do the *Metaobject Protocol* book in a compelling way.

Now, speaking of “books not cutting it,” the post-review-process version of the Data Structures manuscript is due next Wednesday, and I’d better get to it before my co-author discovers I’ve been spending my writing time blogging (again).

### Are we measuring what students are learning?

One measure of the success of a talk is how many questions you get in the hallway *after* the talk. I got a few yesterday, which suggests that people were still thinking about the points afterwards.

One question I got was about a finding we’ve had in several of the contextualized computing education classes, like robotics and Gameboys for computer organization. Students report spending extra time on their homework beyond what’s required “just because it’s cool.” Yet, in some cases, there is no difference in grade distributions or failure rates compared to a comparison class. What gives? Isn’t that a *bad* thing if students spend extra time but it’s not productive time?

Absolutely, that can be the case. It may also be the case that students are learning things that we don’t know how to measure. Think about the argument that it takes 10,000 hours of practice to develop expertise (a number that has been recalculated from several sources). Can we come up with learning objectives for each of those 10,000 hours? Or is it that we can measure *some* of those objectives, while other things being learned are subtle, or are prerequisite concepts, or are about skills, or even muscle memory?

A famous story in physics education is about how concepts are more complex and have more facets than we realize. David Hestenes has developed some sophisticated and multi-faceted assessments for concepts like “force” — a whole test, just addressing “force.” Eric Mazur at Harvard scoffed at these assessments (as he said at an AAAS meeting I went to a couple of years ago, and as quoted in an article by Dreifus in 2007). His *Harvard* students would blow these assessments away! Gutsy man that he is, he actually tried them in his classes. His students did no better than the averages that Hestenes was publishing. Mazur was aghast, and he became an outspoken proponent of better forms of teaching and assessment.

Building up these kinds of assessments takes huge effort but is critically important for measuring what learning is *really* going on. For the most part in computing education, we have *not* done this yet. Grades are a gross measure of learning, and to move the field forward, we need fine-grained measures.

### Aligning Computer Science with Mathematics by Felleisen and Krishnamurthi

The July 2009 *Communications of the ACM* has an interesting article by Matthias Felleisen and Shriram Krishnamurthi, *Why Computer Science Doesn’t Matter*, with the great subtitle “Aligning computer science with high school mathematics can help turn it into an essential subject for all students.” The argument that Matthias and Shriram are making is that we can use programming to teach mathematics better, and help students learn both. In so doing, we prevent the marginalization of computer science and support an important goal of the American education system, the teaching of “‘rithmetic.”

It’s a good argument and one that I support. I am dubious about some of the education claims made in the argument, like “we have already seen our curricular approach…help students raise their algebra scores” and “Formal evaluation shows the extremely positive impact this curriculum has…” (I’ve been searching through the sites given in the article, but can’t find peer-reviewed, published papers that support these claims.) But these are really minor quibbles. Having written one of these pieces, I know that the tyranny of 1800 words is severe, and the authors can be excused for not providing background citations to the studies supporting their claims. Instead, I’d like to provide some of the background literature that supports their claim.

Can we use programming to help students learn mathematics? Absolutely! Probably the most famous study supporting this argument is Idit Harel’s dissertation work on the *Instructional Software Development Project (ISDP)*. Idit had fourth graders write software in Logo to teach fractions to third graders. She found that a real synergy occurred between the concepts of programming and the concepts of mathematics, and her students ended up learning more about both compared to another class. Yasmin Kafai (who just moved to Penn from UCLA this last year) continued this project, exploring novel collaboration models (e.g., the fourth graders become fifth grade “consultants” as another cohort of fourth graders helps another cohort of third graders) and expanding from mathematics into science. My own dissertation explored the synergy between physics and programming. My results weren’t as strong — I had good physics learning, but not good computer science learning. I suspect the problems were the challenge of achieving real learning in only a three-week summer workshop, and not having the kinds of IDEs that Matthias and Shriram are calling for.

“Our community must realize that minor tweaks of currently dominant approaches to programming won’t suffice.” Completely agreed, and the best argument for this point comes from Bruce Sherin’s powerful dissertation (with Andy diSessa at Berkeley). Bruce taught physics lessons to two groups of students, one using programming and one using algebra. (Andy would probably argue with Matthias and Shriram’s claim that “The ideal language and the IDE for imaginative programming are still to be designed.” Over 20 years ago, Boxer implemented much of what they’re calling for.) Bruce found some really interesting differences between what was learned via each form of notation. For example, programming was better for figuring out *causality* and *sequencing*. An algebraic formula like *x = x0 + vt* leaves invisible to the novice that the *t* is what will typically vary in that equation. On the other hand, algebra was better for understanding *balance* and *equilibria*. A formula like *F = ma* works in both directions: increase the mass or acceleration and the force increases, or if the force declines, then either the mass or the acceleration must have declined. Most programming languages do not make evident how constraints work in the world. The media extensions that Matthias and Shriram describe help address some of the challenges Bruce found when students had a single physics concept (e.g., an object moving because its location changes) represented by multiple lines of code.
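Sherin's contrast is easy to see in code. Written as a program, *x = x0 + vt* stops being a static relation and becomes an explicit sequence of cause and effect: each tick of the clock changes *x*. (A hypothetical sketch in Python for illustration; Sherin's students actually worked in Boxer.)

```python
# Constant-velocity motion expressed as sequenced updates,
# rather than as the algebraic relation x = x0 + v*t.
x0, v, dt = 0.0, 2.0, 0.1   # start position (m), velocity (m/s), time step (s)

x = x0
for tick in range(10):      # ten ticks: one second of simulated time
    x = x + v * dt          # the program makes explicit that time drives x

# x is now approximately x0 + v * 1.0 = 2.0
```

Note that the converse limitation is also visible here: nothing in this loop expresses the two-way constraint that an equation like *F = ma* captures, which is exactly where Sherin found algebra to be the stronger notation.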

“As computer science educators, we must also demand a smooth, continuous path from imaginative programming to the engineering of large programs.” Alan Kay has been making this argument for years. He refers to the concept of the *Omniuser*, who can move from scripting in Etoys to changing how the levels close to the metal of the machine work, all within a single system and (hopefully) a single notation. His STEPS effort is seeking to build such systems. In particular, Alan and his team are exploring “systems math,” a kind of powerful mathematics that can only really exist in the powerful medium of programming. Thus, STEPS gives us a way to go beyond just supporting “‘rithmetic” toward powerful new kinds of mathematics learning.

I’m a big fan of Scheme and consider DrScheme to be one of the finest pedagogical IDEs ever created. TeachScheme is a brilliant curriculum. My guess is that careful studies of the effort would support many of the claims being made by Matthias and Shriram. More importantly, though, I believe that they’re right that programming could actually *improve* mathematics learning. Doing it in such a way that students’ math-phobia doesn’t drive even *more* students from computer science is a real challenge. An even bigger challenge is doing it in a way that can gain the support of organizations like NCTM and that meets the mathematics standards in our schools. As they say, “Any attempt to align programming with mathematics will fail unless the programming language is as close to school mathematics as possible.” It’s more than just the programming language — the whole package (curriculum, IDE, language) has to look and feel like mathematics to succeed with the mathematics education community.
