## Archive for August 1, 2011

### What students get wrong when building computational physics models in Python: Caballero thesis part 2

Danny’s first study found that students studying *Matter and Interactions* didn’t do better on the FCI. That’s not a condemnation of M&I. The FCI is an older, narrow measure of physics learning. The *other* things that M&I covers are very important. In fact, computational modeling is a critical new learning outcome that science students need.

So, the next thing that Danny studied in his thesis was what problems students faced when they built physics models in VPython. He studied one particular homework assignment, in which students were given a piece of VPython code that modeled a projectile.

The grading system gave students the program with the variables filled in with randomly generated values; the Force Calculation portion was left blank. The system also gave them the correct answer that the program should produce once the force calculation was filled in correctly. Finally, the students were given a description of a second situation. They had to complete the force calculation (and could check it against the given, correct answer), then change the constants to model the second situation. They submitted the final program.
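To make the setup concrete, here is a plain-Python sketch of that kind of assignment (no VPython; all the constants and names here are hypothetical stand-ins, since the real program’s values were randomly generated):

```python
# Hypothetical sketch of the projectile homework: the Force Calculation
# line is the part students had to fill in. Vectors are 3-element lists
# standing in for VPython's vector type.

def vec_add(a, b):
    return [a[i] + b[i] for i in range(3)]

def vec_scale(s, a):
    return [s * a[i] for i in range(3)]

m = 0.5                       # mass in kg (randomized in the real assignment)
g = 9.8                       # gravitational field strength, N/kg
pos = [0.0, 0.0, 0.0]         # initial position, m
vel = [4.0, 6.0, 0.0]         # initial velocity, m/s
dt = 0.01                     # time step, s
t = 0.0

while pos[1] >= 0.0:          # run until the projectile returns to y = 0
    # --- Force Calculation (the blank part of the assignment) ---
    F = [0.0, -m * g, 0.0]    # gravity only, pointing in -y
    # ------------------------------------------------------------
    vel = vec_add(vel, vec_scale(dt / m, F))   # update velocity: v += (F/m) dt
    pos = vec_add(pos, vec_scale(dt, vel))     # update position: x += v dt
    t += dt

print(t, pos)                 # flight time and landing position
```

The grading system could then compare the program’s final numbers against the known-correct answer for the randomized constants.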

Danny studied about 1400 of these submitted programs. *Only about 60% of them were correct*.

He and his advisor coded the errors. *All* of them. And they had 91% inter-rater reliability, which is amazing! Danny then used cluster analysis to group the errors.

Here’s what he found (the image below is taken from his slides, not his thesis):

23.8% of the students couldn’t get the test case to work. 19.8% of the students got the mapping to the new test condition wrong. That last one is a common CS error: something that had to be inside the loop was moved before it, so it was computed once and never updated.
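The "worked once, but never got updated" bug can be sketched in a few lines of plain Python (a hypothetical drag-force example, not code from the actual assignment; gravity alone wouldn’t expose the bug, because gravity is constant):

```python
# Hypothetical illustration of the "moved before the loop" error.
# The net force depends on the velocity (linear drag), so computing it
# once before the loop is only right for the very first step.

def fall_with_drag(v0, dt, steps, buggy):
    m, g, b = 1.0, 9.8, 0.5      # mass, gravity, drag coefficient (made-up values)
    v = v0
    F = -m * g - b * v           # buggy version: force frozen at its initial value
    for _ in range(steps):
        if not buggy:
            F = -m * g - b * v   # correct version: recompute the force each step
        v += (F / m) * dt
    return v

print(fall_with_drag(10.0, 0.01, 100, buggy=True))    # force never updated
print(fall_with_drag(10.0, 0.01, 100, buggy=False))   # force recomputed
```

After one simulated second the two runs disagree noticeably: the buggy version overshoots downward because the drag force never responds to the changing velocity.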

Notice that a lot of the students got an “Error in Force Calculation.” Some of these were a sign error, which is as much a physics error as a computation error. But a lot of the students tried to raise a value to a vector power. VPython caught that as a type error, and the students couldn’t understand the error message. Some of these students then plugged in something that got past the error but wasn’t physically correct. That’s a pretty common student strategy, one that Matt Jadud has documented in detail: focus on making the error message go away, without making sure the program still makes sense.
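The vector-power mistake is easy to reproduce. In the sketch below a plain tuple stands in for VPython’s vector type; the point is that Python’s complaint is entirely about types, with no hint about the physics:

```python
# A number raised to a vector power: physically meaningless, and Python
# reports it purely as a type problem. (A tuple stands in for a VPython vector.)
g_vec = (0.0, -9.8, 0.0)
try:
    F = 2.0 ** g_vec          # the kind of expression some students wrote
    msg = "no error"
except TypeError as e:
    msg = str(e)              # e.g. "unsupported operand type(s) for ** or pow(): ..."
print(msg)
```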

Danny suggests that these were physics mistakes. I disagree. I think that these are computation errors, or at best, computational modeling errors. Many students don’t understand how to map from a situation to a set of constants in a program. (Given that we know how much difficulty students have understanding variables in programs, I wouldn’t be surprised if they don’t *really* understand what the constants mean or what they do in the program.) They don’t understand Python’s error messages, which were about *types*, not about *physics*.

Danny’s results help us in figuring out how to teach computational modeling better.

- These results can inform our development of new computational modeling environments for students. Python is a language designed for developers, not for physics students creating computational models. Python’s error messages weren’t designed to be understood by that audience: they explain errors in terms of computational ideas, not in terms of modeling ideas.
- These results can also inform how we teach computational modeling. I asked, and the lectures never included *live coding*, which has been identified as a best practice for teaching computer science. That means students never saw someone map from a problem to a program, nor saw anyone interpret an error message.

If we want to see computation used in teaching across STEM, we have to know about these kinds of problems, and figure out how to address them.
