
Adding computational modeling in Python doesn’t lead to better Physics learning: Caballero thesis, part 1

This week, I was on a physics dissertation committee for the first time.  Marcos Daniel “Danny” Caballero is the first Physics Education Research (PER) Ph.D. from Georgia Tech.  (He’s getting a physics Ph.D., but his work is in PER.)  His dissertation is available, and his papers can also be found on the GT PER website.

Danny’s dissertation was on “Evaluating and Extending a Novel Course Reform of Introductory Mechanics.”  He did three different studies exploring the role of computation in physics learning in the mechanics class.  Each of those studies is a real contribution to computing education research, too.  (Yes, I enjoyed sitting on the committee!)

The first study is an analysis of physics learning by students in the “traditional” mechanics class vs. students in the “Matter & Interactions” course.  (This study has already been accepted for publication in a physics journal.)  I’ve mentioned M&I before — it’s an approach that uses programming in VPython for some of the labs and homework.  Danny used the oft-admired, gold-standard Force Concept Inventory (FCI).  The results weren’t great for the M&I crowd.
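
If you haven’t seen a Matter & Interactions program before, here’s a rough sketch of the kind of VPython homework I’m talking about: a ball in free fall, modeled by repeatedly applying the momentum principle.  (This is my own illustration, not code from Danny’s study or the GT course; the names and numbers are made up, and the import line assumes the current vpython module rather than the classic “visual” one.)

# A toy M&I-style VPython program (an illustration, not actual course material):
# model a ball in free fall by repeatedly applying the momentum principle.
from vpython import sphere, vector, rate   # classic VPython used "from visual import *"

ball = sphere(pos=vector(0, 10, 0), radius=0.5)
ball.m = 0.5                         # mass (kg)
ball.p = ball.m * vector(2, 0, 0)    # initial momentum (kg*m/s)

g = vector(0, -9.8, 0)               # gravitational field (N/kg)
dt = 0.01                            # time step (s)

while ball.pos.y > 0:
    rate(100)                                      # slow the loop for animation
    F = ball.m * g                                 # net force: gravity only
    ball.p = ball.p + F * dt                       # momentum principle: dp = F*dt
    ball.pos = ball.pos + (ball.p / ball.m) * dt   # position update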

The traditional students did statistically significantly better than the M&I students on the FCI.  Danny did the right thing and dug deeper.  Which topics did the traditional students do better on?  How did the classes differ?

The answer wasn’t really too surprising.  The M&I class had different emphases than the traditional class.  The traditional class assigned more homework on exactly the FCI topics where its students outperformed the M&I students.  M&I students did homework on a lot of other topics (like programming in VPython) that the traditional students didn’t.  Students learn from what they do and spend time on.  Less homework on FCI topics meant less learning about FCI topics.

Jan Hawkins made this observation a long time ago: technology doesn’t necessarily lead to learning the same things better — it often leads to better learning about new things.  Yasmin Kafai showed new kinds of learning when she studied science learners expressing themselves in Microworlds Logo.  We also know that learning two things takes more effort than learning one thing. Idit Harel was the first one to show synergy between learning programming and learning fractions, but it was through lots of effort (e.g., more than 9 months of course time) and lots of resources (e.g., Idit and two graduate students from MIT in the classroom, besides the teacher).  In my own dissertation work on Emile, I found that students building physics simulations in HyperTalk learned a lot of physics in three weeks — but not so much CS.

There’s a bigger question here, from a computing education perspective.  Is this really a test of whether computing helped students learn FCI kinds of physics?  Did these students really learn computing?  My bet, especially based on the findings in Danny’s other two studies that I will blog on, is that they didn’t.  That’s not really surprising.  Roy Pea and Midian Kurland showed in their studies that students studying Logo didn’t also develop metacognitive skills.  But as they pointed out in their later papers, Pea and Kurland mostly showed that the treatment condition wasn’t met: these kids didn’t really learn Logo!  One wouldn’t expect any impact from the little bit of programming that they saw in their studies.

The real takeaway from this is: Computation + X doesn’t necessarily mean better learning in X.  It’s hard to get students to learn about computation.  If you get any impact from computing education, it may be on X’, not on X.

July 29, 2011 at 7:49 am

