
Instruction makes student attitudes on computational modeling worse: Caballero thesis part 3

Note: Danny’s whole thesis is now available on-line.

In Danny Caballero’s first chapter, he makes this claim:

Introductory physics courses can shape how students think about science, how they believe science is done and, perhaps most importantly, can influence if they continue to pursue science or engineering in the future. Students’ attitudes toward learning physics, their beliefs about what it means to learn physics and their sentiments about the connection between physics to the “real world” can play a strong role in their performance in introductory physics courses. This performance can affect their decision to continue studying science or engineering.

Danny is arguing that physics plays a key role in retaining student interest in science and engineering. Computing plays a similar role. Computing is the new workbench for science and engineering, where the most innovative and ground-breaking work is going to happen.  Danny realized that students’ attitudes about computational modeling are important, in terms of (a) student performance and learning in physics and (from above) all of science and engineering learning, and (b) influencing student decisions to continue in science and engineering. What we worry about are students facing programming and saying, “Real scientists and engineers do this?  I hate this!  Time to switch to a management degree!”.

There are validated instruments for measuring student attitudes toward computer science and toward physics, but not for measuring student attitudes toward computational modeling.  So, Danny built one (included as an appendix to his thesis), which he calls “COMPASS,” for “Computational Modeling in Physics Attitudinal Student Survey.”  He validated it with experts and through correlations with similar instruments for physics attitudes.  It contains statements for students to agree or disagree with, like:

  • I find that I can use a computer model that I’ve written to solve a related problem.
  • Computer models have little relation to the real world.
  • It is important for me to understand how to express physics concepts in a computer model.
  • To learn how to solve problems with a computer, I only need to see and to memorize examples that are solved using a computer.

Danny gave this instrument to a group of experts in computational modeling, who generally had similar answers to all the statements, e.g., strongly agreed or strongly disagreed in all the same places. Then he measured student answers in terms of the percentage of answers that were “favorable” (agreed with the experts) on computational modeling, and the percentage that were “unfavorable” (differed from the experts).  A student’s answers to COMPASS are thus summarized as a pair: %favorable and %unfavorable.  He gave it to several cohorts at Georgia Tech and at North Carolina State University, in week 2 (just as the semester started) and in week 15 (as the semester was wrapping up).  The direction of change from week 2 to week 15 was the same for every cohort:

The black square in each indicates the mean.  The answers after instruction shifted to more unfavorable attitudes toward computational modeling.  Instruction led to students being more negative about computational modeling.  Danny did an analysis of where the big shifts were in these answers.  In particular, students after instruction had less personal interest in computational modeling, agreed less with the importance of sense-making (the third bullet above), and agreed more with the importance of rote memorization (last bullet above).
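The favorable/unfavorable scoring described above is easy to picture in code. Here is a minimal sketch — not Danny’s actual implementation; the function name and data layout are my own invention — assuming each response has been collapsed to agree/neutral/disagree and the expert consensus is one of agree/disagree per item:

```python
def compass_score(student_responses, expert_consensus):
    """Summarize a student's COMPASS answers as (%favorable, %unfavorable).

    student_responses: dict mapping item id -> 'agree', 'neutral', or 'disagree'
    expert_consensus:  dict mapping item id -> 'agree' or 'disagree'
    Neutral answers count as neither favorable nor unfavorable, so the
    two percentages need not sum to 100.
    """
    favorable = unfavorable = 0
    for item, expert_answer in expert_consensus.items():
        answer = student_responses.get(item, 'neutral')
        if answer == 'neutral':
            continue  # neutral: counts toward neither percentage
        if answer == expert_answer:
            favorable += 1
        else:
            unfavorable += 1
    n = len(expert_consensus)
    return 100.0 * favorable / n, 100.0 * unfavorable / n


# Hypothetical four-item example:
experts = {1: 'agree', 2: 'disagree', 3: 'agree', 4: 'disagree'}
student = {1: 'agree', 2: 'agree', 3: 'neutral', 4: 'disagree'}
print(compass_score(student, experts))  # (50.0, 25.0)
```

Because neutral answers drop out, a cohort can shift toward “unfavorable” either by disagreeing with the experts more or by abandoning neutral positions in the wrong direction.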

Danny chopped up these data in lots of ways.  Does student grade influence the results?  Gender?  Year in school?  The only thing that really mattered was major.  Computing majors (thankfully!) did recognize more value for computational modeling after instruction.

These results are disappointing.  Teaching students about computational modeling makes them like it less?  Makes them see less value in it?  Across multiple cohorts?!? But from a research perspective, this is an important result.  We can’t fix a problem that we don’t know is there.  Danny has not only identified a problem.  He’s given us a tool to investigate it.

The value of COMPASS is in having a yardstick.  We can use it to see how we can influence these attitudes.  Danny wrote it so that “physics” could be swapped out for “biology” or “chemistry” easily, to measure attitudes towards computational modeling in those disciplines, too.  I’ll bet that this is a useful starting place for many folks interested in measuring computational thinking, too.

“So, Guzdial, you spent a lot of time on these three blog posts?  Why?!?”

I strongly believe that the future of computing education lies in teaching more than just those who are going to be software developers.  Scaffidi, Shaw, and Myers estimate that there are four professionals who program but who are not software developers for every software developer in the US.  We computing educators need to understand how people are coming to computing as a tool for thinking, not just a material for engineering.  We need to figure out how to teach these students, what tools to provide them, and how to measure their attitudes and learning.

Danny’s thesis is important in pursuing this goal.  Each of these three studies matters for computing education research, as well as for physics education research.  Danny has shown how physics students’ learning differs with computational modeling, where their challenges lie in doing computational modeling in Python, and, here, how their attitudes about computational modeling change. Danny has done a terrific job describing the issues of a non-CS programming community (physics learners) in learning to use computational modeling.

This is an important area of research, not just for computer science, but for STEM more generally.  Computing is critical to all of STEM.  We need to produce STEM graduates who can use computers to model and who have positive attitudes about computational modeling.

The challenge for computing education researchers is what Danny’s thesis shows us: we don’t know how to do that yet.  Our tools are wrong (e.g., the VPython errors getting in the way), and our instructional practices are wrong (e.g., students are more negative about computational modeling after instruction than before).  We have a long way to go before we can teach all STEM students how to use computing in a powerful way for thinking.

We need to figure it out.  Computational modeling is critical for success in STEM today.  We will only figure it out by continuing to try.  We have to use curricula like Matter and Interactions. We have to figure out the pedagogy.  We have to create new learning tools.  The work to be done is not just for physics educators, but for computer scientists, too.

Finally, I wrote up these blog posts because I don’t think we’ll see work like this in any CS Ed conferences in the near term.  Danny just got a job in Physics at U. Colorado-Boulder.  He’s a physicist.  Why should he try to publish in the SIGCSE Symposium or ICER?  How would that help his tenure case? I wonder if his work could get in.  His results don’t tell us anything about CS1 or helping CS majors become better software developers.  Will reviewers recognize that computational modeling for STEM learning is important for CS Ed, too?  I hope so, and I hope we see work like this in CS Ed forums.  In the meantime, it’s important to find this kind of computing education work in non-CSEd communities and connect to it.  

August 2, 2011 at 10:15 am 31 comments

