Archive for July 14, 2009
Are we measuring what students are learning?
One measure of the success of a talk is how many questions you get in the hallway after the talk. I got a few yesterday, which suggests that people were still thinking about the points afterwards.
One question I got was about a finding we’ve had in several of the contextualized computing education classes, like robotics and Game Boys for computer organization. Students report spending extra time on their homework beyond what’s required “just because it’s cool.” Yet, in some cases, there is no difference in grade distributions or failure rates compared to a comparison class. What gives? Isn’t it a bad thing if students spend extra time but that time isn’t productive?
Absolutely, that can be the case. It may also be that students are learning things we don’t yet know how to measure. Think about the argument that it takes 10,000 hours of practice to develop expertise (a number that has been recalculated from several sources). Can we come up with learning objectives for each of those 10,000 hours? Or is it that we can measure some of those objectives, while other things being learned are subtle, or are prerequisite concepts, or are skills, or even muscle memory?
A famous story in physics education is about how concepts are more complex and have more facets than we realize. David Hestenes has developed sophisticated, multi-faceted assessments for concepts like “force” — a whole test, just addressing “force.” Eric Mazur at Harvard scoffed at these assessments (as he said at an AAAS meeting I went to a couple of years ago, and as quoted in a paper by Dreifus in 2007). His Harvard students would blow these assessments away! Gutsy man that he is, he actually tried them in his classes. His students did no better than the averages that Hestenes was publishing. Mazur was aghast and became an outspoken proponent of better forms of teaching and assessment.
Building up these kinds of assessments takes huge effort, but it is critically important for measuring what learning is really going on. For the most part in computing education, we have not done this yet. Grades are a gross measure of learning; to advance the field, we need fine-grained measures.