Archive for August 17, 2011
App Inventor goes to MIT
MIT is creating a Center for Mobile Learning (Hal Abelson is involved), and it will maintain and develop App Inventor. More information at the Google blog.
The Massachusetts Institute of Technology’s Media Lab has announced the creation of the new Center for Mobile Learning, with start-up support from Google. The research center, to be led by three professors at MIT, is focused on building innovative mobile technologies in education, such as interactive games for children that use GPS. The first project involves creating new features and versions of Google’s App Inventor for Android, which allows programmers to easily build applications for the company’s smartphone operating system.
Eric Mazur’s Keynote at ICER 2011: Observing demos hurts learning, and confusion is a sign of understanding
Our keynote for ICER 2011 was Eric Mazur, the famous Harvard physics education researcher. Mazur maintains a terrific website with his publications and talks, so the slides from his talk are available, as well as the papers that serve as the content for this talk. His keynote talk was on “The scientific approach to teaching: Research as a basis for course design.” I was hoping that he might give us some advice, from older-and-wiser physics education research to up-start, trying-to-learn-to-walk computing education research. He didn’t do that. Instead, he told us about three of his most recent findings, which were great fun and intriguing.
The first set of findings was about peer instruction, which we’ve talked about here. He spent some time exploring findings on the Force Concept Inventory (FCI), particularly with respect to gender. In the US and Belgium (two of the places where he’s explored this), there is a huge, statistically significant gap between men and women on the FCI. In Taiwan, he didn’t find that same gap, which suggests the gap is cultural, not innate. With peer instruction, the gap goes away. Good stuff, but not shocking.
The second set of findings was on physics demonstrations, where teachers make sparks and lights, balance weights, make things explode (if you’re lucky), and do all kinds of things to wake you up and make you realize your misconceptions. Do they really help? Mazur tried four conditions (rotated around, so each student experienced all of them): no demo, observing a demo, making a prediction of what you thought would happen and then observing the demo, and making a prediction, observing the demo, and then discussing it afterward. The results were pretty much always the same (here are the results from one study):
Yeah, you read that right — observing a demo is worse than having no demo at all! The problem is that you see a demo, and remember it in terms of your misconceptions. A week later, you think the demo showed you what you already believed. On some of the wrong answers that students gave in Mazur’s study, they actually said “as shown in the demo.” The demo showed the opposite! The students literally remember it wrong. People remember models, not facts, said Mazur. By recording a prediction, you force yourself to remember when you guessed wrong.
That last line in the data table is another really interesting finding — talking about it didn’t improve learning beyond just making the prediction. Social doesn’t help all learning. Sometimes, just the individual is enough for learning.
This result has some pretty important ramifications for us computing educators. When we run a program in class, we’re doing a demonstration. What do students remember of the results of that program execution? Do they even think about what they expect to see before the program executes? What are they learning from those executions? I think live coding (and execution) is very important. We need to think through what students are learning from those observations.
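As a concrete (and purely hypothetical) illustration of what a prediction prompt might look like in a live-coding demo, here is a short Python sketch where students frequently guess wrong about list aliasing. The idea is simply to have students write down the expected output before the code is run, in the spirit of Mazur’s prediction condition; it is not an example from his study.

# A hypothetical "predict before we run it" prompt for a live-coding demo.
# Ask students to write down the expected output before executing.
a = [1, 2, 3]
b = a            # b names the same list object as a; no copy is made
b.append(4)
print(a)         # a common prediction is [1, 2, 3]; the actual output is [1, 2, 3, 4]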
Third finding: Students praise teachers who give clear lectures and reduce confusion. Student evaluations of teaching reward that clarity. Students prefer not to be confused. Is that always a good thing? Mazur ran an on-line test on several topics, in which he asked students a couple of hard questions (novel situations, things they hadn’t faced previously), and then a meta-question: “Did you know what you were doing on those questions?” Mazur and his colleagues then coded that last question for “confusion” or “no confusion,” and compared that to performance on the first two problems.
Confused students are far more likely to actually understand. It’s better for students to be confused, because it means that they’re trying to make sense of it all.
I asked Mazur if he knew about the other direction: If a student says they know something, do they really? He said that they tried that experiment, and the answer is that students’ self-reported knowledge has no predictive ability for their actual performance. Students really don’t know if they understand something or not — their self-report is just noise.