Posts tagged ‘peer instruction’
Proposal #1 to Change CS Education to Reduce Inequity: Teach computer science to advantage the students with less computing background
This is my second post in a series about how we have to change how we teach CS to reduce inequity. I started this series with this post, making an argument based on race, but the argument might also be made in terms of the pandemic. We have to change how we teach CS this year.
The series has several inspirations, but the concrete one that I want to reference back to each week is the statement from the University of Maryland’s CS department:
Creating a task force within the Education Committee for a full review of the computer science curriculum to ensure that classes are structured such that students starting out with less computing background can succeed, as well as reorienting the department teaching culture towards a growth mindset
We as individual computing teachers make choices that influence whether students with less computing background can succeed. I often see choices being made that encourage the most capable students, but at the cost of the least prepared students. Part of this is because we see ourselves as preparing students for top software engineering jobs. The questions that get asked on technical interviews explicitly drive how many CS departments teach algorithms and theory. We want to encourage “excellence.” But whose excellence do we care about? Are Silicon Valley entrepreneurial perspectives the only ones that matter? The goal of “becoming a great software engineer” does not consider alternative endpoints for computing education (see post here). Not all our students want those kinds of jobs. Many of our students are much more interested in giving back to their community than in taking the Silicon Valley jobs that our programs aim for (see post here).
Please don’t teach students as if they are you. First, you (as a CS teacher, as someone who reads this blog) are wildly different from our typical student. Second, your memories of how you learned and what worked for you are likely wrong. Humans are terrible at reconstructing what they knew at an earlier time and what actually led to their learning. That’s why we need research.
In this post, I will identify four methods that have a differential impact, advantaging the students with less computing background (there are many more):
- Use Peer Instruction
- Explain connection to community values
- Use Parsons Problems
- Use subgoal labeling
Use Peer Instruction
When I talk to computer science teachers about peer instruction and how powerful it is for learning, the most common response is, “Oh, we already do that.” When I press them, they tell me that they “have class discussions” or “use undergraduate teaching assistants.” Nope, that’s not peer instruction.
Peer instruction (PI) is a technical term for a very specific protocol. Digital Promise and UTeach are creating a set of CS teaching micro credentials, and the one that they have on PI defines it well (see link here). In PI, the teacher poses a question to the class, students answer individually, students discuss their answers in small groups, students answer again, and then the teacher reveals and explains the answer. The evidence that PI really works is overwhelming, and it can be used in any CS class — see http://peerinstruction4cs.com/ for more information on how to do it. I use it regularly in senior-level undergraduate courses and graduate courses. There are ways to do PI when teaching remotely, as I talked about in this post.
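To make the protocol concrete, here is a made-up example of the kind of question a PI round might be built around in a CS1 Python course (my own illustration, not one from the micro credential materials):

```python
# Clicker question: What does this Python code print?
total = 0
for value in [1, 2, 3]:
    total = value
print(total)

# A. 6    (misconception: assignment in a loop accumulates)
# B. 3    <- correct: each iteration overwrites total
# C. 123  (misconception: the values are glued together)
# D. 0    (misconception: changes inside a loop don't persist)
```

Every option maps to a known misconception, which is what makes the small-group discussion productive.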
I’m highlighting PI because the evidence suggests that it has a differential impact (see study here). It doesn’t hurt the top students, but it reduces failure rate (measured in multiple CS courses) for students with less background (see paper here). That’s exactly what we’re looking for in this series — how do we improve the odds of success for students who are not in the most privileged groups.
Explain connection to community values
I blogged last year about a paper (see post here) that showed female, Black, Latino/Latina, and first-generation students take CS because they want to help society. These students often do not see a connection between what’s being taught in CS classes and what they want. That’s because we often teach to prepare students for top software engineering jobs — it’s a mismatch between our goals and their goals.
I don’t know if this is an issue in upper-level classes. Maybe students in upper-level classes have already figured out how CS connects to their goals and values. Or maybe we have already filtered out the CS students who care about community values by the upper-level and graduate courses.
CS can certainly be used to advance social goals and community values. Teach that. In every CS class, for everything you teach, explain concretely how this concept or skill could be used to advance social good, cultural relevance, and community values. If you can’t, ask yourself why you’re teaching this concept or skill. If it’s just to promote a Silicon Valley jobs program, consider dropping it. We are all revising our classes this summer for fall. It’s a good time to do this review and update.
Use Parsons Problems
Parsons problems (sometimes referred to as “mixed-up code problems”) are where students are given a programming problem, and given all the lines of code to solve the problem, but the lines are scrambled (I usually say “on refrigerator magnets”). The challenge is to assemble the correct program. My wife, Barbara Ericson, did her dissertation work (see post here) showing that Parsons problems were effective (led to the same learning as writing the programs from scratch or from debugging programs) and efficient (low time cost, low cognitive load). She also invented dynamically adaptive Parsons problems which are even better (for effectiveness and efficiency) than traditional Parsons problems.
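If you haven’t seen one, here is a small made-up example of a Parsons problem (mine, not from Barbara’s studies). The problem statement provides the list `words`; the “magnets” are the scrambled lines shown in the comment:

```python
# Problem: given a list named words, count the words longer
# than three characters. The magnets, scrambled:
#
#     count = count + 1
#     print(count)
#     for word in words:
#     count = 0
#     if len(word) > 3:

# One correct assembly (words is provided by the problem statement):
words = ["a", "list", "of", "longer", "words"]
count = 0
for word in words:
    if len(word) > 3:
        count = count + 1
print(count)   # prints 2
```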
Parsons problems work on-line, so they fit into remote teaching easily. I’ve been doing paper-based (and Canvas-based) Parsons for exams and quizzes for several years now (see post here). Parsons problems work great in lower-level classes. There is relatively little research on using them in upper-level and graduate courses — I suspect that they could be useful, if only to break up the all-coding-all-the-time framing of CS classes.
I’m highlighting Parsons problems for two reasons.
- First, they’re efficient. As Manuel noted (as I quoted in my Blog@CACM post), BIPOC students are much more likely to be time-stressed than more privileged students. I’m reading Grading for Equity by Joe Feldman, which makes this point in more detail (see website). Our less-privileged students need us to find ways to teach them efficiently. This is going to be a particular concern during a pandemic, when students will have more time constraints, especially if they, a relative, or someone they live with becomes ill.
- Second, they are a more careful and finer-grained assessment tool (see this post). If you ask students to write a piece of code, some will get only part of it working, and you get little data from students who knew how to write part of the code but got none of it working. Parsons problems help students with less computing background show what they do know, and help the teacher figure out what they can’t yet write.
Use subgoal labeling
Subgoal labeling is pretty amazing (see Wikipedia page). Even our first experiment with subgoal labeling for CS worked examples (see post here) showed improvements in learning (measured immediately after instruction), retention (measured a week later), and transfer (student success on a new task without instruction). Since then, Lauren Margulieux, Briana Morrison, and Adrienne Decker have published a slew of great results.
The one that makes it onto this list is their most recent finding (see post here). Subgoal labeling in an introductory computing course, compared to the same course without subgoal labeling, led to reduced drop and failure rates. That’s a differential benefit. There was not a statistically significant improvement in learning (measured in terms of exam scores), but it kept in the course the students who were most at risk of failing or dropping out. That’s teaching to advantage the students with less background in computing. We don’t know if it works for upper-level or graduate classes — my hypothesis is that it would.
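For readers who haven’t seen subgoal labels: the idea is to name the chunks of a worked example so students see its structure rather than an undifferentiated wall of code. A minimal sketch of my own (the published studies used App Inventor and other materials, not this example; it assumes a file numbers.txt with one number per line):

```python
# Worked example: average the numbers in a file.

# Subgoal: initialize the accumulators
total = 0.0
count = 0

# Subgoal: process every line in the file
with open("numbers.txt") as f:
    for line in f:
        total = total + float(line)
        count = count + 1

# Subgoal: compute and report the average
print(total / count)
```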
The need for better software and systems to support active CS learning
I believe strongly in active learning, such as Peer Instruction (as I have argued here and here). I have discovered that it is far harder than I thought to do for large CS classes.
I decided to use clickers in CS1315 this semester (n=217), rather than use the colored index cards that I’ve used in the past for Peer Instruction (see blog post here). With cards, I can only take a vote — no histogram of results, and I can’t provide any grade value for the participation. With clickers, I can use the evidence-based practice as developed by Eric Mazur, Cynthia Lee, Beth Simon, Leo Porter, et al. (plugging the Peer Instruction for CS website):
- Ask everyone to answer to prime their thinking about the question,
- ask students to discuss the question in groups of 2-3,
- then vote again (consensus within groups), and
- show the results and discuss the misconceptions.
To make it worthwhile, I’m giving 10 points of the final course grade for scoring over 50% correct on the second votes (only the second vote counts — the first one is just to get predictions and activate knowledge), and 5 points for scoring over 30%.
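The grading rule itself is trivial to state in code. A minimal sketch, assuming a hypothetical helper that is handed each student’s fraction of correct second votes for the semester (the actual bookkeeping lives in the TurningPoint-to-T-Square pipeline described below):

```python
def participation_credit(correct_fraction):
    """Points of final course grade earned from clicker second votes.

    Thresholds per the scheme above: over 50% correct earns 10 points,
    over 30% earns 5, otherwise 0. First votes are ungraded; they only
    prime thinking.
    """
    if correct_fraction > 0.50:
        return 10
    elif correct_fraction > 0.30:
        return 5
    return 0
```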
I’m trying to do this all with campus-approved standards: TurningPoint clickers, TurningPoint software. I’d love to use an app-based solution, but our campus Office of Information Technologies warns against it. They can’t guarantee that, in large classes, the network will support all the traffic for everyone to vote at once.
The process is so complicated: Turn on clickers in our learning management software (a form of Sakai called T-Square), download the participant list, open up ResponseWare and define a session (for those using the app version), plug in the receiver. After class, save the session, integrate the session with the participant list, then integrate the results with T-Square for grades. The default question-creation process in the TurningPoint software automatically shows results and demands a specific format (e.g., one that makes it hard to show screenshots as part of a question), so I’m using the “Poll Anywhere” option, which requires me to process the session file after class to delete the first question (where everyone votes just to prime their thinking) and to define the correct response(s) for each question.
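In principle, that post-processing could be scripted. Here is a sketch of the workflow, under a big assumption: it pretends the session can be exported as a CSV with columns question, student, and response, which is not TurningPoint’s real file format. Treat it as an illustration of the two cleanup steps, not as working TurningPoint tooling:

```python
import csv

# Hypothetical answer key: correct response per graded question.
# Question 1 is never graded; it only primes thinking.
ANSWER_KEY = {2: "B", 3: "D"}

def clean_session(in_path, out_path):
    # Read the (pretend) per-student, per-question response rows.
    with open(in_path) as f:
        rows = list(csv.DictReader(f))
    # Step 1: delete the priming question.
    rows = [r for r in rows if int(r["question"]) != 1]
    # Step 2: mark each remaining response against the answer key.
    for r in rows:
        key = ANSWER_KEY.get(int(r["question"]))
        r["correct"] = "1" if r["response"] == key else "0"
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["question", "student", "response", "correct"])
        writer.writeheader()
        writer.writerows(rows)
```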
I’m willing to do all that. But it’s more complicated than that.
Turns out that Georgia Tech hasn’t upgraded to the latest version of the TurningPoint software (TurningPoint Cloud). GT only supports TurningPoint 5. TurningPoint stopped distributing that version of the software in May 2016, so you have to get it directly from the on-campus Center for Teaching and Learning. I got the software and installed it — and discovered that it doesn’t run on the current version of MacOS, Sierra.
I did find a solution. Here’s what I do. Before each lecture, I move my lecture slides to a network drive. When I get to class, I load my lecture on the lecture/podium computer (which runs Windows and TurningPoint 5 and has a receiver built-in). I gather all the session data while I teach with the podium computer and do live coding on my computer (two screens in the massive lecture hall). I save the session data back to the network drive. Back in my office, I use an older Mac that still runs an older version of MacOS to download the session data, import it using TurningPoint 5, do all the deletions of priming questions and correct-marking of other questions, then integrate and upload to T-Square.
Counting my laptop where I make up slides and do live coding, my Peer Instruction classes require three computers.
Every CS teacher should use active learning methodologies in their classes. Our classes are huge. We need better and easier mechanisms to make this work.
Why Students Don’t Like Active Learning: Stop making me work at learning!
I enjoy reading Annie Murphy Paul’s essays, and this one particularly struck home because I just got my student opinion surveys from last semester. I use active learning methods in my Media Computation class every day, where I require students to work with one another. One student wrote:
“I didn’t like how he forced us to interact with each other. I don’t think that is the best way for me to learn, but it was forced upon me.”
It’s true. I am a Peer Instruction bully.
At a deeper level, it’s amazing how easily we fool ourselves about what we learn from and what we don’t. It’s like the brain training work: we’re convinced that we’re learning from it, even when we’re not. This student is convinced that they don’t learn from peer interaction, even though the available evidence says they do.
In case you’re wondering about just what “active learning” is, here’s a widely-accepted definition: “Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It emphasizes higher-order thinking and often involves group work.”
Source: Why Students Don’t Like Active Learning « Annie Murphy Paul
Why we are teaching science wrong, and how to make it right: It’s about CS retention, too
Important new paper in Nature that makes the argument for active learning in all science classes, which is one of the arguments I was making in my Top Ten Myths blog post. The section I’m quoting below is about a different issue than learning — it turns out that active learning methods are important for retention, too.
Active learning is winning support from university administrators, who are facing demands for accountability: students and parents want to know why they should pay soaring tuition rates when so many lectures are now freely available online. It has also earned the attention of foundations, funding agencies and scientific societies, which see it as a way to patch the leaky pipeline for science students. In the United States, which keeps the most detailed statistics on this phenomenon, about 60% of students who enrol in a STEM field switch to a non-STEM field or drop out (see ‘A persistence problem’). That figure is roughly 80% for those from minority groups and for women.
via Why we are teaching science wrong, and how to make it right : Nature News & Comment.
A kind of worked examples for large classrooms
I passed on to the MediaComp-Teach list something I’m trying to do in my class this semester. I had several suggestions to share it with others. It’s based on worked examples and peer instruction.
I’m teaching Python MediaComp, first time in 8 years on campus. We have just shy of 300 students, and I have 155 in my lecture. While I’m a big fan of worked examples, the way I’ve used them in small classes of 30-40 won’t work with 155.
Here’s what I’m doing this semester. Every Thursday, I distribute a PDF with a bunch of code in sets, like this:
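The screenshot of the handout isn’t reproduced here, but a set looks roughly like this. This is my reconstruction of the flavor, assuming the JES/MediaComp picture functions we use in class (getPixels, getRed, setRed):

```python
# Set 3: Variations on changing the red in a picture.
# Type each one into JES, run it on a picture, and compare.

def decreaseRed(picture):
    # Halve the red value of every pixel.
    for pixel in getPixels(picture):
        setRed(pixel, getRed(pixel) / 2)

def clearRed(picture):
    # Remove red entirely.
    for pixel in getPixels(picture):
        setRed(pixel, 0)

def maxRed(picture):
    # Push red to its maximum value.
    for pixel in getPixels(picture):
        setRed(pixel, 255)
```

Each program in a set is a small variation on its neighbors, so typing them all in invites comparison.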
The students are getting 12-20 little programs every Thursday. Most students type them ALL in before lecture Friday morning at 10 am.
Then on Friday, I put up PI-like questions about those programs.
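The question slides aren’t reproduced here either, but the flavor is a prediction question over code just like what the students typed in the day before. An illustration of my own, not an actual slide (again assuming JES functions, and JES’s Python 2 integer division):

```python
# Prediction question: yesterday you typed in decreaseRed.
# What does this variation do to a picture?
def mysteryRed(picture):
    pixels = getPixels(picture)
    for index in range(0, len(pixels) / 2):
        setRed(pixels[index], 0)

# A. Removes all of the red
# B. Removes the red from the first half of the pixels,
#    i.e., the top half of the picture   <- correct
# C. Halves the red everywhere
# D. Changes nothing
```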
Students are required to work on these in groups. I walk around the lecture hall and insist that nobody sit alone. I get lots of questions in the five minutes when everybody’s working away.
We don’t have clickers, but I’ve given every student four colored index cards. When I call for votes, everybody holds up the right colored card.
Here’s the interesting part — they TALK about the programs. Students ask and answer each other’s questions about the example programs in Piazza.
The other instructor in the class is also using these, and he says that the students are using them after the Friday lecture as examples to study from and to use in building homework. I’ve had lots of comments about these from students, in office hours and via email. They find them valuable to study.
My worked examples aren’t giving them much process. I am getting them to look at lots of programs, type them in, get them running, and think about them. I’m pretty excited about it. Given that I haven’t been in this class in the last 8 years, the class isn’t really “mine” anymore. I’m trying to be sensitive to how much I change about a huge machine (14 TA’s, two instructors…) that I’m only visiting in. But everyone seems into this, and it’s fitting in pretty easily.
I have been uploading all of the PDF’s, PPTs, and PY’s at http://home.cc.gatech.edu/mediaComp/95, if you’re interested. (There are some weeks missing because Atlanta actually had some Winter this year.)
To get Interest: Catch and Hold Attention
I’ve been thinking about this question a lot. It’s informing my next round of research proposals.
We know more about how to retain students these days, the “hold” part of Dewey’s challenge mentioned below — consider the UCSD results and the MediaComp results. But how do we “catch” attention? We are particularly bad at “catching” the attention of women and minority students. Our enrollment numbers are rising, but the percentage of women and under-represented minorities is not rising. Betsy DiSalvo has demonstrated a successful “catch” and “hold” design with Glitch. Can we do this reliably? What are the participatory design processes that will help us create programs that “catch”?
So what can parents, teachers and leaders do to promote interest? The great educator John Dewey wrote that interest operates by a process of “catch” and “hold”—first the individual’s interest must be captured, and then it must be maintained. The approach required to catch a person’s interest is different from the one that’s necessary to hold a person’s interest: catching is all about seizing the attention and stimulating the imagination. Parents and educators can do this by exposing students to a wide variety of topics. It is true that different people find different things interesting—one reason to provide learners with a range of subject matter, in the hope that something will resonate.
Success in Introductory Programming: What Works?
Leo Porter, Charlie McDowell, Beth Simon, and I collaborated on a paper on how to make introductory programming work, now available in CACM. It’s a shorter, more accessible version of Leo and Beth’s best-paper-award winning SIGCSE 2013 paper, with history and kibitzing from Charlie and me:
Many Communications readers have been in faculty meetings where we have reviewed and bemoaned statistics about how bad attrition is in our introductory programming courses for computer science majors (CS1). Failure rates of 30%–50% are not uncommon worldwide. There are usually as many suggestions for how to improve the course as there are faculty in the meeting. But do we know anything that really works?
We do, and we have research evidence to back it up. Pair programming, peer instruction, and media computation are three approaches to reforming CS1 that have shown positive, measurable impacts. Each of them is successful separately at improving retention or helping students learn, and combined, they have a dramatic effect.
via Success in Introductory Programming: What Works? | August 2013 | Communications of the ACM.
Learning to Code may be Enough — if it happens
I highly recommend Shuchi Grover’s piece in EdSurge news (linked below). She makes a great point — that the goal of learning computing goes beyond learning to code. It’s not enough to learn to code. She talks about the challenge of learning to code:
I have encountered 12-14 year olds who have ostensibly marched through an entire Javascript course online but struggle to correctly configure terminating conditions for loops that involve Boolean operators in a fairly simple program. Anecdotal evidence from classrooms and teachers that use tools like Scratch, Alice or even the newly released Tynker suggests that while children comfortably learn to modify ready-made pieces of code as a starting point, they struggle when they must progress to tracing unfamiliar code, creating their own algorithmic programs, or debugging. This is not surprising at all. Algorithmic problem solving is not as easy as the “What schools don’t teach” Code.org video would have viewers believe.
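That struggle with terminating conditions is easy to believe. The classic trap, sketched here in Python (the same bug bites in JavaScript), is a compound condition joined with the wrong Boolean operator:

```python
# Goal: keep asking until the user answers "y" or "n".
answer = input("Continue? (y/n) ")

# The classic bug: any answer differs from at least one of "y" and
# "n", so this compound condition is ALWAYS true. An infinite loop:
#
#     while answer != "y" or answer != "n":
#         answer = input("Continue? (y/n) ")

# Correct: loop only while the answer is neither "y" nor "n".
while answer != "y" and answer != "n":
    answer = input("Continue? (y/n) ")
```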
There are similar themes in Roy Pea’s 1983 paper with Midian Kurland, “On the cognitive prerequisites of learning computer programming.”
Even among the 25% of the children who were extremely interested in learning programming, the programs they wrote reached but a moderate level of sophistication after a year’s work and approximately 30 hours of on-line programming experience. We found that children’s grasp of fundamental programming concepts such as variables, tests, and recursion, and of specific Logo primitive commands such as REPEAT, was highly context-specific and rote in character. To take one example: A child who had written a procedure using REPEAT which repeatedly printed her name on the screen was unable to recognize the efficiency of using the REPEAT command to draw a square. Instead, the child redundantly wrote the same line-drawing procedure four times in succession.
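Pea and Kurland’s Logo example translates directly into modern terms. A sketch using Python’s turtle module, showing the redundant program the child wrote next to the loop that transfer from REPEAT should have produced:

```python
import turtle

t = turtle.Turtle()

# What the child actually wrote: the same two steps, four times over.
t.forward(100); t.right(90)
t.forward(100); t.right(90)
t.forward(100); t.right(90)
t.forward(100); t.right(90)

# What transfer from REPEAT would look like: the same square as a loop.
# (This retraces the square the redundant version already drew.)
for _ in range(4):
    t.forward(100)
    t.right(90)

turtle.done()
```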
Coding is hard. Coding has always been hard. We want students to know more than just code about computing.
I’m not sure that Shuchi is right. Maybe learning to code is enough — if it happens. When I studied foreign languages in secondary and post-secondary school (Latin and French for me), there was a great emphasis on learning the culture of a language. There was an explicit belief that learning about the culture of a language facilitated learning the language. Does it go further? Can one learn the language without knowing anything about the culture? If one does learn the language well, does one necessarily learn the culture too? Maybe it works the same for programming languages.
Our human-centered computing PhD students who focus on learning sciences and technologies (LS&T) are required to read two chapters of Noss and Hoyles’ 1996 book Windows on Mathematical Meanings: Learning Cultures and Computers. They make the argument that you can’t learn Logo well apart from an effective classroom culture. As Pea and Kurland noted in 1983, and Grover has noted thirty years later in 2013, students aren’t really learning programming well.
What if they did? What if students did learn programming? Would they necessarily also learn computing? And isn’t it possible that a culture that taught programming well would also teach things beyond coding? Maybe even problem-solving skills? David Palumbo’s excellent review of the literature on programming and problem-solving pointed out that there was very little link from programming to problem-solving skills — but for the most part, students weren’t learning programming. I don’t really think that learning to code would immediately lead to problem-solving skills. I do wonder if learning to code might also lead to learning the other things that we think are important about computing.
There is positive evidence for the value of classroom culture. Consider the work by Leo Porter and Beth Simon, who found that combining pair programming, peer instruction, and Media Computation led to positive retention and learning (as measured by success in later classes). Porter and Simon have also noted how students learning programming develop new insight into the applications that they use. Maybe it’s the case that if you change the culture in the classroom and what students do, students learn both programming and computing.
UCSD’s overwhelming argument for Peer Instruction in CS Classes
For teachers in those old, stodgy, non-MOOC, face-to-face classes (“Does anybody even *do* that anymore?!?”), I strongly recommend using “Clickers” and Peer Instruction, especially based on these latest findings from Beth Simon and colleagues at the University of California at San Diego. They have three papers to appear at SIGCSE 2013 about their multi-year experiment using Peer Instruction:
- They found that use of Peer Instruction, beyond the first course (into theory and architecture), halved their failure rates: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=176
- They found that the use of Peer Instruction, with Media Computation and pair-programming, in their first course (on the quarter system, so it’s only 10 weeks of influence) increased the percentage of students in their major (tracking into the second year and beyond) up to 30%: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=96
- They also did a lecture vs. Peer Instruction head-to-head comparison which showed significant impact of the instructional method: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=223
If we have such strong evidence that changing our pedagogy does work, are we doing our students a disservice if we do not use it?
The Bigger Issues in Learning to Code: Culture and Pedagogy
I mentioned in a previous blog post the nice summary article that Audrey Watters wrote (linked below) about Learning to Code trends in educational technology in 2012, when I critiqued Jeff Atwood’s position on not learning to code.
Audrey does an excellent job of describing the big trends in learning to code this last year, from Codecademy to Bret Victor and Khan Academy and MOOCs. But the part that I liked the best was where she identified the problem that cool technology and badges won’t solve: culture and pedagogy.
This is a problem. A big problem. A problem that an interactive JavaScript lesson with badges won’t solve.
Two organizations — Black Girls Code and CodeNow — did hold successful Kickstarter campaigns this year to help “change the ratio” and give young kids of color and young girls opportunities to learn programming. And the Irish non-profit CoderDojo also ventured state-side in 2012, helping expand afterschool opportunities for kids interested in hacking. The Maker Movement, another key ed-tech trend this year, is also opening doors for folks to play and experiment with technologies.
And yet, despite all the hype and hullaballoo from online learning startups and their marketing campaigns that now “everyone can learn to code,” it’s clear there are still plenty of problems with the culture and the pedagogy surrounding computer science education.
via Top Ed-Tech Trends of 2012: Learning to Code | Inside Higher Ed.
We still do need new programming languages whose design is informed by how humans work and learn. We still do need new learning technologies that can help us provide the right learning opportunities for individual student’s needs and can provide access to those who might not otherwise get the opportunity. But those needs are swamped by culture and pedagogy.
What do I mean by culture and pedagogy?
Culture: Betsy DiSalvo’s work on Glitch is a great example of considering culture in computing education. I’ve written about her work before — that she engaged a couple dozen African-American teen men in computing, by hiring them to be video game testers, and the majority of those students went on to post-secondary education in computing. I’ve talked with Betsy several times about how and why that worked. The number one reason why it worked: Betsy spent the time to understand the African-American teen men’s values, their culture, what they thought was important. She engaged in an iterative design process with groups of teen men to figure out what would most appeal to them, how she could reframe computing into something that they would engage with. Betsy taught coding — but in a different way, in a different context, with different values, where the way, context, and values were specifically tuned to her audience. Is it worth that effort? Yeah, because it’s about making a computing that appeals to these other audiences.
Pedagogy: A lot of my work these days is about pedagogy. I use peer instruction in my classrooms, and try out worked examples in various ways. In our research, we use subgoal labels to improve our instructional materials. These things really work.
Let me give you an example with graphs that weren’t in Lauren Margulieux’s paper, but are in the talk slides that she made for me. As you may recall, we had two sets of instructional materials: a set of nice videos and text descriptions that Barbara Ericson built, and a similar set with subgoal labels inserted. We found that the subgoal labelled instruction led to better performance (faster and more correct) immediately after instruction, more retention (better performance a week later), and better performance on a transfer task (got more done on a new app that the students had never seen before). But I hadn’t shown you before just how enormous the gap was between the subgoal labelled group and the conventional group on the transfer task.
Part of the transfer task involved defining a variable in App Inventor — don’t just grab a component, but define a variable to represent that component. The subgoal label group did that more often. A LOT more often.
Lauren also noticed that the conventional group tended to “thrash,” pulling out more blocks in App Inventor than they actually needed. The correlation between the number of blocks drawn out and correctness was r = -.349 — you are less likely to be correct (by a large amount) if you pull out extra blocks.
These aren’t small differences! These are huge differences from a surprisingly small difference between the instructional materials. Improving our pedagogy could have a huge impact.
I agree with Audrey: Culture and pedagogy are two of the bigger issues in learning to code.
A new resource for CS teachers doing Peer Instruction
I’m a fan of Peer Instruction. I’m sharing this announcement that Beth Simon just made on the SIGCSE mailing list about a new resource for CS teachers who want to use Peer Instruction:
This website serves to support computing instructors implementing Peer Instruction — a very specifically designed pedagogy developed by Harvard physicist Eric Mazur (read more under “About”). In findings to be presented at SIGCSE 2013, we report on Peer Instruction’s impact in reducing class fail rates by more than half, and present results from a quasi-experimental study where students in a course adopting Peer Instruction scored 5.7% better on the final exam than a control section using standard lecture approaches.

We hope you might find these resources helpful and discuss them with your colleagues. In particular: if you are interested in participating in an e-support program for faculty adopting PI, we encourage you to sign up on our web site. Not only can you get feedback from experienced PI instructors, but you can also share things that worked with others and complain about things that didn’t work!
How do faculty learn about and use Peer Instruction?
Passing on a request from the PI Network:
From Mike Reese
Creating a Peer Instruction network
Eric Mazur sent email to a bunch of teachers using PI over the weekend — there’s a new resource for Peer Instruction being set up:
With this first message, we are inviting you to join the Peer Instruction Network at www.peerinstruction.net. It will take only a few minutes to register. Once we have registered a significant number of users, we will launch site features which include the ability to locate other Peer Instruction users from your discipline, at your institution, or in your country. We will also post frequently asked questions and associated answers and publish user experiences with PI. Eventually we plan to facilitate the sharing and dissemination of materials.
In addition, I, along with Peer Instruction Network co-founder Julie Schell, will be happy to respond to questions and interact with the members of the PI Network in order to strengthen the PI user community.
Please join the worldwide network of Peer Instruction users today at www.peerinstruction.net.
Eric Mazur’s Keynote at ICER 2011: Observing demos hurts learning, and confusion is a sign of understanding
Our keynote for ICER 2011 was Eric Mazur, famous Harvard physics education researcher. Mazur maintains a terrific website with his publications and talks, so the slides from his talk are available, as well as the papers which serve as the content for the talk. His keynote talk was on “The scientific approach to teaching: Research as a basis for course design.” I was hoping that he might give us some advice, from older-and-wiser physics education research to up-start, trying-to-learn-to-walk computing education research. He didn’t do that. Instead, he told us about three of his most recent findings, which were great fun and intriguing.
The first set of findings was about peer instruction, which we’ve talked about here. He spent some time exploring findings on the Force Concept Inventory (FCI), particularly with respect to gender. In the US and in Belgium (two of the places where he’s explored this), there is a huge, statistically significant gap between men and women on the FCI. In Taiwan, he didn’t find that same gap — so the gap is cultural, not innate. With peer instruction, the gap goes away. Good stuff, but not shocking.
The second set of findings was about physics demonstrations, when teachers make sparks and lights, balance weights, make things explode (if you’re lucky), and do all kinds of things to wake you up and make you realize your misconceptions. Do they really help? Mazur tried four conditions (rotated around, so students would try each): no demo, observing a demo, observing a demo after making a prediction of what you thought would happen, and prediction plus discussion afterward. The results were pretty much always the same across the studies.
The headline result: observing a demo is worse than having no demo at all! The problem is that you see a demo, and remember it in terms of your misconceptions. A week later, you think the demo showed you what you already believed. On some of the wrong answers that students gave in Mazur’s study, they actually wrote “as shown in the demo.” The demo showed the opposite! The students literally remember it wrong. People remember models, not facts, said Mazur. By recording a prediction, you force yourself to remember when you guessed wrong.
The other really interesting finding: discussing the demo afterward didn’t improve learning beyond just making the prediction. Social doesn’t help all learning. Sometimes, just the individual activity is enough for learning.
This result has some pretty important ramifications for us computing educators. When we run a program in class, we’re doing a demonstration. What do students remember of the results of that program execution? Do they even think about what they expect to see before the program executes? What are they learning from those executions? I think live coding (and execution) is very important. We need to think through what students are learning from those observations.
Third finding: Students praise teachers who give clear lectures, who reduce confusion. Student evaluations of teaching reward that clarity. Students prefer not to be confused. Is that always a good thing? Mazur tried an on-line test on several topics, where he asked students a couple of hard questions (novel situations, things they hadn’t faced previously), and then a meta-question, “Did you know what you were doing on those questions?” Mazur and his colleagues then coded that last question for “confusion” or “no confusion,” and compared that to performance on the first two problems.
Confused students are far more likely to actually understand. It’s better for students to be confused, because it means that they’re trying to make sense of it all.
I asked Mazur if he knew about the other direction: If a student says they know something, do they really? He said that they tried that experiment, and the answer is that students’ self-reported knowledge has no predictive ability for their actual performance. Students really don’t know if they understand something or not — their self-report is just noise.
Workshop on Peer Instruction Concept Tests in CS Ed
As we’ll hear in the keynote, through the development of accepted assessment items (e.g., the Force Concept Inventory), physics faculty are enabled to take a scientific approach to the study of teaching and learning in their classrooms. Peer Instruction is a pedagogical technique which was developed when one physics professor used the FCI to study his own class, and found himself dissatisfied. Should computing instructors be similarly dissatisfied? How would we know?
Using Peer Instruction’s focus on conceptual understanding, we seek to bring together a group of researchers interested in developing and studying assessment items getting at the conceptual heart of a range of computing courses.
This is NOT a workshop JUST for people interested in adopting Peer Instruction in their courses. Interest in adopting Peer Instruction is NOT required.
If you are interested in:
- Developing, vetting, and/or trialing core conceptual questions in specific areas (e.g., data structures, networks).
- Exploring instructor beliefs about student conceptual challenges in computing and/or the effectiveness of current instructional practices
then this is a workshop for you.
To register for the workshop, before July 22, complete this survey, which asks you to create one “concepTest” for an important concept in one of the courses that you teach (a sketch of what one might look like follows the list below). ConcepTest questions should
a) be expressible on a single PPT slide, with 3–5 multiple-choice solution options, and with distractors based on common student misunderstandings
b) require deep understanding to answer, not merely recall or simple application of a principle
c) inspire interesting discussion
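As an illustration of those criteria (my own sketch, not a workshop example), a CS1-or-data-structures concepTest might fit on one slide like this, using a Python list as a stack:

```python
# ConcepTest: After these operations on s, what does the
# final line print?
s = []        # using a Python list as a stack
s.append(1)   # push 1
s.append(2)   # push 2
s.pop()       # pop
s.append(3)   # push 3
print(s.pop())

# A. 1   (misconception: pop takes from the front, like a queue)
# B. 2   (misconception: forgetting the earlier pop)
# C. 3   <- correct: 2 was already popped, so 3 is on top
# D. IndexError: the list is empty
```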
Register at: http://www.surveymonkey.com/s/KZ62KZK