Posts tagged ‘peer instruction’

Proposal #1 to Change CS Education to Reduce Inequity: Teach computer science to advantage the students with less computing background

This is my second post in a series about how we have to change how we teach CS to reduce inequity. I started the series with this post, making an argument based on race; the same argument might also be made in terms of the pandemic. We have to change how we teach CS this year.

The series has several inspirations, but the concrete one that I want to reference back to each week is the statement from the University of Maryland’s CS department:

Creating a task force within the Education Committee for a full review of the computer science curriculum to ensure that classes are structured such that students starting out with less computing background can succeed, as well as reorienting the department teaching culture towards a growth mindset

We as individual computing teachers make choices that influence whether students with less computing background can succeed. I often see choices being made that encourage the most capable students, but at the cost of the least prepared students. Part of this is because we see ourselves as preparing students for top software engineering jobs. The questions that get asked in technical interviews explicitly drive how many CS departments teach algorithms and theory. We want to encourage “excellence.” But whose excellence do we care about? Are Silicon Valley entrepreneurial perspectives the only ones that matter? The goal of “becoming a great software engineer” does not consider alternative endpoints for computing education (see post here). Not all our students want those kinds of jobs. Many of our students are much more interested in giving back to their community than in taking the Silicon Valley jobs that our programs aim for (see post here).

Please don’t teach students as if they are you. First, you (as a CS teacher, as someone who reads this blog) are wildly different from our normal student. Second, your memories of how you learned and what worked for you are likely wrong. Humans are terrible at reconstructing what they knew at an earlier time and what actually led to their learning. That’s why we need research.

In this post, I will identify four methods that are differential, that is, that advantage the students with less computing background. There are many more:

  • Use Peer Instruction
  • Explain connection to community values
  • Use Parsons Problems
  • Use subgoal labeling

Use Peer Instruction

When I talk to computer science teachers about peer instruction and how powerful it is for learning, the most common response is, “Oh, we already do that.” When I press them, they tell me that they “have class discussions” or “use undergraduate teaching assistants.” Nope, that’s not peer instruction.

Peer instruction (PI) is a technical term for a very specific protocol. Digital Promise and UTeach are creating a set of CS teaching micro-credentials, and the one that they have on PI defines it well (see link here). In PI, the teacher poses a question to the class for individual responses, students discuss their answers in small groups, students respond again, and the teacher reveals and explains the answer. The evidence that PI really works is overwhelming, and it can be used in any CS class — see http://peerinstruction4cs.com/ for more information on how to do it. I use it regularly in senior-level undergraduate courses and graduate courses. There are ways to do PI when teaching remotely, as I talked about in this post.
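The vote cycle is easy to picture with a toy tally. This sketch uses invented vote data and my own helper name, just to show the before-and-after-discussion histograms a clicker system produces:

```python
from collections import Counter

# A minimal sketch of the PI vote cycle: individual vote, small-group
# discussion, revote. The votes here are made-up illustration data.
first_votes  = list("AABCABDCCA")   # individual responses
second_votes = list("AAACAAACAA")   # responses after discussion

def histogram(votes):
    """Tally votes for each answer choice A-D."""
    counts = Counter(votes)
    return {choice: counts.get(choice, 0) for choice in "ABCD"}

print(histogram(first_votes))   # {'A': 4, 'B': 2, 'C': 3, 'D': 1}
print(histogram(second_votes))  # {'A': 8, 'B': 0, 'C': 2, 'D': 0}
```

The second histogram shows the convergence toward the correct answer (here, ‘A’) that typically follows the group discussion, which is what the teacher then reveals and explains.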

I’m highlighting PI because the evidence suggests that it has a differential impact (see study here). It doesn’t hurt the top students, but it reduces failure rates (measured in multiple CS courses) for students with less background (see paper here). That’s exactly what we’re looking for in this series: how do we improve the odds of success for students who are not in the most privileged groups?

Explain connection to community values

I blogged last year about a paper (see post here) that showed female, Black, Latino/Latina, and first-generation students take CS because they want to help society. These students often do not see a connection between what’s being taught in CS classes and what they want. That’s because we often teach to prepare students for top software engineering jobs — it’s a mismatch between our goals and their goals.

I don’t know if this is an issue in upper-level classes. Maybe students in upper-level classes have already figured out how CS connects to their goals and values. Or maybe we have already filtered out the CS students who care about community values by the upper-level and graduate courses.

CS can certainly be used to advance social goals and community values. Teach that. In every CS class, for everything you teach, explain concretely how this concept or skill could be used to advance social good, cultural relevance, and community values. If you can’t, ask yourself why you’re teaching this concept or skill. If it’s just to promote a Silicon Valley jobs program, consider dropping it. We are all revising our classes this summer for fall. It’s a good time to do this review and update.

Use Parsons Problems

Parsons problems (sometimes referred to as “mixed-up code problems”) are where students are given a programming problem, and given all the lines of code to solve the problem, but the lines are scrambled (I usually say “on refrigerator magnets”). The challenge is to assemble the correct program. My wife, Barbara Ericson, did her dissertation work (see post here) showing that Parsons problems were effective (led to the same learning as writing the programs from scratch or from debugging programs) and efficient (low time cost, low cognitive load). She also invented dynamically adaptive Parsons problems which are even better (for effectiveness and efficiency) than traditional Parsons problems.
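To make the format concrete, here is a minimal sketch of a Parsons problem in Python. The problem, the scrambled lines, and the checker are my own invented illustration, not from Ericson’s materials:

```python
# A tiny, hypothetical Parsons problem: the lines of a working
# function, presented scrambled ("on refrigerator magnets").
scrambled = [
    "        total = total + n",
    "def sum_list(numbers):",
    "    return total",
    "    for n in numbers:",
    "    total = 0",
]

def check_solution(ordering):
    """Assemble the student's ordering of the lines and test the result."""
    program = "\n".join(scrambled[i] for i in ordering)
    namespace = {}
    try:
        exec(program, namespace)
        return namespace["sum_list"]([1, 2, 3]) == 6
    except Exception:      # wrong order usually won't even compile
        return False

print(check_solution([1, 4, 3, 0, 2]))  # True: def, init, loop, add, return
print(check_solution([0, 1, 2, 3, 4]))  # False: the scrambled order
```

Because every needed line is supplied, the student’s work is purely about ordering and structure, which is part of why the format is efficient and low in cognitive load.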

Parsons problems work on-line, so they fit into remote teaching easily. I’ve been doing paper-based (and Canvas-based) Parsons for exams and quizzes for several years now (see post here). Parsons problems work great in lower-level classes. There is relatively little research on using them in upper-level and graduate courses — I suspect that they could be useful, if only to break up the all-coding-all-the-time framing of CS classes.

I’m highlighting Parsons problems for two reasons.

  • First, they’re efficient. As Manuel noted (as I quoted in my Blog@CACM post), BIPOC students are much more likely to be time-stressed than more privileged students. I’m reading Grading for Equity by Joe Feldman, which makes this point in more detail (see website). Our less-privileged students need us to find ways to teach them efficiently. This is going to be a particular concern during a pandemic, when students will have more time constraints, especially if they, a relative, or someone they live with becomes ill.
  • Second, they are a more careful and finer-grained assessment tool (see this post). If you ask students to write a piece of code from scratch, some will get only part of it working, and you get almost no data from the students who knew how to write part of the code but got none of it working. Parsons problems help the students with less computing background show what they do know, and help the teacher figure out what they can’t yet write.

Use subgoal labeling

Subgoal labeling is pretty amazing (see Wikipedia page). Even our first experiment with subgoal labeling for CS worked examples (see post here) showed improvements in learning (measured immediately after instruction), retention (measured a week later), and transfer (student success on a new task without instruction). Since then, Lauren Margulieux, Briana Morrison, and Adrienne Decker have published a slew of great results.
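To show what subgoal labels look like, here is a labeled worked example. It is my own illustration in Python; the published experiments used other materials (e.g., App Inventor), and these labels are invented for the example:

```python
# A worked example with subgoal labels (the ALL-CAPS comments).
# A subgoal label names the PURPOSE of a group of steps, so the
# learner can map the same structure onto new problems.

# SUBGOAL: INITIALIZE THE ACCUMULATOR
count = 0

# SUBGOAL: EXAMINE EACH ELEMENT
for grade in [91, 78, 85, 62, 95]:
    # SUBGOAL: TEST WHETHER THE ELEMENT QUALIFIES
    if grade >= 80:
        # SUBGOAL: UPDATE THE ACCUMULATOR
        count = count + 1

# SUBGOAL: REPORT THE RESULT
print(count)  # 3 grades of 80 or above
```

The labels cost almost nothing to add to existing worked examples, which is part of what makes the technique so attractive.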

The one that makes it onto this list is their most recent finding (see post here). Subgoal labeling in an introductory computing course, compared to one not using subgoal labeling, led to reduced drop and failure rates. That’s a differential benefit. There was not a statistically significant improvement in learning (measured in terms of exam scores), but it kept the students most at risk of failing or dropping out enrolled in the course. That’s teaching to advantage the students with less background in computing. We don’t know if it works for upper-level or graduate classes; my hypothesis is that it would.

July 20, 2020 at 7:00 am 5 comments

The need for better software and systems to support active CS learning

I believe strongly in active learning, such as Peer Instruction (as I have argued here and here).  I have discovered that it is far harder than I thought to do for large CS classes.

I decided to use clickers in CS1315 this semester (n=217), rather than use the colored index cards that I’ve used in the past for Peer Instruction (see blog post here). With cards, I can only take a vote — no histogram of results, and I can’t provide any grade value for the participation. With clickers, I can use the evidence-based practice as developed by Eric Mazur, Cynthia Lee, Beth Simon, Leo Porter, et al. (plugging the Peer Instruction for CS website):

  • Ask everyone to answer to prime their thinking about the question,
  • ask students to discuss the question in groups of 2-3,
  • then vote again (consensus within groups), and
  • show the results and discuss the misconceptions.

To make it worthwhile, I’m giving 10 points of the final course grade for scoring over 50% on the second questions (only the second vote counts — the first is just to get predictions and activate knowledge), and 5 points for scoring over 30%.
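That grading rule can be sketched as a small function (the function name and inputs are my own; only the thresholds come from the scheme above):

```python
def participation_points(second_votes_correct, total_questions):
    """Course points from the share of correct second (post-discussion)
    votes: 10 points for over 50%, 5 points for over 30%, else 0."""
    share = second_votes_correct / total_questions
    if share > 0.50:
        return 10
    if share > 0.30:
        return 5
    return 0

print(participation_points(14, 20))  # 70% correct -> 10 points
print(participation_points(8, 20))   # 40% correct -> 5 points
print(participation_points(4, 20))   # 20% correct -> 0 points
```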

I’m trying to do this all with campus-approved standards: TurningPoint clickers, TurningPoint software.  I’d love to use an app-based solution, but our campus Office of Information Technologies warns against it.  They can’t guarantee that, in large classes, the network will support all the traffic for everyone to vote at once.

The process is so complicated: Turn on clickers in our learning management software (a form of Sakai called T-Square), download the participant list, open up ResponseWare and define a session (for those using the app version), plug in receiver. After class, save the session, integrate the session with the participant list, then integrate the results with T-Square for grades. The default question-creation process in TurningPoint software automatically shows results and demands a specific format (e.g., which makes it hard to show screenshots as part of a question), so I’m using “Poll Anywhere” option, which requires me to process the session file after class to delete the first question (where everyone votes to prime their thinking) and to define the correct response(s) for each question.

I’m willing to do all that. But it’s more complicated than that.

Turns out that Georgia Tech hasn’t upgraded to the latest version of the TurningPoint software (TurningPoint Cloud).  GT only supports TurningPoint 5. TurningPoint stopped distributing that version of the software in May 2016, so you have to get it directly from the on-campus Center for Teaching and Learning. I got the software and installed it — and discovered that it doesn’t run on the current version of MacOS, Sierra.

I did find a solution. Here’s what I do.  Before each lecture, I move my lecture slides to a network drive.  When I get to class, I load my lecture on the lecture/podium computer (which runs Windows and TurningPoint 5 and has a receiver built-in).  I gather all the session data while I teach with the podium computer and do live coding on my computer (two screens in the massive lecture hall).  I save the session data back to the network drive.  Back in my office, I use an older Mac that still runs an older version of MacOS to download the session data, import it using TurningPoint 5, do all the deletions of priming questions and correct-marking of other questions, then integrate and upload to T-Square.

Counting my laptop where I make up slides and do live coding, my Peer Instruction classes require three computers.

Every CS teacher should use active learning methodologies in our classes.  Our classes are huge.  We need better and easier mechanisms to make this work.

 

March 31, 2017 at 7:00 am 7 comments

Why Students Don’t Like Active Learning: Stop making me work at learning!

I enjoy reading Annie Murphy Paul’s essays, and this one particularly struck home because I just got my student opinion surveys from last semester.  I use active learning methods in my Media Computation class every day, where I require students to work with one another. One student wrote:

“I didn’t like how he forced us to interact with each other. I don’t think that is the best way for me to learn, but it was forced upon me.”

It’s true. I am a Peer Instruction bully.

At a deeper level, it’s amazing how easily we fool ourselves about what we learn from and what we don’t learn from.  It’s like the brain training work: we’re convinced that we’re learning from it, even if we’re not. This student is convinced that they don’t learn from peer interaction, even though the available evidence says they do.

In case you’re wondering about just what “active learning” is, here’s a widely-accepted definition: “Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It emphasizes higher-order thinking and often involves group work.”

Source: Why Students Don’t Like Active Learning « Annie Murphy Paul

July 11, 2016 at 7:27 am 7 comments

Why we are teaching science wrong, and how to make it right: It’s about CS retention, too

Important new paper in Nature that makes the argument for active learning in all science classes, which is one of the arguments I was making in my Top Ten Myths blog post. The image and section I’m quoting below are about a different issue than learning — turns out that active learning methods are important for retention, too.

Active learning is winning support from university administrators, who are facing demands for accountability: students and parents want to know why they should pay soaring tuition rates when so many lectures are now freely available online. It has also earned the attention of foundations, funding agencies and scientific societies, which see it as a way to patch the leaky pipeline for science students. In the United States, which keeps the most detailed statistics on this phenomenon, about 60% of students who enrol in a STEM field switch to a non-STEM field or drop out (see ‘A persistence problem’). That figure is roughly 80% for those from minority groups and for women.

via Why we are teaching science wrong, and how to make it right : Nature News & Comment.

August 3, 2015 at 7:49 am Leave a comment

A kind of worked examples for large classrooms

I passed on to the MediaComp-Teach list something I’m trying to do in my class this semester.  I had several suggestions to share it with others. It’s based on worked examples and peer instruction.

I’m teaching Python MediaComp, first time in 8 years on campus.  We have just shy of 300 students, and I have 155 in my lecture.  While I’m a big fan of worked examples, the way I’ve used them in small classes of 30-40 won’t work with 155.

Here’s what I’m doing this semester.  Every Thursday, I distribute a PDF with a bunch of code in sets, like this:

[Image: handout page showing a set of worked-example programs]

The students are getting 12-20 little programs every Thursday.  Most students type them ALL in before lecture Friday morning at 10 am.
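To give a flavor of what a set might contain, here is a sketch with invented examples (the real handouts used Python MediaComp picture and sound functions, which need JES to run, so plain Python stands in here):

```python
# Three small variations on one theme -- the kind of "little programs"
# a set might contain. Students type all of them in before lecture.

def variant_a(values):
    result = 0
    for v in values:
        result = result + v
    return result

def variant_b(values):
    result = 0
    for v in values:
        result = result + 1
    return result

def variant_c(values):
    result = 1
    for v in values:
        result = result * v
    return result

# A Friday PI-style question might ask: which variant returns 24
# when called with [2, 3, 4]?
print(variant_a([2, 3, 4]))  # 9  (sum)
print(variant_b([2, 3, 4]))  # 3  (count)
print(variant_c([2, 3, 4]))  # 24 (product)
```

The value comes from the small differences between the programs: having typed in and run all three, students have something concrete to compare and talk about in groups.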

Then on Friday, I put up PI-like questions like this:

[Images: two Peer Instruction question slides based on the distributed programs]

Students are required to work on these in groups.  I walk around the lecture hall and insist that nobody sit alone.  I get lots of questions in the five minutes when everybody’s working away.

We don’t have clickers, but I’ve given every student four colored index cards.  When I call for votes, everybody holds up the right colored card.

Here’s the interesting part — they TALK about the programs.  Here’s a question in Piazza with a student’s answer:

[Image: a Piazza question with a student’s answer]

 

The other instructor in the class is also using these, and he says that the students are using them after the Friday lecture as examples to study from and to use in building homework.  I’ve had lots of comments about these from students, in office hours and via email.  They find them valuable to study.

My worked examples aren’t giving them much process.  I am getting them to look at lots of programs, type them in, get them running, and think about them.  I’m pretty excited about it.  Given that I haven’t been in this class in the last 8 years, the class isn’t really “mine” anymore.  I’m trying to be sensitive to how much I change about a huge machine (14 TAs, two instructors…) that I’m only visiting in.  But everyone seems into this, and it’s fitting in pretty easily.

I have been uploading all of the PDFs, PPTs, and PYs at http://home.cc.gatech.edu/mediaComp/95, if you’re interested.  (There are some weeks missing because Atlanta actually had some winter this year.)

 

March 21, 2014 at 1:51 am 14 comments

To get Interest: Catch and Hold Attention

I’ve been thinking about this question a lot.  It’s informing my next round of research proposals.

We know more about how to retain students these days, the “hold” part of Dewey’s challenge mentioned below — consider the UCSD results and the MediaComp results.  But how do we “catch” attention?  We are particularly bad at “catching” the attention of women and minority students.  Our enrollment numbers are rising, but the percentage of women and under-represented minorities is not rising.  Betsy DiSalvo has demonstrated a successful “catch” and “hold” design with Glitch.  Can we do this reliably?  What are the participatory design processes that will help us create programs that “catch”?

So what can parents, teachers and leaders do to promote interest? The great educator John Dewey wrote that interest operates by a process of “catch” and “hold”—first the individual’s interest must be captured, and then it must be maintained. The approach required to catch a person’s interest is different from the one that’s necessary to hold a person’s interest: catching is all about seizing the attention and stimulating the imagination. Parents and educators can do this by exposing students to a wide variety of topics. It is true that different people find different things interesting—one reason to provide learners with a range of subject matter, in the hope that something will resonate.

via The Power Of Interest « Annie Murphy Paul.

December 18, 2013 at 1:04 am 3 comments

Success in Introductory Programming: What Works?

Leo Porter, Charlie McDowell, Beth Simon, and I collaborated on a paper on how to make introductory programming work, now available in CACM. It’s a shorter, more accessible version of Leo and Beth’s best-paper-award-winning SIGCSE 2013 paper, with history and kibitzing from Charlie and me:

Many Communications readers have been in faculty meetings where we have reviewed and bemoaned statistics about how bad attrition is in our introductory programming courses for computer science majors (CS1). Failure rates of 30%–50% are not uncommon worldwide. There are usually as many suggestions for how to improve the course as there are faculty in the meeting. But do we know anything that really works?

We do, and we have research evidence to back it up. Pair programming, peer instruction, and media computation are three approaches to reforming CS1 that have shown positive, measurable impacts. Each of them is successful separately at improving retention or helping students learn, and combined, they have a dramatic effect.

via Success in Introductory Programming: What Works? | August 2013 | Communications of the ACM.

August 5, 2013 at 1:40 am 16 comments

UCSD’s overwhelming argument for Peer Instruction in CS Classes

For teachers in those old, stodgy, non-MOOC, face-to-face classes (“Does anybody even *do* that anymore?!?”), I strongly recommend using “Clickers” and Peer Instruction, especially based on these latest findings from Beth Simon and colleagues at the University of California at San Diego.  They have three papers to appear at SIGCSE 2013 about their multi-year experiment using Peer Instruction:

If we have such strong evidence that changing our pedagogy does work, are we doing our students a disservice if we do not use it?

January 15, 2013 at 6:00 am 14 comments

The Bigger Issues in Learning to Code: Culture and Pedagogy

I mentioned in a previous blog post the nice summary article that Audrey Watters wrote (linked below) about Learning to Code trends in educational technology in 2012, when I critiqued Jeff Atwood’s position on not learning to code.

Audrey does an excellent job of describing the big trends in learning to code this last year, from CodeAcademy to Bret Victor and Khan Academy and MOOCs.  But the part that I liked the best was where she identified the problem that cool technology and badges won’t solve: culture and pedagogy.

This is a problem. A big problem. A problem that an interactive JavaScript lesson with badges won’t solve.

Two organizations, Black Girls Code and CodeNow, did hold successful Kickstarter campaigns this year to help “change the ratio” and give young kids of color and young girls opportunities to learn programming. And the Irish non-profit CoderDojo also ventured state-side in 2012, helping expand afterschool opportunities for kids interested in hacking. The Maker Movement, another key ed-tech trend this year, is also opening doors for folks to play and experiment with technologies.

And yet, despite all the hype and hullaballoo from online learning startups and their marketing campaigns that now “everyone can learn to code,” it’s clear there are still plenty of problems with the culture and the pedagogy surrounding computer science education.

via Top Ed-Tech Trends of 2012: Learning to Code | Inside Higher Ed.

We still do need new programming languages whose design is informed by how humans work and learn.  We still do need new learning technologies that can help us provide the right learning opportunities for individual student’s needs and can provide access to those who might not otherwise get the opportunity.  But those needs are swamped by culture and pedagogy.

What do I mean by culture and pedagogy?

Culture: Betsy DiSalvo’s work on Glitch is a great example of considering culture in computing education.  I’ve written about her work before — that she engaged a couple dozen African-American teen men in computing, by hiring them to be video game testers, and the majority of those students went on to post-secondary education in computing.  I’ve talked with Betsy several times about how and why that worked.  The number one reason why it worked: Betsy spent the time to understand the African-American teen men’s values, their culture, what they thought was important.  She engaged in an iterative design process with groups of teen men to figure out what would most appeal to them, how she could reframe computing into something that they would engage with.  Betsy taught coding — but in a different way, in a different context, with different values, where the way, context, and values were specifically tuned to her audience.  Is it worth that effort?  Yeah, because it’s about making a computing that appeals to these other audiences.

Pedagogy: A lot of my work these days is about pedagogy.  I use peer instruction in my classrooms, and try out worked examples in various ways.  In our research, we use subgoal labels to improve our instructional materials.  These things really work.

Let me give you an example with graphs that weren’t in Lauren Margulieux’s paper, but are in the talk slides that she made for me.  As you may recall, we had two sets of instructional materials: a set of nice videos and text descriptions that Barbara Ericson built, and a similar set with subgoal labels inserted.  We found that the subgoal-labeled instruction led to better performance (faster and more correct) immediately after instruction, more retention (better performance a week later), and better performance on a transfer task (got more done on a new app that the students had never seen before).  But I hadn’t shown you before just how enormous the gap was between the subgoal-labeled group and the conventional group on the transfer task.

Part of the transfer task involved defining a variable in App Inventor — don’t just grab a component, but define a variable to represent that component.  The subgoal label group did that more often.  A LOT more often.

[Chart: proportion of each group defining a variable on the transfer task]

Lauren also noticed that the conventional group tended to “thrash,” pulling out more blocks in App Inventor than they actually needed.  The correlation between the number of blocks drawn out and correctness was -.349: you are less likely to be correct (by a large amount) if you pull out extra blocks.  Here’s the graph of the number of blocks pulled out by each group.

[Chart: number of blocks pulled out by each group]

These aren’t small differences!  These are huge differences from a surprisingly small difference between the instructional materials.  Improving our pedagogy could have a huge impact.

I agree with Audrey: Culture and pedagogy are two of the bigger issues in learning to code.

December 21, 2012 at 8:47 am 7 comments

A new resource for CS teachers doing Peer Instruction

I’m a fan of Peer Instruction.  I’m sharing this announcement that Beth Simon just made on the SIGCSE mailing list about a new resource for CS teachers who want to use Peer Instruction:

http://peerinstruction4cs.org

This website serves to support computing instructors implementing Peer Instruction — a very specifically designed pedagogy developed by Harvard physicist Eric Mazur (read more under “About”). In findings to be presented at SIGCSE 2013, we report on Peer Instruction’s impact in reducing class fail rates by more than half, and present results from a quasi-experimental study where students in a course adopting Peer Instruction scored 5.7% better on the final exam than a control section using standard lecture approaches.

We hope you might find these resources helpful and discuss them with your colleagues. In particular: if you are interested in participating in an e-support program for faculty adopting PI, we encourage you to sign up on our web site. Not only can you get feedback from experienced PI instructors, but you can also share things that worked with others and complain about things that didn’t work!

December 14, 2012 at 10:40 am 3 comments

How do faculty learn about and use Peer Instruction?

Passing on a request from the PI Network:
From Mike Reese

How do teaching innovations spread among faculty?  I am exploring this research question as a sociology doctoral student at Johns Hopkins University.  I am working with several faculty, including Dr. Eric Mazur, to examine the diffusion patterns of the teaching methods they pioneered as part of my dissertation project.
Would you be willing to participate in my dissertation study by completing a short survey?
I’m interested in your response even if you are not an instructor or don’t use Peer Instruction in your class.  As a sociologist, I’m exploring how information about Peer Instruction spreads, not how instructors use it.  “Don’t remember” and “Don’t Use” are valid responses on several questions.
The survey can be completed in as little as 7 minutes with only 2 open-response questions in addition to several multiple choice/rating questions.  All participants will be entered into a raffle for one of eight $25 Amazon.com gift cards or a $100 gift card. No sensitive personal information will be asked beyond your role and colleges at which you have taught/worked.  All data will be kept strictly confidential and will not be publicly shared.  Any publications or presentations resulting from this research will only include aggregate summaries or anonymous quotes. To ensure timely analysis of the data, I am asking all participants to complete the survey by Friday, July 20th.
I know your time is valuable—thank you for your help with this research.  Please don’t hesitate to contact me with any questions.

Sincerely,

Mike Reese
Johns Hopkins  Sociology Doctoral Student

July 10, 2012 at 1:01 pm 2 comments

Creating a Peer Instruction network

Eric Mazur sent email to a bunch of teachers using PI over the weekend — there’s a new resource for Peer Instruction being set up:

With this first message, we are inviting you to join the Peer Instruction Network at www.peerinstruction.net. It will take only a few minutes to register. Once we have registered a significant number of users, we will launch site features which include the ability to locate other Peer Instruction users from your discipline, at your institution, or in your country. We will also post frequently asked questions and associated answers and publish user experiences with PI. Eventually we plan to facilitate the sharing and dissemination of materials.

In addition, I, along with Peer Instruction Network co-founder Julie Schell, will be happy to respond to questions and interact with the members of the PI Network in order to strengthen the PI user community.

Please join the worldwide network of Peer Instruction users today at www.peerinstruction.net.

January 10, 2012 at 9:07 am 2 comments

Eric Mazur’s Keynote at ICER 2011: Observing demos hurts learning, and confusion is a sign of understanding

Our keynote for ICER 2011 was Eric Mazur, the famous Harvard physics education researcher.  Mazur maintains a terrific website with his publications and talks, so the slides from his talk are available, as well as the papers that serve as the content for the talk.  His keynote talk was on “The scientific approach to teaching: Research as a basis for course design.”  I was hoping that he might give us some advice, from older-and-wiser physics education research to up-start, trying-to-learn-to-walk computing education research.  He didn’t do that. Instead, he told us about three of his most recent findings, which were great fun and intriguing.

The first set of findings were about peer instruction, which we’ve talked about here.  He spent some time exploring findings on the Force Concept Inventory (FCI), particularly with respect to gender.  In the US and Belgium (two of the places where he’s explored this), there is a huge, statistically significant gap between men and women on the FCI.  In Taiwan, he didn’t find that same gap, so it is cultural, not innate.  With peer instruction, the gap goes away.  Good stuff, but not shocking.

The second set of findings were on physics demonstrations, where teachers make sparks and lights, balance weights, make things explode (if you’re lucky), and do all kinds of things to wake you up and make you realize your misconceptions.  Do they really help?  Mazur tried four conditions (rotated around, so students would try each): no demo, observing a demo, observing a demo after making a prediction of what you thought would happen, and having a discussion afterward.  The results were pretty much always the same (here are the results from one study):

Yeah, you read that right — observing a demo is worse than having no demo at all!  The problem is that you see a demo, and remember it in terms of your misconceptions.  A week later, you think the demo showed you what you already believed.  On some of the wrong answers that students gave in Mazur’s study, they actually said “as shown in the demo.”  The demo showed the opposite!  The students literally remember it wrong.  People remember models, not facts, said Mazur.  By recording a prediction, you force yourself to remember when you guessed wrong.

That last line in the data table is another really interesting finding — talking about it didn’t improve learning beyond just making the prediction.  Social doesn’t help all learning.  Sometimes, just the individual is enough for learning.

This result has some pretty important ramifications for us computing educators.  When we run a program in class, we're doing a demonstration.  What do students remember of the results of that program execution?  Do they even think about what they expect to see before the program executes?  What are they learning from those executions?  I think live coding (and execution) is very important, but we need to think through what students are learning from those observations.

Third finding: Students praise teachers who give clear lectures, who reduce confusion.  Student evaluations of teaching reward that clarity.  Students prefer not to be confused.  Is that always a good thing?  Mazur tried an on-line test on several topics, where he asked students a couple of hard questions (novel situations, things they hadn’t faced previously), and then a meta-question, “Did you know what you were doing on those questions?”  Mazur and his colleagues then coded that last question for “confusion” or “no confusion,” and compared that to performance on the first two problems.

Confused students are far more likely to actually understand.  It’s better for students to be confused, because it means that they’re trying to make sense of it all.

I asked Mazur if he knew about the other direction: If a student says they know something, do they really?  He said that they tried that experiment, and the answer is that students’ self-reported knowledge has no predictive ability for their actual performance.  Students really don’t know if they understand something or not — their self-report is just noise.

August 17, 2011 at 9:41 am 36 comments

Workshop on Peer Instruction Concept Tests in CS Ed

Peer Instruction ConceptTests: Developing Community Resources to Support Scientific Study of Teaching
Leaders: Beth Simon and Quintin Cutts
Wednesday Aug 10: 8:30-2:30
No Cost, Application Required (July 22)

As we’ll hear in the keynote, through the development of accepted assessment items (e.g. the Force Concept Inventory), physics faculty are enabled to take a scientific approach to the study of teaching and learning in their classrooms.  Peer Instruction is a pedagogical technique which was developed when one physics professor used the FCI to study his own class, and found himself dissatisfied.  Should computing instructors be similarly dissatisfied?  How would we know?*

Using Peer Instruction’s focus on conceptual understanding, we seek to bring together a group of researchers interested in developing and studying assessment items that get at the conceptual heart of a range of computing courses.

This is NOT a workshop JUST for people interested in adopting Peer Instruction in their courses.  Interest in adopting Peer Instruction is NOT required.

If you are interested in:

  • Developing, vetting, and/or trialing core conceptual questions in specific areas (e.g., data structures, networks).
  • Exploring instructor beliefs about student conceptual challenges in computing and/or the effectiveness of current instructional practices

then this is a workshop for you.

To register for the workshop, before July 22, complete this survey, which asks you to create one “concepTest” for an important concept in one of the courses that you teach.  ConcepTest questions should

a) be expressible on a single PPT slide, with 3-5 multiple-choice answer options, and distractors based on common student misunderstandings;
b) require deep understanding to answer, not merely recall or simple application of a principle; and
c) inspire interesting discussion.
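As a concrete illustration, a CS1 concepTest meeting these criteria might look like the following (a hypothetical example of mine, not one from the workshop organizers):

```python
# Hypothetical concepTest: "What is the value of count after this loop?"
# Students vote individually, then discuss with a neighbor and re-vote.
count = 0
for i in range(3):
    for j in range(i):   # inner loop runs i times: 0, then 1, then 2
        count += 1
# (a) 3   (b) 6   (c) 9   (d) 2
print(count)             # prints 3; distractor (b) catches students who
                         # read the inner loop as running 3 times per pass
```

The distractors are what make it a concepTest rather than a quiz item: each wrong option corresponds to a specific, common misreading of the loop bounds, which is what sparks the peer discussion.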

Register at: http://www.surveymonkey.com/s/KZ62KZK

*Though the McCracken and Leeds ITiCSE Working Groups shed some light here…

July 7, 2011 at 4:05 pm Leave a comment
