Workshop on Peer Instruction Concept Tests in CS Ed

Peer Instruction ConcepTests: Developing Community Resources to Support the Scientific Study of Teaching
Leaders: Beth Simon and Quintin Cutts
Wednesday, Aug. 10, 8:30–2:30
No cost; application required (by July 22)

As we’ll hear in the keynote, the development of accepted assessment items (e.g., the Force Concept Inventory) has enabled physics faculty to take a scientific approach to the study of teaching and learning in their classrooms.  Peer Instruction is a pedagogical technique that was developed when one physics professor used the FCI to study his own class and found himself dissatisfied.  Should computing instructors be similarly dissatisfied?  How would we know?*

Using Peer Instruction’s focus on conceptual understanding, we seek to bring together a group of researchers interested in developing and studying assessment items that get at the conceptual heart of a range of computing courses.

This is NOT a workshop JUST for people interested in adopting Peer Instruction in their courses.  Interest in adopting Peer Instruction is NOT required.

If you are interested in:

  • Developing, vetting, and/or trialing core conceptual questions in specific areas (e.g., data structures, networks).
  • Exploring instructor beliefs about student conceptual challenges in computing and/or the effectiveness of current instructional practices.

then this is a workshop for you.

To register for the workshop, complete this survey before July 22; it asks you to create one ConcepTest for an important concept in one of the courses that you teach.  ConcepTest questions should

a) be expressible on a single PPT slide, with three to five multiple-choice answer options, including distractors based on common student misunderstandings
b) require deep understanding to answer, not merely recall or simple application of a principle
c) inspire interesting discussion

Register at:

*Though the McCracken and Leeds ITiCSE Working Groups shed some light here…

July 7, 2011 at 4:05 pm

Trip report on ITiCSE 2011: Robots for girls, WeScheme, and student bugs in Scratch

ITiCSE 2011 was a fun, interesting event held in Darmstadt, Germany, last week.  Here’s a brief report on my experience of ITiCSE: completely biased, centered on the events that I attended, and highlighting only a handful of papers.

Ulrik Schroeder’s opening keynote (slides available) focused on the outreach activities from his group at Aachen University, with some high-quality evaluation.  The most interesting insight for me was on their work with Lego Robotics.  I raised the issue that, in our GaComputes work, we find that girls get more excited about computing and change their attitudes with other robots (like Pico Crickets or Pleo dinosaurs) than with Lego Robotics.  Ulrik agreed and said that they found the same thing.  But boys still like and value being good at Lego Robotics, and that’s important for their goals.  He wants to find and encourage the girls who do well at the same robots that the boys like.  He wants the girls to recognize that they are good at the same CS that the boys do.  It’s a different goal than ours — we’re more about changing girls’ view of CS, and they’re more about finding and encouraging girls who will succeed at the existing definition of CS.

I went to a paper session on A Tool to Support the Web Accessibility Evaluation Process for Novices.  They had a checklist and rubric, including role playing (what would an older user do with this site? A sight-impaired user? Someone whose native language isn’t English?) to use in evaluating the accessibility of a website.  I liked the tool and wondered where else the same model could be used.  I got to thinking during this talk: Could we build a similar tool to support the design of curriculum that encourages diversity?  Could we provide checklists, rubrics, and role plays (How would a female CS student respond to this homework write-up?) to help faculty become more sensitized to diversity issues?

The coolest technology I saw was WeScheme — they’ve built a Scheme-to-JavaScript compiler into a Web page, so that students can hack Scheme live from within the browser.  I was less impressed by the paper presentation.  They’re using WeScheme in a middle school, where the kids are doing code reviews (“which most undergraduate programs in the US don’t do”) and presenting their work to “programmers from Facebook and Google.”  Somebody asked during Q&A, “How do you know that most undergraduate programs don’t do code reviews?”  They had no evidence, just an informed opinion.  I’m worried about the paper as a model for outreach.  Are Facebook and Google programmers willing to visit all middle schools in the US?  If not, this doesn’t scale.  Nonetheless, the technology is amazing, and I expect that this is the future of programming in US schools.

Probably the paper that most influenced my thinking was Orni Meerbaum-Salant’s paper on habits of programming in Scratch (same session).  They studied a large body of students’ work in Scratch and identified a number of common misconceptions and errors.  What was fascinating was that the bugs looked (to me) a lot like the ones that Elliot Soloway found with the Rainfall Problem, and the issues with concurrency were like the ones that Mitchel Resnick found with MultiLogo and that John Pane found with HANDS.  That suggests that changing the environment doesn’t change the kinds of errors students make.  And since student programming misconceptions come from our instruction (i.e., students don’t know much about programming before we teach them programming), it means that we’ve been teaching programming in basically (from a cognitive perspective) the same way since Pascal.
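For readers who haven’t seen it, the Rainfall Problem asks students to average a sequence of rainfall readings, skipping invalid (negative) values and stopping at a sentinel.  Here’s a minimal Python sketch of an intended solution (the sentinel value 99999 follows the classic formulation; the function name is mine) — the point is that even this small loop packs together the sentinel test, the validity test, and the divide-by-zero guard that students so often tangle:

```python
def rainfall(readings):
    """Average the non-negative readings that appear before the
    sentinel value 99999 (the classic Rainfall Problem spec)."""
    total = 0
    count = 0
    for r in readings:
        if r == 99999:      # sentinel: stop reading input
            break
        if r >= 0:          # skip invalid (negative) readings
            total += r
            count += 1
    return total / count if count else 0

# rainfall([10, -2, 30, 99999, 50]) -> 20.0 (50 is past the sentinel)
```

Common student bugs include averaging the negative values too, dividing when count is zero, and processing values after the sentinel — and versions of all of these showed up in the Scratch study, despite the very different environment.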

The paper reporting on a multi-year professional development effort in Alice was really interesting.  They had lots of great stories and lessons learned.  The most amazing story for me was the school district where, not only were the CD/DVD players disabled, but the IT staff had used glue guns to fill in the USB ports on the school computers.  The IT administration wanted there to be no way for teachers to load new software onto those computers.  How depressing and frustrating!

All the papers in the session on Facilitating Programming Instruction were thought-provoking.  Paul Denny’s project measures how much thrash and confusion students face in figuring out Java — and there’s a lot of it.  Shuhaida Mohamed Shuhidan (“Dina”)’s dissertation work is yet another example of how little students understand about even programs as simple as “Hello, World!”  I really liked that Matt Bower is exploring how learning a second language can influence or interfere with the first language learned, but I was disappointed that they used only self-report to measure the influence/interference.  Without any kind of prompt (e.g., an example program in the first language), can you really tell what you’ve forgotten about a first language?

My keynote went well, I thought (slides available).  I talked about CS for non-majors, for professionals who discover CS late in life, and for high school students and teachers.  After lunch on the third day, I headed off to Aachen University to visit with Ulrik’s group, so I didn’t get to see more.  ITiCSE was a lot of fun and gave me lots of good ideas to think about.

July 7, 2011 at 9:18 am
