Mark’s Trip Report on ICER 2011: Students’ experience of CS classes, and making compilers more friendly

August 15, 2011 at 9:39 am

Last week was the International Computing Education Research conference for 2011 at Rhode Island College in Providence, RI. (What a cool city! My first time, and I enjoyed getting lost on one of my runs!)  It was the first time in years that I actually stayed for the whole conference, since I left after the first day last year.  I enjoyed realizing again why I loved this conference so much. Several of the papers were wonderful, the discussions and hallway chit-chat were terrific, and it was great to see so many colleagues, as well as meet people whose papers I’ve been reading but hadn’t yet met.

I’m labeling this “Mark’s Trip Report” because I’m not going to attempt to be thorough or fair in what papers I mention.  I’ll tell you about what struck me.  I’ll do a separate post just on the keynote.

The first set of papers was ostensibly about how students choose computing, but I thought there was a strong subtext about understanding the student experience of computing classes.

  • Colleen Lewis talked about “Deciding to Major in Computer Science: A grounded theory of students’ self-assessment of ability,” but it was really much more about that “self-assessment” part than about the “deciding” part.  Colleen told us that a common theme in her interviews was the tension between a growth and a fixed mindset (drawing on Carol Dweck’s work). Many students decide early on that they’re bad at computing and can’t get better, i.e., that they don’t have the “Geek gene.” Those students won’t choose CS, of course, but for such a disappointing reason.
  • Mike Hewner presented his work, which spoke to where students get their information about CS and how a good class experience can color a student’s perception of a specialization area.
  • Päivi Kinnunen presented “CS Majors’ Self-Efficacy Perceptions in CS1: Results in light of social cognitive theory,” which was about applying Bandura’s work (which explores how early failures at something lower students’ belief in their ability to do that something) to CS1.
    Päivi’s paper got me wondering what we’re telling CS majors when we have them use Alice or Scratch in CS1.  As we know from Mike’s work, CS majors know something about CS — they know something about the languages used in regular practice.  When we tell them to use Alice or Scratch, are we saying to them (in light of Bandura’s work), “You aren’t capable of using the real, authentic practice,” and thus lowering their self-efficacy?  And if we use a “real” language (even if it’s harder), are we saying (in a Dweck growth-mindset sense), “Yeah, this is hard, but you can do it.  You can learn to handle the real thing”?

Päivi’s talk was a great set-up for Sally Fincher’s “Research Design: Necessary Bricolage,” which ended up winning the people’s choice best paper award (voted by attendees, and called the “Fool’s Award” at ICER).  Sally was asking how we go about gathering information about our students’ practices.  She said that we rely far too much on semi-structured interviews, and that we should combine other methods and practices to gain more insight.  She showed examples of some of her research instruments, which were really wonderful (I plan to steal them as early as this semester!).  Here’s a neat combination of methods: First, give students a grid of all 24 hours across the 7 days of the week, in 30-minute increments, and ask them to mark when they work on the class.

That’s the “when.”  To get the “where,” Sally (and Josh Tenenberg and Anthony Robins) gave students cheap digital cameras, and asked them to take a picture of where they were working.

That upper left-hand corner is a bus seat.  Would you have guessed that your students do CS homework on the bus?  Notice the mixture of affordances: on the bus, in the dorm room, in the lab with peers, at a table to work with friends.  Did you realize that students are working so much away from a rich computational infrastructure?  There’s no bottom-line result here — rather, it’s about what data we should be gathering to figure out the things that we don’t yet realize we need to know.

I enjoyed the papers by Cynthia Bailey-Lee, Beth Simon (for her paper on PeerWise with lead author Paul Denny — Beth’s name seemed to be on every other paper this year!), and Matt Jadud, because they were all replication studies.  Cynthia took a finding from biology (on using peer instruction) and tested whether it worked in CS.  Beth and Matt each took earlier CS Ed papers and checked whether the results still held in new settings.  It doesn’t matter what the bottom-line findings were.  It’s so cool that our field is starting to go deep and check the work of earlier papers, to explore where it works and where it doesn’t, and to develop more general understanding.

Kathi Fisler presented a really interesting paper, “Do values grow on trees? Expression integrity in functional programming,” notable for the variety of interpretations it invited.  Kathi presented it as an exploration of whether functional programming is “unnatural” for students.  I’m not sure how to ask that question.  What I found them exploring was, “How do novices and experts see the nested structure of s-expressions?  Do they see the trees?  Is that evident in their editing behavior, e.g., do they edit in ways that maintain expression integrity, or do they ignore the parentheses when typing?”  Since so much computing involves the Web today, I wonder how comparable the results would be for people typing HTML (which is also a nested, tree-based notation).
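
To make concrete what “seeing the trees” means, here is a minimal sketch (my own illustration, not anything from the paper): it parses an s-expression string into nested Python lists, so the subtree structure that editing with “expression integrity” preserves becomes explicit. The parser and the example expression are assumptions for illustration only.

```python
# Minimal sketch (not from the paper): parse an s-expression string into
# nested Python lists so the tree structure students must "see" is explicit.
def parse_sexp(text):
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        token = tokens[pos]
        if token == "(":
            node, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = read(pos)
                node.append(child)
            return node, pos + 1          # skip the closing paren
        return token, pos + 1             # atom

    tree, _ = read(0)
    return tree

# "(define (square x) (* x x))" is one subtree containing two subtrees;
# editing with "expression integrity" means moving or deleting whole
# subtrees, never a stray parenthesis.
print(parse_sexp("(define (square x) (* x x))"))
# -> ['define', ['square', 'x'], ['*', 'x', 'x']]
```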

I had a nice chat with Kathi’s co-author Guillaume Marceau who, with Kathi and Shriram, won the SIGCSE 2011 best paper award on designing error messages for students (which is an issue that has come up here recently).  I told Guillaume about Danny Caballero’s thesis, and he told me about why it’s so difficult to get error messages right for students.  The problem is that, by the time the parser has figured out what the problem is, all the context information to help the student has been thrown away.  An example is “identifier not found.”  For a student, a variable and a method/function name are completely different identifiers, completely different meanings.  It takes students a long time to generalize an identifier-value pairing such that the value could be an integer, an object, or a code block.  For most compilers, though, why you want the identifier is lost when the compiler can’t find an identifier’s meaning.  Racket contains compiler calls that help you construct the context, and thus provide good error messages.  He doesn’t hold out much hope for Java — it’s so hard just to compile Java, and refactoring it for good error messages to help students may be impossible.
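
A small sketch of the idea Guillaume described, entirely my own construction rather than the Racket machinery he was talking about: a toy evaluator that keeps track of whether an unbound name appeared in function-call position or in variable position, so that its “identifier not found” message can say which one the student wrote. The evaluator, the environment, and the messages are all invented for the example.

```python
# Minimal sketch (my own, not Racket's or any real compiler's machinery):
# keep the syntactic context of an identifier so an "unbound identifier"
# error can say whether the unknown name was used as a function or a variable.
def eval_expr(expr, env):
    if isinstance(expr, str):                      # identifier in value position
        if expr not in env:
            raise NameError(f"'{expr}' is used here as a variable, "
                            f"but no variable with that name has been defined.")
        return env[expr]
    if isinstance(expr, (int, float)):             # literal number
        return expr
    op, *args = expr                               # a list is a function call
    if op not in env or not callable(env[op]):     # identifier in function position
        known = ", ".join(sorted(n for n in env if callable(env[n])))
        raise NameError(f"'{op}' is used here as a function, "
                        f"but I only know these functions: {known}.")
    return env[op](*(eval_expr(a, env) for a in args))

env = {"x": 3, "add": lambda a, b: a + b}
print(eval_expr(["add", "x", 4], env))   # -> 7
# eval_expr(["sqare", "x", "x"], env)    # -> "'sqare' is used here as a function, ..."
```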

Two other papers that I want to say brief words about:

  • Simon’s paper on “Explaining program code: giving students the answer helps — but only just” follows up on the BRACElet work, where students were asked to read a piece of code and explain its purpose.  The students failed miserably.  Simon wondered, “What if we gave them the answer?”  Rather than have the students fill in a blank about what the code did, he gave them a multiple-choice question where the answers were the top five guesses from the first study.  Yes, there was improvement.  But no, performance was still appalling.
  • Michael Lee presented on “Personifying programming tool feedback improves novice programmers’ learning,” which is a slightly misleading title.  What they did was create a programming task (moving a little graphical character around on a board), but with a “personified” parser.  A mistyped command might get the little character to say sheepishly, “I’m sorry, but I really don’t know how to do that. I wish I did.  I know how to do X. Is that what you would like me to do?”  I don’t think the authors really measured learning; what they did measure was how long students stuck with the task — and a personified compiler is not nearly as scary, so students stick with it longer.  (See Bandura above; a small illustrative sketch of the idea follows this list.)
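
Here is the flavor of that personified wrapper, as a tiny sketch of my own rather than the authors’ actual tool: it takes a command the parser did not recognize, phrases the failure in the first person, and uses fuzzy matching against the commands it does know in order to offer a suggestion. The command set is invented for the example.

```python
# Illustrative sketch only (not the authors' system): wrap a raw parser
# failure in personified, first-person phrasing and suggest a known command.
import difflib

KNOWN_COMMANDS = ["move", "turn", "grab", "drop"]   # hypothetical command set

def personified_error(bad_command):
    guesses = difflib.get_close_matches(bad_command, KNOWN_COMMANDS, n=1)
    message = (f"I'm sorry, but I really don't know how to '{bad_command}'. "
               f"I wish I did.")
    if guesses:
        message += (f" I do know how to '{guesses[0]}'. "
                    f"Is that what you'd like me to do?")
    return message

print(personified_error("moove"))
# -> I'm sorry, but I really don't know how to 'moove'. I wish I did.
#    I do know how to 'move'. Is that what you'd like me to do?
```
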
The papers will all show up in the ACM Digital Library soon.