A Challenge to Computing Education Research: Make Measurable Progress

August 16, 2010 at 9:50 pm 41 comments

In computing education research, we repeatedly conduct studies showing just how little students are learning in our classes.  We’ve been doing it for almost 30 years.  I have a challenge to our community: Let’s show that we can do better!

I had two experiences this week that made me think of this challenge.

Students learn little in CS1: Allison Tew is defending her dissertation at the end of this week.  I’ll blog on that soon.  As I mentioned previously, Allison built the first language-independent measure of CS1 knowledge, which she hoped to show was reliable and valid.  To establish the validity of her language-independent exam (which used pseudo-code), she ran subjects through a test in their “native” CS1 language and through her equivalent pseudo-code test.  Allison has taken some criticism (in response to her SIGCSE 2010 paper) for defining CS1 narrowly, covering only the topics most common across multiple styles of CS1.  She ran over 950 subjects, across Java, MATLAB, and Python classes at multiple institutions in two countries, making this (I believe) the largest study of CS1 knowledge ever.

The important bottom line was whether her test really works, but the bottom line that I want to focus on here is this: the majority of students did not pass.  The average score on the pseudo-code test was 33.78%, and 48.61% on the “native” language test.

That really shouldn’t surprise us.  Elliot Soloway ran hundreds of subjects through his rainfall problem, and always found that students did really badly at it. Every attempt to replicate that study that I know of has found roughly the same thing. The McCracken working group study showed that students can’t design.  The Lister working group study showed that students couldn’t answer simple multiple choice questions about loops.  What’s surprising is that Allison continues to narrow the focus, to the barest of minimums — and still students can’t pass.
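
For anyone who hasn’t seen it, the Rainfall Problem is roughly this: read integers representing daily rainfall amounts until the sentinel value 99999 (or until the input runs out), ignore negative values, and report the average of the valid readings.  The exact wording varied across Soloway’s studies and the later replications, so treat the sketch below as illustrative rather than the official problem statement; the function name and the list-based input are just conveniences for this post.  In Python, the kind of solution we would hope a CS1 student could produce looks something like:

def average_rainfall(readings):
    # Average the non-negative readings that appear before the
    # sentinel value 99999 (the classic Soloway CS1 task).
    total = 0
    count = 0
    for value in readings:
        if value == 99999:   # sentinel: stop processing input
            break
        if value < 0:        # negative readings are invalid; skip them
            continue
        total += value
        count += 1
    if count == 0:           # guard against dividing by zero
        return 0
    return total / count

print(average_rainfall([2, 0, -1, 5, 99999, 7]))   # prints 2.333...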

People don’t know computing. Erika Poole just submitted her final dissertation document this past week.  She did a fascinating study of families dealing with a variety of technology-related challenges that she set out for them (and paid them to attempt), as a way of discovering how they sought out help.  The families in her study did much worse than I might have guessed, e.g., only 2 of 15 families were able to configure their wireless router.  Some of the reasons were absolutely brain-dead user interfaces (how about a virtual keyboard that was missing some keys?!?).  But others were more subtle.

One of her challenges was to edit a Wikipedia page.  When you edit a Wikipedia page for the first time, without an account, you get this warning message:

You are not currently logged in. Editing this way will
cause your IP address to be recorded publicly in this
page's edit history. If you create an account, you can
conceal your IP address and be provided with many other
benefits. Messages sent to your IP can be viewed on your
talk page.

Erika writes:

This message was confusing or concerning to these participants, who did not necessarily understand whether IP address exposure was harmful or innocuous.  …

Of the three non-completers, Nyree described the process of editing an encyclopedia and reading this warning message as something that made her “feel like a criminal.” She chose not to save the change she made.  Jamar, uncertain about whether exposing his IP address would open him up to harm, called a computer-savvy friend to help him interpret this message; his friend told him that he should not complete the task.  Janine Dunwoody, too, was alarmed by this message, but decided she did not want to create an account, either, because she did not want to disclose personal information to Wikipedia.

Should we expect people who edit Wikipedia to know what an “IP address” is?  Is it a user interface mistake to expect that people should know what an “IP address” is?  At the very least, we can probably all agree that any student who takes a course in computing principles should come away knowing what an “IP address” is.

Stop replicating, start improving. Most of the studies described here have tried to understand what students and common users know about computing.  That’s a really important science goal.  However, we are also engineers — we attempt to create solutions to learning problems.  As computing education researchers, we have both goals, to understand and to improve.

After 30 years, why hasn’t somebody beaten the Rainfall Problem?  Why can’t someone teach a course with the explicit goal of their students doing much better on the Rainfall Problem — then publish how they did it?  We ought to make measurable progress.

I don’t think that this is an impossible goal.  In fact, I bet that some of the existing research projects in computing education could “beat” (generate published reports with better results) these current studies.

  • The TeachScheme approach focuses on design based on data.  I bet that their students could beat the Rainfall Problem or the McCracken working group problem.
  • I bet that the Problets, Practice-It!, or CodingBat folks could get better results on either the Lister problem set or Allison’s new test.  My bet is that lots of examples and practice in-the-small are what it would take to get students to understand those concepts.
  • Students who take the new AP CS “Computer Science: Principles” should be able to complete all of Erika’s challenges successfully, and especially, know what an IP address is.

If someone makes an attempt and doesn’t beat the problem, that’s worth publishing, too.  We need to set out some yardsticks and measure progress against them.  It’s great to know exactly what’s going on in our classrooms, even if it’s not great news.  There are lots of people trying innovative approaches to computing education.  We need to close this loop, and see if the innovative approaches are making progress on those same challenges that showed us the bad news.  Let’s make measurable progress.

Entry filed under: Uncategorized.


41 Comments

  • 1. Alfred Thompson  |  August 16, 2010 at 10:10 pm

    The scariest conclusion I came to after teaching HS computer science was that I wasn’t doing it right. Of course, I had some students have some great success, both on metrics like the APCS exam and in university and later careers. I like to think I helped them learn some things. But the place we fail, often, is with students who don’t immediately fall in love with the subject. As you suggest, students can’t design. They have trouble taking the syntax and making it solve problems. Part of it is a chicken-and-egg problem, I think. They don’t really understand the syntax well because they don’t understand how it fits in the context of solving a problem. And because they don’t understand the syntax and its functions well, they can’t figure out how it fits into a design solution.

    The TeachScheme methods might be a possible answer. I just want to see some tests done by skeptics rather than by people who are biased towards it.

  • 2. gasstationwithoutpumps  |  August 17, 2010 at 12:06 am

    For years I’ve felt that no one knows how to teach people to program. Many people learn it, and teachers can help them learn specifics (language syntax, documentation habits, unit testing, and so forth), but the underlying way of thinking that characterizes programming and debugging skill does not appear to be teachable (at least not with current teaching approaches).

    I’ve taught programming classes, and some of the students seem to get the material effortlessly while others struggle no matter what I do.

    I think that the first couple of years of a computer science curriculum are more a selection process than an education process.

  • 3. Andrew Begel  |  August 17, 2010 at 1:50 am

    Did Allison look at replicating her results at the end of CS2? Within or between subjects, we know that some of the students eventually do get the programming concepts down, since they do qualify to graduate. Is there something specific they’re learning later that solidifies their programming knowledge, or is it just the amount of time thinking and practicing with it? Perhaps testing people who pass CS1, but then drop the major, against CS2 students who complete the course? I tried something similar when I was in grad school (testing CS1 Scheme programming concepts with CS2 Java students), and found that most couldn’t transfer concepts between languages yet; they wanted to solve Scheme recursion problems with Java iteration, even though they could express recursion just fine in Java. Perhaps, programming ability transfer between languages, even to pseudo-code, might come only after learning 2+ languages?

    • 4. Mark Guzdial  |  August 17, 2010 at 10:11 am

      Hi Andy! Allison did something like that (look at CS1 differences based on CS2 performance) for her ICER 2005 paper. No, she didn’t go beyond in this study. Her goal here was to build the test. I’ll try to answer your questions (and Alan’s) when I do the blog on her work, as soon as I can make some time. (And a blog on Brian Dorn’s dissertation too. Allison defends Friday, and Brian Thursday.)

    • 5. Alfred Thompson  |  August 17, 2010 at 11:05 am

      “Perhaps, programming ability transfer between languages, even to pseudo-code, might come only after learning 2+ languages?”

      I read somewhere (sorry, I lost the reference; maybe in a piece in an old CACM) that there was a theory that learning a second language was no easier than learning a first, but that the third language was the easy one. This would not surprise me, as I found that when I explained concepts using multiple languages, students seemed to “get” it more completely. This is one reason I have toyed with the idea of writing a text that explained things using 3 or more languages in parallel.

      • 6. gasstationwithoutpumps  |  August 17, 2010 at 12:56 pm

        I don’t think that language transfer is really the problem: I’ve met plenty of students on their 3rd or 4th language who still can’t write or debug code in any of them.

        My 14-year-old son, who has learned about a dozen programming languages, is fluent in 3 of them and never had any trouble transferring things he understood (iteration, recursion, assertions, …) from one to another. New concepts still throw him sometimes (he doesn’t understand multiple inheritance yet, for example).

        • 7. Mark Guzdial  |  August 17, 2010 at 2:27 pm

          This is really the crux of Allison’s work. I’ll give away one of the punchlines: student knowledge of CS1 transfers from their native language to pseudocode. That means that we can use pseudocode to test knowledge of CS1, and better yet, to compare approaches across languages. But exactly as you’re saying, it’s about CS1 concepts, not programming or design. Allison is picking a slice of CS1 that has an intersection with just about everyone’s CS1, making the exam widely applicable and valuable.

  • 8. Leigh Ann Sudol  |  August 17, 2010 at 4:22 am

    I also think we need more work in evaluation and less in defining the space. It seems that every third paper is a new rubric or metric, or an analysis of what students are doing. Perhaps Allison’s work is what sparks a movement in CS similar to what the Force Concept Inventory did for physics. Once we really show that students are learning HOW to answer our questions rather than the actual concepts, then perhaps we can move beyond some of the theoretical work and into actual performance improvement.

  • 9. Alan Kay  |  August 17, 2010 at 9:03 am

    I don’t think I know anything general about teaching computing in the context of a “CS1” to the larger population.

    The first question I had (which Allison may have indeed done) is whether some process that normalized the study group in other areas was carried out. In other words, what can be said about the levels and variations in abilities of the test group?

    My prejudice from personal experience and observation is that it is highly likely that a combination of too little time and weak methods is having a huge effect here. Most people have to spend quite a bit of time getting fluent with the mechanics before they can use them for solving problems, making things, etc.

    This is because the distribution of variation of “intrinsically disposed” for a skill is pretty wide, and not catering to it amounts to a “selection process” just as mentioned above.

    However, it is quite possible to help those who are not immediately set up for getting fluent quickly — this is because “more practice” of certain kinds really does build up more precursors and can substitute for talent to an amazing degree in many areas. This is well known for music and sports and reading and writing, and I’m guessing that it also obtains for computing.

    So perhaps a central question here is “How much do we want to help the larger population to get fluent with things they may not have initial talent for?” And “is computing one of these?”

    Best wishes,

    Alan

  • 10. Garth  |  August 17, 2010 at 2:07 pm

    When I teach Programming I (high school), I teach coding. I do not like it, I know doing it does not reach the objective I have in mind, and I end up with kids that can code but not write a program. In my defense, I do not have the background, experience, time, or brains to write a Programming I course that teaches what I want to teach.

    I have been looking for a new textbook for our programming courses. Everything I have looked at does exactly what I do now. Chapter 1 is “Introduction to (insert language)”. Chapter 2 is “User interface design” or something similar. Chapter 3 dives into code. It seems to me the first 4 or 5 chapters of a programming textbook should deal with the basics of program design, fundamental concepts of programming, and top-down and bottom-up design schemes, with assignments like the old “how to build a peanut butter sandwich” exercise.

    This blog article and the comments seem to agree that the teaching of planning and design is missing from the introductory curriculum. I can find a lot of talk on the internet about the topic, but I cannot find much on a concrete, “usable in the classroom” solution. My present attempt at writing a curriculum to do what I want to do is just a bit tenuous for my liking, and I have the feeling the other programming teacher is not going to like it because the kids do not start typing code on day two. Has anyone written a good Intro to Programming textbook that actually teaches programming and not coding? Junior high and high school seem to be the place to start doing things right, but I just cannot find the right material to do the job.

    • 11. Mark Guzdial  |  August 17, 2010 at 2:30 pm

      I do agree that students don’t really learn to design or program from a first-semester CS course. I don’t agree that it’s possible to cover all of that, nor do I think that students can learn design before they learn the materials with which they’re designing. I’m one of those who gets students coding as soon as possible: concrete playing with code before the abstractions of design.

      • 12. Garth  |  August 18, 2010 at 2:27 pm

        It is kind of the chicken-and-egg paradox. I just feel that knowing how to do the design phase is a much more transferable skill to other fields, i.e., math, science, English, etc. Without knowing how to design the program and having a plan, how can a kid know what code to type? I do have to admit hacking something together is a lot more fun than planning the program in detail, but in the long run I do not think it is the best way to go as far as a teaching methodology.

    • 13. Hélène Martin  |  August 19, 2010 at 11:12 pm

      Have you taken a serious look at How to Design Programs/ TeachScheme? It’s certainly not perfect, but it truly is a remarkable example of a curriculum that has been polished over time and that does provide lots of useful ways of teaching design.

      I disagree that programming should come in late, and hopefully HtDP will show you some ways to introduce design through steadily more complex programming examples. Without programming, it’s too easy to resort to feel-good examples with little content like the peanut butter sandwich exercise, which I do as well.

      • 14. Mark Guzdial  |  August 20, 2010 at 9:24 am

        I explicitly said in this post that I think that HtDP/TeachScheme has the best chance of any approach to beat the McCracken results on how students design programs. I’d love for them to test that assertion and publish it! Increasingly introducing design through programming is a great approach. I disagree with trying to teach design first.

  • […] out how we can compare different approaches to teaching CS1.  As Alan Kay noted in his comments to my recent previous post on computing education research, there are lots of factors, like who is taking the class and what they’re doing in the class. […]

  • 16. Hélène Martin  |  August 19, 2010 at 11:18 pm

    Students who take the new AP CS “Computer Science: Principles” should be able to complete all of Erika’s challenges successfully, and especially, know what an IP address is.

    I think that’s a dangerous assertion to make. Current college-level introductory CS courses can’t reach that bar even when they’re uniquely focused on programming knowledge. Now you’re saying this new course not only needs to do better at conveying programming concepts than existing courses but also needs to teach any number of loosely connected computing facts. Seems like a setup for disappointment, to me.

    • 17. Mark Guzdial  |  August 20, 2010 at 9:22 am

      If students come out of the APCS:Principles doing “better at conveying programming concepts,” then we’ve really failed. The Principles course should not be about programming, and certainly shouldn’t cover more programming concepts than the existing Level A.

      • 18. Hélène Martin  |  August 20, 2010 at 11:17 am

        Ahh. This makes a lot more sense. I read “Erika’s challenges” and my brain interpreted it as “Allison’s test” and I got really confused/terrified, hence the panicky comment.

        Sorry! I should have assumed I’d parsed something wrong because that really didn’t make any sense.

  • 19. Alan Kay  |  August 20, 2010 at 11:58 am

    I still don’t know anything general and useful.

    But the “bricks” vs. “arches” metaphor that I like has a fair amount of evidence supporting the idea that simple almost linear combinations of things are more obvious and tryable than complex non-linear combinations.

    The old rule of thumb in the early 60s was that “you can learn to program in a week” (bricks) but “it takes several years to get into design” (arches).

    Another form of evidence for this today is that most existing software is very poorly designed, yet is still semi-working combinations of bricks of the kind that most programmers learned pretty early in their experience.

    Getting “too good at bricks” for many seems to hurt getting good at design (there’s lots of evidence for this). But lots of design early contributes to the 7 ± 2 problems of beginners.

    An interesting side note here is that one of the motivations for higher level languages was to embody them with “good design” and reduce the degrees of freedom that lead unsophisticates astray. This is very hard to see in the languages that educational institutions think they should be teaching today.

    Cheers,

    Alan

  • […] that we have a track record of being unable to measure accurately our students’ achievement, I suspect that those of us in Computing are particularly susceptible to this criticism. […]

  • 21. Heading off to SIGCSE 2011! « Computing Education Blog  |  March 7, 2011 at 9:01 am

    […] on Role and Value of Quantitative Instruments in CS Education — I plan to talk about the from science-to-engineering themes that I talked about in a post from last year. There’s an NCWIT Academic Alliance reception in the evening before the main conference […]

  • […] fascinating was that the bugs looked (to me) a lot like the ones that Elliot Soloway found with the Rainfall Problem, and the issues with concurrency were like the ones that Mitchel Resnick found with Multilogo and […]

  • […] value of COMPASS is in having a yardstick.  We can use it to see how we can influence these attitudes.  Danny wrote it so that […]

  • […] is a nice op-ed piece.  The point is that testing is useful for teachers. It’s too easy to fool ourselves as teachers and believe that our own testing is good enough. Yes, our education system has lots of problems with it, but standardized testing can help to […]

  • 25. This one is for you, Mark « Technology education and me  |  November 22, 2011 at 12:18 pm

    […] science educators to focus on measurable improvements to the way we teach.  He even posted an entry on the topic in August 2010 in which he said: After 30 years, why hasn’t somebody beaten the Rainfall Problem?  Why […]

    • 26. Mark Guzdial  |  November 22, 2011 at 3:23 pm

      Very Cool, Amber! Thanks for the post! And congratulations!

  • […] a version of a challenge that I have made previously: Show me pedagogical techniques in computing education that have statistically significant impacts […]

  • […] assumption that the cartoon about CS textbooks was lampooning. We assume that students learn much more than what our assessments say that they’re learning. But Dan and Nora point out that it only gets better if we can measure what we really think is […]

  • […] As I said before, we’re getting to the end of “Georgia Computes!”  This was one of our last big analysis efforts.  It’s really hard to do these kinds of studies (e.g., each of those school that did not participate still got our time and effort in trying to convince them, then there’s the data cleaning and analysis and…).  I’m glad that we got this snapshot, but wish that we got it at an even larger scale and more regularly.  That would be useful for us to use as a yardstick over time. […]

  • […] their understanding.  My personal research agenda is more on the latter than the former — it’s more important to me to learn how to teach better, rather than to understand the effects of teaching that might be better if we built on everything […]

  • […] statistics well (I really did try this last summer).  It’s hard to teach anything well, and there’s evidence that we need to improve our teaching in computer science.  This doesn’t feel like an indictment of MOOC courses overall. In brief, here is my […]

  • […] Actually measuring learning in higher education classes could be a real step forward, in terms of providing motivation to improve learning against those assessments — for both MOOCs and for face-to-face […]

  • […] a researcher: I’ve written before about the measures that we have that show how badly we do at computing edu…, and about how important it is to make progress on those measures: like the rainfall problem, and […]

  • 34. Education is already Gamified: Dan Hickey on Badges | gyapti  |  January 11, 2013 at 4:40 am

    […] assumption that the cartoon about CS textbooks was lampooning. We assume that students learn much more than what our assessments say that they’re learning. But Dan and Nora point out that it only gets better if we can measure what we really think is […]

  • […] the course) than a similar face-to-face course.  It’s not obvious to me either way — there are certainly results that have us questioning the effectiveness of our face-to-face classes.  While MOOCs lead to few finishing, maybe those that do finish learn more than in a face-to-face […]

  • […] Erika Poole documented participants failing at simple tasks (like editing Wikipedia pages) because they didn’t understand basic computing ideas like IP addresses.  Her participants gave up on tasks and rebooted their computer, because they were afraid that someone would record their IP address.  How much time is lost because users take action out of ignorance of basic computing concepts? […]

  • […] challenge of computing literacy may be even greater than the challenge of financial literacy.  People know even less about computing than they do about finance.  We don’t know the costs are of that ignorance, but we do know […]

  • […] PhD students who informed my understanding of computing education: Mike Hewner, Betsy DiSalvo, and Erika Poole.  I struggled with the overall story, until I learned that Licklider’s degrees were mostly […]

  • […] FizzBuzz problem described below is pretty interesting, a modern day version of the Rainfall problem.  I will bet that the results claimed for FizzBuzz are true, but I haven’t seen any actual […]

  • […] science in future decades, as we develop better cognitive abilities?  Given that performance on the Rainfall Problem has not improved over the last thirty years, I doubt it, but it’s an intriguing […]

  • […] The most interesting result of that kind in Briana’s dissertation is one that I’ve written about before, but I’d like to pull it all together here because I think that there are some interesting implications of it. To me, this is a Rainfall Problem kind of question. […]

