How can teachers help struggling computationalists?

October 21, 2016 at 7:51 am

My Blog@CACM post for this month is about imagining the remedial teaching techniques of a school-based “Computing Lab” in the near future.

It’s becoming obvious that computing is a necessary skill for 21st Century professionals. Expressing ideas in program code, and being able to read others’ program code, is a kind of literacy. Even if not all universities are including programming as part of their general education requirements yet, our burgeoning enrollments suggest that the students see the value of computational literacy.

We also know that some students will struggle with computing classes. We do not yet have evidence of challenges in learning computation akin to dyslexia. Our research evidence so far suggests that all students are capable of learning computing, but differences in background and preparation will lead to different learning challenges.

One day, we may have “Computing Labs” where students will receive extra help on learning critical computational literacy skills. What would happen in a remedial “Computing Lab”? It’s an interesting thought experiment.

Source: Designing the Activities for a “Computing Lab” to Support Computational Literacy | blog@CACM | Communications of the ACM

I list several techniques in the article, and I’m sure that we can come up with many more. Here is one more DO and one more DON’T for a “Computing Lab” for struggling computationalists.

  • DO use languages other than industry standard languages.  As I’ve mentioned before in this blog, CS educators are far too swayed by industry fads.  I’m a big fan of Livecode, a cross-platform modern form of HyperCard. An ICER 2016 paper by Raina Mason, Simon et al. estimated Livecode to have the lowest cognitive load of several IDE’s in use by students.  If we want to help students struggling to learn computing, we have to be willing to change our tools.
  • DON’T rely on program visualizations.  The evidence that I’ve seen suggests that program visualizations can help high-ability students, and well-designed program visualizations can even help average students.  I don’t see evidence that program visualizations can help the remedial student.  Sketching and gesture are more effective for teaching and learning in STEM than diagrams and visualizations.  Sketching and gesture encourage students to develop improved spatial thinking.  Diagrams and visualizations are likely to lead remedial students into more misconceptions.
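To make the sketching-and-tracing alternative concrete, here is the kind of short exercise a Computing Lab tutor might assign (a hypothetical example in Python, not drawn from the article): the student writes out by hand the value of every variable after each pass through the loop, and only then runs the program to check the prediction.

```python
# The student traces this by hand, recording n and total after each pass,
# before ever running the program.
total = 0
for n in [3, 1, 4]:
    total = total + n
    # Pass 1: n = 3, total = 3
    # Pass 2: n = 1, total = 4
    # Pass 3: n = 4, total = 8
print(total)  # the hand trace predicts 8
```

The student, not an animation, produces the trace; the program’s actual output then confirms or corrects the student’s notional machine.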




10 Comments

  • 1. Kathi Fisler  |  October 21, 2016 at 9:33 am

Curious about your comment against program visualizations, and how it squares with the idea that perhaps we should be teaching notional machines more explicitly and carefully. These two ideas seem linked, in that notional machine operations seem like one possible target of program visualizations.

Where does illustrating notional machine behavior fall relative to your concerns about program visualizations? Obviously, it depends on choosing an appropriate notional machine that doesn’t cover more detail than necessary. And it depends on a visualization that doesn’t mislead. If we had that, would visualization be useful for weaker students in that context? Is this more a comment on poorly-designed visualizations (of which there are many), or on the likelihood of finding reasonable ones at all?


    • 2. Mark Guzdial  |  October 21, 2016 at 10:07 am

Take a look at the Blog@CACM post that’s linked, Kathi. We have started a new effort in our lab (it’s capturing my imagination; it’s something that new CER PhD student Katie Cunningham and I are talking about daily) to think through how we would help students sketch and trace code. Absolutely, we need to teach students notional machines, and visual thinking is likely critical to this effort. But that doesn’t mean that we need to use diagrams or animations. Teaching students a way to draw notional machines, and having students practice it, may be much more effective for developing understanding, debugging skills, and automaticity, especially for low-ability students. I think it’s about building Type 1 responses.

      • 3. Bonnie  |  October 25, 2016 at 5:32 pm

        Wow, I have been doing this for years. It always seemed like common sense to me that actually DOING the visual trace by hand would teach a student more than simply watching one.

    • 4. Peter Donaldson  |  October 22, 2016 at 6:20 pm

Having pored over a large chunk of research on both program and algorithm visualisation, it seems the main benefit comes when novices have to think about what the notional machine will do next. Why do I say that? Firstly, just watching a visualisation doesn’t seem to improve novices’ understanding in any measurable way, and secondly, the quality of the visualisation has no correlation with how much a novice learns from it. What does seem to matter is the level of active engagement novices are encouraged to have with it. This suggests that having novices carry out the visualisation process themselves, by creating annotations either on paper or computer-assisted, will lead to the biggest learning gains.

The tracing process itself is the valuable aspect, but traditional tracing still requires a lot of implicit details to be kept in mind. It’s likely that this is why novices dislike it so much: it’s a high cognitive load activity. There’s not a huge amount of prior art on the impact of trace-based teaching, but what there is is really encouraging. I’d say problets are a computer-based example and the MTA method a paper-based one.

Mark, it’s likely that you may need to develop several approaches that focus on different aspects. For PLAN C we developed several code-annotated tracing methods to target different levels of detail. We had a bucket-based method for nested expressions of various kinds, another for understanding sequence, selection and repetition, and one focused on modular code and parameter-passing mechanisms.

  • 5. Kathi Fisler  |  October 21, 2016 at 9:36 am

    I suggested a “computing lab” akin to the “writing center” at WPI a couple of times over the past few years. I sensed that not many people imagined students needing general computational help that wasn’t already contextualized in courses (and hence supported by TAs). There was also a concern that as our class sizes grow, all the students who would be qualified to staff such a center were already working as teaching assistants.

    But I agree this is the kind of center/activity that is likely coming, as genuine computing (not necessarily programming) gets integrated into more courses across the university.


  • 6. shriramkrishnamurthi  |  October 21, 2016 at 9:44 am

No, that’s not actually what the Mason et al. paper says. I was rather curious to see how DrRacket fared on their evaluation. It’s not even present. That’s because their paper focuses only on mobile apps.

    Within that environment, they have a fairly subjective scoring scheme and arbitrary way of combining scores. Which is all fine; you have to start somewhere. But much more significantly, the work doesn’t take into account errors — an acknowledged but nevertheless _glaring_ omission (not only one that might disadvantage environments that have put a lot of thought into that aspect, but more importantly, one that I think is vital to consider for beginners).

    I’m glad their methodology is out in public so others can compare against it, but their abstract needs to have been much more honest (the phrase “mobile apps”, for instance, never shows up; they use the much more general and misleading phrase “introductory programming course”). It’s VERY disappointing that a presumed elite conference like ICER did not catch and correct this issue.

    At any rate, while it is a somewhat interesting paper, I don’t think you should present it the way you have, without qualifiers.

    I _am_ with you on visualizations!

    • 7. Mark Guzdial  |  October 21, 2016 at 10:02 am

      The focus of the Mason et al. paper is on ways of measuring cognitive load. It’s a methodology paper. Yes, they only considered mobile programming. You have to evaluate a methodology somewhere.

      They wrote, “By contrast, LiveCode has the fewest steps, and the fewest above the threshold. LiveCode scaffolds tasks well using code skeletons, the code is more like English than in many programming languages, and there is extensive on-screen context-sensitive help, all of which contribute to this result.” I wrote, ” An ICER 2016 paper by Raina Mason, Simon et al. estimated Livecode to have the lowest cognitive load of several IDE’s in use by students.” I’d say that I characterized their paper pretty accurately.

      • 8. shriramkrishnamurthi  |  October 21, 2016 at 3:59 pm

On the contrary, I’d say you characterized it very inaccurately. You said it “estimated Livecode to have the lowest cognitive load of several IDE’s in use by students.” But their paper does not claim to be representative of IDEs used by students in general. It claims LiveCode has the lowest load of a subset of IDEs they chose to measure _to create mobile apps_. The quote you’re extracting from their paper is without the vital context of the part I just underlined. It’s a shame that this critical scoping criterion is missing from the title, abstract, and introduction of their paper (it’s irresponsible of ICER not to have demanded that), but their paper is nowhere near as general as your quote makes it out to be.

  • 9. mgozaydin  |  October 21, 2016 at 3:32 pm

I learned Fortran, COBOL and ALGOL in 1963. It was hard for me to comprehend what a computer would do. Plus our teacher was not very good either. Later I learned time-sharing with GE’s BASIC language, then PL/1.
I even taught HP engineers BASIC in 1965. Then until 1995 somebody did all my computer work.
I started using Excel and Word again after 1995.
My granddaughter, who is 9 years old now, has been typing on the iPad for 2-3 years. She is much better than me.
Today coding is just “how to read and how to write.”
Fortunately kids learn it by themselves.

  • […] wrote, I drew pictures to describe the behavior, drawing from Sorva’s visualization approach and the SILC emphasis on sketching rather than diagrams. After writing each program, I tested it on a picture. Along the way, I answered questions and […]

