Measuring progress on CS learning trajectories at the earliest stages
May 25, 2020 at 7:00 am 15 comments
I’ve written in this blog (and talked about in many presentations) about how I admire and build upon the work of Katie Rich, Diana Franklin, and colleagues in the Learning Trajectories for Everyday Computing project at the University of Chicago (see blog posts here and here). They define the sequences of concepts and goals that K-8 students need in order to write programs consisting of sequential statements, to write programs that contain iteration, and to debug programs. While they ground their work in K-8 literature and empirical work, I believe that their trajectories apply to all students learning to program.
Here are some of the skills that appear in the early stages of their trajectories:
- Precision and completeness are important when writing instructions in advance.
- Different sets of instructions can produce the same outcome (see the sketch after this list).
- Programs are made by assembling instructions from a limited set.
- Some tasks involve repeating actions.
- Programs use conditions to end loops.
- Outcomes can be used to decide whether or not there are errors.
- Reproducing a bug can help find and fix it.
- Step-by-step execution of instructions can help find and fix errors.
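To make one of these concrete: the “different sets of instructions can produce the same outcome” idea is easy to demonstrate in any language. Here is a minimal Python sketch (the function names and the asterisk task are mine, purely for illustration):

```python
# Two different instruction sequences that produce the same outcome:
# a row of eight asterisks.

def stars_with_a_loop():
    line = ""
    for _ in range(8):  # repeat one action eight times
        line = line + "*"
    return line

def stars_without_a_loop():
    return "****" + "****"  # assemble the same outcome a different way

print(stars_with_a_loop())     # ********
print(stars_without_a_loop())  # ********
assert stars_with_a_loop() == stars_without_a_loop()
```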
These feel fundamental and necessary — that you have to learn all of these to progress in programming. But it’s pretty clear that that’s not true. As I describe in my SIGCSE keynote talk (the relevant 4-minute segment is here), there is lots of valuable programming that doesn’t require all of these. For example, most students programming in Scratch don’t use conditions to end loops — still, millions of students find expressive power in Scratch. The Bootstrap: Algebra curriculum doesn’t have students write their own iteration at all — but they learn algebra, which means that there is learning power in even a subset of this list.
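To spell out the Scratch point for readers who don’t program: a fixed-count loop and a loop ended by a condition make different cognitive demands. A rough Python analogy (Scratch itself is block-based, so this is only a translation of the idea):

```python
# A fixed-count loop, analogous to Scratch's "repeat 10" block:
for _ in range(10):
    print("hello")

# A loop that uses a condition to end, analogous to "repeat until":
# the learner now has to reason about when the condition becomes true.
steps = 0
while steps < 10:
    print("hello")
    steps = steps + 1
```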
What I find most fascinating about this list is the evidence that CS students older than K-8 do not have all these concepts. One of my favorite papers at Koli Calling last year was It’s like computers speak a different language: Beginning Students’ Conceptions of Computer Science (see ACM DL link here — free downloads through June 30). They interviewed 14 University students about what they thought Computer Science was about. One of the explanations they labeled the “Interpreter.” Here’s an example quote exemplifying this perspective:
It’s like computers speak a different language. That’s how I always imagined it. Because I never understood exactly what was happening. I only saw what was happening. It’s like, for example, two people talking and suddenly one of them makes a somersault and the other doesn’t know why. And then I just learn the language to understand why he did the somersault. And so it was with the computers.
This student finds the behavior of computers difficult to understand. They just do somersaults, and computer science is about coming to understand why they do somersaults? This doesn’t convey to me the belief that outcomes are completely and deterministically specified by the program.
I’ll write in June about Katie Cunningham’s paper to appear next month at the International Conference of the Learning Sciences. The short form is that she asked Data Science students at University to trace through a program. Two students refused, saying that they never traced code. They did not believe that “Step-by-step execution of instructions can help find and fix errors.” And yet, they were successful data science students.
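To show what tracing actually asks of a student, here is a small example of step-by-step execution in Python. The program and its off-by-one bug are my invention, not from Katie’s study:

```python
# Intended: sum the numbers 1 through 4 (expected result: 10).
total = 0
n = 1
while n < 4:   # bug: should be n <= 4
    total = total + n
    n = n + 1

# Tracing step by step exposes the bug:
#   n=1 -> total=1
#   n=2 -> total=3
#   n=3 -> total=6
#   n=4 -> the condition 4 < 4 is False, so 4 is never added
print(total)   # prints 6, not 10: the trace reveals the off-by-one error
```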
You may not agree that these two examples (the Koli paper and Katie’s work) demonstrate that some University students do not have all the early concepts listed above, but that possibility brings us to the question that I’m really interested in: How would we know?
How can we assess whether students have these early concepts in the trajectories for learning programming? Just writing programs isn’t enough.
- How often do we ask students to write the same thing two ways? Do students realize that this is possible?
- Students may realize that programming languages are “finicky” but may not realize that programming is about “precision and completeness.”
- Students re-run programs all the time (most often with no changes to the code in between!), but that’s not the same as seeing value in reproducing a bug to help find and fix it (see the sketch after this list). I have heard many students exclaim, “Okay, that bug went away — let’s turn it in.” (Or maybe that’s just a memory from when I said it as a student…)
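One way to make “reproducing a bug” visible and assessable is to ask students to capture the failure deterministically before fixing it. A minimal Python sketch, with a hypothetical buggy function of my own:

```python
# A hypothetical buggy function: it should average a list of numbers,
# but it crashes on an empty list.
def average(numbers):
    return sum(numbers) / len(numbers)  # ZeroDivisionError when empty

# Step 1: reproduce the bug deterministically, before touching the code.
def test_average_of_empty_list():
    assert average([]) == 0  # calling this today raises ZeroDivisionError

# Step 2: fix only after the failure is reproducible on demand.
def average_fixed(numbers):
    if not numbers:          # guard against the reproduced failure case
        return 0
    return sum(numbers) / len(numbers)

assert average_fixed([]) == 0        # the reproduced bug is now fixed
assert average_fixed([2, 4]) == 3.0  # and the normal case still works
```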
These concepts really get at fundamental issues of transfer and plugged vs unplugged computing education. I bet that if students learn these concepts, they would transfer. They address what Roy Pea called “language-independent bugs” in programming. If a student understands these ideas about the nature of programs and programming, they will likely recognize that those are true in any programming language. That’s a testable hypothesis. Is it even possible to learn these concepts in unplugged forms? Will students believe you about the nature of programs and programming if they never program?
I find questions like these much more interesting than trying to assess computational thinking. We can’t agree on what computational thinking is. We can’t agree on the value of computational thinking. Programming is an important skill, and these are the concepts that lead to success in programming. Let’s figure out how to assess these.
Entry filed under: Uncategorized. Tags: computing education research, CS Unplugged, trajectories, transfer.
1.
Raul Miller | May 25, 2020 at 3:57 pm
It’s probably worth noting, in this context, that it has long been possible to be a successful student without actually mastering a topic. This issue is not unique to computer science.
There’s cheating, of course. But there are also testing and exam processes that fail to adequately exercise the “necessary” skills. And there’s the infamous “glibness” or “parroting” issue, where a student knows how to quote the literature but has an inadequate understanding of the concepts. And there are the limitations of the class materials. And…
Anyways, … I would be a bit cautious about attempts to define what it is that we’re teaching based on anecdotes involving students. Those anecdotes have relevance, but … not without anecdotes also involving people who have been working at least as many years as apprenticeship or journeyman traditions have demanded in other contexts — people whose work has required heavy use of the skills of interest. (Though, there, you might also want to observe their work, because by that point many core concepts will have become habitual rather than “cognitive”.)
(Which brings up another point, of course: the specializations of a specific trade are going to be different in character than an academic education. But this has been characteristic of many fields — engineering, business, art, etc. And the professionals have had much to contribute there, also, over the years.)
2.
Mark Guzdial | May 25, 2020 at 4:21 pm
Raul, I’m missing it — what do you see as an anecdote in what I’m relating?
3.
Raul Miller | May 25, 2020 at 6:02 pm
Well, on re-reading, I see it this way:
You introduced a couple examples (anecdotes) to illustrate “students not getting the concepts”. You then posed some questions, framing issues of “what are we teaching, and how”.
Perhaps best to say that we’re in violent agreement…
4.
Mark Guzdial | May 26, 2020 at 7:26 pm
I gave examples from some qualitative studies. Not the same as anecdotes.
5.
gasstationwithoutpumps | May 26, 2020 at 9:23 pm
There are some who would argue that “qualitative studies” are just collections of anecdotes.
6.
Mark Guzdial | May 27, 2020 at 8:33 am
Yes, there are. I would not.
7.
Raul Miller | May 26, 2020 at 9:25 pm
Hmm…
I can empathize if you felt I was being overly stuffy. I had read your blog entry too quickly and, in retrospect, if I had read more closely I would not have written that comment. But…
In this context, is the distinction between “anecdote” and “example” the right thing to call out? “Anecdote” does not imply “insignificant.”
8.
Mark Guzdial | May 27, 2020 at 8:40 am
I’m sure that I’m being overly sensitive. Much of my work these days is qualitative, using novel methods. For example, in March, we had a participatory design session with 20 teachers where they used three different data visualization tools. We have observation notes from the session, pre- and post-surveys, reflective paragraphs from each teacher, and then their final projects. We’re trying to gain insights about what the teachers preferred, what they actually used (in their final project), how they thought about the tools (both before seeing them and after), and how we can design our tools better. Any quote or moment from any of these data sources might be called an anecdote, but that feels trite. In reality, we have so much data that we can only share (in publications) bits of it, but what we choose to share is based on a careful analysis of the entire dataset. We’re not doing hypothesis testing. We can’t make strong, generalizable claims. We’re engaged in a design process, and what we are learning about what works and what doesn’t might be useful to others. It’s a different kind of process than I’ve used previously: a mix of science, ethnography, and engineering.
9.
gasstationwithoutpumps | May 27, 2020 at 2:54 pm
User studies like that are valuable for getting insight and generating testable hypotheses, but they are rarely generalizable (and in the education literature they are almost always misinterpreted and generalized far beyond what the data support). The small sample sizes and extensive interpretation of the data make them sound more like anecdotes than anything else.
Such qualitative studies are also very prone to interviewer bias and confirmation bias, which makes them suspect as sources, even when great efforts are made to avoid those biases.
Qualitative research is valuable, but it will always be regarded with some suspicion, because of the difficulty of doing it well and the ease with which researchers can fool themselves.
10.
Mark Guzdial | May 27, 2020 at 4:00 pm
Sure, but look at it from an engineering and design perspective. I’m working with social studies teachers who are trying to build data visualizations with their history students. If I wanted to do a hypothesis test, I might compare CODAP with Vega-Lite with SciPy, and I might show that CODAP is far more likely to be used by younger children and SciPy leads to higher quality visualizations. But that totally misses the point that teachers think that these tools mostly suck, and none of the teachers would actually use any of them in their real classrooms. The tools are either too hard to use or too hard to fit to standards. So what if we wanted to build a tool that they’d actually adopt and use? That’s the problem I’m addressing. This isn’t about generalized knowledge. It’s more about requirements development.
Once we have a tool, sure, we can compare it to something else (for learning, for usability, for adoptability, for whatever), do ANOVAs, and claim generalized knowledge. I can answer quantitatively whether X is better than Y. I can’t use quantitative analysis to help me design Z.
11.
gasstationwithoutpumps | May 27, 2020 at 9:16 pm
Agreed. User studies are an essential part of design, but they are often uninterpretable away from the design project that they are part of.
12.
Mark Guzdial | May 29, 2020 at 7:29 am
And yet there are whole journals like “Design Studies” and conferences like CHI. Perhaps you’re not reading the better user studies.
13.
Pravin Vaz | May 25, 2020 at 6:53 pm
Thanks for this writeup! Very valuable points about getting teachers to ask students to think of “two solutions” to the same problem and to try to get the bug “back in” while testing.
14.
Tim Bell | May 26, 2020 at 5:46 pm
Thanks for the article – it’s really important to step back and ask what we’re really trying to teach.
To address the question “Is it even possible to learn these concepts in unplugged forms?”… Yes, but you can’t beat doing programming on a device for getting lots of experience with these concepts.
(I think there’s a misconception that Unplugged approaches are intended as a substitute for programming, and for some teachers it’s tempting to do that if they don’t feel they can teach programming; but Unplugged is better understood as a potential gateway to programming, as opposed to programming being a gateway to computer science.)
The “yes” part of the answer can be found, for example, in the Kidbots activity (e.g. https://csunplugged.org/en/topics/kidbots/unit-plan/sending-a-rocket-to-mars/), which is a simple turtle-on-a-grid environment. Although the initial challenge is just to get the “turtle” to a square on a grid, the follow-up questions are the key: What’s another way to do it? Can each student find a different way? How many other ways are there? (Infinite, which is an aha moment for some.) How many wrong “programs” are there? (Also infinite – what?!) Can you do it with only forward and left instructions? (I.e., what is a “complete” language?)
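As a rough model of what Kidbots asks students to do, here is a tiny turtle-on-a-grid sketch in Python (a toy version of the activity, not the CS Unplugged materials), with two different programs that reach the same square:

```python
# A toy model of a Kidbots-style grid: the "turtle" starts at (0, 0)
# facing north and understands only three instructions.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # north, east, south, west

def run(program, x=0, y=0, heading=0):
    for step in program:
        if step == "forward":
            dx, dy = HEADINGS[heading]
            x, y = x + dx, y + dy
        elif step == "left":
            heading = (heading - 1) % 4
        elif step == "right":
            heading = (heading + 1) % 4
    return (x, y)

# Two different programs that land the turtle on the same square, (2, 0):
program_a = ["right", "forward", "forward"]
program_b = ["left", "left", "left", "forward", "forward"]
assert run(program_a) == run(program_b) == (2, 0)

# "Can you do it with only forward and left?" Yes: three lefts make a
# right turn, so even the two-instruction language is still complete.
```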
We’re starting to see evidence that unplugged-first can improve self-efficacy without increasing teaching time or decreasing programming skill – although I wouldn’t advocate that it’s the only way to get students engaged. But since you asked “How often do we ask students to write the same thing two ways?” – the answer for me is “Always – for beginners at least.” Of course, after an unplugged experience, get them into a programming environment where they are now prepared so that they can see the merciless behaviour of a computer to be about “precision and completeness” rather than being “finicky”.
15.
The Bigger Part of Computing Education is outside of Engineering Education | Computing Education Research Blog | April 26, 2021 at 7:00 am
[…] start on the computer science learning trajectories and discover if they want to learn […]