Instructional Design Principles Improve Learning about Computing: Making Measurable Progress
June 5, 2012 at 7:30 am 23 comments
I have been eager to write this blog post for months, but wanted to wait until both of the papers had been reviewed and accepted for publication. Now “Subgoals Improve Performance in Computer Programming Construction Tasks” by Lauren Margulieux, Richard Catrambone, and Mark Guzdial has been accepted to the educational psychology conference EARLI SIG 6 & 7, and “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Mobile Application Development” by the same authors has been accepted into ICER 2012.
Richard Catrambone has developed a subgoal model of learning. The idea is to express instructions with explicit subgoals (“Here’s what you’re trying to achieve in the next three steps”); doing so helps students develop a mental model of the process. He has shown that using subgoals in instruction can improve learning and transfer in domains like statistics. Will it work with CS? That’s what his student Lauren set out to find out.
She took a video that Barb had created to help teachers learn how to build apps with App Inventor, and defined a set of subgoals that she felt captured the mental model of the process. She then ran 40 undergraduates through the study, half receiving subgoal-labeled instruction and half not:
In the first session, participants completed a demographic questionnaire, and then they had 40 minutes to study the first app’s instructional material. Next, participants had 15 minutes to complete the first assessment task. In the second session, participants had 10 minutes to complete the second assessment task, which measured their retention. Then participants had 25 minutes to study the second app’s instructional material followed by 25 minutes to complete the third assessment.
An example assessment task:
Write the steps you would take to make the screen change colors depending on the orientation of the phone; specifically, the screen turns blue when the pitch is greater than 2 (hint: you’ll need to make an orientation sensor and use blocks from “Screen 1” in My Blocks).
Here’s an example screenshot from one of Barb’s original videos, which is what the non-subgoal group would see:
This group would get text-based instruction that looked like this:
- Click on “My Blocks” to see the blocks for components you created.
- Click on “clap” and drag out a when clap.Touched block
- Click on “clapSound” and drag out call clapSound.Play and connect it after when clap.Touched
The subgoal group would get a video that looks like this:
That’s it — a callout would appear for a few seconds to remind them of what subgoal they were on. Their text instructions looked a bit different:
Handle Events from My Blocks
- Click on “My Blocks” to see the blocks for components you created.
- Click on “clap” and drag out a when clap.Touched block
Set Output from My Blocks
- Click on “clapSound” and drag out call clapSound.Play and connect it after when clap.Touched
You’ll notice other educational psychology themes in here. We give them instructional material with a complete worked example. By calling out the mental model of the process explicitly, we reduce cognitive load associated with figuring out a mental model for themselves. (When you tell students to develop something, but don’t tell them how, you are making it harder for them.)
Here’s a quote from one of the ICER 2012 reviewers (who recommended rejecting the paper):
“From Figure 1, it seems that the “treatment” is close to trivial: writing headings every few lines. This is like saying that if you divide up a program into sections with a comment preceding each section or each section implemented as a method, then it is easier to recall the structure.”
Yes. Exactly. That’s the point. But this “trivial” treatment really made a difference!
- The subgoal group attempted more parts (subgoals) of the assessment tasks, completed more of them successfully, and did so faster — and all three differences (subgoals attempted, subgoals completed successfully, and time) were statistically significant.
- The subgoal group also successfully completed more of a retention task one week later (which wasn’t the exact same task — they had to transfer knowledge), again a statistically significant difference.
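As a rough illustration of what lies behind claims like “statistically significant,” here is a minimal sketch of a between-groups comparison: Welch’s t statistic for the group difference, plus Cohen’s d as an effect size. The scores below are made up for illustration; they are not the study’s actual data.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                        # squared standard error of the difference
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

def cohens_d(a, b):
    """Effect size: standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Hypothetical scores (subgoals completed out of 10), NOT the study's data:
subgoal    = [8, 9, 7, 8, 10, 9, 8, 7, 9, 8]
no_subgoal = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]

t, df = welch_t(subgoal, no_subgoal)
d = cohens_d(subgoal, no_subgoal)
print(f"t = {t:.2f}, df = {df:.1f}, Cohen's d = {d:.2f}")
```

Reporting the effect size alongside the significance test gives a sense of how big the group difference is, not just whether it is reliably nonzero.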
But did the students really learn the mental model communicated by the subgoal labels, or did chunking the material into subgoals just make it easier to read and parse? Lauren ran a second experiment with 12 undergraduates, asking students to “talk aloud” while they did the task. The groups in the second experiment were too small to show the same learning benefits, but all the trends were in the same direction. The subgoal group was still outperforming the non-subgoal group, and what’s more, they talked in subgoals! I find it amazing that she got these results from just one-hour sessions. In one hour, Lauren’s video taught undergraduate students how to get something done in App Inventor, and they could remember and do something new with that knowledge a week later — better than a comparable group of Georgia Tech undergraduates seeing the SAME videos (with only callout differences) doing the SAME tasks. That is efficient learning.
Here’s a version of a challenge that I have made previously: Show me pedagogical techniques in computing education that have statistically significant impacts on performance, speed, and retention, and lead to developing a mental model of (even part of) a software development process. What’s in our toolkit? Where is our measurable progress? The CMU Cognitive Tutors count, but they were developed 20-30 years ago and (unfortunately) are not part of our CS education toolkit today. Alice and Scratch are tools — they are what to teach, not how to teach. Most of our strong results (like Pair Programming, Caspersen’s STREAMS, and Media Computation) are about changing practice in whole courses, mostly for undergraduates, over several weeks. Designing instruction around subgoals in order to communicate a mental model is a small, “trivial” tweak that anyone can use no matter what they are teaching, with significant wins in quality and efficiency. Instructional design principles could be used to make undergraduate courses better, but they’re even more critical when teaching adults, working professionals, and high school teachers who have very little time. We need to re-think how we teach computing to cater to these new audiences. Lauren is showing us how to do that.
One of the Ed Psych reviewers wrote, “Does not break new ground theoretically, but provides additional evidence for existing theory using new tasks.” Yes. Exactly. This is no new invention from an instructional design perspective. It is simply mapping things that Richard has been doing for years into a computer science domain, into “new tasks.” And it was successful.
Lauren is working with us this summer, and we will be trying it with high school teachers. Will it work the same as with GT undergraduates? I’m excited by these results — we’re already showing that the CSLearning4U approach of simply picking the low-hanging fruit from educational psychology can have a big impact on computing education quality and efficiency.
(NSF CE21 funds CSLearning4U. Lauren’s work was supported by a Georgia Tech GVU/IPaT research grant. All the claims and opinions here are mine, not necessarily those of any of the funders.)
Entry filed under: Uncategorized. Tags: computing education, computing education research, CSLearning4U, ebooks, educational psychology, instructional design, videos.
1.
Daniel Hickey | June 5, 2012 at 11:51 am
Nice write-up Mark, and a nice family collaboration. From my perspective, of course breaking a complex procedure down into its elements and providing feedback increased retention of that procedural knowledge. This is the same finding as all of the hundreds of studies of cognitive load. Showing that students learn specific procedures better when they are taught specific procedures is really just a test of educational malpractice.
The long-standing issue is whether breaking complex conceptual knowledge down into specific procedures obscures or even eliminates the conceptual knowledge that is necessary to know when and how to use those procedures. I don’t see any explicit consideration of that concern in your description of your assessment measures, so I suspect you did not study that. And I do not know enough about your knowledge goals to speculate very much. It is pretty straightforward to assess this kind of knowledge, but still more complicated than the procedural knowledge, so many don’t. But there is a good chance that the advantage would be diminished or even reversed, because the conceptual knowledge is sometimes best constructed in the very exploration and puzzlement that gets dismissed as “cognitive load.”
This issue was nicely articulated in the response that Cindy Hmelo published in Educational Psychologist after Paul Kirschner and others laid out the literature supporting the position you have articulated. This debate is ultimately intractable because the two positions take very different views of how “higher-order” knowledge is constructed.
Lately I have been thinking about this whole thing very differently. If we treat both the procedures and the concepts as special cases of situated contextualized practices, we find a workaround. Our real concern should be with the contextual knowledge needed to appreciate the consequences of using the particular procedure or concept in the particular context. If we focus primarily on the contextual knowledge, we can objectively assess which way of representing that knowledge delivers more knowledge of both procedures and concepts.
2.
Mark Guzdial | June 5, 2012 at 2:49 pm
I suspect that the relation between procedure (programming, in whatever modality and notation) and concepts in computer science is closer to the relationship between equations and concepts in mathematics, than it is in physical sciences or engineering. When we can point to physical situations and contexts, we can grasp procedures and concepts in multiple ways, and we can contextualize them even in everyday practice (e.g., understanding why heat travels along a metal spoon but not a wooden one). But using a computer (e.g., Office) gives us no insight into the workings of the machine. Like mathematics, computer science is mostly invisible and contextualized mostly within the task of problem-solving.
While the very best mathematicians can think about mathematical concepts apart from equations (Einstein is said to have had a physical sensation of mathematics), most mathematics students get to the concepts through the notations. I argued here recently that it is very difficult for students to come to an understanding of the capabilities of the computer and the need for algorithms without a notation that represents those capabilities (and limitations). I believe that the concepts of computer science are most easily expressed and best understood in terms of a notation for a procedure.
Lauren’s study consisted of two one-hour sessions. I wouldn’t predict much conceptual knowledge in that short a period, especially in an “invisible” field like computer science. However, I believe that learning the procedures and notations gives us a handle on the concepts — it gives us a notation for expressing the capabilities and limitations of the computer, so that deeper ideas (like what an algorithm is and what it can do) can be made explicit and learned.
Bottom line: I don’t disagree with your characterization, but I suspect that there’s an optimal path in learning computing that requires some procedural learning before the conceptual learning can progress.
3.
A course in course design :) | Writerly Goodness | June 11, 2012 at 9:01 pm
[…] Instructional Design Principles Improve Learning about Computing: Making Measurable Progress (computinged.wordpress.com) […]
4.
Math teachers critiquing Khan Academy math videos « Computing Education Blog | July 13, 2012 at 2:04 am
[…] is a great point, and it’s the same one that we’re trying to make with Lauren’s paper at ICER 2012. Instructional design matters! Educational psychologists do know how to make learning better. […]
5.
ICER2012 Preview: Adapting the Disciplinary Commons for High School CS Teachers « Computing Education Blog | August 17, 2012 at 8:18 am
[…] already talked about Lauren’s paper on using subgoal analysis to improve instruction about App Inventor, which I’ve made available here. Here […]
6.
Heading Down Under for ICER 2012: 4-13 September 2012 « Computing Education Blog | September 3, 2012 at 9:51 am
[…] Education Research (ICER) 2012 conference which will be 10-11 September. I will be presenting Lauren’s work on subgoal-based instruction in CS, Barbara will be presenting our statewide survey work, Briana Morrison will present the […]
7.
Brief Trip Report on ICER 2012: Answering the global needs for computing education research « Computing Education Blog | September 18, 2012 at 11:44 am
[…] Lauren’s subgoal paper drew some oohs when I showed the results, a few shakes of heads (I don’t think everyone believed it), and some challenging questions. “Why aren’t you using this in your intro classes?” asked one questioner. “Or your advanced classes?” asked another. Yup. Good questions. […]
8.
Seeking K-12 teachers for study on learning App Inventor « Computing Education Blog | October 4, 2012 at 6:49 am
[…] trying to replicate Lauren’s video study (from ICER 2012) with K-12 teachers, but we’re not getting enough to fill our groups. We now have permission to […]
9.
The Bigger Issues in Learning to Code: Culture and Pedagogy « Computing Education Blog | December 21, 2012 at 8:48 am
[…] in my classrooms, and try out worked examples in various ways. In our research, we use subgoal labels to improve our instructional materials. These things really […]
10.
What is the current state of high school computer science professional development? The results of the UChicago Landscape Study « Computing Education Blog | January 14, 2013 at 8:00 am
[…] in Computing (BPC-A, like ECEP), Computing Education in the 21st Century (CE21, like our CSLearning4U project), and all the funded projects related to CS10K, sponsored by […]
11.
Russell Duhon | February 25, 2013 at 7:05 pm
Sounds like interesting results, but I’d be interested in hearing about the effect size (and ideally seeing some graphics so I get a real sense of distribution). I’m a bit confused by the focus on statistical significance: this is social science, and nigh-every intervention will have at least a small effect, so all that’s required to guarantee statistical significance is a large enough sample size. Estimates of actual significance — effect size in the context of the domain — are what’s important.
12.
Mark Guzdial | February 25, 2013 at 7:36 pm
n=20 in each group. Some of the graphs were in a later post.
13.
Try an Hour of Code in an Ebook for #CSEdWeek | Computing Education Blog | December 9, 2013 at 1:01 pm
[…] We’ve been exploring ideas like how best to create videos about computer science (hint: use subgoal labels!) and how to reduce cognitive load (hint: Parson’s problems). We’re also working on […]
14.
Big Data vs. Ed Psychology: Work harder vs. work smarter | Computing Education Blog | January 31, 2014 at 1:35 am
[…] Like our work on subgoal labeling, […]
15.
Learnersourcing subgoal labeling to support learning from how-to videos | Computing Education Blog | February 12, 2014 at 1:11 am
[…] a cool idea! Rob Moore is building on the subgoal labeling work that we (read: “Lauren”) did, and is using crowd-sourcing techniques to generate the […]
16.
Important paper at SIGCSE 2015: Transferring Skills at Solving Word Problems from Computing to Algebra Through Bootstrap | Computing Education Blog | May 11, 2015 at 7:44 am
[…] that students were really saying subgoal labels to themselves when transferring knowledge (see subgoal labeling post). When Pea & Kurland looked for transfer, they found that students didn’t really learn […]
17.
How to Learn Computer Programming Efficiently through Computer Games: Michael Lee and Gidget | Computing Education Blog | July 29, 2015 at 7:32 am
[…] Gidget and CodeAcademy are statistically equivalent for learning, and both blow away the constructionist option. A designed curriculum beats a discovery-based learning opportunity. That’s interesting but not too surprising. Here’s the wild part: The Gidget users spend 1/2 as much time. Same learning, half as much time. I would not have predicted this, that Mike’s game is actually more efficient for learning about CS than is a tutorial. I’ve argued that learning efficiency is super important especially for high school teachers (see post here). […]
18.
ICER 2015 Preview: Subgoal Labeling Works for Text, Too | Computing Education Blog | August 7, 2015 at 7:40 am
[…] Briana Morrison is presenting the next stage of our work on subgoal labeled worked examples, with Lauren Margulieux. Their paper is “Subgoals, Context, and Worked Examples in Learning Computing Problem Solving.” As you may recall, Lauren did a terrific set of studies (presented at ICER 2012) showing how adding subgoal labels to videos of App Inventor worked examples had a huge effect on learning, retention, and transfer (see my blog post on this work here). […]
19.
SIGCSE 2016 Preview: Parsons Problems and Subgoal Labeling, and Improving Female Pass Rates on the AP CS exam | Computing Education Blog | February 29, 2016 at 7:56 am
[…] showing how subgoal labels improved learning, retention and transfer in learning App Inventor (see summary here), the 2015 ICER Chairs Paper Award-winning paper from Briana and Lauren showing that subgoals work […]
20.
Optimizing Learning with Subgoal Labeling: Lauren Margulieux Defends her Dissertation | Computing Education Blog | March 29, 2016 at 9:41 pm
[…] ICER 2012 (see post here), her paper showed that subgoal labels inserted into App Inventor videos led to improved learning, […]
21.
Belief in the Geek Gene may be driven by Economics and Educational Inefficiency, plus using blocks to cross language boundaries | Computing Education Blog | June 5, 2017 at 7:00 am
[…] insight gave me a whole new reason for doing our work in efficient CS education, like the greater efficiency in using subgoal-based instruction. The work of Paul Kirschner and Mike Lee & Andy Ko also emphasizes more CS learning in less […]
22.
How CS differs from other STEM Disciplines: Varying effects of subgoal labeled expository text in programming, chemistry, and statistics | Computing Education Research Blog | March 16, 2018 at 7:00 am
[…] you might recall, was a student of Richard’s who applied subgoal labeling to programming (see the post about her original ICER paper) and worked with Briana Morrison on several experiments that applied subgoal labeling to textual […]
23.
Proposal #1 to Change CS Education to Reduce Inequity: Teach computer science to advantage the students with less computing background | Computing Education Research Blog | July 20, 2020 at 7:00 am
[…] (see Wikipedia page). Even our first experiment with subgoal labelling for CS worked examples (see post here) has shown improvements in learning (measured immediately after instruction), retention (measured a […]