Optimizing Learning with Subgoal Labeling: Lauren Margulieux Defends her Dissertation
Lauren Margulieux successfully defended her dissertation, Using Subgoal Learning and Self-Explanation to Improve Programming Education, in March. Lauren has been exploring subgoal labeling for improving programming education in a series of fascinating and influential papers. Subgoal labels are inserted into the steps of a worked example to explain the purpose of a set of steps.
- At ICER 2012 (see post here), her paper showed that subgoal labels inserted into App Inventor videos led to improved learning, retention (a week later), and even transfer to new App building problems, all compared to the exact same videos without the subgoal labels. This paper was cited by Rob Moore and his students at MIT in their work developing crowdsourced subgoal labels for videos (see post here).
- At ICER 2015 (see post here), Lauren and Briana Morrison showed that subgoal labels also improved learning for textual programming languages, but the high cognitive load of textual programming languages made some forms of subgoal labeling less successful than studies in other disciplines would predict. That paper won the Chairs' Award at ICER.
- At SIGCSE 2016 (see post here), Briana presented a paper with Lauren where they showed that subgoal labeling also improved performance on Parsons Problems.
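To make the idea concrete, here is a hypothetical worked example in Python with subgoal labels written as comments above the groups of steps they describe. (The task and the labels are my illustration, not from Lauren's materials, which used App Inventor and textual languages in her studies.)

```python
# Worked example: compute the average of a list of exam scores.
# Each "Subgoal:" comment is a subgoal label explaining the purpose
# of the group of steps beneath it.

def average_score(scores):
    # Subgoal: Initialize the accumulator
    total = 0

    # Subgoal: Combine all the scores
    for s in scores:
        total += s

    # Subgoal: Compute and return the result
    return total / len(scores)

print(average_score([80, 90, 100]))  # prints 90.0
```

In the passive condition described below, students would see the labels filled in as above; in the active and constructive conditions, the comment lines would be blanks to fill in or spaces to write labels.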
In her dissertation work, Lauren returned to the challenges of the ICER 2015 paper: Can we make subgoal labeling even more successful? She went back to using App Inventor to reduce the cognitive load of teaching a textual language.
She compared three different ways of using subgoal labeling.
- In the passive condition, students were just given subgoal labels like in her first experiments.
- In the active condition, students were given a list of subgoal labels. The worked example was segmented into sets of steps that achieved a subgoal, but the label was left blank. Students had to pick the right subgoal label for each blank.
- In the constructive condition, students were just given a blank and asked to generate a subgoal label. She had two kinds of constructive conditions. One was "guided" in that there were blanks above sets of steps. The other was "unguided": just a long list of steps, and she asked students to write labels into the margins.
Lauren was building on a theory that predicted that the constructive condition would have the best learning, but would also be the hardest. She provided two scaffolds.
- For the conditions where it made sense (i.e., not the passive condition), she provided feedback: half the participants were shown the same worked examples with the experimenter's labels filled in.
- For half the constructive participants, the label wasn't blank. Instead, there was a hint: all the steps that achieved the same subgoal were labeled "Label 1," all the steps that achieved a different subgoal were labeled "Label 2," and so on.
Here’s the big “interesting/surprising” graph from her dissertation.
As predicted, constructive was better than active or passive. What's interesting is that the very best performance came from guided constructive either without hints but with feedback, or with hints but without feedback. Now that's weird. Why would having more support (both hints and feedback) lead to worse performance?
There are several possible hypotheses for these results, and Lauren pursued one of these one step further. Maybe students developed their own cognitive model when they constructed their own labels with hints, and seeing the feedback (experimenter’s labels) created some kind of dissonance or conflict. Without hints, maybe the feedback helped them make sense of the worked example.
Lauren ran one more experiment where she contrasted getting scaffolding with the experimenter’s labels versus getting scaffolding with the student’s labels (put in all the right places in the worked example). Students who were scaffolded with their own labels performed better on later problem solving than those who were scaffolded with experimenter labels. Students scaffolded with experimenter labels did not perform better than those who did not receive any scaffolding at all. Her results support this hypothesis — the experimenter’s labels can get in the way of the understanding that the students are building.
There are several implications from Lauren's dissertation. One is that we can do even better than just giving students labels: having students write the labels themselves is even better for learning. Feedback isn't the most critical part of learning with subgoal labeling, which is surprising and fascinating. Constructive subgoal labeling lends itself to an online implementation, which is the direction that Lauren is explicitly exploring: How do we build effective programming education online?
Lauren has accepted an Assistant Professor position in the Learning Technologies Division at Georgia State University. I’m so glad for her, and even happier that she’s nearby so that we can continue collaborating!