ICER 2015 Preview: Subgoal Labeling Works for Text, Too
August 7, 2015 at 7:40 am 8 comments
Briana Morrison is presenting the next stage of our work on subgoal labeled worked examples, with Lauren Margulieux. Their paper is “Subgoals, Context, and Worked Examples in Learning Computing Problem Solving.” As you may recall, Lauren did a terrific set of studies (presented at ICER 2012) showing how adding subgoal labels to videos of App Inventor worked examples had a huge effect on learning, retention, and transfer (see my blog post on this work here).
Briana and Lauren are now teaming up to explore new directions in both educational psychology and computing education research.
- In the educational psychology space, they're asking, "What if you make the students generate the subgoal labels?" Past research has found that generating the subgoal labels, rather than simply being given them, is harder on students but leads to more learning.
- They're also exploring what happens when the example and the practice come from the same or different contexts (where "context" here means the cover story or word-problem story). For example, we might show people how to average test grades, but then ask them to average golf scores; that's a shift in context.
- In the computing education research space, Briana created subgoal labeled examples for a C-like pseudocode (see the sketch after this list).
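To make the setup concrete, here is a minimal sketch of what a subgoal-labeled worked example and a context-shifted practice problem might look like. I'm writing it in C rather than reproducing the paper's actual pseudocode, and the specific labels and the golf-score variant are my own illustration, not taken from the study materials.

```c
#include <stdio.h>

/* Worked example: average a set of test grades.
 * The comments marked "SUBGOAL:" play the role of subgoal labels:
 * they name the purpose of each group of steps rather than restating
 * the code. (These particular labels are illustrative only.) */
double average_test_grades(const double grades[], int count)
{
    /* SUBGOAL: Initialize the accumulator */
    double total = 0.0;

    /* SUBGOAL: Combine all the values */
    for (int i = 0; i < count; i++) {
        total += grades[i];
    }

    /* SUBGOAL: Compute and return the result */
    return total / count;
}

/* Practice problem in a different context (golf scores): the cover
 * story changes, but the subgoal structure is identical. This is the
 * kind of context shift the study manipulates. */
double average_golf_scores(const double scores[], int rounds)
{
    /* SUBGOAL: Initialize the accumulator */
    double total = 0.0;

    /* SUBGOAL: Combine all the values */
    for (int i = 0; i < rounds; i++) {
        total += scores[i];
    }

    /* SUBGOAL: Compute and return the result */
    return total / rounds;
}

int main(void)
{
    double grades[] = {88.0, 92.0, 75.0};
    double scores[] = {72.0, 80.0, 77.0, 74.0};
    printf("Average grade: %.1f\n", average_test_grades(grades, 3));
    printf("Average score: %.1f\n", average_golf_scores(scores, 4));
    return 0;
}
```

The point of the labels is that they abstract over the surface details: a learner who internalizes "initialize, combine, compute the result" can carry that structure from test grades to golf scores, even though the variable names and cover story change.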
One of the important findings is that they replicated the earlier study, but now in a text-based language rather than a blocks-based language. On average, subgoal labels on worked examples improve performance over getting the same worked examples without subgoal labels. That’s the easy message.
The rest of the results are much more puzzling. Participants who stayed in the same context (e.g., seeing test scores averaged in the worked examples, then being asked to average test scores in the practice) did statistically worse than those who had to shift contexts (e.g., from test scores to golf scores). Why might that be?
Generating labels did seem to help performance, but the Generate group also had the highest attrition. That makes sense: increased complexity and cognitive load would predict that more participants would give up. But that drop-out rate makes it hard to make strong claims. We're now comparing everyone in the other groups to only "those who gut it out" in the Generate group, which makes the results more suspect.
Briana's paper offers more nuance and deeper explanations than I'm providing here. I find this paper exciting. We have an example here of well-established educational psychology principles not quite working as you might expect in computer science. I don't think it calls the principles into question. It suggests to me that there may be some unique learning challenges in computer science: for example, if the complexity of computer science is greater than in the tasks used in other studies, then it's easier for us to reach cognitive overload. Briana's line of research may help us understand how learning computing is different from learning statistics or physics.
Entry filed under: Uncategorized. Tags: cognitive load, educational psychology, learning sciences.
1.
Alisha Adrian | August 7, 2015 at 9:05 am
Perhaps a shift in context triggers an increase in attention?
2.
Tom Krawczewicz | August 7, 2015 at 1:54 pm
I think Alisha is correct. Dan Willingham states that “we remember what we think about” and changing the context forces them to think about it a little more than if they are given the same thing again. In one they can “copy” what they did before (requiring very little thought) and in the other they face a challenge that forces them to think a little (but not too much) while using the existing example as reference (not total reliance).
I suppose this is also what causes the larger attrition in the generating-labels group. If you don't understand the underlying concepts, it's difficult to generate labels. Maybe these can be developed in the same way: give labels initially and have students generate similar labels in a different context.
Thanks for sharing these results. This will be great help as I prepare to start another school year in a couple of weeks!
3.
ICER 2015 Report: Blocks win–Programming Language Design == UI Design | Computing Education Blog | August 17, 2015 at 7:28 am
[…] Morrison receiving the Chairs Award (one of two best paper awards at ICER) for the paper that I blogged about here. Below is the whole GT contingent at ICER (including chair Brian Dorn, GT […]
4.
Cognitive Load as a Significant Problem in Learning Programming: Briana Morrison’s Dissertation Proposal | Computing Education Blog | November 11, 2015 at 8:49 am
[…] chapter of her work is based on her ICER 2015 paper that won the Chairs Award for best paper (see post here). Good luck, […]
5.
SIGCSE 2016 Preview: Parsons Problems and Subgoal Labeling, and Improving Female Pass Rates on the AP CS exam | Computing Education Blog | February 29, 2016 at 7:56 am
[…] Paper Award-winning paper from Briana and Lauren showing that subgoals work for text languages (see this post for summary), and Briana’s recent dissertation proposal where she explores the cognitive load […]
6.
Optimizing Learning with Subgoal Labeling: Lauren Margulieux Defends her Dissertation | Computing Education Blog | March 29, 2016 at 9:42 pm
[…] ICER 2015 (see post here), Lauren and Briana Morrison showed that subgoal labels also improved learning for textual […]
7.
Growing Computing Education Research to Critical Mass at UNO and UCSD | Computing Education Blog | June 3, 2016 at 7:05 am
[…] at Omaha: I knew that my PhD student, Briana Morrison (dissertation proposal is described here, and her award-winning ICER paper is described here) was joining (my former student) Brian Dorn (here’s a post on his dissertation) in the CS […]
8.
Graduating Dr. Briana Morrison: Posing New Puzzles for Computing Education Research | Computing Education Blog | December 16, 2016 at 7:00 am
[…] One of her bigger wins for her dissertation was showing that subgoal labels work for text languages too (ICER 2015). […]