Graduating Dr. Briana Morrison: Posing New Puzzles for Computing Education Research
December 16, 2016 at 7:00 am · 8 comments
I am posting this on the day that I am honored to "hood" Dr. Briana Morrison. "Hooding" is the ceremony in which doctoral candidates are given the academic regalia indicating their doctoral degree. It's one of those ancient parts of academia that I find really cool. I like the way that Wikiversity describes it: "The Hooding Ceremony is symbolic of passing the guard from one generation of doctors to the next generation of doctors."
I’ve written about Briana’s work a lot over the years here:
- Her proposal is described here, “Cognitive Load as a significant problem in Learning Programming.”
- Her first major dissertation accomplishment was developing (with Dr. Brian Dorn) a measurement instrument for cognitive load.
- One of her bigger wins for her dissertation was showing that subgoal labels work for text languages too (ICER 2015).
- Another really significant result was showing that Parson’s Problems were a more sensitive measure of learning than asking students to write code in an assessment, and that subgoal labels make Parson’s Problems better, too.
- She worked a lot with Lauren Margulieux, so many of the links I listed when Dr. Margulieux defended are also relevant for Dr. Morrison.
- At ICER 2016, she presented a replication study of her first given vs. generated subgoals study.
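For readers who haven't run into them, a Parson's Problem gives students the correct lines of code in scrambled order and asks them to arrange the lines, rather than write the code from scratch. Here's a minimal sketch of the idea in Python; the task and the lines are my own illustration, not items from Briana's instruments.

```python
# A hypothetical Parson's Problem: the learner is given these lines in
# scrambled order and must drag them into a working order (indentation
# shows nesting, as in many Parsons tools). No code is written from scratch.
scrambled = [
    "    total = total + tip",
    "print(total / len(tips))",
    "total = 0",
    "for tip in tips:",
]

# One correct ordering of the same lines:
solution = [
    "total = 0",
    "for tip in tips:",
    "    total = total + tip",
    "print(total / len(tips))",
]
```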
But what I find most interesting about Briana's dissertation work are the things that didn't work:
- She tried to show a difference between getting programming instruction via audio and getting it via text. She didn't find one. The research on modality effects suggested that she would.
- She tried to show a difference between loop-and-a-half and exit-in-the-middle WHILE loops (see the sketch after this list for the two structures). Previous studies had found one. She did not.
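For readers who haven't seen these two structures side by side, here's a minimal sketch in Python (my own illustration; the study's language and exact materials may differ). I'm reading "loop-and-a-half" as the version with the test at the top and a duplicated priming read, and "exit-in-the-middle" as the WHILE-TRUE-with-break version; the dissertation's own definitions are the authority here.

```python
# Two ways to sum scores until a negative sentinel value is entered.
# (Illustrative only; not the materials from the study.)

def sum_scores_test_at_top():
    # Test at the top with a duplicated ("priming") read: the read appears
    # once before the loop and again at the bottom of the body.
    total = 0
    score = int(input("score? "))
    while score >= 0:
        total += score
        score = int(input("score? "))
    return total

def sum_scores_exit_in_middle():
    # Single read inside an infinite loop, with the exit test placed
    # in the middle of the loop body.
    total = 0
    while True:
        score = int(input("score? "))
        if score < 0:
            break
        total += score
    return total
```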
These kinds of results are so cool to me, because they point out what we don’t know about computing education yet. The prior results and theory were really clear. The study was well-designed and vetted by her committee. The results were contrary to what we expected. WHAT HAPPENED?!? It’s for the next group of researchers to try to figure out.
The most interesting result of that kind in Briana’s dissertation is one that I’ve written about before, but I’d like to pull it all together here because I think that there are some interesting implications of it. To me, this is a Rainfall Problem kind of question.
Here’s the experimental set-up. We’ve got six groups.
- All groups are learning with pairs of a worked example (a completely worked-out piece of code) and then a practice problem (maybe a Parson's Problem, maybe writing some code). We'll call these WE-P pairs (Worked Example-Practice). Now, some WE-P pairs have the same context (think of it as the story of a story problem), and some have different contexts. In the same context, both the worked example and the practice might ask you to compute the average of several days of tips as a barista. In a different context, you might compute average tips in the worked example, but compute the average test score in the practice. In general, we predict that different contexts will be harder for the student than having everything the same.
- So we've got same context vs. different context as one variable we're manipulating. The other variable is whether the participants get the worked example with NO subgoal labels, with GIVEN subgoal labels, or whether the participant has to GENERATE the subgoal labels. Think of a subgoal label as a comment that explains some code, but it's the same comment that will appear in several different programs. It's meant to encourage the student to abstract the meaning of the code.
In the GENERATE condition, the participants get blanks instead of labels, to encourage them to do the abstraction for themselves. Based on subgoal label research in other parts of STEM, we'd typically expect GENERATE to lead to more learning than GIVEN labels, but it's also harder; we might get cognitive overload.
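To make the materials concrete, here's a sketch of what a worked example with GIVEN subgoal labels might look like, using the barista-tips story from above. This is my own illustration in Python; the study's actual materials, language, and label wording may differ.

```python
# Worked example (GIVEN subgoal labels): average the tips earned over
# several days as a barista. Each subgoal label is a comment that would
# reappear, word for word, in other programs sharing the same subgoals.

tips = [31.50, 42.75, 28.00, 35.25]   # one entry per day

# Subgoal: Initialize the accumulator variables
total = 0.0
count = 0

# Subgoal: Loop through the data and update the accumulators
for tip in tips:
    total += tip
    count += 1

# Subgoal: Calculate and report the result
average = total / count
print("Average tips:", average)
```

In the GENERATE condition, the learner would see the same code with each "# Subgoal: ..." line replaced by a blank to fill in; in the NO-labels condition, the code would appear without the label comments at all.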
In general, GIVEN labels beat out no labels. No problem — that's what we expect given all the past work on subgoal labels. But when we consider all six groups, the picture gets much stranger.
Why would students who get the same context do worse with GIVEN labels than with no labels at all? Why would GENERATE labels do much better in the same context, but worse across different contexts?
So, Briana, Lauren, and Adrienne Decker replicated the experiment with Adrienne’s students at RIT (ICER 2016). And they found:
The same strange “W” pattern, where we have this odd interaction between context and GIVEN vs. GENERATE that we just don’t have an explanation for.
But here's the really intriguing part: they also did the experiment with second semester students at RIT. All the weird interactions disappeared! Same context beat different context. GIVEN labels beat GENERATE labels. No labels did the worst. When students get enough experience, they figure things out and behave like students in other parts of STEM.
The puzzle for the community is WHY. Briana has a hypothesis. Novice students don't attend to the details that they need to, unless you change the contexts. Without changing contexts, even students GIVEN labels don't learn, because they're not paying enough attention. Changing contexts gets them to think, "What's going on here?" GENERATE is just too hard for novices: the cognitive load of figuring out the code and generating the labels is overwhelming, so they do badly where we'd expect them to do better.
Here we have a theory-conflicting result, that has been replicated in two different populations. It’s like the Rainfall Problem. Nobody expected the Rainfall Problem to be hard, but it was. More and more people tried it with their students, and still, it was hard. It took Kathi Fisler to figure out how to teach CS so that most students could succeed at the Rainfall Problem. What could we teach novice CS students so that they avoid the “W” pattern? Is it just time? Will all second semester students avoid the “W”?
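For readers who haven't met it, the Rainfall Problem asks students to average a sequence of rainfall readings, stopping at a sentinel value and ignoring invalid negative entries; decades of studies found that surprisingly few students could write it correctly. Here's a minimal sketch of the task in Python (my wording of the standard problem, not any particular study's version):

```python
# A sketch of the classic Rainfall Problem: average the non-negative
# readings, stopping when the sentinel value 99999 is reached.

def rainfall(readings):
    total = 0
    count = 0
    for value in readings:
        if value == 99999:       # sentinel: stop processing input
            break
        if value >= 0:           # ignore invalid (negative) readings
            total += value
            count += 1
    if count == 0:
        return None              # no valid readings: the average is undefined
    return total / count

print(rainfall([12, -4, 3, 99999, 8]))   # prints 7.5
```

Part of what makes it hard for novices is that the plan has several interacting pieces: the sentinel test, the filter for invalid values, the accumulation, and the guard against dividing by zero.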
Dr. Morrison gave us a really interesting dissertation — some big wins, and some intriguing puzzles for the next researchers to wrestle with. Briana has now joined the computing education research group at U. Nebraska – Omaha, where I expect to see more great results.
Entry filed under: Uncategorized. Tags: cognitive science, computing education research, educational psychology, learning sciences, subgoal labeling.
1. Power law of practice in software implementation: Does this explain the "W" going away? | Computing Education Blog | January 18, 2017 at 7:26 am
[…] wonder if this result explains why the second semester students in Briana's studies (see previous blog post) didn't have the "W" effect. If you do enough code, you move down the power law […]
2. Embedding and Tailoring Engineering Learning: A Vision for the Future of Engineering Education | Computing Education Blog | March 15, 2017 at 6:00 am
[…] labeling totally works (see Lauren's dissertation or Briana's dissertation). Coursera uses it in some of their videos. Rob Miller at MIT has picked it up. But there are very […]
3. How CS differs from other STEM Disciplines: Varying effects of subgoal labeled expository text in programming, chemistry, and statistics | Computing Education Research Blog | March 16, 2018 at 7:01 am
[…] My colleagues Lauren Margulieux and Richard Catrambone (with Laura M. Schaeffer) have a new journal article out that I find fascinating. Lauren, you might recall, was a student of Richard's who applied subgoal labeling to programming (see the post about her original ICER paper) and worked with Briana Morrison on several experiments that applied subgoal labeling to textual programming and Parson's problems (see posts on Lauren's defense and Briana's). […]
4. How to Study for a CS Exam | Computing Education Research Blog | March 30, 2018 at 7:01 am
[…] sciences or educational psychology results from other fields don't map cleanly to CS (see Briana's work). I don't know of research evaluating study practices for learners studying computer science […]
5. Are you talking to me? Interaction between teachers and researchers around evidence, truth, theory, and decision-making | Computing Education Research Blog | June 15, 2018 at 1:00 am
[…] I don't know that anybody has done this experiment. We know that predictions work well in physics education, but we know that lots of things from physics education do not work in CS education. (See Briana Morrison's dissertation.) […]
6. What do I mean by Computing Education Research? The Social Science Perspective | Computing Education Research Blog | November 5, 2018 at 9:01 am
[…] and they want to learn effective methods for their students. Here's where we work on ebooks, and subgoal labeling, and Barb's Parsons problems. I'm interested in how we make computing education efficient and […]
7. What do I mean by Computing Education Research? The Computer Science Perspective | Computing Education Research Blog | November 12, 2018 at 8:01 am
[…] issue for HTM's, but certainly is for RH's), but it's inefficient. Turns out that we can use worked examples with subgoal labeling and techniques like Parson's problems and peer instruction to dramatically […]
8. Subgoal labelling influences student success and retention in CS | Computing Education Research Blog | June 29, 2020 at 7:00 am
[…] all is to see the articles I wrote about Lauren's graduation (link here) and Briana's (link here). They have continued their terrific work, and have come out with their most impressive finding […]