Subgoal labelling influences student success and retention in CS

June 29, 2020 at 7:00 am

I have written a lot about subgoal labeling on this blog. Probably the best way to find it all is to see the articles I wrote about Lauren’s graduation (link here) and Briana’s (link here). They have continued their terrific work, and have come out with their most impressive finding yet.

In work with Adrienne Decker, they have shown that subgoal labeling reduces the rate at which students fail or drop out of introductory computer science classes: “Reducing withdrawal and failure rates in introductory programming with subgoal labeled worked examples” (see link here). The abstract is below.

We now have evidence that subgoal labels lead to better learning and better transfer, work in different languages, work in both text-based and block-based programming languages, and work in Parsons Problems. Now, we have evidence that their use leads to macro effects, like improved student success and retention. We also see a differential impact: the students most at risk of failing are the ones who gain the most.
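To make the idea concrete: a subgoal-labeled worked example groups the steps of a solution under short labels that name each group's purpose, surfacing the tacit structure that experts have automatized and rarely state aloud. Here is a minimal illustrative sketch in Python (my own construction, not material from the studies; the function and the particular labels are hypothetical):

```python
# A worked example as it might appear with subgoal labels.
# Each "SUBGOAL" comment names the purpose of the steps beneath it.

def average_above_threshold(values, threshold):
    # SUBGOAL 1: Initialize accumulator variables
    total = 0
    count = 0

    # SUBGOAL 2: Loop through every element
    for v in values:
        # SUBGOAL 3: Update the accumulators when the condition holds
        if v > threshold:
            total += v
            count += 1

    # SUBGOAL 4: Guard against division by zero and return the result
    if count == 0:
        return 0.0
    return total / count

print(average_above_threshold([2, 8, 5, 10], 4))  # roughly 7.67
```

In a subgoal-labeled condition, students first learn the labels themselves, then see worked examples and practice problems organized by those same labels, so the labels (not the syntax) carry the problem-solving structure from one problem to the next.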

This is a huge win.

Abstract

Background: Programming a computer is an increasingly valuable skill, but dropout and failure rates in introductory programming courses are regularly as high as 50%. Like many fields, programming requires students to learn complex problem-solving procedures from instructors who tend to have tacit knowledge about low-level procedures that they have automatized. The subgoal learning framework has been used in programming and other fields to break down procedural problem solving into smaller pieces that novices can grasp more easily, but it has only been used in short-term interventions. In this study, the subgoal learning framework was implemented throughout a semester-long introductory programming course to explore its longitudinal effects. Of 265 students in multiple sections of the course, half received subgoal-oriented instruction while the other half received typical instruction.

Results: Learning subgoals consistently improved performance on quizzes, which were formative and given within a week of learning a new procedure, but not on exams, which were summative. While exam performance was not statistically better, the subgoal group had lower variance in exam scores and fewer students dropped or failed the course than in the control group. To better understand the learning process, we examined students’ responses to open-ended questions that asked them to explain the problem-solving process. Furthermore, we explored characteristics of learners to determine how subgoal learning affected students at risk of dropout or failure.

Conclusions: Students in an introductory programming course performed better on initial assessments when they received instructions that used our intervention, subgoal labels. Though the students did not perform better than the control group on exams on average, they were less likely to get failing grades or to drop the course. Overall, subgoal labels seemed especially effective for students who might otherwise struggle to pass or complete the course.

Keywords: Worked examples, Subgoal learning, Programming education, Failure rates



8 Comments

  • 1. Pito Salas  |  June 30, 2020 at 3:38 pm

    I’ve followed and skimmed many of the links and still don’t really understand concretely what subgoal labeling really is! Help?

  • 3. orcmid  |  July 5, 2020 at 2:21 pm

    Amazing. I had worried that detailed worked examples can be too pedantic, even though I create them to satisfy myself all over my computation-theory and open-source efforts.

    Considering that detailed explication is appropriate relative to the learner’s situation and level of experience, there may be some progressive-disclosure approach to documentation and tutorials that can be appropriate for a range of audiences.

    This is encouraging.

    (My personal focus, here, is on self-directed learning, what one might want to accompany/invite in a software product. I find these results heartening. There’s no reason that learners couldn’t buddy up and mentoring should be possible.)

  • 4. Prasanth  |  July 6, 2020 at 1:18 am

    Hello Sir,

    The experiment uses “typical instruction” to compare the intervention’s effectiveness against. But one reason for conducting the experiment is, presumably, that typical instruction seems not to work for a large fraction of learners.

    Why use “typical instruction” as the standard of comparison? Would the results shown here diminish in magnitude when compared against, for example, peer instruction, instruction using learning assistants, and other methods of instruction?

    Thank you.

    Prasanth

    • 5. Mark Guzdial  |  July 6, 2020 at 11:29 am

      What a great experiment! Definitely a good thing to try!

    • 6. Briana Morrison  |  July 7, 2020 at 1:35 pm

      The “typical instruction” in the experiment did make use of peer instruction and a flipped classroom. There were 6 sections of the course taught by 4 different instructors. The “typical instruction” was what we had been using in the course: students watched videos the week before class, and in-class time consisted of slides with peer instruction questions, worked examples, and “live coding”. There was also a 2-hour lab where students did multiple small programs, sometimes individually and sometimes with pair programming.
      The only difference between the two conditions was that, in the intervention sections, the worked examples in class were preceded by teaching the subgoal labels, and then worked examples with subgoal labels and practice problems with subgoal labels at the top (and the instructor). Everything else about the classes (assignments, tests, peer instruction questions, etc.) was identical.

      • 7. Prasanth Nair  |  February 2, 2021 at 5:05 am

        Thank you for the clarification!

  • […] one that makes it on this list is their most recent finding (see post here). Subgoal labeling in an introductory computing course, compared to one not using subgoal labeling, […]

