How CS differs from other STEM Disciplines: Varying effects of subgoal labeled expository text in programming, chemistry, and statistics

March 16, 2018

My colleagues Lauren Margulieux and Richard Catrambone (with Laura M. Schaeffer) have a new journal article out that I find fascinating. Lauren, you might recall, was a student of Richard’s who applied subgoal labeling to programming (see the post about her original ICER paper) and worked with Briana Morrison on several experiments that applied subgoal labeling to textual programming and Parson’s problems (see posts on Lauren’s defense and Briana’s).

In this new paper (see link here), they contrast subgoal labels across three different domains: chemistry, statistics, and computer science (specifically, programming).  I’ve been writing lately about how learning programming differs from learning other STEM disciplines (see this post here, for example). So, I was intrigued to see this paper.

The paper contrasts subgoal labeled expository text (e.g., explicitly using the heading “Compute Average Frequency”) and subgoal labeled worked examples (e.g., showing the label “Compute Average Frequency,” then the equation, the values, and the computed result).  I’ll jump to the punchline with the table that summarizes the results:

Programming has high complexity: students learned best when they had both subgoal labeled text and subgoal labeled worked examples. Either one alone didn’t cut it. In statistics, subgoal labeled examples are pretty important, but the subgoal labeled text doesn’t help much.  In chemistry, both the text and the worked examples improve performance, and there’s a benefit to having both.  That’s an argument that chemistry is more complex than statistics, but less complex than programming.

The result is fascinating, for two reasons.  First, it gives us a way to empirically order the complexity of learning in these disciplines. Second, it gives us more reason for using subgoal labels in programming instruction — students just won’t learn as well without them.




7 Comments

  • 1. Austin Cory Bart  |  March 16, 2018 at 11:15 am

    So is expository text *just* the header? Or is it more like the text shown here:

    In general, is there a guide online or in a paper somewhere for making Worked Examples? I’ve found one paper that taught Worked Examples via Worked Examples, but I remember finding it pretty unhelpful for getting started. Do you folks have recommendations?

    • 2. Mark Guzdial  |  March 16, 2018 at 11:19 am

      Are you looking for just worked examples, or worked examples with subgoal labels? I can offer lots of worked examples. Worked examples with subgoal labels as expository text are new for me, too.

    • 3. Lauren Margulieux  |  March 18, 2018 at 11:09 am

      The expository text was an abstract description of the procedure, without any details that would be specific to a problem. For the example that you shared, it would be like that but without any information specific to dogs or breeds. In our context, teaching App Inventor, the expository text said something like, “To make an app, you need to create the components that will be in the app. Components include pictures, sounds, buttons, labels, etc., everything that is a part of the app, whether visible or not…”

      The manipulation was then whether the expository text had a subgoal label or not (e.g., “Create component”). These labels were the same as those given in the worked example, so if the learner had subgoal labels in both expository text and the worked example, they could more easily map between the abstract description of the procedure and the concrete example.

      • 4. Austin Cory Bart  |  March 18, 2018 at 11:18 am

        Ah, this is a very helpful description! It sounds like I may have just spent the past semester helping a student assistant create tutorials instead of Worked Examples… I suppose these things are simpler to make than I thought…

  • […] Lauren Margulieux has started a blog which is pretty terrific.  I wrote about Lauren’s doctoral studies here, and I last blogged about her work (a paper comparing learning in programming, statistics, and chemistry) here. […]

  • […] article is also discussed in a post on Mark Guzdial’s blog, and […]

  • […] The MOOC was better than just a set of videos. The exercises made sure I actually tried to think about what the videos were saying. But it’s clear that the exercises were not developed by assessment experts. There were lots of fill in the blanks like “Name the class that does X.” Who cares? I can always look that up. It’s a problem that the exercises were developed by Smalltalk experts. Some of the problems were of a form that would be simple, if you knew the right tool or the right option (e.g., “Which of the below is not a message that instances of the class Y understand?”), but I often couldn’t remember or find the right tool. Tools can fall into the experts’ blind spot. Good assessments should scaffold me in figuring out the answer (e.g., worked examples or subgoal labels). […]

