Posts tagged ‘instructional design’

Learnersourcing subgoal labeling to support learning from how-to videos

What a cool idea!  Rob Miller is building on the subgoal labeling work that we (read: “Lauren”) did, and is using crowd-sourcing techniques to generate the labels.

Subgoal labeling [1] is a technique known to support learning new knowledge by clustering a group of steps into a higher-level conceptual unit. It has been shown to improve learning by helping learners to form the right mental model. While many learners view video tutorials nowadays, subgoal labels are often not available unless manually provided at production time. This work addresses the challenge of collecting and presenting subgoal labels to a large number of video tutorials. We introduce a mixed-initiative approach to collect subgoal labels in a scalable and efficient manner. The key component of this method is learnersourcing, which channels learners’ activities using the video interface into useful input to the system. The presented method will contribute to the broader availability of subgoal labels in how-to videos.

via Learnersourcing subgoal labeling to support learning from how-to videos.

February 12, 2014 at 1:11 am 5 comments

Context matters when designing courses, too: Know Thy Learner

In 1994, Elliot Soloway, Ken Hay, and I wrote an article about “learner-centered design.”  We contrasted it with the prevailing paradigm of “user-centered design,” arguing that designing for learners is different than designing for experts (which, we suggested, is really what user-centered design is).

I like the quotes below as pointing toward borrowing ideas from modern UX design for learning design.  The most important lesson that we try to teach undergraduates about human-computer interface design is, “Know Thy User, for the User is not You.”  You have to get to know your user, and they’re not like you.  You can’t use introspection to design interfaces.  That same lesson is what we’re hearing below, but about learning.  “Know Thy Learner, for the Learner is not You.”  Your learner has a different context than you, and you have to get to know it before you can design for it.

“Transferring education from the United States to Africa wouldn’t work,” argued Bakary Diallo, rector of African Virtual University. “Because we have our own realities,” he added, “our own context and culture.”

Naveed A. Malik, founding rector of the Virtual University of Pakistan, echoed that sentiment. “This is something that we learned very early in our virtual-university experience,” he said. “We couldn’t pick up a course from outside and then transplant it into a Pakistani landscape—the context was completely different.”

via Virtual Universities Abroad Say They Already Deliver ‘Massive’ Courses – Wired Campus – The Chronicle of Higher Education.

July 11, 2013 at 1:24 am 5 comments

What happens when professionals take on-line CS classes: When Life and Learning Do Not Fit

The journal article on the research that Klara Benda, Amy Bruckman, and I did finally came out last month in the ACM Transactions on Computing Education.  The abstract is below.  Klara has a background in sociology, and she’s done a great job of blending research from sociology with more traditional education and learning sciences perspectives to explain what happens when working professionals take on-line CS classes.  This work has informed our CSLearning4U project significantly, and informs my perspective on MOOCs.

We present the results of an interview study investigating student experiences in two online introductory computer science courses. Our theoretical approach is situated at the intersection of two research traditions: distance and adult education research, which tends to be sociologically oriented, and computer science education research, which has strong connections with pedagogy and psychology. The article reviews contributions from both traditions on student failure in the context of higher education, distance and online education as well as introductory computer science. Our research relies on a combination of the two perspectives, which provides useful results for the field of computer science education in general, as well as its online or distance versions. The interviewed students exhibited great diversity in both socio-demographic and educational background. We identified no profiles that predicted student success or failure. At the same time, we found that expectations about programming resulted in challenges of time-management and communication. The time requirements of programming assignments were unpredictable, often disproportionate to expectations, and clashed with the external commitments of adult professionals. Too little communication was available to access adequate instructor help. On the basis of these findings, we suggest instructional design solutions for adult professionals studying introductory computer science education.

via When Life and Learning Do Not Fit.

January 9, 2013 at 9:46 am 3 comments

Instructional Design Principles Improve Learning about Computing: Making Measurable Progress

I have been eager to write this blog for months, but wanted to wait until both of the papers had been reviewed and accepted for publication.  Now “Subgoals Improve Performance in Computer Programming Construction Tasks” by Lauren Margulieux, Richard Catrambone, and Mark Guzdial has been accepted to the educational psychology conference EARLI SIG 6 & 7, and “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Mobile Application Development” by the same authors has been accepted into ICER 2012.

Richard Catrambone has developed a subgoal model of learning.  The idea is to express instructions with explicit subgoals (“Here’s what you’re trying to achieve in the next three steps”) and that doing so helps students to develop a mental model of the process.  He has shown that using subgoals in instruction can help with learning and improve transfer in domains like statistics.  Will it work with CS?  That’s what his student Lauren set out to find out.

She took a video that Barb had created to help teachers learn how to build apps with App Inventor.  She defined a set of subgoals that she felt captured the mental model of the process, and then ran 40 undergraduates through the materials, either with or without subgoal-based instruction:

In the first session, participants completed a demographic questionnaire, and then they had 40 minutes to study the first app‘s instructional material. Next, participants had 15 minutes to complete the first assessment task. In the second session, participants had 10 minutes to complete the second assessment task, which measured their retention. Then participants had 25 minutes to study the second app‘s instructional material followed by 25 minutes to complete the third assessment.

An example assessment task:

Write the steps you would take to make the screen change colors depending on the orientation of the phone; specifically, the screen turns blue when the pitch is greater than 2 (hint: you’ll need to make an orientation sensor and use blocks from “Screen 1” in My Blocks).

Here’s an example screenshot from one of Barb’s original videos, which is what the non-subgoal group would see:

This group would get text-based instruction that looked like this:

  1. Click on “My Blocks” to see the blocks for components you created.
  2. Click on “clap” and drag out a when clap.Touched block
  3. Click on “clapSound” and drag out call clapSound.Play and connect it after when clap.Touched

The subgoal group would get a video that looks like this:

That’s it — a callout would appear for a few seconds to remind them of what subgoal they were on.  Their text instructions looked a bit different:

Handle Events from My Blocks

  1. Click on “My Blocks” to see the blocks for components you created.
  2. Click on “clap” and drag out a when clap.Touched block

Set Output from My Blocks

  1. Click on “clapSound” and drag out call clapSound.Play and connect it after when clap.Touched

You’ll notice other educational psychology themes in here.  We give them instructional material with a complete worked example.  By calling out the mental model of the process explicitly, we reduce cognitive load associated with figuring out a mental model for themselves.  (When you tell students to develop something, but don’t tell them how, you are making it harder for them.)
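The subgoal-labeling idea isn’t specific to App Inventor’s blocks.  As a rough text-based analogue (my sketch, not the study’s materials), the same event-handling worked example might be chunked with the subgoal labels as comments in Python:

```python
# A hypothetical Python analogue of the subgoal-labeled App Inventor
# example.  The subgoal labels appear as comments that chunk the steps
# into higher-level conceptual units.

sounds_played = []

def play_sound(name):
    # Stand-in for App Inventor's Sound.Play block.
    sounds_played.append(name)

# Subgoal: Handle events from My Blocks
def when_clap_touched():
    # Subgoal: Set output from My Blocks
    play_sound("clapSound")

# Simulate the learner tapping the "clap" button.
when_clap_touched()
print(sounds_played)  # ['clapSound']
```

The labels do the same work as the video callouts: they name the goal that a group of steps serves, so the learner doesn’t have to induce that structure alone.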

Here’s a quote from one of the ICER 2012 reviewers (who recommended rejecting the paper):

“From Figure 1, it seems that the “treatment” is close to trivial: writing headings every few lines. This is like saying that if you divide up a program into sections with a comment preceding each section or each section implemented as a method, then it is easier to recall the structure.”

Yes. Exactly. That’s the point. But this “trivial” treatment really made a difference!

  • The subgoal group attempted more parts (subgoals) of the assessment tasks, completed more of them successfully, and finished faster — all three measures (subgoals attempted, subgoals completed successfully, and time) were statistically significant.
  • The subgoal group also successfully completed more of a retention task (which wasn’t the exact same task — they had to transfer knowledge) one week later, again statistically significantly.

But did the students really learn the mental model communicated by the subgoal labels, or did the chunking of steps into subgoals just make the instructions easier to read and parse?  Lauren ran a second experiment with 12 undergraduates, where she asked students to “talk-aloud” while they did the task.  The groups in the second experiment were too small to show the same learning benefits, but all the trends were in the same direction.  The subgoal group was still out-performing the non-subgoal group, and what’s more, they talked in subgoals.

I find it amazing that she got these results from just one-hour sessions.  In one hour, Lauren’s video taught undergraduate students how to get something done in App Inventor, and they could remember and do something new with that knowledge a week later — better than a comparable group of Georgia Tech undergraduates seeing the SAME videos (with only callout differences) doing the SAME tasks.  That is efficient learning.

Here’s a version of a challenge that I have made previously: Show me pedagogical techniques in computing education that have statistically significant impacts on performance, speed, and retention, and lead to developing a mental model of (even part of) a software development process.  What’s in our toolkit?  Where is our measurable progress? The CMU Cognitive Tutors count, but they were 20-30 years ago and (unfortunately) are not part of our CS education toolkit today. Alice and Scratch are tools — they are what to teach, not how to teach.  Most of our strong results (like Pair Programming, Caspersen’s STREAMS, and Media Computation) are about changing practice in whole courses, mostly for undergraduates, over several weeks.  Designing instruction around subgoals in order to communicate a mental model is a small, “trivial” tweak that anyone can use no matter what they are teaching, with significant wins in terms of quality and efficiency.  Instructional design principles could be used to make undergraduate courses better, but they’re even more critical when teaching adults, when teaching working professionals, when teaching high school teachers who have very little time.  We need to re-think how we teach computing to cater to these new audiences.  Lauren is showing us how to do that.

One of the Ed Psych reviewers wrote, “Does not break new ground theoretically, but provides additional evidence for existing theory using new tasks.”  Yes. Exactly.  This is no new invention from an instructional design perspective.  It is simply mapping things that Richard has been doing for years into a computer science domain, into “new tasks.”  And it was successful.

Lauren is working with us this summer, and we will be trying it with high school teachers.  Will it work the same as with GT undergraduates?  I’m excited by these results — we’re already showing that the CSLearning4U approach of simply picking the low-hanging fruit from educational psychology can have a big impact on computing education quality and efficiency.

(NSF CE21 funds CSLearning4U.  Lauren’s work was supported by a Georgia Tech GVU/IPaT research grant. All the claims and opinions here are mine, not necessarily those of any of the funders.)

June 5, 2012 at 7:30 am 23 comments

Big-D “Design” for education and online courses: Let’s build more and different

I was fortunate to serve as a reviewer on (now, Dr.) Turadg Aleahmad’s thesis committee at Carnegie Mellon University a couple weeks ago (link to abstract only).  Turadg was addressing a hard problem: Much of what we know about education principles (from research) rarely makes an impact on educational practice.  (This is the same problem that Sally Fincher was talking about in her SIGCSE keynote in 2010 — she called the research results “useless truths.”)  Turadg argues that bridging the gap is a job for Design (Big-D “Design,” as in thinking about Design as an explicit and conscious process).  In his thesis, Turadg uses HCI design practice and adapts it to the task of creating technologies that will actually get used in order to implement educational principles.  (I recommend Chapter 3 to all educational designers, including designers of computing curricula — Turadg describes a process to figure out what the stakeholders want, and to match that to desired principles.)

Turadg created two tools (and deployed them, and evaluated them — as well as inventing a new design method! All in one dissertation!).  One of them, called Nudge, is about reminding students to engage in learning activities spaced out over time, rather than cramming the night before (a form of the procrastination problem that Nick Falkner was just talking about).  The other one, Examplify, is about getting students to self-explain worked examples.  Turadg’s thesis is practice-oriented and practical. For example, he actually figured out the costs of deploying these tools (e.g., using Amazon Mechanical Turk effort to create the worked examples from older exams that the teacher provided as study guides).

Nudge and Examplify both worked, in terms of getting the benefits that Turadg designed them for.  But they didn’t have the uptake that I expected — it wasn’t whole class adoption.  Those who used it got benefit out of it.

I challenged Turadg on this point at his defense.  Does the fact that not everyone used it suggest that his design process failed?  Turadg argued that the point of the design process is to build something that someone will use to achieve the design goals.  He did that.  He did accurately identify a population of users and their needs, and he met those needs.  More importantly, his process assumes “the long tail.”  Educational interventions need to be tailored for different student populations. One tool will rarely work for everyone in the same way. How do you get to everyone?  Build more tools, more systems!  Adapt to the wide range of people.

Turadg gave me a new way of thinking about the results from Coursera and Udacity courses.  It’s not a problem that these systems are mostly attracting the 10-30% of students at the top.  The problem is that we don’t have another dozen systems that are aiming to serve the other 70-90%.  What kinds of online courses do we need that explicitly aim at the low to middle performing students?  Maybe we need on-line courses or books that seek to bore and drive away the upper percentages?

My guess is that the new edX partnership between Harvard and MIT (below) is going to aim similarly at the top students.  Getting those top students has potential value that is worth the competition and money being invested.  There’s likely to be less investment into the low-to-mid range.  From the perspective of serving all the needs in our society, we need more and different forms of these technologies.  I’m personally more interested in these courses, thinking about it from Turadg’s perspective.  It’s a design challenge — can you use the Coursera/Udacity/edX technologies and approaches to reach “the rest of us”?  Or maybe technologies for the other segments of the market will look more like books than courses?

In what is shaping up as an academic Battle of the Titans — one that offers vast new learning opportunities for students around the world — Harvard University and the Massachusetts Institute of Technology on Wednesday announced a new nonprofit partnership, known as edX, to offer free online courses from both universities.

Harvard’s involvement follows M.I.T.’s announcement in December that it was starting an open online learning project to be known as MITx. Its first course, Circuits and Electronics, began in March, enrolling about 120,000 students, some 10,000 of whom made it through the recent midterm exam. Those who complete the course will get a certificate of mastery and a grade, but no official credit. Similarly, edX courses will offer a certificate but will carry no credit.

via Harvard and M.I.T. Team Up to Offer Free Online Courses –

May 14, 2012 at 8:34 am 3 comments

Explicit instruction prevents exploration — but will all students explore?

Interesting result: If you show students something that a novel toy will do, students will do that something, and are unlikely to explore and figure out other features of the toy. That makes sense — how much exploration do you do in your computer applications to figure out everything that they can do? I do believe that not doing explicit instruction is more likely to lead to exploration. But for all students? How many students will do how much exploration? If we don’t teach students anything, will they explore and learn everything?

I thought the bottomline of the report is a fair statement:

So what’s a teacher or parent to do? Schulz is quick to point out that the study is not an argument against instruction. “Things that you’re extremely unlikely to figure out on your own — how to read, how to do calculus, how to drive a car — it would make no sense to try to learn by exploration,” she says.

Rather, the study underscores the real-world trade-offs between education and exploration, and the importance of acknowledging what is unknown even while imparting what is known. Teachers should, where possible, offer the caveat that there may be more to learn.

via Don’t show, don’t tell? – MIT News Office.

July 15, 2011 at 7:42 am 5 comments

Using Worked Examples to improve learning on Loops

Leigh Ann Sudol-DeLyser is doing some interesting work using worked examples to improve CS learning.

I employed a worked example strategy where students were given one example and the loops were broken into three parts (init, update, comparison) and students learned how to write each part separately. I’m preparing a journal paper on the subject, however a small preview of the results – the students were much better at it than I expected!

I believe that the combination of worked examples with specific line-level feedback helped these non-programmers understand not only that they were “wrong” when something didn’t work, but why and therefore how to fix it in order to make it right. We need better intelligent tools in order to help scaffold students’ learning rather than relying on them to have the expertise and metacognitive abilities to figure it out for themselves. My current research focuses on developing an understanding of how students think and learn computing by supporting their learning individually and as they have trouble. Stay tuned as I am working on some data analysis that should be very interesting!

via In need of a Base Case.
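The init/update/comparison decomposition she describes might look like this in Python (my sketch, not her actual materials):

```python
# A loop written so its three parts are explicit, in the spirit of the
# worked-example decomposition described above: initialization,
# comparison, and update.

def sum_to(n):
    total = 0
    i = 1              # init: set up the loop variable
    while i <= n:      # comparison: decide whether to loop again
        total = total + i
        i = i + 1      # update: move the loop variable forward
    return total

print(sum_to(5))  # 1 + 2 + 3 + 4 + 5 = 15
```

Teaching each part separately means a student can get feedback on (say) a wrong comparison without the other two parts being entangled in the error.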

February 9, 2011 at 1:47 pm 1 comment

In Defense of Lecture

Lectures have a black eye on college campuses today.  We’re told that they are useless, and that they are ineffective without “explicit constructionism.” We’re told to use active learning techniques in lecture, like clickers.  I’m realizing that there’s nothing wrong with lecture itself, and that the psychology results tell us that lectures should be a highly efficient form of learning.  The problem is that there is an interaction between lecture as learning intervention and our students. That is an education (or broadly, a learning science) result, and it’s important to note the distinction between education (as instructional engineering, as psychology-in-practice) and psychology.

I just served on a Psychology Masters thesis committee.  In 2009, Micki Chi published a paper where she posited a sequence of learning approaches: From passive, to active, to constructive.  She suggested that moving along the sequence resulted in better learning. While her paper drew on lots of dyad comparison studies between two of those styles of learning, nobody had compared all three in a single experiment.  This Masters student tested all three at once. He put subjects into one of three conditions:

  • Passive: Where students simply read a text on ecology drawn from a Sophomore-level textbook.
  • Active: Where students either (a) highlighted text of interest or (b) copy-pasted key sections into “Notes.”
  • Constructive: Where students either (a) created self-explanations of the text or (b) created questions about the text.

He had a test on the content immediately after the training, and another a week later.  Bottomline: No difference on either test. But the Masters student was smarter than just leaving it at that.  He also asked students to self-report on what they were thinking about when they read the text, like “I identified the most important ideas” or “I summarized (to myself) the text” (both signs of “active” cognition in Chi’s paper), or “I connected the text to ideas I already knew” or “I made hypotheses or predictions about the text” (“constructive” level).  Those self-reported higher-levels of cognitive processing were highly correlated with the test scores.  Micki Chi called these “potential covert activities” in these kinds of studies.  That’s a bit of a misnomer, because in reality, it’s those “covert” activities that you’re really trying to engender in the students!

The problem is that Georgia Tech students (the subjects in the study) are darn smart and well-practiced learners.  Even when “just reading” a text, they think about it, explain it to themselves, and summarize it to themselves.  They think about it, and that’s where the learning comes from.  All the “active learning” activities can help with engendering these internal cognitive activities, but they’re not necessary.

Lectures are a highly-efficient form of teaching.  Not only do they let us reach many students at once, but they play upon important principles of instructional design like the modality effect.  Hearing information while looking at related pictures (e.g., diagrams on Powerpoint slides) can allow for better learning (more information in less time) than just reading a book on your own.  Coding live in lecture is a “best practice” in teaching computer science. I don’t dispute all the studies about lectures, however — lectures don’t usually work.  Why?

We add active learning opportunities to lectures because students don’t usually learn that much from a 90 minute lecture. Why? Because it takes a lot of effort to keep learning constructively during a 90 minute lecture.  Because most students (especially undergraduates) are not great learners.  This doesn’t have anything to do with the cognitive processes of learning.  It has everything to do with motivation, affect, and sustained effort to apply those cognitive processes of learning.

Maybe it has to do with the fact that most of these studies of lectures take place with WEIRD students: “Western, educated, industrialized, rich, and democratic cultures.”  A recent study in the journal Science shows that many of our studies based on WEIRD students break down when the same studies are used with students from different cultures.  Maybe WEIRD students are lazy or inexperienced at focused learning effort. Maybe students in other cultures could focus for 90 whole minutes.  In any case, I teach WEIRD students, and our studies of WEIRD students show that lectures don’t work for them.

There’s another aspect of this belief that lectures don’t work.  I have attended talks at education conferences lately where the speaker announces that “Lectures don’t work” and proceeds to engage the audience in some form of active learning, like small group discussion.  I hate that.  I am a good learner.  I take careful notes, I review them and look up interesting ideas and referenced papers later, and if the lecture really captured my attention, I will blog on the lecture later to summarize it.  I take a multi-hour trip to attend a conference and hear this speaker, and now I have to talk to whatever dude happens to be sitting next to me? If you recognize that the complete sentence is “Lectures don’t work…for inexperienced or lazy learners,” then you realize that using “active learning” with professionals at a formal conference is insulting to your audience.  You are assuming that they can’t learn on their own, without your scaffolding.

When I was a student, I remember being taught “learning skills” which included how to take good notes and how to review those notes. I don’t know that those lessons worked, and it’s probably more effective to change lecture than to try to change all those students.  We do want our students to become better learners, and it’s worth exploring how to make that happen.  But let’s make sure that we’re clear in what we’re saying: Lectures don’t work for learning among our traditional American (at least) undergraduate students.  That’s not the same as saying that lectures don’t work for learning.

July 27, 2010 at 3:11 pm 21 comments

In Praise of Drill and Practice

Last night, Barb and I went out to dinner with our two teens.  (The house interior is getting painted, so it was way easier than trying to eat in our kitchen.)  We got to talking about the last academic year.  Our eldest graduated from high school last week, with only one B in four years, including 7 AP classes.  (While I take pride in our son, I do recognize that kids’ IQ is most highly correlated with mothers’ IQ. I married well.) Our middle child was moping a bit about how hard it was going to be to follow in his footsteps, though she’s doing very well at that so far.

Since our middle child had just finished her freshman year, we asked the two of them which teachers we should steer our youngest toward or away from.  As they compared notes on their experiences, I asked about their biology teacher, Mrs. A.  I couldn’t believe the homework load that Mrs. A. sent home with the kids each night — almost all worksheets, fill-in-the-blank, drill-and-practice.  Sometimes, our middle child would have 300 questions to complete in a night!

Both our kids loved Mrs. A!  No, they didn’t love the worksheets, but they said that they really liked how the worksheets “drilled the material into our heads.”  “She’s such a great teacher!” they both said.  They went on to talk about topics in biology, using terms that I didn’t know.  Our middle child said that she’s looking forward to taking anatomy with Mrs. A, and our eldest said that many of his friends took anatomy just to have Mrs. A again.

I was surprised.  My kids are pretty high-ability, and this messes with my notions of Aptitude-Treatment Interactions.  High ability kids value worksheets, simple drill-and-practice — what I used to call “drill-and-kill”?

On the other hand, their experience meshes with the “brain as muscle” notions that Carl Wieman talked about at SIGCSE.  They felt that they really learned from all that practice in the fundamentals, in the language and terms of the field.  Cognitive load researchers would point out that worksheets have low cognitive load, and once that material is learned, students can build on it in more sophisticated and interesting ways.  That’s definitely what I heard my kids doing, in some really interesting discussions about the latest biology findings, using language that I didn’t know.

I realized again that we don’t have (or at least, use) the equivalent of worksheets in computer science.  Mathematics has them, but my sense is that mathematics educators are still figuring out how to make them work well, in that worksheets have low cognitive load but it’s still hard to get at what we want students to learn about mathematics.  I suspect that computational worksheets would serve mathematics and computer science better than paper-based ones.  A computational worksheet could allow for dynamics, the “playing-out” of the answer to a fill-in-the-blank question.  Much of what we teach in introductory computer science is about dynamics: about how that loop plays out, about how program state is influenced and manipulated by a given process, about how different objects interact.  That could be taught (partially, the foundational ideas) in a worksheet form, but probably best where the dynamics could be made explicit.
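To make the idea concrete, here is a minimal sketch of what a computational worksheet for loop dynamics might look like (the format and the names are my invention): the program plays out a loop and records the state after each step, so a fill-in-the-blank answer like “what is total after i = 2?” can be checked against the actual execution.

```python
# A minimal sketch of a "computational worksheet": it plays out the
# dynamics of a loop, recording the variable state after each
# iteration, so a student's fill-in-the-blank answers can be checked
# against what the program actually does.

def trace_sum_loop(upper):
    # Trace: total = 0; for i in range(1, upper + 1): total += i
    rows = []
    total = 0
    for i in range(1, upper + 1):
        total = total + i
        rows.append((i, total))  # state after this iteration
    return rows

for i, total in trace_sum_loop(3):
    print(f"after i = {i}: total = {total}")
```

A real worksheet would hide some of those traced values and ask the student to supply them, with the execution itself providing the answer key.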

Overall, though, my conversation with my kids about Mrs. A and her worksheets reminded me that we really don’t have much for CS learners before throwing them in front of a speeding interpreter or compiler.  A blank editor window is a mighty big fill-in-the-blank question. We need some low cognitive load starting materials, even for the high ability learners.

May 26, 2010 at 10:15 am 15 comments
