Posts tagged ‘worked examples’

Come to my workshop on CS Education at ASEE June 16!

I am attending my first American Society for Engineering Education (ASEE) Conference this year — see the website here: https://www.asee.org/conferences-and-events/conferences/annual-conference/2019.

I’m still figuring out Engineering Education Research, so I’ll be offering a workshop based on our work at Georgia Tech: Techniques for Improved Engagement and Learning of Programming. The workshop is Sunday, June 16, 2019 from 9:00 am to noon. Please come, and please pass this on to others you know who are attending ASEE and might be interested.

Computing education research at Georgia Tech over the last 15 years has led to techniques for teaching programming which improve student learning. Learning is enhanced through greater engagement and reduced cognitive load.

These techniques are:

  • Media computation: Teaching programming through the manipulation of digital media, which improves students’ sense of utility and relevance, leading to greater engagement;
  • Worked examples: Using worked examples in peer instruction and prompting for predictions, which improves learning;
  • Subgoal labeling: Structuring and labeling worked examples to improve immediate learning, retention over time, and transfer to new problems.

The learning objectives for this workshop are for participants to experience these techniques so that they might be able to judge which are most useful for their own practice. Participants will:

  • Manipulate digital media with programs that they write during the workshop (laptops required).
  • Participate in peer instruction questions using worked examples.
  • Compare worked examples with and without subgoal labeling.


February 1, 2019 at 7:00 am Leave a comment

An Ebook Integrating Minimal Manuals with Constructionism, Worked Examples, and Inquiry: MOHQ

Our computing education research group at Georgia Tech has been developing and evaluating ebooks for several years (see this post with discussion of some of them). We publish on them frequently, with a new paper just accepted to ICER 2016 in Melbourne. We use the Runestone Interactive platform, which allows us to create ebooks with many different kinds of learning activities (which I’ve been arguing for a while is really important to support a range of abilities and motivations) — not just editing and running code, but certainly including it.

It’s a heavyweight platform. I have been thinking about alternative models of ebooks — maybe closer to e-pamphlets. Since I was working with GP (see previous post) and undergraduate David Tran was interested in working with me on a GP project, we built a prototype of a minimalist medium for learning CS. I call it a MOHQ: Minimal manual Organized around Hypertext Questions: http://home.cc.gatech.edu/gpblocks. (Suggestion: Use Firefox if you can for playing with browser GP. WAY faster for the JavaScript execution than either Chrome or Safari on my Mac.)

Minimal Manuals

John Carroll came up with the idea of minimal manuals back in the 1980s (see the earliest paper I found on the idea). The goal is to help people use complicated computing devices with a minimum of overhead. Each page of the manual starts with a task — something that a user would want to do. The goal is to put the instructions for how to achieve that task all on that one page.

The idea of minimalist instruction is described here: http://www.instructionaldesign.org/theories/minimalism.html.

The four principles of minimal instruction design are:

  1. Allow learners to start immediately on meaningful tasks.
  2. Minimize the amount of reading and other passive forms of training by allowing users to fill in the gaps themselves.
  3. Include error recognition and recovery activities in the instruction.
  4. Make all learning activities self-contained and independent of sequence.

There’s good evidence that minimal manuals really do work (see http://doc.utwente.nl/26430/1/Lazonder93minimal.pdf). Learners become more productive more quickly with minimal manuals, with surprisingly high scores on transfer and retention. A nice attribute of minimal manuals is that they’re geared toward success. They likely increase self-efficacy, and low self-efficacy is a significant problem in CS education.

The goal of most minimal instruction is to be able to do something. What about learning conceptual knowledge?

Adding Learning Theory: Inquiry, Worked Examples, and Constructionism

I started exploring minimal manuals as a model for designing CS educational media after a challenge from Alan Kay. Alan asked me to think about how we would teach people to be autodidacts. One of the approaches used to encourage autodidactism is inquiry-based learning. Could we structure a minimal manual around questions that students might have, or that we want them to ask themselves?

We structure our Runestone ebooks around an Examples+Practice framework. We provide a worked example (typically executable code, but sometimes a program visualization), and then ask (practice) questions about that example. We provide one or two practice exercises for every example. Based on Lauren Margulieux’s work, the point of the practice is to get students to think about the example, to engage with it, and to explain it to themselves. It’s less important that they do the questions — I want the students to read the questions and think about them, and Lauren’s work suggests that even the feedback may not be all that important.
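To make the pattern concrete, here’s the flavor of an example+prediction pair in plain Python (my illustration, not an actual page from our ebooks):

def mystery(s):
    # Worked example: build up r by putting each new character in front.
    r = ""
    for ch in s:
        r = ch + r
    return r

# Practice question: predict what mystery("abc") returns before running it.
# (Prepending each character reverses the string, so this prints "cba".)
print(mystery("abc"))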

Finally, one of the aspects that I like about Runestone is that every example in an active code area is a complete Python interpreter. Modify the code any way you want. Erase all of it and build something new if you want. It’s constructionist. We want students to construct with the examples and go beyond them.

MOHQ: Minimal Manual Organized around Hypertext Questions

The prototype MOHQ that David Tran and I built (http://home.cc.gatech.edu/gpblocks) is an implementation of this integration of minimal manuals with constructionism, inquiry, and worked examples. Each page in the MOHQ:

  • Starts with a question that a student might be wondering about.
  • Offers a worked example in a video.
  • Offers the opportunity to construct with the example project.
  • Asks one or two practice questions, to prompt thinking about the project.
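As a sketch, you could model the anatomy of one page as data like this (the field names and URLs are my invention, not the actual implementation):

page = {
    "question": "How do I make a sound higher-pitched?",
    "worked_example_video": "https://example.org/video.mp4",
    "project_link": "https://example.org/gp-project",   # picks up where the video ends
    "practice_questions": [
        "What happens to the length of the sound?",
        "What would keeping only every third sample do?",
    ],
    "simpler_questions": [],   # pages you might want to visit first
    "next_questions": [],      # prompts for where to explore next
}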

Using the minimal design principles to structure the explanation:

#1: Allow learners to start immediately on meaningful tasks.

The top page offers several questions that I hope are interesting to a student. Every page offers a project that aims to answer that question. GP is a good choice here because it’s blocks-based (low cognitive load) and I can do MediaComp in it (which is what I wanted to teach in this prototype).

#2: Minimize the amount of reading and other passive forms of training by allowing users to fill in the gaps themselves.

Each page has a video of David or me solving the problem in GP. Immediately afterward is a link to jump directly into the GP project exactly where the video ended. Undo something, redo something, start over and build something else. The point is to watch a video (where we try to explain what we’re doing, but we’re certainly not filling in all the gaps), then figure out how it works on your own.

Then we offer a couple of practice questions to challenge the learner: Did you really understand what was going on here?

#3: Include error recognition and recovery activities in the instruction.

Error recovery is easy when everything is in the browser — just hit the back button. You can’t save. You can’t damage anything. (We tell people this explicitly on every page.)

#4: Make all learning activities self-contained and independent of sequence.

This is the tough one. I want people to actually learn something in a MOHQ: that pixels have red, green, and blue components, that chromakey is about replacing one color with a background image, and that removing every other sample increases the frequency of a sound — and more general ideas, e.g., that elements in a collection can be referenced by index number.
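Two of those ideas, sketched in plain Python so the mechanics are visible (using lists of (r, g, b) tuples and lists of samples, rather than GP’s actual blocks):

def color_distance(c1, c2):
    # Euclidean distance between two (r, g, b) tuples.
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def chromakey(pixels, background, key=(0, 255, 0), threshold=100):
    # Replace each pixel that is close to the key color with the
    # corresponding background pixel; leave everything else alone.
    return [bg if color_distance(p, key) < threshold else p
            for p, bg in zip(pixels, background)]

def double_pitch(samples):
    # Dropping every other sample halves the duration and doubles the
    # frequency when played back at the same sampling rate.
    return samples[::2]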

So, all the driving questions from the home page start with, “Okay, you can just dive in here, but you might want to first go check out these other pages.” You don’t have to, but if you want to understand better what’s going on here, you might want to start with simpler questions.

We also want students to go on — to ask themselves new questions, to go try other projects. After each project, we offer some new questions that we hope that students might ask themselves. The links are explicitly prompts. “You might be thinking about these questions. Even if you weren’t, you might want to. Let’s see where we can explore next.”

Current Prototype and What Comes Next

Here’s the map of pages that we have out there right now. We built it in a Wiki, which facilitated creating the network of pages that we wanted. This isn’t a linear book.

[Image: map of the full MOHQ page network]

There are maybe a dozen pages out there, but even with that relatively small size, it took most of a semester to pull them together. Producing the videos and building these pages by hand (even in a Wiki) was a lot of work. The tough part was every time we changed our minds about something and had to go back through all of the previously built pages to update them. Since this is a prototype (i.e., we didn’t know what we wanted when we started), that happened quite often. If we were going to add more to the GP MOHQ, I’d want to use a tool for generating pages from a database, as we did with STABLE, the Smalltalk Apprenticeship-Based Learning Environment.

I would appreciate your thoughts about MOHQ. Call this an expert review of the idea.

  • Thumbs-up or down? Worth developing further, or a bad direction?
  • What do you think is promising about this idea?
  • What would we need to change to make it more effective for student learning?

June 15, 2016 at 7:35 am 12 comments

Cognitive Load as a Significant Problem in Learning Programming: Briana Morrison’s Dissertation Proposal

Briana Morrison is defending her proposal today.  One chapter of her work is based on her ICER 2015 paper that won the Chairs Award for best paper (see post here). Good luck, Briana!

Title: Replicating Experiments from Educational Psychology to Develop Insights into Computing Education: Cognitive Load as a Significant Problem in Learning Programming

Briana Morrison
Ph.D. student
Human Centered Computing
College of Computing
Georgia Institute of Technology

Date: Wednesday, November 11, 2015
Time: 2 PM to 4 PM EDT
Location: TSRB 223

Committee
————–
Dr. Mark Guzdial, School of Interactive Computing (advisor)
Dr. Betsy DiSalvo, School of Interactive Computing
Dr. Wendy Newstetter, School of Interactive Computing
Dr. Richard Catrambone, School of Psychology
Dr. Beth Simon, Jacobs School of Engineering at University of California San Diego and Principal Teaching and Learning Specialist, Coursera

Abstract
———–
Students often find learning to program difficult. This may be because the concepts are inherently difficult: the elements of learning to program are highly interconnected. Instructors may be able to lower the complexity of learning to program by designing instructional materials that use educational psychology principles.

The overarching goal of this research is to gain more understanding and insight into the optimal conditions under which learning programming can be successful, where success is defined as students being able to apply their acquired knowledge and skills in new or familiar problem-solving situations. Cognitive load theory (CLT), and its associated effects, describe the role of the learner’s memory during the learning process. By minimizing undesirable loads within the instructional materials, the learner’s memory can hold more relevant information, thereby improving the effectiveness of the learning process.

This proposal uses cognitive load theory to improve learning in programming.  First, an instrument for measuring cognitive load components within introductory programming was developed and initially validated. We have explored reducing the cognitive load by changing the modality in which students receive the learning material. This had no effect on novices’ retention of knowledge or their ability to transfer knowledge. We then attempted to reduce the cognitive load by adding subgoal labels to the instructional material. This had some effect on the learning gains under some conditions. Students who learned using subgoal labels demonstrated higher learning gains than the other conditions on the programming assessment task. We also explored using a low cognitive load assessment task, a Parsons problem, to measure learning gains. This low cognitive load assessment task proved more sensitive than the open-ended programming assessment tasks in capturing student learning. Students who were given subgoal labels, regardless of context transfer condition, outperformed those in the other conditions.

In my final, proposed study, I change how we teach a programming construct, through its format and content, in order to reduce cognitive load. The changed construct is presumed to be a more natural cognitive fit for students, based on previous research.

November 11, 2015 at 8:48 am 4 comments

Sorting Is Boring: Computing Education Needs to Join the Real World, like MediaComp and worked examples 

I agree that we get it backwards in computing education.  We ought to do more with worked examples (a form of “word problems”) — see the argument here.  The point of Media Computation has always been to focus on relevance — what the students think that a computer is good for, not what the CS teacher thinks is interesting (see that argument here).

There are people who love math for math’s sake and devote themselves to proving 1 + 1 = 2. There are more people, however, who enjoy using math to prescribe medication and build skyscrapers. In elementary school, we use word problems to show why it’s useful to add fractions (ever want to split that blueberry pie?) or find the perimeter of a square. We wait until college, when math majors choose to devote four years towards pure math, to finally set aside the word problems and focus on theory. We do so because math is a valuable skill that is used in so many different professions and contexts, and we don’t want kids to give up on math because they don’t think it’s useful.

So, why does computer science start with theory and end with word problems?

via Sorting Is Boring: Computer Science Education Needs to Join the Real World | Jessie Duan.

March 7, 2015 at 8:29 am 30 comments

A kind of worked examples for large classrooms

I passed on to the MediaComp-Teach list something I’m trying to do in my class this semester.  I had several suggestions to share it with others. It’s based on worked examples and peer instruction.

I’m teaching Python MediaComp, first time in 8 years on campus.  We have just shy of 300 students, and I have 155 in my lecture.  While I’m a big fan of worked examples, the way I’ve used them in small classes of 30-40 won’t work with 155.

Here’s what I’m doing this semester.  Every Thursday, I distribute a PDF with a bunch of code in sets, like this:

[Image: a page of small example programs from the weekly PDF]

The students are getting 12-20 little programs every Thursday.  Most students type them ALL in before lecture Friday morning at 10 am.

Then on Friday, I put up PI-like questions like this:

[Image: a PI-style question slide]

and

[Image: a second PI-style question slide]
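The actual questions are in the slide images above, but a hypothetical question in that style might look like this (my reconstruction, in MediaComp Python):

# What does this program do to the picture?
def mystery(picture):
    for p in getPixels(picture):
        setRed(p, 255 - getRed(p))

# A) Makes the whole picture brighter
# B) Inverts the red channel of every pixel
# C) Removes all red from the picture
# D) Turns the picture grayscale
# (Discuss in your group, then vote with your colored card. B is correct.)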

Students are required to work on these in groups.  I walk around the lecture hall and insist that nobody sit alone.  I get lots of questions in the five minutes when everybody’s working away.

We don’t have clickers, but I’ve given every student four colored index cards.  When I call for votes, everybody holds up the right colored card.

Here’s the interesting part — they TALK about the programs.  Here’s a question in Piazza with a student’s answer:

[Image: a Piazza question with a student’s answer]

The other instructor in the class is also using these, and he says that the students are using them after the Friday lecture as examples to study from and to use in building homework.  I’ve had lots of comments about these from students, in office hours and via email.  They find them valuable to study.

My worked examples aren’t giving them much process.  I am getting them to look at lots of programs, type them in, get them running, and think about them.  I’m pretty excited about it.  Given that I haven’t been in this class in the last 8 years, the class isn’t really “mine” anymore.  I’m trying to be sensitive to how much I change about a huge machine (14 TAs, two instructors…) that I’m only visiting in.  But everyone seems into this, and it’s fitting in pretty easily.

I have been uploading all of the PDFs, PPTs, and PYs at http://home.cc.gatech.edu/mediaComp/95, if you’re interested.  (There are some weeks missing because Atlanta actually had some winter this year.)


March 21, 2014 at 1:51 am 14 comments

The Bigger Issues in Learning to Code: Culture and Pedagogy

I mentioned in a previous blog post the nice summary article that Audrey Watters wrote (linked below) about Learning to Code trends in educational technology in 2012, when I critiqued Jeff Atwood’s position on not learning to code.

Audrey does an excellent job of describing the big trends in learning to code this last year, from Codecademy to Bret Victor and Khan Academy and MOOCs.  But the part that I liked the best was where she identified the problem that cool technology and badges won’t solve: culture and pedagogy.

This is a problem. A big problem. A problem that an interactive JavaScript lesson with badges won’t solve.

Two organizations — Black Girls Code and CodeNow — did hold successful Kickstarter campaigns this year to help “change the ratio” and give young kids of color and young girls opportunities to learn programming. And the Irish non-profit CoderDojo also ventured state-side in 2012, helping expand afterschool opportunities for kids interested in hacking. The Maker Movement, another key ed-tech trend this year, is also opening doors for folks to play and experiment with technologies.

And yet, despite all the hype and hullaballoo from online learning startups and their marketing campaigns that now “everyone can learn to code,” it’s clear there are still plenty of problems with the culture and the pedagogy surrounding computer science education.

via Top Ed-Tech Trends of 2012: Learning to Code | Inside Higher Ed.

We still do need new programming languages whose design is informed by how humans work and learn.  We still do need new learning technologies that can help us provide the right learning opportunities for individual students’ needs and can provide access to those who might not otherwise get the opportunity.  But those needs are swamped by culture and pedagogy.

What do I mean by culture and pedagogy?

Culture: Betsy DiSalvo’s work on Glitch is a great example of considering culture in computing education.  I’ve written about her work before — that she engaged a couple dozen African-American teen men in computing, by hiring them to be video game testers, and the majority of those students went on to post-secondary education in computing.  I’ve talked with Betsy several times about how and why that worked.  The number one reason why it worked: Betsy spent the time to understand the African-American teen men’s values, their culture, what they thought was important.  She engaged in an iterative design process with groups of teen men to figure out what would most appeal to them, how she could reframe computing into something that they would engage with.  Betsy taught coding — but in a different way, in a different context, with different values, where the way, context, and values were specifically tuned to her audience.  Is it worth that effort?  Yeah, because it’s about making a computing that appeals to these other audiences.

Pedagogy: A lot of my work these days is about pedagogy.  I use peer instruction in my classrooms, and try out worked examples in various ways.  In our research, we use subgoal labels to improve our instructional materials.  These things really work.
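To show what subgoal labeling looks like in a worked example, here’s a small illustration in MediaComp Python (the labels and the example are mine, not from Lauren’s App Inventor materials):

def reduceBlue(picture):
    # Subgoal 1: Get the pixels you want to change.
    for p in getPixels(picture):
        # Subgoal 2: Compute the new color value.
        newBlue = int(getBlue(p) * 0.7)
        # Subgoal 3: Set the pixel to the new value.
        setBlue(p, newBlue)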

Let me give you an example with graphs that weren’t in Lauren Margulieux’s paper, but are in the talk slides that she made for me.  As you may recall, we had two sets of instructional materials: a set of nice videos and text descriptions that Barbara Ericson built, and a similar set with subgoal labels inserted.  We found that the subgoal labelled instruction led to better performance (faster and more correct) immediately after instruction, more retention (better performance a week later), and better performance on a transfer task (got more done on a new app that the students had never seen before).  But I hadn’t shown you before just how enormous the gap was between the subgoal labelled group and the conventional group on the transfer task.

Part of the transfer task involved defining a variable in App Inventor — don’t just grab a component, but define a variable to represent that component.  The subgoal label group did that more often.  A LOT more often.

[Chart: how often each group defined a variable in the transfer task]

Lauren also noticed that the conventional group tended to “thrash,” to pull out more blocks in App Inventor than they actually needed.  The correlation between the number of blocks drawn out and correctness was r = -.349 — you are less likely to be correct (by a large amount) if you pull out extra blocks.  Here’s the graph of the number of blocks pulled out by each group.

[Chart: number of blocks pulled out by each group]

These aren’t small differences!  These are huge differences from a surprisingly small difference between the instructional materials.  Improving our pedagogy could have a huge impact.

I agree with Audrey: Culture and pedagogy are two of the bigger issues in learning to code.

December 21, 2012 at 8:47 am 7 comments

Surveying Media Computation Students: Self-Efficacy, Worked Examples, Python, and Excel

I’m back from Oxford, after an intense six weeks of teaching “Computational Freakonomics” and “Media Computation.” Since I did new things in Media Computation this term, I put together a little survey to get students’ feedback on what I did — not for research publication, but to inform me as a teacher.

It’s complicated to interpret their responses.  Only 11 of my 22 students completed my survey, so the results may not be representative of the whole class.  (The class was 10 males and 12 females. I didn’t ask about gender on the survey, so I don’t know the gender of the respondents.) The first thing I was wondering was whether the worked examples were perceived by students as helping them learn. “I found it useful to type in Python programs and figure them out at the start of class.” 4 strongly agree, 6 agree, 1 neutral.

That seems generally positive — students thought that the worked examples were useful.  How about helping with Python syntax?  “Getting the characters exactly right (the syntax of Python) was difficult.” 2 agree, 1 neutral, 8 disagree.  That’s in the right direction.

In the written portion, several students commented that they liked being able to focus on “understanding” programs “rather than just executing them.”  One student even suggested that I could have questions about the programs after they studied them, or I could have them make a change to a program afterward, to demonstrate understanding.  I loved this idea, and particularly loved that it was suggested by a student.  It suggests that they see value in understanding programming even before doing programming, while also seeing value in the doing.  This worked examples approach really does lead to a different way of thinking about introductory computer science: programs as something to study first, before designing and engineering them.

When I asked students what their favorite part of the course was, and what their least favorite part was, Excel showed up on both lists (though more often on the least favorite list).  Here’s one of the responses that stymied me: “Python is harder to learn and use than Excel.”  Could not be a more perfect bell curve — what does that mean?!?

“I wish I could have learned more Excel in this course.”  An almost perfectly uniform distribution!

Their reaction to Excel is so interesting.  On the written parts of the survey, they told me how important it was for them to learn Excel, that it was very important for their careers.  But they did not really like doing something as inauthentic (my word, not theirs) as pixel manipulation in Excel.  They wished they could have done something more useful, like computing “expenses.”

The responses above suggest a hypothesis to me: the students don’t really know how to think about Excel in relation to Python. It’s as if they’re two different things, not two forms of the same thing.  I was hoping for more of the latter, by doing pixel manipulations in both Python and Excel. This may be someplace where prior understanding influences future understanding.  I suspect that the students classify these things as:

  • “Excel is for business. It’s not for computing.  Doing pixel manipulations in Excel is just weird and painful.”
  • “Python is for computing.  I have to go through it, but it doesn’t really have much to do with my future career.”  On the statement, “Learning programming as we have in this course is not useful to me,” 3 were neutral, and 8 disagreed.  I read that as, “It’s okay. Sorta.”

Something that I always worry about: Are we helping students to develop their sense of self-efficacy in an introductory course, especially for non-majors?

“I am more confident using computers now, after taking this course.”  Quite positive: 10 agree, 1 neutral.

“I think differently about computers and how they work since taking this class.”  Could not get much more positive: 8 strongly agree, 6 agree!

And yet, “I am not the kind of person who is good with computers.”  Mostly, students agree with that: 3 strongly agree, 4 agree, 1 neutral, 3 disagree.  On average, my students still don’t see themselves as among the people who are “good” with computers.

There was lots for me to be happy about.  Some students said that the lectures on algorithmic complexity and the storage hierarchy were among their favorites; that they would have liked to have learned more about the “big questions” of CS; and that they liked writing programs.  On the statement, “I learned interesting and useful computer science in this course,” 3 students strongly agreed, and 8 agreed.  They got that this was about computer science, and some of them even found that useful.

Even in a class of only 22, even seeing them every day for hours, even with grading all their papers — I’m still surprised, intrigued, and confounded by how they think about all of this.  That’s fine by me. As a teacher and a researcher, my job isn’t done yet.

August 8, 2012 at 9:43 am 8 comments

MOOCing an analogy between teachers and John Henry: But maybe it’s students?

I wrote my monthly Blog@CACM piece this last weekend, which was a synthesis of several pieces I wrote here: About the worked examples that I’m trying out in Oxford, the PixelSpreadsheet, and contrasting the study abroad I’m teaching on and MOOCs.  I mention that I’m doing an end-of-term survey about how all this worked, and I expect to say more about those results here in the next couple weeks.

In the Blog@CACM piece, I mention an analogy I’ve been thinking about.  (Please forgive the terrible pun in the title.)  John Henry is an American folk hero who worked on the railroads “driving steel.”  Along comes the steam-powered hammer, which threatened the job of steel-drivers like John Henry.  John Henry raced the steam-powered hammer, and beat it — but suffered a heart attack and died immediately afterwards.  In some versions of the story, John Henry’s wife or son picks up his hammer and keeps driving steel.  But as we all know, the steam-powered hammer did drive the steel-drivers out of a job.

I wonder about the analogy to higher education.  The Internet makes information cheaper and easier to access.  Teachers play the role of John Henry in this analogy.  Sure, they may do a better job than that steam-powered education, but cheap and plentiful is more important than quality, isn’t it?  Taking the analogy in a different direction, the teachers who are building the new Coursera courses at Universities with no additional pay or course/work release remind me of the John Henry who suffered exhaustion and “died with a hammer in his hand.”

Colleagues who went to the Google Faculty Summit came back with stories of how MOOCs were part of the conversation there.  I heard that my advisor, Elliot Soloway, stood up to say:

 “I’m at the University of Michigan where in addition to our university we have Central Michigan, Eastern Michigan, Western Michigan, etc.  In five years, those schools will be gone.”

That’s when I realized another potential casualty in the battle over MOOCs, if Elliot is right.  My niece went to Central Michigan to get a degree in Occupational Therapy.  Today, she works with special needs children, with both physical and cognitive impairments.  There are only a couple of OT programs in the state of Michigan, and none at U-M.  Can you imagine teaching students how to provide therapy to patients with physical impairments via MOOCs?!?  (Relates to “Gas Stations Without Pumps” on what works as a Coursera course.)  How do we teach everything that we want and need to teach if only elite universities and MOOCs exist for higher education?  Is the role of John Henry in the higher education version of the analogy played by teachers (as in my original blog post), by degree programs that don’t fit these models, or by the students who seek to do something other than what the elites and MOOCs offer?

It’s over-the-top melodramatic, I admit, but that’s what makes for good folklore.  Folklore and similar stories play a useful purpose if they help us to see new perspectives.  In the vision of the world where community colleges don’t survive, who gets wiped out (besides the Colleges themselves) like John Henry?

August 3, 2012 at 2:27 am 13 comments

A Report on Worked Examples and Self-Explanations in Media Computation

I should give you a little report on how my worked examples/self-explanation intervention worked in my Media Computation class.  I have nothing close to real data, and you shouldn’t believe me if I offered any.  This is a rarified class: 22 students, meeting four days a week for 90 minutes, plus office hours for 90 minutes twice each week (that most of the students have come to), and the teacher (who is the author of the textbook) attends breakfast and dinner with the students.  I think it would be hard to get more student-teacher interaction than in this model.

That said, I would definitely do it again.  I was quite surprised at how seriously the students took the task of explaining these programs!  In retrospect, I shouldn’t have been surprised.  In most classes, aren’t students asked to analyze and explain situations, even asked to make sense of some text?  That’s exactly what I asked these students to do, and they really worked at it.  I had students coming to office hours to ask about their assigned programs, so that they could write up their one paragraph of explanation. There were things that I had to teach them about this process, e.g., teaching them to try a program with different data sets, to make sure that the odd result they got wasn’t an anomaly.  I gave them feedback (every single student, on every single program) about the quality of their explanations, and the explanations definitely got better over time.

The real benefit was that they were trying to understand some relatively complicated code before it was their own code that they were trying to understand (while also designing and debugging it, all before a deadline).   With the worked examples tasks, they were just trying to understand.  There clearly was a reduction in cognitive load.  Variations on the below program had lots of students coming to see me — combining sounds at different rates was a challenging idea, but students did a good job of getting a grasp on it:

def modifysound2(sound):
    # Make a target sound twice as long as the input, plus a bassoon note to mix in.
    retsound = makeEmptySound(2 * getLength(sound))
    newsound = makeSound(getMediaPath("bassoon-c4.wav"))
    trgi = 0   # index into the target sound
    nsi = 0    # index into the bassoon sound, advancing at half speed
    for i in range(getLength(sound)):
        value = getSampleValueAt(sound, i)
        if nsi < getLength(newsound):
            nsvalue = getSampleValueAt(newsound, int(nsi))
        else:
            nsvalue = 0
        # Mix the two sounds by adding their sample values.
        setSampleValueAt(retsound, trgi, value + nsvalue)
        trgi = trgi + 1
        nsi = nsi + 0.5   # half-speed step stretches the bassoon (lower pitch)
    return retsound

Because there were four labs (that just involved explaining programs) and two homeworks (that involved typing in, executing, and explaining programs), the first real programming assignment was the collage assignment.  Everybody did it.  Everybody turned in a working program.  And some of these were huge.  This one (by Savannah Andersen) was over 100 lines of code:

This one, by Julianne Burch, is over 200 lines of code.  I’m posting shrunk versions here: Julianne’s is about 4000 pixels across, representing the travel portion of this study abroad program.

I suspect that the worked examples and self-explanations gave the students more confidence than they normally have when facing their first programs.  It’s unusual in my experience for students to be willing to write 50-200 lines of working code for their first programming assignment.

But some of these students were also getting it.  A few of my students realized that they could make their collages more easily by using a copy() method to reduce the complication of composing pictures.  I did prompt them to do that, and a few did — most just went with hard-coded FOR loops, because that was easier for them to understand.  When I described how to do that, one student asked, “Aren’t you just naming some of those lines of code?” Yes! A nice way to start thinking about functions and abstraction: it’s about naming chunks of code.  One of my students, without prompting, also decided to create a copy() method for her sound collage.  They’re starting to grapple with abstraction.  Given that this is the third week of class, when none of them had any previous programming experience (all my students are liberal arts and management students), I think that they’re doing quite well at moving from notation into abstraction.
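For the curious, here’s a sketch of the kind of copy() helper I prompted them toward (the name and signature are mine, not the students’ code):

def copyInto(source, target, targetX, targetY):
    # Copy every pixel of the source picture into the target picture,
    # starting at (targetX, targetY): one named chunk of code instead
    # of a hard-coded pair of FOR loops for every picture in the collage.
    for x in range(getWidth(source)):
        for y in range(getHeight(source)):
            color = getColor(getPixel(source, x, y))
            setColor(getPixel(target, targetX + x, targetY + y), color)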

They’re working on their first midterm exam now, a take-home exam (to save classroom time).  I think it’s significantly challenging for a first exam, but it doesn’t have much coding.  It has a lot of analysis of code, because that’s one of the key learning objectives.  I want them to be able to look at a piece of code and predict its behavior, to trace (if necessary) what’s going on.  For me, that’s a more important outcome from a first course than being able to write a lot of code.

July 16, 2012 at 4:53 am 10 comments

Inventing a Worked Examples and Self-Explanation Method for CS Courses

I sent this idea to the mediacomp-teach mailing list, and got a positive response.  I thought I’d share it here, too.

I’m trying a worked examples + self-explanations approach in my Media Computation Python class that started Monday (first time I’ve taught it in seven years!) and in my “Computational Freakonomics” class (first time I’ve taught it in six years).  Whether you’re interested in this method or not, you might like to use the resource that I’ve created.

As I mentioned here, I’m fascinated by the research on worked examples and on self-explanations. The idea behind worked examples is that we ought to have students see more fully worked out examples, with some motivation to actually study them. The idea behind self-explanations is that learning and retention are improved when students explain something to themselves (or others), in their own words.  Pete Pirolli did studies where he had students use worked examples to study computer science (specifically, recursion), and with Mimi Recker, prompted CS students to self-explain and then studied the effect.  In their paper, Pirolli and Recker found:

“Improvement in skill acquisition is also strongly related to the generation of explanations connecting the example material to the abstract terms introduced in the text, the generation of explanations that focus on the novel concepts, and spending more time in planning solutions to novel task components. We also found that self-explanation has diminishing returns.”

Here’s the critical idea: students (especially novices) need to see more examples, and they need to try to explain them.  This is what I’m doing at key points in the class:

  • Each team of two students gets one worked example in class. They have to type it in (to make sure that they notice all the details) and explain it to themselves – what does it do? how does it work?
  • Each team then explains it to the teams on either side of them.
  • At the end of the class, each individual takes one worked example, and does the process themselves: types it in, pastes it into a Word document (with an example of the output), and explains what the program does.  I very explicitly encourage them to do this with others, and talk about their programs with one another.  I want students to see many examples, and talk about them.

Sure, our book has many examples in it, but how many students actually look at all those examples? How many type them in and try them?  Explain them to themselves?

I’m doing this at four points in the MediaComp class: for images with getPixels, images with coordinates, sounds, and text and lists. For my CompFreak class, students are supposed to have had some CS1, and most of them have seen Python at least once, so I’m only doing this at the beginning of the class, and only on text and lists.  There are 22 students in my MediaComp class, so I needed 11 examples in class, then 22 examples one-for-each-person. Round it off to 35 examples. That’s 140 working examples. A lot of them vary in small ways — that’s on purpose. I wanted two teams to say, “I think our program is doing about the same thing as yours — what’s different?”
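For instance, two teams might get this hypothetical pair of programs that differ in just one line (my illustration, not from the actual handouts):

def increaseRed(picture):
    for p in getPixels(picture):
        setRed(p, getRed(p) + 50)

def increaseRed2(picture):
    for p in getPixels(picture):
        setRed(p, int(getRed(p) * 1.5))   # scales the red instead of adding to it

The two programs look almost the same and often produce similar pictures, which is exactly what prompts the “what’s different?” conversation.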

I did discover some effects that surprised me. For example, try this:

def changesound(sound):
    for sample in getSamples(sound):
        value = getSampleValue(sample)
        if value > 0:
            setSampleValue(sample, 4 * value)
        if value <= 0:
            setSampleValue(sample, 0)

It turns out that if you zero out all the negative samples, you can still hear the sound pretty clearly.  I wouldn’t have guessed this.

Whether you want to try this example-heavy approach or not, you might find all these examples useful.  I’ve put all 140 examples on the teacher MediaComp sharing site (http://home.cc.gatech.edu/mediacomp/9 — email me if you want the key phrase and don’t have it). I started creating these in Word, but that was tedious to format well. I switched to LaTeX, because that nicely formatted the Python without much effort on my part. I’ve uploaded both the PDF and the LaTeX, since the LaTeX provides easy copy-paste text.

My CompFreak students are doing their assignment now (due tonight), and we just did it for the first time in the MediaComp class today (the take-home portion due in two days).  I was pleased with the feedback.  I got lots of questions about details that students don’t normally ask about at the second lecture (e.g., “Is makeColor doing something different than setRed, setGreen, and setBlue? What’s the difference between colors and pixels?”).  My hope is that, when they start writing their own code next week, they won’t be stymied by stupid syntax errors, because they will have struggled with many of the obvious ones while working with complete code.  I’m also hoping that they’ll be more capable in understanding (and thus, debugging) their own code.  Most fun: I had to throw the students out of class today.  Class ended at 4:10, and we had a faculty meeting at 4:30.  Students stayed on, typing in their code, looking at each others’ effects.  At 4:25, I shooed them off.

I am offering extra credit for making some significant change (e.g., not just changing variable names) to the example program, and turning that in, too (with explanation and example).  What I didn’t expect is that they’re relating the changes to code we’ve talked about, like in this comment from a student that just got turned in:

“I realized I made an error in my earlier picture so I went back and fixed it. I also added in another extra credit picture. I made a negative of the photo. It looks pretty cool!”

It’s interesting to me that she explicitly decided to “make a negative” (and integrated the code to do it) rather than simply adding/changing a constant somewhere to get the extra credit cheaply.

All my MediaComp students are Business and Liberal Arts students (and the class is 75% female — while CompFreak is 1 female and 9 males).  I got a message from one of the MediaComp students yesterday, asking about some detail of the class, where she added: “We all were pleasantly surprised to have enjoyed class yesterday!”  I take the phrase “pleasantly surprised” to mean that the expectations were set pretty low.

June 27, 2012 at 1:56 am 16 comments

Practice is better for learning facts, worked examples are better for learning skills

Fascinating piece in US News and World Report on the LearnLab work at Carnegie Mellon University.  Since I’m exploring worked examples research and its implications for CS education these days, I found the section of the interview with Ken Koedinger below intriguing.  Practice helps you learn facts, but worked examples help you learn skills.  Isn’t learning to program mostly about learning skills?  We should be providing lots more worked examples of programming (not just the code — the process) to teach programming skills.

In math, for example, traditionally, students receive a list of math problems to solve. But this approach “gives novice learners too little support in constructing new knowledge,” Koedinger says. “It’s not as effective as replacing about half of those problems with example solutions. Rather than guessing their way through problems, these worked-out examples allow students to focus on grasping the thinking needed so they can solve future problems on their own.”

Thus, “if every other problem contains a step-by-step solution, students learn more robust skills,” he adds. “Even better is adaptive computer-based practice that adjusts to individual students, providing more worked-out solution steps initially, but then gradually challenging a student with more problems as he or she increases in understanding and skill.”

But Koedinger is quick to point out that using more worked examples is not the answer for all learning goals. “They are best for skills, but pure practice is better for facts,” he says. “For deeper concepts and principles, more emphasis on providing explanations is important, but should these explanations simply be given to students?”

via LearnLab Explores Teaching and Learning – US News and World Report.
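What might a worked example of the process (and not just the final code) look like? Here’s a sketch in MediaComp Python, with the development steps recorded as comments (my illustration, not LearnLab’s materials):

# Step 1: Write the function skeleton: take a sound, do nothing yet.
# Step 2: Add a loop over all the samples.
# Step 3: Inside the loop, make the change you want (here, doubling the volume).
def louder(sound):
    for sample in getSamples(sound):                        # Step 2
        setSampleValue(sample, 2 * getSampleValue(sample))  # Step 3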

April 4, 2012 at 6:56 am 18 comments

A new kind of program visualization tool: Making the student trace

I’m very excited about this new tool, UUhistle. It supports exactly the kind of student activity that I was thinking would be great as the practice component of exploring a bunch of programs in a worked examples curriculum.

Visualizing a program’s execution can aid understanding, but research suggests that visualizations are more effective when learners are actively engaged in manipulating or creating them. To this end, UUhistle supports a novel kind of highly interactive visualization-based activity, the visual program simulation exercise (or a VPS exercise for short).

In a VPS exercise, the student has to ‘do the computer’s job’: read given code and execute its statements in the appropriate order, allocating and using memory to keep track of program state. UUhistle provides the graphical elements that the student directly manipulates to indicate what happens during execution, and where, and when. Any aspect of execution that UUhistle can display can also serve as part of a VPS exercise: the student can create variables and objects in memory, evaluate expressions, assign values, manipulate the call stack, pass parameters and so forth. For instance, to assign a value from a variable to another, the student drags the corresponding graphical element with the mouse from the source variable into the target variable.

via UUhistle.org.
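A text-only analogue of a VPS exercise might look like the sketch below (plain Python; this is my approximation of the activity, not UUhistle’s actual interface):

# The student 'does the computer's job': after each statement, they predict
# the full variable state, and the sketch checks the prediction by executing it.
program = ["x = 3", "y = x + 2", "x = y * 2"]
state = {}
for line in program:
    guess = input("After '%s', what is the state (e.g. x=3,y=5)? " % line)
    exec(line, {}, state)
    expected = ",".join("%s=%s" % (k, v) for k, v in sorted(state.items()))
    if guess.replace(" ", "") == expected:
        print("Correct!")
    else:
        print("Not quite. The state is actually: " + expected)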

September 29, 2010 at 4:05 pm 16 comments

Recursion by Pirolli (1991)

I heard Greg Wilson’s request for me to talk about the papers I’m reading (especially since I owe him a chapter which is a review of literature), so I thought I’d talk about one that Barb and I have been thinking a lot about lately: Effects of Examples and Their Explanations in a Lesson on Recursion: A Production System Analysis by Peter Pirolli (1991), in Cognition and Instruction, 8(3), 207-259.  In this paper, Pirolli describes two studies where he explores what kinds of examples are useful in learning to write recursive functions, and how the characteristics of an example influence what errors students make when they write their own recursive functions.  It’s a dense paper, with some sophisticated quantitative analysis of error rates.

I was interested in this paper as one of the first in computer science to build upon Sweller’s worked examples research.  Pirolli was explicitly trying to understand the role of examples in problem-solving and the usefulness of different kinds of examples.  The first interesting tidbit that I got from this paper is how many examples Pirolli sees as necessary to learn the basics of the language.  He’s teaching students to write recursive functions with a version of Lisp (called “SIMPLE”) with only 7 primitives; the paper shows example SIMPLE programs.

During the training phase, when he’s just bringing people up to speed on these primitives, he provides 8 examples for each of the 7 primitives.  56 individual examples (where he shows the primitive applied to a list, the student is asked to guess the result, and if the student fails, the system provides the result) is a lot of examples just to get students familiar with the language.  When you teach CS1, do you show 8 examples of for loops before students try to use them in an assignment?

The most interesting lesson I learned about recursive examples from this paper comes from the two conditions in his first experiment.  In one condition, the recursion examples that students work through are about how to write a recursive function (e.g., “here’s the base case” and “here’s how it recurses”).  In the other condition, the recursion examples are about the dynamics of how it works (e.g., “first there’s this call, then the same function is called but with this input, then the result is returned…”).

Here’s the bottom line of what he finds: getting students through the “how to write” examples took on average 57 minutes, while the “how it works” examples took an average of 85 minutes.  There was no statistical difference in performance on a post-test on writing recursive functions, though the “how to write” group had slightly fewer errors.

Even more intriguing is the discussion where Pirolli relates these findings to others in John Anderson’s group at the time which suggest, “that knowledge about how recursive functions are written is different from knowledge about how they work” and “that there is little transfer of how-it-works knowledge to function-writing tasks and, more interestingly, that extensive additional practice with simulating the evaluation of programs yields no significant benefit in debugging tasks when compared with extensive practice just coding programs.”  Writing code and tracing code are completely different tasks.

Barb is helping to teach an AP CS class this semester, and she’s teaching recursion right now.  She’s basing how she teaches recursion on Pirolli’s results.  Her first activities have the students looking at recursive functions and highlighting the base case and the recursive call — just figure out the structure.  Then they write some recursive functions. This is Pirolli’s Experiment #1 process, which takes students less time, giving them an early success with less frustration.  Next, she’ll get into debugging recursive functions, which Pirolli suggests is really a different task entirely.
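Here’s the kind of structure-identification exercise that first activity describes, sketched in Python (the example is mine, not from her class materials):

def sumList(nums):
    if nums == []:                          # BASE CASE: nothing left to add
        return 0
    return nums[0] + sumList(nums[1:])      # RECURSIVE CALL on a smaller list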

Pirolli’s paper isn’t the definitive statement on teaching recursion or using worked examples.  If it was, he wouldn’t have gone on to write several more papers, including several with his students at Berkeley on using examples to learn recursion.  It is a nice paper that provides good evidence with some practical implications for how we teach.

March 31, 2010 at 8:47 am 19 comments

