Posts tagged ‘worked examples’

Come to my workshop on CS Education at ASEE June 16!

I am attending my first American Society for Engineering Education (ASEE) Conference this year — see the website here: https://www.asee.org/conferences-and-events/conferences/annual-conference/2019.

I’m still figuring out Engineering Education Research, so I’ll be offering a workshop based on our work at Georgia Tech: Techniques for Improved Engagement and Learning of Programming. The workshop is Sunday, June 16, 2019 from 9:00 am to noon. Please come, and please pass this on to others you know who are attending ASEE and might be interested.

Computing education research at Georgia Tech over the last 15 years has led to techniques for teaching programming that improve student learning. Learning is enhanced through greater engagement and reduced cognitive load.

These techniques are:

  • Media computation: Teaching programming through the manipulation of digital media, which improves students’ sense of utility and relevance, leading to greater engagement (see the sketch after this list);
  • Worked examples: Using worked examples in peer instruction and as prompts for prediction, both of which improve learning;
  • Subgoal labeling: Structuring and labeling worked examples to improve immediate learning, retention over time, and transfer to new problems.
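
To give a flavor of media computation, here is a minimal sketch in the JES-style Python used in the Media Computation materials (a sketch only: treat the exact function signatures as assumptions, and the picture file name as hypothetical):

    def makeSunset(picture):
        # Warm the image by reducing the blue and green in every pixel.
        for pixel in getPixels(picture):
            setBlue(pixel, getBlue(pixel) * 0.7)
            setGreen(pixel, getGreen(pixel) * 0.7)

    picture = makePicture("beach.jpg")  # hypothetical picture file
    makeSunset(picture)
    show(picture)

A few lines of code produce a visible effect on a photograph the student chose, and that is where the engagement comes from.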

The learning objectives for this workshop are for participants to experience these techniques so that they might be able to judge which are most useful for their own practice. Participants will:

  • Manipulate digital media with programs that they write during the workshop (laptops required).
  • Participate in peer instruction questions using worked examples.
  • Compare worked examples with and without subgoal labeling.

 

February 1, 2019 at 7:00 am Leave a comment

An Ebook Integrating Minimal Manuals with Constructionism, Worked Examples, and Inquiry: MOHQ

Our computing education research group at Georgia Tech has been developing and evaluating ebooks for several years (see this post with discussion of some of them). We publish on them frequently, with a new paper just accepted to ICER 2016 in Melbourne. We use the Runestone Interactive platform, which allows us to create ebooks with many different kinds of learning activities — not just editing and running code (though that is included), but a range of activity types, which I’ve been arguing for a while is really important to support a range of abilities and motivations.

It’s a heavyweight platform. I have been thinking about alternative models of ebooks — maybe closer to e-pamphlets. Since I was working with GP (see previous post) and undergraduate David Tran was interested in working with me on a GP project, we built a prototype of a minimalist medium for learning CS. I call it a MOHQ: Minimal manual Organized around Hypertext Questions: http://home.cc.gatech.edu/gpblocks. (Suggestion: Use Firefox if you can when playing with browser-based GP; it’s WAY faster at JavaScript execution than either Chrome or Safari on my Mac.)

Minimal Manuals

John Carroll came up with the idea of minimal manuals back in the 1980s (see the earliest paper I found on the idea). The goal is to help people use complicated computing devices with a minimum of overhead. Each page of the manual starts with a task, something that a user would want to do, and puts all the instruction for achieving that task on that one page.

The idea of minimalist instruction is described here: http://www.instructionaldesign.org/theories/minimalism.html.

The four principles of minimal instruction design are:

  1. Allow learners to start immediately on meaningful tasks.
  2. Minimize the amount of reading and other passive forms of training by allowing users to fill in the gaps themselves.
  3. Include error recognition and recovery activities in the instruction.
  4. Make all learning activities self-contained and independent of sequence.

There’s good evidence that minimal manuals really do work (see http://doc.utwente.nl/26430/1/Lazonder93minimal.pdf). Learners become more productive more quickly with minimal manuals, with surprisingly high scores on transfer and retention. A nice attribute of minimal manuals is that they’re geared toward success, which likely increases self-efficacy; low self-efficacy is a significant problem in CS education.

The goal of most minimal instruction is to be able to do something. What about learning conceptual knowledge?

Adding Learning Theory: Inquiry, Worked Examples, and Constructionism

I started exploring minimal manuals as a model for designing CS educational media after a challenge from Alan Kay. Alan asked me to think about how we would teach people to be autodidacts. One of the approaches used to encourage autodidactism is inquiry-based learning. Could we structure a minimal manual around the questions that students might have, or that we want them to ask themselves?

We structure our Runestone ebooks around an Examples+Practice framework. We provide a worked example (typically executable code, but sometimes a program visualization), and then ask (practice) questions about that example. We provide one or two practice exercises for every example. Based on Lauren Margulieux’s work, the point of the practice is to get students to think about the example, to engage with it, and to explain it to themselves. It’s less important that they actually complete the questions — I want the students to read the questions and think about them, and Lauren’s work suggests that even the feedback may not be all that important.

Finally, one of the aspects that I like about Runestone is that every active code area contains a complete Python interpreter. Modify the code any way you want. Erase all of it and build something new if you want. It’s constructionist. We want students to construct with the examples and go beyond them.
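
To make the Examples+Practice pattern concrete, here is a sketch of what a page might pair together. This is my illustration in plain Python, not an actual page from our ebooks:

    # Worked example: count how many grades pass a threshold.
    grades = [72, 95, 88, 61, 79]
    passing = 0
    for grade in grades:
        if grade >= 70:
            passing = passing + 1
    print(passing)

    # Practice question: What does this program print?
    #   (a) 3   (b) 4   (c) 5   (d) 61
    # Predict an answer before you run it, then run it and check.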

MOHQ: Minimal Manual Organized around Hypertext Questions

The prototype MOHQ that David Tran and I built (http://home.cc.gatech.edu/gpblocks) is an implementation of this integration of minimal manuals with constructionism, inquiry, and worked examples. Each page in the MOHQ:

  • Starts with a question that a student might be wondering about.
  • Offers a worked example in a video.
  • Offers the opportunity to construct with the example project.
  • Asks one or two practice questions, to prompt thinking about the project.

Using the minimal design principles to structure the explanation:

#1: Allow learners to start immediately on meaningful tasks.

The top page offers several questions that I hope are interesting to a student. Every page offers a project that aims to answer that question. GP is a good choice here because it’s blocks-based (low cognitive load) and I can do MediaComp in it (which is what I wanted to teach in this prototype).

#2: Minimize the amount of reading and other passive forms of training by allowing users to fill in the gaps themselves.

Each page has a video of David or me solving the problem in GP. Immediately afterward is a link to jump directly into the GP project exactly where the video ended. Undo something, redo something, start over and build something else. The point is to watch a video (where we try to explain what we’re doing, but we’re certainly not filling in all the gaps), then figure out how it works on your own.

Then we offer a couple of practice questions to challenge the learner: Did you really understand what was going on here?

#3: Include error recognition and recovery activities in the instruction.

Error recovery is easy when everything is in the browser — just hit the back button. You can’t save. You can’t damage anything. (We tell people this explicitly on every page.)

#4: Make all learning activities self-contained and independent of sequence.

This is the tough one. I want people to actually learn something in a MOHQ: that pixels have red, green, and blue components; that chromakey is about replacing one color with a background image; that removing every other sample increases the frequency of a sound — and more general ideas, e.g., that elements in a collection can be referenced by index number.
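
For concreteness, here are hedged sketches of two of those ideas in JES-style Media Computation Python (the function names come from the JES libraries, but treat the exact signatures as assumptions):

    def chromakey(person, background):
        # Replace every "green screen" pixel in person with the
        # corresponding pixel from the background picture.
        for pixel in getPixels(person):
            if getGreen(pixel) > getRed(pixel) + getBlue(pixel):
                bgPixel = getPixel(background, getX(pixel), getY(pixel))
                setColor(pixel, getColor(bgPixel))

    def doubleFrequency(sound):
        # Keep every other sample; the result plays an octave higher.
        target = makeEmptySound(getLength(sound) // 2)
        for index in range(0, getLength(target)):
            setSampleValueAt(target, index, getSampleValueAt(sound, index * 2))
        return target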

So, all the driving questions from the home page start with, “Okay, you can just dive in here, but you might want to first go check out these other pages.” You don’t have to, but if you want to understand better what’s going on here, you might want to start with simpler questions.

We also want students to go on — to ask themselves new questions, to go try other projects. After each project, we offer some new questions that we hope that students might ask themselves. The links are explicitly prompts. “You might be thinking about these questions. Even if you weren’t, you might want to. Let’s see where we can explore next.”

Current Prototype and What Comes Next

Here’s the map of pages that we have out there right now. We built it in a Wiki, which facilitated creating the network of pages that we wanted. This isn’t a linear book.

[Figure: map of the full network of MOHQ pages]

There are maybe a dozen pages out there, but even at that relatively small size, it took most of a semester to pull these together. Producing the videos and building these pages by hand (even in a Wiki) was a lot of work. The tough part was that every time we changed our minds about something, we had to go back through all of the previously built pages and update them. Since this is a prototype (i.e., we didn’t know what we wanted when we started), that happened quite often. If we were going to add more to the GP MOHQ, I’d want to use a tool for generating pages from a database, as we did with STABLE, the Smalltalk Apprenticeship-Based Learning Environment.

I would appreciate your thoughts about MOHQ. Call this an expert review of the idea.

  • Thumbs-up or down? Worth developing further, or a bad direction?
  • What do you think is promising about this idea?
  • What would we need to change to make it more effective for student learning?

June 15, 2016 at 7:35 am 11 comments

Cognitive Load as a Significant Problem in Learning Programming: Briana Morrison’s Dissertation Proposal

Briana Morrison is defending her proposal today.  One chapter of her work is based on her ICER 2015 paper that won the Chairs Award for best paper (see post here). Good luck, Briana!

Title: Replicating Experiments from Educational Psychology to Develop Insights into Computing Education: Cognitive Load as a Significant Problem in Learning Programming

Briana Morrison
Ph.D. student
Human Centered Computing
College of Computing
Georgia Institute of Technology

Date: Wednesday, November 11, 2015
Time: 2 PM to 4 PM EST
Location: TSRB 223

Committee
————–
Dr. Mark Guzdial, School of Interactive Computing (advisor)
Dr. Betsy DiSalvo, School of Interactive Computing
Dr. Wendy Newstetter, School of Interactive Computing
Dr. Richard Catrambone, School of Psychology
Dr. Beth Simon, Jacobs School of Engineering at University of California San Diego and Principal Teaching and Learning Specialist, Coursera

Abstract
———–
Students often find learning to program difficult. This may be because the elements of learning to program are highly interconnected, which makes the concepts inherently difficult. Instructors may be able to lower the complexity of learning to program by designing instructional materials that use educational psychology principles.

The overarching goal of this research is to gain more understanding of, and insight into, the optimal conditions under which learning programming can be successful, where success is defined as students being able to apply their acquired knowledge and skills in new or familiar problem-solving situations. Cognitive load theory (CLT), and its associated effects, describe the role of the learner’s memory during the learning process. By minimizing undesirable loads within the instructional materials, the learner’s memory can hold more relevant information, thereby improving the effectiveness of the learning process.

This proposal uses cognitive load theory to improve learning in programming. First, an instrument for measuring cognitive load components within introductory programming was developed and initially validated. We have explored reducing cognitive load by changing the modality in which students receive the learning material; this had no effect on novices’ retention of knowledge or their ability to transfer knowledge. We then attempted to reduce cognitive load by adding subgoal labels to the instructional material, which had some effect on learning gains under some conditions: students who learned using subgoal labels demonstrated higher learning gains than the other conditions on the programming assessment task. We also explored using a low cognitive load assessment task, a Parsons problem, to measure learning gains. This low cognitive load assessment task proved more sensitive than the open-ended programming assessment tasks in capturing student learning. Students who were given subgoal labels, regardless of context transfer condition, outperformed those in the other conditions.

In my final, proposed study, I change how we teach a programming construct, altering both its format and content, in order to reduce cognitive load. The changed construct is presumed to be a more natural cognitive fit for students, based on previous research.

November 11, 2015 at 8:48 am 4 comments

Sorting Is Boring: Computing Education Needs to Join the Real World, like MediaComp and worked examples 

I agree that we get it backwards in computing education. We ought to do more with worked examples (a form of “word problems”) — see the argument here. The point of Media Computation has always been to focus on relevance — what the students think that a computer is good for, not what the CS teacher thinks is interesting (see that argument here).

There are people who love math for math’s sake and devote themselves to proving 1 + 1 = 2. There are more people, however, who enjoy using math to prescribe medication and build skyscrapers. In elementary school, we use word problems to show why it’s useful to add fractions (ever want to split that blueberry pie?) or find the perimeter of a square. We wait until college, when math majors choose to devote four years towards pure math, to finally set aside the word problems and focus on theory. We do so because math is a valuable skill that is used in so many different professions and contexts, and we don’t want kids to give up on math because they don’t think it’s useful.

So, why does computer science start with theory and end with word problems?

via Sorting Is Boring: Computer Science Education Needs to Join the Real World | Jessie Duan.

March 7, 2015 at 8:29 am 30 comments

A kind of worked examples for large classrooms

I passed on to the MediaComp-Teach list something I’m trying in my class this semester, and several people suggested that I share it more widely. It’s based on worked examples and peer instruction.

I’m teaching Python MediaComp for the first time in 8 years on campus. We have just shy of 300 students, and I have 155 in my lecture. While I’m a big fan of worked examples, the way I’ve used them in small classes of 30-40 won’t work with 155.

Here’s what I’m doing this semester.  Every Thursday, I distribute a PDF with a bunch of code in sets, like this:

[Figure: a page from the Thursday handout, showing one set of small example programs]

The students are getting 12-20 little programs every Thursday.  Most students type them ALL in before lecture Friday morning at 10 am.
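
The real sets are in the PDFs I link at the end of this post; just to give the flavor, one of the little programs might look something like this (JES-style Media Computation Python, with a hypothetical picture file):

    def decreaseRed(picture):
        # Halve the red value of every pixel.
        for pixel in getPixels(picture):
            setRed(pixel, getRed(pixel) * 0.5)

    pic = makePicture("barbara.jpg")
    decreaseRed(pic)
    show(pic)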

Then on Friday, I put up peer instruction (PI) style questions like these:

[Slide: a peer instruction question about one of the example programs]

and

[Slide: a second peer instruction question about the same set of programs]
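
The actual questions are in the slides; a hypothetical question in the same spirit would be:

    def mystery(picture):
        for pixel in getPixels(picture):
            setRed(pixel, 255 - getRed(pixel))

    # Which best describes what mystery does?
    #   (a) Makes the picture darker everywhere
    #   (b) Inverts just the red channel of every pixel
    #   (c) Removes all red from the picture
    #   (d) Nothing: the picture is unchanged
    # Commit to an answer, then convince your group.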

Students are required to work on these in groups.  I walk around the lecture hall and insist that nobody sit alone.  I get lots of questions in the five minutes when everybody’s working away.

We don’t have clickers, but I’ve given every student four colored index cards. When I call for votes, everybody holds up the card matching their answer.

Here’s the interesting part — they TALK about the programs.  Here’s a question in Piazza with a student’s answer:

[Screenshot: a question about one of the example programs posted on Piazza, with a student’s answer]

The other instructor in the class is also using these, and he says that the students are using them after the Friday lecture as examples to study from and to use in building homework.  I’ve had lots of comments about these from students, in office hours and via email.  They find them valuable to study.

My worked examples aren’t giving students much explicit process, but I am getting them to look at lots of programs, type them in, get them running, and think about them. I’m pretty excited about it. Given that I haven’t taught this class in the last 8 years, the class isn’t really “mine” anymore, so I’m trying to be sensitive to how much I change about a huge machine (14 TAs, two instructors…) that I’m only visiting in. But everyone seems into this, and it’s fitting in pretty easily.

I have been uploading all of the PDFs, PPTs, and PYs at http://home.cc.gatech.edu/mediaComp/95, if you’re interested. (There are some weeks missing because Atlanta actually had some winter this year.)

 

March 21, 2014 at 1:51 am 14 comments

The Bigger Issues in Learning to Code: Culture and Pedagogy

I mentioned in a previous blog post the nice summary article that Audrey Watters wrote (linked below) about Learning to Code trends in educational technology in 2012, when I critiqued Jeff Atwood’s position on not learning to code.

Audrey does an excellent job of describing the big trends in learning to code this last year, from Codecademy to Bret Victor and Khan Academy and MOOCs. But the part that I liked best was where she identified the problem that cool technology and badges won’t solve: culture and pedagogy.

This is a problem. A big problem. A problem that an interactive JavaScript lesson with badges won’t solve.

Two organizations — Black Girls Code and CodeNow — did hold successful Kickstarter campaigns this year to help “change the ratio” and give young kids of color and young girls opportunities to learn programming. And the Irish non-profit CoderDojo also ventured state-side in 2012, helping expand afterschool opportunities for kids interested in hacking. The Maker Movement, another key ed-tech trend this year, is also opening doors for folks to play and experiment with technologies.

And yet, despite all the hype and hullabaloo from online learning startups and their marketing campaigns that now “everyone can learn to code,” it’s clear there are still plenty of problems with the culture and the pedagogy surrounding computer science education.

via Top Ed-Tech Trends of 2012: Learning to Code | Inside Higher Ed.

We still do need new programming languages whose design is informed by how humans work and learn. We still do need new learning technologies that can help us provide the right learning opportunities for individual students’ needs and can provide access to those who might not otherwise get the opportunity. But those needs are swamped by culture and pedagogy.

What do I mean by culture and pedagogy?

Culture: Betsy DiSalvo’s work on Glitch is a great example of considering culture in computing education. I’ve written about her work before — that she engaged a couple dozen African-American teen men in computing, by hiring them to be video game testers, and the majority of those students went on to post-secondary education in computing. I’ve talked with Betsy several times about how and why that worked. The number one reason why it worked: Betsy spent the time to understand the African-American teen men’s values, their culture, what they thought was important. She engaged in an iterative design process with groups of teen men to figure out what would most appeal to them, how she could reframe computing into something that they would engage with. Betsy taught coding — but in a different way, in a different context, with different values, where the way, context, and values were specifically tuned to her audience. Is it worth that effort? Yeah, because it’s about making a computing that appeals to these other audiences.

Pedagogy: A lot of my work these days is about pedagogy.  I use peer instruction in my classrooms, and try out worked examples in various ways.  In our research, we use subgoal labels to improve our instructional materials.  These things really work.
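
To show what I mean by subgoal labels, here is a hedged sketch in plain Python (not our actual materials): the steps of a worked example are grouped and given names, so that students can carry the same labels over to new problems.

    # Subgoal: initialize the accumulator
    total = 0

    # Subgoal: loop over the data
    scores = [3, 7, 2, 9]
    for score in scores:
        # Subgoal: update the accumulator
        total = total + score

    # Subgoal: report the result
    print(total)  # prints 21

Summing purchases or averaging grades reuses the same subgoal structure, even though the code details change; that mapping is what supports transfer.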

Let me give you an example with graphs that weren’t in Lauren Margulieux’s paper, but are in the talk slides that she made for me. As you may recall, we had two sets of instructional materials: a set of nice videos and text descriptions that Barbara Ericson built, and a similar set with subgoal labels inserted. We found that the subgoal-labelled instruction led to better performance (faster and more correct) immediately after instruction, more retention (better performance a week later), and better performance on a transfer task (students got more done on a new app that they had never seen before). But I hadn’t shown you before just how enormous the gap was between the subgoal-labelled group and the conventional group on the transfer task.

Part of the transfer task involved defining a variable in App Inventor — don’t just grab a component, but define a variable to represent that component. The subgoal label group did that more often. A LOT more often.

[Figure: percentage of participants in each group who defined a variable on the transfer task]

Lauren also noticed that the conventional group tended to “thrash,” pulling out more blocks in App Inventor than they actually needed. The correlation between the number of blocks drawn out and correctness was r = -.349 — you are less likely to be correct (by a large amount) if you pull out extra blocks. Here’s the graph of the number of blocks pulled out by each group.

[Figure: number of blocks pulled out by each group on the transfer task]

These aren’t small differences!  These are huge differences from a surprisingly small difference between the instructional materials.  Improving our pedagogy could have a huge impact.

I agree with Audrey: Culture and pedagogy are two of the bigger issues in learning to code.

December 21, 2012 at 8:47 am 7 comments

Surveying Media Computation Students: Self-Efficacy, Worked Examples, Python, and Excel

I’m back from Oxford, after an intense six weeks of teaching “Computational Freakonomics” and “Media Computation.” Since I did new things in Media Computation this term, I put together a little survey to get students’ feedback on what I did — not for research publication, but to inform me as a teacher.

It’s complicated to interpret their responses. Only 11 of my 22 students completed my survey, so the results may not be representative of the whole class. (The class was 10 males and 12 females. I didn’t ask about gender on the survey, so I don’t know the gender of the respondents.) The first thing I was wondering was whether the worked examples were perceived by students as helping them learn. “I found it useful to type in Python programs and figure them out at the start of class.” 4 strongly agree, 6 agree, 1 neutral.

That seems generally positive — students thought that the worked examples were useful.  How about helping with Python syntax?  “Getting the characters exactly right (the syntax of Python) was difficult.” 2 agree, 1 neutral, 8 disagree.  That’s in the right direction.

In the written portion, several students commented that they liked being able to focus on “understanding” programs “rather than just executing them.” One student even suggested that I could ask questions about a program after they studied it, or have them make a change to the program afterward, to demonstrate understanding. I loved this idea, and particularly loved that it was suggested by a student. It indicates that students see value in understanding programs even before writing them, while seeing value in writing them, too. This worked examples approach really does lead to a different way of thinking about introductory computer science: programs as something to study first, before designing and engineering them.

When I asked students what their favorite and least favorite parts of the course were, Excel showed up on both lists (though more often as a least favorite). Here’s one of the questions whose responses stymied me: “Python is harder to learn and use than Excel.” The responses could not be a more perfect bell curve — what does that mean?!?

“I wish I could have learned more Excel in this course.”  An almost perfectly uniform distribution!

Their reaction to Excel is so interesting. On the written parts of the survey, they told me how important learning Excel was for their careers. But they did not really like doing something as inauthentic (my word, not theirs) as pixel manipulation in Excel. They wished they could have done something more useful, like computing “expenses.”

The responses above suggest a hypothesis: the students don’t really know how to think about Excel in relation to Python. It’s as if they’re two different things, not two forms of the same thing. I was hoping for more of the latter by doing pixel manipulations in both Python and Excel. This may be someplace where prior understanding influences future understanding. I suspect that the students classify these things as:

  • “Excel is for business. It’s not for computing.  Doing pixel manipulations in Excel is just weird and painful.”
  • “Python is for computing.  I have to go through it, but it doesn’t really have much to do with my future career.”  On the statement, “Learning programming as we have in this course is not useful to me,” 3 were neutral, and 8 disagreed.  I read that as, “It’s okay. Sorta.”
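
The parallel I was hoping students would see is that the same computation can live in both worlds. A minimal sketch, in JES-style Python (signatures assumed): the loop below is, in Excel terms, a formula filled down a column of red values.

    def increaseRed(picture):
        # In Excel, with each pixel's red value in its own cell,
        # this is a formula like =MIN(255, A1 * 1.2) filled down.
        for pixel in getPixels(picture):
            setRed(pixel, min(255, getRed(pixel) * 1.2))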

Something that I always worry about: Are we helping students to develop their sense of self-efficacy in an introductory course, especially for non-majors?

“I am more confident using computers now, after taking this course.”  Quite positive: 10 agree, 1 neutral.

“I think differently about computers and how they work since taking this class.”  Could not get much more positive: 8 strongly agree, 6 agree!

And yet, “I am not the kind of person who is good with computers.” Mostly, students agree with that: 3 strongly agree, 4 agree, 1 neutral, 3 disagree. On average, my students still don’t see themselves as among the people who are “good” with computers.

There was lots for me to be happy about. Some students said that the lectures on algorithmic complexity and the storage hierarchy were among their favorites; that they would have liked to learn more about the “big questions” of CS; and that they liked writing programs. On the statement, “I learned interesting and useful computer science in this course,” 3 students strongly agreed, and 8 agreed. They got that this was about computer science, and some of them even found it useful.

Even in a class of only 22, even seeing them every day for hours, even with grading all their papers — I’m still surprised, intrigued, and confounded by how they think about all of this.  That’s fine by me. As a teacher and a researcher, my job isn’t done yet.

August 8, 2012 at 9:43 am 8 comments
