Posts tagged ‘Parsons Problems’

Katie Cunningham’s Purpose-first Programming: Glass box scaffolding for learning to code for authentic contexts

Last month, Katie Cunningham presented her CHI 2021 paper “Avoiding the Turing Tarpit: Learning Conversational Programming by Starting from Code’s Purpose.” The video of her presentation is available here. This is the final study from her dissertation work, about which I blogged here.

Katie is trying to support the kinds of programming learners whom she discovered in her work on tracing — students who want to write programs, but have no interest in understanding the details of how programs work. As one said to her (which became the title of her ICLS 2020 paper), "I'm not a computer." Block-based programming won't work for her learners because, like most conversational programmers, they care about the authenticity of the language they're learning. They don't want to use blocks. They want to see the code that developers see — a form of what Cindy Hmelo-Silver and I called "glass-box scaffolding."

Katie focused on one particular purpose: writing Python code to scrape Web pages using Beautiful Soup. She and Rahul Bejarano dug into Beautiful Soup code on GitHub and identified a set of code chunks ("plans") that were actually used for this purpose and that could be recombined in useful ways. She then developed a curriculum, as a Runestone ebook, for teaching those plans, in which she taught students how to combine them (using Parsons problems) and, importantly, how to tailor them for specific needs. Here's a figure from her paper showing an example plan with a description of the "slots" for tailoring.
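
Her plans are chunks of working Python that a learner tailors rather than writes from scratch. As a rough illustration (my example, not a plan from the paper), a web-scraping plan might look like this, with the URL, the tag to search for, and the action per match as the tailorable "slots":

```python
# A sketch in the spirit of Katie's plans (my example, not one from the
# paper): a web-scraping plan whose "slots" are what the learner tailors.
import requests
from bs4 import BeautifulSoup

# Slot: the page you want to scrape
page = requests.get("https://example.com")
soup = BeautifulSoup(page.text, "html.parser")

# Slot: the tag that marks the data you want
for link in soup.find_all("a"):
    # Slot: what to do with each item you find
    print(link.get("href"))
```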

My favorite part of this study is her analysis of how students debugged using these plans. They did make mistakes, and they fixed them. They reasoned about their programs in terms of the plans. In think-alouds, they talked about the names of the plans and the slots, and about where they had tailored a plan incorrectly. They weren't just copying and pasting chunks of Python code. They were reasoning about the chunks — but they were not doing much reasoning about Python itself. In some sense, she defined a task-specific programming language whose components happened to be defined in terms of visible lines of Python code.

My favorite outcome of the study is that students came away excited and felt that they were doing something "realistic" — from a half-hour lesson. One participant asked if she could do this kind of learning for different purposes every week, a kind of Duolingo for programming. Those are strong results from such a short intervention — a pretty amazing one.

I blogged for CACM this month about how what we predict about knowledge transferring between programming languages may rest on an assumption of mathematics background that might have been true in the 1970s but is less likely to be true today (see post here). I suggest that we need to develop ways of teaching programming that don't rely on mathematics and instead connect to the programmer's purpose and task. Katie's work is what I had in mind as an example.

June 21, 2021 at 7:00 am 9 comments

Proposal #1 to Change CS Education to Reduce Inequity: Teach computer science to advantage the students with less computing background

This is my second post in a series about how we have to change how we teach CS education to reduce inequity. I started this series with this post, making an argument based on race, though the argument might also be made in terms of the pandemic. We have to change how we teach CS this year.

The series has several inspirations, but the concrete one that I want to reference back to each week is the statement from the University of Maryland’s CS department:

Creating a task force within the Education Committee for a full review of the computer science curriculum to ensure that classes are structured such that students starting out with less computing background can succeed, as well as reorienting the department teaching culture towards a growth mindset

We as individual computing teachers make choices that influence whether students with less computing background can succeed. I often see choices that encourage the most capable students, but at the cost of the least prepared students. Part of this is because we see ourselves as preparing students for top software engineering jobs. The questions asked in technical interviews explicitly drive how many CS departments teach algorithms and theory. We want to encourage "excellence." But whose excellence do we care about? Are Silicon Valley entrepreneurial perspectives the only ones that matter? The goal of "becoming a great software engineer" does not consider alternative endpoints for computing education (see post here). Not all our students want those kinds of jobs. Many of our students are much more interested in giving back to their community than in taking the Silicon Valley jobs that our programs aim for (see post here).

Please don't teach students as if they were you. First, you (as a CS teacher, as someone who reads this blog) are wildly different from our typical student. Second, your memories of how you learned and what worked for you are likely wrong. Humans are terrible at reconstructing what they knew at an earlier time and what actually led to their learning. That's why we need research.

In this post, I will identify four methods that are differential — that advantage the students with less computing background (there are many more):

  • Use Peer Instruction
  • Explain connection to community values
  • Use Parsons Problems
  • Use subgoal labeling

Use Peer Instruction

When I talk to computer science teachers about peer instruction and how powerful it is for learning, the most common response is, “Oh, we already do that.” When I press them, they tell me that they “have class discussions” or “use undergraduate teaching assistants.” Nope, that’s not peer instruction.

Peer instruction (PI) is a technical term for a very specific protocol. Digital Promise and UTeach are creating a set of CS teaching micro-credentials, and the one they have on PI defines it well (see link here). In PI, the teacher poses a question to the class for individual responses, students discuss their answers in small groups, students respond again, and then the teacher reveals and explains the answer. The evidence that PI really works is overwhelming, and it can be used in any CS class — see http://peerinstruction4cs.com/ for more information on how to do it. I use it regularly in senior-level undergraduate courses and graduate courses. There are ways to do PI when teaching remotely, as I talked about in this post.

I'm highlighting PI because the evidence suggests that it has a differential impact (see study here). It doesn't hurt the top students, but it reduces failure rates (measured in multiple CS courses) for students with less background (see paper here). That's exactly what we're looking for in this series: how to improve the odds of success for students who are not in the most privileged groups.

Explain connection to community values

I blogged last year about a paper (see post here) that showed female, Black, Latino/Latina, and first-generation students take CS because they want to help society. These students often do not see a connection between what’s being taught in CS classes and what they want. That’s because we often teach to prepare students for top software engineering jobs — it’s a mismatch between our goals and their goals.

I don’t know if this is an issue in upper-level classes. Maybe students in upper-level classes have already figured out how CS connects to their goals and values. Or maybe we have already filtered out the CS students who care about community values by the upper-level and graduate courses.

CS can certainly be used to advance social goals and community values. Teach that. In every CS class, for everything you teach, explain concretely how this concept or skill could be used to advance social good, cultural relevance, and community values. If you can’t, ask yourself why you’re teaching this concept or skill. If it’s just to promote a Silicon Valley jobs program, consider dropping it. We are all revising our classes this summer for fall. It’s a good time to do this review and update.

Use Parsons Problems

Parsons problems (sometimes referred to as “mixed-up code problems”) are where students are given a programming problem, and given all the lines of code to solve the problem, but the lines are scrambled (I usually say “on refrigerator magnets”). The challenge is to assemble the correct program. My wife, Barbara Ericson, did her dissertation work (see post here) showing that Parsons problems were effective (led to the same learning as writing the programs from scratch or from debugging programs) and efficient (low time cost, low cognitive load). She also invented dynamically adaptive Parsons problems which are even better (for effectiveness and efficiency) than traditional Parsons problems.
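
To make the format concrete, here is a small illustrative Parsons problem (my example, not one of Barb's):

```python
# An illustrative Parsons problem (my example, not from Barb's materials).
# The learner would receive these lines scrambled, "on refrigerator magnets":
#
#     return total
#     total = total + n
#     def sum_list(numbers):
#     total = 0
#     for n in numbers:
#
# The correct assembly (in Python, the indentation is part of the answer):
def sum_list(numbers):
    total = 0
    for n in numbers:
        total = total + n
    return total

print(sum_list([1, 2, 3]))  # 6
```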

Parsons problems work on-line, so they fit into remote teaching easily. I’ve been doing paper-based (and Canvas-based) Parsons for exams and quizzes for several years now (see post here). Parsons problems work great in lower-level classes. There is relatively little research on using them in upper-level and graduate courses — I suspect that they could be useful, if only to break up the all-coding-all-the-time framing of CS classes.

I’m highlighting Parsons problems for two reasons.

  • First, they're efficient. As Manuel noted (as I quoted in my Blog@CACM post), BIPOC students are much more likely to be time-stressed than more privileged students. I'm reading Grading for Equity by Joe Feldman, which makes this point in more detail (see website). Our less-privileged students need us to find ways to teach them efficiently. This is going to be a particular concern during a pandemic, when students will have more time constraints, especially if they, a relative, or someone they live with becomes ill.
  • Second, they are a more careful and finer-grained assessment tool (see this post). If you ask students to write a piece of code, some students will get part of the code working, but you get little data from the students who knew how to write part of the code yet couldn't get any of it working. Parsons problems help the students with less computing background show what they do know, and help the teacher figure out what they can't yet write.

Use subgoal labeling

Subgoal labeling is pretty amazing (see Wikipedia page). Even our first experiment with subgoal labeling for CS worked examples (see post here) showed improvements in learning (measured immediately after instruction), retention (measured a week later), and transfer (student success on a new task without instruction). Since then, Lauren Margulieux, Briana Morrison, and Adrienne Decker have published a slew of great results.
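
To give a flavor of the technique, here is a sketch (my example, not from their materials) of a worked example annotated with subgoal labels:

```python
# My example, not from the Margulieux/Morrison/Decker materials: a worked
# example with subgoal labels. Each label names the purpose of a group of
# steps, so learners can map the structure onto new problems.

# Subgoal: initialize the accumulators
total = 0
count = 0

# Subgoal: loop through the data
for score in [88, 92, 75, 99]:
    # Subgoal: update the accumulators
    total = total + score
    count = count + 1

# Subgoal: compute and report the result
print("Average:", total / count)
```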

The finding that puts subgoal labeling on this list is their most recent (see post here). An introductory computing course using subgoal labeling, compared to one not using it, had reduced drop and failure rates. That's a differential benefit. There was not a statistically significant improvement in learning (measured in terms of exam scores), but subgoal labeling kept the students most at risk of failing or dropping out enrolled in the course. That's teaching to advantage the students with less background in computing. We don't know if it works for upper-level or graduate classes — my hypothesis is that it would.

July 20, 2020 at 7:00 am 5 comments

Making the Case for Adaptive Parsons problems and Task-Specific Programming: Koli Calling 2019 Preview

I am excited to be presenting at the 19th Koli Calling International Conference on Computing Education Research (see site here). Both Barbara Ericson and I have papers this year. This was my third submission to Koli, and my first acceptance. Both of us had multiple rejections from ICER this year (see my blog post on ICER), so we updated and revised based on reviews, and were thrilled to get papers into Koli.

Investigating the Affect and Effect of Adaptive Parsons Problems

By Barbara Ericson, Austin McCall, and Kathryn Cunningham.

Barb is presenting the capstone to her dissertation work on adaptive Parsons problems (see blog post on her dissertation work here). This paper captures the iterative nature of her study. Early on, she did detailed think-aloud/interview protocols with teachers to understand how people used her adaptive Parsons problems. At the end, she looked at log files to get a sense of use at scale.

Abstract: In a Parsons problem the learner places mixed-up code blocks in the correct order to solve a problem. Parsons problems can be used for both practice and assessment in programming courses. While most students correctly solve Parsons problems, some do not. Unsuccessful practice is not conducive to learning, leads to frustration, and lowers self-efficacy. Ericson invented two types of adaptation for Parsons problems, intra-problem and inter-problem, in order to decrease frustration and maximize learning gains. In intra-problem adaptation, if the learner is struggling, the problem can dynamically be made easier. In inter-problem adaptation, the next problem's difficulty is modified based on the learner's performance on the last problem. This paper reports on the first observational studies of five undergraduate students and 11 secondary teachers solving both intra-problem adaptive and non-adaptive Parsons problems. It also reports on a log file analysis with data from over 8,000 users solving non-adaptive and adaptive Parsons problems. The paper reports on teachers' understanding of the intra-problem adaptation process, their preference for adaptive or non-adaptive Parsons problems, their perception of the usefulness of solving Parsons problems in helping them learn to fix and write similar code, and the effect of adaptation (both intra-problem and inter-problem) on problem correctness. Teachers understood most of the intra-problem adaptation process, but not all. Most teachers preferred adaptive Parsons problems and felt that solving Parsons problems helped them learn to fix and write similar code. Analysis of the log file data provided evidence that learners are nearly twice as likely to correctly solve adaptive Parsons problems than non-adaptive ones.

Task-Specific Programming Languages for Promoting Computing Integration: A Precalculus Example

By Mark Guzdial and Bahare Naimipour

This is my first paper on the new work I'm doing in task-specific programming. I mostly discuss my first prototype (see link here) and some of what math teachers are telling me (see link here). We also include a report on Bahare's and my work with social studies educators. A good bit of this paper is putting task-specific programming in a computing education context. I see what I'm doing as pushing microworlds further.

Typically, a microworld is built on top of a general-purpose language, e.g., Logo for Papert and Boxer for diSessa. Thus, the designer of the microworld could assume familiarity with the syntax and semantics of the programming language, and perhaps some general programming concepts like mutable variables and control structures. The problem here is that Logo and Boxer, like any general-purpose programming language, take time to develop proficiency. A task-specific programming language (TSPL) aims to provide the same easy-to-understand operations for a microworld, but with a language designed for a particular purpose.

Here’s the abstract:

Abstract: A task-specific programming language (TSPL) is a domain-specific programming language (in programming languages terms) designed for a particular user task (in human-computer interaction terms). Users of task-specific programming are able to use the tool to complete useful tasks, without prior training, in a short enough period that one can imagine fitting it into a normal class (e.g., around 10 minutes). We are designing a set of task-specific programming languages for use in social studies and precalculus courses. Our goal is to offer an alternative to more general-purpose programming languages (such as Scratch or Python) for integrating computing into other disciplines. An example task-specific programming language for precalculus offers a concrete context: An image filter builder for learning basic matrix arithmetic (addition and subtraction) and matrix multiplication by a scalar. TSPLs allow us to imagine a research question which we couldn't ask previously: How much computing might students learn if they used multiple TSPLs in each subject in each primary and secondary school grade?
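
To make the precalculus example concrete: a grayscale image is a matrix of brightness values, so image filters are matrix operations. Here is a minimal sketch of that underlying math in Python with numpy — my illustration of the operations the TSPL teaches, not the actual tool:

```python
# My illustration of the matrix operations behind an image filter --
# not the actual TSPL. A grayscale image is a matrix of brightness values.
import numpy as np

image = np.array([[100, 150],
                  [200, 250]])         # a tiny 2x2 "image"

brightness = np.full(image.shape, 40)  # a constant matrix
brighter = image + brightness          # matrix addition: brighten pixels
darker = image - brightness            # matrix subtraction: darken pixels
faded = 0.5 * image                    # scalar multiplication: cut contrast

print(brighter)
print(faded)
```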

Eventually the papers are going to appear in the ACM Digital Library. I have a preprint version of Barb’s paper here, and a longer form (with bigger screenshots) of my paper here.

November 18, 2019 at 7:00 am 5 comments

An Ebook for Java AP CS Review: Guest Blog Post from Barbara Ericson

My research partner, co-author, and wife, Barbara Ericson, has been building an ebook (like the ones we've been making for AP CSP, as mentioned here and here) for students studying Advanced Placement (AP) CS Level A. We wanted to write a blog post about it, to help more AP CS A students and teachers find it. She kindly wrote this guest post on the ebook.

I started creating a free interactive ebook for the Advanced Placement (AP) Computer Science (CS) A course in 2014. See http://tinyurl.com/JavaReview-new. The AP CSA course is intended to be equivalent to a first course for computer science majors at the college level. It covers programming fundamentals (variables, strings, conditionals, loops), one- and two-dimensional arrays, lists, recursion, searching, sorting, and object-oriented programming in Java.

The AP CSA ebook was originally intended to be used as a review for the AP CSA exam. I had created a website that thousands of students were using to take practice multiple-choice exams, but that website couldn't handle the load and kept crashing. Our team at Georgia Tech was creating a free interactive ebook for the Advanced Placement Computer Science Principles (CSP) course on the Runestone platform. The Runestone platform was easily handling thousands of learners per day, so I moved the multiple-choice questions into a new interactive ebook for AP CSA. I also added a short description of each topic on the AP CSA exam and several practice exams.

Over the years, my team of undergraduate and high school students and I have added more content to the Java Review ebook and thousands of learners have used it.  It includes text, pictures, videos, executable and modifiable Java code, multiple-choice questions, fill-in-the-blank problems, mixed-up code problems (Parsons problems), clickable area problems, short answer questions, drag and drop questions, timed exams, and links to other practice sites such as CodingBat (https://codingbat.com/java) and the Java Tutor (http://pythontutor.com/java.html#mode=edit). It also includes free response (write code) questions from past exams.

The ebook includes several types of practice problems:

  • Fill-in-the-blank problems ask the user to type in an answer, which is checked against a regular expression (see the sketch below this list). See https://tinyurl.com/fillInBlankEx.
  • Mixed-up code problems (Parsons problems) provide the correct code to solve a problem, but the code is broken into blocks and mixed up; the learner must drag the blocks into the correct order. See https://tinyurl.com/ParsonsEx. I studied Parsons problems for my dissertation and invented two types of adaptation that modify the difficulty of Parsons problems to keep learners challenged, but not frustrated.
  • Clickable area questions ask learners to click on lines of code or table elements to answer a question. See https://tinyurl.com/clickableEx.
  • Short answer questions allow users to type in text in response to a question. See https://tinyurl.com/shortAnsEx.
  • Drag and drop questions allow the learner to drag a definition to a concept. See https://tinyurl.com/y68cxmpw.
  • Timed exams give the learner practice finishing an exam in a set amount of time. A timed exam shows the questions one at a time and doesn't give the learner feedback about the correctness of the answers until after the exam. See https://tinyurl.com/timedEx.
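
As an illustration of the fill-in-the-blank checking mentioned above (my sketch, not Runestone's actual implementation):

```python
# My illustration of checking a fill-in-the-blank response against a
# regular expression -- not Runestone's actual implementation.
import re

# Accept "42", "42.0", "42.00", etc., with optional surrounding spaces.
ANSWER_PATTERN = r"\s*42(\.0+)?\s*"

def check_answer(response):
    # fullmatch succeeds only if the entire response matches the pattern
    return re.fullmatch(ANSWER_PATTERN, response) is not None

print(check_answer("42"))      # True
print(check_answer(" 42.0 "))  # True
print(check_answer("24"))      # False
```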

I am currently analyzing the log file data from both the AP CSA and CSP ebooks. Learners typically attempt to answer the practice-type questions, but don't always run the example code or watch the videos. In an observation study I ran as part of my dissertation work, teachers said that they didn't run the code if they got the related practice question correct. They also didn't always watch the videos, especially if the video content was also in the text. Usage of the ebook tends to drop from the first chapter to the last instructional chapter, but increases again in the practice exam chapters at the end of the ebook. Usage also drops across the instructional material in a chapter and then increases again in the practice item subchapters near the end of each chapter.

Beryl Hoffman, an Associate Professor of Computer Science at Elms College and a member of the Mobile CSP team, has been creating a new AP CSA ebook based on my AP CSA ebook, revised to match the changes to the AP CSA course for 2019-2020. See https://tinyurl.com/csawesome. One of the reasons for creating this new ebook is to help Mobile CSP teachers prepare to teach CSA. The Mobile CSP team is currently piloting this book with CSP teachers.

June 17, 2019 at 7:00 am Leave a comment

Adaptive Parsons problems, and the role of SES and Gesture in learning computing: ICER 2018 Preview

 

Next week is the 2018 International Computing Education Research Conference in Espoo, Finland. The proceedings are (as of this writing) available here: https://dl.acm.org/citation.cfm?id=3230977. Our group has three papers in the 28 accepted this year.

“Evaluating the efficiency and effectiveness of adaptive Parsons problems” by Barbara Ericson, Jim Foley, and Jochen (“Jeff”) Rick

These are the final studies from Barb Ericson's dissertation (I blogged about her defense here). In her experiment, she compared four conditions: Students learning through writing code, through fixing code, through solving Parsons problems, and through solving her new adaptive Parsons problems. She had a control group this time (different from her Koli Calling paper) that did turtle graphics between the pre-test and post-test, so that she could be sure that there wasn't just a testing effect of a pre-test followed by a post-test. The bottom line was basically what she predicted: Learning did occur, with no significant difference between treatment groups, but the Parsons problems groups took less time. Our ebooks now include some of her adaptive Parsons problems, so she can compare performance across many students on adaptive and non-adaptive forms of the same problem. She finds that students solve the adaptive problems more often and with fewer trials. So, adaptive Parsons problems lead to the same amount of learning, in less time, with fewer failures. (Failures matter, since self-efficacy is a big deal in computer science education.)

“Socioeconomic status and Computer science achievement: Spatial ability as a mediating variable in a novel model of understanding” by Miranda Parker, Amber Solomon, Brianna Pritchett, David Illingworth, Lauren Margulieux, and Mark Guzdial

(Link to last version I reviewed.)

This study is a response to the paper Steve Cooper presented at ICER 2015 (see blog post here), where they found that spatial reasoning training erased performance differences between higher and lower socioeconomic status (SES) students, while the comparison class had higher-SES students performing better than lower-SES students. Miranda and Amber wanted to test this relationship at a larger scale.

Why should wealthier students do better in CS? The most common reason I’ve heard is that wealthier students have more opportunities to study CS — they have greater access. Sometimes that’s called preparatory privilege.

Miranda and Amber and their team wanted to test whether access is really the right intermediate variable. They gave students at two different Universities four tests:

  • Part of Miranda’s SCS1 to measure performance in CS.
  • A standardized test of SES.
  • A test of spatial reasoning.
  • A survey about the amount of access they had to CS education, e.g., formal classes, code clubs, summer camps, etc.

David and Lauren did the factor analysis and structural equation modeling to compare two hypotheses: Does higher SES lead to greater access which leads to greater success in CS, or does higher SES lead to higher spatial reasoning which leads to greater success in CS? Neither hypothesis accounted for a significant amount of the differences in CS performance, but the spatial reasoning model did better than the access model.

There are some significant limitations of this study. The biggest is that they gathered data at universities. A lot of SES variance just disappears when you look at college students — they tend to be wealthier than average.

Still, the result is important for challenging the prevailing assumption about why wealthier kids do better in CS. More, spatial reasoning is an interesting variable because it’s rather inexpensively taught. It’s expensive to prepare CS teachers and get them into all schools. Steve showed that we can teach spatial reasoning within an existing CS class and reduce SES differences.

“Applying a Gesture Taxonomy to Introductory Computing Concepts” by Amber Solomon, Betsy DiSalvo, Mark Guzdial, and Ben Shapiro

(Link to last version I saw.)

We were a bit surprised (quite pleasantly!) that this paper got into ICER. I love the paper, but it’s different from most ICER papers.

Amber is interested in the role that gestures play in teaching CS. She started this paper from a taxonomy of gestures seen in other STEM classes. She observed a CS classroom and used her observations to provide concrete examples of the gestures seen in other kinds of classes. This isn’t a report of empirical findings. This is a report of using a lens borrowed from another field to look at CS learning and teaching in a new way.

My favorite part of this paper is when Amber points out which parts of CS gestures don't really fit the taxonomy. It's one thing to point to lines of code – that's relatively concrete. It's another thing to "point" to referenced data, e.g., gesturing at the two elements you're comparing or swapping when explaining a sort. What exactly/concretely are we pointing at? Arrays are neither horizontal nor vertical — that distinction doesn't really exist in memory. Arrays have no physical representation, but we act (usually) as if they're laid out horizontally in front of us. What assumptions are we making in order to use gestures in our teaching? And what if students don't share those assumptions?

August 10, 2018 at 7:00 am 7 comments

A Generator for Parsons problems on LaTeX exams and quizzes

I just finished teaching my Introduction to Media Computation a few weeks ago to over 200 students. After Barb finished her dissertation on Parsons problems this semester, I decided that I should include Parsons problems on my last quiz, on the final exam study guide, and on the final exam. Parsons problems are a great fit for this assessment task. We know that Parsons problems are a more sensitive measure of learning than code-writing problems, that they're just as effective as code writing or code fixing for learning (so good for a study guide), and that they take less time than code writing or fixing.

Barb's work used an interactive tool for providing adaptive Parsons problems. I needed to use paper for the quiz and final exam. There have been several paper-based implementations of Parsons problems, and Barb guided me in developing mine.

But I realized that there's a challenge to doing a bunch of Parsons problems like this. Scrambling code is pretty easy, but what happens when you find that you got something wrong? The quiz, study guide, and final exam were all going to iterate several times as we developed them and tested them with the teaching assistants. How could I make sure that the scrambled code and the right answer always stayed aligned?

I decided to build a gadget in LiveCode to do it.

I paste the correctly ordered code into the field on the left. When I press “Scramble,” a random ordering of the code appears (in a Verbatim LaTeX environment) along with the right answers, to be used in the LaTeX exam class. If you want to list a number of points to be associated with each correct line, you can put a number into the field above the solution field. If empty, no points will be explicitly allocated in the exam document.

I’d then paste both of those fields into my LaTeX source document. (I usually also pasted in the original source code in the correct order, so that I could fix the code and re-run the scramble when I inevitably found that I did something wrong.)
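
The gadget itself is written in LiveCode, but the core idea is simple enough to sketch in a few lines of Python (my illustration, under the assumption that the answer key lists the scrambled line numbers in correct order):

```python
# My Python sketch of the LiveCode gadget's core idea: given correctly
# ordered code, emit a scrambled verbatim block for the LaTeX exam class
# plus an answer key, so the two can never drift out of alignment.
import random

def scramble_for_latex(code):
    lines = code.strip("\n").split("\n")
    order = list(range(len(lines)))
    random.shuffle(order)  # order[k] = original index shown at position k+1

    scrambled = "\\begin{verbatim}\n"
    for shown, original in enumerate(order, start=1):
        scrambled += f"{shown}. {lines[original]}\n"
    scrambled += "\\end{verbatim}"

    # Answer key: the scrambled line numbers listed in correct order
    key = " ".join(str(order.index(i) + 1) for i in range(len(lines)))
    return scrambled, key

block, key = scramble_for_latex(
    "total = 0\nfor n in numbers:\n    total = total + n\nprint(total)"
)
print(block)
print("Answer:", key)
```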

The wording of the problem was significant. Barb coached me on the best practice: allow students to write just the line numbers, but encourage them to write the whole lines, because the latter is less cognitive load for them.

Unscramble the code below that halves the frequency of the input sound.

Put the code in the right order on the lines below. You may write the line numbers of the scrambled code in the right order, or you can write the lines themselves (or both). (If you include both, we will grade the code itself if there’s a mismatch.)

The problem as the student sees it looks like this:

The exam class can also automatically generate a version of the exam with answers, for use in grading. I didn't solve any of the really hard problems in my script, like how to deal with lines that could be put in any order. When I ran into that problem, I just edited the answer fields to list the acceptable options.

I am making the LiveCode source available here: http://bit.ly/scrambled-latex-src

LiveCode generates executables very easily. I have generated Windows, MacOS, and Linux executables and put them in a (20 Mb, all three versions) zip here: http://bit.ly/scrambled-latex

I used this generator probably 10-20 times in the last few weeks of the semester. I have been reflecting on this experience as an example of end-user programming. I’ll talk about that in the next blog post.

June 8, 2018 at 2:00 am 5 comments

Announcing Barbara Ericson’s Defense on Effectiveness and Efficiency of Parsons Problems and Dynamically Adaptive Parsons Problems: Next stop, University of Michigan

Today, Barbara Ericson defends her dissertation. I usually do a blog post talking about the defending student’s work as I’ve blogged about it in the past, but that’s really hard with Barb.  I’ve written over 90 blog posts referencing Barb in the last 9 years.  That happens when we have been married for 32 years and collaborators on CS education work for some 15 years.

Barb did her dissertation on adaptive Parsons problems, but she could have done it on Project Rise Up or some deeper analysis of her years of AP CS analyses. She chose well. Her results are fantastic, and summarized below. (Yes, she does have six committee members, including two external members.)

Starting September 1, Barbara and I will be faculty at the University of Michigan. Barb will be an assistant professor in the University of Michigan School of Information (UMSI). I will be a professor in the Computer Science and Engineering (CSE) Division of the Electrical Engineering and Computer Science Department, jointly with their new Engineering Education Research program. Moving from Georgia Tech and Atlanta will be hard — all three of our children will still be here as we leave. We are excited about the opportunities and new colleagues that we will have in Ann Arbor.

Title: Evaluating the Effectiveness and Efficiency of Parsons Problems and Dynamically Adaptive Parsons Problems as a Type of Low Cognitive Load Practice Problem

Barbara J. Ericson

Human-Centered Computing

School of Interactive Computing

College of Computing

Georgia Institute of Technology

Date: Monday, March 12, 2018

Time: 12pm – 3pm

Location: TSRB 222

Committee:

Dr. Jim Foley (Advisor, School of Interactive Computing, Georgia Institute of Technology)

Dr. Amy Bruckman (School of Interactive Computing, Georgia Institute of Technology)

Dr. Ashok K. Goel (School of Interactive Computing, Georgia Institute of Technology)

Dr. Richard Catrambone (School of Psychology, Georgia Institute of Technology)

Dr. Alan Kay (Computer Science Department, University of California, Los Angeles)

Dr. Mitchel Resnick (Media Laboratory, Massachusetts Institute of Technology)

Abstract:

Learning to program can be difficult and time consuming.  Learners can spend hours trying to figure out why their program doesn’t compile or run correctly. Many countries, including the United States, want to train thousands of secondary teachers to teach programming.  However, busy in-service teachers do not have hours to waste on compiler errors or debugging.  They need a more efficient way to learn.

One way to reduce learning time is to use a completion task.  Parsons problems are a type of code completion problem in which the learner must place blocks of correct, but mixed up, code in the correct order. Parsons problems can also have distractor blocks, which are not needed in a correct solution.  Distractor blocks include common syntax errors like a missing colon on a for loop or semantic errors like the wrong condition on a loop.

In this dissertation, I conducted three studies to compare the efficiency and effectiveness of solving Parsons problems, fixing code, and writing code. (Editor’s note: I blogged on her first study here.) I also tested two forms of adaptation. For the second study, I added intra-problem adaptation, which dynamically makes the current problem easier.  For the last study, I added inter-problem adaptation which makes the next problem easier or harder depending on the learner’s performance.  The studies provided evidence that students can complete Parsons problems significantly faster than fixing or writing code while achieving the same learning gains from pretest to posttest.  The studies also provided evidence that adaptation helped more learners successfully solve Parsons problems.

These studies were the first to empirically test the efficiency and effectiveness of solving Parsons problems versus fixing and writing code.  They were also the first to explore the impact of both intra-problem and inter-problem adaptive Parsons problems.  Finding a more efficient and just as effective form of practice could reduce the frustration that many novices feel when learning programming and help prepare thousands of secondary teachers to teach introductory computing courses.
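
For readers who haven't seen the adaptation in action, here is a minimal sketch of the inter-problem idea — my illustration, not Ericson's actual algorithm:

```python
# My illustration, not Ericson's actual algorithm: a minimal sketch of
# inter-problem adaptation, which adjusts the NEXT problem's difficulty
# based on performance on the last one. (Intra-problem adaptation would
# instead make the CURRENT problem easier while the learner struggles,
# e.g., by removing a distractor block or combining two blocks into one.)

def next_difficulty(current, solved, attempts):
    """Return the difficulty level (1 = easiest) for the next problem."""
    if solved and attempts <= 2:
        return current + 1           # quick success: harder next problem
    elif solved:
        return current               # eventual success: hold steady
    else:
        return max(1, current - 1)   # unsolved: easier next problem

print(next_difficulty(3, solved=True, attempts=1))   # 4
print(next_difficulty(3, solved=False, attempts=6))  # 2
```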

March 12, 2018 at 7:00 am 15 comments

Parsons Problems have same Learning Gains as Writing or Fixing code, in less time: Koli Calling 2017 Preview

On Saturday, Barbara Ericson will be presenting at Koli Calling her paper (with Lauren Margulieux and Jeff Rick), “Solving Parsons Problems Versus Fixing and Writing Code.”

The basic design of her experiment is pretty simple. Everybody gets a pretest where they answer multiple-choice questions, write some code, fix some code, and solve some Parsons problems. (I've written about Parsons problems here before.)

Then there are three instructional treatments with three different kinds of problem-solving practice:

  • One group gets Parsons Problems with distractors in them — blocks that should not be dragged into the solution.
  • One group gets the same code to fix — the same code as in the Parsons problems, but with all the distractors included. They have to fix the broken code in the distractors to get to the same code as the correct blocks in the Parsons problems.
  • One group gets to write the code to solve the same problem.

Then they take an isomorphic (same basic problems with context and constants changed) post-test, go away, and come back one week later for a retention test (which is isomorphic to both the pretest and the first posttest: multiple choice questions, Parsons, fix code, write code).  So we have students who study with Parsons Problems getting tested by writing and fixing code.

Here’s the bottom line from their abstract: “We found that solving two-dimensional Parsons problems with distractors took significantly less time than fixing code with errors or than writing the equivalent code. Additionally, there was no statistically significant difference in the learning performance, or in student retention of the knowledge one week later.”

That's it. It's simple but profound. Below is the timing table from the paper. The Parsons problems took effort, but always less time — sometimes only half the time of fixing or writing code, and other times only a few percentage points less. But it was always less.

One takeaway idea is: If Parsons problems lead to the same learning in less time, why wouldn't every teacher use more Parsons problems? A second one that we've been thinking a lot about is: Can we provide more Parsons problems, so that in the same amount of time that students were writing code, they actually learn more? Efficiency matters, as Elizabeth Patitsas's work suggests — more efficient learning may mean less belief in the Geek Gene among CS teachers.

[Figure: timing table from the paper, comparing time on task for the Parsons problems, code-fixing, and code-writing conditions.]

November 17, 2017 at 7:00 am 12 comments

SIGCSE 2016 Preview: Parsons Problems and Subgoal Labeling, and Improving Female Pass Rates on the AP CS exam

Our research group has two papers at this year’s SIGCSE Technical Symposium.

Subgoals help students solve Parsons Problems by Briana Morrison, Lauren Margulieux, Barbara Ericson, and Mark Guzdial. (Thursday 10:45-12, MCCC: L5-L6)

This is a continuation of our subgoal labeling work, which includes Lauren's original work showing how subgoal labels improved learning, retention, and transfer in learning App Inventor (see summary here), the 2015 ICER Chairs Paper Award-winning paper from Briana and Lauren showing that subgoals work for text languages (see this post for summary), and Briana's recent dissertation proposal where she explores the cognitive load implications for learning programming (see this post for summary). This latest paper shows that subgoal labels improve success at Parsons problems, too. One of the fascinating results in this paper is that Parsons problems are more sensitive as a learning assessment than asking students to write programs.

Sisters Rise Up 4 CS: Helping Female Students Pass the Advanced Placement Computer Science A Exam by Barbara Ericson, Miranda Parker, and Shelly Engelman. (Friday 10:45-12, MCCC: L2-L3)

Barb has been developing Project Rise Up 4 CS to support African-American students in succeeding at the AP CS exam (see post here from RESPECT and this post here from last year's SIGCSE). Sisters Rise Up 4 CS is a similar project targeting female students. These are populations that have lower pass rates than white or Asian males. These are examples of supporting equity, not just equality. This paper introduces Sisters Rise Up 4 CS and contrasts it with Project Rise Up 4 CS. Barb has resources to support people who want to try these interventions, including a how-to ebook at http://ice-web.cc.gatech.edu/ce21/SRU4CS/index.html and an ebook for students to support preparation for the AP CS A exam.

February 29, 2016 at 7:56 am 9 comments

