Posts tagged ‘educational psychology’

Seeking Collaborators for a Study of Achievement Goal Theory in CS1: Guest blog post by Daniel Zingaro

I have talked about Dan’s work here before, such as his 2014 award-winning ICER paper and his Peer Instruction in CS website. I met with Dan at the last SIGCSE where he told me about the study that he and Leo Porter were planning. Their results are fascinating since they are counter to what Achievement Goal Theory predicts. I invited him to write a guest blog post to seek collaborators for his study, and am grateful that he sent me this.

Why might we apply educational theory to our study of novice programmers? One core reason lies in theory-building: if someone has developed a general learning theory, then we might do well to co-opt and extend it for the computing context. What we get for free is clear: a theoretical basis, perhaps with associated experimental procedures, scales, hypotheses, and predictions. Unfortunately, however, there is often a cost in appropriating this theory: it may not replicate for us in the expected ways.

Briana Morrison’s recent work nicely highlights this point. In two studies, Briana reports her efforts to replicate what is known about subgoals and worked examples. Briefly, a worked example is a sample problem whose step-by-step solution is given to students, and subgoal labels break that solution into logical chunks, helping students map out how the steps fit together to solve the problem.
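To make the format concrete, here is a hypothetical worked example in Python with subgoal labels inserted as comments. (Briana’s materials used different problems and presentation; this is only an illustration of the idea.)

```python
# Worked example: compute the average of the positive numbers in a list.
numbers = [4, -2, 7, 0, -5, 9]

# SUBGOAL: Initialize the accumulators
total = 0
count = 0

# SUBGOAL: Examine each value and keep only the ones that qualify
for n in numbers:
    if n > 0:
        # SUBGOAL: Update the accumulators
        total = total + n
        count = count + 1

# SUBGOAL: Compute and report the result
average = total / count
print(average)  # 20 / 3 = 6.666...
```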

Do subgoals help? Well, it’s supposed to go like this, from the educational psychology literature: having students generate their own labeled goals is best, giving students the subgoal labels is worse, and not using subgoals at all is worse still. But that isn’t what Briana found. For example, Briana reports [1] that, on Parsons puzzles, students who are given subgoal labels do better than both those who generate their own subgoal labels and those not given subgoals at all. Why the differences? One possibility is that programming exerts considerable cognitive load on the learner, and that the additional load incurred by generating subgoal labels overloads the student and harms learning.
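For readers who haven’t seen one: a Parsons puzzle gives students the correct lines of a program in scrambled order, and the task is to arrange them. A hypothetical Python-flavored example (the study used its own problems and presentation):

```python
# Parsons puzzle (hypothetical): arrange these scrambled lines so the
# program prints the largest value in the list.
#
#       largest = n
#   print(largest)
#   largest = numbers[0]
#       if n > largest:
#   for n in numbers:

# One correct arrangement, with subgoal labels of the kind given to
# students in the "labels provided" condition:
numbers = [3, 8, 5, 1]

# SUBGOAL: Initialize the running maximum
largest = numbers[0]

# SUBGOAL: Compare every value against the running maximum
for n in numbers:
    if n > largest:
        largest = n

# SUBGOAL: Report the result
print(largest)  # prints 8
```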

The point here is that taking seriously the idea of leveraging existing theory requires concomitant attention to how and why the theory may operate differently in computing.

My particular interest here is in another theory from educational psychology: achievement goal theory (AGT). AGT studies the goals that students adopt in achievement situations, and the positive and negative consequences of those goals in terms of educationally-relevant outcomes. AGT zones in on two main goal types: mastery goals (where performance is defined intrapersonally) and performance goals (where performance is defined normatively in comparison to others).

Do these goals matter? Well, it’s supposed to go roughly like this: mastery goals are positively associated with many outcomes of value, such as interest, enjoyment, self-efficacy, and deep study strategies (but not academic performance); performance goals, surprisingly and confusingly, are positively associated with academic performance. But, paralleling the Briana studies from above, this isn’t what we’ve found in CS. With Leo Porter and my students, we’ve been studying goal-outcome links in novice CS students. We’ve found, contrary to theoretical expectations, that performance goals appear to be null or negative predictors of performance, and that mastery goals appear to be positive predictors of performance [2,3].
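To be concrete about what a “goal-outcome link” looks like operationally, here is a minimal sketch with made-up numbers (not the data or analysis from the papers cited below): goal-orientation scores from a survey are used as predictors of an outcome such as exam grade.

```python
import numpy as np

# Made-up data: one row per student.
mastery     = np.array([4.5, 3.0, 4.0, 2.5, 5.0, 3.5])  # e.g., 1-5 survey score
performance = np.array([2.0, 4.0, 3.0, 4.5, 2.5, 3.5])  # e.g., 1-5 survey score
exam        = np.array([88., 70., 80., 65., 92., 75.])  # final exam, 0-100

# Ordinary least squares: exam ~ intercept + mastery + performance
X = np.column_stack([np.ones_like(mastery), mastery, performance])
coef, _, _, _ = np.linalg.lstsq(X, exam, rcond=None)
print(coef)  # [intercept, mastery slope, performance slope]

# A positive mastery slope with a near-zero or negative performance slope
# is the pattern described above. The real studies use validated goal
# scales and proper inferential statistics, not this toy calculation.
```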

We are now conducting a larger study of achievement goals and outcomes of CS1 students — larger than that achievable with the couple of institutions to which we have access on our own. We are asking for your help.

The study involves administering two surveys to students in a CS1 course. The first survey, at the beginning of the semester, measures student achievement goals. The second survey, close to the end of the semester, measures potential mediating variables. We plan to collect exam grade, interest in CS, and other outcome variables.

The hope is that we can conduct a multi-institutional study of a variety of CS1 courses to strengthen what we know about achievement goals in CS.

Please contact me at daniel dot zingaro at utoronto dot ca if you are interested in participating in this work. Thanks!

[1] Briana Morrison. Subgoals Help Students Solve Parsons Problems. SIGCSE, 2016. ACM DL link.

[2] Daniel Zingaro. Examining Interest and Performance in Computer Science 1: A Study of Pedagogy and Achievement Goals. TOCE, 2015. ACM DL link.

[3] Daniel Zingaro and Leo Porter. Impact of Student Achievement Goals on CS1 Outcomes. SIGCSE, 2016. ACM DL link.

July 15, 2016 at 7:30 am

Are there elements of human nature that could be better harnessed for better educational outcomes?

I don’t often link to Quora, but when it’s Steven Pinker pointing out the relationship between our human nature and educational goals, it’s worth it.

One potential insight is that educators begin not with blank slates but with minds that are adapted to think and reason in ways that may be at cross-purposes with the goals of education in a modern society. The conscious portion of language consists of words and meanings, but the portion that connects most directly to print consists of phonemes, which ordinarily are below the level of consciousness. We intuitively understand living species as having essences, but the theory of evolution requires us to rethink them as populations of variable individuals. We naturally assess probability by dredging up examples from memory, whereas real probability takes into account the number of occurrences and the number of opportunities. We are apt to think that people who disagree with us are stupid and stubborn, while we are overconfident and self-deluded about our own competence and honesty.

Source: Are there elements of human nature that could be better harnessed for better educational outcomes? – Quora

July 13, 2016 at 7:57 am

Why Students Don’t Like Active Learning: Stop making me work at learning!

I enjoy reading Annie Murphy Paul’s essays, and this one particularly struck home because I just got my student opinion surveys from last semester.  I use active learning methods in my Media Computation class every day, where I require students to work with one another. One student wrote:

“I didn’t like how he forced us to interact with each other. I don’t think that is the best way for me to learn, but it was forced upon me.”

It’s true. I am a Peer Instruction bully.

At a deeper level, it’s amazing how easily we fool ourselves about what we learn from and what we don’t learn from. It’s like the brain training work: we’re convinced that we’re learning from it, even if we’re not. This student is convinced that they don’t learn from it, even though the available evidence says they do.

In case you’re wondering about just what “active learning” is, here’s a widely-accepted definition: “Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It emphasizes higher-order thinking and often involves group work.”

Source: Why Students Don’t Like Active Learning « Annie Murphy Paul

July 11, 2016 at 7:27 am

Transfer of learning: Making sense of what education research is telling us

I enjoy reading “Gas station without pumps,” and the post quoted below is one I wanted to respond to.

Two of the popular memes of education researchers, “transferability is an illusion” and “the growth mindset”, are almost in direct opposition, and I don’t know how to reconcile them.

One possibility is that few students actually attempt to learn the general problem-solving skills that math, CS, and engineering design are rich domains for.  Most are content to learn one tiny skill at a time, in complete isolation from other skills and ideas. Students who are particularly good at memory work often choose this route, memorizing pages of trigonometric identities, for example, rather than learning how to derive them at need from a few basics. If students don’t make an attempt to learn transferable skills, then they probably won’t.  This is roughly equivalent to claiming that most students have a fixed mindset with respect to transferable skills, and suggests that transferability is possible, even if it is not currently being learned.

Teaching and testing techniques are often designed to foster an isolation of ideas, focusing on one idea at a time to reduce student confusion. Unfortunately, transferable learning comes not from practice of ideas in isolation, but from learning to retrieve and combine ideas—from doing multi-step problems that are not scaffolded by the teacher.

Source: Transfer of learning | Gas station without pumps

The problem with “transferability” is that it’s an ill-defined term.  Certainly, there is transfer of skill between domains.  Sharon Carver showed a long time ago that she could teach debugging Logo programs, and students would transfer that debugging process to instructions on a map (mentioned in post here).  That’s transferring a skill or a procedure.  We probably do transfer big, high-level heuristics like “divide-and-conquer” or “isolate the problem.”  One issue is whether we can teach them.  John Sweller says that we can’t — we must learn them (it’s a necessary survival skill), but they’re learned from abstracting experience (see Neil Brown’s nice summary of Sweller’s SIGCSE keynote).

Whether we can teach them or not, what we do know is that higher-order thinking is built on lots of content knowledge. Novices are unlikely to transfer until they know a lot of stuff, a lot of examples, a lot of situations. For example, novice designers often have “design fixation.”  They decide that the first thing they think of must be the right answer.  We can insist that novice designers generate more designs, but they’re not going to generate more good designs until they know more designs.  Transfer happens pretty easily when you know a lot of content and have seen a lot of situations, and you recognize that one situation is actually like another.

Everybody starts out learning one tiny skill at a time.  If you know a lot of skills (maybe because you have lots of prior experience, maybe because you have thought about these skills a lot and have recognized the general principles), you can start chunking these skills and learning whole schema and higher-level skills.  But you can’t do that until you know lots of skills.  Students who want to learn one tiny skill at a time may actually need to still learn one tiny skill at a time. People abstract (e.g., able to derive a solution rather than memorize it) when they know enough content that it’s useful and possible for them to abstract over it.  I completely agree that students have to try to abstract.  They have to learn a lot of stuff, and then they have to be in a situation where it’s useful for them to abstract.

“Growth mindset” is a necessity for any of this to work.  Students have to believe that content is worth knowing and that they can learn it.  If students believe that content is useless, or that they just “don’t do math” or “am not a computer person” (both of which I’ve heard in just the last week), they are unlikely to learn content, they are unlikely to see patterns in it, and they are unlikely to abstract over it.

Kevin is probably right that we don’t teach problem solving in engineering or computing well. I blogged on this theme for CACM last month — laboratory experiments work better for a wider range of students than classroom studies. Maybe we teach better in labs than in classrooms? The worked examples effect suggests that we may be asking students to problem solve too much. We should show students more completely worked out problems. As Sweller said at SIGCSE, we can’t expect students to solve novel problems. We have to expect students to match new problems to solutions that they have already seen. We do want students to solve problems, too, and not just review example solutions. Trafton and Reiser showed that these should be interleaved: Example, Problem, Example, Problem… (see this page for a summary of some of the worked examples research, including Trafton & Reiser).

When I used to do Engineering Education research, one of my largest projects was a complete flop. We had all this prior work showing the benefits of a particular collaborative learning technology and technique, then we took it into the engineering classroom and…poof! Nothing happened. In response, we started a project to figure out why it failed so badly. One of our findings was that “learned helplessness” was rampant in our classes, which is a symptom of a fixed mindset. “I know that I’m wrong, and there’s nothing that I can do about it. Collaboration just puts my errors on display for everyone,” was the kind of response we got. (See here for one of our papers on this work.)

I believe that all the things Kevin sees going wrong in his classes really are happening.  I believe he’s not seeing transfer that he might reasonably expect to see.  I believe that he doesn’t see students trying to abstract across lower-level skills.  But I suspect that the problem is the lack of a growth mindset.  In our work, we saw Engineering students simply give up.  They felt like they couldn’t learn, they couldn’t keep up, so they just memorized.  I don’t know that that’s the cause of the problems that Kevin is seeing.  In my work, I’ve often found that motivation and incentive are key to engagement and learning.

April 25, 2016 at 7:33 am

Optimizing Learning with Subgoal Labeling: Lauren Margulieux Defends her Dissertation

Lauren Margulieux successfully defended her dissertation Using Subgoal Learning and Self-Explanation to Improve Programming Education in March. Lauren’s been exploring subgoal labeling for improving programming education in a series of fascinating and influential papers. Subgoal labels are inserted into the steps of a worked example to explain the purpose for a set of steps.

  • At ICER 2012 (see post here), her paper showed that subgoal labels inserted into App Inventor videos led to improved learning, retention (a week later), and even transfer to new App building problems, all compared to the exact same videos without the subgoal labels. This paper was cited by Rob Miller and his students at MIT in their work developing crowdsourced subgoal labels for videos (see post here).
  • At ICER 2015 (see post here), Lauren and Briana Morrison showed that subgoal labels also improved learning for textual programming languages, but the high cognitive load of a textual programming language made some forms of subgoal labeling less successful than studies in other disciplines would predict. That paper won the Chairs’ Award at ICER.
  • At SIGCSE 2016 (see post here), Briana presented a paper with Lauren where they showed that subgoal labeling also improved performance on Parsons problems.

In her dissertation work, Lauren returned to the challenges of the ICER 2015 paper: Can we make subgoal labeling even more successful? She went back to using App Inventor, to reduce the cognitive load from teaching a textual language.

She compared three different ways of using subgoal labeling.

  • In the passive condition, students were just given subgoal labels like in her first experiments.
  • In the active condition, students were given a list of subgoal labels. The worked example was segmented into sets of steps that achieved a subgoal, but the label was left blank. Students had to pick the right subgoal label for each blank.
  • In the constructive condition, students were just given a blank and asked to generate a subgoal label. She had two kinds of constructive conditions. One was “guided,” in that there were blanks above sets of steps. The other was “unguided”: just a long list of steps, and she asked students to write labels into the margins. (A sketch after this list illustrates the three formats.)
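Here is a rough sketch, rendered as Python comments (Lauren’s materials used App Inventor, not Python), of how the same chunk of a worked example would appear in each condition:

```python
# PASSIVE: the subgoal label is provided above its group of steps.
#     Subgoal: Initialize the accumulators
#     total = 0
#     count = 0

# ACTIVE: the steps are grouped and the label is blank; students pick
# the right label for the blank from a provided list.
#     Subgoal: ______________   (choose from the list of labels)
#     total = 0
#     count = 0

# CONSTRUCTIVE (guided): the steps are grouped and students write
# their own label into the blank above each group.
#     Subgoal: ______________   (write your own label)
#     total = 0
#     count = 0

# CONSTRUCTIVE (unguided): only the long list of steps is given;
# students write labels into the margins wherever they see a chunk.
```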

Lauren was building on a theory that predicted that the constructive condition would have the best learning, but would also be the hardest. She provided two scaffolds.

  • For the conditions where it made sense (i.e., not the passive condition), she provided feedback: half the participants were shown the same worked examples with the experimenter’s labels.
  • For half the constructive participants, the label wasn’t blank. Instead, there was a hint: all the steps that achieved the same subgoal were labeled “Label 1,” all the steps that achieved a different subgoal were labeled “Label 2,” and so on (sketched below).
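A rough sketch of the hint format, again in Python comments rather than App Inventor: steps that accomplish the same subgoal share a placeholder, and the student replaces each placeholder with a label of their own.

```python
# Label 1: ______________
#     total = 0
#     count = 0
#
# Label 2: ______________
#     for n in numbers:
#         if n > 0:
#             total = total + n
#             count = count + 1
#
# Every group of steps serving the first subgoal is marked "Label 1,"
# every group serving the second is marked "Label 2," and so on, so the
# grouping is given even though the labels are not.
```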

Here’s the big “interesting/surprising” graph from her dissertation.

[Figure: problem-solving performance across the conditions, from Lauren’s dissertation]

As predicted, constructive was better than active or passive. What’s interesting is that the very best performance came from guided constructive with either feedback but no hints, or hints but no feedback. Now that’s weird. Why would having more support (both hints and feedback) lead to worse performance?

There are several possible hypotheses for these results, and Lauren pursued one of these one step further. Maybe students developed their own cognitive model when they constructed their own labels with hints, and seeing the feedback (experimenter’s labels) created some kind of dissonance or conflict. Without hints, maybe the feedback helped them make sense of the worked example.

Lauren ran one more experiment where she contrasted getting scaffolding with the experimenter’s labels versus getting scaffolding with the student’s labels (put in all the right places in the worked example). Students who were scaffolded with their own labels performed better on later problem solving than those who were scaffolded with experimenter labels. Students scaffolded with experimenter labels did not perform better than those who did not receive any scaffolding at all. Her results support this hypothesis — the experimenter’s labels can get in the way of the understanding that the students are building.

[Figure: problem-solving performance when scaffolded with learner-generated labels versus experimenter labels]

There are several implications from Lauren’s dissertation. One is that we can do even better than just giving students labels: getting them to write the labels themselves is even better for learning. Feedback isn’t the most critical part of learning with subgoal labels, which is surprising and fascinating. Constructive subgoal labeling lends itself to an online implementation, which is the direction that Lauren is explicitly exploring. How do we build effective programming education online?

Lauren has accepted an Assistant Professor position in the Learning Technologies Division at Georgia State University. I’m so glad for her, and even happier that she’s nearby so that we can continue collaborating!

March 29, 2016 at 9:41 pm

Brain training, like computational thinking, is unlikely to transfer to everyday problem-solving

In a recent blog post, I argued that problem-solving skills learned for solving problems in computational contexts (“computational thinking”) were unlikely to transfer to everyday situations (see post here).  We see a similar pattern in the recent controversy about “brain training.”  Yes, people get better at the particular exercises (e.g., people can learn to problem-solve better when programming). And they may still be better years later, which is great. That’s an indication of real learning.  But they are unlikely to transfer that learning to non-exercise contexts. Most surprisingly, they are unlikely to transfer that learning even though they are convinced that they do.  Just because you think you’re doing computational thinking doesn’t mean that you are.

Ten years later, tests showed that the subjects trained in processing speed and reasoning still outperformed the control group, though the people given memory training no longer did. And 60 percent of the trained participants, compared with 50 percent of the control group, said they had maintained or improved their ability to manage daily activities like shopping and finances. “They felt the training had made a difference,” said Dr. Rebok, who was a principal investigator.

So that’s far transfer — or is it? When the investigators administered tests that mimicked real-life activities, like managing medications, the differences between the trainees and the control group participants no longer reached statistical significance.

In subjects 18 to 30 years old, Dr. Redick also found limited transfer after computer training to improve working memory. Asked whether they thought they had improved, nearly all the participants said yes — and most had, on the training exercises themselves. They did no better, however, on tests of intelligence, multitasking and other cognitive abilities.

Source: F.T.C.’s Lumosity Penalty Doesn’t End Brain Training Debate – The New York Times

March 18, 2016 at 7:26 am

Notional Machines and Misconceptions in CS: Developing a Research Agenda at Dagstuhl

Seminar

I facilitated a breakout group at the Dagstuhl Seminar on Assessment in Introductory Computer Science. We started talking about what students know and should know, and several of us started using terms like “notional machines” and “mental models” — and there were some strong disagreements. We decided to have a breakout group to define our terms, and came up with a fascinating set of issues and questions.  It was a large group (maybe a dozen?), and I think there were some differences in attendance between the two days, so I’m not going to try to list everyone here.

Definitions

We agreed on the definition of a notional machine (NM) as a set of abstractions that define the structure and behavior of a computational device. A notional machine includes a grammar and a vocabulary, and is specific to a programming paradigm. It’s consistent and predictive: given a notional machine and a program to run on that machine, we should be able to define the result. The abstract machine of a compiler is a possible notional machine. This definition meshes with du Boulay’s original one and the one that Juha Sorva used in his dissertation (which we could check, because Juha was there).
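As a trivial example of “consistent and predictive” (mine, using Python’s notional machine rather than anything discussed at the seminar): given the NM and a program, the result is determined before the program is ever run.

```python
# A reader with a correct model of Python's notional machine can
# predict this program's output without executing it.
total = 0
for i in range(1, 4):    # i takes the values 1, 2, 3
    total = total + i
print(total)             # the NM predicts: 6
```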

Note that a NM doesn’t include function. It doesn’t tell a user, “Why would I use this feature? What is it for?” Carsten Schulte and Ashok Goel both found that students tend to focus on structure and behavior, and significant expertise is needed before students can discern function for a program or a NM component.

In CS education, we care about the student’s understanding of the notional machine. Mental model isn’t the right term for that understanding, because (for some) that implies a consistent, executable model in the student’s head. But modern learning science suggests that students are more likely to have “knowledge in pieces” (e.g., diSessa). Students will try to explain one program using one set of predictions about program behavior, and another program in another way. They respond to different programs differently. When Michael Caspersen tried to replicate the Dehnadi and Bornat study (the “camel has two humps” paper, and its retraction), he found that students would use one rule set for interpreting assignment in part of the test, and another set of rules later, and they either didn’t care or didn’t notice that they were inconsistent.
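The flavor of those test items (my reconstruction, not the original instrument): students are shown a short sequence of assignments and asked for the final values, and different informal “rule sets” yield different answers.

```python
a = 10
b = 20
a = b
print(a, b)  # actually prints: 20 20

# The conventional rule set gives a = 20, b = 20.  Other rule sets in
# the Dehnadi and Bornat test give answers like a = 20, b = 10 (swap)
# or a = 10, b = 10 (copy in the other direction).  Caspersen's
# observation: the same student may apply one rule set on this item
# and a different one on a later, similar item.
```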

An early form of student understanding of the NM is simply mimicry. “I saw the teacher type commands like this. So if I repeat them exactly, I should get the same behavior.” As they start to realize that the program causes behavior, cognitive load limits how much of the NM students can think about at once. They can’t predict as we would like them to, simply because they can’t think about all of the NM components and all of the program at once. The greatest challenge to understanding the NM is Roy Pea’s Superbug — the belief that the computer is in fact a human-like intelligent agent trying to discern our intentions.

We define student misconceptions (about the NM) as incorrect beliefs about the notional machine that are reliable (the student will use more than once) and common (more than one student uses it). There are lots of misunderstandings that pop up, but those aren’t interesting if they’re not common and reliable. We decided to avoid the “alternative conception” model in science education because, unlike natural science, we know ground truth. CS is a science of the artificial. We construct notional machines. Conceptions are provably correct or incorrect about the NM.

One of the challenging aspects of student understandings of NM is that our current evidence suggests that students never fix existing models. We develop new understandings, and learn new triggers/indices for when to apply these understandings. Sometimes we layer new understandings so deeply that we can’t reach the old ones. Sometimes, when we are stressed or face edge/corner conditions, we fall back on previous understandings. We help students develop new understandings by constraining their process to an appropriate path (e.g., cognitive tutors, cognitive apprenticeship) or by providing the right contexts and examples (as in Betsy Davis’s paper with Mike Clancy, Mind your P’s and Q’s).

Where do misconceptions come from?

We don’t know for sure, but we have hypotheses and research questions to explore:

  • We know that some misconceptions come from making analogies to natural language.
  • Teaching can lead to misconceptions. Sometimes it’s a slip of the tongue. For example, students often confuse IF and WHILE. How often do we say, when tracing a WHILE loop, “IF the expression is true…”? Of course, the teacher may not have the right understanding. Research Question (RQ): What is the intersection between teacher and student misconceptions? Do teacher misconceptions explain most student misconceptions, or do most student misconceptions come from factors outside of teaching?
  • Under-specification. Students may simply not see enough contexts or examples for them to construct a complete understanding.
  • Students incorrectly applying prior knowledge. RQ: Do students try to understand programs in terms of spreadsheets, the most common computational model that most students see?
  • Notation. We believe that = and == do lead to significant misconceptions (see the sketch after this list). RQ: Do Lisp’s set, Logo’s make/name, and Smalltalk’s back arrow lead to fewer assignment misconceptions? RQ: Dehnadi and Bornat did define a set of assignment misconceptions. How common are they? In what languages or contexts?
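To make the = / == item concrete (my example, in Python):

```python
x = 5         # assignment: bind the name x to the value 5
if x == 5:    # comparison: test whether x equals 5
    print("equal")

# Reading both symbols as mathematical equality is a common novice
# model, and it makes a routine statement like this one look like an
# impossible equation:
x = x + 1
```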

RQ: How much do students identify their own gaps in understanding of a NM (e.g., edge conditions, problem sets that don’t answer their questions)? Are they aware of what they don’t understand?  How do they try to answer their questions?

One advantage of CS over natural sciences is that we can design curriculum to cover the whole NM. (Gail Sinatra was mentioned as someone who has designed instruction to fill all gaps in a NM.) Shriram Krishnamurthi told us that he designs problem-sets to probe understanding of the Java notional machine that he expects students to miss, and his predictions are often right.

RQ: Could we do this automatically given a formal specification for an NM?  Could we define a set of examples that cover all paths in a NM?  Could we develop a model that predicts where students will likely develop misconceptions?

RQ: Do students try to understand their own computational world (e.g., how behavior in a Web page works, how an ATM works, how Web search works) with what we’re teaching them? Kathi Fisler predicts that they rarely do that, because transfer is hard. But if they’re actively trying to understand their computational world, it’s possible.

How do we find and assess gaps in student understanding?

We don’t know how much students think explicitly about a NM. We know from Juha’s work that students don’t always see visualizations as visible incarnations of the NM — for some students, it’s just another set of confusing abstractions.

Carsten Schulte pointed out that Ira Diethelm has a cool way of finding out what students are confused about. She gives them a “miracle question” — if you had an oracle that knew all, what one question would you ask about how the Internet works, or Scratch, or Java? Whatever they say — that’s a gap.

RQ: How do we define the right set of examples or questions to probe gaps in understanding of a NM? Can we define it in terms of a NM? We want such a set to lead to reflection and self-explanation that might lead to improved understanding of the NM.

Geoffrey Herman had an interesting way of finding gaps in NM understanding: using historical texts. Turns out Newton used the wrong terms for many physical phenomena, or at least, the terms he used were problematic (“momentum” for both momentum and velocity) and we have better, more exact ones today. Terms that have changed meaning or have been used historically in more than one way tend to be the things that are hard to understand — for scholars past, and for students today.

State

State is a significant source of misconceptions for students. They often don’t differentiate input state, output state, and internal states. Visualization of state only works for students who can handle those kinds of abstractions. Specification of a NM through experimentation (trying out example programs) can really help if students see that programs causally determine behavior, and if they have the cognitive capacity to compute the behavior (and emergent behavior is particularly hard). System state is the collection of smaller states, which is a large tax on cognitive load. Geoffrey told us about three kinds of state problems: control state, data state, and indirection/reference state.

State has temporality, which is a source of misconceptions for students, like the common misconception that an assignment statement defines a constraint, not an action in time. RQ: Why? Raymond Lister wondered about our understanding of state in the physical world and how that influences our understanding of state in the computational world. Does state in the real world have less temporality? Do students get confused about temporality of state in the physical world?
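A minimal example of the constraint-versus-action misconception (mine, in Python):

```python
y = 2
x = y + 1    # an action at this moment in time: x becomes 3
y = 10       # under the "constraint" misconception, x would now be 11
print(x)     # it actually prints 3; the assignment was not re-evaluated
```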

Another source of misconceptions is state in code, which is always invisible. The THEN part of an IF has implicit state: that block gets executed only if the expression is true. The block within a loop is different from the block within a condition (executed many times versus once), but they look identical. RQ: How common are code state misconceptions?
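A small Python illustration of this invisible state (my example):

```python
items = [1, 2, 3]

if len(items) > 2:
    print("big")    # implicit state: runs only because the test was
                    # true at the moment it was evaluated

for item in items:
    print(item)     # this indented block runs once per item...

if items:
    print("done")   # ...while this one, which looks the same on the
                    # page, runs at most once
```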

Scratch has state, but it’s implicit in sprites (e.g., position, costume). Deborah Fields and Yasmin Kafai found that students didn’t make much use of variables, but maybe because they didn’t tackle problems that needed them. RQ: What kinds of problems encourage use of state, and better understanding of state?

RQ: Some functional curricula move students from stateless computation to stateful computation. We don’t know if that’s easier. We don’t know if more/fewer/different misconceptions arise. Maybe the reverse is easier?

RQ: When students get confused about state, how do they think about it? How do they resolve their gaps in understanding?

RQ: What if you start students thinking about data (state) before control? Most introductory curricula start out talking about control structures. Do students develop different understanding of state? Different misconceptions? What if you start with events (like in John Pane’s HANDS system)?

RQ: What if you teach different problem-solving strategies? Can we problematize gaps in NM understanding, so that students see them and actively try to correct them?

March 7, 2016 at 7:59 am
