Posts tagged ‘cognitive science’

What do I mean by Computing Education Research? The Computer Science Perspective

 

Last week, I talked about how I explain what I do to social scientists. This time, let me explain what I do to computer scientists. I haven’t given this talk yet, and have only tried the ideas out on a few people. So consider this an experiment, and I’d appreciate your feedback.

Let’s simplify the problem of computing education research (maybe a case of a spherical cow). Let’s imagine that instead of classes of Real Humans (RH’s), we are teaching programming to Human-like Turing Machines (HTM’s). I’m not arguing that Turing machines are sufficient to represent human beings. I’m asking you to believe that (a) we might be able to create Turing Machines that could simulate humans, like those we have in our classes, (b) RH’s would only have additional capabilities beyond what HTM’s have, and (c) HTM’s and RH’s would have similar mechanisms for cognition and learning. (Carl Hewitt has a great CACM blog post arguing that message passing is more powerful than TM’s or first order logic, so maybe these should be HMP’s, Human Message Passers. I don’t think I need more than TM’s for this post.)

This isn’t a radical simplification. Cognitive science started out using computation as a model for understanding cognition (see history here). Information processing theory in psychology starts from a belief that humans process information like a computer (see Wikipedia article and Ed Psychology reference). Newell and Simon won the ACM Turing Award, and in their Turing Award lecture they introduced the physical symbol system hypothesis: “A physical symbol system has the necessary and sufficient means for general intelligent action.” If we have a program on a Turing machine that gives it the ability to process the world in symbols, the theory suggests that it would be capable of intelligence, even human-like intelligence. I’m applying this lens to how we think about humans learning to program.

This simplification buys me two claims:

  • The Geek Gene is off the table. The Geek Gene is the belief that some people can’t learn to program (see blog post for more). Any Turing machine can simulate any other Turing machine (as in the sketch after this list). Our HTM’s are capable of tracing a program. If any HTM can also write code, then all HTM’s can write code. Everyone has the same computational capability. (If HTM’s can all code, then RH’s can all code, because HTM’s have a subset of RH cognitive capabilities.)
  • The learning of our students can be analyzed and understood as information processing. The behavior of Turing machines is understandable with analysis. HTM’s are sophisticated Turing machines. The core mechanism of HTM’s can be analyzed and understood. If we think about our students as HTM’s, we might reason about their learning about computing.
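To make the simulation claim concrete, here is a minimal sketch (my own illustration, nothing from the talk): a Turing machine is just data (a transition table), and one general-purpose program can execute any such table handed to it. The `run_tm` function and the `flipper` machine are made up for this example.

```python
# A minimal sketch (my illustration): a Turing machine is just data (a
# transition table), and one general-purpose program can execute any of them.
# That is the sense in which any Turing machine can simulate any other.

def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Run a machine described by transitions: {(state, symbol): (new_state, write, move)}."""
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit until a blank is reached.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flipper, "0110"))  # -> 1001_
```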

Here are some of the research questions that I find interesting, within this framing.

How do HTM’s learn to program?

All HTM’s must learn, and learn at a level where their initial programming (the bootstrap code written on their tape when they come into our world) becomes indistinguishable from learned capabilities. HTM’s must have built-in programming to eat and to sleep. They learn to walk and run and decipher symbols like “A,” such that it’s hard to tell what was pre-programmed and what was learned. HTM’s can extend their programming.

There are lots of models that describe how HTM’s could learn, such as SOAR and ACT-R. But none so far has learned to program. The closest are the models used to build the cognitive tutors for programming, but those couldn’t debug and couldn’t design programs. They could work from a definition of a program to assemble a program, but that’s not what most of us would call coding. How would they do it?

How would HTM’s think about code? How would it be represented in memory (whether that memory is a tape, RAM, or human brains)? There is growing research interest in how people construct mental models of notional machines. Even experts don’t really know the formal semantics of a language. So instead, they have a common, “notional” way of thinking about the language. How does that notional machine get represented, and how does it get developed?

How do we teach HTM’s to learn to program?

You shouldn’t be able to just reprogram HTM’s or extend their programs by directly manipulating the HTM. That would be dangerous. The HTM might be damaged, or learn something that leads it into danger. Instead, extending an HTM’s programming can only be done through conscious effort by the HTM. That’s a core principle of Piaget’s Theory of Cognitive Development — children (RH’s and HTM’s) learn by consciously constructing a model of the world.

So, we can’t just tell an HTM how to program. Instead, we have to give them experiences and situations where they learn to program while trying to make sense of their world. We could just make them program a lot, on increasingly harder programs. Not only is that de-motivating (maybe not an issue for HTM’s, but it certainly is for RH’s), it’s also inefficient. It turns out that we can use worked examples with subgoal labeling and techniques like Parson’s problems and peer instruction to dramatically improve learning in less time.
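As an illustration of what a Parson’s problem looks like (my own made-up example, not one from any study): the learner gets correct lines of code in scrambled order and has to arrange them, which exercises reading and structuring code without the burden of writing everything from scratch.

```python
# A made-up Parson's problem: the learner receives these lines shuffled and
# must put them in a working order (indentation hints at the structure).
scrambled_lines = [
    "    total = total + price",
    "print(total)",
    "total = 0",
    "for price in prices:",
]

# One correct arrangement, written out as runnable code:
prices = [2.50, 3.00, 1.75]
total = 0
for price in prices:
    total = total + price
print(total)  # -> 7.25
```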

What native capabilities of HTM’s are used when they learn to code?

We know that learning to read involves re-using more primitive mechanisms to see patterns (see article here). When HTM’s learn to program, what parts of their native programming are being re-used?

Programming in RH’s may involve re-use of our built-in ability to reason about space and language. My colleague Wes Weimer (website) is doing fMRI studies showing that programmers tend to use the parts of their brain associated with language and spatial reasoning. In our work, we have been studying the role of spatial reasoning and gesture in learning to program (see summaries of our ICER 2018 papers). We don’t know why spatial reasoning might be playing a role in learning to program. Maybe it’s not spatial reasoning itself, but some aspect of it, or even some other native ability that is related to spatial reasoning.

How does code work as an external representation for HTM’s, and where does it help?

We can safely assume that HTM’s, like RH’s, would enhance their cognition through the use of external representations. Cognition and memory are limited. Even an infinite tape has limitations in terms of time to access. Human cognitive systems are limited in terms of how much can be attended to at once. RH’s use external representations (writing notes, making diagrams, sketches) to enhance their cognition. We’re assuming that HTM’s have a subset of RH abilities, so external representations would help HTM’s, too.

My students and I talk about a wonderful paper by David Kirsh, Thinking with External Representations (see link here). It’s a compelling view of how external representations give us abilities to think that we don’t have with just our brain alone.

How can program code be a useful external representation for HTM’s? When does it help, e.g., with what cognitive tasks is code a useful external representation? For example, a natural one is modeling and simulation — we can model more complex situations with program code than we can keep in our head, and we can simulate that model for a much larger range of time and possible values. Are there cognitive tasks where code by itself, as a notation like written language or mathematics, can enhance cognition? Here I’m thinking about the ability of code to represent causal relationships (e.g., as in Bruce Sherin’s work) or algebraic forms (e.g., as in Bootstrap) — see here for discussion of both.  I’m intrigued by the idea of the affordances of reading code even before writing it.
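Here is the sort of back-of-the-envelope computational model I have in mind (my own toy example): a few lines of code externalize a model and let us sweep it over many more cases than we could comfortably hold in our heads.

```python
# A toy back-of-the-envelope model (my example): how many years until a
# quantity doubles at a given growth rate? The code holds the model for us
# and lets us explore a whole range of rates at once.
def years_to_double(rate, start=1.0):
    value, years = start, 0
    while value < 2 * start:
        value *= (1 + rate)
        years += 1
    return years

for rate in [0.01, 0.02, 0.05, 0.10]:
    print(f"{rate:.0%} growth: about {years_to_double(rate)} years to double")
```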

What makes programming worth learning for HTM’s?

Why should an HTM learn programming? Let’s assume that an HTM’s basic programming is going to be about staying alive, e.g., Maslow’s hierarchy of needs. When would an HTM want to learn programming?

The most obvious reason to learn programming is that you can get paid to do it. It’s about meeting physiological and safety needs. But, if you can meet those needs doing something that’s easier or more pleasant or has fewer barriers, you’ll likely do that.

Sometimes, you’ll want to learn programming because it makes something you already want to do easier. Brian Dorn’s graphic designers wanted to learn programming (see here) because they used Photoshop or GIMP and wanted a way to do that work more easily and quickly. Maybe that’s about safety and physiological needs, but maybe it was about esteem or even self-actualization (if HTM’s care about those things).

Where my simplification breaks down: Real humans learn in situated and social contexts

Our learning theories about RH’s say that they are unlikely to start learning a new subject unless there’s social pressure to do so (see Pat Alexander’s Model of Domain Learning). Would HTM’s feel social pressure? Maybe.

As I described in the previous blog post, much of my work is framed around sociocultural models of learning, like Lave and Wenger’s situated learning. I use Communities of Practice to understand a lot of the situations that I explore. We can only go so far in thinking about programming as just being inside of individual minds (HTM or RH). Much of the interesting stuff comes when we realize that (a) our cognition interacts with the environments and situations around us, and (b) our motivation, affect, and cognition are influenced by our social world.

Setting aside whether it’s social science or computer science, I am still driven by a paper I read in 1982, five years after it was written: “Personal Dynamic Media” by Alan Kay and Adele Goldberg (see copy here). I want people to be able to use coding like they use other literacies, to create a literature, and in a casual, informal, and still insightful way. Mitchel Resnick often talks about people using Scratch to write a card to their mother or grandmother — that’s the kind of thing I want to see. I want people to be able to make small computational models that answer questions, in the same way that people do “back of the envelope” calculations today. I also want great literature — we need Shakespeares and da Vincis who convey great thoughts with computing (an argument that Andrea diSessa made recently at the PPIG conference, which Felienne Hermans blogged about here). That’s the vision that drives me, whether I’m using cognitive science or situated learning.

 

November 12, 2018 at 8:00 am 8 comments

Constructivism vs. Constructivism vs. Constructionism

I wrote the below in 1997. I’m surprised that I still find references to it from time-to-time. That website may be going away soon, so I thought I’d put it here (only very slightly edited) in case others may find it useful.

I’d like to offer my take on the meaning of these words. I hear them used in so many ways that I often get confused about what others mean by them.

Constructivism, the cognitive theory, was invented by Jean Piaget. His idea was that knowledge is constructed by the learner. There was a prevalent idea at the time (and perhaps today as well) that knowledge is transmitted, that the learner was copying ideas read or heard in lecture directly into his or her mind. Piaget theorized that that’s not true. Instead, learning is the compilation of complex knowledge structures. The learner must consciously make an effort to derive meaning, and through that effort, meaning is constructed through the knowledge structures. Piaget liked to emphasize learning through play, but the basic cognitive theory of constructivism certainly supports learning through lecture — as long as that basic construction of meaning takes place.

I don’t know who invented the notion of Constructivism, the educational philosophy, but it says that each student constructs their own, unique meaning for everything that is learned. This isn’t the same as what Piaget said. Piaget’s theory does not rule out the possibility that you and I may construct exactly the same meaning (i.e., exactly the same knowledge constructions) for some concept or domain. The philosophy of constructivism says that learners will construct their own unique meanings for concepts, so it is not at all reasonable to evaluate students as to how well they have all met some normative goal. (Radical constructivists go so far as to say that the whole concept of a curriculum makes no sense, since we cannot teach anyone anything — students will always simply create their own meaning, regardless of what teachers do.) Philosophical constructivists emphasize having students take control of their own learning, and they de-emphasize lecture and other transmissive forms of instruction. This philosophical approach gets complicated by varying concepts of reality: If we all interpret things differently, is there any correct reality?

From my perspective, the assumption of constructivists is currently an untestable hypothesis. We know of no way to peer into someone’s mental constructions. Until we can, we do not know if you and I think about the concept of velocity differently or the same.

Constructionism is more of an educational method, based on the constructivist learning theory. Constructionism, invented by Seymour Papert, who was a student of Piaget’s, says that learning occurs “most felicitously” when constructing a public artifact “whether a sand castle on the beach or a theory of the universe.” (Quotes are from his chapter “Situating Constructionism” in the book “Constructionism,” edited by Papert and Idit Harel.) Seymour does lean toward the constructivist learning philosophy in his writings, where he talks about the difficulty of conveying a complex concept when the reader is going to construct their own meaning. In general, though, his claim is more about method. He believes that students will be more deeply involved in their learning if they are constructing something that others will see, critique, and perhaps use. Through that construction, students will face complex issues, and they will make the effort to problem-solve and learn because they are motivated by the construction.

The confusion that I and others have about these terms stems from (a) similar-looking words and (b) the word construct carrying meaning at different levels. Piaget was talking about how mental constructions get formed, philosophical constructivists talk about how these constructions are unique (the noun construction), and Papert is simply saying that constructing is a good way to get mental constructions built. The levels here shift from the physical (constructionism) to the mental (constructivism), from theory to philosophy to method, from science to approach to practice.

March 19, 2018 at 9:00 am 5 comments

Elementary School Computer Science – Misconceptions and Developmental Progressions: Papers from SIGCSE 2017

March 8-11, Seattle hosted the ACM SIGCSE Technical Symposium for 2017. This was the largest SIGCSE ever, with over 1500 attendees. I was there and stayed busy (as I described here). This post isn’t a trip report. I want to talk about two of my favorite papers (and one disappointing one) that I’ve read so far.

We are starting to gather evidence on what makes elementary school computer science different than undergraduate computer science. Most of our research on learning programming and computer science is from undergraduates, published in SIGCSE venues. We know relatively little about elementary school students, and it’s obvious that it’s going to be different. But how?

Shuchi Grover and Satabdi Basu of SRI are starting to answer that question in their paper “Measuring Student Learning in Introductory Block-Based Programming: Examining Misconceptions of Loops, Variables, and Boolean Logic.” They looked at the problems that 6th, 7th, and 8th graders had when programming in Scratch. They’re reporting things that I’ve never heard described as misconceptions at the undergraduate level. Like this quote:

Students harbored the misconception that a variable is a letter that is used as a short form for an unknown number – an idea that comes from middle school mathematics classes. Together, this led students to believe that repeat(NumberOfTimes) was a new command. One student conjectured it was a command for multiplication by 5 (the value of NumberOfTimes), while another thought it would print each number five times… After being told that NumberOfTimes was indeed a variable, the students could correctly predict the program output, though they continued to take issue with the length of the variable name.

I find their description believable and fascinating. Their paper made me realize that middle school students are expending cognitive load on issues like multi-character variable names that probably no computer scientist even considered. That’s a real problem, but probably fixable — though the fix might be in the mathematics classes, as well as in the CS classes.

The paper that most impressed me was from Diana Franklin’s group, “Using Upper-Elementary Student Performance to Understand Conceptual Sequencing in a Blocks-based Curriculum.” They’re studying over 100 students, and starting to develop general findings about what works at each of these grade levels. Three of their findings are quoted here:

Finding 1: Placing simple instructions in sequence and using simple events in a block-based language is accessible to 4th-6th grade students.

Finding 2: Initialization is challenging for 4th and 5th grade students.

Finding 3: 6th grade students are more precise at 2-dimension navigation than 4th and 5th grade students.

I’ve always suspected that there was likely to be an interaction between a student’s level of cognitive development and what they would likely be able to do in programming, given how much students are learning about abstraction and representation at these ages. Certainly, programming might influence cognitive development. It’s important to figure out what we might expect.

That’s what Diana’s group is doing. She isn’t saying that fourth graders can’t initialize variables and properties. She’s saying it’s challenging for them. Her results are likely influenced by Scratch and by how the students were taught — it’s still an important result. Diana’s group is offering a starting point for exploring these interactions and understanding what we can expect to be easy and what might be hard for the average elementary school student at different ages. There may be studies that also tell us about developmental progressions in countries that are ahead of the US in elementary school CS (e.g., maybe Israel or Germany). This is the first study of its kind that I’ve read.

SIGCSE 2017 introduced Best Paper awards in multiple categories and Exemplary Paper awards. I applaud these initiatives. Other conferences have these kinds of awards. The awards help our authors stand out in job searches and at promotion time.

For the awards to be really meaningful, though, SIGCSE has to fix its reviewing processes. There were hiccups in this year’s reviewing where there wasn’t much of a match between reviewer expertise and a paper’s topic. Those hiccups led to papers with significant flaws getting high rankings.

The Best Paper award in the Experience Report category was “Making Noise: Using Sound-Art to Explore Technological Fluency.” The authors describe a really nifty idea. They implement a “maker” kind of curriculum. One of the options is that students get toys that make noise, then modify and reprogram them. The toys already work, so it’s about understanding a system, then modifying and augmenting it. The class sounds great, but as Leah Buechley has pointed out, “maker” curricula can be overwhelmingly male. I was surprised that this award-winning paper doesn’t mention females or gender — at all. (There is one picture of a female student in the paper.) I understand that it’s an Experience Report, but gender diversity is a critical issue in CS education, particularly with maker curricula. I consider the omission of even a mention of gender to be a significant flaw in the paper.

April 3, 2017 at 7:00 am 9 comments

How the Pioneers of the MOOC Got It Wrong (from IEEE), As Predicted

There is a sense of vindication that the predictions that many of us made about MOOCs have been proven right, e.g., see this blog post where I explicitly argue (as the article below states) that MOOCs misunderstand the importance of active learning. It’s disappointing that so much effort went to waste. MOOCs do have value, but it’s much more modest than the sales pitch suggested.

What accounts for MOOCs’ modest performance? While the technological solution they devised was novel, most MOOC innovators were unfamiliar with key trends in education. That is, they knew a lot about computers and networks, but they hadn’t really thought through how people learn.

It’s unsurprising then that the first MOOCs merely replicated the standard lecture, an uninspiring teaching style but one with which the computer scientists were most familiar. As the education technology consultant Phil Hill recently observed in the Chronicle of Higher Education, “The big MOOCs mostly employed smooth-functioning but basic video recording of lectures, multiple-choice quizzes, and unruly discussion forums. They were big, but they did not break new ground in pedagogy.”

Indeed, most MOOC founders were unaware that a pedagogical revolution was already under way at the nation’s universities: The traditional lecture was being rejected by many scholars, practitioners, and, most tellingly, tech-savvy students. MOOC advocates also failed to appreciate the existing body of knowledge about learning online, built over the last couple of decades by adventurous faculty who were attracted to online teaching for its innovative potential, such as peer-to-peer learning, virtual teamwork, and interactive exercises. These modes of instruction, known collectively as “active” learning, encourage student engagement, in stark contrast to passive listening in lectures. Indeed, even as the first MOOCs were being unveiled, traditional lectures were on their way out.

Source: How the Pioneers of the MOOC Got It Wrong – IEEE Spectrum

February 17, 2017 at 7:17 am 2 comments

A review of one of my favorite papers: Cognitive Apprenticeship (Collins, Brown, Newman)

I drew on Cognitive Apprenticeship a lot in my dissertation — so much so that Carl Berger asked me at my proposal, “Are you testing Cognitive Apprenticeship as a model?”  I had no idea how to respond, and 25 years later, I still don’t.  How do you test a conceptual framework?

Cognitive apprenticeship, like situated learning, starts from the assumption that apprenticeship is a particularly effective form of education. Then it asks, “How do you offer an apprenticeship around invisible tasks?”

What I like about the essay linked below is that it places cognitive apprenticeship in a broader context.  Apprenticeship isn’t always the best option (as discussed in the post about the Herb Simon paper).

Active listeners or readers, who test their understanding and pursue the issues that are raised in their minds, learn things that apprenticeship can never teach. To the degree that readers or listeners are passive, however, they will not learn as much as they would by apprenticeship, because apprenticeship forces them to use their knowledge. Moreover, few people learn to be active readers and listeners on their own, and that is where cognitive apprenticeship is critical–observing the processes by which an expert listener or reader thinks and practicing these skills under the guidance of the expert can teach students to learn on their own more skillfully.

Source: Cognitive Apprenticeship (Collins, Brown, Newman) | Reading for Pleasure

January 20, 2017 at 7:03 am Leave a comment

Balancing cognition and motivation in computing education: Herbert Simon and evidence-based education

Education is a balancing act between optimally efficient instruction and motivating students. It’s not the same thing to meet the needs of the head and of the heart.

Shuchi Grover tweeted this interesting piece (quoted below) that reviews an article by Herb Simon (with John Anderson and Lynne Reder) that I hadn’t previously heard of. The reviewer sees Herb Simon as taking a stand against discovery-based, situated, and constructivist learning, and in favor of direct instruction. When I read the article, I saw a more subtle message. I do recommend reading the review piece linked below.

He [Herbert Simon] rejects discovery learning, and praises teacher instruction

When, for whatever reason, students cannot construct the knowledge for themselves, they need some instruction. The argument that knowledge must be constructed is very similar to the earlier arguments that discovery learning is superior to direct instruction. In point of fact, there is very little positive evidence for discovery learning and it is often inferior (e.g., Charney, Reder & Kusbit, 1990). Discovery learning, even when successful in acquiring the desired construct, may take a great deal of valuable time that could have been spent practicing this construct if it had been instructed. Because most of the learning in discovery learning only takes place after the construct has been found, when the search is lengthy or unsuccessful, motivation commonly flags.

Source: Herbert Simon and evidence-based education | The Wing to Heaven

Some cognitive scientists have been railing against the constructivist and situated approaches to learning for years. Probably the most important paper representing the cognitivist perspective is the Kirschner, Sweller, and Clark paper, “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching.”  I talked about the Kirschner, Sweller, and Clark paper in this blog post with its implication for how we teach computer science.

The conclusion is pretty straightforward: Direct instruction is far more efficient than making the students work it out for themselves. Having students struggle to figure something out for themselves does not lead to deeper learning or more transfer than simply telling them what they ought to do. Drill and practice is important. Learning in authentic, complex situations is unnecessary and often undesirable, because failure increases with complexity.

The Anderson, Reder, and Simon article does something important that the famous Kirschner, Sweller, and Clark paper doesn’t — it talks about motivation. The words “motivation” and “interests” don’t appear anywhere in the Kirschner, Sweller, and Clark paper. Important attitudes about learning (like Carol Dweck’s fixed and growth mindsets, or Angela Duckworth’s grit) are not even considered.

In contrast, Anderson, Reder, and Simon understand that motivation is a critical part of learning.

Motivational questions lie outside our present discussion, but are at least as complex as the cognitive issues. In particular, there is no simple relation between level of motivation, on the one hand, and the complexity or realism of the context in which the learning takes place, on the other. To cite a simple example, learning by doing in the real-life domain of application is sometimes claimed to be the optimum procedure. Certainly, this is not true, when the tasks are life-threatening for novices (e.g., firefighting), when relevant learning opportunities are infrequent and unpredictable (e.g., learning to fly a plane in bad weather), or when the novice suffers social embarrassment from using inadequate skills in a real-life context (e.g., using a foreign language at a low level of skill). The interaction of motivation with cognition has been described in information-processing terms by Simon (1967, 1994). But an adequate discussion of these issues would call for a separate paper as long as this one.

There are, of course, reasons sometimes to practice skills in their complex setting. Some of the reasons are motivational and some reflect the special skills that are unique to the complex situation. The student who wishes to play violin in an orchestra would have a hard time making progress if all practice were attempted in the orchestra context. On the other hand, if the student never practiced as a member of an orchestra, critical skills unique to the orchestra would not be acquired. The same arguments can be made in the sports context, and motivational arguments can also be made for complex practice in both contexts. A child may not see the point of isolated exercises, but will when they are embedded in the real-world task. Children are motivated to practice sports skills because of the prospect of playing in full-scale games. However, they often spend much more time practicing component skills than full-scale games. It seems important both to motivation and to learning to practice one’s skills from time to time in full context, but this is not a reason to make this the principal mechanism of learning.

As a constructionist-oriented learning scientist, I’d go further with the benefits of a motivating context (which is a subset of what they’re calling a “complex setting”). When you “figure it out for yourself,” you have a different relationship to the domain. You learn about process, as well as content, as in learning what it means to be a scientist or how a programmer thinks. When you are engaged in the context, practice is no longer onerous but an important part of developing expertise — still arduous, but with meaning. Yasmin Kafai and Quinn Burke talk about changing students’ relationship with technology. Computer science shouldn’t just be about learning knowledge, but developing a new sense of empowerment with technology.

I’ve been wondering about what (I think) is an open research question about cognitivist vs. situationist approaches on lifelong learning. I bet you’re more likely to continue learning in a domain when you are a motivated and engaged learner. An efficiently taught but unmotivated learner is less likely to continue learning in the discipline, I conjecture.

While they underestimate the motivational aspect of learning, Anderson, Reder, and Simon are right about the weaknesses of an authentic context. We can’t just throw students into complex situations. Many students will fail, and those who succeed won’t be learning any better. They will just learn more slowly.

Anderson, Reder, and Simon spend much of their paper critiquing Lave & Wenger’s Situated Learning. I draw on situated learning in my work (e.g., see post here) and reference it frequently in my book on Learner-Centered Computing Education, but I agree with their critique. Lave & Wenger are insightful about the motivation part, but miss on the cognitive part. Situated learning, in particular, provides insight into how learning is a process of developing identity. Lave & Wenger value apprenticeship as an educational method too highly. Apprenticeship has lots of weaknesses: it’s inefficient, inequitable, and difficult to scale.

The motivational component of learning is particularly critical in computing education. Most of our hot issues are issues of motivation.

The challenge of being an effective computing educator is to make learning authentic and complex enough to maintain motivation, and to use scaffolding to support student success and make learning more efficient. That’s the point of Phyllis Blumenfeld et al.’s “Motivating Project-Based Learning: Sustaining the Doing, Supporting the Learning.” (I’m in the “et al,” and it’s the most cited paper I’ve ever been part of.) Project-based learning is complex and authentic, but has the weaknesses that the cognitivists describe. Blumenfeld et al. suggest using technology to help students sustain their motivation and support their learning.

Good teaching is not just a matter of choosing the most efficient forms of learning. It’s also about motivating students to persevere, to tell them the benefits that make the efforts worthwhile. It’s about feeding the heart in order to feed the head.

January 6, 2017 at 7:00 am 8 comments

Graduating Dr. Briana Morrison: Posing New Puzzles for Computing Education Research

I am posting this on the day that I am honored to “hood” Dr. Briana Morrison. “Hooding” is where doctoral candidates are given their academic regalia indicating their doctorate degree. It’s one of those ancient parts of academia that I find really cool. I like the way that the Wikiversity describes it: “The Hooding Ceremony is symbolic of passing the guard from one generation of doctors to the next generation of doctors.”

I’ve written about Briana’s work a lot over the years on this blog.

But what I find most interesting about Briana’s dissertation work are the things that didn’t work:

  • She tried to show a difference between getting program instruction via audio versus text. She didn’t find one. The research on modality effects suggested that she would.
  • She tried to show a difference between loop-and-a-half and exit-in-the-middle WHILE loops. Previous studies had found one. She did not.

These kinds of results are so cool to me, because they point out what we don’t know about computing education yet. The prior results and theory were really clear. The study was well-designed and vetted by her committee. The results were contrary to what we expected. WHAT HAPPENED?!? It’s for the next group of researchers to try to figure out.

The most interesting result of that kind in Briana’s dissertation is one that I’ve written about before, but I’d like to pull it all together here because I think that there are some interesting implications of it. To me, this is a Rainfall Problem kind of question.

Here’s the experimental set-up. We’ve got six groups.

  1. All groups are learning with pairs of a worked example (a completely worked out piece of code) and then a practice problem (maybe a Parson’s Problem, maybe writing some code). We’ll call these WE-P pairs (Worked Example-Practice). Now, some WE-P pairs have the same context (think of it as the story of a story problem), and some have different contexts. Maybe in the same context, you’re asked to compute the average of several days of tips as a barista. Maybe in a different context, you compute tips in the worked example, but you compute the average test score in the practice. In general, we predict that different contexts will be harder for the student than having everything the same.
  2. So we’ve got same context vs. different context as one variable we’re manipulating. The other variable is whether the participants get the worked example with NO subgoal labels, with GIVEN subgoal labels, or where the participant has to GENERATE subgoal labels. Think of a subgoal label as a comment that explains some code, but it’s the same comment that will appear in several different programs. It’s meant to encourage the student to abstract the meaning of the code.

In the GENERATE condition, the participants get blanks, to encourage them to abstract for themselves. Typically, we’d expect (from research on subgoal labels in other parts of STEM) that GENERATE would lead to more learning than GIVEN labels, but it’s harder. We might get cognitive overload.
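To make the conditions concrete, here is a hypothetical worked example in the barista/tips context from above (the code and the particular labels are my own illustration, not materials from the study):

```python
# Worked example with GIVEN subgoal labels (the labels are the comments).
# In the GENERATE condition, the learner would see blanks instead of these
# comments and would have to write the labels themselves.

# Subgoal: initialize the accumulator
total_tips = 0
# Subgoal: loop through the data, updating the accumulator
daily_tips = [31, 40, 27, 34]
for tips in daily_tips:
    total_tips = total_tips + tips
# Subgoal: divide the total by the count to get the average
average_tips = total_tips / len(daily_tips)
print(average_tips)  # -> 33.0

# A SAME-context practice problem would ask for another average of tips;
# a DIFFERENT-context practice problem would ask for, say, an average of
# test scores, with the same subgoal structure underneath.
```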

In general, GIVEN labels beat out no labels. No problem — that’s what we expect given all the past work on subgoal labels. But when we consider all six groups, the results get strange.

Why would having the same context do worse with GIVEN labels than no labels? Why would the same context do much better with GENERATE labels, but worse when it’s different contexts?

So, Briana, Lauren, and Adrienne Decker replicated the experiment with Adrienne’s students at RIT (ICER 2016). And they found:

The same strange “W” pattern, where we have this odd interaction between context and GIVEN vs. GENERATE that we just don’t have an explanation for.

But here’s the really intriguing part: they also did the experiment with second semester students at RIT. All the weird interactions disappeared! Same context beat different context. GIVEN labels beat GENERATE labels. No labels did the worst. When students get enough experience, they figure things out and behave like students in other parts of STEM.

The puzzle for the community is WHY. Briana has a hypothesis. Novice students don’t attend to the details that they need to, unless you change the contexts. Without changing contexts, students don’t learn even when GIVEN labels, because they’re not paying enough attention. Changing contexts gets them to think, “What’s going on here?” GENERATE is just too hard for novices — the cognitive load of figuring out the code and generating labels is just overwhelming, so they do badly when we’d expect them to do better.

Here we have a theory-conflicting result, that has been replicated in two different populations. It’s like the Rainfall Problem. Nobody expected the Rainfall Problem to be hard, but it was. More and more people tried it with their students, and still, it was hard. It took Kathi Fisler to figure out how to teach CS so that most students could succeed at the Rainfall Problem. What could we teach novice CS students so that they avoid the “W” pattern? Is it just time? Will all second semester students avoid the “W”?
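For readers who haven’t seen it, the Rainfall Problem is usually posed roughly like this (a sketch under my assumption of the standard formulation, not the original assignment): average the non-negative inputs, stopping at a sentinel value.

```python
# The Rainfall Problem, roughly as usually posed (my sketch): read values,
# stop at the sentinel 99999, ignore negative values, and report the average
# of the valid readings. Deceptively simple, famously hard for novices.
def rainfall_average(values, sentinel=99999):
    total, count = 0, 0
    for v in values:
        if v == sentinel:
            break
        if v >= 0:
            total += v
            count += 1
    return total / count if count > 0 else 0

print(rainfall_average([12, -2, 5, 99999, 40]))  # -> 8.5
```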

Dr. Morrison gave us a really interesting dissertation — some big wins, and some intriguing puzzles for the next researchers to wrestle with. Briana has now joined the computing education research group at U. Nebraska – Omaha, where I expect to see more great results.

December 16, 2016 at 7:00 am 7 comments
