Posts tagged ‘cognitive science’
I enjoy reading “Gas station without pumps,” and the post quoted below is one I wanted to respond to.
Two of the popular memes of education researchers, “transferability is an illusion” and “the growth mindset”, are almost in direct opposition, and I don’t know how to reconcile them.
One possibility is that few students actually attempt to learn the general problem-solving skills that math, CS, and engineering design are rich domains for. Most are content to learn one tiny skill at a time, in complete isolation from other skills and ideas. Students who are particularly good at memory work often choose this route, memorizing pages of trigonometric identities, for example, rather than learning how to derive them at need from a few basics. If students don’t make an attempt to learn transferable skills, then they probably won’t. This is roughly equivalent to claiming that most students have a fixed mindset with respect to transferable skills, and suggests that transferability is possible, even if it is not currently being learned.
Teaching and testing techniques are often designed to foster an isolation of ideas, focusing on one idea at a time to reduce student confusion. Unfortunately, transferable learning comes not from practice of ideas in isolation, but from learning to retrieve and combine ideas—from doing multi-step problems that are not scaffolded by the teacher.
The problem with “transferability” is that it’s an ill-defined term. Certainly, there is transfer of skill between domains. Sharon Carver showed a long time ago that she could teach debugging Logo programs, and students would transfer that debugging process to instructions on a map (mentioned in post here). That’s transferring a skill or a procedure. We probably do transfer big, high-level heuristics like “divide-and-conquer” or “isolate the problem.” One issue is whether we can teach them. John Sweller says that we can’t — we must learn them (it’s a necessary survival skill), but they’re learned from abstracting experience (see Neil Brown’s nice summary of Sweller’s SIGCSE keynote).
Whether we can teach them or not, what we do know is that higher-order thinking is built on lots of content knowledge. Novices are unlikely to transfer until they know a lot of stuff, a lot of examples, a lot of situations. For example, novice designers often have “design fixation.” They decide that the first thing they think of must be the right answer. We can insist that novice designers generate more designs, but they’re not going to generate more good designs until they know more designs. Transfer happens pretty easily when you know a lot of content and have seen a lot of situations, and you recognize that one situation is actually like another.
Everybody starts out learning one tiny skill at a time. If you know a lot of skills (maybe because you have lots of prior experience, maybe because you have thought about these skills a lot and have recognized the general principles), you can start chunking these skills and learning whole schema and higher-level skills. But you can’t do that until you know lots of skills. Students who want to learn one tiny skill at a time may actually need to still learn one tiny skill at a time. People abstract (e.g., able to derive a solution rather than memorize it) when they know enough content that it’s useful and possible for them to abstract over it. I completely agree that students have to try to abstract. They have to learn a lot of stuff, and then they have to be in a situation where it’s useful for them to abstract.
“Growth mindset” is a necessity for any of this to work. Students have to believe that content is worth knowing and that they can learn it. If students believe that content is useless, or say that they just “don’t do math” or are “not a computer person” (both of which I’ve heard in just the last week), they are unlikely to learn content, they are unlikely to see patterns in it, and they are unlikely to abstract over it.
Kevin is probably right that we don’t teach problem solving in engineering or computing well. I blogged on this theme for CACM last month — laboratory experiments work better for a wider range of students than classroom studies. Maybe we teach better in labs than in classrooms? The worked examples effect suggests that we may be asking students to problem-solve too much. We should show students more completely worked-out problems. As Sweller said at SIGCSE, we can’t expect students to solve novel problems. We have to expect students to match new problems to solutions that they have already seen. We do want students to solve problems, too, and not just review example solutions. Trafton and Reiser showed that these should be interleaved: Example, Problem, Example, Problem… (see this page for a summary of some of the worked examples research, including Trafton & Reiser).
When I used to do Engineering Education research, one of my largest projects was a complete flop. We had all this prior work showing the benefits of a particular collaborative learning technology and technique, then we took it into the engineering classroom and…poof! Nothing happened. In response, we started a project to figure out why it failed so badly. One of our findings was that “learned helplessness” was rampant in our classes, which is a symptom of a fixed mindset. “I know that I’m wrong, and there’s nothing that I can do about it. Collaboration just puts my errors on display for everyone,” was the kind of response we got. (See here for one of our papers on this work.)
I believe that all the things Kevin sees going wrong in his classes really are happening. I believe he’s not seeing transfer that he might reasonably expect to see. I believe that he doesn’t see students trying to abstract across lower-level skills. But I suspect that the problem is the lack of a growth mindset. In our work, we saw Engineering students simply give up. They felt like they couldn’t learn, they couldn’t keep up, so they just memorized. I don’t know that that’s the cause of the problems that Kevin is seeing. In my work, I’ve often found that motivation and incentive are key to engagement and learning.
We’re years into the MOOC phenomenon, and I’d hoped that we’d get past MOOC hype. But we haven’t. The article below shows the same misunderstandings of learning and teaching that we heard at the start — misunderstandings that even MOOC supporters (like here and here) have stopped espousing.
The value of being in the front row of a class is that you talk with the teacher. Getting physically closer to the lecturer doesn’t improve learning. Engagement improves learning. A MOOC puts everyone at the back of the class, listening only and doing the homework.
In many ways, we have a romanticized view of college. Popular portrayals of a typical classroom show a handful of engaged students sitting attentively around a small seminar table while their Harrison Ford-like professor shares their wisdom about the world. We all know the real classroom is very different. Especially in big introductory classes — American history, U.S. government, human psychology, etc. — hundreds of disinterested, and often distracted, students cram into large impersonal lecture halls, passively taking notes, occasionally glancing up at the clock waiting for the class to end. And it’s no more engaging for the professor. Usually we can’t tell whether students are taking notes or updating their Facebook page. For me, everything past the ninth row was distance learning. A good online platform puts every student in the front row.
Important paper at SIGCSE 2015: Transferring Skills at Solving Word Problems from Computing to Algebra Through Bootstrap
I was surprised that this paper didn’t get more attention at SIGCSE 2015. The Bootstrap folks are seeing evidence of transfer from the computing and programming activities into mathematics performance. There are caveats, so the results are only suggestive at this time.
What I’d like to see in follow-up studies is more analysis of the students. The paper cited below describes the design of Bootstrap and why they predict impact on mathematics learning, and describes the pre-test/post-test evidence of impact on mathematics. When Sharon Carver showed impact of programming on problem-solving performance (mentioned here), she looked at what the students did — she showed that her predictions were met. Lauren Margulieux did think-aloud protocols to show that students were really saying subgoal labels to themselves when transferring knowledge (see subgoal labeling post). When Pea & Kurland looked for transfer, they found that students didn’t really learn CS well enough to expect anything to transfer — so we need to demonstrate that they learned the CS, too.
Most significant bit: Really cool that we have new work showing potential transfer from CS learning into other disciplines.
Many educators have tried to leverage computing or programming to help improve students’ achievement in mathematics. However, several hopes of performance gains—particularly in algebra—have come up short. In part, these efforts fail to align the computing and mathematical concepts at the level of detail typically required to achieve transfer of learning. This paper describes Bootstrap, an early-programming curriculum that is designed to teach key algebra topics as students build their own videogames. We discuss the curriculum, explain how it aligns with algebra, and present initial data showing student performance gains on standard algebra problems after completing Bootstrap.
In Josh Tenenberg’s lead article in the September 2014 ACM Transactions on Computing Education (linked below), he uses this blog, and in particular, this blog post on research questions, as a foil for exploring what questions we ask in computing education research. I was both delighted (“How wonderful! I have readers who are thinking about what I’m writing!”) and aghast (“But wait! It’s just a blog post! I didn’t carefully craft the language the way I might a serious paper!”) — but much more the former. Josh is kind in his consideration, and raises interesting issues about our perspectives in our research questions.
I disagree with one part of his analysis, though. He argues that my conception of computing education (“the study of how people come to understand computing”) is inherently cognitivist (centered in the brain, ignoring the social context) because of the word “understand.” Maybe. If understanding is centered in cognition, yes, I agree. If understanding is demonstrated through purposeful action in the world (i.e., you understand computing if you can do with computing what you want), then it’s a more situated definition. If understanding is a dialogue with others (i.e., you understand computing if you can communicate about computing with others), then it’s more of a sociocognitive definition.
The questions he calls out are clearly cognitivist. I’m guilty as charged — my first PhD advisor was a cognitive scientist, and I “grew up” as the learning science community was being born. That is my default position when it comes to thinking about learning. But I think that my definition of the field is more encompassing, and in my own work, I tend toward thinking more about motivation and about communities of practice.
Asking significant research questions is a crucial aspect of building a research foundation in computer science (CS) education. In this article, I argue that the questions that we ask are shaped by internalized theoretical presuppositions about how the social and behavioral worlds operate. And although such presuppositions are essential in making the world sensible, at the same time they preclude carrying out many research studies that may further our collective research enterprise. I build this argument by first considering a few proposed research questions typical of much of the existing research in CS education, making visible the cognitivist assumptions that these questions presuppose. I then provide a different set of assumptions based on sociocultural theories of cognition and enumerate some of the different research questions to which these presuppositions give rise. My point is not to debate the merits of the contrasting theories but to demonstrate how theories about how minds and sociality operate are immanent in the very questions that researchers ask. Finally, I argue that by appropriating existing theory from the social, behavioral, and learning sciences, and making such theories explicit in carrying out and reporting their research, CS education researchers will advance the field.
Premise 1: Teaching is a human endeavor that does not and cannot improve over time.
Premise 2: Human beings are fantastic learners.
Premise 3: Humans don’t learn well in the teaching-focused classroom.
Conclusion: We won’t meet the needs for more and better higher education until professors become designers of learning experiences and not teachers.
Interesting argument linked above, but wrong.
- Premise 1: Teaching does improve with time. Gerhard Fischer published a wonderful piece many years ago that showed how skiing instruction has improved over time, and that the approaches used can be understood in terms of cognitive science.
- Premise 2: Humans are fantastic learners, but as Kirschner, Sweller, and Clark showed, humans learn much better with direct instruction.
- Premise 3: No, no one learns well in a teaching-focused classroom. However, many teachers help their students learn better in student-centered classrooms.
- The Conclusion doesn’t follow from the premises at all.
I don’t agree that learning a programming language is as useful as learning a foreign language, especially in terms of increased communication capability (so I wouldn’t see it as equivalent to a foreign language requirement). I see learning a foreign language as far more important and useful. It is interesting to think about cognitive effects of learning programming that might be similar to the cognitive effects of learning another human language.
Learning a language increases perception. Multilingual students are better at observing their surroundings. They can focus on important information and exclude information that is less relevant. They’re also better at spotting misleading data. Likewise, programming necessitates being able to focus on what works while eliminating bugs. Foreign language instruction today emphasizes practical communication — what students can do with the language. Similarly, coding is practical, empowering and critical to the daily life of everyone living in the 21st century.
Really interesting blog post, dissecting the mistakes made in a very popular TED talk.
Sir Ken’s ideas aren’t just impractical; they are undesirable. Here’s the trouble with his arguments:
1. Talent, creativity and intelligence are not innate, but come through practice.
2. Learning styles and multiple intelligences don’t exist.
3. Literacy and numeracy are the basis for creativity.
4. Misbehaviour is a bigger problem in our schools than conformity.
5. Academic achievement is vital but unequal, partly because…
6. Rich kids get rich cultural knowledge, poor kids don’t.
I don’t completely agree with all of Pragmatic Education’s arguments.
- Intelligence may not be malleable. You can learn more knowledge, and that can come from practice. It’s not clear that fluid intelligence is improved with practice.
- Learning styles don’t seem to exist. Multiple intelligences? I don’t think that the answer is as clear there.
- Creativity comes from knowing things. Literacy and numeracy are great ways of coming to know things. It’s a bit strong to say that creativity comes from literacy and numeracy.
- There are lots of reasons why outcomes differ between rich kids and poor kids (see the issue about poverty and cognitive function). Cultural knowledge is just part of it.
But he’s about 90% right — I think he gets what’s wrong with Sir Ken’s arguments.