Posts tagged ‘educational psychology’
The Muller research described in the post below was discussed here previously, and is related to the predict-before-demo work that Eric Mazur presented at last year’s ICER. The key point here is that data mining can’t get at this level of abstraction in identifying good teaching. I’m also concerned that data mining can’t help if you lose 80% of your subject pool — you can’t learn about people who aren’t there.
But even granting that you can get sufficiently rich information about the students, there’s another hard problem. Let’s say that, thanks to the upgrade in your big data infinite improbability drive made possible by your new Spacely’s space sprocket, your system is able to flag at least a critical mass of videos taught in the Muller method as having a bigger educational impact on students than the average educational video, by some measure you have identified. Would the machine be able to infer that these videos belong in a common category in terms of the reason for their effectiveness? Would it be able to figure out what Muller did? There are lots of reasons why a video might be more effective than average, and many of those reasons are internal to the narrative structure of the video. The machine only knows things like the format of the video, the length, what kind of class it’s in, who the creator is, when it was made, and so on. Other than the external characteristics of the video file, it mostly knows what we tell it about the contents. It has no way to inspect the video and deduce that a particular presentation strategy is being used. We are nowhere close to having a machine that is smart enough to do what Muller did and identify a pattern in the narrative of the speaker.
Was anyone else bothered by the argument in this NYTimes blog post? “MOOCs aren’t effective in terms of completion rates; Duolingo is not a MOOC; Duolingo is effective.” So… what does that tell us about MOOCs?
The paper on Duolingo effectiveness is pretty cool. I think it’s particularly noteworthy that more prior knowledge of Spanish led to less of an effect of Duolingo. I wonder if that’s because Duolingo is essentially using a worked example model, and worked examples do suffer from the expertise reversal effect.
Moreover, there are early indications that the high interactivity and personalized feedback of online education might ultimately offer a learning structure that can’t be matched by the traditional classroom.
Duolingo, a free Web-based language learning system that grew out of a Carnegie Mellon University research project, is not an example of a traditional MOOC. However, the system, which now teaches German, French, Portuguese, Italian, Spanish and English, has roughly one million users and about 100,000 people spend time on the site daily.
The journal article on the research that Klara Benda, Amy Bruckman, and I did finally came out last month in the ACM Transactions on Computing Education. The abstract is below. Klara has a background in sociology, and she’s done a great job of blending research from sociology with more traditional education and learning sciences perspectives to explain what happens when working professionals take on-line CS classes. This work has informed our CSLearning4U project significantly, and informs my perspective on MOOCs.
We present the results of an interview study investigating student experiences in two online introductory computer science courses. Our theoretical approach is situated at the intersection of two research traditions: distance and adult education research, which tends to be sociologically oriented, and computer science education research, which has strong connections with pedagogy and psychology. The article reviews contributions from both traditions on student failure in the context of higher education, distance and online education as well as introductory computer science. Our research relies on a combination of the two perspectives, which provides useful results for the field of computer science education in general, as well as its online or distance versions. The interviewed students exhibited great diversity in both socio-demographic and educational background. We identified no profiles that predicted student success or failure. At the same time, we found that expectations about programming resulted in challenges of time-management and communication. The time requirements of programming assignments were unpredictable, often disproportionate to expectations, and clashed with the external commitments of adult professionals. Too little communication was available to access adequate instructor help. On the basis of these findings, we suggest instructional design solutions for adult professionals studying introductory computer science education.
I mentioned in a previous blog post the nice summary article that Audrey Watters wrote (linked below) about Learning to Code trends in educational technology in 2012, when I critiqued Jeff Atwood’s position on not learning to code.
Audrey does an excellent job of describing the big trends in learning to code this last year, from Codecademy to Bret Victor and Khan Academy and MOOCs. But the part that I liked the best was where she identified the problem that cool technology and badges won’t solve: culture and pedagogy.
Two organizations — Black Girls Code and CodeNow — did hold successful Kickstarter campaigns this year to help “change the ratio” and give young kids of color and young girls opportunities to learn programming. And the Irish non-profit CoderDojo also ventured state-side in 2012, helping expand afterschool opportunities for kids interested in hacking. The Maker Movement, another key ed-tech trend this year, is also opening doors for folks to play and experiment with technologies.
And yet, despite all the hype and hullaballoo from online learning startups and their marketing campaigns that now “everyone can learn to code,” it’s clear there are still plenty of problems with the culture and the pedagogy surrounding computer science education.
We still do need new programming languages whose design is informed by how humans work and learn. We still do need new learning technologies that can help us provide the right learning opportunities for individual students’ needs and can provide access to those who might not otherwise get the opportunity. But those needs are swamped by culture and pedagogy.
What do I mean by culture and pedagogy?
Culture: Betsy DiSalvo’s work on Glitch is a great example of considering culture in computing education. I’ve written about her work before — that she engaged a couple dozen African-American teen men in computing by hiring them to be video game testers, and the majority of those students went on to post-secondary education in computing. I’ve talked with Betsy several times about how and why that worked. The number one reason why it worked: Betsy spent the time to understand the African-American teen men’s values, their culture, what they thought was important. She engaged in an iterative design process with groups of teen men to figure out what would most appeal to them, how she could reframe computing into something that they would engage with. Betsy taught coding — but in a different way, in a different context, with different values, where the way, context, and values were specifically tuned to her audience. Is it worth that effort? Yeah, because it’s about making a computing that appeals to these other audiences.
Pedagogy: A lot of my work these days is about pedagogy. I use peer instruction in my classrooms, and try out worked examples in various ways. In our research, we use subgoal labels to improve our instructional materials. These things really work.
Let me give you an example with graphs that weren’t in Lauren Margulieux’s paper, but are in the talk slides that she made for me. As you may recall, we had two sets of instructional materials: a set of nice videos and text descriptions that Barbara Ericson built, and a similar set with subgoal labels inserted. We found that the subgoal labelled instruction led to better performance (faster and more correct) immediately after instruction, more retention (better performance a week later), and better performance on a transfer task (got more done on a new app that the students had never seen before). But I hadn’t shown you before just how enormous the gap was between the subgoal labelled group and the conventional group on the transfer task.
Part of the transfer task involved defining a variable in App Inventor — don’t just grab a component, but define a variable to represent that component. The subgoal label group did that more often. A LOT more often.
Lauren also noticed that the conventional group tended to “thrash,” to pull out more blocks in App Inventor than they actually needed. The correlation between number of blocks drawn out and correctness was r = -.349 — you are less likely to be correct (by a large amount) if you pull out extra blocks. Here’s the graph of number of blocks pulled out by each group.
These aren’t small differences! These are huge differences from a surprisingly small difference between the instructional materials. Improving our pedagogy could have a huge impact.
I agree with Audrey: Culture and pedagogy are two of the bigger issues in learning to code.
Fascinating question! Bilingual people have some additional executive control. Does learning a programming language give a similar benefit in executive control? The study described below is suggestive but not conclusive. If we could find evidence for it, it would be another benefit of learning to program.
If computer programming languages are languages, then people who spoke one language and could programme to a high standard should be bilingual. Research has suggested that bilingual people perform faster than monolingual people at tasks requiring executive control – that is, tasks involving the ability to pay attention to important information and ignore irrelevant information (for a review of the “robust” evidence for this, see Hilchey & Klein, 2011). So, I set out to find out whether computer programmers were better at these tasks too. It is thought that the bilingual advantage is the result of the effort involved in keeping two languages separate in the brain and deciding which one to use. I noticed that novice computer programmers have difficulty in controlling “transfer” from English to programming languages (e.g. expecting the command “while” to imply continuous checking; see Soloway and Spohrer, 1989), so it seemed plausible that something similar might occur through the learning of programming languages.
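To see the kind of “transfer” Soloway and Spohrer documented, here is a minimal sketch (my own illustrative code, not from the studies cited above). The English word “while” suggests continuous monitoring, but in most programming languages the condition is tested only once per pass, at the top of the loop:

# Novice expectation: the loop stops the instant the condition becomes false.
# Actual behavior: the condition is checked only at the top of each pass.
x = 0
while x < 10:
    x = 99                             # the condition is now false...
    print("but this line still runs")  # ...yet the body finishes its pass
# Only after the body completes is the condition re-tested, and the loop exits.
# A learner expecting continuous checking predicts the print never happens.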
Mike Hewner successfully passed his PhD dissertation defense on Friday. There are just some dissertation tweaks and bureaucracy to go. In the process of the defense, there were several really interesting implications for his theory that got spelled out, and they relate to some of the comments made in response to my post on his dissertation last week.
Early choice is not early decision: In response to a question about when students should decide their specializations (should it be earlier in the degree or later in the degree), Mike said, “Making a choice early doesn’t force making a decision early.” We then spent some time unpacking that.
In Mike’s theory, students spend time exploring until they face a differential in enjoyment between classes that students interpret as an affinity for one topic over another. Students use this process to decide on a major, or to decide on a specialization area within a major. Once they’ve made a decision, they are more committed, and are willing to go through less-enjoyable classes in pursuit of a goal that they have now decided on. Forcing students to make a choice early (between majors or specializations) doesn’t change this process — they don’t decide earlier to become committed to a major or specialization. Forcing the choice early may just mean delaying graduation, when students finally decide on something else and become committed to that other path.
Job as ill-defined goal: One of the surprising and somewhat contradictory ideas in Mike’s thesis is that, while US students today may be more driven to get a college education in order to get a better job or a middle class lifestyle, they don’t necessarily know what that job entails. Students that Mike interviewed rarely could describe what kind of job they wanted, or if they did, it was vague (“Work for Google”) and the students couldn’t explain what that job would require or what classes they should take to prepare for that job.
When we were first developing Threads, we talked about helping students to describe the kind of job they wanted, and then we could advise them to pick the Threads that would help them achieve that career. But Mike’s theory says that that’s backwards. Students don’t know what kind of job they want. They use experiences in the classes to help them decide what kind of work they will enjoy.
Hewner’s theory is constructivist: Mike was asked, “How would you advise a student such that they could figure out the best Thread for themselves?” Mike’s response was that students would need to do something that was authentic and representative of work within that Thread — which is hard to do in an accessible manner for students who don’t know much about that Thread yet. You can’t just tell students about the Threads or about the jobs that fit into the Threads. It’s unlikely that students will be able to successfully predict whether they would enjoy the work in the Thread based on a description.
In some sense, Mike’s theory is intensely constructivist. Mike’s students won’t decide on a major, specialization or career choice until they experience the work of that major, specialization, or career choice, and then decide if they enjoy it or not for themselves. If decisions are made based on enjoyment, you can’t tell someone that they’d enjoy the experience. They have to figure it out for themselves.
In the last few weeks, the focus of the MOOC debate seems to have shifted to an important question: Exactly what is the value of face-to-face contact? The President of Williams College, Adam F. Falk, published a piece in the WSJ claiming that contact hours with a professor are the most important factor in learning.
A recent article in the Georgia Tech Alumni Magazine claims exactly the opposite (with no reference to support this dubious claim): “In fact, one of the core tenets of traditional learning—that face-to-face interaction between teacher and student is critical—is actually of almost no value, according to meta-analysis of education studies.” The very next paragraph starts with: “Meta-analysis shows that the other most effective educational tool is one-on-one tutoring.” So the tutoring is only valuable if it’s not face-to-face?
The article below by Walt Gardner raises a more reasoned critique of Falk’s WSJ piece. The question hasn’t been resolved one way or another for me, but it’s certainly one of the key questions in the debate over the value of MOOCs. What is lost when face-to-face contact is removed? How are on-line media forms best used for learning?
According to Falk, the curriculum, the choice of major, and the GPA do not predict self-reported gains in these critical outcomes nearly as well as “how much time a student spent with professors.” In other words, a professor can be a dud in the classroom and yet still be effective in helping students achieve the stated goals. How is that possible? I don’t doubt that the relationship between professors and students is an important factor in learning. But that’s not what Falk argues. Instead, he asserts that it’s the number of hours a professor logs with students after the bell rings that counts the most. I fail to see what that has to do with instruction.
The rebuttal is that not all learning takes place in the classroom. Fair enough. But “personal contact” can mean having coffee and talking about the latest fashions. I’m sure that’s a pleasant way to spend time, but how does that translate into, say, being able to write effectively? I assume that the time spent with students does not involve tutoring because Falk never uses the word. The irony, of course, is that when teachers in K-12 complain about the need for small classes so that they have a better chance to know students and design lessons in line with their needs and interests, they are seen as making excuses.
This is a great point, and it’s the same one that we’re trying to make with Lauren’s paper at ICER 2012. Instructional design matters! Educational psychologists do know how to make learning better. Lauren’s well-designed video results in better learning (fewer errors, more retention, and signs of transfer). To Khan’s credit, he is updating his videos in response to these critiques. Better yet, he might hire an instructional designer to do the critiques in-house.
The errors highlight a blind spot that plagues many Khan Academy lectures: Khan is both brilliant and talented, but he doesn’t know much about pedagogy, the science of teaching information effectively.
The video filtered up through the ranks of ed bloggers to Justin Reich’s blog in the trade publication Ed Week, and within five days of going up on YouTube reached Salman Khan. To his credit, Khan took down his original video and released two new, better lectures in its place within two days. He also sent a comment to Reich saying that he appreciates the feedback.
The scenario described in the experiment below has been repeated many times in the education literature: Students are asked to read some material (or listen to a lecture), are then asked to do something with that material (e.g., take a quiz, write down everything they can remember, do a mind-mapping exercise), and some time later, they take a test to measure retention. In the experiment described below, simple writing beat out mind-mapping. Interesting, but it’s the general pattern that I want to highlight.
This pattern of information+activity+retention is common, and really does work. Doing something with the knowledge improves retention over time.
So how do we do this in computer science? What do we ask our students to do after lecture, or after reading, or after programming, to make it more likely that they retain what they learned? If our only answer is, “Write more programs,” then we missed the point. What if we just had our students write down what they learned? Even if it was facts about the program (e.g., “The test for the sentinel value is at the top of the loop when using a WHILE”), it would help to retain that knowledge later. What this particular instance points out is that the retention activity can be very simple and still be effective. Not doing anything to encourage retention is unlikely to be effective.
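For instance, here is the kind of sentinel-value loop that fact describes (a minimal, hypothetical example; the prompt and names are mine, not from any course materials):

# Sum numbers entered by the user until the sentinel value (-1) appears.
SENTINEL = -1
total = 0
value = int(input("Enter a number (-1 to stop): "))      # priming read
while value != SENTINEL:                                 # sentinel test, at the top
    total = total + value
    value = int(input("Enter a number (-1 to stop): "))  # read again before the re-test
print("Sum:", total)

Writing down even one fact about this loop (“the sentinel test happens before the body, every time”) is the kind of small retention activity that the experiment below suggests would pay off later.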
But two experiments, carried out by Dr Jeffrey Karpicke at Purdue University, Indiana, concluded that this was less effective than constant informal testing and reciting. Dr Karpicke asked around 100 college students to recall in writing, in no particular order, as much as they could from what they had just read from science material. Although most students expected to learn more from the mapping approach, the retrieval exercise actually worked much better to strengthen both short-term and long-term memory. The results support the idea that retrieval is not merely scouring for and spilling out the knowledge stored in one’s mind — the act of reconstructing knowledge itself is a powerful tool that enhances learning about science.
I have been eager to write this blog post for months, but wanted to wait until both of the papers had been reviewed and accepted for publication. Now “Subgoals Improve Performance in Computer Programming Construction Tasks” by Lauren Margulieux, Richard Catrambone, and Mark Guzdial has been accepted to the educational psychology conference EARLI SIG 6 & 7, and “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Mobile Application Development” by the same authors has been accepted to ICER 2012.
Richard Catrambone has developed a subgoal model of learning. The idea is to express instructions with explicit subgoals (“Here’s what you’re trying to achieve in the next three steps”) and that doing so helps students to develop a mental model of the process. He has shown that using subgoals in instruction can help with learning and improve transfer in domains like statistics. Will it work with CS? That’s what his student Lauren set out to find out.
She took a video that Barb had created to help teachers learn how to build apps with App Inventor. She then defined a set of subgoals that she felt captured the mental model of the process. She then ran 40 undergraduates through a process of receiving subgoal-based instruction, or not:
In the first session, participants completed a demographic questionnaire, and then they had 40 minutes to study the first app’s instructional material. Next, participants had 15 minutes to complete the first assessment task. In the second session, participants had 10 minutes to complete the second assessment task, which measured their retention. Then participants had 25 minutes to study the second app’s instructional material followed by 25 minutes to complete the third assessment.
An example assessment task:
Write the steps you would take to make the screen change colors depending on the orientation of the phone; specifically, the screen turns blue when the pitch is greater than 2 (hint: you’ll need to make an orientation sensor and use blocks from “Screen 1” in My Blocks).
Here’s an example screenshot from one of Barb’s original videos, which is what the non-subgoal group would see:
This group would get text-based instruction that looked like this:
- Click on “My Blocks” to see the blocks for components you created.
- Click on “clap” and drag out a when clap.Touched block
- Click on “clapSound” and drag out call clapSound.Play and connect it after when clap.Touched
The subgoal group would get a video that looks like this:
That’s it — a callout would appear for a few seconds to remind them of what subgoal they were on. Their text instructions looked a bit different:
Handle Events from My Blocks
- Click on “My Blocks” to see the blocks for components you created.
- Click on “clap” and drag out a when clap.Touched block
Set Output from My Blocks
- Click on “clapSound” and drag out call clapSound.Play and connect it after when clap.Touched
You’ll notice other educational psychology themes in here. We give them instructional material with a complete worked example. By calling out the mental model of the process explicitly, we reduce cognitive load associated with figuring out a mental model for themselves. (When you tell students to develop something, but don’t tell them how, you are making it harder for them.)
Here’s a quote from one of the ICER 2012 reviewers (who recommended rejecting the paper):
“From Figure 1, it seems that the “treatment” is close to trivial: writing headings every few lines. This is like saying that if you divide up a program into sections with a comment preceding each section or each section implemented as a method, then it is easier to recall the structure.”
Yes. Exactly. That’s the point. But this “trivial” treatment really made a difference!
- The subgoal group attempted more parts (subgoals) of the assessment tasks, completed more of them successfully, and did so faster — all three differences (subgoals attempted, subgoals completed successfully, and time) were statistically significant.
- The subgoal group successfully completed more of a retention task one week later (which wasn’t the exact same task — they had to transfer knowledge), again statistically significantly.
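To picture what the reviewer’s “trivial headings” analogy amounts to in code, here is an entirely hypothetical sketch (not the study’s App Inventor materials): subgoal labels are like comments that name why the next few steps belong together, not just what each step does.

# Without subgoal labels, instruction is a flat list of steps: open the
# file, read the lines, convert them to numbers, average them. With
# subgoal labels, the same steps arrive grouped and named.
# (Assumes a non-empty file with one number per line.)

def average_scores(filename):
    # Subgoal: get the raw data
    with open(filename) as f:
        lines = f.readlines()

    # Subgoal: turn each line into a number
    scores = [float(line) for line in lines]

    # Subgoal: compute the summary
    return sum(scores) / len(scores)

The labels don’t change the steps at all; the claim is that naming the groups helps learners carry the structure to new tasks.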
But did the students really learn the mental model communicated by the subgoal labels, or did chunking things into subgoals just make the instructions easier to read and parse? Lauren ran a second experiment with 12 undergraduates, where she asked students to “talk aloud” while they did the task. The groups in the second experiment were too small to show the same learning benefits, but all the trends were in the same direction. The subgoal group was still out-performing the non-subgoal group, and what’s more, they talked in subgoals! I find it amazing that she got these results from just one-hour sessions. In one hour, Lauren’s video taught undergraduate students how to get something done in App Inventor, and they could remember and do something new with that knowledge a week later — better than a comparable group of Georgia Tech undergraduates seeing the SAME videos (with only callout differences) doing the SAME tasks. That is efficient learning.
Here’s a version of a challenge that I have made previously: Show me pedagogical techniques in computing education that have statistically significant impacts on performance, speed, and retention, and lead to developing a mental model of (even part of) a software development process. What’s in our toolkit? Where is our measurable progress? The CMU Cognitive Tutors count, but they were developed 20-30 years ago and (unfortunately) are not part of our CS education toolkit today. Alice and Scratch are tools — they are what to teach, not how to teach. Most of our strong results (like Pair Programming, Caspersen’s STREAMS, and Media Computation) are about changing practice in whole courses, mostly for undergraduates, over several weeks. Designing instruction around subgoals in order to communicate a mental model is a small, “trivial” tweak that anyone can use no matter what they are teaching, with significant wins in terms of quality and efficiency. Instructional design principles could be used to make undergraduate courses better, but they’re even more critical when teaching adults, when teaching working professionals, when teaching high school teachers who have very little time. We need to re-think how we teach computing to cater to these new audiences. Lauren is showing us how to do that.
One of the Ed Psych reviewers wrote, “Does not break new ground theoretically, but provides additional evidence for existing theory using new tasks.” Yes. Exactly. This is no new invention from an instructional design perspective. It is simply mapping things that Richard has been doing for years into a computer science domain, into “new tasks.” And it was successful.
Lauren is working with us this summer, and we will be trying it with high school teachers. Will it work the same as with GT undergraduates? I’m excited by these results — we’re already showing that the CSLearning4U approach of simply picking the low-hanging fruit from educational psychology can have a big impact on computing education quality and efficiency.
This week at the NCWIT Summit, I heard Joshua Aronson speak on stereotype threat. I’ve read (and even taught) about stereotype threat before, but there’s nothing like hearing the stories and descriptions from the guy who co-coined the term. Stereotype threat is “apprehension arising from the awareness of a negative stereotype or personal reputation in a situation where the stereotype or identity is relevant, and thus comparable.” Aronson has lots of examples. Remind women of their gender (and implicitly, of the stereotype that says women are worse than men at math) and their scores drop on math tests. Remind African Americans of their race (and implicitly, of the stereotype about African Americans and intelligence) and their scores on IQ tests drop.
I took a picture of one of Aronson’s slides. He observed that most of the tests in the laboratory experiments were, well, laboratory experiments. They weren’t “real,” that is, they didn’t count for anything. So what if we tweaked the AP Calculus test? Typically, the AP Calculus exam asks students their gender just before they start the test, which makes the stereotypes about gender salient. What if you moved that question to the end of the test? Here are the results:
If you ask before, women do much worse than men, as past results have typically shown. If you ask after, the women do better than the men, but the men also do much worse than before! Reminding men of their gender, and the stereotype, improves their performance. Don’t remind them, and they do worse. Which leaves us in a tough position: When should you ask gender?
Now, there is a solution here: Dweck’s fixed vs growth mindset. Many children believe that intelligence is a fixed quantity, so if they do badly at something, they believe that they can’t do better later with more work. What if we emphasize that intelligence is malleable? Writes Dweck in Brainology:
The wonderful thing about research is that you can put questions like this to the test — and we did (Kamins and Dweck, 1999; Mueller and Dweck, 1998). We gave two groups of children problems from an IQ test, and we praised them. We praised the children in one group for their intelligence, telling them, “Wow, that’s a really good score. You must be smart at this.” We praised the children in another group for their effort: “Wow, that’s a really good score. You must have worked really hard.” That’s all we did, but the results were dramatic. We did studies like this with children of different ages and ethnicities from around the country, and the results were the same.
Here is what happened with fifth graders. The children praised for their intelligence did not want to learn. When we offered them a challenging task that they could learn from, the majority opted for an easier one, one on which they could avoid making mistakes. The children praised for their effort wanted the task they could learn from.
The children praised for their intelligence lost their confidence as soon as the problems got more difficult. Now, as a group, they thought they weren’t smart. They also lost their enjoyment, and, as a result, their performance plummeted. On the other hand, those praised for effort maintained their confidence, their motivation, and their performance. Actually, their performance improved over time such that, by the end, they were performing substantially better than the intelligence-praised children on this IQ test.
Aronson and colleagues asked in their Department of Education report: “Does teaching students to see intelligence as malleable or incrementally developed lead to higher motivation and performance relative to not being taught this theory of intelligence?” They did find that teaching a growth mindset really did result in higher motivation and performance. They recommended the strategy, “Reinforce for students the idea that intelligence is expandable and, like a muscle, grows stronger when worked.”
It turns out that, if you teach students about growth mindset, then they are less likely to be influenced by stereotype threat. Dweck writes in her Brainology essay:
Joshua Aronson, Catherine Good, and their colleagues had similar findings (Aronson, Fried, and Good, 2002; Good, Aronson, and Inzlicht, 2003). Their studies and ours also found that negatively stereotyped students (such as girls in math, or African-American and Hispanic students in math and verbal areas) showed substantial benefits from being in a growth-mindset workshop. Stereotypes are typically fixed-mindset labels. They imply that the trait or ability in question is fixed and that some groups have it and others don’t. Much of the harm that stereotypes do comes from the fixed-mindset message they send. The growth mindset, while not denying that performance differences might exist, portrays abilities as acquirable and sends a particularly encouraging message to students who have been negatively stereotyped — one that they respond to with renewed motivation and engagement.
Dweck is pretty careful in how she talks about intelligence, but some of the others are not. She talks about “while not denying that performance differences might exist” and “portrays abilities as acquirable” (emphasis mine). The Dept of Ed report says we should tell students that “intelligence is expandable.” Is it? Is intelligence actually malleable?
The next workshop I went to after Aronson’s was Christopher Chabris’s, on women and the collective intelligence of human groups. Chabris presented fascinating work showing that a higher proportion of women in a group raises the group’s collective intelligence. But before he got into his study, he talked about personal and collective intelligence. He quoted Charles Spearman from 1904: “Measurements of cognitive ability tend to correlate positively across individuals.” Virtually all intelligence tests correlate positively, which suggests that they’re measuring the same thing, the same psychological construct. What’s more, Chabris showed us that the variance in intelligence can be explained in terms of physical structures of the brain. Personal intelligence is due to physical brain structures, but we can work collectively to do more and think better.
My Georgia Tech colleague, Randy Engle, was interviewed in the NYTimes a few weeks ago, arguing that intelligence is fixed. It’s due to unchanging physical characteristics of the brain. We can’t change it.
For some, the debate is far from settled. Randall Engle, a leading intelligence researcher at the Georgia Tech School of Psychology, views the proposition that I.Q. can be increased through training with a skepticism verging on disdain. “May I remind you of ‘cold fusion’?” he says, referring to the infamous claim, long since discredited, that nuclear fusion could be achieved at room temperature in a desktop device. “People were like, ‘Oh, my God, we’ve solved our energy crisis.’ People were rushing to throw money at that science. Well, not so fast. The military is now preparing to spend millions trying to make soldiers smarter, based on working-memory training. What that one 2008 paper did was to send hundreds of people off on a wild-goose chase, in my opinion.
“Fluid intelligence is not culturally derived,” he continues. “It is almost certainly the biologically driven part of intelligence. We have a real good idea of the parts of the brain that are important for it. The prefrontal cortex is especially important for the control of attention. Do I think you can change fluid intelligence? No, I don’t think you can. There have been hundreds of other attempts to increase intelligence over the years, with little or no — just no — success.”
Is intelligence expandable and malleable, or is it physical and fixed? There is a level where it doesn’t matter. Telling students that intelligence is expandable and malleable does have an effect. It results in higher test scores and better performance. But on the other hand, is it good policy to lie to students, if we’re wrong about the malleability?
Maybe we’re talking about different definitions of “intelligence.” Engle and Chabris may be talking about a core aspect of intelligence that is not malleable, and Dweck and Aronson may be talking about knowledge, skills, and even metacognitive skills that can be grown throughout life. But we say that “intelligence” is malleable, and the work in stereotype threat tells us that the language matters. What words we use, and how (and when) we prompt students impacts performance. If we don’t say “intelligence can be grown like a muscle” and instead say, “knowledge and skills are expandable and malleable,” would we still get the same benefits?
I’m not a psychologist. When I was an education graduate student, I was told to think about education as “psychology engineering.” Educators take the science of psychology into actual practice to create learning systems and structures. I look to the psychology to figure out how to help students learn. While Dweck and Aronson are explicitly giving educators strategies that really work, I worry about the conflict I see between them and other psychologists in terms of the basic science. Is it a good strategy to get positive learning effects by telling students something that may not be true?
Fascinating piece in US News and World Report on the LearnLab work at Carnegie Mellon University. Since I’m exploring worked examples research and the implications for CS Education these days, I found the below section of the interview with Ken Koedinger intriguing. Practice helps you learn facts, but worked examples help you learn skills. Isn’t learning to program mostly about learning skills? We should be providing lots more worked examples of programming (not just the code — the process) to teach programming skills.
In math, for example, traditionally, students receive a list of math problems to solve. But this approach “gives novice learners too little support in constructing new knowledge,” Koedinger says. “It’s not as effective as replacing about half of those problems with example solutions. Rather than guessing their way through problems, these worked-out examples allow students to focus on grasping the thinking needed so they can solve future problems on their own.”
Thus, “if every other problem contains a step-by-step solution, students learn more robust skills,” he adds. “Even better is adaptive computer-based practice that adjusts to individual students, providing more worked-out solution steps initially, but then gradually challenging a student with more problems as he or she increases in understanding and skill.”
But Koedinger is quick to point out that using more worked examples is not the answer for all learning goals. “They are best for skills, but pure practice is better for facts,” he says. “For deeper concepts and principles, more emphasis on providing explanations is important, but should these explanations simply be given to students?”
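What might a worked example of programming that shows the process (not just the code) look like? Here is a hypothetical sketch, with the steps a student would take recorded in order as comments; the function and the step breakdown are my own illustration, not from LearnLab:

# Step 1: Name the function after the goal, and decide its inputs.
def count_vowels(text):
    # Step 2: Set up an accumulator before the loop.
    count = 0
    # Step 3: Visit each character once.
    for ch in text:
        # Step 4: Test whether the character is a vowel.
        if ch.lower() in "aeiou":
            # Step 5: Update the accumulator only on a match.
            count = count + 1
    # Step 6: Return the accumulated result.
    return count

print(count_vowels("Hello"))   # prints 2

Koedinger’s adaptive suggestion maps naturally onto this: early problems show all the steps worked out, and later problems fade them, until the student produces the whole solution alone.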
I used Arnold Arons’ work a lot when I did my dissertation, so I particularly liked this quote from a recent Richard Hake post. There are direct implications for us in CS, where just about everything (from FOR loops to linked lists) is an abstract idea. Lectures, even lucid ones on these topics, don’t work for most students.
“I point to the following unwelcome truth: much as we might dislike the implications, research is showing that didactic exposition of abstract ideas and lines of reasoning (however engaging and lucid we might try to make them) to passive listeners yields pathetically thin results in learning and understanding – except in the very small percentage of students who are specially gifted in the field.”
Arnold Arons (1997)
REFERENCES [URLs shortened by <http://bit.ly/> and accessed on 06 March 2012.]
Arons, A.B. 1997. “Teaching Introductory Physics,” p. 362. Wiley. Publisher’s information at <http://bit.ly/jBcyBU>; Amazon.com information at <http://amzn.to/bBPfop> (note the searchable “Look Inside” feature).
The American Educational Research Association is the main organization for supporting education research in the United States. Thanks to the hard work of Mitch Nathan, they now have a forum for computing education research at their annual meeting! Below is his email announcement:
I want to alert you to a new section in AERA (American Educational Research Association) Division C, Section 1e: Engineering and Computer Science. This is the result of many months of negotiation and represents a great advance for research in engineering education and computer science education.
I have three requests:
(1) Most immediately, please consider volunteering as a scholarly reviewer for this new section. The quality of the research that will emerge from this Section is a direct function of the quality and efforts of our review team;
(2) Please share the announcement widely that there is now a new, highly regarded outlet for research in engineering education, computing education and STEM more broadly; and finally,
(3) Please consider submitting your high-quality research to this section. AERA is the largest education research organization in the world and the annual meeting is among the most prestigious. All papers are peer reviewed, the Association hosts an online archive for accepted papers that is highly trafficked regardless of the presentation format (talk, poster, roundtable discussion, and a variety of interactive formats), and authors retain copyright to their work and the freedom to submit this work for future publication, so long as AERA is the first place this work appears.
Mitchell J. Nathan, BSEE, PhD
Director, Center on Education and Work
Faculty in Departments of Educational Psychology,
Curriculum & Instruction, and Psychology
Wisconsin Center for Education Research (WCER)
University of Wisconsin-Madison
1025 West Johnson Street
Madison, WI 53706-1796
Interesting finding that supporting older adults learning better problem-solving skills seems to lead to a change in a personality trait called “openness.” I find this interesting for two reasons. First, it’s wonderful to see continuing evidence about the plasticity of the human mind. Surprisingly little is “fixed” or “innate.” Second, I wonder how “openness” relates to “self-efficacy.” We heard at ICER 2011 how self-efficacy plays a significant role in student ability to succeed in introductory computing. Is there an implication here that if we could improve students’ understanding of computer science, before programming, that we could enhance their openness or self-efficacy, possibly leading to more success? That’s a related hypothesis to what we aim for in CSLearning4U (that studying programming in the small, worksheet-style, will make programming sessions more effective — more learning, less time, less pain), and I’d love to see more evidence for this.
Personality psychologists describe openness as one of five major personality traits. Studies suggest that the other four traits (agreeableness, conscientiousness, neuroticism and extraversion) operate independently of a person’s cognitive abilities. But openness — being flexible and creative, embracing new ideas and taking on challenging intellectual or cultural pursuits — does appear to be correlated with cognitive abilities.