Annie Murphy Paul has a nice article about autodidacts — yes, there are some, but most of us aren’t, and MOOCs are mostly for autodidacts. The paper from Educational Psychologist is excellent, and I recommend reading the original as well as Paul’s review.
In a paper published in Educational Psychologist last year, Jeroen J.G. van Merriënboer of Maastricht University and Paul A. Kirschner of the Open University of the Netherlands challenge the popular assumption “that it is the learner who knows best and that she or he should be the controlling force in her or his learning.”
There are three problems with this premise, van Merriënboer and Kirschner write. The first is that novices, by definition, don’t yet know much about the subject they’re learning, and so are ill equipped to make effective choices about what and how to learn next. The second problem is that learners “often choose what they prefer, but what they prefer is not always what is best for them;” that is, they practice tasks that they enjoy or are already proficient at, instead of tackling the more difficult tasks that would actually enhance their expertise. And third, although learners like having some options, unlimited choices quickly become frustrating—as well as mentally taxing, constraining the very learning such freedom was supposed to liberate.
Why the ‘coding for all’ movement is more than a boutique reform – Margolis and Kafai respond to Cuban in Washington Post
Highly recommended reading — Jane Margolis and Yasmin Kafai respond to the concerns of Larry Cuban about the “coding for all” movement (that I blogged on here). They address a wide range of issues, from the challenges of changing schools to the importance of education about coding for empowerment.
On a functional level, a basic understanding of code allows for an understanding of the design and functionalities that underlie all aspects of interfaces, technologies, and systems we encounter daily. On a political level, understanding code empowers and provides everyone with resources to examine and question the design decisions that populate their screens. Finally, on a personal level, everyone needs and uses code in some ways for expressive purposes to better communicate, interact with others, and build relationships. We need to be able to constructively, creatively, and critically examine designs and decisions that went into making them.
We’ve talked about this problem before — that it looks like we’re graduating fewer CS undergraduates, despite rising enrollment. Interesting analysis in The Chronicle:
Aside from looking remarkably like the Cisco logo (itself a representation of San Francisco’s iconic Golden Gate Bridge), the chart clearly shows fluctuation in interest among undergraduates and graduates in computer science. The reason for that fluctuation isn’t clear from the graph, but we have a couple of theories:
1. The pipeline was primed: In the 1970s and 1980s, many elementary, middle, and high schools taught computer programming to students, according to Joanna Goode. As an associate professor of education studies at the University of Oregon, Ms. Goode has researched access for women and students of color in computer science. “But, as the PC revolution took place, the introduction to the CD-ROMS and other prepackaged software, and then the Internet, changed the typical school curriculum from a programming approach to a ‘computer literacy’ skill-building course about ‘how to use the computer,’”…
2. The job market: Fluctuations in college-degree attainment are often connected to fluctuations in the job market in certain industries.
This is my third blog post in a series inspired by a thread in the SIGCSE-Members list and by the Slate article which argued that “Practice doesn’t make perfect.” Macnamara et al. did a meta-analysis of studies of expertise, and found that a relatively small percentage of variance in expertise can be explained through hours of practice. The Slate authors argue that this implies that genetics explains the rest of the variance.
- In the first post (see here), I argued that practice+genetics is too simple a model to explain expertise. First, practice can be deliberate, lazy, or teacher-led. Second, there is experience that leads to expertise which lies between genetics and practice. The most significant flaw of both Macnamara et al. and Ericsson et al. is ignoring teaching.
- In the second post (appearing yesterday in Blog@CACM), I addressed a claim in the SIGCSE-Members list that programmers are “wired” differently than others. Most CS teachers agree with the Slate authors that students can NOT be more successful with more work. But the evidence that better teaching leads to better learning is overwhelming. In fact, there is significant evidence that teaching can even overcome genetic/innate-ability differences.
Lots of CS teachers believe in the Geek Gene Hypothesis, and for good reason. It’s frustrating to have seemingly no impact on some students, especially those at the lower end. Even the award-winning Porter, Zingaro, and Lister paper points out that the earliest assessments in the class they studied correlate very highly with the final grade. Gas Station without Pumps voiced a similar sentiment in his blog post in response to the Slate article:
But the outcomes for individual students seem to depend more on the students coming in than on what I do. Those students who come in better prepared or “innately” smarter progress faster than those who come in behind, so the end result of the teaching is that differences among the students are amplified, not reduced. Whether the differences in the students coming in are due to prior practice, prior teaching, or genetics is not really knowable, but also not really relevant.
I agree. It’s not really knowable where the difference comes from and it’s not really relevant. The point of my Blog@CACM post is: we can do better. If we can teach spatial ability and subitizing, two skills that have a much stronger claim to being innate than programming, then we can certainly teach people to program better.
If we follow common practice and it’s unsuccessful, it’s not surprising that we think, “I tried. I explained carefully. I gave interesting assignments. I gave good feedback. It’s got to be an innate trait. Some students are just born wired to program.”
I watch my children taking CS classes, along with English, Chemistry, Physics, and Biology classes. In the CS classes, they code. In the other classes, they do on-line interactive exercises, they write papers, they use simulations, they solve problems by hand. Back in CS, the only activity is coding with feedback. If we only have one technique for teaching, we shouldn’t be surprised if it doesn’t always work.
Here’s a reasonable hypothesis: We get poor results because we use ineffective teaching methods. If we want to teach CS more effectively, we need to learn and develop better methods. If we don’t strive for better methods, we’re not going to get better results.
A first step is to be more methodical with how we choose methods. In a 2011 paper by Davide Fossati and me (see here), we found that CS teachers generally don’t use empirical evidence when making changes in how we teach. We act from our intuition, but our students aren’t like us, and our intuition is not a good indicator of what our students need.
Next, we need to experiment with more methods. We want to get to a place where we identify known problems in our students’ understanding, and then use well-supported methods that help students develop more robust understandings. We probably don’t have a wide range of different techniques for teaching assignment, iteration, recursion, and similar concepts. We should try well-supported techniques like pair programming, peer instruction, or Media Computation (see CACM article on these). We should try to expand our repertoire of techniques beyond simply grinding at code. We could try techniques like worked examples, Problets, CodingBat, games with learning outcomes like Wu’s Castle, multiple choice questions like in Gidget, the Parsons Problems in the Runestone Interactive ebooks, or even computing without computers as in CS Unplugged.
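To make one of those techniques concrete: a Parsons problem presents the lines of a working program in scrambled order and asks the learner to rearrange them. Here is a minimal sketch in Python (a hypothetical example I wrote for illustration, not taken from Runestone or any of the tools named above):

```python
# A minimal Parsons problem: the learner is shown shuffled lines of a
# working program and must restore the original order.

correct = [
    "def average(numbers):",
    "    total = 0",
    "    for n in numbers:",
    "        total += n",
    "    return total / len(numbers)",
]

# What the learner sees: the same lines in a scrambled order.
shuffled = [correct[i] for i in (2, 0, 4, 1, 3)]

def is_solved(attempt):
    """The problem is solved when the lines are back in their original order."""
    return attempt == correct

# A learner's reordering attempt, expressed as positions in the shuffled list.
attempt = [shuffled[i] for i in (1, 3, 0, 4, 2)]
print(is_solved(attempt))  # True
```

Part of the appeal is that the learner never types syntax from scratch; the work is in reasoning about ordering and structure, which is one argument made for Parsons problems as a lower-load practice activity for novices.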
We do not make it easy for CS teachers to pick up new, better, more proven methods. Sure, there are the SIGCSE Symposium proceedings, but that’s not a systematic presentation of what to use when. This is on the CS education research community to do better. But it’s also on the CS teaching community to demand better, to seek out better methods and studies of techniques.
If we taught better, there are a lot of problems in CS that we might impact. We might bring in a more diverse group of students. We might make our current students more successful. We might change attitudes about computing. Perhaps most importantly, maybe we as teachers will come to believe that we can teach anyone to program.
A recent article in Slate (see here) suggests that practice may not lead to expertise, that the “10,000 hour rule” is wrong. The “10,000 hour rule” was popularized by Malcolm Gladwell in his book Outliers (see excerpt here), but really comes from an important paper by K. Anders Ericsson and colleagues, “The Role of Deliberate Practice in the Acquisition of Expert Performance.” Ericsson claimed that 10,000 hours of deliberate practice results in expert-level performance.
The Slate article is based mostly on a new meta-analysis (see here) by Macnamara, Hambrick (also a co-author on the Slate article), and Oswald which reviewed and combined studies on expertise. They found that practice always was positively correlated with better performance, but did not explain all of (or even most of) the difference in expertise between study participants. The Slate article authors suggest, then, that deliberate practice is not as important as genetics or innate talent.
Deliberate practice left more of the variation in skill unexplained than it explained…There is now compelling evidence that genes matter for success, too…What all of this evidence indicates is that we are not created equal where our abilities are concerned.
The paper and article make two big mistakes that leave the “10,000 hour rule” as valid and valuable. The first is that practice is not the same as deliberate practice, and the second is that the fallback position can’t be genetics/innate talent. In general, their argument hinges on practice hours all being of equal value, which shows a lack of appreciation for the role of teaching.
Practice is not the same as deliberate practice
Ericsson was pretty clear in his paper that all practice is not created equal. Deliberate practice is challenging, focused on the skills that most need to be developed, with rapid feedback. (Here’s a nice blog post explaining deliberate practice.) Simply putting in 10,000 hours of practice in an activity does not guarantee expertise. Ericsson and the Slate authors would be in agreement on this point.
I’m sure that we’ve all seen musicians or athletes (and if we’re honest, we’ve probably all been like those musicians or athletes) who sometimes just “phone it in” during practice, or even during a game. I used to coach my daughters’ soccer teams, and I can absolutely assure you that there were hours in games and rehearsals where some of my players really didn’t make any progress. They found ways of getting through practice or games without really trying.
In the Macnamara paper, whether practice was “deliberate” or not was determined by asking people. They collected practice logs, surveys, and interviews. The participants in the studies self-reported whether the practice was deliberate. Imagine someone telling the interviewer or writing in their log, “Yeah, well, about 5,000 of those 10,000 hours, I was really lazy and not trying very hard.” It’s impossible to really distinguish practice from deliberate practice in this data set.
The bottom-line is that the Macnamara study did not test Ericsson’s question. They tested a weak form of the “10,000 hour rule” (that it’s just “practice,” not “deliberate practice”) and found it wanting. But their explanation, that it’s genetics, is not supported by their evidence.
Genetics/Innate starts at birth, no later
The Slate authors argue that, if practice doesn’t explain expertise, then it must be genetics. They cite two studies that show that identical twins seem to have similar music and drawing talent compared to fraternal twins. But that’s correlation and doesn’t prove causation — there may be any number of things on which the identical twins aren’t similar. (See this great Radiolab podcast exploring these kinds of miraculous misconceptions.)
If you’re going to make the genetics/innate argument, you have to start tracking participants at birth. Otherwise, there’s an awful lot that might add to expertise that’s not going to get counted in any practice logs.
I took classes on how to coach soccer. One of the lessons in those classes was, “It’s a poor coach who makes all practices into scrimmage.” Rather, we were taught to have students do particular drills to develop particular skills. (Sound like deliberate practice?) For example, if my players were having trouble dribbling, I might have them dribble a ball in a line around cones, across distances, through obstacles.
Can you imagine a child who one day might play on a soccer team with official practices, but who, before those practices and perhaps even before joining a team, dribbles a ball around the neighborhood? Wouldn’t that be developing expertise? And yet, it wouldn’t be counted in player logs or practice hours. A kid who did lots of dribbling might come into a team and seem like a superstar with all kinds of innate talent. One might think that the kid had the “Soccer gene.”
To start counting hours-towards-expertise anything later than birth is to discount the impact of learning from the pre-school years on up. We know that the pre-school years make a difference (see this website that Diana Franklin sent me, and the argument for pre-school in this recent Freakonomics podcast). A wide variety of activities can develop skills that can influence expertise. If you don’t start tracking students from birth, then it’s hard to claim that you’ve counted in the practice log everything that’s relevant for expertise.
The claim that expertise is determined at birth is common among CS educators. Most CS teachers to whom I’ve asked the question are convinced that some people “can’t” learn to code, that the ability to learn programming is genetic or innate. That’s where the myth of the “Geek Gene” came from (Raymond Lister has written several times on that). Couldn’t it be that there are dribbling-around-the-neighborhood activities that lead toward CS expertise? Consider the famous pre-programming activity of writing out the instructions for making a peanut-butter-and-jelly sandwich (like here). If we believe that that kind of practice helps to develop CS expertise, then other “writing instructions out” activities might lead towards CS expertise. Maybe people who seem to have genetic/innate ability in CS just did a lot of those kinds of activities before they got to our classes.
The clock on developing expertise doesn’t start when students walk through our door.
Bigger than P=NP: Is teaching > genetics?
In the end, it’s very difficult to prove or disprove that genetics accounts for expertise in cognitive skill. I don’t think Macnamara et al. settled the score. But my point about deliberate practice actually points to a much bigger issue.
Teachers Matter is the two-word title of a 2012 OECD report (available here). There is a difference between great teachers and poor teachers, and the difference can be seen in terms of student performance. If you believe that (and there’s gobs of evidence that says you should), then it seems obvious that all practice is not created equal. Hours spent in practice with a good teacher are going to contribute more to expertise than hours spent without a teacher. Look back at that definition of “deliberate practice” — who’s going to pick the activities that most address your needs or provide the immediate feedback? The definition of deliberate practice almost assumes that there’s going to be a teacher in the loop.
An open question is just how far we can get with excellent teaching. How much can we use teaching to get beyond genetic disparities? Is teaching more powerful than genetics? That’s an important question, and far more important than the classic CS question whether P=NP. I believe that there are limits. There are genetic problems that teaching alone can’t address. But we don’t know what those limits are.
We certainly have evidence that we can use teaching to get past some differences that have been chalked up to genetics or being innate. Consider the finding that men, on average, have better spatial skills than women. Is it innate, or is it learned? It’s not clear (see discussion on that here). But the important point is: it doesn’t matter. Terlecki, Newcombe, and Little have found that they can teach women to perform as well as men on visual skills, and that the improvements in spatial ability both transfer and persist (see the journal article version here). The point is that spatial skills are malleable; they can be developed. Why should we think that other cognitive skills aren’t? The claims of the Slate authors and Macnamara et al. ignore the power of a great teacher to go beyond simple rote practice to create deliberate opportunities to learn. The words teach, teacher, and teaching don’t appear in either article.
Here’s my argument summarized. The Slate authors and Macnamara et al. dismiss the 10K hour rule too lightly, and their explanation of a genetic/innate basis for expertise is too simple. Practice is not the same as deliberate practice, or practice with a teacher. Expertise is learned, and we start learning at birth, sometimes in ways not directly connected to the later activity. The important part is that we are able to overcome some genetic/innate disparities with good teaching. We shouldn’t give up on developing expertise because we don’t have the genes. We should be thinking about how we can teach in order to develop expertise.
In my most recent Blog@CACM post on last month’s ACM Ed Council meeting, I mentioned that I gave a talk about the differences between computing education research and engineering education research (EER) and physics education research (PER). Let me spell these out a bit here.
The context was a panel on how to grow computing education research (CER). We were asked to consider the issue of getting more respect for computing education research (an issue I’ve written on before). I decided to explore the characteristics of CER that are important and that are not present in EER or PER. EER and PER are better established and better respected in the United States. But I’ve come to realize that CER has characteristics that are different from what’s in EER and PER.
Engineering Education Research
I came to a new understanding of EER because of a cross-campus STEM Education Research seminar that we’re holding at Georgia Tech this semester. It’s given me the opportunity to spend a couple of hours each week with people who publish in the Journal of Engineering Education (see here), review for it, and edit it. JEE is generally considered the top EER journal.
If you’re not familiar, engineering education research is a big deal in the United States. There are well-funded engineering education research centers. There are three academic departments of EER. It’s well-established.
In one of the early sessions, we talked about the McCracken Study (Mike McCracken has been coming to the sessions, which has been great), where an experimental assignment was used in five classes in four countries. Are there similar studies in EER? Our EER colleagues looked at one another and shrugged their shoulders. For the most part, EER studies occur in individual classes at individual institutions. Laboratory studies are rare. International collaborations are really rare.
I started digging into JEE. The last issue of JEE had only papers by American authors from American institutions. I dug further back, and my colleagues are right — international authors and collaborations are unusual in JEE.
In contrast, I don’t think that the ACM Transactions on Computing Education has ever had an only-American issue. Our ICER conference is not even American-dominated. The ICER 2014 best paper award went to a paper by Leo Porter (American) who worked with Raymond Lister (Australian) using data collected from Daniel Zingaro’s classroom (at U. Toronto in Canada) to address a theory by Anthony Robins (New Zealand). We use classroom studies, laboratory studies, and frequently use multi-institutional, multi-national (MIMN) collaborative studies (and study how to conduct them well).
Physics Education Research
At the January workshop on CER that Steve Cooper organized (paper to appear in CACM next month — it’s where Eric Roberts gave a keynote that I wrote about here), Carl Wieman was the opening keynote speaker. He talked about the hot issues in physics education research.
After his talk, he was asked about how physics education researchers were dealing with the gender skew in physics and about improving access in K-12 to quality educational opportunities. If you look at Brian Danielak’s visualization of AP CS test data, you’ll see that CS is the most gender-skewed, but Physics follows closely after. (Click on the picture to get a bigger version, and look at the lower left-hand corner.)
Carl said that gender diversity just wasn’t a priority in PER. I dug into the PER groups around the US. From what I could find, he’s right. Eric Mazur’s group has one paper on this issue, from 2006 (see here). I couldn’t find any at U. Washington or at Boulder. There probably is work on gender diversity in physics education research, but it certainly doesn’t stand out like the broadening participation in computing effort in the United States (see papers listing from Google Scholar). The January workshop really brought home for me that a key characteristic of CER, particularly in comparison with PER, is an emphasis on broadening participation, on social justice, on improving the diversity of the field, and guaranteeing access to quality educational opportunities for all.
I don’t have a deep bottom-line here. It was only a few-minute talk. My exploration of EER and PER gave me a new appreciation that CER has something special. It’s not as big or established as EER or PER, but we’re collaborative, international, working on hard and important problems, and using a wide variety of methods, from in-classroom to laboratory studies. That’s pretty cool.
April Heard at Georgia Tech built this map for us about where AP CS is taught in the state of Georgia. Some of it is totally to be expected. Most of the schools are in the Atlanta region, with a couple in Columbus, a handful in Macon, and a few more in Augusta and Savannah area.
But what’s disappointing is that huge swath in the south of the state with nothing. Not a single school south of Columbus and west of Brunswick. In terms of area, it’s about 1/3 of the state. Albany is home to Albany State University, the largest HBCU in Georgia. No AP CS at all there. And Georgia is one of the top states for having AP CS.
Sure, there might be some non-AP CS teachers in South Georgia, but we’re talking about a handful. Not double the number of AP CS teachers, and certainly not an order of magnitude more.
I suspect that much of the US looks like this, with wide stretches without a CS teacher in sight. April is continuing to generate these maps for states that we’re working with in ECEP. Here’s California, with big empty stretches.
Tom McKlin just generated this new map, which overlays the AP CS teacher data on top of mean household income in a school district. The correlation is very high — districts with money have AP CS, and those that don’t, don’t.
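For readers curious what that overlay comparison amounts to computationally: correlating a binary has-AP-CS indicator with mean district income is a point-biserial correlation, which is just the Pearson correlation where one variable is binary. Here is a sketch; all the numbers below are invented purely for illustration and are not Tom McKlin’s data.

```python
import numpy as np

# Hypothetical districts: mean household income, and whether the district
# has an AP CS teacher (1 = yes). Invented numbers, for illustration only.
income = np.array([38_000, 42_000, 55_000, 61_000, 74_000, 88_000, 95_000, 110_000])
has_ap_cs = np.array([0, 0, 0, 1, 1, 1, 1, 1])

# Pearson correlation between a continuous and a binary variable is the
# point-biserial correlation; np.corrcoef returns the 2x2 correlation matrix.
r = np.corrcoef(income, has_ap_cs)[0, 1]
print(f"point-biserial r = {r:.2f}")
```

With toy numbers arranged the way the map suggests the real data are (poorer districts lacking AP CS), the correlation comes out strongly positive.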