Logic error: Assuming that early coding leads to top-coding skills
September 2, 2013 at 1:34 am 9 comments
So, you do a survey of top coders, and find that many of them started coding between 8 and 11 years old. Does that imply that starting coding between 8 and 11 leads to being a top coder? No, because you don’t know how many other kids started coding between 8 and 11, got totally turned off to programming, and are now gardeners. Yes, the data are consistent with the belief that coding early leads to top-coder status, but there’s not enough there to avoid the fallacy.
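As a toy illustration of that selection effect (these are made-up numbers, not Fraser’s data), here is a small simulation in which a latent “interest” trait drives both starting early and becoming a top coder, while starting early itself has no causal effect at all. Most top coders in the model started early, yet most early starters never became top coders:

```python
import random

def simulate(n=100_000, seed=42):
    """Toy model: latent interest drives both early starts and skill;
    starting early has NO causal effect on skill in this model."""
    random.seed(seed)
    top_coders = top_started_early = 0
    early_starters = early_became_top = 0
    for _ in range(n):
        interested = random.random() < 0.2              # latent interest
        # interested kids are far more likely to start coding at 8-11
        started_early = random.random() < (0.8 if interested else 0.05)
        # becoming a top coder depends only on interest, not starting age
        top_coder = interested and random.random() < 0.3
        if top_coder:
            top_coders += 1
            top_started_early += started_early
        if started_early:
            early_starters += 1
            early_became_top += top_coder
    return (top_started_early / top_coders,
            early_became_top / early_starters)

frac_top_who_started_early, frac_early_who_became_top = simulate()
print(f"Top coders who started early: {frac_top_who_started_early:.0%}")
print(f"Early starters who became top coders: {frac_early_who_became_top:.0%}")
```

In this model roughly 80% of top coders started early (exactly the survey finding), while only about a quarter of early starters became top coders, because the survey conditions on the outcome and never sees the early starters who left.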
The argument suggested by the post below is like the one that we’re trying to make about the role of early computing experience in influencing under-represented minorities. We found that the vast majority of under-represented minority students in CS had early computing experience. We also found that significantly more under-represented minority students had that experience than majority students in CS. That strengthens our case that early computing experience is particularly important for under-represented minorities. What we haven’t shown yet is that there is a causal relationship. Is it the case that many under-represented minority students who got early computing experience did NOT go on to CS classes? Until we know that, we can’t make any strong claims. (I think that the quote below is from the same Neil Fraser who went to Vietnam and came back with a lot of incorrect assumptions about high school CS in the US.)
The article linked below is about teaching kids to program before they learn to read, using ScratchJr. The article is interesting, and it raises a question well-worth exploring.
Early exposure to programming seems to have helped some of the world’s top coders. Earlier this year, Google engineer Neil Fraser in Mountain View, California, polled over 100 of his co-workers about when they first picked up coding, and then compared that with their performance on a simple test of skills. He found that those who wrote their first code between the ages of roughly 8 and 11 were most likely to develop advanced coding skills.
“We didn’t see an effect before 3rd grade, but certainly earlier is good,” Fraser says.
via Kindergarten coders can program before they can read – 26 July 2013 – New Scientist.
Entry filed under: Uncategorized. Tags: ComputingAtSchools, K12, Scratch.
1. Neil Brown | September 2, 2013 at 3:58 am
I saw Neil Fraser talk about statistics including this one at the CAS conference. Here’s a blog post that may be covering the relevant question: https://neil.fraser.name/news/2012/07/01/ Things that I find suspicious (besides the fallacy that you point out):
1. I believe there is a recall bias. No-one says second grade when asked; I suspect a lot of them round to first grade because it sounds better or neater.
2. Not to mention: how many people who did begin coding at a young age know when, exactly? I started coding when I was young, outside school. My best guess now is age 10, but reasonably it could be anywhere between 7 and 12; I simply don’t know for sure.
3. The question as he writes it in the post is ambiguous. Is it “at what age were you cognitively able to do this?” or “at what age did you have the knowledge to do this?”
2. Mark Guzdial | September 2, 2013 at 8:44 am
I hadn’t seen that blog post, Neil. Thanks for sending it. The use of self-reported ratios of programming knowledge as evidence that schools are flawed is wrong on multiple levels: nobody can accurately describe the ratios of where their knowledge came from, and the fact that these people didn’t value schools doesn’t mean that nobody got any value from school. Why is the press quoting him?
3. Jeff Rick | September 2, 2013 at 8:41 am
I was thinking that it was an incorrect assumption for another reason: Since programming is usually not a required class, people who code early are typically ones that have interest early. Interest and competence usually go hand-in-hand, so the survey might just indicate that early interest is important. To me, the greatest promise of early exposure to coding is that it can stimulate an interest. Too many people have an affinity for coding and do not realize it until quite late. There are so many stereotypes about coders being white geeky unsocial unathletic guys that need to be overcome. In that sense, we should also be careful about what ideas we spread early. All of the notable introductory CS research projects (Media Computation, Alice, Scratch, Greenfoot, Leah Buechley’s work, etc.) do a really nice job of providing an inclusive and interesting model of what computation looks like.
4. gasstationwithoutpumps | September 2, 2013 at 1:14 pm
I learned to program around 1969, mostly from a high school class, since that was the only way I could get access to a computer in those days.
Nowadays, computer access is easy, so kids with interest and parental support are starting quite young (I’d guess around 5th or 6th grade—that seems to be the age group targeted by learn-to-program and robotics after-school clubs here). They generally have quite a head start by high school, and high schools generally only teach one course in programming (if that much), so it is hardly surprising that the current top programmers are mainly self-taught through the end of high school.
I agree with Neil Fraser that waiting until high school to start teaching programming is later than desirable, just as I think that waiting until high school to teach foreign languages is one of the reasons that we’re so poor at learning languages other than English.
I have no data or surveys to back up my opinions, mainly because it isn’t worth my time to find them, but I’m convinced enough of them that I started my son programming around 4th grade (with plenty of pre-programming activity before then—he was counting in binary on his fingers in kindergarten) and started him learning Spanish in kindergarten. My son is now a high school senior, proficient in Python, C, and Scratch, with reasonable fluency in Java, Scheme, and JavaScript. He is now working on projects that are at the complexity level of college senior design projects in computer science or computer engineering. He’s even designing his own PC boards with SMD components for embedded systems projects, and going through the full design process to try to get a product that is manufacturable in low volumes at a particular price point—he and another high school senior are planning a Kickstarter campaign to make and sell 100 of them before going to college in a year.
Starting early does make a difference, but there is always the question of cause and effect. Do kids become good programmers because they start early, or do they start early because they already have the mindset of good programmers?
5. Mark Guzdial | September 2, 2013 at 3:01 pm
I like the way you phrased that last question, and it connects to Jeff’s comments. I’m also interested in the inverse of those questions. Do kids who start early become programmers, and if not, do they do anything with the programming skills that they learned? What is the impact of starting early on kids’ attitudes about programming if they do not become programmers? Do we end up driving more away, or drawing more in?
6. Ludger Humbert | October 13, 2015 at 8:30 am
You may take a look at the third category of phenomena regarding informatics. There we point out that there are several real-life situations in which you are able to deal with problems when you are able to think like an informatician.
http://is.gd/B6S18k
But it is much more than *only* programming, I think.
Ludger
7. Leigh Ann DeLyser | October 13, 2015 at 9:13 am
There is also a practice and expertise argument here. If a programmer starts early, and persists through career choice and profession, then it is likely they have amassed a significant number of hours in the *practice* of writing code, programs, and creating projects.
The fluency, or expertise, they display could just be the result of many years of focused practice, not any inherent trait. Kevin (GasStation) writes that his son started in 4th grade and as a HS student is now at the equivalent of a college senior. That means that in 8 years of practice, his son has grown to the equivalent of a college 4-year program. I’m not trying to take anything away from Kevin’s son—the fact that he did that independently (although it sounds like with the support of his family) is tremendous and puts him ahead of his peers. But can we celebrate his hard work, rather than some inherent trait, as a strong contributor to his success? (After all, it doesn’t sound like he learned overnight.)
This is similar to the narrative that we are addressing as a discipline to prevent young women from dropping out of CS programs in early courses. They see peers (often young men) who are successful in the early courses (often because of prior practice or experience) and decide they don’t have “the right stuff” to “hack it.” I am not trying to imply that there are not behavioral and personality factors for success in programming as a career (persistence, attention to detail, etc.), but broad generalizations like the one made in this article feel like confirmation bias.
8. Mark Guzdial | October 13, 2015 at 9:43 am
Yes, if you start early and persist, you develop greater expertise. Access to computing education early doesn’t CAUSE expertise. Access to computing education later doesn’t PREVENT the development of expertise. The Geek Gene or latent ability argument isn’t related to this logical fallacy.
Neil Fraser’s results do not demonstrate a causal relationship between access to computing education early and the outcome of being a great professional software developer.
9. gasstationwithoutpumps | October 13, 2015 at 4:52 pm
I agree that substantial practice is essential to becoming good at engineering (including computer programming). Early successes can lead students to thinking of themselves as “good at” some subject, making them more likely to continue to practice it. Similarly, early failures can cause students to think of themselves as “bad at” a subject, and so avoid practicing it.
Encouragement of students when they are stuck and critical feedback when they have an inflated view of their achievements are both important to developing a realistic view—getting them into a growth mindset.
I would not claim that my son learned to be a good programmer on his own—he had a lot of one-on-one mentoring from me, and he has had several courses from other teachers. He is now a sophomore majoring in CS at UCSB, doing well in all his classes. His love of CS has grown over the years, to the point where it has pushed out some other things that initially he had just as much interest in—the positive feedback of being able to accomplish things that he felt were worth his time seems to have been important in his choosing which activities to pursue.
Incidentally, his company’s Kickstarter campaign made their goal in the first 15 minutes of the campaign, and the company has had over $80,000 in sales in their first 9 months. Futuristic Lights is hoping to come out with a second product before Xmas this year—doing the hardware and firmware design for that has taken up most of his time over the summer and his spare time at college.
The real project for Futuristic Lights and the motivation of an enthusiastic business partner made a big difference in motivating him to learn hardware design, which he initially had almost no interest in.
Generalizing from his experiences to other students is risky (he’s definitely an outlier on almost any measure), but the motivational power of real projects compared to the toy projects that students are usually handed is important. I see it frequently in bioengineering senior projects, where students working on projects whose value they see put in much more effort and do much better work than ones who pick projects that end up looking like makework to them.