Should Everybody Learn to Code? Coverage in Communications of the ACM
February 5, 2014 at 1:28 am 16 comments
I spoke to the author, Esther Shein, a few months ago, but didn’t know that this was coming out until now. She makes a good effort to address both sides of the issue, with Brian Dorn, Jeannette Wing, and me on the pro side, and Chase Felker and Jeff Atwood on the con side. As you might expect, I disagree with Felker and Atwood. “That assumes code is the goal.” No: computational literacy and expression, the ability to use the computer as a tool to think with, and empowerment are the goals. Code is the medium.
Still, I’m excited about the article.
Just as students are taught reading, writing, and the fundamentals of math and the sciences, computer science may one day become a standard part of a K–12 school curriculum. If that happens, there will be significant benefits, observers say. As the problems we face in the future continue to increase in complexity, the systems being built to deal with that complexity will require increasingly sophisticated computational thinking skills, such as abstraction, decomposition, and composition, says Wing.
“If I had a magic wand, we would have some programming in every science, mathematics, and arts class, maybe even in English classes, too,” says Guzdial. “I definitely do not want to see computer science on the side … I would have computer science in every high school available to students as one of their required science or mathematics classes.”
via Should Everybody Learn to Code? | February 2014 | Communications of the ACM.
Entry filed under: Uncategorized. Tags: computing for everyone, end-user programming.
1.
alanone1 | February 5, 2014 at 7:10 am
There are other sides to these questions and positions.
For example, there is a lot of evidence online — in a variety of forms — that learning to “code” does not necessarily lead to clear thinking. It even seems to lead to the opposite of good thinking.
What went wrong?
I think part of the problem is that the low level and ad hoc nature of most programming languages allows very arcane and byzantine hacks to be constructed, which still have enough logical entailment to do something, but not enough design and higher-level integrity to constitute “reasonable thoughts”.
It is not an exaggeration to see this as a training ground for bad thinking rather than the “improved thinking” we would hope would be the case. It seems to have some parallels to the elaborate rationalizations for bad ideas that have always been part of human nature.
It’s more likely that a better way to make progress here is to “teach thinking”, and to use various representational forms to express clear thinking, rather than to try to extract clear thinking from representational systems that necessarily have many degrees of freedom.
This is the way it is best done in both natural language expression and with mathematics — the languages should be the servants of the thoughts not the masters.
That being said, we should also ask questions about how the current fledgling programming forms and their languages stack up to the kinds of thoughts that are worthwhile.
To me, they don’t come close — and in comparison, they are like pidgin languages or the counting systems used by Trobriand Islanders.
In the 60s “Computer Science” was an aspiration. Today CS departments say “this is what we do”. But they say it without having come close to realizing it. They have succumbed to all too many human tendencies in a field that cannot use nature as a critic.
In the 60s, when asked about “Computer Science”, Alan Perlis said it is the “science of processes” (and he meant all processes, not just those on computers). He could also have said that it is the “science of systems”. These are very good places to start. What is important about processes and systems? How should we think about them, talk about them, represent them, construct them, debug them?
A serious effort at this would lead to much better programming languages than we have today, and many parts of these will be very well suited to beginners of all ages.
2.
Mark Guzdial | February 5, 2014 at 9:51 am
I strongly agree, Alan! In particular:
And we know that this can work. Sharon Carver showed that she could teach kids to debug directions on a map by teaching them to debug Logo programs. Idit Harel Caperton showed that kids learned a whole lot of mathematics and Logo by teaching both in synergy.
So where did we go wrong? I’ve been wondering about this question from a different direction — why did written language, mathematical notation, and musical notation generally get it right? Is it because the users of the language were also the developers of the language? Symbol systems were developed in an evolutionary process where many things were tried, by many people, in parallel. The most useful ones continued on, and others dropped off. There wasn’t one winner — parallel systems developed, if they served different purposes.
One problem that we have with the evolution of programming languages is that there are too few of them. They are too hard to build, so the users don’t build their own often. They’re not different enough, so we’re not exploring a wide enough variation. Programming languages are built by programming language experts to serve (often) theoretical purposes or the purposes of a certain class of full-time software engineers. A very few languages were ever designed to make it easy for non-developers to explore, express, and reflect; to solve small problems and be able to understand the code when the problem comes up again in six months. Yet, our empirical data suggests that those are the most common purposes for programming.
3.
alanone1 | February 5, 2014 at 10:35 am
This is still missing the difference between ideas and representations for ideas. A widely spread quote of mine is “The music is not in the piano” (but I’m not sure it’s really understood … because “the music is not in the notation either”, any more than “the play is in the script”).
And most musicians will tell you that standard musical notation is not very good on many counts, precisely because it was only thought through a little early on, and then clung to what it was as ideas improved. A simple example is that it was invented before modern harmony was invented (and the latter has some properties that are like geometry).
But despite this very important part of music since ca. 1700, standard notation will, e.g., show major and minor triads as the same shape and require the reader to supply external context to decipher them. In practice, for fluency, this amounts to having to learn hundreds of shapes instead of just a few. (There is much in common here with how English spelling developed in an ad hoc way.)
Similar comments could be made about the poor conglomeration of choices in mathematical notations.
My complaint is that “now that we should know better” we should be able to design better — but rarely do we see this. (Similarly after knowing what the printing press did to Europe and having Eisenstein and McLuhan, etc., explain it to us, we should have been able to take better perspectives on both television and the web — but most didn’t.)
And by the way, by the end of the 60s Jean Sammet had counted more than 3000 programming languages, and they had quite a range of POV. I think what we have seen is much more rejection of ideas that go against a pop culture orthodoxy than anything else. And some of the orthodoxy has simply been convenience. (For example, Pascal’s time had passed before it ever started working, but it was well documented and easy to implement, so it spread when it should have been bypassed by something much better that was actually also around.)
And I don’t think your theory deals with the disappearance of LOGO and Hypertalk, etc. They weren’t any worse in their domain than (say) Perl (I think they were better), but they are gone.
4.
Mark Guzdial | February 5, 2014 at 3:40 pm
Okay, I’m convinced. Your theory about rejecting ideas that go against a pop culture orthodoxy is a better explanation than my theory about not having enough different kinds of language for the right language to evolve. I think I understand what happened to Logo (as Seymour explains in “The Children’s Machine”), and I have a sense that Apple gave up on HyperCard (for reasons I don’t know). But what leads to the rise (against current orthodoxy) of worse languages?
What do you think about Richard Gabriel’s “Worse is Better” explanations? When C came out, there were better languages with large communities, and yet, it became the orthodoxy. How did all those other languages lose to C?
5.
Peter Donaldson | February 5, 2014 at 6:39 pm
One of the issues that I think separates people in Computing who’ve programmed for a long time from those who’ve had limited or no exposure is the value placed on economy of expression over readability. Yes, it’s beautifully elegant that you can express the LISP language specification on a t-shirt, and the language is remarkably internally consistent, but trying to read several nested function calls quickly highlights how easy it is to misinterpret what’s happening.
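The tradeoff here can be sketched in any language; a small Python illustration (the data and names are hypothetical) of the same computation written once as a single nested expression and once as named intermediate steps:

```python
# Hypothetical task: count the characters in the non-blank lines.
lines = ["  spam  ", "", "eggs", "   "]

# Economical version: one nested expression, read inside out.
total_nested = sum(map(len, (s for s in map(str.strip, lines) if s)))

# Readable version: named intermediate steps, read top to bottom.
stripped = [line.strip() for line in lines]
nonempty = [s for s in stripped if s]
total_named = sum(len(s) for s in nonempty)

assert total_nested == total_named  # both count "spam" + "eggs"
```

Both compute the same value; the second trades economy for names a reader can check one step at a time.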
I’d agree with Perlis 100% that Computing as a discipline should focus on the science of processes, but I’d also add the other side of the coin to that definition: information and data structures. The fact that we can move back and forward from one to the other is still something I find fascinating.
I wouldn’t say HyperTalk is dead, necessarily; it’s the basis for RunRev’s LiveCode programming environment. Rather, it’s less mainstream than it used to be.
I need to find the reference, but I’m fairly sure I recently read a paper on how easy it was to understand program code based on the syntax it used. The study compared several languages against a made-up reference language with arbitrary keywords, and found that languages with C-style syntax were no more easily understood than the arbitrary-keyword language.
6.
Mark Guzdial | February 5, 2014 at 7:14 pm
Peter, I think you’re referring to Andreas Stefik’s study “An Empirical Investigation into Programming Language Syntax.” I think you’re overstating the findings a little. His Randomo language was easier for novices to understand than Perl or Java, but that’s not saying much. Python, Ruby, and Quorum were at the top of his list, and Python is arguably C-influenced. Lisp and Smalltalk weren’t in the competition.
RunRev’s LiveCode is pretty popular in Scotland, but not so much in the United States. I personally really like it (and just bought my 2013 Mega-bundle! 🙂)
7.
alanone1 | February 7, 2014 at 10:18 am
Hi Mark
I think pop music gives some clues, as does pop culture in general.
Two big drivers in pop culture (and in McLuhan’s “Global Village”) are “finding identity” and “demanding participation”. We have “now” not “then”, or “later”.
Perhaps no more surprising: “convenience trumps all”.
Nor should we omit “fleeting fads” or “tribalism” here.
Moreover, the notions of “quality” and “value” are very different from what has previously been associated with these terms. And “history” is not enough in the air even to be missed.
These provide the conditions for the behaviors manifested over the last several decades and more.
8.
Mark Miller | February 11, 2014 at 5:23 pm
First of all, I agree with Alan’s explanation. I think he pretty much covered the gamut of reasons why worse languages have won out, and still can win out, over better ones.
It’s been my experience with this field that there is little sense of the architecture represented in a language having intrinsic value. When I was taking CS over 20 years ago, our professors told us that language choice represented a programming style, and was only a matter of preference, since every language is Turing-complete. Out in industry, the dominant attitude I saw was that since most people never see the source code, language choice is assumed to be a matter of developer preference only, and that preference can be simply because “I like it,” or, “It’s popular.” What matters is the end result of the program running as specified.
Speed of execution has long been one of the chief considerations, though it’s not an absolute, and as Alan has pointed out, the history of hardware adoption (slow, badly designed architectures) has been a significant driver in eliminating better languages from use.
From my experience, new languages have been adopted in the IT world primarily because they either provide access to efficient functionality on a platform, or efficient access to functionality that is highly desired, and what’s desired is in the utilitarian mode. Almost every popular language that I’ve known about has had a “hook” to the platform that carries it, and it’s the platform that’s driven its adoption. Probably the only exception I’ve seen to this is Java. Its “hook” was “internet awareness,” and its much touted portability (the seamlessness of which depended on whether it was used in the data center or on the client). More recently the economics of development has sometimes been added to the mix, which emphasizes speed of development over efficiency of execution, but the other considerations still hold in very strong ways. The rest of the adoption process is adaptation and consensus building (the function of which is to shut out alternatives). For about 25 years, as long as a language fit these criteria it didn’t matter how bad it was. Large numbers of programmers were tasked with learning and using it.
The “era of the web browser,” along with sufficient processor speeds, has brought about an interesting possibility where better languages have a chance of competing against worse ones. JavaScript is the dominant language on the client end, from what I can surmise, but on the back end developers can conceivably use any language they want, so long as it can access a TCP/IP stack, though pre-existing code libraries that are considered relevant to utilitarian ends still dominate over architecture in the decision making.
9.
Seth Chaiken | April 23, 2014 at 10:32 am
Java has a hook–native methods.
10.
Chris Doherty | February 6, 2014 at 8:45 am
As a working software engineer of some 15 years, I get a little confused by the terminology, because in industry we find useful and reasonably well-defined distinctions between “programming,” “software engineering,” and “computer science.”
– “Programming” is using a programming language to make computers do things.
– “Software engineering” is using programming to create software with structure, scale, and maintainability.
- “Computer science” is understanding the substrate: from basic algorithmic complexity (“bubble sort is bad”) down to the theory of computation (Turing machines and the halting problem, neither of which is terribly useful in the day-to-day routine, though they are necessary to understand deeper concepts that *are* useful) and language arcana.
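As a concrete instance of the “bubble sort is bad” point, here is a minimal Python sketch (purely illustrative): the algorithm does a quadratic number of comparisons, where a library sort does roughly n log n.

```python
def bubble_sort(xs):
    """O(n^2) sort: repeatedly swap adjacent out-of-order pairs."""
    xs = list(xs)                    # work on a copy
    n = len(xs)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):   # the last i items are already in place
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swapped = True
        if not swapped:              # a pass with no swaps means sorted
            break
    return xs

assert bubble_sort([5, 1, 4, 2, 3]) == sorted([5, 1, 4, 2, 3])
```

Knowing *why* this is slow on large inputs, rather than just that it is, is the kind of substrate understanding meant above.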
When I see “learn to code,” it sounds like we’re interested in teaching programming, which is entirely worthwhile. People do all kinds of things with just programming and no deeper understanding than that (e.g. PHP and almost everything written in it). The core understanding of a computer program as an inflexible, literalist, step-by-step execution is a big hurdle for most people.
But we keep talking about teaching “computer science” all over, and I don’t see much detail about what computer science concepts we would teach.
Does the “everyone should learn CS” movement have a set of underlying shared assumptions about specifics that I don’t know about? (And obviously those would be different than mine.) Or is the movement really being that vague about what exactly “everyone” should learn?
11.
Mark Guzdial | February 6, 2014 at 9:43 am
As with any curricular decision, Chris, there are a wide variety of opinions and assumptions. Probably the best (in terms of largest backing) definition is the CS Principles definition at http://csprinciples.org. There is another course that NSF is promoting, Exploring CS (http://exploringcs.org), that is aimed at something that all high schools could teach, that all (?) high school students could succeed at.
12.
gasstationwithoutpumps | February 6, 2014 at 2:21 pm
Given that developmentally disabled students who are unable to read or do simple arithmetic are routinely graduated from high school in some states, I doubt that there is anything in CS that “all high school students could succeed at”, unless you distort “succeed” to the point that it is as meaningless as a Texas high school diploma.
13.
gasstationwithoutpumps | February 6, 2014 at 2:27 pm
Chris, I appreciate your explanation of “software engineering”; from what I’d seen, it seemed to be more about management than about software—programming with inadequate programmers. What you put in your definition seemed to me to be just “programming well”.
I agree that “programming” needs to be widely taught (say 50–60% of high school students), and that “software engineering” (by your definition) needs to be taught to a lot (5-10% of college students?) and that relatively few need “computer science” (we’re probably already teaching more than need it).
I think that CS departments should be trying to increase the number of CS minors and students taking service courses, rather than increasing the number of majors, but arcane University budgetary practices discourage such sensible actions.
14.
Bonnie | February 7, 2014 at 7:56 am
Note: I worked as a software engineer for years, and I teach it now at a university.
Software engineering is NOT a management discipline. It is, as its name suggests, an engineering discipline. As in all engineering disciplines, design is at its core; in this case, the design of large-scale software systems. As in all engineering disciplines, organizational and management issues creep in, because those areas impact the construction of systems (buildings, bridges, complex consumer electronics, software). But design and construction are at the core.
The majority of CS students at my school aspire to software engineering type positions when they finish. They will work on large financial systems, aircraft control systems, healthcare systems, and so on. I suspect this is true at many schools. Thus, I think that the majority of computer science majors should be learning software engineering, since that is what they want to do.
15.
Peter Donaldson | February 17, 2014 at 5:28 pm
Hi Mark,
yes, that was the paper I was referring to. My time is often severely constrained, so it’s a case of reading an interesting research paper when I can and making a few notes for later. I am definitely guilty of overextending the paper’s findings in this case.
Re-reading my original reply, I’m not sure the point I was making was really clear enough. Aesthetically speaking, computer scientists appreciate being able to express a computation as concisely as possible, avoiding any form of redundancy in its description. Unfortunately, novices, and those who only program occasionally, struggle to understand the mechanisms underlying any form of process description, so additional cues in the keywords used and in the layout of the code help. As Hamming codes in information theory show, a certain level of redundancy in communication can definitely be useful.
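For readers unfamiliar with the analogy: a Hamming(7,4) code adds three parity bits to four data bits, and that deliberate redundancy lets the receiver locate and fix any single flipped bit. A minimal Python sketch of the standard construction (an illustration, not anything from the paper under discussion):

```python
def encode(d):
    # d = [d1, d2, d3, d4]; codeword layout is p1 p2 d1 p3 d2 d3 d4
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    # Recompute each parity check; the syndrome, read as a binary
    # number, is the 1-based position of a flipped bit (0 = no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1   # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
cw = encode(data)
cw[4] ^= 1                     # flip one bit "in transit"
assert decode(cw) == data      # redundancy lets us recover the data
```

The extra bits say nothing new, yet they are exactly what makes the message robust to a misreading, which is the point about keywords and layout.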
The other issue that arises is one that Neil Brown touches upon in his discussion of the interface choices they made for Greenfoot classes and objects. The tradeoff was between simple, easily composable methods and more complex but convenient methods that restrict the range of possible combinations but allow students to work at a level closer to the problem domain.
@Alan my biggest fear is that companies with a vested interest in computation as a closed shop will convince everyone else that their computing devices are just appliances. It would be the equivalent of only using paper for packaging instead of as a medium of expression, which is what they both really are.
16.
I don’t believe it: Early STEM Education Will Lead to More Women in IT | Computing Education Blog | April 22, 2014 at 9:04 am
[…] I don’t believe the main propositions of the article below. Not all STEM education will lead to more women discovering an interest in IT. Putting computing as a mandatory subject in all schools will not necessarily improve motivation and engagement in CS, and it’s a long stretch to say that that will lead to more people in IT jobs. […]