
Learnable Programming: Thinking about Programming Languages and Systems in a New Way

Bret Victor has written a stunningly interesting essay on how to make programming more learnable, and how to draw on more of the great ideas of what it means to think with computing.  I love that he ends with: “This essay was an immune response, triggered by hearing too many times that Inventing on Principle was “about live coding”, and seeing too many attempts to “teach programming” by adorning a JavaScript editor with badges and mascots. Please read Mindstorms. Okay?”  The essay is explicitly a response to the Khan Academy’s new CS learning supports, and includes many ideas from his demo/video on making programming systems more visible and reactive for the programmer, but goes beyond that video in elaborating a careful argument for what programming ought to be.

There’s so much in his essay that I strongly agree with.  He’s absolutely right that we teach programming systems that have been designed for purposes other than learnability, and we need to build new ones that have a different emphasis.  He uses HyperTalk as an example of a more readable, better designed programming language for learning, which I think is spot-on.  His video examples are beautiful and brilliant.  I love his list of characteristics that we must require in a good programming language for learners.

I see a boundary to Bret’s ideas.  There are things that we want to teach about computing where his methods won’t help us.  I recognize that this is just an essay, and maybe Bret’s vision does cover these additional learning objectives, too.  But the learning objectives I’m struggling with are not made easier by his visual approach.

Let me give two examples — one as a teacher, and the other as a researcher.

As a teacher: I’m currently teaching a graduate course on prototyping interactive systems.  My students have all had at least one course in computer programming, but it might have been a long time ago.  They’re mostly novices.  I’m teaching them how to create high-fidelity prototypes — quite literally, programming artifacts to think with.  The major project of the course is building a chat system.

  • The first assignment involved implementing the GUI (in Jython with Swing).  The tough part was not the visual part, laying out the GUI.  The tough part was linking the widgets to the behaviors, i.e., the callbacks, the MVC part.  It’s not visible, and it’s hard to imagine making visible the process of dealing with whatever-input-the-user-might-provide and connecting it to some part of your code that gets executed non-linearly.  (“This handler here, then later, that handler over there.”)  My students struggled with understanding and debugging the connections between user events (which occur sometime in the future) and the code that they’re writing now.
  • They’re working on the second assignment now: Connecting the GUI to the Server.  You can’t see the network, and you certainly can’t see all the things that can go wrong in a network connection.  But you have to program for it.
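The callback pattern my students wrestled with can be sketched even without a GUI toolkit.  The names below are illustrative, not from the course assignment; the point is that the handler is written “now” but runs at some unknown future time, when an event arrives:

```python
# Minimal sketch of the event/callback pattern (illustrative, not course code).
# The handler is registered now, but executes later, non-linearly,
# whenever the toolkit dispatches the matching event.

class EventSource:
    """Stands in for a GUI widget or event loop."""
    def __init__(self):
        self.handlers = {}

    def on(self, event_name, handler):
        # Register code to run at some unknown future time.
        self.handlers.setdefault(event_name, []).append(handler)

    def fire(self, event_name, data):
        # "The future": the toolkit invokes the handlers, not your code.
        for handler in self.handlers.get(event_name, []):
            handler(data)

messages = []
send_button = EventSource()
send_button.on("click", lambda text: messages.append("sent: " + text))

# Much later, the user clicks; only now does the handler body run.
send_button.fire("click", "hello")
print(messages)  # ['sent: hello']
```

The control flow a student reads on the page (register, then fire) is not the control flow that executes at runtime, which is exactly what makes these connections hard to see and hard to debug.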

As a researcher: I’ve written before about the measures that we have that show how badly we do at computing education, and about how important it is to make progress on those measures: like the rainfall problem, and what an IP address is and whether it’s okay to have Wikipedia record yours.  What makes the rainfall problem hard is not just the logic of it, but not knowing what the input might be.  It’s the invisible future.
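For readers who haven’t seen it, the rainfall problem is usually stated something like: read numbers until the sentinel 99999, and report the average of the non-negative values seen.  The exact wording varies across studies; here is one straightforward Python formulation:

```python
def rainfall(values):
    """Average the non-negative readings up to the sentinel 99999.

    The hard part is exactly what can't be seen in advance: the input
    may be empty, all negative, or end without a sentinel at all.
    """
    total = 0
    count = 0
    for v in values:
        if v == 99999:      # sentinel: stop reading
            break
        if v >= 0:          # ignore invalid (negative) readings
            total += v
            count += 1
    return total / count if count else 0

print(rainfall([1, 6, -2, 8, 99999, 50]))  # 5.0
```

The loop itself is trivial; what trips students up is reasoning about every input the future might deliver, including the empty and all-negative cases that make the division unsafe.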

I disagree with a claim that Bret makes (quoted below), that the programmer doesn’t have to understand the machine.  The programmer does have to understand the notional machine (maybe not the silicon one), and that’s critical to really understanding computing.  A program is a specification of the future behavior of some notional machine in response to indeterminate input.  We can make it possible to see all of a program’s execution only if we limit the scope of what it is to be a program.  To really understand programming, you have to imagine the future.

It’s possible for people to learn things which are invisible.  Quantum mechanics, theology, and the plains of Mordor (pre-Jackson) are all examples of people learning about the invisible.  It’s hard to do.  One way we teach that is with forms of cognitive apprenticeship: modeling, coaching, reflection, and eliciting articulation.

Bret is absolutely right that we need to think about designing programming languages to be learnable, and he points out a great set of characteristics that help us get there.  I don’t think his set gets us all the way to where we need to be, but it would get us much further than we are now.  I’d love to have his systems, then lay teaching approaches like cognitive apprenticeship on top of them.

Thus, the goals of a programming system should be:

  • to support and encourage powerful ways of thinking

  • to enable programmers to see and understand the execution of their programs

A live-coding Processing environment addresses neither of these goals. JavaScript and Processing are poorly-designed languages that support weak ways of thinking, and ignore decades of learning about learning. And live coding, as a standalone feature, is worthless.

Alan Perlis wrote, “To understand a program, you must become both the machine and the program.” This view is a mistake, and it is this widespread and virulent mistake that keeps programming a difficult and obscure art. A person is not a machine, and should not be forced to think like one.

via Learnable Programming.

September 28, 2012 at 11:37 am
