Posts tagged ‘programming languages’

Those Who Say Code Does Not Matter are Wrong

Bertrand Meyer is making a similar point to Andy Ko’s argument about programming languages.  Programming does matter, and the language we use matters, too.  Meyer goes on to suggest that those saying that “code doesn’t matter” may just be rationalizing their continued use of antiquated languages.  It can’t be that the peak of human-computer programming interfaces was reached back in New Jersey in the 1970s.

Often, you will be told that programming languages do not matter much. What actually matters more is not clear; maybe tools, maybe methodology, maybe process. It is a pretty general rule that people arguing that language does not matter are simply trying to justify their use of bad languages.

via Those Who Say Code Does Not Matter | blog@CACM | Communications of the ACM.

May 16, 2014 at 9:09 am 8 comments

Happy 50th Birthday to BASIC, the Programming Language That Made Computers Personal

A really fun article, with videos of lots of classic BASIC systems running.

Kemeny believed that these electronic brains would play an increasingly important role in everyday life, and that everyone at Dartmouth should be introduced to them. “Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate,” he said in a 1991 video interview. “It was as simple as that.”

via Fifty Years of BASIC, the Programming Language That Made Computers Personal.

May 12, 2014 at 8:58 am 2 comments

How do we make programming languages more usable and learnable?

Andy Ko made a fascinating claim recently, “Programming languages are the least usable, but most powerful human-computer interfaces ever invented” which he explained in a blog post.  It’s a great argument, and I followed it up with a Blog@CACM post, “Programming languages are the most powerful, and least usable and learnable user interfaces.”

How would we make them better?  I suggest at the end of the Blog@CACM post that the answer is to follow the HCI dictum, “Know thy users, for they are not you.”

We make programming languages today driven by theory — we aim to provide access to Turing/Von Neumann machines with a notation that has various features, e.g., type safety, security, provability, and so on.  Usability is one of the goals, but typically only in a theoretical sense.  Quorum is the only programming language that I know of that tested usability as part of the design process.

But what if we took Andy Ko’s argument seriously?  What if we designed programming languages the way we design good user interfaces — working with specific users on their tasks?  The language’s value would become more obvious.  It would be more easily adopted by a community.  The languages might not be anything that the existing software development community even likes — I’ve noted before that the LiveCoders seem to really like Lisp-like languages, and as we all know, Lisp is dead.

What would our design process be?  How much more usable and learnable could our programming languages become?  How much easier would computing education be if the languages were more usable and learnable?  I’d love it if programming language designers could put me out of a job.

April 1, 2014 at 9:43 am 24 comments

Facts that conflict with identity can lead to rejection: Teaching outside the mainstream

Thought-provoking piece on NPR.  Take parents who believe that the MMR vaccine causes autism.  Show them the evidence that that’s not true.  They might tell you that they believe you — but they become even less likely to vaccinate future children.  What?!?

The explanation (quoted below) is that these parents found a sense of identity in their role as vaccine-deniers.  They rejected the evidence at a deeply personal level, even if they cognitively seemed to buy it.

I wonder if this explains a phenomenon I’ve seen several times in CS education: teaching with a non-traditional but pedagogically useful tool leads to rejection because it’s not the authentic/accepted tool.  I saw it as an issue of students being legitimate peripheral participants in a community of practice. Identity conflict offers a different explanation for why students (especially the most experienced) reject Scheme in CS1, or the use of IDEs other than Eclipse, or even CS teachers’ reaction when asked not to use the UNIX command line.  It’s a rejection of their identity.

An example: I used to teach object-oriented programming and user interface software using Squeak.  I had empirical evidence that it really worked well for student learning.  But students hated it – especially the students who knew something about OOP and UI software.  “Why aren’t we using a real language?  Real OOP practitioners use Java or C++!”  I could point to Alan Kay’s quote, “I invented the term Object-Oriented, and I can tell you I did not have C++ in mind.”  That didn’t squelch their anger and outrage.  I’ve always interpreted their reaction as a response to the perceived inauthenticity of Squeak — it’s not what the majority of programmers used.  But I now wonder if it’s about a rejection of an identity.  Students might be thinking, “I already know more about OOP than this bozo of a teacher! This is who I am! And I know that you use Java or C++!”  Even showing them evidence that Squeak was more OOP, or that it could do anything they could do in Java or C++ (and some things that they couldn’t do in Java or C++) didn’t matter.  I was telling them facts, and they were arguing about identity.

What Nyhan seems to be finding is that when you’re confronted by information that you don’t like, at a certain level you accept that the information might be true, but it damages your sense of self-esteem. It damages something about your identity. And so what you do is you fight back against the new information. You try and martial other kinds of information that would counter the new information coming in. In the political realm, Nyhan is exploring the possibility that if you boost people’s self-esteem before you give them this disconfirming information, it might help them take in the new information because they don’t feel as threatened as they might have been otherwise.

via When It Comes To Vaccines, Science Can Run Into A Brick Wall : NPR.

March 31, 2014 at 1:13 am 32 comments

The new Wolfram Language: Now available on Raspberry Pi

The new Wolfram Language sounds pretty interesting.  I was struck by the announcement that it’s going to run on the $25 Raspberry Pi (thanks to Guy Haas for that).  And I liked Wolfram’s cute blog post where he makes his holiday cards with his new language (see below), which features the ability to have pictures as data elements.  I haven’t learned much about the language yet — it looks mostly like the existing Mathematica language.  I’m curious about what they put in to meet the design goal of having it work as an end-user programming language.

Here are the elements of the actual card we’re trying to assemble:

Now we create a version of the card with the right amount of “internal padding” to have space to insert the particular message:

via “Happy Holidays”, the Wolfram Language Way—Stephen Wolfram Blog.

January 23, 2014 at 1:25 am 1 comment

Data typing might be important for someone

Excellent post and interesting discussion at Neil Brown’s blog, on the question of the role of types for professional software developers and for students.  I agree with his points — I see why professional software developers find types valuable, but I see little value for novice programmers or for end-user programmers.  I have yet to use a typing system that I found useful, one that wasn’t just making me specify details (int vs Integer vs Double vs Float) that were far lower level than I cared about or wanted to care about.

Broadly, what I’m wondering is: are dynamically/flexibly typed systems a benefit to learners by hiding complexity, or are they a hindrance because they hide the types that are there underneath? (Aside from the lambda calculus and basic assembly language, I can’t immediately think of any programming languages that are truly untyped. Python, Javascript et al do have types; they are just less apparent and more flexible.) Oddly, I haven’t found any research into these specific issues, which I suspect is because these variations tend to be per-language, and there are too many other confounds in comparing, say, Python and Java — they have many more differences than their type system.
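Neil Brown’s parenthetical point, that Python and JavaScript do have types, is easy to demonstrate. A short sketch in Python (my example, not from his post): the types are there, but they are checked at run time rather than declared up front.

```python
# Python values carry types, even though variables are never declared with one.
x = 42
print(type(x))        # <class 'int'>
x = "forty-two"       # the same name may rebind to a value of a different type
print(type(x))        # <class 'str'>

# The types are enforced -- just at run time, not before the program runs:
try:
    "forty" + 2
except TypeError as e:
    print("rejected at run time:", e)
```

The type system hasn’t disappeared; it has just become invisible until the moment a value is misused, which is exactly the learnability trade-off Brown is asking about.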

via The Importance of Types | Academic Computing.

November 15, 2013 at 1:52 am 43 comments

Live coding as a path to music education — and maybe computing, too

We have talked here before about the use of computing to teach physics and the use of Logo to teach a wide range of topics. Live coding raises another fascinating possibility: Using coding to teach music.

There’s a wonderful video by Chris Ford introducing a range of music theory ideas through the use of Clojure and Sam Aaron’s Overtone library. (The video is not embeddable, so you’ll have to click the link to see it.) I highly recommend it. It uses Clojure notation to move from sine waves, through creating different instruments, through scales, to canon forms. I’ve used Lisp and Scheme, but I don’t know Clojure, and I still learned a lot from this.

I looked up the Georgia Performance Standards for Music. Some of the standards include a large collection of music ideas, like this:

Describe similarities and differences in the terminology of the subject matter between music and other subject areas including: color, movement, expression, style, symmetry, form, interpretation, texture, harmony, patterns and sequence, repetition, texts and lyrics, meter, wave and sound production, timbre, frequency of pitch, volume, acoustics, physiology and anatomy, technology, history, and culture, etc.

Several of these ideas appear in Chris Ford’s 40-minute video. Many other musical ideas could be introduced through code. (We’re probably talking about music programming, rather than live coding — exploring all of these under the pressure of real-time performance is probably more than we need or want.) Could these ideas be made more constructionist through code (i.e., letting students build music and play with these ideas) than through learning an instrument well enough to explore the ideas? Learning an instrument is clearly valuable (and is part of these standards), but perhaps more could be learned and explored through code.
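As one hypothetical illustration of exploring “frequency of pitch” through code, a few lines of Python can compute an equal-tempered scale. This is my sketch, not an example from Chris Ford’s video; the MIDI note-numbering convention (A4 = note 69 = 440 Hz) is assumed.

```python
# Equal temperament: each semitone multiplies the frequency by the 12th root of 2.
def midi_to_hz(note):
    """Frequency in Hz for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# A C-major scale starting at middle C (MIDI note 60):
c_major = [60, 62, 64, 65, 67, 69, 71, 72]
for n in c_major:
    print(n, round(midi_to_hz(n), 2))
```

A student who writes this has touched frequency, ratio, and exponents all at once, which is the constructionist point.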

The general form of this idea is “STEAM” — STEM + Art.  There is a growing community suggesting that we need to teach students about art and design, as well as STEM.  Here, I am asking the question: Is Art an avenue for productively introducing STEM ideas?

The even more general form of this idea dates back to Seymour Papert’s ideas about computing across the curriculum.  Seymour believed that computing was a powerful literacy to use in learning science and mathematics — and explicitly, music, too.  At a more practical level, one of the questions raised at Dagstuhl was this:  We’re not having great success getting computing into STEM.  Is Art more amenable to accepting computing as a medium?  Are music and art the way to get computing taught in schools?  The argument I’m making here is, we can use computing to achieve music education goals.  Maybe computing education goals, too.

October 3, 2013 at 7:15 am 20 comments

Live coders challenge CS to think about expression again

Bret Victor’s great time traveling video emphasized that the 1960s and 1970s computer scientists were concerned with expression. How do you talk to a computer, and how should it help you express yourself? As I have complained previously, everything but C and C-like languages has disappeared from our undergraduate curriculum. Bret Victor has explored why we talked about expression in those earlier years. I have a different question: How do we get computer scientists to think about expression again?

Live coders think about and talk about expression, as evidenced from the conversations at Dagstuhl. They build their own languages and their own systems. They talk about the abstractions that they’re using (both musical and computational, like temporal recursion), how their languages support various sound generation techniques (e.g., unit generators, synthesized instruments, sampled sounds) and musical styles. If you look at the live coders on the Dagstuhl Seminar participant list, most of them are in music programs, not computer science. Why are the musicians more willing to explore expressive notations than the computer scientists?
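Temporal recursion, one of the abstractions mentioned above, is the pattern of a function that does its work for “now” and then schedules a future call to itself. A rough sketch in Python using the standard sched module (my illustration; systems like Impromptu and Extempore implement this with far more precise timing):

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)
beats = []

def pulse(count):
    """Temporally recursive: play the current beat,
    then schedule a call to yourself in the future."""
    beats.append(count)
    print("beat", count)
    if count < 4:
        # re-schedule this same function a fixed interval ahead
        scheduler.enter(0.25, 1, pulse, argument=(count + 1,))

scheduler.enter(0, 1, pulse, argument=(1,))
scheduler.run()  # runs until no scheduled events remain
```

The recursion happens in time rather than on the call stack, which is why the pattern suits live musical performance: the performer can redefine `pulse` between beats and the next scheduled call picks up the new definition.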

Lisp is alive and well in live coding. I now have a half-dozen of these systems running on my laptop. Overtone is a wonderful system based in Clojure. (See here for more on Overtone; it’s particularly powerful combined with quil for Processing visuals alongside the music.) Andrew Sorensen’s Impromptu was in Scheme, as is his new environment Extempore.

Extempore is amazing. Take a look at this video of an installation called “Physics Playroom,” all controlled in Extempore. It’s a huge touch-sensitive display that lets groups of students play with physics in real-time, e.g., exploring gravity systems on different planets. Andrew said that he could build 90% of this in Impromptu, but the low-level bits would have to be coded in C. He wasn’t happy with changing his expressive tools, so he created Extempore, whose lowest-level parts are compiled (via LLVM) directly to machine code. Andrew went to this effort because he cares a lot about the expressiveness of his tools. (At the opposite end from the Physics Playroom, see this video of Extempore running on ARM boards.)

Not everything is S-Expressions. Thor Magnusson’s Ixi Lang (more on the Ixi Lang project) is remarkable. I love how he explores the use of text programming as both a notation and a feedback mechanism. When he manipulates sequences of notes or percussion patterns, whatever line he defined the sequence on changes as well (seen in red and green below, as agents/lines that have been manipulated by other operations).


Tidal from Alex McLean is a domain-specific language built on top of Haskell, and his new Texture system creates more of a diagramming notation. Dave Griffiths has built his live coding environment, Fluxus, in Racket, which is used in the Program by Design and Bootstrap CS education projects. Dave did all his live coding at Dagstuhl using his Scheme Bricks, which is a Scratch-like block language that represents Scheme forms. (See here for Dave’s blog post on the Dagstuhl seminar.)


How many of our undergraduates have ever seen or used notations like these? How many have considered the design challenges of creating a programming notation for a given domain? Consider especially the constraints of live coding (e.g., expressiveness, conciseness, and usability at 2 am in a dance club). David Ogborn raised the fascinating question at Dagstuhl of designing programming languages for ad hoc groups, in a collaborative design process. Some evidence suggests that there may be nine times as many end-user programmers in various domains as professional software developers.  Do we teach CS students how to design programming notations to meet the needs and constraints of various domains and communities?

I wonder how many other domains are exploring their own notations, their own programming languages, without much contribution or involvement from computer scientists.  I hope that the live coders and others designing domain-specific languages challenge the academic computer scientists to think again about expression. I really can’t believe that the peak of human expression in a computing medium was reached in 1973 with C, and everything else (Java, C++, C#) is just variations on the motif.  We in computer science should be leading in exploring the design of expressive programming languages for different domains.

October 1, 2013 at 3:52 am 7 comments

A portable graphics library for introductory CS: Cross-platform MediaComp!

I just bumped into this paper looking for something else — how cool!  Eric Roberts and Keith Schwarz have created a cross-platform layer on top of a Java server process, so that their portable graphics library (which includes facilities for doing pixel-level manipulations, as we do in MediaComp) can be accessed from anything!  I often get asked “How can I do MediaComp in C++?”  Here’s a way!

For several decades, instructors who focus on introductory computer science courses have recognized the value of graphical examples. Supporting a graphics library that is appropriate for beginning students has become more difficult over time. This paper describes a new approach to building a graphics library that allows for multiple source languages and a wide range of target architectures and platforms. The key to this approach is using an interprocess pipe to communicate between a platform independent client library and a Java based process to perform the graphical operations specific to each platform.
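The architecture the abstract describes can be sketched roughly in Python, with a stand-in back end in place of the Java process. The command names below are hypothetical, not the actual Roberts/Schwarz protocol; `cat` stands in for the back-end process so the pipe mechanics are visible.

```python
import subprocess

# A client library sends drawing commands as lines of text over a pipe to a
# separate back-end process, which performs the platform-specific drawing.
# Here the "back end" is just `cat`, which echoes the protocol stream back.
backend = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

def send(command):
    """Write one protocol line to the back end (command names are made up)."""
    backend.stdin.write(command + "\n")

send("createWindow 400 300")
send("setPixel 10 20 ff0000")   # pixel-level manipulation, MediaComp-style
backend.stdin.close()
echoed = backend.stdout.read()
backend.wait()
print(echoed)
```

Because the client side is only text over a pipe, it can be written in any language, which is how one Java back end serves C++, Python, and the rest.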

via A portable graphics library for introductory CS.

August 21, 2013 at 1:04 am 2 comments

Learnable Programming: Thinking about Programming Languages and Systems in a New Way

Bret Victor has written a stunningly interesting essay on how to make programming more learnable, and how to draw on more of the great ideas of what it means to think with computing.  I love that he ends with: “This essay was an immune response, triggered by hearing too many times that Inventing on Principle was “about live coding”, and seeing too many attempts to “teach programming” by adorning a JavaScript editor with badges and mascots. Please read Mindstorms. Okay?”  The essay is explicitly a response to the Khan Academy’s new CS learning supports, and includes many ideas from his demo/video on making programming systems more visible and reactive for the programmer, but goes beyond that video in elaborating a careful argument for what programming ought to be.

There’s so much in his essay that I strongly agree with.  He’s absolutely right that we teach programming systems that have been designed for purposes other than learnability, and we need to build new ones that have a different emphasis.  He uses HyperTalk as an example of a more readable, better designed programming language for learning, which I think is spot-on.  His video examples are beautiful and brilliant.  I love his list of characteristics that we must require in a good programming language for learners.

I see a limit to Bret’s ideas.  There are things that we want to teach about computing where his methods won’t help us.  I recognize that this is just an essay, and maybe Bret’s vision does cover these additional learning objectives, too.  But the learning objectives I’m struggling with are not made easier with his visual approach.

Let me give two examples — one as a teacher, and the other as a researcher.

As a teacher: I’m currently teaching a graduate course on prototyping interactive systems.  My students have all had at least one course in computer programming, but it might have been a long time ago.  They’re mostly novices.  I’m teaching them how to create high-fidelity prototypes — quite literally, programming artifacts to think with.  The major project of the course is building a chat system.

  • The first assignment involved implementing the GUI (in Jython with Swing).  The tough part was not the visual part, laying out the GUI.  The tough part was linking the widgets to the behaviors, i.e., the callbacks, the MVC part.  It’s not visible, and it’s hard to imagine making visible the process of dealing with whatever-input-the-user-might-provide and connecting it to some part of your code which gets executed non-linearly.  (“This handler here, then later, that handler over there.”)  My students struggled with understanding and debugging the connections between user events (which occur sometime in the future) with code that they’re writing now.
  • They’re working on the second assignment now: Connecting the GUI to the Server.  You can’t see the network, and you certainly can’t see all the things that can go wrong in a network connection.  But you have to program for it.
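The invisible link my students struggled with, code written now but run at some indeterminate future event, can be sketched with a toy event dispatcher in Python (my illustration, not the course’s Jython/Swing code):

```python
# A toy event dispatcher: the "GUI toolkit" calls your handler later.
handlers = {}

def bind(event_name, callback):
    """Register code now; the toolkit runs it at some future event."""
    handlers.setdefault(event_name, []).append(callback)

def fire(event_name, data):
    """Simulates the user acting at some unpredictable future moment."""
    for callback in handlers.get(event_name, []):
        callback(data)

log = []
# This line executes now, but the handler body runs only when the event arrives:
bind("button-click", lambda data: log.append(f"clicked: {data}"))

fire("button-click", "Send")   # ...sometime later, the user clicks the button
print(log)
```

The student’s confusion lives in the gap between `bind` and `fire`: nothing visible happens when the handler is registered, and the handler runs in an order the programmer never wrote down.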

As a researcher: I’ve written before about the measures that we have that show how badly we do at computing education, and about how important it is to make progress on measures like the rainfall problem, and on understanding what an IP address is and whether it’s okay to have Wikipedia record yours.  What makes the rainfall problem hard is not just the logic of it, but not knowing what the input might be.  It’s the invisible future.
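For readers who haven’t seen it, the rainfall problem asks for the average of the non-negative values in a sequence, stopping at a sentinel. A sketch in Python (the classic studies used Pascal, and the sentinel value varies by version):

```python
def rainfall(readings, sentinel=99999):
    """Average the non-negative readings that appear before the sentinel.
    The hard part is defending against every input the future might hold:
    negative values, no valid data at all, a missing sentinel."""
    total, count = 0, 0
    for value in readings:
        if value == sentinel:
            break
        if value >= 0:
            total += value
            count += 1
    return total / count if count else None

print(rainfall([1, -2, 3, 99999, 5]))  # the -2 and the post-sentinel 5 are ignored
```

Every branch in that loop exists because of input the programmer has to imagine rather than see, which is the point about the invisible future.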

I disagree with a claim that Bret makes (quoted below), that the programmer doesn’t have to understand the machine.  The programmer does have to understand the notional machine (maybe not the silicon one), and that’s critical to really understanding computing. A program is a specification of the future behavior of some notional machine in response to indeterminate input.  We can make it possible to see all of a program’s execution only if we limit the scope of what it is to be a program.  To really understand programming, you have to imagine the future.

It’s possible for people to learn things which are invisible.  Quantum mechanics, theology, and the plains of Mordor (pre-Jackson) are all examples of people learning about the invisible.  It’s hard to do.  One way we teach that is with forms of cognitive apprenticeship: modeling, coaching, reflection, and eliciting articulation.

Bret is absolutely right that we need to think about designing programming languages to be learnable, and he points out a great set of characteristics that help us get there.  I don’t think his set gets us all the way to where we need to be, but it would get us much further than we are now.  I’d love to have his systems, then lay teaching approaches like cognitive apprenticeship on top of them.

Thus, the goals of a programming system should be:

to support and encourage powerful ways of thinking

to enable programmers to see and understand the execution of their programs

A live-coding Processing environment addresses neither of these goals. JavaScript and Processing are poorly-designed languages that support weak ways of thinking, and ignore decades of learning about learning. And live coding, as a standalone feature, is worthless.

Alan Perlis wrote, “To understand a program, you must become both the machine and the program.” This view is a mistake, and it is this widespread and virulent mistake that keeps programming a difficult and obscure art. A person is not a machine, and should not be forced to think like one.

via Learnable Programming.

September 28, 2012 at 11:37 am 9 comments

Typography to describe iteration

One of the complaints about Python is the use of indentation to define blocks.  A quote from Don Knuth is often used to defend that use:

We will perhaps eventually be writing only small modules which are identified by name as they are used to build larger ones, so that devices like indentation, rather than delimiters, might become feasible for expressing local structure in the source language.

Is it natural to use typography to indicate local structure?  Is it more accessible than curly brace ({}) notation, or BEGIN/END blocks?  Do we understand it easily?
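For comparison, here is the same local structure expressed both ways: a small Python function where indentation alone defines the blocks, with a brace-delimited C-like equivalent shown in a comment (my example):

```python
# Python: indentation alone expresses the block structure.
def count_positives(numbers):
    count = 0
    for n in numbers:
        if n > 0:
            count += 1   # nesting depth is visible as indentation
    return count

# The same local structure with delimiters (C-like, shown as a comment):
#   int count_positives(int *numbers, int len) {
#       int count = 0;
#       for (int i = 0; i < len; i++) {
#           if (numbers[i] > 0) { count++; }
#       }
#       return count;
#   }

print(count_positives([3, -1, 4, -1, 5]))
```

In the Python version the typography is the structure; in the braced version the typography is a courtesy that the compiler ignores, so the two can disagree.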

I thought about this Sunday morning when I was at services, using a songbook I’d never seen before. The verses and refrain were indicated like this:

You sing the refrain (the “How great thou art” part) twice, indicated by “(2)” — which is pretty obvious, since it’s clear you wouldn’t sing “(2).”  What’s the “scope” of the repeat?  Well, the refrain is italicized, and so is the “(2).”  So you sing the italicized part twice.

I thought it was interesting to use typography to convey to the general population how to “iterate.” If we could count on the general population to know musical notation, and if the music were provided, there is already a different notation for indicating “iteration”: the repeat sign.  The use of typography for defining iteration is meant to be more understandable, more generally accessible.
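The songbook’s convention maps directly onto a loop: the italics delimit the block, and the “(2)” is the repeat count. A sketch of the same idea in Python (with stand-in lyrics):

```python
verse = "O Lord my God, when I in awesome wonder..."
refrain = "How great thou art, how great thou art!"

sung = [verse]
for _ in range(2):        # the "(2)" marker: the repeat count
    sung.append(refrain)  # the italicized span is the body of the loop

print("\n".join(sung))
```

The italics play the role of the indentation (or braces): they tell the singer where the repeated block begins and ends.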

July 3, 2012 at 4:32 am 9 comments

MATLAB and APL: Meeting Cleve Moler

I have often described MATLAB as “APL with a normal character set,” but I didn’t actually know anything about how MATLAB came to be or if there was any relationship.  Last night, I got to ask the man who invented MATLAB, Cleve Moler, at the IEEE Computer Society Awards Dinner, where Cleve was named a “Computer Society Pioneer.”  When I introduced myself as coming from Georgia Tech, he took notice.  “Georgia Tech is a big MATLAB user!”  We teach 1200 Engineering students a semester in MATLAB.

Cleve developed MATLAB (in Fortran) as a Matrix Calculator (explicitly, a “MATrix LABoratory”) for his students.  There was no explicit tie to APL, but he saw the connections.  He said that he’s always seen MATLAB as “portable APL” because he used a traditional character set.

It’s not just the character set though.  “Iverson showed me J.  I wanted MATLAB to be understandable by normal people.”  He said that someone once converted a program he’d written in MATLAB into APL.  “I asked what that was.  They told me, ‘That’s your program!’ I couldn’t recognize it.”  APL is about being uniform about everything, but MATLAB “is a mishmosh of all kinds of things.”

Others joined in the conversation.  “What do you think about Mathematica?”  Cleve responded, “Mathematica is APL for the 21st century.  Mathematica has a uniformity about it.”

Cleve’s ideas about what makes a language usable “by normal people” are interesting.  The success of MATLAB, used by so many people in so many different contexts, domains, and application areas, gives him real authority for making such claims.  He sees a “mishmosh” as being easier for people to understand than uniformity.  Marvin Minsky famously said that the brain is likely a “kluge.”  Do we actually prefer messy languages, with less uniformity, perhaps as a reflection of our “kluge” nature?

June 14, 2012 at 10:11 am 4 comments

Designing a language for programming with musical collaborators in front of an audience

If you were going to build a programming language explicitly for musicians to use when programming live with collaborators and in front of an audience, what would you build into it?  What should musicians have to learn about computer science in order to use this language? There’s a special issue of Computer Music Journal coming out, focused on these themes. What a fascinating set of design constraints, and how different from most programming languages!

We are excited to announce a call for papers for a special issue of
Computer Music Journal, with a deadline of 21st January 2013, for
publication in Spring of the following year. The issue will be guest
edited by Alex McLean, Julian Rohrhuber and Nick Collins, and will
address themes surrounding live coding practice.

Live coding focuses on a computer musician’s relationship with their
computer. It includes programming a computer as an explicit onstage
act, as a musical prototyping tool with immediate feedback, and also
as a method of collaborative programming. Live coding’s tension
between immediacy and indirectness brings about a mediating role for
computer language within musical interaction. At the same time, it
implies the rewriting of algorithms, as descriptions which concern the
future; live coding may well be the missing link between composition
and improvisation. The proliferation of interpreted and just-in-time
compiled languages for music and the increasing computer literacy of
artists has made such programming interactions a new hotbed of musical
practice and theory. Many musicians have begun to design their own
particular representational extensions to existing general-purpose
languages, or even to design their own live coding languages from
scratch. They have also brought fresh energy to visual programming
language design, and new insights to interactive computation, pushing
at the boundaries through practice-based research. Live coding also
extends out beyond pure music and sound to the general digital arts,
including audiovisual systems, linked by shared abstractions.

2014 happens to be the ten-year anniversary of the live coding
organisation TOPLAP. However, we do not wish to restrict
the remit of the issue to this, and we encourage submissions across a
sweep of emerging practices in computer music performance, creation,
and theory. Live coding research is more broadly about grounding
computation at the verge of human experience, so that work from
computer system design to exposition of live coding concert work is
equally eligible.

Topic suggestions include, but are not limited by:

- Programming as a new form of musical exploration
- Embodiment and linguistic abstraction
- Symbology in music interaction
- Uniting liveness and abstraction in live music
- Bricolage programming in music composition
- Human-Computer Interaction study of live coding
- The psychology of computer music programming
- Measuring live coding and metrics for live performance
- The live coding audience, or live coding without audience
- Visual programming environments for music
- Alternative models of computation in music
- Representing time in interactive programming
- Representing and manipulating history in live performance
- Freedoms, constraints and affordances in live coding environments

Authors should follow all CMJ author guidelines, paying particular
attention to the maximum length of 25 double-spaced pages.

Submissions should be received by 21st January 2013.  All submissions
and queries should be addressed to Alex McLean.

April 24, 2012 at 9:45 am Leave a comment

Modern HyperCard for Today’s Schools: But Where’s the Community of Practice?

I’ve talked about RunRev/LiveCode here before.  It’s 90% HyperCard, updated to be cross-platform and with enhanced abilities.  I mostly agree with the comments below (but not with the critique of Scratch or Logo): It really does seem like an excellent tool for the needs in today’s schools.  It’s real programming, you can build things quickly, you can build for desktop or Web or mobile devices, it’s cross platform, and it’s designed to be easily learned.  The language is English-like and builds on what we know about how people naively think about programming.

I proposed using this next Fall in a course I’m teaching for graduate students to introduce them to programming. I got shot down.  The faculty argued that they didn’t want a “boutique” language.  They wanted a “real” language.  I do see the point.  Audrey Watters and I talked about this a few weeks ago.  Students don’t just want knowledge — they want to join a community of practice.  The students see or imagine people who do what they want to do, and the students want to learn what they know.  Students want to grow to be at the center of some community of practice.  Where’s the community of practice for HyperCard-like programming today?  Do you see lots of experts who are doing the cool things that you want to do — with HyperCard?  The power and expressivity of a language are not enough.  Languages today have cultures and communities.  To learn a language is to express interest in joining or defining a culture or community.  Alan Perlis said, “A language that doesn’t affect the way you think about programming, is not worth knowing.”  Today, a language that doesn’t reflect who you want to be, is not worth knowing.

Pascal is still available for modern computers.  So is Logo.  We know how to teach both of them to novices far better than we know how to teach Java or C++ to novices.  These languages were not abandoned on pedagogical or cognitive grounds — they work for teaching computing.  So why don’t we use them?  It’s because of the perceptions, the expectations, and the culture/community that grew up around them.  I’ll bet that some teacher who doesn’t know anything about Logo could discover it, not know about its past, and use it really well to teach K-12 kids about computer science.

Let’s go one step beyond the discussion that Audrey and I had, and I’ll use something that I always warn my students about: introspection.

I use LiveCode, and love it.  I don’t have much time to program these days.  So when I need a tool for data analysis, or need to remove student names from thousands of Swiki pages, or want to build a prototype, I use LiveCode because I can code more quickly and easily in it than in anything else I know these days.  The code is not well structured or beautiful, but after I write it and use it, nobody (including me) ever looks at it again.  There is a community of LiveCode developers, but I don’t really participate in it much.  When I have wanted to create code that I share with others, I have used Python or Squeak.  Currently, I’m interested in investing some time in learning JavaScript, because of its Lisp/Smalltalk-like internals and because of the platforms that it opens up for me.  I’m not particularly interested in joining the JavaScript community. Yesterday, I took some time off to just play, and I downloaded some new languages that I’m interested in for computer music: Impromptu and Field.  I do choose languages based on what I want to be, but not for the community. I choose languages for the language features, the task needs, the task constraints, and my facility with the language.

I don’t know why I’m not particularly drawn to any language community these days.  Maybe I am choosing languages based on community, but I’m not aware of it. Maybe I am at the center of my community of practice, which frees me to make choices and go in new directions.  I want to be a computing education researcher who can express his ideas in code and can build his own tools.  There aren’t many of us, and there isn’t a language central to that community.  Maybe, in this community of practice, the tool is incidental and not integral to the community.

I do think (perhaps naively) that it’s important for us in computing to be willing to invent a community of practice, not just join an existing one.  If you want to change the way people think about computing, you don’t just join an existing community.  The existing communities were created within and support the existing values.  We should also be about inventing communities that support different values.

I spoke to dozens of teachers who all told me a similar story. There is a sea change in the air. After thirty years of teaching powerpoint and excel spreadsheets, schools are finally returning to the idea that we really need to teach the next generation how to program – but where are the tools to do it? Suddenly ICT teachers up and down the country are being told, “from next year you must teach programming principles” but they have been given no training, tools or guidance on how to achieve this. With little time to learn and a very limited range of choices, these teachers were delighted to discover LiveCode. It seems to be exactly what they are looking for. Easy to learn for both teachers and students, real programming without the limitations of “snap together” tools like scratch or logo, no arcane or hard to understand syntax or symbols, and best of all, it lets the students deploy the end results on their iPad, iPhone or Android device.

via Education, Education | revUp 131.

April 9, 2012 at 8:30 am 8 comments

A nice definition of computational thinking, including risks and cyber-security

GasStationWithoutPumps did a blog piece on the newspaper articles that I mentioned earlier this week, and he pointed out something important that I missed.  The Guardian’s John Naughton provided a really nice definition of computational thinking:

… computer science involves a new way of thinking about problem-solving: it’s called computational thinking, and it’s about understanding the difference between human and artificial intelligence, as well as about thinking recursively, being alert to the need for prevention, detection and protection against risks, using abstraction and decomposition when tackling large tasks, and deploying heuristic reasoning, iteration and search to discover solutions to complex problems.

I like this one.  It’s more succinct than others that I’ve seen, and still does a good job of hitting the key points.

Naughton’s definition includes issues of cyber-security and risk.  I don’t see that often in “Computational Thinking” definitions.  I was reminded of a list that Greg Wilson generated recently in his Software Carpentry blog about what researchers need to know about programming the Web.

Here’s what (I think) I’ve figured out so far:

  1. People want to solve real problems with real tools.
  2. Styling HTML5 pages with CSS and making them interactive with Javascript aren’t core needs for researchers.
  3. All we can teach people about server-side programming in a few hours is how to create security holes, even if we use modern frameworks.
  4. People must be able to debug what they build. If they can’t, they won’t be able to apply their knowledge to similar problems on their own.
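Wilson’s third point is easy to see concretely. A minimal sketch (using Python’s sqlite3 with a made-up `users` table) of the classic hole a few hours of server-side teaching tends to produce — building SQL by string pasting — next to the parameterized form that avoids it:

```python
import sqlite3

# Hypothetical table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is pasted straight into the SQL text.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver quotes the input for us.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# Attacker-controlled "name" turns the unsafe query into a data dump:
payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # matches every row, leaking all secrets
print(lookup_safe(payload))    # matches nothing
```

The unsafe version returns every secret in the table when handed the injection payload; the safe version returns an empty result, because the whole string is treated as a literal name.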

Greg’s list surprised me, because it was the first time that I’d thought of risk and cyber-security as critical to end-user programmers.  Yes, cyber-security plays a prominent role in the CS:Principles framework (as part of Big Idea VI, on the Internet), but I’d thought of that (cynically, I admit) as being a nod to the software development firms who want everyone to be concerned about safe programming practices.  Is it really key to understanding the role of computing in our everyday lives?  Maybe — the risks and needs for security may be the necessary consequence of teaching end-users about the power and beauty of computing.

Greg’s last point is one that I’ve been thinking a lot about lately.  I’ve agreed to serve on the review committee for Juha Sorva’s thesis, which focuses on his excellent program visualization tool, UUhistle.  I’m enjoying Juha’s document very much, and I’m not even up to the technology part yet.  He has terrific coverage of the existing literature in computing education research, cognitive science, and learning sciences, and the connections he draws between disparate areas are fascinating.  One of the arguments that he’s making is that the ability to understand computing in a transferable way requires the development of a mental model — an executable understanding of how the pieces of a program fit together in order to achieve some function.  For example, you can’t debug without a mental model of how the program works (to connect to Greg’s list).  Juha’s dissertation is making the argument (implicitly, so far in my reading) that you can’t develop a mental model of computing without learning to program.  You have to have a notation, some representation of the context-free executable pieces of the program, in order to recognize that these are decontextualized pieces that work in the same way in any program.  A WHILE loop has the same structure and behavior, regardless of the context, regardless of the function that any particular WHILE loop plays in any particular program. Without the notation, you don’t have the names or representations for the pieces that are necessary for transfer.
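The WHILE-loop point can be made concrete. A small sketch (in Python, with made-up function names) of two loops whose purposes have nothing in common, but whose structure — test a condition, do the work, update the state — is identical, which is exactly the decontextualized piece a notation lets a learner name and transfer:

```python
def sum_below(limit):
    """Add 1, 2, 3, ... while the running total stays under limit."""
    total, n = 0, 1
    while total + n < limit:   # condition
        total += n             # work
        n += 1                 # update
    return total

def halving_steps(x):
    """Count how many halvings it takes before x drops below 1."""
    steps = 0
    while x >= 1:              # condition
        x = x / 2              # work
        steps += 1             # update
    return steps
```

Different domains, same WHILE: a learner who can name the condition/work/update pattern in one loop can recognize it in the other.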

Juha is making an argument like Alan Perlis’s argument in 1961: Perlis wasn’t arguing that everyone needed to understand programming for its own sake.  Rather, he felt that systems thinking was the critical need, and that the best way to get to systems thinking was through programming.  The cognitive science literature that Juha is drawing on is saying something stronger: that we can’t get to systems thinking (or computational thinking) without programming.  I’ll say more about Juha’s thesis as I finish reviewing it.

It’s interesting that there are some similar threads about risk and cyber-security appearing in different definitions of computational thinking (Naughton and Wilson discussed here), and those thinking about how to teach computational thinking (Sorva and Perlis here) are suggesting that we need programming to get there.

April 6, 2012 at 8:23 am 8 comments
