Posts tagged ‘live coding’

Inverse Live Coding: A practice for learning web development

I finished my first semester teaching at the University of Michigan. I taught EECS 493 User Interface Development, all on the Web. It was a tough class for me since, before this class, I had never used JavaScript, jQuery, or CSS. I learned a lot, but I had to teach the material while I was still learning it.

As readers of this blog know, I am a fan of live coding (see blog post here), where a teacher writes code live in front of the class. The positive aspects of live coding include that the teacher will likely make mistakes, so students see that (a) mistakes are common and expected, (b) mistakes can be corrected, and (c) there is a process for recovering from mistakes.

I could not live code in this class. I didn’t know the material well enough. It took me too long to code, and I couldn’t recover from mistakes in front of the class.

Live coding was also especially hard in the context of a Web-based user interface course. I had the advantage that I could watch videos of Mark Ackerman and Walter Lasecki teaching the course before me. Both of them know JavaScript and Web technologies well enough that they could live code in front of the class. I couldn’t understand what they were doing. Part of it was the low resolution of the video: I couldn’t read the screens well. But a bigger part of it was the cognitive load of remembering the HTML and the CSS while writing the jQuery code. I couldn’t figure out the code while also remembering the names of the divs, how they were nested, and which CSS styles were being applied to which divs and classes. Maybe the younger kids fare better than me in my mid-50’s.

So, I tried a different practice. Maybe call it Inverse Live Coding.

  • I wrote programs (HTML, CSS, plus JavaScript with jQuery) before class.
  • I demoed and lectured on those programs (briefly — maybe 10-15 minutes).
  • Critical: I gave the students the code and all the associated files (HTML, CSS, and JavaScript).
  • Then, live in-class, I had students pair up to make small changes to the code.

This is like in-class coding or a lab session, but short: maybe 8-10 minutes of coding per exercise, so I typically fit 2-3 of them into a 90-minute class as part of a lecture. It’s more like a Peer Instruction (PI) session than a traditional laboratory coding activity. I have students use iClickers to “click in” when they’re successfully done with the change, so we’re using PI practices to make clear what kind of activity this is. That ritual (clicking in, then re-joining the class session) helps bring people back to the whole-class context after the pair-programming activity.

Here’s an example: I wrote a small JavaScript/HTML/CSS project that had a balloon slowly float upward, and could be steered left or right with arrow keys. (Why yes, that is a Smalltalk balloon in the example.)

I explained the code, introducing ideas like closures and the ready() method in jQuery.

Then I gave them a challenge. In this case, make the balloon also drift to the left, or drift in different directions based on the time or a random variable.
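I don’t have the class’s files at hand, but the movement logic for that kind of challenge can be sketched DOM-free. This is a hypothetical sketch, not the actual course code: makeBalloon, step, and the driftX parameter are all names I made up for illustration.

```javascript
// Hypothetical sketch of the balloon's movement logic, separated from the
// DOM so it can run anywhere. makeBalloon returns a closure over the
// balloon's position, much like the closures discussed in class.
function makeBalloon(x, y, driftX) {
  // driftX < 0 drifts left, > 0 drifts right; the balloon always floats up.
  return function step() {
    y -= 2;        // float upward a little each tick
    x += driftX;   // horizontal drift (the in-class challenge)
    return { x: x, y: y };
  };
}

// In the real project, a jQuery $(document).ready() handler would call
// something like step() on a timer and move a div; here we just advance it.
const step = makeBalloon(100, 300, -1); // drift left
let pos;
for (let i = 0; i < 5; i++) {
  pos = step();
}
console.log(pos); // after 5 ticks: { x: 95, y: 290 }
```

In class, the students’ change would amount to adding or varying the drift term, which is why the challenge fit in under 10 minutes.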

Different kinds of questions: I was pleased by how well this worked. First, it didn’t take much time: 80% of the teams solved at least one of the challenges in less than 8 minutes. Second, the teaching assistants and I got tons of questions. Many of the questions were fundamental ones, things I never would have thought to talk about in class if I were live coding. These are (paraphrased) actual questions that I got during the inverse live-coding sessions.

  • “Okay, I made the changes in the JavaScript. Now, how do I compile it?”
  • “I made the changes using the debugger in Chrome. How do I make this work in the CSS, HTML, or JavaScript files?”
  • “So, the jQuery is changing the HTML or the CSS, right? How come my file isn’t changing?”

These were pretty easy questions to handle in class, but one could imagine them stymieing a student working alone. When I got one of these questions 1:1 or 2:1 during the pair-programming sessions, I made sure I answered the question for the whole class, too. These were questions about the fundamental nature of how Web technologies work (e.g., that the interpreter/compiler is the browser itself, and that jQuery says it’s changing CSS/HTML, but it’s really changing the DOM tree). It was much better to push the students to do the programming, and thus face these questions, than for the teacher to do the programming. The teacher has an expert blind spot and may not even think to explain the parts that are invisible and confusing.

Miranda Parker suggested a variation of this practice. I could invite students to come up to the front of class to solve the problem, or to explain their solution after the session. That could be helpful in making sure that everyone sees something that works.

I would still use live coding in an intro course, where there aren’t other, invisible files involved in the task to increase cognitive load. In classes like Web Design, I will use inverse live coding more in the future.

February 4, 2019 at 7:00 am 21 comments

Georgia Tech’s EarSketch Uses Music To Teach Students Coding

Pleased to see that my colleagues are getting recognition for their cool work.

The White House recognized Georgia Tech last Monday for a coding program that uses music to teach code. It was recognized as part of its national initiatives for Computer Science Education Week. EarSketch is a free online tool that uses music to teach the programming languages of Python and JavaScript. Georgia Tech professors plan to expand the program to more than 250 middle and high schools nationwide next year.

Source: Georgia Tech’s EarSketch Uses Music To Teach Students Coding | WABE 90.1 FM

February 10, 2017 at 7:00 am 2 comments

International Conference on Live Coding (ICLC), 13-15 July, Leeds, Registration open

On my recent trip to Germany, I got to connect with live coding again. At the Dagstuhl Seminar I was attending, I visited with Alan Blackwell, who organized the live coding Dagstuhl Seminar and has been doing live coding with Sam Aaron (of Sonic Pi fame). When I got back to Oldenburg, I visited with Graham Coleman, a Georgia Tech alum who is completing a PhD in computer music and who was an active live coder in Atlanta. Great to see the first international conference happening soon!

First International Conference on Live Coding

ICSRiM, School of Music, University of Leeds

13th-15th July 2015

We are happy to announce that registration for ICLC2015 is now open. Live coding turns programming languages into live interfaces, allowing us to directly manipulate computation via its notation. Live coding has great potential, being used for example to create improvised music and visuals, to allow developers to collaborate in new ways, to better understand computational models by making fundamental changes to them on-the-fly, and to find new ways to learn and teach programming.

Since the beginning of the TOPLAP movement in 2003 (building on an extensive but hidden pre-history), live coding has grown fast, attracting interest from many people in artistic, creative, scientific, educational, business and mixed contexts. After a good number of international events, the time is right to bring these people together for an academic conference, exchanging ideas and techniques, and enjoying dozens of peer reviewed papers and performances. The conference will also open up the field for people new to live coding, so they may develop and contribute their own perspectives on this emerging field. Join us!

Registration is £80 (£50 concessions) for the three-day conference, including lunches, evening events, and more.

See the website for details of the developing programme:

And register here, completing both the on-line payment and registration forms.

ICLC is organised by the Live Coding Research Network, which is funded by the Arts and Humanities Research Council.

June 15, 2015 at 7:08 am Leave a comment

How to Teach Computer Science with Media Computation

In the Preface to the new 4th ed book, I wrote a bit about what we know about how to teach computer science using Media Computation.  These are probably useful in most CS classes, even without Media Computation:

Over the last 10 years, we have learned some of the approaches that work best for teaching Media Computation.

  • Let the students be creative. The most successful Media Computation classes use open-ended assignments that let the students choose what media they use. For example, a collage assignment might specify the use of particular filters and com- positions, but allow for the student to choose exactly what pictures are used. These assignments often lead to the students putting in a lot more time to get just the look that they wanted, and that extra time can lead to improved learning.
  • Let the students share what they produce. Students can produce some beautiful pictures, sounds, and movies using Media Computation. Those products are more motivating for the students when they get to share them with others. Some schools provide online spaces where students can post and share their products. Other schools have even printed student work and held an art gallery.
  • Code live in front of the class. The best part of the teacher actually typing in code in front of the class is that nobody can code for long in front of an audience and not make a mistake. When the teacher makes a mistake and fixes it, the students see (a) that errors are expected and (b) there is a process for fixing them. Coding live when you are producing images and sounds is fun, and can lead to unexpected results and the opportunity to explore, “How did that happen?”
  • Pair programming leads to better learning and retention. The research results on pair programming are tremendous. Classes that use pair programming have better retention results, and the students learn more.
  • Peer instruction is great. Not only does peer instruction lead to better learning and retention outcomes, but it also gives the teacher better feedback on what the students are learning and what they are struggling with. We strongly encourage the use of peer instruction in computing classes.
  • Worked examples help with learning creativity. Most computer science classes do not provide anywhere near enough worked-out examples for students to learn from. Students like to learn from examples. One of the benefits of Media Computation is that we provide a lot of examples (we’ve never tried to count the number of for and if statements in the book!), and it’s easy to produce more of them. In class, we do an activity where we hand out example programs, then show a particular effect. We ask pairs or groups of students to figure out which program generated that effect. The students talk about code, and study a bunch of examples.

May 13, 2015 at 8:09 am 5 comments

First International Conference on Live Coding in July 2015

First International Conference on Live Coding
13-15th July 2015, University of Leeds, UK

With pleasure we announce the initial call for papers and performances for the
first International Conference on Live Coding, hosted by ICSRiM in the School
of Music, University of Leeds, UK.

This conference follows a long line of international events on liveness in
computer programming; the Changing Grammars live audio programming symposium in
Hamburg 2004, the LOSS Livecode festival in Sheffield 2007, the annual Vivo
festivals in Mexico City from 2012, the live.code.festival in Karlsruhe, the
LIVE workshop at ICSE on live programming, and Dagstuhl Seminar 13382 on
Collaboration and Learning through Live Coding in 2013, as well as numerous
workshops, concerts, algoraves and conference special sessions. It also follows
a series of Live Coding Research Network symposia on diverse topics, and the
activities of the TOPLAP community since 2004. We hope that this conference
will act as a confluence for all this work, helping establish live coding as an
interdisciplinary field, exploring liveness in symbolic abstractions, and
understanding the perceptual, creative, productive, philosophical and cultural
aspects of live coding.

The proceedings will be published with an ISSN, and there will also be a
follow-on opportunity to contribute to a special issue of the Journal on
Performance Arts and Digital Media; details will be announced soon.


* Templates available and submissions system open: 8th December 2014
* Performance submissions deadline: 16th February 2015
* Paper submissions deadline: 1st March 2015
* Notification of results: 10th April 2015
* Camera ready deadline: 10th May 2015
* Conference: 13-15th July 2015

Submission categories

* Long papers (6-12 pages)
* Short papers (4-6 pages)
* Poster/demo papers (2-4 pages)
* Performances (1 page abstract and technical rider)

ICLC is an interdisciplinary conference, so a wide range of approaches are
encouraged and we recognise that the appropriate length of a paper may vary
considerably depending on the approach. However, all submissions must propose
an original contribution to Live Coding research, cite relevant previous work,
and apply appropriate research methods.

The following long list of topics, contributed by early career researchers in
the field, is indicative of the breadth of research we wish to include:

* Live coding and the body; tangibility, gesture, embodiment
* Creative collaboration through live code
* Live coding in education, teaching and learning
* Live coding terminology and the cognitive dimensions of notation
* Live language and interface design
* CUIs: Code as live user interface
* Domain specific languages, and the live coding ecosystem
* Programming language experience design: visualising live process and state in
code interfaces
* Virtuosity, flow, aesthetics and phenomenology of live code
* Live coding: composition, improvisation or something else?
* Time in notation, process, and perception
* Live coding of and inside computer games and virtual reality
* Live programming languages as art: esoteric and idiosyncratic systems
* Bugfixing in/as performance
* Individual expression in shared live coding environments
* Live coding across the senses and algorithmic synaesthesia
* Audience research and ethnographies of live coding
* Live coding without computers
* Live coding before Live Coding; historical perspectives on live programming
* Heritage, vintage and nostalgia – bringing the past to life with code
* Live coding in public and in private
* Cultural processes of live programming language design
* General purpose live programming languages and live coding operating systems
* Connecting live coding with ancient arts or crafts practice
* Live coding and the hacker/maker movement: DIY and hacker aesthetics
* Critical reflections; diversity in the live coding community
* The freedom of liveness, and free/open source software

Submissions which work beyond the above are encouraged, but all should have
live coding research or practice at their core. Please contact us if you have
any questions about remit.


Please email feedback and/or questions to

January 5, 2015 at 7:54 am 1 comment

Come visit the CS Education Zoo from Steve Wolfman and Will Byrd

Post to SIGCSE-members, re-posted here with Steve’s permission.

TL;DR version: Watch Will&Steve interview interesting CS Ed folks on the CS Ed Zoo at  Most recent episode with Mark Guzdial at

Full version:

It’s a common lament that CS education is an isolated practice. You teach in your own classroom, a colleague drops by once a year for performance review, and otherwise only your students know what you do.

We know what you’re thinking:

I wish there were a place where CS educators were kept on 24-hour public display (locked securely behind iron bars, of course).

Well, now there is!

Announcing the CS Education Zoo, a bi-weekly-very-ish interview series where CS educators (and people with animal-themed last names) Will Byrd and Steve Wolfman interview interesting people involved in CS education (even if they lack animal-themed last names).

So far, we’ve posted six episodes:

+ Mark Guzdial extols the power and potential of live coding (and MUCH more):

+ David Nolen ponders the impact of a programmer’s first language on their learning (and MUCH more):

+ Becky Bates shares how to craft a large, heterogeneous project course (and MUCH more):

+ Jeff Forbes explains why “rapid feedback is better than good feedback” (and MUCH more):

+ Rob Simmons discusses the subtleties of teaching formal reasoning about programming in intro courses (and MUCH more):

+ Kim Voll tells us what to tell our students interested in gaming careers (and MUCH more):

And in the works: a chat with some of the people behind Hacker School.

P.S. Drop us a line ( or tweet like the cool kids apparently do to @steve_wolfman and @webyrd) if there’s some person, group, or other amorphous-but-audible entity you think we should invite!

November 11, 2014 at 8:51 am Leave a comment

A stunningly beautiful connection between music and computing: Jason Freeman’s “Grow Old”

My eldest child graduated from college this last year, and I’m feeling my first half-century these days.  That may be why I was particularly struck by the themes in Jason Freeman’s beautiful new work.  I recommend visiting and reading the page, and you’ll get why this is so cool, even before you listen to the music.  It’s not live coding — it’s kind of the opposite.  It’s another great example of using music to motivate the learning of computing.

Why can’t my music grow old with me?

Why does a recording sound exactly the same every time I listen to it? That makes sense when recordings are frozen in time on wax cylinders or vinyl or compact discs. But most of the music I listen to these days comes from a cloud-based streaming music service, and those digital 1s and 0s are streamed pretty much the same way every time.

In this world of infinitely malleable, movable bits, why must the music always stay the same? From day to day and year to year, I change. I bring new perspectives and experiences to the music I hear. Can my music change with me?

This streaming EP is my attempt to answer these questions. Once a day, a simple computer program recreates each track. From one day to the next, the changes in each track are usually quite subtle, and you may not even notice a difference. But over longer periods of time — weeks, months, or years — the changes become more substantial. So when you return to this music after a hiatus, then it, like you, will have changed.

via Jason Freeman: Grow Old.

May 26, 2014 at 8:25 am 1 comment

Launching Livecoding Network

Interesting announcement from Thor Magnusson and Alex McLean — more energy going into livecoding.  Check out the doctoral consortium around livecoding, too.

AHRC Live Coding Research Network

We are happy to announce the launch of the Live Coding Research
Network (LCRN), hosting a diverse series of events and activities over
the next two years, funded by the UK Arts and Humanities Research
Council (AHRC). In addition the TOPLAP organisation will run a
year-long programme of performance events around the country,
supported by the Sound and Music national agency for new music.

If you are unfamiliar with the practice of live coding, for now we
refer you to the website of our sister organisation TOPLAP:

Following a successful launch symposium last month, we have three more
symposia, an international conference as well as a range of associated
events already planned.


* 4th-6th July 2014, University of Sussex, Brighton – “Live coding and the body”

Our second symposium will be preceded by an “algorave” night of
performances at The Loft on the 4th July, with the symposium proper
running on the 5th and 6th of July. This symposium will follow after
the NIME conference in London (, which will
itself include a good number of live coding performances and papers.

Please see our website for more information:

* 25th-28th September 2014, Birmingham – “Live coding in collaboration
and network music”

Our third symposium will run from the 25th-26th September 2014, with
the first day focussed on doctoral research. It will lead into the
well established Network Music Festival
(, running over the weekend, which
will itself showcase network-based live coding music amongst its
programme. Watch our website for details.

* 26th April 2014, Gateshead from 10pm – An algorave celebrating great
Northern live coders Holger Ballweg, Hellocatfood, Shelly Knotts, Sick
Lincoln, Norah Lorway, Section_9, Yaxu + more. Organised by the
Audacious Art Experiment.
More info:

* 13th May 2014, London – Sonic Pattern and the Textility of Code, a
daytime symposium in collaboration with the Craft Council. More
details on our website next week.

We have much more in the pipeline. Please watch our website and social
media feeds for more information:

Or get in contact with network co-ordinators Thor Magnusson and Alex McLean.

May 5, 2014 at 2:34 am Leave a comment

How do we make programming languages more usable and learnable?

Amy Ko made a fascinating claim recently, “Programming languages are the least usable, but most powerful human-computer interfaces ever invented” which she explained in a blog post.  It’s a great argument, and I followed it up with a Blog@CACM post, “Programming languages are the most powerful, and least usable and learnable user interfaces.”

How would we make them better?  I suggest at the end of the Blog@CACM post that the answer is to follow the HCI dictum, “Know thy users, for they are not you.”

We make programming languages today driven by theory — we aim to provide access to Turing/Von Neumann machines with a notation that has various features, e.g., type safety, security, provability, and so on.  Usability is one of the goals, but typically, in a theoretical sense.  Quorum is the only programming language that I know of that tested usability as part of the design process.

But what if we took Amy Ko’s argument seriously?  What if we designed programming languages like we defined good user interfaces — working with specific users on their tasks?  Value would become more obvious.  It would be more easily adopted by a community.  The languages might not be anything that the existing software development community even likes — I’ve noted before that the LiveCoders seem to really like Lisp-like languages, and as we all know, Lisp is dead.

What would our design process be?  How much more usable and learnable could our programming languages become?  How much easier would computing education be if the languages were more usable and learnable?  I’d love it if programming language designers could put me out of a job.

April 1, 2014 at 9:43 am 25 comments

SIGCSE2014 Preview: Engaging Underrepresented Groups in High School Introductory Computing through Computational Remixing with EarSketch

EarSketch is an interesting environment that I got a demo of from Jason Freeman and Brian Magerko at the Dagstuhl live coding seminar. It’s Python programming that creates complex, layered music. The current version of EarSketch isn’t really live coding (e.g., there’s a “compilation” step from program into digital audio workstation), but I got to see a demo of their new Web-based version, which might be usable for live coding.

I got to see the preview talk and was blown away.  The paper is about use in a 10 week programming unit in a high school course, with significant under-represented minority and female involvement. The evaluation results are stunning.  The authenticity angle here is particularly interesting. In the preview talk, Jason talked about “authentic STEAM.” They have audio loops from real musicians, and involve hip-hop artists in the classroom.  Students talk about how they value making music that sounds professional, with tools that professional musicians use.

In this paper, we describe a pilot study of EarSketch, a computational remixing approach to introductory computer science, in a formal academic computing course at the high school level. EarSketch, an integrated curriculum, Python API, digital audio workstation (DAW), audio loop library, and social sharing site, seeks to broaden participation in computing, particularly by traditionally underrepresented groups, through a thickly authentic learning environment that has personal and industry relevance in both computational and artistic domains. The pilot results show statistically significant gains in computing attitudes across multiple constructs, with particularly strong results for female and minority participants.

via SIGCSE2014 – OpenConf Peer Review & Conference Management System.

March 1, 2014 at 1:27 am 1 comment

Special issue of Journal on Live Coding in Music Education

Live Coding in Music Education – A call for papers
We are excited to announce a call for papers for a special issue of The Journal of Music, Technology & Education, with a deadline of 28 February 2014, for likely publication in July/August 2014. The issue will be guest edited by Professor Andrew R. Brown (Griffith University, Australia), and will address epistemological themes and pedagogical practices related to the use of live coding in formal and informal music education settings.
Live coding involves programming a computer as an explicit onstage performance. In such circumstances, the computer system is the musical instrument, and the practice is often improvisational. Live coding techniques can also be used as a musical prototyping (composition and production) tool with immediate feedback. Live coding can be solo or collaborative and can involve networked performances with other live coders, instrumentalists or vocalists.
Live coding music involves the notation of sonic and musical processes in code. These can describe sound synthesis, rhythmic and harmonic organization, themes and gestures, and control of musical form and structure. Live coding also extends out beyond pure music and sound to the general digital arts, including audiovisual systems, robotics and more.
While live coding can be a virtuosic practice, it is increasingly being used in educational and community arts contexts. In these settings, its focus on immediacy, generative creativity, computational and design thinking, and collaboration is being exploited to engage people with music in a non-traditional way. The inherently digital nature of live coding practices presents opportunities for networked collaborations and online learning.
This special edition of JMTE will showcase research on live coding activities in educational and community arts settings, to inspire music educators about the possibilities of live coding, and to interrogate the epistemological and pedagogical opportunities and challenges.
Topic suggestions include, but are not limited to:
– Live coding ensembles
– Bridging art-science boundaries through live coding
– Exploring music concepts as algorithmic processes
– The blending of composition and performance in live coding practices
– Combining instrument design and use
– Coding as music notational literacy
– Informal learning with live coding
– Integrating live coding practices into formal music educational structures
– Online learning with live coding
Contributors should follow all JMTE author guidelines
(URL), paying particular attention to the word count of between 5,000 and 8,000 words for an article. In addition, please read carefully the information concerning the submission of images.
Submissions should be received by 28 February 2014.  All submissions and queries should be addressed to

November 18, 2013 at 1:50 am Leave a comment

Live coding as an exploration of our relationship with technology


Above: A piano duet between Andrew Brown (on a physical piano) and Andrew Sorensen (live coding Extempore, generating piano tones)

A fascinating take on live code from artist Holly Herndon: Why is generating music with technology a different relationship for humans than with traditional instruments?  Isn’t our relationship with our computing technology (consider people with their smart phones) even more intimate?

“I’m trying to […] get at the crux of the intimacy we have with our technology, because so many people really cast it in this light of the laptop being cold. I really think it’s a fallacy the way people cast technology in this light and then cast acoustic or even analogue instruments in this warm, human light, because I don’t understand what would be more human between a block of wood and something that was also created by humans, for humans. […] People see code as this crazy, otherworldly thing, but it’s just people writing text. It’s a very idiosyncratic, human language.”

via Holly Herndon interview on Dummy | TOPLAP.

Like Holly, the field of educational technology sees no distinction between technologies made of silicon and those made of other materials.  The educational technology that had the fastest adoption in the United States was the blackboard — that was a technology, and it made a significant impact on education.  Adopters talked about how blackboards democratized education, because all students in the class could see the same content at once.  Computing is yet another technology that could have a positive impact on education.  Traditional musical instruments are technologies built by humans for other humans, and so are live coding systems.  Live coding systems are earlier in their evolution than traditional instruments, but the goals are the same.

My colleague Jason Freeman has an interesting take that merges technology with traditional instruments, in an even more collaborative setting.  His live coders generate traditional music notation from which musicians with traditional instruments then play.

Jason’s take is that a human musician will always generate a more expressive performance than a machine will.  His take on live coding combines programmers listening to one another and improvising with musicians interpreting and expressing the live coders’ intentions.  There we have a rich exploration of our relationship with technology, both computing and analogue.

October 4, 2013 at 1:39 am 2 comments

Live coding as a path to music education — and maybe computing, too

We have talked here before about the use of computing to teach physics and the use of Logo to teach a wide range of topics. Live coding raises another fascinating possibility: Using coding to teach music.

There’s a wonderful video by Chris Ford introducing a range of music theory ideas through the use of Clojure and Sam Aaron’s Overtone library. (The video is not embeddable, so you’ll have to click the link to see it.) I highly recommend it. It uses Clojure notation to move from sine waves, through creating different instruments, through scales, to canon forms. I’ve used Lisp and Scheme, but I don’t know Clojure, and I still learned a lot from this.

I looked up the Georgia Performance Standards for Music. Some of the standards include a large collection of music ideas, like this:

Describe similarities and differences in the terminology of the subject matter between music and other subject areas including: color, movement, expression, style, symmetry, form, interpretation, texture, harmony, patterns and sequence, repetition, texts and lyrics, meter, wave and sound production, timbre, frequency of pitch, volume, acoustics, physiology and anatomy, technology, history, and culture, etc.

Several of these ideas appear in Chris Ford’s 40 minute video. Many other musical ideas could be introduced through code. (We’re probably talking about music programming, rather than live coding — exploring all of these under the pressure of real-time performance is probably more than we need or want.) Could these ideas be made more constructionist through code (i.e., letting students build music and play with these ideas) than through learning an instrument well enough to explore the ideas? Learning an instrument is clearly valuable (and is part of these standards), but perhaps more could be learned and explored through code.
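To make this concrete, one item on that standards list, frequency of pitch, fits in a few lines of code. This is my own sketch in plain JavaScript rather than the Clojure/Overtone of Chris Ford’s video, and the function names are mine:

```javascript
// In equal temperament, each semitone multiplies frequency by 2^(1/12).
// A4 is conventionally tuned to 440 Hz.
function frequency(semitonesFromA4) {
  return 440 * Math.pow(2, semitonesFromA4 / 12);
}

// A major scale is a pattern of whole and half steps:
// 2, 2, 1, 2, 2, 2, 1 semitones, i.e., offsets 0,2,4,5,7,9,11,12.
function majorScale(rootSemitone) {
  const steps = [0, 2, 4, 5, 7, 9, 11, 12];
  return steps.map(s => frequency(rootSemitone + s));
}

const aMajor = majorScale(0);            // A major, starting at A4
console.log(aMajor[0]);                  // 440
console.log(aMajor[aMajor.length - 1]);  // 880: an octave doubles frequency
```

A student playing with this code meets wave frequency, pitch, and scale structure all at once, which is exactly the kind of cross-subject connection the standards describe.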

The general form of this idea is “STEAM” — STEM + Art.  There is a growing community suggesting that we need to teach students about art and design, as well as STEM.  Here, I am asking the question: Is Art an avenue for productively introducing STEM ideas?

The even more general form of this idea dates back to Seymour Papert’s ideas about computing across the curriculum.  Seymour believed that computing was a powerful literacy to use in learning science and mathematics — and explicitly, music, too.  At a more practical level, one of the questions raised at Dagstuhl was this:  We’re not having great success getting computing into STEM.  Is Art more amenable to accepting computing as a medium?  Is music and art the way to get computing taught in schools?  The argument I’m making here is, we can use computing to achieve math education goals.  Maybe computing education goals, too.

October 3, 2013 at 7:15 am 22 comments

A playful live coding practice to explore syntax and semantics

Three of the nights of the Dagstuhl Seminar on Live Coding included performances. Several of these combined live coders with analogue instruments (guitar, piano, cello, and even kazoo), which was terrific to watch.


I found one of their practices fascinating, with real potential for the computer science classroom. Alex McLean introduced it as “Mexican Roulette,” because they first did it at a live coding event in Mexico City. Live coders take turns (the roulette part) at a shared computer connected to speakers at the front of the room.

  • The first live coder types in some line of code generating music, and gets it running.  From now on, there is music playing.
  • The next live coder changes the code any way she or he wants. The music keeps playing, and changes when the second coder then evaluates the code, thus changing the process.  Now the third coder comes up, and so on.
  • If a live coder is unsure, just a few constants might be changed.
  • If a live coder makes a syntax error, the music continues (because the evaluation that would change the process fails), and the next coder can fix it.  You can see the error messages on the right in the picture above, which I took mid-way through the roulette.
  • If a live coder makes a mistake (at one point, someone created quite a squeal), the next live coder can fix it. Or embellish it.

What I found most promising about this practice is that (to use Briana Morrison’s phrase for this) nothing is ever wrong here. The game is to keep the music going and change it in interesting ways. Responsibility for the music is shared. Mistakes are part of the process, and are really up for definition. Is that a mistake, or an exploration of a new direction? This activity encourages playing with syntax and semantics, in a collaborative setting.  It relies on the separation of program and process — the music is going, while the next live coder is figuring out the change.  This could be used for learning any language that can be used for live coding.
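The program/process separation that makes the roulette work can be sketched in a few lines. This is a rough illustration in plain JavaScript, not any of the actual live coding systems used at Dagstuhl: the running process keeps calling whatever function was last evaluated successfully, so a failed evaluation leaves the “music” untouched.

```javascript
// The "music" currently playing: the process keeps calling this function.
let current = () => "tick";

// Evaluating new source either swaps in new behavior, or fails and
// leaves the running process exactly as it was.
function tryEvaluate(source) {
  try {
    const next = new Function("return (" + source + ")")();
    current = next;           // the edit takes effect
    return true;
  } catch (e) {
    return false;             // syntax error: keep the old behavior
  }
}

tryEvaluate("() => 'boom'");  // a valid edit changes the process
console.log(current());       // "boom"
tryEvaluate("() => 'oops");   // unterminated string: a syntax error
console.log(current());       // still "boom" -- the music keeps playing
```

The real systems evaluate code against live audio processes rather than a toy callback, but the principle is the same: the notation can be broken while the process it last produced carries on.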

October 2, 2013 at 1:56 am Leave a comment
