Posts tagged ‘live coding’

Launching Livecoding Network

Interesting announcement from Thor Magnusson and Alex McLean — more energy going into livecoding.  Check out the doctoral consortium around livecoding, too.

AHRC Live Coding Research Network
http://www.livecodenetwork.org

We are happy to announce the launch of the Live Coding Research
Network (LCRN), hosting a diverse series of events and activities over
the next two years, funded by the UK Arts and Humanities Research
Council (AHRC). In addition the TOPLAP organisation will run a
year-long programme of performance events around the country,
supported by the Sound and Music national agency for new music.

If you are unfamiliar with the practice of live coding, for now we
refer you to the website of our sister organisation TOPLAP:
http://toplap.org/about/

Following a successful launch symposium last month, we have three more
symposia, an international conference as well as a range of associated
events already planned.

UPCOMING SYMPOSIA

4th-6th July 2014, University of Sussex, Brighton – “Live coding and the body”

Our second symposium will be preceded by an “algorave” night of
performances at The Loft on the 4th July, with the symposium proper
running on the 5th and 6th of July. This symposium will follow after
the NIME conference in London (http://www.nime2014.org/), which will
itself include a good number of live coding performances and papers.

Please see our website for more information:
http://www.livecodenetwork.org/2014/04/12/symposium-on-live-coding-and-the-body-and-algorave/

25th-28th September 2014, Birmingham – “Live coding in collaboration
and network music”

Our third symposium will run from the 25th-26th September 2014, with
the first day focussed on doctoral research. It will lead into the
well established Network Music Festival
(http://networkmusicfestival.org/), running over the weekend, which
will itself showcase network-based live coding music amongst its
programme. Watch our website for details.

UPCOMING ASSOCIATED EVENTS

* 26th April 2014, Gateshead from 10pm – An algorave celebrating great
Northern live coders Holger Ballweg, Hellocatfood, Shelly Knotts, Sick
Lincoln, Norah Lorway, Section_9, Yaxu + more. Organised by the
Audacious Art Experiment.
More info: https://www.facebook.com/events/291980540962097/291989147627903/

* 13th May 2014, London – Sonic Pattern and the Textility of Code, a
daytime symposium in collaboration with the Craft Council. More
details on our website next week.

We have much more in the pipeline, so please watch our website and social
media feeds for more information:

http://www.livecodenetwork.org
http://twitter.com/livecodenet/
http://facebook.com/livecodenet/

Or get in contact with network co-ordinators Thor Magnusson
<T.Magnusson@sussex.ac.uk> and Alex McLean <a.mclean@leeds.ac.uk>.

May 5, 2014 at 2:34 am

How do we make programming languages more usable and learnable?

Andy Ko made a fascinating claim recently, “Programming languages are the least usable, but most powerful human-computer interfaces ever invented” which he explained in a blog post.  It’s a great argument, and I followed it up with a Blog@CACM post, “Programming languages are the most powerful, and least usable and learnable user interfaces.”

How would we make them better?  I suggest at the end of the Blog@CACM post that the answer is to follow the HCI dictum, “Know thy users, for they are not you.”

We make programming languages today driven by theory — we aim to provide access to Turing/Von Neumann machines with a notation that has various features, e.g., type safety, security, provability, and so on.  Usability is one of the goals, but typically only in a theoretical sense.  Quorum is the only programming language I know of that has tested usability as part of its design process.

But what if we took Andy Ko’s argument seriously?  What if we designed programming languages the way we design good user interfaces — working with specific users on their tasks?  The value of a language would become more obvious, and it would be more easily adopted by a community.  The languages might not be anything that the existing software development community even likes — I’ve noted before that the LiveCoders seem to really like Lisp-like languages, and as we all know, Lisp is dead.

What would our design process be?  How much more usable and learnable could our programming languages become?  How much easier would computing education be if the languages were more usable and learnable?  I’d love it if programming language designers could put me out of a job.

April 1, 2014 at 9:43 am

SIGCSE2014 Preview: Engaging Underrepresented Groups in High School Introductory Computing through Computational Remixing with EarSketch

EarSketch is an interesting environment that I got to demo for Jason Freeman and Brian Magerko at the Dagstuhl Livecoding conference. It’s Python programming that creates complex, layered music. The current version of EarSketch isn’t really livecoding (e.g., there’s a “compilation” step from program into digital audio workstation), but I got to see a demo of their new Web-based version, which might be usable for live coding.
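The “complex, layered music” idea is easy to see in miniature: each track is a sequence of samples, and layering tracks is just element-wise summing. Here is a toy sketch in plain Python — this is not the EarSketch API; the loop patterns and helper names are invented for illustration:

```python
# A toy "layered music" mixer: each track is a list of amplitude samples,
# and the mix is the element-wise sum, padded to the longest track.
from itertools import zip_longest

def mix(*tracks):
    """Sum several sample lists into one, treating missing samples as silence."""
    return [sum(samples) for samples in zip_longest(*tracks, fillvalue=0.0)]

def loop(pattern, times):
    """Repeat a short sample pattern, like looping a clip on a DAW track."""
    return pattern * times

drums = loop([1.0, 0.0, 0.5, 0.0], times=4)   # 16 samples
bass  = loop([0.3, 0.3], times=6)             # 12 samples

layered = mix(drums, bass)
print(len(layered), layered[0])
```

Real EarSketch scripts work at a higher level, placing named audio clips onto DAW tracks, but the layering principle is the same.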

I got to see the preview talk and was blown away.  The paper is about use in a 10 week programming unit in a high school course, with significant under-represented minority and female involvement. The evaluation results are stunning.  The authenticity angle here is particularly interesting. In the preview talk, Jason talked about “authentic STEAM.” They have audio loops from real musicians, and involve hip-hop artists in the classroom.  Students talk about how they value making music that sounds professional, with tools that professional musicians use.

In this paper, we describe a pilot study of EarSketch, a computational remixing approach to introductory computer science, in a formal academic computing course at the high school level. EarSketch, an integrated curriculum, Python API, digital audio workstation (DAW), audio loop library, and social sharing site, seeks to broaden participation in computing, particularly by traditionally underrepresented groups, through a thickly authentic learning environment that has personal and industry relevance in both computational and artistic domains. The pilot results show statistically significant gains in computing attitudes across multiple constructs, with particularly strong results for female and minority participants.

via SIGCSE2014 – OpenConf Peer Review & Conference Management System.

March 1, 2014 at 1:27 am

Special issue of Journal on Live Coding in Music Education

Live Coding in Music Education – A call for papers
We are excited to announce a call for papers for a special issue of The Journal of Music, Technology & Education, with a deadline of 28 February 2014, for likely publication in July/August 2014. The issue will be guest edited by Professor Andrew R. Brown (Griffith University, Australia), and will address epistemological themes and pedagogical practices related to the use of live coding in formal and informal music education settings.
Live coding involves programming a computer as an explicit onstage performance. In such circumstances, the computer system is the musical instrument, and the practice is often improvisational. Live coding techniques can also be used as a musical prototyping (composition and production) tool with immediate feedback. Live coding can be solo or collaborative and can involve networked performances with other live coders, instrumentalists or vocalists.
Live coding music involves the notation of sonic and musical processes in code. These can describe sound synthesis, rhythmic and harmonic organization, themes and gestures, and control of musical form and structure. Live coding also extends out beyond pure music and sound to the general digital arts, including audiovisual systems, robotics and more.
While live coding can be a virtuosic practice, it is increasingly being used in educational and community arts contexts. In these settings, its focus on immediacy, generative creativity, computational and design thinking, and collaboration is being exploited to engage people with music in non-traditional ways. The inherently digital nature of live coding practices presents opportunities for networked collaborations and online learning.
This special edition of JMTE will showcase research on live coding activities in educational and community arts settings, to inspire music educators about the possibilities of live coding, and to interrogate the epistemological and pedagogical opportunities and challenges.
Topic suggestions include, but are not limited to:
– Live coding ensembles
– Bridging art-science boundaries through live coding
– Exploring music concepts as algorithmic processes
– The blending of composition and performance in live coding practices
– Combining instrument design and use
– Coding as music notational literacy
– Informal learning with live coding
– Integrating live coding practices into formal music educational structures
– Online learning with live coding
Contributors should follow all JMTE author guidelines
(http://tinyurl.com/jmte-info), paying particular attention to the word count of between 5,000 and 8,000 words for an article. In addition, please read carefully the information concerning the submission of images.
Submissions should be received by 28 February 2014.  All submissions and queries should be addressed to andrew.r.brown@griffith.edu.au

November 18, 2013 at 1:50 am

Live coding as an exploration of our relationship with technology


Above: A piano duet between Andrew Brown (on a physical piano) and Andrew Sorensen (live coding Extempore, generating piano tones)

A fascinating take on live coding from artist Holly Herndon: Why is generating music with technology a different relationship for humans than with traditional instruments?  Isn’t our relationship with our computing technology (consider people with their smartphones) even more intimate?

“I’m trying to […] get at the crux of the intimacy we have with our technology, because so many people really cast it in this light of the laptop being cold. I really think it’s a fallacy the way people cast technology in this light and then cast acoustic or even analogue instruments in this warm, human light, because I don’t understand what would be more human between a block of wood and something that was also created by humans, for humans. […] People see code as this crazy, otherworldly thing, but it’s just people writing text. It’s a very idiosyncratic, human language.”

via Holly Herndon interview on Dummy | TOPLAP.

Like Holly, the field of educational technology sees no distinction between technologies made of silicon and those made of other materials.  The educational technology that had the fastest adoption in the United States was the blackboard — that was a technology, and it made a significant impact on education.  Adopters talked about how blackboards democratized education, because all students in the class could see the same content at once.  Computing is yet another technology that could have a positive impact on education.  The traditional musical instruments are technologies built by humans for other humans, and so are the live coding systems.  Live coding systems are earlier in their evolution than traditional instruments, but the goals are the same.

My colleague Jason Freeman has an interesting take that merges technology with traditional instruments, in an even more collaborative setting.  His live coders generate traditional music notation from which musicians with traditional instruments then play.

Jason’s take is that a human musician will always generate a more expressive performance than will a machine.  His take on live coding combines the programmers listening to one another and improvising, and musicians interpreting and expressing the live coders’ intentions.  There we have a rich exploration of our relationship with technology, both computing and analogue.

October 4, 2013 at 1:39 am

Live coding as a path to music education — and maybe computing, too

We have talked here before about the use of computing to teach physics and the use of Logo to teach a wide range of topics. Live coding raises another fascinating possibility: Using coding to teach music.

There’s a wonderful video by Chris Ford introducing a range of music theory ideas through the use of Clojure and Sam Aaron’s Overtone library. (The video is not embeddable, so you’ll have to click the link to see it.) I highly recommend it. It uses Clojure notation to move from sine waves, through creating different instruments, through scales, to canon forms. I’ve used Lisp and Scheme, but I don’t know Clojure, and I still learned a lot from this.
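The path from sine waves to scales in the video rests on a small amount of math: in equal temperament, a note n semitones above A4 has frequency 440 · 2^(n/12). A sketch of both steps, in Python rather than Clojure (the helper names are my own, not from the video):

```python
import math

def freq(semitones_from_a440):
    """Equal-temperament frequency n semitones above (or below) A4 = 440 Hz."""
    return 440.0 * 2 ** (semitones_from_a440 / 12)

def sine_wave(frequency, seconds, sample_rate=44100):
    """Samples of a pure sine tone -- the raw material the video starts from."""
    n = int(seconds * sample_rate)
    return [math.sin(2 * math.pi * frequency * i / sample_rate) for i in range(n)]

# A major scale is just a pattern of semitone offsets applied to a root.
MAJOR = [0, 2, 4, 5, 7, 9, 11, 12]
a_major = [freq(n) for n in MAJOR]
print([round(f, 1) for f in a_major])
```

Everything beyond this — instruments, canons — is built by transforming these basic pieces, which is what makes the progression so teachable.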

I looked up the Georgia Performance Standards for Music. Some of the standards include a large collection of music ideas, like this:

Describe similarities and differences in the terminology of the subject matter between music and other subject areas including: color, movement, expression, style, symmetry, form, interpretation, texture, harmony, patterns and sequence, repetition, texts and lyrics, meter, wave and sound production, timbre, frequency of pitch, volume, acoustics, physiology and anatomy, technology, history, and culture, etc.

Several of these ideas appear in Chris Ford’s 40-minute video. Many other musical ideas could be introduced through code. (We’re probably talking about music programming, rather than live coding — exploring all of these under the pressure of real-time performance is probably more than we need or want.) Could these ideas be made more constructionist through code (i.e., letting students build music and play with these ideas) than through learning an instrument well enough to explore the ideas? Learning an instrument is clearly valuable (and is part of these standards), but perhaps more could be learned and explored through code.
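As one example of building music to play with these ideas: a canon is a melody combined with delayed, possibly transposed copies of itself, which reduces to a couple of list transformations a student could write and modify. A hypothetical sketch (the theme and interval are invented for illustration):

```python
def transpose(melody, semitones):
    """Shift every pitch in a melody (MIDI note numbers) by an interval."""
    return [note + semitones for note in melody]

def delay(melody, beats, rest=None):
    """Start the melody later by prepending rests."""
    return [rest] * beats + melody

# A canon: the original voice plus a copy entering two beats later, a fifth up.
theme  = [60, 62, 64, 65, 67]          # C D E F G as MIDI note numbers
voice2 = delay(transpose(theme, 7), 2)
print(voice2)
```

Form, repetition, and interval — several of the standards’ terms — become parameters a student can change and hear.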

The general form of this idea is “STEAM” — STEM + Art.  There is a growing community suggesting that we need to teach students about art and design, as well as STEM.  Here, I am asking the question: Is Art an avenue for productively introducing STEM ideas?

The even more general form of this idea dates back to Seymour Papert’s ideas about computing across the curriculum.  Seymour believed that computing was a powerful literacy to use in learning science and mathematics — and explicitly, music, too.  At a more practical level, one of the questions raised at Dagstuhl was this:  We’re not having great success getting computing into STEM.  Is Art more amenable to accepting computing as a medium?  Are music and art the way to get computing taught in schools?  The argument I’m making here is that we can use computing to achieve music education goals.  Maybe computing education goals, too.

October 3, 2013 at 7:15 am
