Posts tagged ‘music education’

Making Music with Computers: Book is now out!

I got a chance to review and write a foreword for:

Making Music with Computers: Creative Programming in Python (Chapman & Hall/CRC Textbooks in Computing), by Bill Manaris and Andrew R. Brown.

I’m really pleased to see that it’s finally out!  Recommended.

February 25, 2014 at 1:51 am 1 comment

Special issue of Journal on Live Coding in Music Education

Live Coding in Music Education – A call for papers
We are excited to announce a call for papers for a special issue of The Journal of Music, Technology & Education, with a deadline of 28 February 2014, for likely publication in July/August 2014. The issue will be guest edited by Professor Andrew R. Brown (Griffith University, Australia), and will address epistemological themes and pedagogical practices related to the use of live coding in formal and informal music education settings.
Live coding involves programming a computer as an explicit onstage performance. In such circumstances, the computer system is the musical instrument, and the practice is often improvisational. Live coding techniques can also be used as a musical prototyping (composition and production) tool with immediate feedback. Live coding can be solo or collaborative and can involve networked performances with other live coders, instrumentalists or vocalists.
Live coding music involves the notation of sonic and musical processes in code. These can describe sound synthesis, rhythmic and harmonic organization, themes and gestures, and control of musical form and structure. Live coding also extends out beyond pure music and sound to the general digital arts, including audiovisual systems, robotics and more.
While live coding can be a virtuosic practice, it is increasingly being used in educational and community arts contexts. In these settings, its focus on immediacy, generative creativity, computational and design thinking, and collaboration is being exploited to engage people with music in a non-traditional way. The inherently digital nature of live coding practices presents opportunities for networked collaborations and online learning.
This special edition of JMTE will showcase research on live coding activities in educational and community arts settings, to inspire music educators about the possibilities of live coding, and to interrogate the epistemological and pedagogical opportunities and challenges.
Topic suggestions include, but are not limited to:
– Live coding ensembles
– Bridging art-science boundaries through live coding
– Exploring music concepts as algorithmic processes
– The blending of composition and performance in live coding practices
– Combining instrument design and use
– Coding as music notational literacy
– Informal learning with live coding
– Integrating live coding practices into formal music educational structures
– Online learning with live coding
Contributors should follow all JMTE author guidelines
(http://tinyurl.com/jmte-info), paying particular attention to the word count of between 5,000 and 8,000 words for an article. In addition, please read carefully the information concerning the submission of images.
Submissions should be received by 28 February 2014.  All submissions and queries should be addressed to andrew.r.brown@griffith.edu.au

November 18, 2013 at 1:50 am Leave a comment

Designing a language for programming with musical collaborators in front of an audience

If you were going to build a programming language explicitly for musicians to use when programming live with collaborators and in front of an audience, what would you build into it?  What should  musicians have to learn about computer science in order to use this language? There’s a special issue of Computer Music Journal coming out, focused on these themes. What a fascinating set of design constraints, and how different from most programming languages!

We are excited to announce a call for papers for a special issue of
Computer Music Journal, with a deadline of 21st January 2013, for
publication in Spring of the following year. The issue will be guest
edited by Alex McLean, Julian Rohrhuber and Nick Collins, and will
address themes surrounding live coding practice.

Live coding focuses on a computer musician’s relationship with their
computer. It includes programming a computer as an explicit onstage
act, as a musical prototyping tool with immediate feedback, and also
as a method of collaborative programming. Live coding’s tension
between immediacy and indirectness brings about a mediating role for
computer language within musical interaction. At the same time, it
implies the rewriting of algorithms, as descriptions which concern the
future; live coding may well be the missing link between composition
and improvisation. The proliferation of interpreted and just-in-time
compiled languages for music and the increasing computer literacy of
artists has made such programming interactions a new hotbed of musical
practice and theory. Many musicians have begun to design their own
particular representational extensions to existing general-purpose
languages, or even to design their own live coding languages from
scratch. They have also brought fresh energy to visual programming
language design, and new insights to interactive computation, pushing
at the boundaries through practice-based research. Live coding also
extends out beyond pure music and sound to the general digital arts,
including audiovisual systems, linked by shared abstractions.

2014 happens to be the ten-year anniversary of the live coding
organisation TOPLAP (toplap.org). However, we do not wish to restrict
the remit of the issue to this, and we encourage submissions across a
sweep of emerging practices in computer music performance, creation,
and theory. Live coding research is more broadly about grounding
computation at the verge of human experience, so that work from
computer system design to exposition of live coding concert work is
equally eligible.

Topic suggestions include, but are not limited to:

– Programming as a new form of musical exploration
– Embodiment and linguistic abstraction
– Symbology in music interaction
– Uniting liveness and abstraction in live music
– Bricolage programming in music composition
– Human-Computer Interaction study of live coding
– The psychology of computer music programming
– Measuring live coding and metrics for live performance
– The live coding audience, or live coding without audience
– Visual programming environments for music
– Alternative models of computation in music
– Representing time in interactive programming
– Representing and manipulating history in live performance
– Freedoms, constraints and affordances in live coding environments

Authors should follow all CMJ author guidelines
(http://www.mitpressjournals.org/page/sub/comj), paying particular
attention to the maximum length of 25 double-spaced pages.

Submissions should be received by 21st January 2013.  All submissions
and queries should be addressed to Alex McLean
<alex.mclean@icsrim.org.uk>.

April 24, 2012 at 9:45 am Leave a comment

Learning about Learning (even CS), from Singing in the Choir

Earlier this year, I talked about Seymour Papert’s encouragement to challenge yourself as a learner, in order to gain insight into learning and teaching.  I used my first-time experiences working on a play as an example.

I was in my first choir for only a year when our first child was born.  I was 28 when I first started trying to figure out if I was a bass or tenor (and even learn what those terms meant).  Three children and 20 years later, our children can get themselves to and from church on their own. In September, I again joined our church choir.  I am pretty close to a complete novice–I have hardly even had to read a bass clef in the last two decades.

Singing in the choir has the most unwritten, folklore knowledge of any activity I’ve ever been involved with. We will be singing something, and I can tell that what we sang was not what was in the music.  “Oh, yeah. We do it differently,” someone will explain. Everyone just remembers so many pieces and how this choir sings them.  Sometimes we are given pieces like the one pictured above.  It’s just words with chords and some hand-written notes on the photocopy.  We sing in harmony for this (I sing bass).  As the choir director says when he hands out pieces like this, “You all know this one.”  And on average, he’s right.  My wife has been singing in the choir for 13 years now, and that’s about average.  People measure their time in this choir in decades.  The harmony for songs like this was worked out years and years ago, and just about everyone does know it.  There are few new people each year — “new” includes even those 3 years in. (Puts the “long” four years of undergraduate in new perspective for me.) The choir does help the newcomers. One of the most senior bass singers gives me hand gestures to help me figure out when the next phrase is going up or down in pitch. But the gap between “novice+help” and “average” is still enormous.

Lave and Wenger in their book “Situated Learning” talk about learning situations like these.  The choir is a community of practice.  There are people who are central to the practice, and there are novices like me.  There is a learning path that leads novices into the center.

The choir is an unusual community of practice in that physical positioning in the choir is the opposite of position with respect to the community.  The newbies (like me) are put in the center of our section.  That helps us to hear where we need to be when singing.  The more experienced people are on the outside.  The most experienced person in the choir, who may also be the eldest, tends to sit on the sidelines, rather than stand with the rest of the choir.  He nails every note, with perfect pitch and timing.

Being a novice in the choir is enormous cognitive overload.  As we sing each piece, I am reading the music (which I’m not too good at) to figure out what I’m singing and where we’re going. I am watching the conductor to make sure that my timing is right and matches everyone else. I am listening intently to the others in my section to check my pitch (especially important for when there is no music!).  Most choir members have sung these pieces for ages and have memorized their phrasing, so they really just watch the director to get synchronized.

When the director introduces a new piece of music with, “Now this one has some tricky parts,” I groan to myself.  It’s “tricky” for the average choir members — those who read the music and who have lots of experience.  It’s “tricky” for those with literacy and fluency.  For me, still struggling with the notation, it takes me awhile to get each piece, to understand how our harmony will blend with the other parts.

I think often about my students learning Java while I am in choir.  In my class, I introduce “tricky” ideas like walking a tree or network, both iteratively and recursively, and they are still struggling with type declarations and public static void main.  I noticed last year that many of my students’ questions were answered by me just helping them use the right language to ask their question correctly. How hard it must be for them to listen to me in lecture, read the programs we’re studying, and still try to get the “tricky” big picture of operations over dynamic data structures–when they still struggle with what the words mean in the programs.
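The “tricky” idea itself is compact once the notation stops being the obstacle. Here is a sketch of walking a tree both recursively and iteratively — in Python rather than the Java of the course, and with a toy tuple-based tree of my own invention for illustration:

```python
# A tiny binary tree as nested tuples: (value, left, right); None is empty.
tree = (1, (2, None, None), (3, (4, None, None), None))

def walk_recursive(node, visit):
    """Preorder walk: the call stack remembers where we have been."""
    if node is None:
        return
    value, left, right = node
    visit(value)
    walk_recursive(left, visit)
    walk_recursive(right, visit)

def walk_iterative(root, visit):
    """Same preorder walk, with an explicit stack instead of recursion."""
    stack = [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        value, left, right = node
        visit(value)
        stack.append(right)  # push right first so left is visited first
        stack.append(left)

pre_recursive, pre_iterative = [], []
walk_recursive(tree, pre_recursive.append)
walk_iterative(tree, pre_iterative.append)
# both visit the nodes in the same preorder
```

The substance — that an explicit stack can replace the call stack — is a few lines; it is the surrounding type declarations and boilerplate that bury it for a novice.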

Unlike working on the play, singing in the choir doesn’t take an enormous time investment — we rehearse for two hours one night, and an hour before mass.  I’m having a lot of fun, and hope to stick with it long enough to move out of the newbie class.  What’s motivating me to stick with it is enjoyment of the music and of becoming part of the community.  There’s another good lesson for computer science classes looking to improve retention.  Retention is about enjoying the content and enjoying the community you’re joining.


December 20, 2011 at 8:45 am 6 comments

Programming audio visually

Alex McLean is building a really cool new programming environment, described in a movie demo at Text update and source « Alex McLean.  He’s building a programming environment for audio programming, like CMusic, CSound, or SuperCollider.  In Alex’s system, you type the name of the oscillator or filter or generator, and typing the name generates the object.  You then draw lines to connect the pieces.

I want to draw two connections from the theme of this blog to Alex’s work.

  1. Occasionally, I point to geeky-fun work from here, because it is worthwhile for us to think about interesting and challenging ideas about computing and programming (like Alex’s unusual mix of textual and graphical programming) as exemplars to show and provoke students.
  2. Computer music is this strange stepchild of computer science that is almost nonexistent in most curricula, for reasons I don’t quite understand.  Making music with computers is really an old idea, and it’s super easy to do.  The tools and languages around computer music have become more and more esoteric, which does make it harder. I still have never been able to write a working CSound program without essentially copy-pasting examples. I just can’t quite wrap my head around it.  But the basic ideas are easy — I’ve played with sine wave generators in both Python and Squeak.  Yet, so few of us teach it or use it for examples.  Our computer audio class is always in danger of simply disappearing, because we can’t find anyone to teach it.  Why should something so easy and fun to do get ignored in computing education?
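The claim that the basic ideas are easy holds up: a pure tone is just samples of sin(2πft). A minimal sine-wave generator in plain Python might look like this (the sample rate and amplitude here are illustrative choices, not anything from a particular tool):

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=44100, amplitude=0.5):
    """Return a list of float samples for a pure sine tone."""
    n_samples = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n_samples)]

# 10 milliseconds of concert A (440 Hz)
samples = sine_wave(440.0, 0.01)
```

From there, writing the samples out as a sound file or mixing two tones together is a small step — which is the point: the core idea fits in a dozen lines, even if production tools like CSound do not.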

February 8, 2011 at 9:28 am 5 comments

Using the iPad for something new

This article felt very different to me from the NYTimes piece about replacing textbooks with iPads.  These folks are inventing a new kind of musical instrument with the iPad (taking advantage of its touch interface), then teaching music creation with it.  I do recognize that this is apples-to-oranges in terms of curriculum, e.g., maybe the music education approach being taken is the same one that might have been taken with a violin, recorder, or flute?  From the description, it feels like using technology for a new kind of learning opportunity, which is better than labs-of-iPads.

“Play with your ear, not just your eyes,” he said. “It’s not just about playing, it’s about listening to the song you’re making and playing with everyone else.” Then, to the already advanced music students, he raised his hand in a fist for the universal “cut off” sign that ended the song.

A crowd of Apple customers had already gathered at the spectacle by that point and a few broke into applause.

“It’s not meant to replace instruments, but augment,” Wang said. “We don’t try to just imitate instruments, but explore new music-making experiences.” Wang, 33, has already developed other popular music apps for the iPhone and iPad, including Magic Flute (where the user blows into the microphone and taps on the screen to play notes) and Glee Karaoke that he developed with the cooperation of Fox Digital Media.

via Mountain View students learn how to turn iPad into ‘Magic Fiddle’ – San Jose Mercury News.

January 14, 2011 at 3:26 pm Leave a comment

Compose Your Own — Music and Software

Really exciting piece by Jason Freeman, Georgia Tech music professor, in the New York Times yesterday.  I think what he says goes just as well for software — so many of us use it, so few of us express ourselves with it.

These days, almost all of us consume music but few of us create it. According to a recent National Endowment for the Arts survey, only 12.6 percent of American adults play a musical instrument even once per year. The survey does not report how many of us compose music, but I suspect that percentage is even smaller.

It saddens me that so few of us make music. I believe that all of us are musically creative and have something interesting to say. I also wish that everyone could share in this experience that I find so fulfilling.

via Compose Your Own – Opinionator Blog – NYTimes.com.

April 23, 2010 at 10:15 am Leave a comment

Guitar Hero as a Form of Scaffolding

My daughter turned 12 on Tuesday, and unfortunately, she was ill.  Dad hung out with her, and played whatever video games she wanted.  One of those she picked was Guitar Hero, so I finally got time to play it repeatedly.  Y’know — it was kind of fun!

Back in December, when I first got Guitar Hero, I wrote a blog post where I agreed with Alan that Guitar Hero is not nearly as good as learning a real musical instrument.  At that time, I wrote:

Guitar Hero might still be fun.  But it’s just fun.  I might learn to do well with it.  But it would be learning that I don’t particularly value, that makes me better.

Now I’m thinking that I might want to eat those words.  I found Guitar Hero hard.  I own a guitar and have taken guitar lessons for two semesters.  (Even putting it in terms of “semesters” suggests how long ago it was.)  Some of my challenges in learning to play a guitar included doing two different things with my hands, and switching chords and strumming to keep the rhythm.  I noticed that that’s exactly what I was having a hard time doing with Guitar Hero.  I also noticed the guitar parts of rock songs — songs that I had heard a million times before but never had noticed all the guitar parts previously. I noticed because I missed my cues, and so those guitar parts were missing.  While I have known Foghat and Pat Benatar for literally decades, Guitar Hero had me listening in a different way.

It occurred to me that Guitar Hero could be a form of scaffolding, a reduction in cognitive load that allows one to focus on one set of skills before dealing with all the skills at once.  Cognitive scaffolding is much like physical scaffolding, “a temporary support system used until the task is complete and the building stands without support.”  Now, Guitar Hero would only be successful as a form of scaffolding if it actually leads to the full task and doesn’t supplant it.  In education terms, if Guitar Hero could fade and if it doesn’t lead to negative transfer, e.g., “I’m great at Guitar Hero, but a real guitar is completely different.”

I did some hunting for studies that have explored the use of Guitar Hero to scaffold real music education.  I could not find any educational psychology or music education studies that have explored Guitar Hero as a form of scaffolding or as a tutor to reduce cognitive load.  I did find papers in music technology that hold up Guitar Hero as a model for future educational music technology! My favorite of these is a paper by Percival, Wang, and Tzanetakis that provides an overview of how multimedia technologies are being used to assist in music education.  They point out additional lessons that students are learning with tools like Guitar Hero that I hadn’t noticed.  For example, the physical effort of playing an instrument is more significant than non-players realize, and Guitar Hero (and similar tools) build up the right muscles in the right ways (or so they theorize — no direct studies of Guitar Hero are cited).  The paper also argues that getting students to do something daily has a huge impact on music learning and performance, even if it’s a tutorial activity.

Now here’s the critical question: Does Guitar Hero lead to real music playing, or is it a stopping point?  Nobody is arguing that playing Guitar Hero is making music, that I can see.  Does it work as scaffolding?

I don’t know, but I’m now wondering: Does it matter?  If Guitar Hero stops some people from becoming musicians, then it is a problem.  If some people, who might have pushed themselves to become musicians, decide that Guitar Hero is hard enough, then Guitar Hero is doing a disservice.  But if that’s not true, and people who never would become musicians have a better appreciation for the music and a better understanding of the athleticism of musicians because of Guitar Hero, then Guitar Hero is providing a benefit.

These are computing education questions.  You have all heard faculty who insist on using Eclipse in their introductory classes, because that’s what real software engineers use.  We have recently read in comments on this blog that students should use “standard tools” and “learn science the way scientists understand it.”  We also know from educational psychology that engaging introductory students in the same activity as experts only works for the best students.  The bottom half of the students get frustrated and fail.

We need Guitar Hero for computer science.  We need more activities that are not what the experts do, that are fun and get students to practice more often, that are scaffolding, and that reduce cognitive load.  We have some, like Scratch and eToys.  We need more. Insisting on the experts’ tools for all students leads to the 30-50% failure rates that we’re seeing today.  We have to be doing more for the rest of the students.

October 29, 2009 at 9:52 am 13 comments

