Live coding as a path to music education — and maybe computing, too

October 3, 2013 at 7:15 am

We have talked here before about the use of computing to teach physics and the use of Logo to teach a wide range of topics. Live coding raises another fascinating possibility: Using coding to teach music.

There’s a wonderful video by Chris Ford introducing a range of music theory ideas through the use of Clojure and Sam Aaron’s Overtone library. (The video is not embeddable, so you’ll have to click the link to see it.) I highly recommend it. It uses Clojure notation to move from sine waves, through creating different instruments, through scales, to canon forms. I’ve used Lisp and Scheme, but I don’t know Clojure, and I still learned a lot from this.
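
To give a concrete flavor of that progression, here is a rough sketch of the first two steps (a sine wave and an equal-tempered scale), written in Python for illustration rather than the Clojure/Overtone of the video; the helper names are invented, not Chris Ford's.

```python
# A rough sketch (Python for illustration; the video itself uses Clojure and
# Overtone): from a raw sine wave to an equal-tempered scale.
import math

SAMPLE_RATE = 44100

def sine_wave(freq_hz, duration_s, amplitude=0.5):
    """Raw samples of a sine wave: the starting point of the video."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def midi_to_hz(note):
    """Equal temperament: each semitone multiplies frequency by 2**(1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# A major scale as semitone offsets from a root (middle C is MIDI note 60).
MAJOR = [0, 2, 4, 5, 7, 9, 11, 12]
scale_hz = [midi_to_hz(60 + step) for step in MAJOR]
print([round(f, 1) for f in scale_hz])
# [261.6, 293.7, 329.6, 349.2, 392.0, 440.0, 493.9, 523.3]
```

Transposition, scales, and canons then become ordinary transformations over lists of note numbers, which is the constructionist "build music and play with the ideas" move described below.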

I looked up the Georgia Performance Standards for Music. Some of the standards include a large collection of music ideas, like this:

Describe similarities and differences in the terminology of the subject matter between music and other subject areas including: color, movement, expression, style, symmetry, form, interpretation, texture, harmony, patterns and sequence, repetition, texts and lyrics, meter, wave and sound production, timbre, frequency of pitch, volume, acoustics, physiology and anatomy, technology, history, and culture, etc.

Several of these ideas appear in Chris Ford’s 40-minute video. Many other musical ideas could be introduced through code. (We’re probably talking about music programming, rather than live coding — exploring all of these under the pressure of real-time performance is probably more than we need or want.) Could these ideas be made more constructionist through code (i.e., letting students build music and play with these ideas) than through learning an instrument well enough to explore the ideas? Learning an instrument is clearly valuable (and is part of these standards), but perhaps more could be learned and explored through code.

The general form of this idea is “STEAM” — STEM + Art.  There is a growing community suggesting that we need to teach students about art and design, as well as STEM.  Here, I am asking the question: Is Art an avenue for productively introducing STEM ideas?

The even more general form of this idea dates back to Seymour Papert’s ideas about computing across the curriculum. Seymour believed that computing was a powerful literacy to use in learning science and mathematics — and explicitly, music, too. At a more practical level, one of the questions raised at Dagstuhl was this: We’re not having great success getting computing into STEM. Is Art more amenable to accepting computing as a medium? Are music and art the way to get computing taught in schools? The argument I’m making here is, we can use computing to achieve music education goals. Maybe computing education goals, too.

20 Comments

  • 1. alanone1  |  October 3, 2013 at 8:53 am

    Well, he was right about lying! Perhaps more than he realizes. And there are too many dozens of them to comment on individually.

    But we can also look at the approach (which is so typical of so many computer people over the years, and so wrong).

    Among the many ways to understand something we have:

    (a) take something you don’t understand very well, and try to make something like it with a computer

    vs

    (b) take a look at real phenomena and try to get a sense of what kinds of relationships are contributing to the phenomena.

    An example of (a): suppose you think that musicians “play the score”. Then you can make a machine that takes the score as a kind of program and you can execute that. [but to a musician it doesn't sound like a musician playing]

    Or suppose you think that a composition of harmonics is what determines musical timbre. Then you can make a machine that can compose sine waves to make composites that sound a bit like a musical instrument. [but to a musician it doesn't sound like an instrument]

    Or we could take the (b) path. We take a recording of (say) Glenn Gould playing that Goldberg Variation and write a program to compare it with the score. We take Peter Sykes playing the same variation and compare him to Gould and the score.

    We could try to see what makes up the actual sensation of timbre by looking at real pianos, harpsichords, violins, trumpets, etc.

    In the (b) path we find that the score is not “played” in any useful sense of the term. Instead we find it is very much like a script for a play: i.e. composed of some actual acts but mostly an outline to be filled in by the players.

    In other words, music and acting are a collaborative re-creation by the author and the player (both of whom might be the same).
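
    As a toy illustration of what the score-versus-performance comparison mentioned above might look like in code (every number below is a made-up placeholder, not a measurement of Gould or Sykes):

```python
# A toy comparison of performed onsets/durations against nominal score values.
# Every number here is a made-up placeholder, not data from any recording.
score = [            # (beat position, nominal duration in beats)
    (0.0, 0.25), (0.25, 0.25), (0.5, 0.25), (0.75, 0.25),
]
performance = [      # (onset in seconds, sounded duration in seconds)
    (0.00, 0.21), (0.27, 0.19), (0.52, 0.26), (0.74, 0.30),
]
seconds_per_beat = 1.0   # pretend the nominal tempo is 60 beats per minute

for (beat, nominal), (onset, sounded) in zip(score, performance):
    timing = onset - beat * seconds_per_beat
    length = sounded / (nominal * seconds_per_beat)
    print(f"beat {beat:4.2f}: onset {timing:+.2f}s, duration x{length:.2f}")
```

    The point of the (b) path is that, with real recordings, those deviations turn out to be neither zero nor random noise; they are where much of the phrasing lives.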

    Writing a program that can do some of the things to a score that human players do is a very interesting learning experience, and has been done several times (and occasionally well).

    Similarly, when we look at timbre we find that what we’ve been told in high school physics is not just “less than what is going on”, but what is going on is quite different from the high school physics view. For example what is really going on is that our brain/mind mechanisms are working in a “change of complexity” phase space (and one that is very very similar to how we listen to phonemes).

    In both cases we are not getting a 1st order theory that can be refined to higher order pictures, but we are getting a weak theory whose point of view is so misleading as to make it difficult to get a glimmer of what is going on.

    Bach is almost always used for such examples because a lot of the expression in Bach is in changes of structure, and these are somewhat (but not completely) immune to non-musical timbres and durations of notes (hint: Even in Bach, you don’t play 4 16th notes in a row at all similarly, not even in duration …).

    I look at this as “not good” in many many ways, including the terrible UI. Have they even thought why it might be useful to have a pictorial view instead of what is a form of tablature using programs? Hint: it’s hard to put the harmonic relationships together if you can’t see them related to each other by the approximate time the notes might be played. Yikes!

    When he does show a tab picture he doesn’t show “registration information” (the equivalent of staff lines in keyboard tab are the “black notes”). Without these it is hard to see what the actual pitch relationships, and thus the harmonic relationships, are.

    It’s worth comparing this in the year 2013, with some of the original stuff done 40 years ago at Xerox PARC by two former professional musicians (myself and Chris Jeffers).

    In any case this seems one step lower than “reinventing the flat tire”; it’s more like inventing a triangular, or worse, wheel.

    • 2. Mark Guzdial  |  October 3, 2013 at 10:20 am

      Alan,

      I see more than one target for music education. For me, I don’t expect to ever be a musician. I love generating and playing music. I love singing in my church choir. I don’t have the time, motivation, or focus to learn music the way that you and Franklin likely know music. So what’s wrong with me exploring these musical ideas using a system like what Chris Ford was showing?

      Generalizing, not all students want to be musicians. All students will find value in knowing more about music. Do we know that learning music ideas like what Chris was showing inhibits students from really learning to be musicians? Does it decrease motivation or focus to learn “the real thing”?

      Cheers,
      Mark

      • 3. alanone1  |  October 4, 2013 at 4:35 am

        Hi Mark

        Anyone who is sensitive to music is already a musician, so you are. And you sing in a choir, so you touch primary aspects of music (the ones Franklin is rightly concerned with) each week.

        I’m not at all against live coding or using computers as instruments or exploring music by writing programs or in using analysis into “pieces and parts” to understand (as Roger is concerned) or notating using various forms (including programs).

        But (1) choice of point of view is both a powerful idea *and* one that makes it hard to see other perhaps more powerful points of view. And (2) Plato’s notion that “we should divide Nature at her joints, not breaking any limb in half as a bad carver might” is still a great principle in design, analysis, and understanding.

        I’ll just mention a few things here that most experienced musicians would try to help people understand and learn (I’ll do it in the same context of Western tonal music used in Chris’ presentation).

        First, music has many things in common with what we do with speech, so it’s worth making a few parallels. For example, just as communication in natural language is not about phonemes or even words, but about “utterances that evoke meanings”, so too with music: it is not primarily about notes but about “phrases that evoke”.

        Singing is a great start for this because the need to take breath, the presence of the lyrics, etc., all help singers to produce phrases rather than sequences of notes. Some instruments help the learner along these lines (such as the winds and the strings) while others are not so helpful (keyboards, guitars, etc.). The latter learners are not forced by the instrument to “breathe”, and part of learning how to play a keyboard is learning how to phrase rather than just playing endless notes.

        Another term often used in these kinds of arts is “line” — which is a bit larger than phrasing but an allied notion — that of providing a kind of continuity of expression without awkward breaks (purposeful meaningful breaks however, are the essence of phrasing).

        A nice encompassing term that can be used equally well in verbal and musical communication is “prosody”. In my first comment I made a parallel with what an actor has to learn how to do with a part (whether written down or not).

        One way to look at this first idea is that — just as with Biology — the fact that we can take analysis down to atoms (phonemes in language or notes in music) still requires one to ask whether this level of structuring has all that much to do with the actual subject.

        I mentioned in my first comment some (b) things that could be done to present to an audience a “more to the point” point of view. I think I would also show examples of prosody and line to help the audience see and hear what it is like to play a role in a play with and without prosodic elements, to play a “musical role” with and without them. I think I’d also do this with dance — which also has its version of prosody, and does use the term “line” for what this means in dance.

        I would also want to show that there is not “just one true way” to express what a playwright or composer is suggesting using the hints that are the backbone of a performance. To this, I would also show lots of ways to misinterpret the backbone (including just saying the words or just playing the notes).

        Now, to Roger’s complaint about my complaint. He writes: “there is a lot to write down for each and every note that a musician performs”. I’m not suggesting that at all, even when computers are used. I think that is missing the points I’m trying to make. A much more interesting program for dealing with music that has been notated in something like a score would be one that tries to do something about the central ideas of musical communication, which try to find and perform the expressive elements that turn a string of notes into a musical experience.

        For example it would try to find phrasings, continuities and breaks, “shapings” (e.g. increases and decreases in level of sound and in tempo that help group musical ideas together and help the listener “hear meaning instead of notes”, etc).

        This doesn’t get everything, but it is a better ballpark than the “notey” approach.
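
        As a toy sketch of what “shaping” might mean in code (an illustration only, with invented names and fixed curves): give each phrase an arch of loudness, a slight rubato toward its middle, and a breath before the next phrase, instead of emitting the notes flat.

```python
# A toy "shaping" of a phrase (illustration only): an arch of loudness, a
# slight rubato toward the middle of the phrase, and a breath before the next.
import math

def shape_phrase(notes, base_velocity=70, rubato=0.06, breath=0.12):
    """notes: list of (pitch, duration_s). Returns (pitch, start_s, dur_s, velocity)."""
    shaped, t, n = [], 0.0, len(notes)
    for i, (pitch, dur) in enumerate(notes):
        pos = i / max(n - 1, 1)                    # 0.0 at the start of the phrase, 1.0 at its end
        arch = math.sin(math.pi * pos)             # rises to the middle, falls away at the end
        velocity = int(base_velocity + 30 * arch)  # crescendo then diminuendo
        stretch = 1.0 + rubato * arch              # ease the tempo slightly at the peak
        shaped.append((pitch, round(t, 3), round(dur * stretch, 3), velocity))
        t += dur * stretch
    return shaped, t + breath                      # leave a gap ("breath") before the next phrase

phrase = [(60, 0.5), (62, 0.5), (64, 0.5), (65, 0.5), (64, 1.0)]
for event in shape_phrase(phrase)[0]:
    print(event)
```

        The only point here is that the expression attaches to the phrase as a unit; anything serious would derive the shapes from analysis of real performances rather than from a fixed arch.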

        The score writing system “Sibelius” has a mode of playback where it will try to do just what I’m talking about, and while not perfect, it is infinitely better than just playing the notes back as midi players do. I would not be at all surprised to hear that something much much better exists today.

        My final comment here is to (i) point out that there is nothing esoteric about what I’m discussing, and (ii) to restate that one’s initial point of view can open doors or shut them. When presenting a field to a beginner one can (and should) make up a perfectly understandable version of what is important as their first encounter. Presenting something that misses really important parts is not doing anyone a favor.

        Cheers,

        Alan

        • 4. Mark Guzdial  |  October 4, 2013 at 10:28 am

          Hi Alan,

          Thanks for taking the time to help me understand your concerns. I think I’m getting closer now. Let me try to say it back to you.

          I don’t think that you disagree with this blog post’s basic point, that we can use music programming and live coding as a way into music education, and perhaps even computing education. Your concerns are more with Chris Ford’s presentation and the models that he’s describing. He starts out with sine waves and builds up into a bell sound, but as you’ve pointed out to me before, additive synthesis like this is a weak model. You can’t get very far into interesting and powerful music synthesis from additive synthesis. You’ve suggested to me before that I should think more about using FM synthesis from the start, and include envelopes — a critical part of real music-making. That’s a more powerful perspective on sound synthesis that can grow into more useful and interesting computer music. Your response to Roger suggests physical instrument simulation as a way of doing synthesis, which can connect to music education, computing education, and physics. You’re also concerned about the representation of the music itself, that you’d prefer to see something that’s more like phrases (each of which could have its own expression, in terms of tempo, volume, and envelopes) than lists of notes. Is that about right?
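
          For concreteness, here is a minimal sketch of what two-operator FM synthesis with an amplitude envelope looks like, written as plain Python to a WAV file. This is only an illustration of the technique named in the paragraph above; the parameters are arbitrary, and it is not code from the video or from anyone in this thread.

```python
# A minimal two-operator FM synthesis sketch with an amplitude envelope,
# written to a WAV file with the standard library. Illustration only.
import math, struct, wave

RATE = 44100

def fm_note(carrier_hz, ratio=2.0, index=3.0, dur=1.0, attack=0.01, decay=0.6):
    """y(t) = env(t) * sin(2*pi*fc*t + index*env(t)*sin(2*pi*fm*t))"""
    mod_hz = carrier_hz * ratio
    out = []
    for i in range(int(RATE * dur)):
        t = i / RATE
        # A simple attack/decay envelope. It also scales the modulation index,
        # so the tone is brightest at the attack and mellows as it decays.
        env = min(t / attack, 1.0) * math.exp(-t / decay)
        phase = 2 * math.pi * carrier_hz * t + index * env * math.sin(2 * math.pi * mod_hz * t)
        out.append(0.6 * env * math.sin(phase))
    return out

def write_wav(path, samples):
    with wave.open(path, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

write_wav("fm_note.wav", fm_note(220.0))
```

          Tying the envelope to the modulation index is what makes the attack spectrally different from the decay, which connects to the point made elsewhere in this thread that our ears key on change.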

          Since this week’s posts are all on live coding, I should point out that the live coders I saw at Dagstuhl all use much more sophisticated representations of music and more powerful sound synthesis techniques than what Chris Ford was showing. The most common underlying technology in this community is SuperCollider, which allows for combination of unit generators and exploration of many kinds of sound synthesis. Others are using sampled sounds in interesting ways, like the EarSketch project here at Georgia Tech and Alex McLean’s Texture system. Still others are using human performers as the sound generation part of their system, as in Jason Freeman’s work described in today’s post. So while Chris Ford’s demonstration was an example of a kind of live coding, I believe that your concerns about that demonstration apply to relatively little of the live coding community.

          Cheers,
          Mark

          • 5. alanone1  |  October 4, 2013 at 10:51 am

            Hi Mark

            I have a few basic concerns, but using computers to help understand music and make music are not among them.

            I am concerned about Will Rogers’ observation that “It ain’t what you know that hurts you, it’s what you think you know that ain’t so”.

            A softer version of this has to do with scientific observations of humans: People have wondered why — if children are as curious as they seem to be — “real science” (etc.) didn’t get invented until a few hundred years ago, even though there is evidence our species has been on the planet for almost 200,000 years.

            The reason seems to be a combination of factors

            1– human genetics. We are “somewhat wired” to learn language and much less wired to learn reading and writing, to the point that they are relatively recent and relatively rare inventions. We just “can’t see” lots of things. We are less wired to look at the world the way science has to.

            2– ditto. What we call “reality” and “normal” are stories and beliefs. McLuhan: “Until I believe it, I can’t see it”

            3– studies have shown that most children most of the time say “why?” in order to prolong a conversation with an adult. We don’t seem to be as curious about many things as we might seem. We spend a lot of our efforts reengaging with our social systems.

            4– other studies have shown that most children (and adults) are quite satisfied with a story as an answer to a real question (cf 2).

            One thing I get out of this is that 1st order theories made up for the purpose of shortening and simplifying early encounters with ideas can act as stories and fill up and remove a “possible curiosity slot” from a learner’s mind.

            I like the idea of making up introductions to ideas that can be learned and understood by beginners, but which don’t sacrifice the meat of the ideas (I got this very strongly from Jerry Bruner, Seymour and Marvin).

            Many kinds of technical people have been fascinated by music — some of them (like Don Knuth) became fluent musicians — others have blindly substituted too simple mechanisms that obscure.

            For example the Midi standard was (and is) a kludge to have a very keyboard oriented file format for general music. It was designed by non-players (especially non-keyboard players — I know them) so it left out much of the expression that can be done on a keyboard, and most of the expression that more expressive instruments can do.

            As with HTML and media, this created a terrible default standard that set things back by decades, and has had to be gradually crawled out of via more kludges (similar to the more and more epicycles that were added trying to save the geocentric theory). This is a real and related analogy.

            In particular, I wasn’t arguing for FM synthesis but for a characterization of musical tone and timbre in approximations to actual psychoacoustics (which emphasize how the nervous system responds to changes, etc.).

            I wasn’t arguing for a data structure for phrases, but for a view of music that actually centers on where musical meaning resides and that includes analogies to other parallel performance arts which share general features of prosody and line.

            I just don’t think it is good for a subject to be presented in weak ways that miss what the subject is about. It just reminds me all too much of math and science in the elementary grades.

            Cheers,

            Alan

        • 6. yaxu  |  October 4, 2013 at 11:28 am

          Hi Alan,

          This is beautifully put and very illuminating!

          I’ve been thinking about prosody and shape in the context of live coding for a while. In my experience computer music often focuses either entirely on continuous gesture (prosody), or entirely on discrete pattern (phonemes).

          I wonder if you would be interested in this related paper I wrote on Tidal, which combines discrete and continuous pattern in one higher-order type, supporting what I find to be a highly expressive language for music:
          http://2013.xcoax.org/pdf/xcoax2013-mclean.pdf

          Best wishes

          alex

    • 7. Roger Allen  |  October 3, 2013 at 10:46 am

      I’ll start by agreeing with you that the article and Chris’ presentation only scratch the surface of what is necessary to create music of the quality of our best artists. There is only so much a presenter can squeeze into an introductory talk.

      Of course people have thought about better GUIs, but as you point out, performance is far more complex than a few notes on a page, and pictures become overwhelming and unhelpful. Only by writing it out can you document the mystery, and there is a lot to write down for each and every note that a musician performs.

      I emphatically disagree with your post’s tone and tenor. When we seek to understand something, we necessarily must break it down into constituent pieces and parts. Along the way, we will only have a partial explanation that does not explain everything. Yes, this process began decades ago and it has not been completely fruitful. With exchange of ideas, new tools and new understanding we will be successful, eventually.

      If you think music theory is so wrongheaded, then I hope you will point out that better first-order theory that we can build upon, rather than simply dismissing solid work by intelligent, creative people.

      • 8. alanone1  |  October 4, 2013 at 6:43 am

        Hi Roger

        I replied to several of your concerns in my second comment to Mark.

        Here is another example that illustrates my concerns about poor points of view — especially for 1st order theories — leading away from good theories.

        We are told in elementary physics that the length of an organ pipe determines its pitch. But here is a picture of three organ pipes that all produce the very same pitch but with vastly different lengths: http://www.pykett.org.uk/3Pipes-Small.JPG

        What is a better first-order theory? One is to actually have a sense of what is going on through a simple, less mysterious, but better-fitted model.

        Organ pipes are whistles, and whistles have a “mouth”, an airstream, and an “upper lip”. When the air stream starts it is aimed so that it hits the upper lip in a slightly non-symmetric way so that the stream goes into the pipe. The air in the pipe has mass and resists being pushed by the stream (it is very like a spring). This “sponginess” pushes back on the stream and drives it out of the pipe producing a lower pressure in the pipe. The sponginess of this will be overcome by the outside air pressure and this will “pull” the stream back inside.

        We can see that we’ve got something like a spring in that there is a restoring force (like -kx). The “k” comes partly from the mass of the air and partly from the length of the pipe. We can see from the pictorial example that mass generally dominates, so when you make a pipe shorter, you need to make it wider if you want the same pitch.

        Where does the sound we hear come from? The fact that some pipes are closed at the top but still work should be a clue that the sound is actually the coupling to the room of the back-and-forth air sheet at the lip. What is going on inside the pipe does not affect the sound.

        Why have a pipe at all? Because of the combination of mass and the length *together* produce the opposing force that stabilizes the pitch.
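
        As a crude caricature of that spring picture (a toy calculation with arbitrary, unitless numbers, not an acoustics model), one can integrate x'' = -(k/m)x and read the pitch off the oscillation; only the ratio of stiffness to mass matters, so different combinations can land on the same note.

```python
# A crude caricature of the spring analogy (a toy, not an acoustics model):
# a mass m on a spring of stiffness k, integrated with x'' = -(k/m) * x.
# The numbers are arbitrary and unitless; only the ratio k/m sets the pitch.
def oscillation_hz(mass, stiffness, dt=1e-5):
    x, v, t, crossings = 1.0, 0.0, 0.0, []
    while len(crossings) < 3:
        v += -(stiffness / mass) * x * dt     # acceleration from the restoring force
        x_new = x + v * dt
        if x > 0.0 >= x_new:                  # record downward zero crossings
            crossings.append(t)
        x, t = x_new, t + dt
    return 1.0 / (crossings[2] - crossings[1])

print(round(oscillation_hz(mass=1.0, stiffness=7.0e6)))   # ~421
print(round(oscillation_hz(mass=2.0, stiffness=1.4e7)))   # ~421 again: same k/m, same pitch
```

        Different combinations of "air mass" and "stiffness" can land on the same frequency, which is the shape of the argument above about why a shorter pipe has to be made wider to keep the same pitch.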

        I looked on YouTube for a graphical computer simulation of what is actually going on but could only find ones about standing and traveling waves in tubes (which I think is where some of the confusion comes from).

        This is now way too long for a blog comment. But similar poor 1st order theories about “sine wave compositions” make it difficult to home in on what our brains are actually doing when we listen to sounds that have pitch components.

        The hint is that if you get a trumpet player to play a sustained note and look at the energies at different frequencies you do indeed see that most of the energies are at multiples of the fundamental frequency. (Sounds good so far!) But if we try to synthesize the note from the data we’ve just gathered and play it, it won’t sound very much like a trumpet. This is because most of our recognition and classifications of musical tones lies in the “attack” part, which is highly non-linear and varying in pitch, amplitude, spectral composition, and has a lot of just plain noise mixed in.

        The attack part of most melodic notes is relatively long, and players will do things to longer tones (such as vibrato and even timbral changes) to keep them “interesting” to our nervous systems for more sustained notes (our nervous system is set up to detect *differences*, and any theory — especially 1st order ones — has to emphasize this).
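
        For concreteness, this is roughly all that "resynthesize from the sustained spectrum" amounts to; the harmonic amplitudes below are illustrative placeholders, not real trumpet measurements:

```python
# What resynthesis from the sustained spectrum amounts to: a static sum of
# harmonics. The amplitudes are illustrative placeholders, not trumpet data.
import math

RATE = 44100

def additive_tone(f0_hz, harmonic_amps, dur=1.0):
    n = int(RATE * dur)
    return [sum(a * math.sin(2 * math.pi * f0_hz * (k + 1) * i / RATE)
                for k, a in enumerate(harmonic_amps))
            for i in range(n)]

# Effectively identical in every millisecond of its life: no attack transient,
# no noise, no vibrato, nothing for a difference-detecting ear to latch onto.
tone = additive_tone(233.0, [0.5, 0.35, 0.25, 0.15, 0.10, 0.05])
print(len(tone), round(max(tone), 3))
```

        Nothing in such a sum carries the attack, the noise, or the moment-to-moment variation that the ear actually uses to recognize the instrument.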

        The history of where poor 1st order theories come from — especially in education — in this (and many other areas) is interesting but I won’t touch on it here.

        Best wishes,

        Alan

  • 9. Franklin Chen  |  October 3, 2013 at 9:34 am

    I admit there is something somewhat cool about the live coding systems mentioned, but I have no involvement with it because for me, that’s not what music is really about. For me, music is about going out there creating/recreating an experience with others, as a performer/participant. It’s not about a formal system, not about scores, not about mechanically generating “notes”. The formal stuff is a shadow, a lossy reminder or outline, of the real thing.

    • 10. yaxu  |  October 3, 2013 at 5:01 pm

      I’m a live coder, and I improvise music with other people (including choreographers, instrumentalists and other live coders), and for other people.

      I don’t see why you think writing things down demeans the creative process somehow. Do you feel the same about oral storytelling vs written novels? They are both worthwhile practices, and livecoding is somewhere between the two.

      • 11. alanone1  |  October 4, 2013 at 6:49 am

        I read Franklin’s comment as not being against writing things down, but trying to emphasize that music is not primarily about whether things are written down or not, and is not primarily about “the writing” when you are writing.

      • 12. Franklin Chen  |  October 4, 2013 at 11:15 am

        I’m not against writing down. In fact, lately I have been frustrated, as I have begun playing ukulele, that I have no good way of writing down more of what I learn about performance practices when observing and learning from others. I was also frustrated when I was involved in dance, where notation for choreography is all but impossible. I had to “be there”. So I “wish” that more could be written down, but there is a limit, and it’s part of the essence of the actual experience that it cannot be fully captured by writing down. I am not against photography, but I would rather be hiking up the mountain and seeing the sun there than clicking through photos that someone had already taken and organized so that you can do a “choose your own virtual hike” through computer manipulations.

        Also, I hasten to add that I am not against live coding at all. Unlike Alan, I don’t believe that it is necessarily the case that something less than the “best” kind of possible experience is an enemy. In the case of music, because of its astonishingly low priority in American education and everyday life (as a creative activity, rather than as a commodity), I think enabling people to do any kind of active exploration at all, even in the context of a very limited model of what music could be, is a good thing. Pedagogically, Alan notes that starting with a limited experience can close the door on fuller experiences, and this may be true, but unfortunately, this argument can be made for any subject. Most people do not have the resources to optimize their understanding and enjoyment of much at all, whether it is music, programming, mathematics, cooking, running, auto repair. So I worry about whether the best may sometimes be the enemy of the good.

        • 13. yaxu  |  October 4, 2013 at 11:50 am

          Thanks Franklin. Let me have another go at convincing you about the potential richness of live coding.

          Let’s compare a computer language with a ukulele, as you have implied that live coding can never create as good an experience as playing one.

          With a ukulele, you touch a string and it vibrates, and your skin vibrates with it and against it, adding subtle gesture in a genuine interaction. When we touch a key on a computer keyboard, it generates a discrete signal with no phrasing at all, and the full richness of the interaction is not captured in the sound, at all.

          If we stop there, then yes it is clear that live coding is abstract from the sounds themselves.

          But! This is a trade-off. This decoupling from physical interaction allows you to work on the compositional level, knitting patterns with time into a musical tapestry. Words evoke very real experience, and live coding allows that experience to be moulded directly. What you lose from direct engagement on the sensory level, you gain as direct engagement on the perceptual and conceptual level. The feeling you get when locked into creative flow, improvising directly with your temporal perception, with dozens of people shouting and dancing in front of you is great, and a very musical, embodied experience.

          Also, what I was trying to get at before is that words are tremendously evocative. When you read a good novel, which is a one-dimensional string of discrete marks, a continuous, complex, sensory world is evoked. I’m convinced that the same is true of code, not least because computational geometry exists.

          Still, I disagree with some of my friends when I say that the music is not *in* the code. The code does not represent my musical thinking, it is something I think through, just one step in the cycle of musical feedback. The code cannot be read and understood, it can only be manipulated and understood through the musical changes that result via sound pressure waves and our perceptual faculties.

          Furthermore, there is nothing stopping a live coder from making music with a ukelele player. I think this point alone is enough to counter your argument.

          When I see a great band playing instruments together, tightly entrained, I still get a bit sad that live coding doesn’t give me that. But it does give me something different, it’s still music, and I don’t see a need to say one is necessarily better than the other… Although clearly live coding has a lot more development to do.

          • 14. Franklin Chen  |  October 4, 2013 at 12:17 pm

            Alex, I’m actually interested in trying out live coding for myself, and have enjoyed sampling your videos of Tidal demos (I still have not gotten Tidal to compile on Mac OS, unfortunately).

            You’re absolutely right that there are tradeoffs and at some level, that has always been true, even before the digital era. Fretted string instruments, keyboard instruments, equal temperament chop up the possible space of music, but enable different kinds of abstractions and compositionality. The keyboard especially is a radical compromise.

          • 15. Franklin Chen  |  October 4, 2013 at 5:19 pm

            Alex, I’ll contact you later about my Tidal build problem. Meanwhile, gotta scramble to my final rehearsal tonight before my first group uke gig tomorrow!

        • 16. alanone1  |  October 4, 2013 at 1:12 pm

          Hi Franklin

          “Better and Perfect are the enemies of What Is Needed”

          Cheers,

          Alan

          • 17. yaxu  |  October 4, 2013 at 2:13 pm

            It is possible to get Tidal working under Mac OS; I can help you through it if you like, Franklin. I’ve been tweeting snippets of Tidal at http://twitter.com/tidalcycles/

            I’d say the digital era is as old as the analogue one. As you imply, frets and keyboards are discretisations, particularly on a harpsichord.

  • 18. Gary S. Stager  |  October 3, 2013 at 1:04 pm

    I’m of several minds here. I learned to program computers at around the same time that I learned music theory, composition and improvisation. Music creation and computer programming made me feel smart and the processes inside my noggin were indistinguishable. That was until I reached the limits of my talent or willingness to devote eight hours per day to practice. Being an artist MAY be more a lifelong learning curve than learning to program. For the truly talented musician, music theory and its resulting artifact, composition, become less like programming and less methodical or pattern-related than they did to me. But I digress…

    A certain level of opacity may make one a better musician or programmer. Not every form of expression requires reducing a system to its most elemental levels. One need not understand sine waves to be Charlie Parker any more than a great programmer needs to work in machine language. Higher-level “objects to think with” are invaluable.

    That said, the Georgia standards (surprise) turn music into yet another vocabulary lesson devoid of experience or powerful ideas.

    I find that most of the people promoting STEAM use the A, a subject they don’t really give a rat’s ass about and have neglected for decades, to soften a bunch of disciplines they fear or don’t understand.

    A good reason that kids should learn to paint, compose, play music, act AND program computers is that each form of expression requires deep commitment, careful thought, reflection, sensitivity to external and often unanticipated stimuli AND builds upon a young person’s remarkable capacity for intensity. They also allow a kid to spend intense periods of time inside of their own head.

  • 19. gasstationwithoutpumps  |  October 5, 2013 at 11:42 am

    One correction for Alan on “For example the Midi standard was (and is) a kludge to have a very keyboard oriented file format for general music. ” The MIDI standard started as a real-time communication interface between keyboards and synthesizers, not as a file format. The interface was designed to accommodate what the keyboards of the time could implement cheaply and was driven by instrument makers, not computer scientists or musicians (except to the extent that the synthesizer engineers were also musicians). It was an adequate standard at the time, when cheap analog electronics did not provide a great deal of precision, and players relied on auditory feedback to do pitch bending and loudness control.
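
    To make the expressive bottleneck concrete, here is the MIDI 1.0 wire format in miniature (a sketch in Python, not any particular library's API): a note-on is three bytes, and nearly all per-note expression has to fit into one 7-bit velocity value, while pitch bend and most other controllers apply to a whole channel at once.

```python
# The MIDI 1.0 wire format in miniature: a note-on is three bytes, with one
# 7-bit velocity carrying nearly all of the per-note expression.
def note_on(channel, note, velocity):
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0x40])   # 0x40: default release velocity

def pitch_bend(channel, value):
    """value: 0..16383, with 8192 meaning no bend; applies to the whole channel."""
    return bytes([0xE0 | (channel & 0x0F), value & 0x7F, (value >> 7) & 0x7F])

print(note_on(0, 60, 100).hex())   # '903c64': middle C, fairly loud, channel 0
```

    Continuous per-note control of the kind a bowed string or a voice exercises constantly has no comfortable place in this message set, which is essentially the complaint earlier in this thread.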

    Recording of MIDI keyboard events, like a player piano roll, is really specific to a single instrument (and analog synthesizers may have had more variance within a model than player pianos did).

    It was a mistake to use the MIDI “standard” for general music recording, since it leaves far too much up to the interpretation of the individual instrument.

    • 20. alanone1  |  October 5, 2013 at 12:25 pm

      Yes, I shouldn’t have used the word file. I actually knew some of the people who devised midi (the ones at Roland) and begged them to learn more about what classical keyboards could do, and especially what classical instruments could do.

      In fact they were not musicians and were not particularly curious about how regular keyboards worked and were played.

      They were nice people, but didn’t understand that in so many cases a weak first attempt will often wind up as a de facto standard rather than something that will be superseded by a better standard.

      This is actually a good example of what I’m generally decrying.

      Cheers,

      Alan
