Posts tagged ‘music’

Georgia Tech’s EarSketch Uses Music To Teach Students Coding

Pleased to see that my colleagues are getting recognition for their cool work.

The White House recognized Georgia Tech last Monday for a coding program that uses music to teach code. It was recognized as part of its national initiatives for Computer Science Education Week. EarSketch is a free online tool that uses music to teach the programming languages Python and JavaScript. Georgia Tech professors plan to expand the program to more than 250 middle and high schools nationwide next year.

Source: Georgia Tech’s EarSketch Uses Music To Teach Students Coding | WABE 90.1 FM

February 10, 2017 at 7:00 am 2 comments

EarSketch Workshop at SIGCSE 2015

I’m an advisor on the EarSketch project, and it’s really cool. Recommended.

Next month, the EarSketch team will be offering a workshop at SIGCSE in Kansas City. This is a great opportunity to learn more about EarSketch, get hands-on experience with the curriculum and environment, and learn how to use EarSketch in your classroom. This year’s workshop will also offer advice on integrating EarSketch into Computer Science Principles courses, though the workshop is of relevance to anyone teaching an introductory computing course.

For more information about SIGCSE, visit http://sigcse2015.sigcse.org/index.html
To register for the workshop, please visit https://www.regonline.com/register/login.aspx?eventID=1618015&MethodId=0&EventsessionId=
Please contact Jason Freeman (jason.freeman@gatech.edu) with any questions.

SIGCSE 2015
Workshop #20: Computer Science Principles with EarSketch
Saturday, March 7th, 2015
3 pm – 6 pm

Jason Freeman, Georgia Institute of Technology
Brian Magerko, Georgia Institute of Technology
Regis Verdin, Georgia Institute of Technology

EarSketch (http://earsketch.gatech.edu) is an integrated curriculum, software toolset, audio loop library, and social sharing site that teaches computing principles through digital music composition and remixing. Attendees will learn to code in Python and/or JavaScript to place audio clips, create rhythms, and add and control effects within a multi-track digital audio workstation (DAW) environment while learning computing concepts such as variables, iteration, conditionals, strings, lists, functions, and recursion. Participants write code to make music, with a focus on popular genres such as hip hop. The agenda outlines the pedagogy of connecting musical expression to computation to broaden participation and engagement in computing; the underlying concept of thickly authentic STEAM that drives this approach; the alignment of the curriculum and learning environment with CS Principles; and basic musical concepts underlying EarSketch. The intended audience for this workshop is secondary and early post-secondary CS educators. The course is of particular relevance to CS Principles teachers but also applicable to any introductory programming or computing course. No prior musical knowledge or experience is expected and no prior programming experience with Python or JavaScript is required.
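To give a flavor of the kind of script attendees write, here is a rough, self-contained toy model. The function and clip names here are invented for illustration (EarSketch's actual API includes calls such as fitMedia() for placing clips); the point is that ordinary loops lay music out on a multi-track timeline:

```python
# Toy model in the spirit of an EarSketch script. These function and clip
# names are invented for illustration: the idea is that iteration, a core
# concept in the curriculum, lays out clips on a multi-track DAW timeline.

def place_clip(timeline, clip_name, track, start, end):
    """Occupy measures [start, end) on the given track with a named clip."""
    timeline.append((track, clip_name, start, end))

timeline = []

for measure in range(1, 9):            # a drum loop on every measure 1..8
    place_clip(timeline, "drum_loop", track=1, start=measure, end=measure + 1)
for measure in range(1, 9, 2):         # a two-measure synth phrase, every other measure
    place_clip(timeline, "synth_phrase", track=2, start=measure, end=measure + 2)

print(len(timeline))  # 12 placements: 8 drum loops + 4 synth phrases
```

In the real environment, the equivalent calls render the timeline into audio in the browser; the computing concepts exercised (loops, parameters, lists) are the same.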

February 14, 2015 at 8:18 am 1 comment

A stunningly beautiful connection between music and computing: Jason Freeman’s “Grow Old”

My eldest child graduated from college this last year, and I’m feeling my first half-century these days.  That may be why I was particularly struck by the themes in Jason Freeman’s beautiful new work.  I recommend visiting and reading the page, and you’ll get why this is so cool, even before you listen to the music.  It’s not live coding — it’s kind of the opposite.  It’s another great example of using music to motivate the learning of computing.

Why can’t my music grow old with me?

Why does a recording sound exactly the same every time I listen to it? That makes sense when recordings are frozen in time on wax cylinders or vinyl or compact discs. But most of the music I listen to these days comes from a cloud-based streaming music service, and those digital 1s and 0s are streamed pretty much the same way every time.

In this world of infinitely malleable, movable bits, why must the music always stay the same? From day to day and year to year, I change. I bring new perspectives and experiences to the music I hear. Can my music change with me?

This streaming EP is my attempt to answer these questions. Once a day, a simple computer program recreates each track. From one day to the next, the changes in each track are usually quite subtle, and you may not even notice a difference. But over longer periods of time — weeks, months, or years — the changes become more substantial. So when you return to this music after a hiatus, then it, like you, will have changed.

via Jason Freeman: Grow Old.
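The mechanism Freeman describes (daily regeneration, with subtle day-to-day change that accumulates over years) can be imagined as date-seeded determinism. This toy sketch is my own illustration of the idea, not his actual code; the parameter names and drift sizes are invented:

```python
# My own toy sketch of the "grow old" idea, not Freeman's code: derive each
# day's rendering of a track deterministically from the date, so the piece
# is identical all day but drifts as the days accumulate.
import random
from datetime import date

def todays_tempo(base_tempo=90.0, day=None):
    """Same result all day; small daily steps accumulate over months and years."""
    day = day or date.today()
    drift = 0.0
    rng = random.Random()
    # One small, reproducible random step per day since an (invented) epoch.
    # Later dates accumulate more steps, so a listener returning after a
    # hiatus hears a bigger change than one returning tomorrow.
    for ordinal in range(date(2014, 1, 1).toordinal(), day.toordinal() + 1):
        rng.seed(ordinal)
        drift += rng.uniform(-0.2, 0.2)   # subtle daily change
    return base_tempo + drift

# The same date always renders alike; consecutive days differ only slightly.
print(todays_tempo(day=date(2014, 6, 1)) == todays_tempo(day=date(2014, 6, 1)))  # True
```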

May 26, 2014 at 8:25 am 1 comment

Launching Livecoding Network

Interesting announcement from Thor Magnusson and Alex McLean — more energy going into livecoding.  Check out the doctoral consortium around livecoding, too.

AHRC Live Coding Research Network
http://www.livecodenetwork.org

We are happy to announce the launch of the Live Coding Research
Network (LCRN), hosting a diverse series of events and activities over
the next two years, funded by the UK Arts and Humanities Research
Council (AHRC). In addition the TOPLAP organisation will run a
year-long programme of performance events around the country,
supported by the Sound and Music national agency for new music.

If you are unfamiliar with the practice of live coding, for now we
refer you to the website of our sister organisation TOPLAP:
http://toplap.org/about/

Following a successful launch symposium last month, we have three more
symposia, an international conference as well as a range of associated
events already planned.

UPCOMING SYMPOSIA

4th-6th July 2014, University of Sussex, Brighton – “Live coding and the body”

Our second symposium will be preceded by an “algorave” night of
performances at The Loft on the 4th July, with the symposium proper
running on the 5th and 6th of July. This symposium will follow after
the NIME conference in London (http://www.nime2014.org/), which will
itself include a good number of live coding performances and papers.

Please see our website for more information:
http://www.livecodenetwork.org/2014/04/12/symposium-on-live-coding-and-the-body-and-algorave/

25th-28th September 2014, Birmingham – “Live coding in collaboration
and network music”

Our third symposium will run from the 25th-26th September 2014, with
the first day focussed on doctoral research. It will lead into the
well established Network Music Festival
(http://networkmusicfestival.org/), running over the weekend, which
will itself showcase network-based live coding music amongst its
programme. Watch our website for details.

UPCOMING ASSOCIATED EVENTS

* 26th April 2014, Gateshead from 10pm – An algorave celebrating great
Northern live coders Holger Ballweg, Hellocatfood, Shelly Knotts, Sick
Lincoln, Norah Lorway, Section_9, Yaxu + more. Organised by the
Audacious Art Experiment.
More info: https://www.facebook.com/events/291980540962097/291989147627903/

* 13th May 2014, London – Sonic Pattern and the Textility of Code, a
daytime symposium in collaboration with the Craft Council. More
details on our website next week.

We have much more in the pipeline, please watch our website and social
media feeds for more information:

http://www.livecodenetwork.org
http://twitter.com/livecodenet/
http://facebook.com/livecodenet/

Or get in contact with network co-ordinators Thor Magnusson
<T.Magnusson@sussex.ac.uk> and Alex McLean <a.mclean@leeds.ac.uk>.

May 5, 2014 at 2:34 am Leave a comment

SIGCSE2014 Preview: Engaging Underrepresented Groups in High School Introductory Computing through Computational Remixing with EarSketch

EarSketch is an interesting environment that I got to demo for Jason Freeman and Brian Magerko at the Dagstuhl Livecoding conference. It’s Python programming that creates complex, layered music. The current version of EarSketch isn’t really livecoding (e.g., there’s a “compilation” step from program into digital audio workstation), but I got to see a demo of their new Web-based version, which might be usable for live coding.

I got to see the preview talk and was blown away.  The paper is about use in a 10-week programming unit in a high school course, with significant under-represented minority and female involvement. The evaluation results are stunning.  The authenticity angle here is particularly interesting. In the preview talk, Jason talked about “authentic STEAM.” They have audio loops from real musicians, and involve hip-hop artists in the classroom.  Students talk about how they value making music that sounds professional, with tools that professional musicians use.

In this paper, we describe a pilot study of EarSketch, a computational remixing approach to introductory computer science, in a formal academic computing course at the high school level. EarSketch, an integrated curriculum, Python API, digital audio workstation (DAW), audio loop library, and social sharing site, seeks to broaden participation in computing, particularly by traditionally underrepresented groups, through a thickly authentic learning environment that has personal and industry relevance in both computational and artistic domains. The pilot results show statistically significant gains in computing attitudes across multiple constructs, with particularly strong results for female and minority participants.

via SIGCSE2014 – OpenConf Peer Review & Conference Management System.

March 1, 2014 at 1:27 am 1 comment

Live coding as an exploration of our relationship with technology

MVI_0151

Above: A piano duet between Andrew Brown (on a physical piano) and Andrew Sorensen (live coding Extempore, generating piano tones)

A fascinating take on live code from artist Holly Herndon: Why is generating music with technology a different relationship for humans than with traditional instruments?  Isn’t our relationship with our computing technology (consider people with their smart phones) even more intimate?

“I’m trying to […] get at the crux of the intimacy we have with our technology, because so many people really cast it in this light of the laptop being cold. I really think it’s a fallacy the way people cast technology in this light and then cast acoustic or even analogue instruments in this warm, human light, because I don’t understand what would be more human between a block of wood and something that was also created by humans, for humans. […] People see code as this crazy, otherworldly thing, but it’s just people writing text. It’s a very idiosyncratic, human language.”

via Holly Herndon interview on Dummy | TOPLAP.

Like Holly, the field of educational technology sees no distinction between technologies made of silicon and those made of other materials.  The educational technology that had the fastest adoption in the United States was the blackboard — that was a technology, and it made a significant impact on education.  Adopters talked about how blackboards democratized education, because all students in the class could see the same content at once.  Computing is yet another technology that could have a positive impact on education.  The traditional musical instruments are technologies built by humans for other humans, and so are the live coding systems.  Live coding systems are earlier in their evolution than traditional instruments, but the goals are the same.

My colleague Jason Freeman has an interesting take that merges technology with traditional instruments, in an even more collaborative setting.  His live coders generate traditional music notation from which musicians with traditional instruments then play.

Jason’s take is that a human musician will always generate a more expressive performance than will a machine.  His take on live coding combines the programmers listening to one another and improvising, and musicians interpreting and expressing the live coders’ intentions.  There we have a rich exploration of our relationship with technology, both computing and analogue.

October 4, 2013 at 1:39 am 2 comments

Live coding as a path to music education — and maybe computing, too

We have talked here before about the use of computing to teach physics and the use of Logo to teach a wide range of topics. Live coding raises another fascinating possibility: Using coding to teach music.

There’s a wonderful video by Chris Ford introducing a range of music theory ideas through the use of Clojure and Sam Aaron’s Overtone library. (The video is not embeddable, so you’ll have to click the link to see it.) I highly recommend it. It uses Clojure notation to move from sine waves, through creating different instruments, through scales, to canon forms. I’ve used Lisp and Scheme, but I don’t know Clojure, and I still learned a lot from this.

I looked up the Georgia Performance Standards for Music. Some of the standards include a large collection of music ideas, like this:

Describe similarities and differences in the terminology of the subject matter between music and other subject areas including: color, movement, expression, style, symmetry, form, interpretation, texture, harmony, patterns and sequence, repetition, texts and lyrics, meter, wave and sound production, timbre, frequency of pitch, volume, acoustics, physiology and anatomy, technology, history, and culture, etc.

Several of these ideas appear in Chris Ford’s 40-minute video. Many other musical ideas could be introduced through code. (We’re probably talking about music programming, rather than live coding — exploring all of these under the pressure of real-time performance is probably more than we need or want.) Could these ideas be made more constructionist through code (i.e., letting students build music and play with these ideas) than through learning an instrument well enough to explore the ideas? Learning an instrument is clearly valuable (and is part of these standards), but perhaps more could be learned and explored through code.
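As one small example of a musical idea carried in code, here is a sketch of my own (in Python rather than Ford's Clojure, but in the same spirit): it derives a major scale from its whole- and half-step pattern, assuming equal temperament, where each semitone multiplies the frequency by 2^(1/12):

```python
# A music-theory idea expressed as code (my sketch, in the spirit of Chris
# Ford's demo): build one octave of a major scale from its semitone pattern,
# assuming equal temperament (each semitone multiplies frequency by 2**(1/12)).

MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]   # whole/half-step pattern of a major scale

def major_scale(tonic_hz):
    """Frequencies (Hz) of one octave of a major scale starting at tonic_hz."""
    freqs = [tonic_hz]
    semitones = 0
    for step in MAJOR_STEPS:
        semitones += step
        freqs.append(tonic_hz * 2 ** (semitones / 12))
    return freqs

scale = major_scale(440.0)             # A major, starting from A440
print(round(scale[-1]))                # the octave above the tonic: 880
```

A student playing with this can change the step pattern to get minor or modal scales, which is exactly the kind of constructionist exploration the paragraph above imagines.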

The general form of this idea is “STEAM” — STEM + Art.  There is a growing community suggesting that we need to teach students about art and design, as well as STEM.  Here, I am asking the question: Is Art an avenue for productively introducing STEM ideas?

The even more general form of this idea dates back to Seymour Papert’s ideas about computing across the curriculum.  Seymour believed that computing was a powerful literacy to use in learning science and mathematics — and explicitly, music, too.  At a more practical level, one of the questions raised at Dagstuhl was this:  We’re not having great success getting computing into STEM.  Is Art more amenable to accepting computing as a medium?  Is music and art the way to get computing taught in schools?  The argument I’m making here is, we can use computing to achieve math education goals.  Maybe computing education goals, too.

October 3, 2013 at 7:15 am 22 comments

A playful live coding practice to explore syntax and semantics

Three of the nights of the Dagstuhl Seminar on Live Coding included performances. Several of these combined live coders with analogue instruments (guitar, piano, cello, and even kazoo), which was terrific to watch.

IMG_0143

I found one of their practices fascinating, with real potential for the computer science classroom. Alex McLean introduced it as “Mexican Roulette,” because they first did it at a live coding event in Mexico City. Live coders take turns (the roulette part) at a shared computer connected to speakers at the front of the room.

  • The first live coder types in some line of code generating music, and gets it running.  From now on, there is music playing.
  • The next live coder changes the code any way she or he wants. The music keeps playing, and changes when the second coder then evaluates the code, thus changing the process.  Now the third coder comes up, and so on.
  • If a live coder is unsure, just a few constants might be changed.
  • If a live coder makes a syntax error, the music continues (because the evaluation that would change the process fails), and the next coder can fix it.  You can see the error messages on the right in the picture above, which I took mid-way through the roulette.
  • If a live coder makes a mistake (at one point, someone created quite a squeal), the next live coder can fix it. Or embellish it.

What I found most promising about this practice is that (to use Briana Morrison’s phrase for this) nothing is ever wrong here. The game is to keep the music going and change it in interesting ways. Responsibility for the music is shared. Mistakes are part of the process, and are really up for definition. Is that a mistake, or an exploration of a new direction? This activity encourages playing with syntax and semantics, in a collaborative setting.  It relies on the separation of program and process — the music is going, while the next live coder is figuring out the change.  This could be used for learning any language that can be used for live coding.
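The separation of program and process that the game relies on can be sketched in a few lines. This is an illustration of the principle only, not any live coder's actual system: the performance keeps using whichever version of the code last compiled successfully, so an edit with a syntax error changes nothing:

```python
# Toy illustration of program/process separation (not any real live coding
# system): the "running" music keeps calling whichever version of pattern()
# last compiled successfully.

live = {}  # the live namespace the running process reads from
exec("def pattern(beat):\n    return 'kick' if beat % 2 == 0 else 'snare'", live)

def try_update(source):
    """Re-evaluate edited code; on a syntax error, the old version keeps playing."""
    try:
        code = compile(source, "<edit>", "exec")
    except SyntaxError:
        return False           # the evaluation fails; the process is untouched
    exec(code, live)
    return True

# A coder makes a syntax error: the update fails and the music continues.
assert not try_update("def pattern(beat:\n    return 'hat'")
print(live["pattern"](0))      # still the old sound: kick

# The next coder fixes it: the process changes without ever stopping.
assert try_update("def pattern(beat):\n    return 'hat'")
print(live["pattern"](0))      # hat
```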

October 2, 2013 at 1:56 am Leave a comment

Education Research Questions around Live Coding: Vygotskian and Non-Constructionist

I posted my trip report on the Dagstuhl Seminar on Live Coding on Blog@CACM (see the post here).  If you don’t want to read the post, check out this video as a fun introduction to live coding:

I have a lot more that I want to think through and share about the seminar. I’m doing a series of blog posts this week on live coding to give me an opportunity to think through some of these issues.


I saw four sets of computing education research questions in live coding. These are unusual research questions for me because they’re Vygotskian and non-Constructionist.

Live coding is about performance. It’s not an easy task. The live coder has to know their programming language (syntax and semantics) and music improvisation (e.g., listening to your collaborator and composing to match), and use all that knowledge in real-time. It’s not going to be a task that we start students with, but watching it performed may inspire students. Some of my research questions are about what it means to watch the performance of someone else, as opposed to being about students constructing. I’ve written before about the value of lectures, and I really do believe that students can learn from lectures. But not all students learn from lectures, and lectures work only if well-structured. Watching a live coding performance is different — it’s about changing the audience’s affect and framing with respect to coding. Can we change attitudes via a performance?

Vygotsky argued that all personal learning is first experienced at a social level. Whatever we learn must first be experienced as an interaction with others. In computing education, we think a lot about students’ first experience programming, but we don’t think much about how a student first sees code and first sees programming. How can you even consider studying a domain whose main activity you have never even seen? What is the role of code that generates music, with cultural and creative overtones? The social experience that introduces computing is important, and that may be something that live coding can offer.


Here are four sets of research questions that I see:

  1. Making visible. In a world with lots of technology, code and programmers are mostly invisible. What does it mean for an audience to see code to generate music and programming as a live coder? It’s interesting to think about this impact for students (does it help students to think seriously about computing as something to explore in school?) and for a more general audience (how does it change adults’ experience with technology?).
  2. Separating program and process. Live coding makes clear the difference between the program and the executing process. On the first day, we saw performances from Alex McLean and Thor Magnusson, and an amazing duet between Andrew Sorensen at Dagstuhl and Ben Swift at the VL/HCC conference in San Jose using their Extempore system. These performances highlighted the difference between program and process. The live coders start an execution, and music starts playing in a loop. Meanwhile, they change the program, then re-evaluate the function, which changes the process and the music produced. There is a gap between the executing process and the text of the program, which is not something that students often see.
  3. Code for music. How does seeing code for making music change students’ perception of what code is for? We mostly introduce programming as engineering practice in CS class, but live coding is pretty much the opposite of software engineering. Our biggest challenges in CS Ed are about getting students and teachers to even consider computer science. Could live coding get teachers to see computing as something beyond dry and engineering-ish?  Who is attracted by live coding?  Could it attract a different audience than we do now?  Could we design the activity of live coding to be more attractive and accessible?
  4. Collaboration. Live coding is a collaborative practice, but very different from pair programming. Everybody codes, and everybody pays attention to what the others are doing. How does the collaboration in live coding (e.g., writing music based on other live coders’ music) change the perception of the asocial nature of programming?

I’ll end with an image that Sam Aaron showed in his talk at Dagstuhl, a note that he got from a student in his Sonic Pi class: “Thank you for making dull lifeless computers interesting and almost reality.” That captures well the potential of live coding in computing education research — that activity is interesting and the music is real.


September 30, 2013 at 5:38 am 6 comments

Designing a language for programming with musical collaborators in front of an audience

If you were going to build a programming language explicitly for musicians to use when programming live with collaborators and in front of an audience, what would you build into it?  What should musicians have to learn about computer science in order to use this language? There’s a special issue of Computer Music Journal coming out, focused on these themes. What a fascinating set of design constraints, and how different from most programming languages!

We are excited to announce a call for papers for a special issue of
Computer Music Journal, with a deadline of 21st January 2013, for
publication in Spring of the following year. The issue will be guest
edited by Alex McLean, Julian Rohrhuber and Nick Collins, and will
address themes surrounding live coding practice.

Live coding focuses on a computer musician’s relationship with their
computer. It includes programming a computer as an explicit onstage
act, as a musical prototyping tool with immediate feedback, and also
as a method of collaborative programming. Live coding’s tension
between immediacy and indirectness brings about a mediating role for
computer language within musical interaction. At the same time, it
implies the rewriting of algorithms, as descriptions which concern the
future; live coding may well be the missing link between composition
and improvisation. The proliferation of interpreted and just-in-time
compiled languages for music and the increasing computer literacy of
artists has made such programming interactions a new hotbed of musical
practice and theory. Many musicians have begun to design their own
particular representational extensions to existing general-purpose
languages, or even to design their own live coding languages from
scratch. They have also brought fresh energy to visual programming
language design, and new insights to interactive computation, pushing
at the boundaries through practice-based research. Live coding also
extends out beyond pure music and sound to the general digital arts,
including audiovisual systems, linked by shared abstractions.

2014 happens to be the ten-year anniversary of the live coding
organisation TOPLAP (toplap.org). However, we do not wish to restrict
the remit of the issue to this, and we encourage submissions across a
sweep of emerging practices in computer music performance, creation,
and theory. Live coding research is more broadly about grounding
computation at the verge of human experience, so that work from
computer system design to exposition of live coding concert work is
equally eligible.

Topic suggestions include, but are not limited to:

– Programming as a new form of musical exploration
– Embodiment and linguistic abstraction
– Symbology in music interaction
– Uniting liveness and abstraction in live music
– Bricolage programming in music composition
– Human-Computer Interaction study of live coding
– The psychology of computer music programming
– Measuring live coding and metrics for live performance
– The live coding audience, or live coding without audience
– Visual programming environments for music
– Alternative models of computation in music
– Representing time in interactive programming
– Representing and manipulating history in live performance
– Freedoms, constraints and affordances in live coding environments

Authors should follow all CMJ author guidelines
(http://www.mitpressjournals.org/page/sub/comj), paying particular
attention to the maximum length of 25 double-spaced pages.

Submissions should be received by 21st January 2013.  All submissions
and queries should be addressed to Alex McLean
<alex.mclean@icsrim.org.uk>.

April 24, 2012 at 9:45 am Leave a comment

A Festival of (Musical) Algorithms

I’ve heard of computing conferences, and music festivals, and even computer music conferences.  I love the idea of a music festival where there are “Live Algorithms Concerts.”  This is what “Computing for Everyone” is about for me — when computing becomes part of what you do. Not necessarily invisibly–I like the idea that these musicians use algorithms, recognize that, and call them that.

This April will see musicians, artists and coders come to London for a festival of what can be done with the SuperCollider audio programming environment.

Tickets are available from £70 <http://www.sc2012.org.uk/tickets/> for a whole week of sonic inspiration featuring:

==MUSIC==

– LIVE ALGORITHMS CONCERT – three specially-commissioned musicians will be
improvising live on stage, collaborating with responsive musical algorithms
for the first time.
PLEASE SEE OUR CALL FOR CODERS:
http://www.sc2012.org.uk/live/algorithms/

– LIVECODE EVENING – codefaced people hacking music in front of your eyes:
http://www.sc2012.org.uk/live/code/

– ELECTROACOUSTIC CONCERT of new multi-channel works for electronics and
featuring musicians from the Plusminus Ensemble:
http://www.sc2012.org.uk/live/concert/

– CLUB NIGHT EXTRAVAGANZA, rounding off the festival in style with a
panoply of audiovisual acts, and headlined by A SPECIAL GUEST TO BE
ANNOUNCED…
http://www.sc2012.org.uk/live/club/

==ART==

Sonic art exhibition held in the Mile End Park, with works both indoors in
the Art Pavilion and outdoors in the park:
http://www.sc2012.org.uk/art/

==WORKSHOPS==

For new and intermediate users to learn audio hackery and interactivity
with SuperCollider:
http://www.sc2012.org.uk/workshops/

==CONFERENCE==

Three days of talks from an international range of musicians, artists,
researchers and coders:
http://www.sc2012.org.uk/conference/

* Tickets for the whole week are available from £70 *
http://www.sc2012.org.uk/tickets/

(Early-bird tickets until the end of February – so get them quickly)

Please forward to your networks!

All details are on the website, and you can also follow @scsymposium


January 27, 2012 at 11:21 am Leave a comment

Learning about Learning (even CS), from Singing in the Choir

Earlier this year, I talked about Seymour Papert’s encouragement to challenge yourself as a learner, in order to gain insight into learning and teaching.  I used my first-time experiences working on a play as an example.

I was in my first choir for only a year when our first child was born.  I was 28 when I first started trying to figure out if I was a bass or tenor (and even learn what those terms meant).  Three children and 20 years later, our children can get themselves to and from church on their own. In September, I again joined our church choir.  I am pretty close to a complete novice–I have hardly even had to read a bass clef in the last two decades.

Singing in the choir has the most unwritten, folklore knowledge of any activity I’ve ever been involved with. We will be singing something, and I can tell that what we sang was not what was in the music.  “Oh, yeah. We do it differently,” someone will explain. Everyone just remembers so many pieces and how this choir sings them.  Sometimes we are given pieces like the one pictured above.  It’s just words with chords and some hand-written notes on the photocopy.  We sing in harmony for this (I sing bass).  As the choir director says when he hands out pieces like this, “You all know this one.”  And on average, he’s right.  My wife has been singing in the choir for 13 years now, and that’s about average.  People measure their time in this choir in decades.  The harmony for songs like this was worked out years and years ago, and just about everyone does know it.  There are few new people each year — “new” includes even those 3 years in. (Puts the “long” four years of undergraduate in new perspective for me.) The choir does help the newcomers. One of the most senior bass singers gives me hand gestures to help me figure out when the next phrase is going up or down in pitch. But the gap between “novice+help” and “average” is still enormous.

Lave and Wenger in their book “Situated Learning” talk about learning situations like these.  The choir is a community of practice.  There are people who are central to the practice, and there are novices like me.  There is a learning path that leads novices into the center.

The choir is an unusual community of practice in that physical positioning in the choir is the opposite of position with respect to the community.  The newbies (like me) are put in the center of our section.  That helps us to hear where we need to be when singing.  The more experienced people are on the outside.  The most experienced person in the choir, who may also be the eldest, tends to sit on the sidelines, rather than stand with the rest of the choir.  He nails every note, with perfect pitch and timing.

Being a novice in the choir is enormous cognitive overload.  As we sing each piece, I am reading the music (which I’m not too good at) to figure out what I’m singing and where we’re going. I am watching the conductor to make sure that my timing is right and matches everyone else. I am listening intently to the others in my section to check my pitch (especially important when there is no written music!).  Most choir members have sung these pieces for ages and have memorized their phrasing, so they really just watch the director to get synchronized.

When the director introduces a new piece of music with, “Now this one has some tricky parts,” I groan to myself.  It’s “tricky” for the average choir members — those who read the music and who have lots of experience.  It’s “tricky” for those with literacy and fluency.  For me, still struggling with the notation, it takes a while to get each piece, to understand how our harmony will blend with the other parts.

I think often about my students learning Java while I am in choir.  In my class, I introduce “tricky” ideas like walking a tree or network, both iteratively and recursively, and they are still struggling with type declarations and public static void main.  I noticed last year that many of my students’ questions were answered by me just helping them use the right language to ask their question correctly. How hard it must be for them to listen to me in lecture, read the programs we’re studying, and still try to get the “tricky” big picture of operations over dynamic data structures–when they still struggle with what the words mean in the programs.

Unlike working on the play, singing in the choir doesn’t take an enormous time investment — we rehearse for two hours one night, and an hour before mass.  I’m having a lot of fun, and hope to stick with it long enough to move out of the newbie class.  What’s motivating me to stick with it is enjoyment of the music and of becoming part of the community.  There’s another good lesson there for computer science classes looking to improve retention: retention is about enjoying the content and enjoying the community you’re joining.


December 20, 2011 at 8:45 am 6 comments

The Greatest Potential Impact of Computing Education: Performamatics & Non-Majors

We’ve had Jesse Heines of U. Massachusetts at Lowell visiting with us for the last couple of weeks.  He gave a GVU Brown Bag talk on Thursday about his Performamatics project — which has an article in this month’s IEEE Computer!  Jesse has been teaching a cross-disciplinary course on computational thinking, which he team-teaches with a music teacher.  Students work in Scratch to explore real music and real computing.  For example, they start out inventing musical notations for “found” instruments (like zipping and unzipping a coat), and talk about the kinds of notations we invent in computer science.  I particularly enjoyed this video of the music teacher, Alex Ruthmann, performing an etude through live coding.

Jesse and I talked afterward: Where does this go from here?  Where could Performamatics have its greatest impact?  We talked about how these music examples could be used in introductory computing courses (CS1 and CS2), but that’s not what’s most exciting.  Is the greatest potential impact of computing education creating more CS majors, creating more developers?  Developers do have a lot of impact, because they build the software that fuels our world (or maybe, that eats our world).  But developers don’t have a monopoly on impact.

I argued that the greatest impact for computing educators is on the non-majors and their attitudes about computing.  I showed him some quotes that Brian Dorn collected in his ICER 2010 paper about adult graphics designers (who have similar educational backgrounds and interests to Jesse’s non-majors) on their attitudes about computer scientists:

P2: I went to a meeting for some kind of programmers, something or other. And they were OLD, and they were nerdy, and they were boring! And I’m like, this is not my personality. Like I can’t work with people like that. And they worked at like IBM, or places like that. They’ve been doing, they were working with Pascal. And I didn’t…I couldn’t see myself in that lifestyle for that long.

P5: I don’t know a whole ton of programmers, but the ones I know, they enjoy seeing them type up all these numbers and stuff and what it makes things do. Um, whereas I just do it, to get it done and to get paid. To be honest. The design aspect is what really interests me a lot more.

These are adults, perhaps not much different than your state or federal legislators, your school administrators, or even your CEO. Brian’s participants are adults who don’t think much of computer scientists and what they do.  There are a lot of adults in the world who don’t think much of computer scientists, despite all evidence of the value of computing and computing professionals in our world.

Will Jesse’s students think the same things about computer scientists 5 years after his course?  10 years later?  Or will they have new, better-informed views about computer science and computer scientists?  The 2005 paper by Scaffidi, Shaw, and Myers predicted 3 million professional software developers in the US by 2012, and 13 million professionals who program but aren’t software developers.  That’s a lot of people programming without seeing themselves as computer scientists or developers. Would even more program if they weren’t predisposed to think that computer science is so uninteresting?

That’s where I think the greatest impact of work like Performamatics might be — in changing the attitudes of everyday citizens, improving their technical literacy, giving them greater understanding of the computing that permeates their lives, and keeping them open to the possibility that they might be part of those 13 million who need to use programming in their careers.  There will only be so many people who get CS degrees.  There will be lots of others whose attitudes about computing will influence everything from federal investments to school board policies.  It’s a large and important impact to influence those attitudes.

December 13, 2011 at 7:47 am 1 comment

An AI music engine, as a new kind of instrument

Exciting article on a concert with Bill Manaris’s system for interactive music recreation. I really like the idea of it as “a new kind of instrument,” and I love Chris Starr’s comments about computing as a form of literacy for everybody, including arts majors.

“It’s like a whole different instrument,” Tan said of the Monterey Mirror.

Manaris said the system is an example of how computer science can be merged with the arts. The college this fall launched a new computing and the arts major, he said. Already, more than 30 undergraduate students have signed on.

Chris Starr, chairman of the college’s computer science department, said the department is trying to engage more students in computer science by merging it with other disciplines, such as business, analytics and music.

“We’re attracting a new kind of student that is technically competent and creative,” Starr said, “not just one or the other.”

He also said he thinks computer science now is a “foundation discipline,” much like literacy was in the 19th and 20th centuries. “There isn’t a single discipline that doesn’t have a software competency aspect,” Starr said.

via Real, artificial brains make magical music | The Post and Courier, Charleston SC – News, Sports, Entertainment.

December 1, 2011 at 8:01 am Leave a comment
