Posts tagged ‘contextualized computing education’

Live coding as a path to music education — and maybe computing, too

We have talked here before about the use of computing to teach physics and the use of Logo to teach a wide range of topics. Live coding raises another fascinating possibility: Using coding to teach music.

There’s a wonderful video by Chris Ford introducing a range of music theory ideas through the use of Clojure and Sam Aaron’s Overtone library. (The video is not embeddable, so you’ll have to click the link to see it.) I highly recommend it. It uses Clojure notation to move from sine waves, through creating different instruments, through scales, to canon forms. I’ve used Lisp and Scheme, but I don’t know Clojure, and I still learned a lot from this.

I looked up the Georgia Performance Standards for Music. Some of the standards include a large collection of music ideas, like this:

Describe similarities and differences in the terminology of the subject matter between music and other subject areas including: color, movement, expression, style, symmetry, form, interpretation, texture, harmony, patterns and sequence, repetition, texts and lyrics, meter, wave and sound production, timbre, frequency of pitch, volume, acoustics, physiology and anatomy, technology, history, and culture, etc.

Several of these ideas appear in Chris Ford’s 40-minute video. Many other musical ideas could be introduced through code. (We’re probably talking about music programming, rather than live coding — exploring all of these under the pressure of real-time performance is probably more than we need or want.) Could these ideas be made more constructionist through code (i.e., letting students build music and play with these ideas) than through learning an instrument well enough to explore the ideas? Learning an instrument is clearly valuable (and is part of these standards), but perhaps more could be learned and explored through code.
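To make that concrete, here is a minimal sketch in Python (not the Clojure/Overtone code from Chris Ford’s video, just a rough stand-in for it) showing the kind of building blocks he starts with: synthesizing sine-wave tones and assembling them into an equal-temperament major scale that a student could listen to, modify, and reason about.

```python
# A minimal sketch, assuming only numpy and the standard library's wave
# module: build a major scale out of sine waves and write it to a WAV file.
import wave
import numpy as np

SAMPLE_RATE = 44100  # samples per second


def sine_tone(freq_hz, duration_s=0.4, amplitude=0.3):
    """Return one sine-wave note as an array of float samples."""
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return amplitude * np.sin(2 * np.pi * freq_hz * t)


def major_scale(root_hz):
    """Equal-temperament major scale: semitone offsets 0,2,4,5,7,9,11,12."""
    return [root_hz * 2 ** (s / 12) for s in [0, 2, 4, 5, 7, 9, 11, 12]]


# Concatenate the eight notes of a C major scale (C4 is roughly 261.63 Hz).
samples = np.concatenate([sine_tone(f) for f in major_scale(261.63)])

# Write 16-bit mono audio that the student can play back and tweak.
with wave.open("c_major_scale.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes((samples * 32767).astype(np.int16).tobytes())
```

Even this tiny example puts several of the standards’ terms (wave and sound production, frequency of pitch, patterns and sequence) directly into the student’s hands as things to build and vary.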

The general form of this idea is “STEAM” — STEM + Art.  There is a growing community suggesting that we need to teach students about art and design, as well as STEM.  Here, I am asking the question: Is Art an avenue for productively introducing STEM ideas?

The even more general form of this idea dates back to Seymour Papert’s ideas about computing across the curriculum.  Seymour believed that computing was a powerful literacy to use in learning science and mathematics — and explicitly, music, too.  At a more practical level, one of the questions raised at Dagstuhl was this:  We’re not having great success getting computing into STEM.  Is Art more amenable to accepting computing as a medium?  Are music and art the way to get computing taught in schools?  The argument I’m making here is that we can use computing to achieve music education goals.  Maybe computing education goals, too.

October 3, 2013 at 7:15 am 20 comments

Teaching intro CS and programming by way of scientific data analysis

This class sounds cool and similar to our “Computational Freakonomics” course, but at the data analysis stage rather than the statistics stage. I found that Allen Downey has taught another similar course, “Think Stats,” which dives into the algorithms behind the statistics. It’s an interesting set of classes that focuses on relevance and on introducing computing through a real-world data context.

The most unique feature of our class is that every assignment (after the first, which introduces Python basics) uses real-world data: DNA files straight out of a sequencer, measurements of ocean characteristics (salinity, chemical concentrations) and plankton biodiversity, social networking connections and messages, election returns, economic reports, etc. Whereas many classes explain that programming will be useful in the real world or give simplistic problems with a flavor of scientific analysis, we are not aware of other classes taught from a computer science perspective that use real-world datasets. (But, perhaps such exist; we would be happy to learn about them.)

via PATPAT: Program analysis, the practice and theory: Teaching intro CS and programming by way of scientific data analysis.

September 10, 2012 at 3:33 pm Leave a comment

Report on “Computational Freakonomics” Class: Olympics, game consoles, the Euro, and Facebook

I’ve told you a bit about how the Media Computation class went this summer, with the new things that I tried.  Let me tell you something about how the “Computational Freakonomics” (CompFreak) class went.

The CompFreak class wasn’t new.  Richard Catrambone and I taught it once, in 2006.  But we hadn’t taught it since then, and I’d never taught it on my own, so it was “new” for me.  The term at Oxford was six weeks long.  Each week was roughly the same:

  • On Monday, we discussed a chapter from the “Freakonomics” book.
  • We then discussed social science issues related to that chapter, from the nature of science, through t-tests and ANOVA, up to multiple linear regression.  Sometimes we held a debate about issues in the chapter (e.g., on “Atlanta is a crime-ridden city” and on “Roe v. Wade is the most significant explanation for the drop in crime in the 1990s”).
  • Then I showed them how to implement the methods in SciPy to do real analysis of some Internet-based data sets.  I gave them a bunch of example data sets and showed them how to read data from flat text files and from CSV files (a sketch of that kind of exercise appears below).
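Here is a hedged sketch of that weekly SciPy pattern; the file name and column names are invented for illustration, not taken from the actual course materials.

```python
# A hypothetical example of the weekly exercise: read a flat CSV file and
# run one of the tests we covered (here, a Pearson correlation in SciPy).
# The file name and column names are made up for illustration.
import csv
import numpy as np
from scipy import stats

medals, gdp = [], []
with open("olympic_medals.csv") as f:
    for row in csv.DictReader(f):
        medals.append(float(row["medals"]))
        gdp.append(float(row["gdp"]))

# Correlation between GDP and medal count, with its p-value.
r, p = stats.pearsonr(np.array(gdp), np.array(medals))
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```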

At the end of the course, students did a project where they asked a question, any question they wanted, from any data source they chose.  Then they did it again, but in pairs, after a bunch of feedback from me (both on the first project and on their proposal for the final project).  The idea was that the final projects would be better than the first round, since students got feedback and combined efforts in pairs.  And they were.

  • One team looked at the so-called “medal slump” after a country hosts the Olympics.  The “medal slump” got mentioned in some UK newspapers this summer.  One member of the team had found in his first project that, indeed, the host country wins statistically significantly fewer medals in the following Olympics.  But working as a pair, the students found that there was no medal “slump.”  Instead, there was a huge medal “bump” during the Games the country hosted!  When hosting, the country wins more medals, but the two prior and two following Olympics all follow the same trend in medals won.
  • Another team looked at Eurozone countries and how their GDP changes tracked one another after they adopted the Euro, then tried to explain that in terms of monetary policy and internal trading.  It is the case that countries that adopted the Euro saw their GDPs start to correlate with one another, much more than with non-Euro EU countries or with other countries of similar GDP size.  But the team couldn’t figure out a good explanation for why: was it because internal trading was facilitated, because of joint monetary policy, or something else?
  • One team figured out the Facebook API (which they said was awful) and looked at different companies’ “likes” versus their stock prices over time.  The two were strongly correlated, but “likes” basically only go up — almost nobody un-likes a company.  Since stock prices also generally rise, it’s a clear correlation, but not a meaningful one.
  • Another team looked at the impact of new consoles on the video game market.  A new console is a big hit to the stock price of the company that developed it in the year of its release, while the game manufacturers’ stock rises dramatically.  But the team realized a weakness of their study: they looked only at the year of a console’s release, and the real benefit of a new console comes over its long lifespan.  The year that the PS3 came out, it was outsold by the PS2.  But that’s hard to see in stock prices.
  • The last team looked at the impact of the Olympics on the host country’s GDP.  They found no correlation at all between hosting and changes in GDP.  The Olympics are a big deal, but they’re still a small drop in the host country’s overall economy.

One of my favorite observations from their presentations: Their honesty.  Most of the groups found nothing significant, or they got it wrong — and they all admitted that.  Maybe it was because it was a class context, versus a tenure-race-influenced conference.  They had a wonderful honesty about what they found and what they didn’t.

I’ve posted the syllabus, course notes, the slides that I used (Richard never used PowerPoint, but I needed PowerPoint to prop up my efforts to be Richard), and the final exam on the CompFreak Swiki.  I also posted the student course-instructor opinion survey results, which are interesting to read in terms of what didn’t work.

  • Clearly, I was no Richard Catrambone. Richard is known around campus for how well he explains statistics, and I learned a lot from listening to his lectures in 2006. Students found my discussion of inferential statistics to be the most boring part.
  • They wanted more in-class coding! I had them code in class every week. After each new test I showed them (correlation, t-test, ANOVA, etc.), I had them code it in pairs (with any data they wanted), and then we all discussed what they found in the last five minutes of class. I felt guilty that they were just programming away while I worked with the pairs that had questions, or read email. I guess they liked that part and wanted more. (A minimal sketch of one such exercise appears after this list.)
  • I get credit from the students for something that Richard taught me to do. Richard pointed out that his reading of the research on cognitive overload suggests that nobody can pay attention for 90 minutes straight. Our classes were 90 minutes a day, four days a week, so halfway through each class I made everyone get up and go outside (when it wasn’t raining). They liked that part.
  • Students did learn more about computing, inspired by the questions that they were trying to answer.  They talk in their survey comments about studying more Python on their own and wishing I’d covered more Python and computing.
  • In general, though, they seemed to like the class and encouraged us to offer it on campus, which we’ve not yet done.
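For a flavor of those in-class pair exercises, here is a minimal, hypothetical example of running a one-way ANOVA with SciPy; the groups and numbers below are invented, not data from the class.

```python
# A sketch of the pair exercise pattern: three made-up groups and a one-way
# ANOVA asking whether the group means differ more than chance would suggest.
import numpy as np
from scipy import stats

group_a = np.array([12, 15, 9, 14, 11])
group_b = np.array([7, 8, 10, 6, 9])
group_c = np.array([3, 5, 4, 6, 2])

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# The same session pattern works for the other tests we covered, e.g.,
# stats.ttest_ind(group_a, group_b) or stats.pearsonr(x, y).
```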

Students who talked to me about the class at the end said that they found it interesting to use statistics for something.  It turns out that I happened to get a bunch of students who had taken a lot of statistics before (e.g., high school AP Statistics).  But they still liked the class because of (a) the coding and (b) applying statistics to real datasets.  My students asked all kinds of questions, from what factors influenced the money earned by golf pros, to the influences on attendance at Braves games (unemployment is much more significant than how much the team is in contention for the playoffs).  One of the other more interesting findings for me: GDP correlates strongly and significantly with the number of Olympic gold medals that a country wins, i.e., rich countries win more medals. However, GDP per capita has almost no correlation. One interpretation: to win in the Olympics, you need lots of rich people (vs. a large middle class).

Anyway, I still don’t know if we’ll ever offer this class again, on-campus or study-abroad.  It was great fun to teach.  It’s particularly fun for me as an exploration of other contexts in contextualized computing education.  This isn’t robotics or video games.  This is “studying the world, computationally and quantitatively” as a reason for learning more about computing.

August 16, 2012 at 8:27 am 6 comments

CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts

Boy, do I want to learn more about this! ChucK and Processing, and two semesters — it sounds like Media Computation on steroids!

The National Science Foundation (NSF) has awarded California Institute of the Arts (CalArts) a grant of $111,881 to develop a STEM (Science, Technology, Engineering and Mathematics) curriculum for undergraduate students across the Institute’s diverse arts disciplines. The two-semester curriculum is designed to teach essential computer science skills to beginners. Classes will begin in Fall 2012 and are open to students in CalArts’ six schools—Art, Critical Studies, Dance, Film/Video, Music and Theater.

This innovative arts-centered approach to teaching computer science—developed by Ajay Kapur, Associate Dean of Research and Development in Digital Arts, and Permanent Visiting Lecturer Perry R. Cook, founder of the Princeton University Sound Lab—offers a model for teaching that can be replicated at other arts institutions and extended to students in similar non-traditional STEM contexts.

via CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts.

May 31, 2012 at 7:14 am 2 comments

How can we teach multiple CS1’s?

A common question I get about contextualized approaches to CS1 is: “How can we possibly offer more than one introductory course with our few teachers?”  Valerie Barr has a nice paper in a recent issue of the Journal of Computing Sciences in Colleges where she explains how her small department was able to offer multiple CS1’s, and the positive impact that had on their enrollment.

The department currently has 6 full time faculty members, and a 6 course per year teaching load. Each introductory course is taught studio style, with integrated lecture and hands-on work. The old CS1 had a separate lab session and counted as 1.5 courses of teaching load. Now the introductory courses (except Programming for Engineers) continue this model, meet the additional time and count as 1.5 courses for the faculty member, allowing substantial time for hands-on activities. Each section is capped at 18 students and taught in a computer lab in order to facilitate the transition between lecture and hands-on work.

In order to make room in the course schedule for the increased number of CS1 offerings, the department eliminated the old CS0 course. A number of additional changes were made in order to accommodate the new approach to the introductory CS curriculum: reduction of the number of prescribed courses for the major from 8 (out of 10) to 5 (this has the added benefit, by increasing the number of electives, of giving students more flexibility and choice within the general guidelines of the major); put elective courses on a rotation schedule so that each one is taught every other or every third year; made available to students a 4-year schedule of offerings so that they can plan according to the course rotation.

May 8, 2012 at 7:23 am 2 comments

A CS Emporium would be a wonderful idea: Efficient and Tailored Computing Education

Over the weekend, I read a post by GasStationsWithoutPumps on speeding through college.  The Washington Post has a great article about Virginia Tech’s Math Emporium that provides a mechanism to do that: self-paced mathematics instruction, with human instructors available for one-on-one help.  It’s efficient, and it lets students learn at their own pace.  I would love to see a computer science version of this.  In particular, it would be great if students could explore problems in a variety of contexts (from media to games to robotics to interactive fiction) and put in the time they need to develop skill and proficiency.  Like the distance-education efforts, this is about improving the efficiency of higher education.  Unlike distance education, the Emporium includes 1:1 human interaction and the potential for individualized approaches and curricula.  And there’s potential synergy: the content needed to make a CS Emporium work could also be used in distance education.  Here’s my prediction: without the 1:1 help, I’d expect the distance folks to still have a higher WFD (withdraw/D/F) rate.

No academic initiative has delivered more handsomely on the oft-stated promise of efficiency-via-technology in higher education, said Carol Twigg, president of the National Center for Academic Transformation, a nonprofit that studies technological innovations to improve learning and reduce cost. She calls the Emporium “a solution to the math problem” in colleges.

It may be an idea whose time has come. Since its creation in 1997, the Emporium model has spread to the universities of Alabama and Idaho (in 2000) and to Louisiana State University (in 2004). Interest has swelled as of late; Twigg says the Emporium has been adopted by about 100 schools. This academic year, Emporium-style math arrived at Montgomery College in Maryland and Northern Virginia Community College.

“How could computers not change mathematics?” said Peter Haskell, math department chairman at Virginia Tech. “How could they not change higher education? They’ve changed everything else.”

Emporium courses include pre-calculus, calculus, trigonometry and geometry, subjects taken mostly by freshmen to satisfy math requirements. The format seems to work best in subjects that stress skill development — such as solving problems over and over. Computer-led lessons show promise for remedial English instruction and perhaps foreign language, Twigg said. Machines will never replace humans in poetry seminars.

via At Virginia Tech, computers help solve a math class problem – The Washington Post.

April 25, 2012 at 8:58 am 4 comments
