Posts tagged ‘Media Computation’
I’ve seen EarSketch demoed a few times, and Barb is involved in planning their summer camp version. It’s very cool — goes deeper into Python programming and music than MediaComp.
The students use EarSketch, the software created by Magerko and Jason Freeman, an associate professor in Tech’s School of Music. EarSketch uses the Python programming language and Reaper, a digital audio workstation program similar to those used in recording studios throughout the music industry.
“Young people don’t always realize that computer science and programming can be fun,” Freeman said. “This is allowing students to express their own creative musical ideas as they learn computer science principles.”
Barbara Ericson and I gave the Castle Lecture at West Point in April. The Castle Lecture is a big deal — we spoke before the entire first-year class at West Point. (Last year’s lecturer was David Ferrucci, PI of the IBM Watson project.) We received this honor because West Point requires Computer Science of everyone, and this is the first year that all the first-years used our Media Computation Python textbook in that class. So, we got a chance to lecture to 1200 future Army officers and their instructors, all of whom knew Media Computation. It was a stunning experience.
The whole day was amazing. If you’ve never been to West Point, I highly recommend that you take the opportunity. The campus is beautiful. The traditions and stories about the place are amazing. There’s such a sense of history, such a buzz about the place. We ate lunch with a group of cadets (in an absolutely enormous mess hall where thousands of students eat lunch in 20 minutes) and were deeply impressed. These are undergraduate students who are making a huge commitment to service to their country.
The biggest intellectual treat for me was learning more about their course, IT 105. Every semester, 700 students take the course, in sections of 20, with 16 instructors involved in teaching it. We met with the instructors who teach almost nothing but IT 105, but also met some of the other West Point EECS instructors who teach a section or two of IT 105 along with their other courses. (Like Dr. Tanya Tolles Markow, a GT alumna, who teaches IT 105 and database classes.)
The person who makes this all work is Susan K. Schwartz (CAPT, USN, Ret). Her attention to detail is phenomenal. Susan is going to give me her errata for the third edition when she finishes this semester, which is more detailed than all the corrections that all instructors have sent me for both of the previous editions combined. Susan creates detailed lecture notes and assignments that drive all the sections for every day across the entire semester. All the students who take the course take the same exams, so Susan provides enough detail so that all the instructors know what to do in each class so that all students get to the finish line.
Barb and I each got to sit in on one section. This is the opposite of a MOOC. The teacher knows every student. She (I attended one of Susan’s classes) calls on individual students, prods students to engage, and gives them activities in class. It’s small, interactive, and individualized. Yet, there are 700 students taking it at once. It’s an enormous effort to make a class that large work such that students can all have that small-class experience. We’re going to try to get Susan’s materials available to other Media Computation teachers.
The lecture was fun and exciting to do. We talked about how media was going to influence them for the rest of their lives. I gave a brief audio lecture, then we talked about computers that can process all that we can hear and see, with the processing power we can expect ten years from now. What does that mean for the rest of their lives? Barb gave a great overview of advances in robotics and cyber-security and even prosthetics. Afterward at the reception, we each had 9-12 cadets asking us follow-up questions for about an hour. We got back to the Thayer Hotel (what a place!) just buzzing from the amazing adventure of the day.
“Gas station without pumps” has a great point here (linked below), but I’d go a bit further. As he suggests, proponents of an educational intervention (“fad”) rarely admit that it’s a bad idea, rarely gather evidence showing that they’re wrong, and swamp the research literature with evidence that they’re right.
But what if external observers test the idea, and find that it works as hypothesized? Does that mean that it will work for everyone? Media Computation has been successfully used to improve retention at several institutions, with both CS majors and non-CS majors, in evaluations not connected to me and my students. That doesn’t mean that it will work for each and every teacher. There are so many variables in any educational setting. Despite the promises of the “What Works Clearinghouse,” even well-supported interventions will sometimes fail, and interventions that are not well-supported sometimes work. Well-supported interventions are certainly more promising and more likely to work. The only way to be sure, as the blog post below says, is to try it — and to measure it as well as you can, to see if it’s working for you.
I would posit that there is another series of responses to educational fads:
- It is great, everyone should do this.
- Maybe it doesn’t work that well in everybody’s hands.
- It was a terrible idea—no one should ever do that.
Think, for example, of the Gates Foundation’s attempt to make small high schools. They were initially very enthusiastic, then saw that it didn’t really work in a lot of the schools where they tried it, then they abandoned the idea as being completely useless and even counter-productive.
The difficult thing for practitioners is that the behavior of proponents in stage 1 of an educational fad is exactly the same as in Falkner’s third stage of acceptance. It is quite difficult to see whether a pedagogical method is robust, well-tested, and applicable to a particular course or unit—especially when so much of the information about any given method is hype from proponents. Educational experiments seem like a way to cut through the hype, but research results from educational experiments are often on insignificantly small samples, on very different courses from the one the practitioner needs to teach, and with all sorts of other confounding variables. Often the only way to determine whether a particular pedagogic technique works for a particular class is to try it and see, which requires a leap of faith, a high risk of failure, and (often) a large investment in developing new course materials.
For teachers in those old, stodgy, non-MOOC, face-to-face classes (“Does anybody even *do* that anymore?!?”), I strongly recommend using “Clickers” and Peer Instruction, especially based on these latest findings from Beth Simon and colleagues at the University of California at San Diego. They have three papers to appear at SIGCSE 2013 about their multi-year experiment using Peer Instruction:
- They found that use of Peer Instruction, beyond the first course (into theory and architecture), halved their failure rates: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=176
- They found that the use of Peer Instruction, with Media Computation and pair-programming, in their first course (on the quarter system, so it’s only 10 weeks of influence) increased the percentage of students in their major (tracking into the second year and beyond) up to 30%: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=96
- They also did a lecture vs. Peer Instruction head-to-head comparison which showed significant impact of the instructional method: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=223
If we have such strong evidence that changing our pedagogy does work, are we doing our students a disservice if we do not use it?
IEEE Computer Society does good videos. They did a nice video at the Awards Ceremony, and now, they’ve put together a follow-up video with footage from interviews that they did after the Awards Ceremony. I always find it painful to watch myself being interviewed in a video, but I like how they got what’s important about Media Computation and Georgia Computes in this piece. You always try to get some of the important stuff into an interview, but the stuff you thought was most important usually ends up on the cutting room floor. Here, they got what I thought were the important bits.
This piece got mentioned in an earlier blog post comment by Mylène, and I wanted to make sure that it got highlighted. It’s a wonderful post about what really leads to an enduring relationship with a subject matter. There are some great lessons here for computing education. Media Computation fares well when considered from this perspective. I just used MediaComp as a way of introducing graduate students to Python, and they puzzled (for example) over why sounds came out the way that they did. I thought it worked as a way of getting the students to start reasoning with Python.
An ounce of perplexity is worth a pound of engagement. Give me a student with a question in her head, one that math can help her answer, over a student who’s been engaged by a poster or a celebrity testimonial or the promise of a career. Engagement fades. Perplexity endures.
Perhaps it comes to this: rather than remembering your own tastes as a twelve-year-old, empathize with the tastes of a twelve-year-old who isn’t anything like you, one who has experienced only humiliation and failure in mathematics. What does math have to offer that student?
Daphne talks about the educational research that she’s drawing on. I wondered: What’s new here? Why are people excited about MOOCs?
- Mining educational data to learn about learning isn’t new. It’s an established field with a multi-year conference (http://educationaldatamining.org/EDM2012/). In fact, there’s a standard open source repository for these sorts of data for learning scientists (http://learnlab.org/technologies/datashop/index.php). (I wonder if Coursera and Udacity are contributing to that?)
- Using technology to get students to actively engage with their learning isn’t new. Instructional Management Systems had the entire K-12 curriculum covered back in the 1990s, all based on a similar model of presentation and student activity to enhance learning.
- Getting educational content out to the developing world isn’t new. That was always one of the guiding principles of the Open University UK, and their track record (in terms of completion rates, measured learning, reach into the developing world) is much better than Coursera and Udacity.
- On-line forums are not new. In fact, the older Computer Supported Collaborative Learning (CSCL) systems (like CSILE/Knowledge Forum and even CoWeb/Swiki) have well-supported claims of facilitating learning, unlike the more modern forums that don’t have similar support.
- The two-sigma effect is old (though recent attempts to replicate Bloom’s result suggest that it wasn’t tutoring but mastery learning that led to a two-sigma effect). If the point of Coursera is to get similar effects of tutoring, why aren’t they starting by studying and replicating human tutoring (as the Cognitive Tutors do), versus putting lectures on video? Lectures were the less-successful model.
Here is what I see as new:
- Video on the Internet. There is an effect of medium and distribution here. Video is compelling. We now have the ability to get lots of video created and shipped anywhere cheaply. When Roger Schank was building his learning systems at Northwestern, they spent a huge amount of effort getting lots of video burned to DVDs that could be easily accessed. That’s simply not a problem anymore.
- They’re doing it for free. There have been lots of smallish research efforts in the past. There have been companies started that provide these technologies at scale for a cost. Free changes things, particularly with students and families today bearing a greater portion of the cost of higher education.
- There is the potential to do more, to make students feel like individuals, rather than part of a 100K herd. When I raised the question of “what’s new about MOOCs” with faculty at Georgia Tech, my colleagues pointed out the potential value of using modern, real-time machine learning and data analytics techniques to get greater insight into learning difficulties, and to better personalize the learning experience. Daphne says in her TED talk that the Coursera system could recognize the need for more remedial material and provide it. I recognize that potential, though the technology isn’t in place yet. Current MOOCs have little or no machine learning, and no attempt at personalization.

But I see a problem with Coursera recognizing a need and recommending remedial material. The current MOOCs won’t be able to offer personalization for the audiences that I most care about (e.g., adult learners without previous CS background, non-majors studying CS), audiences that probably would need more background material than the top students, because those students simply aren’t there. My audiences are most likely in the 80-90% who are dropping out of MOOCs after registering. Even the most sophisticated machine learning and data analytics can’t help you to understand students who are no longer there. Until you get students who need the remediation through the system, the ML can’t learn about them, but how do you get them through the system without the ML-recommended remediation?
While I agree with the importance of reaching underserved populations, I am not convinced that MOOCs are currently having much of an effect in the developing world, or broadening participation for students who don’t have much preparatory work (say, in CS) in their schools. I wonder if it’s even possible to make a large impact on the developing world starting at higher education. Not all K-12 programs in the United States prepare students adequately for MIT, Stanford, and Harvard level classes. Can we expect that most K-12 programs in the developing world are adequate preparation? The Open University UK has always been “open,” with no pre-requisites, and they provide content at that level. Coursera prides itself on offering top-notch classes. That’s valuable, but I find it unlikely that such courses also meet the needs of underserved populations.
Coursera offers demanding courses via video which only a small percentage complete — for free. That is valuable and interesting. I don’t currently see the model replacing existing courses, or working well for students who don’t have the background knowledge.
Daphne Koller is enticing top universities to put their most intriguing courses online for free — not just as a service, but as a way to research how people learn. Each keystroke, comprehension quiz, peer-to-peer forum discussion and self-graded assignment builds an unprecedented pool of data on how knowledge is processed and, most importantly, absorbed.
I’m back from Oxford, after an intense six weeks of teaching “Computational Freakonomics” and “Media Computation.” Since I did new things in Media Computation this term, I put together a little survey to get students’ feedback on what I did — not for research publication, but to inform me as a teacher.
It’s complicated to interpret their responses. Only 11 of my 22 students completed my survey, so the results may not be representative of the whole class. (The class was 10 males and 12 females. I didn’t ask about gender on the survey, so I don’t know the gender of the respondents.) The first thing I wondered was whether students perceived the worked examples as helping them learn. “I found it useful to type in Python programs and figure them out at the start of class.” 4 strongly agree, 6 agree, 1 neutral.
That seems generally positive — students thought that the worked examples were useful. How about helping with Python syntax? “Getting the characters exactly right (the syntax of Python) was difficult.” 2 agree, 1 neutral, 8 disagree. That’s in the right direction.
In the written portion, several students commented that they liked being able to focus on “understanding” programs “rather than just executing them.” One student even suggested that I could have questions about the program after they studied it, or have them make a change to the program afterward, to demonstrate understanding. I loved this idea, and particularly loved that it was suggested by a student. It suggests that students see value in understanding a program even before writing programs themselves, while still seeing value in writing, too. This worked examples approach really does lead to a different way of thinking about introductory computer science: programs as something to study first, before designing and engineering them.
When I asked students what their favorite part of the course was, and what their least favorite part was, Excel showed up on both lists (though more often on the least favorite). Here’s one of the questions whose responses stymied me: “Python is harder to learn and use than Excel.” The responses could not be a more perfect bell curve — what does that mean?!?
“I wish I could have learned more Excel in this course.” An almost perfectly uniform distribution!
Their reaction to Excel is so interesting. On the written parts of the survey, they told me how important it was for them to learn Excel, that it was very important for their careers. But they did not really like doing something as inauthentic (my word, not theirs) as pixel manipulation in Excel. They wished they could have done something more useful, like computing “expenses.”
The responses above suggest a hypothesis to me: the students don’t really know how to think about Excel in relation to Python. It’s as if they’re two different things, not two forms of the same thing. I was hoping for more of the latter, by doing pixel manipulations in both Python and Excel. This may be someplace where prior understanding shapes future understanding. I suspect that the students classify these things something like this:
- “Excel is for business. It’s not for computing. Doing pixel manipulations in Excel is just weird and painful.”
- “Python is for computing. I have to go through it, but it doesn’t really have much to do with my future career.” On the statement, “Learning programming as we have in this course is not useful to me,” 3 were neutral, and 8 disagreed. I read that as, “It’s okay. Sorta.”
Something that I always worry about: Are we helping students to develop their sense of self-efficacy in an introductory course, especially for non-majors?
“I am more confident using computers now, after taking this course.” Quite positive: 10 agree, 1 neutral.
“I think differently about computers and how they work since taking this class.” Could not get much more positive: 8 strongly agree, 6 agree!
And yet, “I am not the kind of person who is good with computers.” Mostly, students agree with that: 3 strongly agree, 4 agree, 1 neutral, 3 disagree. On average, my students still don’t see themselves as among the people who are “good” with computers.
There was lots for me to be happy about. Some students said that the lectures on algorithmic complexity and the storage hierarchy were among their favorites; that they would have liked to have learned more about the “big questions” of CS; and that they liked writing programs. On the statement, “I learned interesting and useful computer science in this course,” 3 students strongly agreed, and 8 agreed. They got that this was about computer science, and some of them even found it useful.
Even in a class of only 22, even seeing them every day for hours, even with grading all their papers — I’m still surprised, intrigued, and confounded by how they think about all of this. That’s fine by me. As a teacher and a researcher, my job isn’t done yet.
The IEEE CS Awards videos are up on YouTube, including Eric Roberts’s nice talk. (Well, they probably went up weeks ago, but I just got to my office and found the physical DVD in my mailbox, which was my clue to check). So now people who weren’t there can see me thank Rich LeBlanc, Peter Freeman, Kurt Eiselt, Russ Shackelford, Jim Foley, John Impagliazzo, Barbara Ericson, and Jan Cuny. I’m sure that some will notice that not all the details in the video are completely right — I’m sure that’s my fault for not making everything clear when I provided materials. I’m grateful to the IEEE Computer Society for putting together such a snazzy video on my behalf.
I mentioned awhile ago that some undergraduates built for me a new tool for converting from images to spreadsheets, and back again. It allows us to do image manipulations via spreadsheet tools like Excel. More importantly, it exposes the data abstractions in picture files (turning JPEGs into columns of x,y and RGB), and makes the lower level data malleable.
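If you’re curious what the core of such a conversion looks like, here’s a minimal sketch (not the students’ actual tool), assuming the Pillow library: it dumps a picture into rows of x, y, R, G, B that Excel can open, and rebuilds an image from those rows.

import csv
from PIL import Image

def image_to_rows(image_path, csv_path):
    # Flatten a picture into one spreadsheet row per pixel
    img = Image.open(image_path).convert("RGB")
    width, height = img.size
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "R", "G", "B"])  # column headers for Excel
        for y in range(height):
            for x in range(width):
                r, g, b = img.getpixel((x, y))
                writer.writerow([x, y, r, g, b])

def rows_to_image(csv_path, width, height):
    # Rebuild a picture from the (possibly Excel-modified) pixel rows
    img = Image.new("RGB", (width, height))
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for x, y, r, g, b in reader:
            img.putpixel((int(x), int(y)), (int(r), int(g), int(b)))
    return img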
I’m using this tool in the Media Computation course that I’m teaching this summer. Normally, CS1315 (the course I’m teaching) includes labs on Word, Excel, and Powerpoint, but there’s no sense of “lab” in these compressed courses. And I bet that most of my students know a lot about Office applications already. So I asked them at the start of class: What did they want to learn about Office applications? Several students said that they’d like to learn to use formulas in interesting ways in Excel.
I’ve come up with a homework assignment where students do Media Computation using unusual Excel formulas (e.g., using IF, AND, and COUNTIF). I lectured on Excel on Thursday in support of this assignment, and it was rough. Things that I had worked out in Windows Excel failed or worked differently when doing a live coding session in MacOS Excel (e.g., the FREQUENCY function worked differently, or not at all — hard to tell). Fortunately, we figured it out, but I got a new appreciation of how non-portable the edge of Excel functions can be.
My students are working on this assignment this week, and I’ll let you know how it goes. Based on the questions I’m getting already, it’s challenging for the students. Excel functions are hidden, invisible when you look at a spreadsheet until you click on the right cell. Much of how you do things in Excel, the process, is invisible from watching the screen, e.g., shift-clicking to select a range. So, they’re having a hard time discerning exactly how I did what I did in class.
Maybe they’re learning a greater appreciation for doing all this in Python, rather than Excel.
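To make that contrast concrete: here’s the kind of per-pixel rule that an Excel IF formula expresses, sketched with JES-style MediaComp functions (the function name and the threshold of 128 are my own, just for illustration).

def thresholdRed(picture):
    # The spreadsheet version of this rule is an IF formula copied down
    # the R column, something like =IF(C2 > 128, 255, 0)
    for pixel in getPixels(picture):
        if getRed(pixel) > 128:
            setRed(pixel, 255)  # push bright reds to full intensity
        else:
            setRed(pixel, 0)    # zero out the dim ones

In Python the rule sits visibly in one place; in Excel the same logic hides inside cells until you click on them, which is exactly the invisibility problem the students are running into.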
I should give you a little report on how my worked examples/self-explanation intervention worked in my Media Computation class. I have nothing close to real data, and you shouldn’t believe me if I offered any. This is a rarefied class: 22 students, meeting four days a week for 90 minutes, plus office hours for 90 minutes twice each week (which most of the students have come to), and the teacher (who is the author of the textbook) attends breakfast and dinner with the students. I think it would be hard to get more student-teacher interaction than in this model.
That said, I would definitely do it again. I was quite surprised at how seriously the students took the task of explaining these programs! In retrospect, I shouldn’t have been surprised. In most classes, aren’t students asked to analyze and explain situations, even asked to make sense of some text? That’s exactly what I asked these students to do, and they really worked at it. I had students coming to office hours to ask about their assigned programs, so that they could write up their one paragraph of explanation. There were things that I had to teach them about this process, e.g., teaching them to try a program with different data sets, to make sure that the odd result they got wasn’t an anomaly. I gave them feedback (every single student, on every single program) about the quality of their explanations, and the explanations definitely got better over time.
The real benefit was that they were trying to understand some relatively complicated code before it was their own code that they were trying to understand (while also designing and debugging it, all before a deadline). With the worked examples tasks, they were just trying to understand. There clearly was a reduction in cognitive load. Variations on the below program had lots of students coming to see me — combining sounds at different rates was a challenging idea, but students did a good job of getting a grasp on it:
def modifysound2(sound):
  retsound = makeEmptySound(2 * getLength(sound))
  newsound = makeSound(getMediaPath("bassoon-c4.wav"))
  trgi = 0
  nsi = 0
  for i in range(getLength(sound)):
    value = getSampleValueAt(sound, i)
    # Step through the bassoon sound at half speed, so it plays
    # slower and lower-pitched than the input sound
    if nsi < getLength(newsound):
      nsvalue = getSampleValueAt(newsound, int(nsi))
    else:
      nsvalue = 0
    # Add the two sounds, sample by sample
    setSampleValueAt(retsound, trgi, value + nsvalue)
    trgi = trgi + 1
    nsi = nsi + 0.5
  return retsound
Because there were four labs (that just involved explaining programs) and two homeworks (that involved typing in, executing, and explaining programs), the first real programming assignment was the collage assignment. Everybody did it. Everybody turned in a working program. And some of these were huge. This one (by Savannah Andersen) was over 100 lines of code:
This one, by Julianne Burch, is over 200 lines of code. I’m posting shrunk versions here: Julianne’s is about 4000 pixels across, representing the travel portion of this study abroad program.
I suspect that the worked examples and self-explanations gave the students more confidence than they normally have when facing their first programs. It’s unusual in my experience for students to be willing to write 50-200 lines of working code for their first programming assignment.
But some of these students were also getting it. A few of my students realized that they could make their collages more easily by using a copy() method to reduce the complication of composing pictures. I did prompt them to do that, and a few did — most just went with hard-coded for loops, because those were easier for them to understand. When I described how to do that, one student asked, “Aren’t you just naming some of those lines of code?” Yes! That’s a nice way to start thinking about functions and abstraction: it’s about naming chunks of code. One of my students, without prompting, also decided to create a copy() method for her sound collage. They’re starting to grapple with abstraction. Given that this is the third week of class, and that none of them had any previous programming experience (all my students are liberal arts and management students), I think that they’re doing quite well at moving from notation into abstraction.
They’re working on their first midterm exam now, a take-home exam (to save classroom time). I think it’s significantly challenging for a first exam, but it doesn’t have much coding. It has a lot of analysis of code, because that’s one of the key learning objectives. I want them to be able to look at a piece of code and predict its behavior, to trace (if necessary) what’s going on. For me, that’s a more important outcome from a first course than being able to write a lot of code.
I sent this idea to the mediacomp-teach mailing list, and got a positive response. I thought I’d share it here, too.
I’m trying a worked examples + self-explanations approach in my Media Computation Python class that started Monday (first time I’ve taught it in seven years!) and in my “Computational Freakonomics” class (first time I’ve taught it in six years). Whether you’re interested in this method or not, you might like to use the resource that I’ve created.
As I mentioned here, I’m fascinated by the research on worked examples and on self-explanations. The idea behind worked examples is that we ought to have students see more fully worked-out examples, with some motivation to actually study them. The idea behind self-explanations is that learning and retention are improved when students explain something to themselves (or others), in their own words. Pete Pirolli did studies where he had students use worked examples to study computer science (specifically, recursion), and with Mimi Recker, he prompted CS students to self-explain and then studied the effect. In their paper, Pirolli and Recker found:
“Improvement in skill acquisition is also strongly related to the generation of explanations connecting the example material to the abstract terms introduced in the text, the generation of explanations that focus on the novel concepts, and spending more time in planning solutions to novel task components. We also found that self-explanation has diminishing returns. “
Here’s the critical idea: Students (especially novices) need to see more examples, and they need to try to explain them. This is what I’m doing at key points in the class:
- Each team of two students gets one worked example in class. They have to type it in (to make sure that they notice all the details) and explain it to themselves – what does it do? how does it work?
- Each team then explains it to the teams on either side of them.
- At the end of the class, each individual takes one worked example, and does the process themselves: Types it in, pastes it into a Word document (with an example of the output), and explains what the program does. I very explicitly encourage them to do this with others, and to talk about their programs with one another. I want students to see many examples, and talk about them.
Sure, our book has many examples in it, but how many students actually look at all those examples? How many type them in and try them? Explain to themselves?
I’m doing this at four points in the MediaComp class: for images with getPixels, images with coordinates, sounds, and text and lists. For my CompFreak class, students are supposed to have had some CS1, and most of them have seen Python at least once, so I’m only doing this at the beginning of the class, and only on text and lists. There are 22 students in my MediaComp class, so I needed 11 examples in class, then 22 examples, one for each person. Round that off to 35 examples, multiplied by the four points in the class: that’s 140 worked examples. A lot of them vary in small ways — that’s on purpose. I wanted two teams to say, “I think our program is doing about the same thing as yours — what’s different?”
I did discover some effects that surprised me. For example, try this:
def changesound(sound):
  for sample in getSamples(sound):
    value = getSampleValue(sample)
    if value > 0:
      setSampleValue(sample, 4 * value)  # amplify the positive samples
    if value <= 0:
      setSampleValue(sample, 0)          # zero out the negative samples
Turns out if you zero out all the negative samples, you can still hear the sound pretty clearly. I wouldn’t have guessed this. My guess at an explanation: zeroing one half of the waveform (essentially half-wave rectification) preserves the periodicity that determines the pitch, even though it distorts the timbre.
Whether you want to try this example-heavy approach or not, you might find all these examples useful. I’ve put all 140 examples on the teacher MediaComp sharing site (http://home.cc.gatech.edu/mediacomp/9 – email me if you want the key phrase and don’t have it). I started creating these in Word, but that was tedious to format well. I switched to LaTeX, because it nicely formatted the Python without much effort on my part. I’ve uploaded both the PDF and the LaTeX, since the LaTeX provides easy copy-paste text.
My CompFreak students are doing their assignment now (due tonight), and we just did it for the first time in the MediaComp class today (the take-home portion due in two days). I was pleased with the feedback. I got lots of questions about details that students don’t normally ask about at the second lecture (e.g., “makeColor is doing something different than setRed, setGreen, and setBlue? What’s the difference between colors and pixels?”). My hope is that, when they start writing their own code next week, they won’t be stymied by stupid syntax errors, because they will have struggled with many of the obvious ones while working with complete code. I’m also hoping that they’ll be more capable in understanding (and thus, debugging) their own code. Most fun: I had to throw the students out of class today. Class ended at 4:10, and we had a faculty meeting at 4:30. Students stayed on, typing in their code, looking at each others’ effects. At 4:25, I shooed them off.
I am offering extra credit for making some significant change (e.g., not just changing variable names) to the example program, and turning that in, too (with explanation and example). What I didn’t expect is that they’re relating the changes to code we’ve talked about, like in this comment from a student that just got turned in:
“I realized I made an error in my earlier picture so I went back and fixed it. I also added in another extra credit picture. I made a negative of the photo. It looks pretty cool!”
It’s interesting to me that she explicitly decided to “make a negative” (and integrated the code to do it) rather than simply adding/changing a constant somewhere to get the extra credit cheaply.
All my MediaComp students are Business and Liberal Arts students (and the class is 75% female, while CompFreak is 1 female and 9 males). I got a message from one of the MediaComp students yesterday, asking about some detail of the class, where she added: “We all were pleasantly surprised to have enjoyed class yesterday!” I take the phrase “pleasantly surprised” to mean that the expectations were set pretty low.
CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts
Boy, do I want to learn more about this! ChucK and Processing, and two semesters — it sounds like Media Computation on steroids!
The National Science Foundation (NSF) has awarded California Institute of the Arts (CalArts) a grant of $111,881 to develop a STEM (Science, Technology, Engineering and Mathematics) curriculum for undergraduate students across the Institute’s diverse arts disciplines. The two-semester curriculum is designed to teach essential computer science skills to beginners. Classes will begin in Fall 2012 and are open to students in CalArts’ six schools—Art, Critical Studies, Dance, Film/Video, Music and Theater.
This innovative arts-centered approach to teaching computer science—developed by Ajay Kapur, Associate Dean of Research and Development in Digital Arts, and Permanent Visiting Lecturer Perry R. Cook, founder of the Princeton University Sound Lab—offers a model for teaching that can be replicated at other arts institutions and extended to students in similar non-traditional STEM contexts.
My TEDxGeorgiaTech talk finally got posted. I show how small bits of code can lead to useful and interesting insights, even for students who don’t focus on STEM. It’s a “Computing for Everyone,” Media Computation demonstration talk. I was nervous doing this talk (and unfortunately, it shows) because I had decided to code Python live and play harmonica, in front of a TEDx audience. The talk includes image manipulation, sound manipulation, and changing information modalities (e.g., turning pictures into sound).
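I can’t reproduce the talk’s code here, but a minimal sketch of the pictures-into-sound idea, using JES-style MediaComp functions (the mapping and scaling are my guesses for illustration, not what I used on stage), looks something like this:

def pictureToSound(picture):
    # Turn each pixel's brightness into one audio sample
    pixels = getPixels(picture)
    sound = makeEmptySound(len(pixels))
    for i in range(len(pixels)):
        p = pixels[i]
        brightness = (getRed(p) + getGreen(p) + getBlue(p)) / 3
        # Center the 0..255 brightness on zero and scale into 16-bit sample range
        setSampleValueAt(sound, i, int((brightness - 128) * 200))
    return sound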
A common question I get about contextualized approaches to CS1 is: “How can we possibly offer more than one introductory course with our few teachers?” Valerie Barr has a nice paper in the recent Journal of Computing Sciences in Colleges where she explains how her small department was able to offer multiple CS1s, and the positive impact it had on their enrollment.
The department currently has 6 full-time faculty members and a 6-course-per-year teaching load. Each introductory course is taught studio style, with integrated lecture and hands-on work. The old CS1 had a separate lab session and counted as 1.5 courses of teaching load. Now the introductory courses (except Programming for Engineers) continue this model: they meet for the additional time and count as 1.5 courses for the faculty member, allowing substantial time for hands-on activities. Each section is capped at 18 students and taught in a computer lab in order to facilitate the transition between lecture and hands-on work.
In order to make room in the course schedule for the increased number of CS1 offerings, the department eliminated the old CS0 course. A number of additional changes were made to accommodate the new approach to the introductory CS curriculum: the number of prescribed courses for the major was reduced from 8 (out of 10) to 5 (which, by increasing the number of electives, has the added benefit of giving students more flexibility and choice within the general guidelines of the major); elective courses were put on a rotation schedule so that each one is taught every other or every third year; and a 4-year schedule of offerings was made available to students so that they can plan according to the course rotation.