Posts tagged ‘Media Computation’
Pearson has asked me to update our Python Media Computation book, “Introduction to Computing and Programming: A Multimedia Approach.” This will be the fourth edition. I plan to address the errata (as well as the ones I haven’t yet posted to the website), add new assignments, and change out the pictures (a lot of those pictures are 12 years old now). I think I’m going to give up on trying to do screen-scraping off a live website — they keep changing too fast. Instead, I might add something about how to parse CSV files, which are common and useful.
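If the CSV idea makes it in, an exercise might look something like the sketch below (the file name, column names, and numbers are invented for illustration); Python's standard csv module does the parsing:

```python
import csv
import io

# A tiny CSV dataset, inlined here so the sketch is self-contained.
# In an assignment this would come from a file, e.g. open("rainfall.csv").
DATA = """city,rainfall
Atlanta,50.2
Seattle,37.5
Phoenix,8.0
"""

def average_rainfall(csv_text):
    """Read rows as dictionaries keyed by the header line, then average one column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    values = [float(row["rainfall"]) for row in reader]
    return sum(values) / len(values)

print(average_rainfall(DATA))
```

The appeal over screen-scraping is that the file format is stable: the same three-line function works on any CSV with a `rainfall` column, no matter where the data came from.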
I have a couple of bigger ideas for changes, and I’d appreciate feedback from readers. (And I’m certainly interested in other advice you might give me.)
(1) CPython cross-platform libraries have come a long way since the 3rd edition was written. It’s likely that we could write a media library for CPython that works much like the media library in JES. A CPython version of Media Computation would likely be faster. We probably would not re-create JES in CPython. It will take some time to develop a CPython version, so a Jython/JES-based 4th edition could be available in early 2015 (aiming to be out before SIGCSE 2015), but a CPython version would probably be mid-2015.
- (a) Is a CPython version something that you would find interesting and worth adopting?
- (b) Would you have a preference for one or the other? Or would you see value in having both versions?
(2) At Georgia Tech, we have started teaching the book with a brief excursion into strings and lists before introducing pictures. We talk about the medium as being language or text, and we manipulate characters in the strings using algorithms like those we later use with pixels in a picture or samples in a sound. For example, we can “mirror” words as we later mirror sounds or pictures. The advantage is that students can see all the characters in the string, and print out every step of the loop — where neither of those is reasonable to do with pictures or sounds.
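The word-mirroring exercise looks roughly like this sketch (the function name and details are ours for illustration, not taken from the book); every step of the loop can be printed, which is impractical with pixels or samples:

```python
def mirror(word):
    """Mirror a word the way MediaComp later mirrors a sound or picture:
    copy the front half onto the back half."""
    chars = list(word)              # strings are immutable, so work on a list
    length = len(chars)
    for i in range(length // 2):
        chars[length - 1 - i] = chars[i]
        print("step", i, ":", "".join(chars))   # students can watch every step
    return "".join(chars)

print(mirror("hello"))   # -> 'heleh'
```

The same index arithmetic (`length - 1 - i` paired with `i`) reappears later when mirroring a picture across its vertical midline, so the transfer is direct.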
We’re considering adding an OPTIONAL chapter at the beginning of the book in the 4th edition. We wouldn’t remove the introduction to loops in Chapter 3. We would move some of the string processing from Chapter 10 into this new Chapter 2.5, but leave methods and file I/O for Chapter 10. You would be able to use the book as-is, but if you want to start with characters and words as a text medium first, we would support that path, too.
- Does that seem like a chapter that you would find useful? Or would you rather just keep the book with the chapters as they are now?
Thanks for any advice you would like to give me on producing the 4th edition of the book!
I got a chance to review and write a foreword for:
I’m really pleased to see that it’s finally out! Recommended.
Interesting economic argument being made in the piece below: we no longer have large numbers of manufacturing jobs, but we do have large numbers of jobs that involve creating with digital technologies.
At the start of our Media Computation book, we make the argument that follows from this. Photoshop, Final Cut Pro, and Audacity are wonderful tools that can do a lot — if you know how to use them. Knowing programming gives you the ability to make with digital media, even if you don’t know how to get the tools to do it. Knowing programming lets you say things with digital media, even when the tools don’t support it.
“We have moved from the industrial age to the knowledge economy,” said Facebook’s CIO Tim Campos at the HP Discover conference in Barcelona last month. An economy, that is, in which a company’s “core asset” lies not in material infrastructure but rather “the thoughts and ideas that come from our workforce.”
The blog post linked below felt close to home, though I measure it differently than lines of code. The base point is that we tend to start introductory programming courses assuming far more knowledge than students already have. My experience this semester is that we tend to expect students to gain more knowledge more quickly than they do (and maybe, than they can).
I’m teaching Python Media Computation this semester, on campus (for the first time in 7 years). As readers know, I’ve become fascinated with worked examples as a way of learning programming, so I’m using a lot of those in this class. In Ray Lister terms, I’m teaching program reading more than program writing. In Bloom’s taxonomy terms, I’m teaching comprehension before synthesis.
As is common in our large courses at Georgia Tech (I’m teaching in a lecture of 155 students, and there’s another parallel section of just over 100), the course is run by a group of undergraduate TA’s. Our head TA took the course, and has been TA-ing it for six semesters. The TA’s create all homeworks and quizzes. I get to critique (which I do), and they do respond reasonably. I realize that all the TA’s expect that the first thing to measure in programming is writing code. All the homeworks are programming from a blank sheet of paper. Even the first quiz is “Write a function to…”. The TA’s aren’t trying to be difficult. They’re doing as they were taught.
One of the big focal research areas in the new NSF STEM-C solicitation is “learning progressions.” Where can we reasonably expect students to start in learning computer science? How fast can we reasonably expect them to learn? What is a reasonable order of topics and events? We clearly need to learn a lot more about these to construct effective CS education.
I’m not going to articulate the next few orders of magnitude, both because they are not relevant to beginner or intermediate programmers, and because I’m climbing the 1K → 10K transition myself, so I’m not able to articulate them well. But they have to do with elegance, abstraction, performance, scalability, collaboration, best practices, and code as craft.
The 3am realization is that many, many “introduction” to programming materials start at the 1 → 10 transition. But learners start at the 0 → 1 transition — and a 10-line program has the approachability of Everest at that point.
The Computing At Schools effort has a regular newsletter, SwitchedOn. It’s packed full of useful information for computer science teachers, and is high-quality (in both content and design). The latest issue is on Computational Thinking and includes mentions of Media Computation and Pixel Spreadsheet, which was really exciting for me.
Download the latest issue of our newsletter here. The newsletter is produced once a term and is packed with articles and ideas for teaching computer science in the classroom.
This issue takes a look at the idea of Computational Thinking. Computational thinking is something children do, not computers. Indeed, many activities that develop computational thought don’t need a computer at all. This influential term helps stress the educational processes we are engaged in. Developing learning and thinking skills lies behind our view that all children need exposure to such ideas. There is something of interest to all CAS members and the wider teaching community: resources and ideas shared by teachers, both primary and secondary. There is also a section on the Network of Excellence for those new to CAS who aren’t familiar with current developments.
I get to teach our Media Computation in Python course, on Georgia Tech’s campus, in Spring 2014. I’ve had the opportunity to teach it on study abroad, and that was wonderful. I have not had the opportunity to teach it on campus since 2007. Seven years away from a course, especially a big one with an army of undergraduate TA’s behind it, is a long time. The undergraduate TA’s create all the assignments and the exams, in all of the introductory courses in the College of Computing. Bill Leahy, who is teaching it this summer semester, kindly invited me to meet with the TA’s in order to give me a sense for how the course works now.
It’s a very different course than the one that I used to teach.
- I mentioned the collage assignment, which was one of the most successful assignments in MediaComp (and shows up even today in AP CS implementations and MATLAB implementations). Not a single TA knew what I was talking about.
- The TA’s complained to me about Piazza. “Nobody posts” and “I always forget that it’s there” and “It seems to work in CS classes, but not for the other majors.” I told them about work that Jennifer Turns and I did in 1999 that showed why Piazza and newsgroups don’t work as well as integrated computer-supported collaborative learning, and how that work led to our development of Swikis. Swikis were abandoned many years ago in MediaComp, even before the FERPA concerns.
- Sound is mostly gone. In one turtle-graphics assignment, students have to play a sound. Students never manipulate samples in a sound anymore.
- I started to explain why we do what we do in MediaComp: Introducing iteration as set operations, favoring replicated code over abstraction in the first half of the semester, avoiding else. They thought that those were interesting ideas to consider adding to the course. I borrowed a copy of the textbook from one of them, and read them part of the preface about Ann Fleury’s work. Lesson: Just because you put it in the book and provide the citation, doesn’t mean that anybody actually reads it, even the TA’s.
It’s a relevant story because I’m presenting a paper at ICER 2013 on Monday 12 August that is a 10-year retrospective on the research on Media Computation. (I’m making a preview version of the paper available here, which I’ll take down when the ACM DL opens up the ICER 2013 papers.) It was 10 years ago that we posted our working document on creating MediaComp and our 2002 and 2003 published design papers, all of which are still available. We made explicit hypotheses about what we thought Media Computation would do. The ICER 2013 paper is a progress report. How’d we do? What don’t we know? In hindsight, some of those hypotheses seem foolish.
- The Plagiarism Hypothesis: We thought that the creative focus of MediaComp would reduce plagiarism. We haven’t done an explicit study, but even if we found a statistically significant difference, it would be practically meaningless: ten years later, there is still lots of academic misconduct.
- The Retention Hypothesis: Perhaps our biggest win — students are retained better in MediaComp than traditional classes, across multiple institutions. The big follow-up question: Why? Exploring that question has involved the work of multiple PhD students over the last decade, helping us understand contextualized-computing education.
- The Gender Hypothesis: We designed MediaComp based on recommendations from people like Jane Margolis and Joanne Cohoon on how to make an introductory CS course that would be successful with women. Our evidence suggests that it worked, but we don’t actually know much about men in the class.
- The Learning Hypothesis: We hoped that students would learn as much in MediaComp as in our traditional CS1 class. Answering that question led to Allison Elliott Tew’s excellent work on FCS1. The bottom line, though, is that we still don’t know.
- The More-Computing Hypothesis: We thought that non-CS majors taking MediaComp would become enlightened and take more CS classes. No, that didn’t really happen, and Mike Hewner’s work helped us understand why not.
There are two meta-level points that I try to make in this paper.
- The first is: Why did we think that curriculum could do all of this, anyway? Curriculum can only have so much effect. There are lots of other variables in student learning, and curriculum only touches some of those.
- The second is: How did we move from Marco Polo to theory-building? Most papers at SIGCSE have been classified as Marco Polo (“We went here, and we saw that.”). MediaComp’s early papers were pretty much that — with the addition of explicit hypotheses about where we thought we’d go. It’s been those explicit hypotheses that have driven much of the last 10 years of work. Understanding those hypotheses, and the results that we found in pursuit of them, has led us to develop theory and to support a broader understanding of how students learn computing.
Lots of things change over 10 years, and not always in positive directions. Good lessons and practices of the past get forgotten. Sometimes change is good and comes from lessons learned that are well worth articulating and making explicit. And sometimes, we got it plain wrong in the past — there are ideas that are worth discarding. It’s worth reflecting back occasionally and figuring out how we got to where we are.
Nice to see AP CS teachers picking up Media Computation, and hope to see more of that when Barbara’s Picture Lab starts rolling out. Myra Deister also sent me links to her AP CS students’ use of MediaComp.
We worked through several activities, focusing on filters and transformations. The students enjoyed seeing that they could write programs that performed some of the same features as Photoshop. The unit concluded with a collage project in which students combined several of their filters and transformations into a final and unique image.
I was extremely pleased to see that one of the new AP Computer Science labs, Picture Lab, was developed by Barbara Ericson and is based on her book. I think this new lab will bring an authentic and engaging series of activities to a wider audience.
Leo Porter, Charlie McDowell, Beth Simon, and I collaborated on a paper on how to make introductory programming work, now available in CACM. It’s a shorter, more accessible version of Leo and Beth’s best-paper-award-winning SIGCSE 2013 paper, with history and kibitzing from Charlie and me:
Many Communications readers have been in faculty meetings where we have reviewed and bemoaned statistics about how bad attrition is in our introductory programming courses for computer science majors (CS1). Failure rates of 30%–50% are not uncommon worldwide. There are usually as many suggestions for how to improve the course as there are faculty in the meeting. But do we know anything that really works?
We do, and we have research evidence to back it up. Pair programming, peer instruction, and media computation are three approaches to reforming CS1 that have shown positive, measurable impacts. Each of them is successful separately at improving retention or helping students learn, and combined, they have a dramatic effect.
I’ve mentioned before how much I enjoy the Computing At Schools online forum. I got involved in a discussion about how to teach teachers programming, and the question was raised: Why do we have to teach programming? Shouldn’t we just teach concepts? Neil Brown (in a blog post that I highly recommend reading) suggested, “We teach programming to make it concrete.” One of the commenters suggested that memory is very concrete. I disagreed, and am sharing here my response (for those who don’t yet belong to CAS), with editing and expansion:
Concreteness and abstraction in computing are difficult to define because, really, nothing in computing is concrete, in the Piagetian sense. Piaget talked about concreteness in terms of sensory input. I’ve heard before that “memory is concrete — it’s really there.” Can you see it? Can you touch it? Sure, you can “see” it in a debugger — but that’s seeing through a program. Maybe that memory is “made up” like any video game or movie special effect. It’s no more “real” than Yoda or Mario. We can sense the output of computation, which can then be Piagetian-concrete, but not the computation itself.
Uri Wilensky (who was a student of Seymour Papert) has a wonderful paper on concreteness. He redefines concreteness as being a quality of relationship. “The richer the set of representations of the object, the more ways we have of interacting with it, the more concrete it is for us.” Uri gives us a new way of measuring abstract-concrete in terms of a continuum.
- Memory is really pretty abstract for the novice. How many ways can a newcomer to computing view it or manipulate it? It might be really concrete if you know C, because you can manipulate memory in many ways in C. You can construct a relationship with it (to use Uri’s term). From Scratch or Python or Java, memory is totally abstract for the novice. There’s no way to manipulate it directly.
- We did Media Computation because images and sounds are concrete. We get sensory input from them. So, computation to manipulate images and sounds gives us concrete ways to explore computation. We can’t see the computation, but as we change the computation and get a different sensory output, we can develop a relationship with computing.
- Threads are hopelessly abstract. You have to be pretty expert, and know how to think about and manipulate processes-as-a-thing, before threads can become concrete.
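Uri's point about building a relationship through manipulation is easy to see with pixels. Here is a plain-Python sketch (no JES required; the picture-as-list-of-(r, g, b)-tuples representation is our simplification) in which changing one line of the computation visibly changes the sensory output:

```python
def negative(pixels):
    """Invert every channel of every pixel -- the classic MediaComp negative."""
    return [(255 - r, 255 - g, 255 - b) for (r, g, b) in pixels]

def lighten(pixels, amount=30):
    """A one-line change to the computation gives a visibly different picture."""
    return [(min(r + amount, 255), min(g + amount, 255), min(b + amount, 255))
            for (r, g, b) in pixels]

picture = [(10, 200, 30), (255, 255, 255), (0, 0, 0)]
print(negative(picture))
print(lighten(picture))
```

The learner never sees the computation itself, but by editing it and re-rendering, they accumulate the "richer set of representations" that Uri describes.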
I highly recommend Shuchi Grover’s piece in EdSurge news (linked below). She makes a great point — that the goal of learning computing goes beyond learning to code. It’s not enough to learn to code. She talks about the challenge of learning to code:
There are similar themes in Roy Pea’s 1983 paper with Midian Kurland, “On the cognitive prerequisites of learning computer programming.”
Even among the 25% of the children who were extremely interested in learning programming, the programs they wrote reached but a moderate level of sophistication after a year’s work and approximately 30 hours of on-line programming experience. We found that children’s grasp of fundamental programming concepts such as variables, tests, and recursion, and of specific Logo primitive commands such as REPEAT, was highly context-specific and rote in character. To take one example: A child who had written a procedure using REPEAT which repeatedly printed her name on the screen was unable to recognize the efficiency of using the REPEAT command to draw a square. Instead, the child redundantly wrote the same line-drawing procedure four times in succession.
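In Python terms (our hypothetical reconstruction, not Pea and Kurland's Logo code), the child's difficulty was in seeing that the two tasks share one loop pattern:

```python
def print_name(name, times):
    """What the child could do: repeat a print -- the REPEAT she understood."""
    lines = []
    for i in range(times):
        lines.append(name)
    return "\n".join(lines)

def square_path(side):
    """What the child wrote as four redundant copies of the same drawing code.
    The identical loop pattern covers all four sides."""
    path = []
    for i in range(4):              # one pass per side of the square
        path.append(("forward", side))
        path.append(("turn", 90))
    return path
```

Both functions are instances of "repeat this body N times"; the rote, context-specific learner sees two unrelated tricks.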
Coding is hard. Coding has always been hard. We want students to know more than just code about computing.
I’m not sure that Shuchi is right. Maybe learning to code is enough — if it happens. When I studied foreign languages in secondary and post-secondary school (Latin and French for me), there was a great emphasis on learning the culture of a language. There was an explicit belief that learning about the culture of a language facilitated learning the language. Does it go further? Can one learn the language without knowing anything about the culture? If one does learn the language well, does one necessarily learn the culture too? Maybe it works the same for programming languages.
Our human-centered computing PhD students who focus on learning sciences and technologies (LS&T) are required to read two chapters of Noss and Hoyles’ 1996 book Windows on Mathematical Meanings: Learning Cultures and Computers. They make the argument that you can’t learn Logo well apart from an effective classroom culture. As Pea and Kurland noted in 1983, and as Grover has noted thirty years later in 2013, students aren’t really learning programming well.
What if they did? What if students did learn programming? Would they necessarily also learn computing? And isn’t it possible that a culture that taught programming well would also teach things beyond coding? Maybe even problem-solving skills? David Palumbo’s excellent review of the literature on programming and problem-solving pointed out that there was very little evidence of a link from programming to problem-solving skills, but for the most part, students weren’t learning programming. I don’t really think that learning to code would immediately lead to problem-solving skills. I do wonder if learning to code might also lead to learning the other things that we think are important about computing.
There is positive evidence for the value of classroom culture. Consider the work by Leo Porter and Beth Simon, where they found that combining pair programming, peer instruction, and Media Computation led to positive retention and learning (as measured by success in later classes). Porter and Simon have also noted how students learning programming develop new insight into the applications that they use. Maybe, if you change the culture in the classroom and what students do, students learn both programming and computing.
I’ve seen EarSketch demoed a few times, and Barb is involved in planning their summer camp version. It’s very cool — goes deeper into Python programming and music than MediaComp.
The students use EarSketch, the software created by Magerko and Jason Freeman, an associate professor in Tech’s School of Music. EarSketch utilizes the Python programming language and Reaper, a digital audio work station program similar to those used in recording studios throughout the music industry.
“Young people don’t always realize that computer science and programming can be fun,” Freeman said. “This is allowing students to express their own creative musical ideas as they learn computer science principles.”
Barbara Ericson and I gave the Castle Lecture at West Point in April. The Castle Lecture is a big deal — we spoke before the entire first-year class at West Point. (Last year’s lecturer was David Ferrucci, PI of the IBM Watson project.) We received this honor because West Point requires Computer Science of everyone, and this is the first year that all the first-year students used our Media Computation Python textbook in that class. So, we got a chance to lecture to 1200 future Army officers and their instructors, all of whom knew Media Computation. It was a stunning experience.
The whole day was amazing. If you’ve never been to West Point, I highly recommend that you take the opportunity. The campus is beautiful. The traditions and stories about the place are amazing. There’s such a sense of history, such a buzz about the place. We ate lunch with a group of cadets (in an absolutely enormous mess hall where thousands of students eat lunch in 20 minutes) and were deeply impressed. These are undergraduate students who are making a huge commitment to service to their country.
The biggest intellectual treat for me was learning more about their course, IT 105. 700 students every semester take the course — in groups of 20. 16 instructors are involved in teaching the course. We met with the instructors who teach just about nothing but IT 105, but also met some of the other West Point EECS instructors who teach a section or two of IT 105 along with their other courses. (Like Dr. Tanya Tolles Markow, a GT alumna, who teaches IT 105 and database classes.)
The person who makes this all work is Susan K. Schwartz (CAPT, USN, Ret). Her attention to detail is phenomenal. Susan is going to give me her errata for the third edition when she finishes this semester; they are more detailed than all the corrections that all the other instructors have sent me for both of the previous editions combined. Susan creates detailed lecture notes and assignments that drive all the sections for every day across the entire semester. All the students who take the course take the same exams, so Susan provides enough detail that all the instructors know what to do in each class and all students get to the finish line.
Barb and I each got to sit in one section. This is the opposite of a MOOC. The teacher knows every student. She (I attended one of Susan’s classes) calls on individual students, prods students to engage, and gives them activities in class. It’s small, interactive, and individualized. Yet, there are 700 students taking it at once. It’s an enormous effort to make that large of a class work such that students can all have that small class experience. We’re going to try to get Susan’s materials available to other Media Computation teachers.
The lecture was fun and exciting to do. We talked about how media was going to influence them for the rest of their lives. I gave a brief audio lecture, then we talked about computers that can process all that we can hear and see, and that will have the processing power we can expect ten years from now. What does that mean for the rest of their lives? Barb gave a great overview of advances in robotics and cyber-security and even prosthetics. Afterward at the reception, we each had 9-12 cadets asking us follow-up questions for about an hour. We got back to the Thayer Hotel (what a place!) just buzzing from the amazing adventure of the day.
“Gas station without pumps” has a great point here (linked below), but I’d go a bit further. As he suggests, proponents of an educational intervention (“fad”) rarely admit that it’s a bad idea, rarely gather evidence showing that they’re wrong, and swamp the research literature with evidence that they’re right.
But what if external observers test the idea, and find that it works as hypothesized? Does that mean that it will work for everyone? Media Computation has been successfully used to improve retention at several institutions, with both CS majors and non-CS majors, in evaluations not connected to me and my students. That doesn’t mean that it will work for each and every teacher. There are so many variables in any educational setting. Despite the promises of the “What Works Clearinghouse,” even well-supported interventions will sometimes fail, and there are interventions that are not well-supported that sometimes work. Well-supported interventions are certainly more promising and more likely to work. The only way to be sure, as the blog post below says, is to try it, and to measure it as well as you can, to see if it’s working for you.
I would posit that there is another series of responses to educational fads:
- It is great, everyone should do this.
- Maybe it doesn’t work that well in everybody’s hands.
- It was a terrible idea—no one should ever do that.
Think, for example, of the Gates Foundation’s attempt to make small high schools. They were initially very enthusiastic, then saw that it didn’t really work in a lot of the schools where they tried it, then they abandoned the idea as being completely useless and even counter-productive.
The difficult thing for practitioners is that the behavior of proponents in stage 1 of an educational fad is exactly the same as in Falkner’s third stage of acceptance. It is quite difficult to see whether a pedagogical method is robust, well-tested, and applicable to a particular course or unit—especially when so much of the information about any given method is hype from proponents. Educational experiments seem like a way to cut through the hype, but research results from educational experiments are often on insignificantly small samples, on very different courses from the one the practitioner needs to teach, and with all sorts of other confounding variables. Often the only way to determine whether a particular pedagogic technique works for a particular class is to try it and see, which requires a leap of faith, a high risk of failure, and (often) a large investment in developing new course materials.
For teachers in those old, stodgy, non-MOOC, face-to-face classes (“Does anybody even *do* that anymore?!?”), I strongly recommend using “Clickers” and Peer Instruction, especially based on these latest findings from Beth Simon and colleagues at the University of California at San Diego. They have three papers to appear at SIGCSE 2013 about their multi-year experiment using Peer Instruction:
- They found that use of Peer Instruction, beyond the first course (into theory and architecture), halved their failure rates: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=176
- They found that the use of Peer Instruction, with Media Computation and pair-programming, in their first course (on the quarter system, so it’s only 10 weeks of influence) increased the percentage of students in their major (tracking into the second year and beyond) up to 30%: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=96
- They also did a lecture vs. Peer Instruction head-to-head comparison which showed significant impact of the instructional method: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=223
If we have such strong evidence that changing our pedagogy does work, are we doing our students a disservice if we do not use it?
IEEE Computer Society does good videos. They did a nice video at the Awards Ceremony, and now, they’ve put together a follow-up video with footage from interviews that they did after the Awards Ceremony. I always find it painful to watch myself being interviewed in a video, but I like how they got what’s important about Media Computation and Georgia Computes in this piece. You always try to get some of the important stuff into an interview, but the stuff you thought was most important usually ends up on the cutting room floor. Here, they got what I thought were the important bits.