Posts tagged ‘Media Computation’
I get to teach our Media Computation in Python course, on Georgia Tech’s campus, in Spring 2014. I’ve had the opportunity to teach it on study abroad, and that was wonderful, but I have not taught it on-campus since 2007. Being gone from a course for seven years, especially a big one with an army of undergraduate TA’s behind it, is a long time. The undergraduate TA’s create all the assignments and the exams in all of the introductory courses in the College of Computing. Bill Leahy, who is teaching it this summer semester, kindly invited me to meet with the TA’s to give me a sense of how the course works now.
It’s a very different course than the one that I used to teach.
- I mentioned the collage assignment, which was one of the most successful assignments in MediaComp (and shows up even today in AP CS implementations and MATLAB implementations). Not a single TA knew what I was talking about.
- The TA’s complained to me about Piazza: “Nobody posts,” “I always forget that it’s there,” and “It seems to work in CS classes, but not for the other majors.” I told them about work that Jennifer Turns and I did in 1999 that showed why Piazza and newsgroups don’t work as well as integrated computer-supported collaborative learning, and how that work led to our development of Swikis. Swikis were abandoned many years ago in MediaComp, even before the FERPA concerns.
- Sound is mostly gone. Students play a sound in one assignment based on turtle graphics, but they never manipulate the samples in a sound anymore.
- I started to explain why we do what we do in MediaComp: introducing iteration as set operations (sketched below), favoring replicated code over abstraction in the first half of the semester, and avoiding else. They thought those were interesting ideas to consider adding to the course. I borrowed a copy of the textbook from one of them and read them the part of the preface about Ann Fleury’s work. Lesson: just because you put it in the book and provide the citation doesn’t mean that anybody actually reads it, even the TA’s.
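To make “iteration as set operations” concrete, here is a minimal sketch in the JES-style Python that the MediaComp book uses (it assumes the JES built-ins makePicture, pickAFile, getPixels, getRed, setRed, and show):

```python
# Iteration as a set operation: do the same thing to every pixel,
# with no index arithmetic, no bounds to track, and no else branch.
def decreaseRed(picture):
    for pixel in getPixels(picture):
        setRed(pixel, getRed(pixel) * 0.5)

picture = makePicture(pickAFile())
decreaseRed(picture)
show(picture)
```

The point of the style is that students can think “halve the red in every pixel” before they ever have to think about indices or loop bounds.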
It’s a relevant story because I’m presenting a paper at ICER 2013 on Monday 12 August that is a 10 year retrospective on the research on Media Computation. (I’m making a preview version of the paper available here, which I’ll take down when the ACM DL opens up the ICER 2013 papers.) It was 10 years ago that we posted our working document on creating MediaComp and our 2002 and 2003 published design papers, all of which are still available. We made explicit hypotheses about what we thought Media Computation would do. The ICER 2013 paper is a progress report. How’d we do? What don’t we know? In hindsight, some of those hypotheses seem foolish.
- The Plagiarism Hypothesis: We thought that the creative focus of MediaComp would reduce plagiarism. We haven’t done an explicit study, but even a statistically significant difference would be practically meaningless: ten years later, there is still lots of academic misconduct.
- The Retention Hypothesis: Perhaps our biggest win — students are retained better in MediaComp than traditional classes, across multiple institutions. The big follow-up question: Why? Exploring that question has involved the work of multiple PhD students over the last decade, helping us understand contextualized-computing education.
- The Gender Hypothesis: We designed MediaComp based on recommendations from people like Jane Margolis and Joanne Cohoon on how to make an introductory CS course that would be successful with women. Our evidence suggests that it worked, but we don’t actually know much about men in the class.
- The Learning Hypothesis: We hoped that students would learn as much in MediaComp as in our traditional CS1 class. Answering that question led to Allison Elliott Tew’s excellent work on FCS1. The bottom line, though, is that we still don’t know.
- The More-Computing Hypothesis: We thought that non-CS majors taking MediaComp would become enlightened and take more CS classes. No, that didn’t really happen, and Mike Hewner’s work helped us understand why not.
There are two meta-level points that I try to make in this paper.
- The first is: Why did we think that curriculum could do all of this, anyway? Curriculum can only have so much effect. There are lots of other variables in student learning, and curriculum only touches some of those.
- The second is: How did we move from Marco Polo to theory-building? Most papers at SIGCSE have been classified as Marco Polo (“We went here, and we saw that.”) MediaComp’s early papers were pretty much that, with the addition of explicit hypotheses about where we thought we’d go. It’s been those explicit hypotheses that have driven much of the last 10 years of work. Understanding those hypotheses, and the results that we found in pursuit of them, has led us to develop theory and to support a broader understanding of how students learn computing.
Lots of things change over 10 years, and not always in positive directions. Good lessons and practices of the past get forgotten. Sometimes change is good and comes from lessons learned that are well worth articulating and making explicit. And sometimes, we got it plain wrong in the past — there are ideas that are worth discarding. It’s worth reflecting back occasionally and figuring out how we got to where we are.
Nice to see AP CS teachers picking up Media Computation, and hope to see more of that when Barbara’s Picture Lab starts rolling out. Myra Deister also sent me links to her AP CS students’ use of MediaComp.
We worked through several activities, focusing on filters and transformations. The students enjoyed seeing that they could write programs that performed some of the same features as Photoshop. The unit concluded with a collage project in which students combined several of their filters and transformations into a final and unique image.
I was extremely pleased to see that one of the new AP Computer Science labs, Picture Lab, was developed by Barbara Ericson and is based on her book. I think this new lab will bring an authentic and engaging series of activities to a wider audience.
Leo Porter, Charlie McDowell, Beth Simon, and I collaborated on a paper on how to make introductory programming work, now available in CACM. It’s a shorter, more accessible version of Leo and Beth’s best-paper-award-winning SIGCSE 2013 paper, with history and kibitzing from Charlie and me:
Many Communications readers have been in faculty meetings where we have reviewed and bemoaned statistics about how bad attrition is in our introductory programming courses for computer science majors (CS1). Failure rates of 30%–50% are not uncommon worldwide. There are usually as many suggestions for how to improve the course as there are faculty in the meeting. But do we know anything that really works?
We do, and we have research evidence to back it up. Pair programming, peer instruction, and media computation are three approaches to reforming CS1 that have shown positive, measurable impacts. Each of them is successful separately at improving retention or helping students learn, and combined, they have a dramatic effect.
I’ve mentioned before how much I enjoy the Computing At Schools online forum. I got involved in a discussion about how to teach teachers programming, and the question was raised: Why do we have to teach programming? Shouldn’t we just teach concepts? Neil Brown (in a blog post that I highly recommend reading) suggested, “We teach programming to make it concrete.” One of the commenters suggested that memory is very concrete. I disagreed, and am sharing here my response (for those who don’t yet belong to CAS), with editing and expansion:
Concreteness and abstraction in computing are difficult to define because, really, nothing in computing is concrete, in the Piagetian sense. Piaget talked about concreteness in terms of sensory input. I’ve heard before that “memory is concrete — it’s really there.” Can you see it? Can you touch it? Sure, you can “see” it in a debugger — but that’s seeing through a program. Maybe that memory is “made up” like any video game or movie special effect. It’s no more “real” than Yoda or Mario. We can sense the output of computation, which can then be Piagetian-concrete, but not the computation itself.
Uri Wilensky (who was a student of Seymour Papert) has a wonderful paper on concreteness. He redefines concreteness as being a quality of relationship. “The richer the set of representations of the object, the more ways we have of interacting with it, the more concrete it is for us.” Uri gives us a new way of measuring abstract-concrete in terms of a continuum.
- Memory is really pretty abstract for the novice. How many ways can a newcomer to computing view it or manipulate it? It might be really concrete if you know C, because you can manipulate memory in many ways in C; you can construct a relationship with it (to use Uri’s term). From Scratch or Python or Java, memory is totally abstract for the novice. There’s no way to directly manipulate it.
- We did Media Computation because images and sounds are concrete. We get sensory input from them. So, computation to manipulate images and sounds gives us concrete ways to explore computation. We can’t see the computation, but as we change the computation and get a different sensory output, we can develop a relationship with computing (see the sketch after this list).
- Threads are hopelessly abstract. You have to be pretty expert, and know how to think about and manipulate processes-as-a-thing, before threads can become concrete.
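Here is what that relationship-building looks like in practice, as a minimal sketch in JES-style MediaComp Python (assuming the JES built-ins makeSound, pickAFile, getSamples, getSampleValue, setSampleValue, and play):

```python
# Change the computation, hear a different output: doubling every
# sample value makes the sound louder (very loud samples may clip).
def louder(sound):
    for sample in getSamples(sound):
        setSampleValue(sample, 2 * getSampleValue(sample))

sound = makeSound(pickAFile())
louder(sound)
play(sound)
```

Swap the 2 for 0.5 and the sound gets quieter; negate it and the sound is (surprisingly to students) unchanged to the ear. Each variation is a sensory answer to a question about the computation.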
I highly recommend Shuchi Grover’s piece in EdSurge news (linked below). She makes a great point — that the goal of learning computing goes beyond learning to code. It’s not enough to learn to code. She talks about the challenge of learning to code:
There are similar themes in Roy Pea’s 1983 paper with Midian Kurland, “On the cognitive prerequisites of learning computer programming.”
Even among the 25% of the children who were extremely interested in learning programming, the programs they wrote reached but a moderate level of sophistication after a year’s work and approximately 30 hours of on-line programming experience. We found that children’s grasp of fundamental programming concepts such as variables, tests, and recursion, and of specific Logo primitive commands such as REPEAT, was highly context-specific and rote in character. To take one example: A child who had written a procedure using REPEAT which repeatedly printed her name on the screen was unable to recognize the efficiency of using the REPEAT command to draw a square. Instead, the child redundantly wrote the same line-drawing procedure four times in succession.
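A Python-turtle analog of that example (hypothetical; the original was in Logo) makes the contrast plain:

```python
import turtle

t = turtle.Turtle()

# What the child did: the same line-drawing code, written four times.
t.forward(100); t.right(90)
t.forward(100); t.right(90)
t.forward(100); t.right(90)
t.forward(100); t.right(90)

# What the child didn't see: the same square as a loop,
# the analog of Logo's REPEAT 4 [FD 100 RT 90].
for _ in range(4):
    t.forward(100)
    t.right(90)

turtle.done()
```

The child clearly *used* REPEAT, but only in the one rote context where she had learned it.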
Coding is hard. Coding has always been hard. We want students to know more about computing than just code.
I’m not sure that Shuchi is right. Maybe learning to code is enough, if it happens. When I studied foreign languages in secondary and post-secondary school (Latin and French for me), there was a great emphasis on learning the culture of a language. There was an explicit belief that learning about the culture of a language facilitated learning the language. Does it go further? Can one learn the language without knowing anything about the culture? If one does learn the language well, does one necessarily learn the culture too? Maybe it works the same for programming languages.
Our human-centered computing PhD students who focus on learning sciences and technologies (LS&T) are required to read two chapters of Noss and Hoyles’s 1996 book Windows on Mathematical Meanings: Learning Cultures and Computers. They make the argument that you can’t learn Logo well apart from an effective classroom culture. As Pea and Kurland noted in 1983, and as Grover noted thirty years later in 2013, students aren’t really learning programming well.
What if they did? What if students did learn programming? Would they necessarily also learn computing? And isn’t it possible that a culture that taught programming well would also teach things beyond coding, maybe even problem-solving skills? David Palumbo’s excellent review of the literature on programming and problem-solving pointed out that there was very little link from programming to problem-solving skills, but for the most part, students weren’t learning programming. I don’t really think that learning to code would immediately lead to problem-solving skills. I do wonder if learning to code might also lead to learning the other things that we think are important about computing.
There is positive evidence for the value of classroom culture. Consider the work by Leo Porter and Beth Simon, who found that combining pair programming, peer instruction, and Media Computation led to positive retention and learning (as measured by success in later classes). Porter and Simon have also noted how students learning programming develop new insight into the applications that they use. Maybe, if you change the culture in the classroom and what students do, students learn both programming and computing.
I’ve seen EarSketch demoed a few times, and Barb is involved in planning their summer camp version. It’s very cool — goes deeper into Python programming and music than MediaComp.
The students use EarSketch, the software created by Magerko and Jason Freeman, an associate professor in Tech’s School of Music. EarSketch utilizes the Python programming language and Reaper, a digital audio workstation program similar to those used in recording studios throughout the music industry.
“Young people don’t always realize that computer science and programming can be fun,” Freeman said. “This is allowing students to express their own creative musical ideas as they learn computer science principles.”
Barbara Ericson and I gave the Castle Lecture at West Point in April. The Castle Lecture is a big deal: we spoke before the entire first-year class at West Point. (Last year’s lecturer was David Ferrucci, PI of the IBM Watson project.) We received this honor because West Point requires Computer Science of everyone, and this is the first year that all the first-year students used our Media Computation Python textbook in that class. So, we got a chance to lecture to 1200 future Army officers and their instructors, all of whom knew Media Computation. It was a stunning experience.
The whole day was amazing. If you’ve never been to West Point, I highly recommend that you take the opportunity. The campus is beautiful. The traditions and stories about the place are amazing. There’s such a sense of history, such a buzz about the place. We ate lunch with a group of cadets (in an absolutely enormous mess hall where thousands of students eat lunch in 20 minutes) and were deeply impressed. These are undergraduate students who are making a huge commitment to service to their country.
The biggest intellectual treat for me was learning more about their course, IT 105. 700 students take the course every semester, in sections of 20, with 16 instructors involved in teaching it. We met with the instructors who teach just about nothing but IT 105, but also met some of the other West Point EECS instructors who teach a section or two of IT 105 alongside their other courses. (Like Dr. Tanya Tolles Markow, a GT alumna, who teaches IT 105 and database classes.)
The person who makes this all work is Susan K. Schwartz (CAPT, USN, Ret). Her attention to detail is phenomenal. Susan is going to give me her errata for the third edition when she finishes this semester, which is more detailed than all the corrections that all instructors have sent me for both of the previous editions combined. Susan creates detailed lecture notes and assignments that drive all the sections for every day across the entire semester. All the students who take the course take the same exams, so Susan provides enough detail so that all the instructors know what to do in each class so that all students get to the finish line.
Barb and I each got to sit in on one section. This is the opposite of a MOOC. The teacher knows every student. She (I attended one of Susan’s classes) calls on individual students, prods students to engage, and gives them activities in class. It’s small, interactive, and individualized. Yet, there are 700 students taking it at once. It’s an enormous effort to make that large a class work such that students can all have that small-class experience. We’re going to try to get Susan’s materials available to other Media Computation teachers.
The lecture was fun and exciting to do. We talked about how media was going to influence them for the rest of their lives. I gave a brief lecture on audio, then we talked about computers that can process all that we can hear and see, and that have the processing power we expect ten years from now. What does that mean for the rest of their lives? Barb gave a great overview of advances in robotics and cyber-security and even prosthetics. Afterward at the reception, we each had 9-12 cadets asking us follow-up questions for about an hour. We got back to the Thayer Hotel (what a place!) just buzzing from the amazing adventure of the day.
“Gas station without pumps” has a great point here (linked below), but I’d go a bit further. As he suggests, proponents of an educational intervention (“fad”) rarely admit that it’s a bad idea, rarely gather evidence showing that they’re wrong, and swamp the research literature with evidence that they’re right.
But what if external observers test the idea and find that it works as hypothesized? Does that mean that it will work for everyone? Media Computation has been successfully used to improve retention at several institutions, with both CS majors and non-CS majors, in evaluations not connected to me and my students. That doesn’t mean that it will work for each and every teacher. There are so many variables in any educational setting. Despite the promises of the “What Works Clearinghouse,” even the well-supported interventions will sometimes fail, and there are interventions that are not well-supported that sometimes work. Well-supported interventions are certainly more promising and more likely to work. The only way to be sure, as the blog post below says, is to try it and to measure it as well as you can, to see if it’s working for you.
I would posit that there is another series of responses to educational fads:
- It is great, everyone should do this.
- Maybe it doesn’t work that well in everybody’s hands.
- It was a terrible idea—no one should ever do that.
Think, for example, of the Gates Foundation’s attempt to make small high schools. They were initially very enthusiastic, then saw that it didn’t really work in a lot of the schools where they tried it, then they abandoned the idea as being completely useless and even counter-productive.
The difficult thing for practitioners is that the behavior of proponents in stage 1 of an educational fad is exactly the same as in Falkner’s third stage of acceptance. It is quite difficult to see whether a pedagogical method is robust, well-tested, and applicable to a particular course or unit—especially when so much of the information about any given method is hype from proponents. Educational experiments seem like a way to cut through the hype, but research results from educational experiments are often on insignificantly small samples, on very different courses from the one the practitioner needs to teach, and with all sorts of other confounding variables. Often the only way to determine whether a particular pedagogic technique works for a particular class is to try it and see, which requires a leap of faith, a high risk of failure, and (often) a large investment in developing new course materials.
For teachers in those old, stodgy, non-MOOC, face-to-face classes (“Does anybody even *do* that anymore?!?”), I strongly recommend using “Clickers” and Peer Instruction, especially based on these latest findings from Beth Simon and colleagues at the University of California at San Diego. They have three papers to appear at SIGCSE 2013 about their multi-year experiment using Peer Instruction:
- They found that use of Peer Instruction, beyond the first course (into theory and architecture), halved their failure rates: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=176
- They found that the use of Peer Instruction, with Media Computation and pair programming, in their first course (on the quarter system, so it’s only 10 weeks of influence) increased the percentage of students continuing in the major (tracked into the second year and beyond) by up to 30%: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=96
- They also did a lecture vs. Peer Instruction head-to-head comparison which showed significant impact of the instructional method: http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=223
If we have such strong evidence that changing our pedagogy does work, are we doing our students a disservice if we do not use it?
IEEE Computer Society does good videos. They did a nice video at the Awards Ceremony, and now, they’ve put together a follow-up video with footage from interviews that they did after the Awards Ceremony. I always find it painful to watch myself being interviewed in a video, but I like how they got what’s important about Media Computation and Georgia Computes in this piece. You always try to get some of the important stuff into an interview, but the stuff you thought was most important usually ends up on the cutting room floor. Here, they got what I thought were the important bits.
This piece got mentioned in an earlier blog post comment by Mylène, and I wanted to make sure that it got highlighted. It’s a wonderful post about what really leads to an enduring relationship with a subject matter. There are some great lessons here for computing education. Media Computation fares well when considered from this perspective. I just used MediaComp as a way of introducing graduate students to Python, and they puzzled (for example) over why sounds came out the way that they did. I thought it worked as a way of getting the students to start reasoning with Python.
An ounce of perplexity is worth a pound of engagement. Give me a student with a question in her head, one that math can help her answer, over a student who’s been engaged by a poster or a celebrity testimonial or the promise of a career. Engagement fades. Perplexity endures.
Perhaps it comes to this: rather than remembering your own tastes as a twelve-year-old, empathize with the tastes of a twelve-year-old who isn’t anything like you, one who has experienced only humiliation and failure in mathematics. What does math have to offer that student?
Daphne talks about the educational research that she’s drawing on. I wondered: What’s new here? Why are people excited about MOOCs?
- Mining educational data to learn about learning isn’t new. It’s an established field with a multi-year conference (http://educationaldatamining.org/EDM2012/). In fact, there’s a standard open source repository for these sorts of data for learning scientists (http://learnlab.org/technologies/datashop/index.php). (I wonder if Coursera and Udacity are contributing to that?)
- Using technology to get students to actively engage with their learning isn’t new. Instructional Management Systems had the entire K-12 curriculum covered back in the 1990s, all based on a similar model of presentation and student activity to enhance learning.
- Getting educational content out to the developing world isn’t new. That was always one of the guiding principles of the Open University UK, and their track record (in terms of completion rates, measured learning, reach into the developing world) is much better than Coursera’s and Udacity’s.
- On-line forums are not new. In fact, the older Computer Supported Collaborative Learning (CSCL) systems (like CSILE/Knowledge Forum and even CoWeb/Swiki) have well-supported claims of facilitating learning, unlike the more modern forums that don’t have similar support.
- The two-sigma effect is old (though recent attempts to replicate Bloom’s result suggest that it wasn’t tutoring but mastery learning that led to a two-sigma effect). If the point of Coursera is to get similar effects of tutoring, why aren’t they starting by studying and replicating human tutoring (as the Cognitive Tutors do), versus putting lectures on video? Lectures were the less-successful model.
Here is what I see as new:
- Video on the Internet. There is an effect of medium and distribution here. Video is compelling. We now have the ability to get lots of video created and shipped anywhere cheaply. When Roger Schank was building his learning systems at Northwestern, they spent a huge amount of effort getting lots of video burned to DVD’s that could be easily accessed. That’s simply not a problem anymore.
- They’re doing it for free. There have been lots of smallish research efforts in the past. There have been companies started that provide these technologies at scale for a cost. Free changes things, particularly with students and families today bearing a greater portion of the cost of higher education.
- There is the potential to do more, to make students feel like individuals, rather than part of a 100K herd. When I raised the question of “what’s new about MOOCs” with faculty at Georgia Tech, my colleagues pointed out the potential value of using modern, real-time machine learning and data analytics techniques to get greater insight into learning difficulties, and to better personalize the learning experience. Daphne says in her TED talk that the Coursera system could recognize the need for more remedial material and provide it. I recognize that potential, though the technology isn’t in place yet. Current MOOCs have little or no machine learning, and no attempt at personalization. But I see a problem with Coursera recognizing a need and recommending remedial material. The current MOOCs won’t be able to offer personalization for the audiences that I most care about (e.g., adult learners without previous CS background, non-majors studying CS), audiences that probably would need more background material than the top students, because those students simply aren’t there. My audiences are most likely in the 80-90% who are dropping out of MOOCs after registering. Even the most sophisticated machine learning and data analytics can’t help you to understand students who are no longer there. Until you get students who need the remediation through the system, the ML can’t learn about them, but how do you get them through the system without the ML-recommended remediation?
While I agree with the importance of reaching underserved populations, I am not convinced that MOOCs are currently having much of an effect in the developing world, or broadening participation to students who don’t have much preparatory work (say, in CS) in their schools. I wonder if it’s even possible to make a large impact on the developing world starting at higher education. Not all K-12 programs in the United States prepare students adequately for MIT, Stanford, and Harvard level classes. Can we expect that most K-12 programs in the developing world are adequate preparation? The Open University UK has always been “open,” with no pre-requisites, and they provide content at that level. Coursera prides itself on offering top-notch classes. That’s valuable, but I find it unlikely that such courses also meet the needs of underserved populations.
Coursera offers demanding courses via video which only a small percentage complete — for free. That is valuable and interesting. I don’t currently see the model replacing existing courses, or working well for students who don’t have the background knowledge.
Daphne Koller is enticing top universities to put their most intriguing courses online for free — not just as a service, but as a way to research how people learn. Each keystroke, comprehension quiz, peer-to-peer forum discussion and self-graded assignment builds an unprecedented pool of data on how knowledge is processed and, most importantly, absorbed.
I’m back from Oxford, after an intense six weeks of teaching “Computational Freakonomics” and “Media Computation.” Since I did new things in Media Computation this term, I put together a little survey to get students’ feedback on what I did — not for research publication, but to inform me as a teacher.
It’s complicated to interpret their responses. Only 11 of my 22 students completed my survey, so the results may not be representative of the whole class. (The class was 10 males and 12 females. I didn’t ask about gender on the survey, so I don’t know the gender of the respondents.) The first thing I wondered was whether the worked examples were perceived by students as helping them learn. “I found it useful to type in Python programs and figure them out at the start of class.” 4 strongly agree, 6 agree, 1 neutral.
That seems generally positive: students thought that the worked examples were useful. How about helping with Python syntax? “Getting the characters exactly right (the syntax of Python) was difficult.” 2 agree, 1 neutral, 8 disagree. That’s in the right direction.
In the written portion, several students commented that they liked being able to focus on “understanding” programs “rather than just executing them.” One student even suggested that I could ask questions about a program after they studied it, or have them make a change to the program afterward, to demonstrate understanding. I loved this idea, and particularly loved that it was suggested by a student. It suggests that students saw value in understanding programs even before writing them, while still seeing value in writing them, too. This worked examples approach really does lead to a different way of thinking about introductory computer science: programs as something to study first, before designing and engineering them.
When I asked students what their favorite and least favorite parts of the course were, Excel showed up on both lists (though more often on the least favorite list). Here’s one of the questions whose responses stymied me: “Python is harder to learn and use than Excel.” The responses could not form a more perfect bell curve. What does that mean?!?
“I wish I could have learned more Excel in this course.” An almost perfectly uniform distribution!
Their reaction to Excel is so interesting. On the written parts of the survey, they told me how important it was for them to learn Excel, that it was very important for their careers. But they did not really like doing something as inauthentic (my word, not theirs) as pixel manipulation in Excel. They wished they could have done something more useful, like computing “expenses.”
The responses above suggest a hypothesis to me: the students don’t really know how to think about Excel in relation to Python. It’s as if they’re two different things, not two forms of the same thing. I was hoping for more of the latter, by doing pixel manipulations in both Python and Excel. This may be someplace where prior understanding influences future understanding. I suspect that the students classify these things roughly like this:
- “Excel is for business. It’s not for computing. Doing pixel manipulations in Excel is just weird and painful.”
- “Python is for computing. I have to go through it, but it doesn’t really have much to do with my future career.” On the statement, “Learning programming as we have in this course is not useful to me,” 3 were neutral, and 8 disagreed. I read that as, “It’s okay. Sorta.”
Something that I always worry about: Are we helping students to develop their sense of self-efficacy in an introductory course, especially for non-majors?
“I am more confident using computers now, after taking this course.” Quite positive: 10 agree, 1 neutral.
“I think differently about computers and how they work since taking this class.” Could not get much more positive: 8 strongly agree, 6 agree!
And yet, “I am not the kind of person who is good with computers.” Mostly, students agree with that: 3 strongly agree, 4 agree, 1 neutral, 3 disagree. On average, my students still don’t see themselves as among the people who are “good” with computers.
There was lots for me to be happy about. Some students said that the lectures on algorithmic complexity and the storage hierarchy were among their favorites; that they would have liked to have learned more about the “big questions” of CS; and that they liked writing programs. On the statement, “I learned interesting and useful computer science in this course,” 3 students strongly agreed, and 8 agreed. They got that this was about computer science, and some of them even found that useful.
Even in a class of only 22, even seeing them every day for hours, even with grading all their papers — I’m still surprised, intrigued, and confounded by how they think about all of this. That’s fine by me. As a teacher and a researcher, my job isn’t done yet.
The IEEE CS Awards videos are up on YouTube, including Eric Roberts’s nice talk. (Well, they probably went up weeks ago, but I just got to my office and found the physical DVD in my mailbox, which was my clue to check). So now people who weren’t there can see me thank Rich LeBlanc, Peter Freeman, Kurt Eiselt, Russ Shackelford, Jim Foley, John Impagliazzo, Barbara Ericson, and Jan Cuny. I’m sure that some will notice that not all the details in the video are completely right — I’m sure that’s my fault for not making everything clear when I provided materials. I’m grateful to the IEEE Computer Society for putting together such a snazzy video on my behalf.
I mentioned awhile ago that some undergraduates built a new tool for me for converting images to spreadsheets, and back again. It allows us to do image manipulations via spreadsheet tools like Excel. More importantly, it exposes the data abstractions in picture files (turning JPEGs into columns of x, y, and RGB), and makes the lower-level data malleable.
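The undergraduates’ tool isn’t mine to reproduce here, but the core idea fits in a few lines of ordinary Python. This is a sketch of the same idea (not the tool itself), using the Pillow imaging library and made-up file names:

```python
# Flatten an image into rows of x, y, R, G, B that a spreadsheet can
# open, then rebuild an image from the (possibly edited) rows.
import csv
from PIL import Image

def image_to_csv(image_path, csv_path):
    img = Image.open(image_path).convert("RGB")
    width, height = img.size
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "red", "green", "blue"])
        for y in range(height):
            for x in range(width):
                r, g, b = img.getpixel((x, y))
                writer.writerow([x, y, r, g, b])

def csv_to_image(csv_path, image_path, width, height):
    img = Image.new("RGB", (width, height))
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            img.putpixel((int(row["x"]), int(row["y"])),
                         (int(row["red"]), int(row["green"]), int(row["blue"])))
    img.save(image_path)

image_to_csv("beach.jpg", "beach.csv")                   # open beach.csv in Excel
csv_to_image("beach.csv", "beach-edited.jpg", 640, 480)  # rebuild after editing
```

Once the pixels are rows in a spreadsheet, every Excel formula becomes a potential image filter.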
I’m using this tool in the Media Computation course that I’m teaching this summer. Normally, CS1315 (the course I’m teaching) includes labs on Word, Excel, and Powerpoint, but there’s no sense of “lab” in these compressed courses. And I bet that most of my students know a lot about Office applications already. So I asked them at the start of class: What did they want to learn about Office applications? Several students said that they’d like to learn to use formulas in interesting ways in Excel.
I’ve come up with a homework assignment where students do Media Computation using unusual Excel formulas (e.g., using IF, AND, and COUNTIF). I lectured on Excel on Thursday in support of this assignment, and it was rough. Things that I had worked out in Windows Excel failed or worked differently when doing a live coding session in MacOS Excel (e.g., the FREQUENCY function worked differently, or not at all; it was hard to tell). Fortunately, we figured it out, but I got a new appreciation of how non-portable Excel’s less-common functions can be.
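For flavor, here is the kind of conditional pixel logic the assignment asks for, expressed as a hypothetical Python analog of an Excel IF formula applied to the exported pixel columns (the file and function names are made up):

```python
# The Python analog of filling =IF(C2 > 128, 255, 0) down a column:
# posterize each color channel of the exported pixel rows.
import csv

def posterize_rows(in_csv, out_csv, threshold=128):
    with open(in_csv) as f_in, open(out_csv, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for channel in ("red", "green", "blue"):
                row[channel] = 255 if int(row[channel]) > threshold else 0
            writer.writerow(row)

posterize_rows("beach.csv", "beach-posterized.csv")
```

In Excel, the same computation is one formula filled down a column, which is exactly the comparison I want the students to be able to make.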
My students are working on this assignment this week, and I’ll let you know how it goes. Based on the questions I’m getting already, it’s challenging for the students. Excel formulas are hidden: invisible when you look at a spreadsheet until you click on the right cell. Much of the process of doing things in Excel is invisible from watching the screen, e.g., shift-clicking to select a range. So, they’re having a hard time discerning exactly how I did what I did in class.
Maybe they’re learning a greater appreciation for doing all this in Python, rather than Excel.