Posts tagged ‘Java’
From Leigh Ann Sudol-DeLyser (email@example.com):
I am looking for faculty who are able to help me find subjects for my final study of my PhD thesis. I have built an online pedagogical IDE which uses problem knowledge to give students feedback about algorithmic components as they are writing code for simple array algorithms.
I am looking for faculty who are willing to assign a 5-problem sequence as a part of a homework assignment or final exam review in a CS1 course in Java. The 5 problems consist of writing code to find the sum of an array of integers, the maximum number in an array of integers, counting the number of values in a range of integers, and completing an indexOf method for an array of integers. These problems are similar to ones you might find in a system like CodingBat where students are given a method header and asked to implement code for the interior of a single method.
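To give a sense of the level of the problems, here is a sketch of what such CodingBat-style methods might look like in Java. These are my illustrative versions, not the study's actual problem statements:

```java
// Hypothetical sketches of the array problems described above --
// the study's actual problem statements and method headers may differ.
public class ArrayProblems {
    // Sum of an array of integers.
    public static int sum(int[] a) {
        int total = 0;
        for (int x : a) {
            total += x;
        }
        return total;
    }

    // Maximum value in a (non-empty) array of integers.
    public static int max(int[] a) {
        int best = a[0];
        for (int i = 1; i < a.length; i++) {
            if (a[i] > best) {
                best = a[i];
            }
        }
        return best;
    }

    // Count how many values fall in the inclusive range [lo, hi].
    public static int countInRange(int[] a, int lo, int hi) {
        int count = 0;
        for (int x : a) {
            if (lo <= x && x <= hi) {
                count++;
            }
        }
        return count;
    }

    // Index of the first occurrence of target, or -1 if absent.
    public static int indexOf(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) {
                return i;
            }
        }
        return -1;
    }
}
```

As in CodingBat, students would be given only the method header and asked to fill in the body.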
If you are willing to help me graduate (please!) send me your name, the university you teach at, and the number of students in your class and I will contact you with login codes for the students and further directions. I am looking for classes of all sizes from all types of colleges and universities. Please forward to your CS1 instructors where applicable.
Earlier this year, I talked about Seymour Papert’s encouragement to challenge yourself as a learner, in order to gain insight into learning and teaching. I used my first-time experiences working on a play as an example.
I was in my first choir for only a year when our first child was born. I was 28 when I first started trying to figure out if I was a bass or tenor (and even learn what those terms meant). Three children and 20 years later, our children can get themselves to and from church on their own. In September, I again joined our church choir. I am pretty close to a complete novice–I have hardly even had to read a bass clef in the last two decades.
Singing in the choir has the most unwritten, folklore knowledge of any activity I’ve ever been involved with. We will be singing something, and I can tell that what we sang was not what was in the music. “Oh, yeah. We do it differently,” someone will explain. Everyone just remembers so many pieces and how this choir sings them. Sometimes we are given pieces like the one pictured above. It’s just words with chords and some hand-written notes on the photocopy. We sing in harmony for this (I sing bass). As the choir director says when he hands out pieces like this, “You all know this one.” And on average, he’s right. My wife has been singing in the choir for 13 years now, and that’s about average. People measure their time in this choir in decades. The harmony for songs like this was worked out years and years ago, and just about everyone does know it. There are few new people each year — “new” includes even those 3 years in. (Puts the “long” four years of undergraduate in new perspective for me.) The choir does help the newcomers. One of the most senior bass singers gives me hand gestures to help me figure out when the next phrase is going up or down in pitch. But the gap between “novice+help” and “average” is still enormous.
Lave and Wenger in their book “Situated Learning” talk about learning situations like these. The choir is a community of practice. There are people who are central to the practice, and there are novices like me. There is a learning path that leads novices into the center.
The choir is an unusual community of practice in that physical positioning in the choir is the opposite of position with respect to the community. The newbies (like me) are put in the center of our section. That helps us to hear where we need to be when singing. The more experienced people are on the outside. The most experienced person in the choir, who may also be the eldest, tends to sit on the sidelines, rather than stand with the rest of the choir. He nails every note, with perfect pitch and timing.
Being a novice in the choir is enormous cognitive overload. As we sing each piece, I am reading the music (which I’m not too good at) to figure out what I’m singing and where we’re going. I am watching the conductor to make sure that my timing is right and matches everyone else. I am listening intently to the others in my section to check my pitch (especially important for when there is no music!). Most choir members have sung these pieces for ages and have memorized their phrasing, so they really just watch the director to get synchronized.
When the director introduces a new piece of music with, “Now this one has some tricky parts,” I groan to myself. It’s “tricky” for the average choir members — those who read the music and who have lots of experience. It’s “tricky” for those with literacy and fluency. For me, still struggling with the notation, it takes me awhile to get each piece, to understand how our harmony will blend with the other parts.
I think often about my students learning Java while I am in choir. In my class, I introduce “tricky” ideas like walking a tree or network, both iteratively and recursively, and they are still struggling with type declarations and public static void main. I noticed last year that many of my students’ questions were answered by me just helping them use the right language to ask their question correctly. How hard it must be for them to listen to me in lecture, read the programs we’re studying, and still try to get the “tricky” big picture of operations over dynamic data structures–when they still struggle with what the words mean in the programs.
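A concrete sense of that gap: the "tricky" idea of walking a tree both recursively and iteratively looks something like this (a minimal sketch, not the code from my class). For a student still sounding out the syntax, every line here is work before the big picture can even come into view:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// A minimal binary tree, just to illustrate the "tricky" idea:
// the same pre-order walk written recursively and iteratively.
class Node {
    int value;
    Node left, right;

    Node(int value, Node left, Node right) {
        this.value = value;
        this.left = left;
        this.right = right;
    }

    // Recursive walk: the shape of the code mirrors the shape of the tree.
    static void walkRecursive(Node n, List<Integer> out) {
        if (n == null) return;
        out.add(n.value);
        walkRecursive(n.left, out);
        walkRecursive(n.right, out);
    }

    // Iterative walk: an explicit stack replaces the call stack
    // of the recursive version.
    static List<Integer> walkIterative(Node root) {
        List<Integer> out = new ArrayList<>();
        Deque<Node> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            out.add(n.value);
            if (n.right != null) stack.push(n.right); // push right first...
            if (n.left != null) stack.push(n.left);   // ...so left is visited first
        }
        return out;
    }
}
```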
Unlike working on the play, singing in the choir doesn’t take an enormous time investment — we rehearse for two hours one night, and an hour before mass. I’m having a lot of fun, and hope to stick with it long enough to move out of the newbie class. What’s motivating me to stick with it is enjoyment of the music and of becoming part of the community. There’s another good lesson for computer science classes looking to improve retention. Retention is about enjoying the content and enjoying the community you’re joining.
We’re re-examining/reconstructing our reading list for the qualifying examination for our PhD in Human-Centered Computing. One of the papers I’ve had the chance to read (and re-read — it’s a dense piece) is by James Greeno, Allan Collins, and Lauren Resnick, three top-notch education researchers.
Greeno, J., Collins, A., & Resnick, L. (1996) ‘Cognition and learning,’ in Berliner, D. & Calfee, R. (eds.), Handbook of Educational Psychology, Macmillan, New York: 15-46.
They contrast three views of education, which are paradigm shifts in the Kuhnian sense. It’s not that one is better than the other. One looks at different things than the other, creates different theory, leads to predictions about entirely different things. The first was behaviorist/empiricist — learning was observed responses to stimuli. Behaviorism explains a lot, and can lead to strong predictions. The second is the cognitive/rationalist view, the view of learning as having knowledge structures in the brain. This was the view started by Piaget, and it has had the greatest impact on schools. The third view is the “situative/pragmatist-sociohistoric.”
A third perspective on knowing focuses on the way knowledge is distributed in the world among individuals, the tools, artifacts, and books that they use, and the communities and practices in which they participate. The situative view of knowing, involving attunements to constraints and affordances of activity systems, suggests a fundamental change in the way that instructional tasks are analyzed. The change is away from analyses of component subtasks to analyses of the regularities of successful activity…When knowing is viewed as practices of communities and abilities of individuals to participate in those practices, then learning is the strengthening of those practices and participatory abilities.
This is the perspective described in Lave & Wenger’s classic Situated learning: Legitimate peripheral participation (also on our HCC reading list). The situative view is most powerful in describing learning in naturalistic settings, from apprenticeship to life-long learning (e.g., how professionals get better at what they do). An important difference between the situative and the cognitive is in defining “what’s worth knowing.” The situative is focused on learning how to participate in a community of practice, to be part of the discourse and activities of a group of people who work towards similar sets of goals in similar ways.
Computer science education is feeling the tension between the cognitive and the situative today. I see it in the discussion about Greenfoot. We CS educators talk about our foundational concepts, and we talk about learning the tools of our community. We say that “We don’t teach languages, we teach concepts,” but then we talk about our courses as “the C course” and “the Java course.” Once, the community of practice in computing was mathematics and electrical engineering. That’s what people knew and talked about, and that’s what we now call our foundational concepts. Today, there is a huge community of computing professionals and scientists with lots of activities and tools. They have a practice and an on-going discussion. Knowing the common knowledge and practices in use in computing today is not vocational — it’s about being able to communicate with others in the practice. Java is the most common language in the discourse of computing education today. No computer science undergraduate is educated without knowing it. This fact has nothing to do with what languages best make evident the concepts of computing. This has everything to do with being part of a community.
What we teach in our Media Computation data structures book when we teach about simulations in Java is absolutely harder than when I taught similar content in Squeak. But it’s harder, in part, because we’re dealing with how to make this work around the strong typing in Java, and we’re making sure that students understand interfaces. Those are worth knowing, too. Strong typing and interfaces are part of what the community of practice of computing values and talks about today. Students are not well-educated if they cannot be part of that conversation (even if only to say that they don’t like the existing practices!).
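To illustrate the kind of machinery I mean (this is a generic sketch, not the book's actual simulation code): even the simplest Java simulation loop forces students to confront declared types and interfaces, exactly the things Squeak never made them write down.

```java
import java.util.ArrayList;
import java.util.List;

// Not the book's actual code -- a generic sketch of the interface and
// typing machinery that a Java simulation makes students confront.
interface Agent {
    void act();  // each kind of agent decides what it does on a turn
}

class Wildebeest implements Agent {
    int turns = 0;
    public void act() { turns++; }  // stand-in for "run with the herd"
}

class Lion implements Agent {
    int turns = 0;
    public void act() { turns++; }  // stand-in for "stalk the herd"
}

class Simulation {
    private final List<Agent> agents = new ArrayList<>();

    void add(Agent a) {
        agents.add(a);
    }

    // The loop is written against the Agent interface alone; the
    // declared types Java insists on here are part of what students
    // must understand before the simulation itself makes sense.
    void step() {
        for (Agent a : agents) {
            a.act();
        }
    }
}
```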
As a computing educator, I have a responsibility to stay informed about the activities of computing — in part, that’s what Stroustrup is arguing when he says that professors ought to be building software. I have a whole collection of recent computing books on my shelf that I’m still working through. There are books on Ruby and Lua, Django and script.acul.ous. Are these the right ones? I don’t know. They’re my best guess, and once I read up on them, I’ll have a better sense of whether I think they’re worthwhile. I should be able to talk about the best ideas in tools used by practitioners in my community, the community that I work in and that my students will work in–and importantly, critique them. Part of my job, in the Lave & Wenger sense, is to exemplify the center of the community of practice for my students. To do that, I have to be able to speak the language in that community of practice.
I’m not arguing that Java is a great language, and I’ll continue arguing that Java is a poor beginner’s language. But our students do need to know Java, because Java exemplifies the current ideas and practice of our community. Our students are not well-educated if they can’t participate in that discourse. That’s why it is important for us as computing educators to learn how to teach Java and how to motivate the learning of Java. Not teaching them Java is not an option. Not teaching them Java leaves them uneducated. Not teaching Java only means that our students will be at a disadvantage. Our students don’t win because we refuse to play the game.
An argument from the cognitive perspective (the one I grew up in) is that students who have a strong set of concepts, who understand the core of their field well, can easily teach themselves the current tools and practices of the community. That may be true. What we know about transfer suggests that it’s true. I want to believe that’s true, but I realize that I’m crossing paradigms. I do recognize that knowledge of the language and tools doesn’t come for free. Yes, Ruby is like Smalltalk — but just because I know Smalltalk doesn’t imply that I know Ruby. Just because I know English and Latin and French doesn’t mean that I know Spanish. It might be easier for me to learn a related language. But I still have to do the work to learn it.
All the tools and languages in common practice in computing today have important ideas embedded in them from some smart people — and maybe some less-important ideas from some not-so-smart people. But they are the ideas that our community is talking about. I’m not a fan of strong typing, but I realize that my students need to know what it’s about, because there are reasons why it’s part of our dialogue today. E.D. Hirsch writes books about “cultural literacy” and has made up long lists of the vocabulary that children need to know at various ages. One can critique Hirsch’s approach for being uninformed by the cognitive. Students need to know the concepts and have the knowledge structures to think about these ideas appropriately, not just know the words. But a focus just on the concepts leaves one open to a reasonable situative criticism. Our students must also be able to talk to the practitioners in our community.
We absolutely need to create better computing education. Java is a poor beginner’s language. We have to continue to critique and develop our practices. Our students are going to join this community, and that involves not just having a set of powerful knowledge structures. It means knowing the language and common practices of this community. Java is important to learn for the situative learning goal. The goal of an education in computing is a set of concepts and the fluency in the languages and practices of the community.
The value of Basic being described in this piece is the same argument that, I think, the ACM Java Task Force was making about Java. Their point isn’t that Java (or Basic) is a great language. The point is that having a lingua franca, a language that you can count on being everywhere and that has lots of educational support, is a cultural advantage for developing more computer scientists. It’s a real cost that Basic (or something else to take its place) is not omnipresent today.
“I have never received as much hate mail as I got for that article, not even for my infamous attacks on Star Wars,” Brin recalled recently. “It was almost entirely from people who missed the point, with all the rage directed at Basic. Let me be clear that I am not defending Basic. It was a primitive line-coding program, but everyone had it. Textbooks had exercises written in Basic, and teachers could count on a large fraction of their students being able to perform those assignments.”
The article Brin refers to is his 2006 Salon.com piece on the death of the programming language.
Today, the top one-tenth of one percent of students “will go to summer camp and learn programming, but the rest may never know that the dots comprising their screens are positioned by logic, math and human-written code,” Brin complains.
I’m about to start teaching Greenfoot in my data structures class (the one where we introduce data structures in explaining how the wildebeests charge over the ridge in Disney’s The Lion King) — I’m a big fan, and am glad to hear that they’re providing more support for teachers. They just announced support for using Microsoft Kinect with Greenfoot, demoed at SIGCSE 2011.
Millions of young people are expected to benefit from a University of Kent-established international teacher training network for Greenfoot, a free-to-download software tool that teaches computer programming to pupils from 14 years upwards.
Free and available for download at www.greenfoot.org, Greenfoot was designed by members of the University’s Computing Education Research Group and colleagues at La Trobe University in Melbourne to engage pupils through an interactive environment which enables them to easily create games and simulations. To date, more than a million pupils around the world have been able to experiment with creating games and animations, with more than 250,000 active users currently developing their knowledge and expertise.
With well over one thousand institutions also using the software for their computer science teaching, the design team has increased its support for teachers by establishing seven new international hubs that will offer face-to-face workshops, training and discussions.
Doug Blank just sent out this report on where the IPRE robot education technology Myro was going — the movement into new languages and platforms is pretty exciting!
This is a note to let you know the status of three new versions of Myro,
the API to interact with the Fluke and Scribbler. For more information on
any of these projects, please feel free to use this mailing list.
1) Myro in C++. This project has been developed at the University of
Tennessee at Knoxville, by Bruce MacLennan, John Hoare, and others. Myro
in C++ is ready to use. For more information, please see:
2) Myro in Java. This project is underway at DePauw University by Doug
Harms. Myro in Java is under development and ready for testers. For more
information, please see:
3) Myro in the Pyjama Project. Pyjama is a new scripting environment for
Python, Ruby, Scheme, and more. This is the latest version of Myro from
the IPRE. Pyjama is designed to run very easily on multiple platforms, and
with multiple languages. Pyjama is under development and ready for
testers. For more information, please see:
The pages at http://wiki.roboteducation.org/ will begin to change to
reflect these exciting developments and alternatives.
I invite users and developers of all of these systems to further describe
the projects, and provide additional details.
A recent article in InfoWorld on up-and-coming languages got me thinking about the future of CS1 languages. They went on at some length about Python, which I think most people consider to be the up-and-coming CS1 language.
There seem to be two sorts of people who love Python: those who hate brackets, and scientists. The former helped create the language by building a version of Perl that is easier to read and not as chock-full of opening and closing brackets as a C descendant. Fast-forward several years, and the solution was good enough to be the first language available on Google’s App Engine — a clear indication Python has the kind of structure that makes it easy to scale in the cloud, one of the biggest challenges for enterprise-grade computing. Python’s popularity in scientific labs is a bit hard to explain, given that, unlike Stephen Wolfram’s Mathematica for mathematicians, the language never offered any data structures or elements explicitly tuned to meet the needs of scientists. Python creator Guido van Rossum believes Python caught on in the labs because “scientists often need to improvise when trying to interpret results, so they are drawn to dynamic languages which allow them to work very quickly and see results almost immediately.”
There have only really been three “CS1 languages,” the way that I’m using the term: Pascal, C++, and Java. All three programming languages were used in a large (over 50%) percentage of CS1 (intro CS for CS majors in post-secondary education in the US, and AP in high school) classes. All three were AP CS languages.
Pascal at one point was probably in 80-90% of CS1 courses. Not everyone jumped immediately to C++, but C++ was in the majority of CS1 classes. I know that because, when our Java MediaComp book came out, our publisher said that Java had just pulled even with C++ in terms of percent of the market — that means C++ had to have been in lots of classes. Java is the dominant language in CS1 classes today, but it’s declining. Python’s market share is growing rapidly, 40% per year over the last three years. While it’s not clear that either the new AP CS or the AP CS Level A would ever adopt Python, Python might still gain the plurality of all CS1 languages. I doubt that any language will ever gain more than 30-40% of the CS1 market again — there are (and will be) too many options for CS1 languages, and too many entrenched interests. Faculty will stick with one language, and may skip a plurality, e.g., I’ve talked to teachers at schools where they stuck with C++ but now are switching to Python.
I have two specific predictions to make about future CS1 languages, based on observations of the last three and the likely fourth.
- All future CS1 languages will be in common use in industry.
- No language will gain a plurality of CS1 courses unless it existed at the time of the last transition.
The transition from Pascal to C++ led to the greatest spike in AP CS Level A tests taken in Georgia. Until 2010, that was the largest number of AP CS exams taken in Georgia. The transition from C++ to Java had nowhere near that kind of impact on the test numbers in Georgia. What might have led to so much more interest in the Pascal -> C++ transition? Pascal was a language that was not (perceived to be) common in industry, while C++ was. I don’t think that people perceived such a huge difference between C++ and Java. I believe that the sense that C++ was vocationally useful and approved of by industry had a huge positive impact on student interest in the test.
In this blog, we have often touched on the tension between vocational and academic interests in computer science classes. Vocational most often wins, especially in the majority of schools. The elite schools might play with BYOB Scratch in their intro courses (but notice — even at Harvard and Berkeley, it’s for the non-majors, not for those who will major in CS), and community colleges might use Alice to ease the transition into programming, but the vast majority of schools in the middle value industry approval too much to adopt a pedagogical language for their CS majors.
The implication of the first prediction is that, if Scratch or Alice is ever adopted for the new AP CS, only schools on the edges of the distribution will give CS-major credit for it, because most schools will not adopt a CS1 language that isn’t useful for programming in industry. That isn’t necessarily a bad thing for the new AP CS — to succeed, schools must agree to give some credit for it, not necessarily CS-major credit. Another implication, if my prediction holds true, is that Scheme will never gain a plurality in CS1 courses.
The second prediction is based on an observation of the timing of the four languages. Each existed when the previous one was adopted for the AP CS Level A, which is a reasonable point at which to claim that the language had reached plurality. C++ existed (since 1983) when the AP CS Level A was started in Pascal (1988, I think). C++ was adopted in 2001, and Java came out in 1995. AP CS Level A shifted to Java in 2003, and Python 1.0 came out in 1989, with Python 2.0 (the one receiving the most interest) in 2000. It takes a lot of time to develop that industry use, and to build up the sense that the new language may be worth the pain of shifting.
The implication is that, whatever the next CS1 language will be (after Python), it exists today, as Python reaches plurality. Maybe Ruby, or Scala — more likely Ruby, given the greater industry penetration. Any language that we might invent for CS1 must wait for the next iteration. Scratch, Alice, and Kodu are unlikely to ever become CS1 languages, because it is unlikely that industry will adopt them. Few professional programmers will get their jobs due to their expertise in Scratch, Alice, or Kodu. That absolutely should not matter to CS1 instructors. But it does.
Beth Simon made an excellent recommendation after my report on my first Peer Instruction lesson: Was it really a bad question that students misinterpreted? Why not ask the students? You would expect that students would most likely give me the answer they thought I wanted on a survey like this. This was the first slide of the day.
Here’s the distribution of responses:
I did several more “clicker” questions today in lecture, and I’m getting a better sense of what works and what doesn’t work. (Something that doesn’t work: My <expletive deleted> Lenovo TabletPC that refused to wake up at the start of class, requiring me to reboot, and losing 10 minutes of lecture! ARGH!) I asked students to write in a piece of code today (rewrite a FOR loop as a WHILE loop). The answers were actually pretty good, but the writing took a long time. I won’t do that often.
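The rewrite exercise itself is small. Something like this (my reconstruction of the kind of question asked, not the actual slide):

```java
public class LoopRewrite {
    // The kind of rewrite asked for in class: the same sum,
    // first as a FOR loop...
    static int sumFor(int[] a) {
        int total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i];
        }
        return total;
    }

    // ...then as the equivalent WHILE loop: the initialization moves
    // before the loop, and the increment moves into the loop body.
    static int sumWhile(int[] a) {
        int total = 0;
        int i = 0;
        while (i < a.length) {
            total += a[i];
            i++;
        }
        return total;
    }
}
```

It is a good check of whether students see the pieces of a FOR loop as separate moving parts, but writing it out in clickers took more class time than it was worth.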
One of the general insights I’m getting is about the large variance in the class. Here’s another question I asked in class (before the Java nitpickers let loose — we’re using DrJava, they’ve seen that code works fine without semi-colons, and in fact, we had just done these three lines verbatim with variable “fred” instead of “mabel”):
And the responses:
Most of the class grokked this one, but 5 of the 22 students who responded (some told me after class that they didn’t even respond) are pretty confused. That’s over 20%.
I chatted with several of the students after class today. They’re very confused, despite having read the first two chapters of the book (they claim) and taken the quiz. (I’m using out-of-class Video Quizzes, where students watch a videotape of me using Java, then answer questions about it.) My main insight into their confusion: After only one semester of CS classes, reading code is not an automatized skill. That’s not surprising, but it’s not something that I’d thought much about. The students told me that they’re metaphorically “sounding out” the code. They’re thinking through what’s a method (and translating that into a MATLAB or Python “function”) and what’s a class and what’s valid Java syntax with semi-colons. That’s taking them time, and sometimes, they’re responding before they’re really confident about what they read.
Peer Instruction is taking me extra time: To get the slides onto Ubiquitous Presenter, to only present from my TabletPC, to write questions and insert them into slides, and to take time from lecture (for students to answer, to discuss, to respond again). I still think it’s worthwhile, and I plan to continue trying it.
Allison Elliott Tew has been working for five years to be able to figure out how we can compare different approaches to teaching CS1. As Alan Kay noted in his comments to my recent post on computing education research, there are lots of factors, like who is taking the class and what they’re doing in the class. But to make a fair comparison in terms of the inputs, we need a stable measure of the output. Allison made a pass in 2005, but became worried when she couldn’t replicate her results in later semesters. She decided that the problem was that we had no scientific tool that we could rely on to measure CS1 knowledge. We have had no way of measuring what students learn in CS1, in a way that was independent of language or approach, that was reliable and valid. Allison set out to create one.
Allison defends this week. She took a huge gamble — at the end of her dissertation work, she collected two multiple choice question exams from each of 952 subjects. If you get that wrong, you can’t really try again.
She doesn’t need to. She won.
Her dissertation had three main questions.
(1) How do you do this? All the standard educational assessment methods involve comparing new methods to old methods in order to validate them. How do you bootstrap a new test when one has never been created before? She developed a multi-step process for validating her exam, and she carefully defined the range of the test using a combination of text analysis and curriculum standards.
(2) Can you use pseudo-code to make the test language-independent? First, she developed 3 open-ended versions of her test in MATLAB, Python, and Java, then had subjects take those. By analyzing those, she was able to find three distractors (wrong answers) for every question that covered the top three wrong answers in each language — which by itself was pretty amazing. I wouldn’t have guessed that the same mistakes would be made in all three languages.
Then she developed her pseudo-code test. She ran subjects through two sessions (counter-balanced). In one session, they took the test in their “native” language (whatever their CS1 was in), and in another (a week later, to avoid learning effects), the pseudo-code version.
The pseudo-code and native language tests were strongly correlated. The social scientists say that, in this kind of comparison, a correlation statistic r over 0.37 is considered the same test. She beat that on every language.
Notice that the Python correlation was only .415. She then split out the Python CS1 with only CS majors, from the one with mostly non-majors. That’s the .615 vs. the .372 — CS majors will always beat non-majors. One of her hypotheses was that this transfer from native code to pseudo-code would work best for the best students. She found that that was true. She split her subjects into quartiles and the top quartile was significantly different than the third, the third from the second, and so on. I think that this is really important for all those folks who might say, “Oh sure, your students did badly. Our students would rock that exam!” (As I mentioned, the average score on the pseudo-code test was 33.78%, and 48.61% on the “native” language test.) Excellent! Allison’s test works even better as a proxy test for really good students. Do show us better results, then publish it and tell us how you did it!
(3) Then comes the validity argument — is this testing really testing what’s important? Is it a good test? Like I said, she had a multi-step process. First, she had a panel of experts review her test for reasonableness of coverage. Second, she did think-alouds with 12 students to make sure that they were reading the exam the way she intended. Third, she ran IRT analysis to show that her problems were reasonable. Finally, she correlated performance on her pseudo-code test (FCS1) with the final exam grades. That one is the big test for me — is this test measuring what we think is important, across two universities and four different classes? Another highly significant set of correlations, but it’s this scatterplot that really tells the story for me.
Next, Allison defends, and takes a job as a post-doc at University of British Columbia. She plans to make her exam available for other researchers to use — in comparison of CS1 approaches and languages. Want to know if your new Python class is leading to the same learning as your old Java class? This is your test! But she’ll never post it for free on the Internet. If there’s any chance that a student has seen the problems first, the argument for validity fails. So, she’ll be carefully controlling access to the test.
Allison’s work is a big deal. We need it in our “Georgia Computes!” work, as do our teachers. As we change our approaches to broaden participation, we need to show that learning isn’t impacted. In general, we need it in computing education research. We finally have a yardstick by which we can start comparing learning. This isn’t the final and end-all assessment. For example, there are no objects in this test, and we don’t know if it’ll be valid for graphical languages. But it’s the first test like this, and that’s a big step. I hope that others will follow the trail Allison made so that we end up with lots of great learning measures in computing education research.
Beth Simon just let me know that her paper has just been accepted to ITICSE 2010. She shared the submitted draft with me, and I’ve been biting my lip, wanting to talk about it here. Now that it’s accepted, I can talk about it, while still leaving the real thunder for Beth’s paper and her presentation this summer. For me, it’s exciting to see two years’ worth of data with CS majors, including following the students into their second year. Beth deals head-on with one of the criticisms of Media Computation (e.g., no, it’s not a tour of all-things-Java — you won’t cover as many language features as you used to) and provides the answers that really matter (e.g., you retain more students, they learn more about problem-solving, and they do really well in the next course). I’ll quote her abstract here:
Previous reports of a media computation approach to teaching programming have either focused on pre-CS1 courses or courses for non-majors. We report the adoption of a media computation context in a majors’ CS1 course at a large, selective R1 institution in the U.S. The main goal was to increase retention of majors, but do so by replacing the traditional CS1 course directly (fully preparing students for the subsequent course). In this paper we provide an experience report for instructors interested in this approach. We compare a traditional CS1 with a media computation CS1 in terms of desired student competencies (analyzed via programming assignments and exams) and find the media computation approach to focus more on problem solving and less on language issues. In comparing student success (analyzed via pass rates and retention rates one year later) we find pass rates to be statistically significantly higher with media computation both for majors and for the class as a whole. We give examples of media computation exam questions and programming assignments and share student and instructor experiences including advice for the new instructor.
Last night, a user reported a bug in our latest version of JES, the Jython IDE that we use in our Media Computation classes. In cleaning up the code for release, one of the developers renamed the short variable “pict” to “picture”–in all but one spot. The function that broke (with a “name not found” error in the Jython function) is writePictureTo, a really important function for being able to share the images resulting from playing with Media Computation. This was particularly disappointing because this release was a big one (e.g., moving from one-based to zero-based indexing) and was our most careful development effort (e.g., long testing cycle with careful bug tracking). But at the end, there was a “simple clean-up” that certainly (pshaw!) wasn’t worth re-running the regression tests–or so the developer thought. And now, Version 3.2.1 and 4.2.1 (for zero and one-based indexing in the media functions) will be out later today.
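The failure mode is easy to reproduce in a few lines of Python. The function and file names below are hypothetical stand-ins (the actual JES source isn’t shown here), but the pattern matches the bug: a variable renamed everywhere except one reference sails through import and definition, and fails only when that one line actually runs.

```python
def write_picture_to(picture, path):
    # Hypothetical sketch of the JES bug: "pict" was renamed to
    # "picture" everywhere except the line below. Python raises no
    # error when this function is defined -- name lookup happens
    # only when the body executes.
    return "wrote %s to %s" % (pict, path)

# The module loads and the function defines without complaint...
try:
    write_picture_to("sunset.jpg", "/tmp/out.jpg")
except NameError as e:
    print("Caught at runtime, not at definition time:", e)
```

A statically checked language would reject the misspelled reference before the program ever ran, which is exactly the contrast drawn below.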
This has got me wondering about the wisdom of developing an application used by hundreds, if not thousands, of students in Python (or Jython). I’ve done other “largish” (defined here, for a non-Systems-oriented CS professor, as “anything that takes more than three days to code”) systems in Python. I built a case library which generated multiple levels of scaffolding from a small set of base case material, called STABLE. Running the STABLE generator was aggravating because it would run for a while…then hit one of my typos. Over and over, I would delete all the HTML pages generated so far, make the five-second fix, and start the run all over. It was annoying, but it wasn’t nearly as painful as this bug — requiring everyone who downloaded JES 3.2/4.2 to download it again.
I’m particularly sensitized to this issue after this summer, where I taught workshops (too often) where I literally switched Python<->Java every day. I became aware of the strengths and weaknesses of each for playing around with media. Python is by far more fun for trying out a new idea, generating a new kind of sound or image effect. But this bug wouldn’t have happened in Java! The compiler would have caught the mis-named variable. I built another “largish” system in Squeak (Swiki), which also would have caught this bug at compile time.
My growing respect for good compilers doesn’t change my attitude about good first languages for students of computing. The first language should be fun, with minimal error messages (even at compile time), with rapid response times and lots of opportunities for feedback. So where does one make the transition, as a student? Why is it important to have good compilers in one place and not in the other?
I am not a software engineering researcher, so I haven’t thought about this as deeply as they have. My gut instinct is that your choice of language is a function (at least in part) of the number of copies of the code that will ever exist. If you’re building an application that’s going to live on hundreds, thousands, or millions of boxes, then you have to be very careful — correcting a bug is very expensive. You need a good compiler helping you find mistakes. However, if you’re building an application for the Web, I can see why dynamic, scripting languages make so much sense. They’re fun and flexible (letting you build new features quickly, as Paul Graham describes), and fixing a bug is cheap and easy. If there’s only one copy of the code, it’s as easy as fixing a piece of code for yourself.
First-time programmers should only be writing code for themselves. It should be a fun, personal, engaging experience. They should use programming languages that are flexible and responsive, without a compiler yelling at them. (My students using Java always complain about “DrJava’s yelling at me in yellow!” when the error system highlights the questionable line of code.) But they should also be told in no uncertain terms that they should not believe that they are creating code for others. If they want to produce application software for others, they need to step up to another level of discipline and care in what they do, and that usually means new tools.
I still strongly believe that the first course in computing should not be a course in software engineering. Students should not have to learn the discipline of creating code for others, while just starting to make sense of the big ideas of computing. The first course should be personal, about making code for your expression, your exploration, and your ideas. But when students start building code for others, engineering practice and discipline is required. Just don’t start there.
I just got my copy of the new book by Wanda Dann, Steve Cooper, and Barbara Ericson, “Exploring Wonderland.”
I’m really interested to see how this book works in classrooms. As the title suggests, the book integrates Alice and Java programming with Media Computation. It’s not 1/2 Alice and 1/2 Java. Rather, both are integrated around the context of storytelling. You might use Media Computation to create grayscale images or sounds at different frequencies or echoes in your Alice stories. Or you might use Alice to create perfect greenscreens for doing chromakey in Media Computation. Students can put themselves into an Alice movie, or take Alice characters and have them interact with live action video. This isn’t Java to learn Java. This is Java as the special effects studio for Alice storytelling.
The order of the book goes back-and-forth. First, students use Alice to learn about variables and objects, then they do the same thing with turtles in Java. Back to Alice for iteration and conditionals, then see the same things in Java. There’s a real effort to encourage transfer between the two languages.
That explicit effort to transfer within a context is what makes this effort so interesting. Efforts that I’ve seen at Georgia Tech to teach two languages in a first course have failed. It’s just too hard to learn any one thing well enough to get it to transfer. The advantage of a contextualized computing education approach is that it encourages higher time-on-task — we know from studies at multiple schools with multiple contexts that students will do more with the context if they buy into it, if they’re engaged. Will storytelling work to get students to engage so that the first language is learned well enough to transfer to the second? And if so, do the students end up learning more because they have this deeper, transferable knowledge?
I’m spending Father’s Day reading. Just finished Terry Pratchett’s Equal Rites (the first appearance of Granny Weatherwax, which I had never read before), and have now just started Nudge: Improving Decisions about Health, Wealth, and Happiness by Thaler and Sunstein. I’d heard of behavioral economics before, especially in the context of how these ideas are influencing the Obama administration. I’m recognizing implications for computing education as well.
The basic premise of behavioral economics is that people are bad decision makers, and those decisions are easily biased by factors like the ordering of choices. Consider the choice between a cupcake and a piece of fruit. The worse choice there only has consequences much later and the direct feedback (“You gained weight because you chose the cupcake!”) is weak. Thaler and Sunstein promote libertarian paternalism. The idea is that we want to offer choices to people, but most people will make bad choices. Libertarian paternalism suggests that we make the default or easiest choice the one which we (paternalistically) define as the best one — that’s a nudge. It’s not always easy to decide which is the best choice, and we want to emphasize making choices that people would make for themselves (as best as we can) if they had more time and information.
An obvious implication for computing education is our choice of first programming language. Alan Kay has pointed out many times that people are sometimes like Lorenz’s ducks, who were convinced that Lorenz was their parent: people “imprint” on the first choice they see. Thaler and Sunstein would probably agree that the first language someone learns will be their default choice when facing a new problem. We want to make sure that that’s a good default choice.
How do we choose the first, “best choice” language? If our students are going to become software engineers, then choosing a language which is the default (most common, most popular) in software engineering would make sense: C++ or Java. But what if our students are not going to become software engineers? Then we’ve made their first language harder to learn (because it’s always harder as a novice to learn the tool used by experts), and the students don’t have the vocational aspirations to make the extra effort worthwhile. That choice might then lead to higher failure/withdrawal rates and students regretting trying computer science. Hmm, that seems familiar…
Another choice might be to show students a language in which the best thinking about computer science is easiest. For example, Scheme is a great language for pointing out powerful ideas in computer science. I believe that Structure and Interpretation of Computer Programs by Abelson and Sussman is the best computer science textbook ever written. Its power stems, in part, from its use of Scheme for exemplifying its ideas.
The challenge of using Scheme is that it is not naturally the language of choice for whatever problem comes the student’s way. Sure, you can write anything in Scheme, but few people do, even people who know Scheme. Libraries of reusable tools that make it easy to solve common problems tend to appear in the languages that more people are using. If students were well-informed (or are/become informed), would they choose Scheme? If the answer to that question is “No,” the teacher appears coercive and constraining, and the course is perceived as being irrelevant. That’s another familiar story.
The ideas of Nudge have implications for teachers, too. I am on the Commission to design the new Advanced Placement (AP) Computer Science exam in “Computer Science: Principles.” (This exam is in contrast to the existing Level A CS AP exam in computer science programming in Java.) We just met for the first time this last week. There will be programming in the new APCS exam, and there’s interest in providing teachers with choices of what language they teach. Providing infinite choice makes it really hard to write a standardized, national exam. Teachers will likely be offered a menu of choices. How will those choices be ordered? How will teachers make these choices? While there are some wonderful high school teachers, there are too few high school CS teachers. The new APCS exam will only be successful if most of the teachers offering it are brand new to computer science. These teachers need help in making these choices, with reasonable default values, because they simply won’t have the experience yet to make well-informed choices.