Posts tagged ‘programming languages’
Alan kindly forwarded this to me (thanks, Alan!) — it looks amazing! Here’s the challenge for the readers of this blog. Imagine attending this conference as a computing education expert (instead of a language geek, as I imagine many of us are anyway). What would you look for? What metric would you use for comparing these languages if your first criterion was value to the student? Would you define that criterion as ease of learning, power of expression, aid to later development, or all of the above?
The first ever conference on emerging programming languages. For language designers, implementors, users, and enthusiasts.
CMU’s press release about their new robot language doesn’t make much sense to me.
- The language should be “easy enough for elementary students to use, but powerful enough for college-level engineering courses.” Why? Is it even possible to do that? And why is it desirable?
- It’s based on “industry-standard C programming language”?!? I’ve argued previously that it is probably now criminally negligent to teach C or C++ as a first programming language — there’s enough evidence that it’s too hard for students, and we do know how to do it better.
- “Hundreds of thousands of children gain their first programming experience with robots.” Can that really be right? Only about 16,000 students took the AP CS test last year. Let’s take that as a starting place. There’s a multiple of that actually taking CS classes in high school, but the multiplier is not ten. There’s a lot more CS in high school than elementary school, and relatively few high schools have robots. I don’t buy their numbers — I’d like to see the evidence.
- They argue that it should all be C because it is the language that children “likely will use for years to come” and “will help them transition to those used by professionals.” The key criterion for a children’s programming language is that it will help them transition to industry? 10–15 years later? Do we even know what people will be using in industry in 10–15 years? And should it really be the focus in elementary school to prepare these students for professional software development?
The folks at CMU do terrific work that I rave about regularly here. I think this one isn’t in the right direction.
Carnegie Mellon University’s Robotics Academy announces the release of ROBOTC 2.0®, a programming language for robots and an accompanying suite of training tools that are easy enough for elementary students to use, but powerful enough for college-level engineering courses. Like the original, this latest version of ROBOTC is an implementation of the industry-standard C programming language and has a modern programming environment that can grow as students move from elementary through college-level robot programming…

“Computer programming is not taught at the middle school level, yet hundreds of thousands of children gain their first programming experience with robots,” said Robin Shoop, director of the Robotics Academy. “We introduced ROBOTC four years ago because students working with robots should spend their time learning scientific, mathematical and engineering principles, not learning a different programming language for each robot platform. Also, the programming environment students use should be compatible with a language such as C that they likely will use for years to come and with an interface that will help them transition to those used by professionals.”
The March 2010 Communications of the ACM (CACM) includes publication of two Blog@CACM pieces, a sort of point-counterpoint. CACM published my piece about “How we teach computer science is wrong,” where I argued that dumping students in front of a speeding compiler is not the best way to ramp students up into computing, and that we might think about instructional design mechanisms like worked examples. CACM also published Judy Robertson’s piece “Introductory Computer Science Lessons — Take Heart!” where she argued that what we actually do in introductory computing already has the right pieces that good instructional design research recommends. When I heard that they were going to publish these two pieces together, I thought it was a great idea.
The title they chose was “Too much programming too soon.” I think it’s really about the definition of “programming.” I do think a novice facing an empty edit buffer in an IDE is an awful and scary way to get started with computing. However, I deeply believe that programming is a wonderful part of computer science, as long as we define programming more broadly than “debugging a blank sheet of paper.” It’s creative, powerful, awesome, and often surprising. There are lots of ways of getting started with programming that are much less scary, such as Squeak Etoys, Alice, and Scratch. I also think that we should explore reading examples, modifying existing code, debugging code, and other “reduced cognitive load” activities in which students do limited text programming. We need broader definitions of what “programming” means.
Can direct manipulation lower the barriers to computer programming and promote transfer of training?
Chris Hundhausen has a really important paper in the latest issue of ACM TOCHI: Can direct manipulation lower the barriers to computer programming and promote transfer of training?
We’ve known for a couple of decades now that programmers read and understand visual programs no better than textual programs — Thomas Green, Marian Petre, and Tom Moher settled that question a long time ago. However, everybody experiences that starting with a visual programming language is easier than a textual language. But does it transfer? If you want students to eventually program in text, does starting out with Alice or Squeak or Etoys hurt? Here is what Chris found: “We found that the direct manipulation interface promoted significantly better initial programming outcomes, positive transfer to the textual interface, and significant differences in programming processes. Our results show that direct manipulation interfaces can provide novices with a ‘way in’ to traditional textual programming.” I think that this is big news for computing educators.
I think the fundamental thing that set Rails apart was a culture of putting the programmer first. The idea that Web programming should be fun and that programmers should be enjoying themselves.
The culture bred ideas like Convention over Configuration, where we standardized all the things that programmers do most of the time for most applications anyway.
What makes programmers happy? The Ruby on Rails creator says that it is, in part, creating standards (“Conventions”) that do things for the programmer. Another school of thought is that programmers want flexibility. These seem to be contrasting perspectives to me. I suppose that the middle ground is that programmers want the things that they don’t want to do already done for them, and they want flexibility with the things that they do want to do. My bet is that those things (what programmers want to deal with, and what they don’t want to deal with) vary from domain to domain, maybe even programmer to programmer. Hard to design for.
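The “Convention over Configuration” idea quoted above can be sketched in a few lines. This is a toy illustration in Python (hypothetical names, nothing like the actual Rails code): the framework derives a sensible default by convention, and explicit configuration overrides it only when the programmer needs something different.

```python
# Toy sketch of Convention over Configuration (hypothetical, not Rails code).
# The framework guesses a database table name from the model's class name;
# the programmer only writes configuration when the convention is wrong.
class Model:
    table_name = None  # optional explicit configuration

    @classmethod
    def table(cls):
        # Convention: lowercase class name plus "s" (Person -> "persons")
        return cls.table_name or cls.__name__.lower() + "s"

class Person(Model):
    pass  # nothing to configure: the convention does the work

class LegacyUser(Model):
    table_name = "tbl_users_old"  # configuration beats convention

print(Person.table())      # persons
print(LegacyUser.table())  # tbl_users_old
```

The appeal for the programmer is that the common case costs zero lines; the flexibility question is about how gracefully the uncommon case can opt out.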
What do we want for students? Do we want lots of things done for them, or do we want to provide them with small pieces (like Lego blocks) that they can put together in a wide variety of different patterns? Do they want standards or do they want flexibility? What do we as teachers want for them? Should we have structures in place, so that students can’t build just anything, but what they do build is supported (I’m thinking of Alice and Scratch here, as examples)? Or should we give them maximal flexibility so that they can assemble things and come to understand from the bottom up (I’m thinking of Pascal and the hardware-first approach of Patt and Patel)?
Bigger question: Should the answers to these things be different? Is the balance of standards and flexibility that works for programmers what we also want for students? I’m not suggesting the exact same tools are right for both novices and for experienced programmers, but I am wondering if the balance between what’s provided and what’s flexible might be similar.
Marlene Scardamalia and Carl Bereiter of the University of Toronto’s Ontario Institute for Studies in Education (OISE) have been arguing for years for “higher levels of agency” for children. Think about what you do when you have a question: You go find a source (a website, a book, or an expert) and ask your questions to learn the answer. Think about what we do with children in most classrooms: You find children who don’t know the questions or answers, put them in front of teachers who know both the questions and the answers, then the teacher asks the children questions. Scardamalia and Bereiter want to find ways to put children in the former situation.
What would Scardamalia and Bereiter want to see for computing education? I suggest that the analogy is that we think about students as wanting to build something, and we’re about giving them the tools to build that something with as much support and as little irrelevant detail as possible. But how do we get students to learn what’s useful and important for them, that they might not realize they need or that might not arise when they build yet-another-video-game? That’s the real challenge of placing more agency in the hands of the students — how do you get them to use that agency wisely?
A really interesting interview with Kenneth Iverson and others on the development of APL. What’s striking is that APL was invented as a human-to-human communication tool, used mostly in classrooms, which was years later turned into a programming language: The Origins of APL – 1974 Video by Catherine – MySpace Video.
I saw this in a storefront in Luzern, Switzerland yesterday:
It was the quote that caught my eye, “Art is making something out of nothing and selling it.” I found it attributed to Frank Zappa. Programming is pretty much “making something out of nothing and selling it,” which says that Frank Zappa and Donald Knuth agree — programming is art.
College semesters are starting all over the country, and I’m starting to hear from teachers with whom I have worked in the past on Media Computation. I’m learning how many of them are backing down or giving up on Media Computation. (By “Media Computation” here, I mean the general notion of using media to motivate and engage the learner, not necessarily our tools or our books.)
- At one school, a CS1 faculty member has gone back to an introduction to algorithms and the language, before using Media Computation. Manipulation of media will appear in only some of the assignments, with no sharing of student products. He doesn’t like that he has to use special libraries and tools to access the media. Top goals for him: Students should know the release as it comes out of the box, and should know all of the language.
- At another school, the CS1 teacher has decided not to do any media at all. He’s using the beta of his language of choice in his classroom, and the media supports haven’t been ported to the new version yet. Top goal for him: Students should know the latest, cutting edge version.
- Another teacher is reducing the amount of media in the data structures class. There’s no question that the majority of the class is motivated and engaged by the media context. It’s that the top students don’t want it, and they complain to him. The undergraduate teaching assistants for the class all took the class in the past and did really well — and they don’t like the media either. They want the data structures, pure and unadulterated. Top goal for him: Make the best students as good as they can possibly be, giving them more challenging content and keeping them happy.
- At one institution, they are stopping using media computation entirely. The CS1 teacher is simply uncomfortable talking about media — he doesn’t know the content, and he doesn’t personally like it or find it engaging. At that school, they had withdrawal-or-failure rates around 50%, which dropped to around 25% with media computation, and now are rising again. Women are leaving the class or failing more than the men. He’s okay with that, because he trusts that the students he graduates are ready to go on. Top goal for him: Do the things that he can get excited about, and produce the best possible students.
None of the teachers I have heard from are saying that our studies are wrong. Media Computation, across multiple schools, does lead to improved success rates and broader participation in computing — women and members of under-represented groups succeed as well as white or Asian males. These teachers are simply deciding that success rates and broadening participation are not their most important priorities. They are concerned about training the best students, about teaching the latest technology, about preparing students to use the industry standard languages, and about maintaining their own interest in the classroom. It’s not about the data. It’s about the goals.
Let’s assume for the moment that these teachers are representative of most higher-education computing teachers. They’re not, of course — these are the teachers who have been willing to try something new. They are more innovative and engaged than most. If these are the issues that higher-education computing teachers are struggling with, then the real battle for NCWIT and BPC (Broadening Participation in Computing) is not to create more best practices or to generate evidence about these best practices. The real battle is for the hearts and minds of these teachers, to convince them that getting a broad range of students engaged with computing is important. It’s not about media computation — it’s about deciding priorities.
Of course, in a perfect world, we would achieve all these goals: Top students would be challenged, the majority of the students would be supported, the latest technology would be taught, students would learn how to use the languages in common practice, and a broad range of students would work in contexts that they find engaging and motivating. And in a perfect world, all students would have personal tutors. Unfortunately, we have to make trade-offs because of economic realities. For example, there are more developers creating new features in new tools than there are developers making sure that contexts like media work in the new tools. The top students want something different than the less-engaged students, and we can’t afford two classes. Choices have to be made.
Plus ça change, plus c’est la même chose. The more things change, the more they stay the same. That’s a statement about inertia, but what’s interesting to me is why there is inertia. Why do people go back to what they used to do? Because it worked. Because it met the goals and needs that had been priorities in the past. Getting people to have new priorities — now that’s a challenge. New priorities will lead to new practices. Media computation is a new practice, but adopting the new practice doesn’t change the priorities.
Last night, a user reported a bug in our latest version of JES, the Jython IDE that we use in our Media Computation classes. In cleaning up the code for release, one of the developers renamed the short variable “pict” to “picture” — in all but one spot. The function that broke (with a “name not found” error in the Jython function) is writePictureTo, a really important function for being able to share the images resulting from playing with Media Computation. This was particularly disappointing because this release was a big one (e.g., moving from one-based to zero-based indexing) and was our most careful development effort (e.g., a long testing cycle with careful bug tracking). But at the end, there was a “simple clean-up” that certainly (pshaw!) wasn’t worth re-running the regression tests — or so the developer thought. And now, Versions 3.2.1 and 4.2.1 (for zero- and one-based indexing in the media functions) will be out later today.
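The failure mode here is worth spelling out, because it is exactly what dynamic languages permit. A minimal sketch (hypothetical names and body, not the actual JES code): Python happily compiles and loads a function whose body refers to a name that no longer exists, and the error only surfaces when the function is finally called.

```python
# Minimal sketch of the rename bug (hypothetical code, not the real JES source).
# A clean-up renamed the variable `pict` to `picture` in all but one spot.
def write_picture_to(picture, filename):
    header = "saving to " + filename  # this line was updated correctly
    return header + ": " + str(pict)  # this one was missed: `pict` is gone

# Python parses and loads the definition above without any complaint.
# The NameError appears only when someone actually runs the function:
try:
    write_picture_to("pixel data", "barbara.jpg")
except NameError as err:
    print(err)  # the missed rename surfaces here, at run time
```

A statically checked language rejects the misspelled name at compile time, before any user ever downloads the release; here, only a test (or a user) that actually exercises writePictureTo can catch it.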
This has got me wondering about the wisdom of developing an application used by hundreds, if not thousands, of students in Python (or Jython). I’ve done other “largish” (defined here, for a non-Systems-oriented CS professor, as “anything that takes more than three days to code”) systems in Python. I built a case library, called STABLE, which generated multiple levels of scaffolding from a small set of base case material. Running the STABLE generator was aggravating because it would run for a while… then hit one of my typos. Over and over, I would delete all the HTML pages generated so far, make the five-second fix, and start the run all over again. It was annoying, but it wasn’t nearly as painful as this bug — requiring everyone who downloaded JES 3.2/4.2 to download it again.
I’m particularly sensitized to this issue after this summer, where I taught workshops (too often) where I literally switched Python<->Java every day. I became aware of the strengths and weaknesses of each for playing around with media. Python is by far more fun for trying out a new idea, generating a new kind of sound or image effect. But this bug wouldn’t have happened in Java! The compiler would have caught the mis-named variable. I built another “largish” system in Squeak (Swiki), which also would have caught this bug at compile time.
My growing respect for good compilers doesn’t change my attitude about good first languages for students of computing. The first language should be fun, with minimal error messages (even at compile time), with rapid response times and lots of opportunities for feedback. So where does one make the transition, as a student? Why is it important to have good compilers in one place and not in the other?
I am not a software engineering researcher, so I haven’t thought about this as deeply as they have. My gut instinct is that your choice of language is a function (at least in part) of the number of copies of the code that will ever exist. If you’re building an application that’s going to live on hundreds, thousands, or millions of boxes, then you have to be very careful — correcting a bug is very expensive. You need a good compiler helping you find mistakes. However, if you’re building an application for the Web, I can see why dynamic, scripting languages make so much sense. They’re fun and flexible (letting you build new features quickly, as Paul Graham describes), and fixing a bug is cheap and easy. If there’s only one copy of the code, it’s as easy as fixing a piece of code for yourself.
First-time programmers should only be writing code for themselves. It should be a fun, personal, engaging experience. They should use programming languages that are flexible and responsive, without a compiler yelling at them. (My students using Java always complain about “DrJava’s yelling at me in yellow!” when the error system highlights the questionable line of code.) But they should also be told in no uncertain terms that they should not believe that they are creating code for others. If they want to produce application software for others, they need to step up to another level of discipline and care in what they do, and that usually means new tools.
I still strongly believe that the first course in computing should not be a course in software engineering. Students should not have to learn the discipline of creating code for others while just starting to make sense of the big ideas of computing. The first course should be personal, about making code for your expression, your exploration, and your ideas. But when students start building code for others, engineering practice and discipline are required. Just don’t start there.
Just got back from the five hour drive to drop off my daughters at their youth choral group summer camp. I listened to a podcast of a Long Now Foundation debate between Drew Endy and Jim Thomas on synthetic biology. If you’re not familiar with the Long Now Foundation seminar series, I highly recommend that you check it out. Think of the TED talks, but now let the same speakers (in some cases) talk for two hours, instead of 20 minutes. The discussion is deep and meaningful. I particularly recommend the debates, and this one was just as thought-provoking as the others I’ve heard.
The particular Computing Education insight that I got from listening to Drew Endy was in thinking about DNA as a programming language. Drew talks about DNA as “a replicating machine” that can be “programmed” “in garages, like the early homebrew computer hackers.” Take a look at this comic book treatment of how to build logic components out of DNA. Those simple logic elements look like the same level as Yale Patt’s hardware-first take on CS1. There is even a “programming language” for designing DNA components.
Could you teach an introduction to computing using DNA as the context, medium, and “programming language”? I don’t see why not. It’s clearly relevant. It would avoid a lot of the issues that are important to majors but aren’t really important to a basic understanding of computing. There aren’t any issues about commenting in one’s DNA, though there are wonderful issues about re-use. (Endy has a great quote in the debate, “Real intelligent design would include documentation!”)
Here’s the real advantage to talking about DNA as a CS1 language: no computing teacher would just do DNA. You’d want to leave students with a “real” programming language that they could use in everyday tasks, maybe Python or Java. Then, the class would no longer be about programming or even computers, and would instead be about systems and computation. You’d want to spend as little time on syntax as possible, because there’s not much syntax in DNA and it’s not at all important for what the class would really be about. What does it mean for DNA components vs. Python objects to interact? How does a branch or selection get encoded in DNA or in a programming language? What are all the ways in which a computational system might make choices? How is computation similar or dissimilar in biology and in silicon?
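On the programming-language side of that comparison, selection is an explicit, syntactic construct that the class could put right next to its DNA counterpart. A trivial Python sketch (a hypothetical example, not material from any actual course):

```python
# In a conventional programming language, a branch is explicit syntax:
# the program evaluates a condition and takes exactly one path.
# (The function name and thresholds are made up for illustration.)
def classify_expression_level(level):
    if level > 0.7:
        return "high"
    elif level > 0.3:
        return "medium"
    else:
        return "low"

print(classify_expression_level(0.5))  # medium
```

In DNA-based logic, by contrast, the “branch” is realized by which molecules bind and which genes get expressed; there is no single if statement to point to, which is exactly the kind of contrast such a class could dig into.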
I don’t know if synthetic biology has really reached the point where it could be used as a context for introductory computing — let alone whether it could be safely and ethically taught to first year undergraduate students. But the thought of using a non-silicon context for teaching computing is a useful tool for thinking about what we really want to be teaching about computing.