Archive for August, 2009
Blog post on U. Toronto’s switch to Python (and Media Computation)
Nice blog post about U. Toronto’s switch to Python. They used Media Computation as a starting point, but re-wrote everything, from libraries to textbook.
Plus ça change: It’s the goals, not the data
College semesters are starting all over the country, and I’m starting to hear from teachers with whom I have worked in the past on Media Computation. I’m learning how many of them are backing down or giving up on Media Computation. (By “Media Computation” here, I mean the general notion of using media to motivate and engage the learner, not necessarily our tools or our books.)
- At one school, a CS1 faculty member has gone back to an introduction to algorithms and the language, before using Media Computation. Manipulation of media will appear in only some of the assignments, with no sharing of student products. He doesn’t like that he has to use special libraries and tools to access the media. Top goals for him: Students should know the language release as it comes out of the box, and should know all of the language.
- At another school, the CS1 teacher has decided not to do any media at all. He’s using the beta of his language of choice in his classroom, and the media support hasn’t been ported to the new version yet. Top goal for him: Students should know the latest, cutting-edge version.
- Another teacher is reducing the amount of media in the data structures class. There’s no question that the majority of the class is motivated and engaged by the media context. It’s that the top students don’t want it, and they complain to him. The undergraduate teaching assistants for the class all took the class in the past and did really well — and they don’t like the media either. They want the data structures, pure and unadulterated. Top goal for him: Make the best students as good as they can possibly be, giving them more challenging content and keeping them happy.
- At one institution, they are stopping using media computation entirely. The CS1 teacher is simply uncomfortable talking about media — he doesn’t know the content, and he doesn’t personally like it or find it engaging. At that school, they had withdrawal-or-failure rates around 50%, which dropped to around 25% with media computation, and now are rising again. Women are leaving the class or failing more than the men. He’s okay with that, because he trusts that the students he graduates are ready to go on. Top goal for him: Do the things that he can get excited about, and produce the best possible students.
None of the teachers I have heard from are saying that our studies are wrong. Media Computation, across multiple schools, does lead to improved success rates and broader participation in computing — women and members of under-represented groups succeed as well as white or Asian males. These teachers are simply deciding that success rates and broadening participation are not their most important priorities. They are concerned about training the best students, about teaching the latest technology, about preparing students to use industry-standard languages, and about maintaining their own interest in the classroom. It’s not about the data. It’s about the goals.
Let’s assume for the moment that these teachers are representative of most higher-education computing teachers. They’re not, of course — these are the teachers who have been willing to try something new. They are more innovative and engaged than most. But if these are the issues that even these computing teachers are struggling with, then the real battle for NCWIT and BPC (Broadening Participation in Computing) is not to create more best practices or to generate evidence about those practices. The real battle is for the hearts and minds of these teachers, to convince them that getting a broad range of students engaged with computing is important. It’s not about media computation — it’s about deciding priorities.
Of course, in a perfect world, we would achieve all these goals: Top students would be challenged, the majority of the students would be supported, the latest technology would be taught, students would learn how to use the languages in common practice, and a broad range of students would work in contexts that they find engaging and motivating. And in a perfect world, all students would have personal tutors. Unfortunately, we have to make trade-offs because of economic realities. For example, there are more developers creating new features in new tools than there are developers making sure that contexts like media work in the new tools. The top students want something different than the less-engaged students do, and we can’t afford two classes. Choices have to be made.
Plus ça change, plus c’est la même chose. The more things change, the more they stay the same. That’s a statement about inertia, but what’s interesting to me is why there is inertia. Why do people go back to what they used to do? Because it worked. Because it met the goals and needs that had been priorities in the past. Getting people to have new priorities — now that’s a challenge. New priorities will lead to new practices. Media computation is a new practice, but adopting the new practice doesn’t change the priorities.
Nice CACM piece on K-12 Education Policy
Cameron Wilson and Peter Harsha have written a nice piece in the September CACM highlighting the challenges in changing K-12 education policy as it relates to computing. They hit on all the big points, from the challenges of getting computing education to “count” for anything in high schools, to teacher certification, to “No Child Left Behind.” Of course, I’m also pleased that “Georgia Computes!” gets highlighted. Recommended for understanding why it’s so hard to get more and better computing education into schools!
Language Choice = f(Number of Copies)
Last night, a user reported a bug in our latest version of JES, the Jython IDE that we use in our Media Computation classes. In cleaning up the code for release, one of the developers renamed the short variable “pict” to “picture” — in all but one spot. The function that broke (with a “name not found” error in the Jython function) is writePictureTo, a really important function for being able to share the images resulting from playing with Media Computation. This was particularly disappointing because this release was a big one (e.g., moving from one-based to zero-based indexing) and was our most careful development effort (e.g., a long testing cycle with careful bug tracking). But at the end, there was a “simple clean-up” that certainly (pshaw!) wasn’t worth re-running the regression tests — or so the developer thought. And now, Versions 3.2.1 and 4.2.1 (for zero- and one-based indexing in the media functions) will be out later today.
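The failure mode is easy to reproduce in miniature. Here’s a sketch (a hypothetical function, nothing like the actual JES source): Python resolves names only when a line actually runs, so a rename missed in one spot sails through loading and hides from every test that doesn’t exercise that exact path.

```python
# The stale name "pict" passes Python's compile step just fine;
# it only fails when this particular line finally executes.
def write_picture_to(picture, filename):
    with open(filename, "w") as f:
        f.write(str(pict))  # should be "picture" -- NameError at run time only

try:
    write_picture_to("some pixels", "out.txt")
except NameError as err:
    print("Caught only when the function runs:", err)
```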
This has got me wondering about the wisdom of developing an application used by hundreds, if not thousands, of students in Python (or Jython). I’ve done other “largish” (defined here, for a non-Systems-oriented CS professor, as “anything that takes more than three days to code”) systems in Python. I built a case library, called STABLE, which generated multiple levels of scaffolding from a small set of base case material. Running the STABLE generator was aggravating because it would run for a while… then hit one of my typos. Over and over, I would delete all the HTML pages generated so far, make the five-second fix, and start the run all over. It was annoying, but it wasn’t nearly as painful as this bug — requiring everyone who downloaded JES 3.2/4.2 to download it again.
I’m particularly sensitized to this issue after this summer, when I taught workshops (too often) in which I literally switched Python<->Java every day. I became aware of the strengths and weaknesses of each for playing around with media. Python is by far more fun for trying out a new idea, generating a new kind of sound or image effect. But this bug wouldn’t have happened in Java! The compiler would have caught the mis-named variable. I built another “largish” system in Squeak (Swiki), which also would have caught this bug at compile time.
My growing respect for good compilers doesn’t change my attitude about good first languages for students of computing. The first language should be fun, with minimal error messages (even at compile time), with rapid response times and lots of opportunities for feedback. So where does one make the transition, as a student? Why is it important to have good compilers in one place and not in the other?
I am not a software engineering researcher, so I haven’t thought about this as deeply as those researchers have. My gut instinct is that your choice of language is a function (at least in part) of the number of copies of the code that will ever exist. If you’re building an application that’s going to live on hundreds, thousands, or millions of boxes, then you have to be very careful — correcting a bug is very expensive. You need a good compiler helping you find mistakes. However, if you’re building an application for the Web, I can see why dynamic, scripting languages make so much sense. They’re fun and flexible (letting you build new features quickly, as Paul Graham describes), and fixing a bug is cheap and easy. If there’s only one copy of the code, it’s as easy as fixing a piece of code for yourself.
First-time programmers should only be writing code for themselves. It should be a fun, personal, engaging experience. They should use programming languages that are flexible and responsive, without a compiler yelling at them. (My students using Java always complain that “DrJava’s yelling at me in yellow!” when the error system highlights the questionable line of code.) But they should also be told, in no uncertain terms, that they are not creating code for others. If they want to produce application software for others, they need to step up to another level of discipline and care in what they do, and that usually means new tools.
I still strongly believe that the first course in computing should not be a course in software engineering. Students should not have to learn the discipline of creating code for others, while just starting to make sense of the big ideas of computing. The first course should be personal, about making code for your expression, your exploration, and your ideas. But when students start building code for others, engineering practice and discipline is required. Just don’t start there.
The Learning Process for Education Research
One of the more influential projects in physics education (and the learning sciences overall) was the effort by Jill Larkin and Herb Simon to characterize how students and experts solved physics problems. They found that students tend to look at physics problems at a shallow level, while experts see deep structure. Students tend to look at what variables are present in the problem, match them to the equations given in class, and see what they can compute with those variables. Experts look at a problem and identify the kind of problem it is, then work out a process toward a solution.
My son is currently taking AP Physics, and I’m seeing this same process when he asks me for help. My dissertation work was about teaching students kinematics by having them build simulations, so I’m familiar with some of the content. I’m no expert, but I am a bit closer to one than my son. Matt brought me a problem and started with, “I can figure out delta-Y here, but can’t see why that’s useful.” He knew the equation that matched the variables. I drew a picture, then figured out what we needed to compute. I then remembered the wrong equation (evidence that I’m no expert) and came up with an answer that clearly couldn’t be right. (Kudos to Matt for realizing that!) Matt then figured out the right equation, and we came up with a more reasonable answer. I worked from the problem situation to an equation, and Matt started by looking for an equation.
I’ve been seeing this same process lately in how people come to understand education research. I’m teaching an undergraduate and graduate (joint) class on educational technology this semester. (We just started class last week.) In the first week, I had them read two chapters of Seymour Papert’s Mindstorms; the paper “Pianos, not Stereos” by Mitchel Resnick, Amy Bruckman, and Fred Martin; and Jeannette Wing’s “Computational Thinking.” I started the class discussion by asking for summary descriptions of the papers. A Ph.D. student described Jeannette’s position as “Programming is useful for everyone to understand, because it provides useful tools and metaphors for understanding the world.” I corrected that, to explain that Jeannette questions whether “programming” is necessary for gaining “computational thinking.” The student shrugged off my comment with a (paraphrased) “Whatever.” For those of us who care about computing education, that’s not a “whatever” issue at all — it’s a deep and interesting question whether someone can understand computing with little (or no?) knowledge of programming. At the same time, the student can be excused for not seeing the distinction. It’s the first week of class, and it’s hard to see deep structure yet. The surface level is still being managed. It’s hard to distinguish “learning programming” and “learning to think computationally,” especially for people who have learned to program. “How else would you come to think computationally?”
This last week, we’ve been reviewing the findings from the first year of our Disciplinary Commons for Computing Educators, where we had university and high school computer science teachers do Action Research in their own classrooms. Well, we tried to do Action Research. We found that the teachers had a hard time inventing researchable questions about their own classrooms. We ended up scaffolding the process by starting out with experimental materials from others’ studies, so that each teacher could simply pick the experiment that he or she felt would be most useful to replicate in his or her classroom. We then found that the teachers did not immediately see how the results had any implication for their own classrooms. It took us a while to get teachers to even ask the questions: “The results show X (e.g., most students in my classroom never read the book). What does that mean for my students? Does that mean X is true for all my students? Should I be doing something different in response?”
These results aren’t really surprising, either — at least in hindsight. High school and university teachers have their jobs not because they are expert at education research. University researchers typically are expert at some computing-related research, not computing education research, and a general “research perspective” doesn’t seem to transfer. Our teachers were looking at the surface level, and it does take some particular knowledge about how to develop researchable questions and how to interpret results into an action plan afterwards.
Education research is a field of study. I’ve been doing this kind of work for over 20 years, so you’d think I’d have realized that by now, but I still get surprised. Simply being a teacher doesn’t make you an expert in education research, and being a domain researcher doesn’t make you an expert in education research in that domain. It takes time and effort to see the deeper issues in education research, and everyone starts out just managing the surface features.
New social networking site for CS teachers
Helene Martin has just started a new social networking site for CS Educators: http://csteachers.ning.com/ The focus is on K-12 computing education, but is inclusive of university faculty, particularly those with an interest in making introductory classes better, in research, and in outreach. Helene is especially interested in getting media computation teachers involved, to talk about what assignments and activities were particularly successful.
Please do visit her site to join up, and pass this link along through your social networks so others can find it.
“Exploring Wonderland” is out: Encouraging transfer between Alice and Java
I just got my copy of Exploring Wonderland, the new book by Wanda Dann, Steve Cooper, and Barbara Ericson.
I’m really interested to see how this book works in classrooms. As the title suggests, the book integrates Alice and Java programming with Media Computation. It’s not 1/2 Alice and 1/2 Java. Rather, both are integrated around the context of storytelling. You might use Media Computation to create grayscale images, sounds at different frequencies, or echoes for your Alice stories. Or you might use Alice to create perfect greenscreens for doing chromakey in Media Computation. Students can put themselves into an Alice movie, or take Alice characters and have them interact with live action video. This isn’t Java to learn Java. This is Java as the special effects studio for Alice storytelling.
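For readers who haven’t seen Media Computation code, the grayscale effect mentioned above looks roughly like this in JES-style Python. This is my sketch, not code from the book, and it assumes the JES environment, where media functions like getPixels and makeColor are built in:

```python
# Make a picture grayscale, MediaComp-style: average each pixel's red,
# green, and blue, then write that luminance back into all three channels.
def grayscale(picture):
    for p in getPixels(picture):
        lum = (getRed(p) + getGreen(p) + getBlue(p)) / 3
        setColor(p, makeColor(lum, lum, lum))
```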
The order of the book goes back-and-forth. First, students use Alice to learn about variables and objects, then they do the same thing with turtles in Java. Back to Alice for iteration and conditionals, then see the same things in Java. There’s a real effort to encourage transfer between the two languages.
That explicit effort to encourage transfer within a context is what makes this book so interesting. Efforts that I’ve seen at Georgia Tech to teach two languages in a first course have failed. It’s just too hard to learn any one thing well enough to get it to transfer. The advantage of a contextualized computing education approach is that it encourages higher time-on-task — we know from studies at multiple schools with multiple contexts that students will do more with the context if they buy into it, if they’re engaged. Will storytelling work to get students to engage so that the first language is learned well enough to transfer to the second? And if so, do the students end up learning more because they have this deeper, transferable knowledge?
How will you know that American universities have collapsed?
In Jared Diamond’s book Collapse, he responds to a student’s question, “How could the person who cut down the last tree on Easter Island do that? How could he finish the deforestation of his island?” Diamond responds that the cutter really didn’t know that that was what he was doing. Deforestation had to have been going on for years, and the economy and society had to adapt to the lack of trees. By the time the last tree was cut down, it was probably just a sapling that was being cleared for some other purpose, like growing something else. His point is that collapse comes gradually and is hard to identify when it’s happening.
I thought of that today while watching TED video podcasts. I watched Alan Kay’s and Dan Dennett’s talks in the same morning workout. Alan points out what is missing from our educational system, and how he and his colleagues have brought some of that discovery and scientific insight into the LA schools where he’s been working. Alan mentions in passing how the previous day’s video, on how molecules combine, was flawed in that it didn’t show the seething mass of molecular movement that makes the combination occur. Dan’s talk starts out by describing how Charles Darwin reasoned upside-down: Darwin saw that the Absolute Wisdom of the world’s design emerged from Absolute Ignorance — just a seething mass of organisms.
Which got me to wondering if maybe our education system isn’t broken at all. Maybe the seething mass of different people trying different things, with teachers flying in all kinds of directions, with experimental curricula and variations on standardized curricula, with many more failures than successes, is actually exactly what you want in order to create a diversity of people, including the few geniuses we need to keep things going. But being a critical kind of guy, I also got to wondering, “How would I know if that’s wrong?”
What would it look like if things really were collapsing in our educational system, and it wasn’t just a creative, chaotic mess? Niall Ferguson, in his book Empire, recommends that the new American Empire (us) should look to the last great Empire (the British) to learn from their lessons. He says that it doesn’t matter whether we really have ambitions of Empire — we basically have one, and other nations treat us like an imperial power. Therefore, we should heed the lessons. What were the signs that the British Empire was fading? What would we look for as signs that the American Empire is fading? And in a smaller version of that, what would be the sign that the American education system is failing, and isn’t just a seething mass where the right thing does emerge from anarchy? The analogy isn’t as strong, since the British education system didn’t collapse, and may be much better than the American one. It is the case that more foreigners flood into American universities than British ones (by the last numbers I saw), so maybe that’s a sign that we are not failing. How would we know? Maybe we won’t, even after the last tree is cut down.
New Scottish Computer Science Curriculum
Interesting blog post on the CACM site: http://cacm.acm.org/blogs/blog-cacm/37565-what-does-a-computer-scientist-do/fulltext. Scotland is developing a new computer science curriculum, from grade school up through higher education. Their “Big Ideas” are pretty similar to the ones being developed for the new AP “Computer Science: Principles.” They’re much more about what is possible with a computer than about how one uses a computer or how a computer works.
The Whole Package Matters
The enormous discussion on “Lisp and Smalltalk are dead” (latest count: over 2,900 views) has spawned a parallel thread within the College of Computing. One of the points that I found particularly interesting is the discussion of Vinny’s comment, about how Smalltalk wouldn’t be so bad if he could use it in the editor of his choice.
At first, I thought that was an entirely orthogonal issue. What does the editor have to do with the expressiveness of the language? Amy Bruckman called me on that, and now I see her point. The user interface does matter. How well the interface supports the language does matter. One of the biggest complaints that students had with Squeak when I taught it was the user interface. Complaints ranged from how unpleasant the colors were (which were changeable, but as Nudge points out, when the default doesn’t work well, people aren’t willing to make the choice) to how hard it was to find the menu item you wanted. I chalked that up to being part of the learning curve, but maybe that’s the point.
I’ve been exploring some other languages recently, like Ruby, Scala, and various Lisp/Scheme implementations. I’m surprised at how few of these have editors or IDEs that come with them. (With the great exception of DrScheme, which is my favorite pedagogical IDE.) Most of these have some kind of Eclipse plug-in, which doesn’t do me any good at all. I have never been able to get Eclipse to install properly, and never got my head around how to use it. On the other hand, they all have Emacs modes, too. I can use Emacs. I’m not great at it (I’m a vi guy from my internship at Bell Labs in 1982), but I can use it. And for the most part, it’s all the same then — it’s reliable and relatively consistent, whatever language I’m playing with.
Several years ago, Don Gentner and Jakob Nielsen wrote a great paper called The Anti-Mac Interface. They considered what kind of interface you’d get if you reliably broke every one of the Mac’s UI guidelines. They found that the result was a consistent and powerful user interface. It was no longer good for novices doing occasional tasks, but it was great for experts doing regular tasks who wanted shortcuts and macros.
Nudge points out that the surface level matters, and if that isn’t smooth, people are discouraged from looking deeper. The user interface level of these tools matters, and if it’s not understandable, nobody gets to the expressiveness.
Lisp and Smalltalk are dead: It’s C all the way down.
Georgia Tech’s College of Computing is now considering a proposal to remove Smalltalk from the required curriculum in favor of C++. When I got here in 1993, we taught Pascal (mostly) and had required courses in C, Lisp, and Smalltalk. The faculty explicitly valued that students see more than one school of programming thought. I took over the Smalltalk-using course from John Schilling and Richard LeBlanc, and moved it from ObjectWorks to Squeak. When we moved to semesters in 1999, Lisp got dropped, and we’d moved from Pascal to Java as our main teaching language. When we drop Smalltalk (now using VisualWorks), we will have a first semester in Python, and the rest of the required curriculum will be Java, C, C++, and C#. We will explicitly tell students “C and C-like languages are all that there is.”
Why drop Smalltalk? Students and teachers view it as “a dead language, not worth learning.” It is the case that there are concepts in Objects and Design (the name of the course) which can most easily be discussed in C++. C++ is wildly popular in industry, so it’s not surprising that some language-specific techniques have developed, techniques that our students should probably know.
It’s reasonable to teach a course on object-oriented analysis, design, and programming in C++ rather than Smalltalk. I’m more disappointed that we will have a curriculum that is all about C.
Richard Gabriel has been thinking a lot about the C-ness of our discipline. If you have not read Richard Gabriel’s articles on “Worse is Better,” I recommend them. Dan Weinreb has a nice overview, and there’s a list of all the various pieces in Gabriel’s debate (some of which was with himself!). Gabriel has been trying to understand why Lisp, despite its many measurable benefits over C (e.g., Lisp programmers are more productive and generate fewer bugs, Lisp environments are amazingly powerful, Lisp code is small and runs fast), has so clearly lost the battle over hearts and minds.
Gabriel contrasts two design philosophies, the MIT/Stanford philosophy (which he calls “the right thing”) and the “New Jersey” C/UNIX philosophy (which he calls “worse is better”). In short form, the MIT/Stanford philosophy (which he associates with Lisp, and which I also associate with Smalltalk) is that correctness and consistency are the most important design qualities. In Lisp and Smalltalk, we have S-expressions and objects consistently. The C/UNIX philosophy places simplicity in interface and implementation as the most important design quality.
Python is a mishmash of the two design philosophies. Yes, you get lambda and map/reduce and objects and classes. But you lose the consistency and syntactic flexibility of Lisp and Smalltalk. What’s interesting is that Python, being the least C-like of the popular languages in computing education today, is mostly seen as a language for the NON-computing major. It’s like faculty are saying, “Oh sure, those simpler and more conceptual ways of programming are fine for people who won’t be real programmers. For our students, it’s C all the way.”
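To make the mishmash concrete, here’s a small illustration of my own (not Gabriel’s): the same sum written twice in Python, once in a Lisp-ish functional style and once as New Jersey-style imperative statements, both idiomatic in the one language.

```python
from functools import reduce

# Lisp-style: one composed functional expression.
print(reduce(lambda a, b: a + b, map(int, "1 2 3".split())))  # 6

# C-style: the same sum as a sequence of statements.
total = 0
for token in "1 2 3".split():
    total += int(token)
print(total)  # 6
```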
I don’t dispute that the Unix/C philosophy has won the marketplace. I worry about only teaching that to our students. I think it’s important for Computing majors to understand Gabriel’s debate, to understand what’s valuable about Lisp, Smalltalk, APL, and other high-power, lots-done-in-few-lines-of-code, flexible languages and environments. We want our students to be thought leaders, to think about possibilities that aren’t currently in vogue in the marketplace. They should know the lessons of history, to avoid repeating mistakes and to resurrect old ideas when technology and opportunities fit.
The political forces are lined up to make the Georgia Tech change likely. In comparison with the departments that I had contact with this summer, we’re late. The C-only train has left the station. Few departments teach surveys of programming languages anymore, and I don’t know of any department that teaches a required course in history of computing. I worry about what this means for our discipline. Are we really going to tell students that the peak of human expressibility for computation was in 1973? That all programming language research from here on out is wasted energy? That simplicity is all that we can ever hope for, and correctness and consistency just aren’t worth working on? Are we forever stuck with 30+ year old ideas and don’t even teach that anything else is possible?
Fashion counts: Cell phones vs. Calculators
My advisor, Elliot Soloway, appeared in the Atlanta Journal Constitution this week, which made me proud. Education columnist Maureen Downey wrote a piece on “Cellphone as Teacher” in which she talked about Elliot and his quest to make cell phones into useful and powerful educational tools. The idea is to “capitalize on children’s natural affinity for technology and the omnipresence of cellphones.” The article talks about how the cell phones might be used: “Students measured the area of a school hallway, recorded the geologic stages of the rock cycle and found mean, median, mode and range from a group of numbers. They sketched and even animated on the phones.”
My kids started school this week (Georgia starts waaay early), so I’ve been spending lots of time in Target and Office Depot picking up school supplies — including calculators. Have you looked at calculators lately? They are amazingly powerful! A $30 calculator provides a list interface to input sets of numbers, and then does regression analysis and solves simultaneous equations. The $100 calculator that’s required for the high school does graphing, animation, and includes a digital periodic table. These calculators can easily be used for everything Maureen describes. A $100 calculator is way cheaper than a cell phone plus minutes. There’s a huge amount of curricular materials for calculators, and the teachers now do welcome calculators into the classroom, unlike cell phones. Maureen quotes Elliot saying, “Now, we truly, finally have personal computers that are going to fit in our pockets.” Calculators have been there for years.
So why not push calculators, rather than cell phones? They are cheaper, more powerful, the curricula already exist, and teachers already accept them. I’m pretty sure that I know how Elliot would answer: you start from where the kids are. Calculators are not cool, are not interesting. They are out of fashion. As Maureen’s piece says:
“Laptops are very ’90s,” says University of Michigan researcher Elliot Soloway. “They are your daddy’s computers.”
He might as well have said, “Calculators are very ’80s. They’re your grandfather’s computers.”
I think about that with respect to computing education (and the next blog post I’m planning). I’ve argued that no student gets engaged anymore by seeing the words “Hello World!” appear on the screen. In MediaComp, the equivalent of “Hello World!” is to open a picture and play a sound. That’s a minimally interesting unit of computation. But what will it be next year? In five years? In ten years?
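Concretely, inside JES the “open a picture and play a sound” moment is only a few lines. This sketch uses the built-in JES media functions and assumes the JES environment rather than plain Python:

```python
pict = makePicture(pickAFile())   # let the student pick an image file
show(pict)                        # the picture appears on screen
play(makeSound(pickAFile()))      # pick a sound file and hear it
```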
In contrast, I look at my kids’ math books, and social science texts, and even science books. I recognize the pedagogical methods, even some of the figures and diagrams. There is change there, but there is also a sense for what makes education work.
Will we ever get there with computing education and educational technology? Our field is so influenced by fashion, by the latest and greatest thing. What’s cool engages. What’s out of fashion is rejected by students. Why does fashion seem to influence other disciplines less? Maybe it does influence engagement there, too, and not changing is a downfall. On the other hand, there are lots of kids taking Calculus AP, and few taking CS AP. Math Ed seems not to be a slave to fashion. How do we get to the point where we can talk about computing education that works, period, and that we can keep using for decades? Or does the continual upheaval in the field force us to always be on a treadmill of creating the next trendy educational technology or computing education initiative, none of which will last long?
Chemistry is to Biology as X is to Computing
Alan Kay asked in a comment thread on a previous post:
What is the equivalent of molecular basis of life, how and why chemistry works, and why evolution should be plausible — that cannot be omitted from a first course?
I haven’t taken biology since 1978, so I admit at the start that I don’t really know what goes into Modern Biology classes. I have been thinking some (mostly in my role on the APCS Commission to design the “Computer Science: Principles” class) about what the big/key/supporting (different adjectives get used at different times) ideas of Computer Science are. Here’s a shot at the computing equivalent of the “molecular basis of life.”
- We know of several equivalent models for what is computable, for what it is possible to have a machine calculate, for what mathematics is capable of describing. One of these is particularly useful because it can be implemented in terms of electronics, i.e., lots and lots of transistors.
- This model requires the use of a finite memory store that contains numbers — just integer numbers in a fixed range. Everything that we would want to compute with, from real numbers to digital video, can be turned into these small, fixed integers.
- We have a calculating unit that is capable of taking instructions from this store. These instructions tell the calculating unit to read numbers, to do simple mathematics on these numbers, to store numbers into different parts of memory, to change which instruction is being used next (so that instructions might repeat), and to choose which instruction to use next based on a decision about two numbers (e.g., whether they’re equal, whether one is greater than another).
- Everything that the computer is capable of, from Twitter and Facebook to making shiny bumpers in Pixar’s “Cars,” from describing how proteins might fold to predicting the weather or economy, is built up from this simple model of a calculating unit with a memory store, implemented on millions of transistors that get cheaper and cheaper.
That’s a strawman. It did occur to me to suggest an object model, or a functional model, or even a rule-based model as the “molecular basis of computing.” But as Yale Patt suggests, there is something compelling about the connection to the physical circuitry, to explaining all of computing in terms of light switches.
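As a toy illustration of that strawman, here is the whole model in a few lines of Python, with an invented three-instruction machine (not any real architecture): one finite store of small integers holding both instructions and data, and a calculating unit that fetches and executes them.

```python
# Opcodes (invented): 0 = HALT; 1 = ADD a, b, dest; 2 = JUMP-IF-EQUAL a, b, target.
def run(memory):
    pc = 0  # index of the instruction to use next
    while True:
        op = memory[pc]
        if op == 0:                                      # stop computing
            return memory
        elif op == 1:                                    # memory[dest] = memory[a] + memory[b]
            a, b, dest = memory[pc + 1:pc + 4]
            memory[dest] = (memory[a] + memory[b]) % 256 # stay in a fixed integer range
            pc += 4
        elif op == 2:                                    # pick the next instruction by comparing
            a, b, target = memory[pc + 1:pc + 4]
            pc = target if memory[a] == memory[b] else pc + 4

# Program and data share one store: add cells 9 and 10 into cell 11, then halt.
print(run([1, 9, 10, 11, 0, 0, 0, 0, 0, 40, 2, 0]))     # cell 11 ends up 42
```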
While I am making a pass at Alan’s challenge of a “molecular basis of computing,” I’m not at all addressing the other part of the challenge, “that cannot be omitted from a first course.” In fact, very few intro computing approaches (other than the Patt and Patel book) start from the hardware first. In Computer Science (and our Computational Media) classes, this is a Sophomore-level class that introduces this bottom-up view of computing. Even Computer Engineering programs that use the Patt and Patel book use it at the Sophomore level. The argument is that first year students want to do something meaningful with the computer, and then we can explain how the computer works. I find that a compelling argument.
Perhaps the “molecular basis of computing” is something we sneak up on, in a spiral fashion (as we do in mathematics and as Ben Shneiderman suggested years ago for computing education). In our Media Computation CS1, we do spend a lot of time on item #2 on this list — we talk about how each medium is digitized into bytes. At the end of the class, we spend time on items #1 and #4 — how there are multiple models of computation (e.g., objects and functional), and that everything is built up from these fundamental units. The molecular basis of computing may be something that we introduce where it makes sense to the student, we revisit often, and eventually focus on explaining it when the student understands the role of the model.
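As an aside on that “digitized into bytes” step from the MediaComp CS1, here’s the idea in a few lines of plain Python, with made-up values: a sound amplitude and a pixel each bottom out as small, fixed-range integers.

```python
sample = 0.25                    # a real-valued sound amplitude in [-1.0, 1.0]
as_int = int(sample * 32767)     # one 16-bit audio sample: 8191
red, green, blue = 200, 100, 50  # one pixel: three integers, each 0-255
print(as_int, list(bytes([red, green, blue])))
```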
New report on on-line learning from US Dept of Ed
A new report from the US Department of Education is touting the effectiveness of on-line courses as compared to face-to-face classes. Note that there’s a significant flaw in the meta-analysis, one acknowledged in the Dept of Ed report (page xvii in the Executive Summary) but not in the “Inside Higher Ed” article: the meta-analysis did not consider failure/retention rates, because too few of the studies controlled for them. Another meta-analysis, which appeared in the Review of Educational Research a couple years ago, found that on-line courses have double the failure rates of face-to-face classes. If you flunk out twice as many students, yes, you do raise the average performance: you have fewer students left, and they’re the ones who scored higher. Face-to-face classes have the advantage of being a regular, constant pressure to stay engaged, to keep showing up.
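The selection effect is easy to see with made-up numbers (mine, not from either meta-analysis): give two courses the identical score distribution, fail twice as many students in one, and the survivors’ average in that course comes out higher.

```python
scores = list(range(40, 100))              # identical performance in both courses
f2f    = [s for s in scores if s >= 50]    # face-to-face: 10 of 60 fail
online = [s for s in scores if s >= 60]    # on-line: 20 of 60 fail

def mean(xs):
    return sum(xs) / len(xs)

print(mean(f2f), mean(online))             # 74.5 vs. 79.5, survivors only
```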
The grand challenge of on-line learning is how to motivate students to complete the course without raising costs (e.g., through the teacher spending more time on-line, through production of higher-quality materials, etc.).
Questioning the report that High School CS is declining
The T.H.E. Journal has a report on the results of the 2009 CSTA Teacher Survey. The results are pretty dire: not only has the number of students enrolled in computer science dropped significantly in the last four years, but so has the number of AP computer science courses offered at high schools. The specific numbers are stark. “Only 65 percent [of survey respondents] reported that their schools offer introductory or pre-AP computer science classes. This compares with 73 percent in 2007 and 78 percent in 2005. Only 27 percent reported that their schools offer AP computer science. This compares with 32 percent in 2007 and 40 percent in 2005.”
I have questions about these findings, though. It may very well be that high school CS is declining, but I suspect that the truth is a little more complicated than the T.H.E. Journal article is stating.
The 27% of schools offering APCS seems too high to be a national average. We know from College Board data that Georgia, at 22%, has a higher percentage of high schools offering APCS than other states in the Southeast, such as Alabama, Florida, South Carolina, and North Carolina. The rest of the country is so high that the average is 27%?
There is a seeming contradiction in the article that helps to make sense of the results. The first line says, “the number of students enrolled in computer science has dropped significantly in the last four years.” Later, though, it says, “among schools that offer CS courses, enrollments have not seemed to change much over the last three years. Of those participating in the survey, 23 percent reported that CS enrollments have increased; 22 percent said CS enrollments have decreased; and 55 percent reported no real change in enrollments.” So where’s the enrollment drop? The inference I make is that enrollments have dropped because the number of schools offering CS has declined, while at the schools that still offer it, enrollment has stayed the same.
These results lead me to more questions. According to the College Board, the number of students taking the APCS Level A exam has risen each of the last four years. Where are those additional test-takers coming from, given the declines reported in the surveys? If the number of schools offering APCS has declined at the survey respondents’ schools, and the enrollments at those schools are flat, yet the number of test-takers has risen, then either the percentage of kids going on to take the test has increased, or…the growth is happening where the survey is not looking.
According to the CSTA report, the survey was administered “to 14,000 high school teachers who defined themselves as computer science, computer programming, or AP computer science teachers.” I wonder if that’s why the numbers aren’t quite making sense. This explains why the percentage offering APCS seems so high — we’re talking to the teachers of CS, not sampling all schools. This may also explain why there seems to be more high school CS than is reported. We have found in Georgia that teachers teaching computer science sometimes (maybe even “often”) define themselves as business or math teachers, not computer science teachers. If your training is in mathematics education, and you only teach one or two computer science classes, it’s not surprising that you would see your identity as a math teacher, not a computer science teacher.
Given the focus of the CSTA survey, it may be that there is growth in APCS, but only in those schools that are new to teaching computer science and don’t have teachers who define themselves as computer science teachers. Further, there may be a decline in the number of high school CS classes nationwide, but the CSTA report really only reflects those schools that have had high school CS in the past. Again, high schools new to CS, or with new teachers, may not be included in these numbers.
So the survey result is not really about high school CS overall — that’s not who was surveyed. The article is making claims about “high school CS” which really can only be about “schools that have self-described CS teachers.” The survey raises important questions about why CS should be declining at the places where it used to succeed! The real story is in the changes in survey responses over the years, not as a measure of high school CS nationwide.
We need a survey of a sampling of high schools nationwide, not just those with a teacher who claims the role of “computer teacher.” High school CS is changing, and it may be that the action is in the new schools with the new teachers just starting to construct their own identity as teachers.