Archive for June, 2011
I didn’t know that Engineering was a “foundation subject” and “compulsory” in the UK elementary school curriculum. The article below makes it sound really great. The part that I’m wondering about is whether the UK is having more luck filling their STEM classes than we are in the US. My perception was that the whole of the Western world was having trouble enticing kids into STEM. If the UK has had engineering in the elementary school curriculum for 20 years, but is ending up with the same problems getting students to major in STEM, maybe that weakens the argument that we should put Engineering and Computing into elementary school to bolster enrollments.
The way the U.K. teaches engineering is a lot more exciting these days. The curriculum has evolved from making matchbox holders in woodworking to designing circuit boards and electronics. Design and Technology, D&T, was introduced around 20 years ago and takes a holistic approach to learning. Science and math principles are taught through hands-on activities, not through rote learning. Students learn by making things, making mistakes and learning from both. D&T can help shape the next generation of engineers.
While D&T is growing in the U.K., it’s all but absent in the U.S. At a time when engineering is in such high demand, D&T should be considered as part of the school day. Science and engineering vacancies are anticipated to grow 70 percent faster than other jobs, but there won’t be enough qualified people to fill them. With China trending to overtake the U.S. as the number one economy, ensuring the next generation is equipped with the skills needed to engineer the future is paramount.
This is exactly the problem that Allan Collins was describing in his AERA talk this last Spring. The Internet is deeply divided along economic lines. His concern was that open learning created opportunities for the rich but not the poor, and removed the “compulsory” subjects that created a sense of civic duty.
A study from the University of California, Berkeley, suggests the social Web is becoming more of a playground for the affluent and the well-educated than a true digital democracy.
Despite the proliferation of social media — and recent focus on sites like Twitter and Facebook playing pivotal roles in such pro-democracy movements as the Arab Spring — most blogs, Web sites and video-sharing sites represent the perspectives of college-educated, Web 2.0-savvy affluent users, a UC Berkeley release said Tuesday.
“Having Internet access is not enough. Even among people online, those who are digital producers are much more likely to have higher incomes and educational levels,” said Jen Schradie, a doctoral candidate in sociology at UC Berkeley and author of the study.
May I whine?
I just got the rejection and reviews from my NSF Cyberlearning proposal. I was proposing to work on inquiry learning in CS Education, i.e., to conduct studies to explore what questions students had about computing, and with some technological probes, see if students could be prompted to have more questions about computing. I had applied for a smallish, two year grant under the “Exploration” category which is to “explore the proof-of-concept or feasibility of a novel or innovative technology or use of such technology to promote learning.”
As I read it (completely biased as I am in interpreting these), I was rejected for basically two reasons. First, I didn’t make the case strongly enough that this proposal was “potentially transformative.” That was my fault. I strongly believe that we do not teach computer science via inquiry today, and the case for inquiry learning is very strong, so moving computer science education to an inquiry-based model is potentially transformative. But if you don’t know CS education (and so don’t know that it isn’t inquiry-based) or don’t know the science education literature (and so don’t know the results on inquiry learning), that may not be obvious. It was my job to convey that, whatever the reviewers’ backgrounds, but I clearly wasn’t successful.
The second reason is aggravating. I applied under the “Exploration” category. Here are quotes from my reviews.
- From the panel summary: “However the project would be stronger if it first conducted a pilot study of such questions and used those findings to inform the design of the technological innovation.”
- Reviewer #1: “The project outcomes would be stronger if they included at least some preliminary evidence that some learning is occurring as a result of these activities, and that this learning matches some set of predictions.”
- Reviewer #2: “It seems that the PI could simply ask students some questions about this as a pilot or preliminary study.”
I thought that the whole idea of having the “Exploration” track was to fund preliminary work. That’s what I was proposing to do. I was rejected because I had not yet done preliminary work. I am willing to believe that everyone acted correctly and in good faith, e.g., the reviewers were well-chosen, well-informed, and evaluated proposals according to the proposal solicitation. But that means that the bar for “proof-of-concept” is really quite high. I was expected to have done enough preliminary work that the “proof” was already pretty obvious from previous studies.
At a higher level, beyond Guzdial whining, this is an example of what Rich DeMillo calls the “Cost of Sale.” This is the cost of developing the proposal. Here I am applying for “Exploratory” funding, and I’m being told that I need to do some exploration first. That’s “cost of sale.” What if the “Exploration” failed? Then that’s research cost that was not supported by an external funder. Whatever an external funder might later provide would not cover those earlier costs for the preliminary or pilot work. This is one of Rich’s top ten reasons why Universities lose money on research.
Okay, back to figuring out the next proposal…
Interesting interview from the Wall Street Journal. I agree with Mayer that we need more people of all shapes and sizes, and that will get us more women. I agree with Fitzpatrick about the “soft evidence” that social media is drawing more people into CS — but I’d like to see harder evidence.
Ms. Mayer: There is a decline in women graduating in computer science and engineering and that is concerning, but the simple fact is we just aren’t producing enough computer scientists, so I view it as less of a gender issue. So if we can produce more computer scientists the absolute numbers of women will grow and we’ll achieve more balance.
WSJ: Has the growth of powerful, consumer-facing Internet companies such as Facebook Inc. attracted more women to the field?
Ms. Fitzpatrick: I see soft evidence of it all the time when I go out and talk to women in computer science at universities. There is so much real evidence of what you can do with a mobile phone and a developer tool kit; you can create magic, and it makes it feel much more attainable.
I am leaving tomorrow afternoon for Frankfurt, Germany, and from there to Darmstadt for the ACM SIGCSE ITICSE 2011 conference. I’m giving the last day keynote talk (Wednesday), on Technology for Teaching the Rest of Us — it’s a variation on my “Computing for Everyone” talk, where I emphasize the kinds of technology we might build to help us to reach universal computational literacy.
The motivated student is easy to teach. You facilitate learning and get out of the way. It’s much more challenging to teach the student who is less motivated, or who needs knowledge to support their main interest. Think of the graphics designer who chooses to learn scripting to make their job easier, but doesn’t want to learn to “program” and whose many (simple) mistakes cost valuable time. Think of the secondary-school business teacher who wants to teach computer science, but who doesn’t want to learn to be a professional programmer. The number of people who need some knowledge of a domain may be much greater than those who need expertise in that domain. Providing learning opportunities tailored to the needs and interests of the learner, potentially motivating that interest where necessary, is a great and important challenge in an increasingly technological society. My talk will describe characteristics of these challenges and suggest where computing technologies and computing education research insights may provide solutions.
On Wednesday afternoon, I’m driving to Aachen University with Ulrik Schroeder, who is giving the opening keynote for ITICSE. Ulrik has asked me to speak on Thursday about innovative CS pedagogy, and I’ve decided to give one of my favorite overview MediaComp lectures, on how most of CS can be accessed through a context like digital media, and talk about results at Georgia Tech, UCSD, U Ill-Chicago, and Gainesville College: Using Digital Media to Motivate Learning about Computer Science. I plan to use some pedagogical techniques that I want to emphasize: live coding and peer instruction.
Today’s students live in a world filled with digital media, from listening to music in digital form to viewing YouTube videos and sharing digital photographs. If we teach computer science in terms of only numbers and words, we seem old-fashioned and out-of-touch. Our students understand computing as being primarily about digital media. In this talk, I will present tools and techniques for teaching computer science through the manipulation and creation of digital media. I will present some research results showing the effectiveness of these techniques at improving student engagement and retention.
I’ll be back on Friday July 1. I’m running the Peachtree 10K road race on the Fourth of July — I’ve had a number each of the last three years, but keep injuring myself just before, so I have my fingers crossed that I’m going to make it! On July 5, Barb and I are speaking at the Tennessee Tech University event, TTU-Tapestry.
In case I don’t have time to write blog posts next week, I already have a week and a half’s worth stored up. But I don’t know what my connectivity is going to be like until July 6. Please excuse some ebbs in the ComputingEd flow.
A member of the SIGCSE mailing list asked the other day for recommendations on teaching a course on “HCI or Interaction Design.” We at Georgia Tech teach a variety of undergraduate and graduate courses like that, and I figured that lots of others do, too. I was surprised at some of the responses:
- “Our main theme was that computer scientists should know how to implement interfaces but should not try to design them. Frankly, I’ve not seen any evidence that has changed my mind since then.”
- “My personal experience with over 20 years of teaching GUIs is that CS students can be taught to be quite good at the software development aspects of GUIs, that they can be taught to at least understand good interaction design techniques, but that it does not really resonate with them and they do not tend to do it well, and that most of them are hopeless with respect to artistic design.”
Alan Kay sent me a link to this interesting video. I hadn’t heard of the Ceibal Project before — according to the video, 100% of schoolchildren in Uruguay now have OLPC laptops, and 92% of the schools have Internet access. It’s a big gamble. The Economist says that the project is “less than 5% of the education budget.” The video paints a compelling picture of improving the society through equalized access to information. (The Wikipedia article on the Ceibal Project is interesting in a weird way. It’s decidedly negative in tone, but with odd complaints like potential access to pornography and bacteria being transported on keyboards.)
It’s such a huge project that I wonder how you measure its impact. Do you measure learning at the individual student level, or do you look for larger social trends (e.g., how often computers appear on television, and how often people talk about using computers or seeking a job in IT)? I am also curious how the curriculum changes with the technology. Do we see schools introducing computer science, because now they can?
I don’t disagree with the claim here, that students don’t learn to program that well in one semester of CS. But I think the author doesn’t consider that maybe our expectations in CS2 are too high, too. Programming isn’t learned quickly. If we don’t have it in high schools, most people are going to need multiple semesters in undergrad to become competent, and multiple years for expertise.
In particular, I believe that expecting a student to learn to program well enough to study Computer Science in a single 15-week course is almost as absurd as expecting a student with no instrumental musical experience to be ready to join the university orchestra after 15 weeks. There are, of course, musical prodigies that can handle this challenge. Likewise, there are many “natural born programmers” who learn how to program with very little apparent effort. However, these individuals are the exception, not the rule.
The approach of getting people to use perceptual knowledge, instead of cognition, goes against what I learned in cognitive science. We want people to think about what they’re doing. But I do see the value of this direction, and wonder if we could use this in computing education. Certainly, part of the challenge in learning programming is learning to read programs. Could we help people to learn to recognize patterns in the code usefully, even before they understand those patterns? Would that help in getting past syntax challenges?
For years school curriculums have emphasized top-down instruction, especially for topics like math and science. Learn the rules first — the theorems, the order of operations, Newton’s laws — then make a run at the problem list at the end of the chapter. Yet recent research has found that true experts have something at least as valuable as a mastery of the rules: gut instinct, an instantaneous grasp of the type of problem they’re up against. Like the ballplayer who can “read” pitches early, or the chess master who “sees” the best move, they’ve developed a great eye.
Now, a small group of cognitive scientists is arguing that schools and students could take far more advantage of this same bottom-up ability, called perceptual learning. The brain is a pattern-recognition machine, after all, and when focused properly, it can quickly deepen a person’s grasp of a principle, new studies suggest. Better yet, perceptual knowledge builds automatically: There’s no reason someone with a good eye for fashion or wordplay cannot develop an intuition for classifying rocks or mammals or algebraic equations, given a little interest or motivation.
Our publisher has asked Barb and me to explore making a 3rd edition of our Python Media Computation book, and in particular, they would like us to talk about and use Python 3.0 features. Our book isn’t a generic Python book — we can only use a language with our Media Computation approach if we can manipulate the pixels in the images and the samples in the recorded sounds. Can I do that in Python 3.0?
The trick of our Java and Python books is that we can manipulate pixels and samples in Java. I wrote the original libraries, which did work — but then Barbara saw my code, eventually stopped laughing, and re-wrote them as a professional programmer would. Our Python Media Computation book doesn’t use normal C-based Python. We use Jython, a Python interpreter written in Java, so that we could use those same classes. We solved the problem of accessing pixels and samples only once, but used it with two languages. We can’t use that approach for the Python 3.0 request, because Jython is several versions behind in compatibility with CPython — Jython is only at Python 2.5 right now, and there won’t be Jython 3.0 for some time yet.
We used our Java-only media solution because it was just so hard to access pixels and samples in Python, especially in a cross-platform manner. Very few multimedia libraries support lower levels of access — even in other languages. Sure, we can play sounds and show pictures, but changing sounds and pictures is much more rare. I know how to do it in Squeak (where it’s easy and fast), and I’ve seen it done in C (particularly in Jennifer Burg’s work).
I have so far struck out in finding any way to manipulate pixels and samples in CPython. (I don’t have the cycles to build my own cross-platform C libraries and link them into CPython.) My biggest disappointment is Pygame, which I tried to use last summer. The API documentation suggests that everything is there, but it just doesn’t work, at least not for sound. Pixels work fine in Pygame, but every sound I opened reported a sampling rate of 44100, even when I knew that wasn’t right, and the exact same sound-manipulation code behaved differently on Mac and Windows. I just checked, and Pygame hasn’t released a new version since 2009, so the bugs I found last summer are probably still there.
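For sound, at least, one workaround I can imagine is to skip the multimedia libraries entirely and touch the samples with nothing but the standard-library wave and array modules. To be clear, this is a hypothetical sketch, not our book’s API: it assumes 16-bit WAV files, it works offline on files, and it gives no playback at all, which is part of why it wouldn’t be sufficient for Media Computation:

```python
# Hypothetical sketch: sample-level sound manipulation in Python 3
# using only the standard library (wave + array). Assumes 16-bit
# samples; no playback, just read, change, and write WAV files.
import array
import wave

def scale_volume(frames, factor):
    """Scale raw 16-bit signed samples (given as bytes) by factor."""
    samples = array.array('h', frames)   # 'h' = signed 16-bit
    for i in range(len(samples)):
        samples[i] = int(samples[i] * factor)
    return samples.tobytes()

def halve_wav_volume(in_path, out_path):
    """Read a 16-bit WAV file, halve every sample, write the result."""
    src = wave.open(in_path, 'rb')
    params = src.getparams()
    frames = src.readframes(src.getnframes())
    src.close()
    dst = wave.open(out_path, 'wb')
    dst.setparams(params)
    dst.writeframes(scale_volume(frames, 0.5))
    dst.close()
```

This is the kind of byte-level access I mean: once the frames are in memory, a student-written loop over the samples is all it takes.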
What I don’t get is why libraries don’t support this level of manipulation as a given; it seems simply obvious to me. Manipulating pixels and samples is fun and easy — we’ve shown that it’s a CS1-level activity. If the facilities are available to play sounds and show pictures, then the pixels and samples are already there – in memory, somewhere. Just provide access! Why is computing with media so rarely supported in programming languages? Why don’t computer scientists argue for more than just playing and showing from our libraries? Are there other languages where it’s better? I have a book on multimedia in Haskell, but it doesn’t do pixels and samples either. I heard Donald Knuth once say that the hallmark of a computer scientist is that we shift our work across levels of abstraction, all the way down to bytes when necessary. Don’t we want that for media, too?
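To show how little machinery the pixel side needs once access is provided, here is the classic CS1-level loop on an in-memory picture, represented simply as a list of (r, g, b) tuples. This is an illustrative sketch, not our library’s API, and it deliberately omits the hard part this post is complaining about: actually getting those tuples out of an image file, cross-platform.

```python
# Hypothetical sketch: the CS1-level pixel loop, on a picture
# represented as a plain list of (r, g, b) tuples. Decoding an
# image file into this form is the part libraries rarely provide.
def to_grayscale(pixels):
    """Turn each (r, g, b) tuple into a luminance-weighted gray pixel."""
    gray = []
    for (r, g, b) in pixels:
        # standard luminance weights for converting color to gray
        lum = int(0.299 * r + 0.587 * g + 0.114 * b)
        gray.append((lum, lum, lum))
    return gray
```

That loop is the whole lesson; everything a student needs is a sequence of pixels and a way to put the changed ones back.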
So, no, I still have no idea how to do media computation with Python 3.0. If anyone has a suggestion of where to look, I’d appreciate it!
I find intriguing the various claims about brain “calisthenics” or “exercises” that have learning benefits. I’m dubious, but curious about the claims. This one is interesting, not because it showed a positive effect, but because it lasted for “up to three months.”
Three months? That’s a lot longer than working memory, and long-term memory isn’t supposed to have an expiration date. Roger Schank has argued that there must be more than just two levels in the memory hierarchy. (A great example: Why is it that, on a multi-day trip, you can remember your hotel room number or rental car description, but can’t remember them the next week?) This result seems to be influencing one of those middle levels, but isn’t changing long-term memory permanently.
In an award address on May 28 at the annual meeting of the Association for Psychological Science in Washington, D.C., University of Michigan psychologist John Jonides presented new findings showing that practicing this kind of task for about 20 minutes each day for 20 days significantly improves performance on a standard test of fluid intelligence—the ability to reason and solve new problems, which is a crucial element of general intelligence. And this improvement lasted for up to three months.
Interesting reaction in the NYTimes to the NYTimes article about the rise in enrollment. It’s good to have these kinds of debates, and this piece echoes the concerns about cyclical enrollments that are appearing on the SIGCSE mailing list.
We’re in the middle of a new bubble now, with a fresh set of millionaires. There is little doubt that this will burst and enrollments will drop again. And we’ll have another generation of students who joined computer science for the wrong reasons.
If we want a real Sputnik moment, we need to create the same demand — and excitement — we had for engineers and scientists in the ’60s, when it seemed that the nation’s survival was at stake. Parents encouraged their children to become scientists; the president told us it was a national priority; and we made huge investments. Science was sexy, chic and essential.
Whoa! The NSDL is being cut off from funding? What does this mean for the Ensemble CS Education Portal? At the end of the article, the author suggests that MIT OpenCourseWare and Yale’s Open Courses are facing “questions of financial sustainability.” How do we keep the digital equivalent to the public libraries open?
The National Science Digital Library had ambitious goals when it started in 2000: create a massive open repository of STEM learning materials culled from projects funded by its benefactor, the National Science Foundation; then organize these materials so that they could be easily cherry-picked and used by science and math instructors, from higher ed all the way down. The NSF poured well over $100 million into the project.
Just over a decade later, the science digital library is on death row. It is set to be stripped of all funds in 2012, “based in part on recent evaluation findings that point to the challenges of sustaining such a program in the face of changing technology and the ways educators now find and use classroom materials,” according to a foundation directorate issued in February.
This makes economic sense. The rising value of the CS degree does lead to the ability to charge more for the CS degree, though that raises the possibility that students may avoid CS to avoid the additional cost.
I am most interested in carrying this idea through to high school, on the supply side. If we’re going to charge more for CS education, we ought to be able to pay high school CS teachers more, because what they are teaching has such high economic value. Raising CS teachers’ salaries would improve our odds of competing with industry. If you know enough to teach Java in AP CS, you also know enough to get a better-paying job than teaching high school. We can never truly match industry salaries, but we can make high school teaching more economically attractive.
With a hot market for their skills and employers who offer top-notch salaries and benefits, should computer science students pay more for their bachelor’s degree than theater or history majors? In Washington state, the answer could soon be yes.
I’ve heard the argument that the Bayh-Dole Act was the downfall of undergraduate education in America. By allowing universities to keep the intellectual property rights to sponsored research, it created an enormous incentive for universities to push faculty into research, and away from education. A recent Supreme Court ruling may have placed a limit on the Bayh-Dole Act, by ruling that an individual researcher’s rights supersede the university’s. The New York Times editorial linked below is disappointed by this ruling, predicting increased tension between universities and faculty.
Looking for a silver lining, I wonder if this ruling might not create the opportunity to get back to education. Rich DeMillo continues to point out in his blog how research is a losing proposition for universities. Could this ruling reduce the incentive for universities to push research, by raising the costs (and lowering the potential benefits) of faculty research? (Rich’s latest blog post on the point directly addresses the nay-sayers who claim that universities make money from research – a recommended and compelling read.)
Although the decision is based on a literal reading of a poorly drafted initial agreement between Stanford and the researcher, it is likely to have a broader effect. It could change the culture of research universities by requiring them to be far more vigilant in obtaining ironclad assignments from faculty members and monitoring any contracts between researchers and private companies. Relationships between the university and its faculty are likely to become more legalistic and more mercantile. By stressing “the general rule that rights in an invention belong to the inventor,” the majority opinion of Chief Justice John Roberts Jr. romanticizes the role of the solo inventor. It fails to acknowledge the Bayh-Dole Act’s importance in fostering collaborative enterprises and its substantial benefit to the American economy.