Archive for February, 2011
Call for papers
THE SEVENTH INTERNATIONAL COMPUTING EDUCATION RESEARCH WORKSHOP
Providence, Rhode Island, USA, August 8-9, 2011
Computing education research is the study of how people come to understand computational processes and devices, and how to improve that understanding. As computation becomes ubiquitous in our world, understanding computing well enough to design, structure, maintain, and utilize these technologies becomes increasingly important, both for the technology professional and for the technologically literate citizen. The study of how the understanding of computation develops, and how to improve that understanding, is critically important for the technology-dependent societies in which we live.
The International Computing Education Research (ICER) Workshop aims at gathering high-quality contributions to the computing education research discipline. Papers for the ICER workshop will be peer-reviewed. For the first time this year, ICER will accept papers in two different categories. They are:
Research papers. 8 pages. As in the past, research papers should include:
A clear theoretical basis, building on existing literature in computing education, computer science, and other related disciplines.
A strong empirical basis, drawing on relevant research methods. Papers that re-interpret and explain others’ empirical results are welcome.
An explication of the paper’s impact on, and contribution to, existing knowledge about computing education.
Discussion papers. 6 pages. Work in progress, or dissemination and discussion of new ideas in Computing Education Research. Discussion papers fail to meet one or more of the criteria for research papers, but have the potential to become exemplary ICER papers if given the opportunity to be presented to and discussed by the community.
All papers should follow the ACM SIGCSE formatting guidelines. Templates for submissions can be found at the ACM SIG Proceedings website. LaTeX users should use option #2 (tighter alternate style) when formatting their document.
Authors may find it helpful to read the review form before finalizing their papers.
Submission deadline: 20 April 2011
Re-submission deadline: 27 April 2011 (*)
Notification of acceptance: 1 June 2011
Deadline for final version: 13 June 2011
Saturday was the first C^3 Conference. It was a great pleasure to sit in the audience and see a parade of good speakers from Georgia walk up to talk about their efforts to improve computing education! We had about 30 high school and university teachers stay in an auditorium on a gorgeous Atlanta Saturday (70F in February!), to talk about their teaching practice.
We’re planning on one more C^3 Conference for 2011. Call for participation is below.
The conference is dedicated to gathering local computing educators, including both undergraduate computing faculty and high school computing teachers, to share best practices in, and build scholarship around, teaching introductory Computer Science. This event is also intended to provide opportunities for collaboration and communication among the participants. The conference is designed to create a forum where local computing educators are able to meet, present, share ideas, and discuss topics of interest about teaching computer science courses.
- Your name, school name, e-mail address, mailing address, and phone number
- The session for which you are submitting the proposal abstract – discussion session or poster?
- Your proposal abstract with title, presenter(s), and a short description of your presentation or poster. If you are submitting a proposal for a presentation, be sure to include a description of your objectives and a short summary of the content of your presentation along with ways of involving the audience in the discussion.
Please register for attendance by Friday, April 8, 2011.
I blogged a few weeks ago about Sally Fincher’s project to gather teaching practice change stories. There’s a deadline on the project of next Friday. Please visit soon and tell her about how you change your practice at
A bunch of University Presidents (is that the right collective noun for a group of University Presidents? A herd? A coven? A flock?) gathered recently to talk about the University of the Future. I found Georgia Tech's president's comments pretty interesting. I'm not sure that the 25-year view really works for a strategic plan: how can we know what's going to be valuable in 25 years, and if we don't know, how can that inform our strategies today? I buy the importance of flexibility (see previous post on sociology and drop-outs), but I think he overstates the importance of technology today. Yes, students have expectations of even more technology, but what's the cost of not meeting those expectations, and what's the cost of encouraging a focus on an oral culture (as Alan has pointed out)? His story below about Georgia Tech using social media to continue classes during our ice storm week is unfortunately false: we were told that faculty could not hold classes or other learning activities (e.g., on Facebook, or via video on Sakai or YouTube) when campus was closed, because campus was closed and students should not be expected to engage in those activities. (Similarly, we've been told that we can make up classes in evenings or on weekends, but we cannot mandate that students attend. So what's the point, then, of "making up"…?) In any case, I do think that we have an interesting mandate to explore the role of technology in extending and expanding the concept of university.
Peterson also talked about how technology is changing the way we live our lives and run our universities. He said Georgia Tech is now developing a 25-year strategic plan. He acknowledges that it's pretty tough to imagine what educational life will be like in 25 years. But when we look back 25 years, we can clearly see why the exercise is valuable. It was roughly 25 years ago when the first PC was available commercially. Now we're texting billions of messages a day around the world. The world is largely transformed in 25 years.
“We must ask what has and what will continue to distinguish our graduates from other graduates around the world,” he said. “We can’t look two, three or five years ahead; we need to look 25 or 30 years ahead.” Universities need to make plans to meet the needs of future students being born today.
Universities have to re-evaluate the way they teach the digital generation, he argued. Young people see technology as part of their everyday lives; they expect a continuation and expansion of this at university.
And technological capabilities can come in pretty handy beyond the daily routine of a university. When a rare winter storm forced Georgia Tech to close down for three days, professors used Facebook, email and Skype to deliver their lectures and stay on schedule.
Peterson said universities also have to adopt a more flexible approach to education. He envisions a model of undergraduates working with a committee of faculty to choose their individual course path leading to a degree; younger students enjoying the kind of flexibility currently afforded to graduate students.
This is an interesting argument that I hadn’t met previously: Pagination is better for long digital texts because it’s easier for sustained reading. What are the implications for reading source code? Is pagination (and perhaps formatting via something like Knuth’s WEB) better than a scroll bar?
Let’s put it under the umbrella term ‘scrollable’. Scrollable content works very well for two or three screenfuls of content, because it lets you adjust, pixel by pixel or line by line, to your changing context. You can say “I want this thing on the screen, and this nearby thing on the screen at the same time”, which is often useful — particularly if the content has varied elements like buttons and links and images as well as text. That is to say, scrollable content generally works very well for web pages.
But for anything of real length, it is seriously hard work. It’s important to realise what you’re doing when you’re scrolling. You’re gazing at the line you were reading as you draw it up the screen, to near the top. When it gets to the top, you can continue reading. You do this very quickly, so it doesn’t really register as hard work. Except that it changes your behaviour — because a misfire sucks. A misfire occurs when you scroll too far too rapidly, and the line you were reading disappears off the top of the screen. In this case, you have to scroll in the other direction and try to recognise your line — but how well do you remember it? Not necessarily by sight, so immediately you have to start reading again, just to find where you were.
Beyond this, even if you have startling accuracy, still you are doing a lot of work, because your eyes must track your current line as it animates across the screen. For sustained reading, this quickly gets physically tiring.
Pagination works for long text, not because it has a real-world analogy to printed books or whatever, but because it maximises your interface: you read the entire screenful of text, then with a single command, you request an entirely new screenful of text. There’s very little wastage of attention or effort. You can safely blink as you turn.
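To make the contrast concrete for source code, here is a minimal sketch (my own, not from the quoted piece) of what a pager does: it splits a file into fixed screenfuls and replaces the whole screenful with one command, rather than animating lines past the reader. The page size and the stand-in source lines are illustrative assumptions.

```python
def paginate(lines, page_size=40):
    """Split a list of source lines into fixed-size pages (screenfuls)."""
    return [lines[i:i + page_size] for i in range(0, len(lines), page_size)]

# Stand-in for a real source file; a pager would read lines from disk.
source = [f"line {n}" for n in range(1, 101)]
pages = paginate(source, page_size=40)

print(len(pages))    # 100 lines at 40 per page -> 3 pages
print(pages[2][0])   # the last page starts at "line 81"
```

The point of the argument is in the interface, not the algorithm: with pages, "where was I?" always has the same answer (top of the screen), so there is no misfire to recover from.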
A colleague of mine sent me a link to the iConference 2011 website, suggesting that I should consider attending and submitting papers to future instantiations. It looks like an interesting conference, with lots of research in human-computer interaction and computer-supported collaborative work. There was very little about learning. There was a session on Scratch, focused on “end-user programming,” not on learning about computing.
I started to wonder: Have human-computer interaction research and computational thinking become ideological opposites? By "computational thinking" I mean "that knowledge about computing that goes beyond application use and that is useful in any discipline." Or as Jeannette Wing described it, "Computational thinking builds on the power and limits of computing processes, whether they are executed by a human or by a machine." Notice that she points out the limits. Limits suggest things that the computer can't do, and if you're going to think about them, you have to be aware of them. They must be visible to you. If Computational Thinking involves, for example, understanding the power and limits of digital representations, and how those serve as metaphors in thinking about other problems, then those representations have to be visible.
Let’s contrast that with Don Norman’s call for the Invisible Computer. Or Mark Weiser’s call for the “highest ideal is to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it.” Or any number of user-interface design books that tell us that the goal of user-centered design is for the user to focus on the task and make the computer become “invisible.”
Michael Mateas has talked about this in his discussion of a published dialog between Alan Perlis and Peter Elias. Elias claims, like Norman and Weiser, that one day “undergraduates will face the console with such a natural keyboard and such a natural language that there will be very little left, if anything, to the teaching of programming.” Michael responds, “The problem with this vision is that programming is really about describing processes, describing complex flows of cause and effect, and given that it takes work to describe processes, programming will always involve work, never achieving this frictionless ideal.”
The invisible-computer goal (that not all in HCI share, but I think it’s the predominant goal) aims to create a task-oriented interface for anything that a human will want to do with a computer. No matter what the task, the ad promises: “There’s an app for that!” Is that even possible? Can we really make invisible all the seams between tasks and digital representations of those tasks? Computational thinking is about engaging with what the computer can and cannot do, and explicitly thinking about it.
Computing education may be even more an ideological foe of this HCI design goal. Computing education is explicitly assuming that we can’t create an app for everything that we want to do, that some people (all professionals, in the extreme version that I subscribe to) need to know how to think about the computer in its own terms, in order to use it in new, innovative ways and (at least) to create those apps for others. It’s not clear who builds the apps in the invisible-computer world (because they would certainly need computing education), but whoever they are, they’re invisible, too.
I used to think that computing education was the far end of a continuum that started with HCI design. At some point, you can’t design away the computer, it has to become visible, and then you have to learn about it. After reviewing the iConference program, I suspect that HCI designers who believe in the invisible-computer have a goal for that never to happen. All possible tasks are covered by apps. Computing education should never be necessary except for an invisible few. Computational thinking is unnecessary, because we can make invisible all limitations.
Here’s a prediction: We won’t see a panel on “Computational Thinking” at CHI, CSCW, or iConference any time soon.
The fate of Scalar, which has not yet been released to the public, also remains to be seen. Mellon had backed an earlier attempt to build multimedia-authoring software, called Sophie. The first version failed, says Bob Stein, a director of the Institute for the Future of the Book, who left the Sophie project after blowing through more than $2.5-million working on it. A second version is not usable now but may end up being the “holy grail,” he says.
“The easier you try to make an authoring environment, the harder it is to build it,” says Mr. Stein. “It’s easy to build an authoring environment that requires experts to use. It’s very hard to build an authoring environment that somebody can use after reading two pages of instructions.”
Is this last thought true, that opportunities in video games are growing? Last I heard, we already have an over-supply of video game programmers. Each programmer is actually pretty productive, so a relatively small number of programmers is all that the relatively small number of major game studios really need. Is that not the case?
An increasing number of schools and teachers now recognize that games can be used to improve mathematics, physics and computer science outcomes in the classroom itself.
Moreover, awareness of opportunities in these industries and the requisite skills will add a modern and exciting flavor to the study of these subjects, normally considered dry and boring, and thus attract more students towards them. These disciplines would then be viewed as leading to creative careers rather than technical ones alone.
Thus, the report suggests: "We need to set in motion a virtuous circle where video games and visual effects help draw young people into maths, physics and computer science, and improve their learning outcomes, in turn enlarging the talent pool for these industries in the future. Schools should do more to encourage cross-curricular learning. Career guidance needs to reflect the growing employment opportunities in high-tech creative industries like video games and visual effects."
I’m working with Amy Bruckman and Klara Benda on a paper describing the results of a study that Klara did of students taking on-line CS courses. Klara points out in her review of the literature that most retention/attrition models focus on psychological factors, e.g., having appropriate background knowledge, motivation, and metacognitive skills like planning. But the factors that appear in empirical studies of students who drop out, especially in on-line classes, emphasize sociological factors, like changes in job and residence situations, changes in financial status, and family pressures. That’s certainly what Klara found in her study of on-line CS students, and those same issues are echoed in this MSU study.
Depression, a loss of financial aid, increased tuition, unexpected bad grades and roommate conflicts are among key risk factors that lead college students to drop out, according to a study led by Michigan State University researchers.
Not so influential: a death in the family, failure to get their intended major, a significant injury and addiction.
“Prior to this work, little was known about what factors in a student’s everyday life prompt them to think about withdrawing from college,” Tim Pleskac, an MSU assistant professor of psychology and the lead researcher, said in a news release this afternoon.
Change the verb "game" to "program," and "gamer" to "hacker" in the quote below, and I think that this could almost be a transcript from Margolis and Fisher's Unlocking the Clubhouse. Recall that Margolis and Fisher found that many of the factors that drove women away from CS at CMU were cultural and social, e.g., the male-dominated geek culture, and the bravado of showing off knowledge in classroom questions. Maybe it's for the same reason that so few females take game design and programming classes? Maybe it's not about the technical content, or even about games, but about game culture? As MMORPGs become increasingly dominant, the social aspect of games may become the most visible, especially to women. If that culture is not welcoming to women, that would be a disincentive to take more classes in the field.
Who I am talking to are the guys in between, and there’s a whole swath of them. They’re the guys who claim they have no problem with “girls who game” but seem to have a problem with “girl gamers.” They’re the ones who probably wouldn’t seem to have an issue with women in their everyday lives but if one shows up on the game server, all rules of normal social decorum go out the window.
I’ve said it before and I’ll say it again: stop assuming that women who game are trying to be this Girl Gamer you keep getting hung up on. There is no such thing.
First of all, when I ask guys like you what you mean by “trying to be a girl gamer,” the definitions are ambiguous and sketchy. “They talk a lot and act all cute.” “They’re too chatty, they just want attention.” “They…you know, act like girls.”
At schools that have closed down CS, journalism has been closed down too. Colorado is now talking about closing down journalism and creating a School of Information. Is that the first step towards closing down CS, too, in keeping with the trend? Isn't it ironic that CS innovations have led to the closing of Journalism, and yet that's somehow joined with taking down CS, too?
The University of Colorado should eliminate its standalone journalism degree and create both a new school of information and an institute to study the “global digital future,” according to documents released Tuesday by the Boulder campus.
CU officials announced in August that they would take unprecedented steps to possibly close down CU’s traditional School of Journalism and Mass Communication, citing budget cuts and the rapid evolution of media.
Through the program discontinuance process, a CU panel and top campus leaders have recommended shutting down the traditional school and relocating its tenured professors elsewhere on campus.
I liked this piece in the NYTimes about why online courses aren't taking off. The author's point about online courses "lacking the third dimension" (social, face-to-face interactivity) is a good one (and that's where OpenStudy comes in), but the side point he makes is more interesting to me. The media of online courses are just nowhere near what they need to be! Powerpoint slides, PDF tests, and no feedback is just abysmal, and we can do so much better!
When colleges and universities finally decide to make full use of the Internet, most professors will lose their jobs.
That includes me. I’m not worried, though, at least for the moment. Amid acute budget crises, state universities like mine can’t afford to take that very big step — adopting the technology that renders human instructors obsolete.
I strongly agree with this. Certainly, we can show learning in short-term studies. But the most important issues in education (e.g., motivation, attitudes, broadening participation, success in later academic career, success after graduation) can’t be studied in the standard three years of an NSF grant.
A group of education researchers and representatives of private philanthropies argued on Monday for more money for long-term studies of education. Such studies, they said, are often harder to find money to support but tend to be more effective than shorter-term projects at decisively answering key research and policy questions.
The researchers and philanthropists made their case at a gathering on Capitol Hill, titled “Payoffs of Long-Term Investment in Education Research,” that was organized by the American Educational Research Association, the Education Deans Alliance, and the National Academy of Education.
Below is a note that Barb just sent to MediaComp teachers. In-browser IDEs are pretty important for high schools. Some high school districts we work with have draconian IT policies, e.g., by default, ALL websites are blocked at the firewall, and only certain websites are permitted. In these schools, nothing can be installed on any computer. In our children's school district, all computers are wiped every night and reloaded from a base image. If you (as the teacher) don't have your program loaded into the image, you either reinstall every day, or you just can't use Scratch, eToys, Alice, etc. Thus, having tools available through the browser helps teachers to use software apps without dealing with IT.
Dr. Jam Jenkins recently made a prototype of JavaWIDE that includes the Media Computation libraries, and he would like some teachers to try it out and give feedback. The site is at
. For those who have never heard of JavaWIDE, it is an online IDE that supports collaboration, concurrent editing, and version control.
For more information:
Dr. Jam Jenkins
Assistant Professor of Information Technology
Georgia Gwinnett College
Readers of this blog may recall that Greg Wilson has been developing a course he calls Software Carpentry, providing the computing knowledge that computational scientists and engineers will need. He just concluded his course with a summary of seven principles of computational thinking, based on Jon Udell's seven principles of the Web. Yet another take, to contrast with the CS:Principles work.
Hello, and welcome to the final episode of Software Carpentry. We’re going to wrap up the course by looking at a few key ideas that underpin everything else we’ve done. We have left them to the end because like most big ideas, they don’t make sense until you have seen the examples that they are generalizations of.
Our seven principles are:
- It’s all just data.
- Data doesn’t mean anything on its own—it has to be interpreted.
- Programming is about creating and composing abstractions.
- Models are for computers, and views are for people.
- Paranoia makes us productive.
- Better algorithms are better than better hardware.
- The tool shapes the hand.
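The first two principles are easy to demonstrate concretely: the same bytes mean entirely different things depending on the interpretation we apply to them. A small Python illustration (my own, not from the course):

```python
import struct

raw = b"\x00\x00\x80\x3f"  # four bytes: "just data," meaningless on their own

# The same bytes, under two different interpretations:
as_int = struct.unpack("<i", raw)[0]    # as a little-endian 32-bit integer
as_float = struct.unpack("<f", raw)[0]  # as a little-endian 32-bit float

print(as_int)    # 1065353216
print(as_float)  # 1.0
```

Nothing about the bytes themselves tells you which reading is "right"; the meaning lives in the interpretation, which is exactly the point of principles one and two.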