Important article that gets at some of my concerns about using MOOCs to inform education research. The sampling bias mentioned in the article below is one of my responses to the claim that we can inform education research by analyzing the results of MOOCs. We can only learn from the data of participants. If 90% of the students go away, we can’t learn about them. Making claims about computing education based on the 10% who complete a CS MOOC (and mostly white/Asian, male, wealthy, and well-educated at that) is bad science.
Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.
Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge university, they can be “complete bollocks. Absolute nonsense.”
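The selection effect I'm worried about is easy to see in a toy simulation (purely illustrative numbers, not real MOOC data): assume enrollees vary in prior preparation, and that better-prepared students are more likely to finish. The completers then look systematically different from the population we actually want to understand.

```python
import random

random.seed(0)

# Hypothetical population: each enrollee has a "prior preparation"
# score drawn uniformly from [0, 1].
enrollees = [random.random() for _ in range(100_000)]

# Assumed completion model (made up for illustration): the chance of
# completing rises with preparation, tuned so roughly 10% finish overall.
completers = [p for p in enrollees if random.random() < 0.18 * p]

mean_all = sum(enrollees) / len(enrollees)
mean_done = sum(completers) / len(completers)

print(f"completion rate: {len(completers) / len(enrollees):.1%}")
print(f"mean preparation, all enrollees: {mean_all:.2f}")
print(f"mean preparation, completers:    {mean_done:.2f}")
```

Even in this crude sketch, the completers' average preparation is well above that of the enrollees as a whole, so any conclusion drawn only from completers overstates how prepared (or wealthy, or well-educated) the full population is.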
Last month, Steve Cooper organized a remarkable workshop at Stanford on the Future of Computing Education Research. The question was, “How do we grow computing education research in the United States?” We pretty quickly agreed that we have a labor shortage — there are too few people doing computing education research in the US. We need more. In particular, we need more CS Ed PhD students. The PhD students do the new and exciting research. They bring energy and enthusiasm into a field.
We also need these students to fit into Computing departments, where that could be Computer Science, or Informatics, or Information Systems/Technology/Just-Information Departments/Schools/Colleges. Yes, we need a presence in Education Schools at some point, to influence how we develop new teachers, but that’s not how we’ll best push the research.
How do we get there?
Roy Pea came to the event. He could only spare a few hours for us, and he only gave a brief 10 minute talk, but it was one of the highlights of the two days for me. He encouraged us to think about the Learning Sciences as a model. The Learning Sciences grew out of cognitive science and computer science. It’s a field that CS folks recognize and value. It’s not the same as Education, and that’s a positive thing for our identity. He told us that the field must grow within Computing departments because Domain Matters. The representations, the practices, the abstractions, the mental models — they all differ between domains. If we want to understand the learning of computing, we have to study it from within computing.
I asked Roy, “But how do we influence teacher education? I don’t see learning science classes in most pre-service teacher development programs.” He pointed out that I was thinking about it all wrong. (Not his words — he was more polite than that.) He described how learning sciences has influenced teacher development, integrated into it. It’s not about a separate course: “Learning science for teachers.” It’s about changing the perspective in the existing classes.
Ken Hay, a learning scientist (and long-time friend and colleague) who is at Indiana University, echoed Roy’s recommendation to draw on the learning sciences as a model. He pointed out that Language Matters. He said that when Indiana tried to hire a “CS Education Researcher,” faculty in the CS department said, “I teach CS. I’m a CS Educator. How is s/he different than me?”
We started talking about how “Computer Science Education Research” is a dead-end name for the research that we want to situate in computing departments. It’s the right name for the umbrella set of issues and challenges with growing computing education in the United States. It includes issues like teacher professional development and K-12 curricula. But that’s not what’s going to succeed in computing departments. It’s the part that looks like the learning sciences that can find a home in computing departments. Susanne Hambrusch of Purdue offered a thought experiment that brought it home for me. Imagine that there is a CS department that has CS Ed Research as a research area. They want to list it on their Research web page. Well, drop the word “Research” — this is the Research web page, so that’s a given. And drop the “CS” because this is the CS department, after all. So all you list is “Education.” That conveys a set of meanings that don’t necessarily belong in a CS department and don’t obviously connect to our research questions.
In particular, we want to separate (a) the research about how people learn and practice computing from (b) improving teaching and learning within a computing department. (a) can lead to (b), but you don’t want to demand that all of (a) inform (b). We need to make the research on learning and practice in computing a point of value for computing departments, a differentiator. “We’re not just a CS department. We embrace the human side and engage in social and learning science research.” Lots of schools offer outreach, and some are getting involved in professional development. But doing those things in a way that is informed by the learning sciences and that informs the learning sciences (e.g., work that can get published in ICER and ICLS and JLS and AERA) — that’s what we want to encourage and promote.
I was in a breakout that tried to generate names. Michael Horn of Northwestern came up with several of my favorites. Unfortunately, none of them were particularly catchy:
- Learning Sciences of Computing
- Learning Sciences for Computing
- Computational Learning and Practice (sounds too much like machine learning)
- Learning Sciences in Computing Contexts
- Learning and Practice in Computing
- Computational Learning and Literacy
We do have a name for a journal picked out that I really like: Journal of Computational Thinking and Learning.
I’d appreciate your thoughts on these. What would be a good name for the field that studies how people learn computing, how to improve that learning, how professionals practice computing (e.g., end-user programming, computational science & engineering), and how to help novices join those professional communities of practice?
I can’t remember the last time I learned so much and had my preconceived notions so challenged in just two days. I have a lot more notes on the workshop, and they may make it into some future blog posts. Kudos to Steve for organizing an excellent workshop, and my thanks to all the participants!
Yup, Herminia has the problem right — if CS MOOCs are even more white and male than our face-to-face CS classes, and if hiring starts to rely on big data from MOOCs, we become even less diverse.
But that’s just the tip of the iceberg. One of the developments that will undoubtedly cement the relationship between big data and talent processes is the rise of massive open online courses, or MOOCs. Business schools are jumping into them whole hog. Soon, your MOOC performance will be sold to online recruiters taking advantage of the kinds of information that big data allows—fine distinctions not only on content assimilation but also participation, contribution to, and status within associated online communities. But what if these new possibilities—used by recruiters and managers to efficiently and objectively get the best talent—only bake in current inequities? Or create new ones?
If states offer career and technical education in pathways (typically 3-4 courses) with a pathway completion exam, they are eligible for Perkins legislation funding to pay for staff and equipment. If AP CS is one of those courses, it’s easier to build the pathway (2-3 courses to define, rather than 3-4), and the pathway is more likely to lead to college-level CS, if a student so chooses. But as the below report mentions, many states believe that Perkins legislation prohibits AP courses from counting. It can count, and here’s the report describing how.
If you’re hearing this story in your state, be sure to send your department of education this report!
Career and Technical Education and Advanced Placement (July 2013, PDF)
Traditionally, Advanced Placement® (AP) courses and exams have not been recommended for students in Career Technical Education (CTE) programs. This paper, jointly developed and released by NASDCTEc and the College Board, aims to bust this myth by showing how AP courses and exams can be relevant to a student’s program of study across the 16 Career Clusters®.
I hadn’t heard about this theory before the below blog post — recommended reading. As usual, I appreciate Kevin’s analysis.
As parents and teachers we encourage children to pursue fields that they enjoy, that they are good at, and that can support them later in life. It may be that girls are getting the “that they are good at” message more strongly than boys are, or that enjoyment is more related to grades for girls. These habits of thought can become firmly set by the time students become men and women in college, so minor setbacks (like getting a B in an intro CS course) may have a larger effect on women than on men. I’m a little wary of putting too much faith in this theory, though, as the author exhibits some naiveté.
The story is interesting and disappointing. Why would GitHub go through all these contortions just because they had this one female engineer — and would there have been less drama and stress if there had been more than just one female engineer? The story has been updated in Sunday’s NYTimes.
The exit of engineer Julie Ann Horvath from programming network GitHub has sparked yet another conversation concerning women in technology and startups. Her claims that she faced a sexist internal culture at GitHub came as a surprise to some, given her former defense of the startup and her internal work at the company to promote women in technology.
In her initial tweets on her departure, Horvath did not provide extensive clarity on why she left the highly valued startup, or who created the conditions that led to her leaving and publicly repudiating the company.
Horvath has given TechCrunch her version of the events, a story that contains serious allegations towards GitHub, its internal policies, and its culture. The situation has greater import than a single person’s struggle: Horvath’s story is a tale of what many underrepresented groups feel and experience in the tech sector.
Hackathons seem the antithesis of what we want to promote about computer science. For one thing, they emphasize the Geek stereotype (it’s all about caffeine and who needs showers?), so they don’t help to attract the students who aren’t interested in being labeled “geeky.” For another, they run completely counter to the idea of designing and engineering software. “Sure, you can do something important by working for 36 hours straight with no sleep or design! That’s how good software ought to be written!” It’s not a good message when facing the public (thinking about the Geek image) or when facing industry and academia.
So why try to make them “female-friendly”?
OK, so there are a number of valid reasons women tend to stay away from hackathons. But what can hackathon planners do to get more women to attend their events? I found some women offering advice on this subject. Here are some suggestions for making your hackathon more female-friendly.
Amy Quispe, who works at Google and ran hackathons while a student at Carnegie Mellon University, writes that having a pre-registration period just for women makes them feel more explicitly welcome at your event. Also, shy away from announcing that it’s a competition (to reduce the intimidation factor), make sure the atmosphere is clean and not “grungy,” and make it easy for people to ask questions. “A better hackathon for women was a better hackathon for everyone,” she writes.