Posts tagged ‘MOOCs’
I wrote my Blog@CACM post this month about the Inverse Lake Wobegon effect (see the post here), a term I coined in my new book (link to post about book). The Inverse Lake Wobegon effect occurs when we observe a biased, privileged/elite/superior sample and act as if it were an unbiased, random sample from the overall population. When we assume that undergraduates are like students in high school, we are falling prey to the Inverse Lake Wobegon effect.
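The effect is easy to demonstrate with a toy simulation. The numbers below are entirely hypothetical and only illustrate the statistical point: if enrollment selects for prior preparation, statistics computed from enrollees will overestimate the population they came from.

```python
import random

random.seed(0)

# Hypothetical population: "preparation" scores, centered near 50.
population = [random.gauss(50, 15) for _ in range(100_000)]

# Biased observation: suppose only the well-prepared tend to enroll.
enrolled = [score for score in population if score > 65]

pop_mean = sum(population) / len(population)
enrolled_mean = sum(enrolled) / len(enrolled)

print(f"population mean: {pop_mean:.1f}")    # ~50
print(f"enrolled mean:   {enrolled_mean:.1f}")  # ~73: far above the population
```

Reasoning from the enrolled sample as if it were the whole population is exactly the Inverse Lake Wobegon trap: every statistic you can compute from the people you can see is inflated by the selection that made them visible.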
Here’s an example from The Chronicle of Higher Education in the quote below. Learning analytics from MOOCs can only tell us about the success and failure of students who sign up for the MOOC. As we have already discussed in this blog (see post here), people who take MOOCs are a biased sample — well-educated and rich. We can’t use MOOCs to learn about learning for those who aren’t there.
“It takes a lot of mystery out of why students succeed and why students fail,” said Robert W. Wagner, executive vice provost and dean at Utah State, and the fan of the spider graphic. “It gives you more information, and when you can put that information into the hands of faculty who are really concerned about students and completion rates and retention, the more you’re able to create better learning and teaching environments.”
A second example: There’s a common thread of research in the SIGCSE Symposium and ITiCSE that uses survey data from the SIGCSE Members List as a source of information. SIGCSE Members are elite undergraduate computer science teachers. They are teachers who have the resources to participate in SIGCSE and the interest in doing so. I know that at my own institution, only a small percentage (<10%) of our lecturers and instructors participate in SIGCSE. I know that no one at the local community college’s CS department belongs to SIGCSE. My guess is that SIGCSE Members represent less than 30% of undergraduate computer science teachers in the United States, and a much smaller percentage of computer science teachers worldwide. I don’t know if we can assume that SIGCSE Members are necessarily more expert or higher-quality. We do know that they value being part of a professional organization for teaching, so we can assume that SIGCSE Members have an identity as a CS teacher — but that may mean that most CS teachers don’t have an identity as a CS teacher. A survey of SIGCSE Members tells us about an elite sample of undergraduate CS teachers, but not necessarily about CS teachers overall.
When I talk to people about MOOCs these days, I keep finding myself turning to two themes.
Theme #1: Our schools aren’t getting worse. The gap between the rich and the poor is growing. We have more poor kids, and they are doing worse because of everything in their lives, not just because of school.
Before we can figure out what’s happening here, let’s dispel a few myths. The income gap in academic achievement is not growing because the test scores of poor students are dropping or because our schools are in decline. In fact, average test scores on the National Assessment of Educational Progress, the so-called Nation’s Report Card, have been rising — substantially in math and very slowly in reading — since the 1970s. The average 9-year-old today has math skills equal to those her parents had at age 11, a two-year improvement in a single generation. The gains are not as large in reading and they are not as large for older students, but there is no evidence that average test scores have declined over the last three decades for any age or economic group.
The widening income disparity in academic achievement is not a result of widening racial gaps in achievement, either. The achievement gaps between blacks and whites, and Hispanic and non-Hispanic whites have been narrowing slowly over the last two decades, trends that actually keep the yawning gap between higher- and lower-income students from getting even wider. If we look at the test scores of white students only, we find the same growing gap between high- and low-income children as we see in the population as a whole.
It may seem counterintuitive, but schools don’t seem to produce much of the disparity in test scores between high- and low-income students. … It boils down to this: The academic gap is widening because rich students are increasingly entering kindergarten much better prepared to succeed in school than middle-class students. This difference in preparation persists through elementary and high school.
Theme #2: There are definitely tangible effects of MOOCs, as seen in the study linked below. They help rich white men find better jobs. They help educate the rich. They help a small percentage of the poor.
All the money being poured into developing MOOCs fuels the gap between the rich and the poor. If you want to improve education generally, nationally or worldwide, aim at the other 90%. MOOCs aren’t improving education. They enrich those who are already rich.
Using data from MOOCs offered by the University of Pennsylvania, Alcorn, Christensen and Emanuel were some of the first to suggest that MOOC learners were more likely to be employed men in developed countries who had previously earned a degree — countering the early narrative that MOOCs would democratize higher education around the world.
Commenters pointed out that I didn’t make my argument clear. I’m posting one of my comment responses here to make clearer what I was trying to say:
As Alan pointed out, the second article I cited only once says that MOOC learners are “more likely to be employed men in developed countries.” I probably should have supported that point better, since it’s key to my argument. All the evidence I know suggests that MOOC learners are typically well-educated, affluent, male, and from the developed world.
- In the original EdX MOOC, 78% of the attendees had already taken the class before. (See full report here.)
- Tucker Balch released demographics on his MOOC: 91% male, 73.3% from OECD countries, and over 50% had graduate degrees. (See post here.)
- Still the most careful analysis of MOOC demographics that I know is the 2013 Penn study (see article here) which found, “The student population tends to be young, well educated, and employed, with a majority from developed countries. There are significantly more males than females taking MOOCs, especially in developing countries.”
- As you know, Georgia Tech’s Online MS (OMS) in CS is 85% domestic (the opposite of our face-to-face MS, which actually serves more students from the developing world). (See one page report here.)
If your MOOCs have significantly different demographics, I’d be interested in hearing your statistics. However, given the preponderance of evidence, your MOOC may be an outlier if you do have more students from the developing world.
The argument I’m making in this post is that (a) to improve education, we have to provide more to the underprivileged, (b) most MOOC students are affluent, well-educated students from the developed world, and (c) the benefits of MOOCs are thus accruing mostly to people who don’t need more enrichment. Some people are benefitting from MOOCs. My point is that they are people who don’t need the benefit. MOOCs are certainly not “democratizing education” and are mostly not providing opportunities to those who don’t already have them.
I got an email from CodersTrust, asking me to help promote this idea of developing grants to help students in the developing world learn to code. But the educational materials they’re offering are the same Codecademy courses, Coursera MOOCs, and similar developed-world materials. Should they be? Should we just be sending the educational materials developed for the US and Europe to the developing world? I thought that was one of the complaints about existing MOOCs, that they’re a form of educational imperialism.
CodersTrust is the brainchild of Ferdinand Kjærulff. As a Captain of the Danish army he served as recovery officer in Iraq after the fall of Saddam. He pioneered a recovery project with the allied forces, bringing internet and e-learning to the citizens of the region in which he was stationed. The project was a massive success and inspired him to eventually create CodersTrust – supported by Danida – with a vision to democratize access to education via the internet on a global scale.
via CodersTrust | About.
ICER 2015 (see website here) is August 9-13 in Omaha, Nebraska. For Barbara Ericson, Miranda Parker, Briana Morrison, and me, the event starts on Saturday, August 8. They’re all in the Doctoral Consortium, and I’m one of the co-chairs this year. (No, I’m not a discussant for any of my students.) The DC kickoff dinner is on Saturday, and the DC is on Sunday. My thanks to my co-chair Anthony Robins and to our discussants Tiffany Barnes, Steve Cooper, Beth Simon, Ben Shapiro, and Aman Yadav. A huge thanks to the SIGCSE Board, who fund the DC each year.
We’ve got two papers in ICER this year, and I’ll preview each of them in separate blog posts. The papers are already available in the ACM digital library (see listing here), and I’ll put them on my Guzdial Papers page as soon as the Authorizer updates with them.
I’m very excited that the first CSLearning4U project paper is being presented by Barbara on Tuesday. (See our website here, the initial blog post when I announced the project here, and the announcement that the ebook is now available). Her paper, “Analysis of Interactive Features Designed to Enhance Learning in an Ebook,” presents the educational psychology principles on memory and learning that we’re building on, describes features of the ebooks that we’re building, and presents the first empirical description of how the Runestone ebooks that we’re studying (some that we built, some that others have built) are being used.
My favorite figure in the paper is this one:
This lists all the interactive practice elements of one chapter of a Runestone ebook along the horizontal axis (in the order in which they appear in the book, left-to-right), and, along the vertical axis, the number of users who used each element. The drop-off from left to right is the classic non-completion rate that we see in MOOCs and other online education. Notice the light blue bars labelled “AC-E”? That’s editing code (in executable Active Code elements). Notice all the taller bars around those light blue bars? That’s everything else. What we see here is that fewer and fewer learners edit code, while we still see learners doing other kinds of learning practice, like Parsons Problems and multiple choice problems. Variety works to keep more users engaged for longer.
A big chunk of the paper is a detailed analysis of learners using Parsons Problems. Barbara did observational studies and log file analyses to gauge how difficult the Parsons Problems were. The teachers solved them in one or two tries, but they had more programming experience. The undergraduate and high school students had more difficulty — some took over 100 tries to solve a problem. Her analysis supports her argument that we need adaptive Parsons Problems, which is a challenge that she’s planning on tackling next.
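For readers unfamiliar with the form: a Parsons Problem presents the correct lines of a program in scrambled order, and the student’s task is to rearrange them. A minimal sketch of the idea (a hypothetical grader for illustration, not the Runestone implementation):

```python
import random

# The correct program, one line per list entry.
solution = [
    "def average(numbers):",
    "    total = sum(numbers)",
    "    return total / len(numbers)",
]

def make_parsons_problem(solution, seed=42):
    """Shuffle the solution lines to present to the student."""
    scrambled = solution[:]
    random.Random(seed).shuffle(scrambled)
    return scrambled

def check_attempt(attempt, solution):
    """An attempt is correct only if every line is in its place."""
    return attempt == solution

wrong = [solution[1], solution[0], solution[2]]
print(check_attempt(wrong, solution))     # False
print(check_attempt(solution, solution))  # True
```

An adaptive version, of the kind Barbara argues for, would react to repeated failed attempts — for example, by removing distractor lines or combining lines to shrink the search space.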
We’re years into the MOOC phenomenon, and I’d hoped that we’d get past MOOC hype. But we’re not. The article below shows the same misunderstandings of learning and teaching that we heard at the start — misunderstandings that even MOOC supporters (like here and here) have stopped espousing.
The value of being in the front row of a class is that you talk with the teacher. Getting physically closer to the lecturer doesn’t improve learning. Engagement improves learning. A MOOC puts everyone at the back of the class, listening only and doing the homework.
In many ways, we have a romanticized view of college. Popular portrayals of a typical classroom show a handful of engaged students sitting attentively around a small seminar table while their Harrison Ford-like professor shares their wisdom about the world. We all know the real classroom is very different. Especially in big introductory classes — American history, U.S. government, human psychology, etc. — hundreds of disinterested, and often distracted, students cram into large impersonal lecture halls, passively taking notes, occasionally glancing up at the clock waiting for the class to end. And it’s no more engaging for the professor. Usually we can’t tell whether students are taking notes or updating their Facebook page. For me, everything past the ninth row was distance learning. A good online platform puts every student in the front row.
I don’t think that MOOCs are a good solution for required classes. I agree with the idea that MOOCs are for people who want to learn something because they’re interested in it, and that completion rates don’t matter there.
That suggests that we shouldn’t use MOOCs where (a) the students don’t know what they need to know and (b) completion rates matter.
- Thus, don’t use MOOCs for intro courses (as we learned at GT with English composition and physics) where students don’t know that they really need this knowledge to go on, and the completion rates are even worse than in other MOOCs. The combination hurts the students who want to go on to subsequent courses. Using MOOCs to provide adults with content that might be covered in an intro course isn’t the same thing. For example, an intro to programming course for adults who want to understand something about coding, but not necessarily continue in CS studies, makes sense for a MOOC. If they’re not trying to prepare for a follow-on course, then the completion rate doesn’t really matter. If the MOOC learners are adults who are foraging for certain information, then the even-lower completion rate in intro-content MOOCs makes sense. There may only be a small part of that content that someone doesn’t already know.
- Thus, don’t use MOOCs to teach high school teachers about CS, where they don’t know what CS they need to know, they’re uncertain about becoming CS teachers, and a lack of completion means that the teachers who don’t complete (90-95% of enrollees) don’t know the curriculum that they’re supposed to teach. Using MOOCs to provide existing CS teachers with new opportunities to learn is a good match for the student audience to the affordances of the medium. Trying to draw in new CS teachers (when they are so hard to recruit) via MOOCs makes little sense to me.
Setting aside my concerns about MOOCs, it’s not exactly clear what’s going on in the below article. I get that it’s not good that California had to just forgive the loan of $7M USD, and that they will likely continue to lose money. I get that the quote below says, “we got extremely little in return.” I don’t see what the return was. I don’t see how many students actually participated (e.g., we’re told that there were only 250 non-UC students, but not how many UC students participated), or whether the courses they created could continue to be used for years after, and so on. It doesn’t look good, but there’s not enough information here to know that it was bad.
“We spent a lot of money and got extremely little in return,” said Jose Wudka, a physics professor at UC-Riverside who previously chaired the Systemwide Committee on Educational Policy of the Academic Senate, which represents faculty in the UC System.
The project, which cost $7 million to set up at a time when the state was cutting higher-education funding, aspired to let students take courses across campuses.
These do sound like the kinds of things that learning scientists were saying at the start of the MOOC hype (like this post), but I’m glad that he now realizes that MOOCs have limited use and that students vary widely.
And as for MOOCs, which many still predict will displace traditional teaching, he said that they “were the answer when we weren’t sure what the question was.”
He said that their massive nature, which attracted so much attention, was ultimately a problem. “When I think about MOOCs, the advantage — the ability to prepare a course and offer it without personal interaction — is what makes them inexpensive and makes them very limited.”
Students “vary widely in terms of their skills and capability,” he said, such that massiveness is simply not an educational advantage. “For some it’s too deep and for some it is too shallow.”