Posts tagged ‘MOOCs’
We’ve heard about this problem before: Online courses don’t reach the low-income students who most need them, because they don’t have access to the technology on-ramp. This was an issue in the San Jose State experiment.
That’s because the technology required for online courses isn’t always easily accessible or affordable for these students. Although the course may be cheaper than classroom-based courses, the Campaign for the Future of Higher Education argues in a report released Wednesday that low-income students might still have a harder time accessing it.
“We have to wrap our heads around the fact that we can’t make assumptions that this will be so simple because everyone will just fire up their computers and do the work,” says Lillian Taiz, a professor at California State University, Los Angeles, and president of the California Faculty Association.
Many students, Taiz says, don’t have computers at home, high-speed Internet access, smartphones, or other technologies necessary to access course content.
The US News article suggests Google Chromebooks as an answer — cheap and effective. The Indian government is trying an even cheaper tablet solution. Could you use one of these to access MOOCs?
The Indian government realized a few years ago that the technology industry had no motivation to cater to the needs of the poor. With low-cost devices, the volume of shipments would surely increase, but margins would erode to the point that it wasn’t worthwhile for the big players. So, India decided to design its own low-cost computer. In July 2010, the government unveiled the prototype of a $35 handheld touch-screen tablet and offered to buy 100,000 units from any vendor that would manufacture them at this price. It promised to have these to market within a year and then to purchase millions more for students.
I know faculty at both KSU and SPSU. My PhD student, Briana Morrison, is faculty at SPSU. No one that I spoke to had any idea this was happening. These aren’t small schools. SPSU is one of the few universities in Georgia with a publicly-funded engineering program. KSU+SPSU is considerably larger than Georgia Tech. Is this part of the consolidation of higher education foretold by the MOOCopalyptic visions?
Kennesaw State University and Southern Polytechnic State University will consolidate to form a new institution to be named Kennesaw State University. The Board of Regents of the University System of Georgia will be asked by Chancellor Hank Huckaby to approve the consolidation plan during its upcoming November meeting.
“We must continue to carefully examine our structure and programs to ensure we have the right model that best serves our students and the state,” Huckaby said. “This proposal offers us some exciting possibilities to enlarge our academic outreach through the existing talent and resources at both these institutions.”
The decision to consolidate the two institutions, whose combined enrollment this fall is 31,178 students and whose combined annual economic impact on the region is $1.15 billion …
Lisa Kaczmarczyk wrote a blog post about several of the private, for-profit groups teaching CS who visited the ACM Education Council meeting on Nov. 2. I quoted below the section where the Ed Council asked tough questions about evaluation. I wonder whether the private education efforts mean the same thing by “evaluation” as the academic and research folks do. The two camps have different goals and different value systems. Learning for all in public education is very different from a privatized MOOC where it’s perfectly okay for 1-10% to complete.
Of course there was controversy; members of the Ed Council asked all of the panelists some tough questions. One recurrent theme had to do with how they know what they are doing works. Evaluation how? What kind? What makes sense? What is practical? These are ongoing challenges in any pedagogical setting, and when you are talking about a startup (as 3 out of the 4 companies on the panel were) in the fast-paced world of high tech, it’s tricky. Some panelists addressed this question better than others. Needless to say, I spent quite a bit of time on this – it was one of the longer topics of discussion over dinner at my table.
Neil Fraser from Google’s Blockly project said some things that were unquestionably controversial. The one that really got me was when he said several times, and with follow-up detail, that one of the things they had learned was to ignore user feedback. I can’t remember his exact words after that, but the idea seemed to be that users didn’t know what was best for them. Coming on the heels of earlier comments that were less than tactful about computing degree programs and their graduates … I have to give Neil credit for having the guts to share his views.
Great blog post that really captures the most important criticism of MOOCs (thanks to Karen Head for forwarding it). We had Armando Fox of Berkeley’s “MOOC” Center visit (video of his GVU Brown Bag talk), and he said explicitly in his talk, “MOOCs are not about democratization of education. They’re really about the rich getting richer.” I blogged on these themes this month for Blogs@CACM: Results from the first-year course MOOCs: Not there yet
Worst of all, they may become a convenient excuse for giving up on the reforms needed to provide broad access to affordable higher education. The traditional kind, that is, which for all its problems still affords graduates higher chances of employment and long-term economic advantages.
Seen from this perspective, the techno-democratization of education looks like a cover story for its aristocratization. MOOCs aren’t digital keys to great classrooms’ doors. At best, they are infomercials for those classrooms. At worst, they are digital postcards from gated communities.
This is why I am a MOOC dissenter. More than a revolution, so far this movement reminds me of a different kind of disruption: colonialism.
A Collision Between Changes in Higher Education and Changes in Publishing | The Next Bison: Social Computing and Culture
Amy Bruckman has been doing a great job of finding the interesting issues in our on-line MS in CS degree program. She’s doing innovative work in making project-based learning work in MOOCs. In this blog post, she considers a problem with doing graduate classes in a MOOC setting.
How do you assign readings to a large number of people in a free online course?
I’ve been puzzling over this question this week. I voted against the creation of our online master’s in computer science, and I still have serious reservations about it–particularly about the hastiness of the development plan. But since we’re going ahead with the program, I was thinking maybe I’d offer a class. (We’re doing it–I might as well help.) Our model is that classes have a for-credit section for which students pay a low tuition, and a free not-for-credit one (MOOC). The for-credit students will have access to our library. The free students of course can’t. So this week I asked what I thought was a simple question: how do we get readings to the MOOC students?
I asked colleagues teaching online classes, administrators, and our library. No one really had an answer. One colleague suggested the students “will just have to find the reading on their own.” (That seems like a lawsuit in the making–encouraging copyright infringement.) Another said “I might not assign any reading, since the MOOC students can’t get access to it.” (Really? Does the future of higher education involve watching videos and not reading?)
Karen Head has finished her series in The Chronicle on how the freshman-composition course fared (quoted and linked below). The stats were disappointing: only 238 of the approximately 15K students who did the first homework finished the course. That’s even lower than the ~10% completion we saw in other MOOCs.
Georgia Tech also received funding from the Gates Foundation to trial a MOOC approach to a first-year college physics course. I met with Mike Schatz last Friday to talk about his course. The results were pretty similar: 20K students signed up, 3K students completed the first assignment, and only 170 finished. Mike had an advantage that Karen didn’t — there are standardized tests for measuring the physics knowledge he was teaching, and he used those tests pre- and post-course. Mike said the completers fell into three categories: those who came in with a lot of physics knowledge and ended with relatively little gain, those who came in with very little knowledge and made almost no progress, and a group of students who really did learn a lot. They don’t yet know why, or the relative percentages.
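The attrition in both courses can be laid out as a simple completion funnel. Here is a minimal sketch, using only the figures quoted in this post (the `funnel` helper and stage names are mine):

```python
# Completion funnels for the two Georgia Tech MOOCs discussed above.
# Counts come from the posts quoted here; the helper is illustrative.

def funnel(stages):
    """Print each stage with its share of the initial enrollment."""
    start = stages[0][1]
    for name, count in stages:
        print(f"{name:>20}: {count:>6} ({100 * count / start:.1f}%)")

composition = [
    ("enrolled", 21934),
    ("active", 14771),
    ("completed", 238),
]

physics = [
    ("signed up", 20000),
    ("first assignment", 3000),
    ("finished", 170),
]

funnel(composition)  # completion is roughly 1.1% of enrollment
funnel(physics)      # completion is roughly 0.9% of sign-ups
```

Either way you slice it, completion is around one percent of initial enrollment, an order of magnitude below the ~10% figure often cited for MOOCs.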
The researchers also say, perhaps unsurprisingly, that what mattered most was how hard students worked. “Measures of student effort trump all other variables tested for their relationships to student success,” they write, “including demographic descriptions of the students, course subject matter, and student use of support services.”
It’s not surprising, but it is relevant. Students need to put in effort to learn. New college students, especially first-generation college students (i.e., those whose parents never went to college), may not know how much effort is needed. Who will be most effective at communicating that message about effort and motivating that effort — a video of a professor, or an in-person professor who might even learn your name?
As Gary May, our Dean of Engineering, recently wrote in an op-ed essay published in Inside Higher Ed, “The prospect of MOOCs replacing the physical college campus for undergraduates is dubious at best. Other target audiences are likely better-suited for MOOCs.”
On the freshman-composition MOOC, Karen Head writes:
No, the course was not a success. Of course, the data are problematic: Many people have observed that MOOCs often have terrible retention rates, but is retention an accurate measure of success? We had 21,934 students enrolled, 14,771 of whom were active in the course. Our 26 lecture videos were viewed 95,631 times. Students submitted work for evaluation 2,942 times and completed 19,571 peer assessments (the means by which their writing was evaluated). However, only 238 students received a completion certificate—meaning that they completed all assignments and received satisfactory scores.
Our team is now investigating why so few students completed the course, but we have some hypotheses. For one thing, students who did not complete all three major assignments could not pass the course. Many struggled with technology, especially in the final assignment, in which they were asked to create a video presentation based on a personal philosophy or belief. Some students, for privacy and cultural reasons, chose not to complete that assignment, even when we changed the guidelines to require only an audio presentation with visual elements. There were other students who joined the course after the second week; we cautioned them that they would not be able to pass it because there was no mechanism for doing peer review after an assignment’s due date had passed.
The Washington Post series on “The Tuition is Too Damn High” has been fascinating, filled with interesting data, useful insights, and economic theory that I hadn’t encountered previously. The article linked below is about “Baumol’s cost disease,” which suggests an explanation for why wages might increase when productivity does not. It’s an explanation that some have used to explain the rise in tuition, which Post blogger Dylan Matthews rejects based on the data (in short: faculty salaries aren’t really rising — the increase in tuition is due to other factors).
But I actually had a concern about an earlier stage in his argument. It’s absolutely true that our labor-intensive methods do not lead to an increase in productivity in terms of number of students, while MOOCs and similar methods can. However, we can gain productivity in terms of quality of learning and retention. We absolutely have teaching methods, well supported by research, that lead to better learning and more retention — we can get students to complete more classes with better understanding. In the end, isn’t THAT what we should be measuring as the productivity of an educational enterprise, not “millions of customers served” (even if they don’t complete and don’t learn)?
Performing a string quartet will always require two violinists, a violist and a cellist. You can’t magically produce the same piece with just two people. Higher education, for at least the past few millennia, has seemed to fall in this category as well. “What just happened in my classroom is not very different from what happened in Plato’s academy,” quips Archibald. We’ve gotten better at auditorium-building, perhaps, but lecturers generally haven’t gotten more productive.
Carl Wieman has accepted a position at Stanford to focus on science teaching. It’s a great place for him, and I expect that we’ll hear more interesting things from him in the future. One aspect of the story that I find particularly interesting is Wieman’s dislike of MOOCs, and how that conflicts with the perspective of some of the MOOC advocates at Stanford.
Mr. Wieman left the White House last summer, after receiving a diagnosis of multiple myeloma and after spending two years searching for ways to force universities to adopt teaching methods shown through scientific analysis to be more effective than traditional approaches.
His health has improved, Mr. Wieman said in an interview last week. But rather than try again through the political process to prod universities to accept what research tells them would be better ways of teaching and retaining students in the sciences, he now hopes at Stanford to work on making those methods even better.
(Shoot — I meant to put this on “draft” and come back to it, but hit the wrong button. Sigh.)
Here’s what I thought was interesting about this piece: I agree with Fish’s depiction of the “data and experiment culture” around education, and the “ineffable culture,” too. But his alignment of MOOCs with the “data and experiment culture” seems wrong. Our data about MOOCs say that they’re not working. So belief in MOOCs is “ineffable.” It’s about having warm feelings for technology and hopes for its role in education.
About halfway through his magisterial study “Higher Education in America,” Derek Bok, twice president of Harvard, identifies what he calls the “two different cultures” of educational reform. The first “is an evidence-based approach to education … rooted in the belief that one can best advance teaching and learning by measuring student progress and testing experimental efforts to increase it.” The second “rests on a conviction that effective teaching is an art which one can improve over time through personal experience and intuition without any need for data-driven reforms imposed from above.”
Bok is obviously a member of the data and experiment culture, which makes him cautiously sympathetic to developments in online teaching, including the recent explosion of MOOCs (massive open online courses). But at the same time, he is acutely aware of the limits of what can be tested, measured and assessed, and at crucial moments in his analysis that awareness pushes him in the direction of the other, “ineffable” culture.
The NYTimes just had a nice article about the Georgia Tech online master’s degree program based in MOOCs. I’m glad that the OMS (Online MS) group is getting that kind of attention.
For my research interests, I’m more excited about the alternative to MOOCs described below. I am not well-versed in feminist perspectives, but I appreciate the values that are informing Anne Balsamo’s design and do see that this approach has a greater chance of drawing in women (based on research like Joanne Cohoon’s) than traditional MOOCs.
At participating colleges, professors will base their own courses on each weekly theme, sharing course materials and assignments but customizing them for their own students. The courses will vary, as some are undergraduate and some are graduate, and the institutions vary widely by mission and geography — including institutions in Australia, Britain, Canada and the United States. The class sizes will be between 15 and 30 students each, decidedly non-massive. “There is another pedagogical commitment here,” Balsamo said. “Who you learn with is as important as what you learn. Learning is a relationship, not just something that can be measured by outcomes or formal metrics.”
The First Annual ACM Conference on Learning at Scale will be held March 4-5, 2014 in Atlanta, GA (immediately prior to and collocated with SIGCSE-14). The Learning at Scale conference is intended to promote scientific exchange of interdisciplinary research at the intersection of the learning sciences and computer science. Inspired by the emergence of Massive Open Online Courses (MOOCs) and the accompanying huge shift in thinking about education, this conference was created by ACM as a new scholarly venue and key focal point for the review and presentation of the highest quality research on how learning and teaching can change and improve when done at scale.
“Learning at Scale” refers to new approaches for students to learn and for teachers to teach when engaging large numbers of students, either in a face-to-face setting or remotely, whether synchronous or asynchronous, with the requirement that the techniques involve large numbers of students (where “large” is preferably thousands of students, but can also apply to hundreds in in-person settings). Topics include, but are not limited to: Usability Studies, Tools for Automated Feedback and Grading, Learning Analytics, Analysis of Log Data, Studies of Application of Existing Learning Theory, Investigation of Student Behavior and Correlation with Learning Outcomes, and New Learning and Teaching Techniques at Scale.
November 8, 2013: Paper submissions due
November 8, 2013: Tutorial proposals due
December 23, 2013: Notification to authors of full papers
January 2, 2014: Works-in-progress submissions due (posters and demos)
January 14, 2014: Notification to authors of acceptance of works-in-progress
January 17, 2014: All revised and camera-ready materials due
March 4-5, 2014: Learning at Scale meeting
Additional information is available at: http://learningatscale.acm.org/
We have very few AP CS teachers in the United States — about 1 for every 12 high schools, and they’re not evenly distributed. I do get that an AP CS MOOC may make it more available to more students. Still, I’m not too excited about a MOOC to teach AP CS. AP CS is already overwhelmingly white and male. The demographic data from existing CS MOOCs is even more white and male than our face-to-face classes. I can’t see how an AP CS MOOC will improve diversity, and we have a desperate need to improve diversity.
But beyond that — Rupert Murdoch?!? Really? Why is he interested in CS education? I do note that he is starting out with a monetizing scheme. Want your questions answered? $200 per student per year. I do see how this AP CS MOOC may deal with some of the shortcomings of other MOOCs, and may even be better with diversity than existing MOOCs, because of the availability of direct support — at a price.
Now, Rupert Murdoch, the billionaire media mogul behind News Corp., wants to do something about the lack of computer science education. Murdoch’s Amplify education unit plans to launch a new advanced placement online computer science course this fall, taught by longtime high-school instructor Rebecca Dovi.
The course is described as a MOOC, short for massive open online course. It is free to high school students, though additional resources will be made available for $200 per student. It is geared toward those who want to take the computer science AP exam in 2014.
That students who had offline help did the best in this MOOC study is not surprising. Sir John Daniel reported in Mega-Universities that face-to-face tutoring was the largest line item in the Open University UK’s budget. But the fact that 90% of the students didn’t talk online (a statistic similar to what Tucker Balch found) suggests that success in MOOCs may be more about talking offline than online.
“On average, with all other predictors being equal, a student who worked offline with someone else in the class or someone who had expertise in the subject would have a predicted score almost three points higher than someone working by him or herself,” write the authors.

The correlation, described by the authors as the “strongest” in the data set, was limited to a single instance of a particular MOOC, and is not exactly damning to the format. But it nonetheless may give ammunition to critics who say human tutelage remains essential to a good education.

Other findings could also raise eyebrows. For example, the course’s discussion forum was largely the dominion of a relatively small group of engaged users; most students simply lurked. “It should be stressed that over 90 percent of the activity on the discussion forum resulted from students who simply viewed pre-existing discussion threads, without posting questions, answers, or comments,” the authors write.
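The quoted finding is a regression coefficient on a binary predictor (worked offline with someone vs. worked alone). A minimal sketch of that idea follows, using the convenient fact that ordinary least squares with a single binary predictor reduces to a difference in group means. The student scores below are invented for illustration and are not the study’s data:

```python
# Sketch of the kind of model behind the quoted finding: a binary
# "worked offline with someone" predictor of final score. With one
# binary predictor, the OLS slope equals the difference in group means.
# All scores below are made up for illustration.

def ols_binary_slope(pairs):
    """pairs: list of (worked_offline: bool, score: float) tuples."""
    with_help = [score for offline, score in pairs if offline]
    solo = [score for offline, score in pairs if not offline]
    return sum(with_help) / len(with_help) - sum(solo) / len(solo)

students = [
    (True, 80.0), (True, 82.0), (True, 81.0),
    (False, 77.0), (False, 78.0), (False, 79.0),
]

print(ols_binary_slope(students))  # → 3.0, roughly the effect the authors report
```

The real analysis controls for other predictors simultaneously (“with all other predictors being equal”), which requires a full multivariate regression rather than this two-group comparison, but the interpretation of the coefficient is the same.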
In a sense, what Chris Quintana is doing here is a connectivist MOOC, but one where the student is guided via software-realized scaffolding through a self-study on a topic of their own interest. It’s an interesting idea, to help students organize a wide variety of learning opportunities in support of inquiry learning.
We aim to support cross-context inquiry that spans formal and informal settings by developing Zydeco Sci-To-Go, a system integrating mobile devices and cloud technologies for middle school science inquiry. Zydeco enables teachers and students to create science investigations by defining goals, questions, and “labels” to annotate, organize, and reflect on multimodal data (e.g., photos, videos, audio, text) that they collect in museums, parks, home, etc. As students collect this information, it is stored in the cloud so that students and teachers can access that annotated information later and use it with Zydeco tools to develop a scientific explanation addressing the question they are investigating.
Really interesting — the data are starting to appear on what’s going on in MOOCs. I wouldn’t have predicted differences in media preferences in homework vs. exam.
In their analysis of 6.002x resource usage, Pritchard and RELATE postdocs tallied clickstream data, such as where and when users clicked on videos, discussion threads, tutorials or textbook pages when working on homework, in comparison to when they were taking the midterm or final exam.
Interestingly, the group found that in completing homework assignments, users spent more time on video lectures than on any other resource. However, during an exam, students referred most to the online textbook, which they virtually ignored when doing homework. The data, although preliminary, illustrate how students may use different online strategies to solve homework versus exam problems.
While use of the discussion forum was not required in the course, the researchers found it to be the most popular resource for students completing homework assignments. In fact, 90 percent of the clickstream activity on the forum came from users who viewed existing threads without posting comments.
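The kind of tally behind these findings can be sketched in a few lines: count clicks per resource type, split by whether the student was doing homework or taking an exam. The event records and field names below are hypothetical, not the actual 6.002x log schema:

```python
# Minimal sketch of a clickstream tally: clicks per resource type,
# split by context (homework vs. exam). The log records below are
# invented; real MOOC logs would be parsed from server event data.
from collections import Counter

def tally(events):
    """events: iterable of (context, resource) pairs."""
    counts = {"homework": Counter(), "exam": Counter()}
    for context, resource in events:
        counts[context][resource] += 1
    return counts

log = [
    ("homework", "video"), ("homework", "video"), ("homework", "forum"),
    ("exam", "textbook"), ("exam", "textbook"), ("exam", "video"),
]

counts = tally(log)
print(counts["homework"].most_common(1))  # videos dominate homework clicks
print(counts["exam"].most_common(1))      # the textbook dominates exam clicks
```

On this toy log, the homework tally is topped by videos and the exam tally by the textbook, mirroring the pattern the MIT group reports.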