Posts tagged ‘MOOCs’

What I learned from taking a MOOC: Live Object Programming in Pharo

I wrote this post a month ago, before COVID-19 changed how a great many of us teach in higher education. It feels so long ago now. I thought about writing a different post for this week, one about how I’m managing my large (260+) Senior-level User Interface Development class with projects. But I realize — I have a ton of those kinds of posts in my to-read queue now. We’re all being bombarded with advice on how to take our classes on-line. I can’t read it all. I’m sure that you can’t either.

So instead, I decided to move this post up in the queue. It’s about taking the students’ perspective. I worry about what’s going to happen to students as we all move into on-line modes. I wrote my Blog@CACM post this week about how the lowest-performing students are the ones who will be most hurt by the move to on-line — you can find that post here. This is a related story: What I learned about MOOCs by taking a MOOC.

In February, I received my certificate of success for the MOOC I took on Pharo. I have not, in general, been a big fan of MOOCs (among many other posts, here’s one I wrote in 2018 about MOOCs and ethics). This MOOC was perfect for what I needed and wanted. But I’m still not generally a MOOC fan.

I’m a long-time Smalltalk programmer and have written or edited a couple of books about Squeak. I’m building software again at the University of Michigan (see the task-specific programming environments I’ve posted about). Pharo is a terrific, modern Smalltalk that I’d like to use.

A MOOC on Pharo matched what I needed. I fit the demographics of a student who succeeds at a MOOC — I already know a lot about the material, and I’m looking for specific pieces of information. Pharo has a test-driven development model that is remarkable. You define your classes, then start writing tests, and then you execute them. You can then build your system from the Debugger! You get prompts like, “You’re referencing the instance variable window here, but it doesn’t exist. Shall I create it for you?” I’ve never programmed like that before, and it was great to learn all the support Pharo has for that style of programming.
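To give a feel for that workflow, here is a minimal sketch of what the test-first style looks like in Pharo. The class and method names are my own illustration, not taken from the MOOC: you write the test before `Counter` exists, run it, and let the debugger’s prompts create the class, the instance variable, and each missing method in place.

```smalltalk
"Define the test first. Counter and its methods need not exist yet."
TestCase subclass: #CounterTest
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'Counter-Tests'

CounterTest >> testIncrement
	| c |
	c := Counter new.	"Pharo offers to create the Counter class"
	c increment.	"...and to define #increment from the debugger"
	self assert: c count equals: 1
```

Running the test then drops you into the debugger at each undefined piece, where you define it and resume, until the test goes green.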

Yes, it was in French. They provide versions of the videos dubbed in English, and the French version can display English captions. I preferred the latter. I took French as an undergraduate, which means that I didn’t understand everything, but I understood occasional words, which was enough to synchronize between the video and the captions and figure out what was going on.

My favorite part of the MOOC was just watching the videos of Stéphane Ducasse programming. He’s a very expert Smalltalk programmer. It’s great seeing how he works and hearing him think aloud while he’s programming. But he’s very, very expert — there were things he did that I had to re-watch in slow motion to figure out, “Okay, how did he do that?”

The MOOC was better than just a set of videos. The exercises made sure I actually thought about what the videos were saying. But it’s clear that the exercises were not developed by assessment experts. There were lots of fill-in-the-blank questions like “Name the class that does X.” Who cares? I can always look that up. It’s a problem that the exercises were developed by Smalltalk experts. Some of the problems would have been simple if you knew the right tool or the right option (e.g., “Which of the below is not a message that instances of the class Y understand?”), but I often couldn’t remember or find the right tool. Tools can fall into the experts’ blind spot. Good assessments should scaffold me in figuring out the answer (e.g., worked examples or subgoal labels).

I ran into one of the problems that MOOCs suffer — they’re really expensive to make and update. The Pharo MOOC was written for Pharo 6.0. Pharo 8.0 was just released. Not all the packages in the MOOC still work in 8.0, or there are updated versions that aren’t exactly the same as in the videos. There were things in the MOOC that I couldn’t do in modern Pharo. It’s hard and costly to keep a MOOC updated over time.

My opinions about MOOCs haven’t changed. They’re a great way for experienced people to get a bit more knowledge. That’s where the Georgia Tech OMSCS works. But I still think that they are a terrible way to help people who need initial knowledge, and they don’t help to broaden participation in computing.

March 23, 2020 at 7:00 am

Using MOOCs for Computer Science Teacher Professional Development

When our ebook work was funded by IUSE, our budget was cut from what we proposed. Something had to be dropped from our plan of work. What we dropped was a comparison between ebooks and MOOCs. I had predicted that we could get better learning and higher completion rates from our ebooks than from our MOOCs. That’s the part that got dropped — we never did that comparison.

I’m glad now. It’s kind of a ridiculous comparison because it’s about the media, not particular instances. I’m absolutely positive that we could find a terrible ebook that led to much worse results than the absolutely best possible MOOC, even if my hypothesis is right about the average ebook and the average MOOC. The medium itself has strengths and weaknesses, but I don’t know how to experimentally compare two media.

I’m particularly glad since I wouldn’t want to go up against Carol Fletcher and her creative team, who are finding ways to use MOOCs successfully for CS teacher PD. You can find their recent presentation “Comparing the Efficacy of Face to Face, MOOC, and Hybrid Computer Science Teacher Professional Development” on SlideShare.

Carol sent me a copy of the paper from the 2016 “Learning with MOOCs” conference*. I’m quoting from the abstract below:

This research examines the effectiveness of three primary strategies for increasing the number of teachers who are CS certified in Texas to determine which strategies are most likely to assist non-CS teachers in becoming CS certified. The three strategies compared are face-to-face training, a MOOC, and a hybrid of both F2F and MOOC participation. From October 2015, to August of 2016, 727 in-service teachers who expressed an interest in becoming CS certified participated in one of these pathways. Researchers included variables such as educational background, teaching certifications, background in and motivation to learn computer science, and their connection to computer science through their employment or the community at large as covariates in the regression analysis. Findings indicate that the online only group was no less effective than the face-to-face only group in achieving certification success. Teachers that completed both the online and face-to-face experiences were significantly more likely to achieve certification. In addition, teachers with prior certification in mathematics, a STEM degree, or a graduate degree had greater odds of obtaining certification but prior certification in science or technology did not. Given the long-term lower costs and capacity to reach large numbers that online courses can deliver, these results indicate that investment in online teacher training directed at increasing the number of CS certified teachers may prove an effective mechanism for scaling up teacher certification in this high need area, particularly if paired with some opportunities for direct face-to-face support as well.

That they got comparable results from MOOC-based on-line and face-to-face is an achievement. It matches my expectations that a blended model with both would be more successful than just on-line.

Carol and team are offering a new on-line course for the Praxis test that several states use for CS teacher certification. You can find details about this course at https://utakeit.stemcenter.utexas.edu/foundations-cs-praxis-beta/.


* Fletcher, C., Monroe, W., Warner, J., Anthony, K. (2016, October). Comparing the Efficacy of Face-to-Face, MOOC, and Hybrid Computer Science Teacher Professional Development. Paper presented at the Learning with MOOCs Conference, Philadelphia, PA.

March 29, 2019 at 7:00 am

Disrupt This!: MOOCs and the Promises of Technology by Karen Head

Over the summer, I read the latest book from my Georgia Tech colleague, Karen Head. Karen taught a MOOC in 2013 to teach freshman composition, as part of a project funded by the Gates Foundation. They wanted to see if MOOCs could be used to meet general education requirements. Karen wrote a terrific series of articles in The Chronicle of Higher Education about the experience (you can see my blog post on her last article in the series here). Her experience is the basis for her new book Disrupt This! (link to Amazon page here). There is an interview with her at Inside Higher Education that I also recommend (see link here).

In Disrupt This!, Karen critiques the movement to “disrupt education” with a unique lens. I’m an education researcher, so I tend to argue with MOOC advocates with data (e.g., my blog post in May about how MOOCs don’t serve to decrease income inequality). Karen is an expert in rhetoric. She analyzes two of the books at the heart of the education disruption literature: Clayton Christensen and Henry Eyring’s The Innovative University: Changing the DNA of Higher Education from the Inside Out and Richard DeMillo’s Abelard to Apple: The Fate of American Colleges and Universities. She critiques these two books from the perspective of how they argue — what they say, what they don’t say, and how the choice of each of those is designed to influence the audience. For example, she considers why we like the notion of “disruption.”

Disruption appeals to the audience’s desire to be in the vanguard. It is the antidote to complacency, and no one whose career revolves around the objectives of critical thinking and originality—the pillars of scholarship—wants to be accused of that…Discussions of disruptive innovation frequently conflate “is” (or “will be”) and “ought.” In spite of these distinctions, however, writers often shift from making dire warnings to an apparently gleeful endorsement of disruption. This is not unrelated to the frequent use of millenarian or religiously toned language, which often warns against a coming apocalypse and embraces disruption as a cleansing force.

Karen is not a luddite. She volunteered to create the Composition MOOC because she wanted to understand the technology. She has high standards and is critical of the technology when it doesn’t meet those standards. She does not suffer gladly the fools who declare the technology or the disruption as “inevitable.”

The need for radical change in today’s universities—even if it is accepted that such change is desirable—does not imply that change will inevitably occur. To imply that because the church should have embraced the widespread publication of scripture, modern universities should also embrace the use of MOOCs is simply a weak analogy.

Her strongest critique focuses on who these authors are. She argues that the people who are promoting change in education should (at least) have expertise in education. Her book mostly equates expertise with experience. My colleagues and I work to teach faculty about education, to develop their expertise before they enter the classroom (as in this post). I suspect Karen would agree with me about different paths to develop expertise, but she particularly values getting to know students face-to-face. She’s angry that the authors promoting education disruption do not know students.

It is a travesty that the conversation about the reform or disruption of higher education is being driven by a small group of individuals who are buffered from exposure to a wide range of students, but who still claim to speak on their behalf and in their interests.

Disrupt This! gave me a new way to think about MOOCs and the hype around disruptive technologies in education. I often think in terms of data. Karen shows how to critique the rhetoric — the data are less important if the argument they are supporting is already broken.

October 6, 2017 at 7:00 am

IEEE Prism on the Georgia Tech Online MS in CS Program

Nice piece in IEEE Prism about Georgia Tech’s On-line (Udacity MOOC-based) MS in CS degree.  I like how they emphasized that the program really discovered an unmet demand for graduate education.

Only after students began enrolling in OMS CS did researchers discover another unprecedented element of this massive online course. As economist Joshua Goodman of Harvard University tells Prism, he and his co-investigators found “large demand among mid-career [professionals], particularly mid-career Americans . . . for high-quality continuing education.” Indeed, demand is so robust that the program appears capable of boosting the overall production of computer science degrees in this country. Whether the new credential can fortify experienced professionals against the widespread threat of replacement by younger and cheaper workers remains an open question. For the thousands who have enrolled so far, however, the answer clearly is yes.

Source: Course Correction

August 7, 2017 at 7:00 am

Finding that MANY students get lousy returns on online education, but SOME students succeed

The point made below is that online education does work for some students. Our OMS CS succeeds (see evidence here) because it serves a population that has CS background knowledge and can succeed online. Not everyone succeeds in MOOCs.  I don’t like the first sentence in this piece.  “Online education” can be effective.  The models matter.

Despite Hoxby’s troubling findings, it’s hard to say whether online education in and of itself is inherently problematic or whether certain models could be successful. Goodman’s research on a Georgia Institute of Technology online master’s in computer science program indicates that, if done right, an online degree can provide a decent education at a fraction of the cost. “That model doesn’t generalize very well to the broader set of people that are out there,” he said. That’s because the students in the Georgia Tech program have already proved themselves to be successful in higher education (the admissions standards are relatively similar to the school’s elite brick-and-mortar computer science program), which is often not the case for many of the 30-something students that are typical of online education programs.

Source: Damning study finds students get lousy returns on online education – MarketWatch

July 3, 2017 at 7:00 am

MOOCs don’t serve to decrease income inequality

At this year’s NSF Broadening Participation in Computing PI meeting, I heard a great talk by Kevin Robinson that asked the question: Do MOOCs “raise all boats” but maintain or even increase income inequality, or do they help to reduce the economic divide?  The question is not whether poor students take MOOCs.  It’s whether MOOCs help the poor more, or the rich more.

Kevin has made his slides available here. The work he described is presented in this article from Science.  I want to share the one slide that really blew me away.

The gray line is the average income for US citizens at various ages.  As you would expect, that number generally increases up until retirement.  The black line is the average income for participants in Harvard and MIT’s MOOCs.  The MOOC participants are not only richer, but as they get older, they diverge more from the average.  These are highly privileged people, the kind with many advantages.  MOOCs are mostly helping the rich.

May 1, 2017 at 7:00 am

Passing of William G. Bowen: Walk Deliberately, Don’t Run, Toward Online Education

William G. Bowen of Princeton and of the Mellon Foundation recently died at the age of 83. His article about MOOCs in 2013 is still relevant today.

Particularly relevant is his note that “few of those studies are relevant to the teaching of undergraduates.”  As I look at the OMS CS results and the empirical evidence about MOOC completers (which matches results of other MOOC experiments of which I’m aware at Georgia Tech), I see that MOOCs are leading to learning and serving a population, but it tends to be the most privileged population.  Higher education is critiqued for furthering inequity and not doing enough to serve underprivileged students.  MOOCs don’t help with that.  It reminds me of Annie Murphy Paul’s article on lecture — they best serve the privileged students that campuses already serve well.  That’s a subtle distinction: MOOCs help, but not the students who most need help.

What needs to be done in order to translate could into will? The principal barriers are the lack of hard evidence about both learning outcomes and potential cost savings; the lack of shared but customizable teaching and learning platforms (or tool kits); and the need for both new mind-sets and fresh thinking about models of decision making.

How effective has online learning been in improving (or at least maintaining) learning outcomes achieved by various populations of students in various settings? Unfortunately, no one really knows the answer to either that question or the important follow-up query about cost savings. Thousands of studies of online learning have been conducted, and my colleague Kelly Lack has continued to catalog them and summarize their findings.

It has proved to be a daunting task—and a discouraging one. Few of those studies are relevant to the teaching of undergraduates, and the few that are relevant almost always suffer from serious methodological deficiencies. The most common problems are small sample size; inability to control for ubiquitous selection effects; and, on the cost side, the lack of good estimates of likely cost savings.

Source: Walk Deliberately, Don’t Run, Toward Online Education – The Chronicle of Higher Education

March 17, 2017 at 7:00 am

How the Pioneers of the MOOC Got It Wrong (from IEEE), As Predicted

There is a sense of vindication that the predictions that many of us made about MOOCs have been proven right, e.g., see this blog post where I explicitly argue (as the article below states) that MOOCs misunderstand the importance of active learning. It’s disappointing that so much effort was wasted.  MOOCs do have value, but it’s much more modest than the sales pitch.

What accounts for MOOCs’ modest performance? While the technological solution they devised was novel, most MOOC innovators were unfamiliar with key trends in education. That is, they knew a lot about computers and networks, but they hadn’t really thought through how people learn.

It’s unsurprising then that the first MOOCs merely replicated the standard lecture, an uninspiring teaching style but one with which the computer scientists were most familiar. As the education technology consultant Phil Hill recently observed in the Chronicle of Higher Education, “The big MOOCs mostly employed smooth-functioning but basic video recording of lectures, multiple-choice quizzes, and unruly discussion forums. They were big, but they did not break new ground in pedagogy.”

Indeed, most MOOC founders were unaware that a pedagogical revolution was already under way at the nation’s universities: The traditional lecture was being rejected by many scholars, practitioners, and, most tellingly, tech-savvy students. MOOC advocates also failed to appreciate the existing body of knowledge about learning online, built over the last couple of decades by adventurous faculty who were attracted to online teaching for its innovative potential, such as peer-to-peer learning, virtual teamwork, and interactive exercises. These modes of instruction, known collectively as “active” learning, encourage student engagement, in stark contrast to passive listening in lectures. Indeed, even as the first MOOCs were being unveiled, traditional lectures were on their way out.

Source: How the Pioneers of the MOOC Got It Wrong – IEEE Spectrum

February 17, 2017 at 7:17 am

OMS CS graduates a different kind of student: Work from Harvard Graduate School of Education

This is the work that most impresses me about OMSCS — that it attracts a different group of students than might get a face-to-face MS in CS. I’m not sure that I buy “equivalent in all ways to an in-person degree,” but I do see that it’s hard to measure, and the paper makes a good effort at it.

Previous research has shown that most users of online education look fairly similar to the average college graduate — suggesting that digital learning isn’t yet the great educational equalizer it has the potential to be. But in a study of Georgia Tech’s hugely successful online master of science in computer science (OMSCS) program, educational economists Joshua Goodman and Amanda Pallais and public policy expert Julia Melkers found that digital learning can tap into a new market of students by offering an online degree that is equivalent in all ways to an in-person degree, at a fraction of the cost.

Source: The Digital Bridge | Harvard Graduate School of Education

November 16, 2016 at 7:14 am

Helping Adults to Learn while Saving Face: Ukulele and MOOCs at Dagstuhl

I played ukulele every night while at the Dagstuhl seminar on CS learning assessment. Most nights, there was a group of us — some on guitars from the music room, one on piano, and several singers. It was wonderful fun! I don’t often get a chance to play in a group of other instruments and other singers, and I learned a lot about styles of play and synchronizing. The guitar players were all much more experienced, but we were all playing and singing music seen for the first time. We weren’t performance-quality — there were lots of off-key notes, missed entrances/exits. We were a bunch of amateurs having fun. (Thanks to Ben Shapiro, Jan Erik Moström, Lisa Kaczmarczyk, and Shriram Krishnamurthi for sharing these photos.)

[Photo collage: playing music and singing at Dagstuhl]

We were not always a popular group. Some participants groaned when the guitars and ukulele came into the room. One commenter asked if the singing was meant to drown out the playing.  Another complained that our choice of songs was “wrong” for the instruments and voices. Clearly, some of the complaints were for humorous effect, and some were pretty funny.

Here’s the thought experiment: Imagine these were kids playing music and singing. I predict the result would be different. I doubt the listeners would criticize the players and singers in the same way, not even for humorous effect. While adults certainly criticize children when in a teacher-student or mentoring relationship, casual criticism of a child playing or practicing by adults passing by is unusual.

Why is it different for adults?

I’ve talked before about the challenges of adult learning. We expect adults to have expertise. We expect quality. It’s hard for adults to be novices. It’s hard for adults to learn and to save face.  My colleague Betsy DiSalvo points out that we typically critique people at a near-peer level of power — we don’t casually critique those with much less power than us (children) because that’s mean, and we don’t casually critique our bosses and managers (to their faces) because that’s foolish.  Getting critiqued is a sign that you’re recognized as a peer.

After her work at Xerox PARC, Adele Goldberg helped develop learning systems, including systems for the Open University in the UK. She once told me that online systems were particularly important for adult learners. She said, “It’s hard for people with 20 years of expertise in a field to raise their hands and say that they don’t know something.”

Amy Ko framed MOOCs for me in a new way at the Dagstuhl Seminar on Assessment in CS. In the discussion of social and professional practice (see previous blog post), I told her about some ideas I had about helping people to retrain for the second half of life. We live much longer than people did 30-50 years ago. Many college-educated workers can expect a work life into their 70s. I’ve been wondering what it might be like to support adult students who might retrain in their 40s or 50s for another 20-year career later. Amy pointed out that MOOCs are perfect for this.

College-educated professionals currently in their careers already have prior education, which is the population with which MOOCs are most successful. MOOCs can allow well-educated students to retrain themselves as time permits and without loss of face. A recent Harvard study shows that students who participate in Georgia Tech’s MOOC-based OMS CS program are in a demographic unlikely to have participated in a face-to-face MS in CS program (see page here). The MOOCs are serving an untapped need — it’s not necessarily reaching those who wouldn’t have access to education otherwise, but it can be a significant help to people who want to re-train themselves.

There are lots of uses of MOOCs that still don’t make sense to me.  Based on the empirical evidence of MOOCs today (in their current forms), I argue that:

  • MOOCs are not going to democratize education.  They have not been effective at motivating novices to learn required content, as opposed to elective or chosen content.
  • MOOCs are unlikely to broaden participation in computing.  Betsy DiSalvo and I ran a study about why women aren’t participating in OMS CS.  Those reasons are unlikely to change soon.
  • MOOCs may not work for adults who are required or asked to retrain, as opposed to those who choose to retrain.  Motivation matters.
  • I have not yet seen convincing evidence that MOOCs can play a significant role in developing new CS teachers.  It’s hard to convince teachers to learn to be CS teachers — they’re not necessarily motivated to do so. Without the intrinsic motivation of choosing to be there, they may not complete.  A teacher who doesn’t complete doesn’t know the whole curriculum.

Adults will still have to have tough skins when practicing their new skills. We expect a lot of expertise out of the starting gate for adults in our society, even when retraining for a second career. MOOCs might be excellent preparation for adults in their second acts.

March 11, 2016 at 8:02 am

The Inverse Lake Wobegon Effect in Learning Analytics and SIGCSE Polls

I wrote my Blog@CACM post this month about the Inverse Lake Wobegon effect (see the post here), a term that I coin in my new book (link to post about book).  The Inverse Lake Wobegon effect is where we observe a biased, privileged/elite/superior sample and act as if it is an unbiased, random sample from the overall population.  When we assume that undergraduates are like students in high school, we are falling prey to the Inverse Lake Wobegon effect.

Here’s an example from The Chronicle of Higher Education in the quote below. Looking at learning analytics from MOOCs can only tell us about student success and failure of those who sign up for the MOOC.  As we have already discussed in this blog (see post here), people who take MOOCs are a biased sample — well-educated and rich.  We can’t use MOOCs to learn about learning for those who aren’t there.

“It takes a lot of mystery out of why students succeed and why students fail,” said Robert W. Wagner, executive vice provost and dean at Utah State, and the fan of the spider graphic. “It gives you more information, and when you can put that information into the hands of faculty who are really concerned about students and completion rates and retention, the more you’re able to create better learning and teaching environments.”

Source: This Chart Shows the Promise and Limits of ‘Learning Analytics’ – The Chronicle of Higher Education

A second example: There’s a common thread of research in SIGCSE Symposium and ITICSE that uses survey data from the SIGCSE Members List as a source of information.  SIGCSE Members are elite undergraduate computer science teachers.  They are teachers who have the resources to participate in SIGCSE and the interest in doing so.  I know that at my own institution, only a small percentage (<10%) of our lecturers and instructors participate in SIGCSE.  I know that no one at the local community college’s CS department belongs to SIGCSE.  My guess is that SIGCSE Members represent less than 30% of undergraduate computer science teachers in the United States, and a much smaller percentage of computer science teachers worldwide. I don’t know if we can assume that SIGCSE Members are necessarily more expert or higher-quality.  We do know that they value being part of a professional organization for teaching, so we can assume that SIGCSE Members have an identity as a CS teacher — but that may mean that most CS teachers don’t have an identity as a CS teacher. A survey of SIGCSE Members tells us about an elite sample of undergraduate CS teachers, but not necessarily about CS teachers overall.

January 18, 2016 at 8:03 am

No Rich Child Left Behind, and Enriching the Rich: Why MOOCs are not improving education

When I talk to people about MOOCs these days, I keep finding myself turning to two themes.

Theme #1. Our schools aren’t getting worse.  The gap between the rich and the poor is growing.  We have more poor kids, and they are doing worse because of everything, not just because of school.

Before we can figure out what’s happening here, let’s dispel a few myths. The income gap in academic achievement is not growing because the test scores of poor students are dropping or because our schools are in decline. In fact, average test scores on the National Assessment of Educational Progress, the so-called Nation’s Report Card, have been rising — substantially in math and very slowly in reading — since the 1970s. The average 9-year-old today has math skills equal to those her parents had at age 11, a two-year improvement in a single generation. The gains are not as large in reading and they are not as large for older students, but there is no evidence that average test scores have declined over the last three decades for any age or economic group.

The widening income disparity in academic achievement is not a result of widening racial gaps in achievement, either. The achievement gaps between blacks and whites, and Hispanic and non-Hispanic whites have been narrowing slowly over the last two decades, trends that actually keep the yawning gap between higher- and lower-income students from getting even wider. If we look at the test scores of white students only, we find the same growing gap between high- and low-income children as we see in the population as a whole.

It may seem counterintuitive, but schools don’t seem to produce much of the disparity in test scores between high- and low-income students. … It boils down to this: The academic gap is widening because rich students are increasingly entering kindergarten much better prepared to succeed in school than middle-class students. This difference in preparation persists through elementary and high school.

Source: No Rich Child Left Behind – The New York Times

Theme #2. There are definitely tangible effects of MOOCs, as seen in the study linked below. They help rich white men find better jobs.  They help educate the rich.  They help a small percentage of the poor.

All the money being poured into developing MOOCs fuels the gap between the rich and the poor.  If you want to improve education generally, nationally or worldwide, aim at the other 90%.  MOOCs aren’t improving education. They enrich those who are already rich.

Using data from MOOCs offered by the University of Pennsylvania, Alcorn, Christensen and Emanuel were some of the first to suggest that MOOC learners were more likely to be employed men in developed countries who had previously earned a degree — countering the early narrative that MOOCs would democratize higher education around the world.

Source: Study finds tangible benefits for learners from Coursera’s massive open online courses | InsideHigherEd

Addendum:

Commenters pointed out that I didn’t make my argument clear.  I’m posting one of my comment responses here to make clearer what I was trying to say:

As Alan pointed out, the second article I cited only once says that MOOC learners are "more likely to be employed men in developed countries." I probably should have supported that point better, since it's key to my argument. All the evidence I know suggests that MOOC learners are typically well-educated, affluent, male, and from the developed world.

  • In the original EdX MOOC, 78% of the attendees had already taken the class before. (See full report here.)
  • Tucker Balch released demographics on his MOOC: 91% male, 73.3% from OECD countries, and over 50% had graduate degrees. (See post here.)
  • Still the most careful analysis of MOOC demographics that I know is the 2013 Penn study (see article here) which found, “The student population tends to be young, well educated, and employed, with a majority from developed countries. There are significantly more males than females taking MOOCs, especially in developing countries.”
  • As you know, Georgia Tech’s Online MS (OMS) in CS is 85% domestic (the opposite of our face-to-face MS, which actually serves more students from the developing world). (See one page report here.)

If your MOOCs have significantly different demographics, I’d be interested in hearing your statistics. However, given the preponderance of evidence, your MOOC may be an outlier if you do have more students from the developing world.

The argument I'm making in this post is that (a) to improve education, we have to provide more to the underprivileged, (b) most MOOC students are affluent, well-educated students from the developed world, and (c) the benefits of MOOCs are thus accruing mostly to people who don't need more enrichment. Some people are benefitting from MOOCs. My point is that they are people who don't need the benefit. MOOCs are certainly not "democratizing education" and are mostly not providing opportunities to those who wouldn't have them otherwise.

November 25, 2015 at 8:38 am 23 comments

Providing computing education to the developing world: How do we avoid educational imperialism?

I got an email from CodersTrust, asking me to help promote the idea of developing grants to help students in the developing world learn to code. But the educational materials they're offering are the same CodeAcademy lessons, Coursera MOOCs, and similar developed-world materials. Should they be? Should we just be sending educational materials developed for the US and Europe to the developing world? I thought that was one of the complaints about existing MOOCs — that they're a form of educational imperialism.

CodersTrust is the brainchild of Ferdinand Kjærulff. As a Captain of the Danish army he served as recovery officer in Iraq after the fall of Saddam. He pioneered a recovery project with the allied forces, bringing internet and e-learning to the citizens of the region in which he was stationed. The project was a massive success and inspired him to eventually create CodersTrust – supported by Danida – with a vision to democratize access to education via the internet on a global scale.

via CodersTrust | About.

September 7, 2015 at 7:50 am 2 comments

ICER 2015 Preview: First CSLearning4U Ebook Paper

ICER 2015 (see website here) is August 9-13 in Omaha, Nebraska. The event starts for me and Barbara Ericson, Miranda Parker, and Briana Morrison on Saturday August 8. They’re all in the Doctoral Consortium, and I’m one of the co-chairs this year. (No, I’m not a discussant for any of my students.) The DC kickoff dinner is on Saturday, and the DC is on Sunday. My thanks to my co-chair Anthony Robins and to our discussants Tiffany Barnes, Steve Cooper, Beth Simon, Ben Shapiro, and Aman Yadav. A huge thanks to the SIGCSE Board who fund the DC each year.

We’ve got two papers in ICER this year, and I’ll preview each of them in separate blog posts. The papers are already available in the ACM digital library (see listing here), and I’ll put them on my Guzdial Papers page as soon as the Authorizer updates with them.

I’m very excited that the first CSLearning4U project paper is being presented by Barbara on Tuesday. (See our website here, the initial blog post when I announced the project here, and the announcement that the ebook is now available). Her paper, “Analysis of Interactive Features Designed to Enhance Learning in an Ebook,” presents the educational psychology principles on memory and learning that we’re building on, describes features of the ebooks that we’re building, and presents the first empirical description of how the Runestone ebooks that we’re studying (some that we built, some that others have built) are being used.

My favorite figure in the paper is this one:

[Figure: Number of users of each interactive practice element in one chapter of a Runestone ebook, from the ICER 2015 paper]

This lists all the interactive practice elements of one chapter of a Runestone ebook along the horizontal axis (in the order in which they appear in the book left-to-right), and the number of users who used that element vertically. The drop-off from left-to-right is the classic non-completion rate that we see in MOOCs and other online education. Notice the light blue bars labelled “AC-E”? That’s editing code (in executable Active Code elements). Notice all the taller bars around those light blue bars? That’s everything else. What we see here is that fewer and fewer learners edit code, while we still see learners doing other kinds of learning practice, like Parsons Problems and multiple choice problems. Variety works to keep more users engaged for longer.

A big chunk of the paper is a detailed analysis of learners using Parsons Problems. Barbara did observational studies and log file analyses to gauge how difficult the Parsons Problems were. The teachers solved them in one or two tries, but they had more programming experience. The undergraduate and high school students had more difficulty — some took over 100 tries to solve a problem. Her analysis supports her argument that we need adaptive Parsons Problems, a challenge that she's planning to tackle next.

August 5, 2015 at 7:35 am 3 comments

Moving Beyond MOOCS: Could we move to understanding learning and teaching?

We're years into the MOOC phenomenon, and I'd hoped that by now we'd be past the MOOC hype. But we're not. The article below shows the same misunderstandings of learning and teaching that we heard at the start — misunderstandings that even MOOC supporters (like here and here) have stopped espousing.

The value of being in the front row of a class is that you talk with the teacher. Getting physically closer to the lecturer doesn't improve learning; engagement improves learning. A MOOC puts everyone at the back of the class, only listening and doing the homework.

In many ways, we have a romanticized view of college. Popular portrayals of a typical classroom show a handful of engaged students sitting attentively around a small seminar table while their Harrison Ford-like professor shares their wisdom about the world. We all know the real classroom is very different. Especially in big introductory classes — American history, U.S. government, human psychology, etc. — hundreds of disinterested, and often distracted, students cram into large impersonal lecture halls, passively taking notes, occasionally glancing up at the clock waiting for the class to end. And it’s no more engaging for the professor. Usually we can’t tell whether students are taking notes or updating their Facebook page. For me, everything past the ninth row was distance learning. A good online platform puts every student in the front row.

via Moving Beyond MOOCS | Steven M. Gillon.

June 5, 2015 at 7:14 am 9 comments
