Posts tagged ‘failure rates’

A biased attempt at measuring failure rates in introductory programming

Do students fail intro CS at higher rates than in comparable classes (e.g., intro Physics, Calculus, or History)?  We’ve been trying to answer that question for years.  I studied it here at Georgia Tech (see my Media Computation retrospective paper at last year’s ICER).  Jens Bennedsen and Michael Caspersen answered it with a big international survey (see paper here).  They recognized the limitations of their study — it surveyed the SIGCSE members’ list and similar email lists (i.e., teachers biased toward being informed about the latest in computing education), and they got few responses.

This last year’s ITiCSE best paper awardee tried to measure failure rates again (see link below), by studying published accounts of pass rates.  While they got a larger sample size this way, it’s even more limited than the Bennedsen and Caspersen study:

  1. Nobody publishes a paper saying, “Hey, we’ve had lousy retention rates for 10 years running!”  Analyzing publications means that you’re biasing your sample toward teachers and researchers who are trying to improve those retention rates, and they’re probably publishing positive results.  You’re not really getting the large numbers of classes whose results aren’t published and whose teachers aren’t on the SIGCSE members list.
  2. I recognized many of the papers in the meta-analysis.  I was a co-author on several of them.  The same class retention data appeared in several of those papers.  There was no funny business going on: we reported retention data from our baseline classes, then tried a variety of interventions, e.g., with Media Computation and with Robotics, so the same baseline appears in both papers.  The authors say that they made sure they didn’t double count any classes that appeared in two papers, but I can’t see how they could possibly tell (see the sketch after this list for how an undetected duplicate could skew the pooled numbers).
  3. Finally, the authors do not explicitly cite the papers used in their meta-analysis.  Instead, the citations are included on a separate page (see here).  SIGCSE shouldn’t publish papers that do this.  Meta-analyses should be given enough pages to list all their sources, or they shouldn’t be published.  First, including the citations on a separate page makes it much harder to check the work, to see what data got used in the analysis.  Second, the paper is drawing on work that won’t appear in any reverse citation indices or in the authors’ H-index calculations.  I know some of the authors of those cited papers who are up for promotion or tenure decisions this coming year.  Those authors are having impact through this secondary publication, but they are receiving no credit for it.
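
As a toy illustration of the double-counting concern, here is a minimal sketch with made-up numbers (not data from any of the papers involved, and not necessarily the paper's actual aggregation method): a single baseline class reported in two publications shifts a naive pooled pass rate if the overlap goes undetected.

```python
# Hypothetical illustration (made-up numbers): how one baseline class
# reported in two separate papers can shift a naive pooled pass rate.

def pooled_pass_rate(classes):
    """Pooled pass rate: total passes divided by total enrollment."""
    total_passed = sum(passed for passed, _ in classes)
    total_enrolled = sum(enrolled for _, enrolled in classes)
    return total_passed / total_enrolled

# (passed, enrolled) for three distinct course offerings.
distinct_classes = [(60, 100), (45, 90), (80, 100)]

# The same baseline class appears in two papers, so a literature-based
# meta-analysis that cannot detect the overlap counts it twice.
with_duplicate = distinct_classes + [(60, 100)]

print(f"Deduplicated:   {pooled_pass_rate(distinct_classes):.1%}")  # ~63.8%
print(f"With duplicate: {pooled_pass_rate(with_duplicate):.1%}")    # ~62.8%
```

The shift here is small, but without the source citations in the paper itself there is no way for a reader to check how often such duplication occurred or in which direction it pushed the estimate.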

This paper is exploring an important question, and does make a contribution.  But it’s a much more limited study than what has come before.

Whilst working on an upcoming meta-analysis that synthesized fifty years of research on predictors of programming performance, we made an interesting discovery. Despite several studies citing a motivation for research as the high failure rates of introductory programming courses, to date, the majority of available evidence on this phenomenon is at best anecdotal in nature, and only a single study by Bennedsen and Caspersen has attempted to determine a worldwide pass rate of introductory programming courses. In this paper, we answer the call for further substantial evidence on the CS1 failure rate phenomenon, by performing a systematic review of introductory programming literature, and a statistical analysis on pass rate data extracted from relevant articles. Pass rates describing the outcomes of 161 CS1 courses that ran in 15 different countries, across 51 institutions were extracted and analysed. An almost identical mean worldwide pass rate of 67.7% was found. Moderator analysis revealed significant, but perhaps not substantial differences in pass rates based upon: grade level, country, and class size. However, pass rates were found not to have significantly differed over time, or based upon the programming language taught in the course. This paper serves as a motivation for researchers of introductory programming education, and provides much needed quantitative evidence on the potential difficulties and failure rates of this course.

via Failure rates in introductory programming revisited.

September 30, 2014 at 8:41 am Leave a comment

Teachers cheating on student tests

The Atlanta Journal-Constitution did a big analysis (à la Freakonomics) last year of possible cheating on the statewide high-stakes testing program.  This year, the state department of education is finding evidence that the cheating is widespread.

When the stakes get high enough (e.g., teacher merit pay linked to student performance on those tests), the incentive to cheat becomes enormous.  I wonder what role education research can play in this.  Can we create more opportunities to learn, more teaching methods, more options to improve learning (and bring up scores) of low-performing students?  How do we release the pressure on these teachers so that cheating doesn’t look like the only way out?

One in five Georgia public schools faces accusations of tampering with student answers on last spring’s state standardized tests, officials said Wednesday, throwing the state’s main academic measure into turmoil. The Atlanta district is home to 58 of the 191 schools statewide that are likely to undergo investigations into potential cheating. Another 178 schools will probably see new test security mandates, such as stepped-up monitoring during testing. The findings singled out 69 percent of Atlanta elementary and middle schools — far more than any other district — as needing formal probes into possible tampering.

via Suspicious test scores widespread in state  | ajc.com.

February 11, 2010 at 10:55 am 5 comments

New report on on-line learning from US Dept of Ed

A new report from the US Department of Education is touting the effectiveness of on-line courses as compared to face-to-face classes.  Note that there’s a significant flaw in the meta-analysis, one that is acknowledged in the Dept of Ed report (page xvii of the Executive Summary) but not mentioned in the “Inside Higher Ed” article: the meta-analysis did not consider failure/retention rates, because too few of the studies controlled for them.  Another meta-analysis, which appeared in the “Review of Educational Research” a couple of years ago, found that on-line courses have double the failure rates of face-to-face classes.  If you flunk out twice as many students, then yes, you raise the average performance of those who remain, because the students who are left are the ones who scored higher.  Face-to-face classes have the advantage of providing regular, constant pressure to stay engaged, to keep showing up.
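
A toy calculation (hypothetical numbers, not drawn from either report) shows how higher attrition by itself inflates the average score of the students who remain:

```python
# Hypothetical numbers illustrating survivorship bias in course averages:
# two sections start with the same 100 students and the same scores; the
# section that loses more of its weaker students reports a higher mean
# among completers, even though nobody learned more.

# Scores of 100 enrolled students (made-up): 40 weaker, 60 stronger.
scores = [55] * 40 + [85] * 60

def mean(xs):
    return sum(xs) / len(xs)

# Face-to-face section: 4 of the weaker students drop or fail.
f2f_completers = scores[4:]       # removes 4 of the 55-point students

# On-line section: double the attrition, 8 of the weaker students gone.
online_completers = scores[8:]    # removes 8 of the 55-point students

print(f"Everyone enrolled:       {mean(scores):.1f}")            # 73.0
print(f"Face-to-face completers: {mean(f2f_completers):.1f}")    # 73.8
print(f"On-line completers:      {mean(online_completers):.1f}") # 74.6
```

The completers’ average rises as more students are lost, which is why comparisons that ignore failure/retention rates can flatter the format with the higher attrition.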

The grand challenge of on-line learning is how to motivate students to complete the course without raising costs (e.g., through the teacher spending more time on-line, or through producing higher-quality materials).

August 11, 2009 at 10:36 am 2 comments

