NYTimes takes on Cognitive Tutors: What can we really prove with studies?

October 11, 2011 at 9:45 am

I looked up the Department of Education’s report on Carnegie Learning, and the NYTimes article was actually kinder than it might have been. Three of the four papers that were reviewed showed that the cognitive tutor had a negative impact on the outcome measure! Their standards for which studies counted and which didn’t struck me as a little odd. Consider this comment from the report:

Plano, G. S., Ramey, M., & Achilles, C. M. (2005). Implications for student learning using a technology-based algebra program in a ninth-grade algebra course. Unpublished manuscript. The study is ineligible for review because it does not use a sample aligned with the protocol—the sample is not within the specified age or grade range.

What does that mean? The ninth graders being studied might not have actually been in ninth grade? Or somebody should have checked?

I wonder whether either the “hype” of the salespeople or the whole “What Works Clearinghouse” makes any sense at all. I raised this question in my piece for Greg’s Making Software. We have studies where Media Computation has worked well in terms of improving student retention. So? That shows that it can work. That is no guarantee that it will work. The WWC says, “Let’s use randomized trials, of both students and teachers.” Even with randomized trials, I’d make the same claim. Even the greatest teacher can be stymied by a class of poor and starving students, and even the greatest textbook can be completely ineffective with an unprepared teacher and unmotivated students. Is it possible to prove that any intervention will always work?
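
To make that last point concrete, here is a minimal simulation sketch (every number is invented for illustration, not drawn from any study): give an intervention a genuinely positive average effect, let classroom-to-classroom factors vary around it, and a large share of individual classrooms still come out worse off. A randomized trial estimates the average; it cannot promise that the intervention will work in any particular classroom.

```python
# Minimal sketch: a positive *average* effect does not mean the
# intervention "works" everywhere. All numbers are invented.
import random

random.seed(1)

N_CLASSROOMS = 1000
INTERVENTION_EFFECT = 0.2   # hypothetical average benefit, in SD units
CLASSROOM_SD = 0.5          # hypothetical spread from teacher/student context

outcomes = [INTERVENTION_EFFECT + random.gauss(0, CLASSROOM_SD)
            for _ in range(N_CLASSROOMS)]

mean_effect = sum(outcomes) / N_CLASSROOMS
share_worse = sum(o < 0 for o in outcomes) / N_CLASSROOMS

print(f"average effect across classrooms: {mean_effect:+.2f} SD")
print(f"classrooms where it still 'failed': {share_worse:.0%}")
```

With these made-up numbers the average effect comes out around +0.2 SD, yet roughly a third of the simulated classrooms still end up below zero.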

The federal review of Carnegie Learning’s flagship software, Cognitive Tutor, said the program had “no discernible effects” on the standardized test scores of high school students. A separate 2009 federal look at 10 major software products for teaching algebra as well as elementary and middle school math and reading found that nine of them, including Cognitive Tutor, “did not have statistically significant effects on test scores.”

via A Classroom Software Boom, but Mixed Results Despite the Hype – NYTimes.com.


5 Comments

  • 1. Alan Kay  |  October 11, 2011 at 10:01 am

    Let’s suppose that everything was done well. (This is a reach, because an astounding number of “comparative studies” have enormous flaws, including ignoring the simple fact that multi-year trials are needed for most new things, to help normalize the learning curves of teachers, systems, etc.)

    I have been asserting that the first goals for cognitive tutors are to be “better than no teacher and better than a bad teacher”. These studies seem to suggest that the tutor in question is “as good as a normal teacher” (which for math could cover the above two categories, or it could mean something better).

    Another very useful study would ask: “Is a computer tutor more effective for most students than a well-written textbook?” One could guess about this, but studies are supposed to be better than guessing.

    And we also have here an intermediate stage of tutoring. The Carnegie Learning tutors are not flexible enough to be a stand-in for human teachers. (We can readily imagine future versions that could very well be.)

    So even a great study is getting results from a whole system, not just from the software it purports to be testing.

    Cheers,

    Alan

  • 2. John Pane  |  October 11, 2011 at 10:10 am

    Another important point about all of the rigorous studies of Cognitive Tutor (that I’m aware of) is that they measure the effects of the entire curriculum. This includes the technology, which is supposed to be used for 40% of class time, as well as the textbook materials and prescribed practices for the rest of the instructional time. If Carnegie Learning made poor choices for these classroom pieces, or teachers have difficulty implementing them, those pieces could mask any positive effects of the technology. Experiments of this type cannot tease out the effects of the technology itself.

    John
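
    A back-of-the-envelope sketch of the masking John describes, assuming (purely for illustration) that curriculum effects blend in proportion to instructional time: the 40%/60% split of class time is from his comment, but the effect sizes and the linear blending are invented simplifications.

    ```python
    # Hypothetical numbers only: a modest benefit from the tutor can be
    # cancelled out by the other 60% of the curriculum as implemented.
    software_effect = +0.15   # invented benefit of the tutor alone (SD units)
    other_effect    = -0.10   # invented drag from texts/practices around it

    blended = 0.4 * software_effect + 0.6 * other_effect
    print(f"measured effect of the whole curriculum: {blended:+.2f} SD")  # +0.00
    ```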

  • 3. Bijan Parsia  |  October 12, 2011 at 2:50 am

    This is something that drives me nuts about people advocating extremely burdensome or expensive changes to teaching: 1) failure to account for confounding variables, 2) failure to account for key dependencies with big downsides (e.g., a technique might improve outcomes in the hands of a good teacher when it is well executed, but degrade them if poorly executed or just in the hands of a poor teacher), and 3) failure to assess the cost vs. benefit.

    I’m not willing, qua university employee, to spend a ton of effort on something that at best improves outcomes marginally and carries a high risk of hurting things. I’ll be especially put out if doing something cheap or easy would get me 50% of that benefit for almost no effort and little downside risk.

    An obvious example is Blackboard (and lots of other VLEs). Let’s just stipulate that instructor availability in online fora, plus electronic collection of coursework and management of feedback, improves students’ perception of their experience (I’m not even going for a substantive outcome improvement). The fact that it costs me hour upon hour of rage-inducing work to get simple things to function and to work around insanity (no pre tag in quiz questions, really?) makes the cost very high (putting aside the absurd fees and infrastructure costs). The fact that the whole infrastructure makes me generate a lot of easily avoidable errors frustrates the students.

    The fact that I cannot grade assignments by downloading student answers into a reasonable structure (e.g., directories of non-name-mangled files), entering marks and comments into a spreadsheet, and then uploading the results is a travesty (a sketch of how little scripting that workflow would need follows this comment). It takes 2-5 minutes *minimum* to enter a grade plus feedback (when that feedback is already written!). There’s no reasonable way to survey what you’ve done so far, so online grading of a group is just broken.

    So the actual implementation prevents us from giving better feedback and catching grading errors early. The actual implementation raises the cost of setting assignments. There’s no research that needs to be done here to figure out that this is majorly wasteful.

    Instead of dazzling us with fancy-pants heroic treatments, how about doing basic work on basic educational public health? Cognitive Tutor seems more like an MRI machine than a vaccine, and where vaccines and hand-washing would save millions of lives for much less, we should be focusing on distributing them more widely.
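
    The offline half of the workflow Bijan wants is a few lines of scripting once the pieces exist outside the VLE. The sketch below uses hypothetical file and column names and deliberately assumes nothing about Blackboard’s own bulk-upload mechanism; it only merges a marks spreadsheet with per-student submission directories and writes per-student feedback files ready to hand back.

    ```python
    # Sketch only: hypothetical layout, no VLE API assumed.
    #   submissions/<student_id>/...   downloaded student work
    #   marks.csv                      columns: student_id, mark, comment
    #   feedback/<student_id>.txt      generated feedback, ready to upload
    import csv
    from pathlib import Path

    SUBMISSIONS = Path("submissions")
    MARKS_CSV = Path("marks.csv")
    FEEDBACK = Path("feedback")

    FEEDBACK.mkdir(exist_ok=True)

    with MARKS_CSV.open(newline="") as f:
        for row in csv.DictReader(f):
            student = row["student_id"]
            if not (SUBMISSIONS / student).is_dir():
                print(f"warning: no submission directory for {student}")
            (FEEDBACK / f"{student}.txt").write_text(
                f"Mark: {row['mark']}\n\n{row['comment']}\n"
            )
    ```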

  • 4. Bijan Parsia  |  October 12, 2011 at 2:57 am

    Sorry to rant here, but it’s really absurd. I can’t even get little basic usability things that are under our control improved. We have a portal that lists all our BB courses. It takes forever to load because it refreshes from some remote, slow database every freaking time we log in (30 seconds minimum). This panel *rarely* changes, maybe twice a year on average (i.e., when courses are scheduled).

    This is coupled with an absurdly short (though apparently random) timeout. If you work through the portal, you can end up doing this 2-5 times a day. If a student has a quick issue you want to check on, it goes from 2 seconds to 5 minutes and a bolt of pure rage.

  • 5. gasstationwithoutpumps  |  October 24, 2011 at 10:44 am

    Mark, you may want to respond to this commentary:
    http://oilf.blogspot.com/2011/10/is-pope-jewish-ii-classroom-technology.html

