Stages of acceptance, reversed: How do you prove something works?
“Gas station without pumps” has a great point here (linked below), but I’d go a bit further. As he suggests, proponents of an educational intervention (“fad”) rarely admit that it’s a bad idea, rarely gather evidence showing that they’re wrong, and swamp the research literature with evidence that they’re right.
But what if external observers test the idea, and find that it works as hypothesized? Does that mean that it will work for everyone? Media Computation has been successfully used to improve retention at several institutions with both CS majors and non-CS majors, in evaluations not connected to me and my students. That doesn’t mean that it will work for each and every teacher. There are so many variables in any educational setting. Despite the promises of the “What Works Clearinghouse,” even well-supported interventions will sometimes fail, and interventions that are not well-supported sometimes work. Well-supported interventions are certainly more promising and more likely to succeed. The only way to be sure, as the blog post below says, is to try it — and to measure it as well as you can, to see if it’s working for you.
I would posit that there is another series of responses to educational fads:
- It is great, everyone should do this.
- Maybe it doesn’t work that well in everybody’s hands.
- It was a terrible idea—no one should ever do that.
Think, for example, of the Gates Foundation’s push to create small high schools. They were initially very enthusiastic, then saw that the approach didn’t really work in a lot of the schools where they tried it, and finally abandoned the idea as completely useless and even counter-productive.
The difficult thing for practitioners is that the behavior of proponents in stage 1 of an educational fad is exactly the same as in Falkner’s third stage of acceptance. It is quite difficult to tell whether a pedagogical method is robust, well-tested, and applicable to a particular course or unit, especially when so much of the information about any given method is hype from proponents. Educational experiments seem like a way to cut through the hype, but research results from educational experiments often come from samples too small to be statistically meaningful, from courses very different from the one the practitioner needs to teach, and with all sorts of other confounding variables. Often the only way to determine whether a particular pedagogic technique works for a particular class is to try it and see, which requires a leap of faith, a high risk of failure, and (often) a large investment in developing new course materials.