Stages of acceptance, reversed: How do you prove something works?
February 1, 2013 at 1:19 am 3 comments
“Gas station without pumps” has a great point here (linked below), but I’d go a bit further. As he suggests, proponents of an educational intervention (“fad”) rarely admit that it’s a bad idea, rarely gather evidence showing that they’re wrong, and swamp the research literature with evidence that they’re right.
But what if external observers test the idea and find that it works as hypothesized? Does that mean it will work for everyone? Media Computation has been used successfully to improve retention at several institutions, with both CS majors and non-CS majors, in evaluations not connected to me or my students. That doesn’t mean it will work for any and every teacher. There are so many variables in any educational setting. Despite the promises of the “What Works Clearinghouse,” even well-supported interventions will sometimes fail, and there are interventions that are not well-supported that sometimes work. Well-supported interventions are certainly more promising and more likely to succeed. The only way to be sure, as the blog post below says, is to try it, and to measure it as well as you can, to see if it’s working for you.
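As a rough illustration of “measure it as well as you can”: if you track how many students persist before and after adopting an intervention, even a simple two-proportion test can tell you whether the change you see is larger than chance. The sketch below is purely hypothetical; the counts and the choice of a z-test are my assumptions for illustration, not anything from this post or from the Media Computation evaluations.

```python
# Minimal sketch of checking a local retention change with a two-proportion z-test.
# All numbers below are made up for illustration; substitute your own counts.
from math import sqrt, erf

def two_proportion_z_test(retained_a, total_a, retained_b, total_b):
    """Return (z, two_sided_p) comparing retention rates of cohorts A and B."""
    p_a = retained_a / total_a
    p_b = retained_b / total_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (retained_a + retained_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - phi(abs(z)))
    return z, p_value

# Hypothetical cohorts: a term before and a term after adopting the new approach
z, p = two_proportion_z_test(retained_a=62, total_a=100,   # old course
                             retained_b=78, total_b=100)   # new course
print(f"z = {z:.2f}, p = {p:.3f}")  # small p suggests the difference isn't just noise
```

Even then, a measurable difference in your own setting is evidence that the intervention worked for you, not a guarantee that it will work anywhere else.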
I would posit that there is another series of responses to educational fads:
1. It is great, everyone should do this.
2. Maybe it doesn’t work that well in everybody’s hands.
3. It was a terrible idea—no one should ever do that.
Think, for example, of the Gates Foundation’s attempt to create small high schools. They were initially very enthusiastic, then saw that it didn’t really work in a lot of the schools where they tried it, and then abandoned the idea as completely useless and even counter-productive.
The difficult thing for practitioners is that the behavior of proponents in stage 1 of an educational fad is exactly the same as in Falkner’s third stage of acceptance. It is quite difficult to see whether a pedagogical method is robust, well-tested, and applicable to a particular course or unit—especially when so much of the information about any given method is hype from proponents. Educational experiments seem like a way to cut through the hype, but research results from educational experiments are often on insignificantly small samples, on very different courses from the one the practitioner needs to teach, and with all sorts of other confounding variables. Often the only way to determine whether a particular pedagogic technique works for a particular class is to try it and see, which requires a leap of faith, a high risk of failure, and (often) a large investment in developing new course materials.
via Stages of acceptance, reversed « Gas station without pumps.
Entry filed under: Uncategorized. Tags: assessment, computing education research, evaluation, Media Computation.
1. Mark Urban-Lurain | February 1, 2013 at 7:46 am
Interesting point and well-taken. Innovations are usually promoted by people who have worked hard on them and are committed to their success. Adopters of the innovations rarely have that level of engagement.
On the other hand, one point from the original post (not quoted in this post), that “Some techniques have stuck around for a long time because they work relatively well for the effort expended (lecturing, for example) …”, misses the opposite side of the equation. Resisters of change often demand all sorts of evidence, practically to the point of a warranty, that the innovation will work, while providing little evidence that what they do (e.g., lecture) “works.” I believe that the research evidence is pretty overwhelming that lecture doesn’t “work” in general unless one defines “working” as “weeding out,” yet it persists due to lots of inertia in the system.
Charles Henderson has studied adoption patterns quite extensively and makes the point that faculty do NOT adopt and persist with innovations because of research evidence. See, for example:
2012HendersonPRST-PER_RBISFactors.pdf
He notes that most faculty are aware of research-based techniques and many have tried them, but that they rarely persist. His data do point to issues such as implementation fidelity and lack of peer support.
2. Mark Guzdial | February 1, 2013 at 7:52 am
Lijun Ni found the same thing in her study of MediaComp adopters — research evidence played no role in the decision: http://dl.acm.org/citation.cfm?id=1508865.1509051
3. Mark Urban-Lurain | February 1, 2013 at 8:28 am
NSF is starting to get the message on this and is looking for more active forms of dissemination than simply publishing research results. I expect to see new language about this in the yet-to-be-released TUES solicitation.