A Terrific and Dismal View of What Influences CS Faculty to Adopt Teaching Practices
Lecia Barker had a terrific paper at SIGCSE 2015 that I just recently had the chance to dig into. (See paper in ACM DL here.) Here’s the abstract:
Despite widespread development, research, and dissemination of teaching and curricular practices that improve student retention and learning, faculty often do not adopt them. This paper describes the first findings of a two-part study to improve understanding of adoption of teaching practices and curriculum by computer science faculty. The paper closes with recommendations for designers and developers of teaching innovations hoping to increase their chance of adoption.
I’ve published in this area before. Davide Fossati and I wrote a paper about the practices of CS teachers (based on interviews with about a dozen CS university teachers): how they made change, what convinced them to change, and how they decided if the change worked. (See blog post about this here.) The general theme was that these decisions rarely had an empirical basis.
Lecia and her co-authors went far beyond our study. She interviewed and observed 66 CS faculty from 36 institutions, explicitly chosen to represent a diverse set of schools. The result is the best picture I’ve yet seen of how CS faculty make decisions.
Lecia found more evidence of teachers using empirical evidence than we did, which was great to see. But whether students “liked” it or not was still the most critical variable:
On the other hand, if students don’t “like it,” faculty are unlikely to continue using a new practice. At a public research university, a professor said, “You can do something that you think, ‘Wow! If the learning experience was way better this term, the experiment really worked.’ And then you read your teaching reviews, and it’s like the students are pissed off because you did not do what they expected.”
Lecia discovered a reason not to adopt that I’d not heard before. She found that CS teachers filter out innovations that don’t come from a context like their own. Those of us at research universities are filtered out by some teachers at teaching-oriented institutions:
Faculty trust colleagues who have similar teaching and research contexts, share attitudes toward students and teaching, or teach similar subjects. In describing what conference speakers he finds credible at SIGCSE, a professor at a private liberal arts university acknowledged, “I do have the anti- ‘Research One’ bias. Like if the speaker is somebody who teaches at <prestigious public research university>, the mental clout that I give them as a teacher—unless they’re a lecturer—I drop them a notch. When someone stands up to speak and they’re from a really successful teaching college <names several> or universities that have a real reputation of being great undergraduate teaching institutions, I give them a lot of merit.”
The part that I found most depressing (even if not surprising) is that research evidence did not matter at all in adopting new ways to teach:
Despite being researchers themselves, the CS faculty we spoke to for the most part did not believe that results from educational studies were credible reasons to try out teaching practices.
Lecia’s study is well done, and the paper is fascinating, but the overall picture is rather dismal. She points out many other issues that I’m not going into here, like the trade-off between the cost and benefit of adopting a new practice, and the need for specialized classroom equipment for some new practices. Overall, she finds that it’s really hard to get higher-education CS faculty to adopt better practices. We reported on that in “Georgia Computes!” (see post here), but it’s even more disappointing when you see it in a large, broad study like this.