Reading student writing: The value of what can’t be automated
I really liked this post, in part because of how differently it is being interpreted within my department. I posted it to a school-wide discussion list to emphasize the value of what we do that cannot be automated. My MOOC-favoring colleagues, however, read the post in exactly the opposite way: "Anyone can do this kind of grading, so we shouldn't waste our time on it! Instead, we should abandon all courses that require this kind of grading." What can't be automated isn't worth doing?
I know that a lot of MOOC proponents are pushing automatic grading of papers as a cost-effective way to handle classes with over 1000 students. Quite frankly, the idea appalls me. I can't see any way that computer programs could provide anything like useful feedback to students on writing above the first-grade level. Even spelling checkers (which I insist my students use) do a terrible job, and what passes for grammar checking is ludicrous nonsense. And spelling and grammar are just the minor surface problems, the ones where a computer has some hope of offering non-negative advice. The feedback I provide covers much more: the structure of the document, assessment of the audience, the ordering of ideas, the flow of sentences within a paragraph, proper topic sentences, the design of graphical representations of data, feedback on citations, even suggestions for experiments to try. None of that would be remotely feasible with the very best artificial intelligence available in the next ten years.