Sally Fincher on the need for CER: What Are We Doing When We Teach Computing in Schools?
May 1, 2015 at 8:21 am 8 comments
I’ve been looking forward to seeing this article appear in CACM for over a year. Last January and May, I heard Sally Fincher give two talks about computing education research (CER), where she started by describing (failed) efforts to teach reading over the last hundred years. She drew a compelling analogy: what educators were doing when they simplified the learning of reading parallels our efforts today to simplify the learning of programming, and those early efforts to teach simplified reading did significant harm to students. What harm are we doing to students when we teach programming in these new ways? She is not calling for an end to these efforts. Rather, she’s calling for research to figure out what we’re doing and to investigate the effects. She agreed to write up her story for Viewpoints, which is published this month in CACM. Thanks, Sally!
Other approaches believe it is more appropriate to use real syntax, but constrain the environment to a particular (attractive) problem domain so learners become fluent in a constrained space. Event-driven environments (such as Greenfoot) or scaffolded systems (like Processing.js) aim for the learner to develop an accurate mental model of what their code is doing, and ultimately transfer that to other environments. Whether they actually do so remains unclear, however: we may be restricting things in the wrong way.
Still others hold that coding—howsoever approached—is insufficient for literacy and advocate a wider approach, taking in “computational thinking,” for instance as embedded in the framework of the “CS Principles”: Enduring Understandings, Learning Objectives, and Essential Knowledge.
What is resolutely held in common with traditionally formulated literacy is that these approaches are unleashed on classrooms, often on whole school districts, even into the curricula of entire countries, with scant research or evaluation. And without carrying the teachers along. If we are to teach computing in schools, we should go properly equipped. Alongside the admirable energy being poured into creating curricula and associated classroom materials, we need an accompanying set of considered and detailed programs of research, to parallel those done for previous literacies.
via What Are We Doing When We Teach Computing in Schools? | May 2015 | Communications of the ACM.
Entry filed under: Uncategorized. Tags: computing education research, CS:Principles, Greenfoot, Processing.
1.
Michael S. Kirkpatrick | May 1, 2015 at 9:58 am
I agree with her conclusion that we need CER, but I find the comparison with reading less compelling. She even states my objection exactly: “aim for the learner to develop an accurate mental model of what their code is doing.” With reading, students already have a fairly accurate mental model because the conceptual model is the same as one they’ve already developed: how to use words to speak. In contrast, the conceptual models of coding are very different from the GUI-driven models they’ve developed. (This is also why I’m lukewarm on referring to coding as the “new literacy.”)
There’s also a danger if one takes her argument too literally. Insisting that we go into teaching CS “properly equipped” can be taken as an argument against expanding K-12 CS because we are not yet ready: After all, we may actually be doing harm with our current curricula. Of course, we are probably doing harm by not moving forward. So it becomes a catch-22.
Ultimately, CER and K-12 CS have to be parallel efforts, and we just have to accept that it’s a learning process for all of us.
2.
shriramkrishnamurthi | May 2, 2015 at 11:50 am
I am really surprised that you claim there is a “danger” if we take her argument “too literally”. I think taking it literally is perfectly reasonable, and perhaps even what she intended (I don’t presume to speak for her). Other areas (notably medicine) have a “do no harm” principle.
CS is currently overflowing with good intentions. An hour spent with the Broader Impacts sections of proposals in any NSF area will turn up numerous examples. There is a school of thought that believes good intentions trump everything else. There is another, to which I subscribe, that says good intentions do not necessarily translate to good outcomes, and when they don’t, the intention should be left in the back pocket, not inflicted on others.
As a nit, this is not a catch-22.
3.
Michael S. Kirkpatrick | May 4, 2015 at 10:58 am
I actually agree with you regarding good intentions. My point about the catch-22 was this: We don’t know which is better (block-based or syntax-driven platforms) for developing the right cognitive models. Syntax-driven languages may be imposing a high cognitive load, pushing away many otherwise qualified individuals who aren’t interested enough yet to grapple with pedantry. On the other hand, the alternative may be building improper cognitive models. There is a non-zero chance of doing harm (to demographics or to cognition), whether we pick C++ or Snap! for K-12 CS. At this point, we just don’t know which one causes more harm. So the only way to avoid doing harm would be to not teach CS in K-12 at all until we know which is the most appropriate approach. And that would definitely be doing harm. So there is no way to progress without accepting the possibility of harm.
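To make the “pedantry” concrete: here is the canonical first program, sketched in C rather than the C++ I mentioned (the ceremony is essentially the same). Every line confronts a novice with machinery that has nothing to do with the actual goal of printing one sentence:

    #include <stdio.h>             /* why? "just include it for now"   */

    int main(void)                 /* why int? why main? why (void)?   */
    {
        printf("Hello, world!\n"); /* \n is an escape code, not text   */
        return 0;                  /* return what, and to whom?        */
    }

In Snap!, the same behavior is a single block. Whether hiding all of this ceremony builds a better mental model, or just defers the confusion, is exactly the empirical question CER needs to answer.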
As for BI sections, yeah, I agree. I personally think that section is the least valuable and most dishonest part of a proposal, and I don’t mean just in CER. From what I have seen and heard, BI sections read more like sales pitches than sound scientific argument.
4.
Peter Donaldson | May 6, 2015 at 1:53 pm
I’m not sure that the harm caused by less-than-optimal K-12 CS learning experiences is greater than the harm of allowing the majority of the population to think that digital devices are a collection of different fixed-function appliances with no unifying principles.
The downsides are considerable, particularly as more and more devices and services become networked. If I can access a device and change its instructions in some way, I can turn a phone into a device for spying or use it to disable or disrupt any company’s digital services. Any information I share publicly can be easily harvested and thousands of copies created in the blink of an eye. If I model an offline process on the computer, then my interpretation of that process defines how it works in the future and could affect thousands or millions of people’s lives. Instead of hundreds of people carrying out the process, it ends up being carried out by one program where the code effectively becomes law; have you ever tried to argue with a program, or with someone completely reliant on one for answers? This kind of ignorance leaves people extremely vulnerable and makes it very easy for the small group who really do understand the concepts to effectively take control.
Programming is the new literacy in the sense that it’s a powerful new medium of expression that allows us to directly use computation to manipulate and control information. Given that various peripherals allow us to convert both energy and matter into information and back again, there is a very real sense in which learning about computation, and how to create information models, is fundamental to the kinds of opportunities people will be able to access. I know that some of this can be achieved with special-purpose pieces of software, but a general-purpose programming language is more flexible and is probably worth the effort to learn. Computing Science shouldn’t be viewed as a technical niche but as a way of understanding major parts of the world people are now growing up in.
5.
Mark Guzdial | May 3, 2015 at 10:45 am
Really interesting analysis, Michael. In fact, that’s what we’re finding in our attempts to replicate some ed-psych experiments from math and science in CS. The cognitive load of building a mental model of a program (especially during debugging) at an introductory level is far greater than that of solving similar-level algebra or physics problems.
I still think Sally has an excellent point. Sure, learning to read is different from learning to program. But we could still screw it up pretty badly.
I have a different concern with Sally’s argument. English is outside of academic control. English departments don’t get to redefine English, which is why it’s important to learn the English that exists, not the English that we might invent to make it easier. Isn’t CS different? If we invented a better way to program, why isn’t that the new definition of “programming”? Or are we already so stuck in the C-tarpit (see my earlier concerns) that C really is the definition of programming?
6.
Michael S. Kirkpatrick | May 4, 2015 at 11:23 am
I sometimes wonder how my own experience shaped my views. I’m a functional-first product of Indiana U., where my first language was Scheme before I moved on to Java. I never had C as an undergrad. But over time, I turned into a systems person. So, for me, C feels more “pure” or “sophisticated” because it maps more closely to the operation of the computer. This leads to the feeling that mastery of C is somehow the goal, and that high-level languages are just crutches to support this endeavor.
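To illustrate what I mean by “maps more closely to the operation of the computer,” here is a small C sketch (nothing canonical, just an illustration): it takes the raw-byte view of an integer that Scheme and Java deliberately hide.

    #include <stdio.h>

    int main(void)
    {
        int x = 258;                            /* four bytes somewhere in memory */
        unsigned char *p = (unsigned char *)&x; /* reinterpret them as raw bytes  */

        printf("x lives at address %p\n", (void *)&x);
        for (int i = 0; i < (int)sizeof x; i++)
            printf("byte %d = %d\n", i, p[i]);  /* 2, 1, 0, 0 on a little-endian machine */
        return 0;
    }

C makes you live at that level; whether that transparency is the goal of mastery, or just a different set of details to memorize, is another question for CER.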
I think that, as long as “programming” is linked to a traditional multiprogrammed computer, C is the definition. I don’t think things will change until we have a diverse pool of devices that fundamentally differ from the von Neumann model. Perhaps something like the Internet of Things could be a driver?
7.
Franklin Chen | May 1, 2015 at 11:23 am
Non-gated version here: http://www.cs.kent.ac.uk/people/staff/saf/CACM-May-2015.pdf
8.
Educating Computer Scientists: What should we discuss at #SIGCSE journal club? | O'Really? | August 30, 2019 at 8:59 am
[…] ….and the second published in 2015 (see comments on Mark Guzdial’s summary): […]