What do I mean by Computing Education Research? The Computer Science Perspective

November 12, 2018 at 8:00 am 8 comments


Last week, I talked about how I explain what I do to social scientists. This time, let me explain what I do to computer scientists. I haven’t given this talk yet, and have only tried the ideas out on a few people. So consider this an experiment, and I’d appreciate your feedback.

Let’s simplify the problem of computing education research (maybe a case of a spherical cow). Let’s imagine that instead of classes of Real Humans, we are teaching programming to Human-like Turing Machines (HTMs). I’m not arguing that Turing machines are sufficient to represent human beings. I’m asking you to believe that (a) we might be able to create Turing Machines that could simulate humans, like those we have in our classes, (b) RH’s would only have additional capabilities beyond what HTM’s have, and (c) HTM’s and RH’s would have similar mechanisms for cognition and learning. (Carl Hewitt has a great CACM blog post arguing that message passing is more powerful than TM’s or first order logic, so maybe these should be HMP’s, Human Message Passers. I don’t think I need more than TM’s for this post.)

This isn’t a radical simplification. Cognitive science started out using computation as a model for understanding cognition (see history here). Information processing theory in psychology starts from a belief that humans process information like a computer (see Wikipedia article and Ed Psychology reference). Newell and Simon won the ACM Turing award and in their Turing Award lecture introduced the physical symbol system hypothesis, “A physical symbol system has the necessary and sufficient means for general intelligent action.” If we have a program on a Turing machine that gives it the ability to process the world in symbols, our theory suggests that it would be capable of intelligence, even human-like intelligence. I’m applying this lens to how we think about humans learning to program.

This simplification buys me two claims:

  • The Geek Gene is off the table. The Geek Gene is the belief that some people can’t learn to program (see blog post for more). Any Turing machine can simulate any other Turing machine. Our HTM’s are capable of tracing a program. If any HTM can also write code, then all HTM’s can write code. Everyone has the same computational capability. (If HTM’s can all code, then RH’s can all code, because HTM’s have a subset of RH cognitive capabilities.)
  • Learning of our students can be analyzed and understood as information processing. The behavior of Turing machines is understandable with analysis. HTM’s are sophisticated Turing machines. The core mechanism of HTM’s can be analyzed and understood. If we think about our students as HTM’s, we might reason about their learning about computing.
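The simulation claim can be made concrete with a toy sketch: a minimal Turing machine interpreter in Python. The point is that a machine’s “program” (its transition table) is just data, so one program can run any machine. The `flipper` machine below is an invented example, not anything from the literature.

```python
# A minimal Turing machine simulator. A machine is a transition table
# mapping (state, symbol) -> (new_state, symbol_to_write, head_move),
# so the same interpreter can run any machine handed to it as data.

def run_tm(transitions, tape, state="start", head=0, max_steps=1000):
    """Run a TM given as {(state, symbol): (new_state, write, move)}."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        symbol = tape.get(head, "_")          # "_" is the blank symbol
        if (state, symbol) not in transitions:
            break                             # halt: no rule applies
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# An invented machine that flips every bit, then halts at the blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_tm(flipper, "0110"))  # -> 1001
```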

Here are some of the research questions that I find interesting, within this framing.

How do HTM’s learn to program?

All HTM’s must learn, and learn at a level where their initial programming (the bootstrap code written on their tape when they come into our world) becomes indistinguishable from learned capabilities. HTM’s must have built-in programming to eat and to sleep. They learn to walk and run and decipher symbols like “A,” such that it’s hard to tell what was pre-programmed and what was learned. HTM’s can extend their programming.

There are lots of models that describe how HTM’s could learn, such as SOAR and ACT-R. But none so far has learned to program. The closest are the models used to build the cognitive tutors for programming, but those couldn’t debug and couldn’t design programs. They could work from a definition of a program to assemble a program, but that’s not what most of us would call coding. How would they do it?
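For a flavor of how architectures like SOAR and ACT-R model cognition, here is a toy production system: if-then rules firing over a working memory until no rule applies. This is only a sketch of the general idea; the rules and memory contents (adding 2 + 3 by counting up) are invented for illustration and are not drawn from either architecture.

```python
# A toy production system in the spirit of ACT-R/SOAR: cognition as
# condition-action rules that fire over a working memory, repeating
# until no rule's condition matches (quiescence).

def run_productions(rules, memory, max_cycles=10):
    """Each cycle, fire the first rule whose condition matches."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(memory):
                action(memory)
                break
        else:
            return memory  # no rule matched: stop
    return memory

# Two invented rules for the goal "add 2 + 3 by counting".
rules = [
    (lambda m: m["count"] > 0,
     lambda m: m.update(total=m["total"] + 1, count=m["count"] - 1)),
    (lambda m: m["count"] == 0 and "answer" not in m,
     lambda m: m.update(answer=m["total"])),
]

memory = run_productions(rules, {"total": 2, "count": 3})
print(memory["answer"])  # -> 5
```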

How would HTM’s think about code? How would it be represented in memory (whether that memory is a tape, RAM, or human brains)? There is growing research interest in how people construct mental models of notional machines. Even experts don’t really know the formal semantics of a language. So instead, they have a common, “notional” way of thinking about the language. How does that notional machine get represented, and how does it get developed?
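One way to make a notional machine visible is to trace a program’s state one statement at a time, the way a tutor might at a whiteboard. A minimal sketch, with invented statements chosen only for illustration:

```python
# A sketch of making a notional machine visible: run each statement
# and print a snapshot of all variables afterward, so a learner can
# see how assignment updates the machine's state step by step.

def trace(statements):
    """Run each statement; print the variable state after each one."""
    state = {}
    for stmt in statements:
        exec(stmt, {}, state)   # execute with `state` as the namespace
        print(f"{stmt:<12} -> {state}")
    return state

final = trace(["x = 3", "y = x + 1", "x = y * 2"])
# x = 3        -> {'x': 3}
# y = x + 1    -> {'x': 3, 'y': 4}
# x = y * 2    -> {'x': 8, 'y': 4}
```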

How do we teach HTM’s to learn to program?

You shouldn’t be able to just reprogram HTM’s or extend their programs by some manipulation of the HTM’s. That would be dangerous. The HTM might be damaged, or learn something that led them into danger. Instead, extending HTM’s programming can only be done by conscious effort by the HTM. That’s a core principle of Piaget’s Theory of Cognitive Development — children (RH’s and HTM’s) learn by consciously constructing a model of the world.

So, we can’t just tell an HTM how to program. Instead, we have to give them experiences and situations where they learn to program when trying to make sense of their world. We could just make them program a lot, on increasingly harder programs. Not only is that de-motivating (maybe not an issue for HTM’s, but it certainly is for RH’s), but it’s inefficient. It turns out that we can use worked examples with subgoal labeling and techniques like Parsons problems and peer instruction to dramatically improve learning in less time.
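Parsons problems give the learner the correct lines of a program in scrambled order and ask them to arrange the lines, rather than write code from scratch. A hypothetical sketch of the mechanics (checking against a single canonical ordering, which real Parsons tools generalize beyond):

```python
# A minimal Parsons-problem sketch: shuffle a known-correct program's
# lines to present to the learner, then check a submitted ordering.

import random

SOLUTION = [
    "total = 0",
    "for n in [1, 2, 3]:",
    "    total = total + n",
    "print(total)",
]

def make_problem(solution, seed=0):
    """Shuffle the solution lines to present to the learner."""
    lines = solution[:]
    random.Random(seed).shuffle(lines)
    return lines

def check(attempt, solution):
    """An attempt is correct if it restores the original order."""
    return attempt == solution

problem = make_problem(SOLUTION)
print(check(SOLUTION, SOLUTION))  # -> True
# The learner's task: drag the lines of `problem` back into order.
```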

What native capabilities of HTM’s are used when they learn to code?

We know that learning to read involves re-using more primitive mechanisms to see patterns (see article here). When HTM’s learn to program, what parts of the native programming are being re-used for programming?

Programming in RH’s may involve re-use of our built-in ability to reason about space and language. My colleague Wes Weimer (website) is doing fMRI studies showing that programmers tend to use the parts of their brain associated with language and spatial reasoning. In our work, we have been studying the role of spatial reasoning and gesture in learning to program (see summaries of our ICER 2018 papers). We don’t know why spatial reasoning might be playing a role in learning to program. Maybe it’s not spatial reasoning itself, but some aspect of spatial reasoning, or maybe it’s even some other native ability that is related to spatial reasoning.

How does code work as an external representation of HTM’s, and where does it help?

We can safely assume that HTM’s, like RH’s, would enhance their cognition through the use of external representations. Cognition and memory are limited. Even an infinite tape has limitations in terms of time to access. Human cognitive systems are limited in terms of how much can be attended to at once. RH’s use external representations (writing notes, making diagrams, sketches) to enhance their cognition. We’re assuming that HTM’s have a subset of RH abilities, so external representations would help HTM’s, too.

My students and I talk about a wonderful paper by David Kirsh, Thinking with External Representations (see link here). It’s a compelling view of how external representations give us abilities to think that we don’t have with just our brain alone.

How can program code be a useful external representation for HTM’s? When does it help, e.g., with what cognitive tasks is code a useful external representation? For example, a natural one is modeling and simulation — we can model more complex situations with program code than we can keep in our head, and we can simulate that model for a much larger range of time and possible values. Are there cognitive tasks where code by itself, as a notation like written language or mathematics, can enhance cognition? Here I’m thinking about the ability of code to represent causal relationships (e.g., as in Bruce Sherin’s work) or algebraic forms (e.g., as in Bootstrap) — see here for discussion of both.  I’m intrigued by the idea of the affordances of reading code even before writing it.
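As a small illustration of code as a modeling medium, here is a back-of-the-envelope simulation that would be tedious to carry out mentally. The savings scenario and all the numbers are made up for illustration:

```python
# Code as an external representation: a model we can run over a much
# larger range of time and values than we could hold in our heads.

def simulate_savings(balance, rate, deposit, years):
    """Yearly balances with compound interest and a fixed deposit."""
    history = [balance]
    for _ in range(years):
        balance = balance * (1 + rate) + deposit
        history.append(round(balance, 2))
    return history

# "If I save $1000/year at 5%, where am I in 10 years?"
print(simulate_savings(0, 0.05, 1000, 10)[-1])
```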

What makes programming worth learning for HTM’s?

Why should an HTM learn programming? Let’s assume that an HTM’s basic programming is going to be about staying alive, e.g., Maslow’s hierarchy of needs. When would an HTM want to learn programming?

The most obvious reason to learn programming is because you can get paid to do it. It’s about meeting physiological needs and safety. But, if you can meet those needs doing something that’s easier or more pleasant or has fewer barriers, you’ll likely do that.

Sometimes, you’ll want to learn programming because it makes easier something you want to do anyway. Brian Dorn’s graphic designers wanted to learn programming (see here) because they used Photoshop or GIMP and wanted a way to do that work more easily and quickly. Maybe that’s about safety and physiological needs, but maybe it was about esteem or even self-actualization (if HTM’s care about those things).

Where my simplification breaks down: Real humans learn in situated and social contexts

Our learning theories about RH’s say that they are unlikely to start a new subject unless there’s social pressure to do so (see Pat Alexander’s Model of Domain Learning). Would HTM’s feel social pressure? Maybe.

As I described in the previous blog post, much of my work is framed around sociocultural models of learning, like Lave and Wenger’s situated learning. I use Communities of Practice to understand a lot of the situations that I explore. We can only go so far in thinking about programming as just being inside of individual minds (HTM or RH). Much of the interesting stuff comes when we realize that (a) our cognition interacts with the environments and situations around us, and (b) our motivation, affect, and cognition are influenced by our social world.

Setting aside whether it’s social science or computer science, I am still driven by a paper I read in 1982, five years after it was written: “Personal Dynamic Media” by Alan Kay and Adele Goldberg (see copy here). I want people to be able to use coding like they use other literacies, to create a literature, in a casual, informal, and still insightful way. Mitchel Resnick often talks about people using Scratch to write a card to their mother or grandmother — that’s the kind of thing I want to see. I want people to be able to make small computational models that answer questions, in the same way that people do “back of the envelope” calculations today. I also want great literature — we need Shakespeares and da Vincis who convey great thoughts with computing (an argument that Andrea diSessa made recently at the PPIG conference, which Felienne Hermans blogged about here). That’s the vision that drives me, whether I’m using cognitive science or situated learning.




8 Comments

  • 1. orcmid  |  November 12, 2018 at 12:58 pm

    One Quibble. It is any Universal Turing Machine that can simulate any Turing Machine. Not all Turing Machines are Universal. Also, some Universal Turing Machines are not well-suited to use in simulating others because the encoding problem is just too awful.

    Another quibble. Originally, coders worked from relatively detailed, higher-level specifications (e.g., flow charts). These days, compilers (and script interpreters) are the coders. I bristle when someone asks whether I am a coder. I was always a programmer, though. I once worked on code that was created from flowcharts drawn by Grace Hopper. A colleague and I hacked the machine-language code, not the flowcharts, as part of a modification for a particular customer. The code was for a Fortransit-like subset of Fortran.

    Final quibble. I have my doubts about HTMs. First, I don’t think they will be TMs in the Turing-intended manner. That is, not that particular model of computation. If you mean something that is Church-Turing complete, I am not clear that Hewitt’s rant will help. We do need higher-order models that deal with interactivity, and others have been working on that for some time. Interactivity has been proposed as an escape from the Church-Turing thesis. My sense is that this demonstrates a misunderstanding of CT-completeness and/or maybe of what is meant by computability and algorithmic. And interactivity is very powerful as a practical matter. Is the notion of a level of interactivity that would be recognized as social connection what is being circled around here?

    Perhaps it is best to look at the Computer Science communities of practice and be careful about the lingo and the framing.

    Finally, I don’t question your (personal) vision, although I am uncertain that it will constitute a literacy, as discussed elsewhere. A useful art, certainly. Evocative of creativity, indeed.

    Don Knuth has some observations about how some folks seem to naturally manage layers of abstraction and others seem incapable. My hypothesis is that it depends on recognition of abstraction at all and there is a cognitive dissonance that is not overcome otherwise. Perhaps reconciling that precedes whatever we could settle on as training for “computational” thinking.

    Try teaching modus ponens, simple propositional logic, and how an implication can be true yet not support any deduction on its own. It took me until I met Raymond Smullyan in a social setting for my bafflement to be cleared up with a few minutes on a chalkboard in his study.

    This observation does not deny useful, artistic, and creative employment of computer-based facilities without understanding much about the nature of computation.

  • 2. gasstationwithoutpumps  |  November 12, 2018 at 7:14 pm

    I don’t find the HTM metaphor to be at all useful—if anything it will hurt your case with computer scientists.

    Turing machines are only useful as models for determining computability—they are useless for discussions of efficiency, maintainability, or pretty much any other aspect of computer science, software engineering, and programming other than computability. Those things that are most relevant to Turing machines are irrelevant for people learning—we are not interested in whether it is theoretically possible for something to be learned, but in how to ease learning for something that is trivially known to be learnable (many existing instances of it having been learned).

    The crucial aspects of human learning (attention, working memory limitations, cognitive load, motivation, …) are not captured with a Turing machine model, nor can you easily adjust the model to capture any of them.

    I think you need to cross out this approach as a bad idea and come up with something more convincing.

    • 3. Mark Guzdial  |  November 12, 2018 at 7:19 pm

      “Turing motivates his approach by reflecting on idealized human computing agents. Citing finitary limits on our perceptual and cognitive apparatus, he argues that any symbolic algorithm executed by a human can be replicated by a suitable Turing machine. He concludes that the Turing machine formalism, despite its extreme simplicity, is powerful enough to capture all humanly executable mechanical procedures over symbolic configurations. Subsequent discussants have almost universally agreed.” — https://plato.stanford.edu/entries/computational-mind/

      • 4. gasstationwithoutpumps  |  November 13, 2018 at 1:02 am

        There is no evidence that human learning can be reliably represented by a symbolic algorithm. Learning is not a “mechanical procedure”—if it were, then teaching would be a hell of a lot easier.

        I still think your use of Turing machines as analogies for human learning is badly flawed and more likely to turn computer scientists away from you than attract them.

        • 5. Mark Guzdial  |  November 13, 2018 at 7:40 am

          I find the evidence from ACT-R and SOAR more compelling than you do. They exactly replicate learning in some laboratory situations.

          But the more concrete point, I agree with you on. Computer science faculty are unlikely to be swayed by my Turing Machine analogy.

          • 6. orcmid  |  November 13, 2018 at 10:40 am

            I’m with gasstation on this. Turing’s hypothesis is a serious reductionism and there is no basis for presumption that a TM is the proper model for an acceptable mechanical human or, what we might want to call a humanly-social machine (HSM).

            I’m a bit more appreciative of models of computation with regard to what they can reveal about the practices of programming and software engineering, because that is where I have been focused lately.

            I can’t claim any competence on what it is to provide instruction on such matters, or to use programming as an on-ramp to the community of software-development/-engineering practice. I am far less clear on communities of practice that focus more on matters of human enterprise instead and the utility of digital mechanisms as instruments thereof.

            Mark, what do you identify as the significant exposure to programming and *science*, *engineering*, and (discrete) *mathematics* that would equip an educator to deliver some kind of Computer Science Education at an appropriate level? And what about simple fluency with information technology? How much exposure to programming is essential to that?

            (Sorry, I just drove off into the weeds. Let me put it another way. Does the HSM have to know how to program? How to learn to program? Why? I would say, looking around in today’s world, that a more important faculty would be comprehension of what it means to live in a democracy and what is required of individuals to sustain the resiliency of such a social-economic system.)

            • 7. Mark Guzdial  |  November 13, 2018 at 10:50 am

              As I said in my response to Kevin, yes, there is basis for that presumption — it’s Turing himself. But I give up on making that point.

              Dennis, I can’t answer the questions about “significant exposure,” and I’m not sure that anyone can yet. Probably the best answer to that question is a paper by David Weintrop and Uri Wilensky (and others) on how everyday scientists and engineers use computational thinking.

              The second questions are ones I’m explicitly working on answering. Yes, the HSM needs to learn how to program, but I don’t think that you and I mean the same thing by “programming.” I’ve written a couple of blog posts that are coming out over the next few weeks to respond to this issue. (BTW, all of Alan Kay’s Scientific American pieces are freely available outside of paywalls.) Let me just offer a relatively new book by an English professor on why she thinks people need to learn to program: https://mitpress.mit.edu/books/coding-literacy

  • 8. Alan Fekete  |  November 14, 2018 at 10:08 am

    I agree with everyone, that this “HTM” model isn’t going to convince typical CS faculty of anything (especially since AI mostly gave up on any interest in human intelligence, or general intelligence, and has instead focussed on simply getting high scores on certain highly constrained classification or decision tasks). So why introduce this model at all?
    I think your real argument, to win respect from these people, needs to go to the roots: rather than try to disguise what you do as a form of math theory (like theoretical CS) or as a form of engineering (like systems CS), you need to put the case to CS faculty that a social/learning-science approach is a useful and legitimate form of scholarship that they should value, offering insights that might have real-world impact. I think the best way is to start with the aspects of human-centric CS that have already won some legitimacy, such as HCI and empirical SE. CS faculty already know that CHI and ICSE are top-rank venues, so show the sort of work that appears there, and then extend the analogy from “understanding how UIs work” and “understanding how software is written/maintained” to “understanding how the students learn in your classes”.
    I think the key challenges are going to be (i) people who say “this is simply part of Education Research, so you belong in an Ed School” and (ii) people whose implicit model of learning is content transmission, so they say “teaching is simple, just tell the students the facts; if they can’t get it right, it’s because they are stupid”. I suggest you plan to explicitly address each of these misconceptions.

    Good luck! The whole field will be better as a wider range of intellectual approaches is included. Just as an encouraging side note, I have been active with some groups trying to expand the sort of work accepted in the database community, to include human-centric approaches (design systems that take account of users’ capabilities, make systems that are more usable, etc.). It’s been slow going, starting with a colocated workshop alongside SIGMOD, but we are starting to see a few papers in SIGMOD itself where evaluation is done by user study rather than just throughput measurements.

