
A new definition of Computational Thinking: It's the Friction that we want to Minimize unless it's Generative

David Benedetto wrote a blog post for CSTA that gave me new insight into Computational Thinking (thanks to Shuchi Grover, whose tweets drew me to it):

http://advocate.csteachers.org/2019/02/27/situated-computational-thinking/

David says:

I think this definition of CT is as good a starting point as any:

Computational Thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent (Cuny, Snyder, Wing, 2010).

He evolves this position until, like Shuchi, he comes up with two definitions of CT:

What are the implications of this? I think there are two clear options for how we define CT:

(A) Restrict what we mean by CT. This is perfectly reasonable and probably necessary for most practical purposes. However, this has the inevitable consequence of fragmenting our understanding of CT. There will be different CTs in different disciplines / fields. We will do this, but we should try to understand the restrictions that we are imposing, and the consequences of imposing them.

(B) Break our concept of CT wide open. I think the scientific community (at least, those who are studying the construct of CT and how it plays out in real cultural contexts) should do this, so that we can explore how CT is understood and practiced in a variety of contexts and for a wide range of purposes.

As a researcher, I’m more in favor of the former — let’s define Computational Thinking precisely.  David’s concern is really about the social context around CT. People want to call lots of things Computational Thinking. Can we come up with a definition for CT that bridges these? That represents the discipline-specific uses of CT, and is well enough defined that we can actually measure something about it?

There are many other "thinkings" that lay claim to providing students with critical skills. Admiral Grace Hopper would likely support "mathematical thinking" more than "computational thinking," as this interesting essay from Yale points out. Skills like "decomposition" or "abstraction" are included in many definitions of computational thinking (e.g., this blog post), and it's true that you need those in computing. But those skills first belonged to mathematics, engineering, and science, and I'd argue that the teachers in those subjects might be in a better position to teach them and to measure them. Computation can play an important role in learning decomposition and abstraction, but those skills don't belong uniquely to computation, or to a class on computational thinking. So, what is unique about computation?

The tension between HCI and Computational Thinking

On the computer science side of my life, my research community is human-computer interaction. I've published in CHI, DIS, CSCW, VL/HCC, and UIST. The Cuny, Snyder, and Wing definition is hard for me to reconcile with being an HCI researcher. The point of HCI research is to minimize the amount that a user has to learn in order to "formulate problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent." HCI is trying to make it easier for users to think with a computer about whatever they want to think about. Computational Thinking is about what you need to know in order to think with a computer.

Over the last few weeks in this blog, I've been exploring the notion of task-specific programming languages. I was amazed at how much social studies teachers could do with Vega-Lite in a participatory design session we ran in early March. Sarah Chasins's work on Helena and Rousillon is absolutely stunning for how much people can achieve with no training. Hariharan Subramonyam sent me this fascinating essay on end-user programming and how to minimize the effort it takes end users to start programming: https://www.inkandswitch.com/end-user-programming.html. As I talked about in my SIGCSE 2019 keynote, Bootstrap:Algebra and most uses of Scratch actually rely on a small number of computational ideas. There is expressive and learning power in even a small amount of computation.
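To give a concrete sense of how little there is to learn, here is a minimal sketch of a Vega-Lite specification, written out as a Python dict so the JSON that users actually edit can be printed. The dataset and field names are invented for illustration; they are not the materials from our design session.

```python
# A minimal Vega-Lite specification, written as a Python dict so it can be
# dumped as the JSON that an end user actually edits. The dataset and field
# names ("state", "population") are invented for illustration.
import json

spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [
        {"state": "Michigan", "population": 10.0},   # population in millions
        {"state": "Ohio", "population": 11.7},
        {"state": "Indiana", "population": 6.7},
    ]},
    "mark": "bar",                                    # the chart type
    "encoding": {                                     # which fields map to which axes
        "x": {"field": "state", "type": "nominal"},
        "y": {"field": "population", "type": "quantitative"},
    },
}

print(json.dumps(spec, indent=2))
```

Changing a single word, such as the mark or a field name, changes the chart. That is roughly the grain size of the edits I describe below.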

Michael Mateas wrote an essay back in 2009 that has been influential in my thinking. I blogged about it here: "There will always be friction." Michael looked at the Alan Perlis talk of 1961 (that I talk and write about often), and particularly at the exchange with Peter Elias. Elias argued that students shouldn't have to learn to program — the computer should learn to understand us. Both Perlis and Mateas disagree. The computer can never understand us completely. We have to smooth the communication because the computer cannot. There will always be a challenge to human-computer interaction. There will always be friction, and it's the human's job to manage that friction.

A New Definition for Computational Thinking

So, here's my new take on Computational Thinking: It's the friction. Let's take the original Cuny, Snyder, and Wing definition — computational thinking is about framing problems so that computers can solve them. The work around task-specific programming languages is showing us that we can make the amount that the user has to learn in order to use programming for their problem very small.

To meet Alan Kay's point about generativity, there are some things in computing that we want to teach because they give us new leverage on thinking. We want to teach the things that are useful, not the things that are necessary only because we have bad user interfaces.

A minimal definition of Computational Thinking: The stuff that we have to learn in order to communicate our tasks to a computer. It should be small, and the part that we learn should be generative, useful for new problems and new thinking. Everything else should be eliminated by good user interfaces.

You don’t have to master abstraction and decomposition just to use a programming language to help you learn. Our social studies teachers modified Vega-Lite programs, made mistakes (every single one of them) and recovered from them, and tried new things that they figured out on their own — all in 10-20 minutes. They already have problem solving skills. They didn’t need any “computational problem solving skills.” They certainly didn’t learn any special computational abilities to abstract and decompose in 10 minutes. They already know enough to use programming to learn. If we can eliminate the need for a particular skill in order to use computing to learn something else, we should.
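To illustrate the kind of edit-and-recover cycle I'm describing, here is a hedged sketch. The spec, dataset, and field names are invented; this is not the actual session material, and the behavior noted in the comments is typical Vega-Lite behavior rather than a transcript of what happened.

```python
# A sketch of the edit-and-recover cycle: small one-line changes to a spec,
# including a typo that is easy to notice and fix. All names are invented.
spec = {
    "data": {"values": [{"year": 1990, "turnout": 55}, {"year": 2000, "turnout": 51}]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "year", "type": "ordinal"},
        "y": {"field": "turnout", "type": "quantitative"},
    },
}

spec["mark"] = "line"                        # try a different chart type
spec["encoding"]["y"]["field"] = "turnuot"   # a typo typically just yields a blank chart
spec["encoding"]["y"]["field"] = "turnout"   # notice it, fix it, keep going
```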

This meshes with David Weintrop and Uri Wilensky's definition — it's the computational practices of actual scientists and engineers who use computing. Their definition is particularly strong because it's empirically grounded: they asked computational scientists and engineers what they actually do. Weintrop and Wilensky's participants want to do their work, not programming for its own sake. So they use a minimal subset of computing that buys them something for their thinking and in their tasks.

I like this definition because it’s aspirational.  Today, there’s a lot of stuff that you have to learn to use a computer to solve problems. Philip Guo gave a talk here at Michigan recently (similar to one he gave at U-W) and described how data scientists have to become system administrators to manage all the various packages and databases to do their job.  That’s a problem. That’s not computational thinking. That’s stuff to get rid of. How small can we make computational thinking?
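As a hedged illustration of the kind of incidental environment wrangling Guo describes (the package list here is invented), this is the sort of work I would count as friction to eliminate rather than as computational thinking:

```python
# Before any analysis can start, the analyst often has to play system
# administrator: is the environment even set up? The package list is invented.
import importlib.util

required = ["pandas", "numpy", "matplotlib", "sqlalchemy"]
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print("Install before you can begin:", ", ".join(missing))
else:
    print("Environment ready; the actual analysis can start.")
```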

 
