Posts tagged ‘computational thinking’
Shuchi Grover and Roy Pea (Stanford) have a review of the field of computational thinking in K-12 schools in this month’s Educational Researcher. It’s a very nice paper. I’m excited that the paper is published where it is! Educational Researcher is the main publication venue for the largest education research organization in the United States (American Educational Research Association). Roy has been doing work in computing education for a very long time (e.g., “On the prerequisites of learning computer programming,” 1983, Pea and Kurland). This is computational thinking hitting the education mainstream.
Jeannette Wing’s influential article on computational thinking 6 years ago argued for adding this new competency to every child’s analytical ability as a vital ingredient of science, technology, engineering, and mathematics (STEM) learning. What is computational thinking? Why did this article resonate with so many and serve as a rallying cry for educators, education researchers, and policy makers? How have they interpreted Wing’s definition, and what advances have been made since Wing’s article was published? This article frames the current state of discourse on computational thinking in K–12 education by examining mostly recently published academic literature that uses Wing’s article as a springboard, identifies gaps in research, and articulates priorities for future inquiries.
This is a fascinating essay. Some of it goes too far for me (e.g., that code “produces new forms of algorithmic identity”), but the section I quote below is making a deep comment relative to the arguments we’ve been making here about “computing for everyone.”
Why should everyone know about computing? I’ve argued about the value of computational literacy as literacy — a way of expressing and notating thought. I’ve also argued about the value of computer science as science — insight into how the world we inhabit works. This part of the essay is saying something more generative — that code provides metaphors for the way we think about the world, so not knowing about code thus limits one’s ability to understand modern culture and science. The idea is akin to computational thinking, but more about cultural practices than cognitive processes.
Code is the language of computation; it instructs software how to act. But as the instructions written down in code travel out into the world, organized in the algorithmic procedures that make up software, it also has a significant influence on everything it touches. The result is a profound softwarization of society as software has begun to mediate our everyday ways of thinking and doing.
For example, software and its constituent codes and algorithms have become a metaphor for the mind, for ideology, for biology, for government, and for the economy, and with the rapid proliferation of software as an interface to the world, code has been seemingly naturalized in collective life. Computer code has also been described as a kind of law, or the set of rules and constitutional values that regulate the web. The idea that code is law suggests that choices about how to code the web will define the controls and freedoms that are built or programmed into it.
These ways of looking at code demonstrate that code is much more than a language for instructing computing machines. Instead, we need to understand code as a system of thought that spills out of the domain of computation to transform and reconfigure the world it inhabits.
An interesting piece, which argues that proficiency with computing is an important part of a modern liberal arts education. The argument is a modern and updated version of the argument that Alan Perlis made back in 1961. The specific computing literacies being described go beyond computational thinking — it's explicitly about being able to make with computing. Steve Jobs made a similar famous claim that computer science is a liberal art.
Students who graduate with a degree in liberal arts should understand the basic canon of our civilization as well as their place in the world, sure, but they also need to understand how to explore and communicate their ideas through visual communication, data manipulation, and even making a website or native mobile app. If they can’t, they’ll just understand the global context of their own unemployment.
In the About page for this blog, I wrote, "Computing Education Research is about how people come to understand computing, and how we can facilitate that understanding." Juha Sorva's dissertation (now available!) helped me come to an understanding of what it means to "understand computing." I describe a fairly technical (in terms of cognitive and learning sciences) definition, which basically is Juha's. I end with some concrete pedagogical recommendations that are implied by this definition.
A Notional Machine: Benedict du Boulay wrote in the 1980s about a "notional machine," that is, an abstraction of the computer that one can use for thinking about what a computer can and will do. Juha writes:
Du Boulay was probably the first to use the term notional machine for “the general properties of the machine that one is learning to control” as one learns programming. A notional machine is an idealized computer “whose properties are implied by the constructs in the programming language employed” but which can also be made explicit in teaching (du Boulay et al., 1981; du Boulay, 1986).
The notional machine is how to think about what the computer is doing. It doesn't have to be about the CPU at all. Lisp and Smalltalk each have small, well-defined notional machines — there is a specific definition of what happens when the program executes, in terms of application of S-expressions (Lisp) and in terms of message sending to instances of classes (Smalltalk). C has a different notional machine, which isn't at all like Lisp's or Smalltalk's. C's notional machine is closer to the notional machine of the CPU itself, but is still a step above the CPU itself (e.g., there are no assignment statements or types in assembly language). Java has a complicated notional machine that involves both object-oriented semantics and bit-level semantics.
A notional machine is not a mental representation. Rather, it’s a learning objective. I suggest that understanding a realistic notional machine is implicitly a goal of computational thinking. We want students to understand what a computer can do, what a human can do, and why that’s different. For example, a computer can easily compare two numbers, can compare two strings with only slightly more effort, and has to be provided with an algorithm (that is unlikely to work like the human eye) to compare two images. I’m saying “computer” here, but what I really mean is, “a notional machine.” Finding a route from one place to another is easy for Google Maps or my GPS, but it requires programming for a notional machine to be able to find a route along a graph. Counting the number of steps from the top of the tree to the furthest leaf is easy for us, but hard for novices to put in an algorithm. While it’s probably not important for everyone to learn that algorithm, it’s important for everyone to understand why we need algorithms like that — to understand that computers have different operations (notional machines) than people. If we want people to understand why we need algorithms, and why some things are harder for computers than humans, we want people to understand a notional machine.
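The tree example can be made concrete. Here is a minimal Python sketch of the algorithm a novice would have to articulate; the nested-tuple representation of a tree is my own choice, purely for illustration:

```python
# A tree as nested tuples: (value, [children]). This representation is
# just for illustration -- any tree structure would do.

def height(tree):
    """Count the steps from the root down to the furthest leaf."""
    value, children = tree
    if not children:          # a leaf: no steps left to take
        return 0
    # The computer cannot "see" the furthest leaf at a glance the way a
    # person eyeballing a diagram can; it must visit every subtree and
    # keep the maximum.
    return 1 + max(height(child) for child in children)

t = ("root", [("a", [("leaf1", []), ("leaf2", [("leaf3", [])])]),
              ("b", [])])
print(height(t))  # 3 steps to reach leaf3
```

The point is not the algorithm itself but what it reveals: the notional machine offers no "look at the whole tree" operation, so the program has to build one out of the operations it does have.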
Mental Models: A mental model is a personal representation of some aspect of the world. A mental model is executable ("runnable" in Don Norman's terms) and allows us to make predictions. When we turn on and off a switch, we predict that the light will go on and off. Because you were able to read that sentence and know what I meant, you have a mental model of a light which has a switch. You can predict how it works. A mental model is absolutely necessary to be able to debug a program: You have to have a working expectation of what the program was supposed to do, and how it was supposed to get there, so that you can compare what it's actually doing to that expectation.
So now I can offer a definition, based on Juha’s thesis:
To understand computing is to have a robust mental model of a notional machine.
My absolutely favorite part of Juha's thesis is his Chapter 5, where he describes what we know about how mental models are developed. I've already passed on the PDF of that chapter to my colleagues and students here at Georgia Tech. He found some fascinating literature about the stages of mental model development, about how mental models can go wrong (it's really hard to fix a flawed mental model!), and about the necessary pieces of a good mental model. De Kleer and Brown provide a description of mental models in terms of sub-models, and tell us what principles are necessary for "robust" mental models. The first and most important principle is this one (from Juha Sorva's thesis, page 55):
- The no-function-in-structure principle: the rules that specify the behavior of a system component are context free. That is, they are completely independent of how the overall system functions. For instance, the rules that describe how a switch in an electric circuit works must not refer, not even implicitly, to the function of the whole circuit. This is the most central of the principles that a robust model must follow.
When we think about a switch, we know that it opens and closes a circuit. A switch might turn on and off a light. That would be one function for the switch. A switch might turn on and off a fan. That’s another function for a switch. We know what a switch does, completely decontextualized from any particular role or function. Thus, a robust mental model of a notional machine means that you can talk about what a computer can do, completely apart from what a computer is doing in any particular role or function.
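The switch example can be sketched in code. This is my own illustration (the class and attribute names are hypothetical), showing a component whose rules never mention the function of the circuit it sits in:

```python
# The no-function-in-structure principle: the Switch's rules say nothing
# about what the circuit is for.

class Switch:
    def __init__(self):
        self.closed = False

    def toggle(self):
        # Context-free behavior: open or close the circuit, with no
        # reference, not even implicitly, to the whole circuit's function.
        self.closed = not self.closed

class Light:
    def __init__(self, switch):
        self.switch = switch

    @property
    def on(self):
        return self.switch.closed   # one possible function for a switch

class Fan:
    def __init__(self, switch):
        self.switch = switch

    @property
    def spinning(self):
        return self.switch.closed   # another function, same component

s1, s2 = Switch(), Switch()
light, fan = Light(s1), Fan(s2)
s1.toggle()
print(light.on, fan.spinning)  # True False
```

The same `Switch` works unchanged in both circuits, which is exactly what makes the model of the switch robust.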
A robust mental model of a notional machine thus includes an understanding of how an IF or WHILE or FOR statement works, or what happens when you call a method on an object in Java (including searching up the class hierarchy), or how types work, completely independently of any given program. If you don't know the pieces separately, you can't make predictions, or understand how they serve a particular function in a particular program.
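One of those context-free rules, searching up the class hierarchy, can be sketched briefly. The example is in Python rather than Java, and the classes are hypothetical, but the rule has the same shape:

```python
# Method lookup is a context-free rule: start at the object's class and
# walk up the hierarchy until a matching method is found, no matter what
# the classes are for.

class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):          # overrides Animal's version
        return "woof"

class Puppy(Dog):
    pass                      # defines no speak() of its own

# Lookup starts at Puppy, finds nothing there, and moves up to Dog.
print(Puppy().speak())   # woof
print(Animal().speak())  # ...
```

A student with a robust model can predict the output of this lookup for any class hierarchy, not just this one.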
It is completely okay to have a mental model that is incomplete. Most people who use scissors don’t think about them as levers, but if you know physics or mechanical engineering, you understand different sub-models that you can use to inform your mental model of how scissors work. You don’t even have to have a complete mental model of the notional machine of your language. If you don’t have to deal with casting to different types, then you don’t have to know it. Your mental model doesn’t have to encompass the notional machine. You just don’t want your mental model to be wrong. What you know should be right, because it’s so hard to change a mental model later.
These observations lead me to a pedagogical prediction:
Most people cannot develop a robust mental model of a notional machine without a language.
Absolutely, some people can understand what a computer can do without having a language given to them. Turing came up with his machine, without anyone telling him what the operations of the machine could do. But very few of us are Turings. For most people, having a name (or a diagram — visual notations are also languages) for an operation (or sub-model, in de Kleer and Brown's terms) makes it easier for us to talk about it, to reference it, to see it in the context of a given function (or program).
I’m talking about programming languages here in a very different way than how they normally enter into our conversation. In much of the computational thinking discussion, programming is yet another thing to learn. It’s a complexity, an additional challenge. Here, I’m talking about languages as a notation which makes it easier to understand computing, to achieve computational thinking. Maybe there isn’t yet a language that achieves these goals.
Here’s another pedagogical recommendation that Juha’s thesis has me thinking about:
We need to discuss both structure and function in our computing classes.
I suspect that most of the time when I describe "x = x + 1" in my classes, I say, "increment x." But that's the function. Structurally, that's an assignment statement. Do I make sure that I emphasize both aspects in my classes? They need both, and to have a robust mental model, they probably need the structure emphasized more than the function.
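One way to emphasize both aspects is to show students the same structure serving two different functions. A small sketch (the variable names are mine):

```python
# The same structure -- an assignment whose right-hand side reads the
# variable being assigned -- serves different functions in different
# programs.

# Function: counting
count = 0
for ch in "abc":
    count = count + 1   # structurally: assignment; functionally: increment

# Function: accumulating a sum
total = 0
for n in [1, 2, 3]:
    total = total + n   # same structure, different function

print(count, total)  # 3 6
```

Naming only the function ("increment") hides the structural rule that makes both loops predictable.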
We see that distinction between structure and function a lot in Juha's thesis. Juha not only does this amazing literature review, but he then does three studies of students using UUhistle. UUhistle works for many students, but Juha also explores when it didn't — which may be more interesting, from a research perspective. A common theme in his studies is that some students didn't really connect the visualization to the code. They talk about these "boxes" and do random walks poking at graphics, as he describes in one observation session (which I'm leaving unedited, because I enjoyed the honesty of Juha's transcripts).
What Juha describes isn’t unique to program visualization systems. I suspect that all of us have seen or heard something pretty similar to the above, but with text instead of graphics. Students do “random walks” of code all the time. Juha talks a good bit about how to help his students better understand how UUhistle graphical representations map to code and to the notional machine.
Juha gives us a conceptual language to think about this with. The boxes and “incomprehensible things” are structures that must be understood on their own terms, in order to develop robust mental models, and understood in terms of their function and role in a program. That’s a challenge for us as educators.
So here’s the full definition: Computing education research is about understanding how people develop robust models of notional machines, and how we can help them achieve those mental models.
Here’s a great piece to read when wondering about the questions, “Do scientists really all need to learn to program? Surely they’re not going to program are they? What would they do?” What they’ll do is patch together piece of other’s code, with lots of data transformation. What do they need to know? A robust mental model of how the modules work and what the data needs for each are. This is beyond computational thinking.
In my past 20 years as a programmer, I’ve seen the rise of object-oriented programming and ‘modularity’ is something that was hammered onto my forehead. These days, I organise my entire life as a computational biologist around little modules that I re-use in almost every workflow. Yes, sure, you may call me a one-trick pony, but in terms of productivity, call me plough horse.
My core modules are ACQUISITION, COMPUTATION, VISUALISATION, and usually I glue those together with a few lines of Perl or the Unix command line. Here come the constraints again: To overcome the limitations of the software that I’m often “misusing”, I use my own scripts to shove data from one format into the next, and back again. I think every biologist who deals with lots of data, not only us computational folk, should know a few handy lines to quickly turn comma-separated files into tab-delimited, strip a table of empty quotes or grep some essential info.
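The quoted one-liners would be in Perl or on the Unix command line, but the same reshaping can be sketched in a few lines of Python. The file names and sample data here are hypothetical:

```python
import csv

# A hypothetical sample file, just so the sketch runs end-to-end.
with open("data.csv", "w", newline="") as f:
    f.write('gene,count\n"BRCA1",42\n')

def csv_to_tsv(src, dst):
    """Turn a comma-separated file into a tab-delimited one."""
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        writer = csv.writer(fout, delimiter="\t")
        for row in csv.reader(fin):
            # csv.reader has already stripped the quoting for us.
            writer.writerow(row)

csv_to_tsv("data.csv", "data.tsv")
```

This is the "glue" role the quote describes: no algorithmic depth, just knowing that the data can be shoved from one format into another with a few lines of code.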
April 2012 is Mathematics Awareness Month. When I read the description of the theme for this year, "Mathematics, Statistics, and the Data Deluge," it sounds to me like it's just as much about computational thinking. Nobody is going to deal with big data by hand. Their view meshes pretty well with Jeannette Wing's definition, e.g., the emphasis on automated algorithms.
The American Mathematical Society, the American Statistical Association, the Mathematical Association of America, and the Society for Industrial and Applied Mathematics announce that the theme for Mathematics Awareness Month, April 2012, is Mathematics, Statistics, and the Data Deluge.
Massive amounts of data are collected every day, often from services we use regularly, but never think about. Scientific data comes in massive amounts from sensor networks, astronomical instruments, biometric devices, etc., and needs to be sorted out and understood. Personal data from our Google searches, our Facebook or Twitter activities, our credit card purchases, our travel habits, and so on, are being mined to provide information and insight. These data sets provide great opportunities, and pose dangers as well.
GasStationWithoutPumps did a blog piece on the newspaper articles that I mentioned earlier this week, and he pointed out something important that I missed. The Guardian’s John Naughton provided a really nice definition of computational thinking:
… computer science involves a new way of thinking about problem-solving: it’s called computational thinking, and it’s about understanding the difference between human and artificial intelligence, as well as about thinking recursively, being alert to the need for prevention, detection and protection against risks, using abstraction and decomposition when tackling large tasks, and deploying heuristic reasoning, iteration and search to discover solutions to complex problems.
I like this one. It’s more succinct than others that I’ve seen, and still does a good job of hitting the key points.
Naughton’s definition includes issues of cyber-security and risk. I don’t see that often in “Computational Thinking” definitions. I was reminded of a list that Greg Wilson generated recently in his Software Carpentry blog about what researchers need to know about programming the Web.
Here’s what (I think) I’ve figured out so far:
- People want to solve real problems with real tools.
- All we can teach people about server-side programming in a few hours is how to create security holes, even if we use modern frameworks.
- People must be able to debug what they build. If they can’t, they won’t be able to apply their knowledge to similar problems on their own.
Greg’s list surprised me, because it was the first time that I’d thought risk and cyber-security as critical to end-user programmers. Yes, cyber-security plays a prominent role in the CS:Principles framework (as part of Big Idea VI, on the Internet), but I’d thought of that (cynically, I admit) as being a nod to the software development firms who want everyone to be concerned about safe programming practices. Is it really key to understanding the role of computing in our everyday lives? Maybe — the risks and needs for security may be the necessary consequent of teaching end-users about the power and beauty of computing.
Greg’s last point is one that I’ve been thinking a lot about lately. I’ve agreed to serve on the review committee for Juha Sorva’s thesis, which focuses on his excellent program visualization tool, UUhistle. I’m enjoying Juha’s document very much, and I’m not even up to the technology part yet. He has terrific coverage of the existing literature in computing education research, cognitive science, and learning sciences, and the connections he draws between disparate areas is fascinating. One of the arguments that he’s making is that the ability to understand computing in a transferable way requires the development of a mental model — an executable understanding of how the pieces of a program fit together in order to achieve some function. For example, you can’t debug without a mental model of how the program works (to connect to Greg’s list). Juha’s dissertation is making the argument (implicitly, so far in my reading) that you can’t develop a mental model of computing without learning to program. You have to have a notation, some representation of the context-free executable pieces of the program, in order to recognize that these are decontextualized pieces that work in the same way in any program. A WHILE loop has the same structure and behavior, regardless of the context, regardless of the function that any particular WHILE loop plays in any particular program. Without the notation, you don’t have names or representations for the pieces that is necessary for transfer.
Juha is making an argument like Alan Perlis's argument in 1961: Perlis wasn't arguing that everyone needed to understand programming for its own sake. Rather, he felt that systems thinking was the critical need, and that the best way to get to systems thinking was through programming. The cognitive science literature that Juha is drawing on is saying something stronger: that we can't get to systems thinking (or computational thinking) without programming. I'll say more about Juha's thesis as I finish reviewing it.
It’s interesting that there are some similar threads about risk and cyber-security appearing in different definitions of computational thinking (Naughton and Wilson discussed here), and those thinking about how to teach computational thinking (Sorva and Perlis here) are suggesting that we need programming to get there.
The National Research Council just released a new report on Computational Thinking last week. Marcia Linn of Berkeley came to present the report to the NSF CE21 meeting last week. It’s on how to teach Computational Thinking. I saw Marcia Thursday night before she spoke, and she asked me how I defined CT. I declined to answer her (because last time I came up with one, the response was mostly how wrong I was), and asked her for her definition. She gave a nice one that involved relating computing to the problem domain context, but admitted that that was her definition. The committee couldn’t come to a consensus on a definition. I asked her if she thought computer scientists would agree with her definition. She said that she was able to convince the ones she found most difficult (because her definition included programming, and that was key to the computer scientists she worked with), and that was good enough for her.
There is lots of pressure to teach and assess computational thinking — for which we have too many definitions and too little consensus. Really hard to make progress on a goal if we don’t know what the goal is.
In 2008, the Computer and Information Science and Engineering Directorate of the National Science Foundation asked the National Research Council (NRC) to conduct two workshops to explore the nature of computational thinking and its cognitive and educational implications. The first workshop focused on the scope and nature of computational thinking and on articulating what "computational thinking for everyone" might mean. A report of that workshop was released in January 2010. Drawing in part on the proceedings of that workshop, Report of a Workshop on the Pedagogical Aspects of Computational Thinking summarizes the second workshop, which was held February 4-5, 2010, in Washington, D.C., and focuses on pedagogical considerations for computational thinking. This workshop was structured to gather pedagogical inputs and insights from educators who have addressed computational thinking in their work with K-12 teachers and students. It illuminates different approaches to computational thinking and explores lessons learned and best practices. Individuals with a broad range of perspectives contributed to this report. Since the workshop was not intended to result in a consensus regarding the scope and nature of computational thinking, Report of a Workshop on the Pedagogical Aspects of Computational Thinking does not contain findings or recommendations.
Still trying to dig out from under the grading pile — it’s finals week here at Georgia Tech, and grades are due Monday at noon. My TA for Media Computation data structures had to leave the semester a couple weeks early, so I just finished catching up on all the grading (programming homework, quizzes, and final exam) for that class yesterday. I also have 40 students in a Senior Design class, so I’m deep into reviewing project documentation, design diagrams, and personal reflections on their process.
I’ve had a theme arise from both classes in the last couple days that is worth mentioning here.
Theme: I got a lovely note from one of my MediaComp DS students reflecting on his time in the class. (As a teacher, it’s an enormous boost to get one of these — even when critical, it affirms your job as a teacher: “Someone was listening!”) Against the recommendations of his advisors, he took my class and the follow-up intro to Java course concurrently, which means that he only gets elective credit for my course. But it gave him the opportunity to compare the two courses, which is pretty interesting for me. Besides these two CS courses, he was taking a course in combinatorics. He saw my course as the “glue” which combined the ideas of the three courses.
The concepts you introduced formed essential links with material from my other classes to illustrate the harmony of what I considered three more or less independent studies (for a long time I considered <MediaComp DS class> and <intro-to-Java> very different other than their shared use of Java, with one being the “general programming class” and the other being the “media and simulation programming class”).
What I found most intriguing was that he saw the MediaComp DS course as being the more “theoretical” course. Of course, any data structures course deals with theory issues more than a simple introduction to programming. But because this course included simulation, we also dug into probability distributions and continuous/discrete-event issues which connected to combinatorics and statistics in interesting ways. In a real sense, that made the MediaComp DS course harder than the introduction to Java course.
Recapitulation: One of my Senior Design teams refactored some code for our Physics department. Physics at Georgia Tech uses VPython in several labs. The physicists found that some of the code that the students had to write (to simulate a falling object, to graph data, etc.) was clumsy and had students struggling with parameterization issues.
My Senior Computational Media students, well-versed in HCI as they are, wanted to create a GUI for a Physics simulation. The Physics teachers (to their credit, in my opinion!) insisted on having their students write code. They explicitly wanted their Physics students to deal with "computational thinking" (their term, which may mean something different than it does to others). So, the team created a nice set of objects, rather than the umpteen functions that students had to use previously. The Physics teachers are thrilled — the team did a very good job. But in their reflections, my Seniors are still complaining that they'd prefer to have built GUIs. "It would have been easier on the Physics students."
I agree, a GUI-based simulation would have been easier on the Physics students. The students also would have learned less. They would have had less flexibility. The Physics teachers wanted the interface to VPython to be usable — to be understandable and to focus on the Physics and on the representational issues (e.g., how do you want to represent a vector to be useful?). While code is harder than a GUI, the Physics teachers felt that it helped achieve their learning goals better. It's not always about making things easier.
I met with Jeannette Wing yesterday, and we discussed the need for a good, authoritative definition of computational thinking. I told her about the CE21 Community Meeting where I saw K-12 evaluators looking for a definition that they could use to develop an assessment of computational thinking at the middle school level. Some of these evaluators were using the CS:Principles materials which made me uncomfortable — we designed those principles and practices to reflect what we saw as the core of computer science and as being appropriate for an advanced placement course. We didn’t write these to be a guide to what middle school students need to know about how to think about and with computing.
She gave me a copy of the most recent The Link, a CMU publication, in which she has an article, "Computational Thinking — What and Why?" She offers a definition and a rationale for the definition, taken from a work-in-progress paper by Jan Cuny, Larry Snyder, and Jeannette, "Demystifying Computational Thinking for Non-Computer Scientists." She gave me permission to blog on the definition and the rationale.
Computational thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can effectively be carried out by an information-processing agent.
The article goes on to expand on this definition and offer examples. She says, “Computational thinking overlaps with logical thinking and systems thinking. It includes algorithmic thinking and parallel thinking, which in turn engage other kinds of thought processes, such as compositional reasoning, pattern matching, procedure thinking, and recursive thinking.” Jeannette pointed to a section of the paper on “Benefits of Computational Thinking” as being key:
Computational thinking enables you to bend computation to your needs. It is becoming the new literacy of the 21st century. Why should everyone learn a little computational thinking? Cuny, Snyder and I advocate these benefits [CunySnyderWing10]:
Computational thinking for everyone means being able to:
- Understand which aspects of a problem are amenable to computation,
- Evaluate the match between computational tools and techniques and a problem,
- Understand the limitations and power of computational tools and techniques,
- Apply or adapt a computational tool or technique to a new use,
- Recognize an opportunity to use computation in a new way, and
- Apply computational strategies such as divide and conquer in any domain.
Computational thinking for scientists, engineers, and other professionals further means being able to:
- Apply new computational methods to their problems,
- Reformulate problems to be amenable to computational strategies,
- Discover new science through analysis of large data,
- Ask new questions that were not thought of or dared to ask because of scale, but which are easily addressed computationally, and
- Explain problems and solutions in computational terms.
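One of the strategies named in the first list, divide and conquer, can be made concrete with a standard example. My choice of merge sort here is an illustration, not anything from the Cuny, Snyder, and Wing paper:

```python
# Divide and conquer: split the problem, solve the halves, combine.

def merge_sort(items):
    if len(items) <= 1:               # small enough to solve directly
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide ...
    right = merge_sort(items[mid:])
    merged = []                       # ... then combine the solved halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9]))  # [1, 2, 5, 8, 9]
```

The strategy transfers beyond code, which is the point of the list: split a big task, handle the pieces, and put the results back together.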
This definition is still pretty high-level, but it's much better than having no definition. It's a broad definition that encompasses a lot of powerful cognitive skills. We can move away from trying to draw lines between what is and what isn't computational thinking, and instead focus on implications. What parts of this are appropriate to see at the middle school level? How do we teach these abilities? How would we measure them?
A colleague of mine sent me a link to the iConference 2011 website, suggesting that I should consider attending and submitting papers to future instantiations. It looks like an interesting conference, with lots of research in human-computer interaction and computer-supported collaborative work. There was very little about learning. There was a session on Scratch, focused on “end-user programming,” not on learning about computing.
I started to wonder: Have human-computer interaction research and computational thinking become ideological opposites? By "computational thinking" I mean "that knowledge about computing that goes beyond application use and that is useful in any discipline." Or as Jeannette Wing described it, "Computational thinking builds on the power and limits of computing processes, whether they are executed by a human or by a machine." Notice that she points out the limits. Limits suggest things that the computer can't do, and if you're going to think about them, you have to be aware of them. They must be visible to you. If computational thinking involves, for example, understanding the power and limits of digital representations, and how those serve as metaphors in thinking about other problems, then those representations have to be visible.
Let’s contrast that with Don Norman’s call for the Invisible Computer. Or Mark Weiser’s call for the “highest ideal is to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it.” Or any number of user-interface design books that tell us that the goal of user-centered design is for the user to focus on the task and make the computer become “invisible.”
Michael Mateas has talked about this in his discussion of a published dialog between Alan Perlis and Peter Elias. Elias claims, like Norman and Weiser, that one day “undergraduates will face the console with such a natural keyboard and such a natural language that there will be very little left, if anything, to the teaching of programming.” Michael responds, “The problem with this vision is that programming is really about describing processes, describing complex flows of cause and effect, and given that it takes work to describe processes, programming will always involve work, never achieving this frictionless ideal.”
The invisible-computer goal (that not all in HCI share, but I think it’s the predominant goal) aims to create a task-oriented interface for anything that a human will want to do with a computer. No matter what the task, the ad promises: “There’s an app for that!” Is that even possible? Can we really make invisible all the seams between tasks and digital representations of those tasks? Computational thinking is about engaging with what the computer can and cannot do, and explicitly thinking about it.
Computing education may be even more an ideological foe of this HCI design goal. Computing education is explicitly assuming that we can’t create an app for everything that we want to do, that some people (all professionals, in the extreme version that I subscribe to) need to know how to think about the computer in its own terms, in order to use it in new, innovative ways and (at least) to create those apps for others. It’s not clear who builds the apps in the invisible-computer world (because they would certainly need computing education), but whoever they are, they’re invisible, too.
I used to think that computing education was the far end of a continuum that started with HCI design. At some point, you can’t design away the computer; it has to become visible, and then you have to learn about it. After reviewing the iConference program, I suspect that HCI designers who believe in the invisible computer aim for that point never to arrive. All possible tasks are covered by apps. Computing education should never be necessary except for an invisible few. Computational thinking is unnecessary, because all limitations can be made invisible.
Here’s a prediction: We won’t see a panel on “Computational Thinking” at CHI, CSCW, or iConference any time soon.
Readers of this blog may recall that Greg Wilson has been developing a course he calls Software Carpentry, providing the computing knowledge that computational scientists and engineers will need. He just concluded his course with a summary of seven principles of computational thinking, based on Jon Udell’s seven principles of the Web. Yet another take, to contrast with the CS:Principles work.
Hello, and welcome to the final episode of Software Carpentry. We’re going to wrap up the course by looking at a few key ideas that underpin everything else we’ve done. We have left them to the end because like most big ideas, they don’t make sense until you have seen the examples that they are generalizations of.
Our seven principles are:
- It’s all just data.
- Data doesn’t mean anything on its own—it has to be interpreted.
- Programming is about creating and composing abstractions.
- Models are for computers, and views are for people.
- Paranoia makes us productive.
- Better algorithms are better than better hardware.
- The tool shapes the hand.
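The first two principles can be made concrete in a few lines of Python. The byte values below are my own illustration, not an example from the course: the same four bytes are a large integer, the number 42.0, or a scrap of text, depending entirely on how we choose to interpret them.

```python
import struct

# The same four bytes...
raw = b"\x42\x28\x00\x00"

# ...read as a 32-bit big-endian signed integer:
as_int = struct.unpack(">i", raw)[0]

# ...read as a 32-bit big-endian IEEE 754 float:
as_float = struct.unpack(">f", raw)[0]

# ...read as Latin-1 text (a 'B', a '(', and two NUL characters):
as_text = raw.decode("latin-1")

print(as_int)    # 1109917696
print(as_float)  # 42.0
print(repr(as_text))
```

The data never changed; only the interpretation did. That is the sense in which it is “all just data,” and in which data means nothing on its own.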
CMU has quite a star-studded CS education event going on today — http://www.cs.cmu.edu/csed/. Jan Cuny of NSF’s BPC and CE21 programs is the keynote speaker, and the day includes themes of Alice, Running on Empty, Computational Thinking (from Jeannette Wing), and the Pittsburgh Science of Learning Center. Links to lots of good resources on the page.
Pretty interesting: Google is offering a definition of computational thinking, and a set of resources created by Google engineers working with teachers.
Google is committed to promoting computational thinking throughout the K-12 curriculum to support student learning and expose everyone to this 21st century skill.
What is Computational Thinking? Computational thinking (CT) involves a set of problem-solving skills and techniques that software engineers use to write programs that underlie the computer applications you use such as search, email, and maps. Below is a list of specific techniques along with real-world examples from our everyday lives.
Barb is on a panel on Computational Thinking at the Scratch Conference this afternoon. We’re as confused as anyone else what “Computational Thinking” means, but I’ve had a couple of recent experiences that have highlighted for me one aspect of computational thinking, something which I feel is really computational thinking — not computer science, not algorithmic thinking, not mathematical thinking.
Story 1: It’s in my foot and my iPod. I like to run. Running doesn’t always like me. I had knee surgery for a torn meniscus three years ago. The last two Peachtree 10K runs, I’ve been registered, but have torn a muscle (calf one year, hamstring the next) within weeks of the race and missed it, forced to take 6-8 weeks off. I have now been running since January without injury.
So I was really bummed when I went out to run five miles a couple of weeks ago and, after two miles, had the all-too-familiar feeling of my hamstring straining. It didn’t pull or tear, but I turned around, and ended up walking more than a mile of the two miles back. I took a couple of days off, ran short runs and worked out on the elliptical, and within a week, was running 3 miles again without pain. What happened? And how could I avoid it in the future?
I have been running with Nike+ iPod since Christmas. A fob goes on my iPod, and a sensor goes in my shoe. When I next synched my iPod to the Nike+ website, I decided to look at my last few weeks worth of runs. Here’s what I saw, with my scribbled red arrow on the run where I was hurting:
When I look at this graph, I see lots of runs averaging around 3 miles (particularly when I was traveling a lot), then one run above 4 miles, then a handful of shorter runs, then within two days, my two longest runs of the summer (4.71 and 4.75 miles). Two days later (with elliptical and weight work in between — no rest days), I tried to go even further at five miles. Well, duh! I think I pushed myself too quickly. It’s one thing to increase my distance. It’s another to do it day after day, without shorter runs or rests. I have to ramp up slowly, not increase my longest distance every day. That helps to explain my Peachtree training injuries too, as I tried to ramp up to the 10K distance too quickly.
I showed this graph to Barb and told her my reasoning. She said, “That’s computational thinking!” I think she’s right.
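The reasoning I applied by eye can even be mechanized. Here is a sketch in Python; the mileage numbers only echo the shape of my graph (they are not the actual Nike+ data), and the 10% threshold is a common training rule of thumb, not something the device computes.

```python
def flag_big_jumps(runs, threshold=1.10):
    """Return indices of runs that beat the previous longest run
    by more than the threshold (default: a 10% jump)."""
    flagged = []
    longest = runs[0]
    for i, miles in enumerate(runs[1:], start=1):
        if miles > longest * threshold:
            flagged.append(i)
        longest = max(longest, miles)
    return flagged

# Made-up run log, in miles, in chronological order.
runs = [3.0, 3.1, 2.9, 4.2, 3.0, 2.8, 4.71, 4.75, 5.0]
print(flag_big_jumps(runs))  # runs 3 and 6 jumped too far past the old record
```

The point isn’t the particular rule; it’s that once the runs are data, a question like “did I ramp up too fast?” becomes something I can compute an answer to.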
Story 2: It’s not blowing in the wind. Atlanta, like most of the country, has been wicked hot this summer. The newspapers are using phrases like, “Hottest summer in 30 years!” Is it really? I was wondering, and I was wondering how I could find out.
I went to WolframAlpha and typed in “temp in Atlanta.” From the “History” pull-down menu, I selected “Last 5 years” and saw this graph.
This graph doesn’t convince me that this is the hottest summer in even the last five years. It looks like pretty much the same curve and the same maximums across all five years. Then I clicked the “More” link, and got these graphs:
The humidity is high, but it looks higher back in 2007. But look at that last one, the wind speed. To my novice eye, that looks like the lowest wind speeds in the last five years. Maybe that’s why it feels so hot? It’s not the temperature, it’s the lack of wind? And maybe that is related to the high humidity?
Computational Thinking includes Going to the Data. The common theme in both of these stories is “Go to the data.” In both cases, I realized that data had been collected that could help me answer my question. This is computational thinking. In our modern world, lots of data are being collected (or can easily be collected), stored in the cloud, and made available for processing. Dealing with these data is different from mathematical thinking (though the interpretation of the visualizations is clearly mathematics), algorithmic thinking, and notably for me, computer science. None of the above looks like computer science to me. However, it feels important for 21st century citizens to be able to do this kind of reasoning.
Some of the questions that I had to answer for myself here, and that we’d want students to be able to answer for themselves, include:
- Are the data already collected that help me to answer my question? Or can I easily collect the data? WolframAlpha has an amazing collection of data already available to crunch. My run data comes for free after I put the sensor in my shoe and run with my iPod — which I like to do anyway. (FYI, my current top running accompaniments are tunes by John Prine, Pink, Coldplay, and the soundtrack to “Mamma Mia!”)
- Is the data what I want? Do I trust the data?
- What can I access in the data?
- How can I query and visualize the data?
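Once those questions are answered, the mechanics of accessing, querying, and visualizing can be quite lightweight. Here is a minimal Python sketch; the CSV content is invented for illustration, standing in for the kind of export you might get from a service like Nike+ or WolframAlpha.

```python
import csv
import io

# Invented data, standing in for a real export from a data service.
raw = """date,miles
2010-07-01,3.0
2010-07-08,4.2
2010-07-10,4.71
2010-07-12,4.75
"""

# Access: parse the export into records we can compute with.
rows = list(csv.DictReader(io.StringIO(raw)))

# Query: pick out the runs longer than four miles.
long_runs = [r for r in rows if float(r["miles"]) > 4.0]
print(len(long_runs), "runs over four miles")

# Visualize: even a crude text bar chart can reveal a ramp-up.
for r in rows:
    print(r["date"], "#" * round(float(r["miles"])))
```

None of this requires deep computer science; the skill is in knowing that the data exist, getting them into a computable form, and asking them a question.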
Of course, next comes the interpretation of the data, the inference and development of hypotheses. I’m not saying that that’s computational thinking. Understanding a model requires knowledge of science and mathematics. But it really is uniquely computational thinking to find (or create) these collections of data and use them to create the visualizations that allow us to apply our knowledge of science and mathematics to answer our questions.