Posts tagged ‘computational thinking’
I’m leaving May 24 for a two-week trip to Germany. Both one-week parts are interesting and worth talking about here. I’ve also been reflecting on the piece in between, and how it relates to computing education themes.
I’m attending a seminar at Schloss Dagstuhl on Human-Centric Development of Software Tools (see seminar page here). Two of the seminar leaders are Shriram Krishnamurthi of Bootstrap fame, who is a frequent visitor and even a guest blogger here (see post here), and Andy Ko, whose seminal work with Michael Lee on Gidget has been mentioned here several times (for example here). I’ve only been to Dagstuhl once before, at the live-coding seminar (see description here), which was fantastic and has influenced my thinking literally years later. The seminar next week puts me in the same relative-outsider role that I played at the live-coding seminar. Most of the researchers coming to this event are programming language and software engineering researchers. Only a handful of us are social scientists or education researchers.
The Dagstuhl seminar ends Thursday after lunch. Saturday night, I’m to meet up with a group in Oldenburg, Germany, and then head up Sunday to Stadland (near the North Sea) for a workshop where I will be advising STEM Education PhD students. I don’t have a web link to the workshop, but I do have a page about the program I’ll be participating in — see here. My only contact there is Ira Diethelm, whom I’ve met several times and saw most recently at WIPSCE 2014 in Berlin (see trip report here). I really don’t know what to expect. Through the ICER DC and WIPSCE, I’ve been impressed by the Computing Education PhD students I’ve met in Germany, so I look forward to an interesting time. I come back home on Friday, June 5, from Bremen.
There’s a couple-day gap between the two events, from Thursday noon to Saturday evening. I got a bunch of advice on what to do on holiday. Shriram gave me the excellent advice of taking a boat cruise partway north, stopping at cities along the way, and then finishing up with a train on Saturday. Others suggested that I go to Cologne, Bremen, Luxembourg, or even Brussels.
I’ve decided to take a taxi from Dagstuhl to Trier, tour around there for a couple of days, then take a seven-hour train ride north on Saturday. Trier looks really interesting (see its TripAdvisor page), though probably not as cool as a boat ride.
Why did I take the safer route?
The science writer Kayt Sukel was once a student of mine at Georgia Tech — we even have a publication together. I am so pleased to see the attention she’s received for her book Dirty Minds/This is Your Brain on Sex. She has a new book coming out on risk, and that’s had me thinking more about the role of risk in computing education.
In my research group, we often refer to Eccles’ model of academic achievement and decision making (1983), pictured below. It describes how students’ academic decisions involve issues like gender roles and stereotypes (e.g., do people who are like me do this?), expectation of success (e.g., can I succeed at this?), and the utility function (e.g., will this academic choice be fun? useful? money-making?). It’s a powerful model for thinking about why women and under-represented minorities don’t take computer science.
Eccles’ model doesn’t say much about risk. What happens if I don’t succeed? What do I need to do to reduce risk? How will I manage if I fail? How much am I willing to suffer/pay for reduced risk?
That’s certainly playing into my thinking about my in-between days in Germany. I don’t speak German. If I get into trouble in those in-between days, I know nobody I could call for help. I still have another week of a workshop, with a keynote presentation, after my couple-day break. I’ve already booked a hotel in Trier. I plan on walking around and taking pictures, and then I will take a train (which I’ve already booked, with Shriram’s help) to Oldenburg on Saturday. A boat ride with hops into cities sounds terrific, but it’s more difficult to plan, with many more opportunities for error (e.g., lost luggage, pickpockets). That’s managing risk for me.
I hear issues of risk coming into students’ decision-making processes all the time, combined with the other factors included in Eccles’ model. My daughter is pursuing pre-med studies. She’s thinking like many other pre-med students: “What undergrad degree do I get now that will be useful even if I don’t get into med school?” She tried computer science for one semester, as Jeannette Wing recommended in her famous article on Computational Thinking: “One can major in computer science and go on to a career in medicine, law, business, politics, any type of science or engineering, and even the arts.” CS would clearly be a good fallback undergraduate degree. She was well-prepared for CS — she had passed the AP CS exam in high school, and was top of her engineering CS1-in-MATLAB class. After one semester in CS for CS majors, my daughter hated it, especially the intense focus on enforced software development practices (e.g., losing points on homework for indenting with tabs rather than spaces) and the arrogant undergraduate teaching assistants. (She used more descriptive language.) Her class was particularly unfriendly to women and members of under-represented groups (a story I told here). She now rejects the CS classroom culture, the “defensive climate” (re: Barker and Garvin-Doxas). She never wants to take another CS course. The value of a CS degree in reducing risks on a pre-med path does not outweigh the costs of CS classes for her. She’s now pursuing psychology, which has a different risk/benefit calculation (i.e., a psychology undergraduate degree is not as valuable in the marketplace as a CS undergraduate degree), but has reduced costs compared to CS or biology.
Risk is certainly a factor when students are considering computer science. Students have expectations about potential costs, potential benefits, and about what could go wrong. I read it in my students’ comments after the Media Computation course. “The course was not what I expected! I was expecting it to be much harder.” “I took a light load this semester so that I’d be ready for this.” Sometimes, I’m quite sure, the risk calculation comes out against us, and we never see those students.
The blog will keep going while I’m gone — we’re queued up for weeks. I may not be able to respond much to comments in the meantime, though.
Here’s an interesting project that could really get at generalizable “computational thinking” skills:
Wilkerson-Jerde’s research project will explore how young people think and learn about data visualization from the perspective of a conceptual toolkit. Her goals for “DataSketch: Exploring Computational Data Visualization in the Middle Grades” are to understand the knowledge and skills students bring together to make sense of novel data visualizations, and to design tools and activities that support students’ development of critical, flexible data visualization competence.
“Usually when we think of data visualization in school, we think of histograms or line graphs. But in contemporary science and media, people rely on novel, interactive visualizations that tell unique stories using data,” she explains.
The Computing At Schools effort has a regular newsletter, SwitchedOn. It’s packed full of useful information for computer science teachers, and is high-quality (in both content and design). The latest issue is on Computational Thinking and includes mentions of Media Computation and Pixel Spreadsheet, which was really exciting for me.
Download the latest issue of our newsletter here. The newsletter is produced once a term and is packed with articles and ideas for teaching computer science in the classroom.
This issue takes a look at the idea of Computational Thinking. Computational thinking is something children do, not computers. Indeed, many activities that develop computational thought don’t need a computer at all. This influential term helps stress the educational processes we are engaged in. Developing learning and thinking skills lies behind our view that all children need exposure to such ideas. There is something of interest to all CAS members and the wider teaching community: resources and ideas shared by teachers, both primary and secondary. There is also a section on the Network of Excellence for those new to CAS who aren’t familiar with current developments.
Shuchi Grover and Roy Pea (Stanford) have a review of the field of computational thinking in K-12 schools in this month’s Educational Researcher. It’s a very nice paper. I’m excited that the paper is published where it is! Educational Researcher is the main publication venue for the largest education research organization in the United States (American Educational Research Association). Roy has been doing work in computing education for a very long time (e.g., “On the prerequisites of learning computer programming,” 1983, Pea and Kurland). This is computational thinking hitting the education mainstream.
Jeannette Wing’s influential article on computational thinking 6 years ago argued for adding this new competency to every child’s analytical ability as a vital ingredient of science, technology, engineering, and mathematics (STEM) learning. What is computational thinking? Why did this article resonate with so many and serve as a rallying cry for educators, education researchers, and policy makers? How have they interpreted Wing’s definition, and what advances have been made since Wing’s article was published? This article frames the current state of discourse on computational thinking in K–12 education by examining mostly recently published academic literature that uses Wing’s article as a springboard, identifies gaps in research, and articulates priorities for future inquiries.
This is a fascinating essay. Some of it goes too far for me (e.g., that code “produces new forms of algorithmic identity”), but the section I quote below is making a deep comment relative to the arguments we’ve been making here about “computing for everyone.”
Why should everyone know about computing? I’ve argued about the value of computational literacy as literacy — a way of expressing and notating thought. I’ve also argued about the value of computer science as science — insight into how the world we inhabit works. This part of the essay is saying something more generative — that code provides metaphors for the way we think about the world, so not knowing about code thus limits one’s ability to understand modern culture and science. The idea is akin to computational thinking, but more about cultural practices than cognitive processes.
Code is the language of computation; it instructs software how to act. But as the instructions written down in code travel out into the world, organized in the algorithmic procedures that make up software, it also has a significant influence on everything it touches. The result is a profound softwarization of society as software has begun to mediate our everyday ways of thinking and doing.
For example, software and its constituent codes and algorithms have become a metaphor for the mind, for ideology, for biology, for government, and for the economy, and with the rapid proliferation of software as an interface to the world, code has been seemingly naturalized in collective life. Computer code has also been described as a kind of law, or the set of rules and constitutional values that regulate the web. The idea that code is law suggests that choices about how to code the web will define the controls and freedoms that are built or programmed into it.
These ways of looking at code demonstrate that code is much more than a language for instructing computing machines. Instead, we need to understand code as a system of thought that spills out of the domain of computation to transform and reconfigure the world it inhabits.
An interesting piece, which argues that proficiency with computing is an important part of a modern liberal arts education. The argument is a modern and updated version of the one that Alan Perlis made back in 1961. The specific computing literacies being described go beyond computational thinking — it’s explicitly about being able to make with computing. Steve Jobs made a similar, famous claim that computer science is a liberal art.
Students who graduate with a degree in liberal arts should understand the basic canon of our civilization as well as their place in the world, sure, but they also need to understand how to explore and communicate their ideas through visual communication, data manipulation, and even making a website or native mobile app. If they can’t, they’ll just understand the global context of their own unemployment.
In the About page for this blog, I wrote, “Computing Education Research is about how people come to understand computing, and how we can facilitate that understanding.” Juha Sorva’s dissertation (now available!) helped me come to an understanding of what it means to “understand computing.” I describe a fairly technical (in terms of cognitive and learning sciences) definition, which basically is Juha’s. I end with some concrete pedagogical recommendations that are implied by this definition.
A Notional Machine: Benedict du Boulay wrote in the 1980s about a “notional machine,” that is, an abstraction of the computer that one can use for thinking about what a computer can and will do. Juha writes:
Du Boulay was probably the first to use the term notional machine for “the general properties of the machine that one is learning to control” as one learns programming. A notional machine is an idealized computer “whose properties are implied by the constructs in the programming language employed” but which can also be made explicit in teaching (du Boulay et al., 1981; du Boulay, 1986).
The notional machine is how to think about what the computer is doing. It doesn’t have to be about the CPU at all. Lisp and Smalltalk each have small, well-defined notional machines — there is a specific definition of what happens when the program executes, in terms of application of S-expressions (Lisp) and in terms of message sending to instances of classes (Smalltalk). C has a different notional machine, which isn’t at all like Lisp’s or Smalltalk’s. C’s notional machine is closer to the notional machine of the CPU itself, but is still a step above it (e.g., there are no assignment statements or types in assembly language). Java has a complicated notional machine that involves both object-oriented semantics and bit-level semantics.
A notional machine is not a mental representation. Rather, it’s a learning objective. I suggest that understanding a realistic notional machine is implicitly a goal of computational thinking. We want students to understand what a computer can do, what a human can do, and why that’s different. For example, a computer can easily compare two numbers, can compare two strings with only slightly more effort, and has to be provided with an algorithm (that is unlikely to work like the human eye) to compare two images. I’m saying “computer” here, but what I really mean is, “a notional machine.” Finding a route from one place to another is easy for Google Maps or my GPS, but it requires programming for a notional machine to be able to find a route along a graph. Counting the number of steps from the top of the tree to the furthest leaf is easy for us, but hard for novices to put in an algorithm. While it’s probably not important for everyone to learn that algorithm, it’s important for everyone to understand why we need algorithms like that — to understand that computers have different operations (notional machines) than people. If we want people to understand why we need algorithms, and why some things are harder for computers than humans, we want people to understand a notional machine.
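The tree example can be made concrete. Here is a minimal sketch (the `Node` class and names are mine, for illustration only) of why counting the steps to the furthest leaf, trivial for the human eye, requires an explicit recursive algorithm on a notional machine:

```python
# A minimal tree node: the structure the notional machine operates on.
class Node:
    def __init__(self, children=None):
        self.children = children or []

def depth(node):
    """Number of steps from this node down to its furthest leaf."""
    if not node.children:     # a leaf: no further steps to take
        return 0
    # The machine must examine every subtree; nothing "just sees" the answer.
    return 1 + max(depth(child) for child in node.children)

# A small tree: root -> (leaf, internal -> (leaf, leaf))
tree = Node([Node(), Node([Node(), Node()])])
print(depth(tree))  # 2
```

A person glances at the drawing of the tree and answers immediately; the notional machine has to walk every branch, which is exactly the difference the paragraph above is pointing at.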
Mental Models: A mental model is a personal representation of some aspect of the world. A mental model is executable (“runnable” in Don Norman’s terms) and allows us to make predictions. When we turn on and off a switch, we predict that the light will go on and off. Because you were able to read that sentence and know what I meant, you have a mental model of a light which has a switch. You can predict how it works. A mental model is absolutely necessary to be able to debug a program: you have to have a working expectation of what the program was supposed to do, and how it was supposed to get there, so that you can compare what it’s actually doing to that expectation.
So now I can offer a definition, based on Juha’s thesis:
To understand computing is to have a robust mental model of a notional machine.
My absolute favorite part of Juha’s thesis is his Chapter 5, where he describes what we know about how mental models are developed. I’ve already passed on the PDF of that chapter to my colleagues and students here at Georgia Tech. He found some fascinating literature about the stages of mental model development, about how mental models can go wrong (it’s really hard to fix a flawed mental model!), and about the necessary pieces of a good mental model. de Kleer and Brown provide a description of mental models in terms of sub-models, and tell us what principles are necessary for “robust” mental models. The first and most important principle is this one (from Juha Sorva’s thesis, page 55):
- The no-function-in-structure principle: the rules that specify the behavior of a system component are context free. That is, they are completely independent of how the overall system functions. For instance, the rules that describe how a switch in an electric circuit works must not refer, not even implicitly, to the function of the whole circuit. This is the most central of the principles that a robust model must follow.
When we think about a switch, we know that it opens and closes a circuit. A switch might turn on and off a light. That would be one function for the switch. A switch might turn on and off a fan. That’s another function for a switch. We know what a switch does, completely decontextualized from any particular role or function. Thus, a robust mental model of a notional machine means that you can talk about what a computer can do, completely apart from what a computer is doing in any particular role or function.
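The switch example can be sketched in code. In this illustration (the class names are mine, not from de Kleer and Brown or Juha’s thesis), the rules of the `Switch` never mention what it controls — the function lives entirely outside it, which is the no-function-in-structure principle:

```python
# Sketch of the no-function-in-structure principle: the Switch's rules
# are context free and never refer to the circuit it sits in.
class Switch:
    def __init__(self):
        self.closed = False

    def toggle(self):
        self.closed = not self.closed

class Device:
    """Anything powered through a switch: a light, a fan, ..."""
    def __init__(self, switch):
        self.switch = switch

    @property
    def on(self):
        # The function ("the light is lit") lives here, not in Switch.
        return self.switch.closed

light = Device(Switch())
fan = Device(Switch())
light.switch.toggle()
print(light.on, fan.on)  # True False
```

The same `Switch` class serves the light and the fan unchanged; a robust model of the switch is a model of `toggle` and `closed`, not of lighting rooms.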
A robust mental model of a notional machine thus includes an understanding of how an IF or WHILE or FOR statement works, or what happens when you call a method on an object in Java (including searching up the class hierarchy), or how types work — completely independently of any given program. If you don’t know the pieces separately, you can’t make predictions, or understand how they serve a particular function in a particular program.
It is completely okay to have a mental model that is incomplete. Most people who use scissors don’t think about them as levers, but if you know physics or mechanical engineering, you understand different sub-models that you can use to inform your mental model of how scissors work. You don’t even have to have a complete mental model of the notional machine of your language. If you don’t have to deal with casting to different types, then you don’t have to know it. Your mental model doesn’t have to encompass the notional machine. You just don’t want your mental model to be wrong. What you know should be right, because it’s so hard to change a mental model later.
These observations lead me to a pedagogical prediction:
Most people cannot develop a robust mental model of a notional machine without a language.
Absolutely, some people can understand what a computer can do without having a language given to them. Turing came up with his machine, without anyone telling him what the operations of the machine could do. But very few of us are Turings. For most people, having a name (or a diagram — visual notations are also languages) for an operation (or sub-model, in de Kleer and Brown terms) makes it easier for us to talk about it, to reference it, to see it in the context of a given function (or program).
I’m talking about programming languages here in a very different way than how they normally enter into our conversation. In much of the computational thinking discussion, programming is yet another thing to learn. It’s a complexity, an additional challenge. Here, I’m talking about languages as a notation which makes it easier to understand computing, to achieve computational thinking. Maybe there isn’t yet a language that achieves these goals.
Here’s another pedagogical recommendation that Juha’s thesis has me thinking about:
We need to discuss both structure and function in our computing classes.
I suspect that most of the time when I describe “x = x + 1” in my classes, I say, “increment x.” But that’s the function. Structurally, it’s an assignment statement. Do I make sure that I emphasize both aspects in my classes? Students need both, and to have a robust mental model, they probably need the structure emphasized more than the function.
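The point is easy to demonstrate: one structure, many functions. In this sketch (the examples are mine), every line below is structurally the same thing — an assignment whose right-hand side reads the variable’s old value — but each serves a different function:

```python
# Structure: assignment where the right side reads the old value.
# Function: varies with context.

x = 0
x = x + 1                    # function: "increment x"

total = 0
for price in [3, 5, 7]:
    total = total + price    # same structure; function: "accumulate a sum"

s = "ab"
s = s + s                    # same structure; function: "double the string"

print(x, total, s)  # 1 15 abab
```

A student who only knows the function “increment” can’t predict what the second and third assignments do; a student who knows the structure, evaluate the right side with the old value, then store, can run all three in their head.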
We see that distinction between structure and function a lot in Juha’s thesis. Juha not only does this amazing literature review, but he then does three studies of students using UUhistle. UUhistle works for many students, but Juha also explores when it didn’t — which may be more interesting, from a research perspective. A common theme in his studies is that some students didn’t really connect the visualization to the code. They talk about these “boxes” and do random walks poking at graphics. As he describes in one observation session (which I’m leaving unedited, because I enjoyed the honesty of Juha’s transcripts):
What Juha describes isn’t unique to program visualization systems. I suspect that all of us have seen or heard something pretty similar to the above, but with text instead of graphics. Students do “random walks” of code all the time. Juha talks a good bit about how to help his students better understand how UUhistle graphical representations map to code and to the notional machine.
Juha gives us a conceptual language to think about this with. The boxes and “incomprehensible things” are structures that must be understood on their own terms, in order to develop robust mental models, and understood in terms of their function and role in a program. That’s a challenge for us as educators.
So here’s the full definition: Computing education research is about understanding how people develop robust mental models of notional machines, and how we can help them achieve those mental models.