HCI and Computational Thinking are Ideological Foes
February 23, 2011 at 2:02 pm 25 comments
A colleague of mine sent me a link to the iConference 2011 website, suggesting that I should consider attending and submitting papers to future instantiations. It looks like an interesting conference, with lots of research in human-computer interaction and computer-supported collaborative work. There was very little about learning. There was a session on Scratch, focused on “end-user programming,” not on learning about computing.
I started to wonder: Have human-computer interaction research and computational thinking become ideological opposites? By “computational thinking” I mean “that knowledge about computing that goes beyond application use and that is useful in any discipline.” Or as Jeanette Wing described it, “Computational thinking builds on the power and limits of computing processes, whether they are executed by a human or by a machine.” Notice that she points out the limits. Limits suggest things that the computer can’t do, and if you’re going to think about them, you have to be aware of them. They must be visible to you. If Computational Thinking involves, for example, understanding the power and limits of digital representations, and how those serve as metaphors in thinking about other problems, then those representations have to be visible.
Let’s contrast that with Don Norman’s call for the Invisible Computer. Or Mark Weiser’s call for the “highest ideal is to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it.” Or any number of user-interface design books that tell us that the goal of user-centered design is for the user to focus on the task and make the computer become “invisible.”
Michael Mateas has talked about this in his discussion of a published dialog between Alan Perlis and Peter Elias. Elias claims, like Norman and Weiser, that one day “undergraduates will face the console with such a natural keyboard and such a natural language that there will be very little left, if anything, to the teaching of programming.” Michael responds, “The problem with this vision is that programming is really about describing processes, describing complex flows of cause and effect, and given that it takes work to describe processes, programming will always involve work, never achieving this frictionless ideal.”
The invisible-computer goal (that not all in HCI share, but I think it’s the predominant goal) aims to create a task-oriented interface for anything that a human will want to do with a computer. No matter what the task, the ad promises: “There’s an app for that!” Is that even possible? Can we really make invisible all the seams between tasks and digital representations of those tasks? Computational thinking is about engaging with what the computer can and cannot do, and explicitly thinking about it.
Computing education may be even more an ideological foe of this HCI design goal. Computing education is explicitly assuming that we can’t create an app for everything that we want to do, that some people (all professionals, in the extreme version that I subscribe to) need to know how to think about the computer in its own terms, in order to use it in new, innovative ways and (at least) to create those apps for others. It’s not clear who builds the apps in the invisible-computer world (because they would certainly need computing education), but whoever they are, they’re invisible, too.
I used to think that computing education was the far end of a continuum that started with HCI design. At some point, you can’t design away the computer, it has to become visible, and then you have to learn about it. After reviewing the iConference program, I suspect that HCI designers who believe in the invisible-computer have a goal for that never to happen. All possible tasks are covered by apps. Computing education should never be necessary except for an invisible few. Computational thinking is unnecessary, because we can make invisible all limitations.
Here’s a prediction: We won’t see a panel on “Computational Thinking” at CHI, CSCW, or iConference any time soon.
Entry filed under: Uncategorized. Tags: computational thinking, HCI.
1.
Alfred Thompson | February 23, 2011 at 2:17 pm
I’ve been hearing talk of “one day anyone will be able to get the computer to do whatever they need using natural languages” since my first programming course almost 40 years ago. (Wait, am I that old? Ouch.) Anyway, as someone was quoted as saying in a post you made on ebooks, “The easier you try to make an authoring environment, the harder it is to build it.” The same is true of programming systems, no matter how you define programming. One of the lessons of Watson may be that, for as great as it is at that narrow scope of questions, we still have a long way to go to get computers to actually understand us.
2.
Rob St. Amant | February 23, 2011 at 4:37 pm
I think there’s a tendency for HCI folks to think of the human architecture as being fixed, and our goal being to accommodate computer systems to it. All the discretionary-use applications that have appeared over the past couple of decades probably add to the momentum. You’ve raised some important issues I haven’t thought about before. Hmm.
Coincidentally, Judy Robertson writes in her most recent CACM blog post about a system called Toque, described in a 2010 CHI paper with the summary,
Presents implications from an intergenerational design process to create a cooking-based programming language utilizing a Wiimote. Can assist researchers, working in tangible systems, with teaching computational thinking to young children.
So there’s some hope.
3.
Andy J. Ko | February 23, 2011 at 9:50 pm
As someone who works at the intersection of HCI, software engineering, and computing education, I couldn’t disagree more (but am still thoroughly impressed by your analysis). My disagreement stems largely from your premises.
First, the HCI school of thought that computers should be simple, invisible, and “natural,” while still present in some technical areas of HCI like UbiComp, was supplanted long ago by the more relativist notion that computers should do what people need them to do (not what engineers think they should do). Countless papers from CHI are rejected every year for grounding research in simplistic notions of good design as simply simple, when in fact what people need computers to do depends greatly on what people you’re talking about, what they’re trying to do, and the circumstances they’re trying to do it in.
Second, the iConference is not an HCI conference. It is an information science conference. HCI may have strong connections to information science, but no HCI researcher would consider the iConference in any way representative of what HCI researchers study. I was at the recent conference in Seattle and most of the HCI attendees felt a bit like foreigners.
Finally, and most importantly, how is programming not human-computer interaction? It was the first kind of human-computer interaction and is still (arguably) the most difficult kind of interaction to design well. If anything, I view computing education and everything it entails as a sub-area of HCI, in that HCI is largely concerned with helping people express and get what they need from computers, whereas computing education is largely concerned with helping people express and get what they need from computers via programming. The distinction is really only between interface (direct manipulation vs. indirect manipulation). In fact, the first three or four years of publications at CHI were nothing but studies of people interacting with parsers, compilers and command languages.
Ultimately, the limits of computing is a fundamental topic in HCI, in that to design interactions between people and computers that fully exploit the collective strengths each, one has to understand both the limits of humanity and the limits of machines. That’s precisely why I find code the most fascinating of human-computer interfaces: it allows me to explore one of the most cognitively demanding activities people engage in, while testing the limits of how well computing can support it.
4.
Mark N. | February 24, 2011 at 6:06 am
That’s an interesting perspective! I’m a 90%-outside observer of HCI, but if that’s what HCI is like, it meshes much more with my concerns. In AI, we have a strong interest in “man-machine interaction”, including things like shared knowledge representations, domain-specific languages, knowledge elicitation, etc., but my impression was that that stream of HCI, associated with folks like Herb Simon and Allen Newell, wasn’t in the HCI mainstream anymore.
I could be reading the wrong proceedings, but the heritage I feel a lot more strongly is the industrial-psychology heritage rather than the man-machine interaction one, with a strong focus on interfaces, usability, learning curves, evaluation methodologies, etc.; rather than ideas like shared representations and computational expressivity.
5.
Mark Guzdial | February 24, 2011 at 9:48 am
Hi Andy,
Thanks for taking the time to provide a detailed response. I want to address each of your points, hoping you can help me with some of my confusion.
How does building computers for “what people need them to do” help to address issues of computational thinking? The goal of giving people what they think they need seems in complete agreement with the invisible computer goal of Norman and Weiser. Most people don’t decide that they need to be dragged out of the Platonic cave. Computational thinking is useful, despite the fact it isn’t obviously connected to any given task or obvious need. I do completely agree that the task is of critical importance. I love my apps as much as the next iAddict, and I want them to work flawlessly and seamlessly. I don’t believe that there is an existing app for any task I might want to perform with a computer, and I relish my (few) opportunities to dig in and use my knowledge of computing to invent new applications and to deal with the computer on its terms.
You’re absolutely right that the iConference is not supposed to be an HCI conference. When I visited the iConference 2011 site, I saw that “The iConference is an annual gathering of scholars and practitioners in the information field spanning the public, private and non-profit sectors,” and I thought, “Cool! Education and learning fit in with that!” And then I saw almost no education work in the program, and none on computing education. You were there, so you would have a better sense than I do, but the titles of the papers and posters I saw sound a lot like the titles I see in the CHI and CSCW programs. It is exactly that disconnect that led me to write this blog post. Of the three conferences, iConference clearly should have the most computational thinking and computing education in it, given its goals.
Programming is absolutely part of the definition of HCI — I completely agree. But programming is not the focus of much CHI or CSCW research. I’m not arguing that the definition of HCI is opposed to computing education. I argue that the mainstream HCI design and research efforts are opposed to computing education, even opposed to the need for computing education. The computer should be invisible, it should be seamless with the task — that’s what I’m reading.
“[T]he limits of computing is a fundamental topic in HCI, in that to design interactions between people and computers that fully exploit the collective strengths each, one has to understand both the limits of humanity and the limits of machines.” That’s a great statement, and one I strongly believe — according to the definition of HCI. That’s not what I see at CHI, CSCW, or the iConference. If there are papers at iConference 2011 that I’m missing and that I ought to be looking at, please do point them out to me! For the most part, I see agreement with the Peter Elias perspective: Interaction with the computer should be frictionless, and it’s not worth learning about the computer because any friction that’s there will just go away.
6.
Andy J. Ko | February 24, 2011 at 11:41 am
Thanks Mark, your reply helped me see one possible root of our disagreement. I completely agree that computational thinking is in conflict with society’s broad disinterest in creating their own computational solutions to problems they encounter. If somebody else builds it for you, why bother? (Those of us on the blog can and want to make the case for why to bother, but that’s for a different thread!). Our disagreement is probably stemming from whether HCI is a field that actively opposes computational thinking or whether the collection of research it engages in is simply reflective of society’s distaste for creating computational things.
I believe the latter. There is plenty of HCI research that attempts to support tasks that people need to do, where no app (and no market that would produce an app) exists. Everything on end-user programming, spanning a wide range of domains from the web, to teaching, science, finance, etc., actively engages in understanding how to better support people who have no choice but to create their own computational solutions. These user populations aren’t always happy about it, but they do it because they need to. Research in end-user programming certainly isn’t a majority topic in HCI research, but I tend to find that in HCI, every topic is niche 🙂
If HCI researchers were actively against computational thinking, the reviews that come back about end-user programming research would critique it for trying to “force” people to program. This is rarely the concern; the issue with this work is more that the solutions that we invent to support end-user programming are often difficult to generalize to other domains, because if we’ve done them well, they fit the domain’s idiosyncrasies.
I think my major quibble is with your use of the word “oppose”. Few HCI researchers I know oppose computational thinking; there may not be a lot of work that actively supports it, but I think that’s due more to society’s disinterest. I think a more progressive way to think about the relationship between HCI and computational thinking would be to explore in what ways computational thinking is fundamental to HCI.
I will say briefly that HCI practice (e.g., usability engineering, user experience, etc.), does have a much less nuanced and negative stance towards computational thinking and complexity in general. I think that’s more a failure of HCI education than a statement about the views within the HCI research community.
7.
Mark Guzdial | February 24, 2011 at 12:13 pm
Well said, Andy! The open question is whether the majority of HCI practitioners and researchers “oppose” computing education (with computational thinking as one subset of that), vs. little work actively supporting it. I don’t know, and you do have a better perspective on that than I do.
8.
Grant Schoenebeck | February 23, 2011 at 9:55 pm
I don’t see HCI and computational thinking opposed in the least bit. I think that you may misunderstand the idea of “computational thinking”, at least as thought of by theoretical computer scientists. Let me try to give a theoretical computer scientist’s perspective on it and see if I can change your mind.
The idea of “computational thinking” is to use computer science intuition to understand the world (apart from computers), just like “mathematical thinking” has influenced many fields. Even if you have never done addition tables after the second grade, it still might help you to think about how to budget your money, or about how the comment count on your blog grows. The difference between mathematical and computational thinking is that mathematics generally studies equations and static objects (groups, rings, fields, vector spaces, etc.), and theoretical computer science generally studies (discrete) processes.
As you state, the goal of “computational thinking” is to export the computer science theory to other disciplines. Some of the disciplines that this has been tried with are statistical physics, evolutionary biology, economics, and social networks. It has been more successful in some fields than others, and only time will give us the final results. (As a computer science theoretician, I don’t know anyone trying to export “computational thinking” to HCI; perhaps you have unluckily happened upon some, which I would be interested to hear about.)
Even if you abstracted all the code away and there was an app for everything, it still might be helpful to think of particles in a crystal lattice with changing spins as a process (in this case, usually the Glauber dynamics), or to think of information diffusion across a population as a cascade over a graph. These are objects that have been studied in theoretical computer science (with lots of help from adjoining areas).
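To make the cascade picture concrete, here is a minimal sketch of the independent cascade model of information diffusion over a graph. The graph, probability, and function names are my own illustration, not taken from any particular paper:

```python
import random

def independent_cascade(graph, seeds, p=0.3, rng=None):
    """Independent cascade: each newly informed node gets one
    chance to inform each of its neighbors, with probability p."""
    rng = rng or random.Random()
    informed = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for nbr in graph.get(node, []):
                if nbr not in informed and rng.random() < p:
                    informed.add(nbr)
                    next_frontier.append(nbr)
        frontier = next_frontier
    return informed

# A toy "who talks to whom" graph, as adjacency lists
g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"], "e": []}
everyone = independent_cascade(g, seeds=["a"], p=1.0)  # p=1: every reachable node hears
```

With p between 0 and 1 the spread becomes a random process, which is exactly the kind of object theoretical computer science gives us tools to analyze.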
The idea is that to understand fundamental processes that happen in our world, a computational lens is helpful. Thus learning to code can serve a purpose for those who never code again. It is not to claim the necessity of coding everything for its own sake, or even to claim that everyone should learn how to code so that they can understand these phenomena. Computer science theoreticians are happy to teach computer science classes with no coding, but plenty of computational “thinking”.
Secondly, I hope to illustrate the relevance of “computational thinking” by applying it to your blog post: People who dream of an app for everything do not dream big enough. The idea of the Entscheidungsproblem (decision problem) is that I write a question on a computer tape and a computer gives me the answer. Sounds great, but Alan Turing showed (in 1936, before any modern computer existed) that there will likely never be an app for the Entscheidungsproblem, because no computer ever conceived of by man (not even quantum computers) can solve even a very limited case of it. Written into the laws of nature is that some problems cannot be solved by computers; there are fundamental limits to what computers can do (and also to what they can do quickly: how about an app to break RSA encryption?).
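Turing’s diagonal argument can even be sketched as a short program. This is only an illustration (the function names are mine): given any claimed halting decider, we can construct a program the decider must answer incorrectly about:

```python
def make_trouble(halts):
    """Given any claimed halting decider `halts`, construct a
    program that the decider gets wrong."""
    def trouble():
        if halts(trouble):   # the decider says trouble halts...
            while True:      # ...so trouble loops forever instead
                pass
        # the decider says trouble loops, so trouble halts at once
    return trouble

# A (necessarily wrong) decider that always answers "loops forever":
def pessimist(program):
    return False

t = make_trouble(pessimist)
t()  # t halts immediately, so pessimist's answer about t was wrong
```

The same construction defeats every candidate decider, which is why no app for the halting problem can ever exist.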
You cannot make an app that defies the laws of nature, but you can try to make apps that shoe-horn people’s thoughts, so that people only think of doing things we know how to do (e.g. search Twitter). As the post points out about computers, “IF you’re going to think about them, you have to be aware of them”. In the future, it might be possible for most people never to think about computers while using them every second of the day. However, IF they do decide to think about computers or many other things, knowing what things the laws of nature deem possible and impossible will help, and this knowledge is often encoded in computational thinking.
9.
Steve Tate | February 24, 2011 at 9:26 am
After reading Mark’s original blog post at home this morning, I thought about writing a reply on my way in to the office, but it looks like you’ve beat me to most of the points I wanted to make.
Frankly, I think Mark’s representation of computational thinking is wrong here. When Jeanette says that “computational thinking builds on the power and limits of computing processes,” she is most definitely NOT talking about what a piece of technology does or doesn’t do. Whether “there is an app for that” is completely irrelevant. Perhaps you could say the important statement is “there could be an app for that”: a slight modification in wording, but a very, very different meaning. Computational thinking really has nothing to do with what technologies people have created, or how easy they are to use, or the efficacy of a device or system.
The only way HCI (or AI) could obviate computational thinking is if technologies advance to the point that they do our thinking for us – sort of a Kurzweil singularity moment. But if we get to the point where we are no longer thinking about things, because machines are doing it for us, then we’re talking about a very different world.
10.
Mark Guzdial | March 7, 2011 at 9:27 am
Steve,
Agreed that Jeanette doesn’t say anything about “there is an app for that.” That’s my extension. The notion that “there is an app for that” implies a consumer-oriented attitude toward computational technology. Jeanette is talking about computing, and I’m extending to technology. To be truly innovative, knowledge-building professionals need to have an author’s attitude toward computational technology.
11.
Mark Guzdial | February 24, 2011 at 9:34 am
Hi Grant, it’s hard to argue about definitions and misunderstandings of computational thinking since Jeanette Wing carefully avoided ever defining it. However, at the NSF CE21 (Computing Education in the 21st Century) meeting, the http://csprinciples.org/ effort was highlighted as an example framework for computational thinking. That framework does explicitly call for students to learn about (for example) digital representations, simple ideas of programming, abstraction as a powerful tool for dealing with complexity, etc. Mainstream HCI research and design aims to hide all of those ideas. In that way, modern HCI efforts are in opposition to computational thinking.
12.
Grant Schoenebeck | February 24, 2011 at 5:02 pm
Mark,
I understand your comments and confusion about the vagueness of the definition of computational thinking. Being in the CS theory community myself, I have heard this term batted around a lot. Christos Papadimitriou went around giving talks about the vision of computational thinking (though he called it “the algorithmic lens”) which you can find on-line (slides: http://lazowska.cs.washington.edu/fcrc/Christos.FCRC.pdf; video http://www.scivee.tv/node/10204). These talks outline applications of computational thinking to other disciplines: math, biology, physics, economics, and sociology. Christos was describing Jeanette Wing’s ideas about “computational thinking” in these talks.
The operative definition of “computational thinking” of http://csprinciples.org/ which you linked to does not seem to accord well even with the vague definition of Jeanette Wing that you mentioned. That site doesn’t seem to talk about “knowledge about computing that goes beyond application use and that is useful in any discipline” nor about computing processes that “are executed by a human.” The notion of computational thinking that the theoretical computer science community has embraced takes computational thinking beyond computing and thus also may take it beyond coding.
Even taking this definition, the application in education is still really important: can we teach computational thinking without coding?
I think we can, though others may disagree (including perhaps those working on http://csprinciples.org/). A class that is a start in this direction seems to be Sanjeev Arora’s COS 116: The Computational Universe http://www.cs.princeton.edu/courses/archive/spring11/cos116/
at Princeton. It is an introduction to many computer science ideas without any programming (though pseudo-code is used, and computers are instructed what to do through the use of apps, e.g., YouTube, Wikipedia, etc.). Most of the subject matter stays within the traditional confines of computers and their applications, though the introductory slides do talk about biology by interpreting DNA as a kind of computational system. My point is that if you use the theoretical computer scientists’ idea of “computational thinking,” then there does not seem to be much conflict. (Unless perhaps there is a third definition of “computational thinking” that you feel accords with Jeanette Wing’s definition, but not with that of theoretical computer scientists.)
The idea of “computing education” being rooted in programming is at odds with what Jeanette Wing means by either “computing education” or “computational thinking” (as I perceive from this paper, which I’m sure you’re well familiar with: http://www.cs.cmu.edu/afs/cs/usr/wing/www/publications/Wing06.pdf ).
The slides for the first lecture of Sanjeev Arora’s class have this distinction as a talking point: “Important Distinction: Computer Science vs. Programming.” Anyway, my point here is that you might find a lot of alignment between the computer science theory community and the educational perspective you’re coming from.
13.
Mark Guzdial | March 7, 2011 at 9:23 am
Grant, the big ideas at CSPrinciples.org that have to do with representation and the power of algorithms and abstraction certainly seem to me to fall under the category of “knowledge about computing that goes beyond application use and that is useful in any discipline.” BTW, I am one of the authors of the big ideas and computing practices in Computer Science: Principles.
14.
Rob St. Amant | February 24, 2011 at 1:11 pm
As a computer science theoretician, I don’t know anyone trying to export “computational thinking” to HCI; perhaps you have unluckily happened upon some, which I would be interested to hear about.
Grant, you may find Harold Thimbleby’s work interesting. His HCI textbook, Press On, shows how interfaces can be modeled using state machines; what models of interfaces as graphs can tell us (sample observation: “The size of a maximal independent set indicates the difficulty of a device to use”); how Huffman trees can be used to efficiently encode key shortcuts; and so forth. Also, it’s written in an introductory, informal style that makes it accessible to more than computer scientists.
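As a rough illustration of the Huffman idea (this sketch and its command frequencies are my own, not from Press On): build a binary tree over commands by repeatedly merging the two rarest subtrees, so the most frequent commands end up with the shortest key sequences:

```python
import heapq

def huffman_codes(freqs):
    """Assign each command a binary key sequence so that more
    frequent commands get shorter sequences (Huffman coding)."""
    heap = [(f, i, {cmd: ""}) for i, (cmd, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # the two rarest subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {cmd: "0" + code for cmd, code in c1.items()}
        merged.update({cmd: "1" + code for cmd, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Hypothetical usage frequencies for a device's commands
codes = huffman_codes({"copy": 50, "paste": 30, "undo": 15, "quit": 5})
```

The resulting codes are prefix-free, so no pause or terminator key is needed between commands, which is what makes the encoding usable as a shortcut scheme.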
15.
Grant Schoenebeck | February 24, 2011 at 5:04 pm
Thanks for the pointer. Sounds like a nice application of computational thinking!
16.
edtechdev | February 23, 2011 at 10:03 pm
Sorry, I usually don’t post long comments, but I have been thinking about this with respect to NSF CE21 and so forth lately.
This is a little funny, since I learned how to program in an HCI course (I know programming != computational thinking), and I got counted off for it (I was only supposed to create a static mockup in HyperCard, not something that supported interactions). Luckily, HCI and the tools and techniques it uses have evolved since then (interaction design, interactive mockup tools, bodystorming, game design, and so forth; there might be more and more computational thinking involved in HCI these days, or perhaps it could use more).
But, there is *doing* computational thinking (which as Wing says we already do in our everyday lives, but we are often unaware of it), and there is *learning* computational thinking, which might involve 1) a metacognitive aspect – being aware of the thinking processes we are using (and having names for them), as well as 2) a training aspect, getting better at computational thinking and learning new tools for thought (new algorithms, techniques, etc.).
Perhaps HCI won’t help with the metacognitive aspect, but I think it can help with the training aspect. I also think it would help every computer scientist to take an HCI class, because computer scientists do more than “build systems that interact with the real world” (Wing); they build systems that interact with people, or that have to be understood by people in order to be maintained, used, or fixed.
Just as everyone should understand the constraints and limitations of computers, they should understand the constraints and limitations of the ‘human architecture’ as well, and that’s what HCI helps you do.
Like I said earlier, the tools and techniques for HCI are becoming more and more complex and may benefit from more computational and systems-like thinking. Structure-behavior-function (SBF) theory helps me think about students interacting with and learning about systems. Out of SBF fall three motivating contexts for becoming more aware of how a system works: analysis (curiosity about how something works leads you to inspect it and take it apart: Structure to Behavior), design (improving a system, making something that’s useful to others or more useful to you: Function to Structure), and troubleshooting (fixing a flaw or defective behavior in a system, solving a real problem as in problem-based learning). So much curriculum focuses on analysis problems only and assumes an innate curiosity in the students, but modern HCI, game design, and other designers are making interfaces and computer applications that are more tweakable, customizable, relevant to the real world, and so forth: Little Big Planet, gamified web sites, etc.
So, yes, I think ‘traditional’ HCI and ‘traditional’ computer science are maybe opposed and not helpful to one another, and yes the fact that computational thinking mainly involves learning about things that computers can already do means that it is harder to think of contexts for motivating people to learn how to do these things themselves, but the two fields hopefully are evolving and will help improve each other.
BTW Wing states at the end that she thinks computational thinking will eventually become invisible/transparent/whatever: “Computational thinking will be a reality when it is so integral to human endeavors it disappears as an explicit philosophy.”
17.
Mark Guzdial | February 24, 2011 at 8:54 am
Hi Doug — great points! I think Wing’s point at the end isn’t that computational thinking goes away, but it becomes part of normal, accepted literacy. Different sense of disappearing, I think, than what Norman and Weiser are saying about the computer.
18.
Sarita Yardi | February 24, 2011 at 11:17 am
Passing on one comment from the Twitter stream (I tweeted this because I think this is a great blog post). Kate Starbird at Colorado replied that she had a paper on teaching programming in 3D environments at the iConf. It’s just one data point so doesn’t change your core argument but worth mentioning (Andrea Forte also tweeted there were some LST and CSEd folks there but not all presenting).
More than the Usual Suspects: The Physical Self and Other Resources for Learning to Program Using a 3D Avatar Environment; Kate Starbird (UC Boulder) and Leysia Palen (UC Boulder)
19.
Mark Guzdial | February 24, 2011 at 4:14 pm
Thanks, Sarita — both for the kind comments and for the link. I’ll definitely look that up. I’d certainly classify that as Computing Education.
20.
Jonathan Grudin | February 24, 2011 at 12:31 pm
Sarita got in just before me. The last paper session I attended at the iConference was about education. It was great. The first paper was The Structure of Collaborative Problem Solving in a Virtual Math Team, by Gerry Stahl of Drexel, long ago a student of Gerhard Fischer at Colorado. The third paper was Emerging Contexts for Science Education: Embedding a Forensic Science Game in a Virtual World, presented by Carlos Monroy from Rice. The second was the paper Sarita mentioned, The Physical Self and Other Resources for Learning to Program Using a 3D Avatar Environment, presented by Kate Starbird of Colorado, whom I would bet even money GT will be trying to hire in a year or two. Her talk was one of the nicest on computational thinking and its limits or obstructions that I have seen. What is ingenious about the work is that by having 18-year-olds attempt to program certain actions using a fixed, limited set of basic graphical elements (joint movements in an avatar body), they get immediate feedback as to what is going right or wrong in their algorithms by seeing the behavior of the avatar. It was fun and compelling. The papers are in the ACM Digital Library; a link was on the iConference web site last time I looked.
The larger issue here is about computation, but the focus is on the conferences, so as co-chair of iConference 2011 and someone who has attended most CHI’s since the first, let me address that. As Andy Ko noted, early CHIs had a very large component around studies of programming. That’s because most of the people who interacted directly with computers in the early 1980s were programmers. Computer programming was the first profession that chose to adopt hands-on use of computers. Now of course billions of people use digital technology hands-on, and fewer than one in a thousand of them are programmers. So priorities have shifted some.
However, conferences are what we make them. Conferences like CHI, with acceptance rates around 20%, are what the program committee makes them. Conferences like the iConference, with acceptance rates around 60%, are what the authors make them. Many HCI people are now in iSchools: take a look at the faculties of Michigan, Penn State, Univ. of Washington, Drexel, Berkeley, Toronto, and others. Looking at the CHI Lifetime Achievement Award winners, many are in iSchools and only Ben Shneiderman is wholly in a Computer Science Department (with Jim Foley arguably in both; Jim has attended an iConference). If there are not more sessions of your stripe at the iConference, you can easily change that.
The iConference has grown rapidly. It was 50% larger than the last CSCW conference and had people from over 100 universities and organizations, mostly universities. It was sponsored by Google, Intel, Microsoft, NSF, and others. It remains to be seen what it will develop into — new iSchools are appearing. To me, it had the feeling of CHI around 1983-1985, full of possibilities. I hope some of you help it realize some of the possibilities that you see.
21. Tweets that mention HCI and Computational Thinking are Ideological Foes « Computing Education Blog -- Topsy.com | February 25, 2011 at 2:17 am
[…] This post was mentioned on Twitter by Leif Singer and Zhicheng Liu, Sarita Yardi. Sarita Yardi said: excellent post by @guzdial on computation thinking versus HCI (and CHI/CSCW/iConference). http://bit.ly/i38q7h […]
22.
Mark Guzdial | February 26, 2011 at 3:29 pm
Interesting response from Beki Grinter: http://beki70.wordpress.com/2011/02/24/hci-and-computational-thinking/
23. Balancing HCI and Computational Thinking: Levels of Abstraction and Agency « The Next Bison: Social Computing and Culture | February 27, 2011 at 3:24 pm
[…] colleague Mark Guzdial wrote a great blog post last week called “HCI and Computational Thinking are Ideological Foes.” A lot of HCI wants to make the computer invisible, like Heidegger’s hammer. While […]
24.
Sometimes, Education is not about making it easier « Computing Education Blog | May 6, 2011 at 9:57 am
[…] Senior Computational Media students, well-versed in HCI as they are, wanted to create a GUI for a Physics simulation. The Physics teachers (to their credit, in my opinion!) insisted on having their students write […]
25.
Science Education Research: Misconceptions are surpressed, not supplanted « Computing Education Blog | July 31, 2012 at 4:26 am
[…] increasing live our lives in a computing world, it’s a constructed, designed world — a world in which the computer science is explicitly hidden. I bet that students only make up theories about computing in times of break down, when they have […]