Posts tagged ‘computational thinking’

Interesting new NSF Career award in interactive data visualization

Here’s an interesting project that could really get at generalizable “computational thinking” skills:

Wilkerson-Jerde’s research project will explore how young people think and learn about data visualization from the perspective of a conceptual toolkit. Her goals for “DataSketch: Exploring Computational Data Visualization in the Middle Grades” are to understand the knowledge and skills students bring together to make sense of novel data visualizations, and to design tools and activities that support students’ development of critical, flexible data visualization competence.

“Usually when we think of data visualization in school, we think of histograms or line graphs. But in contemporary science and media, people rely on novel, interactive visualizations that tell unique stories using data,” she explains.

via Michelle Wilkerson-Jerde (PhD12) Wins National Science Foundation CAREER Award :: School of Education & Social Policy :: Northwestern University.

February 24, 2014 at 1:50 am Leave a comment

CAS’ latest SwitchedOn Newsletter includes Media Computation and Pixel Spreadsheet

The Computing At Schools effort has a regular newsletter, SwitchedOn.  It’s packed full of useful information for computer science teachers, and is high-quality (in both content and design).  The latest issue is on Computational Thinking and includes mentions of Media Computation and Pixel Spreadsheet, which was really exciting for me.

Download the latest issue of our newsletter here. The newsletter is produced once a term and is packed with articles and ideas for teaching computer science in the classroom.

This issue takes a look at the idea of Computational Thinking. Computational thinking is something children do, not computers. Indeed, many activities that develop computational thought don’t need a computer at all. This influential term helps stress the educational processes we are engaged in. Developing learning and thinking skills lies behind our view that all children need exposure to such ideas. There is something of interest to all CAS members and the wider teaching community: resources and ideas shared by teachers, both primary and secondary. There is also a section on the Network of Excellence for those new to CAS who aren’t familiar with current developments.

via Computing At School :: Computing for the Next Generation ….

January 14, 2014 at 1:18 am Leave a comment

Computational Thinking in K–12: A report in Ed Researcher

Shuchi Grover and Roy Pea (Stanford) have a review of the field of computational thinking in K-12 schools in this month’s Educational Researcher.  It’s a very nice paper.  I’m excited that the paper is published where it is!  Educational Researcher is the main publication venue for the largest education research organization in the United States (American Educational Research Association).  Roy has been doing work in computing education for a very long time (e.g., “On the prerequisites of learning computer programming,” 1983, Pea and Kurland).  This is computational thinking hitting the education mainstream.

Jeannette Wing’s influential article on computational thinking 6 years ago argued for adding this new competency to every child’s analytical ability as a vital ingredient of science, technology, engineering, and mathematics (STEM) learning. What is computational thinking? Why did this article resonate with so many and serve as a rallying cry for educators, education researchers, and policy makers? How have they interpreted Wing’s definition, and what advances have been made since Wing’s article was published? This article frames the current state of discourse on computational thinking in K–12 education by examining mostly recently published academic literature that uses Wing’s article as a springboard, identifies gaps in research, and articulates priorities for future inquiries.

via Computational Thinking in K–12.

March 1, 2013 at 1:38 am 1 comment

Code Acts: How Computer Code influences the Way We Perceive the World

This is a fascinating essay.  Some of it goes too far for me (e.g., that code “produces new forms of algorithmic identity”), but the section I quote below is making a deep comment relative to the arguments we’ve been making here about “computing for everyone.”

Why should everyone know about computing?  I’ve argued about the value of computational literacy as literacy — a way of expressing and notating thought.  I’ve also argued about the value of computer science as science — insight into how the world we inhabit works.  This part of the essay is saying something more generative — that code provides metaphors for the way we think about the world, so not knowing about code limits one’s ability to understand modern culture and science.  The idea is akin to computational thinking, but more about cultural practices than cognitive processes.

Code is the language of computation; it instructs software how to act. But as the instructions written down in code travel out into the world, organized in the algorithmic procedures that make up software, it also has a significant influence on everything it touches. The result is a profound softwarization of society as software has begun to mediate our everyday ways of thinking and doing.

For example, software and its constituent codes and algorithms have become a metaphor for the mind, for ideology, for biology, for government, and for the economy, and with the rapid proliferation of software as an interface to the world, code has been seemingly naturalized in collective life. Computer code has also been described as a kind of law, or the set of rules and constitutional values that regulate the web. The idea that code is law suggests that choices about how to code the web will define the controls and freedoms that are built or programmed into it.

These ways of looking at code demonstrate that code is much more than a language for instructing computing machines. Instead, we need to understand code as a system of thought that spills out of the domain of computation to transform and reconfigure the world it inhabits.

via Code Acts: How Computer Code Configures Learning | DMLcentral.

February 14, 2013 at 1:50 am 7 comments

Essay calling for digital skills to be added to liberal arts disciplines

An interesting piece, which argues that proficiency with computing is an important part of a modern liberal arts education.  The argument is a modern and updated version of the argument that Alan Perlis made back in 1961. The specific computing literacies being described go beyond computational thinking: it’s explicitly about being able to make with computing.  Steve Jobs made a similar famous claim that computer science is a liberal art.

Students who graduate with a degree in liberal arts should understand the basic canon of our civilization as well as their place in the world, sure, but they also need to understand how to explore and communicate their ideas through visual communication, data manipulation, and even making a website or native mobile app. If they can’t, they’ll just understand the global context of their own unemployment.

via Essay calling for new skills to be added to liberal arts disciplines | Inside Higher Ed.

October 30, 2012 at 9:34 am 4 comments

Defining: What does it mean to understand computing?

In the About page for this blog, I wrote, "Computing Education Research is about how people come to understand computing, and how we can facilitate that understanding."  Juha Sorva’s dissertation (now available!) helped me come to an understanding of what it means to "understand computing."  I describe a fairly technical (in terms of cognitive and learning sciences) definition, which basically is Juha’s.  I end with some concrete pedagogical recommendations that are implied by this definition.

A Notional Machine:  Benedict du Boulay wrote in the 1980s about a "notional machine," that is, an abstraction of the computer that one can use for thinking about what a computer can and will do.  Juha writes:

Du Boulay was probably the first to use the term notional machine for “the general properties of the machine that one is learning to control” as one learns programming. A notional machine is an idealized computer “whose properties are implied by the constructs in the programming language employed” but which can also be made explicit in teaching (du Boulay et al., 1981; du Boulay, 1986).

The notional machine is how to think about what the computer is doing.  It doesn’t have to be about the CPU at all. Lisp and Smalltalk each have small, well-defined notional machines — there is a specific definition of what happens when the program executes, in terms of application of S-expressions (Lisp) and in terms of message sending to instances of classes (Smalltalk).  C has a different notional machine, which isn’t at all like Lisp’s or Smalltalk’s.  C’s notional machine is closer to the notional machine of the CPU, but is still a step above the CPU itself (e.g., there are no assignment statements or types in assembly language). Java has a complicated notional machine that involves both object-oriented semantics and bit-level semantics.

A notional machine is not a mental representation.  Rather, it’s a learning objective.  I suggest that understanding a realistic notional machine is implicitly a goal of computational thinking.  We want students to understand what a computer can do, what a human can do, and why that’s different.  For example, a computer can easily compare two numbers, can compare two strings with only slightly more effort, and has to be provided with an algorithm (that is unlikely to work like the human eye) to compare two images.  I’m saying “computer” here, but what I really mean is, “a notional machine.”  Finding a route from one place to another is easy for Google Maps or my GPS, but it requires programming for a notional machine to be able to find a route along a graph.  Counting the number of steps from the top of the tree to the furthest leaf is easy for us, but hard for novices to put in an algorithm.  While it’s probably not important for everyone to learn that algorithm, it’s important for everyone to understand why we need algorithms like that — to understand that computers have different operations (notional machines) than people.  If we want people to understand why we need algorithms, and why some things are harder for computers than humans, we want people to understand a notional machine.
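To make that concrete, here is a minimal sketch (my own, in Python, not from the post or from Juha’s thesis) of the tree-depth computation mentioned above, the kind of procedure a person does at a glance but a notional machine has to be told step by step:

    # Illustrative sketch only (not from the original post): counting the number
    # of steps from the top of a tree to its furthest leaf, spelled out as an
    # algorithm for a notional machine.

    class Node:
        def __init__(self, value, children=None):
            self.value = value
            self.children = children or []

    def height(node):
        """Steps from this node down to its furthest leaf."""
        if not node.children:      # a leaf is zero steps from itself
            return 0
        return 1 + max(height(child) for child in node.children)

    # root -> a -> c is the longest path, so the height is 2
    tree = Node("root", [Node("a", [Node("c")]), Node("b")])
    print(height(tree))  # 2

A person reads the height right off a picture of the tree; the notional machine needs the recursion written out.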

Mental Models:  A mental model is a personal representation of some aspect of the world.  A mental model is executable ("runnable" in Don Norman’s terms) and allows us to make predictions.  When we turn a switch on and off, we predict that the light will go on and off.  Because you were able to read that sentence and know what I meant, you have a mental model of a light which has a switch. You can predict how it works.  A mental model is absolutely necessary to be able to debug a program: You have to have a working expectation of what the program was supposed to do, and how it was supposed to get there, so that you can compare what it’s actually doing to that expectation.
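A tiny sketch of my own (not from the post) of what that looks like in practice: debugging starts from a prediction that your mental model generates, and a comparison against what the program actually does.

    # My illustration (not from the post): a mental model lets you predict what
    # the code should do; debugging is comparing that prediction to what it does.

    def count_positives(values):
        count = 0
        for v in values:
            if v > 0:
                count = count + 1
        return count

    # Prediction from the mental model: two of these four values are positive.
    expected = 2
    actual = count_positives([3, -1, 0, 5])
    print(actual == expected)  # True; a mismatch here is what tells us to start debugging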

So now I can offer a definition, based on Juha’s thesis:

To understand computing is to have a robust mental model of a notional machine.

My absolutely favorite part of Juha’s thesis is his Chapter 5, where he describes what we know about how mental models are developed.  I’ve already passed on the PDF of that chapter to my colleagues and students here at Georgia Tech.  He found some fascinating literature about the stages of mental model development, about how mental models can go wrong (it’s really hard to fix a flawed mental model!), and about the necessary pieces of a good mental model.  De Kleer and Brown provide a description of mental models in terms of sub-models, and tell us what principles are necessary for "robust" mental models.  The first and most important principle is this one (from Juha Sorva’s thesis, page 55):

  • The no-function-in-structure principle: the rules that specify the behavior of a system component are context free. That is, they are completely independent of how the overall system functions. For instance, the rules that describe how a switch in an electric circuit works must not refer, not even implicitly, to the function of the whole circuit. This is the most central of the principles that a robust model must follow.

When we think about a switch, we know that it opens and closes a circuit. A switch might turn on and off a light. That would be one function for the switch. A switch might turn on and off a fan. That’s another function for a switch. We know what a switch does, completely decontextualized from any particular role or function.  Thus, a robust mental model of a notional machine means that you can talk about what a computer can do, completely apart from what a computer is doing in any particular role or function.
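Here is a short sketch of my own (Python, not anything from Juha’s thesis or from de Kleer and Brown) of the same idea in code: the switch’s rules never mention the lamp or the fan it happens to control.

    # Illustration of the no-function-in-structure principle (my own sketch):
    # the Switch's behavior is specified without reference to the function of
    # the circuit it sits in.

    class Switch:
        def __init__(self):
            self.closed = False

        def toggle(self):
            # All the switch "knows" is that it opens and closes a circuit.
            self.closed = not self.closed

    class Lamp:  # one possible function: light a room
        def __init__(self, switch):
            self.switch = switch

        def lit(self):
            return self.switch.closed

    class Fan:  # another possible function: move air
        def __init__(self, switch):
            self.switch = switch

        def spinning(self):
            return self.switch.closed

    s = Switch()
    lamp, fan = Lamp(s), Fan(s)
    s.toggle()
    print(lamp.lit(), fan.spinning())  # True True: same switch rules, two different functions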

A robust mental model of a notional machine thus includes an understanding of how an IF or WHILE or FOR statement works, or what happens when you call a method on an object in Java (including searching up the class hierarchy), or how types work — completely independently of any given program.  If you don’t know the pieces separately, you can’t make predictions, or understand how they serve a particular function in a particular program.

It is completely okay to have a mental model that is incomplete.  Most people who use scissors don’t think about them as levers, but if you know physics or mechanical engineering, you understand different sub-models that you can use to inform your mental model of how scissors work.  You don’t even have to have a complete mental model of the notional machine of your language.  If you don’t have to deal with casting to different types, then you don’t have to know it.  Your mental model doesn’t have to encompass the whole notional machine.  You just don’t want your mental model to be wrong.  What you know should be right, because it’s so hard to change a mental model later.

These observations lead me to a pedagogical prediction:

Most people cannot develop a robust mental model of a notional machine without a language.

Absolutely, some people can understand what a computer can do without having a language given to them.  Turing came up with his machine, without anyone telling him what the operations of the machine could do.  But very few of us are Turings.  For most people, having a name (or a diagram — visual notations are also languages) for an operation (or sub-model, in de Kleer and Brown’s terms) makes it easier for us to talk about it, to reference it, to see it in the context of a given function (or program).

I’m talking about programming languages here in a very different way than how they normally enter into our conversation.  In much of the computational thinking discussion, programming is yet another thing to learn.  It’s a complexity, an additional challenge.  Here, I’m talking about languages as a notation which makes it easier to understand computing, to achieve computational thinking.  Maybe there isn’t yet a language that achieves these goals.

Here’s another pedagogical recommendation that Juha’s thesis has me thinking about:

We need to discuss both structure and function in our computing classes.

I suspect that most of the time when I describe "x = x + 1" in my classes, I say, "increment x."  But that’s the function.  Structurally, that’s an assignment statement.  Do I make sure that I emphasize both aspects in my classes?  Students need both, and to have a robust mental model, they probably need the structure emphasized more than the function.
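A few lines of my own (not from the post) to show the distinction: the same structure, an assignment that uses the variable’s old value, serves different functions in different programs.

    # My illustration: one structure (assignment using the old value), three functions.

    x = 7
    x = x + 1                  # function: "increment x"

    total = 0
    for price in [3, 5, 2]:
        total = total + price  # function: "accumulate a running total"

    pos = 100
    speed = -3
    pos = pos + speed          # function: "move the sprite one step"

Structurally, all three evaluate the right-hand side with the old value and rebind the name; functionally, we read them as counting, summing, and moving.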

We see that distinction between structure and function a lot in Juha’s thesis.  Juha not only does this amazing literature review, but he then does three studies of students using UUhistle.  UUhistle works for many students, but Juha also explores when it didn’t — which may be more interesting, from a research perspective.  A common theme in his studies is that some students didn’t really connect the visualization to the code: they talk about these "boxes" and do random walks, poking at graphics, as Juha describes in his observation transcripts.

What Juha describes isn’t unique to program visualization systems. I suspect that all of us have seen or heard something pretty similar to the above, but with text instead of graphics.  Students do “random walks” of code all the time.  Juha talks a good bit about how to help his students better understand how UUhistle graphical representations map to code and to the notional machine.

Juha gives us a conceptual language to think about this with.  The boxes and “incomprehensible things” are structures that must be understood on their own terms, in order to develop robust mental models, and understood in terms of their function and role in a program. That’s a challenge for us as educators.

So here’s the full definition:  Computing education research is about understanding how people develop robust mental models of notional machines, and how we can help them achieve those mental models.

May 24, 2012 at 7:52 am 21 comments

A computational biologist’s personal toolbox: What a scientist will really do with programming

Here’s a great piece to read when wondering about the questions, "Do scientists really all need to learn to program?  Surely they’re not going to program, are they?  What would they do?"  What they’ll do is patch together pieces of others’ code, with lots of data transformation.  What do they need to know?  A robust mental model of how the modules work and what data each one needs.  This is beyond computational thinking.

In my past 20 years as a programmer, I’ve seen the rise of object-oriented programming and ‘modularity’ is something that was hammered onto my forehead. These days, I organise my entire life as a computational biologist around little modules that I re-use in almost every workflow. Yes, sure, you may call me a one-trick pony, but in terms of productivity, call me plough horse.

My core modules are ACQUISITION, COMPUTATION, VISUALISATION, and usually I glue those together with a few lines of Perl or the Unix command line. Here come the constraints again: To overcome the limitations of the software that I’m often “misusing”, I use my own scripts to shove data from one format into the next, and back again. I think every biologist who deals with lots of data, not only us computational folk, should know a few handy lines to quickly turn comma-separated files into tab-delimited, strip a table of empty quotes or grep some essential info.

via Soapbox Science: Tool Tales: A computational biologist’s personal toolbox : Soapbox Science.
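The "handy lines" being described might look something like the following in Python (a sketch of my own, not the author’s actual scripts; the filenames and the keyword are made up):

    # A hypothetical sketch (not the author's toolbox): turn a comma-separated
    # file into a tab-delimited one, keeping only rows that mention a keyword.
    # csv.reader already unquotes fields, so empty "" cells come out as plain
    # empty strings in the output.

    import csv

    keyword = "BRCA1"  # hypothetical bit of "essential info" to grep for

    with open("samples.csv", newline="") as src, \
         open("samples.tsv", "w", newline="") as dst:
        writer = csv.writer(dst, delimiter="\t")
        for row in csv.reader(src):
            if any(keyword in field for field in row):
                writer.writerow(row)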

May 2, 2012 at 8:55 am 3 comments
