Computer Science was always supposed to be taught to everyone, and it wasn’t about getting a job: A historical perspective

I gave four keynote talks in the last two months, at SIGITE, the Models 2021 Educators’ Symposium, VL/HCC, and CSERC. I’m honored to have been invited to them, but I do suspect that four keynotes in six weeks suggests some “personal issues” with planning and saying “No.” Some of these were recorded, but I don’t believe that any of them are publicly available.

The keynotes had a similar structure and themes. (A lot easier than four completely different keynotes!) My activities in computing education these days are organized around two main projects:

  • The LSA Computing Education Task Force at the University of Michigan, and
  • Our Teaspoon languages, task-specific programming languages that add a teaspoon of computing to other subjects.

My goal was to put both of these efforts in a historical context. My argument is that computer science was originally invented to be taught to everyone, but not for economic advantage. I see the LSA effort and our Teaspoon languages as connected to the original goals for computer science. The talks were similar to my SIGCSE 2019 keynote (blog post about that talk here, and video version here), but they put some of the early history in a different perspective. I’m not going to go into the LSA Computing Education effort or Teaspoon languages here. I’m writing this up because I hope it’s a perspective on the early history that might be useful to others.

I start out with C.P. Snow.

My PhD advisor, Elliot Soloway, would have all of his students read this book, “The Two Cultures.” Snow was a scientist who bemoaned the split between science and humanities in Western culture. Snow mostly blamed the humanities. That wasn’t Elliot’s point in having us read the book. Elliot wanted us to think about “Who could use what we have to teach, but might not even enter our classroom?”

This is George Forsythe. Donald Knuth claims that George Forsythe first published the term “computer science” in a paper in the Journal of Engineering Education in 1961. Forsythe argued (in a 1968 article) that the most valuable parts of a scientific or technical education were facility with natural language, mathematics, and computer science.

In 1961, the MIT Sloan School held a symposium on “Computers and the World of the Future.” It was an amazing event. Attendees included Gene Amdahl, John McCarthy, Allen Newell, and Grace Hopper. Martin Greenberger’s book in 1962 included transcripts of all the lectures and all the discussants’ comments.

C.P. Snow’s chapter (with Norbert Wiener of Cybernetics as discussant) predicted a world where software would rule our lives, but the people who wrote the software would be outside the democratic process. He wrote, “A handful of people, having no relation to the will of society, having no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.” He argued that everyone needed to learn about computer science, in order to have democratic control of these processes.

In 1967, Turing laureate Peter Naur made a similar argument (quoting from Michael Caspersen’s paper): “Once informatics has become well established in general education, the mystery surrounding computers in many people’s perceptions will vanish. This must be regarded as perhaps the most important reason for promoting the understanding of informatics. This is a necessary condition for humankind’s supremacy over computers and for ensuring that their use do not become a matter for a small group of experts, but become a usual democratic matter, and thus through the democratic system will lie where it should, with all of us.” The Danish computing curriculum explicitly includes informing students about the risks of technology in society.

Alan Perlis (first ACM Turing Award laureate) made a different argument in his chapter. He suggested that everyone at University should learn to program because it changes how we understand everything else. He argued that you can’t think about integral calculus the same after you learn about computational iteration. He described efforts at Carnegie Tech to build economics models and learn through simulating them. He was foreshadowing modern computational science, and in particular, computational social science.

Perlis’s discussants included J.C.R. Licklider, grandfather of the Internet, and Peter Elias. Michael Mateas has written a fascinating analysis of their discussion (see paper here), which he uses to contextualize his work on teaching computation as an expressive medium.

In 1967, Perlis, with Herb Simon and Allen Newell, published a definition for computer science in the journal Science. They said that CS was “the study of computers and all the phenomena surrounding them.” I love that definition, but it’s too broad for many computer scientists. I think most people would accept it as a definition for “computing” as a field of study.

Then, we fast forward to 2016 when then-President Obama announced the goal of “CS for All.” He proposed:

Computer science (CS) is a “new basic” skill necessary for economic opportunity and social mobility.

I completely buy the necessity part and the basic skill part, and it’s true that CS can provide economic opportunity and social mobility. But that’s not what Perlis, Simon, Newell, Snow, and Forsythe were arguing for. They were proposing “CS for All” decades before Silicon Valley. There is value in learning computer science that is older and more broadly applicable than the economic benefits.

The first name that many think of when talking about teaching computing to everyone is Seymour Papert. Seymour believed, like Alan Perlis, “that children can learn to program and learning to program can affect the way that they learn everything else.”

The picture in the lower right of this slide is important. On the right is Gary Stager, who kindly shared this picture with me. On the left is Wally Feurzeig who implemented the programming language Logo with Danny Bobrow, when Seymour was a consultant to their group at BBN. In the center is Cynthia Solomon who collaborated with Seymour on the invention of the Turtle (originally a robot, seen at the top) and the development of Logo curriculum.

Cynthia was the lead author of a recent paper describing the history of Logo (see link here). It included the example of early Logo use on the upper right of this slide, which generates random sentences. Logo is named for the Greek word logos, for “word.” The first examples of Logo were about manipulating natural language. Logo has always been used as an expressive medium (music, graphics, storytelling, and animation), as well as for learning mathematics (see the great book Turtle Geometry).

This is the context in which I think about the work with the LSA Computing Education Task Force. Our question was: At an R1 University with a Computer Science & Engineering undergraduate degree and an undergraduate BS in Information (with tracks in information analysis and user experience (UX) design), what else might undergraduates need? What are the purposes for computing that are broader and older than the economic advantages of professional software development? We ended up defining three themes of what LSA faculty do with computing and what they want their students to know:

  • Computing for Discovery – LSA computational scientists create models and simulate them (not just analyze data that already exists), just as Alan Perlis suggested in 1961.
  • Computing for Expression – Computing has created new ways for humans to express themselves, which is important to study and to use to explore, invent, and create new forms of expression, as the Logo community did starting in the 1960’s.
  • Computing for Justice – LSA scholars investigate how computing systems can encode and exacerbate inequities, which requires some understanding of computing, just as C.P. Snow talked about in 1961.

We develop our Teaspoon languages to meet the needs of teachers in teaching non-CS and even non-STEM classes. We argue that there are computing education learning objectives that we address with Teaspoon languages, even if they don’t include common language features like for, while, and if statements. A common argument against our work on Teaspoon languages is that we’re undertaking a Sisyphean task: computing is what it is, programming languages are what they are, and education is not going to be a driving force for changing anything in computing.

And yet, that’s exactly how the desktop user interface was invented.

Alan Kay (another Turing laureate in this story), Adele Goldberg, and Dan Ingalls led the development of Smalltalk at Xerox PARC in the 1970s. The goal for Smalltalk was to realize Alan’s vision of a Dynabook, using the computer as a tool for learning. The WIMP (overlapping Windows, Icons, Menus, and mouse Pointer) interface was invented in order to achieve computing education goals. The user interface that you are using right now was invented for the purposes of education.

The Smalltalk work tells us that we don’t have to accept computing as it is. Computing education today focuses mostly on preparing students to be professional software developers, using the tools of professional software development. That’s important and useful, but it often eclipses other, broader goals for learning computing. The earliest goals for computing education are different from those in most of today’s computing education. We should question our goals, our tools, and our assumptions. Computing for everyone is likely going to look different than the computing we have today, which has been defined for a narrow set of goals and for far fewer people than “all.”

November 26, 2021 at 10:00 am 22 comments

Media Computation today: Runestone, Snap!, Python 3, and a Teaspoon Language

I haven’t gotten to teach Media Computation1 since I moved to the University of Michigan, so I haven’t done as much development on the curriculum and infrastructure as I might like if I were teaching it today. I did get a new version of JES (Jython Environment for Students) released in March 2020 (blog post here), but I have rarely even started JES since then.

But using Jython for Media Computation is so 2002. Where is Media Computation going today?

I’ve written a couple of blog posts about where Media Computation is showing up outside of JES and undergraduate CS. Jens Moenig has been doing amazing things with Media Computation in Snap! — see this blog post from last year on his Snap!Con keynote talk. SAP is now offering a course From Media Computation to Data Science using Snap! (see link here). Barbara Ericson’s work with Runestone ebooks (see an example blog post here) includes image manipulation in Python inside the browser at an AP CS Principles level (see example here). The amazing CS Awesome ebook that Beryl Hoffman and Jen Rosato have been doing with Barb for AP CS A includes in-browser coding of Java for the Picture Lab (see example here).

I was contacted this last January by Russ Tuck and Jonathan Senning. They’re at Gordon College where they teach Media Computation, but they wanted to do it in Python 3 instead of Jython. You can find it here. It works SO well! I miss having the image and sound explorers, but my basic demos with both images and sounds work exactly as-is, with no code changes. Bravo to the Gordon College team!

On the right is Python 3 code doing Media Computation. On the left are two images -- the original in the middle, and a red-reduced image on the far left.
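For readers who haven’t written Media Computation code, here is a minimal sketch of that red-reduction demo in JES-style Python. It is illustrative only: makePicture, getPixels, getRed, setRed, and show are the standard JES function names, but the import line and the file name below are assumptions for the example, not necessarily how the Gordon College port packages things.

    # Minimal red-reduction sketch in Media Computation (JES-style) Python.
    # The module name and file path are placeholders for illustration.
    from jes4py import *   # assumed import name for the Python 3 port

    def reduceRed(picture, factor=0.5):
        """Scale down the red channel of every pixel in the picture."""
        for pixel in getPixels(picture):
            setRed(pixel, getRed(pixel) * factor)
        return picture

    pic = makePicture("beach.jpg")   # placeholder file name
    show(reduceRed(pic))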

Most of my research these days is grounded in Task-Specific Programming languages, which I’ve blogged about here (here’s a thread of examples here and here’s an announcement of funding for the work in social studies). We now refer to the project as Teaspoon Computing or Teaspoon Languages — task-specific programming => TSP => Teaspoon. We’re adding a teaspoon of computing into other subjects. Tammy Shreiner and I have contributed a chapter on Teaspoon computing to a new book by Aman Yadav and Ulf Dalvad Berthelsen (see announcement of the book here).

We have a new Teaspoon language, Pixel Equations, that uses Media Computation to support an Engineering course in a Detroit Public School. Here, students choose a picture as input, then (1) enter the boolean equations for what pixels to select and (2) enter equations for new red, green, and blue values for those pixels. The conditionals and pixel loops are now implicit.
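To make the “implicit” point concrete, here is a rough sketch (my own, not the actual Pixel Equations tool) of what a student’s two equations correspond to when written out as explicit JES-style Python. In Pixel Equations, students only supply the boolean selection equation and the new color-value equations; the loop and conditional below are what the tool supplies for them, and the specific equations are invented for illustration.

    # Illustrative only: the pixel loop and conditional that Pixel Equations
    # makes implicit, written out in JES-style Python.
    def applyPixelEquations(picture):
        for pixel in getPixels(picture):            # implicit pixel loop
            # (1) boolean selection equation, e.g., red > green
            if getRed(pixel) > getGreen(pixel):     # implicit conditional
                # (2) new color-value equation for the selected pixels
                setBlue(pixel, getBlue(pixel) + 50)
        return picture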

In several of our tools, we’re now exploring bilingual or multilingual interfaces, inspired by Sara Vogel’s work on translanguaging (see paper here) and Manuel Pérez-Quiñones’s recent work on providing interfaces for bilingual users (see his TED talk here and his ACM Interactions paper here). You can see in the screenshot below that colors can be referenced by either their English or Spanish names. We’re now running participatory design sessions with teachers using Pixel Equations.

I’m planning a series of blog posts on all our Teaspoon languages work, but it’ll take a while until I get there.


  1. For new readers, Media Computation is a way of introducing computing by focusing on data abstractions used in digital media. Students write programs to manipulate pixels of a picture (to create photo filters), samples of a sound (e.g., to reverse sounds), characters of a text, and frames of a video (for video special effects). More at http://mediacomputation.org

September 6, 2021 at 7:00 am 5 comments

ICER 2021 Preview: The Challenges of Validated Assessments, Developing Rich Conceptualizations, and Understanding Interest #icer2021

The International Computing Education Research Conference (ICER) 2021 is this week (website here). It should have been in Charleston, South Carolina (one of my favorite cities), but it will instead be all on-line. Unlike previous years, if you are not already registered, you’re unfortunately out of luck. As seen in Matthias Hauswirth’s terrific guest blog post from last week (see here), getting set up in Clowdr is complicated. ICER won’t have the resources to bring people on-line and get them through the half hour prep sessions on-the-fly. There will be no “onsite” registration.

However, all the papers should be available in the ACM Digital Library (free for some time), and I think all the videos of the talks will be made available after the fact, so you can still gain a lot from the conference. Let me point out a few of the highlights that I’m excited about. (As of this writing, the papers are not yet appearing in the ACM DL — all the DOI links are failing for me. I’ll include the links here in hopes that everything is fixed soon.)

Our keynoter is Tammy Clegg, whom I got to know when she was a PhD student at Georgia Tech. She’s now at U. Maryland doing amazing work around computation and relevant science learning. I’m so looking forward to hearing what she has to say to the ICER community.

Miranda Parker, Allison Elliott Tew, and I have a paper “Uses, Revisions, and the Future of Validated Assessments in Computing Education: A Case Study of the FCS1 and SCS1.” This is a paper that we planned to write when Miranda first developed the SCS1 (first published in 2016). We created the SCS1 in order to send it out to the world for use in research. We hoped that we could sometime later do in CS what Richard Hake did in Physics, when he used the FCI to make some strong statements about teaching practices with a pool of 6,000 students (see paper here). Hake’s paper had a huge impact, as it started making the case to shift from lecture to active learning. Could we use the collected use of the SCS1 to make some strong arguments for improving CS learning? We decided that we couldn’t. The FCI was used in pretty comparable situations, and it’s tightly focused on force. CS1 is far too broad, and the FCS1 and SCS1 are being used in so many different places — not all of which they’ve been validated for. Our retrospective paper is a kind of systematic literature review, but it’s done from the perspective of tracing these two instruments and how they’ve been used by the research community.

One of the papers that I got a sneak peek at was “When Wrong is Right: The Instructional Power of Multiple Conceptions” by Lauren Margulieux, Paul Denny, Katie Cunningham, Mike Deutsch, and Ben Shapiro. The paper explores the tensions between direct instruction and more student-directed approaches, like constructionism and inquiry learning (see a piece I did in 2015 about these tensions). The basic argument of this new paper is that just telling students the right answer is not enough to develop rich understanding. We have to figure out how to help students hold multiple conceptions (not all of which are canonical or held by experts), so that they can compare and contrast them and use the right one at the right time.

I’m chair for a session on interest. While I haven’t seen the papers yet, I got to watch the presentations (which are already loaded in Clowdr). “Children’s Implicit and Explicit Stereotypes on the Gender, Social Skills, and Interests of a Computer Scientist” by de Wit, Hermans, and Aivaloglou is a report on a really interesting experiment. They look at how kids associate gender with activities (e.g., are boys more connected to video games than girls?). The innovative part is that they asked the questions and timed the answers. A quick answer likely connects to implicit beliefs. If they take a long time to answer, maybe they told you what they thought you wanted to hear? The second paper, “All the Pieces Matter: The Relationship of Momentary Self-efficacy and Affective Experiences with CS1 Achievement and Interest in Computing” by Lishinski and Rosenberg, asks what leads to students succeeding and wanting to continue in computing. They look at students’ affective state coming into CS1 (e.g., how much do they like computing? How much do they think that they can succeed in computing?), and relate that to students’ experiences and affective state after the class. They make some interesting claims about gender — that gender gaps are really self-efficacy gaps.

One of the more unusual sessions is a pair of papers from IT University of Copenhagen that make up a whole session. ICER doesn’t often give over a whole session to a single research group on multiple papers. One is “Computing Educational Activities Involving People Rather Than Things Appeal More to Women (Recruitment Perspective)” and the other is “Computing Educational Activities Involving People Rather Than Things Appeal More to Women (CS1 Appeal Perspective).” The pitch is that framing CS1 as being about people rather than things leads to better recruitment (first paper) and more success in CS1 (second paper) in terms of gender diversity. It’s empirical support for a hypothesis that we’ve heard before, and the authors frame the direction succinctly: “CS is about people not things.” Is that succinct enough to get CS faculty to adopt this and teach CS differently?

August 16, 2021 at 7:00 am Leave a comment

The Drawbacks of the One-Second Conference Trip. Or, how to prepare for ICER 2021. Guest Blog Post from Matthias Hauswirth

I miss physical conferences. But there are some things about them I do not miss at all. I don’t miss sprinting through airports to catch a connecting flight. I don’t miss standing in line at immigration for over an hour, just to enter the next long line to get through customs. And I don’t miss sitting in a tight middle seat for ten hours straight.

With today’s virtual conferences the trips are more pleasant. I can travel there with a single mouse click. It’s a one-second trip. And I love that! *

However, by eliminating the trip to the conference, we also eliminated an opportunity to prepare for the conference while being stuck in airports, planes, stations, and trains. My physical conference trips used to provide ample idle time. I used that time to contact colleagues to schedule a dinner, lunch, or coffee at the conference; to read the conference program and highlight the talks I wanted to see; to check out the map of the venue to know where to find the relevant rooms; and even to read a paper or two to prepare for talking to the authors at the conference.

That kind of preparation takes more than a second. And without the time provided by those arduous trips, I might show up ill prepared and miss out on half of the fun.

So here is my plan. For my next one-second conference trip, I will allocate a little bit of extra time to prepare. Not crammed into an airplane seat, but at home, in a comfy chair, with a nice cup of coffee.

Oh, and if your next conference trip takes you to ICER 2021 this coming Monday, here are some suggestions from the ICER Chairs for how to prepare for this conference, which will be hosted in the most recent version of Clowdr:

  • Find the invitation email you received from Clowdr (check your spam folder, too!) and log in (3 minutes).
  • Watch the ICER 2021 Clowdr Intro video (13 minutes). This will teach you the basics of how to navigate the platform. We recommend following along interactively on the Clowdr site as you watch, to familiarize yourself with the navigation.
  • Watch the ICER 2021 Paper Sessions: Participant Experience video (14 minutes). This will teach you how our paper sessions will work. You won’t just be watching videos, you’ll also be interacting while you watch, talking in small groups afterwards, and asking questions.
  • Once logged in, read the ICER Clowdr Experience FAQ page (4 minutes). This has the videos above and more detail for specific types of events.
  • On Clowdr, read the Code of Conduct page (3 minutes). Everyone is responsible for following these rules to ensure everyone feels safe and welcome.
  • On Clowdr, read the How to Set Up Your Profile page and set up your profile (3 minutes). This ensures people know who you are, what your name and pronouns are, where you’re visiting from, and what roles you’re playing at the conference. 

In Clowdr you will find a lot of content, including the entire program. We recommend that inside Clowdr you “star” events you are interested in to create your personal schedule. There is a page for each paper and poster/lightning talk. On each paper page you already find the presentation as an embedded video, on each ICER poster page there’s the poster pitch video and the PDF of the poster, and on each ICER lightning talk page you find the talk slide. Have a quick look to plan your personal schedule. And while you’re there, why not already leave a message or comment for the authors in the chat at the right of the paper/poster’s page? Note that the links to the papers in the ACM DL are not yet active; we expect ACM to make the DOIs work and the papers visible in the DL by the start of the conference.

We are confident that with an hour or so of up-front effort you will get much more out of the conference! (We suspect, though, that you will end up spending more than an hour because the content draws you in!) ICER 2021 is a compact conference packed with exciting content and interaction. Log in now to make the most of it!

*) I also very much love the minimal carbon footprint, low cost, and reduced health risks of virtual conferences.

August 13, 2021 at 1:00 pm 1 comment

Why aren’t more girls in the UK choosing to study computing and technology? Guest blog post by Peter Kemp

The Guardian raised the question in the title in this article in June. Pat Yongpradit sent it to me and Peter Kemp, and Peter’s response was terrific — insightful and informed by data. I asked him if I could share it here as a guest post, and he graciously agreed.

We’ve just started a 3-year project, scaricomp, that aims to look at girls’ performance and participation in computer science in English schools. There’s not much to see at the moment, as we started in April, but we’re hoping to sample 5000+ students across schools with large numbers of students taking CS and/or high numbers of females in the CS cohorts. I’ll let you know when we have some analysis in hand.

You reference The Guardian article’s quote: “In 2019, 17,158 girls studied computer science, compared with the 20,577 girls who studied ICT in 2018”. It’s worth noting that the 2018 ICT figure was the end of the line for ICT; numbers in previous years were much higher, and the female figure was actually ~40% of the overall ICT entries, whilst it represents about 20% of the GCSE CS cohort, i.e. females were proportionally better represented in ICT than in CS. (For a fuller picture of the changing numbers and demographics in English computing, see slide 8 of this, or the video presentation.) It’s also worth noting that since the curriculum change in 2012/13 we’ve lost the majority of time dedicated to teaching computing (including CS) at age 14-16; I’ve argued that this has had a disproportionate impact on girls and poorer students (pages 45-48).

To add a bit of context from England: students typically pick 8-10 subjects for GCSE, though their ‘options’ might be limited. Most schools will insist that students take Maths, English Language, English Literature, Physics, Chemistry, Biology, and often French or German, and History or Geography. This leaves students with one or two actual ‘options’. Many schools are also imposing entry requirements on GCSE CS, only letting high-achieving students (often focusing on maths) onto the course; this will likely have an impact on access to the curriculum for poorer students, who are less likely to achieve well in mathematics. Why don’t females pick CS in the same way they picked ICT? This might well be linked to curriculum, role models, contextualisation, etc.

One of the reasons given for the curriculum change in 2012 was that students were being “bored to death” by ICT, with ICT generally being the application of software products to solve problems and the implications of technology for the world. The application of technology to the world lends itself to the contextualisation of the curriculum and the assessment materials. There was a lot of project-based assessment with real-world scenarios for students to engage with, e.g. making marketing materials for businesses, using spreadsheets to organise holiday bookings, etc. (https://web.archive.org/web/20161130183550if_/http://www.aqa.org.uk/subjects/computer-science-and-it/gcse/information-and-communication-technology-4520). The GCSE CS is a different beast. It can be contextualised, but this is probably more difficult to do, as there is an awful lot of material to cover and the assessment methodology is entirely exam-based and on paper for the largest exam boards. Anecdotally we hear of schools cutting down on programming time on computers, as the exam is handwritten.

Data looking at what females ‘liked’ in the old ICT curriculum is quite limited, but what does exist places some of the ‘non-CS’ elements quite highly. So the actual curriculum content might have a part to play here. Having taught ICT (and CS) for many years, I found that most students I knew really enjoyed the ICT components. I’d argue that the pre-reform discourse around ICT being “useless, boring, easy” and CS being “useful, exciting, rigorous” was an easy political position to take, and not reflective of reality where schools had competent teachers. We now find ourselves in a position where we probably have a little too much CS, and not enough digital literacy / ICT for the general needs of students. I and people like Miles Berry (p49) have argued for a more generalist qualification which maintains elements of CS, though there appears to be little political will to make this happen.

To add another suggestion as to why we’re seeing females disengaging, within the English context we see females substantially underachieving at GCSE in comparison to their other subjects and to males of similar ‘abilities’ (ability here being similar grade profiles in other subjects). Why this is remains unclear; we see similar underachievement in Maths and Physics. My fear is that encouraging females to take CS might lead to them having their self-efficacy knocked, and therefore make them less likely to pursue further study or a career in tech. We also found that females from poorer backgrounds were more likely to pick GCSE CS than their middle-class peers; we speculate that this might be the result of different cultural/family pressures and a keener engagement with the ‘employability’ and ‘good pay’ discourse that often surrounds the representation of studying CS, however true this might be for these groups in reality. More research on the above coming soon through scaricomp.

Additionally, in terms of the UK picture, you’ll probably want to check in with Sue Sentance and the Gender Balance in Computing Project. One of their theories for the decline in computing is that CS is being timetabled at the same time as other (generally) more attractive subjects for females. I’m not sure if they’ve started this part of the research yet, but it’s worth checking in. They are running interventions across the country, but I don’t believe that they are trying to do a nationally representative survey.

August 2, 2021 at 7:00 am Leave a comment

Announcing the inaugural Illinois Computer Science Summer Teaching Workshop: Guest blog post from Geoffrey Challen

We are excited to invite you to the inaugural Illinois Computer Science Summer Teaching Workshop: https://teaching-workshop.cs.illinois.edu/. The 2021 workshop will be held virtually over two half-days on August 10–11, 2021. The workshop is free to attend, and teaching faculty, research faculty, and graduate and undergraduate students are all invited to participate—either by presenting or by joining the conversation. The deadline to submit an abstract is Tuesday, July 20th.


Our goal is to bring together college instructors who are engaged with teaching computer science to discuss best practices, present new ideas, challenge the status quo, propose new directions, debunk existing assumptions, advocate for new approaches, and present surprising or preliminary results. This year’s theme is “How the Pandemic Transformed Our Teaching,” allowing participants to reflect on the difficult year behind us as we prepare to return to classrooms next fall. We are excited to welcome Professors Margo Seltzer (UBC), Tiffani Williams (Illinois), Susan Rodger (Duke), Nate Derbinsky (Northeastern), and David Malan (Harvard) as invited speakers.

July 17, 2021 at 10:51 am 2 comments

Considering the Danish Informatics Curriculum: Comparing National Computer Science Curricula

Michael Caspersen invited me to review a chapter on the Danish Informatics curriculum (see a link here). He asked me to compare it to existing school CS curricula with which I’m familiar. That was an interesting idea — how does anyone relate curricula across diverse contexts, even between nations? I gave it a shot. I most likely missed some, in that there are many curricula that I don’t know or don’t know well enough. I welcome comments on other CS curricula.

The Danish Informatics curriculum is unique for its focus on four competence areas:

  • Digital empowerment which describes the ability to review and critique digital artifacts to ask where the strict demands of a computational system may not serve well the messy world in which humans live.
  • Digital design and design processes which describes the ways in which designers come to understand the problem domain for which we design digital artifacts.
  • Computational thinking and modeling which describes how data and algorithms are used to construct digital solutions and artifacts.
  • Technological knowledge and skills which describes the tools (e.g., programming languages) and infrastructures (e.g., computer systems, networking) used to construct digital solutions and artifacts.

I am not familiar with any curriculum that encompasses all four competencies. I’m most familiar with elementary and high school curricula in the United States. Each US state has control over its own school system (i.e., there is no national curriculum) though many are influenced by recommendations from the Computer Science Teachers Association (CSTA) (see link here) and the K12 CS Framework (link here).

In the United States, most computing curricula focus on technological knowledge and skills and computational thinking and modeling. The former is important because the economic argument for computing education in schools is the most salient in the United States. The latter most often appears as a focus on learning computing skills without programming, e.g., in the CS Unplugged activities from Tim Bell at the University of Canterbury (link).

Modeling is surprisingly rare in most state curricula. Calls for modeling and simulation are common in US mathematics and science education frameworks like the Next Generation Science Standards (link), but these have influenced few state curricula around computing education. Efforts to integrate computing to serve the needs of mathematics and science education are growing, but only a handful of states actively promote computing education to support mandatory education. For example, Indiana has included computing learning objectives in its science education standards, in order to develop more integrated approaches.

I don’t know of any state curricula that include digital empowerment or digital design and design processes. These are critically important. Caspersen’s arguments for the Danish Informatics curriculum build on quotes from Henry Kissinger and Peter Naur, but they could also build on the work of C.P. Snow and Alan Perlis (the first ACM Turing Award laureate). In 1961, Snow and Perlis both argued for mandatory computing (though at the University level). Perlis argued that computing gave us new ways to understand the world. He would have recognized the digital design and design processes competency area. Snow argued that everyone should learn computing in order to understand how computing is influencing our world. He wrote: “A handful of people, having no relation to the will of society, having no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.” He would recognize the concerns of Kissinger and Naur, and the importance of digital empowerment.

The Danish Informatics curriculum is unique in its breadth and for considering the social aspects of computing artifacts and design. It encompasses important needs for citizens of the 21st Century.

July 12, 2021 at 7:00 am 9 comments

There is transfer between programming and other subjects: Skills overlap, but it may not be causal

A 2018 paper by Ronny Scherer et al., “The cognitive benefits of learning computer programming: A meta-analysis of transfer effects,” was making the rounds on Twitter. They looked at 105 studies and found that there was a measurable amount of transfer between programming and situations requiring mathematical skills and spatial reasoning. But here’s the critical bit — it may not be causal. We cannot predict that students learning programming will automatically get higher mathematics grades, for example. They make a distinction between near transfer (doing things that are very close to programming, like mathematics) and far transfer, which might include creative thinking or metacognition (e.g., planning):

Despite the increasing attention computer programming has received recently (Grover & Pea, 2013), programming skills do not transfer equally to different skills—a finding that Sala and Gobet (2017a) supported in other domains. The findings of our meta-analysis may support a similar reasoning: the more distinct the situations students are invited to transfer their skills to are from computer programming the more challenging the far transfer is. However, we notice that this evidence cannot be interpreted causally—alternative explanations for the existence of far transfer exist.

Here’s how I interpret their findings. Learning programming involves learning a whole set of skills, some of which overlap with skills in other disciplines. Like being able to evaluate an expression with variables, once you know the numeric values for those variables — you have to do that in programming and in mathematics. Those things transfer. Farther transfer depends on how much overlap there is. Certainly, you have to plan in programming, but not all of the sub-skills for the kinds of planning used in programming appear in every problem where you have to plan. The closer the problem is to programming, the more that there’s an overlap, and the more we see transfer.
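As a concrete illustration of that shared sub-skill (my example, not from the paper), evaluating 3x + 2 at x = 5 is the same mental operation whether it shows up in an algebra exercise or in a line of code:

    # The same sub-skill in both settings: substitute a value for a variable
    # and evaluate the expression.
    x = 5
    y = 3 * x + 2   # a student has to reason that y is 17, just as in algebra
    print(y)        # prints 17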

This finding is like a recent paper out of Harvard (see link here) that shows that AP Calculus and AP CS both predict success in undergraduate computer science classes. Surprisingly, regular (not AP) calculus is also predictive of undergraduate CS success, but not regular CS. There are sub-skills in common between mathematics and programming, but the directionality is complicated.

We have known for a long time that we can teach programming in order to get a learning effect in other disciplines. That’s the heart of what Bootstrap does. Sharon Carver showed that many years ago. But that’s different than saying “Let’s teach programming, and see if there’s any effect in other classes.”

So yes, there is transfer between programming and other disciplines — not that it buys you much, and the effect is small. But we can no longer say that there is no transfer.

July 5, 2021 at 7:00 am 2 comments

Rules work as a way of communicating computation at a mechanistic level without teaching programming

Sometimes as a reviewer, you get to read a paper that you wish was published immediately. That’s how I felt when I got to review Eliane Wiese and Marcia Linn’s paper “It Must Include Rules”: Middle School Students’ Computational Thinking with Computer Models in Science. It was published in ACM TOCHI in April (see link here).

Eliane and Marcia offer a solution to a problem that teachers face when they want to teach about computational models, but they don’t want to teach programming. How do you get students to reason about the models underlying the simulations they’re exploring without talking about program code? And if you do talk about some notation, some representation of the model, what can you expect students to reason about without teaching them the notation or representation first?

Eliane and Marcia show that rules work. They have students interact with simulations, and then show them rules that might be in that model. For example, in a simulation of light, photosynthesis, and glucose levels in plants, a rule might be: When light is on, total glucose made increases. Eliane and Marcia show rules to students and ask “Are these in the model?” In their abstract, they write:

In our sample, 99% of students identified at least one key rule underlying a model, but only 14% identified all key rules; 65% believed that model rules can contradict; and 98% could not distinguish between emergent patterns and behaviors that directly resulted from model rules. Despite these misconceptions, compared to the “typical” questions about the science content alone, questions about model rules elicited deeper science thinking, with 2–10 times more responses including reasoning about scientific mechanisms. These results suggest that incorporating computational thinking instruction into middle school science courses might yield deeper learning and more precise assessments around scientific models.

The misconceptions don’t bother me. Students will have misconceptions about models — that’s part of teaching science with models. What’s fascinating to me is that the rules worked. Students reasoned mechanistically about the computational models.
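To make the mapping concrete (my sketch, not from Wiese and Linn’s paper), a rule like “When light is on, total glucose made increases” stands in for something like a conditional update inside a simulation’s time-step loop. Students in the study only ever saw the English rules, never code like this:

    # Illustrative sketch: one simulation time step, with each rule written
    # as a conditional update. The second rule is invented for the example.
    def step(state):
        if state["light_on"]:          # Rule: when light is on...
            state["glucose"] += 1      # ...total glucose made increases.
        if state["glucose"] > 0:       # (an invented second rule)
            state["energy"] += 1
            state["glucose"] -= 1
        return state

    state = {"light_on": True, "glucose": 0, "energy": 0}
    for _ in range(10):
        state = step(state)
    print(state)   # {'light_on': True, 'glucose': 0, 'energy': 10}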

My favorite result in this study was where they asked students to predict what would happen if they added a new rule to the model. Basically, “What happens if we change the program like this?” Students were way better at playing these what-if games if the question was posed as a rule. Quoting from the paper:

Asking students to make predictions about the implementation of incorrect rules led to more scientific reasoning about mechanisms than simply asking students about a causal relationship portrayed in a correct model. This pattern was evident for both model contexts, with twice as many workgroups proposing mechanisms with the New Rule question compared to the Typical question for Global Climate (29% vs. 14%) and ten times as many workgroups doing so for Chemical Reactions (53% vs. 5%).

Students can reason about computational models described as rules, even without instruction on rules. That’s a terrific result. It’s one that I’m thinking about how to use in my task-specific programming languages.

Now, this isn’t saying that students can’t reason with functions or with imperative statements. Maybe functional or procedural programming paradigms would work, too. Eliane and Marcia have found one approach that does work. They offer us a way to integrate computational modeling into science education, with real discussion of the mechanism of the models, without teaching programming first.

June 28, 2021 at 7:00 am Leave a comment

Katie Cunningham’s Purpose-first Programming: Glass box scaffolding for learning to code for authentic contexts

Last month, Katie Cunningham presented her CHI 2021 paper “Avoiding the Turing Tarpit: Learning Conversational Programming by Starting from Code’s Purpose.” The video of her presentation is available here. This is the final study from her dissertation work, about which I blogged here.

Katie is trying to support the kinds of programming learners whom she discovered in her work on tracing — students who want to write programs, but have no interest in understanding the details of how programs work. As one said to her (which became the title of her ICLS 2020 paper), “I’m not a computer.” Block-based programming won’t work for her learners because, like most conversational programmers, they care about the authenticity of the language they’re learning. They don’t want to use blocks. They want to see the code that developers see — a form of what Cindy Hmelo-Silver and I called “glass-box scaffolding.”

Katie focused on one particular purpose: writing Python code to scrape Web pages using Beautiful Soup. She and Rahul Bejarano dug into Beautiful Soup code on Github and identified a set of code chunks (“plans”) that were really used for this purpose and which could be recombined in useful ways. She then developed a curriculum as a Runestone ebook for teaching those plans where she taught students how to combine them (using Parsons Problems) and, importantly, how to tailor them for specific needs. Here’s a figure from her paper with an example plan with a description of the “slots” for tailoring.
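Her actual plans appear in the figure from the paper. As a rough illustration of the idea (not Katie’s actual plan), a web-scraping plan might look like the chunk below, with the “slots” a learner tailors marked in comments. The URL, tag name, and variable names are placeholders.

    # Rough illustration of a purpose-first "plan" with tailorable slots.
    # Not the actual plan from the paper; names and URL are placeholders.
    import requests
    from bs4 import BeautifulSoup

    # Plan: fetch a page and collect the text of the matching elements.
    page = requests.get("https://example.com/news")   # SLOT: page to scrape
    soup = BeautifulSoup(page.text, "html.parser")
    headlines = []                                    # SLOT: result name
    for item in soup.find_all("h2"):                  # SLOT: tag to match
        headlines.append(item.get_text().strip())
    print(headlines)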

My favorite part of this study is her analysis of how students debugged using these plans. They did make mistakes, and they fixed them. They reasoned about their programs in terms of the plans. In a think aloud, they talked about the names of the plans and the slots, and where they tailored the plan wrong. It’s not that they were just copying and pasting chunks of Python code. They were reasoning about the chunks — but they were not doing much reasoning about Python. In some sense, she defined a task-specific programming language whose components happened to be defined in terms of visible lines of Python code.

My favorite outcome of the study is that students came away excited and felt that they were doing something “realistic” — from a half hour lesson. One participant asked if she could do this kind of learning for different purposes every week, a kind of DuoLingo for programming. Those are strong results from a short intervention. It is a pretty amazing intervention.

I blogged for CACM this month about how what we predict about knowledge transferring between programming languages may be based on an assumption of a mathematics background, which might have been true in the 1970s but is less likely to be true today (see post here). I suggest that we need to develop ways of teaching programming that don’t relate to mathematics, and that instead connect to the programmer’s purpose and task. Katie’s work is what I had in mind as an example.

June 21, 2021 at 7:00 am 9 comments

Call for Nominations for Editor-in-Chief of ACM Transactions on Computing Education

Call for Nominations: Editor-In-Chief, ACM Transactions on Computing Education

The term of the current Editor-in-Chief (EiC) of the ACM Transactions on Computing Education (TOCE) is coming to an end, and the ACM Publications Board has set up a nominating committee to assist the Board in selecting the next EiC.  TOCE was established in 2001 and is a premier journal for computing education, publishing over 30 papers annually.

Nominations, including self-nominations, are invited for a three-year term as TOCE EiC, beginning on September 1, 2021. The EiC appointment may be renewed at most one time. This is an entirely voluntary position, but ACM will provide appropriate administrative support.

Appointed by the ACM Publications Board, Editors-in-Chief (EiCs) of ACM journals are delegated full responsibility for the editorial management of the journal consistent with the journal’s charter and general ACM policies. The Board relies on EiCs to ensure that the content of the journal is of high quality and that the editorial review process is both timely and fair. The EiC has final say on acceptance of papers, size of the Editorial Board, and appointment of Associate Editors. A complete list of responsibilities is found in the ACM Volunteer Editors Position Descriptions. Additional information can be found in the following documents:

  • Rights and Responsibilities in ACM Publishing
  • ACM’s Evaluation Criteria for Editors-in-Chief

Nominations should include a vita along with a brief statement of why the nominee should be considered. Self-nominations are encouraged. Nominations should include a statement of the candidate’s vision for the future development of TOCE. The deadline for submitting nominations is July 21, 2021, although nominations will continue to be accepted until the position is filled.

Please send all nominations to the nominating committee chairs, Mark Guzdial (mjguz@umich.edu) and Betsy DiSalvo (bdisalvo@cc.gatech.edu).

The search committee members are:

  • Betsy DiSalvo (Georgia Institute of Technology), co-chair
  • Mark Guzdial (University of Michigan), co-chair
  • Diana Franklin (University of Chicago)
  • Andrew Luxton-Reilly (University of Auckland)
  • Aman Yadav (Michigan State University)

Louiqa Raschid (University of Maryland) will serve as the ACM Publications Board Liaison.

June 14, 2021 at 7:00 am Leave a comment

Call for Special Issue on CT in Early Childhood: Guest Blog Post from Wang, Bers, and Lee

In my blog post on what I got wrong in the 2010’s, I pointed to the many definitions of computational thinking (CT) that I had shared in this blog. I said that I hoped that I wouldn’t be offering any more, but I was probably wrong on that too.

Below you will find (yet another) definition of CT, which is pretty intriguing.


Early Childhood Research Quarterly
Call for Papers

Special Issue: Examining Computational Thinking in Early Childhood

Guest Editors 

X. Christine Wang, State University of New York at Buffalo, wangxc@gmail.com

Marina Bers, Tufts University, marina.bers@tufts.edu

Victor R. Lee, Stanford University, vrlee@stanford.edu 

Described as the new literacy of the 21st century, computational thinking (CT) is broadly defined as systematic analysis, exploration, and testing of solutions to open-ended and often complex problems based on the analytical process rooted in the discipline of computer science. Driven by the increasing demands for computing professionals, CT has been popularized as a key goal of computer science teaching and learning in K-12 schools. On the one hand, much new research is currently exploring the relationships between CT and coding, CT in everyday unplugged activities, and CT and cognitive and socio-emotional domains of knowledge. On the other hand, there is also heated debate about the validity and applicability of CT, whether CT refers to a new set of competencies, and what value CT has in schooling. Because of the complicated nature of these explorations and conversations, CT has drawn considerable attention in educational research and practice, including early childhood education in recent years (Bers, 2018; Jung & Won, 2018; Toh et al., 2016; Xia & Zhong, 2018).

To help advance this burgeoning area of research, this special issue seeks empirical and theoretical contributions about young children’s (ages 2-8) CT learning and teaching. We encourage researchers to explore, but not limit themselves to, one or more of the following topics:

(1) Critical examinations of definitions and/or conceptualizations of CT in early childhood

(2) Operationalizations of CT learning goals and practices in early childhood

(3) Developmentally appropriate approaches in promoting CT in early childhood

(4) Relationships between CT and other domains of learning and development

(5) Assessment of CT learning and development in early childhood

(6) Supports for early childhood educators who are bringing CT to young children

(7) Equity and inclusion issues related to CT learning and teaching

For this special issue, we are soliciting a wide range of manuscripts describing rigorous empirical studies, design studies, integrative reviews, theoretical perspectives, or evaluation studies. We welcome studies that employ diverse theoretical and methodological approaches.

Submission Details
We are inviting interested researchers to submit a short proposal prior to manuscript submission. The proposal should be no more than 500 words (excluding references, images, or figures) and must include the following information: (1) Title/Author(s), (2) Key Issues/Problems, (3) Methods/Processes, (4) Findings/Evidence-Based Claims, and (5) Relevance and Contribution to the Special Issue.

Please submit your proposal via email to the Guest Editors with the subject line “ECRQ: CT in Early Childhood”:  X. Christine Wang (wangxc@gmail.com), Marina Bers (marina.bers@tufts.edu), and Victor R. Lee (vrlee@stanford.edu).

The guest editors will provide timely feedback and select proposed papers based on their quality and suitability for this special issue. Selected authors will then be invited to submit a full manuscript.

All full manuscripts must be submitted via the EM system: https://www.editorialmanager.com/ecrq/default.aspx. After you log in and click on “Submit New Manuscript,”  please select “VSI: CT in Early Childhood” on the “Select Article Type” page and proceed accordingly. 

Invitation to submit a full paper will not be a guarantee of acceptance. All manuscripts will undergo the standard ECRQ double-blind peer review procedure. For further information please contact Managing Guest Editor X. Christine Wang (wangxc@gmail.com) or Special Content Editor Gary Resnick (sevenalaris@msn.com).

Deadlines
Proposal submission: July 15, 2021  
Invitation for manuscript submission: August 15, 2021
Manuscript Submission: December 15, 2021

May 24, 2021 at 7:00 am Leave a comment

Seeking Collaborators for a Study of Impostor Phenomenon in Computer Science: Guest Blog Post from Leo Porter

Impostor Phenomenon (IP)** is often described as high-achieving individuals experiencing feelings of intellectual phoniness.  Based on the research conducted in various fields with different populations over the past four decades, we know that IP causes problems for those who experience it, including being associated with anxiety and depression.  

In computer science, we often hear our colleagues and students talking about their struggles with IP.  There are panels on IP at Grace Hopper and other conferences aimed at helping members of our community cope with these feelings.  But how prevalent is it in CS?

An informal survey conducted by Blind asked participants to self-report their feelings of IP, and among the 10,000 software engineers who participated, 58% reported feelings of IP [5].  However, self-reporting isn’t necessarily an accurate way to measure IP.   In a pilot study at UC San Diego, we used the Clance IP scale [1], a validated instrument that is used in the majority of studies to measure IP.  After administering the Clance IP scale in upper-division and graduate CS courses, we found that 57% of participants met the diagnostic criteria for experiencing IP [7], which was quite similar to that earlier reported finding from Blind.  What was most concerning about our results was the differences for gender among the students:  52% of men met the diagnostic criteria whereas 71% of women did.  That’s a huge (and statistically significant) difference!

But what does this mean? We can look at results from other studies and see that computer science seems to have higher rates of students who experience IP than fields like the health professions (31%) [4], undergraduates studying education (28%) [3], undergraduates in business-related fields (39%) [8], and undergraduates from racially underrepresented groups studying educational psychology (48%) [2]. This suggests that CS may be an outlier, with our students struggling more with IP than those in other fields. However, a recent study among medical students [6] reported similar results to what we found in CS, suggesting computing might not be alone.

Before we begin asking questions of why CS (and perhaps also medicine) might be outliers, we need to conduct a replication study to verify (or refute) these initial findings from just a single institution.  To that end, we’re putting out a call for other researchers to help participate in a large-scale replication effort to answer these questions:  What is the rate of IP among students in computer science courses?  Does the rate of IP change as students move farther through the curriculum?  Are students from underrepresented groups in computer science more likely to experience IP than those from traditionally represented groups?

If you are willing to participate in this replication effort, please fill out this brief interest form:

https://forms.gle/MWYPFnmepWT9nMzNA

For those participating, we’ll ask that you administer the instrument in at least one course at your institution.  If you are interested, we’ll also invite you to engage in the data analysis and authoring of any related publications.  We’ll also help you obtain Human Subjects approval at your institution or leverage our approved protocol at UC San Diego.

** Impostor Phenomenon is the original term [1], however Impostor Syndrome and Impostor Phenomenon are commonly used interchangeably.

References

  1. Sabine M. Chrisman, W. A. Pieper, Pauline R. Clance, C. L. Holland, and Cheryl Glickauf-Hughes. 1995. Validation of the Clance Impostor Phenomenon Scale. Journal of Personality Assessment 65, 3 (1995), 456–467.
  2. Kevin Cokley, Leann Smith, Donte Bernard, Ashley Hurst, Stacey Jackson, Steven Stone, Olufunke Awosogba, Chastity Saucer, Marlon Bailey, Davia Roberts. 2017. Impostor feelings as a moderator and mediator of the relationship between perceived discrimination and mental health among racial/ethnic minority college students. Journal of Counseling Psychology 64, 2 (2017), 141–154.
  3. Joseph R. Ferrari. 2005. Impostor Tendencies And Academic Dishonesty: Do They Cheat Their Way To Success? Social Behavior and Personality: an international journal 33, 1 (2005), 11–18.
  4. Kris Henning, Sydney Ey, and Darlene Shaw. 1998. Perfectionism, the impostor phenomenon and psychological adjustment in medical, dental, nursing and pharmacy students. Medical Education 32, 5 (1998), 456–464.
  5. Kim. 2018. 58 Percent of Tech Workers Feel Like Impostors. https://blog.teamblind.com/index.php/2018/09/05/58-percent-of-tech-workers-feel-like-impostors
  6. Beth Levant, Jennifer A. Villwock, and Ann M. Manzardo. 2019. Impostorism in third-year medical students: an item analysis using the Clance impostor phenomenon scale. Perspectives on medical education (2020), 1-9.
  7. Adam Rosenstein, Aishma Raghu, and Leo Porter. 2020. Identifying the prevalence of the impostor phenomenon among computer science students. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education.
  8. Kenneth T. Wang, Marina S. Sheveleva, and Tatiana M. Permyakova. 2019. Imposter syndrome among Russian students: The link between perfectionism and psychological distress. Personality and Individual Differences 143 (2019), 1–6.

May 6, 2021 at 7:00 am 2 comments

The Bigger Part of Computing Education is outside of Engineering Education

My Blog@CACM post this month is about the differences I’ve seen between computing education and engineering education (see link here). Engineering education has a goal of producing professional engineers. I describe in the post how ASEE is about the profession of engineering and how developing an engineering identity is a critical goal of engineering education. Computing education includes producing software engineers, but that’s only part of what it is about. SIGCSE is about the learning and teaching of computing, and as computing educators, we teach students with diverse identities. The two fields overlap, but the part of computing education that is outside the intersection with engineering education is much bigger than the part inside.

Computing education for me is about helping people to understand computing (see the Call for Papers for the International Computing Education Research conference) — not just CS education at the undergraduate level. Preparing future software engineers is certainly part of computing education, but sometimes computing educators only see engineering education goals. Computing education has a bigger scope and range than engineering education. Here are three areas where we need to focus on the bigger part outside engineering.

1. K-12 is for everyone. Computing education in elementary and secondary school should be about more than producing software professionals. There are certainly CS teachers who disagree with me. An example is Scott Portnoff’s critique of CS curricula that do not adequately prepare students for the AP CS A exam and the CS major. I agree that we should offer CS courses at secondary school that give students adequate preparation for post-secondary CS education, if students want to go on to a CS major and become a computing professional. But K-12 has to serve everyone, and the most important goals for K-12 CS education are goals for what everyone should learn about computing. We want students:

I am personally much more interested in K-12 teachers using computing to teach everything else better. Computational science and mathematics are powerful for helping scientists and mathematicians gain insight. We should use computing in the same way to advance student learning in STEM, social studies, and other disciplines — without turning those other classes into CS classes. This is the difference that Shuchi Grover is talking about with her two kinds of CT: learning about CS vs. using computing to learn other things.

2. Courses for non-CS Majors. I’m co-chairing a task force on computing education for the University of Michigan’s College of Literature, Science, and the Arts (LSA) (see a blog post on this effort and our website with our NEW preliminary report). I’m learning about the ways that LSA faculty use computing and how they want their students to learn about and use computing. Their purposes are so different from what we teach in classes about computer science or data science. Sure, computational scientists analyze data like data scientists, but they also create models that turn their theories into simulations (which can then generate data). Computational artists use computing to tell engaging stories in new ways. Computational journalists investigate and discover truth with computing. LSA faculty care a great deal about their students critiquing how our computing systems and infrastructure may be unjust and inequitable. (Interesting note: The word “justice” does not appear in the new Computing Curricula 2020, and the word “equity” appears only once.)

There are computer scientists who tell me that there is only one computer science for all students. Their argument is that better engineering practices help everyone — if those computational scientists, journalists, and artists just programmed like software engineers, the world would be a better place. Their code would be more robust, more secure, and more extensible. That is likely true, but that perspective misunderstands the role of code in doing science and making art. You don’t critique the poet for not writing like a journalist or a novelist. These are different activities with different goals.

We should teach non-CS majors with courses that serve their needs, speak to their identities, and support their values. We should not require all artists and scientists to think, act, and program like engineers just to take computing classes.

A CS educator in the Bay Area once tried to convince me that the most important purpose of courses for non-CS majors was to identify the potential for being great programmers. He claimed that there are programmers who are two orders of magnitude better than their peers, and that identifying them is the most important thing we can do to support and advance the software companies on which our world economy depends. He argued that we should teach non-CS majors in order to identify and promote future engineers, not for their own purposes. I see his argument, but I do not agree that scientists, journalists, and artists are less important than engineers. As I consider this pandemic, I think about the role that computing has played in medicine, logistics, and media. Of course, we have relied heavily on software engineering, but I don’t believe that it’s more important than all the other roles that computing has played.

3. Supporting diverse identities. There is a disconnect between efforts to broaden participation in computing and framing CS classes as engineering education. As I mentioned in my Blog@CACM post, I taught my first EER course this last semester and read a lot of EER papers. A big focus in engineering education is developing an engineering identity, i.e., helping students to see themselves as members of the engineering community of practice and as future professional engineers.

One of my favorite papers that we read this semester was “Feminist Theory in Three Engineering Education Journals: 1995–2008” by Beddoes and Borrego. They define different branches of feminism. “Liberal feminism” is the goal for women to be treated the same as men, to get access to the same jobs at the same pay. “Standpoint feminism” points out that “liberal feminism” is too much about fitting women into the jobs and cultures of men, as opposed to asking how things would be different if created from a feminist standpoint.

The professional identity of software engineering is male and White. That’s true from the demographics of who is in the tech industry, but it’s also true from a historical perspective on the systemic bias in computing. Computing has become dominated by men, with many studies and books describing how women were forced out (see for example The Computer Boys Take Over and Programmed Inequality). Our tools privilege one part of the world. Every one of our mainstream programming languages is built on English keywords. That’s a barrier for the 85% of the people on Earth who don’t speak English. (Related point: I recommend Manuel Pérez Quiñones’ TED talk “Why I Want My Voice Assistant to Speak Spanglish,” in which he suggests that the homogeneous background of American software engineers leads to few bilingual user interfaces — surprising when 60% of the human race is bilingual.)

There’s the disconnect. We want students in computing with diverse perspectives and identities. But engineering education is about developing an identity as a future professional engineer. Professional software engineering is male and White. How do we prepare diverse students to be future software engineers when that professional identity conflicts with their identities? We should teach computing, even for CS majors, in ways that go beyond the engineering education goal of developing a professional engineering identity.

We might argue that we want everyone to have the opportunity to participate in CS, but that’s taking the “liberal” perspective. Broadening participation should not be about fitting everyone into the same identity. It’s not enough to say that everyone has the chance to learn the programming languages that are based in English, that are grounded in Western epistemologies, and where the contributions of women have been marginalized. We need to find ways to accept and support the unique identities of diverse people. 

One way to support a “standpoint” perspective on computing education might be to support activity over identity in our CS curriculum. At Georgia Tech, the undergraduate computer science degree is based on Threads (see website). There are threads for Intelligence, People, Media, Devices, Theory, and more — eight of them in all. A BS in CS at Georgia Tech is any two threads, so there are 28 possible paths to a degree (8 choose 2 = 28). This allows students to define their professional identity in terms of what they are going to DO with computing. “I’m studying People and Devices” is something a student might say who wants to create consumer computational devices like the Echo or Roomba. The Threads curriculum allows students to make choices about professional identity, in terms of how they want to contribute to society.

Of course, some of our students want to become software engineers at a FAANG company. That’s great, and we should support them and prepare them for those roles. But we should not require those identities. Computing education is about more than producing software engineers who have the traditional engineering identity.

The Bigger Part of Computing Education. I claimed at the start of this post that “computing education that is outside the intersection with engineering education is much bigger than the part inside.” All the studies I have seen say that’s true. While CS undergraduate enrollment has been exploding, the number of end-user programmers is likely an order of magnitude larger than the number of professional software developers. K-12 enrolls about 50 million students in the United States, and computing education is available to most of them. The number of computing education students who are NOT seeking an engineering identity or profession is much larger than the number who are. That’s the more-than-engineering challenge for computing education.


My thanks to Leo Porter, Cynthia Lee, Adrienne Decker, Briana Morrison, Ben Shapiro, Bahare Naimipour, Tamara Nelson-Fromm, and Amber Solomon who gave me comments on earlier drafts of this post.

April 26, 2021 at 7:00 am

From Guided Exploration to Possible Adoption: Patterns of Pre-Service Social Studies Teacher Engagement with Programming and Non-Programming Based Learning Technology Tools

In October, Bahare Naimipour presented our paper “From Guided Exploration to Possible Adoption: Patterns of Pre-Service Social Studies Teacher Engagement with Programming and Non-Programming Based Learning Technology Tools” (Naimipour, Guzdial, Shreiner, and Spencer, 2021) at the Society for Information Technology and Teacher Education (SITE) 2021 conference. (Draft of the paper is available here. Full paywall version available here.) This paper is the first one about our work with social studies teachers since we received NSF funding. It was also a report on our last face-to-face participatory design session (in March 2020) before the pandemic lockdown. And most importantly, it was our first session with our data visualization tool DV4L in the mix.

I have blogged about our participatory design sessions before (see Bahare’s FIE paper from last Fall). Basically, we set up a group of social studies teachers in pairs, then ask them to try out various visualization tools with activity sheets that we have created to scaffold their process. The goal is to get everyone to make a visualization successfully in less than 10 minutes, and to leave time to explore or try one (or both) of the other tools. There is time for the pairs to persuade each other to (a) come try the cool tool they found or (b) avoid a tool because it’s too hard or not useful. The tools in this set were Vega-Lite (a declarative programming tool which our teachers have found complex but useful in the past), CODAP (a drag-and-drop visualization tool designed for middle and high school students), and our DV4L (a purpose-built visualization tool that makes code visible but not required).
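
To give a sense of what “declarative” means here, a Vega-Lite specification describes how data fields map to marks and axes rather than giving step-by-step drawing commands. The sketch below uses the Altair Python bindings for Vega-Lite (our teachers worked with Vega-Lite specifications directly, not Altair); the dataset and column names are made-up stand-ins, not taken from our activity sheets.

# A minimal, illustrative Vega-Lite chart via the Altair Python bindings.
# The data values and column names (year, population) are hypothetical.
import altair as alt
import pandas as pd

data = pd.DataFrame({
    "year": [1900, 1910, 1920, 1930],
    "population": [1000, 1500, 2200, 3000],  # made-up values
})

chart = (
    alt.Chart(data)
    .mark_bar()                             # what to draw
    .encode(x="year:O", y="population:Q")   # how fields map to axes
)
chart.save("population.html")               # writes an HTML page that renders the chart

This declarative style is powerful, but it also hints at why teachers describe Vega-Lite as complex: every choice about the visual has to be expressed in the specification itself.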

The teachers saw value in having students build visualizations themselves (e.g., “I think making your own data visualization allows for a deeper connection and understanding of the data.”). As we hoped, they teased out what they liked and disliked about the tools. Most of the teachers preferred DV4L over the other two tools because of its simplicity. Critically, they felt that they were engaging with the inquiry and not the tool: “(With DV4L) I found myself asking questions connected to the data itself, rather than asking questions in order to figure out how to work the visual.”

That teachers found DV4L easier than Vega-Lite isn’t really surprising. We were pleased that teachers weren’t disappointed with DV4L’s more limited visualization capabilities. What was really surprising was that our teachers preferred DV4L to CODAP, and this has happened in successive in-service teacher participatory design sessions during the pandemic. CODAP is drag-and-drop, creates high-quality visualizations, and was designed explicitly for middle and high school students. A teacher in one of our in-service design sessions explained to me why she preferred DV4L to CODAP. “CODAP is really powerful, but it would take me at least three hours to get my students comfortable with it. Is it worth it?” Just how much visualization is any social studies teacher going to use? Again, too much focus on the tool gets in the way of the social studies inquiry.

Now you might be asking, “But Mark, do the students learn history with DV4L? And do they see and learn about computing?” Great questions — we’re not there yet. Here’s one of our big questions, after running several more participatory design sessions with teachers since the lockdown: Why aren’t teachers adopting DV4L in their classrooms? They tell us that they really like it, but nobody has adopted it yet. How do we go from “ooh, great tool!” to “and here’s my lesson plan, and we’ll use it next week”? That’s an active area of research for all of us right now.

April 19, 2021 at 7:00 am
