Posts tagged ‘computing education research’

Three types of computing education research: for CS, for CS but not professionally, and for everyone

In February, I was invited to give a lecture at the University of Washington’s Allen School. I had a great day visiting there, even though it was all on Zoom. My talk is available on YouTube:

I got a chance to talk to Jeff Heer and Amy Ko before my visit. The UW CSE department had been thinking about making a push into computing education research. They suggested that I describe the lay of the land — and, in particular, identify where I fit in that space. What I do these days (e.g. Teaspoon languages for history and mathematics classes) isn’t in the mainstream of computing education research, and it was important to tell people unfamiliar with the field, “There’s a lot more out there, and most of it doesn’t look like this.”

CS education research dates back to the late 1960s (see the history chapter that Ben du Boulay and I wrote). ACM SIGCSE started in 1968 with a particular focus on how to teach Computer Science and Information Technology majors. Much of what SIGCSE has published is focused even more specifically on the first course, which we now call CS1. This is a big and important space. Graduates of these majors will be significant drivers of the world’s infrastructure.

There is a growing trend in computing education research to look at people who are learning programming (like in the first circles), but not for the purpose of becoming technology professionals. This includes K-12 CS teachers, end-user programmers, and conversational programmers. This kind of research sometimes appears in venues like CHI, CSCW, and VL/HCC, and occasionally in venues like SIGCSE, RESPECT, and ITiCSE. These circles aren’t scaled correctly by size of potential student population. By most measures, the outer circle (of people learning programming but who aren’t going to become technology professionals) is at least ten times the size of the student population inside the first circles.

My research is one level further out. I’m interested in studying what we should be teaching to everyone, whether or not they’re going to program like professionals, and how we can facilitate that learning. These students might not use the same tools or languages, and certainly have different goals for studying computing. I offer three reasons for the broader “everyone” to learn computing (drawn from the work of C.P. Snow, Alan Perlis, Peter Naur, and Seymour Papert — see this earlier blog post):

  • To make sure that technology is controlled by a democracy.
  • To support new ways of thinking and learning.
  • To be part of a new computational literacy, a new tool for human expression.

This outer circle is far bigger in terms of number of students potentially impacted than any of the inner circles. But it’s also where we know the least in terms of research results.

Take a look at the talk for more on this way of thinking about the field, and how I connect that to existing research. I’d be interested in your perspective on this framing.

May 25, 2022 at 7:00 am 2 comments

Updates: NSF Funding to Study Learning with Teaspoon Languages for Discrete Mathematics

A few months before the pandemic started, Dr. Elise Lockwood at Oregon State reached out to me. She’d heard that I was interested in programming for teaching non-CS subjects, and that’s what she was doing. I loved what she was doing, and we started having regular chats.

Elise is a mathematics education researcher who has been studying how students come to understand counting problems. Like “If you have three letters and four digits, how many license plates can you make?” Or “How many two letter words can you make from the letters ROCKET, if you don’t allow double letters?” She’s been exploring having students learn counting problems by manipulating Python programs to generate all the possible combinations, then counting them. (Check out her recent papers on her Google Scholar page, especially those with her student Adaline De Chenne.)
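
To make the approach concrete, here is a minimal sketch (my own, not Elise’s classroom code) of the kind of Python program students manipulate: it generates every two-letter word from ROCKET with no repeated letters, and then counts them.

    # Generate-and-count: list every two-letter "word" from ROCKET
    # with no double letters, then count how many there are.
    letters = "ROCKET"

    words = []
    for first in letters:
        for second in letters:
            if first != second:          # no repeated letters
                words.append(first + second)

    print(words[:5])    # a peek at the generated words
    print(len(words))   # 30 = 6 choices for the first letter * 5 for the second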

As I said, I loved what she was doing, but Python seemed heavy-handed for this. I was starting to work on our Teaspoon languages. Could we build lighter-weight languages for the same problems?

As I kept reading Elise’s papers, I started working on two possible designs.

In one of them (called Counting Sheets), we play off of students’ understanding of spreadsheets. You can just describe what you want in each column, and the system will exhaustively generate every combination:

Or you can use an “=” formula that knows how to do very simple operations with sets. Here’s a solution to the problem of counting the two-letter words from ROCKET without repeated letters:

This is one of the tools in which we’ve been building support for both Spanish and English keywords (like Pixel Equations, which I talked about last September):

Elise found Counting Sheets intriguing, but she worried whether it would work to make the iterative structures implicit and declarative. Would students need to see the iteration to be able to reason about the counting processes?

So, I built a second Teaspoon language, called Programmed Counting. Here, the loops are explicit, like Python, but the only variable type is a set, and the words and phrases of the language come from counting problems.

Elise was a real sport, trying out the languages as I generated prototypes and finding the holes in what I was doing. We met face-to-face only once, when I went to Portland for SIGCSE 2020 — the one that got cancelled the very morning it was supposed to start. I had lunch with Elise, and we worked for a few hours on the designs. Barb and I went home the next day, and the big pandemic lockdown started right afterwards.

Will these work for learning? We don’t know — but we just got funding from NSF to find out! “We” here is me and PhD student Emma Dodoo, and we’ll be involving Adaline as a consultant. Elise is currently a rotator at NSF, so she’s involved only from the sidelines because of NSF COI issues. Our plan is to run experiments with various combinations of the Teaspoon languages (one or both), standalone and with Python. Do we need Python if we have the Teaspoon languages? Do the Teaspoon languages serve as scaffolding to introduce concepts before starting into Python?

Below is the abstract on the new IUSE grant, as an overview of the project. University of Michigan CSE Communications wrote a nice article about the work, available here. Huge thanks to Jessie Houghton, Angela Li, and Derrick White who turned my LiveCode prototypes into functioning Web versions.

Abstract for NSF

Programming is a powerful tool that scientists, engineers, and mathematicians use to gain insight into their problems. Educators have shown how programming integrated into other subjects can be a powerful tool to enhance learning, from algebra to language arts. However, the cost is learning the programming language. Few students in the US learn programming — fewer than 5% of high school students nationwide. Most students do not have the opportunity to use programming to support their learning. This project is investigating a new approach to designing and implementing programming languages in classrooms: Task-specific programming (TSP) languages. TSP languages are explicitly designed for integration in specific classes, to meet teacher needs, and to be usable with less than 10 minutes of instruction. TSP languages can make the power of programming to enhance learning more accessible. This project will test the value of TSP languages in discrete mathematics, which is a gateway course in some computer science programs.

The proposed project tests the use of two different TSP languages and contrasts them with a traditional programming language, Python. The proposed work will contribute to understanding of (1) the role of programming in learning in discrete mathematics, (2) the value of task-specific languages to scaffold learning, (3) how alternative representational forms for programming influence student use of TSP languages, and (4) how the use of TSP languages alone or in combination with traditional languages enhances students’ sense of authenticity and ability to transfer knowledge.

February 23, 2022 at 7:00 am 1 comment

Helping social studies teachers to teach data literacy with Teaspoon languages

Last year, Tammy Shreiner and I received NSF funding to develop and evaluate computational supports for helping social studies teachers to teach data literacy and computing (see post here). We’re excited about what we’re doing and what we’re learning. Here’s an update on where we are on the project.

Teaspoon Languages

We have a chapter in the new book by Aman Yadav and Ulf Dalvad Berthelsen, Computational Thinking in Education: A Pedagogical Perspective. This is the publication where we introduce the idea of Teaspoon Languages. Teaspoon languages are a form of task-specific languages (TSP => Teaspoon — see?). Teaspoon languages:

  • Support learning tasks that teachers (typically non-CS teachers) want students to achieve;
  • Are programming languages, in that they specify computational processes for a computational agent to execute; and
  • Are learnable in less than 10 minutes, so that they can be learned and used in a one hour lesson. If the language is never used again, it wasn’t a significant learning cost and still provided the benefit of a computational lesson.

We say that we’re adding a teaspoon of computing to other subjects. We aim to address the goal of “CS for All” by integrating computing into other subjects, placing the non-CS subjects first. We believe that programming can be useful in learning other subjects. Our primary goal is to meet learning objectives outside of CS using programming. Teachers (and eventually students) will be learning foundational CS content — but not necessarily the content we typically teach in CS classes. All students should learn that a program is non-WYSIWYG, that it’s a specification of a computational process that gets interpreted by a computational agent, that programming languages can come in many forms, and that all students can be successful at programming.

Our chapter, “Integrating Computing through Task-Specific Programming for Disciplinary Relevance: Considerations and Examples” (see link here), offers two use cases of how we imagine teaspoon languages working in classrooms (history and language arts in these examples). The first use case is around DV4L, our Data Visualization for Learning tool. The second is around a chatbot language that we developed — and have long since discarded.

We develop our teaspoon languages in a participatory design process, where teachers try our prototypes in authentic tasks as design probes, and then they tell us what we got wrong and what they really want. Our current iteration is called Charla-bots and is notable for having user-definable languages. We have a variety of Charla-bot languages now, with English, Spanish, and mixed keywords.

Our vision for teaspoon languages is a contrast with the “Hour of Code” approach. The “Hour of Code” is a one-hour programming activity that many schools use in every grade, typically once a year during CS Ed Week (in early December). The great idea there is to build familiarity and confidence in programming by showing students real computer science every year. The teaspoon languages approach is to imagine one or two little programming learning activities in every social studies, language arts, and mathematics class every year. Each of these languages is tiny and different. The goal is that by the time US students take a CS class (typically in high school or as undergraduates), they will have had many programming experiences, have seen a variety of types of programming languages, and have a sense that “programming isn’t hard.”

Meeting the Needs of Social Studies Teachers

The second paper, “Using Participatory Design Research to Support the Teaching and Learning of Data Literacy in Social Studies” (see link here), was just presented in October by Tammy at CUFA, the College and University Faculty Assembly 2021 of the National Council of the Social Studies. (We have a longer form of this paper that we have just submitted to a journal.) This is an exciting paper for me because it addresses exactly the critical challenge in our work. We can design and implement all kinds of prototype Teaspoon languages, but to achieve our goals, teachers in disciplines other than CS have to see their value and adopt them.

The paper is about our workshops with practicing social studies teachers. Tammy has a goal to teach social studies teachers how to teach data literacy. She has built a large open educational resource (OER) on teaching data literacy in social studies. Learning data literacy involves being able to read, comprehend, and argue with data visualizations, but also being able to create them. That’s where we come in. Her OER links to several tools for creating data visualizations, like Timeline JS, CODAP, and GapMinder. Most of them were not created for social studies teachers or classes. When we run these workshops, our tools are just in the mix. We offer scaffolding for using all of them. These are our design probes. The teachers use the tools and then tell us what they really want. These are our data, and we analyze them in detail — as in this paper.

Let’s jump to the bottom line: We’re not there yet. The teachers love the OER, but get confused about what they should do in their classes. They find the tools for data visualization fascinating, but overwhelming. They like DV4L a lot:

One pre-service teacher explained that they preferred our prototype over other tools because “(with the prototype DV4L) I found myself asking questions connected to the data itself, rather than asking questions in order to figure out how to work the visual.”

Recently, I held a focus group with some social studies teachers who told me that they won’t use any computational tools — they believe in teaching data visualization, but all created with pencil and ruler. That’s our challenge: Can we be more powerful, more enticing, and easy enough to beat out pencil and ruler? Our tool, DV4L, is purpose-built for these teachers, and they appreciate its advantages — and yet, few are adopting it. That’s where we need to work next.

Opportunities for Social Studies Teachers to Get Involved

If you know a social studies teacher who would want to keep informed about our work and perhaps participate in our workshops or studies, please have them sign up on our mailing list. Thank you!

Often, what teachers tell us they really want suggests new features or entirely new tools. We have two ongoing studies where we are looking for design feedback from social studies teachers. If you know social studies teachers who would like to play with something new (and we’ll pay them for their time), would you please forward these to them?

Timeline Builder

We’re looking for K-12 Social Studies teachers to try out our new timeline visualization tool, TimelineBuilder. TimelineBuilder has been made with teachers and usability in mind. In it, ‘events’ are added to a timeline using a form-based interface. Changes to the timeline can be seen automatically, with events showing up as soon as they are added.

This study will consist of completing 2 surveys and 3 asynchronous activities guided by worksheets. All participants will be compensated with a $20 gift card for survey and activity completion. There is an additional option to be invited to a focus group, which will provide additional compensation.

If you are interested in participating in this study, you can complete the consent form and 1st survey here. (Plain text Link: https://forms.gle/gwxfn5bRgTjyothF6 )

Please contact Mark Guzdial (mjguz@umich.edu) or Tamara Nelson-Fromm (tamaranf@umich.edu) with any questions.

The University of Michigan Institutional Review Board Health Sciences and Behavioral Sciences has determined that this study is exempt from IRB oversight.

DV4L Scripting Study

Through our work with social studies educators thus far, we have designed the tools DV4L-Basic and DV4L-Scripting specifically to support data literacy standards in social studies classrooms. If you are a social studies middle or high school teacher, we would love to hear your feedback. If you can spare less than an hour of your time to participate in our study, we will send you a $50 gift card for your time and valuable feedback.

If you are interested but want more details, please visit/complete the consent form here: https://forms.gle/yo3yWGThQ1wnhu7g7

For questions or concerns, please contact Mark Guzdial (mjguz@umich.edu) or Bahare Naimipour (baharen@umich.edu).

References

Guzdial, M. and Tamara L. Shreiner. 2021. “Integrating Computing through Task-Specific Programming for Disciplinary Relevance: Considerations and Examples.” In Computational Thinking in Education: A Pedagogical Perspective, Aman Yadav and Ulf Dalvad Berthelsen (Eds). PDF of submitted version.

Shreiner, Tamara L., Mark Guzdial, and Bahare Naimipour. 2021. “Using Participatory Design Research to Support the Teaching and Learning of Data Literacy in Social Studies.” Presented at CUFA, the College and University Faculty Assembly 2021 of the National Council of the Social Studies. PDF

December 22, 2021 at 10:00 am 10 comments

Media Computation today: Runestone, Snap!, Python 3, and a Teaspoon Language

I haven’t had the chance to teach Media Computation¹ since I moved to the University of Michigan, so I haven’t done as much development on the curriculum and infrastructure as I might like if I were teaching it today. I did get a new version of JES (Jython Environment for Students) released in March 2020 (blog post here), but have rarely even started JES since then.

But using Jython for Media Computation is so 2002. Where is Media Computation going today?

I’ve written a couple of blog posts about where Media Computation is showing up outside of JES and undergraduate CS. Jens Moenig has been doing amazing things with Media Computation in Snap! — see this blog post from last year on his Snap!Con keynote talk. SAP is now offering a course From Media Computation to Data Science using Snap! (see link here). Barbara Ericson’s work with Runestone ebooks (see an example blog post here) includes image manipulation in Python inside the browser at an AP CS Principles level (see example here). The amazing CS Awesome ebook that Beryl Hoffman and Jen Rosato have been doing with Barb for AP CS A includes in-browser coding of Java for the Picture Lab (see example here).

I was contacted this past January by Russ Tuck and Jonathan Senning. They’re at Gordon College, where they teach Media Computation, but they wanted to do it in Python 3 instead of Jython. You can find it here. It works SO well! I miss having the image and sound explorers, but my basic demos with both images and sounds work exactly as-is, with no code changes. Bravo to the Gordon College team!

(Screenshot: On the right is Python 3 code doing Media Computation. On the left are two images — the original in the middle, and a red-reduced image on the far left.)
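
To show what “works exactly as-is” means, here is the classic red-reduction demo in the JES-style Media Computation API (a sketch of my usual demo; in JES these functions are built in, and the exact import needed in the Gordon College Python 3 port may differ):

    # Halve the red channel of every pixel in a picture, using the
    # JES-style Media Computation functions (makePicture, getPixels,
    # getRed, setRed, show). In JES these are built in; a Python 3
    # port may require importing its media module first.
    def decreaseRed(picture):
        for pixel in getPixels(picture):
            value = getRed(pixel)
            setRed(pixel, value * 0.5)   # reduce red by half

    picture = makePicture(pickAFile())   # choose an image file
    decreaseRed(picture)
    show(picture)                        # display the red-reduced picture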

Most of my research these days is grounded in Task-Specific Programming languages, which I’ve blogged about here (here’s a thread of examples here and here’s an announcement of funding for the work in social studies). We now refer to the project as Teaspoon Computing or Teaspoon Languages — task-specific programming => TSP => Teaspoon. We’re adding a teaspoon of computing into other subjects. Tammy Shreiner and I have contributed a chapter on Teaspoon computing to a new book by Aman Yadav and Ulf Dalvad Berthelsen (see announcement of the book here).

We have a new Teaspoon language, Pixel Equations, that uses Media Computation to support an Engineering course in a Detroit Public School. Here, students choose a picture as input, then (1) enter the boolean equations for what pixels to select and (2) enter equations for new red, green, and blue values for those pixels. The conditionals and pixel loops are now implicit.
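
To see what is being made implicit, here is roughly the same computation written out in ordinary Python with the loop and conditional explicit (a sketch using Pillow; the selection condition and the new color values are made-up examples, not a program from the Detroit course):

    # The explicit-loop version of what a Pixel Equations program expresses:
    # (1) a boolean condition choosing which pixels to change, and
    # (2) equations for the new red, green, and blue values.
    # Requires Pillow (pip install Pillow); "input.jpg" is a placeholder.
    from PIL import Image

    image = Image.open("input.jpg").convert("RGB")
    pixels = image.load()

    for x in range(image.width):
        for y in range(image.height):
            red, green, blue = pixels[x, y]
            if red > 150 and blue < 100:          # (1) which pixels to select
                pixels[x, y] = (red, 0, blue)     # (2) new red, green, blue values

    image.save("output.jpg")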

In several of our tools, we’re now exploring bilingual or multilingual interfaces, inspired by Sara Vogel’s work on translanguaging (see paper here) and Manuel Pérez-Quiñones’s recent work on providing interfaces for bilingual users (see his TED talk here and his ACM Interactions paper here). You can see in the screenshot below that colors can be referenced in either English or Spanish names. We’re now running participatory design sessions with teachers using Pixel Equations.

I’m planning a series of blog posts on all our Teaspoon languages work, but it’ll take a while until I get there.


  1. For new readers, Media Computation is a way of introducing computing by focusing on data abstractions used in digital media. Students write programs to manipulate pixels of a picture (to create photo filters), samples of a sound (e.g., to reverse sounds), characters of a text, and frames of a video (for video special effects). More at http://mediacomputation.org

September 6, 2021 at 7:00 am 5 comments

ICER 2021 Preview: The Challenges of Validated Assessments, Developing Rich Conceptualizations, and Understanding Interest #icer2021

The International Computing Education Research Conference (ICER) 2021 is this week (website here). It should have been in Charleston, South Carolina (one of my favorite cities), but it will instead be all on-line. Unlike previous years, if you are not already registered, you’re unfortunately out of luck. As seen in Matthias Hauswirth’s terrific guest blog post from last week (see here), getting set up in Clowdr is complicated. ICER won’t have the resources to bring people on-line and get them through the half hour prep sessions on-the-fly. There will be no “onsite” registration.

However, all the papers should be available in the ACM Digital Library (free for some time), and I think all the videos of the talks will be made available after the fact, so you can still gain a lot from the conference. Let me point out a few of the highlights that I’m excited about. (As of this writing, the papers are not yet appearing in the ACM DL — all the DOI links are failing for me. I’ll include the links here in hopes that everything is fixed soon.)

Our keynoter is Tammy Clegg, whom I got to know when she was a PhD student at Georgia Tech. She’s now at U. Maryland doing amazing work around computation and relevant science learning. I’m so looking forward to hearing what she has to say to the ICER community.

Miranda Parker, Allison Elliott Tew, and I have a paper “Uses, Revisions, and the Future of Validated Assessments in Computing Education: A Case Study of the FCS1 and SCS1.” This is a paper that we planned to write when Miranda first developed the SCS1 (first published in 2016). We created the SCS1 in order to send it out to the world for use in research. We hoped that we could sometime later do in CS what Richard Hake did in Physics, when he used the FCI to make some strong statements about teaching practices with a pool of 6,000 students (see paper here). Hake’s paper had a huge impact, as it started making the case to shift from lecture to active learning. Could we use the collected uses of the SCS1 to make some strong arguments for improving CS learning? We decided that we couldn’t. The FCI was used in pretty comparable situations, and it’s tightly focused on force. CS1 is far too broad, and the FCS1 and SCS1 are being used in so many different places — not all of which they’ve been validated for. Our retrospective paper is kind of a systematic literature review, but it’s done from the perspective of tracing these two instruments and how they’ve been used by the research community.

One of the papers that I got a sneak peek at was “When Wrong is Right: The Instructional Power of Multiple Conceptions” by Lauren Margulieux, Paul Denny, Katie Cunningham, Mike Deutsch, and Ben Shapiro. The paper explores the tensions between direct instruction and more student-directed approaches (like constructionism and inquiry learning) (see a piece I did in 2015 about these tensions). The basic argument of this new paper is that just telling students the right answer is not enough to develop rich understanding. We have to figure out how to help students hold multiple conceptions (not all of which are canonical or held by experts), so that they can compare and contrast them, and use the right one at the right time.

I’m chair for a session on interest. While I haven’t seen the papers yet, I got to watch the presentations (which are already loaded in Clowdr). “Children’s Implicit and Explicit Stereotypes on the Gender, Social Skills, and Interests of a Computer Scientist” by de Wit, Hermans, and Aivaloglou is a report on a really interesting experiment. They look at how kids associate gender with activities (e.g., are boys more connected to video games than girls?). The innovative part is that they asked the questions and timed the answers. A quick answer likely connects to implicit beliefs. If they take a long time to answer, maybe they told you what they thought you wanted to hear? The second paper, “All the Pieces Matter: The Relationship of Momentary Self-efficacy and Affective Experiences with CS1 Achievement and Interest in Computing” by Lishinski and Rosenberg, asks what leads to students succeeding and wanting to continue in computing. They look at students’ affective state coming into CS1 (e.g., how much do they like computing? How much do they think that they can succeed in computing?), and relate that to students’ experiences and affective state after the class. They make some interesting claims about gender — that gender gaps are really self-efficacy gaps.

One of the more unusual sessions is a pair of papers from IT University of Copenhagen that make up a whole session. ICER doesn’t often give over a whole session to a single research group on multiple papers. One is “Computing Educational Activities Involving People Rather Than Things Appeal More to Women (Recruitment Perspective)” and the other is “Computing Educational Activities Involving People Rather Than Things Appeal More to Women (CS1 Appeal Perspective).” The pitch is that framing CS1 as being about people rather than things leads to better recruitment (first paper) and more success in CS1 (second paper) in terms of gender diversity. It’s empirical support for a hypothesis that we’ve heard before, and the authors frame the direction succinctly: “CS is about people not things.” Is that succinct enough to get CS faculty to adopt this and teach CS differently?

August 16, 2021 at 7:00 am Leave a comment

The Drawbacks of the One-Second Conference Trip. Or, how to prepare for ICER 2021. Guest Blog Post from Matthias Hauswirth

I miss physical conferences. But there are some things about them I do not miss at all. I don’t miss sprinting through airports to catch a connecting flight. I don’t miss standing in line at immigration for over an hour, just to enter the next long line to get through customs. And I don’t miss sitting in a tight middle seat for ten hours straight.

With today’s virtual conferences the trips are more pleasant. I can travel there with a single mouse click. It’s a one-second trip. And I love that! *

However, by eliminating the trip to the conference, we also eliminated an opportunity to prepare for the conference while being stuck in airports, planes, stations, and trains. My physical conference trips used to provide ample idle time. I used that time to contact colleagues to schedule a dinner, lunch, or coffee at the conference; to read the conference program and highlight the talks I wanted to see; to check out the map of the venue to know where to find the relevant rooms; and even to read a paper or two to prepare for talking to the authors at the conference.

That kind of preparation takes more than a second. And without the time provided by those arduous trips, I might show up ill prepared and miss out on half of the fun.

So here is my plan. For my next one-second conference trip, I will allocate a little bit of extra time to prepare. Not crammed into an airplane seat, but at home, in a comfy chair, with a nice cup of coffee.

Oh, and if your next conference trip takes you to ICER 2021 this coming Monday, here are some suggestions from the ICER Chairs for how to prepare for this conference, which will be hosted in the most recent version of Clowdr:

  • Find the invitation email you received from Clowdr (check your spam folder, too!) and log in (3 minutes).
  • Watch the ICER 2021 Clowdr Intro video (13 minutes). This will teach you the basics of how to navigate the platform. We recommend following along interactively on the Clowdr site as you watch, to familiarize yourself with the navigation.
  • Watch the ICER 2021 Paper Sessions: Participant Experience video (14 minutes). This will teach you how our paper sessions will work. You won’t just be watching videos, you’ll also be interacting while you watch, talking in small groups afterwards, and asking questions.
  • Once logged in, read the ICER Clowdr Experience FAQ page (4 minutes). This has the videos above and more detail for specific types of events.
  • On Clowdr, read the Code of Conduct page (3 minutes). Everyone is responsible for following these rules to ensure everyone feels safe and welcome.
  • On Clowdr, read the How to Set Up Your Profile page and set up your profile (3 minutes). This ensures people know who you are, what your name and pronouns are, where you’re visiting from, and what roles you’re playing at the conference. 

In Clowdr you will find a lot of content, including the entire program. We recommend that inside Clowdr you “star” events you are interested in to create your personal schedule. There is a page for each paper and poster/lightning talk. On each paper page you already find the presentation as an embedded video, on each ICER poster page there’s the poster pitch video and the PDF of the poster, and on each ICER lightning talk page you find the talk slide. Have a quick look to plan your personal schedule. And while you’re there, why not already leave a message or comment for the authors in the chat at the right of the paper/poster’s page? Note that the links to the papers in the ACM DL are not yet active; we expect ACM to make the DOIs work and the papers visible in the DL by the start of the conference.

We are confident that with an hour or so of up-front effort you will get much more out of the conference! (We suspect, though, that you will end up spending more than an hour because the content draws you in!) ICER 2021 is a compact conference packed with exciting content and interaction. Log in now to make the most of it!

*) I also very much love the minimal carbon footprint, low cost, and reduced health risks of virtual conferences.

August 13, 2021 at 1:00 pm 1 comment

Why aren’t more girls in the UK choosing to study computing and technology? Guest blog post by Peter Kemp

The Guardian raised the question in the title in this article in June. Pat Yongpradit sent it to me and Peter Kemp, and Peter’s response was terrific — insightful and informed by data. I asked him if I could share it here as a guest post, and he graciously agreed.

We’ve just started a 3 year project, scaricomp, that aims to look at girls’ performance and participation in computer science in English schools. There’s not much to see at the moment, as we started in April, but we’re hoping to sample 5000+ students across schools with large numbers of students taking CS and/or high numbers of females in the CS cohorts. I’ll let you know when we have some analysis in hand.

You reference The Guardian article’s quote: “In 2019, 17,158 girls studied computer science, compared with the 20,577 girls who studied ICT in 2018”. It’s worth noting that the 2018 ICT figure was the end of the line for ICT; numbers in previous years were much higher, and the female figure was actually ~40% of the overall ICT entries, whilst it represents about 20% of the GCSE CS cohort, i.e. females were proportionally better represented in ICT than CS. (For a fuller picture of the changing numbers and demographics in English computing, see slide 8 of this, or the video presentation.) It’s also worth noting that since the curriculum change in 2012/13 we’ve lost the majority of time dedicated to teaching computing (including CS) at age 14-16; I’ve argued that this has had a disproportionate impact on girls and poorer students (pages 45-48).

To add a bit of context from England: Students typically pick 8-10 subjects for GCSE, though their ‘options’ might be limited. Most schools will insist that students take Maths, English Language, English Literature, Physics, Chemistry, Biology, and often: French or German, and History or Geography. This leaves students with one or two actual ‘options’. Many schools are also imposing entry requirements on GCSE CS, only letting the high achieving students (often focusing on maths) onto the course; this will likely have an impact on access to the curriculum for poorer students, who are less likely to achieve well in mathematics. Why don’t females pick CS in the same way they picked ICT? This might well be linked to curriculum, role models, contextualisation etc.

One of the reasons given for the curriculum change in 2012 was that students were being “bored to death” by ICT, with ICT generally being the application of software products to solve problems and the implications of technology for the world. The application of technology to the world lends itself to the contextualisation of the curriculum and the assessment materials. There was a lot of project-based assessment with real world scenarios for students to engage with, e.g. making marketing materials for businesses, using spreadsheets to organise holiday bookings, etc. (https://web.archive.org/web/20161130183550if_/http://www.aqa.org.uk/subjects/computer-science-and-it/gcse/information-and-communication-technology-4520). The GCSE CS is a different beast. It can be contextualised, but this is probably more difficult to do as there is an awful lot of material to cover, and the assessment methodology is entirely exam based and on paper for the largest exam boards. Anecdotally we hear of schools cutting down on programming time on computers, as the exam is handwritten.

Data looking at what females ‘liked’ in the old ICT curriculum is quite limited, but what does exist places some of the ‘non-CS’ elements quite highly. So, the actual curriculum content might have a part to play here. Having taught ICT (and CS) for many years, most students I knew really enjoyed the ICT components. I’d argue that the pre-reform discourse around ICT being “useless, boring, easy” and CS being “useful, exciting, rigorous” was an easy political position to take, and not reflective of reality where schools had competent teachers. We now find ourselves in a position where we probably have a little too much CS, and not enough digital literacy / ICT for the general needs of students. I and people like Miles Berry (p49) have argued for a more generalist qualification which maintains elements of CS, though there appears to be little political will to make this happen.

To add another suggestion as to why we’re seeing females disengaging, within the English context, we see females substantially underachieving at GCSE in comparison to their other subjects and to males of similar ‘abilities’ (ability here being similar grade profiles in other subjects). Why this is remains unclear; we see similar underachievement in Maths and Physics. My fear is that encouraging females to take CS might lead to their self-efficacy being knocked and therefore make them less likely to pursue further study or a career in tech. We also found that females from poorer backgrounds were more likely to pick GCSE CS than their middle-class peers; we speculate that this might be the result of different cultural/family pressures and a keener engagement with the ‘employability’ and ‘good pay’ discourse that often surrounds the representation of studying CS, however true this might be for these groups in reality. More research on the above coming soon through scaricomp.

Additionally, in terms of the UK picture, you’ll probably want to check in with Sue Sentance and the Gender Balance in Computing Project. One of their theories for the decline in computing is that CS is being timetabled at the same time as other (generally) more attractive subjects for females. I’m not sure if they’ve started this part of the research yet, but it’s worth checking in. They are running interventions across the country, but I don’t believe that they are trying to do a nationally representative survey.

August 2, 2021 at 7:00 am Leave a comment

There is transfer between programming and other subjects: Skills overlap, but it may not be causal

A 2018 paper by Ronny Scherer et al., “The cognitive benefits of learning computer programming: A meta-analysis of transfer effects,” was making the rounds on Twitter. They looked at 105 studies and found that there was a measurable amount of transfer between programming and situations requiring mathematical skills and spatial reasoning. But here’s the critical bit — it may not be causal. We cannot predict that students learning programming will automatically get higher mathematics grades, for example. They make a distinction between near transfer (doing things that are very close to programming, like mathematics) and far transfer, which might include creative thinking or metacognition (e.g., planning):

Despite the increasing attention computer programming has received recently (Grover & Pea, 2013), programming skills do not transfer equally to different skills—a finding that Sala and Gobet (2017a) supported in other domains. The findings of our meta-analysis may support a similar reasoning: the more distinct the situations students are invited to transfer their skills to are from computer programming the more challenging the far transfer is. However, we notice that this evidence cannot be interpreted causally—alternative explanations for the existence of far transfer exist.

Here’s how I interpret their findings. Learning programming involves learning a whole set of skills, some of which overlap with skills in other disciplines. Like being able to evaluate an expression with variables, once you know the numeric values for those variables — you have to do that in programming and in mathematics. Those things transfer. Farther transfer depends on how much overlap there is. Certainly, you have to plan in programming, but not all of the sub-skills for the kinds of planning used in programming appear in every problem where you have to plan. The closer the problem is to programming, the more there’s an overlap, and the more we see transfer.

This finding is like a recent paper out of Harvard (see link here) that shows that AP Calculus and AP CS both predict success in undergraduate computer science classes. Surprisingly, regular (not AP) calculus is also predictive of undergraduate CS success, but not regular CS. There are sub-skills in common between mathematics and programming, but the directionality is complicated.

We have known for a long time that we can teach programming in order to get a learning effect in other disciplines. That’s the heart of what Bootstrap does. Sharon Carver showed that many years ago. But that’s different than saying “Let’s teach programming, and see if there’s any effect in other classes.”

So yes, there is transfer between programming and other disciplines — not that it buys you much, and the effect is small. But we can no longer say that there is no transfer.

July 5, 2021 at 7:00 am 2 comments

Rules work as a way of communicating computation at a mechanistic level without teaching programming

Sometimes as a reviewer, you get to read a paper that you wish was published immediately. That’s how I felt when I got to review Eliane Wiese and Marcia Linn’s paper “It Must Include Rules”: Middle School Students’ Computational Thinking with Computer Models in Science. It was published in ACM TOCHI in April (see link here).

Eliane and Marcia offer a solution to a problem that teachers face when they want to teach about computational models, but they don’t want to teach programming. How do you get students to reason about the models underlying the simulations they’re exploring without talking about program code? And if you do talk about some notation, some representation of the model, what can you expect students to reason about without teaching them the notation or representation first?

Eliane and Marcia show that rules work. They have students interact with simulations, and then show them rules that might be in that model. Like, in a simulation of light, photosynthesis, and glucose levels in plants, a rule might be: “When light is on, total glucose made increases.” Eliane and Marcia show rules to students and ask “Are these in the model?” In their abstract, they write:

In our sample, 99% of students identified at least one key rule underlying a model, but only 14% identified all key rules; 65% believed that model rules can contradict; and 98% could not distinguish between emergent patterns and behaviors that directly resulted from model rules. Despite these misconceptions, compared to the “typical” questions about the science content alone, questions about model rules elicited deeper science thinking, with 2–10 times more responses including reasoning about scientific mechanisms. These results suggest that incorporating computational thinking instruction into middle school science courses might yield deeper learning and more precise assessments around scientific models.

The misconceptions don’t bother me. Students will have misconceptions about models — that’s part of teaching science with models. What’s fascinating to me is that the rules worked. Students reasoned mechanistically about the computational models.
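
To make that mechanistic reading concrete, here is a tiny sketch (mine, not the models from the study) of how a rule like “When light is on, total glucose made increases” can be read as a condition-action pair that a simulation applies at every time step:

    # One rule from the plant model, read as condition-action code that
    # runs every time step. The light schedule is a made-up example.
    light_on = True
    total_glucose_made = 0

    for step in range(20):
        if step == 10:
            light_on = False          # the light goes off halfway through
        if light_on:                  # rule: when light is on...
            total_glucose_made += 1   # ...total glucose made increases

    print(total_glucose_made)         # 10: glucose accumulated only while the light was on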

My favorite result in this study was where they asked students to predict what would happen if they added a new rule to the model. Basically, “What happens if we change the program like this?” Students were way better at playing these what-if games if the question was posed as a rule. Quoting from the paper:

Asking students to make predictions about the implementation of incorrect rules led to more scientific reasoning about mechanisms than simply asking students about a causal relationship portrayed in a correct model. This pattern was evident for both model contexts, with twice as many workgroups proposing mechanisms with the New Rule question compared to the Typical question for Global Climate (29% vs. 14%) and ten times as many workgroups doing so for Chemical Reactions (53% vs. 5%).

Students can reason about computational models described as rules, even without instruction on rules. That’s a terrific result. It’s one that I’m thinking about how to use in my task-specific programming languages.

Now, this isn’t saying that students can’t reason with functions or with imperative statements. Maybe functional or procedural programming paradigms would work, too. Eliane and Marcia have found one approach that does work. They offer us a way to integrate computational modeling into science education, with real discussion of the mechanism of the models, without teaching programming first.

June 28, 2021 at 7:00 am Leave a comment

Katie Cunningham’s Purpose-first Programming: Glass box scaffolding for learning to code for authentic contexts

Last month, Katie Cunningham presented her CHI 2021 paper “Avoiding the Turing Tarpit: Learning Conversational Programming by Starting from Code’s Purpose.” The video of her presentation is available here. This is the final study from her dissertation work, about which I blogged here.

Katie is trying to support the kinds of programming learners whom she discovered in her work on tracing — students who want to write programs, but have no interest in understanding the details of how programs work. As one said to her (which became the title of her ICLS 2020 paper), “I’m not a computer.” Block-based programming won’t work for her learners because, like most conversational programmers, they care about the authenticity of the language they’re learning. They don’t want to use blocks. They want to see the code that developers see — a form of what Cindy Hmelo-Silver and I called “glass-box scaffolding.”

Katie focused on one particular purpose: writing Python code to scrape Web pages using Beautiful Soup. She and Rahul Bejarano dug into Beautiful Soup code on GitHub and identified a set of code chunks (“plans”) that were really used for this purpose and which could be recombined in useful ways. She then developed a curriculum as a Runestone ebook for teaching those plans, in which she taught students how to combine them (using Parsons Problems) and, importantly, how to tailor them for specific needs. Here’s a figure from her paper with an example plan and a description of the “slots” for tailoring.
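
As an illustration of the idea (my sketch, not a plan from Katie’s curriculum), a web-scraping “plan” might look like a short chunk of Beautiful Soup code whose tailorable slots are the URL, the tag to collect, and what to do with each match:

    # A reusable "scrape a page" plan, with the tailorable slots marked.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com"               # slot: which page to scrape
    page = requests.get(url)
    soup = BeautifulSoup(page.text, "html.parser")

    items = soup.find_all("a")                # slot: which tag to collect
    for item in items:
        print(item.get_text())                # slot: what to do with each match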

My favorite part of this study is her analysis of how students debugged using these plans. They did make mistakes, and they fixed them. They reasoned about their programs in terms of the plans. In a think aloud, they talked about the names of the plans and the slots, and where they tailored the plan wrong. It’s not that they were just copying and pasting chunks of Python code. They were reasoning about the chunks — but they were not doing much reasoning about Python. In some sense, she defined a task-specific programming language whose components happened to be defined in terms of visible lines of Python code.

My favorite outcome of the study is that students came away excited and felt that they were doing something “realistic” — from a half hour lesson. One participant asked if she could do this kind of learning for different purposes every week, a kind of DuoLingo for programming. Those are strong results from a short intervention. It is a pretty amazing intervention.

I blogged for CACM this month on how what we predict about knowledge transferring between programming languages may be based on an assumption of mathematics background that might have been true in the 1970s but is less likely to be true today (see post here). I suggest that we need to develop ways of teaching programming that don’t rely on mathematics, and that instead connect to the programmer’s purpose and task. Katie’s work is what I had in mind as an example.

June 21, 2021 at 7:00 am 9 comments

Call for Nominations for Editor-in-Chief of ACM Transactions on Computing Education

Call for Nominations: Editor-In-Chief, ACM Transactions on Computing Education

The term of the current Editor-in-Chief (EiC) of the ACM Transactions on Computing Education (TOCE) is coming to an end, and the ACM Publications Board has set up a nominating committee to assist the Board in selecting the next EiC.  TOCE was established in 2001 and is a premier journal for computing education, publishing over 30 papers annually.

Nominations, including self nominations, are invited for a three-year term as TOCE EiC, beginning on September 1, 2021.  The EiC appointment may be renewed at most one time. This is an entirely voluntary position, but ACM will provide appropriate administrative support.

Appointed by the ACM Publications Board, Editors-in-Chief (EiCs) of ACM journals are delegated full responsibility for the editorial management of the journal consistent with the journal’s charter and general ACM policies. The Board relies on EiCs to ensure that the content of the journal is of high quality and that the editorial review process is both timely and fair. He/she has final say on acceptance of papers, size of the Editorial Board, and appointment of Associate Editors. A complete list of responsibilities is found in the ACM Volunteer Editors Position Descriptions. Additional information can be found in the following documents:

  • Rights and Responsibilities in ACM Publishing
  • ACM’s Evaluation Criteria for Editors-in-Chief

Nominations should include a vita along with a brief statement of why the nominee should be considered. Self-nominations are encouraged. Nominations should include a statement of the candidate’s vision for the future development of TOCE. The deadline for submitting nominations is July 21, 2021, although nominations will continue to be accepted until the position is filled.

Please send all nominations to the nominating committee chairs, Mark Guzdial (mjguz@umich.edu) and Betsy DiSalvo (bdisalvo@cc.gatech.edu).

The search committee members are:

  • Betsy DiSalvo (Georgia Institute of Technology), co-chair
  • Mark Guzdial (University of Michigan), co-chair
  • Diana Franklin (University of Chicago)
  • Andrew Luxton-Reilly (University of Auckland)
  • Aman Yadav (Michigan State University)

Louiqa Raschid (University of Maryland) will serve as the ACM Publications Board Liaison.

June 14, 2021 at 7:00 am Leave a comment

Call for Special Issue on CT in Early Childhood: Guest Blog Post from Wang, Bers, and Lee

In my blog post on what I got wrong in the 2010’s, I pointed to the many definitions of computational thinking (CT) that I had shared in this blog. I said that I hoped that I wouldn’t be offering any more, but I was probably wrong on that too.

Below you will find (yet another) definition of CT, which is pretty intriguing.


Early Childhood Research Quarterly
Call for Papers

Special Issue: Examining Computational Thinking in Early Childhood

Guest Editors 

X. Christine Wang, State University of New York at Buffalo, wangxc@gmail.com

Marina Bers, Tufts University, marina.bers@tufts.edu

Victor R. Lee, Stanford University, vrlee@stanford.edu 

Described as the new literacy of the 21st century, computational thinking (CT) is broadly defined as systematic analysis, exploration, and testing of solutions to open-ended and often complex problems based on the analytical process rooted in the discipline of computer science. Driven by the increasing demand for computing professionals, CT has been popularized as a key goal of computer science teaching and learning in K-12 schools. On the one hand, much new research is currently exploring the relationships between CT and coding, CT in everyday unplugged activities, and CT and cognitive and socio-emotional domains of knowledge. On the other hand, there is also heated debate about the validity and applicability of CT, whether CT refers to a new set of competencies, and what value CT has in schooling. Because of the complicated nature of these explorations and conversations, CT has drawn considerable attention in educational research and practice, including early childhood education in recent years (Bers, 2018; Jung & Won, 2018; Toh et al., 2016; Xia & Zhong, 2018).

To help advance this burgeoning area of research, this special issue seeks empirical and theoretical contributions about young children’s (ages 2-8) CT learning and teaching. We encourage researchers to explore, but not limit themselves to, one or more of the following topics:

(1) Critical examinations of definitions and/or conceptualizations of CT in early childhood

(2) Operationalizations of CT learning goals and practices in early childhood

(3) Developmentally appropriate approaches in promoting CT in early childhood

(4) Relationships between CT and other domains of learning and development

(5) Assessment of CT learning and development in early childhood

(6) Supports for early childhood educators who are bringing CT to young children

(7) Equity and inclusion issues related to CT learning and teaching

For this special issue, we are soliciting a wide range of manuscripts describing rigorous empirical studies, design studies, integrative reviews, theoretical perspectives, or evaluation studies. We welcome studies that employ diverse theoretical and methodological approaches.

Submission Details
We are inviting interested researchers to submit a short proposal prior to manuscript submission. The proposal should be no more than 500 words (excluding references, images, or figures) and must include the following information: (1) Title/Author(s), (2) Key Issues/Problems, (3) Methods/Processes, (4) Findings/Evidence-Based Claims, and (5) Relevance and Contribution to the Special Issue.

Please submit your proposal via email to the Guest Editors with the subject line “ECRQ: CT in Early Childhood”:  X. Christine Wang (wangxc@gmail.com), Marina Bers (marina.bers@tufts.edu), and Victor R. Lee (vrlee@stanford.edu).

The guest editors will provide timely feedback and select proposed papers based on their quality and suitability for this special issue. Selected authors will then be invited to submit a full manuscript.

All full manuscripts must be submitted via the EM system: https://www.editorialmanager.com/ecrq/default.aspx. After you log in and click on “Submit New Manuscript,”  please select “VSI: CT in Early Childhood” on the “Select Article Type” page and proceed accordingly. 

Invitation to submit a full paper will not be a guarantee of acceptance. All manuscripts will undergo the standard ECRQ double-blind peer review procedure. For further information please contact Managing Guest Editor X. Christine Wang (wangxc@gmail.com) or Special Content Editor Gary Resnick (sevenalaris@msn.com).

Deadlines
Proposal submission: July 15, 2021  
Invitation for manuscript submission: August 15, 2021
Manuscript Submission: December 15, 2021
 

May 24, 2021 at 7:00 am Leave a comment

Seeking Collaborators for a Study of Impostor Phenomenon in Computer Science: Guest Blog Post from Leo Porter

Impostor Phenomenon (IP)** is often described as high-achieving individuals experiencing feelings of intellectual phoniness.  Based on the research conducted in various fields with different populations over the past four decades, we know that IP causes problems for those who experience it, including being associated with anxiety and depression.  

In computer science, we often hear our colleagues and students talking about their struggles with IP.  There are panels on IP at Grace Hopper and other conferences aimed at helping members of our community cope with these feelings.  But how prevalent is it in CS?

An informal survey conducted by Blind asked participants to self-report their feelings of IP, and among the 10,000 software engineers who participated, 58% reported feelings of IP [5]. However, self-reporting isn’t necessarily an accurate way to measure IP. In a pilot study at UC San Diego, we used the Clance IP scale [1], a validated instrument that is used in the majority of studies to measure IP. After administering the Clance IP scale in upper-division and graduate CS courses, we found that 57% of participants met the diagnostic criteria for experiencing IP [7], which was quite similar to the earlier reported finding from Blind. What was most concerning about our results was the difference by gender among the students: 52% of men met the diagnostic criteria whereas 71% of women did. That’s a huge (and statistically significant) difference!

But what does this mean? We can look at results from other studies and see that computer science seems to have higher rates of students who experience IP than fields like the health professions (31%) [4], undergraduates studying education (28%) [3], undergraduates in business-related fields (39%) [8], and undergraduates from racially underrepresented groups studying educational psychology (48%) [2]. This suggests that CS may be an outlier, with our students struggling more with IP than those in other fields. However, a recent study among medical students [6] reported similar results to what we found in CS, suggesting computing might not be alone.

Before we begin asking questions of why CS (and perhaps also medicine) might be outliers, we need to conduct a replication study to verify (or refute) these initial findings from just a single institution.  To that end, we’re putting out a call for other researchers to help participate in a large-scale replication effort to answer these questions:  What is the rate of IP among students in computer science courses?  Does the rate of IP change as students move farther through the curriculum?  Are students from underrepresented groups in computer science more likely to experience IP than those from traditionally represented groups?

If you are willing to participate in this replication effort, please fill out this brief interest form:

https://forms.gle/MWYPFnmepWT9nMzNA

For those participating, we’ll ask that you administer the instrument in at least one course at your institution.  If you are interested, we’ll also invite you to engage in the data analysis and authoring of any related publications.  We’ll also help you obtain Human Subjects approval at your institution or leverage our approved protocol at UC San Diego.

** Impostor Phenomenon is the original term [1]; however, Impostor Syndrome and Impostor Phenomenon are commonly used interchangeably.

References

  1. Sabine M. Chrisman, W. A. Pieper, Pauline R. Clance, C. L. Holland, and Cheryl Glickauf-Hughes. 1995. Validation of the Clance Impostor Phenomenon Scale. Journal of Personality Assessment 65, 3 (1995), 456–467.
  2. Kevin Cokley, Leann Smith, Donte Bernard, Ashley Hurst, Stacey Jackson, Steven Stone, Olufunke Awosogba, Chastity Saucer, Marlon Bailey, Davia Roberts. 2017. Impostor feelings as a moderator and mediator of the relationship between perceived discrimination and mental health among racial/ethnic minority college students. Journal of Counseling Psychology 64, 2 (2017), 141–154.
  3. Joseph R. Ferrari. 2005. Impostor Tendencies And Academic Dishonesty: Do They Cheat Their Way To Success? Social Behavior and Personality: an international journal 33, 1 (2005), 11–18.
  4. Kris Henning, Sydney Ey, and Darlene Shaw. 1998. Perfectionism, the impostor phenomenon and psychological adjustment in medical, dental, nursing and pharmacy students. Medical Education 32, 5 (1998), 456–464.
  5. Kim. 2018. 58 Percent of Tech Workers Feel Like Impostors. https://blog.teamblind.com/index.php/2018/09/05/58-percent-of-tech-workers-feel-like-impostors
  6. Beth Levant, Jennifer A. Villwock, and Ann M. Manzardo. 2020. Impostorism in third-year medical students: an item analysis using the Clance Impostor Phenomenon Scale. Perspectives on Medical Education (2020), 1–9.
  7. Adam Rosenstein, Aishma Raghu, and Leo Porter. 2020. Identifying the prevalence of the impostor phenomenon among computer science students. Proceedings of the 51st ACM Technical Symposium on Computer Science Education.
  8. Kenneth T. Wang, Marina S. Sheveleva, and Tatiana M. Permyakova. 2019. Imposter syndrome among Russian students: The link between perfectionism and psychological distress. Personality and Individual Differences 143 (2019), 1–6.

May 6, 2021 at 7:00 am 2 comments

Embodiment in CS Learning: How Space, Metaphor, Gesture, and Sketching Support Student Learning: Amber Solomon’s defense

Amber Solomon defends her dissertation today, co-advised by Betsy DiSalvo and me. I have learned a lot from Amber and her work. She came into her PhD studies with a particular perspective — a question about how we teach CS. She knew about the studies showing that spatial ability is correlated with success in computing. Why is that? Is it because there is something inherently spatial about computing? Or maybe because we are physical beings and come to understand everything in terms of our spatial experiences? Or maybe it’s because of how we teach computing?

That last one is concerning. Computing education is new. We haven’t spent enough time checking whether what we are doing is right for everyone — or if what we’re doing creates barriers for some students. In particular, she’s concerned about how we teach and learn with embodiment, i.e., references to space and our physical presence, in language, gesture, and sketching. In general, we don’t design our gestures and metaphors in CS education, maybe in part because Dijkstra warned us not to. That’s a problem, because gesture has a cultural and social component, and we may inadvertently be teaching in a way that says to some students, “You don’t belong. We don’t use your gestures. We use ours.”

Amber’s first project was her study of our augmented-reality design studio for media computation, where students’ work was displayed on the walls (see blog post here). One of the surprising outcomes of this project was that it influenced the climate in the classroom — students were more willing to seek help when everyone’s work was on display. The problem of a defensive climate in the classroom is longstanding in CS. Amber showed that changing the environment where we teach can change climate.

Amber, with Miranda Parker, led our SPARCS study, exploring why socioeconomic status (SES) predicts CS performance. In general, rich kids do better in CS than poor kids. Why? They compared two different models for why SES predicted performance on a standardized CS test. One model suggested that higher SES led to greater access to CS education: rich kids got to take CS classes, camps, and robotics clubs, while poorer kids did not. The second model suggested something more subtle — that higher SES predicted greater spatial ability, which predicted better performance. The spatial ability model was a better fit to the data. Now consider Amber’s original hypothesis, that spatial ability predicts CS performance because of the way that we teach CS. The SPARCS study raises the possibility that the whole CS Ed system is rigged in favor of higher-SES kids in a deep way. Just teaching more classes to lower-SES kids won’t make a difference if those classes are still taught in a way that requires higher spatial ability.

Amber’s dissertation asks two big questions: (1) How do teachers use embodiment when they teach CS? (2) How do students use embodiment when they learn CS? Part of the answer to the first question appeared at ICLS last year. I talked about helping with Amber’s coding of student videos in my blog post about Dijkstra. Her summary is below.

I’m not going to summarize her whole dissertation here. Here is one example from her defense. She shows a video clip of a teacher explaining a function call. He points to a function definition and says, “Now we come here. I am five. N is five…Do you see what I’m doing?” Read that last sentence imagining that you’ve not had years of CS or mathematics teachers modeling this kind of language. Who are “we” and what does it mean to “come here”? What does he mean that he’s five? Now N is five? Is he N? When he says “what I’m doing,” what is he referring to: playing the computer, writing the program, or drawing on the slide? Now imagine hearing that when you have a visual disability and can’t see that he’s pointing at a function definition. Amber supports a strong claim in her dissertation — we have not designed the language and metaphors of CS education. There’s no way that we CS teachers plan to say things that are this confusing.
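To ground the anecdote, here is a hypothetical sketch of my own (not the code from the video in Amber’s study) of the kind of definition and call a teacher might be tracing while saying “I am five. N is five”:

    // Hypothetical code, invented for illustration; not from Amber's video.
    public class TraceExample {
        // "Now we come here" -- the definition the teacher points at.
        // "I am five. N is five" -- the argument 5 is bound to the parameter n.
        static int doubleIt(int n) {
            return n * 2;
        }

        public static void main(String[] args) {
            int result = doubleIt(5);   // the call that starts the trace
            System.out.println(result); // prints 10
        }
    }

Even with the code visible, the teacher’s “I” could mean the computer, the function, or the parameter; without seeing the code, the referents disappear entirely.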

Throughout her PhD career, Amber has written about her experience of being a Black woman in CS. She taught me what intersectionality is about. I am grateful that she has been both a CS education researcher and activist during her PhD. I am grateful to have had the chance to work with her.

Title: Embodiment in Computer Science Learning: How Space, Metaphor, Gesture, and Sketching Support Student Learning

Amber Solomon

Human-Centered Computing Ph.D. Candidate

School of Interactive Computing

College of Computing

Georgia Institute of Technology

Summary:

Recently, correlational studies have found that psychometrically assessed spatial skills may be influential in learning computer science (CS). Correlation does not necessarily mean causation; these correlations could be due to several reasons unrelated to spatial skills. Nonetheless, the results are intriguing when considering how students learn to program and what supports their learning. However, it’s hard to explain these results. There is not an obvious match between the logic of computer programming and the logic of thinking spatially. CS is not imagistic or visual in the same way as other STEM disciplines, since students can’t see bits or loops. Spatial abilities and STEM performance are highly correlated, but that makes sense because the other STEM disciplines are highly visual. In this thesis, I used qualitative methods to document how space influences and appears in CS learning. My work is naturalistic and inductive, as little is known about how space influences and appears in CS learning. I draw on constructivist, situative, and distributed learning theories to frame my investigation of space in CS learning. I investigated CS learning through two avenues: the first as a sense-making, problem-solving activity, and the second as a meaning-making and social process between teachers and students. In some ways, I was inspired to understand what was actually happening in these classrooms, how students are actually learning, and what supports that learning. While looking for space, I discovered the surprising role embodiment and metaphor play as students make sense of computation and teachers express computational ideas. The implication is that people make meaning from their body-based, lived experiences and not just through their minds, even in a discipline such as computing, which is virtual in nature. For example, teachers use the following spatial language when describing a code trace: “then, it goes up here before going back down to the if-statement.” The code is not actually going anywhere, but metaphor and embodiment are used to explain the abstract concept. This dissertation makes three main contributions to computing education research. First, I conducted some of the first studies on embodiment and space in CS learning. Second, I present a conceptual framework for the kinds of embodiment in CS learning. Lastly, I present evidence on the importance of metaphor for learning CS.

Date: Monday, April 12th, 2021

Time: 2:00pm – 5:00pm (EDT)

Location: Bluejeans Link

Meeting URL

https://bluejeans.com/182730963?src=joininfo

Committee:

  • Dr. Betsy DiSalvo (Advisor, School of Interactive Computing, Georgia Institute of Technology)
  • Dr. Mark Guzdial (Advisor, Electrical Engineering and Computer Science, University of Michigan)
  • Dr. Ashok Goel (School of Interactive Computing, Georgia Institute of Technology)
  • Dr. Wendy Newstetter (School of Interactive Computing, Georgia Institute of Technology)
  • Dr. Ben Shapiro (College of Education and Human Development, Georgia State University)
  • Dr. David Uttal (School of Education and Social Policy, Northwestern University)

April 12, 2021 at 9:00 am 4 comments

Become a Better CS Teacher by Seeing Differently

My Blog@CACM post this month is How I evaluate College Computer Science Teaching. I get a lot of opportunities to read teaching statements and other parts of an academic’s teaching record. I tend to devalue quantitative student evaluations of teaching — they’re biased, and students don’t know what serves them best. What I most value are reports of the methods teachers use when they teach. Teachers who seek out and use the best available methods are most likely the best teachers. That is what I look for when I have to review college CS teaching records.

On Twitter, people are most concerned with my comments about office hours. Computer science homework assignments should not be written expecting or requiring everyone in the class to come to office hours in order to complete the assignment. That’s an instructional design problem. If the same questions are coming up often in office hours, then the teacher should fix the assignment, add to the lecture, or make announcements with the clarification. Guided instruction beats discovery learning, and inquiry learning is improved with instruction. There is no advantage to having everyone in the class discover that they need a certain piece of information or need a question answered.

My personal experience likely biases me here. I went to Wayne State University in Detroit for my undergraduate degree, and I lived in a northern suburb, five miles up from Eight Mile Road. I drove 30-45 minutes each way every day. (I took the bus sometimes, if the additional time cost was balanced out by the advantage of reading time.) I worked part-time, and usually had two part-time jobs. I don’t remember ever going to office hours. I had no time for office hours. I often did my programming assignments on nights and weekends, when there were no office hours scheduled. If an assignment had required me to go to office hours, I likely would have failed the assignment. That was a long time ago (early 1980’s) — I was first generation, but not underprivileged. Today, as Manuel pointed out (quoted in this earlier blog post), time constraints (from family and work) are a significant factor for some of our students.

Teachers who require attendance at office hours are not seeing the other demands on their students’ lives. Joe Feldman argues that we ought to be teaching for the non-traditional student, the ones who have family and work demands. If we want diverse students in our classes, we have to learn to teach for the students whose experiences we don’t know and whose time costs we don’t see.

CS teachers get better at what we see

I’m teaching an Engineering Education Research class this semester on “Theoretical and Conceptual Frameworks for Engineering Education Research.” We just read the fabulous chapter in How People Learn on how experts differ from novices. One of the themes is that experts don’t necessarily make good teachers, and that teachers have specialized knowledge of their own (like pedagogical content knowledge). I started searching for papers that did particularly insightful analyses of CS teacher knowledge, and revisited the terrific work of Neil Brown and Amjad Altadmri on “Novice Java Programming Mistakes: Large-Scale Data vs. Educator Beliefs” (see paper here).

Neil and Amjad analyze the massive Blackbox database of keystroke-level data from thousands of students learning Java. They identify the most common mistakes that students make in Java. My favorite analyses in the paper are where they rank these common mistakes by time to fix. An error with curly brackets is very common, but is also very easy to fix. Errors that can take much longer (or might stymie a student completely) include errors with logical operators (ANDs and ORs), void vs non-void return values, and typing issues (e.g., using == on strings vs .equals).
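To make those harder-to-fix mistakes concrete, here is a small Java sketch of my own (an illustration, not an example drawn from the Blackbox data), showing a string comparison with == and a non-void method whose return value gets discarded:

    // Illustrative Java only; not an example taken from the Blackbox dataset.
    public class CommonMistakes {
        // A non-void method: callers should do something with the returned value.
        static String grade(int score) {
            return (score >= 60) ? "pass" : "fail";
        }

        public static void main(String[] args) {
            String answer = new String("pass");

            // Mistake: == compares object identity, not contents, so this test fails here.
            if (answer == "pass") {
                System.out.println("== matched");
            }

            // Fix: compare string contents with .equals.
            if (answer.equals("pass")) {
                System.out.println(".equals matched");
            }

            // Mistake: calling the non-void method as if it were void; the result is discarded.
            grade(75);

            // Fix: capture and use the return value.
            String result = grade(75);
            System.out.println(result);
        }
    }

Both broken versions compile cleanly; the problems only show up as wrong behavior at runtime, which is part of why mistakes like these can take students far longer to fix than a stray curly bracket.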

The more controversial part of their analysis is when they ask CS teachers what students get wrong. Teachers’ predictions of the most common errors are not accurate, neither when considered in aggregate (e.g., which errors more teachers voted for) nor when accounting for a teacher’s years of experience.

Neil and Amjad contrast their findings with work by Phil Sadler and colleagues showing that teacher efficacy is related to their ability to predict student errors (see blog post here).

If one assumes that educator experience must make a difference to educator efficacy, then this would imply that ranking student mistakes is, therefore, unrelated to educator efficacy. However, work from Sadler et al. 2013 in physics found that “a teacher’s ability to identify students’ most common wrong answer on multiple-choice items . . . is an additional measure of science teacher competence.” Although picking answers to a multiple-choice question is not exactly the same as programming mistakes, there is a conflict here—either the Sadler et al. result does not transfer and ranking common student mistakes is not a measure of programming teacher competence, or experience has no effect on teacher competence. The first option seems more likely. (Emphasis added.)

I don’t see a conflict in that sentence. I believe both options are true, with some additional detail. Ranking common student compiler mistakes is not a measure of programming teacher competence. And experience has no effect on teacher competence on things they don’t see or practice.

Expertise is developed from deliberate practice. We get better at the things we work at. CS teachers certainly get better (become more competent) at teaching. Why would that have anything to do with knowing what compiler errors Java students are getting? Teachers rarely see what compiler errors their students are getting, especially in higher education with our enormous classes.

When I taught Media Computation, I thought I became pretty good at knowing what errors students got in Python. I worked side-by-side with students many times over many years as they worked on their Python programs. But that’s still a biased sample. I had 200-300 students a semester. I might have worked with maybe 10% of those students. I did not have any visibility into what most students were getting wrong in Python. I probably would have failed a similar test on predicting the most common errors in Python based on my personal experience. I’m sure I’d do much better if I relied on studies of students programming in Python (like the study of common errors when students write methods in Python) — research studies let me see differently.

Here at the University of Michigan, I mostly teach a user interface software class on Web front-end programming in JavaScript. I am quite confident that I do NOT know what JavaScript errors my students get. I have 260-360 students a semester. Few come to office hours with JavaScript errors. I rarely see anybody’s code.

I do see exams and quizzes. I know that my students struggle with understanding the Observer design pattern and MVC. I know that they often misunderstand the Universal Design Principles. I know that CSS and JavaScript’s asynchronous processing are hard because that’s where I most often get regrade requests. There I’ll find that there is some unexpected way to get a given effect, and I often have to give points back because their approach works too. I get better at teaching these things every semester.

CS teachers can be expected to become more competent at what they see and focus on. Student compiler errors are rarely what they see. They may see more conceptual or design issues, so that’s where we would expect to see increased teacher competence. To develop teacher competence beyond what we see, we have to rely on research studies that go beyond personal experience.

CS teachers need to get better at teaching those we don’t see

The same principle applies to why we don’t improve the diversity of our CS classes. CS teachers don’t see the students who aren’t there. How do you figure out how to teach in a way that recruits and retains women and students from Black, Latino/Latina, and Indigenous groups if they’re not in your classes? We need to rely on research studies, using others’ eyes and others’ experiences.

Our CS classes are huge. It’s hard to see that we’re keeping students out and that we’re sending a message that students “don’t belong,” when all we see are huge numbers. And when we have these huge classes, we want the majority of students to succeed. We teach to the average, with maybe individual teacher preference for the better students. We rarely teach explicitly to empower and advantage the marginalized students. They are invisible in the sea of (mostly male, mostly white or Asian) faces.

I have had the opportunity over the last few months to look at several CS departments’ diversity data. What’s most discouraging is that the problem is rarely recruitment. The problem is retention. There were more diverse students in the first classes or in the enrolled population — but they withdrew, failed, or dropped out. They were barely visible to the CS teachers in the sea of huge classes, and then they became completely invisible. We didn’t teach in a way that kept these students in our classes.

Our challenge is to teach for those who we don’t easily see. We have to become more competent at teaching to recruit those who aren’t there and retain those students who are lost in our large numbers. We easily become more competent at teaching for the students we see. We need to become more competent at teaching for diversity. We do that by relying on research and better teaching methods, like those I talk about in my Blog@CACM post.

February 15, 2021 at 7:00 am 2 comments
