The information won’t just sink in: Helping teachers provide technology-assisted data literacy instruction in social studies
Last year, Tammy Shreiner and I published an article in the British Journal of Educational Technology, “The information won’t just sink in: Helping teachers provide technology-assisted data literacy instruction in social studies.” (I haven’t been able to blog much the last year while starting up PCAS, so please excuse my tardiness in sharing this story.) The journal version of the paper is here, and our final submitted version (not paywalled) is available here.
Tammy and I used this paper to describe what happened (mostly during the pandemic) as we continued to provide support to in-service/practicing social studies teachers to adopt data literacy instruction in their classes. Since this was a journal on educational technology, we mostly focused on two technologies:
- The OER Tammy created to support data literacy in social studies education — see link here.
- DV4L, the Data Visualization for Learning tool that we created explicitly for social studies teachers — see link here.
When we first started collaborating, we looked for a theoretical model that could inform our work. The end goal was easy to describe: we wanted social studies teachers to teach data literacy. But it’s hard to measure progress towards that big, high-level goal. Teachers are either teaching data literacy or they’re not, so how do you know if you’re getting closer? We structured our work and our evaluation around the Technology Acceptance Model (TAM). TAM suggests that adoption of a new technology boils down to two questions: (1) is the technology actually useful in solving a problem that users care about, and (2) is the technology usable by the users? Those were things we could measure progress towards.
During the pandemic, we ran several on-line professional learning opportunities — a workshop where practicing teachers could try out the OER with some guidance (e.g., “Make sure you see this” and “Why don’t you try that?”), and kick the tires on a bunch of tools including DV4L. We gathered lots of data on those teachers, and Tammy did the hard work of analyzing those data over time. We made progress on TAM goals — our tools got more usable and more useful.
But we still got very little adoption. TAM didn’t work for us. Adoption didn’t increase as usability and usefulness increased.
Why not? That’s a really big question, and we barely touch on it in this paper. It’s now a couple of years since we wrote the BJET article, and I could now tick off a dozen bullet points of reasons why teachers do not adopt, despite a technology being both useful and usable. I’m not going to list them here, because there are other publications in the pipeline. Bahare Naimipour, the EER PhD student working on our project, is finishing a case study of some teachers who did adopt and how their beliefs about data literacy changed.
I can give you one big meta-reason, which probably isn’t a surprise to most education researchers but might be a surprise to many computer scientists: it’s not all about (or even mostly about) the technology. I led the group that worked on DV4L, and I’ve been directing students who have been helping Tammy make the OER more usable and useful (including building new tools that we haven’t yet released). TAM matters, but the characteristics of the individual teachers and the context of the teacher’s classroom are critical factors that technology is unlikely to overcome.
This work is funded in part by our National Science Foundation grant #DRL2030919.
A Workshop on Slow Reveal Graphs for Social Studies Teachers
My collaborator, Tammy Shreiner, is running a workshop for social studies educators on teaching with Slow Reveal Graphs. The idea behind slow reveal graphs is that a fully revealed visualization is often too complex for students to pick out all of its visual elements at once. Instead, a slow reveal graph is presented in stages, and at each stage, students are prompted to reflect (and discuss, or write about): “What do you notice now? What do you wonder about?”
Tammy has been building a bunch of slow reveal graphs that are really fascinating. I’m particularly amazed by the ones that she and her colleague Bradford Dykes have been building. They are taking hand-drawn visualizations (like the remarkable ones by W.E.B. Du Bois) and recreating them in R, so that they can generate the slow reveal process.
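To make the staging idea concrete, here is a minimal, library-free Python sketch of the sequencing logic (my own illustration; the layer descriptions are invented, and the actual graphs are built in R):

```python
# A sketch of slow-reveal sequencing: a visualization is modeled as an
# ordered list of layers, and each stage shows one more layer.
# The layer descriptions below are invented for illustration.

layers = [
    "unlabeled line rising from left to right",
    "x-axis labeled with years",
    "y-axis labeled with a count",
    "full title and source note",
]

def stages(layers):
    """Yield the cumulative set of layers visible at each reveal stage."""
    for i in range(1, len(layers) + 1):
        yield layers[:i]

# At each stage, the teacher pauses to ask:
# "What do you notice now? What do you wonder about?"
for stage_number, visible in enumerate(stages(layers), start=1):
    print(f"Stage {stage_number}: {len(visible)} layer(s) visible")
```

The point of the design is that the full graph only appears at the final stage, after students have already reasoned about each earlier layer.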
She’s offering a workshop in January that I highly recommend.
Dear friends,
I am writing to share information about a professional learning opportunity focused on teaching primary source data visualizations using the “slow reveal” process. The PLO will take place on Zoom over two Saturdays, January 21 and 28, 9:00-noon. It is open to teachers inside and outside of Michigan.
Please share the attached flyer with your social studies teacher colleagues. A sneak peek of the website that we will share with participants is below.
Thanks for sharing!
Tammy

Launching PCAS, the first two COMPFOR classes, and hiring our first lecturer
I last gave an update on the Program in Computing for the Arts and Sciences (PCAS) here in February (see blog post). Since then, it’s become real. I was hired as of July 1 as the Director of PCAS. My Computing Education Task Force co-chair, Gus Evrard, is the Associate Director. We even have a website: https://lsa.umich.edu/computingfor
I am building and teaching our first two courses now. I love our course code: COMPFOR. It stands for “COMPuting FOR…” The two courses are:
- COMPFOR 111 Computing’s Impact on Justice: From Text to the Web
- COMPFOR 121 Computing for Creative Expression
I have never worked harder than I have this semester: building these two courses, teaching both courses at the same time, learning how to be a program director (e.g., explicit classes and workshops on academic leadership, on evaluating faculty, and on how University of Michigan budgets work), and creating the program. I am having enormous fun.
I plan to write more about the two courses here and our innovations in teaching them. Here’s a brief summary. We are using teaspoon languages to introduce concepts, Snap for programming assignments, and Runestone ebooks for helping students to transfer their knowledge from blocks to traditional textual languages (Python, Processing, and SQL). I gave a talk for the CS for Michigan Conference a few weeks ago, and for the attendees, I created a page connecting to some of what we’re building and a narrative account of a couple of the units: https://guzdial.engin.umich.edu/cs4mi-pcas/.
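As a hypothetical illustration of that blocks-to-text step (my own sketch, not actual course material), here is how a simple Snap-style script might be re-expressed in the textual Python that students later see:

```python
# A hypothetical blocks-to-text transfer exercise (my own sketch, not
# actual course material). Imagine a Snap script:
#     set count to 0
#     repeat 5
#         say "Hello!"
#         change count by 1
# The same behavior, written in textual Python:

count = 0
sayings = []
for _ in range(5):            # the "repeat 5" block becomes a for-loop
    sayings.append("Hello!")  # the "say" block
    count += 1                # the "change count by 1" block
print(count, len(sayings))    # prints: 5 5
```

The pedagogical idea is that the student already understands the behavior from the blocks, so the textual version is a notation change rather than a new concept.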
We have been given permission to hire our first lecturer: https://careers.umich.edu/job_detail/226551/leo-lecturer-i-compfor-111-and-compfor-121. Right now, it’s a one year position for 2023-2024, but if enrollments grow, we have been encouraged to request three year positions starting in Fall 2024. Please do forward this job announcement to anyone you think might be interested. It’s also available in other places like CRA: https://cra.org/job/university-of-michigan-lecturer/.
Doing a Little Housekeeping and Rebranding
I discovered today that I have written over 2,500 blog posts here on WordPress, starting in June 2009. There was a time when I was writing daily. This is the first post I’ve written here since June. From a pace of a new post every day, to once every six months.
Our lives change so much from year to year. Thirteen years feels like so many changes ago. I live in a different state, working for a different University. Even the name of the department where I work has changed — I was in the School of Interactive Computing at Georgia Tech. Now I’m in the Division of Computer Science and Engineering in the College of Engineering at the University of Michigan and I direct the Program in Computing for the Arts and Sciences.
I changed the name of this blog once before. When I started out here in 2009, there were few websites and blogs on computing education. I just called it the Computing Education Blog. But as the field grew and there were more and more terrific sites helping teachers do computing education, I renamed the site Computing Education Research Blog. My focus was on the research, and not on how to be a great computing educator.
Today, there are many great resources even on Computing Education Research. I particularly recommend https://csedresearch.org/. I attended one of their on-line panels this last week — such interesting ideas and such a wonderful and diverse group of researchers.
I have been reluctant to post under a banner saying “Computing Education Research” because this site has pretty broad visibility now. There are many more subscribers than in the first few years. Newcomers might arrive here, see that title, and expect to read a newsletter or an authoritative perspective on the field — that’s an overwhelming responsibility. I recognize that I’m a senior (read: “old”) voice in the field, but I am just one of many voices in the field. Like any academic, I want to share what we’re working on and what I’m thinking about. I do not want my posts here to appear as if I’m speaking for the field.
So, I have renamed the blog for a second time: Computing Ed Research – Guzdial’s Take. This blog represents my perspective. That’s how I’ve always thought of the blog, but I want to make it explicit.
I have updated the Guzdial Papers page for the first time in a decade, and changed the About page.
Thanks for reading!
New ICER paper award for Lasting Impact: Guest blog post from Quintin Cutts
I serve on the ACM SIGCSE International Computing Education Research (ICER) conference steering committee. Quintin Cutts is Chair of the Steering Committee. I offered to share his announcement of a new Lasting Impact paper award here in the blog.
This is an invitation to nominate a paper for the ICER Lasting Impact Award 2022, or to offer to serve on the judging panel.
Which ICER paper has caused you to change the way you teach, or the direction of your research? Which has helped you to see and understand CS education more clearly? Has it also had an impact right across the community? I know which paper I would nominate, if I were allowed (but I’m not! – see below). It’s been a game-changer for me, and across CS education. Which one has done this for you and others?
1. Description of the award
The ICER Lasting Impact Award recognizes an outstanding paper published in the ICER Conference that has had meaningful impact on computing education. Significant impact can be demonstrated through citations; adoptions and/or adaptations of techniques and practices described in the paper by others; techniques described in the paper that have become widely recognized as best practices; further theoretical or empirical studies based on the original work; or other evidence that the paper is an outstanding work in the domain of computing education research. The paper must have been published in ICER at least 10 years prior (i.e., for the 2022 award, papers must have been published in or before ICER 2011).
2. Requirements for nominating a paper
a. An ACM Digital Library link to the paper being nominated.
b. A brief summary of the technical content of the paper and a brief explanation of its significance (limit 750 words).
c. Signatories to the summary and significance statement, with at least two current SIGCSE members. The name, contact email address and affiliation of each person who has agreed to sign the endorsement is acceptable.
3. Nominating yourself as a potential award judge
Please consider nominating yourself as a potential judge. We are seeking judges who have significant experience in the ICER community. We will ask judges to serve who do not have nominated papers, to avoid conflicts of interest.
4. Additional Notes
a. ICER Steering Committee members cannot nominate papers.
b. The ICER Steering Committee chair, Quintin Cutts, will run the process this year, and his papers cannot be nominated.
c. In this inaugural year of the Award, we will not have a pre-defined rubric. We will ask the judges to report the rationale for their decision, and the report will be made public when we announce the winner.
5. Timetable (all times, 23.59 AoE)
17th July: Nominations close.
18th July: Judging panel selected from the candidate pool, depending on the number of nominations and conflicts. Papers sent out to the judges.
2nd August: Judging panel sits to deliberate and makes a decision, which is passed to PC chairs. Winner notified.
The award will be presented at the ICER 2022 conference in Lugano, Switzerland either in person or on-line.
6. Submitting nominations
Please send both paper nominations and judging self-nominations to me, Quintin Cutts at Quintin.Cutts@glasgow.ac.uk
Programming in blocks lets far more people code — but not like software engineers: Response to the Ofsted Report
A May 2022 report from the UK government, Research Review Series: Computing, makes some strong claims about block-based programming that I think are misleading. The report summarizes studies from the computing education research literature. Here’s the paragraph that I’m critiquing:
Block-based programming languages can be useful in teaching programming, as they reduce the need to memorise syntax and are easier to use. However, these languages can encourage pupils to develop certain programming habits that are not always helpful. For example, small-scale research from 2011 highlighted 2 habits that ‘are at odds with the accepted practice of computer science’ (footnote). The first is that these languages encourage a bottom-up approach to programming, which focuses on the blocks of the language and not wider algorithm design. The second is that they may lead to a fine-grained approach to programming that does not use accepted programming constructs; for example, pupils avoiding ‘the use of the most important structures: conditional execution and bounded loops’. This is problematic for pupils in the early stages of learning to program, as they may carry these habits across to other programming languages.
I completely agree with the first sentence — there are benefits to using block-based programming in terms of reducing the need to memorize syntax and increasing usability. There is also evidence that secondary school students learn computing better in block-based programming than in text-based programming (see blog post). Blanchard, Gardner-McCune, and Anthony found (in a Best Paper awardee from SIGCSE 2020) that university students learned better when they used both blocks and text than when they used blocks alone.
The two critiques of block-based programming in the paragraph are:
- “These languages encourage a bottom-up approach to programming, which focuses on the blocks of the language and not wider algorithm design.”
- “They may lead to a fine-grained approach to programming that does not use accepted programming constructs…conditional execution and bounded loops.”
Key Point #1: Block-based programming doesn’t cause either of those behaviors. What is it about programming with blocks rather than text that could make either of these critiques true?
I’m programming a lot in Snap! these days for two new introductory computing courses I’m developing at the University of Michigan. I’ve been enjoying the experience. I don’t think that either of these critiques is true of my code or of the code written by the students helping me develop the courses. I regularly do top-down programming, defining high-level custom blocks as I design my program overall. Not only do I use conditional execution and bounded loops regularly, but Snap lets me create new kinds of control structures, which has been a terrific help as I create block-based versions of our Teaspoon languages. My experience is evidence that those two statements need not be true just because the language is block-based.
I completely believe that the studies cited in this research report saw and accurately describe exactly these points — that students worked bottom-up and that they rarely used conditional execution and bounded loops. I’m not questioning the studies. I’m questioning the inference. I don’t believe at all that those behaviors are caused by block-based languages.
Key Point #2: Block-Based Programming is Scaffolding, but not Instant Expertise. For those not familiar, here are two education research terms that will be useful in making my argument.
- Scaffolding is the support provided to a learner to enable them to achieve some task or process which they might not be able to achieve without that support. A kid can’t hop a fence by themselves, but they can with a boost — that’s a kind of scaffolding. Block-based programming languages are a kind of scaffolding (and here’s a nice paper from Weintrop and Wilensky describing how it is scaffolding — thanks to Ben Shapiro for pointing it out).
- The Zone of Proximal Development (ZPD) describes the difference between what a student can do on their own (one edge of the ZPD) and what they might be able to do with the support of a teacher or scaffolding (the far edge of the ZPD). Maybe you can’t code a linked list traversal on your own, but if I give you the pseudocode or give you a lecture on how to do it, then you can. But the far edge of ZPD is unlikely to be that you’re a data structure expert.
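To make that example concrete, here is the linked-list traversal written out in Python. This is my own illustration of the kind of worked example that can act as scaffolding for a learner:

```python
# A linked-list traversal, written out as the kind of worked example
# (scaffolding) that can move a learner across the ZPD.

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def traverse(head):
    """Collect every value by following next pointers until we hit None."""
    values = []
    current = head                  # start at the head of the list
    while current is not None:      # stop when we run off the end
        values.append(current.value)
        current = current.next      # step to the next node
    return values

chain = Node(1, Node(2, Node(3)))
print(traverse(chain))              # prints: [1, 2, 3]
```

A student who can follow this worked example is at the far edge of their ZPD for the task; it does not mean they could invent the traversal unaided, let alone that they are a data structures expert.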
Let’s call the task that students were facing in the studies reviewed in the report “building a program using good design and with conditional execution.” If we asked students to achieve this task in a text-based language, we would be asking them to perform it without scaffolding. Here’s what I would expect:
- Fewer students would complete the task. Not everyone can achieve the goal without scaffolding.
- Those students who do complete the task likely already have a strong background in math or computing. They are probably more likely to use good design and conditional execution. The average performance in the text-based condition would be higher than in the block-based condition — simply because you’ve filtered out everyone who doesn’t have the prior background.
Fewer people succeed. More people drop out. That’s a pretty common CS Ed result. If you just compare performance in text vs. blocks, text looks better. For a full picture, you also have to look at who got left out.
So let’s go back to the actual studies. Why didn’t we see good design in students’ block-based programs? Because the far edge of the ZPD is not necessarily expert practice. Without scaffolding (block-based programming languages), many students are not able to succeed at all. Giving them the scaffolding doesn’t make them experts. The scaffolding can take them only as far as the ZPD allows. It may take more learning experiences before we can get to good design and conditional execution — if that even makes sense as a goal.
Key Point #3: Good software engineering practice is the wrong goal. Is “building a program using good design and with conditional execution” really the task that students were engaging in? Is that what we want students to succeed at? Not everyone who learns to program is going to be a software engineer. (See the work I cite often on “alternative endpoints.”) Using good software engineering practices as the measure of success doesn’t make sense, as Ben Shapiro wrote about these kinds of studies several years ago on Twitter (see his commentary here, shared with his permission). A much more diverse audience of students is using block-based programming than ever used text-based programming. They are going to solve different problems for different purposes in different ways (a point I made in this blog post several years ago). Few US teachers in K-12 are taught how to teach good software engineering practice — that’s simply not their goal (a point that Aman Yadav made to me when discussing this post). We know from many empirical studies that most Scratch programs are telling a story. Why would you need algorithmic design and conditional execution for that task? These students aren’t doing complicated coding, but the little bit of coding they do use is powerful and engaging — and relatively few students are getting even that. I’m far more concerned about the inequitable access to computing education than I am about whether students are becoming good software engineers.
Summary: It’s inaccurate to suggest that block-based programming causes bad programming habits. Block-based programming makes programming far more accessible than it ever has been before. Of course, we’re not going to see expert practice as used in text-based languages for traditional tasks. These are diverse novices using a different kind of notation for novel tasks. Let’s encourage the learning and engagement appropriate for each student.
Getting feedback on Teaspoon Languages from CS educators and researchers at the Raspberry Pi Foundation seminar series
In May, I had the wonderful opportunity to speak at the Raspberry Pi Foundation Seminar series. I’ve attended some of these seminars before and highly recommend them (see past seminars here). It’s a terrific format. The speaker presents for up to a half hour, then everyone gets put into a breakout room for small group discussions. The participants and speaker come back for 30-35 minutes of intensive Q&A — at least, it feels “intensive” from the speaker’s perspective. The questions you get have been vetted through the breakout room process. They’re insightful, and sometimes critical, but always in a constructive way. I wanted to make this a hands-on session where the CS teachers and researchers who attended might actually use some Teaspoon languages and give me feedback on them. I rarely get to work with CS teachers, so this was a welcome chance.
Sue Sentance wrote up a very nice blog post describing my talk (thank you!) — see here. The video of the talk and discussion is available. You can watch the whole thing, or, you can read the blog post then skip ahead to where the conversation takes place (around 26:00 in the video). If you have been wondering, “Why isn’t Mark just using Logo, Scratch, Snap, or NetLogo? We already have great tools! Why invent new languages that are clearly less powerful than what we already have?”, then you should jump to 34:38 and see Ken Kahn (inventor of ToonTalk) push me on this point.
The whole experience was terrific for me, and I hope that it’s valuable for the viewer and attendees as well. The questions and comments indicated understanding and appreciation for what I’m trying to do, and the concerns and criticisms are valuable input for me and my team. Thanks to Sue, Diana Kirby, the Raspberry Pi Foundation, and all the attendees!
Ruthe Farmer’s important big idea: The Last Mile Education Fund to increase diversity in STEM
I met Ruthe Farmer (Wikipedia page) when she represented the Girl Scouts in the early days of the NSF Broadening Participation in Computing (BPC) alliances. She played a significant role in NCWIT. I had many opportunities to interact with her in her roles at NCWIT and CSforAll. Ruthe organized the White House summit with ECEP in 2016 (see blog post) when she was with the Office of Science and Technology Policy in the Obama administration. Her latest project may be the one that’s closest to my heart.
Ruthe has founded and is CEO of the Last Mile Education Fund. Their mission is:
The Last Mile Education Fund offers a disruptive approach to increasing diversity in tech and engineering fields by addressing critical gaps in financial support for low-income underrepresented students within four semesters of graduation.
I was still at Georgia Tech when I heard about the completion microgrant program at Georgia State. Georgia State was (and still is) making headlines for their use of big data to boost retention and help students graduate. Georgia State is the kind of institution where over half of the students are classified as low-income, so there is a huge social benefit when GSU can improve its retention statistics. The completion grant program started in 2011 and focuses on students who could graduate (e.g., their grades were fine) but had run out of money before they finished. The grant program gave no more than $2,500 per student (Inside Higher Education article). Today, we know that the average grant has actually been $900. That’s a shockingly low cost for getting students the rest of the way to their college degree. It’s a great idea, and it deserves to be applied more broadly than at one university.
The Last Mile Education Fund especially focuses on getting students from diverse backgrounds into STEM careers. These last gaps in funding are among the barriers that keep girls out of STEM careers (where Ruthe’s focus was at the Girl Scouts and NCWIT), and they also hold back low-income students and people of color.
I was reminded to write about the Last Mile Education Fund by Alfred Thompson’s blog (see post here). He’s got a lot more information about the Last Mile Education Fund there.
I am a first-generation college graduate. My parents and I had no idea how to even apply to college. I am forever grateful that Wayne State University found me in my high school, guided me through applying, and gave me a scholarship to attend. I’m a privileged white guy; not everybody gets the opportunities I had. It’s critical to extend the opportunity of a higher education degree to a broader and more diverse audience.
The Last Mile Education Fund is important for closing the gap for students from diverse backgrounds. I’m a monthly supporter, and I encourage you to consider giving, too.
Three types of computing education research: for CS, for CS but not professionally, and for everyone
In February, I was invited to give a lecture at the University of Washington’s Allen School. I had a great day visiting there, even though it was all on Zoom. My talk is available on YouTube:
I got a chance to talk to Jeff Heer and Amy Ko before my visit. The U-W CSE department had been thinking about making a push into computing education research. They suggested that I describe the lay of the land — and particularly, to identify where I fit in that space. What I do these days (e.g. Teaspoon languages for history and mathematics classes) isn’t in the mainstream of computing education research, and it was important to tell people unfamiliar with the field, “There’s a lot more out there, and most of it doesn’t look like this.”
CS Education research dates back to the late 1960s (see the history chapter that Ben du Boulay and I wrote). ACM SIGCSE started in 1968 with a particular focus on how to teach Computer Science and Information Technology majors. Much of what SIGCSE has published focuses even more specifically on the first course, which we now call CS1. This is a big and important space. These majors will be significant drivers of the world’s infrastructure.

There is a growing trend in computing education research to look at people who are learning programming (like in the first circles), but not for the purpose of becoming technology professionals. This includes K-12 CS teachers, end-user programmers, and conversational programmers. This kind of research sometimes appears in venues like CHI, CSCW, and VL/HCC, and occasionally in venues like SIGCSE, RESPECT, and ITiCSE. These circles aren’t scaled correctly by size of potential student population. By most measures, the outer circle (of people learning programming but who aren’t going to become technology professionals) is at least ten times the size of the student population inside the first circles.

My research is one level further out. I’m interested in studying what we should be teaching to everyone, whether or not they’re going to program like professionals, and how we can facilitate that learning. These students might not use the same tools or languages, and they certainly have different goals for studying computing. I offer three reasons for the broader “everyone” to learn computing (drawn from the work of C.P. Snow, Alan Perlis, Peter Naur, and Seymour Papert — see this earlier blog post):
- To make sure that technology is controlled by a democracy.
- To support new ways of thinking and learning.
- To be part of a new computational literacy, a new tool for human expression.
This outer circle is far bigger in terms of number of students potentially impacted than any of the inner circles. But it’s also where we know the least in terms of research results.

Take a look at the talk for more on this way of thinking about the field, and how I connect that to existing research. I’d be interested in your perspective on this framing.
College Board stops sharing data on Advanced Placement Computer Science exams
Barb Ericson has been gathering data on the Advanced Placement exams in Computer Science for a decade. The College Board made available data about who took the exam (demographic statistics) and how well they did, for each state, for AP CS Level A and then for AP CS Principles when that exam started. When she first started in 2010, she would download each state’s reports, then copy the data from the PDFs into her Excel spreadsheets. By the time she processed the 2020 data, the process was mostly mechanized. Her annual reports on the AP CS exam results were posted here until 2018. She now makes her reports and her archived data collection available at her blog.
However, the 2020 data she has posted are now the last data that are available. The College Board is no longer sharing data on AP CS exams. The archive is gone, and the 2021 data are not posted.
Researchers can request the data. Barb did several months ago. She still hasn’t received it. She was told that they would sign an agreement with the University of Michigan to give her access to the data — but not to her personally. She would also have to promise that she wouldn’t share the data.
Barb talked to someone at the College Board who explained that this is a cost-saving measure — but that doesn’t make much sense. The College Board still produces all the reports and distributes them to the states. They have just stopped making them publicly available.
I agree with Joanna Goode in this tweet from April:
The National Science Foundation paid for the development of the AP CS Principles exam explicitly to broaden participation in computer science. The goal was to create an AP CS exam that any high school could teach, that would be welcoming, and that would encourage more and more diverse students to discover computing. But now, the data showing us whether that’s working are being hidden. Why?
Updates: Workshop on Contextualized Approaches to Introduction to Computing, from the Center for Inclusive Computing at Northeastern University
From Nov 2020 to Nov 2021, I was a Technical Consultant for the Center for Inclusive Computing at Northeastern University, directed by Carla Brodley. (Website here.) CIC works directly with CS departments to create significant improvements in female participation in computer science programs. I’m no longer in the TC role, but I’m still working with CIC and Carla. I’ll be participating in a workshop that they’re running on Monday March 21. I’ll be talking about Media Computation in Python, and probably show some of the things we’re working on for the new classes here at Michigan.
https://www.khoury.northeastern.edu/event/contextual-approaches-to-introduction-to-computing/
Contextual Approaches to Introduction to Computing
Monday 3/21/22, 3pm EST / 12pm PST
Moderator: Carla Brodley; Speakers: Valerie Barr, Mark Guzdial, Ben Hescott, Ran Libeskind-Hadas, Jakita Thomas
Brought to you by the Center for Inclusive Computing at Northeastern University
In this 1.5 hour virtual workshop, faculty from five different universities in the U.S. will present their approach to creating and offering an introductory computer science class (CS0 or CS1) for students with no prior exposure to computing. The key differentiator of these approaches is that the introduction is contextualized in one area outside of computing throughout the semester. Using the context of areas such as cooking, business, biology, media arts, and digital humanities, these courses appeal to students across the university and have realized spectacular results for student retention in CS0/CS1, persistence to taking additional CS courses, and declaring a major or minor in computing. The importance of attracting students to computing after they enter university is critical to moving the needle on increasing the demographic diversity of students who graduate in computing. Interdisciplinary introductory computing classes provide a pathway to students discovering and enjoying computing after they start university. They also help students with no prior coding experience gain familiarity with computing before taking additional courses required for the CS major. The workshop will begin with a short presentation by each faculty member on their approach to contextualized CS0/CS1 and will touch upon the university politics involved in its creation, the curriculum, and the outcomes. We will then split into smaller breakout sessions five times to enable participants to meet with each of the five presenters for questions and more in-depth conversations.
Updates: Dr. Barbara Ericson awarded ACM SIGCSE 2022 Outstanding Contributions to Education
March 2-5 is the ACM SIGCSE Technical Symposium for 2022 in Providence, RI. (Schedule is here.) I am absolutely thrilled that my collaborator, co-author, and wife is receiving the Outstanding Contributions to Education award! She is giving a keynote on Friday morning. Her abstract is below.
She’s got more papers there, on CS Awesome, on her ebooks, and on Sisters Rise Up. I’m not going to summarize them here. I’ll let you look them up in the schedule.
A couple of observations about the SIGCSE Awards this year that I love. Both Barb and the Lifetime Service to the Computer Science Education Community awardee, Simon, earned their PhDs later in life, both within the last 10 years. Barb is the first Assistant Professor to win the Outstanding Contributions award in the award’s 40-year history.
I have one Lightning Talk. The work I’m doing these days is computing education, but it’s not in the mainstream of CS education — I focus on computing education for people who don’t want to study CS. So, I’m doing a five minute lightning talk on Teaspoon languages as provocation to come talk to me about this approach to integrating computing into non-CS subjects. You can see the YouTube version here. This is my attempt to show that each Teaspoon language can be learned in 10 minutes — I describe all of two of them in less than five minutes!
Outstanding Contribution Plenary
Friday, March 4 / 8:15 – 9:45
Ballroom A-E (RICC)
Barbara Ericson (University of Michigan)
Improving Diversity in Computing through Increased Access and Success
My goal is to increase diversity in computing. In this talk I explain why diversity is important to me. My strategy to improve diversity is to increase access and success. This work includes teacher professional development, summer camps, weekend workshops with youth serving organizations, curriculum development, helping states make systemic changes to computing education, publicizing gender and race issues in Advanced Placement Computer Science, creating free and interactive ebooks, testing new types of practice problems/tools, and offering near-peer mentoring programs.
Barbara Ericson is an Assistant Professor in the School of Information at the University of Michigan. She conducts research at the intersection of computing education, the learning sciences and HCI, to improve students’ access to and success in computing. With her husband and colleague, Dr. Mark Guzdial, she received the 2010 ACM Karl V. Karlstrom Outstanding Educator Award for their work on media computation. She was the 2012 winner of the A. Richard Newton Educator Award for her efforts to attract more females to computing. She is also an ACM Distinguished Member for Outstanding Educational Contributions to Computing.
Updates: NSF Funding to Study Learning with Teaspoon Languages for Discrete Mathematics
A few months before the pandemic started, Dr. Elise Lockwood at Oregon State reached out to me. She’d heard that I was interested in programming for teaching non-CS subjects, and that’s what she was doing. I loved what she was doing, and we started having regular chats.
Elise is a mathematics education researcher who has been studying how students come to understand counting problems. Like “If you have three letters and four digits, how many license plates can you make?” Or “How many two letter words can you make from the letters ROCKET, if you don’t allow double letters?” She’s been exploring having students learn counting problems by manipulating Python programs to generate all the possible combinations, then counting them. (Check out her recent papers on her Google Scholar page, especially those with her student Adaline De Chenne.)
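The enumerate-then-count approach is easy to sketch in ordinary Python (my sketch here, not Elise’s actual course materials): generate every possibility explicitly, then count them.

```python
from itertools import permutations

# "How many two-letter words can you make from ROCKET, with no double letters?"
# Generate every possibility explicitly, then count.
words = ["".join(p) for p in permutations("ROCKET", 2)]
print(words[:5])   # ['RO', 'RC', 'RK', 'RE', 'RT']
print(len(words))  # 30 -- 6 choices for the first letter, 5 for the second

# The license-plate problem (three letters, then four digits) is too big to
# enumerate comfortably, but the same multiplication principle applies:
print(26**3 * 10**4)  # 175760000 possible plates
```

Seeing the generated list, not just the final number, is what lets students check their counting reasoning against the program.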
As I said, I loved what she was doing, but Python seemed heavy-handed for this. I was starting to work on our Teaspoon languages. Could we build lighter-weight languages for the same problems?
As I kept reading Elise’s papers, I started working on two possible designs.
In one of them (called Counting Sheets), we play off of students’ understanding of spreadsheets. You can just describe what you want in each column, and the system will exhaustively generate every combination:

Or you can use an “=” formula that knows how to do very simple operations with sets. Here’s a solution to the problem of forming two-letter words from ROCKET without repeated letters:

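For comparison, the same declarative, set-based idea can be sketched as a Python set comprehension (my rendering for illustration, not the actual Counting Sheets formula syntax):

```python
# Declarative version: describe the set of words you want; the iteration
# machinery stays implicit. (A Python sketch, not Counting Sheets syntax.)
rocket = set("ROCKET")
words = {first + second
         for first in rocket
         for second in rocket
         if first != second}  # forbid double letters
print(len(words))  # 30
```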
This is one of the tools in which we’ve been building support for both Spanish and English keywords (like Pixel Equations, which I talked about last September):

Elise found Counting Sheets intriguing, but she worried whether making the iterative structures implicit and declarative would work. Would students need to see the iteration to be able to reason about the counting processes?
So, I built a second Teaspoon language, called Programmed Counting. Here, the loops are explicit, as in Python, but the only variable type is a set, and the words and phrases of the language come from counting problems.
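A rough Python analogue of that design (hypothetical; Programmed Counting’s own keywords come from counting problems, not from Python) makes the iteration visible while keeping sets as the only collection type:

```python
# Explicit-iteration version: the loops students reason about are visible,
# and every variable holds a set, echoing Programmed Counting's design.
letters = set("ROCKET")
words = set()
for first in letters:
    for second in letters - {first}:  # set difference rules out double letters
        words.add(first + second)
print(len(words))  # 30
```

The contrast with the declarative Counting Sheets style is the point: same answer, but here the nested-loop structure that generates the combinations is on the surface for students to inspect.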

Elise was a real sport, trying out the languages as I generated prototypes and finding the holes in what I was doing. We met face-to-face only once, when I went to Portland for SIGCSE 2020 — the one that got cancelled the very morning it was supposed to start. I had lunch with Elise, and we worked for a few hours on the designs. Barb and I went home the next day, and the big pandemic lockdown started right afterwards.
Will these work for learning? We don’t know — but we just got funding from NSF to find out! “We” here is me and PhD student Emma Dodoo, and we’ll be involving Adaline as a consultant. Elise is currently a rotator at NSF, so she’s involved only from the sidelines because of NSF COI issues. Our plan is to run experiments with various combinations of the Teaspoon languages (one or both), standalone and with Python. Do we need Python if we have the Teaspoon languages? Do the Teaspoon languages serve as scaffolding to introduce concepts before starting into Python?
Below is the abstract on the new IUSE grant, as an overview of the project. University of Michigan CSE Communications wrote a nice article about the work, available here. Huge thanks to Jessie Houghton, Angela Li, and Derrick White who turned my LiveCode prototypes into functioning Web versions.
Abstract for NSF
Programming is a powerful tool that scientists, engineers, and mathematicians use to gain insight into their problems. Educators have shown how programming integrated into other subjects can be a powerful tool to enhance learning, from algebra to language arts. However, the cost is learning the programming language. Few students in the US learn programming — less than 5% of high school students nationwide. Most students do not have the opportunity to use programming to support their learning. This project is investigating a new approach to designing and implementing programming languages in classrooms: Task-specific programming (TSP) languages. TSP languages are explicitly designed for integration in specific classes, to meet teacher needs, and to be usable with less than 10 minutes of instruction. TSP languages can make the power of programming to enhance learning more accessible. This project will test the value of TSP languages in discrete mathematics, which is a gateway course in some computer science programs.
The proposed project tests the use of two different TSP languages, contrasting them with a traditional programming language, Python. The proposed work will contribute to understanding of (1) the role of programming in learning in discrete mathematics, (2) the value of task-specific languages to scaffold learning, (3) how alternative representational forms for programming influence student use of TSP languages, and (4) how the use of TSP languages alone or in combination with traditional languages enhances students’ sense of authenticity and ability to transfer knowledge.
Updates: Developing the University of Michigan LSA Program in Computing for the Arts and Science
This blog is pretty old. I started it in June 2009 — almost 13 years ago. The pace of posting has varied from every day (today, I can’t understand how I ever did that!) to once every couple of months (most recently). There are things happening around here that are worth sharing and might be valuable to some readers, but I’m not finding much time to write. So, the posts the rest of this week will be quick updates with links for more information.
During most of the pandemic, I co-chaired (with Gus Evrard, a Physics professor and computational cosmologist) the Computing Education Task Force (website) for the University of Michigan’s College of Literature, Science, and the Arts (LSA). LSA is huge — about 20K students. (I blogged about this effort in April of last year.) Our job was to figure out what LSA was doing in computing education, and what else was needed. Back in November, I talked here about the three themes that we identified as computing education in LSA:
- Computing for Discovery: Think computational science, or data science + modeling and simulation.
- Computing for Expression: Think chatbots to Pixar to social media to Media Computation.
- Computing for Justice: Think critical computing and everything that C.P. Snow and Peter Naur warned us about computing back in the 1960s.
Our report was released last month. You can see the release statement here, and the full report here. It’s a big report, covering dozens of interviews, a hundred survey responses, and a huge effort searching over syllabi and course descriptions to find where computing is in LSA. We made recommendations about creating a new program, new courses, new majors and minors, and coordinating computing education across LSA.
Now, we’re in the next phase — acting on the recommendations. LSA bought me out of my teaching for this semester, and it’s my full-time job to define a computing education program for LSA and to create the first courses in the program. We’re calling it the Program for Computing in the Arts and Science (PCAS). I’m designing courses for the Computing for Expression and Computing for Justice themes, in an active dialogue (drawing on the participatory design methods I learned from Betsy DiSalvo) with advisors from across LSA. (There are courses in LSA that can serve as introductions to the Computing for Discovery theme, and Gus is leading the effort to coordinate them.) The plan is to put up the program this summer, and I’ll start teaching the new courses in the Fall.
Helping social studies teachers to teach data literacy with Teaspoon languages
Last year, Tammy Shreiner and I received NSF funding to develop and evaluate computational supports for helping social studies teachers to teach data literacy and computing (see post here). We’re excited about what we’re doing and what we’re learning. Here’s an update on where we are on the project.
Teaspoon Languages
We have a chapter in the new book by Aman Yadav and Ulf Dalvad Berthelsen, Computational Thinking in Education: A Pedagogical Perspective. This is the publication where we introduce the idea of Teaspoon Languages. Teaspoon languages are a form of task-specific languages (TSP => Teaspoon — see?). Teaspoon languages:
- Support learning tasks that teachers (typically non-CS teachers) want students to achieve;
- Are programming languages, in that they specify computational processes for a computational agent to execute; and
- Are learnable in less than 10 minutes, so that they can be learned and used in a one hour lesson. If the language is never used again, it wasn’t a significant learning cost and still provided the benefit of a computational lesson.
We say that we’re adding a teaspoon of computing to other subjects. The idea is to address the goal of “CS for All” by integrating computing into other subjects, placing the non-CS subjects first. We believe that programming can be useful in learning other subjects. Our primary goal is to meet learning objectives outside of CS using programming. Teachers (and students eventually) will be learning foundational CS content — but not necessarily the content we typically teach in CS classes. All students should learn that a program is non-WYSIWYG, that it’s a specification of a computational process that gets interpreted by a computational agent, that programming languages can come in many forms, and that all students can be successful at programming.
Our chapter, “Integrating Computing through Task-Specific Programming for Disciplinary Relevance: Considerations and Examples” (see link here) offers two use cases of how we imagine teaspoon languages working in classrooms (history and language arts in these examples). The first use case is around DV4L, our Data Visualization for Learning tool. The second is around a chatbot language that we developed — and have long since discarded.
We develop our teaspoon languages in a participatory design process, where teachers try our prototypes in authentic tasks as design probes, and then they tell us what we got wrong and what they really want. Our current iteration is called Charla-bots and is notable for having user-definable languages. We have a variety of Charla-bot languages now, with English, Spanish, and mixed keywords.
Our vision for teaspoon languages is a contrast with the “Hour of Code” approach. The “Hour of Code” is a one hour programming activity that many schools use in every grade, typically once a year during CS Ed Week (in early December). The great idea there is to build familiarity and confidence in programming by showing students real computer science every year. The teaspoon languages approach is to imagine one or two little programming learning activities in every social studies, language arts, and mathematics class every year. Each of these languages is tiny and different. The goal is that by the time US students take a CS class (typically, in high school or undergraduate), they will have had many programming experiences, have seen a variety of types of programming languages, and have a sense that “programming isn’t hard.”
Meeting the Needs of Social Studies Teachers
The second paper, “Using Participatory Design Research to Support the Teaching and Learning of Data Literacy in Social Studies” (see link here) was just presented in October by Tammy at CUFA, the College and University Faculty Assembly 2021 of the National Council of the Social Studies. (We have a longer form of this paper that we have just submitted to a journal.) This is an exciting paper for me because it’s exactly addressing the critical challenge in our work. We can design and implement all kinds of prototype Teaspoon languages, but to achieve our goals, teachers in disciplines other than CS have to see value and adopt them.
The paper is about our workshops with practicing social studies teachers. Tammy has a goal to teach social studies teachers how to teach data literacy. She has built a large open educational resource (OER) on teaching data literacy in social studies. Learning data literacy involves being able to read, comprehend, and argue with data visualizations, but also being able to create them. That’s where we come in. Her OER links to several tools for creating data visualizations, like Timeline JS, CODAP, and GapMinder. Most of them were not created for social studies teachers or classes. When we run these workshops, our tools are just in-the-mix. We offer scaffolding for using all of them. These are our design probes. The teachers use the tools and then tell us what they really want. These are our data, and we analyze them in detail — as in this paper.
Let’s jump to the bottom line: We’re not there yet. The teachers love the OER, but get confused about what they should do in their classes. They find the tools for data visualization fascinating, but overwhelming. They like DV4L a lot:
One pre-service teacher explained that they preferred our prototype over other tools because “(with the prototype DV4L) I found myself asking questions connected to the data itself, rather than asking questions in order to figure out how to work the visual.”
Recently, I held a focus group with some social studies teachers who told me that they won’t use any computational tools — they believe in teaching data visualization, but all created with pencil and ruler. That’s our challenge: Can we be more powerful, more enticing, and easy enough to beat out pencil and ruler? Our tool, DV4L, is purpose-built for these teachers, and they appreciate its advantages — and yet, few are adopting it. That’s where we need to work next.
Opportunities for Social Studies Teachers to Get Involved
If you know a social studies teacher who would want to keep informed about our work and perhaps participate in our workshops or studies, please have them sign up on our mailing list. Thank you!
Often, what teachers tell us they really want suggests new features or entirely new tools. We have two ongoing studies where we are looking for design feedback from social studies teachers. If you know social studies teachers who would like to play with something new (and we’ll pay them for their time), would you please forward these to them?
Timeline Builder
We’re looking for K-12 Social Studies teachers to try out our new timeline visualization tool, TimelineBuilder. TimelineBuilder was made with teachers and usability in mind. In it, ‘events’ are added to a timeline using a form-based interface, and changes to the timeline appear immediately, with events showing up as soon as they are added.
This study will consist of completing 2 surveys and 3 asynchronous activities guided by worksheets. All participants will be compensated with a $20 gift card for survey and activity completion. There is an additional option to be invited to a focus group, which will provide additional compensation.
If you are interested in participating in this study, you can complete the consent form and 1st survey here. (Plain text Link: https://forms.gle/gwxfn5bRgTjyothF6 )
Please contact Mark Guzdial (mjguz@umich.edu) or Tamara Nelson-Fromm (tamaranf@umich.edu) with any questions.
The University of Michigan Institutional Review Board Health Sciences and Behavioral Sciences has determined that this study is exempt from IRB oversight.

DV4L Scripting Study
Through our work with social studies educators thus far, we have designed the tools DV4L-Basic and DV4L-Scripting specifically to support data literacy standards in social studies classrooms. If you are a social studies middle or high school teacher, we would love to hear your feedback. If you can spare less than an hour of your time to participate in our study, we will send you a $50 gift card for your time and valuable feedback.
If you are interested but want more details, please visit/complete the consent form here: https://forms.gle/yo3yWGThQ1wnhu7g7
For questions or concerns, please contact Mark Guzdial (mjguz@umich.edu) or Bahare Naimipour (baharen@umich.edu).

References
Guzdial, Mark, and Tamara L. Shreiner. 2021. “Integrating Computing through Task-Specific Programming for Disciplinary Relevance: Considerations and Examples.” In Computational Thinking in Education: A Pedagogical Perspective, Aman Yadav and Ulf Dalvad Berthelsen (Eds). PDF of submitted version.
Shreiner, Tamara L., Mark Guzdial, and Bahare Naimipour. 2021. “Using Participatory Design Research to Support the Teaching and Learning of Data Literacy in Social Studies.” Presented at CUFA, the College and University Faculty Assembly 2021 of the National Council of the Social Studies. PDF