PCAS Expansion, Growth, Research, and SIGCSE 2024 Presentations

The ACM SIGCSE Technical Symposium is March 20-23 in Portland (see website here). I rarely blog these days, but the SIGCSE TS is a reminder to update y’all with what’s going on in the College of Literature, Science, & the Arts (LSA) Program in Computing for the Arts and Sciences (PCAS). PCAS is my main activity these days. Here’s the link to the PCAS website, which Tyrone Stewart and Kelly Campbell have done a great job creating and maintaining. (Check out our Instagram posts on the front page!)

PCAS Expansion

I’ve blogged about our first two courses, COMPFOR (COMPuting FOR) 111 “Computing’s Impact on Justice: From Text to the Web” and COMPFOR 121 “Computing for Creative Expression.” Now, we’re up to eight courses (see all the courses described here). As I mentioned at the start of PCAS, we think about computing in LSA in three themes: Computing for Discovery, Expression, and Justice. Several of these courses are collaborations with other departments, like our Discovery classes with Physics, Biophysics, Ecology and Evolutionary Biology, and Linguistics.

This semester, I’m teaching two brand new courses. That means that I’m creating them just ahead of the students. I did this in Fall 2022 for our first two courses (see links to the course pages here with a description of our participatory design process), and I hope to never do this again. It’s quite a sprint to always be generating material, all semester long, for about a hundred students.

One course is like the Media Computation course I developed at Georgia Tech, but in Python 3: COMPFOR 221: Digital Media with Python. The course title has changed. When we first offered it, we called it “Python Programming for Digital Media,” and at the end of registration, we had only five students enrolled! We sent out some surveys and found that we’d mis-named it. Students read “Python Programming” and skipped the rest. The class filled once we changed our messaging to emphasize Digital Media first.

When we taught Media Computation at Georgia Tech, we used Jython and our purpose-built IDE, JES. Today, there's jes4py, which provides the JES media API in Python 3. I had no idea how hard it is to install libraries in Python 3 today! I'm grateful to Ben Shapiro at U-W, who helped me figure out a bunch of fixes for different installation problems (see our multi-page installation guide).
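For readers who haven't seen the JES media API, here's a minimal sketch of the kind of program students write with it, assuming jes4py installed cleanly and exposes the classic JES function names (the particular filter is just an illustration, not an assignment from the course):

```python
# A sketch of a JES-style media computation program using jes4py.
# Assumes jes4py is installed and provides the classic JES media API.
from jes4py import *

picture = makePicture(pickAFile())     # ask the user to choose an image file
for pixel in getPixels(picture):       # visit every pixel in the picture
    setRed(pixel, getRed(pixel) // 2)  # halve the red channel (values are 0-255)
show(picture)                          # display the modified picture
```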

The second is more ambitious. It's a course on Generative AI, with a particular focus on how it differs from human intelligence. We call it Alien Anatomy: How ChatGPT Works. It's a special-topics course this semester, but in the future it'll be a 200-level (second-year undergraduate) course with no prerequisites, open to all LSA students, so we're relying on teaspoon languages and Snap! with a little Python. I'm team-teaching with Steve Abney, a computational linguist. Steve actually understands LLMs; I knew very little going in. He's been a patient teacher and a great partner on this. I've had to learn a lot, and we're relying heavily on the great Generative AI Snap! projects that Jens Mönig has been creating, SciSnap from Eckart Modrow, and Ken Kahn's blocks that provide an API to TensorFlow.

As of January, we are approved to offer two minors: Computing for Expression and Computing for Scientific Discovery. We have about a half dozen students enrolled so far in the minors, which is pretty good for three months in.

PCAS Growth

When I offered the first two courses in Fall 2022, we had 11 students in Expression and 14 in Justice. Now, we’re up to 308 students enrolled. That’s probably our biggest challenge — managing growth and figuring out how to sustain it.

Research in PCAS

We’re starting to publish some of what we’re learning from PCAS. Last November, Gus Evrard and I published a paper at the Koli Calling International Conference on Computing Education Research about the process that we followed co-chairing the LSA Computing Education Task Force to figure out what LSA needed in computing education. That paper, Identifying the Computing Education Needs of Liberal Arts and Sciences Students, won a Best Discussion Paper Award. Here’s Gus and me at the conference banquet when we got our award.

Tamara Nelson-Fromm just presented a paper at the 2024 PLATEAU Workshop on evidence we have suggesting transfer of learning from teaspoon languages into our custom Snap! blocks. I’ll wait until those papers are released to tell you more about that.

SIGCSE 2024 Presentations

We’re pretty busy at SIGCSE 2024, and almost all of our presentations are connected to PCAS.

Thursday morning, I'm on a panel led by Kate Lehman on "Re-Making CS Departments for Generation CS," 10:45 – 12:00 at Oregon Ballroom 203. This is going to be a hardball panel. Yes, we'll talk about radical change, but be warned that Aman and I are on the far end of the spectrum. Aman is going to talk about burning down the current CS departments to start over. I'm going to talk about giving up on traditional CS departments ever addressing the needs of Generation CS (because they're too busy doing something else) and argue that we need more new programs like PCAS. I'm looking forward to hearing from all the panelists — it'll be a fun session.

Thursday just after lunch, Neil Brown and I are presenting our paper, Confidence vs Insight: Big and Rich Data in Computing Education Research, 13:45 – 14:10 at Meeting Room D135. It's an unusual computing education research paper because we're making an argument, not offering an empirical study. We're both annoyed at SIGCSE reviewers who ask for contextual information (Who were these students? What programming assignments were they working on? What was their school like?) from big (millions of points) data, and then complain about small sample sizes from rich data with interviews, personal connections, and contextual information. In the paper, we make an argument about what questions are reasonable to ask of each kind of data. In the presentation, the gloves come off, and we show actual reviews. (There are also costumes.)

We don't really get into why SIGCSE reviewers evaluate papers with criteria that don't match the data, but I have a hypothesis. SIGCSE reviewers are almost all CS teachers, and they read a paper asking, "Does this impact how I teach? Does it tell me what I need to do in my class? Does it convince me to change?" Those questions alone are too short-sighted. We need papers that answer those questions to help us with our current problems, but we also need knowledge for the next set of problems (like when we start teaching entirely new groups of students). The right question for evaluating a computing education research paper is, "Does this tell the computing education research community (not you the reviewer, personally, based on your experience) something we didn't know that's worth knowing, maybe in the future?"

At the NSF Project Showcase Thursday 15:45 – 17:00 at Meeting Rooms E143-144, Tamara Nelson-Fromm is going to show where we are on our Discrete Mathematics project. She’ll demonstrate and share links to our ebooks for solving counting problems with Python and with one of our teaspoon languages, Counting Sheets.

In the "second flock" of Birds of a Feather sessions Thursday 18:30 – 19:20 at Meeting Room D136, we're going to be a part of Zach Dodds's group on "Computing as a University Graduation Requirement". There's a real movement towards building out computing courses for everyone, not just CS majors, as we're doing in PCAS. Zach is pushing further, for a general education requirement. I'm excited for the session to hear what everyone is doing.

On Saturday afternoon 15:30 – 18:30 at Meeting Room B113, I'll offer a three-hour workshop on how we teach in our PCAS courses for arts and humanities students, with teaspoon languages, custom Snap! blocks, and ebooks. Brian Miller has been teaching these courses this year, and he's kindly letting me share the materials he's been developing — he's made some great improvements over what I did. This workshop was inspired by a comment from Joshua Paley on our initial posts about how we're teaching, where he asked if I'd do a SIGCSE workshop on how we're teaching PCAS. I will, on Saturday!

March 18, 2024 at 8:00 am

What Humanities Scholars Want Students To Know About the Internet: Alternative Paths for Alternative Endpoints

After we got the go-ahead to start developing PCAS (see an update on PCAS here), I had meetings with a wide range of liberal arts and sciences faculty. I'd ask faculty how they used computing in their work and what they wanted their students to know about computing. Some faculty had suggested that I talk to history professor LaKisha Michelle Simmons. I met with her in January 2022, and she changed how I thought about what we were doing in PCAS.

I told her that I'd heard that she built websites to explain history research to the general public, and she stopped me. "No, no — my students build websites. I don't build websites." I asked her what she would like her students to know about the Internet, and offered some possibilities: "I could teach them about how the Internet works, with packets and IP addresses. I could explain about servers and domain names."

She said no. She was less interested in how the Internet worked. She had three specific things she wanted me to teach students.

  1. There are things called databases.
  2. Databases, if they are designed well, make it easy to index and find information.
  3. Databases can be used to automatically generate Web pages.

Her list explains a huge part of the Web, but it was completely orthogonal to what I was thinking about teaching. She wasn't asking me to teach tools. She wanted me to teach fundamental concepts. She wanted students to have an understanding of a set of technologies and ideas, and the students really didn't need IP addresses and packets to understand them.

The important insight for me was that the computing that she was asking for was a reasonable set, but different from what we normally teach. These are advanced CS ideas in most undergraduate programs, typically coming after a lot of data structures and algorithms. From her perspective, these were fundamental ideas. She didn’t see the need for the stuff we normally teach first.

I put learning objectives related to her points on the whiteboards in my participatory design sessions. This showed up in the upper-right hand corner of the Justice class whiteboard — the most important learning objective. LaKisha gave me the learning objectives, and the humanities scholars who advised me supported what she said. This became a top priority for our class Computing’s Impact on Justice: From Text to the Web.

Figuring out how

During the summer of 2022, a PhD student working with me, Tamara Nelson-Fromm, a group of undergraduate assistants, and I worked on figuring out how to achieve these goals. We had to find ways for students to work toward LaKisha's three learning objectives without complicated tools. We were committed to having students program and construct things — we didn't want this to be a lecture and concepts-only class.

We were already planning on using Snap, and it has built-in support for working with CSV files. Undergraduate Fuchun Wang created a great set of blocks explicitly designed to look like SQL for manipulating CSV files. We used these blocks to talk about queries and database design in the class.

Tamara and I talked a lot about how to make the HTML part work. I had promised our advisors that we would not require LSA students to install anything on their computers in the intro courses. We talked about the possibility of building a teaspoon language for Web page development and as a templating tool for databases, but I was worried that we were already throwing so many languages at the students.

Then it occurred to us that we could do this all with Snap. We built a set of blocks to represent the structure of an HTML page, like in this example. Since we could define our own control structures in Snap, we could present the pieces of a Web page nested inside other blocks, to mirror the nested structure of the tags.

Those last two blocks were key. The view webpage block displays the first 50 lines of the input HTML on the stage. That's important so that students see the mapping from blocks to HTML. The open HTML page block opens a browser window and renders the HTML into it. (That was a tricky hack to get working.)

This was enough for us to talk about building Web pages in Snap, viewing the HTML, then rendering the HTML in the browser. Here's a slide from the class. In deciding which computer science ideas to emphasize, I used the work of Tom Park, who studied student errors in HTML and CSS and found that the ideas of hierarchy and encapsulation were a major source of error. Those are important ideas across computing, so I used them as themes across the CS instruction — and the structure we could build in the Snap blocks helped to present those ideas.

All of that together is enough to build Web pages from database queries. Here’s an example — querying the billionaires database from Forbes for those from Microsoft, then creating a Web page form letter asking them for money.
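In class, all of this is done with the Snap! blocks, but here's a rough Python sketch of the same idea for readers who think in text. The file name and column names (billionaires.csv, Name, Source) are invented for illustration and don't match the actual class data:

```python
# A rough Python sketch of the Snap! activity: "query" a CSV database, then
# generate and open an HTML form letter for each matching row.
# The file name and column names here are invented for illustration.
import csv
import webbrowser
from pathlib import Path

with open("billionaires.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# The query: select the rows whose wealth source is Microsoft.
matches = [row for row in rows if row["Source"] == "Microsoft"]

# Generate a small HTML page (the form letter) for each result and open it,
# which is roughly what the "open HTML page" block does on the Snap! side.
for person in matches:
    html = (
        "<html><body>"
        f"<h1>Dear {person['Name']},</h1>"
        "<p>Would you consider supporting our computing education program?</p>"
        "</body></html>"
    )
    path = Path(f"letter_{person['Name']}.html")
    path.write_text(html, encoding="utf-8")
    webbrowser.open(path.resolve().as_uri())
```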

We use these blocks in both of our classes:

  • In the Justice class, students use the HTML blocks to create a resume for a fictional or historical character in a homework assignment. In a bigger project, students design their own database of anything they want, then create two queries. One query should return 1-3 items and generate a detail page for each of those items. The second should return several items and generate an overview page for that set of items.
  • In the Expression class, building an HTML page is the last homework. They use style rules and have to embed a Snap project so that there’s interactivity in the page. Here’s a slide from the class where we’re showing how adding style rules changes the look-and-feel of an HTML page.

Alternative Paths to Alternative Endpoints

Mike Tissenbaum, David Weintrop, Nathan Holbert, and Tammy Clegg have a paper that I really like called “The case for alternative endpoints in computing education” (BJET link, UIUC repository link). They argue “for why we need more and diverse endpoints to computing education. That many possible endpoints for computing education can be more inclusive, just and equitable than software engineering.” I strongly agree with them, but I learned from this process that there are also alternative paths.

Computer science sequences don’t usually start with databases, HTML, and building web pages from database queries, but that’s what my humanities scholars advisors wanted. Computer science usually starts from algorithms, data structures, and writing robust and secure code, which our scholars did not want. Our PCAS courses are certainly about alternative endpoints — we’re not preparing students to be professional software developers. We’re also showing that we can start from a different place, and introduce “advanced” ideas even in the first class. Computing education isn’t a sequence — it’s a network.

July 17, 2023 at 7:00 am

A Scaffolded Approach into Programming for Arts and Humanities Majors: ITiCSE 2023 Tips and Techniques Papers

I am presenting two "Tips and Techniques" papers at the ITiCSE 2023 conference in Turku, Finland on Tuesday July 11th. The papers present the same scaffolded sequence of programming languages and activities, just in two different contexts. The complete slide deck in PowerPoint is here. (There's a lot more in there than just the two talks, so it's over 100 MB.)

When I met with my advisors on our new PCAS courses (see previous blog post), one of the overarching messages was "Don't scare them off!" Faculty told me that some of my arts and humanities students would be put off by mathematics and might have had negative experiences with (or perceptions of) programming. I was warned to start gently. I developed this pattern as a way of easing into programming, while showing the connections throughout.

The pattern is:

  1. Introduce computer representations, algorithms, and terms using a teaspoon language. We spend less than 10 minutes introducing the language, and 30-40 minutes total of class time (including student in-class activities). It's about getting started at low cost (in time and effort).
  2. Move to Snap! with custom blocks explicitly designed to be similar to the teaspoon language. We design the blocks to promote transfer, so that the language is similar (surface-level terms) and the notional machine is similar. Students do homework assignments in Snap!.
  3. At the end of the unit, students use a Runestone ebook, with a chapter for each unit. The ebook chapter has (1) a Snap! program seen in class, (2) a Python or Processing program that does the same thing, and (3) multiple-choice questions about the textual program. These questions were inspired by discussions with Ethel Tshukudu and Felienne Hermans last summer at Dagstuhl, where they gave me advice on how to promote transfer — I'm grateful for their expertise.

I always teach with peer instruction now (because of the many arguments for it), so steps 1 and 2 have lots of questions and activities for students throughout. These are in the talk slides.

Digital Image Filters

The first paper is “Scaffolding to Support Liberal Arts Students Learning to Program on Photographs” (submitted version of paper here). We use this unit in this course: COMPFOR 121: Computing for Creative Expression.

Step 1: The teaspoon language is Pixel Equations, which I blogged about here. You can run it here.

Students choose an input image to manipulate, then specify their image filter by (a) writing a logical expression describing the pixels that they want to manipulate and (b) writing equations for how to compute the red, green, and blue channels for those pixels. Values for each channel are 0 to 255, and we talk about single-byte values per channel. The equations for specifying the channel changes can also reference the previous values of the channels, using the variables red, green, blue, rojo, verde, or azul.
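To give a sense of what students are expressing, here's a made-up filter written out longhand in Python with Pillow: the if-test plays the role of the logical expression (a), and the assignment plays the role of the channel equations (b). In Pixel Equations itself, students write only the expression and the three equations; the loop over pixels is implicit:

```python
# Longhand Python/Pillow version of a made-up Pixel Equations-style filter.
# (a) the if-test selects which pixels to change; (b) the tuple computes the
# new red, green, and blue values from the old ones.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")   # any input image
pixels = img.load()

for x in range(img.width):
    for y in range(img.height):
        red, green, blue = pixels[x, y]
        if red > 100:                       # (a) logical expression over the pixel
            pixels[x, y] = (255 - red,      # (b) new red from the old red
                            255 - green,    #     new green from the old green
                            255 - blue)     #     new blue from the old blue

img.save("filtered.jpg")
```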

Step 2: The latest version of the pixel microworld for Snap is available here. Click See Code to see all the examples — I leave lots of worked examples in the projects, as a starting point for homework and other projects.

Here’s what negation looks like:

Here’s an example of replacing a green background with the Alice character so that Alice is standing in front of a waterfall.

The homework assignment here involves students creating their own image filters, then generating a collage of their own images (photographed or drawn) in both their original and filtered forms.

Step 3: The Runestone ebook chapter on pixels is here.

Questions after the Python code include "Why do we have the for loop above?" and "What would happen if we changed all the 255 values to 120? (Yes, it's totally fair to actually try it.)"

Recognizing and Generating Human Language

The second paper is "Scaffolding to Support Humanities Students Programming in a Human Language Context" (submitted version here). I originally developed this unit for COMPFOR 111: Computing's Impact on Justice: From Text to the Web, because we use chatbots early in the course. But then I added chatbots as an expressive medium to the Expression course, and we use parts of this unit in that course, too.

Step 1: I created a little teaspoon language for sentence recognition and generation — the first time that I've created a teaspoon language with myself as the teacher, because I needed one for my own course context. The language is available here (you switch between recognition and generation from a link in the upper left corner).

The program here is a sentence model. It can use five word categories: noun, verb, adverb, adjective, and article. Above the sentence model is the dictionary or lexicon. Sentence generation creates 10 random sentences from the model. Sentence recognition also takes an input sentence, then tries to match the elements in the model to the input sentence. I explain the recognition behavior like this:

This is very simple, but it’s enough to create opportunities to debug and question how things work.

  • I give students sentences and models to try. Why is “The lazy dog runs to the student quickly” recognized as “noun verb noun” but “The lazy dog runs to the house quickly” not recognized? Because “house” isn’t in the original lexicon. As students add words to the lexicon, we can talk about program behavior being driven by both algorithm and data (which sets us up for talking about the importance of training data when creating ML systems later).
  • I give them sentences in different English dialects and ask them to explore how to make models and lexicons that can match all the different forms.
  • For generation, I ask them: Which leads to better generated sentences? Smaller models ("noun verb") or larger models ("article adjective noun verb adverb")? Does adding more words to the lexicon help? Or does tuning the words that are in the lexicon?
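To make the sentence-model idea concrete before moving on to the Snap! blocks, here's a small Python sketch of the generation side. The model and lexicon below are invented for illustration; the actual teaspoon language runs in the browser and handles recognition as well:

```python
# Sketch of sentence generation from a sentence model and a lexicon.
# The model is a sequence of word-category labels; the lexicon maps each
# label to the words that may fill that slot. All of the data is made up.
import random

lexicon = {
    "article":   ["the", "a"],
    "adjective": ["lazy", "curious"],
    "noun":      ["dog", "student", "house"],
    "verb":      ["runs", "sleeps"],
    "adverb":    ["quickly", "quietly"],
}

model = ["article", "adjective", "noun", "verb", "adverb"]

def generate_sentence(model, lexicon):
    """Choose one word from the lexicon for each label in the model."""
    return " ".join(random.choice(lexicon[label]) for label in model)

# Like the teaspoon language, generate 10 random sentences from the model.
for _ in range(10):
    print(generate_sentence(model, lexicon))
```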

Step 2: Tamara Nelson-Fromm built the first set of blocks for language recognition and generation, and I’ve added to them since. These include blocks for language recognition.

And language generation.

The examples in this section are fun. We create politically biased bots that tweet something negative about one party, listen for responses about their own party, and then retort with something positive about their party.

We create scripts that generate Dr. Seuss-like rhymes.

The homework in this section is to generate haiku.

Step 3: The Runestone ebook chapter for this unit is here. The chapter starts with the sentence generator, then Snap! blocks that do the same thing, and then two different Python programs that do the same thing. We ask questions like "Which of the following is a sentence that could NOT be produced from the code above?" and "Let's say that you want to make it possible for it to generate 'A curious boat floats.' Which of the lines below do you NOT have to change?"

Where might this pattern be useful?

We don't use this whole three-step pattern for every unit in these classes. We do something similar for chatbots, but that's really it. Teaspoon languages in these classes are about getting started, to get past the "I like computers. I hate coding" stage (as described by Paulina Haduong in a paper I cite often). We use the latter two steps in the pattern more often — each class has an ebook with four or five chapters. The Snap to Python steps are about increasing the authenticity of the block-based programming and developing confidence that students can transfer their knowledge.

I developed this pattern to give non-STEM (arts and humanities) students a gradual, scaffolded approach to programming, but it could be useful in other contexts:

  • We originally developed teaspoon languages for integrating computing into other subjects. The first two steps in this process might be useful in non-CS classes to create a path into Snap programming.
  • The latter two steps might be useful to promote transfer from block-based into textual programming.

July 10, 2023 at 1:00 am

Participatory Design to Set Standards for PCAS Courses

My main activity for the last year has been building two new courses for our new Program in Computing for the Arts and Sciences (PCAS), which I’ve blogged about recently here (with video of a talk about PCAS) and here where I described our launch. Here are the detailed pages describing the courses (e.g., including assignments and examples of students’ work):

When we got the go-ahead to start developing PCAS last year, the first question was, "Well, what should we teach?" The ACM/IEEE Computing Curriculum volumes weren't going to be much help. They're answering the question "What do CS, Software Engineering, Information Technology, etc. majors need to know?" They're not answering the question, "What do students in liberal arts and sciences need to know about Computing for Discovery, for Expression, and for Justice?"

My starting place was the Computing Education Task Force (CETF) report (see link here) which summarized dozens of hours of interviews and survey results from over 100 faculty. We decided that the first two courses would be on Expression and Justice. There already were classes that introduced programming in a Discovery framing in some places on campus (and my colleague, Gus Evrard, has taken that even farther now — but that’s another blog post). There was nothing for first year students to introduce them to coding in an Expression or Justice context.

When faced with a design problem, I often think "WWBD": "What Would Betsy Do?" I learned about participatory design working with Betsy DiSalvo at Georgia Tech, and now I reach for those methods often. I created participatory design activities so that Expression and Justice faculty in our College of Literature, Science, and the Arts (LSA) could set the standards for these courses.

I created three Padlets, shared digital whiteboards. A group of people edit a whiteboard, and everyone can see everyone else’s edits.

  • One of them was filled with about 20 learning goals derived from the CETF report. These aren’t well-formed learning goals (e.g., not always framed in terms of what students should be able to do). These were what people said when we asked them “What should students in LSA learn about computing?” I wasn’t particularly thorough about this — I just grabbed a bunch that interested me when I reviewed the document and thought about what I might teach.
  • I created two more Padlets with possible learning activities for students in these classes. Yvette Granata had recommended several books to me on coding in Expression and Justice contexts, so a lot of the project ideas came out of those. These were things that I was actively considering for the courses.

I ran two big sessions (with some 1:1 discussions afterwards with advisors who couldn’t make the big sessions), one for Expression and one for Justice. These were on-line (via Zoom) with me, Aadarsh Padiyath (PhD student working with me and Barbara Ericson), and a set of advisors. The advisors were faculty who self-identified as working in Computing for Expression or Computing for Justice. The design sessions had the same format.

  1. I gave the advisors a copy of the learning goals Padlet. (Each session started with the same starting position.) I asked them as a group to move to the right those learning goals they wanted in the class and to move to the left those learning goals that they thought were less important. They did this activity over about 20-30 minutes, talking through their rationale and negotiating placement left-to-right.
  2. I then gave the advisors a copy of the learning activities Padlet. Again, I asked them to sort: right is important, left is less important. Again, about 20-30 minutes with lots of discussion.

We got transcripts from the discussions, and Aadarsh produced a terrific set of notes from each session. These were my standards for these courses. They guided me in deciding what goes into the courses and what to de-emphasize.

Below are the end states of the shared whiteboards. There’s a lot in here. Three things I find interesting:

  • Notice where the computer science goals like “Write secure, safe, and robust code” end up.
  • Notice what’s in the upper-right corner — I was surprised in both cases.
  • Notice that building chatbots is right-shifted for both Expression and Justice. Today, you'd say "Well, of course! ChatGPT!" But I held these sessions in February of 2022. The classes have a lot about chatbots in them, and that put us in a good place for integrating discussions about LLMs this last semester.

Expression Learning Goals

(To see the full-res version, right-click/control-click on the picture, and open the image in a new tab.)

Justice Learning Goals

Expression – Learning Activities

Justice – Learning Activities

These are Standards

The best description of how I used these whiteboards and the discussion notes is that these are my standards. My advisors said very clearly during the sessions — there are too many learning objectives and activities for one course here. Things on the left are not unimportant. They're just not as important. I can't possibly get to everything on these whiteboards in a single-semester class designed for arts and humanities students (as the primary audience) with no programming background and with some hesitancy about mathematics.

My advisors were designing in a vacuum. They weren't going to actually teach this course. Most of them had never seen a course that tried to achieve these objectives for this student audience. So they told me (for Justice), "Yeah, use Jupyter notebooks, and teach HTML and databases and code, all in one semester. And don't make students install anything on their computers — do it all in the browser." But they didn't really have an idea of how this might work, or whether it was possible. They also didn't articulate, "You'll probably have to teach about data and iteration and conditionals in here, too."

It was my job to use these standards as priorities, cover what I could, and fill in the computer science knowledge to make these doable. We are also using these standards to inform the next classes we make for PCAS. You can compare these whiteboards to the course pages linked at the top of this post to decide how well we did.

Overall, we use participatory design methods a lot as we design for PCAS, to get the input of faculty outside of CS, because these aren’t computer science classes. They are not CS0, CS1, or CS0.5, all of which presume a linear progression towards the goal of being a CS major. Yes, we’re teaching computer science knowledge and skills, but these are classes in Computing for Expression and Computing for Justice. The faculty in those areas are the authorities in what we should teach. They decide what’s important.

Side note: I've had these data for over a year, and even presented some of them in a poster at ITiCSE last year. I have been trying to figure out how to share them. Maybe this could have been a peer-reviewed publication (conference or journal)? I don't know. It's a design activity, and I learned a lot from it, but I don't know how to write about it as scholarship. I finally decided to write this blog post so that I could share the whole big Padlet whiteboards. Traditional publication venues would be unlikely to let me put these big pictures out there, but I can in a blog post.


My many thanks to my advisors for these classes: Yvette Granata, Catherine Griffiths, M. Remi Yergeau, Tony Bushner, Justin Schell, Jan Van den Bulck, Justin Joque, Sarita Schoenebeck, Nick Henricksen, Maggie Frye, Anne Cong-Huyen, and Matt Bui.

July 7, 2023 at 7:12 am

Putting a Teaspoon of Programming into Other Subjects (May 2023 Communications of the ACM): About Teaspoon Languages

In May, my students and I published a paper in Communications of the ACM, "Putting a Teaspoon of Programming into Other Subjects" (see link here), about our work with teaspoon languages. (The submitted, non-paywalled version is here.) It's a short Viewpoint, but within our 1,800-word limit we were able to squeeze in descriptions of a couple of teaspoon languages, a definition of them, a description of our participatory design process for them, and some of the research questions we're exploring with them, like what drives teacher adoption of teaspoon languages, how multilingual keywords can engage emerging bilingual students, and what challenges arise even with our simplified notions of programming.

My students helped me to be consistent with our language in this piece, which was so helpful. I've been talking about teaspoon languages for a while, and my language has likely changed over that time. They're challenging me to be more exact about what I mean.

For example, we use the phrase “teaspoon languages” and not “teaspoon programming languages.” The term “teaspoon” comes from the shorthand “TSP” for “Task-Specific Programming.” So, the “programming” bit is already in there. But in particular, I don’t want to generate the reaction, “But, hey, that doesn’t look like a real programming language…”

Programming languages are used to create software — preferably, software that is reliable, robust, safe, and secure. The programming languages research community works to make programming more effective for people who are using those languages to create software. Programming as an activity can also be used to solve problems and explore domains. We’re building languages for that latter purpose. Much of the programming that scientists and others do to solve problems and explore domains happens to be in programming languages that can also be used to create software (e.g., Python, R, Mathematica, MATLAB). Teaspoon languages (so far) can’t really be used to create software for someone else to execute. They’re not general. I don’t think any of the teaspoon languages that we have created are Turing complete. But teaspoon languages are used to define the behavior of a computational agent. It’s still programming.

Another question we hear quite a bit is "Isn't this just a domain-specific language?" We tried to answer that in the piece. Yes, teaspoon languages are a kind of domain-specific language, but for a very small domain — a single task. The most critical part of teaspoon languages is "They can be used by students for a task that is useful to a teacher." DSLs are so much bigger than teaspoon languages. Maybe we can use DSL tools one day to make teaspoon languages, but so far, we've built unique user interfaces and unique languages for each one. The focus is on meeting the need now, and we'll see if we ever get to generalizability and tools later.

The issues we study in our research with teaspoon languages don’t have much overlap with the programming languages research community. I don’t have good answers to questions like, “How do you support type safety?” or “Why can’t I define a lambda in Pixel Equations?” So, we’ll just call them “teaspoon languages” — and let the “programming” word be silent in there.

June 30, 2023 at 8:00 pm

How computing instructors plan to adapt to ChatGPT, GitHub Copilot, and other AI coding assistants (ICER 2023 paper): Guest blog post from Philip Guo

Hi everyone! This is Philip Guo from UC San Diego. My Ph.D. student (and soon-to-be faculty colleague) Sam Lau and I are presenting a paper at ICER 2023 on a topic that’s been at the top of many of our colleagues’ minds in recent months:

How are instructors planning to adapt their programming-related courses as more and more students start using AI coding assistance tools such as ChatGPT and GitHub Copilot?

A series of studies throughout 2021 and 2022 showed that GitHub Copilot can solve many kinds of CS1 and CS2 homework problems (see Section 3 of our paper for a summary of these studies). However, Copilot still requires users to install, configure, and activate it within an IDE like Visual Studio Code, which can be hard for novices to do. So when ChatGPT launched at the end of 2022, it made this AI technology far more accessible and easier to use. Now any student can visit the ChatGPT website, copy-paste in their homework assignments and instructor-provided starter code, and watch ChatGPT generate the solutions for them. Given this new reality we’re all now living in, Sam and I wanted to know what computing instructors are planning to do to ensure that their students are still learning well.

To gather a diverse sample of perspectives, we interviewed 20 university CS1/CS2 instructors across 9 countries (Australia, Botswana, Canada, Chile, China, Rwanda, Spain, Switzerland, United States) spanning all 6 populated continents. To our knowledge, our paper is the first empirical study to gather instructor perspectives about these AI coding tools that more and more students will likely have access to in the future.

Here’s a summary of our findings:

Short-term, many planned to take immediate measures to discourage AI-assisted cheating by weighting exams more heavily, trying to ban these tools, or showing students their limitations. Then opinions diverged sharply about what to do longer-term, with one side wanting to resist AI tools by creating more AI-resistant assignments and exams, and the other side wanting to embrace these tools by integrating them deeply into introductory programming courses.

Our study findings capture a rare snapshot in time in early 2023, as computing instructors are just starting to form opinions about this fast-growing phenomenon but have not yet converged on any consensus about best practices. Using these findings as inspiration, we synthesized a diverse set of open research questions regarding how to develop, deploy, and evaluate AI coding tools for computing education. For instance, what mental models do novices form both about the code that AI generates and about how the AI works to produce that code? And how do those novice mental models compare to experts' mental models of AI code generation? See Section 7 of our paper for more examples.

We hope that these findings, along with our open research questions, can spur conversations in our community about how to work with these tools in effective, equitable, and ethical ways. 

Check out our paper here and email us if you’d like to discuss anything related to it!

Sam Lau and Philip J. Guo. From “Ban It Till We Understand It” to “Resistance is Futile”: How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools such as ChatGPT and GitHub Copilot. ACM Conference on International Computing Education Research (ICER), 2023.

June 20, 2023 at 8:00 am

Broadening Participation in Computing by Moving Away from Computer Science: Information, Arts, Humanities, and Sciences offer better models for #CSforAll

 In April, I gave a talk at Carnegie Mellon University’s Software and Societal Systems Department (S3D) “Broadening Participation in Computing by Moving Away from Computer Science” — slides available here, and video available here.

The argument I’m making is that computer science as a field has become more narrow over time. I wrote a CACM Blog Post last month where I provided several definitions of computer science: “Education is always changing: We need to define CS to keep the good stuff.” The earliest definitions of computer science described it as something to be taught to everyone, a critical literacy for 21st century citizenry, and touching on many different aspects of modern life. (I tell the story of those early definitions in this blog post.) More modern computer science definitions are much more narrow.

Computer science departments perform a critical function. They produce software professionals. Our society needs those. But that’s not the only societal need for computing education. CS departments also perform a gatekeeping function so that they can certify their graduates as ready for professional programming. If we want “Computing Education for All” with alternative endpoints, we need less of that.

During the pandemic, I worked with a Computing Education Task Force at my University to discover what kinds of computing education were currently available (see that story here and our final report here) across the campus. There's a lot, even outside of CS. Arts teaches wonderful courses on expression with computing. Sciences teach how to discover with computing. Humanities reflects on the role of computing in society and critiques our digital and computational systems. Our School of Information teaches introductory computing courses similar to what we offer in CS, but with a focus on data science, user experience, and impact on society.

I suggest in my talk that new initiatives like our Program in Computing for the Arts and Sciences in the University of Michigan's College of Literature, Science, and the Arts (LSA) are more likely to invent computing education for everyone than CS departments are. The first half of my talk is about the history of computing education and about the narrowing of definitions (for which I use the U-M CSE standard slide templates), and the second half (for which I use the U-M LSA standard slide templates) is about PCAS, our new courses, and how this serves to broaden access to and participation in computing education.

Let’s think about how we could broaden our goals beyond CS departments and CS majors in K-12 education. Advanced Placement CS A and CS Principles are tightly tied to CS curriculum. What would it look like to create Advanced Placement exams for other needs for computing education? For example, what would an AP in Computational Science look like? What would it mean to value alternative endpoints in “Computing Education for All”?

June 15, 2023 at 8:00 am

Participatory Design to Support University to High School Curricular Transition/Translation in FIE 2022

Here’s my second blog post on papers we presented during the first year of PCAS. Emma Dodoo is an Engineering Education Research PhD student working with me and co-advised with Lisa Lattuca. When she first started working with me, she wanted a project that supported STEM learning in high school. We happened upon this fascinating project which eventually led to an FIE 2022 paper (see link here).

The University of Michigan Marsal Family School of Education has a collaboration with the School at Marygrove in the Detroit Public Schools – Community District. The School at Marygrove requires all high school students to take a course in Engineering every year. The school is new, so when we came into the story, they were just starting to build an 11th grade Engineering curriculum. Where does an innovative K-12 school find curriculum for not-often-taught subjects like Engineering? It seemed natural to look to the partner university.

The University of Michigan has recently established a Robotics Department with an innovative undergraduate curriculum. The leadership in the U-M School of Education and the School at Marygrove decided to use some projects from the undergraduate curriculum for the 11th grade Engineering curriculum. Emma and I came in to run participatory design sessions to help support the high school in adopting the university curriculum. We focused on one project in particular, where students would input data from a LIDAR sensor on a robot, then visualize the results. What we wanted to know was: What are the issues that come up when using a university curriculum to inform an innovative high school curriculum?

We started out with a set of interviews. We talked to undergraduates who had been in the Robotics curriculum and asked them: What was hard about the robotics projects? What were things that you wished you knew before you started? They gave us a list of issues, like the realization that equations on a plane could be used to define regions of a picture, that colors could be mapped to numbers and equations, and that pixels could be queried for their colors.

We talked to high school educators about what they wanted students to learn from the project. They were pretty worried about the mathematics required for the project, especially after the students had spent all of 10th grade on-line during the pandemic.

Emma took these objectives and concerns, and generated a set of possible activities to be used in the class. She used Desmos, Geogebra, and our new Pixel Equations teaspoon language. (I mentioned back in this blog post that we were using Pixel Equations in participatory design sessions — that’s when this study was happening.)

Pixel Equations was developed explicitly to address the concerns that the undergraduates raised and that the math educators cared about. Users specify the pixels they want to manipulate (leftmost column) by providing an equation on the plane or an equation based on the RGB channels of the pixel's color. They specify the desired color changes in terms of the red, green, and blue channels (three columns on the right). The syntax for the boolean expressions and the equations for calculating colors is the same as what students would see in Java, C, or JavaScript. But there are no explicit loops, conditionals, or data — it's a teaspoon language.

Emma ran participatory design sessions with stakeholders, including teachers from the School at Marygrove. Her goal was to identify the features that the stakeholders would find valuable, and in particular, to identify the concerns of the high school educators that may not be addressed in the university curriculum. She identified four sets of issues that were important to the stakeholders when transferring curriculum from the university to the high school:

  • Prior Knowledge: The students knew Desmos. Using a tool they used before would help a lot when dealing with the new robotics concepts.
  • Priming: The computing must be introduced so that there is time for the students to become familiar with it.
  • Motivational Play: High school students need more opportunities to see the fun in an activity than undergraduate students.
  • Self-efficacy: It’s important for students to feel that they can succeed at the activities.

That’s where the paper ends

All of this design and development happened during the pandemic. We didn’t hear much about what was going on in the high school, and we couldn’t visit. When we talked to our contacts at the School of Education, we found that they didn’t have much news either. It wasn’t until much later that we saw this news item. The 11th grade Engineering class actually didn’t do any of the mathematics activities that we’d helped with. Instead, the school got a grant for a bunch of robots, and the classes focused on directly programming the robots instead. It’s disappointing that they didn’t use any of the things that we worked on, but as I’ve been mentioning in this blog, we find that adoption is really hard. Other factors (like grants and the wow factor of programming a robot) can change priorities.

The paper is interesting for investigating stakeholder issues when transferring activities from university to high schools. Those are useful issues to know about, but even if you address all the issues, you still might not get adoption.

June 5, 2023 at 8:00 am

The information won’t just sink in: Helping teachers provide technology-assisted data literacy instruction in social studies

Last year, Tammy Shreiner and I published an article in the British Journal of Educational Technology, “The information won’t just sink in: Helping teachers provide technology-assisted data literacy instruction in social studies.” (I haven’t been able to blog much the last year while starting up PCAS, so please excuse my tardiness in sharing this story.) The journal version of the paper is here, and our final submitted version (not paywalled) is available here.

Tammy and I used this paper to describe what happened (mostly during the pandemic) as we continued to provide support to in-service/practicing social studies teachers to adopt data literacy instruction in their classes. Since this was a journal on educational technology, we mostly focused on two technologies:

  • The OER Tammy created to support data literacy in social studies education — see link here.
  • DV4L, the Data Visualization for Learning tool that we created explicitly for social studies teachers — see link here.

When we started collaborating, we looked for a theoretical model that could inform our work. The end goal was easy to describe: we wanted social studies teachers to teach data literacy. But it's hard to measure progress towards that big, high-level goal. Teachers are teaching data literacy, or they're not. How do you know if you're getting closer to the goal? We structured our work and our evaluation around the Technology Acceptance Model (TAM). TAM suggests that adoption of a new technology boils down to two questions: (1) is the technology actually useful in solving a problem that users care about, and (2) is the technology usable by the users? Those were things that we could measure progress towards.

During the pandemic, we ran several on-line professional learning opportunities — a workshop where practicing teachers could try out the OER with some guidance (e.g., “Make sure you see this” and “Why don’t you try that?”), and kick the tires on a bunch of tools including DV4L. We gathered lots of data on those teachers, and Tammy did the hard work of analyzing those data over time. We made progress on TAM goals — our tools got more usable and more useful.

But we still got very little adoption. TAM didn’t work for us. Adoption didn’t increase as usability and usefulness increased.

Why not? That’s a really big question, and we barely touch on it in this paper. It’s now a couple of years since we wrote the BJET article, and I could now tick off a dozen bullet points of reasons why teachers do not adopt, despite a technology being both useful and usable. I’m not going to list them here, because there are other publications in the pipeline. Bahare Naimipour, the EER PhD student working on our project, is finishing a case study of some teachers who did adopt and how their beliefs about data literacy changed.

I can give you a big meta-reason which probably isn't a surprise to most education researchers but might be a surprise to many computer scientists: It's not all about (or even mostly about) the technology. I led the group that worked on DV4L, and I've been directing students who have been helping Tammy make the OER more usable and useful (including building new tools that we haven't yet released). TAM matters, but the characteristics of the individual teachers and the context of the teacher's classroom are critical factors that technology is unlikely to overcome.


This is work funded in part by our National Science Foundation grant, #DRL2030919

May 30, 2023 at 8:00 am

A Workshop on Slow Reveal Graphs for Social Studies Teachers

My collaborator, Tammy Shreiner, is running a workshop for social studies educators on teaching with Slow Reveal Graphs. The idea of slow reveal graphs is that a complete visualization is just too complex for students to pick out all the visual elements at once. Instead, a slow reveal graph is presented in stages, and at each stage, students are prompted to reflect (and discuss, or write about): "What do you notice now? What do you wonder about?"
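If you want to play with the staging idea yourself, here's a minimal Python (matplotlib) sketch: save the same chart several times, adding one more element at each stage, and show the stages one at a time while asking the notice-and-wonder questions. The data and labels are invented for illustration:

```python
# Minimal sketch of staging a "slow reveal" graph with matplotlib: the same
# figure is saved at several stages, each adding one more element, so a
# teacher can reveal them one at a time. The data and labels are invented.
import matplotlib.pyplot as plt

years = [1900, 1910, 1920, 1930, 1940]
values = [12, 18, 25, 31, 40]

fig, ax = plt.subplots()

# Stage 1: the data alone, with nothing explaining what it shows.
ax.plot(years, values, marker="o")
fig.savefig("reveal_stage1.png")

# Stage 2: reveal what the axes measure.
ax.set_xlabel("Year")
ax.set_ylabel("Value (invented data)")
fig.savefig("reveal_stage2.png")

# Stage 3: finally reveal the title, i.e., what the graph is actually about.
ax.set_title("Example slow reveal graph")
fig.savefig("reveal_stage3.png")
```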

Tammy has been building a bunch of slow reveal graphs that are really fascinating. I'm particularly amazed at the ones that she and her colleague Bradford Dykes have been building. They are taking hand-drawn visualizations (like the fascinating ones by W.E.B. Du Bois) and recreating them in R, so that they can generate the slow reveal process.

She’s offering a workshop in January that I highly recommend.


Dear friends,

I am writing to share information about a professional learning opportunity focused on teaching primary source data visualizations using the “slow reveal” process. The PLO will take place on Zoom over two Saturdays, January 21 and 28, 9:00-noon. It is open to teachers inside and outside of Michigan.

Please share the attached flyer with your social studies teacher colleagues. A sneak peek of the website that we will share with participants is below.

Thanks for sharing!

Tammy

December 5, 2022 at 7:00 am

Launching PCAS, the first two COMPFOR classes, and hiring our first lecturer

I last gave an update on the Program in Computing for the Arts and Sciences (PCAS) here in February (see blog post). Since then, it’s become real. I was hired as of July 1 as the Director of PCAS. My Computing Education Task Force co-chair, Gus Evrard, is the Associate Director. We even have a website: https://lsa.umich.edu/computingfor

I am building and teaching our first two courses now. I love our course code: COMPFOR. It stands for “COMPuting FOR…” The two courses are:

  • COMPFOR 111 Computing’s Impact on Justice: From Text to the Web
  • COMPFOR 121 Computing for Creative Expression

I have never worked harder than this semester — building these two courses, teaching both courses at the same time, learning how to be a program director (e.g., explicit classes and workshops on academic leadership, on evaluating faculty, and on how University of Michigan budgets work), and creating the program. I am having enormous fun.

I plan to write more about the two courses here and our innovations in teaching them. Here’s a brief summary. We are using teaspoon languages to introduce concepts, Snap for programming assignments, and Runestone ebooks for helping students to transfer their knowledge from blocks to traditional textual languages (Python, Processing, and SQL). I gave a talk for the CS for Michigan Conference a few weeks ago, and for the attendees, I created a page connecting to some of what we’re building and a narrative account of a couple of the units: https://guzdial.engin.umich.edu/cs4mi-pcas/.

We have been given permission to hire our first lecturer: https://careers.umich.edu/job_detail/226551/leo-lecturer-i-compfor-111-and-compfor-121. Right now, it’s a one year position for 2023-2024, but if enrollments grow, we have been encouraged to request three year positions starting in Fall 2024. Please do forward this job announcement to anyone you think might be interested. It’s also available in other places like CRA: https://cra.org/job/university-of-michigan-lecturer/.

December 4, 2022 at 11:00 am

Doing a Little Housekeeping and Rebranding

I discovered today that I have written over 2,500 blog posts here on WordPress, starting in June 2009. There was a time when I was writing daily. This is the first post I’ve written here since June. From a pace of a new post every day, to once every six months.

Our lives change so much from year to year. Thirteen years feels like so many changes ago. I live in a different state, working for a different University. Even the name of the department where I work has changed — I was in the School of Interactive Computing at Georgia Tech. Now I’m in the Division of Computer Science and Engineering in the College of Engineering at the University of Michigan and I direct the Program in Computing for the Arts and Sciences.

I changed the name of this blog once before. When I started out here in 2009, there were few websites and blogs on computing education. I just called it the Computing Education Blog. But as the field grew and there were more and more terrific sites helping teachers do computing education, I renamed the site Computing Education Research Blog. My focus was on the research, and not on how to be a great computing educator.

Today, there are many great resources even on Computing Education Research. I particularly recommend https://csedresearch.org/. I attended one of their on-line panels this last week — such interesting ideas and such a wonderful and diverse group of researchers.

I have been reticent to post under a banner saying "Computing Education Research" because this site has pretty broad visibility now. There are many more subscribers than in the first few years. Newcomers might come here with that title and expect to read a newsletter or an authoritative perspective on the field — that's an overwhelming responsibility. I recognize that I'm a senior (read: "old") voice in the field, but I am just one of many voices. Like any academic, I want to share what we're working on and what I'm thinking about. I do not want my posts here to appear as if I'm speaking for the field.

So, I have renamed the blog for a second time: Computing Ed Research – Guzdial’s Take. This blog represents my perspective. That’s how I’ve always thought of the blog, but I want to make it explicit.

I have updated the Guzdial Papers page for the first time in a decade, and changed the About page.

Thanks for reading!

December 3, 2022 at 10:25 am

New ICER paper award for Lasting Impact: Guest blog post from Quintin Cutts

I serve on the ACM SIGCSE International Computing Education Research (ICER) conference steering committee. Quintin Cutts is Chair of the Steering Committee. I offered to share his announcement of a new Lasting Impact paper award here in the blog.

This is an invitation to nominate a paper for the ICER Lasting Impact Award 2022, or to offer to serve on the judging panel.

Which ICER paper has caused you to change the way you teach, or the direction of your research?  Which has helped you to see and understand CS education more clearly?  Has it also had an impact right across the community?  I know which paper I would nominate, if I were allowed (but I’m not! – see below).  It’s been a game-changer for me, and across CS education.  Which one has done this for you and others?

1. Description of the award

The ICER Lasting Impact Award recognizes an outstanding paper published in the ICER Conference that has had meaningful impact on computing education. Significant impact can be demonstrated through citations; adoptions and/or adaptations of techniques and practices described in the paper by others; techniques described in the paper that have become widely recognized as best practices; further theoretical or empirical studies based on the original work; or other evidence that the paper is an outstanding work in the domain of computing education research. The paper must have been published in ICER at least 10 years prior (i.e., for the 2022 award, papers must have been published in or before ICER 2011).

2. Requirements for nominating a paper

a. An ACM Digital Library link to the paper being nominated.
b. A brief summary of the technical content of the paper and a brief explanation of its significance (limit 750 words).
c. Signatories to the summary and significance statement, including at least two current SIGCSE members. Providing the name, contact email address, and affiliation of each person who has agreed to sign the endorsement is sufficient.

3.  Nominating yourself as a potential award judge

Please consider nominating yourself as a potential judge. We are seeking judges who have significant experience in the ICER community. We will ask judges to serve who do not have nominated papers, to avoid conflicts of interest.

4. Additional Notes

a. ICER Steering Committee members cannot nominate papers.
b. The ICER Steering Committee chair, Quintin Cutts, will run the process this year, and his papers cannot be nominated.
c. In this inaugural year of the Award, we will not have a pre-defined rubric. We will ask the judges to report the rationale for their decision, and the report will be made public when we announce the winner.

5. Timetable (all times, 23.59 AoE)

17th July: Nominations close.
18th July: Judging panel selected from the candidate pool, depending on the number of nominations and conflicts. Papers sent out to the judges.
2nd August: Judging panel sits to deliberate and makes a decision, which is passed to PC chairs. Winner notified.

The award will be presented at the ICER 2022 conference in Lugano, Switzerland either in person or on-line.

6. Submitting nominations

Please send both paper nominations and judging self-nominations to me, Quintin Cutts, at Quintin.Cutts@glasgow.ac.uk.

June 28, 2022 at 7:00 am 2 comments

Programming in blocks lets far more people code — but not like software engineers: Response to the Ofsted Report

A May 2022 report from the UK government’s Research Review Series: Computing makes some strong claims about block-based programming that I think are misleading. The report summarizes studies from the computing education research literature. Here’s the paragraph that I’m critiquing:

Block-based programming languages can be useful in teaching programming, as they reduce the need to memorise syntax and are easier to use. However, these languages can encourage pupils to develop certain programming habits that are not always helpful. For example, small-scale research from 2011 highlighted 2 habits that ‘are at odds with the accepted practice of computer science’ (footnote). The first is that these languages encourage a bottom-up approach to programming, which focuses on the blocks of the language and not wider algorithm design. The second is that they may lead to a fine-grained approach to programming that does not use accepted programming constructs; for example, pupils avoiding ‘the use of the most important structures: conditional execution and bounded loops’. This is problematic for pupils in the early stages of learning to program, as they may carry these habits across to other programming languages.

I completely agree with the first sentence — there are benefits to using block-based programming in terms of reducing the need to memorize syntax and increasing usability. There is also evidence that secondary school students learn computing better in block-based programming than in text-based programming (see blog post). Blanchard, Gardner-McCune, and Anthony found, in a Best Paper awardee from SIGCSE 2020, that university students learned better when they used both blocks and text than when they used blocks alone.

The two critiques of block-based programming in the paragraph are:

  • “These languages encourage a bottom-up approach to programming, which focuses on the blocks of the language and not wider algorithm design.”
  • “They may lead to a fine-grained approach to programming that does not use accepted programming constructs…conditional execution and bounded loops.”

Key Point #1: Block-based programming doesn’t cause either of those problems. What is it about programming with blocks rather than text that could cause either of these to be true?

I’m programming a lot in Snap! these days for two new introductory computing courses I’m developing at the University of Michigan. I’ve been enjoying the experience. I don’t think that either of these critiques is true of my code or that of the students helping me develop the courses. I regularly do top-down programming where I define high-level custom blocks as I design my program overall. Not only do I use conditional execution and bounded loops regularly, but Snap! allows me to create new kinds of control structures, which has been a terrific help as I create block-based versions of our Teaspoon languages. My experience is evidence that those two statements need not be true just because the language is block-based.
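
To make that concrete for readers who don’t use Snap!, here’s a rough, hypothetical Python analogy (my own sketch for this post, not code from the courses): the small named functions play the role of custom blocks, and the higher-order function plays the role of a user-defined control structure. Every name here is invented for illustration.

    # Hypothetical sketch: top-down design plus a user-defined control structure.
    # All names are invented for illustration.

    def repeat_until(done, step):
        # A custom control structure, in the spirit of defining a new control block.
        while not done():
            step()

    def make_countdown(start):
        state = {"remaining": start}
        def done():
            return state["remaining"] == 0
        def step():
            print("remaining:", state["remaining"])
            state["remaining"] -= 1
        return done, step

    def run_example():
        # Top-down: the overall design reads at a high level before the details.
        done, step = make_countdown(3)
        repeat_until(done, step)

    run_example()   # prints remaining: 3, then 2, then 1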

I completely believe that the studies cited in this research report saw and accurately described exactly these behaviors — that students worked bottom-up and that they rarely used conditional execution and bounded loops. I’m not questioning the studies. I’m questioning the inference. I don’t believe that those behaviors are caused by block-based languages.

Key Point #2: Block-Based Programming is Scaffolding, but not Instant Expertise. For those not familiar, here are two education research terms that will be useful in making my argument.

  • Scaffolding is the support provided to a learner to enable them to achieve some task or process that they might not be able to achieve without that support. A kid can’t hop a fence by themselves, but they can with a boost — that’s a kind of scaffolding. Block-based programming languages are a kind of scaffolding (and here’s a nice paper from Weintrop and Wilensky describing how they serve as scaffolding — thanks to Ben Shapiro for pointing it out).
  • The Zone of Proximal Development (ZPD) describes the difference between what a student can do on their own (one edge of the ZPD) and what they might be able to do with the support of a teacher or scaffolding (the far edge of the ZPD). Maybe you can’t code a linked list traversal on your own, but if I give you the pseudocode or a lecture on how to do it, then you can (see the short sketch just after this list). But the far edge of the ZPD is unlikely to be that you’re now a data structures expert.
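
Here is the short sketch mentioned above: a minimal, hypothetical Python linked-list traversal, the sort of task a learner might complete once given pseudocode or a worked example, even if they could not start it unaided. The class and function names are invented for illustration.

    # Illustrative linked-list traversal: achievable with scaffolding
    # (pseudocode, a worked example), but perhaps not yet unaided.

    class Node:
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    def traverse(head):
        values = []
        current = head
        while current is not None:    # pseudocode: "while there is still a node..."
            values.append(current.value)
            current = current.next    # pseudocode: "...move on to the next node"
        return values

    print(traverse(Node(1, Node(2, Node(3)))))   # [1, 2, 3]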

Let’s call the task that students were facing in the studies reviewed in the report “building a program using good design and with conditional execution.” If we asked students to achieve this task in a text-based language, we would be asking them to perform the task without scaffolding. Here’s what I would expect:

  • Fewer students would complete the task. Not everyone can achieve the goal without scaffolding.
  • Those students who do complete the task likely already have a strong background in math or computing. They are probably more likely to use good design and conditional execution. The average performance in the text-based condition would be higher than in the block-based condition — simply because you’ve filtered out everyone who doesn’t have the prior background.

Fewer people succeed. More people drop out. Pretty common CS Ed result. If you just compare performance on text vs. blocks, text looks better. For a full picture, you also have to look at who got left out.

So let’s go back to the actual studies. Why didn’t we see good design in students’ block-based programs? Because the far edge of the ZPD is not necessarily expert practice. Without scaffolding (block-based programming languages), many students are not able to succeed at all. Giving them the scaffolding doesn’t make them experts. The scaffolding can take them only as far as the ZPD allows. It may take more learning experiences before we can get to good design and conditional execution — if that is even the right goal.

Key Point #3: Good software engineering practice is the wrong goal. Is “building a program using good design and with conditional execution” really the task that students were engaging in? Is that what we want students to succeed at? Not everyone who learns to program is going to be a software engineer. (See the work I cite often on “alternative endpoints.”) Using good software engineering practices as the measure of success doesn’t make sense, as Ben Shapiro wrote about these kinds of studies several years ago on Twitter (see his commentary here, shared with his permission). A much more diverse audience of students is using block-based programming than ever used text-based programming. They are going to solve different problems for different purposes in different ways (a point I made in this blog post several years ago). Few US teachers in K-12 are taught how to teach good software engineering practice — that’s simply not their goal (a point that Aman Yadav made to me when discussing this post). We know from many empirical studies that most Scratch programs are telling a story. Why would you need algorithmic design and conditional execution for that task? They’re not doing complicated coding, but the little bit of coding that they’re using is powerful and engaging — and relatively few students are getting even that. I’m far more concerned about the inequitable access to computing education than I am about whether students are becoming good software engineers.
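
For a sense of what those story projects look like as code, here is a hypothetical, purely sequential sketch (written in Python for readability rather than as Scratch blocks): it tells its story with nothing but straight-line statements, which is exactly why design techniques built around conditionals and loops don’t show up.

    # A hypothetical story-style program: straight-line code, no conditionals or loops.
    import time

    def say(character, line, pause=1.0):
        print(f"{character}: {line}")
        time.sleep(pause)

    say("Narrator", "Once upon a time, a robot landed in a garden.")
    say("Robot", "Hello! What is this green stuff?")
    say("Gardener", "That is grass. Welcome to Earth.")
    say("Narrator", "And so the robot decided to stay.")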

Summary: It’s inaccurate to suggest that block-based programming causes bad programming habits. Block-based programming makes programming far more accessible than it ever has been before. Of course, we’re not going to see expert practice as used in text-based languages for traditional tasks. These are diverse novices using a different kind of notation for novel tasks. Let’s encourage the learning and engagement appropriate for each student.

June 20, 2022 at 7:00 am 20 comments

Getting feedback on Teaspoon Languages from CS educators and researchers at the Raspberry Pi Foundation seminar series

In May, I had the wonderful opportunity to speak at the Raspberry Pi Foundation Seminar series. I’ve attended some of these seminars before, and I highly recommend them (see past seminars here). It’s a terrific format. The speaker presents for up to a half hour, then everyone gets put into a breakout room for small group discussions. The participants and speaker come back together for 30-35 minutes of intensive Q&A — at least, it feels “intensive” from the speaker’s perspective. The questions you get have been vetted through the breakout room process. They’re insightful, and sometimes critical, but always in a constructive way. I was excited about this invitation because I wanted to make it a hands-on session where the CS teachers and researchers who attended might actually use some Teaspoon Languages and give me feedback on them. I rarely get the chance to work with CS teachers, so this was a welcome one.

Sue Sentance wrote up a very nice blog post describing my talk (thank you!) — see here. The video of the talk and discussion is available. You can watch the whole thing, or you can read the blog post and then skip ahead to where the conversation takes place (around 26:00 in the video). If you have been wondering, “Why isn’t Mark just using Logo, Scratch, Snap, or NetLogo? We already have great tools! Why invent new languages that are clearly less powerful than what we already have?”, then you should jump to 34:38 and see Ken Kahn (inventor of ToonTalk) push me on this point.

The whole experience was terrific for me, and I hope that it’s valuable for the viewer and attendees as well. The questions and comments indicated understanding and appreciation for what I’m trying to do, and the concerns and criticisms are valuable input for me and my team. Thanks to Sue, Diana Kirby, the Raspberry Pi Foundation, and all the attendees!

June 14, 2022 at 7:00 am Leave a comment
