Dr. Tamara Nelson-Fromm defends her dissertation: What Debugging Looks Like in Alternative Endpoints
In May, Tamara Nelson-Fromm defended her dissertation “A Qualitative Exploration of Programming Instruction for Alternative Endpoints in Post-Secondary Computing Education.”
I’ve talked about Tamara’s work a few times in this blog.
- One of her early projects was a teaspoon language to help history teachers build history timelines (blog post).
- At PLATEAU 2024, she presented our paper suggesting that there was transfer from the Pixel Equations teaspoon language into building image filters in Snap! (blog post).
- She presented our paper at SIGCSE 2025 on how we designed the PCAS courses oriented towards creative expression and social justice (blog post). Tamara worked with me on that design process, particularly on how to meet justice scholars’ desire for their students to learn about databases, HTML, and SQL (blog post) and on helping students to understand how a computer might generate language (blog post).
Tamara has published a lot more than that during her PhD work in part because she became an expert on reflexive thematic analysis. She worked with several other students on using RTA. At SIGCSE 2026, she and Aadarsh Padiyath will present their paper on how to use RTA for computing education research. I’ve read the paper and loved it — I have been recommending it widely.

Tamara with her committee: Valerie Barr (on Zoom), (from right) Nikola Banovic, Barry Fishman, Tamara, and me
I want to tell you about her dissertation, but I don’t want to divulge too much — only the first study has been published so far. The big idea that drives her work is alternative endpoints. She and I have talked a lot about the paper by Mike Tissenbaum and his colleagues. The big question that she’s helping to answer is “What will CS education look like as we move beyond producing more software developers?”
Study #1: New CS Teachers learning Debugging: Her first study investigated how we develop new CS teachers. From the start of her PhD, she has been interested in how students learn to debug. Her method was novel (and hard to get past reviewers). Instead of studying new CS teachers and how they learned debugging, she interviewed expert teachers of new CS teachers. She interviewed the people who run professional training, summer workshops, and many of the other ways that teachers learn CS. Rather than track individuals (who might not struggle with debugging, or who might not be representative of new teachers), she talked to people who have been doing this for years. What do they do to teach debugging?
Here was the amazing answer: Avoid it. In hindsight, it makes all the sense in the world. Imagine: You’ve got a teacher new to CS in your workshop. In the first workshop (which is often all you get with teachers), you want them to succeed. You want them to come back for more workshops. So, you do all that you can to avoid bugs. Since bugs will still happen, you provide checklists and “Here’s what to look for if it doesn’t work” guidance.
Of course, really learning to debug comes later…or does it? Tamara raises the intriguing possibility that maybe that’s enough. For what these teachers are doing (especially in primary school), maybe it’s enough to just have checklists. Again, it’s about alternative endpoints — what does a K-12 teacher need to know about debugging? The paper on her first study will appear at SIGCSE 2026 in February.
Study #2 and #3: PCAS Students: Her second and third studies involved PCAS students. In her second study, she looked at why arts, sciences, and humanities students would want to take courses involving programming. In her third study, she returned to the theme of the first study — how do PCAS students debug?
I don’t want to say too much about these studies, but I do want to tell one story from Study #3 that connects strongly to the story about teachers in Study #1. One of the ways that Tamara saw PCAS students debugging was the way that your modern mechanic fixes your car.
Mechanics today do not need to know how your car actually works. Instead, they plug it into the diagnostic machine, and they get a code. The code tells the mechanic where the problem is. The mechanic then follows a procedure or (more likely) replaces a part — whatever the manufacturer guidance is for that code. They then try it again.
That’s how some of the PCAS students debugged. Each assignment for the arts and humanities classes was open-ended, and I gave the students complete working examples. The students would write their programs and try them. If they didn’t work, they’d check that they hadn’t made a simple mistake. If they couldn’t figure it out, they would go back to one of the worked examples and copy-paste the part that worked and did about the same thing. Then they’d test again. If they still couldn’t get it to work, they’d explore changing what they were trying to do, so that they still met the requirements — but they could get it working.
Is this a problem? Do the students need to learn better debugging skills? Let’s go back to alternative endpoints again. Not everyone needs to have a strong mental model of the working program.
Tamara wasn’t prescriptive in her dissertation. She didn’t make judgements of good or bad. Rather, she described the world as she found it, and raised the reasonable possibility that what she saw is working just fine.
Tamara’s dissertation is important. The alternative endpoints paper suggested that we should think about different audiences learning to program for different purposes than software development. Tamara showed us what that looks like.
Creating a measure of Critical Reflection and Agency in Computing
I stopped blogging while I was on sabbatical because I had to focus on finishing the second edition of Learner-Centered Design of Computing Education. And then we came back from sabbatical. I’d heard that it was tough getting back to normal work after sabbatical, and it was. I had it easier than most (e.g., I came back during the summer, and I had a light teaching schedule this Fall). But it was still a transition, so it’s taken me a while to get back to blogging.
In the meantime, Aadarsh Padiyath published two papers (and a poster) about the development and validation of an instrument to measure Critical Reflection and Agency in Computing. Aadarsh Padiyath is a PhD student (soon to graduate! Hire him!) advised by Barb Ericson and me. He last appeared here with a guest post a year ago with a pushback against technological determinism — computing education researchers assuming that the future of CS education can be predicted by the development of ChatGPT.
These new papers are about the second study from his dissertation. Aadarsh is interested in how we can better prepare computer science students to recognize and deal with ethical issues. Typically, we do that with computing ethics classes. But do they work? Aadarsh recognizes that being able to measure progress is an important way to encourage progress.
In May, he published a paper at CHI 2025, “Development of the Critical Reflection and Agency in Computing Index.” The title captures the two aspects of computing and ethics that Aadarsh is most interested in — that students reflect on the ethical implications of their work and that they have a sense of agency, i.e., that they can do something to address problems. This first paper was about defining the constructs (see Table 1 below). He created 45 items for his measure. He had a panel of experts review the items, and he interviewed five undergraduate students as they responded to the items. His paper was recognized with a Best Paper Honorable Mention.

Aadarsh presented a poster at SIGCSE 2025 in February, “The Development and Validation of the Critical Reflection and Agency in Computing Scale.”
The big finale was his ICER 2025 paper in August, “Validation of the Critical Reflection and Agency in Computing Index: Do Computing Ethics Courses Make a Difference?”. This paper summarized the CHI 2025 story of how the index came to be, then presented the results of a two-round validation study (474 participants in one, 464 in the other). Overall, he has strong support for the validity of his measure.
But in addition to taking the measure, Aadarsh asked the participants if they had taken a computing ethics course. He found “Participants who completed computing ethics courses showed higher scores in some dimensions of ethical reflection and agency, but they also exhibited stronger techno-solutionist beliefs, highlighting a challenge in current pedagogy.” Here’s my interpretation of his results: after taking a course in computing ethics, students were more reflective (yay!) and believed that they could make a change if they saw an ethical problem in their work (double yay!), but they tended to believe that more technology is the answer to addressing ethical problems with technology (uh-oh).
This is an impressive set of papers. It gives us a way of measuring the impact of our interventions in teaching computing students about ethics. It also highlights some real issues that we should be addressing in our computing ethics classes.