Posts filed under ‘Uncategorized’

Broadening Participation in Computing is Different in Every State: Michigan as an Example

In December, Rick Adrion, Sarah T. Dunton, Barbara Ericson, Renee Fall, Carol Fletcher, and I published an essay in Communications of the ACM, “U.S. States Must Broaden Participation While Expanding Access to Computer Science Education.” (See link here; a pre-print is available at the bottom of this post.) Rick, Renee, Barb, and I were the founders of the ECEP Alliance, which helps states and US territories with their computing education policy and practices. Carol is now the PI on ECEP (which feels so great to say — ECEP continues past the founders, with excellent leadership) — the whole leadership team is here. Sarah likely knows more about state-level computing education policy than anyone else in the US. She has worked with individual teams in individual states for years. Our argument is that broadening participation and expanding access are not the same thing. Simply making CS classes available doesn’t get students into those classes. We tell the story of two states (Nevada and Rhode Island) and how CS Ed is growing there.

Barbara and I now live in Michigan. The CSTA, Code.org, and ECEP report 2020 State of Computer Science Education: Illuminating Disparities (see link here) has a sub-report for every US state. Michigan is on page 56. The press release for the 2020 report says that 47% of US high schools now offer CS. Michigan is at 37%. Michigan is the only state (as far as I can tell) that used to have CS teacher certification and pre-service CS but got rid of it (story here).

Also in December, the Michigan Department of Education (MDE) released the first “State of Computer Science in Michigan Report” (see link here). The data collection and writing for the report were led by Aman Yadav and Sarah Gretter of Michigan State with Cheryl Wilson of MDE. A quote from page 11: “The trend of declining course offerings continues at the high school level where even fewer high schools offer CS courses. Code.org course offering data suggests that only 23.7% of rural high schools, 28% of town high schools, 29.1% of sub-urban high schools, and 21.7% of city high schools offer CS.” (The numbers on the website are lower than these — Aman and Cheryl kindly sent me an early peek at a revision that they’re posting soon.)

MDE’s numbers are a lot lower than the 37% in the Code.org/CSTA/ECEP report. What’s going on here? My best guess is that CS is rare enough in Michigan that not everybody who fills out a survey knows what the national CS education movement means by “computer science.” We had this a lot in the early days of “Georgia Computes,” too. A principal would say that they teach CS, when they might mean Microsoft Office or Web design (with no HTML, CSS, or JavaScript).

In any case, Michigan is clearly below national averages on providing CS education to its citizens and creating sustainable CS education policy. How do we help Michigan progress in providing computing education to its citizens?

I don’t know. Aman, Barb, and I have had conversations about the potential for growing CS Ed in Michigan. We don’t have the same leverage points in Michigan that we have had in other ECEP states. Michigan is a local control state. Individual local education agencies (LEAs — sometimes a school district, sometimes a county-wide collection of districts) can make up their own rules on important issues like CS teacher certification. In Georgia and South Carolina, the state government has a lot of control in education, so there was a point of leverage. California is also a local control state, but the California University systems are important to all high schools, so that’s a point of influence. Massachusetts is again a local control state, but the Tech industry is very important to the Boston area, and that’s important to the state. Tech isn’t important in the same way in Michigan. If you read the MDE report, there’s a lot of ambivalence about CS in the state. Administrators aren’t that excited about teaching CS. They don’t see CS education as important for their students. Michigan is a big state, where agriculture and tourism are two of the most significant industries. Manufacturing is a big deal, but manufacturing workers don’t necessarily need to know much about computing. CS isn’t an obvious benefit to much of Michigan.

Aman’s strategy is to grow CS education in the state slowly, to develop pockets of value for CS and success in teaching CS. We have to plant seeds and grow to a critical mass, which seems like the right approach to me. He has projects where he is helping develop teachers and relevant curriculum for CS education in specific counties. He works closely with the MDE. Sarah is involved with Apple’s Developer Academy opening in Detroit (see story here). Michigan does have a powerful and large teachers’ group supporting educational technology, MACUL (Michigan Association for Computer Users in Learning, see website), which could be a significant player in growing CS education in the state.

The important point here is that, in the United States, growing CS education is a state-by-state challenge. Each state has its own story and issues.

Pre-print of CACM BPC article

January 21, 2021 at 7:00 am Leave a comment

Promote diversity by teaching to many goals for computing

My Blog@CACM post for this month is about the working definitions of computing that we are developing in a task force at the University of Michigan (see post here). We are charged with identifying the computing education needs for undergraduates in the College of Literature, Science, and the Arts (LS&A). My post describes three different goals for computing education, based on what LS&A faculty do with computing and what they want their students to know.

  • Computing for Discovery
  • Computing for Expression
  • Critical Computing

In my post, I describe how these goals differ and the challenges of meeting all of these educational needs. The biggest challenge I wonder about is the organizational one. Whose job is it to teach to each of these goals?

In this post, I want to argue from a different direction. All of these goals have a CS component, yet they aren’t typically priorities in many CS departments. To have more diversity in computer science, we ought to make them priorities.

There’s CS in All of These

Each of the three LS&A themes represents a significant CS research thrust. We distilled them from discussions with faculty in Literature, Science, & the Arts, but students could be interested in these themes and seek a computer science degree and career. I’d expect that these themes are more common among students who enter computing from liberal arts and sciences than from engineering.

Computer scientists often create infrastructure and theory for “Computing for Discovery,” from NeurIPS to ACM SIGSIM. At Georgia Tech, there is a School of Computational Science and Engineering. One of my colleagues in that school was Richard Fujimoto, who studied how to run discrete event simulations in parallel and distributed systems. He did his research so that others (scientists and engineers) could do theirs.

Computer scientists invent and create tools to make “Computing for Expression” possible, presented in places like ACM SIGGRAPH and CHI. Alanson Sample joined U-M CSE at the same time I did. He was formerly at Disney Research in Pittsburgh, where some of his team worked on the new Pandora exhibits at Disney World. The animatronic Na’vi were difficult for the animators to control, since the robotic representations of the aliens were not meant to be human-like. Alanson’s colleagues created new kinds of design tools to support translating facial animations into robotic actuation for the Na’vi. I love that as an example of computer science enabling a new kind of expression.

Technology Review recently published an accessible summary of the paper that led to Timnit Gebru’s being fired from Google (see link here). I knew about Timnit’s work as a scholar in “Critical Computing.” The TR piece did a terrific job explaining the deep CS ideas in their paper — like the potential fallacies of the language models used by Google and the enormous energy costs of running them. Computer science plays an important part in making thoughtful critiques of existing computing systems and infrastructures.

Supporting Diverse Goals for Diverse Students

Imagine that you are a student who has always dreamed of working at Pixar and building tools for animators. Or you are a student who is concerned about creating sustainable IT infrastructure for your community. You decide to pursue a computer science degree, and now you’re in classes about AVL trees or the trade-offs between cache coherence and memory consistency. You might very reasonably drop out, to pursue a degree that more clearly helps you achieve your goals. The problem is that those are computer science issues. It’s perfectly reasonable to pursue computing education for those goals, but those might not be the goals that most CS Departments at Universities support.

This does happen exactly as I described. Colleen Lewis and her colleagues showed us how it most often happens with students who are from groups under-represented in computer science (see blog about the paper here). These students come to computer science with their goals, and if they don’t see how to achieve their goals with the classes they’re given, they lose interest and drop out. Colleen and her students showed that goals about community values were more common among students who were female, Black, or Hispanic than among students who were male, white, or Asian.

The draft of the 2020 ACM/IEEE Computing Curriculum report is here. It’s a big document, so I might have missed it, but I don’t see these goals represented in the computer science outcomes. Some of these themes are in information systems or information technology. Some of the media fundamentals are in computer engineering. The core of computer science in the 2020 report is focused on “algorithms and complexity, programming languages, software development fundamentals, and software engineering” (quoting page 28). There is very little in the document about justice, equity, and critical consideration of our computing systems and infrastructure.

A student can certainly start from the core of CS and focus on any of these sets of goals — but do students know that? How do we communicate that to them? This was a real problem when we created the Threads program at Georgia Tech where students identify two “threads” of computing which they will combine to create their BS in CS degree program. A student who chooses Media and Theory may be interested in video compression algorithms, and a student who chooses People and Intelligence might be interested in creating explainable AI, but both of those students will be in the same data structures and discrete math classes. We (mostly Charles Isbell and Bill Leahy) made sure that the foundational classes created the narratives that explained how the foundational concepts connected to these Threads. We wanted students to see how their goals were met by the core of CS.

This might be easier in colleges focused on liberal arts and sciences with smaller classes. At my University, I taught the introduction to computing course to 760 students. We regularly have first year CS courses with over 1000 students. It’s very hard to cater to individual student goals at that scale. What we did at Georgia Tech and what we’re doing in our task force at the University of Michigan is to identify common goals and themes, and provide support and narrative for those. We will not reach all students’ goals. We aim to support more student goals than just software development in large Tech firms.

We do our students a disservice if we do not help them see how they can pursue their goals within our undergraduate programs. A computer science degree from a major University is a big deal. It’s worth a lot in the economic marketplace. Is it fair to deny the degree to students who are engaged and curious about computer science because our CS undergraduate programs focus on one set of goals and ignore the others? Computer science is broader than just what the FAANG companies hire for. CS undergraduate degree programs should not just be a Silicon Valley jobs program. Universities should support diversity in CS thoughts and goals if we want to have students from diverse backgrounds in computing.

January 11, 2021 at 7:00 am 2 comments

The goal of the first CS course should be to promote confidence if we’re going to increase diversity in CS: Paying off on a bet

This should be a thing: If you make a public bet on Twitter, and lose, you should have to write a blog post explaining how you got it wrong.

Let me set the stage for the bet. There are studies suggesting that the Advanced Placement (AP) Computer Science A exam has a significantly different impact on students’ majors than other AP exams. (For non-US readers: AP tests provide an opportunity for secondary school students to earn post-secondary school credit.) AP CS A exam-takers are more likely to go on to take more CS courses or become a CS major — more likely than, say, students taking AP Calculus or AP US History exams to become mathematics or history majors. But does that extend to the newer AP CS exam, AP CS Principles? AP CS Principles was designed to be less about the kinds of programming that CS majors do in their first year, and more about a broader understanding of computing and its effects (see College Board site here). There were several of us talking about this in the Spring. On April 1, 2019, I tweeted to Jeff Forbes (see link): “I bet that AP CS Principles has no impact on CS or STEM majors. It’s such a different course (eg doesn’t map to CS courses on most campuses).” He took that bet, and he was right. A study released by the College Board shows that there is a causal relationship between taking AP CS Principles and majoring in CS as undergraduates (see report link here). The impact is large. Overall, students who take AP CS Principles are three times more likely to major in computer science in college. AP CSP students who are female are twice as likely to major in CS.

I wasn’t crazy for expecting that AP CS Principles would not have such a big impact on recruitment and retention. At SIGCSE 2020, Joanna Goode and co-authors published a paper (see blog post link here) showing that AP CS Principles is effectively recruiting much more diverse students than the AP CS A course (which is mostly focused on Java programming). But AP CS A students end up with more confidence in computing and much more interest in computing majors and tech careers. ACM TOCE in 2019 published a paper using NCWIT Aspirations award winners (see blog post link here) showing that taking the AP CS A exam was one of the best predictors of persistence, three years after the high school survey, in both CS and other technology-related majors. The TOCE paper authors placed particular emphasis on the importance of programming: “It seems that involvement in general tech-related fields other than programming in high school does not transfer to entering and persisting in computer science in college for the girls in our sample.”

So I had good reason to believe that non-programming-intensive courses might not have a big impact on recruitment into the CS major and retention. But I accept the evidence that I was wrong. What else is going on?

Here’s another recent piece of evidence that supports Jeff’s belief that AP CS Principles (and classes like that) could be having a big impact. Philip Boda and Steve McGee have a paper coming out in SIGCSE 2021 showing that the Exploring CS course (see website here) is having a significant impact in driving up AP CS A participation and diversity (see paper here), which continues to have a large impact on majoring in CS. Exploring CS, like AP CS Principles, de-emphasizes programming in favor of a broader understanding of computing and helping students to see themselves as successful at CS.

Neither of these papers offers an explanation for why AP CSP or ECS is having this positive impact. They’re both large-scale quantitative stories. You’d think that I might have learned my lesson from this last failed bet. Nah. I’ve got guesses. My guesses might be wrong, as they were in this case. I’m a post-positivist. I don’t think we’ll ever get to the place where we know the complete truth, but we should keep trying, keep making hypotheses, and we can keep getting closer.

Here’s my hypothesis for what’s going on, stated as a prediction:

A first course will be successful at promoting recruitment into CS as a major or career and at retaining students in CS if it increases students’ self-efficacy about programming tasks.

The critical part is for students to increase their confidence that they can be successful at programming tasks. AP CS A easily does this, which is why it has such great results in recruitment and retention. Not all classes or experiences do, as the NCWIT study suggests. AP CS Principles and Exploring CS are all about increasing student confidence, helping them to see themselves as successful at computing. I don’t know how little programming a student needs to do to increase their self-efficacy. Maybe it’s enough to see programs and what programming is about.

Recent research in computing education has been focusing on self-efficacy as one of the most important variables predicting student recruitment and retention in CS. Alex Lishinski and his co-authors showed that self-efficacy had different relationships for female and male CS students (see paper link here) and that programming projects influenced students’ sense of self-efficacy, which in turn influenced performance in the CS class (see paper link here). Jamie Gorson and Nell O’Rourke found (in an ICER 2020 paper that I blogged about here) that CS students had deflated self-efficacy, in part, because they had unreasonable expectations of what real programmers do. Dr. Katie Cunningham, soon to be a post-doc joining Nell’s lab, showed in her dissertation how students simply give up on programming tasks that they don’t think that they’ll be successful at (see blog post on Katie’s dissertation defense). Self-efficacy is likely an important variable in recruitment and retention, particularly of female students, and it’s one that we can manipulate with better designed education.

I’m not the first person to suggest this relationship. In a study with over 5 million participants, Peter Kemp and colleagues suggest that female participation in secondary school computer science in England is being negatively impacted because of female students’ low self-efficacy in CS — and that this is because of the CS classes (see paper link here). In England, curriculum in Information and Communications Technology is being phased out in favor of a Computer Science focus. They write in their paper “Female Performance and Participation in Computer Science: A National Picture”:

The move to introduce CS into the English curriculum and the removal of the ICT qualifications look to be having a negative impact on female participation and attainment in computing. Using the theory of self-efficacy, we argue that the shift towards CS might decrease the number of girls choosing further computing qualifications or pursuing computing as a career. Computing curriculum designers and teachers need to carefully consider the inclusive nature of their computing courses.

I made my bet because I thought that the programming-light focus of AP CS Principles (or even Exploring CS) would have less of an impact on CS recruitment and retention than the programming-intensive focus of AP CS A. I now believe I was wrong. I would now bet that the amount of programming probably isn’t the critical variable at all. It’s whether students come out of these courses saying, “I can do this. I can program.” That’s the critical variable for recruitment and retention that I believe AP CS Principles and Exploring CS are influencing successfully.

December 29, 2020 at 7:00 am 24 comments

Dijkstra’s Truths about Computing Education Aren’t: The many kinds of programming

ACM Turing Award laureate Edsger Dijkstra had several popular pieces about computer science education. I did my Blog@CACM post on one of these (see post here), “On the cruelty of really teaching computer science,” which may be the most-cited computing education paper ever. Modern learning sciences and computing education research have shown him to be mostly wrong. Dijkstra encouraged us to avoid metaphor in learning the “radical novelty” of computing, which we now know is likely impossible. Instead, the study of metaphor in computing education gives us new insights into how we learn and teach about programming. So far, I’m not aware of any evidence of anyone teaching or learning CS without metaphor.

After my Blog@CACM post, I learned on Twitter about Briana Bettin’s dissertation about metaphors in CS (see link here). Briana considers the potential damage from Dijkstra’s essay on computing education. How many CS teachers think that analogy and metaphors are bad, citing Dijkstra, when the reality is that they are critical?

The second most popular of his computing education essays is “How do we tell truths that might hurt?” (See link here). This essay is known for zingers like:

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.

He goes on to critique those who use social science methods and anthropomorphic terms when describing computing. He’s wrong about those, too (as I described in the Blog@CACM post), but I’ll just take up the Basic comment here.

Today, we can consider Dijkstra’s comments in light of research on brain plasticity (see example article here). It wasn’t until 2002 that we had evidence of how even adult brains can grow and reorganize their neural networks. We can always learn and regenerate, even as adults. Changing minds is always hard. The way to achieve change is through motivating change — being able to show that change is in the person’s best interest (see example here). Maybe people stick with Basic (or for me, with HyperTalk and Smalltalk) because the options aren’t obviously better enough to overcome inertia. The onus isn’t on the adult learner to change. It’s on the teacher to motivate change.

There are computer scientists, like Dijkstra, who believe that innate differences separate those who can program from those who cannot, a difference that is sometimes called the “Geek Gene.” An interview with Donald Knuth (another Turing Award laureate) last year quoted him saying that only one person in 50 will “groove with programming” (see interview here). We have a lot of evidence that there is no Geek Gene (see this blog post here), i.e., we have not yet identified innate differences that prevent someone from learning to program. Good teaching overcomes many innate differences (see blog post here making this argument).

Of course, there are innate differences between people, but that fact doesn’t have to limit who can program. Computers are the most flexible medium that humans have ever created. To argue that only a small percentage of people can “groove with programming” or that learning a specific programming language “mentally mutilates” is to define programming in a very narrow way. There are lots of activities that are programming. Remember that most Scratch programs have only Forever loops (if any loops at all), and Bootstrap:Algebra doesn’t have students write structures to control repetition. Students are still programming in Scratch and Bootstrap:Algebra. Maybe only one in 50 will be able to read and understand all of Knuth’s The Art of Computer Programming (I’m not one of those), and maybe people who programmed in Basic are unlikely to delve into Dijkstra’s ideas about concurrent and distributed programming (that’s me again). Let’s accept a wide range of abilities and interests (and endpoints) without denigrating those who will learn and work differently.

December 7, 2020 at 7:00 am 6 comments

Purpose-first programming: A programming learning approach for learners who care most about what code achieves: Katie Cunningham’s Defense

On Wednesday, Katie Cunningham is defending her dissertation, “Purpose-First Programming: A Programming Learning Approach for Learners Who Care Most About What Code Achieves.” I’m proud of the work Katie has done with Barb and me over the years. Let me relate the story here, with links to the blog posts.

I first met Katie through an on-line essay she wrote explaining the issues of gender and CS to her faculty (see my blog post referencing it at link here). After she graduated, she worked on the CSin3 project at California State University, Monterey Bay, which helped Latino and Latina students get undergraduate degrees in CS in three years. The paper she and the CSin3 team wrote won a Best Paper award at SIGCSE 2018 (see paper here).

Katie started her PhD research studying how students traced code when trying to understand and predict program behavior. She published her findings at ICER 2017 (see blog post). As you’d expect, students who traced programs line-by-line were more likely to get prediction problems (What is the output? What is this variable’s value?) correct. But not always. Most intriguing: Students who stopped mid-way through a trace were more likely to get the problems wrong than those who never traced at all.
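
To make concrete what a prediction problem and a trace look like, here is a small sketch in Python (my illustration, not an actual item from Katie’s study):

    # Students see code like this and are asked: what is the output?
    total = 0
    for i in range(1, 4):
        total = total + i
    print(total)

    # A complete line-by-line trace:
    #   i = 1 -> total = 1
    #   i = 2 -> total = 3
    #   i = 3 -> total = 6
    # Output: 6. Stopping the trace mid-way (say, after i = 2) and guessing
    # the "pattern" is exactly the behavior that predicted wrong answers.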

In her next study, she replicated the original experiment and then brought into the lab those students who had stopped mid-way in order to ask them “why?” A common answer was that the students were trying to see the “pattern” of the program, and once they saw the pattern, they were able to predict the answer. The problem is that the students were novices. They didn’t know many patterns. They often guessed wrong. Katie presented this paper at ITiCSE 2019 (see blog post).

Katie did a think-aloud study where she could watch students tracing, and something unexpected and interesting happened — two participants refused to trace. These were data science students who did program successfully, but they were unwilling to trace code at the line-by-line level. She wrote an ICLS 2020 paper about their reasons (see blog post). She decided to study that population.

A 2018 CHI paper by another U-M student, April Wang, had talked about how computing education fails conversational programmers (see paper here). Katie decided to build a new kind of curriculum that addressed her data science students and April’s conversational programmers. How do you teach programming to students who (1) don’t want to become professional programmers and (2) are dissuaded from high cognitive load activities like tracing code? This is a very different problem than most of CS education at the undergraduate level where we have eager CS majors who want to get software development jobs. Katie was dealing with issues both of motivation and of cognitive load.

Katie invented purpose-first programming. I don’t want to say too much about it here — her dissertation and her future papers will go more into it. I’ll give you a sense of her process. She used GitHub repositories and expert interviews to identify a few programming plans (just like Elliot Soloway and Jim Spohrer studied years ago) that were in common use in a domain that her participants cared about. She then taught those plans. Students modified and combined the plans to create programs that the students found useful. Rather than start with syntax or semantics, she started with the program’s purpose. The results were very positive in terms of learning, performance, and affect. Rather than being turned away, students wanted more. One student asked if she could create a whole set of curricula like this, each for a different purpose. That’s the idea exactly. Katie may be on her way to inventing the Duolingo of programming.
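
For readers who haven’t met Soloway-style plans: a plan is a stereotyped chunk of code that accomplishes a recognizable goal. Here is one classic example in Python, the accumulator plan (the shopping domain is my illustration, not Katie’s actual curriculum):

    prices = [3.50, 12.00, 7.25]

    total = 0                  # plan step 1: initialize the accumulator
    for price in prices:       # plan step 2: visit each item
        total = total + price  # plan step 3: fold the item into the accumulator
    print(total)               # plan step 4: use the result (22.75)

In purpose-first teaching, students start from what a plan like this achieves, then modify and combine plans, rather than starting from syntax rules.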

Katie already has a post-doc lined up. She’ll be a CI Fellow with Nell O’Rourke at Northwestern. The defense will be on Zoom — feel free to come and cheer her on!

The School of Information is pleased to announce the oral defense of Kathryn Cunningham:

Title: Purpose-First Programming: A Programming Learning Approach for Learners Who Care Most About What Code Achieves

Date: Wednesday, December 2nd

Time: 10 am – 12 pm EST

Place: This defense will be held virtually for the public to attend. Please use this link.

Barbara Ericson and Mark Guzdial, serving as committee chairs, will preside over the oral defense.

All are welcome to (virtually) attend!

November 30, 2020 at 7:00 am 11 comments

Define Computer Science so CS Departments include CS Ed

The CSEd Grad website and research project supports the growth of CS education research by building pathways for CSEd graduate students. I am excited to be speaking at their conference in a couple of weeks (see program here), in a Q&A session with Dr. Amy Ko.

Where would you expect that pathway to lead? Where would you expect faculty working in CS Education research to have their academic home? Education? Information? Computer Science?

If we want to see computer science departments include CS education research, then we have to define computer science in a way that includes computing education research. My favorite definition of computer science is the first one published, in 1967, by Allen Newell, Alan Perlis, and Herbert Simon (all three Turing Award laureates, and Simon is also a Nobel laureate). They say: “Computer science is the study of the phenomena surrounding computers.” Helping people to learn what computation is and how to program falls within that definition — it’s part of the phenomena surrounding computers. Some historians, like Nathan Ensmenger (see post here), have suggested that the lack of investment and innovation in CS education influenced the direction of CS research.

Most definitions of computer science are not as broad as that. CSTA, Code.org, and ECEP have just come out with a new report on the state of CS education in the United States (see report here). The definition they use (see K-12 Framework page here) is: “the study of computers and algorithmic processes, including their principles, their hardware and software designs, their implementation, and their impact on society.” This definition includes fields like social computing and human-computer interaction, but it doesn’t include the study of how people learn about computing. It’s a little ironic that a report promoting CS education promotes a definition that keeps education out of CS.

The definition matters when decisions are made on the basis of it. A popular website that ranks CS departments around the world, CSRankings.org, does not include CS education. I wrote my Blog@CACM post this month on my critique of CSRankings.org (see post here). I am opposed to it because it’s America-first, anti-progressive, and anti-interdisciplinary. People make decisions based on CSRankings.org. Graduate students use it to pick departments to apply to. Recommendation letters reference CSRankings.org as the measure of what counts as quality CS. If people use CSRankings.org to determine what “counts” (for attracting students, for promotion and tenure), then CS education literally doesn’t count. Researchers in CS education are at a disadvantage if their work doesn’t help their department in influential rankings.

Let’s define computer science to reflect our values long-term. Where do we build a home for CS education researchers in the future?

October 26, 2020 at 9:00 pm 8 comments

Social Studies Teachers using Programming for Data Visualization: An FIE 2020 Paper Preview

The Frontiers in Education (FIE) 2020 conference starts Wednesday October 21 in Uppsala, Sweden — see program here. My student Bahare Naimipour will be presenting our paper “Engaging Pre-Service Teachers in Front-End Design: Developing Technology for a Social Studies Classroom” (see preprint here) by Bahare, me, and Tammy Shreiner. This work came long before the NSF work that we just got funded for (see blog post here), but it’s in the same line of research.

The paper is about two of our participatory design sessions with pre-service social studies teachers in Tammy’s class on data literacy. In both of these sessions, we asked teachers to program in JavaScript or Vega-Lite to build a visualization, and in the second one, we also introduced CODAP, a visualization tool explicitly designed for middle and high school students. The paper is less about the technology and more about what the teachers told us about what they thought about tools for visualization in their class.

Social studies teachers are such an interesting group to study. They’re not particularly interested in STEM, data, or computers. They want to teach social studies. Very few of our participants had ever seen any code. (One told us, “This looks a lot like setting up my MySpace page in middle school!”) They’re only interested if we can help them teach what they want to teach. It’s a hard audience to engage, in all the right ways.

I’m going to highlight just two lessons we learned here:

First: The results from the two participatory design sessions were remarkably different. Participatory design isn’t an “okay, we did that — check off the box” methodology. Each group of participants can be remarkably different. There’s no generalization here. Each session is useful, but I don’t know how many sessions we’d have to do to get anywhere near saturation. That’s okay — we learned design lessons from each session.

Second: There is no one answer to how teachers think about programming. I have heard from many people that teachers find programming hard (see this CACM Blog post about that discussion), and I’ve hypothesized that to be true in this blog (see this post). So now I’ve been in the room as social studies teachers had their first programming experiences, and I’ve interviewed them afterwards, and… it’s complicated.

Teachers tell us often in our sessions that programming is overwhelming, but several teachers also told us that CODAP (explicitly designed for their use, and not a programming tool) was overwhelming. The question is whether it’s worth the complexity — and for whom. We get contradictory responses from the teachers, which we report in this paper. One told us that she wanted a simpler tool for herself and JavaScript for her students: “I don’t mind keeping life simple for me, but I wanted to challenge my students and give them useful, new skills.” Another teacher told us the opposite: “I would like Java[script] because it would let me do more to the visualization. Vega-lite would be better for students because it seems far more simple.”

We couldn’t fit in all the great stories and insights from these two participatory design sessions. Like the teacher who wants JavaScript in her class because, “That’s similar to what they use in math and science, right? I don’t want history to be the ‘dumbed-down’ programming.” I found that surprising, and wondered what the teachers would think of a block-based language. Another teacher told us that she wants to use programming in her history class, “Because maybe that would make history ‘cool.’” One of the tensions I found most interesting in these sessions was between the desire to know the tools and be comfortable in front of the class, and the desire to push their students to learn more. Some teachers told us that they preferred CODAP to any programming tool because they would be embarrassed to get a syntax error in front of their kids, which they realized would always be possible when programming. Other teachers told us that they were more concerned with going beyond basic tools — (paraphrasing one comment we received), “My students will already know Excel and Google Sheets. I want them to do more in my class.”

Our work is ramping up now. We had another PD session with pre-service teachers in March, just before pandemic lockdown, which was our first one with our data visualization tool in the mix. We’ve just held our first workshop in August for in-service (practicing) teachers. We’ve got more workshops planned over the next year. You’ll likely be hearing more from these studies in future posts.

October 19, 2020 at 7:00 am 3 comments

HyperBlocks come to Snap! — UX for PX in CS4All

Jens Moenig kindly shared with me a video announcing HyperBlocks that he’s added to the next version of Snap! The idea of hyperblocks is to support vector and matrix operations in Snap!, as in APL or MATLAB.

I’m interested in the research question of whether vector operations are easier or harder for students, including considering who the students are (e.g., does more math background make vector operations easier?) and how we define easier or harder (e.g., is it about startup costs, or the ability to build larger and more sophisticated programs?). My suspicion, based on the work of folks like L.A. Miller, John Pane, Diana Franklin, Debbie Fields, and Yasmin Kafai, is that vector operations would be easier. Students find iteration hard. Users have found it easier to describe operations on sets than to define a process which applies the operation to specific elements. It’s a fascinating area for future research.
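
A rough analogy in Python with NumPy (my sketch of the idea, not Snap! code) shows the contrast: compare a Media Computation-style brightening task written with explicit loops against the same task as a single vector operation.

    import numpy as np

    # Stand-in for an image: a small array of RGB pixel values (0-255).
    image = np.random.randint(0, 256, size=(90, 120, 3))

    # Iterative version: the student manages two indices and the nesting.
    brighter = image.copy()
    for row in range(image.shape[0]):
        for col in range(image.shape[1]):
            brighter[row, col] = np.minimum(image[row, col] + 50, 255)

    # Vectorized, hyperblock-like version: one operation on the whole array.
    brighter2 = np.minimum(image + 50, 255)

    assert (brighter == brighter2).all()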

And you can do Media Computation more easily (as Jens shows), which is a real win in my book!

They also have an online course, on using Snap! from Media Computation to Data Science: https://open.sap.com/courses/snap2

Soon after Jens sent me this video, I got to see him do this in real-time at Snap!Con, and then he and Brian Harvey won the NTLS Education Leadership Award for their work on Snap! (see link here). Congratulations to them both!

So here’s the question that I wonder about: Who does Snap! work for, and who doesn’t it work for?

  • I find Snap! fascinating but not usable for me. I have tried to do what I see Jens doing, but have to keep going back and forth from the video to the tool. It’s not obvious, for example, where to get the camera input and how to use it. I’m sure if I spent enough time using Snap!, I’d get it. Which teachers and students are willing to pay that cost? Obviously, some are — Snap! is popular and used in many places. Who gets turned off to Snap!? Who doesn’t succeed at it?
  • I attended some of the sessions at Snap!Con this summer: https://www.snapcon.org/conferences/2020. I was particularly struck by Paul Goldenberg’s session. He showed videos of a young kid (somewhere between 8 and 10 years old) using Snap!. He was struggling to place blocks with a trackpad. Think about it — press down at the right place, drag across the trackpad without lifting up, release at the right place. This is hard for young kids.

These are important questions to consider in pursuit of tools that enable CS for All. UX for PX: how do we design the user experience of the programming experience?

P.S. Jens just sent me the link to his Snap!Con talk video: https://youtu.be/K1qR4vTAw4w

October 5, 2020 at 7:00 am 10 comments

Award-winning papers at ICER 2020 explore new directions and point towards the next work to do

The 2020 ACM SIGCSE International Computing Education Research Conference was in August (see website here), hosted in Dunedin, New Zealand — but was unfortunately entirely virtual. I became so much more aware of the affordances of face-to-face conferences when attending one of my favorite conferences all through my screen. The upside of the all-virtual format is that all the talks are available on YouTube (see ICER 2020 channel here). Here are my comments on the three papers receiving awards — see them listed here.

What Do We Think We Think We Are Doing?: Metacognition and Self-Regulation in Programming. (Paper link)

This is the paper that I have read and re-read the most since the conference. The authors review what the literature tells us about metacognition in programming. Metacognition is thinking about thinking, like “Did I really understand that? Maybe I should re-read this. Or maybe I should write down my thoughts so I can reflect on them. I’m not sure that I’m making progress here. Taking a walk would probably help me clear my head and focus.”

One of their most intriguing findings is that “Metacognitive knowledge is difficult to achieve in domains about which the learner has little content knowledge.” In other words, you can’t teach students metacognition and self-regulation first, and then teach them something using those new thinking skills. Learners have to know some of the domain first. Now why is that?

Here’s a hypothesis: Metacognition and self-regulation are hard. They take a lot of cognitive load. You have to pay attention to things that are invisible (your own memory and thoughts) and that’s hard. Trying to learn or problem-solve at the same time that you’re monitoring yourself and thinking about your own learning — super hard. Maybe you have to know enough about the domain for some of that activity to be automatized, so that you don’t have to pay as much attention to it in order to do it.

So the biggest hole I see in this paper (which, given that it’s a review paper, probably means that the hole is in the literature) is that it does not consider enough factors like gender, race, disability, or SES (e.g., wealth). (Gender gets mentioned when reporting Alex Lishinski’s great work, but only with respect to self-efficacy.) My hypothesis is that the story is more complicated when you consider non-dominant groups. If you don’t think you belong, that takes more of your attention, which takes attention away from your learning — and leaves even less attention for metacognition and self-regulation. If you’re worrying about your screen reader working or where you’re going to get dinner tonight, how do you also have attention left over for monitoring your learning?

The biggest unique opportunity I see in thinking about metacognition and programming is in thinking about debugging. Like psychology or veterinary science, but unlike most other fields, a lot of a computer scientist’s job is in understanding the “thinking” (behavior, processing, whatever) of another agent. When you’re debugging your program, isn’t that a kind of metacognition? “Okay, what is the computer doing here? How is it interpreting what I wrote? Oh wait, is that what I wanted to write? Is that what I wanted to happen?” The complexity of mapping your thoughts and intentions to what you wrote to what the computer did is enormous. Now, debug someone else’s code — you’ve got what you want in mind, you’re constructing a model of mind of whoever wrote the code before you (did they know what they were doing? is this code brilliant or broken?), and you’re trying to figure out how the computational agent is “thinking about” the code. There’s some seriously complex metacognition going on there.

Exploring Student Behavior Using the TIPP&SEE Learning Strategy. (See paper here.)

No surprise that Diana Franklin’s CANON Lab at U. Chicago continues to do terrific and award-winning work. I’m excited about the TIPP&SEE learning strategy. A commonly found problem in computer science education is that students are bad at Explain In Plain English (EIPE) problems (e.g., see this SIGCSE 2012 paper on the topic). EIPE problems are a measure of students being able to step back from the structure and behavior of code to describe its function or purpose. Katie Cunningham has been exploring how some students focus more on the purpose of the programming, and others get stuck in the code and can’t see the purpose of the program. The TIPP&SEE learning strategy explicitly addresses these problems. Students are guided through how to understand a programming project and relating code to purpose.
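
For readers unfamiliar with EIPE items, here is a hypothetical one in Python (my illustration): students see the code and are asked what it does, in plain English.

    def mystery(values):
        result = values[0]
        for v in values:
            if v > result:
                result = v
        return result

    # A code-level answer ("it sets result to the first element, then loops,
    # replacing result with any larger element") describes structure and
    # behavior. The purpose-level answer the EIPE item is after: "it finds
    # the largest value in the list."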

This award-winning paper (which follows on their SIGCSE 2020 paper) shows us that students using the TIPP&SEE approach perform better than students who don’t. They get more of their programs done. The SIGCSE 2020 paper shows that they learn more.

The papers totally convince me that this strategy works. The next question, the one I want answered, is how and why. The SIGCSE paper does some qualitative work, but it’s a pretty big n — 184 students. With this kind of scale, the programs are given and the problems are given. There’s not as much opportunity for the detailed cognitive interviews that would figure out how the students are thinking about interpreting programs. What happens when these students just go to the Scratch website to look at something that they want to reuse? Do they use TIPP&SEE? Do they understand the programs that they just happen to come across? What happens when they want to build something, where they provide their own purpose? Can they draw on TIPP&SEE and succeed? This is not a critique of the papers — they’re great and make real contributions. I’m thinking about what I want to know next.

Hedy: A Gradual Language for Programming Education. (See program link here.)

Easily my favorite paper at ICER this year. Felienne is doing what I am trying to do. Let’s invent new, more usable programming languages! I am happy that she got this paper published, because (selfishly) it gives me hope that I can get my new work published. I am thrilled that the ICER community valued this paper so highly that it received a John Henry award.

The basic idea is to create a sequence of programming languages, where advancing levels have most of the elements of the previous level but include new elements. Her earliest level has no punctuation — no quotes, no semi-colons, no curly-braces. I recently built a task-specific programming language that had the same attribute, and one of the students I’m working with looked at it and asked, “Wait — you can have programming languages without all that punctuation? Well, then, why do we have so much when it scares people off?” Great question! When do we need all that extra punctuation, and where can we avoid it?
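
To make the punctuation point concrete, compare a tiny beginner program in standard Python with a Hedy-like rendering of the same program (my approximation of the flavor of Hedy’s earliest level, not its exact syntax):

    # Standard beginner Python: quotes, parentheses, a comma, and an
    # assignment all appear in a two-line program.
    name = input("What is your name? ")
    print("Hello,", name)

    # A Hedy-like earliest-level rendering, shown as comments because it
    # is not valid Python. Note: no quotes, parentheses, or variables.
    #   ask What is your name?
    #   echo Hello,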

The next stage is to explore how we design languages like these. (I’m biased, since this is where I’m spending most of my research time these days.) Why do we choose those language features? Why the keywords print, ask, echo, assign, if, else, and repeat? How do we design and iteratively develop the language? How do we know that people can do things that they want to do with this language? My answer to this is participatory design with teachers, but there are many other viable answers. Felienne provides good design rationale for Hedy’s language features, based in literature from computing education and natural language acquisition. In a process of user experience (UX) design, we’d also use iterative development, including testing with real users. This paper shows us use at large scale, and a big chunk of the paper describes what people did with it. It’s fascinating work, but we don’t get to talk to any of the users. We don’t know what they liked, what they disliked, what they found frustrating, and what they were able to do. We need to move programming language design closer to user experience design — UX for PX.

All three papers are terrific contributions to the research community, and I plan to cite and build on them all. I’m eager to see what comes next!


Sidebar: I am a member of the ICER Steering Committee (which has no role in reviewing papers or in picking awards), and I was a metareviewer for ICER 2020. I am speaking here just for myself as a reader and attendee.

September 28, 2020 at 7:00 am 2 comments

Let’s program in social studies classes: NSF funding for our work in task-specific programming languages

If we want all students to learn computer science (CS for All), we have to go to where the students are. Unfortunately, that’s not computer science class. In most US states, less than 5% of high school students take a course in computer science.

Programming is applicable and useful in many domains today, so one answer is to use programming in science, mathematics, social studies, and other non-CS classes. We take programming to where the students are, and hope to increase their interest and knowledge about CS. I love that idea and have been working towards that goal for the last four years. But it’s a hard sell. I told the story in 2018 (see post here) about how the mathematics teachers rejected our pre-calculus course that integrated computing. How do we help non-CS teachers to see value in computing integrated into their classes?

That’s the question Tammy Shreiner at Grand Valley State and I get three years to explore, thanks to a new grant from the US National Science Foundation in the research strand of the “CS for All” Program. Tammy teaches a course on “Data Literacy for Social Studies Teachers” at GVSU, and she (with her colleague Bradford Dykes) has been building an open educational resource (OER) to support data literacy education in social studies classes. We have been working with her to build usable and useful data visualization tools for her curriculum. Through the grant, we’re going to follow her students for three years: from taking her pre-service class, out into their field experiences, and then into their first classes. At each stage, we’re going to offer mentoring and workshops to encourage teachers to use the things we’ve shown them. In addition, we’ll work on assessments to see if students are really developing skills and positive attitudes about data literacy and programming.

Just a quick glimpse into the possibilities here. AP CS Principles exam-takers are now about 25% female. AP US History exam-takers are 56% female. There are five times as many Black AP US History exam-takers as AP CSP exam-takers, and the factor is 14 for Hispanic students. Everyone takes history. Programming activities in a history class reach a far more diverse audience.

I have learned so much in the last couple of years about what prevents teachers from adopting curriculum and technology — it’s way more complicated than just including it in their pre-service classes. Context swamps pre-service teaching. The school the teacher goes to influences what they adopt more than what they learned pre-service. I’ve known Anne Ottenbreit-Leftwich for years for her work in growing CS education in Indiana, but just didn’t realize that she is an expert on technology adoption by teachers — I draw on her papers often now.

Here’s one early thread of this story. Bahare Naimipour, an EER PhD student working with me, is publishing a paper at FIE next month about our early participatory design sessions with pre-service social studies teachers. The two tools that teachers found most interesting were CODAP and Vega-Lite. Vega-Lite is interesting here because it really is programming, but it’s a declarative language with a JSON syntax. The teachers told us that it was powerful, flexible — and “overwhelming.” How could we create a scaffolded path into Vega-Lite?

We’ve been developing a data visualization tool explicitly designed for history inquiry (you may remember seeing it back here). We always show at least two visualizations, because historical problems start from two accounts or two pieces of data that conflict.

As you save graphs in your inquiry to the right, you’re likely going to lose track of what’s what. Click on one of them.

This is a little declarative script, in a Vega-Lite-inspired JSON syntax. It’s in a task-specific programming language, but this isn’t a program you write. This is a program that describes the visualization — code as a concise way of describing process.
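
For a sense of what such a script looks like, here is a minimal Vega-Lite-style description, written as a Python dict for illustration (the file name and field names are hypothetical, and our tool’s syntax is only Vega-Lite-inspired, not identical):

    spec = {
        "data": {"url": "michigan_census.csv"},  # hypothetical data file
        "mark": "line",
        "encoding": {
            "x": {"field": "year", "type": "temporal"},
            "y": {"field": "population", "type": "quantitative"},
        },
    }
    # Nothing here executes step-by-step. The description just says which
    # data, which mark, and which fields go on which axes, and the tool
    # renders it. That is the sense in which code concisely describes process.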

We now have a second version where you can edit the code, or use the pull-down menus. These are linked representations. Changing the menu changes the code and updates the graph. Changing the code updates the menu and the graph. Now the code is also malleable. Is this enough to draw students and teachers into programming? Does it make Vega-Lite less overwhelming? Does it lead to greater awareness of what programming is, and greater self-efficacy about programming tasks?

We just had our first in-service teacher workshop with these tools in August. One teacher just gushed over them. “These are so great! How did I not know that they existed before?” That’s easy — they didn’t exist six months ago! We’re building things and putting them in front of teachers for feedback as quickly as we can, in a participatory design process. We make lots of mistakes, and we’re trying to document those, too. We’re about applying an HCI process to programming experience design — UX for PX.


If you know a social studies teacher who would want to keep informed about our work and perhaps participate in our workshops, please have them sign up on our mailing list. Thank you!

September 14, 2020 at 7:00 am 11 comments

Proposal #3 to Change CS Education to Reduce Inequity: Call a truce on academic misconduct cases for programming assignments

I participated in a Black Lives Matter protest in Ann Arbor a few weeks ago, where I first heard the slogan “Defund the Police.” I was immediately uncomfortable. The current model for police in the US may be broken, but the function of the police is important. But the more I learned, the more comfortable I became with the idea. As this NYTimes article suggests (see link here), the larger notion gaining support in the US is that we need a reinvestment. We want to spend less on catching criminals, and more on supporting community health and welfare. That’s when I realized what I wanted for my third and final proposal to change CS education to reduce inequity.

This is my fourth and last post in a series* about how we have to change how we teach CS to reduce inequity. The series has several inspirations, but the concrete one that I want to reference back to each week is the statement from the University of Maryland’s CS department about improving diversity, equity, and inclusion within their department:

Creating a task force within the Education Committee for a full review of the computer science curriculum to ensure that classes are structured such that students starting out with less computing background can succeed, as well as reorienting the department teaching culture towards a growth mindset

Students don’t learn best by discovery

Paul Kirschner, John Sweller, and Richard Clark have been writing a series of controversial and influential papers in educational psychology. The most cited (in Educational Psychologist) lays out the whole premise in its title “Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching” (see link here). Another, in American Educator, is a more accessible version, “Putting students on the path to learning: The case for fully guided instruction” (see link here). A quick summary of the argument is that learning is hard, and it’s particularly hard to learn if you are trying to “figure things out” or “problem-solve” at the same time. In fact, it’s so hard that, unless you tell students exactly what you want them to learn, the majority of your students probably won’t learn it.

Computer scientists are big believers in discovery learning. I’ve had a senior faculty member in my department tell me that, if they gave students feedback from the unit tests (vs. a binary passed/failed) used in autograding, “we would be stealing from students the opportunity to figure it out for themselves.” I have been interviewing teaching assistants for the Fall. They tell me that if I made my class harder, so students have to struggle more to figure out the programming assignments, they would learn more and retain it longer. I know of little evidence for these beliefs, and none in CS education. Telling students leads to more students learning and learning more efficiently than making them figure it out. Efficiency in learning does matter, especially when we are talking about students who may have competing interests for their time (like a job) and during the stress of a pandemic.

Learning requires challenge, but too much cognitive load reduces learning. My guess is that we believe in the power of struggle because it’s how many of us learned computing. We struggled to figure out undocumented systems, to make things work, and to figure out why they worked. We come away with a rationalization that the process of discovery, without a teacher or guidance, is what led to our learning. The problem is that it’s experts and high-ability, highly-motivated learners who like to learn that way. We want to figure it out for ourselves. There is a motivational (affective) value in discovery. However, the available evidence suggests that our belief in discovery is a mirage, a cognitive illusion, a trick we play on ourselves. We don’t learn best by discovery.

What’s worse, by forcing more students to learn by discovery, we will likely drive away the less prepared, the less motivated, and the less able students. That’s the point of this series of blog posts. We as CS teachers make decisions that often emphasize how we wanted to be taught and how our top students want to learn. That is inequitable. We need to teach “such that students starting out with less computing background can succeed.”

Programming assignments should be practice, not assessment

Clark, Kirschner, and Sweller describe how we should be teaching to be most effective and efficient:

Teachers providing explicit instructional guidance fully explain the concepts and skills that students are required to learn. Guidance can be provided through a variety of media, such as lectures, modeling, videos, computer-based presentations, and realistic demonstrations. It can also include class discussions and activities—if the teacher ensures that through the discussion or activity, the relevant information is explicitly provided and practiced. In a math class, for example, when teaching students how to solve a new type of problem, the teacher may begin by showing students how to solve the problem and fully explaining the how and why of the mathematics involved. Often, in following problems, step-by-step explanations may gradually be faded or withdrawn until, through practice and feedback, the students can solve the problem themselves. In this way, before trying to solve the problem on their own, students would already have been walked through both the procedure and the concepts behind the procedure.

Programming assignments are the opportunities to practice in this model, not the time to “figure it out for themselves” and not the time to assess learning or performance. In explicit instruction in programming, the teacher tells the student exactly what to do to solve a programming problem. Tell them how to solve the problem, and let them practice the same problem. (Better yet, give students worked examples and practice interleaved, as we do in our ebooks.) Programming is a great place for learning, since it provides feedback on our tests and hypotheses.
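To make this concrete, here is a minimal sketch (my own illustration, with invented function names, not an excerpt from our ebooks) of what a fully worked example paired with a near-identical practice problem might look like in a Python CS1:

```python
# Worked example (fully explained, step by step):
# Goal: count how many numbers in a list are above a threshold.

def count_above(numbers, threshold):
    count = 0                      # Step 1: start a counter at zero
    for n in numbers:              # Step 2: look at each number in turn
        if n > threshold:          # Step 3: compare it to the threshold
            count = count + 1      # Step 4: if it qualifies, add one
    return count                   # Step 5: report the total

print(count_above([3, 8, 5, 10], 4))   # prints 3

# Practice problem (same structure, new context):
# Write count_below(numbers, threshold) that counts how many
# numbers are BELOW the threshold. Follow the same five steps.
```

The worked example removes the “figure it out” load: the student sees every step explained, then practices the same structure in a slightly new context.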

Students should be encouraged to engage in programming practice. The way we do that is by giving points towards grades. We should probably give more points for correct solutions, because that creates desirable incentives. But being able to program does not indicate understanding. The recent ITiCSE 2020 paper by Jean Sala and Diana Franklin showed that use of a given code construct did not correlate well with understanding of that code construct (see paper here). It’s also the case that students may understand the concept but be unable to make it work in code.

As I was writing this blog post, the ACM SIGCSE-Members email list had yet another great thread on how to reduce cheating in CS1. The teachers on the list were torn. They want to support student learning, but they don’t want to reward cheaters. Many echoed this same point — that programming assignments have to be an opportunity for learning, not a summative assessment.

We need to separate learning and assessment activities. Most programming should be a learning opportunity, not a time to assess student learning. I suppose we might have a special programming assignment labelled, “This one is under exam conditions,” and then it’s clear that it should be done alone and for assessment. I don’t encourage trying to make those kinds of distinctions during remote teaching and learning. I completely understand the reason for plagiarism detection and prosecution on exams and quizzes. Those are assessment activities, not learning activities.

We should evaluate the students’ programs and give them feedback on them. Feedback improves learning. It shouldn’t be about punishing students who struggle with or even fail at the programs — programming should be part of a learning process.

We can assess learning about programming without having students program

One of our biggest myths in computer science is that the only way to test students’ knowledge of programming is by having them program. Allison Elliott Tew showed that her FCS1 correlated highly with the final exam scores of the students from four courses at two universities who were part of her study (see post here, with a diagram of this scatterplot). Her test (all multiple choice) predicted the grade from a semester’s worth of programming assignments, quizzes, and tests.

Over the years, I’ve attended several AP CS presentations from psychometricians at ETS. Every time, they show us that they don’t need students to program on the AP CS exams. They can completely predict performance on the programming questions from the multiple choice questions. We can measure the knowledge and skill of programming without having students program.

Of course, it’s easier to tell the students to program, as a way of testing their programming knowledge. However, it’s not an effective measurement instrument (understanding and coding ability are not equivalent), it’s inefficient (takes more time than a test), and it creates stress and cognitive load on the students. (I recommend the work by Kinnunen and Simon on how intro programming assignments depress students’ self-efficacy.) We can and should build better assessments. For example, we could use Parsons problems which are more sensitive measures of understanding about programming than writing programs (see blog post). We want students to program, and most of our students want to program. Our focus should be on improving programming as a learning activity, not as a form of assessment.

Now more than ever, encourage collaboration

Here’s the big ask: Stop prosecuting students for academic misconduct if you detect plagiarism on programming assignments. My argument is just like the policing argument — we should be less worried about catching those who will exploit the opportunity to get unearned points, and more worried about discouraging students from collaboration that will help them learn. We already have inequality in our classrooms. During the pandemic, the gap between the most and least prepared students will likely grow. We have to take specific actions to close that gap, always in favor of the less-prepared students.

Notice that I did not say “stop trying to detect plagiarism.” We should use tools like MOSS to look for potential cheating. But let’s use any detection of plagiarism as an opportunity to learn, and maybe, as a cry for help.

Why do students cheat on programming assignments? There’s a body of literature on that question, but let me jump to the critical insight for this moment in time: All those reasons will be worse this year.

  • When we ask students to program, we are saying, “I have shown you all that you need to be able to complete this program. I now want you to demonstrate that you can.” Are we sure about that first part? We’re going to be doing everything online. We might miss covering concepts that we might normally teach, maybe in side conversations. How would we know if we got it wrong this next year?
  • One of the most powerful enablers for cheating is that students feel anonymous. If students feel that nobody knows them or notices them, then they might as well cheat. Students are going to feel even more anonymous in remote teaching.
  • Finally, at both higher education institutions where I’ve taught, the policy term for cheating is “illicit collaboration.” Especially now in a pandemic with remote teaching, we want students to collaborate. The evidence on pair programming and buddy programming is terrific — it helps with learning, motivation, and persistence in CS. But where’s the line between allowed and illicit collaboration when it’s all over Zoom? I’m worried about students not collaborating because they fear that they’ll cross that line. I have talked to students who won’t collaborate because they fear accidentally doing something disallowed. It will be even harder for students to see that line in a pandemic.

Some students cheat because they think that they have to. “If I don’t cheat and everyone else does, I’m at a disadvantage.” That’s only true if student grades are comparative. That’s why Proposal #2 is a critical step for Proposal #3 — stop pre-allocating, curving, or rationing grades. Use grades to reward learning, not “rising above your peers.”

I worry about us encouraging cheating. The pressures that Feldman identifies as exacerbating cheating will be even greater in all-online learning:

For example, we lament our students’ rampant cheating and copying of homework. Yet when we take a no-excuses approach to late work in the name of preparing students for real-world skills and subtract points or even refuse to accept the work, we incentivize students to complete work on time by hook or by crook and disincentivize real learning. Some common grading practices encourage the very behaviors we want to stop.

Feldman, Joe. Grading for Equity (p. xxii). SAGE Publications. Kindle Edition.

If you detect plagiarism, contact the student. Tell them what you found. Ask them what happened. Ask how they’re doing. Are they getting lost in the class? Use this as an opportunity to explain what illicit collaboration is. Use this as an opportunity to figure out how you’re teaching and what’s going on in the lives of your students. This will be most effective for your first-generation students and your students who are in a minority group. They would likely feel alone, isolated, and invisible even in the in-person class. It’s going to be worse in remote teaching. They are less likely to reach out for help in office hours. Let them know that you’re there and that you care.

Last year, I was in charge of “cheat finding” for a large (over 750 students) introductory programming course. In the end, we filed academic misconduct accusations for about 10% of the class (not all of whom were found guilty by the Honor Council). It was a laborious, time-consuming task — gathering evidence, discussing with the instructional team, writing up the cases, etc. We should have spent that time talking to those students. We would have learned more. They would have learned more. It would have been a better experience for everyone.

Let’s change CS teaching from policing plagiarism to supporting student health, welfare, and development.


* This will be my last post for a while. I’m taking a hiatus from blogging. This series on CS teaching to reduce inequity is my “going out with a bang.”

July 30, 2020 at 7:00 am 8 comments

Proposal #2 to Change CS Education to Reduce Inequity: Make the highest grades achievable by all students

What does an “A” mean in your course? The answer likely depends on why you teach. Research on teacher beliefs suggests that grading practices are related to teachers’ reasons for teaching. Joe Feldman points out that our grading relates to who we are as teachers, and it is passionately held:

Conversations about grading weren’t like conversations about classroom management or assessment design, which teachers approached with openness and in deference to research. Instead, teachers talked about grading in a language of morals about the “real world” and beliefs about students; grading seemed to tap directly into the deepest sense of who teachers were in their classroom.

Feldman, Joe. Grading for Equity (pp. xix-xx). SAGE Publications. Kindle Edition.

I’ve used the Teacher Perspectives Inventory (TPI) (see link here) with dozens of CS teachers. The most common teaching perspective I see among CS teachers is “Apprenticeship.” CS faculty see themselves as preparing future software developers. They value demonstrating and modeling good software practices. An “A” for an apprenticeship teacher is likely to indicate that the student produces good code. An “A” is reserved for “excellence” (as one CS teacher told me recently). An “A” indicates that the student has risen above his or her peers in producing “high quality products” (as another CS teacher posted on Facebook recently). An “A” means that, in this teacher’s opinion, the student is recommended to go on to a highly-desired software development job, perhaps at a place like Google or Amazon.

A student with less computing background is much less likely to earn an “A” in an Apprenticeship-oriented class. If you bring more experience to the table, you have a head start on producing higher-quality products than the other students in the class. Your products are more likely to be marked as “excellent.”

In my opinion, the teacher attitude of “rugged individualism” defined in the SIGCSE 2020 paper by Hovey, Lehmann, and Riggers-Piehl, “Linking faculty attitudes to pedagogical choices,” meshes with the TPI category of “Apprenticeship.” “Rugged individualism” teachers believe that “learning and success are the individual student’s responsibility.” (Hovey et al did not make this claim or compute the correlation — this is my prediction.) They showed that teachers who believe in “rugged individualism” are less likely to use student-centered teaching practices and more likely to lecture. Students with less computing background do better with student-centered teaching practices.

This is my third post in a series about how we have to change how we teach computing to reduce inequity (see last post). The series has several inspirations, but the concrete one that I want to reference back to each week is the statement from the University of Maryland’s CS department:

Creating a task force within the Education Committee for a full review of the computer science curriculum to ensure that classes are structured such that students starting out with less computing background can succeed, as well as reorienting the department teaching culture towards a growth mindset

The style of grading that means to identify “talent” or “excellence” is inherently inequitable. It presumes a fixed mindset. If you believe that there is a random distribution of “talent” or “ability” or “Geek Gene” in the course, and (critically), there’s nothing much that teaching can do to change that, then it makes sense to grade to the curve. There can be only a few “A” slots, more “B” slots, and so on. Empirical evidence suggests the opposite — good teaching can trump a lot of other factors. Belief in a growth mindset leads to better learning outcomes and better performance. If we value teaching, and believe that students can get better at computer science, then over time, we should teach better and students should learn more. If students learn more, they should get a higher grade. It’s not a fixed-result game.

Measure learning or progress towards objectives, not code quality

My teacher perspective is that we are in the job of maximizing individual human development. It’s our job to help each student achieve as much as they can within our discipline. We should measure achievement in terms of learning, not product quality. I tend to align with the “nurturing” perspective on the TPI, but that’s not what I’m going to argue for here.

It is not at all the same thing to grade for excellence and to grade for learning. Writing code isn’t the same as learning. We have evidence that writing a given piece of code is not indicative of understanding that piece of code. The recent ITiCSE 2020 paper by Jean Sala and Diana Franklin showed that use of a given code construct did not correlate well with understanding of that code construct (see paper here).

I mentioned in the last post that I’m reading Grading for Equity by Joe Feldman (see his website here). He points out all the other factors that influence grades that have nothing to do with learning. Some of these extra-curricular factors — like the ability to meet deadlines, which presumes privileged, full-time student status without outside pressures — are less available to Black, Hispanic, poorer, or first-generation college students. These factors influence the production of high-quality code even more than they influence learning. These pressures are going to be even greater during a pandemic.

We should give grades depending on how much is learned in a given course. That’s hard to do without measuring students as they enter the course and as they leave the course, and grading on the delta. There is a movement suggesting that “labor-based grading” leads to more compassionate and equitable grading (see article here). I’m not arguing for that.
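If you did want to grade on the delta, Feldman doesn’t prescribe a formula, but one common way researchers operationalize a learning delta is Hake’s normalized gain (my import from physics education research, not something Feldman proposes). A minimal sketch in Python, assuming pre- and post-test scores on a 0-100 scale:

```python
def normalized_gain(pre, post):
    """Hake's normalized gain: what fraction of the possible
    improvement (from the pre-test score up to 100) the student achieved."""
    if pre >= 100:                 # already at ceiling; no room to grow
        return 0.0
    return (post - pre) / (100 - pre)

# Two students with the same post-test score, very different growth:
print(normalized_gain(pre=80, post=90))   # 0.5 -- gained half of what was possible
print(normalized_gain(pre=20, post=90))   # 0.875 -- gained most of what was possible
```

The appeal is that two students with the same final score can show very different growth; the hard part, as noted above, is getting trustworthy entry and exit measures.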

State your criteria clearly and be objective in grading

I’m arguing that we aim for a Teaching Perspective most closely aligned with the one called “Transmission.” The goal of a Transmission teacher is for students to learn what is needed for the next course or to meet the course objectives. Assessments of understanding should be as objective as possible. Grades should represent achievement of the learning objectives and nothing else.

There is a lot in any given CS course that is not about preparing students for the next course in the sequence. We cover a lot of material. In some courses, we’re preparing students for the imagined technical interview with Google or Amazon. It’s not fair to require understanding and performance on standards that go beyond the course objectives.

I recommend setting the objectives clearly, announcing them on the first day, and grading to those. It’s okay to aim for the targeted average on each assignment to be low but passing (e.g., a “C”), as long as you’re clear and fair. In my classes, I rely heavily on weekly quizzes because those are more likely to lead to learning (see post here). I give points for writing code, but that’s to encourage the activity, not to make-or-break the students’ grade. Programming is for learning, and the quizzes, midterm, and final exam are assessments.

Go ahead and bore your best students

Students with a lot of computing background get an easy “A” in my courses. That’s fine. I expect that. I explicitly tell my students that I teach to the bottom third of the course. I want to move B and C students up into A’s and B’s. I give out a lot of A’s. In years past, I did a series of blog posts on “Boredom vs Failure” (here’s the first post in the series, and here’s the last one). The question is: which is worse, to bore and give easy A’s to the most privileged and most prepared students, or to fail (or discourage to the point that they drop) the students with less privilege and the least computing background? Think about the students who might fail or drop out in a system that makes sure that the most well prepared students are challenged. Each one of those students who continues on does more to change the status quo than does keeping the more privileged students from getting bored. Helping the students with less computing background succeed makes a much bigger difference for society long-term than does keeping entertained the most privileged students.

One response to this proposal is that I’m degrading the value of past A’s. The A’s don’t mean the same thing anymore. That’s true. I take a historical perspective. Those A’s were earned when the students with less computing background were not being taught with methods that helped them succeed (Proposal #1). Those A’s were earned in unfair competition, where the students with prior computing background were compared to students with less computing background. I’m proposing a more just system where the students with less computing background have a chance at the highest grades, where they’re taught in ways that meet their needs, and where their teachers believe that they can grow and improve. I’m not particularly concerned about preserving the past glories of those who won in an unjust system.

If we pre-allocate, ration, or otherwise curve “down” grades so that the top scores are a scarce resource in a competitive system, we are privileging the most prepared students and disadvantaging the least prepared students. I am proposing differentiated instruction. Teach explicitly for the least-prepared students. You will likely have to give up on pushing your top students to greater excellence — that’s the kind of privilege which we have to be willing to surrender. Aim to help every student achieve their potential, and if you have to make a choice, make choices in favor of the students with less privilege and less computing background.

July 27, 2020 at 7:00 am 13 comments

Proposal #1 to Change CS Education to Reduce Inequity: Teach computer science to advantage the students with less computing background

This is my second post in a series about how we have to change how we teach CS to reduce inequity. I started this series with this post, making an argument based on race, though the argument might also be made in terms of the pandemic. We have to change how we teach CS this year.

The series has several inspirations, but the concrete one that I want to reference back to each week is the statement from the University of Maryland’s CS department:

Creating a task force within the Education Committee for a full review of the computer science curriculum to ensure that classes are structured such that students starting out with less computing background can succeed, as well as reorienting the department teaching culture towards a growth mindset

We as individual computing teachers make choices that influence whether students with less computing background can succeed. I often see choices being made that encourage the most capable students, but at the cost of the least prepared students. Part of this is because we see ourselves as preparing students for top software engineering jobs. The questions that get asked on technical interviews explicitly drive how many CS departments teach algorithms and theory. We want to encourage “excellence.” But whose excellence do we care about? Are Silicon Valley entrepreneurial perspectives the only ones that matter? The goal of “becoming a great software engineer” does not consider alternative endpoints for computing education (see post here). Not all our students want those kinds of jobs. Many of our students are much more interested in giving back to their community, rather than taking the Silicon Valley jobs that our programs aim for (see post here).

Please don’t teach students as if they are you. First, you (as a CS teacher, as someone who reads this blog) are wildly different from our normal student. Second, your memories of how you learned and what worked for you are likely wrong. Humans are terrible at reconstructing what they knew at an earlier time and what actually led to their learning. That’s why we need research.

In this post, I will identify four methods that have a differential impact, advantaging the students with less computing background — there are many more:

  • Use Peer Instruction
  • Explain connection to community values
  • Use Parsons Problems
  • Use subgoal labeling

Use Peer Instruction

When I talk to computer science teachers about peer instruction and how powerful it is for learning, the most common response is, “Oh, we already do that.” When I press them, they tell me that they “have class discussions” or “use undergraduate teaching assistants.” Nope, that’s not peer instruction.

Peer instruction (PI) is a technical term meaning a very specific protocol. Digital Promise and UTeach are creating a set of CS teaching micro credentials, and the one that they have on PI defines it well (see link here). PI is where the teacher poses a question for the class for individual responses, students discuss their answers, students respond again, and the teacher reveals the answer and explains the answer. The evidence suggesting that PI really works is overwhelming, and it can be used in any CS class — see http://peerinstruction4cs.com/ for more information on how to do it. I use it regularly in Senior-level undergraduate courses and graduate courses. There are ways to do PI when teaching remotely, as I talked about in this post.
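As an illustration (my own invented question, not one from the micro-credential or peerinstruction4cs.com), a PI question in an intro Python course might be a code-reading prompt like this; students vote individually, discuss in pairs or small groups, vote again, and then the teacher explains:

```python
# Peer Instruction question: What does this program print?
#
#   (A) 0    (B) 3    (C) 6    (D) an error

total = 0
for x in [1, 2, 3]:
    total = total + x
print(total)

# Correct answer: (C) 6. A good PI question has distractors that
# surface common misconceptions -- e.g., (B) catches students who
# think the program prints the last value of x, and (A) catches
# students who think the loop never changes total.
```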

I’m highlighting PI because the evidence suggests that it has a differential impact (see study here). It doesn’t hurt the top students, but it reduces failure rate (measured in multiple CS courses) for students with less background (see paper here). That’s exactly what we’re looking for in this series: ways to improve the odds of success for students who are not in the most privileged groups.

Explain connection to community values

I blogged last year about a paper (see post here) that showed female, Black, Latino/Latina, and first-generation students take CS because they want to help society. These students often do not see a connection between what’s being taught in CS classes and what they want. That’s because we often teach to prepare students for top software engineering jobs — it’s a mismatch between our goals and their goals.

I don’t know if this is an issue in upper-level classes. Maybe students in upper-level classes have already figured out how CS connects to their goals and values. Or maybe we have already filtered out the CS students who care about community values by the upper-level and graduate courses.

CS can certainly be used to advance social goals and community values. Teach that. In every CS class, for everything you teach, explain concretely how this concept or skill could be used to advance social good, cultural relevance, and community values. If you can’t, ask yourself why you’re teaching this concept or skill. If it’s just to promote a Silicon Valley jobs program, consider dropping it. We are all revising our classes this summer for fall. It’s a good time to do this review and update.

Use Parsons Problems

Parsons problems (sometimes referred to as “mixed-up code problems”) are where students are given a programming problem, and given all the lines of code to solve the problem, but the lines are scrambled (I usually say “on refrigerator magnets”). The challenge is to assemble the correct program. My wife, Barbara Ericson, did her dissertation work (see post here) showing that Parsons problems were effective (led to the same learning as writing the programs from scratch or from debugging programs) and efficient (low time cost, low cognitive load). She also invented dynamically adaptive Parsons problems which are even better (for effectiveness and efficiency) than traditional Parsons problems.
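Here’s a minimal sketch of what a Parsons problem looks like (my own invented example): the student gets the shuffled “refrigerator magnet” lines and has to order and indent them into a working program.

```python
# Parsons problem: rearrange these shuffled lines (the "refrigerator
# magnets") into a function that returns the largest value in a list.
#
#       return biggest
#   biggest = values[0]
#   for v in values:
#   def largest(values):
#           biggest = v
#       if v > biggest:
#
# One correct assembly:

def largest(values):
    biggest = values[0]
    for v in values:
        if v > biggest:
            biggest = v
    return biggest

print(largest([4, 9, 2, 7]))   # prints 9
```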

Parsons problems work on-line, so they fit into remote teaching easily. I’ve been doing paper-based (and Canvas-based) Parsons for exams and quizzes for several years now (see post here). Parsons problems work great in lower-level classes. There is relatively little research on using them in upper-level and graduate courses — I suspect that they could be useful, if only to break up the all-coding-all-the-time framing of CS classes.

I’m highlighting Parsons problems for two reasons.

  • First, they’re efficient. As Manuel noted (as I quoted in my Blog@CACM post), BIPOC students are much more likely to be time-stressed than more privileged students. I’m reading Grading for Equity by Joe Feldman which makes this point in more detail (see website). Our less-privileged students need us to find ways to teach them efficiently. This is going to be a particular concern during a pandemic, when students will have more time constraints, especially if they, a relative, or someone they live with becomes ill.
  • Second, they are a more careful and finer-grained assessment tool (see this post). If you ask students to write code, you might get partial programs from students who got only part of it working, but you get little data from students who knew how to write part of the code yet got none of it working. Parsons problems help students with less computing background show what they do know, and help the teacher figure out what they can’t write yet.

Use subgoal labeling

Subgoal labeling is pretty amazing (see Wikipedia page). Even our first experiment with subgoal labeling for CS worked examples (see post here) showed improvements in learning (measured immediately after instruction), retention (measured a week later), and transfer (student success on a new task without instruction). Since then, Lauren Margulieux, Briana Morrison, and Adrienne Decker have published a slew of great results.
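To illustrate the idea (my own sketch in Python, not an excerpt from their materials): subgoal labels name the purpose of each chunk of a worked example, so that students can carry the same plan to new problems.

```python
# Worked example with subgoal labels: average of the positive numbers.

def average_positive(numbers):
    # SUBGOAL: initialize accumulators
    total = 0
    count = 0
    # SUBGOAL: select and accumulate the relevant values
    for n in numbers:
        if n > 0:
            total = total + n
            count = count + 1
    # SUBGOAL: guard against the empty case and compute the result
    if count == 0:
        return 0
    return total / count

print(average_positive([3, -1, 4, -5, 2]))   # prints 3.0
```

The labels give students a vocabulary for the steps (“initialize accumulators,” “select and accumulate”) that transfers even when the surface details change.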

The one that makes it on this list is their most recent finding (see post here). Subgoal labeling in an introductory computing course, compared to one not using subgoal labeling, led to reduced drop or failure rates. That’s a differential benefit. There was not a statistically significant improvement on learning (measured in terms of exam scores), but it kept the students most at risk of failing or dropping out in the course. That’s teaching to advantage the students with less background in computing. We don’t know if it works for upper-level or graduate classes — my hypothesis is that it would.

July 20, 2020 at 7:00 am 5 comments

Changing Computer Science Education to eliminate structural inequities and in response to a pandemic: Starting a Four Part Series

George Floyd’s tragic death has sparked a movement to learn about race and to eliminate structural inequities and racism. My email is flooded with letters and statements demanding change and recommending actions. These include a letter from Black scholars and other members of the ACM to the leadership of the ACM (see link here), the Black in Computing Open Letter and Call to Action (see link here, and the Hispanics in Computing supportive response link here). The letter about addressing institutional racism in the SIGCHI community from the Realizing that All Can be Equal (R.A.C.E) is powerful and enlightening (see link here).

I’m reading daily about race. I’m not an expert, or even particularly well-informed yet. One of the books I’m reading is Me and White Supremacy by Layla F. Saad (see Amazon link here) where the author warns against:

Using perfectionism to avoid doing the work and fearing using your voice or showing up for antiracism work until you know everything perfectly and can avoid being called out for making mistakes.

This post is the start of a four part series about what we should be changing in computing education towards eliminating structural inequities. We too often build computing education for the most privileged, for the majority demographic groups. It’s past time to support alternative pathways into computing. Even if you’re not driven by concerns about racial injustice, I ask you to take my proposals seriously because of the pandemic. We don’t know how to teach CS remotely at this enormous scale over the next year, and the least-privileged students will be hurt the most by this. We must teach CS differently so that we eliminate the gap between the most and least privileged of our students. Here’s what we need to do.

Learn about Race

Amber Solomon, a PhD student working with Betsy DiSalvo and me, reviewed my first two posts about race in CS Education (at Blog@CACM and here a few weeks ago). Amber has written on intersectionality in CS education, and is writing a dissertation about the role of embodied representations in CS education (see a post here about her most recent paper). She recommended more on learning about race:

  • Whiteness as Property by Cheryl Harris (see link here). Harris, Andre Brock, and some race scholars, argue that to understand racism, you should understand whiteness, not Blackness.
  • The Matter of Race in Histories of American Technology by Herzig (see link here). She has the clearest explanation of “race and/as technology” that I’ve read. And she also does a great job explaining why we can’t just say that race is a social construct.

Two videos:

  • Repurposing Our Pedagogies: Abolitionist Teaching in a Global Pandemic (see YouTube link here).
  • Data, AI, Public Health, Policing, the Pandemic, and Un-Making Carceral States (see YouTube link here). It’s about data, but they get into what it means to be racially Black, white, etc.; and Ruha says something super interesting: “rather than collecting racial data, think about what it would mean to collect data on racism.”

Think about the words you use, like “Underrepresented Minority”

Tiffani Williams wrote a Blog@CACM post in June that made an important point about the term “underrepresented minority” (see link here). She argues that it’s a racist term and we should strike it from our language.

Reason #1: URM is racist language because it denies groups the right to name themselves.

Reason #2: URM is racist language because it blinds us to the differences in circumstances of members in the group.

Reason #3: URM is racist language because it implies a master-slave relationship between overrepresented majorities and underrepresented minorities.

In Me and White Supremacy, Saad uses BIPOC (Black, Indigenous, and People of Color), but points out that that is mostly a shorthand for “people lacking white privilege.” She argues, as does Williams, that the term BIPOC ignores the differences in experience between people in those groups. I am striving to be careful in my language and be thoughtful when I use terms like “BIPOC” and “underrepresented.”

Change Computing Departments

Amy Ko has made some strong and insightful posts in the last month about the injustice and exclusion in CS education (see Microsoft presentation slides here). She wrote a powerful post about why her undergraduate major in Information Technology at the University of Washington is racist (see link here). Obviously, her point is that it’s not just her program — certainly the vast majority of computing majors are racist in the ways that she describes. There are mechanisms that are better, like the lottery I described recently to reduce the bias in admission to the major. Amy’s points are inspiring this blog post series.

Manuel Pérez-Quiñones has started blogging, with posts on what CS departments should do to dismantle racism (see link here) and about why CS departments should create more student organizations to combat racism (see link here). His first post has a quote that inspires me:

First, it should come as no surprise that many things we assume to be fair, standard, or just plain normal in reality are not. Even our notion of “fair” has been constructed from a point of view that prioritizes fairness for certain groups. Not only is history written by the victors; laws, structures, and other pieces of society are developed by them too. To expect them to be fair or equitable is naive at best.

Chad Jenkins shared with me a video of his keynote from the RSS 2018 Conference where he suggests that CS departments need to change their research focus, too, to incorporate a value for equity and human values.

The Chair of our department’s Diversity, Equity, and Inclusion (DEI) Committee, Wes Weimer, is pushing for all Computer Science departments to be transparent about how they’re doing on their goals to make CS more diverse and equitable, and what their plans are. The Computer Science & Engineering division at the University of Michigan is serving as an example by making its annual DEI report publicly available here. In the comments, please share your department’s DEI report. Let’s follow Wes’s lead and make this a common, annual practice.

Change how we teach Computing

Manuel’s point isn’t just about departments. We as individual teachers of computer science and computing make choices which we think are “fair, standard” but actually support and enforce structural inequities. We have to change how we teach. CS for All has published a statement on anti-racism and injustice (see link here) where they say:

We pledge to repeatedly speak out against our historical pedagogies and approaches to computer science instruction that are grounded and designed to weed out all but a small prerogative subset of the US population.

Chad Jenkins mailed me the statement from the University of Maryland’s CS department about their recommendations to improve diversity and inclusion. I loved this quote, which will be the theme for this series of posts:

Creating a task force within the Education Committee for a full review of the computer science curriculum to ensure that classes are structured such that students starting out with less computing background can succeed, as well as reorienting the department teaching culture towards a growth mindset

We currently teach computer science in ways that “weed out all but a small prerogative subset of the US population” (CS for All). We need to teach so that “students starting out with less computing background can succeed” (UMdCS). We teach in ways that assume a fixed mindset — we presume that some students have a “Geek Gene” and there’s nothing much that teaching can do to change that. We know the opposite — teaching can trump genetics.

Even if you don’t care about race or believe that we have created structural inequities in CS education, I ask you to change because of the pandemic. Teaching on-line will likely hurt our students with the least preparation (see post here). We have to teach differently this year when students will have fewer resources, and we are literally inventing our classes anew in remote forms. If we don’t teach differently, we will increase the gap between those with more and those with less computing background.

While I am just learning about race, I have been studying for years how to teach computing to people with less computing background. This is what this series of posts is about. In the next three posts, I make concrete recommendations about how we should teach differently to reduce inequity. I hope that you are inspired by the desire to eliminate racial inequities, but if not, I trust that you will recognize the need to teach differently because of the pandemic.

First step: Stop using pseudocode on the AP CS Principles Exam

Here’s an example of a structural inequity that weeds out students with less computing background. The AP CS Principles exam (see website here) is meant to be agnostic about what programming language the students are taught, so the programming problems on the actual exam are given in a pseudocode — either text or block-based (randomly). There is no interpreter generally available for the pseudocode, so students learn one language (maybe Snap! or Scratch or MIT App Inventor) and answer questions in another one.

Advanced Placement (AP) classes are generally supposed to replicate the experience of introductory courses at College. AP CSP is supposed to map to a non-CS majors’ intro to computing course. How many of these teach in language X, but then ask students to take their final exam in language Y which they’ve never used?

Allison Elliott Tew’s dissertation (see link here) is one of the only studies I know where students completed a validated instrument both in a pseudocode and in whatever language they learned (Java, MATLAB, and Python in her study). She found that the students who scored the best on the pseudocode exam had the closest match in scores between the pseudocode exam and the intro language exam, averaging a difference of 2.31 answers out of the 27 questions on the exams. But the average difference increases dramatically in the lower quartiles. For the bottom two quartiles, the difference is 17% (4.8 questions out of 27) and 22% (6.2 questions out of 27). It’s not too difficult for students in the best-performing quartile to transfer their knowledge to the pseudocode, but it’s a significant challenge for the lowest-performing two quartiles. These results predict that giving the AP CSP exam in a pseudocode is a barrier that is easily handled by the most prepared students but is much more significant for the least prepared students.

I went to a bunch of meetings around the AP CSP exam when it was first being set up. At a meeting where the pseudocode plans were announced, I raised the issue of Allison’s results. The response from the College Board was that, while it was a concern, it was not likely going to be a significant problem for the average student. That’s true, and I accepted it at the time. But now, we’re aware of the structural inequities that we have erected that “weed out all but a small prerogative subset of the US population” (CS for All). It’s not acceptable that switching to a pseudocode dramatically increases the odds that students in the bottom half fail the AP CSP exam, when they might have passed if they were given the language that they learned.

Further, studies of block-based and text languages in the context of AP CSP support the argument that students overall do better in a block-based language (see this post here as an example). Every text-based problem decreases the odds that female, Black, and Hispanic students will pass the exam (using the race labels the students use to self-identify on the exam). If the same problem was in a block-based language, they would likely do better.

Using pseudocode on the AP CSP exam is like a tax. Everyone has to manage a bit more difficulty by mapping to a new language they’ve never used. But it’s a regressive tax. It’s much more easily handled by the most privileged and most prepared students.

AP CSP is an important program that is making computing education available to students who otherwise might never access a CS course. We should grow this program. But we should scrap the AP CSP exam in its current form. I understand that the College Board and the creators of the AP CSP exam were aiming, with the pseudocode, mixed-modality exam, to create freedom for schools and teachers to teach with whatever language and curriculum they wanted. However, we now know that that flexibility comes at a cost, and that cost is greater for students with less computing background, with less preparation, and who are female, Black, or Hispanic. This is structural inequity.

July 13, 2020 at 7:00 am 10 comments

Paradigm shifts in education and educational technology: Influencing the students here and now

Back on my last blog post referencing Morgan Ames’ book The Charisma Machine, Alan Kay said in a comment, “What we have here is a whole world view and a whole different world.” I’ve been thinking about that sentence a lot because it captures what I think is going on here. A Kuhnian paradigm shift is happening (and maybe has already happened) in research around education and educational technology from the world of Papert and Bruner to the world of learning sciences. I am going to take a pass at describing the change that I see happening in the field, but I encourage you also to read the International Society of the Learning Sciences (ISLS) presidential address from Victor Lee here, which describes the field with more authority and authenticity than I can.

I remember asking Janet Kolodner (first editor-in-chief of the Journal of the Learning Sciences), “Why? Why learning sciences? We have educational psychology and cognitive science and so many other education disciplines.” She said that learning scientists were tired of just knowing what should happen. They wanted to get out to influence education practice and understand why learning doesn’t happen. Cognitive scientists mostly (at the time) ignored affect and motivation. Educational psychologists most often worked in controlled laboratories or experimental classrooms. Learning scientists wanted to understand and influence what was really happening in educational contexts, both formal and informal. More, they were devoted to expanding access to high-quality education. Yes, learning scientists explored cutting edge technologies to see what was possible, but even more, we try to figure out contexts that enable or inhibit learning for real kids. Look at the titles of the Invited Speakers at ICLS 2020: Lost and found in dialogue: Embracing the promises of interdiscursivity and diminishing its risks, The Ed-Tech Imaginary, and Learning as an Act of Fugitivity. Words like “promises” and “imaginary” and “fugitivity” reflect a desire to change, and to respond when what we thought might be turns out to differ from reality. (Audrey Watters’ keynote is available as an essay here.)

David Feldon told me once that the field is misnamed. It’s much more “Learning humanities” than “Learning sciences.” Once you decide to study what’s going on in actual practice with actual students, you find that you’re mostly in studies with really small n. Contexts, teachers, and students vary wildly. Nobody that I know in learning sciences is trying to invent a general dynamic medium for thought, because it’s so hard to get anything actually adopted and used in an impactful manner. I see Jim Spohrer’s work in Service Science as being part of the same paradigm — how do you actually get services designed and implemented that work in practice?

This shift from the general to the specific, and from what could work to what does work is true in my research too. One of my recent NSF proposals is about working closely with a particular school district to figure out what is going to work there. What we know about Brookline or Brasil is almost irrelevant for this district. Another proposal is about inventing a dynamic medium for thought — but in a particular set of classes, in a task-specific form. I still would love to have a general dynamic medium for thought (as Alan suggests), but I believe we have to figure it out from the ground up. Over time, we will find specific notations that can work for specific tasks, and generalize as possible from there.

The majority of the literature that I draw on these days is about teachers: how they learn, why pre-service education has so little influence on actual teacher practice, and how to influence adoption. Teachers are a gateway for technology in the classroom. There are lots of technologies that could work with kids, but don’t work with teachers. In my work today, I draw on Bruner and Papert for their theoretical framings. I draw on Bruner’s laboratory-based work (e.g., his definition of scaffolding). I draw on Papert’s descriptions of what the computer offers learners, e.g., its protean nature. But I draw less on their implementation work. Bruner’s MACOS was a brilliant project that had a catastrophic result because they didn’t consider enough what would actually work in US schools. Papert created interesting interventions that didn’t become systemic or sustained. Ames is telling me what’s going wrong in actual implementations of OLPC and maybe some of why it went wrong. If I want things to be actually adopted, I need to avoid the mistakes that The Charisma Machine is describing.

David’s description of what happened in Brasil in a comment to that earlier blog post is fascinating and super-useful, but doesn’t decrease the value of Ames’ description in Paraguay. I don’t agree with all of her rationalizations of why things turned out as they did (e.g., I don’t find the “technically precocious boys” perspective compelling or having explanatory power), and there are very likely things she missed. But what she describes obviously did happen. Learning from the experiences she describes informs our design processes and iterative feedback loops as a way of improving outcomes.

Like any paradigm shift, it doesn’t mean that all the work that went before is wrong. The questions being asked in each paradigm are different. They start from different world views. Papert and Bruner both offer a vision of what we want, Logo and MACOS. Both ran up against the reality of school in the US, where Thorndike won and Dewey lost. Now, how do we help every student, in real school contexts?

Nathan Holbert and David Weintrop recently told me a great phrase that’s common in the constructionism community (variously attributed to Seymour Papert or Uri Wilensky): “Are you designing for Someday or are you designing for Monday?” Are you designing for a world that might be, or are you designing for things that can go in the classroom soon? Neither is wrong. I don’t think that they even need to be a dichotomy. In my task-specific programming work, I’m making things that can’t go in the classroom Monday, but could go in the classroom next year, which is still a lot closer than Someday. Even to be in the classroom next year, I have to start from where schools are now. There won’t be a Dewey-an revolution in schools over the next year. But maybe Someday there will.

July 6, 2020 at 7:00 am 10 comments
