Call for Special Issue on CT in Early Childhood: Guest Blog Post from Wang, Bers, and Lee

In my blog post on what I got wrong in the 2010s, I pointed to the many definitions of computational thinking (CT) that I had shared in this blog. I said that I hoped that I wouldn’t be offering any more, but I was probably wrong on that too.

Below you will find (yet another) definition of CT, which is pretty intriguing.


Early Childhood Research Quarterly
Call for Papers

Special Issue: Examining Computational Thinking in Early Childhood

Guest Editors 

X. Christine Wang, State University of New York at Buffalo, wangxc@gmail.com

Marina Bers, Tufts University, marina.bers@tufts.edu

Victor R. Lee, Stanford University, vrlee@stanford.edu 

Described as the new literacy of the 21st century, computational thinking (CT) is broadly defined as systematic analysis, exploration, and testing of solutions to open-ended and often complex problems based on the analytical process rooted in the discipline of computer science. Driven by the increasing demands for computing professionals, CT has been popularized as a key goal of computer science teaching and learning in K-12 schools. On the one hand, much new research is currently exploring the relationships between CT and coding, CT in everyday unplugged activities, and CT and cognitive and socio-emotional domains of knowledge. On the other hand, there is also heated debate about the validity and applicability of CT, whether CT refers to a new set of competencies, and what value CT has in schooling. Because of the complicated nature of these explorations and conversations, CT has drawn considerable attention in educational research and practice, including early childhood education in recent years (Bers, 2018; Jung & Won, 2018; Toh et al., 2016; Xia & Zhong, 2018).

To help advance this burgeoning area of research, this special issue seeks empirical and theoretical contributions about young children’s (ages 2-8) CT learning and teaching. We encourage researchers to explore, but not limit themselves to, one or more of the following topics:

(1) Critical examinations of definitions and/or conceptualizations of CT in early childhood

(2) Operationalizations of CT learning goals and practices in early childhood

(3) Developmentally appropriate approaches in promoting CT in early childhood

(4) Relationships between CT and other domains of learning and development

(5) Assessment of CT learning and development in early childhood

(6) Supports for early childhood educators who are bringing CT to young children

(7) Equity and inclusion issues related to CT learning and teaching

For this special issue, we are soliciting a wide range of manuscripts describing rigorous empirical studies, design studies, integrative reviews, theoretical perspectives, or evaluation studies. We welcome studies that employ diverse theoretical and methodological approaches.

Submission Details
We are inviting interested researchers to submit a short proposal prior to manuscript submission. The proposal should be no more than 500 words (excluding references, images, or figures) and must include the following information: (1) Title/Author(s), (2) Key Issues/Problems, (3) Methods/Processes, (4) Findings/Evidence-Based Claims, and (5) Relevance and Contribution to the Special Issue.

Please submit your proposal via email to the Guest Editors with the subject line “ECRQ: CT in Early Childhood”:  X. Christine Wang (wangxc@gmail.com), Marina Bers (marina.bers@tufts.edu), and Victor R. Lee (vrlee@stanford.edu).

The guest editors will provide timely feedback and select proposed papers based on their quality and suitability for this special issue. Selected authors will then be invited to submit a full manuscript.

All full manuscripts must be submitted via the EM system: https://www.editorialmanager.com/ecrq/default.aspx. After you log in and click on “Submit New Manuscript,”  please select “VSI: CT in Early Childhood” on the “Select Article Type” page and proceed accordingly. 

Invitation to submit a full paper will not be a guarantee of acceptance. All manuscripts will undergo the standard ECRQ double-blind peer review procedure. For further information please contact Managing Guest Editor X. Christine Wang (wangxc@gmail.com) or Special Content Editor Gary Resnick (sevenalaris@msn.com).

Deadlines
Proposal submission: July 15, 2021  
Invitation for manuscript submission: August 15, 2021
Manuscript Submission: December 15, 2021
 

May 24, 2021 at 7:00 am

Seeking Collaborators for a Study of Impostor Phenomenon in Computer Science: Guest Blog Post from Leo Porter

Impostor Phenomenon (IP)** is often described as high-achieving individuals experiencing feelings of intellectual phoniness.  Based on the research conducted in various fields with different populations over the past four decades, we know that IP causes problems for those who experience it, including being associated with anxiety and depression.  

In computer science, we often hear our colleagues and students talking about their struggles with IP.  There are panels on IP at Grace Hopper and other conferences aimed at helping members of our community cope with these feelings.  But how prevalent is it in CS?

An informal survey conducted by Blind asked participants to self-report their feelings of IP, and among the 10,000 software engineers who participated, 58% reported feelings of IP [5]. However, self-reporting isn’t necessarily an accurate way to measure IP. In a pilot study at UC San Diego, we used the Clance IP scale [1], a validated instrument that is used in the majority of studies to measure IP. After administering the Clance IP scale in upper-division and graduate CS courses, we found that 57% of participants met the diagnostic criteria for experiencing IP [7], quite similar to the earlier finding from the Blind survey. What was most concerning about our results was the difference by gender among the students: 52% of men met the diagnostic criteria whereas 71% of women did. That’s a huge (and statistically significant) difference!

But what does this mean? We can look at results from other studies and see that computer science seems to have higher rates of students who experience IP than other fields: health professionals (31%) [4], undergraduates studying education (28%) [3], undergraduates in business-related fields (39%) [8], and undergraduates from racially underrepresented groups studying educational psychology (48%) [2]. This suggests that CS may be an outlier, with our students struggling more with IP than students in other fields. However, a recent study among medical students [6] reported results similar to what we found in CS, suggesting computing might not be alone.

Before we begin asking questions of why CS (and perhaps also medicine) might be outliers, we need to conduct a replication study to verify (or refute) these initial findings from just a single institution.  To that end, we’re putting out a call for other researchers to help participate in a large-scale replication effort to answer these questions:  What is the rate of IP among students in computer science courses?  Does the rate of IP change as students move farther through the curriculum?  Are students from underrepresented groups in computer science more likely to experience IP than those from traditionally represented groups?

If you are willing to participate in this replication effort, please fill out this brief interest form:

https://forms.gle/MWYPFnmepWT9nMzNA

For those participating, we’ll ask that you administer the instrument in at least one course at your institution.  If you are interested, we’ll also invite you to engage in the data analysis and authoring of any related publications.  We’ll also help you obtain Human Subjects approval at your institution or leverage our approved protocol at UC San Diego.

** Impostor Phenomenon is the original term [1]; however, Impostor Syndrome and Impostor Phenomenon are commonly used interchangeably.

References

  1. Sabine M. Chrisman, W. A. Pieper, Pauline R. Clance, C. L. Holland, and Cheryl Glickauf-Hughes. 1995. Validation of the Clance Impostor Phenomenon Scale. Journal of Personality Assessment 65, 3 (1995), 456–467.
  2. Kevin Cokley, Leann Smith, Donte Bernard, Ashley Hurst, Stacey Jackson, Steven Stone, Olufunke Awosogba, Chastity Saucer, Marlon Bailey, Davia Roberts. 2017. Impostor feelings as a moderator and mediator of the relationship between perceived discrimination and mental health among racial/ethnic minority college students. Journal of Counseling Psychology 64, 2 (2017), 141–154.
  3. Joseph R. Ferrari. 2005. Impostor Tendencies And Academic Dishonesty: Do They Cheat Their Way To Success? Social Behavior and Personality: an international journal 33, 1 (2005), 11–18.
  4. Kris Henning, Sydney Ey, and Darlene Shaw. 1998. Perfectionism, the impostor phenomenon and psychological adjustment in medical, dental, nursing and pharmacy students. Medical Education 32, 5 (1998), 456–464.
  5. Kim. 2018. 58 Percent of Tech Workers Feel Like Impostors. https://blog.teamblind.com/index.php/2018/09/05/58-percent-of-tech-workers-feel-like-impostors
  6. Beth Levant, Jennifer A. Villwock, and Ann M. Manzardo. 2020. Impostorism in third-year medical students: an item analysis using the Clance impostor phenomenon scale. Perspectives on Medical Education (2020), 1–9.
  7. Adam Rosenstein, Aishma Raghu, and Leo Porter. 2020. Identifying the prevalence of the impostor phenomenon among computer science students. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education.
  8. Kenneth T. Wang, Marina S. Sheveleva, and Tatiana M. Permyakova. 2019. Imposter syndrome among Russian students: The link between perfectionism and psychological distress. Personality and Individual Differences 143 (2019), 1–6.

May 6, 2021 at 7:00 am

The Bigger Part of Computing Education is outside of Engineering Education

My Blog@CACM post this month is about the differences I’ve seen between computing education and engineering education (see link here). Engineering education has a goal of producing professional engineers. I describe in the post how ASEE is about the profession of engineering, and developing an engineering identity is a critical goal of engineering education. Producing software engineers is part of computing education, but only part. SIGCSE is about the learning and teaching of computing, and as computing educators, we teach students with diverse identities. The two fields overlap, but the part of computing education that is outside the intersection with engineering education is much bigger than the part inside.

Computing education for me is about helping people to understand computing (see the Call for Papers for the International Computing Education Research conference) — not just CS education at the undergraduate level. Preparing future software engineers is certainly part of computing education, but sometimes computing educators only see engineering education goals. Computing education has a bigger scope and range than engineering education. Here are three areas where we need to focus on the bigger part outside engineering.

(1) K-12 is for everyone. Computing education in elementary and secondary school should be about more than producing software professionals. There are certainly CS teachers who disagree with me. An example is Scott Portnoff’s critique of CS curricula that do not adequately prepare students for the AP CS A exam and the CS major. I agree that we should offer CS courses at secondary school that give students adequate preparation for post-secondary CS education, if students want to go on to a CS major and become a computing professional. But K-12 has to serve everyone, and the most important goals for K-12 CS education are goals for what everyone should learn about computing.

I am personally much more interested in K-12 teachers using computing to teach everything else better. Computational science and mathematics are powerful for helping scientists and mathematicians gain insight. We should use computing in the same way to advance student learning in STEM, social studies, and other disciplines — without turning those other classes  into CS classes. This is the difference that Shuchi Grover is talking about with her two kinds of CT: learning about CS vs using computing to learn other things.

(2) Courses for non-CS Majors. I’m co-chairing a task force on computing education for the University of Michigan’s College of Literature, Sciences, & Arts (LSA) (see a blog post on this effort and our website with our NEW preliminary report). I’m learning about the ways that LSA faculty use computing and how they want their students to learn about and use computing. Their purposes are so different from what we teach in classes about computer science or data science. Sure, computational scientists analyze data like data scientists, but they also create models that turn their theories into simulations (which can then generate data). Computational artists use computing to tell engaging stories in new ways. Computational journalists investigate and discover truth with computing. LSA faculty care a great deal about their students critiquing how our computing systems and infrastructure may be unjust and inequitable. (Interesting note: The word “justice” does not appear in the new Computing Curricula 2020 report, and the word “equity” appears only once.)

There are computer scientists who tell me that there is only one computer science for all students. Their argument is that better engineering practices help everyone — if those computational scientists, journalists, and artists just programmed like software engineers, the world would be a better place. Their code would be more robust, more secure, and more extensible. That is likely true, but that perspective misunderstands the role of code in doing science and making art. You don’t critique the poet for not writing like a journalist or a novelist. These are different activities with different goals.

We should teach non-CS majors with courses that serve their needs, speak to their identities, and support their values. We should not require all artists and scientists to think, act, and program like engineers just to take computing classes.

A CS educator in the Bay Area once tried to convince me that the most important purpose for courses for non-CS majors was to identify the potential for being great programmers. He claimed that there are programmers who are two orders of magnitude better than their peers, and identifying them is the most important thing we can do to support and advance the software companies on which our world economy depends. He argued that we should teach non-CS majors in order to identify and promote future engineers, not for their own purposes. I see his argument, but I do not agree that scientists, journalists, and artists are less important than engineers. As I consider this pandemic, I think about the role that computing has played in medicine, logistics, and media. Of course, we have relied heavily on software engineering, but I don’t believe that it’s more important than all the other roles that computing has played.

3. Supporting diverse identities. There is a disconnect between efforts to broaden participation in computing and framing CS classes as engineering education. As I mentioned in my Blog@CACM post, I taught my first EER course this last semester and read a lot of EER papers. A big focus in engineering education is developing an engineering identity, i.e., helping students to see themselves as members of the engineering community of practice and as future professional engineers.

One of my favorite papers that we read this semester was “Feminist Theory in Three Engineering Education Journals: 1995–2008” by Beddoes and Borrego. They define different branches of feminism. “Liberal feminism” is the goal for women to be treated the same as men, to get access to the same jobs at the same pay. “Standpoint feminism” points out that “liberal feminism” is too much about fitting women into the jobs and cultures of men, as opposed to asking how things would be different if created from a feminist standpoint.

The professional identity of software engineering is male and White. That’s true from the demographics of who is in the Tech industry, but it’s also true from a historical perspective on the systemic bias in computing. Computing has become dominated by men, with many studies and books describing how women were forced out (see for example The Computer Boys Take Over and Programmed Inequality). Our tools privilege one part of the world. Every one of our mainstream programming languages is built on English keywords. That’s a barrier for 85% of the people on Earth. (Related point: I recommend Manuel Pérez Quiñones’ TED talk “Why I Want My Voice Assistant to Speak Spanglish,” in which he suggests that the homogeneous background of American software engineers leads to few bilingual user interfaces — surprising when 60% of the human race is bilingual.)

There’s the disconnect. We want students in computing with diverse perspectives and identities. But engineering education is about developing an identity as a future professional engineer. Professional software engineering is male and White. How do we prepare diverse students to be future software engineers when that professional identity conflicts with their identities? We should teach computing, even for CS majors, in ways that go beyond the engineering education goal of developing a professional engineering identity.

We might argue that we want everyone to have the opportunity to participate in CS, but that’s taking the “liberal” perspective. Broadening participation should not be about fitting everyone into the same identity. It’s not enough to say that everyone has the chance to learn the programming languages that are based in English, that are grounded in Western epistemologies, and where the contributions of women have been marginalized. We need to find ways to accept and support the unique identities of diverse people. 

One way to support a “standpoint” perspective on computing education might be to support activity over identity in our CS curriculum. At Georgia Tech, the undergraduate computer science degree is based on Threads (see website). There are eight threads in all, including Intelligence, People, Media, Devices, and Theory. A BS in CS at Georgia Tech is any two threads, so there are 28 paths to a degree. This allows students to define their professional identity in terms of what they are going to DO with computing. “I’m studying People and Devices” is something a student might say if they want to create consumer computational devices like Echo or Roomba. The Threads curriculum allows students to make choices about professional identity, in terms of how they want to contribute to society.

Of course, some of our students want to become software engineers at a FAANG company. That’s great, and we should support them and prepare them for those roles. But we should not require those identities. Computing education is about more than producing software engineers who have the traditional engineering identity.

The Bigger Part of Computing Education. I claimed at the start of this post that “computing education that is outside the intersection with engineering education is much bigger than the part inside.” All the studies I have seen say that’s true. While CS undergraduate enrollment has been exploding, the number of end-user programmers is likely an order of magnitude larger than the number of professional software developers. K-12 is about 50 million students in the United States, and computing education is available to most of them. The number of computing education students who are NOT seeking an engineering identity or profession is much larger than those who are. That’s the more-than-engineering challenge for computing education.


My thanks to Leo Porter, Cynthia Lee, Adrienne Decker, Briana Morrison, Ben Shapiro, Bahare Naimipour, Tamara Nelson-Fromm, and Amber Solomon who gave me comments on earlier drafts of this post.

April 26, 2021 at 7:00 am

From Guided Exploration to Possible Adoption: Patterns of Pre-Service Social Studies Teacher Engagement with Programming and Non-Programming Based Learning Technology Tools

In October, Bahare Naimipour presented our paper ”From Guided Exploration to Possible Adoption: Patterns of Pre-Service Social Studies Teacher Engagement with Programming and Non-Programming Based Learning Technology Tools” (Naimipour, Guzdial, Shreiner, and Spencer, 2021) at the Society for Information Technology and Teacher Education (SITE) 2021 conference. (Draft of the paper is available here. Full paywall version available here.) This paper is the first one about our work with social studies teachers since we received NSF funding. It was also a report on our last face-to-face participatory design session (in March 2020) before the pandemic lockdown. And most importantly, it was our first session with our data visualization tool DV4L in the mix.

I have blogged about our participatory design sessions before (see Bahare’s FIE paper from last Fall). Basically, we set up a group of social studies teachers in pairs, then ask them to try out various visualization tools with activity sheets that we have created to scaffold their process. The goal is to get everyone to make a visualization successfully in less than 10 minutes, and leave time to explore or try one (or both) of the other tools. There is time for the pairs to persuade each other to (a) come try the cool tool they found or (b) avoid this tool because it’s too hard or not useful. The tools in this set were Vega-Lite (a declarative programming tool which our teachers have found complex but useful in the past), CODAP (a drag-and-drop visualization tool designed for middle and high school students), and our DV4L (a purpose-built visualization tool that makes code visible but not required).

The teachers saw value in having students build visualizations themselves (e.g., “I think making your own data visualization allows for a deeper connection and understanding of the data.”) As we hoped, they teased out what they liked and disliked about the tools. Most of the teachers preferred DV4L over the other two tools, because of its simplicity. Critically, they felt that they were engaging with the inquiry and not the tool: “(With DV4L) I found myself asking questions connected to the data itself, rather than asking questions in order to figure out how to work the visual.”

That teachers found DV4L easier than Vega-Lite isn’t really surprising. We were pleased that teachers weren’t disappointed with DV4L’s more limited visualization capabilities. What was really surprising was that our teachers preferred DV4L to CODAP, and this has happened in successive in-service teacher participatory design sessions during the pandemic. CODAP is drag-and-drop, creates high-quality visualizations, and was designed explicitly for middle and high school students. A teacher in one of our in-service design sessions explained to me why she preferred DV4L to CODAP. “CODAP is really powerful, but it would take me at least three hours to get my students comfortable with it. Is it worth it?” Just how much visualization is any social studies teacher going to use? Again, too much focus on the tool gets in the way of the social studies inquiry.

Now you might be asking, “But Mark, do the students learn history with DV4L? And do they see and learn about computing?” Great questions — we’re not there yet. Here’s one of our big questions, after running several more participatory design sessions with teachers since the lockdown: Why aren’t teachers adopting DV4L in their classrooms? They tell us that they really like it. But nobody’s adopted yet. How do we go from “ooh, great tool!” to “and here’s my lesson plan, and we’ll use it next week”? That’s an active area of research for all of us right now.

April 19, 2021 at 7:00 am

Embodiment in CS Learning: How Space, Metaphor, Gesture, and Sketching Support Student Learning: Amber Solomon’s defense

Amber Solomon defends her dissertation today, co-advised by Betsy DiSalvo and me. I have learned a lot from Amber and her work. She came into her PhD studies with a particular perspective — a question about how we teach CS. She knew about the studies showing that spatial ability is correlated with success in computing. Why is that? Is it because there is something inherently spatial about computing? Or maybe because we are physical beings and come to understand everything in terms of our spatial experiences? Or maybe it’s because of how we teach computing?

That last one is concerning. Computing education is new. We haven’t spent enough time checking whether what we are doing is right for everyone — or if what we’re doing creates barriers for some students. In particular, she’s concerned about how we teach and learn with embodiment, i.e., references to space and our physical presence, in language, gesture, and sketching. In general, we don’t design our gestures and metaphors in CS education, maybe in part because Dijkstra warned us not to. That’s a problem because gesture has a cultural and social component, and we may inadvertently be teaching in a way that says to some students, “You don’t belong. We don’t use your gestures. We use ours.”

Amber’s first project was her study of our augmented-reality design studio for media computation where students’ work was displayed on the walls (see blog post here). One of the surprising outcomes in this project is that it influenced the climate in the classroom — students were more willing to seek help when everyone’s work was on display. The problem of a defensive climate in the classroom is longstanding in CS. Amber showed that changing the environment where we teach can change climate.

Amber, with Miranda Parker, led our SPARCS study, exploring why socioeconomic status (SES) predicts CS performance. In general, rich kids do better in CS than poor kids. Why? They compared two different models for why SES predicted performance on a standardized CS test. One model suggested that higher SES led to greater access to CS education. Rich kids got to take CS classes, camps, and robotics clubs while poorer kids did not. The second model suggested something more subtle — that higher SES predicted greater spatial ability, which predicted better performance. That spatial ability model was a better fit to the data. Now consider Amber’s original hypothesis, that spatial ability predicts CS performance because of the way that we teach CS. The SPARCS study raises the possibility that the whole CS Ed system is rigged in favor of higher SES kids in a deep way. Just teaching more classes to lower SES kids won’t make a difference, if those classes are still taught in a way that requires higher spatial ability.

Amber’s dissertation asks two big questions: (1) How do teachers use embodiment when they teach CS? (2) How do students use embodiment when they learn CS? Part of the answer to the first question appeared at ICLS last year. I talked about helping with Amber’s coding of student videos in my blog post about Dijkstra. Her summary is below.

I’m not going to summarize her whole dissertation here. Here is one example from her defense. She shows a video clip of a teacher explaining a function call. He points to a function definition and says, “Now we come here. I am five. N is five…Do you see what I’m doing?” Read that last sentence imagining that you’ve not had years of CS or mathematics teachers modeling this kind of language. Who are “we” and what does it mean to “come here”? What does he mean that he’s five? Now N is five? Is he N? When he’s saying ‘what I’m doing,’ what is he referring to? Playing the computer, or writing the program, or drawing on the slide? Now imagine hearing that when you have a visual disability and don’t know that he’s pointing at a function definition. Amber supports a strong claim in her dissertation — we have not designed the language and metaphors of CS education. There’s no way that we CS teachers plan to say things which are that confusing.
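To make this concrete, here is a minimal, hypothetical Java sketch (my own construction, not the code from Amber’s video) of the kind of function definition and call site that spatial language like “we come here” and “I am five” might be tracing:

    // A hypothetical snippet of the kind of code a teacher might trace aloud.
    public class Trace {
        // "Now we come here" -- the teacher jumps from the call site to this definition.
        // "I am five. N is five." -- the argument 5 has become the parameter n.
        static int square(int n) {
            return n * n;   // "what I'm doing" -- evaluating the body with n = 5
        }

        public static void main(String[] args) {
            int answer = square(5);       // the call site where the trace begins
            System.out.println(answer);   // prints 25
        }
    }

Even in this tiny example, the words “come,” “here,” and “I” do a lot of unstated work that the code itself never names.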

Throughout her PhD career, Amber has written about her experience of being a Black woman in CS. She taught me what intersectionality is about. I am grateful that she has been both a CS education researcher and activist during her PhD. I am grateful to have had the chance to work with her.

Title: Embodiment in Computer Science Learning: How Space, Metaphor, Gesture, and Sketching Support Student Learning

Amber Solomon

Human-Centered Computing Ph.D. Candidate

School of Interactive Computing

College of Computing

Georgia Institute of Technology

Summary:

Recently, correlational studies have found that psychometrically assessed spatial skills may be influential in learning computer science (CS). Correlation does not necessarily mean causation; these correlations could be due to several reasons unrelated to spatial skills. Nonetheless, the results are intriguing when considering how students learn to program and what supports their learning. However, it’s hard to explain these results. There is not an obvious match between the logic for computer programming and the logic for thinking spatially. CS is not imagistic or visual in the same way as other STEM disciplines since students can’t see bits or loops. Spatial abilities and STEM performance are highly correlated, but that makes sense because STEM is a highly visual space. In this thesis, I used qualitative methods to document how space influences and appears in CS learning. My work is naturalistic and inductive, as little is known about how space influences and appears in CS learning. I draw on constructivist, situative, and distributed learning theories to frame my investigation of space in CS learning. I investigated CS learning through two avenues. The first is as a sense-making, problem-solving activity, and the second is as a meaning-making and social process between teachers and students. In some ways, I was inspired to understand what was actually happening in these classrooms and how students are actually learning and what supports that learning. While looking for space, I discovered the surprising role embodiment and metaphor played while students make sense of computation and teachers express computational ideas. The implication is that people make meaning from their body-based, lived experiences and not just through their minds, even in a discipline such as computing, which is virtual in nature. For example, teachers use the following spatial language when describing a code trace: “then, it goes up here before going back down to the if-statement.” The code is not actually going anywhere, but metaphor and embodiment are used to explain the abstract concept. This dissertation makes three main contributions to computing education research. First, I conducted some of the first studies on embodiment and space in CS learning. Second, I present a conceptual framework for the kinds of embodiment in CS learning. Lastly, I present evidence on the importance of metaphor for learning CS.

Date: Monday, April 12th, 2021

Time: 2:00pm – 5:00pm (EDT)

Location: Bluejeans Link

Meeting URL

https://bluejeans.com/182730963?src=joininfo

Committee:

  • Dr. Betsy DiSalvo (Advisor, School of Interactive Computing, Georgia Institute of Technology)
  • Dr. Mark Guzdial (Advisor, Electrical Engineering and Computer Science, University of Michigan)
  • Dr. Ashok Goel (School of Interactive Computing, Georgia Institute of Technology)
  • Dr. Wendy Newstetter (School of Interactive Computing, Georgia Institute of Technology)
  • Dr. Ben Shapiro (College of Education and Human Development, Georgia State University)
  • Dr. David Uttal (School of Education and Social Policy, Northwestern University)

April 12, 2021 at 9:00 am

Become a Better CS Teacher by Seeing Differently

My Blog@CACM post this month is How I evaluate College Computer Science Teaching. I get a lot of opportunities to read teaching statements and other parts of an academic’s teaching record. I tend to devalue quantitative student evaluations of teaching — they’re biased, and students don’t know what serves them best. What I most value are reports of the methods teachers use when they teach. Teachers who seek out and use the best available methods are most likely the best teachers. That is what I look for when I have to review College CS teaching records.

On Twitter, people are most concerned with my comments about office hours. Computer science homework assignments should not be written expecting or requiring everyone in the class to come to office hours in order to complete the assignment. That’s an instructional design problem. If there are questions that are coming up often in office hours, then the teacher should fix the assignment, or add to lecture, or make announcements with the clarification. Guided instruction beats discovery learning, and inquiry learning is improved with instruction. There is no advantage to having everyone in the class discover that they need a certain piece of information or question answered.

My personal experience likely biases me here. I went to Wayne State University in Detroit for undergraduate, and I lived in a northern suburb, five miles up from Eight Mile Road. I drove 30-45 minutes a day each way. (I took the bus sometimes, if the additional time cost was balanced out by the advantage of reading time.) I worked part-time, and usually had two part-time jobs. I don’t remember ever going to office hours. I had no time for office hours. I often did my programming assignments on nights and weekends, when there were no office hours scheduled. If an assignment would have required me to go to office hours, I likely would have failed the assignment. That was a long time ago (early 1980’s) — I was first generation, but not underprivileged. Today, as Manuel pointed out (quoted in this earlier blog post), time constraints (from family and work) are a significant factor for some of our students.

Teachers who require attendance at office hours are not seeing the other demands on their students’ lives. Joe Feldman argues that we ought to be teaching for the non-traditional student, the ones who have family and work demands. If we want diverse students in our classes, we have to learn to teach for the students whose experiences we don’t know and whose time costs we don’t see.

CS teachers get better at what we see

I’m teaching an Engineering Education Research class this semester on “Theoretical and Conceptual Frameworks for Engineering Education Research.” We just read the fabulous chapter in How People Learn on How Experts differ from Novices. One of the themes is on how experts don’t necessarily make good teachers and about the specialized knowledge of teachers (like pedagogical content knowledge). I started searching for papers that did particularly insightful analyses of CS teacher knowledge, and revisited the terrific work of Neil Brown and Amjad Altadmri on “Novice Java Programming Mistakes: Large-Scale Data vs. Educator Beliefs” (see paper here).

Neil and Amjad analyze the massive Blackbox database of keystroke-level data from thousands of students learning Java. They identify the most common mistakes that students make in Java. My favorite analyses in the paper are where they rank these common mistakes by time to fix. An error with curly brackets is very common, but is also very easy to fix. Errors that can take much longer (or might stymie a student completely) include errors with logical operators (ANDs and ORs), void vs non-void return values, and typing issues (e.g., using == on strings vs .equals).
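To make those categories concrete, here is a small illustrative Java sketch (my own, not code drawn from the Blackbox data) showing the longer-to-fix kinds of mistakes, with the buggy version a student might write in a comment and a corrected version below it:

    // Illustrative examples of the harder-to-fix Java mistakes described above.
    public class CommonMistakes {
        // Logical operators: a range check written with || is true for every int.
        static boolean inRange(int x) {
            // Buggy version: return x > 0 || x < 10;
            return x > 0 && x < 10;
        }

        // void vs non-void: printSum returns nothing, so its "result" cannot be used.
        static void printSum(int a, int b) {
            System.out.println(a + b);
            // Buggy version: int s = printSum(1, 2);   // compile error
        }

        // Typing issues: == compares string references, .equals compares contents.
        static boolean sameName(String a, String b) {
            // Buggy version: return a == b;   // can be false even when the text matches
            return a.equals(b);
        }

        public static void main(String[] args) {
            System.out.println(inRange(5));                            // true
            printSum(1, 2);                                            // 3
            System.out.println(sameName("Ada", new String("Ada")));    // true
        }
    }

A curly-bracket error, by contrast, is usually flagged on the very next compile, which is part of why it is common but quick to fix.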

The more controversial part of their analysis is when they ask CS teachers what students get wrong. Teachers’ predictions of the most common errors are not accurate. They’re not accurate when considered in aggregate (e.g., which errors did more teachers vote for) nor when considering the years of experience of a teacher.

Neil and Amjad contrast their findings with work by Phil Sadler and colleagues showing that teacher efficacy is related to their ability to predict student errors (see blog post here).

If one assumes that educator experience must make a difference to educator efficacy, then this would imply that ranking student mistakes is, therefore, unrelated to educator efficacy. However, work from Sadler et al. 2013 in physics found that “a teacher’s ability to identify students’ most common wrong answer on multiple-choice items . . . is an additional measure of science teacher competence.” Although picking answers to a multiple-choice question is not exactly the same as programming mistakes, there is a conflict here—either the Sadler et al. result does not transfer and ranking common student mistakes is not a measure of programming teacher competence, or experience has no effect on teacher competence. The first option seems more likely. (Emphasis added.)

I don’t see a conflict in that sentence. I believe both options are true, with some additional detail. Ranking common student compiler mistakes is not a measure of programming teacher competence. And experience has no effect on teacher competence on things they don’t see or practice.

Expertise is developed from deliberate practice. We get better at the things we work at. CS teachers certainly get better (become more competent) at teaching. Why would that have anything to do with knowing what compiler errors Java students are getting? Teachers rarely see what compiler errors their students are getting, especially in higher education with our enormous classes.

When I taught Media Computation, I thought I became pretty good at knowing what errors students got in Python. I worked side-by-side with students many times over many years as they worked on their Python programs. But that’s still a biased sample. I had 200-300 students a semester. I might have worked with maybe 10% of those students. I did not have any visibility on what most students were getting wrong in Python. I probably would have failed a similar test on predicting the most common errors in Python based on my personal experience. I’m sure I’d do much better when I rely on studies of students programming in Python (like the study of common errors when students write methods in Python) — research studies let me see differently.

Here at the University of Michigan, I mostly teach a user interface software class on Web front-end programming in JavaScript. I am quite confident that I do NOT know what JavaScript errors my students get. I have 260-360 students a semester. Few come to office hours with JavaScript errors. I rarely see anybody’s code.

I do see exams and quizzes. I know that my students struggle with understanding the Observer Design pattern and MVC. I know that they often misunderstand the Universal Design Principles. I know that CSS and dealing with JavaScript asynchronous processing are hard because that’s where I most often get regrade requests. There I’ll find that there is some unexpected way to get a given effect, and I often have to give points back because their approach works too. I get better at teaching these things every semester.

CS teachers can be expected to become more competent at what they see and focus on. Student compiler errors are rarely what they see. They may see more conceptual or design issues, so that’s where we would expect to see increased teacher competence. To develop teacher competence beyond what we see, we have to rely on research studies that go beyond personal experience.

CS teachers need to get better at teaching those we don’t see

The same principle applies to why we don’t improve the diversity of our CS classes. CS teachers don’t see the students who aren’t there. How do you figure out how to teach in a way that recruits and retains women and students from Black, Latino/Latina, and indigenous groups if they’re not in your classes? We need to rely on research studies, using others’ eyes and others’ experiences.

Our CS classes are huge. It’s hard to see that we’re keeping students out and that we’re sending a message that students “don’t belong,” when all we see are huge numbers. And when we have these huge classes, we want the majority of students to succeed. We teach to the average, with maybe individual teacher preference for the better students. We rarely teach explicitly to empower and advantage the marginalized students. They are invisible in the sea of (mostly male, mostly white or Asian) faces.

I have had the opportunity over the last few months to look at several CS departments’ diversity data. What’s most discouraging is that the problem is rarely recruitment. The problem is retention. There were more diverse students in the first classes or in the enrolled population — but they withdrew, failed, or dropped out. They were barely visible to the CS teachers in the sea of huge classes, and then they became completely invisible. We didn’t teach in a way that kept these students in our classes.

Our challenge is to teach for those who we don’t easily see. We have to become more competent at teaching to recruit those who aren’t there and retain those students who are lost in our large numbers. We easily become more competent at teaching for the students we see. We need to become more competent at teaching for diversity. We do that by relying on research and better teaching methods, like those I talk about in my Blog@CACM post.

February 15, 2021 at 7:00 am

National Academies Report on authenticity to promote computing interests and competencies

The National Academies has now released the report that I’ve been part of developing for the last 18 months or so: “Cultivating Interest and Competencies in Computing: Authentic Experiences and Design Factors.” The report is available here, and you can read it online for free here.

The starting question for the report is, “What’s the role of authentic experiences in developing students’ interests and abilities in computing?” The starting place is a simple observation — lots of current software engineers did things like take apart their toasters as kids, or participate in open-source programming projects as novices. I hear that it’s pretty common in technical interviews to ask students about their GitHub repositories, assuming that that’s indicative of their potential abilities as engineers.

There’s a survivor bias in the observation about toasters and open-source projects. You’re only seeing the people who made it to the software engineering jobs. You’re not seeing the people who were turned off by those activities. You’re not seeing the people who couldn’t even get into open-source projects. Is there a causal relationship? If a student engages in “authentic experiences,” does it lead to greater interest and skill development?

You can skip all the way to Chapter 8 for the findings: We don’t know. There are not enough careful studies exploring the question to establish a causal relationship. But that’s not the most important part of the report.

The key questions of the report really are: “What is an authentic learning experience? What prevents students from getting them?” We came up with two definitions:

  • There’s professional authenticity which is what the starting question assumes — the activity has something to do with professional practice in the field.
  • There’s personal authenticity which is where the activity is meaningful and interesting to the student.

These don’t have to be in opposition, but they often are. The Tech industry and open-source development is overwhelmingly male and white or Asian. Learning activities that are culturally relevant may be interesting and meaningful to students, but may not obviously connect to professional practice. Activities that are grounded in current practice may not be interesting or meaningful to students, especially if the students see themselves as outsiders and not belonging to the culture of software development (open source or industry). Formal educational systems place a premium on professional, vocational practice, and informal education systems need personal authenticity to keep drawing students in.

The report does a good job covering the research — what we know (and what we don’t), how the issues vary in informal and formal education, and what we can recommend about designing for authenticity (both kinds, without opposition) in learning experiences.

If you ever get the chance to participate in a National Academies consensus report, I highly recommend the experience. You’re producing something for the community, so the amount of review and rewriting is significant — more than in any other kind of writing I’ve ever done. It’s worth it. You learn so much! It’s the National Academies, and they gather pretty amazing committees. If you ever get the chance to grab a coffee or beer with any of the participants on the committee, external or staff, take that chance! (I’m not sure I’d recommend chairing or directing one of these committees — the amount of work that Barbara and Amy did was astounding.) Every single one of these folks has amazing insights and experiences. I’m grateful for the opportunity to hang out with them (even when it went all on-line), write with them, and learn from them.

February 8, 2021 at 7:00 am

ICER 2021 Call for Papers out with Changes for ICER Authors

The International Computing Education Research (ICER) Conference Call for papers is now out — see the conference website here. Abstracts are due 19 March, and full papers are due 26 March.

There are big changes in the author experience of ICER 2021 — see a blog post describing them here. Here are two of them:

  • ICER is going to use the new ACM TAPS publication process, and the paper size limits are now based on word count instead of number of pages. I hope that this relieves authors from some of the tedium of the last-minute adjusting of figure sizes and tweaking of text/fonts to just barely get everything squeezed into the given page limits.
  • There will now be conditional accepts. It’s heartbreaking when there’s a paper that’s so good, but it’s got one small and easily fixable fatal flaw (something that the reviewers and program chairs feel is not publishable as-is). In a conference setting, when the only options are accept or reject, there’s not much to do but reject. Now, there will be an option to conditionally accept a paper with a small review process after revision, to make sure that the small flaw is fixed.

Please do submit to ICER — let’s get lots of great CS Ed research out into the community discussion!

February 1, 2021 at 7:00 am

Broadening Participation in Computing is Different in Every State: Michigan as an Example

In December, Rick Adrion, Sarah T. Dunton, Barbara Ericson, Renee Fall, Carol Fletcher, and I published an essay in Communications of the ACM, “U.S. States Must Broaden Participation While Expanding Access to Computer Science Education.” (See link here, and pre-print available at the bottom of this post.). Rick, Renee, Barb, and I were the founders of the ECEP Alliance which helps states and US territories with their computing education policy and practices. Carol is now the PI on ECEP (which feels so great to say — ECEP continues past the founders, with excellent leadership) — the whole leadership team is here. Sarah likely knows more about state-level computing education policy than anyone else in the US. She has worked with individual teams in individual states for years. Our argument is that broadening participation and expanding access are not the same thing. Simply making CS classes available doesn’t get students into those classes. We tell the story of two states (Nevada and Rhode Island) and how CS Ed is growing there.

Barbara and I now live in Michigan. The CSTA, Code.org, and ECEP report 2020 State of Computer Science Education: Illuminating Disparities (see link here) has a sub-report for every US state. Michigan is on page 56. The press release for the 2020 report says that 47% of US high schools now offer CS. Michigan is at 37%. Michigan is the only state (as far as I can tell) that used to have CS teacher certification and pre-service CS but got rid of it (story here).

Also in December, the Michigan Department of Education (MDE) released the first “State of Computer Science in Michigan Report” (see link here). The data collection and writing on the report was led by Aman Yadav and Sarah Gretter of Michigan State with Cheryl Wilson of MDE. A quote from page 11: “The trend of declining course offerings continues at the high school level where even fewer high schools offer CS courses. Code.org course offering data suggests that only 23.7% of rural high schools, 28% of town high schools, 29.1% of suburban high schools, and 21.7% of city high schools offer CS.” (The numbers on the website are lower than these — Aman and Cheryl kindly sent me an early peek at a revision that they’re posting soon.)

MDE’s numbers are a lot lower than the 37% in the Code.org/CSTA/ECEP report. What’s going on here? My best guess is that CS is rare enough in Michigan that not everybody who fills out a survey knows what the national CS education movement means by “computer science.” We had this a lot in the early days of “Georgia Computes,” too. A principal would say that they teach CS, when they might mean Microsoft Office or Web design (with no HTML, CSS, or JavaScript).

In any case, Michigan is clearly below national averages on providing CS education to its citizens and creating sustainable CS education policy. How do we help Michigan progress in providing computing education to its citizens?

I don’t know. Aman, Barb, and I have had conversations about the potential for growing CS Ed in Michigan. We don’t have the same leverage points in Michigan that we have had in other ECEP states. Michigan is a local control state. Individual local education agencies (LEA’s — sometimes a school district, sometimes a county-wide collection of districts) can make up their own rules on important issues like CS teacher certification. In Georgia and South Carolina, the state government has a lot of control in education, so there was a point of leverage. California is also a local control state, but the California University systems are important to all high schools, so that’s a point of influence. Massachusetts is again a local control state, but the Tech industry is very important to the Boston area, and that’s important to the state. Tech isn’t important in the same way in Michigan. If you read the MDE report, there’s a lot of ambivalence about CS in the state. Administrators aren’t that excited about teaching CS. They don’t see CS education as important for their students. Michigan is a big state, where agriculture and tourism are two of the most significant industries. Manufacturing is a big deal, but manufacturing workers don’t necessarily need to know much about computing. CS isn’t an obvious benefit to much of Michigan.

Aman’s strategy is to grow CS education in the state slowly, to develop pockets of value for CS and success in teaching CS. We have to plant seeds and grow to a critical mass, which seems like the right approach to me. He has projects where he is helping develop teachers and relevant curriculum for CS education in specific counties. He works closely with the MDE. Sarah is involved with Apple’s Developer Academy opening in Detroit (see story here). Michigan does have a powerful and large teachers’ group supporting educational technology, MACUL (Michigan Association for Computer Users in Learning, see website), which could be a significant player in growing CS education in the state.

The important point here is that, in the United States, growing CS education is a state-by-state challenge. Each state has its own story and issues.

Pre-print of CACM BPC article

January 21, 2021 at 7:00 am

Promote diversity by teaching to many goals for computing

My Blog@CACM post for this month is about the working definitions of computing that we are developing in a task force at the University of Michigan (see post here). We are charged with identifying the computing education needs for undergraduates in the College of Literature, Sciences, and the Arts (LS&A). My post describes three different goals for computing education, based on what LS&A faculty do with computing and what they want their students to know.

  • Computing for Discovery
  • Computing for Expression
  • Critical Computing

In my post, I described how these are different, and about the challenges of meeting all of these educational needs. The biggest challenge I wonder about is the organizational one. Whose job is it to teach to each of these goals?

In this post, I want to argue from a different direction. All of these have a CS component. These aren’t typically priorities in many CS departments. To have more diversity in computer science, we ought to make them a priority.

There’s CS in All of These

Each of the three LS&A themes represents a significant CS research thrust. We distilled them from discussions with faculty in Literature, Sciences, & the Arts, but students could be interested in these themes and seek a computer science degree and career. I’d expect that these themes are more common among students who enter computing from liberal arts and sciences than from engineering.

Computer scientists often create infrastructure and theory for “Computing for Discovery,” from NeurIPS to ACM SIGSIM. At Georgia Tech, there is a School of Computational Science and Engineering. One of my colleagues in that school was Richard Fujimoto, who studied how to run discrete event simulations in parallel and distributed systems. He did his research so that others (scientists or engineers) could do theirs.

Computer scientists invent and create tools to make “Computing for Expression” possible, presented in places like ACM SIGGRAPH and CHI. Alanson Sample joined U-M CSE the same time I did. He was formerly at Disney Research in Pittsburgh, where some of his team worked on the new Pandora exhibits at Disney World. The animatronic Na’vi were difficult for the animators to control, since the robot representations of the aliens were not meant to be human-like. Alanson’s colleagues created new kinds of design tools to support translating facial animations into robotic actuation for the Na’vi. I love that as an example of computer science enabling a new kind of expression.

Technology Review recently published an accessible summary of the paper that led to Timnit Gebru’s being fired from Google (see link here). I knew about Timnit’s work as a scholar in “Critical Computing.” The TR piece did a terrific job explaining the deep CS ideas in their paper — like the potential fallacies of the language models used by Google and the enormous energy costs of running them. Computer science plays an important part in making thoughtful critiques of existing computing systems and infrastructures.

Supporting Diverse Goals for Diverse Students

Imagine that you are a student who has always dreamed of working at Pixar and building tools for animators. Or you are a student who is concerned about creating sustainable IT infrastructure for your community. You decide to pursue a computer science degree, and now you’re in classes about AVL trees or learning the differences between cache coherence and memory consistency. You might very reasonably drop out to pursue a degree that more clearly helps you achieve your goals. The problem is that those are computer science issues. It’s perfectly reasonable to pursue computing education for those goals, but those might not be the goals that most CS departments at universities support.

This does happen exactly as I described. Colleen Lewis and her colleagues showed us how it most often happens with students who are from groups under-represented in computer science (see blog about the paper here). These students come to computer science with their goals, and if they don’t see how to achieve their goals with the classes they’re given, they lose interest and drop out. Colleen and her students showed that having goals about community values was more common among students who were female, Black, or Hispanic than among students who were male, white, or Asian.

The draft of the 2020 ACM/IEEE Computing Curricula report is here. It’s a big document, so I might have missed it, but I don’t see these goals represented in the computer science outcomes. Some of these themes are in information systems or information technology. Some of the media fundamentals are in computer engineering. The core of computer science in the 2020 report is focused on “algorithms and complexity, programming languages, software development fundamentals, and software engineering” (quoting page 28). There is very little in the document about justice, equity, and critical consideration of our computing systems and infrastructure.

A student can certainly start from the core of CS and focus on any of these sets of goals — but do students know that? How do we communicate that to them? This was a real problem when we created the Threads program at Georgia Tech where students identify two “threads” of computing which they will combine to create their BS in CS degree program. A student who chooses Media and Theory may be interested in video compression algorithms, and a student who chooses People and Intelligence might be interested in creating explainable AI, but both of those students will be in the same data structures and discrete math classes. We (mostly Charles Isbell and Bill Leahy) made sure that the foundational classes created the narratives that explained how the foundational concepts connected to these Threads. We wanted students to see how their goals were met by the core of CS.

This might be easier in colleges focused on liberal arts and sciences with smaller classes. At my University, I taught the introduction to computing course to 760 students. We regularly have first year CS courses with over 1000 students. It’s very hard to cater to individual student goals at that scale. What we did at Georgia Tech and what we’re doing in our task force at the University of Michigan is to identify common goals and themes, and provide support and narrative for those. We will not reach all students’ goals. We aim to support more student goals than just software development in large Tech firms.

We do our students a disservice if we do not help them see how they can pursue their goals within our undergraduate programs. A computer science degree from a major University is a big deal. It’s worth a lot in the economic marketplace. Is it fair to deny the degree to students who are engaged and curious about computer science because our CS undergraduate programs focus on one set of goals and ignore the others? Computer science is broader than just what the FAANG companies hire for. CS undergraduate degree programs should not just be a Silicon Valley jobs program. Universities should support diversity in CS thoughts and goals if we want to have students from diverse backgrounds in computing.

January 11, 2021 at 7:00 am 2 comments

The goal of the first CS course should be to promote confidence if we’re going to increase diversity in CS: Paying off on a bet

This should be a thing: If you make a public bet on Twitter, and lose, you should have to write a blog post explaining how you got it wrong.

Let me set the stage for the bet. There are studies suggesting that the Advanced Placement (AP) Computer Science A exam has a significantly different impact on students’ majors than other AP exams. (For non-US readers: AP tests provide an opportunity for secondary school students to earn post-secondary school credit.) AP CS A exam-takers are more likely to go on to take more CS courses or become a CS major — more likely than, say, students taking AP Calculus or AP US History exams to become mathematics or history majors. But does that extend to the newer AP CS exam, AP CS Principles? AP CS Principles was designed to be less about the kinds of programming that CS majors do in their first year, and more about a broader understanding of computing and its effects (see College Board site here). There were several of us talking about this in the Spring. On April 1, 2019, I tweeted to Jeff Forbes (see link): “I bet that AP CS Principles has no impact on CS or STEM majors. It’s such a different course (eg doesn’t map to CS courses on most campuses).” He took that bet, and he was right. A study released by the College Board shows that there is a causal relationship between taking AP CS Principles and majoring in CS as an undergraduate (see report link here). The impact is large. Overall, students who take AP CS Principles are three times more likely to major in computer science in college. AP CSP students who are female are twice as likely to major in CS.

I wasn’t crazy for expecting that AP CS Principles would not have such a big impact on recruitment and retention. At SIGCSE 2020, Joanna Goode and co-authors published a paper (see blog post link here) showing that AP CS Principles is effectively recruiting much more diverse students than the AP CS A course (which is mostly focused on Java programming). But, AP CS A students end up with more confidence in computing and much more interest in computing majors and tech careers. ACM TOCE in 2019 published a paper using NCWIT Aspirations award winners (see blog post link here) showing that taking the CS Advanced Placement A exam was one of the best predictors of persistence three years after the high school survey in both CS and other technology-related majors. The TOCE paper authors placed particular emphasis on the importance of programming: “It seems that involvement in general tech-related fields other than programming in high school does not transfer to entering and persisting in computer science in college for the girls in our sample.”

So I had good reason to believe that non-programming-intensive courses might not have a big impact on recruitment into the CS major and retention. But I accept the evidence that I was wrong. What else is going on?

Here’s another recent piece of evidence that supports Jeff’s belief that AP CS Principles (and classes like it) could be having a big impact. Philip Boda and Steve McGee have a paper coming out at SIGCSE 2021 showing that the Exploring CS course (see website here) is having a significant impact in driving up AP CS A participation and diversity (see paper here), which continues to have a large impact on majoring in CS. Exploring CS, like AP CS Principles, de-emphasizes programming in favor of a broader understanding of computing and helps students see themselves as successful at CS.

Neither of these papers offers an explanation for why AP CSP or ECS is having this positive impact. They’re both large-scale quantitative stories. You’d think that I might have learned my lesson from this last failed bet. Nah. I’ve got guesses. My guesses might be wrong, as they were in this case. I’m a post-positivist. I don’t think we’ll ever get to the place where we know the complete truth, but we should keep trying, keep making hypotheses, and keep getting closer.

Here’s my hypothesis for what’s going on, stated as a prediction:

A first course will be successful at promoting recruitment into CS as a major or career and at retaining students in CS if it increases students’ self-efficacy about programming tasks.

The critical part is for students to increase their confidence that they can be successful at programming tasks. AP CS A easily does this, which is why it has such great results in recruitment and retention. Not all classes or experiences do, as the NCWIT study suggests. AP CS Principles and Exploring CS are all about increasing student confidence, helping students to see themselves as successful at computing. I don’t know how little programming a student needs to do to increase their self-efficacy. Maybe it’s enough to see programs and what programming is about.

Recent research in computing education has been focusing on self-efficacy as one of the most important variables predicting student recruitment and retention in CS. Alex Lishinski and his co-authors showed that self-efficacy had different relationships for female and male CS students (see paper link here) and that programming projects influenced students’ sense of self-efficacy, which in turn influenced performance in the CS class (see paper link here). Jamie Gorson and Nell O’Rourke found (in an ICER 2020 paper that I blogged about here) that CS students had deflated self-efficacy, in part, because they had unreasonable expectations of what real programmers do. Dr. Katie Cunningham, soon to be a post-doc joining Nell’s lab, showed in her dissertation how students simply give up on programming tasks that they don’t think that they’ll be successful at (see blog post on Katie’s dissertation defense). Self-efficacy is likely an important variable in recruitment and retention, particularly of female students, and it’s one that we can manipulate with better designed education.

I’m not the first person to suggest this relationship. In a study with over 5 million participants, Peter Kemp and colleagues suggest that female participation in secondary school computer science in England is being negatively impacted because of female students’ low self-efficacy in CS — and that this is because of the CS classes (see paper link here). In England, the curriculum in Information and Communications Technology is being phased out in favor of a Computer Science focus. They write in their paper “Female Performance and Participation in Computer Science: A National Picture”:

The move to introduce CS into the English curriculum and the removal of the ICT qualifications look to be having a negative impact on female participation and attainment in computing. Using the theory of self-efficacy, we argue that the shift towards CS might decrease the number of girls choosing further computing qualifications or pursuing computing as a career. Computing curriculum designers and teachers need to carefully consider the inclusive nature of their computing courses.

I made my bet because I thought that the programming-light focus of AP CS Principles (or even Exploring CS) would have less of an impact on CS recruitment and retention than the programming-intensive focus of AP CS A. I now believe I was wrong. I would now bet that the amount of programming probably isn’t the critical variable at all. It’s whether students come out of these courses saying, “I can do this. I can program.” That’s the critical variable for recruitment and retention that I believe AP CS Principles and Exploring CS are influencing successfully.

December 29, 2020 at 7:00 am 25 comments

Dijkstra’s Truths about Computing Education Aren’t: The many kinds of programming

ACM Turing Award laureate Edsger Dijkstra wrote several popular pieces about computer science education. I did my Blog@CACM post on one of these (see post here), “On the cruelty of really teaching computer science,” which may be the most-cited computing education paper ever. Modern learning sciences and computing education research have shown him to be mostly wrong. Dijkstra encouraged us to avoid metaphor in learning the “radical novelty” of computing; we now know that avoiding metaphor is likely impossible. Instead, the study of metaphor in computing education gives us new insights into how we learn and teach about programming. So far, I’m not aware of any evidence of anyone teaching or learning CS without metaphor.

After my Blog@CACM post, I learned on Twitter about Briana Bettin’s dissertation about metaphors in CS (see link here). Briana considers the potential damage from Dijkstra’s essay on computing education. How many CS teachers think that analogy and metaphors are bad, citing Dijkstra, when the reality is that they are critical?

The second most popular of his computing education essays is “How do we tell truths that might hurt?” (See link here). This essay is known for zingers like:

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.

He goes on to critique those who use social science methods and anthropomorphic terms when describing computing. He’s wrong about those, too (as I described in the Blog@CACM post), but I’ll just take up the Basic comment here.

Today, we can consider Dijkstra’s comments in light of research on brain plasticity (see example article here). It wasn’t until 2002 that we had evidence of how even adult brains can grow and reorganize their neural networks. We can always learn and regenerate, even as adults. Changing minds is always hard. The way to achieve change is through motivating change — being able to show that change is in the person’s best interest (see example here). Maybe people stick with Basic (or for me, with HyperTalk and Smalltalk) because the alternatives aren’t enough better to overcome the inertia. The onus isn’t on the adult learner to change. It’s on the teacher to motivate change.

There are computer scientists, like Dijkstra, who believe that innate differences separate those who can program from those who cannot, a difference that is sometimes called the “Geek Gene.” An interview with Donald Knuth (another Turing Award laureate) last year quoted him as saying that only one person in 50 will “groove with programming” (see interview here). We have a lot of evidence that there is no Geek Gene (see this blog post here), i.e., we have not yet identified innate differences that prevent someone from learning to program. Good teaching overcomes many innate differences (see blog post here making this argument).

Of course, there are innate differences between people, but that fact doesn’t have to limit who can program. Computers are the most flexible medium that humans have ever created. To argue that only a small percentage of people can “groove with programming” or that learning a specific programming language “mentally mutilates” is to define programming in a very narrow way. There are lots of activities that are programming. Remember that most Scratch programs have only Forever loops (if any loops at all), and Bootstrap:Algebra doesn’t have students write structures to control repetition. Students are still programming in Scratch and Bootstrap:Algebra. Maybe only one in 50 will be able to read and understand all of Knuth’s The Art of Computer Programming (I’m not one of those), and maybe people who programmed in Basic are unlikely to delve into Dijkstra’s ideas about concurrent and distributed programming (that’s me again). Let’s accept a wide range of abilities and interests (and endpoints) without denigrating those who will learn and work differently.

December 7, 2020 at 7:00 am 6 comments

Purpose-first programming: A programming learning approach for learners who care most about what code achieves: Katie Cunningham’s Defense

On Wednesday, Katie Cunningham is defending her dissertation, “Purpose-First Programming: A Programming Learning Approach for Learners Who Care Most About What Code Achieves.” I’m proud of the work Katie has done with Barb and me over the years. Let me relate the story here, with links to the blog posts.

I first met Katie through an on-line essay she wrote explaining the issues of gender and CS to her faculty (see my blog post referencing it at link here). After she graduated, she worked on the CSin3 project at California State University at Monterey Bay, which helped Latino and Latina students get undergraduate degrees in CS in three years. The paper she and the CSin3 team wrote won a Best Paper award at SIGCSE 2018 (see paper here).

Katie started her PhD research studying how students traced code when trying to understand and predict program behavior. She published her findings at ICER 2017 (see blog post). As you’d expect, students who traced programs line-by-line were more likely to get prediction problems (What is the output? What is this variable’s value?) correct. But not always. Most intriguing: Students who stopped mid-way through a trace were more likely to get the problems wrong than those who never traced at all.
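
To make “prediction problem” concrete, here is a small, made-up example of the kind of question used in code-tracing studies like these. It is an illustration only, not an item from Katie’s study; the variable names and values are invented. The student is shown the code and asked what it prints.

// A made-up prediction problem (illustrative only, not from the study).
// Question for the student: what does this program print?
let total = 0;
const values = [3, 1, 4];
for (let i = 0; i < values.length; i++) {
  if (values[i] > 2) {
    total = total + values[i];
  }
}
console.log(total);
// A line-by-line trace gives 3 + 4 = 7. A student who guesses that the
// "pattern" is "add everything up" would answer 8 instead.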

In her next study, she replicated the original experiment and then brought into the lab those students who had stopped mid-way in order to ask them “why?” A common answer was that the students were trying to see the “pattern” of the program, and once they saw the pattern, they were able to predict the answer. The problem is that the students were novices. They didn’t know many patterns. They often guessed wrong. Katie presented this paper at ITiCSE 2019 (see blog post).

Katie did a think-aloud study where she could watch students tracing, and something unexpected and interesting happened — two participants refused to trace. These were data science students who did program successfully, but they were unwilling to trace code at the line-by-line level. She wrote an ICLS 2020 paper about their reasons (see blog post). She decided to study that population.

A 2018 CHI paper by another U-M student, April Wang, had talked about how computing education fails conversational programmers (see paper here). Katie decided to build a new kind of curriculum that addressed her data science students and April’s conversational programmers. How do you teach programming to students who (1) don’t want to become professional programmers and (2) are dissuaded from high cognitive load activities like tracing code? This is a very different problem than most of CS education at the undergraduate level where we have eager CS majors who want to get software development jobs. Katie was dealing with issues both of motivation and of cognitive load.

Katie invented purpose-first programming. I don’t want to say too much about it here — her dissertation and her future papers will go more into it. I’ll give you a sense for her process. She used GitHub repositories and expert interviews to identify a few programming plans (just like the ones Elliot Soloway and Jim Spohrer studied years ago) that were in common use in a domain that her participants cared about. She then taught those plans. Students modified and combined the plans to create programs that the students found useful. Rather than start with syntax or semantics, she started with the program’s purpose. The results were very positive in terms of learning, performance, and affect. Rather than be turned away, students wanted more. One student asked if she could create a whole set of curricula like this, each for a different purpose. That’s the idea exactly. Katie may be on her way to inventing the Duolingo of programming.

Katie already has a post-doc lined up. She’ll be a CI Fellow with Nell O’Rourke at Northwestern. The defense will be on Zoom — feel free to come and cheer her on!

The School of Information is pleased to announce the oral defense of Kathryn Cunningham:

Title: Purpose-First Programming: A Programming Learning Approach for Learners Who Care Most About What Code Achieves

Date: Wednesday, December 2nd

Time: 10 am – 12 pm EST

Place: This defense will be held virtually for the public to attend. Please use this link.

Barbara Ericson and Mark Guzdial, serving as committee chairs, will preside over the oral defense.

All are welcome to (virtually) attend!

November 30, 2020 at 7:00 am 12 comments

Define Computer Science so CS Departments include CS Ed

The CSEd Grad website and research project is supporting the growth of CS education research by building pathways for CSEd graduate students. I am excited to be speaking at their conference in a couple of weeks (see program here), in a Q&A session with Dr. Amy Ko.

Where would you expect that pathway to lead? Where would you expect faculty working in CS Education research to have their academic home? Education? Information? Computer Science?

If we want to see computer science departments include CS education research, then we have to define computer science in a way that includes computing education research. My favorite definition of computer science is the first one published, in 1967, from Allen Newell, Alan Perlis, and Herbert Simon (all three Turing Award laureates, and Simon is also a Nobel laureate). They wrote: Computer science is the study of the phenomena surrounding computers. Helping people to learn what computation is and how to program falls within that definition — it’s part of the phenomena surrounding computers. Some historians, like Nathan Ensmenger (see post here), have suggested that the lack of investment and innovation in CS education influenced the direction of CS research.

Most definitions of computer science are not as broad as that. CSTA, Code.org, and ECEP have just come out with a new report on the state of CS Education in the United States (see report here). The definition they use (see the K-12 Framework page here) is “the study of computers and algorithmic processes, including their principles, their hardware and software designs, their implementation, and their impact on society.” This definition includes fields like social computing and human-computer interaction, but it doesn’t include the study of how people learn about computing. It’s a little ironic that a report promoting CS education promotes a definition that keeps education out of CS.

The definition matters when decisions are made on the basis of it. A popular website that ranks CS departments around the world, CSRankings.org, does not include CS education. I wrote my Blog@CACM post this month on my critique of CSRankings.org (see post here). I am opposed to it because it’s America-first, anti-progressive, and anti-interdisciplinary. People make decisions based on CSRankings.org. Graduate students use it to pick departments to apply to. Recommendation letters reference CSRankings.org as the measure of what counts as quality CS. If people use CSRankings.org to determine what “counts” (for attracting students, for promotion and tenure), then CS education literally doesn’t count. Researchers in CS education are at a disadvantage if their work doesn’t help their department in influential rankings.

Let’s define computer science to reflect our values long-term. Where do we build a home for CS education researchers in the future?

October 26, 2020 at 9:00 pm 8 comments

Social Studies Teachers using Programming for Data Visualization: An FIE 2020 Paper Preview

The Frontiers in Education (FIE) 2020 conference starts Wednesday October 21 in Uppsala, Sweden — see program here. My student Bahare Naimipour will be presenting our paper “Engaging Pre-Service Teachers in Front-End Design: Developing Technology for a Social Studies Classroom” (see preprint here) by Bahare, me, and Tammy Shreiner. This work came long before the NSF work that we just got funded for (see blog post here), but it’s in the same line of research.

The paper is about two of our participatory design sessions with pre-service social studies teachers in Tammy’s class on data literacy. In both of these sessions, we asked teachers to program in JavaScript or Vega-Lite to build a visualization, and in the second one, we also introduced CODAP, a visualization tool explicitly designed for middle and high school students. The paper is less about the technology and more about what the teachers told us they thought about tools for visualization in their class.
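
To give a sense of the scale of programming involved, here is a minimal sketch of the kind of Vega-Lite bar chart that can be driven from a few lines of JavaScript. The data values, field names, and page setup below are purely illustrative (not the actual session materials), and the sketch assumes the vega, vega-lite, and vega-embed scripts are loaded on the page, which provides the global vegaEmbed function.

// A minimal, hypothetical Vega-Lite bar chart driven from JavaScript.
// Assumes vega-embed is loaded and the page has a <div id="vis">.
const spec = {
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "description": "U.S. population by census year (illustrative values, in millions)",
  "data": {
    "values": [
      { "year": "1900", "population": 76 },
      { "year": "1950", "population": 151 },
      { "year": "2000", "population": 281 }
    ]
  },
  "mark": "bar",
  "encoding": {
    "x": { "field": "year", "type": "ordinal" },
    "y": { "field": "population", "type": "quantitative", "title": "Population (millions)" }
  }
};

vegaEmbed("#vis", spec);  // Renders the chart into the <div id="vis"> element.

Even a chart this small surfaces the tension the teachers talked about: the spec is declarative and compact, but a misplaced comma or quotation mark is still a syntax error in front of a class.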

Social studies teachers are such an interesting group to study. They’re not particularly interested in STEM, data, or computers. They want to teach social studies. Very few of our participants had ever seen any code. (One told us, “This looks a lot like setting up my MySpace page in middle school!”) They’re only interested if we can help them teach what they want to teach. It’s a hard audience to engage, in all the right ways.

I’m going to highlight just two lessons we learned here:

First: The results from the two participatory design sessions were remarkably different. Participatory design isn’t an “okay, we did that — check off the box” methodology. Each group of participants can be remarkably different. There’s no generalization here. Each session is useful, but I don’t know how many sessions we’d have to do to get anywhere near saturation. That’s okay — we learned design lessons from each session.

Second: There is no one answer to how teachers think about programming. I have heard from many people that teachers find programming hard (see this CACM Blog post about that discussion), and I’ve hypothesized that to be true in this blog (see this post). So, now I’ve been in the room as social studies teachers had their first programming experiences, and I’ve interviewed them afterwards, and… it’s complicated.

Teachers tell us often in our sessions that programming is overwhelming, but several teachers also told us that CODAP (explicitly designed for their use, and not a programming tool) was overwhelming. The question is whether it’s worth the complexity — and for whom. We get contradictory responses from the teachers, which we report in this paper. One told us that she wanted a simpler tool for herself and JavaScript for her students: “I don’t mind keeping life simple for me, but I wanted to challenge my students and give them useful, new skills.” Another teacher told us the opposite: “I would like Java[script] because it would let me do more to the visualization. Vega-lite would be better for students because it seems far more simple.”

We couldn’t fit in all the great stories and insights from these two participatory design sessions. Like the teacher who wants JavaScript in her class because, “That’s similar to what they use in math and science, right? I don’t want history to be the ‘dumbed-down’ programming.” I found that surprising, and wondered what the teachers would think of a block-based language. Another teacher told us that she wants to use programming in her history class, “Because maybe that would make history ‘cool.’” One of the tensions I found most interesting in these sessions was between the desire to know the tools and be comfortable in front of the class, and the desire to push their students to learn more. Some teachers told us that they preferred CODAP to any programming tool because they would be embarrassed to get a syntax error in front of their kids, which they realized would always be possible when programming. Other teachers told us that they were more concerned with going beyond basic tools — (paraphrasing one comment we received), “My students will already know Excel and Google Sheets. I want them to do more in my class.”

Our work is ramping up now. We had another PD session with pre-service teachers in March, just before pandemic lockdown, which was our first one with our data visualization tool in the mix. We’ve just held our first workshop in August for in-service (practicing) teachers. We’ve got more workshops planned over the next year. You’ll likely be hearing more from these studies in future posts.

October 19, 2020 at 7:00 am 4 comments
