Archive for February, 2021

Become a Better CS Teacher by Seeing Differently

My Blog@CACM post this month is How I evaluate College Computer Science Teaching. I get a lot of opportunities to read teaching statements and other parts of an academic’s teaching record. I tend to devalue quantitative student evaluations of teaching — they’re biased, and students don’t know what serves them best. What I most value are reports of the methods teachers use when they teach. Teachers who seek out and use the best available methods are most likely the best teachers. That is what I look for when I have to review college CS teaching records.

On Twitter, people are most concerned with my comments about office hours. Computer science homework assignments should not be written expecting or requiring everyone in the class to come to office hours in order to complete the assignment. That’s an instructional design problem. If there are questions that are coming up often in office hours, then the teacher should fix the assignment, or add to lecture, or make announcements with the clarification. Guided instruction beats discovery learning, and inquiry learning is improved with instruction. There is no advantage to having everyone in the class discover that they need a certain piece of information or question answered.

My personal experience likely biases me here. I went to Wayne State University in Detroit for undergraduate, and I lived in a northern suburb, five miles up from Eight Mile Road. I drove 30-45 minutes each way. (I took the bus sometimes, if the additional time cost was balanced out by the advantage of reading time.) I worked part-time, and usually had two part-time jobs. I don’t remember ever going to office hours. I had no time for office hours. I often did my programming assignments on nights and weekends, when there were no office hours scheduled. If an assignment had required me to go to office hours, I likely would have failed it. That was a long time ago (the early 1980s) — I was first generation, but not underprivileged. Today, as Manuel pointed out (quoted in this earlier blog post), time constraints (from family and work) are a significant factor for some of our students.

Teachers who require attendance at office hours are not seeing the other demands on their students’ lives. Joe Feldman argues that we ought to be teaching for the non-traditional students, the ones who have family and work demands. If we want diverse students in our classes, we have to learn to teach for the students whose experiences we don’t know and whose time costs we don’t see.

CS teachers get better at what we see

I’m teaching an Engineering Education Research class this semester on “Theoretical and Conceptual Frameworks for Engineering Education Research.” We just read the fabulous chapter in How People Learn on How Experts Differ from Novices. One of its themes is that experts don’t necessarily make good teachers, and that teaching requires its own specialized knowledge (like pedagogical content knowledge). I started searching for papers that did particularly insightful analyses of CS teacher knowledge, and revisited the terrific work of Neil Brown and Amjad Altadmri on “Novice Java Programming Mistakes: Large-Scale Data vs. Educator Beliefs” (see paper here).

Neil and Amjad analyze the massive Blackbox database of keystroke-level data from thousands of students learning Java. They identify the most common mistakes that students make in Java. My favorite analyses in the paper are where they rank these common mistakes by time to fix. An error with curly brackets is very common, but it is also very easy to fix. Errors that can take much longer (or might stymie a student completely) include errors with logical operators (ANDs and ORs), void vs non-void return values, and type issues (e.g., comparing strings with == instead of .equals).
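To make that last example concrete, here is a minimal Java sketch (my own illustration, not code from the paper) of the string-comparison mistake. The dangerous part is that == compiles and runs without complaint, because it compares object references rather than contents, so there is no error message pointing the student at the bug:

```java
public class StringCompare {
    public static void main(String[] args) {
        // Simulate input that arrives at runtime (not a compile-time literal).
        String input = new String("quit");

        // Common novice mistake: == compares references, not characters.
        // This branch is silently skipped, and no compiler error helps.
        if (input == "quit") {
            System.out.println("== matched");
        }

        // Correct: .equals compares the contents of the two strings.
        if (input.equals("quit")) {
            System.out.println(".equals matched");
        }
    }
}
```

Running this prints only “.equals matched” — which is exactly why this kind of mistake shows up in their ranking as slow to fix.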

The more controversial part of their analysis is when they ask CS teachers what students get wrong. Teachers’ predictions of the most common errors are not accurate. They’re not accurate when considered in aggregate (e.g., which errors most teachers voted for), nor when the teachers’ years of experience are taken into account.

Neil and Amjad contrast their findings with work by Phil Sadler and colleagues showing that teacher efficacy is related to their ability to predict student errors (see blog post here).

If one assumes that educator experience must make a difference to educator efficacy, then this would imply that ranking student mistakes is, therefore, unrelated to educator efficacy. However, work from Sadler et al. 2013 in physics found that “a teacher’s ability to identify students’ most common wrong answer on multiple-choice items . . . is an additional measure of science teacher competence.” Although picking answers to a multiple-choice question is not exactly the same as programming mistakes, there is a conflict here—either the Sadler et al. result does not transfer and ranking common student mistakes is not a measure of programming teacher competence, or experience has no effect on teacher competence. The first option seems more likely. (Emphasis added.)

I don’t see a conflict in that sentence. I believe both options are true, with some additional detail. Ranking common student compiler mistakes is not a measure of programming teacher competence. And experience has no effect on teacher competence on things they don’t see or practice.

Expertise is developed from deliberate practice. We get better at the things we work at. CS teachers certainly get better (become more competent) at teaching. Why would that have anything to do with knowing which compiler errors Java students are getting? Teachers rarely see what compiler errors their students are getting, especially in higher education with our enormous classes.

When I taught Media Computation, I thought I became pretty good at knowing what errors students got in Python. I worked side-by-side with students many times over many years as they worked on their Python programs. But that’s still a biased sample. I had 200-300 students a semester. I might have worked with maybe 10% of those students. I did not have any visibility into what most students were getting wrong in Python. I probably would have failed a similar test on predicting the most common errors in Python based on my personal experience. I’m sure I’d do much better relying on studies of students programming in Python (like the study of common errors when students write methods in Python) — research studies let me see differently.

Here at the University of Michigan, I mostly teach a user interface software class on Web front-end programming in JavaScript. I am quite confident that I do NOT know what JavaScript errors my students get. I have 260-360 students a semester. Few come to office hours with JavaScript errors. I rarely see anybody’s code.

I do see exams and quizzes. I know that my students struggle with understanding the Observer design pattern and MVC. I know that they often misunderstand the Universal Design Principles. I know that CSS and dealing with JavaScript asynchronous processing are hard because that’s where I most often get regrade requests. In those requests, I’ll find that there is some unexpected way to get a given effect, and I often have to give points back because their approach works too. I get better at teaching these things every semester.
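For readers who haven’t taught it, here is a bare-bones sketch of the Observer pattern in Java (my own illustration; the course itself uses JavaScript, and the class names here are hypothetical). The idea students most often miss is that the subject never knows what its observers do; it only promises to notify everything that has registered:

```java
import java.util.ArrayList;
import java.util.List;

// An observer registers interest in a subject and gets called on changes.
interface Observer {
    void update(int newValue);
}

// The subject keeps a list of observers and notifies them on state changes.
class Counter {
    private final List<Observer> observers = new ArrayList<>();
    private int count = 0;

    void addObserver(Observer o) {
        observers.add(o);
    }

    void increment() {
        count++;
        // Notify every registered observer; the subject never knows
        // whether an observer is a view, a logger, or anything else.
        for (Observer o : observers) {
            o.update(count);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Counter model = new Counter();
        // Two independent "views" of the same model.
        model.addObserver(v -> System.out.println("View A sees " + v));
        model.addObserver(v -> System.out.println("View B sees " + v));
        model.increment();  // both views print 1
    }
}
```

This decoupling of model from views is also the heart of MVC, which is probably why the two get confused together.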

CS teachers can be expected to become more competent at what they see and focus on. Student compiler errors are rarely what they see. They may see more conceptual or design issues, so that’s where we would expect to see increased teacher competence. To develop teacher competence beyond what we see, we have to rely on research studies that go beyond personal experience.

CS teachers need to get better at teaching those we don’t see

The same principle applies to why we don’t improve the diversity of our CS classes. CS teachers don’t see the students who aren’t there. How do you figure out how to teach in a way that recruits and retains women and students from Black, Latino/Latina, and Indigenous groups if they’re not in your classes? We need to rely on research studies, using others’ eyes and others’ experiences.

Our CS classes are huge. It’s hard to see that we’re keeping students out and that we’re sending a message that students “don’t belong,” when all we see are huge numbers. And when we have these huge classes, we want the majority of students to succeed. We teach to the average, with maybe individual teacher preference for the better students. We rarely teach explicitly to empower and advantage the marginalized students. They are invisible in the sea of (mostly male, mostly white or Asian) faces.

I have had the opportunity over the last few months to look at several CS departments’ diversity data. What’s most discouraging is that the problem is rarely recruitment. The problem is retention. There were more diverse students in the first classes or in the enrolled population — but they withdrew, failed, or dropped out. They were barely visible to the CS teachers in the sea of huge classes, and then they became completely invisible. We didn’t teach in a way that kept these students in our classes.

Our challenge is to teach for those who we don’t easily see. We have to become more competent at teaching to recruit those who aren’t there and retain those students who are lost in our large numbers. We easily become more competent at teaching for the students we see. We need to become more competent at teaching for diversity. We do that by relying on research and better teaching methods, like those I talk about in my Blog@CACM post.

February 15, 2021 at 7:00 am

National Academies Report on authenticity to promote computing interests and competencies

The National Academies has now released the report that I’ve been part of developing for the last 18 months or so: “Cultivating Interest and Competencies in Computing: Authentic Experiences and Design Factors.” The report is available here, and you can read it online for free here.

The starting question for the report is, “What’s the role of authentic experiences in developing students’ interests and abilities in computing?” The starting place is a simple observation — lots of current software engineers did things like take apart their toasters as kids, or participate in open-source programming projects as novices. I hear that it’s pretty common in technical interviews to ask students about their GitHub repositories, assuming that that’s indicative of their potential abilities as engineers.

There’s a survivor bias in the observation about toasters and open-source projects. You’re only seeing the people who made it to the software engineering jobs. You’re not seeing the people who were turned off by those activities. You’re not seeing the people who couldn’t even get into open-source projects. Is there a causal relationship? If a student engages in “authentic experiences,” does it lead to greater interest and skill development?

You can skip all the way to Chapter 8 for the findings: We don’t know. There are not enough careful studies exploring the question to establish a causal relationship. But that’s not the most important part of the report.

The key questions of the report really are: “What are authentic learning experiences? What prevents students from getting them?” We came up with two definitions:

  • There’s professional authenticity, which is what the starting question assumes — the activity has something to do with professional practice in the field.
  • There’s personal authenticity, which is where the activity is meaningful and interesting to the student.

These don’t have to be in opposition, but they often are. The tech industry and open-source development are overwhelmingly male and white or Asian. Learning activities that are culturally relevant may be interesting and meaningful to students, but may not obviously connect to professional practice. Activities that are grounded in current practice may not be interesting or meaningful to students, especially if the students see themselves as outsiders who don’t belong to the culture of software development (open source or industry). Formal educational systems place a premium on professional, vocational practice, and informal education systems need personal authenticity to keep drawing students in.

The report does a good job covering the research — what we know (and what we don’t), how the issues vary in informal and formal education, and what we can recommend about designing for authenticity (both kinds, without opposition) in learning experiences.

If you ever get the chance to participate in a National Academies consensus report, I highly recommend the experience. You’re producing something for the community, so the amount of review and rewriting is significant — more than in any other kind of writing I’ve ever done. It’s worth it. You learn so much! It’s the National Academies, and they gather pretty amazing committees. If you ever get the chance to grab a coffee or beer with any of the participants on the committee, external or staff, take that chance! (I’m not sure I’d recommend chairing or directing one of these committees — the amount of work that Barbara and Amy did was astounding.) Every single one of these folks has amazing insights and experiences. I’m grateful for the opportunity to hang out with them (even when it all went online), write with them, and learn from them.

February 8, 2021 at 7:00 am

ICER 2021 Call for Papers out with Changes for ICER Authors

The International Computing Education Research (ICER) Conference Call for Papers is now out — see the conference website here. Abstracts are due 19 March, and full papers are due 26 March.

There are big changes in the author experience of ICER 2021 — see a blog post describing them here. Here are two of them:

  • ICER is going to use the new ACM TAPS publication process, and the paper size limits are now based on word count instead of number of pages. I hope that this relieves authors of some of the tedium of last-minute adjusting of figure sizes and tweaking of text and fonts to just barely get everything squeezed into the given page limits.
  • There will now be conditional accepts. It’s heartbreaking when there’s a paper that’s so good, but it’s got one small and easily fixable fatal flaw (something that the reviewers and program chairs feel is not publishable as-is). In a conference setting, when the only options are accept or reject, there’s not much to do but reject. Now, there will be an option to conditionally accept a paper, with a small review process after revision to make sure that the flaw is fixed.

Please do submit to ICER — let’s get lots of great CS Ed research out into the community discussion!

February 1, 2021 at 7:00 am

