## Teachers are not the same as students, and the role of tracing: ICER 2017 Preview

*August 18, 2017 at 7:00 am*

The International Computing Education Research conference starts today at the University of Washington in Tacoma. You can find the conference schedule here, and all the proceedings in the ACM Digital Library here. In past years, the papers have been free for the first couple of weeks after the conference, so grab them while they are outside the paywall.

Yesterday was the Doctoral Consortium, which had a significant Georgia Tech presence. My colleague Betsy DiSalvo was one of the discussants. Two of my PhD students were participants:

- Amber Solomon who is exploring the relationship between spatial reasoning and CS learning (see her position paper here). I blogged about Amber's work on our design studio using AR technology.
- Katie Cunningham who is exploring how instructors can use students' sketching and tracing to get greater insight into the students' mental models of the notional machine (see her position paper here). I blogged about Katie and her work when she won an NSF fellowship earlier this year.

We have two research papers being presented at ICER this year. Miranda Parker and Kantwon Rogers will be presenting *Students and Teachers Use An Online AP CS Principles EBook Differently: Teacher Behavior Consistent with Expert Learners* (see paper here) which is from Miranda C. Parker, Kantwon Rogers, Barbara J. Ericson, and me. Miranda and Kantwon studied the ebooks that we've been creating for AP CSP teachers and students (see links here). They're asking a big question: "**Can we develop one set of material for both high school teachers and students, or do they need different kinds of materials?**" First, they showed statistically significant differences in behavior between teachers and students (e.g., different numbers of interactions with different types of activities). Then, they tried to explain *why* those differences exist.

We develop a model of teachers as expert learners (e.g., they have more knowledge so they can make more connections, they know *how* to learn, and they are better at monitoring their own learning) and high school students as more novice learners. They dig into the log file data to find evidence consistent with that explanation. For example, students repeatedly try to solve Parsons problems long past the point where they are likely to get them right and learn from them, while teachers move along when they get stuck. Students are more likely than teachers to run code and then run it again with no edits in between. At the end of the paper, they offer design suggestions, based on this model, for how we might develop learning materials designed explicitly for teachers versus students.
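The run-it-again-without-edits behavior described above can be detected with a single pass over an event log. Here is a minimal sketch in Python; the event schema (user id, role, event type, code snapshot) and the toy events are my assumptions for illustration, not the actual ebook log format.

```python
# Toy event log: (user_id, role, event_type, code_snapshot).
# This schema is a hypothetical stand-in for the real ebook logs.
events = [
    ("s1", "student", "run", "print(x)"),
    ("s1", "student", "run", "print(x)"),    # re-run with no edit in between
    ("s1", "student", "run", "print(x + 1)"),
    ("t1", "teacher", "run", "print(x)"),
    ("t1", "teacher", "run", "print(x + 1)"),
]

def rerun_rate(events, role):
    """Fraction of 'run' events that repeat the user's previous snapshot unchanged."""
    last_snapshot = {}          # user_id -> most recently run code
    reruns = runs = 0
    for user, r, event_type, code in events:
        if r != role or event_type != "run":
            continue
        runs += 1
        if last_snapshot.get(user) == code:
            reruns += 1
        last_snapshot[user] = code
    return reruns / runs if runs else 0.0

print(rerun_rate(events, "student"))  # one of three student runs is an unchanged re-run
print(rerun_rate(events, "teacher"))  # 0.0
```

On real logs you would aggregate per user and then test the teacher and student distributions against each other, rather than pooling all events as this toy does.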

Katie Cunningham will be presenting *Using Tracing and Sketching to Solve Programming Problems: Replicating and Extending an Analysis of What Students Draw* (see paper here) which is from Kathryn Cunningham, Sarah Blanchard, Barbara Ericson, and me. The big question here is: "**Of what use is paper-and-pen based sketching/tracing for CS students?**" Several years ago, the Leeds Working Group (at ITiCSE 2004) did a multi-national study of how students solved complicated problems with iteration, and they collected the students' scrap paper. (You can find a copy of the paper here.) They found (not surprisingly) that students who traced code were far more likely to get the problems right. Barb gave scrap paper to students during an experiment for her study of Parsons problems, and Katie and Sarah analyzed those sheets.

First, they replicate the Leeds Working Group study: those who trace do better on problems where they have to predict the behavior of the code. That alone is a good result. But then, Katie and Sarah go further and find that it's not always true. If a problem is fairly easy, those who trace are actually more likely to get it *wrong*, so the correlation goes the other way. And those who start to trace but then give up are even *more* likely to get it wrong than those who *never traced at all*.
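The trace-versus-correctness comparison is, at heart, a contingency-table analysis: cross-tabulate whether a student traced against whether they answered correctly, per problem. Here is a toy sketch; the counts are invented for illustration and are not the paper's data.

```python
# Hypothetical counts for one problem, cross-tabulating tracing against
# correctness. These numbers are made up to show the shape of the analysis.
counts = {
    ("traced", "correct"): 30,
    ("traced", "wrong"): 10,
    ("no_trace", "correct"): 15,
    ("no_trace", "wrong"): 25,
}

def success_rate(counts, group):
    """Proportion of students in `group` who answered correctly."""
    correct = counts[(group, "correct")]
    wrong = counts[(group, "wrong")]
    return correct / (correct + wrong)

print(success_rate(counts, "traced"))    # 0.75
print(success_rate(counts, "no_trace"))  # 0.375
```

For an easy problem, the finding described above would show up as the "traced" rate dipping *below* the "no_trace" rate; a chi-square test on the 2x2 table (e.g., `scipy.stats.chi2_contingency`) would then check whether the gap is statistically significant.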

They also start to ask a tantalizing question: Where did these tracing methods come from? A method is only useful if it gets used — what leads to use? Katie interviewed the two teachers of the class (each taught about half of the 100+ students in the study). Both teachers did tracing in class. Teacher A's method gets used by some students. Teacher B's method gets used by *no* students! Instead, some students use the method taught by the head Teaching Assistant. Why do some students pick up a tracing method, and why do they adopt the one that they do? Because it's easier to remember? Because it's more likely to lead to a right answer? Because they trust the person who taught it? More to explore on that one.

Entry filed under: Uncategorized. Tags: computing education research, ebooks, ICER, spatial reasoning, tracing.
