Posts tagged ‘computing education research’

Social Studies Teachers using Programming for Data Visualization: An FIE 2020 Paper Preview

The Frontiers in Education (FIE) 2020 conference starts Wednesday October 21 — originally planned for Uppsala, Sweden, but now held virtually (see program here). My student Bahare Naimipour will be presenting our paper “Engaging Pre-Service Teachers in Front-End Design: Developing Technology for a Social Studies Classroom” (see preprint here), co-authored by Bahare, me, and Tammy Shreiner. This work began long before the NSF work that we just got funded for (see blog post here), but it’s in the same line of research.

The paper is about two of our participatory design sessions with pre-service social studies teachers in Tammy’s class on data literacy. In both of these sessions, we asked teachers to program in JavaScript or Vega-Lite to build a visualization, and in the second one, we also introduced CODAP, a visualization tool explicitly designed for middle and high school students. The paper is less about the technology and more about what the teachers told us they thought about tools for visualization in their classes.

Social studies teachers are such an interesting group to study. They’re not particularly interested in STEM, data, or computers. They want to teach social studies. Very few of our participants had ever seen any code. (One told us, “This looks a lot like setting up my MySpace page in middle school!”) They’re only interested if we can help them teach what they want to teach. It’s a hard audience to engage, in all the right ways.

I’m going to highlight just two lessons we learned here:

First: The results from the two participatory design sessions were remarkably different. Participatory design isn’t an “okay, we did that — check off the box” methodology. Each group of participants can be remarkably different. There’s no generalization here. Each session is useful, but I don’t know how many sessions we’d have to do to get anywhere near saturation. That’s okay — we learned design lessons from each session.

Second: There is no one answer to how teachers think about programming. I have heard from many people that teachers find programming hard (see this CACM Blog post about that discussion), and I’ve hypothesized that to be true in this blog (see this post). So, now I’ve been in the room as social studies teachers had their first programming experiences, and I’ve interviewed them afterwards, and… it’s complicated.

Teachers tell us often in our sessions that programming is overwhelming, but several teachers also told us that CODAP (explicitly designed for their use, and not a programming tool) was overwhelming. The question is whether it’s worth the complexity — and for whom. We get contradictory responses from the teachers, which we report in this paper. One told us that she wanted a simpler tool for herself and JavaScript for her students: “I don’t mind keeping life simple for me, but I wanted to challenge my students and give them useful, new skills.” Another teacher told us the opposite: “I would like Java[script] because it would let me do more to the visualization. Vega-lite would be better for students because it seems far more simple.”

We couldn’t fit in all the great stories and insights from these two participatory design sessions. Like the teacher who wants JavaScript in her class because, “That’s similar to what they use in math and science, right? I don’t want history to be the ‘dumbed-down’ programming.” I found that surprising, and wondered what the teachers would think of a block-based language. Another teacher told us that she wants to use programming in her history class, “Because maybe that would make history ‘cool.’” One of the tensions I found most interesting in these sessions was between the desire to know the tools and be comfortable in front of the class, and the desire to push their students to learn more. Some teachers told us that they preferred CODAP to any programming tool because they would be embarrassed to get a syntax error in front of their kids, which they realized would always be possible when programming. Other teachers told us that they were more concerned with going beyond basic tools — (paraphrasing one comment we received), “My students will already know Excel and Google Sheets. I want them to do more in my class.”

Our work is ramping up now. We had another PD session with pre-service teachers in March, just before pandemic lockdown, which was our first one with our data visualization tool in the mix. We’ve just held our first workshop in August for in-service (practicing) teachers. We’ve got more workshops planned over the next year. You’ll likely be hearing more from these studies in future posts.

October 19, 2020 at 7:00 am 3 comments

Award-winning papers at ICER 2020 explore new directions and point towards the next work to do

The 2020 ACM SIGCSE International Computing Education Research Conference was in August (see website here). It was to be hosted in Dunedin, New Zealand, but was unfortunately entirely virtual. I became so much more aware of the affordances of face-to-face conferences when attending one of my favorite conferences all through my screen. The upside of the all-virtual format is that all the talks are available on YouTube (see ICER 2020 channel here). Here are my comments on the three papers receiving awards — see them listed here.

What Do We Think We Are Doing?: Metacognition and Self-Regulation in Programming. (Paper link)

This is the paper that I have read and re-read the most since the conference. The authors review what the literature tells us about metacognition in programming. Metacognition is thinking about thinking, like “Did I really understand that? Maybe I should re-read this. Or maybe I should write down my thoughts so I can reflect on them. I’m not sure that I’m making progress here. Taking a walk would probably help me clear my head and focus.”

One of their most intriguing findings is “Metacognitive knowledge is difficult to achieve in domains about which the learner has little content knowledge.” In other words, you can’t teach students metacognition and self-regulation first, and then teach them something using those new thinking skills. Learners have to know some of the domain first. Now why is that?

Here’s a hypothesis: Metacognition and self-regulation are hard. They take a lot of cognitive load. You have to pay attention to things that are invisible (your own memory and thoughts) and that’s hard. Trying to learn or problem-solve at the same time that you’re monitoring yourself and thinking about your own learning — super hard. Maybe you have to know enough about the domain for some of that activity to be automatized, so that you don’t have to pay as much attention to it in order to do it.

So the biggest hole I see in this paper (which, given that it’s a review paper, probably means that the hole is in the literature) is that it does not sufficiently consider factors like gender, race, disability, or socioeconomic status (SES). (Gender gets mentioned when reporting Alex Lishinski’s great work, but only with respect to self-efficacy.) My hypothesis is that the story is more complicated when you consider non-dominant groups. If you don’t think you belong, that takes more of your attention, which takes attention away from your learning — and leaves even less attention for metacognition and self-regulation. If you’re worrying about your screen reader working or where you’re going to get dinner tonight, how do you also have attention left over for monitoring your learning?

The biggest unique opportunity I see in thinking about metacognition and programming is in thinking about debugging. Like psychology or veterinary science, but unlike most other fields, a lot of a computer scientist’s job is in understanding the “thinking” (behavior, processing, whatever) of another agent. When you’re debugging your program, isn’t that a kind of metacognition? “Okay, what is the computer doing here? How is it interpreting what I wrote? Oh wait, is that what I wanted to write? Is that what I wanted to happen?” The complexity of mapping your thoughts and intentions to what you wrote to what the computer did is enormous. Now, debug someone else’s code — you’ve got what you want in mind, you’re constructing a model of the mind of whoever wrote the code before you (did they know what they were doing? is this code brilliant or broken?), and you’re trying to figure out how the computational agent is “thinking about” the code. There’s some seriously complex metacognition going on there.

Exploring Student Behavior Using the TIPP&SEE Learning Strategy. (See paper here.)

No surprise that Diana Franklin’s CANON Lab at U. Chicago continues to do terrific and award-winning work. I’m excited about the TIPP&SEE learning strategy. A commonly found problem in computer science education is that students are bad at Explain In Plain English (EIPE) problems (e.g., see this SIGCSE 2012 paper on the topic). EIPE problems measure whether students can step back from the structure and behavior of code to describe its function or purpose. Katie Cunningham has been exploring how some students focus more on the purpose of the program, while others get stuck in the code and can’t see its purpose. The TIPP&SEE learning strategy explicitly addresses these problems. Students are guided through understanding a programming project and relating code to purpose.
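To make EIPE concrete, here is a hypothetical item of the kind these studies use (my own sketch, not drawn from the papers). A structural reading says “it loops, compares, and assigns”; the purpose-level EIPE answer is “it returns the largest number in the list.”

    // Explain in plain English: What does this function do?
    function mystery(xs) {
      let m = xs[0];
      for (let i = 1; i < xs.length; i++) {
        if (xs[i] > m) {
          m = xs[i];
        }
      }
      return m;
    }

    mystery([3, 9, 4]); // returns 9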

This award-winning paper (which follows on their SIGCSE 2020 paper) shows us that students using the TIPP&SEE approach perform better than students who don’t. They get more of their programs done. The SIGCSE 2020 paper shows that they learn more.

The papers totally convince me that this strategy works. The next question I want answered is how and why. The SIGCSE paper does some qualitative work, but it’s a pretty big n — 184 students. With this kind of scale, the programs are given and the problems are given. There’s not as much opportunity for the detailed cognitive interviews to figure out how the students are thinking about interpreting programs. What happens when these students just go to the Scratch website to look at something that they want to reuse? Do they use TIPP&SEE? Do they understand the programs that they just happen to come across? What happens when they want to build something, where they provide their purpose? Can they draw on TIPP&SEE and succeed? This is not a critique of the papers — they’re great and make real contributions. I’m thinking about what I want to know next.

Hedy: A Gradual Language for Programming Education. (See program link here.)

Easily my favorite paper at ICER 2020 this year. Felienne is doing what I am trying to do. Let’s invent new, more usable programming languages! I am happy that she got this paper published, because (selfishly) it gives me hope that I can get my new work published. I am thrilled that the ICER community valued this paper so highly that it received a John Henry award.

The basic idea is to create a sequence of programming languages, where advancing levels have most of the elements of the previous level but include new elements. Her earliest level has no punctuation — no quotes, no semi-colons, no curly-braces. I recently built a task-specific programming language that had the same attribute, and one of the students I’m working with looked at it and asked, “Wait — you can have programming languages without all that punctuation? Well, then, why do we have so much when it scares people off?” Great question! When do we need all that extra punctuation, and where can we avoid it?
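For a flavor of the gradual approach, here is roughly what the earliest levels look like. (I am paraphrasing examples from the paper; exact syntax and level numbering may differ in current Hedy, and the # annotations are mine.)

    # Level 1: just keywords and text. No quotes, no variables, no punctuation.
    print Hello ICER!
    ask What is your name?
    echo So your name is

    # Level 2 (roughly): variables arrive, introduced with 'is'.
    name is Hedy
    print Hello name

    # A later level requires quotes around literal text,
    # one step closer to conventional syntax.
    print 'Hello' name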

The next stage is to explore how we design languages like these. (I’m biased, since this is where I’m spending most of my research time these days.) Why do we choose those language features? Why the keywords print, ask, echo, assign, if, else, and repeat? How do we design and iteratively develop the language? How do we know that people can do things that they want to do with this language? My answer to this is participatory design with teachers, but there are many other viable answers. Felienne provides good design rationale for Hedy’s language features, based in literature from computing education and natural language acquisition. In a process of user experience (UX) design, we’d also use iterative development, including testing with real users. This paper shows us use at large scale, and a big chunk of her paper describes what people did with it. It’s fascinating work, but we don’t talk to any of those users. We don’t know what they liked, what they disliked, what they found frustrating, and what they were able to do. We need to move programming language design closer to user experience design — UX for PX.

All three papers are terrific contributions to the research community, and I plan to cite and build on them all. I’m eager to see what comes next!


Sidebar: I am a member of the ICER Steering Committee (which has no role in reviewing papers or in picking awards), and I was a metareviewer for ICER 2020. I am speaking here just for myself as a reader and attendee.

September 28, 2020 at 7:00 am 2 comments

Let’s program in social studies classes: NSF funding for our work in task-specific programming languages

If we want all students to learn computer science (CS for All), we have to go to where the students are. Unfortunately, that’s not computer science class. In most US states, less than 5% of high school students take a course in computer science.

Programming is applicable and useful in many domains today, so one answer is to use programming in science, mathematics, social studies, and other non-CS classes. We take programming to where the students are, and hope to increase their interest and knowledge about CS. I love that idea and have been working towards that goal for the last four years. But it’s a hard sell. I told the story in 2018 (see post here) about how the mathematics teachers rejected our pre-calculus course that integrated computing. How do we help non-CS teachers to see value in computing integrated into their classes?

That’s the question Tammy Shreiner at Grand Valley State and I get three years to explore, thanks to a new grant from the US National Science Foundation in the research strand of the “CS for All” Program. Tammy teaches a course on “Data Literacy for Social Studies Teachers” at GVSU, and she (with her colleague Bradford Dykes) has been building an open educational resource (OER) to support data literacy education in social studies classes. We have been working with her to build usable and useful data visualization tools for her curriculum. Through the grant, we’re going to follow her students for three years: from taking her pre-service class, out into their field experiences, and then into their first classes. At each stage, we’re going to offer mentoring and workshops to encourage teachers to use the things we’ve shown them. In addition, we’ll work on assessments to see if students are really developing skills and positive attitudes about data literacy and programming.

Just a quick glimpse into the possibilities here. AP CS Principles exam-takers are now about 25% female. AP US History is 56% female exam-takers. There are five times as many Black AP US History exam-takers as AP CSP exam-takers, and it’s a factor of 14 for Hispanic students. Everyone takes history. Programming activities in a history class reach a far more diverse audience.

I have learned so much in the last couple of years about what prevents teachers from adopting curriculum and technology — it’s way more complicated than just including it in their pre-service classes. Context swamps pre-service teaching. The school the teacher goes to influences what they adopt more than what they learned pre-service. I’ve known Anne Ottenbreit-Leftwich for years for her work in growing CS education in Indiana, but hadn’t realized that she is an expert on technology adoption by teachers — I draw on her papers often now.

Here’s one early thread of this story. Bahare Naimipour, an EER PhD student working with me, is publishing a paper at FIE next month about our early participatory design sessions with pre-service social studies teachers. The two tools that teachers found most interesting were CODAP and Vega-Lite. Vega-Lite is interesting here because it really is programming, but it’s a declarative language with a JSON syntax. The teachers told us that it was powerful, flexible — and “overwhelming.” How could we create a scaffolded path into Vega-Lite?

We’ve been developing a data visualization tool explicitly designed for history inquiry (you may remember seeing it back here). We always show at least two visualizations, because historical problems start from two accounts or two pieces of data that conflict.

As you save graphs in your inquiry to the right, you’re likely going to lose track of what’s what. Click on one of them.

This is a little declarative script, in a Vega-Lite-inspired JSON syntax. It’s in a task-specific programming language, but this isn’t a program you write. This is a program that describes the visualization — code as a concise way of describing process.
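If you haven’t seen Vega-Lite, a plain Vega-Lite spec gives the flavor of what such a script looks like: a few JSON properties that completely describe a chart. (This is generic Vega-Lite with invented data and field names, not the exact syntax of our tool.)

    {
      "data": { "url": "census-1850.csv" },
      "mark": "bar",
      "encoding": {
        "x": { "field": "state", "type": "nominal" },
        "y": { "field": "population", "type": "quantitative" }
      }
    }

Change "bar" to "line" and you have described a different chart. That conciseness is what makes code a good notation for describing a visualization.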

We now have a second version where you can edit the code, or use the pull-down menus. These are linked representations. Changing the menu changes the code and updates the graph. Changing the code updates the menu and the graph. Now the code is also malleable. Is this enough to draw students and teachers into programming? Does it make Vega-Lite less overwhelming? Does it lead to greater awareness of what programming is, and greater self-efficacy about programming tasks?
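Mechanically, linked representations can be as simple as one shared spec object that every view renders from. Here is a minimal JavaScript sketch (the render functions are hypothetical, and this is not our actual implementation):

    // One shared spec; every edit funnels through it, then all views re-render.
    let spec = { mark: "bar", x: "state", y: "population" };

    // Called when a pull-down menu changes:
    function onMenuChange(property, value) {
      spec[property] = value;    // the menu edit updates the shared spec...
      renderCodeView(spec);      // hypothetical: redraw the code view
      renderGraph(spec);         // hypothetical: redraw the graph
    }

    // Called when the code view is edited:
    function onCodeEdit(text) {
      spec = JSON.parse(text);   // the code edit updates the shared spec...
      renderMenus(spec);         // hypothetical: sync the pull-down menus
      renderGraph(spec);
    }

Because both views are generated from the same spec, the menus and the code can never get out of sync.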

We just had our first in-service teacher workshop with these tools in August. One teacher just gushed over them. “These are so great! How did I not know that they existed before?” That’s easy — they didn’t exist six months ago! We’re building things and putting them in front of teachers for feedback as quickly as we can, in a participatory design process. We make lots of mistakes, and we’re trying to document those, too. We’re about applying an HCI process to programming experience design — UX for PX.


If you know a social studies teacher who would want to keep informed about our work and perhaps participate in our workshops, please have them sign up on our mailing list. Thank you!

September 14, 2020 at 7:00 am 11 comments

Proposal #2 to Change CS Education to Reduce Inequity: Make the highest grades achievable by all students

What does an “A” mean in your course? The answer likely depends on why you teach. Research on teacher beliefs suggests that grading practices are related to teachers’ reasons for teaching. Joe Feldman points out that our grading relates to who we are as teachers, and it is passionately held:

Conversations about grading weren’t like conversations about classroom management or assessment design, which teachers approached with openness and in deference to research. Instead, teachers talked about grading in a language of morals about the “real world” and beliefs about students; grading seemed to tap directly into the deepest sense of who teachers were in their classroom.

Feldman, Joe. Grading for Equity (pp. xix-xx). SAGE Publications. Kindle Edition.

I’ve used the Teaching Perspectives Inventory (TPI) (see link here) with dozens of CS teachers. The most common teaching perspective I see among CS teachers is “Apprenticeship.” CS faculty see themselves as preparing future software developers. They value demonstrating and modeling good software practices. An “A” for an apprenticeship teacher is likely to indicate that the student produces good code. An “A” is reserved for “excellence” (as one CS teacher told me recently). An “A” indicates that the student has risen above his or her peers in producing “high quality products” (as another CS teacher posted on Facebook recently). An “A” means that, in this teacher’s opinion, the student is recommended to go on to a highly-desired software development job, perhaps at a place like Google or Amazon.

A student with less computing background is much less likely to earn an “A” in an apprenticeship-oriented class. Students who bring more experience to the table have a head start on producing higher-quality products than the other students in the class, and their products are more likely to be marked as “excellent.”

In my opinion, the teacher attitude of “rugged individualism” defined in the SIGCSE 2020 paper by Hovey, Lehmann, and Riggers-Piehl, “Linking faculty attitudes to pedagogical choices,” meshes with the TPI category of “Apprenticeship.” “Rugged individualism” teachers believe that “learning and success are the individual student’s responsibility.” (Hovey et al. did not make this claim or compute the correlation — this is my prediction.) They showed that teachers who believe in “rugged individualism” are less likely to use student-centered teaching practices and more likely to lecture. Students with less computing background do better with student-centered teaching practices.

This is my third post in a series about how we have to change how we teach computing to reduce inequity (see last post). The series has several inspirations, but the concrete one that I want to reference back to each week is the statement from the University of Maryland’s CS department:

Creating a task force within the Education Committee for a full review of the computer science curriculum to ensure that classes are structured such that students starting out with less computing background can succeed, as well as reorienting the department teaching culture towards a growth mindset

The style of grading that means to identify “talent” or “excellence” is inherently inequitable. It presumes a fixed mindset. If you believe that there is a random distribution of “talent” or “ability” or “Geek Gene” in the course, and (critically) that there’s nothing much that teaching can do to change that, then it makes sense to grade on a curve. There can be only a few “A” slots, more “B” slots, and so on. Empirical evidence suggests the opposite — good teaching can trump a lot of other factors. Belief in a growth mindset leads to better learning outcomes and better performance. If we value teaching, and believe that students can get better at computer science, then over time, we should teach better and students should learn more. If students learn more, they should get a higher grade. It’s not a fixed-result game.

Measure learning or progress towards objectives, not code quality

My teacher perspective is that we are in the job of maximizing individual human development. It’s our job to help each student achieve as much as they can within our discipline. We should measure achievement in terms of learning, not product quality. I tend to align with the “nurturing” perspective on the TPI, but that’s not what I’m going to argue for here.

It is not at all the same thing to grade for excellence and to grade for learning. Writing code isn’t the same as learning. We have evidence that writing a given piece of code is not indicative of understanding that piece of code. The recent ITiCSE 2020 paper by Jean Sala and Diana Franklin showed that use of a given code construct was not correlated well with understanding of that code construct (see paper here).

I mentioned in the last post that I’m reading Grading for Equity by Joe Feldman (see his website here). He points out all the other factors that influence grades that have nothing to do with learning. Some of these extra-curricular advantages — like the ability to meet deadlines, which presumes privileged, full-time student status without outside pressures — are less available to Black, Hispanic, poorer, or first-generation college students. These factors influence the production of high-quality code even more than they influence learning. These pressures are going to be even greater during a time of a pandemic.

We should give grades depending on how much is learned in a given course. That’s hard to do without measuring students as they enter the course and as they leave the course, and grading on the delta. There is a movement suggesting that “labor-based grading” leads to more compassionate and equitable grading (see article here). I’m not arguing for that.

State your criteria clearly and be objective in grading

I’m arguing that we aim for a Teaching Perspective most closely aligned with the one called “Transmission.” The goal of a Transmission teacher is for students to learn what is needed for the next course or to meet the course objectives. Assessments of understanding should be as objective as possible. Grades should represent achievement of the learning objectives and nothing else.

There is a lot in any given CS course that is not about preparing students for the next course in the sequence. We cover a lot of material. In some courses, we’re preparing students for the imagined technical interview with Google or Amazon. It’s not fair to require understanding and performance on standards that go beyond the course objectives.

I recommend setting the objectives clearly, announcing them on the first day, and grading to those. It’s okay to aim for the targeted average on each assignment to be low but passing (e.g., a “C”), as long as you’re clear and fair. In my classes, I rely heavily on weekly quizzes because those are more likely to lead to learning (see post here). I give points for writing code, but that’s to encourage the activity, not to make-or-break the students’ grade. Programming is for learning, and the quizzes, midterm, and final exam are assessments.

Go ahead and bore your best students

Students with a lot of computing background get an easy “A” in my courses. That’s fine. I expect that. I explicitly tell my students that I teach to the bottom third of the course. I want to move B and C students up into A’s and B’s. I give out a lot of A’s. In years past, I did a series of blog posts on “Boredom vs Failure” (here’s the first post in the series, and here’s the last one). The question is: which is worse, to bore and give easy A’s to the most privileged and most prepared students, or to fail (or discourage to the point that they drop) the students with less privilege and the least computing background? Think about the students who might fail or drop out in a system that makes sure that the most well-prepared students are challenged. Each one of those students who continues on does more to change the status quo than keeping the more privileged students from getting bored. Helping the students with less computing background succeed makes a much bigger difference for society long-term than keeping the most privileged students entertained.

One response to this proposal is that I’m degrading the value of past A’s. The A’s don’t mean the same thing anymore. That’s true. I take a historical perspective. Those A’s were earned when the students with less computing background were not being taught with methods that helped them succeed (Proposal #1). Those A’s were earned in unfair competition, where the students with prior computing background were compared to students with less computing background. I’m proposing a more just system where the students with less computing background have a chance at the highest grades, where they’re taught in ways that meet their needs, and where their teachers believe that they can grow and improve. I’m not particularly concerned about preserving the past glories of those who won in an unjust system.

If we pre-allocate, ration, or otherwise curve “down” grades so that the top scores are a scarce resource in a competitive system, we are privileging the most prepared students and disadvantaging the least prepared students. I am proposing differentiated instruction. Teach explicitly for the least-prepared students. You will likely have to give up on pushing your top students to greater excellence — that’s the kind of privilege which we have to be willing to surrender. Aim to help every student achieve their potential, and if you have to make a choice, make choices in favor of the students with less privilege and less computing background.

July 27, 2020 at 7:00 am 13 comments

Paradigm shifts in education and educational technology: Influencing the students here and now

Back on my last blog post referencing Morgan Ames’ book The Charisma Machine, Alan Kay said in a comment, “What we have here is a whole world view and a whole different world.” I’ve been thinking about that sentence a lot because it captures what I think is going on here. A Kuhnian paradigm shift is happening (and maybe has already happened) in research around education and educational technology, from the world of Papert and Bruner to the world of learning sciences. I am going to take a pass at describing the change that I see happening in the field, but I encourage you also to read the International Society of the Learning Sciences (ISLS) presidential address from Victor Lee here, which describes the field with more authority and authenticity than I can.

I remember asking Janet Kolodner (first editor-in-chief of the Journal of the Learning Sciences), “Why? Why learning sciences? We have educational psychology and cognitive science and so many other education disciplines.” She said that learning scientists were tired of just knowing what should happen. They wanted to get out to influence education practice, and to understand why learning doesn’t happen when it should. Cognitive scientists mostly (at the time) ignored affect and motivation. Educational psychologists most often worked in controlled laboratories or experimental classrooms. Learning scientists wanted to understand and influence what was really happening in educational contexts, both formal and informal. More, they were devoted to expanding access to high-quality education. Yes, learning scientists explore cutting-edge technologies to see what is possible, but even more, they try to figure out the contexts that enable or inhibit learning for real kids. Look at the titles of the Invited Speakers at ICLS 2020: Lost and found in dialogue: Embracing the promises of interdiscursivity and diminishing its risks, The Ed-Tech Imaginary, and Learning as an Act of Fugitivity. Words like “promises” and “imaginary” and “fugitivity” reflect a desire to change, and to respond when what we thought might be turns out differently in reality. (Audrey Watters’ keynote is available as an essay here.)

David Feldon told me once that the field is misnamed. It’s much more “Learning humanities” than “Learning sciences.” Once you decide to study what’s going on in actual practice with actual students, you find that you’re mostly in studies with really small n. Contexts, teachers, and students vary wildly. Nobody that I know in learning sciences is trying to invent a general dynamic medium for thought, because it’s so hard to get anything actually adopted and used in an impactful manner. I see Jim Spohrer’s work in Service Science as being part of the same paradigm — how do you actually get services designed and implemented that work in practice?

This shift from the general to the specific, and from what could work to what does work is true in my research too. One of my recent NSF proposals is about working closely with a particular school district to figure out what is going to work there. What we know about Brookline or Brasil is almost irrelevant for this district. Another proposal is about inventing a dynamic medium for thought — but in a particular set of classes, in a task-specific form. I still would love to have a general dynamic medium for thought (as Alan suggests), but I believe we have to figure it out from the ground up. Over time, we will find specific notations that can work for specific tasks, and generalize as possible from there.

The majority of the literature that I draw on these days is about teachers: how they learn, why pre-service education has so little influence on actual teacher practice, and how to influence adoption. Teachers are a gateway for technology in the classroom. There are lots of technologies that could work with kids, but don’t work with teachers. In my work today, I draw on Bruner and Papert for their theoretical framings. I draw on Bruner’s laboratory-based work (e.g., his definition of scaffolding). I draw on Papert’s descriptions of what the computer offers learners, e.g., its protean nature. But I draw less on their implementation work. Bruner’s MACOS was a brilliant project that had a catastrophic result because they didn’t consider enough what would actually work in US schools. Papert created interesting interventions that didn’t become systemic or sustained. Ames is telling me what’s going wrong in actual implementations of OLPC and maybe some of why it went wrong. If I want things to be actually adopted, I need to avoid the mistakes that The Charisma Machine is describing.

David’s description of what happened in Brasil in a comment to that earlier blog post is fascinating and super-useful, but doesn’t decrease the value of Ames’ description in Paraguay. I don’t agree with all of her rationalizations of why things turned out as they did (e.g., I don’t find the “technically precocious boys” perspective compelling or having explanatory power), and there are very likely things she missed. But what she describes obviously did happen. Learning from the experiences she describes informs our design processes and iterative feedback loops as a way of improving outcomes.

Like any paradigm shift, it doesn’t mean that all the work that went before is wrong. The questions being asked in each paradigm are different. They start from different world views. Papert and Bruner both offer a vision of what we want, Logo and MACOS. Both ran up against the reality of school in the US, where Thorndike won and Dewey lost. Now, how do we help every student, in real school contexts?

Nathan Holbert and David Weintrop recently told me a great phrase that’s common in the constructionism community (variously attributed to Seymour Papert or Uri Wilensky): “Are you designing for Someday or are you designing for Monday?” Are you designing for a world that might be, or are you designing for things that can go in the classroom soon? Neither are wrong. I don’t think that they even need to be a dichotomy. In my task-specific programming work, I’m making things that can’t go in the classroom Monday, but could go in the classroom next year, which is still a lot closer than Someday. Even to be in the classroom next year, I have to start from where schools are now. There won’t be a Dewey-an revolution in schools over the next year. But maybe Someday there will.

July 6, 2020 at 7:00 am 10 comments

Subgoal labelling influences student success and retention in CS

I have written a lot about subgoal labeling in this blog. Probably the best way to find it all is to see the articles I wrote about Lauren’s graduation (link here) and Briana’s (link here). They have continued their terrific work, and have come out with their most impressive finding yet.

In work with Adrienne Decker, they have shown that subgoal labeling reduces the rate at which students fail or drop out of introductory computer science classes: “Reducing withdrawal and failure rates in introductory programming with subgoal labeled worked examples” (see link here). The abstract is below.

We now have evidence that subgoal labelling leads to better learning and better transfer, works across different languages, works in both text-based and block-based programming languages, and works in Parsons Problems. Now, we have evidence that its use leads to macro effects, like improved student success and retention. We also see a differential impact — the students most at risk of failing are the ones who gain the most.

This is a huge win.
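If you haven’t seen subgoal labels before, here is the flavor: in a worked example, each chunk of steps gets a label naming its purpose, making the instructor’s tacit grouping explicit. A hypothetical JavaScript example (my sketch, not from their materials):

    // Worked example: find the average of an array of numbers.
    const values = [4, 8, 15, 16];

    // Subgoal: initialize the accumulator
    let sum = 0;

    // Subgoal: loop through every element
    for (let i = 0; i < values.length; i++) {
      // Subgoal: update the accumulator
      sum = sum + values[i];
    }

    // Subgoal: compute and report the result
    const average = sum / values.length;
    console.log(average); // prints 10.75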

Abstract

Background: Programming a computer is an increasingly valuable skill, but dropout and failure rates in introductory programming courses are regularly as high as 50%. Like many fields, programming requires students to learn complex problem-solving procedures from instructors who tend to have tacit knowledge about low-level procedures that they have automatized. The subgoal learning framework has been used in programming and other fields to breakdown procedural problem solving into smaller pieces that novices can grasp more easily, but it has only been used in short term interventions. In this study, the subgoal learning framework was implemented throughout a semester-long introductory programming course to explore its longitudinal effects. Of 265 students in multiple sections of the course, half received subgoal-oriented instruction while the other half received typical instruction.

Results: Learning subgoals consistently improved performance on quizzes, which were formative and given within a week of learning a new procedure, but not on exams, which were summative. While exam performance was not statistically better, the subgoal group had lower variance in exam scores and fewer students dropped or failed the course than in the control group. To better understand the learning process, we examined students’ responses to open-ended questions that asked them to explain the problem-solving process. Furthermore, we explored characteristics of learners to determine how subgoal learning affected students at risk of dropout or failure.

Conclusions: Students in an introductory programming course performed better on initial assessments when they received instructions that used our intervention, subgoal labels. Though the students did not perform better than the control group on exams on average, they were less likely to get failing grades or to drop the course. Overall, subgoal labels seemed especially effective for students who might otherwise struggle to pass or complete the course.

Keywords: Worked examples, Subgoal learning, Programming education, Failure rates

June 29, 2020 at 7:00 am 7 comments

Why do students study computing, especially programming

Alan Kay asked me in a comment on my blog post from Monday:

You and your colleagues have probably done a survey over the years, but it would be useful to see one or two examples, and especially one from the present time of “why are you currently studying computing, especially programming?”

It would be illuminating — and very important — to see the reasons, and especially the percentage who say: “to learn and understand and do computing and programming”.

A half-dozen papers sprang to mind. Rather than type them into a teeny tiny response box, I’m going to put them here. This is not a comprehensive survey. It’s the papers that occurred to me at 8:30 am EDT in response to Alan’s query.

The biggest recent study of this question is the Santo, Vogel, and Ching white paper CS for What? Diverse Visions of Computer Science Education in Practice (find it here). This paper is particularly valuable because it starts K-12 — what are the reasons for studying CS in school?

The most recent paper I know on this topic (and there were probably new ones at RESPECT and SIGCSE 2020 that I haven’t found yet) is this one from Koli, It’s like computers speak a different language: Beginning Students’ Conceptions of Computer Science (find it here). I liked this one because I most resonated with the “Creator” perspective, but I design today for those with the “Interpreter” perspective.

Alan particularly asked what we had done in our group. We started asking these questions when we were doing Media Computation (here’s a 2005 paper where we got those answers from Georgia Tech and Gainesville College students — GT students mostly wanted to know how to use a computer better and then get a good grade, while Gainesville students wanted to know what programming was). We got different answers in the follow-on MediaComp Data Structures course, where we started seeing a real split in views (see Lana Yarosh’s paper here). When we were doing “Georgia Computes!”, we did a statewide survey in 2010 to understand influences on students’ persistence in CS (see paper here). This is important to read, to realize that it’s not just about ability and desire. Women and BIPOC students are told that they don’t belong, and they need particular attention and encouragement to get them to go on, even if they believe they could and should. Probably the study from my group most explicitly on this question is Mike Hewner’s Undergraduate conceptions of the field of computer science (see paper here).

Two of my former students (though not working with me) developed a validated computing attitudes survey (Tew, Dorn, and Schneider, paper here). Here, they ask experts what CS is, then use the same instrument to ask students what CS is, so that they can compare how “expert-like” the students’ answers are.

Not from my research group, but a really important paper in this space is Maureen Biggers et al’s Student perceptions of computer science: a retention study comparing graduating seniors with CS leavers (see link here). Most studies look at those who stay in CS. Maureen and her team also interviewed those who left, and how their perception of the field differed.

There are so many others, like the “Draw a Computer Scientist” task to elicit what the field is about (see example papers here and here). I particularly like Philip Guo’s group’s paper on “conversational programmers” — people who study programming so that they can talk to programmers, not to ever program or to understand programming (see CHI’16 paper here).

Here’s what I think is the main answer to Alan’s question: the answers have changed over the years, but some come up consistently. Some students do want to learn and understand programming itself. Others want to learn programming as part of general computer skills.

I’m going to stop here and push this out before my meetings start for the day. I invite readers to add their favorite papers answering this question below.

June 17, 2020 at 9:03 am 12 comments

How do teachers teach recursion with embodiment, and why won’t students trace their programs: ICLS 2020 Preview

This coming week was supposed to be the International Conference of the Learning Sciences (ICLS) 2020 in Nashville (see conference website here). But like most conferences during the pandemic, the face-to-face meeting was cancelled (see announcement here). The on-line sessions are being announced on the ICLS2020 Twitter feed here.

I’m excited that two of my students had papers accepted at ICLS 2020. I haven’t published at ICLS since 2010. It’s nice to get back involved in the learning sciences community. Here’s a preview of their papers.

How do teachers teach recursion with embodiment

I’ve written here about Amber Solomon’s work on studying the role of space and embodiment in CS learning. This is an interesting question. We live in a physical world and think in terms of physical things, and we have to use that to understand the virtual, mostly invisible, mostly non-embodied world of computing. At ICER 2018, she used a taxonomy of gestures used in science learning to analyze the gestures she saw in a high school computer science classroom (see link here). Last summer at ITiCSE, she published a paper on how making CS visible in the classroom (through gesture and augmented reality) may reduce defensive climate (see link here). In her dissertation, she’s studying how teachers teach recursion and how learners learn recursion, with a focus on spatial symbol systems.

Her paper at ICLS 2020 is the first of these studies: Embodied Representations in Computing Education: How Gesture, Embodied Language, and Tool Use Support Teaching Recursion. She watched hours of video of teachers teaching recursion, and did a deep dive on two of them.

I’m fascinated by Amber’s findings. Looking at what teachers say and gesture about recursion from the perspective of physical embodiment, I’m amazed that students ever learn computer science. There are so many metaphors and assumptions that we make. One of the teachers says, when explaining a recursive function:

“Then it says … ‘now I have to call.’”

Let’s think about this from the perspective of the physical world, which is where we all start when trying to understand computing (a code sketch follows this list):

  • What does it mean for a function to “say” something?
  • The function “says” things, but I “call”? Who is the agent in this explanation, the function or me? It’s really the computer with the agency, but that doesn’t get referenced at all.
  • Recursion is typically explained as a function calling itself. We typically “call” something that is physically distant from us. If a function is re-invoking itself, why does it have to “call” as if at a distance?
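To ground the quoted explanation in code, here is a hypothetical recursive function of the kind a teacher might be explaining, with the metaphors annotated:

    function countdown(n) {
      if (n === 0) {
        return;          // base case: nothing left to "say"
      }
      console.log(n);    // the function "says" something
      countdown(n - 1);  // "now I have to call": the function calls itself
    }

    countdown(3); // prints 3, 2, 1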

For most computer scientists, this may seem like explaining that the sky is blue or that gravel exists. It’s obvious what all of this means, isn’t it? It is to us, but we had to learn it. Maybe not everyone does. Remember how very few students take or succeed at computer science (for example, see this blog post), and what enormously high failure and drop-out rates we have in CS. Maybe only the students who pick up on these metaphors are the ones succeeding?

Why won’t students trace their programs?

Katie Cunningham’s first publication as a PhD student was her replication and extension of the Leeds working group study, showing that students who successfully trace program code line-by-line are able to answer questions about the code more accurately (see blog post here). But one of her surprising results was that students who start tracing and give up do worse on prediction questions than those students who never traced at all. In her ITiCSE 2019 paper (see post here), she got the chance to ask those students who stopped tracing why they did. She was extending that work with a think-aloud protocol when something unusual happened. Two data science students, who were successful at programming, frankly refused to trace code.
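For readers outside computing education: tracing means simulating execution by hand, recording the value of every variable after every line. Here is a sketch of the task (my own example, not from the study):

    // Trace this loop line-by-line by recording each variable at every step:
    //
    //    i | data[i] | total
    //   ---+---------+------
    //    0 |    3    |   3
    //    1 |    5    |   8
    //    2 |    1    |   9
    const data = [3, 5, 1];
    let total = 0;
    for (let i = 0; i < data.length; i++) {
      total = total + data[i];
    }
    // A successful tracer predicts total === 9 without ever running the code.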

Her paper “I’m not a computer”: How identity informs value and expectancy during a programming activity is an exploration of why students would flat out refuse to trace code — and yet successfully program. She uses Eccles’ Expectancy-Value Theory (which comes up pretty often in our thinking, see this blog post) to describe why the cost of tracing outweighs its utility for these students, which is defined in terms of their sense of identity — what they see themselves doing in the future. Sure, there will be some programs that they won’t be able to debug or understand because they won’t trace line-by-line. But maybe they’ll never actually have to deal with code that complex. Is this so bad?

Katie’s live session is 2:00-2:40pm Eastern time on June 23. The video link will be available on the conference website to registered attendees. A pre-print version of her paper is available here.

Both of these papers give us new insight into the unexpected consequences of how we teach computing. We currently expect students to figure out how their teachers are relating physical space and computation, through metaphors that we don’t typically explain. We currently teach computing expecting students to be able to trace code line-by-line, though some students will not do it (and maybe don’t really need to). If we want to grow who can succeed at computing education, we need to think through who might be struggling with how we’re teaching now, and how we might do better.

June 15, 2020 at 7:00 am 47 comments

Measuring progress on CS learning trajectories at the earliest stages

I’ve written in this blog (and talked about many times) how I admire and build upon the work of Katie Rich, Diana Franklin, and colleagues in the Learning Trajectories for Everyday Computing project at the University of Chicago (see blog posts here and here). They define the sequence of concepts and goals that K-8 students need to be able to write programs consisting of sequential statements, to write programs that contain iteration, and to debug programs. While they ground their work in K-8 literature and empirical work, I believe that their trajectories apply to all students learning to program.

Here are some of the skills that appear in the early stages of their trajectories:

  • Precision and completeness are important when writing instructions in advance. 
  • Different sets of instructions can produce the same outcome. 
  • Programs are made by assembling instructions from a limited set. 
  • Some tasks involve repeating actions. 
  • Programs use conditions to end loops.  
  • Outcomes can be used to decide whether or not there are errors.
  • Reproducing a bug can help find and fix it.
  • Step-by-step execution of instructions can help find and fix errors.

These feel fundamental and necessary — that you have to learn all of these to progress in programming. But it’s pretty clear that that’s not true. As I describe in my SIGCSE keynote talk (the relevant 4 minute segment is here), there is lots of valuable programming that doesn’t require all of these. For example, most students programming in Scratch don’t use conditions to end loops — still, millions of students find expressive power in Scratch. The Bootstrap: Algebra curriculum doesn’t have students write their own iteration at all — but they learn algebra, which means that there is learning power in even a subset of this list.

What I find most fascinating about this list is the evidence that CS students older than K-8 do not have all these concepts. One of my favorite papers at Koli Calling last year was  It’s like computers speak a different language: Beginning Students’ Conceptions of Computer Science (see ACM DL link here — free downloads through June 30). They interviewed 14 University students about what they thought Computer Science was about. One of the explanations they labeled the “Interpreter.” Here’s an example quote exemplifying this perspective:

It’s like computers speak a different language. That’s how I always imagined it. Because I never understood exactly what was happening. I only saw what was happening. It’s like, for example, two people talking and suddenly one of them makes a somersault and the other doesn’t know why. And then I just learn the language to understand why he did the somersault. And so it was with the computers. 

This student finds the behavior of computers difficult to understand. They just do somersaults, and computer science is about coming to understand why they do somersaults? This doesn’t convey to me the belief that outcomes are completely and deterministically specified by the program.

I’ll write in June about Katie Cunningham’s paper to appear next month at the International Conference of the Learning Sciences. The short form is that she asked Data Science students at University to trace through a program. Two students refused, saying that they never traced code. They did not believe that “Step-by-step execution of instructions can help find and fix errors.” And yet, they were successful data science students.

You may not agree that these two examples (the Koli paper and Katie’s work) demonstrate that some University students do not have all the early concepts listed above, but that possibility brings us to the question that I’m really interested in: How would we know?

How can we assess whether students have these early concepts in the trajectories for learning programming? Just writing programs isn’t enough. 

  • How often do we ask students to write the same thing two ways? Do students realize that this is possible? (A sketch follows this list.)
  • Students may realize that programming languages are “finicky” but may not realize that programming is about “precision and completeness.” 
  • Students re-run programs all the time (most often with no changes to the code in between!), but that’s not the same as seeing a value in reproducing a bug to help find and fix it. I have heard many students exclaim, “Okay, that bug went away — let’s turn it in.” (Or maybe that’s just a memory from when I said it as a student…)
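Here is the kind of paired example that the first bullet imagines: two different instruction sequences with one identical outcome (my sketch):

    // One way: three repeated instructions...
    console.log("hi");
    console.log("hi");
    console.log("hi");

    // ...another way: a loop. Different programs, same outcome.
    for (let i = 0; i < 3; i++) {
      console.log("hi");
    }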

These concepts really get at fundamental issues of transfer and plugged vs unplugged computing education. I bet that if students learn these concepts, they would transfer. They address what Roy Pea called “language-independent bugs” in programming. If a student understands these ideas about the nature of programs and programming, they will likely recognize that those are true in any programming language. That’s a testable hypothesis. Is it even possible to learn these concepts in unplugged forms? Will students believe you about the nature of programs and programming if they never program?

I find questions like these much more interesting than trying to assess computational thinking. We can’t agree on what computational thinking is. We can’t agree on the value of computational thinking. Programming is an important skill, and these are the concepts that lead to success in programming. Let’s figure out how to assess these.

May 25, 2020 at 7:00 am 14 comments

SIGCSE 2020: Papers freely available, AP CSA over AP CSP for diversifying computing, and a tour of computing ed research in one hour

My Blog@CACM post for this month was about my first stop on my tour of SIGCSE 2020 papers (see link here). While the SIGCSE 2020 conference was cancelled, the papers are freely available now through the end of June — see all the proceedings here. I’ve started going through the proceedings myself. The obvious place to start such a tour is with the award-winning papers. My Blog@CACM post is on the paper from An Nguyen and Colleen M. Lewis of Harvey Mudd College on the negative impact of competitive enrollment policies (having students apply to get into CS, or requiring a higher-than-just-passing GPA to get into the computing major) on students’ sense of belonging, self-efficacy, and perceptions of the department.

I said that this was the first stop on my tour, but that’s not really true. I’d already looked up the paper Does AP CS Principles Broaden Participation in Computing?: An Analysis of APCSA and APCSP Participants (see link here), because I’d heard about it from co-author Joanna Goode. I was eager to see the result. They show that AP CS Principles is effectively recruiting a much more diverse set of students than the AP CS A course (which is mostly focused on Java programming). But AP CS A students end up with more confidence in computing and much more interest in computing majors and tech careers. Maybe CS A students had more interest to begin with — there is likely some selection bias here. This result reminds me of the Weston et al. result (see blog post here) showing that the female high school students they studied continued on to tech and computing majors and careers if they had programming classes.

I’ve been reading The Model of Domain Learning: Understanding the Development of Expertise (see Amazon link) which offers one explanation of what’s going on here. Pat Alexander’s Model of Domain Learning points out that domain knowledge is necessary to have sustained interest in a domain. You can draw students in with situational interest (having activities that are exciting and engage novices), but you only get sustained interest if they also learn enough about the domain. Maybe AP CSP has more situational interest, but doesn’t provide enough of the domain knowledge (like programming) that leads to continued success in computing.

In my SIGCSE 2020 Preview blog post (posted just two days before the conference was cancelled), I mentioned the cool session that Colleen Lewis was organizing where she was going to get 25 authors to present the entire 700+ page Cambridge Handbook of Computing Education Research in 75 minutes. Unfortunately, that display of organizational magic didn’t occur. However, in a demonstration of approximately the same level of organizational magic, Colleen got the authors to submit videos, and she compiled a 55 minute version (which is still shorter than reading the entire tome) — see it on YouTube here.

There are lots of other great papers in the proceedings that I’m eager to get into. A couple that are high on my list:

  • Dual-Modality Instruction and Learning: A Case Study in CS1 from Jeremiah Blanchard, Christina Gardner-McCune, and Lisa Anthony from University of Florida, which provides evidence that a blocks-based version of Java leads to more and deeper learning on the same assessments than learning with textual Java (see link here).
  • Design Principles behind Beauty and Joy of Computing by Paul Goldenberg and others. I love design principles papers, because they explain why the authors and developers were doing what they were doing. I have been reading Paul since back in the Logo days. I’m eager to read his treatment of how BJC works (see link here).

Please share in the comments your favorite papers with links to them.

May 4, 2020 at 7:00 am 3 comments

Active learning has differential benefits for underserved students

We have had evidence that active learning teaching methods have more benefit for underserved populations than for majority groups (for example, I discussed the differential impact of active learning here). Just published in March in the Proceedings of the National Academy of Sciences is a meta-analysis of over 40 studies giving us the strongest argument yet: “Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math” at https://www.pnas.org/content/117/12/6476. I’ll remind everyone that a terrific resource for peer instruction in computer science is here: http://peerinstruction4cs.com/. Here’s the abstract:

Achievement gaps increase income inequality and decrease workplace diversity by contributing to the attrition of underrepresented students from science, technology, engineering, and mathematics (STEM) majors. We collected data on exam scores and failure rates in a wide array of STEM courses that had been taught by the same instructor via both traditional lecturing and active learning, and analyzed how the change in teaching approach impacted underrepresented minority and low-income students. On average, active learning reduced achievement gaps in exam scores and passing rates. Active learning benefits all students but offers disproportionate benefits for individuals from underrepresented groups. Widespread implementation of high-quality active learning can help reduce or eliminate achievement gaps in STEM courses and promote equity in higher education.
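
To make the headline concrete: the studies in the meta-analysis compare the gap in exam performance between groups under the two teaching approaches. Here is a minimal Python sketch of that arithmetic, with invented scores — this is not the paper’s analysis, just an illustration of what “narrowing the gap” means:

    # Illustrative only: invented exam scores (0-100) for two offerings of the
    # same course -- one taught by lecture, one with active learning.
    lecture = {
        "majority": [82, 75, 90, 68, 84, 79],
        "underrepresented": [70, 62, 77, 58, 66, 73],
    }
    active_learning = {
        "majority": [85, 78, 91, 72, 86, 80],
        "underrepresented": [80, 74, 83, 69, 77, 81],
    }

    def mean(scores):
        return sum(scores) / len(scores)

    def gap(course):
        # Achievement gap: difference in mean exam score between the groups.
        return mean(course["majority"]) - mean(course["underrepresented"])

    print(f"Gap under lecture:         {gap(lecture):.1f} points")
    print(f"Gap under active learning: {gap(active_learning):.1f} points")
    print(f"Gap narrowed by:           {gap(lecture) - gap(active_learning):.1f} points")

In this made-up example, both groups score higher with active learning, but the underrepresented group gains more, so the gap narrows. That is the pattern the paper reports across studies.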

April 20, 2020 at 7:00 am 5 comments

Ebooks, Handbooks, Strong Themes, and Undergraduate Research: SIGCSE 2020 Preview

A few items on things that we’re doing at SIGCSE 2020. Yes, SIGCSE 2020 is still having a face-to-face meeting. Attendance looks to be down by at least 30% because of coronavirus fears.

Barbara Ericson is presenting a paper co-authored with Brad Miller (who won’t be there) on their amazingly successful Runestone open-source platform for publishing ebooks: Runestone: A Platform for Free, On-line, and Interactive Ebooks on Sat Mar 14, 2020 11:10 AM – 11:35 AM in D135. They are also hosting a workshop to help others develop with Runestone: Workshop #401: Using and Customizing Ebooks for Computing Courses with Runestone Interactive on Sat Mar 14, 2020 3:30 PM – 6:30 PM in C120.

I’m part of the massive special session on Thursday 1:45 PM – 3:00 PM in B113 that Colleen Lewis is organizing: Session 2H: The Cambridge Handbook of Computing Education Research Summarized in 75 minutes. Colleen, who must have done graduate work in organizational management (or perhaps cat herding), has organized 25 authors (!) to present the entire Handbook in a single session. Even if I weren’t one of the presenters, I’d go just to see if we can all pull it off! It’s going to be kind of like watching NASCAR — you’re on the edge of your seat as everyone tries to avoid crashing into one another.

Bravo to Bob Sloan, who got this panel accepted this year: Session 6K: CS + X Meets CS 1: Strongly Themed Intro Courses on Fri Mar 13, 2020 3:45 PM – 5:00 PM in Portland Ball Room 255. The panelists are teachers and developers who have put together contextualized introductions to computing, like Media Computation. They have built interesting classes, and I’m eager to hear what they have to say about them.

I am collaborating with Sindhu Kutty on her interesting summer reading group to engage undergraduates in CS research. (Read as: we meet occasionally to work on assessment, but Sindhu is really doing all the work.) The evidence suggests that she’s able to give undergraduates a better understanding of CS graduate research, at a larger scale (e.g., a couple dozen students to one faculty member) than typical undergraduate research programs. It also seems like it might feel a bit safer and easier for female students to try. She was going to present her poster, Undergraduate Student Research With Low Faculty Cost, at RESPECT on Wednesday, but RESPECT is now going to be virtual. I’m not sure how that’s going to work right now.

March 11, 2020 at 7:00 am 10 comments

BDSI – A New Validated Assessment for Basic Data Structures: Guest Blog Post from Leo Porter and colleagues

Leo Porter, Michael Clancy, Cynthia Lee, Soohyun Nam Liao, Cynthia Taylor, Kevin C. Webb, and Daniel Zingaro have developed a new concept inventory that they are making available to instructors and researchers. They have written this guest blog post to describe their new instrument and explain why you should use it. I’m grateful for their contribution!

We recently published a Concept Inventory for Basic Data Structures at ICER 2019 [1] and hope it will be of use to you in your classes and/or research.

The BDSI is a validated instrument to measure student knowledge of Basic Data Structure concepts [1].  To validate the BDSI, we engaged faculty at a diverse set of institutions to decide on topics, help with question design, and ensure the questions are valued by instructors.  We also conducted over one hundred interviews with students in order to identify common misconceptions and to ensure students properly interpret the questions.  Lastly, we ran pilots at seven different institutions and performed a statistical evaluation of the instrument to ensure the questions are properly interpreted and discriminate well between students of different abilities.
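
As a rough illustration of what it means for a question to discriminate well, here is a generic classical-test-theory sketch with invented responses (not the specific analysis reported in [1]). An item discriminates well if students who answer it correctly tend to do better on the rest of the instrument; one common statistic for this is the corrected item-total correlation:

    # Generic classical-test-theory sketch with invented data (not the
    # BDSI's actual statistical evaluation). An item "discriminates" well
    # if getting it right correlates with doing well on the other items.
    from statistics import correlation  # requires Python 3.10+

    # Rows are students, columns are items; 1 = correct, 0 = incorrect.
    responses = [
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 0, 0, 0],
        [1, 1, 1, 1],
        [0, 1, 0, 1],
        [1, 1, 0, 0],
    ]

    for item in range(len(responses[0])):
        item_scores = [row[item] for row in responses]
        # "Corrected" total: leave the item itself out to avoid inflating r.
        rest_scores = [sum(row) - row[item] for row in responses]
        r = correlation(item_scores, rest_scores)
        print(f"Item {item + 1}: corrected item-total r = {r:.2f}")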

What Our Assessment Measures

The BDSI measures student performance on Basic Data Structure concepts commonly found in a CS2 course.  To arrive at the topics and content of the exam, we worked with fifteen faculty at thirteen different institutions to ensure broad applicability.  The resulting topics on the CI include: Interfaces, Array-Based Lists, Linked-Lists, and Binary Search Trees. If you are curious about the learning goals or want more details on the process we used in arriving at these goals, please see our SIGCSE 2018 publication [2].

Why Validated Assessments are Great for Instructors

Suppose you want to know how well your students understand various topics in your CS2 course.  How could you figure out how much your students are learning relative to other schools? You could, perhaps, get a final exam from another school and use it in your class to compare results, but that final exam may not be a good fit for your course.  Moreover, you may find flaws in some of the questions and wonder if students interpret them properly. Instead, you can use a validated assessment. The advantage of using a validated assessment is that there is general agreement that it is measuring what you want to measure and that it accurately measures student thinking.  As such, you can compare your findings to results from other schools that have used the instrument, to determine if your students are learning particular topics better or worse than cohorts at similar institutions.
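
For instance, here is a sketch (in Python, with invented score totals) of the kind of standardized comparison a validated instrument makes meaningful; Cohen's d is one common way to express the difference between your cohort and a published one:

    # Illustrative only: invented BDSI-style totals for two cohorts.
    from math import sqrt
    from statistics import mean, stdev

    my_class = [9, 11, 7, 12, 10, 8, 13, 9]
    published_cohort = [10, 12, 11, 13, 9, 12, 14, 11]

    def cohens_d(a, b):
        # Standardized mean difference using a pooled standard deviation.
        na, nb = len(a), len(b)
        pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                      / (na + nb - 2))
        return (mean(a) - mean(b)) / pooled

    print(f"Effect size (Cohen's d): {cohens_d(my_class, published_cohort):.2f}")

Because both cohorts took the identical, validated instrument, a difference like this is more plausibly about learning than about quirks of any one instructor's exam.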

Why Validated Assessments are Great for Researchers

As CS researchers, we often experiment with new ways to teach courses.  For example, many people use Media Computation or Peer Instruction (PI), two complementary pedagogical approaches developed over the past several decades.  It’s important to establish whether these changes are helping our students. Do more students pass? Do fewer students withdraw? Do more students continue studying CS?  Does it boost outcomes for under-represented groups? Answering these questions using a variety of courses can give us insight into whether what we do corresponds with our expectations.

One important question is: using our new approach, do students learn more than before?  Unfortunately, answering this is complicated by the lack of standardized, validated assessments.  If students score 5% higher on an exam when studying with PI vs. not studying with PI, all we know is that PI students did better on that exam.  But exams are designed by one instructor, for one course at one institution, not for the purposes of cross-institution, cross-cohort comparisons.  They are not validated. They do not take into account the perspectives of other CS experts. When students answer a question on an exam correctly, we assume that it’s because they know the material; when they answer incorrectly, we assume it’s because they don’t know the material.  But we don’t know: maybe the exam contains incidental cues that subtly influence how students respond.

A Concept Inventory (CI) solves these problems.  Its rigorous design process leads to an assessment that can be used across schools and cohorts to validly compare teaching approaches.

How to Obtain the BDSI

The BDSI is available via the Google group.  If you’re interested in using it, please join the group and add a post with your name, institution, and how you plan to use the BDSI.

How to Use the BDSI

The BDSI is designed to be given as a post-test after students have completed the covered material.  Because the BDSI was validated as a full instrument, it is important to use the entire assessment, and not alter or remove any of the questions.  We ask that instructors not make copies of the assessment available to students after giving the BDSI, to try to avoid the questions becoming public.  We likewise recommend giving participation credit, but not correctness credit, to students for taking the BDSI, to avoid incentivizing cheating.  We have found that a successful way to administer the BDSI is to give it as part of a final review session, collect the assessment from students, and then go over the answers.

Want to Learn More?

If you’re interested in learning more about how to build a CI, please come to our talk at SIGCSE 2020 (from 3:45-4:10pm on Thursday, March 12th) or read our paper [3].  If you are interested in learning more about how to use validated assessments, please come to our Birds of a Feather session on “Using Validated Assessments to Learn About Your Students” at SIGCSE 2020 (5:30-6:20pm on Thursday, March 12th) or our tutorial on using the BDSI at CCSC-SW 2020 (March 20-21).

References:

[1] Leo Porter, Daniel Zingaro, Soohyun Nam Liao, Cynthia Taylor, Kevin C. Webb, Cynthia Lee, and Michael Clancy. 2019. BDSI: A Validated Concept Inventory for Basic Data Structures. In Proceedings of the 2019 ACM Conference on International Computing Education Research (ICER ’19).

[2] Leo Porter, Daniel Zingaro, Cynthia Lee, Cynthia Taylor, Kevin C. Webb, and Michael Clancy. 2018. Developing Course-Level Learning Goals for Basic Data Structures in CS2. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE ’18).

[3] Cynthia Taylor, Michael Clancy, Kevin C. Webb, Daniel Zingaro, Cynthia Lee, and Leo Porter. 2020. The Practical Details of Building a CS Concept Inventory. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (SIGCSE ’20).

February 24, 2020 at 7:00 am Leave a comment

Call for participation at the 2nd Excited Summer School on Research in Computing Education

Sharing a note from Birgit Rognebakke Krogstie:

Call for participation at the 2nd Excited Summer School on Research in Computing Education

We are happy to announce the second Excited Summer School on Research in Computing Education, which will take place 8-12 June 2020 at NTNU campus Gløshaugen in Trondheim, Norway. The school is intended for PhD students and post docs. There will be varied sessions led by experts in the computing education field, combining presentations with discussion and hands-on groupwork. Topics range from how to teach beginner-level programming courses to the inclusion of environmental sustainability in IT education. The school is a great arena for network building and informal discussion with fellow students as well as with the invited experts. For more information, see our web site: https://www.ntnu.edu/excited/call-for-participants-excited-summer-school-2020. Application deadline: 10 March 2020.

February 21, 2020 at 9:34 am 3 comments
