Posts tagged ‘ICER’
New ICER paper award for Lasting Impact: Guest blog post from Quintin Cutts
I serve on the ACM SIGCSE International Computing Education Research (ICER) conference steering committee. Quintin Cutts is Chair of the Steering Committee. I offered to share his announcement of a new Lasting Impact paper award here in the blog.
This is an invitation to nominate a paper for the ICER Lasting Impact Award 2022, or to offer to serve on the judging panel.
Which ICER paper has caused you to change the way you teach, or the direction of your research? Which has helped you to see and understand CS education more clearly? Has it also had an impact right across the community? I know which paper I would nominate, if I were allowed (but I’m not! – see below). It’s been a game-changer for me, and across CS education. Which one has done this for you and others?
1. Description of the award
The ICER Lasting Impact Award recognizes an outstanding paper published in the ICER Conference that has had meaningful impact on computing education. Significant impact can be demonstrated through citations, adoptions and/or adaptations of techniques and practices described in the paper by others, techniques described in the paper that have become widely recognized as best practices, further theoretical or empirical studies based on the original work, or other evidence that the paper is an outstanding work in the domain of computing education research. The paper must have been published in ICER at least 10 years prior (i.e., for the 2022 award, papers must have been published in or before ICER 2011).
2. Requirements for nominating a paper
a. An ACM Digital Library link to the paper being nominated.
b. A brief summary of the technical content of the paper and a brief explanation of its significance (limit 750 words).
c. Signatories to the summary and significance statement, with at least two current SIGCSE members. Providing the name, contact email address, and affiliation of each person who has agreed to sign the endorsement is sufficient.
3. Nominating yourself as a potential award judge
Please consider nominating yourself as a potential judge. We are seeking judges who have significant experience in the ICER community. We will ask judges to serve who do not have nominated papers, to avoid conflicts of interest.
4. Additional Notes
a. ICER Steering Committee members cannot nominate papers.
b. The ICER Steering Committee chair, Quintin Cutts, will run the process this year, and his papers cannot be nominated.
c. In this inaugural year of the Award, we will not have a pre-defined rubric. We will ask the judges to report the rationale for their decision, and the report will be made public when we announce the winner.
5. Timetable (all times, 23.59 AoE)
17th July: Nominations close.
18th July: Judging panel selected from the candidate pool, depending on the number of nominations and conflicts. Papers sent out to the judges.
2nd August: Judging panel sits to deliberate and makes a decision, which is passed to PC chairs. Winner notified.
The award will be presented at the ICER 2022 conference in Lugano, Switzerland either in person or on-line.
6. Submitting nominations
Please send both paper nominations and judging self-nominations to me, Quintin Cutts at Quintin.Cutts@glasgow.ac.uk
ICER 2021 Preview: The Challenges of Validated Assessments, Developing Rich Conceptualizations, and Understanding Interest #icer2021
The International Computing Education Research Conference (ICER) 2021 is this week (website here). It should have been in Charleston, South Carolina (one of my favorite cities), but it will instead be all on-line. Unlike previous years, if you are not already registered, you’re unfortunately out of luck. As seen in Matthias Hauswirth’s terrific guest blog post from last week (see here), getting set up in Clowdr is complicated. ICER won’t have the resources to bring people on-line and get them through the half hour prep sessions on-the-fly. There will be no “onsite” registration.
However, all the papers should be available in the ACM Digital Library (free for some time), and I think all the videos of the talks will be made available after the fact, so you can still gain a lot from the conference. Let me point out a few of the highlights that I’m excited about. (As of this writing, the papers are not yet appearing in the ACM DL — all the DOI links are failing for me. I’ll include the links here in hopes that everything is fixed soon.)
Our keynoter is Tammy Clegg, whom I got to know when she was a PhD student at Georgia Tech. She’s now at U. Maryland doing amazing work around computation and relevant science learning. I’m so looking forward to hearing what she has to say to the ICER community.
Miranda Parker, Allison Elliott Tew, and I have a paper “Uses, Revisions, and the Future of Validated Assessments in Computing Education: A Case Study of the FCS1 and SCS1.” This is a paper that we planned to write when Miranda first developed the SCS1 (first published in 2016). We created the SCS1 in order to send it out to the world for use in research. We hoped that we could sometime later do in CS what Richard Hake did in Physics, when he used the FCI to make some strong statements about teaching practices with a pool of 6,000 students (see paper here). Hake’s paper had a huge impact, as it started making the case to shift from lecture to active learning. Could we use the collected use of the SCS1 to make some strong arguments for improving CS learning? We decided that we couldn’t. The FCI was used in pretty comparable situations, and it’s tightly focused on force. CS1 is far too broad, and the FCS1 and SCS1 are being used in so many different places — not all of which they’ve been validated for. Our retrospective paper is kind of a systematic literature review, but it’s done from the perspective of tracing these two instruments and how they’ve been used by the research community.
One of the papers that I got a sneak peek at was “When Wrong is Right: The Instructional Power of Multiple Conceptions” by Lauren Margulieux, Paul Denny, Katie Cunningham, Mike Deutsch, and Ben Shapiro. The paper explores the tensions between direct instruction and more student-directed approaches (like constructionism and inquiry learning) (see a piece I did in 2015 about these tensions). The basic argument of this new paper is that just telling students the right answer is not enough to develop rich understanding. We have to figure out how to help students hold and compare multiple conceptions (not all of which are canonical or held by experts), so that they can compare and contrast, and use the right one at the right time.
I’m chair for a session on interest. While I haven’t seen the papers yet, I got to watch the presentations (which are already loaded in Clowdr). “Children’s Implicit and Explicit Stereotypes on the Gender, Social Skills, and Interests of a Computer Scientist” by de Wit, Hermans, and Aivaloglou is a report on a really interesting experiment. They look at how kids associate gender with activities (e.g., are boys more connected to video games than girls?). The innovative part is that they asked the questions and timed the answers. A quick answer likely connects to implicit beliefs. If they take a long time to answer, maybe they told you what they thought you wanted to hear? The second paper, “All the Pieces Matter: The Relationship of Momentary Self-efficacy and Affective Experiences with CS1 Achievement and Interest in Computing” by Lishinski and Rosenberg, asks what leads to students succeeding and wanting to continue in computing. They look at students’ affective state coming into CS1 (e.g., how much do they like computing? How much do they think that they can succeed in computing?), and relate that to students’ experiences and affective state after the class. They make some interesting claims about gender — that gender gaps are really self-efficacy gaps.
One of the more unusual sessions is a pair of papers from IT University of Copenhagen that make up a whole session. ICER doesn’t often give over a whole session to a single research group on multiple papers. One is “Computing Educational Activities Involving People Rather Than Things Appeal More to Women (Recruitment Perspective)” and the other is “Computing Educational Activities Involving People Rather Than Things Appeal More to Women (CS1 Appeal Perspective).” The pitch is that framing CS1 as being about people rather than things leads to better recruitment (first paper) and more success in CS1 (second paper) in terms of gender diversity. It’s empirical support for a hypothesis that we’ve heard before, and the authors frame the direction succinctly: “CS is about people not things.” Is that succinct enough to get CS faculty to adopt this and teach CS differently?
The Drawbacks of the One-Second Conference Trip. Or, how to prepare for ICER 2021. Guest Blog Post from Matthias Hauswirth
I miss physical conferences. But there are some things about them I do not miss at all. I don’t miss sprinting through airports to catch a connecting flight. I don’t miss standing in line at immigration for over an hour, just to enter the next long line to get through customs. And I don’t miss sitting in a tight middle seat for ten hours straight.
With today’s virtual conferences the trips are more pleasant. I can travel there with a single mouse click. It’s a one-second trip. And I love that! *
However, by eliminating the trip to the conference, we also eliminated an opportunity to prepare for the conference while being stuck in airports, planes, stations, and trains. My physical conference trips used to provide ample idle time. I used that time to contact colleagues to schedule a dinner, lunch, or coffee at the conference; to read the conference program and highlight the talks I wanted to see; to check out the map of the venue to know where to find the relevant rooms; and even to read a paper or two to prepare for talking to the authors at the conference.
That kind of preparation takes more than a second. And without the time provided by those arduous trips, I might show up ill prepared and miss out on half of the fun.
So here is my plan. For my next one-second conference trip, I will allocate a little bit of extra time to prepare. Not crammed into an airplane seat, but at home, in a comfy chair, with a nice cup of coffee.
Oh, and if your next conference trip takes you to ICER 2021 this coming Monday, here are some suggestions from the ICER Chairs for how to prepare for this conference, which will be hosted in the most recent version of Clowdr:
- Find the invitation email you received from Clowdr (check your spam folder, too!) and log in (3 minutes).
- Watch the ICER 2021 Clowdr Intro video (13 minutes). This will teach you the basics of how to navigate the platform. We recommend following along interactively on the Clowdr site as you watch, to familiarize yourself with the navigation.
- Watch the ICER 2021 Paper Sessions: Participant Experience video (14 minutes). This will teach you how our paper sessions will work. You won’t just be watching videos; you’ll also be interacting while you watch, talking in small groups afterwards, and asking questions.
- Once logged in, read the ICER Clowdr Experience FAQ page (4 minutes). This has the videos above and more detail for specific types of events.
- On Clowdr, read the Code of Conduct page (3 minutes). Everyone is responsible for following these rules to ensure everyone feels safe and welcome.
- On Clowdr, read the How to Set Up Your Profile page and set up your profile (3 minutes). This ensures people know who you are, what your name and pronouns are, where you’re visiting from, and what roles you’re playing at the conference.
In Clowdr you will find a lot of content, including the entire program. We recommend that inside Clowdr you “star” events you are interested in to create your personal schedule. There is a page for each paper and poster/lightning talk. On each paper page you will already find the presentation as an embedded video; on each ICER poster page, the poster pitch video and the PDF of the poster; and on each ICER lightning talk page, the talk slide. Have a quick look to plan your personal schedule. And while you’re there, why not leave a message or comment for the authors in the chat at the right of the paper/poster’s page? Note that the links to the papers in the ACM DL are not yet active; we expect ACM to make the DOIs work and the papers visible in the DL by the start of the conference.
We are confident that with an hour or so of up-front effort you will get much more out of the conference! (We suspect, though, that you will end up spending more than an hour because the content draws you in!) ICER 2021 is a compact conference packed with exciting content and interaction. Log in now to make the most of it!
*) I also very much love the minimal carbon footprint, low cost, and reduced health risks of virtual conferences.
ICER 2021 Call for Papers out with Changes for ICER Authors
The International Computing Education Research (ICER) Conference Call for papers is now out — see the conference website here. Abstracts are due 19 March, and full papers are due 26 March.
There are big changes in the author experience of ICER 2021 — see a blog post describing them here. Here are two of them:
- ICER is going to use the new ACM TAPS publication process, and the paper size limits are now based on word count instead of number of pages. I hope that this relieves authors of some of the tedium of last-minute adjusting of figure sizes and tweaking of text/fonts to just barely squeeze everything into the given page limits.
- There will now be conditional accepts. It’s heartbreaking when there’s a paper that’s so good, but it has one small, easily-fixable fatal flaw (something that the reviewers and program chairs feel is not publishable as-is). In a conference setting, when the only options are accept or reject, there’s not much to do but reject. Now, there will be an option to conditionally accept a paper, with a small review process after revision to make sure that the flaw is fixed.
Please do submit to ICER — let’s get lots of great CS Ed research out into the community discussion!
Award-winning papers at ICER 2020 explore new directions and point towards the next work to do
The 2020 ACM SIGCSE International Computing Education Research Conference was in August (see website here), hosted in Dunedin, New Zealand — but was unfortunately entirely virtual. I became so much more aware of the affordances of face-to-face conferences when attending one of my favorite conferences all through my screen. The upside of the all-virtual format is that all the talks are available on YouTube (see ICER 2020 channel here). Here are my comments on the three papers receiving awards — see them listed here.
What Do We Think We Think We Are Doing?: Metacognition and Self-Regulation in Programming. (Paper link)
This is the paper that I have read and re-read the most since the conference. The authors review what the literature tells us about metacognition in programming. Metacognition is thinking about thinking, like “Did I really understand that? Maybe I should re-read this. Or maybe I should write down my thoughts so I can reflect on them. I’m not sure that I’m making progress here. Taking a walk would probably help me clear my head and focus.”
One of their findings that is most intriguing is “Metacognitive knowledge is difficult to achieve in domains about which the learner has little content knowledge.” In other words, you can’t teach students metacognition and self-regulation first, and then teach them something using those new thinking skills. Learners have to know some of the domain first. Now why is that?
Here’s a hypothesis: Metacognition and self-regulation are hard. They take a lot of cognitive load. You have to pay attention to things that are invisible (your own memory and thoughts) and that’s hard. Trying to learn or problem-solve at the same time that you’re monitoring yourself and thinking about your own learning — super hard. Maybe you have to know enough about the domain for some of that activity to be automatized, so that you don’t have to pay as much attention to it in order to do it.
So the biggest hole I see in this paper (which given that it’s a review paper, probably means that the hole is in the literature) is that it does not consider enough factors like gender, race, disability, or SES (e.g., wealth). (Gender gets mentioned when reporting Alex Lishinski’s great work, but only with respect to self-efficacy.) My hypothesis is that the story is more complicated when you consider non-dominant groups. If you don’t think you belong, that takes more of your attention, which takes attention away from your learning — and leaves even less attention for metacognition and self-regulation. If you’re worrying about your screen reader working or where you’re going to get dinner tonight, how do you also have attention left over for monitoring your learning?
The biggest unique opportunity I see in thinking about metacognition and programming is in thinking about debugging. Like psychology or veterinary science, but unlike most other fields, a lot of a computer scientist’s job is in understanding the “thinking” (behavior, processing, whatever) of another agent. When you’re debugging your program, isn’t that a kind of metacognition? “Okay, what is the computer doing here? How is it interpreting what I wrote? Oh wait, is that what I wanted to write? Is that what I wanted to happen?” The complexity of mapping your thoughts and intentions to what you wrote to what the computer did is enormous. Now, debug someone else’s code — you’ve got what you want in mind, you’re constructing a model of the mind of whoever wrote the code before you (did they know what they were doing? is this code brilliant or broken?), and you’re trying to figure out how the computational agent is “thinking about” the code. There’s some seriously complex metacognition going on there.
Exploring Student Behavior Using the TIPP&SEE Learning Strategy. (See paper here.)
No surprise that Diana Franklin’s CANON Lab at U. Chicago continues to do terrific and award-winning work. I’m excited about the TIPP&SEE learning strategy. A commonly found problem in computer science education is that students are bad at Explain In Plain English (EIPE) problems (e.g., see this SIGCSE 2012 paper on the topic). EIPE problems are a measure of students being able to step back from the structure and behavior of code to describe its function or purpose. Katie Cunningham has been exploring how some students focus more on the purpose of the program, while others get stuck in the code and can’t see the purpose. The TIPP&SEE learning strategy explicitly addresses these problems. Students are guided through how to understand a programming project and how to relate code to purpose.
This award-winning paper (which follows on their SIGCSE 2020 paper) shows us that students using the TIPP&SEE approach perform better than students who don’t. They get more of their programs done. The SIGCSE 2020 paper shows that they learn more.
The papers totally convince me that this strategy works. The next question I want answered is how and why. The SIGCSE paper does some qualitative work, but it’s a pretty big n — 184 students. With this kind of scale, the programs are given and the problems are given. There’s not as much opportunity for the detailed cognitive interviews to figure out how the students are thinking about interpreting programs. What happens when these students just go to the Scratch website to look at something that they want to reuse? Do they use TIPP&SEE? Do they understand the programs that they just happen to come across? What happens when they want to build something, where they provide their own purpose? Can they draw on TIPP&SEE and succeed? This is not a critique of the papers — they’re great and make real contributions. I’m thinking about what I want to know next.
Hedy: A Gradual Language for Programming Education. (See program link here.)
Easily my favorite paper at ICER 2020. Felienne is doing what I am trying to do. Let’s invent new, more usable programming languages! I am happy that she got this paper published, because (selfishly) it gives me hope that I can get my new work published. I am thrilled that the ICER community valued this paper so highly that it received a John Henry award.
The basic idea is to create a sequence of programming languages, where advancing levels have most of the elements of the previous level but include new elements. Her earliest level has no punctuation — no quotes, no semi-colons, no curly-braces. I recently built a task-specific programming language that had the same attribute, and one of the students I’m working with looked at it and asked, “Wait — you can have programming languages without all that punctuation? Well, then, why do we have so much when it scares people off?” Great question! When do we need all that extra punctuation, and where can we avoid it?
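To make the "no punctuation" idea concrete, here is a minimal sketch of how an early-level, punctuation-free language built around the print/ask/echo keywords mentioned below might be interpreted. This is an illustrative toy written in Python, not Hedy's actual implementation or grammar.

```python
# A toy interpreter for a punctuation-free, Hedy-like "level 1" language.
# Illustrative sketch only; NOT Hedy's actual implementation.
# Commands: "print <text>", "ask <question>", "echo <prefix>".

def run(program: str) -> None:
    last_answer = ""
    for line in program.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        keyword, _, rest = line.partition(" ")
        if keyword == "print":
            print(rest)                      # no quotes needed around text
        elif keyword == "ask":
            last_answer = input(rest + " ")  # remember the user's answer
        elif keyword == "echo":
            print(rest + " " + last_answer)  # repeat the remembered answer
        else:
            print(f"I don't know the command '{keyword}'")

if __name__ == "__main__":
    run("""
    print Hello world
    ask What is your name?
    echo Nice to meet you,
    """)
```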
The next stage is to explore how we design languages like these. (I’m biased, since this is where I’m spending most of my research time these days.) Why do we choose those language features? Why the keywords print, ask, echo, assign, if, else, and repeat? How do we design and iteratively develop the language? How do we know that people can do things that they want to do with this language? My answer to this is participatory design with teachers, but there are many other viable answers. Felienne provides good design rationale for Hedy’s language features, based in literature from computing education and natural language acquisition. In a process of user experience (UX) design, we’d also use iterative development, including testing with real users. This paper shows us use at large scale, and a big chunk of her paper describes what people did with it. It’s fascinating work, but we never hear from any of those users. We don’t know what they liked, what they disliked, what they found frustrating, and what they were able to do. We need to move programming language design closer to user experience design — UX for PX.
All three papers are terrific contributions to the research community, and I plan to cite and build on them all. I’m eager to see what comes next!
Sidebar: I am a member of the ICER Steering Committee (which has no role in reviewing papers or in picking awards), and I was a metareviewer for ICER 2020. I am speaking here just for myself as a reader and attendee.
Why some students do not feel that they belong in CS, and how we can encourage the sense that they do belong
One of my favorite papers at ICER 2019 was by Colleen Lewis and her colleagues, and is available on her website. I’ll quote her first:
Does a match between a student’s values of helping society and their perception of computing matter? Yes! A mismatch between a student’s goals of helping society and their perception of computing predicts a lower sense of belonging. And for students from groups who – on average – are more likely to want to help society (women, Black students, Latinx students, and first-generation college students), this may be particularly problematic! (pdf)
- Lewis, C. M., Bruno, P., Raygoza, J., & Wang, J. (2019). Alignment of Goals and Perceptions of Computing Predicts Students’ Sense of Belonging in Computing. Proceedings of the International Computing Education Research Workshop. Toronto, Canada.
I want to expand a bit on that paragraph. I often get the question, “Why aren’t more women and URM students going into CS?” We’re seeing female students and students of color leaving/avoiding CS at many stages, e.g., Barb’s deep analysis of AP CS*. Colleen and her collaborators are giving us one answer.
We know that students who have a sense of belonging in computing are more likely to stay in computing. Colleen et al. found that students who felt that their values were supported in computing were more likely to feel a sense of belonging. So, if what you want to do with your life matches computing, you’re more likely to stick around in computing. This is the “alignment of goals” and “perceptions of computing” part of the title.
Next step: Students from demographic groups underrepresented in computing were more likely to value community and helping society than other students. These are their goals. Do students see that their goals align with their perception of computing? If so, then you have an increased sense of belonging. Colleen and her colleagues found that if the students who valued community perceived that they could use computing to support communal values, then they were more likely to stick around.
This result is obviously explanatory. It helps us to understand who stays in computing. It also suggests interventions. Want to retain more under-represented students in your CS classes? Help them to see that they can pursue their values in computing. Help them to update their perceptions so that they see the alignment of their goals with computing goals.
But what if you (as the teacher) don’t do that? This paper suggests future research questions. What if your CS class is entirely de-contextualized and doesn’t say anything about what the students might do with computing? What perceptions do the students bring to the CS class if nobody helps them to see the possibilities in computing? Which student goals align with these perceived goals of computing? We might guess what the answers might be, but it really does call for some explicit research. What are students’ goals and perceptions of computing in most CS classes today?
* Check out Barb’s blog at https://cs4all.home.blog/. As I’m writing this, Barb is finishing up the 2019 AP analysis. The gap between white and Black student pass rates on AP CSP is enormous, far larger than the gap on AP CS A. I’m hoping that she has updates there by the time this post appears.
Social studies teachers programming, when high schools choose to teach CS, and new models of cognition and intelligence in programming: An ICER 2019 Preview
My group will be presenting two posters at ICER this year.
- Bahare Naimipour (Engineering Education Research PhD student at U-Michigan) will be presenting our participatory design session with social studies educators, Helping Social Studies Teachers to Design Learning Experiences Around Data–Participatory design for new teacher-centric programming languages. We had 18 history and economics teachers building data visualizations in either Vega-Lite or JavaScript with Google Charts. Everyone got the starter visualization running and made changes that they wanted in less than 20 minutes. Those who started in Vega-Lite also tried out the JavaScript code, but only about 1/4 of the JS groups moved to Vega-Lite successfully.
- Miranda Parker (Human-Centered Computing PhD student at Georgia Tech) will be presenting her quantitative model explaining about half of the variance in whether Georgia high schools taught CS in 2016, A Statewide Quantitative Analysis of Computer Science: What Predicts CS in Georgia Public High School. The most important factor was whether the school taught CS the year before, suggesting that overcoming inertia is a big deal — it’s easier to sustain a CS program than start one. She may talk a little about her new qualitative work, where she’s studying four schools as case studies about their factors in choosing to teach CS, or not.
Barbara is co-author on a paper, A Spaced, Interleaved Retrieval Practice Tool that is Motivating and Effective, with Iman Yeckehzaare and Paul Resnick. This is about a spaced practice tool that 32% of the students in an introductory programming course used more than they needed to, and the number of hours of use had a measurable positive effect on the final exam grade.
All of our other papers were rejected this year, but we’re in good company — the accept rate was around 18%. But I do want to talk about a set of papers that will be presented by others at ICER 2019. These are papers that I heard about, then I asked the authors for copies. I’m excited about all three of them.
How Do Students Talk About Intelligence? An Investigation of Motivation, Self-efficacy, and Mindsets in Computer Science by Jamie Gorson and Eleanor O’Rourke (see released version of the paper here)
One of the persistent questions in computing education research is why growth mindset interventions are not always effective (see blog post here). We get hard-to-interpret results. I met Jamie and Nell at the Northwestern Symposium on Computer Science and the Learning Sciences in April (amazing event, see here for more details). Nell worked with Carol Dweck during her graduate studies.
Jamie and Nell found mixed mindsets among the CS students that they studied. Some of the students they studied had growth mindsets about intelligence, but their talk about programming practices showed more fixed mindset characteristics. Other students self-identified as having some of both growth and fixed mindset beliefs.
In particular, some students talked about intelligence in CS in ways that are unproductive when it came to the practice of programming. For example, some students talked about the best programmers as being able to write the whole code in one sitting, or never getting any errors. A more growth mindset approach to programming would be evidenced by talking about building programs in pieces, expecting errors, and improving through effort over time.
This is a really helpful finding. It gives us new hypotheses to explore about why growth mindset interventions haven’t been as successful in CS as in other disciplines. Few disciplines distinguish between their knowledge and their practice as acutely as we do in CS. It’s no wonder that we see these mixed mindsets.
Toward Context-Dependent Models of Productive Knowledge in Programming Cognition, by Brian A. Danielak
I’ve known Brian since he was a PhD student, and have been hoping that he’d start to publish some of his dissertation work. I got to read one chapter of it, and found it amazingly insightful. Brian explained how what we might see as a “random walk” of syntax was actually purposeful and rational behavior. I was excited to hear about this paper, and I enjoyed reading it.
It’s such an unusual paper for ICER! It’s empirical, but has no methods section. A big part of it is connecting to prior literature, but it’s not about a formal literature review.
Brian is making an argument about how we characterize knowledge and student success in CS. He points out that we often talk about students being wrong and having misconceptions, which is less productive than figuring out what they understand and where their alternative conceptions work or fail. I see his work following on to the work of Rich et al. (mentioned in this blog post) on CS learning trajectories. There are so many things to learn in CS, and sometimes, just getting started on the trajectory is a big step.
Spatial Encoding Strategy Theory: The Relationship between Spatial Skill and STEM Achievement by Lauren Margulieux.
Lauren is doing some impressive theoretical work here. She reviews the work exploring the relationship between spatial reasoning and CS learning/performance, then constructs a theory explaining the observed results. Since it’s Lauren, the theory is thorough and covers well the known results in this space. I wrote her that I didn’t think the theory explains things that we expect are related to spatial reasoning but for which we don’t yet have empirical evidence. For example, when programmers simulate a program in their mind, their mental models may have a spatial component to them, but I don’t know of empirical work that explores that dimension of CS performance. But again, since it’s Lauren, I wouldn’t be surprised if her presentation addresses this point, beyond what was in the paper. (Also, read Lauren’s own summary of the paper here.)
I am looking forward to the discussion of these papers at ICER!
Come hang out with Wil and me to talk about new research ideas! ACM ICER 2019 Work in Progress Workshop
Wil Doane and I are co-hosting the ACM ICER 2019 Work in Progress workshop that Colleen Lewis introduced at ICER 2014 in Glasgow (my report on participating). Colleen and I co-hosted last year.
It really is a “hosting” job more than an “organizing” or “presenting” role. I love Colleen’s informal description of WiP, “You’re borrowing 4 other smart people’s brains for an hour. Then you loan them yours.” The participants do the presenting. For one hour, your group listens to your idea and helps you think through it, and then you pass the baton. The whole organizing task is “Let’s put these 4 people together, and those 4 people together, and so on. We give them 4 hours, and appropriate coffee/lunch breaks.” (Where the value “4” may be replaced with “5” or “6”.)
Another useful description of WiP is “doctoral consortia for after-graduation.” Doctoral consortia are these great opportunities to share your research ideas and get feedback on them. Then there’s this sense that you graduate and…not have those ideas anymore? Or don’t need to share them or get feedback on them? I’ve expressed concern previously about the challenges of learning when you’re no longer seen as a learner. Of course, PhD graduates are supposed to have new research ideas, which go into proposals and papers. But how do you develop ideas when you’re at the early stages, when they’re not ready for proposals or papers? That’s what the WiP is about.
The WiP page is here (and quoted in part below). To sign up, you just fill out this form, and later give us a drafty concept paper to share with your group.
The WIP Workshop (formerly named the Critical Research Review) is a dedicated 1-day workshop for ICER attendees to provide and receive friendly, constructive feedback on works-in-progress. To apply for the workshop you will specify a likely topic about which you’ll request feedback. WIP participants will be assigned to thematic groups with 4-6 participants.
Two weeks before ICER, participants will submit to the members of their group a 2-4 page primer document to help prepare for the session and identify the types of feedback sought. At WIP, depending upon group size, each participant will have 45-75 minutes to provide context, elicit advice, support, feedback, and critique. Typically, one of the other group members acts as a notetaker during an individual’s time in order to allow the presenter to engage fully in the discussion.
WIP may be the right experience for you, if you would like to provide and receive constructive advice, support, feedback, or critique on computing education research issues such as:
- A kernel of a research idea
- A grant proposal
- A rejected ICER paper
- A study design
- A qualitative analysis approach
- A quantitative analysis approach
- A motivation for a research project
- A theoretical framing
- A challenge in a research project
The goal of the workshop is to provide a space where we can receive support and provide support. The workshop is intended for active CS education researchers. PhD students are instead encouraged to apply for the Doctoral Consortium, held on the same day as WIP.
Growth mindset matters for individual human performance, with a more indirect connection to academic success

Adaptive Parsons problems, and the role of SES and Gesture in learning computing: ICER 2018 Preview
Next week is the 2018 International Computing Education Research Conference in Espoo, Finland. The proceedings are (as of this writing) available here: https://dl.acm.org/citation.cfm?id=3230977. Our group has three papers in the 28 accepted this year.
“Evaluating the efficiency and effectiveness of adaptive Parsons problems” by Barbara Ericson, Jim Foley, and Jochen (“Jeff”) Rick
These are the final studies from Barb Ericson’s dissertation (I blogged about her defense here). In her experiment, she compared four conditions: students learning through writing code, through fixing code, through solving Parsons problems, and through solving her new adaptive Parsons problems. She had a control group this time (different from her Koli Calling paper) that did turtle graphics between the pre-test and post-test, so that she could be sure that there wasn’t just a testing effect of a pre-test followed by a post-test. The bottom line was basically what she predicted: learning did occur, with no significant difference between treatment groups, but the Parsons problems groups took less time. Our ebooks now include some of her adaptive Parsons problems, so she can compare performance across many students on adaptive and non-adaptive forms of the same problem. She finds that students solve more of the adaptive problems, and with fewer attempts, than the non-adaptive versions. So, adaptive Parsons problems lead to the same amount of learning, in less time, with fewer failures. (Failures matter, since self-efficacy is a big deal in computer science education.)
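For readers who haven't seen one, here is a hypothetical example (not an item from Barb's studies) of the kind of task a Parsons problem poses: the lines of a small working program are presented shuffled, and the learner arranges them into a correct order rather than writing the code from scratch. The adaptive versions additionally adjust the problem's difficulty as a learner struggles, which isn't shown in this sketch.

```python
# A hypothetical Parsons-style item (illustrative only, not from the study).
# The learner sees the shuffled fragments and must arrange them into a
# program that prints the average of a list of numbers.

fragments = [
    "    total = total + n",
    "numbers = [4, 8, 15, 16]",
    "print(total / len(numbers))",
    "total = 0",
    "for n in numbers:",
]

# One correct ordering, given as indices into `fragments`:
solution_order = [1, 3, 4, 0, 2]

def check(order):
    """Return True if the proposed ordering reproduces the target program."""
    proposed = "\n".join(fragments[i] for i in order)
    target = "\n".join(fragments[i] for i in solution_order)
    return proposed == target

print(check([1, 3, 4, 0, 2]))  # True: a correct arrangement
print(check([1, 3, 0, 4, 2]))  # False: the accumulation line appears before the loop
```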
“Socioeconomic status and Computer science achievement: Spatial ability as a mediating variable in a novel model of understanding” by Miranda Parker, Amber Solomon, Brianna Pritchett, David Illingworth, Lauren Margulieux, and Mark Guzdial
(Link to last version I reviewed.)
This study is a response to the paper Steve Cooper presented at ICER 2015 (see blog post here), where they found that spatial reasoning training erased performance differences between higher and lower socioeconomic status (SES) students, while the comparison class had higher-SES students performing better than lower-SES students. Miranda and Amber wanted to test this relationship at a larger scale.
Why should wealthier students do better in CS? The most common reason I’ve heard is that wealthier students have more opportunities to study CS — they have greater access. Sometimes that’s called preparatory privilege.
Miranda and Amber and their team wanted to test whether access is really the right intermediate variable. They gave students at two different universities four tests:
- Part of Miranda’s SCS1 to measure performance in CS.
- A standardized test of SES.
- A test of spatial reasoning.
- A survey about the amount of access they had to CS education, e.g., formal classes, code clubs, summer camps, etc.
David and Lauren did the factor analysis and structural equation modeling to compare two hypotheses: Does higher SES lead to greater access which leads to greater success in CS, or does higher SES lead to higher spatial reasoning which leads to greater success in CS? Neither hypothesis accounted for a significant amount of the differences in CS performance, but the spatial reasoning model did better than the access model.
There are some significant limitations of this study. The biggest is that they gathered data at universities. A lot of SES variance just disappears when you look at college students — they tend to be wealthier than average.
Still, the result is important for challenging the prevailing assumption about why wealthier kids do better in CS. What’s more, spatial reasoning is an interesting variable because it’s rather inexpensively taught. It’s expensive to prepare CS teachers and get them into all schools. Steve showed that we can teach spatial reasoning within an existing CS class and reduce SES differences.
“Applying a Gesture Taxonomy to Introductory Computing Concepts” by Amber Solomon, Betsy DiSalvo, Mark Guzdial, and Ben Shapiro
We were a bit surprised (quite pleasantly!) that this paper got into ICER. I love the paper, but it’s different from most ICER papers.
Amber is interested in the role that gestures play in teaching CS. She started this paper from a taxonomy of gestures seen in other STEM classes. She observed a CS classroom and used her observations to provide concrete examples of the gestures seen in other kinds of classes. This isn’t a report of empirical findings. This is a report of using a lens borrowed from another field to look at CS learning and teaching in a new way.
My favorite part of this paper is when Amber points out which parts of CS gestures don’t really fit in the taxonomy. It’s one thing to point to lines of code – that’s relatively concrete. It’s another thing to “point” to reference data, e.g., when explaining a sort, you gesture at the two elements you’re comparing or swapping. What exactly/concretely are we pointing at? Arrays are neither horizontal nor vertical — that distinction doesn’t really exist in memory. Arrays have no physical representation, but we act (usually) as if they’re laid out horizontally in front of us. What assumptions are we making in order to use gestures in our teaching? And what if students don’t share in those assumptions?
A Place to Get Feedback and Develop New Ideas: WIPW at ICER 2018
Everybody’s got an idea that they’re sure is great, or could be great with just a bit of development. Similarly, everyone has hit a tricky crossroads in their research and could use a little nudge to get unstuck. The ICER Work in Progress workshop is the place to get feedback and help on that idea, and give feedback and help to others on their cool ideas. I did it a few years ago at the Glasgow ICER and had a wonderful day. You learn a lot, and you get a bunch of new insights about your own idea. As Workshop Leader (and the inventor of the ICER Work in Progress workshop series) Colleen Lewis put it, “You get the chance to borrow the brains of some really awesome people to work on your problem.”
Colleen is the Senior Chair again this year, and I’m the Junior Chair-in-Training.
The workshop is only one day and super-fun. If you’re attending ICER this year, please apply for the Work in Progress workshop! https://icer.hosting.acm.org/icer-2018/work-in-progress/ The application is due June 8 (it’s just a quick Google form).
Let Colleen or me know if you have questions!
ICER 2018 Call for Participation (I’m co-chairing Works in Progress)
Do submit to ICER 2018 in Finland. I particularly encourage you to join the Works in Progress workshop, for which I’ll be the junior co-chair as I learn the ropes from Colleen Lewis. I was a participant in the Works in Progress workshop in Glasgow and found it fun and useful.
Submission Categories
Important Deadlines and Dates
6 April, 2018 – Full paper submission
1 June – Notification of acceptance
15 June – Final camera-ready deadline
1 May – Doctoral consortium submissions
8 June – Work in progress workshop application
Conference Schedule
Doctoral Consortium, Sunday, August 12, 2018
ICER Conference, Monday, August 13 – Wednesday August 15, 2018
Work in Progress Workshop, Wednesday evening, August 15 – Thursday, August 16, 2018
For more details, see the conference website: http://www.icer-conference.org
Ari Korhonen, Aalto University, Finland (Ari.Korhonen@aalto.fi)
Robert McCartney, University of Connecticut, USA (robert.mccartney@uconn.edu)
Andrew Petersen, University of Toronto Mississauga, Canada (andrew.petersen@utoronto.ca)
Teachers are not the same as students, and the role of tracing: ICER 2017 Preview
The International Computing Education Research conference starts today at the University of Washington in Tacoma. You can find the conference schedule here, and all the proceedings in the ACM Digital Library here. In past years, all the papers have been free for the first couple weeks after the conference, so grab them while they are outside the paywall.
Yesterday was the Doctoral Consortium, which had a significant Georgia Tech presence. My colleague Betsy DiSalvo was one of the discussants. Two of my PhD students were participants:
- Amber Solomon who is exploring the relationship between spatial reasoning and CS learning (see her position paper here). I blogged about Amber's work on our design studio using AR technology.
- Katie Cunningham who is exploring how instructors can use students' sketching and tracing to get greater insight into the students' mental models of the notional machine (see her position paper here). I blogged about Katie and her work when she won an NSF fellowship earlier this year.
We have two research papers being presented at ICER this year. Miranda Parker and Kantwon Rogers will be presenting Students and Teachers Use An Online AP CS Principles EBook Differently: Teacher Behavior Consistent with Expert Learners (see paper here), which is from Miranda C. Parker, Kantwon Rogers, Barbara J. Ericson, and me. Miranda and Kantwon studied the ebooks that we've been creating for AP CSP teachers and students (see links here). They're asking a big question: "Can we develop one set of materials for both high school teachers and students, or do they need different kinds of materials?" First, they showed that there were statistically significant differences in behavior between teachers and students (e.g., different numbers of interactions with different types of activities). Then, they tried to explain why there were differences.
We develop a model of teachers as expert learners (e.g., they have more knowledge so they can create more linkages, they know how to learn, they know better how to monitor their learning) and high school students as more novice learners. They dig into the log file data to find evidence consistent with that explanation. For example, students repeatedly try to solve Parsons problems long past the point where they are likely to get them right and learn from them, while teachers move along when they get stuck. Students are more likely than teachers to run code and then run it again (with no edits in between). At the end of the paper, they offer design suggestions based on this model for how we might develop learning materials designed explicitly for teachers vs. students.
Katie Cunningham will be presenting Using Tracing and Sketching to Solve Programming Problems: Replicating and Extending an Analysis of What Students Draw (see paper here), which is from Kathryn Cunningham, Sarah Blanchard, Barbara Ericson, and me. The big question here is: "Of what use is paper-and-pen based sketching/tracing for CS students?" Several years ago, the Leeds Working Group (at ITiCSE 2004) did a multi-national study of how students solved complicated problems with iteration, and they collected the students' scrap paper. (You can find a copy of the paper here.) They found (not surprisingly) that students who traced code were far more likely to get the problems right. Barb was doing an experiment for her study of Parsons Problems, and gave scrap paper to students, which Katie and Sarah analyzed.
First, they replicate the Leeds' Working Group study. Those who trace do better on problems where they have to predict the behavior of the code. Already, it's a good result. But then, Katie and Sarah go further. For example, they find it's not always true. If a problem is pretty easy, those who trace are actually more likely to get it wrong, so the correlation goes the other way. And those who start to trace but then give up are even more likely to get it wrong than those who never traced at all.
They also start to ask a tantalizing question: Where did these tracing methods come from? A method is only useful if it gets used — what leads to use? Katie interviewed the two teachers of the class (each taught about half of the 100+ students in the study). Both teachers did tracing in class. Teacher A's method gets used by some students. Teacher B's method gets used by no students! Instead, some students use the method taught by the head Teaching Assistant. Why do some students pick up a tracing method, and why do they adopt the one that they do? Because it's easier to remember? Because it's more likely to lead to a right answer? Because they trust the person who taught it? More to explore on that one.
Call for Nominations to Chair ICER 2019
SIGCSE is changing how they organize ICER. Posted with Judy Sheard’s permission:
The ACM/SIGCSE International Computing Education Research conference (icer.acm.org) is the premier conference in the world focused on computer science education research, now in its 13th year. The leadership structure has recently been reorganized so that the role of overseeing the selection of the program (the Program Chair) and the role of overseeing the running of the conference at a particular venue (the Site Chair) are held by different individuals.
We are currently seeking nominations for a Site Chair and a Program Chair for ICER 2019, to be held in North America.
Both appointments to Chair are for two years, called the “junior” and “senior” years, respectively. Site Chairs host the conference at their home institution during their senior year. Only one appointment for each role will be made each year, so that in any given year there is a junior and senior Site co-chair and a junior and senior Program co-chair. A nomination committee of the Program and Site chairs for the current year and the SIGCSE Board ICER liaison nominates the ICER Site chair and Program chair to start serving two years from the current year. The SIGCSE Board makes the appointments to both roles.
For both positions, the country of the home institution of each appointee will be rotated geographically by year as has been the tradition for ICER conference chairs, i.e.
- Year 1: North America
- Year 2: Europe
- Year 3: North America
- Year 4: Australasia
The criteria for appointees:
- Program co-chair:
- Prior attendance at ICER
- Prior publication at ICER
- Past service on the ICER Program Committee
- Research excellence in Computing Education
- Collaborative and organizational skills sufficient to work on the Conference Committee and to share oversight of the program selection process.
- Site chair:
- Prior attendance at ICER
- Collaborative and organizational skills sufficient to work on the Conference Committee and to oversee all of the local arrangements.
- Demonstrated interest in the computing education research community.
To nominate an individual, please include the individual’s CV and a cover letter explaining how the individual meets the criteria for the role. Self-nominations are welcomed. Please send nominations for the Site chair to the 2017 Site Chair, Donald Chinn (dchinn@uw.edu), and nominations for the Program chair to the 2017 Program Chair, Josh Tenenberg (jtenenbg@uw.edu). We also encourage informal expressions of interest to the individuals just mentioned.
Learning Curves, Given vs Generated Subgoal Labels, Replicating a US study in India, and Frames vs Text: More ICER 2016 Trip Reports
My Blog@CACM post for this month is a trip report on ICER 2016. I recommend Amy Ko’s excellent ICER 2016 trip report for another take on the conference. You can also see the Twitter live feed with hashtag #ICER2016.
I write in the Blog@CACM post about three papers (and reference two others), but I could easily write reports on a dozen more. The findings were that interesting and the work that well done. I’m going to give four more mini-summaries here, where the results are more confusing or surprising than those I included in the CACM Blog post.
This year was the first time we had a neck-and-neck race for the attendee-selected award, the “John Henry” award. The runner-up was Learning Curve Analysis for Programming: Which Concepts do Students Struggle With? by Kelly Rivers, Erik Harpstead, and Ken Koedinger. Tutoring systems can be used to track errors on knowledge concepts over multiple practice problems. Tutoring system developers can show these lovely decreasing error curves as students get more practice, which clearly demonstrate learning. Kelly wanted to see if she could do that with open editing of code, not in a tutoring system. She tried to use AST graphs as a proxy for programming “concepts,” and measured errors in the use of the various constructs. It didn’t work, as Kelly explains in her paper. It was a nice example of an interesting and promising idea that didn’t pan out, but with careful explanation for the next try.
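As a rough sketch of the general idea (and emphatically not Kelly's actual analysis pipeline), one could parse each student submission into an abstract syntax tree, count which constructs it uses, and then track error rates per construct across successive attempts. The helper below is hypothetical and only illustrates the first step.

```python
# A minimal sketch of treating AST node types as "concepts" in student code.
# Illustrative only; this is not the analysis from the Rivers et al. paper.
import ast
from collections import Counter

def concept_counts(source: str) -> Counter:
    """Count AST node types (e.g., For, If, Call) used in one submission."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

submission = """
total = 0
for n in range(5):
    if n % 2 == 0:
        total += n
print(total)
"""

print(concept_counts(submission))
# Prints a Counter mapping node type names (For, If, Call, AugAssign, ...)
# to how often each appears. A learning-curve analysis would then track,
# per construct, how often submissions using it contain an error,
# across successive practice attempts.
```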
I mentioned in this blog previously that Briana Morrison and Lauren Margulieux had a replication study (see paper here), written with Adrienne Decker using participants from Adrienne’s institution. I hadn’t read the paper when I wrote that first blog post, and I was amazed by their results. Recall that they had this unexpected result where changing contexts for subgoal labeling worked better (i.e., led to better performance) for students than keeping students in the same context. The weird contextual-transfer problems that they’d seen previously went away in the second (follow-on) CS class. The weird result was replicated in the first class at this new institution, so we know it’s not just one strange student population, and now we know that it’s a novice problem. That’s fascinating, but still doesn’t really explain why. Even more interesting: when the context transfer issues went away, students did better when they were given subgoal labels than when they generated them. That’s not what happens in other fields. Why is CS different? It’s such an interesting trail that they’re exploring!
Mike Hewner and Shitanshu Mishra replicated Mike’s dissertation study about how students choose CS as a major, but in Indian institutions rather than in US institutions: When Everyone Knows CS is the Best Major: Decisions about CS in an Indian context. The results that came out of the Grounded Theory analysis were quite different! Mike had found that US students use enjoyment as a proxy for ability — “If I like CS, I must be good at it, so I’ll major in that.” But Indian students already thought CS was the best major. The social pressures were completely different. So, Indian students chose CS — if they had no other plans. CS was the default behavior.
One of the more surprising results was from Thomas W. Price, Neil C.C. Brown, Dragan Lipovac, Tiffany Barnes, and Michael Kölling, Evaluation of a Frame-based Programming Editor. They asked a group of middle school students in a short laboratory study (not the optimal choice, but an acceptable starting place) to program in Java or in Stride, the new frame-based language and editing environment from the BlueJ/Greenfoot team. They found no statistically significant differences between the two languages, in terms of number of objectives completed, student frustration/satisfaction, or amount of time spent on the tasks. Yes, Java students got more syntax errors, but it didn’t seem to have a significant impact on performance or satisfaction. I found that totally unexpected. This is a result that cries out for more exploration and explanation.
There’s a lot more I could say, from Colleen Lewis’s terrific ideas to reduce the impact of CS stereotypes to a promising new method of expert heuristic evaluation of cognitive load. I recommend reviewing the papers while they’re still free to download.