Posts tagged ‘computing education research’

Research results: Where does Coding to Learn Belong in the K-12 Curriculum?

I’m not a big fan of the method in this paper — too little was controlled (e.g., what was being taught? how?). But I applaud the question.  Where are things working and where are they not working when using coding to help students learn something beyond coding? We need more work that looks critically at the role of introducing computing in schools.

Nevertheless, there is a lack of empirical studies that investigate how learning to program at an early age affects other school subjects. In this regard, this paper compares three quasi-experimental research designs conducted in three different schools (n=129 students from 2nd and 6th grade), in order to assess the impact of introducing programming with Scratch at different stages and in several subjects. While both 6th grade experimental groups working with coding activities showed a statistically significant improvement in terms of academic performance, this was not the case in the 2nd grade classroom.

Source: Informing Science Institute – Code to Learn: Where Does It Belong in the K-12 Curriculum?

October 12, 2016 at 7:57 am 1 comment

Maryland school district showcases computer science education at all levels: ECEP’s role in Infrastructure

The Expanding Computing Education Pathways (ECEP) Alliance, funded by NSF to support broadening participation in computing through state-level efforts, is one of the odder projects I’ve been part of.  I don’t know how to frame the research aspect of what we’re doing.  We’re not learning about learning or teaching, nor about computer science.  We’re learning a lot about how policy makers think about CS, how education is structured in different states (and how CS is placed within that structure), and how decision-making happens around STEM education.

It’s not the kind of story that the press loves.  We’re not building curriculum. We don’t work directly with students or teachers. We fund others to do summer camps and provide professional development. We help states figure out how to measure what’s going on in their state with computing education. We help organize (and sometimes fund) meetings, and we get states sharing with each other how to talk to policy makers and industry leaders.

So it’s nice when we get a blurb like the one below, in a story about the terrific efforts to grow CS for All in Charles County, MD.  It’s amazing how much Charles County has accomplished in providing computing education in every school.  I’m pleased that ECEP’s role got recognized in what’s going on there.

Expanding Computer Education Pathways (ECEP) provided grant funding for summer camp computer programs. CCPS’s facilitators participate in their Train-the-Trainer webinars to design and plan an effective workshop, build an educator community, increase diversity in Computer Science and teach Computer Science content knowledge. ECEP also funded the Maryland Computer Science Summit in a joint effort with Maryland State Department of Education to bring over 200 attendees from every county in Maryland to share and set priorities for Computer Science education.

Source: Maryland school district showcases computer science education at all levels | NSF – National Science Foundation

October 10, 2016 at 7:16 am 1 comment

How to Write a Guzdial Chart: Defining a Proposal in One Table

In my School, we use a technique for representing an entire research proposal in a single table. I started asking students to build these logic models when I got to Georgia Tech in the 1990s. In Georgia Tech’s Human-Centered Computing PhD program, they have become pretty common. People talk about building “Guzdial Charts.” I thought that was cute — a local cultural practice that got my name on it.

Then someone pointed out to me that our HCC graduates have been carrying the practice with them. Amy Voida (now at U. Colorado-Boulder) has been requiring them in her research classes (see syllabus here and here). Sarita Yardi (U. Michigan) has written up a guide for her students on how to summarize a proposal in a single table. Guzdial Charts are a kind of “thing” now, at least in some human-centered computing schools.

Here, I explain what a Guzdial Chart is, where it came from, and why it should really be a Blumenfeld Chart [*].

Phyllis Teaches Elliot Logic Models

In 1990, I was in Elliot Soloway’s office at the University of Michigan as he was trying to explain an NSF proposal he was planning with School of Education professor, Phyllis Blumenfeld. (When I mention Phyllis’s name to CS folks, they usually ask “who?” When I mention her name to Education folks, they almost always know her — maybe for her work in defining project-based learning or maybe her work in instructional planning or maybe her work in engagement. She’s retired now, but is still a Big Name in Education.) Phyllis kept asking questions. “How many students in that study?” and “How are you going to measure that?” She finally got exasperated.

She went to the whiteboard and said, “Draw me a table like this.” Each row of the table is one study in the overall project.

  • Leftmost column: What are you trying to achieve? What’s the research question?
  • Next column: What data are you going to collect? What measures are you going to use (e.g., survey, log file, GPS location)?
  • Next column: How much data are you going to collect? How many participants? How often are you going to use these measures with these participants (e.g., pre/post? Midterm? After a week delay?)?
  • Next column: How are you going to analyze these data?
  • Rightmost column: What do you expect to find? What’s your hypothesis for what’s going to happen?
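Putting those columns together, a one-row chart for a single study might look like the sketch below. The study, measures, and numbers here are invented for illustration, not drawn from any actual proposal:

```
Study 1 (hypothetical):
  Research question:  Does subgoal-labeled material improve CS1 performance?
  Data & measures:    Pre/post content test; post-task survey
  Sample & schedule:  n=60 CS1 students; tested pre, post, and after a one-week delay
  Analysis:           Paired t-test on test gains; comparison across conditions
  Expected outcome:   Labeled group shows higher gains and better retention
```

In the full chart, each additional study in the project gets its own row with the same five cells.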

This is a kind of logic model, and you can find guides on how to build logic models. Logic models are used by program evaluators to describe how resources and activities will lead to desired impacts. This is a variation that Phyllis made us use in all of our proposals at UMich. (Maybe she invented it?) This version focused on the research being proposed. Each study reads on a row from left-to-right,

  • from why you were doing it,
  • to what you were doing,
  • to what you expected to find.

When I got to Georgia Tech, I made one for every proposal I wrote. I made my students do them for their proposals, too. Somewhere along the way, lots of people started doing them. I think Beth Mynatt first called them “Guzdial Charts,” and despite my story about Phyllis Blumenfeld’s invention, the name stuck. People at Georgia Tech don’t know Phyllis, but they did know Guzdial.

Variations on a Guzdial Chart Theme

The critical part of a Guzdial Chart is that each study is one row, and includes purpose, methods, and expected outcome. There are lots of variations. Here’s an example of one that Jason Freeman (in our School of Music) wrote up for a proposal he was doing on EarSketch. He doesn’t list hypotheses, but it still describes purpose and methods, one row per study.

In Sarita’s variation, she has the students put the Expected Publication in the rightmost column. I like that — very concrete. If you’re in a discipline with some clearly defined publication targets, with a clear distinction between them (e.g., the HCI community, where Designing Interactive Systems (DIS) is often about process, and User Interface Software and Technology (UIST) is about UI technologies), then the publication targets are concrete and definable.

My former student, Mike Hewner, did one of the most qualitative dissertations of any of my students. He used a Guzdial Chart, but modified it for his study. Still one row per study, still including research question, hypothesis, analysis, and sampling.

I still use Guzdial Charts, and so do my students. For example, we used one to work through the story for a paper. Here’s one that we started on a whiteboard outside of my office, and we left it there for several weeks, filling in the cells as they made sense to us.


A Guzdial Chart is a handy way of summarizing a research project and making sure that it makes sense (or to use when making sense), row-by-row, left-to-right.



[*] Because Ulysses now makes it super-easy to post to blogs, and I do most of my writing in Ulysses, I accidentally posted this post to Medium — my first ever Medium post.  I wanted this to appear in my WordPress blog, also, so I decided to write two blog posts: the Medium one on Blumenfeld Charts, and this one on Guzdial Charts.

October 3, 2016 at 7:05 am 2 comments

No, Really – Programming is Hard and CS Flipped Classrooms are Complicated: ITICSE 2016 Award-Winning Papers

I only recently started digging into the papers from the ITICSE 2016 conference (see Table of Contents link at ACM Digital Library here).  There were two papers that caught my attention.

First, the best paper award went to one of my former PhD students, Brian Dorn: An Empirical Analysis of Video Viewing Behaviors in Flipped CS1 Courses, by  Suzanne L. Dazo, Nicholas R. Stepanek, Robert Fulkerson, and Brian Dorn.  Brian has this cool piece of technology where students can view videos, annotate them, be challenged to answer questions from specific places, and have discussions.  They used this for teaching a flipped CS1 class, where students were required to watch videos before class and then engage in more active learning opportunities in class.  The real trick, as you might imagine and that the paper goes into detail on, is getting students to watch the video.  I liked both the techniques for prodding students to watch videos and the fascinating results showing the relationship between watching the videos and learning.

ITICSE 2016 recognized two “commended” papers this year.  I haven’t found the listing of which papers they were, but I did learn that one of them is Learning to Program is Easy by Andrew Luxton-Reilly.  I enjoyed reading the paper and recommend it — even though I disagree with his conclusions, captured in the paper title.  He does a good job of exploring the evidence that programming is hard (and even uses this blog as a foil, since I’ve claimed several times that programming is hard), and overall the paper is a terrific synthesis of a bunch of computing education papers (40 references is a lot for a six-page ITICSE paper).

His argument that programming is easy has two parts:

  • First, children do it. As he says in the abstract, “But learning to program is easy — so easy that children can do it.”  That’s a false comparison — what children do in programming is not the same definition of “programming” that is in most of the literature that Andrew cites.  The evidence that programming is hard comes mostly from higher-ed CS classes.  What is going on in introductory University CS classes and what children do is dramatically different.  We saw that in the WIPSCE 2014 Fields and Kafai paper, and those results were replicated in a recent ICER 2016 paper.  These are two different activities.
  • Second, what higher-education CS teachers expect at the end of the first course is too much.  He presents significant evidence that what CS teachers expect is achieved by students, but at the end of the second course.  The paper from Morrison, Decker, and Margulieux supports the argument that students think and work very differently and much more successfully by the end of the second CS course than in the first course.

I see Andrew’s argument as evidence that programming is hard.  The problem is that Andrew doesn’t define the target.  What level of ability counts as “programming”?  I believe that level of ability described by the McCracken Working Group, by the FCS1/SCS1 exams, and by most teachers as the outcomes from CS1 (these are all cited by Andrew’s paper) is the definition of the lowest level of “programming ability.”  That it takes two courses to reach that level of ability is what I would call hard.

I’ve been reading a terrific book, Proust and the Squid: The Story and Science of the Reading Brain by Maryanne Wolf.  It’s the story of how humans invented reading, how we teach reading, and how reading changes our brains (physically and in terms of cognitive function).  Oral language is easy.  We are literally wired for that.  Reading is hard.  We are not wired for that, and much of the invention of reading is about inventing how to teach reading.  Unless you can teach reading to a significant part of your population, you don’t develop a literate culture, and your written language doesn’t succeed.

Much of the invention of written language is about making it easier to learn and teach, because learning to read is so hard.  Have you ever thought about why our Latin alphabet is ordered?  Why do we talk about the “ABCs” and sing a song about them?  We don’t actually need them to be ordered to read.  Ordering the alphabet makes it easier to memorize, and learning to read is a lot about memorization, about drill-and-practice to make the translation of symbols to sounds to words to concepts effortless (or at least, all System 1 in Kahneman’s terms).  This makes learning to read easier, but the task of learning to read and write is still a cognitively complex task that takes a significant amount of time to master. It’s hard.

Programming is hard like written language is hard.  It’s not possible to program unless you know how to read.  Programming is particularly hard because the concepts that we’re mapping to are unfamiliar and not part of our daily experience.  We only see it as easy because we have an expert blind spot.  We have already learned these concepts and made those mappings.  We have constructed understandings of iteration and conditional execution and variable storage.  It is difficult for experts to understand how hard it is to develop those concepts.  The evidence of children programming suggests that most children who program don’t have those concepts.

I remain unconvinced by Andrew’s argument, but I recommend the paper for a great summary of literature and an interesting read.


September 30, 2016 at 7:22 am 3 comments

Assessing Learning In Introductory Computer Science: Dagstuhl Seminar Report now Available

I have written about this Dagstuhl Seminar (see earlier post). The formal report is now available.

This seminar discussed educational outcomes for first-year (university-level) computer science. We explored which outcomes were widely shared across both countries and individual universities, best practices for assessing outcomes, and research projects that would significantly advance assessment of learning in computer science. We considered both technical and professional outcomes (some narrow and some broad) as well as how to create assessments that focused on individual learners. Several concrete research projects took shape during the seminar and are being pursued by some participants.

Source: DROPS – Assessing Learning In Introductory Computer Science (Dagstuhl Seminar 16072)

September 26, 2016 at 7:26 am Leave a comment

Learning Curves, Given vs Generated Subgoal Labels, Replicating a US study in India, and Frames vs Text: More ICER 2016 Trip Reports

My Blog@CACM post for this month is a trip report on ICER 2016. I recommend Andy Ko’s excellent ICER 2016 trip report for another take on the conference. You can also see the Twitter live feed with hashtag #ICER2016.

I write in the Blog@CACM post about three papers (and reference two others), but I could easily write reports on a dozen more. The findings were that interesting and that well done. I’m going to give four more mini-summaries here, where the results are more confusing or surprising than those I included in the CACM Blog post.

This year was the first time we had a neck-and-neck race for the attendee-selected award, the “John Henry” award. The runner-up was Learning Curve Analysis for Programming: Which Concepts do Students Struggle With? by Kelly Rivers, Erik Harpstead, and Ken Koedinger. Tutoring systems can be used to track errors on knowledge concepts over multiple practice problems. Tutoring system developers can show these lovely decreasing error curves as students get more practice, which clearly demonstrate learning. Kelly wanted to see if she could do that with open editing of code, not in a tutoring system. She tried to use AST graphs as a proxy for programming “concepts,” and to measure errors in the use of the various constructs. It didn’t work, as Kelly explains in her paper. It was a nice example of an interesting and promising idea that didn’t pan out, but with careful explanation for the next try.

I mentioned in this blog previously that Briana Morrison and Lauren Margulieux had a replication study (see paper here), written with Adrienne Decker using participants from Adrienne’s institution. I hadn’t read the paper when I wrote that first blog post, and I was amazed by their results. Recall that they had this unexpected result where changing contexts for subgoal labeling worked better (i.e., led to better performance) for students than keeping students in the same context. The weird contextual-transfer problems that they’d seen previously went away in the second (follow-on) CS class — see below snap from their slides. The weird result was replicated in the first class at this new institution, so we know it’s not just one strange student population, and now we know that it’s a novice problem. That’s fascinating, but still doesn’t really explain why. Even more interesting was that when the context transfer issues go away, students did better when they were given subgoal labels than when they generated them. That’s not what happens in other fields. Why is CS different? It’s such an interesting trail that they’re exploring!


Mike Hewner and Shitanshu Mishra replicated Mike’s dissertation study about how students choose CS as a major, but in Indian institutions rather than in US institutions: When Everyone Knows CS is the Best Major: Decisions about CS in an Indian context. The results that came out of the Grounded Theory analysis were quite different! Mike had found that US students use enjoyment as a proxy for ability — “If I like CS, I must be good at it, so I’ll major in that.” But Indian students already thought CS was the best major. The social pressures were completely different. So, Indian students chose CS — if they had no other plans. CS was the default behavior.

One of the more surprising results was from Thomas W. Price, Neil C.C. Brown, Dragan Lipovac, Tiffany Barnes, and Michael Kölling, Evaluation of a Frame-based Programming Editor. They asked a group of middle school students in a short laboratory study (not the optimal choice, but an acceptable starting place) to program in Java or in Stride, the new frame-based language and editing environment from the BlueJ/Greenfoot team.  They found no statistically significant differences between the two languages in terms of number of objectives completed, student frustration/satisfaction, or amount of time spent on the tasks. Yes, Java students got more syntax errors, but that didn’t seem to have a significant impact on performance or satisfaction. I found that totally unexpected. This is a result that cries out for more exploration and explanation.

There’s a lot more I could say, from Colleen Lewis’s terrific ideas to reduce the impact of CS stereotypes to a promising new method of expert heuristic evaluation of cognitive load.  I recommend reviewing the papers while they’re still free to download.

September 16, 2016 at 7:07 am Leave a comment

Learning CS while Learning English: Scaffolding ESL CS Learners – Thesis from Yogendra Pal

When I visited Mumbai for LaTICE 2016, I mentioned meeting Yogendra Pal. I was asked to be a reader for his thesis, which I found fascinating. I’m pleased to report that he has now graduated and his thesis, A Framework for Scaffolding to Teach Vernacular Medium Learners, is available here:

I learned a lot from Yogendra’s thesis, like what “vernacular medium learners” means. Here’s the problem that he’s facing (and that Yogendra faced as a student). Students go through primary and secondary school learning in one language (Hindi, in Yogendra’s personal case and in his thesis), and then come to University to study Computer Science. Do you teach them (what Yogendra calls “Medium of Instruction” or MoI) in English, or in Hindi? Note that English is pervasive in Computer Science, e.g., almost all our programming languages use English keywords.

Here’s Yogendra’s bottom-line finding: “We find that self-paced video-based environment is more suitable for vernacular medium students than a classroom environment if English-only MoI are used.” Yogendra uses a design-based research methodology. He measures the students, tries something based on his current hypothesis, then measures them again. He compares what he thought would happen to what he saw, revises his hypothesis — and then iterates. Some of the scaffolds he tested may seem obvious (like using a slower pace), but a strength of the thesis is that he develops a rationale for each of his changes and tests them. Eventually, he came to this surprising (to me) and interesting result: it’s better to teach with Hindi in the classroom, and in English when students are learning from self-paced videos.

The stories at the beginning of the thesis are insightful and moving. I hadn’t realized what a handicap it is to be learning English in a class that uses English. It’s obvious that the learners would struggle with the language. What I hadn’t realized was how hard it is to raise your hand and ask questions. Maybe you have a question just because you don’t know the language. Maybe you’ll expose yourself to ridicule because you’ll pose the question wrong.

Yogendra describes solutions that the Hindi-speaking students tried, and where the solutions didn’t work. The Hindi-speaking students used English-to-English dictionaries. They didn’t want English-Hindi dictionaries, because they wanted to become fluent in English, but they needed help with the complicated (especially technical) words. They tried using online videos for additional explanations of concepts, but most of those were made by American or British speakers. When you’re still learning English, switching from an Indian accent to another accent is a barrier to understanding.

The middle chapters are a detailed description of Yogendra’s attempts to scaffold student learning. He tried to teach in all-Hindi but some English technical terms like “execute” don’t have a direct translation in Hindi. He selected other Hindi words to represent the technical terms, but the words he selected as the Hindi translation were unusual and not well-known to the students. Perhaps the most compelling insight for me in these chapters was how important it was to both the students and the teachers that the students learn English — even when the Hindi materials were measurably better for learning in some conditions.

In the end, he found that Hindi language screencasts led to better learning (statistically significantly) when the learners (who had received primary and secondary school instruction in Hindi) were in a classroom, but that the English language screencasts led to better learning (again, statistically significantly) when the learners were watching the screencasts self-paced. When the students are self-paced, they can rewind and re-watch things that are confusing, so it’s okay to struggle with the English. In the classroom, the lecture just goes on by. It works best if it’s in Hindi for the students who learned in Hindi in school.

Yogendra tells a convincing story. It’s an interesting question of how these lessons transfer to other contexts. For example, what are the issues for Spanish-speaking students learning CS in the United States? In a general form, can we use the lessons from this thesis to make CS learning accessible to more ESL (English as a Second Language) learners?

September 8, 2016 at 5:50 pm 6 comments
