Posts tagged ‘educational psychology’

How to Write a Guzdial Chart: Defining a Proposal in One Table

In my School, we use a technique for representing an entire research proposal in a single table. I started asking students to build these logic models when I got to Georgia Tech in the 1990s. In Georgia Tech's Human-Centered Computing PhD program, they have become pretty common. People talk about building "Guzdial Charts." I thought that was cute — a local cultural practice that got my name on it.

Then someone pointed out to me that our HCC graduates have been carrying the practice with them. Amy Voida (now at U. Colorado-Boulder) has been requiring them in her research classes (see her syllabi here and here). Sarita Yardi (U. Michigan) has written up a guide for her students on how to summarize a proposal in a single table. Guzdial Charts are a kind of "thing" now, at least in some human-centered computing schools.

Here, I explain what a Guzdial Chart is, where it came from, and why it should really be a Blumenfeld Chart [*].

Phyllis Teaches Elliot Logic Models

In 1990, I was in Elliot Soloway's office at the University of Michigan as he was trying to explain an NSF proposal he was planning with School of Education professor Phyllis Blumenfeld. (When I mention Phyllis's name to CS folks, they usually ask "who?" When I mention her name to Education folks, they almost always know her — maybe for her work in defining project-based learning, maybe her work in instructional planning, maybe her work in engagement. She's retired now, but is still a Big Name in Education.) Phyllis kept asking questions. "How many students in that study?" and "How are you going to measure that?" She finally got exasperated.

She went to the whiteboard and said, “Draw me a table like this.” Each row of the table is one study in the overall project.

  • Leftmost column: What are you trying to achieve? What’s the research question?
  • Next column: What data are you going to collect? What measures are you going to use (e.g., survey, log file, GPS location)?
  • Next column: How much data are you going to collect? How many participants? How often are you going to use these measures with these participants (e.g., pre/post, at midterm, after a week's delay)?
  • Next column: How are you going to analyze these data?
  • Rightmost column: What do you expect to find? What’s your hypothesis for what’s going to happen?
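To make the format concrete, here is a single hypothetical row (the study, measures, and numbers are all invented for illustration):

  • Research question: Does pair programming improve retention in CS1?
  • Data and measures: final grades, withdrawal rates, and a pre/post attitude survey
  • How much data: two sections of about 60 students each; the survey given at the start and end of the semester
  • Analysis: compare withdrawal rates and grades across the sections; compare pre/post attitude scores
  • Expected finding: the pair-programming section will show lower withdrawal and more positive attitudes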

This is a kind of logic model, and you can find guides on how to build logic models. Logic models are used by program evaluators to describe how resources and activities will lead to desired impacts. This is a variation that Phyllis made us use in all of our proposals at UMich. (Maybe she invented it?) This version focuses on the research being proposed. Each study reads along a row from left to right,

  • from why you were doing it,
  • to what you were doing,
  • to what you expected to find.

When I got to Georgia Tech, I made one for every proposal I wrote. I made my students do them for their proposals, too. Somewhere along the way, lots of people started doing them. I think Beth Mynatt first called them "Guzdial Charts," and despite my story about Phyllis Blumenfeld's invention, the name stuck. People at Georgia Tech don't know Phyllis, but they do know Guzdial.

Variations on a Guzdial Chart Theme

The critical part of a Guzdial Chart is that each study is one row, and includes purpose, methods, and expected outcome. There are lots of variations. Here’s an example of one that Jason Freeman (in our School of Music) wrote up for a proposal he was doing on EarSketch. He doesn’t list hypotheses, but it still describes purpose and methods, one row per study.

In Sarita's variation, she has the students put the Expected Publication in the rightmost column. I like that — very concrete. If you're in a discipline with clearly defined publication targets and clear distinctions between them (e.g., in the HCI community, Designing Interactive Systems (DIS) is often about process, while User Interface Software and Technology (UIST) is about UI technologies), then the publication targets are concrete and definable.

My former student, Mike Hewner, did one of the most qualitative dissertations of any of my students. He used a Guzdial Chart, but modified it for his study. Still one row per study, still including research question, hypothesis, analysis, and sampling.

I still use Guzdial Charts, and so do my students. For example, we used one to work through the story for a paper. Here’s one that we started on a whiteboard outside of my office, and we left it there for several weeks, filling in the cells as they made sense to us.


A Guzdial Chart is a handy way of summarizing a research project and making sure that it makes sense (or of making sense of it as you go), row by row, left to right.



[*] Because Ulysses now makes it super-easy to post to blogs, and I do most of my writing in Ulysses, I accidentally posted this post to Medium — my first ever Medium post. I wanted this to appear in my WordPress blog as well, so I decided to write two blog posts: the Medium one on Blumenfeld Charts, and this one on Guzdial Charts.

October 3, 2016 at 7:05 am 2 comments

Making learning effective, efficient, and engaging: An Interview With an Educational Realist and Grumpy Old Man, Paul Kirschner

I am a fan of Paul Kirschner's work. This interview is great, with useful insights about education — deep and pragmatic thinking.

I want to fundamentally understand how people can learn in effective, efficient, and enjoyable ways, and how you can teach and design learning materials to achieve this objective. If a learner doesn't enjoy the learning experience, even if it's effective and/or efficient, they won't do it. The same is true for teaching: that is, it must also be effective, efficient, and enjoyable for the teacher, because if a teacher doesn't enjoy the teaching process, even if it's effective and/or efficient, they won't do it.

Source: GUEST POST: An Interview With an Educational Realist and Grumpy Old Man — The Learning Scientists

September 28, 2016 at 7:59 am Leave a comment

Learning Curves, Given vs Generated Subgoal Labels, Replicating a US study in India, and Frames vs Text: More ICER 2016 Trip Reports

My Blog@CACM post for this month is a trip report on ICER 2016. I recommend Andy Ko’s excellent ICER 2016 trip report for another take on the conference. You can also see the Twitter live feed with hashtag #ICER2016.

I write in the Blog@CACM post about three papers (and reference two others), but I could easily write reports on a dozen more. The findings were that interesting, and the work that well done. I'm going to give four more mini-summaries here, where the results are more confusing or surprising than those I included in the Blog@CACM post.

This year was the first time we had a neck-and-neck race for the attendee-selected award, the "John Henry" award. The runner-up was Learning Curve Analysis for Programming: Which Concepts do Students Struggle With? by Kelly Rivers, Erik Harpstead, and Ken Koedinger. Tutoring systems can track errors on knowledge concepts over multiple practice problems, and their developers can show lovely decreasing error curves as students get more practice, which clearly demonstrate learning. Kelly wanted to see if she could do the same with open editing of code, outside of a tutoring system. She tried to use AST nodes as a proxy for programming "concepts," and to measure errors in the use of the various constructs. It didn't work, as Kelly explains in her paper. It was a nice example of an interesting and promising idea that didn't pan out, but with careful explanation for the next try.
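To give a flavor of the general idea, here is a minimal sketch in Python. This is my own illustration, not the method from the paper; the function names and the pass/fail representation of attempts are invented.

    import ast
    from collections import defaultdict

    def constructs_used(source):
        """Rough proxy for programming 'concepts': the AST node types
        that appear in a submission."""
        return {type(node).__name__ for node in ast.walk(ast.parse(source))}

    def learning_curves(attempts):
        """attempts: chronological list of (source_code, passed) pairs.
        Returns {construct: [0/1 error flags, one per attempt that used it]},
        the raw material for a per-construct error curve."""
        curves = defaultdict(list)
        for source, passed in attempts:
            try:
                concepts = constructs_used(source)
            except SyntaxError:
                continue  # unparsable submissions give no per-construct signal
            for concept in concepts:
                curves[concept].append(0 if passed else 1)
        return curves

Plotting each construct's error flags against attempt number gives its learning curve; the tutoring-system versions of these curves decline smoothly with practice, and the point of the paper is that the open-code versions didn't.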

I mentioned in this blog previously that Briana Morrison and Lauren Margulieux had a replication study (see paper here), written with Adrienne Decker using participants from Adrienne's institution. I hadn't read the paper when I wrote that first blog post, and I was amazed by their results. Recall that they had this unexpected result where changing contexts for subgoal labeling worked better (i.e., led to better performance) for students than keeping students in the same context. The weird contextual-transfer problems that they'd seen previously went away in the second (follow-on) CS class — see the snap from their slides below. The weird result was replicated in the first class at this new institution, so we know it's not just one strange student population, and now we know that it's a novice problem. That's fascinating, but still doesn't really explain why. Even more interesting: once the context-transfer issues went away, students did better when they were given subgoal labels than when they generated the labels themselves. That's not what happens in other fields. Why is CS different? It's such an interesting trail that they're exploring!


Mike Hewner and Shitanshu Mishra replicated Mike's dissertation study about how students choose CS as a major, but in Indian institutions rather than in US institutions: When Everyone Knows CS is the Best Major: Decisions about CS in an Indian context. The results that came out of the Grounded Theory analysis were quite different! Mike had found that US students use enjoyment as a proxy for ability — "If I like CS, I must be good at it, so I'll major in that." But Indian students already thought CS was the best major, and the social pressures were completely different. So, Indian students chose CS — if they had no other plans. CS was the default choice.

One of the more surprising results was from Thomas W. Price, Neil C.C. Brown, Dragan Lipovac, Tiffany Barnes, and Michael Kölling, Evaluation of a Frame-based Programming Editor. They asked a group of middle school students in a short laboratory study (not the optimal choice, but an acceptable starting place) to program in Java or in Stride, the new frame-based language and editing environment from the BlueJ/Greenfoot team. They found no statistically significant differences between the two languages in terms of number of objectives completed, student frustration/satisfaction, or amount of time spent on the tasks. Yes, Java students got more syntax errors, but those didn't seem to have a significant impact on performance or satisfaction. I found that totally unexpected. This is a result that cries out for more exploration and explanation.

There's a lot more I could say, from Colleen Lewis's terrific ideas for reducing the impact of CS stereotypes to a promising new method of expert heuristic evaluation of cognitive load. I recommend reviewing the papers while they're still free to download.

September 16, 2016 at 7:07 am Leave a comment

Preview ICER 2016: Ebooks Design-Based Research and Replications in Assessment and Cognitive Load Studies

The International Computing Education Research (ICER) Conference 2016 is September 8-12 in Melbourne, Australia (see website here). There were 102 papers submitted, and 26 papers accepted for a 25% acceptance rate. Georgia Tech computing education researchers are justifiably proud — we submitted three papers to ICER 2016, and we had three acceptances. We’re over 10% of all papers at ICER 2016.

One of the papers extends the ebook work that I've reported on here (see here where we made them available and our paper on usability and usage from WiPSCE 2015). Identifying Design Principles for CS Teacher Ebooks through Design-Based Research (click on the title to get to the ACM DL page), by Barbara Ericson, Kantwon Rogers, Miranda Parker, Briana Morrison, and me, takes a Design-Based Research perspective on our ebooks work. We describe our theory for the ebooks, then describe the iterations: what we designed, what happened when we deployed it (driven by the data), and how we then re-designed.

Two of our papers are replication studies — we're so grateful to the ICER reviewers and community for seeing the value of replication. The first is Replication, Validation, and Use of a Language Independent CS1 Knowledge Assessment by Miranda Parker, me, and Shelly Engleman. This is Miranda's paper expanding on her SIGCSE 2016 poster introducing the SCS1, a validated and language-independent measure of CS1 knowledge. The paper does a great survey of validated measures of learning, explains her process, and then presents what one can and can't claim with a validated instrument.

The second is Learning Loops: A Replication Study Illuminates Impact of HS Courses by Briana Morrison, Adrienne Decker, and Lauren Margulieux. Briana and Lauren have both now left Georgia Tech, but they were still here when they did this paper, so we're claiming them. Readers of this blog may recall Briana and Lauren's confusing SIGCSE 2016 results that suggest that cognitive load in CS textual programming is so high that it blows away our experimental instructional treatments. Was that an aberration? With Adrienne Decker's help (and student participants), they replicated the study. I'll give away the bottom line: it wasn't an aberration. One new finding is that students who did not have high school CS classes caught up in the experiment with those who did, with respect to understanding loops.

We're sending three of our Human-Centered Computing PhD students to the ICER 2016 Doctoral Consortium. These folks will be in the DC on Sept 8, and will present posters to the conference on the afternoon of Sept 9.

September 2, 2016 at 7:53 am 14 comments

Seeking Collaborators for a Study of Achievement Goal Theory in CS1: Guest blog post by Daniel Zingaro

I have talked about Dan's work here before, such as his 2014 award-winning ICER paper and his Peer Instruction in CS website. I met with Dan at the last SIGCSE, where he told me about the study that he and Leo Porter were planning. Their results are fascinating because they run counter to what Achievement Goal Theory predicts. I invited him to write a guest blog post to seek collaborators for his study, and am grateful that he sent me this.

Why might we apply educational theory to our study of novice programmers? One core reason lies in theory-building: if someone has developed a general learning theory, then we might do well to co-opt and extend it for the computing context. What we get for free is clear: a theoretical basis, perhaps with associated experimental procedures, scales, hypotheses, and predictions. Unfortunately, however, there is often a cost in appropriating this theory: it may not replicate for us in the expected ways.

Briana Morrison’s recent work nicely highlights this point. In two studies, Briana reports her efforts to replicate what is known about subgoals and worked examples. Briefly, a worked example is a sample problem whose step-by-step solution is given to students. And subgoals are used to break that solution into logical chunks to hopefully help students map out the ways that the steps fit together to solve the problem.
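For readers who haven't seen the technique, here is an invented miniature example: a worked solution to "sum the even numbers in a list," with subgoal labels written as comments. The problem and the labels are mine, for illustration only, not from Briana's materials.

    numbers = [3, 1, 4, 1, 5, 9, 2, 6]  # example input
    # Subgoal: initialize the result variable
    total = 0
    # Subgoal: loop through every value
    for x in numbers:
        # Subgoal: test whether this value qualifies
        if x % 2 == 0:
            # Subgoal: update the result
            total += x
    print(total)  # 4 + 2 + 6 = 12

The labels name the role each chunk of steps plays, so a student can map the same structure onto a new problem (say, counting the negatives in a list) rather than memorizing this particular solution.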

Do subgoals help? Well, it’s supposed to go like this, from the educational psychology literature: having students generate their own labeled goals is best, giving students the subgoal labels is worse, and not using subgoals at all is worse still. But that isn’t what Briana found. For example, Briana reports [1] that, on Parsons puzzles, students who are given subgoal labels do better than both those who generate their own subgoal labels and those not given subgoals at all. Why the differences? One possibility is that programming exerts considerable cognitive load on the learner, and that the additional load incurred by generating subgoal labels overloads the student and harms learning.

The point here is that taking seriously the idea of leveraging existing theory requires concomitant attention to how and why the theory may operate differently in computing.

My particular interest here is in another theory from educational psychology: achievement goal theory (AGT). AGT studies the goals that students adopt in achievement situations, and the positive and negative consequences of those goals in terms of educationally-relevant outcomes. AGT zeroes in on two main goal types: mastery goals (where performance is defined intrapersonally) and performance goals (where performance is defined normatively, in comparison to others).

Do these goals matter? Well, it’s supposed to go roughly like this: mastery goals are positively associated with many outcomes of value, such as interest, enjoyment, self-efficacy, and deep study strategies (but not academic performance); performance goals, surprisingly and confusingly, are positively associated with academic performance. But, paralleling the Briana studies from above, this isn’t what we’ve found in CS. With Leo Porter and my students, we’ve been studying goal-outcome links in novice CS students. We’ve found, contrary to theoretical expectations, that performance goals appear to be null or negative predictors of performance, and that mastery goals appear to be positive predictors of performance [2,3].

We are now conducting a larger study of achievement goals and outcomes of CS1 students — larger than that achievable with the couple of institutions to which we have access on our own. We are asking for your help.

The study involves administering two surveys to students in a CS1 course. The first survey, at the beginning of the semester, measures student achievement goals. The second survey, close to the end of the semester, measures potential mediating variables. We plan to collect exam grade, interest in CS, and other outcome variables.

The hope is that we can conduct a multi-institutional study of a variety of CS1 courses to strengthen what we know about achievement goals in CS.

Please contact me at daniel dot zingaro at utoronto dot ca if you are interested in participating in this work. Thanks!

[1] Briana Morrison. Subgoals Help Students Solve Parsons Problems. SIGCSE, 2016. ACM DL link.

[2] Daniel Zingaro. Examining Interest and Performance in Computer Science 1: A Study of Pedagogy and Achievement Goals. TOCE, 2015. ACM DL link.

[3] Daniel Zingaro and Leo Porter. Impact of Student Achievement Goals on CS1 Outcomes. SIGCSE, 2016. ACM DL link.

July 15, 2016 at 7:30 am Leave a comment

Are there elements of human nature that could be better harnessed for better educational outcomes?

I don't often link to Quora, but when it's Steven Pinker pointing out the relationship between our human nature and educational goals, it's worth it.

One potential insight is that educators begin not with blank slates but with minds that are adapted to think and reason in ways that may be at cross-purposes with the goals of education in a modern society. The conscious portion of language consists of words and meanings, but the portion that connects most directly to print consists of phonemes, which ordinarily are below the level of consciousness. We intuitively understand living species as having essences, but the theory of evolution requires us to rethink them as populations of variable individuals. We naturally assess probability by dredging up examples from memory, whereas real probability takes into account the number of occurrences and the number of opportunities. We are apt to think that people who disagree with us are stupid and stubborn, while we are overconfident and self-deluded about our own competence and honesty.

Source: Are there elements of human nature that could be better harnessed for better educational outcomes? – Quora

July 13, 2016 at 7:57 am 2 comments

Why Students Don’t Like Active Learning: Stop making me work at learning!

I enjoy reading Annie Murphy Paul’s essays, and this one particularly struck home because I just got my student opinion surveys from last semester.  I use active learning methods in my Media Computation class every day, where I require students to work with one another. One student wrote:

“I didn’t like how he forced us to interact with each other. I don’t think that is the best way for me to learn, but it was forced upon me.”

It’s true. I am a Peer Instruction bully.

At a deeper level, it's amazing how easily we fool ourselves about what we learn from and what we don't learn from. It's like the brain-training work: we're convinced that we're learning from it, even if we're not. This student is convinced that they don't learn from active learning, even though the available evidence says they do.

In case you’re wondering about just what “active learning” is, here’s a widely-accepted definition: “Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It emphasizes higher-order thinking and often involves group work.”

Source: Why Students Don’t Like Active Learning « Annie Murphy Paul

July 11, 2016 at 7:27 am 7 comments
