Posts tagged ‘games’
I was honored to serve on Michael Lee’s dissertation committee. Mike’s basic thesis is available at this link, or you can get the jumbo-expanded edition with an enormous appendix describing everything in his software plus his learning evaluation (described below) at this link. His thesis brings together several studies he’s done on Gidget, his game in which he teaches programming. I’ve written about his work before, like his terrific finding that including assessments improves engagement in his game (see blog post here) and about how Gidget offers us a new way to think about assessing learning (see blog post here).
Michael had several fascinating results with Gidget. One of my favorites, which I have not blogged about yet, is that personifying the programming tool improves retention (see his ICER 2011 paper here). When Gidget sees a syntax error, she (I’m assigning gender here) doesn’t say “Missing semicolon” or “Malformed expression.” Instead, she says “I don’t know what this is, so I’ll just go on to the next step” and looks sad that she was unable to do what the programmer asked her to do. The personification of the programming tool dramatically improved the number of game levels completed. Players kept going. In course terms, they were retained.
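To make the contrast concrete, here is a minimal sketch (my own illustration, not Gidget’s actual code) of the difference between a terse compiler-style message and a personified, first-person one that admits what the tool could not do:

```python
# Illustration only -- Gidget's real implementation is not shown here.
# A traditional tool might emit something like:
TERSE = "Malformed expression"

def personified_error(token: str) -> str:
    """Rephrase a syntax failure the way a personified tool might:
    first person, and explicit about skipping the confusing step."""
    return (f"I don't know what '{token}' is, "
            "so I'll just go on to the next step.")

print(personified_error("goto/"))
```

The content is the same (the tool cannot parse the token and moves on), but the framing shifts blame away from the learner, which is the mechanism Lee’s retention result points at.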
The dissertation has yet another Big Wow result. Mike developed an assessment of computing knowledge based on Allison Elliott Tew’s work on FCS1 (see here). He did a nice job validating it using Amazon’s Mechanical Turk.
He then compared three different conditions for learning differences:
- Gidget, as a game for learning.
- Codecademy, as a tutorial for learning.
- The Gidget game level designer. The idea was to provide a constructionist learning environment without a curriculum. Mike wanted it to be like using Scratch or Alice or any other open-ended creative programming environment. What would the students learn without guidance in Gidget?
Gidget and Codecademy are statistically equivalent for learning, and both blow away the constructionist option. A designed curriculum beats a discovery-based learning opportunity. That’s interesting but not too surprising. Here’s the wild part: The Gidget users spent half as much time. Same learning, half the time. I would not have predicted this: Mike’s game is actually more efficient for learning CS than a tutorial. I’ve argued that learning efficiency is super important, especially for high school teachers (see post here).
Mike is now an assistant professor at the New Jersey Institute of Technology (see his web page here). I wish him luck and look forward to what he does next!
I’ve seen Michael Lee present two papers on Gidget at ICER, and they were both fascinating. Gidget is now moving out of the laboratory, and I’m eager to see what happens when lots of people get a chance to play with it. Andy Ko has a blog post about Gidget that explains some of the goals.
Hello Gidget Supporter!
We are happy to announce that Gidget has launched today! You, your friends, and your family members can now help Gidget debug faulty code to solve puzzles at helpgidget.org.
Gidget is a game designed to teach computer programming concepts through debugging puzzles. Gidget the robot was damaged on its way to clean up a chemical spill and save the animals, so it is the players’ job to fix Gidget’s problematic code to complete all the missions. As the levels become more challenging, players can combine newly introduced concepts with previously used commands to solve the puzzles and progress through the game.
Gidget is the dissertation work of Michael J. Lee who is a PhD candidate at the University of Washington’s Information School. Prior to its public release, over 800 online participants played through various versions of the game, and over 60 teenagers played through the game and created their own levels during four summer camps in 2013 and 2014. Our research has shown that novice programmers of all ages become very engaged with the activity, and that they are able to create their own levels (i.e., create their own programs from scratch) successfully after playing through the game.
Please share widely and refer to the press release for more information. We hope you have fun playing the game, and appreciate your interest and support for Gidget.
Michael J. Lee and the rest of the Gidget Team
Michael J. Lee
PhD Candidate, Information School
University of Washington
Seattle, WA 98195-2840
We’ve talked about the UK and the US worrying about having enough cyberwarriors to deal with future cybersecurity issues. CMU is helping to build a game to entice high school students into computing, with cybersecurity as the focus.
Carnegie Mellon University and one of the government’s top spy agencies want to interest high school students in a game of computer hacking.
Their goal with “Toaster Wars” is to cultivate the nation’s next generation of cyber warriors in offensive and defensive strategies. The free, online “high school hacking competition” is scheduled to run from April 26 to May 6, and any U.S. student or team in grades 6 through 12 can apply and participate.
David Brumley, professor of computer science at Carnegie Mellon, said the game is designed to be fun and challenging, but he hopes participants come to see computer security as an excellent career choice.
Do video games provide some kind of cognitive benefit after the game play? There have been arguments that video games lead to improved attention, quicker responses, and visual skills. A paper in Frontiers in Psychology has reviewed the past literature and found that the studies all suffer from basic bias errors. This doesn’t mean that video games don’t have cognitive benefits. But we don’t have any evidence that they do.
Most of the studies compare the cognitive performances of expert gamers with those of non-gamers, and suffer from well-known pitfalls of experimental design. The studies are not blinded: participants know that they have been recruited because they have gaming expertise, which can influence their performance, because they are motivated to do well and prove themselves. And the researchers know which participants are in which group, so they can have preconceptions that might inadvertently affect participants’ performance.
Heading to International Computing Education Research 2011 in Rhode Island: How CS students choose Threads
I’m heading out Sunday for the 2011 International Computing Education Research (ICER) Workshop, hosted by Dr. Kate Sanders at Rhode Island College in Providence. The schedule is exciting — we have a bunch of speakers from communities who have been doing CS Ed research, but have not been at ICER previously. (“Workshop” is ACM’s name for a small conference.) I’m chairing the discussion papers session. I’m looking forward to Eric Mazur’s keynote (he has a new educational technology that he’s promoting) and to his advice from the Physics Education Research community to the much-younger Computing Education Research community.
The second talk of the conference is from my PhD student, Mike Hewner (the same student who previously studied what game developers look for in graduates). Mike’s dissertation research is asking, “How do computer science undergraduates define ‘computer science,’ and how does their definition influence their educational decisions?” He’s using grounded theory, which is a demanding social science method. He’s done about a dozen interviews so far, and has not yet reached “saturation” (where new interviews don’t contribute to the developing theory), so the current theory is still considered “tentative.” This paper is one piece of that work.
In most CS degree programs, there are some options for students: Choices between electives, between specialization paths, between Threads. Mike wanted to know how students made those choices. Several findings surprised me. First, students don’t “begin with the end in mind.” Students he interviewed had little idea what job they wanted, and if they did, they didn’t really know what the job entailed. Second, students don’t think that the choice of specialization is all that important — they figure that they’re at a good school, they trust the faculty, so whatever choice they make will turn out fine. Finally, an engaging, fun class can dramatically influence students’ perception of a field. A “fun” theory class can convince students that they like theory. Their opinion of the subject is easily swayed by the qualities of the class and the teacher. “Why are you in robotics (even though it doesn’t have much to do with what you say you want to do for your job)?” “Well, I really liked the robots we used in CS101…”
Hope to see some of you there!
Ian’s call to re-brand “gamification” as “exploitationware” is getting a lot of attention. It was covered in the Wall Street Journal’s blog yesterday. It’s certainly true that the term “gamification” is getting traction, e.g., I was just on an NSF panel where reviewers praised proposals trying to “gamify” educational software. Ian points out that the language matters. Consider the different connotations between “global warming” and “climate change,” where both terms describe the same phenomenon but from different political perspectives. Most of the comments on Ian’s blog seem to be saying, “Give up! It’s too late.” But I agree with Ian’s strategy. It is possible to change language, by calling attention to it and offering a significant alternative.
Note how deftly Zichermann makes his readers believe that points, badges, levels, leaderboards, and rewards are “key game mechanics.” This is wrong, of course — key game mechanics are the operational parts of games that produce an experience of interest, enlightenment, terror, fascination, hope, or any number of other sensations. Points and levels and the like are mere gestures that provide structure and measure progress within such a system.
But as Frank Luntz has shown time and time again, reality matters far less than perception. When people hear “gamification,” it’s this incredible facility that registers, the simplicity, smoothness, and ease with which the wild, magical beast of games can be tamed and integrated into any other context at low cost and high scale.
Margaret Robertson has critiqued gamification on the basis that it takes the least essential aspects of games and presents them as the most essential. Robertson coins the derogatory term pointsification as a more accurate description of this process.
There’s a challenging and interesting paper being presented this afternoon at SIGCSE 2011: “Exploring the Appeal of Socially Relevant Computing: Are Students Interested in Socially Relevant Problems?” by Cyndi Rader, Doug Hakkarinen, Barbara Moskal, and Keith Hellman from the Colorado School of Mines. I’ve worked with Barbara Moskal before, and know her to be a careful and thoughtful evaluator. So, when I read their abstract, especially the bottom line, I was surprised and intrigued.
Prior research indicates that today’s students, especially women, are attracted to careers in which they recognize the direct benefit of the field for serving societal needs. Traditional college level computer science courses rarely illustrate the potential benefits of computer science to the broader community. This paper describes a curricula development effort designed to embed humanitarian projects into undergraduate computer science courses. The impact of this program was measured through student self-report instruments. Through this investigation, it was found that students preferred projects that they perceived as “fun” over the projects that were social in nature.
As I expected, the paper is careful and insightful. The authors did create some new socially relevant assignments to put into CS1 and Software Engineering assignments, and they asked students about their experience doing those. They also collected a wide variety of assignment descriptions for students to rank in terms of how interesting the assignment was: “A coding of ‘1’ reflected a rating of ‘I definitely would not like to do this project’ and a coding of ‘4’ reflected a rating of ‘I definitely would like to do this project.’ In other words, a higher rating reflected greater interest in the given project.”
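As a toy illustration of that 1-to-4 interest coding (the data below is made up by me; the paper’s actual ratings are not reproduced here), each project’s appeal can be summarized as a simple mean of its ratings:

```python
# Hypothetical ratings on the paper's 1-4 scale:
# 1 = "I definitely would not like to do this project"
# 4 = "I definitely would like to do this project"
from statistics import mean

ratings = {
    "game project":   [4, 3, 4, 4, 2],
    "social project": [3, 2, 3, 2, 3],
}

# A higher mean reflects greater interest in the given project.
for project, scores in ratings.items():
    print(f"{project}: mean interest = {mean(scores):.2f}")
```

With these invented numbers the game project would rank above the social project, which is the direction of the paper’s actual finding.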
- The authors found that students preferred the projects building games to those focused on social good. They also found a distinction that another researcher (Buckley et al., SIGCSE 2008) had identified — that students were more motivated by problems that were both social and personally meaningful: “In other words, students may need to recognize the application of the solution to a problem to their own life.”
- While the Software Engineering assignments worked well, the CS1-level socially-relevant assignments did not — in part, because they were just so hard. “Our efforts were successful in Software Engineering, with 88% and 93% responding positively to the SAR and DM projects, respectively. However, only 54% of the students in the CS1 course, including 47% of the females, indicated that they found the SAR project appealing.” The authors conclude that, “This [the lack of interest in the socially-relevant projects in CS1] may, in part, be due to the fact that it was difficult to reduce socially relevant problems to a level that beginning students could easily comprehend. This made it difficult to capitalize on the appeal of socially relevant problems in the early computer science courses.”
I’m looking forward to seeing this paper presented this afternoon. There’s a certain cynical similarity between this paper and work we’ve reported on about teachers. Davide Fossati’s paper on Saturday describes how the faculty he interviewed changed their teaching practice for their own reasons, never because of student learning results, and Lijun Ni’s work last year showed that teachers adopt a new approach because they find something fun, not because it’s been shown to be effective. I wonder if we’d see similar results outside the United States?