Archive for September, 2014
A biased attempt at measuring failure rates in introductory programming
Do students fail intro CS at higher rates than in comparable classes (e.g., intro Physics, or Calculus, or History)? We’ve been trying to answer that question for years. I studied that question here at Georgia Tech (see my Media Computation retrospective paper at last year’s ICER). Jens Bennedsen and Michael Caspersen answered that question with a big international survey (see paper here). They recognized the limitations of their study: it surveyed the SIGCSE members list and similar email lists (i.e., teachers biased toward being informed about the latest in computing education), and it got few responses.
This last year’s ITiCSE best paper awardee tried to measure failure rates again (see link below), by studying published accounts of pass rates. While they got a larger sample size this way, it’s even more limited than the Bennedsen and Caspersen study:
- Nobody publishes a paper saying, “Hey, we’ve had lousy retention rates for 10 years running!” Analyzing publications means that you’re biasing your sample to teachers and researchers who are trying to improve those retention rates, and they’re probably publishing positive results. You’re not really getting the large numbers of classes whose results aren’t published and whose teachers aren’t on the SIGCSE members list.
- I recognized many of the papers in the meta-analysis. I was co-author on several of them. The same class retention data appeared in several of those papers. There was no funny business going on: we reported retention data from our baseline classes, then tried a variety of interventions, e.g., with Media Computation and with Robotics, so the baseline appears in both papers. The authors say that they made sure they didn’t double-count any classes that appeared in two papers, but I can’t see how they could possibly tell.
- Finally, the authors do not explicitly cite the papers used in their meta-analysis. Instead, they’re included on a separate page (see here). SIGCSE shouldn’t publish papers that do this. Meta-analyses should be given enough pages to list all their sources, or they shouldn’t be published. First, including the sources on a separate page makes it much harder to check the work, to see what data got used in the analysis. Second, the authors are referencing work that won’t appear in any reverse citation indices or in the cited authors’ H-index calculations. I know some of the authors of those papers who are up for promotion or tenure decisions this coming year. Those authors are having impact through this secondary publication, but they are receiving no credit for it.
This paper is exploring an important question, and does make a contribution. But it’s a much more limited study than what has come before.
Whilst working on an upcoming meta-analysis that synthesized fifty years of research on predictors of programming performance, we made an interesting discovery. Despite several studies citing a motivation for research as the high failure rates of introductory programming courses, to date, the majority of available evidence on this phenomenon is at best anecdotal in nature, and only a single study by Bennedsen and Caspersen has attempted to determine a worldwide pass rate of introductory programming courses. In this paper, we answer the call for further substantial evidence on the CS1 failure rate phenomenon, by performing a systematic review of introductory programming literature, and a statistical analysis on pass rate data extracted from relevant articles. Pass rates describing the outcomes of 161 CS1 courses that ran in 15 different countries, across 51 institutions were extracted and analysed. An almost identical mean worldwide pass rate of 67.7% was found. Moderator analysis revealed significant, but perhaps not substantial differences in pass rates based upon: grade level, country, and class size. However, pass rates were found not to have significantly differed over time, or based upon the programming language taught in the course. This paper serves as a motivation for researchers of introductory programming education, and provides much needed quantitative evidence on the potential difficulties and failure rates of this course.
Why one school district decided giving laptops to students is a terrible idea
A really fascinating piece about all the problems that Hoboken had with their one-laptop-per-child program. The quote listed below describes the problems with breakage and pornography. The article goes on to describe problems with too little memory, bad educational software, wireless network overload, anti-virus expense, and teacher professional learning costs. I firmly believe in the vision of one-laptop-per-student. I also firmly believe that it’s crazy-expensive and hard to make work right, especially in large school districts.
We had “half a dozen kids in a day, on a regular basis, bringing laptops down, going ‘my books fell on top of it, somebody sat on it, I dropped it,’ ” said Crocamo. Screens cracked. Batteries died. Keys popped off. Viruses attacked. Crocamo found that teenagers with laptops are still… teenagers. “We bought laptops that had reinforced hard-shell cases so that we could try to offset some of the damage these kids were going to do,” said Crocamo. “I was pretty impressed with some of the damage they did anyway. Some of the laptops would come back to us completely destroyed.”
Crocamo’s time was also eaten up with theft. Despite the anti-theft tracking software he installed, some laptops were never found. Crocamo had to file police reports and even testify in court.
Hoboken school officials were also worried they couldn’t control which websites students would visit. Crocamo installed software called Net Nanny to block pornography, gaming sites and Facebook. He disabled the built-in web cameras. He even installed software to block students from undoing these controls. But Crocamo says students found forums on the Internet that showed them how to access everything. “There is no more determined hacker, so to speak, than a 12-year-old who has a computer,” said Crocamo.
Interview on the CS Education Zoo: Livecoding, HyperCard, Parsons Problems, and student misconceptions
Steven Wolfman and Will Byrd host a podcast series called the “CS Education Zoo.” I was just interviewed on it yesterday. Will couldn’t make it, but Dutch Meyer filled in. We covered a lot of ground, from livecoding music, to HyperCard, to the importance of having even low-quality CS education in schools, to Parsons Problems, to ebooks and MOOCs, to how to address student misconceptions in class. I had great fun and appreciate their invitation to join in!
Pushback in California on Computing in Schools
I’ve been thrilled to see the legislative progress in California around CS education issues. The governor has now signed Senate Bill 1200 which starts the process of CS counting for UC/CSU admissions. Dan Lewis’s article in The Mercury News tempered that enthusiasm (linked below). I wasn’t aware that UC was pushing back, nor how the number of CS classes and teachers is dropping in California. Lots more work to do there.
The Legislature just passed two bills to address these issues. Senate Bill 1200 allows but does not require the University of California to count computer science toward the math requirements for admission. However, there’s been a lot of push back from UC on this, so for now, all we really have is an expression of intent from the Legislature. Thankfully, AB 1764 allows high schools to count computer science toward graduation requirements. Of course, that may not mean much for students applying to UC.
For these reasons, computer science isn’t a priority for students. Nor is it a priority for schools when determining course offerings based on limited budgets: While California high school enrollment has risen 15 percent since 2000, the number of classes on computer science or programming fell 34 percent, and the number of teachers assigned to those courses fell 51 percent.
via Computer science: It’s where the jobs are, but schools don’t teach it – San Jose Mercury News.
A new policy brief was just released from the California STEM Learning Network on the state of CS education in California (see here). California actually lags behind the rest of the US on some important indicators like number of CS degrees conferred. That’s pretty scary for Silicon Valley.
Challenges of using Big Data to inform education
The story below is interesting, but not too surprising. Researchers are having trouble using MOOC data to inform our understanding of student behavior and learning. Lots of data doesn’t necessarily mean lots of insight.
I watched Charlie Rose interview the Freakonomics guys (view here), Dubner and Levitt, and found Levitt’s comments about “big data” intriguing. He’s concerned that we don’t really have the methods for analyzing such large pools of data, and there’s a real chance that Big Data could lead us to Big Mistakes, because we might act in response to our “findings,” when we don’t really have good methods for arriving at (and testing) those “findings.”
Coursera isn’t the only MOOC provider to leave researchers longing for better data collection procedures. When Harvard University and the Massachusetts Institute of Technology last week released student data collected by edX, some higher education consultants remarked that the data provided “no insight into learner patterns of behavior over time.” “It’s not as simple as them providing better data,” Whitmer said. “They should have some skin in it, because this is their job. They should be helping us with this.”
via After grappling with data, MOOC Research Initiative participants release results @insidehighered.
An FTC commissioner (see article) just pointed out the possibility of big data to lead to discriminatory practices. How much more is education at risk?
During a conference held yesterday in Washington, DC, called “Big Data: A Tool for Inclusion or Exclusion?” FTC Commissioner Julie Brill declared that regulatory agencies should shift their critical lens to what she described as the “unregulated world of data brokers.” According to Brill, there is a “clear potential” for the profiles of low-income and racialized consumers built with personal data “to harm low-income and other vulnerable consumers.”
Larry Cuban on why requiring coding is a bad idea
Larry Cuban is a remarkable educational historian. He’s written an article about why requiring coding is a bad idea, and links it to the history of Logo in the 1980’s. I think #1 is the most important, and is similar to Seymour Papert’s “Why School Reform is Impossible” article and to Roy Pea’s concerns about requiring computing.
The reasons are instructive to current enthusiasts for coding:
1. While the overall national context clearly favors technological expertise, Big Data, and 21st century skills like programming, the history of Logo showed clearly that schools as institutions have a lot to say about how any reform is put into practice. Traditional schools adapt reforms to meet institutional needs.
2. Then and now, schools eager to teach coding, for the most part, catered mostly to white, middle- and upper-middle-class students. They were and are boutique offerings.
3. Then and now, most teachers were uninvolved in teaching Logo and had little incentive or interest in doing so. Ditto for coding.
4. Then and now, Logo and coding depend upon the principle of transfer, and the research supporting such confidence is lacking.
JES 5 Now Released: New Jython, Faster, Updated Watcher, with Jython Music
- First, we’re on github! Come join us in stomping out bugs and making JES even better!
- Upgrading the Jython interpreter to version 2.5, making available new language features and speeding up many user programs. I have been working on the 4th edition of the Python MediaComp book this summer and have introduced the time library so that users can actually time their algorithms (one of those CS Principles ideas), which gave me ready-made programs to run in both JES 4.3 and JES 5.0. The speed doubled.
- Adding code to JES and the installers to support double-clicking .py files to open them in JES, on all supported platforms.
- Bundling JMusic and the Jython Music libraries, allowing JES to be used with the text “Making Music with Computers” by Bill Manaris and Andrew Brown. This is super exciting to me. All of their examples (like these) work as-is in JES 5 — plus you can do sampled sound manipulations using the MediaComp libraries. The combination makes for a powerful and fun platform for exploring computation and sound. My thanks to Bill who worked with us in making everything work in JES.
- Adding a plugin system that allows developers to easily bundle libraries for use with JES.
- Fixing the Watcher, so that user programs can be executed at arbitrary speeds. This has been broken for a long time, and it’s great to have it back. When you’re looking for a bug in a program that loops over tens of thousands of pixels or sound samples, the last thing you want is a breakpoint.
- Adding new color schemes for the Command Window, which allow users to visually see the difference between return values and print output. This was a suggestion from my colleague Bill Leahy. Students first learning return can’t see how it does something different from printing. Now, we can use color to make the result of each more distinctive. Thanks to Richard Ladner at ACCESS Computing, who helped us identify color palettes for colorblind students, so we can offer this distinction in multiple color sets.
- Fixing numerous bugs, especially threading issues. When we first wrote JES, threading just wasn’t a big deal. Today it is, and Matthew stomped on lots of threading problems in JES 5. We got lots of suggestions and bug reports from Susan Schwartz, Brian Dorn, and others which we’re grateful for.
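The timing exercise described in the Jython upgrade item above can be sketched in plain Python. The pixel-counting function below is invented for illustration (it is not the JES API); the point is just wrapping a loop-heavy function with calls to the time library:

```python
import time

def count_bright_values(pixels, threshold=200):
    # A stand-in for a typical MediaComp loop over pixel values:
    # count how many values exceed a brightness threshold.
    count = 0
    for p in pixels:
        if p > threshold:
            count += 1
    return count

# Fake "image" data: every byte value, repeated many times.
pixels = list(range(256)) * 1000

start = time.time()            # wall-clock time before the loop
count_bright_values(pixels)
elapsed = time.time() - start  # seconds spent in the loop
print("Elapsed seconds:", elapsed)
```

Running the same script under two interpreter versions and comparing the elapsed values is exactly the kind of quick benchmark the item above describes.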
Thanks to Matthew for pulling this all together! Matthew’s effort was supported by NSF REU funding.
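The return-versus-print confusion mentioned in the Command Window item above can be shown in a few lines of plain Python:

```python
def double_with_return(x):
    # Hands the value back to the caller, where it can be used further.
    return x * 2

def double_with_print(x):
    # Shows the value on screen, but the function itself returns None.
    print(x * 2)

a = double_with_return(21)   # a holds 42; nothing is displayed
b = double_with_print(21)    # displays 42; b holds None

print(a + 1)   # the returned value can be computed with
print(b)       # the printed value is gone; only None remains
```

Both calls appear to "do the same thing" at the command prompt, which is exactly why a visual cue distinguishing returned values from printed output helps novices.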
The Open Source Identity Crisis, limiting the potential for legitimate peripheral participation
An interesting new piece on identity within the open source community. Noah Slater addresses a concern that I have, that the definition of contribution in open source communities limits the opportunity for legitimate peripheral participation.
Perhaps the most obvious way in which the hacker identity has a hold over the open source identity is this notion that you have to code to contribute to open source. Much like technical talent is centered in the tech industry, code is seen as the one true way to contribute. This can be such a powerful idea that documentation, design, marketing, and so on are often seen as largely irrelevant. And even when this isn’t the case, they are seen as second class skills. For many hackers, open source is an escape from professional environments where collaboration with these “lesser”, more “mainstream” activities is mandatory.
via The Open Source Identity Crisis, by Noah Slater | Model View Culture.
Gidget is now released: A debugging puzzle game for novice programmers
I’ve seen Michael Lee present two papers on Gidget at ICER, and they were both fascinating. Gidget is now moving out of the laboratory, and I’m eager to see what happens when lots of people get a chance to play with it. Amy Ko has a blog post about Gidget that explains some of the goals.
Hello Gidget Supporter!
We are happy to announce that Gidget has launched today! You, your friends, and your family members can now help Gidget debug faulty code to solve puzzles at helpgidget.org
Gidget is a game designed to teach computer programming concepts through debugging puzzles. Gidget the robot was damaged on its way to clean up a chemical spill and save the animals, so it is the players’ job to fix Gidget’s problematic code to complete all the missions. As the levels become more challenging, players can combine newly introduced concepts with previously used commands to solve the puzzles and progress through the game.
Gidget is the dissertation work of Michael J. Lee who is a PhD candidate at the University of Washington’s Information School. Prior to its public release, over 800 online participants played through various versions of the game, and over 60 teenagers played through the game and created their own levels during four summer camps in 2013 and 2014. Our research has shown that novice programmers of all ages become very engaged with the activity, and that they are able to create their own levels (i.e., create their own programs from scratch) successfully after playing through the game.
Please share widely and refer to the press release for more information. We hope you have fun playing the game, and appreciate your interest and support for Gidget.
https://twitter.com/GidgetRobot
https://www.facebook.com/gidgetrobot
Sincerely,
Michael J. Lee and the rest of the Gidget Team
—
Michael J. Lee
PhD Candidate, Information School
University of Washington
Seattle, WA 98195-2840
Computing’s Narrow Focus May Hinder Women’s Participation: Context matters
Of course, I buy into the argument here about the importance of context. Beyond that, this article does a nice job of tying context to success of women in computing (with quotes both from Barbara Ericson and Valerie Barr, formerly at NSF).
“Boys fall in love with computers as machines; girls see them as tools to do something else,” said Barbara Ericson, a senior research scientist at the Georgia Institute of Technology who tracks the AP exam. “Then girls think, ‘maybe I don’t belong because I don’t love them like the boys do.’”
…
In her position as a professor of computer science at Union College, Barr found contextualizing computer science classes led to an increase in female enrollment. “We said, ‘let’s show them that computer science can be useful by giving themes to the introductory CS courses, so students can see their relevance,’” she said. “For us, it’s been enormously successful. Ten years ago we taught the introductory course to 29 students, and 14% of them were women. This year there were over 200 students, and 39% of them were women.” Beyond college, Barr said, she’d also like to see “a bigger funnel into the corporate world and the tech industry, with people coming from many other majors. It doesn’t have to be just CS majors.”
via Computing’s Narrow Focus May Hinder Women’s Participation | News | Communications of the ACM.
Guest Post by Joanna Goode: On CS for Each
I wrote a blog post recently about Joanna Goode promoting the goal of “CS for Each.” Several commenters asked for more details. I asked Joanna, and she wrote me this lovely, detailed explanation. I share it here with her permission — thanks, Joanna!
To answer, we as CS educators want to purposefully design learning activities that build off of students’ local knowledge to teach particular computer science concepts or practices. Allowing for students to integrate their own cultural knowledge and social interests into their academic computational artifacts deepens learning and allows for students to develop personal relationships with computing. More specifically, computer science courses lend themselves well for project-based learning, a more open-ended performance assessment that encourages student discretion in the design and implementation of a specified culminating project. Allowing students to use a graphical programming environment to create a Public Service Announcement of a topic of their choice, for example, is more engaging for most youth than a one-size-fits-all generic programming assignment with one “correct” answer.
Along with my colleagues Jane Margolis and Jean Ryoo, we recently wrote a piece for Educational Leadership (to be published later this year) that uses ExploringCS (ECS) to show how learning activities can be designed to draw on students’ local knowledge, cultural identity, and social interests. Here is an excerpt:
The ECS curriculum is rooted in research on science learning that shows that for traditionally underrepresented students, engagement and learning are deepened when the practices of the field are recreated in locally meaningful ways that blend youth social worlds with the world of science [1]. Consider these ECS activities that draw on students’ local and cultural knowledge:
- In the first unit on Human-Computer Interaction, as students learn about internet searching, they conduct “scavenger hunts” for data about the demographics, income level, cultural assets, people, and educational opportunities in their communities.
- In the Problem-Solving unit, students work with Culturally-Situated Design Tools [2], software that “help students learn [math and computing] principles as they simulate the original artifacts, and develop their own creations.” In one of the designs, on cornrow braids, students learn about the history of this braiding tradition from Africa, through the Middle Passage and the Civil Rights movement, to contemporary popular culture, and how the making of cornrows is based on transformational geometry.
- In the Web Design unit, students learn how to use html and css so they can create websites about any topic of their choosing, such as an ethical dilemma, their family tree, future career, or worldwide/community problems.
- In the Introduction to Programming unit, students design a computer program to create a game or an animated story about an issue of concern.
- In the Data Analysis and Computing unit, students collect and combine data about their own snacking behavior and learn how to analyze the data and compare it to large data sources.
- In the Robotics unit, students creatively program their robots to work through mazes or dance to students’ favorite songs.
Each ECS unit concludes with a culminating project that connects students’ social worlds to computer science concepts. For example, in unit two they connect their knowledge of problem solving, data collection and minimal spanning trees to create the shortest and least expensive route for showing tourists their favorite places in their neighborhoods.
[1] Barton, A.C. and Tan, E. 2010. We be burnin’! Agency, identity, and science learning. The Journal of the Learning Sciences, 19, 2, 187-229.
[2] Eglash, Ron. Culturally Situated Design Tools. See: csdt.rpi.edu
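The culminating route project described above leans on minimal spanning trees. A minimal sketch of the underlying algorithm, assuming Prim's method and invented place names and distances, looks like this in Python:

```python
def minimal_spanning_tree(weights):
    """Prim's algorithm over an undirected graph.

    weights: dict mapping (a, b) pairs to distances.
    Returns the chosen edges and the total distance.
    """
    nodes = set()
    for a, b in weights:
        nodes.update([a, b])

    def dist(a, b):
        # Look up an edge in either direction.
        return weights.get((a, b), weights.get((b, a), float("inf")))

    visited = {next(iter(sorted(nodes)))}  # deterministic start node
    edges = []
    total = 0
    while visited != nodes:
        # Greedily take the cheapest edge leaving the visited set.
        a, b = min(((u, v) for u in visited for v in nodes - visited),
                   key=lambda e: dist(*e))
        edges.append((a, b))
        total += dist(a, b)
        visited.add(b)
    return edges, total

# Invented distances (in blocks) between four favorite places.
w = {("home", "park"): 2, ("park", "mural"): 1,
     ("mural", "cafe"): 3, ("home", "cafe"): 5, ("home", "mural"): 4}
edges, total = minimal_spanning_tree(w)
print(edges, total)
```

Students plotting favorite places in their neighborhood are, in effect, building the `weights` table by hand and running this greedy selection on paper.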
CodeSpells: Express Yourself With Magic by ThoughtSTEM — Kickstarter
Sarah Esper (one of the leads on CodeSpells) was part of the 2013 ICER Doctoral Consortium, and was just in the ICER CRR with me. She’s designing CodeSpells based on computing education research. It’s worth checking out!
Become the most powerful wizard the world has ever seen by crafting magical spells in code. When we were young, wizards like Gandalf and Dumbledore struck a chord in our minds. We spent hours pretending to be wizards and casting epic imaginary spells. Now, we want to bring that kind of creative freedom to video games. Instead of giving the player pre-packaged spells, CodeSpells allows you to craft your own magical spells. It’s the ultimate spellcrafting sandbox. What makes it all possible is code. The game provides a coding interface where you can specify exactly what your spells will do. This interface is intuitive enough for individuals young and old who have never coded before. But skilled coders will also enjoy using their coding skills in new and creative ways! Even children can use this interface to make mountains out of the terrain, make an impenetrable force field around yourself, or even make a golem creature out of the surrounding rocks. The sky is the limit!
via CodeSpells: Express Yourself With Magic by ThoughtSTEM — Kickstarter.
Digital Literacy vs. Learning to Code: A False Dichotomy
The below linked article makes some strong assumptions about “learning to code” that lead to the author’s confusion about the difference between learning to code and digital literacy. NOBODY is arguing that all students “need to learn how to build the next Dropbox.” EVERYONE is in agreement about the importance of digital literacy — but what does that mean, and how do you get there?
As I’ve pointed out several times, a great many professionals code, even those who don’t work in traditional “computing” jobs — for every professional software developer, there are four to nine (depending on how you define “code”) end-user programmers. They code not to build Dropbox, but to solve problems that are more unique and require more creative solutions than canned applications software provides. We’re not talking thousands of lines of code. We’re talking 10-20, at most 100 lines of code for a solution (as my computational engineer colleagues tell me). For many people, coding WILL be part of the digital literacy that they need.
Learning some basic coding is an effective way of developing the valued understanding of how the cloud works and how other digital technology in their world works. Applications purposefully hide the underlying technology. Coding is a way of reaching a level lower, the level at which we want students to understand. In biology, we use microscopes and do dissections to get (literally) below the surface level. That’s the point of coding. No student who dissects a fetal pig is then ready for heart surgery, and no student who learns how to download a CSV data file and do some computation over the numbers in it is then ready to build Dropbox. But both groups of hypothetical students would then have a better understanding of how their world works and how they can be effective within it.
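A task like the one just described — pull down a CSV file and compute over the numbers in it — really does fit in a dozen lines of Python. The file contents and column name here are invented for illustration (an inline string stands in for the downloaded file):

```python
import csv
import io

# Stand-in for a downloaded CSV file.
data = """city,temperature
Atlanta,31
Seattle,22
Hoboken,27
"""

temps = []
for row in csv.DictReader(io.StringIO(data)):
    temps.append(float(row["temperature"]))

average = sum(temps) / len(temps)
print("Average temperature:", average)
```

Swap the inline string for a real file handle and this is the whole program: the kind of small, one-off script that end-user programmers write every day.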
Offering programming electives for students who want to learn Python or scripting won’t solve the underlying problem of digital illiteracy. So even if your goal is to teach all students to code, schools will first need to introduce computer-science concepts that help students learn how to stack the building blocks themselves.
They don’t need to learn how to build the next Dropbox, but they should understand how the cloud works.
“If you want to be able to use the machine to do anything, whether it’s use an existing application or actually write your own code, you have to understand what the machines can do for you, and what they can’t, even if you’re never going to write code,” Ari Gesher, engineering ambassador at Palantir Technologies, said at the event.
via Kids Need To Learn Digital Literacy—Not How To Code – ReadWrite.
Computing ed researcher fired from NSF over questions about her role as 1980s activist
I’ve known Valerie Barr for years and believe that she was honest with the agents. I don’t believe that she lied about her involvement with a domestic terrorist organization that had “ties” (whatever that means) to two political activist organizations she belonged to.
I’m most shocked about the process. Valerie was dismissed on the basis of a report by a possibly biased agent — there are no transcripts or notes from the interview. The OPM is prosecutor, judge, and jury — there is no defense. Doesn’t sound like due process to me. It’s a loss to our community that a well-regarded researcher is forced out of NSF.
It’s a greater loss in that it will make it less likely that another “typical liberal college professor” (a quote from the below article) might offer to serve.
After again being asked if she had been a member of any organization that espoused violence, Barr was grilled for 4.5 hours about her knowledge of all three organizations and several individuals with ties to them, including the persons who tried to rob the Brink’s truck. Four people were found guilty of murder in that attack and sentenced to lengthy prison terms, including Kathy Boudin, who was released in 2003 and is now an adjunct assistant professor of social work at Columbia University. “I found out about the Brink’s robbery by hearing it on the news, and just like everybody else I was shocked,” she recalls.
But OPM apparently thought otherwise, again citing her “deliberate misrepresentation” in its report. Relying heavily on that investigation, NSF handed Barr a letter on 25 July saying that it planned to terminate her IPA at the end of the first year because the OPM review had found her to be unfit for the job… Barr was given a chance to appeal NSF’s decision, and on 11 August she submitted a letter stating that OPM’s summary report of its investigation “contains many errors or mischaracterizations of my statements.” As is standard practice, agencies receive only a summary of the OPM investigation, not a full report, and lawyers familiar with the process say that an agent’s interview notes are typically destroyed after the report is written.
New Video on Exploring CS at UCLA
Nice job — I like the interviews with the students the best (though Jane rocks, of course).
In case the embedded video doesn’t work, click here: http://www.nsf.gov/news/special_reports/science_nation/intotheloop.jsp
Education research team successfully launches innovative computer science curriculum
Jane Margolis is an educator and researcher at UCLA, who has dedicated her career to democratizing computer science education and addressing under-representation in the field. Her work inspires students from diverse backgrounds to study computer science and to use their knowledge to help society. With support from the National Science Foundation (NSF), Margolis and her team investigated why so few girls and under-represented minorities are learning computer science. They developed “Exploring Computer Science,” or ECS, to reverse the trend.