Archive for October 26, 2010
Call for Participation: Computing Commons Collaboration Conference
We are now inviting proposals for presentations and posters. Both high school computing teachers and undergraduate computing faculty are welcome to present by submitting a 1-page proposal. The submission deadline for the first mini-conference is November 30, 2010.
Call for Participation for the C3 Conference
Each submission should include:
- Your name, school name, e-mail address, mailing address, and phone number
- The session you are submitting for: discussion session or poster
- Your proposal abstract, with title and presenter(s), and a description of the objectives and content of your presentation or poster, including how you will involve the audience in the discussion
Those SIGCSE Reviewers say the darnedest things!
SIGCSE Symposium 2011 acceptances and rejections came out this past weekend. I’m thrilled with how our group did. We submitted three papers: Allison’s on her dissertation work, Lijun’s on our community support for CS teachers, and Davide Fossati’s on his interview study of CS instructors. All three were accepted. Barb and I submitted two workshops — both were accepted. I was part of two panels or special sessions — both were accepted. Cool!
And yet, I’m annoyed. I’m annoyed because I’m concerned with raising the quality of work at the SIGCSE conferences. We can only do that with reviewers who recognize good quality work, and we still have reviewers in the pool who don’t understand science.
Davide Fossati interviewed 14 post-secondary CS instructors at 3 different institutions, asking them to tell him stories about times when they changed something in their classes and how they decided whether the change was successful. Each instructor gave at least one story of success and one of failure. Davide used appropriate qualitative analysis techniques to draw out themes and commonalities from these stories. In my horribly biased opinion, it’s good work that tells us something we didn’t know about the decision-making process of CS instructors. It’s not comprehensive, but you can’t do a comprehensive study without first knowing what you’re looking for. That’s what Davide did. We don’t claim that the reasoning we saw represents all instructors. However, the reasoning we saw really does exist, and the reasoning we saw repeatedly represents processes that are not unique to a single institution or instructor. That’s important to know.
One reviewer completely hated the paper. “I don’t believe that interviewing 14 Computer Science instructors from three different institutions sets the appropriate framework for drawing conclusions that are statistically valid and meaningful.” The reviewer and we do agree on the goal — it’s about arriving at “meaning,” drawing conclusions that mean something. But one can arrive at “meaningful” without being nationwide or worldwide or even statistically significant. “The paper as it stands is not ready for publication, in my opinion, for the reasons cited above. The paper has the look and feel of a pilot study that will be used to design the actual statistically-based study.” The “actual” study?!?
The definition of “good science” is not “used statistics!” I do believe in the use of statistics. Statistics are important for testing generalizability and for checking that you’re not fooling yourself. They are really important in science. But science is also about seeing what’s there, before you try to measure it or even try to explain it. Cataloging biological specimens is an important part of science. Asking people why they do what they do is the first step to understanding why they do it. If we want teachers to make better decisions, let’s first figure out how they’re making decisions now.
Our papers don’t always get accepted to SIGCSE conferences. Only half of our ICER 2010 submissions were accepted. When the reviews are fair and well-informed, you learn something even from the rejections. ICER reviews have been really, really good.
Reviews like this one are roadblocks to publishing good work at the SIGCSE Symposium, disseminating it to a wider audience, and informing (and hopefully improving) a community. Reviewers like this allow for the publication of meaningless work that has a good p-value, and inhibit meaningful work that tells us something we didn’t know before.
To return to my original point: I’m thrilled. Davide’s paper did get accepted, so the process of multiple reviews and a meta-reviewer (7 reviewers total!) corrected for the statistics-obsessed reviewer. The system worked. Nonetheless, it’s important for us, as a community, to keep having conversations about what counts as good-quality work and what we should be looking for in a review. In my opinion, we need reviewers who understand the value of publishing work that uses statistics when they’re important for the claim, and doesn’t use them when they’re not.