SIGCSE 2016 Preview: Miranda Parker replicated the FCS1
I’ve been waiting a long time to write this post, though I do so even now with some trepidation.
In 2010, Allison Elliott Tew completed her dissertation on building FCS1, the first language-independent and validated measure of introductory computer science knowledge (see this post summarizing the work). The FCS1 was a significant accomplishment, but it didn’t get used much. Allison had concerns about the test becoming freely available and no longer useful as a research instrument.
Miranda Parker joined our group and replicated the FCS1. She created an isomorphic test (which we're calling SCS1, for Secondary CS1 instrument; it comes after the first). She then followed a rigorous process for replicating a validated instrument: think-aloud protocols to check usability (do the problems read as she meant them?), a large-scale counter-balanced study using both tests, and both correlational and item-response theory (IRT) analyses. Her results support the claim that SCS1 is effectively equivalent to FCS1, but they also point out weaknesses in both tests and show why we need more and better assessments.
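To give a rough sense of what the correlational part of such an analysis looks like, here is a minimal sketch in Python. This is illustrative only, not Miranda's actual analysis code: the data are simulated and the scale parameters are hypothetical. In a counter-balanced design, every participant takes both tests (half in each order), so you can check both that scores on the two instruments track each other and that mean performance doesn't differ between them.

```python
# Illustrative sketch only -- simulated, hypothetical data, not the actual SCS1 study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate n participants, each taking both instruments.
# Scores are driven by a shared underlying ability plus test-specific noise;
# the 27-item scale here is an assumption for illustration.
n = 200
ability = rng.normal(0, 1, n)
fcs1 = np.clip(np.round(13 + 5 * ability + rng.normal(0, 2, n)), 0, 27)
scs1 = np.clip(np.round(13 + 5 * ability + rng.normal(0, 2, n)), 0, 27)

# If SCS1 is isomorphic to FCS1, per-participant scores should correlate strongly...
r, p = stats.pearsonr(fcs1, scs1)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# ...and mean scores should not differ (paired t-test, same participants on both tests).
t, p_t = stats.ttest_rel(fcs1, scs1)
print(f"Paired t = {t:.2f} (p = {p_t:.3g})")
```

The IRT part of the analysis goes further than this sketch: it fits per-item difficulty and discrimination parameters so you can compare the two tests item by item rather than only by total score.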
(Note: this paragraph is complaining; some readers might want to skip it.) As far as we know, this is the first time anyone has replicated a validated CS research instrument, so the process itself is a significant result. SIGCSE reviewers did not agree. The Associate Chair's comment on our rejected paper said, "Two reviewers had concerns about appropriateness of this paper for SIGCSE: #XXX because it didn't directly address improved learning, and #YYY because replicating the FCS1 wasn't deemed to be as noteworthy as the original work." So apparently an assessment tool doesn't count because it doesn't directly improve learning, and a first-ever replication is not noteworthy enough to publish.
Miranda was hesitant to release SCS1 for use (e.g., to announce it on my blog or on CSEd-Research email lists) until the result was peer-reviewed. That's a disadvantage my students have suffered for having an advisor who blogs: some reviewers have rejected my students' papers because my blogging made it discoverable who did the research, so our papers couldn't be sufficiently anonymized to meet those reviewers' standards. So I haven't talked about SCS1, despite my pleasure and pride in Miranda's accomplishment.
I’m posting this now because Miranda does have a poster on SCS1 at the SIGCSE 2016 Technical Symposium. Come see her at the 3-5 pm Poster Session on Friday. Miranda had a major success in her first year as a PhD student, and the research community now has a new validated research instrument.
Here's the trepidation part: her paper on the replication process was just rejected from ITICSE. There's no Associate Chair for ITICSE, so there's no meta-review giving the overall reasons. One reviewer raised some concerns about the statistics, which we'll have to investigate. Another reviewer strongly disagreed with the idea of a replication, much like the #YYY reviewer at SIGCSE. One reviewer complained that this paper was awfully similar to a paper by Elliott Tew and Guzdial, so maybe it shouldn't be published. I'm not sure how we convince SIGCSE and ITICSE reviewers that replication is important, and something that most STEM disciplines are calling for more of. (A particularly aggravating point: because FCS1 is not freely available, one reviewer wouldn't believe that FCS1 is "valid, consistent, and reliable" without inspecting it, as if you could tell those characteristics just by looking at the test.)
I'm talking about SCS1 now because her poster was accepted, so she has a publication on the instrument. We still want to publish her process and, in particular, the insights we now have about both instruments. That will have to wait for another venue, and I hope the reviewers there don't give us grief because I talked about the result here.
Contact Miranda at scs1assessment@gmail.com for access to the test.