SIGCSE 2016 Preview: Miranda Parker replicated the FCS1
March 2, 2016 at 8:00 am | 10 comments
I’ve been waiting a long time to write this post, though I do so even now with some trepidation.
In 2010, Allison Elliott Tew completed her dissertation on building FCS1, the first language-independent and validated measure of introductory computer science knowledge (see this post summarizing the work). The FCS1 was a significant accomplishment, but it didn’t get used much. Allison had concerns about the test becoming freely available and no longer useful as a research instrument.
Miranda Parker joined our group and replicated the FCS1. She created an isomorphic test (which we're calling SCS1, for Secondary CS1 instrument, since it comes after the first). She then followed a rigorous process for replicating a validated instrument: think-aloud protocols to check usability (do the problems read as she meant them?), a large-scale counterbalanced study using both tests, and analyses including correlational and item response theory (IRT) methods. Her results indicate that SCS1 is effectively equivalent to FCS1, but they also point out the weaknesses of both tests and why we need more and better assessments.
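For readers who want a concrete picture of what the counterbalanced comparison involves, here is a minimal, purely illustrative sketch in Python. It is not Miranda's actual analysis: the participants, scores, and item count are invented, and it shows only the simplest correlational checks one would use to argue that two isomorphic tests measure the same construct.

```python
# Illustrative sketch only -- not the SCS1/FCS1 study's actual analysis code.
# Invented scores for the same participants taking both isomorphic tests in
# counterbalanced order; the 27-item test length is assumed for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants, n_items = 200, 27

# Simulate a latent "CS1 knowledge" level per participant, then a score on each test.
ability = rng.normal(0.5, 0.15, size=n_participants).clip(0.05, 0.95)
fcs1_scores = rng.binomial(n_items, ability)
scs1_scores = rng.binomial(n_items, ability)

# A high, significant Pearson correlation between the two forms is one piece
# of evidence that the isomorphic test measures the same construct.
r, p = stats.pearsonr(fcs1_scores, scs1_scores)
print(f"Pearson r = {r:.2f} (p = {p:.2g})")

# A paired t-test checks whether one form is systematically harder than the other.
t, p_t = stats.ttest_rel(fcs1_scores, scs1_scores)
print(f"Paired t = {t:.2f} (p = {p_t:.2g})")
```

The real study went well beyond this (including IRT modeling of item difficulty and discrimination), but even this simple check conveys what "effectively equivalent" means in practice.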
(Note: this paragraph is complaining; some readers may just want to skip it.) This was the first time anyone had ever replicated a validated CS research instrument, so the process itself is a significant result. SIGCSE reviewers did not agree. The Associate Chair's comment on our rejected paper said, "Two reviewers had concerns about appropriateness of this paper for SIGCSE: #XXX because it didn't directly address improved learning, and #YYY because replicating the FCS1 wasn't deemed to be as noteworthy as the original work." So, according to the reviewers, an assessment tool doesn't count because it doesn't directly improve learning, and a first-ever replication isn't noteworthy enough to publish.
Miranda was hesitant to release SCS1 for use (e.g., by posting it on my blog or emailing the CSEd-Research lists) until the result was peer-reviewed. This is a disadvantage my students have suffered for having an advisor who blogs: some reviewers have rejected my students' papers because my blogging made it discoverable who did the research, and thus our papers could not be sufficiently anonymized to meet those reviewers' standards. So I haven't talked about SCS1, despite my pleasure and pride in Miranda's accomplishment.
I’m posting this now because Miranda does have a poster on SCS1 at the SIGCSE 2016 Technical Symposium. Come see her at the 3-5 pm Poster Session on Friday. Miranda had a major success in her first year as a PhD student, and the research community now has a new validated research instrument.
Here's the trepidation part: her paper on the replication process was just rejected from ITiCSE. There's no Associate Chair for ITiCSE, so there's no meta-review giving the overall reasons. One reviewer raised some concerns about the statistics, which we'll have to investigate. Another reviewer strongly disagreed with the very idea of a replication, much like reviewer #YYY at SIGCSE. One reviewer complained that this paper was awfully similar to a paper by Elliott Tew and Guzdial, so maybe it shouldn't be published. I'm not sure how we convince SIGCSE and ITiCSE reviewers that replication is important and something that most STEM disciplines are calling for more of. (A particularly aggravating point: because FCS1 is not freely available, one reviewer would not accept that FCS1 is "valid, consistent, and reliable" without inspecting it, as if you could tell those characteristics just by looking at the test.)
I'm talking about SCS1 now because Miranda's poster was accepted, so she has a publication on it. We really want to publish her process and, in particular, the insights we now have about both instruments. We'll have to wait to publish that, and I hope the reviewers at the next conference don't give us grief because I talked about the result here.
Contact Miranda at scs1assessment@gmail.com for access to the test.
Entry filed under: Uncategorized. Tags: assessment, computing education research, evaluation.
1. Neil Brown | March 2, 2016 at 8:17 am
The SIGCSE review process is notoriously noisy, but combined with the ITiCSE reviews, this seems to be an attitude issue as much as getting one bad review. Fingers crossed for ICER, which I presume is your next step.
2. Raul Miller | March 2, 2016 at 10:22 am
Since real science depends on the hypothesis/test cycle, and since this review board has other criteria, it sounds like it’s going to take some real effort to both establish and maintain a publishing vehicle for recording efforts to reproduce results.
Conceptually, reproduction attempts which succeed and reproduction attempts which fail are both of interest. (Though, in both cases, there’s also the issue of what variables have influenced the outcome.)
And, since this is painstaking “low profile” work, it’s going to tend to get lots of pushback from people with a strong “sensationalist” bias. So that would need to be dealt with, also.
This does not sound easy. It sounds like discouraging, difficult work. But the alternative [giving up on real science] sounds worse.
3. Alan Fekete | March 11, 2016 at 4:25 pm
I am a SIGCSE/ITICSE reviewer every year (though I did not review the submission Mark described). I always judge these papers from the perspective of “does this work help someone teach better?”. I see these conferences (different from ICER) as meetings of a community of practice among computing educators who are seeking innovation and improvement [just like QCon is a conference for developers, different from ICSE which is for SE researchers]. So a SIGCSE/ITICSE paper about a research achievement needs to show lessons that can be useful in practice, for me to rate it as “accept”. This shouldn’t be difficult for a new assessment tool, and even a paper about a new process for validating assessment tools should be able to make a case, for example, that practicing teachers can adopt a cut-down version of the validation approach, to check if their assessments are reasonable.
However, as Neil says, the SIGCSE/ITiCSE review process is noisy (I prepare myself by thinking of a SIGCSE submission as essentially a 1-in-3 random chance). But so is, e.g., the research grant application process in many countries. Academe is like that: success may well be just the outcome of a random walk, not a reflection of talent or merit. We hope that our assessment of our students is less random, and indeed it is clear that there is a strong correlation between grades in different classes. I believe (but without seeing any evidence) that the grades we award are also a good signal of likely success in industry.
4. Miguel Rubio | April 3, 2016 at 12:09 pm
One of the working groups at ITICSE15 published a paper about replication studies in CSEd.
They discuss why it is important to replicate previous work and present some case studies showing that this is not trivial.
It might be a helpful reference if you present this work to another CSEd conference.
The reference is:
Educational Data Mining and Learning Analytics in Programming: Literature Review and Case Studies
http://dx.doi.org/10.1145/2858796.2858798
5. SIGCSE 2016 | Mccricks's Blog | April 6, 2016 at 9:45 am
[…] about Jan Cuny’s SIGCSE Outstanding Contribution award and a description of one of his posters replicating his earlier work. It was enlightening to read about the frustrations in publishing replicated work. There’s […]
6. Preview ICER 2016: Ebooks Design-Based Research and Replications in Assessment and Cognitive Load Studies | Computing Education Blog | September 2, 2016 at 7:53 am
[…] Assessment by Miranda Parker, me, and Shelly Engleman. This is Miranda’s paper expanding on her SIGCSE 2016 poster introducing the SCS1 validated and language-independent measure of CS1 knowledge. The paper does a […]
7. SIGCSE 2017 Preview: Ebooks, GP, EarSketch, CS for All, and more from Georgia Tech | Computing Education Blog | March 8, 2017 at 7:01 am
[…] PhD student, Miranda Parker (who has been working on privilege issues and on the SCS1), and Leigh Ann Delyser (of CSNYC and CS for All fame) will present on the new K-12 CS Framework […]
8. An Analysis of Supports and Barriers to Offering Computer Science in Georgia Public High Schools: Miranda Parker's Defense | Computing Education Research Blog | October 7, 2019 at 7:01 am
[…] Readers of this blog will know Miranda from her guest blog post on the Google-Gallup polls, her SCS1 replication of the multi-lingual and validated measure of CS1 knowledge, her study of teacher-student […]
9. Why don't high schools teach CS: It's the lack of teachers, but it's way more than that (Miranda Parker's dissertation) | Computing Education Research Blog | December 16, 2019 at 8:00 am
[…] of this blog will know Miranda from her guest blog post on the Google-Gallup polls, her development of SCS1 as a replication of a multi-lingual and validated measure of CS1 knowledge, the study she did of […]
10. ICER 2021 Preview: The Challenges of Validated Assessments, Developing Rich Conceptualizations, and Understanding Interest #icer2021 | Computing Education Research Blog | August 16, 2021 at 7:00 am
[…] FCS1 and SCS1.” This is a paper that we planned to write when Miranda first developed the SCS1 (first published in 2016). We created the SCS1 in order to send it out to the world for use in research. We hoped that we […]