Posts tagged ‘computing education research’
I needed to look up a paper on Andreas Stefik’s page the other day and came across this fascinating new paper from him:
Phillip Merlin Uesbeck, Andreas Stefik, Stefan Hanenberg, Jan Pedersen, and Patrick Daleiden. 2016. An empirical study on the impact of C++ lambdas and programmer experience. In Proceedings of the 38th International Conference on Software Engineering (ICSE ’16). ACM, New York, NY, USA, 760-771.
(You can download it for free from his publications page: http://web.cs.unlv.edu/stefika/research.html.)
Since this is Stefik, he carefully describes what his paper is saying and what it’s not saying. For example, he and his students measured C++ lambdas versus iterators, and C++ lambdas are not a particularly pleasant syntax to work with.
The results are quite interesting. One graph in the paper is what caught my eye. For professionals, iterators and lambdas work just about the same. For novices, iterators blow lambdas away. Lambda-using students took more time to complete tasks and received more compiler errors (though that might be a good thing, in terms of using the compiler to find and correct bugs). Most interesting was how the differences disappeared with experience. Quoting from the abstract:
Finally, experienced users were more likely to complete tasks, with or without lambdas, and could do so more quickly, with experience as a factor explaining 45.7% of the variance in our sample in regard to completion time.
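To make concrete what was being compared, here is a minimal sketch of the two styles (my own illustration in C++11, not one of the study’s actual tasks): summing a container once with an explicit iterator loop and once with a lambda passed to a standard algorithm.

```cpp
// Illustration only: not a task from the Uesbeck et al. study.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values = {3, 1, 4, 1, 5};

    // Iterator style: walk the container with an explicit iterator loop.
    int sum_iterators = 0;
    for (std::vector<int>::const_iterator it = values.begin(); it != values.end(); ++it) {
        sum_iterators += *it;
    }

    // Lambda style: hand an anonymous function to a standard algorithm.
    int sum_lambdas = 0;
    std::for_each(values.begin(), values.end(),
                  [&sum_lambdas](int v) { sum_lambdas += v; });

    std::cout << sum_iterators << " " << sum_lambdas << "\n";  // both print 14
    return 0;
}
```

The extra capture-list and bracket notation in the lambda version is plausibly the kind of syntax that costs novices time and compiler errors while experienced programmers barely notice it.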
This is an example of my “Test, don’t trust” principle (see earlier blog post). I was looking up Stefik’s paper because I received an email from someone who simply claimed, “And I’m using functional notation because it’s much easier for novices than procedural or object-oriented.” That may be true, but it ought to be tested.
Seeking Collaborators for a Study of Achievement Goal Theory in CS1: Guest blog post by Daniel Zingaro
I have talked about Dan’s work here before, such as his 2014 award-winning ICER paper and his Peer Instruction in CS website. I met with Dan at the last SIGCSE where he told me about the study that he and Leo Porter were planning. Their results are fascinating since they are counter to what Achievement Goal Theory predicts. I invited him to write a guest blog post to seek collaborators for his study, and am grateful that he sent me this.
Why might we apply educational theory to our study of novice programmers? One core reason lies in theory-building: if someone has developed a general learning theory, then we might do well to co-opt and extend it for the computing context. What we get for free is clear: a theoretical basis, perhaps with associated experimental procedures, scales, hypotheses, and predictions. Unfortunately, however, there is often a cost in appropriating this theory: it may not replicate for us in the expected ways.
Briana Morrison’s recent work nicely highlights this point. In two studies, Briana reports her efforts to replicate what is known about subgoals and worked examples. Briefly, a worked example is a sample problem whose step-by-step solution is given to students. And subgoals are used to break that solution into logical chunks to hopefully help students map out the ways that the steps fit together to solve the problem.
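To make the idea concrete, here is a made-up worked example with subgoal labels written as comments (my own illustration, not drawn from Briana’s study materials): the full step-by-step solution is given, and each label names the chunk of steps beneath it.

```cpp
// Illustration only: not taken from Briana Morrison's study materials.
#include <iostream>
#include <vector>

// Worked example: compute the average of a list of exam scores.
// Each "Subgoal" comment is a label that chunks the steps below it.
int main() {
    std::vector<double> scores = {88.0, 92.5, 79.0, 95.5};

    // Subgoal 1: Initialize an accumulator.
    double total = 0.0;

    // Subgoal 2: Add every element to the accumulator.
    for (double score : scores) {
        total += score;
    }

    // Subgoal 3: Divide the total by the number of elements.
    double average = total / scores.size();

    std::cout << "Average: " << average << "\n";
    return 0;
}
```

Roughly speaking, a “generate your own subgoals” condition would show the same steps but ask the student to supply the labels, while a “no subgoals” condition would show the steps with no labels at all.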
Do subgoals help? Well, it’s supposed to go like this, from the educational psychology literature: having students generate their own labeled goals is best, giving students the subgoal labels is worse, and not using subgoals at all is worse still. But that isn’t what Briana found. For example, Briana reports [1] that, on Parsons puzzles, students who are given subgoal labels do better than both those who generate their own subgoal labels and those not given subgoals at all. Why the differences? One possibility is that programming exerts considerable cognitive load on the learner, and that the additional load incurred by generating subgoal labels overloads the student and harms learning.
The point here is that taking seriously the idea of leveraging existing theory requires concomitant attention to how and why the theory may operate differently in computing.
My particular interest here is in another theory from educational psychology: achievement goal theory (AGT). AGT studies the goals that students adopt in achievement situations, and the positive and negative consequences of those goals in terms of educationally-relevant outcomes. AGT zeroes in on two main goal types: mastery goals (where performance is defined intrapersonally) and performance goals (where performance is defined normatively in comparison to others).
Do these goals matter? Well, it’s supposed to go roughly like this: mastery goals are positively associated with many outcomes of value, such as interest, enjoyment, self-efficacy, and deep study strategies (but not academic performance); performance goals, surprisingly and confusingly, are positively associated with academic performance. But, paralleling Briana’s studies above, this isn’t what we’ve found in CS. With Leo Porter and my students, we’ve been studying goal-outcome links in novice CS students. We’ve found, contrary to theoretical expectations, that performance goals appear to be null or negative predictors of performance, and that mastery goals appear to be positive predictors of performance [2,3].
We are now conducting a larger study of achievement goals and outcomes of CS1 students — larger than that achievable with the couple of institutions to which we have access on our own. We are asking for your help.
The study involves administering two surveys to students in a CS1 course. The first survey, at the beginning of the semester, measures student achievement goals. The second survey, close to the end of the semester, measures potential mediating variables. We plan to collect exam grade, interest in CS, and other outcome variables.
The hope is that we can conduct a multi-institutional study of a variety of CS1 courses to strengthen what we know about achievement goals in CS.
Please contact me at daniel dot zingaro at utoronto dot ca if you are interested in participating in this work. Thanks!
[1] Briana Morrison. Subgoals Help Students Solve Parsons Problems. SIGCSE, 2016. ACM DL link.
[2] Daniel Zingaro. Examining Interest and Performance in Computer Science 1: A Study of Pedagogy and Achievement Goals. TOCE, 2015. ACM DL link.
[3] Daniel Zingaro and Leo Porter. Impact of Student Achievement Goals on CS1 Outcomes. SIGCSE, 2016. ACM DL link.
I enjoy reading Annie Murphy Paul’s essays, and this one particularly struck home because I just got my student opinion surveys from last semester. I use active learning methods in my Media Computation class every day, where I require students to work with one another. One student wrote:
“I didn’t like how he forced us to interact with each other. I don’t think that is the best way for me to learn, but it was forced upon me.”
It’s true. I am a Peer Instruction bully.
At a deeper level, it’s amazing how easily we fool ourselves about what we do and don’t learn from. It’s like the brain training work. We’re convinced that we’re learning from it, even if we’re not. This student is convinced that they don’t learn from working with others, even though the available evidence says they do.
In case you’re wondering about just what “active learning” is, here’s a widely-accepted definition: “Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It emphasizes higher-order thinking and often involves group work.”
Interesting and relevant for this list. There’s a lot in the NSF big ideas document (see link here) about using technology for learning, but there’s also some discussion of what we want students to know (including about computing technology), e.g., “the development and evaluation of innovative learning opportunities and educational pathways, grounded in an education-research-based understanding of the knowledge and skill demands needed by a 21st century data-capable workforce.”
The six “research” ideas are intended to stimulate cross-disciplinary activity and take on important societal challenges. Exploring the human-technology frontier, for example, reflects NSF’s desire “to weave in technology throughout the fabric of society, and study how technology affects learning,” says Joan Ferrini-Mundy, who runs NSF’s education directorate. She thinks it will also require universities to change how they educate the next generation of scientists and engineers.
Google has now released the results of the Gallup surveys from last year of parents, teachers, and principals about attitudes on CS disaggregated by 11 populous US states — see state reports (and methodology explanation) here. The blog announcement about the report is here. These are fascinating to read, especially for me and my colleagues since some of these states are also ECEP states (see our recent report on activity in ECEP states). Pennsylvania, Wisconsin, and Texas are doing much better than the US average in this analysis, while Ohio and North Carolina are far behind.
These are the results of a large-scale survey, not interviews or focus groups. The advantage is that we get a lot of answers (9,693 elementary school principals across the US). The disadvantage is that respondents answered these questions without probes, follow-ups, or any “What did you mean by that?”
For example, one of the benchmark items is “CS offered > 5 years.” My first thought was that this meant that there was CS offered in the curriculum for five grades, e.g., middle school and high school. The actual question answered by principals was “How long has your school offered opportunities to learn computer science? (% greater than 5 years)” So this item is about the longevity of CS ed at these particular schools that were sampled. That’s interesting, but I’m not sure what it says about the state compared to the particular schools sampled — especially in local control states (e.g., California, Massachusetts, Nebraska) where individual districts can do anything they want.
We’re told that parents want more CS, but principals and parents mostly think that CS is computer literacy (e.g., how to use a computer). We’re told that 64% of Michigan principals say “just as/more important” to “Do you think offering opportunities to learn CS is more important, just as important, or less important to a student’s future success than required courses like math, science, history and English?” What does that mean, if they think that CS is keyboarding skills? When 11% of the principals in Illinois say that demand for CS education among parents is high, does that mean that the principals think the parents think it’s keyboarding? or real CS? Is one more valuable than the other to parents, in the opinion of principals? Maybe the principals are right, and only 11% of the parents would want CS if they knew what CS was.
Overall, recommended reading, but sometimes, it feels like reading tea leaves.
Getting closer to “all” in #CSforAll: Instructional supports for students with disabilities in K-5 computing
I’ve been arguing for a while that we don’t know how to get to CS for All, because we don’t know how to teach “all” yet. This is what the Bootstrap group has been arguing from a STEM discipline and economics perspective (see blog post). I’ve also been concerned that we’re biased by the Inverse Lake Wobegon Effect and are assuming that the high-ability learners we’ve been teaching represent everyone.
Maya Israel is one of the few researchers who’s asking, “How do we teach computing to students with cognitive or learning disabilities in K-12?” Below is a link to her most recent study. Here, she’s looking at how we teach and at what helps students engage in the computing activity. I talked with her about this paper; we still don’t know what the students are learning.
As computer programming and computational thinking (CT) become more integrated into K-12 instruction, content teachers and special educators need to understand how to provide instructional supports to a wide range of learners, including students with disabilities. This cross-case analysis study examined the supports that two students with disabilities, who were initially disengaged during computing activities, received during computing instruction. Data revealed that students’ support needs during computing activities were not CT-specific. Rather, supports specific to these students’ needs that were successful in other educational areas were also successful and sufficient in CT. Although additional studies would need to be conducted to ascertain the transferability of these findings to other contexts and students, our results contribute evidence that students with disabilities can and should participate in CT and be provided with the supports they need, just as in all other areas of the curriculum. We present a framework for evaluating student engagement to identify student-specific supports and, when needed, refine the emerging K-12 CT pedagogy to facilitate full participation of all students. We then offer a list of four implications for practice based on the findings.
I recently posted a piece about my personal plans for the future, and I talked about how great it would be to be at a place where there were three or more CS Ed faculty — a critical mass. Kevin rightly called me out on that in the comments, suggesting that it would be hard to get more than a couple Computing Education researchers in a US CS department. (Outside the US, there are multiple institutions with critical mass CER communities, including U. Kent at Canterbury and U. Adelaide.)
With this year’s hires, there are now two US campuses with that kind of depth! In both cases, they’re avoiding the problem Kevin describes by spreading across multiple departments, not just in CS.
University of Nebraska at Omaha: I knew that my PhD student, Briana Morrison (her dissertation proposal is described here, and her award-winning ICER paper is described here), was joining (my former student) Brian Dorn (here’s a post on his dissertation) in the CS department at UNO. Then I learned that Michelle Friend (whose work with middle school girls in CS was presented at ICER 2013 and mentioned in this post), who just finished her PhD at Stanford, is also joining UNO in teacher education. They are well-situated to become a (the?) major player in research on CS teacher professional development.
University of California – San Diego: Leo Porter (winner of many SIGCSE and ICER best paper awards, including work described in this post) and Christine Alvarado (who was key to the growth of women in computing at Harvey Mudd) are in CS, Scott Klemmer (who gave the keynote at ICER 2012) is in the Design Lab, and Beth Simon (who still probably has the most ICER publications of anyone) has just returned to UCSD (from Coursera) to join Education. And now, Philip Guo just announced that he’s joining UCSD in Cognitive Science. Philip built the Python Tutor that we use in our ebooks, blogs frequently on CS Ed issues, and has been publishing a ton recently (including four papers at VL/HCC last year) on issues related to learning programming.
While I’m jealous that I’m not part of a critical mass CER group, it’s a great thing for the field — more students, more CS teachers, more development and evaluation of interventions and curricula, more answers for the growing demand for computing education research, and more attention to the issues of computing education.