Archive for August, 2019

Holding ourselves to a higher standard: “Language-independent” just doesn’t cut it

My CACM blog post this month (see link here) is a retraction of the term “language-independent” in our work on the FCS1 and SCS1:

There is no language independence here. The FCS1 and SCS1 are multi-lingual, which is a remarkable achievement. We might also call them pseudocode-based assessments, which is how they can be multi-lingual, but since a pseudocode-based test isn’t necessarily validated across other languages, “multi-lingual” is a stronger claim than “pseudocode-based.” We do not cover all of any of those languages (Java, MATLAB, or Python), but we do cover the subset most often appearing in an introductory CS course.

They are clearly not language independent. In the great design space of programming languages, Java, MATLAB, and Python cluster together pretty closely. There are programming languages far more different from these — I’m sure it’ll take any reader here just a few moments to generate a half-dozen candidates whose learners would score poorly on the FCS1 and SCS1, from Scratch to Haskell to Prolog.

I only vaguely remember the discussion about using the term “language independence” with Allison many years ago.  I remember her asking me if we should worry about the (relatively few) classes that used languages other than Python, MATLAB, and Java.  I think I told her she needed to graduate. I judged from the perspective of what was being published at the SIGCSE Symposium — covering Python, MATLAB, and Java was “language independent” enough for the paper to be seen as valuable to SIGCSE reviewers. I don’t remember the details, but I’ll accept the blame for the decision to call FCS1 (and SCS1 later) language independent.

That was a long time ago, before the International Computing Education Research Conference (ICER) was invented.  Since then, we have a computing education research community that aims to answer questions about how people learn computing — period. We’re not just about undergraduate introductory computer science classes. Even at the undergraduate level, we should study the classes (no matter how few) doing something different to see what’s powerful and interesting about them. We should explicitly be exploring unusual (even purpose-invented) languages to understand more of the interaction between programming languages and human cognition.  An insightful PPIG paper from Clayton Lewis (see link here), recently circulated on Twitter (see tweet), makes great points about the complexity of measuring that interaction:

The PPIG community should be proud that cognitive dimensions analysis emerged from the work of people in its ranks, Thomas Green, Marian Petre, Alan Blackwell, and others. We should be skeptical of calls to replace its use with A-B trials or other quantitative methods that cannot cope with the complexity of the language design landscape. When results of A-B trials and similar studies are presented, we should diplomatically ask for the mechanisms that are involved to be described. Colleagues who present the results of such trials should be prepared to respond to this request, so that the generalizability of their results can be assessed.

We should not be driven by what’s in classrooms today (see previous post making that argument). We should hold ourselves to a higher standard. Our goal is to create a lasting record of exploration and research for a research community.

That’s why it’s past time for this retraction.

 

August 26, 2019 at 7:00 am 1 comment

Summarizing findings about block-based programming in computing education

As readers of my blog know, I’m interested in alternative modalities and representations for programming. I’m an avid follower of David Weintrop’s work, especially the work comparing blocks and text for programming (e.g., as discussed in this blog post).

David wrote a piece for CACM summarizing some of his studies on block-based programming in computing education. It has just been published in the August issue.  Here’s the link to the piece — I recommend it.

To understand how learners make sense of the block-based modality and understand the scaffolds that novice programmers find useful, I conducted a series of studies in high-school computer science classrooms. As part of this work, I observed novices writing programs in block-based tools and interviewed them about the experience. Through these interviews and a series of surveys, a picture emerged of what the learners themselves identified as being useful about the block-based approach to programming. Students cited features discussed here, such as the shape and visual layout of blocks, the ability to browse available commands, and the ease of the drag-and-drop composition interaction. They also cited the language of the blocks themselves, with one student saying “Java is not in English it’s in Java language, and the blocks are in English, it’s easier to understand.” I also surveyed students after working in both block-based and text-based programming environments, and they overwhelmingly reported block-based tools as being easier. These findings show that students themselves see block-based tools as useful and shed light on why this is the case.

August 19, 2019 at 7:00 am Leave a comment

Social studies teachers programming, when high schools choose to teach CS, and new models of cognition and intelligence in programming: An ICER 2019 Preview

My group will be presenting two posters at ICER this year.

  • Bahare Naimipour (Engineering Education Research PhD student at U-Michigan) will be presenting our participatory design session with social studies educators, Helping Social Studies Teachers to Design Learning Experiences Around Data–Participatory design for new teacher-centric programming languages. We had 18 history and economics teachers building data visualizations in either Vega-Lite or JavaScript with Google Charts (see the sketch after this list). Everyone got the starter visualization running and made changes that they wanted in less than 20 minutes. Those who started in Vega-Lite also tried out the JavaScript code, but only about 1/4 of the JS groups moved to Vega-Lite successfully.
  • Miranda Parker (Human-Centered Computing PhD student at Georgia Tech) will be presenting her quantitative model explaining about half of the variance in whether Georgia high schools taught CS in 2016, A Statewide Quantitative Analysis of Computer Science: What Predicts CS in Georgia Public High School. The most important factor was whether the school taught CS the year before, suggesting that overcoming inertia is a big deal — it’s easier to sustain a CS program than start one. She may talk a little about her new qualitative work, where she’s studying four schools as case studies about their factors in choosing to teach CS, or not.
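
For readers unfamiliar with the two tools, here is a minimal sketch of the contrast: a declarative Vega-Lite specification versus imperative JavaScript against the Google Charts API. This is not the study’s actual starter visualization; the dataset, element ids, and chart choice are made up for illustration.

```typescript
// Illustrative sketch only -- not the study's starter visualization.

// Declarative: a Vega-Lite chart is a JSON description of data plus encodings.
import vegaEmbed from "vega-embed";

const spec: any = {
  $schema: "https://vega.github.io/schema/vega-lite/v5.json",
  data: {
    values: [                       // made-up data
      { subject: "History", teachers: 12 },
      { subject: "Economics", teachers: 6 },
    ],
  },
  mark: "bar",
  encoding: {
    x: { field: "subject", type: "nominal" },
    y: { field: "teachers", type: "quantitative" },
  },
};
vegaEmbed("#vis", spec); // renders into <div id="vis">

// Imperative: Google Charts needs a loader, a callback, a DataTable,
// and an explicit draw call on a DOM element.
declare const google: any; // provided by the Google Charts loader <script>
google.charts.load("current", { packages: ["corechart"] });
google.charts.setOnLoadCallback(() => {
  const data = google.visualization.arrayToDataTable([
    ["Subject", "Teachers"],
    ["History", 12],
    ["Economics", 6],
  ]);
  const chart = new google.visualization.ColumnChart(
    document.getElementById("chart_div")
  );
  chart.draw(data, { title: "Made-up data" });
});
```

The point of the sketch is only the difference in form: the Vega-Lite version is data plus a declarative description of the chart, while the Google Charts version is conventional JavaScript with a loader, callback, and draw call.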

Barbara is a co-author on a paper, A Spaced, Interleaved Retrieval Practice Tool that is Motivating and Effective, with Iman Yeckehzaare and Paul Resnick. It describes a spaced practice tool that 32% of the students in an introductory programming course used more than they needed to, and the number of hours of use had a measurable positive effect on the final exam grade.

All of our other papers were rejected this year, but we’re in good company — the acceptance rate was around 18%. But I do want to talk about a set of papers that will be presented by others at ICER 2019. These are papers I heard about and then asked the authors for copies of. I’m excited about all three of them.

How Do Students Talk About Intelligence? An Investigation of Motivation, Self-efficacy, and Mindsets in Computer Science by Jamie Gorson and Eleanor O’Rourke (see released version of the paper here)

One of the persistent questions in computing education research is why growth mindset interventions are not always effective (see blog post here). We get hard-to-interpret results. I met Jamie and Nell at the Northwestern Symposium on Computer Science and the Learning Sciences in April (amazing event, see here for more details). Nell worked with Carol Dweck during her graduate studies.

Jamie and Nell found mixed mindsets among the CS students that they studied. Some of the students they studied had growth mindsets about intelligence, but their talk about programming practices showed more fixed mindset characteristics. Other students self-identified as having some of both growth and fixed mindset beliefs.

In particular, some students talked about intelligence in CS in ways that are unproductive when it comes to the practice of programming. For example, some students talked about the best programmers as being able to write a whole program in one sitting, or never getting any errors. A more growth-mindset approach to programming would be evidenced by talking about building programs in pieces, expecting errors, and improving through effort over time.

This is a really helpful finding. It gives us new hypotheses to explore about why growth mindset interventions haven’t been as successful in CS as in other disciplines. Few disciplines draw as sharp a distinction between their knowledge and their practice as we do in CS. It’s no wonder that we see these mixed mindsets.

Toward Context-Dependent Models of Productive Knowledge in Programming Cognition, by Brian A. Danielak

I’ve known Brian since he was a PhD student, and have been hoping that he’d start to publish some of his dissertation work. I got to read one chapter of it, and found it amazingly insightful. Brian explained how what we might see as a “random walk” of syntax was actually purposeful and rational behavior. I was excited to hear about this paper, and I enjoyed reading it.

It’s such an unusual paper for ICER! It’s empirical, but has no methods section. A big part of it is connecting to prior literature, but it’s not a formal literature review.

Brian is making an argument about how we characterize knowledge and student success in CS. He points out that we often talk about students being wrong and having misconceptions, which is less productive than figuring out what they understand and where their alternative conceptions work or fail. I see his work as following on from the work of Rich et al. (mentioned in this blog post) on CS learning trajectories. There are so many things to learn in CS, and sometimes, just getting started on the trajectory is a big step.

Spatial Encoding Strategy Theory: The Relationship between Spatial Skill and STEM Achievement, by Lauren Margulieux

Lauren is doing some impressive theoretical work here. She considers the work exploring the relationship between spatial reasoning and CS learning/performance, and then constructs a theory explaining the observed results. Since it’s Lauren, the theory is thorough and covers well the known results in this space. I wrote to her that I didn’t think the theory explains the things we expect are related to spatial reasoning but for which we don’t yet have empirical evidence. For example, when programmers simulate a program in their mind, their mental models may have a spatial component to them, but I don’t know of empirical work that explores that dimension of CS performance. But again, since it’s Lauren, I wouldn’t be surprised if her presentation addresses this point, beyond what was in the paper. (Also, read Lauren’s own summary of the paper here.)

I am looking forward to the discussion of these papers at ICER!

August 12, 2019 at 7:00 am 1 comment

Let’s think more broadly about computing education research: Questions about alternative futures

At the Dagstuhl Seminar in July, we spent the last morning going broad.  We posed three questions for the participants.

Imagine that Gates funds a CS teacher in every secondary school in the world, but requires that entirely new languages be taught (not Java, not Python, not R, not even Racket). “They’re all cultural colonialism! We have to start over!” says Bill. We have five years to get ready for this. What should we do?

Imagine that Oracle has been found guilty of some heinous crime, that they stole some critical part of the JVM, whatever. The company goes bankrupt, and installation of Java on publicly-owned computers is outlawed in most countries. How do we recover CS Ed?

Five years from now, we’ll discover that Google has secretly been moving all of their infrastructure to Racket, Microsoft to Scala, and Amazon to Haskell (or swap those around). The CS Ed world is shocked — they have been preparing students for the wrong languages for these plum jobs! What do we do now? How do you redesign undergrad ed when it’s not about C++/C#/Java/Python?

We got some pushback.  “That’s ridiculous. That’s not at all possible.” (I found amusing the description of us organizers as “Willy Wonka.”) Or, “Our goal should be to produce good programmers for industry — PERIOD!”

Those are reasonable positions, but they should be explicitly selected positions. The point of these questions is to consider our preconceptions, values, and goals. All computing education researchers (strike that: all researchers) should be thinking about alternative futures. What are we trying to change and why? In the end, our goal is to have impact. We have to think about what we are trying to preserve (and it’s okay for “producing industry programmers” to be a preserved goal) and what we are trying to change.

August 5, 2019 at 7:00 am 13 comments

