Posts tagged ‘computing education research’

Why some students do not feel that they belong in CS, and how we can encourage the sense that they do belong

One of my favorite papers at ICER 2019 was by Colleen Lewis and her colleagues; it's available on her website. I'll quote her first:

Does a match between a students’ values of helping society and their perception of computing matter? Yes! A mismatch between a students’ goals of helping society and their perception of computing predicts a lower sense of belonging. And students from groups who – on average – are more likely to want to help society (women, Black students, Latinx students, and first-generation college students), this may be particularly problematic! (pdf)

  • Lewis, C. M., Bruno, P., Raygoza, J., & Wang, J. (2019). Alignment of Goals and Perceptions of Computing Predicts Students’ Sense of Belonging in Computing. Proceedings of the International Computing Education Research Conference (ICER). Toronto, Canada.

I want to expand a bit on that paragraph. I often get the question, “Why aren’t more women and URM students going into CS?” We’re seeing female students and students of color leaving or avoiding CS at many stages, as documented in, for example, Barb’s deep analysis of AP CS*. Colleen and her collaborators are giving us one answer.

We know that students who have a sense of belonging in computing are more likely to stay in computing. Colleen et al. found that students who saw their values supported in computing were more likely to feel a sense of belonging. So, if what you want to do with your life matches computing, you’re more likely to stick around in computing. This is the “alignment of goals” and “perceptions of computing” part of the title.

Next step: Students from demographic groups underrepresented in computing were more likely than other students to value community and helping society. These are their goals. Do students see that their goals align with their perception of computing? If so, they have an increased sense of belonging. Colleen and her colleagues found that if students who valued community perceived that they could use computing to support communal values, then they were more likely to stick around.

This result is obviously explanatory. It helps us to understand who stays in computing. It also suggests interventions. Want to retain more under-represented students in your CS classes? Help them to see that they can pursue their values in computing. Help them to update their perceptions so that they see the alignment of their goals with computing goals.

But what if you (as the teacher) don’t? This paper suggests future research questions. What if your CS class is entirely de-contextualized and doesn’t say anything about what the students might do with computing? What perceptions do the students bring to the CS class if nobody helps them to see the possibilities in computing? Which student goals align with these perceived goals of computing? We might guess at the answers, but it really does call for some explicit research. What are students’ goals and perceptions of computing in most CS classes today?


* Check out Barb’s blog at https://cs4all.home.blog/. As I’m writing this, Barb is finishing up the 2019 AP analysis. The gap between white and Black student pass rates on AP CSP is enormous, far larger than the gap on AP CS A. I’m hoping that she has updates there by the time this post appears.

December 9, 2019 at 7:00 am

Please forward to high school history teachers: Task-specific programming in social studies data viz

Hi,

We built this tool, DV4L (Data Visualization for Literacy), to help teachers quickly create data visualizations for social studies classes. This is currently a minimum viable product (MVP) version of our tool, and we would like to collect your feedback so that we can improve!

  • Please take a few moments and try out our tool at http://b48ca06e.ngrok.io/index.html
  • We intend for DV4L to be intuitive and easy to use, so you shouldn’t need any instructions to start. (If you ever see a screen with ‘TOO MANY CONNECTIONS’ on the top, please just wait for a few seconds and refresh the page.)
  • After you have tried out the tool, please take a few minutes to fill out this survey at https://forms.gle/jASed4f7GfTZD9xa8

We would really appreciate it if you could do this by Sunday Dec 8th to meet our class requirements. 

Feel free to share this with any other teachers or students who would like to try this out. Thank you so much!

I have been working with a team of four University of Michigan Computer Science students (in a class with Elliot Soloway) to develop a data visualization tool for history classes. They have a prototype ready for testing, and they need user data for their class by Sunday, December 8. Could you please forward this to any high school (or middle school) history teachers you know? They would also love to get feedback from high school history students.

Here are three reasons why I’m excited about this tool.

First, the team really listened to us, our history professor collaborator (Tammy Shreiner), and the social studies teachers who gave us feedback on different data visualization tools. One of the students drove 2.5 hours to attend a participatory design session with social studies teachers in October. As an example, there are driving questions for each database, to guide students in what they might inquire about — that’s a specific request from Tammy.

Second, this is a tool for historical inquiry. Data visualization tools like Vega-Lite and CODAP are terrific. (Readers of this blog know how impressed I am by Vega-Lite.) But they’re not designed for inquiry. As Bob Bain has taught me, historical inquiry starts from two pieces of evidence or two accounts that disagree. This tool is designed to support comparing different pieces of evidence and maintaining a trace of what you’ve explored. Inquiry is about comparison, not about building a single visualization.

Third, this is task-specific programming in a subtle and interesting way. You build visualizations by making choices from pull-down menus on the left. As you find interesting graphs, you save them to the right. When you want to remember what a graph is about, you hover over it and get a textual representation of the pull-down menu settings that generated it. I’m arguing that this hover text is a program — it’s a representation of the process for generating that graph, and it serves as a reminder of where you’ve been. It’s a program whose value is in reading it, not executing it.

Amy Ko told me once (I’m paraphrasing) that a program is a description of a process for the future. Using a tool is for now; a program represents the future. This program could be executed by the user in the future — set the pull-down menus to the same values, and you’ll get the same visualization. More importantly, the program represents the past and serves as a reminder of what you can do next.
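To make that claim concrete, here is a minimal sketch of the idea in JavaScript (my own illustration, with made-up names and menu choices, not DV4L’s actual implementation): each saved graph carries a small record of the pull-down choices that produced it, and that record can be read as hover text or re-executed to rebuild the visualization.

    // A hypothetical record of the pull-down choices behind one saved graph.
    // My illustration of the idea, not DV4L's actual data structure.
    const savedGraph = {
      dataset: "US Census",
      variable: "Population",
      groupBy: "Region",
      chartType: "line",
      yearRange: [1900, 2000],
    };

    // Reading the program: render the choices as hover text.
    function describeGraph(g) {
      return `${g.chartType} chart of ${g.variable} from ${g.dataset}, ` +
             `grouped by ${g.groupBy}, ${g.yearRange[0]}-${g.yearRange[1]}`;
    }

    // Executing the program: set the menus back to these values to rebuild
    // the same visualization (setMenusTo and drawChart are hypothetical
    // stand-ins for the tool's own UI functions).
    function rerunGraph(g) {
      setMenusTo(g);
      drawChart(g);
    }

    console.log(describeGraph(savedGraph));
    // "line chart of Population from US Census, grouped by Region, 1900-2000"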

Please do pass this around so that our team can get a sense of what’s working and what’s not in this prototype. Thanks!

December 5, 2019 at 8:00 am

Making the Case for Adaptive Parsons problems and Task-Specific Programming: Koli Calling 2019 Preview

I am excited to be presenting at the 19th Koli Calling International Conference on Computing Education Research (see site here). Both Barbara Ericson and I have papers this year. This was my third submission to Koli, and my first acceptance. Both of us had multiple rejections from ICER this year (see my blog post on ICER), so we updated and revised based on reviews, and were thrilled to get papers into Koli.

Investigating the Affect and Effect of Adaptive Parsons Problems

By Barbara Ericson, Austin McCall, and Kathryn Cunningham.

Barb is presenting the capstone to her dissertation work on adaptive Parsons problems (see blog post on her dissertation work here). This paper captures the iterative nature of her study. Early on, she did detailed think-aloud/interview protocols with teachers to understand how people used her adaptive Parsons problems. At the end, she looked at log files to get a sense of use at scale.

Abstract: In a Parsons problem the learner places mixed-up code blocks in the correct order to solve a problem. Parsons problems can be used for both practice and assessment in programming courses. While most students correctly solve Parsons problems, some do not. Unsuccessful practice is not conducive to learning, leads to frustration, and lowers self-efficacy. Ericson invented two types of adaptation for Parsons problems, intra-problem and inter-problem, in order to decrease frustration and maximize learning gains. In intra-problem adaptation, if the learner is struggling, the problem can dynamically be made easier. In inter-problem adaptation, the next problem’s difficulty is modified based on the learner’s performance on the last problem. This paper reports on the first observational studies of five undergraduate students and 11 secondary teachers solving both intra-problem adaptive and non-adaptive Parsons problems. It also reports on a log file analysis with data from over 8,000 users solving non-adaptive and adaptive Parsons problems. The paper reports on teachers’ understanding of the intra-problem adaptation process, their preference for adaptive or non-adaptive Parsons problems, their perception of the usefulness of solving Parsons problems in helping them learn to fix and write similar code, and the effect of adaptation (both intra-problem and inter-problem) on problem correctness. Teachers understood most of the intra-problem adaptation process, but not all. Most teachers preferred adaptive Parsons problems and felt that solving Parsons problems helped them learn to fix and write similar code. Analysis of the log file data provided evidence that learners are nearly twice as likely to correctly solve adaptive Parsons problems than non-adaptive ones.
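To give a rough sense of what intra-problem adaptation means, here is a minimal sketch in JavaScript of the adaptation steps described in the abstract (my own illustration, not Ericson’s actual implementation): when a learner keeps getting the ordering wrong, the problem is made easier, for example by removing a distractor block or fusing two correct blocks.

    // A Parsons problem: correct blocks (in solution order) plus distractors.
    const problem = {
      blocks: [
        "def average(nums):",
        "    total = sum(nums)",
        "    return total / len(nums)",
      ],
      distractors: ["    return total * len(nums)"],
    };

    // Intra-problem adaptation sketch: ease the problem after repeated failures.
    function adaptAfterFailure(p, failedAttempts) {
      if (failedAttempts < 2) return p;              // no help for the first tries
      if (p.distractors.length > 0) {
        // First easing step: drop one distractor block.
        return { ...p, distractors: p.distractors.slice(0, -1) };
      }
      if (p.blocks.length > 2) {
        // Next easing step: fuse two adjacent blocks, leaving fewer orderings.
        const [a, b, ...rest] = p.blocks;
        return { ...p, blocks: [a + "\n" + b, ...rest] };
      }
      return p;                                      // nothing left to simplify
    }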

Task-Specific Programming Languages for Promoting Computing Integration: A Precalculus Example

By Mark Guzdial and Bahare Naimipour

This is the first paper I’m publishing on my new work in task-specific programming. I mostly discuss my first prototype (see link here) and some of what math teachers are telling me (see link here). We also include a report on Bahare’s and my work with social studies educators. A good bit of this paper is putting task-specific programming in a computing education context. I see what I’m doing as pushing microworlds further.

Typically, a microworld is built on top of a general-purpose language, e.g., Logo for Papert and Boxer for diSessa. Thus, the designer of the microworld could assume familiarity with the syntax and semantics of the programming language, and perhaps some general programming concepts like mutable variables and control structures. The problem here is that Logo and Boxer, like any general-purpose programming language, take time to develop proficiency. A task-specific programming language (TSPL) aims to provide the same easy-to-understand operations for a microworld, but with a language designed for a particular purpose.

Here’s the abstract:

Abstract: A task-specific programming language (TSPL) is a domain-specific programming language (in programming languages terms) designed for a particular user task (in human-computer interaction terms). Users of task-specific programming are able to use the tool to complete useful tasks, without prior training, in a short enough period that one can imagine fitting it into a normal class (e.g., around 10 minutes). We are designing a set of task-specific programming languages for use in social studies and precalculus courses. Our goal is to offer an alternative to more general purpose programming languages (such as Scratch or Python) for integrating computing into other disciplines. An example task-specific programming language for precalculus offers a concrete context: An image filter builder for learning basic matrix arithmetic (addition and subtraction) and matrix multiplication by a scalar. TSPLs allow us to imagine a research question which we couldn’t ask previously: How much computing might students learn if they used multiple TSPLs in each subject in each primary and secondary school grade?
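To ground the precalculus example, here is a rough sketch in JavaScript of the matrix arithmetic the image-filter builder exposes (my illustration with a made-up 2x2 “image”, not the TSPL described in the paper): a grayscale image is a matrix of pixel values, so brightening is matrix addition and fading is multiplication by a scalar.

    // Matrix addition and multiplication by a scalar, applied to pixel values.
    const addMatrices = (a, b) => a.map((row, i) => row.map((v, j) => v + b[i][j]));
    const scaleMatrix = (a, k) => a.map(row => row.map(v => k * v));

    // A tiny hypothetical 2x2 "image" (brightness values 0-255).
    const image = [
      [10, 200],
      [50, 120],
    ];

    const brighter = addMatrices(image, [[40, 40], [40, 40]]); // add 40 everywhere
    const faded    = scaleMatrix(image, 0.5);                  // halve every pixel

    console.log(brighter); // [[50, 240], [90, 160]]
    console.log(faded);    // [[5, 100], [25, 60]]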

Eventually the papers are going to appear in the ACM Digital Library. I have a preprint version of Barb’s paper here, and a longer form (with bigger screenshots) of my paper here.

November 18, 2019 at 7:00 am

Where a Notional Machine Doesn’t Help: JavaScript and the DOM

At the Dagstuhl Seminar on Notional Machines and Programming Language Semantics, I came to a new understanding about where notional machines are not useful and another kind of support is needed.

I joined a breakout group on “Notional Machines for everything else.” There had been discussions about notional machines for popular programming languages like Scratch and Python, and one breakout group was formed around that goal. Our group was about exploring where else notional machines could be useful, like trying to understand machine learning, generating proofs, and JavaScript for Web development. Joe Politz was our group leader.

After the first round of discussion, we all decided to focus on JavaScript. We had some serious experts on JavaScript and the DOM in our group, like Shriram Krishnamurthi and Titus Barik. The discussion was amazing! I was learning so much, and I took pages and pages of notes. Everyone noticed my feverish note-taking, so I got elected to report back to the group. That’s this blog post — it’s the work of the whole group (not just me). I just happened to be the guy who made the slides.

We had already come to the realization that there isn’t just one notional machine per language. A notional machine is about helping students understand a computational process, and there can be lots of processes in a given language. So we picked a specific scenario that we were aiming to explain: You (as a student) want to turn the background yellow when the button is clicked.

But the student makes a mistake.

And I came to realize that, without a lot of support, the student has no way to figure out what went wrong:

The problem is that window.bg = "yellow" isn’t wrong. Because there isn’t a previously defined bg property, this assignment simply creates a property bg for the window object. No error. It just doesn’t do what the student wanted. How does the student figure out that the desired property is backgroundColor? Get a list of all bindings on window? There are hundreds or even thousands of them. How do you find backgroundColor among all of those?
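For reference, here is the mistake next to one plausible correct version using the standard DOM API (the button id is made up for the example):

    // The mistake: window has no predefined bg property, so this line just
    // creates one. No error is raised, and nothing on the page changes.
    window.bg = "yellow";

    // What the student wanted, with the real DOM API:
    document.getElementById("go").addEventListener("click", () => {
      document.body.style.backgroundColor = "yellow";
    });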

The breakout group started listing on the blackboard the things we might need to explain to students to help them understand what went wrong or what might go wrong with clicking on a button to turn the background yellow. It was a long list.

Here are a few of the items from that list:

  • What happens if you have two DOM elements with the same ID?
  • Where did window as an object get defined at all?
  • If you have event handlers defined on an object, and you delete the object, what happens to the event handler?
  • What happens if objects higher up in the DOM modify the event triggered by the click?
  • Something I heard students ask: If my JavaScript changed the DOM, how come my HTML file didn’t change?
  • And many more.

I really had no idea just how complicated JavaScript and the DOM were! Amy Ko looked up the JavaScript definition of what == means (see link here). This isn’t the formal semantics. This is meant to be understandable. It’s insanely complicated.
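A few consequences of those coercion rules give a flavor of why a simple model is hard to build:

    console.log(null == undefined); // true
    console.log("0" == 0);          // true  ("0" is coerced to the number 0)
    console.log(0 == "");           // true  ("" is coerced to 0)
    console.log("0" == "");         // false (two strings compare directly)
    console.log([] == false);       // true  ([] -> "" -> 0, and false -> 0)
    console.log([] == ![]);         // true  (![] is false, then as above)
    console.log(NaN == NaN);        // false (NaN is never equal to anything)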

At this point, Ben Shapiro raised a really interesting side question: What’s the cost of JavaScript’s overly-complicated rules? Is there a way of measuring the lost productivity of bad programming language design?

So, what’s the answer?

I realized that the answer is not a notional machine. The problem is that long list Amy found for us. I can teach part of that list as a notional machine, but I can’t teach all of it with a simplified model. Any simplification I create would be insufficient for the complexity of the reality. And even having a notional machine wouldn’t help if a student typed window.bg = "yellow". The student needs IDEs and other supports to figure out mistakes that never trigger an error.

The solution is to reduce complexity to make it teachable.  In an earlier talk at Dagstuhl, Ben Shapiro explained how Shriram and Joe and their collaborators did this with Pyret.  In Pyret, they explicitly disallow some things that Python allows but are way too complicated to explain.

Probably the best idea for JavaScript, following Racket’s lead, is to have language levels. We should teach students a strict subset of JavaScript, where the really complicated things are simply disallowed. The goal is to help students learn a real subset, then grow the subset. TypeScript offers an alternative model: a more sane way of doing JavaScript. For example, TypeScript’s type checking might help catch the window.bg bug. There are lots of other languages that are more reasonable and compile to JavaScript — but those avoid JavaScript entirely.
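One pragmatic approximation of that strict-subset idea, short of building real language levels, is to lint away constructs we don’t want beginners to meet. This is just a sketch using standard ESLint rules, not a curriculum; note that linting only catches syntactic choices, so it would not flag window.bg (that needs type information of the kind TypeScript provides).

    // .eslintrc.js -- a sketch of a teaching subset of JavaScript enforced by
    // linting. An approximation of the language-levels idea, not a real subset.
    module.exports = {
      parserOptions: { ecmaVersion: 2019 },
      env: { browser: true },
      rules: {
        eqeqeq: "error",                // require ===, avoiding the == coercion rules
        "no-var": "error",              // let/const only, for a simpler scoping story
        "no-with": "error",             // no dynamic-scope surprises
        "no-implicit-globals": "error", // no accidental top-level globals
        "no-undef": "error",            // flag undeclared names
      },
    };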

The very best idea would be to fix JavaScript and the DOM, but it’s probably too late for that.

This working group was useful to me for two reasons. First, I really do have to teach JavaScript and the DOM (again) in January 2020, and now I have a new sense of the challenges and my options. Second, this was a great example of where a notional machine is not the answer to a pedagogical problem.

Thanks to members of the group who reviewed an earlier draft of this summary.  They’re not responsible for where I still didn’t get the details right.

 

September 9, 2019 at 7:55 am

Holding ourselves to a higher standard: “Language-independent” just doesn’t cut it

My CACM blog post this month (see link here) is a retraction of the term “language-independent” in our work on the FCS1 and SCS1:

There is no language independence here. The FCS1 and SCS1 are multi-lingual, which is a remarkable achievement. We might also call them pseudocode-based assessments, which is how they can be multi-lingual, but since a pseudocode-based test isn’t necessarily validated across other languages, “multi-lingual” is a stronger claim than “pseudocode-based.” We do not cover all of any of those languages (Java, MATLAB, or Python), but we do cover the subset most often appearing in an introductory CS course.

They are clearly not language independent. In the great design space of programming languages, Java, MATLAB, and Python cluster together pretty closely. There are programming languages much more different from these three — I’m sure it’ll take any reader here just a few moments to generate a half-dozen candidates whose learners would score poorly on the FCS1 and SCS1, from Scratch to Haskell to Prolog.

I only vaguely remember the discussion about using the term “language independence” with Allison many years ago. I remember her asking me if we should worry about the (relatively few) classes that used languages other than Python, MATLAB, and Java. I think I told her she needed to graduate. I judged from the perspective of what was being published at the SIGCSE Symposium — covering Python, MATLAB, and Java was “language independent” enough for the paper to be seen as valuable to SIGCSE reviewers. I don’t remember the details, but I’ll accept the blame for the decision to call the FCS1 (and later the SCS1) language independent.

That was a long time ago, before the International Computing Education Research Conference (ICER) was invented. Since then, we have a computing education research community that aims to answer questions about how people learn computing — period. We’re not just about undergraduate introductory computer science classes. Even at the undergraduate level, we should study the classes (no matter how few) doing something different to see what’s powerful and interesting about them. We should explicitly be exploring unusual (even purpose-invented) languages to understand more of the interaction between programming languages and human cognition. An insightful PPIG paper from Clayton Lewis (see link here), recently circulated on Twitter (see tweet), makes great points about the complexity of measuring that interaction:

The PPIG community should be proud that cognitive dimensions analysis emerged from the work of people in its ranks, Thomas Green, Marian Petre, Alan Blackwell, and others. We should be skeptical of calls to replace its use with A-B trials or other quantitative methods that cannot cope with the complexity of the language design landscape. When results of A-B trials and similar studies are presented, we should diplomatically ask for the mechanisms that are involved to be described. Colleagues who present the results of such trials should be prepared to respond to this request, so that the generalizability of their results can be assessed.

We should not be driven by what’s in classrooms today (see previous post making that argument). We should hold ourselves to a higher standard. Our goal is to create a lasting record of exploration and research for a research community.

That’s why it’s past time for this retraction.

 

August 26, 2019 at 7:00 am

Summarizing findings about block-based programming in computing education

As readers of my blog know, I’m interested in alternative modalities and representations for programming. I’m an avid follower of David Weintrop’s work, especially the work comparing blocks and text for programming (e.g., as discussed in this blog post).

David wrote a piece for CACM summarizing some of his studies on block-based programming in computing education. It has just been published in the August issue.  Here’s the link to the piece — I recommend it.

To understand how learners make sense of the block-based modality and understand the scaffolds that novice programmers find useful, I conducted a series of studies in high-school computer science classrooms. As part of this work, I observed novices writing programs in block-based tools and interviewed them about the experience. Through these interviews and a series of surveys, a picture emerged of what the learners themselves identified as being useful about the block-based approach to programming. Students cited features discussed here such as the shape and visual layout of blocks, the ability to browse available commands, and the ease of the drag-and-drop composition interaction. They also cited the language of the blocks themselves, with one student saying “Java is not in English it’s in Java language, and the blocks are in English, it’s easier to understand.” I also surveyed students after working in both block-based and text-based programming environment and they overwhelmingly reported block-based tools as being easier. These findings show that students themselves see block-based tools as useful and shed light as to why this is the case.

August 19, 2019 at 7:00 am

Social studies teachers programming, when high schools choose to teach CS, and new models of cognition and intelligence in programming: An ICER 2019 Preview

My group will be presenting two posters at ICER this year.

  • Bahare Naimipour (Engineering Education Research PhD student at U-Michigan) will be presenting our participatory design session with social studies educators, Helping Social Studies Teachers to Design Learning Experiences Around Data – Participatory design for new teacher-centric programming languages. We had 18 history and economics teachers building data visualizations in either Vega-Lite or JavaScript with Google Charts (a sketch of the kind of starter code appears after this list). Everyone got the starter visualization running and made changes that they wanted in less than 20 minutes. Those who started in Vega-Lite also tried out the JavaScript code, but only about 1/4 of the JS groups moved to Vega-Lite successfully.
  • Miranda Parker (Human-Centered Computing PhD student at Georgia Tech) will be presenting her quantitative model explaining about half of the variance in whether Georgia high schools taught CS in 2016, A Statewide Quantitative Analysis of Computer Science: What Predicts CS in Georgia Public High School. The most important factor was whether the school taught CS the year before, suggesting that overcoming inertia is a big deal — it’s easier to sustain a CS program than start one. She may talk a little about her new qualitative work, where she’s studying four schools as case studies about their factors in choosing to teach CS, or not.
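As mentioned in the first bullet, here is a sketch of the kind of JavaScript/Google Charts starter visualization the teachers edited (my reconstruction with illustrative data, not the actual session materials; it assumes the Google Charts loader script and a chart_div element on the page):

    // Load the Google Charts library and draw a simple column chart.
    google.charts.load("current", { packages: ["corechart"] });
    google.charts.setOnLoadCallback(drawChart);

    function drawChart() {
      const data = google.visualization.arrayToDataTable([
        ["Decade", "US Population (millions)"],
        ["1900", 76],
        ["1950", 151],
        ["2000", 281],
      ]);
      const options = { title: "US Population by Decade" }; // the kind of thing teachers changed
      const chart = new google.visualization.ColumnChart(
        document.getElementById("chart_div")
      );
      chart.draw(data, options);
    }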

Barbara is a co-author on a paper, A Spaced, Interleaved Retrieval Practice Tool that is Motivating and Effective, with Iman Yeckehzaare and Paul Resnick. It’s about a spaced practice tool that 32% of the students in an introductory programming course used more than they were required to, and the number of hours of use had a measurable positive effect on the final exam grade.

All of our other papers were rejected this year, but we’re in good company — the acceptance rate was around 18%. But I do want to talk about a set of papers that will be presented by others at ICER 2019. These are papers that I heard about and then asked the authors for copies of. I’m excited about all three of them.

How Do Students Talk About Intelligence? An Investigation of Motivation, Self-efficacy, and Mindsets in Computer Science by Jamie Gorson and Eleanor O’Rourke (see released version of the paper here)

One of the persistent questions in computing education research is why growth mindset interventions are not always effective (see blog post here). We get hard-to-interpret results. I met Jamie and Nell at the Northwestern Symposium on Computer Science and the Learning Sciences in April (amazing event, see here for more details). Nell worked with Carol Dweck during her graduate studies.

Jamie and Nell found mixed mindsets among the CS students they studied. Some students had growth mindsets about intelligence, but their talk about programming practices showed more fixed-mindset characteristics. Other students self-identified as holding some of both growth and fixed mindset beliefs.

In particular, some students talked about intelligence in CS in ways that were unproductive when it came to the practice of programming. For example, some students talked about the best programmers as being able to write all of their code in one sitting, or as never getting any errors. A more growth-mindset approach to programming would be evidenced by talking about building programs in pieces, expecting errors, and improving through effort over time.

This is a really helpful finding. It gives us new hypotheses to explore about why growth mindset interventions haven’t been as successful in CS as in other disciplines. Few disciplines have as sharp a distinction between their knowledge and their practice as we do in CS. It’s no wonder that we see these mixed mindsets.

Toward Context-Dependent Models of Productive Knowledge in Programming Cognition, by Brian A. Danielak

I’ve known Brian since he was a PhD student, and have been hoping that he’d start to publish some of his dissertation work. I got to read one chapter of it, and found it amazingly insightful. Brian explained how what we might see as a “random walk” of syntax was actually purposeful and rational behavior. I was excited to hear about this paper, and I enjoyed reading it.

It’s such an unusual paper for ICER! It’s empirical, but has no methods section. A big part of it is connecting to prior literature, but it’s not about a formal literature review.

Brian is making an argument about how we characterize knowledge and student success in CS. He points out that we often talk about students being wrong and having misconceptions, which is less productive than figuring out what they understand and where their alternative conceptions work or fail. I see his work as following on from the work of Rich et al. (mentioned in this blog post) on CS learning trajectories. There are so many things to learn in CS, and sometimes just getting started on the trajectory is a big step.

Spatial Encoding Strategy Theory: The Relationship between Spatial Skill and STEM Achievement by Lauren Margulieux.

Lauren is doing some impressive theoretical work here. She considers the work exploring the relationship between spatial reasoning and CS learning and performance, then constructs a theory explaining the observed results. Since it’s Lauren, the theory is thorough and covers the known results in this space well. I wrote to her that I didn’t think the theory yet explains things that we expect are related to spatial reasoning but for which we don’t yet have empirical evidence. For example, when programmers simulate a program in their mind, their mental models may have a spatial component to them, but I don’t know of empirical work that explores that dimension of CS performance. But again, since it’s Lauren, I wouldn’t be surprised if her presentation addresses this point, beyond what was in the paper. (Also, read Lauren’s own summary of the paper here.)

I am looking forward to the discussion of these papers at ICER!

August 12, 2019 at 7:00 am
