Posts tagged ‘computing education research’

How computing education researchers and learning scientists might better collaborate

Lauren Margulieux has started a blog, which is pretty terrific. I wrote about Lauren's doctoral studies here, and I last blogged about her work (a paper comparing learning in programming, statistics, and chemistry) here.

In her blog, Lauren is explaining in lay terms papers from learning sciences, educational psychology, and educational technology.  She’s an interdisciplinary researcher, and she’s blogging to help others connect across disciplines.

Her most recent blog post is about an issue I've been thinking about a lot lately. I wrote a blog post in the summer about the challenge of bridging the modes of science and truth-seeking in (computing) education vs. computer science. Lauren summarizes a paper by Peffer and Renken on concrete strategies for collaboration between discipline-based education researchers (like math education researchers, science education researchers, or computing education researchers) and learning scientists. Quoting part of it below:

Challenges in Interdisciplinary Research: Collaboration within a field can be difficult as people attempt to reconcile different ideas towards one goal. Collaboration between fields, each with its own traditions in theory and methodology, can seem like a minefield. Below are some common challenges that DBERers and learning scientists face.

  1. Differences in hard and soft sciences – researchers in the hard sciences can often feel frustrated by the lack of predictability in human-subjects research, and researchers in social sciences can become frustrated when those in the hard sciences have unrealistic expectations or view research in the soft sciences as non-scientific.

  2. Differences in theories and frameworks – What constitutes a theory or framework can be different in different domains, confusing what is often a fundamental building block of research.

  3. Differences in research methodologies – those unfamiliar with human-subjects research can find its methodologies complex, varied, and full of uncertainty, and those who have endured countless hours of training in these methodologies can find it difficult to describe or justify methodological decisions in a concise way.

See more at https://laurenmarg.com/2018/07/29/peffer-renken-2016-dber-and-learning-sciences-collaboration-strategies/

August 12, 2018 at 11:00 pm 1 comment

Adaptive Parsons problems, and the role of SES and Gesture in learning computing: ICER 2018 Preview


Next week is the 2018 International Computing Education Research Conference in Espoo, Finland. The proceedings are (as of this writing) available here: https://dl.acm.org/citation.cfm?id=3230977. Our group has three of the 28 papers accepted this year.

“Evaluating the efficiency and effectiveness of adaptive Parsons problems” by Barbara Ericson, Jim Foley, and Jochen (“Jeff”) Rick

These are the final studies from Barb Ericson's dissertation (I blogged about her defense here). In her experiment, she compared four conditions: students learning through writing code, through fixing code, through solving Parsons problems, and through solving her new adaptive Parsons problems. She had a control group this time (different from her Koli Calling paper) that did turtle graphics between the pre-test and post-test, so that she could be sure there wasn't just a testing effect from the pre-test being followed by a post-test. The bottom line was basically what she predicted: learning did occur, with no significant difference between treatment groups, but the Parsons problems groups took less time. Our ebooks now include some of her adaptive Parsons problems, so she can compare performance across many students on adaptive and non-adaptive forms of the same problem. She finds that students solve more of the problems, and with fewer attempts, in the adaptive form. So, adaptive Parsons problems lead to the same amount of learning, in less time, with fewer failures. (Failures matter, since self-efficacy is a big deal in computer science education.)
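For readers who haven't seen one: a Parsons problem gives students all the lines of a correct program in scrambled order, and the task is to arrange (and indent) them into a working solution; the adaptive versions adjust the difficulty, for example by simplifying the problem after failed attempts. Below is a minimal, hypothetical Python sketch of what one problem might contain (my example, not one from Barb's ebooks):

    # A minimal, hypothetical Parsons problem: students drag these
    # scrambled lines into a correctly ordered and indented function.
    scrambled_lines = [
        "        total = total + n",
        "def sum_list(numbers):",
        "    return total",
        "    for n in numbers:",
        "    total = 0",
    ]

    # The intended solution, assembled in the right order and indentation:
    def sum_list(numbers):
        total = 0
        for n in numbers:
            total = total + n
        return total

    print(sum_list([1, 2, 3]))  # prints 6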

“Socioeconomic status and Computer science achievement: Spatial ability as a mediating variable in a novel model of understanding” by Miranda Parker, Amber Solomon, Brianna Pritchett, David Illingworth, Lauren Margulieux, and Mark Guzdial

(Link to last version I reviewed.)

This study is a response to the paper Steve Cooper presented at ICER 2015 (see blog post here), where they found that spatial reasoning training erased performance differences between higher and lower socioeconomic status (SES) students, while the comparison class had higher-SES students performing better than lower-SES students. Miranda and Amber wanted to test this relationship at a larger scale.

Why should wealthier students do better in CS? The most common reason I’ve heard is that wealthier students have more opportunities to study CS — they have greater access. Sometimes that’s called preparatory privilege.

Miranda and Amber and their team wanted to test whether access is really the right intermediate variable. They gave students at two different universities four tests:

  • Part of Miranda’s SCS1 to measure performance in CS.
  • A standardized test of SES.
  • A test of spatial reasoning.
  • A survey about the amount of access they had to CS education, e.g., formal classes, code clubs, summer camps, etc.

David and Lauren did the factor analysis and structural equation modeling to compare two hypotheses: Does higher SES lead to greater access which leads to greater success in CS, or does higher SES lead to higher spatial reasoning which leads to greater success in CS? Neither hypothesis accounted for a significant amount of the differences in CS performance, but the spatial reasoning model did better than the access model.
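To make the two competing path models concrete, here is a much-simplified sketch of how one might compare them with ordinary regressions. This is not the analysis from the paper (they used factor analysis and structural equation modeling), and the data file and variable names here are hypothetical:

    # Simplified sketch: compare two mediation stories with OLS regressions.
    # NOT the paper's analysis (they used factor analysis + SEM); the file
    # and column names (ses, access, spatial, cs_score) are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_data.csv")  # hypothetical combined survey/test data

    # Hypothesis 1: SES -> access to CS -> CS performance
    access_on_ses = smf.ols("access ~ ses", data=df).fit()
    cs_via_access = smf.ols("cs_score ~ access + ses", data=df).fit()

    # Hypothesis 2: SES -> spatial reasoning -> CS performance
    spatial_on_ses = smf.ols("spatial ~ ses", data=df).fit()
    cs_via_spatial = smf.ols("cs_score ~ spatial + ses", data=df).fit()

    # Crude comparison: how much variance in CS performance each mediator explains
    print("Access model R^2: ", cs_via_access.rsquared)
    print("Spatial model R^2:", cs_via_spatial.rsquared)

The intuition is the same as in the full structural models: does access or spatial reasoning do a better job of explaining the relationship between SES and CS performance?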

There are some significant limitations of this study. The biggest is that they gathered data at universities. A lot of SES variance just disappears when you look at college students — they tend to be wealthier than average.

Still, the result is important for challenging the prevailing assumption about why wealthier kids do better in CS. What's more, spatial reasoning is an interesting variable because it can be taught rather inexpensively. It's expensive to prepare CS teachers and get them into all schools. Steve showed that we can teach spatial reasoning within an existing CS class and reduce SES differences.

“Applying a Gesture Taxonomy to Introductory Computing Concepts” by Amber Solomon, Betsy DiSalvo, Mark Guzdial, and Ben Shapiro

(Link to last version I saw.)

We were a bit surprised (quite pleasantly!) that this paper got into ICER. I love the paper, but it’s different from most ICER papers.

Amber is interested in the role that gestures play in teaching CS. She started this paper from a taxonomy of gestures seen in other STEM classes. She observed a CS classroom and used her observations to provide concrete examples of the gestures seen in other kinds of classes. This isn’t a report of empirical findings. This is a report of using a lens borrowed from another field to look at CS learning and teaching in a new way.

My favorite part of this paper is when Amber points out which parts of CS gestures don't really fit in the taxonomy. It's one thing to point to lines of code – that's relatively concrete. It's another thing to "point" at referenced data, e.g., when explaining a sort, you gesture at the two elements you're comparing or swapping. What exactly/concretely are we pointing at? Arrays are neither horizontal nor vertical — that distinction doesn't really exist in memory. Arrays have no physical representation, but we act (usually) as if they're laid out horizontally in front of us. What assumptions are we making in order to use gestures in our teaching? And what if students don't share those assumptions?
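To make the sorting example concrete, here is a small, hypothetical Python snippet of the kind of code a teacher might be explaining. The gestures in question point at the two elements being compared and then at the swap, and both assume the list is laid out left to right in front of the class:

    # Hypothetical teaching example: one pass of bubble sort.
    # In class, an instructor would typically "point" at data[i] and data[i + 1]
    # while explaining the comparison and the swap, even though the list has no
    # actual left-to-right layout in memory.
    def bubble_pass(data):
        for i in range(len(data) - 1):
            if data[i] > data[i + 1]:  # gesture: the two elements being compared
                data[i], data[i + 1] = data[i + 1], data[i]  # gesture: the swap
        return data

    print(bubble_pass([5, 2, 4, 1]))  # [2, 4, 1, 5] after a single pass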

August 10, 2018 at 7:00 am Leave a comment

CS educators listen to authority more than evidence: Time to move on

My CACM Blog post for July starts from Stuart Reges’ inflammatory blog post in June “Why Women Don’t Code.”  I use his post and other writing as a foil to critique how we make arguments in computing education.  They tend to be arguments from authority, not from evidence.

Why is that? Why do CS educators use evidence and research less than (as quoted in the CACM post) physics educators? Is it because of the youth of the field, so that when we grow up we'll think more about research on how to teach well? Is it because of the economics of the field? Getting a CS background is so lucrative that students are desperate to succeed in the classes; we don't have to teach well, because student motivation will make up for what our teaching lacks. Or is it something else — is there something in the nature of CS that leads to opposition to using evidence and research when making educational decisions?

In June, Stuart Reges, principal lecturer in Computer Science and Engineering at the University of Washington, published a blog post, Why Women Don't Code, that led to several articles and blog posts in response (e.g., in the Seattle Times and GeekWire). Reges argues that women are simply never going to enter computing in significant numbers, and that 20% is about all we're ever going to get.

Our community must face the difficult truth that we aren’t likely to make further progress in attracting women to computer science. Women can code, but often they don’t want to. We will never reach gender parity. You can shame and fire all of the Damores you find, but that won’t change the underlying reality.

It’s time for everyone to be honest, and my honest view is that having 20 percent women in tech is probably the best we are likely to achieve. Accepting that idea doesn’t mean that women should feel unwelcome. Recognizing that women will be in the minority makes me even more appreciative of the women who choose to join us.

Hank Levy, Director of the U-W CSE School, wrote a great statement in response (see here). Levy disagrees with Reges's conclusions, but supports Reges's right to make his argument. Levy puts the current gender ratio in computer science in context by comparing it to the ratios in other disciplines.

I was most struck by the 20% claim. That's easily proven wrong. There are many CS educational programs in the US that are more than 20% female (like Computational Media at Georgia Tech). There are countries where CS is more than 50% female. How can Reges claim that 20% is the best we can possibly do?

Here’s something important about Stuart Reges that people outside of CS education might not know — he’s a rockstar. He packs the house when he speaks at education conferences. He publishes regularly in the field. He has written a popular book on how to teach Java in introductory computer science (see Building Java Programs). Students love him, and teachers want to be like him. When Stuart Reges speaks, CS educators listen.

In this post, I want to step back and consider how Reges is making his argument, because it says something about how we make decisions in computing education. I am going to characterize the argument style in computing education as argument from authority, which Wikipedia describes as an argument in which "a claimed authority's support is used as evidence for an argument's conclusion." We need to recognize the form before we can move beyond it.

Click here to read the rest of the CACM Blog Post.

August 6, 2018 at 7:00 am 8 comments

We might want naive and delusional PhD students

We’re in the midst of cleaning out 25 years of accumulated stuff in our house in order to sell this house, buy a new house in Ann Arbor, and move to the University of Michigan by September 1.

As I was cleaning, I found the below — my original statement of purpose that I submitted to the University of Michigan in 1988 to start my doctorate.

I shared it with some friends, ruefully. It felt silly, as well as grammatically flawed. I really did think that I was going to get a faculty position in "Computer Science and Education" when I graduated in the early 1990's. I was naive, maybe even delusional. I had no idea what academic CS was like when I applied. The reality is far different from what I imagined. At the Home4CS event just this last April, I mentioned that it would be great if we had CS Education faculty slots in Schools of Education today. As Diane Levitt reported on Twitter, the audience roared with laughter. How crazy was I to think that we'd have some in the 1990's?

But now, some positions like that do exist.  There are faculty who have been hired at US higher-education institutions to focus on CS Ed.  My new job at the University of Michigan is a joint position between CS and their Engineering Education Research program.  It took 25 years, but yeah, I’m going to have the kind of job for which I earned my PhD.

Some friends encouraged me to share this statement. Maybe it's a good thing to have naive new PhD students. Maybe that's what we want in PhD students: to think long term, i.e., to have bought into a goal, a set of research questions, or a vision — and to be willing to work at it for decades. Eventually, if the student is really lucky and others are working on similar visions at the same time, the vision doesn't seem quite so naive, not quite so delusional.

I’ll be taking some time off from the blog while making the move to Michigan. I may post some guest contributions over the next few weeks, but for now, I’m putting the blog on hiatus.

June 29, 2018 at 7:00 am 5 comments

Visiting NTNU in Trondheim Norway June 3-23

Barbara and I are just back from a three week trip to NTNU in Trondheim, Norway. Katie Cunningham came with us (here’s a blog post about some of her work). Three weeks is enough time to come up with a dozen ideas for blog posts, but I don’t have the cycles for that. So let me just give you the high-level view, with pictures and links to learn more.

We went at the beginning of June because Barb and I (and the University of Michigan) are part of the IPIT network (International Partnerships for Excellent Education and Research in Information Technology), which had its kick-off meeting June 3-5. The partnership is about software engineering and computing education research, with a focus on student and faculty exchange and meetings at each other's institutions: NTNU, U. Michigan, Tsinghua University, and Nanjing University. I learned a lot about software engineering that I didn't know before, especially about DevOps.

If you ever get the chance to go to a meeting organized by Letizia Jaccheri of NTNU, GO! She was the organizer for IPIT, co-chair of IDC 2018, and our overall host for our three weeks there. She has a wonderful sense for blending productivity with fun. During the IDC 2018 poster session, she brought in high school students dressed as storybook characters, just to wander around and "bring in a bit of whimsy." For a bigger example, she wanted IPIT to connect with the NTNU campus at Ålesund, which just happens to be near the Geiranger fjord, one of the most beautiful in Norway. So, she flew the whole meeting to Ålesund from Trondheim! We took a large, cruise-ship-like boat with meeting rooms down the fjord. We got in some 5-6 hours of meetings, while also seeing amazing waterfalls and other views, and then visited the Ålesund campus the next day before flying home. We got work done and WOW!

For the next week and a half, we got to know the computing education research folks at NTNU. We were joined at the end of the first week by Elisa Rubegni from the University of Lincoln, and Roberto Martinez-Maldonado came by a couple days later. Barb, Elisa, and I held a workshop on the first Monday after IPIT. A couple days later, we had a half-day meeting with Michalis Giannakos’s group and Roberto, then Elisa led us all in a half-day design exercise (pictured below — Elisa, Sofia, Javi, and Katie). In between, we had individual meetings. I think I met with every one of the PhD students there working in computing education research. (And, in our non-meeting time, Barb and I were writing NSF proposals!)

Michalis’s group is doing some fascinating work. Let me tell you about some of the projects that most intrigued me.

  • Sofia (with Kshitij and Ilias) is the lead on a project where they track what kids using Scratch are looking at, both on and off screen. It's part of this cool project where kids program these beautiful artist-created robots with Scratch. It's a pretty crazy-looking experimental setup, with fiducial markers on notebooks and robots and screens.
  • Kshitij is trying to measure EEG and gaze in order to determine cognitive load in a user interface. Almost all cognitive load measures are based on self-report (including ours). They’re trying to measure cognitive load physiologically, and correlate it with self-report.
  • Katerina and Kshitij are using eye-tracking to measure how undergrads use tools like Eclipse. What I found most interesting was what they did not observe. I noticed that they had no data at all on use of the debugger. They explained that of 40 students, only five even looked at the debugger. Nobody used data or control flow visualizations at all. I'm fascinated by this — what does it take to get students to actually look at the debuggers and visualizers that were designed to help them learn?
  • Roberto is doing this amazing work with learning analytics in physical spaces, where nurses are working on robot patients. Totally serious — they can gather all kinds of data about where people are standing, how they interact, and when they interact. For tasks like nursing, this is super important for understanding what students are learning.

Then came FabLearn, with an amazing keynote by Leah Buechley on art, craft, and computation. I have a long list of things to look up after her talk, including Desmos, computer-controlled cutting machines (which I had never heard of before) that are way cheaper than 3-D printers but still allow you to do computational craft, and http://blog.recursiveprocess.com/, which is all about learning coding and mathematics. She made an argument that I find fascinating — that art is what helps diverse students reflect their identity and culture in their school, and that's why students who get art classes (controlling for SES) are more likely to succeed in school and go on to post-secondary schooling. Can computing make it easier to bring art back into school? Can computing then play a role in engaging children with school again?

The next reason we were at NTNU was to attend the EXCITED Centre advisory board meeting. Barb and I were there for the launch of EXCITED in January 2017. It's a very ambitious project, starting from students making informed decisions to go into CS/IT, helping students develop identities in CS, learning through construction, increasing diversity in CS, and moving into careers. We got to hang out with Arnold Pears, Mats Daniels, and Aletta Nylén of UpCERG (Uppsala Computing Education Research Group), the world's largest CER group.

Finally, for the last four days, we attended the Interaction, Design and Children Conference, IDC 2018. I wrote my Blog@CACM post for this month about my experiences there. I saw a lot there that’s relevant to people who read this blog. My favorite paper there tested the theory of concreteness fading on elementary school students learning computing concepts. Here’s a picture of a slide (not in the paper) that summarizes the groups in the experiment.

I'll end with my favorite moment in IDC 2018, not in the Blog@CACM post. We met Letizia's post-doc, Javier "Javi" Gomez at the end of our first week in Trondheim. Summer weather in Trondheim is pretty darn close to winter in Atlanta. One day, we woke up to 44F and rain. But we lucked out — the weekends were beautiful. On our first Saturday, Letizia invited us all to a festival near her home, and we met Javi and Elisa. That evening (but still bright sunlight), Javi, Elisa, Barb, and I took a wonderful kayaking trip down the Nidelva river. So it was a special treat to be at IDC 2018 to see Javi get TWO awards for his contributions, one for his demo and an honorable mention for his note. The note was co-authored by Letizia, and was her first paper award (as she talks about in the lovely linked blog post). It was wonderful to be able to celebrate the success of our new friends.

On the way back, Barb and I stopped in London to spend a couple days with Alan Kay and his wife, Bonnie MacBird. If I could come up with a dozen blog post ideas from 3 weeks, it’s probably like two dozen per day with Alan and Bonnie, and we had two days with them. Visiting a science museum with an exhibit on early computers (including an Alto!) is absolutely amazing when you’re with Alan. But those blog posts will have to wait until after my blog hiatus.

June 28, 2018 at 7:00 am 2 comments

We can build new programming languages that people will teach, learn, and use: Scratch 3.0 in August

When I come out with blog posts saying that we need new programming languages (like this one), I regularly get a bunch of skepticism.  People will only use industry-approved languages, says one argument.  We need to teach the languages that exist, says another.

Then I just reply, "Scratch." It's real programming, it's popular, and it's taught around the world. We ought to study how Scratch succeeded. One key insight: don't beat your head against the traditional CS1 teachers. There are a lot more people to teach, and not everyone has to become a software developer.

A new version of Scratch is coming this August!

Source: 3 Things To Know About Scratch 3.0 – The Scratch Team Blog – Medium

June 25, 2018 at 7:00 am 20 comments

It Matters a Lot Who Teaches Introductory Courses if We Want Students to Continue

Thanks to Gary Stager who sent this link to me. The results mesh with Pat Alexander’s Model of Domain Learning. A true novice to a field is not going to pursue studies because of interest in the field — a novice doesn’t know the field. The novice is going to pursue studies because of social pressures, e.g., it’s a requirement for a degree or a job, it’s expected by family or community, or the teacher is motivating.  As the novice becomes an intermediate, interest in the domain can drive further study.  These studies suggest that persistence is more likely to happen if the teacher is a committed, full-time teacher.

The first professor whom students encounter in a discipline, evidence suggests, plays a big role in whether they continue in it.

On many campuses, teaching introductory courses typically falls to less-experienced instructors. Sometimes the task is assigned to instructors whose very connection to the college is tenuous. A growing body of evidence suggests that this tension could have negative consequences for students.

Two papers presented at the American Educational Research Association’s annual meeting in New York on Sunday support this idea.

The first finds that community-college students who take a remedial or introductory course with an adjunct instructor are less likely to take the next course in the sequence.

The second finds negative associations between the proportion of a four-year college’s faculty members who are part-time or off the tenure track and outcomes for STEM majors.

Source: It Matters a Lot Who Teaches Introductory Courses. Here’s Why.

June 22, 2018 at 7:00 am 7 comments
