Posts tagged ‘computing education research’

What does it mean for Computer Science to be harder to learn than other STEM subjects?

I made an argument in my Blog@CACM post for this month that "Learning Computer Science is Different than Learning Other STEM Disciplines," and on Twitter I explicitly added, "It's harder."

In my Blog@CACM post, I thought it was a no-brainer that CS is harder:

  1. Our infrastructure for teaching CS is younger, smaller, and weaker (CS is so new, and we don't have the decades of experience to figure out how to do it well yet.)

  2. We don't realize how hard learning to program is (The Rainfall problem seems easy to CS teachers, but study after study shows that students find it hard. That mismatch means CS teachers can't yet estimate what's hard for students, so our classes are probably harder than we mean them to be. A sketch of the Rainfall problem follows this list.)

  3. CS is so valuable that it changes the affective components of learning (Classes that are stuffed full of both CS majors and non-majors means that issues of self-efficacy, motivation, and belonging are much bigger in CS than in other STEM disciplines.)
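
For readers who haven't met it, the Rainfall problem asks students to average the non-negative values in a list of readings, stopping at a sentinel value. A minimal Python sketch of one common phrasing (my wording; the exact spec varies across the studies that have used it) looks like this:

```python
def rainfall(readings):
    """Average the non-negative readings, stopping at the 99999 sentinel.

    One common phrasing of the classic Rainfall problem; the exact
    spec varies across the studies that have used it.
    """
    total = 0
    count = 0
    for value in readings:
        if value == 99999:   # sentinel: stop processing input
            break
        if value >= 0:       # skip negative (invalid) readings
            total += value
            count += 1
    return total / count if count > 0 else 0


print(rainfall([12, -4, 3, 99999, 7]))  # prints 7.5: averages 12 and 3 only
```

Despite its apparent simplicity, studies across decades and institutions have found that many students struggle to write it correctly.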

The pushback was really interesting. People pointed out that they took CS classes and math classes, or CS and physics, and CS seemed easy in comparison. They may be right, but that's self-report based on introspection by people who succeeded at both classes. My point is that students are probably flunking out of CS (or giving up, or opting out) at much higher rates than in any other STEM subject, for the reasons I give. We're really using two different measures of "harder": harder to succeed at, or harder in retrospect once you've succeeded.

I only have a qualitative argument for “It’s harder.” I’m not sure how one would even evaluate the point empirically.  Any suggestions?  How could we measure when one subject is harder than another?

Which is harder (CS vs. math, or CS vs. physics) is not the important question to answer. A much more important and supportable claim is that CS "is harder" than it needs to be. We have a lot of extraneous complexity and cognitive load in learning CS.

January 19, 2018 at 7:00 am

ICER 2018 Call for Participation (I’m co-chairing Works in Progress)

Do submit to ICER 2018 in Finland.  I particularly encourage you to join the Works in Progress workshop, for which I’ll be the junior co-chair as I learn the ropes from Colleen Lewis. I was a participant in the Works in Progress workshop in Glasgow and found it fun and useful.

ICER’18 – Call For Participation

The fourteenth annual ACM International Computing Education Research (ICER) Conference aims to gather high-quality contributions to the computing education research discipline. We invite submissions across a variety of categories for research investigating how people of all ages come to understand computational processes and devices, and empirical evaluation of approaches to improve that understanding in formal and informal learning environments.


Research areas of particular interest include:
– discipline-based education research (DBER) in computer science (CS), information sciences (IS), and related disciplines
– design-based research, learner-centered design, and evaluation of educational technology supporting computing knowledge or skills development
– pedagogical environments fostering computational thinking
– learning sciences work in the computing content domain
– psychology of programming
– learning analytics and educational data mining in CS/IS content areas
– learnability/usability of programming languages
– informal learning experiences related to programming and software development (all ages), ranging from after-school programs for children, to end-user development communities, to workplace training of computing professionals
– measurement instrument development and validation (e.g., concept inventories, attitude scales) for use in computing disciplines
– research on CS/computing teacher thinking and professional development models at all levels
– rigorous replication of empirical work to compare with or extend previous empirical research results
– systematic literature reviews on topics related to computer science education


In addition to standard research paper contributions, we continue our longstanding commitment to fostering discussion and exploring new research areas by offering several ways to engage. These include a doctoral consortium for graduate students just prior to the conference, a work-in-progress workshop for researchers following the conference, and poster and lightning talks. This is in addition to the format of conference sessions, where all research paper presentations include time for discussion among the attendees followed by feedback to the paper presenters.

Submission Categories

ICER provides multiple options for participation, with various levels of discussion and interaction between the presenter and audience. These sessions also support work at various levels, ranging from formative work to polished, complete research results.


Research Papers
Papers are limited to 8 pages (excluding references), are double-blind peer reviewed, and are published in the ACM Digital Library as part of the conference proceedings. Accepted papers are allotted time for presentation and discussion at the conference.


Doctoral Consortium
A 2-page extended abstract submission is required and is published in the ACM Digital Library as part of the conference proceedings. Students will present their work to distinguished faculty mentors during an all-day workshop, and during the conference in a dedicated poster session.


Lightning Talks and Posters
An abstract (250 words) is required; it is made available on the conference website but not published in the proceedings. Accepted lightning talks are given a 3-minute slot for rapid presentation at the conference, followed by a discussion period for all attendees. Posters may either accompany a lightning talk or be proposed separately using the same abstract submission process.


Work in Progress Workshop
This one-day workshop is a venue for sustained engagement with, and feedback on, early work in computing education. A white paper submission is required but is not included in the proceedings.


Co-located Workshops
Proposals for pre/post conference workshops of interest to the ICER community (i.e., those that aim to advance computer science education research) are welcomed and encouraged. ICER local arrangements personnel will be available to assist with workshop logistics where possible. If interested, contact the conference chairs for more details by April 10th, 2018: Lauri.Malmi@aalto.fi or Ari.Korhonen@aalto.fi.


For more information about preparation and submission, please visit the page corresponding to the submission type of interest.

Important Deadlines and Dates


Research Papers

30 March, 2018 – Abstract submission (250 words, mandatory)
6 April, 2018 – Full paper submission
1 June, 2018 – Notification of acceptance
15 June, 2018 – Final camera-ready deadline

Other Submission Types

1 May, 2018 – Doctoral consortium submissions
8 June, 2018 – Lightning talk and poster proposals
8 June, 2018 – Work in progress workshop applications

Conference Schedule

Doctoral Consortium, Sunday, August 12, 2018
ICER Conference, Monday, August 13 – Wednesday August 15, 2018
Work in Progress Workshop, Wednesday evening, August 15 – Thursday, August 16, 2018
For more details, see the conference website:
 http://www.icer-conference.org

Conference Co-Chairs
Lauri Malmi, Aalto University, Finland (Lauri.Malmi@aalto.fi)
Ari Korhonen, Aalto University, Finland (Ari.Korhonen@aalto.fi)
Robert McCartney, University of Connecticut, USA (robert.mccartney@uconn.edu)
Andrew Petersen, University of Toronto Mississauga, Canada (andrew.petersen@utoronto.ca)


AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date will be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

January 15, 2018 at 7:30 am

Parsons Problems have same Learning Gains as Writing or Fixing code, in less time: Koli Calling 2017 Preview

On Saturday, Barbara Ericson will be presenting her paper (with Lauren Margulieux and Jeff Rick), "Solving Parsons Problems Versus Fixing and Writing Code," at Koli Calling.

The basic design of her experiment is pretty simple. Everybody gets a pretest where they answer multiple-choice questions, write some code, fix some code, and solve some Parsons problems. (I've written about Parsons Problems here before.)

Then there are three instructional treatments with three different kinds of problem-solving practice:

  • One group gets Parsons Problems with distractors in them: blocks that should not be dragged into the solution.
  • One group gets the same code to fix: the same code as in the Parsons Problems, but with all the distractors included as bugs. They have to fix the broken distractor code to arrive at the same code as the correct blocks in the Parsons problem. (A sketch of how correct blocks pair with distractors follows this list.)
  • One group gets to write the code to solve the same problem.
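
To make the three treatments concrete, here is a small made-up problem of my own (not one from the paper) showing how correct blocks pair with distractors. A Parsons solver drags the correct blocks into order and leaves the distractors out; the fix-code group starts from a program where the distractor versions are the bugs; the write-code group starts from a blank page.

```python
# Hypothetical problem: sum the even numbers in a list.
#
# Correct blocks (what the Parsons solver assembles):
#     total = 0
#     for n in numbers:
#         if n % 2 == 0:
#             total += n
#
# Paired distractors (left out by the Parsons solver, but present as
# the bugs in the fix-code condition):
#     total = 1              # wrong initialization
#     if n % 2 == 1:         # tests for odd, not even
#
# The fix-code group starts from this buggy program:
numbers = [1, 2, 3, 4]
total = 1                  # bug: should be total = 0
for n in numbers:
    if n % 2 == 1:         # bug: should be n % 2 == 0
        total += n
print(total)               # buggy program prints 5; the fixed one prints 6
```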

Then they take an isomorphic (same basic problems with context and constants changed) post-test, go away, and come back one week later for a retention test (which is isomorphic to both the pretest and the first posttest: multiple choice questions, Parsons, fix code, write code).  So we have students who study with Parsons Problems getting tested by writing and fixing code.

Here’s the bottom line from their abstract: “We found that solving two-dimensional Parsons problems with distractors took significantly less time than fixing code with errors or than writing the equivalent code. Additionally, there was no statistically significant difference in the learning performance, or in student retention of the knowledge one week later.”

That's it. It's simple but profound. Below is the timing table from the paper. The Parsons Problems took effort, but always less time: sometimes they took only half the time of fixing or writing code, and other times only a few percentage points less. But it was always less.

One takeaway is: if Parsons problems lead to the same learning in less time, why wouldn't every teacher use more of them? A second takeaway, one that we've been thinking a lot about, is: can we provide more Parsons problems so that, in the same amount of time students previously spent writing code, they actually learn more? Efficiency matters. As Elizabeth Patitsas's work suggests, more efficient learning may mean less belief in the Geek Gene among CS teachers.

[Timing table from the paper (p. 8): solving Parsons problems took less time than fixing or writing code in every comparison.]

November 17, 2017 at 7:00 am

Open Research Questions in Computing Education, 2017 Edition

When I last taught the Computing Education Research class in 2015, we generated a list of open research questions (see post here). We have even more students in the class this time, so our list of questions is even longer. We tried to cluster them, so similar questions should be near each other.

Open Research Questions

What areas/findings of CS education research transfer to online learning? What doesn’t work the same?

Would more students pursue CS if it was incorporated into other introductory classes (different domains)?

Would more collaboration in two CS classes help reduce the defensive climate?

Do certain spoken languages allow for more effective learning of computing? If so, which ones?

Why don’t girls/minorities enter CS classes even if offered at their K12/undergrad school?

In underrepresented communities, is CS education a priority? If not, why not?

Does learning computing earlier quicken abstract reasoning?

How can you tell if a middle-schooler is learning computational thinking? What is computational thinking, operationally?

How can we get the attrition rate in CS education to decrease? Do we offer fewer jobs in industry? Force more people to teach CS? How do we retain CS teachers?

Would teaching testing strategies from CS1 increase code writing skills?

Would undergrads be better programmers if they used weakly-typed languages before using strongly-typed languages?

Would students be better programmers if they learned ML or R first? Or if they learned to diagram programs first?

Can short informal CS Ed interventions (e.g., in museums, public spaces, etc.) have any effect on CS learning and/or self-efficacy and/or attitudes towards computing?

How can we teach undergraduate students to better understand documentation? Should we explore creating language documentation specific to intro classes?

How does learning functional vs. procedural programming first affect the development of computational literacy?

How can we increase diversity in online CS education?

What kind of Community of Learners gravitates towards online education?

How can we make informal computational learning accessible to a wide audience?

Is it a positive or negative thing to have different forms of education for different communities within CS?

Would a computing course focused on creative activities have better recruitment/retention of diverse students?

Does physical computing engage students in a different way than traditional programming?

How do computational scientists think about code differently than computer scientists?

Can we teach computing to an elderly community of learners?

Would more diversity in Maker and Tinkerer spaces increase the diversity in CS?

November 3, 2017 at 6:00 am

Learning Programming at Scale: Philip Guo’s research

I love these kinds of blog posts.  Philip Guo summarizes the last three years of his research in the post linked below.  I love it because it’s so important and interesting (especially for students trying to understand a field) to get a broad explanation of how a set of papers relate and what they mean.  Blog posts may be our best medium for presenting this kind of overview — books take too long (e.g., I did a book to do an overview of 10-15 years of work, but it may not be worth the effort for a shorter time frame), and few conferences or journals will publish this kind of introspection.

My research over the past three years centers on a term that I coined in 2015 called learning programming at scale. It spans the academic fields of human-computer interaction, online learning, and computing education.

Decades of prior research have worked to improve how computer programming is taught in traditional K-12 and university classrooms, but the vast majority of people around the world—children in low-income areas, working adults with full-time jobs, the fast-growing population of older adults, and millions in developing countries—do not have access to high-quality classroom learning environments. Thus, the central question that drives my research is: How can we better understand the millions of people from diverse backgrounds who are now learning programming online and then design scalable software to support their learning goals? To address this question, I study learners using both quantitative and qualitative research methods and also build new kinds of interactive learning systems.

Source: Learning Programming at Scale | blog@CACM | Communications of the ACM

September 11, 2017 at 7:00 am

The Role of Emotion in Computing Education, and Computing Education in Primary School: ICER 2017 Recap

I wrote my Blog@CACM post in August about the two ICER 2017 paper awards:

  • Danielsiek et al.'s development of a new test of student self-efficacy in algorithms classes;
  • Rich et al.’s trajectories of K-5 CS learning, which constitute an important new set of theories about how young students learn computing.

Rich et al.'s paper is particularly significant to me because it has me re-thinking my beliefs about elementary school computer science. I have expressed significant doubt about teaching computer science in the early primary grades: it's expensive, there are even more teachers to prepare than in secondary schools, and it's not clear that it does any long-term good. If a third grader learns something about Scratch, will they have learned something that they can use later in high school? Katie Rich presented not just trajectories but Big Ideas. For example, the Big Ideas for sequential programming include precision and ordering. It's certainly plausible that a third grader who learns that precision and ordering matter in programs might still remember that years later. I can believe that Big Ideas might transfer (at least within computing) over years.
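
To make the "ordering" Big Idea concrete, here is a toy example of my own (not from Rich et al.'s paper): the same statements in a different order produce a different result, which is just the kind of insight that could plausibly stick from third grade to high school.

```python
# The same two operations, applied in different orders, give different answers.
balance = 100
balance = balance + 50   # deposit first -> 150
balance = balance * 2    # then double   -> 300
print(balance)           # prints 300

balance = 100
balance = balance * 2    # double first  -> 200
balance = balance + 50   # then deposit  -> 250
print(balance)           # prints 250
```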

I was struck by a recurring theme of emotion in the papers at ICER 2017. We have certainly had years where cognition was the critical discussion, or objects, or programming languages, or students' processes. This year, I noticed that many of the papers were thinking about beliefs and feelings.

I find this set of papers interesting because it highlights an important research question: What is the most significant issue influencing student success in or withdrawal from computer science? Is it the programming language they use (blocks vs. text, anyone?), the kind of error messages they see, the context in which the instruction is situated, or whether they use pair programming? Or is the most significant issue what the students believe about what they're doing? And maybe all of those other issues (from blocks to pairs) are really just inputs to the function of student belief?

(Be sure to check out Andy Ko’s summary of ICER 2017.)

September 1, 2017 at 7:00 am

Teachers are not the same as students, and the role of tracing: ICER 2017 Preview

The International Computing Education Research conference starts today at the University of Washington in Tacoma. You can find the conference schedule here, and all the proceedings in the ACM Digital Library here. In past years, all the papers have been free for the first couple weeks after the conference, so grab them while they are outside the paywall.

Yesterday was the Doctoral Consortium, which had a significant Georgia Tech presence. My colleague Betsy DiSalvo was one of the discussants, and two of my PhD students were participants.

We have two research papers being presented at ICER this year. Miranda Parker and Kantwon Rogers will be presenting Students and Teachers Use An Online AP CS Principles EBook Differently: Teacher Behavior Consistent with Expert Learners (see paper here), which is from Miranda C. Parker, Kantwon Rogers, Barbara J. Ericson, and me. Miranda and Kantwon studied the ebooks that we've been creating for AP CSP teachers and students (see links here). They're asking a big question: "Can we develop one set of material for both high school teachers and students, or do they need different kinds of materials?" First, they showed that there were statistically significant differences in behavior between teachers and students (e.g., different numbers of interactions with different types of activities). Then, they tried to explain why there were differences.

We develop a model of teachers as expert learners (e.g., they have more knowledge so they can create more linkages, they know how to learn, and they are better at monitoring their learning) and of high school students as more novice learners. They dig into the log file data to find evidence consistent with that explanation. For example, students repeatedly try to solve Parsons problems long past the point where they are likely to get them right and learn from them, while teachers move on when they get stuck. Students are also more likely than teachers to run code and then run it again with no edits in between. At the end of the paper, they offer design suggestions based on this model for how we might develop learning materials designed explicitly for teachers vs. students.
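
As an illustration of what that kind of log-file analysis might look like, here is a minimal sketch in Python. The event schema is hypothetical (invented for this post, not the ebook platform's actual log format); it just shows how one could count "ran the code again without editing" events.

```python
# Hypothetical log schema: each event is a dict with an "action" key.
from typing import Dict, List


def count_reruns_without_edit(events: List[Dict]) -> int:
    """Count 'run' events that immediately follow another 'run' with no edit between."""
    reruns = 0
    last_action = None
    for event in events:
        if event["action"] == "run" and last_action == "run":
            reruns += 1
        last_action = event["action"]
    return reruns


log = [{"action": "edit"}, {"action": "run"},
       {"action": "run"}, {"action": "run"}]
print(count_reruns_without_edit(log))  # prints 2: two re-runs with no edit between
```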

Katie Cunningham will be presenting Using Tracing and Sketching to Solve Programming Problems: Replicating and Extending an Analysis of What Students Draw (see paper here), which is from Kathryn Cunningham, Sarah Blanchard, Barbara Ericson, and me. The big question here is: "Of what use is paper-and-pen based sketching/tracing for CS students?" Several years ago, the Leeds Working Group (at ITiCSE 2004) did a multi-national study of how students solved complicated problems with iteration, and they collected the students' scrap paper. (You can find a copy of the paper here.) They found (not surprisingly) that students who traced the code were far more likely to get the problems right. Barb was running an experiment for her study of Parsons Problems and gave scrap paper to students, which Katie and Sarah then analyzed.

First, they replicate the Leeds Working Group study: those who trace do better on problems where they have to predict the behavior of the code. Already, that's a good result. But then Katie and Sarah go further. For example, they find it's not always true: if a problem is pretty easy, those who trace are actually more likely to get it wrong, so the correlation goes the other way. And those who start to trace but then give up are even more likely to get it wrong than those who never traced at all.
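
For readers who haven't seen this kind of task, here is a small prediction problem of my own construction (not one from the paper) of the sort where line-by-line tracing on scrap paper tends to pay off:

```python
# Predict the output without running the code -- the kind of question where
# tracing each iteration on paper correlates with getting the right answer.
total = 0
for value in [3, -1, 4, -2, 5]:
    if value > 0:
        total = total + value
    else:
        total = total - value
print(total)  # tracing iteration by iteration gives 3, 4, 8, 10, 15 -> prints 15
```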

They also start to ask a tantalizing question: Where did these tracing methods come from? A method is only useful if it gets used — what leads to use? Katie interviewed the two teachers of the class (each taught about half of the 100+ students in the study). Both teachers did tracing in class. Teacher A's method gets used by some students. Teacher B's method gets used by no students! Instead, some students use the method taught by the head Teaching Assistant. Why do some students pick up a tracing method, and why do they adopt the one that they do? Because it's easier to remember? Because it's more likely to lead to a right answer? Because they trust the person who taught it? More to explore on that one.

August 18, 2017 at 7:00 am
