The BlueJ Blackbox now available: large scale programming education data collection

August 27, 2013 at 1:34 am

Neil Brown announced this at ICER last week. The new version of BlueJ now anonymously logs user actions to a server for analysis by researchers. I just signed up to get access to the site, and I have a couple of ideas for research projects using these data. It’s pretty exciting: big data comes to computing education research!

We have begun a data collection project, called Blackbox, to record the actions of BlueJ users. We’re inviting all BlueJ users (on the latest version, 3.1.0, onwards) to take part. About two months into the project, we already have 25,000 users who have agreed to take part, with 1,000 sending us data each day. Based on current estimates, I expect that in November 2013 we should see around 5,000 users sending data each day, with a total of over 100,000 users. Rather than hoarding the data, we are making it available to other computing education researchers for use in their own research, so that we can all benefit from this project.

via Blackbox: large scale programming education data collection | Academic Computing.



3 Comments

  • 1. Brian Danielak  |  August 28, 2013 at 4:37 pm

    Mark,

    I have a question regarding the reasoning in the above-linked article. Neil Brown writes:

    Imagine that a particular problem or misconception affects, say, one student in a hundred. A teacher with a class of twenty will see such a student once every five years. A researcher with a study of 100 will see one such student. A researcher with a large database, say 100,000 users, will have 1,000 such students. It’s clear who has the best chance of analysing this problem. Large-scale data also has the advantage of generalising beyond some of the confounds in smaller-scale studies. An individual smaller study can be confounded by the institution in which it is run, or the cohort of students, or the handful of teachers involved. A larger study with participants from hundreds or thousands of institutions automatically overcomes these biases, which can be problematic at smaller scales.

    If the argument is that scaling up N gives us the sensitivity to find rarer patterns, then I agree. But my question is: if the patterns are that rare in CSEd (1% or fewer occurrences), are they worth finding? Especially when, by the author’s own argument, an instructor at a small college might teach for five years without encountering a student who has that “misconception.”

    My general question is: what’s the utility of using large scale studies to find tiny effects if most instructors won’t encounter those effects?

    • 2. Neil Brown  |  August 29, 2013 at 1:33 pm

      This is a good question. I don’t yet have the data (hence Blackbox!), but here’s a working hypothesis. My guess is that the frequencies across the students obey a power law. Take 100 students. There are a few really common misconceptions/mistakes that lots of students make: say the top mistake is made by 50 or 60 students, but the sixth most frequent mistake is made by 10 or 15, the tenth most frequent mistake by 2 or 3, and so on. So what do you do with the long tail? One valid option is to disregard it: it’s a series of rare misconceptions/mistakes. As a teacher you deal with them as and when; as a tool designer you just ignore the tail. That may still be the best option.
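
      To make the shape of that guess concrete, here is a minimal sketch (purely illustrative: the class name MistakeTail, the top count, and the exponent are all invented, and estimating the real values is exactly what Blackbox data is for) of what a Zipf-style power law over mistake ranks would predict for 100 students:

      // Illustrative only: assumes mistake counts follow count(r) = C / r^s.
      // C and s are made-up numbers, not measurements.
      public class MistakeTail {
          public static void main(String[] args) {
              double c = 60.0;  // assumed count of the most common mistake (per 100 students)
              double s = 1.2;   // assumed exponent; a steeper s means a thinner tail
              for (int rank = 1; rank <= 10; rank++) {
                  System.out.printf("rank %2d: ~%.0f of 100 students%n",
                          rank, c / Math.pow(rank, s));
              }
          }
      }

      With these made-up numbers, the tenth-ranked mistake is down to about 4 students in 100; whether the real tail is fatter or thinner than this is an empirical question.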

      But what if some of these mistakes are particularly costly and can be easily fixed? For example, one item I’ve looked at is “empty if statements”. This is where students put a semicolon immediately after the if condition, thus ending it early:

      if (x < 5);   // the if statement already ends here, at the semicolon
      {
          x = 5;    // always executes!
      }

      This mistake occurs rarely: only 0.15% of source files ever contain it over their lifetime so far (source: Blackbox data up until last week). But what if this is a hard mistake for students to spot and fix? (Still working on how to measure this aspect!) It might be rare, but it’s costly, and it’s quite easy for the tool (compiler or pre-compiler) to issue a warning about it. Perhaps there are other similar hard-but-rare issues, and we could spend a small amount of time here and there to save a small number of students a large amount of time and frustration? We don’t know yet for sure. But one interesting point to note is that these higher-level questions can only be answered by a large-scale study. I can’t decide if that’s a good justification or a circular argument: until we have a lot of data, we don’t know what we can use a lot of data for.
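
      As a concrete illustration of how little machinery such a warning needs (this is a sketch, not anything shipped in BlueJ or Blackbox; the class name EmptyIfCheck is invented, and a real tool would inspect the parse tree, as javac’s -Xlint:empty warning does, rather than raw text):

      import java.util.regex.Pattern;

      // Crude text-level sketch of the "empty if" check described above.
      // It misses conditions that span multiple lines and can match inside
      // strings or comments; a real implementation would use the compiler's AST.
      public class EmptyIfCheck {
          private static final Pattern EMPTY_IF =
                  Pattern.compile("\\bif\\s*\\([^)]*\\)\\s*;");

          public static void main(String[] args) {
              String[] lines = { "if (x < 5);", "{", "    x = 5;", "}" };
              for (int i = 0; i < lines.length; i++) {
                  if (EMPTY_IF.matcher(lines[i]).find()) {
                      System.out.println("line " + (i + 1)
                              + ": semicolon ends the if statement early");
                  }
              }
          }
      }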

  • […] and I are speaking Thursday 3:45-5 (with Neil Brown on his Blackbox work) in Hanover DE on our AP CS analysis paper (also previewed at a GVU Brown Bag). The full paper is […]

