Posts tagged ‘computing education research’

What we want kids to learn through coding: Requirements for task-specific programming languages for learning

Marina Umaschi Bers has an essay from last year that I’ve been thinking about more since the discussion about task-specific programming languages (see previous post here). Her essay is: What Kids Can Learn Through Coding.

In creating her ScratchJr kitten, Liana practiced some of the most powerful ideas of computer sciences:

  • She learned that a programming language has a set of rules in which symbols represent actions.
  • She understood that her choices had an impact on what was happening on the screen.
  • She was able to create a sequence of programming blocks to represent a complex behavior (such as appearing and disappearing).
  • She used logic to correctly order the blocks in a sequence.
  • She practiced and applied the concept of patterns, which she had learned earlier during math time in class.

ScratchJr is just what the name suggests — a programming language like Scratch, but made even simpler and aimed at young children.  See a description of activities with ScratchJr here.

What I particularly like about Marina’s list is how it connects to the learning trajectories work that I’ve been talking about here and highlighted in my 2019 SIGCSE Keynote (as described in Ann Leftwich’s tweet). Ideas like “Precision and completeness are important when giving instructions in advance” are hard to learn. Knowing that the computer requires precision and isn’t trying to understand you like a human is really the first step to getting past Roy Pea’s “Superbug.” These are important ideas to learn. I’ll bet that most students don’t have that insight (even when they get to undergraduate education). That would be an interesting research question — what percentage of university students know ideas like the importance of precision and completeness in computer instructions?

I have a hypothesis that these fundamental ideas (what Marina is pointing out, what Katie Rich et al. are noting in their learning trajectories) even transfer. These aren’t higher-order thinking skills. This isn’t about learning a programming language to be used across the curriculum. Rather, these concepts are about recognizing the nature of programming. I bet that students will learn these ideas and remember them in new contexts, because they are about what the computer and programming are. Once you learn that computer instructions require precision and completeness, I predict that you will always remember that programming has that requirement.

What else do we want students to get from coding?

Please note that I’m not talking about “computational thinking.” I’m setting aside the possibility of more general “mindset” benefits.  Right now, I’m thinking in pragmatic and measurable terms.  We can measure students learning the concepts in the trajectories.  We can measure if those concepts are retained and applied later.

This is why task-specific programming languages are interesting to me. The goal isn’t “learning programming” (and I’ll argue in a few weeks that that isn’t a thing). The goal is using code to improve learning about something other than programming skills. Notice the term that Marina uses in her essay: “the most powerful ideas of computer sciences.” She’s not teaching students how to program. She is teaching them what programming is.

A task-specific programming language used in an educational context should improve learning in that context. It should provide some leverage that wasn’t there previously (i.e., without the computer).

  • For the social studies educators with whom I am working, programming allowed them to build visualizations from large data sets that highlight exactly the features they care about. They found it easier to get the visualization they wanted via code than via Excel (or so they told us). The programming managed scale (e.g., with only three data points, you could graph them by hand pretty easily; see the sketch after this list).
  • The programming language can find mistakes that the student might not notice, and provide feedback. That’s how I’m trying to use programming in a history class. A program can represent an argument. A computer can find gaps and weaknesses in an argument that a student might not see.
  • The Bootstrap folks argue for the value of rigor. I think they’re referring to the specificity that a program requires. Writing a program requires students to specify a problem and a solution in detail, and can lead to greater insight.
  • A program can make something “real.” Bootstrap: Algebra works, in part, because the students’ algebra makes a video game. It breathes life into the mathematics. That a program executes is a powerful motivator.
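
To make the first bullet above concrete, here is a minimal sketch of the kind of filter-and-plot task the teachers described, written in Python with pandas and matplotlib (not their actual tool). The file name and column names are invented for illustration:

```python
# Hypothetical example: plotting one county's unemployment rate over time
# from a large CSV. File name and column names are invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("county_unemployment.csv")      # could be thousands of rows
subset = data[data["county"] == "Washtenaw"]       # pick out just the rows we care about

plt.plot(subset["year"], subset["unemployment_rate"])
plt.title("Unemployment rate in Washtenaw County")
plt.xlabel("Year")
plt.ylabel("Unemployment rate (%)")
plt.show()
```

The same few lines work whether the file has thirty rows or three hundred thousand, which is the scale argument: the code, not the human, manages the size of the data.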

I think what Alan was telling me in the comments to an earlier blog post about task-specific programming and again in the blog post on multiple languages in schools is that it should also lead to generativity. If I teach you a programming language that solves your task and doesn’t connect to more powerful ideas and languages, then I have taught you a dead-end solution. I might achieve the goals I’ve identified earlier, but I’m not helping you to solve the next problems or the problems you’re going to face next year or next class. I’m not sure right now how to achieve that goal, but I recognize the value of it.

What are other requirements for the use of task-specific languages for learners?

April 22, 2019 at 7:00 am 2 comments

A Task-Specific Programming Language for Web Scraping in which learners are successful very quickly

One of the papers that has most influenced my thinking about task-specific programming languages is Rousillon: Scraping Distributed Hierarchical Web Data by Sarah Chasins, Maria Mueller, and Rastislav Bodik.

Rousillon is a programming by demonstration system that generates a program in Helena, a task-specific programming language for Web scraping (grabbing data out of Web pages). Below is a flow diagram describing how Rousillon generates a program in Helena.

Check out the Helena program at the far right of that diagram. Yeah, it’s a block-based programming language — for adults. The choice of blocks was made explicitly to avoid syntax errors. If the user wants to modify the synthesized code, she is guided to what can be changed — change a slot in a block, or delete or move a block.  It’s a purposeful choice to improve the user experience of programming with Helena.

Helena doesn’t do statistics. It doesn’t do visualizations. It does one thing extremely well.  Maybe it’s competition for R, because people do use R for Web scraping.  But it’s far easier to do web-scraping in Helena.

The graph below is the one that blew me away. They ran a study comparing Rousillon and Selenium, a comparable Web data scraping system. Everybody using Rousillon completed both tasks in a few minutes. Most people using Selenium couldn’t finish the tasks (a bar that reaches the top of the graph means the participant ran out of time before completing the task).

But here’s the part that is just astonishing. Notice the black border on some of the Selenium bars in that graph? Those are the people who knew Selenium already. NONE of the Rousillon users knew the language before hand. That time for the Rousillon users includes training — and they still beat out the Selenium users.  A task that a trained Selenium user can solve in 25 minutes, a complete novice can solve with Rousillon and Helena in 10 minutes.

Here’s a goal for task-specific programming languages for adult end-users: It should be easily learned. End-users want to be able to succeed at simple tasks nearly immediately. They want to achieve their task, and any time spent learning is time away from completing the task. It’s about reducing the costs of integrating programming into the adult’s context.

The goal isn’t exactly the same when we’re talking about learners in non-CS classes. I think it’s about balancing two challenges:

  • If the learning is generative and will be used again, then some additional learning is valuable. For example, learning about vectors in MATLAB or lists in Racket makes sense — it’s a concept you’ll use whenever you use that language. (I have a blog post coming next week that talks about generativity and other goals for task-specific programming languages.)
  • But we don’t want to turn algebra, history, or economics classes into CS classes. We don’t want to make mathematics or social studies teachers feel like like they’re now teaching CS. When is it too much CS that you’re teaching?  Perhaps a rule of thumb is when the teacher is teaching more than they could learn in a single professional development session. That’s just a guess.

April 15, 2019 at 7:00 am 3 comments

Why we should explore more than one programming language across the curriculum

Ben duBoulay and I wrote the history chapter for the new Cambridge University Press Handbook of Computing Education Research (mentioned here).  A common theme across that history has been the search for the “best language” to learn programming.  We see that search from the 1960s on up.

One of the criteria for “best language” is one that could be used across the curriculum, in different classes and for different problems.  I was reminded of that when we recently ran a participatory design session with social science teachers.  We heard the message that they want the same language to use in history, English, mathematics, and science. The closest we ever got was Logo.

But now, I’m not sure that that’s the right goal, for two reasons:

  1. We currently have no evidence that language-specific programming knowledge will transfer, nor do we know how to achieve such transfer.  If students use one language to learn algebra (e.g., Bootstrap: Algebra), do they use or even reference that language when they get to (say) science class?  Maybe we could design a language with algebra-specific representations and biology-specific representations and history-specific representations and so on, but covering that range of domain constructs might make the language unnecessarily complex and abstract.  My bet is that the fundamental computational ideas do transfer.  If you learn that the order of language elements matters and that the specificity of those elements matters (two early learning goals described in the learning trajectories work), I’ll bet that you’ll use those ideas later. Those aren’t about the programming language. Those are about programming and the nature of programs.
  2. School is so much more diverse and heterogeneous than adult life. I’ll bet that most of you reading this took physics and biology at some point in your lives.  I don’t know about you, but I rarely use F=ma or the difference between mitosis and meiosis in daily life.  As adults, we tend to solve problems within a small range of domains on any given day. But our students take science and math and history and English all in the same day.  In adult life, there are different programming languages for different kinds of problems and different domains (as Philip Guo has been talking about in his recent talks). Why shouldn’t that be true for K-12, too?

The key is to make the languages simple enough that there’s little overhead in learning them. As one of the teachers in our study put it, “one step up from Excel.” Scratch fits that goal in terms of usability, but I don’t think it’s a one-size-fits-all solution to computing across the curriculum.  It doesn’t meet the needs of all tasks and domains in the school curriculum. Rather, we should explore multi-lingual solutions, including task-specific programming languages, and think hard about creating transfer between them.

The single language solution makes sense if the language is hard to learn. You don’t want to pay that effort more than once. But if the language fits the domain and task, learning time can be minimal — and if well-designed, the part that you have to learn transfers to other languages.

 

April 8, 2019 at 7:00 am 21 comments

Opportunities to explore research questions with Code.org: Guest post from Baker Franke

I wrote my Blog@CACM column this month as a response to the 60 Minutes segment on Code.org with Hadi Partovi, “Five Research Questions Raised by a Pre-Mortem on the 60 Minutes Segment on Code.org.” Baker Franke, research and evaluation manager at Code.org (see Felienne’s blog post about Baker here) responded on Facebook with an invitation to engage with Code.org on these (and other) research questions.  He provided me with a longer form of his Facebook post, which I’m sharing here with his permission. Thanks, Baker!

Hi Mark,
These are all of course great questions and ones we really hope the community takes up. Some are things we’re actively tracking or working on with other research partners. So, some questions are actually more at the mid-mortem stage 🙂 or at least we can see trends. If you’d like me to share something I’m more than happy to.
But for anyone out there who would like to dig in a little bit more, please consider becoming a research partner. I recently gave a flash talk at the SPLICE workshop at SIGCSE ‘19 about how we do research partnerships. Here is a working document about Research Partnerships at Code.org. It explains in a bit more detail how we work with researchers and the kinds of data we do and don’t have. If you have ideas for killer research projects that make use of our platform, let us (read: me) know.

To offer a little something, I’ll take the bait on Mark’s first hypothesis: “Students with Code.org accounts don’t actually use the Code.org resources.” Answer: depends how you count! Actually it’s more like: depends where you want to draw the line on a full spectrum of activity we can log. We do try to track the difference between just visiting the site, logging in, doing an activity, starting a course with a teacher, and doing more. (Of course nothing would replace a well-designed implementation study, did I mention our research partnership program?)

For example, in CS Fundamentals (K5), we’ve been looking at students who “start” the course (defined in the next paragraph) v. those who reach a basic level of “coding proficiency,” but understanding and defining this is tricky and is an ongoing project (click this link to learn more about how we define it).

We also track what we call “course started” v. “fully enrolled.” “Course started” is defined as a student user who (1) completes any single activity within one of our full courses (i.e., CSF, CSD, CSP, but not Hour of Code) and (2) is in a teacher-section with 5 or more other students. Some exploratory analysis has shown that >= 5 students filters out a lot of noise that is probably not school-related activity. “Fully enrolled” is a metric that we’re also continuing to refine, and is a bit of a misnomer because we can’t actually see what course the student is enrolled in at their school, but it means something like: “system activity data is indicative of this person probably actually going through the course in a classroom with a teacher.” We calculate it based on a student account (1) being in a section of >= 5 students, (2) reaching a threshold of activity within each unit of the course — different thresholds for each unit depending on size, amount of unplugged, etc. — (3) over all units in the course. I think the important thing to highlight about these metrics is that they’re focused on activity indicative of students in classrooms, not just any old activity. Here are the numbers for the most recently completed school year.

[Image: table of “course started” vs. “fully enrolled” student counts for the most recently completed school year.]

Right now, our definition for “fully enrolled” is admittedly rough, and we’re going to refine it based on what we’re seeing in actual classrooms and publish a definition later this year with better numbers, but these numbers give a high level idea of what we’re seeing.
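
The “course started” definition above is essentially a filter over activity logs. As a hypothetical illustration only (the column names are invented, and this is not Code.org’s actual pipeline), such a filter might look like this in pandas:

```python
# Hypothetical sketch of the "course started" filter described above.
# Column names and data are invented; this is not Code.org's actual pipeline.
import pandas as pd

activity = pd.read_csv("activity_log.csv")   # one row per completed activity:
                                             # student_id, section_id, course (CSF/CSD/CSP/...)

# (1) keep only activity within full courses (not Hour of Code)
full_courses = activity[activity["course"].isin(["CSF", "CSD", "CSP"])]

# (2) keep only students in teacher-sections with 5 or more OTHER students (>= 6 total)
section_sizes = full_courses.groupby("section_id")["student_id"].nunique()
big_sections = section_sizes[section_sizes >= 6].index
started = full_courses[full_courses["section_id"].isin(big_sections)]

print("Course started:", started["student_id"].nunique())
```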

So do students with Code.org accounts use the resources? Yeah, a lot do. It’s tempting to compare these numbers to, say, MOOC completion rates, but we’re not really a MOOC because our curricula aren’t actually online courses. It would be hard for a student to go through our courses without a teacher and a classroom of schoolmates. The difference between “started” and “fully enrolled” could be a teacher only using one unit of a course or a few activities. So where’s the line? Another way to slice this would be to look at started v. completed for students of teachers who went through our PD program. Wouldn’t that be interesting? Want to find out more? Did I mention our Research Partnership Program?

Baker
baker@code.org

April 5, 2019 at 7:00 am Leave a comment

The problem with sorting students into CS classes: We don’t know how, and we may institutionalize inequity

One of the more interesting features of the ACM SIGCSE ITiCSE (Innovation and Technology in CS Education) conference is its “working groups” (see description here). Groups of attendees from around the world work together before and at the conference on an issue of interest, then publish a report on what happened. This is the mechanism that Mike McCracken used when he organized the first Multi-Institutional, Multi-National (MIMN) study in CS Ed (see paper here). This year’s Working Group #9 caught my eye (see list here).

The description of what the group wants to explore is interesting: How can we measure what will lead to success in introductory computer science?

The main issues are the following:

  • The ability to predict skill in the absence of prior experience
  • The value of programming language neutrality in an assessment instrument
  • Stigma and other perception issues associated with students’ performance, especially among groups underrepresented in computer science

It’s a deep and interesting question that several research groups have explored. Probably the most famous of these is the “The Camel has Two Humps.” If you read that paper, be sure to read Caspersen et al’s (unsuccessful) attempt to replicate the results (here), Simon’s work with Dehnadi and Bornat to replicate the results (again unsuccessful, here), and then finally the retraction of the original results (here). Bennedsen and Caspersen have a nice survey paper about what we know about predictive factors from ICER 2005, and there was a paper at SIGCSE 2019 that used multiple data sources to predict success in multiple CS courses (here). The questions as I see it are (a) what are the skills and knowledge that improve success in learning to program, (b) how can we measure to determine if they are there, and (c) how can we teach those skills explicitly if they are not.

Elizabeth Patitsas explored the question of whether there are innate differences between students that lead to success or failure in introductory CS (see paper here). She does not prove that there is no so-called Geek Gene. We can’t prove that something does not exist. She does show (a) that grades at one institution over many years are (mostly) not bimodal, and (b) that some faculty see bimodal grade distributions even when the distribution is normal. If there were something else going on (Geek Gene, aptitude, whatever), you wouldn’t expect that much normality. So she gives us evidence to doubt the Geek Gene hypothesis, and she gives us a reasonable alternative hypothesis. But it isn’t definitive proof — that’s what Ahadi and Lister argued at ICER 2013. We have to do more research to better understand the problem.
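
For readers who want to try a similar check on their own course data, here is a minimal sketch of one simple test involved: a Shapiro-Wilk normality test over a list of final grades. The grades below are made up, and this is illustrative only, not the fuller analysis from Patitsas’s paper:

```python
# Minimal sketch: is this grade distribution consistent with normality?
# Grades are made up; this is not the analysis from the Patitsas et al. paper.
from scipy import stats

grades = [55, 62, 64, 68, 70, 71, 73, 75, 76, 78, 80, 81, 83, 85, 88, 92]

stat, p = stats.shapiro(grades)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")
if p > 0.05:
    print("No evidence against normality (which argues against a 'two humps' story).")
else:
    print("Distribution deviates from normal; look more closely (e.g., plot a histogram).")
```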

Are Patitsas’s results suspect because they’re from an elite school? Maybe. Asking that question is really common among CS faculty — Lecia Barker found that that’s one of the top reasons why CS faculty ignore CS Ed research. We discount findings from places unlike ours. That’s why Multi-Institutional, Multi-National (MIMN) is such a brilliant idea. They control for institutional and even national biases. (A MIMN study showing the effectiveness of Peer Instruction is in the top-10 SIGCSE papers list.)

In my research group, we’re exploring spatial reasoning as one of those skills that may be foundational (though we don’t yet know how or why). We can measure spatial reasoning, and that we can (pretty easily) teach. We have empirically shown that wealth (more specifically, socioeconomic status (SES)) leads to success in computing (see this paper), and we have a literature review identifying other privileges that likely lead to success in CS (see Miranda Parker’s lit review).

I am concerned about the goals of the working group. The title is “Towards an Ability to Direct College Students to an Appropriately Paced Introductory Computer Science Course.” The first line of the description is:

We propose a working group to investigate methods of proper placement of university entrance-level students into introductory computer science courses.

The idea is that we might have two (or more) different intro courses, one at a normal pace and one at a remedial pace. The overall goal is to meet student needs. There is good evidence that having different intro courses is a good practice. Most institutions that I know that have multiple introductory courses choose based on major, or allow students to choose based on interest or on expectations of abilities. It’s a different thing to have different courses assigned by test.

If we don’t know what those skills are that might predict success in CS, how are you going to measure them? And if you do build a test that sorts students, what will you actually be sorting on?  It’s hard to build a CS placement test that doesn’t actually sort on wealth, prior experience, and other forms of privilege.

If the test sorts on privilege, it is institutionalizing inequity. Poor kids go into one pile, and rich kids go into the other. Kids who have access to CS education go into one pile, everyone else into another.

Why build the test at all?  To build a test to sort people into classes presumes that there are differences that cannot be mitigated by teaching. Building such a test presumes that there is a constant answer to “What does it take to succeed in CS1?” If we had such a test, would the results be predictive for classes that both use and don’t use pair programming? Peer instruction? Parsons problems?

I suggest a different perspective on the problem.  We can get so much further if we instead improve success in CS1. Let’s make the introductory course one that more students will succeed in.

There’s so much evidence that we can improve success rates with better teaching methods and revised curriculum. Beth Simon, Leo Porter, Cynthia Lee, and I have been teaching workshops the last four years to new CS faculty on how to teach better. It works — I learned Peer Instruction from them, and use it successfully today. My read on the existing literature suggests that everyone benefits from active learning, and the less privileged students benefit the most (see Annie-Murphy Paul’s articles).

One of the reasons why spatial reasoning is so interesting to explore is that (a) it does seem related to learning computing and (b) it is teachable.  Several researchers have shown that spatial skills can be increased relatively easily, and that the improved skills are long-lasting and do transfer to new contexts. Rather than sort people, we’re better off teaching people the skills that we want them to have, using research-informed methods that have measurable effects.

Bottom line: We are dealing with extraordinary enrollment pressures in CS right now. We do need to try different things, and multiple introductory courses is a good idea. Let’s manage the enrollment with the best of our research results. Let’s avoid institutionalizing inequities.

April 1, 2019 at 7:00 am 10 comments

Using MOOCs for Computer Science Teacher Professional Development

When our ebook work was funded by IUSE, our budget was cut from what we proposed. Something had to be dropped from our plan of work. What we dropped was a comparison between ebooks and MOOCs. I had predicted that we could get better learning and higher completion rates from our ebooks than from our MOOCs. That’s the part that got dropped — we never did that comparison.

I’m glad now. It’s kind of a ridiculous comparison because it’s about the media, not particular instances. I’m absolutely positive that we could find a terrible ebook that led to much worse results than the absolutely best possible MOOC, even if my hypothesis is right about the average ebook and the average MOOC. The medium itself has strengths and weaknesses, but I don’t know how to experimentally compare two media.

I’m particularly glad since I wouldn’t want to go up against Carol Fletcher and her creative team who are finding ways to use MOOCs successfully for CS teacher PD. You can find their recent presentation “Comparing the Efficacy of Face to Face, MOOC, and Hybrid Computer Science Teacher Professional Development” on SlideShare:

Carol sent me a copy of the paper from the 2016 “Learning with MOOCs” conference*. I’m quoting from the abstract below:

This research examines the effectiveness of three primary strategies for increasing the number of teachers who are CS certified in Texas to determine which strategies are most likely to assist non-CS teachers in becoming CS certified. The three strategies compared are face-to-face training, a MOOC, and a hybrid of both F2F and MOOC participation. From October 2015, to August of 2016, 727 in-service teachers who expressed an interest in becoming CS certified participated in one of these pathways. Researchers included variables such as educational background, teaching certifications, background in and motivation to learn computer science, and their connection to computer science through their employment or the community at large as covariates in the regression analysis. Findings indicate that the online only group was no less effective than the face-to-face only group in achieving certification success. Teachers that completed both the online and face-to-face experiences were significantly more likely to achieve certification. In addition, teachers with prior certification in mathematics, a STEM degree, or a graduate degree had greater odds of obtaining certification but prior certification in science or technology did not. Given the long-term lower costs and capacity to reach large numbers that online courses can deliver, these results indicate that investment in online teacher training directed at increasing the number of CS certified teachers may prove an effective mechanism for scaling up teacher certification in this high need area, particularly if paired with some opportunities for direct face-to-face support as well.

That they got comparable results from the MOOC-only and face-to-face-only conditions is an achievement. It matches my expectation that a blended model combining both would be more successful than online alone.
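
To picture the kind of analysis the abstract describes (a regression estimating the odds of certification from pathway and background covariates), here is a hedged sketch using statsmodels. The column names and data file are invented; this is not the authors’ actual code or data:

```python
# Hypothetical sketch of the kind of regression described in the abstract.
# Column names and data are invented; not the authors' code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

teachers = pd.read_csv("teacher_outcomes.csv")  # certified (0/1), pathway (F2F/MOOC/Hybrid),
                                                # math_cert, stem_degree, grad_degree (0/1 each)

model = smf.logit(
    "certified ~ C(pathway) + math_cert + stem_degree + grad_degree",
    data=teachers,
).fit()

print(model.summary())            # coefficients are log-odds
print(np.exp(model.params))       # exponentiate to read them as odds ratios
```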

Carol and team are offering a new on-line course for the Praxis test that several states use for CS teacher certification. You can find details about this course at https://utakeit.stemcenter.utexas.edu/foundations-cs-praxis-beta/.


* Fletcher, C., Monroe, W., Warner, J., Anthony, K. (2016, October). Comparing the Efficacy of Face-to-Face, MOOC, and Hybrid Computer Science Teacher Professional Development. Paper presented at the Learning with MOOCs Conference, Philadelphia, PA.

March 29, 2019 at 7:00 am 1 comment

Where to find Guzdial and Ericson Web Resources post-Georgia Tech: Bookmark this post

Georgia Tech has now shut down the web servers that Barbara Ericson and I have been using to share resources for the last umpteen years.  They warned us back in the Fall that they were going to, so we have been moving Web resources over to the University of Michigan.

Here are links to where to find some of our most often accessed resources (not all links and images will work, since some were hardcoded to Georgia Tech URLs):

We do not have everything moved over.  I believe that there were FERPA concerns about some of our websites (that we might be referencing student names), so we were not able to download those. We are recreating those resources as best we can. Barbara has now set up a new blog for her AP CS A tracking data (see http://cs4all.home.blog), and I have uploaded my Computational Freakonomics slides and course notes to U-M.

Bookmark this post, and as I find more resources in the stash I downloaded, I’ll link them from here.

 

March 22, 2019 at 7:00 am 3 comments
