Archive for November, 2019

The future of computing education is in providing literacy to all: Video of SIGCSE 2019 Keynote now available

Here in the United States, it’s Thanksgiving week, so it’s a good time to put a cap on one of the events that I’m most thankful for this year.

My keynote at SIGCSE 2019, Computing Education as a Foundation for 21st Century Literacy, was recorded, but something went wrong with the audio. (That happens when the audio includes speaking, singing, harmonica, ukulele, and totally messed up digital sounds.) I reached out to Rebecca Quintana of U. Michigan’s Center for Academic Innovation, and they agreed to re-record my lecture in the studio with a professional engineer. The result is a series of shorter videos, rather than a single hour-long video.

The crowdsourced blog post about the keynote is here, and my post with my many thanks is here. The slides are available here.

I fixed some typos and made some small updates to the extended abstract associated with the talk. You can get the (non-paywalled) updated paper here.

Happy Thanksgiving, everyone!

November 25, 2019 at 2:00 am 1 comment

Making the Case for Adaptive Parsons problems and Task-Specific Programming: Koli Calling 2019 Preview

I am excited to be presenting at the 19th Koli Calling International Conference on Computing Education Research (see site here). Both Barbara Ericson and I have papers this year. This was my third submission to Koli, and my first acceptance. Both of us had multiple rejections from ICER this year (see my blog post on ICER), so we updated and revised based on reviews, and were thrilled to get papers into Koli.

Investigating the Affect and Effect of Adaptive Parsons Problems

By Barbara Ericson, Austin McCall, and Kathryn Cunningham.

Barb is presenting the capstone to her dissertation work on adaptive Parsons problems (see blog post on her dissertation work here). This paper captures the iterative nature of her study. Early on, she did detailed think-aloud/interview protocols with teachers to understand how people used her adaptive Parsons problems. At the end, she looked at log files to get a sense of use at scale.

Abstract: In a Parsons problem the learner places mixed-up code blocks in the correct order to solve a problem. Parsons problems can be used for both practice and assessment in programming courses. While most students correctly solve Parsons problems, some do not. Unsuccessful practice is not conducive to learning, leads to frustration, and lowers self-efficacy. Ericson invented two types of adaptation for Parsons problems, intra-problem and inter-problem, in order to decrease frustration and maximize learning gains. In intra-problem adaptation, if the learner is struggling, the problem can dynamically be made easier. In inter-problem adaptation, the next problem’s difficulty is modified based on the learner’s performance on the last problem. This paper reports on the first observational studies of five undergraduate students and 11 secondary teachers solving both intra-problem adaptive and non-adaptive Parsons problems. It also reports on a log file analysis with data from over 8,000 users solving non-adaptive and adaptive Parsons problems. The paper reports on teachers’ understanding of the intra-problem adaptation process, their preference for adaptive or non-adaptive Parsons problems, their perception of the usefulness of solving Parsons problems in helping them learn to fix and write similar code, and the effect of adaptation (both intra-problem and inter-problem) on problem correctness. Teachers understood most of the intra-problem adaptation process, but not all. Most teachers preferred adaptive Parsons problems and felt that solving Parsons problems helped them learn to fix and write similar code. Analysis of the log file data provided evidence that learners are nearly twice as likely to correctly solve adaptive Parsons problems than non-adaptive ones.
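To make the format concrete, here is a minimal sketch of the mechanics of a non-adaptive Parsons problem. This is not Ericson's actual tool (which runs in the browser and includes the adaptation logic the abstract describes); it is just a hypothetical illustration of the core idea: the learner receives shuffled code blocks and a solve is correct only when every block is back in its original position.

```python
# Hypothetical sketch of a (non-adaptive) Parsons problem: the learner
# is shown shuffled code blocks and must restore the correct order.
import random

# The correct program, one "block" per line (indentation is part of the block).
SOLUTION = [
    "def average(numbers):",
    "    total = 0",
    "    for n in numbers:",
    "        total = total + n",
    "    return total / len(numbers)",
]

def make_problem(solution):
    """Return the solution blocks in a shuffled order for the learner."""
    blocks = list(solution)
    while blocks == solution:       # guarantee the presented order differs
        random.shuffle(blocks)
    return blocks

def check_answer(answer, solution):
    """A solve is correct only when every block is in its original position."""
    return answer == solution

blocks = make_problem(SOLUTION)
print(check_answer(blocks, SOLUTION))    # False: the shuffled order differs
print(check_answer(SOLUTION, SOLUTION))  # True: blocks restored to the solution
```

Intra-problem adaptation, in this framing, would detect a struggling learner and make the problem easier dynamically (for example, by removing distractor blocks or pre-placing some lines); inter-problem adaptation would choose the difficulty of the next problem based on performance on this one.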

Task-Specific Programming Languages for Promoting Computing Integration: A Precalculus Example

By Mark Guzdial and Bahare Naimipour

This is the first paper I’m publishing on my new work in task-specific programming. I mostly discuss my first prototype (see link here) and some of what math teachers are telling me (see link here). We also include a report on Bahare’s and my work with social studies educators. A good bit of this paper is about putting task-specific programming in a computing education context. I see what I’m doing as pushing microworlds further.

Typically, a microworld is built on top of a general-purpose language, e.g., Logo for Papert and Boxer for diSessa. Thus, the designer of the microworld could assume familiarity with the syntax and semantics of the programming language, and perhaps some general programming concepts like mutable variables and control structures. The problem here is that Logo and Boxer, like any general-purpose programming language, take time to develop proficiency. A task-specific programming language (TSPL) aims to provide the same easy-to-understand operations for a microworld, but with a language designed for a particular purpose.

Here’s the abstract:

Abstract: A task-specific programming language (TSPL) is a domain-specific programming language (in programming languages terms) designed for a particular user task (in human-computer interaction terms). Users of task-specific programming are able to use the tool to complete useful tasks, without prior training, in a short enough period that one can imagine fitting it into a normal class (e.g., around 10 minutes). We are designing a set of task-specific programming languages for use in social studies and precalculus courses. Our goal is to offer an alternative to more general-purpose programming languages (such as Scratch or Python) for integrating computing into other disciplines. An example task-specific programming language for precalculus offers a concrete context: An image filter builder for learning basic matrix arithmetic (addition and subtraction) and matrix multiplication by a scalar. TSPLs allow us to imagine a research question which we couldn’t ask previously: How much computing might students learn if they used multiple TSPLs in each subject in each primary and secondary school grade?
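As an illustration of the precalculus example in the abstract — not the actual TSPL, just the underlying mathematics it teaches — an image filter built from matrix arithmetic might brighten an image by matrix addition and dim it by multiplication by a scalar. A hypothetical sketch, representing a grayscale image as a plain list of lists of pixel values:

```python
# Hypothetical sketch of the matrix arithmetic behind an image filter:
# a grayscale image is a matrix of pixel values (0-255). Brightening is
# matrix addition; dimming (contrast scaling) is scalar multiplication.

def matrix_add(a, b):
    """Element-wise sum of two same-shaped matrices."""
    return [[x + y for x, y in zip(row_a, row_b)] for row_a, row_b in zip(a, b)]

def matrix_scale(a, k):
    """Multiply every element of a matrix by the scalar k."""
    return [[k * x for x in row] for row in a]

def clamp(a, low=0, high=255):
    """Keep pixel values in the displayable 0-255 range."""
    return [[min(max(x, low), high) for x in row] for row in a]

image = [[10, 50], [100, 200]]                                # a tiny 2x2 "image"
brighter = clamp(matrix_add(image, [[40, 40], [40, 40]]))     # add 40 to each pixel
dimmer = clamp(matrix_scale(image, 0.5))                      # halve every value

print(brighter)  # [[50, 90], [140, 240]]
print(dimmer)    # [[5.0, 25.0], [50.0, 100.0]]
```

The point of the TSPL is that students manipulate exactly these operations (addition, subtraction, scalar multiplication) through a purpose-built interface and see the filtered image immediately, without first learning a general-purpose language.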

Eventually the papers are going to appear in the ACM Digital Library. I have a preprint version of Barb’s paper here, and a longer form (with bigger screenshots) of my paper here.

November 18, 2019 at 7:00 am 5 comments

We should be emphasizing design of computing over teaching computational thinking

Alan Kay, Cathie Norris, Elliot Soloway, and I have an article in this month’s Communications of the ACM called “Computational Thinking Should Just Be Good Thinking.” (See link here, and a really nice summary at U-M which links to a preprint draft.) Our argument is that “computational thinking” is already here — students use computing every day, and that computing is undoubtedly influencing their thinking. But that fact is almost trivial. What we really care about is effective, critical, “expanded” thinking where computing can play a role in helping us think better. To do that, we need better computing.

It’s more important to improve computing than to teach students to think with existing computing. The state of our current tools is poor. JavaScript wasn’t designed to be learnable and to help users think. (Actually, I might have just stopped with “JavaScript wasn’t designed.”) We really need to up our game, and we should not be focusing solely on how to teach students about current practices around iteration or abstraction. We should also be developing better designs, so that we spend less time teaching around the artifacts of our current poor designs.

Ken Kahn called us out, in the comments at the CACM site, suggesting that general-purpose programming tools are better than building specialized programming tools. I wrote a Blog@CACM post in response “The Size of Computing Education, By-The-Numbers.” We have so little success building tools that reach large numbers of students that it doesn’t make sense to just build on our best practice. They may all be local maxima. We should try a wide variety of approaches.

I got asked an interesting question on Twitter in response to the article.

Do you think @Bootstrapworld and @BerkeleyDataSci Data 8 modules both embody your philosophy?

I don’t think we’re espousing a philosophy. We’re suggesting a value for design and specifically improved design of computing.

Bootstrap clearly does this. The whole Bootstrap team has worked hard to build, iterate, test, and invent. If you haven’t seen it, I recommend Shriram Krishnamurthi’s August 2019 keynote at the FCRC. They solved some significant computer science design problems in creating Bootstrap.

Berkeley’s Data 8 is a curriculum about existing tools, R and Jupyter notebooks. That’s following an approach like most of computational thinking — the focus is on teaching the existing tools. That’s not a bad thing to do, but you end up spending a lot of time teaching around the design flaws in the existing tools. I just don’t buy that R or Jupyter notebooks are well-designed for students. We can do much better. LivelyR (see link here) is an example of trying to do better.

We should be teaching students about computing. But computing is also the most flexible medium humans have ever invented. We should be having an even greater emphasis on fixing, designing, and inventing better computing.

Many thanks to Barbara Ericson, Amy Ko, Shriram Krishnamurthi, and Ben Shapiro who gave me comments on versions (multiple!) of this essay while it was in development. They are not responsible for anything we said, but it would be far less clear without them. The feedback from experts was immensely valuable in tuning the essay. Thanks!

November 13, 2019 at 2:00 am 5 comments

Come to the CUE.NEXT Workshop: Making computing education work for all undergraduates

I’m going to be the keynoter at the Dec. 5 workshop in DC. The workshop series is near and dear to my heart — how do we make computing education accessible to all undergraduates? Below is taken from the CRA website here.


CS Departments have seen significant enrollment increases in undergraduate computer science courses and programs. The number of non-majors in CS courses has also increased significantly, and many CS departments cannot meet the demand. One key reason for the increased demand from non-majors is the fact that computing and computer science have become relevant to undergraduate education in all disciplines. However, there is currently no consensus on how to design computing courses or how to structure curricula aimed at teaching the fundamentals of CS and computing to students who need to use computing effectively in the context of the other disciplines.

The goal of the upcoming CUE.NEXT workshops — organized by Larry Birnbaum (Northwestern), Susanne Hambrusch (Purdue), and Clayton Lewis (CU Boulder) — is to initiate a national dialog on the role of computing in undergraduate education. Computing educators and CS departments, as well as colleagues and academic units representing other stakeholder disciplines, will work together to define and address the challenges. Three NSF-funded workshops are scheduled to take place in Chicago (November 18 and 19), DC (December 5 and 6), and Denver (January 2020).

November 11, 2019 at 7:00 am Leave a comment

Freakonomics misunderstands what public education is, how it works, and how to change it

I am a fan of Freakonomics Radio. I have heard all the old ones (some more than once), and I keep up with the new ones. Freakonomics informs and inspires me, including many posts in this blog. So, I want to respond when they get it really wrong.

Episode 391, “America’s Math Curriculum Doesn’t Add Up” (see link here), is hosted by Steven Levitt (the economist) rather than the usual host Stephen Dubner (the journalist). The podcast is inspired by the struggles Levitt’s teenage children face with their mathematics classes. Levitt contends that the US mathematics curriculum is outdated and in serious need of reform. I agree with his premise. His interviews with Jo Boaler and Sally Sadoff are interesting and worth listening to. But there are huge holes in his argument, and his solution makes no sense at all.

Part of his argument is based on a poll they took through the Freakonomics twitter account.

MARTSCHENKO: So, we’ve been putting together a survey that we sent out to Freakonomics listeners. We asked our survey respondents which subjects they use in their daily life, traditional math and data-related. So trigonometry, geometry, calculus, versus more data-related skills like analyzing and interpreting data and visualizing it.

LEVITT: So what percent of people, say, use calculus on a daily basis?

MARTSCHENKO: About 2 percent said that they use calculus on a daily basis, and almost 80 percent say they never use it.

LEVITT: Okay. I would think calculus would get used more than trigonometry and geometry, although that would be hard if only 2 percent are using it. But what percent use trigonometry and geometry?

MARTSCHENKO: Yeah. Less than 2 percent of respondents said that they use trigonometry in their daily life, but over 70 percent of them said that they never use it.

LEVITT: And how about geometry?

MARTSCHENKO: Geometry was a little bit better. There were about 4 percent of respondents who said that they use geometry daily, but again, over 50 percent said that they never use it.

LEVITT: So it’s a pretty sad day when we’re celebrating the use of geometry because 4 percent of the people report using it.

I don’t dispute his results. Even engineers don’t use geometry or trigonometry every day, but they have to learn it. We don’t only teach subjects that people use on a daily basis. I don’t think about the American Revolution or the three branches of the US government every day, but it’s important for American citizens to know how their country came to be and how it’s structured. We hope that every voter knows the roles that they’re voting for, though they may not think about them daily.

One of the reasons we teach what we do is to provide the tools to learn other important things. Engineers and scientists have to know geometry and trigonometry to do what they do. We could wait until undergrad to teach geometry, trig, and calc — but that’s pretty late. There’s an argument that we should tell students what science and engineering is really about (and show them the real math), both to encourage them and to fully inform them.

The Freakonomics on-line survey misunderstands why we teach what we teach. It’s not just about what people use every day. It’s also about the things that every student will need someday (like understanding how impeachment works) and about the things that might inspire them to think about a future day when they are people who use calculus and trigonometry.

The moment that made me exclaim out loud while listening to the podcast was in the interview with David Coleman, CEO of the College Board. Levitt wants to replace some (all?) of the high school mathematics curriculum with a focus on data science. That’s an interesting proposal worth exploring. Levitt makes an important point — how do we teach teachers about data science?

LEVITT: But will teachers in AP Biology or AP Government have the skills to teach the data-fluency parts of their courses?

COLEMAN: One magnificent thing about teaching is, it’s often the most lively when the teacher himself or herself is learning something. I think the model of practiced expertise being the only way that teaching is exciting is false.

I think what’s more interesting is, can we create environments for teachers and students where together the data comes alive and fascinates them. The question is not to try to suddenly retrain the American teaching force to be data analysts, but instead design superb data experiences, superb courses, where the hunt for data and the experimentation is so lively that it excites them as well as their students. And then they together might be surprised at the outcomes.

I know of no data that says that a teacher’s “surprise” leads to better learning outcomes than a teacher who has significant content knowledge. Much the opposite — the evidence I know suggests that teachers only learn pedagogical content knowledge (how to teach the subject matter) when they develop sufficient expertise in the content area. Learning outcomes are improved by teachers knowing the content and how to teach it. The idea that classes are somehow better (more “lively”) when the teacher doesn’t know what’s going to happen makes no sense to me at all.

Finally, Levitt’s solution to reforming the mathematics curriculum is for all of us to sign a petition, because (he argues) there are only six to ten people in each state that we have to convince in order to reform each state’s mathematics curriculum.

LEVITT: So tell me, who makes the decisions? How does curriculum get set in the U.S., in education systems?

MARTSCHENKO: In public education, the people with power are those on the state boards of education. So each state will have a state board of education. There are typically six to 10 people on the board, and they’re the ones who make those decisions about the curriculum, what gets taught, how testing is done.

LEVITT: So literally this set of six to 10 people have the power to set the guidelines, say, for whether or not data courses are required.

MARTSCHENKO: That’s correct.

LEVITT: So what you’re implying is that each state sets its own standards. Okay, so there are these state boards of education who have all the power, it seems to me what you’re saying is, if we can get in front of those boards, and we can convince, say, even one of them of the wisdom of what we’re doing, they can flip a switch, although that’s probably way too simple, and put into motion a whole series of events which will lead in that state to the teaching of data being part of the math curriculum.

They have a petition (see link here) that they encourage people to fill out and send to their state boards.

He’s right that his solution is “way too simple.” In fact, for every state that I have worked with (16 states and Puerto Rico, as part of the ECEP Alliance), his description is downright wrong.

US States are all different, and they each own their own K-12 system. One of the important dimensions on which states differ is how much control remains at the state level (“state control”) and how much control is pushed down to districts and schools (“local control,” which is how California, Nebraska, and Massachusetts are all structured). What is being described is “state control,” but it still misses the complexity — it isn’t just the board that makes decisions. I have watched how Georgia (state control) and Michigan (local control) have created standards and curricula.

  • In Georgia, yes, there is a central control structure that makes decisions, but many other people have to be involved to make anything happen. I was part of a Georgia Department of Education effort to create a precalculus course that included programming — this is coming from that centralized control. Our committee alone was six people. The course was stopped by another committee of math teachers (secondary and higher ed) who decided that “a course that included programming couldn’t also be math.” Let’s set aside whether they were right (I don’t think they were); the reality is that those math teachers should get a voice, even in a central control state. Even if those 6-10 people want something, you can’t just jam a new course down the throats of teachers who don’t want to teach it.
  • In Michigan, each individual school district makes its own decisions. (In California, high school graduation requirements can vary by district.) Yes, there are standards at the state level in Michigan, and those standards are supported by assessment tests that are state-wide, but the assessment tests don’t cover everything — districts have a lot of leeway. Even just setting standards goes way beyond the board. I’ve watched Michigan build both its social science and computer science standards while I’ve been here. The effort to build these standards is broad and involves teachers from all over the state. There are big committees, and then there are still lots of other people involved to make these standards work in the individual districts.

Let’s imagine that Levitt’s worldview was right — six to ten people make all the decisions. Play it out. Who sets the standards (desired learning standards) for the new data science focus? Not just those six to ten people. Who defines the curriculum — resources, lesson plans, and assessments? Who prepares the teachers to teach the new curriculum? And in a local control state, how do you enforce these new standards with all those districts? Nothing as big as changing the US math curriculum happens with just those six to ten people.

This last point is close to home for all of us in computing education. Every CS ed researcher I know who is in a CS department struggles with getting their colleagues to understand, appreciate, and use research-based methods. Even if the Chair is supportive, there are lots of teachers involved, each with their own opinion. How much more complicated is a whole state.

Education in the United States is a vast system. I’ve mentioned before that I have an Alan Kay quote on a post-it on my monitor: “You can fix a clock. You have to negotiate with a system.” You can’t fix math in the US education system. You can only negotiate with it.

Freakonomics misunderstands why the US education system exists the way that it does, what makes it work (informed teachers), and how decisions are made and executed within that system.

November 4, 2019 at 7:00 am 9 comments
