## Freakonomics misunderstands what public education is, how it works, and how to change it

I am a fan of Freakonomics Radio. I have heard all the old ones (some more than once), and I keep up with the new ones. Freakonomics informs and inspires me, including many posts in this blog. So, I want to respond when they get it really wrong.

Episode 391 *America’s Math Curriculum Doesn’t Add Up* (see link here) is hosted by Steven Levitt (the economist) rather than the usual host Stephen Dubner (the journalist). The podcast is inspired by the struggles Levitt’s teenage children face with their mathematics classes. Levitt contends that the US mathematics curriculum is outdated and in serious need of reform. I agree with his premise. His interviews with Jo Boaler and Sally Sadoff are interesting and worth listening to. But there are huge holes in his argument, and his solution makes no sense at all.

Part of his argument is based on a poll they took through the Freakonomics twitter account.

MARTSCHENKO: So, we’ve been putting together a survey that we sent out to Freakonomics listeners. We asked our survey respondents which subjects they use in their daily life, traditional math and data-related. So trigonometry, geometry, calculus, versus more data-related skills like analyzing and interpreting data and visualizing it.

LEVITT: So what percent of people, say, use calculus on a daily basis?

MARTSCHENKO: About 2 percent said that they use calculus on a daily basis, and almost 80 percent say they never use it.

LEVITT: Okay. I would think calculus would get used more than trigonometry and geometry, although that would be hard if only 2 percent are using it. But what percent use trigonometry and geometry?

MARTSCHENKO: Yeah. Less than 2 percent of respondents said that they use trigonometry in their daily life, but over 70 percent of them said that they never use it.

LEVITT: And how about geometry?

MARTSCHENKO: Geometry was a little bit better. There were about 4 percent of respondents who said that they use geometry daily, but again, over 50 percent said that they never use it.

LEVITT: So it’s a pretty sad day when we’re celebrating the use of geometry because 4 percent of the people report using it.

I don’t dispute his results. Even engineers don’t use geometry or trigonometry *every* day, but they *have* to learn it. We don’t only teach subjects that people use on a daily basis. I don’t think about the American Revolution or the three branches of the US government every day, but it’s important for American citizens to know how their country came to be and how it’s structured. We hope that every voter knows the roles that they’re voting for, though they may not think about them daily.

One of the reasons we teach what we do is to provide the tools to learn other important things. Engineers and scientists have to know geometry and trigonometry to do what they do. We could wait until undergrad to teach geometry, trig, and calc — but that’s pretty late. There’s an argument that we should tell students what science and engineering is really about (and show them the real math), both to encourage them and to fully inform them.

The Freakonomics on-line survey misunderstands *why* we teach what we teach. It’s not just about *everyday*. It’s also about the things that every student will need *someday* (like understanding how impeachment works) and about the things that might inspire them to think about a *future day* when they are people who use calculus and trigonometry.

The moment that made me exclaim out loud while listening to the podcast was in the interview with David Coleman, CEO of the College Board. Levitt wants to replace some (all?) of the high school mathematics curriculum with a focus on data science. That’s an interesting proposal worth exploring. Levitt makes an important point — how do we teach teachers about data science?

LEVITT: But will teachers in AP Biology or AP Government have the skills to teach the data-fluency parts of their courses?

COLEMAN: One magnificent thing about teaching is, it’s often the most lively when the teacher himself or herself is learning something. I think the model of practiced expertise being the only way that teaching is exciting is false.

I think what’s more interesting is, can we create environments for teachers and students where together the data comes alive and fascinates them. The question is not to try to suddenly retrain the American teaching force to be data analysts, but instead design superb data experiences, superb courses, where the hunt for data and the experimentation is so lively that it excites them as well as their students. And then they together might be surprised at the outcomes.

I know of no data that says that a teacher’s “surprise” leads to better learning outcomes than a teacher who has significant content knowledge. Much the opposite — the evidence I know suggests that teachers only learn pedagogical content knowledge (how to teach the subject matter) when they develop sufficient expertise in the content area. Learning outcomes are improved by teachers knowing the content **and** how to teach it. The idea that classes are somehow better (more “lively”) when the teacher doesn’t know what’s going to happen makes no sense to me at all.

Finally, Levitt’s solution to reforming the mathematics curriculum is for all of us to sign a petition, because (he argues) there are only six to ten people in each state that we have to convince in order to reform each state’s mathematics curriculum.

LEVITT: So tell me, who makes the decisions? How does curriculum get set in the U.S., in education systems?

MARTSCHENKO: In public education, the people with power are those on the state boards of education. So each state will have a state board of education. There are typically six to 10 people on the board, and they’re the ones who make those decisions about the curriculum, what gets taught, how testing is done.

LEVITT: So literally this set of six to 10 people have the power to set the guidelines, say, for whether or not data courses are required.

MARTSCHENKO: That’s correct.

LEVITT: So what you’re implying is that each state sets its own standards. Okay, so there are these state boards of education who have all the power, it seems to me what you’re saying is, if we can get in front of those boards, and we can convince, say, even one of them of the wisdom of what we’re doing, they can flip a switch, although that’s probably way too simple, and put into motion a whole series of events which will lead in that state to the teaching of data being part of the math curriculum.

They have a petition (see link here) that they encourage people to fill out and send to their state boards.

He’s right that his solution is “way too simple.” In fact, for every state that I have worked with (16 states and Puerto Rico, as part of the ECEP Alliance), his description is downright wrong.

US States are all different, and they each own their own K-12 system. One of the important dimensions on which states differ is how much control remains at the state level (“state control”) and how much control is pushed down to districts and schools (“local control,” which is how California, Nebraska, and Massachusetts are all structured). What is being described is “state control,” but it still misses the complexity — it isn’t just the board that makes decisions. I have watched how Georgia (state control) and Michigan (local control) have created standards and curricula.

- In Georgia, yes, there is a central control structure that makes decisions, but so many other people are involved to make anything happen. I was part of a Georgia Department of Education effort to create a precalculus course that included programming — this is coming *from* that centralized control. Our committee alone was six people. The course was stopped by another committee of math teachers (secondary and higher ed) who decided that “a course that included programming couldn’t also be math.” Let’s set aside whether they were right (I don’t think they were); the reality is that those math teachers *should* get a voice, even in a central control state. Even if those 6-10 people want something, you can’t just jam a new course down the throats of teachers who don’t want to teach it.
- In Michigan, each individual school district makes its own decisions. (In California, high school graduation requirements can vary by district.) Yes, there are standards at the state level in Michigan, and those standards are supported by state-wide assessment tests, but the assessment tests don’t cover everything — districts have a lot of leeway. Even just setting standards goes way beyond the board. I’ve watched Michigan build both its social science and computer science standards while I’ve been here. The effort to build these standards is broad and involves teachers from all over the state. There are big committees, and then there are still lots of other people involved to make these standards work in the individual districts.

Let’s imagine that Levitt’s worldview was right — six to ten people make all the decisions. Play it out. Who sets the standards (desired learning standards) for the new data science focus? Not just those six to ten people. Who defines the curriculum — resources, lesson plans, and assessments? Who prepares the teachers to teach the new curriculum? And in a local control state, how do you enforce these new standards with all those districts? Nothing as big as changing the US math curriculum happens with just those six to ten people.

This last point is close to home for all of us in computing education. *Every* CS ed researcher I know who is in a CS department struggles with getting their colleagues to understand, appreciate, and use research-based methods. Even if the Chair is supportive, there are lots of teachers involved, each with their own opinion. How much more complicated is a whole **state**.

Education in the United States is a vast system. I’ve mentioned before that I have an Alan Kay quote on a post-it on my monitor: “You can fix a clock. You have to negotiate with a system.” You can’t fix math in the US education system. You can only negotiate with it.

Freakonomics misunderstands *why* the US education system exists the way that it does, *what* makes it work (informed teachers), and *how* decisions are made and executed within that system.

## Come to the NAS Workshop on the Role of Authentic STEM Learning Experiences in Developing Interest and Competencies for Technology and Computing

Register here. And view the agenda here.

**November 4, 2019, 1:00 p.m.–6:00 p.m.**

*(reception hour following)*

### Workshop

Role of Authentic STEM Learning Experiences in Developing Interest and Competencies for Technology and Computing

Keck Building, Room 100

500 5th St., NW

Washington, DC

**#STEMforCompTech**

The Board on Science Education of the National Academies of Sciences, Engineering, and Medicine will host a public workshop on November 4, 2019 to explore issues in STEM education. The workshop will illustrate the various ways in which stakeholders define and conceptualize authentic STEM learning opportunities for young people in grades K-12 in formal and informal settings, and what that means for the goals, design, and implementation of such experiences. Presenters will unpack the state of the evidence on the role of authentic STEM learning opportunities and promising approaches and strategies in the development of interest and competencies for technology and computing fields. A recurring theme throughout the workshop will be implications for increasing diversity and access to authentic STEM learning experiences among underserved young people.

Confirmed Speakers:

- Lisa Brahms, *Monshire Museum of Science* (virtual)
- Loretta Cheeks, *Strong TIES*
- Tamara Clegg, *University of Maryland*
- Jill Denner, *ETR*
- Ron Eglash, *University of Michigan*
- Sonia Koshy, *Kapor Center*
- Keliann LaConte, *Space Science Institute* (virtual)
- Amon Millner, *Olin College*
- Kylie Peppler, *University of California, Irvine*
- Jean Ryoo, *University of California, Los Angeles*
- Emmanuel Schanzer, *Bootstrap*
- Shirin Vossoughi, *Northwestern University* (virtual)
- David Weintrop, *University of Maryland*

Questions? Email us at STEMforCompTech@nas.edu

## How to change undergraduate computing to engage and retain more women

My Blog@CACM post for this month talks about the Weston et al paper (from last week), and about a new report from the Reboot Representation coalition (see their site here). The report covers what the Tech industry is doing to close the gender gap in computing and “what works” (measured both empirically and from interviews with people running programs addressing gender issues).

I liked the emphasis in the report on redesigning the experience of college students (especially female) who are majoring in computing. Some of their emphases:

- Work with community colleges, too. Community colleges tend to be better with more diverse students, and it’s where about half of undergraduates start today. If you want to attract more diverse students, that’s where to start.
- They encourage companies to offer “significant cash awards” to colleges that are successful with diverse students. That’s a great idea — computer science departments are struggling to manage undergraduate enrollment these days, and incentives to keep an eye on diversity will likely have a big impact.
- Grow computer science teachers *and*

The report is interesting — I recommend it.

## Results from Longitudinal Study of Female Persistence in CS: AP CS matters, After-school programs and Internships do not

NCWIT has been tracking their Aspirations in Computing award applicants for several years. The Aspirations award is given to female students to recognize their success in computing. Tim Weston, Wendy DuBow, and Alexis Kaminsky have just published a paper in ACM TOCE (see link here) about their six-year study with some 500 participants, exploring what led to persistence into CS in college. The results are fascinating and somewhat surprising — read all the way to the end of the abstract copied here:

While demand for computer science and information technology skills grows, the proportion of women entering computer science (CS) fields has declined. One critical juncture is the transition from high school to college. In our study, we examined factors predicting college persistence in computer science and technology related majors from data collected from female high school students. We fielded a survey that asked about students’ interest and confidence in computing as well as their intentions to learn programming, game design, or invent new technology. The survey also asked about perceived social support from friends and family for pursuing computing as well as experiences with computing, including the CS Advanced Placement (AP) exam, out-of-school time activities such as clubs, and internships. Multinomial regression was used to predict persistence in computing and tech majors in college. Programming during high school, taking the CS Advanced Placement exam, and participation in the Aspirations awards program were the best predictors of persistence three years after the high school survey in both CS and other technology-related majors. Participation in tech-related work, internships, or after-school programs was negatively associated with persistence, and involvement with computing sub-domains of game design and inventing new applications were not associated with persistence. Our results suggest that efforts to broaden participation in computing should emphasize education in computer programming.

There’s also an article at *Forbes* on the study which includes recommendations on what works for helping female students to persist in computing, informed by the study (see link here). I blogged on this article for CACM here.

That AP CS is linked to persistence is something we’ve seen before, in earlier studies without the size or length of this one. It’s nice to see it revisited here. I had not seen before that high school work experience, internships, and after-school programs did *not* work. The paper places a particular emphasis on *programming*:

While we see some evidence for students’ involvement in computing diverging and stratifying after high school, it seems that involvement in general tech-related fields *other than programming* in high school does not transfer to entering and persisting in computer science in college for the girls in our sample. Understanding the centrality of programming is important to the field’s push to broaden participation in computing. (Italics in original.)

This is an important study for informing what we do in high school CS. Programming is front-and-center if we want girls to persist in computing. There are holes in the study. I keep thinking of factors that I wish that they’d explored, but they didn’t — nothing about whether the students did programming activities that were personally or socially meaningful, nothing about role models, and nothing about mentoring or tutoring. This paper makes a contribution in that we now know more than we did, but there’s still lots to figure out.

## An Analysis of Supports and Barriers to Offering Computer Science in Georgia Public High Schools: Miranda Parker’s Defense

*Miranda Parker defends her dissertation this Thursday. It’s a really fascinating story, trying to answer the question: Why does a high school in Georgia decide (or not) to offer computer science? She did a big regression analysis, and then four detailed case studies. Readers of this blog will know Miranda from her guest blog post on the Google-Gallup polls, her SCS1 replication of the multi-lingual and validated measure of CS1 knowledge, her study of teacher-student differences in using ebooks, and her work exploring the role of spatial reasoning to relate SES and CS performance (work that was part of her dissertation study). I’m looking forward to flying down to Atlanta and being there to cheer her on to the finish.*

**Title: **An Analysis of Supports and Barriers to Offering Computer Science in Georgia Public High Schools

Miranda Parker

Human-Centered Computing Ph.D. Candidate

School of Interactive Computing

College of Computing

Georgia Institute of Technology

**Date:** Thursday, October 10, 2019

**Time:** 10AM to 12PM EST

**Location:** 85 5th Street NE, Technology Square Research Building (TSRB), 2nd floor, Room 223

Committee:

Dr. Mark Guzdial (Advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Betsy DiSalvo, School of Interactive Computing, Georgia Institute of Technology

Dr. Rebecca E. Grinter, School of Interactive Computing, Georgia Institute of Technology

Dr. Willie Pearson, Jr., School of History and Sociology, Georgia Institute of Technology

Dr. Leigh Ann DeLyser, CSforAll Consortium

**Abstract**:

There is a growing international movement to provide every child access to high-quality computing education. Despite the widespread effort, most children in the US do not take any computing classes in primary or secondary schools. There are many factors that principals and districts must consider when determining whether to offer CS courses. The process through which school officials make these decisions, and the supports and barriers they face in the process, is not well understood. Once we understand these supports and barriers, we can better design and implement policy to provide CS for all.

In my thesis, I study public high schools in the state of Georgia and the supports and barriers that affect offerings of CS courses. I quantitatively model school- and county-level factors and the impact these factors have on CS enrollment and offerings. The best regression models include prior CS enrollment or offerings, implying that CS is likely sustainable once a class is offered. However, large unexplained variances persist in the regression models.

To help explain this variance, I selected four high schools and interviewed principals, counselors, and teachers about what helps, or hurts, their decisions to offer a CS course. I build case studies around each school to explore the structural and people-oriented themes the participants discussed. Difficulty in hiring and retaining qualified teachers in CS was one major theme. I frame the case studies using diffusion of innovations providing additional insights into what attributes support a school deciding to offer a CS course.

The qualitative themes gathered from the case studies and the quantitative factors used in the regression models inform a theory of supports and barriers to CS course offerings in high schools in Georgia. This understanding can influence future educational policy decisions around CS education and provide a foundation for future work on schools and CS access.

## Task-specific programming for and about computing education (Precalculus TSP Part 5 of 5)

I am exploring task-specific programming as a direct outgrowth of my work on GaComputes, ECEP, and ebooks. I’ve worked hard at helping computing education to grow in the US, but it’s not growing much (see my September Blog@CACM post for stats on that). There are too few people learning with the power of computing. It’s because we make programming so hard. We need to make programming more *accessible*, and one way to do that is to make it *easier*.

Why do we need to make it more accessible? My answer is: in order for people to use computer science for learning *everything else*. In 2009, when Matthias and Shriram wrote “Why computer science doesn’t matter” (see paper here), I hated it. Of course, computer science matters! Now I realize that they’re *right*. Nobody gets turned away from college admissions because they didn’t have high school CS. *MANY* students get turned away because they can’t pass Algebra 1. Other students don’t finish their degrees because they can’t get past Calculus. This other stuff *really* matters. I believe that we can use programming to help learn the stuff that *really matters*.

A key insight for me is that what students really use in Bootstrap:Algebra or even in Scratch is a small piece of programming. (I talked about this a good bit in my SIGCSE keynote.) We can reduce how much of programming we teach and still get huge benefits. Essentially, the students in Scratch and Bootstrap:Algebra are doing task-specific programming. I’m just going one step further to strip away even the trappings of a more general programming language. I’m making it as small as I can (but large enough to cover a learner’s task), so that we can increase usability, and thus increase the probability that we can apply programming to improve learning outcomes in other disciplines.

But it’s still programming, so the insights and theories of computing education research (CER) deeply influence this work on task-specific programming. In turn, task-specific programming offers the opportunity to ask some of our CER questions in new contexts. Here are two examples.

## Notional Machines

At the Dagstuhl Seminar on Notional Machines (see post by Ben Shapiro), there was a key moment for me. Someone said something there about “Part of the notional machine for algebra.” I stopped and went all academic on them. “Wait a minute — if that’s a *real* rule used for evaluating algebra, then it’s not a *notional machine*. Notional machines are simplifications. That’s *real* algebra, not a *notional* machine.” There was a bit of a fight after that. It’s kind of a blur now.

In my two prototypes, I want the mathematics to *be the notional machine*. The notional machine for the image filter builder is matrix arithmetic and scalar multiplication. Underneath, it’s doing something more complicated, but I want students to completely understand what’s going on in terms of matrices.
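To make that notional machine concrete: a grayscale image is a matrix of brightness values, and a filter is just matrix arithmetic on it. Here is a minimal sketch of the idea in Python with NumPy (my illustration, not the prototype’s actual code), using scalar multiplication to darken an image:

```python
import numpy as np

# A tiny 2x3 "grayscale image": each entry is a brightness value 0-255.
image = np.array([[ 10,  20,  30],
                  [100, 150, 200]])

# Scalar multiplication, exactly as in matrix arithmetic:
# every element is multiplied by 0.5, darkening the image.
darker = np.clip(0.5 * image, 0, 255).astype(int)

print(darker)
```

The point is that nothing beyond precalculus matrix arithmetic is needed to predict the result: every element is halved, and a student can check that against the displayed matrix.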

The notional machine for the texture wave builder is a bit more complicated. My goal is for the notional machine to be *just* the wave function, but it’s a bit more than that. It’s also how the wave function maps to RGB values in a picture, and I’m actually not sure I have it right yet. If you have a wave where you just manipulate red here, and a wave that manipulates gray there (where red=green=blue at all pixels), then how do I combine the red component with the gray component in some reasonable way? I’ve tried a few ways already. I’ve thought about adding *another* task-specific language, just to let the students specify the mapping.

Of course, these are really simple programming models (no variables, no user-defined functions), so the notional machines are really simple, too. As much as possible, the notional machine is the context itself — math, or math+graphics. When does learning this notional machine help you learn other notional machines later?

And what have you learned if you learn those? Does task-specific programming help you learn more within the task domain? I hope that learning the matrix notional machine for image filters helps you with matrix manipulation later. Do students really learn precalculus from these prototypes?

If you learn the notional machine for task-specific programming, does that help you learn other notional machines later? There still is a computational notional machine embedded in there, e.g., about controlling the computational agent, about order of execution, and so on. Does that knowledge transfer to other computational contexts?

## Structure-behavior-function models

My student Katie Cunningham is studying the use of structure-behavior-function (SBF) models for understanding how students come to understand programs. (I defined SBF models here). In short, this theoretical framing helps us understand the relationships between students learning to read code, to write code, to trace code, and to explain code in plain English.

Task-specific programming doesn’t fit the same way in that model. There is no writing of code. There is *structure* to the programs, but more of it is embedded in the environment than in the textual language. One of the insights from the participatory design sessions that we’ve had with teachers is that the environment is so much more powerful than the language. Consider the statement in my wave texture generator `Set Gray to 4sin(5(x-3))+0`. That does completely define the structure and transformation. However, the picture below is so much more powerful and is what students really see — multiple, linked representations that explain that one line of code:

*Behavior* is complicated. As I said above, I want the behavior to be the notional machine of the mathematics. To trace the operation above, a student might plug in values for X to see what Y is generated, and check the plot and the wave to see if it makes sense. But it’s not like variable tracing.
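That tracing activity can be sketched in a few lines. This is my own illustration in Python, not the wave builder itself: plug values of x into the wave statement `Set Gray to 4sin(5(x-3))+0` and read off the gray values it generates.

```python
import math

def gray(x):
    # The wave statement: Set Gray to 4sin(5(x-3))+0
    # amplitude 4, frequency 5, phase shift 3, vertical offset 0
    return 4 * math.sin(5 * (x - 3)) + 0

# "Plug in values for X to see what Y is generated"
for x in [3.0, 3.1, 3.2]:
    print(f"x={x:.1f} -> gray={gray(x):+.3f}")
```

A student checking this against the plotted wave would see the curve cross zero at x=3 (the phase shift) and rise toward the amplitude of 4, which is the kind of sense-making I mean by tracing against the mathematics rather than against variables.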

But the explain in plain English task of figuring out the *function* is still there. Check out this image filter program:

Readers who know Media Computation or graphics will likely recognize that as the program to compute the negation of an image. How do we help students to do that? How do we help students to see the program and figure out what it *does* at a macroscopic level? I built tools into the wave texture builder to make it possible to see the role of each wave in the overall texture, but if you were to describe a texture as a “tightly-woven red and green plaid,” I’m not sure how you’d get that purpose/function from the definition of the waves. The problem of figuring out the *function* is pretty much the same in task-specific programming.
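For readers who don’t know Media Computation: in the matrix view, negation just subtracts every channel value from 255, so black and white (and red and cyan) swap. A tiny sketch of that arithmetic (my illustration, not the prototype’s representation):

```python
import numpy as np

# A handful of 8-bit channel values from an image, as a matrix.
pixels = np.array([[  0,  64, 128],
                   [192, 255,  30]])

# Negation: each value v becomes 255 - v.
negated = 255 - pixels
print(negated)
```

The macroscopic function (“this inverts the image”) is exactly what the element-by-element arithmetic does not announce, which is the explain-in-plain-English challenge.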

## Where to go from here

So this is the end of the series. I’ve described two prototypes for task-specific programming in precalculus (matrix transformations and wave functions), and explored the implications of task-specific programming for research about programming, in education, and in relation to computing education research (this post).

I did these as blog posts, in part, because I’m not yet sure where I might publish and fund this work.

- Most learning sciences work focuses on students. I’m focusing on teachers first.
- Most CS education work focuses on learning about *CS*, especially programming. I’m focusing on using programming for learning *something else*.
- Most programming languages work focuses, well…on *languages*. I’m focusing on programming where languages are a second-class citizen.
- Most work on CS in K-12 is focused on either computational thinking or teaching standalone CS classes. I’m focusing on integrating computing into classes with a goal of reducing computational thinking as much as possible.
- Most NSF funding in CS Education is tied to work with *schools* and *districts* (RPP), and is about *CS integration* in elementary school and *CS-specific* classes in high school (STEM+C). I’m doing design work with *teachers* for *CS integration* at the *high school level*.
- There is funding (like NSF DRK12 and Cyberlearning) for developing interesting technology for learning, but I’m at the design stage. DRK12 has exploratory grants, but the funding level is too low to pay for my collaborators and my students. How do you get something like this started?

I’m seeing more clearly the point that Greg Nelson and Amy Ko talked about at ICER last year (see paper here). This is *design-first* work. It’s hard to find a home for that.

I’d appreciate your advice. Where is there a research community that’s concerned about these kinds of things? Where should I be publishing this work? Where should I be looking for funding?

## Task Specific Programming will only matter if it solves a user’s problem (Precalculus TSP Part 4 of 5)

This is the fourth in a five part series about Precalculus Task-Specific Programming. I presented two prototypes in Parts 1 and 2, and discussed what I’m exploring about programming in Part 3.

I’ve shown my prototypes to several teachers — some computer science (e.g., I presented the first prototype at the Work In Progress Workshop at ICER in Toronto) and a half-dozen math teachers. The computer science teachers have been pretty excited, and I have had several requests for the system to try out with various student groups.

Why am I looking at precalculus? Because it’s what leads to success in college calculus. I’m influenced by Sadler and Sonnert’s work showing that high school calculus isn’t the critical thing to support success in undergraduate calculus (see article here). It’s precalculus. Undergraduate calculus is a gatekeeper. Students don’t get STEM undergraduate degrees because they can’t get through calculus. So if we want more students to succeed in undergraduate STEM, we want high school precalculus to get better, in terms of more inclusive success.

Precalculus is a pretty abstract set of topics (see an example list here). For the most part, it’s forward-looking: “You’ll need this when you get to Calculus.” My idea is to teach precalculus with concrete contexts made possible by computing, like image filters. I want more students to find precalculus more engaging and more personally meaningful, leading to more learning.

So, might my prototypes help students learn precalculus?

Math teachers have generally been, “*Meh*.”

I’ve had four teachers tell me that it’s “interesting.” One math teacher was blunt and honest — **neither of these tools solves a problem that students have**. Basic matrix notation and element-by-element matrix operations are the *easiest* parts of matrices. Precalculus students can already (typically) figure out how to plot a given wave equation.

What’s hard? Students struggle with forms of matrix multiplication and determinants. They struggle with what each of the terms in the wave function does, and what influences periodicity. Seeing the graphed points is good, but having the values display in symbolic form like `(3*pi)/2` would be more powerful for making a connection to the unit circle. I’m learning these in a participatory design context, so I actually pushed more on what would be useful and what I need to do better — I have a much longer list of things to do better than just these points.
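
Here’s a rough sketch of what the math teachers were suggesting — plain Python with a hypothetical `as_pi_multiple` helper, not my actual prototype — rendering x-values as symbolic multiples of pi rather than as decimals:

```python
from fractions import Fraction

def as_pi_multiple(frac):
    """Render a Fraction multiple of pi in symbolic form,
    e.g. Fraction(3, 2) -> '(3*pi)/2', matching unit-circle notation."""
    if frac == 0:
        return "0"
    if frac.denominator == 1:
        return "pi" if frac.numerator == 1 else f"{frac.numerator}*pi"
    if frac.numerator == 1:
        return f"pi/{frac.denominator}"
    return f"({frac.numerator}*pi)/{frac.denominator}"

# x-values at the quarter turns around the unit circle
print([as_pi_multiple(Fraction(k, 2)) for k in range(5)])
# → ['0', 'pi/2', 'pi', '(3*pi)/2', '2*pi']
```

Labeling graphed points this way keeps the connection to the unit circle visible, instead of burying it under decimals like 4.712.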

The math teachers have generally liked that I am paying attention to disciplinary literacy. I’m using their notations to communicate in the ways that things are represented in their math textbooks. I am trying to communicate in the way that they want to communicate.

Here’s the big insight that I learned from the mathematics teachers with whom I’ve spoken: **Teachers aren’t going to devote class time or their prep time to something that doesn’t solve their problems.** Some teachers are willing to put time into additional enrichment activities — if the teacher needs more of those. As one teacher told me, “Most math classes are less about *more exploration*, and more about *less failure*.” The point of precalculus is to prepare students to pass calculus. If you want more diverse students to get into STEM careers, more diverse students have to get through calculus. Precalculus is important for that. The goal is *less failure*, *more success*, and more student understanding of mathematics.

This insight helps me understand why some computational tools just don’t get a foothold in schools. At the risk of critiquing a sacred cow, it helps explain why Logo didn’t scale. Seymour Papert and crew developed turtle geometry, which Andrea diSessa and Hal Abelson showed was *really* deep. But did Logo actually solve a problem that students and teachers had? Turtle graphics is beautiful, and being body syntonic is great, but that’s not the students’ problem with math. Most of their real problems with mathematics had to do with the Cartesian coordinate system, not with being able to play at being a turtle. Every kid can walk a square and a triangle. Did students learn general problem-solving skills? Not really. So, why should teachers devote time to something that didn’t reduce student failure in mathematics?

It would have been hard to be disciplinarily literate when Logo and turtle geometry were invented. Logo was originally developed on teletype machines. (See Cynthia Solomon’s great keynote about this story.) The turtle was originally a robot. Even when they moved Logo to the Apple II, they could not match the representations in the kids’ textbooks, the representations that the teachers were most comfortable with. So instead, we asked students to think in terms of `fd 200 rt 90` instead of `(x,y)`. Basic usability principles tell us to use what the user is familiar with. Logo didn’t. It demanded more of the students and teachers, and maybe it was worthwhile in the long run — but that tradeoff wasn’t obvious to the teachers.
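
To make that representational gap concrete, here’s a minimal sketch (plain Python, not Logo) of a tiny turtle interpreter that translates `fd`/`rt` commands into the `(x,y)` points a textbook would use:

```python
import math

def run_turtle(commands):
    """Interpret a tiny Logo subset ('fd N', 'rt N') and return the
    Cartesian (x, y) points the turtle visits, starting at (0, 0)
    heading along the positive x-axis."""
    x, y, heading = 0.0, 0.0, 0.0   # heading in degrees
    points = [(x, y)]
    for cmd, arg in commands:
        if cmd == "fd":
            x += arg * math.cos(math.radians(heading))
            y += arg * math.sin(math.radians(heading))
            points.append((round(x, 6), round(y, 6)))
        elif cmd == "rt":
            heading -= arg    # a right turn is clockwise
    return points

# The square a student walks with `fd 200 rt 90`, four times over...
square = run_turtle([("fd", 200), ("rt", 90)] * 4)
print(square)
# ...versus the textbook's (x, y) corners: (0,0), (200,0), (200,-200), (0,-200)
```

The student has to do this translation in their head; the textbook (and the teacher) never asks for it.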

I have a new goal:

*I want to provide a programming experience that can be used in five minutes which can be integrated into a precalculus class to improve on student learning.*

I want to use programming to solve a learning problem in another domain. Programming won’t enter the classroom if it doesn’t solve a teacher’s problem, which is what they perceive as the student’s problem. *Improving student learning is my users’ (teachers’) goal. Good UI design is about helping the user achieve their goals.*

I’ve started on designs for two more precalc prototypes based on the participatory design sessions I’ve had, and I’m working on improving the wave texture generator to better address real learning problems. The work I’m doing with social science educators is *completely* driven by teachers and student learning challenges. That’s one of the reasons why I don’t have working prototypes there yet — it’s harder to address *real* problems. My precalc prototypes were based on my read of literature on precalculus. That’s never going to be as informed as the teacher-in-the-classroom.

Now, there’s another way in which we might argue that these prototypes help with education — maybe they help with learning something about computer science? Here’s a slide from my SIGCSE 2019 keynote, referencing the learning trajectories work by Katie Rich, Carla Strickland, T. Andrew Binkowski, Cheryl Moran, and Diana Franklin (see this blog post).

You’re not going to learn anything about #1 from my prototypes — you can’t write imprecise or incomplete code in these task-specific programming environments. You can learn about #2 — different sets of transformations *can* produce the same outcomes. You definitely learn about #3 — programs are made by assembling instructions from a (*very*) limited set. If I were to go on and look at the Rich et al. debugging learning trajectories (see paper here), there’s a lot of that in my prototypes, e.g., “Outcome can be used to decide whether or not there are errors.”
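
As a plain-Python illustration of #2 (not taken from my prototypes): two different sets of image transformations that produce exactly the same outcome.

```python
image = [[1, 2],
         [3, 4]]

def flip_horizontal(img):
    # Reverse each row: mirror the image left-to-right.
    return [row[::-1] for row in img]

def flip_vertical(img):
    # Reverse the order of the rows: mirror the image top-to-bottom.
    return img[::-1]

def rotate_180(img):
    # Reverse the rows and each row: a half-turn rotation.
    return [row[::-1] for row in img[::-1]]

# Two different instruction sequences, one outcome:
assert rotate_180(image) == flip_vertical(flip_horizontal(image))
print(rotate_180(image))  # → [[4, 3], [2, 1]]
```

Students can discover this equivalence by experimenting, which is exactly the kind of outcome-checking the debugging trajectories describe.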

So here’s the big research question: *Could students learn something about the nature of programming and programs from using task-specific programming?* I predict yes. Will it be transferable? To a text or blocks language? Maybe…

Here’s the bigger research question that these prototypes have me thinking about. For the moment, imagine that we had tools like these which could be used reasonably in less than five minutes of tool-specific learning, and could be dropped into classes as part of a one-hour activity. Imagine we could have one or two per class (e.g., algebra, geometry, trigonometry, biology, chemistry, and physics), throughout middle and high school. *Now*: Does it transfer? If you saw a dozen little languages before your first traditional, general-purpose programming language, would you have a deeper sense of what programs did (e.g., would you know that there is no Pea-esque “super-bug” homunculus)? Would you have a new sense for what the activity of programming is about, including debugging?

I don’t know, but I think it’s worth exploring task-specific programming more to see if it works.

**Request to the reader:** I plan to continue working on precalculus task-specific programming (as well as in social studies). If you know a precalculus teacher or mathematics education researcher who would be interested in collaborating with me (e.g., to be a design informant, to try out any of these tools, to collaborate on design or assessment or evaluation), please let me know. It’s been hard to find math ed people who are willing to work with me on something this weird. Thanks!

* In the South, if you hear “Bless your heart!” you should know that you’re likely being insulted. It sort of means, “You are so incompetent that you’re pitiful.” I’ve learned the equivalent from teachers now. It’s “That would make a nice *enhancement* activity” or “We might use that *after testing*.” In other words, “I’d never use this. It doesn’t solve any of my problems.”
