Archive for September, 2019

Task Specific Programming will only matter if it solves a user’s problem (Precalculus TSP Part 4 of 5)

This is the fourth in a five part series about Precalculus Task-Specific Programming. I presented two prototypes in Parts 1 and 2, and discussed what I’m exploring about programming in Part 3.

I’ve shown my prototypes to several teachers — some computer science teachers (e.g., I presented the first prototype at the Work In Progress Workshop at ICER in Toronto) and a half-dozen math teachers. The computer science teachers have been pretty excited, and I have had several requests to try the system out with various student groups.

Why am I looking at precalculus?  Because it’s what leads to success in college calculus. I’m influenced by Sadler and Sonnert’s work showing that high school calculus isn’t the critical thing to support success in undergraduate calculus (see article here). It’s precalculus. Undergraduate calculus is a gatekeeper. Students don’t get STEM undergraduate degrees because they can’t get through calculus. So if we want more students to succeed in STEM undergraduate programs, we want high school precalculus to get better, in terms of more inclusive success.

Precalculus is a pretty abstract set of topics (see an example list here).  For the most part, it’s forward-looking: “You’ll need this when you get to Calculus.”  My idea is to teach precalculus with concrete contexts made possible by computing, like image filters. I want more students to find precalculus more engaging and more personally meaningful, leading to more learning.

So, might my prototypes help students learn precalculus?

Math teachers have generally been, “Meh.”

I’ve had four teachers tell me that it’s “interesting*.” One math teacher was blunt and honest — neither of these tools solves a problem that students have. Basic matrix notation and element-by-element matrix operations are the easiest parts of matrices. Precalculus students can already (typically) figure out how to plot a given wave equation.

What’s hard? Students struggle with forms of matrix multiplication and determinants. They struggle with what each of the terms in the wave function does, and what influences periodicity. Seeing the graphed points is good, but having the values display in symbolic form like (3*pi)/2 would be more powerful for making a connection to a unit circle. I’m learning these things in a participatory design context, so I actually pushed more on what would be useful and what I need to do better — I have a much longer list of things to do better than just these points.

The math teachers have generally liked that I am paying attention to disciplinary literacy. I’m using their notations to communicate in the ways that things are represented in their math textbooks. I am trying to communicate in the way that they want to communicate.

Here’s the big insight that I learned from the mathematics teachers with whom I’ve spoken: Teachers aren’t going to devote class time or their prep time to something that doesn’t solve their problems. Some teachers are willing to put time into additional, enrichment activities — if the teacher needs more of those. As one teacher told me, “Most math classes are less about more exploration, and more about less failure.” The point of precalculus is to prepare students to pass calculus.  If you want more diverse students to get into STEM careers, more diverse students have to get through calculus. Precalculus is important for that. The goal is less failure, more success, and more student understanding of mathematics.

This insight helps me understand why some computational tools just don’t get a foothold in schools. At the risk of critiquing a sacred cow, this helps to explain why Logo didn’t scale. Seymour Papert and crew developed turtle geometry, which Andrea diSessa and Hal Abelson showed was really deep. But did Logo actually solve a problem that students and teachers had? Turtle graphics is beautiful, and being body syntonic is great, but that’s not the students’ problem with math. Most of their real problems with mathematics had to do with the Cartesian coordinate system, not with being able to play at being a turtle. Every kid can walk a square and a triangle. Did students learn general problem-solving skills? Not really. So, why should teachers devote time to something that didn’t reduce student failure in mathematics?

It would have been hard to be disciplinarily literate when Logo and turtle geometry were invented. Logo was originally developed on teletype machines. (See Cynthia Solomon’s great keynote about this story.) The turtle was originally a robot. Even when they moved Logo to the Apple II, they could not match the representations in the kids’ textbooks, the representations that the teachers were most comfortable with. So instead, we asked students to think in terms of fd 200 rt 90 instead of (x,y). Basic usability principles tell us to use what the user is familiar with. Logo didn’t. It demanded more of the students and teachers, and maybe it was worthwhile in the long run — but that tradeoff wasn’t obvious to the teachers.

I have a new goal:

I want to provide a programming experience that can be used in five minutes which can be integrated into a precalculus class to improve student learning.

I want to use programming to solve a learning problem in another domain. Programming won’t enter the classroom if it doesn’t solve a teacher’s problem, which is what they perceive as the student’s problem. Improving student learning is my users’ (teachers’) goal. Good UI design is about helping the user achieve their goals.

I’ve started on designs for two more precalc prototypes based on the participatory design sessions I’ve had, and I’m working on improving the wave texture generator to better address real learning problems. The work I’m doing with social science educators is completely driven by teachers and student learning challenges. That’s one of the reasons why I don’t have working prototypes there yet — it’s harder to address real problems.  My precalc prototypes were based on my read of literature on precalculus. That’s never going to be as informed as the teacher-in-the-classroom.

Now, there’s another way in which we might argue that these prototypes help with education — maybe they help with learning something about computer science? Here’s a slide from my SIGCSE 2019 keynote, referencing the learning trajectories work by Katie Rich, Carla Strickland, T. Andrew Binkowski, Cheryl Moran, and Diana Franklin (see this blog post).

You’re not going to learn anything about #1 from my prototypes — you can’t write imprecise or incomplete code in these task-specific programming environments. You can learn about #2 — different sets of transformations can produce the same outcomes. You definitely learn about #3 — programs are made by assembling instructions from a (very) limited set. If I were to go on and look at the Rich et al. debugging learning trajectories (see paper here), there’s a lot of that in my prototypes, e.g., “Outcome can be used to decide whether or not there are errors.”

So here’s the big research question: Could students learn something about the nature of programming and programs from using task-specific programming? I predict yes. Will it be transferable? To text or blocks language? Maybe…

Here’s the bigger research question that these prototypes have me thinking about. For the moment, imagine that we had tools like these which could be used reasonably in less than five minutes of tool-specific learning, and could be dropped into classes as part of a one hour activity. Imagine we could have one or two per class (e.g., algebra, geometry, trigonometry, biology, chemistry, and physics), throughout middle and high school. Now: Does it transfer? If you saw a dozen little languages before your first traditional, general-purpose programming language, would you have a deeper sense of what programs did (e.g., would you know that there is no Pea-esque “super-bug” homunculus)? Would you have a new sense for what the activity of programming is about, including debugging?

I don’t know, but I think it’s worth exploring task-specific programming more to see if it works.

Request to the reader: I plan to continue working on precalculus task-specific programming (as well as in social studies).  If you know a precalculus teacher or mathematics education researcher who would be interested in collaborating with me (e.g., to be a design informant, to try out any of these tools, to collaborate on design or assessment or evaluation), please let me know. It’s been hard to find math ed people who are willing to work with me on something this weird. Thanks!


* In the South, if you hear “Bless your heart!” you should know that you’re likely being insulted.  It sort of means, “You are so incompetent that you’re pitiful.”  I’ve learned the equivalent from teachers now.  It’s “That would make a nice enhancement activity” or “We might use that after testing.”  In other words, “I’d never use this. It doesn’t solve any of my problems.”

 

September 30, 2019 at 7:00 am 9 comments

Task Specific Programming vs Coding: Designing programming like we design other user interfaces (Precalculus TSP Part 3 of 5)

In the last two posts, I described a prototype task-specific programming tool for building image filters from matrix manipulations, and for building textures out of wave functions.

So, let’s tackle the big question here: Is this programming?

These are both programs. Both of them are specifications of process. Both of them allow for an arbitrary number of operations. In the image filter example (second example), the order of operations can matter (e.g., if you change the red matrix through scalar multiplication, then add or subtract it with another matrix, versus the reverse order). The image filter is also defining a process which can be applied to an arbitrary input (picture).
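To make the order-dependence concrete, here is a minimal sketch in Python with NumPy (my illustration, not the prototypes’ LiveCode; the matrix values are invented): scaling the red matrix and then adding the green matrix gives a different result than adding first and then scaling.

```python
import numpy as np

# Hypothetical 2x2 red and green channel matrices (values 0-255).
red = np.array([[200.0, 100.0], [50.0, 0.0]])
green = np.array([[10.0, 20.0], [30.0, 40.0]])

# Order 1: scale the red matrix by 0.5, then add the green matrix.
scaled_then_added = 0.5 * red + green      # [[110., 70.], [55., 40.]]

# Order 2: add the two matrices first, then scale the result by 0.5.
added_then_scaled = 0.5 * (red + green)    # [[105., 60.], [40., 20.]]

print(np.array_equal(scaled_then_added, added_then_scaled))   # False
```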

Wil Doane pointed out that this is not coding. No program code is written — not text, not blocks, not even a spreadsheet formula. This is programming by layering transformations which are selected through radio buttons, pull-down menus, and text areas. It’s really applying HCI techniques to constructing a program.

Probably the big sticking point is just how limited the programming language is here. Notice that I’m using the term task-specific programming and not task-specific programming language in this series. The programming language is important here as a notation: What transformations am I using here, and what’s the order of execution? If you didn’t get the effect you wanted, the language is important for figuring out which card you want to change. I’m working on more prototypes where the language plays a similar role: A notation for explanation and rationalization. Amy Ko has a paper I like where she asks, “What is a programming language, really?” (See link here.) I’m not sure that the use of programming language here fits into her scheme. Overall, the focus in these two prototypes is on the programming, not the programming language.

I spent several months in Spring and Summer reading in the programming language literature — some research papers, but more books like the Little series (e.g., The Little Prover), Shriram Krishnamurthi’s Programming Languages: Application and Interpretation, and Beautiful Racket. I started a Pharo MOOC because it covered how to build domain-specific programming languages. I plan to return to more programming languages research, techniques, and tools in the future. I have a lot to learn about what is possible in programming, about notations for programming, and about alternative models of programming. I need better tools, too, because I’m certainly going to need more horsepower than LiveCode.

But as I started designing the prototypes, I realized that my focus was on the user interface, not the language. Of course, text and blocks are user interfaces, but they use few user interface mechanisms. Even what I’m doing here is a far cry from what’s possible in user interfaces. I am using common UI widgets like buttons, menus, and text areas in order to specify a program. I realized that I didn’t need programming language techniques for these prototypes. By any measure of programming language quality, these are poorly-designed languages. My real focus is on the design process. This approach worked for getting prototypes built so that I could start running participatory design sessions with teachers.

The goal of task-specific programming: Usability for Integration

Here’s the goal I set out for myself:

I want to provide a programming experience that can be used in five minutes which can be integrated into a precalculus class.

In the small sessions I’ve had with 1 or 2 people, yeah, I think it worked.

  • I believe that this user interface is easier to learn and use than the comparable user interface (to do the same tasks) in text-based languages or even a block-based programming language like Snap! or GP.
  • I’d even bet that the language itself (seen in the above screenshots) would be less usable if it weren’t embedded in the environment, e.g., those two examples above are not that understandable outside of the two prototypes. The environment (e.g., visualizations, UI elements, matrix displays, etc.) is providing more information than the language is.

There are some interesting usability experiments to explore in the future with task-specific programming.  The empirical evidence we have so far on task-specific programming is pretty impressive, as I described in my June 2019 Blog@CACM post (see link here); that post has since been published in the September CACM on the topic of “designing away Computational Thinking” (see link here), which is what I was trying to do. Sarah Chasins’s published work showed that she had people solving real programming problems in less than 10 minutes. Vega-Lite is amazing, and we’re finding social studies teachers building visualizations in it in less than 20 minutes.  I’m curious to explore other directions in improving usability through these tiny, task-specific languages. For example, it is very easy here to represent the language in something other than English words. Perhaps we might use task-specific programming to make programming more accessible to non-English speakers by targeting languages other than English as the notation.

There’s a cost to that ease of use. There’s a commensurate loss in generalizability. This is laser-focused on TASK-SPECIFIC programming. My prototypes are not Turing-complete. There are no loops, conditionals, or variables. There is decomposition (e.g., which matrix transformations and wave equations do I need for my desired goal), but no abstraction.

I’m inspired by Amy Ko’s blog post: Programming languages are the least usable, but most powerful human-computer interfaces ever invented (see post here).

  • Most attempts to address the usability problem are about making programming languages more usable, e.g., less complicated text languages or shifting modality to block-based languages. In general, though, the languages are just as powerful — and are still harder to understand than most students and teachers are willing to bear.
  • There have been attempts to embed programming languages within applications, e.g., VBA in Office applications or JavaScript to script Photoshop. Again, these languages tend to be just as powerful, and not much more usable.
  • I’m explicitly starting from an application (e.g., an image filter) and adding a programming language that only provides a notation for the facilities. It’s less powerful. I hope it’s more usable. I’m designing to be more like Excel and less like C.

Here’s the big idea here: We should design programming languages as we design other user interfaces. We should involve the users. We should aim to achieve the users’ goals. We should empower users. We should reduce their cognitive load.

Task-specific languages certainly can include more programming language features.  For example, including variables with types makes sense for data science tasks (as in Bootstrap:Data Science). Those are part of data science tasks, and can help students avoid errors.  The critical part is including the components that help the learners with their tasks. My use of task-specific programming is really about learner-centered design of programming languages.

I have submitted two NSF proposals about building task-specific programming for precalculus and for social studies, both rejected. I spoke to both program officers. One told me it was impractical to propose a new programming language, and especially multiple languages. The officer pointed out that a language is hard and requires all kinds of infrastructure around it, like a community and a literature. I built these prototypes to explain what I meant by a “programming language.” It doesn’t have to be big. It can be little, be easy to learn and use, and fit into a one hour lesson in a precalculus class.

Alan Kay has argued for using the “real thing” as much as possible, because students might imprint on the “toy” thing and use it for everything. The concern is that it’s then hard to move on to better notations. I hope to avoid that fate by providing a notation that is so task-specific that it can’t be used for anything else. There is no support for abstraction here, alternative decompositions, or any kind of system design at all. It’s at the very lowest end of the Rich et al. learning trajectories.

Talking about trajectories is the segue to the next post.

I welcome your comments. Do you buy task-specific programming as a kind of programming?

September 27, 2019 at 7:00 am 6 comments

Building Textures from Waves: Debugging in Task-Specific Programming (Precalculus TSP Part 2 of 5)

The second prototype I built for Task Specific Programming in Precalculus is about using wave functions for building textures. As I mentioned in my last blog post, my goal is to use computing to make precalculus less abstract and more relevant. I want to provide a programming experience that can be used in five minutes which can be integrated into a precalculus class. In this prototype, I explicitly designed for inquiry and debugging. What would debugging look like in task-specific programming when the goal is learning precalculus and not programming or computer science?

I’m going to describe this one bottom-up. Wave functions are a common topic in a precalculus class (like in this YouTube lecture). These are typically taught in the form:

A sin(B(x-C)) + D

Or

Amplitude sin(frequency(x - horizontalShift)) + verticalShift

I built a tool where the student specifies a wave function, then that wave is used to define a texture in terms of gray, red, green, or blue components.

This equation is plotted (lower right hand corner) where the X-axis controls the darkness. Each pixel is evaluated left-to-right, and its X-position is plugged into this formula. The Y value is mapped to a gray scale. The plot should (if it all worked right, and it doesn’t really in this example) line up with the texture so that darkness and lightness maps to rise and fall in the darkness level of the texture.
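Here is a minimal sketch of that mapping in Python with NumPy. It is my illustration, not the prototype’s LiveCode, and the amplitude, frequency, and shift values are arbitrary, chosen only to keep the result in the 0-255 range.

```python
import numpy as np

# Hypothetical parameters for A*sin(B*(x - C)) + D.
A, B, C, D = 100, 0.1, 0, 128
width, height = 200, 100

x = np.arange(width)                    # the X-position of each column of pixels
y = A * np.sin(B * (x - C)) + D         # the wave's Y value at each X-position

gray = np.clip(y, 0, 255).astype(np.uint8)   # map Y to a gray level (0 = black, 255 = white)
texture = np.tile(gray, (height, 1))         # every row repeats the same left-to-right pattern

print(texture.shape)     # (100, 200)
print(texture[0, :5])    # gray levels of the first few columns
```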

Let’s set the “Challenge” section aside for just a few minutes.

We can map equations to other colors, and swap sine for cosine, and use the X or Y components of the pixel to determine the color. Like in the first prototype, each wave function appears on a separate card in a deck.

You can have an arbitrary number of these cards. When you get to the front page of the deck, you have a program of all the wave functions. These are then composed to create a more complex texture.
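Here is a rough sketch in Python with NumPy of what composing two wave cards might look like. The composition rule used here (each wave simply fills its own color channel) is my assumption for illustration; the prototype’s actual composition of color waves is more involved, as I note later in this post.

```python
import numpy as np

width, height = 200, 100
xs = np.arange(width)     # pixel X-positions
ys = np.arange(height)    # pixel Y-positions

# Two hypothetical wave "cards": a sine on X drives the red channel,
# and a cosine on Y drives the blue channel.
red_wave = 100 * np.sin(0.10 * xs) + 128    # one value per column
blue_wave = 80 * np.cos(0.05 * ys) + 128    # one value per row

# Assumed composition rule: each wave fills its own channel of an RGB texture.
texture = np.zeros((height, width, 3))
texture[:, :, 0] = red_wave[np.newaxis, :]    # red varies left to right
texture[:, :, 2] = blue_wave[:, np.newaxis]   # blue varies top to bottom
texture = np.clip(texture, 0, 255).astype(np.uint8)

print(texture.shape)   # (100, 200, 3)
```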

What does debugging look like here?

With this prototype, I thought more about the student’s process than I did with the image filter tool. In my work with both precalculus and social studies teachers, I’m increasingly thinking about inquiry, and I see debugging as a form of inquiry. You have some output, and you’re trying to understand it — and by “understand,” I mean being able to explain how the parts constitute the whole. The impetus to inquire (i.e., to debug) is that the output you have is not the output you expect or want. So, if we want to support inquiry as debugging, we have to support starting from a goal — an example, a requirement, or a set of tests.

I’m inspired by Diana Franklin’s group’s work on La Playa, a Scratch-like programming environment for 4th-6th grade students (see paper here). In La Playa, the programming tool opens up onto a starter file. The student could ignore it and just do open-ended programming, but more likely, the starter file guides the student and serves as a scaffold.

I made a prominent Challenge texture on the front page (and the challenge image appears on every wave card afterward). Can you make this? If you need a Hint, press the Hint button for a suggestion about what is needed to make this texture.

In general, my challenge textures don’t really work. I built a bunch of cool textures to use as hints, like below, but the composition of color waves is one of the more complicated parts here — I talk more about it in Part 5 of the series.  But on the other hand, I showed some of these textures to researchers in culturally-relevant curricula who said that they reminded them of woven cloths from the cultures that they study.  Could we use wave functions in a form of culturally-relevant curriculum that blends precalculus and computing?

Imagine that you have a texture that you generated on the first page, and now you want to understand how your existing waves contribute to this texture, e.g., to figure out which equations need to be changed. One tool is the Pop button on the front Texture Program page. This pops out the texture into a separate window.

Now, you can drag the texture over the individual wave textures to understand how the pieces compose into the whole.

Each wave has a checkbox, so that the wave can be removed from the texture (without deleting), then put back in.

The idea is that a student can apply basic debugging principles: Compare to the example, find how this statement/computational-unit impacts the final outcome (like a print statement), and remove the statement/unit (like with a comment) to see the effect on the final outcome.

Multiple linked representations

As I’ve shown this to math teachers, one of the things they like here is the multiple linked representations. We’ve got the equation:

When the student hovers, they get an explanation for each of the terms:

Then there are the plot and the texture, and if you press the “Points” button, you see exactly which points were plotted — both the x,y on the graph, but also the computed x and y before they were mapped to the discrete points:

When I showed this to teachers, they wanted it to support more exploration. For example, they told me that students struggle to understand the difference between 3sin(x) and sin(3x). I was thinking that a useful addition to this tool would be a Bret Victor inspired scrubbing gesture over each of the terms in the wave function (as in his fabulous Learnable Programming essay, or in the Scrubbing Calculator). I want to be able to click on the “3” above, drag up, then move the cursor left or right to make that number less or more. As I scrub, I want to see the plot change and the texture change — but with some kind of shadow/echo so that I can compare the “3” plot/texture to the new values for “C.” Students can’t generally keep the original curve “in their head” when seeing the new one, so they need both to be visible to understand the effect of the equation change.
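To illustrate the kind of comparison the teachers asked for, here is a small Matplotlib sketch (my mock-up, not the prototype and not Victor’s scrubbing interface) that keeps the original 3sin(x) curve visible as a faded echo underneath the changed sin(3x) curve.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 400)

# The original curve stays visible as a faded "echo" so students can compare it
# against the changed curve instead of holding the original in their heads.
plt.plot(x, 3 * np.sin(x), color="gray", alpha=0.4, label="3 sin(x) (original)")
plt.plot(x, np.sin(3 * x), color="tab:blue", label="sin(3x) (changed)")
plt.legend()
plt.show()
```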

I built these in LiveCode. The source file is available here. I’m also making binaries available for Macintosh and for Windows. (Yes, I can provide Linux, if someone’s interested.) These are provided with no warranty or guarantee — these barely do what I describe, and probably have lots of other mistakes, holes, weaknesses, and vulnerabilities. They’re slow as molasses. I strongly discourage you from using my prototypes with students. As I describe in Part 4 of this series, this is nowhere near where it needs to be in order to be useful.

September 23, 2019 at 3:00 am 10 comments

Precalculus Task-Specific Programming for Building Image Filters from Matrix Operations (Precalculus TSP Part 1 of 5)

This is the start of a five part series of blog posts on Task-Specific Programming in Precalculus, subtitled “What Mark Did This Summer.” I wrote several blog posts in the Spring (last in May, linked here) about “task-specific programming languages.” I’ve been thinking a lot about them over the summer, and built my first prototypes for two task-specific programming environments. In these blog posts, I’m describing the two prototypes, and then considering the interaction with research on programming, on education, and on computing education research.

Both of my prototypes are built around teaching precalculus. My goal is to use computing to make precalculus less abstract and more relevant. I am inspired by Phil Sadler and Gerhard Sonnert’s work supporting the argument that precalculus is the most important set of concepts for students to succeed at college calculus (even more than high school calculus).  I want to provide a programming experience that can be used in five minutes which can be integrated into a precalculus class.

The first prototype is for constructing image filters from simple matrix manipulations that are part of many precalculus texts. In a digitized picture, each pixel has a red, green, and blue component (or channel) for its color. All the red values from the pixels can be described as a matrix for the red channel in the picture.
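As a concrete (if hypothetical) illustration of the channel-to-matrix idea, here is a short Python sketch using Pillow and NumPy rather than the prototype’s LiveCode; “picture.png” is a placeholder path, and any RGB image would do.

```python
import numpy as np
from PIL import Image

pixels = np.array(Image.open("picture.png").convert("RGB"))

red = pixels[:, :, 0]     # the matrix of red values, one entry per pixel
green = pixels[:, :, 1]   # the matrix of green values
blue = pixels[:, :, 2]    # the matrix of blue values

print(red.shape)                            # (height, width), same dimensions as the picture
print(red[0, 0], green[0, 0], blue[0, 0])   # the color components of the top-left pixel
```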

This is the opening screen for the prototype.

The image being manipulated is on the left. The list of matrix manipulations appears in the upper left. The manipulated image is on the upper right.

Across the bottom appear the matrices for the red, blue, and green pixels of this picture. A text description explains where these matrices come from. I tried hard to get the matrix representation close to what students might see in a precalculus textbook. That’s part of being able to be “integrated into a precalculus course.” The teachers I’ve shown this to have appreciated that attention — they don’t want to have to explain differences between the user interface and the textbook.

Changing the picture (see the button under the source picture) updates the matrices and the description, and re-runs the matrix transformation steps on the new input picture.

The lower right hand corner is an inspector to check individual pixel values. This was a suggestion from Line Have Musaeus from Aarhus when I demonstrated this prototype at ICER in Toronto. Precalculus teachers want students to understand how the math is working at the matrix-element level, so I give them a way to check individual matrix values. That’s when I added the block-color picture into the picture set — it makes it easier to understand the four quadrants, what the base colors are, and how the transformations change them.

Each of the matrix transformations appears on a separate card in a deck. Here’s one:

This transformation does a scalar multiplication of 0.5 (which the user sets by changing the text field in the statement). A pull-down lets the user pick which matrix they want to modify. When the user picks a matrix, the display in the lower left updates to show the new matrix. Again, I’m trying to get the formatting for scalar multiplication right from the textbook. The text description on the right updates to describe the selected transformation, and the text and matrix displayed on the lower right updates to show how that matrix is being transformed by this operation.

Here’s the other possible transformation (selected by radio button):

Here, we’re setting one of the channel matrices (pull down for red, green, or blue) to the result of basic element-by-element arithmetic (addition or subtraction) on two selected matrices: red, green, blue, or a matrix of all-255’s. (I’ve had several suggestions to also provide the option of having a matrix of all the same number — all 1’s, all 0’s, or something like all-7’s.)

I’m using a semi-transparent gray graphic to “hide” the unselected operation on the card. I want students to know that they can do either operation, so I don’t want it to disappear, but I want to show that it’s not selected.

Currently, the user presses the “Do It” button to process the individual transformations, or to do the whole program of transformations. You can have an arbitrary number of operations, and you can do some interesting transformations, like image negation:

Or making a scene look like it’s nearing sunset:

Press the Change Picture button to get a different input picture. So, the list of operations specifies a process which can be applied to an arbitrary input. It’s a program (for a specific task), but it’s not coding (an important distinction that Wil Doane pointed out to me at ICER this year).
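To make “a program applied to an arbitrary input” concrete, here is a minimal Python sketch, again my illustration rather than the prototype’s LiveCode: the negation step follows the all-255’s-minus-a-channel idea above, while the sunset coefficients and the “picture.png” path are invented for the example.

```python
import numpy as np
from PIL import Image

def negate(channels):
    """Subtract each channel from an all-255 matrix (image negation)."""
    return {name: 255 - m for name, m in channels.items()}

def sunset(channels):
    """A guess at a sunset-like effect: dim the green and blue channels."""
    out = dict(channels)
    out["green"] = 0.7 * channels["green"]
    out["blue"] = 0.5 * channels["blue"]
    return out

def apply_program(path, steps):
    """Apply a list of channel transformations to any input picture."""
    pixels = np.array(Image.open(path).convert("RGB")).astype(float)
    channels = {"red": pixels[:, :, 0],
                "green": pixels[:, :, 1],
                "blue": pixels[:, :, 2]}
    for step in steps:
        channels = step(channels)
    stacked = np.dstack([channels["red"], channels["green"], channels["blue"]])
    return Image.fromarray(np.clip(stacked, 0, 255).astype(np.uint8))

# The same "program" (list of steps) can be applied to any picture.
apply_program("picture.png", [negate]).save("negated.png")
apply_program("picture.png", [sunset]).save("sunset.png")
```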

I built these in LiveCode. The source file is available here. I’m also making binaries available for Macintosh and for Windows. (Yes, I can provide Linux, if someone’s interested.) These are provided with no warranty or guarantee — these barely do what I describe, and probably have lots of other mistakes, holes, weaknesses, and vulnerabilities. They’re slow as molasses. I strongly discourage you from using my prototypes with students. As I describe in Part 4 of this series, this is nowhere near where it needs to be in order to be useful.

September 20, 2019 at 3:00 am 7 comments

Upcoming NSF Computing Education Workshops from Jeff Forbes

Jeff Forbes has just moved back to the National Science Foundation — great news!  He’s asked me to share information on a set of workshops that has just been funded, relevant to this list. People can sign up for the RPP and BPC Departmental Plans workshops now — the rest will have registration information upcoming.

BPC Plans Department Workshop

Award abstract: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1941413

CISE PIs are encouraged to include meaningful BPC plans in proposals submitted to a subset of CISE’s research programs. Nancy Amato (University of Illinois) is hosting a workshop about the development of departmental BPC plans. The workshop is scheduled for Nov 13-15 at Univ of Illinois to bring together teams of 2-3 people/department. Register here.

Computing in Undergraduate Education Workshop

Three workshops to “spark a national dialogue about the role of computing in undergraduate education.” The workshops will likely be in Chicago, DC, and Denver. These workshops will hopefully inform the community and NSF as we develop programs like CUE.

See the award abstract for more information https://www.nsf.gov/awardsearch/showAward?AWD_ID=1944777

CS for All RPP Development workshops

http://nnerpp.rice.edu/csforall-workshops/

Career Workshops for Teaching Track Faculty

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1933380

Data Science for All: Designing the Successful Inclusion of Data Science in High School Computer Science

NY Hall of Science will hold a workshop exploring the potential for including authentic data science curricula and hands-on projects in high school CS courses.

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1922898

Women of Color in Tech

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1923245

Workshop – BP in STEM, Computer Science and Engineering through improved Financial Literacy

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1939739

 

September 19, 2019 at 7:00 am Leave a comment

What’s generally good for you vs what meets a need: Balancing explicit instruction vs problem/project-based learning in computer science classes

Lauren Margulieux has posted another of her exceptionally interesting journal article summaries (see post here). Her post summarizes a recent article asking which is more effective: Direct instruction or learning through problem-solving-first (like in project-based learning or problem-based learning — or just about any introductory computer science course in any school anywhere)? Direct instruction won by a wide margin.

Lauren points out that there are lots of conditions when problem-solving-first might make sense. In more advanced classes where students have lots of expertise, we should use a different teaching strategy than what we use in introductory classes. When the subject matter isn’t cognitively complex (e.g., memorizing vocabulary words), there are advantages to having the students try to figure it out themselves first. Neither of these conditions is true for introductory computer science.

This is an on-going discussion in computing education. Felienne Hermans had a keynote at the 2019 RStudio Conference where she made an argument for explicit direct instruction (see link here). I made an argument for direct instruction in Blog@CACM last November (see post here). Back in 2017, I recommended balancing direct instruction and projects (see post here), because projects are clearly more motivating and authentic for computer science students, while the literature suggests that direct instruction leads to better learning — even of problem-solving skills.

Lately, I’ve been thinking about this question with a health metaphor. Let me try it here:

Everybody should exercise, right? Exercise provides a wide variety of benefits (listed in a fascinating blog post from Freakonomics from this June), including cardiovascular improvements, better aging, better sleep, and less stress. But if you have a heart problem, you’re going to get treatment for that, right? If you have high cholesterol, you should continue to exercise (or even increase it), but you might also be prescribed a statin.  If you have a specific need (like a vitamin deficiency), you address that need.

Students in computing should work on projects. It’s authentic, it’s motivating, and there are likely a wide range of benefits. But if you want to gain specific skills, e.g., you want to achieve learning objectives, teach those directly. Don’t just assign a big project and hope that they learn the right things there. If you want to see specific improvement in specific areas, teach those. So sure, assign projects — but in balance. Meet the students’ needs AND give them opportunities to practice project skills.

And when you teach explicitly: Always, ALWAYS, ALWAYS use active learning techniques like peer instruction. It’s simply unethical to lecture without active learning.

September 16, 2019 at 7:00 am 6 comments

Come talk about the Role of Authentic STEM Learning Experiences in Developing Interest and Competencies for Technology and Computing #STEMforCompTech

I’m on a National Academies committee to write a report about the role of authentic STEM learning experiences in promoting interest and ability in computing.  We’re having an open meeting/workshop (I don’t really know what it’s about yet) in November in DC. Visit this link for more information.

Save the Date: November 4th Workshop on the Role of Authentic STEM Learning Experiences in Developing Interest and Competencies for Technology and Computing

September 13, 2019 at 10:00 am Leave a comment

Where a Notional Machine Doesn’t Help: JavaScript and the DOM

At the Dagstuhl Seminar on Notional Machines and Programming Language Semantics, I came to a new understanding about where notional machines are not useful and another kind of support is needed.

I joined a breakout group on “Notional Machines for everything else.” There had been discussions about notional machines for popular programming languages like Scratch and Python, and one breakout group was formed around that goal. This group was about exploring where else notional machines could be useful, like trying to understand machine learning, generating proofs, and JavaScript for Web development. Joe Politz was our group leader.

After the first round of discussion, we all decided to focus on JavaScript. We had some serious experts on JavaScript and the DOM in our group, like Shriram Krishnamurthi and Titus Barik. The discussion was amazing! I was learning so much, and I took pages and pages of notes. Everyone noticed my feverish note-taking, so I got elected to report back to the group. That’s this blog post — it’s the work of the whole group (not just me). I just happened to be the guy who made the slides.

We had already come to the realization that there isn’t just one notional machine per language. A notional machine is about helping students understand a computational process, and there can be lots of processes in a given language. So we picked a specific scenario that we were aiming to explain: You (as a student) want to turn the background yellow when the button is clicked.

But the student makes a mistake.

And I came to realize that, without a lot of support, the student has little chance of finding the mistake:

The problem is that window.bg = "yellow" isn’t wrong. Because there isn’t a previously defined bg property, this assignment simply creates a property bg for the window object. No error. It just doesn’t do what the students wanted. How does the student figure out that the desired property is backgroundColor? Get a list of all bindings on window? There are hundreds or even thousands of them. How do you find backgroundColor among all of those?

The breakout group started listing on the blackboard the things we might need to explain to students to help them understand what went wrong or what might go wrong with clicking on a button to turn the background yellow. It was a long list.

You probably can’t read all of those, so I’ll list a few of them here:

  • What happens if you have two DOM elements with the same ID?
  • Where did window as an object get defined at all?
  • If you have event handlers defined on an object, and you delete the object, what happens to the event handler?
  • What happens if objects higher up in the DOM modify the event triggered by the event click?
  • Something I heard students ask: If my JavaScript changed the DOM, how come my HTML file didn’t change?
  • And Many More

I really had no idea just how complicated JavaScript and the DOM were! Amy Ko looked up the JavaScript definition of what == means (see link here). This isn’t the formal semantics. This is meant to be understandable. It’s insanely complicated.

At this point, Ben Shapiro raised a really interesting side question: What’s the cost of JavaScript’s overly-complicated rules? Is there a way of measuring the lost productivity of bad programming language design?

So, what’s the answer?

I realized that the answer is not a notional machine. The problem is that long list Amy found for us. I can teach part of that list as a notional machine, but I can’t teach all of it with a simplified model. Any simplification I create would be insufficient for the complexity of the reality. And even having a notional machine wouldn’t help if a student typed window.bg = "yellow". The student needs IDEs and other supports to figure out mistakes that never trigger an error.

The solution is to reduce complexity to make it teachable.  In an earlier talk at Dagstuhl, Ben Shapiro explained how Shriram and Joe and their collaborators did this with Pyret.  In Pyret, they explicitly disallow some things that Python allows but are way too complicated to explain.

Probably the best idea for JavaScript, following Racket’s lead, is to have language levels. We should teach students a strict subset of JavaScript, where the really complicated things are simply disallowed. The goal is to help students to learn a real subset, then grow the subset.  TypeScript offers an alternative model, because it offers a more sane way of doing JavaScript.  For example, TypeScript’s type checking might help figure out the window.bg bug. There are lots of other languages that are more reasonable and compile to JavaScript — but those are avoiding JavaScript entirely.

The very best idea would be to fix JavaScript and the DOM, but it’s probably too late for that.

This working group was useful to me for two reasons. First, I really do have to teach JavaScript and the DOM (again) in January 2020, and now I have a new sense of the challenges and my options. Second, this was a great example of where a notional machine is not the answer to a pedagogical problem.

Thanks to members of the group who reviewed an earlier draft of this summary.  They’re not responsible for where I still didn’t get the details right.

 

September 9, 2019 at 7:55 am 14 comments

How the Cheesecake Factory is like Healthcare and CS Education

Stephen Dubner of the Freakonomics Podcast did an interview with Atul Gawande, author of the Checklist Manifesto. Atul is a proponent of a methodical process — from a behavioral economist perspective, not from an optimization perspective. People make mistakes, and if we’re methodical (like the use of checklists), we’re less likely to make those mistakes. If we can turn our process into a “recipe,” then we will make fewer mistakes and have more reliable results. Here’s a segment of the interview where he argues for that process in healthcare.

DUBNER: Okay, what’s the difference between a typical healthcare system and say, a restaurant chain like the Cheesecake Factory?

GAWANDE: You’re referring to the article I wrote about the Cheesecake Factory…Basically what I was talking about was the idea that, here’s this restaurant chain. And yes, it’s highly caloric, but the Cheesecake Factorys here have as much business as a medium-sized hospital — $100 million in business a year. And they would cook to order every meal people had. And in order to make that happen, they have to run a whole process that they have real cooks, but then they have managers.

I was talking to one of the managers there about how he would make healthcare work. And his answer was, “Here’s what I would do, but of course you guys do this. I would look to see what the best people are doing. I would find a way to turn that into a recipe, make sure everybody else is doing it, and then see how far we improve and try learning again from that.” He said, “You do that, right?” And we don’t. We don’t do that.

Here’s where Gawande’s approach ties to education. Later in the interview:

DUBNER: And do you ever in the middle of, let’s say, a surgery think about, “Oh, here’s what I will be writing about this day?”

GAWANDE: You know, I don’t really; I’m in the flow. One of the things that I love about surgery is: it is, I have to confess, the least stressful thing I do, because at this point I’ve done thousands of the operations I do. Ninety-seven to 98 percent go pretty much as expected, and the 2 percent that don’t, I know the 10 different ways that are most common that they’ll go wrong and I have approaches to it. So it’s kind of freeing in a certain way.

I think about teaching like this. I have been teaching computing since 1980. I actively seek out new teaching methods. Teaching CS is fun for me — because I know lots of ways to do it, I have choices which keeps it interesting. I have mentioned the fascinating work on measuring teacher PCK (Pedagogical Content Knowledge) by checking how accurately teachers can predict how students will get exam questions wrong. I know lots of ways students can get computer science wrong. I still get surprises, but because I’m familiar with how students can get things wrong, I have a starting place for supporting students in constructing stronger conceptions.

We can teach this. The model of expertise that Gawande describes is achievable for CS teachers. We can teach new CS teachers methods that they can choose from. We can teach them common student mental models of programming, how to diagnose weaker understandings, and ways to help them improve their mental models. We can make a checklist of what a CS teacher needs to know and be able to do.

September 2, 2019 at 7:00 am 10 comments

