So what’s a notional machine anyway? A guest blog post from Ben Shapiro

July 15, 2019 at 12:00 pm

Last week, we had a Dagstuhl Seminar about the concept of notional machines, as I mentioned in an earlier blog post about the work of Ben Shapiro and his student Abbie Zimmermann-Niefield. There is an amazing amount being written about the seminar already (see the Twitter stream here), with a detailed description from Amy Ko here in her blog and several posts from Felienne on her blog. I have written my own summary statement on the CACM Blog (see post here). It seems appropriate to let Ben have the summary word here, since I started the seminar with a reference to his work.

I’m heading back to Boulder from a Dagstuhl seminar on Notional Machines and Programming Language Semantics in Education. The natural question to ask is: what is a notional machine?

I don’t think we converged on an answer, but here’s my take: A notional machine is an explanation of the rules of a programmable system. The rules account for what makes a program a valid one and how a system will execute it.
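
To make the two halves of that definition concrete, here is a minimal sketch (a toy of my own construction, not anything from the seminar): a single-accumulator machine whose validity rules say which programs are well-formed, and whose execution rules say what running one does.

```python
# A toy notional machine, written as executable rules. Programs are lists
# of (op, arg) pairs for a single-accumulator machine. All names here are
# invented for illustration.
VALID_OPS = {"load", "add", "mul"}

def is_valid(program):
    """Validity rules: every instruction is a known op with an int argument."""
    return all(op in VALID_OPS and isinstance(arg, int) for op, arg in program)

def execute(program):
    """Execution rules: start the accumulator at 0, apply instructions in order."""
    assert is_valid(program)
    acc = 0
    for op, arg in program:
        if op == "load":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

print(execute([("load", 2), ("add", 3), ("mul", 4)]))  # prints 20
```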

Why this definition? Well, for one, it’s consistent with how du Boulay, coiner of the term notional machine, defined it at the workshop (“the best lie that explains what the computer does”). Two, it has discriminant utility (i.e. precision): the definition allows us to say that some things are notional machines and some are not. Three, it is consistent with a reasonable definition of formal semantics, and thus lets us imagine a continuum of notional machines that include descriptions of formal semantics, but also descriptions that are too imprecise — too informal — to be formal semantics but that still have explanatory value.

The first affordance is desirable because it allows us to avoid a breaking change in nomenclature. It would be good if people reading research papers about notional machines (see Juha Sorva’s nice review), including work on how people understand them, how teachers generate or select them, etc., don’t need to wrestle with what contemporary uses of the term mean in comparison to how du Boulay used the term thirty years ago. It may make it easier for the research community to converge on a shared sense of notional machine, unlike, say, computational thinking, where this has not been possible.

The second affordance, discriminant utility, is useful because it gives us a reason to want a term like notional machine in our vocabulary when we already have other useful and related terms like explanation and model and pedagogical content knowledge. Why popularize a new term when you already have perfectly good ones? A good reason to do so is that you’d like to refer to a set of things distinct from those the existing terms refer to.

The scope of our workshop was explicitly pedagogical: it was about notional machines “in education.” It was common within the workshop for people to refer to notional machines as pedagogical devices. It is often the case that notional machines are invented for pedagogical purposes, but other contexts may also give rise to them. Consider the case of Newtonian mechanics. Newton’s laws, and the representations that we construct around them (e.g. free body diagrams), were invented before Einstein described relativity. Newton’s laws weren’t intended as pedagogical tools but as tools to describe the laws of the universe, within the scales of size and velocity that were accessible to humans at the time. Today we sequence physics curriculum to offer up Newtonian physics before quantum because we believe it is easier to understand. But in many cases, even experts will continue to use it, even if they have studied (and hopefully understand) quantum physics. This is because in many cases, the additional complexity of working within a quantum model offers no additional utility over using the simpler abstractions that Newtonian physics provides. It doesn’t help one to predict the behavior of a system any better within the context of use, but likely does impose additional work on the system doing the calculation. So, while pedagogical contexts may be a primary locus for the generation, selection, and learning of notional machines, they are not solely of pedagogical value.

Within the workshop, I noticed that people often seemed to want their definitions, taxonomies, and examples of notional machines to include entities and details beyond those encompassed by the definition I have provided above. For example, some participants suggested that action rules can be, or be part of, notional machines. An example of an action rule might be “use descriptive variable names” or “make sure to check for None when programming in Python.” While both of these practices can be quite helpful, my definition of notional machines accepts neither of them. It rejects them because they aren’t about the rules by which a computer executes a program. In most languages, what one names variables does not matter, so long as one uses a name consistently within the appropriate scope. “Make sure to check for None” is a good heuristic for writing a correct program, but not an account of the rules a programming environment uses to run a program. In contrast, “dereferencing a null pointer causes a crash” is a valid notional machine, or at least a fragment of one.
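
To make the contrast concrete, here is a small Python sketch (mine, not one from the workshop); the function and data are invented for illustration. The action rule is a practice for writing correct programs, while the notional-machine fragment describes what the interpreter itself does when that practice is skipped. Python has no null pointers, but attribute access on None plays the analogous role.

```python
# Hypothetical example: find_user and its inputs are invented for illustration.
def find_user(users, name):
    return users.get(name)  # dict.get returns None when the key is absent

users = {"ada": "admin"}

# The action rule in use -- "make sure to check for None" -- a heuristic
# for writing a correct program, not a rule of the machine.
role = find_user(users, "grace")
if role is not None:
    print(role.upper())

# A notional-machine fragment: attribute access on None raises
# AttributeError. That is a rule of how the system executes the program.
try:
    find_user(users, "grace").upper()
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'upper'
```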

Why do I want to exclude these things? Because a) I think it’s valuable to have a term that refers to the ways we communicate about what programming languages are and how the programs written in them will behave. And b) a broader definition would refer to just about everything that has anything to do with the practice of programming. Such a broad referent doesn’t seem worth another term in our lexicon, and it would make the term less helpful for designing and interpreting research studies in computing education.

The third affordance is desirable because it may allow us to form stronger bridges to the programming languages research world. It allows us to examine — and value — the kinds of artifacts that they produce (programming languages and semantics for those languages) while also studying the contradictions between the values embedded in the production of those artifacts and the values that drive our own work. Programming languages (PL) researchers are generally quite focused on demonstrating the soundness of the designs they create, but typically pay little attention to the usability of the artifacts they produce. Research languages, and semantics written down in Greek notation, have difficult user interfaces, at least to those of us sitting on the outside of that community. How can we create a research community that includes the people, practices, and artifacts of PL and that conducts research on learning? One way is to decide to treat the practices and artifacts of PL researchers, such as writing down formal semantics, as instances of something that computing education researchers care about: producing explanations of how programming systems work. PL researchers describing languages’ semantics aren’t doing something that is very different in kind from what educators do when they explain how programming languages work. But (I think) they usually do so with greater precision and less abstraction than educators do. Educators’ abstractions may be metaphorical (e.g. “There’s a little man inside the box that reads what you wrote, and follows your instructions, line by line…”) but, at least if we use my definition, they are of the same category as the descriptions that semanticists write down. As such, the range of things that can be notional machines, in addition to the programming languages they describe, may serve as boundary objects to link our communities together. I think we can learn a lot from each other.

That overlap presents opportunities. It’s an opportunity for us to learn from each other and an opportunity to conduct new lines of research. Imagine that we are faced with the desire to explain a programming system. How would a semanticist explain this system? How would an experienced teacher? An inexperienced teacher? What do the teachers’ explanations tell us about what’s important? What does a semanticist’s explanation tell us about the kernel of truth that must be conveyed? How do these overlap? How do they diverge? What actually works for students? Can pedagogical explanations be more precise (and less metaphorical) and still be as helpful to students? Are more precise definitions actually more helpful to students than less precise ones? If so, what does one need to know to write a formal semantics? How does one learn to do that? How does one teach educators to do that? How can we design better programming languages, where better is defined as being easier to understand or use? How can we design better programming languages when we have different theories of what it means to program well? How do we support and assess learning of programming, and design programming languages and notional machines to explain them, when we have different goals for what’s important to accomplish with programming?

There are many other questions we could ask too. Several groups at the workshop held breakout sessions to brainstorm these, but I think it’s best to let them tell their own stories.

In summary, I think the term notional machines has value to computing education research, but only if we can come to a consensus about what the term means, and what it doesn’t. That’s my definition and why I’ve scoped it how I have. What’s your take?

If you’d like to read more (including viewpoints different than mine), make sure to check out Felienne’s and Amy’s blog posts on this same topic.

Thank you to Shriram, Mark, Jan, and Juha for organizing the workshop, and to the other participants in the workshop for many lively and generous conversations. Thanks as well to the wonderful Dagstuhl staff.

 


14 Comments

  • 1. alanone1  |  July 15, 2019 at 1:50 pm

    But why not have the children really learn something by having them make real “notational machines” that actually make and run a programming language? For example, here is one of a number of examples for middle schoolers from Etoys over 15 years ago: a “rules-based” interpreter for a parallel language to do “StarLogo”-like things. The rules elements are pictures that are organized as productions. The productions are run by an Etoys program, and the elements themselves are defined by an Etoys program. The result in this example is an “epidemic simulation” done in the new language.

    I would like to put a one page picture of the entire thing in here but this !@#$%^& (+ really stupid) non-authoring system won’t let me paste it in …. (how can all of you put up with this absolute shit?)

    • 2. Mark Guzdial  |  July 16, 2019 at 5:13 am

      WordPress won’t let me insert a graphics link here. I’ve placed the image here: http://web.eecs.umich.edu/~mjguz/AlansEtoyExampleblob.jpg

      • 3. alanone1  |  July 16, 2019 at 5:17 am

        Thanks very much Mark!

        But an infinity of Boos on WordPress — hard to believe that this is the case about 46 years after this was first done WYSIWYG at Parc — Yikes!

  • 4. alanone1  |  July 15, 2019 at 2:39 pm

    Yes, I really do think the idea of “notional machine” is both good and important.

    There is the kind that is described in careful English (e.g. some of the early examples include the Algol-60 report, and subsequent programming languages described by Niklaus Wirth).

    And there is the kind that is described formally by the simplest interpreter that can run the language (McCarthy’s Lisp defined in itself — but for pedagogical reasons of simplicity and clarity, I would vote for Wirth’s and Weber’s formal description of the semantics of Euler in the Jan–Feb CACM in ’66).

    We could imagine children’s versions of both of these, and we could also imagine a system that can act out a rule step by step so a kid can see what the rule means and does.

    This brings up the point that any useful “notional machine” needs to be clear enough to be useful while also being simple enough to be understood (down deep it’s just another language with semantics).

    From this point of view, it’s worth thinking about first teaching the kids to do some simple programs in the language that will describe the notional machine for the next language. That would be a “very computery” kind of thing to do.

    P.S. I don’t think I’d want to teach the children any kind of language that would make adult programmers feel that “it’s too much work to do a good UI or authoring tools” (that would leave out most of the languages being used today for both adults and children).

  • 5. Ben Shapiro  |  July 15, 2019 at 3:55 pm

    Alan,

    I love your ideas above.

    Here are some clarifications that may be helpful: What I tried to do in the blog post, and what we tried to do in much of the workshop, is to define a construct (notional machine) for use in research. Notional machines are, at least by my definition, explanations of the semantics of a programming language.

    The definition doesn’t say anything about what form they take (production rules, animations, metaphorical stories), just what they describe. The definition also doesn’t say anything about what the languages they describe should actually be (though of course that’s a very interesting topic).

    At the workshop, the example notional machine that Mark presented was something that you and Adele Goldberg wrote to explain calling/message-passing semantics in Smalltalk-76.

    We also discussed, at length, the relationship between visual tracing tools and notional machines. As an example, Nelson, Xie, and Ko’s PLTutor seems to be an instance of “a system that can act out a rule step by step so a kid can see what the rule means and does” that you suggest above. While I’m not sold on the level of abstraction they’re focusing on being the best place for students to start, I think it is an intriguing example to consider. (See: http://www.greglnelson.info/p2-nelson.pdf )

    Ben

    • 6. alanone1  |  July 15, 2019 at 4:14 pm

      Hi Ben

      I think this is a really important area to revive (from a wave of interest and examples in the 60s and leaking a bit into the 70s — this is why I mentioned Wirth/Weber and Euler).

      The key principle for the kids is that the notional machine has to actually be understandable and relatively easily so — I know you understand this well — or they will be faced with two difficult mysteries instead of just one. The upside is that this is a great area for deep illumination about computers as “language machines” and “language making machines”.

      Etoys was one of the first “block-based languages”, and one of the other examples of making an interpreter by children was to make a simple version of a block-based language that could do turtle geometry etc.

      I think a lot more could be done here — one of the nicest things about block-based extensible languages is that text parsing is not needed to deal with syntax — this allows the visual design to be an act of user-interface design, and frees up brain cells to concentrate on the semantics. A number of key features in Etoys (which were unfortunately omitted from Scratch) — many of them having to do with how all DnD construction is done in Etoys — make the transition to giving a construction a meaning as a programming element much easier.

      It would be great to take another pass at many of these important issues that were quite ignored by the current overly-simplistic (and even wrong) approaches in the now ongoing fads.

      • 7. Ben Shapiro  |  July 15, 2019 at 4:57 pm

        100% agree.

    • 8. alanone1  |  July 16, 2019 at 11:07 am

      I was prejudiced to like the Nelson, Xie, Ko paper before reading it, but was surprised to not warm up to the actual approach taken.

      It’s not that “cognitive load is everything”, but — to me — for user interface design and most other aspects that have to do with setting up environments for learning, it takes front and center.

      Given that the learner is already beyond their chunking limit, enormous care must be taken in what’s added to make a UI to help (somehow it has to also be learned and used under deep load conditions, and the combination has to somehow reduce the overall load — we found coming up with solutions for this at Parc to be extremely challenging!)

      I do think that acting out semantics is generally a good idea for beginners, but the questions are what to act out, what to show, how to help the learner remember in the midst of saturation, etc.

      Widening out to the larger problems of teaching programming, we’ve got quite a few cases that have to be dealt with, including where we get to choose the initial language that is to be learned, or whether this is predetermined by others.

      I also think there is usually more than one good way to help a person learn each idea, but at least one needs to be found. For example, one on one coaching to get something fun running works extremely well for elementary (and most other) learners (whereas typical classroom one to many instruction does not).

      There are examples of this using Etoys in many of my talks that people have put into YouTube: the start is to draw a car, program it to move, draw a steering wheel, program the car to get its direction from the steering wheel, etc. The learner has not just made a program using active objects, but also made a system of parallel communicating processes.

      Next you modify the car to follow a path by itself. All this takes about 20 minutes or so, and quite a bit of the UI — how the language is used, objects, properties, behaviors, variables, scripts, processes, how facilities are found, etc. — is experienced, gently corrected, and grokked. And so on. The cognitive load is lower than one might think because of the degree of “situatedness” in the actual project itself.

      Doing this one on one goes very quickly, but also at the exact pace for the particular learner. The next stage is for the pair to split up and teach another pair one on one. (There can also be observers …).

      The “notional machine” for a system like Etoys is in the simplicity and concreteness of the connection between what a basic block says and what happens when it does it. Pretty generally, what is hidden doesn’t count and isn’t needed to think about what is going on and what might be done next.

      As with all programming, the design part is what is actually difficult (and by all the evidence not well learned even by a very high percentage of “pros” today). There is less of a royal road to design, and it actually takes a fair amount of experience to acquire a feeling for it.

      I think this is where most of the “learning to program” effort should be concentrated, and that it really makes sense to create excellent pedagogical languages to provide the initial experiences (this is partly because most of the commercially used languages are not great language designs, and good programming in them is very often making better semantic structures than the languages carry themselves).

      • 9. alanone1  |  July 16, 2019 at 1:05 pm

        A footnote here: The Etoys design was a bit of a departure in that the basic programming features were made quite few in number and simple, but the object system and the environment had quite a bit of work put into them.

        The idea was to see if the design and systems aspects of programming could be gotten into much sooner, and then to see if the simpler bricks actually made it easier for the kids to see and use some of the possible arches.

        One of the other motivations for Etoys was to teach ideas drawn from various scientific and engineering fields rather than to teach “programming” per se.

        The four main merged inspirations for the Etoys design were LOGO (and Seymour, etc.), Xerox Parc Smalltalk, HyperCard, and StarLogo.

        As a purely local assessment, Etoys turned out to be by far the most well learned and used system by children of all the ones in the 25 years since the first Smalltalk experiments in 1973.

      • 10. alanone1  |  July 17, 2019 at 3:03 am

        Four more ideas here (which are likely already afloat out in the CER world).

        1. The first is that “Biology means variation”, and there is not a lot of obvious heed given to the manifest fact that the learning population is made up of a distribution of different abilities and perspectives. Good classroom teachers can home in on this to some extent, but classroom teaching is tough to do to a distributed population. The one-on-one parts of sports and music etc. teaching really make a difference, while still including a lot of group work as separate activities. I think this is critical to deal with.

        2. The second is how the goals of the teaching process (a) help or (b) interfere with what the learner actually needs to do. Many teaching processes — and teachers — “overteach”, partly from “trying to cover the material” and partly because many teachers — in sports and music also — feel like they “are not doing their job” if they don’t try to teach. In fact, cognitive load is a big deal here. Most people need a lot more time than teaching processes are willing to give them.

        3. The third idea is from one of the two or three best teachers I’ve observed over many years: Tim Gallwey (“The Inner Game of Tennis” and other “inner games”). He pointed out that “the parts of your brain that need to do the learning often don’t understand English”, and that quite a bit of repetition of various kinds is needed to help the non-English (Kahneman’s “System 1”) mentalities learn. “But there is a big difference in what you are thinking about during these repetitions”.

        He also pointed out that a big problem with being a beginner in anything is that you spend most of your time not doing the subject (in tennis, most of the time you are chasing the ball rather than hitting it, etc.). In other words the state of being a beginner is that the repetitions you do teach you to stay a beginner (you get better at running after the ball, etc.).

        So he devised a way to start people who had never touched a racket before as “low intermediates” where more than 95% of the time they were actually doing “tennis”. At Parc we used this idea as part of the invention of the Parc GUI: have the UI start people off as intermediates rather than as beginners trying to remember command lines to the OS to find their files to work on, etc. And we also used it to teach children OOP from the start. The “blocks-based programming” that was introduced in Etoys and now used widely is another way to have the UI start learners as intermediates.

        4. The fourth idea is also a big UI principle that is a hard one for most, including computer people: the appearance on the screen has an active computer behind it (this is what the WordPress, Wikipedia, Vi emulator, etc. people don’t get). For programs and notional machines, we really want to understand what Al Perlis meant when he said “You have to become both the machine and your program” (he didn’t mean the underlying hardware of the computer — that would be silly, and Al was not silly, he was a great computer scientist — he meant that the programming language is “the machine” and your “program makes another machine”, and you have to live in both worlds).

        One experiment to try (it probably was done years ago) would be to stay in the context of the language rather than add more cognitive load by explicitly showing stacks, etc. For example, you could do what Dan Swinehart did with his “Co-Pilot” debugger for Algol: show the program text slightly decorated, e.g. with variable names associated with their values right in the program — the value could be put right below the variable name.
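
        Here is a rough sketch of that idea in Python (not Swinehart’s actual Co-Pilot, just an approximation using the standard tracing hook, with invented names; run it as a script so the source lines can be found): execute a function line by line and print each source line annotated with the local variable values at that point.

```python
# A sketch of "program text decorated with values", via sys.settrace.
import linecache
import sys

def trace_with_values(func, *args):
    """Run func(*args), printing each executed source line with its locals."""
    code = func.__code__

    def local_trace(frame, event, arg):
        if event == "line":
            src = linecache.getline(code.co_filename, frame.f_lineno).rstrip()
            print(f"{src:<28} # {frame.f_locals}")
        return local_trace

    def global_trace(frame, event, arg):
        # Only trace the frames of the function we were asked about.
        return local_trace if frame.f_code is code else None

    sys.settrace(global_trace)
    try:
        return func(*args)
    finally:
        sys.settrace(None)

def total_up_to(n):
    total = 0
    for i in range(n):
        total += i
    return total

trace_with_values(total_up_to, 3)
```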

        Even more interesting to try would be to show the execution of the program as successive reductions of expressions in various ways — I vaguely think this was done by someone long ago, and it was good. The UI is acting like a reader-interpreter of the text of the program, and showing its execution in situ. We can easily imagine three or four good ways to show this (and we should combine it with some of Philip Guo’s excellent visualization ideas).
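
        And a tiny sketch of the reduction idea (a toy expression format invented here, not a real system): rewrite the leftmost reducible subexpression one step at a time and print each stage, so the UI acts like a reader-interpreter showing execution in situ.

```python
# Expressions are ints or nested tuples like ("+", a, b); step() performs
# one leftmost reduction. All of this is a toy for illustration.
def step(e):
    if isinstance(e, int):
        return e, False
    op, a, b = e
    a2, changed = step(a)
    if changed:
        return (op, a2, b), True
    b2, changed = step(b)
    if changed:
        return (op, a, b2), True
    return (a + b if op == "+" else a * b), True

def show(e):
    return str(e) if isinstance(e, int) else f"({show(e[1])} {e[0]} {show(e[2])})"

expr = ("+", ("*", 2, 3), ("*", 4, 5))
while not isinstance(expr, int):
    print(show(expr))       # ((2 * 3) + (4 * 5)), then (6 + (4 * 5)), ...
    expr, _ = step(expr)
print(expr)                 # 26
```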

        An important idea here is that a really good visualization scheme will be somewhat like what most skilled programmers “see” when they look at code.

        Another important related idea (that is not heeded enough I think) is to realize the huge difference between how most programs say what they say and how most natural language prose says what it says. The difference — the huge lack of written intent in most code — means that you really want the early learners to be dealing with their own programs more than trying to read programs written by others.

        • 11. alanone1  |  July 17, 2019 at 4:05 am

          A footnote to the “four more ideas” (note that — unlike Quora which allows authors to go back and fix and add — WordPress allows nothing — it also for some reason removed the numerals — e.g. “1.” — I put before each of the four ideas).

          I meant to say a little more about the variability and distribution of the learners that is vanishingly rare to see in the CER discussions.

          And this is that there are a small percentage of learners of programming (not so much design) who take to it immediately and have no discernible difficulties learning, whether machine code or a so-called higher level language.

          The US Air Force needed programmers in the early 60s for a variety of purposes and enlisted IBM to help. IBM came up with an aptitude screening test (maybe they already used it for their own purposes). On the AF base where I was serving, this was “the test that no one had ever passed”. I liked taking tests so I decided to try it. It was relatively short (aimed at one hour) and one of the most difficult I’d encountered. I wound up passing it and was immediately sent to Air Training Command (along with others around the world who had passed the test) to be a programmer.

          At ATC we were sent to a week-long, wall-to-wall course conducted by IBM on coding the IBM 1401 in machine language (a very idiosyncratic architecture — but we didn’t know that). And then we were put to work converting flowchart diagrams into 1401 programs (basically we were “compilers” of the flowcharts — this was what “coding” meant back then).

          Everyone who had passed the screening test did very well in the week long course, and we all agreed that it was easier than the screening test (so the test probably eliminated some who still would have been successful learning to program).

          The “programmers” were those experienced hands who did the designs and wrote the flowcharts. As “coders” we didn’t have to do this for the first months or year, we just had to learn how to make code for the flowcharted processes — quite a few of the flowcharts were actually for the punched card machines the 1401 was gradually replacing, so the logic was sometimes “interesting”.

          The learning curve within “coding” was how to debug (you could only get the machine with an operator about 3 minutes a day, so your code had to be “written to be darn close to right” at your desk). The other learning curve — more or less optional — was to start to really use the IBM Autocoder macro assembler (of considerable power) to help your coding.

          After about 6 months of this OJT (on the job training) there were reviews of the actual work you had done, and you were ready for more training, and to do a bit more design (they reckoned that you could learn to code in a week — really learning over a few months after the class — but that learning design took around two years).

          I write this to provide a different perspective on “coding” and “programming” and “design” in the hope that it might help some of the people trying to think about ways to approach this today. The main point (after the obvious one) is that the “skilled sport/art” nature of the craft requires a lot of Kahneman “System 1” learning by doing and repetition and design guidance (the flowcharts) before there are enough chops to start thinking good larger design thoughts.

          The process above (to me) is a huge contrast to how most schools at any level approach trying to teach programming today (where the very same methods would quite obviously fail if applied to sports or music etc).

