Bret Victor’s “Inventing on Principle,” and the trade-off between usability and learning

February 21, 2012 at 7:50 am

I have had several people now send me a link to Bret Victor’s video on Inventing on Principle. It is a really impressive demo!

His system reminds me of Mike Eisenberg’s work on SchemePaint.  Mike wanted the artist to be able to interleave programming and direct manipulation.  In SchemePaint, you could draw something by hand, then store the result in a variable to manipulate in a loop.  Or you could write some code to tessellate some graphical object, then add tweaks by hand.  It was beautiful.  The work that Mike did on SchemePaint led to his wonderful work on HyperGami, a CAD system for origami, which was the start of his Craft Technology group. That’s the group from which Leah Buechley graduated — she did the LilyPad.

People are sending me Bret’s video asking, “Wouldn’t this be great for learners?”  I bet it could be, but we’d have to try it out. At one point in his lecture, Bret says, “Why should I have to simulate the computer in my head?”  Because that’s the point of understanding computer science.  Bret’s system looks like a powerful visualization system, and visualization can be used to lead to real understanding, but it isn’t easy to design the visualization and context such that learning occurs.

The problem is that visualization is about making information immediate and accessible, but learning is about changes in the mind — invisible associations and structures.  Sometimes good usability makes it easier to make these associations and structures.  Tools like Scratch and Alice increase usability in one direction (e.g., syntax) while still asking students to make an effort toward understanding (e.g., variables, loops, and conditionals).

My first PhD student was Noel Rappin, who explored the features of modeling environments that lead to learning.  He had a CHI paper about his work on helping chemical engineers learn through modeling.  Our colleagues in chemical engineering complained that their students couldn’t connect the equations to the physical details of the pumping systems that they were modeling. Noel built a system where students would lay out the physical representation of a pumping system, then “look underneath” to see the equations of the system, with the values filled in from the physical representation (e.g., height difference between tanks).

He ran a pilot study where students would lay out a system according to certain characteristics.  They would then manipulate the system to achieve some goal, like a given flow rate at a particular point in the system.  When Noel asked the pilot students if they gained any new insights about the equations, one student actually said, “What equations?”  They literally didn’t see the equations, just the particular value they were focusing on.  The system was highly usable for modeling, but not for learning.

Noel built a new system, where students could lay out a model, and values from the model were immediately available in an equation space.  To get the flow rate, the student would have to lay out the equations for themselves.  They would still solve the problem by manipulating the physical representation in order to get the right flow rate, and the system would still do all the calculations — but the students would have to figure out how to compute the flow rate.  The system became much harder to use.  But now, students actually did learn, and better than students in a comparison group.
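Noel’s redesign can be sketched in code. This is a hypothetical reconstruction, not the actual software: the names and the simplified Torricelli-style flow relation are illustrative assumptions. The point is the shift in who owns the structure of the equation.

```javascript
// Version 1: the system computes the flow rate itself; the equation stays
// invisible to the student. (All names and the toy physics are hypothetical.)
function systemFlowRate(model) {
  const dh = model.tankA.height - model.tankB.height;
  return Math.sqrt(2 * 9.8 * dh); // toy Torricelli-style relation, g = 9.8
}

// Version 2: the system only exposes the model's values in an "equation
// space"; the student must assemble the flow-rate equation from them.
function exposedValues(model) {
  return { g: 9.8, dh: model.tankA.height - model.tankB.height };
}

// Student-authored equation, built from the exposed values:
const studentFlowRate = ({ g, dh }) => Math.sqrt(2 * g * dh);
```

Both versions still do all the arithmetic; only the second makes the student supply the structure of the equation, which is where the learning showed up.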

Bret’s system is insightful and may have some terrific ideas for helping learning.  I’m not convinced that they’re new ideas yet, but an old idea in a new setting (e.g., JavaScript) can be powerful.  I worry that we get too entranced by improvements in usability.  In the end, learning is in the student, not in the system.



17 Comments

  • 1. Alan Kay  |  February 21, 2012 at 8:51 am

    Well, learning to program linked lists in C is not a great way to get students to think about logic gates and computer architecture. And, in the other direction, it is not a great way to think about interesting and important problems that can only be solved on computers.

    The recent Strawman draft of computer curricula does not allocate any hours in Tier 1 to how things actually work, and seems to miss what is really important about the subject in the other tiers. 40 years ago (at least in the ARPA community) everyone had to learn both hardware and software as aspects of “computer processes”.

    Similarly, the Strawman draft seems to be very weak at the high-level end of our field.

    This year — 2012 — is the 50th anniversary of Sketchpad by Ivan Sutherland, arguably the single most forward-looking system in computing history. Besides pushing what the air defense system could do to the qualitatively new heights of “interactive graphical construction of simulations of all kinds”, it was also the most profound example of “Turing’s Spaceship”, a vehicle that could escape from conventional computing ideas to produce an entirely new, different and much more powerful conception of computing.

    There is no question that how Sketchpad was implemented on a conventional computer of the day is interesting — and useful, if you still only have conventional computers to deal with. But it isn’t the point.

    The real point with Sketchpad — and with what Bret Victor is trying to show people — is that we only have 7±2 chunks to think with, and we want to train them wisely and use the most powerful ones we have when we are trying to think and solve problems.

    Our primary routes for this don’t have to do with logic gates, C, or linked lists. They have to do much more with concepts expressed as powerfully as possible, connected and made from dynamic relations.

    I interpret what Bret Victor is saying (as Ivan said long before), as “why should I have to simulate (a conventional) computer in my head?” Let’s make a really powerful computer and teach people how to simulate -that- in their heads!

    (And let’s come up with stronger perspectives for helping students learn about “how matter can compute”. Hint: the bottom is a lot more interesting and important than the middle. And bad bottoms lead to really bad middles ….)

    Cheers,

    Alan

    Reply
    • 2. Mark Guzdial  |  February 21, 2012 at 10:13 am

      Hi Alan,

      Your point about “Let’s make a really powerful computer and teach people how to simulate -that- in their heads!” is one that I thought about when watching Bret Victor’s talk. His description about how to develop algorithms with his system was really interesting — sort of an implied distributed-processing approach. But I couldn’t tell if he was *really* suggesting developing a new “bottom” (as you suggest). He’s using JavaScript and creating a layer on top. Maybe that’s just for convenience, and he’d prefer to change the layers below, too, but I couldn’t tell.

      Cheers,
      Mark

      Reply
      • 3. Bret Victor  |  February 21, 2012 at 1:31 pm

        The coding demos use JavaScript because JavaScript is a lingua franca that everyone can read. If I had demoed an environment on top of some more powerful “bottom” that I made up, nobody would have followed the demo, and the points would have been lost.

        The demos use JavaScript, but they are certainly not *about* JavaScript. I’m trying to come up with forms that will still be relevant and useful long after JavaScript is dead and gone. (William Playfair invented information graphics in the course of explaining the trade relationships among 18th-century European countries. The original context is irrelevant, but the form has persisted.)

        I don’t even believe that *code* is necessarily the right way of creating most software systems, in the long run. My only point was that you should be able to see what you’re doing. One reason Crick and Watson were able to figure out the structure of DNA was that they made metal physical models and played with them. (And many of their colleagues dismissed these “toys” — they wanted to simulate the molecules in their heads!) Nowadays, all advanced chemistry students play with plastic physical models, as well as virtual 3D-rendered ones.

        I don’t have much to say about the supposed tradeoffs between visualization and learning, since my focus is on practical tools, not education. But I will just say that artificially blindfolding learners “for their own good” reminds me of the fantasy stories at the beginning of Lockhart’s Lament, where music students are taught notation and theory throughout K-12, but only allowed to *listen* to their compositions once they reach the advanced college level. How else can they learn to simulate the music in their heads!

        Reply
        • 4. Bartosz Telenczuk  |  February 22, 2012 at 5:20 am

          Dear Bret,

          I guess the point of the post was that immediate presentation of the output of code encourages students to solve problems by trial and error. However, in my opinion, solving engineering, scientific, or programming problems requires a deeper understanding of the mechanics. In a way, understanding is the ability to simulate nature (a computer, circuit, or equation) in your head.

          I was very impressed with your talk and I agree with many of your points. Immediate feedback is a necessity for designers, but it might not be the best option for problem solvers (as pointed out in the post).

          Reply
        • 5. kenanbolukbasi  |  February 22, 2012 at 1:14 pm

          Hi Bret,

          I really like the way you push the human-understanding side of digital creation towards its limits, but I find it exceedingly misleading to draw a parallel between Elizabeth Cady Stanton and the others, including yourself. What Stanton declared to be a moral wrong was a moral wrong. There is no actual counter-argument; there was only the benefit one half of society got from the way things were. So they simply didn’t want things to change. That was just plain bad human nature.

          On the other hand, your other examples are people who pursued new ways of doing things with a “social” motivation. They created good tools that became widespread. That is actually great! But unlike Stanton’s case, “society” is not the only possible motivation behind creating mediums of creation. They will always live side by side with alternatives with different motivations. Can you say the same for people who still think man and woman are NOT equal? You can’t, because they can’t provide you a universally important alternative motivation.

          The fact that the “modeless” way of software usage is widespread now doesn’t mean “modes” were wrong; it means modes have a different motivation behind them. Some interface methodologies have higher learning curves but are actually more efficient than the widespread alternatives once they are learned. Modes are one of them; you can confirm it by asking the Blender (computer graphics tool) community. At least I, as a long-time user of Blender (and many other tools that employ modes), think so.

          There are lots of counter-arguments against the OOP paradigm. Does the widespread use of OOP prove all those arguments wrong? Does the fact that OOP’s designers had motivations other than the “social” one as a higher priority prove them wrong?

          I absolutely agree that the lack of tools for creation with a social motivation is wrong. It is a missing piece in our digital heritage. I hope people like you (and me, and everyone who shares that ideal) will fill in that missing piece someday soon. But that won’t prove most of today’s tools and methods wrong. It will only prove them less useful in some contexts of creation.

          Please pick your analogies more accurately next time, even if that makes your talks less inspirational.

          Reply
          • 6. kenanbolukbasi  |  February 22, 2012 at 1:29 pm

            I just realized my name on the comments I made linked to a wrong address; sorry for the misdirection. It is kenanb.wordpress.com

            Reply
  • 7. Andy Ko  |  February 21, 2012 at 11:15 am

    This is spot on. Immediate feedback is not necessarily going to lead to learning about computing. But immediate feedback does facilitate learning: it’s a general mechanism for building mappings between cause and effect, facilitating skill acquisition. It’s just that in a user interface context, the mapping being learned is a shallow model of an application’s use concepts, and not the computing underlying those concepts. For instance, Victor’s tree example makes it quite easy to learn that if the flower count argument is increased, more flowers will appear. It teaches nothing, however, about how the argument is used, what the mapping from the argument to the rendering loop is, or even what an argument is.
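    Andy’s distinction can be made concrete with a hedged sketch (the names are hypothetical, not from Victor’s actual demo): scrubbing the argument re-runs the drawing code, so the mapping from argument to picture is immediate, while the loop that consumes the argument stays invisible to the learner.

```javascript
// Hypothetical drawing routine: returns the flowers it would draw.
function drawTree(flowerCount) {
  const flowers = [];
  for (let i = 0; i < flowerCount; i++) {
    // This mapping from the argument to each flower is exactly what
    // the learner never sees -- only its visible effect.
    flowers.push({ angle: (i * 360) / flowerCount });
  }
  return flowers;
}

// "Scrubbing" the argument just re-invokes the code with the new value;
// the learner sees more flowers appear, not the loop above.
function scrub(render, value) {
  return render(value);
}
```

    The feedback loop teaches "bigger number, more flowers" quickly; it teaches nothing about `drawTree` itself.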

    Reply
  • 8. Jon Williams  |  February 21, 2012 at 1:33 pm

    I didn’t stop to consider the impact on students who don’t have a strong foundation. I was too busy imagining the productivity gains for those of us who do.

    Reply
  • 9. kenanbolukbasi  |  February 22, 2012 at 2:00 pm

    About the learning part: I think providing visual tools that directly react to change does not really affect learning in a bad way. The problem is the teaching material. If your exercise gives the student a creation at state x and asks the student to push it to state y, and there is an obvious path from x to y that can easily be found by brute force with the help of advanced tools, the student will use them without learning the algorithm. It is not a problem of advanced tools that help solve the problem; it is a problem of wrong teaching material. An exercise that tries to trigger learning by excluding tools is a bad exercise.

    Reply
  • 10. Steve Thomas  |  February 29, 2012 at 11:29 pm

    I spent a week watching my son and about 120 other junior high kids practice for a regional choir concert. One of the choir teachers asked me what I learned about teaching from watching (I am not a choir teacher, and only work with kids part-time on programming). After thinking about it, my strong reaction was “I wish I taught choir instead of programming.” In choir, the students and teacher get instant feedback. A master teacher can listen to 120 kids and realize which section needs help, and then within that section can figure out which kid needs help. She can then give training to a particular group while encouraging the rest to listen and learn in the process, and the kids do learn. They even sat relatively still for 10-hour days of practice (with breaks for lunch and dinner).

    So to relate this to Bret’s talk and Mark’s comments: I think part of the beauty of Bret’s ideas is not just the good visualizations but the instant feedback, and how important that is. I agree with Mark’s comments that good visualizations in and of themselves are not enough, and also with the comments about kids using trial and error rather than gaining a deeper understanding.

    So along the lines of Noel’s study, I wonder if we can design a system that allows for instant feedback not only for the user/learner, but also for the teacher who has to deal with 30 kids. I am thinking of something along the lines of being able to detect trial-and-error approaches (which I have seen many kids use, because they cared more about getting to the next level in a game, or about getting it done as quickly as possible because their parents and/or teacher wanted them to, even though they found it really boring; sorry, separate issue). Every time I see kids use trial and error I can quickly detect it, and I bet with well-designed “worlds” we could do the same, and detect other common “bugs” in kids’ heads. This would help the teacher quickly identify and work with kids who are having certain types of problems.

    Lastly, for those who know me, I can’t finish without a plug for Etoys. I was talking with the Physical Etoys team in Argentina (http://tecnodacta.com.ar/gira/projects/physical-etoys/) and discussing with Ricardo how much of what Bret demonstrated is already in Etoys (without the beautiful graphics). For example, the ability to change a number by dragging on it (I actually didn’t know this existed until Ricardo pointed it out), and with Ticking scripts you do instantly see the effects of your changes.

    Reply
  • […] personal data collection (personal informatics) as a means to self awareness and personal growth.  A key feature of growth and learning is feedback.  [Stick with me here.]  Feedback is where you become aware of the results of your […]

    Reply
  • 12. Florian Thiel  |  March 4, 2012 at 5:35 am

    Cause-and-effect mapping through instant feedback is a powerful learning tool (as neuroscience points out).

    That said, *apparent causation* can be really treacherous, especially for learning. Causation is just an abstraction, meaning that what we call causation is the *current best* theory about why something happens.

    Models (as Bret’s simulations) often imply causation. That’s what makes them powerful and convincing, as in good visualizations, good simulations, good talks. But without deeper understanding of the underlying mechanics, people may draw the wrong (simplistic) conclusions. That’s why models and abstractions have to be carefully designed *for a clear purpose*. Using a model for a different purpose can be misleading.

    An example:
    When I first saw Bret’s amazing visualization of dynamic systems I thought, “Great, that would be awesome to use for political decision making. E.g., let people alter the tax rate and see what happens to government budget, education spending, welfare, etc.”

    The problem is, we don’t understand the intricate effects of altering tax rates. Employment might be influenced, spending more on education may alter the job market (and future politics) fundamentally in the long run, the list goes on.

    So, when interpreting the simulation mentioned above, you have to be aware of all the assumptions that were used to create it; otherwise you might have created a dangerous tool.

    Summary: “Choose your models well, especially for learning”.

    (I’m still convinced that public simulations that allow all people to predict the future based on the manipulation of some fundamental variables are a better tool for decision making than the tools we have now…)

    Reply
  • 13. Will R.  |  March 30, 2012 at 2:15 am

    I think that the talk was not geared towards education, but more towards practitioners. I am a physicist and disagree with the statement that this is not useful for problem solvers. I think it depends on the mode that you are operating in. For example, suppose I am confronted with a new phenomenon — say, a new material with exotic properties. After characterization, I may want to come up with a “toy” model to see if it captures the basic idea of the phenomenon; being able to quickly test that model is extremely useful. Next, I may want to try more nuanced models to see if they more accurately reproduce the results. If they do, then we might move on; if not, then there’s something interesting going on and there might be an opportunity to learn something new :> Bret’s idea is that if we had better tools to explore models quickly and visually, then we could develop a better intuition for our models and test/refine/discard them more quickly. I find this to be powerful — though for some of the modeling that we currently do in physics, the turnaround times are fairly long, because the models are complex or expensive to calculate, so it takes longer to develop an intuition…

    Reply
  • 14. Meta-tools for exploring explanations « Jon Udell  |  May 8, 2012 at 6:41 pm

    […] not, be. Noting that Bret’s demo “looks like a powerful visualization system,” Mark Guzdial wrote: The problem is that visualization is about making information immediate and accessible, […]

    Reply
  • […] With a bold claim, “Khan Academy Launches the Future of Computer Science Education,” TechCrunch described Khan’s new foray into computer science.  They’ve had CS videos in the past, but now they have a powerful text editor in which students can edit JavaScript, or manipulate variables like in Bret Victor’s cool demo. […]

    Reply
  • […] essay is explicitly a response to the Khan Academy’s new CS learning supports, and includes many ideas from his demo/video on making programming systems more visible and reactive for the programmer, but goes beyond that […]

    Reply
  • 17. pattern1sentence  |  December 22, 2012 at 9:55 pm

    Bret’s talk, as I understand it, was about creativity and about the designer or creative artist being “connected” to the design outcome. He points out how easy it is for someone riding the technology of the day to be totally drowned in the swing of the day, without a chance to examine his own inner feelings about the things he is making with that technology. Bret is pointing to the fact that most of us are simply being bulldozed by the organizational/commercial machinery. This happens in any field, not just IT; it is a cultural pressure, in this case a technological one, to just conform and get along. From that perspective, declaring yourself mode-less requires the courage and the conviction to make that long and painful march on your own.
    Bret’s views about connectedness have implications for our education system as well.

    There is one connection between creativity and education: what is delivered through the education system was designed [created] by someone, be it Einstein or Archimedes, and it is beneficial to later generations to know the design intents and motivations of the original designer. It is only a fair march of civilization that we no longer take things on the basis of prior social glamour and blind deference. I am sure the inventors themselves would agree with and support this attitude.
    The similarity between the student learning computer science and the end user using an engineered artifact or a piece of software is that both are driven by the motivation of the designer at the time of the invention or design. It is part of a necessary education, and a necessity in our effort to make society a better place, that the end user and the student both become aware of what the invention tried to solve and how it solved it.
    The current technology of software and education does not try to bring this original design decision, in real time, to the end user or the student. The reason it cannot or does not do that has deep roots in how we look at invention and its utility, how we look at ourselves, and how we decide what students should learn. The same reasons stop us from making the technologies that would allow the designer to make things directly, rather than through cryptic indirect methods. I believe there is a lot to liberate on this front of human and social perception, values, and egos.

    There has been a lot of dehumanization in setting up programming languages that give no special representation to the subject matter’s starter concepts, features, questions, and semantics, and in trying to stash business logic into the narrow funnels of if-then clauses and do-loops around cryptic variables, or even pretty pictures!

    I believe Bret’s talk is pointing down that long road of liberation. Thanks, Bret.

    Reply
