Posts tagged ‘visual programming’
Live coders challenge CS to think about expression again
Bret Victor’s great time-traveling video emphasized that the computer scientists of the 1960s and 1970s were concerned with expression. How do you talk to a computer, and how should it help you express yourself? As I have complained previously, everything but C and C-like languages has disappeared from our undergraduate curriculum. Bret Victor has explored why we talked about expression in those earlier years. I have a different question: How do we get computer scientists to think about expression again?
Live coders think about and talk about expression, as evidenced by the conversations at Dagstuhl. They build their own languages and their own systems. They talk about the abstractions they’re using (both musical and computational, like temporal recursion), how their languages support various sound-generation techniques (e.g., unit generators, synthesized instruments, sampled sounds), and various musical styles. If you look at the live coders on the Dagstuhl Seminar participant list, most of them are in music programs, not computer science. Why are the musicians more willing to explore expressive notations than the computer scientists?
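Temporal recursion deserves a concrete illustration. In Impromptu and Extempore it is written in Scheme, but the idea transfers; here is a minimal Python sketch, where the metronome function and its printed “notes” are hypothetical stand-ins for real synthesis calls:

```python
import threading
import time

def metronome(beat, interval=0.5):
    """Temporal recursion: do this beat's work, then schedule a call to
    this same function at a point in the future, rather than looping."""
    if beat >= 8:                      # stop after eight beats
        return
    print(f"tick {beat} at {time.strftime('%H:%M:%S')}")
    # The recursive call is deferred in time, so the "loop" runs at
    # musical tempo instead of machine speed, and the function can be
    # redefined between beats -- which is what makes live coding live.
    threading.Timer(interval, metronome, args=(beat + 1, interval)).start()

metronome(0)
```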
Lisp is alive and well in live coding. I now have a half-dozen of these systems running on my laptop. Overtone is a wonderful system based in Clojure. (See here for more on Overtone, which is particularly powerful combined with Quil for Processing visuals alongside the music.) Andrew Sorensen’s Impromptu was in Scheme, as is his new environment Extempore.
Extempore is amazing. Take a look at this video of an installation called “Physics Playroom,” all controlled in Extempore. It’s a huge touch-sensitive display that lets groups of students play with physics in real time, e.g., exploring gravity on different planets. Andrew said that he could have built 90% of this in Impromptu, but the low-level bits would have had to be coded in C. He wasn’t happy about switching expressive tools mid-project, so he created Extempore, whose lowest-level parts are compiled (via LLVM) directly to machine code. Andrew went to this effort because he cares a lot about the expressiveness of his tools. (At the opposite end of the scale from the Physics Playroom, see this video of Extempore running on ARM boards.)
Not everything is S-expressions. Thor Magnusson’s Ixi Lang (more on the Ixi Lang project) is remarkable. I love how he explores the use of text programming as both a notation and a feedback mechanism. When he manipulates sequences of notes or percussion patterns, the line on which he defined the sequence changes as well (shown in his sessions in red and green, as agents/lines that have been manipulated by other operations).
Tidal, from Alex McLean, is a domain-specific language built on top of Haskell, and his new Texture system moves toward more of a diagramming notation. Dave Griffiths built his live coding environment, Fluxus, in Racket, the Scheme descendant used in the Program by Design and Bootstrap CS education projects. Dave did all his live coding at Dagstuhl using his Scheme Bricks, a Scratch-like block language that represents Scheme forms. (See here for Dave’s blog post on the Dagstuhl seminar.)
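Part of why Haskell suits Tidal: its central abstraction is, roughly, a pattern as a function from time to the events occurring in it, and such functions compose cleanly. Here is a toy Python rendering of that idea; every name is hypothetical, and none of this matches Tidal’s actual Haskell API:

```python
# A toy take on Tidal's core idea: a pattern is a function from a cycle
# number to the events that occur in that cycle. All names hypothetical.

def pure(sound):
    """A pattern that plays one sound per cycle."""
    return lambda cycle: [(0.0, sound)]          # (onset within cycle, sound)

def cat(*patterns):
    """Alternate between patterns, one per cycle."""
    return lambda cycle: patterns[cycle % len(patterns)](cycle)

def fast(n, pattern):
    """Squeeze n repetitions of a pattern into each cycle."""
    def sped_up(cycle):
        events = []
        for i in range(n):
            for onset, sound in pattern(cycle * n + i):
                events.append((i / n + onset / n, sound))
        return events
    return sped_up

drums = cat(pure("bd"), fast(2, pure("sn")))     # a kick, then two snares
for c in range(4):
    print(c, drums(c))
```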
How many of our undergraduates have ever seen or used notations like these? How many have considered the design challenges of creating a programming notation for a given domain? Consider especially the constraints of live coding: expressiveness, conciseness, and usability at 2 am in a dance club. David Ogborn raised the fascinating question at Dagstuhl of designing programming languages for ad hoc groups, as a collaborative design process. Some evidence suggests that there may be nine times as many end-user programmers in various domains as professional software developers. Do we teach CS students how to design programming notations to meet the needs and constraints of various domains and communities?
I wonder how many other domains are exploring their own notations and their own programming languages, without much contribution or involvement from computer scientists. I hope that the live coders and others designing domain-specific languages challenge academic computer scientists to think again about expression. I really can’t believe that the peak of human expression in a computing medium was reached in 1973 with C, and that everything since (Java, C++, C#) is just a variation on the motif. We in computer science should be leading the exploration of expressive programming language design for different domains.
google-blockly – A visual programming language that generates JavaScript code
Intriguing new web-based visual programming language from Google that generates JavaScript or Python.
Blockly is a web-based, graphical programming language. Users can drag blocks together to build an application. No typing required.
Check out the demos:
Maze – Use Blockly to solve a maze.
Code – Export a Blockly program into JavaScript, Dart, Python or XML.
RTL – See what Blockly looks like in right-to-left mode (for Arabic and Hebrew).
Blockly is currently a technology preview. We want developers to be able to play with Blockly, give feedback, and think of novel uses for it. All the code is free and open source. Join the mailing list and let us know what you think.
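To make the Code demo above concrete: stacking a “repeat 10 times” block around a “print” block and exporting to Python yields ordinary loop code, something like this (a representative sketch, not literal output from Blockly’s generator):

```python
# Roughly what exporting a tiny Blockly program -- a "repeat 10 times"
# block wrapping a "print" block -- to Python might look like.
for count in range(10):
    print('Hello from Blockly!')
```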
via google-blockly – A visual programming language – Google Project Hosting.
Doing with Images Makes Symbols, and from Action to Abstraction
Here’s my morning Twilight Zone moment for you. I’m here at the SILC Center doing an NSF Site Review. (I learned last night that I’m allowed to say that; it’s considered public knowledge.) I recommend this piece from Center Director Nora Newcombe as a readable introduction to their work. One of their theoretical framings is called “From Action to Abstraction”:
Learning from action to abstraction: In contrast to traditional views of the mind as an abstract information processor, recent theories of embodied cognition suggest that our representations of objects and events are often grounded in the sensorimotor systems we use to perceive and act on the world (Wilson, 2002). This linking of thought and action is readily observed when STEM practitioners talk about objects in their area of expertise. For example, organic chemists gesture heavily when discussing molecular structure and engineers rely on sketches in conceptual design. This leads us to believe that involving the action systems in learning may help to deepen students’ knowledge of abstract concepts by tying them to sensorimotor brain systems that are good at capturing spatial/action relationships. We are interested in understanding how performing different actions (ranging from feeling forces when learning about angular momentum in physics, to actually manipulating models of physical molecules in chemistry to learn about their spatial makeup, to sketching spatial relations in geosciences) might bolster spatial learning by engaging sensorimotor systems that might not otherwise be brought to bear on the concepts at hand. Moreover, we are interested in when this action information might harm performance by tying students’ representations too closely to the physical world and how tools such as gesture and sketching might serve as a bridge between concrete physical relations and more abstract knowledge. Finally, we are interested in how different forms of action can provide a window into learners’ minds by revealing information that they may not be able to articulate verbally.
Then I read this email from Brian Harvey of Berkeley (author of the excellent Computer Science Logo Style books) on the SIGCSE Members list talking about Alan Kay’s video on similar themes from several years back:
By the way, in our first course for CS majors I show them the 30? 40? year old Alan Kay “Doing with Images Makes Symbols” lecture (google it) [That’s a live link — I did the Googling for you], still the most inspiring thing I’ve seen about user interface design, and one thing I like about it is that, although it shows some details, it isn’t /about/ details, but about how ideas about human psychology inform UI design.
Kinda weird to get these similar ideas, from two different directions, in one morning, eh?
BYOB now available
Just posted by Brian Harvey to the SIGCSE-Members list:
To Computer Science educators looking for a non-intimidating but powerful programming language for introductory courses, we offer the alpha test version of BYOB (Build Your Own Blocks), an extension of Scratch (http://scratch.mit.edu), which is a visual programming language for young people in which programs are constructed by snapping together primitive blocks that control multimedia multi-character presentations.

BYOB extends Scratch to include first class lists and first class procedures. These additions are all it takes to enable the construction, by the BYOB user, of arbitrary data structures (trees, hash tables, etc.) without needing primitive blocks for every structure. BYOB also supports object oriented programming, either using its native animated objects (sprites) or building the OOP facilities explicitly, so students will understand how OOP can actually work, based on BYOB's first class procedures.

The plan is to release the final version of BYOB 3 in August (2010), but a public alpha test version (2.99) is available now at http://byob.berkeley.edu along with tutorial material in the form of Scratch projects (runnable in BYOB). This is and will remain free software, subject to the MIT Scratch license that sets conditions for distribution of modified versions such as BYOB.

-- Jens Moenig, MioSoft Corporation; Brian Harvey, University of California, Berkeley
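That claim about first-class procedures is worth unpacking: with procedures as values, you can build pairs, and from pairs, lists and trees, with no structure-specific primitives at all. BYOB expresses this with blocks; here is the same trick sketched in Python (not BYOB code, just the underlying Scheme idea translated):

```python
# Building a data structure out of nothing but first-class procedures:
# a pair is a closure that remembers two values and hands them back.

def cons(a, b):
    return lambda pick: a if pick == 0 else b   # the pair *is* a procedure

def car(pair):
    return pair(0)

def cdr(pair):
    return pair(1)

# Lists are chained pairs; trees are pairs of pairs. No primitive list
# or tree blocks are needed -- exactly the point BYOB is making.
lst = cons(1, cons(2, cons(3, None)))
print(car(lst), car(cdr(lst)), car(cdr(cdr(lst))))   # 1 2 3
```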
Is learning to program inherently hard?
In our educational technology class yesterday, we read and discussed a classic paper by John Anderson, Albert Corbett, Ken Koedinger, and Ray Pelletier, “Cognitive Tutors: Lessons Learned,” from The Journal of the Learning Sciences (1995, 4(2), 167-207). This paper presented ten years’ worth of data on cognitive tutors, including the Lisp Tutor. When the Lisp Tutor was tested in 1984, tutor-using students completed exercises 30% faster and performed 43% better on a posttest. In 1991, the authors ran a more careful evaluation, in a structure closer to a real course: tutor-using students completed exercises 64% faster and still did 30% better on a posttest. Wow!
Yet the students weren’t really learning to program. Yes, they learned Lisp really well, but they knew nothing about debugging, and nothing about getting from a problem to a program. In class, we made the argument that those limitations are actually good things: the Lisp Tutor succeeded precisely because it made the task manageable, while CS1 carries too large a cognitive load.
But here’s the question we got to wondering about: Could you build a cognitive tutor for all of programming? Cognitive tutors teach process, like problem-solving, guiding students through it with a technique they call “model-tracing.” But design is not a fixed process — there is no single path, and it involves tradeoffs. Debugging is an immensely difficult task, requiring the programmer to internalize a dynamic mental model of the program. These aren’t traceable problem-solving processes.
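To make “model-tracing” concrete: the tutor holds production rules for both correct and known-buggy steps, matches each student action against the rules applicable in the current state, and responds accordingly. Here is a toy sketch in Python, assuming nothing about the real Lisp Tutor’s rule format:

```python
# A toy model tracer. States and rules here are invented for
# illustration; real cognitive tutors use rich production systems.

CORRECT_RULES = {
    ("start", "(define (square x)"): "body",   # (state, step) -> next state
    ("body", "(* x x))"): "done",
}
BUGGY_RULES = {
    ("body", "(+ x x))"): "You doubled x instead of multiplying x by itself.",
}

def trace_step(state, student_input):
    """Match the student's step against correct and buggy rules."""
    if (state, student_input) in CORRECT_RULES:
        return CORRECT_RULES[(state, student_input)], "ok"
    if (state, student_input) in BUGGY_RULES:
        return state, BUGGY_RULES[(state, student_input)]
    return state, "That step doesn't match any path the tutor knows."

state = "start"
for step in ["(define (square x)", "(+ x x))", "(* x x))"]:
    state, feedback = trace_step(state, step)
    print(step, "->", feedback)
```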
CHI 2010 is going on this week here in Atlanta. I’m actually not attending (too expensive for too little that’s close to what I do), but I am following the Twitter feed and reading some of the papers. One of those is a really interesting paper on Toque: designing a cooking-based programming language for and with children. A group of researchers at the University of Maryland, College Park (with the always-intriguing Allison Druin) worked with children to design a programming language for themselves and other children, which the children programmed using a visual notation input through body motions (tracked via a Wiimote). As I scanned through the paper, one headline leaped out at me: “‘Confusing and Boring’: Loops.” Even with gifted designers, non-textual languages, and no keyboards, loops are just hard.
So what makes programming so hard to learn? Here’s a possibility: It’s inherently hard. Maybe the task of programming is innately one of the most complex cognitive tasks that humans have ever created.
Is it really harder than most other tasks? In considering this premise, I keep coming back to debugging. Physics wants students to develop a mental model of the physical world, one where gravity tugs and friction resists and electromagnetism is understood even if never seen. We have evidence that not all students really develop this complete mental model. However, using equations that can be applied plug-and-chug, plus a limited model, students can get by, and even take jobs with this more limited understanding of physics.
How do you debug without really understanding how the code works? How do you debug at all without developing a mental model of the program? There are still cognitive scientists who doubt that humans actually develop executable, runnable mental models at all. I bet someone could prove that such models exist using computer programming, because programmers have to have them to successfully understand and fix program behavior.
I don’t really believe that programming is the most cognitively complex activity humans have created. I am wondering about how hard it is, how to measure that complexity, and how the challenge of computing education may be greater than the challenge of other forms of STEM education.
Picture-driven computing
Researchers at MIT have a new system, Sikuli, that allows one to program with screenshots. For example, to get a message to your cell phone when a bus reaches a particular corner, “the programmer can simply plug screen shots into the script: when this (the pin) gets here (the corner), send me a text.” It sounds too good to be true, but when Allen Cypher says it’s good, you gotta be impressed.
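Sikuli scripts are written in Python (the system runs them on Jython), with screenshots standing in where other languages use names. A loose sketch of the bus example, in which the screenshot filename and the send_text() helper are hypothetical inventions for illustration, not part of Sikuli’s API:

```python
# Sketch of the bus-tracking idea in Sikuli's scripting style.
# exists() and wait() come from Sikuli's environment; the image file
# stands in for a screenshot the user would capture on their own map.

def send_text(message):
    # Hypothetical helper -- in real life, call an SMS gateway here.
    print("SMS:", message)

while True:
    # exists() searches the live screen for a region matching the image
    if exists("pin_at_corner.png"):
        send_text("The bus has reached the corner")
        break
    wait(30)   # in Sikuli, wait(n) with a number pauses n seconds
```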
“When I saw that, I thought, ‘Oh my God, you can do that?’” says Allen Cypher, a researcher at IBM’s Almaden Research Center who specializes in human-computer interactions. “I certainly never thought that you could do anything like that. Not only do they do it; they do it well. It’s already practical. I want to use it right away to do things I couldn’t do before.”
The article also says, “The researchers say that Sikuli could allow novice computer users to create their own programs without having to master a programming language.” Interesting question: Would this increase interest in programming (“I can do that? What else can I do?”) or decrease interest (“I can do whatever I want this way — why go further?”)?
Microsoft’s top developers prefer old-school coding methods
While visual programming can be easier to learn and can help make developers more productive, it’s also “easier to delude yourself,” said Butler Lampson, a technical fellow at Microsoft. For instance, “no one can ever tell you what a UML diagram means.”
via Microsoft’s top developers prefer old-school coding methods.
An interesting article about a panel of Microsoft distinguished engineers mostly whumping on “visual programming,” which is never defined and seems to mean everything from drawing buttons in a GUI layout tool to drag-and-drop programming. Maybe the engineers also said positive things about visual programming, but the author chose to emphasize only the negative.
Butler Lampson had the most enlightened comments (in the sense of being more balanced than most of the others). The quote above does reflect what we see in the research literature: visual programming may be easier to get started with (see Hundhausen’s TOCHI article), but it’s no easier to understand or debug (as Petre, Green, and Moher showed in their studies). Having taught UML for years, I do appreciate his comment that “no one can ever tell you what a UML diagram means.” The interesting psychological question for me is: why do we “delude ourselves” (as Lampson puts it)? Why do we believe that visual programming is a silver bullet? Maybe because there is a better way out there, but we haven’t found it yet? Or is it an illusion?