Posts tagged ‘Squeak’

Code Smells might suggest a different and better Notional Machine: Maybe students want more than one main()

There is a body of research that looks for “code smells” in Scratch projects. “Code smells” are characteristics of code that suggest a deeper problem (see Wikipedia description here). I have argued that these shouldn’t be applied to Scratch, that we’re confusing software engineering with what students are doing with computing (see post here).

One of the smells is having code lying around that isn't actually executed from the Go button, the green flag in Scratch. The argument is that code that's not executed from the Go button is unreachable. That's a very main()-oriented definition of what matters. There was a discussion on Twitter about that "smell" and why it's inappropriate to apply to Scratch. I know that when I program in GP (another block-based language), I often leave little bits of maintenance code lying around that I might use to set the world's state.

There’s another possibility for code lying around that isn’t connected and thus doesn’t executd properly — it should execute properly. There’s evidence that novice students are pretty comfortable with the idea of programs/functions/codechunks executing in parallel. They want more than one main() at once. It’s our programming systems that can’t handle this idea well.  Our languages need to step up to the notional machines that students can and want to use.

For example, in Squeak eToys, it’s pretty common to create multiple scripts to control one object. In the below example, one script is continually telling the car to turn, and the other script is continually telling the car to go forward. The overall effect is that the car turns in circles.

I was on Kayla DesPortes's dissertation committee (she's now at NYU!). She asked novice programmers to write a script to make two lights on an Arduino blink. She gave them the code to blink one light: in a Forever loop, they raise the voltage on a pin, then wait a bit, then lower the voltage, then wait a bit. That makes a single light blink.

The obvious thing that more than half of the participants in her study did was to duplicate the code — either putting the copies in parallel or in sequence. One block blinked the light on one pin, and the other block blinked the light on the other pin. However, both blocks were Forever loops, and only one script can execute on the Arduino at a time.

On the Arduino, what the students did was buggy. It “smelled” because the second or parallel Forever block would never execute.
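The two notional machines can be sketched in Python, as a hypothetical stand-in for the Arduino blocks (the pin numbers and the bounded tick count are made up for illustration; a real Forever block never terminates). Run in sequence, the second "forever" loop is unreachable; run as two scripts, both lights blink:

```python
import threading

def blink_forever(pin, log, ticks=3):
    # Stand-in for a Forever block: toggle `pin` each tick.
    # (Bounded here so the sketch terminates; on the Arduino it never would.)
    for _ in range(ticks):
        log.append(pin)

# Sequential composition -- what the Arduino actually does:
# the second block starts only after the first one ends, i.e. never.
seq_log = []
blink_forever(13, seq_log)
blink_forever(12, seq_log)  # unreachable if the first loop really ran forever

# Parallel composition -- the notional machine the students expected:
# two scripts, each its own "main", running at the same time.
par_log = []
scripts = [threading.Thread(target=blink_forever, args=(pin, par_log))
           for pin in (13, 12)]
for s in scripts:
    s.start()
for s in scripts:
    s.join()

print(sorted(set(par_log)))  # both pins toggled: [12, 13]
```

The parallel version is exactly what Scratch or eToys would do with two scripts attached to one object; the sequential version is what the single-threaded Arduino runtime gives you.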

These examples suggest that parallel execution of scripts might be normal and even expected for novices. Maybe parallel execution is an attribute of a notional machine that is natural and even easier for students than trying to figure out how to do everything in one loop. Maybe concurrency is more natural than sequentiality.

Something that “smells” to a software engineer might actually be easier to understand for a layperson.

March 18, 2019 at 7:00 am 13 comments

MicroBlocks Joins Conservancy #CSEdWeek

This is great news for fans of GP and John Maloney’s many cool projects. MicroBlocks is a form of GP. This means that GP can be funded through contributions to the Conservancy.

We’re proud to announce that we’re bringing MicroBlocks into the Conservancy as our newest member project. MicroBlocks provides a quick way for new programmers to jump right in using “blocks” to make toys or tools. People have been proclaiming that IoT is the future for almost a decade, so we’re very pleased to be able to support a human-friendly project that makes it really easy to get started building embedded stuff. Curious? Check out a few of the neat things people have already built with MicroBlocks.

MicroBlocks is the next in a long line of open projects for beginners or "casual programmers" led by John Maloney, one of the creators of Squeak (also a Conservancy project!) and a longtime Scratch contributor. MicroBlocks is a new programming language that runs right inside microcontroller boards such as the micro:bit, the NodeMCU and many Arduino boards. The versatility and interactivity of MicroBlocks helps users build their own custom tools for everything from wearables to model rockets or custom measuring devices and funky synthesizers.

Source: MicroBlocks Joins Conservancy

December 5, 2018 at 7:00 am Leave a comment

Creating CS Meetups for Constructionist Adult Education

A few months ago, I wrote a post on Constructionism for Adults. I argued that we want constructionist learning for adults, but most constructionist learning environments are aimed at children. I suggested that adults have three challenges in constructionism that kids don’t have:

  • Adults have a “face” (in the Goffman sense) that they want to preserve.
  • Adults don’t necessarily have expertise in an area, but as adults, they are presumed to have expertise.
  • Adults have less free time and more responsibilities than children.

I mentioned in that post that I was learning to play the ukulele, and that that experience was leading to new insights for me about adult education. I’m going to continue to use my ukulele learning to suggest a way to create constructionist learning opportunities for adults.

Legitimate Peripheral Participation for Adult Learning

From this point of view a very remarkable aspect of the Samba School is the presence in one place of people engaged in a common activity – dancing – at all levels of competence from beginning children who seem scarcely yet able to talk, to superstars who would not be put to shame by the soloists of dance companies anywhere in the world. The fact of being together would in itself be “educational” for the beginners; but what is more deeply so is the degree of interaction between dancers of different levels of competence. From time to time a dancer will gather a group of others to work together on some technical aspect; the life of the group might be ten minutes or half an hour, its average age five or twenty five, its mode of operation might be highly didactic or more simply a chance to interact with a more advanced dancer. The details are not important: what counts is the weaving of education into the larger, richer cultural-social experience of the Samba School.

So we have as our problem: to transfer the positive features of the Samba School into the context of learning traditional “school material” — let’s say mathematics or grammar. Can we solve it?

— Seymour Papert, “Some Poetic and Social Criteria for Education Design” (1975)

What Seymour was seeing in Samba schools is what Jean Lave and Etienne Wenger called a community of practice. My colleagues Jose Zagal and Amy Bruckman have a wonderful paper describing how Samba schools are a form of community of practice, and how that model appears in the Computer Clubhouses that Yasmin writes about in her new book. In their influential 1991 book Situated Learning: Legitimate Peripheral Participation, Lave and Wenger described several examples of how learning occurs in everyday settings, often with adults. Lave and Wenger point out:

  • There are the midwives who train their daughters, who start out just going along to help their mothers at births.
  • There are the tailors who start out by delivering fabric and pieces between shops, and in that way, get to see many shops — without actually doing tailoring but still doing something useful to being a tailor.
  • There are the attendees at Alcoholics Anonymous meetings who learn to tell their stories through listening to role models and getting feedback from others.

There are some key elements to these stories:

  • Newcomers start out doing something useful, but on the periphery of the community — hence, legitimate peripheral participation. Jose and Amy point out that successful Samba schools are flexible to outsiders (anyone can become a newcomer).
  • Everyone sees practice (story-telling, being a tailor, helping at a birth, dancing at Samba school) at different levels. Jose and Amy talk about having a diversity of membership (socio-economic, age, race, and expertise) and holding public events to exhibit practice.
  • There are some members of the community of practice who are clearly at the center. They serve as role models for others. From the newcomers to those practicing but not yet central, everyone strives to learn to become like those at the center of the community of practice.

Ukulele Meet-up As Samba School and Community of Practice

In my quest to learn to play ukulele, I’ve joined the Southeast Ukers, a group of ukulele players in Atlanta. I was fortunate to know a Uker who invited me to a meet-up. A meet-up is the experience I’ve had that is closest to how I understand a Samba school.

The meet-up is held at a local Hawaiian BBQ restaurant at 2 pm on the 1st and 3rd Sundays of the month. Ukers show up with a couple of ukulele songbooks containing literally hundreds of songs. (I happened to have one of them on my iPad when I first went, and had both by my second meet-up.)

For the first 90 minutes, it’s a “strum-along.” The leader calls out a page number, then after a count off, everyone plays the same song and sings along. This is a remarkably successful learning activity for me as a newcomer.

  • It’s completely safe. If I can play along, I do. If I can’t, I just sing, or just watch. If I can play the chords but more slowly, I catch up on the second or third strum of a measure. I can immediately hear if I’m getting it right (right chord, right rhythm) or if I made a mistake. The people right next to me can hear me and can comment on my playing, but only those — it’s a big group.
  • It’s a public opportunity for learning. I know what chords everyone is playing. I can look around and see how everyone else plays it.
  • While everyone is strumming, the really good players are picking individual notes, or doing tricky rhythms. I can hear those, and watch them do it, and develop new goals for things I want to learn.

The gaps between the songs are when a lot of the learning happens for me. I get coaching (e.g., “You are doing really well!” or “I heard you stammer in your rhythm on that hard chord change”). I can ask specific questions and get specific advice. I’ve received tips on how to make D7 chords more easily, and different ways to do barre chords.

After 90 minutes, it’s open-mic time. Individual ukers sign up during the strum-along, and then go up to the corner stage to perform (a quality setup, with separate mics for singing and for playing and someone at a sound board). Here’s where we get to see those on their way or at the center of the community of practice. Those at the center of the community of practice reference other meet-ups and other performances, and often play their own compositions.

As a newcomer, I stare slack-jawed at the open-mic performances. They create music that I didn’t know could be made on a ukulele. Slowly, I’m starting to imagine myself playing at open-mic, even writing my own music. I’m starting to set a personal goal to become more central to this community of practice.

At a meet-up, I talk to my fellow ukers and get a sense of how much effort it takes to develop that level of expertise, and how much it will take me to reach different levels myself. There are no expectations set on me, and no presumption of expertise. I can decide for myself how good I want to get and how much effort I can afford to put in. I can set my own pace for when I might one day sign up for an open-mic performance, and maybe even try to compose my own music. (But it won't be soon.)

Creating a Computing Samba/Meet-Up Culture

Could we create an experience like the Samba school or like the meet-up for learning computing by adults, like undergraduates, end-user programmers, and high school teachers? What are the critical parts that we would need to duplicate?

It must be safe. People should be able to save face at the meet-up. Participants need to be able to talk with one another privately, without overhead (e.g., learning some complicated mechanism to open a private chat line). Newcomers need to be able to participate without expectation or responsibility, but be able to take on expectation and responsibility as they become more central to the community.

There must be legitimate peripheral participation. Newcomers have to be able to participate in a way that’s meaningful while working at the edge of the community of practice. Asking the noobs in an open-source project to write the docs or to do user testing is not a form of legitimate peripheral participation because most open source projects don’t care about either of those. The activity is not valued.

Everyone’s work must be visible. Newcomers should be able to see the great work of the more central participants just by looking around. This is probably the trickiest part. We tend to confuse accessibility with visibility. Yes, on an open source project, everyone’s contributions are accessible — if you can figure out github, and figure out which files are meaningful, and figure out who contributed which. Visible means that you can look around without overhead and see what’s going on.

I must be able to work alone. Everyone needs a lot of hours of practice to develop expertise. It can’t happen just in the meetup. There needs to be a way to develop one’s work alone, and share it in the meetup.

A Proposed Computing Meet-Up Context

Here are some early thoughts on what it might be like to create an environment for learning computing the way that the ukulele meetup works.

Years ago, the Kansas environment was implemented in the programming language Self. Kansas was remarkable. It was a shared desktop where all participants could see each other, see their cursors, and see their developing work.

Lex Spoon created a version of Kansas for the Squeak programming language called Nebraska (for another “large, flat, sparsely-populated space”). Nebraska in Squeak is particularly interesting for a meet-up because all the rich multi-media features of Squeak are available in both a programmable and a drag-and-drop form.

Here’s a sketch of what I propose, using a shared space like Kansas or Nebraska:

  • Participants come to a physical space with their laptops. Physical co-location is key for safe and easy peer communication. A new journal article on co-located viewing of MOOCs suggests that co-location may dramatically improve learning.
  • The participants log on to a shared Kansas/Nebraska server, which is displayed on an ultra-high-resolution display.
  • The participants work together to create a multimedia show.
    • Newcomers can build the graphical or audio elements (perhaps some developed at home and brought to the meetup). Building can start in drag-and-drop form, but can develop into code elements. If something doesn’t work, it might not make it into the show, but it’s a contribution to the shared space, and it’s visible for comment and review.
    • All participants can watch others work, and can walk over to them to ask questions.
    • Participants can specialize, by focusing on different aspects of the performance (e.g., music, graphics, layout, synchronization).
    • Those more central to the community can assemble components and choreograph the whole performance (much as in a Samba school).

Would this kind of meet-up be a way for adults to learn computation in a constructionist manner?

October 8, 2014 at 8:24 am 8 comments

Facts that conflict with identity can lead to rejection: Teaching outside the mainstream

Thought-provoking piece on NPR.  Take parents who believe that the MMR vaccine causes autism.  Show them the evidence that that’s not true.  They might tell you that they believe you — but they become even less likely to vaccinate future children.  What?!?

The explanation (quoted below) is that these parents found a sense of identity in their role as vaccine-deniers.  They rejected the evidence at a deeply personal level, even if they cognitively seemed to buy it.

I wonder if this explains a phenomenon I’ve seen several times in CS education: teaching with a non-traditional but pedagogically-useful tool leads to rejection because it’s not the authentic/accepted tool.  I saw it as an issue of students being legitimate peripheral participants in a community of practice. Identity conflict offers a different explanation for why students (especially the most experienced) reject Scheme in CS1, or the use of IDE’s other than Eclipse, or even CS teacher reaction when asked not to use the UNIX command line.  It’s a rejection of their identity.

An example: I used to teach object-oriented programming and user interface software using Squeak. I had empirical evidence that it really worked well for student learning. But students hated it — especially the students who knew something about OOP and UI software. "Why aren't we using a real language? Real OOP practitioners use Java or C++!" I could point to Alan Kay's quote, "I invented the term Object-Oriented, and I can tell you I did not have C++ in mind." That didn't squelch their anger and outrage. I've always attributed their reaction to the perceived inauthenticity of Squeak — it's not what the majority of programmers used. But I now wonder if it's about a rejection of an identity. Students might be thinking, "I already know more about OOP than this bozo of a teacher! This is who I am! And I know that you use Java or C++!" Even showing them evidence that Squeak was more OOP, or that it could do anything they could do in Java or C++ (and some things that they couldn't), didn't matter. I was telling them facts, and they were arguing about identity.

What Nyhan seems to be finding is that when you're confronted by information that you don't like, at a certain level you accept that the information might be true, but it damages your sense of self-esteem. It damages something about your identity. And so what you do is you fight back against the new information. You try and marshal other kinds of information that would counter the new information coming in. In the political realm, Nyhan is exploring the possibility that if you boost people's self-esteem before you give them this disconfirming information, it might help them take in the new information because they don't feel as threatened as they might have been otherwise.

via When It Comes To Vaccines, Science Can Run Into A Brick Wall : NPR.

March 31, 2014 at 1:13 am 36 comments

Why is computing with media so rarely supported in programming languages?

Our publisher has asked Barb and me to explore making a 3rd edition of our Python Media Computation book, and in particular, they would like us to talk about and use Python 3.0 features.  Our book isn’t a generic Python book — we can only use a language with our Media Computation approach if we can manipulate the pixels in the images and the samples in the recorded sounds.  Can I do that in Python 3.0?

The trick of our Java and Python books is that we can manipulate pixels and samples in Java.  I wrote the original libraries, which did work — but then Barbara saw my code, eventually stopped laughing, and re-wrote them as a professional programmer would.  Our Python Media Computation book doesn’t use normal C-based Python.  We use Jython, a Python interpreter written in Java, so that we could use those same classes.  We solved the problem of accessing pixels and samples only once, but used it with two languages.  We can’t use that approach for the Python 3.0 request, because Jython is several versions behind in compatibility with CPython — Jython is only at Python 2.5 right now, and there won’t be Jython 3.0 for some time yet.

We used our Java-only media solution because it was just so hard to access pixels and samples in Python, especially in a cross-platform manner.  Very few multimedia libraries support lower levels of access — even in other languages.  Sure, we can play sounds and show pictures, but changing sounds and pictures is much more rare.  I know how to do it in Squeak (where it’s easy and fast), and I’ve seen it done in C (particularly in Jennifer Burg’s work).
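To make concrete what "lower levels of access" means here, the CS1-level pixel loop of Media Computation can be sketched in plain Python over a bare list of RGB tuples — a stand-in I'm using for illustration, since the real image object is exactly the part the libraries don't provide:

```python
def to_grayscale(pixels):
    """Replace each (r, g, b) pixel with its gray equivalent."""
    gray = []
    for (r, g, b) in pixels:
        luminance = (r + g + b) // 3  # simple average of the color channels
        gray.append((luminance, luminance, luminance))
    return gray

picture = [(255, 0, 0), (0, 255, 0), (10, 20, 30)]
print(to_grayscale(picture))
# [(85, 85, 85), (85, 85, 85), (20, 20, 20)]
```

The loop itself is trivial — which is the point. All that's missing from the libraries is a way to get the pixels in and out of a real, displayable image.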

I have so far struck out in finding any way to manipulate pixels and samples in CPython. (I don't have the cycles to build my own cross-platform C libraries and link them into CPython.) My biggest disappointment is Pygame, which I tried to use last summer. The API documentation suggests that everything is there! It just doesn't work — at least, not for sound. Pixels work fine in Pygame. But every sound I opened with Pygame reported a sampling rate of 44100, even if I knew it wasn't. The exact same code manipulating sounds worked differently on Mac and Windows. I just checked, and Pygame hasn't come out with a new version since 2009, so the bugs I found last summer are probably still there.

What I don’t get is why libraries don’t support this level of manipulation as a given, simply obvious.  Manipulating pixels and samples is fun and easy — we’ve shown that it’s a CS1-level activity.  If the facilities are available to play sounds and show pictures, then the pixel and samples are already there — in memory, somewhere.  Just provide access!  Why is computing with media so rarely supported in programming languages?  Why don’t computer scientists argue for more than just playing and showing from our libraries?  Are there other languages where it’s better?  I have a book on multimedia in Haskell, but it doesn’t do pixels and samples either.  I heard Donald Knuth once say that the hallmark of a computer scientist is that we shift our work across levels of abstractions, all the way down to bytes when necessary.  Don’t we want that for media, too?

So, no, I still have no idea how to do media computation with Python 3.0.  If anyone has a suggestion of where to look, I’d appreciate it!

June 17, 2011 at 8:44 am 27 comments

Finally, Programming Environments for Blind Students

At CE21, I got a chance to talk to Chris Hundhausen who told me about his SIGCSE 2011 paper on building programming environments for blind students.  Susan Gerhart has challenged our community of computing educators to think about how our pedagogical tools can be used with visually disabled students.  She’s completely right — we tend to use graphical notations (as in Alice, Scratch, and Squeak eToys) to improve students’ ability to get started with computing, but those are useless for a blind student.

Chris is actually working on several different ideas, including audio debuggers and manipulatives (physical artifacts) for representing the programs. Chris said that his collaborator, Andreas Stefik (Chris's former student), is excellent at empirical methods, so all his design ideas are carefully developed with lots of trials. The paper includes results from a test of the whole suite of tools.

I hope that lots of people follow up on Chris's work and direction. My bet is that what they're finding will enable multi-sensory programming environments that will help everyone.


February 4, 2011 at 2:07 pm 3 comments

Swiki on the iPhone!

I went poking around to figure out who did the Scratch port to the iPhone.  Not surprisingly, I found John McIntosh who built many of the early Squeak VMs.

I found that he’s also developed and selling a Wiki server for the iPhone.  When I looked at his Wiki Edit Page, I recognized Swiki syntax!  His Wiki has the asterisk delimiters that I put in the original Swiki (which Ward Cunningham, inventor of the Wiki, really didn’t like), and *phrase>URL* syntax that Jeff Rick built into his versions of the Swiki (and AniAniWeb).  I don’t know if any of John’s code includes any of our Swiki code — what’s cool for me is seeing the echoes of that original work.

March 8, 2010 at 2:27 pm 3 comments

Can direct manipulation lower the barriers to computer programming and promote transfer of training?

Chris Hundhausen has a really important paper in the latest issue of ACM TOCHI: Can direct manipulation lower the barriers to computer programming and promote transfer of training?.

We’ve known for a couple decades now that programmers read and understand visual programs no better than textual programs — Thomas Green, Marian Petre, and Tom Moher settled that question a long time ago.  However, everybody experiences that starting with a visual programming language is easier than a textual language.  But does it transfer?  If you want students to eventually program in text, does starting out with Alice or Squeak or Etoys hurt? Given Chris found: “We found that the direct manipulation interface promoted significantly better initial programming outcomes, positive transfer to the textual interface, and significant differences in programming processes. Our results show that direct manipulation interfaces can provide novices with a ‘way in’ to traditional textual programming.”  I think that this is big news for computing educators.

November 16, 2009 at 5:28 pm 4 comments

The Whole Package Matters

The enormous discussion on “Lisp and Smalltalk are dead” (latest count: over 2,900 views) has spawned a parallel thread of discussion within the College of Computing.  One of the points of discussion that I found particularly interesting is the discussion of Vinny’s comment, about how Smalltalk wouldn’t be so bad if he could use it in the editor of his choice.

At first, I thought that was an entirely orthogonal issue.  What does the editor have to do with the expressiveness of the language?  Amy Bruckman called me on that point, and now I see her point.  The user interface does matter.  How well the interface supports the language does matter.  One of the biggest complaints that students had with Squeak when I taught it was the user interface. Complaints ranged from how unpleasant the colors were (which were changeable, but as Nudge points out, when the default doesn’t work well, people aren’t willing to make the choice) to how hard it was to find the menu item you wanted.  I chalked that up to being part of the learning curve, but maybe that’s the point.

I’ve been exploring some other languages recently like Ruby, Scala, and various Lisp/Scheme implementations.  I’m surprised at how few of these have editors or IDEs that come with them.  (With the great exception of DrScheme which is my favorite pedagogical IDE.)  Most of these have some kind of Eclipse plug-in, which doesn’t do me any good at all.  I have never been able to get Eclipse to install properly, and never got my head around how to use it.  On the other hand, they all have Emacs plug-ins, too.  I can use Emacs.  I’m not great at it (I’m a vi guy from my internship at Bell Labs in 1982), but I can use it.  And for the most part, it’s all the same then — it’s reliable and relatively consistent, whatever language I’m playing with.

Several years ago, Don Gentner and Jakob Nielsen wrote a great paper called The Anti-Mac Interface. They considered what kind of interface you'd get if you reliably broke every one of the Mac's UI guidelines. They found that the result was a consistent and powerful user interface. It was no longer good for novices doing occasional tasks. It was great for experts doing regular tasks who wanted shortcuts and macros.

Nudge points out that the surface level matters, and if that isn't smooth, people are discouraged from looking deeper. The user-interface level of these tools matters, and if it's not understandable, nobody gets to the expressiveness.

August 18, 2009 at 8:06 am 4 comments
