Slow change in CS1 Culture: China and Gates

November 5, 2010 at 3:06 am

I’m currently in China. Barb and I arrived in Beijing Wednesday night, visited the Great Wall and the Forbidden City on Thursday, and are now traveling to Jinan where I will be speaking on Saturday. For me, China feels the most foreign of any place I have visited. Qatar was very different, but China differs from my American expectations in everyday life in a way that Qatar didn’t, in values about food, money, time, and space.

I’ve been particularly thinking about culture since I read on the (unbelievably long) outbound flight a great interview with Melinda French Gates in the July/August issue of Smithsonian Magazine. It’s a special issue (that I’ve been working through very slowly), on the 40th anniversary of the magazine, about the next 40 years.

Q: You have referred to mistakes as “learning opportunities.” Which have had the greatest impact on your thinking?

One thing that was driven home on my last trip to India was how important it is to pair the best science with a deep understanding of traditional cultures. I was in Rae Bareli, a rural village in northern India, to see a project called Shivgarh. This is a Johns Hopkins research site that our foundation and USAID funded together, and the goal is to decrease infant mortality. The first six months of the Shivgarh project was spent on research to understand current newborn care practices, with a focus on identifying the practices that lead to neonatal deaths, and analyzing the perceptions on which these practices are based.

The researchers found that most mothers didn’t understand the importance of skin-to-skin contact, immediate breast feeding, or keeping the umbilical cord clean. However, by making analogies to important local customs, health workers were able not just to tell women what to do but also to explain why they should do it. In less than two years, Shivgarh has seen a 54 percent reduction in neonatal deaths in the target areas.

Gates does not actually answer the question about mistakes. Rather, she tells a story of a success, one that came from taking cultural values into account and introducing improvements as incremental changes in practice that still relate to those values. We’re left to wonder what the mistake was that gave her this insight. What happens if you force change? Maybe the change doesn’t get adopted. Maybe the change leads to something worse.

I’m no anthropologist, so connections I draw are those of an amateur. I think Gates’ story may offer insights for us in computing education. Look at the history of CS1, seen in terms of what’s happened to the AP CS. First it was in Pascal, and then C++ and Java. Python is clearly gaining a foothold in CS1, and maybe it will be a language for the new AP CS. If we look at that sequence, it feels to me like a series of incremental changes that maintains certain values about what an introductory language should look like and what students should learn to do. Think about some of the options that have been explored for CS1 during this same time, like Scheme, or Prolog, or Smalltalk. None of those gained the momentum of the AP CS languages. Why not? Maybe those other languages were too radical a change. Maybe they challenged the cultural values of the majority of CS1 teachers. I can well believe that we might have much greater success (measured in your choice of measures) if we all adopted a radically different language. But we didn’t.

There’s a similar story to be told about how we teach CS1. Media Computation challenges CS1 values about what we ask students to program. Most schools try Media Computation with their non-majors, where it doesn’t challenge CS1 cultural values. Only a few (like UCSD and UIC) have tried it with their majors. Pair programming is popular, though not in the majority of CS1s. I think it challenges CS1 teachers’ notions about the individual effort that CS1 students should make. Peer instruction and peer-led team learning may make greater inroads because they don’t challenge CS1 culture in the same way, though they do challenge our sense of what a CS1 teacher does.

In Lijun Ni’s research, she found that the main factor influencing teachers’ adoption was the personal excitement or appeal that the approach held for the teacher. The research evidence did not matter at all to her subjects. That supports Gates’ view: it’s about appealing to currently held values, not about making a rational argument. Values trump reason, especially long-held values.

I don’t really know what leads to changes in values and culture. I suspect that it’s important to keep innovating, to keep challenging culture and values, and that that’s what leads to change. Gates suggests that, if you really want to make change happen in the short term, you figure out what the current values and practices are, and introduce change as an increment on those practices, explained in terms of those values. That may be frustrating if you don’t agree with the values. I’ll bet that Gates’ researchers didn’t agree with all the values they found in Rae Bareli, but that’s not the point. You work with what is there.



20 Comments

  • 1. Alan Kay  |  November 5, 2010 at 9:04 am

    Hi Mark,

    Your analogy could possibly hold — if so, it is a real disaster and failure. Teachers of science (even a “half-science” like CS) are supposed to have (and need to have) an epistemological framework very different from that of a traditional culture.

    It has proven to be extremely difficult in most cases to move adults who have already committed to a “way of knowing” to a very different way of knowing. The successful transitions in the past have happened through the children of a culture adopting the new ways usually via schooling.

    This often happens very slowly because the adults (including most of the teachers) don’t cooperate. But there are very interesting cases of rapid change (such as the transition in Japan after Commodore Perry and the decision by the Japanese to Westernize).

    One of the deals with “scientific epistemology” is that we are supposed to do “more thinking and less believing”. And this is supposed to allow results that transcend the common view to be contemplated and understood by adults who did not grow up with that view.

    Historically this has not worked perfectly in science, but there is a lot of evidence that it has worked better than the process in a traditional culture.

    I think there is also a lot of evidence from the very odd choices, lack of historical perspective, considerable backsliding, and “reinventing the flat tire” that we’ve seen in computing over the last 30 years (at least) that there is very little (not enough) “modern thinking” going on in CS in general, and teachers of CS in particular.

    An important side point is that small changes in a deeply rooted culture do not necessarily add up to an epistemological shift. This is the “cargo cult” effect that is one of the most important principles of Anthropology and Social Psychology.

    Best wishes,


  • 2. Alex Rudnick  |  November 5, 2010 at 2:39 pm

    … values about food, money, time, and space.

    I’d be really interested to hear your thoughts on any or all of these, if you have time to write about them!

    (also: how filtered is your Internet connection? Does your Kindle circumvent the filtering, like recent news has suggested?)

    • 3. Mark Guzdial  |  November 5, 2010 at 7:17 pm

      I’m using VPN — only way to reach WordPress or Facebook.

  • 4. Bettina Bair  |  November 5, 2010 at 7:33 pm

    The question that I think is unanswered is why? Why these values? Why would women in rural North India not WANT skin-to-skin contact with their infants? Seems like it would take a really big community campaign to overcome a natural behavior like that. Did someone tell them that diseases are communicated that way? If so, perhaps it’s a reasonable position for them, considering the challenges of sanitation in some rural communities.

    So I also want to know why first language is important at all. Just from reading this post, I get the idea that there are lots of good computer languages. And if you put three computing professionals in a room, you’ll hear that each of them started with a different one, and then had to learn three more in the first five years of their career. But somehow in the last 30 years or so, we’ve gotten fixated on the idea that the language matters. So much so, there are regular flame wars erupting on the discussion boards about the natural superiority of python, perl, pascal and cobol. (threw those last two in to see if you were paying attention)

    But everyone knows in their heart, it’s not the medium, it’s the message.

    Perhaps it traces back to the basic misconceptions that the general public has about computing. Business owners know that they need IT help, but they don’t know how to ask for it. They throw a lot of jargon into a job advertisement and send it to the nearest college. The college posts the job ad, and all of the students believe that ‘expertise in C++’ is an important marketable skill.

    We need to take control of the conversation, and let our students and community know that while technology is important to our field, it’s a moving target. What we really need is computing professionals who know how to think algorithmically.

    • 5. Alan Kay  |  November 5, 2010 at 9:36 pm

      “Why these values?” Basically, Anthropology 101: It’s not “what values?” but the depth of the grip and commitment to them once formed.

      A person who took the position that there are *no* good computer languages would be much closer to the actual case!

      However, there are still important distinctions. The question that should be asked is whether the notions in a first language are committed to in a strong enough way to make it difficult to internalize and use what is really strong in subsequent languages.

      There was a lot of evidence that this was the case 40 years ago, and I don’t know whether this has been studied again more recently. (Years ago this led to the phrase that “You can write Fortran in any language”).

      One of the things that was noticeable about the almost complete misunderstanding of OOP coming out of the 70s was the manifest entrenched style of “Pascal” or “C” programming in languages that provided objects. Instead the programmers were simply using OOP to simulate the old data structure languages they were used to — result: terrible mess! Another example of “Fortran in any language”.
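      A hypothetical sketch of this “Fortran in any language” style (invented names; Python chosen only because it comes up elsewhere in this thread): the same behavior written first as a passive record manipulated by free functions, the old data-structure style dressed in objects, and then as an object that owns its own rule.

```python
# "Fortran in any language": the class is used only as a passive record,
# and all the logic lives in free functions that poke at its fields.
class Account:
    def __init__(self, balance):
        self.balance = balance

def withdraw(account, amount):
    # Procedural style: the invariant is enforced outside the object,
    # so every caller must remember to go through this function.
    if account.balance >= amount:
        account.balance -= amount
        return True
    return False

# The object-oriented alternative keeps the rule with the data it protects.
class GuardedAccount:
    def __init__(self, balance):
        self._balance = balance

    def withdraw(self, amount):
        # The invariant lives inside the object; callers cannot bypass it
        # without deliberately reaching into a private field.
        if self._balance >= amount:
            self._balance -= amount
            return True
        return False
```

      Both versions compute the same thing; the difference described here is where the responsibility sits, not what the code does.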

      It probably doesn’t matter which order you learn languages that are poor and similar. But it really does matter whether you learn ones that have qualitatively more power in important directions (including the meta-direction).

      If you look into business computing, you will usually find a strong commitment to a small number of languages — for a variety of reasons, including legacy, training, their own version of both the first language problem, and the “meta-problem”. I think your theory is also off here.

      To disagree with you one more time, we actually need our colleges and universities to help students think in systems and design terms, and to move away from the simple-minded algorithmic thinking that is just some of the leaves on a much larger tree of interconnected systems.

      The programming languages that can help programmers deal with both design and large systems problems are arguably the ones they should learn first (on the likely supposition that the first things they get fluent in will stick the deepest as they move through subsequent learning).

      Best wishes,


  • 6. John "Z-Bo" Zabroski  |  November 5, 2010 at 9:55 pm

    @Alan Kay
    @The question that should be asked is whether the notions in a first language are committed to in a strong enough way to make it difficult to internalize and use what is really strong in subsequent languages.

    But this question has been asked, of course, just not at the novice level. Marian Petre’s Ph.D. thesis addressed what makes expert problem solvers expert. Ever since, she has dedicated her life to these sorts of meta-questions: figuring out how the ways experts train themselves can be used to train systems. What is the secret behind being an expert problem solver? She found that expert problem solvers were the ones who were fluent in the most problem solving paradigms. Less effective problem solvers knew fewer problem solving tricks. This also bolsters why the question is so important to ask. Did these experts have to struggle to reach their state of awesomeness? Was learning one paradigm inhibiting learning another paradigm?

    From personal experience, the second programming language was the hardest, and the others after that are generally much easier. I’ve heard others tell me the same thing. I think there are a mix of reasons at play and questions to ask here:

    (1) Was the third language easier to learn because I had practice in simply learning languages?
    (2) Was the third language easier to learn because the differences between it and the first two were much smaller than the differences between the first and second language?
    (3) How can you be productive if you only know one language?

    Other interesting stuff: Nigel Cross has done some interesting studies on how designers think across professions, and he basically shows that Herb Simon’s The Sciences of the Artificial is sort of bunk, in that real designers don’t actually solve problems the way Herb suggests.

    • 7. Alan Kay  |  November 5, 2010 at 10:14 pm

      Yes, it has been asked for many years. It is likely that some (only a few I think) don’t imprint as strongly on the first paradigms they learn — so it is easier for them to learn multiple perspectives.

      But there is also more than one kind of expert problem solver. To pick two very different kinds I’ve had a lot of experience with (a) the ones who are brilliant with the materials at hand, and (b) the ones who are not bound by the materials and make their own.

      These two types of experts are found in most problem solving fields.

      We see the first kind most frequently in computing (they can work successfully within given programming languages), and the second less frequently (they sit down and make a language in which the problem can be more easily solved).

      I would guess there would be more of the second kind if it were (a) easier to make new languages in the languages that are around, and (b) if part of the early learning about programming would be to understand that it is easier to make new programming languages than it seems.

      The Newell and Simon stuff was very much about how *they* liked to solve problems. We had a very amusing example of this in the early PARC days when Newell visited. I think I mentioned this in “The Early History of Smalltalk”.



  • 8. gasstationwithoutpumps  |  November 5, 2010 at 11:39 pm

    I think that the first language matters, if only because so many students never learn a second one. The first language should be useful in itself, and not because it sets the student up for something awesome later.

    I’m old enough that my first programming language was FORTRAN and my second assembler (for the IBM 1130), but I’ve been through many languages since, some of which appealed to me, some of which were cute but useless, and some of which I hated. I do tend to program in ways that fit C and C++, rather than purer OO languages (though I did learn Smalltalk when it first came out, it never appealed much to me). Currently, my main programming languages are C++ and Python, neither of which is extremely object oriented (though both can be forced to be used that way).

    I think that decomposition of problems into subproblems, iteration, variables, and data structures are natural early programming concepts. I find that the recursion-first, stateless approach of the Scheme-based classes confuses more than it enlightens. I believe (without much evidence) that true object-oriented approaches are too abstract for beginners, and that starting with simpler concrete programs is more effective. That leads to my belief that Scratch is an excellent first language, though I doubt that Alan would agree.

    • 9. John "Z-Bo" Zabroski  |  November 6, 2010 at 1:00 am

      Alan is objecting to languages that teach people to think in terms of “Do It”, rather than “I’m Done”. The powerful concept of object-oriented programming is the power of the context, and that by formalizing the context, complex problems can be made solvable even as policies change.

      The Fortran mindset is precisely the “Do It” model. It is procedural programming carried through to perfection. The problem with this model is that it does not scale well, and is vulnerable to programmer error.

      When OOP languages are used with the “Do It” model, the problems are disastrous. What ends up happening is that inheritance is used as a way to sequence Do It chains, rather than object-oriented generalization/specialization relationships. I’ve seen many Fortran programmers who thought that inheritance was the key to reuse, and as a consequence ended up encoding sequences using the language’s late binding mechanism. This is sad but funny. I have argued in other places on the Internet that some people have used OO reflection to implement first-order logic, which is just about as funny as you can get… they called their idea Data-Context-Interaction Architecture. The basic idea, obscured by completely messy language features like “traits”, was that programs could be made more readable by making them more static and that reflection could be used to decouple the syntactic elements so that it looked syntactically like it was a late-bound architecture. But what they were really doing, to put it smugly, was writing object-oriented programs with first-order semantics. That is simply ironic, considering objects are fundamentally higher-order things.

      I think that if OO languages had advocated eventing more, and earlier, in the history of OOP, we would have seen a different direction, and a generation of programmers that understand the “I’m Done” philosophy — programmers who understand that objects are independent entities and communicate only through passing messages, and that the most decoupled form of messaging is simply a signal announcing the object has completed a step in its internal state machine, and that that internal state machine is totally governed conceptually by run-to-completion semantics. But most OO languages based their design decisions on performance needs. Self was the only exception. It was co-authored by a guy who was awarded an ACM Doctoral Dissertation Award for a cutting-edge Smalltalk VM, and he and his co-designer started with the premise of not even caring about performance and just focusing on making an object-oriented language. It was a nice idea, but the interpreter model they designed for bootstrapping little worlds of objects had a monstrous bug.
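      A minimal, hypothetical sketch of that contrast (invented names; Python used only for illustration): in the “I’m Done” style, a caller does not drive the object step by step; the object runs a step to completion and then signals any listeners that it has finished.

```python
# "I'm Done" style: the object completes a step internally, then announces
# the result to whoever has registered interest. Callers never reach in and
# drive intermediate states ("Do It"); they only hear completion signals.
class Worker:
    def __init__(self):
        self._listeners = []

    def on_done(self, callback):
        # Register a listener for the completion signal.
        self._listeners.append(callback)

    def run_step(self, data):
        # The step runs to completion inside the object...
        result = sum(data)
        # ...and only then does a message go out: "I'm done."
        for notify in self._listeners:
            notify(result)
```

      A listener registered with on_done sees only finished values, never the object mid-step; that is the run-to-completion guarantee described above.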

  • 10. Alfred Thompson  |  November 7, 2010 at 11:53 am

    I think that some people, after enough time, do start to think in terms of selecting the right language for the problem rather than trying to make the problem fit their favorite language. But for that to happen people have to be open to learning several languages and to keep learning new ways of looking at problems. Not everyone fixates on their first language. My first language was also FORTRAN on an 1130, BTW, but that never became my favorite language. And I was always willing to use a different language when the task called for it. That openness is what we should strive for. Yet I find that many schools (secondary and post-secondary) focus on just one language and decide that students can learn others later. My impression though is that this attitude too often teaches students to think that this language is the be-all and end-all.

    Coming back to Mark’s original points about culture. I think cultural norms limit people in many areas of their life. Teachers in K-12 are very often resistant to new technology in the classroom because it forces them to teach differently. Well, if you use it right. We see teachers using whiteboards the same way they used blackboards and using Smart Boards the same way they used whiteboards. That’s not really taking advantage of the technology. Yet schools will boast of the new tech in the classroom even when nothing really changes.

    • 11. gasstationwithoutpumps  |  November 7, 2010 at 12:57 pm

      FORTRAN was never my favorite either, but I will admit to liking C and C++ for many years—they fit well with what I needed to do.

      Incidentally, what differences do you *expect* to see between chalkboards and whiteboards? Other than a shiny white surface, I don’t see much functional difference. Few are using smartboards like white boards: they are using them like simple digital projectors for PowerPoint. They’d probably be better off using them like whiteboards.

  • 12. Bettina Bair  |  November 8, 2010 at 11:04 am

    This discussion makes me wonder why other fields don’t have the same debate. Do art professors concern themselves with the deep and abiding question of acrylic vs watercolors as a first painting medium? Do philosophers concern themselves with the limits of English as a first language when teaching Wittgenstein or Kant?

    Does the man with a hammer see every problem as a nail?


    • 13. Alan Kay  |  November 8, 2010 at 11:36 am

      Hi Bettina,

      In fact the fields you mentioned certainly did concern themselves with “what first” (and perhaps still do).

      In the 50s, when (by coincidence) I was learning both of these, the preferred route in painting and sculpture was toward media that were not easily reversible.

      So watercolors and carving soap — this was to avoid the “trying to debug something into existence”, as opposed to getting a pretty strong image of what you wanted to do in your head first.

      And with regard to Wittgenstein and other analytic philosophers (not Kant), it was all about the limits of English and the challenges to do better with especially constructed logics.

      I think the person with many tools plus the idea of what tools are gets to see more problems from more perspectives.

      Best wishes,


      • 14. Bettina Bair  |  November 8, 2010 at 5:07 pm


        But having taken both painting and philosophy in the 70s, these were never articulated to me. No particular starting point was recommended for either discipline. So what happened between the 50s and the 70s to make the art and philosophy schools change?

        Or, is this a regionalism? If so, I will disclose that I went to University of Arizona in Tucson, AZ.

        • 15. Alan Kay  |  November 8, 2010 at 6:07 pm

          Hi Bettina,

          Well, the 60s happened between the 50s and the 70s.

          Could be too pat an answer, but I think it is a pretty good simplification of the changes that were going on.

          For example, Stanford had one way of thinking about education for all of its students up to the late 60s and then completely changed most approaches to “what it meant to go to Stanford”, and “what it meant to learn about human knowledge”.

          One noticeable result was one Cicero warned against: “He who knows only his own generation remains forever a child”.

          I was 30 in 1970. I thought the 40s and 50s and early 60s were too rigid about too many things (and some of them were quite bad). So some very good shakeups happened in the 60s.

          But the big problem was enormous overshoot and overreaction — there was more of the “French Revolution” to the 60s than the “American Revolution”.

          So lots of babies got tossed out with the bath water.

          One of them was *technique* (which was always in difficulty in the US).

          There’s no question that “Technique should be the Servant of Art not the Master”. However, the overshoot against technique that occurred in the 60s in so many areas is still crippling today in most knowledge and many artistic areas.

          Another was *genre* (in the sense that it was now OK to be “original and creative” by quickly initiating genres rather than trying to get good in ones that already existed). It’s interesting to look historically at the pace of genre changes and what phase of a genre produced the greatest works.

          And another was History. And another was Anthropology.

          One could imagine many of these changes actually working for the positive if education could help learners to understand what is going on and how to deal more strongly with the increased sense of freedom.

          Without this perspective, there has appeared a kind of social and individual narcissism that I don’t think is good for our society.

          And in computing, there has been no end of “reinventing the flat tire”….

          Getting back to the original topic. During some of our long stints in schools (7+ years in one of them), we got to follow many of the children from their entry into the school in 1st grade to moving on after 6th grade. There are many ways to characterize children, and one of the most interesting was how children responded to the biological commitment to naming and forms and convention that is made around 7 years old.

          Most children after this age had a hard time taking a form that was for one thing, and imagining its use for another purpose entirely. But a few (perhaps 5%) stayed quite the same as they had been before this commitment. They could follow any suggestion and turn their imaginations on a dime.

          This correlated very strongly with many other kinds of learning, including scientific, mathematical and certain kinds of computer learning.

          In the context of this discussion, the “5%ers” could likely start anywhere with most ideas — they could “skate” over them. Most of the rest of the children needed much greater care in sequencing, very likely because they would commit so much more early and strongly to whatever they thought was going on.

          Best wishes,


          • 16. John "Z-Bo" Zabroski  |  November 9, 2010 at 11:08 am

            @Alan Kay
            @One noticeable result was one Cicero warned against: “He who knows only his own generation remains forever a child”.

            Actually, this idea goes back further than Cicero. Arguably, Cicero stole his wisdom from the Lyceum in Athens, where a much bigger, more powerful idea was taught:

            It takes three generations to see the effect on the Soul that a systematic change to the City has. Most biologists and lawmakers today appear to be woefully unaware of this ~2,500-year-old advice.

            Another way to state this idea: People tend to be far away from the consequences of their actions. This is especially true in software, when you consider the entire software development lifecycle and all the stakeholders, users, owners, developers, designers, fiefdoms in the kingdom that get uprooted by new technology, etc.

          • 17. John "Z-Bo" Zabroski  |  November 9, 2010 at 11:10 am

            I forgot to add: once the change to the Soul has taken place, it is too late, and the change is rooted in the City permanently. I can’t remember which book this is from.

    • 18. gasstationwithoutpumps  |  November 8, 2010 at 11:38 am

      The biology-chemistry-physics vs. physics-chemistry-biology subject ordering has been going on for at least 40 years.

      You only see the CS debate, because you only talk with the CS faculty, perhaps.

    • 19. John "Z-Bo" Zabroski  |  November 8, 2010 at 3:08 pm

      It happens in math as well…

      Google “Calculus Reform” to read about the famous Harvard Calculus reform that occurred when Harvard suddenly realized few of their students could actually solve word problems (If I Recall Correctly).

      And, of course, there are some noted mathematicians who feel there is One True Way to learn certain subjects — see the book titled “Linear Algebra Done Right” (the title says it all).

      And they also vary in public recognition. See N. J. Wildberger’s Divine Proportions manifesto for getting rid of transcendental functions from trig & geo.


