There is no “First” in CS1

November 27, 2010 at 5:10 pm 30 comments

Andy diSessa has a great video that he sometimes shows in his talks, where he is interviewing a physics senior at Berkeley.  He asks her what happens to a ball being tossed from one hand to the other.  She tells him a story about potential and kinetic energy.  He points out that the ball traces a parabola — what happens to cause that shape?  She tells him a story about horizontal and vertical velocity, and about acceleration due to gravity affecting the vertical velocity, but not the horizontal.  He presses her further — what really happens at the very top of the arc?  What causes the ball to change direction?  Finally, she blurts out, “The air pressure! The air pressure pushes down on the ball, forcing it to fall!”

Which is wrong.  It’s a common alternative conception for falling objects.  The student was certainly not taught that explanation at Berkeley.  She developed it as a novice, as an observer of the world around her.  She chose to major in physics — she must have had an interest in the physical world before coming to Berkeley.  She observed the physical world, and developed theories about how it worked.  Andy’s video shows that, despite her developing new understanding, the old belief system is still there.

Cognitive science shows that no student enters our class a blank slate.  Students enter undergraduate study with nearly two decades of experience, and theories that they have developed over those decades about how the world works. Those theories have allowed them to function on a daily basis.  If they enter our computer science classroom voluntarily, then they have probably been astute observers of computing mechanisms, and they have certainly developed theories about how the computing world works.

I have heard proponents of a wide variety of beliefs about the power of CS1:

  • “If we teach them recursion first, then iteration becomes simple.”
  • “If we teach them strong typing (or test-driven development, or functional programming), then they will have good practices of mind that will stick with them for the rest of their years.”
  • “If we teach them objects first, then objects will be natural to them.  We can’t all understand it the same way, because we learned imperative programming first.”
  • “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.” (You probably recognize this one.)

I don’t know if any of them were once true, but I’m quite confident that none of them are true today.  There is no first. There is no objects-first, functions-first, hardware-first, or non-BASIC-first.  No, I’m not suggesting that there already is computer science in high school — there are way too few teachers and classes for that to be true, on average.  Rather, in a world where students live with Facebook and WiFi and email and “To The Cloud,” they most certainly have theories about how computing works.  There’s no way to get to them before they make up their theories.  By the time they decide to study computing, they have them.

I thought of this looking at Carsten Schulte’s ITiCSE 2010 paper, on studying student perceptions of difficulty in an objects-first vs. an objects-later class.  This is a follow-up to his earlier ICER paper where he reported no learning differences between an objects-first and an objects-later class.  While I’ve heard complaints about Schulte’s analysis methods, I found his experimental set-up to be as careful as one could possibly be.  The only difference between his two classes was the order of topics.  Objects-first vs. objects-later doesn’t matter, but neither does any other -first.  His results are really not surprising.  We already know that the sequence of topics in a curriculum doesn’t really seem to make much difference in the learning — that’s how problem-based learning works.

It’s an important open research question: How do students understand the computing around them?  What theories do they have?

  • How do students explain how a router box works: that my daughters get Wikipedia when they ask for it and I get WordPress when I ask for it, though there’s only one DSL connection and one router box in the house?
  • How do the Chinese characters appear on my screen when I get an email from China — were the characters in the email, or in my computer?
  • How is it that I can run Windows on my MacBook?
  • What is on that CD that I put into my Wii, and how is it different from what’s on a DVD or a CD that works on my Mac or PC?

We know something about how novices develop naive theories of computing.  John Pane’s research shows us that novices tend to develop declarative, event-driven explanations of software, and don’t seem to develop notions of objects at all. The “Commonsense Computing” group has shown us that novices can create algorithms for a variety of problems, though that doesn’t really tell us how they think software and software development work in the world around them.

One problem that we have as educators is that this is a constantly open research question.  Every new technological doohickey that students meet will lead to new theories about how technology works, sometimes requiring changes to old theories.  Our students will have different theories depending on their computing experiences.  That’s how all learning works — we meet a new situation that we either assimilate into our existing understanding or accommodate with a new theory.  Over 50 years of cognitive science tells us that this is happening with every student who is paying attention to computing. If they enter into our CS classes, they are paying attention, they are making sense of the world around them, and they already have theories about how their computing technology works, how software works, and how software is developed.

We are in the same position as physics (or biology, or chemistry, or other science) educators.  Students have theories about how Wii controllers, voicemail menu systems driven by spoken voice commands, touch screens, and Google/Bing search work.  If these novice theories “mutilate” their minds, then it’s done, it’s happened to everyone, and we’d best just get on with dealing with it.  There is no chance to place a theory in their minds before they learn anything else.  We have to start from where the students are, and help them develop better theories, ones that are more consistent and more correct.  There is no first, but we can influence next.



30 Comments

  • 1. Gary Litvin  |  November 27, 2010 at 7:13 pm

There is a first – mathematics! 🙂 We can be pretty sure that students have very little, if any, exposure to discrete mathematics, or any theories to explain mathematical phenomena.

    • 2. Mark Guzdial  |  November 27, 2010 at 7:35 pm

Jean Lave showed in 1988 that untrained people, like housewives and Weight Watchers participants, used fairly sophisticated mathematics, while they were unable to demonstrate the same in the classroom. Students won’t know our formalisms, but they may know some mathematics.

    • 3. Barry Brown  |  November 27, 2010 at 9:36 pm

I see math and computing as symbiotic. My students tell me that CS has helped their math classes and vice versa. I’m skeptical that math could truly come before CS. While I agree that almost any computing topic has foundations in mathematics, I doubt that many freshmen (who may have barely passed high school algebra) could make the connections between mathematics and programming. The “a ha!” moments don’t come until much later.

      • 4. Gary Litvin  |  November 28, 2010 at 12:11 am

        I agree with you entirely — see my back-cover text. It would be nice, though, if some discrete math were taught in middle school.

  • 5. Barry Brown  |  November 27, 2010 at 9:43 pm

    P.S. This is a really nice post.

  • 6. Alan Kay  |  November 28, 2010 at 9:11 am

    Well …

    I think the small version of this theory works a little — and many people love this as a story — but it obscures more important issues.

    For example, it has been noticed that around age 7 children make a stronger commitment to “local commonsense” than they had before. If your goal is to help them think along different lines, then you have more flexibility earlier on than later. Our experience over 40 years bears this out.

    And, though there have been many studies that bear out the Andy diSessa video (of going back to the “old patterns of thought”), part of the problem is just how the new thoughts have been learned.

    What is mainly going on here is that what she learned was not deep enough to allow her to think through Andy’s question. (And was any attempt made to see what she could do when not under the gun in real-time with video cameras rolling?)

    We see this kind of reversion in jazz improvisation. At some point overload and situation start to override what one can do in real-time, and one falls back to simpler deeper patterns.

    Slow down the tempo a bit and we get to hear a very different take on the same chord changes. This is one of the big differences between “intermediates” and “advanced” in music and sport. For example, in tennis, intermediates can do all the strokes and can play quite well, but lack “anticipation”, which starts to be a major factor as the pace picks up.

    In improvisation, the situation is a bit like being in an all terrain vehicle at night with a 5 watt headlight. One’s sense of the terrain and the possible routes is limited, but going very slowly really helps. Expertise gives you more light to see ahead and ability to control the vehicle.

    Another thing that bothers me about this story is that it certainly is not in accord with deeply understood and tested principles for music and sports learning. There’s not a “first” thing (this is the small part of this that holds), but there is a lot of before-after sequencing that is really critical.

    Some approximations work, but some difficulties have to be addressed right from the beginning.

    On Mark’s last point. One definitely always has to start with the learner’s state of mind, but in areas where the epistemological and technical foundations are very different from commonsense, this mainly argues for finding ways to start the learning earlier in life.

    Best wishes,


    • 7. Mark Guzdial  |  November 28, 2010 at 2:42 pm

      Hi Alan,

      Yes, completely agreed — if you get to kids as they’re having their first experiences or even before, then there is a “first.” My point, though, was that there is no “first” in the introduction to computing class in undergraduate, and there’s probably no “first” in the high school years either. Rousseau proposed getting students very early in “Emile,” as a way to avoid lots of bad learning that children would get growing up with their families — a similar goal to get “first” by preceding the development of bad learning.

      Andy’s video is a snapshot that summarizes his 1982 paper Unlearning Aristotelian physics: a study of knowledge-based learning. He later replicated some of the Dynaturtle experiments with Berkeley Physics professors. Even those experts resorted to Aristotelian physics when pressed.

      I don’t know enough about music to question whether before-after sequencing is crucial there. My only sports learning of any real depth is that I studied martial arts for a number of years, ending up as an instructor and a second-degree black belt. There’s no critical before-after sequencing in the form of Choi Kwang-Do that I studied. Sure, we sequence things to try to make learning as efficient as possible, but there’s always iteration and re-learning and un-learning. I learned all my kata at least three times: The first time through the belts, again when going for my first degree (when the first three years of learning had me realize how wrong I’d been doing things from the start), and again when going for my second degree (when two more years again made me realize that I was again all wrong). A punch or a kick is one thing when it’s the first thing you’re learning, something else when you’ve learned lots of other forms of the same thing, and something else again when you’ve learned other options entirely. A better sequencing may have prevented some iteration, but I doubt that it could have been avoided.

      I realize that these are personal anecdotes. Does anyone know of any experimental evidence suggesting that there are bad sequences for learning whose ill-effects can’t be later corrected? I certainly know of rationale for one sequence over another in many curricula (e.g., Jerome Bruner has rationalizations like these for “Man: A Course of Study”), but I don’t know any evidence that another sequence wouldn’t work.


      • 8. Mark Guzdial  |  November 28, 2010 at 7:54 pm

My blog is echoed via RSS over in Facebook, where Chris Martin suggested that I look at Alison Gopnik’s work, which suggests that there is no early-enough. Babies start to plan inventions, and do much theorizing, before age 3. Stunning! Maybe the only “first” is pre-natal?

        • 9. Alan Kay  |  November 28, 2010 at 8:51 pm

This isn’t exactly what Alison Gopnik is saying. (And it’s kind of an example of people using evidence for one thing to try to justify their notions about something else.)

          The general field is called Neuroethology, and has been around for about 50-60 years.

          And of course, Piaget and Vygotsky (not to forget Montessori) were earlier pioneers here.

          Some of the earliest work with humans was actually done by Jerome Bruner and others in the 50s, and the first really classic and deep results were done by Tom Bower in the early 60s. (These addressed questions such as “what do babies know about at birth?” etc.)

          The answer is quite a lot, and this merges into the “human universals” work in Anthropology that I’ve mentioned previously.

          The main point that needs to be emphasized is not that babies make theories from birth, but that they are not nearly as committed to these theories before around age 7 as they tend to be afterwards.

          Consider the “fundamental theorem of Anthropology” — that a baby born anywhere on Earth can be moved to any other culture and will grow up as a member of the new culture. Whatever theories the baby is making are quite relative to the commonsense content of the culture and to its own deeper human genetics.

The “stickiness” of cultural learning when starting young has been deeply studied. (Including by Alison.)

          This isn’t so simple as many people would like, but the actual principles are pretty clear.

          Best wishes.


          • 10. Mark Guzdial  |  November 29, 2010 at 8:28 am

            Hi Alan,

            Sorry that I got Alison Gopnik’s story wrong. I think I get your point. You’re saying that yes, even babies make theories about the world, but they don’t start believing them until they’re around 7. Thus, getting to kids around that age allows you to present some “firsts”.

            In your response to Bettina, you said, “I think both of these actually obtain.” What do you mean by “obtain” here?


  • 11. Bettina Bair  |  November 28, 2010 at 9:26 am

    Great essay.

As a teacher, I’ve seen that what students understand first varies greatly, depending on a number of factors — not the least of which is the learning style preferences and intelligence type of the student. The “big picture” kids and the “detail” kids might as well be in different classrooms. They start from completely different points of reference.

    This parochialism that says that CS may only be taught in certain formulaic ways (by certain iconic types of geeks) is probably the leading cause of underrepresentation of women in computing. Until we embrace diversity in our thinking, we can hardly embrace diversity in our ranks.


  • 12. Alan Kay  |  November 28, 2010 at 2:05 pm

    Hi Bettina

    There’s no question that there are a variety of learning styles and distinctions (you mention two very important ones).

    I think we need to heed what you say, and somehow also deal with real sequencing issues, and the interferences that the learning styles bring.

    For example, a first order theory might be “teach to the intellectual style of the learner”.

    A second order theory might be “Try to get the learner to learn outside their point of view and stance”.

    We can think of the first as “somewhat additive” and the second as “somewhat transformative”.

    I think both of these actually obtain. (And they join quite a few interesting and important theories where the 1st and 2nd order versions of them are quite different, yet both obtain.)



    • 13. Alan Kay  |  November 29, 2010 at 8:53 am

      Hi Mark,

      By “both of these actually obtain” I mean a longstanding soap box favorite of mine: the learner’s have goals to get from A to B, and a good education process takes them to a C they weren’t able to conceive of at the start.

      To do this, as Bettina points out, you have to start with where they are, including their initial motivations. But I think the art of real education (as opposed to the more simple idea of training) is to use these as positive forces and fuel to get to perspectives that were not in their initial purview.

      Sometimes the transformation can be done with the same idea looked at in different ways. For example, a huge bug in computer science learning is to look at objects from the perspective of data structures and procedures. From this POV one sees something like “extensible data structures” with getters and setters, etc. (i.e. Abstract Data Types).

      Another way of looking at objects is as “agencies” (goal-oriented processes) as part of a large system that is simulating something of interest. This is the way I thought of them.

      The first POV is “a better old thing” and the second POV is “almost a new thing”. The more you like the “old thing” the harder it is to even see what the new thing might be (and most in our field just haven’t).

      The differences are profound, even pragmatically.

      So, to me, the current “objects first” (or not) seems quite irrelevant since it is the weak ADT view that is meant regardless of sequence.

      However, if you want to help students learn the new more powerful perspective, then I think this has to be carefully thought through, because part of the implications are new and different ways to program.



  • 14. Alan Kay  |  November 29, 2010 at 10:29 am

    Hi Mark

    It’s not that they don’t believe their theories until age 7 or so, it’s that they don’t commit so strongly to them. This is what “let’s pretend” is all about. (Our 7+ year experience at the Open School indicated that a few percent of children change very little, and the rest change quite a bit.)

    (Some of the early work on this was done by Howard Gardner’s Project Zero in their extensive study of children’s art around the world.)

    This is possibly similar to other such “critical periods” for easier to study traits such as binocular vision, face and phoneme learning, etc. The changes in all cases are large and non-linear.



    • 15. Mark Guzdial  |  November 30, 2010 at 10:12 am

      Hi Alan,

      Thanks for the explanations! I want to try to say back to you what I think your concerns are with my post, please. I think that you’re agreeing that students develop theories about computation. I think that you disagree with my suggestion that curricular order isn’t important, though we might be agreeing that by the high school or undergraduate levels, it’s no longer important. The large, non-linear changes occur early. I think that you’re saying that by CS1, it may be too late to introduce the “real thing” for some students — that they will have developed beliefs and theories (as early as 7) that are incorrect, and those incorrect theories are “sticky” and hard to shake. You don’t believe the objects first/later studies, because you expect that it’s a weak model of objects, one based on ADT’s++, as opposed to a more PLANNER-like, goal-oriented programming model.

      If I have that right, do you see anything useful that you think should be done in CS1 or high school? What do you think about the research idea, to study what models of computation students have when entering CS1? They may be awful, as you suggest, but until we define them, we don’t know. And once we know what they are, we know what the Point A is, which is important for reaching Point B or Point C.


      • 16. Alan Kay  |  November 30, 2010 at 11:36 am

        Much of what you echoed is what I was trying to say.

        However, I don’t believe that high school or college is too late. It just takes a lot more focus and effort for both the learners and their helpers. (There are parallels here to natural language learning and other kinds of learning later in life that could be real analogies.)

        And especially for the high levels of fluency (but not virtuosity) that general real education demands, I think it is “never too late” to get fluent.

        What makes this discussion difficult is that we are neither talking about “one thing first” *nor* are we talking about “curricular order is not important”. It’s “a few things first” and it’s “some orderings are critical”.

        For example, I’ve argued in the (long) past that one should learn to program in 3 rather different programming languages at the same time. (This came out of how useful it was to learn different machine code architectures early. It might be a frightening and bad idea for many students today.)

        In any case, that suggestion is (a) against one thing first, (b) against learner imprinting of a paradigm to their later detriment (c) heeding the psychological fact that it is very difficult to unlearn, especially when one has worked one’s way into real fluency, and (d) very much to the point of needing careful curriculum design which may or may not be what students might want to do at that point.

        I think your last paragraph is the important one, and now we have to heed Bettina and really try to find a way to do “(d)” really well.

I don’t know the answers here. But, I do think that the 7±2 ideas of chunking really obtain (perhaps mainly as a deep metaphor and guide).

        So the question I ask is: what should they be doing with the chunks they’ve got that will be fun, motivating, somewhat educational, and that points towards what they really should be learning when they’ve consolidated enough to have some chunks left over?

        And: are there ways to incorporate the multi-paradigm approach to programming without blowing them out of the water? (Especially given the very different kinds of backgrounds and motivations that many of today’s students bring to the table — e.g. they are likely not to be very math savvy)

        In the Etoys work, the most successful part of that system that has stayed powerful and useful is the environment and user model (including many parallel objects of the same general kind) presented to the users/learners.

        Programming at the line of code level was intentionally done really minimally to make the original setting for the system work within limited goals (9 year olds on a parent’s lap online). That has worked amazingly well elsewhere, but violates some of the goals above. This could be done much better, and should be.

        The previous children’s system we did (Playground), had a much more interesting and powerful programming model, but was I think too bare bones in our implementation. Still, it was a glimmer of future possibilities.

        I think it is a very good idea to study and understand learner models and misconceptions in every field. (This is what one wants to know as a teacher!)

        When I was teaching guitar long long ago, I would occasionally have adults who had played for quite a few years, usually self taught. The good news is that one could play with them for 10 minutes and get a very good idea of their approach.

        One thing you find a lot is “three finger playing”– the pinkie on the left hand isn’t used for playing notes. And quite a bit can be done without the pinkie — and there are many important things that just need a fluent pinkie.

        (This is somewhat similar to self taught tennis adults some of whom will use a “pronated backhand” (so the same side of the racquet is used for both forehand and backhand)). And, again, one can play tennis this way up to a certain point.

        In order to help, the helper somehow has to get the learner to do thousands of repetitions of “the better way” with as little pain as possible, and such that this way will hold up under stress. (I was a 3 finger player myself, even as an early pro, and later spent more than a year while in the Air Force just redoing every part of this technique.)

        I think things are much easier for the high school and college situations in any of the STEM subjects. I don’t have proof, but my prejudgment is that a good teacher with a good curriculum will be successful with most of these students. This is partly because most of the students are not yet fluent with their weaker ideas and methods.

        There’s an interesting question here about the relationships between stress and good learning. A few learners thrive on stress, but I think for most it leads to bad short cuts, weak methods, and even cheating. Seems like a bad idea all around.

I think we agree strongly that project-based curricula are well suited for CS learning, and that there are ways to assess if the student-teacher ratio is not ridiculous.

        So I think what we need to do is to find out more about Point A, and put a lot of work into what the Points C should be.



        • 17. Mark Guzdial  |  December 1, 2010 at 7:14 am

          Thanks very much, Alan! Those are really helpful comments, and directly useful advice for me in thinking through research directions!


  • 18. Briana Morrison  |  November 29, 2010 at 10:37 am

From my point of view, most students entering CS1 haven’t even considered the “how” of most of your questions and thus haven’t formed theories…unless they have been forced to solve a problem that requires forming a theory. Consider student A, who set up the router at home with no difficulties and communicates only with native English speakers. This student has (sadly) probably never thought about how Chinese characters are displayed or how multiple users “use” the router at one time. On the other hand, student B, who installed the router at home, had multiple difficulties, and had to go through the entire debugging process to fix it (why can Mom connect but I can’t?), and who has friends that send email from other countries that appears as gibberish when printed, has been forced to form theories to solve problems.

    So while all the students might not be clean slates, many who may have never thought about specifics (or been forced to) are virtual clean slates. Just ask my 16 year old daughter who drives how a car works and you’ll get the same virtual blank slate concerning automotive engineering. (And sadly her mother doesn’t know much more…)

    The real concern is exposing the kids to the concepts (with correct theories) before they enter our classrooms. And this is my hope for computational thinking…getting students to think about the how and why of computing instead of just being consumers.


    • 19. Mark Guzdial  |  November 30, 2010 at 9:55 am

Hi Briana,
      You are likely right — students won’t build theories unless they face a problem. However, I have great confidence in the lack of robustness and reliability in today’s technology. I suspect that everyone has faced some problem that they had to develop a theory about. I’m sure that you and your daughter have had some car troubles that you have had to think through (“It needs a jump” vs. “I’m out of gas”).

      But in any case, it is an interesting and open research question: What theories about computation are students bringing into our classes? And can we build on those?


      • 20. Bettina Bair  |  November 30, 2010 at 10:05 am

        A question that always gets an interesting response is, ‘when did you realize that your computer was more than just a screen? When did you realize that it was different from your television, and why? how?’

        Sometimes students are startled because they aren’t even aware that they have had this realization. But by asking them to articulate the transition in their thinking, they reveal their preconceptions and how they moved beyond them.


  • 21. Alan Kay  |  November 30, 2010 at 10:46 am

I’m not worried about whether students (and others) in computing will form theories. I’m worried that they will form mediocre to really bad theories and willy-nilly implement them to create unnecessary and sometimes crippling barriers for years, even decades, into the future.

    Part of the flavor of this discussion is “top down” vs “bottom up” and (somewhat overlapping) is dealing with the different styles of learning brought to the education process.

    I would like to return to why sequence is important and why we have to be very careful and firm about how we teach.

    A (if not *the*) classic example is the web browser. The process of making it could hardly have been more bottom up, “debugging it into existence”, “forming theories by encountering problems”, and it certainly has catered to the learning styles of its designers over the years.

    It does a few things. People use it for a few things. It has errors which present problems from which theories can be formed, etc.

    The problem is that the browser was done egregiously wrong. It is a disaster if it is approached as a basis for learning computing and design. It is only useful as a subject for really deep criticism and trying to understand how something could be done that was so much worse than the better solutions which already existed (some for many years).

    One of the fundamental misconceptions initially and running to the present is that the browser was a kind of interactive “application”. In fact, the largest part of the most important services it should supply are those of an operating system (this is very slowly being recognized after almost 20 years of the bad idea).

    In short, browsers are set up so they have to understand more and more with many restrictions of features and tools, whereas operating systems mainly need to be able to run arbitrary processes safely and blindly, while coordinating output from the processes in a UI.

The result of this truly awful approach has meant that many needed and desired things that can be done easily on one’s laptop still cannot be done in the browser running on the same machine!

Consider “making sound” via creating waveforms which can be sent to a sound buffer and clocked out to D/A converters. This is easy to do on your laptop, but is simply not possible yet in the browsers (*they* “are working on it”).

    The important idea here is that this is not the province of “they”. With an operating system you can just get a protected process/address-space that will confine anything, including raw machine code. Anyone can then write the (literally) few hundreds of lines of C to do waveform synthesis, and hand the buffers of these to the OS agencies to play out.

    The way you have to do this now, is to make a browser plug-in. Except if you are trying to make software for children to use in school, the school district SysAdmins won’t let children and teachers download executables to install a plug-in.

    Javascript doesn’t need permission to download, but it is slow and the browser lacks the feature for “waveforms out”.

    The browser could download executables in any of a number of ways and run them completely safely in separate address spaces, but it doesn’t. (“They are working on this”)

    Another example that is really frustrating is that Etoys has a terrific particle system which we’d like to carry over to a “from a web page” version. But there is no vehicle for this, even though it is so easy outside the browser.

    So what we have here is the quintessential “reinventing the flat tire” that so characterizes computing today.

    The problem with the browser is that it is not “a reasonable design with some bugs” but that it was always a really bad approach.

    I firmly believe that the reason for this was that no one or no learning agency (for example at the University of Illinois or NCSA) contravened the bad theories that the programmers brought to the process.

    And I wonder why there wasn’t a mighty uproar in academic computer science about how badly this was done, and how difficult it would be to get out from under if allowed to propagate. Could it be that academic computer science really couldn’t see the problem?

    Best wishes,


  • 22. Do We Need A New Teaching Programming Language - BooleanBase  |  December 1, 2010 at 11:40 pm

    […] crowd would howl about that but I am not convinced that we need objects first. See Mark Guzdial’s There is no “First” in CS1 What I want in a first teaching language is simplicity and creating classes is all too often not […]

  • 23. John "Z-Bo" Zabroski  |  December 5, 2010 at 3:01 am

    I can only speak from personal experience.

    I first heard of “objects first” as a freshman in college and had no idea what it meant, other than Google telling me it might be related to “BlueJ”. At the same time I was bogged down with search results critiquing even the most minuscule of details, such as whether Hello, World! was the right first program and, if it was, whether boilerplate like “static void main(String[] args)” legitimately distracts. Since this was after the dot-com bubble burst and universities were struggling to enroll new students and prevent dropouts, a lot of scrutiny was presumably placed on CS1 and AP CS because it’s the Martin Luther King Blvd of new designer drugs.

    I read the BlueJ approach, eventually.
    I read Kim Bruce’s eventful approach as well.
    I also read simulation expert Bernard Zeigler’s Objects and Systems.
    I also read Bertrand Meyer’s Touch of Class.
    And Structure and Interpretation of Computer Programs.
    And How to Design Programs.
    And Budd’s Multi-paradigm Leda.
    And Concepts, Techniques and Models by Van Roy.
    I’ve also got books on learning assembly, like Dos Reis’s favorably reviewed book.
    I own 1,000+ books, most of them programming-related.
    I also read as many articles or papers as possible on API Design.

    Most books I own are crap intellectually. Every objects first cs1 marketed book I own is drivel.

    Here is an example of what these books tend to lack discussion of.

    Algorithms and data structure books don’t address engineering, such as sparse matrices, geospatial indexing, full-text search, data mining, and case studies such as why industrial sorting algorithms are the way they are. Most don’t even touch upon the empirical methods used to evaluate the quality of an implementation, such as McIlroy’s original test-case data for qsort, which is still used today to benchmark e.g. Java’s sorting algorithms.
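    As a small illustration of that empirical style, the sketch below runs the C library’s `qsort` over two of the classic stress-input shapes from Bentley and McIlroy’s “Engineering a Sort Function” (organ-pipe and all-equal), counting comparisons and checking sortedness. The input shapes are standard; the harness itself (names like `cmp_count`) is just a sketch, not McIlroy’s actual test suite.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000

    static int a[N];
    static long ncmp;   /* comparisons made by the last qsort call */

    static int cmp_count(const void *p, const void *q)
    {
        ncmp++;
        int x = *(const int *)p, y = *(const int *)q;
        return (x > y) - (x < y);
    }

    static int is_sorted(const int *v, int n)
    {
        for (int i = 1; i < n; i++)
            if (v[i - 1] > v[i])
                return 0;
        return 1;
    }

    int main(void)
    {
        /* Organ-pipe input: rises then falls; a classic quicksort stress case. */
        for (int i = 0; i < N; i++)
            a[i] = i < N / 2 ? i : N - 1 - i;
        ncmp = 0;
        qsort(a, N, sizeof a[0], cmp_count);
        printf("organ-pipe: sorted=%d, %ld comparisons\n", is_sorted(a, N), ncmp);

        /* All-equal input: a good implementation should need few comparisons. */
        for (int i = 0; i < N; i++)
            a[i] = 7;
        ncmp = 0;
        qsort(a, N, sizeof a[0], cmp_count);
        printf("all-equal:  sorted=%d, %ld comparisons\n", is_sorted(a, N), ncmp);
        return 0;
    }
    ```

    Comparison counts vary by libc, which is exactly the point: measuring them across input shapes is how one implementation is judged against another.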

    How long does it take somebody today from their first exposure to data structures and algorithms to improve on Lucene or Java’s sort?

    Then there are deeper issues of correct problem-domain abstraction. E.g., I would argue Kim’s event-driven graphics doesn’t force students to think about good primitives for calculation and coordination, and as such is a pedagogic failure.

  • […] CS as inquiry would be about encouraging students to explore “how things work” and what their models of computation are and what they should be.  Our first step towards building an inquiry-based CS education would be […]

  • 25. Hard Questions « And Yet It Moves  |  December 30, 2010 at 12:24 am

    […] changes to the exam.  Most significantly, these are just different humans, each coming in with their own mental models and skill […]

  • […]  It’s about teaching what we have now, but in a new and more powerful way.  It’s Andy diSessa’s argument for computing literacy — how much powerful are we when we are as literate with computing as we are with numbers or […]

  • […] I’ve talked about RunRev/LiveCode here before.  It’s 90% HyperCard, updated to be cross-platform and with enhanced abilities.  I mostly agree with the comments below (but not with the critique of Scratch or Logo): It really does seem like an excellent tool for the needs in today’s schools.  It’s real programming, you can build things quickly, you can build for desktop or Web or mobile devices, it’s cross platform, and it’s designed to be easily learned.  The language is English-like and builds on what we know about how people naively think about programming. […]

  • […]  Programming requires specification of details that do not occur in natural language (as seen in John Pane’s work, and related to the “Communicating with Aliens” problem).  Why should our evolved […]

  • […] minorities, likely see their first programming language. And even if students arrive with prior programming experience, their first officially-sanctioned exposure in college is still influential. I want to give CS0 […]

  • 30. orcmid  |  October 18, 2020 at 1:45 pm

    I didn’t notice the age of this post at first. Thanks for linking to it from a recent Twitter exchange.

    I was in despair until “There is no first, but we can influence next.”

    Thanks for that.

