Frameworks and Standards can be limiting and long-lasting: Alan Kay was right

January 21, 2019 at 7:00 am

Through the K-12 CS Framework process (December 2016, see the post here), Alan Kay kept saying that we needed real computer science and that the Framework shouldn’t be about consensus (see post here). I disagreed with him. I saw it as a negotiation between academic CS and K-12 CS.

I was wrong.

Now that I can see standards efforts rolling out, and can see what’s actually going into teacher professional development, I realize that Alan was right. Standards are being written to come up to but rarely surpass the Framework. All those ideas like bits and processes that I argued about — they were not in the Framework, so they are not appearing in Standards. The Framework serves to limit what’s taught.

Teachers are experts on what is teachable, but that’s not what a Framework is supposed to be about. A Framework should be about what the field is about, about what’s important to know. Yes, it needs to be a consensus document, but not a consensus about what goes into classrooms. That’s the role of Standards. A Framework should be a consensus about what computing is.

I think what drove a lot of our thinking about the Framework is that it should be achievable. There was a sense that states and organizations (like CSTA and ISTE) should be able to write standards that (a) meet the Framework’s goals and (b) could be measurably achieved in professional development — “Yup, the teachers understand that.” As I learn about the mathematics and science frameworks, it seems that their goal was to describe the field — they didn’t worry about what was achievable. Rather, the goal was that the Framework should be aspirational: “When we get education right for all children, it should look like this.”

Standards are political documents (something Mike Lach taught me and that Joan Ferrini-Mundy told ECEP), based on Frameworks. Because the K-12 CS Framework is expected to reflect the end state goal, Standards are being written a step below those. Frameworks describe the goals, and Standards describe our current plans towards those goals. Since the Framework is not aiming to describe Computer Science, neither do the state Standards that I’m seeing.

I told Alan about this realization a few weeks ago, and then the Georgia Standards came out for review (see page here). They are a case in point. Standards are political documents. It matters who was in the room to define these documents in this way.

Here’s the exemplar standard from the Grade 6-8 band:

Use technology resources to increase self-direction and self-regulation in learning, including for problem solving and collaboration (e.g., using the Internet to access online resources, edit documents collaboratively)

Can technology resources increase self-direction and self-regulation in learning? Maybe — I don’t know of any literature that shows that. But even if it can, why are these in the Computer Science standards?

The K-2 band comparable Standard is even more vague:

Recognize that technology provides the opportunity to enhance relevance, increase confidence, offer authentic choice, and produce positive impacts in learning.

I have no idea if computers can “increase confidence,” but given what we know about self-efficacy and motivation, I don’t think that’s a common outcome. Why is this in the Computer Science Standards?

There are lots of uses of the word “information.” None of them define information. The closest is here (again, grades 6-8), which lists a bunch of big ideas (“logic, sets, and functions”) but the verb is only that students should be able to “discuss” them:

Evaluate the storage and representation of data; Analyze how data is collected with both computational and non-computational tools and processes

  1. Discuss binary numbers, logic, sets, and functions and their application to computer science
  2. Explain that searches may be enhanced by using Boolean logic (e.g., using “not”, “or”, “and”)

What’s missing in the Framework is also missing in the Georgia standards.

  • The word “bit” doesn’t appear anywhere in these standards — if there is no information, then it makes sense that students don’t need bits.
  • The word “process” does, but mostly in the phrase “design process.” Then it shows up in the Grade 6-8 band, but in highly technical forms: “process isolation” and “boot process.”
  • There are no names: No Turing, no Hopper. There is no history, so no grounding in where computer science came from and what the big and deep ideas are.

There are strange phrases like “binary language,” which I don’t understand.

This is from Georgia, where there is a strong video game development lobby. Thus, all students are expected (by Grades 6-8) to:

Develop a plan to create, design, and build a game with digital content for a specific target market.

And

Develop a visual model of a game from the Game Design Document (GDD).

And

Create a functional game, using a game development platform, based on the storyboards, wireframes, and comprehensive layout.

It’s clear that the Georgia Standards are the result of a political process.

The bottom line is that I now wish that we had made sure that the K-12 CS Framework reflected computer scientists’ understanding of Computer Science. Instead, it reflected K-12 classroom computer science as defined in 2016. It presumes languages like Scratch and curricula like AP CS Principles. That’s reasonable in Standards that describe what goes into the classroom tomorrow, but Frameworks should describe broader, longer-range thinking.

There are no plans that I’m aware of to define a new Framework. Many states are still developing their Standards, and those Standards are going to last for years. This is what Computer Science will be in the United States for the next couple of decades, at least.



48 Comments

  • 1. alanone1  |  January 21, 2019 at 7:32 am

    One of the largest problems with the “CS” Framework development — besides it being very political and tinged with business interests — was the disinclination to invite enough people with really deep understanding of computing to be part of the process.

    They really didn’t want anything (a) like actual expert experience and opinions about the field, (b) about what an “intellectually honest version of the field” (in Jerome Bruner’s phrase) might look like for K-12, or (c) about how to design a Framework.

    There were a few good people on that committee — and some additional “well meaning” members — but from my (outside) view it was overall much too much pop culture in its views of both computing and early learning. The biggest problem — as Mark alluded to above — is that they failed to understand the purpose of a Framework, and then failed to learn what Frameworks are for and should be when this was pointed out to them.

    A major bug was the “Well, it’s at least a start” syndrome — but even with some improvement, if the context is the wrong one, improving it will never accomplish the qualitative leap into a fruitful context.

    The operative slogan is “Better and Perfect are the Enemy of ‘What Is Actually Needed’ “. They failed to understand the critical difference between “better” and “what is actually needed”. The result they came up with will set back real progress for decades.

    I don’t find disasters arising from human blindnesses to be nearly fascinating enough to be worth the trouble they cause — but in many ways the processes that created the Framework and the Standards were an almost perfect microworld example of the central problems of American education. (So something could be learned from looking at how this turned out badly — including learning how to see that it is not good.)

    There have been some recent books written about the growing trend of the general public rejecting actual experts. The ones I’ve read seem really harsh. But my personal experiences — which are on a much smaller scale and admittedly anecdotal — seem to bear out the most dismal views of the writers.

    • 2. orcmid  |  January 21, 2019 at 12:51 pm

      Amen. I think maybe the ACM/IEEE curricula and maybe the Body of Knowledge efforts are more relevant for Framework mining.

      With regard to K-12, there just seem to be too many cross-currents and an effort to satisfy them all that somehow buries computer science (apart from popularity of the CS label).

      It seems that the notion of the 3Rs and their value in primary education is understood. Maybe the 4th R is Reckoning, or Rationality (something that came up with Bertrand Meyer recently but more narrowly).

      I recuse myself as incompetent with regard to pedagogy. I don’t know what kind of glimpse of CS history, principles, and practices (and Information Technology, as it has become known) is delivered, or what consideration is given to who is being prepared for what at the K-12 level.

      Also, none of the Rs actually take everybody down the same paths to some destination. Maybe there are fundamentals, but not everyone goes down the full track of any of them, so why CS Education? What’s a rational progression?

  • 3. zamanskym  |  January 21, 2019 at 8:05 am

    I think I voiced my concerns about the framework back when it was being developed. An additional problem is that the framework, along with the CSTA standards, is treated with extra reverence by virtue of being first.

    I won’t go on with my usual standards/framework rant, but I will say it’s important to remember that in addition to being political documents, standards aren’t about teaching or learning.

    • 4. zamanskym  |  January 21, 2019 at 10:02 am

      Didn’t get the last line in for my comment.

      I meant to write that they aren’t about teaching or learning, they’re about testing.

  • 5. Kathi Fisler  |  January 21, 2019 at 9:59 am

    I’m currently involved in my third state standards process (author on MA, lead author on RI, consultant on NY), while also having been an advisor on the framework.

    The big tension that I see is around “for all students” (which is a form of “achievable”, but from a student perspective rather than an educator one). I get nervous when standards for all students effectively expect all students to prepare for an AP exam. AP, by definition, is not “for all students”. Computer science experts, in my experience, think at the level of AP and beyond.

    So how do we reconcile “standards for all” and “AP is not for all”? I’m drawn to efforts that contain a separate “for some” level in high school that not all students will target, but that still (tries to) present a coherent view of the field that could be accessible at a high school level. Not all states want to go that route.

    Another problem I see in our standards lies in our practices. Compare the CS practices to the mathematics practices: the math ones are about habits of working that enable someone to succeed intellectually in math (at least that’s how my outsider view perceives it). Many CS practices are more socially focused (collaboration, communication, diversity, etc). These respond to negative images that people have about computing. And while these are important to computing as a community, I don’t believe a student needs these to learn the fundamentals of computing as an individual.

    I wish we had better synthesis, in a spot where states can find it, across the good ideas that have emerged as individual states adapt existing frameworks/standards. States that want to start from the framework or CSTA should have access to the big ideas that other states’ experts thought were weak or missing: sort of like an addendum document of collected wisdom from other states, easy to find from the framework and CSTA sites.

    In RI for example, three of us had led academic security programs, and the security portion of the RI standards is correspondingly much stronger and richer than the weak treatment in the framework and CSTA (which overlook the security-related thinking needed for a modern professional in most domains). Several states have wrestled with where to put digital literacy, which isn’t CS but which is a prereq for many of the skills we want to build in CS. I’m sure there are others from other states. How are we collectively moving forward given how standards are being developed?

    Kathi

    • 6. Mark Guzdial  |  January 21, 2019 at 10:15 am

      So how do we reconcile “standards for all” and “AP is not for all”?

      That’s a great question for a Standards effort, but not for a Framework. Mitosis and meiosis are both part of biology. That goes in a Framework. A state can decide if they want all their students to know the difference. Computer science is concerned with processing information, and the smallest piece of information is whether there is a difference or not: A bit is on or off. A state can decide if they want students to know that computers can process information, and that we can define all information in terms of bits. But that is part of computer science and belongs in the Framework.
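
      As a minimal sketch of that idea (Python used purely for illustration; the encoding details are mine, not part of the Framework discussion), any piece of information can be written down as a sequence of on/off distinctions:

      ```python
      # Any information reduces to bits: here the text "Hi" becomes
      # sixteen on/off distinctions (eight per character in UTF-8).
      message = "Hi"
      bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
      print(bits)  # -> 0100100001101001
      ```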

    • 7. alanone1  |  January 21, 2019 at 12:38 pm

      I was criticizing the approach taken to a “CS” Framework. I also tried to explain to them that though “standards” are not the same as a “curriculum”, they do presume many things about age and development and topic that will lead to curriculum choices.

      The results show almost no relationship between the actual work that has been done over 50 years with children of many ages and developmental levels and what the “standards” assign to each grade. I pleaded and then urged (and then more) them to first collect, generate, and test a series of typical projects that would be done by the children — especially in the early grades — that are in line with the actual past experience of the people who have done the most advanced work with children.

      They did not do this.

      The reason they should have is that actually making a curriculum and testing it is an enormous amount of work that also needs to be based on what children are actually capable of. Since they did not look at that, one is forced into the opinion that much of what they chose to include was both fanciful and also very much conditioned by what they thought teachers could understand.

      A much better way to proceed would be to scaffold and migrate the best projects from the past into the computing media picked for deployment. This isn’t yet “a curriculum” but it does have a sequence, it is related to developmental level, it does have scaffolding, and it suggests what aspects of “CS” as laid out in the Framework are being used in each project.

      This in turn creates enough of a structure to allow stronger thinking about possible curricula, and what “standards” might be like (if “standards” are actually remotely a useful idea — maybe not).

      I suggested to them the rallying cry “Children First!” and to please please please carry out what that really implies.

      • 8. Greg Nelson  |  March 18, 2019 at 6:48 pm

        All of this makes sense to me. Is there any way to recover from this mistake? Grass-roots making of a better document?

        Re: “A much better way to proceed would be to scaffold and migrate the best projects from the past”

        Where can I find those at a curricular level of detail, and/or described at a level of detail that I could implement in a classroom?

        Do you mean projects like the falling ball / “fifth grade Isaac Newton”? Other lessons / projects taken from Logo exercise books from the 80s? Is there any archive of those projects, or scholarly review of them? I have typically found it hard to find detailed descriptions of these, and feel I would have to try to reimplement the activities in a classroom to understand how to sequence them and rediscover the scaffolding that is needed for them to work in practice, though I just may not be skilled enough to find or understand them deeply.

        Beyond a framework with decades long impact, I’m just as concerned about “knowledge of what really works in practice” for “authentic computing” being lost or hard to access. Inventing it is hard and I don’t know of ways to apprentice under others.

        Also, even as a CS PhD student at a “top” university like University of Washington, sometimes I don’t feel well exposed to “authentic computing culture”.

        • 9. orcmid  |  March 19, 2019 at 10:17 am

          Greg, is this you? https://www.cs.washington.edu/qualsexam/glnelson

          I’d like to visit. (Sorry for stealing this thread, only way I could reach out.)

          dennis.hamilton@acm.org

        • 10. alanone1  |  March 19, 2019 at 10:57 am

          Hi Greg

          This is why I advocated a process to gather the past curriculum gems that have been done. Most of these projects still have at least one person alive who can provide the needed detail if it is not documented.

  • 11. orcmid  |  January 21, 2019 at 1:04 pm

    I easily forget that CS is a laboratory science and that one could look at projects as revealing of aspects, just as there are projects in K-12 concerning biological sciences, physics, etc. I overlook that the kinds of things that are appropriate at primary school levels can also serve as hands-on sources of insights into the nature of computing as an instrument of human purpose.

    Thanks for this with regard to children.

    • 12. alanone1  |  January 21, 2019 at 1:33 pm

      I think of “children’s computing” as being the most important to get right for a number of reasons, both foundational for their learning, and epistemological for their points of view (this is the very same way I feel about “children’s science” and “children’s math” etc. — and, for young children, they are much the same set of ideas and outlooks).

      I also think this younger age group has a bit more wiggle room, because it is further away from the anxieties and hurdles already set up in the last years of high school for all of these subjects.

      It’s very hard to write definitions of things that have lots of degrees of freedom (especially when so many of these are meta, as they are in computing). It’s easier to look at a project and decide “Yes, that looks like the real deal to me” (or not).

      This is especially the case when it comes to the relationship of programming to a real “computer science” for children. Being a programmer is to be a “first cause” of a computer doing something that was desired — so it’s much larger than learning a particular style of programming (and especially one focused on assignments, data structures, and typical 1950s algorithms). A huge challenge in teaching “real programming” to anyone — especially children — is to somehow give them concretions that they can handle, without having them imprint on those like Lorenz’s ducklings and miss other, especially more subtle and abstract, ways to express one’s desires for processes.

      • 13. orcmid  |  January 21, 2019 at 3:30 pm

        Well said. Thank you.

        Take-away: I need to recognize that my exposure to computing in the late 1950s was atypical and certainly has little bearing on what youngsters are presented with now. Some of the things I see presented seem like toys; I need to look at what they inspire differently. Clearly, arranging some kind of digital process and having it operate needs to be examined for its experimental value to a learner and I have to give up my prejudices about that.

        I am left with more questions that I must ask of someone who started out with education-oriented artifacts pointing at CS notions of computation and grew into mastery.

        • 14. alanone1  |  January 22, 2019 at 2:51 am

          The “concreteness” of computing artifacts has to be put aside to glom onto “what’s really profound” and “lies behind”.

          This is similar to the first step in “real science” being to really understand that “the world is not as it seems” — and therefore “what things seem to be” is a blindness that has to be understood and acknowledged and put aside before the methods of science can be used “to make the invisible more visible”. That was Frank Oppenheimer’s sole aim when he set up the SF Exploratorium — it is not a science museum but (used to be) 500 hands-on exhibits that show children the world is not as it seems (he hoped each child would encounter at least one exhibit that really spoke to them).

          One of the ideas behind computing is that we — including children — can make things that help us do and think. Many children get contact with “things that help us do” — in the form of materials and tools to make things with — but most adults don’t understand that these can also be used to make things that help us think. Consider showing 1st graders that you can make an arithmetic calculator from two rulers that will allow them to outdo 5th graders, especially in fractions!
          (I would show you a picture of this except that 25 years after the WWW and 45 years after WYSIWYG at Parc, this WordPress editor will not allow me to drag and drop an image into the text here. This is a good example of why most people today should not be allowed to explain computing to children.)

          The two ruler scheme is in fact a real computer. But the programming of it is done by the child’s hands — not just to manipulate it to add and subtract — but to also put more “constants” into the machine to do different kinds of additions (not just of different fractions, but of logarithms for multiplication, etc.).
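
          As a rough sketch of what the two rulers compute (Python used purely for illustration; the numbers are mine): sliding one ruler along another adds lengths, and re-marking the rulers with logarithms makes the very same sliding motion multiply, which is exactly a slide rule.

          ```python
          import math

          # Sliding one ruler along another adds lengths: start ruler 2
          # at mark a, read the answer under mark b on ruler 1.
          def ruler_add(a, b):
              return a + b

          # Re-mark both rulers with logarithms ("different constants") and
          # the same motion multiplies, since log(a) + log(b) = log(a * b).
          def log_ruler_multiply(a, b):
              return math.exp(math.log(a) + math.log(b))

          print(ruler_add(3.5, 1.25))      # -> 4.75
          print(log_ruler_multiply(6, 7))  # -> ~42.0
          ```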

          We don’t tell the children — yet — that computers are language machines, but they are! They can then make things in language that a slightly more complex machine can understand, by using a music box, and then punching tapes for a small “player piano” music box that can be bought on Amazon for $17 (https://www.amazon.com/Kikkerland-Make-Your-Own-Music/dp/B000HAUEFY/ref=sr_1_3?ie=UTF8&qid=1548142518&sr=8-3&keywords=music+box+mechanism).

          It’s such a chore to write in this “non-editor” that I won’t go further. But the earliest ideas about the “essence” of computing (including from Babbage and Ada, from Turing, from Herb Simon and Al Perlis) are all about being able to represent processes in machines that can interpret the representations. So this is a kind of “dynamic math and dynamic reading and writing and dynamic parts of thinking”, and the study of this and the making of ideas and theories about this is a science. They decided to call it “computer science” as an aspiration for the future.

          Another way to look at this is that the goal of explaining computing to anyone, and especially children, is to find out how to include the ideas above and, from the beginning, to have them encounter more and more interesting and profound ways to deal with “meta” (which is actually the essence of what language is about) and “processes” which is what the “language machines” we make are and deal with.

          And to iterate from the above — “science is to make the invisibles more visible, understandable and usable” — “computer science is to make the invisibles of language/thinking more visible, understandable, and usable”.

          • 15. alanone1  |  January 22, 2019 at 5:21 am

            P.S. Perusing YouTube, guessing that there should be something interesting if I typed in: “Lego xylophone”, I found several projects to make a xylophone player from Lego — essentially a “music box” but where children can make them.

            A careful design of this for young children — as an extension of the stuff they are doing with Tinkertoy and Lego (hopefully with the more general components) could be really illuminating for them. I would go for “no electric motors” but for simpler more understandable sources of energy like hand power or weight power.

            Pretty early on, the children could make — from Tinkertoy-like components — a “programmable robot vehicle” using the design that Heron of Alexandria came up with (there is now a YouTube example of this) to propel the robot car with a falling weight and to program its motions by how the cord to the axle is organized around pegs, etc.

            I remember asking people on the Framework committee if they had ever heard of Heron’s design — none whom I talked to had.

            I remember thinking that if they really cared about computing to the point of love and romance they would have taken the trouble to find out everything possible about where it came from, who did it, etc. But there was no evidence of this. (Is there a physicist who does not know about Archimedes or Aristarchus? Or Lucretius? Or Newton or Maxwell?)

          • 16. orcmid  |  January 22, 2019 at 10:30 am

            @alanone1 Thanks for this and your subsequent comments.

            I was immediately struck by Michael Polanyi’s notion of focal and subordinate attention. When I first heard of it, it reminded me of the two attentions in old processors, the program counter and the data address, and the marvelous way that they can move over the same subjects. We can do that when reading also.

            Later, I came to think of the subordinate attention as being on these pixels, or the typographical shapes and even the formed “words.” The focal attention is on the tacit leap we make to interpreting language and experiencing what we take as the message. That is, perceiving what is made manifest by ourselves and that also carries some sense of the author’s intention. That an artifact is evocative of more than simply what it is (with all the incidental concrete details) and affords recognition of wonderful manifestations beyond is marvelous to me.

            I suppose one can worry that attention on programming of the usual sort is a bit like confusing typesetting and maybe even typography with the purpose of this text. (I share your distress over the limitation of expression, as found on blog systems.)

            I have no understanding of how one guides or encourages learning about this with regard to children. I am from the tinker-toy era, although books enchanted me far more than erector sets.

            Currently, I want to present a model of computation that features the prospect of representation and interpretation, starting with a mathematical foundation yet featuring the linguistic power that has computers so amenable as an instrument.

            I don’t know how accessible I can make that, since the first stage is to ensure that it is understandable to myself. I think you would recognize aspects of what you are speaking to, although at a rudimentary, mathematically-founded level of an elementary model.

            ((PS: Work in progress (on GitHub) and narrated at (https://orcmid.wordpress.com). A glimpse of the idea of manifestation with respect to electronic documents is in the proposed work — now set aside — at (http://nfoworks.org/rct/). ))

  • 17. Shuchi Grover  |  January 21, 2019 at 7:43 pm

    Mark: I, like you, was an advisor to the CS Framework development process. Yes, there were/are issues with the framework development process, but I would like to point out that “bit”, “information”, and “process” have been incorrectly called out as missing. They do appear in several places in the framework, in the context of the Computer Systems topic. It’s also unclear that the problematic examples of the GA standards are a fault of the framework. Do we know what they drew on to design those standards? It does not appear at all clear to me that those examples were inspired by K12CS (which makes a point to call out the distinction between “technology use” and CS).
    I think the framework should be a living doc. It was created in a hurry. There will always be room for improvement.

    • 18. alanone1  |  January 22, 2019 at 5:56 am

      I’m on Mark’s side here. Yes, “bit” and “bits” are referred to in a few places, but — using the pdf search for “bit” — I could not find one place in the Framework where the why of this term is explained to the reader (who if they understand what a Framework is supposed to be, would expect to find fundamental ideas and terms — their whats and whys — explained for people who don’t understand them).

      The big idea here — which is egregiously absent — is that all computing is fundamentally based on being able to compare one thing with another in a way that allows a judgement to be made (this is true of every kind of analog computing as well).

      This idea is also a cosmic one for children to get more self-conscious about (humans are terrible at tracking their own thinking processes, or even realizing they have them).

      The idea behind this is sometimes called “drawing a distinction” and the two big offshoot ideas are (a) that two things in the world can’t be the same no matter how closely they might resemble each other, and (b) that if we have to draw a distinction it will be by either deciding they are similar enough to be the “same type of thing” or not similar enough to be the same type.

      Because of noise in the physical world, we have to do active things to protect our judgement if we want to remember and use it accurately later.

      So “1s” and “0s” are a terrible thing to put in a Framework about “distinctions”.

      That they are there means the writers have only a “pop culture” idea about “bits”. The two states are actually marks. That is what computer memories have to retain, and they are digital if and only if their underlying analog nature and noisiness is kept at bay by active processes where “more things are going right than going wrong”. This is a huge idea that underlies every kind of information, messaging, retention, etc.

      Computers are machines that can interpret and provide interpretations of markings. Some of the ways the two states of the marks could be interpreted are as “true” or “false”, or “there” or “not there”, or “like” or “not like”, or as “0” or “1”. The last interpretation comes from the mechanisms that do things with the marks (the marks have no idea that they are representing a “0” or a “1”).
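
      As a tiny sketch of that point (Python used purely for illustration; the example is mine), the same pattern of marks, handed to different mechanisms, yields different interpretations:

      ```python
      # The same eight two-state marks, interpreted three ways.
      marks = 0b01000001
      print(marks)        # as a number: 65
      print(chr(marks))   # as a character: 'A'
      print(bool(marks))  # as a truth value: True
      ```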

      You might think this is too abstruse for children, but again taking Bruner’s challenge to find “intellectually honest versions of the subject matter that match their developmental level”, we can see that having them pass notes to each other that can be read by the receivers, but where intermediaries can paint over 80% of the note (in any way), will get them thinking about how to protect the message.

      Similarly, they can see how much of very old tattered books and newspapers can still be read (and learn that you can do a lot of things to each printed character before it is really unreadable, etc.). For example, suppose you get rid of all the vowels in one of their story books: can they get the whole words back?
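
      A small sketch of that experiment (Python used purely for illustration; the example is mine): strip the vowels from a sentence and see how much can still be decoded, which gives a direct feel for the redundancy in English text.

      ```python
      # Remove every vowel and see whether the message survives.
      def strip_vowels(text):
          return "".join(c for c in text if c.lower() not in "aeiou")

      print(strip_vowels("The quick brown fox jumps over the lazy dog"))
      # -> "Th qck brwn fx jmps vr th lzy dg" -- still largely readable
      ```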

      If a Framework doesn’t explain this well — or much much worse — if the people who write the Framework do not understand that this is one of a number of ideas that must be included with full explanations for the readers of the Framework — then the Framework should not have been allowed to be published.

      It winds up being a bunch of people who don’t understand the difference between “Guitar Hero” and “real guitar” and “real music” trying to bring what they wrongly think is “real music” and “real guitar” to everyone. This is another disaster of confusing popular culture with “necessary culture”.

      • 19. orcmid  |  January 22, 2019 at 12:02 pm

        Umm, I think I am looking at some of the same things.

        My thinking has been that computers don’t interpret though, the interpretations are ours and the computer/software is engineered for our affordance of them (including even the “bits”). Perhaps that is the most difficult thing to comprehend about our nature and tacit behavior.

        • 20. alanone1  |  January 22, 2019 at 2:17 pm

          An ant is set up to interpret a pheromone gradient within an environment as “go one way if I’m holding food” and as “go the other way if I’m foraging for food”.

          And we bring a lot more context (hopefully) from our environments to our interpretations of computer outputs.

          But there’s no question that what a programmed computer is doing internally (often with a number of layers at a given time) is interpreting language messages within environments and reacting to them.

          • 21. orcmid  |  January 22, 2019 at 4:01 pm

            Is there a way to put this so that it doesn’t attribute agency? To me, “doing” doesn’t provide that. I get that the computer operation is orchestrated a particular way (through however many layers) to achieve a particular interpretation-supporting result for us.

            I find myself falling back to this: http://trosting.org/info/2005/10/i051002f.htm

            • 22. alanone1  |  January 22, 2019 at 4:11 pm

              I guess I don’t understand what you mean by “agency”. I don’t agree with your indicated website’s arguments and conclusions. But I can’t answer your question until you make it clearer.

            • 23. orcmid  |  January 22, 2019 at 4:17 pm

              I also see that I used “doing” but not with respect to computer-external purpose.

              • 24. orcmid  |  January 22, 2019 at 4:46 pm

                I think, for example, saying that a clock tells time attributes too much agency to the clock. Or saying that a slide rule (or the child’s marked paper edges) does arithmetic. Using it that way is a purpose of the user, not the thing itself. I want to avoid what philosophers might consider to be category mistakes. Maybe purpose is a better term.

                I hesitate to attribute much to an ant’s behavior beyond what is observable. However it works, it is successful in locating food, returning food, and marking a route to be followed back in search of more. That’s what is accomplished when ants are not interfered with or otherwise unsuccessful. I hazard that the mechanism is at a lower level than what we are able to observe as the achievement. I hoped to avoid collapsing levels together in my digital human metaphor.

                I have side-tracked this thread. My apologies for that.

                • 25. alanone1  |  January 23, 2019 at 1:43 am

                  It would be great if you could come up with an alternate word instead of “agency” to be more clear about what you mean here.

                  You might be using it the way it is sometimes used in social science — to denote “free will” — but most people, including in computing, use the Oxford English Dictionary definition: action or intervention producing a particular effect.

                  Besides the larger philosophical debates about “free will”, it’s more worthwhile pondering whether a new born calf has free will about struggling to its feet after being born, or a herring gull chick has free will about responding to a red dot on its parent’s beak, or a human baby has free will about wanting to find a nipple.

                  I don’t think that we have to anthropomorphize a computer at all to talk about e.g. “goal seeking”, or especially “reflective goal-seeking”. Or “problem-solving” (where the mechanisms don’t have an explicit algorithm to do X, but can piece together experiments and past memories to find a way to do X).

                  In many of the latter examples, good ways to do this have one computer system able to watch another computer system’s deliberations, and to “critique” and “coach”.

                  We are not watching a simulation of a human, but an embodiment of methods for thinking and reasoning and representing that have been invented over the years, and that can be represented also in the computing medium. These are really interesting, because e.g. most of the methods of mathematics and science are not strongly in human genetics. Instead, we had to invent them and then co-opt our language abilities to make a simulation of better thinking.

                  (For example, this is what we do when we get fluent in Calculus: we have made processes in our brain/minds that can handle some ideas much better than our traditional birth brain/minds and traditional cultures.)

                  It is many of these kinds of human inventions that are quite amenable to being also made from computer stuff, precisely because of the nature of the inventions. The inventions probably include “being reflective”, given that most anthropologists and social psychologists have noted the lack of reflection and “sense of self” in many traditional societies.

                  One of the biggest swings over the last 60 years has been among linguistics researchers, who now think that much of what has been thought of as deep genetic predispositions for language — i.e. Chomsky — could very well be mostly incremental cultural inventions that have been preserved through use (in somewhat the same way that fire is universal in every culture, has been preserved in every culture, and whose roots are not genetic).

                  • 26. orcmid  |  January 23, 2019 at 12:12 pm

                    I think maybe “purpose” then. But seeing purpose at a level external to an artifact, rather a bit like an affordance — in Don Norman’s sense. By the way, I am using interpretation in a variety of ways, including “intended interpretation” and usually not in the sense of interpreting a (“mechanical”) language, although that does come up.

                    I favor your writing here. One precaution I take. I think it is useful to me, in viewing how computers operate, to distinguish between what an operation (or algorithm) is, versus what it is for. Good software maintainers (and testers) are masters at inference of purpose in isolating code defects and correcting them in a fashion that aligns with the purpose. But that depends on external knowledge of intended purpose that the computer has no access to, at least in present-day systems.

                    I posit that making art, in all of the beautiful ways that a computer can be an instrument, is in this category too, with respect to purposes of ours. Is there a case here to avoid confusing the instrument with the artist? (Wandering off into notions of beautiful code … .)

                    • 27. orcmid  |  January 23, 2019 at 12:30 pm

                      Looking over this lengthy post and all the commentary (and watching an anti-trust trial in which standards are featured), two observations come to mind.

                      Establishment of standards is a political (and even socio-economic) act. I can’t find an exception in my work on standards of various kinds, even for ASCII.
                      As Bob Bemer observed, “Standards are arbitrary solutions to recurring problems.”

                      It is valuable to be clear on whose problem and on whose agenda (for the arbitrary part).

                    • 28. alanone1  |  January 23, 2019 at 1:51 pm

                      It is possible to write algorithms whose purpose is to find, manifest, and deal with purposes that neither the algorithms nor the programmer of the algorithm has any notions about.

                      An Ur version of this is that a universal Turing machine mechanism does not have to have — nor does it generally have — any notion of what the machine that is being blindly simulated from the description on the tape will be like, do, etc.

                      The Turing Machine is “just doing what it was set up to do”, but the manifestation of carrying out arbitrary descriptions of things — including descriptions of even more comprehensive mechanisms, including those that can create next levels for themselves — is unlimited in qualitative scope.
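
                      As a minimal sketch of that point (Python used purely for illustration; the construction is entirely mine), here is a simulator that blindly executes whatever machine description it is handed, with no notion of what that machine computes:

                      ```python
                      # Rules map (state, symbol) to (new state, symbol to write, move).
                      # The simulator is "just doing what it was set up to do".
                      def run_tm(rules, tape, state="start", halt="halt", steps=1000):
                          cells, pos = dict(enumerate(tape)), 0
                          for _ in range(steps):
                              if state == halt:
                                  break
                              state, write, move = rules[(state, cells.get(pos, "_"))]
                              cells[pos] = write
                              pos += 1 if move == "R" else -1
                          return "".join(cells[i] for i in sorted(cells))

                      # A description of a bit-flipping machine, supplied as data:
                      flip = {("start", "0"): ("start", "1", "R"),
                              ("start", "1"): ("start", "0", "R"),
                              ("start", "_"): ("halt", "_", "R")}
                      print(run_tm(flip, "1011"))  # -> 0100_
                      ```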

                      Similarly, the simple processes of evolution — which do not seem to have any purposes/goals — can eventually result in living things that not only have purposes but have goals they are trying to achieve — and in a number of cases, the entities can do some planning, some problem solving, etc. to achieve their goals.

                      This is because once brains can represent “ideas as things” — not just pattern matching and reactions — new ideas can act as though they are parts of the underlying brains. (This is also what is fun about Universal Turing Machines — you don’t need much, but can do everything if you only have enough memory to hold the “new brains”.)

                      This is not a surprising idea to biologists, etc. but it takes quite a bit of work to bring it into the realm of “not surprising”. Physics is not telling atoms anything about writing “Shall I compare thee to a summer’s day”, and to quote some of the writings from people who don’t understand, the atoms “are just doing what they are programmed to do”.

                      It’s the organizational architectures of simple things doing simple things that wind up being able to write sonnets. This is one of the biggest ideas in “Computer Science” but it is not an idea that jumps out at the simple programming level.

                      The reason I’m dinging the Framework so much is that it egregiously missed almost everything that is actually important — and even failed to explain the mundane stuff that isn’t that important. If it were a “Framework” for “real science”, then it missed explaining Physics, Chemistry, and Biology (and it certainly did miss the computer equivalents of these three fields).

                  • 29. orcmid  |  January 25, 2019 at 11:16 am

                    I think I misled you about what the “digital human” was about, somehow. It was about a human playing computer to see how limited operationally the computer is, with respect to the external world relative to the domain of computation.

                    In any case, I think it would be good to have a conversation about the “essence” of computing. I think there are different levels in which that might be manifest, especially in the context of human purpose. Models of computation expose an “essence” in theoretical terms. Their realization in ways accessible to manipulations and explorations by people is another (or several). I think they all matter, as does appreciating how the underlying mechanism is so amazingly useful in our hands.

                    • 30. alanone1  |  January 25, 2019 at 12:35 pm

                      I think “playing computer” has really misled many people, in much the same way as when you assay a living thing, you wind up with just 6 of the simplest elements, plus tiny traces of a few others, and 70% of the assay is just water. This led to a variety of supernatural explanations for life.

                      This was because the power of “architectural organization” and the synergies both for function and for noise damping is hard to imagine for most human brains without a lot of training.

                      My greatest joy — besides music — as a teenager who was 13 in 1953 for the Watson/Crick letter to Nature, was to grow into the early stages of Molecular Biology and to see the first complete sketches of “life as an architecture”.

                      If you look at a favorite intestinal bacterium — E. coli — you see a living organism that has 10s of millions of “large molecules” made from the 6 simple atoms in the 30% of it that isn’t water. Going from imagining what any of the simple atoms do, to even combinations of the simple atoms, it is really hard to see that one of the “implications” is “life”.

                      Besides the scales and organizations involved, the scale speeds (a better word is “ferocities”) of activities are quite beyond human imagination. At room temperature to blood temperature, molecules are moving at more than 1000 miles per hour; if you scale this to the diameter of a water molecule, the scale speed is beyond our ability to imagine. E.g. much larger protein molecules of 5000 or so atoms are moving their own length in about 2 nanoseconds and rotating at about 1 million rotations per second. This diffusion is such that the probability is essentially 1 that any one of the millions of large molecules will touch any of the others in less than 1/2 second. This is why life can work: there’s enough motion from heat to allow random contacts to be effective.
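
                      A back-of-envelope check on those numbers (Python used purely for illustration; the constants are standard, the framing is mine):

                      ```python
                      import math

                      k = 1.380649e-23     # Boltzmann constant, J/K
                      T = 310.0            # roughly blood temperature, K
                      m = 18 * 1.6605e-27  # mass of one water molecule, kg
                      d = 2.75e-10         # rough diameter of a water molecule, m

                      v = math.sqrt(3 * k * T / m)  # rms thermal speed
                      print(v * 2.23694)            # ~1500 mph -- "more than 1000 mph"
                      print(v / d)                  # ~2.4e12 body-lengths per second
                      ```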

                      Computers have even simpler “atoms”, but even after these have been put together to make a simple programmable computer, trying to “play computer” with this will have the same problems with human capacities for imagination. Included with this are the actual speeds along with the architectural combinations that can be made.

                      One will erroneously wind up with observations and conclusions like the web-page you pointed to in one of your comments.

                      A more interesting and understandable way to approach “imagining computing” is also similar to the physical sciences. Start with the fact that there are living things, and try to see if they are architectures or whether something supernatural has to be involved. Trying to work upwards from atoms is really tough for most people.

                      Marvin Minsky used to teach new grad students to program in Lisp by giving them the previous year’s best PhD thesis and telling them to “find X” and see if they could change it to do “Y”. The context for learning programming was a whole system — and one that was working when you started on it — not a few lines of code that don’t do much and might have immediate errors.

                      The thesis was that year’s version of “life” and the kids played at “life” while learning how atoms could be assembled into “life”. This is one of those “Context is worth 80 IQ Points” ideas, and missing these kinds of things is yet another facet of my complaints about the Framework and the very limited conceptions touched on in the Framework.

                      It doesn’t matter if you kill “life” while poking around before you understand it. That is what “UNDO” is for. And that is why every single thing in Smalltalk had a hood that could be opened on the dynamic entity underneath while it is running, and that is why we put similar hoods on everything in Etoys, even for 5th graders. It’s not that they are charged to understand everything — or anything much — down there; it’s that they can see that there is a “down there” and there are “down theres”.

                    • 31. orcmid  |  January 25, 2019 at 12:59 pm

                      I’m going to say uncle here and stick to my investigations, at the level they are at. Whether that provides any utility to a higher-level essence of computing will depend on the utility of that.

    • 32. Mark Guzdial  |  January 22, 2019 at 6:56 pm

      It’s a reasonable question: If state standards are lacking (in quality, in coverage, whatever), how much fault can be attributed to the framework that the standards are based on?

      I teach user interface design, so I think about it like an HCI designer. If a user can’t use my interface, it’s mostly my problem, not the user’s. We used to blame the user and insist that they read the manual, or learn to think more like a developer. But over time, we realized that interfaces should be built for users. Being user-centered leads to better interfaces.

      There’s a balance to be struck here. Users can be so much more powerful and successful if they are willing to learn. Interfaces designed for walk-up-and-use usability aren’t necessarily well-designed for expert use. (Great paper on that last point: https://www.nngroup.com/articles/anti-mac-interface/ )

      The Framework was designed to be usable by the states for writing Standards. Certainly, states can use the Framework well or poorly in writing their Standards. But if states are producing poor Standards (and the states that write good Standards succeed because they have some of the Framework writers or advisors on their development team), then we have to consider the possibility that the Framework did not achieve its design goals. A better Framework might have led to greater usability.

      • 33. alanone1  |  January 23, 2019 at 2:18 am

        To iterate some already mentioned ideas (sorry): I don’t think standards or curricula should be attempted before having a good collection of tested examples and projects drawn from the best of the last 50 years (and from a number of new examples and projects that will have to be invented).

        To use your UI example and analogy, the number one principle in good UI design is to understand the users who are being designed for (this is importantly similar to “the number one principle in trying to teach X to Y”, is to understand Y as well as understanding X).

        The Framework exhibits little to no understanding of what children can actually do (knowledge gathered through 50 years of experience by those who have actually worked with children). So the first job at hand (as I explained to them) is to start collecting the best evidence of what children at different developmental levels are set up to learn.

        A good example from many is to compare what the 5th graders in a demographically spread busing school in Los Angeles were able to do with real math and real science, which considerably surpassed what studies of a large cross section of college students showed on the same subject: http://www.vpri.org/pdf/rn2005001_learning.pdf.

        Omission of example projects like this was particularly tragic because it was done ca 2000 in the ancestor of Scratch, and could have been used as an example of “If you take this route with 5th graders, they will wind up being able to do this“.

        Projects are not the same as standards, nor are they the same as curricula (they are kind of “micro-curricula”), but projects are needed as part of curricula, and a rough ordering of projects can provide a useful several years of testing to help understand the more detailed multithreaded strategies that make good curricula, and, in a parallel route, the really difficult task of “standards”.

        Why are standards the most difficult? Because they are as much or more about assessment as they are about “achievements to be aimed for”. I’m not going to argue against them here, but I’d like to point out that music teachers and sports coaches (who are not involved in “official schooling”) don’t actually have articulated standards, and don’t base their curricula on standards.

        Instead, most of these teachers are deep practitioners who can recognize — and often very quickly — much of the level of achievement of the students by listening and watching. A scientist can do this with those who are learning science. Doing this with children requires understanding what an “intellectually honest version of X” is for them at their level of development. Musicians and coaches have no trouble with this adjustment.

        But for e.g. math and science in the elementary grades, we have almost exclusively teachers who have never gone deeply enough into math or science to be at all fluent — and they can’t tell whether real math and science is happening or whether the curricula forced on them is good or just plain crap.

        Instead of creating and paying for better teachers, the non-remedy has been to try to do more and more assessment by testing. This really doesn’t work in music and sports — they are art forms that have to be assessed by helping the children do art — and I’m quite firmly of the opinion — being both a mathematician and a scientist — that these are also art forms, and the kids have to actually be helped to do them, and need to be assessed by people “who can tell”.

        Once something good like this is installed, then “little tests” (as used in music and sports) can really help find areas that need to be worked on.

        Etc.

  • 34. alanone1  |  January 22, 2019 at 4:02 am

    Another way to look at this is from one of the two most wonderful experiences I’ve had watching children learn “real computing”, when ca 1998 in Etoys a 5th grade girl saw that she could change her “car” into a “robot car” and then change this into a “fish” and then change the “road” into food for the fish, and add more ways for the fish to find and follow food. She finally said “Oh, you can do anything!”

    She had felt and found “the music” of computing and could feel strongly that “it was an instrument whose music is ideas”.

    If you are going to make a Framework for a subject that is of any use and worth at all, you have to put “the music” and romance of the subject front and center and help readers to feel them deeply. One of the ways to help do this is for the writers to feel this deeply and to convey their own feelings to help the readers feel them.

    The CS Framework has nothing of the kind to it. Even if it had anything worthwhile in it (it is not completely void, but close), it fails its primary raison d’être by not emitting the beautiful bright lights the real subject radiates.

    Frameworks are about “fundamental concepts” of a field, but the chapter (6) called “Concepts” doesn’t appear until page 87, and when we try to find what these might be, they just aren’t there.

    Given that there were people on the committee such as Mark, Yasmin, Uri, and a few others who must have been overruled many times, the Framework book itself looks like it was taken over by the writers (and perhaps teachers). The result is beyond disappointing.

    I quite disagree with the argument “[the framework] was created in a hurry. There will always be room for improvement.”

    First, why be in a hurry to make something that is going to influence, for a decade or more, people who will use it instead of thinking things through for themselves? This seems completely wrong.

    Second, it is not close enough to what a real Framework should be to allow “room for improvement”. It really needs to be done again, more carefully, taking more time, and with a majority of people who understand the subject deeply.

    The first pass should just be about “computing” without regard for K-12. That needs to be vetted carefully. Then take Jerome Bruner’s (and Seymour’s) advice and start the harder tasks of finding and inventing “intellectually honest versions of the real subject” that can start to fit into the developmental levels within K-12.

    As I said in another comment here, I think the best way to do the second pass is by collecting examples of the best stuff/projects that have been found to work with children over the last 50 years, and use these to help with sequencing and general design criteria.

    A very important part of any Framework of a subject for any age — and especially for children — is to try to explain what it means for someone to learn the subject and be fluent with it. For the developmental levels the children move through, this has to be done for each level as well as it is possible. One of the big reasons for this is that it is not possible to understand a standard that uses phrases akin to “students know” without understanding what is meant by “students know” and how one would determine that they “know” or don’t know.

    Without all of these different things being done really well, what you wind up with is something devoid of the “why?” (the “music”) and the “what?”, and with an unacceptable ratio of BS to any real content.

    It is a kind of reflection of what is quite wrong with American education and the larger processes involved with it. But why drag another neat thing — like computing — into the soul sucking processes that have already done in reading, writing, mathematics, science, and history?

    Why not take advantage of the newness of computing to at least make a great example of how a wonderful new art form of great importance could be introduced to all children in our schools?

    • 35. orcmid  |  January 22, 2019 at 12:08 pm

      Your writing reminds me that the fanciful imaginings of children are probably akin to the invention of language that every child undertakes, and how playful it can be, along with the fearless discoveries of motion, walking, and other delights.

      I fear I am artless and fail to see, from my perch, how important it is for unfolding a full life.

      Reply
  • 36. alanone1  |  January 23, 2019 at 4:25 am

    Apologies in advance for yet more verbiage … But the many things egregiously wrong about the Framework, and worse (as Wolfgang Pauli once complained, “Your theory is not even wrong!”), are good instigators of both long-held opinions and a few new ones.

    Some of the interchanges with orcmid brought up one of the many things I complained about when the Framework was first revealed. Not just the lack of “meta”, but the lack of so many things that are really interesting and important about “computers as language machines”.

    The programming styles generally advocated in the Framework, and in using Scratch as a “typical” programming-language target, do have some overlap with the imperative simple-algorithms-and-data-structures approach used commercially. But they wall off experimentation with languages themselves, and with stepping back to see what one’s “programming language” is as a language. These are key parts of “CS”.

    A simple example: in most of what children are exposed to today, it is not straightforward to write even a small Logo program that turns any English sentence into Pig Latin (this was pretty much the first thing I saw when I visited Seymour and Cynthia Solomon and their 12–13 year old learners in 1968).

    For example, this is barely possible to do in Scratch — I just investigated this — but I think it would be quite unwieldy and frustrating for the age group who could do this quite readily in Logo.
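
    To make the exercise concrete, here is a minimal sketch of what such a program computes, written in Python rather than Logo for readability here (the original would be a handful of short Logo word-and-list procedures), and assuming one common rule: move any leading consonants to the end of the word and add “ay”.

    # A sketch of the Pig Latin exercise, assuming the common rule:
    # move any leading consonants to the end of the word, then add "ay".
    VOWELS = set("aeiou")

    def pig_latin_word(word):
        """Translate one lowercase word into Pig Latin."""
        for i, letter in enumerate(word):
            if letter in VOWELS:
                return word[i:] + word[:i] + "ay"
        return word + "ay"  # no vowels: just append the suffix

    def pig_latin(sentence):
        """Translate every word in a sentence."""
        return " ".join(pig_latin_word(w) for w in sentence.lower().split())

    print(pig_latin("turn any sentence into Pig Latin"))
    # -> urntay anyay entencesay intoay igpay atinlay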

    More to the point of orcmid’s comments are questions of “making progress without direct information” and “reflection”, winding up doing things that can’t simply be programmed ahead of time, etc.

    My favorite through line on this for learners is from Ron Ferguson’s fun Byte article on Prolog from 1981.

    The basic idea of these kinds of languages (and this kind of computing) is that most problems will require various kinds of information — some of it strategic — that has been found, created, and remembered, and then pieced together on the fly, so that the system eventually writes the program that does the desired work.

    Ron does a great thing in this article. He first shows some simple standard Prolog things, then introduces the idea of a robot on the moon with a spaceship, a human, an alien, some gold, etc. How can the robot be strategically programmed so that its processes can figure out how to deal with obstacles, how to get the gold, and then get it?

    This is a “Monkey and Bananas” kind of problem, and it is relatively easy to write the strategy in Prolog for describing it and how to go about figuring out ways to solve it.

    However, some of the perfectly logical solutions will harm the human. So Ron says: why not introduce Asimov’s Three Laws of Robotics as a “superego” process that can check the plans from the problem solving part before they can be interpreted to instigate actions?

    This works well in a Prolog (or a Lisp or a Logo) because they are all meta, in that (a) code is something that can be looked at, and conversely (b) the stuff that can be put together to be looked at can also be interpreted as code. This is a very big idea — quite absent from the Framework’s purview even for high school — partly because, in their limited view of things, they were thinking of the current HS AP in Java, and this kind of stuff is not at all straightforward in Java.

    The way this can be done in Prolog is really nice: you can make an interpreter for Prolog in Prolog in just a few lines (one that can deduce things), and this simple interpreter can have the higher-level “Asimov” strategies (again in just a few lines) that check to see whether the Laws are being upheld.

    This is a great example because it suggests good ways to use all kinds of trial evaluations in a wide range of situations.
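
    Here is a minimal sketch of that “superego” structure, in Python rather than Ferguson’s Prolog, with all the action names and the harm test as illustrative assumptions. The essential move is that plans are just data, so a separate checker can vet them against Asimov-style laws before anything is interpreted as action.

    # Plans are data: a list of (verb, target) steps proposed by a solver.
    def harms_human(step):
        """First Law check (illustrative): would this step injure the human?"""
        return step in {("vent", "airlock"), ("crush", "human")}

    def lawful(plan):
        """Vet the whole plan before any step runs."""
        return all(not harms_human(step) for step in plan)

    def execute(plan, actions):
        """Interpret a plan only after the 'superego' has approved it."""
        if not lawful(plan):
            raise ValueError("plan rejected: violates the First Law")
        for verb, target in plan:
            actions[verb](target)

    actions = {
        "goto":    lambda t: print("moving to", t),
        "pick_up": lambda t: print("picking up", t),
        "vent":    lambda t: print("venting", t),
    }

    execute([("goto", "crater"), ("pick_up", "gold")], actions)  # runs
    # execute([("vent", "airlock")], actions)  # raises: plan rejected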

    It is also even more fun to think about how to do this example earlier in the children’s experiences, where the “meta” is not embedded but is part of a single-level pipeline. For example, we can start in Logo by creating sentences for Pig Latin, go from there to sentences that have the form of Logo code, which can then be executed using “eval”, and finally have another program look at each sentence first, to see whether anything dangerous is going to be done, before issuing the “eval”.
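
    A minimal sketch of that pipeline, again in Python for concreteness (Logo’s word-lists and “eval” are the original setting), with the list of “dangerous” words as an illustrative assumption:

    # Build a sentence that happens to have the form of code, inspect it,
    # and only then hand it to eval() -- code as data, then data as code.
    DANGEROUS = {"erase", "delete", "shutdown"}

    def looks_safe(sentence):
        """Look at the code-as-data before anything is interpreted."""
        return not any(word in sentence for word in DANGEROUS)

    def checked_eval(sentence):
        return eval(sentence) if looks_safe(sentence) else "refused: " + sentence

    print(checked_eval("2 + 3 * 4"))   # a sentence in the form of code -> 14
    print(checked_eval("delete()"))    # -> refused: delete()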

    This kind of stuff is part and parcel of the ideas of Seymour and Marvin about giving children an environment and materials that can be much more strategic than simple programming, and also much more reflective — and, critically, that give the children ways to think about their own thinking, and about thinking in general.

    This brings to mind that Uri Wilensky — a great guy who was a student of Seymour’s — was on the list of advisors for the Framework. He’s nicer than I am, but I’m guessing that he must have brought some of this up and advocated for it.

    He also is the driver behind the excellent massively parallel particle system “NetLogo” (which came out of some of Seymour’s ideas and Mitchel Resnick’s thesis), and there could be another large angry rap here about “where is ‘massively parallel’ computation in the Framework?”

    And for that matter, where is Mitchel Resnick’s own great thesis work in his own Scratch language? (The absence of this is one of my top disappointments with the way Scratch turned out — it really is not optional these days in any reasonable system for children, or for any age.)

    Reply
    • 37. Mark Guzdial  |  January 23, 2019 at 9:52 am

      I’m appreciating your insights, Alan. Thanks for sharing them.

      I’m trying to find an online reference for the Ferguson article. I’ve been able to find the reference, but I don’t have that issue of Byte here, and I haven’t found a digitized copy yet. Anybody here know of a source?

      Ferguson, R. (1981). Prolog: A step towards the ultimate computer language. Byte, 6(11), 384-399.

      Reply
      • 38. orcmid  |  January 23, 2019 at 11:40 am

        Here’s the Byte issue archive:
        https://archive.org/details/byte-magazine-1981-11
        with available PDF of the issue (a 360 MB download). The issue is fascinating as a snapshot of computer history and the popularization of computing. I’m certain that a command-line version of Prolog can be found if you want to try the robot experiment.

        Aside: I found a citation in Barker, P. G., & Singh, R. (1982). Author Languages for Computer-Based Learning. British Journal of Educational Technology, 13(3), but it was only a passing reference to Prolog not being successful. This is an interesting survey paper, though behind a paywall and perhaps dated: it uses the pouring-into-the-student’s-head view, updated with computer-mediated support. I think the orientation is a search for LMSs, with students having interactive-instruction tools. This is before graphical UIs at the consumer level (well, maybe the Apple ][), and I don’t think they had wind of Smalltalk. That has me wondering how to make things truly student-centric rather than student-subject (and how to deal with the tracking and measurement conundrum).

        Reply
      • 39. alanone1  |  January 23, 2019 at 11:57 am

        Hi Mark

        I just sent you a scan of the Xerox copy I’ve had for many years.

        If you put it online, then you can refer to it in a blog comment.

        For the K-12 uses we are discussing, one would not use Prolog or the representation Ron used, but would instead have the pedagogical language we have made be able to nicely do this.

        To me, one designs a programming language from about 20 driving examples that have to be “really nice” and seem to be from the same fabric.

        The Framework and Standards people completely didn’t understand how to go about designing anything … (you have to start with “exemplars” … )

        Reply
        • 40. Mark Guzdial  |  January 23, 2019 at 12:08 pm

          Thanks so much, Alan! I’ll put it on the U-M servers soon, but for now, folks can grab it here: https://www.dropbox.com/s/5joehjloygyfgvc/prolog-ferguson81.pdf?dl=0

          To respond to Dennis’s comment: Benedict du Boulay studied students learning Prolog in the ’80s and ’90s, and he documented a lot of the problems students have in understanding the unification algorithm. I agree with Alan — for all the reasons Ben found, Prolog in that form is unlikely to have much traction in K-12. What the Ferguson article offers is a model for describing a language around a set of examples.

          I’m excited by the Ferguson article because of the work I’m doing in thinking about a programming language to support historical thinking, particularly around causal chains. My current design has an underlying mechanism somewhere between OPS-5 and Prolog, with a different kind of representation. I’ve got three examples that I’m working around right now: how the Plague led to the end of serfdom, what led to the extinction of megafauna, and why ancient forests turned into coal and oil while forests today decompose into the soil. Only three, so I’m just getting started.

          Reply
  • 41. alfredtwo  |  January 23, 2019 at 11:53 am

    This was hard for me to read because I was involved in writing the Framework and invested a lot of time in it. But with the benefit of time, and an outside view from many, I can see some of the flaws.

    I was also on the CS 2013 task force, so I have seen two different processes for two very different documents. I do, and always did, wish the Framework process could have been more like the CS 2013 process. I feel the reviews between drafts of CS 2013 were done better, with solid input from many and varied outside reviewers.

    For another thing, I felt like the Framework writing became a lot more about how things were said (worded) than about what was said. Not that what was said wasn’t important, but politics drove the wording. Perhaps that is because I joined the writing team late in the process, but I still wonder if the emphasis was right.

    As a classroom teacher, I don’t spend a lot of time looking at how my curriculum matches a standard or a framework. Teaching at a private high school, I have more flexibility and self-determination than I suspect teachers at public schools have. I have also spent a long time in the computing field as both an educator and an industry professional, so I have a sort of confidence (ego, perhaps) that I can decide what students need to know. Students seem to be well prepared for university, or so they tell me, so I can live with that for now.

    The problem is what to specify for newer, less experienced (more modest?) teachers. The Framework was one attempt, and it was well-meaning, I am sure. What we need to do is not so much focus on its flaws as discuss where to go from here.

    Reply
    • 42. alanone1  |  January 23, 2019 at 3:31 pm

      Hi Alfred

      I’ve always been a big enthusiast about Jerome Bruner’s idea that “for every learner you can (and need to) find an intellectually honest version of a subject they can learn, if you heed their level of development.”

      In order to do this, I think you really need to have a good picture of the subject, and as good a definition of it as possible, before trying to find “intellectually honest versions of it” for different kinds of learners.

      This was not done for the CS Framework. If the subject were physical sciences, one would first start with top scientists to put together a workable picture of the sciences as the best people in the field see them at that time.

      I think that group would also insist on inviting the best engineers in the allied engineering disciplines to wind up with a double document about science and engineering involving the physical sciences and how they relate today.

      This should have been done with both the “computing sciences” and “computing engineering”, especially “software engineering”.

      An important note here is that there would be frighteningly little overlap between that double document and general commercial computing practices. Dealing with that gap is necessary, but it is entirely a separate issue from “real science” and “real engineering”.

      Now we have the issues of K-12. The easier parts of this are developmental levels and styles. But much of high school, especially the last several years, is so jammed up with various real and imagined pragmatic crises and hurdles, that dealing with this is a major set of problems.

      My suggestion was that this also be separated out, and that the K through 6-8 (or so) years be dealt with as though the pragmatics of APs, college, jobs, how businesses program computers, etc. didn’t exist, and instead to concentrate on “intellectually honest and deep” versions of computing that children can learn fluently in their early years.

      Most of the great scientists who helped to make the definitions of the field are likely not very good even at making “intellectually honest versions” that fit undergrads (e.g., Feynman’s Lectures are wonderful, but were not a great fit even for Caltech students), let alone high school or grade school.

      So the tough process — which has to be done very carefully, and take as long as necessary — is to come up with those things that are simultaneously the “real deal” and that children at different levels can get fluent in.

      Some remarkable people need to be involved for this to work. You will be able to find a few top scientists who also have a feel for children. There are special teachers who have a feel for the subject even if not deep in it. There are projects that have been done for 50 years that show some of what children can do.

      And above all, there are many new projects that can be defined and tried out during this process. A very important empirical fact from our research — and very likely a more general fact — is that it takes about 3 years to really test out a curriculum idea on children using real teachers. A very big problem with the educational literature is the general absence of this learning curve for ideas and trials.

      The good news is that the real scientists can almost always recognize when an “intellectually honest version” of their field is present in an approach for children — and they are even more accurate at spotting when this has been missed (where the attempts to simplify have lost the esprit du sujet).

      A Framework exists to explain the subject to people who aren’t clear about it, especially those who are being asked to think about, and eventually create, curriculum and to teach the subject. Again — leaving grades 9-12 aside here — the key to success is to have the Framework (a) explain the subject to adults who don’t know it, especially those who have had a little experience and might think they understand it well enough, and (b) show what the subject might be like for four or five developmental levels within K-8.

      (a) has to be done really well in order to provide the necessary context for (b).

      No part of the content of the Framework should cater to the current state of teachers, except in the pains it must take to explain the real deal to them. As Mark points out, curricula, standards, practical issues, teacher training needs, etc. are outside the scope of a Framework’s reason for being. There have been enough really good examples of past frameworks — for example, the “English and Language Arts Framework” in the state of California in the ’80s — that there should have been no confusion about the goals of this Framework and what it needed to accomplish.

      Since a Framework, once put forth, will influence and start processes for a decade (usually more), what is the point of hurrying and doing a weak job on something that is going to be used to initiate ideas and directions, and also as an authority for what is being done? Yikes!

      I remember to this day the shock I felt more than 50 years ago when I accidentally went to an ARPA grad school after being a professional programmer for 5 years in the Air Force and at the National Center for Atmospheric Research. I was a good programmer, and also aware that I didn’t know a lot about computing. However, what ARPA was doing was (a) qualitatively outside virtually all my experience, and (b) much of it was “real computer science”, a term I had not even heard before.

      I came away with the realization that — because of the nature of both programming and most institutions — what was “normal” in most of the world about computing really had very little to do with what was important about computing.

      The non-understanding of this difference is a very large bug in trying to explain and teach “real computer science”, especially in the forms that are needed for the general population, and especially for young children. This bug shows up pretty much everywhere in the Framework I’m complaining so much about.

      After the fact, I was given a copy of the Framework to comment on, critique, and “improve”. I explained much of what I just wrote above. The problem is that the Framework missed what’s important, and I said you really need to discard it and start over with a qualitatively better process, taking the time needed.

      I’ve been involved in many “Standing Boards” on X at the National Academies, some of them even good. Some of those got completely distorted because the Academies have staff who actually do the writing, and sometimes most of the meat from the meetings got missed entirely. This also happened, to a surprising extent, in a process I wasn’t involved in: the AAAS Project 2061 — a massive effort about “Science Literacy for all” — where in the end it was the writers who controlled the output, and they missed much of importance.

      Reply
  • 43. orcmid  |  January 23, 2019 at 6:00 pm

    Wow. @alanone1, your last two lengthy replies are packed with notions that intrigue me. I’m not clear whether I should be happy or sad with the mentions. Thanks for so much meaty narrative, Alan.

    (Oh, so my wordpress account doesn’t reveal my real-world name, only my nom-de-bit. Sorry about that. Internet search on “orcmid” finds me though.)

    Reply
  • 44. alanone1  |  January 24, 2019 at 4:09 am

    A little more context from another angle.

    We are not the only creatures whose identities, thinking, and almost certainly evolution, are a mixture of internal and external (through our cultures), but we are by far the most intertwined.

    This means the main business of childhood is to become fleshed out by acquiring “most things” from the surrounding culture (and there is considerable evidence that children have strong genetic drives to do this — and some of these still operate for adults moving to a new culture).

    Montessori — besides being the first woman physician in Italy — was one of the leading anthropologists of her day, and used her understanding of what is actually going on to declare that early education is primarily epistemological, and many of the important epistemological perspectives of the 20th century are quite hidden.

    These don’t exist in most homes, and cannot be learned in a classroom, but must be made visible and embedded in the culture that surrounds the child, so that the genetic drives to learn one’s environment and culture become large motivators to absorb modern perspectives. So: make the school be “the 20th century”. This puts a tremendous burden on the teachers in such a school, and Montessori made it her main task to train and vet them.

    From this perspective, we should be able to readily see the underlying purposes helping children get fluent in the relatively new inventions of the human race (writing is only about 5000 years old) via the enriched forms of cultural embedding that a school can become.

    When we look at adulthood in a modern society, we don’t just see adults working to make a living, but we see many kinds of processes that have been set up to moderate our genetic impulses into more fruitful paths. Many of the inventions that gave rise to the ideas of “civilization” (especially as meaning “people trying to become more ‘civilized’ “) are not found in traditional cultures — they are more recent inventions and require both more learning and more difficult learning to acquire.

    These include not just reading and writing and math and science, but also the idea of “rights” and “equal rights”, and to take our impulses for both cooperation and competition and choose the former whenever possible. And so forth.

    To pick just three things beyond making a living that we need to learn how to do, let me choose:
    1. Citizenship
    2. Duty to help raise the next generations
    3. “Richness”

    The last of these has much to do with what it means to feel the various glowing emotions we are capable of that stem from love, art, etc.

    This means that when we help children learn to read and write, only the tiniest part is to help them, 15 years later, to work and earn a living. Most of it has to do with the three things on the list, and especially “richness”.

    Similarly for mathematics and science and history and the arts. We are trying to help full-blown “rich” versions of human beings to form, and in many ways to go far beyond our genetics and our traditional cultures to also grow the cultures we grew up in.

    For example, what is the place of “science” in an elementary school classroom if we take the epistemological imperatives to heart? We should be able to see that by far the larger part is not “science facts” as a catechism that can be tested via multiple choice. For one thing, science doesn’t have “facts” in the traditional sense of the term: its epistemological stance towards its knowledge is so different that the term “knowledge” probably shouldn’t have been re-used. Similarly, what it means to “understand” in science is so different that another term should have been coined. (Robert Heinlein came up with a Martian word, “grok”, to mean a very different kind of deeper understanding.)

    Instead, we can start to see that “modern science” (what Bacon called the “new science”) is the combination of new epistemological stances, partly in the form of heuristic methods and mechanisms, for trying to deal with the noisiness of our own brains, our cultures, our languages, and our educators.

    This is a very big idea! Right at the top of the most important human inventions. It’s the invention of us “not being smart and knowledgeable” coupled with the inventions of “ways to deal with our blindness”.

    As a slogan: “We can’t learn to see until we realize that we are blind”.

    Another way to look at this is: “It’s about music, much much more than ‘music appreciation’.”

    As a start on this way of looking at “real science”, we can see what a “real scientist” (Frank Oppenheimer) created for children to help them with the most important first step towards science: realizing that “the world is not as it seems”. This is what the original 500 hands-on exhibits at the Exploratorium were for — the idea was that 2000 children and 500 exhibits had a chance to produce collisions that would shock their ideas loose from commonsense takes on what is going on. (This has now gotten quite lost since Frank passed away.)

    For children — especially — we have to find ways to help them take on these new perspectives without killing off some of the older perspectives that are the sources of great happiness. For example, the many kinds of things that have story elements, and how and why we love stories, are sources of great pleasure — yet, “story thinking” really hurts scientific thinking — and “good thinking” — in all but a few ways that have to be carefully restricted. (I will not take this thread further here.)

    We were very lucky in our field to have Seymour Papert be deeply interested in children’s development. This led to a very elevated perspective — along the lines I’ve put forth above — to see how children’s epistemological perspectives could be enriched, and that certain ways of shaping computers could be of great help. Marvin Minsky got interested in these ideas also, and provided additional important perspectives.

    The important point here is that it is not going back to Logo that will help so much as being able to recapture the larger visions and perspectives that Seymour helped us to see and appreciate.

    In all of these invented areas that help us grow and think and feel more deeply — and now the new one of computing — we have to keep closely in mind:

    our “epistemological responsibilities” (let’s pick Maria Montessori as our patron saint);
    our “pedagogical responsibilities” to “find intellectually honest versions of the subjects that match to the learner’s developmental levels” (Jerry Bruner is the patron saint here);
    and that computing can be a truly profound new way to think about both important old things, and the important new things that are only possible because we have computers (guided by our patron saint Seymour Papert).

    Reply
  • […] One of the questions relates to the recent discussion about standards and frameworks (see post here). […]

    Reply
  • […] mistakes. There was the time I said Stanford was switching from Java to JavaScript. I should have fought for more CS in the K-12 CS Framework. And I should have been saying “multi-lingual” instead of “language independent” for years. […]

    Reply
  • […] CS classes, from the data available. There is value in setting high standards for CS education (as Alan Kay has been arguing), but that’s an argument for the end goal. Where do we start with CS education? How quickly […]

    Reply
  • […] best description of how I used these whiteboards and the discussion notes is that these are my standards. My advisors said very clearly during the sessions — there are too many learning objectives and […]

    Reply
