Alan Kay on Hoping That “Simple” is not “Too Simple”

April 23, 2010 at 12:51 pm

Alan wanted to make this longer comment, but couldn’t figure out where it fit naturally, so he kindly forwarded it to me to provide here:

Mark in his blog has provided a cornucopia of useful topics and questions about teaching computing to a wide demographic. It’s all very complex and (to me at least) difficult to think about. My simple-minded approach for dealing with this looks at “humans making/doing things” as having three main aspects:

1. Bricks, mortar, and bricklaying
2. Architectures
3. Models of the above

And we can think of the “model” category as being composed of the same three categories.
1. Bricks, mortar, and bricklaying of models
2. Architectures for models
3. (Meta) Models of the above

If we stop here we have a perhaps overly simplistic outline of the kinds of things to be learned in computing (and many other activities as well).

Questions I would ask about these include:

  • How many ideas are there here, and especially, how many ideas at a time can learners handle?
  • How much real practice of each of these is required for real understanding and operational usage?
  • Where can we look for useful parallels that will help us think about our own relatively undeveloped area?
    • Music?
    • Sports?
    • Science?
    • Engineering?

To take the last first, we would (or I would) be very surprised to be able to prepare someone as a professional in 4 years of college if they started from scratch in any of the possible parallels listed above. To go to the really simplistic idea of “hours put in”, there just aren’t enough actual hours available per year (3 practice hours a day is about 1000 hours a year), and professional fluency in any of the above will require more than 4000 hours of practice from most learners. And it’s not just a question of hours. There are longitudinal requirements (time for certain ideas and skills to “sink in”) which probably represent real latencies in both the “notional” and physiological parts of learners’ minds.

A large number of those going into any of the four areas started learning, training, and practicing in childhood. And for those who try to start as a first year college student ….

a. This “problem” is “solved” for music partly by the existence of “pop music” much of which does not require deep fluency in music for participation. (And it is certainly not hard to see real parallels and the existence of “pop computing” in our culture.) Classical and jazz music simply require a lot more time and work.

b. The problem is solved for professional sports by excluding those not skilled enough (and even quite a few of those with skills, and who did start in childhood). The last census listed about 65,000 professional athletes in all US sports. This is a small job market.

c. The problem is solved for the hard sciences (and medicine) most often with extensive postgraduate learning, training and practicing (and by high thresholds at the end). Should we ask where those who, for one reason or another, didn’t make the cut wind up?

d. I don’t know what the engineering demographics are (but would like to). Engineering has always had a strong ad hoc nature (which is what allowed it to be invented and practiced long before mathematics and science were fully invented). Architecture is harder than bricklaying, so one could imagine many with engineering UG degrees winding up in technical companies in what would be essentially apprentice processes.

I’m guessing that this is where similar computer students with undergraduate degrees might wind up — essentially doing bricklaying in some corporate notion of architecture.

Both of these last two seem to me to be dead ends — but it would be good to have more than personal and anecdotal evidence. My own observations would generalize to “they don’t learn much that is good” in their undergraduate experience, and “they learn even less that is good” when on the job.

I think universities have a moral obligation to try to deal with the “they don’t learn much that is good” part of this problem. And doing this well enough could cause large useful and important changes in industry over the next decade or two.

If I were going to get started on this, I would try to put forth a very clear outline of the six aspects of computing I listed above, show how they work together — and try to sketch out what it actually takes to learn them for most college students.

In my thinking about this I keep on coming back — not to the problems of “coverage” over 4 years — but what seems to me to be the larger problem of getting in enough real practicing of the various kinds needed to actually ground the ideas into thoughtful and operational tools.

Best wishes,

Alan



24 Comments

  • 1. Andrey Fedorov  |  April 24, 2010 at 1:27 am

    I wish Alan had at least hinted at his partitioning of the ways “humans make and do things”.

    I can only think of it in terms of my undergrad curriculum (where I apparently “didn’t learn much that is good”). There’s math, which gives us many formal ways of thinking about things. That intersects with CS because most computers have trouble with the natural ambiguity of our minds. Then there’s the “Philosophy of Mathematics” or “CogSci”, which asks “what are we capable of thinking?”, or more precisely “what kind of thinking can machines do for us?”, and the practical side of that coin, which is what most of undergrad CS focuses on: hardware architecture, operating systems, networking, compiler theory, type theory, virtual machines, etc.

    The latter-most is the bricklaying, I get that. I suppose the compiler and language design might be architecture, and the philosophical perspectives might be “modeling”? How is modeling split up into three sub-divisions of its own? Does this even make sense?

  • 2. Andrey Fedorov  |  April 24, 2010 at 1:48 am

    Let me give this another shot:

    1 is programming: say, in Python, running in Ubuntu with a 2.6.1 kernel, on Intel hardware. That’s bricklaying.
    2 is architecture: designing a language, an OS, or a chip to run those on.
    3 is models of the two: different “shapes” of programmable systems and their interfaces to hardware.

    3.1 is the study of measurable properties of architectures: speed of abstractions, temperature of chips, big-O-efficiency of algorithms
    3.2 is mathematics: the study of formalized thoughts so that computers might be able to work with them
    3.3 is epistemology/philosophy of cognition: what is thinkable? what thoughts can computers help us with?

  • 3. Paul Gestwicki  |  April 24, 2010 at 11:00 am

    I took “architecture” to be allegorical, as with “bricklaying.” The primary tool of the architect is design thinking. A student who learns design thinking can apply this technique to any number of creative domains. Interpreting “architecture” thusly, the models become the patterns of successful designs. Just as in the discipline of architecture, a specific design solves a specific problem, but there are properties that can be abstracted and reused. This is demonstrated most explicitly by the patterns community and their inspiration from architecture, of course, but I think the lessons are deeper: as we see more non-computing areas in which design thinking takes place, I suspect we will find more insight into how to be better at making computing artifacts.

    The more I think about the problems of students’ levels of expertise, the more I come back to the need for apprenticeship learning, building directly on Lave and Wenger’s work on communities of practice. Merely getting undergraduate students to have the requisite number of hours of practice requires significant restructuring of an undergraduate education. Time spent talking about computing only helps students learn to talk about computing: time spent *doing* computing (i.e. engaging as legitimate peripheral participants in a community of practice) will help them build identity as computing knowledge workers. Honestly, I find myself mostly frustrated by the results from scholarship of teaching and learning, since their main results — such as communities of practice, intrinsic motivations, and the benefits of autonomous inquiry-based learning — are heretical to established structures of higher education, beyond my capacity to manipulate.

  • 4. Andrey Fedorov  |  April 24, 2010 at 11:34 am

    @Paul: So “bricklaying” is programming, and “architecture” is library/API/language/OS/hardware (aka system) design. Then what do you make of model-making, and the partitioning thereof?

    • 5. Paul Gestwicki  |  April 26, 2010 at 10:49 am

      Dovetailing on Alan’s response a bit, I think it’s important to see the metaphor as a thought-game, not as a formalization of the discipline. At least, that’s how I interpreted it. Take any aspect of computing, and one can identify the fundamental pieces (bricks) and the intentional combination of them to solve a problem (architecture).

      Thinking about HCI, for example, we have widget sets, languages for UI development, heuristics for usability, etc. These can be combined via design thinking to solve problems. Looking over the set of solutions, we can attempt to build models of success, but note that we’ve made an orthogonal shift of medium from computing artifact to models thereof.

  • 6. Alan Kay  |  April 24, 2010 at 12:57 pm

    I don’t equate “bricklaying” with “programming”, but with learning how to “connect” components at a given level with each other, and having some sense of what the combined artifact is and will do.

    Architecture has to do with organization of basic materials and their connective properties to serve a purpose. So bricks can be used to make walls, bridges, cathedrals, etc. The literal meaning of architecture is “to make arches”, and I think they picked this because walls were essentially making a big brick with bricks, but making an aqueduct or Pantheon was to organize bricks in a very non-obvious way.

    In this example, a model would be the thing you would simulate (could be physical or symbolic) that captures what’s important about the artifact in some more easily handle-able and understandable form.

    A model is itself an artifact, so it’s worth thinking about what its “bricks, architecture, and model” might be.

    I’ve found it very useful and illuminating to play this game with computing artifacts, including computers, programming languages, operating systems, applications, networks, etc.

    Cheers,

    Alan

  • 7. Paul Gestwicki  |  April 26, 2010 at 11:09 am

    I’ve been thinking about this excerpt:

    If I were going to get started on this, I would try to put forth a very clear outline of the six aspects of computing I listed above, show how they work together — and try to sketch out what it actually takes to learn them for most college students.

    I find your model to provide an intriguing thought game, but I don’t see it as being specific enough to apply directly to higher education reform. (I welcome further elucidation, of course!) I do agree fundamentally with your assertion that students don’t learn much that is good. I am a CS professor, for what it’s worth.

    What is your opinion of Alistair Cockburn’s Foundations for Software Engineering, specifically his model of software development as a cooperative game? I find this to be quite compelling and potentially providing a framework for rebuilding computing curricula. What he calls the “craft” of software development could be conceived as educational specializations, such as algorithm analysis, language design, project management, etc. This set of crafts is a superset of what is conventionally covered in computing curricula, and this might help deflect complaints of trading rigor for pragmatics.

    Regardless of the model, all we need for it to succeed is for an interdisciplinary group of educators to break down disciplinary silos, develop a fully integrated curriculum that incorporates computing into a liberal arts education, and find an administration that is willing to support it. 😉 In all seriousness, I look at projects like Stanford’s d-school as providing a graduate-level model for how this might be accomplished at an undergraduate level. If such an approach could work for the average undergraduate and not just highly-selective programs, we win.

  • 8. Alan Kay  |  April 26, 2010 at 2:16 pm

    I’m in the process of trying to make a short enough real example that will fit in this tiny blog window I have to type into…….**

    We (at Viewpoints) have been looking at several perspectives on this idea. For example, one could start with a purposeful artifact — like TCP/IP — and render it in terms of bricks of some kind + the organization of these to fulfill TCP/IP’s purpose. Typical versions of these are from a few thousand lines to about 20,000 lines of (say) C.

    However, in a suitable meta-language we can make a *running model of TCP/IP* in about 160 very readable lines of code (and we think its natural size is about 80 or so).**

    In this direction — where we have phenomena already, and our task is to make a much more compact, understandable and debuggable model — we are doing actual science (this is what science does and is).

    But we can also use the model idea to go in the other direction — to come up with an idea, make a runnable debuggable model of it — and then use this as a guide and reference (even sometimes as a kernel to generate from) for manifesting the (perhaps more highly optimized) artifact — here, we are doing invention and engineering.
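
    To give a concrete flavor of the “compact executable model” idea, here is a rough, hypothetical sketch in Python (not the actual VPRI/STEPS or OMeta code; the diagram fragment and sample values are invented for illustration) in which an RFC-style ASCII header diagram is treated directly as the executable description of a packet layout:

    # Hypothetical illustration, not actual VPRI/STEPS code: the diagram
    # itself is the layout description. In RFC-style diagrams each bit
    # occupies two characters, so a cell of width w between '|' separators
    # is (w + 1) // 2 bits wide.

    HEADER_DIAGRAM = """
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |          Source Port          |       Destination Port        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                        Sequence Number                        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    """

    def parse_diagram(diagram):
        """Turn the diagram into a list of (field name, width in bits)."""
        fields = []
        for line in diagram.splitlines():
            line = line.strip()
            if not line.startswith('|'):
                continue                      # skip blank and +-+-+ separator rows
            for cell in line.strip('|').split('|'):
                fields.append((cell.strip(), (len(cell) + 1) // 2))
        return fields

    def unpack(fields, data):
        """Decode a packet prefix into a dict by reading the fields bit by bit."""
        bits = ''.join(format(byte, '08b') for byte in data)
        result, pos = {}, 0
        for name, width in fields:
            result[name] = int(bits[pos:pos + width], 2)
            pos += width
        return result

    # Made-up sample: source port 80, destination port 54321, sequence 1000.
    sample = bytes([0, 80, 212, 49, 0, 0, 3, 232])
    print(unpack(parse_diagram(HEADER_DIAGRAM), sample))
    # {'Source Port': 80, 'Destination Port': 54321, 'Sequence Number': 1000}

    A different header is then just a different diagram handed to the same two functions, which gives a small taste of why such descriptions can be so much more compact than the flat artifact.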

    Part of the reason I wrote the original blog post was to draw readers’ attention to how little attention serious model building gets, in either direction, in CS (and in production) today.

    In order for this to work, the approach to modeling has to “be really interesting and powerful”, and various ways to do this will themselves have “interesting and powerful” bricks, architectures and models.

    —————————
    **We have been looking at large programs of hundreds of thousands to millions of lines of code and have made models that are from a factor of 100 to a factor of 10,000 smaller. Most of these are a few hundred lines of code — which is still a bit long for a blog post. However, there are some nice possibilities where the model is really tiny, and I’ll try to put one of these together when I get a little spare time.

    Cheers,

    Alan

    • 9. Mark Guzdial  |  April 27, 2010 at 8:58 pm

      Alan, feel free to send me a longer example as a post, and I’d be happy to share it here as such.
      Mark

  • 10. steve  |  April 27, 2010 at 8:52 pm

    I do like the music example as being not too simple. Though, as in music, there can be different levels of appreciation.

  • 11. Matt Glickman  |  April 28, 2010 at 1:40 pm

    I’m curious about what Alan means by students generally getting “not much that is good” in undergraduate computing education. My guess is that what he means by bad is students learning/memorizing/practicing ideas/material/skills without acquiring awareness of the full, relevant context (too much bricks and mortar, no arches in sight). The result is brittle ability, applicable in only limited circumstances, e.g. “Sorry, I don’t know Python. I only know Java.”

    What is good is for students to gain an appreciation of the context and purpose of what they are learning, to be able to reflect upon what they know as well as what they don’t know. The desired outcome is an ability to perform under varying conditions, to recognize when they’re at the edge of their own knowledge, and best of all, to have a sufficient basis to be able to pursue continued learning in a self-directed manner.

    Alan advocates teaching model-building as a (the?) core component of the CS curriculum (a great distillation of what CS is really about). More broadly, if we view model building as explicit abstraction and reflection, we might also see the value in CS education of emphasizing more implicit forms of reflection.

    Instead of teaching technical material as received wisdom carved in a tablet, one approach is to provide more of the historical narrative behind a given idea or invention, e.g. who came up with the first multitasking OS and why? What was their background/intellectual context, and what problem were they trying to solve? Perhaps more relevantly, teachers can relate their own experiences in creative problem solving, including ideas that didn’t work out along the way as well as models/metaphors that provided key insights. The central idea here is to use stories to illustrate the reflective processes underlying creative problem solving. (Roger Schank’s work on teaching through telling stories is likely most relevant here.)

    A related approach is to incorporate peer-teaching. Successfully explaining an idea necessitates reflection and abstraction, implicitly building a “verbal model” of a concept or process. Work in Jordan Pollack’s lab to best match and provide incentives for peer student/teacher pairs is relevant here.

    I may be echoing some of what Paul has raised above (e.g. communities of practice). Not being a CS prof, I can’t say I’m familiar with the difficulties of applying such ideas in practice.

  • […] do we hold students, especially non-majors?  I think that that’s Alan’s point in his recent guest blog post here.  When I read the USA Today piece, I get the sense that this teacher was really doing the right […]

  • 13. WORD  |  April 30, 2010 at 6:42 pm

    We who are not philosophers are more able to think and deduce what we feel.

  • 14. John "Z-Bo" Zabroski  |  May 3, 2010 at 6:31 pm

    Paul,

    I had a hard time understanding your questions posed to Alan, which is probably why you did not get such a great direct answer.

    What is it you really want to ask? I bet 1 in 500,000 developers know what Alistair Cockburn’s cooperative game theory of software development is about, and half of those actually understood Cockburn.

    All,
    As for real world examples besides TCP/IP, I’ve been working on something of my own using Ometa#. I have to profess to not understanding the point behind Ometa/JS at least as demonstrated by VPRI via examples, other than the cheesy argument that it is the web’s assembly language. What we don’t have right now is a way to convert between Ometa* variants. That’s true computing power, because that then says JavaScript is merely a format for describing mobile code, and we can recursively define all OMeta interpreters in terms of other self-described OMeta interpreters, in a few lines of code.

    • 15. Alan Kay  |  May 4, 2010 at 6:45 am

      Hi John,

      This is a very pertinent comment. First, please check out Alex Warth’s thesis on OMeta:

      tr2008003_experimenting.pdf (VPRI Technical Report TR-2008-003)

      This includes an “operational semantics” for OMeta.

      And, I think you realize that the OMeta/JS was done to make it easy for interested people to play with this approach to making universal translators. The versions of OMeta we use generally are grounded on some other basis (such as Smalltalk or our STEPS semantics that reaches down to the metal).

      And, Alex has bootstrapped OMeta (and earlier versions of OMeta) to many targets using itself as the “conversion vehicle” (just as you desire).

      In fact, OMeta started out as play with its remote ancestor Meta II by Val Schorre, one of my favorite “oldies but goodies” from 1964. It is defined in itself in a few lines and uses a byte coded interpreter as its underlying semantics (it was originally implemented on an 8K 1401). Alex wrote that by hand to get it running the first time and then started using it to make eventual large changes as he added many ideas of his own.

      Since Mark’s Blog is organized around ideas for teaching computing, we should try to think of what would be neat for (say) 2nd year students to learn here.

      I’ve always liked the idea of grounding practical semantics in something that is “barely a computer” (BAC), as Niklaus Wirth did for Euler in the mid-60s. In this is written a simple virtual machine, and the bootstrap is written in the VM. So one has to provide the correspondences with the BAC to do the initial bootstrap. And then more powerful tools can be brought to bear.

      I think of this as one of the “five finger exercises” that everyone should learn how to do, and be able to “just do it” whenever the need appears.

      One interesting approach that could be very good for students would be to make the VM part of this bootstrap a simple LISP, and to have that be the target of the meta-translator.

      What do you think? We are talking about a few hundred lines of code total for a bare bones bootstrap of something really powerful.
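
      As a bare-bones, hypothetical illustration of just the “simple LISP as the VM” half of that exercise (a sketch in Python, not VPRI code, and leaving the meta-translator out entirely), an s-expression reader and evaluator fits in a few dozen lines:

      # Hypothetical sketch, not VPRI code: a tiny s-expression reader and
      # evaluator, the sort of "simple LISP" a meta-translator could target.
      import operator

      def tokenize(src):
          return src.replace('(', ' ( ').replace(')', ' ) ').split()

      def read(tokens):
          """Read one expression from the front of the token list (consuming it)."""
          tok = tokens.pop(0)
          if tok == '(':
              expr = []
              while tokens[0] != ')':
                  expr.append(read(tokens))
              tokens.pop(0)                   # drop the ')'
              return expr
          try:
              return int(tok)
          except ValueError:
              return tok                      # a symbol

      GLOBAL_ENV = {'+': operator.add, '-': operator.sub,
                    '*': operator.mul, '<': operator.lt}

      def evaluate(expr, env=GLOBAL_ENV):
          if isinstance(expr, str):           # symbol lookup
              return env[expr]
          if isinstance(expr, int):           # number literal
              return expr
          head = expr[0]
          if head == 'if':                    # (if test then else)
              _, test, then, alt = expr
              return evaluate(then if evaluate(test, env) else alt, env)
          if head == 'define':                # (define name value)
              _, name, value = expr
              env[name] = evaluate(value, env)
              return env[name]
          if head == 'lambda':                # (lambda (params) body)
              _, params, body = expr
              return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
          fn, *args = [evaluate(e, env) for e in expr]   # application
          return fn(*args)

      evaluate(read(tokenize(
          '(define fact (lambda (n) (if (< n 2) 1 (* n (fact (- n 1))))))')))
      print(evaluate(read(tokenize('(fact 10)'))))       # -> 3628800

      Getting a meta-translator to emit expressions for a little evaluator like this, starting from something “barely a computer”, would then be the rest of the exercise.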

      Cheers,

      Alan

      • 16. Matt Glickman  |  May 4, 2010 at 3:33 pm

        Most people like those reading this blog would likely get the general lesson from this exercise, as well as just groove on its intrinsic coolness.

        However, I’m concerned that for a broad range of students to be able to “‘just do it’ whenever the need appears”, it’s probably critical to teach how to recognize this need, i.e. when a problem warrants bootstrapping yourself to a higher level of representation. In the particular case of bootstrapping from a BAC to a meta-language, the value is clear, but most students won’t expect to find themselves stuck working with a BAC outside of school.

        Something like the compilable-ASCII-diagram TCP example may have more practical relevance for them. To illustrate the value here, students might be asked to code TCP up both in classical “flat” form and by implementing a meta-language.

        At first, while perhaps requiring fewer actual lines of code, the meta-language version will probably be much more work. However, when asked to use both versions to implement a series of protocol variants, they should very quickly see their meta-language effort amortized across the variant implementations.

        Am I shooting too low by suggesting that the lesson is that the right higher-level representation can save you work? Perhaps a better lesson to get across is that the right higher-level representation can help you find ideas or solutions you wouldn’t have found otherwise (a point Herb Simon used to emphasize). Any others?

      • 17. John "Z-Bo" Zabroski  |  May 6, 2010 at 3:37 pm

        I honestly didn’t pay attention to the fact that Alex bootstrapped OMeta using Meta II. I must’ve been speed-reading past the factoid on page 59 of his thesis that mentions the bootstrapping process. Interestingly, Alex says on that same page that Meta II doesn’t support backtracking, but Alex and Ian have said not too long ago that they find backtracking unnecessary in many cases!

        The second point, about having to pack and unpack terms to determine what rule to apply, is the big missing feature from Meta II. It makes using other grammars for foreign rule invocation handling baroque and tedious.

        As for OMeta/JS making things funner to play with, I am not sure I agree. It is not quite there yet. I find the website more annoying than helpful, and you still need a killer app. I have two ideas for killer apps, both of which piggyback on Ian’s TCP/IP example and both of which can feed into each other recursively.

      • 18. John "Z-Bo" Zabroski  |  May 6, 2010 at 3:52 pm

        Let me qualify why OMeta/JS isn’t what you think it is: easy to get started with.

        1) The website is designed by an engineer; there are no clues as to what does what, and what you are supposed to do to get things going.
        2) OMeta/JS is documented in terms of OMeta/COLA, which is frustrating because now you are reading documentation that compares two things you may have never used before (higher barrier for entry)
        3) OMeta/JS doesn’t really have any cool examples that are friendly to play with, and there is no IDE to visualize the application of rule productions against sample data. There are lots and lots of papers on the benefits of using, for example, graph grammars to specify visual languages.

        Just 2 cents. You are free to disagree, but you would need five people in a hallway to agree with you for me to care. It’s one thing to have intent; but it’s another thing to measure the feedback loop and make sure the intent is clear. What are you doing to ensure you’re actually making things fun and easier to pick up? How do you go about getting feedback on your project?

        For students to learn this kind of stuff, I am not sure I am the best person to ask. I like to have fun when I learn, and most people just care about grades and view puzzles as obstacles with unclear solution paths to accomplishing their goals. “He who teaches himself hath a fool for a master,” of course!

        I am a strong believer in studying at least one bootstrap, end-to-end. The first one I studied was the Warren Abstract Machine, in a book that described it plainly using stack manipulation to model a heap. Like the Lisp bootstrap tradition, it is somewhat distasteful and doesn’t focus on tradeoffs much, just focuses on how to do a bootstrap, not why you should do it that way or what you might want to do with the bootstrap once you have it. Lisp in Small Pieces is like that; it’s more about a cute way to teach things that bends the mind than having fun in exploring design trade-offs. In other words, it is not really about truly building models.

  • 19. Paul Gestwicki  |  May 4, 2010 at 3:20 pm

    In response to John: I have a slightly better understanding now of what Alan meant after his post of Apr 26. I am still mostly unfamiliar with this line of work, but it seems like a great way of framing the modeling and scientific aspects of computer science. I am left with the question of how this would apply to higher education.

    Looking at Alan’s original suggestion for higher education reform, we could take his six-part taxonomy of computing, determine what it takes for students to become competent at these, and design a curriculum accordingly. However, I wonder if the taxonomy is comprehensive of computing. Where does a skill such as “critical evaluation” fit in? This is one of the skills students should acquire, according to CS2008. I’m genuinely not sure if this is something that Alan references as a “brick”—a base-level skill that can be treated like a component—or if it is at a different level, or if it is something different altogether.

    (I’ve been reading some of Cockburn’s work recently, so it was fresh in my mind. One positive aspect of his craft-oriented model of software development is that for curriculum design, I can call “critical evaluation” a skill and design learning experiences around it. The downside of his model is that I can name *a lot* of skills, and at some point I end up with a list like in CS2008 and an overwhelmed undergraduate curriculum committee. The conventional translation of outcomes to lectures-and-assignments also seems contrary to the scholarship on communities of practice and the science of learning, but that’s a topic for another post.)

  • 20. Alan Kay  |  May 4, 2010 at 8:30 pm

    Hi Matt,

    Yep, and this is why Alex kindly provided everyone with a number of examples of bootstrapping to an existing language (in this case Javascript), including doing a JS in OMeta that can extend JS live by writing a new kind of “meta-procedure”.

    But, in various comments in this blog I’ve argued that “outlook” is the most important of the triple (outlook, knowledge, “IQ”), and there is valuable outlook to be gained through elementary skills in learning to “make anything you need from whatever they give you”.

    I think you are right that part of the outlooks to be gained here have to do with “picking good representations to think with”.

    The other part has to do with being able to do what is needed at the level it is needed — for example, one of the many grievous sins committed in the browser is the unnecessary prevention of outside fast running low level codes (perhaps for a better graphics system or a massively parallel particle system, etc.) even though they can be completely confined.

    One of the motivations Don Knuth had for writing his books on programming was to give their readers “principles and recipes” (he used a quote from McCall’s cookbook as a metaphor). His examples don’t use all the optimizations one might use in a production system; instead, Don presents a wide variety of styles on a wide range of goals.

    I think a small important subset would make up the “five finger exercises” that need to be part of the ready vocabulary of every programmer and computer scientist.

    Cheers,

    Alan

  • 21. Alan Kay  |  May 5, 2010 at 6:07 am

    Hi Paul,

    I could readily imagine many more distinctions and categories being added. This is why I was wondering if “simple is too simple”.

    But, as in the natural sciences — which heavily convolve “critical evaluation” with new kinds of epistemological outlooks — I would group these kinds of activities under what it takes to make and evaluate the models.

    And the current very weak emphasis on the kinds of modeling I’m referring to is why I put forth the perhaps too simple trios of processes in the first place.

    Cheers,

    Alan

  • 22. Alan Kay  |  May 6, 2010 at 4:05 pm

    To John Z

    Alex made the OMeta/JS site for a class that he, I, and Todd Millstein taught at UCLA a few years ago. Those students had context.

    And you are so right that this is not easy to get started with if you weren’t in that class or had not looked at any of the other documents.

    I think that both the LOGO (on the simple side) and the Prolog (on the subtle side) examples are good ones to play with. The OMeta in itself is perhaps a little large (even though it is quite small via other comparisons).

    I also like the idea of studying a bootstrap from end to end, and we have not had the time since OMeta was made to make a nice pedagogical example (but we should).

    Cheers,

    Alan

  • 23. John "Z-Bo" Zabroski  |  May 6, 2010 at 5:45 pm

    Well, you need an issue tracker and a way for people to contribute back. As I said, you appear to have no monitoring of the feedback loop. I am not sure what use OMeta is in general without a feedback loop. Just using OMeta to tune itself is not interesting enough, and becomes navel gazing at some point, and VPRI needs to see above the parapet and implement some community involvement bootstrapping logic. “(but we should)” is missing the issue entirely. It is not constructivist if there ultimately must be the VPRI puppet master.

    Just 2 cents.

    [I wasn’t aware you ever taught classes; cool, just keep it up. What I learned from Stephen Dewey is that his biggest inspirations in studying the human brain actually came from questions kids would ask him when he gave presentations about scientific research using real examples from his work. They’d ask him a bunch of WHAT IF questions he’d never thought of before, and he’d go and write a research grant proposal whenever his answer to a kid was “I don’t know”. Adults are way too into the details to ask the same material you’d put in a “Questions Kids Ask” book. Only philosophers and poets ask those kinds of questions, but a philosopher and a poet are just two kinds of adults that never grew up.]

  • 24. 2010 in review from Wordpress « Computing Education Blog  |  January 2, 2011 at 3:18 pm

    […] Alan Kay on Hoping That “Simple” is not “Too Simple” April 2010 23 comments 5 […]

