K-12 CS Framework: Best we can expect. Maybe as good as we need.

October 18, 2016 at 7:01 am 11 comments

The CS K-12 Framework was released Monday.  This has been an 11-month-long process — see my first blog post about the framework, my post on the process, and the post after my last meeting with the writers as an advisor.  The whole framework can be found here, and a video about the framework can be found here:

A webinar about the Framework will be held on Wednesday, October 19, at 12 PM Pacific / 3 PM Eastern. Visit https://www.youtube.com/watch?v=wmxyZ1DFBwk for more details and to watch the webinar on the 19th.

I believe that this framework is about as good as we can expect right now.  Pat Yongpradit did an amazing job engaging a broad range of voices in a short time.  The short time frame was forced on the process by the state policymakers who wanted a framework, something on which they could hang their state standards and curricula.  The NGSS veterans did warn us what could happen if we got it wrong, if we went too fast.  Maybe the framework process didn’t go too fast.

The framework document is impressive — comprehensive, carefully constructed, with a rich set of citations.  It’s teacher-centric, which may not be the best for a document to inform state standards, but that’s the constituency with the strongest voice in CS Ed today.  There are too few CS Ed informed policymakers or district administrators to push back on things that might not work. The CS Ed researchers are too few and too uncertain to have a strong voice in the process.  Computer scientists (both professional and academic) generally ignored the process. The CS teachers had the greatest political influence.

I predicted in January that this would be a “safe list,” a “subset of CS that most people can agree to.”  I was wrong. There’s a lot in there that I don’t see as being about computation.  Like “Create team norms, expectations, and equitable workloads to increase efficiency and effectiveness” — that’s a high school computing recommendation?  Like “Include the unique perspectives of others and reflect on one’s own perspectives when designing and developing computational products” — you can achieve that in high school?

Those “aspirational” statements (Pat’s word) mean that the writers went beyond defining a consensus document.  They tried to push future CS education in the ways that they felt were important.  Time will tell if they got it right.  The framework fails if schools (especially under-resourced schools) decide that it’s too hard and give up, meaning that underprivileged kids will continue to get no CS education.  If teachers and administrators work harder to provide more and better CS education because of this document, then the framework writers win.

This is an important document that will have a large influence.  Literally, millions of schoolchildren in several states are going to have their CS education defined by this document.

Typing that statement gives me such a sinking feeling because we just don’t have the research evidence to support what’s in the framework.

When I went to meetings, I too often heard, “Of course, teachers and students can do this, because it works in my program.”  So few computing education programs (e.g., packages of curriculum, professional development, assessment, and all the things teachers need like pacing guides and standards crosswalks) have scaled yet in diverse populations.  Maybe it works in your program.  But will it work when it’s not your program anymore?  When it’s a national program? When states and districts take it over and make it their own?  Will it still work?

And we want schools and districts to make things their own.  That’s at the heart of the American educational system — we’re distributed and diverse, with thousands of experiments going on at once.  I worry about how little knowledge about computing and computing education is out there, as guidance when schools and districts make it their own.

So, yeah, I’m one of those uncertain researchers, mumbling in the corner of this process, worrying, “This could go so wrong.”  Maybe it won’t.  Maybe this will be the first step towards providing a computing education for everyone.

The die is cast. Let’s see what happens.




11 Comments

  • 1. Alfred Thompson (@alfredtwo)  |  October 18, 2016 at 7:48 am

    The speed of development concerns me (and has from the beginning). The CS 2013 ACM/IEEE curriculum recommendations took 3 years and involved even more people, at least in the reviewing phases. And for CS 2013 we (I was on the steering committee of CS 2013) had several previous versions and some history of how to develop the report to help. K-12 is really a first effort.

    Sure we have had some standards in the past but they were mostly local – state-wide for the most part. The CSTA standards, while a good effort, probably suffer from too few people and too little research behind them. As you suggest in your post most of the people involved came from good programs in good schools with the assumption, unproven, that what they do scales.

    But we have to start somewhere and I think what we have is as good as we’ll get today. We can’t really wait for perfect or we’ll not even have a chance at good.

    Note: I was part of the K-12 CS writing team but am writing for myself, not the project.

  • 2. alanone1  |  October 18, 2016 at 8:07 am

    Hmmm, let’s see: Democracy meets STEM content …

    One of my favorites from the past was a state law declaring that pi would be 3 1/7 (there was another movement to make it be 3 to be in accord with the Bible).

    My main agreement with Mark is that Pat Yongpradit is really amazing and deserves high praise.

    Cutting to the chase here … I don’t think my (really low) opinion of this framework and the process used to get the framework is important.

    The two things that do seem really important to me are (a) the lack of a reasonable “exhibit” of any kind of an above threshold compendium of what computing should be about for learners of the future, and (b) a means to deploy such knowledge — if we had it — in ways that don’t require the kinds of teachers who are really needed (and also so that the generally under-knowledged teachers around are not the determining factors of how any field should be defined for learners).

    For certain kinds of subjects and certain kinds of learners, well-written -books- by those both expert in the subjects and at writing to help learners, fulfilled a lot of both (a) and (b).

    A supreme irony, which our not-quite-a-field really needs to do something about, is the qualitative successor to the “book that can be learned from” in the form of dynamic media that can both provide many more perspectives and also provide active guidance, instruction, and real question answering when needed.

    This is a tough problem, and there are lots of good reasons why it didn’t happen the first few times it was called for, starting more than 50 years ago. However, I don’t think there are any HW-SW limitations today: just limitations of visions, goals, research, funding and will power.

    The printed book solved part of the problem of “not enough Socrates’ to go around”. It’s time to solve the next part* of that problem with the next qualitative medium after the book.

    *For the un-careful reader, I’m obviously not saying we can replace all of great teachers, but I am asserting strongly that printed books made an enormous difference because they scaled where great teachers didn’t, to provide a different but fruitful new learning ecology (that changed our entire world). We need the next big stage of this.

  • 3. David Young  |  October 18, 2016 at 11:05 pm


    The late Jef Raskin had a vision of what I have come to call “in-situ computation,” where you’re performing computation on and within the documents you write. There are no silos for drawings, code, text, calculation, web browsing, email.

    Back in the 1980s, Jef had a little company that made a computer that embodied some of his ideas. The computer was sold as the Canon Cat. I would be surprised if you do not know all about the Cat, already, but if you haven’t laid hands on one, do: I think you will get a kick out of it!

    It has always seemed to me that in Jef’s vision, anything you could see on the computer screen, you could interrogate, distill, and transform. (Seems like he said so in his book.) Do you think that marrying the web with in-situ computation jibes with your idea of dynamic media?


    • 4. alanone1  |  October 19, 2016 at 12:15 am

      Hi Dave

      Not that “everything was done earlier at Xerox Parc” but this was just how the Smalltalk system there did its thing in the 70s (this is one of the many reasons why real objects are a good idea! because they can be mixed and matched in media, and you don’t need or want separate isolated apps).

      Last year — for Ted Nelson’s 70th birthday celebration — we resurrected a rescued old disk pack that had the NoteTaker Smalltalk on it from 1978, and I used it to do all the presentation media for my short talk. This is on YouTube, and you can see what things were like about a year before Steve Jobs (and Jef Raskin) saw this system in 1979.

      However, in my comment here, I was referring to something we wanted, but were unable to do at the time, even with great AI people like Terry Winograd, Danny Bobrow, Ira Goldstein, John Seely Brown, Richard Burton, etc.

      And that was a truly inter-active interface (not just a good reactive one), especially to augment the learning processes where the learner needs feedback from an observer, answers to questions, and some general guidance through subject matter.

      My call in the previous comment was that it has been possible to actually do this for the last few years, and we need to make real progress on this right now.

      • 5. David Young  |  October 20, 2016 at 10:40 am

        Alan, I’m not sure if I understand your remarks about “interactive” versus “reactive.” Is “reactive” roughly the same as direct manipulation? And interactive is a conversation?

        It seems to me that conversing with a computer remains just as frustrating and non-illuminating as always, whether it’s Siri or a command line. I think that you probably have something else in mind?

        Thanks for sharing the video. I wondered how the system seemingly put arbitrary objects into communication with each other, or if that is even a sensible question to ask.

        It has always seemed to me that you have a more expansive concept of “object” than the industry does. I seem to recall that you have written elsewhere that the idea of classes is not essential to your idea of object-orientation. When a CS curriculum teaches that classes are like this, “class Z { … data … methods … }”, and objects instantiate classes, are they in the weeds? What would you teach about objects, instead?

        • 6. alanone1  |  October 20, 2016 at 1:29 pm

          It’s a question of initiative — “reactive” is where the initiative is mostly by the end-user, “interactive” is “mixed-initiative”, where the system will sometimes take the lead. (Note: the question is not whether “X” is bad now, but whether it is a real need — there were poor reactive systems around before we did the Parc one that was above enough thresholds to be generally useful.)

          And, yes, some of the many things that have to be found out about and invented are the interaction and relationship modes that will work well and can be sustained over time. (The reactive interface required lots of research and trial systems and experiments in order to be successfully developed.)

          “Real Objects” are semantically like complete little computers on a network. In the example on the video, each “place” that seems to be a desktop/page is called a “project”, and Smalltalk can manifest an unlimited number of them.

          Any object in the Smalltalk universe can be manifested on any of the projects (think of each object having a URL). A project can act like a local name space, and objects can have local names.

          You can see these local names in the label on the windows “bouncing” at the top, and “painter” on the bottom. I should mention here that “everything is a ‘view’ in Smalltalk”, some show borders and some don’t (this was quite misunderstood by Apple and other early audiences for demos like these).

          If you went to full screen on the video (and I hope that you did), you can see the message

          “painter picture <- bouncing currentFrame"

          This allows the "painter" to paint on that frame in the animation while the animation object is running through all the frames of the animation.
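          The sharing at work here — one object asking another for its current frame, then handing that same frame to a third object — can be sketched roughly in Python. This is not real Smalltalk; the names (`Animation`, `Painter`) and methods are hypothetical illustrations of the "objects as little computers exchanging messages" idea, not the Parc system's actual API:

          ```python
          class Animation:
              """Holds a sequence of frames and a position within it."""
              def __init__(self, frames):
                  self.frames = frames
                  self.index = 0

              def current_frame(self):
                  # Answer the frame the animation is showing right now.
                  return self.frames[self.index]

              def step(self):
                  # Advance to the next frame, wrapping around at the end.
                  self.index = (self.index + 1) % len(self.frames)

          class Painter:
              """Paints onto whatever picture it has been handed."""
              def __init__(self):
                  self.picture = None

              def set_picture(self, picture):
                  self.picture = picture

              def paint(self, mark):
                  self.picture.append(mark)

          # Rough analogue of "painter picture <- bouncing currentFrame":
          # ask the running animation for its current frame, then hand that
          # same frame object to the painter, which can now draw on it.
          bouncing = Animation(frames=[[], [], []])
          painter = Painter()
          painter.set_picture(bouncing.current_frame())
          painter.paint("x")
          # Both objects now share the same frame object (not a copy),
          # so the painter's mark appears inside the running animation.
          ```

          The point of the sketch is only the aliasing: because the frame is passed as a reference to a live object rather than copied, the painter's changes show up in the animation while it runs.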

          As I've said a few times over the last 35 years or so "I made up the term 'object-oriented'* and I did not have C++ in mind!".

          *50 years ago next month

          The original meaning of "object" (admittedly a bad choice of term on my part, for a number of reasons) got "colonized" in the 80s in a kind of "designer jeans" fashion. A much more proper term for both languages and style of use is "Abstract Data Type" languages and programming.

          C++ was specifically meant to be very like Simula (which for its time cannot be too highly praised) which was done as a preprocessor to Algol, and Stroustrup specifically imitated this with C++ as a preprocessor to C. Bjarne specifically said in the first C++ document that he wasn't trying to do what Smalltalk had already done for 10 years, but was trying to do what Simula did earlier.

          However, since the pop-culture of computing doesn't care about such details, the terms "object" and "object-oriented" now refer to something quite different — and it's left us without a term to label what did get done at Parc. (We sometimes use "real objects" or "dynamic objects", etc.)

          Still, a nice little demo like the one in the video comes as quite a surprise to most computer people today.

          Your very last question is even a bit further out of the scope of this blog than the out-of-scope, too-long answer I've just given.

  • 7. Gary Stager  |  October 19, 2016 at 4:57 am


    I share lots of your skepticism, but for lots of different reasons. I worry about trusting the very same community who presided over the death of programming in schools with its renaissance.

    Your comments about the lofty project-management and collaboration goals miss the k-12 tradition of substituting affective, content-free “skills” for actually developing fluency. These goals sound all kumbaya and 21st century and can be “taught” with sock puppets. You will not need any of those pesky computers or teachers who know how to program.

    This is a familiar hustle in the k-12 standards game.

    I’ll just keep up my work teaching thousands of teachers each year to teach programming to kids without a framework or NSF grant.

    • 8. Mark Guzdial  |  October 19, 2016 at 11:38 am

      The framework isn’t about our work, Gary. It’s about the work after we’re gone and in the places where we can’t get.

      We’re not going to reach “CS for All” in my lifetime (and since you’re about the same age as me, in our lifetimes). I worry about creating the systemic and sustainable structures that might lead to “CS for All” in 50 years. This framework is going to influence that. Most of the US teachers that you are teaching to program will be in states whose standards will be based on this framework. You can teach them one thing. Their success will be determined by this framework.

      I’m after universal computational literacy, in the diSessa and Papert sense. I want people to use computing as a way of expressing ideas, modeling and simulating those ideas, and learning, as Alan Kay has promoted. Pat wanted this Framework to represent those goals, but I think it’s more about Silicon Valley goals. Why do you teach every high school student to debug team processes in computing class, except to prepare them for software development teams?

      The framework doesn’t impact us directly. It will impact those who come after us.

      • 9. gasstationwithoutpumps  |  October 19, 2016 at 5:55 pm

        What makes you think that the CS framework will last anywhere near 50 years? It is unlikely to last more than 5–10 years, especially if it is framed around currently fashionable educrat topics. These things go through fads. Even when the concepts are stable, the educatese vocabulary is constantly shifting.

        • 10. Mark Guzdial  |  October 19, 2016 at 7:43 pm

          Completely agreed! But every other framework will be a “revision” of this one. The starting place anchors the trajectory for decades.

  • 11. Sony kashyap  |  October 20, 2016 at 9:19 am

    This blog very helpful and great post so pls keep continue postings.

