Computing Education Lessons Learned from the 2010’s: What I Got Wrong

January 13, 2020 at 7:00 am

There’s a trend on Twitter over the last few weeks where people (especially the academics I follow) tweet about their accomplishments over the last 10 years. They write about the number of papers published, the number of PhD students graduated, and the amount of grant money they received. It’s a nice reflective activity which highlights many great things that have happened in the 2010’s.

I started this blog in June 2009, so most of it has been written in the 2010’s. The most interesting thing I find in looking back is what I got wrong. There were lots of things that I thought were true, ideas that I worked on, but I later realized were wrong. Since I use this blog as a thinking space, it’s a sign of learning that I now realize that some of that thinking was wrong. And for better or worse, here’s a permanent Internet record.

There are the easy ones — the ones I’ve been able to identify in blog posts as mistakes. There was the time I said Stanford was switching from Java to JavaScript. I should have fought for more CS in the K-12 CS Framework. And I should have been saying “multi-lingual” instead of “language independent” for years. And there was the blog post where I just listed the organizational mistakes I’d made.

The more interesting mistakes are the ones that are more subtle (at least to me), that took me years to figure out, and that maybe I’m still figuring out:

Creating pre-service CS teacher programs would be easy. I thought that we could create programs to develop more pre-service computer science teachers. We just needed the will to do it. You can find posts from me talking about this from 2010 and from 2015. I now realize that this is so hard that it’s unlikely to happen in most US states. My Blog@CACM post this month is about me getting schooled by a group of education faculty in December. We are much more likely to integrate CS into mathematics or science teacher programs than to have standalone CS teacher professional development — and even that will require an enormous effort.

CS for All is about Access. I used to think that the barrier to more students taking CS was getting CS classes into high schools. You can find me complaining about how there were too few high school CS classes in 2016. I really bought into the goal of CS10K (as I talked about in 2014). By 2018, I realized that there was a difference between access and participation. But now we have Miranda Parker’s dissertation and we know that the problem is much deeper than just having teachers and classes. Even if you have classes, you might not get students taking them, or it may just be more of the same kinds of students (as the Roehampton Report has shown us). Diverse participation is really hard.

Constructionism is the way to use computing in education. I grew up as a constructionist, both as a “technically precocious boy” and as a researcher. Seymour Papert wrote me a letter of recommendation when I graduated with my PhD. My post on constructionism is still one of the most-read. In 2011, I thought that the One Laptop Per Child project would work. I read Morgan Ames’ The Charisma Machine, and it’s pretty clear that it didn’t.

The idea of building as a way of learning makes sense. It’s at the heart of Janet Kolodner’s Learning by Design, Yasmin Kafai’s work, Scratch, and lots of other successful approaches. But if you read Seymour carefully, you’ll see that his vision is mostly about learning mathematics and coding, by teaching yourself to code. That only goes so far. It doesn’t include everyone, and in the worst implementations of his vision, it leaves out teachers.

I was in a design meeting once with Seymour, where he was arguing for making a new Logo implementation much more complicated. “Teachers will hate it!” several of us argued. “But some students will love it,” he countered. Seymour cared about the students who would seek out technical understanding, without (or in spite of) teachers, as he did.

Constructionism in the Mindstorms sense only works for a small percentage of students, which is what Ames’ story tells us. Some students do want to understand the computer soup-to-nuts, and that’s great, and it’s worthwhile making that work for as many students as possible. But I believe that it still won’t be many students. Students care about lots of other things (from business to design, from history to geography) that don’t easily map to a focus on code and mathematics. I still believe in the value of having students program for learning lots of different things, but I’m no longer convinced that the “hard fun” of Logo is the most useful or productive path for using the power of computing for learning. I am less interested in making things for just a few precocious students, especially if teachers hate it. I believe in making things with teachers.

The trick is to define Computational Thinking. Then there’s Computational Thinking. I thought that the problem was that we didn’t have a clear definition. If we had that, we could do studies in order to measure the value (if any) of CT. I blogged about definitions of it in 2011, in 2012, in 2016, and in 2019. I’ve written and lectured on Computational Thinking. The paper I wrote last Fall with Alan Kay, Cathie Norris, and Elliot Soloway may be the last that I will write on CT. I realized that CT is just not that interesting as a research topic (especially with no well-accepted definition) compared to the challenge of designing computation for better thinking. We can try to teach everyone about computational thinking, but that won’t get as far as improving the computing to help everyone’s thinking. Fix the environment, not the people.

But I could be wrong on that, too.


42 Comments

  • 1. Colin Potts  |  January 13, 2020 at 7:28 am

    Great, reflective post, Mark. We need more of those. (I was amused that the recent Chronicle of Higher Education’s article on the top 100 educational technology failures of the 2010s highlighted just about every fad promoted by certain people we both know, and that it prompted not a peep from them – not even the protest that it was an anti-technology diatribe, which would not be totally unfair. You, in contrast, want to look at the facts unflinchingly, scholar that you are.)

    I was struck by your conclusion that computational thinking was really not the interesting topic – as opposed to computation for thinking – that you had once believed. You’ll recall that when we introduced CS for everyone at Georgia Tech way back in the mid- to late-90s, we used Kay’s argument that CT was the new liberal art, that the algorithmics of discrete processes would be to the 21st century what calculus had been to the 18th and 19th: a formal language to describe and understand change, and therefore a necessary formal bedrock for the social and life scientists whom mathematics’ analytic methods had largely failed to serve. And yet we have both seen at least one of GT’s three introductory CS courses morph into little more than a coding boot camp. Having sat for seven years on the state’s general education council, I can say that that trend is far from unusual, and you probably have an even broader perspective that backs this up. This is very unlike the state of math or physics education, which continue to have for their rationales much more general goals to do with understanding the workings of the world and formal reasoning.

    So, with the original justification for CS for everyone now on very shaky epistemological grounds, and the current state of affairs concentrating on possibly ephemeral technical skills (digital drivers ed for Gen Z), where does that leave us? Other than fueling a job market that may end up being a 1990s-like bubble, albeit one that may deflate over the 2025-2040 horizon rather than pop suddenly, is there a general education justification for teaching CS at all – that is, to those who are not future specialists? I suspect that there is, but you’ve caused me to question my faith in the Kay analogy.

    • 2. Mark Guzdial  |  January 13, 2020 at 1:18 pm

      Thanks, Colin! I’m glad you like the post, and I appreciate you engaging through your comment.

      Alan Kay made some of those arguments that you describe (e.g., that it would be a formal language to describe and understand change), but I think more about “programming” and “computer science” than CT. Jeanette Wing made more of the CT arguments.

      I still believe in the power of programming and believe that it’s valuable enough to teach everyone, in higher-ed at least, for all of Alan’s reasons and more. (I have a book chapter in press on this topic.) But programming has to change in order to make the cost-benefit ratio work well. Most programming languages were written to ease the tasks of professionals. They are neither easy to learn, nor are they optimized to help with the tasks of learners. Programming doesn’t have to be MATLAB or Python or C++. We can invent programming that provides the benefits of programming, without the cost of complicated syntax and semantics.

      I still believe in the power of the computing requirement at Georgia Tech. I just taught U. Michigan’s Engr 101, roughly equivalent to Georgia Tech’s CS1371. (650+ students, 265 in my section.) The first 8 weeks were on MATLAB. I was really impressed! All the problems were grounded in real Engineering problems. MATLAB was emphasized as a tool that all Engineers could use. For example, the biggest chunk of the class was on data analysis, simulation, and using all of MATLAB’s powerful graphing capabilities, which far outstrip tools like Excel. It really felt useful to Engineers, and the data that I saw suggest that students saw it that way. I can’t speak to classes becoming coding bootcamps, but I came away with renewed faith that these classes can be powerful and useful.

      But this is obviously a bigger discussion. I’ll be on-campus in May for Jenny’s graduation — perhaps we can chat then?

      • 3. Colin Potts  |  January 13, 2020 at 8:36 pm

        Yes, let’s catch up in person in May. I’m sorry that I missed you at the hooding in December.

  • 4. Robert R Gotwals Jr.  |  January 13, 2020 at 9:41 am

    For the past 3+ years I’ve been screaming (into a void, it seems): what happened to computational SCIENCE, a well-defined and well-established scientific discipline, considered to be the third leg of modern scientific research (along with experiment and theory)? Weintrop’s taxonomy, in my 40+ years of doing computational science education, does a great job of capturing those things that we’d want kids to be able to do in terms of computing in the sciences. I am a huge supporter and advocate for COMPUTER science education, but with computational science, we have a motivation that kids can see as to why computing (and mathematics) is important: we want to solve challenging scientific problems, like global climate change, protein folding, all of those problems defined as Grand Challenge problems. I am, by the way, a computational chemist by training, and our advanced STEM high school offers 11 courses in the computational sciences, including new courses in digital humanities and data science for scientists. Perhaps some intrepid researcher might want to study how WE are integrating computational SCIENCE into our high school program, with both stand-alone courses AND as integrated into existing courses, including AP courses.

    • 5. Mark Guzdial  |  January 13, 2020 at 1:07 pm

      Robert, there are way more questions to ask about computing education than there are researchers. The International Computing Education Research (ICER) conference has never had more than 200 attendees in any year. I agree with you that computational science in schools is very interesting. There are people working in this area (e.g., Bootstrap, Irene Lee’s Project GUTS). If you want to get research into your classroom, you’ll have to recruit researchers and raise funding. There are few researchers, and they don’t have excess capacity to explore new questions.

      • 6. Robert Radcliffe Gotwals Jr.  |  January 13, 2020 at 2:03 pm

        Don’t get me wrong, I’m not trolling for researchers or research funding. The question still is: what happened to discussions of computational science? As best as I can tell from reading a variety of articles, I’ve seen it mentioned maybe four or five times (out of at least 100 articles that I have read).

        • 7. gasstationwithoutpumps  |  January 13, 2020 at 3:14 pm

          Computational science is still going strong, in the sciences where it matters (computational chemistry, computational physics, bioinformatics, applied mathematics, …). It isn’t big in computer science circles, because it doesn’t involve much computer science (a little parallel algorithms, a little computer architecture).

          • 8. Robert Radcliffe Gotwals Jr.  |  January 13, 2020 at 3:55 pm

            It might be going strong at the university level, but I don’t think that’s true at the K-12 level. I had NSF money for years, working with teachers, and we made very little impact. I think it’s especially difficult now, given the demands of high-stakes testing: scientific computing is not on the radarscope of ANY relevant high school exams. APs still use calculators, for heaven’s sake, which some people consider to be computing devices, but most computationalists don’t.

            • 9. gasstationwithoutpumps  |  January 13, 2020 at 6:33 pm

              True—computational science has never caught on at the K–12 level. Part of the problem is that much of computational science relies on calculus, and once the calculus is removed the remaining calculation is not very helpful for learning the subject.

              • 10. Robert Radcliffe Gotwals Jr.  |  January 13, 2020 at 8:00 pm

                I couldn’t disagree more…there is a LOT of computational X that can be done that does NOT require a working knowledge of how to solve calculus-based problems. As before, I’m primarily a computational chemist, and my kids use research-grade quantum chemistry software programs to do very high-end stuff. In bioinformatics, a lot of R and python, a lot of use of bioinformatics databases. We use programs such as STELLA and VenSim, which solve integrals, but have a very visual interface in which we can make the calculus explicit, or, in the case of elementary and middle school kids, not mention it at all. While my computational science research kids have all completed calc (most are taking multi or graph theory), probably 90% of the 1000 or more kids who have come through my program have NOT taken calc, or are in the beginning stages of doing so. Tools like NetLogo out of Northwestern are great platforms for doing really cool scientific computing and model building.

                • 11. Kathi Fisler  |  January 13, 2020 at 8:25 pm

                  I agree that you don’t need calculus. In Bootstrap:Physics, we’re doing a decent bit of computational modeling using only Algebra (following the content of a Physics First curriculum, which is itself designed relative to Algebra). You can certainly expose students to computational models and the kinds of questions they can answer without calculus.
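                  To make the algebra-only point concrete, here is a sketch of the kind of stepwise model Kathi describes. Bootstrap:Physics itself uses Pyret, so this Python version (with made-up values) is only an illustrative analogue:

                  ```python
                  # Constant-velocity motion via repeated algebraic updates:
                  #   x_new = x_old + v * dt
                  # No derivatives or integrals required.
                  dt = 0.1   # time step in seconds (made-up value)
                  v = 3.0    # velocity in m/s (made-up value)
                  x = 0.0    # starting position in meters

                  for _ in range(10):   # simulate 1.0 second in 0.1 s steps
                      x = x + v * dt

                  print(round(x, 1))  # 3.0
                  ```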

  • 12. gflint  |  January 13, 2020 at 9:43 am

    What is with the resistance from university departments of education to pre-service CS? I tried to get it started at my local university and it was like talking to a brick wall. Even the CS department was totally uninterested. The worst thing was that their argument against it made sense: very few K-12 schools offer CS, so why burden pre-service teachers with another option? Of course that argument is a bit short-sighted, but that seems to be the trend with many university programs.

    • 13. Mark Guzdial  |  January 13, 2020 at 1:03 pm

      Take a look at my CACM blog post on it. Bottom line: pre-service teacher education in many states is basically an enactment of the legislation or regulations about teacher development. These are detailed prescriptions.

  • 14. orcmid  |  January 13, 2020 at 1:28 pm

    Great post. I felt the OLPC appeal also and bought two XOs (XOi?). It did worry the biggies (private communication), who were preparing defensive competitors until it became evident that there was no threat.

    Now, the landscape is completely different and my personal experience of over 60 years is not a useful guide. There were no computers in my classrooms :). I think learning by doing is a significant aspect in the development of software and computer fluency. That is easily observed. I have no idea what is called for in guided experience in a classroom situation, and how one equips educators for it.

    I am suspicious of approaches that do not provide for a learning-curve/on-ramp toward use of computational technology for productive purposes. Recreational use seems less crucial since it is completely successful at inviting adherents. I have no justification for that attitude.

    I am told that making mistakes is at the core of learning. I bestow on you the title of “learned educator.”

  • 15. Kathi Fisler  |  January 13, 2020 at 8:33 pm

    Hmm, the post I tried to make early this morning didn’t go through, so I’ll try again.

    I’d update Kay’s version in a couple of ways. First, the “new liberal art” of computing lies in data as much as in algorithms. That so many intro CS curricula still focus so heavily on control structures at first says that most are still in an algorithms-dominant mode. But if you ask people in other disciplines how they might use computation, it is often (though not always) centered around data.

    For someone grounded in data, computational tools hide the algorithmics. This is a good thing, as it lets people use computation in terms that are meaningful to what they want to do.

    A general purpose intro to all of computing raises significant problems of transfer to other disciplinary contexts. Mark’s recent work (as I see it) is trying to avoid the challenging transfer problem.

    I still believe that computing is a foundational liberal art, but I believe less in that foundation being mostly algorithmics. I believe it takes different forms depending on who wants to use it (computational modeling for some, data analysis/management for some, foundations of computing for some, …). There will be clusters of disciplines that can share different foundations, but putting it in context will remain essential for those who will see computing mainly as a tool.

    • 16. Mark Guzdial  |  January 13, 2020 at 8:40 pm

      I agree with your characterization of my work, Kathi — I’m trying to avoid the transfer issue, by making the programming look like the discipline, by turning it from a far-transfer to near-transfer experience.

      I also agree with the focus on data over algorithmics. When I was working on Media Computation, I realized that knowledge about the data (e.g., that colors were represented in 24-bit RGB) and about representational choices (e.g., that vector graphics didn’t record pixels) were important to the non-CS faculty who advised me, though they were considered unimportant details by the CS faculty. The CS faculty wanted me to teach about searching and sorting, which wasn’t all that important in a digital media context.

    • 17. Robert Radcliffe Gotwals Jr.  |  January 14, 2020 at 5:59 am

      I’ve often wondered why functional programming languages like Mathematica aren’t more widely used (with a possible reason being cost), especially at the K-12 level. I use it heavily with high school kids. The built-in curated data, like ChemicalData and CountryData, provide a rich resource of data for students. I completely re-wrote my high school chemistry course to be Mathematica-based, and the kids loved it. AND, the write-programs-by-combining-functions approach gave my I’m-scared-of-programming kids a really gentle way to learn about logic and programming syntax/grammar. I used both R and Mathematica in a digital humanities course for humanities students, and there was unequivocal evidence that the less procedural of the two programming environments was preferred. Virtually all of the final projects used Mathematica — none used R.
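      The combine-functions style Robert describes can be sketched in Python as well (a stand-in here: Mathematica’s curated data like ChemicalData has no standard-library equivalent, and the molar masses below are invented for illustration):

      ```python
      # A program built by composing small functions rather than writing
      # step-by-step procedural code. Sample values are invented.
      def mean(xs):
          return sum(xs) / len(xs)

      def heavier_than(xs, threshold):
          return [x for x in xs if x > threshold]

      molar_masses = [18.02, 44.01, 58.44, 16.04]  # illustrative, g/mol

      # One nested expression does the whole job, Mathematica-style:
      print(heavier_than(molar_masses, mean(molar_masses)))  # [44.01, 58.44]
      ```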

      (Sorry, Mark, hope we haven’t hijacked your thread here.)

      • 18. Mark Guzdial  |  January 14, 2020 at 9:30 am

        Totally in-scope! I challenge one of your claims. I actually think you’re right that Mathematica is likely to be more successful than R for many contextualized uses of programming and computing. But your experience doesn’t constitute “unequivocal evidence.” Morgan Ames has an award-winning CSCW paper on the history of Constructionism and Logo that points out that the earliest Logo studies did show evidence of transfer and metacognitive skill development — and all of that stopped when the studies scaled or the researchers left the room. If we really want to test these claims, we need to test them.

        • 19. Robert Radcliffe Gotwals Jr.  |  January 14, 2020 at 10:19 am

          Mark, absolutely correct, I’m a computational science educator, not a researcher. I am, however, a careful observer, a scientific observer, and my observations strongly support the claim that Mathematica wins over R (and python, another programming environment that my students use frequently). Hence my earlier point that these claims need to be tested by independent researchers, and, again, I’m not trolling for bodies or bucks. My goal was, is, and always will be to get students to be able to answer this question: how can computing and mathematics be used to study interesting and complex scientific problems? My CS colleagues here really care about the programming — which language, which constructs, etc. Me? I couldn’t care less — I want the lowest obstacle to getting meaningful scientific data that helps students with the science. My point was, and is, that of ALL of the environments I teach to my students — Mathematica, R, MATLAB, NetLogo, Excel, STELLA/VenSim, and others — my students (almost 100 per semester) tend to pick MMA when they have a choice.

          • 20. gasstationwithoutpumps  |  January 14, 2020 at 11:50 am

            Student choice of language is often driven more by where they get the best support (tutors, instructor enthusiasm, prior exposure, …) rather than by how easy the language is intrinsically for them to learn or solve their problems in. So one instructor saying that their students prefer Mathematica is only a mildly interesting data point.

            • 21. Robert Radcliffe Gotwals Jr.  |  January 14, 2020 at 12:28 pm

              Yep, and I’m certainly NOT offering up MY experience(s) with this particular audience as “proof” of anything…it IS just a data point. The original comment was expressing some surprise that computing tools like Mathematica aren’t more in the conversation.

  • 22. bh  |  January 13, 2020 at 11:36 pm

    I’ve always said that if “Computational Thinking” means anything, it means mathematical thinking — not any particular piece of math curriculum, but the way mathematicians actually use formal language to solve problems. The only thing computers add is to provide a formal language that’s much more accessible than the ones mathematicians use.

    You know I’m not one of the people who defend to the death anything Seymour ever said or did. But with respect to complexity, I don’t see why we can’t have it both ways. Snap! supports Scratch-beginner-type projects, and also supports lambda calculus. The former benefits occasionally from the latter; for example, when beginners reach the point of working with lists, they can use MAP pretty easily, without knowing anything about what’s inside it.
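    Brian’s MAP point carries over to text languages too; a tiny Python analogue (Snap! itself is block-based, so this is just an illustration of the idea):

    ```python
    # A beginner can transform a whole list with map() without knowing
    # anything about the higher-order machinery inside it.
    scores = [88, 92, 75]
    curved = list(map(lambda s: s + 5, scores))
    print(curved)  # [93, 97, 80]
    ```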

    Similarly, one of the goals of Berkeley Logo was to restore the metaprogramming capability (DEFINE and TEXT) that was in the early research Logo versions but taken out of later commercial versions because, to quote Brian Silverman, “You and three other people want that.” It doesn’t get in the way of writing traditional Logo programs.

    As for wanting to teach smart kids, I can tell you that finding a kid who just soaks up recursion and functional programming at age 10 is really, really fun! (I’ve had that pleasure twice. A third time it was a 12-year-old, also a lot of fun.) So I can well believe that it was fun for Seymour, too. But if I were building a language and a curriculum for geniuses, it wouldn’t be Logo or Snap! or BJC. It’d be Scheme and SICP. What that comparison makes clear, though, is that we Scheme-and-SICP fans want to build a curriculum and a programming environment for everyone that gives them a pathway into the same ideas the geniuses use, not a different set of ideas. There’s room for argument about whether that’s a good goal in trying to serve everyone, but I don’t think it’s fair to dismiss it as abandoning the masses.

    • 23. Mark Guzdial  |  January 14, 2020 at 9:25 am

      Hi Brian,

      Thank you for your comment and engaging here!

      I just wrote a response to Ken Kahn that touches on some of these same issues. To avoid repeating, I’m going to use a different strategy here.

      Scratch-beginner-type projects do not make ideas of programming accessible to students. I’m sure that you’ve seen the studies of how students actually use Scratch. One of my favorites is led by Yasmin Kafai and Debbie Fields (see here). For example, students don’t use boolean expressions, and thus don’t use conditionals or loops other than forever loops. Scratch is powerful in engaging students and giving them computation as a form of expression, but it’s not really helping students onto the low threshold of using the power of programming. And while Scratch engages a lot of students, it’s nowhere near “the masses.” (Here’s an analysis I did in the CACM Blog recently.)

      Brian, you know that I’m one of those three that Brian Silverman was describing. I have complete sets of both editions of your CSLS books just above my head as I type. I now realize that Brian’s numbers are not much of an exaggeration. Less than 4% of US high school students see any CS. That’s a really low number. When 96% of students never see a subject, can’t we say that the subject is effectively not taught in high school? That’s the problem I’m interested in. The fundamental principle of HCI design is “Know thy user for they are not you.” If we really want to make programming accessible to many others, we have to go study them, because they are not like us.

      I question whether it’s possible to build a tool that really has a low enough threshold to be accessible to most students and is interesting still to students who want to push the ceiling. Educational psychology has terms like aptitude-treatment interaction and expertise reversal effect to describe how the instruction for low-ability and high-ability students must be different. But that’s an open empirical question — you might be right that a tool like Snap! can adequately serve the entire spectrum of students. We certainly don’t have evidence that that’s true yet.

      So, yes, the evidence is that so-far, computing education has abandoned the masses. We’re going to have to work hard to figure out how to reach the students and teachers who are thus far not engaging with the power that programming can provide. It’s an open and interesting research question to ask how to reach those students who find even basic Logo, Scratch, and Snap! programming to be out of their reach.

      • 24. bh  |  January 14, 2020 at 10:33 pm

        It’s my impression that a lot of the early Logo failure came from injecting it into a classroom without any teacher preparation. We’re trying to do better this time around, and we (the developers) answer questions on the Snap! forum and on the BJC Teachers group on Piazza. For all its faults, the Internet lets us build a widespread community of teachers and of kids.

        If it’s /really/ true that many kids can’t understand an IF/ELSE, and it’s not that their teacher doesn’t understand it, then the answer is to find the point of failure in their earlier education and do interventions there. I don’t know that that’d work, but it’s really all I can think of. I’m not willing to call an experience that doesn’t use conditionals “learning to program” or “learning computer science.”

    • 25. orcmid  |  January 14, 2020 at 11:19 am

      I am generally favorable to the idea of SICP and Scheme, although it is important to know why MIT has changed course and what they are now using instead. One key problem is that it may be too geeky and not terrific as a means of engaging the imagination and satisfaction of students who will likely not be MIT undergraduates. There’s something I can’t quite put my finger on. I think it is about too much facing inward toward the computational machinery and not so much outward toward practical and especially recreational pursuits. Whether it “could” be used in such settings, it is clear that it isn’t in any widely-available form. SICP is intentionally inward looking and it does a great job at that.

      As an adherent of functional-programming approaches, with some early-career (1960s) exposure, the appeal of Mathematica for me is the ability to produce beautiful presentations of the work and supporting analysis. I think it is too high level in some respects and may hide too much. It looks like a great tool though. Probably rather elite though, and very proprietary.

      I favor “Literate Programming” but there is not wide take-up and there are cultural barriers to what it takes to use well. I can’t imagine it in an introductory or early-practice level.

      So much for personal anecdotal experience of a practitioner, not a CSE researcher.

      I have one small niggle for you. It concerns the opening paragraph of your comment. It is easy to be misled by the appropriation of mathematical-formula notations and expression languages for computer programming, as in

      “[mathematical thinking is] not any particular piece of math curriculum, but the way mathematicians actually use formal language to solve problems. The only thing computers add is to provide a formal language that’s much more accessible than the ones mathematicians use.”

      Serious mathematicians have good reason to take issue with that paragraph concerning the problems they work on and the formal languages that are employed.

      I want to point out, though, that moving from a mathematical-seeming expression to a computational process is not a mathematical act but one of engineering and empirical success. “Proof” and “computation” are not the same almost all of the time. Think about this blog comment and all the machinery behind it and how it affords communication. There may be reliance on mathematical/statistical computations in the operation and carrying out of all of it, but it is not in the same realm as formal mathematics. I wonder if Mathematica encourages this confusion for many people.

      A mundane example can be found in elementary school arithmetic that we all learn to some degree. That arithmetic works has a mathematical foundation. But when we do arithmetic, we are the computer and we are not doing mathematics, although we might rely on mathematical assurances without much thought to the matter. Mathematics is behind ways of checking our work, and it is a good thing that is possible because the computer tends to be unreliable without that provision.

      Reply
      • 26. Robert Radcliffe Gotwals Jr.  |  January 14, 2020 at 11:28 am

        orcmid, I concur with the proprietary aspect, although Wolfram has done well by my high school in terms of VERY reasonable site licensing. They also now have the online version, which opens it up to Chromebook and other “lesser” computing resources. In terms of level, I (and my kids) found it MUCH more accessible than any other programming environment. For my gen chem classes, I have what is basically a front-and-back “cheat sheet” that gives them most of the commands they need to succeed in using MMA in chemistry. Most, but not all, of the kids figure out other functions that they like. It’s also a LOT more black-box than procedural languages — I was talking with some machine learning kids this summer who were using Python; they were irritated when they saw that one function — Classify — basically replaced a hundred lines of Python code. As before, the computer scientists have a different agenda than the computational scientists, so I’m all for black box.
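The contrast can be made concrete. `Classify` is a real Wolfram Language function that builds a classifier from labeled data in a single call; the Python below is a hypothetical stand-in (not the students' actual code) showing the kind of explicit machinery a minimal hand-rolled nearest-neighbor classifier needs even for a toy problem:

```python
# A minimal 1-nearest-neighbor classifier in plain Python, sketching
# the explicit steps (distance metric, search loop, bookkeeping) that
# a high-level call like Mathematica's Classify[data] hides.
import math

def nearest_neighbor_classify(training, point):
    """training: list of (features, label) pairs; point: feature tuple."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best_label, best_dist = None, float("inf")
    for features, label in training:
        d = dist(features, point)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Tiny example: label a point by its nearest cluster.
data = [((0.0, 0.0), "low"), ((0.1, 0.2), "low"),
        ((5.0, 5.0), "high"), ((4.8, 5.1), "high")]
print(nearest_neighbor_classify(data, (0.2, 0.1)))  # -> low
print(nearest_neighbor_classify(data, (4.9, 4.9)))  # -> high
```

A real machine-learning workflow adds data cleaning, model selection, and validation on top of this, which is where the "hundred lines" goes.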

        Reply
      • 27. bh  |  January 14, 2020 at 10:18 pm

        I think you’re interpreting my comment on formal language more narrowly than I meant. Probably my fault for not being clear. I don’t mean that the languages are the same — that an arithmetic expression looks (more or less) the same for a programmer and for a mathematician. What I mean is that when a mathematician finds some new abstraction, they give it a name, and when a programmer finds a new abstraction they give it a procedure. In both cases there are syntactic and semantic constraints on composing these things.

        Reply
  • 28. Ken Kahn  |  January 14, 2020 at 2:44 am

    Regarding

    “I was in a design meeting once with Seymour, where he was arguing for making a new Logo implementation much more complicated. “Teachers will hate it!” several of us argued. “But some students will love it,” he countered. Seymour cared about the students who would seek out technical understanding, without (or in spite of) teachers, as he did.”

    I think a well-designed language for learning can have both a low threshold and a very high ceiling. Maybe the trick is to hide the high ceiling from those who find it scary (e.g. the teachers that were predicted to hate it). Imagine Logo was the first example I saw of a Logo with many layers where the lowest layer was classic Logo and the top layers supported objects, inheritance, concurrency, and much more. And there was a nice gradation in between.

    Yes, some students will never get to the more sophisticated layers but I think we owe it to those students who want to go further to give them the best tools we can.

    Reply
    • 29. Mark Guzdial  |  January 14, 2020 at 9:05 am

      Hi Ken,

      The context was a bit different than you’re guessing.

      We were designing the Microworlds Logo just after LogoWriter, the successful LCSI product that merged a text editor with Logo, so that there were primitives to move the cursor, select letters and words, and cut/copy/paste. The idea for Microworlds was to do this big — have versions of Microworlds explicitly for (say) astronomy with a star field display and lots of appropriate primitives, or chemistry with a molecule display and matching primitives. LogoWriter had opened up English class to Logo programming, and LCSI was excited about doing this with lots of other domains. Then Seymour said, “You can market it to different departments, but it has to be the same Logo. All the different capabilities have to be built in to every Logo.” Set aside the interface complexities of this. As you recall, Ken, Logo has a global namespace. A misspelling of a primitive might drop someone into biology while in chemistry class, if all the primitives were available in every version. Error messages which might make sense in one class now don’t in the other. Teachers would have a whole new class of errors to deal with, connecting to classes outside their expertise. Seymour was right, of course, that some kids would totally love to merge mathematics and science fields like that. I would have loved it. Some teachers would also love it. Most would not.

      I think we have roundly underestimated how low the threshold has to be. Logo never had a low threshold. Variables, parameter passing, conditionals, and iteration were always hard — all the early Logo studies support that claim. If we really built a Logo that scaffolded understanding those basic issues, it would now be boring and pedantic for those reaching for the high ceiling.

      I completely agree that we should offer something to the students who want the most sophisticated layers. But catering to them will always be catering to a small percentage of students. We as a field have spent a lot of time at that high ceiling level. We have to work at making it work at a low enough threshold.

      Reply
      • 30. orcmid  |  January 14, 2020 at 12:21 pm

        Brief aside: This reminds me of “The Next 700 Programming Languages” notion and a single (functional) structure with differences around sugar and with types and functions associated with specialized domains. The use of namespaces and libraries was not presumed, but it sort of happens that way in practice. None of this contradicts your observations or the conceptual difficulties, though.

        In (so-to-speak) mathematical curricula, we have a progression from arithmetic to algebra to trigonometry to calculus and beyond as one branch. Geometry is an odd side-branch after arithmetic (at least when I was in high school) that is about a kind of mathematical thinking and an interesting contrast with trigonometry in regard to computational prospects. We also had to deal with rational numbers (i.e., “fractions”), and tables of logarithms and other functions, there not being any calculators, let alone computers.

        I recount this in observation that there is a progression and also, there is a change in kind from one level to the next. We don’t have to forget the earlier stages, they can still be relied upon, and the subsequent levels also give us new ways to regard the preceding stages. There is also new capability and value at each stage.

        The teaching of physics (and chemistry) also involves progressions. Although much might be surveyed, advanced levels of physics (as for mathematics) are pretty inaccessible until the collegiate levels, and beyond. It can be a little shocking to move from algebra to real analysis, differential equations, and also higher algebra. Then there’s discrete mathematics, analysis of algorithms, etc.

        I don’t think the social sciences and humanities lack this sort of thing, but the sciences and mathematics stand out for me.

        Does this tell us anything or provide any kind of guidance about how to “level” computation and information technology as something, with utility at every stage that is replaced but qualified rather than obsoleted at higher levels? (By utility I mean personal benefit to the individual, however appreciated.)

        I cannot argue against fluency with information technology and even some counterpart of literacy associated with it. Maybe programming is no longer the best point of entry? What people observe in their interactive use of computers these days is far removed from what is accomplished with simplified programming setups, and the gap is maybe too unsatisfying.

        I share your interest in the value of systematic thinking and computing as an instrument. Yet I only have questions.

        Reply
      • 31. Ken Kahn  |  January 14, 2020 at 9:14 pm

        Interesting. It would seem that Microworlds Logo could have resolved this the way most languages do – easy-to-load libraries. Careful naming could have minimised the consequences of a global namespace. Somehow libraries never became part of the Logo way of doing things.

        Regarding work at a low enough threshold, I see Scratch as that.
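The library idea Ken describes can be sketched in any language with modules. Here is a minimal illustration in Python (the domain functions are hypothetical), where Python modules stand in for loadable Logo domain libraries: the same short name lives safely in two namespaces, whereas a single global namespace would invite exactly the cross-domain collisions Mark describes.

```python
# Sketch: domain primitives packaged as loadable libraries rather than
# dumped into one global namespace. Python modules built at runtime
# stand in for hypothetical Logo domain libraries.
import types

# Two "domain libraries", each owning its own names.
chemistry = types.ModuleType("chemistry")
chemistry.show = lambda: "chemistry: render a molecule"

astronomy = types.ModuleType("astronomy")
astronomy.show = lambda: "astronomy: render a star field"

# The same short name is unambiguous once qualified by its namespace,
# so a chemistry class never trips over astronomy's primitives.
print(chemistry.show())   # chemistry: render a molecule
print(astronomy.show())   # astronomy: render a star field
```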

        Reply
  • 32. carpetbomberz  |  January 14, 2020 at 8:58 am

    Reblogged this on Carpet Bomberz Inc. and commented:

    I’m definitely going to look up and read The Charisma Machine by Morgan Ames. I wasn’t aware anyone had done any review of the One Laptop Per Child project at the Media Lab.

    Reply
  • 33. Ken Kahn  |  January 14, 2020 at 9:46 pm

    I mentioned Scratch before seeing Mark’s response to Brian. I too am disappointed in the poor quality of most Scratch scripts – repeated code, lack of abstraction, reliance upon only the simplest control structures, etc.

    I have a problem with research that shows how disappointing the use of programming languages and tools are in practice when I see how well things are working under “best practices”. To me the big question is how one can replicate best practices. Do best practices require exceptional teachers? I’m sceptical but uncertain.

    Reply
    • 34. Mark Guzdial  |  January 15, 2020 at 9:22 am

      I think it’s fine that Scratch programs are poor-quality from a CS or Software Engineering perspective. There’s a lot of use of Scratch — arguably, the most successful children’s programming language in the world. Students are engaged and they’re programming.

      But it doesn’t actually lead them to learn or use much of programming. They likely don’t learn much of computer science. Scratch shows us that engagement is not enough to get students to dig deeply into the concepts. Maybe we need different kinds of activities. Maybe different kinds of teachers. Maybe a different interface for programming.

      I’m not convinced that replicating best practice is sufficient. The n on our best studies is too small. It’s likely that we have only tested in the best possible situations. The exact same classroom might not work with a different and more random set of students.

      I think we need to try new and more things. Computing education research has to engage more with design studies.

      Reply
      • 35. Robert Radcliffe Gotwals Jr.  |  January 15, 2020 at 11:34 am

        Mark, I appreciate I’m a single data point, and that I’m not a researcher but a boots-on-the-ground computational science educator. I started in HPC in 1987, so I’ve seen a great deal in terms of classroom integration of both computer science and computational sciences. With NSF funding, I ran a pile of teacher PD workshops in the computational sciences. I apologize for cynicism, but almost all of the teachers, despite participating in NSF summer workshops, were very skeptical, not only about their own abilities to learn the material, but also about their abilities to integrate into classrooms. After years of head-banging, I made the (career) decision to come back to the classroom, work directly with students, in the hopes that the “you teach how you were taught” mantra would be realized. I think in terms of teacher development, things are even harder than they were in the late 80s/early 90s. Less classroom flexibility, more high-stakes testing, destructive funding shifts (particularly in North Carolina), all of these things have made things much harder. Does that mean we stop trying? No. I have had very little success, however, exporting ANY of my 11 computational science courses (or even parts of them) to schools in North Carolina. Teachers tell me they can barely keep up with the expectations of EOC testing, the lack of resources, etc. I’m not sure we can expect a “different kind of teacher”. If we are expecting that, I would say: yes, they are different now. More stressed, poorly resourced, etc. My school has significant expectations to serve as a resource for the entire state. It’s become much more difficult since I arrived here in 2006. Maybe a better programming environment. Fingers crossed.

        Reply
        • 36. orcmid  |  January 15, 2020 at 1:21 pm

          Maybe a better programming environment.

          This shows up in other comments from time to time. Is there some sense of what that would be, what barriers to remove, what the appeal would be for what and to whom?

          I am very curious, yet have no insight.

          Reply
          • 37. Robert Radcliffe Gotwals Jr.  |  January 15, 2020 at 1:40 pm

            orcmid, in my experience and opinion, both as an NSF-funded researcher AND as a long-time classroom person, these things need to be ensconced in the testing expectations that most teachers face. I had one NSF proposal (not funded) that was looking to generate enough data to convince the American Chemical Society and thus the College Boards that computational chemistry skills were as important as “traditional” lab techniques (think: beakers and burners). Having taught AP for a number of years, you barely have enough time to get through the testable material in a year, and, if your performance review is based on how many 5s your kids get (and it is so based), you spend zero time on “extracurriculars” like computational chemistry. Never mind the challenges in getting appropriate software funded (if needed, like Mathematica) OR installed on school machines to be able to do higher level (scientific) computing. At our school, each student is required to have a personal laptop, so we don’t face that problem. With one NSF grant I had a number of years ago, the Dean of the College of Education at ECU and I went to districts all over the state, meeting with superintendents, with the goal of getting better software installed so kids could do computational science. Almost to a person, the superintendents were unwilling to demand of their own IT staffs that software would be installed on computers. Saying this out loud sounds really stupid, but this really happened. So we have teacher anxiety about learning this stuff, we have the driving reality of end-of-grade testing, we have challenges in getting software beyond Microsoft Office installed on school materials, and we have the general malaise that exists right now in terms of school funding. If you read about what’s happening in North Carolina, extrapolate that to a dozen or more other states.

            As before, I apologize for the “downer” comments here, but it’s really hard to be optimistic. Yes, there are individual schools, teachers, and programs doing great things in the computational sciences (we are one of them). Teachers are using tools like PhET, some of the Concord Consortium materials, stuff from the Shodor Foundation, etc. Others, like us, have full-blown, semester-length courses. But, unless I’m missing something, and I pay pretty close attention, there isn’t anything happening on a scale of any magnitude. In my CURRENT NSF work, related to computational thinking, we’re trying to get at some of these issues with high school chemistry and biology teachers. But there are lots of things we can’t address, like funding for computers, EOGs, etc.

            Reply
            • 38. orcmid  |  January 15, 2020 at 3:04 pm

              Maybe a better programming environment. Fingers crossed.

              My apologies. I thought you were talking about a software platform, not the context in which instruction is conducted.

              It was in the instrumental sense that I commented

              This shows up in other comments from time to time. Is there some sense of what that would be, what barriers to remove, what the appeal would be for what and to whom?

              I am very curious, yet have no insight.

              You have my sympathy, for what little that’s worth. I have no illusions that any instrument can shift the difficulties that you observe. That should not deter the search for something that affords a handy conceptual model for encouraging exploration and inquiry into what’s behind it. I remain clueless.

              Reply
              • 39. Robert Radcliffe Gotwals Jr.  |  January 15, 2020 at 3:17 pm

                No sympathy needed, we have NONE of those problems at my school. We are full up with resources (including access to several supercomputers), we do no EOG testing, we have 100% control of curriculum (no Department of Public Instruction “guidance” provided), and all of our faculty have Masters or PhDs and have a significant amount of expertise. We just hired our SECOND computational chemist, and we’re a high school. We’re resourced to the point where we are opening a second campus in 2021.

                I do care, however, about other schools that AREN’T in our very enviable situation. It CAN’T just be my 100 kids/semester getting a chance to do advanced computing in the sciences.

                Reply
      • 40. orcmid  |  January 15, 2020 at 1:18 pm

        I think we need to try new and more things. Computing education research has to engage more with design studies.

        Say a bit more about “engage more with design studies” please. Design studies?

        Reply
          • 42. orcmid  |  January 15, 2020 at 2:37 pm

            Participatory design, design-based research, design experiments.

            Oh wow. Thanks, I see the controversy. I’ve participated in an (industrial) participatory design activity that was very fruitful. I guess the paper published about it would fall in the brief talks on practice and not research with capital “R.”

            This reminds me of the “virtuous cycle of learning and improvement” associated with W. Edwards Deming. It is an iterative product development methodology that includes the adopters of a product, though “participatory” is a matter of degree. Ethnographers who study work practices get dissertations out of their observational work.

            I don’t know how much this is similar to what you are thinking of. Maybe you need some anthropologists. PARC did some of that, maybe including a little with Smalltalk and the kids that they got to work with it. Not certain participatory design though. You know who to ask :).

            Reply
