Goals for CS Education include Getting Students In the Door and Supporting Alternative Endpoints

June 1, 2020 at 7:00 am 21 comments

ACM Inroads has published an essay by Scott Portnoff “A New Pedagogy to Address the Unacknowledged Failure of American Secondary CS Education” (see link here). The Inroads editors made a mistake in labeling this an “article.” It’s an opinion or editorial (op-ed) piece. Portnoff presents a single perspective with little support for his sometimes derogatory claims. I have signed a letter to the editors making this argument.

Portnoff is disparaging towards a group of scholars that I admire and learn from: Joanna Goode, Jane Margolis, and Gail Chapman. He makes comments about them like “had CSEA educators been familiar with both CS education and the literature.” Obviously, they are familiar with the research literature; they are leading scholars in the field. Portnoff chides the CSEA educators for not knowing about the “Novice Programmer Failure problem” — a term that I believe he invented, since I cannot find it anywhere in the research literature.

In this blog, I want to try to get past his bluster and aggressive rhetoric. Let’s consider his argument seriously.

In the first part, he suggests that current approaches to secondary school CS education in the United States are failing. His measure of success is pass rates on the Advanced Placement Computer Science Principles (AP CSP) exam. He also talks about going on to succeed in other CS courses and at industry internships, but he only offers data about AP CSP.

He attributes the failure of US high school CS education to our de-emphasis of programming. He sees programming as critical to success on the AP exams, in future CS classes, and in industry jobs. Without an emphasis on programming, we will likely continue to see low pass rates on the AP CS Principles exam among female and under-represented minority students.

In the second part, Portnoff lays out his vision for a curriculum that would address these failings and prepare students for success. He talks about using tools like CodingBat (see link here) so that students get enough practice to develop proficiency. He wants a return to a focus on programming.

What Portnoff misses is that there is no consensus around a single point of failure or a single set of goals for CS education. In general, I agree with his approach for what he’s trying to do. I value the work of the CSEA educators because the problems that they’re addressing are harder ones that need more attention.

The biggest problem in US high school CS education is that almost nobody takes it. Fewer than 5% of US high school students take any CS class (see this blog post for numbers), and the students we currently have are overwhelmingly male, white/Asian, and from wealthier schools. Of course, we want students to succeed on the Advanced Placement exams, in further CS courses, and in industry jobs. But if we can’t get students in the door, the rest of that barely matters. It’s not hard to create high-quality education only for the most prepared students. Getting diverse students in the door is a different problem from preparing students for later success.

CSEA knows more about serving students in under-served communities than I do. They know more about how to frame CS in such a way that principals will accept it and teachers will teach it. That’s a critical need. We need more of that, and we probably need a wide range of approaches that achieve those goals.

A focus on programming is critical for later success in the areas that Portnoff describes. The latest research supporting that argument comes from Joanna Goode (as I described in this blog post), one of the educators Portnoff critiques. Joanna was co-author on a paper showing that AP CS A success is more likely to predict continuation in CS than AP CSP success. I’m also swayed by the Weston et al. article showing that learning to program led to greater retention among female students in the NCWIT Aspirations awards programs (see link here).

I also agree with Portnoff that learning to program requires getting enough practice to achieve some level of automaticity. CodingBat is one good way to achieve that. But that takes a lot of motivation to keep practicing that long and hard. We achieve reading literacy because there are so many cultural incentives to read. What will it take to achieve broad-based programming literacy, and not just among the most privileged? Portnoff tells us that his experience suggests that his approach will work. I’m not convinced — I think it might work with the most motivated students. He teaches in the same school district where the ExploringCS class was born. But Portnoff teaches in one of LAUSD’s premier magnet schools, which may mean that he is seeing a different set of students.
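
To make the “practice to automaticity” point concrete, here is a sketch of the kind of short, self-checking drill that CodingBat-style practice involves (the sum_double problem below is in the style of CodingBat’s warmups; the little test harness is just my illustration):

```python
# A CodingBat-style drill: write one small function, then check it
# against a table of expected results. The harness is illustrative.

def sum_double(a, b):
    """Return the sum of a and b, doubled if the two values are equal."""
    total = a + b
    return total * 2 if a == b else total

# Immediate pass/fail feedback on every attempt is what makes short
# drills like this useful for building automaticity.
tests = [((1, 2), 3), ((3, 2), 5), ((2, 2), 8)]
for args, expected in tests:
    result = sum_double(*args)
    status = "OK" if result == expected else f"expected {expected}"
    print(args, "->", result, status)
```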

An important goal for CS Education is to get students in the door. I’m not sure that Portnoff agrees with that goal, but I think that many involved in CS education would. There is less consensus about the desired outcomes from CS education. I don’t think that CSEA has the same definition of success that Portnoff does. They care about getting diverse students to have their first experience with computer science. They care about students developing an interest, even an affinity, for computing. They care more about creating a technically-informed citizenry than producing more software developers. Portnoff doesn’t speak to whether CSEA is achieving their desired outcomes. He only compares them against his own goals, which are about continuing on in CS.

There is a tension between preparing students for more CS (e.g., success in advanced classes and in jobs) and engaging and recruiting students. In a National Academy study group I’m working in, we talk about the tension between professional authenticity (being true to the industry) and personal authenticity (being personally motivating). The fact that so few students enroll in CS, even when it’s available in their school, is evidence that our current approaches aren’t attractive. They are not personally authentic. We need to make progress on both fronts, but considering how over-full undergraduate CS classes are today, figuring out the recruitment problem is the greater challenge to giving everyone equitable access to CS education.

I just learned about a new paper in Constructionism 2020 from David Weintrop, Nathan Holbert, and Mike Tissenbaum (see link here) that makes this point well, better than I can here. “Considering Alternative Endpoints: An Exploration in the Space of Computing Educations” suggests that we need to think about multiple goals for computing education, and we too often focus just on the software development role:

While many national efforts tend to deploy rhetoric elevating economic concerns alongside statements about creativity and human flourishing, the programs, software, curricula, and infrastructure being designed and implemented focus heavily on providing learners with the skills, practices, and mindset of the professional software developer. We contend that computing for all efforts must take the “for all” seriously and recognize that preparing every learner for a career as a software developer is neither realistic nor desirable. Instead, those working towards the goal of universal computing education should begin to consider alternative endpoints for learners after completing computing curricula that better reflect the plurality of ways the computing is impacting their current lives and their futures.

21 Comments

  • 1. bobirving13  |  June 1, 2020 at 9:47 am

    Thanks for your always thoughtful posts, Mark. My background — I teach middle school CS in an independent school, where we require a quarter of CS in grades 5-8 and a semester in 9th grade. Students may apply to be in our program in 10th grade, and we generally get submissions from about 35% of each grade. Our program is hard, but it’s not aimed solely at those who want to be professional programmers. Naturally, some in our program do follow that path, but many do not. There are many reasons to learn to code besides going on to employment as a programmer. Some like to master technology they use every day, some add it to their “toolkit” in another field of work, and some just enjoy the satisfaction of creating something useful or even beautiful. Art and music classes are not aimed at producing professional artists and musicians. English classes are not aimed at producing professional writers. Physics classes aren’t aimed at producing astrophysicists. So why do we in CS feel that our curriculum must be guided solely by producing professionals? I would echo Mitch Resnick’s ideal that CS should have “low floors, high ceilings, and wide walls.”

  • 2. alanone1  |  June 1, 2020 at 11:13 am

    For me, the only place I can start is with Jerome Bruner’s notion of “an intellectually honest” version of a subject for a learner’s “level of development”.

    If we take the term “science” seriously in “Computer Science” — and also take what the founders of the field thought it should be about — then an “intellectually honest version of Computer Science” is hard to find for any age or grade in the US.

    The previous sentence is a claim, but I think I actually have been (and still am) an actual “Computer Scientist” going on more than 50 years now, and can prove it. This doesn’t mean that my definition of the term is the last word on it — I don’t think it is. But I do think that starting with people who actually are it and do it is a good place to begin this discussion.

    I also want to leave out of this part of the discussion the fact that there are jobs available for people who have learned to program a little. This is somewhat analogous to being allowed to practice medicine if you have taken a HS Biology course. It’s realized that this is not a good idea vis-à-vis medicine, but the idea that computing might take quite a few years of hard study before “practicing on the public” is allowed has not yet been generally understood.

    The next part of this discussion is much more difficult to pin down. It has to do with a country whose K-12 preparation has not gotten them to realize and understand that:

    (a) it is an actual duty and necessity for them to vote (more than 44% did not vote in 2016 — that is more than did vote for either candidate — this is a complete breakdown of the education system).

    (b) a contagious deadly incurable disease that can infect more than one other person will spread in a compounding fashion, and very quickly — the only recourse is to immediately isolate (if we had acted as New Zealand did, there would be about 1500 fatalities from COVID, not more than 100,000 — the latter could have been avoided!)

    (c) it has been known for more than 60 years that the planet’s greenhouse gases and average temperature are increasing at a faster than constant rate, and that science and its simulations show many kinds of disasters in this century (this is the monster crisis that is still quite invisible to most).

    (d) there are many more completely true important ideas like the above that need to be understood and heeded.

    To me this means that — generally speaking — the main purposes of public education in the US — which are to shape the kinds of citizens we need over at least 80% of the voting population — have quite failed — not just miserably, but disastrously.

    In a democratic republic, the needs are for a population which is not just aware of important ideas and knowledge, and “how to think”, but also has learned how to argue (not to try to win debates, but how to argue to make progress and deal with important issues).

    Finding the “intellectually honest” versions of what a democratic republic needs is quite a daunting task, and in the 21st century also has to include the sciences along with the humanities.

    An important point I’m trying to make here, is that American education first has to deal with what can’t be options.

    For example, it is still not controversial that learning to read and write fluently should be mandatory in a democratic republic (but there are signs of even this being attacked). And the NAEP shows that even though American children are supposed to learn to read and write fluently, a vast percentage don’t, and they are allowed to fail.
    I don’t think that public health and “public science” can be optional in the 21st century. (There are more subjects I’m leaving out.)
    I can imagine “intellectually honest versions” of computing that would qualify for the full K-12 treatment.

    But I don’t see anything like this in my somewhat limited sampling of what is now a very large set of offerings from both school systems and private vendors.

    I don’t see what is generally going on regarding computing in industry as providing any reasonable form of computing worthy of having versions of it invented for young learners.

    More sadly, it is very hard to find even pockets of “real computer science”, especially at the undergraduate level, even in leading universities (but there are a few, it’s not yet completely zero).

    For the general learner in K-12 who is going on to be a voting citizen, but not a professional writer, mathematician, scientist, or computerist, we really need to come up with another, and much better round of what they do need to learn.

    For computing, I’m quite convinced that the center of it, from K onwards, has to be about systems, understanding them, making them, modeling them, etc. Nothing less will be able to be “intellectually honest” enough. This makes an easy test for what is now going on …

    • 3. gasstationwithoutpumps  |  June 1, 2020 at 6:02 pm

      While I agree with much of what you say, Alan, I find your medical analogy flawed. Non-CS people programming for themselves is not the equivalent of practicing medicine without a license—it is closer to fixing one’s leaking toilet or preparing a healthy meal. A little knowledge is needed to do a decent job, but not a lot, and the downsides of a botched job are not very large. (Turning untrained people loose on major software projects for life-critical applications would be the equivalent of practicing medicine without a license, but that is not what is usually being done.)

      • 4. alanone1  |  June 2, 2020 at 12:06 am

        Hi Kevin

        Yes, I was a bit uneasy about that analogy when I wrote it (and am always miffed that — unlike Quora — this WordPress will not let me go back to fix something that doesn’t sit well).

        Let’s find something between our two analogies.

        When I think of software out in the world today, as done by most of the people doing it today, my main thoughts are (a) the image of the “Pacific Gyre” of trillions of tons of waste gradually killing the oceans, (b) the really bad, insecure, and open-to-mischief software (open source “AI” and the current web, etc.), (c) the Boeing 737 Max kinds of problems (again perhaps a bit of an exaggeration), and (d) the “lowering of normal” (back to more prehistoric modes of thought) that is the hallmark of weak education for children and how they interact with messaging systems.

        I think the downsides of millions of botched up jobs are quite large, many perhaps more “mind-threatening” at present, but which are very much moving towards “life-threatening” in our society.

        • 5. gasstationwithoutpumps  |  June 2, 2020 at 10:20 am

          I think that most of the “trash” polluting the software world is created by “professional” software developers—those supposedly trained in computer science, but not particularly competent, or who optimize for the wrong thing (like speed of development over security of information). It is their work that gets copied millions of times.

          The programming-for-all movement is aimed more at creating one-off scripts used by one person. Such programs may be terrible, but they don’t do much damage, either individually or collectively.

          Perhaps a better analogy is bad food prep in a food-processing factory vs. in one’s own kitchen. The food-processing factory can make millions sick, but in one’s own kitchen the risk is only to oneself (or one’s family).

          • 6. alanone1  |  June 2, 2020 at 10:39 am

            But consider a kid with access to the Internet and various kinds of free software, including bot software. I think they can do a lot of damage (and in fact have been doing damage). Like the molecular bio one can now do in a kitchen for about $10K, it’s the multiplicative effects of life on the one hand, and the internet and computing on the other that are very worrisome, regardless of whether there is any malice added.

            • 7. Mark Guzdial  |  June 2, 2020 at 12:43 pm

              Hi Alan,

              By analogy, then, should we stop teaching kids to write because they might promote violence via Twitter? Or foment revolution by writing a pamphlet like Paine’s “Common Sense”? One can cause damage with any tool. The responsible thing to do is to teach the ethical use of the tool, not to avoid teaching how to use the tool. Today, the people with the power to create and produce software are mostly male and white or Asian. Computing is far too powerful to leave in such biased hands. We need to promote more learning of computing.

              • 8. alanone1  |  June 2, 2020 at 12:58 pm

                Hi Mark

                Check out my first comment in some detail — it is all about ethics and citizenship, and what should be taught to children especially.

                I think the limits to how learning and culture can moderate our genetics can be greatly aided by deciding what kinds of tools should be available. For example, I think really tough gun control is a good idea, even though “in theory” people might be taught to only use them wisely.

                The basic principle is to look at the multiplicative factors — often the result of technological advances — that can take an impulse from being a mistake to being a much larger tragedy. An angry cave person with a rock is in a different category than an angry cave person with an automatic weapon, or worse.

                And, as I pointed out, I am all for having everyone get fluent in “real computing” (I think I’ve been at this for longer than most on this forum). But I don’t think anyone is being done a favor by being taught “Guitar Hero” instead of “Guitar”, even if it “broadens participation” (that just seems crazy to me).

                In the current situation I think it’s the case that (a) the actual tools that should be taught are not being taught, and (b) the ethical uses of them are not being taught.

                • 9. Mark Guzdial  |  June 2, 2020 at 3:43 pm

                  Hi Alan,

                  I’d love to get your take on “Guitar Hero” vs “Guitar” with respect to programming language design, please. I’m guessing that you would classify all of Smalltalk, Scratch, eToys, and NetLogo as “real computing” (and maybe even tools like Mathematica). From a computing perspective, these are all many layers of abstraction up from the “real computer.” What makes a tool “real,” i.e., what makes it “Guitar” and not “Guitar Hero”?

                  I’ve been thinking about this question from two perspectives. First, I continue to work with my collaborators in history and discrete math to create new task-specific programming languages. These are “real” in the sense of doing significant computational work (e.g., I have to be careful that the discrete math work doesn’t blow up available memory while it explores exponential counts), but they are not at all close to being the “real computer.” When is domain-specific programming “Guitar” and when is it “Guitar Hero”?
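
                  As a toy illustration of that memory concern (my sketch, not the actual task-specific languages): counting combinatorial objects is cheap, but materializing them grows exponentially, so the implementation has to be careful which of the two it is really doing.

```python
# Illustrative sketch only: counting subsets vs. enumerating them.
# The count is instant, but materializing every subset grows as 2**n
# and exhausts memory long before the count is hard to compute.
from itertools import chain, combinations

def count_subsets(n):
    # Closed form: 2**n, computed without building anything.
    return 2 ** n

def enumerate_subsets(items):
    # Materializes every subset -- fine for n = 12, hopeless for n = 40.
    items = list(items)
    return list(chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

print(count_subsets(40))                   # 1099511627776, instantly
print(len(enumerate_subsets(range(12))))   # 4096, still fine at this size
# enumerate_subsets(range(40)) would need about a trillion tuples.
```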

                  Second, I have started collaborating with cybersecurity people who expose flaws at the microprogramming level. They point out that running the same program on the same processor twice does not lead to the same bits flowing in and out of the same chip. Because of cache memories, branch prediction, and pre-fetching, programs are not actually deterministic at the bit-level of a processor anymore. Working at the “real computer” level is an enormous cognitive load these days — and perhaps is not really worthwhile for student learning. How close to “real” should we be to still give students the sense of “real computing”?
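
                  A small sketch of the flavor of that non-determinism (mine, not my collaborators’ measurement apparatus): even a trivial, fully deterministic computation shows run-to-run timing variation, because caches, branch prediction, prefetching, and the OS all behave a little differently each time.

```python
# Sketch only: the result below is identical on every run, but the
# timings are not -- caches, branch prediction, prefetching, and OS
# scheduling all vary underneath a "deterministic" program.
import time

def work():
    total = 0
    for i in range(200_000):
        total += i * i
    return total

timings = []
for _ in range(5):
    start = time.perf_counter_ns()
    result = work()
    timings.append(time.perf_counter_ns() - start)

print("result is always:", result)
print("nanosecond timings vary:", timings)
```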

                  Thanks!

                  • 10. gasstationwithoutpumps  |  June 3, 2020 at 12:21 am

                    As a side comment—I found some of the work from UCSB on the side-channels created by just-in-time compilation of javascript rather interesting—it isn’t just hardware that creates weird side channels—almost anything can.

                  • 11. alanone1  |  June 3, 2020 at 12:30 am

                    Hi Mark

                    I think you pose absolutely key critical questions.

                    When I start thinking about this stuff, my operative question is “What is actually needed?”.

                    A too simple answer to one part of your question is that we often use computing as a means to an end — this can include various kinds of limited control & programming — whereas for “computing”, it is the end we pursue. There are analogies to many other subjects such as math, some of the sciences, and engineering. I think it is important to actually learn the subjects, even if they are not going to be one’s profession (I’m not against “applied” use at all, but they are quite different).

                    And my simple answer to another part of the question is: “something other than Smalltalk, Scratch, eToys, and NetLogo”. Smalltalk was an answer in the 70s. Etoys and Scratch are both limited experiments in limited areas, from which a lot was learned. Neither has the breadth, depth or legs for the purposes we are talking about. I think where Uri was able to take NetLogo was tremendously important, and is a definite facet of “what is actually needed?”.

                    If we go back to LOGO, I think we can pick up the impulse that Seymour, Wally, Danny, and Cynthia had — which was separately articulated by Jerry Bruner — find/invent/create an “intellectually honest version of the big ideas that matches to the learners’ level of development”.

                    They did this by taking the most interesting and powerful (in the sense of language and computing) programming language of its day — LISP — and used ideas from the best designed end-user language of that day — JOSS — to make something that could do what LISP could do but felt much more like JOSS for children and other end-users.

                    I thought — and still think — this was a truly great achievement.

                    I think we were able — in the best versions of Smalltalk at Xerox Parc — to get both of these once again with the ante raised a level or two in the 70s. I don’t think Smalltalk-80 got over this bar — it had forgotten about children.

                    Hypercard showed the next level of what end-users could deal with, but — unnecessarily — diluted the comprehensive/reach/depth part of the combination. The “unnecessarily” is still haunting.

                    To me, the central powerful idea of computing is: if you have a computer you can make other computers. If you are trying to teach “real computing” you have to find ways to teach this operationally at every level. This includes how to reveal that the “computer” you are using was itself derived from a different computer, etc.
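
                    For instance, a minimal sketch of one way to show this operationally: a few lines in one computer suffice to make another computer, a tiny stack machine whose programs are just data to the outer one, and which could in turn host yet another layer.

```python
# Illustrative sketch: one computer (here, Python) making another
# computer -- a tiny stack machine with four instructions. Programs
# for the inner machine are just data to the outer one.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
        elif op == "print":
            print(stack[-1])
    return stack

# Compute (3 + 4) * 10 on the inner machine.
run([("push", 3), ("push", 4), ("add",), ("push", 10), ("mul",), ("print",)])
```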

                    If we look around today, we find things that are ultimately made from computers that have been reduced to being simple tools for various reasons, but which unnecessarily have had what underlies them made opaque and unreachable.

                    A second powerful idea that comes from the first is: the prime reason for learning/doing computing is to extend our abilities to understand, represent, think about, and deal with complexities, especially complex systems. The first powerful idea is like Math, and the second is like Science (and both are new extensions of both).

                    The two main factors that I think are still operative that make what to do a moving target for each decade (or perhaps each duo-decade) are (a) Moore’s Law and its equivalents providing vastly different possibilities as time passes, and (b) that the field is still new enough that what we learn in one generation is not close to a complete enough story.

                    This means that we need to design and build afresh periodically.

                    We now live in an era that has lost its taste and energy for this — despite it being easier by far than it has ever been in history (this is part of my complaint about the current “normal” in computing — and in general — being redefined to such a low level). The pop culture prefers ever simpler subsets to any kind of development that qualitatively exceeds the past (the excruciatingly bad web browser even after 25 years is the ugly poster child for this stance).

                    So, I think we have to come up with what “Guitar” could mean for the next 10 years or so.

                    One way to think about this with regard to computer languages is that they aren’t just for manipulating bits, but for embodying “styles of approach”.

                    In other words, a very large part of a programming language is (should be) user interface design (I think most flunk).

                    LOGO did well with its day and its set of assumptions and problems.

                    The original Smalltalk design (-71) was to have the mechanisms of Carl Hewitt’s PLANNER used as the “interface language matcher” for objects. This is because I thought that the two most powerful new ideas for programming/computing ca 1970 were dynamic objects and “pattern directed invocation” (PLANNER was a superset of the later Prolog) — and what each implied about making models of ideas.

                    I thought a really good way to make a qualitative leap to the next levels of programming was to try to deal with the representation and inference problems that the AI of the day was trying to deal with — this was the most advanced thinking in programming and computing.

                    We didn’t do it because we had enough on our plate to invent “what is actually needed” for personal computing. I thought it would happen in the 80s, but it didn’t.

                    We did Etoys to explore a facet of learning: how to do real science in elementary schools. This eventually worked, but it wasn’t about computing.

                    (It is really tough to try to write connected discourse in this terrible WordPress system! — how could it be so much worse than 50 years ago?)

                    If I were going to try to take a pass at an actual design, I think there is a lot more known about how to do “systems learning and making” especially for children. This would include — as Perlis wanted — concerns about “all systems”, and would thus include many sciences in its purview.

                    I feel confident that something quite wonderful could be done that would be “real guitar” for young children, and would keep unfolding more and more of “real guitar” as the children develop. This would make a big difference in how they would think about the world, especially as adults.

                    The part I’d spend the most time and worry on would be the “representation and transformations of human ideas” facet that cannot be omitted in this day and age. This is quite a mess at all levels, including for pros.

                    For something that has been neglected and has deteriorated — like the web browser — we know what to do to vastly improve it. I think we are in much worse shape with regard to “Big Meaning” because we can’t point to great exemplars in the pro world (could there be a “CYC” for children?).

                    I think we have to make a major assault on “Big Meaning in the 21st century”, and figure out a children’s version of this. This is one of those cases where figuring out a child’s version could help the adult version (it did for dynamic objects in Smalltalk).

                    One part of the “systems world” that we do know how to do — and for children — is how to deal with aggregates and “aggregate measures” and coming to conclusions from these. This is also a toe dip into “Big Meaning”, but some of the cognitive aspects really have to be addressed as well.

                    This latter is worth a lot of pondering.

                    Robert Heinlein liked to say “The bull wears itself out on the cape and fails to see the sword”. I think quite a bit of today’s world is dominated by distractions that hide many swords. I think computing certainly is, and K-12 “computing education” cannot see other than the cape. This is what we have to break out of here.

                    • 12. Mark Guzdial  |  June 3, 2020 at 8:03 am

                      This is amazing, Alan – thank you! I’d like to share this comment as a guest post next week. Would that be okay?

                      You’ve helped me frame what I’m doing now. Thanks!

            • 13. gasstationwithoutpumps  |  June 3, 2020 at 12:17 am

              The kids are not writing the bots—they are just using them. The bots are being written by rogue programmers with a lot of learning and very little ethics.

              • 14. alanone1  |  June 3, 2020 at 12:32 am

                “The kids are not making the guns, they are just using them”. The key idea is the multiplicative effects of impulses in each day and age and the tools available to magnify them.

  • 15. Shuchi  |  June 2, 2020 at 9:14 pm

    “preparing every learner for a career as a software developer” — Honestly, I cannot think of a single person in CS education making this argument! So many of us in this last half decade have presented a plurality of reasons and approaches – the most compelling being Vogel, Santo and Ching’s article on Why CSForAll?

    • 16. Mark Guzdial  |  June 2, 2020 at 9:17 pm

      I’m glad that you don’t face that, Shuchi. Perhaps I interact with people less enlightened than the people you interact with.

  • […] a great software engineer” does not consider alternative endpoints for computing education (see post here). Not all our students want those kinds of jobs. Many of our students are much more interested in […]

  • […] programming (that’s me again). Let’s accept a wide range of abilities and interests (and endpoints) without denigrating those who will learn and work […]

  • […] more than producing software professionals. There are certainly CS teachers who disagree with me. An example is Scott Portnoff’s critique of CS curricula that does not adequately prepare students for the AP CS A exam and the CS major. I agree that we […]

  • […] who learns to program is going to be a software engineer. (See the work I cite often on “alternative endpoints.”) Using good software engineering practices as the measure of success doesn’t make sense, […]

  • […] education? For example, what would an AP in Computational Science look like? What would it mean to value alternative endpoints in “Computing Education for […]

