Literature is to Composition, as Computer Science is to Computational Literacy/Thinking

November 23, 2018 at 7:00 am 41 comments


Annette Vee was visiting in Ann Arbor, and looked me up. We had coffee and a great conversation.  Annette is an English professor who teaches Composition at the University of Pittsburgh (see website here). She published a book last year with MIT Press, Coding Literacy: How Computer Programming Is Changing Writing. (I’m partway through it and recommend it!) She knew me from this blog and my other writing about computational literacy. I was thrilled to meet someone who makes the argument for code-as-literacy with a real claim to understanding literacy.

One of the themes in our conversation was the distinction between literature and composition.  (I’m going to summarize something we were talking about — Annette is not responsible for me getting things wrong here.) Literature is about doing writing very well, about writing great works that stand the test of time. It’s about understanding and emulating great writers.  Composition is about writing well for communication. It’s about letters to Grandma, and office memos, and making your emails effective.  Composition is about writing understandable prose, not great prose as in literature. People in literature sometimes look down on those in composition.

There’s a similar distinction to be made between computer science as it’s taught in Universities and what Annette and I are calling coding/computational literacy (but which might be what Aman Yadav and Shuchi Grover are calling computational thinking).  Computer science aims to prepare people to engineer complex, robust, and secure systems that work effectively for many users. Computational literacy is about people using code to communicate, to express thoughts, and to test ideas. This code doesn’t have to be pretty or robust. It certainly shouldn’t be complex, or nobody will do it. It should be secure, but that security should probably be built into the programming system rather than expecting to teach people about it (as Ben Herold recently talked about).  People in computer science will likely look down on those teaching computational literacy or computational thinking. That’s okay.

Few people will write literature. Everyone will compose.



41 Comments

  • 1. alanone1  |  November 23, 2018 at 8:07 am

    I ordered her book. But it seems from your description that she is conflating categories: e.g., “classical music,” which includes the subset “great classical music”; or, in this case, literature as “writings about ideas,” which includes the subset “great writing about ideas”.

    And also therefore “literacy” — which is “being able to read and understand about ideas” with the subsets that include great styles, etc.

    To take a case in point: recently I finally got around to reading the account of the atomic bomb project — “Now It Can Be Told” — by the overall head of the project, General Leslie Groves. (And I kicked myself for not doing this years ago — I had read the Rhodes and other accounts, which were mostly about Los Alamos and the physicists involved, but Groves’ story was much larger: the immense engineering effort, with as many as 800,000 people involved and whole cities created from scratch all over the country to deal with various parts of the process.)

    By her definition (per you) this is not “literature”, because Groves cannot be considered a prose stylist in any sense. He is a military engineer — one of the builders of the Pentagon — whose writing skills were devoted to reports and military “5 paragraph letters”.

    However, the account is quite spellbinding: in part because the story and ideas are amazing, and in part because of the neutral presentation style, which does not try to accommodate itself to fashion or audience.

    I don’t see how this could fail to be called “literature”.

    We should find candidates from the computer science literature for “coding literature” (please!).

    I have always liked Don Knuth’s “B Book” for TeX, which is the TeX program embedded in Don’s explanation of how it works (using a literate-programming system he invented called “WEB”). In the version I have, the programming language he used was Pascal (not many redeeming features style-wise), but via the embedding in English and a few extra features to isolate sections of code, Don makes it very readable and understandable.

    Brian Harvey’s “Computer Science LOGO Style” is what I’d call “literature of coding”.

    David Harel wrote a pretty good book (I forget the name and I’m in London right now) using pseudo-code to help bridge the reading gaps.

    So, I think we should be emphasizing: ideas, clarity, and understanding first. (Poetics are always welcome if they add rather than subtract from the first three!)

    • 2. profvee  |  November 25, 2018 at 10:27 am

      First, I want to say thanks to Mark for the great conversation, and for looping me in to this community (longtime listener, first-time caller…). He’s right that I don’t draw this analogy between composition and CS Ed in my book, but I think it’s apt. My book is more about bringing the history and concepts of literacy to the conversations about how computing and programming work in the world. I rely on his work and others in CS Ed, but I don’t purport to contribute directly to the field.

      But this analogy, like any analogy, is limited. I think it’s not worth belaboring the point about where the precise line between “literature” and “composition” is, or “technical writing,” etc. There are obviously overlaps.

      But the point is: in composition, we think about how everyday people use writing for their own purposes. We try to guide and support that writing in various ways to help them meet their communication goals (e.g., succeed in college, work towards social justice, write their grandmother, compose a memoir, become a diplomat, make a grocery list). We focus on teaching writing, in a lot of different forms. Literature as a field generally focuses on whom we might call “Authors,” those who write professionally, who push boundaries of writing in exciting ways, or have been canonized for their skill and status. I might add that *whom we call authors* has a highly fraught and gendered and racialized and normative history. Most contemporary Lit scholars are well aware of this, and many of them work around the fraught designation of “authorship” in really imaginative ways. But they still work with *published authors*. Most of the writing in Composition never gets published, is never read by more than a few people, never makes a splash. It might make money, because this kind of writing is often workaday writing: all of the writing that we do to communicate, report, negotiate. Like this blog. Which is published online, of course. So this distinction about “publication” is also a fuzzy one.

      Again, I don’t think it’s worth scrutinizing the borders here too much because they’ll break down with any extended analysis. It’s more a distinction about values. Composition wants to support people’s writing and goals and pull back on judgment, because random people’s opinion of my letters to my family or my handwritten memoir addressed to my children or grocery list are completely irrelevant. Does this comp class make me *want* to write? Better able to consider my situation and audience and choose language to meet my goals? Then that’s a win for Composition.

      So, to connect back to Mark’s point: I think he’s pointing out that lots of people can use programming and things that CS Ed teaches without becoming professional “authors” of code. So whether their code is the most efficient or is sustainable or secure or well commented doesn’t matter as much as: did they learn something about computing? Are they able to communicate or build or do something they couldn’t before? Are they *interested* in the endeavor? Then that’s a win. Authorship can come later, to the few who are interested in that route.

      • 3. shriramkrishnamurthi  |  November 25, 2018 at 11:10 am

        Thanks for your thoughts and comments!

        While I have heard Mark make these points many times and I have often felt somewhat inclined to agree with him, I think I still resist it at least a little. It is not for lack of sympathy with his overall point. After all, I myself have done some amount of research in what is called “end-user programming”, and I’m a huge fan of spreadsheets, all for reasons similar to Mark’s motivation.

        The problem is that the ends don’t ring-fence the methods. Your goal may have been just exploratory for some non-technical reason (e.g., social justice), and so long as nobody ever sees your exploration, that’s probably just fine: many calculations and spreadsheets never leave my machine. But if you want to advertise your views, you have a growing obligation to publicize your methods (justifiably so, I think, and I’ve been leading a charge on “artifact evaluation” toward this end). But once you publicize them, others may use them. And now you have, advertently or otherwise, put your code on someone else’s desktop, and so taken some responsibility for using up their processor cycles, for costing them in cloud storage, or, of course, for potentially exposing them to security problems.

        I think this last point is where analogies to writing break down. There’s live code in the published artifact.

        But we can actually push the analogy further, if we want. Of course reading another person’s essays may corrupt my thoughts (we’re all reading Mark’s blog precisely because that’s what we *hope* will happen!), but the effects are not of a technical nature. But suppose reading Mark’s blog were to expose me to security problems, then wouldn’t that put some obligation on Mark? If, say, visiting this page were to download a virus on my machine, wouldn’t we tell Mark to secure this site better? (And many security attacks use the “popular site” vector precisely because of the attraction of eyeballs.) So just as we would expect Mark to secure his Web site for his readers, we should also expect Mark to take some responsibility for the code he publishes (to secure his “runners”, if you will).

        Mark deals with securing his writing by using WordPress and trusting them to deal with all these niggling details. (For instance, Mark doesn’t even build his own authentication system, outsourcing it entirely.) That is, Mark “writes no code” — all his effect is in prose. We could imagine the same with a computational exploration. I could just write an essay about what I learned. But these days I have an obligation to also publish the code that went with it. I may *think* of the code as a “hobbyist” activity, but I am exposing others to risk. Therefore, the distinction that there is such a thing as a purely “amateur” publisher of code versus a “professional” one strikes me as inaccurate and irresponsible — unless the amateur uses truly sandboxed, “amateur” publishing environments, the moral equivalent of writing with WordPress. But amateurs often don’t.

      • 4. gasstationwithoutpumps  |  November 25, 2018 at 12:23 pm

        I have given a lot of thought to the parallels between programming and writing, having taught both for many years. There are a number of useful parallels, both for students and for teachers.

        Teaching composition is closely analogous to teaching programming—both need to come after students already have some literacy and can distinguish (perhaps with some effort) between successful and unsuccessful attempts. Neither freshman composition nor freshman programming courses are aimed at producing professionals, merely at producing marginally competent writers and programmers, who can later (if they choose) improve their work to professional levels. The vast majority will be satisfied with being marginally competent, and many will think themselves much better than they really are.

        For both programming and writing, a big chunk of the effort is in figuring out the point of the piece being created—what it is supposed to accomplish, then breaking down that overall goal into well-structured components that can be individually tackled. Breaking goals into subgoals and paying attention to the interfaces between these subgoals is a major part of all engineering, and it does help students to explicitly point out the transferability.

        There are a lot of structural similarities also: sections to modules, paragraphs to procedures, topic sentences to block comments, and syntax and punctuation in both.

        From a teaching standpoint, the biggest value to students is in detailed feedback from someone who has read their work carefully. Many of my students in my bioinformatics course have said that I was the first person who had ever read their programs and looked at the comments or the program structure (too many programming classes have scaled up to the point where essentially all grading is automatic). Many of these students were graduate students who had completed a BS in computer science with no one having ever looked critically at their code.

        Many of my students have also said that they got more useful feedback on their writing in my electronics course than in any of their writing courses—the writing courses had all focused on genres of writing that they had no interest in and they had gotten only vague generic advice on their writing. In the electronics course, I provide feedback on the content, the structure, and the details of the writing.

        Providing the detailed feedback that students need is time-consuming—I spend about 5 hours per student on written feedback for a 10-week course. Writing instructors are familiar with the economics of this process, and most contracts for writing instructors have explicit class-size caps (generally around 15 to 25 students, depending on the wealth of the institution). Rather than limit class size, programming instructors have tried to automate the process of grading, with the result that students get no feedback on anything other than low-level syntax (from their compilers) and input/output behavior (from the automated grading).

        • 5. Bonnie  |  November 26, 2018 at 8:32 am

          I completely agree with you on the links between writing and programming. I often find that my best CS students are not the mathematical geniuses, but rather people who are comfortable enough with math but who also can write a tight, well-reasoned, analytical paper. That ability to organize thoughts seems to carry across both domains.

          I also think the role of detailed feedback on student programs is critical, and I am appalled at how rarely it is done. I think it is every bit as critical as the use of worked examples. Universities routinely put the resources into composition classes, staffing them so that instructors can give students extensive feedback on their writing. If computer science is really as important as everyone claims it is, then the resources need to be put into staffing programming courses so that students can get good feedback.

          • 6. profvee  |  November 26, 2018 at 9:58 am

            Thanks, everyone, for your thoughts here.

            A resounding YES! to @gasstationwithoutpumps on the benefits of closely reading student work. In Composition, we have a long tradition of resisting automated essay scoring, grading, and testing. The most vocal opponent has been Les Perelman, who has written a Turing Machine-type prose generator to fool an automated scorer. Turns out it’s quite easy to do with some pretentious phrases and convoluted syntax. I assume the greater emphasis on automated grading in that area reflects the belief that since code is written for computers, it makes sense to have it read solely by computers. Of course, we know from Knuth and so many others that this really isn’t the point of writing good code. In Composition, there’s another (less bad?) approach to cutting costs: grad students and adjuncts teach many of those courses. These teachers are not automated, but they are often underpaid and less experienced, and so the quality and approaches of first-year comp courses vary widely.
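In that spirit, here is a toy sketch (not Perelman’s actual generator; the phrase lists are invented for illustration) of how easily grammatical-looking, pretentious prose can be assembled at random:

```python
import random

# Hypothetical phrase banks, invented for illustration -- not Perelman's wordlists.
OPENERS = ["In the modern era,", "Throughout recorded history,", "From a holistic perspective,"]
SUBJECTS = ["the paradigm of discourse", "societal infrastructure", "the human condition"]
VERBS = ["necessitates", "problematizes", "recapitulates"]
OBJECTS = ["a multiplicity of epistemologies.", "the very fabric of cognition.", "an inexorable dialectic."]

def pompous_sentence(rng):
    """Assemble one grammatical-looking but empty sentence."""
    return " ".join([rng.choice(OPENERS), rng.choice(SUBJECTS),
                     rng.choice(VERBS), rng.choice(OBJECTS)])

def pompous_essay(n_sentences=5, seed=0):
    """A deterministic 'essay' of n_sentences pompous sentences."""
    rng = random.Random(seed)
    return " ".join(pompous_sentence(rng) for _ in range(n_sentences))

print(pompous_essay())
```

Nothing here models meaning at all, which is the point: a scorer that rewards vocabulary and sentence length over sense has no defense against it.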

            I once attended a writing workshop led by Richard Gabriel. It was beautiful: everyone sharing their work, and everyone providing constructive critique on how to make it stronger. He has an MFA and uses that creative writing workshop model in his workshops. From an outsider’s perspective at least, it seems to work really well to support the writers of the projects.

            Which brings me to @shriramkrishnamurthi’s point about security and responsibility with running code. I see your point: it’s impossible to know how far a piece of code will travel once it’s shared, and writers should take responsibility for the effects of this code once it’s out in the wild. I’m no security expert, so I wonder: is it really possible to anticipate all consequences of running code in the wild? Sure, the basic stuff: infinite loops, preventing SQL injections, etc. I’ve read the horror stories about political databases being hacked easily, and it is quite alarming anyone ever let that stuff out. But: can you anticipate **everything that might happen???** I’m talking also about the social consequences of code. For instance: Facebook. Could anyone have fully anticipated the impact it would have on our social relations and politics? If they could, maybe we would have nipped that in the bud. Obviously, this is an extreme example, but my point is that everyone makes choices to satisfice about the consequences of their code: “This is good enough.” And that “good enough” line may be a little fuzzy.

            I’m interested in getting people excited about programming in the first place. Nothing kills one’s desire to write faster than a red pen bent on correcting grammar mistakes. And front-loading lessons on security probably does the same for programming. As Mark mentioned earlier, some places are safer than others to play with code. I’d encourage someone to have fun in Scratch before I showed them how to mess with their OS. I think this approach doesn’t outsource everything to technology, as you could still do that in the model of the Composition class, with a real human teacher reading and commenting on the code.

            Why haven’t CS depts taken up the model of the first year comp course in college? I can see some drawbacks myself, but this is a genuine question…

  • 7. gasstationwithoutpumps  |  November 23, 2018 at 8:19 am

    Mark, I don’t think you have the analogy quite right. The study of literature is not about writing, nor how to write, but about what has been written. In sort of the same way, “computer science” is not about programming, or how to program, but about what can be programmed.

    Composition is the introduction to writing, so corresponds to programming (not to “computational literacy” which is a lower level—more analogous to reading than to composition).

    There are higher levels of writing than freshman composition classes (journalism, creative writing, technical writing, …), and these correspond roughly to software engineering.

    There are other fields in the humanities that look at language and writing from other viewpoints (such as linguistics or classics) that also have analogies in computational fields.

    There are a lot of good analogies between programming and writing, but I think you miss the mark slightly here.

  • 8. Mark Guzdial  |  November 23, 2018 at 9:13 am

    Thank you, both! I appreciate your critiques of these ideas.

    None of these flaws should be attributed to Annette or her book. She and I were having a coffee conversation about where we are in our lives. She’s a Composition teacher, newly tenured in an English department. I’m a Computing Education researcher, now in a (fairly traditional) CS department. This analogy is not in her book. Her book is much deeper and more interesting than that. (I’m partway into reading it.)

    Alan, I like your framing in terms of “ideas.” I like how you describe Knuth’s and Harvey’s books, and I think you mean this Harel book:

    Kevin, “what has been written” and “what can be programmed” is a more interesting depiction of both Literature and Computer Science.


  • 9. Alfred Thompson  |  November 23, 2018 at 10:05 am

    This brings up something I have been wondering about. We talk about a coding or programming gene and say that it doesn’t exist. Everyone can learn to code. But I keep wondering if there is something innate in people that lets them move from “coding composition” to “coding literature.” I think we can agree (maybe not?) that everyone can learn to write, but not everyone is going to be able to write great literature.

  • 10. orcmid  |  November 23, 2018 at 12:07 pm

    It is certainly relevant that Knuth’s WEB/TANGLE software, and its application in the presentation of the TeX implementation, is called Literate Programming. He continues to develop code in that manner to this day, although I suspect the code is often in the C language :).

    In case I haven’t mentioned it already, I think the idea of “fluency” is appropriate to this conversation. Computational fluency does not have to demand programming, and I think it maps to reliance on information technologies in communities of practice better. I prefer it over the notion of composition reduced to programming (or vice versa).

    • 11. orcmid  |  November 23, 2018 at 12:18 pm


      1. It is interesting to me that Literate Programming doesn’t have much uptake, although there is kind of a cult following. I suspect it is a matter of requiring too much writing and that may be tied to how open-source projects of my acquaintance eschew comments, somehow believing that the code is its own documentation.

      2. The fact of Literate Programming is, for me, demonstration that code does not ever reveal what a program is *for*, only what it is, and they are rarely the same thing. Literate Programming provides one means for narrating code against some purpose intended for it, and also bridging between the purposive requirement and the implementing software in a connected narrative.
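A small illustration of that point (a hypothetical example in Python, not from Knuth): the two functions below compute exactly the same thing, but only the narrated version reveals what the code is *for*:

```python
# WHAT IT IS: the bare code, correct but mute about purpose.
def f(amounts, rate):
    return [round(a * rate, 2) for a in amounts]

# WHAT IT IS FOR: identical logic, narrated in the literate style.
def convert_invoice_totals(amounts_usd, eur_per_usd):
    """Convert a batch of invoice line totals from USD to EUR.

    Accounting rules (hypothetical here) require two-decimal rounding
    on each line item *before* any summing, so we round per amount
    rather than rounding a converted sum. The code is identical to
    f() above; the purpose lives only in this narration.
    """
    return [round(a * eur_per_usd, 2) for a in amounts_usd]
```

The narration is exactly what cannot be recovered from the code itself, which is the gap literate programming sets out to fill.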

      • 12. alanone1  |  November 23, 2018 at 12:39 pm

        I think there is “the why”, “the what”, and “the how”.

        Right now, I think English in the manner of Don’s style would be the best for “the whys”.

        For the last two, one could imagine a programming language that has two columns for code.

        The left column would have “the whats” — requirements, specifications, constraints, etc. — expressed in a declarative form in a language that is as readable as possible.

        The right column would have optional optimizations using forms suitable for many kinds of strategies and tactics.

        The system should be able to run completely from the left column with everything blank (or turned off in the right column). This might require debugging on a supercomputer (think of the CAD->SIM->FAB cycle in established kinds of engineering today).

        The optimizations — “the hows” — can be checked to see if they do only the same things as the requirements — “the whats”.

        One of the biggest problems in most coding styles — whether grungy C or attempts at pristine Haskell — is the intertwining of meanings and optimizations to produce cacophony.

        I think I first got this idea from one of Bertrand Russell’s explanations of how he was able to write so clearly (it included the idea of making the early sentences glosses, with later sentences tidying up the oversimplifications). This suited readers, he said, because they can get the gist from something that is simple and not quite true, and then home in more easily on the more detailed and accurate account that follows.
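A rough sketch of the two-column idea (hypothetical names, in Python rather than the imagined language): the “what” is a declarative specification that runs on its own, the “how” is an optional optimization, and the system checks that the optimization does only what the specification does:

```python
import random

# "Left column": the WHAT -- a declarative, readable specification.
# It runs correctly on its own, just slowly.
def spec_max_subarray(xs):
    """The largest sum over all contiguous, non-empty slices of xs."""
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

# "Right column": the HOW -- an optional optimization (Kadane's algorithm).
def fast_max_subarray(xs):
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# The system's job: check that the optimization does only what the spec does.
def check(trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-9, 9) for _ in range(rng.randint(1, 12))]
        assert fast_max_subarray(xs) == spec_max_subarray(xs), xs
    return True
```

The specification alone is enough to run the program (slowly), matching the requirement that the system work with the right column blank; the check plays the role of verifying the “hows” against the “whats”.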

  • 14. shriramkrishnamurthi  |  November 23, 2018 at 4:58 pm

    “It should be secure, but that security should probably be built into the programming system rather than expecting to teach people about it (as Ben Herold recently talked about).” — and the VERY NEXT TWEET says “Technologists think technology will solve any problem.”

    Should’ve kept reading, Mark!

    • 15. Mark Guzdial  |  November 23, 2018 at 5:36 pm

      I did see that. Sorry, I’m not getting your point.

      • 16. shriramkrishnamurthi  |  November 23, 2018 at 10:11 pm

        That security should be “built in” rather than taught seems exactly the sort of thing the next tweet is cautioning about: expecting that a technology will solve a problem that is often human.

        For sure, as a PL person, there are certain very limited kinds of security that can be built into the language; though let me know when you’re ready to change the programming language you use and teach. (Also be wary that people who’ve spent decades trying to build more secure languages and are tired of being ignored are then also the ones accused of traits that people like Jens Mönig will dump on and you will agree with.)

        But I also teach usable security, and it keeps me grounded any time I let my brain lazily think security can be “built into the programming system”. At least, my mortal brain (despite said “obnoxious claims of intellectual superiority”) doesn’t see how that’s possible.

        • 17. Mark Guzdial  |  November 24, 2018 at 10:20 am

          There’s a lot there. I’ll assemble an answer for at least some of it.

          I didn’t see Herold’s quotes as being about the technology but about the system. Certainly, technology plays a role, but I think about the system as being about broader issues like legal and regulatory frameworks. I thought Herold was speaking about cybersecurity in the same way that some experts talk about financial education (see previous blog post). We cannot effectively teach everyone everything that they need to know to make smart financial decisions, so (the argument goes) we should change financial systems so that they’re easier to understand and the default behavior is safe. Similarly, I read Herold’s tweets as saying that we cannot possibly teach all programmers to be smart in terms of cybersecurity, so we should change the programming system (e.g., can we require hardware and software manufacturers to guarantee certain levels of safeguards, like the protections of the App Store for iOS?) so that all programmers don’t have to know about cybersecurity practices all the time.

          I trust that you know the technological issues of cybersecurity far better than me. In general, I don’t use a computer science lens first when looking at problems. I look at problems as a social scientist first, and at that, I use mostly learning sciences and economics lenses. Teaching people to do the right thing all the time is expensive, and society isn’t willing to bear the costs to make that happen for large populations of people. So, we have to change whole systems.

          I can’t answer about being tired of being ignored. Most academic fields are mostly ignored most of the time.

          But I will defend what I thought Jens’ tweets were about. I think this is the one you’re referencing:

          I’ll surmise that obnoxious claims of intellectual superiority paired with narcissistic intolerance of other programming paradigms have had a greater share in both Smalltalk‘s and HtDP‘s stalling than syntax

          Most of what we teach in CS education is based on claims of intellectual superiority. I’ve taught classes where I was required to teach UML because it was so valuable for O-O development (no evidence), and to teach Agile methods because they were so much better than anything else (no evidence). Elizabeth Patitsas’s dissertation is mostly built on social closure theory, which is all about intolerance of other approaches as a way of cementing advantages. Yes, Jens is picking on Smalltalk and HtDP, but those are hardly the worst offenders. So, yes, I “liked” and “retweeted” his tweet. I do see now why you were offended, because he was calling out HtDP explicitly. I’m sorry for that. I’ve been associated with Smalltalk for many years, and I do see lots of claims of intellectual superiority without support, and with intolerance of other approaches. I did recently make several tweets where I talked about the advantages of your approach (see tweet here), so I see how you have supportable claims for superiority.

  • 18. Bonnie  |  November 26, 2018 at 8:41 am

    I deeply disagree with this statement
    “Computational literacy is about people using code to communicate, to express thoughts, and to test ideas. This code doesn’t have to be pretty or robust. ”

    Why? Because when I worked in industry, I saw many cases where badly written exploratory code, usually written by some end user, turned into production code without any modification. Managers tend to think that if the code exists, that is all that is needed. I worked on one project where code that had been intended as a demo, written by a healthcare tech who had taught himself a little Java, had been turned into an electronic claims processing system at a major hospital. It failed constantly, but no one was willing to commit the resources to completely redo it. In the financial world, there are a lot of problems when traders write exploratory code and then expect it to be put into production immediately. Most of the time, the code needs to be rewritten to address regulatory, security, and performance issues, but again, the pressure to bypass is immense. This is why end users who learn to program need to also be taught the rudiments of testing and good design.

    • 19. Mark Guzdial  |  November 26, 2018 at 9:09 am

      I hear your argument as saying that, because managers will make bad decisions and move prototypes into production, we should teach everyone testing and design. Did I get that right?

      Which is less expensive — to teach managers how to manage technical projects, or to teach *everyone* who might program about testing and design? I don’t know if we know how to teach everyone about testing and design. We’re not good at teaching those skills systematically to all CS majors, and those are students who are motivated to learn good software development practices.

      • 20. shriramkrishnamurthi  |  November 26, 2018 at 9:50 am

        No, you can’t “blame the victim” here. Managers have deadlines, and they have to have some trust in the product they are delivered. Also, many managers don’t have the training to know good code from bad, either. So an employee produces what seems to be a working product, and they sign off on it; if the employee actually produced a minefield, how would a manager know?

        At a place like Google, this sort of thing is rigorously vetted with code reviews and such (and super-technically-competent managers). But let’s spare a thought for the tens of thousands of companies out there without Google’s or Microsoft’s pockets and aura whose managers are, perhaps, a bit lacking in some of these respects. (In fact, I know other blue-chip computing companies where the code review culture is broken.) You can’t argue “programmers have failings” and just move fixing all of those onto the managers — who may have themselves been such programmers once, and in many cases were not even. (Surely they bring other skills to compensate, but that doesn’t always help them find security/robustness bugs.)

        Jane Street is an instructive story. To paraphrase greatly, at some point the “money people” realized that the coders were off building products in Visual Basic and the like, and those products were trading *their money*. The company clamped down and unified around OCaml, and now everyone there does everything[*] in OCaml. They centered around a technology that improves things systematically. On the one hand, they have the money to hire OCaml people. On the other hand, most smaller companies haven’t actually tried this experiment and failed; they’re just certain it can’t be done at all. (Reminds me of Paul Graham’s Viaweb experience.)

        [*] I assume someone somewhere in Jane Street uses Excel, but you get my point.

        We do have methods for teaching students to program more rigorously. Half the problem is educators. Need I remind you of the exchange at ICER? An author says “Students often solve the wrong problem!” An audience member says “We have a rigorous method that helps address this by having students write tests early.” The responses range everywhere from “My students would never do that!” to “But, but, bricolage!!!” In fact, everything other than, “Yeah, maybe we should give that a try, and figure out how to make it work”. Indeed, you (Mark) have consistently argued that end-users do NOT need to deal with these things, as opposed to “It’s really hard, and I’d like to figure out how to make it happen”. The attitude difference is hugely important, and a necessary first step to changing things.
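
        The “write tests early” idea can be made concrete with a minimal sketch (the function here is a hypothetical illustration, not anyone’s actual curriculum): the student states what the function should do on concrete inputs before writing its body, so a misread problem statement fails immediately.

```python
# Examples-first: concrete input/output examples are written *before*
# the function body, pinning down the problem and doubling as
# executable checks. (Illustrative function, not from the thread.)

def kinetic_energy(mass_kg, speed_m_s):
    """KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

# The examples, written first, become the tests:
assert kinetic_energy(2.0, 3.0) == 9.0    # 0.5 * 2 * 9
assert kinetic_energy(0.0, 10.0) == 0.0   # no mass, no energy
```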

        • 21. Mark Guzdial  |  November 26, 2018 at 10:59 am

          Is the manager the victim here? Or is it the user who deals with broken systems? There’s a critical point in your first paragraph. Does the employee (assuming an end-user programmer) sign off on it? Can they be expected to be able to do that with any confidence?

          Yes, my preference is that end-users do not deal with these things. I am not interested in teaching test-driven development to end-users (which is many, many people) because one day maybe somebody’s code might end up in production due to a lack of a code review culture and a series of bad decisions. However, I would be interested in teaching students to write tests early if it was about teaching them mechanistic reasoning which I think is what you’d want to do if students were, say, writing physics simulations or data analysis programs from which people are going to make claims. Context matters.

          I’m sorry that you had so many negative experiences at ICER, but I am not responsible for them. I did not pay attention to all the exchanges, and I’m certainly not going to defend *any* of the participants in the exchange. You’re tired of being ignored. I’m tired of you attacking me. I welcome discussion and even critique. Language like “blame the victim” and referring to me as “petulantly walking away” and being “irresponsible” is not constructive. It’s meant to hurt me without adding to the conversation. Congratulations — you achieved your goal.

          • 22. shriramkrishnamurthi  |  November 26, 2018 at 4:08 pm

            I’m sorry that you feel attacked; my comments are about a community’s attitude to ideas, not about you personally (and if anything you’ve been one of the most open-minded). Still, if I’ve created that impression, that’s my fault. An unqualified apology.

            Your analysis of my own mental state isn’t quite accurate, but I think my psychoanalysis is a topic that interests at most two, one, or maybe even zero people (-:, so I won’t engage in it on this blog. We can talk about that some other time and place.

            I do want to correct one seemingly niggling but very important detail, because I’m afraid of a false syllogism. What I advocate and teach is *not* test-driven programming. That’s a very narrow and specific idea and one that I largely reject in most contexts, not only for end-users. The syllogism I’m worried about is roughly:

            Edwards says TDD doesn’t work
            Guzdial says SK proposes TDD
            Therefore SK’s proposal won’t work

            The differences between TDD and what we advocate are perhaps also not really best explored in a deeply-nested comment thread on a *somewhat* unrelated topic [*], so I’ll offer to take this off-line too.

            [*] “Somewhat” because I think it is a rather interesting question to ask what the analog of various software practices is in writing: practices like writing data definitions, creating examples, testing, etc. I think that’s actually rather relevant to the question of “how much is programming like writing”.

            • 23. Mark Guzdial  |  November 26, 2018 at 6:55 pm

              Thank you, Shriram. As a new faculty member, I’m more likely to feel attacked, even if I’m not.

              I agree — I inserted the phrase “test-driven programming,” and that’s not what you said.

              I’m working on some blog posts to pop up these themes. (I suspect that’s how Jag ended up commenting here — I asked him and several others involved in computing ethics to give me feedback.) This thread has led to some fascinating themes and ideas. I’m buying new books today to try to educate myself about the issues.

      • 24. Bonnie  |  November 26, 2018 at 10:02 am

        In many companies the managers have never been trained to manage technical projects, and will never get that training. For example, in the hospital example, the managers were all clinicians who had moved over to administration. Hospitals are cash strapped places and do not have the money to send their administrators for extensive training in technical management. Often, it isn’t money, but politics. In many financial companies, the managers are financial people who see the software development team as a cost to be controlled rather than a generator of money. And sometimes, the managers are also the end users writing the bad code and insisting that it be used. I have seen all of these scenarios.
        And the deadline pressure from higher up is enormous, so even if you do have a manager with a deep understanding of software development, he or she still has to get the product out.

        • 25. H V Jagadish  |  November 26, 2018 at 6:17 pm

          If software is made available commercially, or is used in production, then the company doing so is responsible for any failures of the software. I infer this by analogy with physical products. If I sell a toy with a small part that breaks off with use, or a refrigerant that punches a hole in the ozone layer, I am expected to fix the problem, and accept responsibility. If these bad things happen even though I exercised due care, I am still responsible; however, I may not be responsible for consequential damages. (E.g. if I sell you a leaky pot, I am responsible for replacing it, but possibly not for the stains on your carpet).

          Every computationally literate person should build software, and its quality will vary, including in how secure it is. If a company now sells this software, it takes responsibility for the flaws, including any security holes. What taking responsibility means will depend on our societal standards for due care on the part of the company. Even with the very low standards some of the comments above suggest, there is at least the responsibility to correct and replace. With higher standards for due care, a company that doesn’t exercise due care may be on the hook for much more, e.g. consequential damages. Yes, managers in the company do have to take responsibility for their decisions, and they can do so without themselves being programmers.

          If the software isn’t sold, the responsibility is correspondingly less. If a computationally literate physicist writes some code and shares it with some other physicists who like it and use it, the users may have the burden of determining whether they trust the author of the code.

          [Even though my comments above may sound like legalese, I am a computer scientist and not a lawyer. In fact, I am not even certain that my understanding of the relevant law is correct, in any jurisdiction. However, I stand behind the rationale, whatever be its specific manifestation in law.]

          • 26. Bonnie  |  November 26, 2018 at 9:03 pm

            A lot of software is neither sold commercially nor informally shared. An awful lot of software is developed for in-house use, such as the electronic claims processing software I mentioned earlier. One of the flaws with that system was that it generated incorrect claims data and stored it in the database. The fix? Developers were asked to spend their weekends running a 30-hour repair job every two weeks. The managers preferred that to rewriting the software correctly.

            Many, many flaws exist in software that never get publicized. In many cases, the flaws churn along at a low level, causing inefficiency and waste. And then, suddenly something happens and the flaw becomes critical, and causes a data breach or a medication error or a glitch in program trading. Think of all the data breaches that happen every day. Are the companies ever held responsible? Have you ever seen a manager held responsible?

            I worked in the software industry for years, and my husband and most of my friends are still in the industry. That is perhaps why I am a tad jaded. And an end user who knows a little bit of coding can be a dangerous thing in the rush-rush environment of many companies.

            • 27. gasstationwithoutpumps  |  November 26, 2018 at 10:24 pm

              Bonnie, you seem to be arguing against CS-for-all, and in favor of a guild model, where only masters who have passed apprenticeship and journeyman status are allowed to create software.

              • 28. Bonnie  |  November 27, 2018 at 8:11 am

                My argument is that CS-for-all needs to be about more than just teaching coding. Some sense of the engineering mindset, which includes design, testing, and risk analysis, also needs to be taught. One of the arguments for CS-for-all that Mark makes is that people end up needing to write code at their jobs. If they are writing code that is going to be used by others, either customers or within the company, then there needs to be some assurance that the code is correct.
                My husband is a senior technical manager at a financial company. He tells me that one of their big problems is the huge number of end-user spreadsheets that contain mistakes. The spreadsheets are developed by people at the trading desks and are typically very complex. When the regular software systems produce numbers that don’t match someone’s spreadsheet, all hell breaks loose and the developers end up spending hours trying to figure out why. It usually ends up being a mistake in the spreadsheet. Not only is this a big time sink for the software department, but my husband says that it terrifies him to think of all the uncaught mistakes out there. These spreadsheets are used for day-to-day trading decisions, so they have a real impact.
                So yes, I think CS-for-all needs to be more expansive than just learning the ability to write a little code.

                • 29. shriramkrishnamurthi  |  November 27, 2018 at 8:56 am

                  Hear, hear.

                  Spreadsheets are a great example. The makers of spreadsheets have given up all sense of responsibility about them (back to the thread w/ Jagadish) and pushed all of it onto the users, who are often the least computationally trained. And we end up with a large number of spreadsheet horror stories (see Panko’s work, Rogoff/Reinhart, etc.).

                  That doesn’t mean we shouldn’t teach spreadsheets. That doesn’t mean spreadsheet makers shouldn’t work harder (akin to Mark’s comments about baking in security). But it also doesn’t mean we hand a powerful weapon to someone without making sure they know where the locks are and what harm it can cause.

                  There’s a useful analogy in the differences between German and American car driving education. In the US, driving is a right, and education is mostly a process of getting you to that right quickly. In Germany, a car is a large, dangerous object, and driving it is a responsibility that must be taken seriously. CS-for-all seems to be in the “American” model, and Bonnie and I are arguing it should be in the “German” model.


                  That doesn’t mean driving in Germany is considered a “guild” activity, so I think this is an unfair reductio.

                  • 30. gasstationwithoutpumps  |  November 27, 2018 at 10:48 am

                    I like the driver’s ed analogy—it points out the hazards of poor education (as practiced in the US).

                    I didn’t really think that Bonnie was in favor of a guild model—just that the way she framed her argument led in that direction. Using the driver’s ed analogy frames her argument better.

            • 31. H V Jagadish  |  November 26, 2018 at 10:45 pm

              Dear Bonnie,
              In-house use for profit is only a minor step removed from commercial sale, and most of my arguments above carry over. If a company provides a service using software developed in-house, users have expectations of this service. Failure to deliver the expected service with sufficient quality can lead to liability, or at least loss of business, depending on the service agreement.

              I agree with you that many companies are poorly run, and have unreasonable expectations of their developers. But I think much of this arises when managers do not understand software development. Most of us who are not poets do not understand the poetry creation process enough to be able to set reasonable expectations for a poet who works for us. This is why I believe that computing literacy for all will actually ameliorate the problems you point out.

          • 32. shriramkrishnamurthi  |  November 26, 2018 at 9:18 pm

            In his keynote at SIGSOFT FSE 1997, David Parnas gave a talk on the 25th anniversary of his seminal paper on modularity. He tellingly titled the talk, “Software Engineering: An unconsummated marriage”.

            He partly made his case by pointing out that in every other engineering discipline, products are accompanied by warranties. Just as you note, even a one-dollar object can usually be returned, and its manufacturer can be sued if something goes wrong.

            When’s the last time you returned a piece of software, or successfully got financial amends for a mistake it made?

            (Of course, companies have found various ways in the legal system to dodge what might be their obligations through EULAs and shrink-wraps and licenses-not-sales and so on. That’s a topic for another day.)

            Parnas’s point was that software is the only engineering where instead of warranties, we provide only disclaimers.

            Certainly, around the fringes we see some counterexamples: e.g., a cloud provider’s SLA is a kind of warranty with penalties for non-compliance. Those are few and far between, and anyway not purely for “software”. The bottom line is you don’t have a check to show from Microsoft or Apple for a mistake they made, and neither do I.

            Therefore, I do not know how useful this analogy is, unless I’ve misunderstood your point.

            • 33. H V Jagadish  |  November 26, 2018 at 10:34 pm

              Shriram, I totally agree with you that software producers have been very good at avoiding warranties and minimizing how much responsibility they accept for their products. However, bug fixes and periodic patches are examples of product “repair” after it has been deployed in the field by the customer. So I think the analogy applies, even if the societal expectations for software quality are much lower than for physical products.

              • 34. shriramkrishnamurthi  |  November 26, 2018 at 10:35 pm

                I’ll agree to meet you in the middle there, though the “middle” isn’t really half-way. (-:

  • 35. alanone1  |  November 26, 2018 at 11:13 am

    Analogies are often illuminating, and sometimes distracting. Code mapped to writing does have some force, but perhaps leads away from some of the most important issues, some of which might be better addressed by mapping to biology and public health rather than sentences.

    This is because it’s not “code” that is central, but the *processes* it gives rise to. These are much more like biological life than sentences in natural language (even though some of them can incite humans). But the processes from code do not need humans and are relatively immune from human judgement. They are much more like contentious organisms in a vast ecology that is mostly beyond human ability to view and to pass judgement on. (This is echoing a number of ideas put forth here by Mr. Shriram Krishnamurthi.)

    As a former biologist I can also report that unrestricted public access to “kitchen table disease” synthesis is not far behind public access to inserting random code into the cyberverse. In epidemiology, it is not just the virulence of a disease that must be worried about, but also whether it has easy or difficult “vectoring” to move about. E.g. a disease that can vector via cough sputum and is moderately contagious can be very dangerous in today’s closely packed societies and transportation systems.

    I think everyone here is aware that almost everything is different legally and in terms of enforcement for “hacking life” as opposed to freely being able to make bad code. I think bad code -> dangerous process is qualitatively different from bad sentences -> human sensibilities, but is rather like “bad life” -> world ecologies.

    I was present at a few of the Internet design meetings where the question of “net intelligence?” was debated. In the end everyone unanimously agreed not to have the net itself be a command system (this is the “end-to-end argument”). In other words no TCP/IP communication could be other than sending neutral packets. If any datagram was treated as a command, it would only be because some code inside the receiving computer decided to interpret it that way — nothing could be forced from the outside. (This, by the way, is “real object-oriented design”.)

    This — and many other good and great ideas from that time — ran afoul of the much larger “more pop-culture” of unsophisticated hackers at all levels (including CPU designers). They liked straightforward “commanding” of things, and didn’t understand that this doesn’t scale well.

    For a variety of reasons, “absolute security” is not possible in a world of human corruptibility. But what is possible is to “confine and delay” and in many cases “isolate and kill” pernicious processes. Biological life is “more things going right than going wrong”, and that is a good principle for computer systems design at the scales we have today.

    So one way to look at this is that if we were thinking more along the lines of “public health” we might want to restrict many of the things done to people who are more like certified doctors, and we might want to create environments that make vectoring of pernicious processes difficult enough to handle most of them.

    One of the many things I complained about at some of the “Coding for all” frenzies was that almost nothing that the pop-culture people thought was “computing” was close to the real issues.

    The genie has been out of the bottle for quite a while, and it’s the kind of genie that might successfully resist being coaxed back in, especially in a world where most people only understand a little about computing — just enough to be dangerous, but not enough for health.

    • 36. alanone1  |  November 27, 2018 at 5:22 am

      A few more thoughts along the dimensions of “confinement”, “responsibilities of ‘systems software’”, and “implications of Moore’s Law”.

      When “time-shared/multi-processing systems” were starting to be worked on in the 50s, one of the concerns was to prevent inadvertent (via bugs) or malicious (intentional) damage to processes from other processes. It was realized that human inspection of code wouldn’t be sufficient, and hardware confinement of various kinds was devised (usually with a base-bounds pair of registers to limit the address space of a process).
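
      The base-bounds check can be sketched in a few lines (a hedged illustration only; the real hardware performed this per memory access, in the MMU rather than in software):

```python
# Sketch of base-bounds confinement: every address a process issues is
# relocated by `base`, and trapped if it falls outside [0, bound).
def translate(base, bound, vaddr):
    """Return the physical address, or raise to model a hardware trap."""
    if vaddr >= bound:
        raise MemoryError("protection fault: address outside the process region")
    return base + vaddr

assert translate(0x4000, 0x1000, 0x0010) == 0x4010  # in bounds: relocated
# translate(0x4000, 0x1000, 0x2000) would raise: 0x2000 >= 0x1000
```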

      Philosophically, it was realized that intents like these followed on the dual benefits of e.g. inventing index registers: not just for convenience, but also for boosting integrity.

      The tiny RAMs available also led to various kinds of overlay and swapping schemes, which also required virtualization of memory. For a time these made sense to be combined into “MMUs”.

      We can see that scaling via Moore’s Law puts a lot of strain on the combination because we generally want and can use a lot of concurrent processes. If the HW and OS don’t provide these, then programmers will invent unprotected “threads” and try to use SW means to keep them from corrupting each other.

      In a somewhat similar and parallel vein, we can see that “garbage collection” services are not only conveniences, but also can be ways to have more systems integrity, if situated in protected parts of a system.

      And, we can similarly see that HLL and VHLLs are not just opportunities for convenience: they are also further ways to boost the overall integrity of systems made from them if they have been designed and implemented along these lines.

      Along this chain, we can imagine “meta-monitors” that scrutinize important intended actions of processes along the lines of “rules and regulations” that follow in the tradition of Asimov’s “Three Laws of Robotics”. (It’s been pointed out that the loopholes and possibilities for gaming the Three Laws are what Asimov made his robot stories from, and this is a useful cautionary tale.) Still, the need is manifest for “meta-monitors” whose job it is to reify boundaries into processes.

      I won’t waste space here crying into the wilderness about what hasn’t been done at the systems level, and especially at the “systems responsibility level” after the advent of personal computing. But for the purpose of this discussion, what is needed is for the systems that enable personal computing and wide-spread coding to at least implement what is known about protection and confinement, and to vet and certify it. This is similar to the idea that at least bridges and automobiles should be and can be held to a higher standard of safety than the individual drivers of automobiles can. As Butler Lampson used to point out, locks merely impose costs on thieves (but they are effective just for this reason). Similarly, confinements can usually be breached via efforts, but the point is to make the needed efforts very deep and very costly.

      • 37. shriramkrishnamurthi  |  November 27, 2018 at 7:23 am

        Garbage collection can’t possibly be thought of as a “convenience” when it’s a semantic feature. It provides a form of infinite-memory abstraction. Whether an implementation GCs or not is something a programmer needs to know when writing the program and its presence or absence leads to different programs.

        Thus, it can’t be thought of as a “convenience”. The operation that is NOT there (`free`) is just as much a semantic feature as the ones that are.

        This is independent of notions of system integrity, which are of course *also* impacted by the presence/absence of GC.
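
        The point can be made concrete with a toy sketch (hypothetical code, not from any real system): under GC, a producer simply returns values, while under manual management the interface itself must change so that someone becomes responsible for reclaiming storage.

```python
# GC'd style: allocation is implicit and reclamation invisible, so the
# function's interface says nothing at all about memory.
def pairs_gc(n):
    return [(i, i * i) for i in range(n)]

# Manual style, modeled with an explicit arena: callers receive handles
# and inherit an obligation to free -- a different program, not the
# same program plus a convenience.
class Arena:
    def __init__(self):
        self.blocks = []

    def alloc(self, value):
        self.blocks.append(value)
        return len(self.blocks) - 1   # a "pointer" (an index)

    def free_all(self):
        self.blocks.clear()           # forgetting this call leaks

def pairs_manual(arena, n):
    return [arena.alloc((i, i * i)) for i in range(n)]

arena = Arena()
handles = pairs_manual(arena, 3)
values = [arena.blocks[h] for h in handles]   # must dereference before freeing
arena.free_all()
```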

        • 38. alanone1  |  November 27, 2018 at 7:58 am

          I can see one has to be careful and literal around you … I think most people would have correctly inferred that I meant “automatic built-in garbage collection” as opposed to programmer devised memory expansion schemes … And “automatic built-in garbage collection” is most definitely a “convenience” along with its other benefits … and most computer scientists would also think of it as being much more pragmatic than semantic (it is there mostly because of the lack of indefinite amounts of fast memory: that is a pragmatic restriction).

          • 39. shriramkrishnamurthi  |  November 27, 2018 at 8:48 am

            I too meant the same thing. Most people I know don’t use “garbage collection” to mean “a scheme cooked up by a programmer as part of an application” as opposed to something built in. (People often seem to use “memory management” to mean the larger class that encompasses both of the above, which is why using that term to mean GC requires the prefix “automated” to avoid confusion.)

            And it’s most definitely a semantic property because you would write different programs with and without it. It literally changes your big-O (or other measure) space complexity.

            Of course, you write different programs because of *how much* memory you have, and your program written for one amount of memory may not work (without rewriting) with another amount. That doesn’t make it a “convenience”, it just means the semantic consequences are complex and not binary.

            Clinger’s paper on tail calls is more than just an analogy, since tail calls are themselves a special case of GC. But his argument re. tail calls seems exactly apposite here.

  • […] But I also think it’s about feedback.  I don’t really learn Spanish well because I’m rarely in a position to use it. If I did, I’d get a response to what I said. Can anyone learn to program without trying to write some code and getting feedback on whether it works? The issue of feedback came up several times in the recent discussion about the relationship between teaching programming and teaching composition. […]

  • […] terrific discussion that Shriram started in my recent post inspired by Annette Vee’s book (see original post here), “The ethical responsibilities of the student or end-user programmer.” I asked several others, […]

