Posts tagged ‘computing for everyone’
In my research, I’m most interested in the non-CS majors, the ones who learn computing because it makes them more productive (see where I make that argument) or because they want to make themselves more marketable (see Eric Robert’s post) or because they will live and work (as I predict) in the fat line between programmers and users (see post here). A recent article in the CACM suggests that all non-CS majors need to learn (let’s not use the “be exposed” euphemism — there’s no sense in “exposing” someone to something unless you’d like them to learn from it) “functional programming languages [and] the declarative programming paradigm.” I’m willing to consider that, but why? The quote below says, “they allow programmers to do more with less and enable compilation to more efficient code across a wide range of runtime targets.” I’ve been studying non-CS majors who program for many years, and I’ve never heard any of them say even once that they want to “enable compilation to more efficient code across a wide range of runtime targets.”
So let’s consider the “more with less.” Do we buy that what non-CS majors want is more expressive power with fewer keystrokes? I don’t see the argument for that.
- Brian Dorn studied graphic designers who program, and found that assignment was fairly hard for them to learn (see his CHI 2010 paper). Surely, there’s not much that has fewer characters than that.
- Neil Brown has been mining the BlueJ Blackbox data for empirical data on what students get wrong most often (see his ICER paper). I was surprised to learn that confusing & for && and | for || is pretty common. Those expressions are short and easy to type, and yet they’re error-prone.
- We have Thomas Green’s fascinating result that IF P THEN … END P; IF NOT P THEN … END NOT P is not just better than IF P THEN … ELSE. It’s ten times better: novices do better by an order of magnitude when they avoid ELSE.
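The one-character slips Neil found are easy to reproduce. Here’s a minimal Java sketch (my own illustration, not drawn from the Blackbox data) of why & versus && matters: & always evaluates both operands, so a null check on the left no longer protects the call on the right.

```java
public class ShortCircuit {
    // Looks like a guarded check, but `&` evaluates BOTH sides:
    // when s is null, s.length() still runs and throws.
    static boolean nonEmptyUnsafe(String s) {
        return s != null & s.length() > 0;
    }

    // `&&` short-circuits: if s is null, length() is never called.
    static boolean nonEmptySafe(String s) {
        return s != null && s.length() > 0;
    }

    public static void main(String[] args) {
        System.out.println(nonEmptySafe(null));   // false
        System.out.println(nonEmptySafe("hi"));   // true
        try {
            nonEmptyUnsafe(null);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException from the & version");
        }
    }
}
```

Both versions compile without complaint, which is part of what makes this error so hard for a novice to find.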
My suspicion is that non-CS major programmers value understandability and fewer errors, over fewer keystrokes and more power.
I like functional programming and would be interested in a good argument for it for non-CS majors. I don’t see it here.
Second, would-be programmers (CS majors or non-majors) should be exposed as early as possible to functional programming languages to gain experience in the declarative programming paradigm. The value of functional/declarative language abstractions is clear: they allow programmers to do more with less and enable compilation to more efficient code across a wide range of runtime targets. We have seen such abstractions gain prominence in DSLs, as well as in imperative languages such as C#, Java, and Scala, not to mention modern functional languages such as F# and Haskell.
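For concreteness, here is the kind of “more with less” I take the authors to mean (my illustration, not an example from their article): a declarative stream pipeline in Java next to its imperative equivalent.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class MoreWithLess {
    // Declarative: say what to compute (keep the evens, square them).
    static List<Integer> squaresOfEvens(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)
                 .map(x -> x * x)
                 .collect(Collectors.toList());
    }

    // Imperative equivalent: spell out the accumulator, the loop, the test.
    static List<Integer> squaresOfEvensLoop(List<Integer> xs) {
        List<Integer> result = new ArrayList<>();
        for (int x : xs) {
            if (x % 2 == 0) {
                result.add(x * x);
            }
        }
        return result;
    }
}
```

The pipeline is certainly shorter. Whether shorter is easier for a non-CS major to understand and debug is exactly the question the evidence above raises.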
I’m currently reading Nobel laureate Daniel Kahneman’s book, “Thinking, Fast and Slow” (see here for the NYTimes book review). It’s certainly one of the best books I’ve ever read on behavioral economics, and maybe the best book I’ve ever read about psychology in general.
One of the central ideas of the book is our tendency to believe “WYSIATI”—What You See Is All There Is. Kahneman’s research suggests that we have two mental systems: System 1 produces immediate, intuitive responses to the world around us. System 2 produces thoughtful, analytical responses. System 1 aims to generate confidence. It constructs a story about the world given whatever information exists. And that confidence leads us astray. It keeps System 2 from asking, “What am I missing?” As Kahneman says in the interview linked below, “Well, the main point that I make is that confidence is a feeling, it is not a judgment.”
It’s easy to believe that University CS education in the United States is in terrific shape. Our students get jobs — multiple job offers each. Our graduates and their employers seem to be happy. What’s so wrong with what’s going on? I see computation as a literacy. I wonder, “Why is our illiteracy rate so high? Why do so few people learn about computing? Why do so many flunk out, drop out, or find it so traumatic that they never want to have anything to do with computing again? Why are the computing literate primarily white or Asian, male, and financially well-off compared to most?”
Many teachers (like the comment thread after this post) argue for the state of computing education based on what they see in their classes. We introduce tools or practices and determine whether they “work” or are “easy” based on little evidence, often just discussion with the top students (as Davide Fossati and I found). If we’re going to make computing education work for everyone, we have to ask, “What aren’t we seeing?” We’re going to feel confident about what we do see — that’s what System 1 does for us. How do we see the people who aren’t succeeding with our methods? How do we see the students who won’t even walk in the door because of how or what we teach? That’s why it’s important to use empirical evidence when making educational choices. What we see is not all there is.
But, System 1 can sometimes lead us astray when it’s unchecked by System 2. For example, you write about a concept called “WYSIATI”—What You See Is All There Is. What does that mean, and how does it relate to System 1 and System 2?
System 1 is a storyteller. It tells the best stories that it can from the information available, even when the information is sparse or unreliable. And that makes stories that are based on very different qualities of evidence equally compelling. Our measure of how “good” a story is—how confident we are in its accuracy—is not an evaluation of the reliability of the evidence and its quality, it’s a measure of the coherence of the story.
People are designed to tell the best story possible. So WYSIATI means that we use the information we have as if it is the only information. We don’t spend much time saying, “Well, there is much we don’t know.” We make do with what we do know. And that concept is very central to the functioning of our mind.
I believe the result described in the article below: a critical limitation on teachers’ ability to use technology is too little understanding of technology. In a sense, this is another example of the productivity costs of a lack of ubiquitous computing literacy (see my call for a study of the productivity costs). We spend a lot on technology in schools. If teachers learned more about computing, they could use it more effectively.
In 2010, for example, researchers Peggy A. Ertmer of Purdue University, in West Lafayette, Ind., and Anne T. Ottenbreit-Leftwich of Indiana University, in Bloomington, took a comprehensive look at how teachers’ knowledge, confidence, and belief systems interact with school culture to shape the ways in which teachers integrate technology into their classrooms.
One big issue: Many teachers lack an understanding of how educational technology works.
But the greater challenge, the researchers wrote, is in expanding teachers’ knowledge of new instructional practices that will allow them to select and use the right technology, in the right way, with the right students, for the right purpose.
My colleague, Amy Bruckman, wrote a blog post about the challenges that nonprofits face when trying to develop and maintain software. She concludes with an interesting argument for computing education that has nothing to do with learning programming that everyone needs. I think it relates to my question: What is the productivity cost of not understanding computing? (See post here.)
This is not a new phenomenon. Cliff Lampe found the same thing in a study of three nonprofits. At the root of the problem are two shortcomings in education. First, so that more small businesses and nonprofits don’t keep making this mistake, we need education about the software development process as part of the standard high-school curriculum. There is no part of the working world that is not touched by software, and people need to know how it is created and maintained. Even if they have no intention of becoming a developer, they need to know how to be an informed software customer. Second, for the people at web design firms who keep taking advantage of customers, there seems to be a lack of adequate professional ethics education. I teach students in my Computers, Society, and Professionalism class that software engineers have a special ethical responsibility because the client may not understand the problem domain and is relying on the knowledge and honesty of the developer. More people need to get that message.
The article linked below makes the argument that then-Governor Ronald Reagan changed the perception of higher education in the United States when he said on February 28, 1967 that the purpose of higher education was jobs, not “intellectual curiosity.” The author presents evidence that that date marks a turning point in how Americans thought about higher education.
Most of CS education came after that date, and the focus in CS education has always been jobs and meeting industry needs. Could CS education have been different if it had started before that date? Might we have had a CS education that was more like a liberal education? This is an issue for me since I teach mostly liberal arts students, and I believe that computing education is important for giving people powerful new tools for expression and thought. I wonder if the focus on tech jobs is why it’s been hard to establish computing requirements in universities (as I argued in this Blog@CACM post). If the purpose of computing education in post-Reagan higher education is about jobs, not about enhancing people’s lives, and most higher-education students aren’t going to become programmers, then it doesn’t make sense to teach everyone programming.
The Chronicle of Higher Education ran a similar piece on research (see post here). Research today is about “grand challenges,” not about Reagan’s “intellectual curiosity.” It’s structured, and it’s focused. The Chronicle piece argues that some of these structured and focused efforts at the Gates Foundation were more successful at basic research than they were at achieving the project goals.
“If a university is not a place where intellectual curiosity is to be encouraged, and subsidized,” the editors wrote, “then it is nothing.”
The Times was giving voice to the ideal of liberal education, in which college is a vehicle for intellectual development, for cultivating a flexible mind, and, no matter the focus of study, for fostering a broad set of knowledge and skills whose value is not always immediately apparent.
Reagan was staking out a competing vision. Learning for learning’s sake might be nice, but the rest of us shouldn’t have to pay for it. A higher education should prepare students for jobs.
I buy Chris Granger’s argument here, that coding is not nearly as important as modeling systems. The problem is that models need a representation — we need a language for our models. The point is modeling, but I don’t think we can have modeling without coding. As Michael Mateas said, there will always be friction (see post).
We build mental models of everything – from how to tie our shoes to the way macro-economic systems work. With these, we make decisions, predictions, and understand our experiences. If we want computers to be able to compute for us, then we have to accurately extract these models from our heads and record them. Writing Python isn’t the fundamental skill we need to teach people. Modeling systems is.
Why programming in a non-majors, CS course is unlikely to lead to computational thinking (but is still a good idea): We must go beyond Intuition to Evidence
The March 2015 issue of Inroads (linked here) has a special section on “The role of programming in a non-major, CS course.” I was disappointed by several of the articles in the special section for making arguments without empirical evidence, and decided to write my February Blog@CACM article on the need for evidence-based practice in computing education (see post linked here).
I left out Henry Walker’s second article in the Blog@CACM post, and will discuss it here. In the first article, he argues against teaching programming because it would not leave enough time for other, more important topics. In the second one, he argues for teaching programming, if your learning objective is computational thinking.
If a non-majors course in computer science seeks to help students sharpen their skills in computational thinking, then students must be able to write solutions precisely. Further, students must be able to analyze the correctness of solutions and compare alternative solutions. Such work requires precision in writing. English or another natural language allows precision, but does not require precision.
As in his first article, Henry offers no evidence for his claims. I do agree that programming requires greater precision than natural language. But Henry argues for a benefit of programming that is not supported by our research evidence.
If defined in sufficient detail, pseudo-code can enforce rigorous thinking, but pseudo-code cannot be run to check correctness or test efficiency. Ultimately, the use of a programming language is essential if computing courses are to help students sharpen their problem-solving skills.
In the decades of studies that have tried to find such transfer, the research evidence is that computing courses do not help students sharpen their problem-solving skills. I am not aware of studies that have rebutted David Palumbo’s 1990 review of the literature on programming and problem-solving (see paper reference here). It is possible to teach problem-solving skills using programming, but students do not gain general problem-solving skills from computing courses (with or without programming).
Henry’s evidence that this does happen is an anecdote:
An upper-level political science major who took my introductory computer science course indicated that her logical thinking in computer science had a clear payoff when she put arguments together for her papers in political science.
As a rationalization for a teaching decision, this is weak evidence. It’s self-report from a single student. The student probably did learn something interesting and useful from the class. Maybe the student did gain in logical thinking. Given the preponderance of evidence against general problem-solving skills coming from a programming class, I’m skeptical. Maybe she just saw her existing skills in a new light because of the computer science class — a useful learning outcome. In any case, is the positive experience of one student justification for designing a course for dozens or hundreds?
The conclusion of my Blog@CACM post still applies here. We don’t know what non-CS majors need or what they can learn. We shouldn’t just guess, because our intuition is very likely wrong — and it’s dangerous, since our experience (as mostly white, mostly male, CS faculty) is so different than those of most non-CS majors. We need the humility to admit that we don’t know. We must seek out evidence to inform our decision-making.