Posts tagged ‘computing for everyone’
On Friday, August 14, the first RESPECT conference will be held in Charlotte, NC — the first international meeting of the IEEE Special Technical Community on Broadening Participation with technical co-sponsorship by the IEEE Computer Society (see conference website here). RESPECT stands for Research on Equity and Sustained Participation in Engineering, Computing, and Technology.
We have two papers in RESPECT which I’ll summarize in a couple of blog posts. I’m less familiar with IEEE rules on paper referencing and publishing, so I’ll make a copy available as soon as I get the rules sorted out.
Miranda Parker has just finished her first year as a Human-Centered Computing PhD student at Georgia Tech, working with me. She’s done terrific work in her first year which I hope to be talking more about as she publishes. At RESPECT 2015, she’ll be presenting her first paper as a PhD student, “A critical research synthesis of privilege in computing education.”
Miranda defines privilege as:
Privilege is an unearned, unasked-for advantage gained because of the way society views an aspect of a student’s identity, such as race, ethnicity, gender, socioeconomic status, and language.
Her short paper is a review of the literature on how we measure privilege, where its impact has been measured in other STEM fields, and where there are holes in the computing education literature. She’s using studies of privilege in other STEM fields to help define new research directions in computing education. It’s just the sort of contribution you’d want a first year PhD student to make. She’s surveying literature that we don’t reference much, and using that survey to identify new directions — for her, as well as the field.
In my research, I’m most interested in the non-CS majors, the ones who learn computing because it makes them more productive (see where I make that argument), or because they want to make themselves more marketable (see Eric Roberts’ post), or because they will live and work (as I predict) in the fat line between programmers and users (see post here). A recent article in the CACM suggests that all non-CS majors need to learn (let’s not use the “be exposed” euphemism — there’s no sense in “exposing” someone to something unless you’d like them to learn from it) “functional programming languages [and] the declarative programming paradigm.” I’m willing to consider that, but why? The quote below says, “they allow programmers to do more with less and enable compilation to more efficient code across a wide range of runtime targets.” I’ve been studying non-CS majors who program for many years now, and I’ve never once heard any of them say that they want to “enable compilation to more efficient code across a wide range of runtime targets.”
So let’s consider the “more with less.” Do we buy that what non-CS majors want is to be able to get more expressive power with fewer keystrokes? I don’t see the argument for that.
- Brian Dorn studied graphic designers who program, and found that assignment was fairly hard for them to learn (see his CHI 2010 paper). Surely, there’s not much that has fewer characters than that.
- Neil Brown has been mining the BlueJ Blackbox data for empirical data on what students get wrong most often (see his ICER paper). I was surprised to learn that confusing & for && and | for || is pretty common. Those are pretty easy to type, short, and seemingly error-prone expressions.
- We have Thomas Green’s fascinating result that IF P THEN … END P; IF NOT P THEN … END NOT P is not just better than IF P THEN … ELSE. It’s ten times better: novices do better by an order of magnitude if they avoid ELSE.
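To make Green’s comparison concrete, here is a minimal Python sketch (my own contrived example; Green’s study used a small experimental language, not Python) of the same decision written in both styles:

```python
def shipping_cost_separate(order_total):
    # The style Green found easier for novices: each branch is guarded
    # by its own explicitly stated condition, with no ELSE.
    if order_total >= 50:
        cost = 0
    if not order_total >= 50:
        cost = 5
    return cost

def shipping_cost_else(order_total):
    # The conventional IF/ELSE: the second branch's condition is implicit.
    if order_total >= 50:
        cost = 0
    else:
        cost = 5
    return cost
```

Both functions compute the same thing; the difference Green measured was in how reliably novices could read and write each form.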
My suspicion is that non-CS major programmers value understandability and fewer errors, over fewer keystrokes and more power.
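To illustrate the trade-off I have in mind, here is a small Python sketch (a contrived example of my own, not drawn from any of the studies above). Both versions sum the above-average scores:

```python
scores = [70, 85, 90, 60]

# "More with less": one dense line, fewer keystrokes.
total_terse = sum(s for s in scores if s > sum(scores) / len(scores))

# More keystrokes, but every intermediate step is named and inspectable.
average = sum(scores) / len(scores)
total_above_average = 0
for s in scores:
    if s > average:
        total_above_average += s
```

The two compute the same total; my suspicion is that many non-CS majors would find the second version easier to understand and to debug.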
I like functional programming and would be interested in a good argument for it for non-CS majors. I don’t see it here.
Second, would-be programmers (CS majors or non-majors) should be exposed as early as possible to functional programming languages to gain experience in the declarative programming paradigm. The value of functional/declarative language abstractions is clear: they allow programmers to do more with less and enable compilation to more efficient code across a wide range of runtime targets. We have seen such abstractions gain prominence in DSLs, as well as in imperative languages such as C#, Java, and Scala, not to mention modern functional languages such as F# and Haskell.
I’m currently reading Nobel laureate Daniel Kahneman’s book, “Thinking, Fast and Slow” (see here for the NYTimes book review). It’s certainly one of the best books I’ve ever read on behavioral economics, and maybe just the best book I’ve ever read about psychology in general.
One of the central ideas of the book is our tendency to believe “WYSIATI”—What You See Is All There Is. Kahneman’s research suggests that we have two mental systems: System 1 produces immediate, intuitive responses to the world around us; System 2 produces thoughtful, analytical responses. System 1 aims to generate confidence. It constructs a story about the world from whatever information exists. And that confidence leads us astray. It keeps System 2 from asking, “What am I missing?” As Kahneman says in the interview linked below, “Well, the main point that I make is that confidence is a feeling, it is not a judgment.”
It’s easy to believe that University CS education in the United States is in terrific shape. Our students get jobs — multiple job offers each. Our graduates and their employers seem to be happy. What’s so wrong with what’s going on? I see computation as a literacy. I wonder, “Why is our illiteracy rate so high? Why do so few people learn about computing? Why do so many flunk out, drop out, or find it so traumatic that they never want to have anything to do with computing again? Why are the computing literate primarily white or Asian, male, and financially well-off compared to most?”
Many teachers (like the comment thread after this post) argue for the state of computing education based on what they see in their classes. We introduce tools or practices and determine whether they “work” or are “easy” based on little evidence, often just discussion with the top students (as Davide Fossati and I found). If we’re going to make computing education work for everyone, we have to ask, “What aren’t we seeing?” We’re going to feel confident about what we do see — that’s what System 1 does for us. How do we see the people who aren’t succeeding with our methods? How do we see the students who won’t even walk in the door because of how or what we teach? That’s why it’s important to use empirical evidence when making educational choices. What we see is not all there is.
But, System 1 can sometimes lead us astray when it’s unchecked by System 2. For example, you write about a concept called “WYSIATI”—What You See Is All There Is. What does that mean, and how does it relate to System 1 and System 2?
System 1 is a storyteller. It tells the best stories that it can from the information available, even when the information is sparse or unreliable. And that makes stories that are based on very different qualities of evidence equally compelling. Our measure of how “good” a story is—how confident we are in its accuracy—is not an evaluation of the reliability of the evidence and its quality, it’s a measure of the coherence of the story.
People are designed to tell the best story possible. So WYSIATI means that we use the information we have as if it is the only information. We don’t spend much time saying, “Well, there is much we don’t know.” We make do with what we do know. And that concept is very central to the functioning of our mind.
I believe the result described in the article below: a critical limitation on teachers’ ability to use technology is too little understanding of technology. In a sense, this is another example of the productivity costs of a lack of ubiquitous computing literacy (see my call for a study of the productivity costs). We spend a lot on technology in schools. If teachers learned more about computing, they could use it more effectively.
In 2010, for example, researchers Peggy A. Ertmer of Purdue University, in West Lafayette, Ind., and Anne T. Ottenbreit-Leftwich of Indiana University, in Bloomington, took a comprehensive look at how teachers’ knowledge, confidence, and belief systems interact with school culture to shape the ways in which teachers integrate technology into their classrooms.
One big issue: Many teachers lack an understanding of how educational technology works.
But the greater challenge, the researchers wrote, is in expanding teachers’ knowledge of new instructional practices that will allow them to select and use the right technology, in the right way, with the right students, for the right purpose.
My colleague, Amy Bruckman, wrote a blog post about the challenges that nonprofits face when trying to develop and maintain software. She concludes with an interesting argument for computing education that has nothing to do with learning programming that everyone needs. I think it relates to my question: What is the productivity cost of not understanding computing? (See post here.)
This is not a new phenomenon. Cliff Lampe found the same thing in a study of three nonprofits. At the root of the problem are two shortcomings in education. First, so that more small businesses and nonprofits don’t keep making this mistake, we need education about the software development process as part of the standard high-school curriculum. There is no part of the working world that is not touched by software, and people need to know how it is created and maintained. Even if they have no intention of becoming a developer, they need to know how to be an informed software customer. Second, for the people at web design firms who keep taking advantage of customers, there seems to be a lack of adequate professional ethics education. I teach students in my Computers, Society, and Professionalism class that software engineers have a special ethical responsibility because the client may not understand the problem domain and is relying on the knowledge and honesty of the developer. More people need to get that message.
The article linked below makes the argument that then-Governor Ronald Reagan changed the perception of higher education in the United States when he said on February 28, 1967 that the purpose of higher education was jobs, not “intellectual curiosity.” The author presents evidence that that date marks a turning point in how Americans thought about higher education.
Most of CS education came after that date, and the focus in CS education has always been jobs and meeting industry needs. Could CS education have been different if it had started before that date? Might we have had a CS education that was more like a liberal education? This is an issue for me since I teach mostly liberal arts students, and I believe that computing education is important for giving people powerful new tools for expression and thought. I wonder if the focus on tech jobs is why it’s been hard to establish computing requirements in universities (as I argued in this Blog@CACM post). If the purpose of computing education in post-Reagan higher education is about jobs, not about enhancing people’s lives, and most higher-education students aren’t going to become programmers, then it doesn’t make sense to teach everyone programming.
The Chronicle of Higher Education ran a similar piece on research (see post here). Research today is about “grand challenges,” not about Reagan’s “intellectual curiosity.” It’s structured, and it’s focused. The Chronicle piece argues that some of these structured and focused efforts at the Gates Foundation were more successful at basic research than they were at achieving the project goals.
“If a university is not a place where intellectual curiosity is to be encouraged, and subsidized,” the editors wrote, “then it is nothing.”
The Times was giving voice to the ideal of liberal education, in which college is a vehicle for intellectual development, for cultivating a flexible mind, and, no matter the focus of study, for fostering a broad set of knowledge and skills whose value is not always immediately apparent.
Reagan was staking out a competing vision. Learning for learning’s sake might be nice, but the rest of us shouldn’t have to pay for it. A higher education should prepare students for jobs.
I buy Chris Granger’s argument here, that coding is not nearly as important as modeling systems. The problem is that models need a representation — we need a language for our models. The point is modeling, but I don’t think we can have modeling without coding. As Michael Mateas said, there will always be friction (see post).
We build mental models of everything – from how to tie our shoes to the way macro-economic systems work. With these, we make decisions, predictions, and understand our experiences. If we want computers to be able to compute for us, then we have to accurately extract these models from our heads and record them. Writing Python isn’t the fundamental skill we need to teach people. Modeling systems is.
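As a toy illustration of that point, here is a Python sketch (my own contrived example) of one small mental model, how savings grow by compound interest, written down so a computer can compute with it:

```python
def balance_after(years, principal=1000.0, rate=0.05):
    # The model: each year, the balance grows by a fixed fraction (rate).
    balance = principal
    for _ in range(years):
        balance += balance * rate  # the model's single rule, made explicit
    return balance
```

The loop is almost incidental; the real work was deciding that "grows by a fixed fraction each year" is the model. The Python is just a precise representation of it, which is why I don't think we can have modeling without some kind of coding.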