Posts tagged ‘image of computing’
The question that Jennifer Kay raised in her AAAI Spring Symposium paper is about robotics, but her question on the SIGCSE Members list is more general: “Do we have any empirical evidence that cool stuff genuinely does attract more students?” Bruce Barton changed the question slightly in his message on the list:
Are we doing a disservice to our students by teaching them robotics, animation, game development, etc. when most of the industry is performing fairly mundane computer programming tasks? I understand that we are trying to increase enrollment and also retention. But are we perpetrating a bait and switch scam on our students? Back when I first started out (late '60s), data processing was where it was at and we enjoyed what we were doing. Has the video generation had their attention span so decreased that they can only learn if we make the learning experience play-time? I have heard the reports about video gaming drawing in the students and that video gaming is the new big thing in the industry. But each year we put out many thousands of graduates who want to become game developers and there are certainly not that many jobs available in that specialty. Where do the graduates who don’t make it into game development go? Should we be the voice of reality for them? Would we really lose that many students if we approached the subject in a less fanciful way?
There is evidence that more engaging approaches in the first semester do lead to improved retention in later classes, even when those later classes are traditional. Charlie McDowell found that with pair programming. Beth Simon’s ITICSE 2010 paper shows Media Computation CS1 students succeeding more in a (traditional) CS2 than students from a traditional CS1.
Why does this happen? Why is it that students stick with computer science after an engaging start, even if those later courses are no different than they have ever been?
- One theory is that we simply have to get students engaged, and then they see the value of computing in a broader sense. Once they see computing in the form of a concrete and engaging application area, then maybe they see the value of computer science in its general form.
- Alternatively, maybe the first course sets up the carrot, and students are willing to bear with the rest in order to achieve that carrot. Students in our Computational Media degree program want to go off to Electronic Arts or Pixar, and they are willing to go through courses that they find less engaging, and even (in their opinion) less valuable, in order to earn the degree that improves their access to the careers they want. Maybe the first course (in robotics, in media computation, with pair programming) shows them the best that they might find in computer science, and that makes it all worthwhile.
The implication in these statements is that the rest of the curriculum is boring and unengaging, and that most jobs in computing are similar. Is it true that most computing jobs are boring and unengaging? That’s counter to what we’ve been telling students the last few years. Does the curriculum have to be boring and unengaging? Maybe some students want the pure computing. In Lana Yarosh’s paper on our Media Computation Data Structures course, we found that about 10% of the students didn’t want the engaging media context — they wanted pure data structures. In the paper by Allison Tew and others on the use of a Nintendo Gameboy context for a computer organization course, they found that students were much more excited about the “boring” topic of computer organization with the engaging context — and they still learned the computer organization pretty well.
Do we really believe that computer science is inherently boring and unengaging? Why is that? Why would we believe that about ourselves and our field?
Alan wanted to make this longer comment, but couldn’t figure out where it fit naturally, so he kindly forwarded it to me to provide here:
Mark in his blog has provided a cornucopia of useful topics and questions about teaching computing to a wide demographic. It’s all very complex and (to me at least) difficult to think about. My simple-minded approach for dealing with this looks at “humans making/doing things” as having three main aspects:
1. Bricks, mortar, and bricklaying
2. Architecture
3. Models of the above
And we can think of the “model” category as being composed of the same three categories.
1. Bricks, mortar, and bricklaying of models
2. Architectures for models
3. (Meta) Models of the above
If we stop here we have a perhaps overly simplistic outline of the kinds of things to be learned in computing (and many other activities as well).
Questions I would ask about these include:
- How many ideas are there here, and especially, how many ideas at a time can learners handle?
- How much real practice of each of these is required for real understanding and operational usage?
- Where can we look for useful parallels that will help us think about our own relatively undeveloped area?
To take the last first, we would (or I would) be very surprised to be able to prepare someone as a professional in 4 years of college if they started from scratch in any of the possible parallels listed above. To go to the really simplistic idea of “hours put in”, there just aren’t enough actual hours available per year (3 practice hours a day is about 1000 hours a year) and professional fluency in any of the above will require more than 4000 hours of practice from most learners. And it’s not just a question of hours. There are longitudinal requirements (time for certain ideas and skills to “sink in”) which probably represent real latencies in both the “notional” and physiological parts of learners’ minds.
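Alan's back-of-the-envelope numbers are easy to check. A small sketch (the 3-hours-a-day and 4,000-hour figures are his; the arithmetic is mine):

```python
# Back-of-the-envelope check of the practice-hours argument.
# Assumptions from the text: ~3 practice hours per day, and professional
# fluency requires more than ~4000 hours for most learners.
hours_per_day = 3
days_per_year = 365

hours_per_year = hours_per_day * days_per_year   # 1095, i.e., "about 1000"
fluency_hours = 4000
years_needed = fluency_hours / hours_per_year    # years of *pure* practice

print(hours_per_year)            # 1095
print(round(years_needed, 1))    # 3.7
```

Even on these generous assumptions, nearly all four college years would have to be raw practice time, before counting the "sink in" latencies — which is exactly Alan's point.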
A large number of those going into any of the four areas started learning, training, and practicing in childhood. And for those who try to start as a first-year college student ….
a. This “problem” is “solved” for music partly by the existence of “pop music” much of which does not require deep fluency in music for participation. (And it is certainly not hard to see real parallels and the existence of “pop computing” in our culture.) Classical and jazz music simply require a lot more time and work.
b. The problem is solved for professional sports by excluding those who are not skilled enough (and even quite a few of those with skills who did start in childhood). The last census listed about 65,000 professional athletes in all US sports. This is a small job market.
c. The problem is solved for the hard sciences (and medicine) most often with extensive postgraduate learning, training and practicing (and by high thresholds at the end). Should we ask where those who, for one reason or another didn’t make the cut, wind up?
d. I don’t know what the engineering demographics are (but would like to). Engineering has always had a strong ad hoc nature (which is what allowed it to be invented and practiced long before mathematics and science were fully invented). Architecture is harder than bricklaying, so one could imagine many with engineering UG degrees winding up in technical companies in what would be essentially apprentice processes.
I’m guessing that this is where similar computing students with undergraduate degrees might wind up — essentially doing bricklaying in some corporate notion of architecture.
These last two seem to me to be dead ends — but it would be good to have more than personal and anecdotal evidence. My own observations would generalize to “they don’t learn much that is good” in their undergraduate experience, and “they learn even less that is good when on the job”.
I think universities have a moral obligation to try to deal with the “they don’t learn much that is good” part of this problem. And doing this well enough could cause large useful and important changes in industry over the next decade or two.
If I were going to get started on this, I would try to put forth a very clear outline of the six aspects of computing I listed above, show how they work together — and try to sketch out what it actually takes to learn them for most college students.
In my thinking about this I keep coming back — not to the problems of “coverage” over 4 years — but to what seems to me to be the larger problem of getting in enough real practicing of the various kinds needed to actually ground the ideas into thoughtful and operational tools.
I’ve been thinking about the Pew study of Millennials since it came out in February. Are Millennials really different in some significant way from previous generations? From the perspective of computing education, I see the same cognitive issues today as in years past. The problems with loops that Lister’s ITICSE working group study found look pretty similar to the problems that Elliot Soloway and Jim Spohrer identified among Yale undergraduates working on the Rainfall problem in the early 1980s. I look at my 1995 SIGCSE paper on the challenges that students face in learning object-oriented programming, and I see those exact same problems among the seniors in my Capstone Design class this semester.
The most detailed study to date of the 18- to 29-year-old Millennial generation finds this group probably will be the most educated in American history. But the 50 million Millennials also have the highest share who are unemployed or out of the workforce in almost four decades, according to the study, released today by the Pew Research Center.
There is one place where I see a problem with Millennials — not unique to them, but even stronger with them than among the adults. My students and I have been working on papers for ICER 2010 over the last couple of weeks. A common theme that we’re seeing in several different studies is a perception among our participants that Computer Science is about advanced use of applications. If you really know how to use Photoshop, then that’s Computer Science. It’s a hard misconception to deal with because an expert on Photoshop probably has picked up a lot of what we would recognize as Computer Science knowledge — about digital representation of data, about processing, about efficiency. It’s not that the perception is wrong; it’s just missing an important perspective.
What’s striking about this misperception is that it shows up in several studies, from high school students to adults. The Millennials might have it a bit stronger, a bit more persistently than the adults, because they have used computer applications for so long. The Millennials hear us talk about real computer science, and they give us the “Yeah, yeah — I’ll tell that back to you on the test, but I know what really matters.” They listen to us, but don’t think it’s all that important. If they don’t think it’s important, they make little effort to really learn it. We find that this perception is strong among the adults, too. The adults care about employment. If you finally understand the difference between arrays and linked lists, you have made an important intellectual step, but you haven’t generated a new line in your resume. If you take a class on “Advanced Photoshop,” you do have a new claim that can lead to a new job. The adults in our studies, too, see advanced application use as being “Computer Science,” and far more valuable than a degree in Computer Science. The adults don’t give us the “Yeah, yeah” bit — they just ignore “Computer Science” entirely.
Both Millennials and adults are practical. What gives me the most benefit for the least cost? Learning computer science is hard, and its value is indeterminate, especially to someone who doesn’t understand the IT industry. Learning to use applications better is an obvious job skill. The fact that the advanced levels of the latter overlap with some levels of the former makes it even harder for us educators to make our case.
Erik asked a great question in a comment to the “White Boys are Boring” post (a post which was clearly accompanied by a healthy serving of hyperbole, as Kurt pointed out):
Has anyone looked at the comparative efficacies of race/gender neutral programs to increase participation versus ones targeted at specific races or at women?
I do know that curricula designed to address the needs of women and members of underrepresented minorities work better at attracting those students than traditional curricula do — that’s one of the directions that the NSF BPC program has been exploring. That’s not answering Erik’s question, though. The traditional computing curriculum is not neutral.
Media Computation was not designed explicitly to attract women and minority students. We designed Media Computation to attract Liberal Arts, Architecture, and Management majors, and we used sources like Margolis and Fisher’s Unlocking the Clubhouse to inform our decisions. The result is that no published study has found a difference in success rates due to gender or ethnicity, and the published studies show that women are more likely to succeed with Media Computation than with whatever was the traditional curriculum. That doesn’t mean that Media Computation is neutral — some students dislike it. The distinction doesn’t seem to be due to gender or ethnicity.
When we design computing curricula, most teachers aim to make assignments and examples motivating and interesting, and in so doing, we speak to some members of our audience and not others. When we use video games or robots in examples, for example, we tend to get the boys more engaged than the girls. I’ve found that it’s hard to be culturally neutral in my own assignments. One year, I used an example in an object-oriented design course about parts of a car (lots of opportunity for aggregation and part-of relationships there), only to find that my students from the developing world didn’t have much experience with cars and didn’t know anything about parts of an engine. Our introductory courses used to build assignments around board games like Yahtzee and Risk, which were really engaging for students who knew those games, and drudgery for those who didn’t. (Implementing pages of rules for a game you’ve never played is dull.) There were cultural biases in the choices of games, e.g., favoring the kinds of games that, in the US, middle-class kids in suburbia played.
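For concreteness, here is a minimal sketch (mine, not the actual course assignment; all class and attribute names are illustrative) of the kind of part-of/aggregation structure that made the car example appealing for an object-oriented design course:

```python
# A hypothetical sketch of the car-parts aggregation example:
# a Car has-an Engine, and an Engine has Cylinders (part-of
# relationships). Names are illustrative only.

class Cylinder:
    def __init__(self, bore_mm: float):
        self.bore_mm = bore_mm

class Engine:
    def __init__(self, n_cylinders: int):
        # Aggregation: the Engine is composed of its Cylinders.
        self.cylinders = [Cylinder(bore_mm=86.0) for _ in range(n_cylinders)]

class Car:
    def __init__(self, make: str):
        self.make = make
        self.engine = Engine(n_cylinders=4)  # Aggregation: Car has-an Engine

car = Car("Example")
print(len(car.engine.cylinders))  # 4
```

The design lesson (has-a relationships, objects composed of objects) is easy to see here — but only for a student who already knows that an engine contains cylinders, which is exactly where the cultural bias creeps in.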
The question to which I don’t know the answer is whether it’s possible to build “neutral” curriculum. The academic answer seems to be “no,” but it’s still an issue being explored. Some of what I’ve found from some digging:
- The prevailing academic answer says curricula are not neutral. A.V. Kelly’s 2009 book (5th ed.) The Curriculum: Theory and Practice says that all approaches to curricular planning have a variety of biases in them. I found an interesting 2003 journal article arguing that that’s not a bad thing (as well as other articles making a similar argument). Curricular changes occur because of particular strengths/weaknesses of a curriculum and are implemented through leveraging power relations. The challenge is being aware of the biases.
- That doesn’t mean that there aren’t efforts to create neutral curricula. In 2004, the UN announced an effort to create a culture-neutral school curriculum. I found an announcement for an on-going research project that is attempting to build gender neutral curricula. I found no results on any of these or similar attempts.
- I suspect that some computer scientists would say, “Use Mathematics. Math is neutral.” I found that the mathematics education community (at least in the articles I found describing efforts to create neutral curricula) believes that mathematics is neutral, but mathematics teaching is necessarily value-laden. I found a nice summary of the claim in the Rethinking Mathematics piece at the Rethinking Schools site.
Simply put, teaching math in a neutral manner is not possible. No math teaching — no teaching of any kind, for that matter — is actually “neutral,” although some teachers may be unaware of this. As historian Howard Zinn once wrote: “In a world where justice is maldistributed, there is no such thing as a neutral or representative recapitulation of the facts.”
Bottom line is that I don’t think that anyone can answer Erik’s question. Maybe the academics are wrong and it is possible to build neutral curricula — there certainly are attempts today. However, if we don’t know whether we can build one, then we definitely don’t have any to compare.
A note follows from Susan Rodger to ACM SIGCSE members, from her position on the ACM Education Policy Committee. This is great news! Cameron Wilson showed us this at the ACM Education Council meeting last weekend — the quoted statement showed up in the Federal Register, so it’s citable:
As a member of the ACM Education Policy Committee I wanted to make SIGCSE members aware of two important items.
1) First, the Department of Education has recognized computer science as a science within STEM. This is important for applying for STEM-related funds:
“Consistent with the Race to the Top Fund program, the Department interprets
the core academic subject of science under section 9101(11) to include
STEM education (science, technology, engineering and mathematics) which
encompasses a wide-range of disciplines, including computer science.”
2) The Department of Education has two funds to apply for:
a) Invest in Innovation Fund (I3)
You can apply for these funds. A letter of intent is due April 1.
b) Race to the Top
Only states can apply for these funds, but you can contact your
state department of education and point out to them that computer
science is an eligible discipline and ask how computer science
education fits into your state’s plan.
For more details, please see the memorandum from ACM.
Susan Rodger, Professor of the Practice
Dept. of Computer Science, Box 90129
LSRC Room D237
Duke University, Durham, NC 27708-0129
Announcement from Georgia Tech today — related to an earlier blog post.
Dear Faculty, Staff & Students,
It’s my pleasure to announce the formal creation of the School of Computational Science & Engineering within the College of Computing at Georgia Tech. The new School will operate under the direction of Chair Richard Fujimoto and in close cooperation with the colleges of Engineering and Science here at Tech.
In addition to focusing on its core research areas—high performance computing, modeling and simulation, and massive data analysis—the School of CSE’s mission will include producing a new type of computational scholar. Indeed, by creating this School, we once again take a leadership role in defining the field of computing itself. As a university, we are stating clearly that CSE is an academic discipline in its own right, with a distinct body of knowledge that lies at the confluence of computing, math, science and engineering. Many of our School of CSE faculty will have joint appointments around campus, and they will continue to pursue the kind of interdisciplinary work that has come to define this School, this College and Georgia Tech.
Finally, let us all express our appreciation to former John P. Imlay Dean Rich DeMillo, who first conceived of CSE as a separate unit of the College. Rich’s foresight has (again) allowed us to stake an important intellectual claim before our peers, and the College will reap the benefits of his prescience for years to come.
Congratulations to all of the faculty, staff and administrators in CSE on this achievement. Great work!
Interim Dean & Professor
Stephen Fleming Chair of Telecommunications
The College of Computing has interviewed three Dean candidates over the last two weeks. All three gave us lots to think about, good advice, and plenty of blog-fodder — but we’re not supposed to name them, so the blog-potential is smaller than it might be. Still, this last one made a comment that I found so striking that I want to talk about it anonymously.
“Do you want to know the top three areas of Computer Science?
Algorithms. Algorithms. Algorithms.”
Historically, that’s an accurate view. Certainly, viewing the world in terms of its algorithms has enabled computing to change the way many disciplines think about their work. However, is that view the one that will push computing forward? Is that where the next great advances in computing will come from?
I suggest that the future of computing is people, people, and people.
- People as co-processors. Luis von Ahn’s home page says that he focuses on “human computation.” What is it that humans can do, that is hard to capture (captcha?) in a computer’s algorithms, that we can then use in concert with computation? The DARPA Network Challenge is a fascinating example of using people as probes, and technology as the networking and processing glue between them. What makes this so powerful is that we can’t understand this as algorithms, but we can use algorithms to leverage human computation.
- People as many, many users. One of our other Dean candidates emphasized the importance of multi-core processing in the future of computing. I think he missed a different massively-parallel phenomenon which is even more fundamentally changing our society. “People” is different from “persons,” and social media is more than just individual users being addressed in old-style HCI terms. What emerges when we connect up millions of people through rapid telecommunications networks? Certainly, new things — I’m amazed at the number of press reports I read these days that reference gathering information through Twitter, blogs, and Facebook postings.
There are a lot of research issues to explore here. One that I’ve been thinking about lately: Based on “Nudge,” I predict that a broad range of opinions may initially appear when a new topic arises in a rapid-response social medium like Twitter or Facebook, but the majority of respondents will quickly converge on a small range of opinion. In other words, within a social group, there is no “long tail” effect — friends & followers quickly conform to a few dominant positions, and they do it more quickly than in non-Internet media. Whether or not I’m right, characterizing the behavior of these new forms of media is important, so that we can understand how they’re influencing us.
- Finally, people need to learn about computing. Our first Dean candidate spent a significant amount of time talking about computing education. A particular claim was made that I found interesting. Higher education costs are soaring. They might be capped or limited in some way, or society may expect more from higher education in the United States — like expecting Universities to play a larger role in improving the dismal state of K-12 education, especially in computing. I didn’t hear either of the other two candidates say anything about the responsibility of a College of Computing for improving the state of computing education across the society. Of course, I agree that we do have a responsibility here, to figure out what people should know about computing, to help people learn about computing, and to figure out how to improve computing learning, for both the major and the non-major.
Our past was about algorithms. Our future is about people.
Lijun Ni is a PhD student working with me, who is interested in how to support computer science teachers. In computing education, we tend to worry about the students. Lijun started work on her doctorate with a focus on the teacher. She and Tom McKlin recently did a series of interviews with teachers who attended workshops from “Georgia Computes!” on robotics, MATLAB, and Media Computation curricula. Lijun and Tom wanted to find out what influenced teachers’ adoptions (and lack of adoptions), what questions the teachers asked themselves, and what change was necessary to make the adoption successful. Their SIGCSE paper describes the results. The paper is particularly aimed at developers of new curricula and tools, to identify the questions that workshops need to answer to help teachers adopt.
Her dissertation work is slightly different, addressing an important question related to the CS10K project. The goal of the CS10K project is to have 10,000 CS teachers in 10K schools by 2015. Given that about 46% of teachers quit in the first five years, Lijun is asking: “How do we avoid having only 5K teachers in 2020?”
Lijun is studying teacher identity. Education research results show that teachers who have a sense of identity as a particular kind of teacher (e.g., as a mathematics teacher or a science teacher or a business teacher):
- (a) are less likely to quit teaching, and
- (b) seek to become better at that kind of teaching, e.g., they take workshops and other forms of professional development. (And they tend to look for and try out new curricula — which relates to her SIGCSE 2010 paper.)
So, what influences teachers seeing themselves as “computer science teachers”? And what do teachers mean by that term? Lijun has been interviewing high school teachers, to understand how they define “computer science teacher.” She is proposing to study the DCCE teachers, to explore how the experience of DCCE influences their perception of being a computer science teacher.
Lijun is working on a PhD in Human Centered Computing. Why is being a “computer science teacher” different (and worthy of research, especially in a computing college) than being a math teacher or science teacher? Let’s put it this way: How would you define “computer science” teacher? The whole field is challenged to define “computer science.” Few states offer certification for computer science teachers, so the main source of identification (“My teaching certificate says I’m…”) is missing. Many teachers whom Lijun has interviewed see Computer Science as “Apps++” or as “Applied Math.” She uses her computing background to understand the teacher’s perspectives, and how those definitions might differ from how the academic community might define “computer science.”
If you’d like to hear more about Lijun’s work, be sure to find her at SIGCSE next week.
Next week is SIGCSE 2010, so the sound of scampering feet, practice talks, and impending panic permeates our group here at Georgia Tech. We have something in seven sessions this year. Tom Cortina, Program Co-Chair this year and Conference Co-Chair next year, told me how much trouble we caused him in scheduling so that none of us overlap anywhere. (Barb already discovered that she was double-booked, but got it resolved.)
I thought I’d spend some of my blog posts this week giving previews of talks and sessions that Georgia Tech folk are involved in. I try to be cautious in talking about student work before it gets published. This seems like fair pickings, to talk about their cool work (and to drum up more of an audience!).
Mike Hewner is presenting Friday on “What Game Developers look for in a New Graduate: Interviews and Surveys at One Game Company.” Mike isn’t actually doing his dissertation on game development. Mike really wants to be a computer science teacher at the post-secondary level. He realized that many students coming into College today want to be game developers. So, last summer, he took an internship at a game company, so that he could tell students honestly that he had first-hand experience as a game developer. While he was there, he did the research for this paper.
There are various efforts going on to define the core of CS, such as concept inventories built by asking teachers what’s important or hard. Mike asked a much more focused question: what gets you hired as a game developer? Rather than ask teachers, he asked the people who hire game developers. He used a variant of a Delphi method to develop an initial list of needs, then to get his respondents to respond to each other and rank the whole list.
In his dissertation work, Mike is actually interested in a much broader question. We know that students are showing less interest in computing careers. Mike is using social psychology to ask the question: How do students become affiliated with computing as a career choice, and how can we influence that affiliation? He’s got a project going on right now that responds in some sense to Maureen Biggers’ paper about Stayers vs. Leavers. Maureen found that people who stayed in computing tended to see it as a broad field, while those who left thought it was just about programming. Mike is trying to see if he can get high school students to broaden their definition of computing, using concept maps to measure that breadth. That’s probably more than I should say about unpublished (actually, ongoing and unfinished!) work. If you want to know more, find Mike at SIGCSE next week.
The former Dean of the College of Computing at Georgia Tech, Rich DeMillo, established three schools within the College. I’m not really sure how he came to decide these three groupings. I am finding them useful for understanding the tensions in defining computer science today (perhaps the “malaise” in Beki’s blog post).
- The School of Computer Science (SCS) is focused on the traditional definition of computer science. It looks inward, into improving the computer and how it functions. Systems, networking, programming languages, theory, and compilers go here. Software engineering goes here, though flavors of it could go elsewhere.
- The School of Interactive Computing (IC) looks at the boundary between the computer and everything else. It includes human-computer interaction, learning sciences & technologies, computational journalism, and computing education research (where humans are the “everything else”), and also includes robotics, computational photography, and vision (where “everything else” is literally the world). Intelligent systems and graphics go here, for using humans as the model for intelligence and form, but versions of each could go elsewhere.
- The Division (soon to be School) of Computational Science and Engineering (CSE) focuses on the application of computing for advancing science and engineering. This was the most innovative of the three. Rich once told me that he wanted this School to provide an academic home for an important field that wasn’t finding one elsewhere. Computer science departments often don’t tenure computational science researchers because their work may not necessarily invent new computer science, and science departments don’t tenure scientists just for being code monkeys. This area is too important to leave adrift.
I admit that I’m a man with a hammer. I see these three groupings at the various colleges and universities I visit, as the three competing images of what computer science is. SCS faculty have history on their side — their view of computing is roughly what was defined in the computing curricula reports from 1968 onward. (I do wonder if those early curricular reports may have defined CS for education too soon, before we really knew what would be important as computing evolved.) IC faculty have modern-day relevance on their side — much of the exciting new computing work that gets picked up in the press comes from this group. Here in the College of Computing, these sides tussle over the shared ownership of our MS and PhD degrees in computer science. (We don’t argue so much about the BS in CS because Threads provides enough flexibility to cover a wide range of definitions.) Do graduate students have to take a course in Systems? In Theory? Aren’t these “core” to what is Computer Science? And what is “Computer Science” anyway? Does (or should) the School of “Computer Science” have a particularly strong say in the matter?
In the latest issue of Communications of the ACM, Dennis Groth and Jeffrey Mackie-Mason argue that we need “informatics” degrees as something separate from a computer science degree. When they list informatics academic units, they include my IC School. They define informatics as “a discipline that solves problems through the application of computing or computation, in the context of the domain of the problem.” That’s close enough to my “computing at the boundary with everything else.” They are arguing that we can make greater advances in informatics by splitting those degrees off from computer science.
As we tussle over the name and identity of “Computer Science,” I increasingly value Dennis and Jeffrey’s point. I can see that IC and CS may be different bodies of knowledge. Computer science students ought to know about RISC processors and assembly language. Students in IC must understand and be able to use empirical methods, especially social science methods like interviews and surveys (e.g., how to put them together well, how to avoid bias in creating them and evaluating the results). These methods are necessary to listen to someone else, figure out their problem, and then later, figure out how the technology solves (or at least, impacts) the problem. When I look at IC-related professionals and researchers, I see few who use knowledge of RISC processors and assembly language in their work. The empirical, social science methods don’t fit well into CS. I was on the committee that wrote the ACM/IEEE Computing Curriculum 2008 Update, and in particular, I was in charge of the HCI section. We had to gut much of what SIGCHI felt was absolutely critical for students to know about working with humans (and which I agreed with) because we simply couldn’t cram more into a CS student’s four years. IC and CS have a significant overlap, but there is a lot in each that is not in the intersection.
We tussle over these degrees and names because, in part, we fear creating a new name. We worry that students won’t be interested in a degree from computing that’s not named “computer science.” But IC co-owns our BS in Computational Media (about 300 students, ~25% female, placing students at places like Electronic Arts and Pixar) and a PhD in Human-Centered Computing (one of the few PhD programs in a computing school that is over 50% female). Students are willing to take a gamble on a degree with a new name, and those degrees draw on a different demographic of students.
I’ve not said much here about CSE yet, but that’s because it’s not big enough to tussle yet. Recently, I got to interview students and teachers in interdisciplinary computational science classes. These classes don’t really work for CS (or IC) students. The computer science being used is too simple for them (so they’re bored while the science students come up to speed), but the science is way harder than they can just jump into. For CS students to succeed in CSE classes, they need to take a bunch of science classes to understand how real scientists are using scientific computing. We run into the same problem as squeezing the important parts of HCI into CS — we run out of room. As CSE grows in numbers and importance, we will eventually find that it doesn’t fit into IC or CS, either. By separating the fields, we encourage greater research advances through tighter focus, and we create better, clearer opportunities for student learning by removing the unnecessary and spending more time on the necessary.
Our high school daughter came home last night with her Sophomore year course election form. She’s considering taking Computing in the Modern World, Georgia’s version of the corresponding course in the ACM Model K-12 Curriculum. She might take Beginning Programming at some point, but she’ll need the pre-requisite…which is listed on the form as the course in “Computer Applications.” Which is WRONG! Barb, who helped write the state standards, knows that Beginning Programming should require Computing in the Modern World. Barb’s comment was, “Who do I have to fight now!?!”
The notion that computer science is just beefed-up applications (CS == Apps++) is prevalent in high schools. Lijun Ni, my student studying high school CS teachers, finds it all the time. “Oh, I’m a computer science teacher! It’s important for students to go beyond computer applications into real computer science!” That’s a true sentiment, but there doesn’t have to be a connection between applications and CS. You can be a great computer scientist and not be able to figure out Word or Excel. The last time I spoke to Jane Margolis, she was facing it in her new pre-AP course in the LA Unified School District. She said that the teachers complain, “How can they learn computer science when they have weak keyboarding skills? We’ll have to do two weeks of keyboarding first.”
Is the problem that computing has been too successful? That you can do so much with applications that they’re considered the fundamentals, the base of all of computing? Or is it that teachers do not understand computer science as a real, academic, rigorous subject? Or is it that high school leaders don’t understand computer science at all? I suspect that we have a chicken-and-egg problem. How do we get real computer science valued in schools when the people making the decisions don’t understand computer science?
With all the excitement over “apps” on the new iPad, maybe now is the time to push: CS != Apps++
We had a visitor at Georgia Tech today, alum Mike Terry, who has been studying the usability practices of open source development teams, like for Gimp, Inkscape, and Firefox. The short answer is, “There are no usability practices,” but that’s a little too pat. It’s a little bit more complicated than that, and actually even more concerning from an education perspective.
The folklore is that open source developers start because they have “an itch to scratch,” something that they want developed. Mike thinks that that’s true, but that scratching that itch doesn’t actually take long. Social factors keep open source developers going — they care about their developer community and working with them.
Mike finds that few projects really care about usability. The argument, “If you made your usability better, you’d increase your user base,” is not enticing to most open source developers. Open source developers have no layers (like salespeople or tech support) between themselves and the public users. Thus, they get inundated with (sometimes ill-informed and even downright stupid) bug reports and feature requests. The last thing open source developers want is more of those.
Since open source developers soon stop being users of their own software, and they don’t want to talk to lots of users, how do they deal with usability? Mike says that the top developers develop close relationships with a few power users, and the developers design to meet those users’ needs. So there is some attention to usability — in terms of what high-end, power users want.
So what happens when a User Experience person wanders into the open source fold? Mike has interviewed some of these folks (often female), and finds that they hate the experience. One said, “I’d never have done it if I wasn’t being paid to do it.” I guess there’s not much of an open source usability developer community. The open source developer community is not welcoming to these “others” with different backgrounds, different goals, and most of all, not a hard-core software development background.
Mike believes that the majority of our software will be open-licensed. I expressed concerns about that future in terms of education.
- How do people get started in developing software in an all open-source world? Mike suggested that open source is a great way for high school students to get started with software development. I pointed out how unfriendly open-source development communities have been to newcomers, especially females, and how open-source development mailing lists have been described as “worse than locker rooms.” Mike agreed with those characterizations, then said, “But once you get past that…” Well, yeah — that’s the point. Margolis and Fisher showed us years ago that those kinds of subtle barriers say, “This is a boys-only club — you don’t belong!” and those can prevent women and underrepresented minorities from even trying to enter the community.
- I worry about the economics of open-source and what signals it sends to people considering the field. Mike assured me that companies like RedHat are making money and hiring programmers — but there are many more unpaid programmers working on RedHat than paid programmers. If the world goes mostly open source, how do we convince students that there are jobs available developing software? Many kids (and parents) already believe that software jobs are all being outsourced. How do we convince them that there are good jobs, and they don’t have to work for years for free before they get those paying jobs?
- Finally, I really worry about the lack of thought-diversity in the open source communities. People who care about usability are driven away from them. While we educators are trying to convince students that not all of computing is about programming, the open source community is telling newcomers that programming is all that matters. If the whole software industry goes open source, we’re going to have a hard time selling the image of a broad field of computing.
I found Mike’s work fascinating, and well grounded in data. I just find the world he describes a little disconcerting. I hope that the open source community considers the education issues of its next generation of developers.
Why are professors so liberal? Why are computer science majors mostly male and white or Asian? One possible answer is the same for each — that’s what we’ve been raised to expect. Typecasting may explain both the liberalism of professors and why nursing is predominantly female.
A pair of sociologists think they may have an answer: typecasting. Conjure up the classic image of a humanities or social sciences professor, the fields where the imbalance is greatest: tweed jacket, pipe, nerdy, longwinded, secular — and liberal. Even though that may be an outdated stereotype, it influences younger people’s ideas about what they want to be when they grow up.
My colleague Nancy Nercessian has been studying how engineering scientists think, and the short form answer is, “With stuff.” They use distributed cognition through the things in their lab in order to think through problems.
Nercessian began by posing the question, “How do engineering scientists think?” The resulting journal article in Topics in Cognitive Science quotes Daniel Dennett: “Just as you cannot do very much carpentry with your bare hands, there’s not much thinking you can do with your bare mind.”
Famously, Edsger Dijkstra is quoted as having said “Computer science is no more about computers than astronomy is about telescopes.” Nancy’s results suggest that, while Dijkstra may be right that computer science is not about computers, a computer scientist can’t think without a computer.
Thomas Sowell’s column appears in the Atlanta Journal-Constitution on Tuesdays, and his column this week appeared under the headline World worse off because of role intellectuals play. Sowell’s argument is that intellectuals overall did more harm than good in the 20th century. Examples of intellectuals who, in Sowell’s opinion, caused great harm include Hitler and Marx. He sets the Wright brothers apart in his article, because they created something.
All these people produced a tangible product or service and they were judged by whether those products and services worked. But intellectuals are people whose end products are intangible ideas, and they are usually judged by whether those ideas sound good to other intellectuals or resonate with the public.
So are computer scientists “intellectuals” by Sowell’s definition? We create products and services, but our products and services are merely intangible ideas. You can’t touch a bit, nor a Web page, nor a window and scroll bar. How do you judge the quality of our products and services? Is there a way of judging software by more than “sound good to other intellectuals or resonate with the public”?
I’m not sure that Sowell’s argument stands up to much criticism — e.g., is the difference between intellectuals and those whose ideas are worth something just that the latter have tangible products and services? If I’m better at marketing, so my ideas turn into a product, then am I no longer “just” an intellectual? Still, the philosophical question of what we are, we who build things out of just thought and some serious typing, is interesting.