Posts tagged ‘image of computer science’
At schools that have closed down CS, journalism has been closed down too. Colorado is now talking about closing down journalism and creating a School of Information. Is that the first step towards closing down CS, too, in keeping with the trend? Isn’t it ironic that CS innovations have led to the closing of journalism schools, but that’s somehow joined with taking down CS, too?
The University of Colorado should eliminate its standalone journalism degree and create both a new school of information and an institute to study the “global digital future,” according to documents released Tuesday by the Boulder campus.
CU officials announced in August that they would take unprecedented steps to possibly close down CU’s traditional School of Journalism and Mass Communication, citing budget cuts and the rapid evolution of media.
Through the program discontinuance process, a CU panel and top campus leaders have recommended shutting down the traditional school and relocating its tenured professors elsewhere on campus.
Thanks to Beki Grinter for pointing this out — an interesting piece recognizing the value of computer science for the liberal arts major.
Computer science exposed two generations of young people to the rigors of logic and rhetoric that have disappeared from far too many curricula in the humanities. Those students learned to speak to the machines with which the future of humanity will be increasingly intertwined. They discovered the virtue of understanding the instructions that lie at the heart of things, of realizing the danger of misplaced semicolons, of learning to labor until what you have built is good enough to do what it is supposed to do. I left computer science when I was 17 years old. Thankfully, it never left me.
I suspect that this is a bigger issue in computer science (and computing, broadly) than in other parts of academia, since our work is so easily commoditized. It’s certainly the case that in my School, creating companies is highly valued and faculty are often encouraged to be entrepreneurs (e.g., see the article our new Dean sent to the whole faculty Saturday.)
Q: Academic research has always cost money to produce, and led to products that made money for others. How is the “commodification” of research different today than in past periods?
A: Commodification means that all kinds of activities and their results are predominantly interpreted and assessed on the basis of economic criteria. In this sense, recent academic research is far more commodified than it was in the past. In general terms, one can say that the relation between “money” and specific academic activity has become much more direct. Consider the following examples: first, the amount of external funding acquired is often used as a measure of individual academic quality; second, specific assessments by individual scientists have a direct impact on departmental budgets; for instance, if I now pass this doctoral dissertation, my department receives a substantial sum of money; if not, it ends up with a budget deficit; third, the growing practice of patenting the results of academic research is explicitly aimed at acquiring commercial monopolies. Related to these financial issues are important and substantial changes of academic culture. Universities are increasingly being run as big corporations. They have a top-down command structure and an academic culture in which individual university scientists are forced to behave like mini-capitalists in order to survive, guided by an entrepreneurial ethos aimed at maximizing the capitalization of their knowledge.
The comments on the National Research Council’s draft Framework for Science Education are due today. Please do visit and comment on them. Overall, after spending some time reading them, I was impressed. They’re good. I liked Dimension 2 on Cross-Cutting Elements quite a bit, and I really liked breaking out the Scientific and Engineering Practices as a separate Dimension. The biggest flaw for me was not highlighting computation as having a special role as a technology used in science education.
To spur thoughts about what to comment on, I’m posting here some of my responses to the survey questions.
- I commented on the Engineering and Technology Core Disciplinary ideas, specifically “ET4 – In today’s modern world everyone makes technological decisions that affect or are affected by technology on a daily basis. Consequently, it is essential for all citizens to understand the risks and responsibilities that accompany such decisions.” My comment: In ET4, there is a missing phrase after “Consequently, it is essential for all citizens to understand the risks and responsibilities that accompany such decisions.” I’d add “with the recognition that the technological world is broad, with diverse members, who may not share the same sense of risk and responsibility.” Students need to realize that technology may be wrong, may lie, may break, and may be used in ways that can harm them — at an appropriate level of understanding depending on developmental progression.
- I recommended adding another ET core disciplinary idea: Students should know about computational technology, as key to scientific modeling practice. A computational model is one that can be executed on a computing device. A computational model builds on our ability to create machines that can store values, do mathematical operations on those values, compare values, and take actions based on those comparisons. These models are limited due to the discrete nature of the computational device, e.g., all computational floating point numbers are simply simulations of real numbers, and parallelism in models is a simulation.
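The point that floating point numbers only simulate the reals can be shown in a few lines of Python (a minimal sketch I'm adding for illustration, not part of the original comment):

```python
# Floating point numbers are discrete simulations of the real numbers:
# they have finite precision, so familiar arithmetic facts can fail.
import sys

# 0.1 has no exact binary representation, so error accumulates:
total = 0.1 + 0.2
print(total)             # 0.30000000000000004, not 0.3
print(total == 0.3)      # False

# There is a smallest relative gap between adjacent 64-bit floats:
print(sys.float_info.epsilon)

# Past 2**53, consecutive integers can no longer be distinguished:
big = 2.0 ** 53
print(big == big + 1)    # True: the "+ 1" is lost to rounding
```

A student scientist who has seen this is better prepared to ask whether a surprising result from a computational model is a feature of the phenomenon or an artifact of the discrete machine.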
- Most of my comments were on the Scientific and Engineering Practices Dimension:
I recommend adding to the definition of “Modeling” to explicitly require computational modeling. “Computational science” is a third kind of science practice (besides theoretical and empirical). All students should have experience generating data from a computational model, analyzing it, and drawing conclusions from that model.
To the practice of “Constructing and Critiquing Arguments,” I recommend adding critique of data gathered or generated via computational means. Students should ask where data on the Internet are coming from and whether those sources are trustworthy. For example, Wolfram Alpha facilitates a wide range of analyses, but the data sources are nearly invisible. Counts of “Likes” or “Review Stars” on polls might easily be falsified.
Under “Collecting, Analyzing, and Interpreting Data,” I recommend including the limitations and affordances of different representations of data. Data generated from a computer model has limitations that should be understood by the student scientist. Lists of x, y, and z positions vs. velocity and acceleration vectors permit different kinds of analyses and modeling algorithms.
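To make the representation point concrete, here is a small sketch (my own hypothetical example, with assumed sample data and time step): positions sampled over time can be turned into velocity vectors by finite differences, but each representation carries information the other lacks.

```python
# Positions sampled at fixed time steps vs. velocity vectors:
# different representations of the same motion support different analyses.
dt = 0.5  # seconds between samples (assumed)
positions = [(0.0, 0.0), (1.0, 0.5), (2.0, 2.0), (3.0, 4.5)]  # (x, y) pairs

# From positions we can estimate velocities by finite differences...
velocities = [
    ((x2 - x1) / dt, (y2 - y1) / dt)
    for (x1, y1), (x2, y2) in zip(positions, positions[1:])
]
print(velocities)  # [(2.0, 1.0), (2.0, 3.0), (2.0, 5.0)]

# ...but velocities alone cannot say *where* the object was: recovering
# the positions requires extra information (the starting point).
x, y = positions[0]
reconstructed = [(x, y)]
for vx, vy in velocities:
    x, y = x + vx * dt, y + vy * dt
    reconstructed.append((x, y))
print(reconstructed == positions)  # True only because we supplied the start
```

The velocity list makes questions like "is the motion accelerating?" easy and questions like "did the paths cross?" impossible; the position list is the reverse. That asymmetry is exactly the limitation-and-affordance trade-off a student scientist should learn to see.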
I got a chance to review the ACM’s response. It’s bold and interesting. The ACM pushes for computer science to be its own disciplinary area, not integrated with engineering and technology. They also responded with a letter, rather than fit their comments into the survey framework as I did. The survey framework is constraining — on purpose, I’m sure, to get focused reaction to issues that the committee was most interested in hearing about. My responses are mostly within-the-box, about injecting computing into what they have. The ACM is going bolder, pushing to add computer science as its own thing. I hope it works.
That’s the lead question in this article from the Revolution (RunRev.com) newsletter. The arguments being made (especially about computer scientists not caring about users) are similar to those that Brian Dorn heard from graphics designers, in explaining why they didn’t take computer science courses. I found fascinating the first argument, that elegance and “best practice” have no place in the actual workplace.
There is a vast gulf between theoretical, “best practice” programming, as taught on many degree courses, and actual programming in the workplace. You may be taught that a certain thing should be done in such a way, or you should never use method xyz, but in practice you may be given a deadline of a week to produce something that “should” take three weeks, and you have to find a way to produce it. Quick and dirty hacks become more explicable in these circumstances. Then there is the difference between producing something satisfying to the mind of a programmer, and something that an end user can actually use and understand. It may be the smart and logical way to do it, but if the end user can’t grasp it, it’s not especially valuable.
I just discovered TileStack, which is HyperCard on the Web. Very cool, but the first comment on the introductory stack is something I heard a good bit these last few weeks at my workshops:
Python, for instance, is very easy to pick up. You might make the argument that it’s much easier to learn Speak [the HyperCard-like language in TileStack], but even if it takes twice as long to learn Python to do the equivalent of making a Stack with Speak, you can at least apply what you learned in many other places other than tilestack.com. Just seems pointless for people to waste their time learning something that only applies to a single website when they could learn something that they could use for many other applications.
Based on my experience, most computer science teachers (much more at the undergraduate faculty level than at the high school level!) believe that the only things worth learning in computer science are those that can be used to make applications.
- As soon as I started teaching about JES and Jython, a set of faculty in every workshop I taught this summer (five workshops, all pretty much full!) asked me, “But how do I build applications?” or “How can I run this outside of JES?” I explained that this was all possible, but that we don’t teach in the first semester how to build standalone applications. Several faculty insisted that I show them how to run Jython with our media libraries separate from JES, and were frankly not interested in listening to anything more I had to say unless they could be convinced that what I was showing them could lead to building standalone applications.
- Several faculty asked me, “But this isn’t Python 3.0, is it? When will you be covering Python 3.0?” That one particularly got my goat. I started responding, “I’m barely covering Python 1.0 in here! I’m trying to teach computer science with the minimum language features, much less whatever special features are in the latest version of a language!” That response seemed to carry some weight.
I was really surprised about that. I hear people regularly decrying the fact that computer science in most states is classified under vocational education. But it’s certainly the case that many university faculty buy into that model! I regularly was told by faculty at these workshops that computer science is only worth learning if it leads to job skills and application-building capabilities. CS education is purely utilitarian, in this model.
Why do we teach people the difference between mitosis and meiosis, or about evolution, or that planets orbit the sun? None of those are job skills, and they certainly won’t lead to building marketable products. Isn’t knowing about computer science and one’s virtual world at least as important as understanding this level of detail about the natural world? I’m going to bet that, if someone were to do a survey, most university faculty don’t really believe in computational thinking, that knowing about computing at some beyond-applications level is important for everyone.
I got back from the International Conference of the Learning Sciences last Friday, and spent less than 48 hours visiting with family, preparing my workshops for this week, burning slides, and heading out to Philadelphia on the Fourth of July. I just taught two days of workshops at The College of New Jersey, and leave tomorrow morning (from Philly, where I am right now) for Blacksburg, VA. You can follow along in the galleries to see what teachers are doing. (To all my friends in the Philly area, my apologies for not looking you up. I’m pretty exhausted after these eight-hour days of teaching, so I’m just laying low.)
I wanted to write up some thoughts about ICLS before I forgot them all.
- ICLS had around 600 attendees — about 1/2 the size of SIGCSE, and six times the size of ICER.
- Carl Wieman gave a really impressive opening keynote talk, on the similarities between the various science disciplines in terms of research on learning. Most impressive for me: It was a completely different talk than his SIGCSE talk! A point that I found interesting related to an earlier blog piece — he said that learning the language of the discipline (the specialized vocabulary) is key and not knowing it interferes with learning.
- I learned a new term “dorsal teaching.” That’s where the (typically, engineering and mathematics) teacher turns to the board and writes, and all you can see is their back and one writing arm (“dorsal fin”) sticking out.
- I told Jennifer Turns of U. Washington about Mike Hewner’s recent studies, and she says that she sees similar issues among her engineering undergraduates. She says that they complain to her, “When are we going to get to engineering?!?” and she sees engineering in all their classes. She likens the problem to the “dancing gorilla” awareness problem. If you don’t know what Engineering is, you won’t recognize and attend to it when you’re studying it.
- A common theme (or hole, really) for me at ICLS was how the learning scientists are studying similar topics to the computing educators, but not asking the same questions. There were umpteen papers and posters studying the use of videogames to support learning. Some of them tracked males and females separately, and reported no differences in learning outcomes. But not a single study asked if the girls were as engaged by the videogames as the boys! When I asked that question in a talk, I was told, “No, the girls weren’t as interested in the games” and “Some of the boys got far too focused on learning and designing the videogames, but didn’t pay attention to the learning goals.” I heard one talk about helping engineering students learn about ecology through video games, and it was perfectly okay with them that the male:female ratio was 5:1. I heard another talk about helping improve reading through videogames, and all their subjects were male, and they argued that that’s important because boys do worse at reading than girls. I am not pointing this out to critique the learning science researchers, because for their questions, maybe those other issues aren’t as important. I found it fascinating that two such similar disciplines look at the same situations with radically different questions and goals.
- Pam Grossman gave a keynote talk on reading poetry, where she chided the learning sciences community for not studying learning in the humanities. She talked about the complexity of the task of reading poetry. She talked about the challenge of getting through ambiguity in a poem, filling in the gaps (especially around words whose meaning you are unsure of), and expecting to read a poem multiple times to get it. Sally Fincher was sitting next to me, and she mentioned that one also has to have life experience to draw upon to relate to a poem. I realized that this was actually a really good list for the challenges of reading code. Not knowing all the subfunctions/methods/whatever being called, code may seem ambiguous, and you may have to read it several times, looking things up, to get it. Having read a bunch of code previously makes it easier to read new code.
- Our panel on learning in the computing disciplines was fascinating — slides are available on-line. We could hardly have come from more different directions. I talked about our learning science related challenges in computing education. Yasmin gave this talk drawing on the history of computing education, especially for K-12, from Logo through tangible programming with LilyPad. Ulrich Hoppe argued against trying to engage students and against using tangible programming, and in favor of using Prolog instead of imperative or object-oriented languages. And Sally drew it all together with some great quotes from Kuhn and Agre.
In answer to a question about biology, I made a claim in the panel that I’d like to bounce off you. I argued that computer science is far bigger than any specific science, in the same way that mathematics is bounded only by human imagination while any natural science is bounded by the real world. If you take any scientific phenomenon, there can be at least one program that simulates that phenomenon correctly, and you can study that using scientific methods. However, there are an infinite number of programs that get close to simulating that phenomenon but get it wrong, and you can only figure out that they’re wrong by using scientific methods (experimentation, measurement, hypothesis testing). In some sense, each program is another natural world to study, and computer science is about understanding any program. Thus, our domain is infinitely larger than any natural science. No wonder it’s so hard to get kids to pass CS1!
I’m writing from Chicago where I’m attending the International Conference of the Learning Sciences 2010. It’s pretty exciting for me to be back here. I helped co-chair the 1998 ICLS in Atlanta, but I haven’t been at this conference since 2002, when my focus shifted from general educational technology to specifically computing education. The theme this week is “Learning in the Disciplines.” I’m here at the invitation of Tom Moher to be part of a panel on Friday morning on computing education, with Yasmin Kafai, Ulrich Hoppe, and Sally Fincher. The questions for the panel are:
- What specific type of knowledge is characteristic of computer science? Is there a specific epistemology?
- Are there unique challenges or characteristics of learning in and teaching about computer science?
- What does learning about computing look like for different audiences: young children, high school, undergraduate, and beyond (e.g., professional scientists, or professionals from non-computing disciplines)? In the case of “non-computing professionals,” what do they learn, and how do they learn it (e.g.,what information ecologies do they draw upon, and how do they find useful information)?
- How do we support (broadly) learning about computer science?
In a couple weeks, I’m giving the keynote talk at EAAI-10: The First Symposium on Educational Advances in Artificial Intelligence. I’m no AI person, but this conference has a strong computing education focus. I’m planning to use this as an opportunity to identify challenges in computing education where I think AI researchers have a particularly strong lever for making things better. Not much travel for that one — I get to stay in Atlanta for a whole week!
In getting ready for my talk Friday, I’ve been trying to use themes from learning sciences to think about learning computing. For example, physics educators (BTW, Carl Wieman is here for the opening keynote tonight) have identified which physics concepts are particularly hard to understand. The challenge to learning those concepts is due in part to misconceptions that students have developed from years of trying to understand the physical world in their daily lives. I’ve realized that I don’t know about computing education research that’s looked at what’s hard about learning concepts in computing, rather than skills. We have lots of studies that have explored how students do (not?) learn how to program, such as in Mike McCracken’s, Ray Lister’s, and Allison Tew’s studies. But how about how well students learn concepts like:
- “All information in a computer is made up of bytes, so any single byte could be anything from the red channel of a pixel in a picture, to an instruction to the processor.” Or
- “All Internet traffic is made up of packets. So while it may seem like you have a continuous closed connection to your grandmother via Skype, you really don’t.”
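The first of those concepts can be demonstrated in a few lines (a sketch I'm adding for illustration, not from the original post): the very same byte means different things depending entirely on how we choose to interpret it.

```python
# One byte is just eight bits; its meaning depends on interpretation.
import struct

b = bytes([72])  # the single byte 0b01001000

# Interpreted as a pixel's red channel: an intensity out of 255.
red_intensity = b[0]
print(red_intensity)              # 72

# Interpreted as text: the ASCII character 'H'.
print(b.decode("ascii"))          # H

# Interpreted as a signed integer via struct: same bits, same 72,
# but now read as a number rather than a color or a character.
print(struct.unpack("b", b)[0])   # 72

# On a real CPU, that same byte could equally be part of a machine
# instruction. Nothing in the byte itself says which role it plays.
```

Whether students hold a coherent model like this, rather than a magical one, is exactly the kind of conceptual (not skill-based) knowledge the question below is asking about.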
Does anybody have any pointers to studies that have explored students learning conceptual (not skill-based) knowledge about computing?
I know that there is an argument that says, “Computing is different from Physics because students have probably never seen low-level computer science before entering our classes, so they have few relevant preconceptions.” I believed that until I saw Mike Hewner’s data from his study of high school students in our Georgia Computes! mentoring program this last year. These are high school students who are being trained to be mentors in our younger student (e.g., middle school kids, Girl Scouts) workshops. They’re getting to see a lot of cool tools and learning a bunch about computer science. Mike found that they had persistent misconceptions about what computer science is, such as “Someone who is really great at Photoshop is a great computer scientist.” While that’s not a misconception about bytes or packets, it’s a misconception that influences what they think is relevant. The concept about bytes might seem relevant if students think that CS is all about great graphics design, but the packet concept interferes with their perception of Skype and doesn’t help with Photoshop — students might ignore or dismiss that, just as physics students say to themselves, “Yeah, in class and on exams, gravity pulls the projectile down, but I know that it’s really about air pressing down on the projectile.” So students’ misconceptions about what’s important about computing might be influencing what they pay attention to, even if they still know nothing about computer science.
I just finished reading James Gleick’s book Faster: The Acceleration of Just About Everything. It’s a 10 year old book now, but the story is still valid today. I didn’t enjoy it as much as his books Chaos or Genius. However, the points of Faster are particularly relevant for computing education.
One of Gleick’s anecdotes was on how AT&T sold Touch Tone dialing in 1964 as saving an average of ten seconds per seven-digit number dialed. Now, we have speed dialing.
In the post-Touch Tone generation, you probably have speed-dial buttons on your telephone. Investing a half-hour in learning to program them is like advancing a hundred dollars to buy a year’s supply of light at a penny discount…To save time, you must invest time.
Do some students and end-user programmers invest time in learning to program to “advance a hundred dollars to buy a year’s supply of light at a penny discount”? Are they looking to program in order to save time, to do things faster and more efficiently? Do they give up on learning to program when they realize that it doesn’t work that way?
The problem is that I don’t think that ever really happens for the individual writing code for him or herself. It’s hard to program. The time cost of programming amortizes over users. The development cost of Microsoft Office, amortized over millions of users, results in a profit for Microsoft. A few hours of a programmer’s time on some feature of Excel enables many hours of use of that feature by many users. But for any individual writing code for him or herself? It takes a lot more than 30 minutes of programming software to get the same usefulness as 30 minutes of programming speed-dial buttons.
So why program? In the Media Computation Python CS1 class, we tell students that they should program in order to create a replicable process (if you need something to be done the same way, maybe by others, many times), to create a process that many people can use (like when commercial software is created), or to communicate a process (like when trying to explain a theory of how something dynamic happens, like DNA transcription or evolution). Paul Graham tells us that hackers write software to create beauty. But few people successfully program in order to save time for themselves — you’d have to do something many times to make the benefits of use outweigh the cost of development.
Maybe it shouldn’t be that way. Maybe software development should be easier. I wonder if you could make it easier, and still keep all the fun, all the communicative power of programming languages, all the “Passion, Beauty, Joy, and Awe”?
The overall story of Faster may be relevant for understanding the decline in interest in computer science. Gleick claims that “boredom” is actually a modern word and concept. “To bore meant, at first, something another person could do to you, specifically by speaking too long, too rudely, and too irrelevantly.” Today, we are bored by simple silence — by not enough challenges, not enough multi-tasking, by too many choices. We have so many options for entertainment that we choose many at once, so that we drive, while listening to the radio, and talking on the cell phone (not texting or doing email, of course). Gleick (naturally, as an author) bemoans the death of the book, because readers are too easily bored to pay attention to a whole book, and always have the options of magazines or blogs or just 140 character “tweets.” Why would anyone make a career choice like “computer science” when there are so many other choices that are less boring, take less concentrated focus, take less time?
Gleick provides an afterword for the electronic version of the book (I read it on my Kindle), where he speaks to some of these concerns:
I believed when I began Faster, and believe now more than ever, that we are reckless in closing our eyes to the acceleration of our world. We think we know this stuff, and we fail to see connections. We struggle to perceive the process of change even as we ourselves are changing.
I wonder if Paul Graham is right, that “Computer science is a grab bag of tenuously related areas thrown together by an accident of history, like Yugoslavia.” I’m wondering because, when I re-read his famous Hackers and Painters essay recently, I found myself listing the other areas not in his analysis but part of what I think of as “computer science”:
- Human-centered computing, the implications of computing for humans and how human concerns (e.g., culture, psychology, economics) influence the design of computing systems.
- The deep-down core of what computing is about, reflected in Alan Kay’s “Triple Whammy” that everyone should know about computing. Is that mathematics? It’s not the natural history or hackers parts. It’s not really an area of research for everyone, but it is something that everyone should know.
- The graphics designers that Brian Dorn is studying, who program not to produce beauty in software, like Graham’s hackers, but to produce software output of value, to produce artifacts that might create beauty. Brian is finding that these people need to know a lot about computer science to make themselves more successful at what they want to do, but they don’t fit into any of Graham’s categories.
Can all these pieces stay together, under some kind of UN-enforced treaty? Or are we bound to split into multiple fields?
I’ve never liked the term “computer science.” The main reason I don’t like it is that there’s no such thing. Computer science is a grab bag of tenuously related areas thrown together by an accident of history, like Yugoslavia. At one end you have people who are really mathematicians, but call what they’re doing computer science so they can get DARPA grants. In the middle you have people working on something like the natural history of computers– studying the behavior of algorithms for routing data through networks, for example. And then at the other extreme you have the hackers, who are trying to write interesting software, and for whom computers are just a medium of expression, as concrete is for architects or paint for painters. It’s as if mathematicians, physicists, and architects all had to be in the same department.
via Hackers and Painters.
The criticism in this blog post is interesting. The blogger agrees with those in the field who are saying we don’t do enough to emphasize the rigor and complexity of computer science. It’s interesting that the author also criticizes CS for not teaching its students enough about how to be a better programmer. Those feel like two different things to me: To learn to be a great programmer, and to understand the deep and interesting questions of CS.
Computer science is shallow, and nearly every place it’s taught is at the mercy of “industry”. They rarely teach deep philosophy and instead would rather either teach you what some business down the street wants, or teach you their favorite pet language like LISP. Even worse, the things that are core to Computer Science like language design, parsing, or state machines, aren’t even taught unless you take an “advanced” course. Hell, you’re lucky if they teach you more than one language.
Another way to explain the shallowness of Computer Science is that it’s the only discipline that eschews paradox. Even mathematics has reams of unanswered questions and potential paradox in its core philosophy. In Computer Science, there’s none. It’s assumed that all of it is pretty much solved and your job as an undergraduate is to just learn to get a job in this totally solved area of expertise.
My aunt and uncle were in town last week. My aunt told Barb and me how many of her friends’ computers were “destroyed” by “watching a YouTube video.” “It almost happened to us, too, but we got a phone call telling us not to watch that video!” Sure, there is probably a website out there that can trick users into installing a virus that can cause damage to their computer, and it may have a video on the website. But I have a hard time believing that simply watching a video on a website like YouTube might “destroy” one’s computer (or more specifically from her explanation, erase one’s hard disk). Belief that that could happen seems like a belief in magic and mythology, like the belief that a chariot draws the sun across the sky. We ask everyone to take classes in history and biology, because they should understand how their world works, whether or not they will major in those fields. It’s part of being an informed citizen who does not believe that the world runs by magic and myths. What does everyone need to know about computer science?
Alan Kay and I were having an email conversation about this question, about what was the core of computer science that everyone ought to know about, even non-majors. He came up with a “triple whammy” list that I really like. It may need some re-phrasing, but there’s something deep there. I’m copy-pasting his notes to me (repeated here with his permission) in italic-bold, with my interpretation and commentary between.
It is all about the triple whammy of computing.
1. Matter can be made to remember, discriminate, decide and do
In his book Pattern on the Stone, Danny Hillis points out that modern-day CPUs are just patterns on stone, essentially the stuff of sand. We are able to realize YouTube and eBay and natural language translation and Pixar movies all because we can make patterns on stones that can remember things, distinguish between options, act on those distinctions, and do things from playing sounds to actuating robots. This feels like magic, that matter can do those things, but mechanical engineers would find this first step unsurprising. They know how to make machines made out of matter that can do these things, even without modern computers. Whammy #1 is an important step away from magic, but isn’t yet computer science.
2. Matter can remember descriptions and interpret and act on them
In step 2, we get to programs and programming languages. We can describe processes, and our matter can act on those descriptions. While we can do this with steam engines and mechanical engineering, it’s complicated and not obvious. We do this regularly in computer science.
3. Matter can hold and interpret and act on descriptions that describe anything that matter can do.
This third step is amazingly powerful — it’s where we go meta. We can describe the matter itself as programs. Now we can create abstractions on top of our programming languages. Now we can point out that any program can be written in any programming language. This doesn’t directly address my aunt’s misconceptions, but if she understood the third whammy, we could talk about how a badly written media player could interpret a nefariously designed video such that the video instructs a too-powerful media player to trash a hard disk, and how unlikely that is. This third step is where we get to the role that computer science can play in dispelling the sense that computing is magic.
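A tiny example makes the “go meta” step concrete (the mini-language here is invented for illustration): a description of a computation is just ordinary data until another program interprets it.

```python
# Whammy #3 in miniature: a program, held as plain data (nested
# tuples), that another program interprets and acts on.

def evaluate(expr):
    """Interpret a nested-tuple description of arithmetic."""
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    if op == "+":
        return a + b
    if op == "*":
        return a * b
    raise ValueError("unknown operator: %r" % op)

# Until evaluate() acts on it, this is just remembered data.
description = ("+", 2, ("*", 3, 4))
print(evaluate(description))  # → 14
```

The same trick scales all the way up: a Python interpreter is itself a description that some other machinery (ultimately, patterns on stone) interprets.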
The Triple Whammy isn’t all of computer science. There is a lot more than these three steps. For example, I think that everyone should know about limits of computability, and about the possibility of digitizing information in any medium (thus allowing for visualization of sound or auralization of stock market data). But I do see the Triple Whammy as part of a core, one that could fit into any CS1 for any student.
We definitely talk about steps 1 and 2 in the Media Computation CS1, and parts of step 3. For example, we define a simple line-drawing language, then build an interpreter (which just executes each line-drawing statement) and a compiler (which generates the equivalent Python function or Java method) for that language. We do that in order to explain (in part) why Photoshop is faster than Python for any image filter we create. But we do not yet make the Triple Whammy explicit. As I work on the PowerPoint slides for the second edition of the Python book, I’m thinking about building a “Triple Whammy” slide deck, to encourage teachers to have this discussion with their students in a Media Computation context. I’ll bet that TeachScheme already gets there.
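For readers who haven’t seen the Media Computation example, here is a sketch of the same idea; the command names and details are my invention, not the book’s. An interpreter acts on each line-drawing statement directly, while a compiler translates the same description into an equivalent Python function.

```python
# A hypothetical line-drawing language: each program is text, one
# statement per line, e.g. "line 0 0 100 100". The "canvas" is just
# a list of drawing operations so the sketch stays self-contained.

def interpret(program, canvas):
    """Interpreter: act on each statement as it is read."""
    for statement in program.splitlines():
        parts = statement.split()
        if parts and parts[0] == "line":
            x1, y1, x2, y2 = map(int, parts[1:])
            canvas.append(("line", x1, y1, x2, y2))

def compile_to_python(program, func_name="drawing"):
    """Compiler: generate equivalent Python source from the description."""
    out = ["def %s(canvas):" % func_name]
    for statement in program.splitlines():
        parts = statement.split()
        if parts and parts[0] == "line":
            x1, y1, x2, y2 = parts[1:]
            out.append("    canvas.append(('line', %s, %s, %s, %s))"
                       % (x1, y1, x2, y2))
    return "\n".join(out)

program = "line 0 0 100 100\nline 100 100 200 0"

canvas = []
interpret(program, canvas)          # route 1: interpret directly

namespace = {}
exec(compile_to_python(program), namespace)
canvas2 = []
namespace["drawing"](canvas2)       # route 2: compile, then run

assert canvas == canvas2            # both routes produce the same drawing
```

The compiled route wins on speed because the translation cost is paid once, which is the Photoshop-versus-Python point: Photoshop’s filters are already compiled down to fast code, while our Python filters are re-interpreted every run.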
What I really like about this list is that it clearly explains why Computer Science isn’t just advanced use of application software. We see adults and kids in our studies all the time who tell us that somebody really good at Photoshop is a computer scientist. We hear from teachers and principals regularly who tell us that they teach computer science, because here’s the book they use for introducing Excel and Access. The Triple Whammy is about computer science and not about using applications.
Budget cuts and low enrollment have led to this:
In similar letters from Paul Tobias (Chairman, Albion College Board of Trustees) sent to the Albion faculty and the Albion family, the Board of Trustees reported that they have eliminated computer science as a major at Albion College and that Albion College may continue to offer a computer science minor. In the process, an untenured Assistant Professor has been notified his position will be discontinued after the 2010-2011 academic year. The letter to students also indicated “Students who are currently enrolled in the affected programs will receive personalized advising to enable them to accomplish their academic goals and fulfill their graduation requirements for their major in a timely manner.”
In other news coverage, they detail the cuts overall:
Majors in computer science and physical education and minors in dance, journalism and physical education will not be part of the college’s curriculum moving forward — a reduction strategy that will eliminate about 12 courses, said Dr. Donna Randall, the college’s president.
That comparison point really hit home. Newspapers are dying, so journalism is less valued and on the chopping block. Okay, I get that. Physical education is the least rigorous field of teacher preparation, so if you have to chop one, it’s the least valued. And computer science is in that group.
To me, this is a sign of the dire straits of computer science and university budgets these days. More than that, it’s a sign that computing literacy among the general public is at an all-time low. The uproar about these decisions is that they were made by a governing board, against the wishes of the faculty. Does this governing board see computer science as so useless, so lacking in value? The board made this decision based on the question “how do we best prepare our students for meaningful … work in the 21st century?” What do they think computer science is?
The Qatar Foundation inspired me on my visit to Qatar last week. We were told that the point of the Qatar Foundation is to prepare their country for a “post-carbon” world. Yes, Qatar is amazingly wealthy today from their oil and gas exports, but they recognize that they have maybe 100 years of oil left. What happens after that? The Qatar Foundation is investing a lot of that wealth in changing their culture so that their people are generators of intellectual property, to create a “knowledge-based society,” to sustain their economy when the oil has run out. (Do you know of other nations that are taking so seriously the effort to prepare for a “post-carbon” world?)
One of their strategies is to create Education City, an enormous campus where six prestigious American universities have satellite campuses. We visited CMU Qatar, which has a beautiful building and active research programs.
Do you see that sign along the walkway in the CMUQ building? “Create. Inform. Connect.” That campaign is everywhere in Doha. Around Education City are these enormous (maybe 10 feet tall?) free-standing signs, exhorting the people to:
Some of these signs are multi-story tall, hanging on the faces of skyscrapers in downtown Doha:
While the vision is inspiring, there are curves in the road that are hard to see around. You may have read my post from Qatar: the women in CS at Qatar University are keen to build new applications, to embrace “geekiness.” But more women attend Qatar University than CMU Qatar. They want gender-segregated education. They want to get their degrees and then work in Doha. They will not move away.
The faculty at Qatar University told us that they are planning to increase the amount of IT coverage in their degree and their curriculum, because that’s where the jobs are in Doha. Most computing companies in Doha adopt technology from elsewhere, then adapt it for Qatari and Middle Eastern culture. They customize and manage (which is where IT curricula excel), rather than create (which is where CS curricula focus).
The ACM Education Board is talking to people in the Middle East about having a summit on Computing Education. Should that summit focus on teaching students about Information Technology (IT), or about Computer Science (CS)? The CMUQ folks say, “Computer Science!” Aren’t the jobs today in IT, not CS? “No, the jobs aren’t there now, but they will be when they graduate.”
Will the jobs be there? I like the quote from William Gibson, “The future is already here – it’s just not evenly distributed.” The Qatar Foundation wants the future of Qatar to be about intellectual innovation. The current jobs in Qatar are about adaptation of others’ innovations. The women embracing CS in Qatar want jobs in Qatar. They can’t move to where CS jobs might be elsewhere. Will the future arrive quickly enough in Doha to meet the predictions of the CMUQ faculty, to give jobs to the students studying CS today? It would be a tragedy to teach these women computer science, where Qatar sees its future, only for that education to go unused because of a lack of jobs today.
Changing a nation’s culture is a hard job. I am inspired by the efforts of the Qatar Foundation. If I were a CS faculty member in Qatar, I don’t know how I would make these tough decisions. Prepare the students for the near future, or the further future? And how fast is that further future arriving?