Archive for May, 2012
Be What’s Next! – A sentiment to drive computing education
I liked Alfred’s sentiment in this post: “Be what’s next!” The issues around my BLS post remind me of this idea. We don’t want to give up on what we have always done, because we want to retain those outcomes. But what if those outcomes are less useful today? “Be what’s next!”
Grace Hopper used to tell her audiences that if they ever used "because we have always done it this way" as an excuse for anything, she would magically appear next to them to "haunt" them. I first heard her say that some 40 years ago and it has stuck with me ever since. And yet people do use that as an excuse. Oh, they may say it differently, but that is what they often mean. In computer science education, all too often people believe that because they learned computer science some way, everyone should learn it that way. It's not as bad as it used to be, but at times I wonder if people are just saying it differently. For example, "we use command line application programming because we don't want students getting too wrapped up in GUI stuff." Or perhaps, "we need students to use text editors and command line compilers so that they really understand what is going on." Baloney, I say. Use modern tools and let students create applications that are real-looking and relevant to them. In the long run, this will be more incentive to learn than anything else.
CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts
Boy, do I want to learn more about this! ChucK and Processing, and two semesters — it sounds like Media Computation on steroids!
The National Science Foundation (NSF) has awarded California Institute of the Arts (CalArts) a grant of $111,881 to develop a STEM (Science, Technology, Engineering and Mathematics) curriculum for undergraduate students across the Institute’s diverse arts disciplines. The two-semester curriculum is designed to teach essential computer science skills to beginners. Classes will begin in Fall 2012 and are open to students in CalArts’ six schools—Art, Critical Studies, Dance, Film/Video, Music and Theater.
This innovative arts-centered approach to teaching computer science—developed by Ajay Kapur, Associate Dean of Research and Development in Digital Arts, and Permanent Visiting Lecturer Perry R. Cook, founder of the Princeton University Sound Lab—offers a model for teaching that can be replicated at other arts institutions and extended to students in similar non-traditional STEM contexts.
Women leave academia more than men, but greater need to change in computing
I did my monthly post at Blog@CACM on some of the recent data on how few women there are in computing. I suggested that things haven't gotten better in the last 10 years because we really haven't decided that there's a problem with under-representation. The comments to that post suggest that I'm right. Blog@CACM posts don't often get comments. Three in a week is a lot, and two of those expressed the same theme: "Women are choosing not to go into IT. Why is that a problem?" It's a problem because there are too few people in IT, and there are many women who could do the work whom we should be trying to recruit, motivate, and engage, even if it requires us to change our own cultures and careers. Computing has a bright future, and I predict that most applications of computing in our lives are still to be invented. We need a diverse range of people to meet that future, and change in our culture and careers would be healthy.
The situation is different with respect to academia. The article linked below points out that women are turned off to careers in academia at greater rates than men. Other recent work suggests that students in doctorate programs lose interest in academia the longer that they are in it. There should be more women in academia, and academic cultures and careers should change to be more attractive to a broader range of qualified applicants. But what could make that happen?
In contrast to the computing industry, academia isn't growing. The economics in academia are changing, and there will be fewer academic jobs (especially in CS). I still believe that we ought to ramp up CS faculty hiring, in order to offer computing to more people (even everyone) on campus, but the economics and organizational trends are against me. If we were to hire in academia, we should make an effort to draw in more women and more under-represented minorities. We absolutely should strive to improve the culture and career prospects in academia to retain the (relatively little) diversity that we now have in academia. But neither hiring nor retention is at the top of academia's concerns right now. Maybe the young scientists are wise to seek other opportunities, and PhD students are figuring out that academia may not hold great career prospects?
Young women scientists leave academia in far greater numbers than men for three reasons. During their time as PhD candidates, large numbers of women conclude that (i) the characteristics of academic careers are unappealing, (ii) the impediments they will encounter are disproportionate, and (iii) the sacrifices they will have to make are great.
Men and women show radically different developments regarding their intended future careers. At the beginning of their studies, 72% of women express an intention to pursue careers as researchers, either in industry or academia. Among men, 61% express the same intention.
By the third year, the proportion of men planning careers in research had dropped from 61% to 59%. But for the women, the number had plummeted from 72% in the first year to 37% as they finish their studies.
Visual ability predicts a computer science career: Why? And can we use that to improve learning?
I’ve raised this question before, but since I just saw Nora Newcombe speak at NCWIT, I thought it was worth raising the issue again. Here’s my picture of one of her slides — could definitely have used jitter-removal on my camera, but I hope it’s clear enough to make the point.
This is from a longitudinal study, testing students' visual ability, then tracking what fields they go into later. Having significant visual ability most strongly predicts an Engineering career, but in second place (and really close) is "Mathematics and Computer Science." That score at the bottom is worth noting: having significant visual ability is negatively correlated with going into Education. Nora points out that this is a significant problem. Visual skills are not fixed. Training in visual skills improves those skills, and the effect is durable and transferable. But the researchers at SILC found that teachers with low visual skills had more anxiety about teaching visual skills, and that anxiety depressed the impact on their students. A key part of Nora's talk was showing how the gender gap in visual skills can be easily reduced with training (relating to the earlier discussion about intelligence), such that women perform just as well as men.
The Spatial Intelligence and Learning Center (SILC) is now in its sixth year of a ten-year program. I don't think that they're going to get to computer science before the 10th year, but I hope that someone does. The results in mathematics alone are fascinating and suggest some significant interventions for computer science. For example, Nora mentioned an in-press paper by Sheryl Sorby showing how teaching students how to improve their spatial skills improved their performance in Calculus, and I have heard that she has similar results about computer science. Could we improve learning in computer science (especially data structures) by teaching spatial skills first?
Next Generation Science Standards available for comment now through 1 June
Check out “Gas station without pumps” for more on the Next Generation Science Standards, available now for comment (but only through this week). There is a bit of computational thinking and computing education in there, but buried (as the blog post points out). I know that there is a developing effort to get more computation in there.
The first public draft of the Next Generation Science Standards is available from May 11 to June 1. We welcome and appreciate your feedback. [The Next Generation Science Standards]
Note that there are only 3 weeks given for the public review of this draft of the science standards, and that time is almost up. I’ve not had time to read the standards yet, and I doubt that many others have either. We have to hope that someone we respect has enough time on their hands to have done the commenting for us (but the people I respect are all busy—particularly the teachers who are going to have to implement the standards—so who is going to do the commenting?).
I’m also having some difficulty finding a document containing the standards themselves. There are clear links to front matter, how to interpret the standards, a survey for collecting feedback, a search interface, and various documents about the standards, but I had a hard time finding a simple link to a single document containing all the standards. It was hidden on their search page, rather than being an obvious link on the main page.
via Next Generation Science Standards « Gas station without pumps.
Why high-income students do better: It’s not the velocity but the acceleration
Low-income students and schools are getting better, according to this study. They’re just getting better so much more slowly than the wealthy students and schools. Both are getting better incrementally (both moving in the right direction), but each increment is bigger for the rich (acceleration favors the rich).
We heard something similar from Michael Lach last week. The NSF CE21 program organized a workshop for all the CS10K efforts focused on teacher professional development. It was led by Iris Weiss, who runs one of the largest education research evaluation companies. Michael was one of our invited speakers, on the issue of scaling. Michael has been involved in Chicago Public Schools for years, and just recently finished a stint at the Department of Education. He told us about his efforts to improve reading, math, and science scores through a focus on teacher professional development. It really worked, at both the K-8 and high school levels. Both high-SES (socioeconomic status) and low-SES students improved compared to control groups. But the gap didn't get smaller.
Despite public policy and institutional efforts such as need-blind financial aid and no-loan policies designed to attract and enroll more low-income students, such students are still more likely to wind up at a community college or noncompetitive four-year institution than at an elite university, whether a member of the Ivy League or a state flagship.

The study, "Running in Place: Low-Income Students and the Dynamics of Higher Education Stratification," will be published next month in Educational Evaluation and Policy Analysis, but an abstract is already available on the journal's website.

"I think [selective colleges] very much want to bring in students who are low-income, for the most part," said Michael N. Bastedo, the study's lead author and an associate professor of higher education at the University of Michigan. "The problem is, over time, the distance between academic credentials for wealthy students and low-income students is getting longer and longer…. They're no longer seen as competitive, and that's despite the fact that low-income students are rising in their own academic achievement."
Stereotype threat and growth mindset: If we tell students intelligence is malleable, are we lying?
This week at the NCWIT Summit, I heard Joshua Aronson speak on stereotype threat. I've read (and even taught) about stereotype threat before, but there's nothing like hearing the stories and descriptions from the guy who co-coined the term. Stereotype threat is "apprehension arising from the awareness of a negative stereotype or personal reputation in a situation where the stereotype or identity is relevant, and thus comparable." Aronson has lots of examples. Remind women of their gender (and implicitly, of the stereotype that says women are worse than men at math) and their scores drop on math tests. Remind African Americans of their race (and implicitly, of the stereotype about African Americans and intelligence) and their scores on IQ tests drop.
I took a picture of one of Aronson’s slides. He observed that most of the tests in the laboratory experiments were, well, laboratory experiments. They weren’t “real,” that is, they didn’t count for anything. So what if we tweaked the AP Calculus test? Typically, the AP Calc asks students their gender just before they start the test, which makes the stereotypes about gender salient. What if you moved that question to the end of the test? Here are the results:
If you ask before, women do much worse than men, as past results have typically shown. If you ask after, the women do better than the men, but the men also do much worse than before! Reminding men of their gender, and the stereotype, improves their performance. Don’t remind them, and they do worse. Which leaves us in a tough position: When should you ask gender?
Now, there is a solution here: Dweck’s fixed vs growth mindset. Many children believe that intelligence is a fixed quantity, so if they do badly at something, they believe that they can’t do better later with more work. What if we emphasize that intelligence is malleable? Writes Dweck in Brainology:
The wonderful thing about research is that you can put questions like this to the test — and we did (Kamins and Dweck, 1999; Mueller and Dweck, 1998). We gave two groups of children problems from an IQ test, and we praised them. We praised the children in one group for their intelligence, telling them, “Wow, that’s a really good score. You must be smart at this.” We praised the children in another group for their effort: “Wow, that’s a really good score. You must have worked really hard.” That’s all we did, but the results were dramatic. We did studies like this with children of different ages and ethnicities from around the country, and the results were the same.
Here is what happened with fifth graders. The children praised for their intelligence did not want to learn. When we offered them a challenging task that they could learn from, the majority opted for an easier one, one on which they could avoid making mistakes. The children praised for their effort wanted the task they could learn from.
The children praised for their intelligence lost their confidence as soon as the problems got more difficult. Now, as a group, they thought they weren’t smart. They also lost their enjoyment, and, as a result, their performance plummeted. On the other hand, those praised for effort maintained their confidence, their motivation, and their performance. Actually, their performance improved over time such that, by the end, they were performing substantially better than the intelligence-praised children on this IQ test.
Aronson and colleagues asked in their Department of Education report: “Does teaching students to see intelligence as malleable or incrementally developed lead to higher motivation and performance relative to not being taught this theory of intelligence?” They did find that teaching a growth mindset really did result in higher motivation and performance. They recommended the strategy, “Reinforce for students the idea that intelligence is expandable and, like a muscle, grows stronger when worked.”
It turns out that, if you teach students about growth mindset, then they are less likely to be influenced by stereotype threat. Dweck writes in her Brainology essay:
Joshua Aronson, Catherine Good, and their colleagues had similar findings (Aronson, Fried, and Good, 2002; Good, Aronson, and Inzlicht, 2003). Their studies and ours also found that negatively stereotyped students (such as girls in math, or African-American and Hispanic students in math and verbal areas) showed substantial benefits from being in a growth-mindset workshop. Stereotypes are typically fixed-mindset labels. They imply that the trait or ability in question is fixed and that some groups have it and others don’t. Much of the harm that stereotypes do comes from the fixed-mindset message they send. The growth mindset, while not denying that performance differences might exist, portrays abilities as acquirable and sends a particularly encouraging message to students who have been negatively stereotyped — one that they respond to with renewed motivation and engagement.
Dweck is pretty careful in how she talks about intelligence, but some of the others are not. She talks about "while not denying that performance differences might exist" and "portrays abilities as acquirable" (emphasis mine). The Dept of Ed report says we should tell students that "intelligence is expandable." Is it? Is intelligence actually malleable?
The next workshop I went to after Aronson's was Christopher Chabris's on women and the collective intelligence of human groups. Chabris showed fascinating work indicating that the proportion of women in a group raises the collective intelligence of the group. But before he got into his study, he talked about personal and collective intelligence. He quoted Charles Spearman from 1904: "Measurements of cognitive ability tend to correlate positively across individuals." Virtually all intelligence tests correlate positively, which suggests that they're measuring the same thing, the same psychological construct. What's more, Chabris showed us that the variance in intelligence can be explained in terms of physical structures of the brain. Personal intelligence is due to physical brain structures, but we can work collectively to do more and think better.
My Georgia Tech colleague, Randy Engle, was interviewed in the NYTimes a few weeks ago, arguing that intelligence is fixed. It’s due to unchanging physical characteristics of the brain. We can’t change it.
For some, the debate is far from settled. Randall Engle, a leading intelligence researcher at the Georgia Tech School of Psychology, views the proposition that I.Q. can be increased through training with a skepticism verging on disdain. “May I remind you of ‘cold fusion’?” he says, referring to the infamous claim, long since discredited, that nuclear fusion could be achieved at room temperature in a desktop device. “People were like, ‘Oh, my God, we’ve solved our energy crisis.’ People were rushing to throw money at that science. Well, not so fast. The military is now preparing to spend millions trying to make soldiers smarter, based on working-memory training. What that one 2008 paper did was to send hundreds of people off on a wild-goose chase, in my opinion.
“Fluid intelligence is not culturally derived,” he continues. “It is almost certainly the biologically driven part of intelligence. We have a real good idea of the parts of the brain that are important for it. The prefrontal cortex is especially important for the control of attention. Do I think you can change fluid intelligence? No, I don’t think you can. There have been hundreds of other attempts to increase intelligence over the years, with little or no — just no — success.”
via Can You Make Yourself Smarter? – NYTimes.com.
Is intelligence expandable and malleable, or is it physical and fixed? There is a level where it doesn’t matter. Telling students that intelligence is expandable and malleable does have an effect. It results in higher test scores and better performance. But on the other hand, is it good policy to lie to students, if we’re wrong about the malleability?
Maybe we’re talking about different definitions of “intelligence.” Engle and Chabris may be talking about a core aspect of intelligence that is not malleable, and Dweck and Aronson may be talking about knowledge, skills, and even metacognitive skills that can be grown throughout life. But we say that “intelligence” is malleable, and the work in stereotype threat tells us that the language matters. What words we use, and how (and when) we prompt students impacts performance. If we don’t say “intelligence can be grown like a muscle” and instead say, “knowledge and skills are expandable and malleable,” would we still get the same benefits?
I’m not a psychologist. When I was an education graduate student, I was told to think about education as “psychology engineering.” Educators take the science of psychology into actual practice to create learning systems and structures. I look to the psychology to figure out how to help students learn. While Dweck and Aronson are explicitly giving educators strategies that really work, I worry about the conflict I see between them and other psychologists in terms of the basic science. Is it a good strategy to get positive learning effects by telling students something that may not be true?
Defining: What does it mean to understand computing?
In the About page for this blog, I wrote, "Computing Education Research is about how people come to understand computing, and how we can facilitate that understanding." Juha Sorva's dissertation (now available!) helped me come to an understanding of what it means to "understand computing." I describe a fairly technical (in terms of cognitive and learning sciences) definition, which basically is Juha's. I end with some concrete pedagogical recommendations that are implied by this definition.
A Notional Machine: Benedict du Boulay wrote in the 1980s about a "notional machine," that is, an abstraction of the computer that one can use for thinking about what a computer can and will do. Juha writes:
Du Boulay was probably the first to use the term notional machine for “the general properties of the machine that one is learning to control” as one learns programming. A notional machine is an idealized computer “whose properties are implied by the constructs in the programming language employed” but which can also be made explicit in teaching (du Boulay et al., 1981; du Boulay, 1986).
The notional machine is how to think about what the computer is doing. It doesn't have to be about the CPU at all. Lisp and Smalltalk each have small, well-defined notional machines — there is a specific definition of what happens when the program executes, in terms of application of S-expressions (Lisp) and in terms of message sending to instances of classes (Smalltalk). C has a different notional machine, which isn't at all like Lisp's or Smalltalk's. C's notional machine is closer to the notional machine of the CPU itself, but is still a step above the CPU (e.g., there are no assignment statements or types in assembly language). Java has a complicated notional machine that involves both object-oriented semantics and bit-level semantics.
A notional machine is not a mental representation. Rather, it’s a learning objective. I suggest that understanding a realistic notional machine is implicitly a goal of computational thinking. We want students to understand what a computer can do, what a human can do, and why that’s different. For example, a computer can easily compare two numbers, can compare two strings with only slightly more effort, and has to be provided with an algorithm (that is unlikely to work like the human eye) to compare two images. I’m saying “computer” here, but what I really mean is, “a notional machine.” Finding a route from one place to another is easy for Google Maps or my GPS, but it requires programming for a notional machine to be able to find a route along a graph. Counting the number of steps from the top of the tree to the furthest leaf is easy for us, but hard for novices to put in an algorithm. While it’s probably not important for everyone to learn that algorithm, it’s important for everyone to understand why we need algorithms like that — to understand that computers have different operations (notional machines) than people. If we want people to understand why we need algorithms, and why some things are harder for computers than humans, we want people to understand a notional machine.
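To make that tree example concrete, here is a minimal sketch of the kind of algorithm the notional machine needs for "counting the steps to the furthest leaf." The nested-pair representation of a tree is my own assumption for illustration, not anything from the post:

```python
# A tree as nested structures: each node is a (label, children) pair.
# Counting the steps from the root to the furthest leaf is instant for
# the human eye, but the notional machine needs an explicit recursive rule.

def height(node):
    label, children = node
    if not children:      # a leaf: no steps below it
        return 0
    # one step down, plus the deepest of the subtrees
    return 1 + max(height(child) for child in children)

tree = ("root", [
    ("a", [("c", []), ("d", [("e", [])])]),
    ("b", []),
])

print(height(tree))  # → 3
```

The point isn't the particular code; it's that the "look and see" operation a person performs has to be decomposed into operations the notional machine actually has.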
Mental Models: A mental model is a personal representation of some aspect of the world. A mental model is executable ("runnable" in Don Norman's terms) and allows us to make predictions. When we turn a switch on and off, we predict that the light will go on and off. Because you were able to read that sentence and know what I meant, you have a mental model of a light which has a switch. You can predict how it works. A mental model is absolutely necessary to be able to debug a program: you have to have a working expectation of what the program was supposed to do, and how it was supposed to get there, so that you can compare what it's actually doing to that expectation.
So now I can offer a definition, based on Juha’s thesis:
To understand computing is to have a robust mental model of a notional machine.
My absolutely favorite part of Juha's thesis is his Chapter 5, where he describes what we know about how mental models are developed. I've already passed on the PDF of that chapter to my colleagues and students here at Georgia Tech. He found some fascinating literature about the stages of mental model development, about how mental models can go wrong (it's really hard to fix a flawed mental model!), and about the necessary pieces of a good mental model. de Kleer and Brown provide a description of mental models in terms of sub-models, and tell us what principles are necessary for "robust" mental models. The first and most important principle is this one (from Juha Sorva's thesis, page 55):
- The no-function-in-structure principle: the rules that specify the behavior of a system component are context free. That is, they are completely independent of how the overall system functions. For instance, the rules that describe how a switch in an electric circuit works must not refer, not even implicitly, to the function of the whole circuit. This is the most central of the principles that a robust model must follow.
When we think about a switch, we know that it opens and closes a circuit. A switch might turn on and off a light. That would be one function for the switch. A switch might turn on and off a fan. That’s another function for a switch. We know what a switch does, completely decontextualized from any particular role or function. Thus, a robust mental model of a notional machine means that you can talk about what a computer can do, completely apart from what a computer is doing in any particular role or function.
A robust mental model of a notional machine thus includes an understanding of how an IF or WHILE or FOR statement works, or what happens when you call a method on an object in Java (including searching up the class hierarchy), or how types work — completely independently of any given program. If you don't know the pieces separately, you can't make predictions, or understand how they serve a particular function in a particular program.
It is completely okay to have a mental model that is incomplete. Most people who use scissors don’t think about them as levers, but if you know physics or mechanical engineering, you understand different sub-models that you can use to inform your mental model of how scissors work. You don’t even have to have a complete mental model of the notional machine of your language. If you don’t have to deal with casting to different types, then you don’t have to know it. Your mental model doesn’t have to encompass the notional machine. You just don’t want your mental model to be wrong. What you know should be right, because it’s so hard to change a mental model later.
These observations lead me to a pedagogical prediction:
Most people cannot develop a robust mental model of a notional machine without a language.
Absolutely, some people can understand what a computer can do without having a language given to them. Turing came up with his machine without anyone telling him what the operations of the machine could do. But very few of us are Turings. For most people, having a name (or a diagram — visual notations are also languages) for an operation (or sub-model, in de Kleer and Brown's terms) makes it easier for us to talk about it, to reference it, to see it in the context of a given function (or program).
I’m talking about programming languages here in a very different way than how they normally enter into our conversation. In much of the computational thinking discussion, programming is yet another thing to learn. It’s a complexity, an additional challenge. Here, I’m talking about languages as a notation which makes it easier to understand computing, to achieve computational thinking. Maybe there isn’t yet a language that achieves these goals.
Here’s another pedagogical recommendation that Juha’s thesis has me thinking about:
We need to discuss both structure and function in our computing classes.
I suspect that most of the time when I describe "x = x + 1" in my classes, I say, "increment x." But that's the function. Structurally, it's an assignment statement. Do I make sure that I emphasize both aspects in my classes? Students need both, and to have a robust mental model, they probably need the structure emphasized more than the function.
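A small illustration of the distinction (my own example, not from Juha's thesis): the same structure, an assignment statement, serves two different functions inside the same loop.

```python
# Structure: both statements inside the loop are assignment statements --
# evaluate the right-hand side, then store the result in the variable on
# the left. Function: one "counts the items," the other "sums the prices."

count = 0
total = 0
for price in [3, 5, 2]:
    count = count + 1      # function: increment the counter
    total = total + price  # function: accumulate the total

print(count, total)  # → 3 10
```

A student who only knows the function ("that line counts things") can't predict what `count = count + price` would do; a student who knows the structure can.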
We see that distinction between structure and function a lot in Juha’s thesis. Juha not only does this amazing literature review, but he then does three studies of students using UUhistle. UUhistle works for many students, but Juha also explores when it didn’t — which may be more interesting, from a research perspective. A common theme in his studies is that some students didn’t really connect the visualization to the code. They talk about these “boxes” and do random walks poking at graphics. As he describes in one observation session (which I’m leaving unedited, because I enjoyed the honesty of Juha’s transcripts):
What Juha describes isn’t unique to program visualization systems. I suspect that all of us have seen or heard something pretty similar to the above, but with text instead of graphics. Students do “random walks” of code all the time. Juha talks a good bit about how to help his students better understand how UUhistle graphical representations map to code and to the notional machine.
Juha gives us a conceptual language to think about this with. The boxes and “incomprehensible things” are structures that must be understood on their own terms, in order to develop robust mental models, and understood in terms of their function and role in a program. That’s a challenge for us as educators.
So here’s the full definition: Computing education research is about understanding how people develop robust models of notional machines, and how we can help them achieve those mental models.
NCWIT Pioneer Awards to two women of Project Mercury: Following their passions
The NCWIT Aspirations Awards have highlighted the achievements of high school women aiming for computing careers. Last night, NCWIT started a new award, the Pioneer Award, given to women who paved the way for others in computing.
The awardees were Patricia Palumbo (left in picture) and Lucy Simon Rakov (right), who were both programmers on the Project Mercury team. I loved their talks, not just for what they did on Project Mercury, but for what they did with the rest of their lives.
Patricia graduated with a mathematics degree from Barnard College, one of only two women. She was recruited by IBM and worked on the re-entry phase. I loved her story of what she did after that. She earned her Master’s degree in piano, and did work in computer music composition and image processing (Media Computation!). She has retired from computing, but still teaches piano and records in her home studio. She said that mathematics and music have always been her passions, and she saw the computer as “a general purpose tool” that could help any pursuit, any occupation, including music.
Lucy was on a team of 100 mathematicians at IBM, 10 of whom were women. They were among the programmers building the first real-time control system. She worked on the launch sequence, and in particular, worked on importing live radar data into the orbit computations. After IBM, she ran her own business, Lucy Systems Inc. (LSI), for 27 years. She encouraged the audience to follow their passions. She said that, when she's doing mathematics and computing, she's in "flow" and loses track of time. She told us to find those kinds of jobs.
We’ve talked here before about the issues of culture that drive women from computing. Here were two women who got into computing before the culture we know today was established, and they were driven by what the computer could do, and what they could do with a computer. That’s what we need to convey, in order to draw more diverse people into computing.
TEDxGT Video: Computing for Everyone — a 21st Century Literacy
My TEDxGeorgiaTech talk finally got posted. I show how small bits of code can lead to useful and interesting insights, even for students who don’t focus on STEM. It’s a “Computing for Everyone,” Media Computation demonstration talk. I was nervous doing this talk (and unfortunately, it shows) because I had decided to code Python live and play harmonica, in front of a TEDx audience. The talk includes image manipulation, sound manipulation, and changing information modalities (e.g., turning pictures into sound).
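The talk’s live coding uses the Media Computation tools, but the “turning pictures into sound” idea can be sketched in plain Python. This is a hypothetical stand-in, not the talk’s actual code: pixels are represented as simple (r, g, b) tuples rather than objects from a real image library, and each pixel’s brightness is mapped onto an audible frequency range.

```python
# Hypothetical sketch of the "pictures into sound" modality change:
# brightness of each pixel -> a pitch. Not the talk's actual JES code.

def brightness(pixel):
    """Average the color channels to get a 0-255 brightness value."""
    r, g, b = pixel
    return (r + g + b) / 3.0

def pixels_to_frequencies(pixels, low_hz=200.0, high_hz=2000.0):
    """Map each pixel's brightness linearly onto a frequency range."""
    span = high_hz - low_hz
    return [low_hz + (brightness(p) / 255.0) * span for p in pixels]

# One row of an imaginary image: black, mid-gray, white.
row = [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
frequencies = pixels_to_frequencies(row)
# Black maps to the low end (200 Hz), white to the high end (2000 Hz).
```

The point of the demonstration survives even in this toy form: a few lines of code let you move information from one modality (an image) into another (a sequence of pitches).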
Science of Spatial Learning: Nora Newcombe at NCWIT
Great to see this coverage of SILC in US News and World Report, and I’m excited to hear Dr. Nora Newcombe speak at the NCWIT Summit Tuesday of this week. As I’ve mentioned previously, SILC hasn’t looked much at computer science yet, but there are lots of reasons to think that spatial learning plays an important role in computing education.
Spatial reasoning, which is the ability to mentally visualize and manipulate two- and three-dimensional objects, also is a great predictor of talent in science, technology, engineering and math, collectively known as STEM.
Yet, “these skills are not valued in our society or taught adequately in the educational system,” says Newcombe, who also is principal investigator for the Spatial Intelligence and Learning Center. “People will readily say such things as ‘I hate math,’ or ‘I can’t find my way when I’m lost,’ and think it’s cute, whereas they would be embarrassed to say ‘I can’t read.’
“People have a theory about this skill, that it’s innate at birth and you can’t develop it, and that’s really not true,” she adds. “It’s probably true that some people are born with a better ability to take in spatial information, but that doesn’t mean if you aren’t born with it, you can’t change. The brain has a certain amount of plasticity.”
We need to produce far more software developers than programmers: How do we change?
At the PACE meeting two weeks ago, we heard a presentation from Lauren Csorny, an economist with the US Bureau of Labor Statistics. She did a wonderful job, tailoring the results of the recent 10-year predictions to focus on computing.
Here’s the first and most important table: How BLS defines these jobs. The BLS doesn’t get to actually define the jobs — someone else in the Department of Labor does that. BLS counts employment within those occupation definitions.
Note carefully the distinction between Programmers and Developers! That’s a key distinction in terms of the predictions. Programmers write code that developers design. Software developers are the “creative minds behind computer programs.”
Here are predictions in terms of percentages of job growth:
The market for computer programmers is going to grow more slowly than the rest of the economy. Software developers (for both systems software and applications) are going to grow enormously fast. (Yes, grouping “Information security analysts, web developers, and computer network architects” into one job is crazy — they realize that, and that will likely be broken out in the next prediction in 2014.)
The BLS data give us a somewhat more fine-grained view of job growth vs. replacement of existing employees in each of these categories:
Over the next ten years, there will certainly be a need to replace workers (due to age, or moving into a new category of job). But Programmers will see more replacement than growth, while the greatest growth (i.e., new jobs that don’t exist yet) will be among Software Developers.
I’ve been thinking about what these predictions mean for Computer Science. Should we be thinking about vocational concerns when considering the content of a degree program? We need to consider whether we’re preparing students for the future, for their future, and for the needs and opportunities that we can predict.
What do we want computer science graduates to do? Do we see them as filling the Computer Programmers slot, or the Software Developers slot? If CS wants to give up on the Software Developers slot, I’ll bet that IS or IT or even these new Information degree graduates would be willing to take those jobs.
If we in CS want to create Software Developers, how? What should we do? And how should educating Software Developers differ from educating Computer Programmers? Some specific questions:
- Do we need to teach software developers programming? I strongly believe that one can’t learn to design without learning something about one’s materials — you can’t learn to design software if you’ve never programmed software. But I’m willing to admit that I don’t have evidence for this, other than analogy. Architects learn something about strength of materials and civil engineering, but those are different degrees — the design of buildings versus the construction of buildings. But it probably depends on the kind of software developer, too. Systems software developers need to understand lower-level details than application software developers (a distinction in the BLS categories).
- Do we need to produce expert programmers? In our program here at Georgia Tech, and at many other degree programs I’ll bet, we have an explicit goal to produce graduates who have some level of expertise in coding. We teach Java in so many classes, in part, to produce excellent Java programmers. Is that important, if the programmer job is the one that is going to grow more slowly, and we have such an enormous need for software developers?
- Do we need to teach how to make data structures, or how to use them? Is it important to learn how to make linked lists and trees, or just how to use them and what their differences are? This has been a question for a long time in the SIGCSE community, and these job predictions make an argument that the low-level skills are more specialized and less generally useful now.
- Do we teach one language/paradigm, or many? The BLS predictions suggest to me, even more strongly, that we are doing our students harm by only showing them one language or style of language. “Creative minds” know more than one way to think about a problem.
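The make-vs-use question above can be made concrete. Here is a hypothetical Python sketch (my example, not from the SIGCSE discussions): the hand-built singly linked list a data structures course asks students to *make*, next to the library structure a student would simply *use* to get the same behavior.

```python
from collections import deque

# "Making" the data structure: the classic hand-built singly linked list,
# with all the pointer bookkeeping made explicit.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def build_linked_list(values):
    """Build a linked list by prepending, so we iterate in reverse."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_python_list(head):
    """Walk the nodes to collect the values back into a Python list."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

# "Using" the data structure: collections.deque gives the same O(1)
# front insertion with none of the bookkeeping above.
d = deque()
for v in [3, 2, 1]:
    d.appendleft(v)

print(to_python_list(build_linked_list([1, 2, 3])))  # [1, 2, 3]
print(list(d))                                       # [1, 2, 3]
```

Both versions produce the same sequence; the pedagogical question is whether the twenty lines of pointer manipulation teach something the three-line deque version does not.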
We might decide that the BLS data are not relevant for us. Andy Begel and Beth Simon did a really interesting study of new hires at Microsoft, and came away with a picture of what new software engineers do — a picture in sharp contrast with what we teach in our classes. In the four years since that study came out, I haven’t heard much discussion about changing curricula to address their issues. CS2013 may have a better chance at shaking up how we think about computer science curricula.
Visual Programming Languages in the Browser: Scratch and Snap
The all-browser-based Scratch 2.0 prototype was just released for testing (see below). I’ve been playing with the “Snap!” prototype (successor to BYOB Scratch), which is also all-browser-based. This is a great trend for high school CS teachers, who often can’t install software on their computers.
Welcome to the Scratch 2.0 prototype! We hope you’ll explore and experiment. Check out the Featured Projects and Featured Galleries. Click Help to learn more.
We’re still in the process of adding and revising features. Unfortunately, you can’t login yet, so you won’t be able to save or remix projects or write comments. These features and more will be available when we officially release Scratch 2.0 later this year.
Dismal results for US science education « Gas station without pumps
I doubt that the NAEP included computing education in its report, but my guess is that such inclusion would only draw the average down further. I suppose that this post isn’t saying more than what Alan Kay has been telling us all along, but it bears repeating, and is always worth revisiting when more data become available.
The National Assessment of Educational Progress recently released a report on the science achievement levels of 8th graders in the US: The Nation’s Report Card: Science 2011: Executive Summary.
The results are pretty dismal, with only 2% of students scoring at an “advanced” level (which is pretty much where they need to be if they are going to go into a science or engineering program in college) and only 31% scoring proficient or better (which is where we as a society need our politicians and voters to be in order to make reasonable decisions about issues like pollution, climate change, and funding of medical programs). With fewer than a third of our students having the science understanding that they should have entering high school, our high school science teachers are reduced to doing remedial education, teaching middle school science, and our college teachers are then left teaching high school science.
via Dismal results for US science education « Gas station without pumps.
EdTech Magazine’s Dean’s List: 50 Must-Read Higher Education Technology Blogs
Quite cool to be picked for this! (I think I’d go further than “is perhaps more important.”)