Archive for April, 2010

US lags in science education because it excels in science research

The argument of this study is that there are only so many hours in a day.  Faculty want to focus on teaching, but believe that their Universities only value research.  “It appears, then, that many universities—and by extension, their faculty—treat research and teaching as a zero-sum game: as more time and energy is invested in one endeavor, the amount of resources that can be allocated to the other drops.”  The article explicitly claims that the US lags in science education because it leads in science research.  What does that imply for computing education, when the US is a world leader in computing research?

Recent studies have shown that American students greatly underperform many of their global peers in the science sections of standardized tests. The US has the largest economy in the world and spends a disproportionately large percent of its GDP on scientific research, so why aren’t our students excelling in science? The problem may not be purely financial: science programs in both rich and poor nations are not educating students as effectively as they should. A new study from Nature Publishing Group (NPG) suggests that emphasis on research at the expense of teaching at the university level may be partially responsible for the scientific underperformance of advanced students worldwide.

via Science education vs. research: a zero-sum game?.

April 30, 2010 at 10:33 am 9 comments

Is higher education a racket like Wall Street?

This is a real concern.  I once heard a legislative aide say, “After we clean up health care, we’re going to clean up higher ed.”  We’d best be able to defend ourselves.

On Wednesday, in a speech to state regulators who oversee for-profit colleges, the chief architect of the Education Department’s strategy, Robert Shireman, offered a much more critical assessment of the private sector institutions than he has in his public comments to date, according to accounts given by several people who were in the room. He compared the institutions repeatedly to the Wall Street firms whose behavior led to the financial meltdown and called them out individually, one by one, for the vast and quickly increasing sums of federal student aid money they are drawing down.

via News: Comparing Higher Ed to Wall Street – Inside Higher Ed.

April 29, 2010 at 10:52 am 4 comments

Education as a (Software) Engineering Endeavor

This article in USA Today hit home for me, since it touches on a frequent accusation about Media Computation: That we’re getting higher success rates simply by lowering standards.  Give kids higher grades and they won’t fail — that’s easy!  After peer-reviewed, published studies at four schools, with four very different grading standards and multiple teachers, I don’t think that’s a reasonable accusation.  The issue is still there, of course — to what standards do we hold students, especially non-majors?  I think that that’s Alan’s point in his recent guest blog post here.  When I read the USA Today piece, I get the sense that this teacher was really doing the right things to achieve the standards as she perceived them.  The problem arose because of a difference in perceived standards between her and the administration.

Dominique G. Homberger won’t apologize for setting high expectations for her students. The biology professor at Louisiana State University at Baton Rouge gives brief quizzes at the beginning of every class, to assure attendance and to make sure students are doing the reading. On her tests, she doesn’t use a curve, as she believes that students must achieve mastery of the subject matter, not just achieve more mastery than the worst students in the course. For multiple choice questions, she gives 10 possible answers, not the expected 4, as she doesn’t want students to get very far with guessing.

Students in introductory biology don’t need to worry about meeting her standards anymore. LSU removed her from teaching, mid-semester, and raised the grades of students in the class. In so doing, the university’s administration has set off a debate about grade inflation, due process and a professor’s right to set standards in her own course.

via LSU removes tough professor, raises students’ grades –

I’ve been thinking a lot lately about how Education is really a kind of Engineering, and in particular, it shares a lot in common with Software Engineering.  The suggestion is that some of the better practices of Software Engineering could be used to improve Education.   When I was a graduate student, I took classes from Bob Kozma, famous (in part) for his public debate with Richard Clark on the role of media in learning. Bob had all of us read Simon’s Sciences of the Artificial, because we want Education to be a Science of design decisions in learning. We explicitly talked about Education as “Psychology Engineering” — the practice of influencing students’ minds in ways society had deemed appropriate.  That definition is not for the squeamish, but it’s not new either.  Plato’s Republic defined education as enculturation, turning children into citizens who hold dear the social values.  Of course, we want students to be innovative, free-thinkers — because that’s what our society values and needs, in a technological, capitalist democracy.

Education and Software Engineering both have a problem of way too many degrees of freedom.  Software is pure mind-stuff.  Engineering of that Software requires discipline and limitations on how freely we allow software expression.  A brilliant developer could produce a fabulous piece of software that is completely illegible to anyone else and thus non-maintainable — and Software Engineers would reject that great piece of software as bad Engineering, and appropriately so.  On the other side, people learn ALL the time.  The challenge of Education is to get them to learn what society values, what we need citizens to know and value.  A great Teacher might inspire students to go forth and learn such that they are wonderful citizens in 20 years, but we might not be able to see what he or she was doing in the classroom now that was achieving that goal.  What if we held Educators to the same standards and discipline as Software Engineers?

So let’s play out this analogy a little:

Unit-Testing: The USA Today article highlights a problem of ill-defined, non-testable requirements that we often have in education.  What if we practiced unit-testing in education, to match the best practice in software engineering?  Before you teach something, you define the test for it, and get everyone to agree that the goal and test are reasonable.  I do argue with one point in the USA Today piece.  I don’t think professors have the right to set their own standards for their classroom.  We all do, and in upper-level, terminal courses, it may not matter.  But if we have a curriculum, a system, then the pieces have to fit together, so we all have to agree to the standards.

Peer Review: In my College, we have resisted peer review of teaching, on the “Great and Inspiring Teacher” argument.  A great teacher influences students such that they don’t realize for a year or more what they learned.  The great teacher inspires students to go forth and learn on their own.  That feels to me like the argument protecting the brilliant-but-illegible software developer.  Just how many of those are there?  And how many lousy and unproductive teachers/developers are you protecting because you’re not checking?

What if we defined teaching as “practice that clearly and verifiably can be expected to result in the desired learning outcome for a reasonably prepared set of students.” Then we can go and watch a teacher, and ask for the reasonable rationale for why that set of interventions should result in the desired learning outcome.  We’d have to be prepared to fire a teacher who, while inspiring, was not visibly, verifiably achieving the desired learning outcomes.

Model-checking: I am not an eager proponent of proving programs correct, because I am a fan of Perlis, Lipton, and DeMillo and I don’t believe it will work.  However, I am a big fan of testing and verifying software (and design decisions, more generally), and in that sense, I like the idea of model checking.  I am beginning to believe that the most important factor in the success of cognitive tutors is that they require the developer/instructor to define the instructional model to a level of detail where each step can be checked for reasonableness.  “Could a student learn this small step in this amount of time/effort?  Could all these steps be assembled in such a way to achieve the overall goal?”

In general Education, we don’t make these checks, so we create curricula that have great big ballooning “Magic Happens Here” bubbles in them.  “CS1 is really hard, so we don’t expect too much there, so CS2 doesn’t get well-prepared students, and then there are a bunch of electives — and then our students program the Space Shuttle!”  We do want CS1 to be simple (but not “too simple”), and we do want our students to go on to great things, but we ought to check if we can really get from here to there. James Duderstadt’s Millennium Center at the University of Michigan did such an analysis of Engineering Education and came to the conclusion that an undergraduate degree didn’t cut it.  Duderstadt argues that we need Engineers to get a four-year liberal arts degree, and then an Engineering Professional degree.  That may be what we need for Computing, too. We should do the analysis! We should build a model and check it.  Can we get from here to there in the steps we have allowed ourselves?  If not, then change the expected outcomes, change the initial state (more K-12 CS, anyone?), or change the amount of time (number of steps) we give ourselves.

Education is a form of Engineering.  There’s no question about that.  The question is whether we adopt Engineering practices in Education.  I’m arguing that Software Engineering has some practices and affinities with Education that make sense.

April 29, 2010 at 9:59 am 14 comments

ACM Ed Board Meeting in Doha, Qatar, 1-4 May 2010

My blog posts will probably get more bursty next week, as I travel Friday to Doha, Qatar for an ACM Education Board meeting and summit with education leaders in Qatar.  I’m pretty excited — I’ve never been to that part of the world.

The event is being organized by John Impagliazzo, long-time editor of SIGCSE Inroads, member of the Ed Board, former professor at Hofstra University, and now professor at Qatar University.  The opening ceremonies, including the keynote address by Dame Dr. Wendy Hall, ACM President, are going to be covered by the local television network, Al Jazeera. I’m chairing a panel on Computing Education Research: Challenges and Opportunities, with Heikki Topi of Bentley University (and Ed Board), Boots Cassel of Villanova (and Ed Board), and Mark Stehlik of CMU (and CMU Qatar and the ACM Education Policy Committee).  Part of the meeting is going to be planning a similar event for India, with Mathai Joseph of ACM India.

I think the overall point is to make folks there aware of what ACM offers (in terms of educational resources, conferences, and research) and to draw them into the process.  My talk on the panel is going to highlight the work presented over the last five years at the ACM ICER (International Computing Education Research) Workshop, both to share the findings and to encourage faculty there to submit and present in ICER.  Mark Stehlik is going to talk about activities of the ACM US Education Policy Committee, and how similar organizations could be set up to address education policy issues in other parts of the world.

So when I do post next week, it may be part travelogue/travel-blog, as well as normal computing education related meanderings.  (They’re putting us up at the Ritz-Carlton Doha, right on the Persian Gulf.  Wow! Serious posh!)  I’ll try to report on the meeting, as I get over jetlag and find Internet connections.

April 28, 2010 at 12:04 pm 1 comment

How to make progress in computing education: Get more funding!

Cameron Wilson and I wrote the Education column for the Viewpoints section of Communications of the ACM this month.  Our title is “How to make progress in computing education?” where the subtitle (provided by the editors) gets it right: “Improving the research base for computing education requires securing competitive funding commitments.”  It’s an analysis of where there is funding for computing education (answer: too few places), and where there is funding that’s not being tapped well by computing educators yet (answer: NSF’s Education and Human Resources (EHR) Directorate).  NSF’s computing directorate programs, CPATH and BPC, together get about $20M per year.  EHR’s research budget is $850M per year.  We make concrete suggestions for what we can do to increase funding for computing education research.

April 27, 2010 at 7:17 pm 1 comment

National Academies Report on Computational Thinking Released

I just received word that a National Academies report on computational thinking has just been released.  It’s a great committee that wrote the report: Marcia Linn, Al Aho, Yasmin Kafai, Janet Kolodner (from Georgia Tech!), Larry Snyder, and Uri Wilensky. (I don’t know Brian Blake or Bob Constable, but I’d like to meet them. They’re in a good crowd!) The PDF is free to download, and I’m looking forward to reading it.

Report of a Workshop on the Scope and Nature of Computational Thinking presents a number of perspectives on the definition and applicability of computational thinking. For example, one idea expressed during the workshop is that computational thinking is a fundamental analytical skill that everyone can use to help solve problems, design systems, and understand human behavior, making it useful in a number of fields. Supporters of this viewpoint believe that computational thinking is comparable to the linguistic, mathematical and logical reasoning taught to all children.

via Report of a Workshop on The Scope and Nature of Computational Thinking.

April 27, 2010 at 2:51 pm Leave a comment

Don’t mess with the appliances: Is the iPhone bad for CS education?

I don’t buy the argument (made in the below referenced article) that the iPhone discourages students from pursuing computer science because it’s a “closed” platform.  So are cars, cable boxes, credit cards, and the weather, yet kids still get interested in mechanical engineering, electrical engineering, banking, and meteorology.  You don’t have to tinker with something to get interested in knowing how it works.

However, this second argument (that I think is the point of the quote below) is more intriguing to me.  Do students start to see the iPhone as an appliance, as something that is not only not-knowable, but it’s not even interesting to know it?  For several years, I asked people who might know the answer: How does a microwave oven work?  The answer I got back almost all the time was, “I don’t know, and I’m not particularly interested.”  (I have an idea now how it works, but am not absolutely sure that I really get it.)  As an appliance, a microwave becomes unworthy of consideration or study.

What’s more, you rely on an appliance.  I tinkered a lot as a kid, but not on the family television, refrigerator, or oven.  Not only were those things dangerous, but I knew full well that things I tinkered with didn’t still work the same after I was through.  I didn’t want that to happen to something that was important!

Does the iPhone make the technology simply disappear?  From a usability perspective, that’s great.  For getting kids interested in computing?  It’s pretty hard to get kids excited about something that’s invisible to them.

“We have a generation growing up that’s extremely comfortable with technology – no problem using it. But they don’t seem to be that interested in understanding it,” Harle told

“People can use their iPhone… but they don’t want to delve into it, they don’t want to understand the depths behind it. And I have a sneaking suspicion this is partly because we’ve got to the stage now with computing, computer science, IT, whatever you like, that it’s now such a black box, such a complex thing that you can’t really fiddle in the same way as people used to.”

via Why the iPhone could be bad news for computer science | Software |

April 26, 2010 at 5:17 pm 5 comments

Is the laptop enabling or inhibiting learning?

As a teacher, I definitely understand this phenomenon.  Yes, the laptop can really enhance learning.  But when 75% of your class has their laptops open in class, and 90% of those are on Facebook, there’s no opportunity for classroom learning.  Last week, I broke up my class into smaller groups, and I had to re-explain the activity to several students who had been sitting there the whole time, but in Facebook, so not really there.

As a culture, we’re at an odd crossroads regarding personal computers. For years, educators have been clamoring to put technology in the hands of young students through partnerships with big tech companies, best symbolized by the One Laptop Per Child initiative.

But by the time those kids grow up, they may well find university authorities waging a war on laptops in the classroom. In 2008, the University of Chicago Law School turned off Internet access in classrooms. At the University of Oklahoma, Dr. Kieran Mullen became an Internet sensation when a student recorded him freezing a laptop in liquid nitrogen and shattering it.

via The Blackboard Versus the Keyboard | The Big Money.

April 26, 2010 at 9:41 am 6 comments

Alan Kay on Hoping That “Simple” is not “Too Simple”

Alan wanted to make this longer comment, but couldn’t figure out where it fit naturally, so he kindly forwarded it to me to provide here:

Mark in his blog has provided a cornucopia of useful topics and questions about teaching computing to a wide demographic. It’s all very complex and (to me at least) difficult to think about. My simple minded approach for dealing with this looks at “humans making/doing things” as having three main aspects:

1. Bricks, mortar, and bricklaying
2. Architectures
3. Models of the above

And we can think of the “model” category as being composed of the same three categories.
1. Bricks, mortar, and bricklaying of models
2. Architectures for models
3. (Meta) Models of the above

If we stop here we have a perhaps overly simplistic outline of the kinds of things to be learned in computing (and many other activities as well).

Questions I would ask about these include:

  • How many ideas are there here, and especially, how many ideas at a time can learners handle?
  • How much real practice of each of these is required for real understanding and operational usage?
  • Where can we look for useful parallels that will help us think about our own relatively undeveloped area?
    • Music?
    • Sports?
    • Science?
    • Engineering?

To take the last first, we would (or I would) be very surprised to be able to prepare someone as a professional in 4 years of college if they started from scratch in any of the possible parallels listed above. To go to the really simplistic idea of “hours put in”, there just aren’t enough actual hours available per year (3 practice hours a day is about 1000 hours a year) and professional fluency in any of the above will require more than 4000 hours of practice from most learners. And it’s not just a question of hours. There are longitudinal requirements (time for certain ideas and skills to “sink in”) which probably represent real latencies in both the “notional” and physiological parts of learners’ minds.

A large number of those going into any of the four areas started learning, training, and practicing in childhood. And for those who try to start as a first year college student ….

a. This “problem” is “solved” for music partly by the existence of “pop music” much of which does not require deep fluency in music for participation. (And it is certainly not hard to see real parallels and the existence of “pop computing” in our culture.) Classical and jazz music simply require a lot more time and work.

b. The problem is solved for professional sports by excluding the not skilled enough (and even quite a few of those with skills, and who did start in childhood). The last census listed about 65,000 professional athletes in all US sports. This is a small job market.

c. The problem is solved for the hard sciences (and medicine) most often with extensive postgraduate learning, training and practicing (and by high thresholds at the end). Should we ask where those who, for one reason or another didn’t make the cut, wind up?

d. I don’t know what the engineering demographics are (but would like to). Engineering has always had a strong ad hoc nature (which is what allowed it to be invented and practiced long before mathematics and science were fully invented). Architecture is harder than bricklaying, so one could imagine many with engineering UG degrees winding up in technical companies in what would be essentially apprentice processes.

I’m guessing that this is where similar computer students with undergraduate degrees might wind up — essentially doing bricklaying in some corporate notion of architecture.

Both of these last two seem to me to be dead ends — but it would be good to have more than personal and anecdotal evidence. My own observations would generalize to “they don’t learn much that is good” in their undergraduate experience, and “they learn even less that is good when on the job”.

I think universities have a moral obligation to try to deal with the “they don’t learn much that is good” part of this problem. And doing this well enough could cause large useful and important changes in industry over the next decade or two.

If I were going to get started on this, I would try to put forth a very clear outline of the six aspects of computing I listed above, show how they work together — and try to sketch out what it actually takes to learn them for most college students.

In my thinking about this I keep on coming back — not to the problems of “coverage” over 4 years — but what seems to me to be the larger problem of getting in enough real practicing of the various kinds needed to actually ground the ideas into thoughtful and operational tools.

Best wishes,


April 23, 2010 at 12:51 pm 24 comments

Lister and Spohrer, Plans and Schema and MCQ’s

In response to my piece on Millennials, Raymond Lister pointed out that his ITICSE working group report was about reading code while the work by Jim Spohrer and Elliot Soloway was on writing code.  Of course, he’s right.  But his passing comment has had me thinking all night.  How different are those tasks?  I worked with Elliot for (mumble, mumble) years, and I read all that I could by him and Jim.  I just leafed through Jim’s dissertation again this morning (yeah, it sits on my shelf — I really am a total geek!  But now it’s on Google Books, too!).  I feel like there is such a similarity between what Lister’s group was working on and what Spohrer was working on, but I can’t quite put my finger on what’s so similar.

Let me quote a bit from Raymond et al.’s paper, if he doesn’t mind.  (If he does, I’ll delete these later.)  This is Question 5 of the multiple choice question (MCQ) instrument that Raymond’s group created and presented to students.  It was the easiest question, in terms of how students actually performed on it.

First off, I don’t find this question particularly easy.  It involves careful tracing, but I do agree that it feels like something that students should be able to do at the end of the first semester of Computer Science.  Why is that?  Why does it look like something we expect students should be able to do?

Elliot’s group at Yale explored a model for how programmers write programs that focused on goals and plans. Goals were what you wanted to do, and plans were how you did them.  Sometimes you set a goal, and tried to implement a plan, but failed (reached an impasse). So you tried a different plan.  Plans were models of code, typical pieces of code that you wrote or saw a million times.  Rob Rist had a similar theory, structured around schemas instead of plans.  Tom-a-to, Tom-ah-to — it feels similar to me.

Imagine you saw this code:

i = 0;
while (i < a.length){
    // Do something with a[i]
}
Say I showed this to you and said, “It’s not working — what did I do wrong?”  I’m guessing that every reader of this blog would immediately say, “You forgot to increment i inside the loop!”  How did you know that?  Because you traced the code?  I suggest that you all know the index-across-the-array-with-a-while plan, and know that a common problem is forgetting to increment the index inside the loop.  You didn’t have to trace this code as much as you saw the problem, because you know this plan.
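To make the plan concrete, here is a minimal sketch in Java of the same index-across-the-array-with-a-while plan with the forgotten step restored.  The class name, the summing body, and the sample data are all hypothetical — the point is only where the increment belongs:

```java
public class IndexAcrossArray {
    // The index-across-the-array-with-a-while plan, with the
    // commonly forgotten step (i++) inside the loop body.
    public static int sum(int[] a) {
        int total = 0;
        int i = 0;
        while (i < a.length) {
            total += a[i]; // "do something with a[i]"
            i++;           // the step the buggy version omits
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // prints 10
    }
}
```

Leave out the `i++` and you get the infinite loop that every experienced reader spots without tracing a single iteration.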

Why does Question 5 look so reasonable to us?  Because it looks like reversing an array?  Because it looks like the code for swapping elements of an array?  Both of those might be true.  In general, it looks like code we’ve seen many times.  We feel like we recognize those plans.

Here’s Question 8, the question that students performed the worst on.

This one feels harder to me.  Why?  The code looks like the code for a sort, like a Bubble Sort, but it’s not really.  It triggers my plan-recognizer enough to say, “Sure, I should be able to do that.” But when I try to match my plans to this, it doesn’t quite work.  So I have to do step-by-step reasoning, what Cognitive Scientists call weak methods.  If I immediately recognize a problem and can provide a ready-made answer, that answer is often right — that’s a strong method.  If I can’t do that, I fall back on my backup approaches, like tracing code line-by-line, which is more often going to fail.  Did students recognize that they had to trace to solve this problem, or did they just plug in what looked the most like Bubble Sort to them?  And once they recognized that they had to use a weak method, did they apply that method correctly or did they make a mistake?

On a side note, this question feels a bit like cheating on the “it’s about tracing, not writing code” claim.  Isn’t picking which expression or statement fits very much like composing the right plan into the code?  At what point does identifying code cross from a “tracing” task into a “writing” task?  Is it “tracing” because I didn’t have to type the characters in or fight the syntax?

In the end, I think that these two explorations, 12 years apart, feel similar to me because I can use the theory from the first one to explain the second one.  That is something that I don’t think we do enough of in computing education research today: Make theory, use theory to generate predictions, then test those predictions.  Many of the ICER conference papers present descriptions and analyses.  Those are great and important.  Many of the conclusions of ICER papers make recommendations for how we teach.  That, too, is great and important.  But science is about making theory, and progress in society comes from using scientific theories to make useful predictions.  We need to get to the point where we can be predictive in computing education research.

April 23, 2010 at 12:36 pm 8 comments

Compose Your Own — Music and Software

Really exciting piece by Jason Freeman, Georgia Tech music professor, in the New York Times yesterday.  I think what he says goes just as well for software — so many of us use it, so few of us express ourselves with it.

These days, almost all of us consume music but few of us create it. According to a recent National Endowment for the Arts survey, only 12.6 percent of American adults play a musical instrument even once per year. The survey does not report how many of us compose music, but I suspect that percentage is even smaller.

It saddens me that so few of us make music. I believe that all of us are musically creative and have something interesting to say. I also wish that everyone could share in this experience that I find so fulfilling.

via Compose Your Own – Opinionator Blog –

April 23, 2010 at 10:15 am Leave a comment

Programs Train Teachers Using Medical School Model

This is a really interesting idea, especially for computing education.  One of the challenges of teaching computing (and even doing computing, for that matter) is that we have not yet learned to make explicit all the knowledge that is necessary.  We’re finding that in some of our studies of adult learners — they talk about the “secret” of programming, and that the teacher is “hiding something” from the students.  An apprenticeship model would give students the opportunity to learn and develop those skills that we do not yet know how to explicitly teach.

What if we prepared teachers the same way we prepare doctors? As school reformers lurch toward more innovative ways for training classroom teachers, this idea is getting a lot of attention. A handful of teacher “residency programs” based on the medical residency model already exist. Boston was one of the first to create one in 2003.

via Programs Train Teachers Using Medical School Model : NPR.

April 23, 2010 at 10:09 am Leave a comment

The Millennials are like the adults, only more so

I’ve been thinking about the Pew study of Millennials since it came out in February.  Are Millennials really different in some significant way from previous generations?  From the perspective of computing education, I see the same cognitive issues today as in years past.  The problems with loops that Lister’s ITICSE working group study found look pretty similar to the problems that Elliot Soloway and Jim Spohrer identified among Yale undergraduates working on the Rainfall problem in the early 1980s.  I look at my 1995 SIGCSE paper on the challenges that students face in learning object-oriented programming, and I see those exact same problems among the seniors in my Capstone Design class this semester.

The most detailed study to date of the 18- to 29-year-old Millennial generation finds this group probably will be the most educated in American history. But the 50 million Millennials also have the highest share who are unemployed or out of the workforce in almost four decades, according to the study, released today by the Pew Research Center.

via Study: Millennial generation more educated, less employed: USA Today.

There is one place where I see a problem with Millennials–not unique to them, but even stronger with them than among the adults.  My students and I have been working on papers for ICER 2010 over the last couple weeks.  A common theme that we’re seeing in several different studies is a perception among our participants that Computer Science is about advanced use of applications.  If you really know how to use Photoshop, then that’s Computer Science.  It’s a hard misconception to deal with because an expert on Photoshop probably has picked up a lot of what we would recognize as Computer Science knowledge — about digital representation of data, about processing, about efficiency.  It’s not that the perception is wrong, it’s just missing an important perspective.

What’s striking about this misperception is that it shows up in several studies, from high school students to adults.  The Millennials might have it a bit stronger, a bit more persistently than the adults, because they have used computer applications for so long.  The Millennials hear us talk about real computer science, and they give us the “Yeah, yeah — I’ll tell that back to you on the test, but I know what really matters.”  They listen to us, but don’t think it’s all that important.  If they don’t think it’s important, they make little effort to really learn it. We find that this perception is strong among the adults, too.  The adults care about employment.  If you finally understand the difference between arrays and linked lists, you have made an important intellectual step, but you haven’t generated a new line in your resume.  If you take a class on “Advanced Photoshop,” you do have a new claim that can lead to a new job.  The adults in our studies, too, see advanced application use as being “Computer Science,” and far more valuable than a degree in Computer Science. The adults don’t give us the “Yeah, yeah” bit — they just ignore “Computer Science” entirely.

Both Millennials and adults are practical.  What gives me the most benefit for the least cost?  Learning computer science is hard, and its value is indeterminate, especially to someone who doesn’t understand the IT industry.  Learning to use applications better is an obvious job skill.  The fact that the advanced levels of the latter overlap with some levels of the former makes it even harder for us educators to make our case.

April 22, 2010 at 8:41 am 5 comments

Women graduate in STEM more than boys: It’s video games?

I found this report interesting, both because of its claim and because of the (what seems to me to be) horrendously flawed logic.  Women are increasingly taking more STEM classes, the author claims, and are nearly catching up to men.  However, more women graduate!  Why?  Well, of course, because men play more video games!  I might use this as an example of correlation-is-not-causation next time I teach the research methods section of my educational technology class.

The number of women taking courses in science, technology, engineering and mathematics, the STEM subjects, has been increasing since 1966 according to a new report. But another study, on boys’ academic responses to new video games, establishes a cause-and-effect relationship that could partly explain the decline in male academic achievement.

Women students in higher education now outnumber men in most countries, except Japan and Turkey. In the US, this has skewed the ratio among the sexes in terms of those who graduate: the proportion of males earning degrees has dropped to 43% while that for women has increased to 57%.

via University World News – US: Women gain in science while video games hold back boys.

April 21, 2010 at 10:17 am 3 comments
