## Archive for July 21, 2009

### The Economics of Computing Education

Economics is a fascinating field. It’s psychology-of-masses, a form of psychological engineering, and the closest thing we have to Hari Seldon’s psychohistory (from Asimov’s *Foundation* series). It’s a study of how people make choices in order to maximize their benefit, their *utility*. It is not only about money; money is just a way of measuring value, a shared sense of how much utility some consumable can provide. I’ve been reading more economics this summer, and that’s got me thinking about what economic theory might have to say about computing education.

Students, especially in undergraduate education, are clearly economic decision makers. They choose their classes. That isn’t to say that they are our *customers* whose wants we must meet. It means that we provide consumables (classes) under various rule sets, and the students seek to maximize their benefit.

What students want from higher education (that is, what utility the classes are meant to provide) these days isn’t in much doubt. Most studies of higher education that I’ve read suggest that a big change occurred in the 1970’s, and that since then, over 90% of incoming students are attending higher education in order to get a better job and improve their socioeconomic class. There is some evidence that students, by the time they are in their fourth year, value education for its own sake more. Students in their first years, on the whole, make choices based on their job prospects.

We’ve talked in this blog about why a student should study computer science. One argument is because of the value of computing as a field and the insights that it provides. Smart students will probably recognize that learning computing for those reasons will result in greater utility over the long run. How do we get students to see value, to receive benefit from what we know will help them more in the long run? Is it possible to *teach* students new and better utility functions? Can we help students to realize the greater utility of valuing *knowledge*, even from their first years in higher education? That’s an interesting question that I have not seen any data on.

What if we simply say, “This is the way it is. I’m teaching you this because it will be the best for you in the long run”? Paul Romer’s work on rule sets has been describing how the rules in effect in a country or a company can encourage or discourage innovation, and encourage or discourage immigration and recruitment. He would point out that higher education is now a competitive market, and deciding to teach for what the students *should* value is creating a set of rules. Students who don’t value those rules will go elsewhere. Those students who stay will probably succeed more, but the feedback loop that would inform us in higher education that we’re doing the right thing doesn’t currently exist. Instead, we simply have lower enrollments and less tuition, which is not the right feedback.

It’s that last part, about the feedback on teaching, that I have been specifically thinking about in economic terms. Malcolm Gladwell wrote a fascinating *New Yorker* piece last December about the enormous value of having a good teacher. What makes for a good teacher? Maybe those who create effective rule sets, who create incentives for student success? What provides utility for teachers? How do we make sure that teachers receive utility for *good* teaching?

How do we recognize and reward success in teaching? I listened to a podcast of a lecture by William Wulf, who points out how badly we teach in engineering education. In economic terms, that’s not surprising. I don’t know of research into what university teachers *value* in terms of teaching. What is the utility function for a higher-education teacher, a faculty member? Job prospects and tenure are based on publication, not teaching, at least in research universities. When we *do* evaluate teaching, how do we do it?

- By measuring learning? We’ve already pointed out in this blog how very hard it is to do that right. Teachers use examinations and other forms of assessment. Are they measuring the right things? The research that I’ve seen suggests that grades are only rough measures of learning. If we were going to measure learning as a way of rewarding faculty to incentivize better teaching, we would need some external measure of learning apart from grades, and that measurement would have to be meaningful, reflecting what we really value in student learning.
- By measuring student pass rates? Wulf might say, “If only!” He points out that correcting our 50% dropout rate in engineering (and computing!) education would alone dramatically improve our enrollment numbers. Would we be dumbing down our education offerings? Honestly, how would we know (see previous bullet)?
- Instead, we most often just ask the students. “Was this a good class? Was this teacher a good teacher?” This gets back to student as consumer, which is a step beyond decision maker. Are they the right ones to make this determination? Is the end of the class the right time for a student to be able to evaluate if the class was worthwhile?

Higher education teaching will probably improve once we figure out how to give reasonable feedback on teaching quality, feedback which could then shape teachers’ perception of benefit or utility. As Gladwell and Wulf point out, getting it right would dramatically improve student quality and enrollment.

### Correction and Update on APCS enrollment

About a month ago, I blogged on the impact of the Advanced Placement Computer Science (APCS) exam on undergraduate enrollment in computing. I cited some statistics about APCS that I have since discovered were wrong. In particular, I claimed that there were 26 states whose total enrollment in APCS over the last 25 years has not been over 200. That’s wrong.

Barb Ericson kindly gave me a spreadsheet with data from all 50 states over the last 10 years, so I can provide some more accurate observations.

- In the 10-year window that Barb gave me, there are 9 states whose total number of APCS seats (a student taking the APCS Level A exam) is below 200. Those states (from fewest test-takers to most) are Montana (at 25 students from 1998 to 2008), North Dakota, South Dakota, Wyoming, Nebraska, Oregon, Kansas, Alaska, and Mississippi (at 198).
- Those are also some of our *least populous* states, so those low numbers are not surprising. There are fewer kids there (presumably) to take CS (though probably more than 200 high school kids total…). What if we balance for state population? Barb looked up the population of each state (total, not just of high school students, so it’s only a rough scaling factor) and came up with a measure of tests taken in 2008 per million people, sort of a seats per capita. There, Louisiana is lowest, with only 1.36 tests taken per million people. Montana, Wyoming, and Idaho are tied at 2.
- There are 18 states with fewer than 20 seats (on the 2008 exam) per million, which I’m using as a rough benchmark of “there’s one APCS teacher teaching one class of CS students per million people.” Some of those were pretty surprising to me: Iowa at 7 (21 students took it, with roughly 3 million people), Oregon at 8.78, Arizona at 10.8, Utah at 12.6, and West Virginia at 13.3. Just outside my metric (tied at 21) are New Mexico, Michigan, and Minnesota.
- Who leads in producing APCS students? Maryland has the highest seats-per-million at 160. The rest of the top 10 are Texas, Virginia, Washington DC (51 students for a population of 600K), New Jersey, Connecticut, Georgia, Hawaii, California, and New York.
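The seats-per-million measure is simple arithmetic; here is a minimal sketch in Python, using only figures quoted in this post (Iowa’s 21 exam takers among roughly 3 million people, and DC’s 51 takers among roughly 600K):

```python
# Seats-per-million: exam takers scaled by (rough) state population.
# The figures below are the ones quoted above; populations are approximate.
def seats_per_million(seats, pop_millions):
    """APCS exam seats per million residents."""
    return seats / pop_millions

iowa = seats_per_million(21, 3.0)  # 21 takers, ~3M people -> 7.0
dc = seats_per_million(51, 0.6)    # 51 takers, ~600K people -> 85.0
print(f"Iowa: {iowa:.1f} per million, DC: {dc:.1f} per million")
```

By this measure, DC lands well above the 20-seats-per-million benchmark, consistent with its top-10 ranking, while Iowa sits well below it.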

Overall, 15,014 students took the APCS Level A exam in 2008. Just shy of half of those students (48%) came from three states: California, Texas, and New York.

In contrast, in 2008, 222,835 students took the AP exam for Calculus (Level AB), and 57,758 took the AP exam for Physics Level B. If we were to assume that high schools were perfect economic actors, and that the number taking a test is a true indication of the importance of the field, then CS is about 7% as important as Calculus, and 26% as important as Physics.
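Those importance ratios are just the APCS count divided by each comparison exam’s count; a quick check in Python:

```python
# 2008 AP exam counts quoted in the post.
apcs_2008 = 15_014      # APCS Level A exam takers
calculus_ab = 222_835   # AP Calculus AB exam takers
physics_b = 57_758      # AP Physics B exam takers

# Ratio of APCS takers to each comparison exam, as a rounded percentage.
print(f"CS vs. Calculus: {apcs_2008 / calculus_ab:.0%}")  # 7%
print(f"CS vs. Physics:  {apcs_2008 / physics_b:.0%}")    # 26%
```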

The College Board has found in its studies that 58% of high school students who take an APCS course end up taking a computer science course in college, and 19% of those pursue a computing degree – regardless of whether the students even take the exam. In comparison, only 28% of high school students who do *not* take any APCS exam go on to take a computer science course, and only 3% of those students pursue a computing degree. Simply *taking the APCS course* has an important impact on improving enrollments in computing.
