Archive for June, 2010

High School Computing Education needs more Barbaras

Georgia’s high school computer science education efforts lead the nation in large part because Georgia Tech invested in a Barbara — Barbara Ericson, Director of CS Outreach for the College of Computing.  Of course, we got a really good Barbara (and I’m heavily biased, as Barb’s collaborator, co-author, and husband), and that matters a lot.  As I talk to people who are interested in improving K-12 computing education, and they ask me how we did what we’ve done in Georgia, I realize how critical the College of Computing’s investment in hiring Barbara was.

Barb was hired in 2004 as part of our Institute of Computing Education (ICE) that Maureen Biggers started. Georgia’s Department of Education had moved Computer Science to the Business Department — the Career, Technical, and Agricultural Education Department.  This gave CTA their first Advanced Placement exam, and they wanted to grow that.  They wanted more AP CS teachers, and they wanted to use Media Computation to do it.  Barb had been working as a programmer/consultant on various projects in Java and teaching adult education classes for the College part-time.  She knew the material and was an accomplished adult educator.

It wasn’t too much of a gamble for the College, as several administrators mentioned to me.  CTA would pay for teacher education workshops, so that covered about half of Barb’s salary.  The College figured that we’d find funding for the other half. By 2006, we had started Georgia Computes!.  Barb is now paid through external funding and her workshops for teachers.

Look at how much the state has gotten out of that investment!  Barb’s sole job for the College is to increase the number of students taking AP CS, because that increases the number of students going into Computer Science.  She’s not tenure-track, and she doesn’t teach other classes for Georgia Tech.  Her sole job is to grow computer science at the high school level.

  • Barb has taught literally hundreds of computer science teachers around the state.  As DCCE participants said in their presentations a couple weeks ago, “Barbara returns emails!”  She gets questions from teachers ranging from how to teach arrays, to how to install DrJava on their lab computers.  She visits teachers in their schools and does guest lectures occasionally.  Besides the four textbooks she’s co-authored since she started this gig, she produces enormous amounts of materials for teachers.  I see her generating Powerpoint slides and example code all summer long for her workshops.
  • When the State decided to create a high school curriculum, Barb was an obvious person for that committee.  She played a huge role in getting Georgia to use the ACM Model K-12 Curriculum as a starting place.
  • When the State decided to create a teaching endorsement so that we would have a kind of certification for CS high school teachers, Barb was on that committee, too.
  • She argued with lots of people to get AP CS to count for something in high school graduation requirements.  It counts as a Science in Georgia (and a Math in Texas, and those are still the only two states in the US that consider CS as fulfilling any high school graduation requirements).

Barb and I were talking this last weekend about how this all has its own momentum now.  There are leaders among the high school computer science teachers who have used workshops and other events to find like-minded fellow teachers.  They are forming a community. Georgia teachers are starting a fledgling (not yet approved) CSTA chapter.  Teachers get together on their own to share efforts and stories.  It’s not just Barbara, but I don’t think it would have happened without Barbara.

Barbara teaching teachers has a huge multiplier effect.  It may take Barb 10 hours to produce an hour of workshop material.  But if she has 25 teachers in the workshop, and they each have 25 students, those 10 developer hours impacted a lot of students quickly.
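That multiplier is easy to make concrete. Here is a minimal back-of-the-envelope sketch in Python, using the illustrative numbers above (10 prep hours per workshop hour, 25 teachers, 25 students each — assumptions for illustration, not measured data):

```python
# Back-of-envelope sketch of the workshop multiplier effect.
# All numbers are illustrative assumptions from the paragraph above.
prep_hours_per_workshop_hour = 10
teachers_per_workshop = 25
students_per_teacher = 25

# Each workshop hour eventually reaches every student of every attendee.
students_reached = teachers_per_workshop * students_per_teacher
hours_per_student_reached = prep_hours_per_workshop_hour / students_reached

print(students_reached)           # 625 students per workshop hour
print(hours_per_student_reached)  # 0.016 prep hours (~1 minute) per student
```

Under these assumptions, ten hours of preparation amortize to about a minute per student reached.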

Barb is uniquely talented and has accomplished an enormous amount in Georgia.  However, there are other potential “Barbaras” out there.  I’m suggesting that the critical idea was hiring a talented person, based in a University (that has important authority/prestige/getting-attention implications), whose job it is to improve computing education for the whole State (amortizing costs quickly).  It is an expense, and that’s hard to justify in these times.  It was a gamble by the College’s leadership, but it more than paid off.

Think of Barbara (and ICE, and “Georgia Computes!”) as a model for state-wide high school computing education reform.  Hire a smart, talented person who is willing to pour lots of energy and charm into the job.  Use paid workshops to cover part of the salary, and seek external support for the rest.  (With the new CPATH+BPC programs coming out of NSF this summer, there should be funds available.)  Get collaborations started with the Department of Education, high schools, and colleges and universities in the state. Support the teachers all year round.  Give it time — this is Barb’s sixth summer of workshops.  It’s amazing what one good person can make happen, given the chance.

If you want computing education to grow in your state, try hiring a Barbara.  It’s a relatively small investment with potentially large rewards.

June 30, 2010 at 11:10 am Leave a comment

Tools for Building Tutors, and Tutors for Computing Education

I took a workshop this morning on building intelligent tutoring systems.  That would be surprising if you knew me even 10 years ago, when I thought that intelligent tutoring systems were an interesting technology but a bad educational idea. I thought tutors were just fancy worksheets, the kind that deadened education and taught only the kinds of things that weren’t worth teaching.  Then I spent the last eight years trying to figure out how to teach computing to people who do want to learn about computing but don’t want to become professional software developers (i.e., Media Computation).

  • I’ve come to realize that there are students who need drill-and-practice kinds of activities to succeed, for whom discovery or inquiry learning is more effort than it’s worth. I recognize that in myself — I find economics fascinating and enjoy reading about it, but I’m not interested enough in economics to (for example) sit for hours with an economic simulator to figure out the principles for myself.
  • I also now believe that even those students who do want to discover information for themselves still need a bunch of foundational knowledge on which to base their discoveries. A student who wants to figure out something about computing using Python, still has to learn enough Python to be able to use it as a tool. It’s not worth anybody’s time to learn Python syntax through trial-and-error discovery or inquiry learning.

I am now interested in tools like intelligent tutoring systems to help students learn foundational skills and concepts as efficiently as possible.

The workshop this morning was short, only three hours long. Still, we all built simple model-tracing tutors for a single mathematics problem, and I think most of us started building a tutor for something that we were interested in. I started building a tutor that would lead a student through writing the decreaseRed() function that we start with in both the Python and Java CS1 books.
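For readers who haven’t seen that exercise, its flavor can be sketched in plain Python. This is a hypothetical stand-in for illustration: the books’ actual version uses the JES media functions (getPixels, setRed) on picture objects, whereas here a picture is simply a list of (r, g, b) tuples:

```python
def decrease_red(pixels, factor=0.5):
    """Return a copy of the picture with each pixel's red channel scaled down.

    pixels: a list of (r, g, b) tuples, each channel an int from 0 to 255.
    factor: how much red to keep (0.5 halves the red channel).
    """
    return [(int(r * factor), g, b) for (r, g, b) in pixels]

# A tiny two-pixel "picture":
picture = [(200, 100, 50), (80, 40, 20)]
print(decrease_red(picture))  # [(100, 100, 50), (40, 40, 20)]
```

The exercise’s point is the loop-over-pixels pattern, which carries over directly to the JES versions in the Python and Java books.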

The Cognitive Tutor Authoring Tools (CTAT) that the CMU folks have built are amazingly cool! They’ve built Java and Flash versions, but the Flash version is actually totally generic. Using a socket-based interface, the CTAT for Flash tool can observe behavior to construct a graph of potential student actions, which can be labeled with hints, structured for success/failure paths, made ordered/unordered, and made generic with formulas. The tool can also be used for creating general rule-based tutors. CTAT really is a general tutoring engine that can be integrated into just about any kind of computational activity. I’m still wrapping my head around all the ways to use this tool.

My biggest “Aha!” (or maybe “Oh No!”) moment came from this table:

First, I’d never realized that 30 minutes of activity in the famous Geometry Tutor took two months to develop! The whole point of the CTAT effort is to reduce these costs. This table gave me new insight into what it’s going to take to meet President Obama’s goal of computational, individualized tutors. A typical semester course in college is about three contact hours and 10-15 hours of homework per week for 15 weeks. Let’s call it 13 hours of scripted learning activity a week, for a total of 195 hours. The best ratio on that table is 48:1 — 48 hours of development for one hour of student activity. 9360 development hours (for those 195 hours at a 48:1 ratio), at 40 hours per week, is four and a half person-years of effort to build a single college semester course. That’s not beyond reason, but it is certainly a sobering number. A full-year high school course, at 45 minutes a day, five days a week, for 30 weeks, is 112.5 student hours, which is (again using the best case of 48:1) 5400 development hours. That’s about two and a half person-years of effort, at a minimum, to produce a single all-tutored high school course.
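The arithmetic in that paragraph is easy to check. A quick sketch, using the same assumed numbers (the 48:1 ratio is the best case from the workshop’s table):

```python
# Reproducing the development-cost estimates discussed above.
DEV_RATIO = 48  # best case: 48 development hours per hour of student activity

# College semester course: 13 hrs/week of scripted activity for 15 weeks.
college_hours = 13 * 15
college_dev = college_hours * DEV_RATIO
print(college_hours, college_dev)  # 195 9360

# High school course: 45 min/day, 5 days/week, 30 weeks.
hs_hours = 0.75 * 5 * 30
hs_dev = hs_hours * DEV_RATIO
print(hs_hours, hs_dev)            # 112.5 5400.0

# Person-years at 40 hrs/week, 52 weeks/year:
print(college_dev / 40 / 52)       # 4.5
print(hs_dev / 40 / 52)            # ~2.6
```

So even in the best case, a fully tutored course is a multi-person-year investment.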

Here’s another great role for computer scientists: Build the tools to make these efforts more productive, and make the tools easier to use and easier to understand so that a wider range of people can engage in the effort.  CTAT is great, but still requires a hefty knowledge and time investment.  Can we make that easier and cheaper?

June 29, 2010 at 4:15 pm 7 comments

Talks and Trips: Learning Computing Concepts vs. Skills?

I’m writing from Chicago where I’m attending the International Conference of the Learning Sciences 2010. It’s pretty exciting for me to be back here. I helped co-chair the 1998 ICLS in Atlanta, but I haven’t been at this conference since 2002, when my focus shifted from general educational technology to specifically computing education. The theme this week is “Learning in the Disciplines.” I’m here at the invitation of Tom Moher to be part of a panel on Friday morning on computing education, with Yasmin Kafai, Ulrich Hoppe, and Sally Fincher. The questions for the panel are:

  • What specific type of knowledge is characteristic of computer science? Is there a specific epistemology?
  • Are there unique challenges or characteristics of learning in and teaching about computer science?
  • What does learning about computing look like for different audiences: young children, high school, undergraduate, and beyond (e.g., professional scientists, or professionals from non-computing disciplines)? In the case of “non-computing professionals,” what do they learn, and how do they learn it (e.g.,what information ecologies do they draw upon, and how do they find useful information)?
  • How do we support (broadly) learning about computer science?

In a couple weeks, I’m giving the keynote talk at EAAI-10: The First Symposium on Educational Advances in Artificial Intelligence. I’m no AI person, but this conference has a strong computing education focus. I’m planning to use this as an opportunity to identify challenges in computing education where I think AI researchers have a particularly strong lever for making things better. Not much travel for that one — I get to stay in Atlanta for a whole week!

In getting ready for my talk Friday, I’ve been trying to use themes from learning sciences to think about learning computing. For example, physics educators (BTW, Carl Weiman is here for the opening keynote tonight) have identified which physics concepts are particularly hard to understand. The challenge to learning those concepts is due in part to misconceptions that students have developed from years of trying to understand the physical world in their daily lives. I’ve realized that I don’t know about computing education research that’s looked at what’s hard about learning concepts in computing, rather than skills. We have lots of studies that have explored how students do (not?) learn how to program, such as in Mike McCracken’s, Ray Lister’s, and Allison Tew’s studies. But how about how well students learn concepts like:

  • “All information in a computer is made up of bytes, so any single byte could be anything from the red channel of a pixel in a picture, to an instruction to the processor.” Or
  • “All Internet traffic is made up of packets. So while it may seem like you have a continuous closed connection to your grandmother via Skype, you really don’t.”
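The first of those concepts can be demonstrated in a few lines: a byte has no inherent meaning, only the meaning our interpretation gives it. A small Python sketch:

```python
# One byte, several interpretations -- a sketch of the "everything is bytes" idea.
b = bytes([72])

print(b[0])               # 72   -- read as an unsigned integer
print(b.decode('ascii'))  # H    -- read as an ASCII character

# The same value could equally be the red channel of a pixel,
# or one byte of a machine instruction; the bits don't say which.
```

Whether students can articulate that idea, as opposed to merely using it implicitly, is exactly the kind of conceptual question I mean.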

Does anybody have any pointers to studies that have explored students learning conceptual (not skill-based) knowledge about computing?

I know that there is an argument that says, “Computing is different from Physics because students have probably never seen low-level computer science before entering our classes, so they have few relevant preconceptions.” I believed that until I saw Mike Hewner’s data from his study of high school students in our Georgia Computes! mentoring program this last year. These are high school students who are being trained to be mentors in our workshops for younger students (e.g., middle school kids, Girl Scouts). They’re getting to see a lot of cool tools and learning a bunch about computer science. Mike found that they had persistent misconceptions about what computer science is, such as “Someone who is really great at Photoshop is a great computer scientist.” While that’s not a misconception about bytes or packets, it’s a misconception that influences what they think is relevant. The concept about bytes might seem relevant if students think that CS is all about great graphics design, but the packet concept interferes with their perception of Skype and doesn’t help with Photoshop — students might ignore or dismiss it, just as physics students say to themselves, “Yeah, in class and on exams, gravity pulls the projectile down, but I know that it’s really about air pressing down on the projectile.” So students’ misconceptions about what’s important about computing might be influencing what they pay attention to, even if they still know nothing about computer science.

June 29, 2010 at 3:33 pm 3 comments

Using technology to improve college completion rates

EDUCAUSE is heading up a new effort funded by the Gates Foundation to use technology to improve college readiness, and thus completion rates.  Below are their main bullets and a link to more information.  This links a couple of themes showing up in this blog lately: the importance of college completion rates, and how we in computing should be at the forefront of figuring out how to use technology for learning.

  • The high school graduation rate for all U.S. students is just over 70%. For African-Americans, Hispanics, and low-income students, the rate hovers at slightly over 50%.
  • Of those who do graduate from high school, only half are prepared to succeed in college.
  • For those who do enroll in postsecondary education, only about half will actually earn a degree or certification, with as few as one quarter of low-income students completing a degree.
  • Today, it is virtually impossible to reach the middle class, and stay there, with only a high school diploma.
  • Postsecondary education is increasingly critical to individual and family financial security, to a vibrant economy, and to an engaged and participatory society.

via Next Gen Learning Challenges | EDUCAUSE.

June 29, 2010 at 2:57 pm 2 comments

Dave Patterson on fixing high school CS education

Dave Patterson kindly visited and commented on the post on Technology plus policy for scale. Heroically, he typed a long response, in raw HTML, in the little comment box. I wanted to make sure the comment didn’t get overlooked, so I’m sharing it here as a guest post.

Let me start by saying I love teaching. My sister got her teaching credential, my nephew is a high school music teacher, and my daughter-in-law’s father is a high school teacher in charge of information technology education for a school district.

My belief that the K-12 CS education problem is practically unsolvable for the next 10-20 years in the US is based on:

  • No room in the high-school curriculum for CS. College-bound students want to take AP-everything, so they have very little flexibility in their schedules. The comments at the meeting were that we should just get a statewide requirement passed that mandates teaching of CS. What current topic should we drop? Physics? Biology? Math? English? History? Good luck convincing a state school board or your colleagues on campus that CS is more important for the future of our citizens than these topics. Part of their argument against CS would be asking how you can get the high quality of teachers for CS that they have demonstrated they can get at scale for their topics.
  • Low pay for new teachers. Once a young person knows enough about CS to be a good teacher of the material, they can dramatically increase their income by taking an IT job. Their love of teaching would have to outweigh their need to support their families. In addition, they will probably receive a layoff notice in their first few years, just in case there are not enough funds, whether or not they are really laid off. That notice has to make one wonder whether this is a good long-term career. Fixing this problem would be a major societal change in the US, and until it’s fixed it’s basically a Catch-22, leaving us with a relatively small number of heroic, competent K-12 teachers.
  • Changing education policy is hard and takes a long time, and there is little reason to believe you will succeed. This is a state by state, school district by school district level of change involving many advocacy groups. If you think all you need is logical arguments to win the day, look at the resurfacing of alternatives to evolution in the classroom.
  • Most proposed solutions don’t scale. There are roughly 50,000 high schools and 80,000 elementary schools and middle schools in the US. Whatever you are proposing, think about the time scale your innovation would take to affect 10% of these schools. That would mean that 90% students are left out. How long before your proposal would help 50%? 90%?

These points are why I agree with Alan Kay that the most plausible path forward is some kind of online tutor/assistant that could help teach the big ideas of CS.

Basically, for the US we need solutions that leverage Moore’s Law to scale to the size of the problem we have. A goal could be to provide technology so that parents and/or math and physics teachers can supplement what students do in the classroom with such an online assistant.

Here are my reasons why I think an online assistant is plausible now, despite the sorry 20th-century track record of such systems:

  • The successes of open source software and Wikipedia. The ability of volunteers to create interesting and high quality material has been demonstrated many times in our field. I see no reason why this couldn’t happen for education assistants.
  • Cloud Computing means there need not be a local administrator running local hardware. This was a major problem with old hardware and out of date software given limited budgets. The remarkably low cost of nearly infinitely scalable computing is a godsend for K-12.
  • Cell phones mean everyone can have access. Half of the people on the planet have cell phones, and they are increasingly becoming smart. Cell phones are so popular that schools have policies banning them, as opposed to holding bake sales to raise funds to buy some PCs. Tablets and netbooks are further lowering the cost of getting something with a bigger screen; basically, all the software is in the Cloud.
  • WiFi makes “wiring” a school trivial. Even coffee shops offer free WiFi, so it’s trivial for campuses to offer it as well.
  • Highly productive programming environments for Software as a Service lower the difficulty of creating online teaching services, so more people can build them. Frameworks like Ruby on Rails are remarkably productive, and fun to use. Hundreds of thousands of people today can build services, and scale them up if needed using Cloud Computing.
  • Crowdsourcing can help with online questions. The success of Mechanical Turk and Wikipedia, where people do a lot of work for no or remarkably little money, suggests that there are many people who could answer the questions that would come up naturally from people trying to learn from an online assistant. Hence, online assistants may end up in reality being hybrids: computers doing what they do well, with online people doing what computers don’t do well.
  • Our material lends itself to online teaching and evaluation. While making an assistant for English is probably an AI-hard problem, we have the advantage of being able to run programs to see if they work or not. And there is lots of technology developed, and being developed, for testing and debugging.
  • The current trend of standardized testing in the US may lend itself to online assistants. This was the argument of Roscoe Giles, who was at the meeting. Leaving aside whether standardized tests are good or not, it seems like an online assistant could help students in many fields improve their scores on these tests. Hence, online assistants could get early positive reviews because of their help in the schools where they are deployed, and there is a window of opportunity with a clear measure of success to demonstrate that what we do can help K-12.

Let me finally wrap up. While I am pessimistic about getting high quality material taught by high quality K-12 teachers in US in the next decade or two, I am optimistic that a major online effort could scale and have a positive impact on a large fraction of the K-12 students within a decade.

If we can create technology that allows billions of people to search all the data online and get useful answers in less than a second for free, I see no obvious reason why we can’t dramatically improve IT education for anyone in the world with a cell phone by 2020.

June 25, 2010 at 11:41 am 34 comments

Creating (and improving) options for CS practice: Practice-It! and beyond

One of the (several!) pedagogical methods that I learned about at the DCCE meeting a couple weeks ago was Practice-It!, a new (to me) website from the University of Washington.  Practice-It! provides a variety of practice activities for students, from multiple choice questions, to predict-the-output problems, to exercises where students write a single method to solve a problem.  These help to fill the huge gap between reading the book and attending lecture on one side, and facing a full IDE (“a speeding compiler”) on the other side.  It joins pedagogical tools like CodingBat and Problets in an important, but surprisingly sparse area of tools for computing students.

I really like these tools and think that they fill an important role.  However, given that there is more than one tool in this space now, I have a criticism of all the existing tools that I mean to be constructive.  Currently, the coding problems in these tools invoke a compiler or interpreter and return the error generated.  We can do better, and we need to, because the error messages of virtually any interpreter or compiler presume a knowledgeable, professional programmer.  They are unclear, often useless, and always infuriating for a novice.

Here are a couple examples.  In CodingBat, I tried the Python problem where I have to write a function to determine if I can sleep in, depending on whether it’s a weekday or during vacation.  The inputs are booleans, but I tried (like many students) to write the function without reading the description of the inputs. I made assumptions about the inputs being objects and collections.  I compounded the error by writing the code “weekday is in vacation” as opposed to the correct “weekday in vacation.”  The error message isn’t useful.  My semantic error (of ignoring that the inputs are booleans) is hard (but doable!) to catch and address.  My syntax error is not helped by this message.
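For reference, the sleepIn problem itself is tiny once you do read that the inputs are booleans. A correct Python answer looks something like this:

```python
def sleep_in(weekday, vacation):
    """CodingBat's sleep_in: we can sleep in if it is not a weekday,
    or if we are on vacation.  Both parameters are plain booleans --
    the detail my deliberate mistake above ignored by treating them
    as objects and collections.
    """
    return (not weekday) or vacation

print(sleep_in(False, False))  # True  (weekend, no vacation)
print(sleep_in(True, False))   # False (weekday, no vacation)
print(sleep_in(False, True))   # True  (weekend, on vacation)
```

The problem is trivial; the point is that the feedback a student gets when they get it wrong is anything but.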

Practice-It! works similarly.  Here, I’m asked to write a Java method, and (again, as a student might) I decide to just get the basic method declaration in first — but get it wrong because I forget to deal with input parameters.

The error messages from the Java compiler are worse than useless.  They mention things like “enums” that I (as a student working on such a simple problem) have never seen.  To the credit of Practice-It!, they are collecting these awful error messages and trying to generate student “friendly” versions.

I wonder, though, if we can do even better than rewriting Java’s error messages. In each of these situations, we have a lot of knowledge about what code the student should be writing, what errors we might expect, and what the student knows already.  We should be able to tune the error messages to the problem.

Here’s the radical idea I’ve been exploring:  What about building our own parsers?  A parser for any of these problems does not have to be a parser for all of Java or Python. In fact, it shouldn’t be.  We know from lots of research (e.g., Lewis Johnson’s PROUST and Anderson’s Cognitive Tutors) that student answers to coding problems mostly fall in a small range of options, and those with radically different answers are far more likely to be radically wrong than brilliant-and-different-thinking — and letting beginning students flail with all the flexibility of the full language is simply wasted time.  Only let the students type in a subset of the language, but provide understandable, informed error messages for that subset, tuned to the problem.

As computer scientists, we might blanch at the complexity of writing parsers, remembering hours spent battling Lex and YACC.  But Lex and YACC were written before 1975. What’s the possibility that we could do better after 35 years of development and Moore’s Law increases? I’ve been exploring OMeta lately for just this purpose — it makes it possible to build parsers for rich languages in surprisingly few lines of code.
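To make the idea concrete, here is a hypothetical sketch (all names and messages invented for illustration) of a problem-specific checker: a tiny pattern-based recognizer for just the method header of one Java exercise, reporting errors in the exercise’s own vocabulary rather than the compiler’s. A real tool built with OMeta or a hand-written recursive-descent parser would cover the whole exercise subset, but the principle is the same:

```python
import re

def check_method_header(line):
    """Check a student's Java method header for one specific exercise.

    Expected shape: public int someName(int x)
    Returns a student-friendly message, or None if the header looks right.
    """
    m = re.match(r'\s*public\s+(\w+)\s+(\w+)\s*\((.*)\)\s*\{?\s*$', line)
    if not m:
        # Nothing header-shaped at all: suggest the expected form.
        return ("That doesn't look like a method header yet. "
                "Start with: public int yourName(int x)")
    return_type, name, params = m.groups()
    if return_type != 'int':
        return f"This problem asks for a method that returns an int, not {return_type}."
    if not params.strip():
        return "Don't forget the input parameter -- this method needs an int argument."
    return None  # header is fine for this exercise

print(check_method_header("public int plusOne()"))
# Points at the missing parameter, instead of a cryptic compiler message
# about enums and expected tokens.
```

Because the checker knows exactly which exercise the student is doing, every message can name the student’s actual mistake in terms of the problem, which is precisely what a general-purpose compiler cannot do.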

The constructive recommendation that I have for computing educators building (or, like me, considering building) tools to fill the book-to-IDE gap: be computer scientists, who build their own programming language tools and can use those tools to improve education.  We don’t need to use a compiler as a monolithic piece of software.  We know the techniques used in building those compilers, and we can mix and match those techniques and components in our tools to help our students learn.

June 25, 2010 at 11:29 am 5 comments

More teacher education vs. centralized control

Linda Darling-Hammond’s new book The Flat World and Education: How America’s Commitment to Equity will Determine Our Future is excerpted in this piece at rethinkingschools.org. I have the book but hadn’t started it yet, but now I’m really intrigued.

In this excerpt, Linda Darling-Hammond is contrasting the success of Finland with the direction of educational change in the United States.  While the US has moved more toward standardized testing and increased curricular standards (even at a national level), Finland has (instead) decreased the national standards and instead increased education for its teachers — three graduate years, paid for by the state.  The goal is to increase the quality of the teachers, rather than try to check outcomes and enforce standards (in some sense) after the fact.

This is relevant for us because the current trends in improving computing education (e.g., the new AP “Computer Science: Principles” exam, and the efforts toward getting CS into the Common Core) look much more like the US mainstream strategy than the Finland option that Darling-Hammond is praising.  I admit my naiveté — I had not even considered the trade-off between our current centralized strategy in high school CS and this option for fewer standards and better education for teachers.  I’m not sure that Darling-Hammond is right (e.g., will a strategy that works in Finland also work in the larger and more diverse United States?  Can we create these post-graduate teacher education programs in the US, at scale, especially in CS where such programs are almost non-existent?), but I’m intrigued and want to learn more.

The process of change has been almost the reverse of policies in the United States. Over the past 40 years, Finland has shifted from a highly centralized system emphasizing external testing to a more localized system in which highly trained teachers design curriculum around the very lean national standards. This new system is implemented through equitable funding and extensive preparation for all teachers. The logic of the system is that investments in the capacity of local teachers and schools to meet the needs of all students, coupled with thoughtful guidance about goals, can unleash the benefits of local creativity in the cause of common, equitable outcomes.

Meanwhile the United States has been imposing more external testing—often exacerbating differential access to curriculum—while creating more inequitable conditions in local schools. Resources for children and schools, in the form of both overall funding and the presence of trained, experienced teachers, have become more disparate in many states, thus undermining the capacity of schools to meet the outcomes that are ostensibly sought. Sahlberg notes that Finland has taken a very different path. He observes:

The Finns have worked systematically over 35 years to make sure that competent professionals who can craft the best learning conditions for all students are in all schools, rather than thinking that standardized instruction and related testing can be brought in at the last minute to improve student learning and turn around failing schools.

via Steady Work Finland.

June 25, 2010 at 9:37 am 3 comments

Paying teachers for merit only works if you can measure merit

One of the most critical issues for secondary school CS education is teachers.  Whether we’re creating technology to teach, or whether we’re trying to reach CS10K, the issue is creating enough good teachers to ramp up computing education.  To emphasize, the goal is creating enough good teachers.  We still have a big problem measuring “good.”

One of the nation’s most ambitious efforts to link teacher compensation to student achievement has done little to improve test scores or retain teachers at participating Chicago Public Schools, according to a report released Tuesday.

More than three years after the pilot program was announced to great fanfare by Mayor Richard Daley and former schools chief Arne Duncan, now U.S. education secretary, selected schools are performing no differently than schools that did not implement the program, according to the research group Mathematica.

via Merit pay system found to make no difference at Chicago Public Schools – chicagotribune.com.

The key insight to this paper comes later in the article:

“Fundamentally, you still have the same performance evaluation and the same compensation system that every other school has,” said Alicia Winckler, the chief of human capital at the school district. “Until you really change the base structures, I don’t anticipate we’ll see different outcomes.”

This makes sense to me.  Economics says that you get what you reward.  If you can’t measure real teacher merit, then you’re not rewarding what you want.  You’re encouraging a construct, a desire to do better at the merit measures.  Until we know how to measure what being a “good teacher” means, paying for merit may not work.

June 24, 2010 at 11:00 am 4 comments

How much does undergraduate education really cost?

Interesting analysis suggesting that undergraduate education actually costs much less than undergraduates and the state are charged, so increasing enrollment would actually buoy up universities’ bottom lines.  Now, that might not actually work, because much of what universities actually pay for has little to do with education:

If public universities are really committed to promoting access, affordability, and quality, they should consider increasing their funding by accepting more undergraduate students instead of raising tuition and restricting enrollments. While many would argue that higher education institutions are already unable to deal with the students they currently enroll, in reality, it costs most public research universities very little to educate each additional student, and the main reason why institutions claim that they do not get enough money from state funds and student dollars is that they make the students and the state pay for activities that are not directly related to instruction and research….

This means that most of the money coming from undergraduate students and the state is used to pay for sponsored research, graduate education, administration, and extracurricular activities. Furthermore, the main reason why the cost for instruction is so low is that research universities rely on large classes and inexpensive non-tenured faculty and graduate students to teach most of their undergraduate courses. However, my point is not that states or students shouldn’t support the full range of activities that universities pursue; rather, I am arguing that the best way to make up for the loss of state funding is to enroll more students.

via Views: The Solution They Won’t Try – Inside Higher Ed.

Contrast this with this interview with University of Georgia’s president:

Q: UGA recently accepted another freshman class. How much do you hear from parents of rejected students who say my son or daughter grew up wanting to go to Athens?

A: I hear it a lot. You especially don’t want to be me in April. Unfortunately, we turned down about 12,000 Georgia students this year. But we’ve stretched about as much as we can stretch. In my 13 years here, we’ve grown the freshman class from about 3,800 to roughly 5,000. We’re much larger now than Chapel Hill [University of North Carolina]. We’re much larger than Virginia. We think we have just about optimized the number of students that we can serve.

June 24, 2010 at 10:57 am 6 comments

It’s not just CS: All of science is hurting for majors

Two competing reports suggest that it’s been pretty bad, but maybe it’s now getting better:

The number of computer science degrees awarded to U.S. citizens from 2004 to 2007 (the latest figures available) declined 27%, according to the National Science Board. But the shortfall isn’t just in computer science. Neither universities nor high schools are preparing enough U.S. students in so-called STEM subjects: science, technology, engineering, and math. While observers blame different causes — lousy secondary schools, boring college courses, lazy students — few deny a crisis exists.

For every new Ph.D. in the physical sciences, according to the Aerospace Industries Association, the U.S. graduates 50 new MBAs and 18 lawyers; more than half of those with bachelor of science degrees still enter careers having nothing to do with science. The ACT testing service says only 17% of high school seniors are both interested in STEM majors and have attained math proficiency. Even among students who begin college pursuing a STEM degree, only half wind up with one. Finding new STEM teachers has become especially urgent: As of two years ago, nearly 60% of U.S. workers with STEM degrees were 45 and older.

via Where have all the science majors gone? – Jun. 9, 2010.

In contrast, from NSF news report in June:

In 2008, there were more students enrolled in U.S. science and engineering (S&E) graduate programs than in the previous year. New National Science Foundation (NSF) data show graduate enrollment in S&E programs grew 2.5 percent over comparable data for 2007. Noteworthy was the 7.8 percent increase in first-time, full-time enrollments of S&E graduate students, and the increase occurred across all S&E fields.

June 23, 2010 at 9:37 pm 1 comment

Proving and Improving Teaching Programming Languages

The SIGPLAN Education Board has produced a report, “Why undergraduates should learn the principles of programming languages,” which was presented at the ACM Education Council meeting.  It makes four claims for why students should study programming languages:

  • Students learn widely-applicable design and implementation techniques.
  • Many students will need to create new domain specific languages or virtual machines, so it’s useful for them to study what’s known about languages.
  • By learning programming languages, students learn new computational models and speed learning of new languages.  “The best preparation for quickly learning and effectively using new languages is understanding the fundamentals underlying all programming languages and to have some prior experience with a variety of computational models.”
  • Students learn how to choose the right programming language for a task.

The problem is that we have empirical support for none of these claims.  People are amazingly bad at transferring knowledge.  People tend to learn about a specific situation and not recognize when the same idea applies in a new situation — or worse, they transfer negatively, mistaking the similarity and using older knowledge in an incorrect way.

One of the few treatments of transfer of programming knowledge is The Transfer of Cognitive Skill by Mark Singley and John Anderson.  Transfer between programming languages, even between skills in the same language, is surprisingly small.  For example, there is evidence that students don’t even transfer (“vertically” as they describe it) between knowledge of how to write programs and how to debug those programs.

This doesn’t mean that the SIGPLAN folks are wrong or that those claims are wrong.  It’s simply that they haven’t been shown yet.

  • We need studies showing students learning design and implementation techniques from programming languages, then applying them in new contexts.
  • We need to show that students can usefully draw on older languages when designing new languages.
  • We need to show that knowing one set of languages improves learning of a later set.  (Ben Shneiderman argued in the late 1970s that learning a second language can be even harder than learning a first language.)
  • We need to show that we can teach students rubrics or guides by which they can choose new languages effectively.

My guess is: We can do all these things.  The real trick is how we teach such that these things happen.  There are these great examples in How People Learn showing that highlighting foundational knowledge, so that students recognize it and can use it in new contexts, can improve performance.  It is possible to teach for transfer.  No, transfer doesn’t occur automatically.  That doesn’t mean it can’t happen.

The SIGPLAN Education Board is planning to produce curricula to support the goals they’ve outlined.  I hope that they also create learning guides, recommendations on how to teach programming languages, and studies showing that these guides and recommendations work.  I believe that we can prove that learning programming languages can be very useful, but it may involve improving on current practice, which may not be informed by what learning scientists know about teaching for transfer.

June 22, 2010 at 5:59 pm 3 comments

Technology plus policy for scale

I’m at the University of California at Berkeley for an ACM Education Council meeting this week.  Yesterday, we heard a slew of reports: on what the SIGs (from SIGCHI to SIGGRAPH to SIGPLAN) are doing in education, on the latest in the common core initiative, and on what’s going on at CSTA.  Mehran Sahami gave an overview of Stanford’s new CS curriculum, and Andy van Dam presented his report from CRA-E (which he’ll do again at Snowbird).  (Both Mehran’s and Andy’s talks emphasized the role of context in motivating computing and in supporting learning about connections between computing and contexts that we want students to learn.)

The highlight of the day for me was a panel that Dan Garcia organized on the challenges and future of computing education, considered across the education pipeline.  The speakers were:

  • Michelle Friend Hutton, middle school CS teacher and president of CSTA.
  • Josh Paley, a high school CS teacher in Palo Alto (high end school).
  • Eugene Lemon, a high school CS teacher from Oakland, CA (where four of their students were killed this year, including one of his AP CS students who was about to become the first student from their school to ever go on to a four year college).
  • Tom Murphy, a community college professor (who teaches C++ and Scheme, and whose goal is for his students to not have to re-take anything when they get to Berkeley).
  • David Patterson, a famous Berkeley professor and past president of ACM.

Dave went last, and expressed pessimism that the problems of K-12 CS education could ever be solved.  That was quite a gauntlet to throw down, so the Q&A session afterward was long (it was scheduled for 30 minutes and went on for over an hour) and active.  Roscoe Giles of Boston University encouraged us to think not only about solutions, but about solutions that scale.  Teaching CS in K-12 is a huge problem.  Eric Roberts of Stanford (with Dave Patterson agreeing) suggested that technology is really our only possible solution to the problem — we have to be able to use the technology we teach about, to teach about technology better.

I wanted to throw in a follow-on comment.  I strongly agree with Eric and Dave that technology is key, but I think that education policy is a critical component.  The CS10K project is about having 10,000 high school CS teachers ready to teach AP in 10,000 schools by 2015.  We have 2,000 high school AP CS teachers today.  We can’t possibly increase the number of teachers five-fold without distance education; we can’t ramp up face-to-face programs fast enough.

But what happens in 2020?  Lijun Ni’s research (based on studies of other STEM fields) suggests that we’ll have maybe 5,000 teachers left of that original 10,000.  STEM teachers tend to drop out at a higher rate than other K-12 teachers, around 50% within five years.  What influences teachers to stay?  A sense of belonging, which is shaped by certification (e.g., teachers who are certified in science call themselves “science teachers” and tend to seek out professional development and community) and by support systems.  Unless there is certification, and high school CS curricula (i.e., more than just AP classes defined and being taught), and a community of CS teachers, we can expect to lose more than half of those teachers in the first five years.

So technology is necessary to get the scale Roscoe is calling for, but so is policy to keep those teachers at scale.

June 22, 2010 at 10:53 am 13 comments

Why Can’t Johnny Develop Secure Software?

The line of reasoning here is interesting.  The people interviewed in this piece argue that software developers will never learn to develop secure software — it’s at odds with their goals as developers (to write code fast, to meet customer needs).  But they also argue that it doesn’t work to bring in an outside security expert, because she won’t be able to pay attention to everything in the code to find every possible security breach.  Their answer: automated testing tools.  It feels like an Agile answer to me — we’ve got a development problem with no obvious solution, so we’ll test and iterate.

“The talent coming out of schools right now doesn’t have the security knowledge it needs,” says Paul Kurtz, executive director at SAFECode, a nonprofit organization backed by major software vendors and focused on secure software development practices. “There needs to be a lot more work in our educational institutions to teach them how to develop secure code.”

But nearly all experts agree that no matter how strong the training effort, the average developer will never be very security-savvy. “They’re always going to be more focused on code quality and trying to meet their deadlines,” Sima says. “If I’m a developer, as soon as I’ve been assigned a project, I’m already behind. If there’s a faster way to do something, they’re going to take it, because for them speed is more important than security.”

via Why Can’t Johnny Develop Secure Software? – secure software development/Security – DarkReading.

June 22, 2010 at 10:21 am 1 comment

NRC Non-Statement on What is Computational Thinking

I read the new National Research Council report on what Computational Thinking is on the way out here to Berkeley (for the ACM Education Council meeting).  It was fascinating but a little disappointing.  As Marcia Linn explains in the Preface to the report, the goal wasn’t to create and present a consensus view of what computational thinking is.  Instead, the report simply presents the discussion and the lack of consensus, with lots of argument and dialogue.  I found the discussion really interesting, with some wonderful speakers presented.  I didn’t come away with any answers, though.

I was particularly pleased to read a revisiting of the NRC Fluency with IT report (sometimes called the “FITness Report”), led by Larry Snyder.  There are lots of people creating “information technology fluency” classes, but they often get it wrong.  As this report describes, the FITness report does call for programming — maybe in a domain specific language, maybe even in Excel, but definitely in a precise and testable way.

Some of my favorite parts of the report:

  • The discussion of the 2004 NRC report on what is computer science, which quotes Gerald Sussman saying “Computer science is not a science, and its ultimate significance has little to do with computers.  The computer revolution is a revolution in the way we think and in the way we express what we think.”
  • Alan Collins (who significantly influenced my dissertation work) whom I haven’t heard much from recently, was there and emphasized the importance of “representational competence, which he described as the effective application of computational means of representation of knowledge.”
  • The discussion of modeling is really interesting.  Uri Wilensky and Yasmin Kafai spoke of the importance of having students learn to critique models and question assumptions of models.
  • The report quotes Donald Knuth and Fred Brooks (in previously published work) and includes Alan Kay and Roy Pea who were at the event.

The report does get to what I see as one of the key questions of computational thinking.  Asking “What is Computational Thinking?” doesn’t make much sense by itself.  It’s more interesting to ask it in terms of outcomes: “What does computing education for everyone mean, and what does such education offer?”  The report does speak to the key question, “Who is ‘everyone’?”  Thinking about K-12 students as ‘everyone’ leads to one kind of focus on computational thinking, thinking about science and engineering majors leads to another, and thinking about college vs. non-college attending citizens leads to different emphases.  The report doesn’t answer the question of who ‘everyone’ is.  A key contribution of this report is to highlight that question and point out some of the different answers and the implications of each answer.

June 21, 2010 at 12:38 pm 2 comments

Adjuncts and Retention Rates

Adjunct faculty are particularly important in computing, where we want students to understand something about computing practice and in particular, gain from the experience of those who have developed expertise through years of effort.  However, we already have retention problems in computer science classes.  Studies like these are important for us — we need to figure out how to use adjuncts to enhance the educational opportunities that we offer students, but we need to do that in a way that avoids a rise in failure rates.

Freshmen who have many of their courses taught by adjuncts are less likely than other students to return as sophomores, according to a new study looking at six four-year colleges and universities in a state system. Further, the nature of the impact of adjunct instruction varies by institution type and the type of adjunct used, the study finds. And in some cases, students taking courses from full-time, non-tenure track instructors or from adjuncts well supported by their institutions do better than those taught by other kinds of adjuncts.

via News: Adjuncts and Retention Rates – Inside Higher Ed.

June 21, 2010 at 12:10 pm 1 comment
