Posts tagged ‘undergraduates’
The growth of departments in the Taulbee report is astonishing, but what Computerworld got wrong is calling it “computer science enrollments,” as opposed to “computer science enrollments in PhD-granting institutions.” The Taulbee report doesn’t cover all CS departments, and that’s why the new NDC survey has been launched.
The Taulbee report also indicates that the percentage of women graduating with a Bachelor's in CS has risen slightly, while the Computer Engineering percentage has dropped. Both are well south of 15%, though — a depressingly small percentage.
The number of new undergraduate computing majors in U.S. computer science departments increased more than 29% last year, a pace called “astonishing” by the Computing Research Association.
The increase was the fifth straight annual computer science enrollment gain, according to the CRA’s annual survey of computer science departments at Ph.D.-granting institutions.
In the context of David Notkin’s receipt of the 2013 Computing Research Association A. Nico Habermann Award for outstanding contributions to supporting underrepresented groups in the computing research community, Lecia Barker of the National Center for Women & Information Technology (we hosted their Washington State Awards for Aspirations in Computing last weekend) sent us the chart to the right, comparing UW CSE’s performance to the national average in granting bachelors degrees to women.
It was really great to see these results in the U. Washington CSE News, but it got me to wondering: Did all the big R1 institutions rise like this, or was this unusual at UW? I decided to generate the GT data, too.
I went to the GT Self-Service Institutional Research page and downloaded the degrees granted by college and gender in each of 2005, 2006, and on up to 2011. (All separate spreadsheets.) I added up Fall, Spring, and Summer graduates for each year, and computed the female percentage. Here are all three data sets graphed. GT hasn’t risen as dramatically as UW in the last two years (so UW really has done something remarkable!), but GT’s rise from far below the national average in 2005 to above the national average in 2009 is quite interesting.
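The aggregation step is simple enough to sketch in a few lines. The numbers below are invented toy data, not GT’s actual figures; the point is just the sum-the-terms, compute-the-percentage computation described above:

```python
# Sketch of the aggregation described above: sum Fall/Spring/Summer
# graduates for each year, then compute the female percentage.
# All numbers here are made up for illustration.
from collections import defaultdict

rows = [
    # (year, term, gender, degrees)
    (2005, "Fall", "F", 10), (2005, "Spring", "F", 12), (2005, "Summer", "F", 3),
    (2005, "Fall", "M", 80), (2005, "Spring", "M", 90), (2005, "Summer", "M", 20),
    (2006, "Fall", "F", 14), (2006, "Spring", "F", 15), (2006, "Summer", "F", 4),
    (2006, "Fall", "M", 85), (2006, "Spring", "M", 95), (2006, "Summer", "M", 25),
]

# Collapse the three terms into one annual total per gender.
totals = defaultdict(lambda: {"F": 0, "M": 0})
for year, term, gender, degrees in rows:
    totals[year][gender] += degrees

pct_female = {
    year: 100 * t["F"] / (t["F"] + t["M"]) for year, t in sorted(totals.items())
}
for year, pct in pct_female.items():
    print(year, round(pct, 1))
```

In the real exercise, each year’s rows would come from one of the downloaded spreadsheets rather than a hard-coded list.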
Why is UW having such great results? Ed Lazowska claimed at SIGCSE 2013 that it’s because they have only a single course sequence (“one course does fit all,” he insisted) and because they have a large number of female TAs. I don’t believe that. I predict that more courses would attract more students (see the “alternative paths” recommendation from Margolis and Fisher), and that female TAs support retention, not recruitment. I suspect that UW’s better results have more to do with the fact that GT’s students declare their major on their application form, while UW students have to apply to enter the CSE program. Thus, (a) UW has the chance to attract students on-campus and (b) they have more applications than slots, so they can tune their acceptances to get the demographics that they value.
The last paragraph of this is interesting. Yes, Engineering and Computer Science (in particular) are booming, but not everywhere, and it’s not evident to everyone. I was just at Tufts on Monday, where some Engineering students were asking me if Computer Science was growing in enrollment anywhere. Well, there’s Stanford…
Now? According to three stats buried in a press release from the university’s engineering school, Computer Science is the most popular major at Stanford. More students are enrolled in it than ever before (even more than at the dot-com boom’s height in 2000-2001). And more than 90% of Stanford undergrads take a computer science course before they graduate.
Stanford is Stanford, and its stats aren’t necessarily indicative of academia at large: Countrywide, the most popular major is business. But the school’s computer-heavy numbers reflect its existence, both as a member of what candid college administrators call the Big Four (the other three are Princeton, Harvard and Yale), and as a school nestled close to Silicon Valley’s elite.
In a lengthy feature from earlier this year, the New Yorker’s Ken Auletta revealed that, even beyond Stanford’s CS department, “A quarter of all undergraduates and more than 50% of graduate students [at Stanford] are engineering majors. At Harvard, the figures are 4 and 10%; at Yale, they’re 5 and 8%.”
Mike Hewner successfully passed his PhD dissertation defense on Friday. There are just some dissertation tweaks and bureaucracy to go. In the process of the defense, there were several really interesting implications for his theory that got spelled out, and they relate to some of the comments made in response to my post on his dissertation last week.
Early choice is not early decision: In response to a question about when students should decide their specializations (should it be earlier in the degree or later in the degree), Mike said, “Making a choice early doesn’t force making a decision early.” We then spent some time unpacking that.
In Mike’s theory, students spend time exploring until they face a differential in enjoyment between classes that they interpret as an affinity for one topic over another. Students use this process to decide on a major, or to decide on a specialization area within a major. Once they’ve made a decision, they are more committed, and are willing to go through less-enjoyable classes in pursuit of a goal that they have now decided on. Forcing students to make a choice early (between majors or specializations) doesn’t change this process — they don’t decide earlier to become committed to a major or specialization. Forcing the choice early may mean delaying graduation, when students finally decide on something else and become committed to that other path.
Job as ill-defined goal: One of the surprising and somewhat contradictory ideas in Mike’s thesis is that, while US students today may be more driven to get a college education in order to get a better job or a middle class lifestyle, they don’t necessarily know what that job entails. Students that Mike interviewed rarely could describe what kind of job they wanted, or if they did, it was vague (“Work for Google”) and the students couldn’t explain what that job would require or what classes they should take to prepare for that job.
When we were first developing Threads, we talked about helping students to describe the kind of job they wanted, and then we could advise them to pick the Threads that would help them achieve that career. But Mike’s theory says that that’s backwards. Students don’t know what kind of job they want. They use experiences in the classes to help them decide what kind of work they will enjoy.
Hewner’s theory is constructivist. Mike was asked, “How would you advise a student such that they could figure out the best Thread for themselves?” Mike’s response was that students would need to do something that was authentic and representative of work within that Thread — which is hard to do in an accessible manner for students who don’t know much about that Thread yet. You can’t just tell students about the Threads or about the jobs that fit into the Threads. It’s unlikely that students will be able to successfully predict if they would enjoy the work in the Thread based on a description.
In some sense, Mike’s theory is intensely constructivist. Mike’s students won’t decide on a major, specialization or career choice until they experience the work of that major, specialization, or career choice, and then decide if they enjoy it or not for themselves. If decisions are made based on enjoyment, you can’t tell someone that they’d enjoy the experience. They have to figure it out for themselves.
Interesting new initiative between the White House and NSF to increase the number of graduates in computing and engineering by focusing on retention. (I strongly agree, because retention is where we’ve been focusing our attention.)
This letter announces a cooperative activity between NSF and members of the Jobs Council’s High Tech Education working group, led by Intel and GE, to stimulate comprehensive action at universities and colleges to help increase the annual number of new B.S. graduates in engineering and computer science by 10,000. Proposals for support of projects would be submitted under a special funding focus, “Graduate 10K+,” within the NSF Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP); see http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5488.
Studies have shown that retention during the critical first two years in a student’s major, or along the path towards declaration of a major, is an excellent predictor of eventual graduation with a STEM degree. Recognizing that the correlation between retention and graduation is particularly strong for students in engineering and computer science, we invite proposals from institutions that can demonstrate their commitment to: (i) significant improvement in first- and second-year retention rates in these particular majors, beyond current levels; and (ii) sustained, institutionally-embraced practices (e.g., http://www.asee.org/retention-project) that lead, ultimately, to increased graduation. Jobs Council members anticipate providing support for this special funding focus, with the number of awards to be made contingent on the availability of funds.
ICER 2012 in Auckland, New Zealand, was notable for having the highest number of submitted papers of any ICER (in its 8th year) and almost as many attendees as last year, despite being a long trip for the US and European delegates. In the end, there were 13 research papers accepted, 8 discussion papers (shorter papers, with shorter presentations and longer discussion periods), 16 attendees at the Doctoral Consortium, and 19 Lightning Talks accepted. Despite all the successes, I’m worried whether ICER can meet the global needs for computing education research.
The first talk of the conference, from Ian Utting and the BlueJ crew, ended up winning the People’s Choice award, voted on by the delegates (it used to be called the “Fool’s Award,” but has been renamed the “John Henry Award,” for a paper with great “potential to be transformative”). BlueJ, probably the most successful and popular pedagogical Java IDE, is going to be outfitted in the Spring with event logging. We’ll know what students are doing in BlueJ, at a large scale (probably about 1 TB/year). All of that data is going to get stored (anonymously) for use by researchers. The interesting discussion point is: What are web-scale questions in CS Education?
The Chair’s Award (new to ICER, kind of a best-paper award) was won by Colleen Lewis for her detailed explanation of how a middle schooler at a summer camp in Scratch did his debugging. In a sense, Colleen (just graduated from Berkeley, and just started at Harvey Mudd College) was answering a comment that Ray Lister made which was often quoted during the conference: that students sometimes demonstrate “bizarro programming behaviors.” Colleen carefully reconstructed the activity of the student and pieced together a story of how the student thought through the problem, and how his behavior did make sense.
I tweeted some of my favorite one-liners from the conference. I’ll mention just a couple highlights here.
- Quintin Cutts presented an intriguing paper suggesting a new way of looking at questions that spur learning, with data drawn from Beth Simon’s CS:Principles course for non-majors. The idea is called the Abstraction Transition Taxonomy, and it’s about how we talk about problems (natural language), how we talk about CS (“arrays”), and how we talk in code (e.g., “a[i]”). They hypothesize that questions that lead to transitions between levels may be the most successful scaffolding for novice learning. So, how do we test that hypothesis?
- My former students, Drs. Brian Dorn and Allison Elliott Tew, are working on a new validated measure of attitudes towards computing, based on similar instruments developed for physics and biology. They presented their validation scheme at ICER. I’ve already read a draft of a future paper where they’re actually using the instrument, and I think that this is going to be a big deal.
- Lauren’s subgoal paper drew some oohs when I showed the results, a few shakes of heads (I don’t think everyone believed it), and some challenging questions. “Why aren’t you using this in your intro classes?” asked one questioner. “Or your advanced classes?” asked another. Yup. Good questions.
- One of the lightning talks had an interesting idea: Form pairs for pair-programming based on perception of efficacy. Put non-confident students together! The idea is that self-efficacy feeds off a vacuum, “I’m doing worse than everyone else. I just know.” Having someone else with low-confidence provides evidence that you’re not alone in struggling. No data were presented, but it’s an intriguing idea.
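As a thought experiment, the pairing idea from that lightning talk could be sketched as a tiny algorithm: sort students by a self-reported confidence score and pair neighbors, so the least confident students land together. The roster, names, and 1-5 confidence scale below are all invented for illustration; the talk presented only the idea, not an implementation:

```python
# Hypothetical sketch: pair students for pair-programming by
# self-reported confidence, so low-confidence students pair up.
def pair_by_confidence(students):
    """students: list of (name, confidence) tuples; returns list of pairs."""
    ranked = sorted(students, key=lambda s: s[1])  # least confident first
    # Pair adjacent students in the ranking (odd student out is dropped here).
    return [tuple(ranked[i:i + 2]) for i in range(0, len(ranked) - 1, 2)]

# Invented roster with confidence scores on a 1 (low) to 5 (high) scale.
roster = [("Ana", 4), ("Ben", 1), ("Cy", 5), ("Dee", 2), ("Eli", 3), ("Flo", 1)]
for a, b in pair_by_confidence(roster):
    print(a[0], "<->", b[0])
```

A real study would need to decide what to do with an odd-sized class and whether extreme low-low pairs need extra TA attention.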
One of my mentors here at Georgia Tech is Jim Foley, who recommends structuring research around BHAGs — Big, Hairy, Audacious Goals. The BHAG for computing education is teaching computer science in all schools. What’s particularly scary about this BHAG is that it’s already happening. The US has the CS10K effort. The Computing At School effort is going strong in the UK. New Zealand and Denmark have both instituted new nationwide CS curricula in the last couple of years. There is an enormous need for research on how to help teachers learn to teach computer science, on what the challenges are in teaching computer science to school children (e.g., who have not declared a major of computer science, who are not necessarily motivated to learn computing for a career), and on evaluations of successful models for supporting learning by both teachers and school children. Maybe we’re just going to do it, and figure out later what works. But maybe there’s a better way.
How much of ICER 2012 research could possibly inform these efforts? There’s Colleen’s interesting paper on a pre-teen debugging, and there’s Briana’s work on professional development efforts. That’s pretty much it for directly computing-at-schools/CS10K relevant, from my read of the papers. There were a few papers that addressed non-majors (like Quintin’s, and our statewide survey paper), but at the undergraduate level. The rest of ICER’s papers were seeking to understand and teach undergraduate CS majors.
It’s important to understand undergraduate CS majors and to improve their understanding. My personal research agenda is more on the latter than the former — it’s more important to me to learn how to teach better, rather than to understand the effects of teaching that might be better if we built on everything that we know about teaching. But I do get the value of understanding understanding (or lack of understanding, or even misconceptions). There are far more high school teachers and schoolchildren than there are undergraduate majors, and they’re different. The oncoming problems are much bigger than the ones we’re currently facing.
How do we inform the broader need for research on computing education? Is ICER the place to look for that research? Or will ICER (and SIGCSE) always be a mostly undergraduate-oriented conference (and organization)? If not ICER and SIGCSE, where should we look? I was a reviewer for AERA’s new engineering and computing education division, and while I was excited about those papers, they’re coming at the problems almost entirely from the education perspective. There was little from ICER and the computing education research community. The problems that we need solved will require work from both communities/disciplines/fields. How do we get there?
There’s a meme going around my College these days, about the 10 questions you should never ask your professor (linked below). Most of them are spot-on (e.g., “Did we do anything important while I was out?”). I disagreed with the one I quote below.
One of our problems in computer science is that we teach things for reasons that even we don’t know. Why do we teach how hash tables are constructed? Does everybody need to know that? I actually like the idea of teaching everybody about hash functions, but it’s a valid question, and one that we rarely answer to ourselves and definitely need to answer to our students.
Why we’re teaching what we’re teaching is a critical question to answer for broadening participation, because we have to explain to under-represented minorities why it’s worth sticking with CS. Even more important for me is explaining this to non-majors, and in turn, to our colleagues in other departments. Computing is a fundamental 21st Century literacy, and we have to explain why it’s important for everyone to learn. “Stuck in the Shallow End” suggests that making computing ubiquitously available can help to broaden participation, but we can only make it ubiquitously available by showing that it’s worth it.
I’m back from New Zealand and ICER: today, yesterday, tomorrow — days get confused crossing a dateline. (We landed in both Sydney and Los Angeles at 10:30 am on 13 September.) I spent several hours of the trip reading Mike Hewner’s complete dissertation draft. Mike has been asking the question, “How do CS majors define the field of computer science, and how do their misconceptions lead to unproductive educational decisions?” He did a Grounded Theory analysis with 37 interviews (and when I tell this to people who have done Grounded Theory, their jaws drop — 37 interviews is a gargantuan number of interviews for Grounded Theory) at three different institutions. One of his findings is that even CS majors really have no idea what’s going on in a class before they get there. The students’ ability to predict the content of future courses, even courses that they’d have to take, was appallingly bad. Even our own majors don’t know why they’re taking what they’re taking, or what will be in the class when they go to take it.
We will have to tell them.
“Why do we have to learn this?” In some instances, this is a valid question. If you are studying medical assisting, asking why you have to learn a procedure can be a gateway to a better question about when such a procedure would be necessary. But it should be asked that way–as a specific question, not a general one. In other situations, like my history classes, the answer is more complicated and has to do with the composition of a basic liberal arts body of knowledge. But sometimes, a student asks this because they do not think they should have to take a class and want a specific rationale. In that case, I respond, “Please consult your course catalog and program description. If you don’t already know the answer to that question, you should talk to your advisor about whether or not this is the major for you.”
I hadn’t heard about this form of cheating in MOOC’s. I knew that answers got passed around (as Dave Patterson reported in June), but was surprised to hear that students were creating multiple accounts in order to re-take exams. That changes one’s perception of the 100K registered users. The question raised here in Dick Lipton’s blog is: Is this “cheating” or simply “mastering” the material?
Here is what happens next. Bob signs up for the course multiple times: let’s call them Bob1, Bob2, Bob3, Bob4. Recall there is no cost to Bob for signing up multiple times—none. So why not sign up several times…
Bob’s insight is simple: he now can take the course multiple times and keep only the best grade. Say there is a graded exam. Bob1 takes the exam and gets a 70% on it. Not bad, but not great either. So Bob sees what he got wrong, sees what questions they threw at him. He studies some more, then takes the exam again as Bob2. Of course the exam is different, since all these on-line systems do some randomization. However, the exam covers the same material, so now Bob2 gets an 85% say.
Perhaps Bob is satisfied. But if he is really motivated he studies some more, retakes the exam, and now Bob3 gets 90%. You guessed right. He goes on and takes it one more time as Bob4 who—surprise—gets a perfect 100%.
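A quick simulation shows why the strategy pays off even without any studying between attempts: the expected best of k tries rises with k. The uniform score distribution below is invented for illustration; nothing here models a real MOOC exam:

```python
# Sketch: expected best score over k independent exam attempts.
# Scores drawn from an invented uniform(60, 100) distribution.
import random

random.seed(42)

def best_of(k, trials=10000):
    """Average, over many trials, of the maximum of k attempt scores."""
    total = 0.0
    for _ in range(trials):
        total += max(random.uniform(60, 100) for _ in range(k))
    return total / trials

for k in (1, 2, 4):
    print(k, round(best_of(k), 1))
```

For a uniform(60, 100) score, the expected best of k attempts is 60 + 40·k/(k+1), so four accounts lift the expected grade from about 80 to about 92 by retakes alone; Bob’s studying between attempts only widens the gap.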
I wrote my monthly Blog@CACM piece this last weekend, which was a synthesis of several pieces I wrote here: About the worked examples that I’m trying out in Oxford, the PixelSpreadsheet, and contrasting the study abroad I’m teaching on and MOOCs. I mention that I’m doing an end-of-term survey about how all this worked, and I expect to say more about those results here in the next couple weeks.
In the Blog@CACM piece, I mention an analogy I’ve been thinking about. (Please forgive the terrible pun in the title.) John Henry is an American folk hero who worked on the railroads “driving steel.” Along comes the steam-powered hammer, which threatened the job of steel-drivers like John Henry. John Henry raced the steam-powered hammer, and beat it — but suffered a heart attack and died immediately afterwards. In some versions of the story, John Henry’s wife or son picks up his hammer and keeps driving steel. But as we all know, the steam-powered hammer did drive the steel-drivers out of a job.
I wonder about the analogy to higher education. The Internet makes information cheaper and easier to access. Teachers play the role of John Henry in this analogy. Sure, they may do a better job than that steam-powered education, but cheap and plentiful is more important than quality, isn’t it? Taking the analogy in a different direction, the teachers who are building the new Coursera courses at Universities with no additional pay or course/work release remind me of the John Henry who suffered exhaustion and “died with a hammer in his hand.”
Colleagues who went to the Google Faculty Summit came back with stories of how MOOC’s were part of the conversation there. I heard that my advisor, Elliot Soloway, stood up to say:
“I’m at the University of Michigan where in addition to our university we have Central Michigan, Eastern Michigan, Western Michigan, etc. In five years, those schools will be gone.”
That’s when I realized another potential casualty in the battle over MOOCs, if Elliot is right. My niece went to Central Michigan to get a degree in Occupational Therapy. Today, she works with special needs children, with both physical and cognitive impairments. There are only a couple of OT programs in the state of Michigan, and none at U-M. Can you imagine teaching students how to provide therapy to patients with physical impairments via MOOCs?!? (Relates to “Gas Stations Without Pumps” on what works as a Coursera course.) How do we teach everything that we want and need to teach if only elite universities and MOOC’s exist for higher education? Is the role of John Henry in the higher education version of the analogy played by teachers (as in my original blog post), by degree programs that don’t fit these models, or by the students who seek to do something other than what the elites and MOOCs offer?
It’s over-the-top melodramatic, I admit, but that’s what makes for good folklore. Folklore and similar stories play a useful purpose if they help us to see new perspectives. In the vision of the world where community colleges don’t survive, who gets wiped out (besides the Colleges themselves) like John Henry?
There are efforts to engage undergraduates in open-source software development as a form of service learning, to be part of a developer community, or as a way to gain experience with significant code bases. I’ve mentioned before that the OSS community doesn’t have a great track record for diversity and welcoming newcomers. Here’s a new study describing how hard it is for newcomers to connect with the old-timers in OSS. These results suggest that undergrads doing OSS for a course are still providing a service and are likely still gaining good experience working on a larger code base, but they’re unlikely to become part of the established developer community. It won’t really be an apprenticeship model — they’ll mostly just be talking to each other.
“Taken together, we found that accomplished developers tend to connect with other accomplished developers, essentially forming an elitist circle in the OSS (open source software) community. By contrast, it is more difficult for less successful developers to establish collaborative relations, and even if they do, they tend to connect with others who have a similar lower level of performance and experience,” Shen writes in the article.
Heading to International Computing Education Research 2011 in Rhode Island: How CS students choose Threads
I’m heading out Sunday for the 2011 International Computing Education Research (ICER) Workshop, hosted by Dr. Kate Sanders at Rhode Island College in Providence. The schedule is exciting — we have a bunch of speakers from communities who have been doing CS Ed research, but have not been at ICER previously. (“Workshop” is ACM’s name for a small conference.) I’m chairing the discussion papers session. I’m looking forward to Eric Mazur’s keynote (who has a new educational technology that he’s promoting), and his advice from the Physics Education Research community to the much-younger Computing Education Research community.
The second talk of the conference is from my PhD student, Mike Hewner (same student who previously studied what game developers look for in graduates). Mike’s dissertation research is asking, “How do computer science undergraduates define ‘computer science,’ and how does their definition influence their educational decisions?” He’s using grounded theory, which is a demanding social science method. He’s done about a dozen interviews so far, and has not yet reached “saturation” (where new interviews don’t contribute to the developing theory), so the current theory is still considered “tentative.” This paper is one piece of that work.
In most CS degree programs, there are some options for students: Choices between electives, between specialization paths, between Threads. Mike wanted to know how students made those choices. Several findings surprised me. First, students don’t “begin with the end in mind.” Students he interviewed had little idea what job they wanted, and if they did, they didn’t really know what the job entailed. Second, students don’t think that the choice of specialization is all that important — they figure that they’re at a good school, they trust the faculty, so whatever choice they make will turn out fine. Finally, an engaging, fun class can dramatically influence students’ perception of a field. A “fun” theory class can convince students that they like theory. Their opinion of the subject is easily swayed by the qualities of the class and the teacher. “Why are you in robotics (even though it doesn’t have much to do with what you say you want to do for your job)?” “Well, I really liked the robots we used in CS101…”
Hope to see some of you there!
One of our graduating seniors shared the below blog post with me, and I shared it with all the faculty who teach the lower division courses in Georgia Tech’s College of Computing. Andrew makes the strong statement in his blog post: “Students shouldn’t be able to graduate with a Computer Science degree from Georgia Tech without being able to read and write production quality code.”
My sense is that most of the faculty who have responded agree with Andrew. Our students should know how to read significant code (e.g., walking through the whole Linux kernel in OS course). One of our professors talked about the value of watching his own code be rewritten by a professional, expert programmer — it was stunning how much better the code got. We could teach more about reading production code at the University, but I’m not sure that we could teach enough about writing production code at the University. As Bjarne Stroustrup pointed out, faculty don’t build much these days. Programming well has much in common with craft and art, and it’s not something that the University does well.
If the University could not teach reading and writing production code well, where should students learn it? One answer is, “On the job.” Craft is often taught as an apprenticeship. I worry that the computing industry has given up on its professional development responsibilities. We talk about people being lifelong learners. Is that entirely an individual responsibility? When I was at Bell Labs and Bellcore, there were dozens of classes that I could (and did!) take. Where has that gone? Is everyone a contractor these days, or does industry have a responsibility to develop its human resources?
My research interest is more in the computing that everyone needs, and in that sense, I agree with Andrew, but without the word “production.” I fear that we focus too much on having students write code, and not enough time reading code examples. Worked examples are a powerful approach to learning that we simply make too little use of in computer science. We emphasize so much that computer science is about “problem-solving” that we only make students solve problems, as opposed to reading solutions and learning to learn from reading. I’m preparing my CE21 proposal now, and spending a lot of time learning what educational psychologists know about how to write examples that students learn transferable knowledge from – research that we pretty much ignore in computing education.
Literacy is about being able to write and read.
As I come closer and closer to graduation, I’m looking back at the Georgia Tech Computer Science program, the things it did well and not so well.
One piece I feel is missing in the curriculum is having students read good, high-quality code. We’re asked to code alone and code in groups, code in labs and code in dorms, code on paper and code in IDEs. But we’re rarely asked to read code.
It seems like the administration and professors think this skill just magically appears with practice. I disagree, and I think we can do better.
I took a workshop this morning on building intelligent tutoring systems. That’s surprising if you knew me even 10 years ago, when I thought that intelligent tutoring systems were an interesting technology but a bad educational idea. I saw tutors as fancy worksheets that deadened education and taught only the kinds of things that weren’t worth teaching. Then I spent the last eight years trying to figure out how to teach computing to people who do want to learn about computing but don’t want to become professional software developers (i.e., Media Computation).
- I’ve come to realize that there are students who need drill-and-practice kinds of activities to succeed, for whom discovery or inquiry learning is more effort than it’s worth. I recognize that in myself — I find economics fascinating and enjoy reading about it, but I’m not interested enough in economics to (for example) sit for hours with an economic simulator to figure out the principles for myself.
- I also now believe that even those students who do want to discover information for themselves still need a bunch of foundational knowledge on which to base their discoveries. A student who wants to figure out something about computing using Python, still has to learn enough Python to be able to use it as a tool. It’s not worth anybody’s time to learn Python syntax through trial-and-error discovery or inquiry learning.
I am now interested in tools like intelligent tutoring systems to help students learn foundational skills and concepts as efficiently as possible.
The workshop this morning was short, only three hours long. Still, we all built simple model-tracing tutors for a single mathematics problem, and I think most of us started building a tutor for something that we were interested in. I started building a tutor that would lead a student through writing the decreaseRed() function that we start with in both the Python and Java CS1 books.
The Cognitive Tutor Authoring Tools (CTAT) that the CMU folks have built are amazingly cool! They’ve built Java and Flash versions, but the Flash version is actually totally generic. Using a socket-based interface, the CTAT for Flash tool can observe behavior to construct a graph of potential student actions, which can be labeled with hints, structured for success/failure paths, made ordered/unordered, and made generic with formulas. The tool can also be used for creating general rule-based tutors. CTAT really is a general tutoring engine that can be integrated into just about any kind of computational activity. I’m still wrapping my head around all the ways to use this tool.
My biggest “Aha!” (or maybe “Oh No!”) moment came from this table:
First, I’d never realized that 30 minutes of activity in the famous Geometry Tutor took two months to develop! The whole point of the CTAT effort is to reduce these costs. This table gave me new insight into what it’s going to take to meet President Obama’s goal of computational, individualized tutors. A typical semester course in college is about three contact hours and 10-15 hours of homework per week for 15 weeks. Let’s call it 13 hours of scripted learning activity a week, for a total of 195 hours. The best ratio on that table is 48:1 — 48 hours of development for one hour of student activity. 9360 development hours (for those 195 hours at a 48:1 ratio), at 40 hours per week, is roughly four and a half person-years of effort to build a single college semester course. That’s not beyond reason, but it is certainly a sobering number. A full-year high school course, at 45 minutes a day, five days a week, for 30 weeks is 112.5 student hours, which is (again using the best case of 48:1) 5400 development hours. More than two person-years of effort is the minimum to produce a single all-tutored high school course.
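The back-of-envelope arithmetic above is easy to check; here it is spelled out, using the best-case 48:1 development-to-instruction ratio:

```python
RATIO = 48  # best case: hours of development per hour of tutored activity

# College semester: ~13 hours/week of scripted activity for 15 weeks
college_student_hours = 13 * 15              # 195 student hours
college_dev_hours = college_student_hours * RATIO  # 9360 development hours
print(college_dev_hours / 40 / 52)           # person-years at 40 h/week, ~4.5

# High school year: 45 minutes/day, 5 days/week, 30 weeks
hs_student_hours = 0.75 * 5 * 30             # 112.5 student hours
hs_dev_hours = hs_student_hours * RATIO      # 5400 development hours
print(hs_dev_hours / 40 / 52)                # ~2.6 person-years
```

And remember, these are best-case figures: at the Geometry Tutor's ratio, the totals would be far larger.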
Here’s another great role for computer scientists: Build the tools to make these efforts more productive, and make the tools easier to use and easier to understand so that a wider range of people can engage in the effort. CTAT is great, but still requires a hefty knowledge and time investment. Can we make that easier and cheaper?
EduCause is heading up a new effort funded by the Gates Foundation to use technology to improve college readiness and thus completion rates. Below are their main bullets and a link to more information. This links a couple of themes showing up in this blog lately: the importance of college completion rates, and how we in Computing should be in the forefront of figuring out how to use technology for learning.
- The high school graduation rate for all U.S. students is just over 70%. For African-Americans, Hispanics, and low-income students, the rate hovers at slightly over 50%.
- Of those who do graduate from high school, only half are prepared to succeed in college.
- For those who do enroll in postsecondary education, only about half will actually earn a degree or certification, with as few as one quarter of low-income students completing a degree.
- Today, it is virtually impossible to reach the middle class, and stay there, with only a high school diploma.
- Postsecondary education is increasingly critical to individual and family financial security, to a vibrant economy, and to an engaged and participatory society.
Beth Simon just let me know that her paper has been accepted to ITICSE 2010. She shared the submitted draft with me, and I’ve been biting my lip, wanting to talk about it here. Now that it’s accepted, I can talk about it, while still leaving the real thunder for Beth’s paper and her presentation this summer. For me, it’s exciting to see two years’ worth of data with CS majors, including following the students into their second year. Beth deals head-on with one of the criticisms of Media Computation (e.g., no, it’s not a tour of all-things-Java — you won’t cover as many language features as you used to) and provides the answers that really matter (e.g., you retain more students, they learn more about problem-solving, and they do really well in the next course). I’ll quote her abstract here:
Previous reports of a media computation approach to teaching programming have either focused on pre-CS1 courses or courses for non-majors. We report the adoption of a media computation context in a majors’ CS1 course at a large, selective R1 institution in the U.S. The main goal was to increase retention of majors, but do so by replacing the traditional CS1 course directly (fully preparing students for the subsequent course). In this paper we provide an experience report for instructors interested in this approach. We compare a traditional CS1 with a media computation CS1 in terms of desired student competencies (analyzed via programming assignments and exams) and find the media computation approach to focus more on problem solving and less on language issues. In comparing student success (analyzed via pass rates and retention rates one year later) we find pass rates to be statistically significantly higher with media computation both for majors and for the class as a whole. We give examples of media computation exam questions and programming assignments and share student and instructor experiences including advice for the new instructor.