Posts tagged ‘undergraduates’
I found the analysis linked below interesting. Most IT workers do not have an IT-related degree. People with CS degrees are getting snapped up. The suggestion is that there’s not a shortage of IT workers, because IT workers are drawn from many disciplines. There may be a shortage of IT workers who have IT training.
IT workers, who make up 59 percent of the entire STEM workforce, are predominantly drawn from fields outside of computer science and mathematics, if they have a college degree at all. Among the IT workforce, 36 percent do not have a four-year college degree; of those who do, only 38 percent have a computer science or math degree, and more than a third (36 percent) do not have a science or technology degree of any kind. Overall, less than a quarter (24 percent) of the IT workforce has at least a bachelor’s degree in computer science or math. Of the total IT workforce, two-thirds to three-quarters do not have a technology degree of any type (only 11 percent have an associate degree in any field).
Although computer science graduates are only one segment of the overall IT workforce, at 24 percent, they are the largest segment by degree (as shown in Figure F, they are 46 percent of college graduates entering the IT workforce, while nearly a third of graduates entering IT do not have a STEM degree). The trend in computer scientist supply is important as a source of trained graduates for IT employers, particularly for the higher-skilled positions and industries, but it is clear that the IT workforce actually draws from a pool of graduates with a broad range of degrees.
CS researchers have long been interested in what predicts success in introductory computing, e.g., the “camel has two humps” paper, and the Bennedsen and Caspersen review of the literature. Would knowing who might succeed or fail allow us to boost retention? A new system at Purdue, Signals, was claimed to do exactly that, but it turns out that it doesn’t.
Michael Caulfield, director of blended and networked learning at Washington State University at Vancouver, decided to take a closer look at Signals after Purdue in a September press release claimed taking two Signals-enabled courses increased students’ six-year graduation rate by 21.48 percent. Caulfield described Purdue research scientist Matt Pistilli’s statement that “two courses is the magic number” as “maddening.”
Comparing the retention rates of the 2007 and 2009 cohorts, Caulfield suggested much of what Purdue described as data analysis just measured how many courses students took. As Signals in 2008 left its pilot and more students across campus enrolled in at least one such course, Caulfield found the retention effect “disappeared completely.”
Put another way, “students are taking more … Signals courses because they persist, rather than persisting because they are taking more Signals courses,” Caulfield wrote.
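To see Caulfield’s point concretely, here is a minimal simulation (all numbers made up for illustration) in which Signals has no causal effect at all, yet students who take two or more Signals courses still graduate at far higher rates, simply because persisters accumulate more courses of every kind:

```python
# A minimal sketch of Caulfield's selection-effect argument.
# Signals has *no* effect in this model; only persistence matters.
import random

random.seed(0)

def simulate_student():
    # Persistence is a pre-existing trait: semesters completed, 1..12.
    semesters = random.randint(1, 12)
    graduated = semesters >= 8            # graduation requires persistence
    courses = semesters * 4               # ~4 courses per semester
    # Suppose 25% of courses happen to be Signals-enabled.
    signals = sum(1 for _ in range(courses) if random.random() < 0.25)
    return signals, graduated

students = [simulate_student() for _ in range(100_000)]

def grad_rate(pred):
    group = [graduated for signals, graduated in students if pred(signals)]
    return sum(group) / len(group)

print(f"Grad rate, <2 Signals courses:  {grad_rate(lambda s: s < 2):.1%}")
print(f"Grad rate, >=2 Signals courses: {grad_rate(lambda s: s >= 2):.1%}")
# The >=2 group graduates at a much higher rate, even though Signals
# did nothing: persisters take more courses of every kind.
```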
Karen Head has finished her series in The Chronicle on how well the freshman-composition course fared (quoted and linked below). The stats were disappointing — only about 238 of the approximately 15K students who did the first homework finished the course, about 1.6%. That’s even less than the ~10% we saw completing other MOOCs.
Georgia Tech also received funding from the Gates Foundation to trial a MOOC approach to a first-year college physics course. I met with Mike Schatz last Friday to talk about his course. The results were pretty similar: 20K students signed up, 3K students completed the first assignment, and only 170 finished. Mike had an advantage that Karen didn’t — there are standardized tests for measuring the physics knowledge he was teaching, and he used those tests pre- and post-course. Mike said the completers fell into three categories: those who came in with a lot of physics knowledge and ended with relatively little gain, those who came in with very little knowledge and made almost no progress, and a group of students who really did learn a lot. They don’t yet know why, nor the relative percentages.
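The post doesn’t say which gain measure Mike used; a common one in physics education research is Hake’s normalized gain, which is worth sketching to make “gain” concrete. The three profiles below are hypothetical illustrations, not Mike’s data:

```python
# Hake's normalized gain, a standard pre/post measure in physics
# education research (shown only to make "gain" concrete; the post
# doesn't say this is the measure Mike used). Scores are percent
# correct on the same standardized test, before and after the course.
def normalized_gain(pre: float, post: float) -> float:
    """Fraction of the possible improvement actually achieved."""
    return (post - pre) / (100.0 - pre)

# Three hypothetical completer profiles, matching the categories above:
print(normalized_gain(pre=85, post=87))  # knew a lot, little gain: g ~ 0.13
print(normalized_gain(pre=20, post=22))  # knew little, little progress: g ~ 0.03
print(normalized_gain(pre=30, post=75))  # really did learn a lot: g ~ 0.64
```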
The researchers also say, perhaps unsurprisingly, that what mattered most was how hard students worked. “Measures of student effort trump all other variables tested for their relationships to student success,” they write, “including demographic descriptions of the students, course subject matter, and student use of support services.”
It’s not surprising, but it is relevant. Students need to put in effort to learn. New college students, especially first-generation college students (i.e., students whose parents never went to college), may not know how much effort is needed. Who will be most effective at communicating that message about effort and motivating that effort — a video of a professor, or an in-person professor who might even learn your name?
As Gary May, our Dean of Engineering, recently wrote in an op-ed essay published in Inside Higher Ed, “The prospect of MOOCs replacing the physical college campus for undergraduates is dubious at best. Other target audiences are likely better-suited for MOOCs.”
On the freshman-composition MOOC, Karen Head writes:
No, the course was not a success. Of course, the data are problematic: Many people have observed that MOOCs often have terrible retention rates, but is retention an accurate measure of success? We had 21,934 students enrolled, 14,771 of whom were active in the course. Our 26 lecture videos were viewed 95,631 times. Students submitted work for evaluation 2,942 times and completed 19,571 peer assessments (the means by which their writing was evaluated). However, only 238 students received a completion certificate—meaning that they completed all assignments and received satisfactory scores.
Our team is now investigating why so few students completed the course, but we have some hypotheses. For one thing, students who did not complete all three major assignments could not pass the course. Many struggled with technology, especially in the final assignment, in which they were asked to create a video presentation based on a personal philosophy or belief. Some students, for privacy and cultural reasons, chose not to complete that assignment, even when we changed the guidelines to require only an audio presentation with visual elements. There were other students who joined the course after the second week; we cautioned them that they would not be able to pass it because there was no mechanism for doing peer review after an assignment’s due date had passed.
These results seem consistent with Mike Hewner’s thesis results. If students like their intro course more, they are more likely to take that major. Students use how much they enjoy the course as a proxy for their affinity for the subject.
Undergraduates are significantly more likely to major in a field if they have an inspiring and caring faculty member in their introduction to the field. And they are equally likely to write off a field based on a single negative experience with a professor.
Those are the findings of a paper presented here during a session at the annual meeting of the American Sociological Association by Christopher G. Takacs, a graduate student in sociology at the University of Chicago, and Daniel F. Chambliss, a professor of sociology at Hamilton College. The paper is one part of How College Works, their forthcoming book from Harvard University Press.
I couldn’t believe this when Mark Miller sent the below to me. “Maybe it’s true in aggregate, but I’m sure it’s not true at Georgia Tech.” I checked. And yes, it has *declined*. In 2003 (summing Fall/Winter/Spring), the College of Computing had 367 graduates. In 2012, we had 217, a drop of about 41%. Enrollments are up, but completions are down.
What does this mean for the argument that we have a labor shortage in computer science, so we need to introduce computing earlier (in K-12) to get more people into computing? We have more people enrolled in computing today, and we’re producing fewer graduates. Maybe our real problem is productivity at the college level?
I shared these data with Rick Adrion, and he pointed out that degree output necessarily lags enrollment by 4-6 years. Yes, 2012 is at a high for enrollment, but the students who graduated in 2012 came into school in 2008 or 2007, when we were still “flatlined.” We’ll have to watch to see if output rises over the next few years.
Computer-related degree output at U.S. universities and colleges flatlined from 2006 to 2009 and has steadily increased in the years since. But the fact remains: total degree production (associate’s and above) was lower by almost 14,000 degrees in 2012 than in 2003. The biggest overall decreases came in three programs: computer science; computer and information sciences, general; and computer and information sciences and support services, other.
This might reflect the surge in certifications and employer training programs, or the fact that some programmers can get jobs (or work independently) without a degree or formal training because their skills are in demand.
Of the 15 metros with the most computer and IT degrees in 2012, 10 saw decreases from their 2003 totals. That includes New York City (a 52% drop), San Francisco (55%), Atlanta (33%), Miami (32%), and Los Angeles (31%).
This is our problem in computing, too. If students have never seen a computer science course before coming to college, they won’t know what hit them when they walk in the door.
Experts estimate that less than 40 percent of students who enter college as STEM majors actually wind up earning a degree in science, technology, engineering or math.
Those who don’t make it to the finish line typically change course early on. Just ask Mallory Hytes Hagan, better known as Miss America 2013.
Hagan enrolled at Auburn University as a biomedical science major, but transferred to the Fashion Institute of Technology a year later to pursue a career in cosmetics and fragrance marketing.
“I found out I wasn’t as prepared as I should be,” Hagan said during a panel discussion today at the 2013 U.S. News STEM Solutions conference in Austin. “I hit that first chem lab and thought, ‘Whoa. What’s going on?’”
Google has found that being great at puzzles doesn’t lead to being a good employee. They also found that GPAs aren’t good predictors either.
Nathan Ensmenger could have told them that. His history The Computer Boys Take Over shows how the association of academic mathematics and brainteasers with computer science hiring was mostly an accident. Human resources people were desperate to find more programmers, and they used brainteasers and mathematics to filter candidates because that’s what the people who started in computing happened to be good at. Several studies found that those brainteasers and math problems were good predictors of success in academic CS classes — but they didn’t predict success at being a programmer!
How many people have been flunked out of computer science because they couldn’t pass Calculus — and yet knowing calculus doesn’t help with being a programmer at all?!?
You can stop counting how many golf balls will fit in a school bus now. Google has admitted that the headscratching questions it once used to quiz job applicants (How many piano tuners are there in the entire world? Why are manhole covers round?) were utterly useless as a predictor of who will be a good employee. “We found that brainteasers are a complete waste of time,” Laszlo Bock, senior vice president of people operations at Google, told the New York Times. “They don’t predict anything. They serve primarily to make the interviewer feel smart.”
The growth of departments in the Taulbee report is astonishing, but what Computerworld got wrong is calling it “computer science enrollments,” as opposed to “computer science enrollments in PhD-granting institutions.” The Taulbee report doesn’t cover all CS departments, and that’s why the new NDC survey has been launched.
The Taulbee report also indicates that the percent of women graduating with a Bachelors in CS has risen slightly, while the Computer Engineering percentage has dropped. Both are well south of 15%, though — a depressingly small percentage.
The number of new undergraduate computing majors in U.S. computer science departments increased more than 29% last year, a pace called “astonishing” by the Computing Research Association.
The increase was the fifth straight annual computer science enrollment gain, according to the CRA’s annual survey of computer science departments at Ph.D.-granting institutions.
In the context of David Notkin’s receipt of the 2013 Computing Research Association A. Nico Habermann Award for outstanding contributions to supporting underrepresented groups in the computing research community, Lecia Barker of the National Center for Women & Information Technology (we hosted their Washington State Awards for Aspirations in Computing last weekend) sent us the chart to the right, comparing UW CSE’s performance to the national average in granting bachelor’s degrees to women.
It was really great to see these results in the U. Washington CSE News, but it got me to wondering: Did all the big R1 institutions rise like this, or was this unusual at UW? I decided to generate the GT data, too.
I went to the GT Self-Service Institutional Research page and downloaded the degrees granted by college and gender for each year from 2005 through 2011 (all separate spreadsheets). I added up Fall, Spring, and Summer graduates for each year, and computed the female percentage. Here are all three data sets graphed. GT hasn’t risen as dramatically as UW has in the last two years (so UW really has done something remarkable!), but GT’s rise from far below the national average in 2005 to above the national average in 2009 is quite interesting.
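For anyone who wants to replicate this kind of bookkeeping, here is a sketch of the computation, assuming hypothetical per-year CSV exports with made-up column names (the actual Institutional Research downloads were separate spreadsheets with their own layout):

```python
# A sketch of the bookkeeping described above. Filenames and columns
# ("degrees_2005.csv" with college, term, gender, degrees) are
# hypothetical stand-ins for the real Institutional Research exports.
import pandas as pd

pct_female = {}
for year in range(2005, 2012):
    df = pd.read_csv(f"degrees_{year}.csv")            # hypothetical file
    cc = df[df["college"] == "Computing"]              # College of Computing rows
    by_gender = cc.groupby("gender")["degrees"].sum()  # sums Fall + Spring + Summer
    pct_female[year] = 100.0 * by_gender.get("F", 0) / by_gender.sum()

for year, pct in sorted(pct_female.items()):
    print(f"{year}: {pct:.1f}% female")
```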
Why is UW having such great results? Ed Lazowska claimed at SIGCSE 2013 that it’s because they have only a single course sequence (“one course does fit all,” he insisted) and because they have a large number of female TAs. I don’t believe that. I predict that more courses would attract more students (see the “alternative paths” recommendation from Margolis and Fisher), and that female TAs support retention, not recruitment. I suspect that UW’s better results have more to do with the fact that GT’s students declare their major on their application form, while UW students have to apply to enter the CSE program. Thus, (a) UW has the chance to attract students on campus, and (b) they have more applications than slots, so they can tune their acceptances to get the demographics that they value.
The last paragraph of this is interesting. Yes, Engineering and Computer Science (in particular) are booming, but not everywhere, and it’s not evident to everyone. I was just at Tufts on Monday, where some Engineering students were asking me if Computer Science was growing in enrollment anywhere. Well, there’s Stanford…
Now? According to three stats buried in a press release from the university’s engineering school, Computer Science is the most popular major at Stanford. More students are enrolled in it than ever before (even more than at the dot-com boom’s height in 2000-2001). And more than 90% of Stanford undergrads take a computer science course before they graduate.
Stanford is Stanford, and its stats aren’t necessarily indicative of academia at large: Countrywide, the most popular major is business. But the school’s computer-heavy numbers reflect its existence, both as a member of what candid college administrators call the Big Four (the other three are Princeton, Harvard and Yale), and as a school nestled close to Silicon Valley’s elite.
In a lengthy feature from earlier this year, the New Yorker’s Ken Auletta revealed that, even beyond Stanford’s CS department, “A quarter of all undergraduates and more than 50% of graduate students [at Stanford] are engineering majors. At Harvard, the figures are 4 and 10%; at Yale, they’re 5 and 8%.”
Mike Hewner passed his PhD dissertation defense on Friday. There are just some dissertation tweaks and bureaucracy to go. In the process of the defense, several really interesting implications of his theory got spelled out, and they relate to some of the comments made in response to my post on his dissertation last week.
Early choice is not early decision: In response to a question about when students should decide their specializations (should it be earlier in the degree or later in the degree), Mike said, “Making a choice early doesn’t force making a decision early.” We then spent some time unpacking that.
In Mike’s theory, students spend time exploring until they face a differential in enjoyment between classes, which they interpret as an affinity for one topic over another. Students use this process to decide on a major, or to decide on a specialization area within a major. Once they’ve made a decision, they are more committed, and are willing to go through less-enjoyable classes in pursuit of a goal that they have now decided on. Forcing students to make a choice early (between majors or specializations) doesn’t change this process — they don’t become committed to a major or specialization any earlier. Forcing the choice early may instead mean delaying graduation, when students finally decide on something else and become committed to that other path.
Job as ill-defined goal: One of the surprising and somewhat contradictory ideas in Mike’s thesis is that, while US students today may be more driven to get a college education in order to get a better job or a middle-class lifestyle, they don’t necessarily know what that job entails. Students that Mike interviewed could rarely describe what kind of job they wanted, or if they did, it was vague (“Work for Google”), and they couldn’t explain what that job would require or what classes they should take to prepare for it.
When we were first developing Threads, we talked about helping students to describe the kind of job they wanted, and then we could advise them to pick the Threads that would help them achieve that career. But Mike’s theory says that that’s backwards. Students don’t know what kind of job they want. They use experiences in the classes to help them decide what kind of work they will enjoy.
Hewner’s theory is constructivist. Mike was asked, “How would you advise a student such that they could figure out the best Thread for themselves?” Mike’s response was that students would need to do something that was authentic and representative of work within that Thread — which is hard to do in an accessible manner for students who don’t know much about that Thread yet. You can’t just tell students about the Threads or about the jobs that fit into the Threads. It’s unlikely that students will be able to successfully predict if they would enjoy the work in the Thread based on a description.
In some sense, Mike’s theory is intensely constructivist. Mike’s students won’t decide on a major, specialization or career choice until they experience the work of that major, specialization, or career choice, and then decide if they enjoy it or not for themselves. If decisions are made based on enjoyment, you can’t tell someone that they’d enjoy the experience. They have to figure it out for themselves.
Interesting new initiative between the White House and NSF to increase the number of graduates in computing and engineering by focusing on retention. (I strongly agree, because retention is where we’ve been focusing our attention.)
This letter announces a cooperative activity between NSF and members of the Jobs Council’s High Tech Education working group, led by Intel and GE, to stimulate comprehensive action at universities and colleges to help increase the annual number of new B.S. graduates in engineering and computer science by 10,000. Proposals for support of projects would be submitted under a special funding focus, “Graduate 10K+,” within the NSF Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP); see http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5488.
Studies have shown that retention during the critical first two years in a student’s major, or along the path towards declaration of a major, is an excellent predictor of eventual graduation with a STEM degree. Recognizing that the correlation between retention and graduation is particularly strong for students in engineering and computer science, we invite proposals from institutions that can demonstrate their commitment to: (i) significant improvement in first- and second-year retention rates in these particular majors, beyond current levels; and (ii) sustained, institutionally-embraced practices (e.g., http://www.asee.org/retention-project) that lead, ultimately, to increased graduation. Jobs Council members anticipate providing support for this special funding focus, with the number of awards to be made contingent on the availability of funds.
ICER 2012 in Auckland, New Zealand, was notable for having the highest number of submitted papers of any ICER (this was the 8th year), and almost as many attendees as last year, despite being a long trip for all the US and European delegates. In the end, there were 13 research papers accepted, 8 discussion papers (shorter papers, with shorter presentations and longer discussion periods), 16 attendees at the Doctoral Consortium, and 19 Lightning Talks accepted. Despite all the successes, I’m worried whether ICER can meet the global needs for computing education research.
The first talk of the conference, from Ian Utting and the BlueJ crew, ended up winning the People’s Choice award, voted on by the delegates (it used to be called the “Fool’s Award,” but has been renamed the “John Henry Award,” for a paper with great “potential to be transformative”). BlueJ, probably the most successful and popular pedagogical Java IDE, is going to be outfitted in the Spring with event logging. We’ll know what students are doing in BlueJ, at a large scale (probably about 1 TB/year). All of that data is going to get stored (anonymously) for use by researchers. The interesting discussion point is: What are the web-scale questions in CS education?
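What might one such logged event look like? I haven’t seen their schema, so the record below is purely a guess at the kind of data IDE instrumentation collects, not their actual format:

```python
# A hypothetical example of one anonymized IDE event record. This is
# NOT BlueJ's actual logging schema, just an illustration of the kind
# of data such instrumentation might collect.
import json

event = {
    "session": "a3f9c2e1",                # anonymized id, no student name
    "timestamp": "2013-03-04T14:21:07Z",
    "event": "compile",
    "result": "error",
    "error_kind": "missing_semicolon",
}
print(json.dumps(event, indent=2))
# At ~1 TB/year across thousands of classrooms, questions like "which
# error sequences predict dropping the course?" become answerable.
```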
The Chair’s Award (new to ICER, kind of a best-paper award) was won by Colleen Lewis, for her detailed explanation of how a middle schooler at a summer Scratch camp did his debugging. In a sense, Colleen (who just graduated from Berkeley, and just started at Harvey Mudd College) was answering a comment that Ray Lister made, one often quoted during the conference: that students sometimes demonstrate “bizarro programming behaviors.” Colleen carefully reconstructed the activity of the student and pieced together a story of how the student thought through the problem, and how his behavior did make sense.
I tweeted some of my favorite one-liners from the conference. I’ll mention just a couple highlights here.
- Quintin Cutts presented an intriguing paper suggesting a new way of looking at questions that spur learning, with data drawn from Beth Simon’s CS:Principles course for non-majors. The idea is called the Abstraction Transition Taxonomy, and it’s about how we talk about problems (natural language), how we talk about CS (“arrays”), and how we talk in code (e.g., “a[i]”). They hypothesize that questions that lead to transitions between levels may be the most successful scaffolding for novice learning (see the small sketch after this list). So, how do we test that hypothesis?
- My former students, Drs. Brian Dorn and Allison Elliott Tew, are working on a new validated measure of attitudes towards computing, based on similar instruments developed for physics and biology. They presented their validation scheme at ICER. I’ve already read a draft of a future paper where they’re actually using the instrument, and I think that this is going to be a big deal.
- Lauren’s subgoal paper drew some oohs when I showed the results, a few shakes of heads (I don’t think everyone believed it), and some challenging questions. “Why aren’t you using this in your intro classes?” asked one questioner. “Or your advanced classes?” asked another. Yup. Good questions.
- One of the lightning talks had an interesting idea: form pairs for pair-programming based on perception of efficacy. Put non-confident students together! The idea is that low self-efficacy feeds on a vacuum of evidence: “I’m doing worse than everyone else. I just know.” Having a partner with low confidence provides evidence that you’re not alone in struggling. No data were presented, but it’s an intriguing idea.
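Here is the small sketch promised above: the same tiny task expressed at each of the three ATT levels (the task itself is made up, not from Quintin’s paper):

```python
# The three Abstraction Transition Taxonomy levels, illustrated with a
# made-up task:
# 1. English (the problem):  "What is the highest quiz score?"
# 2. CS speak (the plan):    "Traverse the array, tracking the maximum."
# 3. Code (the solution):
scores = [72, 91, 85, 64]
highest = scores[0]
for i in range(1, len(scores)):
    if scores[i] > highest:    # the "a[i]" level: talking in code
        highest = scores[i]
print(highest)  # 91
```

An ATT-style question asks students to move between the levels, e.g., “point to the line that implements ‘tracking the maximum.’”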
One of my mentors here at Georgia Tech is Jim Foley, who recommends structuring research around BHAGs — Big, Hairy, Audacious Goals. The BHAG for computing education is teaching computer science in all schools. What’s particularly scary about this BHAG is that it’s already happening. The US has the CS10K effort. The Computing At School effort is going strong in the UK. New Zealand and Denmark have both instituted new nationwide CS curricula in the last couple of years. There is an enormous need for research on how to help teachers learn to teach computer science, on what the challenges are in teaching computer science to school children (e.g., students who have not declared a major of computer science, who are not necessarily motivated to learn computing for a career), and on evaluations of successful models for supporting learning by both teachers and school children. Maybe we’re just going to do it, and figure out later what works. But maybe there’s a better way.
How much of ICER 2012 research could possibly inform these efforts? There’s Colleen’s interesting paper on a pre-teen debugging, and there’s Briana’s work on professional development efforts. From my read of the papers, that’s pretty much it for work directly relevant to computing-at-schools/CS10K. There were a few papers that addressed non-majors (like Quintin’s, and our statewide survey paper), but at the undergraduate level. The rest of ICER’s papers were seeking to understand and teach undergraduate CS majors.
It’s important to understand undergraduate CS majors and to improve their understanding. My personal research agenda is more on the latter than the former — it’s more important to me to learn how to teach better than to understand the effects of our current teaching, though teaching might well be better if we built on everything that we know about it. But I do get the value of understanding understanding (or the lack of understanding, or even misconceptions). There are far more high school teachers and schoolchildren than there are undergraduate majors, and they’re different. The oncoming problems are much bigger than the ones we’re currently facing.
How do we inform the broader need for research on computing education? Is ICER the place to look for that research? Or will ICER (and SIGCSE) always be a mostly undergraduate-oriented conference (and organization)? If not ICER and SIGCSE, where should we look? I was a reviewer for AERA’s new engineering and computing education division, and while I was excited about those papers, they’re coming at the problems almost entirely from the education perspective. There was little from ICER and the computing education research community. The problems that we need solved will require work from both communities/disciplines/fields. How do we get there?
There’s a meme going around my College these days, about the 10 questions you should never ask your professor (linked below). Most of them are spot-on (e.g., “Did we do anything important while I was out?”). I disagreed with the one I quote below.
One of our problems in computer science is that we teach things for reasons that even we don’t know. Why do we teach how hash tables are constructed? Does everybody need to know that? I actually like the idea of teaching everybody about hash functions, but it’s a valid question, and one that we rarely answer to ourselves and definitely need to answer to our students.
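For the non-CS readers, here is a toy version of the content in question, so it’s concrete what we’re debating teaching (illustrative only, nothing like how production hash tables actually work):

```python
# A toy hash table: the kind of "how it's constructed" content whose
# audience we rarely justify. Illustrative only.
def toy_hash(key: str, num_buckets: int = 8) -> int:
    """Map a string to a bucket index by summing character codes."""
    return sum(ord(ch) for ch in key) % num_buckets

buckets = [[] for _ in range(8)]
for name, phone in [("ada", "555-0100"), ("alan", "555-0199")]:
    buckets[toy_hash(name)].append((name, phone))

def lookup(key: str):
    for k, v in buckets[toy_hash(key)]:   # only search one bucket
        if k == key:
            return v

print(lookup("alan"))  # 555-0199
```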
Why we’re teaching what we’re teaching is a critical question to answer for broadening participation, because we have to explain to under-represented minorities why it’s worth sticking with CS. Even more important for me is explaining this to non-majors, and in turn, to our colleagues in other departments. Computing is a fundamental 21st century literacy, and we have to explain why it’s important for everyone to learn. “Stuck in the Shallow End” suggests that making computing education ubiquitously available can help to broaden participation, but we can only get it ubiquitously available by showing that it’s worth it.
I’m back from New Zealand and ICER: today, yesterday, tomorrow — days get confused crossing a dateline. (We landed in both Sydney and Los Angeles at 10:30 am on 13 September.) I spent several hours of the trip reading Mike Hewner’s complete dissertation draft. Mike has been asking the question, “How do CS majors define the field of computer science, and how do their misconceptions lead to unproductive educational decisions?” He did a Grounded Theory analysis with 37 interviews (and when I tell this to people who have done Grounded Theory, their jaws drop — 37 interviews is a gargantuan number of interviews for Grounded Theory) at three different institutions. One of his findings is that even CS majors really have no idea what’s going on in a class before they get there. The students’ ability to predict the content of future courses, even courses that they’d have to take, was appallingly bad. Even our own majors don’t know why they’re taking what they’re taking, or what will be in the class when they go to take it.
We will have to tell them.
“Why do we have to learn this?” In some instances, this is a valid question. If you are studying medical assisting, asking why you have to learn a procedure can be a gateway to a better question about when such a procedure would be necessary. But it should be asked that way–as a specific question, not a general one. In other situations, like my history classes, the answer is more complicated and has to do with the composition of a basic liberal arts body of knowledge. But sometimes, a student asks this because they do not think they should have to take a class and want a specific rationale. In that case, I respond, “Please consult your course catalog and program description. If you don’t already know the answer to that question, you should talk to your advisor about whether or not this is the major for you.”
I hadn’t heard about this form of cheating in MOOCs. I knew that answers got passed around (as Dave Patterson reported in June), but I was surprised to hear that students were creating multiple accounts in order to re-take exams. That changes one’s perception of the 100K registered users. The question raised here in Dick Lipton’s blog is: Is this “cheating,” or simply “mastering” the material?
Here is what happens next. Bob signs up for the course multiple times: let’s call them Bob1, Bob2, Bob3, Bob4. Recall there is no cost to Bob for signing up multiple times—none. So why not sign up several times…
Bob’s insight is simple: he now can take the course multiple times and keep only the best grade. Say there is a graded exam. Bob1 takes the exam and gets a 70% on it. Not bad, but not great either. So Bob sees what he got wrong, sees what questions they threw at him. He studies some more, then takes the exam again as Bob2. Of course the exam is different, since all these on-line systems do some randomization. However, the exam covers the same material, so now Bob2 gets, say, an 85%.
Perhaps Bob is satisfied. But if he is really motivated he studies some more, retakes the exam, and now Bob3 gets 90%. You guessed right. He goes on and takes it one more time as Bob4 who—surprise—gets a perfect 100%.
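Note that even if Bob never studied between attempts, keeping only the best of k tries would inflate his grade. A small simulation, with a made-up score distribution:

```python
# Even with no learning between attempts, "keep the best of k tries"
# inflates scores. The N(70, 10) score distribution is made up.
import random

random.seed(0)

def one_attempt():
    # Suppose a student's single-attempt score is roughly 70 +/- 10.
    return min(100, max(0, random.gauss(70, 10)))

def best_of(k):
    return max(one_attempt() for _ in range(k))

trials = 10_000
for k in (1, 2, 4):
    avg = sum(best_of(k) for _ in range(trials)) / trials
    print(f"best of {k} attempts: average {avg:.1f}%")
# Typical output: ~70%, ~76%, ~80% -- before any extra studying at all.
```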