Posts tagged ‘undergraduates’

Shortage in the IT U.S. labor market? Or just a lack of graduates?

Is the shortage of STEM graduates a myth, as IEEE has been arguing recently?  Is the case for IT different from the case for STEM overall?

I found the analysis linked below interesting.  Most IT workers do not have an IT-related degree, while people with CS degrees are getting snapped up.  The suggestion is that there is no shortage of IT workers overall, because IT workers are drawn from many disciplines.  There may, however, be a shortage of IT workers who have IT training.

IT workers, who make up 59 percent of the entire STEM workforce, are predominantly drawn from fields outside of computer science and mathematics, if they have a college degree at all. Among the IT workforce, 36 percent do not have a four-year college degree; of those who do, only 38 percent have a computer science or math degree, and more than a third (36 percent) do not have a science or technology degree of any kind. Overall, less than a quarter (24 percent) of the IT workforce has at least a bachelor’s degree in computer science or math. Of the total IT workforce, two-thirds to three-quarters do not have a technology degree of any type (only 11 percent have an associate degree in any field).

Although computer science graduates are only one segment of the overall IT workforce, at 24 percent, they are the largest segment by degree (as shown in Figure F, they are 46 percent of college graduates entering the IT workforce, while nearly a third of graduates entering IT do not have a STEM degree). The trend in computer scientist supply is important as a source of trained graduates for IT employers, particularly for the higher-skilled positions and industries, but it is clear that the IT workforce actually draws from a pool of graduates with a broad range of degrees.
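The shares quoted above hang together arithmetically. As a quick sanity check (the percentages are taken from the EPI quote; the script is just a back-of-the-envelope illustration):

```python
# Sanity check of the EPI shares quoted above (figures from the quote).
no_degree = 0.36                 # share of IT workers without a four-year degree
with_degree = 1 - no_degree      # 0.64 hold a four-year degree
cs_math_of_degreed = 0.38        # of degree holders, share with a CS or math degree

cs_math_overall = with_degree * cs_math_of_degreed
print(f"{cs_math_overall:.0%} of the IT workforce holds a CS/math bachelor's")
```

Multiplying the two conditional shares recovers the quoted overall figure of 24 percent.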

via Guestworkers in the high-skill U.S. labor market: An analysis of supply, employment, and wage trends | Economic Policy Institute.

February 13, 2014 at 1:16 am 6 comments

Researchers cast doubt about early warning systems’ effect on retention

CS researchers have long been interested in what predicts success in introductory computing, e.g., the “camel has two humps” paper and the Bennedsen and Caspersen review of the literature.  Would knowing who might succeed or fail allow us to boost retention?  A new system at Purdue was claimed to do exactly that, but it turns out that it doesn’t.

Michael Caulfield, director of blended and networked learning at Washington State University at Vancouver, decided to take a closer look at Signals after Purdue in a September press release claimed taking two Signals-enabled courses increased students’ six-year graduation rate by 21.48 percent. Caulfield described Purdue research scientist Matt Pistilli’s statement that “two courses is the magic number” as “maddening.”

Comparing the retention rates of the 2007 and 2009 cohorts, Caulfield suggested much of what Purdue described as data analysis just measured how many courses students took. As Signals in 2008 left its pilot and more students across campus enrolled in at least one such course, Caulfield found the retention effect “disappeared completely.”

Put another way, “students are taking more … Signals courses because they persist, rather than persisting because they are taking more Signals courses,” Caulfield wrote.
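Caulfield’s point is a classic selection effect, and it is easy to see in a toy simulation. The numbers below are illustrative assumptions, not Purdue data: each student has a latent “persistence” trait that drives both how many courses they take (and hence how many Signals-enabled courses they happen to hit) and whether they are retained. Signals itself does nothing in this model, yet the “two or more Signals courses” group still shows higher retention:

```python
import random

random.seed(0)

# Toy model of Caulfield's argument: persistence drives both retention
# and Signals-course count; Signals itself has ZERO effect here.
students = []
for _ in range(10_000):
    persistence = random.random()                # latent trait in [0, 1)
    courses = 4 + int(persistence * 8)           # persistent students take more courses
    signals = sum(random.random() < 0.3          # ~30% of courses are Signals-enabled
                  for _ in range(courses))
    retained = random.random() < persistence     # retention depends ONLY on persistence
    students.append((signals, retained))

def retention(group):
    return sum(r for _, r in group) / len(group)

low = [s for s in students if s[0] < 2]
high = [s for s in students if s[0] >= 2]
print(f"<2 Signals courses: {retention(low):.2f}, >=2 Signals courses: {retention(high):.2f}")
```

Grouping students by Signals-course count manufactures a “treatment effect” out of nothing, which is exactly the confound Caulfield identified in the Purdue analysis.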

via Researchers cast doubt about early warning systems’ effect on retention | Inside Higher Ed.

December 2, 2013 at 1:41 am Leave a comment

Lessons Learned From First Year College MOOCs at Georgia Tech (and SJSU)

Karen Head has finished her series on how well the freshman-composition course fared (quoted and linked below), published in The Chronicle. The stats were disappointing: only 238 of the approximately 15K students who did the first homework finished the course. That’s far below the ~10% completion rates we saw in other MOOCs.

Georgia Tech also received funding from the Gates Foundation to trial a MOOC approach to a first-year college physics course.  I met with Mike Schatz last Friday to talk about his course.  The results were pretty similar: 20K students signed up, 3K students completed the first assignment, and only 170 finished.  Mike had an advantage that Karen didn’t — there are standardized tests for measuring the physics knowledge he was teaching, and he used those tests pre- and post-course.  Mike said the completers fell into three categories: those who came in with a lot of physics knowledge and ended with relatively little gain, those who came in with very little knowledge and made almost no progress, and a group of students who really did learn a lot.  They don’t yet know why, nor the relative sizes of those groups.
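For concreteness, here are the two completion funnels as percentages. The counts are those reported in this post and in Head’s article below (“started” for the physics course means students who completed the first assignment; for composition, students active in the course):

```python
# Completion funnels for the two Georgia Tech MOOCs described above.
# Counts are as reported in the post; this is only a quick calculation.
funnels = {
    "freshman composition": (14771, 238),  # active students, completers
    "intro physics":        (3000, 170),   # first-assignment submitters, completers
}
for course, (started, finished) in funnels.items():
    print(f"{course}: {finished / started:.1%} finished")
```

Both courses land in the low single digits, well under the ~10% completion rate often cited for MOOCs.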

The report from the San Jose State University MOOC experiment with a remedial mathematics course came out with the argument:

The researchers also say, perhaps unsurprisingly, that what mattered most was how hard students worked. “Measures of student effort trump all other variables tested for their relationships to student success,” they write, “including demographic descriptions of the students, course subject matter, and student use of support services.”

It’s not surprising, but it is relevant.  Students need to put in effort to learn.  New college students, especially first-generation college students (i.e., those whose parents never went to college), may not know how much effort is needed.  Who will be more effective at communicating that message about effort and motivating that effort — a video of a professor, or an in-person professor who might even learn your name?

As Gary May, our Dean of Engineering, recently wrote in an op-ed essay published in Inside Higher Ed, “The prospect of MOOCs replacing the physical college campus for undergraduates is dubious at best. Other target audiences are likely better-suited for MOOCs.”

On the freshman-composition MOOC, Karen Head writes:

No, the course was not a success. Of course, the data are problematic: Many people have observed that MOOCs often have terrible retention rates, but is retention an accurate measure of success? We had 21,934 students enrolled, 14,771 of whom were active in the course. Our 26 lecture videos were viewed 95,631 times. Students submitted work for evaluation 2,942 times and completed 19,571 peer assessments (the means by which their writing was evaluated). However, only 238 students received a completion certificate—meaning that they completed all assignments and received satisfactory scores.

Our team is now investigating why so few students completed the course, but we have some hypotheses. For one thing, students who did not complete all three major assignments could not pass the course. Many struggled with technology, especially in the final assignment, in which they were asked to create a video presentation based on a personal philosophy or belief. Some students, for privacy and cultural reasons, chose not to complete that assignment, even when we changed the guidelines to require only an audio presentation with visual elements. There were other students who joined the course after the second week; we cautioned them that they would not be able to pass it because there was no mechanism for doing peer review after an assignment’s due date had passed.

via Lessons Learned From a Freshman-Composition MOOC – Wired Campus – The Chronicle of Higher Education.

September 21, 2013 at 1:29 am 14 comments

Study finds choice of major most influenced by quality of intro professor: Mesh with Hewner

These results seem consistent with Mike Hewner’s thesis results.  If students like their intro course more, they are more likely to choose that major.  Students use how much they enjoy the course as a proxy for their affinity for the subject.

Undergraduates are significantly more likely to major in a field if they have an inspiring and caring faculty member in their introduction to the field. And they are equally likely to write off a field based on a single negative experience with a professor.

Those are the findings of a paper presented here during a session at the annual meeting of the American Sociological Association by Christopher G. Takacs, a graduate student in sociology at the University of Chicago, and Daniel F. Chambliss, a professor of sociology at Hamilton College. The paper is one part of How College Works, their forthcoming book from Harvard University Press.

via Study finds choice of major most influenced by quality of intro professor | Inside Higher Ed.

September 10, 2013 at 1:02 am 2 comments

CS/IT higher-ed degree production has declined since 2003

I couldn’t believe this when Mark Miller sent me the piece below.  “Maybe it’s true in aggregate, but I’m sure it’s not true at Georgia Tech.”  I checked.  And yes, it has *declined*.  In 2003 (summing Fall/Winter/Spring), the College of Computing had 367 graduates.  In 2012, we had 217.  Enrollments are up, but completions are down.

What does this mean for the argument that we have a labor shortage in computer science, so we need to introduce computing earlier (in K-12) to get more people into computing?  We have more people in computing (enrolled) today, and we’re producing fewer graduates.  Maybe our real problem is the productivity at the college level?

I shared these data with Rick Adrion, and he pointed out that degree output necessarily lags enrollment by 4-6 years.  Yes, 2012 is at a high for enrollment, but the students who graduated in 2012 came into school in 2008 or 2007, when we were still “flatlined.”  We’ll have to watch to see if output rises over the next few years.

Computer-related degree output at U.S. universities and colleges flatlined from 2006 to 2009 and has steadily increased in the years since. But the fact remains: Total degree production (associate’s and above) was lower by almost 14,000 degrees in 2012 than in 2003. The biggest overall decreases came in three programs — computer science, computer and information sciences, general, and computer and information sciences and support services, other.

This might reflect the surge in certifications and employer training programs, or the fact that some programmers can get jobs (or work independently) without a degree or formal training because their skills are in demand.

Of the 15 metros with the most computer and IT degrees in 2012, 10 saw decreases from their 2003 totals. That includes New York City (a 52% drop), San Francisco (55%), Atlanta (33%), Miami (32%), and Los Angeles (31%).

via In the Spotlight: Higher Ed Degree Output by Field and Metro |

August 19, 2013 at 1:19 am 4 comments

Colleges Fight to Retain Interest of STEM Majors: Computing, too

This is our problem in computing, too.  If students have never seen a computer science course before coming to college, they won’t know what hits them when they walk in the door.

Experts estimate that less than 40 percent of students who enter college as STEM majors actually wind up earning a degree in science, technology, engineering or math.

Those who don’t make it to the finish line typically change course early on. Just ask Mallory Hytes Hagan, better known as Miss America 2013.

Hagan enrolled at Auburn University as a biomedical science major, but transferred to the Fashion Institute of Technology a year later to pursue a career in cosmetics and fragrance marketing.

“I found out I wasn’t as prepared as I should be,” Hagan said during a panel discussion today at the 2013 U.S. News STEM Solutions conference in Austin. “I hit that first chem lab and thought, ‘Whoa. What’s going on?'”

via Colleges Fight to Retain Interest of STEM Majors – US News and World Report.

July 15, 2013 at 1:33 am 2 comments

Google Finally Admits That Its Infamous Brainteasers Were Completely Useless for Hiring

Google has found that being great at puzzles doesn’t lead to being a good employee.  It also found that GPAs aren’t good predictors, either.

Nathan Ensmenger could have told them that.  His history The Computer Boys Take Over shows how the association of academic mathematics and brainteasers with computer science hiring was mostly an accident.  Human resources people were desperate to find more programmers, and they used brainteasers and mathematics to filter candidates because that’s what the people who started in computing were good at.  Several studies found that those brainteasers and math problems were good predictors of success in academic CS classes — but they didn’t predict success at being a programmer!

How many people have been flunked out of computer science because they couldn’t pass Calculus — and yet knowing calculus doesn’t help with being a programmer at all?!?

You can stop counting how many golf balls will fit in a school bus now. Google has admitted that the headscratching questions it once used to quiz job applicants (How many piano tuners are there in the entire world? Why are manhole covers round?) were utterly useless as a predictor of who will be a good employee.

“We found that brainteasers are a complete waste of time,” Laszlo Bock, senior vice president of people operations at Google, told the New York Times. “They don’t predict anything. They serve primarily to make the interviewer feel smart.”

via Google Finally Admits That Its Infamous Brainteasers Were Completely Useless for Hiring – Adam Pasick – The Atlantic.

July 4, 2013 at 1:15 am 11 comments
