Archive for May, 2011
This new report from the Pew survey doesn’t seem to mesh well with other surveys of college students. For example, the UCLA Higher Education Research Institute says that two-thirds of new frosh in 2010 were making education decisions based on economics, and 56.5% of frosh in 2009 were going to college because “graduates get good jobs.” Janet Donald has reported studies saying that the percentage of students who are going to college purely to get a job is over 70% today. The Pew study’s figure of 47% of students aiming just for a job (not also an education) feels low to me.
Perhaps it’s because assessing the value of a college education is not a hard-and-fast calculation. Sure, diplomas help Americans land better jobs and earn higher salaries, and one can estimate the financial return on those investments. Yet the perceived benefits of attending college go well beyond dollars.
In the Pew survey, all respondents were asked about the “main purpose” of college. Forty-seven percent said “to teach knowledge and skills that can be used in the workplace,” 39 percent said “to help an individual grow personally and intellectually,” and 12 percent said “both equally.”
These findings echo the words graduates often use to describe the benefits of their college experiences. Typically, those benefits are intangible, immeasurable, and untethered to narrow questions about what a particular degree “got” them.
I attended a Pearson-sponsored event last week in Mexico City for university faculty. The range of subject matter was pretty large, with speakers from Education, Operations Research, Mathematics, and Computer Science.
I was particularly interested by the talk from Julio Pimienta, a Cuban Education scholar now at UNAM. I heard the talk through simultaneous translation, which was an interesting experience, especially when the translation conflicted with the slides I saw before me. I spent a good bit of time with Google Translate to understand the slides separately from the translation.
He spent a long time on this one slide.
Pimienta asked the audience how they felt about the new national requirements to focus on content standards, and there was a mixed response, which he interpreted as, “Everyone wants students to learn the content, but we’re not convinced that you get there by starting from the content.” He then put up this slide. He said that our traditional educational model is that students learn the content (“contenidos”) which is organized and presented by the teacher (“transformar,” which the translator sometimes translated as “organized” and other times as “presented”), for the students to learn (“aprendizaje”) in such a way that they can use it in new contexts (“contexto”). The problem is that students don’t learn to apply the knowledge in new contexts in this flow.
He says that we now focus on reversing that flow. We provide students with interesting and motivating problem contexts, which encourage them to learn, and they have to organize the content that they learn in order to solve the problems. I’ve heard variations on this story before — it’s like Ann Brown‘s argument, and the arguments for Learning by Design and for constructionism. What’s new for me was seeing this as a “reverse flow,” that we want the same content to be learned, but through a context-driven mechanism.
Brian Dorn’s dissertation suggests that this reverse flow doesn’t happen on its own, but that it can be made to work. It’s hard to imagine a better context than real professionals who discover that they need computer science knowledge, and try to teach themselves with on-line materials. Unfortunately, he found that they only get part way there, and inefficiently. Brian shows that, by creating case materials appropriately, we can improve the efficiency and get more significant learning. He got his students to reverse the flow.
I’ve always thought about Brian’s work as mostly speaking to the non-traditional learner, the professional learning CS in-the-wild. But now I’m realizing that his work also speaks to how to make problem/project-based learning work. How do we make the content available and organizable by the students? The answer can’t be just lots of recorded lectures and other educational videos — that content is available to Brian’s subjects, too, but it’s not really helping. Here’s an interesting research problem: How do we provide learning resources such that students can find the content that matches their context, and figure out how to organize and apply it? Can we do it in such a way that is useful to a wide range of learners? Wikipedia and MIT Open Courseware aren’t there yet. I do agree with Pimienta that the reverse flow is more likely to lead to deep, transferable learning. I think we’re good at providing contexts. I think we need to work harder at getting to the student-organized content.
Money isn’t enough to improve schools. That’s probably obvious, though it’s interesting to see that somebody did the work to provide evidence. When I see what other countries do to improve their education quality, I realize how much of the education picture has to do with culture and respect, and money doesn’t help with that.
In the first-of-its-kind analysis of the billionaires’ efforts, NEWSWEEK and the Center for Public Integrity crunched the numbers on graduation rates and test scores in 10 major urban districts—from New York City to Oakland—which got windfalls from these four top philanthropists.
The results, though mixed, are dispiriting proof that money alone can’t repair the desperate state of urban education. For all the millions spent on reforms, nine of the 10 school districts studied substantially trailed their state’s proficiency and graduation rates—often by 10 points or more. That’s not to say that the urban districts didn’t make gains.
The good news is that many did improve, and at a rate faster than their states 60 percent of the time—proof that the billionaires made some solid bets. But those upticks weren’t enough to erase the deep gulf between poor, inner-city schools, where the big givers focused, and their suburban and rural counterparts.
This really speaks to the discussion we were having the other day about traditional lectures vs. interaction, and in particular, open learning resources — just how valuable are the Khan Academy videos for learning? This video explains that highlighting misconceptions is what really helps with learning, which is something you get with approaches like peer instruction. That approach is hard to use in computer science education, because we know so little about misconceptions and prior conceptions of computing.
It is a common view that “if only someone could break this down and explain it clearly enough, more students would understand.” Khan Academy is a great example of this approach, with its clear, concise videos on science. However, it is debatable whether they really work. Research has shown that these types of videos may be positively received by students: they feel like they are learning and become more confident in their answers, but tests reveal they haven’t learned anything. The apparent reason for the discrepancy is misconceptions. Students have existing ideas about scientific phenomena before viewing a video. If the video presents scientific concepts in a clear, well-illustrated way, students believe they are learning, but they do not engage with the media on a deep enough level to realize that what is presented differs from their prior knowledge. There is hope, however. Presenting students’ common misconceptions in a video alongside the scientific concepts has been shown to increase learning by increasing the amount of mental effort students expend while watching it.
A new report says that the greatest potential for growing higher education is in the bottom half of the US economy. I found the graph below from the report pretty startling — I had heard that the US already had a very high percentage of higher-education degrees among its citizens, but this graph suggests that we’ve got quite a long way to go.
President Obama has set a goal for the United States to have the highest proportion of college graduates in the world by 2020.
The report, “Developing 20/20 Vision on the 2020 Degree Attainment Goal: The Threat of Income-Based Inequality in Education,” argues that the “nation’s failure to keep pace with other countries in educational attainment among 25- to 34-year-old adults can be largely traced to our inability to adequately educate individuals from families in the bottom half of the income distribution.”
If all Americans attained bachelor’s degrees by age 24 at the same rate as individuals from the top half of the income distribution, the United States would now have the highest share of bachelor’s degree recipients in the world, the report says.
This is an older (a year old) NYTimes piece, but wow, what a cool one! Really interesting insights into study skills and misconceptions about how studying works. My favorite part, though, is how it addresses one of the most prevalent claims I hear: that there are “learning styles.” There aren’t. They don’t measurably exist.
For instance, instead of sticking to one study location, simply alternating the room where a person studies improves retention. So does studying distinct but related skills or concepts in one sitting, rather than focusing intensely on a single thing.
“We have known these principles for some time, and it’s intriguing that schools don’t pick them up, or that people don’t learn them by trial and error,” said Robert A. Bjork, a psychologist at the University of California, Los Angeles. “Instead, we walk around with all sorts of unexamined beliefs about what works that are mistaken.”
Take the notion that children have specific learning styles, that some are “visual learners” and others are auditory; some are “left-brain” students, others “right-brain.” In a recent review of the relevant research, published in the journal Psychological Science in the Public Interest, a team of psychologists found almost zero support for such ideas. “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the researchers concluded.
But maybe it’s not the babies. A study of more than 3,700 female engineers carried out by Nadya Fouad and Romila Singh at the University of Wisconsin–Milwaukee revealed that only a quarter left engineering because of family reasons (http://bit.ly/gA79xQ). The remaining three-quarters quit their jobs or left the field entirely because they did not like the workplace culture, or were unhappy with other aspects of the job.
While blatant gender discrimination in the workplace is rare, the subtle, everyday instances of bias that women experience create a snowball effect that, over time, can be overwhelmingly off-putting.
More than half of female scientists have experienced gender bias, according to a 2010 survey by the American Association for the Advancement of Science for L’Oréal. Examples include being ignored in meetings, students calling you Mrs. instead of Dr. or Professor, receiving unwanted comments on your appearance, and hearing that you were hired not on merit, but because you’re a woman.
The best part of this post from Hake comes at the end, where he cites six published accounts of dramatic improvement in learning with dramatic decline in student teaching evaluations. Administrators rely heavily on student evaluations of teaching, but the reality is, they don’t correlate with good teaching. Students don’t necessarily “like” teaching that makes them think.
Unfortunately for my academic career, I gradually caught on to the fact that students’ conceptual understanding of physics was not substantively increased by traditional pedagogy. As described in Hake (1987, 1991, 1992, 2002c) and Tobias & Hake (1988), I converted to the “Arons Advocated Method” [Hake (2004c)] of “interactive engagement.” This resulted in average normalized gains on the “Mechanics Diagnostic” test or “Force Concept Inventory” that ranged from 0.54 to 0.65 [Hake (1998b), Table 1c] as compared to the gain of about 0.2 typically obtained in traditional introductory mechanics courses [Hake (1998a)].
But my EPA’s for “overall evaluation of professor,” sometimes dipped to as low as 1.67 (C-), and never returned to the 3.38 high that I had garnered by using traditional ineffective methods of introductory physics instruction. My department chair and his executive committee, convinced by the likes of Peter Cohen (1981, 1990) that SET’s are valid measures of the cognitive impact of introductory courses, took a very dim view of both my teaching and my educational activities.
In our Media Computation classes, we often ask the question, “How could you tell if someone faked a picture in the paper?” Here’s a high-profile example — could you come up with a computational mechanism of recognizing this fake?
A Brooklyn Yiddish-language newspaper airbrushed Secretary of State Hillary Clinton from the White House’s official Osama bin Laden “war room” photograph because editors decided that their Hasidic readership would be offended by a photograph of a woman.
Der Tzeitung ran a copy of the iconic photo of President Obama surrounded by his advisers during last Sunday night’s raid on the terror mastermind’s headquarters — but a photo editor removed Clinton from the historic image because of the paper’s “long-standing editorial policy” to omit women from photos.
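One simple computational check, in the spirit of a Media Computation exercise, is to compare the published photo against the original, pixel by pixel, and flag the regions that differ. This is only a sketch, assuming the Pillow library is installed and that you have both images on hand; the file paths are hypothetical.

```python
from PIL import Image, ImageChops

def changed_regions(original_path, suspect_path, threshold=30, block=16):
    """Return the (row, col) indices of block-by-block regions where the
    suspect image differs noticeably from the original."""
    a = Image.open(original_path).convert("RGB")
    b = Image.open(suspect_path).convert("RGB")
    if a.size != b.size:
        raise ValueError("images must be the same size to compare")
    diff = ImageChops.difference(a, b)   # per-pixel absolute difference
    px = diff.load()
    w, h = diff.size
    flagged = set()
    for y in range(h):
        for x in range(w):
            # If any color channel changed by more than the threshold,
            # flag the block this pixel belongs to.
            if max(px[x, y]) > threshold:
                flagged.add((y // block, x // block))
    return sorted(flagged)
```

In the Clinton example, the airbrushed area would show up as a cluster of contiguous flagged blocks, which is a strong hint the image was edited. The harder (and more interesting) research question is detecting a fake without access to the original.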
Do come to C5! You can skip my talk, since readers of this blog already know pretty much everything I’d have to say. I’m looking forward to the conference!
The 10th International Conference on Creating,
Connecting and Collaborating through Computing (C5 2012)
18-20 January 2012
Playa Vista, CA USA
Hosted by the USC Institute for Creative Technologies
Computers, networks, and other forms of technology are pervasive in our
information-based society. Unfortunately, most users of this technology use
it for passive consumption of information and entertainment. To evolve into a
true knowledge society it is critical that we transform computer-based human
activities to engage users in the active process of creating, connecting, and collaborating.
The C5 conference is for anyone interested in the use of computers as tools to
develop and enable user-oriented creation, connection, and collaboration
processes. Researchers, developers, educators and users come together at C5
to present new and ongoing work and to discuss future directions for creative
computing and multimedia environments. We welcome the submission of
theoretical and technical papers, practitioner/experience reports, and papers
that bridge the gap between theory and practice or that encourage inter- and cross-disciplinary work.
=== Keynote Speakers ===
“Helping Everyone Create with Computing”
Dr. Mark Guzdial
Georgia Institute of Technology
“C2P3: Creating and Controlling Personalisation
and Privacy in Pervasive Digital Ecosystems”
Dr. Judy Kay
University of Sydney
=== Topics ===
C5 invites submissions of full papers in (but not limited to) the following areas:
– Technology-enhanced human-computer and human-human interaction
– Virtual worlds and immersive environments
– Educational environments for classroom, field work and online/distance learning
– New technologies for literature, music and the visual arts
– Technologies for collaborative and self-empowered learning
– Multimedia authoring environments
– Gaming/entertainment platforms, virtual characters, and software agents
– Social networks and social networking
– Novel programming paradigms and languages for implementors
– Scripting or visual paradigms and languages for end-users
– Creating and maintaining online communities
– Tools for creating/managing online services/environments
– Distributed and collaborative working
– Social and cultural implications of new technologies
Papers should be submitted electronically in PDF format via EasyChair at:
Submissions must be written in English (the official language of the
conference) and must not exceed eight (8) pages. They should use the IEEE
10-point two-column format, templates for which are available at:
=== Proceedings ===
A preliminary version of the proceedings will be distributed during the
conference. The formal version of the proceedings will be published by the
Conference Publishing Services (CPS) and sent to authors after the conference.
For each accepted paper, at least one of the authors needs to attend the
conference and deliver the presentation; otherwise the paper will not be
included in the formal proceedings.
=== Dates ===
Submission of papers: October 7, 2011
Author notification: November 18, 2011
Camera-ready copy: December 16, 2011
Conference: January 18-20, 2012
Really? CS is just another form of shop class? Really?
That response heartens Paula M. Krebs, a professor of English at Wheaton College, in Massachusetts, who said she has worried that higher education “could succumb to the language of utility.” Colleges shouldn’t be judged, she argued, on graduates’ first jobs out but rather on the intellectual foundation they provide.
After all, says Ms. Krebs, now an American Council on Education fellow at the University of Massachusetts, “no one thinks high school should be training for the work world only. No one advocates a high-school curriculum of just shop classes, or just computer-science courses. You have to take English, math, history.”
Getting high-quality computer science education into high school would likely smooth out undergraduate enrollment. Rather than the spikes that we get when a new computational technology makes waves, and the lulls when students realize that they don’t know what computer science is, we would have better-informed students. Getting computer science into all high schools would mean that a more diverse population would get to try out computer science, and may discover that they like it. But how do we get good computer science education into high schools? Maybe we take a lesson from Calculus.
In 2010, 245,867 students took the AP Calculus AB test (compared with 20,210 AP CS Level A test takers). That’s evidence that there is a lot of calculus in high schools. How did that happen? Was there a drive to push calculus into all states’ curricula? (I don’t remember ever hearing about “Calculus in the Core.” :-) Was there a national effort to convert existing math teachers into Calculus teachers? Did the colleges tell the high schools, “We need students who are calculus-literate”?
Here’s my take on how it happened, based on what histories I can find and the growth of calculus in high schools. Colleges and universities taught calculus to undergraduates. The best high schools decided that they would start to teach calculus, to better prepare their high-achieving students (back in the 1960s). More colleges and universities started requiring or expecting calculus. More and more high schools tried to raise their prestige by teaching their students calculus. Several organizations (College Board, NCTM, MAA) and universities today train teachers to teach calculus, because those teachers and their schools want it.
If we want high schools to teach computer science to college-bound students, colleges and universities must require computer science of all their students. If we can’t require computer science of all undergraduates, we can require it for admission, but we must be prepared to offer remedial classes, since so few high schools currently offer good computer science. If computer science is important enough for high school students, it’s important enough for undergraduate students.
Efforts like Computing in the Core and the new AP CS:Principles are great ideas, and I hope that they succeed, but they are top-down efforts. A stronger effect comes bottom-up. We want teachers and administrators to say, “My local college requires CS for everyone. I want my students to be well-prepared for college by already knowing CS when they get in the door!” The bottom-up effort is slower — it’s taken decades for calculus to infiltrate high schools to the level that it has. But it’s less expensive and makes change happen pervasively.
If we can’t convince our peers in the colleges and universities that computer science is important, how are we going to convince the high schools? And if we convince our colleges and universities, the high schools will likely follow. We can follow the Calculus lead.
AP, the Washington Post, the NYTimes, and NPR covered this story this week — Carl Wieman has an article in Science showing that two grad students with an interactive learner-engagement method beat out a highly-rated veteran lecturer in terms of student learning in a large class. This is a cool piece, and I buy it — that’s why I’m doing peer instruction in my class. I still believe that lecture can work, but the evidence is strong that learner engagement beats lecture, especially in large STEM classes. I think that this result is particularly disconcerting for the open learning movement. If lectures aren’t worth much for most learners, what is it that iTunes-U and MIT Open Courseware are offering?
Who’s better at teaching difficult physics to a class of more than 250 college students: the highly rated veteran professor using time-tested lecturing, or the inexperienced graduate students interacting with kids via devices that look like TV remotes? The answer could rattle ivy on college walls.
A study by Nobel Prize-winning physicist Carl Wieman at the university found that students learned better from inexperienced teachers using an interactive method, including the clicker, than from a veteran professor giving a traditional lecture. Student answers to questions and quizzes are displayed instantly on the professor’s presentation.
He found that in nearly identical classes, Canadian college students learned a lot more from teaching assistants using interactive tools than they did from a veteran professor giving a traditional lecture. The students who had to engage interactively using the TV remote-like devices scored about twice as high on a test compared to those who heard the normal lecture, according to a study published Thursday in the journal Science.
I’m at the “Computer Science: Principles” Commission meeting (yesterday and today) and Advisory Board meeting (tonight and tomorrow). Wow — a lot has happened since the Commission last met over a year ago. And for someone like me, who loves to wallow in data, there’s a lot to make this pig happy. (Before anyone asks: The data are not mine to share, and I don’t know if they will be released. I have been told that I can share what I’m describing here.)
First, the attestation process really worked. Over 80 schools have signed up saying that they will give credit, placement, or offer a similar class to the CS:Principles outline. That’s enough for the College Board and NSF to be willing to go forward. Huge congratulations to Larry Snyder, Owen Astrachan, and all the others who made this happen. Even if you disagree with CS:Principles, the attestation effort is evidence that the CS Education community can draw together and get behind something that they find important.
Even more amazing to me is that 121 CS departments (wow!) took a 90-minute survey (and I took it twice — I know that it really does take that long) where they gave us feedback on each of the claims and evidence statements. What is most fascinating is what the departments consider to be “not important” to put into the CS:Principles class. For example, more than half of the departments surveyed find claims and evidence related to Big Idea #6 (“Digital devices, systems, and the networks that interconnect them enable and foster computational approaches to solving problems”) to be “not important,” most of them saying it’s “too advanced.” Now, as Owen pointed out to us yesterday, that doesn’t mean that these ideas are unimportant — but it does mean that the current form isn’t working. You can expect the Big Ideas and Computational Thinking Practices to change over the next few weeks.
Kathleen Haynie is the external evaluator for the effort, and she presented (and created a great, long, detailed report — wallow, wallow!) on the four pilot sites who tried to implement CS:Principles in Fall 2010. I found the data amazing, almost unbelievable. All four pilot classes were over 50% female. 43% of the students in the pilots took the class just because they were interested — it wasn’t useful for their major (12.4% of the students were CS majors), or their minor, or their general education. I can’t think of a single class at all of Georgia Tech where over 40% of the students are taking the class just because they think it’s interesting. That stat really speaks to the quality of the pilot study teachers — these are true Master Teachers who attract students, even to learn to program.
I suspect that I’ll be wallowing in these data for several weeks after this meeting. Great fun!
The blogger quoted below argues that today’s technical recruiter doesn’t know how to ask the hard questions, to make sure that “the new guy can code.” His radical proposal: Only interview people who have built an app with real users.
I’m leery of one-size-fits-all models. That’s explicitly what we were trying to avoid with Media Computation and Threads — we weren’t trying to get rid of the old way, but creating options for people who didn’t fit the old way. I know excellent programmers who can’t build a user interface, and exceptional user interface designers who can’t code at all. Does that mean that those folks should never be interviewed, that they have nothing to offer? Do you have to be the whole package, from Eclipse to Photoshop, to offer any value to a company?
So what should a real interview consist of? Let me offer a humble proposal: don’t interview anyone who hasn’t accomplished anything. Ever. Certificates and degrees are not accomplishments; I mean real-world projects with real-world users. There is no excuse for software developers who don’t have a site, app, or service they can point to and say, “I did this, all by myself!” in a world where Google App Engine and Amazon Web Services have free service tiers, and it costs all of $25 to register as an Android developer and publish an app on the Android Market.
The old system was based on limited information—all you knew about someone was their resume. But if you only interview people with accomplishments, then you have a much broader base to work from. Get the FizzBuzz out of the way, and then have the interviewee show and tell their code, and explain their design decisions and what they would do differently now. Have them implement a feature or two while you watch, so you can see how they actually work, and how they think while working. That’s what you want from a technical interview, not a measure of its subject’s grasp of some antiquated algorithm or data structure. The world has moved on.
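For readers who haven’t run into it, FizzBuzz is the deliberately trivial screening exercise the blogger wants to “get out of the way”: count from 1 to n, replacing multiples of 3 with “Fizz,” multiples of 5 with “Buzz,” and multiples of both with “FizzBuzz.” A minimal Python version looks like this:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # multiple of both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
```

The point of the exercise is not the algorithm but the filter: it quickly establishes that a candidate can write any working code at all, before moving on to the show-and-tell of real projects that the blogger recommends.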