Posts tagged ‘education research’
Active learning has differential benefits for underserved students
We have had evidence for some time that active learning teaching methods benefit underserved populations more than majority groups (for example, I discussed the differential impact of active learning here). Just published in March in the Proceedings of the National Academy of Sciences is a meta-analysis of over 40 studies giving us the strongest argument yet: “Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math” at https://www.pnas.org/content/117/12/6476. I’ll remind everyone that a terrific resource for peer instruction in computer science is here: http://peerinstruction4cs.com/
Achievement gaps increase income inequality and decrease workplace diversity by contributing to the attrition of underrepresented students from science, technology, engineering, and mathematics (STEM) majors. We collected data on exam scores and failure rates in a wide array of STEM courses that had been taught by the same instructor via both traditional lecturing and active learning, and analyzed how the change in teaching approach impacted underrepresented minority and low-income students. On average, active learning reduced achievement gaps in exam scores and passing rates. Active learning benefits all students but offers disproportionate benefits for individuals from underrepresented groups. Widespread implementation of high-quality active learning can help reduce or eliminate achievement gaps in STEM courses and promote equity in higher education.
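To make “narrowing the gap” concrete, here’s a toy calculation with numbers I made up — the paper reports the actual effect sizes:

```python
# Toy illustration of "narrowing the gap" (all numbers invented;
# see the PNAS paper for the actual effect sizes).
lecture = {"majority": 75.0, "underrepresented": 65.0}  # mean exam scores
active = {"majority": 78.0, "underrepresented": 73.0}   # same course, active learning

gap_lecture = lecture["majority"] - lecture["underrepresented"]  # 10.0
gap_active = active["majority"] - active["underrepresented"]     # 5.0

print(f"Gap under lecture: {gap_lecture:.1f} points")
print(f"Gap under active learning: {gap_active:.1f} points")
# Every group improves, but underrepresented students improve more,
# so the gap narrows even though nobody does worse.
```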
So much to learn about emergency remote teaching, but so little to claim about online learning
The Chronicle of Higher Education published an article by Jonathan Zimmerman on March 10 arguing that we should use the dramatic shift to online classes due to the Covid-19 pandemic as an opportunity to research online learning (see article here).
For the first time, entire student bodies have been compelled to take all of their classes online. So we can examine how they perform in these courses compared to the face-to-face kind, without worrying about the bias of self-selection.
It might be hard to get good data if the online instruction only lasts a few weeks. But at institutions that have moved to online-only for the rest of the semester, we should be able to measure how much students learn in that medium compared to the face-to-face instruction they received earlier.
To be sure, the abrupt and rushed shift to a new format might not make these courses representative of online instruction as a whole. And we also have to remember that many faculty members will be teaching online for the first time, so they’ll probably be less skilled than professors who have more experience with the medium. But these are the kinds of problems that a good social scientist can solve.
I strongly disagree with Zimmerman’s argument. There is a lot to study here. There is little to claim about online learning.
What we are doing right now is not even close to best practice for online learning. I recommend John Daniel’s book Mega-Universities (Amazon link). One of his analyses contrasts online learning structured as a “correspondence school” (e.g., send out high-quality materials, require student work, provide structured feedback) with online learning structured as a “remote classroom” (e.g., video-record lectures, replicate in-classroom structures). Remote classrooms tend to have lower retention, and their costs increase as the number of students scales. Correspondence-school models are expensive (in money and time) to produce, but they scale well and have low per-student costs at large numbers. What we’re doing right now is much closer to the remote classroom than the correspondence school. Experience with MOOCs supports this analysis: doing online learning well takes time, costs money, and requires careful structure. It’s not thrown together with less than a week’s notice.
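Daniel’s cost argument can be sketched with a toy model. All the numbers below are invented for illustration, not taken from Mega-Universities; the point is the shape of the curves:

```python
# Toy cost model for Daniel's contrast (all numbers invented for
# illustration, not taken from Mega-Universities).
def cost_per_student(fixed, marginal, n):
    """Average cost per student: fixed production cost amortized
    over n students, plus a per-student (marginal) cost."""
    return fixed / n + marginal

for n in (50, 500, 5000):
    correspondence = cost_per_student(fixed=200_000, marginal=30, n=n)
    remote_classroom = cost_per_student(fixed=5_000, marginal=150, n=n)
    print(f"n={n:5d}: correspondence ${correspondence:8.2f}/student, "
          f"remote classroom ${remote_classroom:8.2f}/student")

# With few students the remote classroom is cheaper; at scale, the
# correspondence model's amortized fixed cost wins.
```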
My first thought when I read Zimmerman’s essay was about the ethics of any experiment comparing the enforced move to online classes with face-to-face classes. Students and faculty did not choose to be part of this study; they are being forced into online classes. How can we possibly compare face-to-face classes that were carefully designed with hastily-assembled online versions that nobody wants, at a time when the world is suffering a crisis? This is neither a fair nor an ethical comparison.
Ian Milligan recommends that we change our language to avoid these kinds of comparisons, and I agree. He writes (see link here) that we should stop calling this “online learning” and instead call it “emergency remote teaching.” Nobody would compare “business as usual” to an “emergency response” in terms of learning outcomes, efficiency, student satisfaction, and development of confidence and self-efficacy.
On the other hand, I do hope that education researchers, e.g., ethnographers, are tracking what happens. This is a first-ever event: moving whole institutions’ classes online with little notice. We should watch what happens. We should track, reflect, and learn from the experience.
But we shouldn’t make claims about online learning. There is no experiment here. There is a crisis, and we are all trying to do our best under the circumstances.
Sepehr Vakil appointed first Associate Director of Equity and Inclusion in STEM Education at U. Texas-Austin
I just met Sepehr at an ECEP planning meeting. Exciting to meet another CS Ed faculty member in an Education school! He won the Yamashita Prize at Berkeley in 2015 for his STEM activism.
Dr. Vakil’s research revolves around the intersection of equity and the teaching and learning of STEM, particularly in computer science and technology. This focus has led Dr. Vakil to conduct participatory design research projects in several contexts. These efforts include founding and directing the Oakland Science and Mathematics Outreach (OSMO) program—an after-school program serving youth of color in the city of Oakland. Dr. Vakil also has experience teaching and conducting research within public schools. During graduate school, he co-taught introductory computer science courses for 3 years in the Oakland Unified and Berkeley Unified School Districts. As part of a research collaboration between UC Berkeley and the Oakland Unified School District, he worked with students and teachers in the Computer Science and Technology Academy at Oakland Technical High School to design an after-school racial justice organization named SPOCN (Supporting People of Color Now!). Dr. Vakil’s work at the intersection of equity, STEM, and urban education has also led to publications in prestigious journals such as Cognition & Instruction, Equity and Excellence in Education, and the Journal of the Learning Sciences.
Crowd-sourcing high-quality CS Ed Assessments: CAS’s Project Quantum
A bold new project from the UK’s Computing at School initiative aims to create high-quality assessments for their entire computing curriculum, across grade levels. The goal is to generate crowd-sourced problems with quality control checks to produce a large online resource of free assessments. It’s a remarkable idea — I’ve not heard of anything at this scale before. If it works, it’ll be a significant achievement for education, as well as an enormous resource for computing educators.
I’m a bit concerned about whether it can work. Let’s use open-source software as a comparison. While there are many great open-source projects, most of them die off. There simply aren’t enough programmers in open source to contribute to all the great ideas and keep them all going. There are fewer people who can write high-quality assessment questions in computing, and fewer still who will do it for free. Can we get enough assessments made for this to be useful?
Project Quantum will help computing teachers check their students’ understanding, and support their progress, by providing free access to an online assessment system. The assessments will be formative, automatically marked, of high quality, and will support teaching by guiding content, measuring progress, and identifying misconceptions. Teachers will be able to direct pupils to specific quizzes and their pupils’ responses can be analysed to inform future teaching. Teachers can write questions themselves, and can create quizzes using their own questions or questions drawn from the question bank. A significant outcome is the crowd-sourced quality-checked question bank itself, and the subsequent anonymised analysis of the pupils’ responses to identify common misconceptions.
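To make the idea concrete, here’s one plausible shape for a misconception-tagged item in such a question bank — my own sketch, not Project Quantum’s actual data model:

```python
# A hypothetical schema for a question-bank item; Project Quantum's
# real data model is not published here, so this is only a sketch.
from dataclasses import dataclass, field

@dataclass
class Distractor:
    text: str
    misconception: str  # the misunderstanding this wrong answer reveals

@dataclass
class QuizItem:
    prompt: str
    correct_answer: str
    distractors: list[Distractor]
    topic: str  # curriculum strand, e.g. "variables"
    reviews: list[str] = field(default_factory=list)  # crowd quality checks

item = QuizItem(
    prompt="After running x = 5 then y = x, what is the value of y?",
    correct_answer="5",
    distractors=[
        Distractor("x", "treats assignment as creating an alias name"),
        Distractor("0", "believes variables reset after each statement"),
    ],
    topic="variables",
)
```

The payoff of tagging each wrong answer with a misconception, as the announcement suggests, is that the anonymized response data can then surface which misunderstandings are most common.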
Computing Education Research and the Technology Readiness Level
I just learned about the Technology Readiness Level (see Wikipedia page here) and found it interesting. Does it make sense for computing education research, or for any education research at all? Aren’t we mostly pragmatists when it comes to education research? We don’t become interested unless something can really work in classrooms. Or maybe early-stage education research is just called “psychology”?
There’s a useful high-tech concept called the Technology Readiness Level that helps explain why Uber pounced when it did. NASA came up with this scale to gauge the maturity of a given field of applied science. At Level 1, an area of scientific inquiry is so new that nobody understands its basic principles. At Level 9, the related technology is so mature it’s ready to be used in commercial products. ‘‘Basically, 1 is like Newton figuring out the laws of gravity, and 9 is you’ve been launching rockets into space, constantly and reliably,’’ says Jeff Legault, the director of strategic business development at the National Robotics Engineering Center.
Source: Uber Would Like to Buy Your Robotics Department – The New York Times
Say Goodbye to Myers-Briggs, the Fad That Won’t Die
Once in our Learning Sciences seminar, we all took the Myers-Briggs test on day 1 of the semester, and again at the end. Almost everybody’s score changed. So, why do people still use it as some kind of reliable test of personality?
A test is reliable if it produces the same results from different sources. If you think your leg is broken, you can be more confident when two different radiologists diagnose a fracture. In personality testing, reliability means getting consistent results over time, or similar scores when rated by multiple people who know me well. As my inconsistent scores foreshadowed, the MBTI does poorly on reliability. Research shows “that as many as three-quarters of test takers achieve a different personality type when tested again,” writes Annie Murphy Paul in The Cult of Personality Testing, “and the sixteen distinctive types described by the Myers-Briggs have no scientific basis whatsoever.” In a recent article, Roman Krznaric adds that “if you retake the test after only a five-week gap, there’s around a 50% chance that you will fall into a different personality category.”
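Test-retest reliability is easy to operationalize: administer the instrument twice and count how many people change category. A minimal sketch, with invented data standing in for our seminar’s informal experiment:

```python
# Minimal test-retest reliability check, with invented data.
# Each pair is (type at start of semester, type at end of semester).
results = [
    ("INTJ", "INTP"), ("ENFP", "ENFP"), ("ISTJ", "ESTJ"),
    ("INFJ", "INFP"), ("ENTP", "ENTP"), ("ISFP", "ESFP"),
]

changed = sum(1 for first, second in results if first != second)
print(f"{changed}/{len(results)} participants changed type "
      f"({changed / len(results):.0%})")
# A reliable categorical instrument should assign most people the same
# category both times; the ~50% change rate Krznaric reports means the
# label is close to a coin flip.
```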
Poverty Impedes Cognitive Function
An interesting experiment, with a deeply disturbing result.
The poor often behave in less capable ways, which can further perpetuate poverty. We hypothesize that poverty directly impedes cognitive function and present two studies that test this hypothesis. First, we experimentally induced thoughts about finances and found that this reduces cognitive performance among poor but not in well-off participants. Second, we examined the cognitive function of farmers over the planting cycle. We found that the same farmer shows diminished cognitive performance before harvest, when poor, as compared with after harvest, when rich. This cannot be explained by differences in time available, nutrition, or work effort. Nor can it be explained with stress: Although farmers do show more stress before harvest, that does not account for diminished cognitive performance. Instead, it appears that poverty itself reduces cognitive capacity. We suggest that this is because poverty-related concerns consume mental resources, leaving less for other tasks. These data provide a previously unexamined perspective and help explain a spectrum of behaviors among the poor. We discuss some implications for poverty policy.
Knowing more doesn’t necessarily lead to correct reasoning: Politics changes problem-solving
Thanks to Elizabeth Patitsas for this piece. Fascinating experiment — people solve the exact same math problem differently if the context is “whether a skin cream works” or “whether gun control laws work,” depending on their politics. The statement below is an interesting interpretation of the results and relates to my questions about whether computing education research actually leads to any change.
For study author Kahan, these results are a fairly strong refutation of what is called the “deficit model” in the field of science and technology studies—the idea that if people just had more knowledge, or more reasoning ability, then they would be better able to come to consensus with scientists and experts on issues like climate change, evolution, the safety of vaccines, and pretty much anything else involving science or data (for instance, whether concealed weapons bans work). Kahan’s data suggest the opposite—that political biases skew our reasoning abilities, and this problem seems to be worse for people with advanced capacities like scientific literacy and numeracy. “If the people who have the greatest capacities are the ones most prone to this, that’s reason to believe that the problem isn’t some kind of deficit in comprehension,” Kahan explained in an interview.
via Science Confirms: Politics Wrecks Your Ability to Do Math | Mother Jones.
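To see why the task trips people up, here’s its structure with illustrative numbers (my own, not necessarily Kahan’s): the intuitive move is to compare raw counts, but the correct move is to compare rates.

```python
# The covariance-detection task behind Kahan's experiment, with
# illustrative numbers. Participants see a 2x2 table and must decide
# whether the treatment (skin cream, or a gun-control law) helped.
improved_with, worsened_with = 223, 75        # outcomes with treatment
improved_without, worsened_without = 107, 21  # outcomes without treatment

# Intuitive (wrong) reading: 223 > 107, so the treatment "works."
# Correct reading: compare the *rates* of improvement.
rate_with = improved_with / (improved_with + worsened_with)              # ~75%
rate_without = improved_without / (improved_without + worsened_without)  # ~84%

print(f"Improved with treatment:    {rate_with:.0%}")
print(f"Improved without treatment: {rate_without:.0%}")
# The untreated group did better, so the data actually point *against*
# the treatment, despite the larger raw count in the treated group.
```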
1st “BOOC” on Scaling-Up What Works about to start at Indiana University
I talked with Dan Hickey about this — it’s an interesting alternative to MOOCs, and the topic is relevant for this blog.
In the fall semester of 2013, IU School of Education researcher and Associate Professor Dr. Daniel Hickey will be leading an online course. The 11-week course will begin on September 9 and is being called a “BOOC,” or “Big Open Online Course.” The main topic being taught is “Educational Assessment: Practices, Principles, and Policies.” Here students will develop “WikiFolios,” endorse each other’s work, and earn bona fide Digital Badges based on the work they complete. Additionally, the course provides an opportunity for Dr. Hickey to observe how these activities translate from the for-credit online course that initially seated 25 students to the new BOOC format hosting 500 participants. Of his small-scale experimental study, Dr. Hickey stated:
“I feel like I came up with some nice strategies for streamlining the course and making it a little less demanding which I think is necessary for an open, non-credit course. I learned ways to shorten the class, to get it from the normal 15 week semester to the 11 weeks. I condensed some of the assignments and gave students options; they do performance or portfolio assessment, they don’t do both. I thought that was pretty good for students.”
via 1st “BOOC” To Begin In September, Scaling-Up What Works | BOOC at Indiana University.
Carl Wieman Moves to Stanford to Focus on Better Science Teaching
Carl Wieman has accepted a position at Stanford to focus on science teaching. It’s a great place for him, and I expect that we’ll hear more interesting things from him in the future. One aspect of the story that I find particularly interesting is Wieman’s dislike of MOOCs, and how that conflicts with the perspective of some of the MOOC advocates at Stanford.
Mr. Wieman left the White House last summer, after receiving a diagnosis of multiple myeloma and after spending two years searching for ways to force universities to adopt teaching methods shown through scientific analysis to be more effective than traditional approaches.
His health has improved, Mr. Wieman said in an interview last week. But rather than try again through the political process to prod universities to accept what research tells them would be better ways of teaching and retaining students in the sciences, he now hopes at Stanford to work on making those methods even better.
The Two Cultures of Educational Reform – NYTimes.com
(Shoot — I meant to put this on “draft” and come back to it, but hit the wrong button. Sigh.)
Here’s what I thought was interesting about this piece: I agree with Fish’s depiction of the “data and experiment culture” around education, and the “ineffable culture,” too. But his alignment of MOOCs with the “data and experiment culture” seems wrong. Our data about MOOCs say that they’re not working. So, belief in MOOCs is “ineffable.” It’s about having warm feelings for technology and hopes for its role in education.
About halfway through his magisterial study “Higher Education in America,” Derek Bok, twice president of Harvard, identifies what he calls the “two different cultures” of educational reform. The first “is an evidence-based approach to education … rooted in the belief that one can best advance teaching and learning by measuring student progress and testing experimental efforts to increase it.” The second “rests on a conviction that effective teaching is an art which one can improve over time through personal experience and intuition without any need for data-driven reforms imposed from above.”
Bok is obviously a member of the data and experiment culture, which makes him cautiously sympathetic to developments in online teaching, including the recent explosion of MOOCs (massive open online courses). But at the same time, he is acutely aware of the limits of what can be tested, measured and assessed, and at crucial moments in his analysis that awareness pushes him in the direction of the other, “ineffable” culture.
Seymour Papert Tribute at IDC 2013
I only planned to watch a little bit of this. Allison Druin’s talk was particularly recommended to me. So I started watching, and Paulo Blikstein’s opening remarks were so intriguing. (I loved his characterization that today’s notions of “personalized learning” were “like telling a prisoner that he can walk around his cell all he wants.”) I hadn’t heard Edith Ackermann in decades, and was particularly struck by her comment, “Any theory of learning that ignores resistances to teaching misses the point!” Mike Eisenberg’s, Mitchel Resnick’s, and Uri Wilensky’s talks were all wonderful and insightful, and Allison’s was as good as the recommendation promised. Ninety minutes later, I was explaining to my family where I’d disappeared to…
The intellectual ideas discussed are fascinating, from epistemology to politics to education to design. Recommended.
Not clear that we can learn anything about learning from neuroscience yet
I like David Brooks’s opinion pieces quite a bit, and particularly his pieces where he draws on research. The piece linked below touches on an issue that I’ve been wondering about. All this neuroscience data about what part of the brain lights up when — what does it really tell us about how the mind works? Does it actually tell us anything about learning? Brooks’ opinion: Not yet.
These two forms of extremism are refuted by the same reality. The brain is not the mind. It is probably impossible to look at a map of brain activity and predict or even understand the emotions, reactions, hopes and desires of the mind.
Success in MOOCs: Talk offline is important for learning
That students who had offline help did the best in this MOOC study is not surprising. Sir John Daniel reported in Mega-Universities that face-to-face tutors were the largest line item in the Open University UK’s budget. But the fact that 90% of the students didn’t talk online (a statistic similar to what Tucker Balch found) suggests that success in MOOCs may be more about talking offline than online.
“On average, with all other predictors being equal, a student who worked offline with someone else in the class or someone who had expertise in the subject would have a predicted score almost three points higher than someone working by him or herself,” write the authors.

The correlation, described by the authors as the “strongest” in the data set, was limited to a single instance of a particular MOOC, and is not exactly damning to the format. But it nonetheless may give ammunition to critics who say human tutelage remains essential to a good education.

Other findings could also raise eyebrows. For example, the course’s discussion forum was largely the dominion of a relatively small group of engaged users; most students simply lurked. “It should be stressed that over 90 percent of the activity on the discussion forum resulted from students who simply viewed pre-existing discussion threads, without posting questions, answers, or comments,” the authors write.
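The “almost three points higher” figure is a regression coefficient: offline collaboration entered a model alongside other predictors of exam score. Here’s a minimal sketch of that kind of model, using simulated data — the study’s actual variables and estimates will differ:

```python
# Sketch of the kind of regression behind "almost three points higher,
# all other predictors being equal." Data are simulated, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
worked_offline = rng.integers(0, 2, n)  # 1 = studied with someone offline
prior_ability = rng.normal(0, 1, n)     # stand-in for the other predictors
score = 60 + 3.0 * worked_offline + 5.0 * prior_ability + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([worked_offline, prior_ability]))
model = sm.OLS(score, X).fit()
print(model.params)  # the coefficient on worked_offline recovers ~3 points
```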
Learning for today versus learning for tomorrow: Teaching evaluations
Really interesting set of experiments that give us new insight into the value of teaching evaluations. The second is particularly striking and points to the difficulty of measuring teaching quality — good today isn’t the same as good tomorrow.
When you measure performance in the courses the professors taught (i.e., how intro students did in intro), the less experienced and less qualified professors produced the best performance. They also got the highest student evaluation scores. But the more experienced and more qualified professors’ students did best in follow-on courses (i.e., their intro students did best in advanced classes).

The authors speculate that the more experienced professors tend to “broaden the curriculum and produce students with a deeper understanding of the material” (p. 430). That is, because they don’t teach directly to the test, they do worse in the short run but better in the long run.
via Do the Best Professors Get the Worst Ratings? | Psychology Today.
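The study’s design is easy to express in code: judge each intro professor not by their own students’ intro grades, but by how those students later perform. A toy version with made-up data:

```python
# Toy version of the study's comparison (made-up data): judge intro
# professors by their students' later performance, not just intro grades.
import statistics

# (intro professor, student's intro grade, same student's follow-on grade)
records = [
    ("Prof A", 92, 71), ("Prof A", 88, 68), ("Prof A", 90, 74),  # teaches to the test?
    ("Prof B", 81, 85), ("Prof B", 79, 88), ("Prof B", 84, 83),  # deeper preparation?
]

for prof in ("Prof A", "Prof B"):
    intro = [g1 for p, g1, g2 in records if p == prof]
    later = [g2 for p, g1, g2 in records if p == prof]
    print(f"{prof}: intro mean {statistics.mean(intro):.1f}, "
          f"follow-on mean {statistics.mean(later):.1f}")
# Prof A looks better in the intro course; Prof B's students do
# better downstream -- the pattern the authors report.
```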