Posts tagged ‘education research’
Sepehr Vakil appointed first Associate Director of Equity and Inclusion in STEM Education at U. Texas-Austin
I just met Sepehr at an ECEP planning meeting. Exciting to meet another CS Ed faculty in an Education school! He won the Yamashita Prize at Berkeley in 2015 for his STEM activism.
Dr. Vakil’s research revolves around the intersection of equity and the teaching and learning of STEM, particularly in computer science and technology. This focus has led Dr. Vakil to conduct participatory design research projects in several contexts. These efforts include founding and directing the Oakland Science and Mathematics Outreach (OSMO) program—an after-school program serving youth of color in the city of Oakland. Dr. Vakil also has experience teaching and conducting research within public schools. During graduate school, he co-taught introductory computer science courses for 3 years in the Oakland Unified and Berkeley Unified School Districts. As part of a research collaboration between UC Berkeley and the Oakland Unified School District, he worked with students and teachers in the Computer Science and Technology Academy at Oakland Technical High School to design an after-school racial justice organization named SPOCN (Supporting People of Color Now!). Dr. Vakil’s work at the intersection of equity, STEM, and urban education has also led to publications in prestigious journals such as Cognition & Instruction, Equity and Excellence in Education, and the Journal of the Learning Sciences.
Bold new project from the UK’s Computing at School project aims to create high-quality assessments for their entire computing curriculum, across grade levels. The goal is to generate crowd-sourced problems with quality control checks to produce a large online resource of free assessments. It’s a remarkable idea — I’ve not heard of anything at this scale before. If it works, it’ll be a significant education outcome, as well as an enormous resource for computing educators.
I’m a bit concerned about whether it can work. Let’s use open-source software as a comparison. While there are many great open-source projects, most of them die off. There simply aren’t enough programmers in open-source to contribute to all the great ideas and keep them all going. There are fewer people who can write high-quality assessment questions in computing, and fewer still who will do it for free. Can we get enough assessments made for this to be useful?
Project Quantum will help computing teachers check their students’ understanding, and support their progress, by providing free access to an online assessment system. The assessments will be formative, automatically marked, of high quality, and will support teaching by guiding content, measuring progress, and identifying misconceptions. Teachers will be able to direct pupils to specific quizzes and their pupils’ responses can be analysed to inform future teaching. Teachers can write questions themselves, and can create quizzes using their own questions or questions drawn from the question bank. A significant outcome is the crowd-sourced quality-checked question bank itself, and the subsequent anonymised analysis of the pupils’ responses to identify common misconceptions.
I just learned about this Technology Readiness Level (see Wikipedia page here) and found it interesting. Does it make sense for computing education research, or any education research at all? Aren’t we too pragmatic when it comes to education research? We don’t become interested unless it can really work in classrooms. Or maybe early-stage education research is just called “psychology”?
There’s a useful high-tech concept called the Technology Readiness Level that helps explain why Uber pounced when it did. NASA came up with this scale to gauge the maturity of a given field of applied science. At Level 1, an area of scientific inquiry is so new that nobody understands its basic principles. At Level 9, the related technology is so mature it’s ready to be used in commercial products. ‘‘Basically, 1 is like Newton figuring out the laws of gravity, and 9 is you’ve been launching rockets into space, constantly and reliably,’’ says Jeff Legault, the director of strategic business development at the National Robotics Engineering Center.
Once in our Learning Sciences seminar, we all took the Myers-Briggs test on day 1 of the semester, and again at the end. Almost everybody’s score changed. So, why do people still use it as some kind of reliable test of personality?
A test is reliable if it produces the same results from different sources. If you think your leg is broken, you can be more confident when two different radiologists diagnose a fracture. In personality testing, reliability means getting consistent results over time, or similar scores when rated by multiple people who know me well. As my inconsistent scores foreshadowed, the MBTI does poorly on reliability. Research shows “that as many as three-quarters of test takers achieve a different personality type when tested again,” writes Annie Murphy Paul in The Cult of Personality Testing, “and the sixteen distinctive types described by the Myers-Briggs have no scientific basis whatsoever.” In a recent article, Roman Krznaric adds that “if you retake the test after only a five-week gap, there’s around a 50% chance that you will fall into a different personality category.”
An interesting experiment, with a deeply disturbing result.
The poor often behave in less capable ways, which can further perpetuate poverty. We hypothesize that poverty directly impedes cognitive function and present two studies that test this hypothesis. First, we experimentally induced thoughts about finances and found that this reduces cognitive performance among poor but not well-off participants. Second, we examined the cognitive function of farmers over the planting cycle. We found that the same farmer shows diminished cognitive performance before harvest, when poor, as compared with after harvest, when rich. This cannot be explained by differences in time available, nutrition, or work effort. Nor can it be explained with stress: Although farmers do show more stress before harvest, that does not account for diminished cognitive performance. Instead, it appears that poverty itself reduces cognitive capacity. We suggest that this is because poverty-related concerns consume mental resources, leaving less for other tasks. These data provide a previously unexamined perspective and help explain a spectrum of behaviors among the poor. We discuss some implications for poverty policy.
Thanks to Elizabeth Patitsas for this piece. Fascinating experiment — people solve the exact same math problem differently if the context is “whether a skin cream works” or “whether gun control laws work,” depending on their politics. The statement below is an interesting interpretation of the results and relates to my questions about whether computing education research actually leads to any change.
For study author Kahan, these results are a fairly strong refutation of what is called the “deficit model” in the field of science and technology studies—the idea that if people just had more knowledge, or more reasoning ability, then they would be better able to come to consensus with scientists and experts on issues like climate change, evolution, the safety of vaccines, and pretty much anything else involving science or data (for instance, whether concealed weapons bans work). Kahan’s data suggest the opposite—that political biases skew our reasoning abilities, and this problem seems to be worse for people with advanced capacities like scientific literacy and numeracy. “If the people who have the greatest capacities are the ones most prone to this, that’s reason to believe that the problem isn’t some kind of deficit in comprehension,” Kahan explained in an interview.
I talked with Dan Hickey about this — it’s an interesting alternative to MOOCs, and the topic is relevant for this blog.
In the fall semester of 2013, IU School of Education researcher and Associate Professor Dr. Daniel Hickey will be leading an online course. The 11-week course will begin on September 9 and is being called a “BOOC,” or “Big Open Online Course.” The main topic being taught is “Educational Assessment: Practices, Principles, and Policies.” Here students will develop “WikiFolios,” endorse each other’s work, and earn bona fide digital badges based on the work they complete. Additionally, the course provides an opportunity for Dr. Hickey to observe how these activities translate from the same for-credit, online course that initially seated 25 students to the new “BOOC” format hosting 500 participants. Of his small-scale experimental study, Dr. Hickey stated:
“I feel like I came up with some nice strategies for streamlining the course and making it a little less demanding which I think is necessary for an open, non-credit course. I learned ways to shorten the class, to get it from the normal 15 week semester to the 11 weeks. I condensed some of the assignments and gave students options; they do performance or portfolio assessment, they don’t do both. I thought that was pretty good for students.”