Posts tagged ‘learning sciences’
One year, I gave an assignment in my Objects and Design class (in Squeak!) to construct a personal newspaper by reading bits of news (based on user interest) from local news sites. The night before the assignment was due, so many students tested their buggy fetch-and-scrape code on one poor site that they killed the site — a pedagogical denial-of-service attack.
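In hindsight, a little client-side politeness would have prevented the overload. Here is a minimal sketch (the function and parameter names are mine, not part of the assignment) of a rate-limited fetch loop that spaces out requests so no one site sees a burst of traffic:

```python
import time

def fetch_politely(urls, fetch, delay_seconds=2.0):
    """Fetch each URL via the supplied fetch() function, pausing
    between requests so the server never sees a burst of traffic."""
    pages = []
    for i, url in enumerate(urls):
        if i > 0:
            time.sleep(delay_seconds)  # simple rate limit between requests
        pages.append(fetch(url))
    return pages
```

Passing in the `fetch` function also makes the scraper testable against a fake fetcher, so students could have debugged without hitting the live site at all.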
Should I or my students have been arrested and taken away in handcuffs? It seems like the direct computing-world analog of the story quoted below.
Fortunately, the student has now been cleared of charges. It’s still a scary story.
It’s a sad commentary on our alarmist society that a similar deed would probably land a modern day budding Oliver Sacks in jail. That is exactly what it has done to a young aspiring scientist named Kiera Wilmot from Bartow High School in Florida, and in the process it has almost certainly deprived this country of exactly the kind of scientist whose shortage its politicians and educators are so fond of lamenting. The student conducted a common experiment mixing Drano and aluminum foil on the grounds of a school. The exact details are unknown but the incident led to a minor explosion, hurt nobody and damaged no property. This relatively harmless bit of curiosity led to Ms. Wilmot being handcuffed, arrested and expelled from the school. Irrational State Overreach: 1, The Much Touted American Edge in Science: 0. Whatever else the school was trying to achieve, it definitely succeeded in squelching independent scientific curiosity in its students.
I usually really like Annie Murphy Paul’s articles, but this one didn’t work for me. Below are her reasons why TED talk videos work well in learning, with my comments interspersed.
• They gratify our preference for visual learning. Effective presentations treat our visual sense as being integral to learning. This elevation of the image—and the eschewal of text-heavy Power Point presentations—comports well with cognitive scientists’ findings that we understand and remember pictures much better than mere words.
Cognitive scientists like Richard Mayer have found that diagrams and pictures can enhance learning — absolutely. But his work combined diagrams with words (e.g., the best combination pairs diagrams with audio narration, not visual text). This quote seems to suggest that pictures are better than words. For most of STEM, that’s not true. We may have an affinity for the visual, but that doesn’t mean it works better for learning complex material.
• They engage the power of social learning. The robust conversation that videos can inspire, both online and off, recognizes a central principle of adult education: We learn best from other people. In the discussions, debates, and occasional arguments about the content of the talks they see, video-watchers are deepening their own knowledge and understanding.
Wait a minute — isn’t she just saying that TED talks give us something to talk about? TED talks are not themselves inherently social. Isn’t a book discussed in a book club just as effective for “engaging the power of social learning”? What makes TED talks so “social”?
• They enable self-directed, “just-in-time” learning. Because video viewers choose which talks to watch and when to watch them, they’re able to tailor their education to their own needs. Knowledge is easiest to absorb at the moment when we’re ready to apply it.
This was the quote that inspired this blog post. It’s an open question, but here’s my hypothesis: nobody watches a TED talk for “just-in-time” learning. People watch TED talks for entertainment. “I am about to go to my school board meeting — I think I’ll watch Sir Ken Robinson to figure out what to say!” “I need to be able to guess birthdays — isn’t there a TED talk on that?” There are videos that really work for “just-in-time” learning. TED talks aren’t like that.
• They encourage viewers to build on what they already know. Adults are not blank slates: They bring to learning a lifetime of previously acquired information and experience. Effective video instruction builds on top of this knowledge, adding and elaborating without dumbing down.
It’s absolutely true that effective instruction builds on top of existing knowledge, which is something that the best teachers know how to do — to figure out what students know and care about, and relate knowledge to that. How does a fixed video build on what viewers (all hundreds of thousands of them) actually know? No, I don’t see how TED talks do that.
Way to go, Wendy! My Georgia Tech colleague did really well at a recent AAAS forum on MOOCs. The contrast in tone among the three speakers is striking. Anant Agarwal says “Hype is a good thing!” Kevin Wehrbach says that a MOOC is “an extraordinary teaching and learning experience.” Then Wendy Newstetter lets loose with concerns supported by citations and hard research questions.
In any learning environment, students should gain “transferable knowledge” that can be applied in many contexts, said Newstetter, citing a 2012 National Academies’ report on Education for Life and Work. Specifically, she said, researcher James Pellegrino has identified an array of cognitive, interpersonal and intrapersonal skills that all students need in order to succeed. How can the array of new online learning models help students achieve those goals?
Newstetter proposed a series of questions that should be answered by research. Educators need to know, for example, under what conditions technology-mediated experiences can result in enhanced learning competencies, she said. Do MOOCs effectively encourage students to develop perseverance, self-regulation and other such skills? Is knowledge gained in a MOOC “transferable,” so that what students learn can help them solve problems in other contexts? How can MOOCs be enhanced to promote interpersonal skills, and what intrapersonal attributes are needed for optimal learning in MOOCs?
Some observers have suggested that MOOCs tend to work best for more affluent students, Newstetter noted. She mentioned the 2013 William D. Carey lecture, presented at the AAAS Forum by Freeman Hrabowski III, president of the University of Maryland, Baltimore County, who focused on strategies for helping underrepresented minorities succeed in science fields. “What he described was high-contact, intensive mentoring,” she pointed out.
I’ve written before about computer science pedagogical content knowledge (PCK). Phil Sadler and his colleagues just published a wonderful study about the value of PCK. He found that science teachers need to know science, but the most effective science teachers also know what students get wrong — their misconceptions, their learning difficulties, and the symptoms of their misunderstandings. I got a chance to ask him about this paper, and he said that one implication he sees is that the study offers a way to measure PCK, and that measuring something important about teaching is both hard and useful.
For the study described in their paper, Sadler and his colleagues asked teachers to answer each question twice, once to give the scientifically correct answer, and the second time to predict which wrong answer their students were likeliest to choose. Students were then given the tests three times throughout the year to determine whether their knowledge improved.
The results showed that students’ scores improved most when teachers were able to predict their students’ wrong answers.
“Nobody has quite used test questions before in this way,” Sadler said. “What I had noticed, even before we did this study, was that the most amazing science teachers actually know what their students’ wrong ideas are. It occurred to us that there might be a way to measure this kind of teacher knowledge easily without needing to spend long periods of time observing teachers in their classrooms.”
I have been working set crew for a musical for the last two weeks, and will through this weekend. This is my third year doing it, so I’m not quite the novice I was when I first wrote about the experience. We’re doing “Curtains,” a show-in-a-show musical — the setting is a theater in Boston where a Western musical is being readied for Broadway, when murders start backstage.
Again, I’m struck by the complexity of musical theater. The actors have been at it since January, and everything they have to learn amazes me. As stage crew, I only owe them three weeks of every evening, but I have still had a lot to learn in a short time. In part of Act Two, I’m setting flats, then racing back to help actors with their quick change (it’s way harder to button someone else’s shirt buttons than your own), then lifting a globe into place (turning it sideways to fit through door frames), before racing back to set up a river for the next scene.
What particularly strikes me this year is that we have not only learned some fairly complex activities, but have learned them well enough to self-monitor and invent.
- During one performance this last weekend, I was the last crew still on stage when the stage manager whispered to me, “The rope!” The rope that held the globe still had come loose and was dangling. I grabbed it and dove behind a riser — just as the lights came up. I was trapped. (Not seriously, of course. The worst that would happen is that the audience would see a guy in black crawl by at the back of the stage. But the whole point of theater is to maintain an illusion, so you avoid those kinds of incongruities.) The stage manager whispered to me to climb up the ladder behind the globe without being seen, and tie the globe down, which I did. Now I was trapped on a ladder behind the scene, thinking, “What do I do next?” In the next scene change, I was to be a real stagehand acting like a stagehand. “Curtains” is a play about a play, so at a few times in the show, someone yells, “Clear the set” and we stagehands come out (in the lights! in front of the audience!) to clear the set. When Lt. Cioffi yelled, “Let’s bring in the river,” I ran out to bring in the river scene — from behind the globe. Nobody would have noticed or cared where the stagehand came from, so the illusion was maintained.
- During last night’s performance, the trap door that drops the heavy sandbag (an attempted murder) didn’t work. One of the actors on stage invented dialog to get around that flub and keep the story going – that was quick thinking. The trap door failure created a challenge for the set crew. Why didn’t the trap door work? Was it going to get unstuck and drop a weight during the middle of another scene? While one member of the set crew started crawling around to check the trap door, the rest of the stagehands covered his chores.
I could go on and on. A prop is missing, a costume breaks, someone flubs their line or doesn’t get on stage quickly enough. Things happen, and people have to think on their feet. Compare this to an introductory computer science class, where students famously have difficulty figuring out even one way to do something in 10-15 weeks of practice. Or when they do find the one way they can figure out, it just barely works and the code is frequently awful — ugly and hard to read.
What we see going on in the musical is complex learning, with flexibility. It’s not quantum physics, but it is complex. If you’ve ever learned a dance or martial arts, you know that remembering and recreating a sequence of physical moves can be challenging. Now combine that across multiple scenes, with rapid timing (quick changes have to be completed before the orchestra finishes the song), with lots of people involved, and it’s complicated. I just bought the “Curtains” soundtrack and am impressed. Our actors and singers can hold their own with the original cast recording.
How did everyone involved in the musical learn so much, so well, in such a short amount of time? And why doesn’t that happen so often in formal education? There are lots of things going on. Here are two that I’ve been thinking about:
- I’m currently listening (on my work commutes) to “Quiet: The power of introverts in a world that can’t stop talking,” where author Susan Cain discusses Anders Ericsson’s work on deliberate practice. I’m not suggesting that the actors or stagehands in the musical have put in ten thousand hours or are experts (though I would not be surprised if some of our top actors, who do a lot of theater and commercial work, cross that threshold). I am suggesting that Ericsson’s conditions for developing expertise are present here: “The most cited condition concerns the subjects’ motivation to attend to the task and exert effort to improve their performance. The subjects should receive immediate informative feedback and knowledge of results of their performance. The subjects should repeatedly perform the same or similar tasks.” We do the musical over and over. We are motivated to get it right. The directors critique, and we critique ourselves: “That didn’t go well,” or “we could do that better.” That doesn’t happen so much in formal education.
- I’m reading David Perkins’ “Making learning whole,” where he talks about how we tend to teach piecemeal in formal education, but in informal education (in his introduction, it’s learning baseball), the learner knows what the end product is supposed to look like. The actors and stagehands in a musical know where we’re going. We have a complete picture of the role of each piece. We know what a good show looks like. We focus on this number here, and this set change there, but there’s no question that everything is supposed to fit together. It’s not like “We’re learning recursion, and I’m not sure why I’d ever want to do this.” Students in formal education often don’t understand the relevance of what they’re learning, of how it all fits together.
P.S. If you’re in Atlanta, there are shows this Friday and Saturday at 8 pm, and Sunday at 4 pm. Come see it!
This is a pretty exciting center. EDC does very good work, and Jeremy Roschelle is an excellent researcher in learning sciences (author of the JLS article on economic benefits of STEM education that I blogged on last year).
The new center aims to maximize the potential of NSF-funded projects focused on learning with technology, with the goal of addressing pressing needs in STEM education. Of particular interest are technological advances that allow more personalized learning experiences, that draw in and promote learning among those in populations not currently well-served, and that allow access to learning resources. EDC’s role will be to assess the needs of NSF grantees, foster the development of partnerships, and facilitate and lead events that bring together grantees and stakeholders from the national cyberlearning community.
“This initiative brings another NSF program resource center to EDC and allows us to harness our collective experience and knowledge in this area,” said EDC’s Sarita Pillai, who will lead the EDC team. “Through this work, we expect to accelerate progress in the field of cyberlearning and to improve student learning in the areas of science, technology, engineering, and math.”
“This is a timely, important opportunity to connect high-quality research with the rapidly growing market for digital learning, an area of intense need and investment in Silicon Valley and throughout the country,” said SRI’s Jeremy Roschelle, director of CIRCL.
This is a compelling vision. Set aside MOOCs or not — how could we use a team-based approach in building postsecondary education, so that we have the best of texts, tools, in-class experiences, videos, and individualized tutoring and advising? If we want higher-quality, we can’t expect one teacher to perform all roles for increasing numbers of students.
The real threat to traditional higher education embraces a more radical vision that removes faculty from the organizational center and uses cognitive science to organize the learning around the learner. Such models exist now.
Consider, for example, the implications of Carnegie Mellon’s Open Learning Initiative. More than 10 years ago, Herb Simon, the Carnegie Mellon University professor and Nobel laureate, declared, “Improvement in postsecondary education will require converting teaching from a solo sport to a community-based research activity.” The Open Learning Initiative (OLI) is an outgrowth of that vision and has been striving to realize it for more than a decade.
Inquiry-based learning is the best practice for science education. Education activities focus on a driving question that is personally meaningful for students, like “Why is the sky blue?” or “Why is the stream by our school so acidic (or basic)?” or “What’s involved in building a house powered entirely by solar power?” Answering those questions leads to deeper learning about science. Learning sciences results support the value of this approach.
It’s hard for us to apply this idea from science education and teach an introductory computing course via inquiry, because students may not have many questions that relate to computer science when they first get started. Questions like “How do I make an app to do X?” or “How do I use Snap on my laptop?” are design- and task-oriented, not inquiry-oriented. Answering them may not lead to deeper understanding of computer science. Our everyday experience of computing, through (hopefully) well-designed interfaces, hides away the underlying computing. We only really start to think about computing at moments of breakdown (when, in Heidegger’s terms, the technology becomes “present-at-hand”). “Why can’t I get to YouTube, even though the cable modem light is on?” and “How does a virus get on my computer, and how can it pop up windows on my screen?” It’s an interesting research project to explore what questions students have about computing when they enter our classes.
I realized this semester that I could prompt students to define questions for inquiry-based learning in a second computer science class, a data structures course. I’m teaching our Media Computation Data Structures course this semester. These students have seen under the covers and know that computing technology is programmed. I can use that to prompt them about how new things work. What I particularly like about this approach is how it gets me out of the “Tour of the Code” lecturing style.
Here’s an example. We had already created music using linked lists of MIDI phrases. I then showed them code for creating a linked list of images, then presented this output.
I asked students, “What do you want to know about how this worked?” This was the gamble for me — would they come up with questions? They did, and they were great questions. “Why are the images lined up along the bottom?” “Why can we see the background image?”
I formed the students into small groups and assigned each group one of the questions that the students had generated. I gave them 10 minutes to find the answers and then report back. The discussion around the room was on-topic and had the students exploring the code in depth. We then went through each group to get their answers. Not every answer was great, but I could take each answer and expand upon it to reach the issues that I wanted to make sure we highlighted. It was great — way better and more interactive than my paging through umpteen PowerPoint slides of code.
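To give a sense of what the students were puzzling over, here is a minimal sketch in Python (the class and method names are my own, not the actual Media Computation code): each node of a singly linked list holds an image, and layout walks the list, placing the images side by side along the bottom edge of the background — which is exactly what the first two student questions were probing.

```python
class ImageNode:
    """One element of a singly linked list of images (reduced here
    to just the picture's dimensions)."""
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.next = None

    def insert_after(self, node):
        node.next = self.next
        self.next = node

def layout_along_bottom(head, background_height):
    """Traverse the list, returning (x, y) positions that line the
    images up along the bottom: y is the background height minus each
    image's height, and x accumulates the widths so far. Anywhere no
    image lands, the background stays visible."""
    positions = []
    x = 0
    node = head
    while node is not None:
        positions.append((x, background_height - node.height))
        x += node.width
        node = node.next
    return positions
```

The traversal makes the answers concrete: the images hug the bottom because y is computed from the background height, and the background shows through wherever the running x hasn’t reached.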
Then I showed them this output from another linked list of images.
Again, the questions that the students generated were terrific. “What data are stored in each instance such that some have positions and some are just stacked up on the bottom?” and “Why are there gaps along the bottom?”
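One plausible sketch of what’s behind that second output (again with hypothetical names — I’m guessing at a layout rule consistent with the students’ observations): each node may carry an explicit (x, y) position, and nodes without one fall back to the default bottom-row placement, which is how mixed placements and gaps along the bottom can arise.

```python
class PositionedImageNode:
    """Linked-list node that may carry an explicit (x, y) position.
    None means 'use the default bottom-row layout'."""
    def __init__(self, width, height, position=None):
        self.width = width
        self.height = height
        self.position = position
        self.next = None

def layout_mixed(head, background_height):
    """One plausible layout rule: every node advances the running x,
    but a node with an explicit position is drawn there instead --
    leaving a gap in the bottom row where it would have gone."""
    placements = []
    x = 0
    node = head
    while node is not None:
        if node.position is not None:
            placements.append(node.position)
        else:
            placements.append((x, background_height - node.height))
        x += node.width  # advance even for positioned nodes: the gap
        node = node.next
    return placements
```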
Still later in the course, I showed them an animation, rendered from a scene graph, and I showed them the code that created the scene graph and generated the animation. This time, I asked them about both the animation code and the class hierarchy that the scene graph nodes were drawing on. Their questions were both about the code and about the engineering of the code — why was it decomposed in just this way?
(We didn’t finish answering these questions in a single class period, so I took pictures of the questions so that I could display them and we could return to them in the next class.)
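A scene graph is a composite: interior nodes offset whole subtrees, so an animation only needs to nudge one branch per frame and everything under it moves together. A minimal sketch under those assumptions (the node and method names are illustrative, not the course’s actual class hierarchy):

```python
class SceneNode:
    """Base class for scene-graph nodes."""
    def render(self, x, y):
        raise NotImplementedError

class Picture(SceneNode):
    """A leaf: a single drawable image, reduced here to a name."""
    def __init__(self, name):
        self.name = name
    def render(self, x, y):
        return [(self.name, x, y)]

class Branch(SceneNode):
    """An interior node: offsets all of its children, so moving one
    Branch moves its whole subtree."""
    def __init__(self, dx=0, dy=0):
        self.dx, self.dy = dx, dy
        self.children = []
    def add(self, child):
        self.children.append(child)
    def render(self, x, y):
        drawn = []
        for child in self.children:
            drawn.extend(child.render(x + self.dx, y + self.dy))
        return drawn

def animate(root, moving_branch, frames, step=5):
    """Render several frames, nudging one branch each time."""
    rendered = []
    for _ in range(frames):
        rendered.append(root.render(0, 0))
        moving_branch.dx += step
    return rendered
```

The decomposition question the students raised has a crisp answer in this form: leaves know only how to draw themselves, while branches own the offsets, so animation logic touches a single branch rather than every image.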
I have really enjoyed these class sessions. I’m not lecturing about data structures — the students are learning about data structures. They are really engaged in trying to figure out, “How does that work like that?” I’m busy in class suggesting where they should look in the code to get their questions answered. We jointly try to make sense of their questions and their answers. Frankly, I hope never to have to show sequences of PowerPoint slides of code again.
Probably lots of people have now heard about the professor who walked out on his Coursera MOOC. What I found striking was Irvine’s response. The officials suggest that the course was just fine and would meet the needs of just about everyone, from those who just wanted a taste to those who wanted a serious education. What we know about aptitude-treatment interaction suggests that that’s not possible. A single course, with no personalization, is unlikely to meet the needs of tens of thousands of students.
Irvine officials, however, “felt that the course was very strong and well designed,” he said, “and that it would, indeed, meet the learning objectives of the large audience, including both those interested only in dipping into the subject and those who were seriously committed” to completing the course.
I gave a talk on 19 February at HCIL at U. Maryland-College Park. I was pleased with how it turned out. One of the things I learned when I gave my Indiana talks was that I ought to frame my talk with how I define learning and what theoretical frameworks I’m drawing on (e.g., learning sciences, constructionism, situated learning, community of practice, and authenticity). This was my first talk where I tried to do that, and I liked how I could keep referencing back to the theory as I went along. The talk gave me a chance to connect my work in computing education research (CER) to a broader education theory.
Nice piece in our C21U newsletter, suggesting that pedagogy is more important than the MOOC technology. How we teach is much more important for dramatic impacts on learning, than aiming for scale via advanced technology.
We may find that MOOCs work well for self-motivated students who have a lot of technology at their fingertips, have been raised in stimulating intellectual environments all their lives, who have lots of support mechanisms within their grasp to help them learn the material, and who have the wherewithal to spend the time and energy required to learn deeply what is being taught in these MOOCs.
But what about those students who don’t have the resources required to support their learning, who have not been raised in intellectually stimulating environments, who don’t even know how to study well? It is hard to see how MOOCs will work for these students, yet these are the students that it is most important that we reach in order to meet the challenges of 21st-century education.
I would much rather see the resources of Georgia Tech and our nation’s other educational institutions being used to support the creation of research-based learning environments that can most effectively support the learning of all students, regardless of their background. Learning environments that do not rely on the lecture. Learning environments that make good use of those precious and valuable times when students are in direct contact with their instructors.
The Muller research being described in the below post was discussed here previously, and is related to the predict-before-demo work that Eric Mazur presented at last year’s ICER. The uppermost bit here is that data mining can’t get at this level of abstraction in terms of identifying good teaching. I’m also concerned that data mining can’t help if you lose 80% of your subject pool — you can’t learn about people who aren’t there.
But even granting that you can get sufficiently rich information about the students, there’s another hard problem. Let’s say that, thanks to the upgrade in your big data infinite improbability drive made possible by your new Spacely’s space sprocket, your system is able to flag at least a critical mass of videos taught in the Muller method as having a bigger educational impact on the students than the average educational video, by some measure you have identified. Would the machine be able to infer that these videos belong in a common category in terms of the reason for their effectiveness? Would it be able to figure out what Muller did? There are lots of reasons why a video might be more effective than average. And many of those reasons are internal to the narrative structure of the video. The machine only knows things like the format of the video, the length, what kind of class it’s in, who the creator is, when it was made, and so on. Other than the external characteristics of the video file, it mostly knows what we tell it about the contents. It has no way to inspect the video and deduce that a particular presentation strategy is being used. We are nowhere close to having a machine smart enough to do what Muller did and identify a pattern in the narrative of the speaker.
Was anyone else bothered by the argument in this NYTimes blog post? “MOOCs aren’t effective in terms of completion rates; Duolingo is not a MOOC; Duolingo is effective.” So… what does that tell us about MOOCs?
The paper on Duolingo effectiveness is pretty cool. I think it’s particularly noteworthy that more prior knowledge of Spanish led to less of an effect of Duolingo. I wonder if that’s because Duolingo is essentially using a worked example model, and worked examples do suffer from the expertise reversal effect.
Moreover, there are early indications that the high interactivity and personalized feedback of online education might ultimately offer a learning structure that can’t be matched by the traditional classroom.
Duolingo, a free Web-based language learning system that grew out of a Carnegie Mellon University research project, is not an example of a traditional MOOC. However, the system, which now teaches German, French, Portuguese, Italian, Spanish and English, has roughly one million users and about 100,000 people spend time on the site daily.
I agree with the post below, which suggests that MOOCs misunderstand what a good teacher does — that’s what my earlier post was about. I’m not convinced, though, that I agree with the author’s definition of what a teacher does. Yes, a good teacher does all those things described in the second paragraph below, but a key part of what a teacher does is to motivate the student to learn. Learning results from what the student does and thinks. It’s the teacher’s job to cajole, motivate, engage, and even infuriate the student so that he or she thinks about things in a new way and learns. In the end, it’s always about the student, and the most important thing a teacher does is to get the student to do something.
But even if Tabarrok’s model makes good economic sense, it makes bad education sense and misrepresents what genuine teaching is and what the “best” teachers actually do. For starters, unlike TED speakers, they don’t simply deliver lectures and profess. They also work with students to help them become better thinkers, readers, and writers. How?
Through personal attention (such as tutorials) and classroom interaction (such as discussions and the guided close reading of texts). By constantly testing their students’ minds against theirs, forcing them to ask the hard questions and to explain them with significant answers. And by giving them appropriate personalized feedback.
Seb Schmoller had a nice response to my Friday post, where he asked what it will take for MOOCs to engage the student and lead to the learning that a good teacher can achieve. He included a wonderful quote from Herb Simon which really captures the key idea:
“Learning results from what the student does and thinks, and only from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.” – Herb Simon.