Posts tagged ‘learning sciences’
Really interesting blog post, dissecting the mistakes made in a very popular TED talk.
Sir Ken’s ideas aren’t just impractical; they are undesirable. Here’s the trouble with his arguments:
1. Talent, creativity and intelligence are not innate, but come through practice.
2. Learning styles and multiple intelligences don’t exist.
3. Literacy and numeracy are the basis for creativity.
4. Misbehaviour is a bigger problem in our schools than conformity.
5. Academic achievement is vital but unequal, partly because…
6. Rich kids get rich cultural knowledge, poor kids don’t.
I don’t completely agree with all of Pragmatic Education’s arguments.
- Intelligence may not be malleable. You can learn more knowledge, and that can come from practice. It’s not clear that fluid intelligence is improved with practice.
- Learning styles don’t seem to exist. Multiple intelligences? I don’t think that the answer is as clear there.
- Creativity comes from knowing things. Literacy and numeracy are great ways of coming to know things. It’s a bit strong to say that creativity comes from literacy and numeracy.
- There are lots of reasons why rich kids are unequal to poor kids (see the issue about poverty and cognitive function). Cultural knowledge is just part of it.
But I agree with about 90% of it — I think he gets what’s wrong with Sir Ken’s arguments.
The first of these “lies” is the one that the students in my TA Prep course (Teaching Assistant Preparation, for PhD students learning to be teaching assistants) most often say back to me. The third lie (where “___” is “computer programming”) is a pernicious one among CS teachers.
When I was in middle school and high school, teachers loved to impart various tidbits of wisdom about the way students learn during lectures, always couched in such a way as to indicate these were scientifically accepted facts. You know everyone learns differently. Do you think you learn better through words or pictures? Did you know you learn different subjects with different sides of the brain?
Welp, they were wrong. Many of the theories of “brain-based” education, a method of instruction supposedly based on neuroscience, have been largely debunked by rigorous science. Brain-based education studies are usually poorly designed and badly controlled. Nevertheless, myths about how we learn persist in the popular imagination, and, most importantly, in educational materials and references for teachers.
1. We Learn Best When Teaching Is Tailored To Our Learning Style
2. Some People Are Left-Brained, Some People Are Right-Brained
3. __ Will Make You Smarter
I talked with Dan Hickey about this — it’s an interesting alternative to MOOCs, and the topic is relevant for this blog.
In the fall semester of 2013, IU School of Education Researcher and Associate Professor Dr. Daniel Hickey will be leading an online course. The 11-week course will begin on September 9 and is being called a ‘BOOC’ or “Big Open Online Course”. The main topic being taught is “Educational Assessment: Practices, Principles, and Policies”. Here students will develop “WikiFolios”, endorse each other’s work, and earn bona fide Digital Badges based on the work they complete. Additionally, the course provides an opportunity for Dr. Hickey to observe how these activities translate from the same for-credit, online course that initially seated 25 students to the new ‘BOOC’ format hosting 500 participants. Of his small-scale experimental study, Dr. Hickey stated:
“I feel like I came up with some nice strategies for streamlining the course and making it a little less demanding which I think is necessary for an open, non-credit course. I learned ways to shorten the class, to get it from the normal 15 week semester to the 11 weeks. I condensed some of the assignments and gave students options; they do performance or portfolio assessment, they don’t do both. I thought that was pretty good for students.”
Taking a test is better than studying, even if you just guess: We need to flip the flipped classroom
The benefits of testing for learning are fascinating, and the result described below makes me even more impressed with the effect. It suggests even more strongly that the critical feature of learning is trying to understand, trying to generate an answer, even more than reading an answer.
Suppose, for example, that I present you with an English vocabulary word you don’t know and either (1) provide a definition that you read (2) ask you to make up a definition or (3) ask you to choose from among a couple of candidate definitions. In conditions 2 & 3 you obviously must simply guess. (And if you get it wrong I’ll give you corrective feedback.) Will we see a testing effect?
That’s what Rosalind Potts & David Shanks set out to find, and across four experiments the evidence is quite consistent. Yes, there is a testing effect. Subjects better remember the new definitions of English words when they first guess at what the meaning is — no matter how wild the guess.
These results mesh well with a new study from Stanford. They found that the order of events in a “flipped” classroom matters — the problem-solving activity (in the classroom) should come before the reading or videos (at home). The general theme is the same in both sets of studies: problem-solving drives learning, and it’s less true that studying prepares one for problem-solving.
A new study from the Stanford Graduate School of Education flips upside down the notion that students learn best by first independently reading texts or watching online videos before coming to class to engage in hands-on projects. Studying a particular lesson, the Stanford researchers showed that when the order was reversed, students’ performances improved substantially.
The First Annual ACM Conference on Learning at Scale will be held March 4-5, 2014 in Atlanta, GA (immediately prior to and collocated with SIGCSE-14).

The Learning at Scale conference is intended to promote scientific exchange of interdisciplinary research at the intersection of the learning sciences and computer science. Inspired by the emergence of Massive Open Online Courses (MOOCs) and the accompanying huge shift in thinking about education, this conference was created by ACM as a new scholarly venue and key focal point for the review and presentation of the highest quality research on how learning and teaching can change and improve when done at scale.

“Learning at Scale” refers to new approaches for students to learn and for teachers to teach when engaging large numbers of students, either in a face-to-face setting or remotely, whether synchronous or asynchronous, with the requirement that the techniques involve large numbers of students (where “large” is preferably thousands of students, but can also apply to hundreds in in-person settings). Topics include, but are not limited to: Usability Studies, Tools for Automated Feedback and Grading, Learning Analytics, Analysis of Log Data, Studies of Application of Existing Learning Theory, Investigation of Student Behavior and Correlation with Learning Outcomes, and New Learning and Teaching Techniques at Scale.
November 8, 2013: Paper submissions due
November 8, 2013: Tutorial proposals due
December 23, 2013: Notification to authors of full papers
January 2, 2014: Works-in-progress submissions due (posters and demos)
January 14, 2014: Notification to authors of acceptance of works-in-progress
January 17, 2014: All revised and camera-ready materials due
March 4-5, 2014: Learning at Scale meeting
Additional information is available at: http://learningatscale.acm.org/
Stuart Wray has a remarkable blog that I recommend to CS teachers. He shares his innovations in teaching, and grounds them in his exploration of the literature on the psychology of programming. The quote and link below are an excellent example, where his explanation led me to a paper I’m eager to dive into. Stuart has built an interesting warm-up activity for his class that involves robots. What I’m most intrigued by is his explanation for why it works as it does. The paper that he cites by Jones and Burnett is not one that I’d seen before, but it explores an idea that I’ve been interested in for a while, ever since I discovered the Spatial Intelligence and Learning Center: Is spatial ability a pre-requisite for learning in computer science? And if so, can we teach it explicitly to improve CS learning?
The game is quite fun and doesn’t take very long to play — usually around a quarter of an hour or less. It’s almost always quite close at the end, because of course it’s a race between the last robot in each team. There’s plenty of opportunity for delaying tactics and clever blocking moves near the exit by the team which is behind, provided they don’t just individually run for the exit as fast as possible.
But turning back to the idea from James Randi, how does this game work? It seems from my experience to be doing something useful, but how does it really work as an opening routine for a programming class? Perhaps first of all, I think it lets me give the impression to the students that the rest of the class might be fun. Lots of students don’t seem to like the idea of programming, so perhaps playing a team game like this at the start of the class surprises them into giving it a second chance.
I think also that there is an element of “sizing the audience up” — it’s a way to see how the students interact with one another, to see who is retiring and who is bold, who is methodical and who is careless. The people who like clever tricks in the game seem often to be the people who like clever tricks in programming. There is also some evidence that facility with mental rotation is correlated with programming ability. (See Spatial ability and learning to program by Sue Jones and Gary Burnett in Human Technology, vol.4(1), May 2008, pp.47-61.) To the extent that this is true, I might be getting a hint about who will have trouble with programming from seeing who has trouble making their robot turn the correct direction.
An interesting study suggesting that role models and how they’re described (in terms of their achievements, or in terms of their struggles) has an interaction with students’ stereotypes about scientists and other professionals in STEM fields. So there are not just cognitive benefits to learning from failure, but there are affective dimensions to focusing on the struggle (including failures) and not just the success.
But when the researchers exposed middle-school girls to women who were feminine and successful in STEM fields, the experience actually diminished the girls’ interest in math, depressed their plans to study math, and reduced their expectations of future success. The women’s “combination of femininity and success seemed particularly unattainable to STEM-disidentified girls,” the authors conclude, adding that “gender-neutral STEM role models,” as well as feminine women who were successful in non-STEM fields, did not have this effect.
Does this mean that we have to give up our most illustrious role models? There is a way to gain inspiration from truly exceptional individuals: attend to their failures as well as their successes. This was demonstrated in a study by Huang-Yao Hong of National Chengchi University in Taiwan and Xiaodong Lin-Siegler of Columbia University.
The researchers gave a group of physics students information about the theories of Galileo Galilei, Isaac Newton and Albert Einstein. A second group received readings praising the achievements of these scientists. And a third group was given a text that described the thinkers’ struggles. The students who learned about scientists’ struggles developed less-stereotyped images of scientists, became more interested in science, remembered the material better, and did better at complex open-ended problem-solving tasks related to the lesson — while the students who read the achievement-based text actually developed more stereotypical images of scientists.
I haven’t read the new framework myself yet, but the press coverage suggests that this is really something noteworthy. I do hope that there is some serious assessment going on with this new curriculum. I’m curious about what happens when five year olds start programming. How far can they get? In Yasmin Kafai’s studies of Scratch and in Amy Bruckman’s studies of MOOSE Crossing, almost none of the younger students ever used conditionals or loops. But those were small studies compared to a national curriculum. How much transfers forward? If you do an abstract activity (programming) so early, does it lead to concrete operational reasoning earlier? Or does it get re-interpreted by the student when she reaches concrete operational? And, of course, the biggest question right now is: how can they get enough teachers quickly enough?
The new curriculum will be mandatory from September 2014, and spans the breadth of all four ‘key stages’, from when a child first enters school at age five to when they end their GCSEs at 16. The initial draft of the curriculum was written by the British Computer Society (BCS) and the Royal Academy of Engineering in October 2012, before being handed back to the DfE for further tweaks.
By the end of key stage one, students will be expected to ‘create and debug simple programs’ as well as ‘use technology safely and respectfully’. They will also be taught to, ‘understand what algorithms are; how they are implemented as programs on digital devices; and that programs execute by following precise and unambiguous instructions’.
Not everyone is happy about the new curriculum. Neil Brown has a nice post talking about some of the issues. He kindly sent me a set of links to the debate there, and I found this discussion from a transcript of Parliament proceedings fascinating — these are all really good issues.
First, on professional development, the Minister made the point that some money was being made available for some of the professional development work. Does he feel that it will be sufficient? There is a serious issue about ongoing professional development throughout the system, starting at primary level, where updating computer skills will be part of a range of updated skills which all primary teachers will need to deliver the new curriculum. It is also an issue at secondary level, where it may not be easy but is possible to recruit specialist staff with up-to-date computing skills. However, if you are not careful, that knowledge and those skills can fall out of date very quickly.
Secondly, what more are the Government planning to do to attract new specialist computing staff to teach in schools? It is fairly obvious that there would be alternative, better paid jobs for high-class performers in computing. They may not necessarily rush into the teaching profession.
Thirdly, can the Minister confirm that the change in name does not represent a narrowing of the curriculum, and that pupils will be taught some of those broader skills such as internet use and safety, word processing and data processing, so that the subject will actually give people a range of knowledge and skills which the word “computing” does not necessarily encompass?
Fourthly, the teaching will be successful only if it is supported by sufficient funds to modernise IT facilities and to keep modernising them as technology changes. The noble Lord made reference to some low-cost initiatives in terms of facilities in schools. However, I have seen reference to 3D printers. That is fine, it is just one example, but 3D printers are very expensive. The fact is that, for children to have an up-to-date and relevant experience, you would need to keep providing not just low-cost but some quite expensive technological equipment in schools on an ongoing basis. Will sufficient funds be available to do that?
Finally, given that computing skills and the supporting equipment that would be needed are increasingly integral to the teaching of all subjects, not just computing, have the Government given sufficient thought to what computing skills should be taught within the confines of the computing curriculum and what computing skills need to be provided with all the other arts and science subjects that people will be studying, in all of which pupils will increasingly require computing skills to participate fully? Has that division of responsibilities been thought through? I look forward to the Minister’s response.
We just had the ECEP Day at the Computer Science Teachers Association (CSTA) Conference on July 14, where I heard representatives from 16 states talk about their efforts to improve computing education. Special interests, where state legislators have to be involved, what “Computing” means anyway — all of the states reported pretty much the same issues, but each in a completely different context. The issues seem to be pretty much the same in the UK, too.
I’d love to see this new system from MIT compared to Lewis Johnson’s Proust. Proust also found semantic bugs in students’ code. Lewis (and Elliot Soloway and Jim Spohrer) collected hundreds of bugs when students were working on the Rainfall Problem, then looked for those bugs in students’ programs. Proust caught about 85% of students’ semantic errors. That last 15% covered so many different bugs that it wasn’t worthwhile to encode the semantic check rules — each rule would only fire once, ever. My guess is that Proust, which knew what problem the students were working on, would do better than the MIT homework checker, which can only encode general mistakes.
The new system does depend on a catalogue of the types of errors that student programmers tend to make. One such error is to begin counting from zero on one pass through a series of data items and from one in another; another is to forget to add the condition of equality to a comparison — as in, “If a is greater than or equal to b, do x.”
The first step for the researchers’ automated-grading algorithm is to identify all the spots in a student’s program where any of the common errors might have occurred. At each of those spots, the possible error establishes a range of variations in the program’s output: one output if counting begins at zero, for instance, another if it begins at one. Every possible combination of variations represents a different candidate for the corrected version of the student’s program.
I like David Brooks’s opinion pieces quite a bit, and particularly his pieces where he draws on research. The piece linked below touches on an issue that I’ve been wondering about. All this neuroscience data about what part of the brain lights up when — what does it really tell us about how the mind works? Does it actually tell us anything about learning? Brooks’ opinion: Not yet.
These two forms of extremism are refuted by the same reality. The brain is not the mind. It is probably impossible to look at a map of brain activity and predict or even understand the emotions, reactions, hopes and desires of the mind.
I’ve mentioned before how much I enjoy the Computing At Schools online forum. I got involved in a discussion about how to teach teachers programming, and the question was raised: Why do we have to teach programming? Shouldn’t we just teach concepts? Neil Brown (in a blog post that I highly recommend reading) suggested, “We teach programming to make it concrete.” One of the commenters suggested that memory is very concrete. I disagreed, and am sharing here my response (for those who don’t yet belong to CAS), with editing and expansion:
Concreteness and abstraction in computing are difficult to define because, really, nothing in computing is concrete, in the Piagetian sense. Piaget talked about concreteness in terms of sensory input. I’ve heard before that “memory is concrete — it’s really there.” Can you see it? Can you touch it? Sure, you can “see” it in a debugger — but that’s seeing through a program. Maybe that memory is “made up” like any video game or movie special effect. It’s no more “real” than Yoda or Mario. We can sense the output of computation, which can then be Piagetian-concrete, but not the computation itself.
Uri Wilensky (who was a student of Seymour Papert) has a wonderful paper on concreteness. He redefines concreteness as being a quality of relationship. “The richer the set of representations of the object, the more ways we have of interacting with it, the more concrete it is for us.” Uri gives us a new way of measuring abstract-concrete in terms of a continuum.
- Memory is really pretty abstract for the novice. How many ways can a newcomer to computing view it or manipulate it? It might be really concrete if you know C, because you can manipulate memory in many ways in C. You can construct a relationship with it (to use Uri’s term). From Scratch or Python or Java, memory is totally abstract for the novice. There’s no way to directly manipulate it.
- We did Media Computation because images and sounds are concrete. We get sensory input from them. So, computation to manipulate images and sounds gives us concrete ways to explore computation. We can’t see the computation, but as we change the computation and get a different sensory output, we can develop a relationship with computing.
- Threads are hopelessly abstract. You have to be pretty expert, and know how to think about and manipulate processes-as-a-thing, before threads can become concrete.
In some sense, this is not a surprising result. If you purchase (educational) technology without an explicit goal in mind, it’s hard to measure a difference later. See Larry Cuban on being “Oversold and Underused.”
In a review of student survey data conducted in conjunction with the federal exams known as the National Assessment of Educational Progress, the nonprofit Center for American Progress found that middle school math students more commonly used computers for basic drills and practice than to develop sophisticated skills. The report also found that no state was collecting data to evaluate whether technology investments were actually improving student achievement.
“Schools frequently acquire digital devices without discrete learning goals and ultimately use these devices in ways that fail to adequately serve students, schools, or taxpayers,” wrote Ulrich Boser, a senior fellow at the Center for American Progress and the author of the report.
In a sense, what Chris Quintana is doing here is a connectivist MOOC, but one where the student is guided via software-realized scaffolding through a self-study on a topic of their own interest. It’s an interesting idea, to help students organize a wide variety of learning opportunities in support of inquiry learning.
We aim to support cross-context inquiry that spans formal and informal settings by developing Zydeco Sci-To-Go, a system integrating mobile devices and cloud technologies for middle school science inquiry. Zydeco enables teachers and students to create science investigations by defining goals, questions, and “labels” to annotate, organize, and reflect on multimodal data (e.g., photos, videos, audio, text) that they collect in museums, parks, home, etc. As students collect this information, it is stored in the cloud so that students and teachers can access that annotated information later and use it with Zydeco tools to develop a scientific explanation addressing the question they are investigating.
One year, I gave an assignment in my Objects and Design class (in Squeak!) to construct a personal newspaper by reading bits of news (based on user interest) from local news sites. The night before the assignment was due, so many students tested their buggy fetch-and-scrape code on one poor site that they killed the site — a pedagogical denial-of-service attack.
Should I or my students have been arrested and taken away in handcuffs? It seems like the direct computing world analogy from the story quoted below.
Fortunately, the student has now been cleared of charges. It’s still a scary story.
It’s a sad commentary on our alarmist society that a similar deed would probably land a modern day budding Oliver Sacks in jail. That is exactly what it has done to a young aspiring scientist named Kiera Wilmot from Bartow High School in Florida, and in the process it has almost certainly deprived this country of exactly the kind of scientist whose shortage its politicians and educators are so fond of lamenting. The student conducted a common experiment mixing Drano and aluminum foil on the grounds of a school. The exact details are unknown but the incident led to a minor explosion, hurt nobody and damaged no property. This relatively harmless bit of curiosity led to Ms. Wilmot being handcuffed, arrested and expelled from the school. Irrational State Overreach: 1, The Much Touted American Edge in Science: 0. Whatever else the school was trying to achieve, it definitely succeeded in squelching independent scientific curiosity in its students.
I usually really like Annie Murphy Paul’s articles, but this one didn’t work for me. Below are her reasons why TED talk videos work well in learning, with my comments interspersed.
• They gratify our preference for visual learning. Effective presentations treat our visual sense as being integral to learning. This elevation of the image—and the eschewal of text-heavy Power Point presentations—comports well with cognitive scientists’ findings that we understand and remember pictures much better than mere words.
Cognitive scientists like Richard Mayer have found that diagrams and pictures can enhance learning — absolutely. But his work combined diagrams with words (e.g., best combination with diagrams: audio narration, not visual text). This quote seems to suggest that pictures are better than words. For most of STEM, that’s not true. We may have an affinity for visual, but that doesn’t mean that it works better for learning complex material.
• They engage the power of social learning. The robust conversation that videos can inspire, both online and off, recognizes a central principle of adult education: We learn best from other people. In the discussions, debates, and occasional arguments about the content of the talks they see, video-watchers are deepening their own knowledge and understanding.
Wait a minute — isn’t she just saying that TED talks give us something to talk about? TED talks are not themselves inherently social. Isn’t a book discussed in a book club just as effective for “engaging the power of social learning”? What makes TED talks so “social”?
• They enable self-directed, “just-in-time” learning. Because video viewers choose which talks to watch and when to watch them, they’re able to tailor their education to their own needs. Knowledge is easiest to absorb at the moment when we’re ready to apply it.
This was the quote that inspired this blog post. It’s an open question, but here’s my hypothesis. Nobody watches a TED talk for “just-in-time” learning. People watch TED talks for entertainment. “I am about to go to my school board meeting — I think I’ll watch Sir Ken Robinson to figure out what to say!” “I need to be able to guess birthdays — isn’t there a TED talk on that?” There are videos that really work for “just-in-time” learning. TED talks aren’t like that.
• They encourage viewers to build on what they already know. Adults are not blank slates: They bring to learning a lifetime of previously acquired information and experience. Effective video instruction builds on top of this knowledge, adding and elaborating without dumbing down.
It’s absolutely true that effective instruction builds on top of existing knowledge, which is something that the best teachers know how to do — to figure out what students know and care about, and relate knowledge to that. How does a fixed video build on what viewers (all hundreds of thousands of them) actually know? No, I don’t see how TED talks do that.