Posts tagged ‘psychology’
I’m working with Amy Bruckman and Klara Benda on a paper describing the results of a study that Klara did of students taking on-line CS courses. Klara points out in her review of the literature that most retention/attrition models focus on psychological factors, e.g., having appropriate background knowledge, motivation, and metacognitive skills like planning. But the factors that appear in empirical studies of students who drop out, especially in on-line classes, emphasize sociological factors, like changes in job and residence situations, changes in financial status, and family pressures. That’s certainly what Klara found in her study of on-line CS students, and those same issues are echoed in this MSU study.
Depression, a loss of financial aid, increased tuition, unexpected bad grades and roommate conflicts are among key risk factors that lead college students to drop out, according to a study led by Michigan State University researchers.
Not so influential: a death in the family, failure to get their intended major, a significant injury and addiction.
“Prior to this work, little was known about what factors in a student’s everyday life prompt them to think about withdrawing from college,” Tim Pleskac, an MSU assistant professor of psychology and the lead researcher, said in a news release this afternoon.
Many thanks to Alan Blackwell who has resurrected a great old resource and made it available for the psychology of programming and computing education research communities! The book Psychology of Programming (1990) has been out of print for a while. Alan sought out the chapter authors and secured their permission to post the whole thing on the Web, now available at http://www.cl.cam.ac.uk/teaching/1011/R201/. I just got an email a couple of days ago asking for pointers to literature on how expert programmers read code — this is the kind of resource that I can now suggest for answers to that kind of question.
In Alan’s words:
> I’ve done this with permission from Jean-Michel, Thomas and
> David. Needless to say, this is only for educational and research
> use, since copyright remains with the publishers. I would welcome
> links to updated versions of individual chapters from the
> authors, if those were available.
Here’s the Table of Contents of what he’s made available — links to the PDF available at the site:
J.-M. Hoc, T.R.G. Green, R. Samurçay and D.J. Gilmore (Eds) (1990).
Psychology of Programming.
Published by the European Association of Cognitive Ergonomics and Academic Press.
Part 1 – Theoretical and Methodological Issues (introduction)
1.1 Programming, Programming Languages and Programming Methods – C. Pair (pp. 9-19)
1.2 The Nature of Programming – T.R.G. Green (pp. 23-44)
1.3 The Tasks of Programming – N. Pennington and B. Grabowski (pp. 45-62)
1.4 Human Cognition and Programming – T. Ormerod (pp. 63-82)
1.5 Methodological Issues in the Study of Programming – D.J. Gilmore (pp. 83-98)
Part 2 Language Design and Acquisition of Programming (introduction)
2.1 Expert Programmers and Programming Languages – M. Petre (pp. 103-115)
2.2 Programming Languages as Information Structures – T.R.G. Green (pp. 118-137)
2.3 Language Semantics, Mental Models and Analogy – J.-M. Hoc and A. Nguyen-Xuan (pp. 139-156)
2.4 Acquisition of Programming Knowledge and Skills – J. Rogalski and R. Samurçay (pp. 157-174)
2.5 Programming Languages in Education: The Search for an Easy Start – P. Mendelsohn, T.R.G. Green and P. Brna (pp. 175-200)
Part 3 Expert Programming Skills and Job Aids (introduction)
3.1 Expert Programming Knowledge: A Schema-based Approach – F. Détienne (pp. 205-222)
3.2 Expert Programming Knowledge: A Strategic Approach – D.J. Gilmore (pp. 223-234)
3.3 Expert Software Design Strategies – W. Visser and J.-M. Hoc (pp. 235-249)
Part 4 Broader Issues
4.1 The Psychology of Programming in the Large: Team and Organizational Behaviour – B. Curtis and D. Walz (pp. 253-270)
4.2 Research and Practice: Software Design Methods and Tools – B. Kitchenham and R. Carn (pp. 271-284)
Education has never been much for replication studies, but given what this article says about psychology, I’d bet that we would have trouble replicating some of our earlier education findings. I don’t see this article as condemning the scientific method so much as condemning our ability to find, define, and control all independent variables. The world changes, people change. Anything which relies on a steady-state world or human being is going to be hard to replicate over time.
Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
Really interesting result! Flies in the face of the original Worked Examples research by Sweller et al., but not the later work that emphasized skills testing as well as examples. It supports the claims of Peer Instruction, the idea of lots of mini-quiz-like questions mixed into the lecture.
Taking a test is not just a passive mechanism for assessing how much people know, according to new research. It actually helps people learn, and it works better than a number of other studying techniques.
The research, published online Thursday in the journal Science, found that students who read a passage, then took a test asking them to recall what they had read, retained about 50 percent more of the information a week later than students who used two other methods.
One of those methods — repeatedly studying the material — is familiar to legions of students who cram before exams. The other — having students draw detailed diagrams documenting what they are learning — is prized by many teachers because it forces students to make connections among facts.
These other methods not only are popular, the researchers reported; they also seem to give students the illusion that they know material better than they do.
A really interesting piece in the NYTimes, which is relevant for this blog in a couple of ways. First, the piece indicts computing technology for preventing us from having downtime. Second, the suggestion is that this downtime is necessary for better learning. Thus, those of us in computing education are in the odd position of trying to teach something that one might need to get away from in order to learn it!
Cellphones, which in the last few years have become full-fledged computers with high-speed Internet connections, let people relieve the tedium of exercising, the grocery store line, stoplights or lulls in the dinner conversation. The technology makes the tiniest windows of time entertaining, and potentially productive. But scientists point to an unanticipated side effect: when people keep their brains busy with digital input, they are forfeiting downtime that could allow them to better learn and remember information, or come up with new ideas.
Lectures have a black eye on college campuses today. We’re told that they are useless, and that they are ineffective without “explicit constructionism.” We’re told to use active learning techniques in lecture, like clickers. I’m realizing that there’s nothing wrong with lecture itself, and that the psychology results tell us that lectures should be a highly efficient form of learning. The problem is that there is an interaction between lecture as learning intervention and our students. That is an education (or broadly, a learning science) result, and it’s important to note the distinction between education (as instructional engineering, as psychology-in-practice) and psychology.
I just served on a Psychology Masters thesis committee. In 2009, Micki Chi published a paper where she posited a sequence of learning approaches: from passive, to active, to constructive. She suggested that moving along the sequence resulted in better learning. While her paper drew on lots of dyad comparison studies between two of those styles of learning, nobody had compared all three in a single experiment. This Masters student tested all three at once. He put subjects into one of three conditions:
- Passive: Where students simply read a text on ecology drawn from a Sophomore-level textbook.
- Active: Where students either (a) highlighted text of interest or (b) copy-pasted key sections into “Notes.”
- Constructive: Where students either (a) created self-explanations of the text or (b) created questions about the text.
He had a test on the content immediately after the training, and another a week later. Bottom line: No difference on either test. But the Masters student was smarter than just leaving it at that. He also asked students to self-report on what they were thinking about when they read the text, like “I identified the most important ideas” or “I summarized (to myself) the text” (both signs of “active” cognition in Chi’s paper), or “I connected the text to ideas I already knew” or “I made hypotheses or predictions about the text” (“constructive” level). Those self-reported higher-levels of cognitive processing were highly correlated with the test scores. Micki Chi called these “potential covert activities” in these kinds of studies. That’s a bit of a misnomer, because in reality, it’s those “covert” activities that you’re really trying to engender in the students!
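To make the comparison concrete: the interesting correlation here isn’t assigned condition vs. test score, but self-reported cognitive processing vs. test score. Here’s a minimal sketch of that analysis with a hand-rolled Pearson correlation. The function and all the numbers are my own invention for illustration, not data from the thesis:

```python
# Hypothetical sketch: correlate self-reported "constructive" processing
# with delayed-test scores, ignoring the assigned condition entirely.
# All numbers below are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented self-report ratings (1-5, e.g., "I made hypotheses about the text")
constructive = [1, 2, 2, 3, 4, 4, 5, 5]
# Invented scores on the one-week delayed test
test_scores = [52, 55, 60, 63, 70, 72, 78, 80]

r = pearson_r(constructive, test_scores)
print(round(r, 2))  # a strong positive correlation
```

The point of the sketch: with data shaped like this, condition means can be flat while the self-report correlation is large, which is exactly the pattern the thesis found.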
The problem is that Georgia Tech students (the subjects in the study) are darn smart and well-practiced learners. Even when “just reading” a text, they think about it, explain it to themselves, and summarize it to themselves. They think about it, and that’s where the learning comes from. All the “active learning” activities can help with engendering these internal cognitive activities, but they’re not necessary.
Lectures are a highly-efficient form of teaching. Not only do they let us reach many students at once, but they play upon important principles of instructional design like the modality effect. Hearing information while looking at related pictures (e.g., diagrams on Powerpoint slides) can allow for better learning (more information in less time) than just reading a book on your own. Coding live in lecture is a “best practice” in teaching computer science. I don’t dispute all the studies about lectures, however — lectures don’t usually work. Why?
We add active learning opportunities to lectures because students don’t usually learn that much from a 90 minute lecture. Why? Because it takes a lot of effort to keep learning constructively during a 90 minute lecture. Because most students (especially undergraduates) are not great learners. This doesn’t have anything to do with the cognitive processes of learning. It has everything to do with motivation, affect, and sustained effort to apply those cognitive processes of learning.
Maybe it has to do with the fact that most of these studies of lectures take place with WEIRD students: “Western, educated, industrialized, rich, and democratic cultures.” A recent study in the journal Science shows that many of our studies based on WEIRD students break down when the same studies are used with students from different cultures. Maybe WEIRD students are lazy or inexperienced at focused learning effort. Maybe students in other cultures could focus for 90 whole minutes. In any case, I teach WEIRD students, and our studies of WEIRD students show that lectures don’t work for them.
There’s another aspect of this belief that lectures don’t work. I have attended talks at education conferences lately where the speaker announces that “Lectures don’t work” and proceeds to engage the audience in some form of active learning, like small group discussion. I hate that. I am a good learner. I take careful notes, I review them and look up interesting ideas and referenced papers later, and if the lecture really captured my attention, I will blog on the lecture later to summarize it. I take a multi-hour trip to attend a conference and hear this speaker, and now I have to talk to whatever dude happens to be sitting next to me? If you recognize that the complete sentence is “Lectures don’t work…for inexperienced or lazy learners,” then you realize that using “active learning” with professionals at a formal conference is insulting to your audience. You are assuming that they can’t learn on their own, without your scaffolding.
When I was a student, I remember being taught “learning skills” which included how to take good notes and how to review those notes. I don’t know that those lessons worked, and it’s probably more effective to change lecture than to try to change all those students. We do want our students to become better learners, and it’s worth exploring how to make that happen. But let’s make sure that we’re clear in what we’re saying: Lectures don’t work for learning among our traditional American (at least) undergraduate students. That’s not the same as saying that lectures don’t work for learning.
Lately, I’ve been working with groups of psychologists and education researchers, listening to their stories, and watching how they negotiate their way around topics. I came away with this new appreciation of how different the two fields are. They’re not from two different planets, so the old “Mars vs. Venus” metaphor doesn’t work. It’s more like they’re on the same planet, even same land mass, but different continents. You can drive from one to the other, but it’d take some work.
Psychology is a real science. They can measure things. They can go deeper and deeper with their measurements, like the very cool fMRI work that’s going on now to track differences at the neural level. They do interesting things, but more in the laboratory than in the messy classroom.
Education is big and messy and exciting. It cares about such different things. Education research wants to be a science, but it’s so hard to measure things exactly when you’re dealing with real human learning on learning objectives that are complex and necessarily vague. So, replication and predictability (hallmarks of science) become nearly impossible. Education researchers do experiments and do measure phenomena of interest, but the experiments are so different that psychologists (often, in discussions I’ve had, in my experience [insert several other caveats here]) don’t really understand the issues. For example, education researchers are trying to convince teachers, so they are very worried about perceived bias, hence the use of external evaluators to do the data gathering and analyses. Psychologists seem to mostly trust one another — if you said you collected this data, you probably did. (Maybe they should sit in more curriculum committee meetings?) Education researchers look at things like student attitudes and degree retention/attrition rates that don’t map to constructs that psychologists can measure with validated tests and fMRI. Teachers often don’t have the background to understand the issues that the psychologists talk about, so education researchers have to use different language than psychologists.
I don’t get the sense that psychologists think badly of education researchers. Education researchers just ask such different questions and work in such a different frame of reference. Psychologists I’ve worked with often get ideas for interesting new experiments from the work of education researchers.
On the other hand, education researchers can find psychologists frustrating. The education researcher might say (making all this up to explain my point), “Wait! We already know about student learning on that topic! It works like this.” and the psychologist responds, “Well, we haven’t considered these four other possibilities, and we’ll need to do controlled experiments on each,” and the ed researcher pulls out his hair saying, “But that’s such a waste of time!” Then the education researcher looks at the psychologists’ results and says, “How can you say that with so few subjects?” and the psychologist pulls out the power analysis and ANOVA and says, “See! It’s significant!” and the education researcher says, “But I have 150 students in my lecture, and I’m positive that my students would learn differently than that!” Then the psychologist comes out with some really terrific nugget that a teacher could use to improve their teaching, but presents it with a p value and a discussion on the significance test of residuals. The ed researcher screams, “Teachers won’t get it when you say it like that!”
Of course, I’m making gross generalizations, and no specific education researcher or psychologist talks or thinks like that. Lots of psychologists work with teachers and education researchers, and there are lots of education researchers who work perfectly well in psychology. There are lots of people who do travel back and forth from Kansas City to Rio de Janeiro. (You can take a plane. You don’t have to drive.) Being someone who does commute back and forth (who metaphorically mostly lives in Rio, but has visited KC enough that I don’t get lost when traveling the main streets), I find the differences striking and interesting. We’re all working on the same problem, but from different directions. Maybe we’ll meet somewhere in between. Bring your fMRI data, and I’ll grab my course evaluations, and we’ll have lunch in Acapulco.
Last night, Barb and I went out to dinner with our two teens. (The house interior is getting painted, so it was way easier than trying to eat in our kitchen.) We got to talking about the last academic year. Our eldest graduated from high school last week, with only one B in four years, including 7 AP classes. (While I take pride in our son, I do recognize that kids’ IQ is most highly correlated with mothers’ IQ. I married well.) Our middle child was moping a bit about how hard it was going to be to follow in his footsteps, though she’s doing very well at that so far.
Since our middle child had just finished her freshman year, we asked the two of them which teachers we should steer our youngest toward or away from. As they compared notes on their experiences, I asked about their biology teacher, Mrs. A. I couldn’t believe the homework load that Mrs. A. sent home with the kids each night — almost all worksheets, fill-in-the-blank, drill-and-practice. Sometimes, our middle child would have 300 questions to complete in a night!
Both our kids loved Mrs. A! No, they didn’t love the worksheets, but they said that they really liked how the worksheets “drilled the material into our heads.” “She’s such a great teacher!” they both said. They went on to talk about topics in biology, using terms that I didn’t know. Our middle child said that she’s looking forward to taking anatomy with Mrs. A, and our eldest said that many of his friends took anatomy just to have Mrs. A again.
I was surprised. My kids are pretty high-ability, and this messes with my notions of Aptitude-Treatment Interactions. High ability kids value worksheets, simple drill-and-practice — what I used to call “drill-and-kill”?
On the other hand, their experience meshes with the “brain as muscle” notions that Carl Wieman talked about at SIGCSE. They felt that they really learned from all that practice in the fundamentals, in the language and terms of the field. Cognitive load researchers would point out that worksheets have low cognitive load, and once that material is learned, students can build on it in more sophisticated and interesting ways. That’s definitely what I heard my kids doing, in some really interesting discussions about the latest biology findings, using language that I didn’t know.
I realized again that we don’t have (or at least, use) the equivalent of worksheets in computer science. Mathematics has them, but my sense is that mathematics educators are still figuring out how to make them work well, in that worksheets have low cognitive load but it’s still hard getting to what we want students to learn about mathematics. I suspect that computational worksheets would serve mathematics and computer science better than paper-based ones. A computational worksheet could allow for dynamics, the “playing-out” of the answer to a fill-in-the-blank question. Much of what we teach in introductory computer science is about dynamics: about how that loop plays out, about how program state is influenced and manipulated by a given process, about how different objects interact. That could be taught (partially, the foundational ideas) in a worksheet form, but probably best where the dynamics could be made explicit.
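As a toy illustration of what I mean by “playing out” the dynamics: a computational worksheet item could ask the student to fill in the value of each variable after each loop iteration, and then let the computer reveal the actual trace for comparison. This is my own sketch of the idea, not an existing tool:

```python
# Toy "computational worksheet" item: the student predicts the program
# state (i, total) after each iteration of a running-sum loop; the
# computer then plays out the actual trace so the student can check.

def trace_running_sum(n):
    """Record the (i, total) state after each iteration of the loop."""
    states = []
    total = 0
    for i in range(1, n + 1):
        total += i
        states.append((i, total))
    return states

# The "answer key" revealed after the student fills in the blanks:
for i, total in trace_running_sum(4):
    print(f"after i={i}: total={total}")
```

The fill-in-the-blank part stays low cognitive load, but the reveal makes the dynamics explicit, which paper worksheets can’t do.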
Overall, though, my conversation with my kids about Mrs. A and her worksheets reminded me that we really don’t have much for CS learners before throwing them in front of a speeding interpreter or compiler. A blank editor window is a mighty big fill-in-the-blank question. We need some low cognitive load starting materials, even for the high ability learners.
The March 2010 Communications of the ACM (CACM) includes publication of two Blog@CACM pieces, a sort of point-counterpoint. CACM published my piece about “How we teach computer science is wrong,” where I argued that dumping students in front of a speeding compiler is not the best way to ramp students up into computing, and that we might think about instructional design mechanisms like worked examples. CACM also published Judy Robertson’s piece “Introductory Computer Science Lessons — Take Heart!” where she argued that what we actually do in introductory computing has the right pieces that good instructional design research recommends. When I heard that they were going to publish these two pieces together, I thought it was a great idea.
The title they chose was, “Too much programming too soon.” I think it’s really about the definition of “programming.” I do think a novice facing an empty edit buffer in an IDE is an awful and scary way to get started with computing. However, I deeply believe that programming is a wonderful part of computer science, but programming more broadly than “Debugging a blank sheet of paper.” It’s creative, powerful, awesome, and often surprising. There are lots of ways of getting started with programming that are much less scary, such as Squeak Etoys, Alice, and Scratch. I also think that we should explore reading examples, modifying existing code, debugging code, and new kinds of activities where students do limited text programming, some form of “reduced cognitive load” activities. We need broader definitions of what “programming” means.
My daughter turned 12 on Tuesday, and unfortunately, she was ill. Dad hung out with her, and played whatever video games she wanted. One of those she picked was Guitar Hero, so I finally got time to play it repeatedly. Y’know — it was kind of fun!
Back in December, when I first got Guitar Hero, I wrote a blog post where I agreed with Alan that Guitar Hero is not nearly as good as learning a real musical instrument. At that time, I wrote:
Guitar Hero might still be fun. But it’s just fun. I might learn to do well with it. But it would be learning that I don’t particularly value, that makes me better.
Now I’m thinking that I might want to eat those words. I found Guitar Hero hard. I own a guitar and have taken guitar lessons for two semesters. (Even putting it in terms of “semesters” suggests how long ago it was.) Some of my challenges in learning to play a guitar included doing two different things with my hands, and switching chords and strumming to keep the rhythm. I noticed that that’s exactly what I was having a hard time doing with Guitar Hero. I also noticed the guitar parts of rock songs — songs that I had heard a million times before but never had noticed all the guitar parts previously. I noticed because I missed my cues, and so those guitar parts were missing. While I have known Foghat and Pat Benatar for literally decades, Guitar Hero had me listening in a different way.
It occurred to me that Guitar Hero could be a form of scaffolding, a reduction in cognitive load that allows one to focus on one set of skills before dealing with all the skills at once. Cognitive scaffolding is much like the physical scaffolding, “a temporary support system used until the task is complete and the building stands without support.” Now, Guitar Hero would only be successful as a form of scaffolding if it actually leads to the full task, that it doesn’t supplant it. In education terms, if Guitar Hero could fade and if it doesn’t lead to negative transfer, e.g., “I’m great at Guitar Hero, but a real guitar is completely different.”
I did some hunting for studies that have explored the use of Guitar Hero to scaffold real music education. I could not find any educational psychology or music education studies that have explored Guitar Hero as a form of scaffolding or as a tutor to reduce cognitive load. I did find papers in music technology that hold up Guitar Hero as a model for future educational music technology! My favorite of these is a paper by Percival, Wang, and Tzanetakis that provides an overview of how multimedia technologies are being used to assist in music education. They point out additional lessons that students are learning with tools like Guitar Hero that I hadn’t noticed. For example, the physical effort of playing an instrument is more significant than non-players realize, and Guitar Hero (and similar tools) build up the right muscles in the right ways (or so they theorize — no direct studies of Guitar Hero are cited). The paper also argues that getting students to do something daily has a huge impact on music learning and performance, even if it’s a tutorial activity.
Now here’s the critical question: Does Guitar Hero lead to real music playing, or is it a stopping point? Nobody is arguing that playing Guitar Hero is making music, that I can see. Does it work as scaffolding?
I don’t know, but I’m now wondering: Does it matter? If Guitar Hero stops some people from becoming musicians, then it is a problem. If some people, who might have pushed themselves to become musicians, decide that Guitar Hero is hard enough, then Guitar Hero is doing a disservice. But if that’s not true, and people who never would become musicians, have a better appreciation for the music and a better understanding of the athleticism of musicians because of Guitar Hero, then Guitar Hero is providing a benefit.
These are computing education questions. You have all heard faculty who insist on using Eclipse in their introductory classes, because that’s what real software engineers use. We have recently read in comments on this blog that students should use “standard tools” and “learn science the way scientists understand it.” We also know from educational psychology that engaging introductory students in the same activity as experts only works for the best students. The bottom half of the students get frustrated and fail.
We need Guitar Hero for computer science. We need more activities that are not what the experts do, that are fun and get students to practice more often, that are scaffolding, and that reduce cognitive load. We have some, like Scratch and eToys. We need more. Insisting on the experts’ tools for all students leads to the 30-50% failure rates that we’re seeing today. We have to be doing more for the rest of the students.
Economics is a fascinating field. It’s psychology-of-masses, a form of psychological engineering, and the closest thing we have to Hari Seldon’s psychohistory (from Asimov’s Foundation series). It’s a study of how people make choices in order to maximize their benefit, their utility. It is not only about money–money is just a way of measuring value, about some common sense of the potential of some consumable for providing utility. I’ve been reading more economics this summer, and that’s got me thinking about what economic theory might have to say about computing education.
Students, especially in undergraduate education, are clearly economic decision makers. They choose their classes. That isn’t to say that they are our customers whose wants we must meet. It means that we provide consumables (classes) under various rule sets, and the students seek to maximize their benefit.
What students want from higher education (that is, what utility the classes are meant to provide) these days isn’t in much doubt. Most studies of higher education that I’ve read suggest that a big change occurred in the 1970′s, and that since then, over 90% of incoming students are attending higher education in order to get a better job and improve their socioeconomic class. There is some evidence suggesting that, by the time students are in their fourth year, they value education for its own sake more. Students in their first years, on the whole, make choices based on their job prospects.
We’ve talked in this blog about why a student should study computer science. One argument is because of the value of computing as a field and the insights that it provides. Smart students will probably recognize that learning computing for those reasons will result in greater utility over the long run. How do we get students to see value, to receive benefit from what we know will help them more in the long run? Is it possible to teach students new and better utility functions? Can we help students to realize the greater utility of valuing knowledge, even from their first years in higher education? That’s an interesting question that I have not seen any data on.
What if we simply say, “This is the way it is. I’m teaching you this because it will be the best for you in the long run”? Paul Romer’s work on rule sets has been describing how the rules in effect in a country or a company can encourage or discourage innovation, and encourage or discourage immigration and recruitment. He would point out that higher education is now a competitive market, and deciding to teach for what the students should value is creating a set of rules. Students who don’t value those rules will go elsewhere. Those students who stay will probably succeed more, but the feedback loop that informs us in higher education that we’re doing the right thing doesn’t currently exist. Instead, we simply have lower enrollments and less tuition–not the right feedback.
It’s that last part, about the feedback on teaching, that I have been specifically thinking about in economic terms. Malcolm Gladwell wrote a fascinating New Yorker piece last December about the enormous value of having a good teacher. What makes for a good teacher? Maybe those who create effective rule sets, who create incentives for student success? What provides utility for teachers? How do we make sure that teachers receive utility for good teaching?
How do we recognize and reward success in teaching? I listened to a podcast of a lecture by William Wulf who points out how badly we teach in engineering education. In economic terms, that’s not surprising. I don’t know of research into what university teachers value in terms of teaching. What is the utility function for a higher-education teacher, a faculty member? Job prospects and tenure are based on publication, not teaching, at least in research universities. When we do evaluate teaching, how do we do it?
- By measuring learning? We’ve already pointed out in this blog how very hard it is to do that right. Teachers use examinations and other forms of assessment. Are they measuring the right things? The research that I’ve seen suggests that grades are only rough measures of learning. If we were going to measure learning as a way of rewarding faculty to incentivize better teaching, we would need some external measure of learning apart from grades, and we would need that measurement to be meaningful, reflecting what we really value in student learning.
- By measuring student pass rates? Wulf might say, “If only!” He points out that correcting our 50% dropout rate in engineering (and computing!) education would alone dramatically improve our enrollment numbers. Would we be dumbing down our education offerings? Honestly, how would we know (see previous bullet)?
- Instead, we most often just ask the students. “Was this a good class? Was this teacher a good teacher?” This gets back to student as consumer, which is a step beyond decision maker. Are they the right ones to make this determination? Is the end of the class the right time for a student to be able to evaluate whether the class was worthwhile?
Higher education teaching will probably improve once we figure out how to give reasonable feedback on teaching quality, which could then shape teachers’ perception of benefit or utility. As Gladwell and Wulf point out, getting it right would dramatically improve student quality and enrollment.
I’m listening to Paul Romer’s Seminar about Long Term Thinking, and got to thinking about the SALT podcasts and TED talks. These really are remarkable educational opportunities — really smart people, who are also really good at communicating their ideas to a lay audience. These are not necessarily front-line scientists. Michael Pollan and Malcolm Gladwell, for example, are both journalists who focus on taking important ideas from science (and economics and…) and making them accessible. Why is that uncommon? We have relatively few people who do this kind of thing, as opposed to all scientists or even all educators. Is it because that combination of talents is so rare, or because there is little market, interest, or demand for it?
Seymour Papert once argued that educational curricula should be evaluated like art — don’t try to identify the best, but instead argue about how well this example expresses something, or how accessible another one is, or how another one leaves people thinking and talking for years later. Compare curricula for how they reach and engage people, not for a measurable, numeric bottom line. Wouldn’t it be great to have so many compelling CS1 curricula that we could have a CS1 “art gallery” and compare them along the lines Seymour described?
Let’s imagine that we wanted more education that was engaging, compelling, and good at explaining things to people. We’d have to re-organize how we teach and structure education. In fact, that would go against the basic structuring mechanisms of universities.
When I was at the University of Michigan, there was a lot of excitement about the proposed increased connections between the School of Education and the School of Social Work. At some places, like Northwestern University, these are housed in the same schools. That makes sense because the goals of Social Work are very similar to the goals of Education — improved human development, meeting human potential, individual self-reliance, and so on.
However, if we grouped scholars in terms of methods, we would structure universities very differently. I’ve always found it odd that Physics and Mechanical Engineering are in separate schools/colleges at most universities, and the same with Chemistry and Chemical Engineering. Aren’t these really the same things, relying on the same theories, doing similar experiments? Instead, we group by outcomes. Civil, Chemical, and Mechanical Engineering are all about applying science to solve problems for people, at a large scale (by creating bridges and buildings, chemical plants, manufacturing capacity). Never mind that, from what I see, faculty in Chemistry and Chemical Engineering do much more similar things than faculty in Civil Engineering and Chemical Engineering do.
If we did group by methods rather than outcomes, what disciplines would be the natural collaborators for Education? What disciplines would lead us to think about how we do things, so that we could create the kind of curriculum-as-art that Seymour described?
- Journalism, which also cares about methods for finding “truth,” for conveying that to people in ways that are understandable and compelling, and for structuring the story so that the punchline is up front, and the greater detail is at the end.
- Theater, because lecture is a kind of performance. Experimental Theater does a better job of getting the audience interacting with the performance than do most lectures!
- Medicine, which is (much more than Education) about meeting individual needs and figuring out how to tailor broad approaches to health for the individual’s particular combination of strengths and illnesses.
- Film and Television Studies, which know a lot about using multiple media for creating a compelling story. Everyone who does On-Line/Distance Education should take a Film Class, to figure out how you package a compelling story/experience for others whom you never see.
- Theme Park Designers (yeah, I know it’s not an academic discipline, but maybe it should be). I’m a big Disney Imagineering fan. Imagineers know how to draw you into the ride with the prestory, setting expectations and explaining the context, and then giving you an experience that you talk about and remember later.
- Economics, because in the end, most Educational decisions are economic ones. We know how to get two-sigma improvements in learning — give everyone a personal tutor. That’s too expensive to do at scale. Everything else we do is a step down from that, and if we knew how economists think about these trade-offs, it might help us in Education recognize our trade-offs and where we’re making them.
- Psychology, because Education is just Psychology Engineering. If in a methods-oriented University we lump Chemistry and Chemical Engineering together, we certainly should put the Psychologists and the Education faculty in the same building.
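The “two-sigma improvements” mentioned above refer to Benjamin Bloom’s finding that students taught by a personal tutor perform about two standard deviations above the average of conventionally taught students. As a minimal sketch of what that magnitude means (assuming roughly normally distributed scores), a +2σ student outperforms nearly all of a conventional class:

```python
from statistics import NormalDist

# Under a normal distribution of class scores, the fraction of
# students scoring below a student who is 2 standard deviations
# above the mean is the CDF evaluated at z = 2.
fraction_below = NormalDist().cdf(2.0)

print(f"A +2-sigma student outperforms {fraction_below:.1%} of the class")
```

In other words, the average tutored student lands around the 98th percentile of the conventional classroom, which is why tutoring is the benchmark that every cheaper intervention is measured against.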
Okay, I’ll get back to my Faculty Summit talk preparation now, but I’m thinking about how the quality of education should be as much about the student’s experience as about the student’s performance on the test.