Posts tagged ‘contextualized computing education’
I finished up the “Georgia Computes!” report on our first four years just before the holidays. One of the evaluation studies we did was to look at the contexts that we use in our Girl Scout workshops and how those contexts influenced student attitude change. We asked students before and after each event (for everything — summer camps, YWCA afterschool activities, as well as Girl Scout camps) whether they agreed or disagreed with eight statements:
1. Computers are fun
2. Programming is hard
3. Girls can do computing
4. Boys can do computing
5. Computer jobs are boring
6. I am good at computing
7. I like computing
8. I know more than my friends about computers
In one study, we looked at a set of workshops over a multi-year period with over 600 Girl Scouts involved. We looked at where we got changes in attitudes, and computed the effect size. Here’s one of the tables of results:
This table shows the number of Girl Scout workshops that we had with each context, the number of large/medium/small effect sizes that we saw, and the total number of effects. What we see here is that Pico Crickets and Scratch have the most effect: the most large effects, and the most overall effects. We’ve done a lot of different things in our robotics workshops, from following mazes to singing-and-dancing robots. Lego Mindstorms workshops (seven different ones, using a variety of activities) had only small effects on changes in attitudes. This isn’t to say that Lego robotics can’t be an effective context for improving Girl Scouts’ attitudes about computing; we are just finding that it is harder than with these other contexts. I hope that someone replicates this study with an even larger n, showing an approach to using Lego robotics with Girl Scouts that leads to many large effects on attitudes. We just haven’t been able to find that yet.
Over the Christmas holiday, our extended family has been playing a bunch of great Wii games, including karaoke, “Just Dance,” and various Rock Band games. Barb and I discovered this morning that we were thinking the same thing about these games: what a great context for learning programming! Barb was noting that “Just Dance” uses a small icon to represent (abstraction!) a particular dance move, which is then repeated several times (iteration!). I was thinking about the great computing and media ideas required to build this kind of software: from digital signal processing to detect pitch, to the ubiquitous computing ideas involved in sensing the world (e.g., the accelerometers used to detect body motion in the dance games). We could use an inquiry-based approach to teach computing through these (amazingly popular!) games, e.g., “How do you think Rock Band figures out if you’re singing the right pitch?” and “How accurate do you think the motion detection in ‘Just Dance’ is?”
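That first inquiry question has a concrete answer that students could discover for themselves. Here is a minimal sketch of one classic approach, naive autocorrelation, in Python. This is my own illustration under simplifying assumptions (a clean synthetic tone, integer lag search); it is nothing like a real game’s DSP pipeline, and all the names are mine:

```python
import math

def estimate_pitch(samples, rate):
    """Estimate a signal's fundamental frequency by naive autocorrelation:
    find the lag (delay) at which the signal best matches itself."""
    n = len(samples)
    best_lag, best_score = None, float("-inf")
    # Search lags corresponding to pitches between roughly 1000 Hz and 50 Hz.
    for lag in range(rate // 1000, rate // 50):
        score = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return rate / best_lag

# A pure 440 Hz tone sampled at 8 kHz.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(2048)]
print(round(estimate_pitch(tone, rate)))  # near 440 (integer lags limit precision)
```

Real games use far more robust methods, but the core idea — find the delay at which the waveform repeats — is simple enough to drive a classroom discussion.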
This is how we should identify contexts to use in contextualized computing education. What are the application areas that students find intriguing? What computing ideas do we want to teach, and which of them can be taught through those areas? Even though we may like robotics, if the student audiences that we’re seeking don’t, then it’s not a great context. There are many great contexts out there, many that are even more popular and even more powerful than what we use today. People like to sing and dance, even more than making robots sing and dance. Learning to build software to support that sounds like a great context.
Occasionally, I have been told that I made a mistake in my career, by focusing on computing education research rather than “real” computer science research. My first CS advisor at Michigan (before Elliot Soloway got there) told me that I shouldn’t do a joint CS-Education degree, because no CS department would hire me. (Maybe he was right — I was hired into the College of Computing.) Yesterday was the first time I was hit from the other side.
An Education school professor asked me why I was bothering with this computer science education stuff rather than doing “real” education research. In his view, the things I’m working on have already been done in education research. His point was well-taken. My contextualized computing education is a variation of situated learning, which is well-known among education researchers. Much of the work we’re doing (e.g., in developing assessments, in investigating teacher identity or student misperceptions) is work that has already happened in other fields, so doing this work doesn’t advance our understanding of education. Frankly, he thought I was wasting my time.
I had two answers for him. First, pedagogical content knowledge (PCK) differs from domain to domain, by definition. PCK is the knowledge that a teacher has about how to teach a given domain. It’s more than knowing the domain: it’s knowing what problems students encounter and what approaches have worked best to explain concepts and skills in that domain. Developing PCK is a domain-by-domain activity, and it’s necessary for creating methods courses to teach new teachers. Second, I suggested that I had a practical lever in computer science. I am a computer science teacher, and I know how to talk to computer science teachers. I don’t have any particular insight into how to express education ideas to humanities or social studies teachers, for example. So, I have a greater opportunity to create change in computer science education.
As always happens, I thought of my best answer after we parted ways. Somebody has to interpret general findings for a given domain. I don’t read medical journals to figure out how best to feed my family. I don’t read satellite imagery to figure out what tomorrow’s weather is going to be. Some computer scientist has to read the general education literature to explain (and explore, since it’s not always obvious) how a particular insight or finding applies to CS teaching, and then try it so that others can be convinced. That’s part of what I do. I’m not inventing so much as interpreting and applying. That, too, is scholarship.
I should point out that I don’t completely agree with his point. Yes, most of what I do is neither education research (in general) nor computer science research (in general), but it does happen from time to time. Our paper on developing an educational Wiki at CSCW 2000 is (I believe) the first report on Wikis in the ACM Digital Library. Sometimes work at the edge of disciplines can advance or influence the work within the discipline.
Work in domain-specific educational research is typically disliked by many practitioners of the domain. That’s been true in physics, chemistry, biology, and engineering. It’s also the case that domain-specific educational research is sometimes rejected by those in education. This was just my first time experiencing it.
I’m writing this from a hotel room in Toronto, Ontario. (This year, it seems like I can’t stay in the US for too long.) I’m visiting the University of Toronto for the next couple of days.
Tomorrow, I’m giving an informal talk on my view of the State of CS Education Research. I’m excited about this talk. It’s not a well-practiced, well-groomed talk, e.g., it has the most slides with just bulleted text of any talk I’ve given in years. They scheduled a couple-hour block for me to tell stories about my recent students’ work and about the work that I want to do next, which is not something I do in my standard DLS/keynote talk. For those of you who read this blog, you already know what I’m going to say: it’s about my students’ work, about worked examples and phonics, about why textbooks are bad for CS Ed, and about why distance education is important for CS10K.
On Tuesday, I’ll give my talk on “Meeting Everyone’s Need for Computing” where I’ll argue that teaching everyone on campus about computer science is an old but good idea. I’ll update a version of the talk that I gave in Jinan — various versions of the talk are at
if you’re interested.
When I get back (somewhere in the boundary of very late Tuesday and very early Wednesday), I’ll be recovering, and then it’ll be the American Thanksgiving holiday. (I understand Toronto had its “Santa Claus Parade” this morning, so it’s officially already the Christmas season here.) I expect to spend less time blogging this week than usual. Happy Thanksgiving!
I finally finished Jerome Bruner’s Towards a Theory of Instruction on the flight home from China yesterday. It was a fitting conclusion to this crazy travel year I’ve had — I started it on the flight to Doha last May, reading it during the take-offs/landings when I couldn’t read ebooks. I found it fascinating, but feel that I need to re-read it to get more out of it.
I found a lot of support for the contextualized computing education approach that we have been developing. Bruner talks about the value of context, and how school so often separates learning from its purpose-in-context.
“Note though, that in tens of thousands of feet of !Kung film, one virtually never sees an instance of ‘teaching’ taking place outside the situation where the behavior to be learned is relevant.”
“The change in the instruction of children in more complex societies is twofold. First of all, there is knowledge and skill in the culture far in excess of what any one individual knows. And so, increasingly, there develops an economic technique of instructing the young based heavily on telling out of context rather than showing in context.” (p.151)
Bruner points out that this can be an advantage, if managed well:
“At the same time, the school (if successful) frees the child from the pace setting of the round of daily activity. If the school succeeds in avoiding a pace-setting round of its own, it may be one of the great agents for promoting reflectiveness.” (p. 152)
Bruner’s solution, though, goes well beyond teaching within a context, as we do in Media Computation. I think Bruner would like MediaComp based on the quote below, but he goes on to propose a goal of “problem finding.”
“By school age, children have come to expect quite arbitrary and, from their point of view, meaningless demands to be made upon them by adults–the result, most likely, of the fact that adults often fail to recognize the task of conversion necessary to make their questions have some intrinsic significance for the child. [Ed: That’s what I think we do in MediaComp.] Children, of course, will try to solve problems if they recognize them as such. But they are not often either predisposed to or skillful in problem finding, in recognizing the hidden conjectural feature in tasks set them. But we know now that children in school can quite quickly be led to such problem finding by encouragement and instruction.” (pp. 157-158)
That’s a really interesting idea. How would we encourage students to do problem finding as a way of learning computer science?
I think we do that already with our open-ended homework assignments. When we ask students to build a photo collage, a music piece, or a movie, with only general goals, we find that students often develop goals for themselves that are beyond what we have explained how to do. So, students explore, play, and invent new kinds of effects to achieve their goals. That’s problem-finding.
Bruner’s examples of problem finding go beyond that, though. He talks about students looking at specific lessons (about cheetahs attacking baboons), then extrapolating from there to exploring broader issues, e.g. of ecological balance, driven by student questions and hypotheses. Could we do that in computer science?
Here’s one way to build a problem-finding pattern in a computer science course: We show students a particular lesson, which could be contextualized (like using weighted sums to create a transparency effect) or decontextualized (like a binary search). Then we discuss, and encourage students to think about problems that might be solved with techniques like these, or to find similar problems that could be solved with variations of these techniques. Maybe you saw a commercial or video game that used transparency in a different way (fog or smoke?), or maybe you’re wondering how other kinds of searches work. This would be an interesting way of moving beyond contextualized computing education.
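The weighted-sum transparency lesson fits in a few lines of code. Here is my own minimal Python sketch, using plain lists of RGB tuples rather than Media Computation’s actual picture/pixel API; the function name is illustrative:

```python
def blend(front, back, alpha=0.5):
    """Weighted-sum transparency: each output channel is
    alpha * front + (1 - alpha) * back, per pixel."""
    assert 0.0 <= alpha <= 1.0
    return [
        tuple(int(alpha * f + (1 - alpha) * b) for f, b in zip(fp, bp))
        for fp, bp in zip(front, back)
    ]

# Blending pure red over pure blue at 50% gives purple.
print(blend([(255, 0, 0)], [(0, 0, 255)]))  # [(127, 0, 127)]
```

A student who sees this weighted sum can start asking the problem-finding questions above: what alpha schedule would make fog roll in over a game scene, or make smoke thin out?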
A challenge for making this work in computer science is, as the essay in the Chronicle mentioned, kids today don’t think much about how things work, about how to take things apart. Will students be able to figure out problems that can be solved with computer science techniques that they know? Will students be able to think about the ways that computational things are built? This feels like an interesting open research question, like what the Commonsense Computing research group does: how do people think about how computational artifacts work? How do users think that Google works, or how spellchecking works, or how fonts are drawn on the screen? Do users even see the computation that needs to be explained? If not, they will have a hard time finding the problems that computer science can help them solve.
Bruner is suggesting an interesting teaching technique that we should try in computer science. That’s a design-based research question. There’s another research question about how well novices might think about computational problems, about whether they even see the solvable problems. Bruner’s notions of “encouragement and instruction” can help them see it, but that will take some exploration to develop.
I’ve been reading about “mirror neurons” lately, trying to understand why we like stories. I understand why we like to tell stories — there are lots of evolutionary advantages to wanting to communicate, to get others to pay you attention. But why do we like to consume stories? What advantage is there to wanting to hear others’ stories? This is relevant for my contextualized computing education notions — why does wrapping a story around CS1 lead to increased retention? One possible answer is that we are wired to mimic others’ activities, through our mirror neurons, potentially leading to vicarious learning.
Which is why the findings linked below are interesting. Turns out that our mirror neurons fire even when watching a computer do something.
Surprisingly, when players were observing their competitor make selections, the players’ brains were activated as if they were performing these actions themselves. Such ‘mirror neuron’ activities occur when we observe the actions of other humans but here the players knew their opponent was just a computer and no animated graphics were used. Previously, it has been suggested that the mirror neuron system supports a type of unconscious mind-reading that helps us, for example, judge others’ intentions.
Dr Howard-Jones added: “We were surprised to see the mirror neuron system activating in response to a computer. If the human brain can respond as though a computer has a mind, that’s probably good news for those wishing to use the computer as a teacher.”
This blog post at Technology Review caught my eye. The post itself is disappointing. They make claims (like the one below) that are NOT made by the paper. The figure is right, but the claim is too strong.
The paper actually does a really good job of making the claim carefully. ONE of the semesters where they used the robots had a dramatic rise in retention rates, but not another. Comparing the study YEAR to previous years doesn’t show a significant difference in retention rate. However, that one semester is promising and well worth continued exploration.
The results were profound: retention rates for the 2009 computer science classes in which the Finch was used (shown below, in red) increased by 25 percent.
I gave the opening, invited talk at the first Educational Applications of Artificial Intelligence conference this morning. I was a bit nervous, since I am not an AI researcher or teacher. Rather than pretend to be and be exposed as an imposter, I instead focused on challenges in CS Education that I thought AI could help with.
Here were the three I identified (slides in PPT and PDF at
- Matching context to student. The evidence of the value of context for engaging students and improving success rates is pretty strong. But there is also evidence that the same context doesn’t work for everyone. If there were a bunch of contextualized courses available (robotics, media computation, video games, engineering problem-solving), how would you match students to the context that would work best for them? I don’t know which variables are most important to use there. Interest? Future career choices? Previous computing background? Previous mathematics background?
- Teach computing concepts without requiring programming. The new AP CS course has some challenging objectives, like having students understand issues of data and knowledge representation in terms of abstraction, and what makes for a usable user interface. A goal in this course is to minimize learning programming, at least in traditional programming languages. Some of these learning objectives (like knowledge representation) belong to AI. Others could use AI help, like maybe creating a simple agent that could “test the usability” of user interfaces that the students might design. We’re going to need a lot of content generated to help teach these objectives, with minimal programming, and without resorting to boring, rote memorization. (“Here, go memorize the Apple Human Interface Guidelines…”)
Thanks to Mehran Sahami and the rest of the EAAI organizing committee for inviting me!
Carl Wieman’s talk at SIGCSE 2010 was intriguing. I really liked the teaching practices that he recommended. I didn’t buy his explanations for why they were good. But as I’ve started poking at the references he provided, I’m finding that there is evidence to support at least some of his claims. I’m downloading more in order to dig deeper.
Carl said that the goal of his institute is both to have students learn more effectively and to make teaching more efficient and rewarding for the teacher. He recommends a model of carefully identifying the components of expertise, measuring the development of expertise in students, and iteratively experimenting and assessing to get it right. He identified expert competence as having lots of facts, having a good knowledge organization framework, and monitoring one’s own understanding and learning. The goal of science education is to get students to be more like that.
Carl first presented evidence that we’re not doing well now. He cited a paper by Richard Hake describing a 6,000-student survey (Yes! Three zeroes there!) showing that “On average, students learn less than 30% of concepts that they did not already know in lecture classes. Lecturer quality, class size, and institution doesn’t matter.” With improved methods, that can rise to 40-60% or better.
He gave four principles of effective learning and teaching. (1) Motivation, which he said is “essential, but often neglected.” (2) Connecting with and building on prior thinking. (3) Applying what is known about memory (where he recommended Robert Bjork’s work). And (4) explicit authentic practice of expert thinking. This last part is where he went into an argument that I didn’t quite buy. He said that “Brain development is much like muscle development.” It takes lots of practice, and that’s why motivation is so important.
Now, when I took cognitive science in the late 1980s and early 1990s, I was told explicitly that the brain was not a muscle and shouldn’t be thought about that way. It wasn’t about practice. So, I started digging into it. Looks like Wieman is right! There are these really intriguing studies showing that simply telling kids that the brain is like a muscle leads to better learning. Of course, it’s still controversial, and it’s not about the brain being biologically similar to muscle. It’s about thinking about brain development as being like muscle development. Practice matters.
Carl pointed out errors that we make as teachers by not taking all of this into account. For example, weighting exams most heavily in determining course grades is counter-productive. Making exams important leads to cramming, which does result in better exam performance but minimizes long-term retention of that information. You learn it only for the exam.
Then Carl claimed that lectures tend to cover too much material. We should try to teach less per lecture because there are limits on short-term memory. We shouldn’t try to teach more than seven concepts in a lecture, because we can only hold 7+/-2 items in short-term memory. Now, I don’t buy this one. The duration of short-term memory is at most 10 minutes, and can be as short as 30 seconds. Those aren’t lecture-length time scales. Cognitive load is certainly a critical issue, but I don’t know of evidence (and can’t find any yet) supporting the argument for no more than seven concepts per 60-90 minute lecture.
Several of the methods that Carl promoted really resonated with me. His argument that we should start top-down, with an interesting problem and then explain what’s needed to solve it (as opposed to bottom-up, providing background knowledge, and then problems that integrate that knowledge) meshes with our notions of contextualized-computing education. He’s a big fan of peer learning and the use of “clickers” in classrooms. He provided lots of pointers to what he called “more scientific forms of teaching.”
Carl’s talk has me digging into areas of educational psychology that I’ve not looked at in a long time. He’s also got me thinking about how to implement some of his methods in computing classrooms. How do we give “quick, effective” feedback on homework? No way does entering a whole program into an IDE and then interpreting Java error messages count as “quick and effective”! (Alex Repenning had a great quote from a student in his talk: “Computer science class? That’s where the teacher gives you a program on the board, then you type it in, and it doesn’t work.”) How do we provide homework or in-class activities that get at expert computing thinking skills, like debugging and testing, without overloading them with also having to design programs, write programs, enter programs, and fight the compiler’s error messages?
Another great keynote, well worth the price of admission, er, the time and expense to travel to Milwaukee for SIGCSE 2010.
I had a really interesting conversation yesterday with Ron Eglash of RPI at the BPC Community Meeting. Ron does wonderful work exploring the computing and mathematics in cultural practices, like the transformational geometry in how cornrows are woven, or the complex graphics algorithms in Native American bead weavings. He mentioned that one of the surprising things he’s discovered is that the context matters a lot, but it doesn’t have to be their context. As he’s been taking his design tools to places like Ghana and to peoples like the Inuit in Alaska, he finds that students are often most interested in the tools that are not from their own context. He says that he shows them all his tools, then lets them pick what they want to explore further, and they rarely pick their own culture’s practices.
I think that meshes with what we’re learning about contextualized computing education. Not all the students in our IPRE CS1 want to become roboticists, but the students recognize that robotics is part of CS, so the robotics context brings meaning to what they’re doing. Lana Yarosh found that 60-70% of the students in the Media Computation data structures class found that the media context made the class more interesting and more motivating — even though the majority of the students were Industrial and Systems Engineers who were probably not going to be doing much media manipulation in their careers.
All these stories remind me of Viktor Frankl’s Man’s Search for Meaning – not that I’m saying taking a computer science class is like surviving a concentration camp! Piaget said that humans are sense-making: we try to make sense of situations. Frankl said that humans are meaning-making: we need to have a reason for doing. Ron’s work and our contextualized computing education is about providing meaning, demonstrating the value of what’s being learned, and giving a reason for making the effort to make sense of the material.
Context matters. I have to value the context, but it doesn’t have to be my context.
She [Shirley M. Tilghman, President of Princeton] recited various statistics and called for the creation of more courses that engage science students in “big questions” early in their careers. Too many college students are introduced to science through survey courses that consist of facts “often taught as a laundry list and from a historical perspective without much effort to explain their relevance to modern problems.” Only science students with “the persistence of Sisyphus and the patience of Job” will reach the point where they can engage in the kind of science that excited them in the first place, she said.
YES! This is what we’ve been arguing for the last eight years, since we started designing Media Computation, our Engineering CS1 in MATLAB, and IPRE’s CS1. It’s the basic idea behind Threads. We lose too many students in the first year, because we keep saying to them, “Just stick with it, and by your Junior and Senior year, this will all make sense and you’ll be doing relevant work!” It’s nice to hear similar arguments being made for all of science education.
But I will say that I think President Tilghman draws too strong a contrast. One of the lessons for me from the last eight years is that relevance is critical, but students still need to learn stuff. They do need to understand the history of what they’re learning. We need to figure out what students really need to know, and not just include everything that we once studied. Thus, that list of stuff does have to be learned (though probably not everything on the laundry list), and a historical context is important, and still, I completely agree with President Tilghman that the relevance to the big and important questions is the overarching context that needs to permeate our introductory classes.
While my post on the new AP CS slideshow received few comments here, I’ve been getting a bunch of them via email and in person. Since I’ve given the same responses several times now, it’s probably worthwhile to put them here publicly for others to find (others who probably have the same questions). Let me make clear up front that I do not speak for the College Board, NSF, or even the PIs of the effort, Owen Astrachan and Amy Briggs. I’m a member of the Commission that is building the “curriculum framework” (as the College Board calls it), and I’m giving you my impressions of what’s going on.
There is a serious but not insurmountable mismatch between NSF’s goals (and those of the larger computing education community) and the College Board’s goals. Because of that mismatch, a lot of what people want to see in the new AP CS course doesn’t appear in the slideshow or in the “Big Ideas” and “Computing Practices” documents that have been assembled by the commission. That’s because the College Board’s process has as its goals (1) a course that has similar content and learning objectives to existing college-level courses and (2) an assessment that checks for those learning objectives. (A whole lot of what the Commission has been working on the last few months is invisible right now, because we’re taking the Ideas and Practices documents and producing, essentially, specs for the assessment.) NSF’s goals are about creating a course that is rigorous but fun, inviting, and engaging. That mismatch is why I am getting questions like:
- “Where is the fun? Why doesn’t this class talk about computing being fun?”
- “Where is the Web? Where are databases? These are important application areas!”
- “Where is Scratch? Where is Alice? I thought that these courses would be taught with tools like those.”
- “My College colleagues will never accept for credit a course that includes Big Idea/Practice X and leaves out Z!”
Here’s the biggest big idea about this course, and it is the real challenge and hope of this mismatch: this course doesn’t exist! All that exists right now is this “framework” of Big Ideas (and supporting concepts, etc.) and Computational Thinking Practices (and claims and evidence, etc.) that describe what comes out of the course. Therefore, making this course happen will be a stretch for everybody. The hope for this course lies in its potential.
It’s also the case that these documents are being assembled by a multi-layer committee (Commission of maybe 10, an Advisory Group triple that size), in a very short amount of time. The last Commission meeting was pretty darn quiet most of the time, because we were just producing stuff as quickly as we could with small groups reviewing. The result is inherently a compromise, without a common, coherent voice, and barely even time for an editing pass.
To respond to some of these questions:
- Where’s the fun? It’ll get there, we hope. But it’s not a Big Idea or a Practice, because it’s not a concept that is testable. All the Big Ideas about “fun” disappeared when we started the effort of creating Claims and Evidence for the assessment part. How do you assess whether a student had fun in their AP CS course? “Did you have fun, on a scale from 1 to 5?” And if they say that they didn’t, do you grade down the student, the teacher, the course, the Commission/Advisory Group, the College Board, or NSF? Fun will come from the course assembled around this framework.
- Where’s the Web and databases? As one of my colleagues once sniffed, “We don’t teach classes about the Web! We teach classes about concepts!” The College Board process is about those concepts that Colleges and Universities care about. This framework can be applied to a wide variety of contexts, including Web, databases, scientific computing, media, robots, and video games. The plan is for FIVE (5) versions of this course to be created at College/University-level pilot sites in the Fall. (I understand that over 80 schools offered to be pilot sites. The invitation to be a pilot site has gone out to a smaller list of people. If you didn’t get one, sorry, but we can only have a limited number of pilots–and don’t ask me for an invite, I wasn’t involved there.)
These pilots are incredibly important! All that fun, fascinating, engaging, and inviting stuff has to come about from the curriculum that gets wrapped around the framework. I want to see great fun, inviting, and engaging contexts, too. That’s not going to come out of the College Board process — they’re about Ideas, Practices, and assessment. That’s their job.
- Where’s Scratch and Alice? Again, not part of the College Board process. Now, I believe that the final course and assessment will have to specify languages and tools (the test can’t be wide open, but it can be more than one possible language/tool). I suspect that the set of tools chosen will come out of the College level pilots in Fall 2010 and the High School/College level pilots in Fall 2011. (Nope, don’t ask me for an invitation to the high school pilots, either — that’s a decision “above my pay grade.”)
- That last question is completely unhelpful. The course will be a stretch for everybody. A better form of the question: “Okay, I think I can convince my colleagues to do without Z, but it’ll be more palatable if we could teach X as X’.” Now we’re talking!
The other question is how we’re going to teach high school teachers to teach this. Yeah, big issue, and it’s a topic for another blog post.
The point of my rant here is that we should see what we’re doing as carving out a path to a class that exists only in potential, and since we’re doing this as an AP course, there are constraints on the process. I think that there is a way to meet both the College Board’s and NSF’s goals. When reviewing the AP CS curriculum framework, look for holes (“Everyone should know X”) and places that could prevent a great course from being created. But mostly, imagine the great course(s) that can be erected around these girders. That’s what’s being assembled in the next phase.
Mike Hewner is a PhD student working with me who has spent much of this semester figuring out what he wants to do for his dissertation. The time between passing the qualifying examination and proposing a topic is a creative and fun period, where you get to try out and explore a bunch of ideas. He’s settled on one, but one that he didn’t pick was still pretty interesting. He agreed that I could share it here.
Mike is interested in social identity theory. In terms we might care about, how does a student decide to affiliate with computer science, and to define himself or herself as a computer scientist? One of the factors that influences affiliation is choice. If you choose a path, you are more likely to feel affiliation or belonging for that path. Contextualized computing education (like Media Computation or IPRE’s robotics CS1) should support greater affiliation, as long as students can choose the kind of context that they want. But you run into two problems: (1) How can you possibly offer enough choices, economically? (2) How do you make sure that students learn general, transferable knowledge, and not just “Joe Shmoe’s favorite topics in CS”?
Mike thought about building a variation of the Keller method. The Keller method is a self-paced learning structure, where students study on their own, and then take a test with a human grader who decides when the student has reached mastery. The research results on the Keller method are really impressive. It has simply fallen out of favor, and to the best of my knowledge, was never used for computer science.
So Mike’s vision is to give students a huge amount of contextualized computing learning materials (robotics, and media, and engineering, and video games, and bioinformatics, and…), and support for them to pick a path through those materials that best matches their interests and covers the topics we want in a CS1. That’s the self-paced part. Now comes the test part.
As a group of students finishes a section (say, on arrays, or on iteration), each in their own context and language, they meet with a facilitator (a professor or TA, like in problem-based learning). They are given a task like, “Build a PowerPoint slide show that explains what arrays really are.” The students now have to work together to explain, in general terms, the CS concept that each learned in their own context and language. Now you have the kid who learned MATLAB for Engineering talking about iteration with the kid who learned PHP for Web pages, and they have to figure out what’s the same in each language and context. The idea is that each student learns what they want (to increase affiliation), but has to collaborate with those in different contexts in order to develop transferable knowledge.
Like I said, Mike isn’t going to do this — it’s too big (e.g., just assembling and possibly creating that mound of learning materials) and too complicated to be a dissertation. I did think it was an interesting example of how to balance student choice with the goal of learning knowledge that transfers beyond the context.
I admit up-front that I did not hold out much hope for the new report from the National Academies, “Engineering in K-12 Education: Understanding the Status and Improving the Prospects.” As a new, untenured assistant professor in educational technology at Georgia Tech, I did a lot of my early work in engineering education. Engineering is the 800-pound gorilla on campus, and that’s where the greatest learning needs and opportunities were.
I tired of banging my head against the infrastructural challenges of Engineering education. My collaborators in Engineering were warned against working in education. One was told by his chair that every publication in the Journal of Engineering Education would count as a negative publication: “Not only was it a useless publication, but it was time wasted that could have been spent on a real publication.” Graduate students in Engineering wouldn’t work with us because they feared that it would hurt their progress. Senior Engineering faculty mocked education reform efforts. One Civil Engineering professor I interviewed told me at length why undergraduates should never collaborate (“It prevents real learning”). When I pointed out that ABET accreditation guidelines required collaboration, he just smiled and said, “Yeah, that’s what they say. We know how to get around those rules.”
When I got tenure, I decided to focus just on computing education. We have many of the same attitudes among our faculty, but I care more about our problems. I’m willing to bang my head against the wall for longer.
Nowadays, I promote a strategy of using context to motivate and sustain engagement with computing education. I’m pleased to see a similar idea in the new National Academies report:
How might engineering education improve learning in science and mathematics? In theory, if students are taught science and mathematics concepts and skills while solving engineering or engineering-like problems, they will be able to grasp these concepts and learn these skills more easily and retain them better, because the engineering design approach can provide real-world context to what are otherwise very abstract concepts.
I don’t agree that it’s the “design approach” that provides the real-world context, but I completely agree with the rest. It’s like the approach that Owen Astrachan has been emphasizing — the power of the problem that students address. The engineers are saying that they own “real-world context” more than scientists and mathematicians, and I think they’re right. Now I’m actually looking forward to reading the rest of the report.