Archive for August, 2012
In the last few weeks, the focus of the MOOC debate seems to have shifted to an important question: Exactly what is the value of face-to-face contact? The President of Williams College, Adam F. Falk, published a piece in the WSJ claiming that contact hours with a professor are the most important factor in learning.
A recent article in the Georgia Tech Alumni Magazine claims exactly the opposite (with no reference to support this dubious claim): “In fact, one of the core tenets of traditional learning—that face-to-face interaction between teacher and student is critical—is actually of almost no value, according to meta-analysis of education studies.” The very next paragraph starts with: “Meta-analysis shows that the other most effective educational tool is one-on-one tutoring.” So the tutoring is only valuable if it’s not face-to-face?
The article below, by Walt Gardner, raises a more reasoned critique of Falk’s WSJ piece. The question hasn’t been resolved one way or another for me, but it’s certainly one of the key questions in the debate over the value of MOOCs. What is lost when face-to-face contact is removed? How are on-line media forms best used for learning?
According to Falk, the curriculum, the choice of major, and the GPA do not predict self-reported gains in these critical outcomes nearly as well as “how much time a student spent with professors.” In other words, a professor can be a dud in the classroom and yet still be effective in helping students achieve the stated goals. How is that possible? I don’t doubt that the relationship between professors and students is an important factor in learning. But that’s not what Falk argues. Instead, he asserts that it’s the number of hours a professor logs with students after the bell rings that counts the most. I fail to see what that has to do with instruction.
The rebuttal is that not all learning takes place in the classroom. Fair enough. But “personal contact” can mean having coffee and talking about the latest fashions. I’m sure that’s a pleasant way to spend time, but how does that translate into, say, being able to write effectively? I assume that the time spent with students does not involve tutoring because Falk never uses the word. The irony, of course, is that when teachers in K-12 complain about the need for small classes so that they have a better chance to know students and design lessons in line with their needs and interests, they are seen as making excuses.
IEEE Computer Society does good videos. They did a nice video at the Awards Ceremony, and now, they’ve put together a follow-up video with footage from interviews that they did after the Awards Ceremony. I always find it painful to watch myself being interviewed in a video, but I like how they got what’s important about Media Computation and Georgia Computes in this piece. You always try to get some of the important stuff into an interview, but the stuff you thought was most important usually ends up on the cutting room floor. Here, they got what I thought were the important bits.
This piece got mentioned in an earlier blog post comment by Mylène, and I wanted to make sure that it got highlighted. It’s a wonderful post about what really leads to an enduring relationship with a subject matter. There are some great lessons here for computing education. Media Computation fares well when considered from this perspective. I just used MediaComp as a way of introducing graduate students to Python, and they puzzled (for example) over why sounds came out the way that they did. I thought it worked as a way of getting the students to start reasoning with Python.
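A taste of the kind of puzzle the students hit: in MediaComp, a sound is just a sequence of numeric samples, and manipulating those numbers changes what you hear. Here is a minimal sketch in plain Python, with a list standing in for MediaComp’s Sound objects (the function name and sample data are mine, for illustration, not MediaComp’s API):

```python
# Hypothetical sketch: a sound as a plain list of 16-bit samples,
# standing in for MediaComp's Sound objects.

def increase_volume(samples, factor):
    """Scale every sample, clipping to the 16-bit range like real audio."""
    louder = []
    for s in samples:
        v = int(s * factor)
        # Samples outside the 16-bit range get clipped, which is one
        # reason an amplified sound can come out sounding distorted.
        v = max(-32768, min(32767, v))
        louder.append(v)
    return louder

quiet = [0, 1000, -1000, 30000, -30000]
print(increase_volume(quiet, 2))  # the 30000 samples clip at +/-32767/32768
```

Seeing why the big samples clip (and hearing the resulting distortion) is exactly the kind of reasoning-about-representation the students were doing.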
An ounce of perplexity is worth a pound of engagement. Give me a student with a question in her head, one that math can help her answer, over a student who’s been engaged by a poster or a celebrity testimonial or the promise of a career. Engagement fades. Perplexity endures.
Perhaps it comes to this: rather than remembering your own tastes as a twelve-year-old, empathize with the tastes of a twelve-year-old who isn’t anything like you, one who has experienced only humiliation and failure in mathematics. What does math have to offer that student?
I don’t know of a study that addresses the question Nick is asking here. It may certainly exist — I’m not up on research in higher education. (For the CS folk who read this list, there are actually departments in schools of education focused just on higher education administration, and you can get your doctorate in it.) What percentage of faculty in various kinds of higher education (community college, liberal arts college, research university) want to teach? Enjoy it? Want to get better at it? The closest that we in our group have come to exploring this question is when Lijun Ni interviewed CS faculty in the University System of Georgia, and was told by one faculty member (at a school with a teaching-primary mission) that he was not a computing educator and was not interested in getting better at it. What’s the percentage overall?
Have we actually ever asked people these key questions as a general investigation? “Do you like teaching?” “What do you enjoy about teaching?” “What can we do to make you enjoy teaching more?” Would this muddy the water or clear the air? Would this earth our non-teaching teachers and fire them up?
Even where people run vanity courses (very small scale, research-focused courses designed to cherry-pick the good students) they are still often disappointed because, even where you can muster the passion to teach, if you don’t really understand how to teach or what you need to do to build a good learning experience, then you end up with these ‘good’ students in this ‘enjoyable’ course failing, complaining, dropping out and, in more analogous terms, kicking your puppy. You will now like teaching even less!
I want to go meta for a moment, because I noticed something that I found interesting in my WordPress spam folder. I have several completely legitimate, thoughtful comments on the blog, with completely illegitimate ownership. I suspect that the ownership of the comment has been hijacked to drive traffic to their site.
For example, here’s a comment that has supposedly been made by a “Panama Offshore Bank Account” website:
We do know how to engage kids now. We have NCWIT Best and Promising Practices , and we have contextualized computing education . The real problem is that, when it comes to high school CS, we’re just not there. If you choose a high school at random, you are ten times more likely to find one that offers no CS than to find one offering AP CS. That’s a big reason why the AP numbers are so bad. It’s not that the current AP CS is such an awful class. It can be taught well. It’s just not available to everyone! The AP CS teachers we’re working with are turning kids away because their classes are full. Most kids just don’t have access.
That’s a relevant contribution — why would a Panama Bank submit that?
Here’s another, on the Khan Academy CS supports, from an “Anglo-Far East Gold Bullion” site:
The system works wonderfully. Educators often call it “scaffolded problem-based learning.” Essentially students will be solving real-life problems while being encouraged to explore, but are also guided by a teacher along their way, who will be able to point out a number of different ways of accomplishing the problem. Scaffolded learning acknowledges that real-life problems will always have more than one way to solve the solution, that students will always learn best by doing instead of watching, and that curiously should drive exploration (as a personal thought, it’s kind of funny that we’re basically finding things out that were already discovered hundreds of years ago).
These are far too relevant to be generated by auto-spamming bots. I’m wondering if, somehow, legitimate comments are getting relabeled.
If you make a comment, and it doesn’t show up, please drop me a note to check the spam filter, and I’ll try to make sure that your comment gets posted.
Last week, I got to meet Sebastian Thrun, founder of Udacity. It was great fun, and I got to ask him about a bunch of the issues raised in this blog.
If you haven’t read the piece in Huffington Post about him (linked below), I recommend it. He said that he doesn’t like the piece, since it depicts him as a reckless driver. When you’re developing a driverless car, it’s not a good thing for people to see you as someone who can’t drive safely. Beyond that, he liked it. How could he not? It paints him as a bold genius who is making big, broad gambles.
I found that Sebastian’s take on MOOCs is quite a bit more careful than that of many who talk about MOOCs. He doesn’t believe that MOOCs are going to wipe out universities anytime soon, and he sees that there are many subjects (like occupational therapy, which I mentioned in another post) that will never work well in MOOCs. While he believes that the Udacity platform could be used to provide substitutes for community college classes, he doesn’t see that Udacity itself is going to be doing that anytime soon. He definitely sees Udacity as offering corporate training.
We talked about the low completion rates in Udacity courses and the fast pace that students complained about. Sebastian said that that’s been fixed — Udacity courses can now be completely self-paced. However, that doesn’t raise the completion rates. Neither course-paced nor self-paced formats lead to high completion rates. Maybe cohort-paced would?
I asked him if he’s seen Dick Lipton’s blog on cheating vs mastery. He said that he had and that Udacity doesn’t work like that anymore. Students taking an exam in Udacity can see the answers after the exam, which eliminates the mastery-learning component. Students can optionally pay to go to a testing center, which diminishes the cheating possibility, but also prevents the mastery learning element.
Sebastian didn’t say this explicitly, but here’s what I believe his goal is. He’s not out to replace the lower end. He’s trying to create a new, low-cost option at the upper end of the higher-education spectrum. He wants to create an inexpensive, high-quality “Elite” (to use Rich DeMillo’s term): An E-Ivy, or an ubiquitously-accessible Stanford. The low pass rates aren’t a problem, then. Rather, it’s using motivation and willingness to put in the effort as the filter, rather than wealth and clout. They’ll still have few graduates, but it’s because that’s who makes it through, not who can pay the tuition. Those who graduate will really know their stuff.
I asked Sebastian, “Which do you think will have a bigger impact on society, Udacity on education, or your driverless car?” He said, “Udacity’s impact on education.” I bet on the driverless car. I’ve seen too many people with big, even wonderful ideas to change everything in education, but they ran headlong into the schoolification of everything. I do think that Sebastian has an angle that they didn’t have. He’s aiming to change the top, rather than trying to reach the bottom. Rather than make something that can be used with everyone, he’s making something that only a few have to succeed at. That’s an interesting and unusual strategy. The reality is that the top is the goal for everyone else, so education does get changed from the top down. Udacity will likely change things, but I don’t think I can predict how. On the other hand, I was born in Detroit where cars are a very big thing. I took a course at Wayne State University where a big part of it was an analysis of how car culture influenced American culture. A successful driverless car could affect everyone in society, not just those between 4 and 24 years old, and will be especially important with the aging of America.
I suggested that we meet again in five years and see who was right.
Thrun’s resume is populated with seismic efforts, either those already set in motion or others just around the corner. There are various robotic self-navigating vehicles that guide tourists through museums, explore abandoned mines, and assist the elderly. There is the utopian self-driving car that promises to relieve humanity from the tedium of commuting while helping reduce emissions, gridlock, and deaths caused by driver error. There are the “magic” Google Glasses that allow wearers to instantly share what they see, as they are seeing it, with anyone anywhere in the world—with the blink of an eye. And there is the free online university Udacity, a potentially game-changing educational effort that, if Thrun has his way, will level the playing field for learners of all stripes.
via A Beautiful Mind.
Admittedly, this is Texas, whose state Republican platform recently recommended no teaching of higher-order thinking skills or critical thinking skills. It may be an outlier. It may also be a leading indicator. The Houston Chronicle has published an op-ed which proposes replacing more university courses with MOOCs.
Number five is the most cost-saving recommendation: Move more classes online. Online learning will become to education what the forward pass was to football. It will revolutionize.
MIT, for example, has implemented an online program free of charge, and for a small fee, it will award a certificate of compliance. The first course, Circuits and Electronics, drew 120,000 registrants in the first month.
Our paper on our ebook evaluation did not get accepted to ICER2012, but we’re going to turn it into a tech report and make it more generally available (rather than just a link here) soon (and then I’ll give it its own blog post). But our bottom line isn’t too different from the one described below: Students don’t use ebooks differently than regular books, and that’s a problem. How do they learn to use the affordances of the new medium?
I had dinner with Sebastian Thrun Wednesday night (which deserves its own blog post soon!), and he suggested that the problem was calling them “books” at all — it suggests the wrong kinds of interaction, it connects to an incorrect model. Maybe he’s right. He suggested that “video games” was better, but I think that name has its own baggage with our in-service teachers. What could we call these new media, such that students would interact with them differently than traditional paper-based books?
The report is based on a survey conducted this spring of students and faculty at five universities where e-textbook projects were coordinated by Internet2, the high-speed networking group. Students praised the e-books for helping them save money but didn’t like reading on electronic devices. Many of them complained that the e-book platform was hard to navigate. In addition, most professors who responded said that they didn’t use the e-books’ collaborative features, which include the ability to share notes or create links within the text.
I taught educational technology in the Spring, and it gave me a chance to re-read classic texts (I still love Cognitive Apprenticeship) and reflect on some of the key principles of learning sciences. One of these is that all learning is built on existing knowledge — Piagetian assimilation and accommodation are still the main two learning mechanisms that we know. That’s why culture matters, and past experience matters.
The piece linked below from NYTimes highlights how different that prior experience can be, even with students attending the same classroom, and how those different experiences lead to different learning outcomes.
I wonder about the implications for CS Ed. What are the key experiences that lead students to have the prior knowledge to succeed in CS1? If a student has never built a spreadsheet with formulas, then that student may not have the same understanding of specifying instructions for another agent and for using a formal notation to be interpreted by machine, compared to a student who has. A student who has never used Photoshop or looked at a color chooser may have a harder time understanding hierarchy of data representations (e.g., red, green, and blue numbers inside a pixel, which is arranged in two dimensions to make up a picture). Studies in the past have looked at background experiences like how much mathematics a student has had. With the pervasiveness of computing technology today, we might be able to look at more “near transfer” kinds of activities.
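The data hierarchy mentioned above can be made concrete in a few lines of Python. This is an illustrative sketch, not any particular tool’s API: a picture as a two-dimensional arrangement of pixels, each pixel a (red, green, blue) triple of 0-255 values:

```python
# Illustrative sketch of the pixel hierarchy: a picture is rows of
# pixels, and each pixel is a (red, green, blue) triple.

width, height = 3, 2
red = (255, 0, 0)

# A 2-D grid (list of rows) of RGB pixels, all initially red
picture = [[red for _ in range(width)] for _ in range(height)]

def grayscale(pixel):
    """Collapse a pixel to gray by averaging its three channels."""
    r, g, b = pixel
    avg = (r + g + b) // 3
    return (avg, avg, avg)

# Change just the top-left pixel; the rest of the grid is untouched
picture[0][0] = grayscale(picture[0][0])
print(picture[0][0])  # (85, 85, 85)
```

A student who has poked at a color chooser has, in effect, already met the innermost level of this hierarchy.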
When a new shipment of books arrives, Rhonda Levy, the principal, frets. Reading with comprehension assumes a shared prior knowledge, and cars are not the only gap at P.S. 142. Many of the children have never been to a zoo or to New Jersey. Some think the emergency room of New York Downtown Hospital is the doctor’s office.
The solution of the education establishment is to push young children to decode and read sooner, but Ms. Levy is taking a different tack. Working with Renée Dinnerstein, an early childhood specialist, she has made real life experiences the center of academic lessons, in hopes of improving reading and math skills by broadening children’s frames of reference.
I hadn’t heard about this form of cheating in MOOCs. I knew that answers got passed around (as Dave Patterson reported in June), but was surprised to hear that students were creating multiple accounts in order to re-take exams. That changes one’s perception of the 100K registered users. The question raised here in Dick Lipton’s blog is: Is this “cheating” or simply “mastering” the material?
Here is what happens next. Bob signs up for the course multiple times: let’s call them Bob1, Bob2, Bob3, Bob4. Recall there is no cost to Bob for signing up multiple times—none. So why not sign up several times…
Bob’s insight is simple: he now can take the course multiple times and keep only the best grade. Say there is a graded exam. Bob1 takes the exam and gets a 70% on it. Not bad, but not great either. So Bob sees what he got wrong, sees what questions they threw at him. He studies some more, then takes the exam again as Bob2. Of course the exam is different, since all these on-line systems do some randomization. However, the exam covers the same material, so now Bob2 gets an 85% say.
Perhaps Bob is satisfied. But if he is really motivated he studies some more, retakes the exam, and now Bob3 gets 90%. You guessed right. He goes on and takes it one more time as Bob4 who—surprise—gets a perfect 100%.
Updated August 22: See note at bottom
We spent a significant amount of time this summer discussing with NSF our proposal to create an alliance around Expanding Computing Education Pathways (ECEP). One of the issues that we got pressed on was how to not just improve the numbers of women and members of under-represented minorities entering computer science, but to improve the quality of their learning and of their performance on metrics like the Advanced Placement Computer Science exam. Barbara Ericson started digging into the AP CS data at the College Board site, and found some pretty amazing things. I’m helping with some of the statistics (using my new “Computational Freakonomics” knowledge). We’re not sure what we’re going to do with this yet (SIGCSE paper, perhaps?), but Barb agreed that I could share some of the stats with you. The results in this post are Barb’s analysis of the AP CS results from 2006-2011, the years in which “Georgia Computes!” and CAITE were both in existence.
Nationally, here are the pass rates per year. The gap between the blue line at top and the red line below it is explained by the gender gap. In 2011, the pass rate was 63.7% overall, 57.6% for females. The even larger gap from those two lines down to the rest is the race/ethnicity gap: 31.7% for Blacks and 37.2% for Hispanics in 2011. I didn’t expect this: Hispanic females do statistically significantly better than Black females at passing the AP CS over this time frame (t-test, one-tailed, p=.01). (I’m using “Black” because that’s the demographic category that the College Board gives us. We are collapsing “Mexican American,” “Other Hispanic,” and “Puerto Rican” into the “Hispanic” category.) There’s still a big gap between the orange Hispanic line (37.2% in 2011) and the light blue Hispanic females line (25% in 2011).
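For readers curious about the mechanics: the comparison above is a one-tailed two-sample t-test on yearly pass rates. Here is a sketch with SciPy (1.6+ for the `alternative` parameter); the rates below are made-up placeholders for illustration, not the College Board numbers:

```python
# Hedged sketch of a one-tailed two-sample t-test on yearly pass rates.
# The rates below are hypothetical placeholders, NOT the real data.

from scipy import stats

hispanic_female_rates = [0.28, 0.31, 0.26, 0.30, 0.27, 0.25]  # hypothetical
black_female_rates    = [0.18, 0.21, 0.17, 0.20, 0.19, 0.16]  # hypothetical

# One-tailed: is the first group's mean pass rate higher than the second's?
t, p = stats.ttest_ind(hispanic_female_rates, black_female_rates,
                       alternative='greater')
print(t > 0, p < 0.05)
```

With six yearly observations per group, the degrees of freedom are small, so only a fairly consistent gap (as in the real data) reaches significance.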
While Hispanics are doing better than Blacks on AP CS, I was still surprised at this: No Hispanic female has scored a passing grade (3, 4, or 5) on the AP CS test in Georgia, Michigan, Indiana, South Carolina, or Alabama in the last six years. Only one Hispanic female has passed in Massachusetts in the same time frame. Why these states? ECEP is starting from Georgia and Massachusetts, next involving California and South Carolina, and we want to compare to states of similar size or similarly sized minority populations. We haven’t looked at all 50 states — the College Board doesn’t make it easy to grab these numbers.
The Black pass rate is quite a bit smaller than the Hispanic, in part because the participation rate is so low. Michigan has 1.4 million Blacks (out of 9.8 million overall population, so 14% Black), but only 2 Black men have passed the AP CS in the last six years. In 2011, 389 students took the AP CS in Michigan, only 2 of whom were Black. Only one Black female has even taken the AP CS in Michigan in the last six years. (No, she didn’t get a passing grade.)
Considering the population of the state is really important when considering these numbers. Last year, Georgia had 884 people take the AP CS Level A test (the most ever), 79 of whom were Black (about 9%). 17 passed, for a 21.5% pass rate. In contrast, California had a 51.7% pass rate among Black test-takers, 15 of the 29 test takers. That’s 29 test-takers out of 3101 AP CS Level A tests in California (0.9%)! California has an enormous test-taking population, but few Blacks and relatively few Hispanics (230 Hispanic test takers (49 female) out of the 3101 overall test takers). California has 37.6 million people, and 2.2 million Blacks (5.8%). Georgia has 9.8 million people, 2.9 million Blacks (30%). Bottom line: Georgia had many more Black test-takers than California, with a similarly-sized Black population. Georgia’s test-taking numbers aren’t representative of the population distribution overall (9% vs. 30%), but California’s are even more out-of-whack (0.9% vs. 5.8%).
Barb’s still digging into the numbers (e.g., to compare regionally, as well as by similarly sized states). If we get ECEP, this is the first step — to know where we are, so we can measure how we do.
Updated August 22: When I wrote this up, I didn’t realize that Barb had created several datasets. She has data back into the 1990s, but the dataset she gave me was just 2006-2011, the years in which our NSF BPC Alliances existed. So my claims of “ever” in the original post were too strong. We don’t know that the claims are wrong, but we haven’t actually checked back further than 2006 yet. My sincere apologies for mis-stating the scope of my claims! I’m glad that we discovered this problem when it’s just a blog post, not a paper submitted for publication. I’ve updated the text of the post to reflect the claims that I can actually make.
One of the biggest final efforts in “Georgia Computes!” has been trying to get a measure of the whole state’s CS1/CS2 population. Who are they? Where did they come from? What influenced their decision to take a CS course? Did “Georgia Computes!” have any influence on them? Our third ICER2012 paper (available here) documents our effort to answer those questions.
Of the 35 colleges and universities in Georgia, 29 offer computer science coursework, and 19 participated in our statewide survey. (Why only 19 of 29? Great question, and worthy of another study in itself.) In total, 1,434 introductory computer science students (in either a first or second semester course, but all in the same semester without duplication of students) completed the survey. Our analysis had three parts:
- General description of who’s taking CS and why;
- An attempt to answer the question, “Did Georgia Computes have an effect?”
- Regression analysis on what variables impact decisions to pursue computing.
The general description required a GT vs. non-GT lens. 673 of the students in the survey came from Georgia Tech, and most of those were not CS majors, since GT requires everyone to take CS1. When GT is included, the pool is 31% female, but without GT, it’s only 25% female. Most of the pool had no interest in CS in middle or high school, but the percent expressing interest rises dramatically when you take GT out (since there are so many non-majors being forced to take CS at GT). Having some middle school out-of-school computing experience is pretty much the same with GT (57%) as without GT (56%), which is somewhat surprising. Only 56% of students who ended up as CS majors (not at GT) did anything with CS in middle school? An even larger percentage of students (57%, at GT, thus part of the “required” and “not likely to be CS majors” cohort) had some middle school CS but did not choose a CS major? One explanation might be that GT is a prestigious school and the kids who go there (CS majors or not) had more out-of-school experiences in general.
We did ask students who were NOT computing majors what their reasons were. Here were the top three answers:
- I don’t want to do the kind of work that a computing major/minor leads to, 30%.
- I don’t enjoy computing courses, 20%.
- I don’t think I belong in computing (don’t fit the stereotype), 13%.
In general, GaComputes out-of-school activities were not mentioned by many students. Girl Scout events and summer camps are still too small in Georgia to touch a significant percentage of students who end up in CS. A big part of our analysis was figuring out if the students may have been influenced by a teacher who had professional development through Barbara’s Institute for Computing Education (ICE). We asked every student what high school they went to, then deciphered their scrawl, and figured out if we had an ICE teacher there. (We didn’t try to figure out if the student actually interacted with that teacher.) Yes, in general, schools that have ICE teachers do produce more women in our CS1/CS2 data set and more under-represented minorities (in some categories), but neither is a significant difference. Right direction, but not enough to make a strong claim.
Finally, we looked at what influenced student interest in pursuing a computing career, disaggregated by gender and race/ethnicity. There were several statistically significant differences that we noted, like men being more interested in computer games and programming than women, and women being more interested in using computing to help people or society. These aren’t new, but at the size and scope of the survey, it’s an important replication. Most interesting is the mediation analysis that Tom McKlin and Shelly Engelman did. They found that women and under-represented minorities are statistically more influenced by encouragement and a sense of belonging than by a sense of ability, compared to men and white/Asian groups, with outcome variables of (a) satisfaction in choosing to study computing, (b) likelihood of completing a computing major/minor, and (c) likelihood of pursuing a career in computing. Again, these are expected results, but it’s useful to get a large, broad replication.
As I said before, we’re getting to the end of “Georgia Computes!” This was one of our last big analysis efforts. It’s really hard to do these kinds of studies (e.g., each of those schools that did not participate still got our time and effort in trying to convince them, then there’s the data cleaning and analysis and…). I’m glad that we got this snapshot, but wish that we got it at an even larger scale and more regularly. That would be useful for us to use as a yardstick over time.
(NSF BPC funded “Georgia Computes!”. All the claims and opinions here are mine and my colleagues’, not necessarily those of any of the funders.)
While the schedule for the International Computing Education Research (ICER) 2012 conference is now up, the papers aren’t linked to it. I’m guessing that it’s because of the snafu that ACM had with their publishing contractor. I was waiting for the papers to be linkable before I started talking about our other two papers. Instead, I’ll just link to versions of our submitted papers (but not the final ones).
I’ve already talked about Lauren’s paper on using subgoal analysis to improve instruction about App Inventor, which I’ve made available here. Here I’ll tell you a bit about Briana Morrison’s paper on adapting the Disciplinary Commons model for high school CS teachers.
The Disciplinary Commons is a model for professional development that Sally Fincher and Josh Tenenberg developed. We received NSF CPATH funding during the last three years to create the Disciplinary Commons for Computing Education (DCCE), which included both high school and university faculty. The university part wasn’t all that successful, and wasn’t the most interesting part of the work. The really interesting part was how Briana, Ria Galanos, and Lijun Ni adapted the DC model to make it work for high school teachers.
There are really two big needs that high school CS teachers have that are different from those of university CS teachers:
- Recruiting strategies: There are no majors in high school (in general) in the United States. High school CS teachers have no guaranteed flow of students into their classes. High school computer science is an elective in the US. If you want to teach CS, you recruit students into your class, or else you’ll end up teaching something else (or you lose your job).
- A Community: While I’m sure they exist, I’ve not yet met a higher education CS faculty member who is his or her own department. Most high school CS teachers are the only CS teachers in their school. They rarely know any other high school CS teachers. Providing them with a community makes a big difference in terms of their happiness, teaching quality, and retention.
Briana does a great job in her paper of explaining what happened in the DCCE over the three years that we ran it, and providing the evidence that good things happened. The evidence that the recruiting strategies worked is astounding:
According to these self reported numbers, the high school teacher participants increased the number of AP CS students in the year following their participation in the DCCE by 302%. One teacher in Year 3 had a 700% increase in students in her AP CS class and attributed it to the recruiting help received from the DCCE.
The evidence that the community-building helped is actually even stronger. We had The Findings Group as our external evaluators on DCCE, and they used social network analysis (SNA). The diagram is compelling, and the stats on the network show that the teachers dramatically increased their awareness and use of the network of high school CS teachers.
Briana is continuing to work with DCCE, to help other high school disciplinary commons start up around the country. NSF CPATH is allowing us to spend out the remaining money to fund her travel to help out other groups. Briana is now a PhD student working with me, and she’s figuring out what her dissertation is going to look like, and if it’ll build on the success of DCCE.
(NSF CPATH funded DCCE. All the claims and opinions here are mine, not necessarily those of any of the funders.)
I’ve told you a bit about how the Media Computation class went this summer, with the new things that I tried. Let me tell you something about how the “Computational Freakonomics” (CompFreak) class went.
The CompFreak class wasn’t new. Richard Catrambone and I taught it once in 2006. But we’ve never taught it since then, and I’d never taught it before on my own, so it was “new” for me. There were six weeks in the term at Oxford. Each week was roughly the same:
- On Monday, we discussed a chapter from the “Freakonomics” book.
- We then discussed social science issues related to that chapter, from the nature of science, through t-tests and ANOVA, up to multiple linear regression. Sometimes, we did a debate about issues in the chapter (e.g., on “Atlanta is a crime-ridden city” and on “Roe v. Wade is the most significant explanation for the drop in crime in the 1990s.”)
- Then I showed them how to implement the methods in SciPy to do real analysis of some Internet-based data sets. I gave them a bunch of example data sets, and showed them how to read data from flat text files and from CSV files.
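The weekly pattern above (read data from a flat file or CSV, then run a test on it) can be sketched in a few lines of Python with SciPy. The data below are invented for illustration, not one of the course's datasets:

```python
# Illustration (invented data): read (group, value) rows from CSV text
# and compare the two groups with an independent-samples t-test.
import csv
import io

from scipy import stats

# Stand-in for a flat CSV file on disk.
raw = io.StringIO(
    "group,value\n"
    "a,10\na,12\na,11\na,13\n"
    "b,20\nb,22\nb,19\nb,21\n"
)

groups = {}
for row in csv.DictReader(raw):
    groups.setdefault(row["group"], []).append(float(row["value"]))

t, p = stats.ttest_ind(groups["a"], groups["b"])
print(f"t = {t:.2f}, p = {p:.4f}")
```

The real class used larger Internet-sourced datasets, but the read-then-test shape is the same.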
At the end of the course, students did a project where they asked a question, any question they wanted, from any database. Then they did it again, in pairs, after a bunch of feedback from me (both on the first project and on their proposal for the final project). The idea is that the final projects would be better than the first round, since the students got feedback and combined efforts in pairs. And they were.
- One team looked at the so-called “medal slump” after a country hosts the Olympics, which got mentioned in some UK newspapers this summer. One member of the team had found in his first project that the host country does win statistically significantly fewer medals in the following Games. But as a pair, the students found that there was no medal “slump.” Instead, there was a huge medal “bump” in the Games the country hosted: the host wins more medals that year, while the prior two and following two Olympics all follow the same trend in medals won.
- Another team looked at Eurozone countries and how their GDP changes tracked one another after the move to the Euro, then tried to explain that in terms of monetary policy and internal trading. It is the case that countries that adopted the Euro found their GDPs correlating with one another much more than with European countries that kept their own currencies, or with other countries of similar GDP. But the team couldn’t figure out a good explanation for why: was it because internal trading was facilitated, because of joint monetary policy, or something else?
- One team figured out the Facebook API (which they said was awful) and looked at different companies’ “likes” versus their stock prices over time. Strongly correlated, but “likes” grow basically linearly — almost nobody un-likes a company. Since stock prices generally rise too, the correlation is clear but not meaningful.
- Another team looked at the impact of new consoles on the video game market. A new console is a huge hit to the developing company’s stock price in the year of release, while game manufacturers’ stock rises dramatically. But the team realized a weakness in their study: they only looked at the year of a console’s release, while the real benefit of a new console comes over its long lifespan. The year the PS3 came out, it was outsold by the PS2, but that’s hard to see in stock prices.
- The last team looked at the impact of the Olympics on the host country’s GDP. There was no correlation at all between hosting and changes in GDP. The Olympics are a big deal, but still a small drop in the host country’s overall economy.
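The medal-bump comparison could be sketched as a paired test: for each host country, compare the hosting-Games medal count to the mean of the surrounding Games. The medal counts below are made up to illustrate the method, not the team's real Olympic data:

```python
# Toy version of the "medal bump" check (invented medal counts, not
# real Olympic data). Each row: the two Games before hosting, the
# hosting Games itself (middle entry), and the two Games after.
from scipy import stats

counts = [
    (30, 32, 45, 31, 29),
    (18, 20, 33, 21, 19),
    (50, 52, 70, 53, 51),
    (12, 11, 22, 13, 12),
]

host_games = [row[2] for row in counts]
surrounding = [(row[0] + row[1] + row[3] + row[4]) / 4 for row in counts]

# Paired t-test: does the hosting Games differ from its neighbors?
t, p = stats.ttest_rel(host_games, surrounding)
print(f"host mean = {sum(host_games) / 4:.1f}, "
      f"surrounding mean = {sum(surrounding) / 4:.1f}, p = {p:.3f}")
```

A paired test is the right shape here because each host country serves as its own control across Games.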
One of my favorite observations from their presentations: their honesty. Most of the groups found nothing significant, or found that they had gotten something wrong in the first round — and they all admitted it. Maybe it was because this was a class context rather than a tenure-race-influenced conference. They had a wonderful honesty about what they found and what they didn’t.
I’ve posted the syllabus, course notes, slides that I used (Richard never used PowerPoint, but I needed PowerPoint to prop up my efforts to be Richard), and the final exam that I used on the CompFreak Swiki. I also posted the student course-instructor opinion survey results, which are interesting to read in terms of what didn’t work.
- Clearly, I was no Richard Catrambone. Richard is known around campus for how well he explains statistics, and I learned a lot from listening to his lectures in 2006. Students found my discussion of inferential statistics to be the most boring part.
- They wanted more in-class coding! I had them code in-class every week. After each new test I showed them (correlation, t-test, ANOVA, etc.), I made them code it in pairs (with any data they wanted), and then we all discussed what they found in the last five minutes of class. I felt guilty that they were just programming away while I worked with pairs that had questions or read email. I guess they liked that part and wanted more.
- I got credit from the students for something that Richard taught me to do. Richard pointed out that his reading of the cognitive load literature suggests that nobody can pay attention for 90 minutes straight. Our classes were 90 minutes a day, four days a week, so halfway through each class I made them get up and go outside (when it wasn’t raining). They liked that part.
- Students did learn more about computing, inspired by the questions they were trying to answer. They said in their survey comments that they studied more Python on their own, and wished I’d covered more Python and computing.
- In general, though, they seemed to like the class, and they encouraged us to offer it on-campus, which we’ve not yet done.
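The weekly in-class exercise (see a new test in lecture, then code it in pairs) might look like this for one-way ANOVA. The three groups of numbers are invented for illustration:

```python
# Invented example of the weekly in-class exercise: one-way ANOVA
# across three groups of scores, using SciPy's f_oneway.
from scipy import stats

section_a = [72, 75, 78, 71, 74]
section_b = [80, 83, 79, 82, 81]
section_c = [73, 76, 74, 77, 75]

f, p = stats.f_oneway(section_a, section_b, section_c)
print(f"F = {f:.2f}, p = {p:.4f}")
```

In class, the pairs would swap in whatever data they wanted and then discuss what they found in the last five minutes.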
Students who talked to me about the class at the end said that they found it interesting to use statistics for something. Turns out that I happened to get a bunch of students who had taken a lot of statistics before (e.g., high school AP Statistics). But they still liked the class because of (a) the coding and (b) applying statistics to real datasets. My students asked all kinds of questions, from what factors influenced money earned by golf pros, to the influences on attendance at Braves games (unemployment is much more significant than how much the team is in contention for the playoffs). One of the other more interesting findings for me: GDP correlates strongly and significantly with the number of Olympic gold medals that a country wins, i.e., rich countries win more medals. However, GDP-per-capita has almost no correlation. One interpretation: to win in the Olympics, you need lots of rich people (vs. a large middle class).
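The GDP-and-medals check is a straightforward Pearson correlation in SciPy. A minimal sketch, with invented country figures rather than the students' real data:

```python
# Sketch of the GDP vs. gold-medal correlation (invented figures,
# not the students' real data): Pearson's r via SciPy.
from scipy import stats

gdp = [15.0, 5.0, 3.0, 2.0, 1.0, 0.5]  # hypothetical GDPs, $T
golds = [40, 30, 25, 15, 8, 3]         # hypothetical gold-medal counts

r, p = stats.pearsonr(gdp, golds)
print(f"r = {r:.2f}, p = {p:.3f}")
```

Running the same test against GDP-per-capita instead of total GDP is what separated the two interpretations.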
Anyway, I still don’t know if we’ll ever offer this class again, on-campus or study-abroad. It was great fun to teach. It’s particularly fun for me as an exploration of other contexts in contextualized computing education. This isn’t robotics or video games. This is “studying the world, computationally and quantitatively” as a reason for learning more about computing.
The TechCrunch article actually cites research (see below), a paper by Cindy Hmelo. Cindy’s paper is actually on problem-based learning, but it does describe scaffolding — as defined in a Hmelo & Guzdial paper from 1996! How about that!
What I see in the Khan Academy offering is one of the kinds of scaffolding that Cindy and I talked about. Scaffolding is an idea (first defined by Wood, Bruner, and Ross) which does involve letting students explore, but under the guidance of a tutor. A teacher in scaffolding doesn’t “point out novel ways of accomplishing the task.” Instead, the teacher models the process for the student, coaches the student while they’re doing it, and gets the student to explain what they’re doing. A key part of scaffolding is that it fades — the student gets different kinds of support at different times, and the support decreases as the student gets more expert. I built a form of adaptable scaffolding in my 1993 dissertation project, Emile, which supported students building physics simulations in HyperTalk. Yes, students using Emile could click on variables and fill in their values without directly editing the code, but there was also process guidance (“First, identify your goals; next, find your components in the Library”) and prompts to get students to reflect on what they’re doing. And the scaffolding could be turned on or off, depending on student expertise.
I wouldn’t really call what Khan Academy has “scaffolding,” at least not the way that Cindy and I defined it, nor in a way that I find compatible with Wood, Bruner, and Ross’s original definition. There’s not really a tutor or a teacher. There are videos, as I learned from this blog post and later found for myself. The intro video (currently available on the main Khan Academy page) says that students should just “intuit” how the code works. Really? There’s a lot more of this belief that students should just teach themselves what code does. The “scaffolding” in Khan Academy has no kind of process modeling or guidance, nothing to explain to students what they’re doing or why, nothing to encourage them to explain it to themselves.
It is a very cool text editor. But it’s a text editor. I don’t see it as a revolution in computer science education — not yet, anyway. Now, maybe it’s a way of supporting “collaborative floundering,” which has been suggested to be even more powerful than scaffolding as a learning activity. Maybe they’re right, and this will be the hook to get thousands of adolescents interested in programming. (I wonder if they tested with any adolescents before they released?) Khan has a good track record for attracting attention — I look forward to seeing where this goes.
The heart of the design places a simplified, interactive text editor adjacent to the code’s drawing output, which updates in real time as students explore how different variables and numbers change the size, shapes, and colors of their new creation. An optional video guides students through the lesson, step by step, and, most importantly, can be paused at any point so that they can tinker with the drawing as curiosity and confusion arise during the process.
This part is key: learning is contextual and idiosyncratic; students better absorb new material if they can learn at their own pace and see the result of different options in realtime.
The pedagogy fits squarely into what educators called “scaffolded problem-based learning” [PDF]; students solve real-life problems and are encouraged to explore, but are guided by a teacher along the way, who can point out novel ways of accomplishing the task. Scaffolded learning acknowledges that real-life problems always have more than one path to a solution, that students learn best by doing, and that curiosity should drive exploration. This last point is perhaps the most important, since one of the primary barriers to boosting science-related college majors is a lack of interest.