Archive for April, 2020
Data science as a path to integrate computing into K-12 schools and achieve CS for All
My colleague Betsy DiSalvo is part of the team that just released Beats Empire, an educational game for assessing what students understand about middle school computer science and data science (https://info.beatsempire.org). The game was designed by researchers from Teachers College, Columbia University; Georgia Tech; University of Wisconsin, Madison; SRI International; Digital Promise; and Filament Games, in concert with the NYC Dept. of Education. Beats Empire is totally free; it has already won game design awards, and it is currently in use by thousands of students. Jeremy Roschelle was a consultant on the game, and he just wrote a CACM Blog post about the reasoning behind the game (see link here).
Beats Empire is an example of an important development in the effort to help more students get the opportunity to participate in computing education. Few students are taking CS classes, even when they’re offered — less than 5% in every state for which I’ve seen data (see blog post here). If we want students to see and use computing, we’ll need to put computing in other classes. Data science fits in well with other classes, especially social studies classes. Bootstrap: Data Science (see link here) is another example of a computing-rich data science curriculum that could fit into a social studies class.
Social studies is where we can reach the more diverse student populations who are not in our CS classes. I’ve written here about my work developing data visualization tools for history classes. For a recent NSF proposal, I looked up the exam participation in the two Advanced Placement exams in computer science (CS Principles and CS A) vs the two AP exams in history (US history and World history). AP CS Principles was 32% female, and AP CS A was 24% female in 2019. In contrast, AP US History was 55% female and AP World History was 56% female. Five times as many Black students took the AP US History exam as took the AP CS Principles exam. Fourteen times as many Hispanic students took the AP US History exam as took the AP CS Principles exam.
Data science may be key to providing CS for All in schools.
Active learning has differential benefits for underserved students
We have had evidence that active learning teaching methods have more benefit for underserved populations than for majority groups (for example, I discussed the differential impact of active learning here). Just published in March in the Proceedings of the National Academy of Sciences is a meta-analysis of over 40 studies giving us the strongest argument yet: “Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math” at https://www.pnas.org/content/117/12/6476. I’ll remind everyone that a terrific resource for peer instruction in computer science is here: http://peerinstruction4cs.com/
Achievement gaps increase income inequality and decrease workplace diversity by contributing to the attrition of underrepresented students from science, technology, engineering, and mathematics (STEM) majors. We collected data on exam scores and failure rates in a wide array of STEM courses that had been taught by the same instructor via both traditional lecturing and active learning, and analyzed how the change in teaching approach impacted underrepresented minority and low-income students. On average, active learning reduced achievement gaps in exam scores and passing rates. Active learning benefits all students but offers disproportionate benefits for individuals from underrepresented groups. Widespread implementation of high-quality active learning can help reduce or eliminate achievement gaps in STEM courses and promote equity in higher education.
Checking our hubris with checklists: Learning a lesson from the XO Laptop
My Blog@CACM blog post for February was on Morgan Ames’ book The Charisma Machine (see post here). The book is well-written, and I do recommend it. In the post, I say that the OLPC opposition to HCI design practices is one of the themes in her book that I found most interesting:
It takes humility to design software that humans will use successfully. The human-computer interaction (HCI) community has developed a rich set of methods for figuring out what users need and might use, and for evaluating the potential of a new interface. To use these methods requires us to recognize our limitations — that we are unlikely to get the design right the first time and that our users know things that we don’t.
How do we get developers to have that humility? There are a lot of rewards for hubris. Making big promises that you probably can’t keep is one way to get grant and VC funding.
I just finished Atul Gawande’s The Checklist Manifesto (which I already blogged about here, before I even read it). It’s a short book which I highly recommend. I hadn’t realized before how much Gawande’s story overlaps with the OLPC story — or rather, how much it doesn’t but should have. Gawande is a surgeon. He came to the idea of checklists through their success in reducing costs and improving patient outcomes in medicine. There, too, they had to deal with physician hubris: many physicians saw the checklists as busywork. As one physician said in opposition to checklists, “Forget the paperwork. Take care of the patient.”
The OLPC project couldn’t be bothered with user studies or pilot studies. They wanted to airdrop tablets into Ethiopia. They were so confident that they were going to (in Negroponte’s words) “eliminate poverty, create peace, and work on the environment.” They couldn’t be bothered with the details. They were taking care of the patient!
Gawande points out that checklists aren’t needed because physicians are dumb, but because they know SO much. We’re humans and not Econs. Our attention gets drawn this way or that. We forget about or skip a detail. Our knowledge and systems are so complex. Checklists help us to manage all the details.
We need checklists to check our hubris. We have confidence that we can build technology that changes users’ lives. The reality is that the odds are slim that we can have impact without going through an HCI design process, e.g., knowing the user, testing often, and iterating. The OLPC Project could have used an HCI checklist.
The second-to-last chapter in Gawande’s Checklist Manifesto captures well the idea that we need checklists:
We are all plagued by failures—by missed subtleties, overlooked knowledge, and outright errors. For the most part, we have imagined that little can be done beyond working harder and harder to catch the problems and clean up after them. We are not in the habit of thinking the way the army pilots did as they looked upon their shiny new Model 299 bomber—a machine so complex no one was sure human beings could fly it. They too could have decided just to “try harder” or to dismiss a crash as the failings of a “weak” pilot. Instead they chose to accept their fallibilities. They recognized the simplicity and power of using a checklist. And so can we. Indeed, against the complexity of the world, we must. There is no other choice. When we look closely, we recognize the same balls being dropped over and over, even by those of great ability and determination. We know the patterns. We see the costs. It’s time to try something else. Try a checklist.
How I’m lecturing during emergency remote teaching
Alfred Thompson (whom most of my readers already know) has a recent blog post requesting: Please blog about your emergency remote teaching (see post here). Alfred is right. We ought to be talking about what we’re doing and sharing our practices, so we get better at it. Reflecting on and sharing our teaching practices is a terrific way to improve CS teaching, as Josh Tenenberg and Sally Fincher showed us with their Disciplinary Commons.
My CACM Blog Post this month is on our contingency plan that we created to give students an “out” in case they become ill or just can’t continue with the class — see post here. I encourage all my readers who are CS teachers to create such a contingency plan and make it explicit to your students.
I am writing to tell you what I’m doing in my lectures with my co-instructor Sai R. Gouravajhala. I can’t argue that this is a “best” practice. This stuff is hard. Eugene Wallingford has been blogging about his emergency remote teaching practice (see post here). The Chronicle of Higher Education recently ran an article about how difficult it is to teach via online video like Zoom or BlueJeans (see article here). We’re all being forced into this situation with little preparation. We just deal with it based on our goals for our teaching practice.
For me, keeping peer instruction was my top priority. I use the recommended peer instruction (PI) protocol from Eric Mazur’s group at Harvard, as was taught to me by Beth Simon, Leo Porter, and Cynthia Lee (see http://peerinstruction4cs.com/): I pose a question for everybody, then I encourage class discussion, then I pose the question again and ask for consensus answers. I use participation in that second question (typically gathered via app or clicker device) towards a participation grade in the class — not correct/incorrect, just participating.
My plan was to do all of this in a synchronous lecture with Google Forms, based on a great recommendation from Chinmay Kulkarni. I would have a Google Form that everyone answered, then I’d encourage discussion. Students are working on team projects, and we have a campus license for Microsoft Teams, so I encouraged students to set that up before lecture and discuss with their teams. On a second Google Form with the same question, I also collect their email addresses. I wrote a script to give them participation credit if I get their email address at least once during the class PI questions.
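I haven’t included the actual script here, but a minimal sketch of the idea in Python might look like the following. It assumes the Google Form responses are exported as CSV files with an “Email Address” column, and that there is a separate roster CSV of student emails; the file names and column names are illustrative assumptions, not what I actually use.

```python
# Minimal sketch: mark participation credit from Google Form PI responses.
# Assumes each form's responses were exported as a CSV with an "Email Address"
# column, plus a roster.csv with an "email" column. Names here are illustrative.
import csv
import glob

def emails_in(csv_path, column):
    """Return the set of (lowercased) email addresses found in one CSV column."""
    with open(csv_path, newline="") as f:
        return {row[column].strip().lower()
                for row in csv.DictReader(f) if row.get(column)}

# Every student who answered at least one PI question in any exported form
participants = set()
for path in glob.glob("pi_responses/*.csv"):
    participants |= emails_in(path, "Email Address")

# Compare against the class roster and write a simple credit report
roster = emails_in("roster.csv", "email")
with open("participation.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["email", "participated"])
    for email in sorted(roster):
        writer.writerow([email, int(email in participants)])
```

The point is that the credit is all-or-nothing per lecture: if a student’s email shows up in any of that day’s forms, they get the participation point.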
Then the day before my first lecture, I was convinced on Twitter by David Feldon and Justin Reich that I should provide an asynchronous option (see thread here). I know that I have students who are back home overseas and not in my timezone; they need to be able to watch the video at another time. I now know that I have students with little Internet access. So, I do all the same things, but I record the lecture and leave the Google Forms open for 24 hours after the last class. The links to the Google Forms are in the posted slides and in the recorded lectures. To fill out the PI questions for participation, students have to at least look at the lecture.
I’m so glad that I did. As I tweeted, I had 188 responses to the PI questions after the lectures ended. 24 hours later, I had 233 responses. About 20% of my students didn’t get the synchronous lecture, but still got some opportunity to learn through the asynchronous component. The numbers have been similar for every lecture since that first one.
I lecture, but typically only for 10-15 minutes between questions. I have 4-5 questions in an 85-minute lecture. The questions take longer now. I can’t just move the lecture along when most of the students have answered, as I could with clickers. I typically give the 130+ students 90 seconds to get the link entered and answer the question.
I have wondered if I should just go to a fully asynchronous lecture, so I asked my students via a PI question. 85% say that they want to see the lecturer in the video. They like that I can respond to chat and to answers in Google Forms. (I appreciate how Google Forms lets me see a summary of answers in real-time, so that I can respond to answers.) I’d love to have a real, synchronous give-and-take discussion, but my class is just too big. I typically get 130+ students synchronously participating in a lecture. It’s hard to have that many students participate in the chat, let alone see video streams for all of them.
We’re down to the last week of lecture, then we’ll have presentations of their final projects. They will prepare videos of their presentations, and receive peer comments. Each student has been assigned four teams to provide peer feedback on. Each team has a Google Doc to collect feedback on their project.
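For those curious how that kind of peer-review assignment could be generated, here is a minimal sketch. It is not the actual process we used; the roster, team names, and the simple load-balancing heuristic are all illustrative assumptions.

```python
# Minimal sketch: assign each student four teams (other than their own) to review,
# roughly balancing how many reviewers each team receives. Data is illustrative.
import random
from collections import defaultdict

def assign_reviews(student_team, teams, reviews_per_student=4, seed=0):
    """Return {student: [teams to review]}, skipping each student's own team."""
    rng = random.Random(seed)
    load = defaultdict(int)  # number of reviewers assigned to each team so far
    assignments = {}
    for student, own_team in student_team.items():
        candidates = [t for t in teams if t != own_team]
        rng.shuffle(candidates)               # break ties randomly
        candidates.sort(key=lambda t: load[t])  # prefer least-reviewed teams
        chosen = candidates[:reviews_per_student]
        for t in chosen:
            load[t] += 1
        assignments[student] = chosen
    return assignments

if __name__ == "__main__":
    student_team = {"student_a": "Team1", "student_b": "Team2", "student_c": "Team3"}
    teams = ["Team1", "Team2", "Team3", "Team4", "Team5"]
    for student, review_teams in assign_reviews(student_team, teams).items():
        print(student, "->", review_teams)
```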
So, that’s my practice. In the comments, I’d welcome advice on improving the practice (though I do hope not to have to do this again anytime soon!), and your description of your practice. Let’s share.