Archive for March, 2017
I decided to use clickers in CS1315 this semester (n=217), rather than use the colored index cards that I’ve used in the past for Peer Instruction (see blog post here). With cards, I can only take a vote — no histogram of results, and I can’t provide any grade value for the participation. With clickers, I can use the evidence-based practice as developed by Eric Mazur, Cynthia Lee, Beth Simon, Leo Porter, et al. (plugging the Peer Instruction for CS website):
- ask everyone to answer to prime their thinking about the question,
- ask students to discuss the question in groups of 2-3,
- then vote again (consensus within groups), and
- show the results and discuss the misconceptions.
To make it worthwhile, I’m giving 10 points of the final course grade for scoring over 50% on the second vote (only — the first vote is just to get predictions and activate knowledge), and 5 points for scoring over 30%.
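The grading rule is simple enough to state as a function. A minimal sketch (the function name and percentage scale are my own; only the thresholds come from the rule above):

```python
def pi_credit(second_vote_pct):
    """Course-grade points for Peer Instruction participation.

    second_vote_pct: percent of second-vote questions answered correctly.
    Thresholds follow the rule described above; everything else here
    is an assumption for illustration.
    """
    if second_vote_pct > 50:
        return 10
    if second_vote_pct > 30:
        return 5
    return 0
```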
I’m trying to do this all with campus-approved standards: TurningPoint clickers, TurningPoint software. I’d love to use an app-based solution, but our campus Office of Information Technologies warns against it. They can’t guarantee that, in large classes, the network will support all the traffic for everyone to vote at once.
The process is so complicated: turn on clickers in our learning management software (a form of Sakai called T-Square), download the participant list, open up ResponseWare and define a session (for those using the app version), and plug in the receiver. After class, save the session, integrate the session with the participant list, then integrate the results with T-Square for grades. The default question-creation process in the TurningPoint software automatically shows results and demands a specific format (which makes it hard, e.g., to show screenshots as part of a question), so I’m using the “Poll Anywhere” option, which requires me to process the session file after class to delete the first question (where everyone votes to prime their thinking) and to define the correct response(s) for each question.
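That post-class cleanup is mechanical enough that it could in principle be scripted. TurningPoint 5 session files aren’t a documented plain-text format, so this is purely a hypothetical sketch: it assumes a session already exported as rows of dictionaries, with column names I invented.

```python
def clean_session(rows, answer_key):
    """Drop priming (first-vote) questions and mark correct responses.

    rows: list of dicts, one per (question, response) -- hypothetical schema.
    answer_key: maps question_id to the correct response letter.
    """
    cleaned = []
    for row in rows:
        if row["question_type"] == "priming":  # first vote: discard
            continue
        row["correct"] = (row["response"] == answer_key[row["question_id"]])
        cleaned.append(row)
    return cleaned
```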
I’m willing to do all that. But it’s more complicated than that.
Turns out that Georgia Tech hasn’t upgraded to the latest version of the TurningPoint software (TurningPoint Cloud). GT only supports TurningPoint 5. TurningPoint stopped distributing that version of the software in May 2016, so you have to get it directly from the on-campus Center for Teaching and Learning. I got the software and installed it — and discovered that it doesn’t run on the current version of MacOS, Sierra.
I did find a solution. Here’s what I do. Before each lecture, I move my lecture slides to a network drive. When I get to class, I load my lecture on the lecture/podium computer (which runs Windows and TurningPoint 5 and has a receiver built-in). I gather all the session data while I teach with the podium computer and do live coding on my computer (two screens in the massive lecture hall). I save the session data back to the network drive. Back in my office, I use an older Mac that still runs an older version of MacOS to download the session data, import it using TurningPoint 5, do all the deletions of priming questions and correct-marking of other questions, then integrate and upload to T-Square.
Counting my laptop where I make up slides and do live coding, my Peer Instruction classes require three computers.
Every CS teacher should use active learning methodologies in their classes. Our classes are huge. We need better and easier mechanisms to make this work.
In January, Barbara Ericson and I were invited to visit the new ExcITED Centre at NTNU in Trondheim, Norway. ExcITED is the Centre for Excellent IT Education. It was a whirlwind trip, fitting it in after the start of our semester at Georgia Tech, but really wonderful. We got there just as NTNU was celebrating their new Department of Computer Science with an “IDIovation” celebration which included some great research talks and (a highlight for me) a live coding computer music performance. The whole event was recorded and is available here.
Our host for the visit was Michail Giannakos, who is a learning scientist interested in a variety of educational technologies. We got a chance to meet with several of the faculty and many of the students working in ExcITED. Like I said, it was a whirlwind trip, so please excuse me if I only mention a few of the projects we saw — the ones that particularly stuck with me, despite the jet-lag.
One team at ExcITED is logging student interactions with the IDE that they use in their classes at the University, like the BlueJ Blackbox effort. What makes what they’re doing remarkable is that they’re immediately turning the data around, to present a process mirror to the students. They show students a visualization of what they have been doing. The goal is to encourage reflection, to get students to realize when they’re spending too much time on one phase of their work, or maybe not enough (e.g., in testing). The challenge is mapping from the low-level user interactions to higher-level visualizations that might inform students.
There are several projects working with children who are programming in Scratch (which can be localized to Norwegian). The one that most captured my attention was one where students were programming these beautiful robotic sculptures, created by professional artists. The team is exploring how this influences student motivation. How does motivation change when the robots under the students’ control are neither student-generated nor stereotypically “robotic”?
The Tiles project by Simone Mora, Francesco Gianni, and Monica Divitini aims to engage designers in ubiquitous computing. They have these cool cards that they use in an activity with designers to get them thinking about the kinds of everyday items in which computation might be embedded. They want designers to think about how sensors and actuators might be used to support user activity.
On the weekend after our visit, the chair of the department, Letizia Jaccheri, took Barb and me off to ski in Åre, Sweden. We arrived on a Thursday, spoke at IDIovation that night, met with ExcITED researchers on Friday, traveled to Sweden to ski on Saturday, came back on Sunday, and flew home on Monday. It was an absolutely amazing trip, and we were both grateful for the opportunity!
C.P. Snow got it right in 1961. Algorithms control our lives, and those who don’t know what algorithms are don’t know what questions to ask about them. This is a powerful argument for universal computing education. I like the quote below for highlighting that a better term for the concern is “model,” not “algorithm.”
Discussions about big data’s role in our society tends to focus on algorithms, but the algorithms for handling giant data sets are all well understood and work well. The real issue isn’t algorithms, it’s models. Models are what you get when you feed data to an algorithm and ask it to make predictions. As O’Neil puts it, “Models are opinions embedded in mathematics.”
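The algorithm/model distinction in the quote can be made concrete with a toy example. In this sketch (entirely my own, using ordinary least squares as the “algorithm”), the same well-understood procedure produces different models depending on the data it is fed:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b -- the (well-understood) algorithm."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx  # (slope, intercept): the model

# Same algorithm, different data, different models -- each model an
# "opinion" shaped entirely by the data it was shown.
model_1 = fit_line([0, 1, 2], [0, 2, 4])  # slope 2.0, intercept 0.0
model_2 = fit_line([0, 1, 2], [1, 1, 1])  # slope 0.0, intercept 1.0
```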
Sepehr Vakil appointed first Associate Director of Equity and Inclusion in STEM Education at U. Texas-Austin
I just met Sepehr at an ECEP planning meeting. Exciting to meet another CS Ed faculty member in an Education school! He won the Yamashita Prize at Berkeley in 2015 for his STEM activism.
Dr. Vakil’s research revolves around the intersection of equity and the teaching and learning of STEM, particularly in computer science and technology. This focus has led Dr. Vakil to conduct participatory design research projects in several contexts. These efforts include founding and directing the Oakland Science and Mathematics Outreach (OSMO) program—an after school program serving youth of color in the city of Oakland. Dr. Vakil also has experience teaching and conducting research within public schools. During graduate school, he co-taught introductory computer science courses for 3 years in the Oakland Unified and Berkeley Unified School Districts. As part of a university-research collaboration between UC Berkeley and the Oakland Unified School District, he worked with students and teachers in the Computer Science and Technology Academy at Oakland Technical High School to design an after school racial justice organization named SPOCN (Supporting People of Color Now!). Dr. Vakil’s work at the intersection of equity, STEM, and urban education has also led to publications in prestigious journals such as Cognition & Instruction, Equity and Excellence in Education, and the Journal of the Learning Sciences.
Following up on the brief that Google did last month on Blacks in CS, this month they’ve prepared a brief on the state of girls in CS.
Computer science (CS) education is critical in preparing students for the future. CS education not only gives students the skills they need to succeed in the workforce, but it also fosters critical thinking, creativity, and innovation. Women make up half the U.S. college-educated workforce, yet only 25% of computing professionals. This summary highlights the state of CS education for girls in 7th–12th grade during 2015–16. Girls are less likely than boys to be aware of and encouraged to pursue CS learning opportunities. Girls are also less likely to express interest in and confidence in learning CS.
Expanding the Pipeline: Characteristics of Male and Female Prospective Computer Science Majors – Examining Four Decades of Changes – CRN
Interesting report from CRA that offers a nuanced view of gender differences in goals for STEM education and how those interact with pursuing a degree in CS.
Another example of a variable becoming more salient over time relates to one’s scientific orientation. Students of either gender who express a stronger commitment to making a “theoretical contribution to science” are more likely to pursue a computer science major, but over time this variable has become a significantly stronger predictor for women while remaining a steady predictor for men. In other words, it is increasingly the case that computer science attracts women who see themselves as committed to scientific inquiry. While at face value that seems like positive news for the field of computer science, the fact is that women are much less likely than men to report having a strong scientific orientation upon entering college; thus, many potential female computing majors may be deterred from the field if they simply don’t “see” themselves as the scientific type.
Still, there is some positive news when it comes to attracting women to computing. The first relates to the role of mathematical self-concept. Specifically, even though women rate their math abilities lower than men do—and perceptions of one’s math ability is one of the strongest predictors of a major in computer science—the fact is that the importance of mathematical self-concept in determining who will pursue computer science has weakened over time. Thus, despite the fact that women tend to have lower math confidence than men do, this differential has become less consequential over time in determining who will major in computer science.
William G. Bowen of Princeton and of the Mellon Foundation recently died at the age of 83. His article about MOOCs in 2013 is still relevant today.
Particularly relevant is his note that “few of those studies are relevant to the teaching of undergraduates.” As I look at the OMS CS results and the empirical evidence about MOOC completers (which matches results of other MOOC experiments of which I’m aware at Georgia Tech), I see that MOOCs are leading to learning and serving a population, but that tends to be the most privileged population. Higher education is critiqued for furthering inequity and not doing enough to serve underprivileged students. MOOCs don’t help with that. It reminds me of Annie Murphy Paul’s article on lecture — they best serve the privileged students that campuses already serve well. That’s a subtle distinction: MOOCs help, but not the students who most need help.
What needs to be done in order to translate could into will? The principal barriers are the lack of hard evidence about both learning outcomes and potential cost savings; the lack of shared but customizable teaching and learning platforms (or tool kits); and the need for both new mind-sets and fresh thinking about models of decision making.
How effective has online learning been in improving (or at least maintaining) learning outcomes achieved by various populations of students in various settings? Unfortunately, no one really knows the answer to either that question or the important follow-up query about cost savings. Thousands of studies of online learning have been conducted, and my colleague Kelly Lack has continued to catalog them and summarize their findings.
It has proved to be a daunting task—and a discouraging one. Few of those studies are relevant to the teaching of undergraduates, and the few that are relevant almost always suffer from serious methodological deficiencies. The most common problems are small sample size; inability to control for ubiquitous selection effects; and, on the cost side, the lack of good estimates of likely cost savings.