Posts tagged ‘learning sciences’

Human students need active learning and Econs learn from lecture: NYTimes Op-Ed in defense of lecture

I’m sympathetic to the author’s argument (linked below) that being able to understand an argument delivered as a lecture is difficult and worthwhile. Her characterization of active learning is wrong — it’s not “student-led discussion.”  Actually, what she describes as good lecture is close to good active learning.  Having students answer questions in discussion is good — but some students might disengage and not answer.  Small group activities, peer-led team learning, or peer instruction would be better for making sure that all students engage. But that’s not the critical flaw in her argument.

Being able to listen to a complicated lecture is an important skill — but students (at least in STEM, at least in the US) don’t have that skill.  We can complain about that. We can reform primary and secondary schooling so that students develop that skill.  But if we want these students to learn, the ones who are in our classes today, we should use active learning strategies.

Richard Thaler introduced the term “Econs” to describe the rational beings that inhabit traditional economic theory. (See a review of his book Misbehaving for more discussion on Econs.)  Econs are completely rational.  They develop the skills to learn from lecture because it is the most efficient way to learn.  Unfortunately, we are not Econs, and our classes are filled with Humans. Humans are predictably irrational, as Dan Ariely puts it. And there’s not much we can do about it. In his book Thinking, Fast and Slow, Daniel Kahneman complains that he knows how he is influenced by biases and too much System 1 thinking — and yet, he still makes the same mistakes.  The evidence is clear that the students in our undergraduate classes today need help to engage with and learn STEM skills and concepts.

The empirical evidence for the value of active learning over lecture is strong (see previous post).  It works for humans.  Lecture probably works for Econs.  If we could find enough of them, we could run an experiment.

In many quarters, the active learning craze is only the latest development in a long tradition of complaining about boring professors, flavored with a dash of that other great American pastime, populist resentment of experts. But there is an ominous note in the most recent chorus of calls to replace the “sage on the stage” with student-led discussion. These criticisms intersect with a broader crisis of confidence in the humanities. They are an attempt to further assimilate history, philosophy, literature and their sister disciplines to the goals and methods of the hard sciences — fields whose stars are rising in the eyes of administrators, politicians and higher-education entrepreneurs.

Source: Lecture Me. Really. – The New York Times

An argument similar to mine is below.  This author doesn’t use the Humans/Econs distinction that I’m using.  Instead, the author points out that lecturers too often teach only to younger versions of themselves.

I will grant that nothing about the lecture format as Worthen describes it is inherently bad. But Worthen’s elegy to a format that bores so many students reminds me of a bad habit that too many professors have: building their teaching philosophies around younger versions of themselves, who were often more conscientious, more interested in learning, and more patient than the student staring at his phone in the back of their classrooms.

Source: Professors shouldn’t only teach to younger versions of themselves

October 30, 2015 at 8:49 am 4 comments

A Terrific and Dismal View of What Influences CS Faculty to Adopt Teaching Practices

Lecia Barker had a terrific paper in SIGCSE 2015 that I just recently had the chance to dig into. (See paper in ACM DL here.)  Here’s the abstract:

Despite widespread development, research, and dissemination of teaching and curricular practices that improve student retention and learning, faculty often do not adopt them. This paper describes the first findings of a two-part study to improve understanding of adoption of teaching practices and curriculum by computer science faculty. The paper closes with recommendations for designers and developers of teaching innovations hoping to increase their chance of adoption.

I’ve published in this area before.  Davide Fossati and I wrote a paper about the practices of CS teachers (based on interviews with about a dozen CS university teachers): how they made change, what convinced them to change, and how they decided if the change worked.  (See blog post about this here.)  The general theme was that these decisions rarely had an empirical basis.

Lecia and her co-authors went far beyond our study.  She interviewed and observed 66 CS faculty from 36 institutions, explicitly chosen to represent a diverse set of schools.  The result is the best picture I’ve yet seen of how CS faculty make decisions.

Lecia found more evidence of teachers using empirical evidence than we did, which was great to see.  But whether students “liked” it or not was still the most critical variable:

On the other hand, if students don’t “like it,” faculty are unlikely to continue using a new practice. At a public research university, a professor said, “You can do something that you think, ‘Wow! If the learning experience was way better this term, the experiment really worked.’ And then you read your teaching reviews, and it’s like the students are pissed off because you did not do what they expected.”

Lecia discovered a reason not to adopt that I’d not heard before.  She found that CS teachers filter out innovations that don’t come from a context like their own.  Those of us at research universities are filtered out by some teachers at teaching-oriented institutions:

Faculty trust colleagues who have similar teaching and research contexts, share attitudes toward students and teaching, or teach similar subjects. In describing what conference speakers he finds credible at SIGCSE, a professor at a private liberal arts university acknowledged, “I do have the anti- ‘Research One’ bias. Like if the speaker is somebody who teaches at <prestigious public research university>, the mental clout that I give them as a teacher—unless they’re a lecturer—I drop them a notch. When someone stands up to speak and they’re from a really successful teaching college <names several> or universities that have a real reputation of being great undergraduate teaching institutions, I give them a lot of merit.”

The part that I found most depressing (even if not surprising) is that research evidence did not matter at all in adopting new ways to teach:

Despite being researchers themselves, the CS faculty we spoke to for the most part did not believe that results from educational studies were credible reasons to try out teaching practices.

Lecia’s study is well done, and the paper is fascinating, but the overall picture is rather dismal.  She points out many other issues that I’m not going into here, like the trade-off between the cost and benefit of adopting a new practice, and the need for specialized classroom equipment for some new practices.  Overall, she finds that it’s really hard to get higher education CS faculty to adopt better practices.  We reported on that in “Georgia Computes!” (see post here), but it’s even more disappointing when you see it in a large, broad study like this.

September 21, 2015 at 8:59 am 3 comments

Growing evidence that lectures disadvantage underprivileged students

The New York Times weighs in on the argument about active learning versus passive lecture.  The article linked below supports the proposition that college lectures unfairly advantage those students who are already privileged. (See the post about Miranda Parker’s work for a definition of what is meant by privilege.)

The argument that we should promote active learning over passive lecture has been a regular theme for me for a few weeks now:

  •  I argued in Blog@CACM that hiring ads and RPT requirements should explicitly state that teaching statements emphasizing active learning will be weighted more heavily (see post here).
  • The pushback against this idea was much greater than I anticipated. I asked on Facebook if we could do this at Georgia Tech. The Dean of the College of Engineering was supportive. Other colleagues were strongly against it. I wrote a blog post about that pushback here.
  • I wrote a Blog@CACM post over the summer about the top ten myths of computing education, which was the top-visited page at CACM during the month of July (see post here).  I wrote that post in response to a long email thread on a College of Computing faculty mailing list, where I saw that authority swayed CS faculty more than research results did (blog post about that story here).

The NYTimes piece pushes on the point that this is not just an argument about quality of education.  The argument is about what is ethical and just.  If we value broadening participation in computing, we should use active learning methods and avoid lecture. If we lecture, we bias the class in favor of those who have already had significant advantages.

Thanks to both Jeff Gray and Briana Morrison, who brought this article to my attention.

Yet a growing body of evidence suggests that the lecture is not generic or neutral, but a specific cultural form that favors some people while discriminating against others, including women, minorities and low-income and first-generation college students. This is not a matter of instructor bias; it is the lecture format itself — when used on its own without other instructional supports — that offers unfair advantages to an already privileged population.

The partiality of the lecture format has been made visible by studies that compare it with a different style of instruction, called active learning. This approach provides increased structure, feedback and interaction, prompting students to become participants in constructing their own knowledge rather than passive recipients.

Research comparing the two methods has consistently found that students over all perform better in active-learning courses than in traditional lecture courses. However, women, minorities, and low-income and first-generation students benefit more, on average, than white males from more affluent, educated families.

Source: Are College Lectures Unfair? – The New York Times

September 18, 2015 at 8:44 am 4 comments

ICER 2015 Preview: Subgoal Labeling Works for Text, Too

Briana Morrison is presenting the next stage of our work on subgoal labeled worked examples, with Lauren Margulieux. Their paper is “Subgoals, Context, and Worked Examples in Learning Computing Problem Solving.” As you may recall, Lauren did a terrific set of studies (presented at ICER 2012) showing how adding subgoal labels to videos of App Inventor worked examples had a huge effect on learning, retention, and transfer (see my blog post on this work here).

Briana and Lauren are now teaming up to explore new directions in both educational psychology and computing education research.

  • In the educational psychology space, they’re asking, “What if you make the students generate the subgoal labels?” Past research has found that generating the subgoal labels, rather than just having them given to the students, is harder on the students but leads to more learning.
  • They’re also exploring what happens when the example and the practice problem come from the same context versus different contexts (where the “context” here is the cover story or word-problem story). For example, we might show people how to average test grades, but then ask them to average golf scores — that’s a shift in context.
  • In the computing education research space, Briana created subgoal labeled examples for a C-like pseudocode.

One of the important findings is that they replicated the earlier study, but now in a text-based language rather than a blocks-based language. On average, subgoal labels on worked examples improve performance over getting the same worked examples without subgoal labels. That’s the easy message.
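To make the idea concrete, here is a rough sketch of what a subgoal-labeled worked example might look like in a C-style language. This is my own illustration, not one of Briana and Lauren’s actual study materials; the subgoal labels are the comments that name the purpose of each chunk of steps, rather than restating individual lines:

    #include <stdio.h>

    int main(void) {
        /* Data for this illustrative example (not from the study) */
        double scores[] = {85.0, 92.5, 78.0, 88.5};
        int count = sizeof(scores) / sizeof(scores[0]);

        /* Subgoal: initialize the accumulator */
        double total = 0.0;

        /* Subgoal: combine every value into the accumulator */
        for (int i = 0; i < count; i++) {
            total += scores[i];
        }

        /* Subgoal: compute and report the average */
        double average = total / count;
        printf("Average score: %.2f\n", average);
        return 0;
    }

In the same-context condition, the practice problem would also be about test scores; in the context-shift condition, the learner would be asked to apply the same subgoals to something like golf scores.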

The rest of the results are much more puzzling. Staying in the same context (e.g., seeing test scores averaged in the worked examples, then being asked to average test scores in the practice) led to statistically worse performance than having to shift contexts (e.g., from test scores to golf scores). Why might that be?

Generating labels did seem to help performance, but the Generate group had the highest attrition. That makes sense, because increased complexity and cognitive load would predict that more participants would give up. But that drop-out rate makes it hard to make strong claims. Now we’re comparing everyone in the other groups to only “those who gut it out” in the Generate group, so the results are more suspect.

There are more nuances and deeper explanations in Briana’s paper than I’m providing here. I find this paper exciting. We have an example here of well-established educational psychology principles not quite working as you might expect in computer science. I don’t think it puts the principles in question. It suggests to me that there may be some unique learning challenges in computer science, e.g., if the complexity of computer science is greater than in other studies, then it’s easier for us to reach cognitive overload. Briana’s line of research may help us to understand how learning computing is different from learning statistics or physics.

August 7, 2015 at 7:40 am 4 comments

Why we are teaching science wrong, and how to make it right: It’s about CS retention, too

There’s an important new paper in Nature that makes the argument for active learning in all science classes, which is one of the arguments I was making in my Top Ten Myths blog post. The section I’m quoting below is about a different issue than learning — it turns out that active learning methods are important for retention, too.

Active learning is winning support from university administrators, who are facing demands for accountability: students and parents want to know why they should pay soaring tuition rates when so many lectures are now freely available online. It has also earned the attention of foundations, funding agencies and scientific societies, which see it as a way to patch the leaky pipeline for science students. In the United States, which keeps the most detailed statistics on this phenomenon, about 60% of students who enrol in a STEM field switch to a non-STEM field or drop out (see ‘A persistence problem’). That figure is roughly 80% for those from minority groups and for women.

via Why we are teaching science wrong, and how to make it right : Nature News & Comment.

August 3, 2015 at 7:49 am Leave a comment

WYSIATI: CS Teachers need to ask “What am I not seeing?”

I’m currently reading Nobel laureate Daniel Kahneman’s book, “Thinking, Fast and Slow” (see here for the NYTimes book review).  It’s certainly one of the best books I’ve ever read on behavioral economics, and maybe just the best book I’ve ever read about psychology in general.

One of the central ideas of the book is our tendency to believe “WYSIATI”—What You See Is All There Is.  Kahneman’s research suggests that we have two mental systems: System 1 produces immediate, intuitive responses to the world around us, and System 2 produces thoughtful, analytical responses.  System 1 aims to generate confidence.  It constructs a story about the world from whatever information exists.  And that confidence leads us astray. It keeps System 2 from asking, “What am I missing?”  As Kahneman says in the interview linked below, “Well, the main point that I make is that confidence is a feeling, it is not a judgment.”

It’s easy to believe that university CS education in the United States is in terrific shape.  Our students get jobs — multiple job offers each.  Our graduates and their employers seem to be happy.  What’s so wrong with what’s going on? I see computation as a literacy. I wonder, “Why is our illiteracy rate so high? Why do so few people learn about computing? Why do so many flunk out, drop out, or find it so traumatic that they never want to have anything to do with computing again?  Why are the computing literate primarily white or Asian, male, and financially well-off compared to most?”

Many teachers (like those in the comment thread after this post) argue for the state of computing education based on what they see in their classes.  We introduce tools or practices and determine whether they “work” or are “easy” based on little evidence, often just discussion with the top students (as Davide Fossati and I found). If we’re going to make computing education work for everyone, we have to ask, “What aren’t we seeing?”  We’re going to feel confident about what we do see — that’s what System 1 does for us.  How do we see the people who aren’t succeeding with our methods?  How do we see the students who won’t even walk in the door because of how or what we teach? That’s why it’s important to use empirical evidence when making educational choices. What we see is not all there is.

But, System 1 can sometimes lead us astray when it’s unchecked by System 2. For example, you write about a concept called “WYSIATI”—What You See Is All There Is. What does that mean, and how does it relate to System 1 and System 2?

System 1 is a storyteller. It tells the best stories that it can from the information available, even when the information is sparse or unreliable. And that makes stories that are based on very different qualities of evidence equally compelling. Our measure of how “good” a story is—how confident we are in its accuracy—is not an evaluation of the reliability of the evidence and its quality, it’s a measure of the coherence of the story.

People are designed to tell the best story possible. So WYSIATI means that we use the information we have as if it is the only information. We don’t spend much time saying, “Well, there is much we don’t know.” We make do with what we do know. And that concept is very central to the functioning of our mind.

via A machine for jumping to conclusions.

July 24, 2015 at 7:23 am 2 comments

What Convinces CS Faculty to Change: Authority over Evidence

My Blog@CACM Post for July 2015 is on the Top Ten Myths of Teaching Computer Science. You can go take a look at it here.

I wrote that blog post because we really have had a long debate in our faculty email list about many of those topics. I recently saw our Dean at an event, and he told me that he hadn’t read the thread yet (but he planned to) because “it must be 100 messages long.” Most of the references in that blog post came from messages that I wrote in response to that thread. It was a long post because people generally didn’t agree with me.  Several senior, well-established (much more famous than me) faculty strongly disagreed with the evidence-based argument I was making. The thread finally ended when one of the most senior, most respected faculty in the College wrote a note saying (paraphrased), “There are probably better teaching evaluation methods than the ones we now use. I’m sure that Mark knows teaching methods that would help the rest of us teach better.” And that was it. Thread ended. The research-based evidence that I offered was worth fighting about. The word of authority was not.

I’ll bet that faculty across disciplines similarly respond to authority more than evidence. We certainly see the role of authority in Physics Education Research (PER). Pioneering PER researchers were not given much respect, and many were ostracized from their departments. Until Eric Mazur at Harvard had his students fail the Force Concept Inventory (FCI), and he changed how he taught because of it. Until Nobel laureate Carl Wieman decided to back PER (all the way to the Office of Science and Technology Policy in the White House). Today, the vast majority of physics teachers know research-based teaching methods (even if they don’t always use them). The FCI existed before Mazur started using it, but it only became widely used after Mazur’s endorsement. The evidence of the FCI didn’t change physics teaching. The voice of authority did.

While we might wish that CS faculty would respond more to evidence than to authority (see previous post on this theme), this insight suggests a path forward.  If we want CS faculty to improve their teaching and adopt evidence-based practices, top-down encouragement can have a large impact.  Well-known faculty at top institutions publicly adopting these practices, and Deans and Chairs promoting them, can help convince faculty to change.

July 15, 2015 at 7:21 am 26 comments
