Posts tagged ‘learning sciences’

Interaction beats out video lectures and even reading for learning

I’m looking forward to these results!  That interaction is better than video lectures is really not surprising.  That it leads to better learning than even reading is quite a surprise.  My guess is that this is mediated by student ability as a reader, but as a description of where students are today (like the prior posts on active learning), it’s a useful result.

Koedinger and his team further tested their theory that “learning by doing” is better than lectures and reading in other subjects. Unfortunately, the data on video watching were incomplete. But they were able to determine across four different courses in computer science, biology, statistics and psychology that active exercises were six times more effective than reading. In one class, the active exercises were 16 times more effective than reading. (Koedinger is currently drafting a paper on these results to present at a conference in 2016.)

Source: Did you love watching lectures from your professors? – The Hechinger Report

January 6, 2016 at 8:12 am 7 comments

Blog Post #2000: Barbara Ericson Proposes: Effectiveness and Efficiency of Adaptive Parsons Problems #CSEdWeek

My 1000th blog post looked backward and forward.  This 2000th blog post is completely forward looking, from a personal perspective.  Today, my wife and research partner, Barbara Ericson, proposes her dissertation.

Interesting side note: One of our most famous theory professors just blogged on the theory implications of the Parsons Problems that Barb is studying. See post here.

Barb’s proposal is the beginning of the end of this stage in our lives.  Our youngest child is a senior in high school. When Barbara finishes her Human-Centered Computing PhD (expected mid-2017), we will be empty-nesters and ready to head out on a new adventure.

Title: EVALUATING THE EFFECTIVENESS AND EFFICIENCY OF PARSONS PROBLEMS AND DYNAMICALLY ADAPTIVE PARSONS PROBLEMS AS A TYPE OF LOW COGNITIVE LOAD PRACTICE PROBLEM

Barbara J. Ericson
Ph.D. student
Human Centered Computing
College of Computing
Georgia Institute of Technology

Date: Wednesday, December 9, 2015
Time: 12pm to 2pm EST
Location: TSRB 223

Committee
————–
Dr. James Foley, School of Interactive Computing (advisor)
Dr. Amy Bruckman, School of Interactive Computing
Dr. Ashok Goel, School of Interactive Computing
Dr. Richard Catrambone, School of Psychology
Dr. Mitchel Resnick, Media Laboratory, Massachusetts Institute of Technology

Abstract
———–

Learning to program can be difficult and can result in hours of frustration spent looking for syntactic or semantic errors. This makes it especially difficult to prepare in-service (working) high school teachers who have no prior programming experience to teach programming, since learning to program requires an unpredictable amount of practice time. The United States is trying to prepare 10,000 high school teachers to teach introductory programming courses by fall 2016. Most introductory programming courses and textbooks rely on having learners gain experience by writing lots of programs. However, writing programs is a complex cognitive task that can easily overload working memory, which impedes learning.

One way to potentially decrease the cognitive load of learning to program is to
use Parsons problems to give teachers practice with syntactic and semantic errors as well
as exposure to common algorithms. Parsons problems are a type of low cognitive load
code completion problem in which the correct code is provided, but is mixed up and has
to be placed in the correct order. Some variants of Parsons problems also require the
code to be indented to show the block structure. Distractor code can also be provided
that contains syntactic and semantic errors.
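
To make that concrete, here is a small made-up example (mine, not from Barb's materials) of a Parsons problem in Python. The learner is given the shuffled lines below, one of which is a distractor with a semantic error, and has to select, order, and indent the correct ones:

    total = 1                        # distractor: semantic error (wrong initial value)
    return total / len(numbers)
    def average(numbers):
    for n in numbers:
    total = total + n
    total = 0

A correct solution discards the distractor and arranges the rest:

    def average(numbers):
        total = 0
        for n in numbers:
            total = total + n
        return total / len(numbers)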

In my research, I will compare solving Parsons problems that contain syntactic and semantic errors to fixing code with the same syntactic and semantic errors and to writing the equivalent code. I will examine learning from pre-test to post-test as well as student-reported cognitive load. In addition, I will create dynamically adaptive Parsons problems where the difficulty level of the problem is based on the learner's prior and current progress. If the learner solves a Parsons problem in one attempt, the next problem will be made more difficult. If the learner is having trouble solving a Parsons problem, the current problem will be made easier. This should enhance learning by keeping the problem in the learner's zone of proximal development, as described by Vygotsky. I will compare non-adaptive Parsons problems to dynamically adaptive Parsons problems in terms of enjoyment, completion, learning, and cognitive load.
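
A minimal sketch of that adaptation policy, in Python, might look like the code below. This is my illustration of the idea in the abstract, not Barb's implementation; the particular difficulty knobs (the number of distractors, whether indentation is required) are my assumptions about plausible adjustments.

    STRUGGLE_THRESHOLD = 3  # attempts before easing the current problem

    class ParsonsProblem:
        def __init__(self, distractors=2, must_indent=True):
            self.distractors = distractors  # distractor blocks mixed into the problem
            self.must_indent = must_indent  # learner must indent as well as order the code

    def adapt(problem, attempts, solved):
        """Adjust difficulty based on the learner's progress."""
        if solved and attempts == 1:
            # Solved in one attempt: make the next problem harder.
            problem.distractors += 1
        elif not solved and attempts >= STRUGGLE_THRESHOLD:
            # Struggling: make the current problem easier.
            if problem.distractors > 0:
                problem.distractors -= 1     # drop a distractor first
            else:
                problem.must_indent = False  # then provide the indentation
        return problem

Either way, the goal is the one the abstract names: keep the problem inside the learner's zone of proximal development.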

The major contributions of this work are a better understanding of how variants of
Parsons problems can be used to improve the efficiency and effectiveness of learning to
program and how they relate to code fixing and code writing. Parsons problems can help
teachers practice programming in order to prepare them to teach introductory computer
science at the high school level and potentially help reduce the frustration and difficulty
all beginning programmers face in learning to program.

 

December 9, 2015 at 7:37 am 4 comments

Blog Post #1999: The Georgia Tech School of Computing Education #CSEdWeek

Three and a half years, and 1000 blog posts ago, I wrote my 999th blog post about research questions in computing education (see post here). I just recently wrote a blog post offering my students’ take on research questions in computing education (see post here), which serves to update the previous post. In this blog post, I’m going to go more meta.

In my CS Education Research class (see description here), my students read a lot of work by me and my students, some work on EarSketch by Brian Magerko and Jason Freeman, and some by Betsy DiSalvo. There are other researchers doing work related to computing education in the College of Computing at Georgia Tech, notably John Stasko’s work on algorithm visualization, Jim Foley’s work on flipped classrooms (predating MOOCs by several years), and David Joyner and Ashok Goel’s work on knowledge-based AI in flipped and MOOC classrooms, and my students know some of this work. I posed the question to my students:

If you were going to characterize the Georgia Tech school of thought in computing education, how would you describe it?

We talked some about the contrasts. Work at CMU emphasizes cognitive science and cognitive tutoring technologies. Work at the MIT Media Lab is grounded in constructionism.


Below is my interpretation of what I wrote on the board as they called out comments.

  • Contextualization. The Georgia Tech School of Computing Education emphasizes learning computing in the context of an application domain or non-CS discipline.
  • Beyond average, white male. We are less interested in supporting the current majority learner in CS.
  • Targeted interventions. Georgia Tech computing education researchers create interventions with particular expectations or hypotheses. We want to attract this kind of learner. We aim to improve learning, or we aim to improve retention. We make public bets before we try something.
  • Broader community. Our goal is to broaden participation in computing, to extend the reach of computer science.
  • We are less interested in making good CS students better. To use an analogy, we are not about raising the ceiling. We’re about pushing back the walls and lowering the floors, and sometimes, creating whole new adjacent buildings.
  • We draw on learning sciences theory, which includes cognitive science and educational psychology (e.g., cognitive load theory).
  • We draw on social theories, especially distributed cognition, situated learning, social cognitive theory (e.g., expectancy-value theory, self-efficacy).

I might have spent hours coming up with a list like this, but in ten minutes, my students came up with a good characterization of what constitutes the Georgia Tech School of Thought in Computing Education.

December 7, 2015 at 7:43 am Leave a comment

Human students need active learning and Econs learn from lecture: NYTimes Op-Ed in defense of lecture

I’m sympathetic to the author’s argument (linked below) that being able to understand an argument delivered as a lecture is difficult and worthwhile. But her characterization of active learning is wrong — it’s not “student-led discussion.”  Actually, what she describes as good lecture is close to good active learning.  Having students answer questions in discussion is good, but some students might disengage and not answer.  Small group activities, peer-led team learning, or peer instruction would do more to make sure that all students engage. But that’s not the critical flaw in her argument.

Being able to listen to a complicated lecture is an important skill — but students (at least in STEM, at least in the US) don’t have that skill.  We can complain about that. We can reform primary and secondary schooling so that students develop that skill.  But if we want these students to learn, the ones who are in our classes today, we should use active learning strategies.

Richard Thaler introduced the term “Econs” to describe the rational beings that inhabit traditional economic theory. (See a review of his book Misbehaving for more discussion of Econs.)  Econs are completely rational.  They develop the skills to learn from lecture because it is the most efficient way to learn.  Unfortunately, we are not Econs, and our classes are filled with humans. Humans are predictably irrational, as Dan Ariely puts it. And there’s not much we can do about it. In his book Thinking, Fast and Slow, Daniel Kahneman complains that he knows how he is influenced by biases and too much System 1 thinking — and yet, he still makes the same mistakes.  The evidence is clear that the students in our undergraduate classes today need help to engage with and learn STEM skills and concepts.

The empirical evidence for the value of active learning over lecture is strong (see previous post).  It works for humans.  Lecture probably works for Econs.  If we could find enough of them, we could run an experiment.

In many quarters, the active learning craze is only the latest development in a long tradition of complaining about boring professors, flavored with a dash of that other great American pastime, populist resentment of experts. But there is an ominous note in the most recent chorus of calls to replace the “sage on the stage” with student-led discussion. These criticisms intersect with a broader crisis of confidence in the humanities. They are an attempt to further assimilate history, philosophy, literature and their sister disciplines to the goals and methods of the hard sciences — fields whose stars are rising in the eyes of administrators, politicians and higher-education entrepreneurs.

Source: Lecture Me. Really. – The New York Times

A similar argument to mine is below.  This author doesn’t use the Humans/Econs distinction that I’m using.  Instead, the author points out that lecturers too often teach only to younger versions of themselves.

I will grant that nothing about the lecture format as Worthen describes it is inherently bad. But Worthen’s elegy to a format that bores so many students reminds me of a bad habit that too many professors have: building their teaching philosophies around younger versions of themselves, who were often more conscientious, more interested in learning, and more patient than the student staring at his phone in the back of their classrooms.

Source: Professors shouldn’t only teach to younger versions of themselves

October 30, 2015 at 8:49 am 5 comments

A Terrific and Dismal View of What Influences CS Faculty to Adopt Teaching Practices

Lecia Barker had a terrific paper in SIGCSE 2015 that I just recently had the chance to dig into. (See paper in ACM DL here.)  Here’s the abstract:

Despite widespread development, research, and dissemination of teaching and curricular practices that improve student retention and learning, faculty often do not adopt them. This paper describes the first findings of a two-part study to improve understanding of adoption of teaching practices and curriculum by computer science faculty. The paper closes with recommendations for designers and developers of teaching innovations hoping to increase their chance of adoption.

I’ve published in this area before.  Davide Fossati and I wrote a paper about the practices of CS teachers (based on interviews with about a dozen CS university teachers): how they made change, what convinced them to change, and how they decided if the change worked.  (See blog post about this here.)  The general theme was that these decisions rarely had an empirical basis.

Lecia and her co-authors went far beyond our study.  She interviewed and observed 66 CS faculty from 36 institutions, explicitly chosen to represent a diverse set of schools.  The result is the best picture I’ve yet seen of how CS faculty make decisions.

Lecia found more evidence of teachers using empirical evidence than we did, which was great to see.  But whether students “liked” it or not was still the most critical variable:

On the other hand, if students don’t “like it,” faculty are unlikely to continue using a new practice. At a public research university, a professor said, “You can do something that you think, ‘Wow! If the learning experience was way better this term, the experiment really worked.’ And then you read your teaching reviews, and it’s like the students are pissed off because you did not do what they expected.”

Lecia discovered a reason not to adopt that I’d not heard before.  She found that CS teachers filter out innovations that don’t come from a context like their own.  Those of us at research universities are filtered out by some teachers at teaching-oriented institutions:

Faculty trust colleagues who have similar teaching and research contexts, share attitudes toward students and teaching, or teach similar subjects. In describing what conference speakers he finds credible at SIGCSE, a professor at a private liberal arts university acknowledged, “I do have the anti- ‘Research One’ bias. Like if the speaker is somebody who teaches at <prestigious public research university>, the mental clout that I give them as a teacher—unless they’re a lecturer—I drop them a notch. When someone stands up to speak and they’re from a really successful teaching college <names several> or universities that have a real reputation of being great undergraduate teaching institutions, I give them a lot of merit.”

The part that I found most depressing (even if not surprising) is that research evidence did not matter at all in adopting new ways to teach:

Despite being researchers themselves, the CS faculty we spoke to for the most part did not believe that results from educational studies were credible reasons to try out teaching practices.

Lecia’s study is well done, and the paper is fascinating, but the overall picture is rather dismal.  She points out many other issues that I’m not going into here, like the trade-off between cost and benefit of adopting a new practice, and about the need for specialized equipment in classrooms for some new practices.  Overall, she finds that it’s really hard to get higher education CS faculty to adopt better practices.  We reported on that in “Georgia Computes!” (see post here) but it’s even more disappointing when you see it in a large, broad study like this.

September 21, 2015 at 8:59 am 4 comments

Growing evidence that lectures disadvantage underprivileged students

The New York Times weighs in on the argument about active learning versus passive lecture.  The article linked below supports the proposition that college lectures unfairly advantage those students who are already privileged. (See the post about Miranda Parker’s work for a definition of what is meant by privilege.)

The argument that we should promote active learning over passive lecture has been a regular theme for me for a few weeks now:

  •  I argued in Blog@CACM that hiring ads and RPT requirements should be changed explicitly to say that teaching statements that emphasize active learning would be more heavily weighted (see post here).
  • The pushback against this idea was much greater than I anticipated. I asked on Facebook if we could do this at Georgia Tech. The Dean of the College of Engineering was supportive. Other colleagues were strongly against it. I wrote a blog post about that pushback here.
  • I wrote a Blog@CACM post over the summer about the top ten myths of computing education, which was the top-visited page at CACM during the month of July (see post here).  I wrote that post in response to a long email thread on a College of Computing faculty mailing list, where I saw authority sway CS faculty more than research results did (blog post about that story here).

The NYTimes piece pushes on the point that this is not just an argument about quality of education.  The argument is about what is ethical and just.  If we value broadening participation in computing, we should use active learning methods and avoid lecture. If we lecture, we bias the class in favor of those who have already had significant advantages.

Thanks to both Jeff Gray and Briana Morrison who brought this article to my attention.

Yet a growing body of evidence suggests that the lecture is not generic or neutral, but a specific cultural form that favors some people while discriminating against others, including women, minorities and low-income and first-generation college students. This is not a matter of instructor bias; it is the lecture format itself — when used on its own without other instructional supports — that offers unfair advantages to an already privileged population.

The partiality of the lecture format has been made visible by studies that compare it with a different style of instruction, called active learning. This approach provides increased structure, feedback and interaction, prompting students to become participants in constructing their own knowledge rather than passive recipients.

Research comparing the two methods has consistently found that students over all perform better in active-learning courses than in traditional lecture courses. However, women, minorities, and low-income and first-generation students benefit more, on average, than white males from more affluent, educated families.

Source: Are College Lectures Unfair? – The New York Times

September 18, 2015 at 8:44 am 4 comments

ICER 2015 Preview: Subgoal Labeling Works for Text, Too

Briana Morrison is presenting the next stage of our work on subgoal labeled worked examples, with Lauren Margulieux. Their paper is “Subgoals, Context, and Worked Examples in Learning Computing Problem Solving.” As you may recall, Lauren did a terrific set of studies (presented at ICER 2012) showing how adding subgoal labels to videos of App Inventor worked examples had a huge effect on learning, retention, and transfer (see my blog post on this work here).

Briana and Lauren are now teaming up to explore new directions in the educational psychology space and new directions in computing education research.

  • In the educational psychology space, they’re asking, “What if you make the students generate the subgoal labels?” Past research has found that generating the subgoal labels, rather than just having them given to the students, is harder on the students but leads to more learning.
  • They’re also exploring what happens if the example and the practice come from the same or different contexts (where the “context” is the cover story of the word problem). For example, we might show people how to average test grades, but then ask them to average golf scores — that’s a shift in context.
  • In the computing education research space, Briana created subgoal labeled examples for a C-like pseudocode.

One of the important findings is that they replicated the earlier study, but now in a text-based language rather than a blocks-based language. On average, subgoal labels on worked examples improve performance over getting the same worked examples without subgoal labels. That’s the easy message.
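
To give a flavor of what a subgoal labeled worked example looks like, here is a reconstruction in Python (Briana’s actual stimuli used a C-like pseudocode, and the labels below are my wording, not hers). The comments are the subgoal labels; in the Generate condition, learners see blanks in place of the labels and have to write them themselves:

    # Subgoal: Initialize the accumulator and the counter
    total = 0
    count = 0

    # Subgoal: Process each value, updating the accumulator and the counter
    for grade in [88, 92, 79, 85]:
        total = total + grade
        count = count + 1

    # Subgoal: Compute and report the result
    print(total / count)

A same-context practice problem would then ask the learner to average another set of test grades; a context shift would ask for, say, golf scores instead.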

The rest of the results are much more puzzling. Being in the same context (e.g., seeing test scores averaged in the worked examples, then being asked to average test scores in the practice) led to statistically worse performance than having to shift contexts (e.g., from test scores to golf scores). Why might that be?

Generating labels did seem to help performance. The Generate group, however, had the highest attrition. That makes sense, because increased complexity and cognitive load would predict that more participants would give up. But that drop-out rate makes it hard to make strong claims: now we’re comparing everyone in the other groups to only “those who gut it out” in the Generate group. The results are more suspect.

There is more nuance, and there are deeper explanations, in Briana’s paper than I’m providing here. I find this paper exciting. We have an example here of well-established educational psychology principles not quite working as you might expect in computer science. I don’t think it puts the principles in question. It suggests to me that there may be some unique learning challenges in computer science; e.g., if the complexity of computer science is greater than in other studies, then it’s easier for us to reach cognitive overload. Briana’s line of research may help us understand how learning computing is different from learning statistics or physics.

August 7, 2015 at 7:40 am 4 comments
