Posts tagged ‘educational psychology’

ICER 2014 Preview: Briana Morrison and an instrument for measuring cognitive load

The International Computing Education Research (ICER) conference 2014 is August 11-13 in Glasgow (see program here).  My involvement starts Saturday August 9, when we have the welcome dinner for the doctoral consortium, which will be run all day on Sunday August 10 (Sally Fincher and I are chairing).  The main conference presentations continue through noon on Wednesday August 13.  The rest of August 13 and into Thursday August 14 will be a new kind of ICER session: a Critical Research Review for work-in-progress.  I’m presenting some new work on constructionism for adults that I’m getting feedback on.  I’ll blog about that later.

Briana Morrison is presenting her paper on developing an instrument to measure cognitive load (an early version of the paper is available here), with co-authors Brian Dorn (my former student, now a chaired assistant professor at U. Nebraska-Omaha) and me.  Briana’s research looks at the impact of modality on students’ program understanding.  Does audio vs. video vs. both have an impact on student understanding?  She’s controlling for time in all her presentations, and plans to measure performance…and cognitive load.  Is it harder for students to understand audio descriptions of program code, or to read text descriptions while also reading the program text?

There wasn’t a validated instrument for her to use to measure the components of cognitive load — so she created one.  She took an existing instrument and adapted it to computer science.  She and Brian did the hard work of crunching all the correlations and factor loadings to make sure that the instrument is still valid after her adaptation.  It’s an important contribution: it gives computing education researchers another validated tool for measuring something important about learning.
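Her paper has the details of her analysis.  As a rough illustration of the kind of reliability statistic involved in validating an instrument, here is a minimal sketch in Python of Cronbach’s alpha, a standard internal-consistency measure over item responses — the data and function below are made up for illustration, not taken from Briana’s analysis.

import numpy as np

def cronbach_alpha(responses):
    # responses: one row per participant, one column per instrument item
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1)        # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (n_items / (n_items - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical data: five participants rating four items on a 1-9 effort scale
ratings = [[7, 6, 7, 8],
           [3, 4, 3, 2],
           [5, 5, 6, 5],
           [8, 7, 9, 8],
           [2, 3, 2, 3]]
print(round(cronbach_alpha(ratings), 2))  # prints 0.97; values above ~0.8 suggest consistent items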

August 8, 2014 at 7:50 am Leave a comment

People problem-solve differently in foreign languages: Implications for programming languages

Since states are making computing courses count as foreign language courses (even if that’s a bad idea), it’s worthwhile to consider the value of learning a foreign language.  A recent Freakonomics podcast (linked below) considers the return on investment of learning a foreign language.  Most intriguing is that people problem-solve differently in their non-native languages.  I wonder what the implications are for programming languages.  We know that people experience negative transfer when their native-language abilities conflict with their programming-language problem-solving.  Are there ways we could make programming languages better for problem-solving?

Learning a language is of course not just about making money — and you’ll hear about the other benefits. Research shows that being bilingual improves executive function and memory in kids, and may stall the onset of Alzheimer’s disease.

And as we learn from Boaz Keysar, a professor of psychology at the University of Chicago, thinking in a foreign language can affect decision-making, too — for better or worse.

via Freakonomics » Is Learning a Foreign Language Really Worth It? A New Freakonomics Radio Podcast.

July 24, 2014 at 9:31 am Leave a comment

Teaching programming could be made easier

Gas station without pumps’ post on Garth’s complaint “Teaching programming is not getting easier” intrigued me.  Garth does a good job of pulling together a lot of the themes of what makes teaching CS hard today.  I think that we can improve the situation.  I’m particularly interested in learning how to scaffold the development of programming knowledge, and we have to find ways to create professional communities of CS teachers.  There are techniques to share (worked examples, peer instruction, pair programming, Parsons problems, audio tours), and we’re clearly not doing a good job of sharing them yet.

In programming there are 4 homework problems over the period of a week, none of which are “easy”, and all require some problem solving and thinking.  There is somewhat of an incremental progression to the problems but that step from written problem to code is always a big one.  It is somewhat similar to solving word problems in math, every student’s favorite task.  For programming there are no colleagues available that have as much or more experience to pull teaching ideas from, if there are any other programming teachers at all.  There are no pedagogical resources anywhere online for teaching strategies.  After watching a number (3) of programming teachers teach it seems the teaching strategy is pretty consistent; show and tell and hope.

via Teaching programming is not getting easier. | Garth’s CS Education Blog.
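To make one of those techniques concrete: a Parsons problem gives students the correct lines of a program in scrambled order and asks them to arrange (and indent) the lines, which takes the burden of writing syntax from scratch off the novice.  A minimal sketch, with a problem and data I made up for illustration:

# The lines below solve "print the average of the positive numbers in a list,"
# but they are presented shuffled; the student's task is to put them in order
# (and at the right indentation) rather than write the code from scratch.
shuffled_lines = [
    "        total = total + n",
    "for n in numbers:",
    "print(total / count)",
    "total = 0",
    "    if n > 0:",
    "count = 0",
    "        count = count + 1",
]

# The instructor's reference ordering, used here to check that the problem works:
solution = "\n".join([
    "total = 0",
    "count = 0",
    "for n in numbers:",
    "    if n > 0:",
    "        total = total + n",
    "        count = count + 1",
    "print(total / count)",
])

numbers = [5, -2, 7, 0, 3]   # sample data for exercising the reference solution
exec(solution)               # prints 5.0: the average of 5, 7, and 3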

June 17, 2014 at 8:55 am 12 comments

MOOCs: One Size Doesn’t Fit All

My colleague, Amy Bruckman, considers in her blog how HCI design principles lead us to question whether MOOCs can achieve their goals.

Can a MOOC teach course content to anyone, anywhere? It’s an imagination-grabbing idea. Maybe everyone could learn about topics from the greatest teachers in the world! Create the class once, and millions could learn from it!

It seems like an exciting idea. Until you realize that the entire history of human-computer interaction is about showing us that one size doesn’t fit all.

via MOOCs: One Size Doesn’t Fit All | The Next Bison: Social Computing and Culture.

April 18, 2014 at 1:38 am 1 comment

Are we getting better at handling abstraction? – Radiolab podcast on Killing Babies, Saving the World

I’m a fan of Radiolab podcasts.  The one referenced below talks about the Flynn effect: comparisons of IQ tests across the decades suggest that we’ve been getting smarter over the last 100 years.  Josh Greene argues that we (as humans in the developing world) may be developing a greater ability to handle abstract thinking.  Abstraction isn’t everything in computer science (as Bennedsen and Caspersen showed us in 2008), but it is important.  Could our problems with computing education resolve over time, because we’re all getting better at abstraction?  Might it become easier to teach computer science in future decades, as we develop better cognitive abilities?  Given that performance on the Rainfall Problem has not improved over the last thirty years, I doubt it, but it’s an intriguing hypothesis.
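For readers who haven’t run into it: the Rainfall Problem (from Elliot Soloway’s studies in the 1980s) asks students to average a list of rainfall readings up to a sentinel value while skipping invalid negative readings, and novices have struggled with it for decades.  Here is a rough sketch of an intended solution, in Python rather than the original Pascal, with the problem statement paraphrased:

# Rainfall Problem (paraphrased): read daily rainfall amounts until the
# sentinel value 99999, ignore negative (invalid) readings, and report
# the average of the valid readings.
def average_rainfall(readings):
    total = 0
    count = 0
    for value in readings:
        if value == 99999:      # the sentinel marks the end of the input
            break
        if value >= 0:          # skip invalid negative readings
            total = total + value
            count = count + 1
    return total / count if count > 0 else 0

print(average_rainfall([12, 0, -5, 30, 99999, 7]))   # prints 14.0

The code is short, but the student has to coordinate the loop, the sentinel, the guard against invalid data, the division-by-zero case, and the averaging all at once — a lot of plan composition for one small program.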

Robert talks to Josh Greene, the Harvard professor we had on our Morality show. They revisit some ideas from that show in the context of the big, complicated problems of today (think global warming and nuclear war). Josh argues that to deal with those problems, we’re going to have to learn how to make better use of that tiny part of our brain that handles abstract thinking. Not a simple proposition, but, despite the odds, Josh has hope.

via Killing Babies, Saving the World – Radiolab.

March 11, 2014 at 1:10 am 8 comments

Learnersourcing subgoal labeling to support learning from how-to videos

What a cool idea!  Rob Moore is building on the subgoal labeling work that we (read: “Lauren”) did, and is using crowd-sourcing techniques to generate the labels.

Subgoal labeling [1] is a technique known to support learning new knowledge by clustering a group of steps into a higher-level conceptual unit. It has been shown to improve learning by helping learners to form the right mental model. While many learners view video tutorials nowadays, subgoal labels are often not available unless manually provided at production time. This work addresses the challenge of collecting and presenting subgoal labels to a large number of video tutorials. We introduce a mixed-initiative approach to collect subgoal labels in a scalable and efficient manner. The key component of this method is learnersourcing, which channels learners’ activities using the video interface into useful input to the system. The presented method will contribute to the broader availability of subgoal labels in how-to videos.

via Learnersourcing subgoal labeling to support learning from how-to videos.
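For readers new to the idea: a subgoal label is a short, meaningful name for a cluster of steps in a worked example, so the learner builds a model of the procedure’s structure instead of memorizing individual steps.  Here is a hypothetical programming example — the labels and code are mine, for illustration only, not drawn from Lauren’s materials or Rob’s system:

# A worked example with subgoal labels: each comment names the conceptual
# chunk ("subgoal") that the steps beneath it accomplish.

# Subgoal: initialize the accumulators
total = 0
count = 0

# Subgoal: process every element
for score in [88, 92, 79, 85]:
    # Subgoal: update the accumulators with this element
    total = total + score
    count = count + 1

# Subgoal: compute and report the result
print("Average:", total / count)    # prints: Average: 86.0

The learnersourcing idea in the paper is to get labels like these for the steps of how-to videos from the learners themselves, instead of asking the video’s author to write them at production time.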

February 12, 2014 at 1:11 am 3 comments

Big Data vs. Ed Psychology: Work harder vs. work smarter

I met with a prospective PhD student recently who told me that she’s interested in using big data to inform her design of computing education.  She said that she disliked designing something and then just crossing her fingers, hoping it would work.  She and the faculty she’s working with are trying to use big data to inform their design decisions.

That’s a fine approach, but it’s pretty work-intensive.  You gather all this data, then you have to figure out what’s relevant, and what it means, and how it influences practice.  It’s a very computer science-y way of solving the problem, but it’s rather brute force.

There is a richer data source with much more easily applicable design guidelines: educational psychology literature.  Educational psychologists have been thinking about these issues for a long time.  They know a lot of things.

We’re finding that we can inform a lot of our design decisions by simply reading the relevant education literature.

I was recently reading a computer science paper in which the author said that we don’t know much about mathematics education because we’ve never had enough data to come up with findings.  But there were no references to the mathematics education literature.  We actually know a lot about mathematics education.  Too often, I fear, we computer scientists want to invent it all ourselves, as if that were a better approach.  Why not just talk to, and read the work of, the really smart people who have devoted their lives to figuring out how to teach better?

 

January 31, 2014 at 1:35 am 6 comments

Musicians’ First Teachers: What can educators do to sustain interest?

What can the teacher do to inculcate interest?  What responsibility does the teacher have to sustain interest?  If there is a way to teach that can be effective, don’t teachers have a moral obligation to teach that way?

In general, findings from studies of interest suggest that educators can (a) help students sustain attention for tasks even when tasks are challenging—this could mean either providing support so that students can experience a triggered situational interest or feedback that allows them to sustain attention so that they can generate their own curiosity questions; (b) provide opportunities for students to ask curiosity questions; and (c) select or create resources that promote problem solving and strategy generation.

via Musicians’ First Teachers « Annie Murphy Paul.

January 15, 2014 at 1:27 am 1 comment

Expressed and measured vocational interests

An interesting paper I found reading Annie Murphy Paul’s blog (PDF at http://www.uncg.edu/~p_silvia/papers/01%20JVB.pdf).  An Expressed Interest is an answer to a question like “What career do you plan to pursue after college?”  A Measured Vocational Interest comes from an inventory: measuring, say, an interest in mathematics and suggesting that the student go into accounting.  The former are far more predictive of future careers than the latter.  Why are we so bad at predicting what field someone should go into based on measures of their interests?  I’ll bet that it has to do with more than just interests, like Eccles’ model of academic achievement (how do people think about this career? can you see yourself in this career?) and values (which are different from interests).

December 5, 2013 at 1:15 am Leave a comment

Too little and too much self-efficacy is bad for interest

What an interesting paper!  (Pun slightly intended.)  In this paper, Paul Silvia found experimentally that self-efficacy and interest are related along a bell-shaped curve.  Too little self-efficacy makes a task seem too daunting and uninteresting.  Too much makes the task boring.  This is important because we know that self-efficacy is among the most significant factors influencing non-majors’ success in learning to program.  It’s clear that there’s a sweet spot that we’re aiming for.

[Graph from Silvia’s paper: the relationship between self-efficacy and interest]

November 28, 2013 at 1:25 am 11 comments

Say Goodbye to Myers-Briggs, the Fad That Won’t Die

Once in our Learning Sciences seminar, we all took the Myers-Briggs test on day 1 of the semester, and again at the end.  Almost everybody’s score changed.  So, why do people still use it as some kind of reliable test of personality?

A test is reliable if it produces the same results from different sources. If you think your leg is broken, you can be more confident when two different radiologists diagnose a fracture. In personality testing, reliability means getting consistent results over time, or similar scores when rated by multiple people who know me well. As my inconsistent scores foreshadowed, the MBTI does poorly on reliability. Research shows “that as many as three-quarters of test takers achieve a different personality type when tested again,” writes Annie Murphy Paul in The Cult of Personality Testing, “and the sixteen distinctive types described by the Myers-Briggs have no scientific basis whatsoever.” In a recent article, Roman Krznaric adds that “if you retake the test after only a five-week gap, there’s around a 50% chance that you will fall into a different personality category.”

via Say Goodbye to MBTI, the Fad That Won’t Die | LinkedIn.

November 5, 2013 at 1:53 am 5 comments

What to do about laptops in lectures: Worse for the bystanders

Fascinating result: The bystanders have their learning impacted more than the ones who opened up the laptop.

There is a fundamental tension here, and I don’t know how to resolve it. On the one hand, I like it when students have their laptops in class. Many of them are more comfortable taking notes this way than longhand. In the middle of a lecture I might ask someone to look something up that I don’t know off the top of my head.

On the other hand, the potential for distraction is terrible. I’ve walked in the back of the classroom of many of my colleagues and seen that perhaps 50% of the students are on the Web.

via What to do about laptops in lectures? – Daniel Willingham.

November 1, 2013 at 1:07 am 7 comments

How much is too much time spent on testing in schools?


Exactly how much standardized testing are school districts subjecting students to these days? A nearly staggering amount, according to a new analysis.

“Testing More, Teaching Less: What America’s Obsession with Student Testing Costs in Money and Lost Instructional Time,” released by the American Federation of Teachers, looks closely at two unnamed medium-sized school districts — one in the Midwest and one in the East — through the prism of their standardized testing calendars.

via How much time do school districts spend on standardized testing? This much..

This article is worth blogging on for two reasons:

First, my colleagues in the UK were stunned when I told them that most tests that students take in US schools are locally invented.  “Doesn’t that lead to a lot of wasted effort?”  Perhaps so — this report seems to support my claim.

Second, I don’t find that much testing either staggering or undesirable.  Consider the results on the Testing Effect — students learn from testing.  20 hours in an academic year is not too much, if we think about testing as driving learning.  We don’t know if these are good or useful tests, or if they are being used in a way that might motivate more learning, so 20 hours isn’t obviously a good thing.  But it’s also not obviously a bad thing.

Consider the results of the paper presented by Michael Lee at ICER 2013 this year (which won the “John Henry Award,” the people’s choice best paper award).  They took a video game that required programming (Gidget) and added explicit assessments — quizzes that popped up at the end of each level, asking you questions about what you did.  They found that such assessments actually increased engagement and time-on-task.  Their participants (both control and experimental) were recruited from Amazon’s Mechanical Turk, so they were paid to complete more levels.  Adding assessments led to more levels completed and less time per level — that’s pretty remarkable.

[Graph from Lee et al.: engagement and speed results from the Gidget assessments study]

Maybe what we need is not fewer tests, but better and more engaging tests.

 

September 13, 2013 at 1:57 am 3 comments

More evidence for Aptitude-Treatment Interactions

The same kind of educational opportunity does not work for all students. In particular, constructionism may not provide enough structure for low-achieving students. (See previous discussion about boredom vs. failure.)

Moreover, the researchers found different approaches effective for different types of students: “Usually people say, ‘Yes, autonomy is beneficial. We want to provide students with choices in school,’ This is the case for high achievers, but not low achievers,” Wang said. “Low achievers want more structure, more guidelines.”

via ‘Active’ Student Engagement Goes Beyond Class Behavior, Study Finds – Inside School Research – Education Week.

August 6, 2013 at 1:36 am 1 comment

Taking a test is better than studying, even if you just guess: We need to flip the flipped classroom

The benefits of testing for learning are fascinating, and the result described below makes me even more impressed with the effect.  It suggests even more strongly that the critical feature of learning is trying to understand and to generate an answer, rather than just reading one.

Suppose, for example, that I present you with an English vocabulary word you don’t know and either (1) provide a definition that you read (2) ask you to make up a definition or (3) ask you to choose from among a couple of candidate definitions. In conditions 2 & 3 you obviously must simply guess. (And if you get it wrong I’ll give you corrective feedback.) Will we see a testing effect?

That’s what Rosalind Potts & David Shanks set out to find, and across four experiments the evidence is quite consistent. Yes, there is a testing effect. Subjects better remember the new definitions of English words when they first guess at what the meaning is–no matter how wild the guess.

via Better studying = less studying. Wait, what? – Daniel Willingham.

These results mesh well with a new study from Stanford.  They found that the order of events in a “flipped” classroom matters — the problem-solving activity (in the classroom) should come before the reading or videos (at home). The general theme is the same in both sets of studies: problem-solving drives learning, and it’s less true that studying prepares one for problem-solving.

A new study from the Stanford Graduate School of Education flips upside down the notion that students learn best by first independently reading texts or watching online videos before coming to class to engage in hands-on projects. Studying a particular lesson, the Stanford researchers showed that when the order was reversed, students’ performances improved substantially.

via Classes should do hands-on exercises before reading and video, Stanford researchers say.

July 30, 2013 at 1:47 am 15 comments
