Posts tagged ‘learning science’

Going beyond the cognitivist in computing education research questions

In Josh Tenenberg’s lead article in the September 2014 ACM Transactions on Computing Education (linked below), he uses this blog, and in particular, this blog post on research questions, as a foil for exploring what questions we ask in computing education research.  I was both delighted (“How wonderful! I have readers who are thinking about what I’m writing!”) and aghast (“But wait!  It’s just a blog post!  I didn’t carefully craft the language the way I might a serious paper!”) — but much more the former.  Josh is kind in his consideration, and raises interesting issues about our perspectives in our research questions.

I disagree with one part of his analysis, though.  He argues that my conception of computing education (“the study of how people come to understand computing”) is inherently cognitivist (centered in the brain, ignoring the social context) because of the word “understand.”  Maybe.  If understanding is centered in cognition, yes, I agree.  If understanding is demonstrated through purposeful action in the world (i.e., you understand computing if you can do with computing what you want), then it’s a more situated definition.  If understanding is a dialogue with others (i.e., you understand computing if you can communicate about computing with others), then it’s more of a sociocognitive definition.

The questions he calls out are clearly cognitivist.  I’m guilty as charged — my first PhD advisor was a cognitive scientist, and I “grew up” as the learning science community was being born.  That is my default position when it comes to thinking about learning.  But I think that my definition of the field is more encompassing, and in my own work, I tend toward thinking more about motivation and about communities of practice.

Asking significant research questions is a crucial aspect of building a research foundation in computer science (CS) education. In this article, I argue that the questions that we ask are shaped by internalized theoretical presuppositions about how the social and behavioral worlds operate. And although such presuppositions are essential in making the world sensible, at the same time they preclude carrying out many research studies that may further our collective research enterprise. I build this argument by first considering a few proposed research questions typical of much of the existing research in CS education, making visible the cognitivist assumptions that these questions presuppose. I then provide a different set of assumptions based on sociocultural theories of cognition and enumerate some of the different research questions to which these presuppositions give rise. My point is not to debate the merits of the contrasting theories but to demonstrate how theories about how minds and sociality operate are immanent in the very questions that researchers ask. Finally, I argue that by appropriating existing theory from the social, behavioral, and learning sciences, and making such theories explicit in carrying out and reporting their research, CS education researchers will advance the field.

via Asking Research Questions.

October 28, 2014 at 8:18 am 8 comments

ICER 2014 Preview: Briana Morrison and an instrument for measuring cognitive load

The International Computing Education Research (ICER) conference 2014 is August 11-13 in Glasgow (see program here).  My involvement starts Saturday August 9 when we have the welcome dinner for the doctoral consortium, which will run all day on Sunday August 10 (Sally Fincher and I are chairing).  The main conference presentations continue through noon on Wednesday August 13. The rest of August 13 and into Thursday August 14 will be a new kind of ICER session: Critical Research Review for work-in-progress.  I’m presenting some new work on constructionism for adults, on which I’m seeking feedback. I’ll blog about that later.

Briana Morrison is presenting her paper on developing an instrument to measure cognitive load (early version of paper available here), with co-authors Brian Dorn (my former student, now a chaired assistant professor at U. Nebraska-Omaha) and me.  Briana’s research is looking at the impacts of modality on program understanding for students.  Does audio vs. video vs. both have an impact on student understanding?  She’s controlling for time in all her presentations, and plans to measure performance…and cognitive load.  Is it harder for students to understand audio descriptions of program code, or to try to read text descriptions while trying to read text programs?

There wasn’t a validated instrument for her to use to measure the components of cognitive load — so she created one.  She took an existing instrument and adapted it to computer science.  She and Brian did the hard work of crunching all the correlations and factor loadings to make sure that the instrument remained valid after her adaptation.  It’s an important contribution, giving computing education researchers another validated tool for measuring something important about learning.

August 8, 2014 at 7:50 am 3 comments

Online education is dead; long live Mentored Simulated Experiences

Roger Schank (one of the founders of both cognitive science and learning science) declares MOOCs dead (including Georgia Tech’s OMS degree, explicitly), while recommending a shift to Mentored Simulated Experiences.  I find his description of MSEs interesting — I think our ebook work is close to what he’s describing, since we focus on worked examples (as a kind of “mentoring”) and low cognitive-load practice (with lots of feedback).

So, while I am declaring online education dead, because every university is doing it and the market will soon be flooded with crap, I am not declaring the idea of a learning by doing mentored experience dead.

So, I propose a new name, Mentored Simulated Experiences.

via Education Outrage: Online education and Online degrees are dead; now let’s move on to something real.

July 3, 2014 at 8:48 am 8 comments

A flawed case against teaching: Scaffolding, direct instruction, and learner-centered classrooms

Premise 1: Teaching is a human endeavor that does not and cannot improve over time.

Premise 2: Human beings are fantastic learners.

Premise 3: Humans don’t learn well in the teaching-focused classroom.

Conclusion: We won’t meet the needs for more and better higher education until professors become designers of learning experiences and not teachers.

at Change | The Case Against Teaching

——

Interesting argument linked above, but wrong.

June 10, 2014 at 9:57 am 5 comments

Addressing Computer Science Student Misconceptions with Contrasts

I have wanted to figure out how to use in my class the interesting findings about using video to address science misconceptions.  The idea is that you want to take real student misunderstandings and contrast them with better, more powerful ways of understanding something.  The challenge for me has been how to get those misunderstandings in class.  I don’t want to call on someone who I know has a misconception and have him lay out his explanation — just to pounce on it and say, “And that’s wrong!”

Then I realized my chance this last week.  I was grading the second midterm, and saw all these surprising misconceptions made evident in the students’ answers.  Normally, the class time after a midterm is about going over the midterm answers.  I decided instead to make it about the misconceptions.

I built a PowerPoint slide deck filled with these contrasting bits of code (like the contrasting explanations in the science videos) and with alternative code for answering the same problem.  I tried to disguise the code so as not to embarrass any particular student.  For example, I changed variable names — and since students expect that changing variable names should make plagiarized code impossible to detect, that should be enough, right?

I formed students into pairs, and then put up the slides and asked them to respond or to answer a question in their pair.  For example, I noticed that several students seemed to confuse IF and WHILE.  So I put up this slide.

An IF and WHILE example

I asked students to punch into their clickers what they thought “A” would print out.  And yes, about 20% of the students guessed something other than “1.” I executed “A” as a way of checking the answer. I then had students answer for “B.”  I could hear lots of discussion suggesting that students were seeing the difference between IF and WHILE.
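The slide code itself isn’t reproduced in the post, but the contrast was along these lines. This is a minimal sketch: the variable names and values are my own illustration, not the actual slide code.

```python
# Hypothetical reconstruction of the IF vs. WHILE contrast
# (not the actual slide code from the post).

# "A": an IF tests its condition once and runs its body at most once.
x = 0
if x < 3:
    x = x + 1
print(x)  # prints 1

# "B": a WHILE re-tests its condition and repeats its body until it fails.
y = 0
while y < 3:
    y = y + 1
print(y)  # prints 3
```

Students who conflate the two expect both snippets to behave the same; executing each one makes the difference concrete.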

I put code up like this:

An infinite WHILE loop with a linked list

I had each group discuss what would be the output of this code, then took suggestions of the output from around the room.  I wrote them on the board, and then had pairs vote on which answer they most agreed with.  By the time we voted, everyone got it right — just generating the options, and hearing the discussion as each option went up, they figured out what the best answer was.  I really liked hearing students “discovering” invariants as they talked, e.g., “The loop can never end, because you never change node1a in the loop!”
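The slide isn’t shown in the post, but the invariant the students voiced (“you never change node1a in the loop”) corresponds to a bug like the following sketch, where the Node class and the values are my own stand-ins:

```python
class Node:
    """A minimal singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

node1a = Node(1, Node(2, Node(3)))

# The buggy pattern from the slide: the loop condition depends on node1a,
# but node1a is never reassigned inside the loop, so the loop never ends.
#
#   while node1a is not None:
#       print(node1a.value)   # always prints 1, forever
#
# Advancing the pointer each iteration makes progress toward termination:
values = []
while node1a is not None:
    values.append(node1a.value)
    node1a = node1a.next
print(values)  # prints [1, 2, 3]
```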

I have no real evidence of learning here — we’ll see how things go in the class.  I do have a sense that this was a more fruitful activity for a post-midterm discussion than just me giving the answers and telling them why the wrong answers were wrong.  That recitation of sins usually just results in students coming up to me with, “You only gave me 5 points for this, but based on the discussion, I think I deserved 7.”  This way, the discussion was punctuated more often with “Ohhhh — now I get it!”

April 1, 2013 at 1:35 am 2 comments

Visiting Indiana University this week

I’m visiting Indiana University this week, and giving two talks.  If any readers are in the Bloomington area, I hope you can stop by!

9:30 am Jan 29
Colloquium
Education 2140

Title: Improving Success in Learning Computer Science Using Lessons from Learning Sciences

Abstract: Learning computer science is difficult, with multiple international studies demonstrating little progress. We still understand too little about the cognitive difficulties of learning programming, but we do know that we can improve success by drawing on lessons from across learning sciences. In this talk, I will describe three examples, where we improve success in learning computer science through application of lessons and models from the learning sciences. We increased the retention of non-CS majors in a required CS course by increasing the relevance of the course (informed by Eccles’ model of achievement-related choices), though we are limited in how far we can go because legitimate peripheral participation is less relevant. We improved opportunities to learn in a collaborative forum by drawing on lessons from anchored instruction, but were eventually defeated by student perceptions of culture. We have improved learning and transfer of knowledge about programming by using subgoal labeling to promote self-explanations.

9 am Thursday Jan 31
SoIC Colloquium Series
IMU State Room East

Title: Three Lessons in Teaching Computing to Everyone

Abstract:  My colleagues and I have been studying how to teach computer science, to CS majors, to non-CS undergraduates, and to adult professionals.  In this talk, I’ll talk about some of what we’ve learned, organized around three lessons.  Lesson #1: We typically teach computer science too abstractly, and by teaching it in a context (e.g., media, robots, Nintendo GameBoys, Photoshop), we can dramatically improve success (retention and learning) for both traditional and non-traditional CS learners. Lesson #2: Collaboration can create opportunities for learning, but classroom culture (e.g., competition) trumps technology (Wikis).  Lesson #3: Our greatest challenge in computer science education is improving teaching, and that will require changes in high schools, in public policy, and in universities.

January 28, 2013 at 11:55 am 3 comments

What are the cognitive skills needed for model-building?

In the blog post linked below, Mylène describes how she’s helping her students develop a set of cognitive skills (including a growth mindset) to help them build models.  What I found fascinating in her post were the implicit points, obvious to her, about what the students didn’t know.  One student said, “I wish someone had told me this a long time ago.”  What are the cognitive skills necessary to enable people to build models, or to program?  Causal thinking is absolutely critical, of course. What else is necessary that we haven’t identified?  We need to check whether students have those skills, or whether we need to teach them explicitly.

Last year I found out in February that my students couldn’t consistently distinguish between a cause and a definition, and trying to promote that distinction while they were overloaded with circuit theory was just too much.  So this year I created a unit called “Thinking Like a Technician,” in which I introduced the thinking skills we would use in the context of everyday examples.

via Growth-Mindset Resource Could Support Model-Building « Shifting Phases.

January 25, 2013 at 1:18 am Leave a comment
