Posts tagged ‘learning sciences’

Seeking Collaborators for a Study of Achievement Goal Theory in CS1: Guest blog post by Daniel Zingaro

I have talked about Dan’s work here before, such as his 2014 award-winning ICER paper and his Peer Instruction in CS website. I met with Dan at the last SIGCSE, where he told me about the study that he and Leo Porter were planning. Their earlier results are fascinating since they run counter to what Achievement Goal Theory predicts. I invited him to write a guest blog post to seek collaborators for his study, and I am grateful that he sent me this.

Why might we apply educational theory to our study of novice programmers? One core reason lies in theory-building: if someone has developed a general learning theory, then we might do well to co-opt and extend it for the computing context. What we get for free is clear: a theoretical basis, perhaps with associated experimental procedures, scales, hypotheses, and predictions. Unfortunately, there is often a cost in appropriating such a theory: it may not replicate for us in the expected ways.

Briana Morrison’s recent work nicely highlights this point. In two studies, Briana reports her efforts to replicate what is known about subgoals and worked examples. Briefly, a worked example is a sample problem whose step-by-step solution is given to students, and subgoals break that solution into logical chunks, helping students map out how the steps fit together to solve the problem.
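
To make these constructs concrete, here is a minimal sketch of what a subgoal-labeled worked example might look like in an introductory Python course (a hypothetical illustration, not an excerpt from Briana’s materials):

```python
# Worked example (hypothetical): average the even numbers in a list.
# The comments are the subgoal labels; they chunk the solution steps
# into a plan that students can map onto new problems.

numbers = [4, 7, 10, 3, 8]

# Subgoal: initialize the accumulators
total = 0
count = 0

# Subgoal: select the relevant items
for n in numbers:
    if n % 2 == 0:
        # Subgoal: update the accumulators
        total = total + n
        count = count + 1

# Subgoal: compute the result from the accumulators
print(total / count)  # prints 7.333...
```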

Do subgoals help? Well, it’s supposed to go like this, according to the educational psychology literature: having students generate their own subgoal labels is best, giving students the subgoal labels is worse, and not using subgoals at all is worse still. But that isn’t what Briana found. For example, Briana reports [1] that, on Parsons puzzles, students who are given subgoal labels do better than both those who generate their own subgoal labels and those given no subgoals at all. Why the differences? One possibility is that programming exerts considerable cognitive load on the learner, and the additional load incurred by generating subgoal labels overloads the student and harms learning.
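
For readers unfamiliar with Parsons puzzles: the learner receives the correct lines of a program in scrambled order and must rearrange them. Here is a minimal sketch of the given-labels condition (hypothetical, not Briana’s instrument), where the subgoal labels are supplied as comments among the lines to be ordered:

```python
import random

# The correct solution, with supplied subgoal labels chunking the steps.
solution = [
    "# Subgoal: initialize the accumulator",
    "total = 0",
    "# Subgoal: loop over the scores",
    "for score in [88, 72, 95]:",
    "    total = total + score",
    "# Subgoal: report the average",
    "print(total / 3)",
]

# Present the lines in scrambled order; the learner's task is to
# restore the ordering above.
puzzle = solution[:]
random.shuffle(puzzle)
for line in puzzle:
    print(line)
```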

The point here is that taking seriously the idea of leveraging existing theory requires concomitant attention to how and why the theory may operate differently in computing.

My particular interest here is in another theory from educational psychology: achievement goal theory (AGT). AGT studies the goals that students adopt in achievement situations, and the positive and negative consequences of those goals for educationally relevant outcomes. AGT focuses on two main goal types: mastery goals (where competence is defined intrapersonally) and performance goals (where competence is defined normatively, in comparison to others).

Do these goals matter? Well, it’s supposed to go roughly like this: mastery goals are positively associated with many outcomes of value, such as interest, enjoyment, self-efficacy, and deep study strategies (but not academic performance); performance goals, surprisingly and confusingly, are positively associated with academic performance. But, paralleling Briana’s studies above, this isn’t what we’ve found in CS. With Leo Porter and my students, we’ve been studying goal-outcome links in novice CS students. We’ve found, contrary to theoretical expectations, that performance goals appear to be null or negative predictors of performance, and that mastery goals appear to be positive predictors of performance [2,3].
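
To make “predictors” concrete: analytically, this comes down to the sign of a regression coefficient. Here is a minimal sketch with simulated data (hypothetical numbers chosen to mimic the pattern reported in [2,3], not the studies’ actual data or analysis):

```python
import numpy as np

# Simulate 200 students: Likert-style goal scores, and an exam grade
# constructed so mastery predicts positively and performance slightly
# negatively -- the CS pattern, not AGT's prediction.
rng = np.random.default_rng(0)
n = 200
mastery = rng.uniform(1, 7, n)
performance = rng.uniform(1, 7, n)
exam = 50 + 4 * mastery - 1 * performance + rng.normal(0, 8, n)

# Ordinary least squares: exam ~ intercept + mastery + performance
X = np.column_stack([np.ones(n), mastery, performance])
beta, *_ = np.linalg.lstsq(X, exam, rcond=None)
print("intercept, mastery, performance:", np.round(beta, 2))
```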

We are now conducting a larger study of achievement goals and outcomes of CS1 students — larger than that achievable with the couple of institutions to which we have access on our own. We are asking for your help.

The study involves administering two surveys to students in a CS1 course. The first survey, at the beginning of the semester, measures student achievement goals. The second survey, close to the end of the semester, measures potential mediating variables. We plan to collect exam grades, interest in CS, and other outcome variables.

The hope is that we can conduct a multi-institutional study of a variety of CS1 courses to strengthen what we know about achievement goals in CS.

Please contact me at daniel dot zingaro at utoronto dot ca if you are interested in participating in this work. Thanks!

[1] Briana Morrison. Subgoals Help Students Solve Parsons Problems. SIGCSE, 2016. ACM DL link.

[2] Daniel Zingaro. Examining Interest and Performance in Computer Science 1: A Study of Pedagogy and Achievement Goals. TOCE, 2015. ACM DL link.

[3] Daniel Zingaro and Leo Porter. Impact of Student Achievement Goals on CS1 Outcomes. SIGCSE, 2016. ACM DL link.

July 15, 2016 at 7:30 am

Are there elements of human nature that could be better harnessed for better educational outcomes?

I don’t often link to Quora, but when it’s Steven Pinker pointing out the relationship between our human nature and educational goals, it’s worth it.

One potential insight is that educators begin not with blank slates but with minds that are adapted to think and reason in ways that may be at cross-purposes with the goals of education in a modern society. The conscious portion of language consists of words and meanings, but the portion that connects most directly to print consists of phonemes, which ordinarily are below the level of consciousness. We intuitively understand living species as having essences, but the theory of evolution requires us to rethink them as populations of variable individuals. We naturally assess probability by dredging up examples from memory, whereas real probability takes into account the number of occurrences and the number of opportunities. We are apt to think that people who disagree with us are stupid and stubborn, while we are overconfident and self-deluded about our own competence and honesty.

Source: Are there elements of human nature that could be better harnessed for better educational outcomes? – Quora

July 13, 2016 at 7:57 am

Motivating STEM Engagement in Children, Families, and Communities

I’ve known Dan Hickey for many years, and got to spend some time with him at Indiana when I visited there a couple of years ago. He’s dealing with an issue in this blog post that is critical to CS Education. If we want students to value computing, it has to be valued and promoted in their families and communities. How do we get engagement with computing education beyond the school level?

These issues of trajectories and non-participation in STEM learning have personal relevance for me and my own family. I was quite pleased a few years ago when my son Lucas enrolled in a computer programming class in high school. I never learned to program myself, and these days I find it quite a handicap. While I bought an Apple II+ computer in 1982 (!) and taught myself BASIC, an instructional technology professor discouraged me from delving too deeply into technology or programming (because “it changes too often”). While I still want to learn how to code, my non-participation in programming clearly helped define my trajectory towards a Ph.D. in Psychology and a satisfying career as a Learning Scientist.

Unfortunately, the curriculum in my son’s programming class was like the typical secondary computer science instruction that Mark Guzdial chronicles in his Computing Education blog. The coding worksheets seemed to have been haphazardly created to match various videos located on the web. My son wanted to use the much more professional videos and exercises that we were able to access via my university’s account at Lynda.com, but his teacher insisted that my son complete the worksheets as well (so the teacher could grade them).

Source: re-mediating assessment: Motivating STEM Engagement in Children, Families, and Communities

May 27, 2016 at 8:04 am

The Community of Practice for CS teachers? Suggestion: It’s not teachers

My Blog@CACM post this month is on the AAAS symposium I attended on undergraduate STEM education (see post here). The symposium set up for me a contrast between computing education and other STEM education. In math and science education, faculty are more likely than CS faculty to get continuing professional development and to value education.

Why is it different in CS? In the blog post, I suggest that part of the issue is the maturation of the field. But I have another hypothesis: most CS teachers, especially at the undergraduate level, don’t think of themselves as teachers.

In my book Learner-Centered Design of Computing Education, I use Lave & Wenger’s situated learning theory as a lens for understanding motivations to pursue computing education.  Lave & Wenger say every learner aims to join a community of practice.  Learners start out on the periphery of the community, and work their way towards the center, adopting the skills, values, and knowledge that those in the center hold. They might need to take classes because that’s what the community values, or maybe they do an apprenticeship. The community of practice provides the learner and the practitioners a sense of identity: “I belong with this group. I do this practice. This is who I am.”

Lijun Ni taught me the value of teacher identity. Someone who says “I’m a math teacher” (for example) will join math teacher organizations, will seek out professional development, and will be retained longer as a teacher. That’s their identity.

I believe that many science and math teachers (even at the undergraduate level) feel a sense of identity as teachers. Even at research universities, those teaching the intro courses in mathematics and science are likely teachers-first. They know that they are mostly not preparing future mathematicians, biologists, chemists, and physicists. They are preparing students for their chosen professions, perhaps in engineering, medicine, or computer science. The math and science teachers belong to a community of practice of teachers, e.g., they have a goal to be like the best teachers in their profession. They have an identity as teachers, e.g., they strive to be better math and science teachers.

I suspect that CS teachers feel a sense of identity as software developers. They see themselves as programmers primarily. They see themselves as producing future programmers. They take pride in what they can do with code. They have a sense of guardianship — they want the best and brightest in their field.

There’s a difference between CS teachers as programmers and CS teachers as teachers. Programmers train other programmers. They learn new programming languages, new techniques of programming, the latest tools. Teachers teach everyone, and they learn how to be better at teaching. We need CS teachers to be teachers. It’s less important that they know the latest industry gadgets. It’s more important that they learn how to teach CS to all students, and how to teach that CS better.

When Grady Booch came to SIGCSE 2007, I was surprised at how excited everyone was — people still talk about that visit (e.g., see the explanation for the BJC approach to computing). I realized that, for most of the people in the room, Grady was a role model. He was at the center of the community that they most cared about. Note that Grady is not a teacher. He’s an exceptional software engineer.

There are serious ramifications of a teacher with an identity as a software engineer.  I had a discussion a few months ago with one of our instructors, who told me, “I just don’t get why women would even want to be in computer science.  Working in a cubicle is not a great place for women to be! They should get a better job.”  I was shocked. I didn’t tackle the gender issues first. I started out trying to convince him that computer science doesn’t just lead to a cubicle. You could study computer science to become something other than a software developer, to work somewhere other than a cubicle. He wasn’t buying my argument. I realized that those cubicle jobs are the ones he wants to prepare students for. That’s where he imagines the best programmers working. He doesn’t want to teach computer science for whatever the students need it for. He prepares future programmers. That’s how he defines his job — a master software engineer with apprentice software engineers.

I am calling out undergraduate CS teachers in this post, but I suspect that many high school CS teachers see themselves as software developers (or as trainers of software developers), more than as teachers of computer science.  I hear about high school CS teachers who proudly post on the wall the t-shirts of the tech companies who employ their former students.  That’s a software developer focus, an apprenticeship focus. That’s not about teaching CS for all.

What would it take to shift the community of practice of CS teachers to value teaching over software development?  It’s an important change in perspective, especially if we care about CS for all. Not all of our students are aiming for jobs in software development.

How did other STEM disciplines do it?  How did they develop a culture and community of practice around teaching?

May 23, 2016 at 7:35 am

Transfer of learning: Making sense of what education research is telling us

I enjoy reading “Gas station without pumps,” and the post quoted below was one I wanted to respond to.

Two of the popular memes of education researchers, “transferability is an illusion” and “the growth mindset”, are almost in direct opposition, and I don’t know how to reconcile them.

One possibility is that few students actually attempt to learn the general problem-solving skills that math, CS, and engineering design are rich domains for.  Most are content to learn one tiny skill at a time, in complete isolation from other skills and ideas. Students who are particularly good at memory work often choose this route, memorizing pages of trigonometric identities, for example, rather than learning how to derive them at need from a few basics. If students don’t make an attempt to learn transferable skills, then they probably won’t.  This is roughly equivalent to claiming that most students have a fixed mindset with respect to transferable skills, and suggests that transferability is possible, even if it is not currently being learned.

Teaching and testing techniques are often designed to foster an isolation of ideas, focusing on one idea at a time to reduce student confusion. Unfortunately, transferable learning comes not from practice of ideas in isolation, but from learning to retrieve and combine ideas—from doing multi-step problems that are not scaffolded by the teacher.

Source: Transfer of learning | Gas station without pumps

The problem with “transferability” is that it’s an ill-defined term. Certainly, there is transfer of skill between domains. Sharon Carver showed a long time ago that she could teach students to debug Logo programs, and they would transfer that debugging process to debugging instructions on a map (mentioned in a post here). That’s transferring a skill or a procedure. We probably do transfer big, high-level heuristics like “divide-and-conquer” or “isolate the problem.” One issue is whether we can teach them. John Sweller says that we can’t — we must learn them (they’re a necessary survival skill), but they’re learned from abstracting experience (see Neil Brown’s nice summary of Sweller’s SIGCSE keynote).

Whether we can teach them or not, what we do know is that higher-order thinking is built on lots of content knowledge. Novices are unlikely to transfer until they know a lot of stuff, a lot of examples, a lot of situations. For example, novice designers often have “design fixation.”  They decide that the first thing they think of must be the right answer.  We can insist that novice designers generate more designs, but they’re not going to generate more good designs until they know more designs.  Transfer happens pretty easily when you know a lot of content and have seen a lot of situations, and you recognize that one situation is actually like another.

Everybody starts out learning one tiny skill at a time.  If you know a lot of skills (maybe because you have lots of prior experience, maybe because you have thought about these skills a lot and have recognized the general principles), you can start chunking these skills and learning whole schema and higher-level skills.  But you can’t do that until you know lots of skills.  Students who want to learn one tiny skill at a time may actually need to still learn one tiny skill at a time. People abstract (e.g., able to derive a solution rather than memorize it) when they know enough content that it’s useful and possible for them to abstract over it.  I completely agree that students have to try to abstract.  They have to learn a lot of stuff, and then they have to be in a situation where it’s useful for them to abstract.

“Growth mindset” is a necessity for any of this to work.  Students have to believe that content is worth knowing and that they can learn it.  If students believe that content is useless, or that they just “don’t do math” or “am not a computer person” (both of which I’ve heard in just the last week), they are unlikely to learn content, they are unlikely to see patterns in it, and they are unlikely to abstract over it.

Kevin is probably right that we don’t teach problem solving in engineering or computing well. I blogged on this theme for CACM last month — laboratory experiments work better for a wider range of students than classroom studies. Maybe we teach better in labs than in classrooms? The worked examples effect suggests that we may be asking students to problem-solve too much. We should show students more completely worked-out problems. As Sweller said at SIGCSE, we can’t expect students to solve novel problems. We have to expect students to match new problems to solutions that they have already seen. We do want students to solve problems, too, and not just review example solutions. Trafton and Reiser showed that these should be interleaved: Example, Problem, Example, Problem… (see this page for a summary of some of the worked examples research, including Trafton & Reiser).

When I used to do Engineering Education research, one of my largest projects was a complete flop. We had all this prior work showing the benefits of a particular collaborative learning technology and technique, then we took it into the engineering classroom and…poof! Nothing happened. In response, we started a project to figure out why it failed so badly. One of our findings was that “learned helplessness” was rampant in our classes, which is a symptom of a fixed mindset. “I know that I’m wrong, and there’s nothing that I can do about it. Collaboration just puts my errors on display for everyone,” was the kind of response we got. (See here for one of our papers on this work.)

I believe that all the things Kevin sees going wrong in his classes really are happening.  I believe he’s not seeing transfer that he might reasonably expect to see.  I believe that he doesn’t see students trying to abstract across lower-level skills.  But I suspect that the problem is the lack of a growth mindset.  In our work, we saw Engineering students simply give up.  They felt like they couldn’t learn, they couldn’t keep up, so they just memorized.  I don’t know that that’s the cause of the problems that Kevin is seeing.  In my work, I’ve often found that motivation and incentive are key to engagement and learning.

April 25, 2016 at 7:33 am

LaTICE 2016 in Mumbai: An exciting, vibrant conference with great students

I was at the Learning and Teaching in Computing Education (LaTICE 2016) conference in Mumbai in early April. It was one of my most memorable and thought-provoking trips. I have had few experiences in Asia, and none in India, so I was wide-eyed with amazement most of my time there. (Most of the pictures that I am including in this series of blog posts are mine or come from the LaTICE 2016 gallery.)

I was invited to join the discussants at the LaTICE Doctoral Consortium on the day before the conference. LaTICE was hosted at IIT-Bombay, and IIT-Bombay is home to the Inter-disciplinary Program in Educational Technology (see link here). The IDP-ET program is impressive: only five years old, it already has 20 PhD students. The lead faculty are Sahana Murthy and Sridhar Iyer, who are guiding these students through interesting work. (The picture below shows Sahana with the DC co-chairs, Anders Berglund from Uppsala University and Tony Clear from Auckland University of Technology.) The Doctoral Consortium had students from across India and one from Germany. Not all were IDP-ET students, but most were.

[Photo: Sahana Murthy with DC co-chairs Anders Berglund and Tony Clear]

Talking to graduate students was my main activity at LaTICE 2016. Aman Yadav (from Michigan State, in the back of the picture below) and I missed a lot of sessions as we met with groups of students. I don’t think I met all the IDP-ET students, but I met many of them, and wrestled with ideas with them. I was pleased that students didn’t just take me at my word — they asked for explanations and references. (I ripped out half of the pages of my notebook, handing out notes with names of papers and researchers.) I feel grateful for the experience of hearing about so many varied projects and talking through issues with many students.

[Photo: holding office hours with graduate students]

I’m going to take my blog writer’s prerogative to talk about some of the IDP-ET students’ work that I’ve been thinking about since I got back. I’m not claiming that this is the best work, and I do offer apologies to the (many!) students whose work I’m not mentioning. These are just the projects that keep popping up in my (still not sleeping correctly) brain.

Aditi Kothiyal is interested in how engineers estimate. Every expert engineer does back-of-the-envelope estimation before starting a project. It’s completely natural for them. How does that develop? Can we teach that process to students? Aditi has a paper at the International Conference of the Learning Sciences this year on her studies of how experts do estimation. I find this problem interesting because estimation might be one of those hard-to-transfer higher-order thinking skills OR it could be a rule-of-thumb procedure that could be taught.
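
As a concrete (and entirely hypothetical) instance of the kind of estimate Aditi studies, here is a back-of-the-envelope calculation written out in Python:

```python
# Back-of-the-envelope estimate (hypothetical numbers): how much data
# does one temperature sensor log in a year, at one reading per minute?
readings_per_hour = 60
bytes_per_reading = 16           # timestamp + value, a rough guess
hours_per_year = 24 * 365

total_bytes = readings_per_hour * bytes_per_reading * hours_per_year
print(f"~{total_bytes / 1e6:.0f} MB per sensor per year")  # ~8 MB
```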

Shitanshu Mishra is exploring question-posing as a way to encourage knowledge integration. He’s struggling with a fascinating set of issues. Question-posing is a great activity that leads to learning, but it is practiced infrequently in classrooms, especially by the students who need it the most. Shitanshu has developed a guided process (think of the whiteboards in Problem-Based Learning, or the classroom rituals in Janet Kolodner’s Learning-By-Design, or Scardamalia & Bereiter’s procedural facilitation) which measurably helps students pose good questions that encourage them to integrate knowledge. When should he guide students through his question-posing process? Is it important that students use his process on their own?

Yogendra Pal is asking a question that is very important in India and whose answer may also be useful here in the US: how do you help students who grew up speaking a language other than English adapt to English-centric CS? India’s constitution recognizes 22 languages, and 122 languages are spoken daily by large numbers of Indian citizens. Language issues are core to the Indian experience. CS is very English-centric, from the words in our programming languages to the technical terms that don’t always map to other languages. Yogendra is working with students who spoke only Hindi until they got to university, where they now want to adapt to English, the language of the tech industry. I wonder if Yogendra’s scaffolding techniques would help children of immigrant families in the US succeed in CS.

Rwitajit Majumdar is developing visualizations to track student behavior on questions over time. Originally, he wanted to help teachers get a sense of how their students move towards a correct understanding over multiple questions during Peer Instruction. Now, he’s exploring using his visualizations with MOOC data. I’m interested in his visualizations for our ebooks. He’s trying to solve an important problem. It’s one thing to know that 35% of the students got Problem #1 right and 75% got (similar) Problem #2 right. But are the 25% who missed Problem #2 the same students who missed Problem #1? What percentage of students are getting more answers right, and are any swapping to more wrong answers? Tracking students across time and across problems is an important problem.
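
A toy sketch of the bookkeeping behind that question (hypothetical data, not Rwitajit’s tool): cross-tabulate each student’s outcomes on the two problems, so the off-diagonal cells show who switched between right and wrong.

```python
import pandas as pd

# Hypothetical responses from five students on two similar problems.
responses = pd.DataFrame({
    "student":    ["s1",  "s2",  "s3",  "s4",  "s5"],
    "p1_correct": [False, True,  False, False, True],
    "p2_correct": [True,  True,  False, True,  True],
})

# Rows: outcome on Problem #1; columns: outcome on Problem #2.
print(pd.crosstab(responses["p1_correct"], responses["p2_correct"]))
```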

Overall, the LaTICE conference was comparable to SIGCSE or ITiCSE. It was single track this year, though it has been dual track in some years. LaTICE is mostly a practitioner’s conference, with a number of papers saying, “Here’s what I’m doing in my class,” without much evaluation. I found even those interesting, because many were set in contexts outside my experience. There were some good research papers, and there were some papers that said things I felt were outright wrong. But because LaTICE is a small (< 200 attendees, I’d guess) and collegial conference, I had one-on-one conversations with all the authors with whom I disagreed (and many others as well!) to talk through the issues.

My keynote was based on my book, Learner-Centered Design of Computing Education: Research on Computing for Everyone. I talked about why it’s important to provide computing education to more than computing majors, and how computing education would have to change for different audiences. Slides are here: http://www.slideshare.net/markguzdial/latice-2016-learnercentered-design-of-computing-education-for-all

[Slide: LaTICE 2016 keynote, Learner-Centered Design of Computing Education for All]

The most remarkable part of my trip was simply being in India. I’ve never been any place so crowded, so chaotic, so dirty, and so vibrant. I felt like I took my life in my hands whenever I crossed the street after noon on any day (and given the pedestrian accidents that some conference participants reported seeing, including one possible fatality, I likely was taking a risk). I went out for three runs around Mumbai and across campus (only in the morning when the traffic was manageable) and enjoyed interactions with cows and monkeys. I was shocked at the miles and miles of slums I saw when driving around Mumbai. I got stuck on one side of a major street without any idea how I could possibly get through the crowds and traffic to the other side — on a normal Sunday night. The rich colors of the Indian clothing palette were beautiful, even in the poorest neighborhoods. There was an energy everywhere I went in Mumbai.

I’ve not experienced anything like Mumbai before. I certainly have a new sense of my own privilege — about the things I have that I never even noticed until I was somewhere where they are not given. Given that India has 1.2 billion people and the US only has some 320 million, I’m wondering about how I define “normal.”

April 18, 2016 at 7:18 am

Brain training, like computational thinking, is unlikely to transfer to everyday problem-solving

In a recent blog post, I argued that problem-solving skills learned in computational contexts (“computational thinking”) were unlikely to transfer to everyday situations (see post here). We see a similar pattern in the recent controversy about “brain training.” Yes, people get better at the particular exercises (e.g., people can learn to problem-solve better when programming). And they may still be better years later, which is great. That’s an indication of real learning. But they are unlikely to transfer that learning to non-exercise contexts. Most surprisingly, they are unlikely to transfer that learning even though they are convinced that they do. Just because you think you’re doing computational thinking doesn’t mean that you are.

Ten years later, tests showed that the subjects trained in processing speed and reasoning still outperformed the control group, though the people given memory training no longer did. And 60 percent of the trained participants, compared with 50 percent of the control group, said they had maintained or improved their ability to manage daily activities like shopping and finances. “They felt the training had made a difference,” said Dr. Rebok, who was a principal investigator.

So that’s far transfer — or is it? When the investigators administered tests that mimicked real-life activities, like managing medications, the differences between the trainees and the control group participants no longer reached statistical significance.

In subjects 18 to 30 years old, Dr. Redick also found limited transfer after computer training to improve working memory. Asked whether they thought they had improved, nearly all the participants said yes — and most had, on the training exercises themselves. They did no better, however, on tests of intelligence, multitasking and other cognitive abilities.

Source: F.T.C.’s Lumosity Penalty Doesn’t End Brain Training Debate – The New York Times

March 18, 2016 at 7:26 am
