Posts tagged ‘collaboration’

In STEM Courses, a Gender Gap in Online Class Discussions: What drives collaboration?

It’s not surprising that men and women participate differently in online class discussions.  I’m disappointed that the interpretations of the results are not grounded in the literature on collaborative learning.  We know something about why people might not want to participate in an online forum (as I wrote about in a previous blog post).

Officials at Piazza argued that the differences in behavior by gender represent a “gap in confidence” between women and men enrolled in the courses. It’s a phenomenon that has long interested the company’s founder, Pooja Sankar, who says she felt isolated as one of only a few women studying computer science at a university in India and was too shy to collaborate with male classmates.

Based on reports from hundreds of students and professors who use Piazza, “we know that students answer questions more when they feel more confident,” Ms. Gilmartin said. “We know that they use the anonymity setting when they feel less confident.”

via In STEM Courses, a Gender Gap in Online Class Discussions – Wired Campus – Blogs – The Chronicle of Higher Education.

February 13, 2015 at 7:11 am Leave a comment

Data Mining Exposes Embarrassing Problems For Massive Open Online Courses: There is no dialogue

Interesting article studying the lack of discussion in MOOC discussion forums.  I’m surprised that teacher involvement doesn’t improve matters.  It may be that the scale swamps out the effect of the teacher demonstrating value for the discussion.  Our past work in CSCL suggests that the culture of the class (e.g., the subject, the rewards structure) influences discussion behavior, and that courses would get more on-target discussion with anchored collaboration.

These guys have studied the behaviour in online discussion forums of over 100,000 students taking massive open online courses (or MOOCs).

And they have depressing news. They say that participation falls precipitously and continuously throughout a course and that almost half of registered students never post more than twice to the forums. What’s more, the participation of a teacher doesn’t improve matters. Indeed, they say there is some evidence that a teacher’s participation in an online discussion actually increases the rate of decline.

via Data Mining Exposes Embarrassing Problems For Massive Open Online Courses | MIT Technology Review.

January 17, 2014 at 1:49 am 7 comments

Education Research Questions around Live Coding: Vygotskian and Non-Constructionist

I posted my trip report on the Dagstuhl Seminar on Live Coding on Blog@CACM (see the post here).  If you don’t want to read the post, check out this video as a fun introduction to live coding:

I have a lot more that I want to think through and share about the seminar. I’m doing a series of blog posts this week on live coding to give me an opportunity to think through some of these issues.


I saw four sets of computing education research questions in live coding. These are unusual research questions for me because they’re Vygotskian and non-Constructionist.

Live coding is about performance. It’s not an easy task. The live coder has to know their programming language (syntax and semantics) and music improvisation (e.g., listening to a collaborator and composing to match), and use all that knowledge in real time. It’s not a task that we would start students with, but watching it may inspire students. Some of my research questions are about what it means to watch someone else’s performance, as opposed to questions about students constructing. I’ve written before about the value of lectures, and I really do believe that students can learn from lectures. But not all students learn from lectures, and lectures work only if they are well-structured. Watching a live coding performance is different: it’s about changing the audience’s affect and framing with respect to coding. Can we change attitudes via a performance?

Vygotsky argued that all personal learning is first experienced at a social level. Whatever we learn must first be experienced as an interaction with others. In computing education, we think a lot about students’ first experience programming, but we don’t think much about how a student first sees code and first sees programming. How can you even consider studying a domain whose main activity you have never seen? What is the role of seeing code generate music, with its cultural and creative overtones? The social experience that introduces computing is important, and that may be something that live coding can offer.


Here are four sets of research questions that I see:

  1. Making visible. In a world with lots of technology, code and programmers are mostly invisible. What does it mean for an audience to see code used to generate music, and to see a live coder programming? It’s interesting to think about this impact for students (does it help students to think seriously about computing as something to explore in school?) and for a more general audience (how does it change adults’ experience with technology?).
  2. Separating program and process. Live coding makes clear the difference between the program and the executing process. On the first day, we saw performances from Alex McLean and Thor Magnusson, and an amazing duet between Andrew Sorensen at Dagstuhl and Ben Swift at the VL/HCC conference in San Jose, using their Extempore system. The live coders start an execution, and music starts playing in a loop. Meanwhile, they change the program, then re-evaluate the function, which changes the process and the music produced. There is a gap between the executing process and the text of the program, which is not something that students often see.
  3. Code for music. How does seeing code used to make music change students’ perception of what code is for? We mostly introduce programming as an engineering practice in CS class, but live coding is pretty much the opposite of software engineering. Our biggest challenges in CS Ed are about getting students and teachers to even consider computer science. Could live coding get teachers to see computing as something beyond dry and engineering-ish? Who is attracted by live coding? Could it attract a different audience than we do now? Could we design the activity of live coding to be more attractive and accessible?
  4. Collaboration. Live coding is a collaborative practice, but very different from pair programming. Everybody codes, and everybody pays attention to what the others are doing. How does the collaboration in live coding (e.g., writing music that responds to other live coders’ music) change the perception that programming is asocial?
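The program/process gap in the second set of questions can be made concrete with a tiny sketch. This is plain Python, not Extempore or Sonic Pi code, and the `pattern` and `play_loop` names are invented for illustration: the loop stands in for the executing process, and redefining the function mid-run changes what the process produces without restarting it.

```python
# A minimal sketch of the gap between program and process. The running loop
# stands in for the executing process; the function `pattern` is the program
# text. Re-evaluating `pattern` changes the "music" without stopping the loop.

notes_played = []

def pattern():
    return [60, 64, 67]  # a C-major arpeggio, as MIDI note numbers

def play_loop(times):
    for _ in range(times):
        notes_played.extend(pattern())  # each pass re-reads the current program

play_loop(2)             # the process starts under the original definition

def pattern():           # the live coder edits and re-evaluates the function...
    return [62, 65, 69]  # ...now a D-minor arpeggio

play_loop(2)             # ...and the same kind of loop now sounds different
```

Because `pattern` is looked up at call time rather than bound at definition time, the later passes pick up the new program text, which is exactly the distinction the performances made visible.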

I’ll end with an image that Sam Aaron showed in his talk at Dagstuhl, a note that he got from a student in his Sonic Pi class: “Thank you for making dull lifeless computers interesting and almost reality.” That captures well the potential of live coding in computing education research — that activity is interesting and the music is real.


September 30, 2013 at 5:38 am 5 comments

A 10 year retrospective on research on Media Computation: ICER 2013 preview

I get to teach our Media Computation in Python course on Georgia Tech’s campus in Spring 2014.  I’ve had the opportunity to teach it on study abroad, and that was wonderful, but I have not had the opportunity to teach it on-campus since 2007.  Being gone from a course for seven years, especially a big one with an army of undergraduate TA’s behind it, is a long time. The undergraduate TA’s create all the assignments and exams in all of the introductory courses in the College of Computing.  Bill Leahy, who is teaching it this summer semester, kindly invited me to meet with the TA’s to give me a sense for how the course works now.

It’s a very different course than the one that I used to teach.

  • I mentioned the collage assignment, which was one of the most successful assignments in MediaComp (and shows up even today in AP CS implementations and MATLAB implementations).  Not a single TA knew what I was talking about.
  • The TA’s complained to me about Piazza.  “Nobody posts” and “I always forget that it’s there” and “It seems to work in CS classes, but not for the other majors.”  I told them about work that Jennifer Turns and I did in 1999 that showed why Piazza and newsgroups don’t work as well as integrated computer-supported collaborative learning, and how that work led to our development of Swikis.  Swikis were abandoned in MediaComp many years ago, even before the FERPA concerns.
  • Sound is mostly gone.  In one assignment based on turtle graphics, students have to play a sound, but students never manipulate the samples in a sound anymore.
  • I started to explain why we do what we do in MediaComp: Introducing iteration as set operations, favoring replicated code over abstraction in the first half of the semester, avoiding else.  They thought that those were interesting ideas to consider adding to the course.  I borrowed a copy of the textbook from one of them, and read them part of the preface about Ann Fleury’s work.  Lesson: Just because you put it in the book and provide the citation, doesn’t mean that anybody actually reads it, even the TA’s.
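The design decisions in that last bullet can be made concrete. Here is a minimal sketch in plain Python, with stand-in implementations (assumptions, for illustration) of the JES media functions getPixels, getRed, and setRed; the `Pixel` class and list-based “picture” are simplifications, not the real JES representations.

```python
# A minimal sketch of the MediaComp style named above: iteration as a set
# operation over all pixels (no indices), and conditionals without else.
# The Pixel class and the media functions are simplified stand-ins for JES.

class Pixel:
    def __init__(self, red):
        self.red = red

def getPixels(picture):   # stand-in: a "picture" here is just a list of pixels
    return picture

def getRed(pixel):
    return pixel.red

def setRed(pixel, value):
    pixel.red = value

def decreaseRed(picture):
    for p in getPixels(picture):      # "for every pixel": a set operation
        setRed(p, getRed(p) * 0.5)    # the same transformation, applied to each

def clearHighRed(picture):
    for p in getPixels(picture):
        if getRed(p) > 80:            # no else branch: pixels below the
            setRed(p, 0)              # threshold are simply left alone

picture = [Pixel(200), Pixel(100)]
decreaseRed(picture)       # reds become 100.0 and 50.0
clearHighRed(picture)      # 100.0 > 80, so it is cleared; 50.0 is untouched
```

The point of the style is that “do this to every pixel” reads as a single set operation, and leaving out else keeps each conditional to one idea at a time.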

It’s a relevant story because on Monday 12 August I’m presenting a paper at ICER 2013 that is a 10-year retrospective on the research on Media Computation.  (I’m making a preview version of the paper available here, which I’ll take down when the ACM DL opens up the ICER 2013 papers.) It was 10 years ago that we posted our working document on creating MediaComp, along with our 2002 and 2003 published design papers, all of which are still available. We made explicit hypotheses about what we thought Media Computation would do.  The ICER 2013 paper is a progress report.  How’d we do?  What don’t we know?  In hindsight, some of the hypotheses seem foolish.

  • The Plagiarism Hypothesis:  We thought that the creative focus of MediaComp would reduce plagiarism.  We haven’t done an explicit study, and even a difference with statistical significance would be meaningless.  Ten years later, there is still lots of academic misconduct.
  • The Retention Hypothesis: Perhaps our biggest win — students are retained better in MediaComp than traditional classes, across multiple institutions.  The big follow-up question: Why?  Exploring that question has involved the work of multiple PhD students over the last decade, helping us understand contextualized-computing education.
  • The Gender Hypothesis: We designed MediaComp based on recommendations from people like Jane Margolis and Joanne Cohoon on how to make an introductory CS course that would be successful with women.  Our evidence suggests that it worked, but we don’t actually know much about men in the class.
  • The Learning Hypothesis:  We hoped that students would learn as much in MediaComp as in our traditional CS1 class.  Answering that question led to Allison Elliott Tew’s excellent work on FCS1.  The bottom line, though, is that we still don’t know.
  • The More-Computing Hypothesis: We thought that non-CS majors taking MediaComp would become enlightened and take more CS classes.  No, that didn’t really happen, and Mike Hewner’s work helped us understand why not.

There are two meta-level points that I try to make in this paper.

  • The first is: Why did we think that curriculum could do all of this, anyway?  Curriculum can only have so much effect.  There are lots of other variables in student learning, and curriculum only touches some of those.
  • The second is: How did we move from Marco Polo to theory-building?  Most papers at SIGCSE have been classified as Marco Polo (“We went here, and we saw that.”)  MediaComp’s early papers were pretty much that, with the addition of explicit hypotheses about where we thought we’d go.  It’s been those explicit hypotheses that have driven much of the last 10 years of work.  Understanding those hypotheses, and the results that we found in pursuit of them, has led us to develop theory and to support a broader understanding of how students learn computing.

Lots of things change over 10 years, and not always in positive directions. Good lessons and practices of the past get forgotten.  Sometimes change is good and comes from lessons learned that are well worth articulating and making explicit.  And sometimes, we got it plain wrong in the past — there are ideas that are worth discarding.  It’s worth reflecting back occasionally and figuring out how we got to where we are.

August 9, 2013 at 1:22 am 11 comments

Success in MOOCs: Talk offline is important for learning

That students who had offline help did the best in this MOOC study is not surprising.  Sir John Daniel reported in Mega-Universities that face-to-face tutors were the largest line item in the Open University UK’s budget.  But the fact that 90% of the students didn’t talk online (a statistic similar to what Tucker Balch found) suggests that success in MOOCs may be more about talking offline than online.

“On average, with all other predictors being equal, a student who worked offline with someone else in the class or someone who had expertise in the subject would have a predicted score almost three points higher than someone working by him or herself,” write the authors.

The correlation, described by the authors as the “strongest” in the data set, was limited to a single instance of a particular MOOC, and is not exactly damning to the format. But it nonetheless may give ammunition to critics who say human tutelage remains essential to a good education.

Other findings could also raise eyebrows. For example, the course’s discussion forum was largely the dominion of a relatively small group of engaged users; most students simply lurked. “It should be stressed that over 90 percent of the activity on the discussion forum resulted from students who simply viewed pre-existing discussion threads, without posting questions, answers, or comments,” the authors write.

via MOOC Students Who Got Offline Help Scored Higher, Study Finds – Wired Campus – The Chronicle of Higher Education.

July 5, 2013 at 1:08 am 4 comments

Collaborative Floundering trumps Scaffolding

Really interesting finding!  I suspect, though, that the collaboration had a lot to do with the floundering being successful.  It seems to me that floundering is going to require greater cognitive effort, and thus, greater motivation/engagement to persevere.  I also wonder about the complexity of the task.  I have seen pairs of students flounder at a Java program and (seemingly) not learn much from the effort.

With one group of students, the teacher provided strong “scaffolding” — instructional support — and feedback. With the teacher’s help, these pupils were able to find the answers to their set of problems. Meanwhile, a second group was directed to solve the same problems by collaborating with one another, absent any prompts from their instructor. These students weren’t able to complete the problems correctly. But in the course of trying to do so, they generated a lot of ideas about the nature of the problems and about what potential solutions would look like. And when the two groups were tested on what they’d learned, the second group “significantly outperformed” the first.

via Anne Murphy Paul: Why Floundering Makes Learning Better | TIME Ideas | TIME.com.

May 1, 2012 at 7:47 am 10 comments

No More Swikis: End of the Constructionist Web at Georgia Tech

Using Wikis for undergraduate courses was invented at Georgia Tech. We started in 1997, long before Wikipedia.  Ward Cunningham talks about our work in his book “The Wiki Way.”  Our paper on how we designed the Swiki (or CoWeb) at CSCW 2000 is, I believe, the earliest reference to wikis in the ACM Digital Library.  Jochen “Jeff” Rick built the Swiki software that we use today, and he did his dissertation on his extensions to Swiki.

We published a technical report in 2000 about the varied uses of Swikis that we saw around Georgia Tech’s campus.  Some classes were having students create a public case library.  Others were having cross-semester discussions between current and past students.  Others had public galleries of student work.

All of that ended yesterday.

Georgia Tech’s interpretation of FERPA is that protected information includes the fact that a student is enrolled at all.  The folks at GT responsible for oversight of FERPA realized that a student’s name in a website that references a course is evidence of enrollment.  Yesterday, in one stroke, every Swiki ever used for a course was removed.  None of those uses I described can continue.  For example, you can’t have cross-semester discussions or public galleries, because students in one semester of a course can’t know the identities of other students who had taken the course previously.

Seymour Papert coined the term constructionism to describe a setting for constructivism to occur.

Constructionism–the N word as opposed to the V word–shares constructivism’s connotation of learning as “building knowledge structures” irrespective of the circumstances of the learning. It then adds the idea that this happens especially felicitously in a context where the learner is consciously engaged in constructing a public entity, whether it’s a sand castle on the beach or a theory of the universe.

Constructionism relies on the fact that the entity being constructed is public. The public nature influences the student’s motivation for doing it and doing it well. If it’s not public, it’s not constructionism. We can no longer have students construct public entities on the Web for education at Georgia Tech.  It may be that FERPA demands that no school can use the Web to post student work publicly.

November 15, 2011 at 10:57 am 64 comments
