
Why we teach how linked lists are made, and how memory representations work

One of the ongoing battles in CS Education is whether or not to teach how data structures are built.  One school of thought goes something like this: “We have to teach how data structures are really implemented, because unless you realize how the runtime behavior differs between a singly-linked list, a doubly-linked list, and a tree, you really can’t make informed choices between them.”  In other words, we teach the implementation to inform the use.  An opposing school of thought says, “It’s ridiculous to have anyone implement yet another of these complex data structures!  How many linked list implementations do we need?  Instead, we should focus on how to use them well.”  That school of thought sees teaching data structures as offering a programming or even vocational benefit.  Several colleagues in computing education wrote a nice piece describing the intent of teaching data structures, which differs from teacher to teacher.

Based on interviews with Computer Science educators and analysis of CS literature, we identified five categories of intent: developing transferable thinking, improving students’ programming skills, knowing “what’s under the hood”, knowledge of software libraries, and component thinking.

I’ve come to realize that I have another reason for teaching the implementation of data structures, which I don’t think is covered by the five reasons in that piece.  It’s only through dealing with implementation structures that I (and my students) have seen their misconceptions highlighted, like the ones I talked about in my ontological categories post.  I continue to struggle with my students’ understanding of object, variable, reference, and “link” (the arrows we draw between boxes in data structure diagrams).  We have wonderful discussions now, where we talk about the best visual representation for various structures — great “meta-level” discussions to have.
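To make concrete the kind of confusion I mean, here is a minimal aliasing sketch (in Python, my choice for illustration; the discussion in the post isn’t tied to any particular language).  Students who conflate variables with objects are often surprised that two names can be two “arrows” into the same box:

```python
# Two names, one object: x and y both refer to the same list.
x = [1, 2, 3]
y = x              # copies the reference (the "arrow"), not the object it points to
y.append(4)
print(x)           # [1, 2, 3, 4]: the mutation through y is visible through x
print(x is y)      # True: both arrows point at one object
```

Drawing this as two labeled arrows into a single box, rather than two boxes, is exactly the kind of visual-representation choice those “meta-level” discussions turn on.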

I think they’re starting to get it (and I have these peer instruction results, so I can make that claim with some confidence), but it’s a real struggle.  Moving from singly-linked lists to doubly-linked lists helped surface the problem: with a single link, it’s too easy to simply say, “Well, that’s just the next node.”  It gets more complicated when students have to say, “That node is the next of the previous, and the previous of the next,” and start to wonder what those words mean.  “Next” has a commonplace meaning, but we don’t talk about the “next” and the “previous” at the same time in daily conversation, and we have to deal with both simultaneously when implementing a doubly-linked list.
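That “next of the previous, previous of the next” phrasing maps directly onto the splice operation.  A minimal sketch (in Python; the node and function names are my own, not from the post):

```python
class Node:
    """A node in a doubly-linked list, with explicit next and prev links."""
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

def insert_after(prev_node, new_node):
    """Splice new_node in after prev_node.  The new node must become
    the next of the previous AND the previous of the next, both at once."""
    new_node.prev = prev_node
    new_node.next = prev_node.next
    if prev_node.next is not None:
        prev_node.next.prev = new_node
    prev_node.next = new_node

# Build a two-node list a <-> c, then splice b in between.
a, b, c = Node("a"), Node("b"), Node("c")
a.next, c.prev = c, a
insert_after(a, b)
print(a.next.value, b.prev.value, b.next.value, c.prev.value)  # b a c b
```

Every assignment in `insert_after` is one arrow in the box-and-pointer diagram, which is why getting students to draw the picture and then justify each line is such a productive exercise.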

I had a chance to discuss these issues yesterday with some really great cognitive and learning scientists.  Some pointed out that the problem is all the harder because there’s no there there — all these words, all these ways to refer to objects, are just different perspectives, and there’s no real solid thing being described.  I countered that there was a real, inspectable, testable thing: the actual values in real memory addresses.  That argument didn’t hold much weight with them, because few of us really use that level.  Things are defined through their use, and different uses lead to different definitions, and might as well be different things.  “Explaining memory is just layering yet another perspective, and that one has even less usefulness than the others and is even more complicated.”  They thought that it’s like teaching quantum mechanics to fourth graders so that they have a scientist’s understanding of heat.

I’m not convinced, but at a higher level, I’m intrigued by the kinds of arguments we’re having.  These questions of “what is our intent in our teaching?,” “what’s real?,” and “what does it mean for something to be ‘defined’?” seem esoteric, or worse, pointless for the in-the-classroom CS teacher.  Those are questions for navel-gazing philosophers, we might think.  Actually, they are the answer to some of our fiercest debates in computing education.  Why do we teach data structures?  Why do we teach hardware, architecture, and memory structures?  I’m sure that we all have immediate answers that we whip out in undergraduate curriculum committee meetings.  What’s our evidence for our answers?

  • The cognitive scientists were asking us to consider, “Does it really make sense anymore for computer scientists to understand the computer down to the metal?”
  • I’m asking us to consider that maybe the reason to teach data structure implementation has no practical or vocational benefit — an important purpose is just to highlight and correct what students don’t understand.

March 24, 2011 at 6:56 am
