How do teachers teach recursion with embodiment, and why won’t students trace their programs: ICLS 2020 Preview

June 15, 2020 at 7:00 am

This coming week was supposed to be the International Conference of the Learning Sciences (ICLS) 2020 in Nashville (see conference website here). But like most conferences during the pandemic, the face-to-face meeting was cancelled (see announcement here). The on-line sessions are being announced on the ICLS2020 Twitter feed here.

I’m excited that two of my students had papers accepted at ICLS 2020. I haven’t published at ICLS since 2010. It’s nice to get back involved in the learning sciences community. Here’s a preview of their papers.

How do teachers teach recursion with embodiment?

I’ve written here about Amber Solomon’s work on studying the role of space and embodiment in CS learning. This is an interesting question. We live in a physical world and think in terms of physical things, and we have to use that to understand the virtual, mostly invisible, mostly non-embodied world of computing. At ICER 2018, she used a taxonomy of gestures used in science learning to analyze the gestures she saw in a high school computer science classroom (see link here). Last summer at ITiCSE, she published a paper on how making CS visible in the classroom (through gesture and augmented reality) may reduce defensive climate (see link here). In her dissertation, she’s studying how teachers teach recursion and how learners learn recursion, with a focus on spatial symbol systems.

Her paper at ICLS 2020 is the first of these studies: Embodied Representations in Computing Education: How Gesture, Embodied Language, and Tool Use Support Teaching Recursion. She watched hours of video of teachers teaching recursion, and did a deep dive on two of them.

I’m fascinated by Amber’s findings. Looking at what teachers say and gesture about recursion from the perspective of physical embodiment, I’m amazed that students ever learn computer science. There are so many metaphors and assumptions that we make. One of the teachers says, when explaining a recursive function:

“Then it says … ‘now I have to call.’”

Let’s think about this from the perspective of the physical world (which is where we all start when trying to understand computing):

  • What does it mean for a function to “say” something?
  • The function “says” things, but I “call”? Who is the agent in this explanation, the function or me? It’s really the computer with the agency, but that doesn’t get referenced at all.
  • Recursion is typically explained as a function calling itself. We typically “call” something that is physically distant from us. If a function is re-invoking itself, why does it have to “call” as if at a distance?
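
To make these metaphors concrete, here is a minimal sketch (a made-up example, not one from Amber’s paper) of the kind of recursive function a teacher might be narrating, with the embodied language mapped onto comments:

```python
# A hypothetical recursive function of the kind a teacher might narrate aloud.
def countdown(n):
    """Print n, n-1, ..., 1."""
    if n <= 0:        # "Then it says..." -- the function is given agency
        return        # base case: nothing left to do
    print(n)
    countdown(n - 1)  # "...now I have to call" -- but who is the "I" here?
                      # Nothing is physically distant: the computer simply
                      # re-invokes the same definition with a smaller n.

countdown(3)  # prints 3, 2, 1
```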

For most computer scientists, this may seem like explaining that the sky is blue or that gravel exists. It’s obvious what all of this means, isn’t it? It is to us, but we had to learn it. Maybe not everyone does. Remember how very few students take or succeed at computer science (for example, see this blog post), and what enormously high failure and drop-out rates we have in CS. Maybe only the students who pick up on these metaphors are the ones succeeding?

Why won’t students trace their programs?

Katie Cunningham’s first publication as a PhD student was her replication and extension of the Leeds Working Group study, showing that students who successfully trace program code line-by-line are able to answer questions about the code more accurately (see blog post here). But one of her surprising results was that students who start tracing and give up do worse on prediction questions than students who never traced at all. In her ITiCSE 2019 paper (see post here), she got the chance to ask the students who stopped tracing why they did. She was extending that work with a think-aloud protocol when something unusual happened: two data science students, who were successful at programming, flatly refused to trace code.
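
For readers who haven’t seen these instruments: a tracing item shows a short program, asks the student to work it out line-by-line on paper, and then asks a prediction question about the output. Here is a made-up example in that spirit (not an item from the Leeds study or from Katie’s replication):

```python
# A hypothetical tracing/prediction item (not taken from the studies cited).
total = 0
for i in range(1, 4):    # i takes the values 1, 2, 3
    total = total + i
print(total)

# A line-by-line trace on paper might look like:
#   i | total
#   - | 0      (before the loop)
#   1 | 1
#   2 | 3
#   3 | 6
# Prediction question: what does the program print?  Answer: 6
```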

Her paper “I’m not a computer”: How identity informs value and expectancy during a programming activity is an exploration of why students would flat out refuse to trace code — and yet successfully program. She uses Eccles’ Expectancy-Value Theory (which comes up pretty often in our thinking; see this blog post) to describe why, for these students, the cost of tracing outweighs its utility, where utility is defined in terms of their sense of identity — what they see themselves doing in the future. Sure, there will be some programs that they won’t be able to debug or understand because they won’t trace line-by-line. But maybe they’ll never actually have to deal with code that complex. Is this so bad?

Katie’s live session is 2:00-2:40pm Eastern time on June 23. The video link will be available on the conference website to registered attendees. A pre-print version of her paper is available here.

Both of these papers give us new insight into the unexpected consequences of how we teach computing. We currently expect students to figure out how their teachers are relating physical space and computation, through metaphors that we don’t typically explain. We currently teach computing expecting students to be able to trace code line-by-line, though some students will not do it (and maybe don’t really need to). If we want to grow the range of people who can succeed at computing, we need to think through who might be struggling with how we’re teaching now, and how we might do better.



52 Comments

  • 1. alanone1  |  June 15, 2020 at 7:26 am

    When we teach students to write, we should be able to expect all of them to be willing to “trace their writing line by line” to assess many things about it.

    When we teach students mathematics, we should be able to expect all of them to be willing to “trace their inferences line by line”, etc.

    I could write similarly for “learning to read” and many other areas (like music, etc.).

    It’s not surprising that many students will not want to learn how to be so careful about X, Y, and Z. But part of modern thinking and doing is learning how to be careful about X, Y, and Z.

    Why is this such a conflict here? A large part of what schooling is about is transformation not catering.

    There is nothing more repetitious than most practicing in sports and music. Most of the same kids will not feel like a machine practicing free throws.

    My guess here is that this is much more like Herb Kohl’s book “I won’t learn from you” which is about tribal conflicts (with identity buried underneath the tribal). The kids have tribes that value sports but not writing, math, and computing. The larger culture has to be improved for this to improve.

    Meanwhile, should the kids be allowed to fail?

    Reply
    • 2. Mark Guzdial  |  June 15, 2020 at 8:25 am

      All scaffolding is catering with a goal of transformation. Meet students where they are, and give them supports so that they can do more than they can do without supports. (Wood, Bruner, Ross, 1976)

We are developing scaffolds to support students who have already failed at computer science because they found it too complex. They found it too complex, in part, because CS was unwilling to scaffold them. Many of the data science students who have been in our studies took a CS class as undergraduates, couldn’t succeed at it, and changed their major. We are interested in how to scaffold them so that they can be successful writing, tracing, and debugging code. We’re just not starting at line-by-line.

Not all line-by-line tasks have the same cognitive complexity. Even just within programming, tracing line-by-line is different in Smalltalk, C++, Scheme, APL, and HyperTalk. That’s the insight we’re building on.

The data science students we’re studying work in chunks of code, i.e., plans like those that Elliot Soloway and Jim Spohrer studied. Our participants don’t dig into the individual lines of the plan, but they know (for example) that if they use these 3-7 lines and tailor them in just the right way, the code will do an analysis or generate a visualization. In school, we don’t teach to support that activity. It is a natural way that some people start programming.
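
      As a sketch of what such a plan can look like (a hypothetical example assuming pandas and matplotlib, not code from our participants), the chunk below is the kind of idiom a data science student reuses, tailoring only the file name and column names:

```python
# Hypothetical example of a reusable "plan": load data, aggregate, visualize.
# The file name and column names are placeholders a student would tailor.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey.csv")                  # swap in your own data file
summary = df.groupby("major")["hours"].mean()   # swap in grouping/value columns
summary.plot(kind="bar")                        # the plotting step rarely changes
plt.show()
```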

      In CS classes, we don’t tend to help students learn by plan or by idiom. We currently only teach by syntax and semantics — “today we do the IF statement” rather than “today we’ll choose how we process the data based on what it is.” Focusing on the syntax first (without the use cases) is like expecting a baby to only speak in correct sentences. Children learn by first repeating idioms and then dissecting them to see how they work. We don’t start instruction by having everyone do everything line-by-line.

      We are exploring teaching programming at the cognitive level, where the novices are now. We are developing scaffolding so that they can be successful reading, tracing, and debugging at the level of plans, and fade the scaffolding so that they can understand and trace the behavior at the level of lines.

      This is absolutely a Bruner-informed approach. We’re not switching languages or tasks on them — they want to work in the authentic task domains. We are creating scaffolding to bring them from where they are, towards expert practice, engaging in expert-level tasks.

      Reply
      • 3. alanone1  |  June 15, 2020 at 1:25 pm

        Hi Mark

        When we teach a musical instrument we do have students play every note from the very beginning (and we take care to figure out how to go about this with musical meaning, but not by leaving out or gisting). Suzuki violin is a good example of dealing with the vast degrees of freedom of the instrument for beginners.

        Similarly with reading: we want what is read to have meaning to the learner, but we don’t want them to gist and guess (the latter is unfortunately taught a lot these days for dealing with multiple choice tests). We want them to steady down on what is actually there.

        A lot of the courses that your students haven’t done well in are likely quite terrible in most respects. I don’t blame them for turning up their noses and also getting discouraged.

But (to me) this doesn’t mean that they should be allowed to avoid learning how to read “line by line”, especially their own code (to me this implies that the sequence of lines does have meaning – it is not just about an IF statement). For imperative programming especially, it really helps to “play computer”.

        However, I think reading other people’s code should be much the smaller part of learning to program. I have liked David Harel’s use of pseudocode in his classes to get right to the essence of what most brain cells need to be used for (when you are learning to program, I don’t think that most of the gratuitous paraphernalia of commercial systems does anything but distract and subtract from “7±2”).

        Reply
        • 4. Mark Guzdial  |  June 15, 2020 at 1:38 pm

          Hi Alan,

Jeanne Bamberger, Rena Upitis, and even the ukulele activity at HARC a few years ago didn’t start with reading notes. All of them start with fooling around with the instrument. Vi Hart asked us to try to match the tones of a doorbell by ear, not try to read notes.

We only learn to read once we’ve learned to speak, and the first thing we do is recognize the idioms that we say. Phonics comes later.

          I noticed that your response didn’t include the word “scaffolding,” one of Bruner’s greatest contributions to education. Are you opposed to scaffolding?

          Reply
          • 5. alanone1  |  June 15, 2020 at 1:54 pm

            Suzuki doesn’t start with reading notes, nor do most music teachers. You want to start out at a kind of “oral culture” level (this is also Couperin’s approach to the keyboard). But after doing things with meaning, you then get to see how they are notated. And learn what has to be added to the notation to get back the same feelings.

            So I think you are misunderstanding what I said or I didn’t say it well enough.

As for “scaffolding,” what do you think about the invention of the GUI, or programming languages made for children, etc.? I don’t think anyone would accuse me of being against scaffolding.

            But what Jerry had in mind was not “carrying” but just the “minimal good helps”. So the GUIs I invented also allowed full programming of everything.

            If you look at how Suzuki introduces the violin, you can see a fantastically well thought out approach to both a really difficult instrument and to music itself.

In the spirit of the above, Suzuki doesn’t stoop to giving beginners frets on the violin (this winds up being anti-violin!). Nor does he stoop to teaching notes as individuals (like phonics), but shows in context the phrases of repeated notes that the children play right from the beginning.

            To me, his book “Nurtured By Love” is right up there with Jerry Bruner. As with Montessori’s books, going to the source can be a revelation and surprise.

            Reply
            • 6. Mark Guzdial  |  June 15, 2020 at 1:59 pm

Thanks, Alan. I’m sure I’m misunderstanding you. “Nurtured by Love” is a great phrase for what we’re exploring. You can’t make anyone learn anything. You have to draw them in. That’s what we’re trying to figure out. Learning sciences is a lot about figuring out why someone isn’t learning. That’s what our exploration of identity and expectancy-value is doing.

              Reply
          • 7. gasstationwithoutpumps  |  June 16, 2020 at 1:10 am

            I realize that your remark about “scaffolding” was aimed at Alan, but I felt obliged to chime in. Scaffolding is an essential part of getting students started in a field, but it needs to be followed by descaffolding, where the students learn to do the work without extensive supports. Way too many CS programs continue the scaffolding into upper-division courses (I’ve seen “Advanced Programming” courses that just had students writing short routines for a fully worked-out framework).

Designing descaffolding is as hard as or harder than designing scaffolding. Remove the scaffolds too quickly and the students fail hard, but remove them too slowly and the students don’t learn to work without them.

            Reply
            • 8. Mark Guzdial  |  June 16, 2020 at 7:17 am

              Sure. I wrote my dissertation on adaptable scaffolding (https://dl.acm.org/doi/book/10.5555/194476) and have done work on scaffolding that fades and doesn’t (https://dl.acm.org/doi/10.5555/1161135.1161153)

              Reply
              • 9. orcmid  |  June 16, 2020 at 11:58 am

                OK, I am beginning to understand “scaffolding.” The scaffolding and de-scaffolding seem useful to explore with regard to designing “experiences.”

                Reply
            • 10. Mark Guzdial  |  June 17, 2020 at 8:33 am

Kevin, do you know Herbert Kohl’s “I Won’t Learn from You”? It’s pretty relevant here. You can’t teach a student anything (including literacy, in Kohl’s work) without them wanting to learn it.

              Reply
              • 11. gasstationwithoutpumps  |  June 17, 2020 at 1:33 pm

                No, I wasn’t familiar with that essay—I’ll read it this week.

                Reply
                • 12. alanone1  |  June 17, 2020 at 1:39 pm

                  It’s an excellent little book, and certainly was an eye opener for me (and I went to school in New York City public schools in the 50s).

                  Reply
  • 13. Raul Miller  |  June 15, 2020 at 8:50 am

    This was thought provoking…

    I have found that thinking about the data (before and after the code works on it) helps me understand the code — often more than thinking about the steps being taken.

    So, when I trace execution, it’s usually with that in mind — I am trying to form a mental picture about the changes hitting the data, and (more importantly) I am trying to come up with a good concept for the transformation as a whole.

    (Mind you, what I have described here also varies depending on the level of abstraction represented by my coding environment, and by the character of the obstacles I am facing. I am trying to also understand what each of the steps does. But once I’ve got that, and have seen it in operation, it’s often time to move on.)

    ((Two other difficulties worth remembering here, though, are: (1) The “hello world” problem, where certain details about the environment absolutely hinder people from getting started and once those hurdles have been passed they lose most of their importance, and (2) the “sophomoric” thing where on discovering some solutions a person is inclined to think they are ready to conquer the world. Both of these hit us again and again…))

    (((And… that said — you’re going to lose some people. Some people just feel stronger draws from other subjects. Some may later change their mind, though. Anyways, I would be inclined to spend considerable focus on what works and why that works. Still, … I really liked the perspective presented here.)))

    Reply
  • 14. Karyn Voldstad  |  June 15, 2020 at 9:55 am

    Katie who?

    Reply
    • 15. orcmid  |  June 15, 2020 at 11:46 am

Katie Cunningham. A couple of paragraphs earlier. Hmm. Oddly related to the subject of this post.

      Reply
      • 16. Karyn Voldstad  |  June 15, 2020 at 1:04 pm

        It was edited to add her last name. Thanks!

        Reply
        • 17. Mark Guzdial  |  June 15, 2020 at 1:20 pm

          Yes — sorry about that! Katie and I both reviewed this, and both missed that!

          Reply
  • 18. orcmid  |  June 15, 2020 at 12:29 pm

    I only have anecdotal reflections, yet this post has me wonder how I came to be so accepting of recursion.

First, my experience with recursion came with ALGOL 60 (learned in 1961 at age 22). I also saw it in McCarthy’s LISP paper, in the description of LISP itself. I thought it was cool. I think my preparation was in mathematics and the notion of mathematical induction.

    Learning about the housekeeping and how to manage recursive operation at the machine-language level without modifying running programs came later.

There is an approach to establishing the correctness of recursive representations of functions in programming that I more recently saw presented in a MOOC by Gregor Kiczales. That involves structural induction, although it wasn’t presented as such. (I would have preferred that, since it explains the “the magic happens here” part of it.)
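
    (A minimal sketch of what structural induction looks like for a recursively defined function; a made-up Python example, not one from the MOOC:)

```python
# Hypothetical illustration of structural induction on lists.
def total(xs):
    if xs == []:                   # base case: correct by inspection
        return 0
    return xs[0] + total(xs[1:])   # inductive step: *assume* the recursive call
                                   # is correct for the shorter list xs[1:] --
                                   # that assumption is "where the magic happens."

assert total([2, 3, 4]) == 9
```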

    Secondly, I also used an approach such as that of Raul Miller. I would look at data structures and the transformations on them, using diagrams, to establish the invariants that certain operations had to accomplish. Noteworthy examples were for adding and removing elements in symmetric (bi-directional) lists. There was also a diagramming technique as part of IBM’s HIPO (hierarchical input-process-output) methodology that was nicely descriptive for me. And also dataflow diagramming at different scale. Now diagramming is easier although out of fashion. State charts are close.

    Finally, I do not use debuggers/tracers and the only time I had to resort to one was to demonstrate that a compiler optimizer had broken my program.

    I do engage in mental symbolic execution, and sometimes I would draw register/variable settings and trace through code that way. My normal approach is to break things down into smaller parts and provide output instructions at intermediate points until I capture any discrepancy.
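
    (A small, made-up Python sketch of that intermediate-output style:)

```python
# Hypothetical sketch: insert output at intermediate points to localize a bug.
def normalize(values):
    total = sum(values)
    print("DEBUG total =", total)     # checkpoint 1: is the sum what I expect?
    scaled = [v / total for v in values]
    print("DEBUG scaled =", scaled)   # checkpoint 2: results should sum to 1.0
    return scaled

normalize([2, 3, 5])
```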

    There is an approach called test-first development that works well in functional programming and I prefer that for what is known as unit testing. It is a kind of demonstration of correct functioning and handling of edge cases (also used in that Kiczales MOOC using Dr. Racket). This is also in line with Dijkstra’s divide-and-conquer views and various ideas for proof-like structuring of code.

    I have no idea how any of this is helpful for CS education in a CS4all context and K-12 settings. Maybe it points to something on behalf of those who refuse to do symbolic execution.

    Reply
    • 19. shriramkrishnamurthi  |  June 16, 2020 at 11:11 am

      Please don’t call it test-first. That’s an unfortunate terminological mistake we made. I now far prefer to call it examples-first, and distinguish tests from examples. [Why I say “we”: I’m one of the co-authors of the book and environment that Gregor’s course is based on.]

      Reply
      • 20. orcmid  |  June 16, 2020 at 12:16 pm

I didn’t get that term from your book or Gregor’s “Systematic Program Design” course, Shriram. The use of “test-first” was mine, based on my experience of the course: writing assertions that were expected to fail and then expanding the code until the assertions passed. Also making certain that edge cases are handled by design.
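
        (A small, made-up Python analogue of that workflow; the course itself uses Racket-family check forms rather than asserts, and the function below is purely illustrative:)

```python
# Hypothetical sketch of the assertion-first workflow described above.
# Step 1: write the expected input-output cases; with only a stub, they fail.
# Step 2: expand the code until the assertions pass.
def middle(a, b, c):
    """Return the middle value of three numbers."""
    return sorted([a, b, c])[1]

assert middle(1, 2, 3) == 2
assert middle(3, 1, 2) == 2
assert middle(2, 2, 5) == 2   # edge case with duplicates, handled by design
```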

        Are you speaking of “How to Design Programs” or something else? I mistakenly purchased Liu’s “Systematic Program Design” which is clearly not behind Gregor’s course :).

        Thank you for commenting. I am happy to have better references on this and your current efforts are something I will also dig into.

        Reply
        • 21. shriramkrishnamurthi  |  June 16, 2020 at 1:20 pm

          Hi — yes, I am speaking of “How to Design Programs”.

          Edwards, in particular, has used test-first in the software engineering sense (which is also how you use it), and the results are not salutary.

          You may find it useful to look at our paper that tries to exploit the example-vs-test distinction:

          https://cs.brown.edu/~sk/Publications/Papers/Published/wk-examplar/

          You don’t have to read the whole paper, but do look at the first 2-3 pages and the related work section (sec 5).

          Reply
          • 22. orcmid  |  June 16, 2020 at 5:42 pm

That’s an interesting paper. My impression is that Examplar is a tool for testing conceptual understanding of a problem statement by (behind the scenes) running the input-output cases against both existing good implementations (“wheats”) and deficient implementations (“chaffs”) in a training situation. Figuring out what’s what is then an investigation the student must carry out by improving the input-output cases.

In Gregor’s class, this only came up in the mechanical rating of submitted solutions. When do-overs were allowed, we could go back and reconcile the failing-case messages, an interesting experience the first time I saw a chaff failure.

In development work, demonstrating conceptual understanding separated from implementation (and with no reference solutions) seems more challenging. Identifying chaffs is very interesting in this case also. I don’t see a way to decouple implementation much at all under these conditions. Is there any help here? Have you found a way to segue this into production programming work?

            Reply
            • 23. shriramkrishnamurthi  |  June 17, 2020 at 8:15 am

              Yes, this is a different (and very common) thing. This isn’t about auto-grading. The key difference is that we keep focusing on programs, but not enough on problems, which is where many misunderstandings actually lie. If someone has misunderstood the problem, there’s simply no point debugging their program.

It’s not clear how to scalably lift these ideas to large-scale software development work. A key difference in academic tools is that we have ground truth; we already know, and have built, the thing that the student is being asked to build. Tools like Examplar exploit that fact. In industry, you’re not going to build N correct implementations and K incorrect ones and give them to someone to test their understanding; you’ll just build one reasonably correct implementation and move on.

              Reply
      • 24. orcmid  |  June 16, 2020 at 1:37 pm

OK, I see how you did that with Pyret’s “where:” clause. Unfortunate for me, since I prefer Peter Landin’s “where” as a trailing “let.” Maybe “such that:”? Although that’s odd language.

        Reply
        • 25. shriramkrishnamurthi  |  June 16, 2020 at 2:08 pm

          Yeah, all of us Pyret designers have enough Haskell and other languages in our heads that we thought about and worried a bit about that, but we really couldn’t come up with a better keyword (as you note). Also, it’s a different but perfectly valid sense of “where” — they’re just “where”ing different scopes. (-:

One important consideration for us is that many of our students are pretty weak typists, so literally every extra character counts. That immediately eliminated descriptive but longer terms like “such-that:”. (And the likelihood of parsing errors with hyphens/spaces/etc. Underscores are out of the question.)

          Reply
  • 26. Prajish Prasad  |  June 16, 2020 at 6:25 am

    Thanks Mark for the insightful post! Going by Alan’s analogy of learning the violin, I am reminded of many of my friends who started learning the piano, but soon quit because they lost interest in preparing for and giving music exams (for which they had to play fixed pieces of music, which they felt were not at all relevant to them).

    As you rightly said, programming courses also suffer from this, and I think this paper can help CS instructors think about how essential practices like tracing and debugging can be taught while aligning with students’ identities, beliefs, attitudes and interests.

    Reply
  • 27. Ken Kahn  |  June 16, 2020 at 8:02 am

I’m confused about the word “tracing”. I grew up with what Wikipedia (https://en.wikipedia.org/wiki/Tracing_(software)) says: “tracing involves a specialized use of logging to record information about a program’s execution”. Some of this discussion sounds to me like “stepping” through code. Other parts seem to equate “tracing” with a careful line-by-line reading of a program. And yet other parts seem to mean mentally stepping through an execution (as opposed to using a stepper).

    Reply
    • 29. orcmid  |  June 16, 2020 at 12:30 pm

      What tracing means to me is what you get from a trace routine as in section 1.4.3.2 Trace routines of “The Art of Computer Programming,” vol.1. (One of my claims to a check from Knuth is a bug I detected by desk-checking the version in the first edition.)

      I take using a stepper as equivalent, including how the state of the processor and its registers are amenable to inspection.

      For interpretive systems or modern debuggers, one might step through at a source code level rather than the machine’s level. In my case, I had to see the machine code (in assembler form) to observe that optimization code motion had broken the code by eliminating a much-needed side-effect: a correctly-placed input-output operation :).

      In a recent project I have been using “trace” in another sense, and I should probably use “trap” instead. I’ll look into that.

      Reply
      • 30. gasstationwithoutpumps  |  June 16, 2020 at 2:12 pm

        To me “tracing” is following the flow of control for a program for specific data, whether by hand simulation, single-stepping, using debugger breakpoints, or using programs with additional calls to logging routines. If you are following all the side-effects and variable value changes, you are not just tracing, but emulating.

        A “traceback” is a dump of the call stack (often without the passed parameter or local-variable values on the stack).

        Reply
        • 31. alanone1  |  June 16, 2020 at 2:32 pm

          In the early 60s in the Air Force it was just called “desk checking”, meaning: run the entire machine code program in your head many times (plus a few other maneuvers). This is because you could get a max of one 3-5 minute session a day on the machine with an operator, much of which was used to examine registers at key points, and to get core dumps.

          Desk checking worked surprisingly well once the deep skill was learned and developed. It made a huge difference in writing programs because of the larger awareness of consequences. It also intertwined well with the very good macro-assemblers of the day, especially Autocoder from IBM.

          And had some real parallels with learning how to write prose with few errors.

          When I first got on an interactive debugger some years later, the old habits and skills really helped, and the debugger was very helpful, but not the center of the code writing activity.

          Reply
          • 32. orcmid  |  June 16, 2020 at 4:37 pm

            That was my experience as well. I think “the larger awareness of consequences” also had to do with learning to confine effects and dependencies. I know it influenced my design of data structures in assembly language programming.

            Reply
          • 33. shriramkrishnamurthi  |  June 16, 2020 at 8:14 pm

            I too learned to program without a physical computer, and to this day those same skills I developed in the process have stood me in good stead.

            Reply
            • 34. gasstationwithoutpumps  |  June 17, 2020 at 12:45 am

              Do any of you remember the “CARDIAC”—a paper toy for emulating a computer? I had one when I was first learning to program:
              https://www.cs.drexel.edu/~bls96/museum/cardiac.html

              Reply
              • 35. alanone1  |  June 17, 2020 at 1:23 am

Another great paper toy was the “Compilagame” that was handed out by Burroughs so that customers could learn how the Polish postfix internal codes were compiled and executed …

                Reply
              • 36. alanone1  |  June 17, 2020 at 1:42 am

                Hi Kevin

                The Drexel page you point to is terrific to see! CARDIAC came along in the late 60s so it was after my learning time (it would have been great to have this about 15 years earlier). The Drexel simulator for CARDIAC is a useful addition!

                A cool thing would be to see if (say) Meta II could be implemented in it (maybe not quite enough room) to then get an instant Algol, or at least a Smallgol.

                Reply
  • 37. Ken Kahn  |  June 16, 2020 at 12:57 pm

    I understand now, thanks. It is unfortunate that the word “tracing” is used in different but overlapping ways.

    Reply
    • 38. Mark Guzdial  |  June 16, 2020 at 2:03 pm

      The original computing ed work that we cite (Leeds Working Group) called it “doodles,” which is a nice and friendly phrase, but doesn’t capture the cognitive complexity of simulating a computer running the given program in your head.

      Reply
  • 39. alanone1  |  June 17, 2020 at 5:29 am

    Hi Mark

You and your colleagues have probably done a survey over the years, but it would be useful to see one or two examples, especially one from the present time, of “why are you currently studying computing, especially programming?”

    It would be illuminating — and very important — to see the reasons, and especially the percentage who say: “to learn and understand and do computing and programming”.

    Do you have access to this?

    Reply
  • 42. Ken Kahn  |  June 17, 2020 at 7:42 am

    I see this discussion touching on 3 different kinds of tracing: (1) on paper, (2) in one’s head, and (3) in a debugger. Personally I’ve avoided (1) even when asked to do so. But I have seen good lecturers simulate a program on a whiteboard successfully. But I question the generality of this way of teaching.

    Reply
    • 43. Mark Guzdial  |  June 17, 2020 at 8:31 am

As you well know, Ken, it’s dangerous for an education researcher to rely on introspection and personal experience, since others are different from you. The first study of tracing that I know of was the Leeds Working Group study (https://doi.org/10.1145/1041624.1041673), which found that those who traced on paper were much more likely to understand programs and predict program behavior than those who did it just in their head. Katie replicated that study (ICER 2017, https://doi.org/10.1145/3105726.3105746) and found that those who started tracing on paper then stopped were more likely to get the problem wrong than those who never used paper at all! That led to her ITiCSE 2019 paper (https://doi.org/10.1145/3304221.3319788), aptly titled “Novice Rationales for Sketching and Tracing, and How They Try to Avoid It.”

      Reply
      • 44. Ken Kahn  |  June 17, 2020 at 9:19 am

        Yes, I agree it is dangerous to rely upon personal experience. After reading bits of “Novice Rationales for Sketching and Tracing, and How They Try to Avoid It.” I am more sympathetic to “paper-based tracing” but only for very simple programs. The paper mentions 5 and they are simple enough that working them out on paper can be valuable (at least for students).

        While I should resist personal recollections, my memory of having to do assembler and machine code exercises while in graduate school (a very long time ago) is that updating registers on paper was very helpful. But again how general is this use of paper tracing?

        Reply
  • 46. Yoshiki Ohshima  |  June 17, 2020 at 4:17 pm

This is absolutely anecdotal and only based on personal experience, but when I teach recursive functions, it seems to be better not to talk about them in terms of operational actions such as “call and return” and “the value of i becomes (i – 1) this time around.” Rather, I try to bring students’ attention to the “definition,” and say “if somebody has written a function that does the job for you for the (n-1) case, let us think about how to define your function for the n case. You don’t even have to know how the (n-1) version is solved.” I myself have trouble following all the dynamic behavior up and down the chain, but “take somebody’s work and just do your little part” is (for me) a better way to think about and teach recursion. It is not in conflict with the line-by-line checking idea; it is just the mindset that the helper version is a black box you don’t have to look into.
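
    (A small, made-up Python example of that framing: treat the smaller cases as somebody else’s finished work, and only write the step that combines them.)

```python
# Hypothetical illustration: define the n case assuming smaller cases are solved.
def stairs(n):
    """Number of ways to climb n steps taking 1 or 2 steps at a time."""
    if n <= 1:
        return 1
    # Pretend somebody already wrote correct answers for n-1 and n-2 steps;
    # our only job is to combine them for the n case.
    return stairs(n - 1) + stairs(n - 2)

print(stairs(4))  # 5
```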

    Reply
  • […] writing a dissertation about the role of embodied representations in CS education (see a post here about her most recent paper). She recommended more on learning about […]

    Reply
  • […] to trace code at the line-by-line level. She wrote an ICLS 2020 paper about their reasons (see blog post). She decided to study that […]

    Reply
  • […] the Learning Sciences about the embodied metaphors that teachers use when teaching recursion (see blog post summary here). Teachers gesture and point, but it’s not clear to what. They talk about being […]

    Reply
  • […] do student use embodiment when they learn CS? Part of the answer to the first question appeared at ICLS last year. I talked about helping with Amber’s coding of student videos in my blog post about Dijkstra. Her […]

    Reply
  • […] no interest in understanding the details of how programs work. As one said to her (which became the title of her ICLS 2020 paper), “I’m not a computer.” Block-based programming won’t work for her learners because, like […]

    Reply
