Jeff Atwood says “Learning to code is overrated” but means “We need good CS teachers”

October 26, 2015

I’ve written responses to comments like Atwood’s before. His perspective on “coding” is too limited, and he doesn’t realize that most people will end up somewhere between being a user and being a programmer (see the “fat line” blog post here). That “provide them plenty of structured opportunities to play with hardware and software” is a pretty good definition of one kind of teaching kids “computer science.” We need that. But the kids who only need opportunities to “play” in order to learn tend to be highly privileged (see the “rich boys” blog post here). Nobody wants kids to just “type in pedantic command words in a programming environment.” That’s a good definition of poor computing teaching. We need good teachers who know how to support a range of students with different kinds of scaffolding.

So what Atwood is really saying is that we need good CS teaching. Yup, you need a lot of that in NYC — I agree.

If you want your kids to have a solid computer science education, encourage them to go build something cool. Not by typing in pedantic command words in a programming environment, but by learning just enough about how that peculiar little blocky world inside their computer works to discover what they and their friends can make with it together. We shouldn’t be teaching kids “computer science.” Instead, we should provide them plenty of structured opportunities to play with hardware and software. There’s a whole world waiting to be unlocked.

Source: Jeff Atwood: Learning to code is overrated – NY Daily News



9 Comments

  • 1. Don Davis (@gnu_don)  |  October 26, 2015 at 8:21 am

    Recently on the SIGCSE list, an instructor was bemoaning that his students could, by and large, not transition from given set-theory proofs to independent proofs. He has found that some students are successful — but a disconcertingly large number can’t make that transition.

    Similarly, but more extreme, Clegg and Kolodner describe how, when given a task to independently develop a model of human lungs, a noticeable number of students weren’t successful in structuring the activity and progressing independently (Clegg & Kolodner, 2007).

    With regards to: “but by learning just enough about how that peculiar little blocky world inside their computer works to discover what they and their friends can make with it together.”

    I was recently reviewing an article from Kirschner et al. (2006) illuminating difficulties with “minimal guidance” – the sort of programming mentioned above: take some building blocks and run. Sure, some students, especially those now writing tech blogs, are likely to flourish in such an environment, but others still struggle, even with substantive scaffolding.

    An example: With high school students we go over some basic JavaFX – here’s how you make a circle (method), here’s a square method, here’s a triangle method – now put them all together to draw a house or whatever you want. The students Atwood is describing, and the majority of students I teach, are successful with this. However, as Kirschner contends, and as I’ve witnessed, there are students who draw blanks at that point.
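    To make the structure of that activity concrete, here is a rough, hypothetical sketch (not the actual classroom code) of the kind of scaffolded shape methods described. Real JavaFX needs a running GUI toolkit, so this stand-alone version emits SVG text instead; the pedagogical shape is the same: each figure is a ready-made method, and pupils compose the methods into a picture.

    ```java
    // Hypothetical sketch of scaffolded shape helpers, in the spirit of the
    // JavaFX activity above. Each shape is a pre-built method; the open-ended
    // step is composing them ("draw a house or whatever you want").
    public class ShapeScaffold {
        static String circle(int cx, int cy, int r) {
            return "<circle cx='" + cx + "' cy='" + cy + "' r='" + r + "'/>";
        }

        static String square(int x, int y, int side) {
            return "<rect x='" + x + "' y='" + y
                 + "' width='" + side + "' height='" + side + "'/>";
        }

        static String triangle(int x1, int y1, int x2, int y2, int x3, int y3) {
            return "<polygon points='" + x1 + "," + y1 + " "
                 + x2 + "," + y2 + " " + x3 + "," + y3 + "'/>";
        }

        public static void main(String[] args) {
            // The composition step pupils are asked to invent themselves:
            System.out.println(square(40, 60, 40));               // walls
            System.out.println(triangle(40, 60, 80, 60, 60, 30)); // roof
            System.out.println(circle(60, 80, 5));                // doorknob
        }
    }
    ```

    The scaffolding lives entirely in the method signatures; the students who “draw blanks” are the ones for whom composing even these ready-made pieces is an unstructured leap.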

    The question – what then? These students, Kirschner et al. would note, do not have sufficient background experiences to move on independently, which (the authors maintain) cannot be developed in school. Taking it down a level would involve being pedantic as Atwood describes – draw a circle here, draw the square there.

    Is there a Soloway & Spohrer type analysis discriminating the skill differences in the learners Atwood and Kirschner et al. describe?

    Clegg, T., & Kolodner, J. (2007). Bricoleurs and planners engaging in scientific reasoning: A tale of two groups in one learning community. Research and Practice in Technology Enhanced Learning, 2, 239–265.
    Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

    • 2. Peter Donaldson  |  October 26, 2015 at 7:03 pm

      We strongly suspect it has a lot to do with fluency in an underpinning set of sub-skills that centre around mechanistic reasoning.

      It’s just speculation at the moment but several clusters of CS education research findings point towards this possibly being a crucial piece of the puzzle.

      It would explain why merely viewing visualisations of software processes such as algorithms doesn’t improve novices’ learning, why being able to identify particular types of construct and accurately trace code is so strongly correlated with code writing, and why even programming in a declarative language appears to require some understanding of the underlying notional machine. It would also help to explain why certain types of generic visualisation ability correlate with programming skill more consistently than other measures that have been tried.

      It could be the equivalent of poor decoding skills preventing novices from succeeding in reading comprehension work. Hopefully I’ll one day be in a position to work out an appropriate set of test instruments and experimental designs to test this hypothesis.

      In the meantime I’ve changed the way I teach to incorporate a more comprehension-oriented approach that includes pupil code tracing, and it’s certainly resulted in a higher rate of engagement and a lower rate of partial or total failure when pupils write code of their own.
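      A comprehension-first tracing task might look something like this (my own illustrative example, not Peter’s actual materials): pupils fill in a table of variable values by hand, line by line, before the code is ever run.

      ```java
      // Illustrative code-tracing exercise: pupils trace (i, total) on paper
      // and predict the printed result before executing the program.
      public class TraceMe {
          static int sumOfOdds(int n) {
              int total = 0;
              for (int i = 1; i <= n; i += 2) {
                  total += i; // trace table: i=1,total=1; i=3,total=4; i=5,total=9; ...
              }
              return total;
          }

          public static void main(String[] args) {
              System.out.println(sumOfOdds(7)); // pupils predict: 1+3+5+7 = 16
          }
      }
      ```

      The point of the exercise is exactly the mechanistic reasoning Peter describes: the pupil has to simulate the notional machine step by step rather than guess at the program’s purpose from its surface.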

      • 3. Don Davis (@gnu_don)  |  October 28, 2015 at 7:53 am

        Peter,
        I appreciate the link to CAS. (Some of the links are down.) I was trying to find some of your research. Do you have a paper you’d recommend? Have you articulated some sort of discrete task analysis somewhere?
        Thank you,
        Don

        • 4. Peter Donaldson  |  November 1, 2015 at 6:27 am

          Hi Don,
          I’m not aware of any published research on mechanistic reasoning and its relationship with developing knowledge and skill in CS at the moment. That’s why I was careful to say that we strongly suspect it’s important.

          For the last two years I’ve been involved in helping to co-create and deliver a programme of professional learning for other Computing teachers across Scotland called PLAN C. As part of this programme we analysed a large volume of CS education research to try to identify difficulties that novices have in CS and teaching methods that are more effective. This was purely so that teachers had access to research findings in a form that they could use to help them improve their own teaching. It wasn’t a research project so the main aim was to ensure that as many teachers as possible had a chance to explore and try out these methods for themselves.

          We had limited time, so although we have collected quite a bit of data about various aspects of the programme and the teachers who participated, it hasn’t been written up in a publishable format yet. That’s also why we haven’t been able to explore the relationship between mechanistic reasoning skills and the ability to learn how to understand and create process descriptions.

          Mark’s right that defining CS learning progressions is a big challenge. We didn’t tightly define one but what we did do was start to develop an understanding of some of the causes of novice difficulties, what contributory skills they needed to develop and teaching methods that would help a broader range of pupils acquire them.

          • 5. Mark Guzdial  |  November 1, 2015 at 10:29 am

            I can suggest a few lines of work that relate to mechanistic reasoning for student understanding of programs.

            • I suggest Andrea diSessa’s work on p-prims and Bruce Sherin’s work on how students understand code vs. equations differently.
            • There’s a line of work on “notional machines” — the mental models that students build about what the computer is doing when it’s running a program (Benedict du Boulay, Mike Eisenberg, Mitchel Resnick (pre-Scratch), Juha Sorva).
            • I very much like the structure, behavior, and function (SBF) model of design knowledge developed by Ashok Goel and applied to science education by Cindy Hmelo-Silver. I believe it would be a fruitful model for understanding how students relate the structure of a program to its behavior when running and to the function or purpose of the program overall.
            • 6. Peter Donaldson  |  November 3, 2015 at 2:21 pm

              Hi Mark,

              thank you for the overview of CS research that relates to mechanistic reasoning; there are a few researchers you listed whose work I haven’t really delved into or come across before. I don’t think I’ve read any of Bruce Sherin’s work, and I have only seen Cindy Hmelo-Silver’s work referred to in some of the CS education research being carried out in Israel.

              I agree that structure, behaviour and function is an interesting model. I haven’t done anything too formal, but our animation and games programming units tend to focus on something pupils would like to be able to implement, such as sprites talking to one another; we then look really closely at some specific example behaviour before we think about the instructions we’d need.

              We teach a range of other topics that don’t involve programming but the Animation Programming unit has consistently been the highest rated unit in terms of both depth of learning and enjoyment in our annual survey of all 1st year pupils. Choosing to really focus on a context with broad appeal and how to motivate the use of certain types of construct has really paid dividends.

    • 7. Mark Guzdial  |  October 28, 2015 at 11:45 am

      Don, I don’t have an answer to your final question. Defining CS learning progressions based on empirical evidence is a big challenge in our community today. I can point you to Mike Lee’s dissertation where he explored different ways to teach CS. (See blog post here.) The minimal guidance approach didn’t work well, as expected, but a game-based approach beat a tutorial one, which was surprising to me.

      • 8. Don Davis (@gnu_don)  |  October 28, 2015 at 8:03 pm

        Thank you, Mark. I’ll give Mike Lee’s dissertation a close read.

  • 9. fgmart  |  October 27, 2015 at 7:54 am

    He’s right that our fixation on code and coding is narrow and vocational.

    Instead, we should be broadly encouraging our kids to become makers.

    Including making with code.

