Those Who Say Code Does Not Matter are Wrong

May 16, 2014 at 9:09 am 8 comments

Bertrand Meyer is making a similar point to Andy Ko's argument about programming languages. Programming does matter, and the language we use also matters. Meyer goes on to suggest that those saying "code doesn't matter" may just be rationalizing their continued use of antiquated languages. It can't be that the peak of human-computer programming interfaces was reached back in New Jersey in the 1970s.

Often, you will be told that programming languages do not matter much. What actually matters more is not clear; maybe tools, maybe methodology, maybe process. It is a pretty general rule that people arguing that language does not matter are simply trying to justify their use of bad languages.

via Those Who Say Code Does Not Matter | blog@CACM | Communications of the ACM.


8 Comments

  • 1. alanone1  |  May 16, 2014 at 11:34 am

    N.B. those who say code doesn’t matter are rarely found programming in machine code or assembler these days, but they’ve inherited this dim view from many in previous generations who did.

    However, if we take to heart that “code does matter” then why are most so complacent about the very poor language designs employed in the “every one must learn coding” movements today?

    Cheers

    Alan

  • 2. gflint  |  May 16, 2014 at 12:59 pm

    I just got done teaching my first Python course. The indent format is visually nice, but having a block "end" would make it so much easier to see and teach. Is "end" a bad thing? I am just a lowly high school programming teacher trying to get kids to understand the principles of programming, algorithms, and problem solving, so I do not understand why building a language with "end" is such a bad thing.

    • 3. Mark Guzdial  |  May 16, 2014 at 1:16 pm

      Way more than just NOT being a "bad thing," having "end" dramatically improves readability, especially for novices (it's significantly more helpful for the beginning programmer than it is for the expert programmer).

  • 4. gasstationwithoutpumps  |  May 16, 2014 at 4:04 pm

    While I like Python for a lot of reasons, I find the lack of a delimiter for the ends of blocks a major cause of errors in student programs: they come out 3 levels deep when they meant to come out 2 or 4. Different editors with different interpretations of tabs also cause portability problems, as indentation gets messed up unless people are really, really careful to remove tabs (the editors don't do it automatically, only on request).
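
    A minimal sketch of that dedent trap (the code and names are made up for illustration): every placement of the final print below is syntactically valid Python, so dedenting to the wrong level yields a silently different program rather than a syntax error.

        def print_row_totals(grid):
            for row in grid:                  # nesting level 1
                total = 0
                for cell in row:              # nesting level 2
                    if cell is not None:      # nesting level 3
                        total += cell
                print(total)                  # dedented to level 1: prints once per row (intended)
                # One indent deeper, print(total) would run once per cell;
                # one indent shallower, it would run only once, after the
                # outer loop finishes. Neither mistake is a syntax error.

        print_row_totals([[1, 2, None], [3, 4, 5]])   # prints 3, then 12

    For the tab/space problem, the standard library's checker (python -m tabnanny myfile.py) flags ambiguous indentation, and Python 3 rejects inconsistent mixes of tabs and spaces with a TabError; of course, that only helps if students actually run the check.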

    • 5. gflint  |  May 16, 2014 at 6:55 pm

      Those are exactly the issues I was having with the kids. Until I got the hang of it, I had the same problems myself.

  • 6. Dan Lessner  |  May 17, 2014 at 11:52 am

    About the missing end statement in Python: I guide my students to write it in the form of a comment. It is not necessary, but it helps clarity, so they often do it even though the code runs without it. I consider it a nice habit in languages with explicit ends as well: exactly when coming out from 3 levels deep, it is nice to know (i.e., have a comment saying) which end belongs to what.
    I am not sure what the software engineering standards for this are, but most of my high school students are not headed that way anyway.
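
    A sketch of what such end-of-block comments might look like (a made-up classroom example with invented names, not a prescribed standard): the "# end ..." lines are ignored by Python but tell a reader climbing out of nested blocks which dedent closes which block.

        # Count passing scores across all students.
        passing_mark = 60
        scores_by_student = {"Ada": [72, 55, 90], "Grace": [40, 58]}

        passed = 0
        for student, scores in scores_by_student.items():
            for score in scores:
                if score >= passing_mark:
                    passed += 1
                # end if
            # end for score
        # end for student

        print(passed)   # prints 2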

  • 7. Mark Miller  |  June 12, 2014 at 7:04 pm

    I’m surprised this idea has stayed around this long. I heard this when I was in college over 20 years ago, except it was expressed as, “There is no difference between programming languages. They all compile/interpret down to code that uses the same set of machine instructions, and they’re all Turing Complete.” Along with that they’d say things like, “You can do object-oriented programming in assembly language, or Fortran.” As Alan has noted, I know that at least some of them had had experience programming in assembly/machine code. Given the time that it was, and their age, I imagine they probably all had at one time or another.

    It would seem our professors saw Turing completeness as the be-all and end-all of what computing is, and so that's all that mattered as far as programming went. It also seemed they saw language as a notation for a form of system organization, but they didn't attach any significance to it. They told us, "It's a matter of taste," as if to say that if you like OOP, you can program in that fashion, but that it's just your preference, no better than any other, because it accomplishes the same thing as any other form of organization as far as computation is concerned.

    I could see their perspective about how “it’s all the same underneath,” but I didn’t see it as helpful. In retrospect, I think it missed some important ideas entirely, as did (and do) the arguments over which extant language is better. Re. the latter, one of the questions should be, “Better for which ideas and goals?”

    Having gained a wider perspective by going outside of computing for a while, I see similarities between this academic CS attitude about languages and the philosophical stance of "openness" that's been a stock-in-trade of significant parts of academia for several decades now, which wallows in relativism and comes to the same conclusion about cultures. The difference is that CS professors tend to be much less militant about it.

  • 8. Mark Miller  |  June 12, 2014 at 8:18 pm

    Addressing Meyer's specific concerns: I think he accurately criticizes the misuse of C, but I think he advances a false dichotomy about which language is "best." He sets up a specific example (the faulty C code) and then says it is representative of a wider problem with the language, one that can only be solved by some criteria for clarity exemplified by his "best" language, which is justified in opposition to the example. He's ignoring the rest of the system and the larger problems posed by its design. The example he cites is just a symptom of that.

    I make no bones about saying there are problems with C's language design and that of its derivatives, and there are many cases where C has been misapplied to problems it was not designed to solve. Still, I kind of bristle when critics blame a language for programmers getting themselves tangled in their own thickets. In my view, one of the duties of a programmer is to deeply understand the language they're using and, if necessary, take measures to avoid its potential pitfalls, whatever that may entail. It seems to me from this small example that Apple's programmers didn't think enough about the problem they were trying to solve in relation to their own limitations, and that C was insufficient to the task; I would hope this would conjure up thoughts of fashioning a runtime to suit the task, and then thinking about syntactic/semantic clarity.

