Improve Computing Education: Take the More-than-Java Pledge

March 1, 2011 at 8:20 am

I have a sure-fire way of improving computing education. Everyone reading this, post this to your blogs and Facebook status and every other way that you make public, digital statements these days:

I promise to no longer teach Java to anyone at the undergraduate Freshman level or earlier.

I am teaching Java in my Media Computation Data Structures class this semester, the first time I’ve taught first year students in four years.  I had forgotten how bad Java is for beginning students!

My students are almost all non-CS majors. For all of them, this is their second-semester CS course, but for the most part, last semester was the first time any of them had ever programmed. Their first course was in Python (robots), Python (MediaComp), or MATLAB. It's a small enough class that students actually do come to my office hours, and that lets me see the aggravating errors that they are facing.

Here’s a common error — it’s a faulty method declaration.

public void foo();
{
    // blah, blah blah
}

The error you get is "missing method body, or declare abstract." Sure, that message makes sense if you understand semi-colons and blocks, but if you don't: "What's abstract?!? I have a method body there — why doesn't it see it?"
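
For contrast, here is what the student almost certainly meant: the same declaration with the errant semi-colon removed (the name and body are just the placeholders from above).

public void foo()
{
    // blah, blah blah (with the semi-colon gone, this block really is the method body)
}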

Here's the error that I saw multiple times (in both of the forms below), which I find just infuriating.

while (a < 4);
{
    // do something in here, and probably change "a"
}
if (sometest());
{
   // do something if true
}

These are infuriating because there is no compiler error — the first one generates an infinite loop (the bare semi-colon is the entire loop body, so if a starts out less than 4 it never changes), and the second one always executes the body, ignoring the result of the test. Programs don't work, and the compiler gives no clue that the students did something that only experts can handle correctly.
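
For comparison, here are the versions the students intended, with the extra semi-colons after the tests removed (the variable and test names are the same placeholders as above):

while (a < 4)
{
    // do something in here, and probably change "a"
}
if (sometest())
{
    // do something if true
}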

I’ve read Kernighan and Ritchie.  I know that, with magical side-effects and complex C-magic, one doesn’t actually need to have a body on loops and IF’s.  Everything can be done in the IF or WHILE test, or in the body of the overly-complex and macro-like FOR loop.  But why allow that in a language for beginners?  We’d never want to teach that to first year students, and by allowing these experts-only practices in Java, we lay land-mines for them!

I completely believe that students should learn C-based languages, and Java is a powerful tool that most CS students should learn.  But not to start.  It’s a lousy language to wrestle with when you are still trying to understand what commanding a computer is all about.  My students are trying to understand object interactions and creating dynamic data structures, and errant semi-colons are eating into all of their programming time.  Seriously — a bad semi-colon may cost a starting student 30 minutes of programming time (as Matt Jadud’s ICER 2006 paper showed).  If you can only afford two hours of programming time for an assignment, one wrong semi-colon now means you only have 90 minutes.  If you can’t complete the assignment, you never get the motivation boost of success and your grade suffers.  I really believe that semi-colon errors correlate with our retention problems.

So why do we teach Java so early?  Because it has become the language of CS education.  We have to teach Java to prepare students for what comes next.  This is particularly salient for me because, as of this semester, we no longer teach Smalltalk to students at Georgia Tech.  Lisp died from our curriculum about a dozen years ago.  Now, the required courses teach (in order): Python, Java, Java, C, Java, and options in upper-level courses between C, C++, C#, and Java.  If students want, they can take a specialty course where they might see some ML.  I don’t think GT undergraduates can even take a class where they’ll use Lisp anymore.  C has won.  This is a learning problem because I worry that students won’t develop cognitive flexibility without these other language approaches. Forget about transfer, forget about vocational training — let’s focus on being able to think about problems and representations in different ways. Here’s our goal in Rand Spiro’s words (which weren’t about programming, but fit perfectly): “Cognitive flexibility theory suggests that learners grasp the nature of complexity more readily by being presented with multiple representations of the same information in different contexts.”

The real tragedy here is that few of us can take the pledge.  I can't take the pledge, either.  We live in an educational ecology, and none of us can act alone. If I did, then I would be doing my students a disservice — they would be unprepared for their later courses.  If high school teachers took the pledge, then they couldn't teach AP CS, and there would be even less CS in high school.  And no, that wouldn't be a good thing — by every study we have, students without a CS course emerge from high school with extremely ill-informed and negative views of CS, and any high school CS makes things better.  High school students with CS have a better understanding of what CS is, and are more likely to pursue later studies.

How do we get to a better place from where we are now?  We who teach CS all have to decide that there's more to programming computers than C, that 1973 was not when humans reached their peak in ability to communicate with a computer.  We have to recognize that other forms of programming are important, even if they don't get students a job.  And by teaching those other languages, perhaps we create a seed to change industry, too.  We just can't settle for what we have now.  We have to decide to teach other kinds of programming languages (somewhere!), and to create pathways so that we don't doom students who don't have Java in their first year.

Here’s a pledge that I hope that all computing faculty can make:

I pledge that I will work with my colleagues so that all of our computing undergraduates will learn a programming language that is not based on C, and we will build that into a required course within the next two years.



63 Comments

  • 2. Alan Kay  |  March 1, 2011 at 9:39 am

    To add to your polemic, I don’t think Java is at all good for pros either — even though it can be escaped from using meta-techniques, it is quite a bit of needless work to do this in Java.

    I think that all levels from the highest level languages and programming we know how to make and do, all the way down to the metal, can be sampled and understood in the first two years of learning without having to give in to specious arguments about “later jobs” and “later requisites”.

    Any so-called “computer science” that doesn’t know how to do this and isn’t willing to do this should be drummed out and stripped of its accreditation.

    This should be one of the prime differences between going to university vs going to a trade school, and academia really needs — as you suggest — to take itself in hand and try to grapple with the real issues here.

    But I would advocate going further than you suggest ….

    Cheers,

    Alan

    • 3. Mark Guzdial  |  March 1, 2011 at 11:28 am

      There's an interesting proposal in your response, Alan, that would be worth testing. If we focused on student expression and cognitive flexibility in the first two years, could we get to an outcome where students can work in any language? I don't know that it's been tested in that way previously.

      Cheers,
      Mark

      • 4. Alan Kay  |  March 1, 2011 at 11:35 am

        Hi Mark

        I think you are more familiar with the evidence for both directions than I am.

        There is both imprinting and superstition when only one language is used, but there is only one devil to believe in.

        50 years ago every programmer would have to get fluent in literally dozens of languages (including the machine codes of the wide range of architectures back then). This hurt my head the first couple of times I had to do it, but then one's mind in defense forms a different useful abstraction called "programming" that makes subsequent languages (both similar and different) much easier and quicker to learn.

        Our brains are set to learn both ways — I think there has to be a bit of a forcing function for the second.

        Cheers,

        Alan

      • 5. Mark Miller  |  March 3, 2011 at 5:40 pm

        I don't disagree with your proposal. I've mentioned this before to you, but maybe it deserves repeating. You may have had a similar CS education to mine. When I took CS we learned two languages pretty well (Pascal and C), and became familiar with at least three more (I became familiar with two more than that, because of the senior track I entered). Only one of them (Lisp) was a real head-scratcher to me at the time, but I blame that on it being taught so badly by one prof I had.

        The belief among my CS profs of the time (this was more than 20 years ago) was that students should be able to learn any language by themselves, using the principles of CS. Their attitude basically was, “We’ll teach you a language in your freshman year, but after that you’re pretty much on your own.” They offered some guidance, but they kept it to a minimum on purpose. They wanted us to learn to be adaptable, and they figured the only way to do that was to give us the experience of learning languages on our own while we were going through the program. I can remember arguing for the CS Dept. to offer a course in C, because it was popular in industry at the time. My CS prof. vigorously argued against that. He said, “We don’t want to become a trade school.” He said, “I can guarantee you 5-10 years from now C won’t be popular anymore. Instead another language will have taken its place.” He had seen that pattern himself, and history, which we can all see, has proven him correct.

        Eventually the CS Dept. relented and offered a C course, but only for a half-semester, perhaps because many students were having difficulty learning it themselves. For me, that’s all I needed, based on prior courses I’d taken in Pascal, and assembly language.

        Professors varied the language they used in the senior level courses, depending on what they individually thought was best, or in some cases what the students preferred. I remember when I entered as a freshman there were students using a language called “Turing” for the operating systems course. In my senior year I saw students using Modula-2. Some were using C++ for graphics, and the operating systems course.

        The big difference I think between then and now, from what I’ve heard you describe, is that quite a few of the undergrads had prior experience with programming before they entered college. That was true in my case. That didn’t make things easier in all cases (my experience with learning Lisp being one example), but there were a few models I had in my head, which I had developed from earlier experience, that helped me learn some concepts taught in CS more rapidly, and which were filled in more by what I was taught.

        I’m not saying that universities should return to the way we learned things. I mean to give an account to remind people of what was once considered possible, and which was proven out with some degree of success. Being exposed to a variety of different languages didn’t hamper my job prospects in the slightest. It gave me a wider view of what was possible. It definitely gave me reason to be critical of what the industry does, because I had the opportunity to see how different problems can be solved more easily and elegantly depending on which language you use, and the industry in large part has used criteria that have nothing to do with making the job easier for programmers, or elegance, and in the process has hampered itself. Without this perspective there’s very little hope that things will get better.

        It used to be that language choice in industry had a lot to do with performance. Slow hardware, and probably poor compiler design, limited these choices. Now, I think it has more to do with desired functionality being associated with particular languages/VMs, and tradition. What used to be considered the slow languages now run very nicely on the fast hardware, but they are usually shoved to the side now for lack of desired library functionality.

        • 6. Bonnie MacKellar  |  March 5, 2011 at 9:16 am

          I went to school in the early 80’s. Back then, we learned one programming language (Pascal) with a wee bit of Lisp thrown in (and it was a head scratcher to me too). Most students had no programming background from high school in those days because most schools did not have the resources in those pre-PC days. I went to a large, research-oriented private university with a strong CS program (and many women, in those days, about 40%).

          I think it has only been in the last 15 years or so that we have started expecting kids to have had programming in high school.

          • 7. gasstationwithoutpumps  |  March 5, 2011 at 12:53 pm

            I learned programming in high school in about 1969 (Fortran and assembly language), but it was very rare then for high schoolers to get any computer training. Our high school was rich and even owned its own computer (an IBM 1130 with a card reader, card punch, chain printer, and removable disk drives). I believe it had 4k bytes of RAM (ferrite core memory in those days), but it may have been 4k 2-byte words.

            I didn’t do much CS as an undergrad (I did some programming, but I can’t remember whether I had any courses), but got into it in grad school in the mid 70s, learning Pascal, Algol W, SAIL, LISP, and C. I also had a smattering of other languages: APL, Smalltalk, SETL, several assembly languages, …

            I think that high-school programming classes peaked about 5–10 years ago, and are on the way back down. Does anyone have data on that?

  • 8. Jim Huggins  |  March 1, 2011 at 10:23 am

    I’m reminded of what Churchill said once: democracy is the worst form of government, except for all the others.

    Is it any surprise that Java isn’t a great language for teaching computing, considering that it was never intended for that purpose? I can hardly complain that my cheap compact car can’t get up to 120mph on the freeway, when I picked my car based on price and fuel consumption, not speed. (Not that I’ve ever tried to get my car up to 120mph. Nope. Never. Quick, look over there …)

    We choose languages for instruction based on a variety of constraints, the grand union of which is usually contradictory. Inevitably, we make tradeoffs. Are those tradeoffs worth it? Obviously, educators can (and do) differ, and that’s what makes for a vibrant discussion.

    My institution (Kettering University) has an unusually practical bent, and so the emphasis on a “real” language early on is worth the pain, in our particular setting. But other institutions with different missions could easily decide differently.

    • 9. Alan Kay  |  March 1, 2011 at 11:13 am

      Machine code is "real". Fortran is real. Lisp is real. Smalltalk is real.

      Do you have more criteria (I hope)?

      • 10. Jim Huggins  |  March 1, 2011 at 11:26 am

        Of course; hence the use of the word “real” in quotes as a placeholder for a much longer discussion.

        We are a full co-op school; we send our first-year students out into the workplace almost immediately. There is consequently a great deal of pressure on us from our corporate partners to use languages in our first-year courses that, if not in common use, have great similarities to languages in common use. The C-like languages Mark names above form the majority of the marketplace for the employers we see.

        For us, picking a pedagogically ideal language for the first year, and then moving to more “dirty” real-world languages, doesn’t fit with our particular mission. We’re willing to delay discussion of the ideals until later courses. I won’t claim that’s an ideal course of action, and I certainly don’t claim it’s the “right” answer for other institutions … just that it works for us.

        • 11. Alan Kay  |  March 1, 2011 at 11:46 am

          I understand this argument. And it does satisfy your institution, many students, and most employers.

          But it just puts out more barely skilled shaky bricklayers in a field that needs architects and engineers.

          So I don’t think your reasons are remotely good enough.

          Cheers,

          Alan

          • 12. Jim Huggins  |  March 1, 2011 at 11:53 am

            With respect … we do cover the more “profound” topics in later coursework. It’s not like we believe that knowing Java is the only thing a CS professional needs to know. We just differ as to the sequencing of those topics.

            And, to be a little smug and defensive … part of becoming an architect or an engineer is understanding how things work in practice, not just the classroom. This is one reason we’re happy to make the tradeoff. Students learn as much, if not more, about how CS works by solving the problems of cash-paying clients than completing yet another artificial homework assignment for a professor. And they come back into our classrooms with better questions as a result.

          • 13. John Clements  |  March 1, 2011 at 4:40 pm

            +1

            John Clements

        • 14. Keith Decker  |  March 2, 2011 at 9:47 am

          With respect, Northeastern U, home of HTDP [How To Design Programs] and the Program by Design approach, is **also** a co-op university. Spending the first semester teaching freshmen how to think (about computation), before drilling them in a job-skill language, works really well for them… Although Delaware (where I am) is not co-op, I know that most of our majors will want to do summer jobs and go to work rather than go to grad school. Yet I have no problem convincing them that in the very first semester, it might be worthwhile to spend some time thinking about what the minimal, core ideas are that will pop up in every language they use in the future, rather than where to put a semicolon. And to have some fun writing graphical games and such. The key is to tell the students *why* this is a good idea for them, early and often during the course, via examples, readings, and class discussions.

          • 15. Ashok Bakthavathsalam  |  November 1, 2016 at 6:45 am

            +1

        • 16. Andres  |  March 2, 2011 at 4:18 pm

          “It is not the task of the University to offer what society asks for, but to give what society needs”. E. Dijkstra

          Andres.

          • 17. Mark Miller  |  March 4, 2011 at 8:07 pm

            Driving the point further, there’s this quote from a speech Dijkstra gave in 1999. IMO it pertains to some of the comments we’ve seen on here:

            “Industry suffers from the managerial dogma that for the sake of stability and continuity, the company should be independent of the competence of individual employees. Hence industry rejects any methodological proposal that can be viewed as making intellectual demands on its work force. Since in the US the influence of industry is more pervasive than elsewhere, the above dogma hurts American computing science most. The moral of this sad part of the story is that as long as computing science is not allowed to save the computer industry, we had better see to it that the computer industry does not kill computing science.”

  • 18. Rob St. Amant  |  March 1, 2011 at 10:25 am

    I teach an upper-level undergrad/grad course on AI programming, in Lisp, and it’s a lot of fun. Students seem to get a lot out of it in the way of programming concepts. I wish I could encourage more students to get into the language. Also, for what it’s worth, I’ve been trying out Python lately, and I really like it for introductory programming.

  • 19. Ajai Karthikeyan  |  March 1, 2011 at 10:48 am

    Mark, as you might know I was coding for about 10 years before coming to Tech. I initially started off with Logo, switched over to BASIC (Quick initially, followed by Visual), and then it was a hodgepodge back and forth between Java and C++ depending on which school I was in. I was doing basic HTML in parallel with this. Something that I still like to do today is pick up a language and hack something together over the course of the day. Each one of these languages seems to have its quirks that students will need to work around, so can we really say there is one perfect language for introductory classes? Personally, Python and Matlab both actually annoy me more than pretty much any other language I've worked with.

    Also, I've noticed that 1331 (Intro to OOP) usually ends up turning into a *let's learn java* class while the actual OOP concepts don't sink in till 2340 (Objects and Design) which until recently was the *let's learn smalltalk* class. Way too much attention is paid by the students (and professors) to the language to get the concepts through.

    [mirrored comment from Facebook to keep the discussion going]

  • 20. James Taylor  |  March 1, 2011 at 10:56 am

    JavaScript has the same problem, of course. But I don’t get why it is such a problem. Students learn to deal with semicolons. Maybe one needs a good lint program. Testing in JSFiddle with JSLint tells me that JSLint shows exactly where the problem is.

    Also, your note seems to suggest that an intro course in Python (I do love Python) doesn't help. Are you suggesting to continue using something like Python to teach deeper concepts before moving on to C languages?

    I also note that JavaScript is not in the list of learned languages at all. Why is that? It seems like a rather important language and one that allows for multiple conceptual pathways to the same result.

    • 21. Mark Guzdial  |  March 1, 2011 at 11:26 am

      It's true that JavaScript has a more interesting object model than Java, but it's nowhere near as different as message sending in Smalltalk or Self, logic programming in Prolog, or strong type definitions and functional programming in ML. The argument for JavaScript is that it's "rather important" — in terms of its use and applicability. That's a more vocational argument. Let's get beyond that for cognitive flexibility. Sense of relevance is important to motivate students, agreed, but we can get relevance without choosing a vocationally-motivated language.

      • 22. Alan Kay  |  March 1, 2011 at 11:43 am

        Hi Mark,

        Javascript could be an interesting compromise if OMeta were used so it could be easily extended and superseded. Alex Warth has done some great things here … and so has Dan Ingalls, some of it with Alex's tools.

        This would allow both “real” and “getting around ‘real’ gracefully” to be part of the early learning curve.

        This works because the interesting and useful parts of Javascript form a dynamic language very much influenced by Lisp, and because JS has enough of a meta framework to allow, e.g., OMeta to actually be used as an alternate function definition so the language can be extended while it is running.

        This plus a good IDE (extensible also) for JS would allow a lot of things to be done — and for them to be justified for the myriad of doubters, etc.

        This wouldn’t be as good as a complete roll your own (which is also very doable and should be looked into), but a wide range of really interesting programming styles could be manifested and learned.

        For example, the Prolog that Alex and Stephen Murrell did in just 100 lines for everything is quite beautiful, as well as providing a Prolog to learn this style of programming.

        Cheers,

        Alan

        • 23. James Taylor  |  March 1, 2011 at 1:23 pm

          Alan’s response is a great description/improvement of what I had in mind. It does seem that the standard use of JavaScript would not encompass messaging, logic, etc., but I think it can be easily cast in those forms to teach the ideas. And inserting those ideas into web programming could be a good social benefit.

          Perhaps the question is, what languages allow one to explore various programming models without having to add too much complexity? I think if you can slip all the different programming models into a language that seems practical and familiar, one could implement a new CS ecosystem without too much trouble.

          Personally, I would be curious if there was any programming paradigm that JavaScript could not handle easily (other than being strict!).

          I also think that the main problem of JavaScript is its practical side. Dealing with browser programming is painful, though it can be hidden with the right framework, which is what I would do if teaching this language for programming concepts.

          • 24. Alan Kay  |  March 1, 2011 at 2:08 pm

            There is nothing difficult about making a strict language in JS (again via OMeta).

            Check out Dan Ingalls’ Lively Kernel to see JS used as a machine on which a complete class based system with its own graphics model has been implemented.

            In other words, there need be nothing painful about browser programming — just disappear all the horrors ….

            By the way, the way Alex turned JS into a completely extensible language was to write JS in OMeta (about 185 lines of code) and this allows all of JS, or subsets, to be used plus arbitrary extensions and modifications.

            The basic idea behind what and how here is part of the essence of CS and this has almost disappeared as a “5 finger exercise” technique for making programming systems which better match up to the problems at hand.

          • 25. James Taylor  |  March 1, 2011 at 2:54 pm

            Thanks Alan. Both look neat and I look forward to perusing Alessandro’s thesis. And I can certainly see this system being useful in setting up the right environments in teaching.

            But I was wondering more along the lines that JavaScript has in it the ability to be many things without extensions or parsing, just discipline in its use. That is, using messaging or functional programming ideas in normal JavaScript could just be done, with not a lot of code or frameworks.

  • 26. Max Hailperin  |  March 1, 2011 at 11:01 am

    I know that, with magical side-effects and complex C-magic, one doesn’t actually need to have a body on loops and IF’s.

    Although true, that is only loosely related to the difficulty you are pointing out. The real culprit is a syntactic decision that dates back even further than C, to PL/I: the decision that null statements should be written as just a plain semicolon, rather than something like "skip;", "pass;", or "nop;". (Or null statements could be left out of the language entirely, as in BCPL. Given that C allows an empty block to be written with just two characters, this would have been a quite reasonable choice.) The fact that null statements occasionally make useful bodies may have some heuristic relevance, but doesn't directly address the syntactic issue.
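
    To make the grammatical point concrete, here is the errant while from the post, re-indented the way the grammar actually parses it (an illustration only, using the same placeholder names): the bare semi-colon is a null statement that serves as the entire loop body, and the block that follows is an unrelated compound statement.

    while (a < 4)
        ;   // null statement: this is the whole loop body
    {
        // a separate block statement, executed once if the loop ever exits
    }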

    • 27. Mark Guzdial  |  March 1, 2011 at 11:22 am

      Sure, Java inherited some of its flaws, as you suggest, Max. But we can also come up with new ones that Java invented. “Static” to represent “class”? PSVM, anyone?
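
      For readers who don't know the shorthand: PSVM is the "public static void main" incantation that every runnable Java program must include, and "static" is Java's keyword for class-level (rather than per-instance) members. A minimal illustration, with made-up names, of the boilerplate a beginner faces on day one:

      public class Hello
      {
          static int greetings = 0;   // "static" here means a single class-level field, shared by all instances

          public static void main(String[] args)   // the PSVM line itself
          {
              greetings = greetings + 1;
              System.out.println("Hello #" + greetings);
          }
      }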

      • 28. Max Hailperin  |  March 1, 2011 at 11:47 am

        Mark, I must not have made my point clear enough. I wasn’t trying to make the point that some of Java’s flaws (including your original example) were inherited. You actually had made that point yourself, though you only traced that particular flaw back to C, whereas I pushed it back deeper in history to PL/I. The distinction of C vs. PL/I is irrelevant to whether the flaw was novel in Java. You already indicated it wasn’t.

        Nor do I doubt that Java has introduced new difficulties that truly are novel. As it happens, your new example regarding “static” is not one. C++ already had pioneered “static members”, as they are called in that context. And that C++ design decision was itself clearly inspired by C’s overloading of “static” to mean “local to a compilation unit (i.e., file)” when applied to external declarations. But that history just invalidates your particular example, not your point, which as I say, I don’t dispute. Java surely has its own unique trouble spots.

        Nor am I trying to defend Java. Even less am I trying to defend its over-dominant and over-early use in education. I would be among the last to do that.

        All I was trying to do was clarify the specific grammatical nature of the particular problem you were using for your example. I don’t see any reason why we can’t criticize inappropriate educational roles for Java, as you do, while still having our criticisms be accurate with regard to the grammatical (and historical) details.

        • 29. Mark Guzdial  |  March 1, 2011 at 11:57 am

          Sorry, Max — you’re right. I didn’t go deeply enough into the history, and you’re right that the null-body and static problems were also both inherited into Java.

  • 30. gasstationwithoutpumps  |  March 1, 2011 at 11:31 am

    I’m in agreement that Java is a poor choice for first programming language.

    Currently, I favor Scratch followed by Python, though in teaching my son to program I followed a very different route. He started with various Lego robotics programming languages (most pretty bad), did some Logo and NQC. Finally he got a substantial dose of programming in Scratch. Then he learned C (in order to explore how hashing worked). Then he had a year of programming in Scheme from school. Now he is programming in Python (which he likes a lot). Packages like NumPy and PyGui make a big difference in the sophistication of what he's willing to try. He is certainly a much better programmer than I was at 14, and knows about as much about programming as I did going into grad school in CS.

    He's beginning to understand object orientation in Python, thanks mainly to the PyGui API. He is beginning to be irritated at Python's inability to catch typos in variable names, and in another year or two will be ready for an object-oriented language that requires everything to be declared.

    I think it would be a real service to the community to push HARD on AP to move from Java to Python. Doing so will require that a large fraction of prestigious colleges make their first college programming course be in Python. That may take 5–10 years, given the inherent conservatism of first-year teachers, many of whom are on year-to-year contracts and don’t feel they have the clout to do any experimenting.

    • 31. Owen Astrachan  |  March 1, 2011 at 11:53 am

      I pledge to move our first course to Python; we're a conveniently-ranked top-10 undergraduate institution. We made the change this year and it's loads of fun.

      Speaking of Python, check this out

      http://people.csail.mit.edu/pgbovine/python/

      Owen

  • 32. Alan Kay  |  March 1, 2011 at 12:06 pm

    Response to #9 Jim Huggins

    Having done quite a bit of corporate consulting over the years (and for 3 very large companies presently), let me make a generalization that could be too sweeping, but it does capture some of the actual situation:

    A lot of the bad code out there was a poor solution to an even more poorly posed and chosen problem.

    I think you can see that such a milieu is not a good environment for learning how to do computing. It tends to do the opposite by showing "poor" as "normal" (which indeed is just what Java does).

  • 33. Alan Kay  |  March 1, 2011 at 4:20 pm

    For:
    17. James Taylor | March 1, 2011 at 2:54 pm

    I understand what you are saying, but turning JS into a really extensible language in many directions is actually showing the students how to think about real/serious problems in a better way. I think far too much programming is done in languages as they are (when it is so often the case that there is a very poor fit between the problem and how the language looks at computing).

    Cheers,

    Alan

  • 34. Bonnie MacKellar  |  March 2, 2011 at 2:29 pm

    We’re switching TO Java, finally. Why? Because doing procedural first, then OO, just wasn’t working in our curriculum. We were sucking up too many courses teaching students to program one way, then reteaching them all over again. The upshot was that we were graduating students who were not competent in any language (not all of them, of course, but a significant share), into a marketplace where employers overwhelmingly want OO skills.

    I understand the arguments for having different paradigms and languages in a CS major, but you have to be really careful that the program doesn’t just turn into a hodgepodge of language A, language B, language C courses. At schools with lvery academically strong students, it is possible to throw a text at them and say, go learn this language so we can start doing (concurrency/AI/web programming/design patterns), but it isn’t realistic with weaker students. With our students, we had to go back and reteach programming all over again with Java, usually with students who were now juniors. This was, quite simply, wasting time.

    The reality for our students, and I think the majority of students who don’t go on to grad school, is that they are going to work in a world where they won’t be doing a lot of exciting algorithm development. Instead, they are going to be working in environments where the ability to design interfaces and APIs, to understand versioning, to write requirements, to thoroughly test, and to design for maintenance are paramount. We really need to be focusing on how to get students to think about the macro-level, rather than the micro-level, of software development. I’ve never personally seen Python used in a large scale project in industry, so I can’t comment on how well it scales to the macro level, but Java scales quite nicely (as does C#), so why not use them?

    In sum, I think that as long as industry wants graduates to know Java and/or C#, schools are going to have to teach them. And why waste valuable course time reteaching programming in one of these languages when there are so many more important things that need to be taught? If your program only sends students to grad school, you may be free to say “They’ll learn Java later on”, but that is not a luxury many of us have.

    • 35. gasstationwithoutpumps  |  March 2, 2011 at 11:39 pm

      If you are having to reteach them to program just because you are switching to java, then they probably won’t learn programming if you start them in java. If you are spending a lot of time teaching micro-level development over and over, then throwing out the first few courses and doing everything in java isn’t going to help. If they don’t get macro-level ideas in simpler programming languages, they aren’t going to get them while struggling with java syntax.

      I think that Java is a reasonable programming language for big projects, and that Python is a better programming language for rapid prototyping. Both skills are useful in industry, as well as in academia.

      There are a few big Python projects, but they tend to be more in the freeware realm.

      • 36. Bonnie MacKellar  |  March 3, 2011 at 8:53 am

        I agree with what you are saying – it is clear they are not getting the programming concepts in the CS1 and 2 courses. I just happen to think that switching languages midstream makes the problem worse.
        I think the real problem is that students are mystified by any programming language syntax. It doesn't matter whether it is Pascal, Python, C++, or Java – it is all foreign. So they cope with it by trying to memorize their way through the course. They do this in math courses, too, as any of my math colleagues would tell me. I think it is a really common problem and the only answer is giving them enough time to get comfortable with this new language and way of thinking. When we switch languages on them midstream, many students react by going right back into memorize mode, right at a time when they should be starting to relax and to look at the whole picture.
        We in computer science have been fighting over CS1 languages for a very long time, always thinking that the "new" language will somehow fix the memorization problem. I've been around long enough to realize that it never will. I have taught in Pascal, in C++, in Java and in Greenfoot, and have seen no difference. I have taught at two schools that switched languages midstream, and both fell into the HodgePodge O' Languages curriculum trap. I now think the best approach is to pick a useful language and simply stick with it long enough so that they start seeing the patterns and stop memorizing.

        • 37. Alan Kay  |  March 3, 2011 at 11:10 am

          I think the comment about similar problems in math obtains (and how wonderful it would be if syntax were their only problem in math!). There’s a lot of remediation — actually mediation without the re in college these days.

          However, what is a bit crazy here is to get stuck on syntax. Etoys and then Scratch introduced drag and drop tiles (via Mike Travers’ thesis Agar) to at least eliminate this and get on to strategies and systems designs.

          One of the nice things about tiles (especially the way Scratch does them) is that they can be individually tested by double-clicking on them — this removes a lot of the mystery of the semantics – and coupled with the English-like gist of the commands allows most people to move along and start learning how to program.

          So my question would be — why are the development environments so bad for most of the “adult” languages? There is no reason for it — and it is also quite worrisome to me that CS departments haven’t been inclined to make better IDEs for their first year programming students ….

          If the CS departments think this is really hard — especially after a lot of design and implementation and testing has already been done in the world of children — then they should fold up their tents and quit.

          If they do understand how to do it, then shouldn’t they be willing to put in the effort instead of complaining?

          Cheers,

          Alan

          • 38. Alfred Thompson  |  March 3, 2011 at 11:26 am

            IDEs do not seem to be “interesting” to most CS departments. That is to say they don’t see it as something worth researching and publishing on.
            In the area of IDEs for beginners or teaching there are the DrScheme project, BlueJ, and Greenfoot, which come from CS departments. These are widely used in high schools but not as much in universities, as far as I am aware. A simplified IDE and language not from a university is Small Basic, which started out as a part-time project by a Microsoft engineer.
            Generally, though, industry has an interest in more powerful (whatever that means to different people) and not simpler IDEs. Features that may help beginners are a sort of happy accident, if you will.

          • 39. Keith Decker  |  March 3, 2011 at 11:55 am

            I agree the IDE plays a big role for intro students (different than the role for professionals). I think a lot of people who look at the Program by Design materials at the middle school (bootstrapworld.org) or HS/College level (the new 2nd edition of HTDP) focus only on the language (or what they think is the language 🙂 and not on the other two very important legs of the stool: the IDE that adapts to the level of each teaching language as it introduces a few new ideas (and really, only a handful in all); and the design recipe for producing objectively gradable intermediate steps between the instructor’s problem statement and the final working, tested program.

            That said, the maths comment hits home as we’ve started testing students on simple math (not programming) word problems on day 1 of class (“I pay Sally $50 a month retainer and then $8.50 per hour, if she works x hours a month, how much should I pay her?”). The scores correlate highly with the final grade in CS1…

        • 40. gasstationwithoutpumps  |  March 3, 2011 at 11:14 am

          I’ve seen no evidence that the students who start out trying to memorize things ever get past it, even if you only teach them in one language. Instead you end up with a student who is an incompetent programmer in one language, instead of incompetent in several. The one incompetent in several can usually be detected with simple tests—the one incompetent but highly trained in one language has usually memorized enough that simple tests don’t detect the incompetence: you have to have them actually design a program before it becomes evident.

          So I think you are fooling yourself if you think that using only one language is a teaching strategy that produces competent programmers.

          • 41. Bonnie MacKellar  |  March 3, 2011 at 11:26 am

            Here I have to disagree. I have worked with many students who started off as memorizers, but eventually got it. I think it is a combination of bad habits from high school (we get lots of NY public school kids, so you can imagine), and total abject fear. Once they relax, usually after a couple of semesters, they start seeing the commonalities.

            And, I am going to admit it, with total embarrassment – I was also a memorizer in the beginning. It took experience and practice, and a realization that the computer was not going to blow up in my face, before I could start seeing the patterns and really understand what was happening. I was taught in Pascal, too, which was supposedly one of the great teaching languages.

            Perhaps it makes a difference if your students already studied computer science in high school? I had never touched a computer when I went to college (which was typical in those days) and most of my current students have never dealt with programming at any level when they come into our program.

    • 42. Mark Miller  |  March 4, 2011 at 8:58 pm

      Having worked on custom software in the IT industry for several years, the two main skills I wish I had gotten practice in before going into it would’ve been how to estimate time for a project (which would’ve included time for meetings, repeated test cycles, not to mention buffering for the unexpected, which always crept up, and other things that have nothing to do with writing code), and how to incorporate unexpected events (error conditions) into software design. Through my own experience, and my schooling, we were always given tasks: “Write a program that does X,” not, “Write a program that does X, and make it fault-tolerant so that if anything unexpected happens, or any data stream is unavailable, or interrupted, it will respond appropriately, and tell you what went wrong in the clearest way possible.” It takes some thinking about design just to anticipate what could go wrong, and how to respond to it. This is all software engineering, though, and it seems like a logical thing to do would be to separate SE from CS. Both are needed. I am uncomfortable with the idea that one should encroach on or replace the other.

      Another "skill" was "how to code fast." I've heard of CS classes where they actually teach this to some extent. The university environment I was in discouraged this. There were deadlines, but what my professors wanted was nice, clean code and well-thought-out algorithms. I liked that environment. The thing was, out in industry, they often *don't* like that. That's one of the problems with it, IMO. Please understand, I *really* don't like saying this, but the unfortunate truth is if you want to train students for jobs, getting students not to care so much about how the code is written would actually be considered an asset… That's how screwed up it is out there! What counts on the bottom line, typically, is getting the computer to do X within Y time period. Some places have some minimum standards for what nice code is like, but in my experience, most of the time the bottom line is, "Just get it to work." In my experience, if I wanted to code well on a project, I literally had to sneak it in! Maybe I could tell the senior engineer I was working with about it, but my bosses would've looked at me with puzzlement or disdain if I had said I had taken time to "do it well," because they had no concept of what that meant, except that I was taking more time to get something done, which meant more money spent on the project (i.e., that's bad).

      I understand the argument about “training for jobs.” I entered undergrad CS with that intention years ago. However, some of what I was shown, which I was told would have very little to do with the work world, was very interesting to me. I actually wanted more of it, and I thought maybe I’d get the chance to work with it in a job, but I was only shown “morsels,” and it was unclear to me if I had gone on to grad school whether I would’ve had the opportunity to delve more into those areas. I just didn’t know any better, and no one informed me. Even so, grad school might’ve had to wait. The point being that people can change their mind–in fact I would not consider it a bad sign if they did–about what they think is important about the subject, after they have been exposed to its theoretical underpinnings, and given a wider exposure to what’s possible in the field. If you don’t show it to them, it’s unlikely they’ll obtain it later. I probably wouldn’t have.

      • 43. Erik Engbrecht  |  March 5, 2011 at 9:39 am

        I think the mess that is custom business software has very little to do with CS. It may have a lot to do with software engineering, depending on how one defines software engineering.

        The problem is that the expected value (measured in usefulness) of any given chunk of code is very, very low. The cause of that low value is dominated by the fact that "the requirements are probably wrong," which is an understatement and drastic oversimplification.

        Basically the idea behind any piece of business software is that one can insert it into a business, and the business will respond in a positive way. The business is an extremely complex system, and understanding how any change will affect it over the short and long term is very hard.

        Most software engineers are not incentivized to consider the larger system into which their software will be placed. They are incentivized to meet requirements as quickly as possible. Quickly met requirements give the customer the opportunity to try the software, decide that the requirements are wrong, and issue change requests that are hopefully accompanied by more money.

        In my experience this extends well beyond business software, and even beyond software engineering. I’d say it tends to happen whenever the customer is directly paying for the engineering effort associated with building the system.

        You can’t expect people to build good systems when they are incentivized to ignore systemic issues. You can’t expect management to appreciate high quality in a component when it knows that the system into which the component is going is most likely highly flawed.

        • 44. Mark Miller  |  March 6, 2011 at 8:24 pm

          I think the mess that is custom business software has very little to do with CS. It may have a lot to do with software engineering, depending on how one defines software engineering.

          That’s one of the points I was making. There was a heavy emphasis on SE issues from my experience in the industry. Some CS knowledge was desirable when I first started out, because most commercial software was being written in C. This meant that it was valuable to understand a bit about how the compiler worked, how it was allocating memory, and accessing it. It was also valuable to understand efficiency of algorithms, and how to construct data structures. The same applied to when the industry switched to C++. Some of these skills are still desirable in industry today, but in a different sense. Now, it’s desirable to understand how garbage-collected memory works, and it’s still desirable to understand the importance of efficient algorithms, though not really in constructing them, but in using them.

          Having said that, I really want to emphasize that CS is still important, even though it’s not as desirable in industry, because as we see in modern, developed engineering methods, science is crucial to engineering success and advancement of the field’s sophistication. What’s lacking is a real science in CS, though.

          I agree with you that the issue which computer scientists butt up against in industry is the fact that the people who are paying for the work don’t really understand the power that computing can bring, and so their goals are weak. From my experience, it’s the business customers which set the unrealistic expectation that they will respond positively once the system is brought in. It sounds odd, but I’ve found this saying to be true: “The customer doesn’t really understand what they want.” I was taught in SE that it was important for the team lead to sit down with the customer and go into the issues in depth to really get at what they want, not just what they said they wanted right off the top of their head. It was rare if I saw that kind of introspection, but it happened sometimes. Those were my best projects.

          The problems are compounded by the fact that even when engineers are trying to meet their goals, these same people often don’t understand the engineering issues. To be fair to business folks, engineers often don’t understand customer relations issues, such as not taking problem reports too literally, hand holding through issues, and just generally what their needs are, in terms of usability, scheduling, and budgetary issues. That’s been an insoluble problem in the industry since before I entered college. I’ve since had the thought that for those desiring a software development career in the private sector, that they should take some business management courses, and that people majoring in business should take some CS/SE courses, so that there’s some understanding of where each is coming from, because there are legitimate issues in each area. I haven’t seen that happen too much, though, and from what I’m hearing, businesses are opting to sequester IT into outside specialty firms, which handle all of the IT issues, and that’s how they’re choosing to deal with the issue.

          From my experience, it was pretty rare if a customer was willing to pay more for changes during a project, even though they were the ones bringing them up. Most projects I was on were fixed-bid. We’d draw up requirements and do an initial design document (which of course went out of date very quickly), and we’d base our estimate on that (I’m thinking, “Gee, I can see a problem with this already…”). We’d get one lump sum, and that was it. Sure enough, the customer would come back with changes, but they expected us to get it done in the same amount of time for the same amount of money. If we needed more time we had to push back and say it. If we went over budget, we ate the difference on the cost. Once I learned about this process I thought it was pretty dumb, because it became obvious to me that we could not accurately estimate how much time and effort would be needed before we even started coding. It was a total crapshoot. Projects would go on for months at a time. It’s not too hard to estimate a few week’s worth of work, but a few months is a different story. As I said earlier, the estimate based on the design document was a farce. I mean, why not just drop the charade and say, “This is a wild guess”? I think I know the answer, because then the customer would’ve dropped us. I don’t think it’s much of a stretch to say that one reason why it is such a mess is that each “side” operates under a delusion of what the other “side” is really doing, and the only way for them to operate together in something resembling harmony is for each “side” to maintain the illusion, though it’s been rare when I’ve seen that done well. In any case, it’s not a healthy situation, which is why I’d like to see the field improve its outlook and practices.

          I had the experience of a more reasonable process early on. I think the company I worked for would go through the same process of giving an up-front estimate, and they’d get the lump sum, but our VP of Engineering would give the overall estimate, not us. He wouldn’t ask us for our estimates until we had been coding for a while on it. By that point we were well acquainted with the issues, and could give estimates reliably. It was a “best we could do” practice. It’s not what a developed engineering discipline would do, but I think anything else would’ve been deceiving ourselves, given what we knew how to do.

          I heard about some estimating techniques while in college (such as Monte Carlo), but we didn’t study them or try them out. I’ve since heard that that happens in graduate CS, and that these methods are based on prior data. So engineers would have to be disciplined in keeping metrics on work components of each project, and documenting their characteristics, in order for these methods to work. From what I understand, these methods tend to keep software teams out of trouble. They’re not that popular with customers, because they lead to better estimates which can seem high, especially given that there are teams which will promise a lower bid, but they’re not as reliable. Nevertheless they’ll often go with the lower bid, and it’s pretty certain they’ll feel the consequences as well.

  • 45. Alfred Thompson  |  March 2, 2011 at 4:52 pm

    I have a bias against C-family languages that goes back some 30+ years. And a particular personal and professional bias against Java – just for full disclosure.
    I have found that HS students, at least, pick up multiple languages very quickly and easily as long as the focus of the first course is concepts more than specific syntax. Syntax, though, often gets in the way of the concepts, as the examples in Mark's post show. That is why I prefer more forgiving languages which often have dynamic or simplified types, more English-like structures and fewer keywords. Most of all, a first course should allow students to have some fun – to create projects that interest them. These days that means they need to do more than white text on a black background. Java is not the easiest way to do that IMHO.
    I hear great things about most languages/tools. Greenfoot (Java but maybe more than Java), Scratch, Alice, Small Basic, Python and on and on. But I think that the teacher matters more than the language. And that is why it is better to have more tools available – the right tool for the specific teacher.

  • 46. gasstationwithoutpumps  |  March 3, 2011 at 8:33 pm

    One good thing about the Scratch IDE, not shared by things like Dr. Scheme, is that it doesn’t have the “slow reveal” of gradually less-crippled layers. It is all there all the time, but the extra stuff doesn’t get in the way. If the Dr. Scheme system is held up as an example of a good IDE for teaching, maybe it is just as well that CS departments aren’t doing IDEs for beginning programmers.

    I know that my son was very irritated by “slow reveal” teaching (in Dr. Scheme and in the early Lego languages), and I’ve always hated it as lecture technique. There is something wrong with a language if you have to deliberately hide big chunks of it to force students to look at a crippled subset.

    • 47. Keith Decker  |  March 3, 2011 at 10:19 pm

      In a one-on-one situation, it may be best to let the student explore and help them along. It’s trivial to ignore or turn off the teaching languages and go with a professional language (be it R5RS Scheme or Racket).

      However, in a large lecture with half non-programmers and many non-majors, it is a huge win that the IDE does not report errors using terminology/jargon not yet introduced, and does not silently accept legal programs in a more complex language that are impossible to explain using the concepts covered in class so far (Mark’s original example of empty bodies). The idea is to teach the concepts, not syntax-of-the-week. The test coverage display (showing exactly what code is covered by the existing unit tests) is another stellar idea enabled by reducing the language complexity.

      Plus, in the HTDP case, the point is to teach ideas/abstractions shared by many languages and to transition to an OO language, and so none of the teaching languages are the “full” language anyway. Students are writing 2-player networked graphical games and even smartphone apps; they don’t seem to feel “crippled” for most of the first semester of a 4-year program!!

      Of course good students will run into walls, but I think of them more as teaching moments, when they want to do something 3-D, or wonder why things are so slow with 50 things moving all over independently. But we are then talking concepts and algorithms and data structures and APIs, not syntax issues, and they can move to Java and C.

      Reply
    • 48. Mark Guzdial  |  March 4, 2011 at 10:48 am

      I’m with Keith. That “slow reveal” is called “faded scaffolding” in the ed psych literature. (“Scaffolding” as a term was first used to describe instruction in a paper by Wood, Bruner, and Ross in 1976.) My dissertation was on how to implement adaptable (including fading) scaffolding in software. That it didn’t work for your son isn’t surprising. His “zone of proximal development” (the gap between what a learner can do on their own, and what they can do with help) was probably already beyond the scaffolding in Dr. Scheme. It’s hard to support high-ability learners. The use of fading scaffolding in Dr. Scheme is one of its coolest features, in my opinion — it’s one of the few times that learning theory has informed the design of software in CS education.

      Reply
      • 49. gasstationwithoutpumps  |  March 4, 2011 at 11:13 am

        I’m aware that my experience and my son’s experience are not typical, but I am a little dubious about the “fading scaffolding” model in Dr. Scheme. It seems to slow the fast students more than it supports the slow ones.

        I prefer tools that work well for both beginners and power users, and that don’t need to be crippled for beginners. Features that only more advanced users need can be slightly harder to access (in menus rather than always visible, for example).

        Generally, I find that tools that need to be crippled for beginners are badly designed tools, and that fixing the design does a lot more for making them teachable than progressive crippling.

        Reply
        • 50. Stephen Bloch  |  March 6, 2011 at 8:20 pm

          It seems to slow the fast students more than it supports the slow ones.

          Was your son really delayed that much by having to do three mouse clicks to promote himself to the next language level?

          Features that only more advanced users need can be slightly harder to access (in menus rather than always visible, for example).

          That approach works well for interactive commands of a GUI program; how would one apply that idea to a language composed of typed text? (I’m asking sincerely: I’d really like to know if anybody has good ideas.)

          I first encountered DrScheme in 1998, and was immediately impressed by two things: (1) the ability to call any function on any arguments (expressed in the usual language syntax) at any time and see the result, without writing a “main program” with a bunch of I/O, and (2) the language levels. I’ve taught with it every year since 1999, and have continued to consider both features powerful aids to beginning programmers. BlueJ and DrJava have the former, but I don’t know of any other widely-available IDE with the latter.
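
          For contrast, here is a small made-up Java example of the ceremony a beginner needs just to try one function on some arguments and see the result; with an interactions pane, the scaffolding below is unnecessary.

          // Made-up example: trying out one small function in plain Java.
          public class TryItOut {
              // The function we actually care about.
              static double average(double a, double b) {
                  return (a + b) / 2.0;
              }

              // Scaffolding that exists only so we can see a result.
              public static void main(String[] args) {
                  System.out.println(average(3.0, 7.0));   // prints 5.0
              }
          }

          // In an interactions pane, the same experiment is just:  average(3.0, 7.0)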

          Of course, the language levels were designed to go with a particular pedagogy, and if you want to follow a different order of topics — I/O and assignment before higher-order and anonymous functions, for example — they don’t serve you as well. It’s possible to write your own language level, but it’s much more hassle than using the predefined ones. The developers chose ease-of-use (for beginning students) over flexibility (for their instructors).

          tools that need to be crippled for beginners are badly designed tools

          On the other hand, a tool that’s designed for teaching programming doesn’t need to be the same as a tool that’s designed for professional programming. Hardly anybody ever used Pascal for professional programming — it was so rigid about data types as to prevent code re-use — but a generation of us learned decent programming habits through it. Wouldn’t it have been even better if there had been a smooth upgrade path from Pascal to something usable in industry, rather than having to scrap much of your Pascal language knowledge to learn PL/I or C or whatever? That’s the idea of DrScheme’s language levels.

          Reply
  • 51. Jim Huggins  |  March 4, 2011 at 8:24 pm

    Quoting Dijkstra: “The moral of this sad part of the story is that as long as computing science is not allowed to save the computer industry, we had better see to it that the computer industry does not kill computing science.”

    So, if industry won’t listen to the wisdom of academics, then academics shouldn’t listen to the wisdom of industry either? We should just pick up our virtual footballs and go home?

    Look, I’m not saying that we should turn CS degrees into vocational training certificates. Believe me, some of the suggestions I’ve heard from employers are worthy of ridicule. But does that mean we throw the baby out with the bathwater? Is there no room in CS education, even at the early stages, for giving students some practical transferable skills — skills that might allow them to earn a little money on the side or during the summer? (Money may not be the most noble of motivators … but it ain’t the worst one, either …)

    Reply
    • 52. Mark Miller  |  March 5, 2011 at 1:40 am

      Maybe this would help put it in a larger context. Before the part I quoted, Dijkstra said:

      “• The ongoing process of becoming more and more an amathematical society is more an American specialty than anything else. (It is also a tragic accident of history.)

      • The idea of a formal design discipline is often rejected on account of vague cultural/philosophical condemnations such as ‘stifling creativity’; this is more pronounced in the Anglo-Saxon world where a romantic vision of ‘the humanities’ in fact idealizes technical incompetence. Another aspect of that same trait is the cult of iterative design.”

      What he’s getting at is the issues that industry wants to deal with are, as Alan Kay said earlier, not well thought out in terms of what computers are good at dealing with, or in terms of how they could be put to best use. Bringing that influence into the discipline carries the risk that it will be overwhelming, and any possibility for CS to lead, as it once did, will be lost.

      After the part I quoted earlier, he said,

      “But let me end on a more joyful note. One remark is that we should not let ourselves be discouraged by the huge amount of hacking that is going on as if computing science has been completely silent on how to realize a clean design. Many people have learned what precautions to take, what ugliness to avoid and how to disentangle a design, and all sound structure systems ‘out there’ sometimes display is owed to computing science. The spread of computing science’s insights has grown most impressively, but we sometimes fail to see this because the number of people in computing has grown even faster (It has been said that Physics alone has produced computing ignoramuses faster than we could educate them!)

      The other remark is that we have still a lot to learn before we can teach it. For instance, we know that for the sake of reliability and intellectual control we have to keep the design simple and disentangled, and in individual cases we have been remarkably successful, but we do not know how to reach simplicity in a systematic manner.

      Another class of unsolved problems has to do with increased system vulnerability. Systems get bigger and faster and as a result there comes much more room for something to go wrong. One would like to contain mishaps, which is not a trivial task since by their very structure these systems can propagate any bit of confusion with the speed of light. In the early days, John von Neumann has looked at the problem of constructing a reliable computer out of unreliable components but the study stopped as von Neumann died and transistors replaced the unreliable valves. We are now faced with this problem on a global scale and it is not a management problem but a scientific challenge.” [my emphasis]

      You can read the full text here.

      I didn’t read what I quoted earlier to mean that CS students shouldn’t go work in industry. Personally, taking in what has been produced by the whole industry, I have seen very little that would be worthy of entry into the academy in terms of solutions to problems. I do think it could be constructive for academics to listen to industry, and to IT departments in government, to get ideas about what problems society is trying to solve. But, as those who work in technical support know, academics shouldn’t take the concerns they hear at face value; rather, they should take them as indicators of issues that are perhaps worth paying attention to, bring in other influences that will hopefully broaden the picture of what’s really happening, and then try to find solutions that address the larger issue(s). This is what I see in what I’ve cited here.

      Reply
    • 53. Bonnie MacKellar  |  March 5, 2011 at 9:30 am

      I worked in industry for years, and I really disagree that people don’t care about good code in industry. The difference is in perspective. In industry, we cared about much larger-picture aspects of the code. We cared about having good interfaces, since we had several teams all collaborating. We cared about a product that could evolve gracefully over a 20-year lifespan. We cared about things that people in academia never even THINK about, such as internationalization, the customer’s experience when building and installing a complex product, and compatibility across many platforms, database management systems, and other environmental factors. And yes, we worried about performance and fault tolerance too.

      In academia, people are too focused on the micro-level. In the end, it doesn’t matter whether you correctly choose a while-loop or a for-loop. It really doesn’t matter whether we use Python or Java or C# or Fortran first – except that I hate seeing schools waste time teaching language-specific courses late in the program when they should be teaching test methods, software architectures, concurrency, and other badly needed, advanced topics.

      One of the things I noticed in my years in industry was the shift away from hands-on coding towards configuring large-scale systems. In the beginning, we were hand-coding everything, but by the time I left, we were using many more already-coded components, and the trick became getting everything to work together well. The systems have become correspondingly larger and more complex. Everyone I know in software development has noticed this trend. That is the world that our students are going into.

      Reply
      • 54. Mark Miller  |  March 6, 2011 at 8:47 pm

        What you’re describing, though, is more SE and CE (computer engineering) than CS. The fact that you were planning ahead for future compatibility and expansion is good. It was just rare in my experience. Like I was saying, I had a personal ethic about producing good code that promoted future expansion and readability. I wrote documentation so that whoever came after me could understand what I had done. I did this when no one was looking, though, because my superiors often didn’t see these as worthwhile activities.

        The requirements for good design sound different now. I accept that. I can’t recall the details for you now, but back when I was paying attention to what others were doing in IT software and management just a few years ago, there were a lot of horror stories. So I would say be thankful for your good experiences. I am pretty confident that most people working in the IT industry are not having as good an experience as what you describe. I understand this sounds dour, and what I would say to students is: don’t settle. Try to find a good IT firm that values good engineering. They are out there, but you probably have to go find them (or, if you’re spectacular, they’ll probably find you).

        Reply
      • 55. Mark Miller  |  March 7, 2011 at 1:37 am

        (sigh) This is one of those rare situations where I wish I could delete my comments, because I’ve been rethinking my stance. Bonnie, after thinking about the situation of academic CS vis-à-vis the IT industry, I now think you have a good point about the need for greater SE influence at the undergrad level. I was overreacting, because I thought that giving ground to SE in undergrad CS programs would be to the detriment of CS overall. But then I thought about how there’s CS at the post-graduate level, and perhaps that’s where it would flourish best. At least that’s what I would hope. Perhaps the effect of a greater emphasis on SE at the undergrad level would be detrimental to post-grad CS. It’s hard to say.

        Even in my prior comments I did not intend to demean SE. I was just saying that it and CS should be separate disciplines. Perhaps undergrad CS should disappear to be replaced by SE, leaving CS as a post-grad discipline.

        Reply
        • 56. Alfred Thompson  |  March 7, 2011 at 2:21 am

          I see SE as almost as important for CS as multiplication is for mathematics. Of course you can teach mathematics without teaching multiplication (repeated addition will work), but what you have left is not real mathematics. This comes to mind because I once worked on computer hardware without instructions for subtraction, division, or multiplication, and we managed. Of course SE is not a subset of CS the way multiplication is of mathematics, and I am exaggerating for emphasis, but there is enough overlap that not teaching any SE leaves a serious hole in one’s ability to “do” good CS. I would argue that one reason for the low opinion of CS in some quarters of industry is that too many graduates leave the academy with lots of CS theory but no real idea of how to write good code. It is as if they have a great vocabulary but no ability to write a good essay. Most students do not get to work on projects of any size until graduate school. Even then, what is large in the academy is still small to mid-sized by industry standards, so there is room for less SE rigor. (In practice I suspect more projects would turn out better if that rigor were there, though.) Without even theory to fall back on, this can be a problem once a person moves on to larger projects.

          This is not to suggest that industry is all that great at either SE or managing large projects – it is often not, and there is not enough research being done there. What we do see are diagramming and planning tools that either work for small projects but not for large ones, or work in theory but not in practice. I’ve lost track of the published tools that were brought into companies where I worked with great fanfare, only to find that they didn’t work with real engineers and real project constraints. While one could argue that the people should have changed to fit the tools, the reality is that doesn’t always work that well.

          Reply
          • 57. Bonnie MacKellar  |  March 7, 2011 at 12:52 pm

            I am really sorry to see the low opinion of software engineering in CS departments these days. It is much worse now than when I was last in academia back in the ’90s. The reality is, most of our students are going to BE software engineers (or software developers, or “principal engineers” – the title depends on the company). If they don’t do development, they are likely to become test engineers (QA) or product support engineers or something else equally oriented to the broader field of software engineering. Even students who go to grad school are likely to end up doing software engineering work. The percentage of students that is going to end up doing CS research is pretty small. What is wrong with educating students for the careers that they want to pursue? Software engineering is actually rather interesting. There is research being done in the field, and the concepts are interesting to teach, AND useful to students. My whole argument is that rather than waste so much time worrying about the “correct” language in which to teach micro-level programming, we need to be worrying about getting the students up to the important concepts.
            Is it possible that the current problems in industry with poor methodology and poor design exist because we have failed to teach them?

            Reply
            • 58. Mark Guzdial  |  March 7, 2011 at 1:38 pm

              The reality is, most of our students are going to BE software engineers (or software developers

              Here at Georgia Tech, the majority of our credit hours after the first year (i.e., 2000-level and above courses) are taken by NON-Computing major students. According to the report by Scaffidi, Shaw, and Myers for the DoD, by 2012, we’ll have 3 million professional software developers in the US, and 13 million people who program in their jobs but aren’t professional software developers. The demand for CS education (not just at Georgia Tech) is MUCH greater for those students who aren’t going to be software engineers than for those who are. As Brian Dorn’s dissertation showed, software engineering is part of what those 13 million professionals need, but not the majority of it.

              The dichotomy isn’t “software engineering” vs. “research or grad school.” Most people who program do projects of 100 lines or less, all by themselves.

              Reply
    • 59. Stephen Bloch  |  March 6, 2011 at 8:35 pm

      Of course CS education needs to teach practical transferable skills, early and often. Indeed, if your students are likely to want summer or co-op jobs, they need something they can use on the job by the end of the first year; that’s why the Program By Design curriculum transitions from Scheme to Java after one semester. Other approaches transition from Python to Java after a semester, or from Processing to Java after a semester, or from BYOB to Java after a semester, or from Alice to Java after a month.

      But I hope we’re not defining “practical transferable skills” purely in terms of what languages you “know”. The experience of hundreds of teachers who use the Program By Design curriculum (formerly TeachScheme!) is that by starting in a language selected for teaching rather than for use in industry, they can actually teach MORE AND BETTER practical transferable skills. Students after a semester in Scheme and a semester in Java, properly stitched together, are demonstrably better Java programmers than those who spent the whole year in Java. (They don’t know as many features of the Java language, but they specify, design, and implement programs better, as measured by AP scores, co-op job placement, and instructors’ subjective opinions.) And I wouldn’t be surprised if the same is true for the Python-to-Java, Processing-to-Java, BYOB-to-Java, and Alice-to-Java curricula.

      Reply
      • 60. Bonnie MacKellar  |  March 7, 2011 at 12:57 pm

        We definitely want to avoid the Hodge-Podge O’Languages trap. That isn’t what I mean by industry-oriented skills at all. Perhaps your approach, of moving students rapidly from the introductory language to the industry-oriented language, is a good one. Our experience, spending 3 semesters on a procedural language and then desperately trying to teach OO and Java in an upper division course, was bad. Students never get OO, and it is a waste of an upper division course slot.

        Reply
  • 61. Alfred Thompson  |  March 4, 2011 at 8:51 pm

    I like to think I have a foot planted in both industry and academia. In general the two sides don’t understand each other very well. There are others with feet in both worlds, but it is hard for them (even a couple of Turing Award winners I could think of) to have influence in both. Industry is frustrated with academics who appear to have their heads too far in the clouds, while academics are frustrated with industry people who appear to refuse to learn from them.
    One area that seems to be doing well (in my opinion) is programming language development, where former academics (Erik Meijer, who has brought key innovations to C# and VB, comes to mind) have been accepted into the development process.
    In general, though, I think there needs to be a lot more mixing of the two areas. I think that some research groups (Microsoft Research being the one I know most about) DO interact with academics pretty well. But even for them, as company insiders, it can be hard to bring innovation in from academia. I don’t see as much of academia being willing to learn from industry, though. The reaction I hear a lot is “we are not a vocational school.” This misses the point that there are companies doing innovative research that the academy could benefit from.

    As usual, speaking for myself and not my employer or in my role at that company.

    Reply
  • […] not arguing that Java is a great language, and I’ll continue arguing that Java is a poor beginner’s language.  But our students do need to know Java, because Java exemplifies the current ideas and practice […]

    Reply
  • […] suggest to me, even more strongly, that we are doing our students harm by only showing them one language or style of language. “Creative minds” know more than one way to think about a […]

    Reply
