Moti asks: Objects Never? Well, Hardly Ever!

September 11, 2010 at 10:44 am 33 comments

Moti Ben-Ari told me at ICER that this piece was coming out in the September CACM, but I promised not to say anything here until it was out.  I just saw that it’s been posted on the CACM website.  I recommend reading the article.  It’s a radically different perspective than objects-first or objects-late.  He suggests “objects-never” or maybe “objects-graduate school.”

I claim that the use of OOP is not as prevalent as most people believe, that it is not as successful as its proponents claim, and, therefore, that its central place in the CS curriculum is not justified.

via Objects Never? Well, Hardly Ever! | September 2010 | Communications of the ACM.



33 Comments

  • 1. Katrin Becker  |  September 11, 2010 at 11:58 am

    THANK YOU! THANK YOU!!!

    I have been saying this for YEARS. Any chance that this article will become publicly readable any time soon? I’d love to send this to some of those people who ridiculed me for having an opinion like this one.

    • 2. Mark Guzdial  |  September 11, 2010 at 1:20 pm

      I didn’t realize that it wasn’t publicly readable! Sorry — I just followed the link from the CACM page, and didn’t realize that it was reading my cookie. No, I don’t know anything about its public availability.

  • 3. Katrin Becker  |  September 11, 2010 at 1:31 pm

    I’ve had to let my personal subscription to the digital library lapse (I’m not employed by a university anymore so can’t afford it), so I accessed it through someone else’s university library account.

    I completely agree with all the claims (based on 30 years’ teaching experience and many industry contacts). I’ve been saying that OOP is not an introductory topic for years.

    I first learned about OO in 4th year in a simulation class (using Simula). It took about 10 minutes to understand the concepts. I think of how much time we waste trying to get novices to understand this, when we could be doing something more useful, and… more interesting.

    As an aside, he missed one of the application areas to which OO is especially well suited: video games. But even there, OO is only useful for part of the solution – there are still plenty of ‘left-over’ bits that just don’t fit nicely into the OO paradigm.

  • 4. Erik Engbrecht  |  September 11, 2010 at 2:07 pm

    I think that object-oriented programming is fundamentally more complicated than plain procedural and functional programming. That being said, I think that when done right it is more powerful.

    I think that Java makes it almost impossible to do right.

  • 5. Michael  |  September 11, 2010 at 3:15 pm

    This is a great article by Mr. Ben-Ari, and I have to agree with what Katrin said above that these are thoughts I too have held for a long time. It’s not that I believe the object-oriented paradigm is fundamentally bad; however, I do think that OOP tends to obscure some of the basic reasoning skills that students need to acquire when they first begin to learn how to program. Should they see it eventually? Probably so.

    On the other hand, should a carpenter’s first woodworking lessons be with a CNC mill? I don’t really think so. And so, too, I agree that we shouldn’t start programmers out on a set of tools that were designed to solve a particular class of large-scale industrial problems.

    This is only compounded if it really does turn out that OOP isn’t all that widespread in actual practice, and if in fact it’s true that the ideas of OOP don’t really solve the problems they were designed to.

  • 6. Alfred Thompson  |  September 11, 2010 at 6:54 pm

    It took me a while to grok OOP. I’m old school and remember the move to structured programming in the early 70s. There was a lot of discussion back then about having to bend designs to make them fit the structured methodology at the time. The languages changed to reduce that problem but the “go to is bad” argument still remains – albeit somewhat underground. 🙂 But by and large the benefits were obvious enough that structured programming “won.”
    Some say that OOP is the next step but I’m not so sure. I think it is an additional step that can co-exist with other paradigms. Proponents want to replace other ways of designing software, and that is where the objects-first or objects-later debate comes in. There is a third, objects-in-parallel, argument that gets lost in the shuffle though. That is the school of thought I find myself in.
    The way I see it objects fit into more traditional programs and make some things easier. Making everything an object often adds unnecessary and even unhelpful complexity to what can be simple designs. One only has to look at “hello world” written as an OOP program, say in Java. Compare that with the same in old languages like BASIC or even dynamic languages like Python. I don’t see why objects can’t be used with great efficiency when appropriate in a more traditionally designed program.
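    (A sketch of the contrast, using Python for both styles; the class-wrapped version imitates the class-and-main ceremony that Java imposes even on trivial programs.)

    ```python
    # Plain "hello world": one line, no ceremony.
    print("Hello, world!")

    # The same program wrapped in the class-and-main ceremony that Java
    # requires even for trivial tasks (imitated here in Python).
    class HelloWorld:
        @staticmethod
        def main():
            message = "Hello, world!"
            print(message)
            return message

    HelloWorld.main()
    ```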

  • 7. Blog Post: Objects When? If Ever? | IT.beta  |  September 11, 2010 at 7:50 pm

    […] post started as a comment on Mark Guzdial’s blog (Moti asks: Objects Never? Well, Hardly Ever!) but I decided to elaborate some. I think this is an important discussion to have both in education […]

  • 8. Objects When? If Ever?  |  September 11, 2010 at 7:52 pm

    […] post started as a comment on Mark Guzdial’s blog (Moti asks: Objects Never? Well, Hardly Ever!) but I decided to elaborate some. I think this is an important discussion to have both in education […]

  • 9. Barry Brown  |  September 12, 2010 at 11:36 pm

    I wonder how much of the resistance to OOP as an introductory topic (ie, objects first, or objects early) is due to the sheer syntactic overhead of declaring classes and objects. Creating a new class in Java is not a trivial task and it ought to be.

    I propose that if we want beginners to think “natively” in objects, they ought to be able to get to the heart of the matter quickly, without a lot standing in their way.
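    (To make the point concrete: in a lighter-weight language the whole definition-plus-use fits in a few lines. A minimal sketch in Python rather than Java, since Java's overhead is what is being criticized:)

    ```python
    # A complete class, constructor and method included -- close to the
    # "heart of the matter" with almost nothing standing in the way.
    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def distance_to(self, other):
            return ((self.x - other.x) ** 2 + (self.y - other.y) ** 2) ** 0.5

    origin = Point(0, 0)
    corner = Point(3, 4)
    print(origin.distance_to(corner))  # → 5.0
    ```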

  • 10. Cay Horstmann  |  September 13, 2010 at 3:35 am

    Before throwing out the baby with the bath water, consider the legitimate desire of many educators to give their students assignments that are more interesting than, say, printing the first n prime numbers. Like manipulating maps, photos, robots, sounds, etc.

    The people who have written these libraries haven’t really picked up on the fact that objects are hardly ever used, and they usually give you an OO interface to the, erm, objects that students manipulate.

    I would find it hard to imagine that calling draw(monster) instead of monster.draw() gives any real “relief” to students.

    So, I’ll assume that using objects is ok. A better question is when students should DESIGN their own classes. In the first days of CS1? Never? Somewhere in between?

    When I teach the intro course, students learn pretty quickly how to implement a class that someone else designed. They gradually get better at producing their own designs over the next 3 semesters. Frankly, they don’t seem to have too much trouble with it.

    What do they have trouble with? Loops.

  • 11. Katrin Becker  |  September 13, 2010 at 10:02 am

    And that’s exactly the problem.

    They can pick up the high-level stuff and work with other people’s objects. Working with media like pictures, sounds, animations, etc. only requires the students to be able to call a lot of utilities – they don’t actually need to know (or learn) how to program.

    I’m all for giving them interesting things to do (I used classic arcade games for years, using ascii displays), but I don’t think we should be getting them to ‘write’ programs that are little more than a series of method calls when they don’t even have a handle on what a variable is.

  • 12. Mark Guzdial  |  September 13, 2010 at 10:41 am

    Katrin, why is use of objects not “need[ing] to know (or learn) how to program”? In MediaComp, we use objects in the way that Cay describes, and we use tons of loops. I agree with Cay that loops (and assignments and conditionals) are the core that students find challenging, and don’t really learn well. Can’t you do that with objects? I also agree with Cay and Barry that building classes is a whole new level of complication. But I’m trying to understand your argument that use (without creation of classes) of objects is not programming.

  • 13. Katrin Becker  |  September 13, 2010 at 11:22 am

    It can be done, but when I look at where students actually spend most of their time and effort while working on these assignments, it often centers around making their graphics look pretty (or cool), or finding the right sounds, or finding the right method to call. One of the brightest students I ever had referred to it as donkey-work. It’s the difference between being a tool user and being a tool maker.

    If we’re OK with teaching them to be tool-users and not tool-makers, then objects early (even objects-only) is fine. If, on the other hand, we ever want them to understand how the OS works, or to be able to (like Moti said) write code for embedded systems or simulations that are robust and that work, they need to know more.

    I guess it all depends on the level of understanding we are trying to facilitate. I’ve taught programming literacy to non-CS students using Alice. We had a lot of fun and they did learn some things about program flow, loops, conditionals, etc. They did not learn much about data or scope, or for that matter, how most of what they did actually works.

    We need far fewer tool-makers than we did 20 or 30 years ago to be sure, but we still need some.

    There’s a reason that so many racecar drivers started out as auto mechanics – you need to know a lot more than just how to drive in order to get the kind of performance that they need to get out of those vehicles.

  • 14. Moti Ben-Ari  |  September 14, 2010 at 4:13 am

    An author of a “Viewpoint” in the CACM retains its copyright, so I have posted the final draft of the article (before ACM’s formatting) on my site. It is at the bottom of the page: http://stwww.weizmann.ac.il/g-cs/benari/home/keynote.html.

  • 15. Moti Ben-Ari  |  September 14, 2010 at 4:27 am

    Hi Cay,

    First, I’d like to thank you for writing Core Java. Without it I would never have managed to learn how to use the Java API!

    I totally agree that “manipulating maps, photos, robots, sounds” is more interesting and more educational. It reminds me of my last “real” job, working in a company that developed systems for automatic inspection. There was one programmer who wrote the UI using OOP, but no one really cared what he did. The heart and soul of the company was the algorithm group which developed the algorithms for image processing, pattern recognition, etc., and a relatively large number of software engineers who tried to get the algorithms to work on the actual hardware. This was hardcore programming with lots of loops!

    Even students can do exercises in this field, like computing the histogram of an image, applying simple digital filters, etc. It has to be more interesting and more useful than the “getColor”, “setColor” that takes up so much time in OOP.
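    (A histogram exercise of this kind is mostly loops. A minimal sketch in Python, with a grayscale image represented as a plain list of rows; no OO machinery assumed:)

    ```python
    # Compute the histogram of an 8-bit grayscale image given as a list
    # of rows -- the kind of loop-heavy exercise described above.
    def histogram(image):
        counts = [0] * 256          # one bin per gray level
        for row in image:
            for pixel in row:
                counts[pixel] += 1
        return counts

    image = [[0, 0, 255],
             [128, 255, 255]]
    h = histogram(image)
    print(h[0], h[128], h[255])  # → 2 1 3
    ```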

    • 16. Alan Kay  |  September 15, 2010 at 10:12 am

      Here is the comment I added to the ACM post of the article. And I’ve added a few more things to think about below this.
      ———————————————————————–
      I think this article raises important issues.

      A good example of a large system I consider “object-oriented” is the Internet. It has billions of completely encapsulated objects (the computers themselves) and uses a pure messaging system of “requests not commands”, etc.

      By contrast, I have never considered that most systems which call themselves “object-oriented” are even close to my meaning when I originally coined the term.

      So part of the problem here is a kind of “colonization” of an idea — which got popular because it worked so well in the ARPA/PARC community — by many people who didn’t take the trouble to understand why it worked so well.

      And, in a design-oriented field such as ours, fads are all too easy to hatch. It takes considerable will to resist fads and stay focused on the real issues.

      Combine this with the desire to also include old forms (like data structures, types, and procedural programming) and you’ve got an enormous confusing mess of conflicting design paradigms.

      And, the 70s ideas that worked so well are not strong enough to deal with many of the problems of today. However, the core of what I now have to call “real oop” — namely encapsulated modules all the way down with pure messaging — still hangs in there strongly because it is nothing more than an abstract view of complex systems.

      The key to safety lies in the encapsulation. The key to scalability lies in how messaging is actually done (e.g. maybe it is better to only receive messages via “postings of needs”). The key to abstraction and compactness lies in a felicitous combination of design and mathematics.

      The key to resolving many of these issues lies in carrying out education in computing in a vastly different way than is done today.
      ——————————————————————-

      A few more comments here.

      If you are “setting” values from the outside of an object, you are doing “simulated data structure programming” rather than object oriented programming. One of my original motivations for trying to invent OOP was to eliminate imperative assignment (at least as a global unprotected action). “Real OOP” is much more about “requests”, and the more the requests invoke goals the object knows how to accomplish, the better. “Abstract Data Types” is not OOP!
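      (One way to illustrate the distinction; this is an editorial sketch in Python, not Kay's own example. The first style sets state from outside; the second sends a request that names a goal and lets the object protect its own invariants:)

      ```python
      # "Simulated data structure programming": the caller reaches in and
      # mutates state directly, so invariants live outside the object.
      class AccountRecord:
          def __init__(self):
              self.balance = 0

      rec = AccountRecord()
      rec.balance = rec.balance - 50   # nothing stops the caller going negative

      # A more request-like style: the caller states a goal; the object
      # decides how (and whether) to satisfy it.
      class Account:
          def __init__(self):
              self._balance = 0

          def deposit(self, amount):
              self._balance += amount

          def withdraw(self, amount):
              if amount > self._balance:
                  raise ValueError("insufficient funds")
              self._balance -= amount
              return self._balance

      acct = Account()
      acct.deposit(100)
      print(acct.withdraw(30))  # → 70; withdraw(200) would be refused
      ```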

      A larger problem here is that though the invention of OOP and the coining of the term were influenced by several prior systems (including Sketchpad and Simula, and others which can be found in the history I wrote for the ACM — a nice irony it turns out!), it is quite clear that the idea of OOP did not include most of its precursors.

      We didn’t even do all of the idea at PARC. Many of Carl Hewitt’s Actors ideas which got sparked by the original Smalltalk were more in the spirit of OOP than the subsequent Smalltalks. Significant parts of Erlang are more like a real OOP language than the current Smalltalk, and certainly more so than the C-based languages that have been painted with “OOP paint”.

      The largest problem here is that a misapplication of a paradigm is being blamed for what is really bad language and systems designs and implementations. And I agree completely with the author that most of the features cited are really bad. But they have nothing to do with OOP.

      For example, Smalltalk initially did not have inheritance because I thought the way it was used in Simula was all too easily the foundation of nightmares (too many different semantics from one mechanism). Instead the original Smalltalk used many LISP ideas to allow dynamic experiments with many kinds of generalizations.

      I think the remedy is to consign the current widespread meanings of “object-oriented” to the rubbish heap of still-taught bad ideas, and to make up a new term for what I and my colleagues did.

      A smaller consideration is to notice that what is good about the original idea is still quite good, but it does require more thinking (and different thinking) and design to accomplish (but with great benefits in expressiveness, scalability and safety).

      Blaming a good idea for being difficult is like blaming the Golden Rule for not being easily able to be learned by most humans. I think the main points of both lie elsewhere.

      Best wishes,

      Alan

  • 17. Moti Ben-Ari  |  September 15, 2010 at 10:49 am

    Hi Alan,

    It’s great to hear from you! I read about Smalltalk a very, very long time ago, but I’ve written “OO” mostly in Java, so my view of OO is colored by that.

    I like your characterization of the internet as the archetype of OO. This is so close to the architecture of the Mercedes car (with its dozens of independent controllers) and so far from “encapsulating” a single byte code in a class.

    As I wrote, what is really missing is a clear statement of what OO is good for and what it is not good for. Any software technology I know of – even the ones I really, really like – is good for some things and bad for other things, but the everything-is-an-object people don’t seem to agree. Perhaps you could write something up as a Viewpoint?

    Moti

    • 18. Alan Kay  |  September 15, 2010 at 11:38 am

      Hi Moti,

      I’ll try to write something more extensive after all the other writing I have to do between now and the end of October.

      But one of my favorite aphorisms at PARC was “Simple things should be simple and complex things should be possible”.

      If simple things aren’t simple, then bad design has been done by some tool maker.

      If complex things aren’t possible, then the bad tool makers have not allowed you to learn and scale.

      One question we need to ask is “How can we do something with about the mental effort it deserves and somehow then gracefully scale it and share it and protect it and maintain it?”

      And we can ask “How can we find out what something means, and also create something that has meaning, in a straightforward way without impoverishing the result for someone else’s use?”

      45 years ago I used to ask: “If you send computer stuff 1000 miles, what do you have to send with it to make it useful?” A manual? A programmer? Some form of code? What does it “mean to mean”?

      The thing that attracted me about whole computers was that they can do whatever a computer can do.

      Making the interior of a computer to be virtual computers preserves this. “The parts have the same powers as the wholes”. Whereas data structures have lost this and their manifest pragmatics get in the way of many needed things.

      So the design problem to be considered (or the way I looked at it back then) was a “have the cake and eat it too” one. For example, we wanted the number 3 to be no larger than the data version on a PDP-11 (in fact we made it smaller), but also to somehow carry its most important meanings (for both internal and external use) around with it.

      In the most recent children’s system we did over the last decade we made one kind of “universal object” which could be used to make everything else. (This is an interesting design exercise!) And it is interesting to compare the “large comprehensive” object idea with having class object be tiny and trying to build up a universe through zillions of subclasses. The large comprehensive object idea is a little more biological (where every cell in our body contains the entire DNA and the several hundred cell specializations are done entirely by a kind of parameterization).

      Again, a little more design has to be applied to make this work, but then it works for you over and over.

      I’m very fond of “simulation style” programming that uses ideas of McCarthy, Strachey and the later Lucid language to have transitions to future states of objects be done as functional relations, but to “model time explicitly” (rather than allowing the CPU to do it) so that (as in John’s “situation calculus”) there are no race conditions, yet actions can happen and time can progress.

      The original Smalltalk was extensible in all areas, including syntactic (because you could make up an input grammar to receive messages), and this allowed programmers to both use and to invent styles that are suitable to the problems.

      As an old mathematician (where this is done all the time), I’m very much in favor of this design tool as an inherent part of a programming language. Again, you have to learn how to design a bit more, but I believe that many of the biggest problems with computing today come from bad designs by non-expert designers.

      Language extensibility is quite compatible with strong association of meanings.

      Real Estate is location location location, Real Computing is design design design!

      Best wishes,

      Alan

      • 19. lixkid  |  September 16, 2010 at 2:58 am

        Hi Alan,
        In some of your talks and papers you mentioned the term “Dynamic OOP” to name programming techniques that enable building more robust systems. Was that also a way of trying to draw a distinction between that kind of OOP and the OOP that is supported by Java, C++, C# and numerous other programming languages? If so, what would be your suggestions to students for how to most effectively learn “Dynamic OOP”?
        In addition to your usage of this term I have encountered it in the context of CLOS and Smalltalk. The reason I am asking this question is that despite having a feeling that “Dynamic OOP” really is different from OOP in other languages, I don’t think that I understand it yet.

        • 20. Alan Kay  |  September 16, 2010 at 12:25 pm

          Hi lixkid

          Before the term “OO” got “colonized” it meant “something in the future pointed at by Smalltalk”. After it got vastly changed when claimed by C++, etc., and redefined by Peter Wegner (a nice guy, but not really in a place to do a good job of this), I started referring to the original branch of OO as “dynamic OO”, and later as “real OO”.

          But I think there’s too much water over the dam, and it’s time for a completely different term. And since I think we can do a much better job of this 45 years later, the next term should be given to a qualitative improvement along these lines.

          “Dynamic” tried to call attention to several important properties, including “late-binding” and “liveness” (an example of the latter is the Internet, which has never been taken down for maintenance). Instead, like a biological system it changes, grows, repairs, etc., as a living organism and the design accommodates this.

          (A glaring dumb exception is that you occasionally see an email from one’s organization saying such and such server “will be down for maintenance”, even though there is no need. Servers are cheap and they are just IDs, which means their content can be moved to other HW, renamed etc. and used while the old HW is being fixed and replaced. SysAdmins who don’t do this do not understand the Internet or “dynamic systems”.)

          If a programming language and its DE is worth anything (meaning “powerful”) then it would be ridiculous not to have this power used for all aspects of programming, including how to fix and improve itself. No change should take more than a fraction of a second to safely take effect. There is no need for text-based source code, separate compilations and loading, reinitializing, etc. A decent dynamic language should be able to easily do all these things.

          Both Interlisp and especially Smalltalk showed how this could be done to great benefit to the programming and the designs of the software systems that were being attempted.
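          (A toy analogue of this live-update idea, sketched in Python rather than Smalltalk: a method is replaced while an instance is “running”, and the live instance picks up the new behavior on its next call because method lookup is late-bound:)

          ```python
          class Server:
              def handle(self, request):
                  return "v1:" + request

          s = Server()              # a "running" instance
          print(s.handle("ping"))   # → v1:ping

          # "Maintenance" without taking the object down: rebind the method
          # on the class; the live instance sees the new code immediately.
          def handle_v2(self, request):
              return "v2:" + request

          Server.handle = handle_v2
          print(s.handle("ping"))   # → v2:ping
          ```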

          The analogies to how the much more complex living systems actually scale would be very clear if computer people were willing to actually learn about complexity and scaling.

          The way to get safety is not to make the system static, etc., but to implement “fences” which safely confine, but which can be hopped when a designer needs the next level of “meta” to make new underpinnings.

          Nowadays when I get asked by large organizations about all this I try to get them to actually understand why the Internet works, and to see how a soft virtual version of the Internet would make a more ideal programming tool and environment.

          As to your last sentence, I’ve been pointing out that “OOP” in many languages is an empty term ….

          Best wishes,

          Alan

      • 21. Stephen Bloch  |  March 6, 2011 at 12:58 pm

        … one of my favorite aphorisms at PARC was “Simple things should be simple and complex things should be possible”.

        If simple things aren’t simple, then bad design has been done by some tool maker.

        This is one of my favorite arguments against objects-early, at least in Java/C++/etc. (I was a strong believer in objects-early in the 1990’s.) The accepted wisdom, at least, is that OOP works well to organize programs of hundreds of KLOCs… but my first-semester students aren’t writing those programs, they’re writing at most hundreds of LOCs.

        Challenge: come up with a program of under 100 lines, written in “good OO style” in Java or C++, that you couldn’t have written considerably shorter, simpler, and clearer without the OO.

        • 22. Alan Kay  |  March 6, 2011 at 1:52 pm

          By my original definition of “Object oriented” neither Java nor C++ is OO.

          So why not either change your term or work within the actual definition?

          “Good OO style” in Java or C++ is an oxymoron

          And the “accepted wisdom” is neither wise nor correct.

          BTW, there are many under 100 line programs in Scratch and especially Etoys, that would not be as clear, easy or short without objects.

          Cheers,

          Alan

          • 23. Stephen Bloch  |  March 6, 2011 at 6:27 pm

            why not either change your term or work within the actual definition?

            Because, as you said earlier, the term “OO” has been effectively colonized and you need a new term to describe what you originally meant by “OO”.

            “Good OO style” in Java or C++ is an oxymoron.

            No argument here, hence the scare-quotes. I meant those aspects of Java/C++ programming style that are common to most of the textbooks that use these languages.

            And the “accepted wisdom” is neither wise nor correct.

            I haven’t done enough research to opine one way or another about that.

  • 24. gasstationwithoutpumps  |  September 17, 2010 at 12:00 am

    (A glaring dumb exception is that you occasionally see an email from one’s organization saying such and such server “will be down for maintenance”, even though there is no need. Servers are cheap and they are just IDs, which means their content can be moved to other HW, renamed etc. and used while the old HW is being fixed and replaced. SysAdmins who don’t do this do not understand the Internet or “dynamic systems”.)

    Alan, I think you misunderstand what “maintenance” means. It rarely has anything to do with hardware. It is almost always installing some new software system (sometimes to fix a security problem, sometimes just to get new features). The system has to be shut down and restarted to get the parts to communicate correctly. There is very little software robust enough to handle major version changes on critical components while running.

    • 25. Alan Kay  |  September 17, 2010 at 7:06 am

      I don’t think I “misunderstand” what “maintenance” means.

      However, since anything can be changed in Smalltalk while it is running — and this has been true since 1976 — we have a case example that it can be done, and how to do it. (And today it can be and is done better and more comprehensively than 35 years ago.)

      To be able to do this is one of the meanings (or connotations) of “dynamic” and “late-binding”.

      (And, by the way, for early-bound systems, the ploy of renaming servers is a way to keep service continuous for software which doesn’t know how to reconfigure.)

      Cheers,

      Alan

  • 26. BKMacKellar  |  September 19, 2010 at 3:21 pm

    I worked in industry as a software engineer for 11 years before returning to academia. I do not understand what this author is talking about when he says objects are hardly ever used. On the contrary, objects won! In my 11 years in industry, I saw almost NO non-object-oriented code. The only examples, really, were things like database maintenance scripts written in Perl, and web front end code. The libraries out there are all OO, and the common design methodologies are OO. Of course, there is a lot of badly written OO code out there, but it is still OO. What universe is this author living in?

    • 27. Erik Engbrecht  |  September 19, 2010 at 4:41 pm

      You have to look at how objects are commonly used and compare it to patterns in non-object oriented languages. I think the differences, especially in application code, are often quite superficial.

    • 28. Moti Ben-Ari  |  September 20, 2010 at 2:34 am

      Please see my comment above for one example (automatic inspection) and the example of the car in the article. Another example is aircraft control (I think 1 million lines of code in the latest Boeing and Airbus aircraft), where the central issues of realtime scheduling, hardware interfaces and mathematical algorithms are not amenable – in my opinion – to OO.

      So it all depends on what “industry” we’re talking about. Personally, I believe that it is OO that is the niche technology (albeit a very visible one). My main complaint against OO supporters is that they have not articulated what OO is good for and what it is not good for.

      Moti

  • 29. Objects When? If Ever? | Etherealear17's Blog  |  September 27, 2010 at 8:16 pm

    […] on September 28, 2010 by etherealear17 This post started as a comment on Mark Guzdial’s blog (Moti asks: Objects Never? Well, Hardly Ever!) but I decided to elaborate some. I think this is an important discussion to have both in education […]

  • 30. Objects When? If Ever? - Ethereal Tech News  |  October 9, 2010 at 8:25 am

    […] When? If Ever? Oct.09, 2010 in Main This post started as a comment on Mark Guzdial’s blog (Moti asks: Objects Never? Well, Hardly Ever!) but I decided to elaborate some. I think this is an important discussion to have both in education […]

  • […] on blogs and lists for some time.  “Objects never or hardly never” was a topic on Mark Guzdial’s blog not too long ago. It is extremely important that intro students can develop logical solutions / […]

  • 32. Dataflow Book » Summary of the Actor Model  |  March 27, 2014 at 12:26 am

    […] From “Moti asks: Objects Never? Well, Hardly Ever!” […]

  • 33. rdtsc comments on "(unknown story)"  |  January 25, 2016 at 12:13 am

    […] https://computinged.wordpress.com/2010/09/11/moti-asks-objec… (see comments history) […]

