in reply to Re^6: Random quotes in the top left corner
in thread Random quotes in the top left corner

I find it ironic that you have problems with my explanation, and then proceed to indicate that you got the main point.

Your reason B for less modular OS designs was exactly my point. Less modular designs were feasible on the hardware before more modular designs were. Once modular designs became feasible, it was only a matter of time until one emerged as a standard and then won out due to advantages in cost, configurability, and maintainability. But there was a window in which non-modular solutions made sense.


Re^8: Random quotes in the top left corner
by apotheon (Deacon) on May 02, 2005 at 06:48 UTC

    I think I may not have been clear enough in what I was saying about your point.

    Specifically, it seems likely to me that the necessity of tight coupling in new OS development is largely a thing of the past. Advances in hardware have, across the board, made developer resources more valuable and system resources less valuable, and this reduces the value of tight coupling in new system development to the point where it is negligible (except perhaps in concept prototyping).

    Yes, there was a time when non-modular solutions made more sense, but they don't really make any sense any longer, for the most part — often even in experimental development, contrary to what I understood of your earlier statements.

    print substr("Just another Perl hacker", 0, -2);
    - apotheon
    CopyWrite Chad Perrin

      What makes non-modular solutions sensible is not the era - there are places where they make sense today, and there will be places 30 years from now - it is the characteristics of current technology and of your problem space.

      While it no longer makes sense today for problems where it did 20 years ago, there are new problems where it arises today, and there will be undreamed-of problems where it arises in 20 years.

      Also, there is a certain relative nature to the issue. There is never an absolute line between what is loosely coupled and what is not; rather, you can say that one design is more loosely coupled than another. For instance, suppose that OO Perl runs at half the speed of procedural Perl (there is a lot of overhead in making method calls); then using OO techniques to modularize carries a huge overhead. Is it worth that overhead to get the benefits of OO? That depends entirely on what you are doing and on the capabilities of current hardware. And if OO doesn't make sense for you to use today, it might in 5 years.
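      The method-call overhead mentioned above is easy to measure for yourself. Here is a minimal sketch using the core Benchmark module; the class and sub names are invented for illustration, and the actual ratio you see will depend on your perl version and hardware:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Illustrative only: a one-field class and an equivalent plain
# subroutine, so the cost of method dispatch can be compared
# against a direct sub call.
package Counter;
sub new  { my ($class) = @_; return bless { n => 0 }, $class }
sub bump { my ($self)  = @_; return ++$self->{n} }

package main;

my $obj = Counter->new;
my $n   = 0;
sub bump_proc { return ++$n }

# Run both forms a fixed number of times and print a comparison
# table (rate and relative percentage difference).
cmpthese( 200_000, {
    method     => sub { $obj->bump },
    procedural => sub { bump_proc() },
} );
```

      The gap comes from the runtime lookup a method call performs (finding `bump` via the object's class) versus the direct entry a plain sub call uses - which is exactly the kind of cost that shrinks in relative importance as hardware gets faster.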

      The result is that software 20 years from now will be shockingly inefficient from the present perspective, just as current software is shockingly inefficient from the perspective of computers 20 years ago. But that inefficiency will buy people something, and one of the things it will buy them is additional looseness of coupling in the layers between the programming abstraction and the actual implementation.

        The more I think about this, the more it occurs to me that in a sense we're both right, without any compromise in either position: in five years there will be as much need for "tight coupling" as there is now, and now there is as much need for it as there was five years ago, and so on back to the invention of the term in reference to code modularity. What changes isn't the necessity of tight coupling, but how we define tightness of coupling. Some of what we call "loose coupling" now will be thought of as "tight coupling" in five years. Thus, if there were a way to measure an absolute percentage value for the incidence of tight coupling, in five years the percentage would likely be about the same as it is now, but the metric itself will have changed.

        At least, that seems to be the likely state of affairs to me.

        print substr("Just another Perl hacker", 0, -2);
        - apotheon
        CopyWrite Chad Perrin