This meditation is intended as an antidote to the over-enthusiasm that I see in some people for all things OO. I have no intention of suggesting that OO is not very useful. However, it is a limited approach to modelling the real world in code, and it is worthwhile to understand some of those limitations.
I started out intending to respond somewhat grumpily to Re: Often Overlooked OO Programming Guidelines, which stakes out the opposite extreme position. In particular it claims that There is simply no such thing as "useless OO", and the basic points used to support this are:
- Everything in the real world is an object (class is the collective abstraction of object).
- Programming is the way used to resolve real world problems.
- In order to be able to resolve the problem, especially through a machine, you need a way to (observe and ) describe the entities concerned, and the process to resolve the problem.
- OO is one way to describe real world (or to be precise, to perceive it and then describe it.)
I disagree to a greater or lesser extent with all 4 claims. Here are some of my underlying views:
- The world is not really made of objects: This is true both in a trivial and a profound way.
First the trivial. Let's take something simple, like day and night. They are different, aren't they? Well, where do you divide the evening between them? There isn't an intrinsically clear line. And if you can travel in a jet, the boundary between today and tomorrow becomes confusing - you can get from today to tomorrow without experiencing night by flying around the world (or by standing at the right latitude). The world is full of vague boundaries like this, things that merge into each other. In fact if you examine even simple things, like chairs, or you, at a microscopic enough level, there is always a fuzzy boundary in "things". And frequently the more that you know about them, the harder it becomes to say what they are.
Now for the profound. The world that we are interested in is largely made up of artificial social constructions. Speaking for myself, the vast majority of professional code that I have written has involved "fake" things like money (most of which doesn't physically exist), debts, holidays, contracted permissions, etc. In other words I'm dealing with "things" whose only reality is convention. Conventions whose intrinsic non-reality is demonstrated when they change over time, or depending on location, causing no end of headaches for software maintainers.
- Figuring out the correct relationships between things is both arbitrary and hard: Mental models embodied in programs contain a mixture of things (of varying degrees of artificiality) that the program is about, and made-up concepts and parts internal to the programming system. When you set out to write a program it is not at all obvious what real things need to be included, in what detail, what they are (confusion over that leads to a lot of spec questions!), and so on. It gets even more arbitrary when you start to organize your program and decide whether you are going to, say, use a formal MVC approach with invented Controllers and Views in addition to the Models above.
The fact that different teams of competent programmers can come up with different designs to tackle the same problem demonstrates the arbitrariness of these choices. Anyone who has had a design fall apart on them is painfully aware of how hard it is to come up with good choices.
If it were as simple as saying that there is a simple, obvious reality that we just have to describe accurately, then we would do much better at software engineering than we do.
- Even when relationships are clearly understood, it is not always clear how to capture them with OO: This is put better in The Structure and Interpretation of Computer Programs than I can put it. (For those who don't know, SICP is a true classic.) As a footnote there puts it:
Developing a useful, general framework for expressing the relations among different types of entities (what philosophers call "ontology") seems intractably difficult. The main difference between the confusion that existed ten years ago and the confusion that exists now is that now a variety of inadequate ontological theories have been embodied in a plethora of correspondingly inadequate programming languages. For example, much of the complexity of object-oriented programming languages -- and the subtle and confusing differences among contemporary object-oriented languages -- centers on the treatment of generic operations on interrelated types.
- The definition of OO is unclear: Do we allow single-dispatch? Multiple-dispatch? Single-inheritance? Multiple-inheritance? Do we have prototype-based inheritance? Some class-based model? Something more sophisticated (like Perl 6's roles)? Is everything an object? Do we call it object-oriented if you have lots of accessor methods?
For every one of these choices I can name languages and OO environments that made that choice. I can name ones that didn't. I can find people who argue that each option is the "right" choice. Yet these choices profoundly alter what it means to be "object oriented". They alter the kinds of strategies that you can use. And, as indicated in the SICP quote, each combination is unsatisfactory in some ways.
Yet despite this, you can find plenty of people who are quick to argue that something is "not real OO".
And now allow me to address each of the original points in turn:
- Everything in the real world is an object (class is the collective abstraction of object). I think that I've argued that the real world isn't made of objects. And further, the "world" inside our programs necessarily has a lot of stuff which has very little to do with the real world.
- Programming is the way used to resolve real world problems. First, my experience is that programming is more about communication and understanding between people than about what the program is supposed to do. Second, programs deal with the world at several degrees of remove. Third, I find that it is better for programs to provide tools, not solutions. Oh, computers can apply the simple rules, but you have to leave complex problem solving to people. We're better at it.
- In order to be able to resolve the problem, especially through a machine, you need a way to (observe and ) describe the entities concerned, and the process to resolve the problem. An ongoing tension in good design is how much you can leave out of the model. For example, look at spreadsheets. Myriads of problems have been effectively solved with spreadsheets (often by people who didn't know that they were programming), even though spreadsheets are innately horrible at really modelling any of the entities which those problems were about.
- OO is one way to describe real world (or to be precise, to perceive it and then describe it.) This I mostly agree with. But I would point out that every variation of OO is a different way to describe things (both real and invented), and I also claim that none of those ways are completely satisfactory.
And to address the point that started all of this, anyone who really believes that There is simply no such thing as "useless OO". should read Design Patterns Considered Harmful. Yes, adding OO can not only be useless, it can be actively counter-productive.
Disclaimer: When I first understood OO I had a reaction that has been confirmed over time.
My background is in mathematics. Mathematicians can be broadly divided into people inclined towards algebra versus analysis. The division is hard to draw along lines of specialty and subspecialty, but very few mathematicians have any problem telling you which side of the divide they are on.
Let me broadly describe each. Analytically inclined mathematicians like to form mental models of the topic at hand, from which intuitive understanding it is clear how to produce possibly long chains of calculations leading to results. Algebraically inclined mathematicians are more inclined towards abstracting out sequences of operations which have no intrinsic meaning, but whose analogs have proven themselves useful in the past. This is not a question of ability - any mathematician is competent at both kinds of thought - but each will generally find one or the other far more congenial.
That said, my first reaction to OO was, I bet that this really appeals to algebraically inclined people. This impression has been strengthened over time (as well, several people familiar with both have agreed with me).
My personal inclination was towards analysis...
UPDATE: VSarkiss corrected me on the title of SICP (programs, not programming). Fixed.
Re: The world is not object oriented
by davido (Cardinal) on Jan 02, 2004 at 08:49 UTC
Object Orientation should simply be seen as one tool in the toolbelt.
I think that it excels at a few things:
- Providing a simple interface to a subject with complex internals.
- Providing a degree of autonomy to entities.
- Making a new entity act or be acted upon in a way that seems already familiar (tied variables, overloaded operators, etc.).
- Giving data an inherent context (I'm not talking about context in the Perlish usage, but rather, context in the conceptual sense), and a standardized means of manipulation.
- Extensibility through inheritance.
- Multiple instances! (almost forgot that one)
Of course that's not a comprehensive list, and some of those notions can be addressed without OO. But OO can provide a convenient means to those ends; a small sketch of the "already familiar" point follows.
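As an aside, here is a minimal sketch of that "acts familiar" idea, using Perl's core overload pragma (the Vector class is hypothetical, purely for illustration):

    package Vector;
    use overload
        '+'  => \&add,          # make + do vector addition
        '""' => \&to_string;    # control how the object stringifies

    sub new       { my ($class, @c) = @_; return bless [@c], $class }
    sub add       { my ($u, $w) = @_; return Vector->new(map { $u->[$_] + $w->[$_] } 0 .. $#$u) }
    sub to_string { my ($self) = @_; return '(' . join(', ', @$self) . ')' }

    package main;
    my $v = Vector->new(1, 2) + Vector->new(3, 4);
    print "$v\n";    # (4, 6) - a brand new entity, driven with familiar notation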
For those reasons, I happen to like using CPAN and core modules that have an OO design better than ones that don't; they just seem to be easier to use once the OO notation is understood.
But OO design can also be overkill, and may add to confusion if not well thought-out. But the same can be said of many tools. Use a screwdriver for screws, and a hammer for nails. And if you need to chop wood, a chainsaw is handy. But don't try to drive nails with chainsaws; you'll get hurt. Use OO when it makes sense, when it simplifies design, when it is helpful. Bag it when some other approach is less of a headache.
Re: The world is not object oriented
by Corion (Patriarch) on Jan 02, 2004 at 09:48 UTC
As I am on the other side of the mathematicians' divide, I should feel more comfortable with objects, and to some degree this seems true, as my first "real" language was Turbo Pascal 4, and I implemented my own object oriented windowing system on top of it (or rather, I learned that it is bad to have parameter lists of 30ish mandatory parameters, and passed records around). I'm not sure, though, whether this was because object orientation was the destined path or whether object orientation has simply been the most successful tool for windowing systems.
On the other hand, I have been passing around functions as parameters since TP4 as well, but Perl was my first language where I could pass around real closures - TP suffered from being tied to the processor stack, like C.
On the third hand, "Object Orientation" means (to the functional programmers) only that you have a lot of functions that take a common first parameter or that have a curried first parameter, so all hardcore Lisp programmers will consider Object Orientation just syntactic candy, but I, as a Perl programmer, like syntactic candy.
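A minimal sketch of that view, with a hypothetical Account class (my example, not Corion's): in Perl, a method really is a subroutine that receives its invocant as the first parameter.

    package Account;
    sub new     { my ($class) = @_; return bless { balance => 0 }, $class }
    sub deposit { my ($self, $amt) = @_; $self->{balance} += $amt }

    package main;
    my $acct = Account->new;
    $acct->deposit(10);             # method-call syntax ...
    Account::deposit($acct, 10);    # ... is nearly sugar for this plain call;
                                    # the arrow form also searches @ISA
    print $acct->{balance}, "\n";   # 20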
perl -MHTTP::Daemon -MHTTP::Response -MLWP::Simple -e ' ; # The
$d = new HTTP::Daemon and fork and getprint $d->url and exit;#spider
($c = $d->accept())->get_request(); $c->send_response( new #in the
HTTP::Response(200,$_,$_,qq(Just another Perl hacker\n))); ' # web
OO should mean a lot more than that to a functional programmer.
OO gives you a fairly flexible prebuilt data-driven function dispatch mechanism. This is better than a closure with a curried first parameter because you have multiple related functions associated with the data. Furthermore you have a mechanism for associating functions with data, which gives you a limited ability to abstract relations between things.
Plenty of Lisp programmers find this useful, which is why many Lisp systems have built-in object oriented systems. Sure, Lisp doesn't associate any special syntactic sugar with it. But the idea of layering abstractions on top of each other so that in the end you reach a natural implementation is very well appreciated in the Lisp world, and plenty of Lisp programmers use OO.
Re: The world is not object oriented
by hardburn (Abbot) on Jan 02, 2004 at 15:19 UTC
The definition of OO is unclear
I've recently been reading Booch's "Object-Oriented Analysis and Design", which contains a passage that suddenly made the reason for all this confusion over what OO is clear to me: the term "object" arose independently in several branches of CS at about the same time, each expressing a similar idea, yet each just different enough to be confusing.
In such a case, it seems foolish to define OO along a single means of implementation. In Java, classes are always defined with a class Foo, plus some optional inheritance declaration, followed by a block containing the class definition. Objects are always created with a Foo f = new Foo(); statement, plus some optional parameters to the constructor. If you're not doing that, you're not doing OO, as far as the Java people will tell you.
What they have really done is implemented a single kind of OO and ignored a large and useful number of other kinds of OO. Perl's bless'd objects are roughly analogous to Java's object system, and theirs is the one that gets the most attention, but it's hardly the only one. Just off the top of my head, Perl also has:
- Inside-out objects
- Classless objects
- Closures
And probably a lot more that I don't know about, or that have yet to be invented (a sketch of the first of these follows below). The important point is that these have massively different implementations, but can all be unified under the banner of OO.
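To make the variety concrete, here is a minimal sketch of an inside-out object (hypothetical Point class, assumptions mine): the attributes live in lexical hashes keyed by the object's address, not inside the blessed reference itself.

    package Point;
    use Scalar::Util qw(refaddr);

    my (%x, %y);    # attribute storage, private to this file

    sub new {
        my ($class, $x, $y) = @_;
        my $self = bless \(my $anon), $class;   # the blessed ref holds no data
        $x{ refaddr $self } = $x;
        $y{ refaddr $self } = $y;
        return $self;
    }

    sub x { my ($self) = @_; return $x{ refaddr $self } }
    sub y { my ($self) = @_; return $y{ refaddr $self } }

    sub DESTROY {    # without this, attribute entries would leak
        my ($self) = @_;
        delete $x{ refaddr $self };
        delete $y{ refaddr $self };
    }

    package main;
    my $p = Point->new(3, 4);
    print $p->x, ',', $p->y, "\n";    # 3,4 - yet the object itself is empty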
Another thing I've noticed is some OO programmers describing OO concepts in terms of how they're implemented in their favorite language. It is ironic that OO, which is about abstracting the interface away from the implementation, is so often described in terms of implementation. For instance, I once saw a C++ programmer explain polymorphism (here on Kuro5hin.org) like this:
Polymorphism - The class contains a pointer to the appropriate function to call that can handle the class specific data. This is done so to provide a consistant interface.
Yes, that's how polymorphism is implemented in C++. But it doesn't have to work like that. Perl's bless'd objects, for instance, do polymorphism by walking the inheritance tree (not that this is a great way to do it--it's slow).
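For the curious, a tiny sketch of what that walk amounts to (hypothetical Animal/Dog classes): the method is looked up in the invocant's own package first, then along @ISA, at call time.

    package Animal;
    sub speak {
        my ($self) = @_;
        printf "a %s goes %s\n", ref $self, $self->sound;
    }

    package Dog;
    our @ISA = ('Animal');    # inheritance is just this array
    sub sound { return "woof" }

    package main;
    my $dog = bless {}, 'Dog';
    $dog->speak;    # speak() is found in Animal, sound() back in Dog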
---- I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident.
-- Schemer
: () { :|:& };:
Note: All code is untested, unless otherwise stated
Yes, that's how polymorphism is implemented in C++. But it doesn't have to work like that. Perl's bless'd objects, for instance, do polymorphism by walking the inheritance tree (not that this is a great way to do it--it's slow).
No. The inheritance tree is called that because it is used for implementing inheritance, not polymorphism. Polymorphism means you have several things that you can treat in a uniform way, for example because they all implement the same interface. Polymorphism is useful, for example, when you are storing objects in a container and the container somehow has to interact with its objects.
The inheritance tree is called that because it is used for implementing inheritance, not polymorphism. Polymorphism means you have several things that you can treat in a uniform way, for example because they all implement the same interface.
Subclasses, in general, have the same interface as their parents, which means that polymorphism and inheritance are tightly bound.
Correction. Closures are a distinctly different concept than objects. And making effective use of closures takes a different set of design skills than laying out a good OO design. (Neither is better than the other. They just don't translate very well.)
A closure is distinctly simpler in behaviour than an object. They aren't a less capable building block since you can implement an object system with closures. But a single closure represents a far less rich set of behaviours than a single object, and closures offer fewer tools than objects do to channel and constrain your designs along natural paths. (Flexibility is definitely a somewhat mixed blessing.)
Therefore it is incorrect to claim that closures are one of Perl's OO systems.
I think they implement enough OO concepts to be considered objects in their own right. They're very simple, and only have one method, but they're still objects. They encapsulate behavior behind that method. They are also polymorphic, as long as they take the same parameters (curried versions can help there). I think it's fair to call them objects, and rather rigorous ones at that, though with limited functionality.
Re: The world is not object oriented
by castaway (Parson) on Jan 02, 2004 at 11:04 UTC
If I had to describe OO off the top of my head, then I'd probably say that it's used when you have a bunch of similar things, in which the method is the same and only the data changes. I often wonder why it is that RW examples get used when trying to describe it. For some reason, cups with different contents just entered my mind while thinking about this; the next thing that occurred to me was databases, and Class::DBI - which I thought was a strange overuse of OO when I looked at it - seems to be the classic implementation of such a RW example. And it looks like a lot of overkill (a class per table..)
In short, OO solutions seldom mirror the way we describe OO; mini classes for everything just seem wrong.
I agree with your points, in particular that programming is not directly used to resolve real world problems. It's used to assist people in solving problems, by removing the tedium of some repeated calculations, and for storing and accessing data. (I won't say it's about Interfaces and such, since these only come about if something is solved by programming.)
Now I'm trying to figure which of the Mathematicians I am (or was) .. I guess analytical..
And thanks for the link to Design Patterns considered harmful and the various links there.. it got me thinking..
C.
Re: The world is not object oriented
by kal (Hermit) on Jan 02, 2004 at 12:25 UTC
Yeah, I pretty much agree with most of your points. Although I don't think you should read the four points quite as standalone as you appear to have done: they read more as a train of thought leading to the last one, in a rough way rather than as a proof.
Objects are fairly good at modelling discrete systems. They're not very good at continuous ones, but then computers in general aren't, and tend to end up treating the problem in a somewhat quantum fashion. So, if you're attempting to replicate a real-world system, you always have some kind of information loss if the system contains any continuous process. And this is all assuming you can capture enough information about the thing to know how it works.
I'm kind of surprised at your thought towards the end though - if anything, I would think it would be more appealing to the analysts, although perhaps your descriptions are slightly different to what I'm thinking. In my experience, OO tends to appeal to people who want to map the software onto the problem directly, essentially making a little world inside the computer. The abstraction side of things is very rarely used in real-world OO, again, IME.
You may not have experienced the tendency of people using OO to overabstract things. But I certainly have run into plenty of that.
As for my final thought, pick up a design patterns book, and read it. Think about it in light of the two basic patterns of thought that I described for mathematics. If you can't see how it more closely resembles the algebraic mode of thought, then I probably didn't describe it clearly enough. For the real explanation I'd have to inflict a few months of advanced math on you. After you'd experienced figuring out, say, how homology groups work (for a demonstration of algebra) and various epsilon-delta proofs (for a taste of analysis), then you'd know directly the distinction that I am trying to describe.
Re: The world is not object oriented
by stvn (Monsignor) on Jan 02, 2004 at 16:33 UTC
Tilly,
I agree that OO is certainly not a silver bullet. It is but one tool in a big jumbled toolbox that has developed over the relatively short time in which computers have been around and software has been written. Think about it: our collective "field" has been in existence for less than a lifetime!
Personally, I like to use OO as a way to model concepts within the system that lend themselves to OO modeling. Other parts of my design might be plain ole vanilla procedural code (sometimes to script the interactions of said objects), and other times I use a more functional approach (nested curried subroutine references and other such esoterica).
It's all about modeling the system in a way that fits the system, not about following some strict ideology. In my experience anyone who preaches "the one true answer" quite likely doesn't understand the question. (This applies to all of life, not just programming.)
As for modeling the "real" world ("real" is in quotes since I have been reading too much post-modern philosophy and cannot rightfully acknowledge the existence of a "real" world, nor our ability to perceive it, but that is for another meditation :-P ): OO is woefully inadequate in this regard, but IMHO it's better than a lot of other approaches - though only if the context is right. But again, it's just another tool in our collective toolbox; if it works, use it.... if it doesn't, don't. I am sure we have all felt as if we have been forced to use an Elephant Gun to kill a flea, or vice-versa (I was once forced to use a large J2EE server to write a message board - talk about overkill).
Programming is a very young field/art/craft/whatever, and one that borrows heavily from many other disciplines (mathematics, biology, philosophy, linguistics, etc. etc. etc.). But compared to those fields, it's barely a zygote. We've still got a long way to go.
-stvn
Re: The world is not object oriented
by Anonymous Monk on Jan 02, 2004 at 08:55 UTC
That said, my first reaction to OO was, I bet that this really appeals to algebraically inclined people. This impression has been strengthened over time (as well, several people familiar with both have agreed with me).
Thank you tilly. That gives me some insight into myself and fuel for thought. Perhaps that sheds light on my tendency toward reasoning and modelling in a more prototype-based-OO manner (regardless of language) when working with OO (falling more on the analytical side of the dichotomy myself).
Re: The world is not object oriented
by exussum0 (Vicar) on Jan 02, 2004 at 16:12 UTC
1.
Day and night are objects of type time of Morning. Think of it like a Boolean, where it's true or false. Or something like that. :)
But you are right. Everything isn't an object. Anything that's physical can be represented as an object. Things that follow ideas usually can be too, such as a Calendar Date being comprised of a Month, a Year, etc... Some things can be but shouldn't, such as some parsers, where functional programming would be a little smarter.
2.
Nono. Programs are a tool to solve real world problems. Whether it's adding two numbers or working out pi to the 80th digit, yeah. But processes solve problems, and the processes of adding and of computing pi are easily implemented as programs. There is also a set of problems which no program can solve at all. The halting problem, for instance: deciding whether a given piece of code will ever end. That is much easier to do for a human (right now).
3.
Part of the problem with a computer is, it's binary.. usually. It cannot represent some states easily. We can simulate them with tons of statistics, but that's as far as it goes. Expert systems accomplish things like this with prior inputs and confidence levels, which are just statistics. Same with spam and Bayesian filters.
4.
Everything can be represented as an object. Data can always be represented by a graph. Sorting can always be implemented with an n log n algorithm. But it's not always the best representation.
Figuring out the relationships of how things fit isn't hard. It's preparing for the future that is hard. Finding the simple patterns of usage in a current system is quite easy when you know what's going on - it's just tedious. Given the set of functions of a program, it can be segregated into many different subsets, where things overlap. But solving a real world problem doesn't require knowledge of all subsets.. just the best ones. The problem is when someone throws a fork into the mix.
Everything can be done with OO in one form or another. Not always great, not always terrible, but it can be done. Should it? No.
But the definition of OO is clear. It's simply a description of what something can do and what attributes it has. How the definitions of the "do's" and "attributes" are expressed - how they are passed around, accessed and all - is both a sugar and a medicine. It makes things easier and enforces "good" patterns of usage. Java doesn't do multiple inheritance, for its "good reasons". Perl and C++ do. Perl and PHP (until recently) had no access modifiers.
OO appeals to organized people. Not all the people it appeals to are organized, and some organized people think OO is not that great.
People who write algorithms, like solving sorting issues and whatnot, don't always flock to OOP, since you are describing a process in its simplest form. Even then, the description is usually not in a concrete language, but in functional pseudo-code :)
Play that funky music white boy..
Day and night are objects of type time of Morning. Think of it like a Boolean, where it's true or false
As tilly pointed out, "day" and "night" are not boolean. It's clear that 3:00 AM is night and 1:00 PM is day, but around twilight it becomes difficult to separate night from day. In fact, you can take measurements at the same latitude but different longitudes and get different values for night and day (even ignoring the Earth's tilt), because higher spots will see the sun longer, and lower spots might be obscured by mountains.
IMHO, these boundary conditions are too often ignored as "noise" in mathematics and science. It wasn't until the study of chaos theory and fractals that people started realizing just how fascinating boundary conditions really are.
the definition of OO is clear
The only "clear" definition of OO is so broad that it becomes useless in practice. Too many people think different ways on OO. Which is one reason I like Perl--it allows many different object systems to coexist and lets you pick the best one.
I dedicate this post to you, since it's my 100th writeup :)
Well, I usually say good morning until noon, and refer to 3 in the morning when people call me that late/early.. but we can always say it's an object of type BeforeNoon. Though we can quibble on that, since AM and PM does have some sorta binary meaning.
The only "clear" definition of OO is so broad that it becomes useless in practice
I strongly disagree with this. The most freeform language you can have is machine language. All the higher level languages do is impose a syntax and laws of use. For instance, I've worked at companies where this_is_a_function vs thisIsAFunction was enforced. Where tabs and spacing are dictated. This is to make things easier to read. Yes, we can argue that once you know a language it should be easy, but that's not the point.
All languages like Java, PHP, Perl, C++ and even C do is dictate how OOP can and can't be used. C is the most liberal, since you use OOP by emulation with structs. PHP and Perl allow varying amounts of freedom. Java is very strict. In the broadest, most liberal sense, it's still quite useful. If you impose coding standards which govern usage, you come up with your own language where certain things aren't allowed. Even Perl can be made to emulate Java by never using multiple inheritance. Define an interface by having all your methods die() (a sketch of this follows).
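A sketch of that trick, with a hypothetical Comparable "interface" (names are mine, just for illustration): an abstract base class whose stub methods die unless a subclass overrides them.

    package Comparable;
    sub compare_to { die ref($_[0]), " must implement compare_to()\n" }

    package Money;
    our @ISA = ('Comparable');
    sub new        { my ($class, $cents) = @_; return bless { cents => $cents }, $class }
    sub compare_to { my ($self, $other) = @_; return $self->{cents} <=> $other->{cents} }

    package main;
    my $five   = Money->new(500);
    my $couple = Money->new(250);
    print $five->compare_to($couple), "\n";   # 1 - Money supplied the method
    # A subclass that forgot compare_to() would die at the call - a run-time
    # approximation of Java's compile-time interface check.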
It's a reason why a lot of people use Java for OOP and a lot of people use Perl for OOP. A lot of people agree that Java's restrictions (permissions and usage) on how classes work are better than most. The Perl people like the freedom they have. It's why languages thrive: people agree on stuff :) So you see, the practical flavors that people like, such as Perl's, PHP's, Java's, or C's (lack thereof, or via struct emulation), are clear to their users and thus highly usable.
Re: The world is not object oriented
by pg (Canon) on Jan 02, 2004 at 20:39 UTC
I am certainly not surprised to see people holding different views, and sometimes the views can be quite different. The purpose of discussion is not to try to unify people's minds, but to expose the diversity.
I only read a couple of lines here and there. Let me just pick on one thing I have read:
"First the trivial. Let's take something simple, like day and night. They are different, aren't they? Well, where do you divide the evening between them? "
My OO view is that time is a class. It has certain properties like year, month, day, hour, minute, and second. A different point in time is just an instance of time, holding particular property values. Day and night, or the line between them, are unrelated here. There is nothing that makes me feel the need to perceive day as a class and night as another class, or even day as one instance and night as another instance.
A famous philosopher believed that discussion progresses by starting with a thesis and confronting it with its antithesis; the two then mix into a synthesis. As much as I don't care for that philosopher's political philosophy, there is quite a bit of truth to the concept that the best way to expose people to diversity is to expose them to conflicting views. And interesting discussions frequently start with a significant disagreement, carefully explored.
Let's take your example a little further. Time is a class. A Time object has properties like "year", "month", "day", "hour", etc. What is the relationship then between Time and TimeZone? Does Time only have an hour in the context of a TimeZone? Or is the same split second in London a different Time than that split second in Los Angeles?
Going further, time is a more complex thing than your model holds. For a layman, this NY Times article doesn't do a bad job of explaining why. Attempting to produce a Time class that models the intricacies of, say, metrics and varying versions of local time is very hard. Using such a beast is even more insane. Now for 99.9999% of the code out there (maybe more, maybe fewer 9's) the complications don't matter and it makes perfect sense to have a naive concept of Time. But for the fraction of a percent that does, you have a lot of fun waiting. An example in that fraction of a percent is the code used internally by the GPS satellites. (Did you know that GPS not only has to take into account corrections from Einstein's Special Theory of Relativity, but also the General? Yup, the satellites are able to detect that their clocks run faster due to being farther out of the Earth's gravitational well!)
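One small, concrete illustration of the Time/TimeZone question, using only core builtins: the instant (epoch seconds) is unambiguous, but its "hour" property only exists relative to a zone.

    my $now   = time;              # one unambiguous instant
    my @utc   = gmtime $now;       # ... described in UTC
    my @local = localtime $now;    # ... and in the local zone
    printf "UTC hour: %02d, local hour: %02d\n", $utc[2], $local[2];
    # Same split second, different "hour" - the property belongs to the
    # (Time, TimeZone) pair, not to the instant alone.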
On misuse of the tool of OO design, my statement is not that I misuse the tool. It is that the tool simply has natural limitations. Typically those limitations don't prevent us from being able to do what we want (though they may complicate it). Rarely they do. Much more often, people who haven't yet learned a good balance between theory and practice create theoretical but impractical designs.
And finally I admit that if you limit your perceptions, you can shoehorn everything into an OO framework. I've seen people who seem to do this. I have no way to know whether you do - all that I can judge you by is what you write. However your post staked out an extreme position, and I certainly can critique the position that I understood your post to be making. This I have tried to do.
I think pg makes an excellent point here.
It's all in the needs of your application. If your application requires that it use a different style sheet for the web page based upon it being "day" or "night" in the user's timezone, then your concepts of "day" and "night" are pretty clearly delineated. If your application requirements dictate that your user may be crossing time-zones while using it, your definition of "day" and "night" is then much different, but still very modelable within OO. I find that OO better models "conceptual" objects than it models "real" objects.
To go back to the original example of day/night modeling, it is clear that you are running into the classic OOA/D trap of over-generalization and over-abstraction. Of course you cannot model the entire concept of day and night with OO, but you couldn't describe the concept adequately in natural language either (at least not at the level of detail needed in a computer system).
In the process of OOA/D you need to know when to stop abstracting things, or you end up with a giant mess of all-too-specific classes (which will actually make things worse and defeat the whole idea of OO in the first place). You also need to learn when to stop generalizing (which is very much like abstraction - maybe best called a facet of the act of abstraction). If your classes are too general, you end up with more details/capabilities than you actually need, and again, you've defeated the benefits of OO.
Again, I will say: it's just another tool. And like all tools, its usefulness is not so much an intrinsic property of the tool as a matter of how you utilize it.
-stvn
Exactly.
In any kind of programming design, OO or otherwise, the goal is to find a good representation for your purposes. My point is that there often isn't an "obviously natural" representation just waiting to be noticed, and existing OO systems have to stop at partially expressing the full conceptualization that the programmer might have of the problem domain.
OO is one toolkit for producing useful representations. It is a useful one (else it would not have become so popular), but it is not the only one, and it is not always the right one.
Day and night could be viewed as another property of the time class, or an assertion you could make against an instance of time, based on an artificial line between day and night.
I don't consider day and night to be artificial constructs, because nature does specific things during those times. For most bats, night is the time to wake up and start hunting, while it's time for a robin to go to sleep. The border between the two is fractal-like. If you can figure out the fractal's underlying pattern, it should be easy to program your is_day and is_night methods, since fractal generators are surprisingly simple once you know the pattern. If you can't figure out the pattern, it'll be all but impossible without some artificial agreement.
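For what it's worth, here is a deliberately artificial is_day() of the sort suggested above; the 6:00/18:00 cutoff is purely an assumed convention, which is exactly the "artificial agreement" in question.

    # assumes a fixed, conventional day/night boundary
    sub is_day {
        my ($hour) = @_;    # 0..23
        return $hour >= 6 && $hour < 18;
    }
    sub is_night { return !is_day(@_) }

    print is_day(13)  ? "day\n"   : "night\n";    # day
    print is_night(3) ? "night\n" : "day\n";      # night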
With the aid of a PC-connected GPS, and some fairly well-defined celestial mechanics math, determining the time of sunset and sunrise at any given location isn't that hard.
Though you would probably need a missile-guidance-system-style local topography map to account for things like local mountain ranges etc. Depending on where you live these may exist, but they probably aren't available to the general public.
Accounting for refraction and backscatter of light due to the atmosphere will be very difficult, as it will be affected by the presence, type and height of cloud cover up to several hundred miles from the target location.
Given enough incentive, determining when the last vestiges of sunlight will cease to fall on a given point on the earth's surface would probably be possible, to a fairly high degree of accuracy.
However, whether this would definitively capture the ethereal notion of evening I doubt. What constitutes the start and end of evening is very much determined by a whole range of factors well beyond the scope of any form of mathematical formula to define.
One example of these factors is social background: different societies, different groups of people within any given society, and even different age groups can have an influence upon whether evening starts at 4pm, 6pm or 8pm or any point in between. Even once a computer calculated a specific point as the StartOfEvening, the perceived accuracy of the determination would vary depending upon the audience, and may even vary with the same person according to a number of factors. Personally, Sunday evening notionally starts later in the day -- as defined by my local time, including artificial factors like daylight saving time -- than a work day evening. Likewise, evenings seem to start later, or the "afternoon" continues longer, when I am on holiday.
There are many concepts in the real world that do not lend themselves to being viewed as objects, nor even attributes of objects. The question then becomes one of:
Do these concepts have any place in computer programs?
We can represent colours within a computer program more accurately than most human eyes can differentiate between them, even with 24 bits. However, writing a computer program that could quantify any given colour as a "nice colour" would be extremely difficult. Or even a pair of colours as having a "good contrast" or as "complementary" in the fashion designer's sense of the terms.
But then, what would be the purpose of having a computer program to do this? A computerised critique of the stars' dress choices at red carpet parades?
For exactly the same reasons as it is difficult to produce a computer program to perform these tasks, anything said, or written, has to be taken in the context of where and when it was written. Taking individual words and phrases and ideas, expressed in one context and then considering their merits in isolation from the original context is fraught with problems.
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
Hooray!
Re: The world is not object oriented
by rir (Vicar) on Jan 02, 2004 at 17:27 UTC
I do not find anything to grasp in your spreadsheet paragraph quoted below. What is your point here?
In order to be able to resolve the problem, especially through a machine, you need a way to (observe and ) describe the entities concerned, and the process to resolve the problem. An ongoing tension in good design is how much you can leave out of the model. For example, look at spreadsheets. Myriads of problems have been effectively solved with spreadsheets (often by people who didn't know that they were programming), even though spreadsheets are innately horrible at really modelling any of the entities which those problems were about.
The point is that it is not always necessary, or useful, to have detailed internal models of the entities of interest in your program. You need to capture and manipulate the data of interest about those entities, yes. But that doesn't imply that you always need to build good models of the entities themselves.
I think it's that spreadsheets leave a lot of "real programming" out, even though they may be computationally complete. A spreadsheet is a way of organizing data, so the language used within it should be specialized in handling data, like formatting it and running formulas on it. Excel doesn't need a full GUI API embedded in its scripting language, so it doesn't have one. This is generally considered a good thing.
In other words, a spreadsheet has left a lot out of its scripting support, and this is a good thing. So good that people don't even realize that they're programming.
Re: The world is not object oriented
by BUU (Prior) on Jan 03, 2004 at 04:53 UTC
One reason, which I think you didn't touch upon, is that OO seems to produce much simpler code, at least conceptually. I think that this is because it's much more closely related to procedural programming than the other types of programming are, and because (imho, etc.) procedural is the simplest type of programming, being closest to it makes for code that's easier to understand.
So OO is often taught as the next step beyond procedural, simply because it's simpler and easier to teach and understand.
Another thought is that OO tends to make code sound simpler, at least conceptually speaking, especially when speaking in natural languages (such as English and so on). It's much easier to say Object1 calls Object2's method baz, which then takes an Object3 which it manipulates in this manner, than to start describing how a functionally oriented program works.
Perhaps what I'm getting at is that objects tend to be the best way to hide implementation details, and this makes everything seem much, much simpler when you're trying to model something. This may not even be true when you are writing the code, but it's much easier to conceptualize.
One thing I've noticed is that occasionally you have library-type code that is programmed as a class/object combination simply because that's the only way the author knows how to appropriately encapsulate his data and methods. This is often the case with Singleton pattern objects: there's no real reason to use an object in this case, you just need a module, but sometimes the author or the language can't use a module, so you use a Singleton object.
I agree with you: a lot of Singleton classes are merely modules shoe-horned into classes. But I disagree that this is a reason not to use OO and instead use a plain ole module. It is largely dependent upon the context. While I always advocate mixed-paradigm programming within the larger application, I think that on a smaller scale, paradigms should be reasonably separated.
If my application is largely OO, then a Singleton approach/syntax makes sense. A Singleton's singular instance can be passed around and treated like any other object, the other objects blind to its Singleton-ness. In this case the module approach/syntax would be awkward and clumsy.
Now if my application is largely procedural/imperative/functional in style, then the module approach/syntax makes much more sense. It fits the context here, where a Singleton object would be awkward and clumsy.
Ideally a module should provide both interfaces (if appropriate), much like CGI.pm. (Yes, I know CGI.pm just auto-creates an object instance for you if you use the non-OO interface, but hey, encapsulation isn't just for OO.) A sketch of the idea follows.
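Here is a minimal sketch of such a dual-interface Singleton (hypothetical AppConfig module, loosely imitating the CGI.pm arrangement): the functional interface quietly reuses the one instance that the OO interface hands out.

    package AppConfig;

    my $instance;    # the single shared instance

    sub instance {                     # OO entry point
        my ($class) = @_;
        $instance ||= bless { verbose => 0 }, $class;
        return $instance;
    }

    sub verbose {                      # works as a method or a plain function
        my $self = ref $_[0] ? shift : __PACKAGE__->instance;
        $self->{verbose} = shift if @_;
        return $self->{verbose};
    }

    package main;
    AppConfig->instance->verbose(1);    # OO style
    print AppConfig::verbose(), "\n";   # functional style sees the same state: 1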
-stvn
Re: The world is not object oriented
by demerphq (Chancellor) on Oct 21, 2007 at 11:59 UTC
There is OO as a method of encapsulating and associating data and the means to operate on that data, and there is OO as a method of modeling an abstract concept. Both have their place, however I tend to find the latter is actually less useful than the former. The former tends to use 'hasa' or 'containsa' semantics and the latter tends to use 'isa'.
I find programming in an environment that requires isa semantics to be frustrating and annoying (outside of perhaps mix-ins). I tend to find inheritance and polymorphism to be overused and less useful than is typically advertised, leading to layers of twisty two-line subroutines that obscure the actual intent and operation of the code.
On the other hand I find OO as an encapsulation mechanism to be of clear utility, especially in Perl where there are no type declarations. If I build a data structure out of AoAoHoA's and I accidentally feed it to a routine that doesn't operate on that structure, the error is neither caught at compile time nor is the error message all that useful; in fact, given autovivification, sometimes no error at all occurs, merely incorrect and surprising results. With OO as encapsulation this problem does not arise: while the error still will not occur at compile time, an error *will* arise when I call a method on an object that does not define it. Such an error is easy to debug, and is generally preferable to silent erroneous operation.
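A small sketch of that failure mode (hypothetical Order class, my example): with a raw structure the typo autovivifies silently, while the encapsulated version dies loudly at the faulty call.

    package Order;
    sub new      { my ($class) = @_; return bless { items => [] }, $class }
    sub add_item { my ($self, @new) = @_; push @{ $self->{items} }, @new }

    package main;
    use strict;

    # Raw structure: a typo'd key autovivifies, even under strict.
    my %order = ( items => [] );
    push @{ $order{itmes} }, 'book';          # oops - silently creates 'itmes'
    print scalar @{ $order{items} }, "\n";    # 0, and no error anywhere

    # Encapsulated: the same kind of typo is caught at the call site.
    my $o = Order->new;
    $o->add_item('book');                     # fine
    $o->add_itme('book');                     # dies: Can't locate object method
                                              # "add_itme" via package "Order"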
I think OO is like anything else: used with moderation it's fine; used as an end-all-be-all matter of policy it's ridiculous.
---
$world=~s/war/peace/g
Well I'm not sure that we are disagreeing. :-) My point was that the two tend to lean different ways, not that well designed software wouldn't use the most appropriate form of relationship regardless of its intent. Also I guess I was griping a bit, has-a relationships are IMO underused, and IME often more useful than is-a relationships.
Further to my original point, I think a lot of programmers, especially less experienced programmers (but not at all exclusively), tend to forget that OO can be used simply as a form of encapsulation that makes organizing their code easier. They get so distracted by all the subtleties of inheritance and overloading and polymorphism that they forget that OO can be used in simple ways to achieve complex goals in manageable ways. In fact one of the reasons I like Perl and its anonymous subroutines/closures is related to this. It's a simple way to define what is more or less a single-method class/object that encapsulates its data conveniently. The classic case is a counter object versus a counter closure - both sketched below.
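For concreteness, the classic pair side by side (my sketch): a counter class and a counter closure, each keeping its count private.

    package Counter;
    sub new  { my ($class) = @_; return bless { n => 0 }, $class }
    sub incr { my ($self)  = @_; return ++$self->{n} }

    package main;

    sub make_counter {
        my $n = 0;
        return sub { return ++$n };    # $n survives, visible only here
    }

    my $obj  = Counter->new;      # object: data plus a package of methods
    my $code = make_counter();    # closure: data plus exactly one behaviour
    print $obj->incr, ' ', $code->(), "\n";    # 1 1
    print $obj->incr, ' ', $code->(), "\n";    # 2 2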
Re: The world is not object oriented
by zentara (Cardinal) on Jan 02, 2004 at 17:51 UTC
I never looked at the world in terms of "objects"; I see it more as a collection of "detached threads". Each "thread" is a "sequence of events". The threads are all interwoven to produce our "reality". Hmmm...maybe I should write a poem. :-)
Re: The world is not object oriented
by Anonymous Monk on Jan 03, 2004 at 09:32 UTC
In other words I'm dealing with "things" whose only reality is convention. Conventions whose intrinsic non-reality is demonstrated when they change over time, or depending on location, causing no end of headaches for software maintainers.
I think classifying such things as intrinsically not-real misses something important. Intangible, non-static, evolving things can still be considered "real". Historians deal with such "conventional" realities all the time. The following quote on "objects" of discourse brings such "contextual" existence to the fore:
The conditions necessary for the appearance of an object of discourse, the historical conditions required if one is to 'say anything' about it, and if several people are to say different things about it, the conditions necessary if it is to exist in relation to other objects, if it is to establish with them relations of resemblance, proximity, distance, difference, transformation - as we can see, these conditions are many and imposing. Which means that one cannot speak of [just] anything at any time; it is not easy to say something new; it is not enough for us to open our eyes, to pay attention, or to be aware, for new objects suddenly to light up and emerge out of the ground. ... the object does not await in limbo the order that will free it and enable it to become embodied in a visible and prolix objectivity; it does not pre-exist itself, held back by some obstacle at the first edges of light. It exists under the positive conditions of a complex group of relations.
- Michel Foucault (1972), The Archaeology of Knowledge & The Discourse on Language
If this were a philosophy forum rather than a programming one, I might pursue this thread further.
Obviously I meant "really exists" in a very prosaic sense. When you start talking about what conceptual ideas really exist, life gets very complex, very fast. I know from the history of math just how complex a question this is. For instance standard mathematics insists that virtually all numbers which exist can never be specified in any way, shape, or form. Yet in what sense do they exist..? (And how do you model this state of affairs in a computer?)
If this were a philosophy forum rather than a programming one, I might pursue this thread further.
No, this isn't a philosophy forum. But then, philosophical reasoning belongs in every forum :). At any rate, I thought the passage quoted was interesting and relevant, and that a great number of the objects we programmers use, or discover, or invent, are propped up entirely by groups of relations (conventions, if you will): the very concepts Object and Class in OO, for example. And in a deeply OO language, Classes are Objects too. So when programmers get caught up in "programming objects" == "real world objects" (and variations along those lines), it can be pertinent to ask them what "real world object" the Class object models.
Re: The world is not object oriented
by dragonchild (Archbishop) on Mar 29, 2004 at 20:33 UTC
Is it possible to meld OO and functional programming together? I mean more than just having closures as your attribute values, but really meld the two? Or, for that matter, meld OO and declarative or functional and declarative?
I'm asking the question because it's unclear to me if the various styles are blendable or not. And, this meditation just puts it into perspective for me.
My personal style is to use objects as the gross organizer, but to use a curious blend of procedural and functional within the methods of those objects. I use inheritance as a gross form of reuse and interface enforcement. (Except, of course, when I use it for other things.) And, declarative is used whenever I drop into SQL. Except, I build those declarative statements using that curious blend of OO, functional, and procedural.
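As one small illustration of that blend (hypothetical Report class, assumptions mine): the object provides the gross organization, while the method body leans functional.

    package Report;

    sub new {
        my ($class, @amounts) = @_;
        return bless { amounts => [@amounts] }, $class;
    }

    # functional core: filter with grep, then fold, rather than index loops
    sub total_over {
        my ($self, $threshold) = @_;
        my $sum = 0;
        $sum += $_ for grep { $_ > $threshold } @{ $self->{amounts} };
        return $sum;
    }

    package main;
    my $r = Report->new(5, 50, 500);
    print $r->total_over(10), "\n";    # 550 - only 50 and 500 pass the grep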
Speaking of functional ... functional has always seemed, to me, to be more about data than OO is. Functional builds functions from functions, but that's just so you can process lists. The lists are what actually go through the functions-of-functions. So, in a way, functional programming is the style of programming most closely tied to the data-flow diagram. Everything just flows, like the Mississippi delta, at its own pace. Kinda like the Sixties.
OO, on the other hand, is much more like urban life. Everyone distrusts everyone, doesn't allow anyone to peek inside the windows, and everything has a contract. If you violate the contract, you're taken out back and shot like the dog you are.
Declarative, to me, has always seemed to be the style of programming that Louis XIV or Henry VIII would have used. "Give me THIS, and be snappy about it!" No if, ands, or buts.
Procedural is what you do when you're speaking to an 8 yr. old who just couldn't care about what you're asking, but he'll still do it. You just have to be really specific. It's not enough to say "Clean your room." You have to say "Pick up the stuff on your floor, organize your desk, make your bed, and fold your clothes". Even then, you'll miss something.
------
We are the carpenters and bricklayers of the Information Age.
Then there are Damian modules.... *sigh* ... that's not about being less-lazy -- that's about being on some really good drugs -- you know, there is no spoon. - flyingmoose
Re: The world is not object oriented
by Anonymous Monk on Jan 04, 2004 at 16:55 UTC
I'd be interested in your experience/opinions of problems that are best modelled without using OO.
You suggest GPS satellites requiring a different notion of time; but I don't see why that would invalidate the use of encapsulation and an interface.
With all due respect, I find your 'microscopic chair' and 'Day/Night' examples embarrassingly ill-thought-out. The rest of your comments don't seem particularly tied to OO - and many of these issues are further compounded in other paradigms.
I have always held the same view, but have yet to find a compelling reason not to use OO; one that doesn't amount to semantic pedantry, saving keystrokes/CPU cycles, or accommodating Computer Science obscurities.
thanks