Re: Re: Re: Re: J2EE is too complicated - why not Perl?
by BrowserUk (Pope) on Dec 07, 2003 at 15:20 UTC
Thanks for taking the time to give such a comprehensive answer. I'm not completely convinced by it yet -- but that's probably no surprise :) I started writing a blow-by-blow counter-argument, but stopped as I decided you probably would not be interested in reading it, and that it may well be considered too far OT for this place.
Some parts of your post read a little like "I have a relational hammer, so the whole world looks like a Codd-defined nail" :) Please take that in the humourous manner in which it was intended. If you do have any interest in looking at an alternative viewpoint, you might like to read The OODBMS manifesto. The word "manifesto" kind of put me off reading it the first time I was referred to it, but I bit the bullet and read it anyway and was glad I did. It's not all that long. An hour or so should do it as a first pass.
The following is what I had written before I stepped back a little. I include it only as an "if you're interested" sample of my thinking.
IMO, most of the advantages that you cite for RDBMSs are denatured by the need to force fit the mapping of objects to a relational view.
For example, atomic transactions. The simple act, from the view of the OO programmer, of modifying a single instance of a single object becomes a nightmare to translate into an 'atomic transaction' via SQL if that object is a compound object. Even if our objects are limited to single inheritance, the object in question may be a subclass of a subclass, and may contain one or more collections (sets/bags) of a couple of other object types. Saving a single instance of this object to an RDBMS would require modifications to at least 4 tables to be done as a single transaction, but the nature of the way objects are constructed (in the memory of the application) means that 4 different classes will be involved in that transaction, each performing its own constraint checking and exception handling. As often as not, the four classes will be written and maintained by 4 separate programmers.
Building all the code for each of these four classes into a single executable and coordinating the DB access to atomise the transaction is awkward to say the least. That the very next application using the same top level class has to perform the exact same juggling act is wasteful of effort.
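To make the juggling act concrete, here is a minimal sketch of what every such application has to repeat by hand, using DBI. The four-table schema (person/employee/address/skill) and all names are invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Saving ONE logical object means coordinating FOUR tables ourselves,
# and every application using the class repeats this code.
sub save_employee {
    my ($dbh, $emp) = @_;
    eval {
        $dbh->begin_work;
        $dbh->do('INSERT INTO person   (id, name)        VALUES (?, ?)',
                 undef, $emp->{id}, $emp->{name});
        $dbh->do('INSERT INTO employee (id, salary)      VALUES (?, ?)',
                 undef, $emp->{id}, $emp->{salary});
        $dbh->do('INSERT INTO address  (person_id, city) VALUES (?, ?)',
                 undef, $emp->{id}, $_) for @{ $emp->{addresses} };
        $dbh->do('INSERT INTO skill    (person_id, name) VALUES (?, ?)',
                 undef, $emp->{id}, $_) for @{ $emp->{skills} };
        $dbh->commit;
    };
    if ($@) {
        $dbh->rollback;    # one failed piece must undo all four tables
        die "save failed: $@";
    }
    return 1;
}
```

Note that the transaction boundary lives in the application, not with the data: change the class hierarchy and this code must be found and changed everywhere it was duplicated.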
Now subclass this class and try to save an object of the new class. The atomicity of the transaction at the superclass level is now wrong: we need to move the boundary of that atomicity up a level, to the subclass. Every time we add to the class structure, whether through inheritance or composition, the task of managing the boundaries of the transactions moves too, and must be repeated in every application that makes use of the class.
The only effective, reusable way of avoiding this is to place the code that performs the updates of individual classes/tables within the DB itself, and have the DBM manage the transactions for us. In other words, we need to be able to tell the DBM, "store or update this object instance", and have it trigger the sequence of updates required, treating the whole sequence as a single transaction.
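One crude way to demonstrate the idea -- the application says "store this object" and the DB fans the store out as part of a single atomic statement -- is an updatable view with an INSTEAD OF trigger. This is only a sketch: SQLite stands in for the DBM, and the two-table schema is invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do($_) for (
    'CREATE TABLE person   (id INTEGER PRIMARY KEY, name TEXT)',
    'CREATE TABLE employee (id INTEGER PRIMARY KEY, salary INTEGER)',

    # Present the compound object as a single "object view"...
    q{CREATE VIEW employee_obj AS
        SELECT p.id, p.name, e.salary
          FROM person p JOIN employee e ON e.id = p.id},

    # ...and let the DB itself trigger the sequence of updates,
    # all within the one atomic statement.
    q{CREATE TRIGGER store_employee INSTEAD OF INSERT ON employee_obj
      BEGIN
          INSERT INTO person   (id, name)   VALUES (NEW.id, NEW.name);
          INSERT INTO employee (id, salary) VALUES (NEW.id, NEW.salary);
      END},
);

# The application just says "store this object" -- no juggling.
$dbh->do('INSERT INTO employee_obj (id, name, salary) VALUES (?, ?, ?)',
         undef, 1, 'Alice', 50_000);
```

The fan-out logic now lives with the data, in one place, instead of being recreated in every client application.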
With respect to using an SQL client as a debugger: did you ever see the mess that results from (wrongly) modifying the contents of a complex C tree or list structure in memory using a debugger? You tweak a pointer somewhere to see the effect of your proposed correction to a bug you're tracking down, and get the alignment wrong. You type the "go" cmd to restart the application and all hell breaks loose :)
The problem is that using SQL to view and modify a compound object in terms of its constituent tables, rather than as a complete object, is fundamentally dangerous -- in the same way as tweaking pointers in memory. The difference is that tweaking memory pointers only screws up the transitory data held in memory, whereas doing it to bits (tuples) of objects in your DB is likely to screw up the permanent state of your data.
And that's my point. Viewing, and being able to view, the component parts of compound objects as individual tuples held in disparate tables, and to modify them, is exactly the same as being able to modify tables whilst bypassing their relational (foreign key) constraints -- it is so dangerous that it simply should not be possible to do it.
With respect to the physical storage and performance. Giving the programmer an OO view of his OO data does not preclude using an efficient (tabular/tuple-based) physical storage mechanism. It simply hides this from him, and places the code that manages the mapping, constraint checking and transactioning in a single, central place where it can be got right once and optimised once, rather than being spread all over every application and requiring every programmer to understand and recreate it over and over again.
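The same centralisation can be sketched on the class side too: one base class owns the transaction boundary, written and debugged once, with subclasses supplying only their own mapping and constraint hooks. All names here are hypothetical, and the hooks are stand-ins for whatever the real mapping layer would do:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The single, central place: Persistent owns the transaction boundary.
package Persistent;

sub save {
    my ($self, $dbh) = @_;
    eval {
        $dbh->begin_work;
        $self->check_constraints;    # subclass hook: object invariants
        $self->store_rows($dbh);     # subclass hook: object -> tables
        $dbh->commit;
    };
    if (my $err = $@) {
        $dbh->rollback;              # got right once, optimised once
        die $err;
    }
    return 1;
}

# A subclass supplies only its own mapping and constraints.
package Employee;
our @ISA = ('Persistent');

sub new { my ($class, %f) = @_; bless {%f}, $class }

sub check_constraints {
    my $self = shift;
    die "salary must be positive\n" unless $self->{salary} > 0;
}

sub store_rows {
    my ($self, $dbh) = @_;
    $dbh->do('INSERT INTO person   (id, name)   VALUES (?, ?)',
             undef, $self->{id}, $self->{name});
    $dbh->do('INSERT INTO employee (id, salary) VALUES (?, ?)',
             undef, $self->{id}, $self->{salary});
}
```

An application then just calls `$obj->save($dbh)`; subclassing further moves the hooks, not the transaction logic.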
This is a long-time mindware project of mine. I first read Codd's original 1970 paper in 1976, as part of the preparation for my end-of-term project. I didn't understand a lot of it back then, but down the years I've understood more. I've been a long-time proponent of RDBMSs, but for the last 5 or so years I've begun to see the need for something more. That's not to say that the underlying tenets of relational logic have changed. Only that the programmer's view of them should.