I've been following along on this thread and (mostly) reserving comment, but...
What you are advocating is designing your database schema to reflect the way your programs choose to model the underlying data.
This is so wrong.
DB schemas should (can only) reflect the structure inherent in the data, not how one particular application chooses to model (some subset of) that data.
What if a new application needs to model (a different subset of) that data in a different way?
You then have two choices.
- Adapt the schema to the new application and rewrite the first application to suit.
This will inevitably leave the schema a compromise between the requirements of both applications, and throw away any DB (index) tuning that had been done for the first application.
It can also lead to the situation where the needs of one application are irreconcilable with the other.
- Duplicate the data into two databases, each with a schema tailored to its particular application.
Hopefully this choice needs no elaboration.
The advantages of using stored procedures, writable views and triggers to represent the business logic are:
- You can leave the database schema to the DBA. He can normalise the data and set up the referential integrity according to the structural needs of the data itself.
He can then tune the DB (indexes etc.) over time in accordance with the actual loads placed upon the DB in use.
- The applications get access to their preferred model of the data through stored procedures and views.
This places most of the heavy hitting directly in the DB itself, where it is best handled and best tuned.
- Each application can have whatever view of the data it requires.
- Multiple applications can share the same data, through different views, without compromising the integrity or performance of the underlying schema.
- The applications' access to the data is controlled at the business level. Changes in business rules can be tested, proven and instituted at the DB level, and (can be) invisible to the application code.
- As applications evolve, their view of the data, and the makeup of their subset of the total dataset, can change and be accommodated without necessarily modifying the underlying schema.
New fields and relationships can be added to the schema transparently to existing applications.
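For illustration, here is a minimal sketch of the idea using Python's built-in sqlite3 module as a stand-in RDBMS. All table, view and column names are invented for the example; the point is only that each application reads through its own view while the DBA remains free to alter the base table underneath.

```python
import sqlite3

# A hypothetical normalised base table, owned by the DBA.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        email   TEXT,
        balance REAL DEFAULT 0
    );

    -- A billing application only cares about who owes what...
    CREATE VIEW billing_view AS
        SELECT id, name, balance FROM customers;

    -- ...while a mailing application only cares about contact details.
    CREATE VIEW mailing_view AS
        SELECT id, name, email FROM customers;
""")
db.execute("INSERT INTO customers (name, email, balance) VALUES (?, ?, ?)",
           ("Alice", "alice@example.com", 42.5))

# The DBA can extend the schema without touching either view:
# neither application sees any change at all.
db.execute("ALTER TABLE customers ADD COLUMN phone TEXT")

print(db.execute("SELECT name, balance FROM billing_view").fetchall())
# → [('Alice', 42.5)]
print(db.execute("SELECT name, email FROM mailing_view").fetchall())
# → [('Alice', 'alice@example.com')]
```

The same pattern scales up in a full RDBMS, where the views can also be granted to application accounts so that the base tables are never directly reachable.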
Achieving this level of isolation is hard.
That's why schema design lies in the province of the specialist DBA not the generalist application programmer.
It's also why good DBAs are worth their weight in gold (and usually know it!), and bad ones can bring whole departments and companies to their knees.
- The business retains control over their data.
The data for individual applications all lives in one large business database rather than in lots of little databases.
Even if those small databases all live within the same RDBMS, writing new applications that draw together relationships across the boundaries of those small DBs becomes impossible.
Attempting to correct this by making applications use one or more tables within a single DB results in duplicated data, and/or compromises one application's needs for another's.
The solution, invented long ago by someone far cleverer than myself, is writable views and stored procedures (and triggers to ease the use of SPs).
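As a concrete (if simplified) illustration of a writable view: SQLite has no stored procedures, but its INSTEAD OF triggers are enough to sketch the mechanism. The application below writes a single 'name' field through a view, and the trigger translates that write into the normalised two-column base table. All names here are invented for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- The DBA's normalised base table.
    CREATE TABLE people (
        id    INTEGER PRIMARY KEY,
        first TEXT NOT NULL,
        last  TEXT NOT NULL
    );

    -- The application prefers to see a single 'name' field.
    CREATE VIEW people_view AS
        SELECT id, first || ' ' || last AS name FROM people;

    -- The trigger makes the view writable: it translates inserts against
    -- the view into inserts on the base table, splitting on the first space.
    CREATE TRIGGER people_view_insert INSTEAD OF INSERT ON people_view
    BEGIN
        INSERT INTO people (first, last)
        VALUES (substr(NEW.name, 1, instr(NEW.name, ' ') - 1),
                substr(NEW.name, instr(NEW.name, ' ') + 1));
    END;
""")

# The application writes through the view, never touching the base schema.
db.execute("INSERT INTO people_view (name) VALUES ('Ada Lovelace')")
print(db.execute("SELECT first, last FROM people").fetchall())
# → [('Ada', 'Lovelace')]
```

A real deployment would use the richer trigger and stored-procedure facilities of a full RDBMS, but the isolation property is the same: the application's writes go through its preferred model of the data, and the DB maps them onto the underlying schema.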
The only cogent argument I've seen against the use of SPs and views to implement business rules and provide the isolation layer between applications and the schema is the difficulty that arises if the RDBMS vendor is changed.
This is only a major problem if:
- The new RDBMS doesn't provide for the same functionality as the old.
Moving from PgSQL to MySQL would be a problem currently.
- Too great a reliance is placed on the extended features provided by one RDBMS.
Avoiding 'lock-in' by avoiding 'extended features' is a problem in all fields.
Weighing the decision to use such facilities and risk lock-in is a difficult compromise to make and depends entirely upon the long term benefits of using such facilities.
As much as any other reason, that is why such extended features are evolved & marketed, and why it takes so long for such extensions to be unified into standards.
The truth is that changing DB vendor is always going to be traumatic if the facilities of the RDBMS are being used in a cross-application, joined-up way.
There are several companies out there that will try to sell you on gaining independence from your RDBMS vendor by selling you a middleware layer in which to encapsulate your business rules.
What they don't tell you is that this independence comes at the cost of restricting you to the lowest common denominator across all the RDBMSs they support, just as with all cross-platform libraries.
And, you just moved your lock-in from the RDBMS vendor to the middleware vendor. In practice, the RDBMS vendors are generally better.
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon