Ahhh, the crux of the matter! Indeed, a good subject to revisit, tpyahoo.
I always integrate my customers with my deployment process, especially when I'm working with a live database. There's rarely a situation where transactions can't be queued for a few minutes, at least at three AM: long enough to run a block, data-dump, install, test, release cycle. You do have a queue and queue handler on your database, do you not? :)
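The shape of that queue handler can be sketched in a few lines. This is a minimal illustration, not anything from the original post: all class and method names here are hypothetical, and a real handler would persist the queue and run in its own process. The point is only that writers enqueue instead of hitting the database directly, so the "block" and "release" steps become a pause and a drain.

```python
import queue
import threading

class WriteQueue:
    """Hypothetical sketch of a database write queue that can be
    paused during a deployment window and drained afterward."""

    def __init__(self):
        self._q = queue.Queue()
        self._paused = threading.Event()  # set => maintenance window open
        self.applied = []                 # stands in for the real database

    def submit(self, txn):
        # Callers always enqueue; they never write to the DB directly.
        self._q.put(txn)
        if not self._paused.is_set():
            self.drain()

    def pause(self):
        # "Block" step: stop applying writes, but keep accepting them.
        self._paused.set()

    def resume(self):
        # "Release" step: apply everything that piled up during the window.
        self._paused.clear()
        self.drain()

    def drain(self):
        # Apply all queued transactions in arrival order.
        while not self._q.empty():
            self.applied.append(self._q.get())

wq = WriteQueue()
wq.submit("txn-1")   # applied immediately
wq.pause()           # deployment starts
wq.submit("txn-2")   # queued, not applied
wq.submit("txn-3")
wq.resume()          # deployment done: queued work is applied
print(wq.applied)    # ['txn-1', 'txn-2', 'txn-3']
```

Because nothing talks to the database except the drain loop, the dump/install/test steps run against a quiescent system while the application keeps accepting work.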
Warnings are always posted and propagated a day ahead of time, and our web systems post alerts on all database-active pages.
Finally, our systems involve replication and hot spares, so we're used to having more than one live system. We introduce a newly upgraded machine into the mix, point dynamic DNS at it, and watch the transaction logs. If it fails under live traffic, we take it back off-line, pull its transaction logs, and feed them to one of the existing servers.
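That log-replay step is the part people get wrong, because the surviving server has usually already seen some of the failed machine's transactions through replication. A minimal sketch, with entirely hypothetical names (the post doesn't describe its log format): dedupe by transaction id, feed the survivor only what it's missing.

```python
def replay_log(failed_server_log, survivor_applied_ids, apply):
    """Feed a retired server's transaction log to a survivor,
    skipping entries the survivor already has via replication.
    Hypothetical sketch -- log format and apply() are assumptions."""
    replayed = []
    for txn_id, payload in failed_server_log:
        if txn_id in survivor_applied_ids:
            continue  # survivor already committed this one
        apply(txn_id, payload)
        replayed.append(txn_id)
    return replayed

# Usage: the survivor saw txns 1 and 2 before the new machine failed.
applied = {}
log = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
replayed = replay_log(log, {1, 2}, lambda i, p: applied.update({i: p}))
print(replayed)  # [3, 4]
```

The design choice worth noting: replay is driven by the failed machine's log, not by guessing at state, so the same routine works whether the upgrade died one transaction in or a thousand.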
I work as hard on the 'revert' scripts as I do on the 'install' scripts. You'll never catch everything, so be ready.
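One way to enforce that discipline is structural: make the migration runner refuse any step that doesn't ship with a revert, and unwind on failure. This is a hypothetical sketch of that idea, not the poster's actual scripts; step names and the (name, install, revert) shape are my own assumptions.

```python
def run_migration(steps):
    """Run (name, install, revert) steps in order. If any install
    fails, run the reverts for the already-applied steps in
    reverse order, then re-raise. Hypothetical sketch."""
    done = []
    try:
        for name, install, revert in steps:
            install()
            done.append((name, revert))
    except Exception:
        for name, revert in reversed(done):
            revert()  # the revert path deserves the same testing as install
        raise
    return [name for name, _ in done]

# Usage: the second step blows up, and the first is rolled back.
log = []
steps = [
    ("add_column", lambda: log.append("add"), lambda: log.append("undo-add")),
    ("bad_step",   lambda: 1 / 0,             lambda: log.append("never")),
]
try:
    run_migration(steps)
except ZeroDivisionError:
    pass
print(log)  # ['add', 'undo-add']
```

Unwinding in reverse order matters: later steps usually depend on earlier ones, so reverts must peel them off last-in, first-out.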
"There's more than one level to any answer."
In reply to Re: Migrating Code From Test to Production: How to do it right, how to set up an environment that leaves nothing to chance