http://www.perlmonks.org?node_id=1198078


in reply to Re^5: copying mysql table data to oracle table
in thread copying mysql table data to oracle table

If your transaction fails, you go back to the beginning, meaning you start a new transaction, fetch the record again, and decide what to do about it. Maybe the intervening update obviated the need for your change, and maybe it didn't. But the point is, that's what you can do about failed transactions. You just retry them.

Imagine for a moment that the script were (accidentally) executed twice, simultaneously. The two instances would lock each other out and, assuming there is no deadlock, both would keep retrying forever. I am not sure what the benefit of an explicit transaction is in this case anyway.
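
To make the pattern concrete, here is a minimal sketch of such a retry loop: plain DBI against Oracle, RaiseError on, AutoCommit off. The connection details and apply_change() are placeholders for whatever the fetch-and-decide step actually is.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details. RaiseError makes failures die so
    # eval can catch them; AutoCommit off gives us real transactions.
    my $dbh = DBI->connect('dbi:Oracle:orcl', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });

    sub apply_change {
        my ($dbh) = @_;
        # hypothetical: re-fetch the record and update it only if the
        # intervening change has not already obviated ours
    }

    # Retry loop as described in the quoted text. Note that if a second
    # instance keeps colliding with this one, the loop spins forever.
    my $done = 0;
    until ($done) {
        eval {
            apply_change($dbh);
            $dbh->commit;
            $done = 1;
        };
        $dbh->rollback if $@;    # failed transaction: undo, then retry
    }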

throwing the whole thing in a big perl hash for later reference. Bad Idea, right?

Oh, I wasn't referring to that at all. It would only be a bad idea because it causes a data transfer that can easily be avoided.
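
A sketch of what "avoided" could look like, under one possible setup: if Oracle can reach the MySQL source through a database link (called mysql_link here; the link, table, and column names are all invented for the example), a single server-side MERGE copies the rows without any of them passing through the Perl client. $ora_dbh is an Oracle DBI handle as in the earlier sketch.

    # Hypothetical server-side copy: the rows never pass through Perl.
    # mysql_link and the products table are illustrative names only.
    $ora_dbh->do(<<'SQL');
    MERGE INTO products t
    USING (SELECT id, name, price FROM "products"@mysql_link) s
    ON (t.id = s.id)
    WHEN MATCHED THEN
        UPDATE SET t.name = s.name, t.price = s.price
    WHEN NOT MATCHED THEN
        INSERT (id, name, price) VALUES (s.id, s.name, s.price)
    SQL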

You at least want to be able to tell the user, "here are the records that didn't get updated."

The case here is a one-time data migration; exception handling just to produce human-readable messages is not worth the effort. Besides, the WHERE clause should include all the constraint checks anyway. If you need to know which records were not inserted, use MERGE's error logging clause, or simply run a WHERE NOT EXISTS() query after the operation is finished to see what didn't make it.
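
A minimal version of that post-hoc check, reusing the same assumed link and names from the sketch above:

    # Which source rows did not make it across?
    my $missing = $ora_dbh->selectall_arrayref(<<'SQL');
    SELECT s.id
      FROM "products"@mysql_link s
     WHERE NOT EXISTS (SELECT 1 FROM products t WHERE t.id = s.id)
    SQL
    print "did not make it: @$_\n" for @$missing;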

can we just lock the table and prevent anybody else from making any updates until we're done, or is that going to cheese too many people off?

Locking the table is the easiest way, no question, at least for the brute-force, non-MERGE route. But it is bothersome to others: their queries will wait (unless they specify NOWAIT), and every record in the table stays locked until the whole update is done. Considering that a MERGE can be done here, there is really no reason to lock anything.
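
For completeness, the brute-force variant with the same assumed handle and table. Other sessions' writes queue behind the lock (unless they request NOWAIT) until the commit releases it:

    # Non-MERGE route: hold an exclusive lock on the target for the run.
    # Assumes AutoCommit is off; commit (or rollback) releases the lock.
    $ora_dbh->do('LOCK TABLE products IN EXCLUSIVE MODE');
    # ... perform the row-by-row inserts/updates here ...
    $ora_dbh->commit;    # other writers can proceed again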
