Re^5: copying mysql table data to oracle table

by Anonymous Monk
on Aug 25, 2017 at 17:28 UTC


in reply to Re^4: copying mysql table data to oracle table
in thread copying mysql table data to oracle table

The point is, there's nothing you can do about it, given that you haven't locked the table or records.
I don't understand your statement at all. If your transaction fails, you go back to the beginning, meaning you start a new transaction, fetch the record again, and decide what to do about it. Maybe the intervening update obviated the need for your change, and maybe it didn't. But the point is, that's what you can do about failed transactions. You just retry them.
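A minimal sketch of that retry loop in DBI, assuming an Oracle connection; the credentials, the empno/sal values, the retry cap, and the still_needs_update() helper are all placeholders, not anyone's actual code:

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection; AutoCommit off so each attempt is one transaction.
    my $dbh = DBI->connect( 'dbi:Oracle:orcl', 'scott', 'tiger',
        { RaiseError => 1, AutoCommit => 0 } );

    my ( $empno, $new_sal ) = ( 7369, 3000 );    # example values
    sub still_needs_update { 1 }                 # placeholder decision logic

    my $max_tries = 5;   # bound the retries so two competing runs can't spin forever
    for my $try ( 1 .. $max_tries ) {
        my $ok = eval {
            # Go back to the beginning: re-fetch the record in a fresh transaction...
            my $row = $dbh->selectrow_hashref(
                'SELECT * FROM emp WHERE empno = ?', undef, $empno );
            # ...and decide whether the intervening update obviated the change.
            if ( still_needs_update($row) ) {
                $dbh->do( 'UPDATE emp SET sal = ? WHERE empno = ?',
                    undef, $new_sal, $empno );
            }
            $dbh->commit;
            1;
        };
        last if $ok;
        $dbh->rollback;
        warn "transaction failed (try $try): $@";
    }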
The cache I referred to is the database cache, where tables are often kept for future statements.
No, I'm talking about the OP's original idea of starting off with a "select * from emp" and throwing the whole thing in a big perl hash for later reference. Bad Idea, right? Right?
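(For concreteness, the pattern being called a Bad Idea; a sketch assuming empno is the key column:)

    # Pull the whole table into one big perl hash up front...
    my $emp = $dbh->selectall_hashref( 'SELECT * FROM emp', 'empno' );
    # ...and do later lookups against the hash instead of the database,
    # which goes stale the moment anyone else updates the table.
    my $sal = $emp->{7369}{sal};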
If the statement is insert where not exists, there should be no errors...
There could be any sort of column constraint violation, like an out-of-range value or a missing foreign key. You at least want to be able to tell the user, "here are the records that didn't get updated."
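As a sketch of catching those per row (the table, columns, and @rows are assumptions; @rows stands in for data fetched from the MySQL side):

    # Insert-where-not-exists, row by row, capturing constraint failures
    # (e.g. ORA-01438 out-of-range value, ORA-02291 missing foreign key)
    # so the records that didn't get updated can be reported.
    my $ins = $dbh->prepare(q{
        INSERT INTO emp (empno, ename, sal)
        SELECT ?, ?, ? FROM dual
        WHERE NOT EXISTS (SELECT 1 FROM emp WHERE empno = ?)
    });

    my @rows;      # hashrefs fetched from the MySQL side (assumed)
    my @failed;
    for my $row (@rows) {
        eval { $ins->execute( @$row{qw(empno ename sal empno)} ); 1 }
            or push @failed, [ $row->{empno}, $@ ];
    }
    print "didn't get updated: $_->[0] ($_->[1])\n" for @failed;
    $dbh->commit;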
We should not need more context...
The context we need is: can we just lock the table and prevent anybody else from making any updates until we're done, or is that going to cheese too many people off?

Replies are listed 'Best First'.
Re^6: copying mysql table data to oracle table
by chacham (Prior) on Aug 26, 2017 at 19:01 UTC

    If your transaction fails, you go back to the beginning, meaning you start a new transaction, fetch the record again, and decide what to do about it. Maybe the intervening update obviated the need for your change, and maybe it didn't. But the point is, that's what you can do about failed transactions. You just retry them.

    Imagine for a moment that the script was (accidentally) executed twice, simultaneously. Both scripts lock each other out, and assuming there is no deadlock, both will keep retrying forever. I am not sure what the benefit of an explicit transaction is in this case anyway.

    throwing the whole thing in a big perl hash for later reference. Bad Idea, right?

    Oh, I wasn't referring to that at all. It would only be a bad idea because it causes a data transfer that can be easily avoided.

    You at least want to be able to tell the user, "here are the records that didn't get updated."

    The case here is a one-time data migration. Exception handling for the purpose of human-readable messages is not worth the effort. The where clause should include all the constraint checks anyway. If you need to know which records were not inserted, use the appropriate clause in MERGE, or simply do a where not exists() after the operation is finished, to see what didn't make it.
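    A sketch of both options, assuming an Oracle database link to the MySQL side (here called mysql_link, e.g. via the gateway); the table, columns, and link name are made up:

        # MERGE only the rows that don't exist yet, then check what's missing.
        $dbh->do(q{
            MERGE INTO emp t
            USING (SELECT empno, ename, sal FROM emp@mysql_link) s
            ON (t.empno = s.empno)
            WHEN NOT MATCHED THEN
                INSERT (empno, ename, sal) VALUES (s.empno, s.ename, s.sal)
        });
        $dbh->commit;

        # The after-the-fact check: which source rows didn't make it?
        my $missing = $dbh->selectall_arrayref(q{
            SELECT s.empno FROM emp@mysql_link s
            WHERE NOT EXISTS (SELECT 1 FROM emp t WHERE t.empno = s.empno)
        });
        print "didn't make it: $_->[0]\n" for @$missing;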

    can we just lock the table and prevent anybody else from making any updates until we're done, or is that going to cheese too many people off?

    Locking the table is the easiest way, no question; for the brute-force, non-merge approach anyway. But it is bothersome to others: their queries wait (unless they specify NOWAIT), and every record in the table stays locked until the updates are done. Considering a merge can be done instead, there is really no reason to lock anything.
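    The brute-force route, for comparison; a sketch assuming AutoCommit is on, so begin_work opens the transaction that holds the lock:

        # Exclusive lock: other writers wait (or fail fast if they used NOWAIT)
        # until the commit releases it.
        $dbh->begin_work;
        $dbh->do('LOCK TABLE emp IN EXCLUSIVE MODE');
        # ... perform all the inserts/updates here ...
        $dbh->commit;    # releases the lock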
