How do I handle a DB error when a SQL statement must not fail?
by ted.byers (Scribe) on Mar 28, 2014 at 22:25 UTC
ted.byers has asked for the wisdom of the Perl Monks concerning the following question:
Here is some sample code showing how I normally handle database errors. It is obviously a contrived example, but it shows that Exception::Class::TryCatch catches deadlock errors. NB: I found that I can't call begin_work if AutoCommit is already 0; in that case I send 'BEGIN' to the database instead. I don't need the latter if AutoCommit is 1 and I use begin_work: both variants produce the same output. There is a second, very similar script, and when the two are executed at nearly the same time (within a second or two), the one executed second dies with a deadlock error.
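For reference, here is a minimal sketch of the pattern I mean. The DSN, credentials, and table names (t1, t2) are placeholders, not my real schema, and the real script does more work inside the transaction:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Exception::Class::TryCatch;

# Placeholder connection settings -- substitute your own DSN/credentials.
my $dbh = DBI->connect(
    'dbi:mysql:database=test;host=localhost', 'user', 'password',
    { RaiseError => 1, AutoCommit => 1 }
) or die DBI->errstr;

try eval {
    $dbh->begin_work;    # only legal while AutoCommit is 1
    $dbh->do('UPDATE t1 SET x = x + 1 WHERE id = 1');
    $dbh->do('UPDATE t2 SET y = y + 1 WHERE id = 1');
    $dbh->commit;
};
if ( catch my $err ) {
    $dbh->rollback;
    # MySQL reports deadlocks as error 1213 (ER_LOCK_DEADLOCK),
    # with "Deadlock found" in the message text.
    if ( $err->error =~ /Deadlock found/i ) {
        warn 'deadlock detected: ', $err->error;
    }
    else {
        die $err;    # anything else is unexpected; re-throw
    }
}
```

Running two copies of this (updating the same rows in opposite orders) within a second or two of each other is what produces the deadlock in the second one.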
My production case is a little different, in that AutoCommit is on (which, as you know, turns each statement into its own transaction). I still need to devise a test that verifies this code can detect any deadlock that may arise from my production code. The production code involves two individual INSERT statements, each executed using $dbh->do($sql). These INSERT statements must not fail, so I need help figuring out how to retry them (and keep retrying) until they succeed, or log them to a file after, say, 10 retries if they continue to fail. Hence my questions are: first, how do I modify the above code so that it better tests for deadlock errors involving a single INSERT statement with AutoCommit on; and second, how do I modify it so that it retries the inserts until they either succeed or we give up because they have failed too many times (logging the error that explains why they failed)?
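Something like the following bounded retry loop is what I have in mind; it is only a sketch (the log path and the choice to retry only MySQL errors 1213 and 1205 are my assumptions), and I would welcome corrections:

```perl
use strict;
use warnings;
use DBI;

my $MAX_TRIES = 10;

# Retry a single INSERT (AutoCommit on, so each attempt is its own
# transaction) until it succeeds or $MAX_TRIES attempts have failed.
sub insert_with_retry {
    my ( $dbh, $sql, @bind ) = @_;
    my $err;
    for my $try ( 1 .. $MAX_TRIES ) {
        my $ok = eval { $dbh->do( $sql, undef, @bind ); 1 };
        return 1 if $ok;
        $err = $dbh->errstr // $@;
        # Retry only errors that can succeed later: deadlock (1213)
        # or lock wait timeout (1205). Anything else will fail again.
        last unless ( $dbh->err // 0 ) == 1213
                 || ( $dbh->err // 0 ) == 1205;
        sleep 1;    # brief back-off before the next attempt
    }
    # Give up, but record the statement and the reason, so that no
    # data vanishes without a trace.
    open my $log, '>>', '/var/log/myapp/failed_inserts.log'
        or die "cannot open log: $!";
    print {$log} scalar(localtime), " FAILED: $sql [@bind] : $err\n";
    close $log;
    return 0;
}
```

The idea is that a successful attempt returns immediately, a retriable error sleeps and tries again, and a non-retriable error (or exhausting the retry budget) lands in a log file that I can replay by hand.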
The context is that these inserts are invoked from a CGI script AFTER the data has been validated. I know the SQL I am using is generally valid, as in 99.99% of the instances in which it is executed, the data is stored properly. Alas, the 0.01% or fewer of the transactions that fail leave no record of their existence in the MySQL logs, the web server logs, or the logs I create from Perl. This is a problem, as none of the data involved can be lost.
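To stop failures from disappearing silently, one option I am considering (an assumption on my part, not something the above code already does) is DBI's HandleError attribute, which runs before RaiseError throws, so every failure leaves a trace even if later handling loses the exception:

```perl
use strict;
use warnings;
use DBI;

# Placeholder DSN/credentials and log path.
my $dbh = DBI->connect(
    'dbi:mysql:database=test;host=localhost', 'user', 'password',
    {
        RaiseError  => 1,
        PrintError  => 0,
        HandleError => sub {
            my ($msg) = @_;
            # Append to a private log; ignore logging failures so the
            # original database error is never masked.
            if ( open my $log, '>>', '/tmp/dbi_errors.log' ) {
                print {$log} scalar(localtime), " $msg\n";
                close $log;
            }
            return 0;    # fall through, so RaiseError still throws
        },
    }
) or die DBI->errstr;
```

Returning false from the callback lets RaiseError behave normally, so existing error handling is unchanged; the callback just guarantees a log entry first.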
At this point, I have had no luck figuring out why there are random intervals of 5 to 10 minutes during which data is lost, averaging one such period per week (most weeks have one, some have two, and some have none). It was suggested to me a few days ago that I may be encountering a deadlock when two instances of my CGI script try to insert data at the same time. Alas, I don't yet know how to test whether or not this is true, nor, if it is, how to improve my error handling so that the insert that fails is retried. Nor do I know what else may be awry that can strike so rarely.
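My best guess at how to provoke a deadlock deliberately (so I can at least test the detection path) is the classic opposite-lock-order trick. This sketch assumes an InnoDB table t containing rows with id 1 and 2, plus placeholder connection details:

```perl
use strict;
use warnings;
use DBI;

# Two children lock rows 1 and 2 in opposite orders; InnoDB must kill
# one of them with error 1213 (ER_LOCK_DEADLOCK).
my @orders = ( [ 1, 2 ], [ 2, 1 ] );
for my $pair (@orders) {
    next if fork();    # parent keeps looping; child continues below
    my $dbh = DBI->connect(
        'dbi:mysql:database=test;host=localhost', 'user', 'password',
        { RaiseError => 1, AutoCommit => 0 }
    ) or die DBI->errstr;
    my ( $first, $second ) = @$pair;
    $dbh->do("SELECT * FROM t WHERE id = $first FOR UPDATE");
    sleep 2;           # give the other child time to take its lock
    eval { $dbh->do("SELECT * FROM t WHERE id = $second FOR UPDATE") };
    print $@ ? "deadlock: $@" : "no deadlock\n";
    $dbh->rollback;
    exit;
}
wait() for 1 .. 2;     # reap both children
```

If this is a sound way to reproduce the condition, I can then point the same two children at my real INSERT statements and confirm that the retry logic actually fires.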
I would welcome any aid you can provide.