
Re: How do I handle a DB error when a SQL statement must not fail?

by sundialsvc4 (Abbot)
on Mar 29, 2014 at 17:48 UTC ( #1080220=note )

in reply to How do I handle a DB error when a SQL statement must not fail?

I suspect that the root cause of the problem is right here:

$sql = "SELECT SLEEP(10);";

This asks the SQL engine itself to sleep, which is probably what is stopping things up. Furthermore, it holds a transaction open for ten seconds, and that, I wager, is the root cause of your deadlocks.

If you need to take a snooze, your application program should do it on its own; it should not keep a transaction open for the duration.
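A minimal sketch of what that might look like with DBI (the table and column names here are invented for illustration, and `$dbh` is assumed to be an existing connected handle):

```perl
# Sketch only: commit first, then pause. The work stays inside a
# short transaction; the sleep happens in Perl, after the commit,
# so no locks are held while we wait.
$dbh->begin_work;
$dbh->do('UPDATE jobs SET status = ? WHERE id = ?', undef, 'done', $job_id);
$dbh->commit;    # locks released here

sleep 10;        # the application, not the database, takes the snooze
```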


Replies are listed 'Best First'.
Re^2: How do I handle a DB error when a SQL statement must not fail?
by ted.byers (Monk) on Mar 29, 2014 at 21:24 UTC


    Actually, the code I showed was a test script designed to test the ability of Exception::Class::TryCatch to catch deadlocks (I have a whole suite of similar tests that verify its ability to catch more mundane SQL errors, focused on the construction of the SQL statement itself). It works together with a nearly identical script that accesses the same table but performs different updates on it. The only purpose of the SLEEP(10) is to give me a chance to execute the second test script before the first completes; I can guarantee that there is no SLEEP(10) statement anywhere in my production code. The applicability of the example script I showed relates to the fact that I use a try/catch block for the transactions in question. The only difference in the try block is that I have two SQL statements, and autocommit is on (so there is no need for begin/commit). The second aspect of its applicability is the question of how I ought to restructure that try/catch logic so that, if the first attempt to insert fails, it is retried until either a limited number of attempts has been made (at which point the failure and relevant details are logged) or it succeeds. Remember, this is part of a CGI program, and one of the constraints I impose on my code is that it must respond within 5 to 10 seconds from the moment the client machine submits the request to the time it receives my response (my competitors typically take 20 seconds to respond).

    Recall, I had asked how to refactor from simple try/catch logic to try/catch-and-retry logic, where retries might be limited to 5 or 10 before the failure is logged. I also don't know, at this stage, whether DBI issues some other error if MySQL simply takes too long to respond without generating an exception, or whether DBI can produce other exceptions unrelated to the correctness of the SQL submitted. How do I identify the error conditions DBI may produce that are unrelated to the correctness of the submitted SQL, and what is the best way to respond (server side only) so as to ensure data is never lost?
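Since the concrete question is what try/catch-and-retry logic might look like, here is a minimal, self-contained sketch. The helper name `retry_on_deadlock`, the limit, and the backoff are illustrative inventions; the solid fact it leans on is that MySQL reports deadlocks as error 1213 and lock-wait timeouts as 1205, with "Deadlock" appearing in the deadlock message, so the error text can be examined to decide whether retrying is worthwhile.

```perl
use strict;
use warnings;

# Sketch of try/catch-and-retry logic (names and limits are
# illustrative, not a blessed API). Only transient lock errors are
# retried; anything else -- a malformed statement, say -- is rethrown
# at once, since retrying it cannot help.
sub retry_on_deadlock {
    my ($work, $max_tries) = @_;
    $max_tries ||= 5;
    my $err;
    for my $try (1 .. $max_tries) {
        my $result = eval { $work->() };
        return $result unless $@;
        $err = $@;
        # MySQL: 1213 = deadlock, 1205 = lock wait timeout.
        die $err unless $err =~ /\b(?:1213|1205)\b|deadlock/i;
        sleep 1 if $try < $max_tries;    # brief pause before retrying
    }
    # Every attempt failed: log the details, then propagate.
    warn "giving up after $max_tries tries: $err";
    die $err;
}
```

In production this would wrap the actual insert, e.g. `retry_on_deadlock(sub { $dbh->do($insert_sql, undef, @values) });` — with the caveat that when autocommit is off, the *whole* transaction must be re-run, not just the last statement, because a deadlock rolls the transaction back.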

    Thanks again


      Aye, I had suspected this was so; however, the presence of that SLEEP does render this implementation incomparable to the production case, since the test case has that very serious problem all on its own. And as to why the deadlocks are happening, reading just the material that you have posted here, I truly do not know why they would be. Is this a CGI process that is inserting data into a table that is also being used for some very heavy-duty, long-running transactions by another back-end (non-CGI) process, say? An attempt to insert data into a table should not “fail,” and it most certainly should not take any appreciable time. If the table is contentious, then a CGI process probably should not be touching it, and the presence of many CGI processes doing so will make the contention considerably worse. (Perhaps this is what your competitor is right now doing wrong?)

      What if, for instance, you designated another daemon process to which the CGI processes could hand off requests, say using one of the existing protocols (SOAP, XML-RPC, FastCGI ... pick one)? Instead of attempting to do the work themselves, the CGI processes would send the request to this server process (or pool) and await a reply. This server would execute requests very quickly, but now the requests would not contend with one another. (Production-ready servers for all of these are available right now on CPAN, so there is no burden of implementing the plumbing.)

      I believe, or at least by now I suspect, that this contention, whatever its true source may be, is the root cause of the deadlocks, and that some design change may be required to deal with it permanently and effectively.

        I wrote that test just to check whether the Exception::Class::TryCatch code would catch a deadlock error. I have not taken it further to test code more similar to my production code, because I have not yet figured out how to do that properly.

        This CGI process touches tables that are touched by only one other process, which does not insert data into them but rather reads from them (in a scheduled task that starts at 1 AM EST). The scheduled task itself takes half an hour or so to complete, but that is because it executes a couple hundred SELECT statements (there is no long-running transaction, as each takes less than a second to complete). Where that scheduled task spends most of its time is in the calculation of derived results, and in date manipulation, between SQL statements. As these random periods of data loss have never happened at the time this scheduled task runs, it seems unlikely to be a contributor to the problem.

        I had considered using another daemon or service (it has to work on both Linux and Windows, as I do much of my initial development on SUSE but testing and production happen on a Windows box), but I have not figured out how best to do that. Can I trouble you for a recommendation of a couple of the best multithreaded XML-RPC and FastCGI servers that handle this sort of thing? I am guessing that the servers you have in mind can take simultaneous requests from a few to many instances of my CGI script (how many? or how many can be handled by a pool of such servers?) and put them into a kind of buffer or queue. Would that be right? I don't see how you can take data from multiple threads or processes and insert it into a database without using some sort of queue to ensure that no attempted insert happens at the same time as another. And can you point me to a good reference that explains how all this ought to work, with examples implemented using the servers you prefer?
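On the queue question: that is exactly the idea — a single consumer drains a queue, so inserts are serialized no matter how many producers there are. A minimal, self-contained sketch of the pattern using the core Thread::Queue module (the job payloads and the in-memory "insert" are stand-ins; in production the worker would own the one DBI handle, and the producers would be the CGI processes talking over FastCGI, XML-RPC, or a job server):

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# One worker thread owns the "database"; everyone else just enqueues.
my $q = Thread::Queue->new;

my $worker = threads->create(sub {
    my $handled = 0;
    while (defined(my $job = $q->dequeue)) {
        # In production this is where the single DBI handle would run:
        #   $dbh->do('INSERT INTO requests (payload) VALUES (?)', undef, $job);
        $handled++;
    }
    return $handled;    # jobs were processed strictly one at a time
});

# Producers (stand-ins for concurrent CGI requests) enqueue work.
$q->enqueue("request $_") for 1 .. 10;
$q->enqueue(undef);         # tell the worker to shut down

my $handled = $worker->join;
print "worker handled $handled jobs\n";
```

Because `dequeue` blocks and only one worker drains the queue, no two inserts can ever run concurrently, which is precisely the serialization asked about above; it does require a threads-enabled perl, which Strawberry/ActivePerl on Windows and typical Linux builds provide.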


