Re^3: How do I handle a DB error when a SQL statement must not fail?

by sundialsvc4 (Monsignor)
on Mar 30, 2014 at 14:24 UTC


in reply to Re^2: How do I handle a DB error when a SQL statement must not fail?
in thread How do I handle a DB error when a SQL statement must not fail?

Aye, I had suspected this was so, but the presence of that SLEEP does render this implementation incomparable to the production case, since the test case has that very serious problem all on its own. As to why the deadlocks are happening, from the material that you have posted here alone, I truly do not know why they would be. Is this a CGI process that is inserting data into a table which is also being used for some very heavy-duty, long-running transactions by another back-end (non-CGI) process, say? An attempt to insert data into a table should not “fail,” and most certainly it should not take any appreciable time to complete. If the table is contended, then a CGI process probably should not be touching it, and the presence of many CGI processes doing so will make the contention considerably worse. (Perhaps this is what your competitor is doing wrong right now?)

What if, for instance, you designated a separate daemon process to which the CGI processes could hand off requests, using one of the existing protocols (SOAP, XML-RPC, FastCGI ... pick one)? Instead of attempting to do the work themselves, the CGI processes would send the request to this server process (or pool) and await a reply. This server would execute each request very quickly, and the requests would no longer contend with one another. (Production-ready servers for all of these are available on CPAN right now, so there is no burden of implementing the plumbing.) A rough sketch of the idea follows.
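Purely for illustration, a minimal sketch of such a hand-off daemon might look like the following, using RPC::XML::Server and DBI. The DSN, credentials, table, columns, and the method name queue.insert are all assumptions, not your actual schema. Because this server handles requests one at a time, the inserts are naturally serialized:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use RPC::XML::Server;

    # One persistent connection, held by the single daemon process.
    my $dbh = DBI->connect(
        'dbi:mysql:database=example;host=localhost',    # assumed DSN
        'user', 'password',
        { RaiseError => 1, AutoCommit => 1 },
    );

    my $server = RPC::XML::Server->new( port => 9000 );

    # CGI scripts call "queue.insert" instead of touching the table themselves.
    $server->add_method( {
        name      => 'queue.insert',
        signature => ['int string string'],    # returns int, takes two strings
        code      => sub {
            my ( $srv, $col1, $col2 ) = @_;
            my $sth = $dbh->prepare_cached(
                'INSERT INTO transactions (col1, col2) VALUES (?, ?)'   # assumed table
            );
            $sth->execute( $col1, $col2 );
            return 1;
        },
    } );

    # Requests are served one after another, so no two inserts ever overlap.
    $server->server_loop;

The point of the sketch is only that a single long-lived process owns the table, so the many short-lived CGI processes never compete for it.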

I believe, or at least by now I suspect, that this contention, whatever its true source may be, is what is causing the deadlocks and is the root cause of this matter, and that some design change may be required to deal with it permanently and effectively.


Re^4: How do I handle a DB error when a SQL statement must not fail?
by ted.byers (Scribe) on Mar 31, 2014 at 05:14 UTC

    I wrote that test just to see whether or not the Exception::Class::TryCatch code would catch a deadlock error. I have not taken it further, to test code more similar to my production code, because I have not yet figured out how to do that right.
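    For reference, a minimal sketch of that pattern, catching a deadlock with Exception::Class::TryCatch and retrying, might look like the following. The DSN, the statement, the retry limit, and matching on MySQL's “Deadlock found” message (error 1213) are assumptions for illustration only:

        use strict;
        use warnings;
        use DBI;
        use Exception::Class::TryCatch;

        my $dbh = DBI->connect(
            'dbi:mysql:database=example;host=localhost',    # assumed DSN
            'user', 'password',
            { RaiseError => 1, AutoCommit => 1 },
        );

        my $max_tries = 3;                       # assumed retry limit
        for my $try ( 1 .. $max_tries ) {
            try eval {
                $dbh->do( 'INSERT INTO transactions (col1, col2) VALUES (?, ?)',
                          undef, 'a', 'b' );     # assumed statement
            };
            if ( catch my $err ) {
                # MySQL reports deadlocks as error 1213, "Deadlock found ...";
                # catch() wraps DBI's die string in an Exception::Class::Base object.
                if ( $err->error =~ /Deadlock found/i && $try < $max_tries ) {
                    next;                        # retry the statement
                }
                $err->rethrow;                   # anything else is fatal
            }
            last;                                # success
        }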

    This CGI process touches tables that are touched by only one other process, which does not insert data into them but rather reads from them, in a scheduled task that starts at 1 AM EST. The scheduled task itself takes half an hour or so to complete, but that is because it executes a couple hundred SELECT statements (there is no long-running transaction; each statement takes less than a second to complete). Where that scheduled task spends most of its time is in calculating derived results, and in date manipulation, between SQL statements. As these random periods of data loss have never happened at the time that scheduled task runs, it seems unlikely to be a contributor to the problem.

    I had considered using another daemon or service (but it has to work on both Linux and Windows, as I do much of my initial development on SUSE, while testing and production happen on a Windows box), but I have not figured out how best to do that. Can I trouble you to recommend a couple of the best multithreaded XML-RPC and FastCGI servers that handle this sort of thing? I am guessing that the servers you have in mind can take simultaneous requests from a few to many (how many? or how many can be handled by a pool of such servers?) instances of my CGI script, and put them into a kind of buffer or queue. Would that be right? I don't see how you can take data from multiple threads or processes and insert it into a database without using some sort of queue to ensure that no attempted insert happens at the same time as another insert statement. And can you point me to a good reference that explains how all this ought to work, and that gives examples of implementing it using the servers you prefer?
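    For concreteness, the CGI-side counterpart to the daemon sketched earlier could be as small as the following, using RPC::XML::Client from the same RPC::XML distribution; the URL and the queue.insert method name simply mirror the assumptions made there:

        use strict;
        use warnings;
        use RPC::XML::Client;

        # Each CGI instance hands its insert to the daemon and waits for the reply.
        my $client = RPC::XML::Client->new('http://localhost:9000/RPC2');   # assumed URL

        my $response = $client->send_request( 'queue.insert', 'a', 'b' );

        # send_request() returns a plain string on a transport error, otherwise an
        # RPC::XML data object (an RPC::XML::fault object if the server threw).
        die "transport error: $response\n" unless ref $response;
        print "inserted OK\n" if $response->value;

    The daemon is what actually serializes the inserts; each CGI script merely waits for its reply.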

    Thanks

    Ted
