Re^2: How do I handle a DB error when a SQL statement must not fail?
by ted.byers (Scribe) on Mar 31, 2014 at 19:26 UTC
That, too, was my gut reaction, but the MySQL gurus I talked with suggested it might be a deadlock, so I started testing for that; not being a true MySQL expert, I had no concrete evidence that it wasn't one.
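If it does turn out to be a deadlock, the usual remedy is simply to retry the transaction a bounded number of times, since MySQL reports a deadlock as error 1213 ("Deadlock found when trying to get lock") after rolling the losing transaction back. Here is a minimal sketch of such a retry wrapper; the function name and the backoff delays are my own invention, and in real use the code ref would wrap a DBI transaction (`$dbh->begin_work; ...; $dbh->commit;`):

```perl
use strict;
use warnings;

# Retry a code ref up to $max_tries times when it dies with a
# deadlock-style error. MySQL signals a deadlock with error 1213
# ("Deadlock found when trying to get lock"); any other error, or
# exhausting the retry budget, is re-thrown to the caller.
sub with_deadlock_retry {
    my ($max_tries, $code) = @_;
    my $attempt = 0;
    while (1) {
        $attempt++;
        my $result = eval { $code->() };
        return $result unless $@;
        die $@ if $attempt >= $max_tries
               || $@ !~ /Deadlock found|\b1213\b/;
        # Brief, growing pause before retrying, so the competing
        # transaction has a chance to finish.
        select(undef, undef, undef, 0.05 * $attempt);
    }
}
```

With DBI you would more robustly test `$dbh->err == 1213` inside the eval rather than pattern-matching the message text.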
But neither can I fathom what kind of bug could be in my code that would let it work 99.99% of the time and then fail at apparently random times.
I wonder if it is a question of resource exhaustion, as suggested here by Anonymous Monk on Mar 29, 2014 at 00:21 UTC. That is something I am presently investigating. I finally found a resource monitor that will log resource use (CPU cycles, memory, disk I/O, and network traffic) to a file, and I have beefed up the logging done by MySQL. Hopefully, between those logs and whatever you and the other monks suggest is worth examining, I will get to the bottom of this RSN.
I am still interested in learning about the servers you mentioned (particularly FastCGI and XML::RPC): how they can serialize insertions into a DB when requests come from multiple CGI processes, and how much traffic makes such infrastructure mandatory.
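As an aside, independent CGI processes can serialize a critical section without any new server infrastructure by taking an exclusive advisory lock before inserting. Below is a sketch using flock on a shared lock file; the helper name and lock path are placeholders of mine, and MySQL's GET_LOCK() would achieve the same serialization inside the database itself:

```perl
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_UN);

# Run $code while holding an exclusive lock on $lock_path, so only
# one process at a time executes the critical section. Every CGI
# process that inserts must agree on the same lock path.
sub serialized {
    my ($lock_path, $code) = @_;
    open my $fh, '>>', $lock_path or die "open $lock_path: $!";
    flock $fh, LOCK_EX or die "flock: $!";
    my $result = eval { $code->() };
    my $err    = $@;
    flock $fh, LOCK_UN;          # release even if $code died
    close $fh;
    die $err if $err;
    return $result;
}
```

The lock is released automatically if the process exits while holding it, which is one reason advisory file locks are a common poor man's queue for CGI scripts.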
As for what my competition is doing, I have no idea. I watch industry reports, particularly those that report average response times for services like mine, response time being measured from the moment the client software makes a request to the moment it gets a response back. Being competitors, I doubt they'll let me look at their code or the architecture they use. ;-)

Back when I developed desktop applications in Java and C++ (I preferred C++ because it was, and still is, blindingly fast relative to Java or any other language I can use), I found that in some cases the speedup was just a matter of using a compiled language rather than an interpreted one, but in a great many more cases it was a question of using the wrong algorithm for the task (e.g. a linear search through an enormous dataset rather than the more sensible binary search). Now, if I can learn to handle requests from C++ via FastCGI, and find a good library for adding FastCGI support to my C++ projects, I may well resort to that, at least if and when the traffic grows to a point where Perl has trouble handling it (or use profiling to find the bottlenecks and see whether a package written in C++ improves things). As for how my competitors make their systems so slow, I can make educated guesses, but I cannot really know without seeing their code.
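To make the algorithm point concrete: binary search on a sorted array does O(log n) comparisons where a linear scan does O(n), which is the difference between ~30 comparisons and a billion on a billion-element dataset. A minimal Perl sketch:

```perl
use strict;
use warnings;

# Binary search a sorted numeric array ref; returns the index of
# $target, or -1 if it is absent. O(log n) comparisons, versus O(n)
# for scanning the array element by element.
sub binary_search {
    my ($sorted, $target) = @_;
    my ($lo, $hi) = (0, $#$sorted);
    while ($lo <= $hi) {
        my $mid = int(($lo + $hi) / 2);
        if    ($sorted->[$mid] < $target) { $lo = $mid + 1 }
        elsif ($sorted->[$mid] > $target) { $hi = $mid - 1 }
        else  { return $mid }
    }
    return -1;
}
```

The catch, of course, is that the data must already be sorted (or indexed), which is exactly the trade a DB index makes for you.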
BTW, perhaps you should have paraphrased Men in Black, rather than quoting it, to say, "Ted, we have a bug." ;-)