PerlMonks
Re^8: Problem Running More than Two Successive (I think) Sql SELECT statements, DBI, DBD::Oracle

by perldigious (Priest)
on Feb 10, 2020 at 14:30 UTC ( #11112731=note )


in reply to Re^7: Problem Running More than Two Successive (I think) Sql SELECT statements, DBI, DBD::Oracle
in thread RESOLVED - Problem Running More than Two Successive (I think) Sql SELECT statements, DBI, DBD::Oracle

Oh, I'm not a big fan of how I'm doing this either. For me it's a kludge that seems to fix another problem I was having with this.

So are all the connect/disconnects; I had a different problem without them. I had also tried adding finish method calls instead, with no luck.

I'm not savvy enough at this stuff to be very effective at debugging, but I know that if I go back to a single, uncached connect and/or remove the explicit disconnects, I still see the issues. I'm relying on my Oracle DBAs at this point, and since one of them switched the service I connect to and added a few other connection options, things seem far more stable than they have ever been previously.

I'm still holding out hope I can figure out what's really going on as I work with my employer's database from Perl more, but for now I'm way behind at work and I have to let this one go.

Just another Perl hooker - My clients appreciate that I keep my code clean but my comments dirty.

Replies are listed 'Best First'.
Re^9: Problem Running More than Two Successive (I think) Sql SELECT statements, DBI, DBD::Oracle
by Marshall (Abbot) on Feb 10, 2020 at 16:53 UTC
    I don't understand why you are getting connection timeouts. The number of connections is limited, but it is usually a big number. Rather than attempting this connect_cached method, I would recommend retrying the basic connect method. The docs show enough red flags about connect_cached that I would stay away from it. The answer to a problem with a common, simple method is seldom to increase complexity. Here is some code showing a retry for a web-page fetch, which you can adapt to your situation (or some Monk can supply a better way):
    my $success = 0;
    my $tries   = 0;
    while (!$success and $tries++ < 10) {
        eval { $m2->get($fullurl); };
        if (!$@) {
            $success = 1;
        }
        else {
            print STDERR "Error: Retry Attempt $tries of 10\n";
            print LOG "Error: Retry Attempt $tries of 10\n";
            sleep(3);
        }
    }
    die "aborted Web Site Error: $!" unless $success;    # ultimate failure!! PROGRAM ABORT !!!!
    In general, I would connect once at the beginning of your script and then disconnect after all the SQL work is done. You could add some statements to log how often connect succeeds vs. fails; DB Admins (and Monks) like to see data. From the above code, I can tell you, for example, that this website fails about once per 2,000 requests. Why, I don't know, but it is infrequent enough that I don't worry about it.
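    A runnable sketch of that same retry pattern, factored into a helper that could just as easily wrap DBI->connect (the deliberately flaky closure below is a stand-in of my own invention, so the example runs without a database):

```perl
use strict;
use warnings;

# A small retry helper, in the spirit of the web-fetch loop above.
# In the DBI case the coderef would wrap DBI->connect(...); here a
# closure that fails twice before succeeding stands in for it.
sub retry {
    my ($code, $max, $delay) = @_;
    for my $try (1 .. $max) {
        my $result = eval { $code->() };
        return $result unless $@;
        warn "Error: Retry Attempt $try of $max: $@";
        sleep $delay if $try < $max;
    }
    die "giving up after $max attempts\n";
}

# Stand-in for DBI->connect: dies on the first two calls, then succeeds.
my $attempts = 0;
my $dbh = retry(sub {
    die "simulated connect failure\n" if ++$attempts < 3;
    return "connected";
}, 10, 0);

print "$dbh after $attempts attempts\n";    # connected after 3 attempts
```

    The same helper also gives you the success/failure log the DBAs would want to see: every failed attempt lands on STDERR via warn.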

    This stuff where the query result fetch hangs is weird. It basically "shouldn't happen", but alas it does happen - such is computing at times! Figuring out what is going on there probably involves writing some specific test code to reproduce the problem with a known dataset, and it sounds like that will take more time than you have at the moment. It is possible to implement your own timeout to prevent the program hang, but there are some "yeah but's" with that, and we are dealing with an error so rare that most folks have never seen it before.
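    One of those "yeah but" approaches is a SIGALRM-based timeout. A minimal, Unix-only sketch (the sleep stands in for a hanging fetch; some drivers are reported to interfere with signals, so test this carefully before relying on it):

```perl
use strict;
use warnings;

# Guard a potentially hanging operation with alarm/SIGALRM.
# Unix-specific, and driver internals can swallow or delay the signal,
# so this is a sketch of the technique, not a guaranteed fix.
my $result = eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm 2;                  # allow 2 seconds
    my $rows = do {
        sleep 5;              # stand-in for a hanging $sth fetch
        [ [1], [2] ];
    };
    alarm 0;
    $rows;
};
alarm 0;                      # always clear the alarm, even on failure

if ($@ && $@ eq "timeout\n") {
    print "fetch timed out\n";
}
else {
    print "fetch returned ", scalar(@$result), " rows\n";
}
```

    The outer alarm 0 matters: without it, a timer set just before an unrelated die could fire later in the program.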

    Check back in when you have time to work on your project again. In general the Perl DBI is very robust and much easier to use than similar code in some other popular languages. I don't think we've come close to exhausting the various testing possibilities.

    Update: The most common method I use for DB results is fetchall_arrayref. I am not sure what the DBI does when fetching results row by row. Most of my result sets are "small", meaning less than 2,000 rows. There is an equivalent fetchall_hashref. Perhaps some other Monk can explain the difference in communication between the DBI and the DB for these whole-result fetches versus row-by-row fetching?
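    For comparison, a small sketch of both fetch styles against an in-memory SQLite database (this assumes DBD::SQLite is installed; a real script would use the DBD::Oracle connect string instead, and the table and data here are made up for illustration):

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite database so the sketch runs without a server.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do("CREATE TABLE t (id INTEGER, name TEXT)");
$dbh->do("INSERT INTO t VALUES (?, ?)", undef, @$_)
    for [1, "alpha"], [2, "beta"];

# Row by row: one fetchrow_arrayref call per row.
my $sth = $dbh->prepare("SELECT id, name FROM t ORDER BY id");
$sth->execute;
while (my $row = $sth->fetchrow_arrayref) {
    print "row: $row->[0] $row->[1]\n";
}

# All at once: fetchall_arrayref returns one arrayref holding every
# row, convenient for "small" result sets that fit in memory.
my $sth2 = $dbh->prepare("SELECT id, name FROM t ORDER BY id");
$sth2->execute;
my $all = $sth2->fetchall_arrayref;
print "total rows: ", scalar(@$all), "\n";    # total rows: 2

$dbh->disconnect;
```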
