http://www.perlmonks.org?node_id=272436

lhoward has asked for the wisdom of the Perl Monks concerning the following question:

I have a large base of code working off of an Oracle database that uses prepare_cached for performance reasons. I'm having problems where the statement handle goes bad because of a change in the underlying database. Is there any way to force DBI to verify in prepare_cached that the statement handle is still valid and, if not, re-prepare it? Would doing so void the performance gains I get from using prepare_cached?


Replies are listed 'Best First'.
Re: DBI prepare_cached and statement handle gone bad
by lhoward (Vicar) on Jul 08, 2003 at 20:21 UTC
    Here's a little test program I use to replicate the issue:
    #!/usr/bin/perl -w
    use strict;
    use DBI;

    my $dbh = DBI->connect('zzz', 'zzz', 'zzz');
    while (1) {
        my $sth = $dbh->prepare_cached('select count(*) from EXAMPLE');
        $sth->execute();
        my @r = $sth->fetchrow();
        print "$r[0]\n";
        $sth->finish();
        sleep 10;
    }
    $dbh->disconnect();
    When I do the work to EXAMPLE that invalidates the statement handle, my example code returns the following:
    DBD::Oracle::st execute failed: ORA-00942: table or view does not exist (DBD ERROR: OCIStmtExecute)
    
    DBD::Oracle::st execute failed: ORA-01003: no statement parsed (DBD ERROR: OCIStmtExecute)
    
    What I'd like is for prepare_cached to recover gracefully from these types of intermittent hiccups. Any ideas?


Re: DBI prepare_cached and statement handle gone bad
by cbro (Pilgrim) on Jul 08, 2003 at 20:46 UTC
    "Is there any way to force DBI to verify in prepare_cached that the statement handle is still valid and if not re-prepare it?"
    You could write a subroutine that re-prepares the handle. That is:
    $sth->execute() || try_again();
    From the try_again() function, you could re-execute your statement after running some checks and return the results.
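    A minimal sketch of what such a try_again() could look like. Note that is_recoverable_error() and try_again() are hypothetical names, not part of DBI; the sketch assumes RaiseError is off (so failures return false and set $DBI::errstr), and that $dbh, the SQL, and the bind values are passed in explicitly:

    ```perl
    use strict;
    use warnings;

    # Hypothetical helper: decide from $DBI::errstr whether the failure
    # looks like a stale-statement-handle error worth a re-prepare.
    # ORA-00942 and ORA-01003 are the two errors seen in the test
    # program above; extend the pattern as needed.
    sub is_recoverable_error {
        my ($errstr) = @_;
        return ($errstr =~ /ORA-(?:00942|01003)/) ? 1 : 0;
    }

    # Hypothetical try_again(): bypass the cache with a plain prepare,
    # then re-execute once.
    sub try_again {
        my ($dbh, $sql, @bind) = @_;
        die $DBI::errstr unless is_recoverable_error($DBI::errstr || '');
        my $sth = $dbh->prepare($sql) or die $dbh->errstr;
        $sth->execute(@bind)          or die $sth->errstr;
        return $sth;
    }
    ```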

    "Would doing so void the performance gains I get from using prepare_cached?"
    Not if there is going to be some extended period of time where the query (data from the query) will be reused, IMO.
    In all, a lot of what you're worried about is standard error checking. Whether using cached queries or not, you should always check your prepare and/or execute statements. If you are only worried about catching a certain error or a list of errors, you can examine $DBI::errstr and do some parsing to decide whether you want to handle the error (e.g. re-prepare your statement), die, or continue.
    HTH,
    Chris
    Update: Per runrig's comment, I changed the try_again() to go after the execute. I read too fast and didn't realize the failure was occurring on the execute. This will work (I use it).
    Either way, I like the other ideas better (from leriksen and dragonchild), and suggest using those...as I will now start doing!
      my $sth = $dbh->prepare_cached($statement) || try_again();
      That won't do what you think it does. The error is occurring on the execute, not the prepare. And prepare_cached will just return the previously prepared statement handle (which, assuming the database changed, will now be an invalid handle, but a handle nonetheless), and your try_again() will not be called.

      What the OP might want to do, if caching the statement handle is worth the trouble, is cache the handle himself without using prepare_cached, and re-prepare it if it fails on the execute.
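      A sketch of that hand-rolled cache, with hypothetical names (%sth_cache, cached_execute()); it assumes $dbh is a connected DBI handle with RaiseError off, so a failed execute returns false rather than dying:

      ```perl
      use strict;
      use warnings;

      # Hand-rolled statement cache: like prepare_cached, but we control
      # eviction, so a handle that fails on execute gets re-prepared.
      my %sth_cache;

      sub cached_execute {
          my ($dbh, $sql, @bind) = @_;
          my $sth = $sth_cache{$sql} ||= $dbh->prepare($sql);
          unless ($sth->execute(@bind)) {
              # The handle may have gone stale; prepare a fresh one
              # and retry once.
              $sth = $sth_cache{$sql} = $dbh->prepare($sql);
              $sth->execute(@bind) or die $dbh->errstr;
          }
          return $sth;
      }
      ```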

      Or use prepare_cached, and if the execute fails, look at the hash ref returned by the CachedKids dbh attribute, delete the statement handle from that, and re-prepare.
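      A sketch of that CachedKids approach (execute_or_reprepare() is a hypothetical name; assumes RaiseError is off, and that prepare_cached was called with no extra attributes, so the cache key is the SQL text itself):

      ```perl
      use strict;
      use warnings;

      sub execute_or_reprepare {
          my ($dbh, $sql, @bind) = @_;
          my $sth = $dbh->prepare_cached($sql);
          return $sth if $sth->execute(@bind);

          # CachedKids is the hash prepare_cached uses; deleting the
          # entry for this statement forces the next prepare_cached
          # call to prepare afresh instead of handing back the stale
          # handle.
          my $cache = $dbh->{CachedKids};
          delete $cache->{$sql} if $cache;

          $sth = $dbh->prepare_cached($sql);
          $sth->execute(@bind) or die $dbh->errstr;
          return $sth;
      }
      ```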

        Actually one could use eval to emulate a try-catch scenario (which is really one of the main uses of eval):
        eval {                   # try
            $sth->execute();     # throw
        };
        if ($@) {                # catch
            if (<error text indicates a recoverable statement handle error>) {
                # ...delete cached statement handle, re-prepare SQL and re-execute...
            }
        }