
Unexpected and uncatchable database corruption errors using DBD::SQLite & Parallel::ForkManager

by djerius (Beadle)
on Apr 05, 2014 at 05:04 UTC ( #1081215=perlquestion )
djerius has asked for the wisdom of the Perl Monks concerning the following question:

I'm writing data to an SQLite db from multiple child processes managed by Parallel::ForkManager. Unfortunately, several odd things are happening.
  1. I'm getting error messages indicating that the database is corrupt; yet it seems not to be.
  2. Even though I've got RaiseError on and PrintError off, the error messages are still output and I can't catch them via an eval. I have to resort to checking execute()'s return value.
Test code is at the end of this post. The order and frequencies of errors are not deterministic, but here are those that are output to the standard error stream:
  • DBD::SQLite::st execute failed: file is encrypted or is not a database
  • DBD::SQLite::st execute failed: database disk image is malformed
  • DBD::SQLite::st execute failed: database is locked
The "database is locked" errors are expected, as the database is being hammered by 30 processes which do nothing but attempt to update it.

I've run this code on both Linux and Mac OS X boxes with the same errors (I seem to get many more errors on the OS X box).

I've applied (correctly I hope) the advice from the DBI documentation regarding the AutoInactiveDestroy flag.

I've read on the 'net that the "malformed" or "not a database" errors may be caused by busy timeouts. That doesn't seem to be the case here, as I've set a busy timeout of 5 seconds, and the vast majority of these errors appear before 5 seconds have elapsed in the child process. My questions are:

  1. Am I missing something in setting up the database connection so that it works correctly across forks?
  2. Is there an SQLite parameter I need to set to get database locking to work? My impression from the DBD::SQLite documentation is that it does this automatically.
  3. Why doesn't RaiseError work? This is the first time I've seen this happen. (I usually work with PostgreSQL and Sybase; not much experience with SQLite.)
Any help is appreciated.


Directly after posting (of course), I noticed that I hadn't supplied the correct number of parameters to DBI->connect; I'd left out the username and password parameters. After fixing that, my problem with catching exceptions went away. I've amended the code below to include that fix.

Here's the test code:

#!/usr/bin/env perl

use strict;
use warnings;

use DBI;
use Parallel::ForkManager;
use Time::HiRes qw[ gettimeofday ];
use Getopt::Long;

my %opt;
GetOptions( \%opt, qw[ nproc|p=i db=s njobs|j=i timeout=f ] ) or die;

$opt{nproc}   = 30   unless defined $opt{nproc};
$opt{njobs}   = 1000 unless defined $opt{njobs};
$opt{timeout} = 5    unless defined $opt{timeout};

die( "must specify --db\n" ) unless defined $opt{db};

unlink( $opt{db} );

my $dbh = DBI->connect(
    "dbi:SQLite:dbname=$opt{db}", '', '',
    {
        PrintError          => 0,
        AutoCommit          => 1,
        AutoInactiveDestroy => 1,
        RaiseError          => 1,
    }
) or die( "unable to create db\n" );

$dbh->sqlite_busy_timeout( $opt{timeout} * 1000 );

$dbh->do( "create table test ( a int, b int )" );

my $pm = Parallel::ForkManager->new( $opt{nproc} );

for my $idx ( 1 .. $opt{njobs} ) {

    $pm->start and next;

    my $time_of_day = gettimeofday;

    update_results( $dbh, $idx % 5, $$, $time_of_day );

    $pm->finish;
}

$pm->wait_all_children;

sub update_results {

    my ( $dbh, $a, $b, $time_of_day ) = @_;

    my $sth_update = $dbh->prepare( q[ update test set b = ? where a = ? ] );
    my $sth_insert = $dbh->prepare( q[ insert into test ( a, b ) values ( ?, ? ) ] );

    eval {
        my $rv = $sth_update->execute( $b, $a );

        die( "error in attempt to update: @{[ $dbh->errstr ]}\n" )
          if !defined $rv;

        if ( 0 == $rv ) {

            $rv = $sth_insert->execute( $a, $b );

            die( "error in attempt to insert: @{[ $dbh->errstr ]}\n" )
              if !defined $rv;

            die( "$$: error updating status for a = $a\n" )
              unless $rv;
        }
    };

    my $err = $@;

    if ( $err ) {

        my $elapsed_time = gettimeofday() - $time_of_day;

        # complain if not a busy timeout error message
        if ( $elapsed_time > $opt{timeout} && $err !~ /database is locked/ ) {
            die "ERROR: time overrun: \[$$] elapsed time: $elapsed_time: $@" if $@;
        }

        die "ERROR: \[$$] elapsed time: $elapsed_time: $@" if $@;
    }
}

Replies are listed 'Best First'.
Re: Unexpected and uncatchable database corruption errors using DBD::SQLite & Parallel::ForkManager
by moritz (Cardinal) on Apr 05, 2014 at 05:40 UTC

    I don't know if this applies to SQLite, but many database systems have a problem if you fork after connecting. In that case, forking first and then connecting to the database (or, if that's not an option, reconnecting in the child process after the fork) is the proper solution.

      That does the trick. Giving each forked process its own database handle makes all of the corruption errors go away.
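
      For anyone landing here later, a minimal sketch of the reworked loop (variable names and connect options follow the test code above; this is my reconstruction of the fix, not code posted in the thread):

```perl
# Each child opens its OWN handle after the fork, so no
# DBI handle is ever shared across a process boundary.
my $pm = Parallel::ForkManager->new( $opt{nproc} );

for my $idx ( 1 .. $opt{njobs} ) {

    $pm->start and next;

    # connect *inside* the child, after the fork
    my $dbh = DBI->connect(
        "dbi:SQLite:dbname=$opt{db}", '', '',
        { PrintError => 0, AutoCommit => 1, RaiseError => 1 },
    ) or die "unable to connect: $DBI::errstr\n";

    $dbh->sqlite_busy_timeout( $opt{timeout} * 1000 );

    my $time_of_day = gettimeofday;
    update_results( $dbh, $idx % 5, $$, $time_of_day );

    $dbh->disconnect;
    $pm->finish;
}

$pm->wait_all_children;
```

      The parent's handle (used only to create the table) should still carry AutoInactiveDestroy => 1 so it isn't torn down when the children exit.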


Re: Unexpected and uncatchable database corruption errors using DBD::SQLite & Parallel::ForkManager
by graff (Chancellor) on Apr 05, 2014 at 15:53 UTC
    You should check the SQLite docs about whether that library supports multiple simultaneous connections to the same database file. I just found this very nice summary about applications that should and should not use SQLite.

    The sections there about client/server applications and high concurrency lead me to think that SQLite and multi-threading are not a good combination. The latter section specifically says SQLite "will only allow one writer at any instant in time."

    Update: the bottom line is: either you have to complicate your own script in order to overcome SQLite's limitations (choroba's reply below suggests there's already a CPAN module to help with that), or else you have to pick a more capable type of database.

      That's why I had to use DB_Repeat.
      The SQLite documentation indicates that it can handle multiple simultaneous read connections. It only allows one writer at a time, locking the database during the write. If it cannot lock the database, it returns a "database is locked" error.

      I'm testing this on a local disk (no NFS shenanigans involved), so the locking mechanism should work.
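
      Since SQLite allows only one writer, a write can still fail with "database is locked" once the busy timeout expires. A sketch of a simple retry wrapper (the retry budget and sleep interval are arbitrary choices of mine, not something suggested in this thread):

```perl
# Retry an execute() a few times when SQLite reports the
# database is locked; rethrow any other error immediately.
sub execute_with_retry {
    my ( $sth, @bind ) = @_;

    my $tries = 5;                          # arbitrary retry budget
    while ( $tries-- ) {
        my $rv = eval { $sth->execute( @bind ) };
        return $rv if defined $rv;

        die $@ unless $@ =~ /database is locked/;
        select( undef, undef, undef, 0.1 ); # sleep 100 ms, then retry
    }

    die "gave up: database still locked\n";
}
```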

Re: Unexpected and uncatchable database corruption errors using DBD::SQLite & Parallel::ForkManager
by sundialsvc4 (Abbot) on Apr 06, 2014 at 11:54 UTC

    I cordially suggest that this is the sort of application that should not use SQLite.

      The actual application doesn't hit the database that hard; the test code is designed to get to failure more quickly.

      After the fix suggested by moritz, everything is working fine.

Approved by moritz