I love Parallel::ForkManager and I love DBD::mysql, but they don't always get along so well. If you have DBI connections open in the parent process, you'll get errors when the child processes exit: each child inherits a copy of the parent's handle, and when the child's copy is destroyed it closes the shared socket out from under the parent. There are a few ways to deal with the problem, but here's the easiest. Just disconnect all handles in the parent and reconnect later, after your parallel work is done:

    # disconnect all database handles
    my %drivers = DBI->installed_drivers();
    my @all_dbh = grep { defined }
                  map  { @{ $_->{ChildHandles} } } values %drivers;
    $_->disconnect for @all_dbh;

    # now use Parallel::ForkManager as usual
    foreach my $job (@work) {
        $pm->start and next;          # do the fork

        # connect in the child
        my $dbh = DBI->connect(...);

        $pm->finish;                  # do the exit in the child process
    }
    $pm->wait_all_children;

    # now safe to reconnect in the parent
    my $dbh = DBI->connect(...);

The alternative is to keep the parent handles connected but set InactiveDestroy on them in the children. I don't prefer this because you also have to be sure the children won't accidentally use the parent handles or explicitly disconnect() them. This may seem easy for simple scripts, but code using ORMs like Class::DBI or Rose::DB::Object can hide DBI connections deep in their bowels. You might worry about the slowdown from all this disconnecting and reconnecting, but I've yet to see it show up on a profiling run.
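For completeness, the InactiveDestroy route looks roughly like this. This is only a sketch, not the exact code from above: `$dsn`, `$user`, and `$pass` are placeholder connection details, and the work loop is assumed to match the earlier example. Setting the flag in each child stops the child's copy of the handle from closing the parent's socket when it's destroyed at exit:

```perl
use strict;
use warnings;
use DBI;
use Parallel::ForkManager;

# Placeholders -- substitute your real connection details and work list.
my $dbh = DBI->connect( $dsn, $user, $pass );
my $pm  = Parallel::ForkManager->new(4);

foreach my $job (@work) {
    $pm->start and next;    # fork

    # Tell the child's inherited copy of the handle NOT to tear down
    # the connection when it is destroyed at child exit. The parent
    # still owns the live socket.
    $dbh->{InactiveDestroy} = 1;

    # ... child work here; open a fresh connection if the child
    # actually needs the database ...

    $pm->finish;            # exit the child
}
$pm->wait_all_children;

# $dbh is still usable here in the parent.
```

Recent versions of DBI also have an AutoInactiveDestroy connection attribute that sets this automatically in forked children, which saves you from remembering the flag in every child block.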

So there you have it. I wrote this note mostly for my own future reference, since I've had to rediscover this a couple of times now. Maybe it will help you too!


PS: This probably works for any DBI driver that uses sockets to talk to the database, but I can't say for sure.