PerlMonks  

Sharing a database handle over multiple processes

by Anonymous Monk
on Jun 28, 2001 at 05:20 UTC ( id://92147 )

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi, monks!

I've got a server which listen()s on a certain port and then fork()s off a new process for every accept()ed connection.
Each child process then reads X bytes from the socket, checks the data for validity, and then writes everything to the database (Oracle).

In the current implementation, each child process does its own $dbh = DBI->connect($source, $uname, $auth, \%attr); thing and I have a feeling that this slows things down dramatically...

So, my question is:

Is there a way to share the same database connection among multiple processes ?

For instance, can I do the following?

    ...
    my $dbh = DBI->connect($source, $uname, $auth, \%attr);
    while (1) {
        ##########################
        # accept() code goes here
        ##########################
        my $cid = fork();
        if (defined $cid) {
            if ($cid == 0) {
                my $statement = q!a SQL statement goes here!;
                my $sth = $dbh->prepare($statement);
                $rv = $sth->execute;
                ... etc ...
                exit 0;
            }
        } else {
            # handle the error
        }
    }
    ...
What will happen when multiple processes try to use their copies of $dbh at the same time?

Any input is greatly appreciated.
Thanks in advance!

Replies are listed 'Best First'.
Re: Sharing a database handle over multiple processes
by pope (Friar) on Jun 28, 2001 at 13:00 UTC
    A DBI database handle does survive across fork().
    But the problem here is that when a child exits, the database handle gets disconnected, and since the same database connection is shared among the other children, this affects them as well.
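(Aside: DBI has a handle attribute aimed at exactly this exit-time disconnect. A minimal sketch, with placeholder DSN and credentials; note that InactiveDestroy only stops the child's DESTROY from sending a disconnect to the server, it does not make concurrent use of one handle safe:)

```perl
use strict;
use warnings;
use DBI;

# DSN and credentials are placeholders, not values from the thread.
my $dbh = DBI->connect('dbi:Oracle:SID', 'user', 'password',
                       { RaiseError => 1 });

my $cid = fork();
die "fork failed: $!" unless defined $cid;
if ($cid == 0) {
    # Mark this copy of the handle "inactive": when the child exits,
    # its DESTROY will not close the connection, so the parent's
    # connection stays up.
    $dbh->{InactiveDestroy} = 1;
    # ... child work that does NOT use $dbh ...
    exit 0;
}
waitpid($cid, 0);
```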

    If the overhead of connect() is really the bottleneck of your system, then I'd suggest you follow the apache/mod_perl model:
    a child doesn't exit immediately after serving a client, but runs an accept() loop to serve subsequent requests until a certain maximum number is reached.
    Your program still creates a new $dbh for each forked child, but this $dbh is reused several times within the child before it exits, not just once. This way you reduce the number of connect() calls.

    Update: Added a suggestion.
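The model above can be sketched roughly as follows; a minimal preforking sketch, where the DSN, credentials, port, pool size and request cap are all placeholders I've made up for illustration, not values from the thread:

```perl
use strict;
use warnings;
use DBI;
use IO::Socket::INET;

# Placeholders: DSN, credentials, port, request cap.
my ($source, $uname, $auth) = ('dbi:Oracle:SID', 'user', 'password');
my $MAX_REQUESTS = 100;

my $server = IO::Socket::INET->new(
    LocalPort => 9000,
    Listen    => 5,
    Reuse     => 1,
) or die "listen failed: $!";

for (1 .. 5) {                        # prefork a small pool of children
    my $cid = fork();
    die "fork failed: $!" unless defined $cid;
    next if $cid;                     # parent: keep forking

    # Child: connect once, then serve up to $MAX_REQUESTS clients.
    my $dbh = DBI->connect($source, $uname, $auth, { RaiseError => 1 });
    for (1 .. $MAX_REQUESTS) {
        my $client = $server->accept() or next;
        # ... read X bytes from $client, validate ...
        my $sth = $dbh->prepare(q!a SQL statement goes here!);
        $sth->execute;
        close $client;
    }
    $dbh->disconnect;                 # one connect()/disconnect() per child
    exit 0;
}
wait for 1 .. 5;                      # parent reaps its children
```

Each child amortizes one connect() over up to $MAX_REQUESTS requests, which is the whole point of the pattern.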

Re: Sharing a database handle over multiple processes
by Zapawork (Beadle) on Jun 28, 2001 at 07:04 UTC
    Here is another idea,

    If your problem is multiple connections from the child procs, then focus on the obvious solution. I am hoping you already have a pipe between the parent and child procs that allows reading and writing. If you do, then let the parent proc open the database handle. This gives you more flexibility in the queueing of requests to be inserted, as well as limiting the connections to one.

    Just have the parent proc take all output from the child pipe and use that as data to be prepared for the sql execution.

    I hope this works for you. In general I would limit the number of database connections that are open in every case.

    Dave
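This parent-owns-the-handle idea can be sketched as below; a minimal single-child version, where the DSN, credentials, table, and the one-line-per-record CSV framing on the pipe are all assumptions of mine, not from the thread:

```perl
use strict;
use warnings;
use DBI;

# Only the parent ever holds a $dbh; children send validated rows up a pipe.
pipe(my $reader, my $writer) or die "pipe failed: $!";

my $cid = fork();
die "fork failed: $!" unless defined $cid;

if ($cid == 0) {                       # child: no database handle at all
    close $reader;
    # ... read from the socket, validate, then ship the record upward;
    # one comma-separated line per record is an assumed framing.
    print {$writer} "some,validated,row\n";
    close $writer;
    exit 0;
}

close $writer;                          # parent: the single connection
my $dbh = DBI->connect('dbi:Oracle:SID', 'user', 'password',
                       { RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO t (a, b, c) VALUES (?, ?, ?)');
while (my $line = <$reader>) {
    chomp $line;
    $sth->execute(split /,/, $line);    # parent does all the SQL
}
waitpid($cid, 0);
$dbh->disconnect;
```

With many children you would give each its own pipe (or have them share one and keep writes under PIPE_BUF so lines don't interleave).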

Re: Sharing a database handle over multiple processes
by voyager (Friar) on Jun 28, 2001 at 05:35 UTC
    The word I got from my local expert is that database handles do not survive a fork. So you will need to make a new connection in each new process.

    Is this consistent with what others have heard or experienced?

      Apache::DBI states that it creates all its DBI handles after the fork, since SQL state through DBI is keyed off the handle, and shared handles would clobber important per-connection info such as LAST_INSERT_ID, transactions, etc... so I agree with you wholeheartedly

                      - Ant

Re: Sharing a database handle over multiple processes
by VSarkiss (Monsignor) on Jun 28, 2001 at 07:01 UTC
    There is no database system I know of whose handle will stay valid across a fork. It might work with something that doesn't maintain client state, like a flat file with DBD::CSV, but even that would have problems of its own.
Re: Sharing a database handle over multiple processes
by fs (Monk) on Jun 28, 2001 at 05:33 UTC
    Have you thought about doing some profiling? If you already suspect which block of code is hitting you hardest, some timing checkpoints at the beginning, end, and just before and after the code in question should be able to confirm or deny whether the ->connect is really the problem.

    If it is, you might want to think about having a separate process maintain the $dbh, and have the other processes pass queries off to it. If you perform a fairly small set of queries frequently, you would then be able to cache the prepared queries without much memory penalty.
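The timing-checkpoint idea can be done with Time::HiRes; a minimal sketch, where a ~50 ms sleep stands in for the suspect DBI->connect call so the script runs without a database:

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Stand-in for the code under suspicion (e.g. the DBI->connect call);
# it just sleeps ~50 ms so this sketch needs no database.
sub suspect_block { select(undef, undef, undef, 0.05) }

my $t0 = [gettimeofday];
suspect_block();
my $elapsed = tv_interval($t0);
printf "suspect block took %.3f s\n", $elapsed;
```

Wrap the real connect() (and, separately, the validate/insert path) the same way and compare the numbers before restructuring anything.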
