http://www.perlmonks.org?node_id=161043


in reply to Re: Sharing database handle
in thread Sharing database handle

Hi Matt,

I'm forking about 1 process every second. And indeed, I use DBI to connect to a MySQL database. The child processes only perform simple SELECT and INSERT queries, so no heavy stuff.

Judging from the reactions above, I guess it's best to keep it the way it is. I was just wondering if I could be more efficient.

Grtz Marcello

Re: Re: Re: Sharing database handle
by d_i_r_t_y (Monk) on Apr 23, 2002 at 08:40 UTC
    i'm forking about 1 process every second.

    continuously?! surely not?!!... perhaps you should post the code?

    if divide and conquer is not applicable then i have one other suggestion:

    1. open a local UNIX socket/pipe
    2. fork a predefined set of children (say, 8), have each of the children open a DBI handle to mysql
    3. set up the parent process to loop over your insert data, spooling it to the socket the children are all listening on; meanwhile, each child blocks in the $server->accept() call, waiting for the parent to send it data
    4. upon reading a chunk of data, enter the insert routine and loop back to the accept
    5. continue until parent runs out of data
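    the steps above might be sketched like this. caveats: the socket path, worker count, ack protocol, and do_insert() are all illustrative and not from the thread, and the DBI->connect line is left as a comment since it needs a real database; the point is only that each child opens its own handle *after* the fork.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::UNIX;

my $path    = "/tmp/preforked_$$.sock";   # assumed socket path
my $workers = 4;                          # "say, 8" in the post; smaller here

unlink $path;
my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => $workers,
) or die "listen: $!";

my @kids;
for (1 .. $workers) {
    defined(my $pid = fork) or die "fork: $!";
    if ($pid) { push @kids, $pid; next }   # parent keeps forking

    # child: this is where each worker would open its own DBI handle,
    # e.g.  my $dbh = DBI->connect('dbi:mysql:mydb', $user, $pass);
    while (my $client = $server->accept) { # block waiting for the parent
        local $/;                          # slurp one chunk per connection
        my $chunk = <$client>;
        last if !defined $chunk or $chunk eq 'QUIT';
        # do_insert($dbh, $chunk);         # hypothetical insert routine
        print $client "OK $$\n";           # ack so the parent can confirm
        close $client;
    }
    exit 0;
}

# parent: spool insert data to whichever child accepts next
my $acked = 0;
for my $row ('alpha', 'beta', 'gamma') {
    my $conn = IO::Socket::UNIX->new(Type => SOCK_STREAM, Peer => $path)
        or die "connect: $!";
    print $conn $row;
    $conn->shutdown(1);                    # done writing; child sees EOF
    $acked++ if <$conn> =~ /^OK/;
    close $conn;
}

# tell every worker to exit, then reap them
for (1 .. $workers) {
    my $conn = IO::Socket::UNIX->new(Type => SOCK_STREAM, Peer => $path)
        or die "connect: $!";
    print $conn 'QUIT';
    $conn->shutdown(1);
    close $conn;
}
waitpid $_, 0 for @kids;
unlink $path;
print "$acked rows handled\n";
```

    note that the children share a single listening socket and race to accept() — the classic pre-forking pattern — so no extra dispatcher is needed in the parent.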

    an example of this approach (a pre-forking server, like apache) is in the Perl Cookbook, in the chapter on sockets or IPC, though i don't have the book in front of me at the moment, so i can't give the specific recipe.

    but you're probably right; unless you're loading >1GB of data (whether or not it needs transformation prior to insert), it's probably fast enough. ;-)

    d_i_r_t_y