http://www.perlmonks.org?node_id=161237


in reply to Re: Re: Sharing database handle
in thread Sharing database handle

i'm forking about 1 process every second.

continuously?! surely not?!!... perhaps you should post the code?

if divide and conquer is not applicable, then i have one other suggestion:

  1. open a local UNIX socket/pipe
  2. fork a predefined set of children (say, 8), and have each child open its own DBI handle to mysql (a handle can't safely be shared across a fork)
  3. have the parent process loop over your insert data, spooling it to the socket; meanwhile, each child blocks in the $server->accept() call, waiting for the parent to send it data
  4. upon reading a chunk of data, run the insert routine, then loop back to the accept
  5. continue until parent runs out of data
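the steps above look roughly like this in code -- an untested sketch, not production code; the socket path, DSN, table, and next_chunk() are placeholders you'd replace with your own:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;
use DBI;

# 1. open a local UNIX socket
my $path = '/tmp/insert_workers.sock';    # hypothetical path
unlink $path;
my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => 8,
) or die "listen: $!";

# 2. fork a predefined set of children, one DBI handle each
my $nkids = 8;
for (1 .. $nkids) {
    defined( my $pid = fork ) or die "fork: $!";
    next if $pid;                         # parent keeps forking

    # child: its OWN handle -- never share a dbh across a fork
    my $dbh = DBI->connect( 'dbi:mysql:mydb', 'user', 'pass',
                            { RaiseError => 1 } );
    my $sth = $dbh->prepare('INSERT INTO mytable (col) VALUES (?)');
    while ( my $client = $server->accept ) {    # 3. block in accept
        while ( my $line = <$client> ) {
            chomp $line;
            exit 0 if $line eq '__DONE__';      # parent says we're finished
            $sth->execute($line);               # 4. the insert routine
        }
    }                                           # ...loop back to accept
    exit 0;
}

# 3. parent: spool the insert data to whichever child accepts next
close $server;
while ( my $chunk = next_chunk() ) {      # next_chunk() is yours to write
    my $c = IO::Socket::UNIX->new( Peer => $path ) or die "connect: $!";
    print $c "$_\n" for @$chunk;
    close $c;                             # child loops back to accept
}

# 5. out of data: shut the children down and reap them
for (1 .. $nkids) {
    my $c = IO::Socket::UNIX->new( Peer => $path ) or die "connect: $!";
    print $c "__DONE__\n";
    close $c;
}
wait() for 1 .. $nkids;
```

the __DONE__ sentinel is just one way to stop the workers; a signal or closing the socket path would do as well.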

an example of this approach (a pre-forking server, like apache) is in the Perl Cookbook, in the chapter on sockets or IPC, though i don't have the book in front of me at the moment, so i can't give the specific recipe.

but you're probably right; unless you're loading >1GB of data (whether or not it needs transformation before insert), it's probably fast enough. ;-)

d_i_r_t_y