Re: Sharing database handle

by d_i_r_t_y (Monk)
on Apr 21, 2002 at 03:10 UTC


in reply to Sharing database handle

hi marcello,

I have an application which forks off child processes to do all the work. At the moment, every child process creates its own database connection, does its work and closes the database connection again. This causes a lot of connections to be made to the database.

how many children are you forking?!
seriously, we have done the same thing for our biological data since the amount of work preceding each insert/update justifies the parallel model.

i don't think there is a problem with many db connections as long as the work for each child justifies the connection-time overhead. on mysql, connections are cheap and fast and i wouldn't even think twice about doing it (though concurrency/locking with mysql is more of an issue...). on db2, where 1 connection costs about 750msec (that's client/server on the same machine!), you would want at least 750msec worth of work per process to justify the connection overhead. that said, divide and conquer works a treat if you can break your workload into mutually exclusive chunks.
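if you want to know what a connection actually costs on your own setup, something like this will tell you (a minimal sketch; the DSN, user and password are placeholders, not anything from the poster's setup):

    use strict;
    use warnings;
    use DBI;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $trials = 10;
    my $t0 = [gettimeofday];
    for (1 .. $trials) {
        # connect/disconnect only, no queries, to isolate the overhead
        my $dbh = DBI->connect('dbi:mysql:test', 'user', 'pass',
                               { RaiseError => 1 });
        $dbh->disconnect;
    }
    printf "%.1f msec per connection\n",
           1000 * tv_interval($t0) / $trials;

run that against your own server and you know exactly how much work each child needs to do before the fork-and-connect model pays for itself.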

btw, have to say it's sure satisfying to see those CPUs and disks crying for mercy when you have 8 concurrent processes hammering away inserts to the db...

matt


Re: Re: Sharing database handle
by Marcello (Hermit) on Apr 22, 2002 at 13:14 UTC
    Hi Matt,

    I'm forking about 1 process every second. And indeed, I use DBI to connect to a MySQL database. The child processes only perform simple SELECT and INSERT queries, so no heavy stuff.

    Judging from the reactions above, I guess it's best to keep it the way it is. I was just wondering if I could be more efficient.

    Grtz Marcello
      i'm forking about 1 process every second.

      continuously?! surely not?!!... perhaps you should post the code?

      if divide and conquer is not applicable then i have one other suggestion (see the sketch after the list):

      1. open a local UNIX socket/pipe
      2. fork a predefined set of children (say, 8), and have each child open its own DBI handle to mysql
      3. set up the parent process to loop over your insert data, spooling it to the socket the children are all listening on; meanwhile each child blocks in the $server->accept() call waiting for the parent to send it data
      4. upon reading a chunk of data, run the insert routine and loop back to accept()
      5. continue until parent runs out of data
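
      something along these lines (a minimal sketch, assuming a hypothetical items table and placeholder socket path, DSN and credentials; real chunking, error handling and graceful shutdown are left out):

          use strict;
          use warnings;
          use IO::Socket::UNIX;
          use DBI;

          my $path  = '/tmp/insert.sock';             # hypothetical socket path
          my $nkids = 8;
          my @work  = map { "record $_" } 1 .. 1000;  # stand-in for real data

          unlink $path;
          my $server = IO::Socket::UNIX->new(
              Type   => SOCK_STREAM,
              Local  => $path,
              Listen => $nkids,
          ) or die "can't listen on $path: $!";

          my @pids;
          for (1 .. $nkids) {
              defined(my $pid = fork) or die "fork: $!";
              if ($pid) { push @pids, $pid; next }    # parent keeps forking

              # child: one DBI handle for its whole lifetime
              my $dbh = DBI->connect('dbi:mysql:test', 'user', 'pass',
                                     { RaiseError => 1 });
              my $sth = $dbh->prepare('INSERT INTO items (data) VALUES (?)');
              while (my $client = $server->accept) {  # block until work arrives
                  while (defined(my $chunk = <$client>)) {
                      chomp $chunk;
                      $sth->execute($chunk);          # the insert routine
                  }
                  close $client;                      # loop back to accept()
              }
              exit 0;
          }

          # parent: spool the data out; whichever idle child accepts gets it
          for my $row (@work) {
              my $conn = IO::Socket::UNIX->new(Type => SOCK_STREAM,
                                               Peer => $path)
                  or die "can't connect: $!";
              print $conn "$row\n";
              close $conn;
          }

          kill 'TERM', @pids;                         # crude shutdown
          waitpid $_, 0 for @pids;
          unlink $path;

      the point being: each child pays the connection cost exactly once, amortised over every insert it ever does, and the kernel hands each new connection to whichever child is idle in accept().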

      an example of this approach (a pre-forking server, like apache) is in the Perl Cookbook, in the chapter on sockets or IPC, though i don't have the book in front of me at the moment, so can't give a specific recipe.

      but you're probably right; unless you're loading >1GB of data (whether or not it needs transformation prior to insert), it's probably fast enough. ;-)

      d_i_r_t_y
