Re: speeding up row by row lookup in a large db

by samtregar (Abbot)
on Mar 21, 2009 at 19:57 UTC


in reply to speeding up row by row lookup in a large db

First off, let me echo at maximum volume that you should drop SQLite right away. MySQL is faster, PostgreSQL is faster, everything is faster. SQLite is about convenience, not speed.

Second, you've got two processors on your current machine and four on your deployment machine, so you need to think about how to get the most out of all that extra CPU horsepower. Usually that means finding a way to run multiple requests in parallel. My favorite tool for the job is Parallel::ForkManager, although beware: it takes some finesse to get it working right with DBI, because database handles don't survive a fork cleanly. Done right, you might run 2x as fast on your current machine and 4x as fast on your final target, so it's definitely worth the effort.
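For illustration, here's a minimal sketch of that pattern. The DSN, table name, and ID list are hypothetical stand-ins for whatever your real lookup looks like; the point is that each child opens its own $dbh after the fork, which sidesteps the DBI-and-forking trouble mentioned above:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use Parallel::ForkManager;

    # Hypothetical stand-ins: swap in your real DSN, table and ID source.
    my @ids      = (1 .. 100_000);
    my $max_kids = 4;    # match the number of CPUs on the target box

    my $pm = Parallel::ForkManager->new($max_kids);

    # Split the work into one chunk per child.
    my $chunk = int(@ids / $max_kids) + 1;
    while (my @batch = splice(@ids, 0, $chunk)) {
        $pm->start and next;    # parent keeps looping; child falls through

        # Each child opens its own connection *after* the fork --
        # sharing one handle across a fork is what breaks DBI.
        my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'pass',
                               { RaiseError => 1 });
        my $sth = $dbh->prepare('SELECT value FROM lookup WHERE id = ?');

        for my $id (@batch) {
            $sth->execute($id);
            my ($value) = $sth->fetchrow_array;
            # ... do something with $value ...
        }

        $dbh->disconnect;
        $pm->finish;            # child exits here
    }
    $pm->wait_all_children;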

UPDATE: If you do decide to use Parallel::ForkManager, check out this node: Parallel::ForkManager and DBD::mysql, the easy way. I started to put this info here and then decided it would be more useful as a separate node.

-sam
