
Re: speeding up row by row lookup in a large db

by samtregar (Abbot)
on Mar 21, 2009 at 19:57 UTC

in reply to speeding up row by row lookup in a large db

First off, let me echo at maximum volume that you should drop SQLite right away. MySQL is faster, PostgreSQL is faster, everything is faster. SQLite is about convenience, not speed.

Second, you've got two processors on your current machine and four on your deployment machine. That means you need to think about how to get the most out of all that extra CPU horsepower, which usually comes down to finding a way to run multiple requests in parallel. My favorite tool for this job is Parallel::ForkManager, although beware: it takes some finesse to get it working right with DBI, because database handles can't safely be shared across a fork. If you do it right you might be able to run 2x as fast on your current machine and 4x as fast on your final target, so it's definitely worth the effort.

UPDATE: if you do decide to use Parallel::ForkManager, check out this node - Parallel::ForkManager and DBD::mysql, the easy way. I started to put this info here and then decided it would be more useful as a separate node.
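To make the fork/DBI interaction concrete, here's a minimal sketch of the general pattern (not taken from the linked node): split the work into batches, fork one worker per CPU, and have each child open its *own* database connection after the fork rather than reusing the parent's handle. The DSN, credentials, table, and column names here are all placeholders you'd swap for your own.

```perl
use strict;
use warnings;
use DBI;
use Parallel::ForkManager;

# Hypothetical example: four batches of ids to look up.
my @batches = ([1 .. 1000], [1001 .. 2000], [2001 .. 3000], [3001 .. 4000]);

my $pm = Parallel::ForkManager->new(4);   # one worker per CPU

for my $batch (@batches) {
    $pm->start and next;   # parent: fork a child, move on to the next batch

    # Child: open a fresh handle -- never share the parent's across a fork.
    my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'password',
                           { RaiseError => 1 });

    my $sth = $dbh->prepare('SELECT value FROM lookup WHERE id = ?');
    for my $id (@$batch) {
        $sth->execute($id);
        my ($value) = $sth->fetchrow_array;
        # ... process $value here ...
    }

    $dbh->disconnect;
    $pm->finish;           # child exits here
}

$pm->wait_all_children;    # parent waits for all workers
```

Connecting inside the child sidesteps the classic problem where a forked child's exiting DBI handle tears down the parent's connection; the linked node covers the DBD::mysql-specific details.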


