Re: speeding up row by row lookup in a large db
by samtregar (Abbot) on Mar 21, 2009 at 19:57 UTC ([id://752287])
First off, let me echo at maximum volume that you should drop SQLite right away. MySQL is faster, PostgreSQL is faster, everything is faster. SQLite is about convenience, not speed.
Second, you've got two processors on your current machine and four on your deployment machine, so you need to think about how to get the most out of all that extra CPU horsepower. Usually that means finding a way to run multiple requests in parallel. My favorite tool for this job is Parallel::ForkManager, although beware: it takes some finesse to get it working right with DBI, because database handles don't survive a fork cleanly. Do it right and you might run 2x as fast on your current machine and 4x as fast on your final target, so it's definitely worth the effort. A sketch of the basic pattern follows below.

UPDATE: if you do decide to use Parallel::ForkManager, check out this node - Parallel::ForkManager and DBD::mysql, the easy way. I started to put this info here and then decided it would be more useful as a separate node.

-sam
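To make the forking pattern concrete, here's a minimal sketch. The DSN, credentials, `lookup` table, and column names are all placeholders, not from the original thread. The key point is that each child opens its own database handle after the fork instead of reusing the parent's:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use Parallel::ForkManager;

    # Placeholder connection details and work list - substitute your own.
    my $dsn     = 'dbi:mysql:database=mydb';
    my @row_ids = (1 .. 1000);

    my $pm = Parallel::ForkManager->new(4);    # one worker per core

    for my $id (@row_ids) {
        $pm->start and next;    # parent: spawn a child and move on

        # Each child opens its OWN handle *after* the fork. A handle
        # created before the fork is shared with the parent, and the
        # child's exit can tear down the parent's connection.
        my $dbh = DBI->connect( $dsn, 'user', 'password',
            { RaiseError => 1, AutoCommit => 1 } );

        my $sth = $dbh->prepare('SELECT value FROM lookup WHERE id = ?');
        $sth->execute($id);
        my ($value) = $sth->fetchrow_array;

        # ... do something with $value ...

        $dbh->disconnect;
        $pm->finish;    # child exits here
    }

    $pm->wait_all_children;

Connecting inside each child is the simplest way around DBI's fork issues: when parent and child share one handle, whichever exits first can close the other's connection out from under it. In practice you'd probably hand each child a chunk of rows rather than fork per row, but the structure is the same.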