PerlMonks
Some thoughts:
Are the processes CPU-bound or IO-bound? If IO-bound, you might consider making the file system cache bigger (as long as you're not paging). If you can also arrange that all reports using the same shell file run consecutively (or eight at a time, to use all processors), you can be fairly sure that every report will find all the data already cached in memory. You will still have the overhead of the OS.

Is it true that a holdings file defines a subset of the shell file? If so, is it possible to split the shell file into several subset files, and then run each report on the appropriate subset file? The report processes then don't have to worry about iterating through the data or using a binary search; they need all of it. This is most efficient if several reports need the same subset. You will need at least twice the disk space, depending on how much overlap there is between the holdings files.

It should be possible to create all the subset files in a single pass through the shell file, as long as you don't run into the per-process limit on open files. That depends on the number of subset files you need to create.

HTH,
Thijs

In reply to Re: Speeding up data lookups
by raafschild
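The "eight at a time" scheduling can be sketched with a plain fork/waitpid throttle. This is only an illustration: the job list and the no-op children are placeholders for the real report commands, which the original post doesn't name.

```perl
use strict;
use warnings;

# Throttled parallel execution: run at most 8 report jobs at once so the
# processors stay busy and reports sharing a shell file find it warm in
# the file system cache. The 20 numbered jobs are stand-ins; real
# children would exec the report generator instead of exiting.
my @jobs      = (1 .. 20);
my $max_procs = 8;
my %running;                 # pid => job
my $finished  = 0;

for my $job (@jobs) {
    if (keys %running >= $max_procs) {   # throttle: wait for a slot
        my $pid = waitpid(-1, 0);
        delete $running{$pid};
        $finished++;
    }
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: placeholder for running one report.
        exit 0;
    }
    $running{$pid} = $job;
}
while (keys %running) {                  # drain the remaining children
    my $pid = waitpid(-1, 0);
    delete $running{$pid};
    $finished++;
}
print "finished $finished jobs\n";
```

With real jobs, the child branch would `exec` the report command so a failing report is visible through the child's exit status.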
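The single-pass split might look like this in Perl. The file names, the key-in-first-column record layout, and the inline demo data are all invented for illustration; note that every subset file is opened up front, which is exactly where the per-process open-file limit mentioned above matters.

```perl
use strict;
use warnings;

# Demo input: a tab-separated shell file keyed by symbol, and two
# holdings files that each select a subset of keys (layout assumed).
my %demo = (
    'shell.dat' => "AAA\tdata1\nBBB\tdata2\nCCC\tdata3\n",
    'hold1.txt' => "AAA\nCCC\n",
    'hold2.txt' => "BBB\nCCC\n",
);
while (my ($name, $body) = each %demo) {
    open my $fh, '>', $name or die "write $name: $!";
    print {$fh} $body;
}

# Map each key to the subset files that need it.
my %wanted;
for my $holdings ('hold1.txt', 'hold2.txt') {
    open my $hf, '<', $holdings or die "open $holdings: $!";
    while (my $key = <$hf>) {
        chomp $key;
        push @{ $wanted{$key} }, "subset.$holdings";
    }
}

# Open all subset files up front; the number of subsets one pass can
# produce is capped by the per-process open file descriptor limit.
my %out;
for my $name (map { @$_ } values %wanted) {
    $out{$name} or open $out{$name}, '>', $name or die "open $name: $!";
}

# Single pass through the shell file, fanning each record out to every
# subset file whose holdings list contains its key.
open my $shell, '<', 'shell.dat' or die "open shell.dat: $!";
while (my $line = <$shell>) {
    my ($key) = split /\t/, $line;
    print { $out{$_} } $line for @{ $wanted{$key} || [] };
}
close $shell;
close $_ for values %out;
```

Records whose key appears in several holdings files are simply written to several subsets, which is where the "at least twice the disk space" estimate comes from.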