PerlMonks
Re: Taking advantage of multi-processor architecture by bluto (Curate)
on Feb 11, 2005 at 18:34 UTC ( [id://430203] )
Since you've only posted a general question, we can only give you general advice. To optimize a program, you must know two things: what resource it is limited by (usually CPU speed or poor code design; real memory; disk speed), and how you can restructure your code to either reduce its dependence on that resource or parallelize access to it. Others have mentioned forking/threading, but these can be hard to use if you are inexperienced with them and don't have the time to learn. Some other things you may want to consider...
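If you do want to try forking, here is a minimal sketch of the usual pattern: split a list of work items across a few child processes and have the parent reap them all. The work list and process count are made up for illustration; the "do the work" step is just a placeholder.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical work items and a hypothetical process count.
my @work   = (1 .. 8);
my $nprocs = 4;
my @pids;

for my $i (0 .. $nprocs - 1) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: handle every $nprocs-th item, staggered by $i.
        for my $j (grep { $_ % $nprocs == $i } 0 .. $#work) {
            my $item = $work[$j];
            # ... do the CPU-bound work on $item here ...
        }
        exit 0;    # children must exit, or they'll run the parent's code too
    }
    push @pids, $pid;    # parent keeps the child's pid
}

# Parent: wait for all children to finish.
waitpid($_, 0) for @pids;
```

Note the children cannot share Perl variables with the parent; if you need results back, you have to pass them through pipes, files, or similar, which is part of why forking takes some learning.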
If your script is performing a lot of IO (reading and writing), consider separating the files onto different physical disks, and avoid accessing files through things like NFS mounts. Sometimes this alone can double the throughput, especially if you are reading and writing two files at the same time on the same physical disk.

You really do not want the system to be swapping while your program is running, since that will slow things down a lot. This is often caused by trying to manipulate massively large data structures in memory in perl. If your script is using lots of memory (e.g. reading two large files completely into memory before processing), consider processing each line as you read it in. If you need the data in arrays, consider using something like Tie::File.

One common example is trying to sort a massive array within perl itself. Sometimes you can call an external utility to do this for you much more quickly (e.g. GNU sort). If you give more details, I'm sure someone can help out more.
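To make the Tie::File suggestion concrete, here is a minimal sketch: the module makes a file's lines look like a normal Perl array, but lines are only fetched from disk as you touch them, so a huge file never has to fit in memory at once. The file name and sample contents are made up for the example.

```perl
use strict;
use warnings;
use Tie::File;

my $file = 'data.txt';    # placeholder file name

# Create a small sample file just so the sketch is runnable.
open my $fh, '>', $file or die "open: $!";
print $fh "$_\n" for qw(cherry apple banana);
close $fh;

# Tie the file to an array; records are read lazily, not slurped.
tie my @lines, 'Tie::File', $file or die "tie failed: $!";

print scalar(@lines), " lines\n";    # line count without loading the file
$lines[0] = 'CHERRY';                # assignments write through to disk

untie @lines;
```

Bear in mind the convenience isn't free: every element access is file IO, so for a one-pass job a plain while-loop over a filehandle is usually faster.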
In Section: Seekers of Perl Wisdom