Agreed here.
A web-crawling application is not going to see much benefit from the light weight of threads, since crawling is by its nature fairly heavy work. If you decide that threads don't really hold an advantage for your application, you can save yourself a whole load of work by forking off processes instead. As pointed to in a recent node, Parallel::ForkManager might be of use to you. The module description includes:
"This module is intended for use in operations that can be done in parallel where the number of processes to be forked off should be limited. Typical use is a downloader which will be retrieving hundreds/thousands of files."

Sounds right up your tree? Or is that down your tree? (I never did work out where the roots of a red-black tree would go.)

In reply to Re^2: Multithread Web Crawler
by aufflick
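The quoted use case might be sketched along these lines (a minimal, hypothetical example; the URL list and the fetch step are placeholders, and Parallel::ForkManager must be installed from CPAN):

```perl
use strict;
use warnings;
use Parallel::ForkManager;    # non-core module, install from CPAN

# Hypothetical work list: URLs the crawler should fetch.
my @urls = map { "http://example.com/page$_" } 1 .. 20;

# Cap the number of simultaneous child processes at 5.
my $pm = Parallel::ForkManager->new(5);

URL:
for my $url (@urls) {
    # In the parent, start() forks a child and returns its pid,
    # so the parent skips ahead to the next URL.
    $pm->start and next URL;

    # Child process: do the actual download here (placeholder).
    print "fetching $url in pid $$\n";

    # Child exits; the manager frees a slot for the next fork.
    $pm->finish;
}

# Parent blocks until every child has finished.
$pm->wait_all_children;
```

The `new(5)` limit is what the description means by "the number of processes to be forked off should be limited": no more than five downloads run at once, however long the URL list is.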