Re: to thread or fork or ?
by Tanktalus (Canon) on Oct 19, 2012 at 03:08 UTC
Personally, I'd default to the same as mbethke: no threading. And forking might not be any better.
If I were to even consider something like this, the file would have to be split. Across multiple physical disks. Because your CPU is so much faster than the disk access that there's no way for the disk to overwhelm a single CPU, never mind multiple CPUs. Only with multiple physical disks all spinning and sending back data simultaneously do you have a chance at even keeping a single CPU busy. And even then, I suspect that the bus would still be a limiting factor.
If, however, you were doing analysis of each word to group similar-sounding words together (see soundex), then you might have enough work to make the disks start waiting for the CPU. But I doubt it.
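To make that concrete, here is a minimal sketch of soundex-style grouping. The function name is mine; the real Text::Soundex module does this properly. This simplified version also skips Soundex's special handling of 'h' and 'w', so a few words will code differently than canonical Soundex would.

```perl
use strict;
use warnings;

# Consonant-to-digit table used by American Soundex.
my %digit = (
    (map { $_ => 1 } qw(b f p v)),
    (map { $_ => 2 } qw(c g j k q s x z)),
    (map { $_ => 3 } qw(d t)),
    l => 4,
    (map { $_ => 5 } qw(m n)),
    r => 6,
);

# Simplified Soundex: keep the first letter, map the rest to digits
# (vowels and h/w/y map to nothing), collapse adjacent duplicate
# digits, then pad/truncate to a letter plus three digits.
sub soundex_code {
    my ($word)  = @_;
    my @letters = split //, lc $word;
    my $first   = uc shift @letters;
    my $fd      = $digit{ lc $first } // '';
    my $full    = $fd . join '', map { $digit{$_} // '' } @letters;
    $full =~ s/(\d)\1+/$1/g;             # collapse runs of the same digit
    $full = substr $full, length $fd;    # drop the first letter's own digit
    return $first . substr( $full . '000', 0, 3 );
}

# Group similar-sounding words under one code.
my %group;
push @{ $group{ soundex_code($_) } }, $_ for qw(Smith Smyth Robert Rupert);
# %group is now ( S530 => ['Smith', 'Smyth'], R163 => ['Robert', 'Rupert'] )
```

Even this little bit of per-word work is cheap compared to a disk read, which is why I doubt it would flip the bottleneck.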
Suppose we completely change your requirements so that there's a point to this: your one file is actually several large files, each on a separate physical disk, and you're doing some significant processing. Even then, I'd probably avoid threading or forking if at all possible. To that end, I'd consider using IO::AIO (which uses threads under the covers, but not perl threads) to read the files.
If that ends up not being fast enough and you actually need to process on more than one CPU at a time, I would suggest as little sharing as possible. Making different threads responsible for words starting with different letters doesn't scale easily (26 threads may not be optimal for your CPU configuration, and what if you start dealing with non-English languages, especially Russian, Greek, or Chinese, among others?), nor uniformly (in English, more words start with "s" than any other letter, while the thread handling words starting with "x" will sit awfully idle). Instead, run one thread/process per file, with each thread/process doing exactly the same thing: populating a hash. The remaining challenge is that the parent process needs to pull all those hashes together and sum them. That's not really too hard.
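The summing step might look something like this (the variable names and sample counts are made up for illustration):

```perl
use strict;
use warnings;

# Sketch: each worker hands back a hashref of word => count for its
# file; the parent folds them all into a single %total.
my @per_file_counts = (
    { the => 10, quick => 2 },
    { the => 7,  brown => 3 },
);

my %total;
for my $counts (@per_file_counts) {
    $total{$_} += $counts->{$_} for keys %$counts;
}
# %total is now ( the => 17, quick => 2, brown => 3 )
```

Because each worker only ever touches its own hash, there's nothing to lock while the real work is happening.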
The challenge is getting that data back to the parent atomically. That is, the entire hash, not a partial result. And doing so easily. One way is to share the %total hash in each thread, and guard it with a mutual-exclusion semaphore that blocks concurrent access. Another way is to use AnyEvent::Util::fork_call to transfer the data back from the subprocess (not thread) to the parent process. In this model, the parent isn't using threads at all, and so can never see a partial result.
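Here is a core-Perl sketch of the pattern that fork_call automates, using fork, a pipe, and Storable: the child computes its whole hash, serializes it, and writes it down the pipe; the parent slurps to EOF, so it only ever sees the complete result. The function name and file name are illustrative, and the word list is hard-coded rather than read from the file to keep the sketch self-contained.

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);

# Fork a child that builds a word-count hash and ships the whole
# thing back through a pipe. The parent never sees a partial hash.
sub count_words_in_child {
    my ($file) = @_;
    pipe my $reader, my $writer or die "pipe: $!";
    binmode $_ for $reader, $writer;    # Storable output is binary
    my $pid = fork() // die "fork: $!";
    if ( $pid == 0 ) {                  # child
        close $reader;
        my %count;
        # In real use: open $file and tally its words. Faked here.
        $count{$_}++ for qw(to be or not to be);
        print {$writer} freeze( \%count );
        close $writer;
        exit 0;
    }
    close $writer;                      # parent
    my $frozen = do { local $/; <$reader> };    # slurp to EOF
    waitpid $pid, 0;
    return thaw($frozen);               # the complete hash, atomically
}

my $counts = count_words_in_child('words.txt');
# $counts->{to} == 2, $counts->{be} == 2, $counts->{or} == 1, $counts->{not} == 1
```

fork_call wraps this plumbing (plus event-loop integration) for you, but the shape of the data flow is the same: one complete, serialized hash per child, merged in the parent.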
But, for your original requirements, there's just no way that threading, forking, or any of that will add anything other than complexity. :-)