PerlMonks
Re^2: Easily parallelize program execution by bennymack (Pilgrim)
on Aug 24, 2006 at 02:43 UTC ( [id://569270] )
I'm afraid I can't think of any other examples off the top of my head that will run anywhere but on my system. I like the way this example illustrates a couple of principles: the script only takes as long as the longest-running process, and it demonstrates how one can aggregate the results of each process.

It doesn't take much imagination to see how this can be useful. For instance, it could be used to connect to 100 servers, grep each access log for a pattern, return the line count, then display the total line count. Or, on a 4-processor system, work through a directory of gzipped files 4 at a time, do something useful, then return the aggregate result.

It's not limited to aggregation either; one could build a complex data structure as well. For instance, building on the gzip example, look for a particular query-string parameter in a directory of gzipped log files and generate a hash mapping each value of the param to the number of times it appears. (OK, so that last one was kind of aggregation too.) Of course, the reduce_sub is optional; one can simply print the results of each task...
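To make the pattern concrete, here is a minimal, self-contained sketch of the idea: fork one child per task, read each child's result back over a pipe, and fold the results together with a reduce sub. The `run_parallel` and `reduce_sub` names here are illustrative assumptions, not the original node's code; a real version would add error handling and limit the number of concurrent children.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: run each task coderef in its own forked child,
# collect one line of output per child over a pipe, and combine the
# results with $reduce_sub in the parent.
sub run_parallel {
    my ($reduce_sub, @tasks) = @_;
    my @readers;
    for my $task (@tasks) {
        pipe(my $r, my $w) or die "pipe: $!";
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {                 # child: run the task, report result
            close $r;
            print {$w} $task->(), "\n";
            close $w;
            exit 0;
        }
        close $w;                        # parent: keep only the read end
        push @readers, $r;
    }
    my $acc;
    for my $r (@readers) {
        chomp(my $result = <$r>);
        close $r;
        $acc = defined $acc ? $reduce_sub->($acc, $result) : $result;
    }
    wait() for @readers;                 # reap all children
    return $acc;
}

# Usage: three pretend "line counts" summed in the parent.
my $total = run_parallel(
    sub { $_[0] + $_[1] },                       # reduce_sub: add counts
    map { my $n = $_; sub { $n } } (10, 20, 30)  # stand-in tasks
);
print "total: $total\n";   # prints "total: 60"
```

The parent reads the pipes sequentially, so the total wall-clock time is dominated by the slowest child, which is exactly the behaviour described above; skipping the reduce step and just printing each `$result` works too.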
In Section: Cool Uses for Perl