iotarho has asked for the wisdom of the Perl Monks concerning the following question:
I'm fairly new to Perl, so apologies if my question is poorly phrased...
I'm trying to take advantage of a 64-CPU cluster plus Perl's text-grabbing/matching capabilities. I have a fairly large (~100 MB) file of names (a few names per line) that I need to look up individually in an even larger (~5 GB) database.
Would Parallel::ForkManager be a good way to implement a "divide and conquer" approach in Perl? I haven't been able to find good examples of using Parallel::ForkManager to read a file a line at a time and then do something with those lines.
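The pattern I have in mind looks roughly like this (untested sketch; the file name, batch size, worker count, and the lookup step are all made-up placeholders — the demo just counts names instead of querying the real 5 GB database). The parent reads the file and hands each child a batch of lines via Parallel::ForkManager, and each child ships a result back through run_on_finish:

```perl
#!/usr/bin/perl
# Sketch only: input file, batch size, and the "lookup" are stand-ins.
use strict;
use warnings;
use Parallel::ForkManager;

# Build a tiny demo input; in practice this is the ~100 MB names file.
my $names_file = 'demo_names.txt';
open my $out, '>', $names_file or die "write $names_file: $!";
print {$out} "name$_\n" for 1 .. 25;
close $out;

my $max_procs  = 4;     # on the real cluster, up to 64
my $batch_size = 10;    # lines per child; tune for the workload

my $pm = Parallel::ForkManager->new($max_procs);

# Collect each child's result as it exits.
my $total = 0;
$pm->run_on_finish(sub {
    my ($pid, $exit, $ident, $signal, $core, $data) = @_;
    $total += $$data if ref $data;
});

open my $fh, '<', $names_file or die "read $names_file: $!";
my @batch;
my $flush = sub {
    return unless @batch;
    my @names = splice @batch;    # hand the child its own copy
    if ($pm->start) { return }    # parent: keep reading the file
    # Child: look each name up in the database here; we just count.
    my $done = scalar @names;
    $pm->finish(0, \$done);       # ship the result back to the parent
};
while (my $line = <$fh>) {
    chomp $line;
    push @batch, $line;
    $flush->() if @batch >= $batch_size;
}
$flush->();                       # don't forget a leftover partial batch
close $fh;
$pm->wait_all_children;

print "processed $total names\n";
```

Batching seems important here: forking once per line of a 100 MB file would drown the work in fork overhead, whereas forking once per chunk keeps all the CPUs busy with meaningful units of work.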