Anonymous Monk has asked for the
wisdom of the Perl Monks concerning the following question:
I have a program that builds large hashes and arrays of data at startup and then works with them mostly read-only.
Until now we simply started the program 4 times on the same computer to use its 4 processor cores. But now the data is about 1.5 GB, so on our 4 GB machines we can only start it twice, leaving 2 cores idle.
Of course the better idea is to load the data once and then let each instance of the program use it. To do this I spent the last 2 weeks converting the program to use threads and threads::shared, only to find that the result is far too slow. While it does use all 4 cores, the 4 threads running in parallel on 4 cores are slower than the old single-threaded version running on one core :(
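For reference, here is a stripped-down sketch of what I ended up with (variable names and the toy data are made up, the real hash is of course much bigger):

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# Build the big lookup table once, marked :shared so every
# thread accesses the same interpreter-level shared storage.
my %data :shared;
$data{$_} = $_ * 2 for 1 .. 1000;   # stand-in for the real 1.5 GB data

# Worker doing read-only lookups against the shared hash.
sub worker {
    my ($lo, $hi) = @_;
    my $sum = 0;
    $sum += $data{$_} for $lo .. $hi;
    return $sum;
}

# One thread per core, each handling a quarter of the key range.
my @threads = map {
    threads->create( \&worker, $_ * 250 + 1, ($_ + 1) * 250 )
} 0 .. 3;

my $total = 0;
$total += $_->join for @threads;
print "total = $total\n";
```

This is essentially the structure of my rewrite; the slowdown shows up once the shared hash gets large and all threads hammer it.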
Somewhere, well hidden in a doc here on PerlMonks, I read that even shared variables are not truly shared but copied between threads, which makes them completely useless for me.
So my big question is: are there alternatives for me? As I said, I want to have the data in memory only once, but with 4 processor cores working on it. Sounds easy, but I haven't found anything so far... In case it matters, I am using Perl 5.10.0 on Linux.
Thanks in advance for your help!