cnd has asked for the wisdom of the Perl Monks concerning the following question:
I want to run a dozen simultaneous fork()'d perl scripts (each with its own individual processor affinity on a multi-CPU host). I want all of them to have efficient access to a large pool of mostly-static shared memory.
For example, I want every script to be able to do this:
print $shared{'hugedata'};
and this:
$shared{'totalrequests'}++;
but with only one copy of all that data living in memory.
I specifically do not want to shuffle copies of the data around, or to serialize/deserialize everything all the time.
Is this possible?
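For illustration, here is a minimal sketch of the closest off-the-shelf fit, assuming the CPAN module IPC::Shareable is installed. It provides exactly this tied-hash interface over a SysV shared memory segment, with one caveat: under the hood it serializes values on each access, so it satisfies the interface rather than the no-serialization wish. The glue key, sizes, and child count below are illustrative assumptions.

    use strict;
    use warnings;
    use IPC::Shareable;

    # Tie a hash to a SysV shared memory segment; 'data' is the glue key
    # every process uses to locate the same segment.
    tie my %shared, 'IPC::Shareable', 'data', { create => 1, destroy => 1 };

    $shared{hugedata}      = 'x' x (10 * 1024 * 1024);  # written once, pre-fork
    $shared{totalrequests} = 0;

    for (1 .. 12) {
        defined(my $pid = fork()) or die "fork failed: $!";
        next if $pid;                    # parent keeps spawning
        (tied %shared)->shlock;          # module's built-in lock for the update
        $shared{totalrequests}++;
        (tied %shared)->shunlock;
        my $len = length $shared{hugedata};  # every child reads the same segment
        exit 0;
    }
    wait for 1 .. 12;                    # reap all children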
If not, how hard do you think it would be to extract the variable-handling functions from the perl source and compile them into some kind of XS loadable module?
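For comparison, a sketch of the raw, no-module route using Perl's built-in SysV shared memory functions (shmget/shmread/shmwrite): the blob lives exactly once in a kernel segment, and shmread only copies bytes out on demand. The taskset call for per-child CPU affinity is a Linux-specific assumption, as are the sizes. (File::Map on CPAN can mmap such a region if even the read-out copy is too much.)

    use strict;
    use warnings;
    use IPC::SysV qw(IPC_PRIVATE IPC_CREAT IPC_RMID);

    my $size = 10 * 1024 * 1024;                 # segment big enough for the blob
    my $id   = shmget(IPC_PRIVATE, $size, 0600 | IPC_CREAT)
        // die "shmget failed: $!";

    my $blob = 'x' x (5 * 1024 * 1024);          # the mostly-static data
    shmwrite($id, $blob, 0, length $blob) or die "shmwrite failed: $!";

    for my $cpu (0 .. 11) {
        defined(my $pid = fork()) or die "fork failed: $!";
        if ($pid == 0) {                         # child
            system('taskset', '-cp', $cpu, $$);  # pin this child to one CPU (Linux)
            shmread($id, my $buf, 0, length $blob)
                or die "shmread failed: $!";     # read straight out of the one segment
            exit 0;
        }
    }
    wait for 0 .. 11;                            # reap all children
    shmctl($id, IPC_RMID, 0);                    # parent removes the segment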
Replies are listed 'Best First'.
Re: Efficient shared memory - possible? how??
by Corion (Patriarch) on Feb 20, 2012 at 12:24 UTC
by sundialsvc4 (Abbot) on Feb 20, 2012 at 14:19 UTC
Re: Efficient shared memory - possible? how??
by BrowserUk (Patriarch) on Feb 20, 2012 at 12:58 UTC
Re: Efficient shared memory - possible? how??
by JavaFan (Canon) on Feb 20, 2012 at 12:01 UTC
Re: Efficient shared memory - possible? how??
by Anonymous Monk on Feb 20, 2012 at 10:57 UTC
by cnd (Acolyte) on Feb 20, 2012 at 11:09 UTC