I want to run a dozen simultaneous fork()'d perl scripts (each with its own individual processor affinity on a multi-CPU host), and I want all of them to have efficient access to a large pool of mostly-static shared memory. For example, I want every script to be able to do this:

    print $shared{'hugedata'};

and this:

    $shared{'totalrequests'}++;

but with only one copy of all that data living in memory. I specifically do not want to shuffle copies of stuff around, or to serialize/deserialize everything all the time.

Is this possible? If not, how hard do you think it would be to extract the variable-handling functions out of the perl source and compile them into some kind of .xs loadable module?

In reply to Efficient shared memory - possible? how?? by cnd
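For illustration (not from the original post): a minimal sketch of the tied-hash interface described above, using the CPAN module IPC::Shareable, which backs a hash with a SysV shared memory segment. Note the caveat that it serializes values with Storable on every store/fetch, so it demonstrates the desired interface rather than the zero-copy behaviour the poster is asking for; the 'data' glue key and the 12-child loop are arbitrary choices for the example, and per-process CPU affinity would have to be set separately (e.g. with taskset).

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IPC::Shareable;

    # 'data' is an arbitrary 4-character "glue" key identifying the segment.
    tie my %shared, 'IPC::Shareable', 'data', { create => 1, destroy => 1 };

    $shared{hugedata}      = 'x' x 10_000;   # stand-in for the mostly-static pool
    $shared{totalrequests} = 0;

    my @kids;
    for my $n (1 .. 12) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                      # child: sees the same tied hash
            print length($shared{hugedata}), "\n";
            (tied %shared)->shlock;           # the increment needs a lock
            $shared{totalrequests}++;
            (tied %shared)->shunlock;
            exit 0;
        }
        push @kids, $pid;
    }
    waitpid $_, 0 for @kids;
    print "total requests: $shared{totalrequests}\n";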