There are many ways to share data between processes. You
can use a local dbm file. You can use IPC::Shareable.
And so on. But all of the efficient ones have the rather
significant problem that all of the requests in your
series have to come back to the same physical machine.
This does not play well with load balancing.
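To make the dbm option concrete, here is a minimal sketch using the core SDBM_File module. The file path and keys are invented for illustration; in real use you would also want flock-based locking around writes, since dbm files do no concurrency control of their own.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(O_CREAT O_RDWR);
use SDBM_File;

# Tie a hash to an on-disk dbm file. Any process on this machine
# that ties the same file sees the same data -- which is exactly
# why this breaks down once requests land on different machines.
# The path '/tmp/session_state' is just an illustration.
tie my %state, 'SDBM_File', '/tmp/session_state', O_CREAT | O_RDWR, 0644
    or die "Cannot tie dbm file: $!";

$state{user42} = 'step3';    # one request writes...
print "$state{user42}\n";    # ...a later request on the same box reads

untie %state;
```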
However, one crazy way of doing it is this: the machine that
gets the first request forks off a local server, which holds
the shared state in a persistent process. Later CGI requests
are passed the information they need to reach this temporary
server, and when the server decides the time is right, it
de-instantiates itself. This would be a lot of work, though.
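A toy sketch of that scheme, compressed into one script so it can actually run: the "first request" forks a short-lived TCP server holding the state, and the parent then plays the part of a later request that was handed the port number. Everything here (the key names, the one-shot protocol) is invented for illustration; a real version would need timeouts, an idle shutdown, and some way to publish the host:port to the other requests.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# The OS picks a free port; in real life you would hand
# "$host:$port" back to the client (cookie, hidden field, ...).
my $listener = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,
    Listen    => 5,
    Proto     => 'tcp',
) or die "listen: $!";
my $port = $listener->sockport;

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # Child: the temporary state server. Serve one lookup,
    # then de-instantiate.
    my %state  = (user42 => 'step3');
    my $client = $listener->accept or exit;
    my $key    = <$client>;
    chomp $key;
    print $client "$state{$key}\n";
    close $client;
    exit 0;
}

# Parent: a later request that was told which port to use.
close $listener;
my $conn = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1',
    PeerPort => $port,
    Proto    => 'tcp',
) or die "connect: $!";
print $conn "user42\n";
my $answer = <$conn>;
chomp $answer;
print "$answer\n";
waitpid $pid, 0;
```

Note that this only dodges the load-balancing problem if every front-end machine can reach the machine running the temporary server, which is most of the work the sketch leaves out.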
Personally I would just see if you can keep the temporary
state in the database, and have each individual request deal
with the bit of the state it needs to handle. But I cannot,
of course, offer any guesses on whether this would work
without knowing more details than you have given us.
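The database approach might look something like this DBI sketch. The table and column names are invented, and I am using an in-memory SQLite handle purely so the example is self-contained; in a real deployment you would connect every web machine to the same shared database server, which is what makes this play well with load balancing.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# ':memory:' keeps this demo self-contained; a real setup would
# point at a shared database reachable from every front end.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do(q{
    CREATE TABLE session_state (
        session_id TEXT PRIMARY KEY,
        state      TEXT
    )
});

# One request stores the bit of state it handled...
$dbh->do('INSERT OR REPLACE INTO session_state VALUES (?, ?)',
         undef, 'user42', 'step3');

# ...and the next request, on whatever machine, reads it back.
my ($state) = $dbh->selectrow_array(
    'SELECT state FROM session_state WHERE session_id = ?',
    undef, 'user42');
print "$state\n";
```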