http://www.perlmonks.org?node_id=887511

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Greetings folks,

Over the years I have read, on and off, about how servers are written in Perl: non-blocking vs. blocking I/O, forking vs. select, client-server designs, and so on.

One thing I'm interested in doing is writing a server that listens for connections but also acts as a client itself, connecting in turn to a copy of the exact same server running somewhere else.

The purpose is to take information it learns as a client and, acting as a server, repeat it to its own clients. Of course, each of those clients is also a copy of the same server running elsewhere.
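To make this concrete, here is a rough sketch of the kind of node I have in mind: one select() loop watching both the listening socket and an outbound peer connection. (For simplicity the sketch connects to itself on an ephemeral port; in reality the peer address and port would come from configuration.)

```perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

# Server side: listen for inbound peer connections.
my $listener = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    Listen    => 5,
    Proto     => 'tcp',
    ReuseAddr => 1,
) or die "listen: $!";

# Client side: the same process also connects out to a peer.
# Here it connects to itself, just to show one loop handling both roles.
my $outbound = IO::Socket::INET->new(
    PeerAddr => '127.0.0.1',
    PeerPort => $listener->sockport,
    Proto    => 'tcp',
) or die "connect: $!";

my $sel = IO::Select->new($listener, $outbound);
$outbound->send("update: row 42\n");

my $received;
OUTER: while (my @ready = $sel->can_read(5)) {
    for my $fh (@ready) {
        if ($fh == $listener) {
            my $peer = $listener->accept;   # inbound peer connection
            $sel->add($peer);
        } else {
            defined(my $line = <$fh>) or next;
            $received = $line;              # an update arrived from a peer
            last OUTER;
        }
    }
}
print "got: $received";
```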

The data is nothing more than a table whose contents occasionally change. Changes must be transmitted and received, and duplicates discarded - almost like a database, but purely in memory, since the data is never meant to be kept permanently.
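For the duplicate-discarding part, I imagine each row carrying a version number, so a node can ignore an update it has already seen no matter which peer relayed it. The key/version scheme here is just an illustration:

```perl
use strict;
use warnings;

# Hypothetical in-memory table: key => { version => N, value => ... }
my %table;

sub apply_update {
    my ($key, $version, $value) = @_;
    # Duplicate or stale update: discard instead of re-broadcasting.
    return 0 if exists $table{$key} && $table{$key}{version} >= $version;
    $table{$key} = { version => $version, value => $value };
    return 1;   # new information: caller should forward it to peers
}

print apply_update('host1', 1, 'up'),   "\n";  # new row, accepted
print apply_update('host1', 1, 'up'),   "\n";  # duplicate, discarded
print apply_update('host1', 2, 'down'), "\n";  # newer version, accepted
```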

As you might suspect, the data would need to be shared between processes. We're not talking hundreds of megabytes of data - more like a few hundred kilobytes, with a little bit of churn (kilobytes at a time).
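If I fork, I understand that parent and child don't share memory, so every update would have to cross a pipe (or socketpair) explicitly - something like this sketch:

```perl
use strict;
use warnings;

# Forked model: a child learns an update "from a peer" and passes it
# to the parent over a pipe, since forked processes share no memory.
pipe(my $reader, my $writer) or die "pipe: $!";

my $pid = fork;
die "fork: $!" unless defined $pid;

if ($pid == 0) {            # child: pretend we learned an update
    close $reader;
    print {$writer} "host1=up\n";
    close $writer;
    exit 0;
}

close $writer;              # parent: receive and apply the update
my %table;
while (my $line = <$reader>) {
    chomp $line;
    my ($k, $v) = split /=/, $line, 2;
    $table{$k} = $v;
}
waitpid($pid, 0);
print "host1 is $table{host1}\n";
```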

I have the abstract algorithm down, but I'm not sure whether I should use forking with IPC or threads. Does anyone have some advice?
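With threads, on the other hand, I imagine the table could simply be a threads::shared hash guarded by lock(), with one thread per peer connection - roughly:

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# Threaded model: all peer-handling threads write into one shared hash.
my %table :shared;

sub record {
    my ($key, $value) = @_;
    lock(%table);           # serialize concurrent updates
    $table{$key} = $value;
}

# Simulate four peer-handling threads each delivering one update.
my @workers = map {
    my $n = $_;
    threads->create(sub { record("peer$n", "row from peer $n") });
} 1 .. 4;
$_->join for @workers;

print scalar(keys %table), " rows\n";   # all four updates landed
```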

Thanks