TCP Socket, Forking, Memory exhaustion
by asuter (Initiate) on Nov 07, 2007 at 10:41 UTC
asuter has asked for the wisdom of the Perl Monks concerning the following question:
I would like to write a TCP server that has to accept socket connections from more than 2,000 clients. The clients would perform a sort of FTP-like communication, i.e. Request-Reply-Request-Reply-...
The connections would stay established for the whole FTP-like communication, which can last several hours.
The problem: Currently I fork the server for each newly accepted client (the standard way to let the parent listen again immediately). With this approach the Perl script (server) is duplicated for every connection, so memory usage grows accordingly. Say the server script needs 5MB and there are 10 concurrent clients: memory usage would already be 50MB (+5MB for the parent). Is there a better approach that minimizes the memory usage?
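To make the setup concrete, here is a stripped-down sketch of the fork-per-client pattern I am using. The port, the "ping"/"pong" exchange, and the one-line handler are placeholders standing in for my real protocol code, and so that the snippet is self-contained the accept loop is forked off and serves a single throwaway client (in my real server the accept loop is the main process and runs forever):

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Listener; LocalPort 0 lets the OS pick a free port so the demo runs anywhere.
my $listener = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,
    Listen    => 5,
    Reuse     => 1,
) or die "listen: $!";
my $port = $listener->sockport;

# In the real server this accept loop IS the main process and never
# exits; here it is forked (and stops after one client) so the
# snippet below can act as its own test client.
my $server_pid = fork;
die "fork: $!" unless defined $server_pid;
if ($server_pid == 0) {
    while (my $conn = $listener->accept) {
        my $pid = fork;
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {                 # child: owns this connection
            $listener->close;
            chomp(my $line = <$conn>);   # real protocol code goes here
            print {$conn} "pong $line\n";
            exit 0;
        }
        $conn->close;                    # parent: back to accept()
        last;                            # demo only: stop after one client
    }
    1 while waitpid(-1, 0) > 0;          # reap the handler child
    exit 0;
}
$listener->close;

# Throwaway client exercising one request/reply round trip.
my $c = IO::Socket::INET->new(PeerAddr => "127.0.0.1:$port")
    or die "connect: $!";
print {$c} "ping\n";
chomp(my $reply = <$c>);
print "client got: $reply\n";
waitpid($server_pid, 0);
```

(As an aside, I have read that on Linux fork() is copy-on-write, so the children may share much of that memory until they actually write to it — but I am not sure how much that helps in practice with 2,000 connections.)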
Are there any tutorials and/or good books out there that cover this problem? The communication I am trying to implement is a bit trickier than plain FTP: after the connection is established, both the server and the client can send requests.
My implementation: In a loop, I call the can_read(1) method of IO::Select on the socket to check whether there is new data (a request or perhaps a reply) to process. If there is, I read the data and invoke a script that handles it (depending on whether it is a request or a reply). If not, I search a specified directory for text files (created by other scripts) whose contents can be sent through the socket; each file holds either a reply or a request. Then I sleep for one second and restart the loop.
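Roughly, one pass of that per-connection loop looks like this — handle_message() and the spool directory are placeholders for my real dispatch code, not what I actually run:

```perl
use strict;
use warnings;
use IO::Select;

# Placeholder for the script that dispatches a request or reply.
sub handle_message {
    my ($msg) = @_;
    return "handled: $msg";
}

# One iteration: check the socket for inbound data with can_read(1);
# if the peer is quiet, flush any queued outgoing text files instead.
sub service_once {
    my ($sock, $outdir) = @_;
    my $sel = IO::Select->new($sock);
    if ($sel->can_read(1)) {             # block at most 1s for peer data
        my $n = sysread($sock, my $buf, 65536);
        return 'closed' unless $n;       # 0/undef: peer went away
        return handle_message($buf);
    }
    # No inbound data: send queued text files created by other scripts.
    for my $file (sort glob "$outdir/*.txt") {
        open my $fh, '<', $file or next;
        local $/;                        # slurp the whole file
        syswrite($sock, scalar <$fh>);
        close $fh;
        unlink $file;                    # message consumed
    }
    return 'idle';
}
```

Writing it out like this, I notice can_read(1) already blocks for up to a second when the socket is quiet, so the extra sleep at the end of my loop may just be adding latency.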
Any suggestions on how I could improve this complicated communication?
Thanks for your answer(s)