I happen to like to use queues ... there are several implementations to choose from ... all of which know how to send hashrefs and Perl objects. So, in a very simple but common scenario, you might have one process that receives incoming connections, builds “request” objects from them, and places these onto a queue. An adjustable number of workers sit on that queue, retrieve requests from it, and execute them ... perhaps by calling some method (say, execute() ...) on the object just received. This method produces a result ... or stores the result in the object itself. The worker then places the request/response onto a “completed work” queue ... from which it is retrieved and the results are sent back to the requesting user.
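The whole pattern fits in a handful of lines. Here is a minimal sketch, written in Python for illustration rather than Perl ... the class name, queue names, and worker count are all hypothetical, and execute() just doubles a number to stand in for real work:

```python
import queue
import threading

class Request:
    """A 'request' object that carries its own work and, later, its result."""
    def __init__(self, payload):
        self.payload = payload
        self.result = None

    def execute(self):
        # Stand-in for real work ... the result lives in the object itself.
        self.result = self.payload * 2

work_q = queue.Queue()   # requests waiting for a worker
done_q = queue.Queue()   # the "completed work" queue

def worker():
    while True:
        req = work_q.get()
        if req is None:        # sentinel value: time to shut down
            break
        req.execute()          # do the work the object describes
        done_q.put(req)        # hand the request/response to the done queue

# An adjustable number of workers sit on the queue.
NUM_WORKERS = 4
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

# The receiving process builds request objects and enqueues them.
for n in range(10):
    work_q.put(Request(n))

for _ in range(NUM_WORKERS):   # one shutdown sentinel per worker
    work_q.put(None)
for t in threads:
    t.join()

# Drain the completed-work queue and deliver the results.
results = sorted(done_q.get().result for _ in range(10))
print(results)   # → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The point of the sentinel values is simply to give each worker a clean way to exit; in a long-running server the workers would instead loop forever, each one handling many requests over its lifetime.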
This, of course, is essentially the magic used by FastCGI on web servers everywhere: the incoming HTTP request is gathered up and queued to a worker, which generates an HTTP response and queues it back to the web server for delivery. FastCGI workers typically process hundreds or thousands of requests during their lifetimes.
The advantage of this general design is that, no matter how “busy” the server gets, the only evidence and the only consequence are that “the queues get rather long.” The system might be running like a cat on a hot tin roof ... all of the workers 100% active 100% of the time ... but it will not become congested. It’ll become just as busy as you’ve allowed it to be, but not one whit more.
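One way to guarantee that “just as busy as you’ve allowed it to be” property is to bound the queues themselves. A short sketch of the idea, again in Python for illustration ... the maxsize of 3 is a hypothetical tuning knob, not anything from the discussion above:

```python
import queue

# A bounded queue caps how deep the backlog can get.
work_q = queue.Queue(maxsize=3)

for n in range(3):
    work_q.put_nowait(n)   # fills the queue to its configured limit

# A fourth item won't fit. With a blocking put() the producer would
# simply wait here ... the system gets busy, but never congested.
try:
    work_q.put_nowait(99)
except queue.Full:
    print("queue full ... producer must wait its turn")
```

With a plain blocking put() instead of put_nowait(), the producer stalls until a worker frees a slot, so the backlog itself throttles the front end.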