Designing an enqueueing application proxy
by bronto (Priest)
on Jan 03, 2008 at 22:07 UTC
bronto has asked for the
wisdom of the Perl Monks concerning the following question:
A few days ago I was presented with a problem.
It goes as follows. We have a web service, which I'll call WS hereafter, and an application that uses that service, which I'll call PE. PE is a batch job that runs every 30 minutes.
Now: WS can't handle more than 10 connections per second, and apparently this limit can't easily be changed. PE's rate could in principle be limited, but trying to do that triggers a bug that makes it loop and duplicate requests, making the problem even worse.
After discussing it with one of the people on the PE development team, we thought a solution would be a proxy that accepts the incoming requests from PE, dispatches them at a maximum rate of 10 per second, and queues the excess ones. Unfortunately, we found nothing that does that job out of the box.
Being a system administrator and not (officially) a programmer, I wasn't the one charged with solving it; nevertheless it intrigued me, and I started thinking about how it could be solved in Perl. I will probably never get the chance to write the real program, but I designed it in my head, and I would kindly ask the opinion of the many of you who are real programmers. So, here we go.
I thought that the best thing would be to create a multithreaded application.
The application would have 10 threads that I'll call "Connectors", which interface with WS. On the other side there would be 10*N threads that I'll call "Listeners", which receive incoming requests from PE. I think five listeners per connector would suffice.
Each listener and connector has an associated incoming queue. For the listeners only, this queue pool could be a shared hash that associates the listener's thread id with its queue (more on this later).
A further queue is shared by all the listeners and will be used to enqueue the incoming requests.
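A minimal sketch of those shared structures, using the core threads and Thread::Queue modules and the counts above (the variable names are my own, and the example request at the end is only there to show the [thread_id, payload] shape I have in mind):

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# One common queue that every listener feeds with incoming requests.
my $incoming = Thread::Queue->new;

# One private queue per connector, filled by the timed dispatcher.
my @connector_queues = map { Thread::Queue->new } 1 .. 10;

# One response queue per listener. Thread::Queue objects are shared
# internally, so a hash mapping each listener's thread id to its queue
# can be filled in as the listener threads are created; connectors then
# use the id carried with each request to pick the right response queue.
my %listener_queue_by_tid;

# A request travels as a [thread_id, payload] pair so the response can
# be routed back. Thread::Queue deep-clones enqueued references.
$incoming->enqueue([ threads->tid, 'example request' ]);
```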
Each listener would create a listening socket. When a request comes in, its content is extracted and pushed onto the common queue, along with the listener's thread identifier, which will later be used to route the response back to the right socket.
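One listener's loop could be sketched like this. The one-line-per-request protocol is purely an assumption standing in for whatever PE actually speaks, and I pass the already-bound server socket in rather than a port number, which keeps the sub easy to exercise in isolation:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;
use IO::Socket::INET;

# Hypothetical listener body: accept a connection, tag the request
# with our thread id, push it onto the common queue, wait for the
# routed-back response on our own queue, and answer the client.
sub listener {
    my ($server, $incoming, $my_queue) = @_;
    my $tid = threads->tid;
    while (my $client = $server->accept) {
        my $request = <$client>;                 # extract request content
        $incoming->enqueue([ $tid, $request ]);  # tag with our thread id
        my $response = $my_queue->dequeue;       # block until reply is routed back
        print {$client} $response;
        close $client;                           # no keepalive in this sketch
    }
}
```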
A timed subroutine (maybe a separate thread would be better?) runs each second, dequeues up to 10 requests and enqueues them, one per connector. (Question: since, as far as I remember, alarm() is not considered a good solution for this kind of problem, how else could one do the job?)
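One way to avoid alarm() entirely is a dedicated dispatcher thread that sleeps until the next one-second tick, moving at most 10 queued requests per tick, round-robin across the connector queues. A sketch, with the per-tick work split into its own sub (the sub names are mine):

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;
use Time::HiRes qw(time sleep);

# Move at most 10 requests from the common queue to the connector
# queues, round-robin. Returns how many it moved.
sub dispatch_tick {
    my ($incoming, $connectors, $next) = @_;
    my $moved = 0;
    for (1 .. 10) {
        my $item = $incoming->dequeue_nb;   # non-blocking: undef when empty
        last unless defined $item;
        $connectors->[ $$next++ % @$connectors ]->enqueue($item);
        $moved++;
    }
    return $moved;
}

# Dispatcher thread body: tick once per second. Time::HiRes's sleep
# accepts fractional seconds, so the period stays reasonably steady
# even when a tick takes a noticeable fraction of a second.
sub dispatcher {
    my ($incoming, @connectors) = @_;
    my $next = 0;
    while (1) {
        my $started = time;
        dispatch_tick($incoming, \@connectors, \$next);
        my $remaining = $started + 1 - time;
        sleep $remaining if $remaining > 0;
    }
}
```

A sleeping thread like this is generally considered safer than alarm(), which interacts badly with other signals and with code that sets its own timeouts.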
The connector reads the request content and id from the queue, forwards the request to WS, reads the response and uses the id to enqueue it to the right listener queue. The listener reads from the queue, forwards the response to PE, closes the connection and goes back to listening (this could be optimized with a keepalive configuration).
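The connector side could then look like the sketch below. forward_to_ws() is a placeholder for the real WS client (an LWP::UserAgent request, say); here it just echoes the request so the routing is visible end to end:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Placeholder for the real WS call (e.g. an LWP::UserAgent request).
# Here it just transforms the request so the round trip is visible.
sub forward_to_ws {
    my ($request) = @_;
    return "response to: $request";
}

# Connector thread body: pull [thread_id, request] pairs off this
# connector's queue, call WS, and route the answer back to the
# originating listener's queue. An undef item is a shutdown signal.
sub connector {
    my ($my_queue, $listener_queues) = @_;
    while (defined(my $item = $my_queue->dequeue)) {
        my ($tid, $request) = @$item;
        my $response = forward_to_ws($request);
        $listener_queues->{$tid}->enqueue($response);
    }
}
```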
Is this design correct?
Do you think it would be better to do it in POE instead? In that case, how would the POE application be designed? (I am POE-illiterate...)
Thanks in advance
In theory, there is no difference between theory and practice. In practice, there is.