Re: Strange blocking issue with HTTP::Daemon
by isync (Hermit) on Aug 11, 2010 at 12:30 UTC ( [id://854344] )
Another post, as I'd like to request comments on what I've learned:
Result 1: HTTP::Daemon in essence behaves correctly, keeping connections alive for HTTP/1.1 requests, with the effect that a kept-alive client can block another client trying to connect. This is a problem as long as HTTP::Daemon runs single-threaded/non-forking, right? With a spawning HTTP::Daemon connection wrapper, even though one connection might be doing keep-alive, there would be other workers idling and waiting for new connections, right? Or do all the threads/forks still share a single socket, which is then blocked? As I said, I think I've seen this "blocking" behaviour with a forking Net::Server script and with a supposedly non-blocking AnyEvent::HTTPD based script - well, I think so. Forks sharing a single socket would explain it, but I again admit my limited understanding of how sockets work.

Result 2: Assuming that forks do not share a single (blocked) socket, a forking server should be able to serve keep-alive connections and closing ones side by side. Adapting a ForkOnAccept concept from here, this is my result (a sketch of the idea follows below): and guess what, it does work. (And on keep-alive connections the pid remains the same.) Hm, although I get a lot of "Needs close: 0" messages from non-keep-alive clients. I think I need to think through fork()'ing again... What's wrong, or is anything wrong, with this design?
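For reference, here is a minimal sketch of what such a fork-on-accept HTTP::Daemon loop can look like; it is not the listing from the original node, and the address, port, and response body are only illustrative. The parent does nothing but accept() and fork(), while each child owns its connected socket and serves keep-alive requests until the client goes away.

#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Response;
use HTTP::Status qw(RC_OK);
use POSIX qw(:sys_wait_h);

# Reap exited children so they do not linger as zombies.
$SIG{CHLD} = sub { 1 while waitpid(-1, WNOHANG) > 0 };

my $d = HTTP::Daemon->new(
    LocalAddr => '127.0.0.1',   # illustrative address
    LocalPort => 8080,          # illustrative port
    ReuseAddr => 1,
) or die "Cannot listen: $!";

print "Listening at ", $d->url, "\n";

while (1) {
    my $conn = $d->accept or next;   # accept() can return undef, e.g. on EINTR

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid) {
        # Parent: drop its copy of the connected socket and keep accepting,
        # so a keep-alive client cannot block new connections.
        $conn->close;
        next;
    }

    # Child: the listening socket is not needed here.
    $d->close;

    # get_request() keeps reading requests on the same connection
    # (keep-alive) until the client closes or sends "Connection: close".
    while (my $req = $conn->get_request) {
        my $res = HTTP::Response->new(
            RC_OK, 'OK',
            [ 'Content-Type' => 'text/plain' ],
            "pid $$ served " . $req->uri->path . "\n",
        );
        $conn->send_response($res);
    }
    $conn->close;
    exit 0;
}

The important detail is that the parent closes its copy of the accepted socket and the child closes the listening socket: a keep-alive client then only ties up its own child process while the parent keeps accepting new connections.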
In Section: Seekers of Perl Wisdom