Sol-Invictus has asked for the wisdom of the Perl Monks concerning the following question:
On a TCP server socket, I want to use a timeout to disconnect clients that have been inactive for 5 minutes, so after reading this:
(from IO::Socket pod)
timeout([VAL]) Set or get the timeout value associated with this socket. If called without any arguments then the current setting is returned. If called with an argument the current setting is changed and the previous value returned.
I wrote a socket which opens like this:
use IO::Socket;
use IO::Select;

$max_msglen  = 1024;
$max_clients = 10;
$port        = 9999;
$timeout     = 300;

$serverSocket = IO::Socket::INET->new(
    Proto     => "tcp",
    LocalPort => $port,
    Listen    => $max_clients,
    Timeout   => $timeout,
    Reuse     => 1,
);

$sel = IO::Select->new($serverSocket);
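From experimenting, it looks to me like the Timeout passed to the constructor may only govern how long accept() blocks on the listening socket, not how long connected clients may stay idle. A small sketch of that guess (timeout shortened from 300 to 1 second so it finishes quickly):

```perl
# Sketch: my guess is that Timeout applies to accept() on the
# listening socket -- accept() returns undef if no client connects
# within that many seconds.
use strict;
use warnings;
use IO::Socket::INET;

my $listener = IO::Socket::INET->new(
    Proto     => 'tcp',
    LocalAddr => '127.0.0.1',
    LocalPort => 0,    # let the OS pick a free port
    Listen    => 1,
    Timeout   => 1,    # shortened from 300 for the demonstration
    Reuse     => 1,
) or die "listen: $!";

my $start  = time;
my $client = $listener->accept;    # nobody connects...
my $waited = time - $start;

print defined $client
    ? "got a client\n"
    : "accept gave up after ~${waited}s\n";
```

If that guess is right, the Timeout I set has nothing to do with logging out idle clients at all.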
but it doesn't time out the clients. Looking for answers I went here:
(from IO::Select pod)
select ( READ, WRITE, ERROR [, TIMEOUT ] )
"select" is a static method, that is you call it with the package name like "new". "READ", "WRITE" and "ERROR" are either "undef" or "IO::Select" objects. "TIMEOUT" is optional and has the same effect as for the core select call.
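What I suspect I actually need (untested sketch, and the bookkeeping hash %last_active is my own invention, not anything from the pods): call can_read() with a timeout so the loop wakes up periodically, note the time of each handle's last activity myself, and close anything idle longer than my limit. A self-contained demonstration using a socketpair in place of a real network client, with the timeout shortened to 2 seconds:

```perl
# Sketch of the pattern: can_read() with a timeout, plus my own
# record of when each client was last heard from.
use strict;
use warnings;
use IO::Select;
use Socket;

my $timeout = 2;    # shortened from 300s for demonstration

# a socketpair stands in for a server-side client connection
socketpair(my $client, my $server_side, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
    or die "socketpair: $!";

my $sel         = IO::Select->new($server_side);
my %last_active = ( fileno($server_side) => time );

# client sends something: can_read() returns the handle immediately
syswrite $client, "hello\n";
my @ready = $sel->can_read(1);
if (@ready) {
    sysread $ready[0], my $buf, 1024;
    $last_active{ fileno $ready[0] } = time;
}

# client goes quiet: can_read(1) returns an empty list after ~1s,
# and the sweep below notices the idle handle
sleep $timeout + 1;
@ready = $sel->can_read(1);    # nothing ready this tick

my @idle = grep { time - $last_active{ fileno $_ } > $timeout }
           $sel->handles;

for my $fh (@idle) {           # "log out" the idle client
    $sel->remove($fh);
    close $fh;
}

print scalar(@idle), " idle handle(s) closed\n";
```

In a real server the same sweep would run on every pass of the accept/read loop, after each can_read() tick.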
So now I'm really confused:
Firstly: what is the socket timeout I've set up actually doing?
Secondly: is the "core select" mentioned here the same thing as the Timeout I passed to the socket at startup?
Thirdly: should I be checking return values to catch the timeout event, and if so, where?