Many recent posts, such as Locked threads and tcp timeouts, confront the problem of blocking IO operations. While I'm not a kernel or low-level C expert, I would like to know how we got into a position where IO can block a whole process, without the ability to send an interrupt signal to it, or to easily code around it?

My thoughts are the following:

Doesn't the socket method is_connected give you the ability to detect whether the socket is stuck? Couldn't is_connected be made to bounce a message off the other end, thereby indicating whether the line went down? Then you could do an is_connected test before each read. That is not perfect, and it wastes time, but it would work for small data transfers, considering how fast the lines are now.
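Here is a minimal sketch of that bounce-a-message idea in Python (is_connected isn't a standard socket method, so the `probe` helper below is hypothetical, and it assumes the peer cooperates by echoing the heartbeat byte back):

```python
import socket
import threading

def probe(sock, timeout=1.0):
    """Hypothetical is_connected-style check: bounce one byte off the
    peer and wait briefly for the echo. This needs the peer's cooperation,
    so it's really a tiny protocol, not a free status lookup."""
    old = sock.gettimeout()
    try:
        sock.settimeout(timeout)
        sock.sendall(b"\x00")           # heartbeat byte
        return sock.recv(1) == b"\x00"  # echo came back: line is up
    except (socket.timeout, OSError):
        return False                    # no echo: assume the line went down
    finally:
        sock.settimeout(old)

# Demo with a local socketpair; a thread on 'b' stands in for a
# cooperative remote peer that echoes the probe.
a, b = socket.socketpair()
threading.Thread(target=lambda: b.sendall(b.recv(1)), daemon=True).start()
alive = probe(a)   # peer echoes, so True

b.close()
dead = probe(a)    # peer is gone, so False
```

The weakness the post alludes to is visible here: between a successful probe and the following read, the line can still go down, so the check narrows the window without closing it.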

Would it be that wasteful, in low-level IO code, to have it listen for an interrupt in its read loop, and have that timed out with an alarm?
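On Unix you can actually do exactly that from userland: arm an alarm, let SIGALRM interrupt the blocking read, and turn the interrupt into an exception. A minimal Python sketch (Unix-only, since it relies on SIGALRM):

```python
import signal
import socket

class ReadTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    # Raising here is what aborts the blocked recv(); if the handler
    # returned normally, the interrupted syscall would just be restarted.
    raise ReadTimeout

def read_with_alarm(sock, nbytes, seconds):
    """Blocking read wrapped in an alarm, as the post suggests."""
    old = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return sock.recv(nbytes)
    finally:
        signal.alarm(0)                  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old)

# Nothing is ever sent on this socketpair, so the read would block forever
# without the alarm.
a, b = socket.socketpair()
try:
    read_with_alarm(a, 1, 1)
    timed_out = False
except ReadTimeout:
    timed_out = True
```

So the escape hatch exists; the catch is that signals only interrupt the one blocked thread/process that receives them, which is part of why this gets messy inside threaded code.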

On Linux, the only sure-fire method for avoiding the blocking-IO problem is to make sure the code is forked off, and to kill -9 its pid.
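That fork-and-kill escape hatch looks like this in a Python sketch (Unix-only): the child blocks on a read that will never complete, and the parent reclaims control with SIGKILL, which cannot be caught or blocked.

```python
import os
import signal
import socket
import time

# Hand the possibly-blocking read to a forked child; the parent keeps
# the kill -9 option in reserve.
a, b = socket.socketpair()
pid = os.fork()
if pid == 0:
    # Child: blocks forever, since nothing is ever written to this socket.
    a.recv(1)
    os._exit(0)

time.sleep(0.2)                # give the child time to block in recv()
os.kill(pid, signal.SIGKILL)   # the kill -9 escape hatch
_, status = os.waitpid(pid, 0)
killed = os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGKILL
```

The cost, of course, is a whole process per risky read, plus pipes or shared memory to get the data back out, which is exactly the fork-and-shared-memory style mentioned below.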

Concerning its problems in threads, is this blocking-IO problem the reason venerable old masters like merlyn refused to jump onto the threads bandwagon, saying that forks and shared memory segments work just fine? You can keep your problem-prone threads. :-)

I've touched on quite a few points here, all connected in subspace by the blocking-IO problem, and I've even entertained the thought that it may be in there like a Stuxnet device, to give external devices the ability to lock up programs. A very useful tool for network engineers to have.

So before I put on my titanium-foil hat, would someone care to explain why blocking IO should even be a problem? Is it in the processor design itself?

I'm not really a human, but I play one on earth.
Old Perl Programmer Haiku ................... flash japh