in reply to Re^3: how did blocking IO become such a problem?
in thread how did blocking IO become such a problem?
... “whereas,” if I may presume to impose upon your very apt analogy... “meanwhile, performance or no, the switchboard operator downstairs is quite matter-of-factly handling fifteen calls ‘simultaneously,’ and doing her nails and watching her tea-timer, all ‘at the same time.’”
To me, asynchronous I/O is the only reasonable way to handle things such as this. (And who, frankly, really cares about the POSIX so-called “standard” anyhow?) At any particular millisecond, you are either waiting for another light on the switchboard to light up (and you really do not care which one it is ...), or you are waiting to hear the little bell which tells you that your tea is ready (while doing your nails).
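To make the switchboard concrete, here is a minimal single-threaded sketch of that idea using Python's `selectors` module (my illustration, not anything from the thread): one loop waits on several "lines" at once, without caring in advance which one will light up, and then services whichever ones did.

```python
import selectors
import socket

# One operator (one thread), many lines. The operator registers every
# line with a selector and then simply waits for *any* of them to ring.
sel = selectors.DefaultSelector()

# Three "callers," simulated with socket pairs; the operator watches
# the receiving end of each. (Names here are purely illustrative.)
callers = []
for line_no in range(3):
    caller_end, operator_end = socket.socketpair()
    operator_end.setblocking(False)
    sel.register(operator_end, selectors.EVENT_READ, data=line_no)
    callers.append(caller_end)

# Two callers ring in; the third line stays dark.
callers[0].sendall(b"call on line 0")
callers[2].sendall(b"call on line 2")

handled = {}
# One pass through the loop: block until *something* is ready, then
# service every line that lit up -- one at a time, never interrupted.
for key, _ in sel.select(timeout=1):
    handled[key.data] = key.fileobj.recv(1024)

print(sorted(handled))  # → [0, 2]
```

The point of the sketch is the shape of the loop: a single blocking wait on many sources at once, followed by plain sequential handling of whatever turned up.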
After all, a CPU (which can quite effortlessly react in terms of nanoseconds) can service many hundreds of “simultaneous” connections (which are timed, at best, in terms of milliseconds) more than fast enough, even though (from its point of view...) it is actually servicing them “one at a time.”
Here, the entire über-messy business of “truly being interrupted” is neatly avoided. Even if the tea-timer goes off “during” the call, the operator can easily deal with it after she has dealt with the call. She never actually has to stop what she is doing in order to finish making her tea... she merely has to notice that, sometime while she was handling her last call, the little bell chimed. The processing cycle, although it completes very quickly and might vary considerably in length from one iteration to the next, is never actually aborted. And this reduces the whole thing to something that is quite reliable indeed.
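The “no real interrupts” point can be sketched too (again, my own illustrative code, with made-up timings): the tea-timer is just a deadline that the loop *checks* between tasks. If it expires mid-call, nothing is aborted; the operator simply notices afterwards.

```python
import time

events = []
# The tea-timer: a deadline, not an interrupt. 50 ms is arbitrary.
tea_ready_at = time.monotonic() + 0.05

def handle_call():
    # A call that takes longer than the remaining tea time.
    time.sleep(0.1)
    events.append("call finished")

handle_call()  # the timer "goes off" somewhere in here -- and nothing stops

# Back at the top of the loop: only now does the operator check the bell.
if time.monotonic() >= tea_ready_at:
    events.append("tea handled")

print(events)  # → ['call finished', 'tea handled']
```

Nothing here ever preempts anything else, which is exactly why the cycle stays reliable: every event is handled to completion, and expired timers are picked up on the next trip around the loop.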