Re^5: how did blocking IO become such a problem?
by BrowserUk (Pope) on Feb 21, 2012 at 17:57 UTC
True, but is that not true of fork as well?
Yes. At least to some degree.
And if your application, or this particular part of it, lends itself to that mechanism -- i.e. it doesn't require two-way communication with the parent or other siblings, or more than 8 bits of feedback; it doesn't need write access to shared data; and the asynchronous subprocess will run long enough to offset the start-up cost of a new process -- and your platform supports it natively, then it makes sense to use it.
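To make the "8 bits of feedback" limit concrete, here is a minimal sketch (in Python, since the point is platform-level rather than language-specific): a forked child whose only channel back to the parent is its exit status. The function name and workload are illustrative, not from the original post.

```python
import os

def run_detached_task(n):
    """Fork a child to do work; the parent gets at most 8 bits back."""
    pid = os.fork()
    if pid == 0:
        # Child: shares no writable state with the parent (COW pages);
        # its only feedback channel is the exit status, which the OS
        # truncates to a single byte.
        result = sum(range(n)) % 256
        os._exit(result)
    # Parent: block until the child finishes, then decode its status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

Anything richer than that single byte -- a result set, progress reports -- forces you into pipes, shared memory, or another IPC mechanism, which is exactly where fork stops being the simple option.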
Neither one distorts your code the way "Everything's a callback! Hooray!" does.
I assume that is a dig at AIO. If you use AIO purely at the OS API level, I agree: it can be unintuitive and messy. But so is disk IO if your application has to deal with it at the level of inodes, cluster tables and bitmaps. So, don't. Abstract it.
The most promising [sic] abstraction for AIO is promises. They can encapsulate and abstract not only the callbacks of AIO, but also many other shared-state mechanisms -- queues, message passing, etc. -- in a single, coherent, reliable, easily understood and yet performant interface abstraction.
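A minimal sketch of the idea, using Python's `concurrent.futures` as a stand-in for a promises library (the `fetch` helper is hypothetical): the callbacks, thread pool and synchronisation all hide behind the Future object, and the calling code keeps its linear shape.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a blocking I/O operation (e.g. an HTTP GET).
    return f"contents of {url}"

with ThreadPoolExecutor(max_workers=4) as pool:
    # Each submit() returns a promise (a Future) immediately,
    # so the operations run concurrently behind the scenes.
    futures = [pool.submit(fetch, u) for u in ("a", "b", "c")]

    # .result() blocks only at the point the value is actually
    # needed; no explicit callbacks, locks or queues in sight.
    results = [f.result() for f in futures]
```

The code reads top-to-bottom as if it were serial; the asynchronism is an implementation detail of the abstraction.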
The application programmer's responsibilities become quite simple. The result is a single abstraction that can comfortably encapsulate all of the above forms of asynchronism, and more, in an intuitive, compiler-optimisable mechanism that lets the application programmer write simple, linear-flow code whilst benefiting from whatever parallelism is available. Only exception handling needs out-of-band treatment, and that is, after all, exceptional.
Of course, you don't throw the lower levels away completely -- that would be silly, see below -- but you only deal with a lower level when it is required.
Sometimes I'd rather deal with that than with event-driven programming, and other times I wouldn't.
I neither believe in nor advocate absolute truths or mono-cultures.
In many applications it makes sense to combine two or more of the above techniques. E.g. GUI front-ends to scientific simulations, real-time financial statistical graphing, corporate and governmental data-mining apps.
Run the GUI event loop in one thread; a second event-loop thread receiving real-time data feeds; a third thread communicating with the database; and a fourth performing logging. Spawn further threads on demand to perform complex calculations, or to store or retrieve large volumes of data from backing store. The latter task -- taking snapshots of the system state -- might well be better done using fork and COW.
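A stripped-down sketch of that wiring, in Python: dedicated threads for a data feed, a database writer and a logger, connected by queues. The thread roles, queue names and sentinel protocol are illustrative assumptions, not from the original post.

```python
import queue
import threading

feed_q, log_q = queue.Queue(), queue.Queue()
SENTINEL = object()  # end-of-stream marker passed down the pipeline

def feed_thread():
    # Stand-in for an event loop receiving a real-time data feed.
    for tick in (101.5, 102.0, 101.8):
        feed_q.put(tick)
    feed_q.put(SENTINEL)

def db_thread(stored):
    # Consumes ticks, "stores" them, and forwards a log record.
    while (tick := feed_q.get()) is not SENTINEL:
        stored.append(tick)            # stand-in for a database INSERT
        log_q.put(f"stored {tick}")
    log_q.put(SENTINEL)                # propagate shutdown downstream

def log_thread(lines):
    while (msg := log_q.get()) is not SENTINEL:
        lines.append(msg)              # stand-in for writing a log file

stored, logged = [], []
threads = [threading.Thread(target=feed_thread),
           threading.Thread(target=db_thread, args=(stored,)),
           threading.Thread(target=log_thread, args=(logged,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each thread blocks only on its own queue, so the synchronisation points are confined to the hand-offs rather than scattered through the logic.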
You could try to force-fit all of that into a single event-driven architecture, but having to break up all your long-running calculations into itty-bitty chunks, or intersperse them with regular calls to doOneEvent() to keep the GUI responsive, is both difficult to get right and a waste of precious resources.
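The chunking contortion looks roughly like this sketch (the dispatcher here is a hand-rolled stand-in, not any particular toolkit's doOneEvent()):

```python
# Count of dispatched events, so the chunking is observable.
events_processed = 0

def do_one_event():
    # Stand-in for a GUI toolkit's DoOneEvent(): pump one pending event.
    global events_processed
    events_processed += 1

def chunked_sum(n, chunk=1000):
    """Sum 0..n-1, yielding to the event loop after every `chunk` steps."""
    total, i = 0, 0
    while i < n:
        # Do a small, bounded slice of the real work...
        stop = min(i + chunk, n)
        while i < stop:
            total += i
            i += 1
        # ...then hand control back so the GUI can repaint.
        do_one_event()
    return total

result = chunked_sum(10_000)
```

Every long-running routine has to be rewritten in this resumable style, the chunk size has to be tuned by hand, and the event pumping overhead is paid whether or not any events are pending.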
Conversely, trying to do the whole thing with threading alone would be a mess of synchronisation points and mostly dormant, memory-consuming threads.
Trying to do it with fork alone would be a disaster.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.