True, but is that not true of fork as well?
Yes. At least to some degree.
And if your application, or this particular part of it, lends itself to that mechanism -- i.e. it doesn't require two-way communication with the parent or other siblings, or more than 8 bits of feedback; doesn't need write access to shared data; and the asynchronous subprocess will run long enough to offset the start-up cost of a new process -- and your platform supports it natively, then it makes sense to use it.
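For illustration only -- a minimal sketch, assuming Python on a POSIX platform, with hypothetical stand-in work functions -- that whole contract fits in a dozen lines: the child runs independently, and the only feedback is its 8-bit exit status:

    import os
    import time

    def do_independent_work() -> bool:
        # Stand-in for the real asynchronous task (hypothetical).
        time.sleep(1)
        return True

    def do_other_work() -> None:
        # Stand-in for whatever the parent gets on with meanwhile (hypothetical).
        time.sleep(0.5)

    pid = os.fork()
    if pid == 0:
        # Child: runs independently -- no two-way comms, no shared writes.
        ok = do_independent_work()
        os._exit(0 if ok else 1)        # the only feedback: an 8-bit exit status
    else:
        do_other_work()                 # parent carries on in the meantime
        _, status = os.waitpid(pid, 0)  # collect the child when convenient
        print("child reported:", os.WEXITSTATUS(status))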
Neither one distorts your code the way "Everything's a callback! Hooray!" does.
I assume that is a dig at AIO. If you use AIO purely at the OS API level, I agree, it can be unintuitive and messy. But so is disk IO if your application has to deal with it at the level of inodes, cluster tables and bitmaps. So don't. Abstract it.
The most promising [sic] abstraction for AIO is promises. They can encapsulate and abstract not only the callbacks of AIO, but also many other shared-state mechanisms -- queues, message passing etc. -- in a single, coherent, reliable, easily understood and yet performant interface abstraction:
stuff = promise( 'anything that might take a while' );
... do anything else that you can before you need the stuff
this = promise( 'read a file' );
that = promise( 'fetch something from a remote system' );
theOther = promise( 'query data from the DB' );
... perhaps check to see if stuff is available
if( !stuff->ready? ) doSomethingElse();
...
... if any of the four components isn't ready the statement blocks until it is.
... maybe it needs a timeout either collectively or individually
... but that depends on what else you might do at this point in the code; if anything.
combine( this, that, theOther, stuff );
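In a concrete language the same shape drops out of futures. A minimal sketch, assuming Python's concurrent.futures; the slow_stuff/read_a_file/fetch_remote/query_db/do_something_else bodies are hypothetical stand-ins for the operations in the pseudocode above:

    from concurrent.futures import ThreadPoolExecutor, wait
    import time

    # Hypothetical stand-ins for the slow operations above.
    def slow_stuff():        time.sleep(1.0); return "stuff"
    def read_a_file():       time.sleep(0.2); return "file contents"
    def fetch_remote():      time.sleep(0.5); return "remote payload"
    def query_db():          time.sleep(0.3); return "rows"
    def do_something_else(): pass

    with ThreadPoolExecutor() as pool:
        # Ask for everything as early as you have the information to ask.
        stuff    = pool.submit(slow_stuff)
        this     = pool.submit(read_a_file)
        that     = pool.submit(fetch_remote)
        theOther = pool.submit(query_db)

        if not stuff.done():          # perhaps check whether stuff is available
            do_something_else()

        # Blocks until all four are ready; a collective timeout could go here.
        wait([this, that, theOther, stuff])
        combined = (this.result(), that.result(), theOther.result(), stuff.result())

The point is the same as in the pseudocode: ask early, get on with other work, and only block at the combine.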
The application programmer's responsibilities become quite simple:
- Ask for anything you want as early as you have the information required to get it.
- Arrange, as far as possible, to have that information for the things that take longest, first.
- With a sufficiently intelligent compiler -- think Haskell-level complexity -- even these responsibilities could be alleviated.
But it does require moving away from the old tool chains and the peephole-optimiser/JIT-compiler mentality.
(I.e. scrap gcc and Java, start afresh with a clean slate, and take in the products of the last 20 years of research.)
A single abstraction can comfortably encapsulate all of the above forms of asynchronism, and more, in an intuitive and compiler-optimisable mechanism that allows the application programmer to write simple, linear-flow code whilst benefiting from whatever parallelism is available. Only exception handling needs to go out-of-band, and that is exceptional.
Of course, you don't throw the lower levels away completely -- that would be silly, see below -- but you only deal with a lower level when it is required.
Sometimes I'd rather deal with that than with event-driven programming, and other times I wouldn't.
I neither believe in nor advocate absolute truths or mono-cultures.
- Some applications lend themselves to event-loop structuring. E.g. GUIs, and web servers serving mostly small, static pages.
- Some applications lend themselves to forking. E.g. telnetd, inetd, etc.
- Some applications lend themselves to threading. E.g. database servers.
In many applications it makes sense to combine two or more of the above techniques. E.g. GUI front-ends to scientific simulations, real-time financial statistical graphing, and corporate and governmental data-mining apps.
Run the GUI event loop in one thread; a second event-loop thread receiving real-time data feeds; a third thread communicating with the database; and a fourth thread performing logging. Other threads are spawned on demand to perform complex calculations, or to store or retrieve large volumes of data from backing store. The latter task -- taking snapshots of the system state -- might well be better done using fork and COW.
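A rough sketch of that shape, assuming Python's threading and queue modules; the feed, calculation and logging bodies are hypothetical stand-ins, and the DB thread and the fork/COW snapshot are omitted for brevity:

    import queue
    import threading

    feed_q = queue.Queue()   # real-time feed -> main/GUI thread
    log_q  = queue.Queue()   # any thread     -> logging thread

    def feed_loop():
        # Second event loop: receive real-time data and pass it on (stubbed feed).
        for tick in range(3):
            feed_q.put(f"tick {tick}")
        feed_q.put(None)                      # end-of-feed sentinel

    def log_loop():
        # Dedicated logging thread: drains the log queue until told to stop.
        while (msg := log_q.get()) is not None:
            print("LOG:", msg)

    def heavy_calculation(item):
        # Spawned on demand; stand-in for a long-running computation.
        log_q.put(f"finished calculating on {item}")

    feed   = threading.Thread(target=feed_loop)
    logger = threading.Thread(target=log_loop)
    feed.start(); logger.start()

    workers = []
    # "GUI"/main loop: consume the feed, spawn calculation threads on demand.
    while (item := feed_q.get()) is not None:
        w = threading.Thread(target=heavy_calculation, args=(item,))
        w.start(); workers.append(w)

    for w in workers: w.join()
    feed.join()
    log_q.put(None)                           # let the logger drain and exit
    logger.join()

Each thread owns its own loop and communicates through queues, so there is very little shared mutable state to synchronise.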
You could try to force-fit all of that into a single event-driven architecture, but having to break up all your long-running calculations into itty-bitty chunks, or intersperse them with regular calls to doOneEvent() to keep the GUI responsive, is both difficult to get right and a waste of precious resources.
Conversely, trying to do the whole thing with threading alone would be a mess of synchronisation points and mostly dormant memory consuming threads.
Trying to do it with fork alone would be a disaster.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".