Once again, you offer a few authoritative-sounding 'wisdoms' in place of an answer.
And as usual, on close inspection, not only are they of no value in answering the question actually asked; they are in large part fundamentally incorrect.
- Threading buys you only two things:
This betrays a lack of even the crudest understanding of threads. For example, you completely omit the following "buys" of threading:
- substantially cheaper context spawning.
- substantially cheaper context switching.
- the avoidance of kernel-mode switching for state sharing.
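To make the third point concrete, here is a minimal sketch of state sharing between threads: because threads share one address space, ordinary memory plus a user-space mutex suffices, with no serialisation or kernel-mediated IPC of the kind separate processes would need. (This is illustrative only; CPython's GIL serialises pure-Python execution, and a contended lock may still drop into the kernel.)

```python
import threading

# Threads share the process address space, so shared state is just
# ordinary memory guarded by a mutex -- no pipes, sockets, or shared
# memory segments are needed, unlike with separate processes.
counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:          # synchronise access to the shared variable
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

The equivalent with four separate processes would require an explicit IPC mechanism just to observe the other workers' updates.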
- but it does not alter the fact that I/O is usually the physically limiting determinant of throughput
So, in your experience, there are no applications where a short burst of I/O results in a large volume of CPU-intensive computation?
- It allows you the potential opportunity to utilize multiple CPUs and/or cores, if they exist.
Have you failed to notice:
- Even the cheapest commodity box you can buy now has multiple cores.
They're already turning up in high-end smartphones. Next year they'll be in disposable cells. A year or two beyond that, and dual-core ARMs will be turning up in musical Xmas cards as they are replaced by 4- and 8-core processors for phone/tablet/ICE applications.
- That the practical manifestation of Moore's Law--the doubling of the number of transistors per chip--which until recently meant either a doubling of clock speeds or (less frequently) a doubling of register widths, has effectively hit its practical limits?
You haven't read the writing all around you that says that, from here on, maintaining Moore's Law means the number of cores will likely double every two years whilst clock speeds and register widths stagnate?
- Recursion, on the other hand, is a property of an algorithm.
Recursion is a property of the implementation.
In many cases, algorithms written recursively in a given high-level language are run iteratively by the implementation, as a result of the well-known optimisation 'tail-call elimination'.
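A small sketch of what tail-call elimination does mechanically, using Euclid's GCD as the example. (CPython does not itself perform the optimisation; languages such as Scheme mandate it. The second function is what an eliminating implementation effectively runs in place of the first.)

```python
# The algorithm described recursively; the recursive call is in tail
# position -- nothing remains to be done after it returns.
def gcd_recursive(a, b):
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

# What tail-call elimination effectively produces: the same algorithm
# executed as a loop, in constant stack space.
def gcd_iterative(a, b):
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_recursive(48, 18))  # 6
print(gcd_iterative(48, 18))  # 6
```

Same algorithm, two implementations: one recursive, one iterative.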
- Furthermore, it is significant to observe that recursion is not parallel. The recursive iteration must complete, and deliver its result, before the outer iteration(s) may proceed.
This is a(nother) very naive generalisation.
Many, many algorithms that are habitually described recursively, due to the succinctness and clarity of that description, are trivially convertible to an iterative implementation. And as soon as you have iteration, the potential for simple, direct parallelisation is obvious.
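The conversion chain above can be sketched with a deliberately trivial example, a summation: recursive description, iterative equivalent, then the iteration partitioned across workers. (In CPython the GIL means threads won't actually speed pure-Python arithmetic; the point is the structural transformation, not a benchmark.)

```python
from concurrent.futures import ThreadPoolExecutor

# Habitually described recursively, for succinctness and clarity:
def total_recursive(xs):
    if not xs:
        return 0
    return xs[0] + total_recursive(xs[1:])

# Trivially converted to an iterative implementation:
def total_iterative(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

# ...and once iterative, the work partitions directly across workers:
# each worker sums one chunk, and the partial results are combined.
def total_parallel(xs, workers=4):
    step = max(1, len(xs) // workers)
    chunks = [xs[i:i + step] for i in range(0, len(xs), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(total_iterative, chunks))

data = list(range(1, 101))
print(total_recursive(data), total_iterative(data), total_parallel(data))
# 5050 5050 5050
```

Nothing about the recursive description forced serial execution; the serialisation was an artefact of the chosen implementation.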
- Many algorithms that can be expressed in a recursive fashion, also can be expressed in a massively parallel fashion ... but traditional threading models are not “massive parallelism,” a term that is usually reserved for talking about things like array (co)processors.
Is there such a thing as a "traditional threading model"?
If it did exist, the lines of where and when it existed are long since obscured.
Witness that for a (low) few thousand, you can have 16 cores on your desktop today. And if you have the need, and half a million, an effective 1024 cores is yours for the asking.
Note also that if massively parallel array processing meets your requirements, then a few hundred quid will put the effective equivalent of a Cray-1 on your desktop in the form of an NVIDIA Tegra 2.
The times, they are a-changin'. Change with them, or die.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.