Once again, you offer a few authoritative-sounding 'wisdoms' in place of 'an answer'.

And, as usual, on close inspection they are not only of no value in answering the question asked; they are also in large part fundamentally incorrect.

  1. Threading buys you only two things:

    This betrays a lack of even the crudest understanding of threads. For example, you completely omit the following "buys" of threading:

    • substantially cheaper context spawning;
    • substantially cheaper context switching;
    • the avoidance of kernel-mode switches for state sharing (see the first sketch after this list);
    • and others.
  2. but it does not alter the fact that I/O is usually the physically limiting determinant of throughput

    So, in your experience, there are no applications where a short burst of I/O results in a large volume of CPU-intensive computation?

  3. It allows you the potential opportunity to utilize multiple CPUs and/or cores, if they exist.

    You've failed to notice:

    • Even the cheapest commodity box you can buy now has multiple cores.

      They're already turning up in high-end smartphones. Next year they'll be in disposable cells. A year or two beyond that, and dual-core ARMs will be turning up in musical Xmas cards as they are replaced by 4- and 8-core processors for phone/tablet/ICE applications.

    • That the practical manifestation of Moore's Law (the doubling of the number of transistors per chip), which until recently meant either the doubling of clock speeds or, less frequently, the doubling of the width of the registers, has effectively hit its practical limits?

      You haven't read the writing all around you that says that, from here on, maintaining Moore's Law means the number of cores will likely double every two years whilst clock speeds and register widths stagnate.

  4. Recursion, on the other hand, is a property of an algorithm.

    Recursion is a property of the implementation, not of the algorithm.

    In many cases, algorithms written recursively in a given high-level language are run iteratively by the implementation, as a result of the well-known optimisation of tail-call elimination.

  5. Furthermore, it is significant to observe that recursion is not parallel. The recursive iteration must complete, and deliver its result, before the outer iteration(s) may proceed.

    This is a(nother) very naive generalisation.

    Many, many algorithms that are habitually described recursively, because of the succinctness and clarity of that description, are trivially convertible to an iterative implementation. And as soon as you have iteration, the potential for simple, direct parallelisation is obvious; the second sketch after this list walks through one such conversion.

  6. Many algorithms that can be expressed in a recursive fashion, also can be expressed in a massively parallel fashion ... but traditional threading models are not “massive parallelism,” a term that is usually reserved for talking about things like array (co)processors.

    Is there such a thing as a "traditional threading model"?

    If such a thing ever existed, the lines of where and when it existed have long since been obscured.

    Witness that, for a (low) few thousand, you can have 16 cores on your desktop today. And if you have the need and half a million, an effective 1024 cores is yours for the asking.

    Note also that if massively parallel array processing meets your requirements, then a few hundred quid will put the effective equivalent of a Cray-1 into your desktop in the form of an NVIDIA Tegra 2.
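
To make the state-sharing point in item 1 concrete, here is a minimal sketch, assuming a perl built with ithreads; the four workers and the trivial shared counter are arbitrary choices for illustration. Several threads update one in-process variable directly, with no pipes, sockets or other kernel IPC objects created to carry the data between them:

  #!/usr/bin/perl
  # Sketch: several threads updating one in-process shared variable.
  use strict;
  use warnings;
  use threads;
  use threads::shared;

  my $counter :shared = 0;        # visible to every thread in this process

  sub worker {
      for ( 1 .. 100_000 ) {
          lock $counter;          # in-process synchronisation of the update
          ++$counter;
      }
  }

  my @workers = map { threads->create( \&worker ) } 1 .. 4;
  $_->join for @workers;

  print "counter = $counter\n";   # 400000

Whether Perl's ithreads make that cheap is a separate argument; the point here is only that the exchange of state needs no pipe, socket or shared-memory segment to be set up between separate processes.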

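And to make items 4 and 5 concrete, here is a second minimal sketch: the same divide-and-conquer sum written recursively, then iteratively, then spread across threads. It likewise assumes an ithreads-enabled perl, and cpu_heavy, the 10,000-item range and the four threads are made-up stand-ins for real work:

  #!/usr/bin/perl
  # Sketch: one divide-and-conquer sum, written three ways.
  use strict;
  use warnings;
  use threads;

  # A made-up stand-in for real, CPU-bound work on a single item.
  sub cpu_heavy {
      my $x = shift;
      my $r = 0;
      $r += sqrt( $x * $_ ) for 1 .. 50;
      return $r;
  }

  # 1) The habitual recursive description: split the range, recurse, combine.
  sub sum_rec {
      my ( $lo, $hi ) = @_;
      return cpu_heavy( $lo ) if $lo == $hi;
      my $mid = int( ( $lo + $hi ) / 2 );
      return sum_rec( $lo, $mid ) + sum_rec( $mid + 1, $hi );
  }

  # 2) The same algorithm run iteratively: the recursion was only bookkeeping.
  sub sum_iter {
      my ( $lo, $hi ) = @_;
      my $total = 0;
      $total += cpu_heavy( $_ ) for $lo .. $hi;
      return $total;
  }

  # 3) Once it is iteration, parallelisation is direct: one chunk per thread.
  sub sum_threaded {
      my ( $lo, $hi, $nthreads ) = @_;
      my $chunk = int( ( $hi - $lo + 1 ) / $nthreads ) + 1;
      my @threads;
      for ( my $start = $lo; $start <= $hi; $start += $chunk ) {
          my $end = $start + $chunk - 1;
          $end = $hi if $end > $hi;
          push @threads, threads->create(
              { 'context' => 'scalar' }, \&sum_iter, $start, $end
          );
      }
      my $total = 0;
      $total += $_->join() for @threads;
      return $total;
  }

  printf "recursive: %.3f\n", sum_rec(      1, 10_000 );
  printf "iterative: %.3f\n", sum_iter(     1, 10_000 );
  printf "threaded : %.3f\n", sum_threaded( 1, 10_000, 4 );

Nothing about the problem forced the recursive form; once the work is a loop over independent chunks, handing each chunk to its own thread is mechanical.
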
Times, they aren't a-changin'; they have changed. Change with them, or die.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

In reply to Re^2: [OT]: threading recursive subroutines. by BrowserUk
in thread [OT]: threading recursive subroutines. by BrowserUk
