http://www.perlmonks.org?node_id=1044108


in reply to Re^2: Efficiency: Recursive functions while de-subbing my program
in thread Efficiency: Recursive functions while de-subbing my program

That's about a third of a second. 100k function calls are indeed peanuts. And peanuts are indeed cheap. Showing a peanut next to a dust mite doesn't actually make the peanut larger or more expensive.

Doing nothing useful a whopping 400% faster is not actually a useful accomplishment. I can do nothing useful over 400,000,000% faster by actually doing nothing. That is one enormously impressive number. And it means nothing at all.

- tye        

Re^4: Efficiency: Recursive functions while de-subbing my program
by BrowserUk (Patriarch) on Jul 13, 2013 at 03:21 UTC
    Doing nothing useful a whopping 400% faster is not actually a useful accomplishment.

    The point of the benchmark was that, before you get around to doing anything useful with a subroutine, you have to pay the price of doing that "nothing useful". (That's specifically for Tye, because I know the other readers will have already got that.)
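
    A minimal sketch of the kind of benchmark in question, using the core Benchmark module (the sub body is my illustration, not the original code): the "subbed" case pays for the call itself, the 'my' declaration, the argument copy, and the returned copy before it gets to its one addition.

        use strict;
        use warnings;
        use Benchmark qw( cmpthese );

        # All fixed per-call overhead: receive an argument, declare a
        # 'my' variable, copy the value in, add, copy the result out.
        sub add_one {
            my ($n) = @_;
            return $n + 1;
        }

        my $x = 0;
        cmpthese( -3, {    # run each case for ~3 CPU seconds
            inline => sub { $x = $x + 1 },
            subbed => sub { $x = add_one( $x ) },
        } );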

    We've been here before, and the final outcome of that thread hasn't changed.

    And the reality of OO's propensity to use small subs hasn't changed either, except that with the advent of Moose et al., each method call can involve half a dozen subroutine calls once you've used before, after, around, inbetween, isa, wanna, coulda, etc.
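
    A sketch of that fan-out using Moose method modifiers (before, after and around are the real ones; the Counter class and its inc method are made up for illustration):

        package Counter;
        use Moose;

        has count => ( is => 'rw', isa => 'Int', default => 0 );

        # Note: each call to the generated accessor is a sub call too.
        sub inc { my $self = shift; $self->count( $self->count + 1 ) }

        # Each modifier wraps inc() in another layer of subroutine calls.
        before inc => sub { };      # runs before every inc()
        after  inc => sub { };      # runs after every inc()
        around inc => sub {
            my ( $orig, $self ) = @_;
            return $self->$orig;    # one more hop to reach the original
        };

        package main;
        Counter->new->inc;    # one method call, several sub calls deep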

    And the reality of the effect of Perl's subroutine overhead relative to that of other languages hasn't changed either.

    So, whilst in the OP's target case subroutine overhead is a red herring, dismissing it completely as peanuts is a bad message in the wider scheme of things.

    As with all things, optimisation is about understanding what to do, when to do it, and when not to; the key is understanding, not blanket dismissal.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      The final outcome of that thread was to demonstrate that you don't even understand the difference between micro-optimizations and real optimizations. Which you demonstrate some more here.

      So, whilst in the OP's target case subroutine overhead is a red herring, dismissing it completely as peanuts is a bad message in the wider scheme of things.

      I didn't dismiss the sub overhead completely as peanuts. I concurred with dismissing the very specific case of 100k subroutine calls as peanuts (and then commented on how silly your "OMG! 400% faster, wow!" demonstration was).

      Even if File::Find weren't accessing the file system, the idea of just eliminating subroutine call overhead would still be silly. The subroutines of File::Find aren't even close to as trivially tiny as they would need to be for removing subroutine calls to have a chance of being worthwhile.
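
      A sketch of why (my own illustration, not File::Find's actual code): give the sub body even one real piece of work, such as a filesystem test, and the call overhead stops dominating.

          use strict;
          use warnings;
          use Benchmark qw( cmpthese );

          # Same per-call overhead as a trivial sub, but the body now
          # does real work: '-d' performs a stat against the filesystem.
          sub is_dir { my ($path) = @_; return -d $path }

          cmpthese( -3, {
              inline => sub { my $r = -d '.' },
              subbed => sub { my $r = is_dir( '.' ) },
          } );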

      Your own benchmark even makes that easy to see. You show that doing one useless thing (an addition) is four times faster than doing several useless things (an addition, a subroutine call, a 'my' declaration, copying a passed-in value, and returning a copy of that value).

      - tye