PerlMonks  

Re^3: [OT]: threading recursive subroutines.

by ikegami (Pope)
on Feb 03, 2011 at 08:53 UTC ( #885921=note )


in reply to Re^2: [OT]: threading recursive subroutines.
in thread [OT]: threading recursive subroutines.

But if you are going to explore the threading of recursion in Perl, there are simpler ways than your implementations. This does the job:

No, that does not limit the number of workers.

Equally, the recursive algorithm can be sped up immensely by simply memoising it.

It doesn't seem to be a technique you can use for distributed work, which is where I thought you were going with this.
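For reference, memoising the naive recursive routine is a one-hash change. A minimal sketch in core Perl (the hash and sub names are mine, not from the thread):

```perl
#! perl -slw
use strict;
use 5.010;

my %memo = ( 0 => 0, 1 => 1 );

sub fib {
    my $n = shift;

    # Each value is computed at most once, then served from the hash,
    # collapsing the exponential call tree to a linear one.
    $memo{$n} //= fib( $n - 2 ) + fib( $n - 1 );
}

print fib( 30 );   # 832040
```

With memoisation, fib( 30 ) returns near-instantly, against the ~49 seconds reported below for the purely recursive version.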

All of these modifications make it possible to calculate Fibonacci( 1474 )

The point was to find a technique. Or are you saying the technique will never be as efficient as alternatives?

Anyway, I saw your post about the Ackermann function, and that's a whole different ballgame. I spent a lot of time tinkering with that too after you mentioned it.

As far as I know, one can't make y = f(x); z = f(y); parallel without a human refactoring it, since the second call depends on the first's result, and the Ackermann function is of exactly that form.

That said, it does have interesting traits that may make parallelisation possible (or prove it impossible).

  • If you need to calculate A(m,n), you know you will also need to calculate A(m,i) for 0<i<n.
  • For m>0 and n>0, A(m,n) is a series of n+1 evaluations of A(m-1,i) with increasing i.
  • ...

But I'm in way over my head.


Re^4: [OT]: threading recursive subroutines.
by BrowserUk (Pope) on Feb 03, 2011 at 09:56 UTC
    The point was to find a technique. Or are you saying the technique will never be as efficient as alternatives?

    My point was that using memoisation, iteration or direct calculation, calculating Fibonacci numbers to the limits of the machine's ability to represent the result is practical, whereas the purely recursive routine would run for an inordinate amount of time.
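An iterative version, for comparison (my sketch, not code from the original post), runs in O(n) time and constant space:

```perl
#! perl -slw
use strict;

sub fibIter {
    my $n = shift;

    # Walk the sequence forward, keeping only the last two values.
    my ( $a, $b ) = ( 0, 1 );
    ( $a, $b ) = ( $b, $a + $b ) for 1 .. $n;
    return $a;
}

print fibIter( 30 );   # 832040
```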

    I'm also saying that as you can directly calculate all the representable values of Fib(n) in just over a millisecond:

    #! perl -slw
    use strict;
    use Data::Dump qw[ pp ];
    use 5.010;
    use Time::HiRes qw[ time ];

    use constant {
        GR    => 1.6180339887498948482,
        ROOT5 => sqrt( 5 ),
    };

    sub fibCalc {
        int( GR ** $_[0] / ROOT5 + 0.5 );
    }

    my $start = time;
    fibCalc( $_ ) for 1 .. 1474;
    printf "calculating fibonacci( n ) for n := 1 to 1474 takes: %f seconds\n",
        time() - $start;

    __END__
    C:\test>885839 1000
    calculating fibonacci( n ) for n := 1 to 1474 takes: 0.001122 seconds

    But doing just the first 30 recursively takes 49 seconds (from the post above):

    recursive: Got 24157816; took 49.278000 seconds

    you'd need 50,000 threads, with perfect efficiency gains, for a parallelised recursive routine to calculate those first 30 in the same time that the direct calculation does the first 1474.

    Basically, I don't believe you could ever match the efficiency of the direct calculation, even if you could throw millions of threads at the problem.

    As far as I know, one can't make ...parallel without refactoring by a human, yet the Ackermann function is of that form.

    That said, it does have traits that may make parallelisation possible. For example, if you need to calculate A(m,n), you know you will also need to calculate A(m,i) for 0<i<n.

    But I'm in way over my head.

    Actually, I think that you are at pretty much exactly the same point as I am.

    You see that there is some scope for parallelising Ackermann, but you don't see how to do it.

    There are obviously great tranches of intermediate results that need to be calculated. For Ackermann( 3, 15 ) there are 655365 intermediate results. It seems inconceivable (to me) that there isn't scope for splitting this work across my 4 cores.

    Of course, you can memoise Ackermann and it does effect a good speed-up, but the memory used gets huge very quickly, which makes it impractical as a generic solution. This is true for many similar algorithms.
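A sketch of that memoisation (the names are mine); the cache is keyed on the argument pair, and counting the keys afterwards shows how quickly the table grows even for tiny inputs:

```perl
#! perl -slw
use strict;
use 5.010;
no warnings 'recursion';   # the call chains exceed Perl's depth-100 warning

my %seen;

sub ackMemo {
    my ( $m, $n ) = @_;
    $seen{"$m,$n"} //=
          $m == 0 ? $n + 1
        : $n == 0 ? ackMemo( $m - 1, 1 )
        :           ackMemo( $m - 1, ackMemo( $m, $n - 1 ) );
}

print ackMemo( 3, 5 );       # 253
print scalar keys %seen;     # already hundreds of cached (m,n) pairs
```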


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
Re^4: [OT]: threading recursive subroutines.
by BrowserUk (Pope) on Feb 04, 2011 at 00:07 UTC

    I just noticed this; I don't think it was part of the post when I originally saw it. We must have overlapped.

    No, that does not limit the number of workers.

    Actually, the futures code does limit the number of workers. In the case of the posted sample, to 6 threads (plus the original):

    #! perl -slw
    use strict;
    use futures;

    sub fibonacci {
        my $n = shift;
        return $n if $n < 2;
        my $r1 = futures->new( \&fib_t1, $n -2 );   #1
        my $r2 = futures->new( \&fib_t1, $n -1 );   #2
        return $r1 + $r2;
    }

    sub fib_t1 {
        my $n = shift;
        return $n if $n < 2;
        my $r1 = futures->new( \&fib_t2, $n -2 );   #3 & #5
        my $r2 = futures->new( \&fib_t2, $n -1 );   #4 & #6
        return $r1 + $r2;
    }

    sub fib_t2 {
        my $n = shift;
        return $n if $n < 2;
        return fib_t2( $n -2 ) + fib_t2( $n -1 );
    }

    It does so through the expedient of using 3 levels of sub call. The first two spawn threads, but the third, fib_t2(), uses straight recursion. This was done manually, and 3 levels were chosen so as to have 4 threads running for the majority of the time, to match my 4 cores.

    But it is not hard to see how this could be done automatically at either compile-time or run-time.

    There is the possibility of avoiding the two essentially redundant spawns at the first level, by having the second level return a future that closes over the addition of the two futures it creates.

    Something like:

    #! perl -slw
    use strict;
    use futures;

    sub fibonacci {
        my $n = shift;
        return $n if $n < 2;
        return fib_t1( $n -2 ) + fib_t1( $n -1 );
    }

    sub fib_t1 {
        my $n = shift;
        return $n if $n < 2;
        return futures->new( sub {
            futures->new( \&fib_t2, $n-2 ) + futures->new( \&fib_t2, $n-1 )
        } );
    }

    sub fib_t2 {
        my $n = shift;
        return $n if $n < 2;
        return fib_t2( $n -2 ) + fib_t2( $n -1 );
    }

    In this way, only 4 threads would be created and the original thread would wait for their completion.

    And by eliminating the temporary variables, it is possible to see how the forms of the three levels of subroutine actually closely mirror each other.

    The next step is to move the decision as to which of the forms to use for any given level under the covers of the futures module. At that point, the user constructs a single recursive routine in the familiar way, but using futures for each recursive call, and the module keeps track of how many threads it is spawning.
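Under that scheme the module would fall back to plain recursion once its spawn budget was spent. A hand-rolled sketch of the idea using the core threads module directly (no futures module; the sub name and the budget mechanism are mine, assuming a threaded perl):

```perl
#! perl -slw
use strict;
use threads;

sub pfib {
    my ( $n, $budget ) = @_;
    return $n if $n < 2;
    if ( $budget > 0 ) {
        # Spawn one branch in a new thread; recurse the other
        # in this thread, halving the remaining budget per level.
        my $thr = threads->create( \&pfib, $n - 2, $budget - 1 );
        my $b   = pfib( $n - 1, $budget - 1 );
        return $thr->join() + $b;
    }
    # Budget exhausted: straight recursion from here down.
    return pfib( $n - 2, 0 ) + pfib( $n - 1, 0 );
}

print pfib( 20, 2 );   # 6765
```

A budget of 2 spawns at most 3 extra threads (1 at the top level, 2 below it), so 4 threads share the work, matching the 4-core target above; the decision that is manual here is what the futures module would track automatically.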

    Ultimately, rather than having to use the futures constructor, the goal would be to simply mark the subroutine with an attribute and have it all taken care of transparently. Something like:

    sub fibonacci :future {
        my $n = shift;
        return $n if $n < 2;
        return fibonacci( $n -2 ) + fibonacci( $n -1 );
    }

    Now, everything that was done manually in the second code block above should be feasible without further application programmer intervention.

    As already discussed, it wouldn't benefit such a trivial function, but it could for others.


