
### Re^4: [OT]: threading recursive subroutines.

by BrowserUk (Pope)
 on Feb 03, 2011 at 09:56 UTC ( #885943 )

> The point was to find a technique. Or are you saying the technique will never be as efficient as alternatives?

My point was that with memoisation, iteration or direct calculation, computing Fibonacci numbers up to the limit of the machine's ability to represent the result is practical, whereas the purely recursive routine would run for an inordinate amount of time.
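For comparison, here is a sketch of my own (not code from the thread) of the memoised variant: a hash cache means each fib(n) is computed exactly once, so the exponential blow-up of the naive recursion disappears.

```perl
#!/usr/bin/perl
# Sketch: memoising the naive recursive Fibonacci with a hash cache.
# The defined-or assignment (//=, Perl 5.10+) fills the cache on first
# use and returns the cached value on every later call.
use strict;
use warnings;

my %seen = ( 0 => 0, 1 => 1 );
sub fib {
    my $n = shift;
    return $seen{$n} //= fib( $n - 1 ) + fib( $n - 2 );
}

print fib( 30 ), "\n";    # 832040
```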

I'm also saying that as you can directly calculate all the representable values of Fib(n) in just over a millisecond:

```perl
#! perl -slw
use strict;
use Data::Dump qw[ pp ];
use 5.010;
use Time::HiRes qw[ time ];
use constant {
    GR    => 1.6180339887498948482,
    ROOT5 => sqrt( 5 ),
};

sub fibCalc {
    int( GR ** $_[0] / ROOT5 + 0.5 );
}

my $start = time;
fibCalc( $_ ) for 1 .. 1474;
printf "calculating fibonacci( n ) for n := 1 to 1474 takes: %f seconds\n",
    time() - $start;

__END__
C:\test>885839 1000
calculating fibonacci( n ) for n := 1 to 1474 takes: 0.001122 seconds
```

But doing just the first 30 recursively takes 49 seconds (from the post above):

```
recursive:  Got 24157816; took 49.278000 seconds
```

You'd need 50,000 threads and perfect efficiency gains for a parallelised recursive routine to calculate those first 30 in the same time that the direct calculation does the first 1474.
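That figure is roughly the ratio of the two timings quoted above; a back-of-envelope check (my arithmetic, using the numbers as posted):

```perl
#!/usr/bin/perl
# 49.278 s for the recursive first 30 vs 0.001122 s for the direct
# first 1474: perfect linear scaling would need their ratio in threads.
use strict;
use warnings;

my $ratio = 49.278 / 0.001122;
printf "%.0f threads at perfect efficiency\n", $ratio;    # ~44,000
```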

Basically, I don't believe you could ever match the efficiency of the direct calculation, even if you could throw millions of threads at the problem.

> As far as I know, one can't make ...parallel without refactoring by a human, yet the Ackermann function is of that form.
>
> That said, it does have traits that may make parallelisation possible. For example, if you need to calculate A(m,n), you know you will also need to calculate A(m,i) for 0 < i < n.
>
> But I'm way over my head.
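That dependency can be seen directly with a memoised Ackermann (a sketch of mine, not code from the thread): after computing A(m, n), the cache also holds A(m, i) for every i < n as a side effect.

```perl
#!/usr/bin/perl
# Sketch: compute A(2, 5) with a memoised Ackermann, then check that
# A(2, i) for every i < 5 landed in the cache along the way.
use strict;
use warnings;
no warnings 'recursion';    # Ackermann nests deeply even for tiny inputs

my %A;
sub ack {
    my ( $m, $n ) = @_;
    return $A{"$m,$n"} //= $m == 0 ? $n + 1
         : $n == 0               ? ack( $m - 1, 1 )
         :                          ack( $m - 1, ack( $m, $n - 1 ) );
}

print 'A(2,5) = ', ack( 2, 5 ), "\n";    # 13
print "cached: A(2,$_)\n" for grep { exists $A{"2,$_"} } 0 .. 4;
```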

Actually, I think that you are at pretty much exactly the same point as I am.

You see that there is some scope for parallelising Ackermann, but you don't see how to do it.

There are obviously great tranches of intermediate results that need to be calculated. For Ackermann( 3, 15 ) there are 655365 intermediate results. It seems inconceivable (to me) that there isn't scope for splitting this work across my 4 cores.

Of course, you can memoise Ackermann and it does effect a good speed-up, but the memory used gets huge very quickly, which makes it impractical as a generic solution. This is true for many similar algorithms.
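The memory growth is easy to demonstrate (my illustration, using the same simple hash-cache memoisation as above): counting cached entries after A(3, n) for increasing n shows the cache ballooning alongside the result.

```perl
#!/usr/bin/perl
# Sketch: how many intermediate results a memoised Ackermann caches
# as n grows. The cache size tracks the (exponentially growing) result.
use strict;
use warnings;
no warnings 'recursion';

my %A;
sub ack {
    my ( $m, $n ) = @_;
    return $A{"$m,$n"} //= $m == 0 ? $n + 1
         : $n == 0               ? ack( $m - 1, 1 )
         :                          ack( $m - 1, ack( $m, $n - 1 ) );
}

for my $n ( 4 .. 8 ) {
    %A = ();
    ack( 3, $n );
    printf "A(3,%d) = %5d needs %5d cached results\n",
        $n, $A{"3,$n"}, scalar keys %A;
}
```

Since A(3,n) = 2^(n+3) - 3, both the result and the cache roughly double with every increment of n, which is why memoisation stops being a generic answer here.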

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
