Re: Non-recursive Ackermann

by ikegami (Patriarch)
on Feb 04, 2011 at 00:31 UTC


in reply to [OT]: threading recursive subroutines.

Using the dependency graph I posted earlier, I was able to construct a non-recursive implementation of the Ackermann function. It visits the graph left to right, top to bottom.

use strict;
use warnings;
use feature qw( say );

sub A {
    my ($m, $n) = @_;

    # $A[$mi] holds A($mi, $n[$mi]) for each row 0..$m.
    # The -1/1 start values make row $mi first advance (to n = 0)
    # when row $mi-1 reaches argument 1, per A(mi, 0) = A(mi-1, 1).
    my (@n, @A);
    for my $mi (0..$m) {
        $n[$mi] = -1;
        $A[$mi] = 1;
    }

    for (;;) {
        # Bottom row: A(0, n) = n + 1.
        ++$n[0];
        $A[0] = $n[0] + 1;

        # A(mi, n+1) = A(mi-1, A(mi, n)), so row $mi can advance
        # exactly when row $mi-1's argument has caught up to $A[$mi].
        for my $mi (1..$m) {
            last if $A[$mi] != $n[$mi-1];
            $A[$mi] = $A[$mi-1];
            ++$n[$mi];
        }

        return $A[$m] if $n[$m] == $n;
    }
}

say A(@ARGV);
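
If the snippet is saved as, say, ackermann.pl (the file name here is mine), a quick check against the known small values:

    $ perl ackermann.pl 2 3
    9
    $ perl ackermann.pl 3 3
    61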

Maybe you'll have better luck with that.

Some properties:

  • It only keeps m+1 Ackermann numbers in memory at once.
  • It never calculates the same Ackermann number twice.

m < 4 can be special-cased. Known formulas exist for A(0,n), A(1,n), A(2,n) and A(3,n).
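
For reference, the closed forms in question are:

    A(0, n) = n + 1
    A(1, n) = n + 2
    A(2, n) = 2*n + 3
    A(3, n) = 2**(n+3) - 3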

Update: Cleaned up code a bit.

Re^2: Non-recursive Ackermann
by BrowserUk (Patriarch) on Feb 04, 2011 at 09:52 UTC
    I was able to construct a non-recursive implementation of the Ackermann function.

    Impressive work!

    Maybe you'll have better luck with that.

    I couldn't see much opportunity for parallelisation in your first version.

    Update: Cleaned up code a bit.

    That's certainly easier to follow. (*)

    The 'obvious' thing would be to async the inner for loop. Except that, with the maximum realistic value of m being 5, starting a thread for 5 iterations is nonsense.

    So then my attention moves to the outer loop, with the thought that you can often split the range of a loop into subranges and run them in parallel.
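
    (For concreteness, that split-into-subranges pattern looks roughly like the following; a minimal sketch assuming the standard threads module, with work() standing in for the per-iteration job:)

    use strict; use warnings;
    use threads;

    sub work { my( $i ) = @_; return $i * $i }   # placeholder per-iteration job

    my( $lo, $hi, $workers ) = ( 0, 999, 4 );
    my $chunk = int( ( $hi - $lo + 1 ) / $workers ) + 1;

    my @thr;
    for my $w ( 0 .. $workers - 1 ) {
        my( $from, $to ) = ( $lo + $w * $chunk, $lo + ( $w + 1 ) * $chunk - 1 );
        $to = $hi if $to > $hi;
        push @thr, threads->create( sub {
            my $sum = 0;
            $sum += work( $_ ) for $from .. $to;   # subranges are independent
            return $sum;
        } );
    }

    my $total = 0;
    $total += $_->join() for @thr;
    print "$total\n";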

    But, firstly, we don't know the range. Secondly, even iteratively, the dependency of each subsequent iteration upon the previous would require lock-stepping the threading--either through complex locking or queuing--and the minimal effort required for each step of each iteration, versus the cost of locking and context switching, again makes that a non-starter.

    Upshot: I think you called it earlier with your z = F(x); return F(z); comment that went over my head at the time. In the light of that, I've modified my first tentative rule from a post above to be:

    For a recursive algorithm--that is not tail recursive--to be a suitable candidate for parallelisation, it requires that it has multiple, independent, recursive calls at each level.
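
    (To illustrate: a minimal sketch, not from the thread, of an algorithm that does qualify; the naive fib, whose two recursive calls at each level are independent. Assumes the standard threads module; the cutoff is an arbitrary guess at where a thread stops being worth its cost.)

    use strict; use warnings;
    use threads;

    use constant CUTOFF => 25;   # below this, plain recursion beats spawning a thread

    sub fib {
        my( $n ) = @_;
        return $n if $n < 2;
        return fib( $n - 1 ) + fib( $n - 2 ) if $n < CUTOFF;

        # The two calls share no state, so one can run in a worker thread
        # while the current thread evaluates the other.
        my $thr = threads->create( \&fib, $n - 1 );
        my $rhs = fib( $n - 2 );
        return $thr->join() + $rhs;
    }

    print fib( 30 ), "\n";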

    The interesting thing now is: how many recursive algorithms fit that pattern?

    m < 4 can be special-cased. Known formulas exist for A(0,n), A(1,n), A(2,n) and A(3,n).

    Indeed. The following returns all the tractable cases within the machine's capability in the blink of an eye:

    sub ackermann_c {
        use constant INF => exp( ~0 >> 1 );
        my( $m, $n ) = @_;
        --$m, $n = 1 if $n == 0;                # A(m, 0) = A(m-1, 1)
        return $n + 1           if $m == 0;
        return $n + 2           if $m == 1;
        return 2*( $n+3 ) - 3   if $m == 2;
        return 2**( $n+3 ) - 3  if $m == 3;
        return INF unless $m == 4;
        my( $p, $r ) = ( $n+3, 2 );
        $r = 2**$r while --$p;                  # tower of ($n+3) twos
        return $r - 3;
    }
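
    For example, ackermann_c( 3, 3 ) gives 61 and ackermann_c( 4, 1 ) gives 65533; at m == 4, n >= 2 the power tower already overflows a double and comes back as infinity.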

    But that now means I need to find a more suitable testcase for my experiments.

    An in-place, parallelised qsort maybe?

    (*) Though I wonder why you use a loop initialisation rather than my @n = ( -1 ) x $m; my @A = ( 1 ) x $m; given that m will realistically max out at 5. And whilst I'm on unimportant, purely subjective matters, why for(;;) and not while( 1 )?


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      [ Just a few rushed little comments ]

      Thanks!

      we don't know the range.

      would require lock-stepping the threading--

      with the minimal effort required for each step

      I reached the same conclusions.

      it requires that it has multiple, independent, recursive calls at each level.

      That's why I used fib at first and called Ackermann a whole other ball game.

      The interesting thing now is: how many recursive algorithms fit that pattern?

      Any divide and conquer algorithm, for starters. That's where the dependency graph came in. I wanted to see if there was a split in it that could be exploited.

      wonder why you do a loop initialisation rather than

      I didn't know what I was going to end up with when I started. The code got refactored multiple times. I didn't micro-optimise.

      (Upd: Put differently, the snippet is a theoretical implementation, not a practical one. Work still needs to be done to make it practical. Well, as practical as Ackermann can be. )

      why for(;;) not while( 1 )?

      It's what the code I learned from used. I've never used while (1). I like for (;;) because I read it as "for ever". Trivia: since the entire bound check of while (CONSTANT) gets optimised away, and since "while" and C-style "for" use the same opcodes, while (1) and for (;;) are practically identical internally.
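
      (One way to see it, if curious, is to compare the compiled op trees; the exact output varies by perl version, but the loop structure comes out the same for both:)

          perl -MO=Concise -e 'while (1) { print "x" }'
          perl -MO=Concise -e 'for (;;)  { print "x" }'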
