
### Iterative vs Recursive Processes

by mvaline (Friar) on May 12, 2003 at 15:39 UTC
mvaline has asked for the wisdom of the Perl Monks concerning the following question:

As I mentioned in Passing subroutines as arguments, I'm working through SICP and trying to apply what I'm learning to Perl.

SICP contrasts iterative vs recursive processes and stresses the importance of not confusing this with iterative vs recursive procedures. Whereas the term "recursive procedure" refers merely to the syntactic fact that a procedure definition refers to the procedure itself, when we describe a process as recursive or iterative, we are talking about how the process evolves, not how it is written.

SICP also says that the distinction between process and procedure can be confusing today because most implementations of common languages (including C) are designed in such a way that the interpretation of any recursive procedure consumes an amount of memory that grows with the number of procedure calls, even when the process described is, in principle, iterative. As a consequence, these languages can describe iterative processes only by resorting to special-purpose looping constructs. LISP, of course, does not share this defect and will execute an iterative process in constant space, even if the iterative process is described by a recursive procedure. Languages with this property are called "tail-recursive."

I've translated the examples in SICP into Perl: a theoretically recursive and iterative way to compute factorials. When Perl executes these subroutines, is there any difference between them? Is Perl tail-recursive?

```perl
#!/usr/bin/perl -w

use strict;

my $result = factorial_recursive(5);
print "factorial_recursive(5) returns $result\n";
$result = factorial_iterative(5);
print "factorial_iterative(5) returns $result\n";

sub factorial_recursive {
    my $n = shift;
    if ($n == 1) {
        return 1;
    } else {
        return $n * factorial_recursive($n - 1);
    }
}

sub factorial_iterative {
    my $n = shift;
    return fi_helper(1, 1, $n);
}

sub fi_helper {
    my ($product, $counter, $max_count) = @_;
    if ($counter > $max_count) {
        return $product;
    } else {
        return fi_helper($counter * $product, $counter + 1, $max_count);
    }
}
```

Re: Iterative vs Recursive Processes
by Thelonius (Priest) on May 12, 2003 at 16:43 UTC
The short answer is no. I don't know if the Perl 6/Parrot team is talking about doing any tail-recursion optimization. Here is a version with goto &. If your version of Perl was not compiled with DEBUGGING_MSTATS, you'll have to remove the call to mstat(). However, you can still notice that if you run this with 100 as the argument, you don't get a "Deep recursion" warning, whereas you do with either of your functions.

If you do run with mstat, you'll be able to see that there is (unfortunately) still linear growth of memory usage with this version, even though the stack growth is bounded.

```perl
#!/usr/bin/perl -w
use strict;
use Devel::Peek qw(mstat);

my $testn = shift || 50;

my $result = factorial_iterative($testn);
print "goto factorial_iterative($testn) returns $result\n";

sub factorial_iterative {
    my $n = shift;
    return fi_helper(1, 1, $n);
}

sub fi_helper {
    my ($product, $counter, $max_count) = @_;
    if ($counter > $max_count) {
        mstat "return product";
        return $product;
    } else {
        @_ = ($counter * $product, $counter + 1, $max_count);
        goto &fi_helper;
    }
}
```
Update:

After thinking about it a while, I was able to prevent any memory growth by getting rid of all arguments to fi_helper and making them "global" variables:

```perl
#!/usr/bin/perl -w
use strict;
use Devel::Peek qw(mstat);

my $testn = shift || 50;

my $result = factorial_iterative($testn);
print "goto factorial_iterative($testn) returns $result\n";

my ($product, $counter, $max_count);

sub factorial_iterative {
    my $n = shift;
    ($product, $counter, $max_count) = (1, 1, $n);

    return fi_helper();
}

sub fi_helper {
    if ($counter > $max_count) {
        mstat "return product";
        return $product;
    } else {
        ($product, $counter, $max_count) =
            ($counter * $product, $counter + 1, $max_count);
        goto &fi_helper;
    }
}
```
Come to think of it, it has now become an ordinary goto with a little syntactic sugar.
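Thelonius's last observation can be made explicit: once the arguments live in package variables and the tail call is a `goto`, the helper is just a loop written sideways. A minimal sketch of the same computation as an ordinary `while` loop (the sub name is my own):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The goto-&sub-with-globals version is equivalent to this plain loop:
# the "state variables" become lexicals, and each tail call becomes the
# next trip around the loop.
sub factorial_loop {
    my $n = shift;
    my ($product, $counter) = (1, 1);
    while ($counter <= $n) {
        ($product, $counter) = ($counter * $product, $counter + 1);
    }
    return $product;
}

print factorial_loop(5), "\n";    # prints 120
```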

I don't know if the Perl 6/Parrot team is talking about doing any tail-recursion optimization.

According to this post, Parrot does support the mechanism for tail recursion, so I'm betting the language team will put it in.

kelan

Perl6 Grammar Student

Fascinating, thanks! I will be meditating on your code.
Re: Iterative vs Recursive Processes
by tilly (Archbishop) on May 12, 2003 at 18:01 UTC
In addition to the above, I would like to point out that it is theoretically impossible for Perl 5 to be tail-recursive. Using caller you are supposed to be able to generate a complete stack backtrace. There is no way to reconcile that ability with trying to automatically notice that certain frames will not be used again so you can pre-emptively throw them away.
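tilly's point can be illustrated directly: `caller(N)` walks every pending frame, so a frame that had been thrown away by a tail-call optimization would be visibly missing from the backtrace. A small sketch (the sub names are my own) showing that each level of a tail-positioned recursion still leaves a frame for `caller` to report:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Count how many call frames are on the stack right now.
sub stack_depth {
    my $d = 0;
    $d++ while caller($d);
    return $d;
}

# The recursive call is in tail position, yet every level still adds a
# frame that caller() can see -- which is exactly what an automatic
# tail-call optimization would have to discard.
sub countdown {
    my ($n) = @_;
    return stack_depth() if $n == 0;
    return countdown($n - 1);
}

print countdown(5), "\n";    # five frames deeper than countdown(0)
```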

This is an important point. Adding highly dynamic features (such as the ability to ask for a stack backtrace whenever you want) frequently conflicts with asking for nice optimizations. Which is better? The debugging support of Carp's confess(), or the performance benefit of optimizing tail-recursion? (There isn't a right answer, only a trade-off.)

... it is theoretically impossible for Perl 5 to be tail-recursive.

I disagree with this, or at least with the sentiment I inferred from it. While tail-recursion optimisation could not be turned on by default for the reasons you cite, I see no reason that this and other optimisations could not be introduced optionally by means of lexically-scoped pragmas.

In principle, a more complex approach would also be possible: always to perform the optimisation, but to retain enough information to emulate the theoretical caller stack on demand. I wouldn't recommend this approach though - for a simple recursion, you'd probably need to retain only the number of levels of recursion to emulate, but it would be much more difficult to cope with even slightly more complex cases, such as a subroutine that called itself from two different places, or a pair of co-recursive subroutines.

Hugo

Are there algorithms that would benefit from tail recursion optimisation -- which I have always taken to mean that they leave nothing in the stack frame by way of intermediate data or state that is required when unwinding -- that aren't readily and easily re-cast as iterative algorithms in the first place?

Factorial isn't a great example, but it is a common one:

```perl
sub factorial {
    my ($f, $n) = (1, shift);
    $f *= $n-- while $n;
    return $f;
}
```

To my eyes, even with 1-char variable names and no comments, this is so much clearer than the equivalent recursive routine that I can't understand why I would do it any other way.

The big hole in my language experience is any serious use of the Lisp-ish branch of the family. To my shame, I never got past the 'Loads of Infuriatingly Stupid Parentheses' (and ridiculously terse and meaningless keywords) state of mind. I tried a couple of times to go back and address this omission, but I encounter the same Aaaaarg! sensations every time.

The whole tail-recursion debate seems to stem from those languages where recursion is used almost exclusively (it's an exaggeration, I know, but not much of one) to perform any form of repetition. Early on, they found that using recursion for essentially iterative algorithms caused explosive memory usage to little benefit, so they invented tail-recursion. The invention was very successful in decreasing the inefficiencies of this practice, and so the original bug-fix hack has taken on the status of an algorithmically 'fundamentally good idea'+.

So my question is: Can anyone educate me, by showing me an algorithm that is better (by some definition) implemented using a tail-recursive method than an iterative method.

+ I am not the originator of these notions, but I couldn't find an authoritative quote anywhere, and as I am no longer in contact with the person who expressed them, convincingly, to me, I'm quite happy to take the rap for them.

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
I agree that if you want to optionally change caller's behaviour, then you can get tail-recursion to work.

However I am having extreme difficulty in seeing how, even in principle, one could make your more complex approach work. One could save memory by compressing the information in the caller stack. But I fail to see how one could regenerate the intermediate results in a complex computation on demand without redoing the computation - and that only works if you readily know (which you don't in Perl) that the computation is side-effect free.

Using caller you are supposed to be able to generate a complete stack backtrace

Wouldn't:

Be aware that the optimizer might have optimized call frames away before "caller" had a chance to get the information. That means that caller(N) might not return information about the call frame you expect it to, for "N > 1". In particular, @DB::args might have information from the previous time "caller" was called.

(from caller's docs) let you out of that particular hole? Tail recursion would just be another case to add to the existing ones. Not entirely backwards compatible, but not so bad as to disallow the concept, I would have thought...

Good point.

However even so I would argue against turning it on naively. You see in Perl part of cleaning up a call-frame is destroying temporary lexical variables and undoing invocations of local. When the things destroyed are objects that may have side-effects in their DESTROY methods (see ReleaseAction for an example where this happens) then trying to optimize tail-recursion creates a significant change in what the code does.
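A small sketch of the DESTROY-timing difference tilly describes (the `Guard` package and sub names are my own; this relies on the documented behavior of `goto &sub` exiting the current subroutine, and so tearing down its lexicals, before the named sub runs):

```perl
#!/usr/bin/perl
use strict;
use warnings;

our @events;    # record the order in which things happen

# A tiny object that announces its own destruction.
package Guard;
sub new     { my ($class, $name) = @_; return bless { name => $name }, $class }
sub DESTROY { push @main::events, "DESTROY $_[0]{name}" }

package main;

sub inner { push @events, 'inner' }

# Normal call: the lexical guard outlives inner(); DESTROY fires after.
sub normal_call {
    my $g = Guard->new('normal');
    inner();
}

# Tail call via goto: the current frame (and $g with it) is torn down
# *before* inner() runs, so DESTROY fires first -- a visible change in
# behavior, which is why turning this on automatically would be unsafe.
sub tail_call {
    my $g = Guard->new('tail');
    @_ = ();
    goto &inner;
}

normal_call();
tail_call();
print join(", ", @events), "\n";
```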

Which brings us to another theme. A lot of the theory folks argue for side-effect free programming (this is what "functional programming" really means - as opposed to the bad meme that I started) because (among other things) it gives the compiler far greater opportunity to optimize and reorder how things happen. (Proponents argue that it also makes big programs easier to understand.) However we naive humans find avoiding side-effects a difficult discipline, which makes a lot of theoretical optimizations very hard to apply in practical languages.

As Larry Wall said, optimization is a form of cheating, which is why you always get caught. And the more free-form the language is, the more requirements you put on the compiler, and the easier it is to catch it out when you try to cheat. :-)

Re: Iterative vs Recursive Processes
by derby (Abbot) on May 12, 2003 at 16:03 UTC

Thanks! I should have mentioned that I had read that post and the very interesting Pure Perl tail call optimization in my RTFM search prior to my post. I wasn't able to find an answer to my question, which is simply this: if I write a recursive procedure like factorial_iterative, which looks like it ought to generate an iterative process, does it in fact execute in constant memory as it appears it should, or does its memory usage grow linearly the way factorial_recursive's does?

Perhaps my question is foolish and someone wiser than I could show me the error of my ways.

if I write a recursive procedure like factorial_iterative ... does its memory usage grow linearly the way factorial_recursive looks like it should.
Yep, since it's just a wrapper around a recursive sub, fi_helper. Recursion isn't terribly efficient in perl since function calls aren't too swift and take up a fair bit of memory (hence the recursion warning; see perldiag). An iterative approach however will be a bit quicker, e.g.
```perl
use strict;

use Benchmark 'cmpthese';

my $limit = @ARGV ? shift : 100;

cmpthese(-10, {
    recurse => sub { rec($limit) },
    iterate => sub { itr($limit) },
});

sub rec {
    my $n = shift;
    return $n == 1 ? 1 : $n * rec($n - 1);
}

sub itr {
    my ($n, $r) = (shift, 1);
    $r *= $n-- while $n > 1;
    return $r;
}

__output__

Benchmark: running iterate, recurse for at least 10 CPU seconds...
   iterate: 11 wallclock secs (10.58 usr +  0.00 sys = 10.58 CPU) @ 5814.45/s (n=61546)
   recurse: 11 wallclock secs (10.23 usr +  0.00 sys = 10.23 CPU) @ 1677.45/s (n=17162)
            Rate recurse iterate
recurse 1677/s      --    -71%
iterate 5814/s    247%      --
```
Of course all benchmarks are to be taken with a grain of salt, but I think this gives a pretty clear illustration of the hit recursion incurs.
HTH

_________
broquaint

I couldn't say... but the way I would test would be to give each one a very large number and watch the process with 'top' to follow its memory usage. Should give you a good idea.

- Ant
- Some of my best work - (1 2 3)

Re: Iterative vs Recursive Processes
by demerphq (Chancellor) on May 12, 2003 at 22:07 UTC

Tail recursion optimization is one of those interesting subjects that seems to fascinate people (for what to me are admittedly obscure reasons :-) and generate a lot of hoopla every so often. Perl doesn't handle this type of optimization, but it's not a big loss. Since all tail recursion optimization does is provide a fancy way to write a while loop, you can still do the same thing without it (and IMO have clearer code, but that's another story....). I guess my point here is not to get too hung up on recursion vs iteration.

Incidentally I should mention that it is very much worth the challenge to take recursive routines and reimplement them iteratively in my opinion. A really good example is implementing tied hashes. If you are using a tree for the underlying data structure, the inclination is to implement your algorithms recursively, (FETCH, STORE and the like). However you are then stuck with a problem: how to implement FIRSTKEY() and NEXTKEY(). Typically you need a way to represent the stack state of the recursion internally to the object. This essentially necessitates implementing the standard tree recursion iteratively with your own stack, which can be quite tricky if you've been thinking heavily recursively immediately before.
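The FIRSTKEY/NEXTKEY situation demerphq describes comes down to replacing the call stack with one you manage yourself. A sketch of an in-order tree walk done iteratively with an explicit stack (the tree layout and sub name are my own):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# In-order traversal of a binary tree without recursion: the explicit
# @stack plays the role the call stack plays in the recursive version,
# and is exactly the state a tied hash would need to carry between
# NEXTKEY calls.
sub inorder_keys {
    my ($root) = @_;
    my (@stack, @keys);
    my $node = $root;
    while (@stack || $node) {
        if ($node) {
            push @stack, $node;      # defer this node...
            $node = $node->{left};   # ...and descend left first
        } else {
            $node = pop @stack;      # leftmost pending node
            push @keys, $node->{key};
            $node = $node->{right};  # then walk its right subtree
        }
    }
    return @keys;
}

# A small test tree:   b
#                     / \
#                    a   c
my $tree = {
    key   => 'b',
    left  => { key => 'a', left => undef, right => undef },
    right => { key => 'c', left => undef, right => undef },
};
print join(' ', inorder_keys($tree)), "\n";    # prints: a b c
```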

Also, when optimizing, eliminating recursive algorithms often produces significant speedups because of the fine-grained resource management that iteration both provides and requires. Just the other day I reworked a recursive tree algorithm to use iteration and saw over 150% speedup for my efforts, although granted the original code was hardly as efficient as it could be.

I find that a lot of the time I'll code something the first time recursively, and then rewrite it iteratively given what I've learned. Anyway. :-)

I hope you keep posting issues from your reading. So far it's been interesting stuff. Cheers.

---
demerphq

<Elian> And I do take a kind of perverse pleasure in having an OO assembly language...
Re: Iterative vs Recursive Processes
by vladb (Vicar) on May 13, 2003 at 17:47 UTC
I may be missing some deeper nested (no pun intended :) point, but how is your factorial_iterative() iterative? Inside you simply make a call to fi_helper(), which is a recursive subroutine.

An iterative routine would be one which didn't make a call to itself and instead used loops to complete the task (as some other commenters already did).

(ps: the subject of this thread is great and I enjoyed reading both your and other monks' posts :)

update: mvaline, ahh that makes sense now ;-) Thanks for your reply.

_____________________
"We've all heard that a million monkeys banging on a million typewriters will eventually reproduce
the entire works of Shakespeare. Now, thanks to the Internet, we know this is not true."

Robert Wilensky, University of California

It is a bit tricky. The point I made in my post is that there is (or at least should be) a difference between a recursive procedure and a recursive process. The term recursive procedure only refers to the fact that the procedure definition refers to itself. This doesn't necessarily result in a recursive process.

The difference between the processes is whether the process is characterized by linear growth because of deferred stack operations (recursive), or whether it runs in constant space because the state of the computation is captured entirely by a fixed set of state variables (iterative).

fi_helper() is indeed a recursive procedure, but written in a language that supports tail-recursion (like LISP), it doesn't generate a recursive process. My question was whether it would generate a recursive process in Perl.

I see - you refer to the source as describing the procedure. So in this case a given procedure can, within its definition, use itself and thus become a recursive procedure. Moving onto "process", you're referring to the state accrued through execution of a process. You're stating that a process is recursive if it maintains its stack (sort of). You can have perl throw away the current context by using goto &sub, though that isn't an optimal way to code. You're more likely to write that code as an iterator if you intend to write it in a way that perl won't penalize you for.

Node Type: perlquestion [id://257464]
Approved by diotalevi