Re^4: Behold! The power of recursion.
by stvn (Monsignor) on Oct 18, 2004 at 15:52 UTC
As with any tool, it's all about how you use it.
The common misconception is that recursion is inefficient because each iteration entails calling a subroutine. In a lot of languages, this means allocating a stack frame and freezing the previous stack frame until the next one exits. This can quickly eat up resources and, depending upon the complexity of the recursion, leave a number of half-finished computations in flight. The result is that your function's memory/stack usage grows with every level of recursion. A common technique to avoid this in Scheme and other languages (even including C with the right compiler) is called Tail Call Optimization.
Tail Call Optimization is possible when you have a tail recursive function: one whose last act (the computation of the return value) is the recursive call itself. (Since this is a very common shape for recursive functions, it works out well.) Most factorial implementations, however, only look tail recursive: the recursive call appears in the final expression, but a multiplication is still pending after it returns, so they are not eligible for tail call optimization. Take this one for example:
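A minimal Perl sketch of that kind of factorial (the sub name is illustrative):

```perl
# Naive recursive factorial: the multiplication happens *after*
# the recursive call returns, so the call is not a proper tail
# call, and every level must keep its frame alive.
sub factorial {
    my ($n) = @_;
    return 1 if $n <= 1;
    return $n * factorial($n - 1);
}
```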
It will do most of the bad things I mentioned above (except that Perl doesn't use a stack quite like C does). Each call has to wait on the next call's return before it can perform its multiplication. This type of function is very hard for a compiler to optimize, because each level of recursion depends upon the next one to complete its work.
And now (keeping in mind that the particular compiler must do the real optimization) take a look at this modification of the factorial function. It is still recursive, but it is now in "proper tail call" form.
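A sketch of that accumulator-passing form in Perl (the second parameter, here called $acc, carries the partial product; names are illustrative):

```perl
# Tail-recursive factorial: the result-so-far travels in an
# accumulator, so the recursive call is the very last thing the
# sub does -- a proper tail call that a TCO-capable compiler
# could turn into a loop.
sub factorial_tail {
    my ($n, $acc) = @_;
    $acc = 1 unless defined $acc;
    return $acc if $n <= 1;
    return factorial_tail($n - 1, $n * $acc);
}
```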
Most modern optimizing compilers will be able to turn this version into machine code equivalent to the iterative version. This means that instead of a call structure like this:
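For the naive version, that structure looks roughly like this (a sketch; each level is a frame waiting on the one below it):

```
fact(5)
  5 * fact(4)
    4 * fact(3)
      3 * fact(2)
        2 * fact(1)
          1
```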
You end up with something that looks more like this:
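With the accumulator carrying the partial product, each call can reuse the same frame; roughly:

```
fact(5, 1)
fact(4, 5)
fact(3, 20)
fact(2, 60)
fact(1, 120)
120
```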
Basically the code becomes this:
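In Perl terms, the optimized result is equivalent to a plain loop (a sketch; the name is illustrative):

```perl
# What a tail-call-optimizing compiler effectively emits:
# one frame, updated in place, no recursion at all.
sub factorial_iter {
    my ($n) = @_;
    my $acc = 1;
    while ($n > 1) {
        $acc = $n * $acc;
        $n--;
    }
    return $acc;
}
```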
Now one problem, of course, is that perl does not do this type of optimization :)
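That said, Perl does offer a manual escape hatch: goto &NAME exits the current sub and calls NAME with the current @_, replacing the frame rather than stacking a new one. A sketch (sub name illustrative):

```perl
# Manual tail-call elimination via goto &NAME: the current
# frame is replaced on each step, so the stack does not grow
# no matter how deep the "recursion" goes.
sub fact_goto {
    my ($n, $acc) = @_;
    $acc = 1 unless defined $acc;
    return $acc if $n <= 1;
    @_ = ($n - 1, $n * $acc);   # arguments for the next step
    goto &fact_goto;            # replace this frame
}
```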
Standard ML is a functional language, which means function calls are pervasive, so the Standard ML compiler is built to optimize them. As for recursion, it uses heap-based allocation rather than stack-based, which allows a number of optimizations to take place. I would provide a link on this, but to be honest, I read it while waist-deep in the Standard ML site and I have never been able to dig it out again.
Scheme, too, makes many optimizations which allow it to be very efficient, even in the face of recursion that would bring most other languages to a halt (once they had consumed all available resources). Googling "Scheme", "Optimized", etc. will get you a number of links to lots of good info.
But my point is this: as compiler technology improves, it is unwise to rule out a recursive solution that is more understandable and maintainable just because the compilers of 5-10 years ago couldn't handle it.
There is always a trade-off between programmer time (writing, documenting, and maintaining) and computer time (execution). Back in the days of yore, computer time was more expensive than programmer time. The inverse is true now.
Once again, it is all in how you use it. Bad recursion is no worse than bad iteration; it's all bad programming in the end. If the problem/algorithm itself is recursive, then the most understandable and maintainable version of it will be the recursive one.