
useful depth of sub routines

by JockoHelios (Scribe)
on May 27, 2013 at 18:42 UTC
JockoHelios has asked for the wisdom of the Perl Monks concerning the following question:

Thanks for your responses. I won't be doing recursive subroutine calls, at least, not intentionally :) so I probably won't crash my system with what I intend to do.

And of course, what I intend the code to do is precisely what will occur :)

I'm working on an extensive data-processing project for which Perl seems the ideal language.
As the work progresses, I'm removing repetitive sections of code into subroutines.
Looking forward, I can see that I'll be writing subroutines that call subroutines.
And then subroutines that call subroutines that call subroutines. And probably even deeper levels.

I'm wondering how far, or how deep, this can go without causing Perl to upchuck.

A few preventative measures I'm already taking:
- passing large arrays to subroutines would chew up RAM, so I'm passing array references
- identifiers could become a problem within the layers, so I'm using strict
- I'm also using subroutine-specific naming conventions
- example: in the subroutine SRE_SubRoutineExample, all named variables and arrays start with SRE_ - thus, my $SRE_CompString = "JAPH";
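The reference-passing point above can be sketched like this; the sub and variable names are illustrative, mirroring the SRE_ prefix convention described, not code from the actual project:

```perl
use strict;
use warnings;

# Passing \@rows hands the sub a single scalar (a reference) instead of
# flattening the whole array into @_ as a copy.
sub SRE_SumRows {
    my ($SRE_RowsRef) = @_;
    my $SRE_Total = 0;
    $SRE_Total += $_ for @$SRE_RowsRef;
    return $SRE_Total;
}

my @rows = (10, 20, 30);
print SRE_SumRows(\@rows), "\n";    # 60
```

The caller writes `SRE_SumRows(\@rows)`, not `SRE_SumRows(@rows)`; the latter would copy every element into the argument list.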

I've found nodes for a variety of "gotchas", but not a node just for subroutine "gotchas". The calling depth, as mentioned, is one concern, but I'm interested in others that I haven't run into yet. So I'm hoping to hear about some of these and avoid them. And save a few brain cells in the process :)

Dyslexics Untie !!!

Replies are listed 'Best First'.
Re: useful depth of sub routines
by moritz (Cardinal) on May 27, 2013 at 19:30 UTC

    As a small data point, I've been programming in Perl for more than ten years, and I've never run into perl's call depth limitation, unless it was code that extensively used recursion. But "normal" code where one method or sub calls another simply doesn't hit that limit.

    Some gotchas you might encounter:

    • Calling subroutines that use eval or system functions can reset your $@ and $! variables, so check them right after the place where an error might have occurred.
    • each and regex matching with /g attach (more or less) hidden state to the variables they operate on. Be careful not to leak those variables to subroutines you call. (And be careful not to exit a while (my ($key, $value) = each %hash) { ... } loop with last or return or die when there's a chance it could be re-entered with the same variable; best avoid it altogether and iterate over keys %hash in the first place.)
    • File handles like IN are global; avoid them in favor of lexical file handles.
    • Avoid function prototypes unless you know exactly what they are doing.
    • die on errors.
    • If you pass a large number of similar arguments to a bunch of functions, investigate whether it might make sense to turn those arguments (or a subset thereof) into an object, and the functions into methods.
    • Although you can inspect the outer context of a function with wantarray and caller, it is usually a bad idea, because it makes code less composable.
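The first gotcha in the list is easy to trip over. A minimal sketch (log_attempt is a hypothetical helper, not from the thread): any sub you call after an eval may run its own eval and clobber $@, so copy $@ immediately.

```perl
use strict;
use warnings;

sub log_attempt {
    # Hypothetical helper; its internal (successful) eval resets the
    # caller's $@ to the empty string.
    eval { 1 };
    return;
}

eval { die "boom\n" };
log_attempt();                 # $@ is now "" -- the error is gone
print $@ eq '' ? "error lost\n" : "error kept\n";

eval { die "boom\n" };
my $err = $@;                  # copy $@ right away, before any other call
log_attempt();
print $err;                    # still "boom"
```

The same discipline applies to $!: test or save it on the very next statement after the system call that can set it.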
Re: useful depth of sub routines
by davido (Archbishop) on May 27, 2013 at 19:11 UTC

    This would be fairly trivial to test yourself. But just some tips: There is some overhead per subroutine call; each call pushes a lexical scope onto the call stack, to be popped off later. That's not free, but unless you're working inside of tight loops it's usually not something you need to care about. Also, Perl does support recursive subroutines, and only starts spitting out warnings when your recursion reaches 100 levels deep. Recursion isn't special in Perl (ie, there's no tail-call optimization), so each recursive call is another push onto the call stack. That should give you some reassurance that even 100 levels deep is possible.

    Remember, the deep recursion warning is just a warning, intended to clue developers in to the fact that they might have some runaway recursion. Even that warning can be silenced or ignored if you're ok with being 100 levels deep in sub calls. Perl doesn't arbitrarily limit you. You're free to use all available memory on the subroutine call stack if you wish. You're unlikely to come up with a reasonable design that takes you anywhere close to that though.


Re: useful depth of sub routines
by kcott (Chancellor) on May 27, 2013 at 19:18 UTC

    G'day JockoHelios,

    "I'm wondering how far, or how deep, this can go without causing Perl to upchuck."

    The default depth at which perl starts warning about deep recursion in subroutines is 100. You can change this threshold by compiling perl with -DPERL_SUB_DEPTH_WARN=n; the option was introduced in 5.10.1 (see perl5101delta). You can silence the warning lexically with "no warnings 'recursion';" (see perldiag). Searching for "PERL_SUB_DEPTH_WARN" in both of those documents is the quickest way to locate the relevant information.
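A small sketch of that behaviour (the countdown sub is just a throwaway example): recursion past 100 frames only warns, and the warning can be turned off inside the sub itself.

```perl
use strict;
use warnings;

sub countdown {
    # Without the next line, perl prints "Deep recursion on subroutine
    # &main::countdown" once the call depth passes 100. It is only a
    # warning; execution continues either way.
    no warnings 'recursion';
    my ($n) = @_;
    return 0 if $n <= 0;
    return 1 + countdown($n - 1);
}

print countdown(500), "\n";    # 500 frames deep, no warning, no crash
```

Note that "no warnings 'recursion';" is lexical, so it belongs inside the recursive sub, where the recursive call is compiled.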

    "example: in the subroutine SRE_SubRoutineExample, all named variables and arrays start with SRE_ - thus, my $SRE_CompString = "JAPH";"

    my variables declared within a subroutine are lexically scoped to that subroutine. You can have a $CompString in every subroutine you write and they will all be different variables. I think you'll probably be creating a rod for your back if you start trying to assign unique prefixes to every variable you declare in every subroutine. It's your back, though :-)
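To illustrate the scoping point (the sub names here are made up), two subs can each declare my $CompString and the two variables never touch:

```perl
use strict;
use warnings;

sub greet {
    my $CompString = "JAPH";    # lexical: exists only inside greet()
    return $CompString;
}

sub shout {
    my $CompString = "PERL";    # a completely separate variable
    return $CompString;
}

print greet(), " ", shout(), "\n";    # JAPH PERL
```

Neither sub can see or overwrite the other's $CompString, which is exactly why the per-sub prefixes are unnecessary for correctness.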

    -- Ken

      Up to now, using the subroutine-specific variable names has made it easier for me to keep track of what's happening to which variable at what point in the program.

      But at this point I only have a dozen or so subroutines. I see your point - this could get hairy as that number increases. I'll eventually have to keep track of the initials of the subroutine names, to keep the variable prefixes unique. Hoo, boy.

      Dyslexics Untie !!!
Re: useful depth of sub routines
by Laurent_R (Canon) on May 27, 2013 at 21:33 UTC

    I do not think that you have to worry too much. Unless what you are doing gets really hairy, you will probably not reach a limit. I have had at least a couple of cases of really complicated nested function calls and never hit a limit, except of course sometimes the deep-recursion warning on heavily recursive code, and even that is only a warning.

    A naive recursive approach to the Fibonacci series leads to about 331 million calls of the fib function to calculate the 40th Fibonacci number. That is definitely not very time-efficient (a few minutes on my computer), but it still works perfectly. I doubt you will get anywhere near that in a non-recursive approach.
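A sketch of that naive approach with a call counter (run here at n=25 to keep it quick; the 331 million figure assumes indexing from fib(0) = fib(1) = 1, under which the total call count works out to 2*fib(n) - 1):

```perl
use strict;
use warnings;

my $calls = 0;

# Naive doubly recursive Fibonacci: every call above the base cases
# spawns two more, so the call count grows as fast as fib(n) itself.
sub fib {
    my ($n) = @_;
    $calls++;
    return 1 if $n <= 1;
    return fib($n - 1) + fib($n - 2);
}

my $f = fib(25);
print "fib(25) = $f after $calls calls\n";   # 121393 after 242785 calls
```

At n = 40 the same formula gives 2 * 165_580_141 - 1 = 331_160_281 calls, yet the call *depth* never exceeds n, which is why it warns at most about deep recursion and still completes.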

Node Type: perlquestion [id://1035449]
Approved by davido
Front-paged by kcott