http://www.perlmonks.org?node_id=115818

George_Sherston has asked for the wisdom of the Perl Monks concerning the following question:

I'm sure this is a bad idea, but I can't see why.

I have a CGI which, depending on user input, does one of a guzillion different things. Each of these things gets done by a subroutine. So for example with subroutines foo, bar and baz, they might be called thus:
    if    ($Action eq 'foo') { &foo; }
    elsif ($Action eq 'bar') { &bar; }
    else                     { &baz; }
But as far as I can see this means that whenever the CGI is called, Perl compiles all the different subroutines, even though only one of them ever actually runs. And this is true whether the subs live in a library file that gets pulled in or in the script itself.
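For concreteness, here's a minimal self-contained version of that first layout (the Action parameter name, the use of CGI.pm, and the trivial bodies of foo/bar/baz are just stand-ins I've made up) - every sub in the file gets compiled on every request, even though only one of them runs:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;

    # Every sub below is compiled each time the CGI starts up,
    # no matter which branch of the dispatch actually runs.
    my $q      = CGI->new;
    my $Action = $q->param('Action') || '';

    print $q->header('text/plain');

    if    ($Action eq 'foo') { foo(); }
    elsif ($Action eq 'bar') { bar(); }
    else                     { baz(); }

    sub foo { print "foo ran\n" }
    sub bar { print "bar ran\n" }
    sub baz { print "baz ran\n" }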

Then I started listening to The Voices. And what they said was, put the code from each subroutine in its own file, call the files foo.pl, bar.pl, baz.pl, and call them with
    if    ($Action eq 'foo') { do 'foo.pl'; }
    elsif ($Action eq 'bar') { do 'bar.pl'; }
    else                     { do 'baz.pl'; }
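Each per-action file would then just be a plain chunk of Perl that only gets read and compiled when its do() is actually reached - something like this for foo.pl (the body is made up; the trailing 1; is just so the caller could tell the file ran to completion):

    # foo.pl -- read, compiled and run only when do 'foo.pl' executes
    print "Content-type: text/html\n\n";
    print "<p>foo did its thing</p>\n";
    1;    # return a true value so the calling script can check for success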
Now, I can see one reason why this is bad: errors don't show up so easily - you just get a blank screen if foo.pl is n.b.g. But actually that's not too bad, because they're all fairly stable scripts that don't need to change much - and if I'm getting errors, that's a bigger problem than just not being able to track them very easily.
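(The most I've come up with for that is to check what do hands back: as far as I understand it, do returns undef and sets $@ when the file fails to compile or run, and leaves $! set when it can't be read at all. So a rough check - the die messages are just illustrative - would be:

    my $file = "$Action.pl";              # e.g. foo.pl, bar.pl or baz.pl
    unless (defined(do $file)) {
        # each *.pl ends with 1; so a successful run never returns undef
        die "couldn't compile or run $file: $@" if $@;
        die "couldn't read $file: $!"           if $!;
    }

But that's a band-aid, not an answer to the real question.)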

I can't see any other reason why this is bad... but it feels bad. It may be that I've made a mistake about how compiling works and this approach doesn't save any time. I don't think so, but request enlightenment. On the other hand, if it does work - why is it the wrong thing to do?

§ George Sherston