Either way, BrowserUK’s original observation is what holds water in this case: this is an I/O bound operation, so CPU-twiddling isn’t going to translate to a material improvement. (And the comment that “Perl subs are slow” is, to me, unreliable hearsay.)
It would be useful to time the two functions separately. How much time, and how many resources, does it actually take to run that “recursive” file-search routine? Try to predict how much time it alone would take. Could it, without your being aware of it, be taking more time or resources than it should? Then, using a list of files prepared entirely in advance, measure the time that the file-processing subroutine requires, and once more predict how much time it alone would take to do the entire job. Then, well, does this jibe with your empirical observations of the actual combined program? The most likely thing to improve actual runtime ... if it can be significantly improved at all ... will be an algorithm revision of some kind.
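A minimal sketch of that two-phase measurement, using the core Time::HiRes module. The `find_files` and `process_file` routines here are hypothetical stand-ins for your own search and processing code; the point is only the structure: time the search alone, then time the processing alone against a pre-built file list.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Hypothetical stand-in for your recursive search routine.
sub find_files {
    my ($dir) = @_;
    my (@files, @queue) = ((), $dir);
    while ( defined( my $d = shift @queue ) ) {
        opendir( my $dh, $d ) or next;
        for my $entry ( grep { $_ ne '.' && $_ ne '..' } readdir $dh ) {
            my $path = "$d/$entry";
            if ( -d $path ) { push @queue, $path }
            else            { push @files, $path }
        }
        closedir $dh;
    }
    return @files;
}

# Trivial placeholder for your per-file processing.
sub process_file { my ($f) = @_; return -s $f }

# Phase 1: time the file search by itself.
my $t0    = [gettimeofday];
my @files = find_files('.');
printf "search:  %.3fs for %d files\n", tv_interval($t0), scalar @files;

# Phase 2: time the processing by itself, on the list prepared in advance.
my $t1 = [gettimeofday];
process_file($_) for @files;
printf "process: %.3fs\n", tv_interval($t1);
```

If the two measured times don't add up to roughly what the combined program takes, that discrepancy is itself a clue worth chasing.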
Don’t “diddle” code to make it faster: find a better algorithm.
Kernighan & Plauger: The Elements of Programming Style