I'll be the devil's advocate and suggest that rolling your own recursive traversal of a directory tree can save some run time. If you have directories with outrageous quantities of files (e.g. more than 100K files per directory), a minimal opendir/readdir recursion can even beat unix/linux "find".
The OP code might be a little more streamlined at run-time by using
while ( defined( my $name = readdir(DIR) ) ) {
    ...
}
instead of loading all directory entries into an array. (The defined() guard matters here: unlike the readline operator, readdir in a while condition is not implicitly wrapped in defined(), so a file literally named "0" would end the loop early without it.)
In case it helps, here's a similar "hand-rolled" recursive traversal script: Get useful info about a directory tree. It produces different results from what you want, but the basic recursion is pretty much the same as yours. I even benchmarked it against a File::Find approach, which took noticeably longer to run, possibly because of the number of subroutine calls File::Find makes per directory entry.