PerlMonks
Fastest way to recurse through VERY LARGE directory tree

by puterboy (Scribe)
on Jan 21, 2011 at 01:53 UTC (#883444=perlquestion)
puterboy has asked for the wisdom of the Perl Monks concerning the following question:

Seeking help from Perl Efficiency Monks...

I need to recursively perform an action on all files in a large directory tree (tens of millions of files).

Basically, on each file I need to know the (relative) path of the file and do a 'stat' to get the inode number, the size, and the number of links.

Typically I would use File::Find, but I was wondering how much overhead it adds and, if significant, whether I would be better off using manual recursion with opendir/readdir/closedir — both to avoid that overhead and to avoid potential duplicate calls to stat that might be buried in the find algorithm.

If recursion with opendir is reasonably faster, does anybody have some streamlined code to offer, so I can avoid "dumb" things that would slow down the recursion?
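(For reference, a minimal hand-rolled traversal of the kind described might look like the sketch below. The `walk` name and callback interface are just for illustration. Each entry is lstat'ed exactly once, and the `_` filehandle reuses that result for the directory test, so no stat is duplicated.)

```perl
use strict;
use warnings;

# Recursively walk $dir, calling $callback->($path, $ino, $size, $nlink)
# for every plain file found. lstat is called once per entry; the '_'
# filehandle reuses its result, and symlinks are not followed (lstat
# reports the link itself, so -d _ is false for a symlink to a dir).
sub walk {
    my ($dir, $callback) = @_;
    opendir(my $dh, $dir) or do {
        warn "Can't opendir $dir: $!";
        return;
    };
    my @entries = readdir $dh;
    closedir $dh;    # close early so we don't hold one handle per level
    for my $name (@entries) {
        next if $name eq '.' || $name eq '..';
        my $path = "$dir/$name";
        my ($ino, $size, $nlink) = (lstat $path)[1, 7, 3];
        if (-d _) {                  # reuse the lstat result via '_'
            walk($path, $callback);
        }
        elsif (-f _) {
            $callback->($path, $ino, $size, $nlink);
        }
    }
}
```

Reading the whole directory and closing the handle before recursing keeps the number of simultaneously open directory handles at one, which matters on very deep trees.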

If Find is just as fast, are there any 'gotchas' I should avoid that would slow things down?

Thanks

Re: Fastest way to recurse through VERY LARGE directory tree
by ahmad (Hermit) on Jan 21, 2011 at 03:02 UTC
    use File::Find;

    It's fast enough, and will do the job for you.
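    (A sketch of what that could look like for this task — the `scan_tree` name and the record layout are just for illustration. Inside the wanted sub, the `-f` test does the stat and the `_` filehandle reuses it, so each file is stat'ed once; `$File::Find::name` carries the path relative to the starting point.)

```perl
use strict;
use warnings;
use File::Find;

# Collect [path, inode, size, nlink] for every plain file under $top.
# One stat per entry: -f does it, and (stat _) reuses the cached result.
sub scan_tree {
    my ($top) = @_;
    my @records;
    find(
        sub {
            return unless -f $_;                      # stats the entry
            my ($ino, $size, $nlink) = (stat _)[1, 7, 3];
            push @records, [ $File::Find::name, $ino, $size, $nlink ];
        },
        $top,
    );
    return @records;
}
```

For tens of millions of files you would process each record inside the wanted sub rather than accumulate them, but the single-stat pattern is the same.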

Re: Fastest way to recurse through VERY LARGE directory tree
by JavaFan (Canon) on Jan 21, 2011 at 03:08 UTC
    Benchmark.

    It may very well be that whatever you come up with is only faster than File::Find on some setups of disks/volume managers/filesystem, and slower on others.

    You need to benchmark to find out.

    Now, in theory, carefully handcrafting something that does exactly what you need is going to be faster than a more general tool like File::Find. But whether that will actually be measurable is a different question.

    So, benchmark.

    Of course, as you describe the problem, the bottleneck might very well be your I/O.

    Hence, benchmark.

    Have I said you should benchmark? No? Well, benchmark!
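    (In that spirit, a skeleton for such a comparison using the core Benchmark module might look like this; the two traversal subs are stand-ins for whatever variants are being compared, and the iteration count is arbitrary.)

```perl
use strict;
use warnings;
use Benchmark qw(timethese cmpthese);
use File::Find;

my $top = shift @ARGV // '.';    # directory to traverse

# Run each traversal a few times and compare the rates.
my $results = timethese(
    3,
    {
        'File::Find' => sub {
            my $n = 0;
            find( sub { $n++ if -f }, $top );
        },
        'opendir'    => sub {
            my $n = 0;
            my @queue = ($top);
            while ( my $dir = shift @queue ) {
                opendir( my $dh, $dir ) or next;
                for my $e ( grep { $_ ne '.' && $_ ne '..' } readdir $dh ) {
                    my $path = "$dir/$e";
                    if    ( -d $path ) { push @queue, $path }
                    elsif ( -f _ )     { $n++ }    # reuse the -d stat
                }
                closedir $dh;
            }
        },
    },
);
cmpthese($results);
```

One caveat: the first traversal warms the OS page cache for the others, so either prime the cache with a throwaway pass first or the comparison will be skewed toward whichever candidate runs later.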

Re: Fastest way to recurse through VERY LARGE directory tree
by salva (Monsignor) on Jan 21, 2011 at 07:08 UTC
    I am sure the bottleneck is going to be the disk I/O, so there is little you can do from Perl to speed up the operation.

    ...well, maybe the order used to traverse the file system (depth- or breadth-first) could have some influence.
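    (Switching between the two orders is a one-line change in an iterative traversal: taking the next directory from the front of the pending list with shift gives breadth-first, taking it from the back with pop gives depth-first. A sketch, with `traverse` as an illustrative name — note that `-d` here follows symlinks, so a real version would want the lstat treatment from elsewhere in this thread:)

```perl
use strict;
use warnings;

# Iterative traversal; $bfs selects breadth-first (queue via shift)
# vs depth-first (stack via pop). Returns the visited file paths.
sub traverse {
    my ($top, $bfs) = @_;
    my @pending = ($top);
    my @files;
    while (@pending) {
        my $dir = $bfs ? shift @pending : pop @pending;
        opendir(my $dh, $dir) or next;
        my @entries = grep { $_ ne '.' && $_ ne '..' } readdir $dh;
        closedir $dh;
        for my $name (@entries) {
            my $path = "$dir/$name";
            if (-d $path) { push @pending, $path }
            else          { push @files,   $path }
        }
    }
    return @files;
}
```

Both orders visit the same files; what differs is the seek pattern presented to the disk and the size of the pending list (breadth-first can grow very large on a wide tree).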

Re: Fastest way to recurse through VERY LARGE directory tree
by eff_i_g (Curate) on Jan 21, 2011 at 16:03 UTC
      Unless the OP needs the entire list of "tens of millions of files", I'd suggest not using File::Find::Rule, and I'd instead suggest an iterator- or callback-based routine (like File::Find). If you can process files one at a time, there's no need to build such a huge list.
        runrig,

        I don't follow you. File::Find::Rule does not simply return every file (although it can). You can instruct it what to return based on type, size, name, and even make a determination via a custom sub. You can have the sub perform actions and ignore the larger return value, or iterate with the start and match methods.
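        (The start/match iteration mentioned above looks like the following sketch; File::Find::Rule is a CPAN module, not core, and the `list_matches` helper is just for illustration. start() primes the traversal and match() hands back one hit at a time, so nothing forces you to build the full list.)

```perl
use strict;
use warnings;
use File::Find::Rule;    # CPAN module, not core

# Drain a File::Find::Rule one match at a time: start() primes the
# traversal, match() returns the next hit, or undef when exhausted.
sub list_matches {
    my ($rule, $top) = @_;
    my @hits;
    $rule->start($top);
    while ( defined( my $path = $rule->match ) ) {
        push @hits, $path;    # or process $path here, one at a time
    }
    return @hits;
}

# Example rule: plain files whose names end in .txt
my $txt_rule = File::Find::Rule->file->name('*.txt');
```

In a real one-file-at-a-time loop you would of course act on each `$path` inside the while loop instead of pushing it onto an array.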

Re: Fastest way to recurse through VERY LARGE directory tree
by graff (Chancellor) on Jan 22, 2011 at 18:25 UTC
    I've been a devotee of using the compiled unix/linux "find" utility in preference to File::Find or any of its variants/derivatives, because I found that for any directory tree of significant size, something like this:
    open( my $find_fh, '-|', 'find', $path, @args )
        or die "Can't run find: $!";
    while (<$find_fh>) { .... }
    was significantly faster. In fact, just for grins, I tried an old benchmark that I posted here several years ago, to see if the results were still true with reasonably modern versions (perl 5.10.0 on macosx, File::Find 1.12), and I found an order of magnitude difference on a reasonably large tree (~30K files, 90 sec using File::Find, 9 sec using "find".)

    But then I ran into a case where someone had created a really obscene quantity of files in a single directory on a freebsd file server, and freebsd's "find" utility choked. (Apparently, that version of "find" was building some sort of in-memory storage for each directory, and it hit a massive number of page faults on the path in question.)

    I reverted to a recursive opendir/readdir approach for that case, and it succeeded reasonably well. Under "normal" conditions, compiled "find" seems to run about 10% faster than using recursive opendir/readdir, but in that particular case of an "abnormal" directory, freebsd "find" became effectively unusable, while opendir/readdir performance was consistent with normal conditions.

    I just posted a utility I wrote for scanning directories, which uses recursive opendir/readdir: Get useful info about a directory tree -- I'm sure it includes a lot of baggage that you don't need, but perhaps it won't be too hard to pick out the useful bits...

Node Type: perlquestion [id://883444]
Approved by roubi