Seeking help from Perl Efficiency Monks...
I need to recursively perform an action on all files in a large directory tree (tens of millions of files).
Basically, on each file I need to know the (relative) path of the file and do a 'stat' to get the inode number, the size, and the number of links.
Typically I would use File::Find, but I was wondering how much overhead it adds, and whether I would be better off using manual recursion with opendir/readdir/closedir, both to avoid that overhead and to avoid potentially duplicate calls to stat that might be buried in the find algorithm.
If recursion with opendir is reasonably faster, does anybody have some streamlined code to offer, so I can avoid "dumb" things that would slow the recursion down?
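For what it's worth, here is one way the manual approach could be sketched: an iterative traversal with an explicit stack (deep trees can blow the call stack with plain recursion), calling lstat exactly once per entry and reusing its cached result via the special `_` filehandle for the directory test. The `walk` name and the plain-files-only filter are my assumptions, not anything from the original post.

```perl
use strict;
use warnings;

# Iterative directory walk: one lstat per entry, no File::Find overhead.
# Calls $cb->($relative_path, $inode, $size, $nlink) for each plain file.
sub walk {
    my ($root, $cb) = @_;
    my @stack = ($root);
    while (defined(my $dir = pop @stack)) {
        opendir(my $dh, $dir) or do { warn "opendir $dir: $!\n"; next };
        while (defined(my $entry = readdir $dh)) {
            next if $entry eq '.' || $entry eq '..';
            my $path = "$dir/$entry";
            # stat field order: 0 dev, 1 ino, 2 mode, 3 nlink, ... 7 size
            my ($ino, $nlink, $size) = (lstat $path)[1, 3, 7];
            next unless defined $ino;   # entry vanished under us
            if (-d _) {                 # '_' reuses the lstat above: no second syscall
                push @stack, $path;
            }
            elsif (-f _) {
                $cb->($path, $ino, $size, $nlink);
            }
        }
        closedir $dh;
    }
}

walk('.', sub {
    my ($path, $ino, $size, $nlink) = @_;
    print "$path\t$ino\t$size\t$nlink\n";
});
```

Because `-d _` tests the lstat result rather than doing a fresh stat, symlinks to directories are not followed, which also avoids loops.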
If Find is just as fast, are there any 'gotchas' I should avoid that would slow things down?

Thanks
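On the gotchas side, two things come to mind with File::Find itself: by default it chdir()s into every directory (the `no_chdir` option skips that), and the file tests you write in `wanted` can trigger a second stat unless you reuse the cached result through `_`. A minimal sketch along those lines (the starting directory and output format are just placeholders):

```perl
use strict;
use warnings;
use File::Find;

my $root = shift @ARGV // '.';   # starting directory (assumed)

find(
    {
        no_chdir => 1,   # avoid a chdir per directory; $_ is the full relative path
        wanted   => sub {
            return unless -f $_;               # this -f does the stat...
            # ...and stat(_) reuses its cached result: no second syscall.
            my ($ino, $size, $nlink) = (stat(_))[1, 7, 3];
            print "$File::Find::name\t$ino\t$size\t$nlink\n";
        },
    },
    $root,
);
```

Note that `-f` follows symlinks; if symlinked files should be skipped, an `lstat $_` plus `-f _` would be the variant to use.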
In reply to Fastest way to recurse through VERY LARGE directory tree by puterboy