PerlMonks
Re: Fastest way to recurse through VERY LARGE directory tree

by eff_i_g (Curate)
on Jan 21, 2011 at 16:03 UTC ( #883562 )


in reply to Fastest way to recurse through VERY LARGE directory tree

I suggest File::Find::Rule over File::Find. I've used it recently for Finding Temporary Files and it goes through ~1TB in under an hour.
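A minimal sketch of the declarative style File::Find::Rule allows, assuming the module is installed from CPAN; the temporary tree and the `*.tmp` pattern here are stand-ins for the real data:

```perl
use strict;
use warnings;
use File::Find::Rule;
use File::Temp qw(tempdir);

# Build a tiny demo tree (a stand-in for the real ~1 TB tree).
my $root = tempdir( CLEANUP => 1 );
mkdir "$root/sub" or die $!;
for my $f ("$root/a.tmp", "$root/sub/b.tmp", "$root/keep.txt") {
    open my $fh, '>', $f or die $!;
    close $fh;
}

# Declarative rule: regular files whose names match *.tmp,
# searched recursively under $root.
my @tmp_files = File::Find::Rule->file
                                ->name( '*.tmp' )
                                ->in( $root );

print scalar(@tmp_files), " temp file(s) found\n";
```

Note that `in` returns the whole match list at once, which matters for the discussion below.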


Re^2: Fastest way to recurse through VERY LARGE directory tree
by runrig (Abbot) on Jan 21, 2011 at 16:38 UTC
    Unless the OP needs the entire list of "tens of millions of files" in memory at once, I'd suggest an iterator- or callback-based routine (like File::Find) rather than File::Find::Rule. If you can process files one at a time, there's no need to build such a huge list.
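The callback approach runrig describes can be sketched with the core File::Find module; the temporary tree and the counting logic are illustrative placeholders for real per-file processing:

```perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);

# Demo tree standing in for the real directory.
my $root = tempdir( CLEANUP => 1 );
for my $f ("$root/a.log", "$root/b.log") {
    open my $fh, '>', $f or die $!;
    print {$fh} "data\n";
    close $fh;
}

my $count = 0;

# The callback sees each entry as it is encountered, so a list of
# tens of millions of names is never held in memory at once.
find(
    sub {
        return unless -f;    # skip directories and other non-files
        $count++;            # per-file processing goes here
    },
    $root,
);

print "processed $count files\n";
```

Peak memory stays roughly constant no matter how many files the tree holds, which is the point of the callback style.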
      runrig,

      I don't follow you. File::Find::Rule does not simply return every file (although it can). You can tell it what to return based on type, size, or name, or even make the determination in a custom sub. You can have that sub perform the per-file actions and ignore the larger return value, or iterate with the start and match methods.
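A sketch of the start/match iterator style eff_i_g mentions, again assuming File::Find::Rule is installed; the temp-dir setup is just scaffolding for the example:

```perl
use strict;
use warnings;
use File::Find::Rule;
use File::Temp qw(tempdir);

my $root = tempdir( CLEANUP => 1 );
for my $f ("$root/x.tmp", "$root/y.tmp") {
    open my $fh, '>', $f or die $!;
    close $fh;
}

# start() returns an iterator object; match() hands back one path
# per call (undef when exhausted), so the full result list never
# has to be built in memory.
my $it = File::Find::Rule->file
                         ->name( '*.tmp' )
                         ->start( $root );

my $n = 0;
while ( defined( my $path = $it->match ) ) {
    $n++;    # per-file processing goes here
}
print "$n matches\n";
```

So FFR can be used lazily as well; the disagreement in this subthread is really about the list-returning `in` style versus this iterator style.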

        Read the OP. I didn't say FFR has to return the entire list; the OP says the entire list needs to be processed. So, in this case, FFR would return the entire list of files.
