Re: Splitting up a filesystem into 'bite sized' chunks

by zork42 (Monk)
on Jul 13, 2013 at 11:13 UTC ( #1044143 )


in reply to Splitting up a filesystem into 'bite sized' chunks

"Just a 'find' takes a substantial amount of time on this filesystem (days)."
  1. Any idea why it takes so long?
  2. Does find or File::Find spend most of the time waiting for a response from the remote server?
  3. Would you be able to speed up File::Find by searching multiple directories simultaneously?
    If you go down a few directory levels and find (say) 100 subfolders, could you search each of those subfolders simultaneously? (A rough sketch of this follows below.)
  4. Would you be able to speed up File::Find by using RPCs?
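
Regarding question 3, here is a rough sketch of what a parallel traversal might look like, assuming Parallel::ForkManager is available; /mnt/bigfs, the worker count, and the /tmp output files are stand-ins, not anything from the original post. Each top-level subdirectory gets its own forked File::Find pass:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Find;
    use Parallel::ForkManager;

    my $root     = '/mnt/bigfs';   # hypothetical mount point
    my $max_kids = 8;              # tune to your spindle/controller count

    # Collect the subdirectories that will be handed out to workers.
    opendir my $dh, $root or die "Can't open $root: $!";
    my @subdirs = grep { -d $_ }
                  map  { "$root/$_" }
                  grep { !/^\.\.?$/ } readdir $dh;
    closedir $dh;

    my $pm = Parallel::ForkManager->new($max_kids);

    for my $dir (@subdirs) {
        $pm->start and next;       # parent: spawn a child, move on

        # Child: walk this subtree only, writing its file list to its own output file.
        (my $tag = $dir) =~ s{/}{_}g;
        open my $out, '>', "/tmp/filelist$tag.txt" or die $!;
        find(sub { print {$out} "$File::Find::name\n" if -f }, $dir);
        close $out;

        $pm->finish;               # child exits
    }
    $pm->wait_all_children;

Whether this actually helps depends on where the bottleneck is: if the remote server serialises metadata lookups, extra clients may just add contention rather than throughput.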


Re^2: Splitting up a filesystem into 'bite sized' chunks
by Preceptor (Chaplain) on Jul 16, 2013 at 18:52 UTC

    Contention and the sheer number of files, mostly. Parallel traversals will help if I divide the workload sensibly - I've got a lot of spindles and controllers to spread it across. Some filesystems work OK with a 'traverse down' approach, but others have a much more random distribution of files. I don't want my batches to get too large, because of outages, glitches, etc.
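
    For illustration, a minimal sketch of the batching side, with made-up names for the directory list and checkpoint file: carve the directories into small batches and log each completed batch, so an outage or glitch only costs the batch in flight.

        use strict;
        use warnings;
        use List::Util qw(min);

        my @dirs       = @ARGV;           # directories to process, however they are enumerated
        my $batch_size = 25;              # keep batches small so a failure loses little work
        my $done_log   = 'batches.done';  # hypothetical checkpoint file

        open my $log, '>>', $done_log or die "Can't open $done_log: $!";

        for (my $i = 0; $i < @dirs; $i += $batch_size) {
            my @batch = @dirs[ $i .. min($i + $batch_size - 1, $#dirs) ];

            process_batch(@batch);                    # stand-in for the real traversal/copy work
            print {$log} join("\t", @batch), "\n";    # checkpoint: this batch completed
        }
        close $log;

        sub process_batch {
            my @batch = @_;
            # Placeholder: run find/File::Find (or a parallel variant) over each directory.
            print "Processing: $_\n" for @batch;
        }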
