http://www.perlmonks.org?node_id=1044143


in reply to Splitting up a filesystem into 'bite sized' chunks

Just a 'find' takes a substantial amount of time on this filesystem (days).
  1. Any idea why it takes so long?
  2. Does find or File::Find spend most of the time waiting for a response from the remote server?
  3. Would you be able to speed up File::Find by searching multiple directories simultaneously?
    If you go down a few directory levels and find (say) 100 subfolders, could you search each of those subfolders simultaneously? (A sketch of this follows the list.)
  4. Would you be able to speed up File::Find by using RPCs (i.e. running the traversal on the remote server itself rather than over the network mount)?
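
To make question 3 concrete, here is a minimal sketch of one way to parallelise the traversal, assuming Parallel::ForkManager from CPAN; the mount point, worker count, and one-level-down split are placeholders to be tuned:

    use strict;
    use warnings;
    use File::Find;
    use Parallel::ForkManager;    # assumption: installed from CPAN

    my $root        = shift // '/mnt/bigfs';   # hypothetical mount point
    my $max_workers = 8;                       # tune to spindles/controllers

    # Split the tree one level down: each immediate subdirectory becomes
    # an independent job for one forked worker.
    opendir my $dh, $root or die "Cannot open $root: $!";
    my @subdirs = grep { -d }
                  map  { "$root/$_" }
                  grep { $_ ne '.' and $_ ne '..' }
                  readdir $dh;
    closedir $dh;

    my $pm = Parallel::ForkManager->new($max_workers);
    for my $dir (@subdirs) {
        $pm->start and next;      # parent: move on to the next directory
        find( sub { print "$File::Find::name\n" if -f }, $dir );
        $pm->finish;              # child exits here
    }
    $pm->wait_all_children;

Whether this pays off hinges on question 2: if most of the time is spent waiting on per-file round trips to the remote server, a handful of workers can overlap that latency; if the server itself is saturated, extra workers only add contention.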

Re^2: Splitting up a filesystem into 'bite sized' chunks
by Preceptor (Deacon) on Jul 16, 2013 at 18:52 UTC

    Contention and the sheer number of files, mostly. Parallel traversal will help if I divide the workload sensibly - I've got a lot of spindles and controllers. Some filesystems work fine with a 'traverse down' approach, but others distribute files much more randomly. I don't want my batches to grow too large, either, because of outages, glitches, and so on.
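
    A minimal sketch of that batching idea, assuming the goal is restartable work lists; the $batch_size value and the batch_NNNN.lst naming are placeholders:

        use strict;
        use warnings;
        use File::Find;

        my $batch_size = 10_000;              # hypothetical chunk size
        my ( $count, $batch_no, $fh ) = ( 0, 0 );

        # Rotate to a new numbered list file every $batch_size entries, so an
        # interrupted run loses at most the batch currently being written.
        sub next_batch {
            close $fh if $fh;
            open $fh, '>', sprintf( 'batch_%04d.lst', ++$batch_no )
                or die "Cannot open batch file: $!";
        }

        next_batch();
        find( sub {
            return unless -f;                 # record plain files only
            print {$fh} "$File::Find::name\n";
            next_batch() if ++$count % $batch_size == 0;
        }, @ARGV );                           # e.g. perl batch.pl /mnt/bigfs
        close $fh;

    Each batch_NNNN.lst file can then be processed, or re-processed after a glitch, independently of the others - which is the 'bite sized' property the thread is after.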