in reply to
Splitting up a filesystem into 'bite sized' chunks
Maybe I should adopt the principle of writing every single terse comment that I am prone to as a splendiferous, loquacious paragraph, or three, in a vain attempt to forestall the “down-vote demons.” I dunno. But wrapped up in the terse comment “NFS is a monster” is a very valid point: NFS is a network filesystem that does not (unlike, say, Microsoft’s famous system) pretend to be otherwise.
With NFS, filesystems can be unfathomably large, and network transports can be slow, and NFS will still work. However, all that having been said ... your (Perl-implemented) algorithms must match that reality. You must, for example, come up with a plausible strategy for “splitting up a filesystem into bite-sized chunks,” whatever that strategy might be, which assumes both that you cannot immediately ascertain how many files and directories are in any particular area of that filesystem, and that you cannot obtain such a count in a timely fashion. Instead of an exact algorithm, therefore, you are obliged to make use of a heuristic: carve off work as you discover it, rather than planning the whole job up front.
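One such heuristic might look like the following sketch. It is not *the* answer, just an illustration of the principle: walk the tree breadth-first with `opendir`/`readdir`, and flush a “bite-sized chunk” of paths to a worker callback every time the batch fills up, never pre-counting anything. The names `walk_in_chunks`, `$chunk_size`, and `$handler` are my own inventions here, and the chunk size would have to be tuned to your NFS latency.

```perl
#!/usr/bin/perl
# Illustrative sketch only: one possible "bite-sized chunks" heuristic.
# Walks the tree breadth-first, batching discovered paths into fixed-size
# work units as it goes -- it never pre-counts the filesystem.
use strict;
use warnings;

# $root: directory to walk; $chunk_size: max paths per work unit;
# $handler: callback invoked with an arrayref of paths per chunk.
sub walk_in_chunks {
    my ($root, $chunk_size, $handler) = @_;
    my @queue = ($root);
    my @chunk;
    while (@queue) {
        my $dir = shift @queue;
        opendir(my $dh, $dir) or next;    # unreadable? skip it, don't stall
        for my $name (readdir $dh) {
            next if $name eq '.' or $name eq '..';
            my $path = "$dir/$name";
            if (-d $path && !-l $path) {  # descend later; skip symlinks
                push @queue, $path;
            }
            push @chunk, $path;           # directories are entries too
            if (@chunk >= $chunk_size) {  # flush a bite-sized unit of work
                $handler->([@chunk]);
                @chunk = ();
            }
        }
        closedir $dh;
    }
    $handler->([@chunk]) if @chunk;       # final partial chunk
}
```

Each chunk could then be handed off to a pool of worker processes; the point is that chunk boundaries fall wherever the walk happens to be, which is the best you can do when a `stat`-and-count pass over the whole tree is off the table.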