PerlMonks
Unfortunately, I'm pulling NFS mounts off a NAS, so I can't easily subdivide my volumes. I've got a few places where they're already separated into subdirectories of known size, and that's fine. But almost by the nature of it, the most unwieldy filesystems are the ones with silly numbers of TB and file counts within a single structure. I can subdivide the mountpoints, but I'd rather not do it by hand.
It's a good point though - my scanner probably does have 'per file' scanning, which would mean I could stream a file list from a single source to multiple scanning engines. So perhaps that's the way to go. In the grand scheme of things, though, the biggest problem isn't so much parallelising the scans on a single filesystem - that'll create contention - as having a good notion of a process that can be resumed partway through. It's not such a big deal that it's done within a defined time window; what matters more is that I can track progress and ensure everything _does_ get scanned eventually.

In reply to Re^2: Splitting up a filesystem into 'bite sized' chunks
by Preceptor
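A minimal sketch of the approach described above - stream a file list to a pool of per-file scan workers, and checkpoint each completed path so an interrupted run can resume where it left off. This is just an illustration, not the poster's actual setup: `scan_file` is a hypothetical stand-in for the real scanning engine, and the append-only checkpoint file is one possible way to track progress durably.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def scan_file(path):
    # Hypothetical placeholder for the real per-file scan, e.g. a
    # subprocess call out to the scanning engine's CLI.
    return True

def load_done(checkpoint):
    """Paths already scanned in a previous (possibly interrupted) run."""
    try:
        with open(checkpoint) as fh:
            return set(line.rstrip("\n") for line in fh)
    except FileNotFoundError:
        return set()

def resumable_scan(root, checkpoint, workers=4):
    """Scan every file under root, skipping paths recorded in checkpoint.

    Returns the number of files scanned this run."""
    done = load_done(checkpoint)
    scanned = 0
    # Append-only checkpoint: a crash loses at most the in-flight files,
    # and the line count doubles as a crude progress metric.
    with open(checkpoint, "a") as ckpt, ThreadPoolExecutor(workers) as pool:
        def worker(path):
            scan_file(path)
            return path
        todo = (os.path.join(d, f)
                for d, _, files in os.walk(root)
                for f in files
                if os.path.join(d, f) not in done)
        for path in pool.map(worker, todo):
            ckpt.write(path + "\n")
            ckpt.flush()  # keep progress durable for resumption
            scanned += 1
    return scanned
```

One caveat with this shape: `ThreadPoolExecutor.map` submits the whole iterable up front, so for truly huge trees you'd want to feed it in bounded batches; the checkpoint/resume idea stays the same.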