|P is for Practical|
A perl script that is run >100x on a cluster to process 1000s of 3d brain images.
Even if the 1000s become low millions, it would be far more efficient to have a single script that scans the directory hierarchy, builds a big list in memory, partitions the matching files into 100+ lists (1 per cluster instance), and writes them to separate files. It then initiates the processes on the cluster instances, passing each the name of one of those files. This is simple to implement and avoids the need for locking files entirely.
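A minimal sketch of that partitioning step in Perl. The round-robin deal, the `worklist.N` filenames, and the commented-out launch command are all illustrative assumptions; in practice the big list would come from File::Find over the real image tree rather than the hard-coded names used here:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Deal @files round-robin into $n lists of near-equal size.
sub partition {
    my ( $n, @files ) = @_;
    my @lists = map { [] } 1 .. $n;
    push @{ $lists[ $_ % $n ] }, $files[$_] for 0 .. $#files;
    return @lists;
}

# Stand-in for the directory scan; use File::Find on the image root in practice.
my @files = map "image$_.img", 1 .. 10;

my @lists = partition( 3, @files );

# Write each list to its own file; each cluster instance gets one filename.
for my $i ( 0 .. $#lists ) {
    open my $fh, '>', "worklist.$i" or die "worklist.$i: $!";
    print $fh "$_\n" for @{ $lists[$i] };
    close $fh;
    # system( 'ssh', "node$i", 'process.pl', "worklist.$i" );  # hypothetical launch
}
```

Because no two instances ever share a worklist file, no locking is needed anywhere.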
It could suffer from one problem, though: imbalanced processing, if there is any great variability in the time taken to process individual images.
If that were the case, I'd opt for a slightly more sophisticated scheme. I'd have the directory-scanning process open a server port that responds to inbound connections by returning the name of the next file to be processed. Each cluster instance then connects, gets the name of a file to process, closes the connection and processes the file; connecting again when it is ready to do another. Again, not a complicated scheme to program, but one that ensures balanced workloads across the cluster, and completely avoids the need for locking or synchronisation.
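The dispatcher/worker pair above can be sketched with core IO::Socket::INET. This is an assumption-laden illustration, not a hardened implementation: the port number, the `$nworkers` shutdown count, and the empty-reply "no more work" convention are all choices I've made for the sketch; `$process` stands in for whatever actually crunches an image:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Dispatcher side: hand out one filename per inbound connection.
# After the queue drains, reply with nothing $nworkers times so
# every worker sees EOF and exits, then return.
sub run_dispatcher {
    my ( $port, $nworkers, @queue ) = @_;
    my $srv = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1', LocalPort => $port,
        Listen    => 10,          ReuseAddr => 1,
    ) or die "listen: $!";
    my $done = 0;
    while ( $done < $nworkers ) {
        my $client = $srv->accept or next;
        if (@queue) { print $client shift(@queue), "\n" }
        else        { $done++ }   # empty reply: tells this worker to quit
        close $client;
    }
}

# Worker side: connect, read one filename, process it, repeat.
# An empty reply (EOF) means the queue is empty, so stop.
sub run_worker {
    my ( $host, $port, $process ) = @_;
    while (1) {
        my $c = IO::Socket::INET->new( PeerAddr => $host, PeerPort => $port )
            or last;
        my $file = <$c>;
        close $c;
        last unless defined $file and $file =~ /\S/;
        chomp $file;
        $process->($file);        # the real image processing goes here
    }
}
```

Because each worker only asks for the next file when it has finished the last one, a slow image simply means that worker fetches fewer files overall; the queue itself is the load balancer.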
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.