I was worried someone would say that
What's to worry about? If you're concerned about having to keep an extra file around, consider locking the script itself. This may not be appropriate in all cases, but it's a fairly common idiom:
use Fcntl qw(:flock);
# lock myself
open my $lockfh, "<", $0
    or die "Cannot open myself: $!\n";
flock $lockfh, LOCK_EX
    or die "Cannot lock myself: $!\n";
# ... do the work that must not run concurrently ...
close $lockfh;   # releases the lock
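One nice property of this idiom: flock locks are advisory and tied to the filehandle, so the lock is released automatically when $lockfh is closed or the process exits. A crashed run can't leave a stale lock behind the way a leftover lock file can.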
Yes, I've done this before, but it's not appropriate here. We're doing distributed processing on hundreds of pairs of files coming from one source. In the process, a certain amount of wheel reinvention has occurred, I'm sure.
You might try the code I posted a while back on this node -- it's a simple module that implements a nice semaphore-file locking technique I pulled out of a TPJ article by Sean Burke (the code includes a URL for the article).
Regarding this part of the OP:
Let's say I have a system where more than one process will try to grab a pair of files (two associated files), read them, copy them elsewhere, and delete the originals. I want only one copy of the originals floating around. The initial solution was (sketched in code after the list):
get handles,
lock,
copy,
unlock,
unlink.
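For concreteness, here is a minimal Perl sketch of that initial sequence; the file names and destination directory are placeholders, not anything from the original post. Note the window between unlock and unlink, which is presumably where a second copy of an original can sneak into existence:

use Fcntl qw(:flock);
use File::Copy qw(copy);

# Hypothetical file names and destination -- placeholders only.
for my $file ('file1', 'file2') {
    open my $fh, '<', $file or die "Cannot open $file: $!\n";   # get handle
    flock $fh, LOCK_EX or die "Cannot lock $file: $!\n";        # lock
    copy $file, "/elsewhere/$file"
        or die "Cannot copy $file: $!\n";                       # copy
    close $fh;                                                  # unlock
    # <-- another process can open, lock, and copy $file here
    unlink $file or warn "Cannot unlink $file: $!\n";           # unlink
}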
If I get what you're describing, multiple processes can be trying to access either of two files at any time, and will normally want to "open / read / close / make a copy elsewhere / unlink the original" on each file in succession.
With a semaphore file, it would look like this:
get lock on semaphore file
for (file1, file2) {
    open
    read and copy
    close
    unlink
}
release semaphore file lock
So long as all the competing processes are set to use the same semaphore file, this will ensure that only one process at a time can do anything at all with the two target data files.
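A minimal Perl sketch of that outline, assuming a dedicated semaphore file; the semaphore path, file names, and destination are placeholders, none of them from the original posts:

use Fcntl qw(:flock);
use File::Copy qw(copy);

# Semaphore file path is an assumption -- any path all processes agree on works.
open my $sem, '>>', '/var/tmp/filepair.sem'
    or die "Cannot open semaphore file: $!\n";
flock $sem, LOCK_EX or die "Cannot lock semaphore file: $!\n";

for my $file ('file1', 'file2') {
    next unless -e $file;   # another process may have finished the pair already
    copy $file, "/elsewhere/$file" or die "Cannot copy $file: $!\n";
    unlink $file or die "Cannot unlink $file: $!\n";
}

close $sem;   # releases the lock

Because the target files are never touched until the semaphore lock is held, and the unlink happens before the lock is released, no second process can copy an original that is about to disappear.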