Re^2: File Locking revisited

by Jasper (Chaplain)
on Dec 01, 2004 at 14:27 UTC


in reply to Re: File Locking revisited
in thread File Locking revisited

> What you need is a THIRD file. You should never lock the file(s) you're processing, as this leads to all sorts of problems in most situations.

I was worried someone would say that, although having thought about it, I think the two-file consecutive locking is (nearly) as good (as far as my processing goes).

> Check the files.

Easier said than done, I think, but I'm sure something will be available.

Thanks for the wisdom.

Re^3: File Locking revisited
by revdiablo (Prior) on Dec 01, 2004 at 17:27 UTC
    > I was worried someone would say that

    What's to worry about? If you're concerned about having to keep an extra file around, consider locking the script itself. This may not be appropriate in all cases, but it's a fairly common idiom:

    use Fcntl qw(:flock);

    # lock myself
    open my $lockfh, "<", $0 or die "Cannot lock myself: $!\n";
    flock $lockfh, LOCK_EX;

    # ...

    close $lockfh;
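    With this, a second instance of the script simply blocks at the flock line until the first exits (closing the handle, or process exit, releases the lock). One caveat: taking LOCK_EX on a read-only handle works with a native flock(2), but can fail on systems where Perl emulates flock via fcntl.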
      Yes, I've done this before, but it's not appropriate here. We're doing distributed processing on hundreds of pairs of files coming from one source. In the process, a certain amount of wheel reinvention has occurred, I'm sure.
Re^3: File Locking revisited
by graff (Chancellor) on Dec 03, 2004 at 14:14 UTC
    You might try the code I posted a while back on this node -- it's a simple module that implements a nice semaphore-file locking technique that I pulled out of a TPJ article (the code includes a URL to the article, written by Sean Burke).

    Regarding this part of the OP:

    > Let's say I have a system where more than one process will try to grab a pair of files (two associated files), read them, copy them elsewhere, and delete the originals. I want only one copy of the originals to be floating around. The initial solution was:
    >
    >     get handles,
    >     lock,
    >     copy,
    >     unlock,
    >     unlink.

    If I get what you're describing, multiple processes can be trying to access either of two files at any time, and will normally want to "open / read / close / make a copy elsewhere / unlink the original" on each file in succession.
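    For concreteness, here is a minimal Perl sketch of that per-file sequence (the file names and destination directory are hypothetical stand-ins); this is the direct-locking approach the OP started with:

    use Fcntl qw(:flock);
    use File::Copy qw(copy);

    # Hypothetical names -- substitute the real pair and destination.
    my @pair = ('data.hdr', 'data.dat');
    my $dest = '/archive';

    for my $file (@pair) {
        # get handles, lock
        open my $fh, '<', $file or die "Cannot open $file: $!\n";
        flock $fh, LOCK_EX or die "Cannot lock $file: $!\n";

        # copy (reading from the locked handle)
        copy($fh, "$dest/$file") or die "Cannot copy $file: $!\n";

        # unlock, unlink
        flock $fh, LOCK_UN;
        close $fh;
        unlink $file or die "Cannot unlink $file: $!\n";
    }

    The trouble is the window this leaves open: a second process can open the file and block in flock before the first one unlinks it; when the lock is released, the second process wakes up holding a handle to the now-deleted file and happily copies it again. That is exactly the sort of problem the parent post warns about when you lock the files you're processing.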

    With a semaphore file, it would look like this:

    get lock on semaphore file
    for (file1, file2) {
        open
        read and copy
        close
        unlink
    }
    release semaphore file lock
    So long as all the competing processes use the same semaphore file, this ensures that only one process at a time can do anything at all with the two target data files.
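    In Perl, that outline might look like the sketch below (the semaphore path and file names are again hypothetical); the essential property is that the lock lives on a file no process ever reads, copies, or unlinks:

    use Fcntl qw(:flock);
    use File::Copy qw(copy);

    # Hypothetical names; any semaphore path works as long as
    # every competing process agrees on the same one.
    my $semaphore = '/var/tmp/filepair.lock';
    my @pair      = ('data.hdr', 'data.dat');
    my $dest      = '/archive';

    open my $sem, '>', $semaphore or die "Cannot open $semaphore: $!\n";
    flock $sem, LOCK_EX or die "Cannot lock $semaphore: $!\n";

    for my $file (@pair) {
        next unless -e $file;   # another process may already have taken the pair
        copy($file, "$dest/$file") or die "Cannot copy $file: $!\n";
        unlink $file or die "Cannot unlink $file: $!\n";
    }

    close $sem;   # releases the lock

    Because the semaphore file itself is never unlinked, there is no handle-to-a-deleted-file race; the -e test covers a process that loses the race and finds the pair already gone.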
