PerlMonks

RE: Flock Subroutine

by KM (Priest)
on Jul 28, 2000 at 20:56 UTC ( [id://24908] )


in reply to Flock Subroutine

Well, I mention this every time I see flock() being used. You introduce a race condition here. Please take a look at nodes 14137, 14139 and 14140 (I got cut off, so I used three nodes, sorry). Personally, I would change the subroutine to use a semaphore (sentinel, or whatever you like to call it) file to avoid any race conditions.

Cheers,
KM
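
The semaphore-file idea KM describes might be sketched like this (a sketch only; the file and sub names are illustrative, not from the original post). The point is that the lock is taken on a separate, otherwise-unused file, so the data file is never opened before the lock is held:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Illustrative names: counter.dat is the data file, counter.sem the semaphore.
# All access to counter.dat goes through a lock on the semaphore file, so
# the data file itself is never touched before the lock is held.
sub update_counter {
    open my $sem, '>', 'counter.sem' or die "semaphore: $!";
    flock $sem, LOCK_EX or die "flock: $!";

    # Safe region: we hold the lock before opening the data file at all.
    my $count = 0;
    if (open my $in, '<', 'counter.dat') {
        $count = <$in> // 0;
        close $in;
    }
    open my $out, '>', 'counter.dat' or die "write: $!";
    print $out $count + 1, "\n";
    close $out;

    close $sem;    # releases the lock
    return $count + 1;
}
```

Because the truncating '>' open of counter.dat happens only inside the locked region, no other process following the same protocol can see the file mid-rewrite.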

Replies are listed 'Best First'.
RE: RE: Flock Subroutine
by turnstep (Parson) on Jul 29, 2000 at 07:47 UTC

    I guess I still do not understand the basic objection to flock - does it not do what it claims to? My understanding is that the OS ensures that only one process can lock a file at a time. If process #2 changes a file between the time that process #1 opens it and locks it, what harm is done? #1 locks it, and then has #2's changes. As long as all your processes are using flock, I still can't quite see the problem with the race condition. Could you please explain it again? Thanks.

    P.S. I will be out of the country, so may not reply for a week, but I am interested in this. :)

      I don't object to flock() at all. It does what it claims to do, but using it incorrectly can be a problem. A race condition arises when you have this type of run of events, which is somewhat common, especially in older scripts (I have seen this a lot in CGI scripts):

      1. Open FH for reading
      2. Lock FH
      3. Read FH
      4. Close FH
      5. Re-open FH for writing
      6. Lock FH
      7. Write to FH
      8. Close FH

      Here you should be able to see the race. Another process can get an exclusive lock on the FH during the read open (read-only opens don't generally get exclusive locks), and between the close of the read and the open of the write. Hence, you can have multiple processes working on the file in a way you do not want, which could corrupt your data ("Hey! Why is my counter file suddenly blank??").
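
The eight steps above correspond to code like this (a sketch of the flawed pattern, not something to copy; the file name is illustrative):

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Set up an illustrative counter file so the sketch runs standalone.
open my $init, '>', 'counter.dat' or die $!;
print $init "5\n";
close $init;

# The flawed read-then-write pattern from the steps above.
open my $fh, '<', 'counter.dat' or die $!;   # 1. open for reading
flock $fh, LOCK_SH or die $!;                # 2. lock
my $count = <$fh>;                           # 3. read
close $fh;                                   # 4. close: the lock is released!

# Another process can open, truncate, or lock the file right here.

open $fh, '>', 'counter.dat' or die $!;      # 5. re-open: clobbers the file
flock $fh, LOCK_EX or die $!;                # 6. lock (too late)
print $fh $count + 1, "\n";                  # 7. write
close $fh;                                   # 8. close
```

Everything between steps 4 and 6 happens with no lock held at all, which is exactly the window the race exploits.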

      Consider this flow:

      1. Proc A open FH for write (using > which clobbers the file contents)
      2. Proc B opens FH for reading (no lock attempt since it won't likely get an exclusive lock granted)
      3. Proc A locks FH
      4. Proc A works with FH
      5. Proc A closes FH

      One race concern here is that if another process wants to read the contents of this file, it will get garbage since proc A clobbered the file contents. Having proc B attempt an exclusive lock is futile since they are not generally granted to r/o opens. By using semaphores, you can avoid this situation.
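      
The clobbering behaviour above is easy to demonstrate: a '>' open truncates at open time, before any flock call can possibly run, so a reader that opens the file in that window sees nothing. A minimal sketch (the file name is illustrative):

```perl
use strict;
use warnings;

# '>' truncates at open time, before any flock call could run, so a reader
# that opens the file in this window sees an empty file.
open my $fh, '>', 'data.txt' or die $!;
print $fh "important contents\n";
close $fh;

open my $writer, '>', 'data.txt' or die $!;   # truncation happens right here

open my $reader, '<', 'data.txt' or die $!;   # another process's view
my $seen = <$reader> // '';                   # empty: contents already gone
close $reader;

close $writer;
print length($seen), "\n";   # 0
```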

      These are just two examples (there is also the issue of hardware not physically being done writing to disk before another process opens the file). A good idea is to write the flow of your locks on a whiteboard and see what would happen if multiple processes were doing that same flow at once (I generally add sleeps at key points to show myself this, like in the example script in node 14140).

      I hope this makes more sense; if not, let me know.

      Cheers,
      KM

        Well, using anything incorrectly can lead to problems, but I still think a simple flock is best - just be careful about it.

        > Here you should be able to see the race. Another process
        > can get an exclusive lock on the FH during the read open
        > (read-only opens don't generally get exclusive locks),
        > and between the close of the read and open of the write.

        I don't agree with this. First, another process cannot get an exclusive lock while *any* other lock is on the file. So if process A locks a file for reading (a shared lock) and process B then tries to get an exclusive lock, process B cannot get the lock until *all* the locks are gone - shared and exclusive. In the second case, yes, it's a problem, but that's a bad coding problem, not a problem with flock. The right way to do it, of course, is to open the file for read/write, get an exclusive lock, read in the file, rewind (usually), write to the file, and close it, releasing the exclusive lock.
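
That read/write pattern might be sketched like this (a sketch only; the file and sub names are illustrative, not from the original post). Opening with '+<' does not truncate, so the file is untouched until the exclusive lock is held:

```perl
use strict;
use warnings;
use Fcntl qw(:flock :seek);

# The recommended pattern: open read/write (no truncation), take the
# exclusive lock, and only then read, rewind, truncate, and rewrite.
sub increment_counter {
    open my $fh, '+<', 'counter.dat' or die "open: $!";
    flock $fh, LOCK_EX or die "flock: $!";    # blocks until all locks are gone

    my $count = <$fh> // 0;
    seek $fh, 0, SEEK_SET or die "seek: $!";  # rewind to the start
    truncate $fh, 0 or die "truncate: $!";    # discard the old contents
    print $fh $count + 1, "\n";

    close $fh;                                # releases the lock
    return $count + 1;
}
```

Note that '+<' requires the file to exist already; the lock is held for the entire read-modify-write cycle, which is the whole point.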

        1. Proc A open FH for write (using > which clobbers the file contents)
        2. Proc B opens FH for reading (no lock attempt since it won't likely get an exclusive lock granted)
        3. Proc A locks FH
        4. Proc A works with FH
        5. Proc A closes FH

        No need for a semaphore, just change the above a bit:

        1. Proc A opens FH for read/write (the file is not changed at all yet)
        2. Proc B opens FH for reading (and gets a shared lock)
        3. Proc A locks FH exclusively, after B has released its shared lock
        4. Proc A works with FH
        5. Proc A closes FH
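
The reader side of that corrected flow might look like this (a sketch; the file and sub names are illustrative). A shared lock lets any number of readers proceed at once, but makes each of them wait out a writer's exclusive lock:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Reader side of the corrected flow: LOCK_SH waits for any LOCK_EX holder,
# so a reader never sees the file mid-rewrite.
sub read_counter {
    open my $fh, '<', 'counter.dat' or die "open: $!";
    flock $fh, LOCK_SH or die "flock: $!";   # waits if a writer holds LOCK_EX
    my $count = <$fh>;
    close $fh;                               # releases the shared lock
    return $count;
}
```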

        And yes, I need to update my tutorial. :)
