
Re: Safe to open+close for every write, without locking?

by flexvault (Monsignor)
on Dec 21, 2012 at 10:20 UTC

in reply to Safe to open+close for every write, without locking?


Take a look at "File Locking Tricks and Traps" for interesting things you can do with 'flock'.

But in your case, I'd just 'use Sys::Syslog' !
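A minimal Sys::Syslog sketch of that suggestion (the program name 'myapp' and the 'user' facility are example choices, not from the thread):

```perl
use strict;
use warnings;
use Sys::Syslog qw(:standard);

# 'myapp' is a made-up identifier; 'pid' prepends the process id.
openlog('myapp', 'pid', 'user');

# Each syslog() call is delivered as one message, and syslogd
# serializes messages from all writers, so no flock is needed.
syslog('info', 'worker %d wrote a log line', $$);

closelog();
```

Because syslogd does the serializing, concurrent writers never interleave partial lines, which is the property the open+close-per-write approach is after.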

Good Luck!

"Well done is better than well said." - Benjamin Franklin


Re^2: Safe to open+close for every write, without locking?
by sedusedan (Monk) on Dec 21, 2012 at 12:18 UTC

    Thanks, informative slides.

    Regarding slide #4 (Trap #1: LOCK_UN) surely it is no longer the case, since per "perldoc -f flock": "To avoid the possibility of miscoordination, Perl now flushes FILEHANDLE before locking or unlocking it."?
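A small demo of that documented behaviour (file name is made up): write through a buffered handle, unlock, then read the file back; on a modern perl the line is already on disk because flock flushes the handle before unlocking.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $f = 'flush_demo.txt';
open my $w, '>', $f or die "open $f: $!";
flock $w, LOCK_EX or die "flock: $!";
print {$w} "buffered line\n";   # sits in perl's I/O buffer
flock $w, LOCK_UN;              # perl flushes $w before unlocking

open my $r, '<', $f or die "open $f: $!";
my $line = <$r>;
print defined $line ? "on disk: $line" : "nothing on disk yet\n";
```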

    Also, thanks for the suggestion but I really do not want syslog for this.


      I didn't previously think about the consequences of slide #4, but it is probably why I don't 'flock' the actual file. Instead I 'flock' a dummy file called ".../LOCK". This file is always of length '0', which lets the actual file I'm writing/reading stay fully buffered. Every reader and writer has to use it the same way. ('LOCK_EX' for write, 'LOCK_SH' for read.)

      As long as you're preventing the race condition by always 'flock'ing the dummy file, you will not get into trouble. Early on I found problems with 'flock'ing the actual file, which may be why 'flock' was fixed to eliminate the problem described in slide #4. But buffering is good!
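The dummy-lock-file pattern Ed describes could look like this (file names are made up for the example):

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $data_file = 'data.log';
my $lock_file = "$data_file.LOCK";   # stays zero-length forever

# Lock the dummy file, not the data file, so the data file's
# buffering is unaffected by flock's flush-on-unlock behaviour.
open my $lock, '>', $lock_file or die "open $lock_file: $!";
flock $lock, LOCK_EX or die "flock: $!";   # readers use LOCK_SH

open my $out, '>>', $data_file or die "open $data_file: $!";
print {$out} "a race-free append\n";
close $out;     # flush the data before releasing the lock

close $lock;    # closing the handle releases the lock
```

Note the ordering: the data handle is closed (flushed) before the lock is released, so a reader that grabs LOCK_SH next is guaranteed to see the complete write.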

      I looked at your benchmark and noticed that you don't delete the file each time your script runs, before the benchmark starts. Even though you're 'open'ing the file with append, the file grows with every call, and the extra disk-head activity may distort the results. I adjusted your script and compared it against 'syslog', and found that 'syslog' was about twice as slow. I'm guessing that is the socket activity between different processes.
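The adjustment Ed describes is simply truncating the data file before each benchmark run, so results aren't skewed by the file growing across runs. A one-line sketch (the file name is hypothetical):

```perl
use strict;
use warnings;

my $file = 'bench.log';

# Opening with '>' truncates the file to zero length before the
# benchmark's append-mode opens start.
open my $fh, '>', $file or die "truncate $file: $!";
close $fh;

die "expected empty file" unless -z $file;
```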

      Anyway, looks like you have a plan.

      Good Luck...Ed


        Hi Ed,

        In the actual module (File::Write::Rotate, now already uploaded to CPAN), I do use a zero-length dummy file (lock file).

        Regarding not deleting the data file before each run: you have good eyes :) I actually did truncate the file before each test, but didn't include that in the post. My bad.

        Also, thanks for comparing with syslog. Nice to know.
