As was pointed out earlier, locks are only advisory. Other processes can override your lock: any process that might read or write at the wrong time is free to simply ignore the lock, in which case the lock has no effect anyway.
I still agree, however, that using locks is a good idea. Sometimes I don't feel they are necessary, for example when a custom file is accessed by only one program and only one instance of that program can be open at a time. Outside of a situation like that, it is a good idea to lock.
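To make the "advisory" point concrete, here is a minimal sketch of cooperative exclusive locking with flock. The file name data.txt is hypothetical; flock blocks until the lock is granted, but only holds off other processes that also call flock on the same file.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Hypothetical data file; any cooperating process must flock it too.
open( my $fh, '>>', 'data.txt' ) or die "Can't open data.txt: $!";
flock( $fh, LOCK_EX ) or die "Can't get exclusive lock: $!";
print $fh "one line written under the lock\n";
close( $fh );    # closing the handle releases the lock
```

A process that opens data.txt without calling flock is not stopped by this lock, which is exactly the "only advisory" behavior described above.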
If you are on a system that doesn't support flock, you can implement your own simple locking scheme: test for the presence of a lockfile before opening the data file, create the lockfile while you read or write, and delete it when you're done. The following code is a very simple implementation:
my $LOCKFILE = 'count.lock';   # any lockfile name all processes agree on
sleep 1 while -f $LOCKFILE;    # busy-wait until the lockfile is gone
open( LOCKME, '>', $LOCKFILE ) or die "The lockfile won't open: $!";
close( LOCKME );
open( CFILE, '<', 'count.txt' ) or die "The countfile won't open: $!";
read( CFILE, my $buf, -s CFILE );
close( CFILE );
unlink $LOCKFILE;
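One caveat with the test-then-create approach: there is a race window between checking for the lockfile and creating it, so two processes can both see no lockfile and proceed. A slightly safer variant (a sketch, reusing the same hypothetical count.lock name) makes the check and the creation a single atomic step with sysopen and O_CREAT|O_EXCL, which fails if the file already exists:

```perl
use strict;
use warnings;
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

my $LOCKFILE = 'count.lock';   # hypothetical agreed-on lockfile name

# O_CREAT|O_EXCL makes "does it exist?" and "create it" one atomic
# operation: sysopen fails if the lockfile already exists.
my $lock;
until ( sysopen( $lock, $LOCKFILE, O_WRONLY | O_CREAT | O_EXCL ) ) {
    sleep 1;    # another process holds the lock; wait and retry
}
close( $lock );

# ... read or write the data file here ...

unlink $LOCKFILE or die "Can't remove lockfile: $!";
```

This still isn't as robust as flock (a crashed process leaves a stale lockfile behind), but it closes the check/create race.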
~heise2k~
It's a good idea to lock the file with a shared lock (LOCK_SH) when reading it; that way no one can manipulate it (i.e. delete or write to it) while someone reads it, but others can still read it as well. You may want to check out turnstep's Tutorial on File Locking.
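A short sketch of that reader pattern (the setup lines just create a count.txt with hypothetical contents so the example is self-contained): any number of readers can hold LOCK_SH at once, while a writer asking for LOCK_EX must wait until every shared lock is released.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Setup only: give the sketch a count.txt to read (hypothetical data).
open( my $setup, '>', 'count.txt' ) or die "Can't create count.txt: $!";
print $setup "42\n";
close( $setup );

# Readers take a shared lock; writers taking LOCK_EX are blocked,
# but other LOCK_SH readers are not.
open( my $fh, '<', 'count.txt' ) or die "Can't open count.txt: $!";
flock( $fh, LOCK_SH ) or die "Can't get shared lock: $!";
my $count = <$fh>;
close( $fh );    # releases the shared lock
print "current count: $count";
```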
I've often wondered the same. My understanding is that all processes must agree on the lock. What if you are writing and a read comes along? I almost always play it safe and just do flock with LOCK_SH.