
locking over the network

by rovf (Priest)
on Jan 31, 2011 at 15:54 UTC
rovf has asked for the wisdom of the Perl Monks concerning the following question:

I have two Perl processes, let's call them A and B. When A runs, it (re)creates a file F (i.e. if F already exists, it will be erased, and then it will be created from scratch). When B runs, it reads the file F (if present).

To coordinate these activities, I would normally use file locking in processes A and B, using a separate lockfile L, like this (error handling omitted):

use Fcntl qw(:flock O_WRONLY O_CREAT);

# Acquire lock
sysopen(LOCKFILE, 'L', O_WRONLY|O_CREAT);
while (!flock(LOCKFILE, LOCK_EX|LOCK_NB)) {
    sleep(...);
}
# Lock granted!
if (I am process A) {        # A is the writer: recreate F
    unlink 'F';
    open(my $file, '>', 'F');
    ...
}
elsif (I am process B) {     # B is the reader
    open(my $file, '<', 'F');
    ...
}
# Release lock
flock(LOCKFILE, LOCK_UN);
close(LOCKFILE);
I think (hope) the basic logic is correct. However, in my case, process A runs on Unix and process B runs on Windows, and the file F needs to be accessed via the network in both cases. From what I have read in, for example, perlfaq5, locking may or may not work well when done over the network.

Now I have two questions:
  • Can this locking algorithm be improved, or is it already "safe enough" for my concrete case?
  • Do I really need a separate lockfile L, or can I lock on file F itself? Of course I couldn't then unlink('F'), but if I truncated the file to length zero after opening it, an unlink wouldn't be necessary.
Ronald Fischer <>

Replies are listed 'Best First'.
Re: locking over the network
by SuicideJunkie (Vicar) on Jan 31, 2011 at 17:07 UTC

    If the order of reads and writes is independent and you simply want to ensure you have a consistent read, you could try having the writer write to a temp file, unlink the real file, and then rename that temp file to the real name.

    Would the unlink and rename be sufficiently atomic for your purpose?
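    The scheme could be sketched like this (a minimal illustration, not the poster's code; the file name `F` and the written string are placeholders, and the atomicity claim holds for a local POSIX filesystem — over SMB/NFS the guarantees are weaker):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my $target = 'F';    # placeholder for the real file name

# Create the temp file in the same directory as the target, so the
# final rename() stays within one filesystem (a requirement for it
# to be atomic).
my ($tmp_fh, $tmp_name) = tempfile("$target.XXXXXX", DIR => '.');
print {$tmp_fh} "new contents\n";
close $tmp_fh or die "close $tmp_name: $!";

# rename() atomically replaces an existing target on POSIX systems,
# so a reader sees either the complete old file or the complete new
# one, never a partial write -- and no prior unlink of the target is
# needed (an unlink would reintroduce the "no file" window).
rename $tmp_name, $target or die "rename $tmp_name -> $target: $!";
```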

      On writing, only one string is written to the file. The next time we come to the write operation, the old content is discarded and the new string is written.

      However, I see one problem with your solution: there is a race condition between unlinking the original file and renaming the temp file. If process B tries to access the file during this window, it concludes there is no such file. Unfortunately, having no file *is* a legal situation (and, in the processing logic, equivalent to an empty file), so we cannot simply let the reader wait until the file appears.

      Ronald Fischer <>
        Unfortunately, having no file *is* a legal situation (and, from the processing logic, equivalent to an empty file)

        Then change the logic, and make no file different from an empty file. That is, make it so that the writer always creates a file, even if that file is empty. That way, your reader will always wait until there is a file before deciding upon its next action.

        Having the absence of a file be logically equivalent to the presence of an empty file creates a situation where the reader takes the crash of the writer to mean the same thing as the writer creating an empty file. And that's just silly: it creates a false dilemma.

        With locking, the reader would have to wait for the writer to unlock the file. This will normally be a short period, but could potentially be forever if the creator crashed whilst holding the lock.

        If the reader waits (with timeout) for the appearance of the file you have the same situation.
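        The wait-with-timeout described above might look like this on the reader's side (a sketch only; the file name, payload, and 30-second timeout are placeholder assumptions, and the "writer" step is simulated so the snippet is self-contained):

```perl
use strict;
use warnings;

my $file    = 'F';   # placeholder name
my $timeout = 30;    # arbitrary; tune to the writer's worst-case delay

# For this self-contained sketch, pretend the writer already ran:
open my $w, '>', $file or die "open $file: $!";
print {$w} "payload\n";
close $w;

# Reader: wait (with timeout) for the file to appear, instead of
# treating "no file" the same as "empty file".
my $deadline = time + $timeout;
until (-e $file) {
    die "timed out waiting for $file to appear\n" if time >= $deadline;
    sleep 1;
}
open my $fh, '<', $file or die "open $file: $!";
my $content = <$fh>;
print $content;
```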

        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.

        rename is atomic (and overwrites) on most common Perl platforms.

        - tye        

Re: locking over the network
by fidesachates (Monk) on Jan 31, 2011 at 19:57 UTC
    Hi there. I've got a slightly simpler solution for locking files. I too ran across the problem of how best to implement locking, knowing that locking doesn't work well across networks.

    I propose you use port binding as your locking mechanism. On your Unix machine (which runs process A), open a socket and bind it to a port as your lock. To unlock, simply close the socket. On your Windows machine, test the port to see whether the file is "locked": issue a TCP connection to that port and see if the connection is accepted. If it is, the file is "locked"; if the connection is refused, the file is "unlocked". This also has the advantage that you can leave the connection open and wait for the other side to close it; once that happens, you know the file has been "unlocked". Keep in mind you'll need to adjust firewalls for this to work.
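    A sketch of the writer's side of this scheme (the port number is an arbitrary assumption, the reader's probe is shown only as a comment, and the hostname in it is hypothetical):

```perl
use strict;
use warnings;
use IO::Socket::INET;

my $LOCK_PORT = 54321;    # assumption: any free port both hosts agree on

# Writer (process A): "acquire" the lock by binding a listening
# socket; the bind fails if another process already holds the port.
my $lock = IO::Socket::INET->new(
    LocalPort => $LOCK_PORT,
    Proto     => 'tcp',
    Listen    => 1,
) or die "lock busy (bind failed): $!";

# ... rewrite file F here, while the lock is held ...

close $lock;    # release the lock

# Reader (process B) would probe from the other host, e.g.:
#   my $probe = IO::Socket::INET->new(
#       PeerAddr => 'unix-host.example',   # hypothetical host name
#       PeerPort => $LOCK_PORT,
#       Proto    => 'tcp',
#       Timeout  => 2,
#   );
#   # connection accepted => "locked"; connection refused => "unlocked"
```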
Re: locking over the network
by rowdog (Curate) on Jan 31, 2011 at 20:50 UTC

    If Windows supports hard links, you could try something like this.

    #!/usr/bin/perl
    use strict;
    use warnings;

    open my $fh, ">", "mylock.$$" or die "open mylock.$$: $!";
    sleep 2 until get_lock();
    print "locked> ";
    <>;
    unlink "mylock";     # releases the lock
    unlink "mylock.$$";  # just cleaning up

    sub get_lock {
        link "mylock.$$", "mylock" and return 1;
        return (stat "mylock.$$")[3] == 2;
    }

    The above is based on the discussion of portably locking with link(2) as found on the Linux man page for open(2) in the O_EXCL discussion.

    Edit: added comments because one unlink matters and the other doesn't.

      If windows supports hard links ...
      To be honest, I thought that the concept of hard links works only within a filesystem. I don't know how Windows does it, but with, for example, Samba, you don't have an inode number you could hard-link to, do you?

      Ronald Fischer <>
        To be honest, I thought that the concept of hard links works only within a filesystem.

        Yes, that's my understanding as well, which is why my example does its work in the current directory.

        I don't know how Windows does it, but with, for example, Samba, you don't have an inode number you could hard-link to, do you?

        smbclient(1) seems to support hard links, but I really can't say whether Perl's link function works on a Samba mount.

Re: locking over the network
by Anonymous Monk on Jan 31, 2011 at 17:19 UTC
      I think this works only for NFS, doesn't it? When accessing the files from Windows, CIFS or Samba is used on the hosts where this application is running.

      Ronald Fischer <>

        The man page is quite clear about what it does. Did you read it? You only need to read the first paragraph.

        - tye        

Node Type: perlquestion [id://885300]
Approved by Corion