PerlMonks  

Writing to a log file without colliding

by cgraf (Beadle)
on Aug 17, 2004 at 09:51 UTC ( id://383596 )

cgraf has asked for the wisdom of the Perl Monks concerning the following question:

I need multiple processes to write concurrently to the same text file for logging purposes. I suspect this will lead to characters from two writing processes being interleaved in the output file. What I need is to ensure that a write is completed as an atomic unit larger than a single character, for example a complete line. I'm unfamiliar with how Perl's IO functions behave behind the scenes, so if anyone can give me some hints it would be appreciated.

Replies are listed 'Best First'.
Re: Writing to a log file without colliding
by Aristotle (Chancellor) on Aug 17, 2004 at 10:12 UTC
      My concern with using an exclusive lock is that the other processes needing to write will block/wait until the file becomes available, which will affect performance.

        There is no way around locking for a concurrently written logfile. If that is a problem, use a database.

        Makeshifts last the longest.
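
        A minimal sketch of lock-protected appends, assuming the log lives at the placeholder path /tmp/app.log. Each process opens in append mode, takes an exclusive flock, writes one whole line, and closes (which releases the lock), so lines from different processes never interleave:

        ```perl
        #!/usr/bin/perl
        # Sketch: append whole lines under an exclusive lock.
        # The path '/tmp/app.log' is a placeholder.
        use strict;
        use warnings;
        use Fcntl qw(:flock);

        sub log_line {
            my ($msg) = @_;
            open my $fh, '>>', '/tmp/app.log' or die "open: $!";
            flock $fh, LOCK_EX or die "flock: $!";
            seek $fh, 0, 2;                # re-seek to EOF in case another writer appended
            print {$fh} "[$$] $msg\n";
            close $fh or die "close: $!";  # closing releases the lock
        }

        log_line("worker started");
        ```

        Opening and closing per message keeps the critical section as short as possible; a long-running process could instead keep the handle open and flock/funlock around each write.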

Re: Writing to a log file without colliding
by davorg (Chancellor) on Aug 17, 2004 at 10:14 UTC
Re: Writing to a log file without colliding
by naChoZ (Curate) on Aug 17, 2004 at 10:24 UTC

    No need to reinvent that task. Log::Log4perl is one of a variety of modules available for logging.

    --
    "A long habit of not thinking a thing wrong, gives it a superficial appearance of being right." -- Thomas Paine
    naChoZ

Re: Writing to a log file without colliding
by adrianh (Chancellor) on Aug 17, 2004 at 10:13 UTC
Re: Writing to a log file without colliding
by acomjean (Sexton) on Aug 17, 2004 at 13:09 UTC
    You need locks if you're writing to one file. You said you were worried that exclusive locks might degrade performance.

    There are other options

    Maybe you can write two files with timestamps and merge them at the end of a run, based on time.
    Or have a separate process perform the logging (i.e. create shared memory or use some other IPC mechanism, such as sockets). The logging process just receives data from the other processes and stores it in a file. The other processes can place the data they want logged in shared memory (or send it via socket, however it's set up) without blocking or worrying about locks. This is significantly more complicated, but may work better in the long run.
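
    The separate-logger idea above can be sketched with a Unix-domain socket. Only the logger process ever touches the file, so no file locks are needed; the socket path, log path, and client limit below are placeholders (a real daemon would accept clients forever rather than counting them):

    ```perl
    #!/usr/bin/perl
    # Sketch: a single logger process serializes lines from many clients
    # over a Unix-domain socket. Paths and the client limit are placeholders.
    use strict;
    use warnings;
    use Socket qw(SOCK_STREAM);
    use IO::Socket::UNIX;

    sub run_logger {
        my ($sock_path, $log_path, $max_clients) = @_;
        unlink $sock_path;
        my $server = IO::Socket::UNIX->new(
            Type   => SOCK_STREAM,
            Local  => $sock_path,
            Listen => 5,
        ) or die "listen: $!";
        open my $log, '>>', $log_path or die "open: $!";
        $log->autoflush(1);
        for (1 .. $max_clients) {           # a real daemon would loop forever
            my $client = $server->accept or last;
            while (my $line = <$client>) {  # drain this client's lines
                print {$log} $line;
            }
        }
    }
    ```

    A worker process would then connect with IO::Socket::UNIX->new(Peer => $sock_path) and simply print its log lines to the socket.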

      I very much agree with the socket idea. It decouples the two processes further. As long as the shared memory exists only between each process and the log process/daemon, it's just as decoupled as sockets. Prepend a timestamp to each log entry. Then you could come up with a sorting scheme to make sure your log file is truly in order, if you make your timestamp look like

      [yyyy-mm-dd_hh:mm:ss]|$log_entry
      You can split each log line on the pipe and then substitute away the square brackets, dashes, colons, and underscores. You then have an integer you can sort a hash (by key) with, giving you a truly ordered log file. You would obviously re-substitute the special characters before writing back to the final log (if you wanted to keep them).
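
      A sketch of that sort, assuming lines of the form [yyyy-mm-dd_hh:mm:ss]|$log_entry as described above. Stripping everything but digits from the timestamp yields a fixed-width integer key that sorts chronologically:

      ```perl
      #!/usr/bin/perl
      # Sketch: order merged log lines of the form "[yyyy-mm-dd_hh:mm:ss]|entry"
      # by reducing each timestamp to a sortable integer.
      use strict;
      use warnings;

      sub sort_log_lines {
          my @lines = @_;
          return map  { $_->[1] }                   # recover the original line
                 sort { $a->[0] <=> $b->[0] }       # numeric sort on the key
                 map  {
                     my ($stamp) = split /\|/, $_, 2;
                     (my $key = $stamp) =~ tr/0-9//cd;  # keep only digits
                     [ $key, $_ ];
                 } @lines;
      }

      my @sorted = sort_log_lines(
          "[2004-08-17_10:14:00]|second\n",
          "[2004-08-17_09:51:00]|first\n",
      );
      print @sorted;
      ```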

Re: Writing to a log file without colliding
by Mr_Person (Hermit) on Aug 17, 2004 at 14:37 UTC
    Another alternative would be to use syslog (take a look at Sys::Syslog or Unix::Syslog) and let it worry about the details. This probably isn't what you want if your logging is very application-specific, high volume, or you're not on a Unix-style machine. But if all you need to do is write out the occasional status message, it works great.
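
    A minimal Sys::Syslog sketch; the ident and facility below are placeholders, and syslogd serializes concurrent writers for you:

    ```perl
    #!/usr/bin/perl
    # Sketch: let syslog handle concurrent writers.
    # 'myapp' (ident) and 'user' (facility) are placeholders.
    use strict;
    use warnings;
    use Sys::Syslog qw(openlog syslog closelog);

    openlog('myapp', 'pid', 'user');           # ident, options, facility
    syslog('info', 'status: %s', 'started');   # printf-style message
    closelog();
    ```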
Re: Writing to a log file without colliding
by pingo (Hermit) on Aug 17, 2004 at 12:26 UTC
    It should be mentioned that using locks over NFS is bound to end in disappointment, but then again, maybe you aren't using NFS. :-)

    It may work if you are using some new-fangled NFS version, though.
      If you are trying to lock a file that's on NFS, create a file in a local directory (/tmp, perhaps) and do the locks on that file to control access to the NFS file.
        Assuming, of course, that the script is only run on one machine. Nitpicking? Yes. :-)

        I don't know if I've been led correctly on this point, but I believe lock-file creation/destruction isn't enough. I think lock-directory creation/destruction is best for NFS situations (better atomicity, from what I've been told).
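
        A sketch of that lock-directory approach: mkdir is a single operation that succeeds for exactly one process, which is why it is often preferred over open-and-check on NFS. The lock path and retry count are placeholders:

        ```perl
        #!/usr/bin/perl
        # Sketch: mkdir-based lock. mkdir either creates the directory or
        # fails, so exactly one process wins. '/tmp/mylog.lock' is a placeholder.
        use strict;
        use warnings;
        use Time::HiRes qw(usleep);

        my $LOCKDIR = '/tmp/mylog.lock';

        sub acquire_lock {
            my ($tries) = @_;
            for (1 .. $tries) {
                return 1 if mkdir $LOCKDIR, 0700;  # wins for one process only
                usleep 100_000;                    # 0.1s back-off, then retry
            }
            return 0;
        }

        sub release_lock { rmdir $LOCKDIR or warn "rmdir: $!" }

        if (acquire_lock(50)) {
            # ... append to the shared log here ...
            release_lock();
        }
        ```

        One caveat: if a process dies while holding the lock, the directory stays behind, so real code needs some stale-lock cleanup.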

Re: Writing to a log file without colliding
by KeighleHawk (Scribe) on Aug 17, 2004 at 20:15 UTC
    I think what you are after is a queue. Your processes can write to the queue, and a separate, single process can pop entries off the queue and put them in the log file.

    One place I worked used UNIX queues and two small C programs to accomplish this. I did not write the code, nor do I remember it, but as I recall it was fairly short.

    There is at least one Perl Queue module (Queue::Base) though I have no experience with it or how it works.

    There is also POE which might work for you.

    As someone mentioned, if you already have access to a database like Oracle, which has a queue implementation, or perhaps something like MQ, then that would work. Obviously it is overkill if you need it only for a single logging queue, but if it is there already...

    Depending on your processes and versions of Perl, perhaps you can do a single process for all of them and do something clever with threads if you are feeling adventurous.

    Depending on your access and the temperament of your sysadmins, you might be able to write to the system log (syslog) and then either use that as your log file or reap it with another single process. It depends on what you want this for and the volume of messages you expect, I guess.

    As a last, low-tech model, pick a directory and have every process write a separate file to it for each message. Use a naming convention/timestamp or some such to determine file/message order. You can also use the naming convention to distinguish files still being written (e.g. "dot" files are in progress; when finished, the process can rename them to remove the leading '.'). Then have another, again single, process reap that directory, transfer the messages to the log file, and delete the temp files.

Re: Writing to a log file without colliding
by talexb (Chancellor) on Aug 17, 2004 at 19:08 UTC

    A mechanism that I considered some time ago but never implemented might serve you. If your main log file is called /path/to/logfile, then your processes could write to /path/to/logfile.$$, where $$ is their PID. Each process would have its own log file, so there are no locking problems. These individual files could then be merged into /path/to/logfile at a more convenient time.
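
    A sketch of this per-PID scheme, assuming each line starts with an epoch timestamp so the later merge pass can order entries; /tmp/logfile is a placeholder base path:

    ```perl
    #!/usr/bin/perl
    # Sketch: each process appends to its own logfile.$$ (no contention),
    # and a single merge pass later orders all entries by leading timestamp.
    use strict;
    use warnings;

    my $base = '/tmp/logfile';    # placeholder base path

    # Each worker writes only to its own file -- no locks needed.
    open my $own, '>>', "$base.$$" or die "open: $!";
    print {$own} time() . " worker $$ did something\n";
    close $own;

    sub merge_logs {
        my ($base) = @_;
        my @entries;
        for my $file (glob "$base.*") {
            open my $in, '<', $file or die "open $file: $!";
            push @entries, <$in>;
        }
        open my $out, '>', $base or die "open $base: $!";
        print {$out} sort { ($a =~ /^(\d+)/)[0] <=> ($b =~ /^(\d+)/)[0] } @entries;
    }

    merge_logs($base);
    ```

    With one-second timestamps, entries written within the same second have no defined order; a finer-grained stamp (Time::HiRes) fixes that if it matters.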

    Failing that, Log::Log4perl is highly recommended.

    Alex / talexb / Toronto

    Life is short: get busy!

Re: Writing to a log file without colliding
by bluto (Curate) on Aug 17, 2004 at 19:59 UTC
    Since a running system probably performs thousands of file locks per day and they are probably fairly efficient, my first suggestion would be to run a test with parallel processes writing to a single log file, using an exclusive lock on the log file. Make sure you only take the lock once your string is fully built, though. If you really can't live with the rate you see, then consider other solutions.
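
    Such a test might look like the following sketch: a few forked writers append locked lines to one file, and the parent then checks that nothing was lost or interleaved. The path and the counts are placeholders to tune:

    ```perl
    #!/usr/bin/perl
    # Sketch: stress-test locked appends with several parallel writers,
    # then verify every line arrived intact.
    use strict;
    use warnings;
    use Fcntl qw(:flock);

    my ($file, $procs, $lines) = ('/tmp/locktest.log', 4, 100);
    unlink $file;

    for my $w (1 .. $procs) {
        next if fork();                       # parent keeps forking
        for my $n (1 .. $lines) {
            my $msg = "writer $w line $n\n";  # build the string *before* locking
            open my $fh, '>>', $file or die "open: $!";
            flock $fh, LOCK_EX or die "flock: $!";
            print {$fh} $msg;
            close $fh;                        # closing releases the lock
        }
        exit 0;
    }
    wait() for 1 .. $procs;

    open my $in, '<', $file or die "open: $!";
    my @got = <$in>;
    printf "%d lines written, %d intact\n",
        scalar @got, scalar grep { /^writer \d+ line \d+$/ } @got;
    ```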

    At some point, you must pay the price for combining the output into a single file. As many have mentioned, you can hide the latency by doing fancy things (e.g. sending messages via shared memory to a logging process, having a background thread log the message, combining files after the fact, etc.), but with most of those you must still synchronize the output and pay a price (perhaps an even higher one overall in machine performance). They are also more complex and subject to their own set of problems (e.g. do all of your processes hang if your separate logging process dies/hangs?).

Node Type: perlquestion [id://383596]
Approved by Aristotle
Front-paged by coreolyn