
Looking for a simple multiprocess-enabled logging module

by bronto (Priest)
on Dec 20, 2004 at 15:17 UTC ( #416206=perlquestion: print w/replies, xml ) Need Help??
bronto has asked for the wisdom of the Perl Monks concerning the following question:

Hello all

I am in charge of rewriting a bash application in Perl; my Perl program would fork N children to do the same job on different partitions of the input data. Each child would write to the same log file, and obviously it must do so safely.

I dug into PerlMonks and CPAN for a solution, and found several on both. At the two extremes there are a hand-crafted solution and Log::Log4perl.

Log::Log4perl is so feature-rich and flexible that, paradoxically, it seems a bit of an overkill for what I am going to do; I'd prefer a simpler module, if at all possible.

So, before going through L::L4p, I would like to know whether there is a simpler, multiprocess-enabled logging module. A search on CPAN returns a lot of modules (677 at the moment I am writing this), far too many to examine one by one; I tried anyway, but for many of them the documentation says nothing about multiprocess support, so I would have to go to the source...

Any good advice?

Thanks in advance

Update: Added "simple" to node title


In theory, there is no difference between theory and practice. In practice, there is.

•Re: Looking for a simple multiprocess-enabled logging module
by merlyn (Sage) on Dec 20, 2004 at 16:20 UTC
Re: Looking for a simple multiprocess-enabled logging module
by amw1 (Friar) on Dec 20, 2004 at 16:35 UTC
    One alternative is to use Sys::Syslog and set your facility to be something like local7. Configure syslog to log local7 to a different logfile.

    It's simple, syslog is a pretty mature protocol, and Sys::Syslog is part of the perl core.
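    A minimal sketch of that approach (the program name "myapp" and the message are illustrative, not from the thread):

    ```perl
    use strict;
    use warnings;
    use Sys::Syslog qw(:standard :macros);

    # "myapp" is an illustrative identifier; the "pid" option stamps each
    # entry with the process id, which is handy when N children share a log.
    openlog('myapp', 'pid', LOG_LOCAL7);

    # printf-style interface; the syslog daemon serializes messages from
    # all processes, so the children need no locking of their own.
    syslog(LOG_INFO, 'child %d finished its partition', $$);

    closelog();
    ```

    On the syslog side, a line such as `local7.*  /var/log/myapp.log` in syslog.conf routes the facility to its own file.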

Re: Looking for a simple multiprocess-enabled logging module
by dave_the_m (Prior) on Dec 20, 2004 at 15:47 UTC
    If you open the file for appending, with autoflush enabled, then each print of a single string is guaranteed to be atomic (at least on sensible OSes).
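    A minimal sketch of that recipe (the log path is illustrative); the key points are append mode, autoflush, and assembling each record before a single print:

    ```perl
    use strict;
    use warnings;
    use IO::Handle;    # provides the autoflush() method on lexical filehandles

    # '>>' opens with O_APPEND on POSIX systems, so every write lands at
    # the current end of file.
    open my $log, '>>', '/tmp/myapp.log' or die "cannot open log: $!";
    $log->autoflush(1);    # no stdio buffering: one print maps to one write()

    # Build the whole record first, then emit it with a single print, so
    # the line cannot be interleaved with another child's output.
    my $line = sprintf "[pid %d] %s: finished partition %d\n",
                       $$, scalar localtime, 3;
    print {$log} $line;

    close $log or die "close: $!";
    ```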


      I thought that was only true if the string is smaller than a certain size?
        On Unix and equivalent systems, if the file you are writing to was opened with the O_APPEND flag, writes are guaranteed to be atomic regardless of size, unless the file is a pipe or FIFO, in which case atomicity is guaranteed only for writes of PIPE_BUF bytes or fewer. So sayeth the Single Unix Specification on the write system call:
        If the O_APPEND flag of the file status flags is set, the file offset shall be set to the end of the file prior to each write and no intervening file modification operation shall occur between changing the file offset and the write operation....

        Write requests to a pipe or FIFO shall be handled in the same way as a regular file with the following exceptions: ... Write requests of PIPE_BUF bytes or less shall not be interleaved with data from other processes doing writes on the same pipe. Writes of greater than PIPE_BUF bytes may have data interleaved, on arbitrary boundaries, with writes by other processes, whether or not the O_NONBLOCK flag of the file status flags is set.

        On most Linux systems, PIPE_BUF is 4096.

        See the write(2) man page on your system and the Single Unix Specification for more.
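        The guarantee can be exercised directly: in this sketch (path and counts are illustrative) several forked children append to one file opened with O_APPEND, and each syswrite of a full line stays intact:

        ```perl
        use strict;
        use warnings;
        use Fcntl qw(O_WRONLY O_APPEND O_CREAT);

        my $path = '/tmp/shared.log';    # illustrative path

        sysopen my $log, $path, O_WRONLY | O_APPEND | O_CREAT, 0644
            or die "sysopen $path: $!";

        for (1 .. 4) {
            my $pid = fork;
            die "fork: $!" unless defined $pid;
            if ($pid == 0) {
                # One syswrite per record: the kernel moves the offset to
                # EOF and writes in a single step, so lines never interleave.
                syswrite $log, "child $$ record $_\n" for 1 .. 100;
                exit 0;
            }
        }
        wait for 1 .. 4;    # reap the children
        close $log;
        ```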

Node Type: perlquestion [id://416206]
Approved by Arunbear