http://www.perlmonks.org?node_id=1011551


in reply to why is Camel saying you can safely <> a growing file?

That just means the code writing to the log is doing it badly...

Appending a line to a log file can be done atomically quite simply in Unix: open the file in append mode (O_APPEND) and write each complete line with a single write(). That is the typical arrangement, and it would never allow the problems you are seeing. It is typical because it is very common for more than one process to write to anything called a "log file", and without atomic appends you'd end up with intermixed fragments of lines.
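In Perl that takes only a couple of lines. A minimal sketch, assuming a local filesystem (O_APPEND appends are not reliable over NFS); the file name and message are made up:

    use strict;
    use warnings;

    # '>>' opens with O_APPEND: the kernel seeks to end-of-file and
    # writes in one step, so concurrent appenders can't interleave.
    # syswrite() bypasses stdio buffering, so the whole line goes out
    # in a single write() call.
    open my $log, '>>', 'app.log'
        or die "Can't append to app.log: $!";
    syswrite $log, "pid $$: one complete line\n"
        or die "syswrite failed: $!";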

The most likely explanation for what you are seeing is that the writing to the log is being done "fully buffered", so that instead of individual lines, the file receives batches of text one buffer-full at a time. A more serious problem with that arrangement is that it can take a very long time for information to actually make it into the log. So turn off buffering, or switch to "line buffered" mode, in the program writing to the log file.
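If the writer is Perl, that fix is one line: enable autoflush on the log handle. (Perl's autoflush flushes after every print rather than at each newline, which amounts to the same thing when each print emits one complete line.) A sketch:

    use strict;
    use warnings;
    use IO::Handle;                # supplies autoflush() on older perls

    open my $log, '>>', 'app.log' or die "Can't append: $!";
    $log->autoflush(1);            # flush after every print;
                                   # equivalent to: select($log); $| = 1;
    print {$log} "each print now reaches the file as a whole line\n";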

So writing "fully buffered" to a "log file" is bad practice for several reasons, and it wasn't the case the author of the advice you read was considering.

"I would think one needs to check for the end of the line, and if it is not '\n' then seek back to the beginning and then sleep, or something like that."

Well, that's an overly complicated way to deal with that problem. If you don't have a trailing newline, then just <> again and append the result to the previous data:

    my $prev = '';
    while( 1 ) {
        while( <LOG> ) {
            $_ = $prev . $_;    # glue any saved partial line on front
            if( ! /\n$/ ) {     # no trailing newline? line is incomplete
                $prev = $_;     # save it and wait for the rest
                next;
            }
            $prev = '';
            grok($_);           # process one complete line
        }
        sleep 15;
        seek LOG, 0, 1;         # Clear EOF flag
    }
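The seek() to the current position doesn't move the file pointer; its only effect is to clear the handle's EOF flag, so the next <LOG> will pick up whatever has been appended since the last read.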

- tye