http://www.perlmonks.org?node_id=196311

(Updated after a fishing lesson by Aristotle, thanks.) A few days ago, I attempted a tail in pure Perl, but its performance was pretty poor. So I got to thinking: how to speed it up? File::ReadBackwards was faster because it read lines instead of single bytes. It dawned on me to read big chunks and grab each chunk as an array of lines. That turned out to be faster than File::ReadBackwards. You may need to adjust the "chunk size" depending on line length and the number of lines you want to tail. Here are the benchmark results for filereadbackwards, tailz (my original slow method) and tailz1 (my faster method).
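The idea distills to a few lines. Here is a minimal sketch (the subroutine name tail_chunk is mine, not from the benchmark code below): seek to a fixed guess of 400 bytes per wanted line before EOF, slurp that chunk as lines, and keep only the last N.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# tail_chunk: the chunk-seek idea in miniature. Seek to a fixed guess
# (400 bytes per wanted line) before EOF, read that chunk as lines,
# and keep only the last $numlines of them.
sub tail_chunk {
    my ($filename, $numlines) = @_;
    my $chunk = 400 * $numlines;       # assumes lines are <= 400 chars
    open my $fh, '<', $filename or die "Couldn't open $filename: $!";
    my $filesize = -s $fh;
    my $partial = $chunk < $filesize;  # did we land mid-line?
    $chunk = $filesize if $chunk > $filesize;
    seek $fh, -$chunk, 2;              # 2 == SEEK_END
    my @tail = <$fh>;
    shift @tail if $partial;           # first line of the chunk may be cut
    splice @tail, 0, @tail - $numlines if @tail > $numlines;
    return @tail;
}
```

Like the benchmarked version, this silently returns fewer lines if the guess of 400 bytes per line is too small.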
Benchmark: timing 1000 iterations of filereadbackwards, tailz, tailz1...
filereadbackwards:  1612.90/s (n=1000)
            tailz:   152.91/s (n=1000)
           tailz1: 12500.00/s (n=1000)
Benchmark code:
#!/usr/bin/perl
use Benchmark;
use File::ReadBackwards;
use strict;
#open (BLACKHOLE,">/dev/null") or die $!;

my $numlines = 10;
my $filename = 'ARCHIVES';

timethese(1000, {
    ####################################################
    filereadbackwards => sub {
        my @lines;
        my $line;
        my $count = 0;
        my $bw = File::ReadBackwards->new($filename)
            or die "can't read filename $!";
        while (defined($line = $bw->readline)) {
            push @lines, $line;
            last if ++$count >= $numlines;
        }
        @lines = reverse @lines;
        # print BLACKHOLE "@lines\n";
    },
    #####################################################
    tailz => sub {
        my $byte;
        open FILE, "<$filename" or die "Couldn't open $filename: $!";
        seek FILE, -1, 2;    # get past last eol
        my $count = 0;
        while (1) {
            seek FILE, -1, 1;
            read FILE, $byte, 1;
            if (ord($byte) == 10) { $count++; if ($count == 10) { last } }
            seek FILE, -1, 1;
            if (tell FILE == 0) { last }
        }
        $/ = undef;
        my $tail = <FILE>;
        # print BLACKHOLE "$tail\n";
    },
    #########################################################
    tailz1 => sub {
        my $chunk = 400 * $numlines;    # assume a <= 400 char line (generous)
        # Open the file in read mode
        open FILE, "<$filename" or die "Couldn't open $filename: $!";
        my $filesize = -s FILE;
        if ($chunk >= $filesize) { $chunk = $filesize }
        seek FILE, -$chunk, 2;          # get last chunk of bytes
        my @tail = <FILE>;
        if ($numlines >= $#tail + 1) { $numlines = $#tail + 1 }
        splice @tail, 0, @tail - $numlines;
        # print BLACKHOLE "@tail\n";
    },
});

Edit by tye to change PRE to CODE around wide lines

#!/usr/bin/perl -w
# example for files with max line lengths < 400, but it's adjustable
# usage: tailz filename numberoflines
use strict;

die "Usage: $0 file numlines\n" unless @ARGV == 2;
my ($filename, $numlines) = @ARGV;
my $chunk = 400 * $numlines;    # assume a <= 400 char line (generous)

# Open the file in read mode
open FILE, "<$filename" or die "Couldn't open $filename: $!";
my $filesize = -s FILE;
if ($chunk >= $filesize) { $chunk = $filesize }
seek FILE, -$chunk, 2;          # get last chunk of bytes
my @tail = <FILE>;
if ($numlines >= $#tail + 1) { $numlines = $#tail + 1 }
splice @tail, 0, @tail - $numlines;
print "@tail\n";
exit;

Re: pure perl tail revisited
by Aristotle (Chancellor) on Sep 09, 2002 at 16:49 UTC

    Except it won't work if the last X lines add up to more than 2 kbytes.

    You need a loop there, and it has to read backwards and pay attention to files smaller than 2k or consisting of fewer than X lines. Not counting the last EOL if it's the last thing in the file makes things a notch more complicated as well. All in all, the problem is rather less trivial than it first seems.
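    Such a loop might look like the following sketch (my own illustration of the point, not Aristotle's code; the name tail_loop is hypothetical). It reads fixed-size chunks backwards from EOF and stops once the buffer holds more newlines than lines wanted, so no fixed guess about line length is needed:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# tail_loop: read chunks backwards from EOF until the accumulated
# buffer contains more newlines than the number of lines wanted,
# then keep only the last $numlines lines.
sub tail_loop {
    my ($filename, $numlines, $chunksize) = @_;
    $chunksize ||= 4096;
    open my $fh, '<', $filename or die "Couldn't open $filename: $!";
    my $pos  = -s $fh;                 # start at EOF
    my $data = '';
    while ($pos > 0) {
        my $len = $pos < $chunksize ? $pos : $chunksize;
        $pos -= $len;
        seek $fh, $pos, 0;             # 0 == SEEK_SET
        read $fh, my $buf, $len;
        $data = $buf . $data;
        my $nl = () = $data =~ /\n/g;  # newlines seen so far
        last if $nl > $numlines;       # one extra covers a trailing EOL
    }
    my @lines = split /^/m, $data;
    splice @lines, 0, @lines - $numlines if @lines > $numlines;
    return @lines;
}
```

    Because it keeps reading until it has enough newlines, it also handles files smaller than one chunk and files with fewer than X lines.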

    Makeshifts last the longest.

Re: pure perl tail revisited
by grinder (Bishop) on Sep 09, 2002 at 20:07 UTC

    I covered this issue exhaustively at Performing a tail(1) in Perl (reading the last N lines of a file). You might want to look there for OWTDI. I actually set about taking tail(1) from the PPT project (see KM's response) and ripping its guts out into a module, but at the moment the parts are lying around on the floor.

    The idea is that the stand-alone tail program just becomes a fairly simple object instantiation, but then you could also use the module in your own programs.


    print@_{sort keys %_},$/if%_=split//,'= & *a?b:e\f/h^h!j+n,o@o;r$s-t%t#u'
Re: pure perl tail revisited
by Aristotle (Chancellor) on Sep 10, 2002 at 16:15 UTC
    The "linux only" comment is obsolete in your updated version. It could still do with some cleaning up. Note this will still fail if the last X lines of the file are larger than 2*200*X bytes, so you cannot compare it against the other solutions as it doesn't do the same thing.
    #!/usr/bin/perl -w
    use strict;

    die "Usage: $0 file numlines\n" unless @ARGV == 2;
    my ($filename, $numlines) = @ARGV;

    # open or die first so that we don't stat a file we don't know exists
    open my $fh, "<", $filename or die "Couldn't open $filename: $!";

    my $chunksize = 2 * 200 * $numlines;
    my $filesize = -s $fh;
    $chunksize = $filesize if $filesize < $chunksize;

    # why seek twice?
    seek $fh, -$chunksize, 2;
    my @tail = <$fh>;
    splice @tail, 0, @tail - $numlines;
    print @tail;

    __END__
    $ perl -le'print "a"x500 for 1..10' > t.dat
    $ tailz t.dat 5 | wc -l
    1
    You need a loop.

    Makeshifts last the longest.

      Thanks for showing me that splice trick. I don't think it took anything off the benchmark times, but it removed the temp array I was using. When I do tailz t.dat 5 | wc -l, I get 5 (just lucky). perl -le'print "a"x5000 for 1..10' > t.dat does give bad results. The code still depends on knowing your max line length beforehand, but I think that's true for most logs and text files. I'm happy now; at least I figured out what Merlyn meant when he said there was a faster method than Tie::File or File::ReadBackwards.
Re: pure perl tail revisited
by zentara (Archbishop) on Sep 10, 2002 at 15:05 UTC
    Hi, I updated my code with a fix for small files. It works pretty well now.
    if($chunk >= $filesize){$chunk = $filesize}
    and
    if($numlines >= $#tailtemp +1){$numlines = $#tailtemp +1}
    
    It took about 10% off my benchmark speed, but it is still relatively fast.