I don't think there's a way to do exactly what you want and
pull out X number of lines in one shot. You're better off
doing something like limiting the number of bytes you read
in at a time. I threw together a bit of code (posted
below) which reads in a certain number of bytes, splits
that into an array of lines, and then processes those lines.
With a reasonable read size, it seems to come out
consistently about 1.5 times faster.
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark;

timethese(10, { 'linebyline' => \&linebyline, 'chunk' => \&chunk });

# The usual approach: read the file one line at a time.
sub linebyline {
    open(FILE, "file") or die "can't open file: $!";
    while (<FILE>) { }
    close(FILE);
}

# Read the file in 10K chunks, then split each chunk into lines.
sub chunk {
    my ($buf, @lines);
    my $leftover = "";
    open(FILE, "file") or die "can't open file: $!";
    while (read FILE, $buf, 10240) {
        # Prepend whatever partial line was left over from the last chunk.
        $buf = $leftover . $buf;
        @lines = split(/\n/, $buf);
        # If the chunk didn't end on a newline, hold the partial last
        # line back for the next pass instead of treating it as complete.
        $leftover = ($buf !~ /\n$/) ? pop @lines : "";
        foreach (@lines) { }  # process each line here
    }
    close(FILE);
    # In a real program you'd also handle any final $leftover here.
}
Benchmark: timing 10 iterations of chunk, linebyline...
chunk: 60 wallclock secs (55.20 usr + 3.48 sys = 58.68 CPU)
linebyline: 95 wallclock secs (91.67 usr + 2.16 sys = 93.83 CPU)
These tests were run on a 25 meg file with roughly 1 million
lines in it. The code isn't guaranteed to handle every edge
case, but I believe it's correct enough to serve benchmarking
purposes well.