<code>
#!/usr/bin/perl -w
#
# Proof-of-concept for searching huge files in minimal memory
# using a sliding window: match within the window, and rely on
# /gc and pos() to restart the search at the correct spot
# whenever the window slides.
#
# Doesn't correctly handle potential matches that overlap;
# the first fragment that matches wins. Matches longer than
# BLOCKSIZE can also be missed, since their start can slide
# out of the window before they complete.
#
use strict;

use constant BLOCKSIZE => (8 * 1024);

search("bighuge.log",
       sub { print $_[0], "\n" },
       "<img[^>]*>");

sub search {
    my ($file, $callback, @fragments) = @_;
    local *F;
    open(F, "<", $file) or die "$file: $!";
    binmode(F);
    # prime the window with two blocks (if possible)
    my $nbytes = read(F, my $window, 2 * BLOCKSIZE);
    # no /o on the match below -- it would freeze the pattern
    # from the first call to search()
    my $re = "(" . join("|", @fragments) . ")";
    while ( defined($nbytes) && $nbytes > 0 ) {
        # match as many times as we can within the window;
        # /c keeps pos() at the end of the final match when
        # the last attempt fails.
        while ( $window =~ m/$re/igcs ) {
            $callback->($1);
        }
        my $pos = pos($window) || 0;   # undef if nothing has matched yet
        # grab the next block
        $nbytes = read(F, my $block, BLOCKSIZE);
        last unless $nbytes;
        # slide the window by discarding the initial block and
        # appending the next, then reset the starting position
        # for matching (modifying $window cleared pos()).
        substr($window, 0, BLOCKSIZE) = '';
        $window .= $block;
        $pos -= BLOCKSIZE;
        pos($window) = $pos > 0 ? $pos : 0;
    }
    close(F);
}
</code>
A demonstration of how to grep through huge files using a sliding-window (buffer) technique. The code above has rough edges, but works for simple regular expression fragments. Treat it as a starting point.
<p>
I've seen this done somewhere before, but couldn't find a working example, so I whipped this one up. A pointer to a more authoritative version would be appreciated.
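<p>
The load-bearing detail is the /c modifier: a failed m//g match normally resets pos() to undef, but /gc leaves it at the end of the last successful match. And since modifying a string also clears its pos(), the loop saves and restores the position by hand after each slide. A standalone sketch of that idiom (my own illustration, not part of the snippet above):

```perl
use strict;
use warnings;

my $s = "aa bb aa";
my $n = 0;
$n++ while $s =~ /aa/gc;     # /c keeps pos() when the final match attempt fails
print "$n matches, pos=", pos($s), "\n";    # 2 matches, pos=8

$s .= " aa";                 # appending new data resets pos() to undef...
pos($s) = 8;                 # ...so restore the saved scan position by hand
$n++ while $s =~ /aa/gc;     # resumes mid-string; finds only the new "aa"
print "$n matches total\n";  # 3 matches total
```

Without /c, the second loop would rescan from the start of the string and report the first two matches again.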
<p>
Text Processing
[dws]