++ to what MidLifeXis said. Since logs tend to be sorted already, you can likely skip the sort, which is the only part that's liable to be a problem memory-wise.
To add to that, for data this size it may be worth running a little preprocessor in C, especially if your log format has fixed-size fields or other delimiters easily recognized with C string functions. That way you could both split the work across two CPU cores and avoid running slow regexen (or even substr(), which is fast for Perl but still doesn't come close to C). Something like this (largely untested but you get the idea):
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(int argc, char *argv[]) {
    char buf[10000];
    FILE *fh;
    if (2 != argc) {
        fputs("Usage: filter <log>\n", stderr);
        exit(1);
    }
    if (!(fh = fopen(argv[1], "r"))) {
        perror("Cannot open log");
        exit(1);
    }
    while (fgets(buf, sizeof(buf), fh)) {
        static const size_t START_OFFSET = 50;
        size_t len = strlen(buf);
        char *endp;
        if ('\n' != buf[len - 1]) {
            fputs("WARNING: line did not fit in buffer, skipped\n", stderr);
            continue;
        }
        endp = buf + START_OFFSET;
        len = 20;
        // To search for a blank after the field instead of using a fixed width:
        // endp = strchr(buf + START_OFFSET, ' ');
        // len = endp ? endp - (buf + START_OFFSET) : len - START_OFFSET; // careful with strchr()==NULL
        fwrite(buf + START_OFFSET, 1, len, stdout);
        putchar('\n');  // keep one record per line so the Perl side can read with <>
    }
    fclose(fh);
    return 0;
}
Edit: jhourcle's post just reminded me of the part I missed initially, namely that it's an Apache log. So if you use the standard combined format you could just use START_OFFSET=9 and len=11 to print only the date, if you don't want to differentiate by result code. Then a simple
my %h;
$h{$_}++ while(<>);
would get the requests-per-date counts and the only slightly trickier thing is to get them sorted chronologically on output. Something like
use Date::Parse;  # str2time() turns the date string into an epoch value
for (sort { $a->[0] <=> $b->[0] }
     map  { my $d = $_; chomp $d; [ str2time($d), $d, $_ ] } keys %h) {
    print "$_->[1]: $h{$_->[2]}\n";   # [2] keeps the original key (with "\n") for the lookup
}