http://www.perlmonks.org?node_id=1019951


in reply to Re^3: Working with a very large log file (parsing data out)
in thread Working with a very large log file (parsing data out)

The second pass runs against the reduced data, not the full file. It's more complex than strictly necessary so that it preserves the order in which dates were first seen, whether or not they would sort cleanly. (The standard date format in webserver logs is DD/Mmm/YYYY, so if the log crosses months you'd need a custom sort function.)
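
For what it's worth, if you did need to sort across a month boundary, a month-aware sort along these lines should do it. Untested sketch; it assumes the DD/Mmm/YYYY keys that the pipeline below collects in @keys:

my %mon;
@mon{qw(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec)} = (1 .. 12);
my @sorted = sort {
    my ($da, $ma, $ya) = split m{/}, $a;   # DD, Mmm, YYYY
    my ($db, $mb, $yb) = split m{/}, $b;
    $ya <=> $yb || $mon{$ma} <=> $mon{$mb} || $da <=> $db;
} @keys;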

cut -d\  -f4 access_log | cut -b2-12 | uniq -c | perl -e 'my(%counts,@keys);while((my($count,$key)=(<STDIN>=~m#(\d+)\s(\d\d/\w\w\w/\d{4})#))==2){push(@keys,$key) if !$counts{$key}; $counts{$key}+=$count} print "$_\t$counts{$_}\n" foreach @keys'
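
In case the one-liner is hard to follow, here is the perl part spread out and commented. It's the same logic, reading the uniq -c output on STDIN:

#!/usr/bin/perl
# Sum the counts per date, remembering the order in which each date was first seen.
use strict;
use warnings;

my (%counts, @keys);
while (my $line = <STDIN>) {                 # uniq -c lines look like "  12345 10/Feb/2013"
    my ($count, $key) = $line =~ m#(\d+)\s(\d\d/\w\w\w/\d{4})#
        or next;
    push @keys, $key unless $counts{$key};   # first time we've seen this date
    $counts{$key} += $count;                 # uniq -c only merges adjacent lines,
                                             # so the same date can show up again
}
print "$_\t$counts{$_}\n" for @keys;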

Processes a 2.5M line / 330MB access log in 6.3 seconds. If it scales linearly and I'm doing my math right, that'd be 8.4 hrs for 1.5TB.
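
(The back-of-the-envelope: 1.5TB / 330MB is roughly 4,700 files that size, times 6.3 seconds is about 29,600 seconds, or a little over 8 hours.)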

If the file's compressed and you pipe it through gunzip -c or similar, you might get even better times, since you've reduced the disk I/O. I ran a 2.2M line / 420MB (uncompressed) / 40MB (compressed) file in 7 sec (est. 7.4 hrs for 1.5TB). If you have the processors, you could also break the file into chunks, do all of the non-perl bits in parallel on each chunk, then recombine at the end ... but then you might have to actually be able to sort to get the output in the right order.
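
Something along these lines, as an untested sketch. It assumes GNU/BSD xargs (for -P) and that you've already split the log on line boundaries into chunk.aa, chunk.ab, ... (e.g. with split -l):

# reduce each chunk in parallel, 8 jobs at a time
ls chunk.?? | xargs -P 8 -I{} sh -c 'cut -d" " -f4 {} | cut -b2-12 | uniq -c > {}.counts'

# the perl pass already sums repeated dates, so it doubles as the recombine step;
# cat-ing the per-chunk counts back in chunk order keeps the first-seen date order,
# recombine them in any other order and you'd need the sort mentioned above
cat chunk.??.counts | perl -e 'my(%counts,@keys);while((my($count,$key)=(<STDIN>=~m#(\d+)\s(\d\d/\w\w\w/\d{4})#))==2){push(@keys,$key) if !$counts{$key}; $counts{$key}+=$count} print "$_\t$counts{$_}\n" foreach @keys'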