PerlMonks |
Re: Dealing with huge log files 100 MB to 1 GB
by RMGir (Prior) on May 17, 2010 at 11:59 UTC ( [id://840338] )
This isn't useful for statistics, but if you need to quickly pull up sections of a large file to diagnose things like "what happened around time x?", File::SortedSeek is incredibly useful.
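A rough sketch of that idea. This assumes File::SortedSeek's find_time function, a log whose lines start with timestamps it can parse, and a made-up log path and target time:

```perl
use strict;
use warnings;
use File::SortedSeek;    # CPAN module; not in core

my $file = '/var/log/app.log';    # hypothetical log path
open my $fh, '<', $file or die "open $file: $!";

# Binary-search the (time-sorted) log to the first line at or
# after time x, instead of scanning from the top.
my $x = 1274097540;    # target time as epoch seconds (example value)
File::SortedSeek::find_time( $fh, $x );

# Dump a window of lines starting at that point.
for ( 1 .. 20 ) {
    defined( my $line = <$fh> ) or last;
    print $line;
}
close $fh;
```

Because it binary-searches on seek/tell rather than reading the whole file, this stays fast even on multi-gigabyte logs.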
For your current problem, "precompute" is probably the best answer, as everyone else has said. You could do something more complicated and map/reduce the statistics gathering in parallel across sections of the file, but it's probably not worth it.

Mike
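If you did want to try it, the map/reduce idea can be sketched in core Perl: split the file into byte ranges, fork one worker per range, have each worker count its matching lines, and sum the counts in the parent. The file name and pattern here are made up, and each worker skips the partial line at the start of its range so no line is counted twice:

```perl
use strict;
use warnings;

# Count lines matching $re in $file, using $workers forked processes,
# each scanning one byte range of the file.
sub parallel_count {
    my ( $file, $workers, $re ) = @_;
    my $size = -s $file or return 0;
    my $chunk = int( $size / $workers ) + 1;

    my @kids;
    for my $i ( 0 .. $workers - 1 ) {
        pipe( my $r, my $w ) or die "pipe: $!";
        defined( my $pid = fork ) or die "fork: $!";
        if ( $pid == 0 ) {    # child: scan byte range [start, end]
            close $r;
            open my $fh, '<', $file or die "open $file: $!";
            my ( $start, $end ) = ( $i * $chunk, ( $i + 1 ) * $chunk );
            seek $fh, $start, 0;
            <$fh> if $start;    # drop partial line; the previous worker owns it
            my $n = 0;
            while ( tell($fh) <= $end ) {    # lines *starting* in our range
                defined( my $line = <$fh> ) or last;
                $n++ if $line =~ $re;
            }
            print {$w} "$n\n";
            exit 0;
        }
        close $w;
        push @kids, [ $pid, $r ];
    }

    # reduce: collect one count per worker and sum
    my $total = 0;
    for my $kid (@kids) {
        my ( $pid, $r ) = @$kid;
        chomp( my $n = <$r> );
        $total += $n;
        waitpid $pid, 0;
    }
    return $total;
}

# e.g. print parallel_count( 'big.log', 4, qr/ERROR/ ), "\n";
```

Whether this beats a single sequential pass depends on whether you're disk-bound or CPU-bound; for a simple regex count on one spindle, the single pass usually wins, which is why precomputing is the better answer.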
In Section
Seekers of Perl Wisdom