PerlMonks
This isn't useful for statistics, but if you need to quickly pull up sections of a large file to diagnose things like "what happened around time x?", File::SortedSeek (which binary-searches a file whose lines are sorted, e.g. by timestamp) is incredibly useful.
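I don't know your exact log format, so here's just a hand-rolled sketch of the idea File::SortedSeek packages up, using only core Perl: binary-search the byte offsets of a file whose lines sort ascending. The `HH:MM` prefixes and file contents are made up for the demo.

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Binary-search a file whose lines sort ascending (e.g. lines that start
# with a timestamp), returning the byte offset of the first line ge $target.
# This is the idea behind File::SortedSeek, hand-rolled in core Perl.
sub first_line_ge {
    my ($fh, $target) = @_;
    my ($lo, $hi) = (0, -s $fh);   # invariant: every line before $lo is lt $target
    while ($lo < $hi) {
        my $mid = int(($lo + $hi) / 2);
        seek $fh, $mid, 0;
        <$fh> if $mid > 0;         # discard the (possibly partial) line we landed in
        my $line = <$fh>;
        if (defined $line && $line lt $target) {
            $lo = tell $fh;        # answer starts after this line
        } else {
            $hi = $mid;            # answer starts at or before here
        }
    }
    # $lo is a line start with everything before it lt $target;
    # step past any remaining lt lines (usually zero or one).
    seek $fh, $lo, 0;
    while (defined(my $line = <$fh>)) {
        last if $line ge $target;
        $lo = tell $fh;
    }
    return $lo;
}

# Demo on a tiny sorted "log".
my ($out, $tmp) = tempfile(UNLINK => 1);
print {$out} "$_ event\n" for qw(10:00 11:30 12:00 12:45 13:10);
close $out;

open my $log, '<', $tmp or die "open $tmp: $!";
seek $log, first_line_ge($log, '12:00'), 0;
my $hit = <$log>;    # the "12:00 event" line
```

If I remember its interface right, File::SortedSeek gives you functions along these lines (alphabetic, numeric, and time-based seeks) so on a real log you'd likely just use the module rather than roll your own.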
For your current problem, "precompute" is probably the best answer, as everyone else has said. You could do something more involved and map/reduce the statistics gathering in parallel across sections of the file, but it's probably not worth the effort.

Mike

In reply to Re: Dealing with huge log files 100 MB to 1 GB
by RMGir
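For what it's worth, the map/reduce idea could look something like this: fork one worker per byte range of the file, have each worker count lines matching a pattern in its range, and sum the counts in the parent. The file contents, pattern, and worker count are all made up for the demo, and as said above it's likely overkill in practice.

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Map/reduce sketch: split the file into byte ranges, fork a worker per
# range to count lines matching a pattern, then sum the per-range counts.
# Convention: a line "belongs" to the range that contains its first byte.
sub count_matches_parallel {
    my ($path, $pattern, $workers) = @_;
    my $size  = -s $path;
    my $chunk = int($size / $workers) + 1;
    my (@pids, %reader);
    for my $i (0 .. $workers - 1) {
        pipe my $r, my $w or die "pipe: $!";
        defined(my $pid = fork) or die "fork: $!";
        if ($pid == 0) {                      # child: scan one byte range
            close $r;
            open my $fh, '<', $path or die "open $path: $!";
            my ($start, $end) = ($i * $chunk, ($i + 1) * $chunk);
            if ($start > 0) {
                seek $fh, $start - 1, 0;
                <$fh>;    # skip ahead to the first line that starts in our range
            }
            my $n = 0;
            while (tell($fh) < $end and defined(my $line = <$fh>)) {
                $n++ if $line =~ /$pattern/;
            }
            print {$w} "$n\n";
            exit 0;
        }
        close $w;
        push @pids, $pid;
        $reader{$pid} = $r;
    }
    my $total = 0;
    for my $pid (@pids) {
        chomp(my $n = readline($reader{$pid}));   # one count per worker
        $total += $n;
        waitpid $pid, 0;
    }
    return $total;
}

# Demo: 20 fake log lines, every third one an ERROR.
my ($out, $tmp) = tempfile(UNLINK => 1);
print {$out} ($_ % 3 ? "info line $_\n" : "ERROR line $_\n") for 1 .. 20;
close $out;

my $errors = count_matches_parallel($tmp, qr/^ERROR/, 3);   # 6
```

The fiddly part is the chunk boundaries: seeking to `$start - 1` and discarding one read means a line whose first byte sits exactly on a boundary is counted by exactly one worker.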