I had to solve the same problem (for Apache logs, too) a few years back. Brute force is fine for a small log, but the logs I was parsing were growing at more than a gigabyte per minute. (We rolled logs every 100 GB or 30 minutes, whichever came first.)
Pseudocode:
    set the end point to the current size of the log
    seek to the mid-position between the begin and end points (initially size/2)
    read forward from the mid-position until a timestamp is found
    if the timestamp is within 5 minutes of the current time,
        process sequentially from there to the end of the log and exit
    else
        halve the search: move the begin point up (timestamp too old) or the
        end point down (timestamp too new) to the mid-position and try again
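A minimal Perl sketch of that bisection. Assumptions are mine, not the production code: Apache-style [dd/Mon/yyyy:HH:MM:SS] timestamps, a line_epoch helper I made up, and a 4 KB threshold for stopping the halving.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::Local qw(timegm);

    # Hypothetical helper: pull the epoch time out of an Apache-style line,
    # e.g. [12/Mar/2024:14:05:31 +0000]. Timezone offset is glossed over.
    my %MON = (Jan=>0,Feb=>1,Mar=>2,Apr=>3,May=>4,Jun=>5,
               Jul=>6,Aug=>7,Sep=>8,Oct=>9,Nov=>10,Dec=>11);
    sub line_epoch {
        my ($line) = @_;
        return undef
            unless $line =~ m{\[(\d{2})/(\w{3})/(\d{4}):(\d{2}):(\d{2}):(\d{2})};
        my ($d,$mon,$y,$H,$M,$S) = ($1,$2,$3,$4,$5,$6);
        return timegm($S,$M,$H,$d,$MON{$mon},$y-1900);
    }

    my $log    = shift or die "usage: $0 logfile\n";
    my $window = 300;                    # "within 5 minutes"
    my $cutoff = time() - $window;

    open my $fh, '<', $log or die "open $log: $!";
    my ($lo, $hi) = (0, -s $log);        # begin point, end point

    while ($hi - $lo > 4096) {           # stop bisecting near the boundary
        my $mid = int(($lo + $hi) / 2);
        seek $fh, $mid, 0;
        <$fh>;                           # discard the partial line we landed in
        my $line = <$fh>;
        my $t = defined $line ? line_epoch($line) : undef;
        last unless defined $t;          # unparseable region: give up bisecting
        if ($t < $cutoff) { $lo = $mid } # too old: recent data is further on
        else              { $hi = $mid } # recent enough: boundary is behind us
    }

    # Process sequentially from the begin point to the end of the log.
    seek $fh, $lo, 0;
    <$fh> if $lo > 0;                    # re-align to a line boundary
    while (my $line = <$fh>) {
        my $t = line_epoch($line);
        next unless defined $t && $t >= $cutoff;
        print $line;                     # stand-in for the real processing
    }
    close $fh;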
This gimmick ran (most of the time) in under 500 milliseconds and gave us enough information. The Perl implementation was fast enough (most times) that we never got around to reimplementing it in C. You can run into problems with slow-growing logs (what happens if there is only one line in the file?) and humongous lines (again, only one line in the file, and it's 55 MB long!). We got around it by fiat -- if something goes sour, quit and retry in 30 seconds. (Yahoo Instant Messenger, three to four terabytes of logs per day....)
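One way (a sketch, not our actual code) to get that quit-and-retry behavior, assuming the search above is wrapped in a hypothetical bisect_and_process sub and given an assumed 2-second budget:

    while (1) {
        my $ok = eval {
            local $SIG{ALRM} = sub { die "took too long\n" };
            alarm 2;                     # assumed time budget for the search
            bisect_and_process($log);    # hypothetical wrapper around the sketch
            alarm 0;
            1;
        };
        last if $ok;
        sleep 30;                        # something went sour: retry in 30s
    }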
----
I Go Back to Sleep, Now.
OGB