PerlMonks  

Re: Using binary search to get the last 15 minutes of httpd access log

by sundialsvc4 (Monsignor)
on Aug 04, 2012 at 17:54 UTC ( #985451=note )


in reply to Using binary search to get the last 15 minutes of httpd access log

I think that it is categorically a good idea to keep your individual files to a few hundred megabytes, or at most a couple of gigabytes, in size. Rotate them frequently based on size and compress them; the logrotate utility (typically run from cron or a systemd timer) is great for all of that. A single, contiguous, gigantic file is an awkward thing for both you and the operating system to handle.
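For illustration, a minimal logrotate sketch along those lines might look like this. The path and the specific thresholds are assumptions, not a drop-in config; all directive names are standard logrotate ones:

```
# /etc/logrotate.d/httpd -- illustrative sketch, adjust paths and sizes
/var/log/httpd/access_log {
    size 500M        # rotate as soon as the file reaches 500 MB
    rotate 8         # keep eight rotated generations
    compress         # gzip the rotated logs
    delaycompress    # leave the most recent rotation uncompressed
    missingok        # don't complain if the log is absent
    notifempty       # skip rotation when the log is empty
}
```

With size-based rotation like this, "the last 15 minutes" is almost always confined to the small current file, which sidesteps the need to binary-search a giant one.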

When you process the file sequentially (and especially if you can “hint” to the OS that you intend to read the file from stem to stern), the operating system is automagically going to do a lot of buffering for you. It will take deep draughts of the file data each time it does a disk-read. In short, the operation will be quite a bit faster than you think.
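The “hint” mentioned above can be issued explicitly on POSIX systems with posix_fadvise. Here is a small Python sketch; the function name and the assumed log-line layout (a sortable timestamp as the first field) are illustrative assumptions, not the thread's actual code:

```python
import os

def scan_log_tail(path, cutoff):
    """Stream a log file front-to-back, keeping lines at or after `cutoff`.

    Assumes (hypothetically) that each line starts with a timestamp that
    sorts lexically, e.g. ISO-8601.  Real httpd logs need real parsing.
    """
    recent = []
    with open(path, "rb") as fh:
        # Tell the kernel we will read sequentially so it can read ahead
        # aggressively.  posix_fadvise is POSIX-only, hence the guard.
        if hasattr(os, "posix_fadvise"):
            os.posix_fadvise(fh.fileno(), 0, 0, os.POSIX_FADV_SEQUENTIAL)
        for line in fh:
            if not line.strip():
                continue
            stamp = line.split(None, 1)[0]   # first whitespace-separated field
            if stamp >= cutoff:
                recent.append(line)
    return recent
```

Even without the fadvise call, the buffered sequential loop benefits from the OS readahead the paragraph describes; the hint just makes the intent explicit.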

All of that, of course, assumes that the drive is local to the machine doing the reading. If the data is flowing across any sort of network wire, the situation is entirely different: you basically need to find a way to do the work on the machine to which the disk is locally attached.


