in reply to
Using binary search to get the last 15 minutes of httpd access log
I think it is categorically a good idea to keep your individual files to hundreds of megabytes, or maybe a couple of gigabytes, in size. Rotate them frequently on size and compress them; the logrotate daemon is great for all of that. A single, contiguous, gigantic file is an awkward thing for both you and the operating system to handle.
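For concreteness, here is a sketch of what a logrotate rule for an Apache access log might look like. The path, size threshold, and retention count are illustrative assumptions, not a recommendation:

```
# /etc/logrotate.d/httpd -- illustrative only; tune the numbers for your site
/var/log/httpd/access_log {
    # rotate whenever the file grows past 500 MB
    size 500M
    # keep ten old generations, gzipped (newest left uncompressed)
    rotate 10
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>&1 || true
    endscript
}
```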
When you process the file sequentially (and especially if you can “hint” to the OS that you intend to read the file from stem to stern), the operating system will automagically do a lot of buffering for you. It will take deep draughts of the file data each time it does a disk read, so the whole pass will be quite a bit faster than you might expect.
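On Linux and most Unixes, that “hint” is posix_fadvise(2) with POSIX_FADV_SEQUENTIAL, which Python exposes directly. A minimal sketch, assuming a local file at an example path:

```python
import os

LOG_PATH = "/var/log/httpd/access_log"  # example path; substitute your own

with open(LOG_PATH, "rb") as f:
    # Tell the kernel we will read front-to-back so it can prefetch
    # aggressively (read-ahead). Unix-only; requires Python 3.3+.
    os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_SEQUENTIAL)
    for line in f:
        # ... process each access-log line here ...
        pass
```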
Now that, of course, assumes that the drive is local to the machine doing the reading. If the data is flowing across any sort of network wire, the situation is utterly different, and you basically need to find a way to do the work on the machine to which the disk is locally attached, shipping only the results across the wire.
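One way to do that, sketched here with a hypothetical host name and match pattern purely for illustration, is to run the filter over ssh on the machine that owns the disk and pull back only the matching lines rather than the whole file:

```python
import subprocess

# "loghost" and the timestamp prefix are hypothetical: let the machine
# that owns the disk run the scan, so only matches cross the wire.
proc = subprocess.run(
    ["ssh", "loghost",
     "grep", "-F", "17/Apr/2024:10:4", "/var/log/httpd/access_log"],
    capture_output=True, text=True,
)
# grep exits 0 on matches and 1 on none; anything else is a real error
if proc.returncode > 1:
    raise RuntimeError(proc.stderr)
recent_lines = proc.stdout.splitlines()
```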