Efficient Way to Parse a Large Log File with a Large Regex
by Dru (Hermit)
on Apr 12, 2005 at 17:02 UTC
Dru has asked for the wisdom of the Perl Monks concerning the following question:
I have an array of almost 500 IPs, and I want to check whether any of them appears in a log file. The log file is large; it sometimes gets up to 3GB. I want to run this script from cron every hour to see if any of these IPs appear, but I'm thinking this might be too much load on the server (dual CPU, 2GB memory, RedHat ES 3.0), so I might run it just a few times a day instead. I also thought about doing a tail -f logfile | <name of program>.pl to look at just the new log entries, but again I'm concerned about the server being able to keep up.
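For the tail -f variant, the script would simply read the piped lines from STDIN and print any that match; a rough sketch (the script name and the single placeholder IP are assumptions, not from the original post):

```perl
#!/usr/bin/perl
# Intended usage: tail -f logfile | watch_ips.pl   (script name is hypothetical)
use strict;
use warnings;

$| = 1;    # unbuffer output so matches appear as soon as they arrive

# Placeholder pattern; in practice this would be built from the full IP list.
my $re = qr/\b10\.0\.0\.1\b/;

while ( my $line = <STDIN> ) {
    print $line if $line =~ $re;
}
```

Because the loop only ever holds one line in memory, the 3GB file size is not itself a problem; the cost is the per-line match.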
Anyway, I'm looking for suggestions on how to parse this much data efficiently. My initial idea was to build a regex that groups, but does not capture, all of the IPs, with an alternation between each one. Something along the lines of:
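A minimal sketch of that non-capturing alternation (the sequential IPs here are just placeholders for illustration):

```perl
use strict;
use warnings;

# One non-capturing group, one alternative per IP.  The dots are escaped
# so they match a literal "." rather than any character, and \b anchors
# keep 10.0.0.1 from matching inside 110.0.0.12.
my $re = qr/\b(?:10\.0\.0\.1|10\.0\.0\.2|10\.0\.0\.3)\b/;

while ( my $line = <> ) {
    print $line if $line =~ $re;
}
```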
BTW, the IPs are not in a nice sequential order like the ones above; they are all over the place.
Actually, I still haven't figured out how I'm going to get from the array to the regex. I was thinking I could use map to build it, but I'm still a map newbie. I did backslash each dot, like this:
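One way the map idea could look, as a sketch (the IPs in @ips are placeholders; quotemeta does the backslashing, though an explicit s/\./\\./ would work too):

```perl
use strict;
use warnings;

my @ips = ( '10.1.2.3', '172.16.0.9', '192.168.5.77' );    # placeholder data

# Escape the dots in each IP, join them with "|", and compile once.
my $alternation = join '|', map { quotemeta } @ips;
my $ip_re       = qr/\b(?:$alternation)\b/;

while ( my $line = <> ) {
    print $line if $line =~ $ip_re;
}
```

Compiling with qr// once, outside the loop, matters here: the pattern is built a single time no matter how many lines are scanned.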
So I guess my questions are:
1. Is creating a regex, like the one discussed above, going to be the most efficient way?
2. If yes to number 1, any suggestions on how to build a regex from the array?
P.S. I know the term efficient can vary greatly from one programmer to the next, but I'm just looking for suggestions.