Extract new lines from log file
by smithers (Friar)
on Jan 02, 2007 at 19:41 UTC
smithers has asked for the wisdom of the Perl Monks concerning the following question:
Looking for direction and opinions. Background: I need to monitor hundreds of application log files on approximately 100 Windows 2000/2003 servers. Monitoring frequency ranges from hourly to daily. Standard functionality in Perl 5.8 is working great from a single centralized server, using Windows UNC file paths to reach the other servers and their log files. Note: monitoring the logs from a centralized server saves me time by eliminating the need for change-management plans to add scripts or Perl binaries to 100 validated production servers.
Issue: I need to expand my monitoring to include numerous remote servers, some accessed via slow or bandwidth-impaired links. My problem is not the large remote log file per se, as only a few new lines are appended hourly or daily. Rather, my approach for extracting the new lines from the large log files seems to suck. My current logic to get new lines is:
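As I understand the dual-read approach described above, it looks something like this sketch (the helper name and the in-memory `%state` hash are my own illustration, not the poster's actual code; a real script would persist the counts between runs):

```perl
use strict;
use warnings;

# %state holds, per log path, the line count seen on the previous pass.
my %state;

# Return only the lines appended since the last call for this path.
sub new_lines {
    my ($path) = @_;

    # Pass 1: count the lines currently in the file.
    open my $fh, '<', $path or die "open $path: $!";
    my $count = 0;
    $count++ while <$fh>;
    close $fh;

    my $seen = $state{$path} // 0;
    $state{$path} = $count;

    # Pass 2: re-read the whole file, keeping only lines past the old count.
    open $fh, '<', $path or die "open $path: $!";
    my @new;
    while (my $line = <$fh>) {
        push @new, $line if $. > $seen;   # $. is the current line number
    }
    close $fh;
    return @new;
}
```

Over a slow link, both passes pull the entire file across the wire, which matches the symptom described: the cost is proportional to total file size, not to the handful of new lines.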
This dual read (once for the line count, again to get the new lines) is where all of my script's CPU and wall time is spent, and I could obviously try to combine steps 1 - 4 into a single pass through the file. However, before I do that I thought I would ask for suggestions. Is there a better way to periodically extract the new lines from a log file? Again, with the constraint that I not deploy any scripts or Perl distributions to the local or remote servers where the logs reside?
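One common alternative, sketched here as a suggestion rather than the poster's method: remember the byte offset (`tell`) instead of a line count, then `seek` straight to it on the next pass. That makes each check cost one read of just the new bytes, which matters over a slow link. The `%offset` hash is again an in-memory stand-in for whatever persistence a real script would use:

```perl
use strict;
use warnings;

# %offset holds, per log path, the byte position after the last line read.
my %offset;

sub tail_new_lines {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";

    my $pos = $offset{$path} // 0;
    # If the log was rotated or truncated, start over from the top.
    $pos = 0 if $pos > -s $path;
    seek $fh, $pos, 0 or die "seek $path: $!";

    my @new = <$fh>;            # read only the bytes past the saved offset
    $offset{$path} = tell $fh;  # remember where we stopped
    close $fh;
    return @new;
}
```

Since no code has to run on the monitored servers (the `open`/`seek` happens on the central box against the UNC path), this keeps the no-deployment constraint intact.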
Thanks for sharing any ideas you may have.