Re: Extract new lines from log file

by smithers (Friar)
on Jan 02, 2007 at 20:22 UTC ( [id://592633] )


in reply to Extract new lines from log file

Thank you BrowserUk and ferreira. These are helpful suggestions.

Replies are listed 'Best First'.
Re^2: Extract new lines from log file
by smithers (Friar) on Jan 05, 2007 at 21:49 UTC
    You are so right! SEEKing vs. line-by-line processing is wicked fast. For example, test.pl below is a quick test script that times reading the last 1024 bytes from the end of a 4GB file located on a remote server.

    use strict;

    my $filename = shift;
    my ( $curpos, $charsread, $buffer );

    open(LOGFILE, "<", $filename) or die "can't open file: $!";

    # Read last 1024 bytes of data. Test this via
    # read to -1024 bytes from EOF.
    seek(LOGFILE, -1024, 2);
    $curpos    = tell(LOGFILE);
    $charsread = read( LOGFILE, $buffer, 1024 );

    print "Chars Read: $charsread\n";
    print "Buffer: >$buffer<\n";

    close(LOGFILE);

    I ran it as shown in the next paragraph, replacing my actual server name with "foo". I suspect most of you work with Perl on Unix, but I'm loving Perl on Win32, and when I can use UNC file paths as shown below it just tickles me that Perl is Win32 friendly in this way. Anyway, I digress. The script run below returns the last 1024 bytes of the file in sub-second time. This is sooooooooooo fast! Thank you!

    test.pl \\foo\D$\MSSQL\BACKUP\DW\DW1_db_200701041904.BAK
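    Since the original question was about pulling only the lines added since the last run, here is a minimal sketch of how the same seek()/tell() idea extends to that: save tell()'s end-of-file offset in a small state file and seek() back to it on the next run. This is not the poster's script; the state-file name and format are assumptions for illustration, and a real script would also need to handle log rotation (e.g. the file shrinking between runs).

    use strict;
    use warnings;

    # Sketch only: print lines appended to a log since the previous run.
    my $logfile = shift or die "usage: $0 logfile\n";
    my $posfile = "$logfile.pos";    # illustrative state-file name

    # Read the offset saved by the previous run (0 on the first run).
    my $lastpos = 0;
    if ( open( my $pos_in, "<", $posfile ) ) {
        chomp( my $saved = <$pos_in> || '' );
        $lastpos = $saved if $saved =~ /^\d+$/;
        close($pos_in);
    }

    open( my $log, "<", $logfile ) or die "can't open $logfile: $!";

    # Jump straight to where the previous run stopped instead of
    # rereading the whole file (whence 0 = seek from start of file).
    seek( $log, $lastpos, 0 );

    # Everything from here to EOF was appended since the last run.
    print while <$log>;

    # Remember the new end-of-file position for the next run.
    my $newpos = tell($log);
    close($log);

    open( my $pos_out, ">", $posfile ) or die "can't write $posfile: $!";
    print $pos_out "$newpos\n";
    close($pos_out);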
