Is this the most efficient way to monitor log files..

by Rhodium (Scribe)
on Apr 17, 2002 at 23:35 UTC ( [id://160049] )

Rhodium has asked for the wisdom of the Perl Monks concerning the following question:

Hi monks
Following up on a past thread, I created a log follower. The problem is that it is a CPU hog and robs the processor of time for real work. Given the following log-file monitor, can anyone check it over and suggest a more efficient way to do this?
#!/usr/local/bin/perl
use strict;

my (@list);

# DRACSTAT is nothing more than a text file which is appended to every time
# a job is launched; it gives information about the type of job, where it
# was run, etc.
chdir();
open (DRACSTAT, "<.dractrack") or die "Can't open drackstat..";
while (<DRACSTAT>) {
    chomp $_;
    push (@list, $_);
}
close DRACSTAT;

# For each line run the log checker.  Ideally, I would do this in parallel
# for each item.
for (my $i = 0; $i < @list; $i++) {
    &CheckStat($list[$i]);
    chdir();
    open (DRACNEW, ">>.dracnew") or die "I think dracktrack is running";
    # as you get done remove that item from the list..
    for (my $j = $i; $j + 1 < @list; $j++) {
        print DRACNEW "$list[$j+1]\n";
        print "$list[$j+1]\n";
    }
    rename (".dracnew", ".dractrack") or die "Can't replace .dracktrack..";
}

sub CheckStat {
    my (%drac, @stages, @lstage);
    my $tmp = shift;    # this is the line from DRACSTAT
    ($drac{Type}, $drac{primary}, $drac{rundir}, $drac{printfile}, $drac{hostname})
        = split / /, $tmp, 5;
    chdir($drac{rundir});

    # This just gets the total number of stages in a particular job
    open (STAGES, "jxrun.stg") or die "Can't open jxrun.stg";
    while (<STAGES>) {
        chomp $_;
        @stages = split /\b/, $_;
    }
    close STAGES;

    my $percomp = 0;
    while ($percomp != 100) {
        # This is the actual logfile which we are continually monitoring..
        open (LOGFILE, "<$drac{printfile}" . "." . "log")
            or die "Can't open $drac{printfile}.log";
        while (<LOGFILE>) {
            chomp $_;
            next if ($_ !~ m/AT STAGE:/);
            @lstage = split /\s+/, $_;
        }
        close LOGFILE;
        my $percomp = ($lstage[3] / $stages[8]) * 100;

        # Give'm stats - I don't like the \r but it works -- any better ideas?
        if ($percomp == 100) {
            print "\n$drac{Type} job $drac{primary} on $drac{hostname} -- DONE\n";
            return;
        }
        else {
            print "Cell: $drac{primary} Verifying: $drac{Type} Total Stages: $stages[8] Number complete: $lstage[3] -- ";
            printf ("%2.2f%%\r", $percomp);
        }
    }
}
Thanks much for your infinite wisdom. :)

Rhodium

The *seeker* of perl wisdom.

Replies are listed 'Best First'.
Re: Is this the most efficient way to monitor log files..
by Kanji (Parson) on Apr 18, 2002 at 01:59 UTC
    while ($percomp != 100) {

    How long does $percomp take to reach 100?

    If it takes any decent length of time, then the above is probably the source of your CPU drain, as you run the loop continually (and without pause) until $percomp does hit 100.

    Slipping a sleep N; (where N is an arbitrary number of seconds) inside the while should help cure the hogging, as it makes your script pause for a little while on every pass of the loop, giving the CPU breathing room to do something else.

    Update: I notice that you declare my $percomp both inside and outside the loop, which means -- thanks to scoping -- your while is equivalent to while (1) {, as the outer $percomp will never equal 100.
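    To see why, here is a minimal sketch of the same shadowing (not code from the original script):

    use strict;

    my $percomp = 0;                  # outer lexical -- the one the while tests
    while ($percomp != 100) {
        my $percomp = 100;            # a NEW inner lexical; the outer is untouched
        print "inner: $percomp\n";    # prints 100
        last;                         # without this, the loop would spin forever
    }
    # the outer $percomp is still 0 here, so the condition never became false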

    Perhaps you could change that loop to a favoured construct of mine ...

    while ( sleep 1 ) {
        # ...
        if ($percomp == 100) {
            print "\n$drac{Type} job $drac{primary} ",
                  "on $drac{hostname} -- DONE\n";
            last;
        }
        else {
            # ...
        }
    }

        --k.


Re: Is this the most efficient way to monitor log files..
by graff (Chancellor) on Apr 18, 2002 at 07:08 UTC
    You said:
    # For each line run the log checker.  Ideally, I would do this in
    # parallel for each item

    Do you mean you would really like the progress of all current jobs to be displayed at the same time? Then maybe you want a loop that reports current status on all entries in the ".dractrack" file, whereas your "CheckStat()" function watches a single job until it finishes before checking the next one.

    Consider the following alternative -- each time you run this, it will produce one line of status output for each job in your .dractrack file (at least, I hope so; I haven't tested it :P). Now, just decide how frequently you want an update, e.g. once every 5 sec, and run it that often.

    (If you have a lot of jobs, the first ones in the list will scroll by unless you pipe the output to "less" or something equivalent. If the jobs stick around a good while, you could format the reports as HTML, redirect to a file, and (re)load it in your browser whenever you feel like it.)
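    A minimal sketch of that HTML idea -- the file name, refresh interval, and sample @status_lines are all placeholders, not anything from the script below:

    my @status_lines = ("Cell: x01  Verifying: drc -- 42.00%");   # hypothetical sample
    open( HTML, ">status.html" ) or die "Can't write status.html: $!\n";
    print HTML "<html><head><meta http-equiv=\"refresh\" content=\"5\"></head><body><pre>\n";
    print HTML "$_\n" for @status_lines;    # whatever status lines the loop produced
    print HTML "</pre></body></html>\n";
    close HTML;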

    Given that the jobs you are monitoring appear to be very well organized, there is probably something in ".dractrack" that will always uniquely identify each job in the list. On that assumption, I'm using a hash (assuming "rundir" works as a key field) to hold the presumably static information from the "jxrun.stg" file for each job. This is in case I later decide to put the "main" part inside a while loop with a sleep call at each iteration.

    use strict;

    my @list;
    my %jobs;

    # start of main:
    open( DRAC, ".dractrack" ) or die "Can't open .dractrack\n";
    @list = <DRAC>;
    close DRAC;

    my $ndone = 0;
    for ( @list ) {
        chomp;
        my ($type, $primary, $rundir, $logfile, $host) = split;
        if ( not exists( $jobs{$rundir} )) {
            open( STAGE, "$rundir/jxrun.stg" );
            my @stage = <STAGE>;
            close STAGE;
            $jobs{$rundir}{term} = (split(/\b/, pop @stage))[8];
        }
        my $atstage = CheckStage( $rundir, $logfile );
        if ( $atstage == $jobs{$rundir}{term} ) {
            print "$type job $primary on $host -- DONE";
            $jobs{$rundir}{done}++;
            $ndone++;
        }
        else {
            printf("Cell: %s  Verifying: %s  at stage %3d of %3d -- %2.2f",
                   $primary, $type, $atstage, $jobs{$rundir}{term},
                   100 * $atstage / $jobs{$rundir}{term} );
        }
    }

    # write a "current" version of ".dractrack", if necessary.
    # WATCH OUT!  You REALLY need a semaphore or some other file
    # locking mechanism here (check Sean Burke's article about
    # semaphores in the most recent Perl Journal:
    # http://www.sysadminmag.com/tpj/)
    if ( $ndone ) {
        open( DRAC, ">.dracnew" ) or die "Can't rewrite .dractrack\n";
        for ( @list ) {
            my $dir = (split)[2];
            print DRAC "$_\n" unless $jobs{$dir}{done};
        }
        close DRAC and rename ".dracnew", ".dractrack";
    }
    # end of main

    sub CheckStage {
        my ($path, $log) = @_;
        open( LOG, "$path/$log" );
        my @lines = <LOG>;
        close LOG;
        $_ = pop @lines until ( /AT STAGE:/ );
        (split)[3];
    }
      Hi graff,
      You said:
      Consider the following alternative -- each time you run this, it will produce one line of status output for each job in your .dractrack file (at least, I hope so; I haven't tested it :P). Now, just decide how frequently you want an update, e.g. once every 5 sec, and run it that often.
      How would you propose to do this from inside the program? This is what led me to my post in the first place..

      Rhodium

      The *seeker* of perl wisdom.

        Read the last paragraph before the code again. You can just wrap the part I have bracketed as "main" with

        while (1) {

        at the top and

            sleep $nsec;
        }

        at the bottom. (Put your own condition in there if you like, or use a "last" statement inside the loop wherever it's easy to decide that the process can stop.)
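        Spelled out, it would look something like this ($nsec and the $all_done exit test are placeholders, not part of the original code):

        my $nsec     = 5;              # hypothetical polling interval in seconds
        my $all_done = 0;              # placeholder flag: set when every job is finished
        while (1) {
            # ... the whole "main" section from the parent reply goes here ...
            last if $all_done;
            sleep $nsec;
        }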

        Also, do heed grinder's advice (which followed my reply). He's right about how to read the log files.

Re: Is this the most efficient way to monitor log files..
by grinder (Bishop) on Apr 18, 2002 at 08:00 UTC
    When people talk about monitoring log files, the last thing you want to be doing is slurping the damned things into memory. Read them in line by line.

    Taking graff's code, which seems like a good basis to work from, I would recast it as:

    use strict;

    my %jobs;

    # start of main:
    open( IN, ".dractrack" ) or die "Can't open .dractrack for input: $!\n";
    my $ndone = 0;
    while( <IN> ) {
        chomp;
        my ($type, $primary, $rundir, $logfile, $host) = split;
        if ( not exists( $jobs{$rundir} )) {
            open( STAGE, "$rundir/jxrun.stg" );
            my $lastrec;
            while( <STAGE> ) { $lastrec = $_; }
            close STAGE;
            $jobs{$rundir}{term} = (split(/\b/, $lastrec))[8];
        }
        my $atstage = CheckStage( $rundir, $logfile );
        if ( $atstage == $jobs{$rundir}{term} ) {
            print "$type job $primary on $host -- DONE";
            $jobs{$rundir}{done}++;
            $ndone++;
        }
        else {
            printf("Cell: %s  Verifying: %s  at stage %3d of %3d -- %2.2f",
                   $primary, $type, $atstage, $jobs{$rundir}{term},
                   100 * $atstage / $jobs{$rundir}{term} );
        }
    }
    close IN;

    # write a "current" version of ".dractrack", if necessary.
    # WATCH OUT!  You REALLY need a semaphore or some other file
    # locking mechanism here (check Sean Burke's article about
    # semaphores in the most recent Perl Journal:
    # http://www.sysadminmag.com/tpj/)
    if ( $ndone ) {
        open( OUT, ">.dracnew" ) or die "Can't open .dracnew for output: $!\n";
        open( IN, ".dractrack" ) or die "Can't open .dractrack for input: $!\n";
        while( <IN> ) {
            my $dir = (split)[2];
            print OUT unless $jobs{$dir}{done};
        }
        close OUT and rename ".dracnew", ".dractrack"
            or die "Cannot rename .dracnew to .dractrack: $!\n";
    }
    # end of main

    sub CheckStage {
        my ($path, $log) = @_;
        open( LOG, "$path/$log" )
            or die "Cannot open $path/$log for input: $!\n";
        my $at_stage_rec = undef;
        while( <LOG> ) {
            $at_stage_rec = $_ if /AT STAGE:/;
        }
        close LOG;
        $at_stage_rec ? (split / /, $at_stage_rec)[3] : undef;
    }

    There are three places where files are being slurped: the main file; the stage files (and only the last line is needed!); and the log files, for which only the last line containing "AT STAGE" is needed. For all of these, a straight sequential read through the file will pick up everything you need. This will be much cheaper than building up arrays left, right, and center, only to throw them away after having picked out a single item.

    Also, when checking whether open et al. fail, it is a good idea to state what the error was ($!), along with any other useful information (e.g. whether the file was being opened for input or output).
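    For instance, just the idiom ($file here is a placeholder):

    my $file = ".dractrack";    # placeholder file name
    open( IN, "<$file" )
        or die "Can't open $file for input: $!\n";   # $! carries the OS error text
    close IN;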


    print@_{sort keys %_},$/if%_=split//,'= & *a?b:e\f/h^h!j+n,o@o;r$s-t%t#u'
Re: Is this the most efficient way to monitor log files..
by belg4mit (Prior) on Apr 18, 2002 at 03:10 UTC
    Another thing that is nipping at your heels in the way of CPU usage is your means of displaying status. (You could use Curses, but why bother?) The problem here is that buffering is on: you have to overflow the buffer before anything gets dumped to your terminal, and then, thanks to \r, your terminal proceeds to run through a buffer's worth of lines (last I checked on my linux box it was 2kb) and displays only the last one. Not so efficient. $|++. Also, you don't need to print a status update on every pass of the loop; your eyes cannot see much more than 26 fps :-P
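    A minimal sketch of both fixes ($percomp and the once-per-second throttle are placeholders, not taken from the original script):

    $|++;                          # unbuffer STDOUT so each \r redraw appears at once

    my $last_draw = 0;
    for my $percomp ( 0 .. 100 ) {            # stands in for the real monitoring loop
        if ( time() != $last_draw ) {         # redraw at most once per second
            printf "%2.2f%%\r", $percomp;
            $last_draw = time();
        }
    }
    print "\n";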

    --
    perl -pe "s/\b;([mnst])/'\1/mg"
