Re: Re: Obtaining Apache logfile stats?

by mvam (Acolyte)
on Mar 25, 2004 at 16:46 UTC



in reply to Re: Obtaining Apache logfile stats?
in thread Obtaining Apache logfile stats?

A sample log line:

x.x.x.x - 24/Mar/2004:12:26:52 -0800 "GET /manual/misc/perf-tuning.html HTTP/1.1" 200 0 48296 "http://localhost/manual/" "Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.6) Gecko/20040211 Firefox/0.8"

And this is my LogFormat line:

LogFormat "%v %{x-up-subno}i %t \"%r\" %>s %T %b \"%{Referer}i\" \"%{User-Agent}i\"" wap


Re: Re: Re: Obtaining Apache logfile stats?
by sauoq (Abbot) on Mar 25, 2004 at 16:56 UTC

    How are you calculating the "0_seconds" portion of your sample data?

    -sauoq
    "My two cents aren't worth a dime.";
    
      0_seconds is a sed substitution. It's really just 0 or 1 or whatever was returned, so it would be 0_seconds, 1_second, etc. I was hoping it would make the output file easier to read, but it doesn't really matter whether it's there or not, since the average is what gets used.

      sauoq, I believe that's coming from the "%T" portion of the log format.


      _______________
      DamnDirtyApe
      Those who know that they are profound strive for clarity. Those who
      would like to seem profound to the crowd strive for obscurity.
                  --Friedrich Nietzsche
Re: Re: Re: Obtaining Apache logfile stats?
by sauoq (Abbot) on Mar 25, 2004 at 17:28 UTC

    The quick and dirty approach would be to just carve it up on whitespace, like you are doing with awk anyway. The conversion is straightforward: use split or perl's -a option (as in my example above).

    Regardless of how you parse the input, you'll probably find it worthwhile to compute the statistics for every file accessed on one pass through your log. That's a lot more efficient than reading your whole log once for each file you want stats on. That's easy enough; just use a hash to maintain data for each filename as you traverse the log.
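
    (For what it's worth, a one-pass version of that can also be squeezed into a -a one-liner. This is just an untested sketch; the field indices (5 for the path, 8 for the %T seconds) and the access_log filename are assumptions based on the sample line earlier in the thread.)

        perl -lane '$count{$F[5]}++; $sum{$F[5]} += $F[8];
            END { printf "%s\t%.2f\n", $_, $sum{$_}/$count{$_} for sort keys %count }
        ' access_log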

    -sauoq
    "My two cents aren't worth a dime.";
    
      split is a good idea. Would it be faster to use an array?
        would it be faster to use an array?

        If you mean "faster to use an array instead of a hash for collecting the data", then no, it would not be faster. I would split each line into an array during processing though.

        The idea is to key the hash by the filenames. So, every time you come across, for instance, "/some/dir/file.html", you increase a count and a sum. The code might look something like this (untested):

        while (<LOG>) {
            my @part = (split ' ', $_)[5,8];
            $hash{$part[0]}->[0] ++;           # increase the count.
            $hash{$part[0]}->[1] += $part[1];  # increase the sum.
        }
        Note that the values of the hash are arrayrefs in order to store both the count and the sum associated with each filename. After you've munged your logs into raw data, you'll traverse the hash you created and compute the stats you want. Something like (again, untested):
        for my $key (sort keys %hash) {
            my $avg = $hash{$key}->[1] / $hash{$key}->[0];  # sum/count.
            print "$key\t$avg\n";
        }
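
        (One small note on the sketch above: the <LOG> handle assumes the log was opened earlier, e.g. with open LOG, '<', 'access_log' or die $!;. Alternatively, read from <> and pass the log file on the command line.)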

        -sauoq
        "My two cents aren't worth a dime.";
        
