PerlMonks  

Re^3: Splitting Apache Log Files

by crashtest (Curate)
on Apr 26, 2010 at 19:01 UTC


in reply to Re^2: Splitting Apache Log Files
in thread Splitting Apache Log Files

qr// is a regular expression quote, and as such does, in a sense, compile regular expressions. Unfortunately, you're using the regular expression as a hash key, at which point it's turned back into a string. As you process the Apache log file, $rule is just a string. When you use it as a regular expression, it has to be compiled again - each time through the loop.
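The stringification is easy to see directly. Here is a minimal sketch (the pattern and filename are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $re = qr/foo\d+/;               # a compiled Regexp object
print ref($re), "\n";              # prints "Regexp"

my %rules = ( $re => 'some.log' ); # ...but as a hash key it becomes text
my ($key) = keys %rules;
print ref($key) ? "object\n" : "plain string\n";  # prints "plain string"

# Matching against the string still works, because the stringified
# form round-trips, but Perl must recompile it on each use.
print "match\n" if "foo42" =~ $key;
```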

If I were writing your code, I would store the regular expression rules/filehandles in an array. Here's a sketch of what it might look like:

my @rules; # not %rules.
...
# Process input file of processing rules
while (<INFILE>) {
    ...
    push @rules, { regex => qr/$string/, file_handle => $fh };
}
...
# Read Apache log file and print to various other files
while (my $line = <STDIN>) {
    for my $rule_ref (@rules) {
        my $regex = $rule_ref->{regex};
        my $fh    = $rule_ref->{file_handle};
        if ($line =~ $regex) {
            print $fh $line;
        }
    }
}

Hope this helps.

Replies are listed 'Best First'.
Re^4: Splitting Apache Log Files
by cmm7825 (Novice) on Apr 26, 2010 at 20:16 UTC
    WOW, never knew that. Thanks a lot, the execution time went down to 22 seconds.
      execution time went down to 22 seconds.

      You must have a pretty damn fast disk. SSD?

      I split a 500MB file into 10 on the basis of a single digit at a fixed position in each line:

      while( <> ) {
          print {$fhs[ substr $_, 2, 1 ]} $_;
      }

      And I can't get below 1 minute.
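For reference, a sketch of how the @fhs array of output handles in the snippet above might be initialized (the "part<digit>.log" filenames are an assumption for illustration, not from the original post):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Open one output handle per digit 0-9; @fhs is indexed by the
# digit found at the fixed position in each input line.
my @fhs;
for my $digit (0 .. 9) {
    open $fhs[$digit], '>', "part$digit.log"
        or die "cannot open part$digit.log: $!";
}
```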

      If you're managing to test each line against many (how many?) regexes, and still beat mine by 60%, I want a disk like yours.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        I'm not sure of the exact specs...but it's a production server. I assume it has RAID and SCSI hard disks.
      Just as a simple I/O test, I played around with just creating a 500MB file and then copying that file to another file on my old WinXP machine.
      #!/usr/bin/perl -w use strict; my $begin = time(); open (BIG, ">fiveHundredMB") or die "cannot open file for 500MB write" +; my $c255 = '*'x255; my $c256 = "$c255\n"; my $oneK = "$c256"x4; my $oneMB = "$oneK"x1024; print BIG $oneMB for (1..500); close BIG; my $end = time(); print "elasped time for 500MB file is: ", $end-$begin, " seconds\n"; __END__ elasped time for 500MB file is: 9 seconds I opened this file in my text editor and there are 2,048,000 lines of 256 chars = 524,288,000 bytes. Windows says: 526,336,000 bytes at command line. This difference is a mystery to me at the moment. But this is basically a ~500 MB file.
      #!/usr/bin/perl -w use strict; my $begin = time(); open (BIG, "<fiveHundredMB") or die "cannot open file for 500MB read"; open (OUT, ">bigfile") or die "cannot open bigfile for write"; while (<BIG>) { print OUT $_; } close OUT; my $end = time(); print "elasped time for 500MB file is: ", $end-$begin, " seconds\n"; __END__ prints: elasped time for 500MB file is: 13 seconds
      Even at 22 seconds, the execution time seems slow, but that depends upon the number of regexes you are running per line of input and how many lines there are. I suspect that your lines are far less than 1024 bytes in length on average. If the performance is adequate for your use, I would stick a fork in it and call it done! About 13-14 seconds is as fast as a single HD can go without any processing of the data.
