PerlMonks

Need help to fine tune perl script to make it faster (currently taking more than 30 minutes)

by anujajoseph (Novice)
on Nov 14, 2012 at 02:46 UTC ( [id://1003734] )

anujajoseph has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I need your expert advice on fine-tuning the Perl script below, which currently takes more than 30 minutes to extract information from a raw file of nearly 2 million records. These are the requirements the script meets:

1) The script should scan through a log file and write the final result to a file called "test.txt".
2) It replaces some strings in the log file with more generic terms.
3) It finds the record number of the affected row from the log file and uses this record number to query the main source file for that particular record.
4) It generates a string which now contains information from the log file and patches it together with the affected row from the source.
5) The result is written to the output file test.txt.

======================= code ============================

#!/usr/bin/perl
$read_file   = "$ARGV[0]";
$read_source = "$ARGV[1]";
open(LOGFILE, $read_file) or die "An Error Occured : $!";
open(REPORT, ">/retsit/systematics/test.txt");
$str1 = 'failed all WHEN clauses';
$str2 = 'CUST SEGEMENT IS EMPTY';
$str3 = 'unique constraint';
$str4 = 'DUPLICATE RECORD';
while (<LOGFILE>) {
    if ($_ =~ /Record/) {
        $_ =~ s/$str1/$str2/g;
        $_ =~ s/$str3/$str4/g;
        $ind1   = index($_, 'Record') + 6;
        $len2   = index($_, ':') - 6;
        $recnum = substr($_, $ind1, $len2);
        $recnum =~ s/^\s+|\s+$//g;
        $strx = "sed -n '" . $recnum . "p' " . $read_source;
        $str5 = `$strx`;
        $_ .= '|' . "$recnum" . "$str5\n";
        print REPORT $_;
    }
}
close(LOGFILE);
close(REPORT);

Replies are listed 'Best First'.
Re: Need help to fine tune perl script to make it faster (currently taking more than 30 minutes)
by roboticus (Chancellor) on Nov 14, 2012 at 02:55 UTC

    anujajoseph:

    I'd suggest not shelling out to sed to print a line. Instead, read the file into an array and print the appropriate line. That would be better than rescanning the file over and over. Or build an array of lines to print, then after the while loop, scan the file once, printing each line in your array.
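
    For illustration, a rough sketch of that second suggestion (untested; it reuses the filehandles and $read_source from the original script and pulls the record number out with a simple regex, so treat those details as assumptions):

        # Pass 1: remember which record numbers the log mentions.
        my %wanted;
        while (my $line = <LOGFILE>) {
            next unless $line =~ /Record\s*(\d+)/;
            $wanted{$1}++;
        }

        # Pass 2: scan the source file once; $. is Perl's current-line counter.
        open my $src, '<', $read_source or die "Can't read $read_source: $!";
        while (my $line = <$src>) {
            print REPORT $line if $wanted{$.};
        }
        close $src;

    The string assembly from the original loop would still happen wherever the wanted lines turn up; the point is that the source file gets read exactly once.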

    ...roboticus

    When your only tool is a hammer, all problems look like your thumb.

      Thanks for your suggestion, but my concern is: since the source file is huge, won't it cause memory issues if I open the file and read all the rows?

        anujajoseph:

        If the source file is too large to fit into memory, then the second suggestion might be better. But since computers tend to have so much RAM nowadays, most files will fit into memory.

        Update: On rereading the thread, I realize that in my first post I left out a word in the second suggestion. It should be an array of line numbers.

        ...roboticus

        When your only tool is a hammer, all problems look like your thumb.

Re: Need help to fine tune perl script to make it faster (currently taking more than 30 minutes)
by Tanktalus (Canon) on Nov 14, 2012 at 03:47 UTC

    I think roboticus nailed the biggest one. You're doing two things in that one line that are killers to performance.

    The first, which is probably the smaller one, is shelling out. The fact that you're forking a second process (overhead) and execing something else (reinitialising the C library, parsing parameters, etc., more overhead), well, that's a lot of overhead. Especially for things that Perl can do internally nearly as quickly, but without the overhead.

    The second, and larger, problem is that this subprocess is scanning your 2-million-record file from the beginning each time. That is, it opens the file (overhead), and then reads each record one at a time, looking at each byte for record-ending text (the \n character), counting them up to find the specific record you need.

    The minor issue is the way you get rid of leading/ending spaces (see this stackoverflow question, though I'm sure we've covered it here, too).
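
    The usual fix there, for reference, is two separate substitutions rather than one alternation (a small aside, not benchmarked here):

        $recnum =~ s/^\s+//;    # strip leading whitespace
        $recnum =~ s/\s+$//;    # strip trailing whitespace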

    As for the first problem, 2 million records doesn't seem that bad. Say 500 bytes per line on average, that's 1GB. Perl adds some overhead, but if you were to load each record into an array, you're looking at less than 1.25GB of RAM, which is fairly minor for most (but definitely not all) systems. It'd be a bit of a strain on many systems, but not beyond the realm of reason, especially if it can save you 20+ minutes of processing time (I'm hesitant to say how much you will actually save, but I'm guessing more than that).

    If you're actually on a system where this is an issue, even Tie::File can save a ton of time just because it will cache file offsets for records so you'll only scan through the file a handful of times.
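
    A minimal sketch of the Tie::File route, assuming $read_source and $recnum from the original script (Tie::File ships with Perl):

        use Tie::File;

        tie my @source, 'Tie::File', $read_source
            or die "Can't tie $read_source: $!";

        # @source now behaves like an array of the file's lines, fetched
        # lazily from disk; file line N is $source[N - 1].
        my $str5 = $source[ $recnum - 1 ];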

    Hope that helps.

      And if the data do not fit in memory and Tie::File doesn't help enough, you can read the data from read_source into an array tied to BerkeleyDB::Recno. That way the data are on disk, but the access by line number is very efficient.
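
      A hedged sketch of that setup (the database file name source.db is invented, and the one-time load below is the up-front cost you pay for fast lookups afterwards):

          use BerkeleyDB;

          # One-time load: copy the text file into a Recno database on disk.
          tie my @db, 'BerkeleyDB::Recno',
              -Filename => 'source.db',
              -Flags    => DB_CREATE
              or die "Can't tie source.db: $BerkeleyDB::Error";

          open my $src, '<', $read_source or die "Can't read $read_source: $!";
          while (my $line = <$src>) {
              chomp $line;
              push @db, $line;    # record i of the file becomes $db[i - 1]
          }
          close $src;

          # From then on, access by line number is a cheap keyed lookup:
          my $str5 = $db[ $recnum - 1 ];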

      Jenda
      Enoch was right!
      Enjoy the last years of Rome.

      Dear all, many thanks for your suggestions. Is it possible to use a Perl one-liner effectively within this script to achieve the result, instead of using the Unix "sed" command? I'm very new to Perl scripting. Thanks for all your help.

        I don't see how that would work. You're proposing fixing a performance issue caused by fork/exec/re-reading a large file from the beginning on each iteration, by doing exactly the same thing but with a virtual machine added in the middle instead of optimised C code? This isn't a perl-specific question you're asking, it's fairly generic. The proposed solutions (e.g., Tie::File) include some perl-specific suggestions, and some that aren't (read the file into memory as a list/array - you can do that in C++ with the STL fairly easily, and Java should make it pretty simple, too), but the general issue is language-agnostic.

        Instead, if you read it all into memory, you likely need just a line or two to duplicate what sed was doing. Without testing or even compiling:

        # do this once, OUTSIDE OF YOUR LOOP.
        my @read_source_lines = do {
            open my $fh, '<', $read_source
                or die "Can't read from $read_source: $!";
            <$fh>;
        };
        # you may also need:
        chomp @read_source_lines;    # gets rid of \n's.

        # inside the loop, instead of $strx/$str5
        # (note: sed counts lines from 1, Perl arrays from 0):
        my $str5 = $read_source_lines[ $recnum - 1 ];
        print REPORT "$_|$recnum$str5\n";
        Assuming you don't start swapping, this should eliminate most of your time. Note that there are better/faster ways to do this, but this will get you most of the benefit for the least amount of effort. Many of those better ways are actually embedded in Tie::File, IIRC (reading only as many lines as is currently needed, continuing from where you left off, maybe you don't need to read the entire file, this may also allow the OS to continue reading the file in the background to fill up your input buffers while you go do other work, that type of thing).

        Please help to amend the script with what you suggest so that I can test it (as I'm quite new to Perl scripting). Thanks for your understanding and help.
Re: Need help to fine tune perl script to make it faster (currently taking more than 30 minutes)
by space_monk (Chaplain) on Nov 14, 2012 at 04:34 UTC
    Others have given you most of the answer, but it also looks as though you're pulling the record number out by some incredibly convoluted process. What is wrong with something like:
    /Record(\d+):/ && do {
        $recnum = $1;
        s/$str1/$str2/g;
        s/$str3/$str4/g;
        ..
    }
    This checks for matching lines and pulls the record number out at the same time...

    Note that the regex in the above may not be quite right as your original seems to back up 6 characters from the ':', but the regex can be amended to do that too.
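
    For instance, something like /Record\s*(\d+)\s*:/ (untested) should tolerate whatever padding the original index/substr arithmetic was stepping over.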

    A Monk aims to give answers to those who have none, and to learn from those who know more.
Re: Need help to fine tune perl script to make it faster (currently taking more than 30 minutes)
by anujajoseph (Novice) on Nov 14, 2012 at 10:52 UTC
    Thank you, Perl Monks!! Your suggestion has worked for me perfectly. The script now takes only a second to complete. I really appreciate your genuine help! Keep up the good work :-)
Re: Need help to fine tune perl script to make it faster (currently taking more than 30 minutes)
by Anonymous Monk on Nov 14, 2012 at 14:20 UTC
    If you need to modify a flat-file, and the size is such that it can't be slurped into memory (and a 1GB file probably can be ...), it's best to process the file one record at a time, writing the modified records to another file. Then, swap the file-names around. If you slurp into memory, run a performance monitor to see if the virtual-memory subsystem starts thrashing, because VM swapping activity is a serious source of "invisible" disk I/O. If you process a record at a time, memory won't be an issue but data movement (a gigabyte is read and written) will be. But a one-gigabyte file should be processed sequentially in seconds, not 30 minutes.
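
    A minimal sketch of that record-at-a-time pattern (the file names here are invented):

        open my $in,  '<', 'records.txt'     or die "read: $!";
        open my $out, '>', 'records.txt.new' or die "write: $!";
        while (my $line = <$in>) {
            # ... transform $line as needed ...
            print {$out} $line;
        }
        close $in;
        close $out or die "close: $!";

        # Swap the names around only after the rewrite succeeded.
        rename 'records.txt.new', 'records.txt' or die "rename: $!";
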
Re: Need help to fine tune perl script to make it faster (currently taking more than 30 minutes)
by fluticasone (Initiate) on Nov 15, 2012 at 03:48 UTC
    Well, building good habits while you are fresh and new is better than playing along with each mess this code gets you into. Watch this: you are opening a file as if there could be no problems; check that the open succeeds. And rather than relying on that bare "if", use a while (<IN>) loop, chomp each line, and assign it to a variable, say "X"; basically, you will strip stray line endings and keep a clean copy on hand, just in case. Use the File::Find module and build an array until you are satisfied. Finally, go back and create or copy a file into a new one with the new results. You'll see that once you try new constructs, you will be excited the next time you use Perl. If you have not started yet, begin writing code slowly. And if you need me to write it for you, I need to know the file name and variable names. Love Perl as much as I do.
      Thanks for looking into the issue. The current Perl version on my server is: "This is perl 5, version 12, subversion 0 (v5.12.0) built for sun4-solaris". I was referring to a few sample Perl programs and trying to come up with my own code. I would definitely prefer to use the most optimised and latest standard of coding Perl. If it's not too much to ask, would you consider rewriting my code using the standards you mentioned? Thanks again.
