http://www.perlmonks.org?node_id=1003739


in reply to Need help to fine tune perl script to make it faster( currently taking more than 30 minutes)

I think roboticus nailed the biggest one. You're doing two things in that one line that are performance killers.

The first, which is probably the smaller one, is shelling out. You're forking a second process (overhead) and execing something else (reinitialising the C library, parsing parameters, and so on: more overhead). That adds up to a lot of overhead, especially for things that Perl can do internally nearly as quickly, but without any of it.

The second, and larger, problem is that this subprocess scans your 2-million-record file from the beginning each time. That is, it opens the file (more overhead), then reads the records one at a time, examining each byte for the record-ending text (the \n character) and counting newlines until it reaches the specific record you need.
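
If I'm reading the thread right, the offending line looks something like this (a guess, since I don't have your exact code; $recnum and $read_source are the names used later in this thread):

    # Every iteration forks a shell, execs sed, and makes sed read
    # $read_source from the top just to reach line $recnum.
    my $str5 = `sed -n "${recnum}p" $read_source`;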

The minor issue is the way you get rid of leading/trailing spaces (see this stackoverflow question, though I'm sure we've covered it here, too).
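
One common in-Perl idiom for the trimming, in case it helps (assuming the record ends up in $str5 as above):

    # strip leading and trailing whitespace in a single pass
    $str5 =~ s/^\s+|\s+$//g;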

As for fixing this, 2 million records doesn't seem that bad. Say 500 bytes per line on average; that's 1GB. Perl adds some overhead, but if you were to load each record into an array, you're looking at less than 1.25GB of RAM, which most (but definitely not all) systems can handle. It'd be a bit of a strain on some, but not beyond the realm of reason, especially if it saves you 20+ minutes of processing time (I'm hesitant to say how much you will actually save, but I'm guessing it's more than that).

If you're actually on a system where memory is an issue, even Tie::File can save a ton of time, simply because it caches file offsets for records, so you'll only scan through the file a handful of times.
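
A minimal sketch of the Tie::File approach (untested; it reuses the $read_source and $recnum names from later in this thread):

    use Tie::File;

    # Tie the file to an array. Records are fetched lazily and their
    # offsets are cached, so lookups don't rescan the file from the top.
    tie my @lines, 'Tie::File', $read_source
        or die "Can't tie $read_source: $!";

    # inside the loop:
    my $str5 = $lines[$recnum];    # record separator already stripped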

Hope that helps.

Re^2: Need help to fine tune perl script to make it faster( currently taking more than 30 minutes)
by Jenda (Abbot) on Nov 14, 2012 at 09:19 UTC

    And if the data do not fit in memory and Tie::File doesn't help enough, you can read the data from read_source into an array tied to BerkeleyDB::Recno. That way the data are on disk, but the access by line number is very efficient.
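
    Something like this, untested (the .db filename is just for illustration):

        use BerkeleyDB;

        # One-time load: copy each line into an on-disk Recno database,
        # which keys records by number.
        tie my @lines, 'BerkeleyDB::Recno',
            -Filename => "$read_source.db",    # hypothetical cache file
            -Flags    => DB_CREATE
          or die "Cannot tie: $BerkeleyDB::Error";

        unless (@lines) {    # populate only on the first run
            open my $fh, '<', $read_source
                or die "Can't read from $read_source: $!";
            my $i = 0;
            while (my $line = <$fh>) {
                chomp $line;
                $lines[$i++] = $line;
            }
            close $fh;
        }

        # Lookups by record number now go to disk, not RAM:
        my $str5 = $lines[$recnum];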

    Jenda
    Enoch was right!
    Enjoy the last years of Rome.

Re^2: Need help to fine tune perl script to make it faster( currently taking more than 30 minutes)
by anujajoseph (Novice) on Nov 14, 2012 at 04:21 UTC
    Dear all, many thanks for all your suggestions. Is it possible to use a Perl one-liner effectively within this script to achieve the result, instead of using the unix "sed" command? I'm very new to Perl scripting. Thanks for all your help.

      I don't see how that would work. You're proposing to fix a performance issue caused by fork/exec/re-reading a large file from the beginning on each iteration by doing exactly the same thing, just with a virtual machine added in the middle instead of optimised C code. This isn't a Perl-specific question you're asking; it's fairly generic. The proposed solutions include some Perl-specific suggestions (e.g., Tie::File) and some that aren't (reading the file into memory as a list/array works in C++ with the STL fairly easily, and Java should make it pretty simple, too), but the general issue is language-agnostic.

      Instead, if you read it all into memory, it likely takes just a few lines. Without testing or even compiling:

      # do this once. OUTSIDE OF YOUR LOOP.
      my @read_source_lines = do {
          open my $fh, '<', $read_source
              or die "Can't read from $read_source: $!";
          <$fh>;
      };
      # you may also need:
      chomp @read_source_lines;   # gets rid of \n's.

      # inside the loop, instead of $strx/$str5:
      my $str5 = $read_source_lines[$recnum];
      print REPORT "$_|$recnum$str5\n";
      Assuming you don't start swapping, this should eliminate most of your run time. Note that there are better/faster ways to do this, but this gets you most of the benefit for the least effort. Many of those better ways are actually embedded in Tie::File, IIRC: reading only as many lines as are currently needed, continuing from where you left off (so you may never read the entire file), and letting the OS keep filling your input buffers in the background while you do other work, that sort of thing.

        Thank you so much for your genuine help and time! Appreciate it much! This piece of code you suggested (without using Tie::File) works perfectly fine; now the whole processing takes only a second. I had always heard Perl is very fast, and now I have seen its performance. Thanks again for making me a Perl fan too :-)
      Please help to amend the script with what you suggest so that I can test it (as I'm quite new to Perl scripting). Thanks for your understanding and help.