Re: Recommendations for efficient data reduction/substitution application

by BrowserUk (Patriarch)
on Mar 03, 2015 at 23:20 UTC


in reply to Recommendations for efficient data reduction/substitution application

My suggestion comes in two parts:

  1. Throw CPUs at it.

    It doesn't much matter whether they are in the form of threads or processes, so long as the coordination is done right.

    Take the size in bytes of your file, divide it by the number of cores you have at your disposal, and assign a byte range to each core.

    Of course, those byte ranges won't fall exactly on line boundaries, but if a process/thread seeks to its start offset and then uses readline, the first line read will likely be a partial, so discard it and read the next.

    Then have your read loop check its file position with tell after each read, and stop once that position has moved past the end byte of its range. That way it processes one final complete line, including the partial that the process handling the next range discarded.

    In (untested) code:

    my( $fh, $start, $end, $n ) = @_;  # $n is this process' sequence number
    seek $fh, $start, 0;
    <$fh> if $start;                   # discard likely-partial first line (not needed at offset 0)
    my $pos = tell $fh;
    while( $pos < $end ) {
        my $line = <$fh>;
        $pos = tell $fh;
        for my $regex ( @regex ) {
            $line =~ $regex;
        }
        ## write out modified line. (see below)
    }
  2. Process multiple lines at a time if your regexes allow it (or can be rewritten to do so).

    Starting the regex engine 100 times for each line is expensive. If you can safely rewrite your regexes to process a batch of lines at a time, you can amortise those startup costs.

    (Note: that can be a big 'if'. If the rewrite makes the regex much more complex, it can be self-defeating; but it's worth trying a few tests to find out.)
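
    For instance (untested, and the pattern is a hypothetical stand-in for your real rules), a rule applied once per line can often be batched just by adding the /m and /g modifiers:

    use strict; use warnings;
    # Per line, the rule would be: $line =~ s/^ERROR:/WARN:/;
    # Batched: /m lets ^ match after every embedded newline, and /g applies
    # the substitution throughout, so the engine starts once per batch.
    my $buffer = "ERROR: disk\nINFO: ok\nERROR: net\n";
    $buffer =~ s/^ERROR:/WARN:/mg;
    print $buffer;    # WARN: disk / INFO: ok / WARN: net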

    Combining this with step 1 above takes thought and care to prevent exhausting your physical memory and moving into swapping.

    Let's say you have a 4-core machine with 4GB physical memory, but a 32-bit perl, thus a max of 2GB per process; and a 5GB file to deal with.

    First divide the file into 4 ranges: 0 .. 1.25GB-1, 1.25GB .. 2.5GB-1, 2.5GB .. 3.75GB-1, 3.75GB .. 5GB.

    If all 4 processes loaded their full quota at once, they'd push the machine into swapping, so have each one work through its range in two chunks: 4 * 0.625GB = 2.5GB < 4GB.

    So, more untested code:

    my( $fh, $start, $end, $n ) = @_;        # $n is this process' sequence number
    my $mid = int( ( $start + $end ) / 2 );  # a loop and careful math for more than 2 chunks
    my @ranges = ( [ $start, $mid - 1 ], [ $mid, $end ] );
    for my $range ( @ranges ) {
        my( $start, $end ) = @$range;        # use different names if you prefer
        seek $fh, $start, 0;
        <$fh>, $start = tell( $fh ) if $start != 0;  # discard partial and get real starting point (if not at the beginning)
        seek $fh, $end, 0;
        <$fh>, $end = tell $fh;              # get true end point
        seek $fh, $start, 0;
        read( $fh, my $buffer, $end - $start );
        for my $regex ( @regex ) {
            $buffer =~ $regex;
        }
        ## write out modified buffer. (see below)
    }

The tricky bit with both schemes is combining the modified chunks back together, as the modifications will have changed their lengths.

The simplest mechanism is to write separate small files with a naming convention that allows them to be ordered.

E.g. you have 4 processes, so give each process a number and have it write to a numbered file: infile.dat.part0n. Have the parent process (which allocated the ranges and started the kids) wait for them all to complete, then merge the part files back together.
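
A minimal sketch of the parent side (untested, in the spirit of the above), assuming a worker() routine along the lines of the first snippet; the file name, core count and part-file naming are illustrative:

    use strict; use warnings;

    my $file  = 'infile.dat';                   # illustrative name
    my $cores = 4;
    my $size  = -s $file;
    my $chunk = int( $size / $cores );

    sub worker {                                # per-range processing, as in part 1
        my( $fh, $out, $start, $end, $n ) = @_;
        seek $fh, $start, 0;
        <$fh> if $start;                        # discard likely-partial first line
        my $pos = tell $fh;
        while( $pos < $end ) {
            defined( my $line = <$fh> ) or last;
            $pos = tell $fh;
            ## apply @regex to $line here, as above
            print {$out} $line;
        }
    }

    my @pids;
    for my $n ( 0 .. $cores - 1 ) {
        my $start = $n * $chunk;
        my $end   = $n == $cores - 1 ? $size : ( $n + 1 ) * $chunk - 1;
        defined( my $pid = fork() ) or die "fork: $!";
        if( $pid == 0 ) {                       # child: process one range
            open my $in,  '<', $file           or die $!;
            open my $out, '>', "$file.part0$n" or die $!;
            worker( $in, $out, $start, $end, $n );
            exit 0;
        }
        push @pids, $pid;
    }
    waitpid $_, 0 for @pids;                    # parent waits for all the kids

    open my $merged, '>', "$file.new" or die $!;
    for my $n ( 0 .. $cores - 1 ) {             # merge the part files, in order
        open my $part, '<', "$file.part0$n" or die $!;
        print {$merged} $_ while <$part>;
    }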

HTH.

Update: If you have a second physical device available, do your small file writes to that; and then merge them back to the original device.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
