Forking Multiple Regex's on a Single String

by bernanke01 (Beadle)
on Aug 19, 2006 at 19:46 UTC ( #568393=perlquestion )
bernanke01 has asked for the wisdom of the Perl Monks concerning the following question:

Hello again,

Recently I asked about tuning multiple regular expressions and got some good suggestions and help with benchmarking. This code is now working well. However, I want to take advantage of the fact that I'm working on a many-CPU machine and to try forking these different searches.

Essentially, my program (given at the bottom) does three things:

  1. open a series of files one-by-one and load into memory
  2. for each file perform many (10-30) searches
  3. write the search results to a single file

My thought was to fork each of the separate string searches, so the string gets loaded from disk only once, and multiple CPUs can be used for searching it.

Aside from wondering if this is a reasonable approach, my main question is about implementing the output-to-disk. Can I have all the children write to the same filehandle, or will this lead to corruption? And because the file output will almost certainly be the limiting step, am I being silly in parallelizing this in the first place? If there's a way to pass data back from the child to the parent, I could build one aggregated results hash and write it to disk all at once, for example.

Any advice/suggestions much appreciated!

Here's the code:

### INCLUDES ###########################################################
use strict;
use Bio::SeqIO;
use Carp;

### PARAMETERS #########################################################
my $chr_file = $ARGV[0];
my $seq_file = $ARGV[1];

if ( 2 != scalar(@ARGV) ) {
    croak 'Invalid parameter number';
}
elsif ( ! -e $chr_file || ! -T $chr_file ) {
    croak 'Missing or invalid chromosome-listing file';
}
elsif ( ! -e $seq_file || ! -T $seq_file ) {
    croak 'Missing or invalid sequence-listing file';
}

### LOCALS #############################################################
my @chromosomes;
my %motifs;

### LOAD THE CHROMOSOME LIST ###########################################
open(my $fh_chr, '<', $chr_file) or croak "Unable to open chromosome list: $chr_file";
while (<$fh_chr>) {
    s/^\s+//;
    s/\s+$//;
    my $row = $_;
    next() if (!$row);
    push @chromosomes, $row;
}
close($fh_chr);

### LOAD THE MOTIF LIST ################################################
open(my $fh_seq, '<', $seq_file) or croak "Unable to open motif file: $seq_file";
while (<$fh_seq>) {
    s/^\s+//;
    s/\s+$//;
    my @row = split("\t");
    next() if ( 2 != scalar(@row) );
    $motifs{ $row[0] } = $row[1];
}
close($fh_seq);

### FIND SEQUENCE MOTIFS ###############################################
foreach my $chromosome (@chromosomes) {
    my $directory = $chromosome.'/';
    my $file      = 'chr'.$chromosome.'.fa.masked';
    my $path      = $directory.$file;

    my $seqio = Bio::SeqIO->new(
        -file   => "<$path",
        -format => 'largefasta'
    );
    my $seq      = $seqio->next_seq();
    my $sequence = $seq->seq();

    foreach my $motif ( keys(%motifs) ) {
        my $str = $motifs{$motif};
        my $len = length($str);
        my $pos = 0;
        while ( ($pos = index($sequence, $str, $pos)) >= 0 ) {
            print join("\t", $chromosome, $pos, $motif), "\n";
            $pos += $len;
        }
    }
}

Re: Forking Multiple Regex's on a Single String
by izut (Chaplain) on Aug 19, 2006 at 21:57 UTC

    You can have your programs communicate with a separate program through a UNIX socket, with that program acting as the output writer. You can use POE and its friends for that.

    Igor 'izut' Sutton
    your code, your rules.
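The single-writer idea above doesn't strictly need POE or a UNIX socket; here is a minimal sketch of the same design using a plain pipe, where only the parent ever touches the output stream. The job count and message format are invented for illustration:

```perl
#!/usr/bin/perl
# Sketch of a single-writer design: children send result lines up a
# pipe, and only the parent writes them out, so there is one point of
# output and no contention on the file.
use strict;
use warnings;

pipe(my $reader, my $writer) or die "pipe: $!";

my @pids;
for my $job (1 .. 3) {
    my $pid = fork();
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {                       # child: writes to the pipe only
        close $reader;
        print {$writer} "child $job\tresult\n";
        close $writer;
        exit 0;
    }
    push @pids, $pid;
}
close $writer;                             # parent keeps only the read end

my @lines;
while (my $line = <$reader>) {             # single point of output
    push @lines, $line;
    print $line;                           # or print to an output filehandle
}
close $reader;
waitpid($_, 0) for @pids;
```

On POSIX systems, writes to a pipe smaller than PIPE_BUF (at least 512 bytes, typically 4096) are atomic, so short result lines from different children won't interleave mid-line.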

Re: Forking Multiple Regex's on a Single String
by tilly (Archbishop) on Aug 19, 2006 at 21:58 UTC
    First of all if all children are writing to the same filehandle, you will get messy data.

    But more importantly than that, you're forking at the wrong level of granularity. Forking involves overhead, so you want multiple long-lived copies of your program each doing a substantial amount of work. Furthermore, it is not obvious to me whether your code will be CPU-bound or I/O-bound, but forking can work to your benefit either way.

    If it takes a noticeable amount of time to process a file, then my suggestion would be to use something like Parallel::ForkManager to fork off one job per file, and to keep a fixed number of jobs going at once. Have each output file written to another directory, and then reassemble the output into a single file later.
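If Parallel::ForkManager isn't installed, the pattern it wraps (a fixed number of jobs, one fork per input file, each job writing its own output file) can be sketched in core Perl. The file names and the worker body here are placeholders, not from the thread:

```perl
#!/usr/bin/perl
# Core-Perl sketch of the fork-per-file pattern: at most $max_jobs
# children at once, each writing its own output file for later merging.
use strict;
use warnings;

my @files    = ('chr1.fa', 'chr2.fa', 'chr3.fa');   # hypothetical inputs
my $max_jobs = 2;
my %running;                                        # pid => file

sub process_file {
    my ($file) = @_;
    open my $out, '>', "$file.results" or die "$file.results: $!";
    print {$out} "results for $file\n";             # real searches go here
    close $out;
}

for my $file (@files) {
    while ( keys(%running) >= $max_jobs ) {         # block until a slot frees
        my $done = waitpid(-1, 0);
        delete $running{$done};
    }
    my $pid = fork();
    die "fork: $!" unless defined $pid;
    if ( $pid == 0 ) { process_file($file); exit 0 }
    $running{$pid} = $file;
}
while ( keys %running ) {                           # reap the stragglers
    my $done = waitpid(-1, 0);
    delete $running{$done};
}
# ...then reassemble the *.results files into a single output file
```

Parallel::ForkManager hides exactly this bookkeeping behind its start/finish calls, and adds niceties such as callbacks when a child exits.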

    If individual files are virtually instantaneous to process but you have a lot of files, then you'll want to divide the list of files between copies of the program.

    All of this advice, of course, is predicated on the assumption that you are using an OS where forking does something useful for you. Which means that I hope you're using something other than Windows.

      Parallel::ForkManager is available through ppm, so someone considers it sufficiently useful under Windows to have gone to the effort of adding it to an ActiveState repository.


      DWIM is Perl's answer to Gödel
        My understanding is that it kind of works, but fork is very heavyweight on Windows. It is also implemented using threading, which may reduce how readily the scheduler will spread it across multiple CPUs. Therefore I wouldn't expect the same performance benefits from fork under Windows that I would on Unix, Linux, or OS X.

        Then again, as always, this is dated information from someone who doesn't use Windows. This may have changed, improved, etc. But if it has, I am not aware of it.

Re: Forking Multiple Regexes on a Single String
by chromatic (Archbishop) on Aug 19, 2006 at 23:04 UTC
    And because the file output will almost certainly be the limiting step, am I being silly in parallelizing this in the first place?

    No one can give you a good answer without doing a really good benchmark, but there are two things you ought to consider.

    • You're doing IO. That's a lot slower than performing quite a few calculations. In other words, you have to be able to save an impressively large number of computations to make up for performing IO.
    • If you have one process aggregating all of the results of multiple other processes, you have a point of synchronization and that implies some sort of ordering and slowdown. This may eat up a lot of the other benefit of parallelization.

    As far as possible, routines that benefit from parallelization should be completely independent. If you really need parallel processing here, have each process write to its own file, then merge them (if necessary).
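The final merge step mentioned above is straightforward; here is a small sketch, where the helper name and the per-process file naming scheme are invented for illustration:

```perl
#!/usr/bin/perl
# Merge per-process result files into a single output file,
# in a deterministic (sorted) order.
use strict;
use warnings;

sub merge_parts {
    my ($parts, $outfile) = @_;
    open my $out, '>', $outfile or die "$outfile: $!";
    for my $part (sort @$parts) {
        open my $in, '<', $part or die "$part: $!";
        print {$out} $_ while <$in>;
        close $in;
    }
    close $out;
}

# e.g. merge_parts([glob 'chr*.results'], 'all.results');
```

Sorting the part names makes the merged output reproducible regardless of the order in which the worker processes finished.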

Re: Forking Multiple Regex's on a Single String (use threads)
by BrowserUk (Pope) on Aug 19, 2006 at 23:26 UTC

    Instead of speculating about what might work, here's a threaded solution that does.

    I've C&P'd most of the code from your OP, but I've made various changes to adapt it to my environment. I only had one chromosome file kicking around ( Chr_22_14-09-2001 Release 3, [nucleotides 13100001-47848585] ), so I used that multiple times. And for the motifs I just clipped out a few short (80-character) sequences at random.

    The output goes to STDOUT. You could redirect this to a file on the command line, or open STDOUT to a file before spawning the threads. On my system, which has only one CPU, there is no conflict when writing to a file from multiple threads. Most threaded C runtimes will serialise this for you, but if yours does not, it would only require the addition of one shared variable and two locks to do so.

    The code creates a separate thread for each chromosome file in the list and passes the chromosome name as a parameter to each thread. Depending upon the number of chromosome files you are processing, it might be better to limit the number of concurrent threads, but given that each thread is also doing IO to the same file, it might be OK as-is for up to 2 or 3 times as many threads as you have CPUs. It will require a substantial amount of memory as written if the chromosome files are large, but with better knowledge of the task, the code can easily be adapted.

    The motifs file is read and loaded into a shared hash and re-used by all the threads.

    #! perl -slw
    use strict;
    use Carp;
    use threads;
    use threads::shared;

    my $chr_file = $ARGV[0];
    my $seq_file = $ARGV[1];

    if ( 2 != scalar(@ARGV) ) {
        croak 'Invalid parameter number';
    }
    elsif ( ! -e $chr_file || ! -T $chr_file ) {
        croak 'Missing or invalid chromosome-listing file';
    }
    elsif ( ! -e $seq_file || ! -T $seq_file ) {
        croak 'Missing or invalid sequence-listing file';
    }

    my %motifs : shared;

    sub thread {
        my $chromosome = shift;
        my $directory  = 'fasta/';
        my $file       = 'chr_'.$chromosome.'.fa';
        my $path       = $directory.$file;

    #    my $seqio = Bio::SeqIO->new(
    #        -file   => "<$path",
    #        -format => 'largefasta'
    #    );
    #    my $seq      = $seqio->next_seq();
    #    my $sequence = $seq->seq();

        ## Crude fasta load--expects 1 sequence per file
        open my $fh, '<', $path or croak "$path : $!\n";
        <$fh>;    ## discard header
        ( my $sequence = do{ local $/; <$fh> } ) =~ s[\s+][]g;
        close $fh;

        foreach my $motif ( keys(%motifs) ) {
            my $str = $motifs{$motif};
            my $len = length($str);
            my $pos = 0;
            while ( ($pos = index( $sequence, $str, $pos )) >= 0 ) {
                print join "\t", $chromosome, $pos, $motif;
                $pos += $len;
            }
        }
    }

    open my $fh_seq, '<', $seq_file
        or croak "Unable to open motif file: $seq_file";
    while (<$fh_seq>) {
        s/^\s+//; s/\s+$//;
        my @row = split("\t");
        next if 2 != scalar @row;
        $motifs{ $row[0] } = $row[1];
    }
    close($fh_seq);

    my @chromosomes;
    open my $fh_chr, '<', $chr_file
        or croak "Unable to open chromosome list: $chr_file";
    while (<$fh_chr>) {
        s/^\s+//; s/\s+$//;
        my $row = $_;
        next if !$row;
        push @chromosomes, $row;
    }
    close($fh_chr);

    my @threads = map{ threads->new( \&thread, $_ ) } @chromosomes;
    $_->join for @threads;

    __END__
    C:\test>568393 fasta\chromosomes.lst fasta\motifs.lst
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6666540   6
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   6540      3
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   24666540  9
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   12666540  7
    22_14-09-2001   540       2
    22_14-09-2001   24666540  9
    22_14-09-2001   540       2
    22_14-09-2001   24666540  9
    22_14-09-2001   24666540  9
    22_14-09-2001   24666540  9
    22_14-09-2001   540       2
    22_14-09-2001   540       2
    22_14-09-2001   24666540  9
    22_14-09-2001   540       2
    22_14-09-2001   24666540  9
    22_14-09-2001   24666540  9
    22_14-09-2001   540       2
    22_14-09-2001   24666540  9
    22_14-09-2001   24666540  9
    22_14-09-2001   540       2

    It's easier to respond to questions, if there are any, than to try to guess at and comment on what you might not understand.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      Wow ++BrowserUK, this is so cool. I've never done threaded perl before and in fact had to build a threaded perl in my test environment to work with this. I think after some reflection I get how this works, but there are two things I'm not getting.

      The first (and biggest) is: how is the printing working? I don't see anything to ensure that writes from two different threads don't "collide" and corrupt the output file. Is that somehow taken care of automatically?

      The second (and minor) thing is how to limit the number of concurrent threads to, say, $limit. Would something simple like this be appropriate? I just keep a counter of the number of active threads. I realize I could probably do this more cleverly by storing the threads in a hash keyed by chromosome or something like that; I'm just checking whether the general approach (run, check for joins, add when a join completes) is appropriate.

      Update: the code I had before was fatally flawed in so many ways I'm embarrassed. Here's a corrected version (with the original tarnished one at the bottom)

      my @waiting = @chromosomes;
      my %threads;
      my $max_threads = 10;

      while ( scalar(@waiting) || scalar( keys(%threads) ) ) {
          if ( scalar( keys(%threads) ) < $max_threads ) {
              my $chr = pop(@waiting);
              $threads{$chr} = threads->new(\&threading_function, $chr);
          }
          else {
              foreach my $chr ( keys(%threads) ) {
                  if ( $threads{$chr}->is_joinable() ) {
                      $threads{$chr}->join();
                      delete $threads{$chr};
                  }
              }
          }
      }



      my $limit = 10;
      my $processes = 0;
      my $processed = scalar(@chromosomes);
      my @threads;

      while ($processed) {
          if ( scalar(@threads) < $limit ) {
              push @threads, threads->new( \&thread, $_ );
              ++$processes;
          }
          else {
              foreach my $thread (@threads) {
                  if ( $thread->is_joinable() ) {
                      $thread->join();
                      --$processes;
                  }
              }
          }
      }

      Many thanks again -- you've opened up my eyes/mind to threaded programming!

        As I mentioned above, if the threaded C-runtime your copy of perl is built against is any good, it will take care of serialising access to global resources (like STDOUT) for you. However, if in practice you find that the output from the threads is getting intermingled, then the following two additional lines to my code above should prevent that (with the caveat that I don't have a multi-cpu machine to test this on):

        ...
        my %motifs : shared;
        my $semaphore : shared;    ################# Added.
        ...
        while ( ($pos = index( $sequence, $str, $pos )) >= 0 ) {
            lock $semaphore;       ################# Added.
            print join "\t", $chromosome, $pos, $motif;
            $pos += $len;
        }

        There are several ways to approach limiting the number of concurrent threads. I would normally advocate using a pool of threads, but given the size of the data sets being loaded in each thread, discarding them after each use and spawning a new one is probably the safest way of ensuring most of the memory is returned to the heap for re-use.

        Your revised code looks good, but I would suggest a couple of changes.

        1. Continue to loop and join joinable threads even after there are no more files to process.
        2. Check for joinable threads even if it isn't time to start a new one yet.
        3. Insert a sleep into the loop to prevent this thread from running away with the cpu.

        Something like this (untested) code:

        my @waiting = @chromosomes;
        my %threads;
        my $max_threads = 10;

        ## Continue to join threads as they become joinable
        ## whilst there are still threads running
        while( @waiting or threads->list( threads::running ) ) {
            if( keys %threads < $max_threads ) {
                my $chr = pop @waiting;
                $threads{ $chr } = threads->new( \&threading_function, $chr );
            }

            ## Do this regardless of whether it's time to start a new thread.
            foreach my $chr ( keys %threads ) {
                if( $threads{ $chr }->is_joinable ) {
                    $threads{ $chr }->join;
                    delete $threads{ $chr };
                }
            }

            sleep 1;    ## Prevent this thread consuming all spare cpu
        }

        Don't get hung up on my changes to your formatting. When you're used to seeing things a certain way, it's easier to follow the logic when the code looks that way. I'm not advocating my preferences here.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
