To expand here (as I was about to post much of BrowserUk's point 1), I expect you'll get much better performance if you swap to slurp mode, specifically because you won't interleave reads. It also means that whichever thread reads first will almost certainly finish its processing before thread 2 finishes its read, so your wall time will be closer to n reads plus one processing pass plus one hash transfer. Perhaps something like (untested):
    sub parseLines {
        my $content = do {
            open my $in, '<', $_[0] or die "Open failed $_[0]: $!";
            local $/;
            <$in>;
        };
        my %hash;
        for (split /(?<=\n)/, $content) {
            next unless /^\@HWI/;
            my ($header) = split(/ /, $_);
            $hash{$header} = 1;
        }
        return \%hash;
    }
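For context, the calling side could look roughly like this (a sketch, untested; using @ARGV for the file names and one worker thread per file are my assumptions, not taken from your code):

    use strict;
    use warnings;
    use threads;

    my @files = @ARGV;    # assumed: one path per input file

    # One thread per file: each slurps and parses its file independently
    my @workers = map { threads->create( \&parseLines, $_ ) } @files;

    # join() copies each thread's returned hash ref back to the parent;
    # that copy is the single hash transfer mentioned above
    my @hashes = map { $_->join } @workers;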
Note the indirect filehandle (it closes automatically when it goes out of scope) and the failure test on the open. If your file is very large, I believe (I could be wrong) that the for(split) construct will be abusive of memory, since split builds the complete list of lines before the loop starts. In that case, you could craft it as a streaming parser instead:
    while ($content =~ /(.*\n?)/g) {
        my $line = $1;
        next unless $line =~ /^\@HWI/;
        my ($header) = split(/ /, $line);
        $hash{$header} = 1;
    }
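Another option, arguably easier to read: Perl can open a filehandle on a reference to a scalar, so you can keep an ordinary line-by-line loop over the slurped content without going back to disk (again untested, and assuming the same $content and %hash as above):

    open my $mem, '<', \$content
        or die "Can't open in-memory handle: $!";
    while (my $line = <$mem>) {
        next unless $line =~ /^\@HWI/;
        my ($header) = split(/ /, $line);
        $hash{$header} = 1;
    }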
It's plausible that slurp mode alone won't be enough to force sequential reads; in that case it might make sense to add a shared lock around the read, so only one thread touches the disk at a time, along the lines of the sketch below.
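A minimal sketch of that with threads::shared, wrapping only the slurp portion of parseLines above (untested; the $disk_lock name is made up for illustration):

    use threads::shared;

    my $disk_lock :shared;    # hypothetical lock variable visible to all threads

    sub parseLines {
        my $content;
        {
            lock($disk_lock);    # held until the end of this block
            open my $in, '<', $_[0] or die "Open failed $_[0]: $!";
            local $/;
            $content = <$in>;
        }
        # ... parse $content into %hash as above ...
    }

Because lock() is block-scoped, the lock is released as soon as the slurp finishes, so the parsing itself still runs in parallel.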

First ask yourself "How would I do this without a computer?" Then have the computer do it the same way.

