in reply to Re: Using threads to process multiple files
in thread Using threads to process multiple files

Hi, thanks for the reply. I was thinking along these lines as well. I'm grateful, but I think your Inline::C solution might be over my head, so I will look for some other way of speeding it up. The hard drive I/O is certainly an issue, but it shouldn't make the script slower; at worst it should perform just as well. But, as you say, the copying of the hash is probably the issue.
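On the hash-copying point: with Perl's ithreads, every lexical visible at `threads->create` time is cloned into the new thread, so a large hash gets duplicated once per worker. A minimal sketch of the usual workaround, `threads::shared`, is below; the `%lookup` hash and its keys are made up for illustration, and note the trade-off that access to shared data is serialized internally, so very hot per-element access can itself become a bottleneck:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

# Hypothetical large lookup table. Without :shared, each thread spawned
# below would receive its own full clone of this hash; with :shared,
# a single copy is referenced by all threads.
my %lookup :shared = ( foo => 1, bar => 2 );

my @workers = map {
    threads->create( sub {
        my $key = shift;
        # Reads one shared entry; no per-thread clone of the whole hash.
        return $lookup{$key} // 0;
    }, $_ );
} qw( foo bar baz );

print $_->join, "\n" for @workers;   # prints 1, 2, 0 (one per line)
```

The important detail is to mark the hash `:shared` *before* spawning any threads; anything created after a thread starts is invisible to it.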

Replies are listed 'Best First'.
Re^3: Using threads to process multiple files
by BrowserUk (Pope) on Feb 02, 2015 at 16:08 UTC
    The harddrive I/O for sure is an issue, but it shouldn't cause the script to be slower. If anything, at least it should be able to perform as well.

    Sorry, but that simply is not the case.

    The program below reads two large files:

    02/02/2015  15:42    10,737,418,241 big.csv
    02/02/2015  15:47    12,300,000,001 big.tsv

    First concurrently and then sequentially. The timings after the __END__ token show that reading them concurrently takes 5 times longer than reading them sequentially.

    #! perl -slw
    use strict;
    use threads;
    use Time::HiRes qw[ sleep time ];

    sub worker {
        my( $file, $start ) = @_;
        open my $in, '<', $file or die $!;
        sleep 0.0001 while time() < $start;
        my $count = 0;
        ++$count while <$in>;
        my $stop = time;
        return sprintf "$file:[%u] %.9f", $count, $stop - $start;
    }

    my $start = time + 1;
    my @workers = map threads->create( \&worker, $_, $start ), @ARGV;
    print $_->join for @workers;

    for my $file (@ARGV) {
        open my $in, '<', $file or die $!;
        my( $start, $count ) = ( time(), 0 );
        ++$count while <$in>;
        printf "$file:[%u] %.9f\n", $count, time()-$start;
    }

    __END__
    [15:49:22.32] E:\test>c:piotest.pl big.csv big.tsv
    big.csv:[167772161] 407.047676086
    big.tsv:[100000001] 417.717574120
    big.csv:[167772161] 82.103285074
    big.tsv:[100000001] 81.984734058

    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
Re^3: Using threads to process multiple files
by BrowserUk (Pope) on Feb 02, 2015 at 19:59 UTC

    Oh. And the final piece of the discussion about IO.

    I moved one of the two big files over to my old, slower, fragmented drive and re-ran the same test. Now, even though reading from the older drive is slower, the concurrent and sequential runs take almost identical times, despite both drives hanging off the same interface:

    [15:49:22.32] E:\test>c:piotest.pl big.csv big.tsv
    big.csv:[167772161] 407.047676086
    big.tsv:[100000001] 417.717574120
    big.csv:[167772161] 82.103285074
    big.tsv:[100000001] 81.984734058

    [16:31:59.04] E:\test>c:piotest.pl c:big.csv big.tsv
    c:big.csv:[167772161] 138.239881039
    big.tsv:[100000001] 85.378695965
    c:big.csv:[167772161] 141.292586088
    big.tsv:[100000001] 83.687027931
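    The practical upshot of the two-drive result is a scheduling rule: read concurrently *across* physical drives, but sequentially *within* each drive, so a single head never thrashes between two files. A rough sketch, assuming Windows-style drive-letter paths (the `drive_of` heuristic and the line-counting payload are both made up for illustration):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;

    # Crude drive detector: leading "X:" on Windows, else one default bucket.
    # This heuristic is an assumption; adjust for your platform/layout.
    sub drive_of {
        my $path = shift;
        return $path =~ /^([A-Za-z]):/ ? uc $1 : 'DEFAULT';
    }

    # Payload: process a list of files strictly one after another.
    sub count_lines {
        my @files = @_;
        my $total = 0;
        for my $file (@files) {
            open my $in, '<', $file or die "$file: $!";
            ++$total while <$in>;
        }
        return $total;
    }

    # Group the arguments by drive, then run one worker per drive.
    my %by_drive;
    push @{ $by_drive{ drive_of($_) } }, $_ for @ARGV;

    my @workers = map threads->create( \&count_lines, @{ $by_drive{$_} } ),
                  sort keys %by_drive;
    print $_->join, "\n" for @workers;
    ```

    With `c:big.csv` and `e:big.tsv` this spawns two threads, one per drive, which matches the timing pattern above; with both files on one drive it degenerates to a single sequential reader, which is exactly what you want.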
