I would be hard-pressed to be convinced that “adding threads to” such an algorithm that is right now “working fine(!)” will actually make it either more reliable, or faster.
Perhaps if you relied less on your apparently limited imagination and tried using the stuff you feel obliged to talk about, you might talk less crap.
Threading always divides time, while adding considerable complication.
What!? What on earth are you taking when you dream this stuff up? I mean really, man, did you do no simple math at school:
- One man digs a ditch and it takes 8 hours. Two men dig another identical ditch and it takes 4.
... or were you simply too busy bullshitting about how you'd seen and done it all in pre-school, to take any notice.
So, all kinds of Danger, Will Robinson! alarm bells are going off in my head.
Geez! You're like a kid that was told masturbation would make him go blind and has refused to wash his winkle ever since.
A simple example of threading at work:
Fetching 100 urls serially: #! perl -slw
use strict;
use LWP::Simple;
my $start = time;
while( <> ) {
chomp;
eval {
local $SIG{ ALRM } = sub { die 'timeout' };
alarm 10;
my($content_type, $document_length, $modified_time, $expires,
+$server) =
head( "http://$_" ) or warn "Failed to HEAD $_: $!\n";
};
}
printf "\nRunning HEAD on $. urls serially took %d seconds\n",
time() - $start;
__END__
C:\test>junk57 urls.list.small
Failed to HEAD www.asuscom.de: Bad file descriptor
Failed to HEAD www.belkin.com: Bad file descriptor
Failed to HEAD www.logical-approach.com: Bad file descriptor
Running HEAD on 100 urls serially took 102 seconds
Fetching those same urls in parallel:

    #! perl -slw
    use strict;
    use threads;
    use LWP::Simple;

    my $start = time;
    my @threads;
    while( <> ) {
        chomp;
        push @threads, async {
            eval {
                local $SIG{ ALRM } = sub { die 'timeout' };
                alarm 10;   ## give each HEAD request at most 10 seconds
                my( $content_type, $document_length, $modified_time, $expires, $server ) =
                    head( "http://$_" ) or warn "Failed to HEAD $_: $!\n";
                alarm 0;    ## cancel the pending alarm once the request returns
            };
        };
    }
    $_->join for @threads;
    printf "\nRunning HEAD on $. urls using threading took %d seconds\n", time() - $start;
    __END__
C:\test>junk58 urls.list.small
Failed to HEAD www.asuscom.de:
Failed to HEAD www.belkin.com:
Failed to HEAD www.logical-approach.com:
Running HEAD on 100 urls using threading took 19 seconds
5 times faster -- and look at how much more complicated it is. Four whole extra lines!
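The arithmetic behind that speedup is worth spelling out: the serial run costs roughly the *sum* of the individual request latencies, while the threaded run costs roughly the *maximum* of them, plus spawn/join overhead. A minimal sketch of that effect, using sleep as a stand-in for a blocking HEAD request (the four workers and one-second delay are arbitrary illustrative numbers, not from the runs above):

```perl
#! perl -slw
use strict;
use threads;

## Stand-in for a blocking network call such as head( $url ).
sub blocking_call { sleep 1 }

## Serial: total time is the sum of the latencies (~4s here).
my $start = time;
blocking_call() for 1 .. 4;
my $serial = time - $start;

## Threaded: the waits overlap, so total time is ~the longest
## single latency (~1s here).
$start = time;
my @threads = map { threads->create( \&blocking_call ) } 1 .. 4;
$_->join for @threads;
my $parallel = time - $start;

print "serial: ${serial}s, threaded: ${parallel}s";
```

The same reasoning predicts the measured result: 100 requests averaging ~1 second each take ~100 seconds serially, but only about as long as the slowest request (plus overhead) when they all wait at once.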
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
I would be hard-pressed to be convinced that “adding threads to” such an algorithm that is right now “working fine(!)” will actually make it either more reliable, or faster.
In a case like this, where the code performs network IO, it's easy to convince me that having multiple requests blocked on IO simultaneously can be faster than code that has to block on each one serially.
I use parallelism all the time in similar situations to great effect.
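One refinement worth noting: spawning one thread per URL, as in the example above, stops scaling once the list runs to thousands of entries. A common pattern is a small fixed pool of workers pulling from a Thread::Queue. The worker count, the numeric work items, and the counter standing in for the real fetch below are all illustrative, not taken from the posts above:

```perl
#! perl -slw
use strict;
use threads;
use Thread::Queue;

my $q = Thread::Queue->new;

## A fixed pool of 4 workers; each pulls items until it sees undef.
my @workers = map {
    threads->create( sub {
        my $done = 0;
        while( defined( my $item = $q->dequeue ) ) {
            ## Real code would call head( "http://$item" ) here.
            ++$done;
        }
        return $done;   ## per-worker count, collected via join
    } );
} 1 .. 4;

$q->enqueue( 1 .. 100 );            ## the work items
$q->enqueue( (undef) x @workers );  ## one terminator per worker

my $total = 0;
$total += $_->join for @workers;
print "processed $total items";
```

This caps memory and thread-spawn cost at the pool size while keeping the same overlap of blocking IO.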