
TCP socket and fork

by adismaug (Acolyte)
on Jul 04, 2009 at 10:57 UTC ( #777204=perlquestion )
adismaug has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,
I am trying to write a script that will open a TCP socket, perform an HTTP GET and display the server's reply.
The script should fork, and all the children should use the same socket (using HTTP/1.1 and keep-alive). The script below works fine, but each process opens a new socket, which adds unnecessary delay (I am using Time::HiRes to calculate the total time to get 10 pages and save the content).
How can I improve the script so all the forked processes will use the same TCP socket?
Your help please…

#!/usr/bin/perl
use LWP::Simple;
use Time::HiRes qw(time);
use Parallel::ForkManager;

my $filename = "/tmp/result.log";
while ($z <= 200) {
    $z++;
    my $i   = 1;
    my $pm  = new Parallel::ForkManager(20);
    my $t0  = time;
    my $num = 10;
    unlink("$filename");
    open FH, ">:utf8", $filename;
    while ($i < $num) {
        $i++;
        $pm->start and next;
        $contents = get("$i");
        print FH "$contents\n";
        $pm->finish;
    }
    $pm->wait_all_children;
    $elapsed = time - $t0;
    print "$elapsed elapsed\n";
}

Replies are listed 'Best First'.
Re: TCP socket and fork
by afoken (Abbot) on Jul 04, 2009 at 11:18 UTC

    Why do you think that 20 threads could get HTTP contents faster over a single socket than over 20 distinct sockets? Only one thread at a time can read and write the socket, due to the way HTTP works, so 19 threads would have to wait for the first thread to finish. After that, 18 threads have to wait for the second thread. And so on, until the last thread finishes. You don't need threads for that; a simple for loop is even faster, because it does not have the thread overhead.

    You can accelerate HTTP by using the keep-alive feature, but for that, you need an agent that you don't destroy after a single request, like you do when you call the simple get() function.
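    A minimal sketch of such a persistent agent (the host and page names below are placeholders, not from the original post):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

# One agent reused for every request: keep_alive => 1 turns on a
# connection cache, so consecutive requests to the same host reuse
# the established TCP socket (HTTP/1.1 persistent connections).
my $ua = LWP::UserAgent->new(keep_alive => 1);

for my $page (1 .. 10) {
    my $res = $ua->get("http://example.com/page$page");   # placeholder URL
    print $res->is_success ? $res->decoded_content : $res->status_line;
}
```

    Because the agent object survives the whole loop, only the first request pays for a TCP handshake; the rest travel over the cached connection.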

    Update: Is this related to IO::Socket, Multiple GET?


    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
      Dear Alexander,
      Thanks for the reply.
      Of course you are right, and I did not think of the issue that the GETs will have to be performed one after the other and not in parallel.
      Time is the most important issue for me, and I am using 20 threads so the program can get all the data at the same time. I wanted to save the time it takes for the TCP handshake and use the resources already allocated by the server.
      Do you have an idea how I can accelerate the program to the minimum time possible?
      How can I perform all the GET requests at the same time?
      Thanks in advance,

        You have various options, as I explained above.

        • Use a single socket and the HTTP/1.1 keepalive feature, then play request-response ping-pong over that socket. No threads required, just use a single LWP::UserAgent instance for this. This avoids that little bit of TCP handshake, but serialises all your requests.
        • Fork as many threads or processes as you like, and let each process fetch one resource, nearly as you do now, with 20 independent instances of LWP::UserAgent behind the scenes. This costs many TCP handshakes, but allows you to saturate your network connection (or that of the server).
        • Mix both approaches. Create a controlling thread/process that forks several slaves (let's just say four), then gives each slave a new URL to fetch as soon as the slave is idle. Use keepalive in each of the slaves. This uses most of your bandwidth and avoids some TCP handshakes. Note that the number of requests processed by each slave depends entirely on how fast it can handle its job. A slave that has to fetch a gigabyte of data will probably process only one request, while other slaves that get tiny responses will process lots of requests.
        • Simplified mix: Create just a bunch of slaves (again, let's assume four slaves), each with a constant fraction of the URL list to be processed (five entries, in this example). This does not balance as well, but requires less code. If one unlucky slave has to process five gigabyte responses, while the other slaves got away with a few kilobytes, you will wait a long time for the last slave.
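        The "simplified mix" could be sketched like this, reusing Parallel::ForkManager from the original script (the URL list and counts are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use Parallel::ForkManager;

my @urls   = map { "http://example.com/page$_" } 1 .. 20;  # placeholder list
my $slaves = 4;
my $pm     = Parallel::ForkManager->new($slaves);

# Each slave takes every fourth URL and works through its share over a
# single keep-alive connection, so each pays for only one TCP handshake.
for my $n (0 .. $slaves - 1) {
    $pm->start and next;                      # parent: launch next slave
    my $ua = LWP::UserAgent->new(keep_alive => 1);
    for (my $i = $n; $i < @urls; $i += $slaves) {
        my $res = $ua->get($urls[$i]);
        warn "$urls[$i]: ", $res->status_line, "\n" unless $res->is_success;
    }
    $pm->finish;                              # slave exits
}
$pm->wait_all_children;
```

        Interleaving the list (every Nth URL) rather than handing out contiguous chunks spreads large and small responses a little more evenly, but the load-balancing caveat above still applies.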

        Why are you so worried about TCP handshakes? A TCP handshake requires three TCP packets. A simple GET request adds one more packet, and the response uses roughly one packet for the HTTP headers and then two packets for every three KBytes of data. (Assuming we are talking about Ethernet, PPP or PPPoE.) As soon as your response is larger than a few KBytes, the TCP handshake does not really matter. If you (ab)use HTTP as a way to transport tons of tiny messages in some RPC protocol, the TCP handshake really matters.
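        That packet arithmetic can be checked in a few lines (the per-packet figures are the rough estimates from this post, not measurements):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(ceil);

# Rough per-request packet count, using the estimates above:
# 3 packets for the TCP handshake, 1 for the GET request, about 1 for
# the response headers, then roughly 2 data packets per 3 KB of body.
sub request_packets {
    my ($body_kb) = @_;
    return 3 + 1 + 1 + ceil($body_kb * 2 / 3);
}

printf "1 KB body:   %d packets\n", request_packets(1);    # 6 packets: the handshake is half
printf "100 KB body: %d packets\n", request_packets(100);  # 72 packets: the handshake is ~4%
```

        So for responses of even a few tens of KBytes, the three handshake packets quickly become noise.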


        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

Node Type: perlquestion [id://777204]
Approved by AnomalousMonk