Re: Fastest way to download many web pages in one go?

by BrowserUk (Pope)
on Oct 11, 2013 at 20:43 UTC


in reply to Fastest way to download many web pages in one go?

Try something like Re: Perl crashing with Parallel::ForkManager and WWW::Mechanize. Adjust $T to be 3 or 4 times the number of cores available for best throughput.
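
Roughly, the shape being suggested is $T worker threads pulling URLs from a shared queue. A minimal, illustrative sketch follows -- it is not the code from the linked node, and LWP::Simple plus the empty worker body are stand-ins:

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;
    use LWP::Simple qw( get );

    my $T = 8;                              # try 3 or 4 times the number of cores
    my $Q = Thread::Queue->new;

    # $T workers block on the queue and fetch whatever URLs appear on it.
    my @workers = map {
        threads->create( sub {
            while( defined( my $url = $Q->dequeue ) ) {
                my $html = get( $url );
                # ... process or store $html here ...
            }
        } );
    } 1 .. $T;

    $Q->enqueue( @ARGV );                   # the URLs to fetch
    $Q->enqueue( ( undef ) x $T );          # one terminator per worker
    $_->join for @workers;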


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.


Re^2: Fastest way to download many web pages in one go?
by smls (Friar) on Oct 12, 2013 at 14:45 UTC
      Is there a significant difference (for this particular use case) between that and modules such as Thread::Queue and Parallel::ForkManager?

      This use case -- the need to accumulate information from all the downloads together in order to produce the final report -- (for me) excludes forking solutions, because of the complication of passing the extracted information back from child to parent.

      This either means:

      • effectively serialising the forks in order to retrieve it via pipes;
      • or adding the complexity of a multiplexing server to the parent process to allow unserialised (out-of-order) retrieval.

      That's more work than I wish to do, and it puts a bottleneck at the end of the parallelisation.

      Thread::Queue on its own is not a solution to parallelisation, though it can form the basis of a thread pool solution.

      My choice of a new thread per download rather than a thread pool solution is based on the fact that you need to parse the retrieved pages.

      Thread pools work best when the work being done for each item is very small -- i.e. it takes less time than spawning a new thread. Once you have to wait on the network for the fetch and then parse the retrieved data, the time to spawn a new thread becomes insignificant, so spawning a new thread for each of your 60 pages becomes cost-effective.

      The extracted data can easily be returned via the normal return statement from the threadproc and gathered in the parent via the threads::join() mechanism.

      Thus, for each page, the thread's processing is a simple, linear flow: fetch the URL; extract the information; return the extracts and end.

      For the main thread it is a simple loop over the URLs spawning threads, mediated by a single shared variable to limit resource use -- memory or bandwidth, whichever proves to be the limiting factor on your system -- and then a second loop over the thread handles, retrieving the extracted data and pulling it together into a report.

      No scope for deadlocks, livelocks or priority inversions; no need for the complexities of multiplexing servers; no need for non-blocking, asynchronous reads; no user-written buffering.

      In short: simplicity.
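
      A minimal sketch of that flow, purely for illustration -- LWP::Simple, the extract_info() title-grabber, and the $LIMIT/$running names are stand-ins, and the sleep-based wait is the crudest possible way to mediate on the shared variable:

          use strict;
          use warnings;
          use threads;
          use threads::shared;
          use LWP::Simple qw( get );

          my $LIMIT   = 10;          # tune: memory or bandwidth, whichever runs out first
          my $running :shared = 0;   # the single shared variable that throttles spawning

          # Hypothetical parser; stands in for whatever extraction the report needs.
          sub extract_info {
              my( $html ) = @_;
              return $html =~ m{<title>(.*?)</title>}si;
          }

          # One thread per page: fetch the URL; extract information; return extracts; end.
          sub fetch_page {
              my( $url ) = @_;
              my $html     = get( $url ) // '';
              my @extracts = extract_info( $html );
              { lock $running; --$running; }      # free a slot for the spawning loop
              return $url, @extracts;
          }

          my @urls = @ARGV;
          my @threads;

          # First loop: spawn a thread per URL, throttled by the shared counter.
          for my $url ( @urls ) {
              sleep 1 while $running >= $LIMIT;
              { lock $running; ++$running; }
              push @threads, threads->create( \&fetch_page, $url );
          }

          # Second loop: join each thread and pull its extracts into the report.
          for my $thr ( @threads ) {
              my( $url, @extracts ) = $thr->join;
              print "$url: @extracts\n";
          }

      Because threads->create() is called in list context here (as an argument to push), each join() hands back the full list that the threadproc returned.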


      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        An excellent and well-reasoned analysis ...   “++”
