
Re^3: Fastest way to download many web pages in one go?

by BrowserUk (Pope)
on Oct 12, 2013 at 15:41 UTC ( #1057994=note )

in reply to Re^2: Fastest way to download many web pages in one go?
in thread Fastest way to download many web pages in one go?

Is there a significant difference (for this particular usecase) between that, and modules such as Thread::Queue and Parallel::ForkManager?

This use case -- needing to accumulate information from all the downloads together in order to produce the final report -- (for me) excludes forking solutions, because of the complication of passing the extracted information back from child to parent.

This either means:

  • effectively serialising the forks in order to retrieve it via pipes;
  • or adding the complexity of a multiplexing server to the parent process to allow deserialised retrieval.

That's more work than I wish to do, and it puts a bottleneck at the end of the parallelisation.

Thread::Queue on its own is not a solution to parallelisation, though it can form the basis of a thread pool solution.
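For reference, a minimal thread-pool skeleton built on Thread::Queue might look like the following sketch. The worker count, the job payloads, and the squaring stand in for real work; note that $queue->end() needs Thread::Queue 3.01 or later:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $jobs    = Thread::Queue->new;
my $results = Thread::Queue->new;

# A fixed pool of workers, each pulling jobs until the queue is ended.
my @pool = map {
    threads->create( sub {
        while( defined( my $job = $jobs->dequeue ) ) {
            $results->enqueue( $job * $job );   # stand-in for real work
        }
    } );
} 1 .. 4;

$jobs->enqueue( $_ ) for 1 .. 10;   # queue the work items
$jobs->end;                          # workers' dequeue now returns undef
$_->join for @pool;

my $sum = 0;
while( defined( my $n = $results->dequeue_nb ) ) { $sum += $n }
print "$sum\n";    # sum of the squares 1..10: 385
```

The pool shape is the point here: the queue decouples producers from consumers, which is exactly the machinery you do not need when one thread per item is cheap enough.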

My choice of a new thread per download rather than a thread pool solution is based on the fact that you need to parse the retrieved pages.

Thread pools work best when the work done for each item is very small -- ie. takes less time than spawning a new thread. Once you have to wait network round-trip times for the fetch and then parse the retrieved data, the time to spawn a new thread becomes insignificant, so spawning a new thread for each of your 60 pages is cost effective.

The extracted data can easily be returned via the normal return statement from the threadproc, and gathered in the parent via the thread's join() method.
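That mechanism in miniature -- the fetch and parse replaced by a placeholder string so it runs anywhere (the urls and the "data-from-" extracts are purely illustrative):

```perl
use strict;
use warnings;
use threads;

# Each thread returns its extracted data; join() in the parent collects it.
my @threads = map {
    my $url = $_;
    threads->create( sub {
        # my $page = get( $url );   # placeholder for the real fetch, e.g. LWP::Simple
        # ... parse $page and build the list of extracts ...
        my @extracts = ( "data-from-$url" );  # stand-in for real extraction
        return @extracts;
    } );
} qw( url1 url2 url3 );

my @report = map { $_->join } @threads;   # gather results in spawn order
print "@report\n";
```

Because threads->create() is called in list context here, each join() returns the thread's full return list, so no shared state is needed to get the data back.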

Thus for each page the thread processing is a simple, linear flow of: fetch url; extract information; return extracts and end.

For the main thread it is a simple loop over the urls spawning threads; mediated by a single shared variable to limit resource -- memory or bandwidth; whichever proves to be the limiting factor on your system -- and then a second loop over the thread handles retrieving the extracted data and pulling it together into a report.
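Pulled together, the whole scheme might be sketched as below. The fetch/parse step is a placeholder, the urls are invented, and $running is the single shared throttle variable; the cap of 8 is illustrative:

```perl
use strict;
use warnings;
use threads;
use threads::shared;
use Time::HiRes 'usleep';

my $LIMIT = 8;                  # illustrative cap on concurrent downloads
my $running :shared = 0;

sub fetch_and_extract {
    my( $url ) = @_;
    # my $page = get( $url );   # placeholder for the real fetch, e.g. LWP::Simple
    # ... extract the wanted information from $page ...
    my @extracts = ( "extract-of-$url" );
    { lock $running; --$running; }   # free a slot as this thread finishes
    return @extracts;
}

my @urls = map "http://example.com/page$_", 1 .. 20;

# First loop: spawn one thread per url, throttled by the shared counter.
my @threads;
for my $url ( @urls ) {
    usleep 1000 while $running >= $LIMIT;   # wait for a free slot
    { lock $running; ++$running; }
    push @threads, threads->create( \&fetch_and_extract, $url );
}

# Second loop: gather every thread's extracts into the final report.
my @report = map { $_->join } @threads;
printf "collected %d extracts\n", scalar @report;
```

The shared counter is the only synchronisation in the whole program, which is what keeps the design free of the deadlock and multiplexing concerns described below.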

No scope for deadlocks, livelocks or priority inversions; no need for the complexities of multiplexing servers; no need for non-blocking, asynchronous reads; no user-written buffering.

In short: simplicity.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^4: Fastest way to download many web pages in one go?
by sundialsvc4 (Abbot) on Oct 15, 2013 at 03:24 UTC

    An excellent and well-reasoned analysis ...   “++”
