http://www.perlmonks.org?node_id=436190


in reply to Re^2: What is the fastest way to download a bunch of web pages?
in thread What is the fastest way to download a bunch of web pages?

If you want solid advice based on just a few raw specs, hire a consultant. There are many consultants who want to make a quick buck by giving advice based on just the numbers. You're mistaken if you think there's a table that says that for those specs, this and that is the best algorithm.

As for why disk I/O matters: well, I'm assuming you want to store your results, and you're downloading a significant amount of data, enough that you can't keep it all in memory. So you have to write to disk, which means the disk is a potential bottleneck (if all the servers you download from are on your local LAN, you could easily get more data per second over the network than your disk can write - depending of course on the disk(s) and the network).

Of course, if all you care about is downloading a handful of pages, each from a different server, in a reasonably short time, perhaps something as simple as:

system "wget $_ &" for @urls;
will be good enough. But that doesn't work well if you need to download 10,000 documents, all from the same server.
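
If you do need to drive the downloads from Perl with a cap on how many run at once, a rough sketch along the following lines is one option. It assumes Parallel::ForkManager and LWP::UserAgent are installed; the worker count, output directory, and file names are made up for illustration:

use strict;
use warnings;
use LWP::UserAgent;
use Parallel::ForkManager;

my @urls   = @ARGV;                  # URLs to fetch, passed on the command line
my $outdir = 'downloads';            # hypothetical output directory
mkdir $outdir unless -d $outdir;

# Cap concurrency so one server isn't hit with thousands of simultaneous requests.
my $pm = Parallel::ForkManager->new(5);
my $ua = LWP::UserAgent->new( timeout => 30 );

for my $i ( 0 .. $#urls ) {
    $pm->start and next;             # parent continues the loop; child runs below
    my $res = $ua->get( $urls[$i] );
    if ( $res->is_success ) {
        open my $fh, '>', "$outdir/page_$i.html"
            or die "can't write $outdir/page_$i.html: $!";
        print {$fh} $res->decoded_content;
        close $fh;
    }
    else {
        warn "$urls[$i]: ", $res->status_line, "\n";
    }
    $pm->finish;                     # child exits here
}
$pm->wait_all_children;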

Re^4: What is the fastest way to download a bunch of web pages?
by tphyahoo (Vicar) on Mar 03, 2005 at 13:15 UTC
    I had monkeyed with wget, but wget doesn't handle cookies, post requests, and redirects nearly as well as LWP, hence I'm doing it this way. My real program does more complicated stuff than I had presented in this post, but I just wanted to keep things simple.
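
    For what it's worth, the pieces involved are all standard LWP: a cookie jar, POST requests, and redirect handling. A minimal sketch (the URL and form fields are placeholders, not my real program) might look like:

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Cookies;

    # Persist cookies to a file so the session survives between runs.
    my $ua = LWP::UserAgent->new(
        cookie_jar => HTTP::Cookies->new( file => 'cookies.txt', autosave => 1 ),
        timeout    => 30,
    );

    # LWP follows redirects for GET/HEAD by default; allow them after a POST too.
    push @{ $ua->requests_redirectable }, 'POST';

    # Hypothetical form post.
    my $res = $ua->post( 'http://example.com/search', { query => 'perl', page => 1 } );
    die $res->status_line unless $res->is_success;
    print $res->decoded_content;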

    With regards to the bottleneck, I don't think this will be a problem. I'm not writing a web crawler, this is just something to automate a bunch of form post requests, and massage the data I get back. But that doesn't matter for getting the threading part right.

    I will eventually be storing stuff in mysql, but this is a future PM question....

      My real program does more complicated stuff than I had presented in this post, but I just wanted to keep things simple.
      But your question is far, far from simple. You ask an extremely broad question (in the sense that there are a lot of factors that play a role) whose answer isn't going to be purely Perl. Simplification only leads to suggestions (like 'wget') that aren't going to work for you. Note that wget has options to work with cookies - including saving/restoring them to/from file.
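
      For example, wget's --save-cookies, --load-cookies, --keep-session-cookies and --post-data options cover the cookie and POST cases, even when driven from Perl (the login URL and form fields below are just placeholders):

      # Log in once, saving the session cookies...
      system 'wget', '--save-cookies=cookies.txt', '--keep-session-cookies',
                     '--post-data=user=me&pass=secret',
                     '-O', 'login.html', 'http://example.com/login';

      # ... then reuse them for later requests.
      system 'wget', '--load-cookies=cookies.txt',
                     '-O', 'data.html', 'http://example.com/members/data';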
      I'm not writing a web crawler, this is just something to automate a bunch of form post requests, and massage the data I get back. But that doesn't matter for getting the threading part right.

      I will eventually be storing stuff in mysql, but this is a future PM question....

      So it seems like speed isn't going to be that important. Why aren't you first focussing on getting the functionality working, and then worrying about speed? Perhaps by the time it's finished, the speed issue will have resolved itself (for instance, because it's already fast enough, or the database turns out to be the bottleneck, or the retrieval is being done in the background).