PerlMonks
Re^2: Getting/handling big files w/ perl
by BrowserUk (Patriarch) on Nov 17, 2014 at 11:48 UTC ( [id://1107391] )
From your description, I doubt that the process could be tremendously improved as it stands: the process is I/O-bound, and the I/O capabilities of the machine are lackluster. The process could realistically (and perhaps significantly) be improved by re-defining it and then, as others have suggested, "throwing silicon at" the re-defined process.

Now to debunk Yet Another of your Inglorious Theories. The following shows a Perl program downloading an 11MB file using 1, 2, 4, 8, 16 & 32 concurrent streams, on my 4-core CPU, across my relatively tardy 20Mb/s connection:
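The program itself is not reproduced in this copy of the post. A minimal sketch of the technique, multiple concurrent streams fetching disjoint byte ranges of one file, assuming a threads-enabled perl, LWP::UserAgent, a server that honours HTTP Range requests, and a hypothetical URL standing in for the redacted one:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;

# Hypothetical URL; the server name in the original post was redacted.
my $URL = 'http://example.com/11MB.bin';

# Split a byte count into $n contiguous [ start, end ] ranges
# covering 0 .. $size-1, the last range absorbing any remainder.
sub byte_ranges {
    my( $size, $n ) = @_;
    my $chunk = int( $size / $n );
    my @ranges;
    for my $i ( 0 .. $n - 1 ) {
        my $start = $i * $chunk;
        my $end   = $i == $n - 1 ? $size - 1 : $start + $chunk - 1;
        push @ranges, [ $start, $end ];
    }
    return @ranges;
}

# Fetch one byte range via an HTTP Range request.
sub fetch_range {
    my( $url, $start, $end ) = @_;
    require LWP::UserAgent;
    my $ua  = LWP::UserAgent->new;
    my $res = $ua->get( $url, Range => "bytes=$start-$end" );
    die $res->status_line unless $res->is_success;
    return $res->content;
}

# Spawn one thread per range; join in order and stitch the parts.
sub parallel_get {
    my( $url, $size, $n ) = @_;
    my @thr = map {
        threads->create( \&fetch_range, $url, @$_ )
    } byte_ranges( $size, $n );
    return join '', map $_->join, @thr;
}

# Usage (hypothetical size): my $data = parallel_get( $URL, 11_534_336, 16 );
```

This is a sketch, not BrowserUk's posted code: his program may have differed in how it scheduled or reassembled the streams.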
As the posted timings show, you get diminishing returns from the concurrency, but over-provisioning the 4-core CPU to manage 16 concurrent I/O-bound threads yields the best throughput. And how does that compare with wget and a single stream on the same connection and processor?
It beats it hands down!

**** Server name redacted to discourage the world+dog from hitting them by way of comparison.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use every day'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
In Section: Seekers of Perl Wisdom