|Just another Perl shrine|
Not really, from a memory standpoint.
You could do much better with a standard loop that reads to a small buffer and writes to the output file in a loop.
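The small-buffer approach being suggested might look something like this sketch (filenames, buffer size, and the sample input are all assumptions for illustration; `tr///` is safe per-chunk because it works byte-by-byte, but an `s///` whose pattern could straddle a chunk boundary would not be):

```perl
use strict;
use warnings;

# Hypothetical filenames and sample input, for illustration only.
my ( $src, $dst ) = ( 'in.dat', 'out.dat' );
open my $fh, '>:raw', $src or die "create $src: $!";
print {$fh} "some sample text\n" x 1000;
close $fh;

open my $in,  '<:raw', $src or die "open $src: $!";
open my $out, '>:raw', $dst or die "open $dst: $!";
my $buf;
while ( read( $in, $buf, 64 * 1024 ) ) {    # small (64 KB) buffer
    $buf =~ tr/a-z/A-Z/;                    # transform each chunk in turn
    print {$out} $buf or die "write: $!";
}
close $in;
close $out or die "close $dst: $!";
```

Each iteration issues one read against the source and one write against the destination, which is exactly the alternating access pattern discussed below.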
Actually no. That forces the OS to keep the disk head moving back and forth between source(*) and destination.
One read & one write will always beat 125 iddy biddy reads and 125 iddy biddy writes, with a seek across the disk between each, hands down. (Not to mention 125 invocations of s/// or tr/// instead of one.)
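The one-read/one-write version reads as a sketch like this (again with hypothetical filenames and sample input; slurping relies on localising `$/`, the input record separator):

```perl
use strict;
use warnings;

# Hypothetical filenames and sample input, for illustration only.
my ( $src, $dst ) = ( 'in.dat', 'out.dat' );
open my $fh, '>:raw', $src or die "create $src: $!";
print {$fh} "some sample text\n" x 1000;
close $fh;

open my $in, '<:raw', $src or die "open $src: $!";
my $data = do { local $/; <$in> };    # one read: slurp the whole file
close $in;

$data =~ tr/a-z/A-Z/;                 # one pass over the whole buffer

open my $out, '>:raw', $dst or die "open $dst: $!";
print {$out} $data or die "write: $!";    # one write
close $out or die "close $dst: $!";
```

The trade-off is memory: the whole file lives in `$data` at once, which is the point taken up below.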
(Not to mention that you seem to have really fast disks (SSDs?).
Not yet :) I'm waiting for a PCIe flash card that presents itself as additional (slow) RAM at a reasonable price.
Haven't met an HDD yet that could read faster than 150 MB/s or write faster than 100 MB/s.)
As moritz points out: file system caching.
The timings posted were not the first runs; but the same caching benefited all three versions.
Not really, from a memory standpoint... Still a helluva lot better than the OS swapping you out because it can't fit the 500 MB into memory.
My last memory purchase:
(*Even if the input is cached from a previous read of the file, writing to disk before the entire input has been read is quite likely to cause some or all of the input file to be discarded before it has been read, to accommodate the output.)
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.