... thus confirming that this is indeed an I/O-bound process, one that benefits quite substantially from parallelism: by a factor of at least four. As would be expected.
And, meaning to take absolutely no thunder away from this most excellent example, I would like to mention that I've seen quite a few shop-written scripts (at various past-life engagements) which routinely took two command-line parameters, -s start_percent and -e end_percent, for exactly this purpose.
Each of the scripts that supported these two parameters started by determining the size of the file and, from that, the byte positions corresponding to the two percentages. It then did a random-access seek to the starting position, and (if that position was not zero) read one line of text, presumed to be a fragment, and threw it away. It then processed lines until, after finishing a particular line, it noticed that the file position was beyond the specified ending position (or end-of-file, whichever came first), and ended normally. In this way, each utility could be told to process just one segment of the file.
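Reconstructed from memory, a minimal sketch of such a worker might look like this in Perl. The option names match the above; process_line() is a hypothetical stand-in for whatever the real work was:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Getopt::Std;
    use Fcntl qw(SEEK_SET);

    my %opt = ( s => 0, e => 100 );    # default: the whole file
    getopts( 's:e:', \%opt );
    my $file = shift @ARGV or die "usage: $0 [-s start%] [-e end%] file\n";

    my $size = -s $file;
    defined $size or die "cannot stat $file: $!\n";
    my $start = int( $size * $opt{s} / 100 );
    my $end   = int( $size * $opt{e} / 100 );

    open my $fh, '<', $file or die "open $file: $!\n";
    seek $fh, $start, SEEK_SET or die "seek: $!\n";

    # If we did not start at byte zero, the first "line" is presumed to
    # be a fragment belonging to the previous segment: throw it away.
    <$fh> if $start > 0;

    # Process whole lines until, after a line, the file position is
    # beyond the ending position -- or end-of-file, whichever comes first.
    while ( my $line = <$fh> ) {
        process_line($line);
        last if tell($fh) > $end;
    }
    close $fh;

    sub process_line { print $_[0] }    # hypothetical stand-in

Note that the line straddling the ending position is processed by this segment, and the next segment's worker discards its leading fragment, so every line is handled exactly once.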
The bash scripts (or what have you) that executed these programs launched a number of parallel copies, with non-overlapping percentage ranges as appropriate, as shell jobs, then did a wait for all of them to finish. It's exactly the concept that BrowserUK demonstrates here, but implemented (for better or for worse) in the design of the programs and of the shell scripts that invoked them.
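The driver side can be sketched the same way, here in Perl rather than bash, with fork/exec/waitpid standing in for the shell's jobs and wait. (segment_worker.pl is the hypothetical worker from the previous sketch.)

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file = shift @ARGV or die "usage: $0 file [njobs]\n";
    my $jobs = shift @ARGV || 4;

    my @pids;
    for my $i ( 0 .. $jobs - 1 ) {
        my $s = int( 100 * $i / $jobs );            # non-overlapping
        my $e = int( 100 * ( $i + 1 ) / $jobs );    # percentage ranges
        defined( my $pid = fork ) or die "fork: $!\n";
        if ( $pid == 0 ) {                          # child: become a worker
            exec 'perl', 'segment_worker.pl', '-s', $s, '-e', $e, $file
                or die "exec: $!\n";
        }
        push @pids, $pid;
    }
    waitpid $_, 0 for @pids;    # the moral equivalent of the shell's "wait"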
Not all of these programs were Perl, but they all used operating-system calls to advise the OS that they would do sequential-only reads of those files, thereby requesting big read-ahead buffers. They also explicitly declared the files read-only and shared, thereby encouraging shared use of common buffers. None of the programs spawned threads or processes of their own; each was simply prepared to be run as a child process ... of the shell.
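I can't reproduce the original calls (again, those programs weren't all Perl), but on Linux the equivalent advice is posix_fadvise(2), reachable from Perl through, for example, the IO::AIO module. A minimal sketch, assuming IO::AIO's aio_fadvise and its FADV_SEQUENTIAL constant:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::AIO;

    my $file = shift @ARGV or die "usage: $0 file\n";
    open my $fh, '<', $file or die "open $file: $!\n";    # read-only open

    # Advise the kernel of sequential access over the whole file
    # (offset 0, length 0 means "to end of file"), which typically
    # buys a larger read-ahead window.
    aio_fadvise $fh, 0, 0, IO::AIO::FADV_SEQUENTIAL, sub {
        warn "fadvise: $!\n" if $_[0] < 0;
    };
    IO::AIO::flush;    # block until the advice request has completed

    while ( my $line = <$fh> ) {
        # ... sequential-only processing of $line here ...
    }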
Great post. Thanks.