Bah, humbug ... ;-)
There’s a million-pound elephant in this room, if there is one at all, and that elephant certainly isn’t the mere microseconds that a CPU will require to split up a chunk of data that came in from a disk drive. We’re not reading graphic images here and resizing them ... the task to be performed against each unit of data is quite inconsequential. The advantage comes from being able to gulp large amounts of data at a time from a disk device that (probably ...) does not have to spend too much time doing it. (Although, depending on the peculiarities of the physical distribution of the file in question across the disk real estate, that might not turn out to be so.)
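To make that point concrete, here is a minimal sketch of the per-record “work” being argued about: for a tab-separated line it amounts to one split and one slice. (The choice of columns 0 and 2 is my assumption for illustration; the OP’s columns may differ.)

```perl
use strict;
use warnings;

# The entire CPU cost per record: split on tab, slice out the wanted
# columns, and re-join.  Columns 0 and 2 are an assumed selection.
my $line   = "alpha\tbeta\tgamma\tdelta";
my @fields = split /\t/, $line;
my $picked = join "\t", @fields[0, 2];
print "$picked\n";    # prints "alpha<TAB>gamma"
```

A couple of microseconds of that, per record, is not where the wall-clock time of this job will go.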
“What is the array of hashrefs for?” Why, you yourself said it! Even though “the input is a stream, and the output is a stream,” there is probably only one device. Each physical operation consists of seek + read or seek + write, and we desire to reduce the seeks to the point that almost none are required. We want: seek($$) + read + read + read + ... + read + seek($$) + write + write + ... + write + seek($$).
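In code, that seek-amortizing pattern is just “accumulate a batch of records, then emit them in one burst.” Here is a hedged sketch of it; for demonstration the “files” are in-memory handles (swap in real paths for the OP’s data), and the batch size and column indices (0 and 2) are my assumptions, not the OP’s values:

```perl
use strict;
use warnings;

# Batching sketch: many reads per trip to the input, then one burst of
# writes, instead of interleaving a single read with a single write.
my $input  = "a\tb\tc\td\ne\tf\tg\th\n";   # stand-in for the input file
my $output = '';                            # stand-in for the output file
open my $in,  '<', \$input  or die $!;
open my $out, '>', \$output or die $!;

my $BATCH = 2;        # assumed; tune against the real device
my @buffer;
while (my $line = <$in>) {
    chomp $line;
    my @f = split /\t/, $line;
    push @buffer, join("\t", @f[0, 2]) . "\n";  # assumed column slice
    if (@buffer >= $BATCH) {
        print {$out} @buffer;   # one burst of writes per batch
        @buffer = ();
    }
}
print {$out} @buffer if @buffer;  # flush the final partial batch
close $out;
close $in;
```

(Strictly speaking, stdio buffering and the OS already coalesce small writes to some degree; the sketch shows the shape of the idea, not a guarantee about head movement.)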
The specific reason why I questioned the validity of threads in this scenario was that I judged the amount of CPU-intensive processing required to be pragmatically inconsequential relative to the amount of time that would be spent in an I/O-wait, and that the physical latency cost of the disk device would be the “above all, ruling constraint” no matter what. Pragmatically, we can afford to spend a few microseconds crunching numbers, because the disk platter won’t have rotated very far during that time anyway.
But there is only so far that the point can be argued before it becomes merely an argument. If the per-record CPU load is indeed slight to the point of being inconsequential, then memory is a big, fat, available buffer that costs nothing ... if, indeed, paging is not occurring, so that it really does cost “nothing.” But if a page fault that must be resolved to disk does take place, then you just moved that read/write head after all. Overlapping I/O with CPU processing might prove to be beneficial on the OP’s system, not yours ... or not. If it were me, I would start by using RAM as a buffer, see how much bang that buck got me, then pragmatically move forward from there.
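The “start with RAM as a buffer” baseline is the simplest version of all: one gulp in, one gulp out, no cleverness. A sketch, again demonstrated on an in-memory handle and an assumed column slice:

```perl
use strict;
use warnings;

# Simplest baseline: slurp every record, transform in memory, then the
# whole result would go out in a single print to the output file.
my $input = "1\t2\t3\n4\t5\t6\n";   # stand-in for the input file
open my $in, '<', \$input or die $!;
my @all = <$in>;                     # slurp every record at once
close $in;

chomp @all;
my @out = map { join("\t", (split /\t/)[0, 2]) . "\n" } @all;

my $result = join '', @out;          # one burst out
print $result;
```

Measure that first; only reach for threads or double-buffering if the numbers say the simple version is actually disk-seek-bound.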
It doesn’t advance the technical validity of your arguments to belittle the opinions of others, you know ... no matter how personally self-assured you might be.
In reply to Re^4: selecting columns from a tab-separated-values file