http://www.perlmonks.org?node_id=1183843


in reply to Re: PerlIO file handle dup
in thread PerlIO file handle dup

If we read one record at a time, the input semaphore isn't needed. However, I'm reading 500 records at a time, and they need to stay in sequence. If I read and processed one record at a time, I could drop the input semaphore when MCE::Shared is being used (though probably not for regular file handles), but I think that would make output slower, since each thread would have to block until its processed data is the next in line to be written.
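
Here's roughly what I mean on the input side, as a minimal sketch using plain threads and Thread::Semaphore (the names, the 500 count, and input.txt are just illustrative, not my actual script):

    use strict;
    use warnings;
    use threads;
    use threads::shared;
    use Thread::Semaphore;

    my $in_sem = Thread::Semaphore->new;   # guards the shared input handle
    my $next_chunk :shared = 0;            # chunk sequence counter

    open my $in, '<', 'input.txt' or die "open: $!";

    sub read_chunk {
        my (@records, $id);
        $in_sem->down;                     # only one thread reads at a time
        $id = $next_chunk++;
        while (@records < 500 and defined(my $line = <$in>)) {
            push @records, $line;
        }
        $in_sem->up;
        return ($id, \@records);           # $id preserves the input order
    }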

I only put the yield in there because the first thread seemed to be hogging all the input before the other threads even started. In my actual script I'm not using MCE::Shared for the output file, and autoflush is needed to keep the output in order.
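
The output side is the part where each thread blocks until its chunk is next. A sketch of the idea (again not my actual script; write_in_order is my name for it), with autoflush so each chunk hits the file before the next thread takes its turn:

    use threads::shared;
    use IO::Handle;

    open my $out, '>', 'output.txt' or die "open: $!";
    $out->autoflush(1);                    # flush each chunk immediately

    my $next_to_write :shared = 0;

    sub write_in_order {
        my ($id, $buf) = @_;
        lock $next_to_write;
        cond_wait($next_to_write) until $next_to_write == $id;
        print {$out} $buf;                 # our turn; autoflush pushes it out
        $next_to_write++;
        cond_broadcast($next_to_write);    # wake threads waiting their turn
    }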

So this

    read $fh, my($buf), '4k';

is the same but faster than this?

    my $buf = <$fh>;

If it always reads exactly one entire record regardless of the "chunk size", what does the chunk size actually do? Or is the chunk size a minimum, after which it keeps reading until the end of the line? It's confusing that MCE's read works fundamentally differently from Perl's read.
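
One way I could check, I suppose, is to just probe it (mce_open comes from MCE::Shared; input.txt is a placeholder):

    use strict;
    use warnings;
    use MCE::Shared;

    mce_open my $fh, '<', 'input.txt' or die "open: $!";

    while (read $fh, my($buf), '4k') {
        my $nl = ($buf =~ tr/\n//);        # count complete lines in the chunk
        printf "read %d bytes, %d lines\n", length($buf), $nl;
    }

If the chunk size is a minimum, I'd expect every read to return a bit more than 4096 bytes and an integral number of lines.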

I don't suppose there is a "readlines" function for MCE file handles? I assume that if I could read all 500 lines at once, it would minimize the MCE-related overhead. For delimited input, though, I'm currently letting Text::CSV_XS read from the file handle.
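
For the Text::CSV_XS side, batching the rows myself looks something like this sketch (process_batch is a stand-in for the real work):

    use strict;
    use warnings;
    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });
    open my $fh, '<', 'input.csv' or die "open: $!";

    my @batch;
    while (my $row = $csv->getline($fh)) {
        push @batch, $row;                 # $row is an arrayref of fields
        if (@batch == 500) {
            process_batch(\@batch);
            @batch = ();
        }
    }
    process_batch(\@batch) if @batch;      # leftover rows at EOF

If I remember right, getline_all can also pull a fixed number of rows in one call, e.g. my $rows = $csv->getline_all($fh, 0, 500); which is about as close to a "readlines" as I've found.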