http://www.perlmonks.org?node_id=1183773

chris212 has asked for the wisdom of the Perl Monks concerning the following question:

I'm trying to have multiple threads read from the same file handle. I know they can't read concurrently; that is what the semaphore is for. I understand that file handles cannot be "shared" between threads, but each thread can inherit a dup copy of a file handle. It seems that with PerlIO, each copy tracks its own position, causing the data to be read out of sequence. This isn't the case if I use the :unix layer, but the lack of buffering kills performance. As a workaround, I can have my script keep track of the correct position and have each thread seek to it before reading. That doesn't work when the file handle is piped output from a command such as gzip, though, since you cannot seek on a pipe.

#!/opt/perl/bin/perl
use strict;
use threads;
use Thread::Semaphore;

my $fh;
open($fh, '|-', 'gzip > test.txt');
foreach (1..1000) {
    print {$fh} sprintf('%04d', $_) . ('abc123' x 10) . "\n";
}
close($fh);

open($fh, '-|', 'gzip', '-cd', 'test.txt');
$| = 1;
my @threads = ();
my $sem = Thread::Semaphore->new();
foreach (1..3) {
    push(@threads, threads->create(\&test));
}
$_->join() foreach (@threads);
close($fh);
print "\n";

sub test {
    my $tid = threads->tid();
    my $line;
    while (1) {
        threads->yield();
        $sem->down();
        $line = <$fh> or last;
        print "Thread $tid " . $line;
        $sem->up();
    }
    $sem->up();
}
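To illustrate the seek workaround mentioned above, here is a minimal sketch (mine, not code from the thread). It assumes a hypothetical uncompressed input file plain.txt, since seeking is impossible on a pipe: a shared scalar records the true read position, and each thread seeks its dup'd handle there before reading.

#!/opt/perl/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Semaphore;

# A shared scalar records the "true" read position; each thread seeks
# its dup'd handle there before reading, so the copies stay in sequence.
my $pos :shared = 0;
my $sem = Thread::Semaphore->new();

# Hypothetical uncompressed input file; seek() does not work on a pipe.
open(my $fh, '<', 'plain.txt') or die("Failed to open: $!\n");

my @threads = map { threads->create(\&worker) } (1..3);
$_->join() foreach (@threads);
close($fh);

sub worker {
    my $tid = threads->tid();
    while (1) {
        $sem->down();
        seek($fh, $pos, 0);      # move this copy to the shared position
        my $line = <$fh>;
        $pos = tell($fh);        # remember where the next read must start
        $sem->up();
        last unless defined($line);
        print "Thread $tid " . $line;
    }
}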

I can't have the piped output read by a single thread, because I need the data to be processed concurrently in multiple threads, and it is too slow to queue/dequeue the data between threads (there is too much data). Creating a new thread for each chunk of data and passing the data to it did work, but caused intermittent crashes; apparently this was due to nearly a million threads being created over the course of execution, even though only 34 ran concurrently. The MCE module has been suggested, but I don't understand it well enough to use it yet.
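For context, this is roughly the single-reader/queue shape that turned out to be too slow: a minimal sketch using Thread::Queue (my illustration, reusing the gzipped test.txt from the script above). One reader drains the pipe and enqueues lines; the workers dequeue and process them.

#!/opt/perl/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Every line pays the queue's serialization cost, which is the
# per-item overhead that makes this design too slow at high volume.
my $q = Thread::Queue->new();

my @workers = map {
    threads->create(sub {
        my $tid = threads->tid();
        while (defined(my $line = $q->dequeue())) {
            print "Thread $tid " . $line;
        }
    });
} (1..3);

open(my $fh, '-|', 'gzip', '-cd', 'test.txt') or die("Failed to uncompress: $!\n");
while (my $line = <$fh>) {
    $q->enqueue($line);
}
close($fh);

$q->end();                 # workers see undef once the queue drains
$_->join() foreach (@workers);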

Apparently with stdio, these dup file handles would share a position? Is there any way to get a shared position without sacrificing buffering (even if each thread has its own buffer)? Does using the stdio layer fall back to the pre-5.8 I/O rather than PerlIO, or would that require re-compiling Perl?
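For what it's worth, a PerlIO layer can be selected per handle at open time, so no recompile is needed just to try a different layer. A syntax sketch, not a confirmed fix for the shared-position problem; whether dup'd handles share a position under a given layer depends on how that layer buffers:

# Select a layer per handle at open time; no rebuild of Perl required.
open(my $fh, '<:unix', 'test.txt') or die("Failed to open: $!\n");   # raw, unbuffered reads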

UPDATE

I did some more reading about MCE, and apparently MCE's shared file handles are compatible with Perl threading, so I don't even need to replace Perl threading. It is really simple: just replace open with mce_open, and the file handle is shared. It just works. For uncompressed files it is still faster to have each thread seek than to go through MCE's IPC, though; I guess for compressed files the buffering more than makes up for the IPC overhead.

#!/opt/perl/bin/perl
use strict;
use threads;
use Thread::Semaphore;
use MCE::Shared;

my $fh;
open($fh, '|-', 'gzip > test.txt');
foreach (1..1000) {
    print {$fh} sprintf('%04d', $_) . ('abc123' x 10) . "\n";
}
close($fh);

mce_open($fh, '-|', 'gzip -cd test.txt') or die("Failed to uncompress: $!\n");
$| = 1;
my @threads = ();
my $sem = Thread::Semaphore->new();
foreach (1..3) {
    push(@threads, threads->create(\&test));
}
$_->join() foreach (@threads);
close($fh);
print "\n";

sub test {
    my $tid = threads->tid();
    my $line;
    while (1) {
        threads->yield();
        $sem->down();
        $line = <$fh> or last;
        print "Thread $tid " . $line;
        $sem->up();
    }
    $sem->up();
}