http://www.perlmonks.org?node_id=796581

Annirak has asked for the wisdom of the Perl Monks concerning the following question:

I'm trying to build a worker process that accepts and processes jobs. I'm getting hung up on how to accept the jobs: I'd like the worker to be able to accept jobs without having to check for them periodically. The way I have implemented it currently is this:

  1. The job issuer spawns a worker, and connects to it via a pipe. my $pid=open($pipe,"| $worker");
  2. The job issuer freezes a job object (or hash or whatever)
  3. The job issuer calculates the length of the frozen object, then sends it with print $pipe "$len\n$job" (issuer side sketched after this list)
  4. The worker picks this job up by: $len=<STDIN>;chomp $len; read(STDIN,$data,$len);$ref=thaw($data);
  5. The worker should enqueue the job for processing, then continue processing its current job.
  6. goto 2.
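
For concreteness, the issuer side of steps 1-3 looks roughly like this (just a sketch; the worker path and the job contents are placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(freeze);

my $worker = './worker.pl';    # placeholder path to the worker script

# Step 1: spawn the worker and attach to its STDIN via a pipe.
my $pid = open(my $pipe, "| $worker") or die "can't spawn worker: $!";

# Step 2: freeze a job (any hashref will do for illustration).
my $frozen = freeze({ action => 'resize', file => 'foo.png' });

# Step 3: send the length prefix on its own line, then the frozen bytes.
my $len = length $frozen;
print $pipe "$len\n$frozen";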

Here's the catch: for the second and later iterations, the worker may be in the middle of a long-running job, so there's no way to know how long it might take to service STDIN. While I could just let data sit in the pipe, I'm not sure how reliable that is. More to the point, it makes it harder (impossible?) for the worker to collect information about pending jobs.

So I got the bright idea of signaling the worker when data has been written to its input pipe. Here's the worker:

#!/usr/bin/perl
use strict;
use Storable qw(freeze thaw);

BEGIN { $SIG{USR1} = \&drdy; }

my @datastore = ();

sub drdy {
    print "Signal!\n";
    my $d = <STDIN>;
    print $d . "\n";
    push @datastore, $d;
    $SIG{USR1} = \&drdy;
}

while (1) {
    sleep 1;
    print shift @datastore if (scalar @datastore);
}

This worker gets stuck in &drdy after the first SIGUSR1. This leads me to believe that I can't read from STDIN in a signal handler. The job issuer sets { my $fh = select $pipe; $| = 1; select $fh } to turn on autoflush, so the problem is not that the communication is buffered. The entire transaction should be in the worker's input pipe before the job issuer signals the worker.
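
The write-then-signal sequence on the issuer side is something like this (again just a sketch, reusing $pipe, $pid, $len and $frozen from the snippet after the list above):

# Turn on autoflush so the whole transaction lands in the pipe first.
{ my $fh = select $pipe; $| = 1; select $fh }

print $pipe "$len\n$frozen";    # write the length-prefixed job...
kill 'USR1', $pid;              # ...then tell the worker it has data waiting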

I can only think of one other option to handle this IPC problem: make the worker multithreaded, with one pipe-management thread and one work thread. This seems like a pretty heavyweight solution to a pretty simple problem.
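
If I went that route, I imagine the worker would look something like the following (untested sketch using threads and Thread::Queue; the actual job processing is a placeholder):

#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;
use Storable qw(thaw);

my $queue = Thread::Queue->new;

# Pipe-management thread: block on STDIN and enqueue each frozen job.
my $reader = threads->create(sub {
    while (defined(my $len = <STDIN>)) {
        chomp $len;
        read(STDIN, my $data, $len) == $len or last;
        $queue->enqueue($data);    # hand the frozen bytes to the work thread
    }
    $queue->enqueue(undef);        # signal end-of-input
});

# Work thread (here, just the main thread): dequeue and process jobs.
while (defined(my $data = $queue->dequeue)) {
    my $job = thaw($data);
    # ... long-running processing of $job goes here ...
}

$reader->join;

At least with this layout the work thread could call $queue->pending to see how many jobs are waiting.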

Is multithreading the way to do this?