You suggest performance would be similar to a queue, but the problem with a queue is that all locking is applied to the entire shared array, so every lock blocks all contenders, even when they are after different elements of that array.
If you look carefully at my example, you'll see that the @buffers array itself isn't (explicitly) shared and is never locked; only the per-thread scalar elements are.
And as only the reader thread and one worker thread per buffer compete for any given lock, all worker threads are free to continue independently of each other.
And finally, the lock on any given buffer is held only for the brief time it takes to copy its contents to a local buffer, so the reader thread can be repopulating it with the next record whilst the worker thread is processing the previous one.
The upshot is that in my use of this technique, it beats Thread::Queue by a wide margin for applications where processing a record takes three or more times as long as reading it.
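To make the scheme concrete, here is a minimal sketch of the idea described above: one shared scalar per worker, with the reader round-robining records into them. The names (worker, process, etc.), the round-robin dispatch, and the 'EOF' sentinel are my own illustration, not the original code.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

my $WORKERS = 3;

# @buffers itself is NOT shared and is never locked;
# only the per-worker scalar elements are shared.
my @buffers;
for ( 1 .. $WORKERS ) {
    my $slot :shared = '';
    push @buffers, \$slot;
}

my $done :shared = 0;     # processed-record count, for the demo only

sub process {             # stand-in for the real per-record work
    my $rec = shift;
    lock $done; ++$done;
}

sub worker {
    my $slot = shift;
    while ( 1 ) {
        my $rec;
        {   lock $$slot;                          # brief lock...
            cond_wait( $$slot ) until length $$slot;
            $rec   = $$slot;                      # ...copy the record out,
            $$slot = '';                          # mark the slot empty,
            cond_signal( $$slot );                # and wake the reader
        }                                         # lock released here
        last if $rec eq 'EOF';
        process( $rec );                          # work done outside the lock
    }
}

my @threads = map { threads->create( \&worker, $buffers[ $_ ] ) } 0 .. $#buffers;

# The reader: deal records into the per-worker buffers.
my $i = 0;
for my $rec ( map "record $_", 1 .. 9 ) {         # stand-in for reading a file
    my $slot = $buffers[ $i++ % $WORKERS ];
    lock $$slot;
    cond_wait( $$slot ) while length $$slot;      # wait till the worker drained it
    $$slot = $rec;
    cond_signal( $$slot );
}
for my $slot ( @buffers ) {                       # tell every worker to finish
    lock $$slot;
    cond_wait( $$slot ) while length $$slot;
    $$slot = 'EOF';
    cond_signal( $$slot );
}
$_->join for @threads;

print "processed $done records\n";                # prints "processed 9 records"
```

Note that each slot has exactly one reader and one worker contending for it, and both re-check the condition before waiting, so no wakeup is ever lost.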
Test it. You might be pleasantly surprised.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
In the absence of evidence, opinion is indistinguishable from prejudice.