http://www.perlmonks.org?node_id=369269

Jeppe has asked for the wisdom of the Perl Monks concerning the following question:

Hello fellow Perl followers!

I have been working on this problem for a while and have tried a few modules and approaches.

I need to have one process write to several message queues and then have one reader for each message queue - and it needs to be efficient and scalable.

Basically, the writer process inserts credit card transactions into the database and then forwards the row id to several message queues; each reader reads from one of those queues and processes the transactions further. Some of the readers are a bit slower than the writer, so when we process files every hour there is a bit of a buildup in the message queues, which is why the solution must be scalable.
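
To make that concrete, here is a minimal sketch of the fan-out on the writer side. The queue names and both subs are stand-ins I made up for illustration; the real INSERT and the queue backend are stubbed out:

    #!/usr/bin/perl
    use strict;

    my @queues = ('auth', 'settlement', 'reporting');  # hypothetical reader queues

    # Stub: the real version would INSERT via DBI and fetch the
    # generated row id (DB-specific, e.g. IDENTITY_VAL_LOCAL() on DB2).
    sub store_transaction {
        my ($txn) = @_;
        return int(rand(100_000));
    }

    # Stub: replace with whatever queue backend is chosen.
    sub enqueue {
        my ($queue, $row_id) = @_;
        print "queued $row_id on $queue\n";
    }

    for my $txn ('txn1', 'txn2') {
        my $row_id = store_transaction($txn);
        enqueue($_, $row_id) for @queues;   # fan the row id out to every queue
    }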

What I've tried so far: IPC::Shareable, which was very slow; IPC::ShareLite + Storable, which doesn't scale well enough; and a self-made database solution, where I maintained a table of queue names and transaction ids and had the readers fetch and delete rows from that table.

The self-made database solution actually worked the best - but there are bugs in the DBD::DB2 driver that are triggered by this code - and believe me, I've tried to get around them!
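
For reference, the home-grown queue table works roughly like this (the table and column names here are invented, not the real schema), and it is this fetch-then-delete pattern that tickles the DBD::DB2 bugs:

    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:DB2:mydb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });

    # Writer side: one row per (queue, transaction) pair.
    sub push_queue {
        my ($queue, $txn_id) = @_;
        $dbh->do('INSERT INTO msg_queue (queue_name, txn_id) VALUES (?, ?)',
                 undef, $queue, $txn_id);
        $dbh->commit;
    }

    # Reader side: claim the oldest row for this queue, then delete it.
    # Assumes id is a generated identity column on msg_queue.
    sub pop_queue {
        my ($queue) = @_;
        my ($id, $txn_id) = $dbh->selectrow_array(
            'SELECT id, txn_id FROM msg_queue WHERE queue_name = ?
             ORDER BY id FETCH FIRST 1 ROW ONLY', undef, $queue);
        return undef unless defined $id;
        $dbh->do('DELETE FROM msg_queue WHERE id = ?', undef, $id);
        $dbh->commit;
        return $txn_id;
    }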

So now I'm looking for a new solution, preferably a free one. I'm considering sockets, but I understand I will run into problems because of the asynchronous nature of my processing? How about Berkeley DB - would that work?
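
With Berkeley DB I imagine something like the following, assuming the BerkeleyDB module and its DB_QUEUE access method (the file name and record length are my own choices). Since the queue lives in a file, it would also cover the reboot-safety wish below:

    #!/usr/bin/perl
    use strict;
    use BerkeleyDB;

    # Open (or create) a persistent queue of fixed-length records.
    my $db = BerkeleyDB::Queue->new(
        -Filename => 'txn_queue.db',
        -Flags    => DB_CREATE,
        -Len      => 16,            # room for a numeric row id
        -Pad      => ' ',
    ) or die "cannot open queue: $BerkeleyDB::Error";

    # Writer side: DB_APPEND adds to the tail and fills in $recno.
    my $recno = 0;
    $db->db_put($recno, '12345', DB_APPEND) == 0
        or die "db_put failed: $BerkeleyDB::Error";

    # Reader side: DB_CONSUME atomically removes and returns the head record.
    my $value = '';
    if ($db->db_get($recno, $value, DB_CONSUME) == 0) {
        $value =~ s/ +$//;          # strip the pad characters
        print "dequeued row id $value\n";
    }

For separate writer and reader processes the database would be opened inside a shared BerkeleyDB::Env (with at least DB_INIT_MPOOL and DB_INIT_LOCK), and DB_CONSUME_WAIT can make the reader block until a record arrives; the put/consume shape stays the same.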

It would make me happy if the solution could also be made reboot-safe, i.e. queued messages survive a restart. Also, I'm currently on perl 5.6.1 and would prefer not to move off it - not for this, anyhow.