Given what you describe I would plan on, one way or
another, winding up with LibXSLT inlined in Perl.
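For concreteness, a minimal sketch of the inline approach
(assuming XML::LibXSLT; the file names here are
placeholders of mine, not from your post):

    use strict;
    use warnings;
    use XML::LibXML;
    use XML::LibXSLT;

    # Parse the stylesheet once and reuse it for every document.
    my $stylesheet = XML::LibXSLT->new->parse_stylesheet(
        XML::LibXML->load_xml(location => 'style.xsl'));

    for my $file (@ARGV) {
        my $doc    = XML::LibXML->load_xml(location => $file);
        my $result = $stylesheet->transform($doc);
        $stylesheet->output_file($result, "$file.out");
    }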
But before coding, my next question is how many parallel
processes you can profitably run. This is a question of
whether you are bound on I/O or CPU. If CPU, then it is
generally not worthwhile to run more processes than you have
CPUs. If I/O, then it depends on your hardware and on what
fraction of the time is spent on CPU. The last time I tested
an I/O bound job, 7 processes worked best for me. YMMV.
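If you want to find that number empirically rather than
guess, here is a rough timing sketch (transform.pl and the
input/*.xml layout are assumptions of mine):

    use strict;
    use warnings;
    use Time::HiRes qw(time);
    use Parallel::ForkManager;

    my @files = glob('input/*.xml');    # assumed input layout
    for my $workers (1, 2, 4, 7, 8) {
        my $pm    = Parallel::ForkManager->new($workers);
        my $start = time();
        for my $file (@files) {
            $pm->start and next;                    # parent continues loop
            system('perl', 'transform.pl', $file);  # hypothetical worker
            $pm->finish;
        }
        $pm->wait_all_children;
        printf "%2d workers: %.1f seconds\n", $workers, time() - $start;
    }

Run it on a representative sample and take the worker count
where the times stop improving.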
Next try a run of 50-100 pages with LibXSLT, and see if you
have a serious memory leak. If memory usage stays flat,
then I wouldn't worry about it. If it is clearly leaking
but doesn't wind up at a worrying level, note that. If it
leaks unacceptably, figure out how many files you can do in
one process.
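A crude way to watch for that (a Linux-only sketch - it
polls VmRSS from /proc, and the stylesheet path is a
placeholder):

    use strict;
    use warnings;
    use XML::LibXML;
    use XML::LibXSLT;

    # Resident set size in kB, from /proc (Linux only).
    sub rss_kb {
        open my $fh, '<', '/proc/self/status' or return 0;
        while (<$fh>) { return $1 if /^VmRSS:\s+(\d+)/ }
        return 0;
    }

    my $stylesheet = XML::LibXSLT->new->parse_stylesheet(
        XML::LibXML->load_xml(location => 'style.xsl'));

    my @files = glob('input/*.xml');
    splice @files, 100 if @files > 100;    # cap the test run
    for my $i (0 .. $#files) {
        $stylesheet->transform(
            XML::LibXML->load_xml(location => $files[$i]));
        printf "%3d files: %d kB\n", $i + 1, rss_kb()
            if ($i + 1) % 10 == 0;
    }

Flat numbers mean no leak worth chasing; steady growth per
file is the leak.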
Now you have cases:
- Only worthwhile to run one process, and there is no
leak. Then just inline it.
- Only worthwhile to run one process, but there is a
leak you care about. Then write a script that can
take an input file listing 50-100 file names and do those
files, and have your main script launch the batches via
system calls; there is a sketch of this below. (The idea
of the batch is to amortize startup costs.)
- Worthwhile to run many processes. Figure out
what to run, then run them in parallel, either using
Parallel::ForkManager (may well be Linux
specific - the NT emulation of fork is not great) or
using IPC::Open3 directly as I did at "Run commands
in parallel".

Two gotchas to think about. One is how you will handle
errors. The other is that in any sort of "gather
together, send batches off" logic it is very easy to
say, "OK, when I have a batch, send it off," but forget
that when you run out of new jobs, you still need to
send the remainder off as a final, partial batch.
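Here is a sketch of the batching driver from case 2 (all
names are placeholders; transform_batch.pl is an assumed
worker that reads a list file and transforms each entry).
Note the last line, which catches the second gotcha:

    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    my $batch_size = 100;
    my @batch;

    sub run_batch {
        return unless @_;
        my ($fh, $list) = tempfile(UNLINK => 0);
        print $fh "$_\n" for @_;
        close $fh;
        # A fresh perl per batch, so leaked memory dies with it.
        system('perl', 'transform_batch.pl', $list) == 0
            or warn "batch starting at $_[0] failed: $?\n";
        unlink $list;
    }

    while (my $file = <STDIN>) {    # file names, one per line
        chomp $file;
        push @batch, $file;
        if (@batch >= $batch_size) {
            run_batch(@batch);
            @batch = ();
        }
    }
    run_batch(@batch);    # the leftover partial batch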
Good luck, and tell us how it went.