PerlMonks |
IIRC (but this was a while ago, so my rememberer may be incorrect or out of date), PDF::API2 wraps the old and new documents into a new PDF::API2 document container. Therefore, if you are repeatedly building a new document this way, you end up with a structure that looks like:

                  p1-pN
                 /     \
          p1-p(N-1)     pN
           /      \
     p1-p(N-2)   p(N-1)
        /     \
  p1-p(N-3)  p(N-2)
      ...

and so on. Once this reaches a few hundred pages, you have a very imbalanced tree, which can be inefficient to process. Also, since the PDF traversal code probably uses recursion (I would guess; I have not recently checked the source), that could generate your deep recursion message.

You could manually build a plan for a more balanced tree, and then build the final PDF file from that plan. Essentially, you want to end up with the shortest binary tree you can get for the number of original documents you have. For example, if you have 4 documents, you would merge 1+2 => A and 3+4 => B, and then merge A+B => C. For 8, you would do 1+2 => A, 3+4 => B, 5+6 => C, 7+8 => D; then A+B => E and C+D => F; then E+F => G.

If this is the case (see the first paragraph, and look at the resulting PDF file structure after merging a couple of documents), then a 'correct' (but possibly destructive) fix would be to rebalance the pages as new ones are inserted.

As always, corrections welcome.

Update: Cleaned up graphic.

--MidLifeXis

In reply to Re: Problem merging thousands of PDFs with PDF::API2: 'Deep recursion on subroutine "PDF::API2::Basic::PDF::Objind::release"'
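The pairwise plan above can be driven generically. Here is a minimal sketch of that tournament-style reduction; note that merge_two here is a stand-in placeholder (it just concatenates labels so the plan is visible), not PDF::API2 code - you would swap in whatever two-document merge routine you already have:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder merge: combine two "documents" and return the result.
# Replace the body with your real PDF::API2 merge of two files.
sub merge_two {
    my ( $a, $b ) = @_;
    return "($a+$b)";
}

# Repeatedly merge adjacent pairs until one item remains, producing
# a merge tree of depth ceil(log2(N)) instead of depth N-1.
sub balanced_merge {
    my @items = @_;
    while ( @items > 1 ) {
        my @next;
        while (@items) {
            my $a = shift @items;
            if (@items) {
                push @next, merge_two( $a, shift @items );
            }
            else {
                push @next, $a;    # odd one out rides to the next round
            }
        }
        @items = @next;
    }
    return $items[0];
}

print balanced_merge( 1 .. 8 ), "\n";
# For 8 inputs the plan is (((1+2)+(3+4))+((5+6)+(7+8))) --
# depth 3, versus depth 7 for a left-to-right chain of merges.
```

With thousands of inputs this keeps the merge tree (and any recursive traversal of it) logarithmically shallow.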
by MidLifeXis