Re^3: Am I on the right track? by graff (Chancellor)
on Jul 12, 2015 at 19:23 UTC ( [id://1134415] )
I'm not sure I understand your question. It looks to me like the OP's code opens every input file twice: once to get its page count, and once to append its content to a chosen output file. (Your "get_page_cnt()" sub only returns a group name, not a file handle or PDF content.) My suggestion is no different in that regard.
Where my suggestion differs is that all the inputs are scanned first, before any output is done, and then there's a nested loop: for each output "group" file, create it, then for each input file in that group, concatenate its content. (If none of the inputs fall into a given group, there's no need to create an output file for that group.)

Opening and closing each output file exactly once is bound to involve less overhead overall than closing and reopening output files in whatever order the inputs happen to arrive (though I have no idea whether the difference would be noticeable in wall-clock time).

Another thing to consider is whether you have to worry about an upper bound on how much data you can concatenate into one PDF file for a single print job. If so, I think my suggested approach makes that easier to manage, because you can work out all the partitioning arithmetic before creating any outputs.
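To make the two-pass structure concrete, here's a minimal sketch using PDF::API2. The group_for() thresholds are made up, standing in for whatever your real get_page_cnt() decides, and the exact method names (pages, import_page, saveas) may vary between PDF::API2 versions, so treat this as a shape to adapt, not a drop-in:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use PDF::API2;

    # Hypothetical grouping rule, standing in for the OP's
    # get_page_cnt() logic -- adjust the thresholds to taste.
    sub group_for {
        my ($page_count) = @_;
        return $page_count <= 2  ? 'small'
             : $page_count <= 10 ? 'medium'
             :                     'large';
    }

    # Pass 1: scan every input once, recording only which group
    # it belongs to -- no output is created yet.
    my %group;    # group name => array ref of input file names
    for my $file (@ARGV) {
        my $pages = PDF::API2->open($file)->pages;
        push @{ $group{ group_for($pages) } }, $file;
    }

    # Pass 2: create one output per non-empty group, so each
    # output file is opened and closed exactly once.
    for my $name (sort keys %group) {
        my $out = PDF::API2->new;
        for my $file (@{ $group{$name} }) {
            my $in = PDF::API2->open($file);
            # append every page of this input to the group's output
            $out->import_page($in, $_) for 1 .. $in->pages;
        }
        $out->saveas("$name.pdf");
    }

Note that %group is fully populated before any output exists, so this is also the natural place to handle a per-print-job size cap: sum the page counts for a group, and if it's over the limit, split that group's file list into chunks before the second loop and write (say) small-1.pdf, small-2.pdf, and so on.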
In Section: Seekers of Perl Wisdom