Hi Everyone,
I have a Perl script that creates a variable-data PDF (2 pages per record) using PDF::API2. The end result will be a printable PDF.
My problem is that the processing time is about 1 minute per record. Long for a 20-record file, REALLY long for a 1000-record file, and TAKE A WEEK VACATION long for a 5000-record file!
All of the images need to be high-res, so I'm thinking it takes so long because each picture is resampled every time.
My question is: has anyone been successful in caching frequently used images so that they don't need to be resampled each time? Or any other brainspark suggestions to lower the processing time (besides buying a new server, though that is an option)?
This currently runs on a dual P3 800 IBM eServer with 1.5 GB of RAM, Fedora Core 6, and Perl v5.8.8.
I don't mind posting the code, but it's long, so I figured I'd post it only if needed...
Thanks!
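For what it's worth, here is a minimal sketch of the caching idea asked about above. It assumes JPEG images and uses PDF::API2's image_jpeg importer (appropriate for the PDF::API2 versions shipped with Perl 5.8-era systems); the record list and file names are made-up placeholders. The key point is that importing an image file creates one embedded image object in the PDF, and that same object can be placed on many pages, so each file is read and encoded only once per document instead of once per record:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use PDF::API2;

# Hypothetical records -- substitute your own variable data source.
my @records = (
    { name => 'Alice', photo => 'logo.jpg' },
    { name => 'Bob',   photo => 'logo.jpg' },
);

my $pdf = PDF::API2->new();

# Cache: file path => imported image object. Each file is imported
# (read from disk and encoded into the PDF) at most once.
my %img_cache;

for my $rec (@records) {
    my $page = $pdf->page();
    my $gfx  = $page->gfx();

    # ||= keeps this 5.8-compatible (//= needs Perl 5.10+).
    my $img = $img_cache{ $rec->{photo} }
          ||= $pdf->image_jpeg( $rec->{photo} );

    # Placing a cached image object just adds a reference to the
    # already-embedded data; the pixels are stored only once.
    $gfx->image( $img, 72, 500, $img->width, $img->height );
}

$pdf->saveas('output.pdf');
```

If the same images repeat across every record, this should also shrink the output file dramatically, since the PDF carries one copy of each image rather than thousands.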