|The stupid question is the question not asked|
Greetings to all monks,
I have the following problem and request your counsel. An external vendor generates over 100 PDF files every Sun. Due to historical problems, the on-call (aka me once every 4 weeks) has to log in at ~1am every Mon and open ~30 random files to make sure they "seem" correct. Once all are "validated" they are printed and must be received by 9am containing "correct" values.
Each file is a report generated by MicroStrategy and contains a number of complex tables of retail data (wtd, , mtd, ytd, ly, etc.) I have a perl scripts that count the total number of files, compare current sizes to historical averages, etc. but many “bad” files continue to slip through to eventual Sev 2 ticketdom.
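For what it's worth, here is a minimal sketch of the kind of size-versus-history check described above, factored into a testable sub. The 40% tolerance, the idea of keying averages by report basename, and the data structures are all my assumptions, not the poster's actual scripts:

```perl
#!/usr/bin/perl
# Hypothetical sketch: flag files whose size strays too far from a
# per-report historical average. The threshold and the shape of the
# averages table are assumptions for illustration only.
use strict;
use warnings;

# $avg:   hashref, report name => historical average size in bytes
# $sizes: hashref, report name => size of this week's file in bytes
# $tol:   fractional tolerance, e.g. 0.40 for +/- 40%
# Returns the (sorted) names that are missing a history entry or
# deviate beyond the tolerance.
sub find_outliers {
    my ($avg, $sizes, $tol) = @_;
    my @bad;
    for my $name (sort keys %$sizes) {
        my $expected = $avg->{$name}
            or do { push @bad, $name; next };   # no history => suspect
        my $delta = abs($sizes->{$name} - $expected) / $expected;
        push @bad, $name if $delta > $tol;
    }
    return @bad;
}
```

In the real cron job this would be fed with something like `my %sizes = map { $_ => -s $_ } glob '*.pdf';`, but size alone clearly isn't catching everything, per the Sev 2 tickets.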
Based on historical posts here, it looks like converting a PDF with complex tables to either HTML or TXT is not pretty and not advisable (side note: those posts are from 2002-2004). To this novice, a good solution would be to parse the PDF file directly. Hence my questions:
1) Are there "open source" methods to run regexps on PDF files?
2) If not, do current PDF to TXT/HTML converters handle complex tables better?
3) If not, would converting PDF to DOC and then using DOC parsers be a practical and advisable solution?
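On question 1, one avenue I have been poking at: CAM::PDF (from CPAN) can extract a page's text, which can then be regexp'd directly in Perl. Extracted table text tends to come out as loosely whitespace-separated runs, so the patterns have to match loosely. The headers and numeric pattern below are invented examples, not the real report layout:

```perl
#!/usr/bin/perl
# Hypothetical sketch for question 1: pull page text out of a PDF and
# sanity-check it with regexps. The required headers ("wtd"/"ytd") and
# the numeric-cell pattern are placeholders for the real report checks.
use strict;
use warnings;

# Returns true if extracted page text looks like a plausible report
# page: expected column headers present, plus at least one numeric
# cell such as 1,234.56.
sub page_looks_sane {
    my ($text) = @_;
    return 0 unless $text =~ /\bwtd\b/i && $text =~ /\bytd\b/i;
    return 0 unless $text =~ /\d[\d,]*\.\d{2}/;
    return 1;
}

# The extraction loop itself (needs CAM::PDF installed, so left as a
# comment here):
#
# use CAM::PDF;
# my $pdf = CAM::PDF->new('report.pdf') or die "$CAM::PDF::errstr\n";
# for my $p (1 .. $pdf->numPages) {
#     warn "page $p suspect\n" unless page_looks_sane($pdf->getPageText($p));
# }
```

No idea yet how well `getPageText` copes with MicroStrategy's table layout, which is really the heart of the question.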