Re^3: Mechanize Firefox text Method
by afoken (Abbot) on May 05, 2013 at 05:20 UTC
PDF does not always contain text. I've seen lots of PDF files composed entirely of images (scanned pages, no OCR involved). So getting no text, or much less text than expected, is not necessarily a problem in your code.
PDF is a "postscript print job on steroides". PDF is basically postscript, with lots of addons that aren't really relevant for your problem. Postscript describes how to print a page. Most times, it works roughly in reading order, but neither postscript nor PDF have a problem with a print job that first emits all "A"s, then all "B"s, then all "C"s, and so on. It inflates the print job, and it makes it really hard to extract the original text, and there seems to be software written for exactly this purpose.
I think a much cleaner way is to determine the URL of the PDF file (using Mechanize), download the PDF file (using LWP or Mechanize), and process the PDF file with a tool like pdftotext.
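A minimal sketch of that workflow with WWW::Mechanize might look like this. The page URL, the link pattern, and the output file names are made up for illustration; adjust them for the actual site. pdftotext is a separate command-line tool (shipped with poppler-utils or xpdf), not a Perl module.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new();
$mech->get('http://example.com/reports/');    # hypothetical page

# Find the first link whose URL ends in .pdf and download it.
my $link = $mech->find_link( url_regex => qr/\.pdf$/i )
    or die "No PDF link found on the page\n";
$mech->get( $link->url_abs, ':content_file' => 'report.pdf' );

# Extract the embedded text layer with pdftotext.
system( 'pdftotext', 'report.pdf', 'report.txt' ) == 0
    or die "pdftotext failed: $?\n";
```

If report.txt comes out (nearly) empty, the PDF is probably a pure scan with no text layer, which brings us back to the OCR caveat below.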
Note that you still need some OCR software for scanned images; pdftotext only extracts the text layer that is already embedded in the PDF file.
Update: There are several commercial OCR programs that accept PDF files (including those composed of scanned images) as input and produce plain text or Word documents.
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)