Re: Mechanize Firefox text Method

by afoken (Parson)
on May 04, 2013 at 19:17 UTC


in reply to Mechanize Firefox text Method

As of release 19 Firefox has a built-in PDF viewer

Technically, Firefox uses a lot of JavaScript to convert the PDF document into a similar-looking HTML document, which is then rendered by Firefox.

when I use the print $mech->content_type; method it returns the value "text/HTML" for the PDF document

That is a consequence of converting the PDF document to an HTML document.

If you want pre-19 behaviour, disable the PDF converter in Firefox.
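
A minimal sketch of that: assuming the pdfjs.disabled preference (which controls the built-in viewer in Firefox of that era), you can flip it in about:config, or put it in the profile's user.js:

    // in the Firefox profile's user.js (or toggle it in about:config):
    user_pref("pdfjs.disabled", true);   // hand PDFs to an external viewer again

After that, Mechanize Firefox should see the original PDF content type instead of the converted HTML.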

Alexander

--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)


Re^2: Mechanize Firefox text Method
by halweitz (Novice) on May 05, 2013 at 00:45 UTC

    Thanks for the reply. Actually, I want the post-release-19 behavior, because it allows the ->text method to return the text of the PDF; it just does not return all of the text. Therein lies my problem. I tried to set the viewer to Adobe Reader, but in that case I lose script control of the document.

      PDF does not always contain text. I've seen lots of PDF files that were composed of images (scanned texts, no OCR involved). So getting no text, or much less text than expected, is not necessarily a problem in your code.

      PDF is a "postscript print job on steroides". PDF is basically postscript, with lots of addons that aren't really relevant for your problem. Postscript describes how to print a page. Most times, it works roughly in reading order, but neither postscript nor PDF have a problem with a print job that first emits all "A"s, then all "B"s, then all "C"s, and so on. It inflates the print job, and it makes it really hard to extract the original text, and there seems to be software written for exactly this purpose.

      I think a much cleaner way is to determine the URL of the PDF file (using Mechanize), download the PDF file (using LWP or Mechanize), and process the PDF file using tools like pdftotext, roughly as sketched below.
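
      A minimal sketch of that pipeline (the URL, the link matcher, and the file names are made up for illustration; pdftotext ships with xpdf/poppler):

          #!/usr/bin/perl
          use strict;
          use warnings;
          use WWW::Mechanize;

          my $mech = WWW::Mechanize->new();
          $mech->get('http://www.example.com/reports.html');   # hypothetical page

          # Find the link to the PDF; adjust the regex to the real page.
          my $link = $mech->find_link( text_regex => qr/PDF/i )
              or die "No PDF link found\n";

          # Download the PDF to a local file ...
          $mech->get( $link->url_abs, ':content_file' => 'report.pdf' );

          # ... and let pdftotext do the extraction.
          system( 'pdftotext', 'report.pdf', 'report.txt' ) == 0
              or die "pdftotext failed: $?\n";

      This runs unattended, so it can be dropped into a cron job.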

      Note that you still need some OCR software for scanned images; pdftotext just extracts the text that is already in the PDF file.

      Update: There are several commercial OCR programs that can take PDF files (including those composed of scanned images) as input and deliver text or Word documents.
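
      If a free-software route is acceptable, the same thing can be scripted. This sketch swaps in the open-source pdftoppm and tesseract command-line tools for a commercial product; both must be installed, and the file names are made up:

          use strict;
          use warnings;

          # Render each page of the scanned PDF to a 300 dpi PNG ...
          system( 'pdftoppm', '-r', '300', '-png', 'scanned.pdf', 'page' ) == 0
              or die "pdftoppm failed: $?\n";

          # ... then OCR each page; tesseract writes page-N.txt next to it.
          for my $png ( glob 'page-*.png' ) {
              ( my $base = $png ) =~ s/\.png\z//;
              system( 'tesseract', $png, $base ) == 0
                  or warn "tesseract failed on $png: $?\n";
          }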

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

        Thanks again for your reply. Let me clarify a bit. Since I can read the documents in the browser, I know they contain only text, so OCR is not an issue. All the documents follow a similar set of templates, but the content changes for each. I have viewed hundreds of these, and any document that does not conform will be skipped.

        Your comments on downloading and then using a pdftotext tool on the local file are in line with my current thinking, as long as it can be scripted and run without intervention. Are there any other suggestions I should examine?

        Have a look at CAM::PDF. I have used it to do text searches in PDFs with hundreds of pages.
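
        A minimal sketch of that (the file name and search pattern are made up; getPageText() only works on text-based PDFs, not scans):

            #!/usr/bin/perl
            use strict;
            use warnings;
            use CAM::PDF;

            my $pdf = CAM::PDF->new('report.pdf')
                or die "Cannot open PDF: $CAM::PDF::errstr\n";

            # Walk the pages and search the extracted text.
            for my $page ( 1 .. $pdf->numPages() ) {
                my $text = $pdf->getPageText($page);
                next unless defined $text;
                print "match on page $page\n" if $text =~ /invoice/i;   # example search
            }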
