in reply to Scraping HTML: orthodoxy and reality

Gosh, I wish I even knew the difference between those HTML:: modules and how to put them to work! Given the examples in this thread, soon I'll have more than a hammer to do my scraping. :-)

In the meantime: when I go to the web page from the link in my IE browser, do a Ctrl-A and Ctrl-C, and then paste the text into a Notepad window, this particular output is quite comprehensible to my HTML-untrained eye (vs. the raw HTML), e.g.

impse400 (I3C) / hp color LaserJet 4600 Information <snip much miscellaneous info> For highest print quality always use genuine Hewlett-Packard supplies. + BLACK CARTRIDGE HP Part Number: HP C9720A 73% Estimated Pages Remaining: 11025 (Based on historical black page coverage of 2%) Low Reached: NO Serial Number: 35860 Pages printed with this supply: 4078 TRANSFER KIT HP Part Number: HP C9724A 87% Estimated Pages Remaining: 103856 Etc.

With my regex sledgehammer it would be straightforward to process this data. Oftentimes, when I look at the "pure text" version of a web page, there aren't nearly as many nice hooks for sorting things out. But THIS case has them, and my question is: might there be a tool that emulates this select/copy/paste action on a web page, to automate the production of such text for follow-on regex processing?

Replies are listed 'Best First'.
Re: Re: Scraping HTML: orthodoxy and reality
by BrowserUk (Pope) on Jul 09, 2003 at 03:35 UTC

    You could probably automate the C&P from your favorite browser (under Win32 at least) or use one of the console browsers (Lynx etc.) under *nix.

    The question is: what would you have achieved? Not only would you have used a parser (the one built into the browser), but you would also have used its rendering engine, spawned a new process, and gone through some form of IPC, whether it's a pipe or the clipboard. And you would still need to apply a regex to the result.

    If you're going to use a parser, then you might as well use one of the many available to you via CPAN and avoid all that additional overhead. :)

    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller

      For sure, this approach is expensive CPU-wise, etc., but if I need a solution that works right away, then "module fetches/renders HTML into text", combined with regex processing that at least I know how to do, IS a solution. Sure, per RBFuller, "... if the solution is not beautiful, I know it is wrong", but if those cycles won't be used for anything else, who cares? This bear of little brain would have his program done.

      So, assuming that efficiency doesn't matter, I'm still fishing for something like building the $html object via an LWP 'get' as above and then turning it into text that I can examine with regexen. (However, since this is turning a golden object into lead, I'll do some more digging as you suggest, like re-reading this thread's Data::Dumper/HTML::TableExtract example! :-)
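      For what it's worth, the "fetch, render to text, then regex" pipeline can be had from CPAN directly: HTML::TreeBuilder parses the page and HTML::FormatText renders the tree to plain text, much like a browser's select/copy/paste. A minimal sketch (the inline $html sample and the regex are mine, standing in for the real printer page you'd fetch with LWP::Simple's get):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::TreeBuilder;
use HTML::FormatText;

# Inline sample standing in for the printer's supplies page;
# in real use, $html would come from LWP::Simple's get($url).
my $html = <<'HTML';
<html><body>
<h1>hp color LaserJet 4600 Information</h1>
<p><b>BLACK CARTRIDGE</b> HP Part Number: HP C9720A</p>
<p>Estimated Pages Remaining: 11025</p>
</body></html>
HTML

# Parse the HTML into a tree, then render it to plain text.
my $tree      = HTML::TreeBuilder->new_from_content($html);
my $formatter = HTML::FormatText->new(leftmargin => 0, rightmargin => 78);
my $text      = $formatter->format($tree);
$tree->delete;    # free the parse tree

# Now the familiar regex sledgehammer works on plain text:
my ($pages) = $text =~ /Estimated Pages Remaining:\s*(\d+)/;
print "Pages remaining: $pages\n";
```

      So you still get to keep the regex step you already know, just applied to rendered text instead of raw markup, and without spawning a browser or going through the clipboard.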

        I am just starting my studies with Perl, and of course, with modules I have less experience.

        But if you could print an HTML file to a plain-text printer, the result sent to a file would be just what you saw on the screen, right?

        Then you would treat it like text...

      Well, /s?he/ had the right idea. One wonders why HP can't just produce simpler HTML, or even provide a port for text output... OK, maybe that's a little silly, but all of this to get a few numbers.
      <grumble>Isn't the real problem here the obsession that some designer at HP has with producing beautiful output, good HTML practice be damned? Why do Dreamweaver jockeys have to make my life hard!!! AHHHHH!</grumble>

      20?"C64 RULES ";