
Re: Re: Scraping HTML: orthodoxy and reality

by BrowserUk (Pope)
on Jul 09, 2003 at 03:35 UTC

in reply to Re: Scraping HTML: orthodoxy and reality
in thread Scraping HTML: orthodoxy and reality

You could probably automate the C&P from your favorite browser (under Win32 at least) or use one of the console browsers (Lynx etc.) under *nix.

The question is, what would you have achieved? Not only would you have used a parser (the one built into the browser), but you would also have used its rendering engine, spawned a new process, and gone through some form of IPC, whether it's a pipe or the clipboard. And you would still need to apply a regex to the result.

If you're going to use a parser, then you might as well use one of the many available to you via CPAN and avoid all that additional overhead. :)
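The CPAN route suggested here can be sketched with HTML::TableExtract (the module mentioned elsewhere in this thread). A minimal sketch, assuming the module is installed; the table headers and the inline HTML snippet are illustrative stand-ins for the real page:

```perl
use strict;
use warnings;
use HTML::TableExtract;

# In real use you would fetch the page first, e.g.
#   use LWP::Simple qw(get);
#   my $html = get($url);
# Here a literal snippet stands in for the fetched page.
my $html = <<'HTML';
<table>
  <tr><th>Item</th><th>Value</th></tr>
  <tr><td>widgets</td><td>42</td></tr>
</table>
HTML

# Extract only the table whose header row matches; no regexes
# against raw markup, and no browser process to spawn.
my $te = HTML::TableExtract->new( headers => [ 'Item', 'Value' ] );
$te->parse($html);

my @rows;
for my $ts ( $te->tables ) {
    push @rows, $_ for $ts->rows;
}
print join( '=', @{ $rows[0] } ), "\n";
```

The `headers` constraint is what makes this robust: the designers can reshuffle the page around the table and the extraction still works, which is exactly what a hand-rolled regex over the markup would not survive.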

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller


Replies are listed 'Best First'.
Re: Re: Re: Scraping HTML: orthodoxy and reality
by ff (Hermit) on Jul 09, 2003 at 05:38 UTC
    For sure, this approach is expensive CPU-wise, etc., but if I need a solution that works right away, then "module fetches/renders HTML into text", combined with regex processing that at least I know how to do, IS a solution. Sure, per RBFuller, "... if the solution is not beautiful, I know it is wrong", but if those cycles won't be used for anything else, who cares? This bear of little brain would have his program done.

    So, assuming that efficiency doesn't matter, I'm still fishing for something like building the $html object via an LWP 'get' as above and then turning it into text that I can examine with regexen. (However, since this is turning a golden object into lead, I'll do some more digging as you suggest, like re-reading this thread's Data::Dumper/HTML::TableExtract example! :-)
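    The "render to text, then regex" approach being fished for here can be sketched with HTML::TreeBuilder and HTML::FormatText from CPAN. A minimal sketch, assuming both modules are installed; the inline HTML and the "Widgets" label are illustrative stand-ins for whatever the LWP 'get' actually returns:

```perl
use strict;
use warnings;
use HTML::TreeBuilder;
use HTML::FormatText;

# In real use $html would come from an LWP 'get'; this snippet
# stands in for the fetched page.
my $html = '<html><body><h1>Totals</h1><p>Widgets: 42</p></body></html>';

# Parse the markup, then render it to plain text, roughly what a
# console browser like Lynx would show.
my $tree = HTML::TreeBuilder->new_from_content($html);
my $text = HTML::FormatText->new( leftmargin => 0, rightmargin => 72 )
                           ->format($tree);
$tree->delete;    # free the parse tree

# Now the "golden object" is plain text and ordinary regexes apply.
my ($count) = $text =~ /Widgets:\s*(\d+)/;
print "count=$count\n";
```

    This keeps everything in-process, so it avoids the spawned browser and the IPC that the parent post objects to, while still giving the regex-friendly text output being asked for.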

      I am just starting my studies with Perl, and of course, with modules I have less experience.

      But if you could print an HTML file to a plain-text printer, the result sent to a file would be just what you saw on the screen, right?

      Then you would treat it like text...

Re: Re: Re: Scraping HTML: orthodoxy and reality
by markexpjp (Novice) on Jul 09, 2003 at 13:27 UTC
    Well, /s?he/ had the right idea. One wonders why HP can't just produce simpler HTML, or even provide a port for text output... OK, maybe that's a little silly, but all of this to get a few numbers.
    <grumble>Isn't the real problem here the obsession that some designer at HP has with producing beautiful output, good HTML practice be damned? Why do Dreamweaver jockeys have to make my life hard!!! AHHHHH!</grumble>

    20?"C64 RULES ";
