
Re: Easiest Way To Cut Info from Webpages

by jeffa (Bishop)
on Jun 17, 2004 at 19:10 UTC ( #367731=note: print w/replies, xml ) Need Help??

in reply to Easiest Way To Cut Info from Webpages

Have a look at HTML::Parser or HTML::TokeParser. If you just want to get the job done, chances are good that what you want is unique enough to pull out with a regular expression. I don't normally recommend regexes for this kind of job, but they do get it done. At any rate, you will need to fetch the web page before you can parse it, and for that I recommend WWW::Mechanize.

In conclusion: parsing web pages in general is a generic problem, but parsing a specific web page is not, so you probably will not find an existing script for the site you are trying to scrape -- and if you do find one, chances are good it won't work for you. That is why you generally have to start from scratch and inspect the HTML you are trying to parse with your own two eyes. And yes, as soon as the webmaster changes the HTML, your script will probably break. :)
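A minimal sketch of that fetch-then-parse pipeline, using WWW::Mechanize to grab the page and HTML::TokeParser to walk the tags. The URL and the tags being extracted (the title and the links) are only placeholders -- swap in the page and markup you are actually after:

<code>
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
use HTML::TokeParser;

# placeholder URL -- substitute the page you actually want to scrape
my $url = 'http://www.example.com/';

my $mech = WWW::Mechanize->new();
$mech->get( $url );
die 'fetch failed: ', $mech->status unless $mech->success;

# hand the fetched HTML to the token parser
my $parser = HTML::TokeParser->new( \$mech->content );

# walk the document, stopping at each <title> or <a> start tag
while ( my $tag = $parser->get_tag( 'title', 'a' ) ) {
    if ( $tag->[0] eq 'title' ) {
        print 'Title: ', $parser->get_trimmed_text( '/title' ), "\n";
    }
    elsif ( defined $tag->[1]{href} ) {
        print "Link:  $tag->[1]{href}\n";
    }
}
</code>

This is where the "inspect the HTML with your own two eyes" part comes in: you have to look at the real page source to know which tags (or which regex) will isolate the piece you want.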


(the triplet paradiddle with high-hat)
