
Re: Retrieving contents of web pages

by OeufMayo (Curate) on Aug 29, 2001 at 04:43 UTC

in reply to Retrieving contents of web pages

If you want to avoid the complexity of LWP and other HTML parsing modules, you may want to look at WWW::Chat, which is one of the easiest ways to navigate through websites with Perl. This module generates LWP + HTML::Form scripts automatically via the webchatpp program.
There are still some features missing from this module, but it usually does a fair job. And more features may be added soon!

A simple webchatpp script doing what you want may look like this (the URL is a placeholder; substitute the page that holds the login form):

    # placeholder URL; point this at the site's login page
    GET http://www.example.com/login
    EXPECT OK
    FORM login
    F login=OeufMayo
    F password=s33kret
    CLICK
    EXPECT OK
    FOLLOW /Interesting link/
    EXPECT OK
    print join("\n", map { "@$_[1]\n\tURL: @$_[0]" } @links);

Pretty simple, isn't it?
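Behind the scenes, webchatpp expands those directives into ordinary LWP + HTML::Form code. As a rough hand-written sketch (not the exact output of webchatpp) of what the login steps above translate to, assuming the same placeholder URL and the form and field names from the example:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::Form;

    my $ua = LWP::UserAgent->new;

    # GET + EXPECT OK: fetch the login page and insist on a successful response
    my $resp = $ua->get('http://www.example.com/login');   # placeholder URL
    die "GET failed: ", $resp->status_line unless $resp->is_success;

    # FORM login + F ...: locate the form named 'login' and fill in the fields
    my ($form) = grep { ($_->attr('name') || '') eq 'login' }
                 HTML::Form->parse($resp->content, $resp->base);
    die "no 'login' form found" unless $form;
    $form->value(login    => 'OeufMayo');
    $form->value(password => 's33kret');

    # CLICK + EXPECT OK: submit the form and check the response again
    $resp = $ua->request($form->click);
    die "login failed: ", $resp->status_line unless $resp->is_success;

Which is exactly the kind of boilerplate webchatpp spares you from writing by hand.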

my $OeufMayo = new PerlMonger::Paris({http => ''});

Re: Re: Retrieving contents of web pages
by RayRay459 (Pilgrim) on Aug 29, 2001 at 20:05 UTC
    OeufMayo, thank you very much for your sample code. That looks like it may work. I'll look into it deeper and probably post code if I get it to work. Thanks again.
