Re: HTML stripper in WWW::Mechanize doesn't seem to work

by johnnywang (Priest)
on Jul 31, 2005 at 19:58 UTC


in reply to HTML stripper in WWW::Mechanize doesn't seem to work

I can't find your "content(format=>'text')" call in the documentation. You should probably use a different parser, such as HTML::TokeParser:
use WWW::Mechanize;
use HTML::TokeParser;

my $webcrawler = WWW::Mechanize->new();
$webcrawler->get("http://www.google.com");

my $content = $webcrawler->content;
my $parser  = HTML::TokeParser->new(\$content);

# Print the trimmed text that follows each tag
while ($parser->get_tag) {
    print $parser->get_trimmed_text(), "\n";
}
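One thing to watch with this token-stream approach: calling get_trimmed_text() after every tag also prints the contents of <script> and <style> elements, which aren't really page text. A sketch of one way to skip them (untested; google.com is just the example URL from above):

use strict;
use warnings;
use WWW::Mechanize;
use HTML::TokeParser;

my $mech = WWW::Mechanize->new();
$mech->get("http://www.google.com");

my $content = $mech->content;
my $parser  = HTML::TokeParser->new(\$content);

while (my $tag = $parser->get_tag) {
    # Skip everything inside <script> and <style> blocks;
    # get_tag("/script") fast-forwards to the closing tag
    if ($tag->[0] eq 'script' or $tag->[0] eq 'style') {
        $parser->get_tag("/$tag->[0]");
        next;
    }
    my $text = $parser->get_trimmed_text();
    print "$text\n" if length $text;
}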

Replies are listed 'Best First'.
Re^2: HTML stripper in WWW::Mechanize doesn't seem to work
by Nkuvu (Priest) on Jul 31, 2005 at 21:14 UTC
    I installed WWW::Mechanize (I may even end up using it someday), and looked through the docs:
    $mech->content(...)
        Returns the content that the mech uses internally for the last
        page fetched. Ordinarily this is the same as
        $mech->response()->content(), but this may differ for HTML
        documents if "update_html" is overloaded (in which case the value
        passed to the base-class implementation of same will be
        returned), and/or extra named arguments are passed to content():

        $mech->content( format => "text" )
            Returns a text-only version of the page, with all HTML markup
            stripped. This feature requires HTML::TreeBuilder to be
            installed, or a fatal error will be thrown.
    So it looks like the call is correct.
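    For reference, a minimal self-contained test of the documented call might look like this (untested sketch; the URL is only an example, and per the docs above HTML::TreeBuilder must be installed or a fatal error is thrown):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new();
    $mech->get("http://www.perlmonks.org/");

    # Documented to return a text-only version of the page,
    # with all HTML markup stripped
    my $text = $mech->content( format => "text" );
    print $text;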
Re^2: HTML stripper in WWW::Mechanize doesn't seem to work
by GrandFather (Saint) on Jul 31, 2005 at 21:51 UTC
      Right, I have made the necessary changes and I think the code works fine now. The problem is that I don't think the content( format => "text" ) function in the WWW::Mechanize module (http://search.cpan.org/dist/WWW-Mechanize/lib/WWW/Mechanize.pm) works. I have used it with google and perlmonks.com and it gives me the whole content. Does anyone else have the same problem, or is it something with my code?

      Updated code:

      #!/usr/bin/perl
      use strict;
      use warnings;

      # Modules used to walk web pages: extract links, save them,
      # and strip the HTML from the page content
      use WWW::Mechanize;
      use URI;

      print "WEB CRAWLER AND HTML EXTRACTOR\n";
      print "Please input the URL of the site to be searched\n";
      print "Please use a full URL (eg. http://www.dcs.shef.ac.uk/)\n";

      # Create an instance of the web crawler
      my $webcrawler = WWW::Mechanize->new();

      my $url_name = <STDIN>;  # The user inputs the URL to be searched
      chomp $url_name;         # Drop the trailing newline before making a URI
      my $uri = URI->new($url_name);  # Process the URL and make it a URI

      # Grab the contents of the URL given by the user
      $webcrawler->get($uri);

      # Put the links that exist in the HTML of the fetched page in an array
      # (links() takes no arguments; it reports on the last page fetched)
      my @website_links = $webcrawler->links();

      # The HTML is stripped from the content and the text is stored
      # in an array of strings
      my @stripped_html;
      push @stripped_html, $webcrawler->content( format => "text" );
      print $stripped_html[0];

      exit;

      Thanks
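      If content( format => "text" ) really does hand back raw HTML on your install, one possible cause is an older WWW::Mechanize that predates the format argument. As a workaround you can strip the markup yourself with HTML::TreeBuilder, the module the feature depends on anyway. A minimal sketch (untested; the URL is only an example):

      use strict;
      use warnings;
      use WWW::Mechanize;
      use HTML::TreeBuilder;  # same module content( format => "text" ) requires

      my $mech = WWW::Mechanize->new();
      $mech->get("http://www.perlmonks.org/");

      # Build a parse tree from the fetched HTML, then flatten it to text
      my $tree = HTML::TreeBuilder->new_from_content( $mech->content );
      print $tree->as_text, "\n";

      # HTML::Element trees are self-referential, so free them explicitly
      $tree->delete;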
