PerlMonks  

web::scraper using an xpath

by ag4ve (Monk)
on Dec 10, 2010 at 08:37 UTC ( #876401=perlquestion )
ag4ve has asked for the wisdom of the Perl Monks concerning the following question:

I'm trying to get data out of a web page. I figured I'd use Web::Scraper (it looked easy enough), so I wrote this:

```perl
my $pagedata = scraper {
    process '//*/table[@class="someclass"]', 'table[]' => scraper {
        process '//tr/td[1]', 'name' => 'TEXT';
        process '//tr/td[2]', 'attr' => 'TEXT';
    };
};

.......

my $res = $pagedata->scrape( $content )
    or die "Can't define content to parser $!";
print Dumper( $res );
```

and Data::Dumper says I'm only getting the first row of the table. The table looks something like this, and I want all of it:

```html
<table class="someclass" style="width:508px;" id="Any_20">
  <tbody>
    <tr>
      <td>name</td>
      <td>attribute</td>
      <td>name2</td>
      <td>attribute2</td>
      <td>possible name3</td>
      <td>possible attribute3</td>
      <td> ....
    </tr><tr>
      <td>... etc
```

So I guess the question is: how do I set up the //tr/td XPath strings so that I get multiple rows? Or, if that's not possible, how do I set up the scraper function so that it loops to the next row? Or what other module might I try to accomplish this?

UPDATE

OK, so thanks to the anonymous poster, here's what I've come up with, and it works damn nice:

```perl
#!/usr/bin/perl
use strict;
use warnings;

use LWP::UserAgent;
use LWP::Simple;
use Web::Scraper;
use Data::Dumper::Simple;

my( $infile ) = $ARGV[ 0 ] =~ m/^([\ A-Z0-9_.-]+)$/ig;

my $pagedata = scraper {
    process '//*/table[@class="someclass"]//tr', 'table[]' => scraper {
        my $count = 1;
        process '//tr/td[' . $count++ . ']', 'name' => 'TEXT';
        process '//tr/td[' . $count++ . ']', 'attr' => 'TEXT';
    };
};

open( FILE, "< $infile" ) or die "Can't open $infile: $!";
my $content = do { local $/; <FILE> };

my $res = $pagedata->scrape( $content )
    or die "Can't define content to parser $!";
print Dumper( $res );
```

Thanks all, and sorry about not posting the full code, just the snippet - I was trying not to make the post too long.

Re: web::scraper using an xpath
by Corion (Pope) on Dec 10, 2010 at 08:46 UTC

    You can't go "pairwise" over a list with XPath.

    You will need to capture all TD contents and then do the reconstruction of name+attribute in Perl if the table is really structured (and classed) the way you show.
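    The pairing step Corion describes can be sketched in plain core Perl. The `@cells` array below stands in for the flat list that a single `process '//td', 'cells[]' => 'TEXT';` rule would hand back; the values are made up for illustration, not taken from the real page:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for the flat list of TD texts the scraper would return
# (illustrative data only, not from the OP's actual page):
my @cells = ( 'name', 'attribute', 'name2', 'attribute2' );

# Walk the list two at a time, rebuilding the name/attr pairs in Perl,
# since XPath alone can't group siblings pairwise:
my @rows;
while ( my ( $name, $attr ) = splice @cells, 0, 2 ) {
    push @rows, { name => $name, attr => $attr };
}

print "$_->{name} => $_->{attr}\n" for @rows;
```

    The `splice @cells, 0, 2` idiom consumes the list destructively; copy the array first if you still need the flat version afterwards.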

Re: web::scraper using an xpath
by jethro (Monsignor) on Dec 10, 2010 at 09:54 UTC

    I know nothing about XPath or Web::Scraper, but you could try adding tbody to your first process call, i.e. process '//*/table[@class="someclass"]/tbody' !??

      Changing it to '//*/table[@class="someclass"]//tr' should work as intended
        Are you sure? In that case I would guess that he must also remove the 'tr' from his further process statements.
Re: web::scraper using an xpath (searches)
by Anonymous Monk on Dec 10, 2010 at 15:11 UTC
      Um, Web::Scraper already does all that ...

Node Type: perlquestion [id://876401]
Approved by perl_lover