Re^2: Web::Magic 0.005

by tobyink (Canon)
on Jan 12, 2012 at 17:45 UTC


in reply to Re: Web::Magic 0.005
in thread Web::Magic 0.005

App::scrape is more limited - it just uses CSS selectors to build up a Perl data structure from an HTML page. Handy, yes, but Web::Magic does much more than that.

Can App::scrape handle YAML seamlessly?

use 5.010;
use Web::Magic -sub => 'web';
say web('http://www.cpantesters.org/distro/W/Web-Magic.yaml')->[0]{guid};

Or feeds?

use 5.010;
use Web::Magic -sub => 'web';
say $_->title foreach web('http://www.w3.org/News/atom.xml')->entries;

Or for that matter JSON, RDF, arbitrary XML, etc?
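
For JSON it's the same dereference trick as the YAML example above. A minimal sketch (the endpoint here is only illustrative, not a real API):

use 5.010;
use Web::Magic -sub => 'web';

# A JSON response dereferences straight into a Perl data structure,
# exactly like the YAML example above.
say web('http://api.example.com/widgets/1.json')->{widget}{name};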

And how about POST requests?

use 5.010;
use Web::Magic -sub => 'web';

# Paste to paste2.org, and say the URL it was pasted to
say web('http://paste2.org/new-paste')
    ->POST({
        code        => 'say "Hello world";',
        lang        => 'perl',
        description => 'Perl Hello World',
        parent      => 0,
        submit      => 'Submit',
    })
    ->Content_Type('application/x-www-form-urlencoded')
    ->header('Location');

Replies are listed 'Best First'.
Re^3: Web::Magic 0.005
by Anonymous Monk on Jan 13, 2012 at 08:39 UTC

    **thread bump**

    App::scrape is more limited - it just uses CSS selectors to build up a Perl data structure from an HTML page.

    It uses CSS and XPath, but yes, it is slightly simpler.

    Can App::scrape handle YAML seamlessly?

    No, but I'm sure it could, in about five lines :) Tree::XPathEngine, it's on CPAN :)
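
    For what it's worth, plain YAML fetching really is only a few lines. A rough sketch, assuming LWP::Simple and YAML::XS, reusing the cpantesters URL from above:

    use 5.010;
    use LWP::Simple qw(get);
    use YAML::XS qw(Load);

    # Fetch the YAML document and decode it into a Perl structure by hand.
    my $data = Load(get('http://www.cpantesters.org/distro/W/Web-Magic.yaml'));
    say $data->[0]{guid};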

    Or for that matter JSON, RDF, arbitrary XML, etc?

    It does support RDF.

    And how about POST requests?

    Sure, it's right there in the SYNOPSIS: use LWP::Simple qw(get); and you can just as easily write use LWP::Simple qw($ua); and call $ua->post(...)
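
    Roughly like this, say (a sketch that reuses the paste2.org form fields from the example above):

    use LWP::Simple qw($ua);

    # $ua is the LWP::UserAgent instance LWP::Simple uses internally,
    # so an ordinary form POST is one method call away.
    my $response = $ua->post('http://paste2.org/new-paste', {
        code        => 'say "Hello world";',
        lang        => 'perl',
        description => 'Perl Hello World',
        parent      => 0,
        submit      => 'Submit',
    });
    print $response->header('Location'), "\n";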

    I recognize that it does a lot more, and a large number of the prereqs are your modules -- that is a lot of work -- but why?

    Web::Magic won't help me "fake" a proper ua_string like WWW::Mechanize, and it has all those exceptions, but no cookie jar?

    Magic? Dwimmery? Awesomeness? -- yes, I like Kung Fu Panda too :)

    HTML::Query, Web::Query, Web::Scraper, Web::Magic ... a lot of the same kind of work, which horse to choose?

    Sell me a horse?

    I'm sure you have a philosophy, reasons for doing things your way, a big and little picture.... I'd love to know what it is :) I just don't have a grasp of the thing.

    Maybe it's because I'm not a "24" fan? What can I say, Kiefer Sutherland grates on me worse than David Caruso :)

    Can you enlighten me?

      [App::scrape] does support RDF.

      No, it does not. Accepting XML does not count as supporting RDF. In general, RDF cannot be effectively processed with XML tools.
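
      To illustrate (a rough sketch using XML::LibXML): the two fragments below encode exactly the same triple, yet an element-oriented XPath only finds it in one of them.

      use 5.010;
      use XML::LibXML;
      use XML::LibXML::XPathContext;

      my $ns = 'xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" '
             . 'xmlns:foaf="http://xmlns.com/foaf/0.1/"';

      # The same single triple -- ex:alice foaf:name "Alice" -- written two
      # equally valid ways in RDF/XML: as a property element, and as a
      # property attribute.
      my %serializations = (
          'property element'   =>
              qq{<rdf:RDF $ns><rdf:Description rdf:about="http://example.org/alice">}
            . qq{<foaf:name>Alice</foaf:name></rdf:Description></rdf:RDF>},
          'property attribute' =>
              qq{<rdf:RDF $ns>}
            . qq{<rdf:Description rdf:about="http://example.org/alice" foaf:name="Alice"/></rdf:RDF>},
      );

      for my $style (sort keys %serializations) {
          my $doc = XML::LibXML->load_xml(string => $serializations{$style});
          my $xpc = XML::LibXML::XPathContext->new($doc);
          $xpc->registerNs(foaf => 'http://xmlns.com/foaf/0.1/');
          my @hits = $xpc->findnodes('//foaf:name');
          say "$style: ", scalar(@hits), " match(es) for //foaf:name";
      }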

      Web::Magic won't help me "fake" a proper ua_string like WWW::Mechanize, and it has all those exceptions, but no cookie jar?

      Web::Magic lets you specify any user agent string you like. The POD for the "set_request_header" method shows two examples of how to do precisely that:

      $magic->set_request_header('User-Agent', 'MyBot/0.1');
      $magic->User_Agent('MyBot/0.1');  # same as above

      And if you have a cookie jar you'd like to use:

      $magic->user_agent->cookie_jar($cookies);
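
      For instance (a sketch assuming HTTP::Cookies; the cookie file name is arbitrary):

      use HTTP::Cookies;
      use Web::Magic;

      # Attach a persistent cookie jar to the user agent behind a Web::Magic object.
      my $cookies = HTTP::Cookies->new(file => 'cookies.txt', autosave => 1);
      my $magic   = Web::Magic->new('http://example.com/login');
      $magic->user_agent->cookie_jar($cookies);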

      HTML::Query, Web::Query, Web::Scraper, Web::Magic ... a lot of the same kind of work, which horse to choose?

      Sell me a horse?

      Selecting stuff via CSS selectors is only a very small part of what Web::Magic does. (I almost regret using that feature in my first example.) Web::Magic aims to be the swiss army knife of HTTP-addressable resources. Whether it's a classic web page, a RESTful API, an Atom feed, or a WebDAV fileshare, Web::Magic can probably make dealing with it easier.

      Let's suppose you have a RESTful API which serves up XML and JSON, depending on the request's HTTP Accept header. Web::Magic notices how you're trying to access the data, and does what you mean...

      Web::Magic->new('http://example.com/new-document')
          ->POST($xmldom)
          ->{entry}{id};

      It's smart enough to figure out the HTTP headers you want:

      POST /new-document HTTP/1.1
      Host: example.com
      Content-Type: application/xml
      Accept: application/json, text/x-yaml
      
      <xmldoc>...</xmldoc>
      

      But if you'd called it like this:

      my @entries = Web::Magic->new('http://example.com/new-document')
          ->POST({ title => "Hello", foo => 1 })
          ->entries;
      print $entries[0]->id;

      Then the request would be more like:

      POST /new-document HTTP/1.1
      Host: example.com
      Content-Type: application/x-www-form-urlencoded
      Accept: application/atom+xml, application/rss+xml
      
      title=Hello&foo=1
      

        Web::Magic lets you specify any user agent string you like.

        Um, yes, that is the problem: it doesn't help me concoct the correct string the way Mechanize does; it's bare-bones/vanilla LWP::UserAgent. With Mechanize I use $ua->agent_alias('Mac Mozilla'); or "Windows Mozilla", and I don't have to go digging for the full string.
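
        For comparison, the Mechanize one-liner in question (a sketch; 'Windows Mozilla' is one of the built-in aliases):

        use WWW::Mechanize;

        # Mechanize ships a short list of ready-made browser identities,
        # so there's no need to look up a full User-Agent string.
        my $mech = WWW::Mechanize->new;
        $mech->agent_alias('Windows Mozilla');
        $mech->get('http://example.com/');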

         

        A hat for your horse, sir: WWW::UserAgent::Random?
