Re: The State of Web spidering in Perl
by Anonymous Monk on Sep 23, 2013 at 00:35 UTC
Scrappy comments: Looks interesting, but the docs are scattered and confusing, and development seems to have stagnated.
... spidering ... scraping ...
You're confusing yourself there a little: spidering (anything goes) is a completely different ballgame from scraping (this one particular site).
WWW::Mechanize::Firefox adds JS support
That's it for scraping: the bare essentials and the state of the art. All the others add a little sugar and some baggage.
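To make the WWW::Mechanize::Firefox point concrete: it drives a real running Firefox (via the MozRepl extension), so JavaScript has executed before you read the page. A minimal sketch, assuming Firefox is up with MozRepl listening and using a placeholder URL:

```perl
use strict;
use warnings;
use WWW::Mechanize::Firefox;

# Connects to an already-running Firefox through MozRepl;
# pages are fully rendered (JS included) before we read them.
my $mech = WWW::Mechanize::Firefox->new();

$mech->get('http://example.com/');    # placeholder URL

# Same familiar WWW::Mechanize-style interface:
print $mech->title, "\n";                  # <title> after JS has run
print $mech->content( format => 'html' ); # the rendered DOM as HTML
```

Everything else in the stack stays the same; you just get the post-JavaScript DOM instead of the raw response body.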
Web::Magic adds maximum magic with maximum dependencies
Web::Query adds maximum sugar (jQuery-style) with minimal dependencies, but has some odd bits
Mozilla::Mechanize -- good luck building that :) it ain't easy, no it ain't easy
Gtk3::WebKit -- it's a browser, you might scrape with it somehow (probably not)
Gtk2::WebKit -- it's a browser, you might scrape with it somehow (probably not)
Wx::HtmlWindow -- it's an (old/weak/limited) browser; you can scrape with it somehow, but it's clumsy and limited, not good for scraping
Wx::WebView -- it's a (newer/more modern/CSS+JS) browser, even more useless for scraping than Wx::HtmlWindow ... looks nice, but like all these browsers, not designed for scraping, although it could be
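As for the Web::Query sugar mentioned above, a tiny sketch of the jQuery-ish feel, run against a literal HTML fragment so no network is needed (the markup here is made up for illustration):

```perl
use strict;
use warnings;
use Web::Query;    # exports wq(), which accepts a URL, HTML string, etc.

my $html = '<ul><li class="x">foo</li><li>bar</li></ul>';

# wq() builds the query object; find() takes a CSS selector
my $q = wq($html);
print $q->find('li.x')->text, "\n";    # foo

# each() iterates matched nodes, jQuery style
$q->find('li')->each(sub {
    my ($i, $elem) = @_;
    print "$i: ", $elem->text, "\n";
});
```

That's the "maximum sugar" in practice: CSS selectors and chained calls, much like writing jQuery against the page.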