PerlMonks  

Re^8: Scraping Ajax / JS pop-up

by Monk-E (Initiate)
on Feb 16, 2012 at 07:55 UTC [id://954165]


in reply to Re^7: Scraping Ajax / JS pop-up
in thread Scraping Ajax / JS pop-up

Not to beat this into the ground, but as I've stated, your 3rd suggested approach is the one I'm interested in. It is also the one I've been pursuing, if you look at my code again: WWW::Scripter along with its Ajax plugin is what my code uses... so all the JavaScript goodness available in the WWW::Mechanize::Firefox approach you're suggesting is exactly where I was stuck in the first place.
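For reference, the approach described above looks roughly like the sketch below. The URL is a hypothetical placeholder; the Ajax plugin pulls in WWW::Scripter's JavaScript plugin so that scripts on fetched pages, including their XMLHttpRequest calls, are run (how well a given page's JS runs depends on the JE backend):

```perl
use strict;
use warnings;
use WWW::Scripter;

my $w = WWW::Scripter->new;

# Load the Ajax plugin, which layers XMLHttpRequest support on top of
# the JavaScript plugin so page scripts are executed as encountered.
$w->use_plugin('Ajax');

# Hypothetical placeholder URL for a page with a JS-driven pop-up.
$w->get('http://example.com/page-with-popup');

# If the page's scripts ran successfully, the content now reflects
# anything they fetched and inserted into the DOM.
print $w->content;
```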

Please do not take offense at the term "cheating"; I am using it synonymously with your "use non-Perl X to 'figure out' what is going on" phrasing above, since my expectation from the proclaimed JavaScript support was that the module would spare the user from having to sniff HTTP with external tools. The preference for approach 3 is to minimize manually "figuring out the HTTP" behind the JS as much as possible: what is behind those calls can change as the target website changes, whereas it would all be encapsulated if the module handled it as encountered. Again, thanks for the suggestions... they may indeed be the route I need to take. And HTTP::Recorder is a pretty cool module to have handy in general.

Re^9: Scraping Ajax / JS pop-up
by Corion (Patriarch) on Feb 16, 2012 at 09:40 UTC

    In my experience, you will have to look at the HTTP requests that go over the wire. The only "hands-off" solution that works well for my cases is WWW::Mechanize::Firefox, but that should be no surprise, as I wrote it. Even with WWW::Mechanize::Firefox, if you care about efficiency or speed, you will have to look at which HTTP requests are made and which can be skipped. Also, when automating a JavaScript-heavy site, you will have to read the JavaScript to find out which functions to call instead of clicking elements on the page, so you get the results in a more convenient format.

    My reason for automating Firefox is that Firefox is a supported and interactive platform. If a website does not work with Firefox, it's the website's fault, not the fault of my program. And I can watch Firefox as it navigates through the website, which is a plus while developing the automation.

    Of course, the module needs Firefox, and Firefox needs a display. There is PhantomJS, but so far I have found its model of interaction (or lack thereof) between the browser's JavaScript and the JavaScript within the page to be insufficient.

      So, a quick update to help anyone looking for a similar solution.

      I have a working scraper bot now, which handles the info in the AJAX/JS pop-up. I had to resort to sniffing the HTTP with tools/browser plug-ins; I then mimic the HTTP POSTs that went over the wire using HTTP::Request::Common. This was the solution I was trying to avoid (as discussed above in this thread), primarily because if a bot needs to be more autonomous than mine, such as crawling, a more programmatic / self-contained solution is preferred. This is what I was trying to explain to Anonymous Monk. I tried several modules and approaches without success, but I should note, for those who want to try, that I did not exhaust every route that had potential, so more work with something like WWW::Mechanize::Firefox could still prove fruitful.

      If your scraper is specific to a stable site, or does not need to be an autonomous crawler, I would recommend just "cheating" the complexity: sniff and mimic the HTTP as described in this thread.
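      As a sketch of that sniff-and-mimic approach: the endpoint, form fields, and header below are hypothetical placeholders standing in for whatever your sniffer actually shows the page's JavaScript sending over the wire.

```perl
use strict;
use warnings;
use HTTP::Request::Common qw(POST);
use LWP::UserAgent;

# Hypothetical endpoint and form fields, mimicking a sniffed AJAX POST.
my $req = POST 'http://example.com/ajax/popup.php',
    [ item_id => 42, action => 'details' ],
    # Many AJAX endpoints check for this header before answering.
    'X-Requested-With' => 'XMLHttpRequest';

# Send it just as the browser's JavaScript would have.
my $ua  = LWP::UserAgent->new;
my $res = $ua->request($req);
print $res->decoded_content if $res->is_success;
```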

        primarily because if a bot needs to be more autonomous than mine, such as crawling, a more programmatic / self-contained solution is preferred. This is what I was trying to explain to Anonymous Monk.

        That was easily understood. Your insistence that it needs to be pure-perl is the problem.

      Thanks. :)
Re^9: Scraping Ajax / JS pop-up
by Anonymous Monk on Feb 16, 2012 at 08:58 UTC

    since my expectation from the proclaimed JavaScript support was that the module would spare the user from having to sniff HTTP with external tools.

    Let's see: an experimental, alpha-level browser produced by a single man, versus browsers backed by twenty years and millions of dollars at Microsoft/Mozilla... gee, I wonder which one works better.
