PerlMonks  

Re^2: Web Scraping on CGI Scripts?

by Anonymous Monk
on Oct 10, 2011 at 16:29 UTC


in reply to Re: Web Scraping on CGI Scripts?
in thread Web Scraping on CGI Scripts?

Hi Tospo, the URL is http://www.molmovdb.org/cgi-bin/browse.cgi. I'm trying to follow all the links to the database entries iteratively and output these as text files to analyse later. As you can probably see, the coding is not formatted amazingly well! Many thanks and best wishes, Dan


Re^3: Web Scraping on CGI Scripts?
by tospo (Hermit) on Oct 11, 2011 at 08:29 UTC
    That page - apart from being marked-up in a rather old-fashioned way - isn't too bad at all. If you look at the page source code, you can easily see a table structure that you can use to parse it.
    You will want to use a module like WWW::Mechanize to interact with the website. This module allows you to interact with web content the way a user would in a browser. You can make your script "click" on links to get to the text files. Use the table structure of the "browse" page to iterate over all the molecules, each time following the link through to the text data files.
    Have a go with a simple example first. There are a few here. If you are getting stuck, post the script you have so far and what's happening so we can help you along. Good luck!
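    A minimal first script along those lines might look like this (a sketch only: it just fetches the browse page from the thread and lists every link on it, so you can see what there is to follow before writing the real iteration):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new;
    $mech->get('http://www.molmovdb.org/cgi-bin/browse.cgi');

    # Print the text and URL of every link on the page.
    for my $link ( $mech->links ) {
        printf "%s => %s\n", $link->text // '', $link->url;
    }
    ```

    Once you can see the links, you can narrow them down with find_all_links and a url_regex that matches only the ones you want.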
Re^3: Web Scraping on CGI Scripts?
by tospo (Hermit) on Oct 11, 2011 at 08:32 UTC
    oh and I forgot to mention: you are always parsing the HTML output that the server sends to you. It doesn't matter that this is a cgi script generating the page on the server, the output is just HTML (unless it's a webservice that sends XML, JSON or the like). So there is nothing special about this case.
      Hello Again

      WWW::Mechanize does seem to be the right medicine, but I've already hit a snag in the road: I'm only interested in following the 'motion.cgi' links and extracting these as text documents; however, the regex I've used only finds the first 2 links. Any ideas on what's going on?

      #!/usr/bin/perl
      use strict;
      use WWW::Mechanize;
      use Storable;

      my $mech_cgi = WWW::Mechanize->new;
      $mech_cgi->get( 'http://www.molmovdb.org/cgi-bin/browse.cgi' );
      my @cgi_links = $mech_cgi->find_all_links( url_regex => qr/motion.cgi?/ );
      for ( my $i = 0; $i < @cgi_links; $i++ ) {
          print "following link: ", $cgi_links[$i]->url, "\n";
          $mech_cgi->follow_link( url => $cgi_links[$i]->url )
              or die "Error following link ", $cgi_links[$i]->url;
      }
      best wishes

      Dan

        that's because after the first "follow_link" action, $mech_cgi is now on a different page (it behaves like a browser), and then you issue the next follow_link command, but that link doesn't actually exist on the page you are on now. Add "$mech_cgi->back" before the end of the loop and you will iterate through all the links.
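        Applied to the script above, the fix is one line at the bottom of the loop (a sketch: saving the fetched page to a file is left as a comment, since the thread doesn't show that part):

        ```perl
        #!/usr/bin/perl
        use strict;
        use warnings;
        use WWW::Mechanize;

        my $mech_cgi = WWW::Mechanize->new;
        $mech_cgi->get('http://www.molmovdb.org/cgi-bin/browse.cgi');

        my @cgi_links = $mech_cgi->find_all_links( url_regex => qr/motion\.cgi/ );
        for my $link (@cgi_links) {
            print "following link: ", $link->url, "\n";
            $mech_cgi->follow_link( url => $link->url )
                or die "Error following link ", $link->url;
            # ... save $mech_cgi->content to a text file here ...
            $mech_cgi->back;    # return to the browse page before the next link
        }
        ```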
