Your post reminded me of a problem I have been trying to solve: extracting URLs that point to a specific filetype (say, a gz archive) from a web page. It turns out that CPAN has a page containing an alphabetical list of all modules, with a hyperlink to the tar.gz file of each module.
The following code (given appropriate substitution of the command-line input, i.e. gz for pdf) will create a text file with all of the hyperlinks to the tar.gz files:
use strict;
use warnings;
use LWP::Simple;
use HTML::SimpleLinkExtor;

# usage: getfileoftype http://www.example.com pdf > urllist.txt
my $url      = shift;
my $filetype = shift;

# Fetch the page to a local file, then pull every <a href> out of it.
my $status = getstore($url, "tempfile.html");
die "Couldn't fetch $url (status $status)" unless is_success($status);

my $extor = HTML::SimpleLinkExtor->new();
$extor->parse_file("tempfile.html");
my @a_hrefs = $extor->a;

# Keep only links ending in the requested extension; \Q...\E protects
# against regex metacharacters in the extension, and $ anchors the match
# to the end of the link.
for my $element (@a_hrefs) {
    print "$element\n" if $element =~ /\Q$filetype\E$/;
}
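As an aside, HTML::SimpleLinkExtor can also parse a string in memory, so if you'd rather not write tempfile.html at all (see my note about clobbering further down) you can feed it the page straight from get(). A minimal sketch of that variation, taking the same two command-line arguments:

use strict;
use warnings;
use LWP::Simple;
use HTML::SimpleLinkExtor;

my $url      = shift;
my $filetype = shift;

# get() returns the page body as a string, or undef on failure,
# so no temporary file is needed.
my $html = get($url);
die "Couldn't fetch $url" unless defined $html;

my $extor = HTML::SimpleLinkExtor->new();
$extor->parse($html);

print "$_\n" for grep { /\Q$filetype\E$/ } $extor->a;

If the page happens to use relative links, passing a base URL to the constructor (HTML::SimpleLinkExtor->new($base_url)) should resolve them to absolute ones.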
Once you have that, you can use the following code to download all of the modules automatically, or just a subset: simply trim the text file created above down to the URLs you want.
use strict;
use warnings;
use LWP::Simple;
use File::Basename;

# Read each URL from the list and save the file under its own name.
# (A lexical filehandle avoids colliding with Perl's special DATA handle.)
open(my $list, '<', 'urllist.txt') or die "File open failure: $!";
while (my $downloadurl = <$list>) {
    chomp $downloadurl;    # strip the newline, or it ends up in the filename
    my $savefilename = fileparse($downloadurl);    # filename part of the URL
    print "$downloadurl\n$savefilename\n";
    my $status = getstore($downloadurl, $savefilename);
    print "$status\n";
}
close $list;
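If you end up re-running that loop, LWP::Simple's mirror() makes a handy substitute for getstore(): it sends an If-Modified-Since header and only re-downloads a file when the server copy is newer than the local one. A sketch of the same loop with that swap, still assuming urllist.txt as input:

use strict;
use warnings;
use LWP::Simple;
use File::Basename;

open(my $list, '<', 'urllist.txt') or die "File open failure: $!";
while (my $downloadurl = <$list>) {
    chomp $downloadurl;
    my $savefilename = fileparse($downloadurl);
    # A 304 status here just means the local copy is already current.
    my $status = mirror($downloadurl, $savefilename);
    print "$downloadurl => $status\n";
}
close $list;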
Both pieces of code work nicely on my WinXP box. Yes, I know that "tempfile.html" gets clobbered, but I was just glad to get this code working, and WinXP doesn't seem to care. In any case, you can now generate a local repository of modules. Hope this helps.
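Incidentally, if you want to keep the temp-file approach in the first script but stop worrying about clobbering, File::Temp (a core module) will hand out a unique, auto-cleaned name. A small sketch of how it might slot in, replacing the hard-coded "tempfile.html":

use File::Temp qw(tempfile);

# tempfile() returns a handle plus a unique name; UNLINK deletes the file
# when the program exits.
my ($tmp_fh, $tmp_name) = tempfile(SUFFIX => '.html', UNLINK => 1);
close $tmp_fh;    # getstore() will reopen the file by name

my $status = getstore($url, $tmp_name);
die "Couldn't fetch $url (status $status)" unless is_success($status);
$extor->parse_file($tmp_name);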
Suggestions for improvement in my code are welcome!