
Re^3: Any spider framework?

by tobyink (Abbot)
on Jan 06, 2012 at 12:51 UTC ( #946593=note )

in reply to Re^2: Any spider framework?
in thread Any spider framework?

In the case of <a name="foo"> it simply won't match, as the regexp requires href, and you wouldn't want it to match anyway, as it's not a link. Whitespace around the equals sign (rare, but valid) is more problematic. There are other edge cases that behave differently from how you might want as well; note that the first subcapture allows ">" to occur within it.
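To make those edge cases concrete, here is a stand-in pattern written in the same spirit as the one under discussion (NOT the module's actual regexp): it requires a literal href=" with no surrounding whitespace, and its leading subcapture is free to run past a ">".

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in pattern for illustration only -- not WWW::Crawler::Lite's real
# regexp. It needs href=" verbatim, and (.*?) may cross a tag boundary.
my $re = qr{<a\s(.*?)href="([^"]+)"}s;

my $plain   = '<a href="http://example.com/">ok</a>';
my $anchor  = '<a name="foo">just an anchor</a>';
my $spaced  = '<a href = "http://example.com/">spaces around =</a>';
my $crossed = '<a id="x">text</a> stray href="http://evil.example/" in PCDATA';

# $plain matches; $anchor and $spaced do not (no literal href=");
# $crossed matches because (.*?) runs straight past the ">" of the first tag.
for my $html ($plain, $anchor, $spaced, $crossed) {
    print(($html =~ $re) ? "matched: $2\n" : "no match\n");
}
```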

But in practice, it's probably good enough to work for the majority of people.

The author may well accept a patch to parse the page properly using HTML::Parser, given that WWW::Crawler::Lite already depends on it (indirectly, via LWP::UserAgent).
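A minimal sketch of what such a patch might look like, assuming HTML::Parser is available (the helper name extract_links is mine, not anything from the module):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Parser;   # already pulled in indirectly, via LWP::UserAgent

# Collect href attributes from genuine <a> start tags only, so anchors,
# comments, and script bodies are never mistaken for links. The parser
# also copes with whitespace around "=" for free.
sub extract_links {
    my ($html) = @_;
    my @links;
    my $p = HTML::Parser->new(
        api_version => 3,
        start_h     => [
            sub {
                my ($tag, $attr) = @_;
                push @links, $attr->{href}
                    if $tag eq 'a' && defined $attr->{href};
            },
            'tagname, attr',
        ],
    );
    $p->parse($html);
    $p->eof;
    return @links;
}

print "$_\n" for extract_links(
    '<a name="foo">anchor</a> <a href = "http://example.com/">link</a>'
);
```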

Or if you can't wait for a new fixed version to be released, just subclass it - it's only really that one method that's in major need of fixing.

Re^4: Any spider framework?
by jdrago999 (Pilgrim) on Jan 08, 2012 at 04:54 UTC

    As the author of WWW::Crawler::Lite, I am also appalled at the use of that regexp for URL detection! (What was I thinking?)

    I am quite pressed for time at the moment, but I will put the module on github and re-release it with the patches/updates already suggested on RT.

    FWIW I use this module in several places (and have for some time now). While there are perhaps some more "robust" spiders/crawlers out there, I wasn't able to find one as simple to use and understand as W:C:L.

    Once the github + pause uploads are completed, I'll re-post here.


Re^4: Any spider framework?
by jdrago999 (Pilgrim) on Jan 08, 2012 at 06:40 UTC


    As promised, the patches/updates/POD have been applied; github now hosts the code, and I've uploaded the newest release.

    Thanks everyone for your suggestions and time...

    Now you can get HTML::LinkExtor-based link parsing by specifying 'link_parser => "HTML::LinkExtor"' in the constructor. Otherwise you get the default (original, regexp-based) behaviour.
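Sketch of how that option might be passed, based only on the sentence above; everything besides link_parser is an illustrative placeholder, so check the module's POD for the real option list:

```perl
use strict;
use warnings;

# Opt in to the HTML::LinkExtor-based link parsing; omit link_parser
# (or leave the default) to keep the original regexp behaviour.
my %options = (
    link_parser => 'HTML::LinkExtor',
    # ... your usual WWW::Crawler::Lite handlers/options here ...
);

# Constructed only if the module is installed; guarded since the rest of
# the option list above is a placeholder.
if (eval { require WWW::Crawler::Lite; 1 }) {
    my $crawler = eval { WWW::Crawler::Lite->new(%options) };
}
```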

    Maybe this could use something slick like Web::Query to get at that information (which, for me, was the whole point).

Re^4: Any spider framework?
by bart (Canon) on Jan 10, 2012 at 08:07 UTC
    In the case of <a name="foo"> it simply won't match, as the regexp includes href.
    And what makes you think the regex would limit itself to a single tag? In your example, the "<a " could be matched while the "href=" is much further down in the document. In fact, there is no guarantee that this string is a tag attribute at all; it could just be in plain HTML text ("PCDATA"), JavaScript code, or even in HTML comments.

    To be much more reliable, a parser (actually just a lexer; it could be regex based) should first extract whole tags, and you should then test each one on its own.
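A rough sketch of that two-stage approach (the function name hrefs_from is mine): first lex out complete tags, then examine each tag on its own for an href attribute. It is still regex-based, and comments and <script> bodies would need extra handling, but it can no longer match across tag boundaries or inside PCDATA.

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub hrefs_from {
    my ($html) = @_;
    my @hrefs;
    for my $tag ($html =~ /(<[^>]+>)/g) {    # stage 1: whole tags only
        next unless $tag =~ /^<a\b/i;        # stage 2: restrict to <a>
        push @hrefs, $1
            if $tag =~ /\bhref\s*=\s*["']([^"']+)["']/i;
    }
    return @hrefs;
}

my $html = join '',
    '<a name="foo">anchor</a>',
    '<p>stray href="http://evil.example/" in PCDATA</p>',
    '<a href = "http://example.com/">link</a>';

print "$_\n" for hrefs_from($html);   # only http://example.com/
```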
