
Best way to crawl PM's XML nodes.

by dmitri (Priest)
on Jun 10, 2007 at 22:43 UTC ( #620384=monkdiscuss: print w/replies, xml ) Need Help??

This thread has to do with the new MonkSearch project (discussed here). So, what is the best way to spider all 620,000 or so nodes on PerlMonks?

I was thinking a simple POE program with a configurable number of spider sessions and a single session to store the articles. Maybe the number of spiders should decrease or increase dynamically as response time goes up or down?
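A rough sketch of what that dynamic scaling might look like. The function name, thresholds, and pool bounds below are all my own illustrative assumptions, not part of any proposal in this thread:

```perl
use strict;
use warnings;

# Adjust the number of spider sessions based on observed response time:
# shrink the pool when the server looks busy, grow it when it's fast.
# The 2.0s / 0.5s thresholds and the 1..8 bounds are illustrative.
sub adjust_spider_count {
    my ($current, $resp_time) = @_;
    my ($min, $max) = (1, 8);

    if ($resp_time > 2.0) {        # server looks busy: back off
        $current--;
    }
    elsif ($resp_time < 0.5) {     # server is fast: ramp up
        $current++;
    }
    $current = $min if $current < $min;
    $current = $max if $current > $max;
    return $current;
}

print adjust_spider_count(4, 3.0), "\n";   # busy  -> 3
print adjust_spider_count(4, 0.2), "\n";   # fast  -> 5
print adjust_spider_count(1, 5.0), "\n";   # clamped at 1
```

The caller (e.g. the storage session in a POE setup) would re-run this after each batch of responses and spawn or retire spider sessions to match.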

Replies are listed 'Best First'.
Re: Best way to crawl PM's XML nodes. (slow)
by tye (Sage) on Jun 10, 2007 at 23:41 UTC

    Just use one thread; that way you will be causing less load on the servers. So KISS.

    Please wait at least as long between fetches as the fetch takes; that way the load you induce on the server is automatically reduced if the server becomes busy.

    - tye        
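That back-off rule fits in a few lines. A minimal sketch, where the fetch callback is a stand-in for the real HTTP request:

```perl
use strict;
use warnings;
use Time::HiRes qw(time sleep);

# Fetch each node in a single thread, then wait at least as long as the
# fetch itself took. A slow (busy) server therefore automatically slows
# the crawl down. $fetch is a coderef standing in for the HTTP request.
sub polite_crawl {
    my ($fetch, @node_ids) = @_;
    for my $id (@node_ids) {
        my $start   = time;
        $fetch->($id);
        my $elapsed = time - $start;
        sleep $elapsed;    # delay >= time the fetch took
    }
}
```

With this scheme the crawler never consumes more than half the wall-clock time of its connection to the server.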

Re: Best way to crawl PM's XML nodes.
by creamygoodness (Curate) on Jun 11, 2007 at 00:49 UTC

    Corion requested in a chatterbox conversation that the spider wait at least as long before initiating a new request as the last request took. (tye, too, I see now.) Reprinting your calculation from the project wiki, at 1 second per page...

    620,000 pages will take 620000/(60*60*24) ≈ 7.2 days.

    Is that a decent estimate? I dunno. I'm not in the habit of writing spiders that make 1 request per second against a single server for a week solid, and I don't know how the adaptive waiting algo will perform.

    After thinking things over, though, I'm less concerned than I used to be about the total time. We can afford to be patient. We'll only have to do this once, and we'll be able to serve up useful results long before we have a complete set of documents. In web search, precision trumps recall. Spidering does seem like kind of an inefficient way to acquire what's effectively a database dump, though.

    The thing I've become much more concerned about is our lack of access to node rep -- as you pointed out to me, it's not available unless you're logged in and have voted on a node.

    Being able to feed node rep into the indexer will make a huge difference in terms of providing a high-quality search experience. We're talking about the kind of thing that allowed Google to differentiate itself from Alta Vista in 1998, when PageRank was introduced. Google's great innovation was to use link analysis to calculate an absolute measure of a page's worth. We already have such an absolute measure; we need to use it.

    Without factoring node rep into the scoring algo, we'll be relying on TF/IDF alone -- the top results will be those rated "most relevant" according to that algo, but not necessarily "high quality nodes" as judged by the user community. Combining TF/IDF with node rep, though, will make the "best" documents among those judged "relevant" float to the top. The perceived quality of the top results will be greatly increased... People will find the good stuff faster.
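One hedged way to fold rep into a relevance score -- the log damping and the zero floor below are illustrative choices of mine, not a scheme anyone in this thread specified:

```perl
use strict;
use warnings;

# Combine a relevance score (e.g. from TF/IDF) with node reputation.
# Log damping keeps a few massively upvoted nodes from drowning out
# everything else; the floor keeps downvoted nodes from going negative.
sub combined_score {
    my ($tfidf, $rep) = @_;
    $rep = 0 if $rep < 0;
    return $tfidf * (1 + log(1 + $rep));    # natural log
}
```

With this shape, a node with rep 0 keeps its plain TF/IDF score, and among equally "relevant" nodes the better-reputed one ranks higher.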

    We can build a prototype without node rep and Super Search will still be much improved. It will still be a much better search than you find on the vast majority of websites out there... But it won't live up to its highest potential. Node rep is crucial metadata to have.

    Marvin Humphrey
    Rectangular Research ―
        One of the main things wrong with prlmnks: (missing at the time of this comment). In fact, nothing seems updated after some time in 2006?


        Wow, I never knew such a site really existed. The youngest nodes are those created in Oct 2006. Is it still going? How frequently is the mirroring done?

        Open source software? Share and enjoy. Make a profit from it if you can. But still: share and enjoy!


        Aside from the other items that have been brought up, I'm not sure how your reply addresses the post you're replying to. prlmnks doesn't provide a way to get at node rep.

        Perhaps you mean, "What's wrong with the search at prlmnks?" It's certainly better than what we have here, but 1) it's not here, and 2) I think we can improve on it further.

        Marvin Humphrey
        Rectangular Research ―
Re: Best way to crawl PM's XML nodes.
by demerphq (Chancellor) on Jun 19, 2007 at 22:57 UTC

    I think that if you were to coordinate with a pmdevil you could probably work out an interface that would allow you to download much more than a single node at a time via an XML feed. It's something to think about, anyway. Unfortunately, I doubt I'll have time to help out. Sorry. :-(
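If such a batched feed existed, the client side might chunk node ids like this. The node_ids parameter, the batch size, and the URL shape are all hypothetical -- the real interface would be whatever gets worked out with pmdev:

```perl
use strict;
use warnings;

# Split a list of node ids into batched request URLs.
# $base is the feed endpoint, $batch_size the (hypothetical) maximum
# number of nodes the server will return per request.
sub batch_urls {
    my ($base, $batch_size, @ids) = @_;
    my @urls;
    while (@ids) {
        my @chunk = splice @ids, 0, $batch_size;
        push @urls, "$base?displaytype=xml;node_ids=" . join ',', @chunk;
    }
    return @urls;
}
```

At, say, 50 nodes per request, 620,000 nodes become 12,400 requests -- a few hours at the one-request-per-second pace discussed above, rather than a week.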


      One stepped forward. :) We have 100,000 pages to work with. Now we're just waiting for various commitments to finish up.

      Marvin Humphrey
      Rectangular Research ―

Node Type: monkdiscuss [id://620384]
Approved by GrandFather