Best way to crawl PM's XML nodes.

by dmitri (Curate)
on Jun 10, 2007 at 22:43 UTC

This thread has to do with the new MonkSearch project (discussed here). So, what is the best way to spider all 620,000 or so nodes on www.perlmonks.org?

I was thinking a simple POE program with a configurable number of spider sessions and a single session to store the articles. Maybe the number of spiders should decrease or increase dynamically as response time goes up or down?
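A minimal sketch of that design, assuming numeric node IDs and the per-node displaytype=xml feed (the node-ID range, the fixed spider count, and blocking LWP fetches are simplifications; a real version would use POE::Component::Client::HTTP for non-blocking requests):

    # Sketch (untested): several spider sessions pulling node IDs from a
    # shared queue, plus one storage session that receives the fetched XML.
    use strict;
    use warnings;
    use POE;
    use LWP::UserAgent;

    my @queue = ( 1 .. 620_000 );    # node IDs to fetch (assumed range)
    my $ua = LWP::UserAgent->new( agent => 'monksearch-spider/0.1' );

    # One session to store the articles; spiders post fetched XML to it.
    POE::Session->create(
        inline_states => {
            _start => sub { $_[KERNEL]->alias_set('storage') },
            store  => sub {
                my ( $id, $xml ) = @_[ ARG0, ARG1 ];
                # ... write $xml to disk or a database here ...
            },
        },
    );

    # A configurable number of spider sessions.
    for my $n ( 1 .. 4 ) {
        POE::Session->create(
            inline_states => {
                _start => sub { $_[KERNEL]->yield('fetch') },
                fetch  => sub {
                    my $id = shift @queue or return;    # queue drained
                    my $res = $ua->get(
                        "http://www.perlmonks.org/?node_id=$id;displaytype=xml"
                    );
                    $_[KERNEL]->post( storage => store => $id, $res->content )
                        if $res->is_success;
                    $_[KERNEL]->delay( fetch => 1 );    # 1 s between requests
                },
            },
        );
    }

    POE::Kernel->run;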

Re: Best way to crawl PM's XML nodes. (slow)
by tye (Cardinal) on Jun 10, 2007 at 23:41 UTC

    Just use one thread; that way you will cause less load on the servers. So KISS.

    Please wait at least as long between fetches as the fetch itself takes; that way, the load you induce on the server is automatically reduced if the server becomes busy.
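    A minimal sketch of that polite single-fetcher loop (the URL pattern is an assumption; storage is elided):

        # Sleep at least as long as the previous fetch took, so a busy
        # (slow) server automatically gets hit less often.
        use strict;
        use warnings;
        use LWP::UserAgent;
        use Time::HiRes qw(time sleep);

        my $ua = LWP::UserAgent->new( agent => 'monksearch-spider/0.1' );

        for my $id ( 1 .. 620_000 ) {
            my $t0  = time;
            my $res = $ua->get(
                "http://www.perlmonks.org/?node_id=$id;displaytype=xml");
            my $took = time - $t0;
            # ... store $res->content if $res->is_success ...
            sleep $took;    # wait as long as the fetch took
        }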

    - tye        

Re: Best way to crawl PM's XML nodes.
by creamygoodness (Curate) on Jun 11, 2007 at 00:49 UTC

    Corion requested in a chatterbox conversation that the spider wait at least as long before initiating a new request as the last request took. (tye, too, I see now.) Reprinting your calculation from the project wiki, at 1 second per page...

    620,000 pages will take 620000/(60*60*24) ≈ 7.2 days.

    Is that a decent estimate? I dunno. I'm not in the habit of writing spiders that make 1 request per second against a single server for a week solid, and I don't know how the adaptive waiting algo will perform.

    After thinking things over, though, I'm less concerned than I used to be about the total time. We can afford to be patient. We'll only have to do this once, and we'll be able to serve up useful results long before we have a complete set of documents. In web search, precision trumps recall. Spidering does seem like kind of an inefficient way to acquire what's effectively a database dump, though.

    The thing I've become much more concerned about is our lack of access to node rep -- as you pointed out to me, it's not available unless you're logged in and have voted on a node.

    Being able to feed node rep into the indexer will make a huge difference in terms of providing a high-quality search experience. We're talking about the kind of thing that allowed Google to differentiate itself from AltaVista in 1998, when PageRank was introduced. Google's great innovation was to use link analysis to calculate an absolute measure of a page's worth. We already have such an absolute measure; we need to use it.

    Without factoring node rep into the scoring algo, we'll be relying on TF-IDF alone -- the top results will be those rated "most relevant" according to that algo, but not necessarily "high-quality nodes" as judged by the user community. Combining TF-IDF with node rep, though, will make the "best" documents among those judged "relevant" float to the top. The perceived quality of the top results will be greatly increased... People will find the good stuff faster.
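    A hypothetical way to blend the two signals (the log damping and the 0.2 weight are illustrative assumptions, not anything settled in this thread):

        # Damp node rep with a log and blend it into the relevance score,
        # so reputation boosts relevant documents but never swamps relevance.
        use List::Util qw(max);

        sub combined_score {
            my ( $tfidf, $rep ) = @_;
            my $boost = log( 1 + max( 0, $rep ) );    # rep can be negative
            return $tfidf * ( 1 + 0.2 * $boost );     # 0.2 = blend weight
        }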

    We can build a prototype without node rep and Super Search will still be much improved. It will still be a much better search than you find on the vast majority of websites out there... But it won't live up to its highest potential. Node rep is crucial metadata to have.

    --
    Marvin Humphrey
    Rectangular Research ― http://www.rectangular.com
        Wow, I never knew such a site existed. The youngest nodes are those created in Oct 2006. Is it still going? How frequently is the mirroring done?

        Open source software? Share and enjoy. Make profit from it if you can. Yet, share and enjoy!

        One of the main things wrong with prlmnks: http://prlmnks.org/html/620384.html (missing at the time of this comment). In fact, nothing seems to have been updated since some time in 2006.

        -Paul

        Ambrus,

        Aside from the other items that have been brought up, I'm not sure how your reply addresses the post you're replying to. http://prlmnks.org doesn't provide a way to get at node rep.

        Perhaps you mean, "What's wrong with the search at http://prlmnks.org?" It's certainly better than what we have here but 1) it's not here and 2) I think we can improve on it further.

        --
        Marvin Humphrey
        Rectangular Research ― http://www.rectangular.com
Re: Best way to crawl PM's XML nodes.
by demerphq (Chancellor) on Jun 19, 2007 at 22:57 UTC

    I think that if you were to coordinate with a pmdevil, you could probably work out an interface that would allow you to download much more than a single node at a time via an XML feed. It's something to think about, anyway. Unfortunately, I doubt I'll have time to help out. Sorry. :-(
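    For illustration, a batched fetch might look something like this (the comma-separated node_id syntax is purely hypothetical; the per-node displaytype=xml feed is the only one this thread mentions):

        use strict;
        use warnings;
        use LWP::UserAgent;

        my $ua = LWP::UserAgent->new( agent => 'monksearch-spider/0.1' );

        # Request, say, 50 nodes per round trip instead of one.
        my @batch = ( 620_384 .. 620_433 );
        my $url = 'http://www.perlmonks.org/?node_id=' . join( ',', @batch )
                . ';displaytype=xml';    # hypothetical multi-node syntax
        my $res = $ua->get($url);
        # ... split the returned XML into per-node documents and store ...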

    ---
    $world=~s/war/peace/g

      One stepped forward. :) We have 100,000 pages to work with. Now we're just waiting for various commitments to finish up.

      --
      Marvin Humphrey
      Rectangular Research ― http://www.rectangular.com
