in reply to db content dump

There used to be site documentation on "What tickers are available at PerlMonks?", but I can't find it now. It documented the interfaces that can be used to download all of the public content of PerlMonks. I did find a translation of that documentation at Que geradores de XML estão atualmente disponíveis no PerlMonks? ("What XML generators are currently available at PerlMonks?"). Perhaps SiteDocClan can reconstruct it from that, if necessary.

- tye        

Re^2: db content dump (tickers)
by Albannach (Prior) on Aug 28, 2013 at 00:56 UTC

      Yes. Thanks.

      Sadly, the fact that that node is of type "superdoc" rather than "sitefaqlet" makes it much harder to find than it really should be (a surprisingly large number of accidents of implementation combine to contribute to that problem). I suspected that was part of my difficulty and so hoped to find a link to it from one of the sitefaqlets, but didn't find such either. Though, I now see that such a link is trying to hide in a tiny font at the top of the translation that I linked to.

      A new sitefaqlet on "downloading" that includes such a link might be a good idea. I'd also like to see PerlMonks Syndication expanded a bit (including that specific link) and just more "see also" cross-linking of sitefaqlets in general.

      It'd also be nice for super search to know how to search sitedoclets (something quite different from a sitefaqlet). But I doubt any member of pmdev will get around to that any time soon, for various reasonable reasons.

      - tye        

Re^2: db content dump (tickers)
by daxim (Curate) on Aug 29, 2013 at 13:08 UTC
    At 1 request/s and ~1 million nodes, it would take about 12 days to crawl everything, and it would generate a lot of traffic. Can you run this on the server and make a static dump/snapshot?
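
    The 12-day figure above is easy to verify with a back-of-envelope calculation (a sketch, assuming the stated rate of 1 request/s and ~1,000,000 nodes):

    ```python
    # Back-of-envelope check of the crawl-time estimate.
    # Assumptions (from the comment above): ~1,000,000 nodes,
    # a polite crawl rate of 1 request per second.
    nodes = 1_000_000
    rate_per_second = 1

    seconds = nodes / rate_per_second
    days = seconds / 86_400  # 86,400 seconds per day

    print(f"{days:.1f} days")  # roughly 11.6 days, i.e. about 12
    ```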

      The web site is not static. One static dump just begets another static dump.

      Do you have a successful replacement web site all ready?

      I'm pretty sure that there is more than 12 days of work yet ahead of you and that the vast majority of that work does not require ~1 million nodes of sample data.

      Having tried to produce dumps for export before and having seen others try, it is not simple, is not fast, and is prone to important mistakes. That it may appear very simple from where you are sitting does not actually have the power to change any of that.

      - tye