Yes. Thanks.
Sadly, the fact that that node is of type "superdoc" rather than "sitefaqlet" makes it much harder to find than it should be (a surprisingly large number of accidents of implementation combine to cause that problem). I suspected that was part of my difficulty, so I hoped to find a link to it from one of the sitefaqlets, but didn't find one there either. Though I now see that such a link is hiding in a tiny font at the top of the translation that I linked to.
A new sitefaqlet on "downloading" that includes such a link might be a good idea. I'd also like to see PerlMonks Syndication expanded a bit (including that specific link) and just more "see also" cross-linking of sitefaqlets in general.
It'd also be nice for super search to know how to search sitedoclets (something quite different from a sitefaqlet). But I doubt any member of pmdev will get around to that any time soon, for various reasonable reasons.
At 1 request/s and ~1 million nodes, it would take about 12 days to crawl everything, and it would generate a huge amount of traffic. Can you run this on the server and make a static dump/snapshot?
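The 12-day figure follows directly from the stated rate limit. A quick sanity check (the node count and request rate are this thread's rough assumptions, not measured values for the site):

```python
# Back-of-the-envelope crawl-time estimate.
# NODES and REQUESTS_PER_SEC are the thread's assumptions, not measured figures.

NODES = 1_000_000        # rough node count mentioned above
REQUESTS_PER_SEC = 1     # polite, self-imposed rate limit

seconds = NODES / REQUESTS_PER_SEC
days = seconds / 86_400  # 86,400 seconds per day

print(f"{days:.1f} days")  # roughly 11.6 days, i.e. ~12 days
```

So the estimate holds: at one request per second, a full crawl runs close to two weeks even before retries and errors are counted.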
The web site is not static. One static dump just begets another static dump.
Do you have a successful replacement web site all ready?
I'm pretty sure that there is more than 12 days of work yet ahead of you and that the vast majority of that work does not require ~1 million nodes of sample data.
Having tried to produce dumps for export before, and having seen others try, I can say it is not simple, not fast, and prone to significant mistakes. That it may appear very simple from where you are sitting does not actually have the power to change any of that.