http://www.perlmonks.org?node_id=576114

Perlmonks is great but there are some 'features' that I consider to be missing:

  • an RSS feed of the newest nodes
  • content that search engines such as Google can find and index

To address these issues I've added features to http://prlmnks.org, a site that I first created about a year ago to do the RSS feeds bit. Please take a look and tell me what you think.
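
To give a flavour of the RSS side: a 'newest nodes' feed is not much code to generate. Here is a minimal sketch using XML::RSS - it is not the code the site actually runs, and the channel details and node data below are made up purely for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::RSS;

    # Example data: (node_id, title) pairs for recently created nodes.
    my @newest = (
        [ 576114, 'PerlMonks mirror with RSS and searching' ],
    );

    # Build a simple RSS 2.0 channel describing the newest nodes.
    my $rss = XML::RSS->new( version => '2.0' );
    $rss->channel(
        title       => 'PerlMonks newest nodes',
        link        => 'http://prlmnks.org/',
        description => 'Recently created PerlMonks nodes',
    );

    # One feed item per node, linking back to the original node.
    for my $node (@newest) {
        my ( $id, $title ) = @$node;
        $rss->add_item(
            title => $title,
            link  => "http://www.perlmonks.org/?node_id=$id",
        );
    }

    print $rss->as_string;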

Please note that I am not being negative towards perlmonks - I think it is a great resource.

Replies are listed 'Best First'.
Re: PerlMonks mirror with RSS and searching
by planetscape (Chancellor) on Oct 03, 2006 at 18:42 UTC
      And a very handy RSS feed it is, too. I use it frequently to find new nodes without having to load up perlmonks in the heat of the day (it can be a little slow). There is now a Perl module on CPAN (Plagger::Plugin::CustomFeed::PerlMonks) that uses this feed. (Note: I haven't looked at that module's source or its full functionality - I just noticed it on CPAN at some point.)

      ---------
      perl -le '$.=[qw(104 97 124 124 116 97)];*p=sub{[@{$_[0]},(45)x 2]};*d=sub{[(45)x 2,@{$_[0]}]};print map{chr}@{p(d($.))}'
Re: PerlMonks mirror with RSS and searching
by ysth (Canon) on Oct 03, 2006 at 17:24 UTC
    I don't have any interest in RSS, but I like the idea of the node contents getting indexed by Google. Are you actually getting crawled, though? If not, you may want to take out the blank line between the User-agent line and the Disallow lines in your robots.txt.
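
    For reference: blank lines separate records in robots.txt, so each Disallow needs to sit directly under its User-agent line, roughly like this (the path is only an example):

        User-agent: *
        Disallow: /cgi-bin/
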
Re: PerlMonks mirror with RSS and searching
by Jaap (Curate) on Oct 05, 2006 at 19:22 UTC
    Can you make RSS links that my beloved Firefox recognizes as RSS feeds to bookmark?
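
    As far as I know, the feed icon Firefox shows (for its Live Bookmarks) comes from a feed auto-discovery link in the page's <head>, something along these lines - the title and href below are just examples, not the site's real feed URL:

        <link rel="alternate" type="application/rss+xml"
              title="PerlMonks newest nodes" href="http://prlmnks.org/rss/newest.xml" />
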
Re: PerlMonks mirror with RSS and searching
by DrHyde (Prior) on Oct 04, 2006 at 09:46 UTC
    Nice idea, but ... given that perlmonks's robots.txt bars all robots, how are you getting the content?

      He's ignoring robots.txt, of course.

      There used to be a static mirror of perlmonks specifically for search engines. Searching Google for my username plus perlmonks yields results like:

      http://qs321.pair.com/~monkads/?node_id=388544

      but it seems that that is now forbidden/down/whatever.

      Hmm. Ignoring robots.txt is a bit too strong - I'd prefer to say that I was sticking to the spirit, if not the letter, of it.

      The aim of that robots.txt entry is to stop perlmonks being repeatedly scraped by lots of robots. I only scrape the XML bit of the page and cache the result, so I never need to scrape the same page twice. I also have code in place that prevents scraping too fast.
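
      Roughly, the fetch-and-cache logic works along these lines - a stripped-down sketch rather than the real code, with the cache location, delay, user-agent string and exact URL all placeholders:

          #!/usr/bin/perl
          use strict;
          use warnings;
          use LWP::UserAgent;

          my $cache_dir = '/tmp/pm_cache';   # placeholder cache location
          my $min_delay = 5;                 # seconds between live requests
          my $ua = LWP::UserAgent->new( agent => 'prlmnks-mirror/0.1' );

          sub fetch_node_xml {
              my ($node_id) = @_;
              my $cache_file = "$cache_dir/$node_id.xml";

              # Never scrape the same page twice: serve from the cache if present.
              if ( -e $cache_file ) {
                  open my $in, '<', $cache_file or die "read $cache_file: $!";
                  local $/;
                  return scalar <$in>;
              }

              # Crude rate limit so live fetches can't happen too fast.
              sleep $min_delay;
              my $url = "http://www.perlmonks.org/?node_id=$node_id;displaytype=xml";
              my $res = $ua->get($url);
              die "fetch $url failed: ", $res->status_line unless $res->is_success;

              # Cache the XML on disk so later requests never hit the live site.
              mkdir $cache_dir unless -d $cache_dir;
              open my $out, '>', $cache_file or die "write $cache_file: $!";
              print {$out} $res->content;
              close $out;
              return $res->content;
          }

          print fetch_node_xml(576114);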

      Hopefully the RSS feeds will reduce the traffic that hits perlmonks just to see what is new.