Hmm. "Ignoring robots.txt" is a bit too strong - I'd prefer to say that I was sticking to the spirit, if not the letter, of it.
The aim of the entry is to stop perlmonks being repeatedly scraped by lots of robots. I only scrape the XML portion of the page and cache the results, so I never need to scrape the same page twice. I also have code in place that prevents scraping too fast.
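The cache-plus-throttle idea can be sketched roughly like this (Python for brevity; the class and names here are illustrative, not the actual code):

```python
import time

class PoliteFetcher:
    """Cache fetched pages and enforce a minimum delay between requests."""

    def __init__(self, fetch, min_interval=5.0):
        self.fetch = fetch            # callable: url -> page content
        self.min_interval = min_interval
        self.cache = {}               # url -> cached content
        self.last_fetch = 0.0

    def get(self, url):
        # Serve from cache so the same page is never scraped twice.
        if url in self.cache:
            return self.cache[url]
        # Throttle: wait until min_interval has passed since the last hit.
        wait = self.min_interval - (time.time() - self.last_fetch)
        if wait > 0:
            time.sleep(wait)
        content = self.fetch(url)
        self.last_fetch = time.time()
        self.cache[url] = content
        return content
```

Every repeat request is served locally, so the site only ever sees one hit per page, spaced at least `min_interval` seconds apart.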
Hopefully the result of the RSS feeds will be to reduce the traffic that hits perlmonks from people checking just to see what is new.