Good stuff - well done. I have added this as a live bookmark straight into Firefox.

You mentioned that your process uses a daemon that checks for new nodes. This means that, first, you need to keep a daemon running and, second, you are polling PM on a regular basis.

You could simplify the model by caching the RSS for a particular page and interrogating the cache each time you wanted to serve a page. A cached page could time out after a short period (e.g. 10 minutes). A cache miss (or a timed-out page) would initiate a request to the Monastery, and the result would be cached for next time. This means that when nobody was using the feed, PM wouldn't be hit at all.
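Something like this read-through cache is what I have in mind. It's only a sketch: the cache directory, the TTL, and the PerlMonks XML URL are placeholders I've picked for illustration, not anything from your script.

<code>
use strict;
use warnings;
use LWP::Simple qw(get);

my $cache_dir = '/tmp/pm-rss-cache';   # hypothetical location
my $ttl       = 10 * 60;               # cache lifetime: 10 minutes

# Return the RSS for a node, hitting the Monastery only on a
# cache miss or when the cached copy has expired.
sub cached_rss {
    my ($node_id) = @_;
    my $file = "$cache_dir/$node_id.xml";

    # Cache hit: the file exists and is younger than the TTL.
    if (-e $file && (time - (stat $file)[9]) < $ttl) {
        open my $fh, '<', $file or die "read $file: $!";
        local $/;                      # slurp the whole file
        return <$fh>;
    }

    # Miss or expired copy: fetch fresh RSS and refresh the cache.
    # (URL is a guess at the XML display type -- adjust as needed.)
    my $rss = get("http://www.perlmonks.org/?node_id=$node_id;displaytype=xml")
        or die "could not fetch node $node_id";
    open my $out, '>', $file or die "write $file: $!";
    print {$out} $rss;
    close $out;
    return $rss;
}
</code>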

Caching can be implemented using a simple file cache with timestamp checking, or something more involved such as a database. Either way, you periodically need to clean the cache of expired documents. You would also want to guard against an attack where a malicious user tried to access every node as a feed and therefore used up lots of cache space; a sketch of both follows.
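For the file-cache variant, the cleanup could be as simple as the routine below: drop anything past the TTL, then enforce a cap on the number of cached nodes by deleting the oldest entries first. The directory name, TTL, and cap are again just illustrative values.

<code>
use strict;
use warnings;
use File::Spec;

my $cache_dir = '/tmp/pm-rss-cache';   # same hypothetical cache as above
my $ttl       = 10 * 60;               # 10 minutes
my $max_files = 500;                   # crude cap against cache-filling abuse

sub clean_cache {
    opendir my $dh, $cache_dir or die "opendir $cache_dir: $!";
    my @files = map  { File::Spec->catfile($cache_dir, $_) }
                grep { /\.xml$/ } readdir $dh;
    closedir $dh;

    # First pass: delete anything older than the TTL.
    my $now = time;
    my @keep;
    for my $file (@files) {
        if ($now - (stat $file)[9] >= $ttl) {
            unlink $file;              # expired -- throw it away
        }
        else {
            push @keep, $file;
        }
    }

    # Second pass: enforce the size cap, oldest entries first.
    if (@keep > $max_files) {
        my @by_age = sort { (stat $a)[9] <=> (stat $b)[9] } @keep;
        unlink @by_age[ 0 .. $#by_age - $max_files ];
    }
}
</code>

Run from cron every few minutes, that keeps the cache bounded even if someone walks every node as a feed.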

This article may be of interest with respect to the database solution.

