Re: questions concerning using perl to monitor webpages
Your solution is basically comparing two files. While the method is valid, there are some flaws. The first is assuming that the new page you fetch will not be served from a cache somewhere. It would also be a problem if the overall content of the page was the same but something like a <date> stamp differed every day. Of course, this can be argued both ways, but one must accept that "changed" here is subjective, not objective.
I don't have any other solutions. I would probably roll my own very much like you have suggested. Since the number of pages to track could get large, I would store just the URL and its MD5 sum in a database, and nothing else. The problem pages would get special-case code.
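Roughly what I have in mind, as a minimal sketch: the `page_changed` helper and the in-memory `%sums` hash are my own made-up names (in practice you would tie the hash to a DBM file or swap in a DBI table), and it assumes libwww-perl is installed for the actual fetch.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# $sums is a hashref of url => last-seen MD5 (persist it in the DB of
# your choice). Returns true if the content differs from what we last
# saw for this URL, and records the new checksum either way.
sub page_changed {
    my ($url, $content, $sums) = @_;
    my $sum = md5_hex($content);
    my $changed = !defined $sums->{$url} || $sums->{$url} ne $sum;
    $sums->{$url} = $sum;
    return $changed;
}

# Only fetch when given a URL on the command line, so the checking
# logic above can be exercised without network access.
if (@ARGV) {
    require LWP::Simple;    # assumes libwww-perl is available
    my %sums;               # in practice, tie this to a DBM file or DB table
    my $url     = $ARGV[0];
    my $content = LWP::Simple::get($url) // die "fetch failed: $url\n";
    print page_changed( $url, $content, \%sums ) ? "changed\n" : "unchanged\n";
}
```

Note this also inherits the caveats above: a cached copy or a page with a rotating date stamp will fool a raw checksum, so the problem pages would need their volatile bits stripped before hashing.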
This being said - I am betting someone will show a much more elegant and powerful solution.
Cheers - L~R