http://www.perlmonks.org?node_id=478039


in reply to [OT] Ethical and Legal Screen Scraping

The general concept of robots.txt was to restrict automated processes with no user behind them. Its original syntax makes very little sense from a security point of view (I mean, it basically tells people 'here's the stuff I don't want you looking at'). The later RFC added an 'Allow' directive to go along with 'Disallow'.
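For reference, a minimal robots.txt using both directives might look like this (the paths are made up; under the draft's first-match rule, the narrower Allow has to come before the broader Disallow):

    # hypothetical robots.txt: the first matching rule wins,
    # so the Allow exception precedes the broader Disallow
    User-agent: *
    Allow: /private/annual-report.html
    Disallow: /private/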

If a site really didn't want people visiting its content, it would use access restrictions (and it can even filter by user agent and path, just as with a robots.txt file); the difference is that robots.txt tells the robot not to even bother requesting those files.
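For instance, a rough sketch of that kind of restriction for Apache 2.x (the bot name and path are hypothetical) -- the server refuses the request outright instead of just asking nicely:

    # hypothetical httpd.conf fragment: refuse requests from 'ExampleBot'
    # for anything under /members, regardless of robots.txt
    SetEnvIfNoCase User-Agent "ExampleBot" is_bot
    <Location /members>
        Order Allow,Deny
        Allow from all
        Deny from env=is_bot
    </Location>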

It is not intended for user agents, that is to say, software with a user at the helm -- for instance, a web browser retrieving files as you request them (even if you go and option-click 50 links on a page, so each one pops up in a new browser window), or one of the more annoying browsers that pre-fetch every linked page, just in case you might follow a link.

I'd make sure to advertise my screen scraper with a unique user-agent string, and I'd look at robots.txt in case they wanted to politely ask me to go away... but would it be unethical to ignore it? In your case, I'd say yes -- you're planning on running it while you sleep. If you had a user agent that presented the content in a different format (i.e., acting as a screen scraper, but interactive rather than automated), I'd say it would be okay.
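In Perl, the polite version is nearly free: LWP::RobotUA fetches and honors each site's robots.txt for you, and throttles requests per host. A minimal sketch (the agent name, contact address, and URL are placeholders):

    use strict;
    use warnings;
    use LWP::RobotUA;

    # identify ourselves with a unique agent string and a contact address
    my $ua = LWP::RobotUA->new(
        agent => 'MyScraper/0.1',
        from  => 'me@example.com',
    );
    $ua->delay(1);    # wait at least one minute between requests to a host

    # LWP::RobotUA consults the site's robots.txt before each request;
    # if we're disallowed, it returns 403 "Forbidden by robots.txt"
    my $response = $ua->get('http://www.example.com/some/page.html');
    if ($response->is_success) {
        print $response->decoded_content;
    }
    else {
        warn 'Skipped: ', $response->status_line, "\n";
    }

Since a disallowed URL comes back as an ordinary error response, ignoring robots.txt from here would take deliberate effort.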

Now, if you were going to start looking at robots.txt, I would think it unethical to then decide to ignore it -- it's one thing to say 'I am a user agent, not a robot' and not check for it at all, but it's quite another to look to see whether they want you to go away and then ignore the request.