in reply to Re: Using text files to remove duplicates in a web crawler
in thread Using text files to remove duplicates in a web crawler
Using a [cpan://DBI] database will work, but a tied hash backed by DB_File (or a similar disk-based DBM) will be an order of magnitude faster, as well as simpler to script: lookups are plain hash operations against a local file, with no SQL layer in between.
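A minimal sketch of the tied-hash approach (the filename `seen_urls.db` and the `is_duplicate` helper are my own illustration, not from the original post):

```perl
use strict;
use warnings;
use DB_File;
use Fcntl;

# Tie a hash to an on-disk Berkeley DB file, so the set of seen
# URLs persists between crawler runs.
my $db_file = 'seen_urls.db';
tie my %seen, 'DB_File', $db_file, O_CREAT | O_RDWR, 0644, $DB_HASH
    or die "Cannot tie $db_file: $!";

# Returns true if the URL has been seen before; otherwise records
# it and returns false.
sub is_duplicate {
    my ($url) = @_;
    return 1 if exists $seen{$url};
    $seen{$url} = 1;
    return 0;
}
```

Because the tie makes the DB file behave like an ordinary hash, the crawler code stays as simple as an in-memory lookup while the data lives on disk.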