Re: Using text files to remove duplicates in a web crawler
Using a DBI database will work, but a tied hash with a DB_File (or similar) backing will be an order of magnitude faster, as well as simpler to script.
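A minimal sketch of the tied-hash approach, assuming DB_File is installed and that a single `is_new_url` check per URL is all the crawler needs (the filename `seen_urls.db` and the sub name are just illustrative):

```perl
use strict;
use warnings;
use DB_File;
use Fcntl;   # for O_CREAT and O_RDWR

# Tie a hash to an on-disk DB_File hash database. Lookups and stores
# go straight to the file, so the set of seen URLs persists across runs
# without loading everything into memory.
tie my %seen, 'DB_File', 'seen_urls.db', O_CREAT|O_RDWR, 0644, $DB_HASH
    or die "Cannot tie seen_urls.db: $!";

# Returns true the first time a URL is offered, false on every repeat.
sub is_new_url {
    my ($url) = @_;
    return 0 if exists $seen{$url};   # already crawled
    $seen{$url} = 1;                  # record it as seen
    return 1;
}
```

The crawler then only fetches URLs for which `is_new_url($url)` returns true; because the hash is tied to disk, duplicates are caught even across separate crawler runs.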