in reply to Re: Using text files to remove duplicates in a web crawler
in thread Using text files to remove duplicates in a web crawler
Using a cpan://DBI database will work, but using a tied hash with a DB_File (or similar) backing will be an order of magnitude faster, as well as simpler to script.
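A minimal sketch of the tied-hash approach: the hash is tied to an on-disk Berkeley DB file, so `exists`/assignment on it persist across runs, and duplicate URLs are skipped with an ordinary hash lookup. The filename `seen_urls.db` and the `@urls` list are just illustrative.

```perl
use strict;
use warnings;
use Fcntl;
use DB_File;

# Tie a hash to an on-disk Berkeley DB hash file; lookups and
# inserts behave like normal hash operations but persist to disk.
tie my %seen, 'DB_File', 'seen_urls.db', O_CREAT | O_RDWR, 0644, $DB_HASH
    or die "Cannot tie seen_urls.db: $!";

# Hypothetical crawl queue for illustration.
my @urls = (
    'http://example.com/a',
    'http://example.com/b',
    'http://example.com/a',   # duplicate - will be skipped
);

my @to_crawl;
for my $url (@urls) {
    next if exists $seen{$url};   # already seen on this or a prior run
    $seen{$url} = 1;              # mark as seen
    push @to_crawl, $url;         # ... fetch and process $url here ...
}

untie %seen;
```

Because the hash is disk-backed, the crawler can be restarted without re-fetching pages it has already visited; deleting the DB file resets the seen set.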