PerlMonks  

Re^2: Using text files to remove duplicates in a web crawler

by matija (Priest)
on Jul 07, 2004 at 06:28 UTC (#372293=note)


in reply to Re: Using text files to remove duplicates in a web crawler
in thread Using text files to remove duplicates in a web crawler

Using a [mod://DBI] database will work, but a tied hash with a DB_File (or similar DBM) backing will be an order of magnitude faster — as well as simpler to script.
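A minimal sketch of the tied-hash approach: the hash is tied to an on-disk Berkeley DB file, so lookups and inserts persist across crawler runs with no SQL involved. The filename `seen.db` and the helper `is_new_url` are illustrative names, not from the original post.

```perl
use strict;
use warnings;
use Fcntl;      # for O_RDWR, O_CREAT
use DB_File;    # ties a hash to a Berkeley DB file; exports $DB_HASH

# Tie %seen to an on-disk hash. Keys survive between runs of the crawler.
tie my %seen, 'DB_File', 'seen.db', O_RDWR|O_CREAT, 0644, $DB_HASH
    or die "Cannot tie seen.db: $!";

# Returns true the first time a URL is offered, false on every later call.
sub is_new_url {
    my ($url) = @_;
    return 0 if exists $seen{$url};   # already crawled
    $seen{$url} = 1;                  # recorded straight to disk via the tie
    return 1;
}

print is_new_url('http://example.com/') ? "new\n"  : "seen\n";   # new
print is_new_url('http://example.com/') ? "new\n"  : "seen\n";   # seen

untie %seen;
```

Because the hash is tied rather than loaded into memory, the crawler's duplicate check stays a plain `exists $seen{$url}` regardless of how many million URLs the backing file holds.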

