This is what I get on an Atom eee PC (1.6 GHz), after I removed:

    say "Total time: ", (time - $start); # 180 seconds

and used instead:

    print "Total time: ", (time - $start); # 180 seconds

Running it:

    $ time perl db.pl
    Total time: 5
Well, 200 million records at a rate of 2,000 per second is still 100,000 seconds, or almost 28 hours. That's still pretty long, isn't it? Having said that, you may be able to live with it: a full day of processing is still manageable in a number of cases. Beware, though, that the rate might slow down as your database grows larger.
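The back-of-the-envelope arithmetic above can be checked with a short Perl snippet (the record count and insert rate are the figures from this discussion, not measured values):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $records = 200_000_000;   # total records to load (figure from the thread)
my $rate    = 2_000;         # observed inserts per second (figure from the thread)

my $seconds = $records / $rate;    # 100_000 seconds
my $hours   = $seconds / 3600;     # roughly 27.8 hours

printf "Estimated load time: %d seconds (about %.1f hours)\n", $seconds, $hours;
```

If the insert rate degrades as the table and its indexes grow, the real figure will be worse than this linear estimate, so treat it as a lower bound.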
If you really only need to filter out duplicates, the ideas discussed by BrowserUk are probably much better than using a database.