in reply to Berkeley DB performance, profiling, and degradation...

It does seem very strange that DB_File would get slower as the amount of data grows. Lookup time is supposed to be roughly constant, and there are people using it with terabyte-size databases. Anyway, here are some things you might try:
- Use SDBM_File. It is much faster in some situations, though it has a limited record length. Take a look at the benchmarks in the MLDBM::Sync documentation.
- Use BerkeleyDB instead of DB_File. It is a newer interface to the Berkeley DB library and may perform better.
- Use Cache::FileBackend, IPC::MM, or Cache::Mmap instead of a dbm file.
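If you want to try the SDBM_File suggestion, it ships with core Perl, so switching is mostly a matter of changing the `tie` line. A minimal sketch (the filename `demo` and the stored key are placeholders, and note the record-size limit mentioned above):

```perl
use strict;
use warnings;
use SDBM_File;
use Fcntl qw(O_RDWR O_CREAT);

# Tie a hash to an SDBM database (creates demo.pag / demo.dir on disk).
# SDBM records are small: key plus value must fit in about 1 KB.
tie my %db, 'SDBM_File', 'demo', O_RDWR | O_CREAT, 0666
    or die "Cannot tie SDBM file: $!";

$db{answer} = 42;          # stored on disk, not just in memory
print "$db{answer}\n";

untie %db;                 # flush and close the database
```

Swapping in BerkeleyDB or DB_File is the same pattern with a different class name and open arguments, which is what makes the tie interface convenient for this kind of benchmarking.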