Answer: Where is the bottleneck in flat file databases?
"Classic" databases do their own data management within a reserved space (usually called a "tablespace") on disk. Internally, this might not differ very much from how a filesystem works, where the data and directory listings (e.g. table rows and indices) are organized in some sort of tree. When you insert, update or delete data, only a small part of it needs to be rewritten. A flat file, by contrast, usually has to be rewritten in full for every change — that is the bottleneck.
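To see the difference, here is a hypothetical sketch (file names and record layout are made up) of what updating a single record in a colon-delimited flat file costs: every other record has to be copied along with it, whereas a real database would only touch the affected pages.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Build a small flat file of "id:name" records.
my ($fh, $file) = tempfile();
print {$fh} "$_:user$_\n" for 1 .. 1000;
close $fh;

# To change record 500 we must copy *every* line into a new file.
my ($out, $tmp) = tempfile();
open my $in, '<', $file or die "open: $!";
while ( my $line = <$in> ) {
    $line = "500:renamed\n" if $line =~ /^500:/;
    print {$out} $line;    # all 1000 records get rewritten
}
close $in;
close $out;
rename $tmp, $file or die "rename: $!";

open my $check, '<', $file or die "open: $!";
my ($updated) = grep { /^500:/ } <$check>;
close $check;
print $updated;
```

The work is proportional to the whole file size, not to the size of the change — which is exactly what tree-structured storage avoids.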
Classic databases are usually developed over many years: they get tweaked, tuned, and optimized, with critical loops rewritten in low-level, high-performance code.
Last but not least, many databases are also optimized to cache the most critical and most frequently used data and indices in RAM.
Unless there are very specific project requirements (compatibility with other programs, customer requirements), you should consider using a "real" database.
While there are many "big" databases out there, like PostgreSQL, MySQL, Oracle, and Microsoft SQL Server, there are also lighter alternatives that you might find useful, depending on your requirements.
For example, if you want a light, portable system where you can just copy the data file to another computer, DBD::SQLite may be a good fit.
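A minimal sketch of that, assuming the DBI and DBD::SQLite CPAN modules are installed (they are not core modules). The whole database lives in one ordinary file, which is what makes it so easy to move around:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# demo.db is a throwaway file name for this sketch.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=demo.db', '', '',
    { RaiseError => 1, AutoCommit => 1 } );

$dbh->do('CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)');
$dbh->do( 'INSERT INTO users (name) VALUES (?)', undef, 'alice' );

my ($name) = $dbh->selectrow_array(
    'SELECT name FROM users WHERE name = ?', undef, 'alice' );
print "$name\n";

$dbh->disconnect;
unlink 'demo.db';    # clean up the demo file
```

Because SQLite goes through the standard DBI interface, switching to a "big" database later is mostly a matter of changing the connect string.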
If you are doing mostly key/value storage (where the "values" can be complex datasets as well), you could also take a look at NoSQL databases such as CouchDB.
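CouchDB itself is accessed over HTTP, but you can get a small local taste of the key/value idea with core modules only: SDBM_File for the store and Storable to freeze a complex data structure into a string value. A sketch (key names invented for illustration):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use SDBM_File;
use Fcntl qw(O_RDWR O_CREAT);
use Storable qw(freeze thaw);
use File::Temp qw(tempdir);

my $dir = tempdir( CLEANUP => 1 );
tie my %store, 'SDBM_File', "$dir/kv", O_RDWR | O_CREAT, 0644
    or die "tie: $!";

# The value is an arbitrary Perl data structure, serialized to a string.
$store{'user:1'} = freeze( { name => 'alice', roles => ['admin'] } );

my $user = thaw( $store{'user:1'} );
print "$user->{name}\n";

untie %store;
```

Note that SDBM_File limits each key/value pair to roughly a kilobyte; for larger values you would reach for a heavier DBM variant or a real key/value database.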