|Perl: the Markov chain saw|
Summarising then, from the various replies I have received (with thanks):
1) Whilst in the concept stage, I am more than adequately served by GDBM_File for a relatively simple database structure
2) As I do not require a relational database for this product, I can effectively ignore SQL, Oracle, Postgres, etc., as the demands of my database (even with the most optimistic predictions) simply don't justify the capabilities of those systems
3) To "industrialise" my product, BerkeleyDB is easily capable of serving my database ("multi-threaded concurrent read/write, hundreds of terabytes of data, 30,000+ accesses per second"). The server should have large file support in order to take full advantage of BerkeleyDB. Therefore I can keep the entire product non-proprietary and open-source
4) BerkeleyDB will allow me to use a BTree structure as and when required (to reduce disk accesses during use)
5) I should probably implement file-locking on the database for any write operation to maintain data integrity (despite the slight latency this will introduce)
6) The database should be backed up regularly (which goes without saying), and a plain-text backup is also probably a good idea
7) The database should run on its own dedicated server, not on a server that is also having to serve web pages and other CGI
8) When using a tied-hash/GDBM_File database model, the entire database IS NOT loaded into memory for every single read or write operation (or thread), but there will be disk accesses for every such operation. I'd therefore conclude that the server hosting this application would probably be fine with between 256 MB and 1 GB of RAM. The processor should be as fast as possible. The disk sub-system should be as fast as possible. RAID (what level?) would be best, as the fault-tolerance would benefit the "industrialised" version of this database.
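For reference, the tied-hash access pattern described in point 8 looks like the sketch below. It uses the core SDBM_File module as a stand-in (GDBM_File ties with the same interface but isn't installed everywhere); the filename "concept" and the keys are purely illustrative.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;        # for O_RDWR, O_CREAT
use SDBM_File;    # core stand-in; GDBM_File is tied the same way

# Tie %db to an on-disk DBM file. Each read or write of %db hits
# the disk; the database as a whole is never loaded into memory.
my %db;
tie %db, 'SDBM_File', 'concept', O_RDWR | O_CREAT, 0644
    or die "Cannot tie database: $!";

$db{'word:the'} = 'quick,brown';     # write -> disk access
my $successors = $db{'word:the'};    # read  -> disk access

untie %db;    # flush and close the underlying file
```

Note that SDBM_File creates a pair of files ("concept.pag" and "concept.dir"); GDBM_File uses a single file but the Perl-side code is otherwise identical.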
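The write-locking in point 5 can be done with Perl's built-in flock on an advisory lock file, so concurrent writers serialise; the lock-file name here is an assumption for illustration, and the guarded write is only a placeholder.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Serialise writers on an advisory lock file. flock blocks until the
# exclusive lock is granted -- the "slight latency" mentioned above.
open my $lock, '>', 'concept.lock' or die "Cannot open lock file: $!";
flock $lock, LOCK_EX or die "Cannot acquire write lock: $!";

# ... perform the tied-hash write here, under the lock ...
my $wrote = 1;    # placeholder standing in for the guarded write

flock $lock, LOCK_UN;    # release the lock for the next writer
close $lock;
```

Readers can either skip the lock entirely or take a shared LOCK_SH if read-during-write consistency matters.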
Does anyone disagree with any points in this summary?
Thanks again for your speedy replies and the invaluable insight you have given me.
Jonathan M. Hollin Digital-Word.com