DarkBlue has asked for the wisdom of the Perl Monks concerning the following question:

I have written a database in Perl 5.6 using GDBM_File. The records could be quite large, as I have built no limits into the database on either record size or number of records.
The database is accessed through GDBM_File as a tied hash, and it runs on a web-server via CGI.
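For reference, the setup is essentially this (a minimal sketch; the filename, permissions, and record key here are made up for illustration):

```perl
use strict;
use warnings;
use GDBM_File;

# Tie a hash to a GDBM database file; GDBM_WRCREAT opens it
# read-write and creates the file if it doesn't already exist.
my %db;
tie %db, 'GDBM_File', 'records.db', GDBM_WRCREAT, 0640
    or die "Cannot tie records.db: $!";

# Records are stored and fetched exactly as with an ordinary hash.
$db{'record1'} = 'Hello from the workgroup database';
print $db{'record1'}, "\n";

untie %db;
```

Every assignment and lookup goes through the tie interface to the GDBM library rather than an in-memory hash.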

1) Are there any good reasons why I should limit the size and/or number of records? Obviously smaller records would be quicker to download, but these are text only, so even quite large records are manageable.
2) When using tied hashes, is the entire database file loaded into memory each time I access it, or does Perl load only the record(s) being used at that particular time? My main concern is that, if the former applies, then as the database grows larger and multiple users access it simultaneously, the web-server could quickly run out of memory (with, I imagine, disastrous effects). Using the database would also become painfully slow once the volume of records is large.

As it stands, it's running very quickly, with instantaneous response; but it's only being used by a small workgroup and contains fewer than a hundred records. We estimate that between ten and twenty new records could be added each day. We also plan to make the database accessible outside our workgroup, to anyone with a web-browser. We have no idea how many simultaneous users it might then need to serve, but we're predicting about a thousand users a day (users, not accesses).
Before I proceed any further with the database in this form, I need to know that it's not going to fall over or become unusable due to a lack of speed.
I know I should probably use Oracle or SQL (et al), but I'm unfamiliar with both products and would rather stick with what I know at the moment (we can migrate to something more "industrial" once our concept is proven - should that prove necessary).
Any help would be appreciated.