A 2GB filesize limit is definitely a problem with the big file approach. Two possible ways to avoid this if you still want to go this way:
- the obvious: split the big file up into n files. This would also make the "growing" operation less expensive
- if some subfiles aren't growing much at all, you could actually decrease the size allocated to them at the same time you do the grow operation.
Actually, if you wanted to get really spiffy, you could have it automatically split the big file in half when it hits some threshold...then split any sub-big files as they hit the threshold, etc...
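The threshold-split idea could be sketched roughly like this. Everything here is illustrative, not from any real library: `RECORD_SIZE`, `SPLIT_THRESHOLD`, and `split_segment` are made-up names, and real code would need to update whatever index maps keys to segment files.

```python
import os

# Sketch of the recursive threshold-split idea: when a segment file
# exceeds SPLIT_THRESHOLD bytes, split it in half on a record boundary,
# then let each half split again if it is still over the threshold.
# All names and sizes here are hypothetical, chosen small for the demo.

RECORD_SIZE = 64                    # fixed-size records for simplicity
SPLIT_THRESHOLD = 2 * RECORD_SIZE   # tiny threshold so the demo actually splits

def split_segment(path):
    """If `path` exceeds the threshold, split it into path.0 and path.1,
    recursing until every resulting segment is under the threshold."""
    size = os.path.getsize(path)
    if size <= SPLIT_THRESHOLD:
        return [path]               # small enough, leave it alone
    # find the midpoint, rounded down to a whole record
    half_records = (size // RECORD_SIZE) // 2
    mid = half_records * RECORD_SIZE
    with open(path, "rb") as f:
        first, second = f.read(mid), f.read()
    halves = [path + ".0", path + ".1"]
    for name, data in zip(halves, (first, second)):
        with open(name, "wb") as out:
            out.write(data)
    os.remove(path)
    # recurse so any half still over the threshold keeps splitting
    return [p for h in halves for p in split_segment(h)]

if __name__ == "__main__":
    with open("bigfile.dat", "wb") as f:
        f.write(b"x" * (RECORD_SIZE * 4))   # 4 records, over the threshold
    print(split_segment("bigfile.dat"))
```

In a real version the threshold would be somewhere safely under the 2GB limit, and you'd want splits to happen on record (or sub-file) boundaries so no entry straddles two segments.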
BerkeleyDB is definitely sounding easier...but I still think this would be a lot of fun to write! (Might be a good Meditation topic...there are times when you might want to just DIY because it would be fun and/or a good learning experience.)