The OS-level caching I mentioned was simply good old file-level caching: if your data store is held in files accessed through the filesystem (as is the case for simple databases like GDBM, flat files, etc.), then often-used data is kept around in RAM by the OS buffer cache, and that cache is shared between processes.
You still spend some CPU cycles doing lookups and so on, but you avoid the disk I/O, which is a win.
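For what it's worth, here is a minimal sketch of what that looks like with GDBM. The script does nothing clever; the sharing comes for free from the kernel's file cache. The file name and keys are made up for the example.

    #!/usr/bin/perl
    # Read-only lookups against a GDBM file. Repeated lookups across many
    # processes hit the same cached blocks in RAM; only the first access
    # per block costs real disk I/O.
    use strict;
    use warnings;
    use GDBM_File;

    tie my %stock, 'GDBM_File', 'stock.gdbm', &GDBM_READER, 0640
        or die "Cannot open stock.gdbm: $!";

    for my $sku (qw(A100 A200 A300)) {
        print "$sku => ",
              defined $stock{$sku} ? $stock{$sku} : '(missing)', "\n";
    }

    untie %stock;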
OK, so your back-end data store is in a database which you want to protect from the load your hits are likely to generate. Do you know for certain this is going to be a problem? If not, can you simulate some load to find out?
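One rough way to simulate load is simply to fork a handful of HTTP clients against the page in question and time them. A sketch, assuming LWP is installed; the URL and request counts are placeholders:

    #!/usr/bin/perl
    # Fork a few clients that hammer one URL, then report elapsed time.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use Time::HiRes qw(time);

    my $url      = 'http://localhost/some/page';
    my $clients  = 5;
    my $requests = 200;

    my $start = time;
    for (1 .. $clients) {
        defined(my $pid = fork) or die "fork failed: $!";
        next if $pid;                       # parent keeps forking
        my $ua = LWP::UserAgent->new;
        $ua->get($url) for 1 .. $requests;  # child does its share of hits
        exit 0;
    }
    wait for 1 .. $clients;                 # reap all the children
    printf "%d requests in %.2f seconds\n",
           $clients * $requests, time - $start;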
Presumably you don't want to cache writes - you want them to go straight to the DB.
So you want a caching layer in front of your data that is shared amongst processes and invalidated correctly as data is written.
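As a sketch of that pattern: reads go through a shared, file-backed cache, writes go straight to the DB and knock out the stale entry. Cache::FileCache (from the Cache::Cache distribution) is just one possible shared store, and fetch_row()/update_row() stand in for whatever your real DB calls are.

    # Read-through cache shared via files on disk, so every Apache child
    # sees the same cached copies.
    use strict;
    use warnings;
    use Cache::FileCache;

    my $cache = Cache::FileCache->new({
        namespace          => 'product',
        default_expires_in => 300,          # safety net, in seconds
    });

    sub cached_fetch {
        my ($dbh, $id) = @_;
        my $row = $cache->get($id);
        return $row if defined $row;        # cache hit, no DB work
        $row = fetch_row($dbh, $id);        # cache miss, go to the DB
        $cache->set($id, $row);
        return $row;
    }

    sub write_row {
        my ($dbh, $id, $data) = @_;
        update_row($dbh, $id, $data);       # writes go straight to the DB
        $cache->remove($id);                # invalidate; next read refetches
    }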
I don't know which DB you are using, but I would imagine most have such a caching layer built in. If that isn't possible, or it doesn't stand up to your load, then the route I would probably choose is to funnel all the DB access through one daemon process, which can implement your caching strategy and hold a single copy of the cached info.
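A bare-bones version of that daemon idea might look like this. The port, the one-line "GET <id>" protocol, and fetch_row() are all invented for illustration; the point is that exactly one process talks to the DB and owns the cache.

    #!/usr/bin/perl
    # Single process answers lookup requests over a socket and keeps
    # one in-memory cache; the web processes become thin clients.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my %cache;
    my $server = IO::Socket::INET->new(
        LocalPort => 8765,
        Listen    => 10,
        Reuse     => 1,
    ) or die "Cannot listen: $!";

    while (my $client = $server->accept) {
        my $request = <$client>;
        if (defined $request && $request =~ /^GET\s+(\S+)/) {
            my $id = $1;
            $cache{$id} = fetch_row($id) unless exists $cache{$id};
            print {$client} "$cache{$id}\n";
        }
        close $client;
    }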
But I wouldn't do that until it was clear I couldn't scale the DB to meet the load, or do something else first, say, regularly building read-only summaries of the complex queries.
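For example, a summary-building script kicked off from cron might look roughly like this; the DSN, query, and file names are placeholders, and the web processes would just read the Storable file instead of running the aggregate themselves.

    #!/usr/bin/perl
    # Run the expensive query on a schedule and dump the result where the
    # web processes can read it cheaply.
    use strict;
    use warnings;
    use DBI;
    use Storable qw(nstore);

    my $dbh = DBI->connect('dbi:mysql:shop', 'user', 'pass',
                           { RaiseError => 1 });

    # The aggregate the web pages would otherwise run on every hit.
    my $summary = $dbh->selectall_arrayref(
        'SELECT category, COUNT(*), SUM(price) FROM items GROUP BY category'
    );

    # Write to a temp file and rename so readers never see a partial file.
    nstore($summary, '/tmp/summary.new');
    rename '/tmp/summary.new', '/tmp/summary.stor'
        or die "rename failed: $!";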
I guess it all depends on the specifics of your data structures; sorry to be vague. There is a good article describing a scaling-up operation at WebTechniques which I found informative.