PerlMonks |
Re: Saving big blessed hashes to disk
by lwknet (Initiate) on Jul 24, 2005 at 12:56 UTC ( [id://477557]=note )
A DB is more than able to handle your load (around 20-30 requests/s).
I've tested my own multi-threaded in-memory storage caching / recursive / authoritative name server (about 10% finished as of now), which uses shared variables. It can retrieve a 512-byte variable more than 1,000,000 times/s running in a VPS; even with the overhead of seeking the right memory pointer to access, plus recv() and send(), that is still enough to saturate a 10mbit line (and consider that I'm in a VPS). In my layer-5 DNS packet load balancer the figure is doubled.

In my benchmark, accessing/writing shared variables is 20% slower than private ones, but you will only start to notice the difference after something like 500,000 accesses.

The key to successfully using an in-memory DB is an efficient data structure that minimizes the overhead of accessing and writing memory, with indexes built to help seek the desired data. The worst case is to write to memory exactly the format stored on your disk. It took me a couple of days just to figure out the best data structure (that I know of) for my app.

Also, my humble memory usage benchmark shows that a multi-dimensional array saves ~5% memory over a single-dimensional one, though the way I prefer to write it is still not the best practice (at least as far as Perl is concerned). If you have tons of short strings to store, grouping them together in one scalar and using substr() to access your range of bytes helps reduce memory consumption by about 90%. Even that is still not what I'd consider a production-level memory storage solution: the best result I got was storing each set of data in chunks of 500+ bytes (not 1024, 2048, 4096, etc.; those are not optimal in Perl). With that I managed to use only 65MB of memory to store 50MB of data from disk, and access speed is not affected by the size of your in-memory DB at all.
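The shared-variable access pattern mentioned above can be sketched roughly like this: one 512-byte record made visible to all threads with threads::shared, with each worker reading it. This is a minimal illustration, not the author's server code, and the ~20% shared-vs-private overhead figure is the author's own measurement, not reproduced here.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

# One 512-byte record shared across threads, as in the post's
# name-server cache (hypothetical stand-in for the real data).
my $record : shared = 'x' x 512;

# Each worker thread reads the shared record and reports its length.
my @workers = map {
    threads->create(sub { length $record });
} 1 .. 4;

my $total = 0;
$total += $_->join for @workers;
print "$total\n";   # 4 threads each saw the full 512-byte record
```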
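The "group short strings together in one scalar and use substr()" trick described above can be sketched as fixed-width slots inside a single pool scalar. The names here (FIELD_LEN, $pool, put, get) are invented for illustration; the original post's actual code was lost to formatting.

```perl
#!/usr/bin/perl
use strict;
use warnings;

use constant FIELD_LEN => 16;   # fixed-width slot per record (assumed size)

my $pool = '';                  # one big scalar instead of an array of strings

sub put {
    my ($idx, $value) = @_;
    # grow the pool so the target slot exists before writing into it
    $pool .= "\0" x FIELD_LEN while length($pool) < ($idx + 1) * FIELD_LEN;
    # overwrite the slot in place, space-padded to the slot width
    substr($pool, $idx * FIELD_LEN, FIELD_LEN) = pack('A' . FIELD_LEN, $value);
}

sub get {
    my ($idx) = @_;
    my $v = substr($pool, $idx * FIELD_LEN, FIELD_LEN);
    $v =~ s/[\s\0]+\z//;        # strip the padding
    return $v;
}

put(0, 'example.com');
put(1, 'perlmonks.org');
print get(1), "\n";   # prints "perlmonks.org"
```

Because every record lives in one scalar, Perl pays the per-scalar bookkeeping overhead once instead of once per string, which is where the memory savings come from.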
The ability to handle structured integers at the bit level also helps a lot.

Using an in-memory DB for just 20-30 requests/s is simply overkill and a waste of time; you probably want mod_perl or a custom-built server daemon instead. MySQL running on an average system should handle 10 times your load :)

20050724 Edit by ysth: p, code tags
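Handling integers at the bit level, as suggested above, can be done in Perl with vec(), which treats a string as an array of fixed-width bit fields. The 4-bit field layout below is invented for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pack small structured integers into bits with vec(): here each
# record gets a 4-bit field, so two records share a single byte.
my $flags = '';

vec($flags, 0, 4) = 9;   # record 0 holds the value 9
vec($flags, 1, 4) = 3;   # record 1 holds the value 3

printf "record 0: %d, record 1: %d, bytes used: %d\n",
    vec($flags, 0, 4), vec($flags, 1, 4), length $flags;
# two 4-bit fields fit in a single byte
```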
In Section
Seekers of Perl Wisdom