PerlMonks  

Re: Saving big blessed hashes to disk

by lwknet (Initiate)
on Jul 24, 2005 at 12:56 UTC ( [id://477557] )


in reply to Saving big blessed hashes to disk

A DB is more than able to handle your load (around 20-30 requests/s).

I've tested my own multi-threaded, in-memory caching / recursive / authoritative name server (about 10% finished as of now), which uses shared variables: it can retrieve a 512-byte variable more than 1,000,000 times/s running in a VPS. Even with the overhead of seeking the right memory offset to access, plus recv() and send(), that's still enough to saturate a 10 Mbit line (and consider I'm in a VPS). In my layer 5 DNS packet load balancer the figure is doubled.

In my benchmark, accessing/writing shared variables is about 20% slower than private ones; you only start to notice the difference after something like 500,000 accesses.
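A minimal sketch of that shared-vs-private comparison using the core Benchmark module; the 20% figure above is from the author's own setup, and your numbers will vary with perl build and hardware:

```perl
use strict;
use warnings;
use threads;          # must be loaded before threads::shared for :shared to take effect
use threads::shared;
use Benchmark qw(cmpthese);

# A plain lexical vs. a :shared scalar of the same size.
my $plain = 'x' x 512;
my $shared_var :shared = 'x' x 512;

# Read each one repeatedly and compare rates.
cmpthese(500_000, {
    plain  => sub { my $copy = $plain },
    shared => sub { my $copy = $shared_var },
});
```

cmpthese prints a small table of rates for each label, which makes the relative slowdown of the shared scalar easy to eyeball.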

The key to successfully using an in-memory DB is an efficient data structure that minimizes the overhead of reading and writing memory, plus indexes that help seek the desired data. The worst case is to write to memory in exactly the format stored on your disk. It took me a couple of days just to figure out the best data structure (that I know of) for my app.

Also, my humble memory usage benchmark shows that a multi-dimensional array saves ~5% memory over a single-dimensional one. Instead of

    $array[0] = 'xxx';
    $array[1] = 'xxx';
    ...

I prefer to write it as

    $array->[0][0] = 'xxx';
    $array->[0][1] = 'xxx';
    $array->[1][0] = 'abc';
    ...
The above is still not the best practice (at least in Perl). If you have tons of short strings like

    'xxx' 'abc'

to store, grouping them together in a single scalar

    'xxxabc'

and using substr() to access your range of bytes helps reduce memory consumption by up to 90%. Even that is still not what I'd consider a production-level memory storage solution; the best result I got was to store a set of data every 500+ bytes (and notably not at 1024, 2048, 4096, etc. in Perl), which makes it

    'abc123xxxabc123xxx..........'

I managed to take up only 65 MB to store 50 MB of data from disk, and the access speed is not affected by the size of your in-memory DB at all. The ability to handle some structured integers at the bit level also helps a lot.
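A minimal sketch of the packing idea described above: fixed-width records concatenated into one big scalar, sliced out with substr(). The record width and helper names here are hypothetical, not from the post:

```perl
use strict;
use warnings;

# Many tiny Perl scalars each carry per-SV overhead that dwarfs a
# 3-byte payload; one big buffer amortizes that overhead away.
my $REC_LEN = 6;     # hypothetical fixed record width
my $store   = '';    # the single packed scalar

sub put_rec {
    my ($idx, $rec) = @_;
    die "record too long\n" if length($rec) > $REC_LEN;
    # Grow the buffer if this slot doesn't exist yet.
    my $need = ($idx + 1) * $REC_LEN;
    $store .= ' ' x ($need - length $store) if length($store) < $need;
    # 4-arg substr replaces the slot in place, space-padded to width.
    substr($store, $idx * $REC_LEN, $REC_LEN,
           sprintf("%-*s", $REC_LEN, $rec));
}

sub get_rec {
    my ($idx) = @_;
    my $rec = substr($store, $idx * $REC_LEN, $REC_LEN);
    $rec =~ s/\s+\z//;    # strip the padding
    return $rec;
}

put_rec(0, 'xxx');
put_rec(1, 'abc123');
print get_rec(0), "\n";   # xxx
print get_rec(1), "\n";   # abc123
```

Space-padding keeps every record addressable by index alone, which is what makes access time independent of how much data the buffer holds.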

Using an in-memory DB for just 20-30 requests/s is simply overkill and a waste of time; you probably want mod_perl or a custom-built server daemon instead. MySQL on an average system should handle 10 times your load :)

20050724 Edit by ysth: p, code tags

Replies are listed 'Best First'.
Re^2: Saving big blessed hashes to disk
by b888 (Beadle) on Jul 25, 2005 at 07:50 UTC

    ...20-40 requests per second from users, and each of these requests will make 10-30 requests to some db..
    That's 200-1200 DB requests per second for now, and it will grow even more in the near future.

    Sure, MySQL is a good thing. But it just can't handle my situation (already tested).

    Why keep data in a DB/files if it's a stand-alone application? The question I'm seeking an answer to is: how do I get this data out of the application and store it to disk/a DB/other memory?

    Thanks anyway. Got to know some new information after reading :)

      You said it's a stand-alone app. If it is your own written server daemon, or any kind of daemonized app, it is easier to "get away" with your (considered slow) MySQL than with CGI/mod_perl relying on Apache. Sharing variables between Perl ithreads is pretty easy; look at the "threads" and "threads::shared" modules on CPAN. Remember that Perl generally does not return freed memory to the OS, even after undef/delete, so use big hashes with care.
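      A minimal sketch of sharing a hash between ithreads with the two modules mentioned above; the %cache name and worker count are illustrative only:

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# A hash shared across ithreads: an update made in one thread is
# visible in all the others. lock() guards the read-modify-write.
my %cache :shared;

my @workers = map {
    threads->create(sub {
        lock(%cache);
        $cache{hits} = ($cache{hits} // 0) + 1;
    });
} 1 .. 4;

$_->join for @workers;
print "hits: $cache{hits}\n";   # hits: 4
```

      Note that only the hash's top level is shared here; nested references must themselves be created with shared_clone() or share() to stay visible across threads.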
