PerlMonks  

Re: speeding up row by row lookup in a large db

by Ish (Acolyte)
on Mar 21, 2009 at 17:50 UTC ( #752270=note )


in reply to speeding up row by row lookup in a large db

From the SQLite FAQ entry "INSERT is really slow - I can only do few dozen INSERTs per second":

    "Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.

    Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite.

    By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.

    Another option is to run PRAGMA synchronous=OFF. This command will cause SQLite to not wait on data to reach the disk surface, which will make write operations appear to be much faster. But if you lose power in the middle of a transaction, your database file might go corrupt."

Hope that is of use (and relevant!)
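The BEGIN...COMMIT advice from the FAQ is easy to try out. A minimal sketch, written here with Python's sqlite3 module rather than Perl's DBI (the principle is identical; with DBI you would set AutoCommit => 0 on connect and call $dbh->commit once after the inserts):

```python
import sqlite3

# Group many INSERTs into a single transaction so the cost of the
# commit (the disk sync) is paid once, not once per statement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")

rows = [(i, "value-%d" % i) for i in range(10_000)]

# The connection used as a context manager wraps the body in one
# transaction: BEGIN before, COMMIT on success, ROLLBACK on error.
with conn:
    conn.executemany("INSERT INTO kv (k, v) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
print(count)  # 10000
```

On an in-memory database the timing difference is invisible, but against an on-disk file the batched version avoids one fsync per row, which is exactly the "few dozen transactions per second" ceiling the FAQ describes.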


Re^2: speeding up row by row lookup in a large db
by rminner (Hermit) on Mar 22, 2009 at 06:11 UTC

    If the speed problems are primarily caused by rotational latency and you have some extra money to spend (~$150), an (inelegant, non-geeky) way to fix it would be to buy a fast SLC SSD (solid-state drive). I got myself the Mtron Mobi 16GB. Lacking mechanical components, its random-access performance is obviously quite good, and throughput of roughly 100MB/s for both reads and writes is also decent.

    The recommendation from the other monks to use a "normal" DBMS such as MySQL will probably also yield a decent performance gain.
