PerlMonks
It appears, from a cursory examination of the documentation, that commits in SQLite are slow.
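The effect is easy to demonstrate with a quick sketch. This uses Python's built-in sqlite3 module rather than DBD::SQLite, purely so the example is self-contained; both drive the same engine, and the table name, row counts, and batch sizes here are made up for illustration. Committing once per row forces a disk flush on every insert, while committing every 1000 rows (as the DBD::SQLite docs suggest) amortizes that cost:

```python
import os
import sqlite3
import tempfile
import time

def insert_rows(db_path, n_rows, batch_size):
    """Insert n_rows dummy log rows, committing every batch_size rows."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS log (id INTEGER, msg TEXT)")
    for i in range(n_rows):
        conn.execute("INSERT INTO log VALUES (?, ?)", (i, "a log line"))
        # Each commit makes SQLite flush the table (and any indices) to
        # disk, so we commit once per batch instead of once per row.
        if (i + 1) % batch_size == 0:
            conn.commit()
    conn.commit()  # flush any leftover partial batch
    conn.close()

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    for batch in (1, 1000):  # commit-per-row vs. commit-per-1000-rows
        if os.path.exists(path):
            os.remove(path)
        start = time.time()
        insert_rows(path, 10_000, batch)
        print(f"batch_size={batch}: {time.time() - start:.2f}s")
```

On most machines the batch_size=1 run is dramatically slower, which matches the "otherwise the insertion is quite slow" remark quoted below.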
From the DBD::SQLite documentation: "SQLite is fast, very fast. I recently processed my 72MB log file with it, inserting the data (400,000+ rows) by using transactions and only committing every 1000 rows (otherwise the insertion is quite slow), and then performing queries on the data." (Emphasis added.)

From http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations: "When doing lots of updates/inserts on a table it is a good idea to contain them within a transaction, . . . This will make SQLite write all the data to the disk in one go, vastly increasing performance." (Emphasis added.)

Apparently, what's happening is that each commit forces a disk flush of the actual changes to the table, so that committed data survives a crash. Oracle and MySQL (using InnoDB) instead have REDO logs that capture the changes, then apply them to the tables when it's convenient or necessary. (MySQL using MyISAM tables doesn't offer this guarantee, which is why MyISAM tables can become corrupted but InnoDB tables cannot.) This constant flushing of the tables (and the attendant indices) looks to be why you have the performance issues.

Note: I don't have proof that this is what's happening, but it seems like a reasonable guess.

Being right does not endow the right to be rude; politeness costs nothing.

In reply to Re: DBD::SQLite tuning
by dragonchild