For my type of work, I load SQLite DBs once, and then do various studies against that data. I load the entire shebang in one transaction, sometimes several million records in one go.
I've never really studied the difference between the first 100,000 inserts and, say, the tenth 100,000, but that would be quite interesting to see. So, here's a benchmark:
Inserting 4096-byte strings for 15s per set, ten sets, all in one huge transaction. Final file size is 2.3 GB.
3122.67/s (n=50681)
3166.34/s (n=48825)
3165.59/s (n=50966)
3206.15/s (n=51619)
3157.13/s (n=50230)
3179.75/s (n=50399)
3188.18/s (n=50692)
3211.26/s (n=51316)
3098.32/s (n=49914)
3046.30/s (n=49807)
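For anyone who wants to reproduce this, here's a rough sketch of the benchmark as I understand it from the description above, using Python's stdlib `sqlite3`. The table name, payload, and timing parameters are my guesses, not the original harness; the demo run at the bottom uses short sets so it finishes quickly.

```python
import sqlite3
import time

def bench(db_path=":memory:", sets=10, seconds_per_set=15.0, payload_size=4096):
    """Insert fixed-size strings for `seconds_per_set` seconds per set,
    with every set inside one big transaction; returns per-set counts."""
    payload = "x" * payload_size
    # isolation_level=None puts sqlite3 in autocommit mode so we can
    # manage the transaction explicitly with BEGIN/COMMIT.
    conn = sqlite3.connect(db_path, isolation_level=None)
    conn.execute("CREATE TABLE IF NOT EXISTS t (data TEXT)")
    conn.execute("BEGIN")  # one transaction covering all ten sets
    counts = []
    for _ in range(sets):
        n = 0
        deadline = time.monotonic() + seconds_per_set
        while time.monotonic() < deadline:
            conn.execute("INSERT INTO t (data) VALUES (?)", (payload,))
            n += 1
        counts.append(n)
    conn.execute("COMMIT")
    conn.close()
    return counts

if __name__ == "__main__":
    # Quick demo with tiny sets; the run above used 10 sets of 15 s each.
    for i, n in enumerate(bench(sets=3, seconds_per_set=0.5), 1):
        print(f"set {i}: {n / 0.5:.2f}/s (n={n})")
```

The single wrapping transaction is the whole trick: without it, each INSERT pays for its own fsync and throughput drops by orders of magnitude.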
I don't really see any slowdown, but maybe I should change the test a bit. At any rate, I'm sticking with it -- IMNSHO, SQLite is one of the most underrated pieces of code out there. (When used as indicated.)