in reply to Re: Efficient way to handle huge number of records?
in thread Efficient way to handle huge number of records?

Any DB that couldn't handle that few records would not be worthy of the name. Even MySQL or SQLite should easily handle low billions of records without trouble.

I would be quite interested to see SQLite do this. (I may even try it myself...)

In the past (the last time I tried was, I think, a couple of years ago) SQLite always proved prohibitively slow: loading multimillion-row datasets was so ridiculously slow, even on fast hardware, that I never bothered with it further.
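
For what it's worth, the usual explanation for slow bulk loads is per-row autocommit: every INSERT becomes its own transaction, and hence its own fsync. A rough sketch of the kind of loader I would benchmark first (the table name, schema, row count and batch size are made up for illustration; note that PRAGMA synchronous = OFF trades crash safety for speed):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=bulk.db', '', '',
        { RaiseError => 1, AutoCommit => 1 } );

    $dbh->do('PRAGMA synchronous = OFF');     # skip fsync on commit; unsafe on crash
    $dbh->do('PRAGMA journal_mode = MEMORY'); # keep the rollback journal off disk
    $dbh->do('CREATE TABLE IF NOT EXISTS records ( id INTEGER PRIMARY KEY, payload TEXT )');

    my $sth = $dbh->prepare('INSERT INTO records ( id, payload ) VALUES ( ?, ? )');

    $dbh->begin_work;    # one big transaction instead of one per row
    for my $id ( 1 .. 10_000_000 ) {
        $sth->execute( $id, "row $id" );
        # Commit in batches so the journal does not grow without bound.
        if ( $id % 100_000 == 0 ) {
            $dbh->commit;
            $dbh->begin_work;
        }
    }
    $dbh->commit;
    $dbh->disconnect;

The SQLite FAQ puts fsync-bound inserts at a few dozen per second on rotating disks, versus tens of thousands per second inside a transaction, so batching alone should change the picture by a couple of orders of magnitude.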

I'd love to hear that this has improved; SQLite is nice when it works. Does anyone have recent data points?

(As far as I am concerned, MySQL and BerkeleyDB, as Oracle products, are no longer a serious option; I am convinced Oracle will keep making things worse for non-paying users. But I am still interested to know how their performance, or Oracle's itself for that matter, compares to PostgreSQL.)