PerlMonks
I agree with you.
The OP wrote: "Greetings. I have about 250K records in a postgres db I need to process. I'm wondering if there's a way to read this in batches of 1000 records or something, to cut down on db transaction overhead."
The DB can perform many, many read operations per second. The DB, whichever one you have, will try to maintain a self-consistent state on the hard drive; see Wiki - ACID properties. There is a lot of overhead involved in a "transaction": the DB has to write to the hard disk multiple times, and the commit operation, in and of itself, is "very expensive". A commit of a million-row update doesn't necessarily take all that much longer than a commit of 100 rows.

The Perl DBI provides the statements to start and commit a transaction (turn AutoCommit off, or call begin_work, then call commit when done). Wrapping those 2 statements around any DB operations involving many "writes" will have a huge effect upon performance. There is no problem whatsoever with a 1 million line "commit". 250 commits for 250K records will slow things down considerably - perhaps even an order of magnitude!

In reply to Re^3: Posgres batch read with DBI? - ( PostgreSQL )
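The single-transaction pattern above can be sketched in a self-contained way. Since a live Postgres and Perl DBI aren't assumed here, this illustration uses Python's stdlib sqlite3 instead; the table name `t` is made up for the example. The `with conn:` block plays the role of DBI's `$dbh->begin_work` ... `$dbh->commit` pair: all 250K inserts share one commit instead of paying commit overhead per statement (or per 1000-row batch).

```python
import sqlite3

# Hypothetical table for the illustration; in the thread's scenario this
# would be a Postgres table accessed through Perl DBI/DBD::Pg.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")

# One transaction around all the writes. The sqlite3 connection used as a
# context manager commits exactly once, on successful exit of the block --
# the analogue of setting AutoCommit off and calling commit at the end.
with conn:
    conn.executemany(
        "INSERT INTO t (v) VALUES (?)",
        ((i,) for i in range(250_000)),
    )

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 250000
```

With an on-disk database the difference is dramatic: per-row (or per-small-batch) commits force repeated synchronous disk writes, while the single commit pays that cost once.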
by Marshall