Off the top of my head:
in reply to Re: Processing ~1 Trillion records
1) Your SQL query is doing a dynamic hash inner join. You may get better results by making sure the join columns (as well as your selection criteria) are indexed fields.
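For example, a minimal sketch of what that would look like; the table and column names here are hypothetical, since the original post doesn't show the schema:

```sql
-- Index the columns used in the join condition on both sides,
-- plus any columns used in the WHERE clause, so the planner can
-- use index lookups instead of building a giant in-memory hash.
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
CREATE INDEX idx_customers_id       ON customers (id);
CREATE INDEX idx_orders_order_date  ON orders (order_date);
```

Run EXPLAIN on your query before and after to confirm the planner is actually picking up the indexes.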
2) You're essentially slurping in DB records to mark up the fields and dump them to files. If there is any way you can get around reading a trillion records into a hash (e.g., ORDER BY in the database... with the ordered fields indexed), then you can read/markup/write the records retail without thrashing RAM/swap.
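Here's a rough sketch of the retail read/markup/write loop. It's in Python with SQLite purely for illustration (the same pattern works in Perl with DBI and a fetchrow loop); the table, columns, and markup format are all hypothetical:

```python
import sqlite3

def stream_and_markup(conn, out_path):
    """Stream rows in key order and write marked-up lines one at a
    time, instead of slurping every record into a hash first."""
    cur = conn.cursor()
    # ORDER BY on an indexed column pushes the sorting to the database,
    # so the client only ever holds a small batch of rows in memory.
    cur.execute("SELECT id, name FROM records ORDER BY id")
    with open(out_path, "w") as out:
        for rec_id, name in cur:  # cursor iteration fetches lazily
            out.write(f"<rec id='{rec_id}'>{name}</rec>\n")
```

The key point is that memory use stays flat no matter how many rows come back, because each row is written out as soon as it's marked up.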
Well, that's my $.02 worth. No, for refunds you'll have to check our customer service department.