in reply to Re^2: Processing ~1 Trillion records
in thread Processing ~1 Trillion records
Okay... after running EXPLAIN on the query to see how the DBMS is actually approaching it, I would check for indexes, then consider issuing a SELECT DISTINCT query to retrieve all of the unique keys. Then, issue a query for each marker in turn, possibly splitting that work out among processes, threads, or machines. In this way, each file can be completely finished and its in-memory data disposed of before the next request.
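The distinct-keys-then-per-marker approach above can be sketched as follows. This is a minimal illustration, not the poster's actual code: the table name `records`, the key column `marker`, the `payload` column, and the toy SQLite in-memory database are all assumptions standing in for the real trillion-row DBMS.

```python
import sqlite3

# Hypothetical schema: a table "records" keyed by a "marker" column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (marker TEXT, payload INTEGER)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [("a", 1), ("a", 2), ("b", 3), ("c", 4), ("c", 5)])

# Step 1: one pass to collect the unique keys.
markers = [row[0] for row in conn.execute("SELECT DISTINCT marker FROM records")]

# Step 2: issue one query per marker; each batch is completely finished
# and its in-memory data discarded before the next request.
totals = {}
for m in markers:
    rows = conn.execute("SELECT payload FROM records WHERE marker = ?",
                        (m,)).fetchall()
    totals[m] = sum(r[0] for r in rows)  # process, then let `rows` be freed

print(totals)  # {'a': 3, 'b': 3, 'c': 9}
```

The per-marker loop is the part that splits naturally across processes, threads, or machines: each worker takes a disjoint slice of `markers` and touches only its own rows.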
Seemingly innocuous calls such as keys can be surprisingly expensive, as can sort, when a prodigious number of keys is known to be involved. Hence, I would measure before doing any serious recoding.
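"Measure before recoding" can be as simple as timing the suspect calls on a representative data size. A minimal sketch (the 200,000-key dictionary is an arbitrary stand-in for the real data set, and the original discussion concerns Perl's keys and sort, not Python's):

```python
import time
from random import random

# Build a hash large enough that the cost of listing and sorting
# its keys is measurable.
d = {f"k{i}": random() for i in range(200_000)}

def timed(label, fn):
    """Run fn once and report wall-clock time."""
    t0 = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - t0:.4f}s")
    return result

ks = timed("list keys", lambda: list(d))   # materialize the full key list
timed("sort keys", lambda: sorted(ks))     # O(n log n); dominates as n grows
```

A few runs like this tell you whether the key handling is actually the bottleneck, or whether the time is going somewhere else entirely.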
“6 days” is such an extreme runtime that, intuition tells me, there will almost certainly turn out to be “one bugaboo above all others,” and that this will be the first place, and quite probably the only place, that requires your attention.