in reply to Improving Memory Efficiency with Storable and Hashes of Scalars (code)
It sounds like a "sequential write" is exactly what you need here...
Why not just loop over the source files and append each one to your output file?
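A minimal sketch of that append loop, reading in fixed-size chunks so memory use stays constant no matter how big the files are (the filenames here are illustrative, not from your post):

```perl
use strict;
use warnings;

# Stream each source file onto the end of one output file.
# Chunked reads keep memory flat even for very large files.
sub append_files {
    my ($sources, $output) = @_;
    open my $out, '>>', $output or die "Can't append to $output: $!";
    for my $src (@$sources) {
        open my $in, '<', $src or die "Can't read $src: $!";
        my $buf;
        print {$out} $buf while read $in, $buf, 65536;
        close $in;
    }
    close $out or die "Can't close $output: $!";
}

# e.g. append_files([glob '*.txt'], 'combined.txt');
```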
If that's not feasible for some reason, something like Berkeley DB could very well be a solution. But given the sizes you're talking about (600M of plain text, plus presumably some amount of metadata, eg. filenames, plus the overhead and index of the DB file itself), you might run into the maximum file size of the DB format (IIRC, a DB_File database can only be 2G) in the very near future.
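If you do go that route, the usual approach is to tie a hash to an on-disk Berkeley DB file so the data lives on disk instead of in memory. A rough sketch, assuming the `DB_File` module is available (the filename and key are made up for illustration):

```perl
use strict;
use warnings;
use DB_File;
use Fcntl qw(O_RDWR O_CREAT);

# Tie %texts to an on-disk Berkeley DB hash; reads and writes
# go to 'texts.db' rather than process memory.
my %texts;
tie %texts, 'DB_File', 'texts.db', O_RDWR | O_CREAT, 0644, $DB_HASH
    or die "Can't tie texts.db: $!";

# Store and fetch exactly like an ordinary hash.
$texts{'some/file/name'} = 'file contents here';

untie %texts;
```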
I assume there's a driving reason this isn't being dumped into binary records in Oracle?
Um, good luck regardless!