Re: 15 billion row text file and row deletes - Best Practice?

by mattr (Curate)
on Dec 02, 2006 at 13:35 UTC ( #587395=note )


in reply to 15 billion row text file and row deletes - Best Practice?

Hi, BrowserUK's reply looks quite good.

I just would like to chip in that if you are willing to chunk your data file into 100 MB bites, then based on my timing of unix sort on a 100 MB file of 9-digit numbers, it would cost you about 100 hours to sort all the data first. If you can ensure fixed-length columns at the same time, it will help.
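For illustration, here is a rough Perl sketch of the splitting step, chunking on line boundaries so each piece can be sorted independently with unix sort. The file name big.txt and the 100 MB target are just placeholders; in practice something like GNU split with a byte limit per chunk would do the same job.

#!/usr/bin/perl
use strict;
use warnings;

# Split a huge newline-delimited file into ~100 MB chunks on line
# boundaries, so each chunk can be sorted independently before the
# delete pass. File name and chunk size are assumptions.
my $chunk_size = 100 * 1024 * 1024;    # target bytes per chunk
my ($n, $bytes, $out) = (0, 0, undef);

open my $in, '<', 'big.txt' or die "big.txt: $!";
while (my $line = <$in>) {
    if (!defined $out || $bytes >= $chunk_size) {
        close $out if defined $out;
        open $out, '>', sprintf('chunk_%04d.txt', $n++)
            or die "chunk: $!";
        $bytes = 0;
    }
    print {$out} $line;
    $bytes += length $line;
}
close $out if defined $out;
close $in;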

You really do not want to do any extra disk I/O, since on my machine it takes 7 hours just to read the whole file once. If you have a sorted kill list that is chunked according to the key extents of each data chunk, you also only have about 1/40th of the kill list to worry about per chunk (if the distribution is even).
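To make that concrete, here is a rough Perl sketch of the per-chunk delete pass. It assumes each data chunk and the kill list are sorted on a fixed-width key in the first field; the file names and key format are illustrative only, not the OP's actual layout.

#!/usr/bin/perl
use strict;
use warnings;

# Delete rows from one sorted data chunk using only the slice of a
# sorted kill list that falls inside the chunk's key range.
# File names and key format (fixed-width first field) are assumptions.
my ($chunk, $kill, $out) = ('chunk_0001.txt', 'kill.sorted', 'chunk_0001.keep');

# Find the chunk's first and last keys (the chunk is sorted by key).
open my $c, '<', $chunk or die "$chunk: $!";
my $first = <$c>;
defined $first or die "empty chunk";
my ($lo) = split ' ', $first;
my $hi = $lo;
while (<$c>) { ($hi) = split ' ', $_ }
close $c;

# Load only the kill-list entries within [$lo, $hi]; because the kill
# list is sorted, we can stop reading as soon as we pass $hi.
# String comparison is fine for fixed-width keys.
my %kill;
open my $k, '<', $kill or die "$kill: $!";
while (my $key = <$k>) {
    chomp $key;
    next if $key lt $lo;
    last if $key gt $hi;
    $kill{$key} = 1;
}
close $k;

# Single sequential pass: copy every row whose key is not marked.
open $c, '<', $chunk or die "$chunk: $!";
open my $o, '>', $out or die "$out: $!";
while (my $line = <$c>) {
    my ($key) = split ' ', $line;
    print {$o} $line unless $kill{$key};
}
close $o;
close $c;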

