Re: 15 billion row text file and row deletes - Best Practice?

by serf (Chaplain)
on Dec 02, 2006 at 15:34 UTC (#587415)


in reply to 15 billion row text file and row deletes - Best Practice?

Wow! What an eye-catching question, good one!

I would be wary of using grep, which bsdz and sgt have mentioned.

If you have a look at

grep -vf exclude_file to_thin_file in perl

you will see that Perl can do this much faster and with less memory than grep if the script is written efficiently.
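Roughly, the idea there is to load the keys you want to drop into a hash and stream the big file past it. A minimal sketch along those lines (the file names are placeholders, and it assumes the exclude file lists exact lines to drop rather than regex patterns):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Load every line we want to drop into a hash for O(1) lookups.
    my %exclude;
    open my $ex, '<', 'exclude_file' or die "exclude_file: $!";
    while (<$ex>) {
        chomp;
        $exclude{$_} = 1;
    }
    close $ex;

    # Stream the big file once, printing only the rows we keep.
    open my $in, '<', 'to_thin_file' or die "to_thin_file: $!";
    while (my $line = <$in>) {
        chomp(my $key = $line);
        print $line unless $exclude{$key};
    }
    close $in;

That way only the exclude list is held in memory, and the 15-billion-row file is read sequentially in a single pass.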

My workmate swears by DBD::CSV, but I haven't used it myself.
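For what it's worth, a minimal DBD::CSV sketch looks something like this (untested by me; the file, table, and column names are made up, and it expects a header row in the CSV):

    use strict;
    use warnings;
    use DBI;

    # Treat the CSV files in f_dir as SQL tables.
    my $dbh = DBI->connect("dbi:CSV:", undef, undef, {
        f_dir      => ".",
        f_ext      => ".csv/r",
        RaiseError => 1,
    }) or die $DBI::errstr;

    # Hypothetical: delete rows from rows.csv whose id is on the kill list.
    my @ids_to_delete = (42, 99);
    $dbh->do("DELETE FROM rows WHERE id = ?", undef, $_) for @ids_to_delete;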

Personally, I think I'd feel safer writing to a new file in case anything went wrong while it was writing back. If it's running for a week, that's a long time to risk a crash!
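Something like this shape, say (again the names are placeholders, and the filter sub is a stand-in for the exclude-hash lookup above):

    use strict;
    use warnings;

    open my $in,  '<', 'to_thin_file'     or die "to_thin_file: $!";
    open my $out, '>', 'to_thin_file.new' or die "to_thin_file.new: $!";

    while (my $line = <$in>) {
        print {$out} $line unless wants_deleting($line);
    }

    close $in;
    close $out or die "incomplete write: $!";

    # Only replace the original once the new copy finished cleanly,
    # so a crash mid-run leaves the original file untouched.
    rename 'to_thin_file.new', 'to_thin_file' or die "rename: $!";

    sub wants_deleting { 0 }    # stand-in for the exclude-hash lookup

The cost is the disk space for a second copy of the file, but the original is never at risk while the job runs.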

