Re: 15 billion row text file and row deletes - Best Practice?
by serf (Chaplain) on Dec 02, 2006 at 15:34 UTC ( [id://587415] )
Wow, what an eye-catching question - good one!

I would be wary of using grep, as bsdz and sgt have mentioned. If you have a look at grep -vf exclude_file to_thin_file in perl you will see that Perl can do this much faster and with less memory than grep, if the script is written efficiently.

My workmate swears by DBD::CSV, but I haven't used it myself.

Personally I think I'd feel safer writing to a new file, in case anything went wrong while the script was writing back - if it's running for a week, that's a long time to risk a crash!
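For what it's worth, here's a minimal sketch of the hash-lookup approach described in that node (the file names are placeholders, and it assumes each line of the exclude file matches a whole line of the big file, and that the exclude list fits in memory):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Load the lines to delete into a hash for O(1) lookups.
    my %exclude;
    open my $ex, '<', 'exclude_file' or die "exclude_file: $!";
    while (my $line = <$ex>) {
        chomp $line;
        $exclude{$line} = 1;
    }
    close $ex;

    # Stream the big file once, writing survivors to a NEW file so
    # the original is untouched if the job dies partway through.
    open my $in,  '<', 'to_thin_file'     or die "to_thin_file: $!";
    open my $out, '>', 'to_thin_file.new' or die "to_thin_file.new: $!";
    while (my $line = <$in>) {
        chomp(my $key = $line);
        print {$out} $line unless exists $exclude{$key};
    }
    close $in;
    close $out or die "close: $!";

You'd only rename to_thin_file.new over the original after the run finishes cleanly, so a week-long job that crashes costs you time but not data.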