Going through a big file [solved]
by Chuma (Beadle) on Jan 17, 2013 at 12:31 UTC
Chuma has asked for the wisdom of the Perl Monks concerning the following question:
I've got a 58 GB XML file that I need to go through and do various regex-related things with.
First I tried the obvious
I assumed that it would read one line at a time, but apparently it tries to read the whole file into memory. I don't have that much memory, in fact I don't have that much hard drive space, so that's not a good idea, and I don't see why Perl would think so.
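(The code block above did not survive, but the behaviour described is the classic list-context slurp: `foreach` evaluates the filehandle in list context, so `<$in>` returns every line at once, while `while` evaluates it in scalar context and reads one line per iteration. A sketch of both, with the filename `big.xml` assumed:)

```perl
use strict;
use warnings;

# List context: <$in> returns ALL lines at once, so foreach
# builds the entire 58 GB list in memory before the loop starts.
open(my $in, '<', 'big.xml') or die "open: $!";
foreach my $line (<$in>) {
    # ... regex work on $line ...
}
close($in);

# Scalar context: <$in> returns one line per call, so while
# holds only the current line in memory.
open($in, '<', 'big.xml') or die "open: $!";
while (my $line = <$in>) {
    # ... regex work on $line ...
}
close($in);
```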
Then I found some module called Tie::File. If I do
there is no improvement, but if instead I do
it seems to work. Unfortunately it's still a bit slow - it initially processes about 10 MB per minute, which means it would take upwards of four days to process the whole thing. And that's assuming that it actually continues at constant speed.
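(For reference, the working variant was presumably something like the following sketch, filename assumed. Tie::File presents the file's lines as an array without loading it all, but every `$lines[$i]` access still has to fetch that record from disk, which accounts for the slowness:)

```perl
use strict;
use warnings;
use Fcntl 'O_RDONLY';
use Tie::File;

# Map the file's lines onto @lines lazily; nothing is slurped.
tie my @lines, 'Tie::File', 'big.xml',
    mode   => O_RDONLY,
    memory => 20_000_000   # cache at most ~20 MB of records
    or die "tie: $!";

for my $i (0 .. $#lines) {
    my $line = $lines[$i];   # each access reads from disk
    # ... regex work on $line ...
}

untie @lines;
```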
I tried changing the program so that I can pause it and continue later, so I can run it at night or something. I have the program report which line it's on, and then take that as input the next time it starts. But of course looking up the right line number on restart takes a certain amount of time, which appears to be more than linear in the number of lines, so that's not going to work either.
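(One workaround for the resume problem: record the byte offset with `tell` instead of the line number, then `seek` straight back to it on restart. Seeking to a byte offset is constant-time, whereas finding line N means reading N lines. A sketch, with a hypothetical checkpoint file `resume.offset` and the filename assumed:)

```perl
use strict;
use warnings;

my $file       = 'big.xml';        # assumed filename
my $checkpoint = 'resume.offset';  # hypothetical checkpoint file

open(my $in, '<', $file) or die "open: $!";

# Resume: seek() to a saved byte offset is O(1), unlike
# reading forward N lines to find line N again.
if (open(my $c, '<', $checkpoint)) {
    my $offset = <$c>;
    close($c);
    if (defined $offset) {
        chomp $offset;
        seek($in, $offset, 0) or die "seek: $!";
    }
}

my $count = 0;
while (my $line = <$in>) {
    # ... regex work on $line ...

    # Checkpoint every 100_000 lines, not every line,
    # so the bookkeeping doesn't dominate the run time.
    unless (++$count % 100_000) {
        open(my $c, '>', $checkpoint) or die "open: $!";
        print $c tell($in);
        close($c);
    }
}
close($in);
```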
Is there some way to convince Perl to read one line at a time? Or some other clever workaround?