PerlMonks |
Re^2: Parsing a Large file with no reason
by Cristoforo (Curate) on Jan 29, 2010 at 02:25 UTC ( [id://820285] )
It is so slow on large files because for each matching record you loop through all 80,000 lines. So if you had 4,000 matching records, that would be 4,000 * 80,000 = 320,000,000 iterations. There must be a better method, I think. I also don't know whether you can tie the same file (as an array) while it is simultaneously open for reading.

Note that I set the input record separator, $/, to '---- lsattr ' (with a space following lsattr) so the file is read one record at a time rather than line by line. Without seeing more sample data I made a guess at what might work, and it did work with your sample data. But again, it's difficult to tell.

Update: The data structure created above will only work if there is exactly one record for each sought key ($vg). If there is more than one record with the same key, the hash will keep only the fields parsed from the last such record, and it will silently give you incorrect results. That said, I would need to know more about your file to be able to suggest a suitable data structure.
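A minimal sketch of the record-at-a-time approach, with the duplicate-key problem fixed by storing an array of records per key instead of a single scalar. The record layout and the vg01/vg02 sample data here are invented for illustration; adjust the key-extraction regex to your real lsattr output:

```perl
use strict;
use warnings;

# Read one record at a time: each record in the dump starts with
# "---- lsattr " (note the trailing space), so use that as $/.
local $/ = '---- lsattr ';

my %vg;    # hash of arrays: every record for a key is kept, not just the last
while ( my $record = <DATA> ) {
    chomp $record;                # strips the trailing separator, not "\n"
    next unless $record =~ /\S/;  # skip the empty chunk before the first separator
    my ($key) = $record =~ /^(\S+)/ or next;
    push @{ $vg{$key} }, $record; # push, so duplicate keys accumulate visibly
}

for my $key ( sort keys %vg ) {
    printf "%s: %d record(s)\n", $key, scalar @{ $vg{$key} };
}

__DATA__
---- lsattr vg01 size=10
---- lsattr vg02 size=20
---- lsattr vg01 size=30
```

Because %vg maps each key to an array reference, a second record for vg01 is appended rather than overwriting the first, so you can detect (or handle) duplicate keys instead of silently losing data.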
In Section: Seekers of Perl Wisdom