PerlMonks
Re: Huge files manipulation
by johngg (Canon) on Nov 10, 2008 at 14:09 UTC ( id://722661 )
If you want the output file to contain only the first instance of each key, in the order found in the input file, you could try processing the input file line by line. The script keeps track of the keys encountered in the %seen hash and prints a record to the output file only if its key hasn't been seen before. If there are so many unique keys that this hash starts causing resource problems, you could tie it to a disk-based DBM such as Berkeley DB or GDBM. Given the input in your OP, the code produces an output file containing only the first record for each key.
I hope this is the sort of solution you are aiming for and that you find it of use.

Cheers,
JohnGG
In Section: Seekers of Perl Wisdom