PerlMonks
Re: Get unique fields from file by kcott (Archbishop)
on Jan 07, 2022 at 03:11 UTC ( [id://11140231] )
G'day sroux, Given the size of your data, processing speed may be a factor. The following may be faster than other proposed solutions, but do use Benchmark to check with realistic data. I've used the same input data as others have done.
My code takes advantage of the fact that when duplicate keys are used in a hash assignment, only the last duplicate takes effect. A short piece of code to demonstrate:
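The demonstration snippet did not survive extraction; here is a minimal stand-in showing the same point about duplicate keys in a hash assignment:

```perl
use strict;
use warnings;

# The key 'a' appears twice; only the last assignment survives.
my %h = (a => 1, b => 2, a => 3);

print "$h{a}\n";   # prints 3
print scalar(keys %h), "\n";   # prints 2 (keys: a, b)
```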
So there's no need for %seen, uniq(), or any similar mechanism to handle duplicates. Also note that I've used bind_columns(). See the benchmark in "Text::CSV - getline_hr()". The code:
The output:
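The original code and output blocks did not survive extraction. The sketch below is not kcott's actual code, but a hedged reconstruction of the approach described (Text::CSV with bind_columns(), and plain hash assignment for deduplication), using hypothetical pipe-delimited sample data:

```perl
use strict;
use warnings;
use Text::CSV;

# Hypothetical sample data; the real post read a pipe-delimited file.
my $data = <<'EOD';
name|dept
alice|sales
bob|sales
alice|hr
EOD

open my $fh, '<', \$data or die $!;   # read from an in-memory "file"

my $csv = Text::CSV->new({ sep_char => '|', binary => 1, auto_diag => 1 });

# Read the header row, then bind one scalar per column for fast parsing.
my @headers = @{ $csv->getline($fh) };
my @row     = (undef) x @headers;
$csv->bind_columns(\(@row));

# One hash per column; duplicate keys simply overwrite, so no %seen needed.
my @uniq = map { {} } @headers;
while ($csv->getline($fh)) {
    $uniq[$_]{ $row[$_] } = 1 for 0 .. $#headers;
}

# Report the distinct values found in each column.
for my $i (0 .. $#headers) {
    print "$headers[$i]: ", join(',', sort keys %{ $uniq[$i] }), "\n";
}
```

With the sample data above, this prints `name: alice,bob` and `dept: hr,sales`.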
— Ken
In Section: Seekers of Perl Wisdom