Re: Removing duplicate lines
by zork42 (Monk) on Sep 04, 2013 at 19:45 UTC
Assuming you are only concerned with repeated numbers in consecutive lines, then something like this would do (untested):
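A minimal sketch of that idea, assuming each line starts with the 2-digit number (the `/^(\d{2})/` capture is an assumption about your line layout; adjust it to wherever the number actually sits):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Drop a line when its leading 2-digit number matches the previous
# line's number, i.e. remove consecutive repeats only.
my $prev = '';
while ( my $line = <> ) {        # one line at a time -- never slurp a huge file
    my ($num) = $line =~ /^(\d{2})/;
    next if defined $num && $num eq $prev;   # consecutive repeat: skip it
    print $line;
    $prev = defined $num ? $num : '';
}
```

Run it as `perl dedup.pl huge_file.txt > deduped.txt`; since it reads line by line, memory use stays flat no matter how big the input is.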
In your example the lines are sorted by the 2-digit number. If that is typical then, since a 2-digit number has only 100 possible values, you'll end up with at most 100 distinct lines out of even the huge files.
Is that what you expect?
Q: What are you supposed to do if you get repeated numbers, but in non-consecutive lines like below?
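If the answer turns out to be "drop every repeat, wherever it occurs", the usual Perl idiom is a %seen hash; a sketch under the same assumed line layout as above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# %seen records every 2-digit number already printed, so repeats are
# dropped even when they are on non-consecutive lines.
my %seen;
while ( my $line = <> ) {
    my ($num) = $line =~ /^(\d{2})/;     # assumed layout, as above
    next if defined $num && $seen{$num}++;
    print $line;
}
```

The hash holds at most 100 keys here, so this stays cheap even for huge files.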
foreach my $line ( <FILE> ) #### WRONG: this reads the entire file into memory
while ( my $line = <FILE> ) #### do NOT read entire files into memory if they are big. Much better to process them a line at a time
Embarrassing bug, that! In my defence the original code had 'foreach' and I probably just missed it.
Had I written the code from scratch I would (I hope!) have used 'while'. I'm still an idiot though! :)
Thanks very much to Not_a_Number for pointing this out!