PerlMonks
Re^2: Deleting duplicate lines from file by blazar (Canon)
on Feb 17, 2006 at 06:25 UTC ( [id://530906] )
While I often use (md5) sums, I think that this is overkill for checking duplicate lines: as usual it exposes one to the risk of false positives, and for reasonably sized lines, which are to be expected in this case, it is quite reasonable to assume that the md5 sum will have a size comparable to that of the string itself or, depending on the actual data, even larger.

Also, the code seems just a little too verbose for my tastes, without that verbosity adding to readability. However, these are just tastes, so I won't insist too much on this point.

Last, if one needs to print non-duplicate lines, it is pointlessly resource-consuming to gather them into an array just to print them all together. Granted, this may be an illustration of a more general situation in which one may actually need to store them all in one place. But the OP is clearly a newbie, and I fear that doing so in this case risks being cargo-culted into the bad habit of unnecessarily assigning to unnecessary variables all the time.

Oh, and the very last thing about your suggestion:
The following is just equivalent:
In Section: Seekers of Perl Wisdom