Order-Preserving and Unique: List Cleanup
by Intrepid (Deacon) on Jul 30, 2003 at 06:39 UTC
This Meditation concerns something I have found myself needing to do several times. In various development and systems-administration tasks I have had lists in which each entry was something like a variable value, a C preprocessor macro definition, or a file path; PATH and PATH-like variables (on *NIX, MANPATH and so forth) are typical examples. In such cases I have often needed to see each list with only the first occurrence of any given entry left in place and all later repetitions of that entry removed. In other words, I have needed lists with all duplicates removed, but with the order of the entries preserved as they were (less the repetitions).
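A minimal sketch of the task described above, written as an explicit loop (the paths here are invented purely for illustration):

```perl
use strict;
use warnings;

# A hypothetical PATH-like list containing repeats.
my @path = qw(/usr/local/bin /usr/bin /bin /usr/bin /usr/local/bin /sbin);

# Keep only the first sighting of each entry; a hash records what has
# been seen, and the original order of first appearances is preserved.
my ( @clean, %seen );
for my $dir (@path) {
    push @clean, $dir unless $seen{$dir}++;
}

print join( ":", @clean ), "\n";    # /usr/local/bin:/usr/bin:/bin:/sbin
```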
The basic algorithm for doing this is widely understood, judging from what I've turned up on Perl Monks. Opinions vary on how best to implement it in Perl, with one possible solution employing pseudohashes as the key Perl feature. I personally like the existence of pseudohashes in Perl 5; if they are removed in Perl 6 as a failed experiment (because supporting the feature in the Perl internals has too many side effects, and the performance is poor), I won't weep, but I will miss them.
As I've pondered this task several times, it has grown in significance for me to the point that, rational or not, it seems like one of the most basic of operations, almost so basic that there ought to be an internal, built-in Perl function to do it ("idem"?, for "idempotent"). I am not seriously going to argue for this, but my point is that I have had to think about how to achieve this kind of "list clean-up" just too many times. I want a way to just do it without thinking about it any more, and the code chosen to accomplish this goal ought to fulfill my personal preferences in these ways:
The code I have posted below represents my fulfillment of this goal, to the best of my present knowledge. It is so short (3 lines of text) that it could be called obfuscated (the author of the posting it is derived from considered it too obfuscated to recommend, but it suits me nicely, thank you). It relies on a feature that may not be part of Perl 6, but OTOH not on any external modules (hey, things are going to break on Perl 6; that's just a fact of life!). And it's certainly basic enough to be called "primordial", IMHO.
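The author's pseudohash-based three-liner does not survive in this copy of the post. For comparison, the widely circulated ordinary-hash idiom for the same operation looks like this (the sub name `idem` is borrowed from the half-joking built-in suggested above; this is not the author's snippet):

```perl
use strict;
use warnings;

# Order-preserving dedup via a plain hash: $seen{$_}++ is false (0) the
# first time an element appears, so grep keeps only first occurrences.
sub idem {
    my %seen;
    return grep { !$seen{$_}++ } @_;
}

my @clean = idem(qw(a b a c b d));
print "@clean\n";    # a b c d
```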
As to the issue of poor performance due to pseudohash overhead, that just isn't a factor in my thinking. Because the lists to be processed by this code will seldom exceed, say, 20 entries, it simply doesn't seem worthwhile to get worked up over the performance aspect. Convenience and the other preferences I've listed weigh in far more heavily for me.
To re-use it most effectively, it might be best to turn it into a module. This code seems to sit right on a borderline where opinions can easily fall on either side of that question. On the one hand, the code is so short that making a module of it could seem absurd. OTOH, it is just challenging enough to commit to memory that "modularizing" it might be sensible, since doing so would save some effort for some users. As ashamed as I should probably be to admit it, I am just not always a bright enough Perl hacker to retain the full range of clever techniques that a language as rich as Perl affords us; I end up either consulting reference documentation or simply "using" the wheel someone else has already coded.
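For what such a module might look like, here is a minimal sketch using the standard Exporter mechanism. The package name `List::Idem` is invented for illustration, and the body is the ordinary-hash idiom rather than the author's pseudohash version:

```perl
# Hypothetical module sketch; the name List::Idem is made up.
package List::Idem;

use strict;
use warnings;
use Exporter 'import';

our @EXPORT_OK = qw(idem);

# Return the list with later duplicates removed,
# preserving the order of first appearances.
sub idem {
    my %seen;
    return grep { !$seen{$_}++ } @_;
}

1;
```

A caller would then just write `use List::Idem qw(idem);` and never think about the technique again.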
I'd like to know what other Monks think. Should this be a module? Or is it just absurd to contemplate that?