|No such thing as a small change|
Every operation in Perl - like any abstraction - carries an overhead that its C implementation cannot fully compensate for.
So using as few Perl commands as possible, each doing a mass operation, reduces that overhead and delegates the work to highly optimized C.
Loops (including maps) just multiply the number of executed commands (imagine the linearized alternative, which is even faster than the loop...)
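To make the overhead claim concrete, here is a hedged benchmark sketch (data sizes and names are made up for illustration) comparing an explicit per-element loop against the mass-operation approach, using the core Benchmark module:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# hypothetical test data, chosen only for illustration
my @array  = map { "item$_" } 1 .. 100;
my @remove = map { "item$_" } 1 .. 50;

cmpthese(100, {
    # one grep pass per element to remove: many Perl-level operations
    loop => sub {
        my @result = @array;
        for my $r (@remove) {
            @result = grep { $_ ne $r } @result;
        }
        return \@result;
    },
    # three mass operations; the per-element work happens in C
    hash => sub {
        my %set;
        @set{@array} = ();
        delete @set{@remove};
        return [ keys %set ];
    },
});
```

Exact numbers depend on data size and Perl version, but the point is that the `hash` variant executes a constant number of Perl ops regardless of list length.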
so my approach is the fastest because it's basically reduced to only 3 Perl commands:
1. setting a hash
2. deleting a slice from that hash
3. reading the resulting hash
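Sketched in code (the list values are chosen only for illustration), the three steps look like this:

```perl
use strict;
use warnings;

my @array  = qw(a b c d e);   # the original set of strings
my @remove = qw(b d);         # the elements to remove

# 1. setting a hash: one key per array element
my %set;
@set{@array} = ();

# 2. deleting a slice from that hash
delete @set{@remove};

# 3. reading the resulting hash (note: key order is not preserved)
my @result = keys %set;
print join(" ", sort @result), "\n";   # prints "a c e"
```

Since the data is treated as a set, the unordered `keys` output is not a problem; sort the result if you need a stable order.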
OTOH my approach has drawbacks: depending on the task, it's only suitable for real sets of strings.
Arrays can contain repeated elements or other datatypes like refs, which hash keys can't faithfully represent.
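For instance, round-tripping through a hash silently collapses duplicates (sample data made up for illustration):

```perl
use strict;
use warnings;

my @array = qw(a a b b c);   # an array with repeated elements
my %set;
@set{@array} = ();

# the hash keeps only one key per distinct string
my @unique = sort keys %set;
print "@unique\n";           # prints "a b c" - the duplicates are gone
```

References would similarly be stringified when used as hash keys, so they can't be recovered from the result.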
EDIT: you might be interested in Using hashes for set operations...
PS: of course there are still loops running under the hood, but they are implemented in optimized C.
In reply to Re^4: Removing elemets from an array (optimization)