Re^4: RFC on Inline::C hack: Hash_Iterator
by demerphq (Chancellor) on Aug 02, 2005 at 11:03 UTC
Well.. you could always make use of a doubly-linked list to get around this. For the sake of the storage of a second pointer in your structure, this would seem a reasonable trade-off to give a more flexible iteration method.
The structures you are talking about are defined by Perl itself. It's not a design decision available to anybody but the pumpkings, and it's unlikely they would appreciate the cost given the minimal benefits it would provide.
For (most) implementations, however, where more random access of hashed elements is required, an iterative method is really quite inefficient, requiring up to time const + N to look up any entry. In these cases, it's often better to use something akin to a binary tree, offering at worst time const + log N to find any entry.
I'm not sure I agree with this analysis. The linked lists used for buckets in Perl's hashes are intended to be extremely short: generally they should hold only one element, and except for degenerate cases they should not exceed two. With that in mind, a binary tree approach makes little sense, as in most cases you would derive no benefit from it at all.
Perl's hashes don't allow duplicate keys, which means a bucket only holds multiple entries when there is a hash-key collision. Such collisions should be unusual, and overly long bucket chains get redistributed across a larger bucket array when a resize occurs; IIRC, an overly long bucket chain is precisely what triggers such a resize.