note
davido
<p>Another way of looking at the problem:</p>
<p>When you convert the array to a hash whose keys are the array's values and whose values are the array's indices, you pay the price of conversion once, and your subsequent lookups will be quite fast. But you do pay for it: the overhead of the hashing algorithm, combined with the O(n) cost of walking the entire array to build the hash.</p>
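<p>As a sketch of that conversion (the array contents here are just placeholders), a hash slice builds the whole lookup table in one pass:</p>
<c>
use strict;
use warnings;

my @array = qw( a b c d e f g h );

# Pay the O(n) conversion cost once, up front: a hash slice maps
# each element to its array index in a single pass.
my %index_for;
@index_for{@array} = 0 .. $#array;

# Each subsequent lookup is then O(1) on average.
print "Found 'd' at $index_for{d}.\n";
__END__
output: Found 'd' at 3.
</c>
<p>One caveat: if the array contains duplicate values, later indices overwrite earlier ones in the hash, so you get the <em>last</em> index for a repeated value.</p>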
<p>On the other hand, if all you're interested in is an occasional search that yields an index, you could use the <c>first_index</c> function from [mod://List::MoreUtils]:</p>
<c>
use strict;
use warnings;
use List::MoreUtils 'first_index';

my @array = qw( a b c d e f g h );
my $found_ix = first_index { $_ eq 'd' } @array;
print "Found 'd' at $found_ix.\n";
__END__
output: Found 'd' at 3.
</c>
<p>This avoids the one-time overhead of generating hash keys for the entire structure, and the per-search overhead of hash lookups. But now every lookup is an O(n) operation. If you're doing a lot of lookups, that's a net loss; if you're doing only a few, it could be a win. Which one it is would have to be verified via benchmarking.</p>
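<p>A minimal sketch of such a benchmark, using the core [mod://Benchmark] module's <c>cmpthese</c> (the array size and search key here are arbitrary; real numbers depend on your data and on how many lookups amortize the cost of building the hash):</p>
<c>
use strict;
use warnings;
use Benchmark 'cmpthese';
use List::MoreUtils 'first_index';

my @array = map { "item$_" } 0 .. 999;

# One-time conversion cost, paid by the hash approach only.
my %index_for;
@index_for{@array} = 0 .. $#array;

my $key = 'item500';    # arbitrary search target

# Compare per-lookup cost; each sub runs for about one CPU second.
cmpthese( -1, {
    hash_lookup => sub { my $ix = $index_for{$key} },
    first_index => sub { my $ix = first_index { $_ eq $key } @array },
} );
</c>
<p>Note that this measures only the per-lookup cost; to be fair to <c>first_index</c> for the few-lookups case, you'd also time the hash construction itself.</p>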
<p>One nice thing about the <c>first_index</c> approach is that its semantics are quite clear. But if you're doing frequent lookups, your original idea of using a hash lookup is the right one.</p>
<div class="pmsig"><div class="pmsig-281137">
<br /><p>Dave</p>
</div></div>
1008288