in reply to elsif chain vs. dispatch
When you say that something is O(N) or O(N^2) or whatever, you are saying that as N changes, the resource in question (normally time or memory) changes with that relation to it. So if something is O(N) and N doubles, then the time taken doubles. What this ignores is that the time taken is really c*N for the linear algorithm and k*1 for the constant-time one, and that for different algorithms the constants c and k will be different.
So the if/elsif/elsif/elsif/else chain actually takes c*N seconds, and the hash lookup and subroutine dispatch takes k*1 seconds. I'm not at all surprised that for small N, c*N is smaller than k: the chain involves very few calculations per test, so its constant term is a *lot* less than that for the hash lookup.
I shall now proceed to pull some random numbers out of my arse.
Calculating a hash takes on the order of a few hundred machine cycles (let's say a thousand). Finding the right place in the if/elsif/elsif/else chain will take more like a few tens of cycles. In fact, if the compiler optimises really well, checking and ignoring each condition will take just two machine cycles. Let's call it ten because this is Perl, not C.
If we also assume that on average it has to check N/2 conditions, then the hash would only win for N > 200. That's ignoring, of course, any extra overhead from setting up both cases in the first place.
Re^2: elsif chain vs. dispatch
by JavaFan (Canon) on Apr 27, 2009 at 12:01 UTC

When you say that something is O(N) or O(N^2) or whatever, you are saying that as N changes, the resource in question (normally time or memory) changes with that relation to it. So if something is O(N) and N doubles, then the time taken doubles.
To be pedantic, that's not true. O(N) means that the growth is at most linear. O(N^2) means that the growth is at most quadratic. This means that any algorithm that is O(N) is also O(N log N) and O(N^2).
If you want to express that an algorithm is exactly linear (and not just bounded above by linear), the correct notation to use is Θ.
Note also that hash lookups are, in the worst case, Θ(N). There's always a chance that all hash keys map to the same bucket, resulting in a linear chain that needs to be searched.

/* hv.c */
#define HV_MAX_LENGTH_BEFORE_SPLIT 14
...
Perl_hv_common( ... )
{
    ...
    while ((counter = HeNEXT(counter)))
        n_links++;
    if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) {
        /* Use only the old HvKEYS(hv) > HvMAX(hv) condition to limit
           bucket splits on a rehashed hash, as we're not going to
           split it again, and if someone is lucky (evil) enough to
           get all the keys in one list they could exhaust our memory
           as we repeatedly double the number of buckets on every
           entry. Linear search feels a less worse thing to do. */
        hsplit(hv);
    }
    ...
}
(the comment seems to be a leftover from an earlier implementation, though...)

I don't know what is new in Perl 5.8.3 regarding new resizing algorithms based upon buckets used, but if you are curious as to what is happening, evaluating a hash in scalar context, e.g. my $x = %hash;, returns a string like "(10/1024)" showing the number of buckets used / total buckets. To presize a hash or force it to get bigger, assign a scalar to keys, e.g.: keys(%hash) = 8192;. The Perl hash algorithm is:
/* of course, C code */
int i = klen;
unsigned int hash = 0;
char *s = key;
while (i--)
    hash = hash * 33 + *s++;
Perl then cuts the above value down to the number of bits corresponding to the hash array size, which in Perl is always a power of 2.
As mentioned above, this "(10/1024)" string shows the number of buckets used and the total number of buckets. There is another value, xhv_keys, accessible in the Perl "guts", that contains the total number of hash entries. If the total number of entries exceeds the number of buckets, Perl will increase the hash size by one more bit and recalculate all the hash keys again. So let's say that we have a hash with 8 buckets and for some reason only one of those buckets is being used. When the ninth entry shows up, Perl will see (9 > 8) and will resize the hash by adding one more bit to the hash key. In practice, this algorithm appears to work pretty well. I guess there are some improvements in >= Perl 5.8.3. Anyway, I often work with hashes of, say, 100,000 things and haven't seen the need yet to override the Perl hash algorithm.

A measure was added to 5.8.1 to thwart the intentional exercise of the degenerate case.
I don't see anything in there or in the linked section of perlsec about detecting the accidental exercise of the degenerate case, but it's possible. (It's even likely.)
if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) in hv.c in perl.git might be that very check.

Ooops. I think I screwed up here and pushed the create/update button at the wrong level.
Lots below is redundant. A goof...
However, this does appear to point out a pitfall in presizing a hash. Perl starts a hash with 8 buckets. If you start it yourself with, say, 128 buckets, it is possible to wind up with a lot more things associated with a hash key than if you let Perl just grow the hash on its own.
update: As a small update, I would add that I haven't found much performance difference between just letting Perl do its hash thing and presizing a hash. The hash key computation effort (which, as the code above shows, is actually very efficient) tends to get dwarfed by the input effort to get the, say, 100,000 keys and the computation required on those keys!