Re: elsif chain vs. dispatch

by DrHyde (Prior)
on Apr 27, 2009 at 10:31 UTC ( #760316 )


in reply to elsif chain vs. dispatch

When you say that something is O(N) or O(N^2) or whatever, you are saying that as N changes, the resource in question (normally time or memory) changes in that relation to it. So if something is O(N) and N doubles, then the time taken doubles. What this ignores is that the time taken is actually c*N or k*1, and that for different algorithms the constants will be different.

So the if/elsif/elsif/elsif/else chain actually takes c*N seconds, and the hash lookup and subroutine dispatch takes k*1 seconds. I'm not at all surprised that for small N, c*N is smaller than k: the chain involves very few calculations per check, so its constant term is a *lot* less than that for the hash lookup.

I shall now proceed to pull some random numbers out of my arse.

Calculating a hash takes on the order of a few hundred machine cycles (let's say a thousand). Checking one condition in the if/elsif/elsif/else chain will take more like a few tens of cycles. In fact, if the compiler optimises really well, checking and ignoring each condition will take just two machine cycles. Let's call it ten because this is perl, not C.

If we also assume that on average it has to check N/2 conditions, the chain costs about 10 * N/2 = 5N cycles against roughly 1000 for the hash, so the hash would only win for N > 200. That's ignoring, of course, any extra overhead from setting up both cases in the first place.
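
A minimal benchmark sketch of the crossover (my code, not part of the original post; all names made up), building the elsif chain with a string eval so the number of cases can be varied:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $n    = 20;                       # number of cases; vary this
    my @keys = map { "key$_" } 1 .. $n;

    # Hash dispatch: one lookup, roughly constant cost.
    my %dispatch = map { my $k = $_; ( $k => sub { $k } ) } @keys;

    # Build the equivalent if/elsif chain as a string and eval it,
    # so the chain grows with $n just as hand-written code would.
    my $src = "sub { my \$c = shift;\n"
            . join( 'els', map { "if (\$c eq '$_') { return '$_' }\n" } @keys )
            . "}";
    my $via_chain = eval $src or die $@;

    cmpthese( -3, {
        chain => sub { $via_chain->($_)  for @keys },
        hash  => sub { $dispatch{$_}->() for @keys },
    } );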


Re^2: elsif chain vs. dispatch
by JavaFan (Canon) on Apr 27, 2009 at 12:01 UTC
    When you say that something is O(N) or O(N^2) or whatever, you are saying that as N changes, the resource in question (normally time or memory) changes in that relation to it. So if something is O(N) and N doubles, then the time taken doubles.
    To be pedantic, that's not true. O(N) means that the growth is at most linear. O(N^2) means that the growth is at most quadratic. This means that any algorithm that is O(N) is also O(N log N) and O(N^2).

    If you want to express that an algorithm is linear (and not merely at most linear), the correct notation to use is called Θ.
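
    Formally (my gloss, not part of the original reply):

    \[ f(N) \in O(g(N)) \iff \exists\, c > 0,\ N_0 : f(N) \le c\,g(N) \ \text{for all}\ N \ge N_0 \]
    \[ f(N) \in \Theta(g(N)) \iff \exists\, c_1, c_2 > 0,\ N_0 : c_1\,g(N) \le f(N) \le c_2\,g(N) \ \text{for all}\ N \ge N_0 \]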

    Note also that hash lookups are, worst case, Θ(N). There's always a chance that all hash keys map to the same value, resulting in a linear list that needs to be searched.
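
    To picture that worst case, here is a sketch of mine (not from the reply): a bucket whose collision chain holds every key behaves like a flat list, so each lookup is a linear scan.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # One degenerate bucket: every key lives in the same collision
        # chain, so lookup walks the chain - Theta(N), not O(1).
        my @bucket = map { [ "key$_", $_ ] } 1 .. 10_000;

        sub degenerate_lookup {
            my ($key) = @_;
            for my $pair (@bucket) {
                return $pair->[1] if $pair->[0] eq $key;
            }
            return undef;
        }

        print degenerate_lookup("key9999"), "\n";   # scans ~10,000 entries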

      Note also that hash lookups are, worst case, Θ(N). There's always a chance that all hash keys map to the same value, resulting in a linear list that needs to be searched.

      I recall that perls from 5.8.3 or so have code in place to watch out for this sort of degenerate case, and will rehash to prevent this from occurring.

      • another intruder with the mooring in the heart of the Perl

        I don't know what is new in Perl 5.8.3 regarding re-sizing algorithms based upon buckets used, but if you are curious as to what is happening, the scalar value of a hash, e.g. my $x = %hash;, returns a string like "(10/1024)" showing the number of buckets used over the total number of buckets.

        To pre-size a hash, or to force it to get bigger, assign a scalar to keys %hash, e.g. keys %hash = 8192;.
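
        Both behaviours in one small sketch of mine (note: on modern perls, 5.26 and later, a hash in scalar context returns just the key count, so the used/total string below is what perls of that era printed):

            #!/usr/bin/perl
            use strict;
            use warnings;

            my %hash;
            $hash{"key$_"} = 1 for 1 .. 10;
            print scalar(%hash), "\n";   # something like "8/16" on perls of that era

            my %big;
            keys(%big) = 8192;           # pre-size: ask for at least 8192 buckets
            $big{first} = 1;
            print scalar(%big), "\n";    # something like "1/8192"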

        The Perl hash algorithm is:

            /* of course C code */
            int i = klen;
            unsigned int hash = 0;
            char *s = key;
            while (i--)
                hash = hash * 33 + *s++;
        Perl masks the above value down to the number of bits of the hash array size, which in Perl is always a power of 2. As mentioned above, the "(10/1024)" string shows the number of buckets used and the total number of buckets. There is another value, xhv_keys, accessible to the Perl "guts", that contains the total number of hash entries.
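
        A rough Perl rendering of mine of that C, including the masking step (assuming a 32-bit hash value and a power-of-two bucket count):

            # The classic "times 33" hash, masked to a power-of-two
            # bucket count, mirroring the C above.
            sub bucket_for {
                my ($key, $nbuckets) = @_;    # $nbuckets: a power of 2
                my $hash = 0;
                for my $byte (unpack 'C*', $key) {
                    $hash = ($hash * 33 + $byte) & 0xFFFF_FFFF;  # stay 32-bit
                }
                return $hash & ($nbuckets - 1);   # cut to the array's size in bits
            }

            print bucket_for('foo', 1024), "\n";  # a bucket index, 0 .. 1023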

        If the total number of entries exceeds the number of buckets used, Perl will increase the hash size by one more bit and recalculate all the hash keys again.

        So let's say that we have a hash with 8 buckets and for some reason only one of those buckets is being used. When the ninth entry shows up, Perl will see (9 > 8) and will re-size the hash by adding one more bit to the hash key. In practice, this algorithm appears to work pretty well. I guess there were some improvements in Perl >= 5.8.3.

        Anyway, I often work with hashes of, say, 100,000 things and haven't seen the need yet to override the Perl hash algorithm. However, this does appear to point out a pitfall in pre-sizing a hash: Perl starts a hash with 8 "buckets", and if you start it yourself with, say, 128 buckets, it is possible to wind up with a lot more things associated with a hash key than if you let Perl just grow the hash on its own.

        Update: I would add that I haven't found much performance difference between just letting Perl do its hash thing and pre-sizing a hash. The hash key computation effort (which is actually very efficient, as above shows) tends to get dwarfed by the effort of reading in the, say, 100,000 keys and the computation required on those keys!
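
        For what it's worth, a quick way of mine (not the poster's code) to test that observation with the Benchmark module:

            use strict;
            use warnings;
            use Benchmark qw(cmpthese);

            my @keys = map { "key$_" } 1 .. 100_000;

            cmpthese( -3, {
                grown => sub {
                    my %h;
                    $h{$_} = 1 for @keys;
                },
                presized => sub {
                    my %h;
                    keys(%h) = 131_072;    # next power of 2 above 100,000
                    $h{$_} = 1 for @keys;
                },
            } );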

        A measure was added to 5.8.1 to thwart the intentional exercise of the degenerate case.

        I don't see anything in there or in the linked section of perlsec about detecting the accidental exercise of the degenerate case, but it's possible. (It's even likely.)

        if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) in hv.c in perl.git might be that very check.


        I recall that perls from 5.8.3 or so have code in place to watch out for this sort of degenerate case

        I think it's the HV_MAX_LENGTH_BEFORE_SPLIT, currently set to 14.

            /* hv.c */
            #define HV_MAX_LENGTH_BEFORE_SPLIT 14
            ...
            Perl_hv_common( ... ) {
                ...
                while ((counter = HeNEXT(counter)))
                    n_links++;

                if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) {
                    /* Use only the old HvKEYS(hv) > HvMAX(hv) condition to
                       limit bucket splits on a rehashed hash, as we're not
                       going to split it again, and if someone is lucky (evil)
                       enough to get all the keys in one list they could
                       exhaust our memory as we repeatedly double the number
                       of buckets on every entry. Linear search feels a less
                       worse thing to do. */
                    hsplit(hv);
                }
                ...
            }

        (the comment seems to be a left-over from an earlier implementation, though...)
