Hash Clash on purpose

by John M. Dlugosz (Monsignor)
on Jun 02, 2003 at 19:38 UTC

FYI: this came up on the OpenPGP mailing list: http://www.cs.rice.edu/~scrosby/hash/CrosbyWallach_UsenixSec2003/index.html.

It concerns the hash function used in Perl's implementation.

Re: Hash Clash on purpose
by gjb (Vicar) on Jun 02, 2003 at 19:48 UTC

    You might want to have a look at hash collision DOS which mentions this article and happens to be in the Weekly Best.

    Just my 2 cents, -gjb-

Re: Hash Clash on purpose
by l2kashe (Deacon) on Jun 02, 2003 at 20:08 UTC
    I read the paper, and the crux of the issue is passing incoming data to a hashing function without processing it first. I think it's something along the lines of
    chomp($f = <>); $input{$f}++;
    Though it might be something closer to
    chomp($f = <>); if ( $input{$f} ) { blah; }
    The problem described is feeding the hash keys that are crafted to collide, so that inserts degenerate and n insertions take O(n*n) time overall. But it assumes the attacker has a quasi-direct pipe to the hashing function's input.
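
    For anyone who wants to experiment, here is a rough Perl transcription of the one-at-a-time hash behind 5.8.0's PERL_HASH macro (the full C version appears further down the thread). It is only a sketch for poking at collisions, and it assumes a perl built with 64-bit integers so the shifts don't overflow:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Rough transcription of 5.8.0's PERL_HASH ("one-at-a-time" hash).
        # Arithmetic is masked to 32 bits after each step to mimic a C U32.
        sub perl_hash {
            my ($str) = @_;
            my $hash = 0;
            for my $byte (unpack 'C*', $str) {
                $hash = ($hash + $byte)         & 0xFFFFFFFF;
                $hash = ($hash + ($hash << 10)) & 0xFFFFFFFF;
                $hash = ($hash ^ ($hash >> 6))  & 0xFFFFFFFF;
            }
            $hash = ($hash + ($hash << 3))  & 0xFFFFFFFF;
            $hash = ($hash ^ ($hash >> 11)) & 0xFFFFFFFF;
            return ($hash + ($hash << 15))  & 0xFFFFFFFF;
        }

        # Two keys land in the same bucket when their hashes agree modulo
        # the number of buckets (a power of two in perl).
        printf "%-6s => %08x\n", $_, perl_hash($_) for qw(foo bar baz);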

    There are situations where it makes sense to track data collected in a hash (letter/number counting comes to mind off the top of my head), but the kind of pre-constructed data required for this attack to succeed raises other questions in my mind.

    If this is something other than trivial tracking of data, and is in fact some larger piece of production code we are talking about, then you don't trust anything coming in from outside, right?

    I would be interested to hear what other monks think after reading the paper; in particular, it would be interesting to hear from individuals who actually work on the Perl codebase, and what they think of this situation. Also, I didn't see any code to reproduce this, but I didn't look too hard. I think I may revisit it now, though.

    MMMMM... Chocolaty Perl Goodness.....
      Since this issue hasn't been recognized before, no one worries about this form of bad input.

      There is one very common module that accepts input from the outside and inserts strings into a hash table: CGI.pm. It stores the parameter names as keys in a hash table. As an experiment, I took the 10,000 strings from the paper, turned them into a query string of about 200 KB, and POSTed it to a CGI script. The CGI process ran for 3 minutes at 99% CPU before it finished parsing.

      Lots of CGI scripts don't have any POST limits for forms. Even for those that do, plenty of strings can be passed within the sizes I have seen used (1-10 MB). POST limits are usually designed to prevent file uploads, not pathological queries.
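
      For reference, the mechanics of that experiment can be sketched like this (the file name and target URL here are made up; the colliding strings themselves come from the paper, one per line):

          use strict;
          use warnings;
          use LWP::UserAgent;

          # Build one big x-www-form-urlencoded body out of colliding keys
          # (one per line in collisions.txt) and POST it at a CGI script.
          open my $fh, '<', 'collisions.txt' or die "collisions.txt: $!";
          chomp(my @keys = <$fh>);

          my $body = join '&', map { "$_=1" } @keys;   # ~200 KB for 10,000 keys

          my $ua  = LWP::UserAgent->new;
          my $res = $ua->post(
              'http://localhost/cgi-bin/victim.cgi',
              Content_Type => 'application/x-www-form-urlencoded',
              Content      => $body,
          );
          print $res->status_line, "\n";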

        Good insight; can you tell what type of perl devel I don't do regularly? ;)

        On another note, though: aren't the hash keys in a form and such predefined? Thinking this through, it would take a maliciously crafted URL to actually exploit a backend Perl CGI, as opposed to going through the "normal" processing channel, since in this exploit it is really the keys that are the issue rather than the values. Since we are talking about DOS attacks on systems, this is a valid attack. It would certainly be easy to track via some simple log searching unless some proxying is involved, but that's somewhat outside the scope of the thread...

        New question: Is there a way to "fix" CGI.pm?
        Thinking it through, it feels knee-jerkish to point at CGI.pm, and this might be outside the scope of the module's varied duties. Though it would be interesting if you could code something to the effect of...

        use CGI;
        my $cgi = new CGI;
        $cgi->allow_param( qw(foo bar baz) );
        This would only allow foo, bar, and baz, and silently drop everything else from the URL. But that kind of breaks the standalone nature of many CGI programs, and the input would need to be checked/cleaned before the parameters are actually parsed. On the plus side, this could lead to many, many more sites being far more secure than they presently are, as it's one more obstacle to hurdle in the never-ending hunt for other people's processing cycles. Kinda like 'use strict' for CGI :)
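
        Absent such an allow_param() method (it doesn't exist in CGI.pm; it's hypothetical), one could approximate the idea by filtering the raw query string before CGI.pm ever hashes the parameter names. A GET-only sketch, assuming unencoded key names:

            use strict;
            use warnings;
            use CGI;

            # Whitelist of parameter names this script is willing to parse.
            my %allowed = map { $_ => 1 } qw(foo bar baz);

            # Keep only whitelisted pairs; a real version would also
            # URL-decode the names first and intercept POST bodies on STDIN.
            my $raw   = $ENV{QUERY_STRING} || '';
            my $clean = join '&',
                grep { my ($k) = split /=/, $_, 2; defined($k) && $allowed{$k} }
                split /&/, $raw;

            my $cgi = new CGI($clean);   # parses only the whitelisted pairs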

        I'm still hoping to get feedback from more "senior" members of the community, to get a handle on what they really think of the issue. Also, does/will this affect Perl 6?

        MMMMM... Chocolaty Perl Goodness.....
        I'm impressed. 10,000 strings of 24 bytes sent in a 200 KB query; I want to know how you pulled that off. :)

        Thanks for the experimental result; I'll note this thread on the webpage. Also remember that the behavior is quadratic: doubling to 20,000 strings (about 500 KB) would take 3 min * 2**2 = 12 minutes, and 100,000 strings would run for 3 min * 10**2 = 300 minutes, i.e. 5 hours.

        Scott Crosby

Re: Hash Clash on purpose
by BrowserUk (Pope) on Jun 03, 2003 at 04:08 UTC

    Currently, circa 5.8.0, the working value in the hashing function is initialised to 0 for every key.

    Would initialising this to a random 32-bit value, chosen fresh for each hash at runtime, largely alleviate if not entirely suppress the possibility of a hash-clash attack against a Perl hash-based system?

    Whilst it would still be possible to compute 48 generators that could be combined as described in the article for any given initialisation of the hash, computing them for all 2**32 possible initialisations would be considerably more expensive. And without a way of determining which initialisation had been randomly chosen for any given hash, at any given start of any given program on any given system, choosing the right set of 48 generators becomes almost impossible?

    Or am I missing something obvious again?

    Update: The change I was proposing would affect the following code from hv.h (as of 5.8.0).

    @hv.h - 5.8.0

    #define PERL_HASH(hash,str,len) \
        STMT_START { \
            register const char *s_PeRlHaSh_tmp = str; \
            register const unsigned char *s_PeRlHaSh = (const unsigned char *) s_PeRlHaSh_tmp; \
            register I32 i_PeRlHaSh = len; \
    -       register U32 hash_PeRlHaSh = 0; \
            /* Randomise the initialisation (Pseudo code only:) */ \
    +       register U32 hash_PeRlHaSh = rand(0xFFFFFFFF); \
            while (i_PeRlHaSh--) { \
                hash_PeRlHaSh += *s_PeRlHaSh++; \
                hash_PeRlHaSh += (hash_PeRlHaSh << 10); \
                hash_PeRlHaSh ^= (hash_PeRlHaSh >> 6); \
            } \
            hash_PeRlHaSh += (hash_PeRlHaSh << 3); \
            hash_PeRlHaSh ^= (hash_PeRlHaSh >> 11); \
            (hash) = (hash_PeRlHaSh + (hash_PeRlHaSh << 15)); \
        } STMT_END

    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller


      First of all, I am too lazy to actually test whether changing the initial value changes the relative choice of hash buckets. I strongly suspect it does, but I would either have to work it out by hand or else install a bunch of stuff on this computer (e.g. Perl) and then write a program. What we really want to check is whether the strings that all used to land in bucket 0 still land in a single bucket, and for that I would need those strings, which again means installing stuff, which I haven't done...

      However, assuming the analysis shows that changing the initialization value does change hashing decisions, your pseudo-code looks wrong to me. You are initializing it randomly per hash lookup. For hashing to work, the hash lookup algorithm has to be consistent from lookup to lookup. Instead, you need to save the initial value somewhere and then pull it into hash_PeRlHaSh.

      That means the value has to be stored somewhere, and several options exist. One is to reserve space per hash to store its initialization value and look that up on every access. Another is to have a global value chosen for all of your hashes. A third is to make it a random compile-time constant. Problems with binary linking of XS modules that have been copied from point A to point B make the last one infeasible. The first one adds 4 bytes to every hash, which isn't that much, but we have a lot of small hashes; an offhand guess is that we would see a 1-5% space usage increase, and (of course) binary incompatibility.

      The middle option (a run-time random constant) looks to be the best bet. p5p might have some arguments over binary compatibility (code compiled with hash lookups initialized to 0 won't match code compiled with your new initialization), but it should be easy to make initializing randomly versus initializing to 0 a compile-time flag.

      Hmmm...looks like I argued myself into believing that you can fix the problem with your approach. It would be worthwhile for you to try to make the fix, run tests to verify that it does make the performance attack infeasible, then try to get it accepted... :-)
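
      As a Perl-level illustration of what that middle option buys (only a sketch; the real fix would live in hv.h), take the one-at-a-time transcription from earlier in the thread and fold in a per-process seed chosen once at startup:

          use strict;
          use warnings;

          # Seed chosen once per process; an attacker can no longer
          # precompute a colliding key set without knowing it.
          my $SEED = int rand 0xFFFFFFFF;

          sub seeded_hash {
              my ($str) = @_;
              my $hash = $SEED;                 # stock 5.8.0 starts from 0
              for my $byte (unpack 'C*', $str) {
                  $hash = ($hash + $byte)         & 0xFFFFFFFF;
                  $hash = ($hash + ($hash << 10)) & 0xFFFFFFFF;
                  $hash = ($hash ^ ($hash >> 6))  & 0xFFFFFFFF;
              }
              $hash = ($hash + ($hash << 3))  & 0xFFFFFFFF;
              $hash = ($hash ^ ($hash >> 11)) & 0xFFFFFFFF;
              return ($hash + ($hash << 15))  & 0xFFFFFFFF;
          }

          # Keys crafted to collide when the seed is 0 scatter across
          # the buckets on the next run, when the seed differs.
          printf "%-6s => %08x\n", $_, seeded_hash($_) for qw(foo bar baz);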

        Agreed. My pseudo-code wasn't up to much.

        However, I just remembered/re-discovered an interesting thing. In common with CVs, SVs and AVs, the HV structure has an NVX field. This is the field used by SVs to store their numeric value once they have been used in a numeric context. In the HV it is unused.

        It seems to me that this would be the perfect place to store a (hash) creation time generated random value that could be used to initialise the hash function.

        That would effectively allow for a different hashing function for every hash within a program, and for every run, with no extra space required. The only hits would be:

        A register-indirect load of the initialisation value at every hash computation, which shouldn't cost more than a few clock cycles on any processor, as it should already be in the level-1 cache as soon as the HV has been addressed.

        The one-time cost of generating the random value when the hash is created.

        The only real argument against the idea that I foresee is that an application could randomly hit worst-case performance. Statistically, the odds of this happening are, I think, much the same as they are now for any given dataset, but the possibility that an application loading the same hash with the same data on every run could randomly have that hash degenerate to the worst case could be seen as a problem.

        I really wish I was set up to be able to build, test and propose this fix to p5p, but I'm not, and am unlikely to be in the near future. Maybe Elian or somebody can pass the idea along if they think it has some merit.


        Examine what is said, not who speaks.
        "Efficiency is intelligent laziness." -David Dunham
        "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller

