PerlMonks
Re: Truly randomized keys() in perl 5.17 - a challenge for testing?

by BrowserUk (Pope)
on Sep 30, 2013 at 10:50 UTC ( #1056327=note )


in reply to Truly randomized keys() in perl 5.17 - a challenge for testing?

The problem with this new key randomisation code is not the non-determinism it introduces -- that has been there for a long time, albeit in a lesser form.

The problem is that it is a pointless prophylactic that doesn't even come close to solving the "problem" that was used to justify its addition to the code-base.
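The non-determinism in question can be made concrete. Perl's real hash function is not shown here -- the multiply-by-33 hash and the bucket count below are illustrative assumptions, not Perl internals -- but a toy sketch shows how mixing a per-process seed into the hash moves keys between buckets, which is what changes the order keys() walks them in:

```python
# Toy sketch (NOT Perl's real hash): a seed mixed into the hash function
# changes which bucket each key lands in, and hence iteration order.
def toy_hash(key: str, seed: int) -> int:
    h = seed
    for ch in key:
        h = (h * 33 + ord(ch)) & 0xFFFFFFFF  # djb2-style mixing, kept to 32 bits
    return h

def buckets_for(keys, seed, nbuckets=8):
    """Map each key to the bucket it would occupy under this seed."""
    return {k: toy_hash(k, seed) % nbuckets for k in keys}

keys = ["alpha", "beta", "gamma", "delta"]
print(buckets_for(keys, seed=0))      # one process's view
print(buckets_for(keys, seed=12345))  # another process, another seed
```

With a fresh seed per process, two runs of the same program file the same keys into different buckets, so a bucket walk enumerates them in a different order.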


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.


Re^2: Truly randomized keys() in perl 5.17 - a challenge for testing?
by vsespb (Hermit) on Sep 30, 2013 at 11:06 UTC
    Proof?

      Where's the proof it does?

      How can an undocumented problem be countered?


        For example, you have a webpage which outputs some data in hash order. One can recover the hash seed used by that server worker process, and then DoS the worker by sending special data (which the worker process will treat as hash keys).

        Anyway, it's already in Perl, so I assume the one who argues that the change is wrong should show the proof, not the one who asks why it's wrong.
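The attack vsespb describes is the classic algorithmic-complexity DoS. A toy Python sketch (the public djb2-style hash stands in for an unseeded hash function, and names like ChainedTable are made up for illustration) shows the core of it: once the hash function is known, an attacker can precompute keys that all collide, so every insert scans one ever-growing chain:

```python
# If the hash function has no secret seed, an attacker can brute-force
# keys that all fall into one bucket, turning the table into a linked list.
NBUCKETS = 256

def public_hash(key: str) -> int:
    h = 5381                               # djb2: well known, nothing secret
    for ch in key:
        h = (h * 33 + ord(ch)) & 0xFFFFFFFF
    return h

def colliding_keys(n):
    """Brute-force n distinct keys that land in bucket 0."""
    out, i = [], 0
    while len(out) < n:
        k = f"key{i}"
        if public_hash(k) % NBUCKETS == 0:
            out.append(k)
        i += 1
    return out

class ChainedTable:
    def __init__(self):
        self.buckets = [[] for _ in range(NBUCKETS)]
    def insert(self, key):
        chain = self.buckets[public_hash(key) % NBUCKETS]
        if key not in chain:               # O(chain length) scan: the attack target
            chain.append(key)

table = ChainedTable()
for k in colliding_keys(100):
    table.insert(k)
print(len(table.buckets[0]))               # all 100 hostile keys share one chain
```

Each of the n hostile inserts scans a chain of length up to n, so the attacker buys O(n^2) work on the server for O(n) bytes of request.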
Re^2: Truly randomized keys() in perl 5.17 - a challenge for testing?
by ikegami (Pope) on Oct 01, 2013 at 13:57 UTC
    Actually, this change is a removal from the code base. It's a simplification of the existing mechanism: rather than perturbing the hash only when an attack is detected, the salt is always applied. To make that simplification safe, the salt needs to be different for each hash.
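A sketch of the scheme ikegami describes (toy Python, not hv.c; SaltedTable and the 33-multiplier hash are illustrative assumptions): each table draws its own random salt at creation and always mixes it in, so there is no attack-detection branch and no single process-wide seed to recover:

```python
import random

class SaltedTable:
    NBUCKETS = 64

    def __init__(self):
        self.salt = random.getrandbits(32)   # per-hash salt, drawn at creation
        self.buckets = [[] for _ in range(self.NBUCKETS)]

    def _hash(self, key: str) -> int:
        h = self.salt                        # salt is ALWAYS applied; no rehash path
        for ch in key:
            h = (h * 33 + ord(ch)) & 0xFFFFFFFF
        return h

    def insert(self, key):
        chain = self.buckets[self._hash(key) % self.NBUCKETS]
        if key not in chain:
            chain.append(key)

t1, t2 = SaltedTable(), SaltedTable()
for k in ("alpha", "beta", "gamma"):
    t1.insert(k)
    t2.insert(k)
# Independent salts: the same key generally occupies different buckets in t1 and t2.
print(t1._hash("alpha") % 64, t2._hash("alpha") % 64)
```

Leaking one table's layout then tells an attacker nothing about any other table -- that is the "different for each hash" requirement.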

      In demerphq's own words:

      bulk88> I disagree there is a measurable cost to the current implementation of REHASH. If the rehash code is so rare, maybe it should be removed from hv_common and placed in its own func, but looking at the asm, the overhead is so tiny I dont think the rehash code even matters compared to the rest of the design of hv_common. I don't see any performance gains by removing it.

      demerphq Yes, I am unable to show any actual performance gains either.

      So, the change was not made for performance reasons.

      Also in his own words:

      demerphq So I think that the current rehash mechanism is about as secure as the random hash seed proposal.

      And:

      demerphq Personally I dont think its worth the effort of doing much more than thinking about this until someone demonstrates at least a laboratory grade attack. IMO in a real world environment with multi-host web sites, web servers being restarted, load balancers, and etc, that simple hash randomization is sufficient. Seems like any attack would require large numbers of fetch/response cycles and in general would just not be effective in a real production environment. I would assume that the administrators would notice the weird request pattern before an attacker could discover enough information to cause damage. Same argument for non-web services IMO.

      And it doesn't make anything more secure.

      And Dave_the_m said:

      Indeed, based on the thread starting at

      Message-ID: <2003101002...@perlsupport.com>

      it looks like the primary motivation for moving to rehash was to restore the binary compatibility within the 5.8.x branch inadvertently broken by 5.8.1.

      I'm not particularly keen on having hashes always randomised - it makes debugging harder, and reproducing a reported issue nigh-on impossible; but if Yves can show a measurable performance gain without the rehash checks, then I'll approve, as long as the hash seed can still be initialised with the env var PERL_HASH_SEED=0 - otherwise debugging becomes impossible.

      Which mirrors the OP's objections.

      So, there are significant consequences.

      So, why? What did Perl gain?
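Dave_the_m's debugging concern maps onto a small experiment (toy Python again; the hash and bucket count are illustrative assumptions, and PERL_HASH_SEED=0 is quoted from his message, not demonstrated here): with the seed pinned, iteration order is identical run after run, which is what makes a reported issue reproducible; with a fresh random seed per process it is not:

```python
import random

def seeded_hash(key: str, seed: int) -> int:
    h = seed
    for ch in key:
        h = (h * 33 + ord(ch)) & 0xFFFFFFFF
    return h

def iteration_order(keys, seed, nbuckets=8):
    """Order in which a bucket walk (like keys()) would yield the keys."""
    buckets = [[] for _ in range(nbuckets)]
    for k in keys:
        buckets[seeded_hash(k, seed) % nbuckets].append(k)
    return [k for chain in buckets for k in chain]

keys = ["alpha", "beta", "gamma", "delta"]
# Pinned seed (the PERL_HASH_SEED=0 style of run): same order every time.
print(iteration_order(keys, seed=0))
# Fresh random seed per run (production style): order varies between runs.
print(iteration_order(keys, seed=random.getrandbits(32)))
```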


        Why do you tell me this? I know all of that. Do you somehow think it contradicts what I said? Please reread the post to which you replied. It mentions neither performance nor making things more secure. It only mentions code simplification and maintaining the level of security.
