For example, you have a webpage which outputs some data in random order. One can find the hash seed used by that server's worker process, then DoS the worker by sending specially crafted data (which will be treated as hash keys by the worker process).
Anyway, it's already in Perl, so I assume the one who argues that the change is wrong should show the proof, not the one who asks why it's wrong.
I understand the original problem -- which is what you are describing -- but that was fixed in 5.8.1 using something akin to this.
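The effect of that 5.8.1-era mitigation can be seen from the command line. A minimal sketch, assuming only a perl binary on PATH; the PERL_HASH_SEED environment variable has been honoured since 5.8.1, and pinning it makes key order repeatable between processes (whether the unseeded runs below actually differ depends on the perl version and how the randomisation is triggered):

```shell
# Key order of the same hash in two separate processes; on perls with
# per-process seed randomisation these two lines may print different orders:
perl -e 'my %h = map { $_ => 1 } "a" .. "j"; print join(",", keys %h), "\n"'
perl -e 'my %h = map { $_ => 1 } "a" .. "j"; print join(",", keys %h), "\n"'

# Pinning the seed via PERL_HASH_SEED makes the order repeatable run to run:
PERL_HASH_SEED=0 perl -e 'my %h = map { $_ => 1 } "a" .. "j"; print join(",", keys %h), "\n"'
PERL_HASH_SEED=0 perl -e 'my %h = map { $_ => 1 } "a" .. "j"; print join(",", keys %h), "\n"'
```

Note that 0 is a legal seed; on 5.18 and later it also disables the key-order perturbation entirely.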
That's all old news.
However, the latest changes implemented in 5.17 are purported to address a different problem, or at least a different manifestation of that old problem; and the changes go much further.
In addition to adding randomisation, they claim to add "per-process randomisation" -- which makes no sense as the randomisation of the hash initialisation was always per-process -- and several new (selectable) hashing algorithms.
The problem is, this "new attack vector" has never been publicly described -- even to the (so-called) clearing agencies that raised this undescribed, undemonstrated "problem" to significant status.
Thus not only have the implemented "fixes" never been verified as addressing the problem; the "problem" has never been verified as existing as a real-world threat.
These "fixes" for this undemonstrated problem not only affect code that relied upon previously reliable but unspecified, and thus subject-to-change, behaviour; they also have a raft of consequences for all new code that uses hashes correctly -- ie. in accordance with the long-standing assumption that key order is indeterminate.
Consequences that are measurably significant, and unjustifiable unless they arise from some demonstrable need!
That need has not been demonstrated.
One man claimed a reason, proposed solutions, and implemented them -- without ever demonstrating the need, nor the theoretical efficacy of the proposed solution, nor the actual effectiveness of the implementation.
Nor were any other possible solutions to the undemonstrated problem ever considered.
All because the sole sponsor, sole author and sole tester is hiding behind "need to know", and thus ignoring the FOSS/security-industry principle of full disclosure.
From my investigations -- based in part, of necessity, on informed guesstimation and 'reading between the lines', along with a lot of research and reading everything I could find --
the problem (if it exists at all) has never been seen in the wild.
It is, at best, a theoretical possibility that would require a whole bunch of coincidences to manifest together, along with a bad guy who has:
unprecedented knowledge of his target's systems;
unprecedented access to (and influence over) those target systems;
unprecedented hardware resources to bring to bear on the attack.
In short; it ain't never goin' to 'appen.
If a bad guy were so equipped, then the proposed solutions do little or nothing to dissuade the attacker, nor to defuse the attack.
They are adding another couple of deadbolts to the front door whilst the back door and all the windows stand wide open.
The implementation does not implement the proposal.
The code is in error in several places.
That's as much as I am prepared to say. You cannot have a rational discussion based upon speculation, rumour and guesswork. So, I'm keeping my powder dry...
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.