Thank you for your answer.
It looks like I found the original discussion,
and it seems the change was just an optimization that removed the overhead of the previous algorithm:
Perl currently has a defense against attacks on its hash algorithm
where it conditionally enables the use of a random hash seed when it
calculates the hash value for a string. This mode is triggered by
encountering an overly long bucket chain during insert. Once this bit
is set all further hash operations have to go through additional
layers of logic. All of this support has a per-operation cost.
The rehash mechanism more or less works around the problem of hash
order expectations by only enabling the randomization in degenerate
hashes. But it achieves this at the cost of added per operation
overhead in hot code paths and additional complexity in the code.