Yes. I shepherded this change into blead. In the course of doing so I had to fix a surprisingly large amount of code that, one way or another, had bogus expectations about hash ordering. Some of those bugs were quite subtle. For instance, consider code like this:
my %hash= some_list_of_key_value_pairs();
my @keys= keys %hash;
my %copy= %hash;
my @values= values %copy;
do_something_with($keys[$_],$values[$_]) for 0..$#keys;
This code will work so long as the keys put into %hash all land in different buckets. However, if any collide, it will fail: keys that share a bucket can come back in a different order from %copy than from %hash, so $keys[$_] and $values[$_] no longer correspond. That makes the code very sensitive to the exact data involved, and to an uninformed observer it would make no sense at all, especially if the test samples happened to work out. With an older perl the same input would always produce the same output, so a safe input set would remain safe forever. With hash seed randomization, on the other hand, you are pretty much guaranteed to get a collision /sometime/, no matter what input you feed in, at which point this code starts failing regularly.
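A minimal sketch of the safe pattern: take keys and values from the *same* hash, since Perl guarantees that keys() and values() on one hash return elements in matching order, or iterate with each(), which hands you matched pairs directly. The helpers some_list_of_key_value_pairs and do_something_with are hypothetical stand-ins for the ones in the snippet above.

```perl
use strict;
use warnings;

# Hypothetical stand-ins for the helpers in the snippet above.
sub some_list_of_key_value_pairs { return ( a => 1, b => 2, c => 3 ) }
sub do_something_with { my ( $k, $v ) = @_; print "$k=$v\n" }

my %hash = some_list_of_key_value_pairs();

# Safe: keys() and values() on the SAME hash are guaranteed to be
# in the same order, whatever the seed or bucket layout.
my @keys   = keys %hash;
my @values = values %hash;
do_something_with( $keys[$_], $values[$_] ) for 0 .. $#keys;

# Simpler still: each() yields key/value pairs that always match.
while ( my ( $k, $v ) = each %hash ) {
    do_something_with( $k, $v );
}
```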
I've seen all kinds of variants of this. All of them could be broken even on old perls by tweaking the state of the hash, such as pre-sizing the bucket array to a particular size, or putting the same keys into two hashes with different bucket array sizes. These were things that failed very, very rarely and were very hard to debug. With randomization, all of these kinds of bugs start happening much more often, and therefore become much easier to track down and fix.
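For illustration, pre-sizing is done with an lvalue assignment to keys(); the sketch below builds two hashes with identical keys but different bucket-array sizes. Whether their iteration orders actually differ depends on the perl version, the seed, and the keys, so no particular output is asserted here.

```perl
use strict;
use warnings;

my @k = ( 'aa' .. 'az' );    # 26 arbitrary keys

my %small;
$small{$_} = 1 for @k;       # default-sized bucket array

my %big;
keys(%big) = 512;            # pre-size the bucket array before inserting
$big{$_} = 1 for @k;

# Same keys, but bucket placement depends on the bucket-array size,
# so the two hashes may iterate in different orders.
print join( ',', keys %small ), "\n";
print join( ',', keys %big ),   "\n";
```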