The problem is that you get guaranteed worst-case behaviour at the cost of extra code on what should be a fast path, and that code is going to be executed constantly. I suspect that you'd find a measurable slowdown in perl. Which is why I personally prefer the solution that was chosen: vary the hashing function so that its performance characteristics are unaltered, but there is no predictable DOS target to go after.
However, don't let my back-of-the-envelope guess deter you from trying to implement that solution. It may be that there is no significant performance penalty, in which case the improved worst-case performance would be well worthwhile.
It's interesting that this thread resurfaced now, as just yesterday I posted a patch to p5p to change the hash function we use from the "OneAtATime" function to Paul Hsieh's "SuperFastHash". Details of both can be found in an updated version of Bob Jenkins' hash article from Dr. Dobb's. SuperFastHash is significantly faster than OneAtATime on modern machines, running about 3 times as fast.
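For reference, the OneAtATime function under discussion is small enough to sketch in full. This is the textbook version from Jenkins' article, not perl's exact macro:

```c
#include <stddef.h>
#include <stdint.h>

/* Bob Jenkins' one-at-a-time hash, as described in the Dr. Dobb's
   article; a textbook sketch, not perl's exact PERL_HASH macro. */
uint32_t one_at_a_time(const unsigned char *key, size_t len)
{
    uint32_t hash = 0;
    for (size_t i = 0; i < len; i++) {
        hash += key[i];
        hash += hash << 10;
        hash ^= hash >> 6;
    }
    /* final avalanche mixing */
    hash += hash << 3;
    hash ^= hash >> 11;
    hash += hash << 15;
    return hash;
}
```

The per-byte shifts and the final avalanche are what SuperFastHash restructures (it reads 16 bits at a time), which is where its speed advantage comes from.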
Anyway, I'd be curious whether you have any tools or scripts that could be used to do some analysis of how the two functions perform. I plan to instrument a version of perl so I can produce chain-length histograms and the like, but anything else you can think of would be appreciated.
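For the chain-length histograms, something along these lines would do as a standalone harness; `toy_hash`, the fixed bucket count, and the key list are all stand-ins for whatever the instrumented perl would actually use:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NBUCKETS 8  /* stand-in; perl sizes its hash tables dynamically */

/* Stand-in hash (one-at-a-time); swap in the candidate being analysed. */
static uint32_t toy_hash(const char *s)
{
    uint32_t h = 0;
    while (*s) {
        h += (unsigned char)*s++;
        h += h << 10;
        h ^= h >> 6;
    }
    h += h << 3;
    h ^= h >> 11;
    h += h << 15;
    return h;
}

/* Fill hist[n] with the number of buckets holding exactly n keys,
   for n = 0 .. maxlen. */
static void chain_histogram(const char **keys, size_t nkeys,
                            size_t *hist, size_t maxlen)
{
    size_t counts[NBUCKETS] = {0};
    for (size_t i = 0; i < nkeys; i++)
        counts[toy_hash(keys[i]) % NBUCKETS]++;
    memset(hist, 0, (maxlen + 1) * sizeof *hist);
    for (size_t b = 0; b < NBUCKETS; b++)
        if (counts[b] <= maxlen)
            hist[counts[b]]++;
}
```

A good function should keep the histogram mass near short chains; a clustered function (or a successful collision attack) shows up as a few very long chains.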
I'd suggest benchmarking how fast Perl finishes its test suite. Hashing is so pervasively used within Perl that any significant difference is sure to become obvious.
Also, I'd strongly suggest that you add the ability to mix a random seed into the hash, generated afresh by Perl on every run. As this thread explains, that is needed to stop a DOS attack via user-supplied keys, which is a potential remote exploit in a web application.
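The simplest way to work a seed into a function like OneAtATime is to start the running hash from the seed instead of zero. A sketch, not perl's actual seeding code:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

/* One-at-a-time, but starting from a per-process random seed so an
   attacker cannot precompute a set of colliding keys.  Sketch only;
   perl's real seeding differs in detail. */
uint32_t seeded_hash(uint32_t seed, const unsigned char *key, size_t len)
{
    uint32_t h = seed;
    for (size_t i = 0; i < len; i++) {
        h += key[i];
        h += h << 10;
        h ^= h >> 6;
    }
    h += h << 3;
    h ^= h >> 11;
    h += h << 15;
    return h;
}

/* Generate the seed once, at interpreter startup.  (srand/rand is a
   placeholder here; a real implementation should use a proper entropy
   source such as /dev/urandom.) */
uint32_t make_seed(void)
{
    srand((unsigned)time(NULL));
    return (uint32_t)rand();
}
```

The key property is that the seed stays fixed for the life of the process (so lookups remain consistent) but differs between runs, so no fixed set of keys reliably degenerates the table.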