The keyword sort is a pretty big clue as to what the code does, and should be more than sufficient to inform the casual reader of the purpose of the code.
If the reader is less casual and needs to know how the sort operates -- perhaps because they wish to alter that ordering in some way -- then you'd have to hope they'd take a few minutes to understand that before making changes.
With the more efficient functions running up to 15x faster than the naive one, the idea that 5 minutes of a junior programmer's (or can't-be-bothered senior programmer's) time is more valuable than that of the tens (or thousands, or millions) of users who will sit around waiting for the naive/lazy coder's efforts to finish is a crock.
And it doesn't take "bezillions" of records for the delays to be significant. A few tens of thousands of items are enough for the fastest algorithm tested here to take 1 second; the naive function would take over 15 seconds to do the same thing. And there are two detrimental aspects of that to consider:
Imagine Amazon or Google having to build 15 (or just 10 or 5) times as many of their mega data centres as they have now in order to deal with peak time loading.
Now think about the energy costs.
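The implementations from the parent thread aren't reproduced here, but the general effect is easy to demonstrate. Below is an illustrative sketch (in Python, for portability; the same idea applies to Perl's sort) of one common way a naive sort ends up many times slower than an efficient one: recomputing an expensive sort key on every comparison versus computing it once per item. The names `expensive_key`, `naive_sort`, and `efficient_sort` are hypothetical, and the key function is just a stand-in for whatever costly extraction the real code does.

```python
import random
import time
from functools import cmp_to_key

def expensive_key(s):
    # Hypothetical stand-in for a costly sort key; the thread's real
    # key extraction is not reproduced here.
    return (sum(ord(c) for c in s) % 97, s)

def naive_sort(items):
    # Naive approach: the key is recomputed on every comparison, so
    # each of the O(n log n) comparisons pays the key cost twice.
    cmp = lambda a, b: ((expensive_key(a) > expensive_key(b)) -
                        (expensive_key(a) < expensive_key(b)))
    return sorted(items, key=cmp_to_key(cmp))

def efficient_sort(items):
    # Efficient approach: compute each key exactly once. Python's
    # key= argument does the decorate step internally, analogous to
    # Perl's Schwartzian transform.
    return sorted(items, key=expensive_key)

if __name__ == "__main__":
    data = [''.join(random.choices('abcdefgh', k=12)) for _ in range(5_000)]
    t0 = time.perf_counter()
    a = naive_sort(data)
    t1 = time.perf_counter()
    b = efficient_sort(data)
    t2 = time.perf_counter()
    assert a == b  # same ordering, very different cost
    print(f"naive:     {t1 - t0:.3f}s")
    print(f"efficient: {t2 - t1:.3f}s")
```

The exact speedup depends on how expensive the key is and on the data size, but the gap grows with both, which is exactly why the one-off cost of understanding the efficient version is cheap by comparison.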
I don't know how many millions of times an hour Amazon or similar sites perform sorts; but I do know that if I were the boss, I'd rather have those million sorts use 1/15th of the CPU/energy/time, even if that meant paying for an extra 5 or 10 minutes of programmer time whenever a new programmer had to get to grips with how the efficient sort worked.
Comment on Re^2: Unusual sorting requirements; comparing three implementations.