Yes, you inspect log N items and move one (steps 1, 2 and 3 below). But then you are not finished: you still need to swap items 1 and 2.
    1) 0 [ 10 ]    2) 0 [ 10 ]    3) 0 [ 11 ]    4) 0 [ 11 ]
       1 [  9 ]       1 [  9 ]       1 [  9 ]       1 [ 10 ]
       2 [  8 ]       2 [ 11 ]       2 [ 10 ]       2 [  9 ]
       3 [  7 ]       3 [  7 ]       3 [  7 ]       3 [  7 ]
       4 [  5 ]       4 [  5 ]       4 [  5 ]       4 [  5 ]
Now try making that a 7-item array and moving the middle item to the top. Count the number of comparisons and swaps required.
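To make the counting concrete, here is a minimal sketch (mine, not from the post above; Python standing in for Perl) of a sift-up in a list-based max-heap that tallies comparisons and swaps. Note what it leaves you with: a valid heap, not a sorted array.

```python
# Hypothetical illustration: sift-up in a max-heap stored as a list
# (index 0 is the root), counting comparisons and swaps as it goes.

def sift_up(heap, i):
    """Bubble heap[i] toward the root; return (comparisons, swaps)."""
    comparisons = swaps = 0
    while i > 0:
        parent = (i - 1) // 2
        comparisons += 1
        if heap[i] <= heap[parent]:
            break
        heap[i], heap[parent] = heap[parent], heap[i]  # swap with parent
        swaps += 1
        i = parent
    return comparisons, swaps

# 7-item max-heap; bump the middle item (index 3) above everything else.
heap = [10, 9, 8, 7, 5, 4, 3]
heap[3] = 11
print(sift_up(heap, 3))  # → (2, 2): O(log N) comparisons and swaps...
print(heap)              # → [11, 10, 8, 9, 5, 4, 3]: a heap, NOT sorted
```

Only two comparisons and two swaps were needed, but the result still is not in sorted order, which is exactly the extra fix-up being discussed above.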
In the end, you have had to move the middle item to the top and shift all the intervening items down. splice does this directly, in O(N); a heap algorithm does it one element at a time, in O(N log N).
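For comparison, the whole move is a single operation when you treat it as a block shift. A sketch (Python's pop/insert standing in for Perl's splice; the data is illustrative):

```python
# Hypothetical illustration: promoting the middle element to the front in
# one splice-like operation. The shift of the intervening items is a single
# O(N) block move, with no per-element comparisons at all.

arr = [11, 10, 9, 7, 5, 4, 3]
item = arr.pop(len(arr) // 2)  # remove the middle item (7)
arr.insert(0, item)            # put it at the top; the rest slide down
print(arr)  # → [7, 11, 10, 9, 5, 4, 3]
```

In Perl the pop-and-insert pair would be the corresponding two splice calls on the array.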
I have several good data structure and algorithm books; a couple of them are almost as old as you. Unlike you, apparently, I haven't just read the headlines: I've also implemented many of the algorithms myself and understood the ramifications.
I was simply waiting for you to catch up with the fact that the use of heaps has no benefit here.
The likely size of the cache is a few hundred, maybe 1000, elements; more than this and I run out of file handles or memory. splice is far more efficient at moving one item in an array of this size than any Perl implementation of (binary search + swap) * (old_position - new_position).
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." David Dunham
"Think for yourself!"  Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side."  tachyon
