
### Re^6: Re-orderable keyed access structure?

by BrowserUk (Pope)
 on Aug 15, 2004 at 00:24 UTC ( #383050=note )

in reply to Re^5: Re-orderable keyed access structure?
in thread Re-orderable keyed access structure?

Sorry, but I'm having trouble picturing how this works. A binary tree in an array. Highest value is always root (element 0?). How do I locate the second highest item? If it's always element 1, then you're talking about a sorted array that re-sorts itself as you add items to it or change the weight of any of the items.

The array elements contain the weights, but where does the payload go? So the array becomes an AoA? Raising an item's weight to the top means assigning that item a weight higher than the weight in $array[0][0]--easy enough.

But then, the process of re-sorting the list is to move the newly highest-weighted item to position 0. This is done by comparing this item's weight with that of the item above it and, if it's higher (which it always will be), swapping the two items and repeating with the next item up. Keep repeating until it has found its way to the top.

This is a bubble sort. If the previously lowest item in the heap/array gets modified, then its weight gets set to the highest value +1 (e.g. $array[ 0 ][ 0 ] + 1). The heap algorithm then swaps it with every item in the heap/array, one at a time, until it reaches the top.

Given that I already know that I am moving the item to the top of the list, the splice operation is vastly more efficient. Once I am going to do that, there is no point in embedding the weight within each element, because it is always directly related to the element's position in the array. I.e. the element's position is its weight.
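Something like this, I assume, is the splice approach being described (the sub name and sample data are mine, just a sketch):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Promote the element at $pos to the front of @$array: remove it with
# one splice and re-insert it at index 0. A single pair of opcodes,
# though perl still shifts every intervening element down internally.
sub promote_to_front {
    my( $array, $pos ) = @_;
    my( $item ) = splice @$array, $pos, 1;   # remove the item
    unshift @$array, $item;                  # re-insert at the front
}

my @cache = qw( a b c d e );
promote_to_front( \@cache, 2 );
print "@cache\n";    # c a b d e
```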

Back to square one.

If I have this wrong (and I am pretty sure that I don't) then please illustrate a 5 element weighted heap and the steps required to raise the middle element to top (or bottom).

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon

Re^7: Re-orderable keyed access structure?
by Aristotle (Chancellor) on Aug 15, 2004 at 00:46 UTC

You could call it a bubble sort in a sorted array (O( n )), I guess. Except you're only looking at log n items. The splice is not more efficient: you need to move the entire array down one position to move a new element to the front. With a heap, you need to swap at most log n elements.

Google found me a good explanation of heaps. Look at the illustrations. It also explains storage of complete binary trees in an array.
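The array layout of a complete binary tree boils down to a little index arithmetic; a minimal sketch (sub names are mine):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A complete binary tree stored in an array: the root is element 0,
# the children of node $i live at 2*$i+1 and 2*$i+2, and the parent
# of node $i is at int( ($i-1)/2 ).
sub parent { int( ( $_[0] - 1 ) / 2 ) }

# In a (max-)heap every node compares >= both of its children,
# i.e. every non-root node compares <= its parent.
sub is_heap {
    my @h = @_;
    for my $i ( 1 .. $#h ) {
        return 0 if $h[ parent( $i ) ] < $h[ $i ];
    }
    return 1;
}

print is_heap( 10, 9, 8, 7, 5 ) ? "heap\n" : "not a heap\n";   # heap
```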

Please do pick up a book or two on algorithms and data structures; this is stuff anyone who is serious about programming should know.

Makeshifts last the longest.

Note however that this is Perl. It is true that one splice is O($#array) in chunks of memory to be moved while a heap moves fewer chunks of memory, O(log($#array)). But it is also true that splice is a single Perl opcode while the heap will be O(log($#array)) in Perl opcodes.

And I wouldn't be surprised if O(1 opcode) + O($#array moves) isn't quite often a win over O(log($#array)) opcodes and moves.
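One rough way to test that hunch with the core Benchmark module (sub names, the array size, and the unconditional sift-to-root are all mine; treat the numbers as indicative only):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw( cmpthese );

my $n = 1_000;

# Move the middle element to the front: one splice + unshift,
# but O($n) memory moves inside perl.
sub by_splice {
    my @a = ( 1 .. $n );
    unshift @a, splice @a, $n >> 1, 1;
    return \@a;
}

# Give the middle element the largest weight and sift it up through
# its heap ancestors: O(log $n) opcodes as well as moves.
# ( reverse 1 .. $n is a valid max-heap to start from. )
sub by_sift {
    my @a = reverse 1 .. $n;
    my $i = $n >> 1;
    $a[ $i ] = $n + 1;
    while ( $i ) {
        my $p = int( ( $i - 1 ) / 2 );
        @a[ $i, $p ] = @a[ $p, $i ];
        $i = $p;
    }
    return \@a;
}

cmpthese( 1000, { splice => \&by_splice, sift => \&by_sift } );
```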

- tye

Sure, if k1 is much smaller than k2, O( k1 n ) will be smaller for small values of n than O( k2 log n ). Using builtins is a good way of getting very small values for k1, and I've asserted many times that this is a sensible optimization goal in Perl, even recently.

But with n growing, the constants eventually become irrelevant. Since BrowserUk claims to be unable to hold all of his data in memory, I would assume this is such a situation. Even Perl's builtin splice won't move 100,000 elements down one position faster than spelled-out Perl code would swap 17 (≅ log2 100_000) elements.

Makeshifts last the longest.

Yes, you inspect log N items and move one (steps 1, 2 and 3 below). But then you are not finished. You still need to swap items 1 and 2.

```
1)  0 [ 10 ]   2)  0 [ 10 ]   3)  0 [ 11 ]   4)  0 [ 11 ]
    1 [  9 ]       1 [  9 ]       1 [  9 ]       1 [ 10 ]
    2 [  8 ]       2 [ 11 ]       2 [ 10 ]       2 [  9 ]
    3 [  7 ]       3 [  7 ]       3 [  7 ]       3 [  7 ]
    4 [  5 ]       4 [  5 ]       4 [  5 ]       4 [  5 ]
```

Now try making that a 7 item array and moving the middle item to the top. Count the number of comparisons and swaps required.

In the end, you have had to move the middle item to the top and all the intervening items down. Splice does this directly. A heap algorithm does it one at a time.

Splice does this in O(N). A heap algorithm does it using O(N log N).

I have several good data structure & algorithm books, a couple of them are almost as old as you. Unlike you apparently, I haven't just read the headlines. I've also implemented many of the algorithms myself and understood the ramifications.

I was simply waiting for you to catch up with the fact that the use of heaps has no benefit here.

The likely size of the cache is a few hundred, maybe 1000 elements. More than this and I run out of file handles or memory. splice is way more efficient at moving 1 item in an array of this size than any implementation of a (binary search + swap) * (old_position - new_position) in Perl.

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon

Sorry, I'm not the one who seems to only have read headlines. A heap does not somehow entail a bubble sort. But let's leave the ad hominem out and look at facts.

Yes, you inspect log N items and move one (steps 1, 2 and 3 below).

A single swap requires inspecting exactly two elements, not log n. You need at most log n swaps total at any time.

But then you are not finished. You still need to swap items 1 and 2.

Why? The heap condition is not violated at any point after your step 3 (which is really step 2, and swapping step 1). $a[0] > $a[1] and $a[0] > $a[2] is fulfilled, so the root and its children satisfy the condition. Likewise $a[1] > $a[3] and $a[1] > $a[4], so the left child of the root and its children satisfy the condition as well. $a[2] has no children, so it automatically satisfies the condition as well. Your step 4 is not required in a heap.
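You can check that mechanically; a small sketch over the array as it stands after step 3 above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The array after step 3 in the 5-element example: [ 11, 9, 10, 7, 5 ].
# Verify the heap condition: every node >= its children at 2*$i+1 and 2*$i+2.
my @a  = ( 11, 9, 10, 7, 5 );
my $ok = 1;
for my $i ( 0 .. $#a ) {
    for my $c ( 2 * $i + 1, 2 * $i + 2 ) {
        $ok = 0 if $c <= $#a and $a[ $i ] < $a[ $c ];
    }
}
print $ok ? "heap condition holds\n" : "heap condition violated\n";   # heap condition holds
```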

Want me to demonstrate on a larger heap? Sure.

```
X)   0 [ 13 ]   0)   0 [ 13 ]   1)   0 [ 13 ]   2)   0 [ 13 ]   3)   0 * 16 ]
     1 [ 12 ]        1 [ 12 ]        1 [ 12 ]        1 [ 12 ]        1 [ 12 ]
     2 [ 11 ]        2 [ 11 ]        2 [ 11 ]        2 * 16 ]        2 * 13 ]
     3 [ 10 ]        3 [ 10 ]        3 [ 10 ]        3 [ 10 ]        3 [ 10 ]
     4 [  9 ]        4 [  9 ]        4 [  9 ]        4 [  9 ]        4 [  9 ]
     5 [  8 ]        5 [  8 ]        5 * 16 ]        5 * 11 ]        5 [ 11 ]
     6 [  7 ]        6 [  7 ]        6 [  7 ]        6 [  7 ]        6 [  7 ]
     7 [  6 ]        7 [  6 ]        7 [  6 ]        7 [  6 ]        7 [  6 ]
     8 [  5 ]        8 [  5 ]        8 [  5 ]        8 [  5 ]        8 [  5 ]
     9 [  4 ]        9 [  4 ]        9 [  4 ]        9 [  4 ]        9 [  4 ]
    10 [  3 ]       10 [  3 ]       10 [  3 ]       10 [  3 ]       10 [  3 ]
    11 [  2 ]       11 * 16 ]       11 *  8 ]       11 [  8 ]       11 [  8 ]
    12 [  1 ]       12 [  1 ]       12 [  1 ]       12 [  1 ]       12 [  1 ]
```

That's it: 3 swaps in a 13-element heap.
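The walk-through above is an ordinary sift-up; a sketch that reproduces it on the same 13-element heap (variable names are mine):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The heap from column X), with element 11 raised to 16 (step 0),
# then sifted up: swap with the parent at int(($i-1)/2) until the
# parent is no smaller.
my @h = ( 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 );
$h[11] = 16;

my $swaps = 0;
my $i     = 11;
while ( $i ) {
    my $p = int( ( $i - 1 ) / 2 );
    last if $h[ $p ] >= $h[ $i ];        # heap condition restored
    @h[ $i, $p ] = @h[ $p, $i ];         # swap child with parent
    $i = $p;
    ++$swaps;
}
print "$swaps swaps, root = $h[0]\n";    # 3 swaps, root = 16
```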

In a heap with 100 elements, you need at most 7 swaps to get an item from the bottom of the heap to the top without violating the heap condition. I am doubtful of whether splice would win.

In a heap with 1,000 elements, you need at most 10 swaps. How much money will you bet on splice?

Makeshifts last the longest.
