Re^2: Modified Binary Search

by jethro (Monsignor)
on Jan 14, 2010 at 05:00 UTC ( #817365 )


in reply to Re: Modified Binary Search
in thread Modified Binary Search

Reduce memory, yes; whether it speeds things up depends.

If he needs the number of distinct(!) elements, there would be a tremendous speed-up, but then the hash would be superfluous.

If he needs the total number of elements, he would have to sum the counts of all distinct elements between $beg and $end instead of just subtracting $beg from $end. Depending on the data set, this additional step could eat away whatever the shorter binary search saves.

To be precise: you only get a speed-up if 2*log2(average number of duplicates per value) > (average number of distinct values spanned by a search), because deduplication shortens each of the two binary searches by about log2 of the duplication factor, while the summing step costs one addition per distinct value in the range.
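
For concreteness, here is a minimal sketch of that trade-off (an illustration added in editing, not code from the thread; the sub names are invented, and a parallel count array stands in for the hash so the deduplicated keys stay sorted and binary-searchable):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Collapse a sorted list into distinct values plus a parallel count array.
    sub dedup_with_counts {
        my @sorted = @_;
        my (@uniq, @count);
        for my $x (@sorted) {
            if (@uniq and $uniq[-1] eq $x) { $count[-1]++ }
            else { push @uniq, $x; push @count, 1 }
        }
        return (\@uniq, \@count);
    }

    # Smallest index whose value is ge $target; scalar(@$uniq) if none.
    sub lower_bound {
        my ($uniq, $target) = @_;
        my ($lo, $hi) = (0, scalar @$uniq);
        while ($lo < $hi) {
            my $mid = int( ($lo + $hi) / 2 );
            if ($uniq->[$mid] lt $target) { $lo = $mid + 1 }
            else { $hi = $mid }
        }
        return $lo;
    }

    # Two (shorter) binary searches, then one addition per distinct
    # value in the range -- the extra summing step discussed above.
    sub count_between {
        my ($uniq, $count, $from, $to) = @_;
        my $beg = lower_bound($uniq, $from);
        my $end = lower_bound($uniq, $to);
        $end++ if $end < @$uniq and $uniq->[$end] eq $to;  # make $end exclusive
        my $total = 0;
        $total += $count->[$_] for $beg .. $end - 1;
        return $total;
    }

    my ($uniq, $count) = dedup_with_counts(qw(a a a b c c d d d d));
    print count_between($uniq, $count, 'b', 'd'), "\n";  # 7 elements: b c c d d d d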

Re^3: Modified Binary Search
by BrowserUk (Pope) on Jan 14, 2010 at 06:03 UTC
    Depending on the data set this additional step could eat away any savings from performing a few steps more in the binary search.

    Sorry, but that would only be true if a binary search worked on data with duplicates. It doesn't.

    So you have to factor in the additional complexity of the basic search plus the steps required to locate the appropriate (upper or lower) boundary of the run of duplicates.

    I don't believe it is possible to code a search over sorted data with duplicates that comes even close to being O(log N). Even in theory. And in practical implementations, it'd be far slower.

    Feel free to prove me wrong :)


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      I don't believe it is possible to code a search over sorted data with duplicates that comes even close to being O(log N). Even in theory.
      You're wrong. You can easily find the smallest index containing your target value by using a condition like:
      return $i if $A[$i] == $target && ($i == 0 || $A[$i-1] < $target);
      instead of the usual
      return $i if $A[$i] == $target;
      And you find the highest index by using:
      return $i if $A[$i] == $target && ($i == $#A || $A[$i+1] > $target);
      It doesn't increase the run-time complexity; the worst case of a binary search is when no match is found anyway.
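
      For concreteness, here is one full, runnable rendering of that idea (added in editing, not the poster's own sub; the names find_first and find_last are invented). Each iteration still halves the interval, so both subs stay O(log N):

      #!/usr/bin/perl
      use strict;
      use warnings;

      # Smallest index holding $target, using the first condition above.
      sub find_first {
          my ($target, @A) = @_;
          my ($lo, $hi) = (0, $#A);
          while ($lo <= $hi) {
              my $mid = int( ($lo + $hi) / 2 );
              return $mid if $A[$mid] == $target
                          && ($mid == 0 || $A[$mid - 1] < $target);
              if ($A[$mid] < $target) { $lo = $mid + 1 }
              else                    { $hi = $mid - 1 }  # also on a non-first match
          }
          return -1;  # not found
      }

      # Highest index holding $target, using the second condition above.
      sub find_last {
          my ($target, @A) = @_;
          my ($lo, $hi) = (0, $#A);
          while ($lo <= $hi) {
              my $mid = int( ($lo + $hi) / 2 );
              return $mid if $A[$mid] == $target
                          && ($mid == $#A || $A[$mid + 1] > $target);
              if ($A[$mid] > $target) { $hi = $mid - 1 }
              else                    { $lo = $mid + 1 }  # also on a non-last match
          }
          return -1;  # not found
      }

      my @A = (1, 2, 2, 2, 3, 3, 5);
      print find_first(2, @A), "\n";  # 1
      print find_last(3, @A), "\n";   # 5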
        You can easily find the smallest index containing your target value

        How about finding the smallest index that contains a value equal to or greater than your target; or the highest index with a value equal to or less than it?

        And how about you post a full sub, to save all of us trying to recreate your thought?

        It doesn't increase the run-time complexity

        The proof is in the pudding!


        superb....

      Ok, here is the proof. I programmed the algorithm as described in my first post:

      #!/usr/bin/perl
      use 5.10.0;
      use strict;
      use warnings;

      # 312 distinct strings ('aa' .. 'lz'), 7 copies of each => 2184 elements
      my @data;
      for ('aa' .. 'lz') {
          push @data, ($_) x 7;
      }
      my $size = scalar(@data);
      say 'array size is ', $size, ', log2 of ', $size, ' is ',
          int( log($size) / log(2) );

      for ('aa', 'ab', 'ce', 'dn', 'ea', 'fr', 'lb') {
          my ($beg, $iterations) = binary_search($_, @data);
          say "Found first '$_' at position $beg after $iterations iterations";
      }

      #----------------------
      # Returns the index of the first occurrence of $search, plus the
      # number of iterations used.
      sub binary_search {
          my ($search, @data) = @_;
          return (0, 0) if @data < 2;
          my $beg  = 0;
          my $end  = @data - 1;
          my $iter = 0;
          while ($beg < $end - 1) {
              my $middle = int( ($end + $beg) / 2 );
              if ($data[$middle] lt $search) {
                  $beg = $middle;
              }
              else {
                  $end = $middle;
              }
              $iter++;
          }
          # handle the special case if you are looking for $data[0]
          return ($beg, $iter) if $data[$beg] eq $search;
          return ($end, $iter);
      }

      #output:
      array size is 2184, log2 of 2184 is 11
      Found first 'aa' at position 0 after 11 iterations
      Found first 'ab' at position 7 after 11 iterations
      Found first 'ce' at position 392 after 11 iterations
      Found first 'dn' at position 637 after 11 iterations
      Found first 'ea' at position 728 after 11 iterations
      Found first 'fr' at position 1029 after 11 iterations
      Found first 'lb' at position 2009 after 11 iterations

      OK, not a proof in formal language, but good enough for us, I hope. As you can see, it is a binary search: it takes exactly the number of iterations predicted to find the items. It is also evident that it finds the first occurrence (see the first two results; all the following positions are divisible by 7, as they must be when every element appears seven times).

      BrowserUk,
      Since this was a "one and done" script, I didn't really care too much about this being as efficient as possible. I ended up just performing 2 binary searches (to find each end point) and then reverted to a linear search to handle the duplicates.

      Regarding the statement: I don't believe it is possible to code a search over sorted data with duplicates that comes even close to being O(log N). Even in theory. And in practical implementations, it'd be far slower.

      Why would the following logic be so much slower in implementation? (A code sketch of it follows the list below.)

      • Given: A sorted list containing duplicates
      • Given: A target value
      • Given: A desired anchor (closest to which endpoint)
      • Find: The closest element to the desired anchor that is equal to or $desired_operator than the target value
      1. Perform a normal binary search to find the target item
        • If not found, check if $list[$mid] < $val to determine if $mid has to be adjusted by one to meet $anchor/$desired_operator - done
        • If found, proceed to step 2
      2. Determine if you are at the "right" endpoint of the run of duplicates by checking $list[$mid - 1] eq $val or $list[$mid + 1] eq $val (the neighbour on the anchor side)
        • If yes, done
        • If no, proceed to step 3
      3. Check to see if this item is even a duplicate by checking $list[$mid - 1] eq $val or $list[$mid + 1] eq $val - whichever one was not checked in step 2
        • If not a duplicate - done
        • If a duplicate - proceed to step 4
      4. Use the following logic to find the first element in the desired direction that is not a duplicate. For the description, let's say I am trying to find the last element. $min = $mid from previous search and $max = $#list.
        • If $list[$mid] eq $val, $min = $mid
        • If $list[$mid] ne $val, $max = $mid - 1
        • Stop when $max - $min < 2
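
      A rough rendering of the above logic in code, for the "last element" case (a sketch added in editing, not L~R's actual script; the sub name find_last_of is invented). Note that step 4 is itself a bisection over the run of duplicates, so the whole procedure stays O(log N):

      #!/usr/bin/perl
      use strict;
      use warnings;

      sub find_last_of {
          my ($val, $list) = @_;

          # Step 1: normal binary search for any occurrence of $val.
          my ($lo, $hi) = (0, $#$list);
          my ($mid, $found) = (0, 0);
          while ($lo <= $hi) {
              $mid = int( ($lo + $hi) / 2 );
              if    ($list->[$mid] lt $val) { $lo = $mid + 1 }
              elsif ($list->[$mid] gt $val) { $hi = $mid - 1 }
              else                          { $found = 1; last }
          }
          return undef unless $found;

          # Steps 2-3: already at the right endpoint, or not a duplicate at all?
          return $mid if $mid == $#$list or $list->[$mid + 1] ne $val;

          # Step 4: bisect the tail for the last index still equal to $val,
          # using the $min/$max updates described above.
          my ($min, $max) = ($mid, $#$list);
          while ($max - $min > 1) {
              my $m = int( ($min + $max) / 2 );
              if ($list->[$m] eq $val) { $min = $m }
              else                     { $max = $m - 1 }
          }
          return $list->[$max] eq $val ? $max : $min;
      }

      my @list = qw(a b b b b c);
      print find_last_of('b', \@list), "\n";  # prints 4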

      Cheers - L~R
