http://www.perlmonks.org?node_id=817283


in reply to Modified Binary Search

Build a new array containing the indexes of the elements where sequences of equal strings start. For instance:
@a = qw(a a b b c c c d e e e e e f f)
# ix:   0 0 0 0 0 0 0 0 0 0 1 1 1 1 1
#       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4
#start: * - * - * - - * * - - - - * -

@start = (0, 2, 4, 7, 8, 13);
Then just perform the binary search over @start, using $a[$start[$ix]] as the search key.

Once you find the index $ix corresponding to the searched element, the start and end offsets will be $start[$ix] and $start[$ix+1] - 1 (or $#a when $ix points to the last entry of @start).
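A minimal sketch of how the lookup could be wired up (the find_range name and the glue code are mine, not from the thread):

sub find_range {
    my ($str, $a, $start) = @_;
    # binary search over @start, using $a->[$start->[$p]] as the key
    my ($l, $h) = (0, scalar @$start);
    while ($l < $h) {
        my $p = int(($l + $h) / 2);
        if ($a->[$start->[$p]] lt $str) { $l = $p + 1 }
        else                            { $h = $p }
    }
    return unless $l < @$start && $a->[$start->[$l]] eq $str;
    my $lo = $start->[$l];
    my $hi = ($l < $#$start ? $start->[$l + 1] : scalar @$a) - 1;
    return ($lo, $hi);
}

# with the arrays above, find_range('e', \@a, \@start) returns (8, 12)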

Replies are listed 'Best First'.
Re^2: Modified Binary Search
by ikegami (Pope) on Jan 13, 2010 at 22:02 UTC
    That makes an O(log N) algorithm into O(N). Where 10 checks were made, you'd now make 1,000,000.
      That makes an O(log N) algorithm into O(N).

      Obviously it does not!

      As with any binary search algorithm, there is a previous step where you build the array: sorting something, loading it from disk, whatever. This operation is O(N) in the best case and usually O(N log N) because of the sorting.

      Once that array is built, you can perform binary searches over it at O(log N) cost. To amortize the cost of building the array, the number of binary searches has to be high (otherwise there are better algorithms).

      The algorithm I have proposed increases the cost of the setup step but does not change its complexity, because that step was already O(N) at best.

      Also, if the number of repeated strings is high, @start will be considerably smaller than @a, so fewer operations are performed per binary search. That means that if the number of binary searches to be carried out is high enough, my algorithm will actually be faster than any other attacking @a directly.
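      For example, with size = 1,000,000 and dups = 100 (the largest parameters in the benchmark below), @start has roughly 10,000 entries, so each binary search needs about 13 comparisons (log2 10,000) instead of about 20 (log2 1,000,000).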

      ... and some benchmarking code just to prove it is not worse:
      #!/usr/bin/perl
      use strict;
      use warnings;

      use File::Slurp qw(slurp);
      use Benchmark qw(cmpthese);

      sub binary_search {
          my ($str, $a) = @_;
          my $l = 0;
          my $h = @$a;
          while ($l < $h) {
              my $p = int (($l + $h) / 2);
              if ($a->[$p] lt $str) {
                  $l = $p + 1;
              }
              else {
                  $h = $p;
              }
          }
          $l
      }

      sub make_start {
          my $a = shift;
          my $last = $a->[0];
          my @start = (0);
          for my $ix (1..$#$a) {
              my $current = $a->[$ix];
              if ($current ne $last) {
                  push @start, $ix;
                  $last = $current;
              }
          }
          return \@start;
      }

      chomp (my @words = slurp '/usr/share/dict/words');
      @words = grep /^\w+$/, @words;

      for my $size (100, 1000, 100_000, 1_000_000) {
          for my $dups (3, 10, 100) {
              next unless $size > $dups;
              for my $reps (100, 100_000, 1_000_000) {
                  print "size: $size, dups: $dups, reps: $reps\n";

                  # generate data:
                  my @a = map $words[rand @words], 1..1+($size/$dups);
                  push @a, $a[rand @a] while @a < $size;
                  @a = sort @a;

                  cmpthese(-30, {
                      naive => sub {
                          my $ix;
                          $ix = binary_search($a[rand @a], \@a) for (1..$reps);
                      },
                      salva => sub {
                          my $ix;
                          my $start = make_start(\@a);
                          my @a_start = @a[@$start];
                          $ix = $start->[binary_search($a[rand @a], \@a_start)] for (1..$reps);
                      }
                  });
                  print "\n";
              }
          }
      }
      The parameters in the benchmarks are:
      • $size: the size of the array
      • $dups: average number of times any string is repeated in the array
      • $reps: number of binary searches to perform over one given array.

      Note also that this code only looks for the lowest index where a given string is found. The case described by the OP, where he also needs to find the highest index, is handled trivially in my algorithm without increasing its computational cost, but requires an additional binary search with the naive algorithm.
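      For reference, a sketch of that extra search the naive approach would need (an upper-bound variant of the binary_search sub above, using 'le' instead of 'lt'; the sub name is mine):

      sub binary_search_upper {
          my ($str, $a) = @_;
          my ($l, $h) = (0, scalar @$a);
          while ($l < $h) {
              my $p = int (($l + $h) / 2);
              if ($a->[$p] le $str) { $l = $p + 1 }   # 'le' instead of 'lt'
              else                  { $h = $p }
          }
          $l   # index just past the last occurrence of $str
      }

      # naive approach, two searches per lookup:
      # my $first = binary_search($str, \@a);
      # my $last  = binary_search_upper($str, \@a) - 1;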

      Here are the results I have gotten on my machine:

        Of course the array construction won't add to the time if you don't include it in the code that's timed.

        You assume the same list is always searched.

        I suppose a more precise analysis is O(log(N) + N/M), where M is the number of searches performed between modifications of the list. The number of such searches has to be proportional to the size of the list to get below O(N).
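        For example, with the numbers above (N = 1,000,000 and M = 10 searches between modifications), that is roughly 20 comparisons for the search itself plus 100,000 operations of amortized rebuild cost per search, so the rebuild term dominates unless M grows roughly in proportion to N.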