In big-O notation, O(log N - 1) and O(log N) are equivalent: log N - 1 = log(N/2), which differs from log N only by an additive constant, so they denote the same complexity order.
However, that does not mean the two algorithms are equally efficient. In fact, they are not: Re^3: Modified Binary Search.
| [reply] |
I'm aware that the theorists will categorise them as having the same order of complexity, but when additional conditional checks are required, the complexity has increased.
And at some point it is necessary to decide whether you need to find the lowest value greater than or equal to the search term, or the highest value less than or equal to it. That adds to the actual, real-world complexity of the code.
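The two variants mentioned above are easy to conflate. Here is a minimal sketch (my own code, not from the thread; the helper names lower_bound and upper_floor are hypothetical) of both, assuming a sorted array of numbers:

```perl
use strict;
use warnings;

# lower_bound: index of the first element >= $target
# (returns scalar @$aref if every element is smaller).
sub lower_bound {
    my ($aref, $target) = @_;
    my ($lo, $hi) = (0, scalar @$aref);
    while ($lo < $hi) {
        my $mid = int(($lo + $hi) / 2);
        if ($aref->[$mid] < $target) { $lo = $mid + 1 }
        else                         { $hi = $mid     }
    }
    return $lo;
}

# upper_floor: index of the last element <= $target
# (returns -1 if every element is larger).
sub upper_floor {
    my ($aref, $target) = @_;
    my ($lo, $hi) = (0, scalar @$aref);
    while ($lo < $hi) {
        my $mid = int(($lo + $hi) / 2);
        if ($aref->[$mid] <= $target) { $lo = $mid + 1 }
        else                          { $hi = $mid     }
    }
    return $lo - 1;
}

my @sorted = (1, 3, 5, 7, 9);
print lower_bound(\@sorted, 4), "\n";   # 2 (element 5, lowest >= 4)
print upper_floor(\@sorted, 4), "\n";   # 1 (element 3, highest <= 4)
```

The two loops differ only in whether the comparison is strict, which is exactly the kind of extra conditional decision the post is talking about.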
I know you know this--as your many Sort::* packages assert--in Perl, it is the number of source-level operations that is most relevant to efficiency:
use List::Util qw[ sum ];
use Benchmark qw[ cmpthese ];

@a = 1 .. 1e6;
cmpthese -1, {
a=>q[ my $total = sum @a; ],
b=>q[ my $total = 0; $total += $_ for @a ],
};
Rate b a
b 10.8/s -- -73%
a 40.9/s 277% --
Identical algorithms but a significant difference in efficiency.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
| [reply] [d/l] |
A well implemented binary search is O(logN) worst case, but averages O(logN - 1).
That's wrong on multiple levels. First of all, your notation is sloppy: O(log N) and O(log N - 1) are equivalent classes; any function that is in one class is also in the other.
I assume you mean that a well implemented binary search needs only log N - 1 comparisons on average. But that's not true either: it only holds if you search exclusively for elements that are present. Each unsuccessful search takes ceil(log N) or floor(log N) comparisons.
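This is easy to check empirically. Below is a sketch (my own code, not from the thread; the helper bsearch_count is hypothetical) that counts three-way comparisons per search over a 1024-element sorted array, averaging separately over hits and misses:

```perl
use strict;
use warnings;
use List::Util qw( sum );

# Classic binary search that also returns how many three-way
# comparisons it performed before terminating.
sub bsearch_count {
    my ($aref, $target) = @_;
    my ($lo, $hi, $cmps) = (0, $#$aref, 0);
    while ($lo <= $hi) {
        my $mid = int(($lo + $hi) / 2);
        ++$cmps;
        my $c = $target <=> $aref->[$mid];
        if    ($c == 0) { return ($mid, $cmps) }
        elsif ($c >  0) { $lo = $mid + 1 }
        else            { $hi = $mid - 1 }
    }
    return (-1, $cmps);   # not found
}

my @a = map { 2 * $_ } 0 .. 1023;                  # 1024 even numbers
my @hit  = map { (bsearch_count(\@a, 2 * $_    ))[1] } 0 .. 1023;  # present
my @miss = map { (bsearch_count(\@a, 2 * $_ + 1))[1] } 0 .. 1023;  # absent

# Hits average a little under log2(N); misses always probe to the
# bottom of the tree, costing floor(log2 N) or ceil(log2 N) comparisons.
printf "avg hit:  %.2f comparisons\n", sum(@hit)  / @hit;
printf "avg miss: %.2f comparisons\n", sum(@miss) / @miss;
```

For N = 1024 (log2 N = 10), every miss costs 10 or 11 comparisons, while the hit average comes out lower, which is the distinction being made above.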
| [reply] [d/l] [select] |