|There's more than one way to do things|
You are tackling the wrong problem.
Wherever this data comes from, whether it arrives already sorted or is generated internally, you should not store it as a flat array full of duplicates.
Instead, load a parallel pair of structures: an array and a hash. Push one copy of each unique element into the array (it will already be in order if the input or derivation is sorted), and for each duplicate, increment the hash value keyed by that unique element.
You can then binary-search the uniques array to locate the bounds of a range, and iterate between those bounds, summing the counts from the hash, to arrive at the inclusive or exclusive total.
This both speeds up the search, by removing the duplicates from the array being searched, and reduces memory requirements, by not storing them.
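A minimal sketch of the idea, here in Python with a dict standing in for the hash (the function names and the sample data are mine, purely for illustration):

```python
from bisect import bisect_left, bisect_right

def build_index(data):
    """Collapse a stream with duplicates into a sorted array of
    uniques plus a hash of per-element counts."""
    counts = {}
    for item in data:
        counts[item] = counts.get(item, 0) + 1
    uniques = sorted(counts)          # one copy of each element, in order
    return uniques, counts

def range_count(uniques, counts, lo, hi, inclusive=True):
    """Binary-search the uniques array for the bounds, then sum the
    counts between them for the inclusive or exclusive total."""
    if inclusive:
        i = bisect_left(uniques, lo)
        j = bisect_right(uniques, hi)
    else:
        i = bisect_right(uniques, lo)
        j = bisect_left(uniques, hi)
    return sum(counts[u] for u in uniques[i:j])

data = [1, 2, 2, 2, 3, 5, 5, 7, 7, 7, 7, 9]
uniques, counts = build_index(data)
print(range_count(uniques, counts, 2, 7))         # inclusive [2, 7] -> 10
print(range_count(uniques, counts, 2, 7, False))  # exclusive (2, 7) -> 3
```

The binary searches cost O(log u) where u is the number of uniques, so the search time depends on the number of distinct values rather than the total (duplicated) element count.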
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.