Reduced memory, yes; a speed-up depends.
If he needs the number of different(!) elements, there would be a tremendous speed-up, but then the hash would be superfluous.
If he needs the total number of elements, he would have to sum up the counts of all distinct elements between $beg and $end instead of just subtracting $beg from $end. Depending on the data set, this additional step could eat away any savings from the binary search needing a few steps fewer on the smaller array.
To be precise: with d = average number of duplicates per value, each of the two binary searches saves about log2(d) steps on the deduplicated array, while the summing step costs one addition per distinct element in the range. So it must be 2*log2(average number of duplicates) > (average number of distinct elements in a search range) to get a speed-up.
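To make the trade-off concrete, here is a minimal Python sketch of the two approaches. The names (dedup, counts, count_in_range) are my own for illustration; it assumes a sorted deduplicated list with a parallel array of occurrence counts:

```python
from bisect import bisect_left, bisect_right

def count_in_range(dedup, counts, lo, hi):
    """Total elements with value in [lo, hi], using the deduplicated array.

    dedup  - sorted list of distinct values
    counts - counts[i] = how often dedup[i] occurs
    Two O(log D) searches, plus the extra summing step:
    O(k) for the k distinct values that fall in the range.
    """
    beg = bisect_left(dedup, lo)
    end = bisect_right(dedup, hi)
    return sum(counts[beg:end])

def count_in_range_full(sorted_full, lo, hi):
    """Same count on the original array with duplicates kept:
    just the difference of the two search positions, no summing."""
    return bisect_right(sorted_full, hi) - bisect_left(sorted_full, lo)

# Both give 7 for the range [2, 5] on this data:
full = [1, 1, 1, 2, 2, 3, 5, 5, 5, 5]
print(count_in_range([1, 2, 3, 5], [3, 2, 1, 4], 2, 5))
print(count_in_range_full(full, 2, 5))
```

The sum over counts[beg:end] is exactly the step that can eat the savings: the searches got shorter by about log2(d) steps each, but the loop costs end - beg additions.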