But that was not my goal. My goal was to benchmark the filtering/processing methods of grep, map, and the smartmatch operator (~~) against the same data set.
That implies you have an application that filters values against an array and is currently too slow, so you chose to benchmark alternatives. That's good.
But rather than benchmarking the actual application, you made up this 'unique random number selection' problem and used it as the basis of your benchmark. That's less good.
The chances are that if you posted a benchmark for the actual application, then one of the monks would see an alternative approach to that application that would similarly avoid the need to do O(N) processing of a huge list.
For example, for simple unique filtering of small lists of values, using a hash is way more efficient:
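As a minimal sketch of that hash approach (the sample data here is made up for illustration), a %seen hash turns the duplicate check into a single O(1) lookup per element instead of a scan of the whole array:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample input with duplicates.
my @list = ( 3, 1, 4, 1, 5, 9, 2, 6, 5, 3 );

# %seen records values already encountered; grep keeps only
# the first sighting of each value (one pass, O(N) overall).
my %seen;
my @unique = grep { !$seen{$_}++ } @list;

print "@unique\n";    # 3 1 4 5 9 2 6
```

Compare that with the naive approach of grepping the output array for each candidate value, which rescans the array every time and degrades to O(N*M) as the list grows.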