I do think that I understand what the notion was ... and it really does come down to how many values are actually present to be looked through, not what the domain of possible values is. Which the OP, as far as I read, does not say.
Is “a sparsely-populated set” 10,000 entries, 100,000, or 100 million? It makes a gigantic difference. How many actual entries are there (regardless of the domain of possible values they could take), and how are they going to be retrieved? Is it ever necessary to iterate through them, or is it pure random access? How often do insertions happen, and how do they happen?
You do like to pull “everybody else’s idea is fantastically worse than mine,” e.g. “25 days vs. 1.3 seconds,” but, honestly, I don’t think anyone seriously benefits from such outlandish comparisons. Hash tables are easy but can carry hidden costs; bit vectors have a size more-or-less determined by the domain size, and so on. All of these things ... all of the rest of us know, too. Most of the key decision factors of the actual problem to be solved are not in the OP. At this writing, puterboy has not clarified.
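To make that size trade-off concrete, here is a minimal sketch. All the numbers are invented for illustration (the OP gives none): a bit vector's footprint is fixed by the domain of possible values, while a hash set's grows with the entries actually present.

```python
# Hypothetical numbers, purely for illustration -- not from the OP.
DOMAIN = 100_000_000   # possible values: 0 .. DOMAIN-1
ENTRIES = 10_000       # values actually present ("sparsely populated")

# Bit vector: one bit per POSSIBLE value, regardless of how many are set.
bit_vector = bytearray((DOMAIN + 7) // 8)   # ~12.5 MB up front, always

def bv_add(v):
    bit_vector[v >> 3] |= 1 << (v & 7)

def bv_has(v):
    return bool(bit_vector[v >> 3] & (1 << (v & 7)))

# Hash set: storage scales with the entries present, not the domain.
hash_set = set()

for v in range(0, ENTRIES * 1000, 1000):    # 10,000 scattered values
    bv_add(v)
    hash_set.add(v)

print(len(bit_vector))   # domain-sized: 12,500,000 bytes
print(len(hash_set))     # entry-sized: 10,000 members
```

With 10,000 entries in a 100-million-value domain, the hash set wins on memory; flip the ratio (say, 80 million entries) and the bit vector wins, with O(1) membership tests either way. Which is exactly why the actual counts matter more than the data structure debate.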