in reply to Array vs. Hash for sparsely integer-indexed data
I do think I understand what the notion was ... and it really does come down to how many values are actually present to be looked through, not what the domain of possible values is. Which, as far as I can see, the OP does not say.
Is “a sparsely-populated set” 10,000 entries, 100,000, or 100 million? It makes a gigantic difference. How many actual entries are there (regardless of the domain of values they could take), and how will they be retrieved? Is it ever necessary to iterate through them, or is access purely random? How often do insertions happen, and in what pattern?
You do like to pull out “everybody else’s idea is fantastically worse than mine,” e.g. “25 days vs. 1.3 seconds,” but, honestly, I don’t think anyone seriously benefits from such outlandish comparisons. Hash tables are easy but carry a hidden per-entry cost; bit vectors have a size determined more or less by the domain size rather than the entry count; and so on. All of these things ... all of the rest of us know, too. Most of the key decision factors of the actual problem to be solved are not in the OP. At this writing, puterboy has not clarified.
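To make that trade-off concrete, here is a minimal sketch using Perl's built-in vec() for the bit-vector case. The key values and domain size are invented for illustration; the point is only that the hash's memory scales with the number of entries present, while the bit vector's scales with the largest index used:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical numbers, for illustration only:
my $domain = 100_000_000;              # possible key range: 0 .. 1e8-1
my @keys   = (17, 42_000, 99_999_999); # a "sparsely populated" set

# Hash: one entry stored per key actually present.
my %seen_hash;
$seen_hash{$_} = 1 for @keys;

# Bit vector: one bit reserved per possible key up to the largest index.
my $seen_bits = '';
vec($seen_bits, $_, 1) = 1 for @keys;

# Both give O(1) membership tests:
print "hash says 42000 present\n" if $seen_hash{42_000};
print "bits say 42000 present\n"  if vec($seen_bits, 42_000, 1);

# The bit vector's size tracks the largest index used, not the count:
# 3 entries still cost ceil(100_000_000 / 8) = 12.5 MB here.
printf "bit vector is %d bytes for %d entries\n",
    length($seen_bits), scalar @keys;
```

With only a handful of entries spread over a huge domain, the hash wins on memory; with a dense enough population, the bit vector's flat 1 bit per slot wins. Which side of that line the OP is on is exactly what hasn't been stated.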
Replies are listed 'Best First'.
Re^2: Array vs. Hash for sparsely integer-indexed data
by BrowserUk (Patriarch) on Jan 26, 2013 at 01:39 UTC
by roboticus (Chancellor) on Jan 26, 2013 at 14:39 UTC
Re^2: Array vs. Hash for sparsely integer-indexed data
by Lotus1 (Vicar) on Jan 26, 2013 at 16:10 UTC
Re^2: Array vs. Hash for sparsely integer-indexed data
by puterboy (Scribe) on Jan 29, 2013 at 03:55 UTC