Also not to be excluded entirely is ... uhh ... a plain array of integers that you search sequentially each time. Seriously. If you've got, say, 10,000 integers to look through, they'll take up about 40 KB (at 4 bytes apiece), all sitting in memory all the time. You could loop through that on every lookup. You could even loop through ten times that many and still be, well, okay. I'd suggest estimating how many unique values you're going to have, then setting up a few short test runs with randomly generated data to get a feel for how the various alternatives pan out. Then just pick one and go.
in reply to Array vs. Hash for sparsely integer-indexed data