I like this approach, but I'll need to think it over. I may even get off my butt and write some code to actually benchmark it.
My concern is that the chances of collisions vary not only with the number of files, but with how they're weighted. To use a contrived example, say we're picking 2 out of 300 files, and one of those files is weighted to contain 98% of the hitspace. You'll probably pick that one the first time, and then re-pick it quite a bit before you actually get something else.
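To make that concrete, here's a quick sketch of the worst case (names and numbers made up, and I'm using plain rejection sampling as a stand-in for whatever the real picker does):

```python
import random

# Contrived setup: 300 files, one holding 98% of the hitspace.
weights = [0.98] + [0.02 / 299] * 299

def pick_two_rejection(weights, rng):
    """Pick 2 distinct indexes by weighted draw + rejection; count wasted re-picks."""
    chosen = set()
    retries = 0
    while len(chosen) < 2:
        i = rng.choices(range(len(weights)), weights=weights, k=1)[0]
        if i in chosen:
            retries += 1  # collided with an already-picked file
        else:
            chosen.add(i)
    return chosen, retries

rng = random.Random(42)
avg = sum(pick_two_rejection(weights, rng)[1] for _ in range(1000)) / 1000
print(avg)  # dozens of retries per pair on average, almost all re-hits of file 0
```

Once the heavy file is picked, each subsequent draw lands on it again with probability 0.98, so you expect on the order of 0.98/0.02 ≈ 49 wasted draws before the second pick succeeds.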
But for general use, this could definitely be an improvement. Maybe a hybrid approach: pick a file, and if it contains below a certain percentage of the hitspace, just leave it alone and continue. If it's above that percentage, splice it out (keeping in mind that you'd need to re-calculate previously saved indexes: if you've flagged index 4 as one to skip over and then remove index 3, that flag needs to become index 3 instead of index 4, and so on).
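Something like this is what I have in mind (the 10% cutoff and all the names are made up, just to show the index re-adjustment):

```python
import random

THRESHOLD = 0.10  # hypothetical cutoff: splice out files above 10% of the hitspace

def hybrid_pick(files, weights, k, rng):
    """Pick k distinct files: low-weight picks are flagged and skipped on
    re-pick, heavy picks are spliced out with saved indexes re-adjusted."""
    files = list(files)
    weights = list(weights)
    skip = set()   # indexes already picked but left in place
    picked = []
    total = sum(weights)
    while len(picked) < k:
        i = rng.choices(range(len(files)), weights=weights, k=1)[0]
        if i in skip:
            continue  # cheap re-pick; this file is low-weight anyway
        picked.append(files[i])
        if weights[i] / total > THRESHOLD:
            # Splice the heavy file out, then shift every saved index
            # above it down by one so the flags still point at the
            # right files (the index-4 -> index-3 case).
            total -= weights[i]
            del files[i]
            del weights[i]
            skip = {j - 1 if j > i else j for j in skip}
        else:
            skip.add(i)  # leave it in the list, just flag it
    return picked

rng = random.Random(1)
files = [f"f{n}" for n in range(300)]
weights = [0.98] + [0.02 / 299] * 299
print(hybrid_pick(files, weights, 2, rng))
```

The heavy file gets spliced so it can't burn dozens of re-picks, while the long tail of tiny files never pays the cost of recomputing indexes.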