Not sure. You'd probably need some way to estimate whether your sampling distribution is uniform with respect to the indices of the records you draw. Also, you could check whether the average length of the records in your sample jibes with the average length of records in the entire population.
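For that second check, here's a minimal sketch of the length comparison, assuming newline-delimited records; the function names and the 10% tolerance are just illustrative choices:

```python
def mean_length(records):
    """Average byte length of a collection of records."""
    records = list(records)
    return sum(len(r) for r in records) / len(records)

def looks_unbiased(path, sample, tolerance=0.10):
    """Rough check: is the sample's mean record length within
    `tolerance` (as a fraction) of the whole file's mean record length?"""
    with open(path, "rb") as f:
        population = [r for r in f.read().split(b"\n") if r]
    pop_mean = mean_length(population)
    return abs(mean_length(sample) - pop_mean) <= tolerance * pop_mean
```

Note this reads the whole file to get the population mean, which is fine as a one-off sanity check even though it defeats the point of sampling for routine use.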
My other thoughts overnight had to do with the pathological case presented by bobf:
- To avoid the scenario where you pick the same record 90% of the time if one record is 90% of the file, you need to avoid already-selected records.
- To give the large record itself a fair chance of being selected, you need to perform the wrapping suggested by bcrowell2, that is, selecting the first record if you land inside the last.
Taken together, these make even the extreme case just as amenable to this method as any other. If you remember which records you've hit and do not re-sample them, you're simply omitting a segment of the number line from a uniform distribution. The distributions on either side are still uniform, i.e., random.
So even if you are hitting the big record 90% of the time, you ignore it after the first time, and the other 10% of the hits select records as normal. Since any record at all can follow the 90%-length record, that's fair. And since the length of the last record has nothing to do with the length of the first, the first record has the same likelihood of being selected as any other record.
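Putting both rules together, here's a minimal sketch of the whole approach, assuming newline-delimited records; the function name and the choice to track selections by record start offset are my own:

```python
import random

def sample_records(path, k, seed=None):
    """Draw k distinct records from a newline-delimited file by seeking
    to random byte offsets. Landing inside a record selects the *next*
    record; landing inside the last record wraps to the first, and
    already-selected records are ignored, per the two rules above.
    Assumes the file is non-empty and holds at least k records."""
    rng = random.Random(seed)
    chosen = {}                      # record start offset -> record bytes
    with open(path, "rb") as f:
        size = f.seek(0, 2)          # file size in bytes
        while len(chosen) < k:
            f.seek(rng.randrange(size))
            f.readline()             # discard the partial record we landed in
            start = f.tell()
            record = f.readline()
            if not record:           # landed inside the last record: wrap
                f.seek(0)
                start = 0
                record = f.readline()
            if start not in chosen:  # skip records we've already picked
                chosen[start] = record.rstrip(b"\n")
    return list(chosen.values())
```

Tracking selections by start offset rather than by content means two identical records at different positions still count as distinct, which seems like the right behavior here.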