|No such thing as a small change|
If you're just looking for an estimate, why not work on a manageable and reasonably sampled subset of the data?
Because, according to (my application of) the formulae I found online, I should be seeing false positives after the low tens of millions of inserts. But my testing showed none after 1e6, 10e6, 100e6, 200e6 ...
I.e., my sampling showed that the calculations were wrong, but provided no information upon which to draw any further conclusions.
The nature of the beast is that I needed to run the experiment for 3/4 of a billion iterations in order to get my first piece of actual information from it.
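For reference, the formula most commonly cited for this kind of structure is the standard Bloom-filter false-positive approximation, p ≈ (1 − e^(−kn/m))^k. Assuming that is the formula being applied (the post doesn't state it, and the filter size m and hash count k below are purely illustrative), a quick sketch of the calculation looks like:

```python
import math

def bloom_fp_rate(n, m, k):
    """Commonly cited approximation for a Bloom filter's false-positive
    probability after n inserts into a filter of m bits using k hash
    functions: p ~ (1 - e^(-k*n/m))^k."""
    return (1.0 - math.exp(-k * n / m)) ** k

# Hypothetical parameters for illustration only -- the actual filter
# size and hash count used in the experiments are not stated above:
m = 2 ** 32          # a 4-gigabit filter
k = 7                # 7 hash functions
for n in (1_000_000, 10_000_000, 100_000_000, 200_000_000):
    p = bloom_fp_rate(n, m, k)
    print(f"n={n:>12,}  p={p:.3e}  expected false positives ~ {n * p:.2f}")
```

With parameters in this ballpark, the predicted count of false positives climbs from effectively zero into the thousands somewhere in the low hundreds of millions of inserts, which is the kind of prediction that observing zero false positives at 200e6 would contradict.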
This started out as verifying my gut feeling with respect to the big strings. That led to looking for a mechanism capable of verifying uniqueness with the huge numbers involved. That led -- through various documented mechanisms that proved inadequate -- to coming up with the current scheme, which appears to work substantially better than all the documented mechanisms.
This post is about trying to come up with math that fits the experimental evidence.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.