in reply to Re^3: [OT] The statistics of hashing. (SOLVED) in thread [OT] The statistics of hashing.
I will do my very best to try and understand the formula
Perhaps this will help some.
Below is some fairly simple code that does the precise calculation of the odds of a collision in a single hash (and compares those calculations with the formula roboticus proposed). I came up with a simpler implementation than I expected to. I didn't even try to implement this at first, partly because I expected it to be more cumbersome, but mostly because I knew it would be impractical for computing odds for such a large number of insertions: it consumes O($inserts) memory and O($inserts**2) CPU.
$|= 1;
my( $b )= ( @ARGV, 2**32 ); # Total number of bits in the hash.
my $i= 1;                   # Number of insertions done so far.
my @odds = 1;               # $odds[$s] == Odds of there being $s+1 bits set
while( 1 ) {                # Just hit Ctrl-C when you've seen enough
    my $exp= $b*( 1 - exp(-$i/$b) );
    my $avg = 0;
    $avg += $_ for map $odds[$_]*($_+1), 0..$#odds;
    my $err = sprintf "%.6f%%", 100*($exp-$avg)/$avg;
    print "$i inserts, $err: avg=$avg exp=$exp bits set\n"
        if $i =~ /^\d0*$/;
    # Update @odds in preparation for the next value of $i:
    for my $s ( reverse 0..$#odds ) {
        $odds[$s+1] += $odds[$s]*($b-$s-1)/$b;
        $odds[$s]   *= ($s+1)/$b;
    }
    $i++;
}
$i tracks the number of insertions done so far. $odds[$s] represents the odds of there being $s+1 bits set in the hash (after $i insertions). $avg is an average of these values of $s (1..@odds) weighted by the odds. But, more importantly, it is also the odds of getting a single-hash collision when inserting (after $i insertions), except multiplied by $bits ($b). I multiply it and 1-exp(-$i/$b) by $b to normalize to the expected number of set bits rather than the odds of a collision, because humans have a much easier time identifying a number that is "close to 14" than a number that is "close to 14/2**32".
$odds[1] turns out to exactly match (successively) the values from the birthday problem. For low numbers of $inserts, this swamps the calculation of $avg (the other terms just don't add up to a significant addition), which is part of why I was computing it for some values in my first reply. (Since you asked about that privately.)
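As an aside, the birthday-problem values mentioned above are easy to reproduce directly. The following is my own sketch, not code from the post; the 2**32 slot count is just the same example figure used above:

```perl
use strict;
use warnings;

# The classic birthday-problem probability that at least one collision
# occurs among $n insertions into $b equally likely slots, computed as
# 1 minus the probability that all insertions land in distinct slots.
my $b = 2**32;            # number of slots (bits in the hash)
my $p_all_distinct = 1;
for my $n ( 1 .. 10 ) {
    # Probability that insertion $n avoids the $n-1 slots already used,
    # given that no collision has happened so far:
    $p_all_distinct *= ( $b - ($n - 1) ) / $b;
    printf "%2d inserts: P(collision) = %.3e\n", $n, 1 - $p_all_distinct;
}
```

For small $n this is dominated by the first-order term, C($n,2)/$b, which is why the numbers stay so close to a simple quadratic in $n.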
I have yet to refresh my memory of the exact power series expansion of exp($x), so what follows is actually half guesses, but I'm pretty confident of them based on vague memory and observed behavior.
For large $bits, 1-exp(-$inserts/$bits) ends up being close to $inserts/$bits because 1-exp(-$inserts/$bits) expands (well, "can be expanded") into a power series where $inserts/$bits is the first term and the next term depends on 1/$bits**2, which is so much smaller that it doesn't matter much (and neither do any of the subsequent terms, even when added together).
On the other hand, for large values of $inserts, 1-exp(-$inserts/$bits) is close to $avg because the formula for $avg matches the first $inserts terms of the power series expansion.
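The power-series behaviour described above can be checked numerically. This is a small sketch of my own (not from the original post); the variable names and the sample value of $x are illustrative only:

```perl
use strict;
use warnings;

# Compare 1 - exp(-x) against its power-series expansion
#   x - x**2/2! + x**3/3! - ...
# truncated after one term and after three terms, for a small x
# standing in for $inserts/$bits with $bits large.
my $x = 1 / 2**20;                     # stand-in for $inserts/$bits
my $exact = 1 - exp(-$x);
my $one   = $x;                        # first term only
my $three = $x - $x**2/2 + $x**3/6;    # first three terms
printf "exact=%.15e, first-term err=%.3e, three-term err=%.3e\n",
    $exact, $one - $exact, $three - $exact;
```

The first-term error is on the order of $x**2/2, so for $x around 1/2**20 even the one-term approximation is already good to roughly twelve decimal places, which is why the single leading term dominates for large $bits.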
I hope the simple code makes it easy for you to see how these calculations match the odds I described above. But don't hesitate to ask questions if the correspondence doesn't seem clear to you. Running the code shows how my calculations match roboticus' formula. Looking up (one of) the power series expansions for computing exp($x) should show a match with the values being computed for @odds, though there might be some manipulation required to make the match apparent (based on previous times I've done such work, decades ago).
Re^5: [OT] The statistics of hashing. (dissection) by BrowserUk (Pope) on Apr 03, 2012 at 15:59 UTC 
Perhaps this will help some.
I seriously hope this will not offend you, but suspect it will.
Simply put, your post does not help me at all.
I am a programmer, not a mathematician, but given a formula, in a form I can understand(*), I am perfectly capable of implementing that formula in code. And perfectly capable of coding a few loops and print statements in order to investigate its properties.
What I have a problem with, as evidently you do too, is deriving those formulae.
Like you (according to your own words above; there is nothing accusatory here), my knowledge of calculus is confined to the coursework I did at college some {mumble mumble} decades ago. Whilst I retain an understanding of the principles of integration, and recall some of its uses, the details are shrouded in a cloud of disuse.
"Use it or lose it" is a very current, and very applicable, aphorism.
The direction my career has taken me means that I've had no more than a couple of occasions when calculus would have been useful. And on both those occasions, I succeeded in finding "a man that can", who could provide me with an understandable formula, and thus, I achieved my goal without having to relive the history of mathematics.
(*) A big part of the problem is that mathematicians not only have a nomenclature, which is necessary; they also have 'historical conventions', which are not, and the latter are the absolute bane of the layperson's life in trying to understand the mathematician's output.
There you are, happily following along, when you reach a text that goes something like this:
We may think intuitively of the Riemann sum: Ʃ^{b}_{a} f(x) dx
as the infinite sum: f(x_{0})dx + f(x_{1})dx + ... + f(x_{H-1})dx + f(x_{H})(b - x_{H})
Where did H come from? Where did a disappear to? Is H (by convention) == to b - a?
For the answer to this and other questions, tune in next week ..... to the last 400 (or sometimes 4000) years of the history of math
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks  Silence betokens consent  Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
What I have a problem with, as evidently you do too, is deriving those formulae.
Well, I'm sorry you didn't understand, but I wasn't trying to help you convert between programs and formulae (which is why I left that step to you). I was explaining how to derive the most interesting part of the formula at hand (well, verify the derivation in order to understand it, since the formula was already provided).
I was not trying to teach you how to do integration nor how integration correlates to this sampling problem. I have no interest in trying to teach beginning calculus via a web forum.
But it seems your expressed desire "to understand" does not extend to trying to understand 1-exp(-$inserts/$slots). Perhaps my explanation will assist others on that point.
But my explanation also serves as a response to more than one private request you made to me, despite your apparent lack of interest now.
I seriously hope this will not offend you, but suspect it will.
I'm not sure why.
But it seems your expressed desire "to understand" does not extend to trying to understand 1-exp(-$inserts/$slots)
You really are a supercilious sod, aren't you?
Why would I need help with that bit? It is already code.
How about this. When I respond to someone asking for their help with materials they posted, you keep your often supercilious, rarely helpful, always patronising nose out of it. Is that a plan?