Suppose you have a hash which lists relative frequencies of occurrence for the various possible die rolls:
my %bias = (
    1 => 3.1,
    2 => 2.0234,
    3 => 1.7,
    4 => 1.542232,
    5 => 1.321249563,
    6 => 1.0142,
);
So, for example, the ratio of 1's to 3's in a properly randomized output set of sufficient size would approach 3.1/1.7. Well, I've thought of a few ways to get this result, but none of them seems quite right; each trades off efficiency, elegance, or correctness.
One method is to build a large array, with contents having frequencies approximating those above, and then randomly choose elements from this array. This, however, is imprecise, because of the rounding error from using the array length as your divisor. You can sacrifice memory for extra precision, but only to a point. Precision beyond 5 decimal digits quickly becomes a practical impossibility.
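A minimal sketch of that array-building method (the scale factor of 10_000 and the pool-building code are my own choices, not a prescribed implementation):

```perl
use strict;
use warnings;

my %bias = (
    1 => 3.1,
    2 => 2.0234,
    3 => 1.7,
    4 => 1.542232,
    5 => 1.321249563,
    6 => 1.0142,
);

# Scale each weight up to an integer count of slots; a larger
# $scale means less rounding error, but a bigger array.
my $scale = 10_000;
my @pool;
for my $face ( keys %bias ) {
    push @pool, ($face) x int( $bias{$face} * $scale + 0.5 );
}

# A roll is then a single O(1) array lookup:
my $roll = $pool[ rand @pool ];
```

The precision ceiling the paragraph mentions shows up directly here: every extra decimal digit of precision multiplies the size of @pool by ten.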
Another alternative is to create a list which gives the cumulative weight at each number:
my @distribution = (
    3.1,            # face 1
    5.1234,         # + face 2
    6.8234,         # + face 3
    8.365632,       # + face 4
    9.686881563,    # + face 5
    10.701081563,   # + face 6
);
Then, you would do a binary search to find the first element greater than rand $distribution[-1] (the last element being the total weight). Yuck! That's O(log n) per roll; I think we deserve an O(1) algorithm here, don't you?
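For concreteness, here is a sketch of that cumulative-list method with the binary search spelled out (the roll() helper and the numeric sort of the faces are my own additions):

```perl
use strict;
use warnings;

my %bias = (
    1 => 3.1,
    2 => 2.0234,
    3 => 1.7,
    4 => 1.542232,
    5 => 1.321249563,
    6 => 1.0142,
);

# Build the cumulative weights in face order.
my ( @faces, @cum );
my $total = 0;
for my $face ( sort { $a <=> $b } keys %bias ) {
    $total += $bias{$face};
    push @faces, $face;
    push @cum,   $total;
}

# Binary search for the first cumulative weight greater than
# a uniform draw in [0, $total).
sub roll {
    my $r = rand $total;
    my ( $lo, $hi ) = ( 0, $#cum );
    while ( $lo < $hi ) {
        my $mid = int( ( $lo + $hi ) / 2 );
        if ( $cum[$mid] <= $r ) { $lo = $mid + 1 }
        else                    { $hi = $mid }
    }
    return $faces[$lo];
}

my $face = roll();
```

No memory is wasted and no precision is lost, at the cost of O(log n) comparisons per roll.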
So, this is my conundrum of the moment. Either I'm missing something obvious, or I'm banging my head against Von Neumann's headstone. I'd like to know which it is :)