Still puzzled by floats
by leriksen (Curate) on Oct 07, 2002 at 05:34 UTC
leriksen has asked for the
wisdom of the Perl Monks concerning the following question:
This question flows from an earlier problem I asked about summing to zero. Thanks for the replies.
Reading the perl FAQ, they state
However, 19.95 can't be precisely represented as a binary floating point number,...,
Why not? Surely there is a bit pattern in the IEEE float format that precisely represents 19.95 (1.995E1). If a float is stored in, say, 32 bits on a particular platform, some bits are for the mantissa and some are for the exponent and sign. If 24 bits store the mantissa, isn't there a bit pattern from those 24 bits that precisely represents 1.995? It's not like it's a lot of precision.
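For what it's worth, you can look at the stored value directly. The behaviour is a property of IEEE 754 arithmetic rather than of Perl, so this sketch uses Python's decimal and struct modules to inspect the same doubles Perl uses (Perl's printf "%.25f", 19.95 shows the equivalent thing):

```python
from decimal import Decimal
import struct

# Exact decimal value of the 64-bit double nearest to 19.95:
# a long string of digits, not 19.95 exactly.
print(Decimal(19.95))
print(Decimal(19.95) == Decimal("19.95"))  # False: 19.95 is not representable

# Nor does a 32-bit float (24-bit mantissa) hold it: round-tripping
# through float32 lands on a different value again.
f32 = struct.unpack("f", struct.pack("f", 19.95))[0]
print(f32 == 19.95)  # False
```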
If that is so, why can't the conversion library get that bit pattern exact? The machine can represent 0.15 to however many decimal places it stores, so why can't the conversion library create the correct representation? I understand that _expressions_ will potentially have inaccuracies, like exp(ln(x)) resulting in something very close to but not quite x. But a string literal should (in my utopian view) convert precisely. The only exception to this would be a string literal expressing a value beyond the precision of the platform, OS or perl (the most restrictive wins) - e.g. "0.1E-4096" will _probably_ end up being represented as 0.0E0.
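Both halves of this can be demonstrated. String-to-float conversion does give the *nearest* representable double, but for most decimal strings even the nearest one is not exact; and the out-of-range guess above is right, the literal underflows to zero. Again a Python illustration, since the doubles are the same IEEE 754 ones Perl uses:

```python
# Conversion is correctly rounded: x is the double closest to 0.15,
# but printing more digits reveals it is not 0.15 itself.
x = float("0.15")
print(f"{x:.20f}")         # near 0.15, but not 0.15000000000000000000

# A literal beyond the exponent range underflows to zero, as guessed.
print(float("0.1E-4096"))  # 0.0
```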
Is it related to the binary storage and summing fractional powers of 2, e.g. a*2^-1 + b*2^-2 + c*2^-3 + ... where a, b, c, ... are 1 or 0?
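That sum can be made concrete with a greedy expansion: take each power 2^-k that still fits under 0.15. A sketch (Python, variable names mine) shows the set bits fall into a repeating 0011 pattern, so the expansion never terminates and a remainder is always left over:

```python
# Greedy binary expansion of 0.15 in negative powers of two.
target = 0.15
val, bits = 0.0, []
for k in range(1, 25):              # first 24 fractional bits
    if val + 2.0**-k <= target:
        val += 2.0**-k
        bits.append(k)

print(bits)   # set bits 3, 6, 7, 10, 11, 14, ... -- the 0011 pattern repeats
print(val)    # close to 0.15, but never equal: a remainder always survives
```

Each partial sum here is exact (it has at most 24 significant bits), so the gap between val and 0.15 is genuinely the tail of the infinite expansion, not accumulated rounding error.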
Does anyone have a view on why string literal conversion of seemingly innocuous floats has inaccuracies?