in reply to Re: what did I just see..?
in thread what did I just see..?

Sorry, sectokia, but I don't think that adding twice the machine_epsilon is the cure-all you really think it is.

Per the documentation, machine_epsilon is the maximum relative error ("Machine::Epsilon - The maximum relative error while rounding a floating point number"), but you are using it in an absolute fashion, which means you are not scaling the error properly. The possible error is different for 1_000_000 vs 1 vs 0.000_001... it's even different for 2 vs 1 vs 1/2. So that won't help you "correct" the cumulative floating point errors. Since you are using values from 0.8 down to 0.1, you will be in the ranges [0.5,1.0), [0.25,0.50), [0.125,0.25), and [0.0625,0.125), which actually have four different ULP sizes.

Besides that, the cumulative errors will keep getting bigger, until your x+2*epsilon is no longer enough to increase it. Starting at 0.8 and decreasing by 0.01 each loop, by 70 iterations, the stringification of the "real" number and the stringification of your x+2*epsilon will not be matching each other.

The example in the spoiler shows both of those issues in more detail (where "diff" represents the real number, start - n*step, "x" represents the multiple individual subtractions, and "sectokia" represents the x+2*epsilon).

Further, your x+2*epsilon assumes that your subtraction step size is slightly bigger than the real value; that is true for a step of 0.1 (1.00000000000000006e-01) or 0.01 (1.00000000000000002e-02), but for a step of 0.03 (2.99999999999999989e-02), the step size is smaller, so now your adjustment takes the string in the wrong direction. (Not shown in the spoiler code.)

Replies are listed 'Best First'.
Re^3: what did I just see..?
by sectokia (Scribe) on Mar 24, 2021 at 10:17 UTC

    For the range 0 to 1, then epsilon will always be greater than the error (as it would only scale smaller), but of course you are correct in that it should be scaled both up and down for a normalized solution.

    You are also right about needing to apply it to each subtraction operation.

    I don't agree with the bit around it being in the wrong direction if the step happens to be just under the desired ideal value. Print rounds down. If the float is +/- epsilon from ideal, then adding an epsilon brings it into the range of 0 to +2 epsilon from ideal, which will round down to ideal. It doesn't matter if you started +ve or -ve from ideal.

      For the range 0 to 1, then epsilon will always be greater than the error

      So while we have the example of 0.8 down to 0.1, the difference between the epsilon and the ULP won't be huge. But who knows whether the OP will eventually go beyond that range, and get upset when numbers near 2e-16 start displaying as near 4e-16. That's one of the reasons I was cautioning against applying epsilon in this case, because it might later be generalized to a condition where it's not even close to the real absolute error.

      Print rounds down

      That's not true, as syphilis showed. Here's another example.

      C:\usr\local\share>perl -le "printf qq(%.17e => %s\n), $_, $_ for 2.99999999999999989e-02, 2.99999999999999503e-02, 2.99999999999999468e-02"
      2.99999999999999989e-02 => 0.03
      2.99999999999999503e-02 => 0.03
      2.99999999999999468e-02 => 0.0299999999999999

      Print rounds to nearest to the precision that print "likes" rounding to.

      If the float is +/- epsilon from ideal, then adding an epsiol brings it into range of 0 to +2 epsilon from ideal, which will round down to ideal.

      Again, no.

      C:\usr\local\share>perl -MMachine::Epsilon -le "for (2.99999999999999989e-02, 2.99999999999999503e-02, 2.99999999999999468e-02) { printf qq(%.17e => %-30s x+2eps = %s\n), $_, $_, $_+2*machine_epsilon}"
      2.99999999999999989e-02 => 0.03                           x+2eps = 0.0300000000000004
      2.99999999999999503e-02 => 0.03                           x+2eps = 0.0300000000000004
      2.99999999999999468e-02 => 0.0299999999999999             x+2eps = 0.0300000000000004

      Notice that x+2*epsilon prints a number bigger than 0.03, whereas the value of x is slightly less than 0.03 (by 1 ULP).

      Print rounds down

      As a generalization this is not true.
      Even when it is true for some value $x, it will be untrue for -$x.
      However, there are times when print() rounds up for positive values.

      Consider the following (perl-5.32.0, nvtype is "double"):
      use strict;
      use warnings;

      my $d = 0.4444444444444446;
      printf "%a\n", $d;
      print "$d\n";
      print "print() has rounded upwards\n" if "$d" > $d;

      __END__
      Outputs:
      0x1.c71c71c71c71fp-2
      0.444444444444445
      print() has rounded upwards
      And how do I calculate the value of this "epsilon" that is being mentioned?

      Cheers,
      Rob
        And how do I calculate the value of this "epsilon" that is being mentioned

        As sectokia's first post indicated, Machine::Epsilon provides machine_epsilon(). Its value depends on $Config{doublesize}, but for 64-bit doubles it is the value sectokia quoted. It is the maximum error, relative to the appropriate power of two.

        The value of that is the same as ulp(1) (from my Data::IEEE754::Tools), where ULP is the "unit in the last place". But really, ulp($value) is the easier way to figure out the exact ULP size for a given value, and you know that the "real" value is somewhere within +/- 1 ULP (actually, I think, +/- 0.5 ULP) of the value stored in the 64-bit double float.