in reply to Re^2: what did I just see..?
in thread what did I just see..?

For the range 0 to 1, epsilon will always be greater than the error (since the error only scales smaller), but of course you are correct that it should be scaled both up and down for a normalized solution.

You are also right about needing to apply it to each subtraction operation.

I don't agree with the bit about it being in the wrong direction if the step happens to land just under the desired ideal value. Print rounds down. If the float is +/- epsilon from ideal, then adding an epsilon brings it into the range 0 to +2 epsilon from ideal, which will round down to ideal. It doesn't matter whether you started +ve or -ve from ideal.

Replies are listed 'Best First'.
Re^4: what did I just see..?
by pryrt (Monsignor) on Mar 24, 2021 at 17:42 UTC
    For the range 0 to 1, epsilon will always be greater than the error

    So while we have the example of 0.8 down to 0.1, the difference between the epsilon and the ULP won't be huge. But who knows whether the OP will eventually go beyond that range, and get upset when numbers near 2e-16 start displaying as near 4e-16. That's one of the reasons I was cautioning against applying epsilon in this case, because it might later be generalized to a condition where it's not even close to the real absolute error.
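    To make that concern concrete, here is a minimal sketch (assuming a 64-bit IEEE-754 double, where machine epsilon is 2**-52; the variable names are mine, not the OP's):

    ```perl
    use strict;
    use warnings;

    my $eps = 2 ** -52;     # machine epsilon for a 64-bit IEEE-754 double
    my $x   = 2.5e-16;      # a perfectly legitimate value near epsilon itself

    print "$x\n";           # 2.5e-16
    print $x + $eps, "\n";  # roughly 4.7e-16 - the "correction" nearly doubles it
    ```

    For values down near epsilon, the added epsilon is no longer a negligible display nudge; it dominates the value itself.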

    Print rounds down

    That's not true, as syphilis showed. Here's another example.

        C:\usr\local\share>perl -le "printf qq(%.17e => %s\n), $_, $_ for 2.99999999999999989e-02, 2.99999999999999503e-02, 2.99999999999999468e-02"
        2.99999999999999989e-02 => 0.03
        2.99999999999999503e-02 => 0.03
        2.99999999999999468e-02 => 0.0299999999999999

    Print rounds to nearest to the precision that print "likes" rounding to.
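    More precisely: on a perl whose NV is a plain 64-bit double, default stringification behaves like sprintf's "%.15g" - round-to-nearest at 15 significant digits. A small sketch (assuming such a build):

    ```perl
    use strict;
    use warnings;

    # "$n" stringifies like %.15g on a double-NV perl:
    # round-to-nearest at 15 significant digits, not round-down.
    my $n = 2.99999999999999468e-02;
    print "$n\n";           # 0.0299999999999999
    printf "%.15g\n", $n;   # same thing
    ```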

    If the float is +/- epsilon from ideal, then adding an epsilon brings it into the range 0 to +2 epsilon from ideal, which will round down to ideal.

    Again, no.

        C:\usr\local\share>perl -MMachine::Epsilon -le "for (2.99999999999999989e-02, 2.99999999999999503e-02, 2.99999999999999468e-02) { printf qq(%.17e => %-30s x+2eps = %s\n), $_, $_, $_+2*machine_epsilon }"
        2.99999999999999989e-02 => 0.03                           x+2eps = 0.0300000000000004
        2.99999999999999503e-02 => 0.03                           x+2eps = 0.0300000000000004
        2.99999999999999468e-02 => 0.0299999999999999             x+2eps = 0.0300000000000004

    Notice that x+2epsilon prints a number bigger than 0.03, whereas the value of x is slightly less than 0.03 (by 1 ULP).

Re^4: what did I just see..?
by syphilis (Bishop) on Mar 24, 2021 at 12:47 UTC
    Print rounds down

    As a generalization this is not true.
    Even when it is true for some value $x, it will be untrue for -$x.
    However, there are times when print() rounds up for positive values.

    Consider the following (perl-5.32.0, nvtype is "double"):
        use strict;
        use warnings;

        my $d = 0.4444444444444446;
        printf "%a\n", $d;
        print "$d\n";
        print "print() has rounded upwards\n" if "$d" > $d;

        __END__
        Outputs:
        0x1.c71c71c71c71fp-2
        0.444444444444445
        print() has rounded upwards
    And how do I calculate the value of this "epsilon" that is being mentioned ?

    Cheers,
    Rob
      And how do I calculate the value of this "epsilon" that is being mentioned

      As sectokia's first post indicated, Machine::Epsilon provides machine_epsilon(). It depends on $Config{doublesize}, but for 64-bit doubles, the value sectokia quoted is correct. It is the maximum error, relative to the appropriate power of two.
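      If you'd rather derive it than install a module, here is the classic textbook construction (a sketch, not Machine::Epsilon's actual code), assuming a double NV:

      ```perl
      use strict;
      use warnings;

      # Halve until adding to 1.0 no longer changes it; the last value
      # that did change 1.0 is the machine epsilon.
      my $eps = 1.0;
      $eps /= 2 while 1.0 + $eps / 2 != 1.0;

      printf "%.17g\n", $eps;   # 2.2204460492503131e-16, i.e. 2**-52
      ```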

      The value of that is the same as ulp(1) (from my Data::IEEE754::Tools), where ulp is the Unit in the last place. But really, ulp(value) is the easier way to figure out the exact ULP size for a given value, and you know that the "real" value is somewhere between +/- 1 ULP (actually, I think +/- 0.5 ULP, really) from the value stored in the 64-bit double float.
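      If the module dependency is unwanted, the binade arithmetic behind ulp() can be sketched with POSIX::frexp. This handles normal doubles only (no zero, subnormals, or infinities), and it is my own illustrative stand-in, not Data::IEEE754::Tools' actual implementation:

      ```perl
      use strict;
      use warnings;
      use POSIX qw(frexp);

      # With $x == $mant * 2**$exp and 0.5 <= |$mant| < 1, neighbouring
      # doubles in $x's binade differ by 2**($exp - 53).
      sub approx_ulp {
          my ($x) = @_;
          my (undef, $exp) = frexp($x);
          return 2 ** ($exp - 53);
      }

      printf "%g\n", approx_ulp(1.0);   # 2.22045e-16, same as machine epsilon
      printf "%g\n", approx_ulp(0.1);   # 1.38778e-17, four binades smaller
      ```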

        and you know that the "real" value is somewhere between +/- 1 ULP (actually, I think +/- 0.5 ULP, really) from the value stored in the 64-bit double float

        Thanks for elaborating.
        If these considerations are relevant to the post that started this thread, I'm genuinely curious to know just what that relevance is ... because I'm not really seeing it, and I'd hate to be missing out on something ;-)
        (If they're not relevant, then that's OK. I always find thinking about and fiddling with such considerations to be fun, anyway.)

        The doubles 0.5, 0.3, and 0.1 have ULPs of 2 ** -53, 2 ** -54, and 2 ** -56 respectively - and they differ from their respective rational values (5/10, 3/10, 1/10) by different amounts (0.5, in fact, is represented exactly).
        It seems to me that the (details of the) behaviour reported by the OP have more to do with the size of the rounding error than with the value of the ULP.
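        One way to see those different rounding errors directly is to print more digits than the default 15-significant-digit stringification shows (assuming a 64-bit double):

        ```perl
        use strict;
        use warnings;

        # 20 decimal places exposes how far each stored double sits from
        # the decimal constant in the source: 0.5 is exact, 0.3 and 0.1 are not.
        printf "%s => %.20f\n", $_, $_ for 0.5, 0.3, 0.1;
        ```

        On a typical 64-bit build this shows 0.50000000000000000000, 0.29999999999999998890, and 0.10000000000000000555 - different absolute errors, even though the default 15-digit forms all look clean.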

        I'll check out Machine::Epsilon and Data::IEEE754::Tools.

        Cheers,
        Rob