in reply to Re^8: Defining an XS symbol in the Makefile.PL in thread Defining an XS symbol in the Makefile.PL
Do the different long double types have differing ranges?
Yes, but we don't need to know anything about the range for the task at hand.
We just need to know the maximum number of bits of precision for each type, and the number of decimal digits required to accurately handle each of those maximum precisions.
That is:
53 bits needs 17 decimal digits
64 bits needs 21 decimal digits
113 bits needs 36 decimal digits
The DoubleDouble can actually accommodate some (not all) 2098-bit values, which would require 633 decimal digits, but I'm still pondering what should be done about that.
I doubt that "%.632" NVgf will produce reliable results anyway.
Unpacking the bytes of the NV is probably cheaper and quicker than obtaining a numeric value, so maybe that's a better path to take for *all* NVs, not just the DoubleDouble.
Cheers, Rob
Re^10: Defining an XS symbol in the Makefile.PL
by jcb (Hermit) on Aug 19, 2019 at 22:33 UTC

You seem to misunderstand. The test code I offered above takes advantage of the varying ranges to distinguish between known types. The code tests range by doing a string -> FP -> string round-trip and comparing the result to the original string. It also directly tests available precision by adding a (smaller) value and checking whether the sum is larger than the initial value. The highest of these tests that passes defines the number of digits of precision to use.
It also looks like you may have found a better way to do this that does not require determining the FP type in advance?

It also looks like you may have found a better way to do this that does not require determining the FP type in advance?
Reading the bytes works well for comparing NV values, but List::Util::uniqnum() also needs to be able to recognize that (e.g.) the IV "42" and the NV "42.0" are the same value.
That equivalence breaks down as soon as one starts reading the bytes of the NV ;)
You can't entirely avoid determining the FP type, anyway. For a 16-byte extended-precision long double I don't think it's wise to assume that two identical NVs would necessarily have the same six unused bytes.
For that type I decided I should locate the unused bytes and not read them. That additionally required that endianness be established (which is quite easy, courtesy of $Config{byteorder}).
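As a sketch of that idea (assuming a little-endian x86 build where the 16-byte long double is the 80-bit x87 format: 10 significant bytes followed by 6 bytes of padding; the 10-byte figure is specific to that layout):

```c
#include <string.h>
#include <stdint.h>

/* Runtime endianness check, a C analogue of consulting
   $Config{byteorder}. */
static int is_little_endian(void)
{
    uint32_t probe = 1;
    unsigned char first;
    memcpy(&first, &probe, 1);
    return first == 1;
}

/* Compare two long doubles by their significant bytes only.  On a
   little-endian x87 build the first 10 bytes hold the mantissa plus
   sign/exponent; the trailing padding bytes are not guaranteed to
   match between numerically identical values, so skip them. */
static int ld_payload_equal(long double a, long double b)
{
    unsigned char pa[sizeof(long double)], pb[sizeof(long double)];
    memcpy(pa, &a, sizeof a);
    memcpy(pb, &b, sizeof b);
    return memcmp(pa, pb, 10) == 0;
}
```

On a big-endian layout the significant bytes sit at the other end of the buffer, which is why the endianness check has to come first.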
So I'll just go with the original solution. As for the DoubleDouble, I'm trying to establish whether the deplorable state of stringification that I'm seeing is typical ... or whether it's just *my* system that's suffering.
If mine is typical, then I'll just do "%.36" NVgf for the DoubleDouble. But if recent improvements have been made then I might try something smarter.
No one (apart from me) uses DoubleDouble builds of perl anyway, and I haven't been able to find a failure using "%.36" NVgf, so I'm not all that committed to spending too much time on it.
Cheers, Rob
