On 6/5/2018 8:33 AM, Albert Chan wrote:
Some operating systems, say Windows, use EXTENDED PRECISION by default
(64-bit long double instead of 53-bit double).

Example, from my Win7 laptop:

Python returns the correctly rounded sum:
(thus *all* imported Python modules also do 53-bit float rounding)

1e16 + 2.9999           # good way to test float precision setting
     ^^^^^^^^^^^^^

What "1e16 + 2.9999" really means is that the user wants to utilize all 53 bits of double precision, PLUS having rounding done just the way the user likes it.

If the program uses all 53 bits and the last ULPs are very important, that is an impossible thing to ask for, because the rounding in every arithmetic op is going to hammer those ULPs anyway. What counts as correct rounding then? You would have to tread extremely carefully and do extensive testing to see how rounding errors accumulate in your program before deciding whether you want to trust those ULPs at all. But we want perfect ULPs now? How? Why?
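
To see how quickly ordinary arithmetic chews through those last bits, here is a rough, made-up illustration (not Albert's example): every addition below can be off by up to half an ULP of the running sum, and the errors pile up.

    local sum = 0.0
    for _ = 1, 10000 do
      sum = sum + 0.1   -- 0.1 is not exactly representable in binary
    end
    print(string.format("%.17g", sum))          -- not exactly 1000
    print(string.format("%.17g", sum - 1000))   -- the drift is many ULPs of 1000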

So what is the point of "1e16 + 2.9999"? It's only useful for perfectionists who wish to see digit- or bit-perfect output across all languages and platforms. Please do not see floating-point as something that is mathematically beautiful or perfect; it is more of a mass of engineering compromises -- when you push its capabilities to the limits, you always have to manage the error thing. This is a world that is very, very far away from perfect ULP digits or bits.

Now, if you still want to change people's minds, please explain why this is really important to have for Lua, in terms of how it affects normal apps and programs.


--
Cheers,
Kein-Hong Man (esq.)
Selangor, Malaysia