- Subject: Re: Setting Float Precision in Lua.c
- From: KHMan <keinhong@...>
- Date: Fri, 8 Jun 2018 09:40:06 +0800
On 6/7/2018 1:30 PM, Dirk Laurie wrote:
> 2018-06-07 4:04 GMT+02:00 KHMan wrote:
>> On 6/6/2018 8:53 PM, Albert Chan wrote:
>>> = 123456789 / 1e20 -- both numbers exactly represented in double
>>> = 1.23456789e-012 -- division guaranteed correct rounding
>> Here is a different approach (the long story approach):
>> So it's no problem for most of us. But if mathematicians keep thinking about
>> ideal situations and keep trying to hit exact points on the value line, then
>> they should keep on doing so and not bother the rest of us about it.
> Well, you two have really been talking at cross purposes. All of what
> you say is true, but you think Albert does not know that.
> Albert's point (see his second post) applies to the situation where
> floating-point to a certain precision is all you have. In that
> situation, there are algorithms that deliver additional precision by
> doing clever things — but those algorithms rely on knowing what kind
> of rounding the processor does, all the time, every time. Now if you
> explicitly set the rounding mode, you know that. If you don't set it,
> you don't know. It's like seeding a pseudo-random number generator.
> Where my point of view differs from Albert's is that he thinks it
> strengthens his case by pointing out that Windows is an example of a
> system that gives undesired results. I think that it triggers the
> reaction: yet another case where Windows is sloppy — so what?
Sloppy? Another mathematician might say that using extended
precision is better, only you guys are not using it correctly.
Opinions, everyone has a few of them.
Who's to say that one camp is the purveyor of all settings that
are correct and proper for IA32 floating point?
> But the issue is just this: a failure to initialize something gives
> unpredictable results.
> I think it is a valid point with a very simple, cost-free workaround.
Yeah, until he pops up with the next thing that needs to be
perfect. I can see it already: perfect round-tripping atod/dtoa.
Then another thing. And another thing. Have you forgotten his
recent efforts at helping Lua get the bestest, most perfect, most
awesome PRNG? The bestest only lasts until it is knocked down by a
new research paper. Is Lua in the math academia business now?
All these things need auditing. Bruce Dawson and others have even
found glitches in MSVC's number-to-ASCII (and vice versa)
conversions, for an older library I think. So to make Lua perfect
for number games, you need the auditing, the hunting down of
glitches, and on and on and on.
More likely it is up to the binary release devs to decide
whether to take this up. Whose real-world apps must really have
perfect output to 16 decimal digits? After 1000 float ops, do you
still harp on a few roundings that differ from one platform to
another? Is it the end of the world? Don't scientific people
already know how to manage errors in their scientific data? This
is really something that is more useful for math-oriented
academics to parade around with.
So Albert name-drops Vincent Lefèvre in his reply. The latter is
on the gcc mailing list, and in the last few years that I have
noticed him posting library announcements and such, I don't recall
him ever pushing for 'proper' default compiler settings for gcc on
IA32. So Albert still needs to persuade Roberto & co. I'm just
shooting the breeze here. Hey, good luck there. ;-)
Kein-Hong Man (esq.)