[Date Prev][Date Next][Thread Prev][Thread Next]
- Subject: Re: LUA_NUMBER -> Integer conversion bug
- From: Sebastian Rohde <rohde@...>
- Date: Mon, 13 Mar 2006 16:06:43 +0100
Mike Pall wrote:
> No. The "standard" FPU state is very well defined by various
> documents. The most notable are the x86 ABIs for POSIX and
> Windows. The former specifies fpucw = 0x37f, the latter specifies
> fpucw = 0x27f. They differ only in the default precision setting
> (extended vs. double).
> Microsoft chose to deviate from this with DirectX by leaving the
> FPU state set to "float" precision. Blame them, not Lua. In fact
> D3DCREATE_FPU_PRESERVE really should've been the default setting,
> even back then when it mattered (on a Pentium I or II without GPU
> acceleration). If you choose to live dangerously then you should
> get the option, but not just by accident.
ACK, but DirectX was not my point.
> This is not just about "conversion tricks". Every compiler
> generates tons of code which relies on the implicit assumptions
> from the ABI. Many math functions will produce invalid results if
> the precision or the rounding mode is not the default. Changing
> the default FPU state is a really bad idea.
> Anyway, you would've gotten wrong results even with earlier Lua
> versions, except under different circumstances. Try this:
Okay, I think I get the point. Would you say that there is no reason to set
the FPU state at all? From my naive point of view I would be surprised if
there were no performance difference between the precision modes.
Do you have sources which state otherwise?
Nonetheless I would say that this should be noted more explicitly in the
API reference section of the reference manual, or enforced as an assertion
(blame me if I simply missed it). An incorrect state should be easy to
detect at runtime in a debug build, and that could be reported as an error.
Even if it is bad practice to set the precision mode, a library should define
its dependencies. In this context that means Lua depends on the standard
FPU state.