Note that octuple precision (256-bit) has long been part of IEEE 754 as an approved extension (it was already used in 2006 on the Apple G4).
Apple may be tempted to renew support for it in its next ARM-based processors...

On Tue, Aug 18, 2020 at 11:58 PM Philippe Verdy <verdyp@gmail.com> wrote:
I see... Thanks. Anyway I don't know why you use this magic value 50, which is twice what is needed, and not a multiple of 8 or 16.
Even with long doubles, a buffer of 32 bytes would be enough (we're not converting to UTF-8, just to plain ASCII, with vs[n]printf() I think; I may be wrong about some double-byte locales).
But if you ever consider the possibility of double-byte default locales in C, maybe 50 is not enough and 64 would then be safer.
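
As a rough standalone check (not Lua code; it assumes the stock luaconf.h formats, "%.14g" for doubles, "%.19Lg" for long doubles, and "%lld" for 64-bit integers), something like this prints the actual worst-case conversion lengths:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* snprintf(NULL, 0, ...) returns the length the conversion would need */
    int d  = snprintf(NULL, 0, "%.14g",  -DBL_MAX);
    int ld = snprintf(NULL, 0, "%.19Lg", -LDBL_MAX);
    int i  = snprintf(NULL, 0, "%lld",   -9223372036854775807LL - 1);
    printf("double: %d, long double: %d, integer: %d\n", d, ld, i);
    return 0;
}

On x64 with 80-bit long doubles, all three come out under 32, consistent with the point above.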

This part of "luaconf.h" is a bit tricky, seems to have been tweaked/adjusted manually with various test/fail/retries. There's no real test if this is safe when porting. We just seem to assume that only common IEEE 754 sizes will be used (including 80-bit long doubles, but why not 128-bit on some archs? What is there's a new architecture supporting 256-bit "long doubles" larger than the ISS 754 minimums just defined for "float" and "double"?)
There are no compiler directives to assert the implicit size constraints, and still this allocates more than what is needed on the common x64 and ARM64 archs used today (even the next Mac will use ARM64, after the 68k, PPC and x64 adventures). But here I am thinking about architectures that would want to support Lua on large application servers with more "exotic" processors and high levels of parallelism, including new processor types for AI like what Google is developing and now selling, or what is used in GPUs with their dedicated recompilers and native APIs; nVidia in particular hides lots of details. A compile-time guard is sketched below.
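
For instance, a guard of this kind could be added (the macro names are hypothetical; only MAXNUMBER2STR comes from the Lua sources, and the bound is the usual worst case for "%.*g": sign + leading digit + point + remaining digits + "e+" + exponent digits + NUL):

#include <float.h>

#define NUMFMT_PRECISION 19      /* matches the "%.19Lg" long double format */
#define EXP10_DIGITS     4       /* enough for LDBL_MAX_10_EXP up to 9999 */
#define WORST_NUMLEN \
  (1 + 1 + 1 + (NUMFMT_PRECISION - 1) + 2 + EXP10_DIGITS + 1)

#define MAXNUMBER2STR 50         /* the value under discussion */

/* C11: fail the build instead of silently truncating at run time */
_Static_assert(LDBL_MAX_10_EXP < 10000,
               "exponent needs more digits than assumed");
_Static_assert(WORST_NUMLEN <= MAXNUMBER2STR,
               "MAXNUMBER2STR too small for the long double format");

With a 256-bit "long double" the decimal exponent would exceed four digits, and the first assertion is what would trip.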


On Tue, Aug 18, 2020 at 11:43 PM Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:
>>>>> "Philippe" == Philippe Verdy <verdyp@gmail.com> writes:

 Philippe> could this be related to
 Philippe> /* maximum length of the conversion of a number to a string */
 Philippe> #define MAXNUMBER2STR   50

No.

It's to do with the fact that the Lua value is a tagged union which can
hold either a 64-bit integer or a 64-bit double. Optimization code that
takes advantage of the type being known at the call site to produce
specialized variants of the called function was confusing the types and
passing them in incorrect registers as a result.
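
For illustration, a simplified model of the layout involved (not the real TValue definition from lobject.h):

#include <stdint.h>
#include <stdio.h>

/* One 64-bit payload slot, discriminated by a tag */
typedef union {
    int64_t i;   /* integer payload */
    double  n;   /* float payload   */
} Value;

typedef struct {
    Value v;
    int   tag;   /* 0 = integer, 1 = float (simplified) */
} TValue;

/* On x86-64, an int64_t argument is passed in a general-purpose
   register but a double in an XMM register; a pass that clones a
   callee for a known payload type has to keep those two calling
   conventions straight, which is what went wrong here. */
static double as_number(const TValue *t) {
    return t->tag == 0 ? (double)t->v.i : t->v.n;
}

int main(void) {
    TValue a = { .v.i = 42,  .tag = 0 };
    TValue b = { .v.n = 2.5, .tag = 1 };
    printf("%g %g\n", as_number(&a), as_number(&b));
    return 0;
}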

See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96040

--
Andrew.