/* maximum length of the conversion of a number to a string */
#define MAXNUMBER2STR 50
where the string is allocated on the stack as an array of bytes whose size (including the null terminator) is not a multiple of the word size? Could that trigger an internal bug in the stack-slot allocator of GCC 10.1?
Note that "void luaO_tostring" is the only function where a buffer is allocated this way. This may cause issues when that function is inlined (probably alignment problems).
Maybe this is solved simply by making the size a multiple of 8 bytes (64-bit architectures) or 16 bytes (128-bit architectures).
Besides, how could this, even on a 64-bit architecture, generate a numeric string that is 49 bytes long plus a null terminator?
Maybe the bit size of the number type could be asserted in order to compute the exact length needed for the mantissa, the exponent, the signs, and the dot. If that is too complicated, why not simply round 50 up to the next multiple of 8 or 16, i.e. set it to 56 or 64?
#define MAXNUMBER2STR 56
This would be more than necessary, but at least it could avoid the alignment problem when inlining.
For numbers defined as IEEE 754 64-bit doubles, the decimal expansion can produce a string with at most 15 to 17 significant decimal digits, a sign, a dot, an "E" separator, and up to 3 digits plus a sign for the exponent, so the string has at most 24 characters plus a null terminator.
Only if characters were represented using double-byte digits or exponent marks (when not using the "C" locale in Basic Latin) could the string take more than 25 bytes. 50 looks as if it was defined on the assumption that every character could be double-byte.
luaconf.h also contains various macros testing whether numbers are compiled as long doubles, and it sets the default precision to 19 digits of mantissa (14 digits for doubles, 7 digits for floats). The conversion is made using l_sprintf():
** (All uses in Lua have only one format item.)
#if !defined(LUA_USE_C89)
#define l_sprintf(s,sz,f,i) snprintf(s,sz,f,i)
#else
#define l_sprintf(s,sz,f,i) ((void)(sz), sprintf(s,f,i))
#endif
This macro does not use the (sz) parameter when it falls back to sprintf() (the C89 case). Otherwise (C99 and later) it uses snprintf(), and in GCC snprintf may be treated as a builtin that the compiler inlines as assembly instructions, depending on the size given (at higher optimization levels), instead of performing a call into the C library: this is where an alignment problem could occur, if stack-slot positions are incorrectly computed. One way to work around that bug could, again, simply be to round the size up to a multiple of 8 or 16.
Here the crash shows snprintf() being invoked as a function call; maybe this function has acceleration tweaks in its optimized version, such as manipulating the stack and using vector instructions; but vector instructions require some registers to be saved to the stack: if stack-slot positions are incorrectly computed, what was saved before using vector instructions may be overwritten by those vector instructions in the same stack frame. Aligning the size to a suitable multiple may help prevent this bug.
But I note another warning in luaconf.h (this time for a float rather than a double or long double):
/* @@ lua_numbertointeger converts a float number with an integral value
** to an integer, or returns 0 if float is not within the range of
** a lua_Integer. (The range comparisons are tricky because of
** rounding. The tests here assume a two-complement representation,
** where MININTEGER always has an exact representation as a float;
** MAXINTEGER may not have one, and therefore its conversion to float
** may have an ill-defined value.)
*/
#define lua_numbertointeger(n,p) \
((n) >= (LUA_NUMBER)(LUA_MININTEGER) && \
(n) < -(LUA_NUMBER)(LUA_MININTEGER) && \
(*(p) = (LUA_INTEGER)(n), 1))
(Here the problem is with MAXINTEGER when converting floats to integers.)