I see... Thanks. Anyway, I don't know why you use this magic value 50, which is twice what is needed and not a multiple of 8 or 16.
Even with long doubles, a buffer size of 32 bytes would be enough: we're not converting to UTF-8, just to plain ASCII, with vs[n]printf() I think (I may be wrong for some double-byte locales).
But if you ever consider the possibility of double-byte default locales in C, maybe 50 is not enough and 64 would then be safer.
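For what it's worth, the worst case doesn't have to be guessed: it can be derived from <float.h>. Here's a minimal sketch, assuming plain-ASCII "%.*Lg" output in the "C" locale and C11 for LDBL_DECIMAL_DIG; the LDBL_MAXSTR name is mine, not anything from the Lua sources:

#include <float.h>
#include <stdio.h>

/* Worst case for "%.*Lg" with precision LDBL_DECIMAL_DIG:
   sign + leading digit + '.' + remaining digits + "e+" + exponent + NUL.
   4 exponent digits cover LDBL_MAX_10_EXP = 4932 (x87 and binary128). */
#define LDBL_MAXSTR (1 + 1 + 1 + (LDBL_DECIMAL_DIG - 1) + 2 + 4 + 1)

int main(void) {
  char buf[LDBL_MAXSTR];
  int n = snprintf(buf, sizeof buf, "%.*Lg", LDBL_DECIMAL_DIG, -LDBL_MAX);
  printf("longest conversion: %d chars, buffer: %d bytes\n%s\n",
         n, (int)LDBL_MAXSTR, buf);
  return 0;
}

On x87 80-bit long doubles this works out to 30 bytes, consistent with the 32-byte estimate above; on IEEE binary128 it already grows to 45, which is exactly the kind of drift a hard-coded 50 silently hides.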
This part of "luaconf.h" is a bit tricky; it seems to have been tweaked/adjusted manually through various test/fail/retry cycles. There's no real test that it is safe when porting. We just seem to assume that only common IEEE 754 sizes will be used (including 80-bit long doubles, but why not 128-bit on some archs? What if there's a new architecture supporting 256-bit "long doubles", larger than the IEEE 754 minimums defined just for "float" and "double"?)
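It's at least easy to see which case a given port falls into; a tiny probe like this (just an illustration, not part of Lua) tells the three common layouts apart:

#include <float.h>
#include <stdio.h>

int main(void) {
  /* LDBL_MANT_DIG: 53 = long double is plain double (e.g. MSVC),
     64 = x87 80-bit extended, 113 = IEEE 754 binary128. */
  printf("sizeof(long double) = %zu\n", sizeof(long double));
  printf("LDBL_MANT_DIG      = %d\n", LDBL_MANT_DIG);
  printf("LDBL_MAX_10_EXP    = %d\n", LDBL_MAX_10_EXP);
  return 0;
}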
There are no compiler directives to assert the implicit size constraints, and yet this allocates more than what is needed on the common x64 and ARM64 archs used today (even the next Macs will use ARM64, after the 68k, PPC and x64 adventures). But here I'm thinking about architectures that would want to support Lua on large application servers with more "exotic" processors and high levels of parallelism, including new processor types for AI like what Google is developing and now selling, or what is used in GPUs with their dedicated recompilers and native APIs (nVidia in particular hides lots of details).
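Asserting the constraint at compile time is cheap in C11, for what it's worth. A minimal sketch, assuming the buffer macro is called LUAI_MAXNUMBER2STR (a placeholder name here; substitute whatever the port actually defines):

#include <float.h>

#define LUAI_MAXNUMBER2STR 50  /* placeholder for the real macro */

/* Same worst-case bound as above: sign + LDBL_DECIMAL_DIG digits
   + '.' + "e+" + 4 exponent digits + NUL.  A future 256-bit format
   with more digits or a bigger exponent would trip this at build
   time instead of overflowing at run time, which is the point. */
_Static_assert(LUAI_MAXNUMBER2STR >= 1 + LDBL_DECIMAL_DIG + 1 + 2 + 4 + 1,
               "number-to-string buffer too small for this long double");

That way a port to an architecture with an unexpected long double format fails loudly at compile time rather than passing the test/fail/retry loop by luck.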