lua-users home
lua-l archive


I am working on an extension to my (as yet unreleased) Token Storage patch to allow preprocessing of Lua source code. When preprocessing is enabled via the new -E flag to `luac`, the filtered source is saved in the output file (default luac.out) instead of the bytecode.

To ensure that all preprocessing is working correctly I have been running the test suite files through `luac -E` and then running `lua luac.out` to check for errors. So far everything is working, and all token-storage-related features are properly preprocessed into vanilla Lua code.

However, there is one issue that appears to lie in Lua itself: the default floating-point format specifiers.

By default we have these defines for float, double and long double:

float:	%.7g
double:	%.14g
lngdbl:	%.19Lg

The problem, according to the IEEE floating-point entries on Wikipedia, is that these do not specify enough digits. As per the Wikipedia articles, the number of significant decimal digits for each format is as follows:

float:	6-9 
double:	15-17
80bit:	18-21 (probably `long double` for most people)
128bit:	33-36 (`long double` for SPARC, PowerPC, Itanium)

The lines in `math.lua` that I am getting errors on are as follows:

assert(0x.FfffFFFF == 1 - '0x.00000001')

assert(tonumber(' -1.00000000000001 ') == -1.00000000000001)

I am using LUA_REAL_DOUBLE for my floating-point type; when I change the format specifier from "%.14g" to "%.16g", both errors are resolved.

With the specifier at "%.14g" the preprocessed lines are saved as follows:

assert(0.99999999976717 == 1 - "0x.00000001")

assert(tonumber(" -1.00000000000001 ") == -1.0)

With the specifier at "%.16g" the preprocessed lines are saved as follows:

assert(0.9999999997671694 == 1 - "0x.00000001")

assert(tonumber(" -1.00000000000001 ") == -1.00000000000001)

From what I have been able to find online, these are the format specifiers needed to round-trip any value of the relevant data type (the upper end of each significant-digit range above):

float:	%.9g
double:	%.17g
80bit:	%.21Lg
128bit:	%.36Lg

I really believe it would be a good idea to incorporate this change into vanilla Lua; that way users can be assured that any floating-point number written out by Lua can also be read back in without error. Additionally, the entire test suite (with ltests enabled) passes without error with the updated format specifier.

Regardless of whether the format specifiers are updated, I may simply preserve the original number format when writing out the preprocessed source, since anyone compiling Lua could change the format specifier and break the preprocessor. I just wanted to bring this precision issue to the attention of the Lua developers! :)