lua-users home
lua-l archive


On Fri, Nov 17, 2017 at 5:58 AM, Hisham <> wrote:
> On 17 November 2017 at 04:38, Dirk Laurie <> wrote:
>> 2017-11-17 0:26 GMT+02:00 Paige DePol <>:
>>> Paige DePol <> wrote
>>>> In lvm.c:
>>>> #define forstepcheck(ns,ni,nl) \
>>>>          (luai_numne(ns, luai_numsub(L, luai_numadd(L, ni, ns), ni)) || \
>>>>           luai_numne(ns, luai_numsub(L, nl, luai_numsub(L, nl, ns))))
>>> This worked... until it didn't. Floating point numbers are so strange!
>>> 3.2 == (1.0 + 3.2) - 1.0  [3.2]
>>> 3.2 == (16.0 + 3.2) - 16.0  [3.2]
>>> Looks like that should pass the test... but it failed (sorry for the excessive width):
>>> 3.200000000000000177636 == (1.000000000000000000000 + 3.200000000000000177636) - 1.000000000000000000000  [3.200000000000000177636]
>>> 3.200000000000000177636 == (16.000000000000000000000 + 3.200000000000000177636) - 16.000000000000000000000  [3.199999999999999289457]
>> A for loop with floating-point increment, and what appears to be an
>> integer multiple of it as upper limit, is obscene code. In a numerical
>> analysis course, it is a standard example of what one should not do,
>> and a fix is taught. (It basically involves adding half the increment
>> to the upper limit.)
>> I know only one language that on a routine basis accepts that its
>> users may not know all that, and therefore provides a built-in fix for
>> the problem. I won't mention its name since it is a language that
>> people either love or hate. If the same idea is implemented in Lua, it
>> would involve two additional predefined constants math.abstol and
>> math.reltol. A floating-point loop 'for x = start,stop,inc' would be
>> equivalent to
>> do local x=start
>>    while x <= stop + math.max(math.abstol, math.reltol*math.abs(x)) do
>>       -- loop body
>>       x = x+inc
>>    end
>> end
>> The defaults are such that loops like "for x=1,5,0.1" give the
>> intuitively expected behaviour, but the user is permitted to change
>> either tolerance. Most commonly one would set one of them to 0, so
>> that the test simplifies to either relative or absolute tolerance.
>> I guess that the bottom line is that the VM instruction should right
>> at the start determine whether integer or floating-point arithmetic is
>> to be used, and that different algorithms coping with the different
>> problems (integer overflow, floating-point inexactness) should be
>> applied.
> If one were to be true to Lua's original spirit as a scripting
> language to be used by domain specialists in other fields, this would
> be an excellent choice. But if we're considering an epsilon in for
> loops, then I'd also expect the same tolerances to be taken into
> account in equality comparisons of floating-point numbers.
> The next question, then, would be "what are good default values for
> math.abstol and math.reltol"?
> -- Hisham

Absolute tolerance is numerical bupkis. No constant epsilon could ever
be meaningful across scales -- an epsilon that gives good behavior on
loops between 0 and 1 isn't going to give you good behavior on loops
between 1 and 1,000,000, and if you find a reasonable compromise there
then that epsilon is going to be unsuitable for a loop between 1e20
and 1e30.

Oh, sure, you could say it doesn't matter and that the behavior is
"good enough", or that the programmer should tune it in the weird
cases. But really, there's just no point in it, because this is a
solved problem: you have to make your tolerance scale with the
magnitude of the parameters.

As for relative tolerance... well, the big problem with determining a
value there is that the rounding error compounds with each addition.
If your loop only iterated twice, then a reltol of 2^-52 would be
sufficient -- you just have to deal with errors in the
least-significant bit. But the more times you repeat that addition
with the imprecisely-rounded interval, the larger that error becomes.

Tidbit: Dirk's code makes a minor error in that the relative tolerance
fails if x is 0. You're supposed to determine the relative tolerance
based on the largest absolute value used in the computation to avoid
that degenerate case.
We can solve these problems at the same time:

During the loop prologue, we just determine epsilon = max(abs(start),
abs(stop)) / abs(step) * 2^-52, then we loop while abs(x-stop) >
epsilon. Therefore we tolerate one rounding error per arithmetic
operation, and it only fails to have the desired behavior if you're
iterating so many times that the accumulated rounding error exceeds
the step size -- and I dare you to try to actually execute that loop
to completion. ;) And the performance characteristics ought to be
pretty good because there are only two FP operations (one subtraction,
one comparison -- absolute value is a bit operation, not an FP one)
happening per iteration there.


Better yet, screw the numerical methods.

As I said, the problem is that the rounding error is compounded each
time the addition takes place. But what if I told you there was a way
to run that loop without doing ANY floating-point additions?

Just convert the whole silly thing to integers: min=0,
max=(stop-start)/step+2^-50, increment=1. Each iteration, x = i *
step + start.
FP multiplication only ever succumbs to rounding error once, and it's
guaranteed by IEEE 754 to be exact in all bits except the least
significant. This is a little bit slower than a naive floating-point
loop, because FP multiplication is slower than FP addition, but it's
more precise because the loop variable will only ever be off in the
least-significant bit -- there's no error ACCUMULATION.

If the performance penalty is unacceptable you can instead go ahead
and accumulate FP additions into the loop variable (reintroducing the
error accumulation), but still use the integer version for determining
the actual loop behavior (avoiding the fencepost error).

/s/ Adam