lua-users home
lua-l archive


On 2017-11-17 07:26 AM, Dirk Laurie wrote:
2017-11-17 9:55 GMT+02:00 Paige DePol <>:
I know only one language that on a routine basis accepts that its
users may not know all that, and therefore provides a built-in fix for
the problem. I won't mention its name since it is a language that
people either love or hate.
Well, now I have to ask... what language?
Hint: the name of the tolerance in that language is ⎕CT.
(Having two tolerances is non-standard.)

If the same idea is implemented in Lua, it
would involve two additional predefined constants math.abstol and
math.reltol. A floating-point loop 'for x = start, stop, inc' would
be equivalent to
... [snip] ...
The defaults are such that loops like "for x=1,5,0.1" give the
intuitively expected behaviour, but the user is permitted to change
either tolerance. Most commonly one would set one of them to 0, so
that the test simplifies to either relative or absolute tolerance.
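A minimal sketch of what such a tolerance-aware loop could look like in plain Lua, written against hypothetical math.abstol and math.reltol fields (the names come from the proposal above; no Lua release actually defines them):

```lua
-- Hypothetical tolerance fields; not defined by any real Lua release.
math.abstol = math.abstol or 1e-12
math.reltol = math.reltol or 1e-12

-- x is still "in range" if it overshoots stop by no more than the
-- absolute or relative tolerance, whichever is larger.
local function in_range(x, stop, inc)
  local tol = math.max(math.abstol, math.reltol * math.abs(stop))
  if inc >= 0 then
    return x <= stop + tol
  else
    return x >= stop - tol
  end
end

-- Tolerance-aware stand-in for 'for x = start, stop, inc do body(x) end'.
local function tolerant_for(start, stop, inc, body)
  local x = start
  while in_range(x, stop, inc) do
    body(x)
    x = x + inc
  end
end
```

With both tolerances set to 0 the test degenerates to the exact comparison that current Lua loops perform.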
What do these tolerance values actually do from a code perspective? I was
wondering if there was a way to do what I believe you are talking about;
however, I don't think it was necessary for the simple detection test I
came up with... unless you are saying there is an issue?
The issue is that terminating decimals like 0.1 or 0.15 (except those that
happen to be terminating binaries too) have machine representations that
are either slightly too small or slightly too large. So strictly speaking
a loop running from 0 to 3 should either include 3.0 or not, depending on
which is the case. As it happens, in IEEE double precision, with step
size 0.1, 3.0 is not included, but with step size 0.15, it is.
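Dirk's claim is easy to check by counting the iterations of a native numeric for loop (a quick experiment, assuming IEEE doubles, which stock Lua uses for floats):

```lua
-- Count how many values a native numeric for loop yields.
local function count(start, stop, inc)
  local n = 0
  for x = start, stop, inc do n = n + 1 end
  return n
end

-- Decimal counting says 0, 0.1, ..., 3.0 is 31 values, but the
-- accumulated binary sum overshoots 3, so the loop stops one early.
print(count(0, 3, 0.1))   --> 30
-- With step 0.15 the accumulated sum stays just below 3, so a value
-- approximately equal to 3.0 is produced: 0, 0.15, ..., 3.0 is 21 values.
print(count(0, 3, 0.15))  --> 21
```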

The point is that Sam Q. Public does not care whether your loop does
precisely what the machine representation of the increment implies.
They want 3.0 included in both cases, since when counting in
decimals (which is what they were taught in grade school), that is
correct. It should not be their worry to know, and take into account,
whether your computer rounds up or down when representing
0.1 or 0.15.
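Which way a given decimal rounds is easy to inspect: printing with 17 significant digits is enough to identify an IEEE double exactly.

```lua
-- 17 significant digits uniquely identify an IEEE double.
print(string.format("%.17g", 0.1))   --> 0.10000000000000001  (rounds up)
print(string.format("%.17g", 0.15))  --> 0.14999999999999999  (rounds down)
```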

It's not just a question of decimals: the step size may have been
computed as p/q with integers p and q.

Now if you have math.abstol and math.reltol, the default behaviour
can be as Sam expects, and if you insist on the puristic but useless
behaviour of current Lua (and C, and whatever) loops, you can
always just set the tolerances to 0.

Globals are bad. Provide abstol and reltol all you want, but don't make them the default if that means relying on globals.

The only thing allowed to use globals is print() with tostring().

-- Dirk

Disclaimer: these emails may be made public at any given time, with or without reason. If you don't agree with this, DO NOT REPLY.