David Given wrote:
David Jones wrote:
[...]
I think you're right; this is the only safe way.  I'm amazed that this
has gone unnoticed.  Of course, even with this solution it's _possible_
to construct a perverse C implementation where it would be undefined
behaviour (if INT_MAX were huge enough to be outside the range of double).

Actually, talking to someone who knows more about floating-point
representations than I do: the Evil Hack to convert a double to an int, viz.:

#define lua_number2int(i,d) \
  { volatile union luai_Cast u; u.l_d = (d) + 6755399441055744.0; (i) = u.l_l; }

...should produce the 'right' results --- i.e. (double)-1 == (int)-1 ==
(unsigned int)0xFFFFFFFF --- everywhere the hack does work, because the
hack relies on a particular binary representation. It's only when you do
things correctly that it will start to fail. You're on a Mac, right, Brian? If
so, you won't be using the hack.
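
For reference, a minimal standalone sketch of what that hack is doing (made-up names, not Lua source; it assumes 64-bit IEEE 754 doubles, a little-endian layout, round-to-nearest, and |d| roughly below 2^31):

#include <stdio.h>
#include <stdint.h>

/* Adding 2^52 + 2^51 forces the rounded integer value of d into the low
   bits of the mantissa, in two's-complement form, so on a little-endian
   machine the low 32-bit word of the double is the converted integer. */
static int32_t double_to_int_hack(double d)
{
    union { double d; uint32_t w[2]; } u;
    u.d = d + 6755399441055744.0;   /* 2^52 + 2^51 */
    return (int32_t)u.w[0];         /* low word on a little-endian machine */
}

int main(void)
{
    printf("%d\n", double_to_int_hack(-1.0));     /* prints -1 */
    printf("%d\n", double_to_int_hack(1234.5));   /* rounds to nearest even: 1234 */
    return 0;
}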

The project in question targets both PC and Mac.

"n = (int)(unsigned int)d;" worked on the PC, but not the Mac (Intel) with negative values.

It seems as though redefining lua_Integer to be 64-bit would solve this for all platforms, but Lua has too many implicit casts from lua_Integer to int, so I abandoned that change (the more mods I make to Lua, the less clean it feels).

I'm also thinking that I may have to rethink our entire use of flags and bitfields, since we have other projects on the PS2 where lua_Number is a float, which therefore can't hold all 32-bit integer values.
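
A tiny illustration of the float problem (again just a sketch, not project code): float has a 24-bit significand, so not every integer above 2^24 is representable, and high flag bits get silently rounded away.

#include <stdio.h>

int main(void)
{
    unsigned int flags = 0x01000001u;     /* bit 24 and bit 0 set */
    float f = (float)flags;               /* what a float lua_Number would store */
    unsigned int back = (unsigned int)f;

    printf("original:  0x%08X\n", flags); /* 0x01000001 */
    printf("via float: 0x%08X\n", back);  /* 0x01000000 with IEEE floats: bit 0 lost */
    return 0;
}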

Brian