
HyperHacker wrote:
On Fri, May 28, 2010 at 04:41, Vyacheslav Egorov wrote:
The double 0xFF000000 is too big to be represented by a signed integer.
You should get the value from the Lua stack as a double, then cast it to an
unsigned integer.
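
In C that advice looks roughly like this (check_u32 is just an illustrative
name, not an existing API):

#include <lua.h>
#include <lauxlib.h>

/* Fetch a 32-bit unsigned argument: read the lua_Number (a double) and
   convert through a 64-bit integer so values above INT_MAX, such as
   0xFF000000, are not squashed by a signed 32-bit conversion. */
static unsigned int check_u32(lua_State *L, int idx) {
  lua_Number n = luaL_checknumber(L, idx);
  return (unsigned int)(unsigned long long)n;
}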

On Fri, May 28, 2010 at 12:14 PM, Dmitry Gapkalov wrote:
On the MSVC compiler this construction is used:
#define lua_number2int(i,n)  __asm {__asm fld n   __asm fistp i}
Example:
bit.band(0xFF000000, 0x00FFFFFF)
While debugging, after the arguments are fetched from the stack, the first
value is 0x80000000 and the second is 0xFFFFFF.

On a related note, I'm curious how floating point rounding errors are
going to affect binary operations. I know of at least one emulator
(Dolphin, IIRC) that exposes a raw memory access API to Lua, but
suffers from rounding error - reading address 0x80000020 instead gives
you the value of 0x80000000, etc. I encountered the same problem when
implementing a similar API, and my solution was the HexType patch from
the wiki, but this won't work for everyone, unless 5.2 actually adds
an integer data type.

That's what Luiz's lua_number2uint() is for; it uses __int64.
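
A rough sketch of how such a macro can avoid the signed overflow (this is not
necessarily Luiz's exact definition; __int64 is MSVC-specific):

/* Convert a lua_Number to an unsigned 32-bit value by widening to a 64-bit
   integer first, so values like 0xFF000000 never pass through a signed
   32-bit intermediate. */
#define lua_number2uint(i,n)  ((i) = (unsigned int)(unsigned __int64)(n))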

This came up a while back. There's no reason why a double needs to 'round' a 32-bit value in that manner.

On x86, 0x80000000 is the "indefinite integer value" (x86's short answer for "help! I can't convert this!") returned for out-of-range 32-bit results by conversion instructions such as CVTTSD2SI or FISTP. Compilers don't throw exceptions when doing such casts... and boom!
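
For anyone who wants to see it in action, a small test (out-of-range
double-to-int conversion is undefined behaviour in C; the 0x80000000 result is
simply what the x86 instructions happen to produce):

#include <stdio.h>

int main(void) {
  double d = 4278190080.0;             /* 0xFF000000 as a double */
  int i = (int)d;                      /* out of range for int: UB in C;
                                          CVTTSD2SI/FISTP yield 0x80000000 */
  unsigned int u = (unsigned int)(unsigned long long)d;  /* widen first */
  printf("signed cast: %08X\n", (unsigned int)i);  /* 80000000 on x86 */
  printf("via 64-bit:  %08X\n", u);                /* FF000000 */
  return 0;
}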

--
Cheers,
Kein-Hong Man (esq.)
Kuala Lumpur, Malaysia