
>>> I actually zoomed in on a call to DirectX9's CreateDevice() (which
>>> returns successfully, so I don't think it is leaking memory or writing
>>> to invalid places). *Right* before CreateDevice() string.sub works,
>>> *right* after CreateDevice(), string.sub fails.
>>
>>
>> This could be the FAQ where DirectX sets the FPU precision to
>> single, which starts to clobber double lua_Numbers.  I think
>> there was a thread about it quite recently (or maybe I'm mixing
>> up this list with one of the many others where people frequently
>> get hit by the same DirectX misdesign... sigh...).

That was it! Thanks!!
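
For the archives: the two workarounds I know of are passing
D3DCREATE_FPU_PRESERVE in the behavior flags of CreateDevice() (so
Direct3D leaves the FPU control word alone), or forcing the x87
precision back to 53 bits yourself right after the call with
_controlfp() from <float.h>. A rough C++ sketch of both, assuming MSVC
and the usual D3D9 setup (the function and variable names are just
placeholders):

#include <d3d9.h>
#include <float.h>

/* Create the device without letting Direct3D drop the FPU to single
   precision (which silently truncates Lua's double lua_Numbers). */
IDirect3DDevice9 *create_device_fpu_safe(IDirect3D9 *d3d, HWND hwnd,
                                         D3DPRESENT_PARAMETERS *pp)
{
  IDirect3DDevice9 *dev = NULL;
  HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                 D3DCREATE_SOFTWARE_VERTEXPROCESSING
                                 | D3DCREATE_FPU_PRESERVE,  /* the key flag */
                                 pp, &dev);
  if (FAILED(hr)) return NULL;
  /* belt and braces: restore 53-bit (double) precision in case
     something in the driver changed it anyway */
  _controlfp(_PC_53, _MCW_PC);
  return dev;
}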

However, I noticed a little problem in luaconf.h. (I'm using the Lua 5.1
beta that was recently released.)

I changed LUA_NUMBER to float and LUAI_UACNUMBER to float, recompiled, and
still had the same problem. Then I noticed this piece of code (also in
luaconf.h) above it:

/* On a Pentium, resort to a trick */

#if !defined(LUA_ANSI) && !defined(__SSE2__) && \
    (defined(__i386) || defined (_M_IX86))
union luai_Cast { double l_d; long l_l; };
#define lua_number2int(i,d) \
  { volatile union luai_Cast u; u.l_d = (d) + 6755399441055744.0; (i) = u.l_l; }
#define lua_number2integer(i,n)		lua_number2int(i, n)

(etcetera)

The above code only works when LUA_NUMBER is double! (As far as I can
tell, the magic constant is 2^52 + 2^51: adding it forces the rounded
integer into the low 32 bits of a double's mantissa, which is exactly
where l_l reads from, and that depends on the addition being carried out
at full double precision.) When I commented it out, everything works as
it should (but I don't get the cool Pentium trick).
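
(A tiny standalone check of the trick, in case it helps anyone; this
assumes a 32-bit long and little-endian layout, i.e. exactly the i386
targets the #if above guards, and 12345.67 is just a value I picked:)

#include <stdio.h>

union luai_Cast { double l_d; long l_l; };

int main(void) {
  volatile union luai_Cast u;
  /* 6755399441055744.0 == 2^52 + 2^51; adding it forces the rounded
     integer into the low 32 bits of the double's mantissa, which is
     where l_l reads from on a little-endian 32-bit machine */
  u.l_d = 12345.67 + 6755399441055744.0;
  printf("%ld\n", u.l_l);  /* prints 12346 */
  return 0;
}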

Maybe luaconf.h could test what LUA_NUMBER really is and enable a
'float' trick or a 'double' trick accordingly?
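
Something like the following is what I mean. (LUA_NUMBER_DOUBLE is just
a name I made up for illustration; I don't know how the Lua authors
would want to detect the number type.)

#if !defined(LUA_ANSI) && !defined(__SSE2__) && \
    (defined(__i386) || defined (_M_IX86)) && defined(LUA_NUMBER_DOUBLE)
/* the union trick is only safe when lua_Number really is a double */
union luai_Cast { double l_d; long l_l; };
#define lua_number2int(i,d) \
  { volatile union luai_Cast u; u.l_d = (d) + 6755399441055744.0; (i) = u.l_l; }
#else
/* plain cast: slower on a Pentium, but correct for any number type */
#define lua_number2int(i,d)	((i)=(int)(d))
#endif
#define lua_number2integer(i,n)	lua_number2int(i, n)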

Anyway, again thank you very much for the help!

		Hugo