Hello. I'm getting a compilation error with the latest Lua on MSVC 6.0 SP6 under Windows XP.
Line 338 of lmathlib.c:
static lua_Number I2d (Rand64 x) {
  return (lua_Number)(trim64(x) >> shift64_FIG) * scaleFIG;
}
gives this error:
error C2520: conversion from unsigned __int64 to double not implemented, use signed __int64
[The VC6 Processor Pack may fix this - I haven't tested it - but that pack is incompatible with the latest VC6 service pack (SP6).]
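For reference, the limitation is easy to reproduce in isolation; a standalone snippet like this (not from the Lua sources) triggers the same error on VC6:

unsigned __int64 u = 123;
double bad = (double)u;                  /* error C2520 on VC6 */
double ok  = (double)(signed __int64)u;  /* compiles, but reinterprets the MSB */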
Simply casting the whole value to signed __int64 is wrong, as reinterpreting the top bit as a sign bit puts the result in the range -0.5 to +0.5. I suppose you could then always add 0.5 to move it back into the [0, 1) range.
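A rough sketch of that rebias idea (untested; it assumes Rand64 is unsigned __int64 on this build and relies on MSVC's arithmetic right shift of signed values):

static lua_Number I2d (Rand64 x) {
  /* VC6 can convert signed __int64 to double, so reinterpret the
     64 random bits as signed */
  signed __int64 sx = (signed __int64)trim64(x);
  /* the signed result lands in [-0.5, +0.5); rebias into [0, 1) */
  return (lua_Number)(sx >> shift64_FIG) * scaleFIG + 0.5;
}

A couple of other proposed fixes: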
// this one preserves all 64 bits by masking off the MSB,
// then conditionally adding the 0.5 bias back in afterwards
static lua_Number I2d (Rand64 x) {
  signed __int64 y = x & 0x7FFFFFFFFFFFFFFFui64;  /* clear the MSB */
  lua_Number d = (signed __int64)(trim64(y) >> shift64_FIG) * scaleFIG;
  if (x & 0x8000000000000000ui64)  /* a set MSB contributes exactly 0.5 */
    d += 0.5;
  return d;
}
or, if you don't mind losing a bit of precision:
// only 63 bits of randomness: the top-most (sign / MSB) bit is lost,
// which is maybe OK as a double has only a 52-bit mantissa anyway
static lua_Number I2d (Rand64 x) {
  signed __int64 y = x & 0x7FFFFFFFFFFFFFFFui64;  /* clear the MSB */
  /* shift one bit less so the 63 remaining bits still fill [0, 1) */
  return (signed __int64)(trim64(y) >> (shift64_FIG - 1)) * scaleFIG;
}
Of course, these are both fixes specific to my case, and I do not know enough about Lua's internals to make them properly conditional or portable (a possible guard is sketched below). I am happy to test any fixes on my system, though, if anyone wants to make this old compiler officially work again!
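For what it's worth, something along these lines might work as the conditional part (a hypothetical sketch, not tested against Lua's build setup; _MSC_VER is 1200 for VC6):

/* hypothetical guard: apply the workaround only on old Microsoft
   compilers that cannot convert unsigned __int64 to double */
#if defined(_MSC_VER) && _MSC_VER <= 1200
static lua_Number I2d (Rand64 x) {
  signed __int64 y = x & 0x7FFFFFFFFFFFFFFFui64;
  lua_Number d = (signed __int64)(trim64(y) >> shift64_FIG) * scaleFIG;
  if (x & 0x8000000000000000ui64)
    d += 0.5;
  return d;
}
#else  /* conforming compilers keep the original conversion */
static lua_Number I2d (Rand64 x) {
  return (lua_Number)(trim64(x) >> shift64_FIG) * scaleFIG;
}
#endif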
-Greg Kennedy