- Subject: Re: Slightly weird request about numbers
- From: David Given <dg@...>
- Date: Tue, 24 Jul 2007 13:35:20 +0100
> Is there a "built-in" C number type (even if it isn't proper ANSI C, and
> is some kind of gcc extension weirdness) which is big enough, or which
> is arranged in such a way, that I can easily rebuild Lua with this as my
> intrinsic representation of number, can still enjoy the fruits of
> floating point, yet still have more than 64 bits of integer accuracy
> on Windows and Linux, under both 32-bit and 64-bit OSes.
Some versions of gcc support decimal floating point, all the way up to 128
bits --- the type is _Decimal128, and the suffix for constants is DL. These
are liable to be horribly slow, though, since they're implemented in software.
However, I suspect long double will do you. On x86 it's the 80-bit x87
extended format: a 64-bit significand whose integer bit is stored explicitly
(unlike double, where the leading one is implicit), plus a separate sign bit
and a 15-bit exponent. So you should get your full 64 bits of integer
precision, in both positive and negative. I think.
> Is this easy to do with a (non-standard) build of (standard) Lua and
> libraries? I don't care about the numbers mapping directly into Intel
> registers. Performance is only very mildly important.
Yes, trivial. Simply change the LUA_NUMBER definitions in luaconf.h. Remember
that you also need to change the scan and format strings, or converting to
and from strings will go horribly wrong.
┌── ｄｇ＠ｃｏｗｌａｒｋ．ｃｏｍ ─── http://www.cowlark.com ───────────────────
│ "There does not now, nor will there ever, exist a programming language in
│ which it is the least bit hard to write bad programs." --- Flon's Axiom