- Subject: Re: question about Unicode
- From: Glenn Maynard <glenn@...>
- Date: Thu, 7 Dec 2006 19:24:09 -0500
On Thu, Dec 07, 2006 at 06:03:42PM -0500, Russ Cox wrote:
> 5. Applications whose job is text processing typically find it easier
> to work with internal arrays of characters than with UTF-8
> (but they should still read and write UTF-8 externally!).
> The exact details of which data type you use to hold your
> character values is up to your application. 16-bit integer (if you
> don't care about the new Unicode points), 32-bit integer,
> and even double-precision floating point (if you use Lua)
> are all perfectly fine, with 16-bit being perhaps somewhat
> less than ideal (now that Unicode has bloated some)
> but still more efficient.
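For concreteness, the "read and write UTF-8 externally, hold code points
internally" split boils down to a decode step along these lines. This is a
hand-rolled sketch written for this mail, not anything from the Lua sources,
and it deliberately skips continuation-byte and overlong-form validation:

/* Decode one UTF-8 sequence from the front of a byte string into a
   32-bit-capable code point.  Minimal sketch: no validation of
   continuation bytes, overlong forms, or surrogates. */
#include <stddef.h>

static size_t utf8_decode(const unsigned char *s, size_t len,
                          unsigned long *cp)
{
    if (len == 0) return 0;
    if (s[0] < 0x80) { *cp = s[0]; return 1; }            /* ASCII   */
    if ((s[0] & 0xE0) == 0xC0 && len >= 2) {              /* 2 bytes */
        *cp = ((unsigned long)(s[0] & 0x1F) << 6) | (s[1] & 0x3F);
        return 2;
    }
    if ((s[0] & 0xF0) == 0xE0 && len >= 3) {              /* 3 bytes */
        *cp = ((unsigned long)(s[0] & 0x0F) << 12)
            | ((unsigned long)(s[1] & 0x3F) << 6) | (s[2] & 0x3F);
        return 3;
    }
    if ((s[0] & 0xF8) == 0xF0 && len >= 4) {              /* 4 bytes */
        *cp = ((unsigned long)(s[0] & 0x07) << 18)
            | ((unsigned long)(s[1] & 0x3F) << 12)
            | ((unsigned long)(s[2] & 0x3F) << 6) | (s[3] & 0x3F);
        return 4;
    }
    return 0;   /* truncated input or malformed lead byte */
}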
I've ported applications that use Lua to platforms with no 64-bit
floating point, so I don't like depending on lua_Number being double.
Fortunately, Lua wasn't used very pervasively in the program at
the time, and the problems were easy to work around. I'll probably
change lua_Number to float even on x86 ports in the future, to catch
"storing large integers in floats" problems quickly. (I think it
also makes stack alignment better.)
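The float configuration is mostly a handful of luaconf.h knobs. The sketch
below uses the Lua 5.1 macro names from memory (LUA_NUMBER, LUAI_UACNUMBER,
LUA_NUMBER_FMT, lua_str2number), so treat it as the shape of the edit and
check your version's luaconf.h rather than copying it verbatim. The toy
main() shows why a float lua_Number surfaces "large integer" bugs right
away: a float holds integers exactly only up to 2^24.

/* Rough sketch of a float lua_Number configuration (Lua 5.1 macro
   names, from memory; verify against your own luaconf.h). */
#include <stdio.h>
#include <stdlib.h>

#undef  LUA_NUMBER_DOUBLE               /* core must not assume double      */
#define LUA_NUMBER          float       /* was: double                      */
#define LUAI_UACNUMBER      double      /* keep double: C promotes float
                                           to double through "..." anyway   */
#define LUA_NUMBER_SCAN     "%f"        /* was: "%lf"                       */
#define LUA_NUMBER_FMT      "%.7g"      /* float carries ~7 significant
                                           digits (was "%.14g")             */
#define lua_str2number(s,p) ((LUA_NUMBER)strtod((s), (p)))

typedef LUA_NUMBER lua_Number;

int main(void)
{
    lua_Number n = (lua_Number)16777217;  /* 2^24 + 1: not representable    */
    printf("%.1f\n", (double)n);          /* prints 16777216.0, exactly the
                                             kind of silent corruption a
                                             float build catches quickly    */
    return 0;
}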
--
Glenn Maynard
- References:
- Re: question about Unicode, Roberto Ierusalimschy
- Re: question about Unicode, David Given
- Re: question about Unicode, Rici Lake
- Re: question about Unicode, Roberto Ierusalimschy
- Re: question about Unicode, Ken Smith
- Re: question about Unicode, Adrian Perez
- Re: question about Unicode, Asko Kauppi
- Re: question about Unicode, Brian Weed
- Re: question about Unicode, Glenn Maynard
- Re: question about Unicode, Russ Cox