- Subject: Re: Could Lua itself become UTF8-aware?
- From: Andrew Starks <andrew@...>
- Date: Mon, 01 May 2017 19:41:51 +0000
2017-05-01 15:15 GMT+02:00 Soni L. <email@example.com>:
> But my latin-1! (why limit it to UTF-8 if you can just add support for
> pretty much every single extension of ASCII instead?)
Much as I loved ISO-8859-1, I have to admit that it quickly became
obsolete, except as input to iconv.
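To make the "input to iconv" role concrete, here is a minimal sketch of the same legacy-to-UTF-8 migration in Python (the file name and sample text are illustrative, not from the thread):

```python
# Convert ISO-8859-1 (Latin-1) text to UTF-8, the one-off migration
# that iconv -f ISO-8859-1 -t UTF-8 performs on the command line.
latin1_bytes = b"caf\xe9"               # "café" as a single Latin-1 byte 0xE9
text = latin1_bytes.decode("latin-1")   # Latin-1 maps bytes 0-255 directly
utf8_bytes = text.encode("utf-8")       # 0xE9 becomes the two bytes 0xC3 0xA9
print(utf8_bytes)
```

Note that the non-ASCII character grows from one byte to two, which is exactly why byte-offset code written for Latin-1 breaks once the data is UTF-8.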
There is one language understood by every culture that does scientific work: math. As far as I'm aware, there is no alternate character set you need to translate to or from in order to express change, vectors, or any other mathematical concept to someone from another culture.
In the same way that the language of math is understood in every corner of the world, is standard ASCII the universally accepted character set for source code, with primitive English used for keywords? If that is more true than less true, then it might be reasonable to argue that source code is better limited to ASCII, outside of string literals, rather than innovating to allow a broader range of characters, unless there is a clear and convincing reason to break this norm (if it is a norm).
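One technical point in favor of that norm: ASCII is a strict subset of UTF-8 (and of Latin-1), so ASCII-only source encodes to identical bytes under all three. A quick sketch, using Python purely to illustrate the byte-level claim:

```python
# ASCII-only source text produces the same bytes regardless of which
# of these encodings a tool assumes, so it is portable by construction.
source = "print('hello')"
assert source.encode("ascii") == source.encode("utf-8") == source.encode("latin-1")

# A non-ASCII character breaks that equivalence immediately:
word = "café"
assert word.encode("utf-8") != word.encode("latin-1")
```

That equivalence is why ASCII-restricted source never needs the iconv step discussed above, whichever encoding a given toolchain assumes.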
What problem does it solve? Is support for UTF-8 useful for automated script processing or some sort of DSL application?