Thank you for your answer, but maybe there are some misunderstandings:
- Long double and int128 are both included (int128 is i16, an integer with 16
bytes). Just please understand that you would get REAL support of 128-bit
integers (by "REAL" I mean also for unpack, and in lua_Number) only in
some future LUA_128BIT (currently only LUA_64BIT and LUA_32BIT are available,
as I see it, but it should not take any magic to extend this to LUA_128BIT).
What do you mean by "all IEEE number types" - do you want longer numbers than
IEEE defines?
IEEE has many number types, including vectorized numbers, which were added notably for GPUs and signal processors. It also defines alternative numeric semantics such as interval bounds (with no infinities or NaNs), fixed-point numbers (basically integers with an implicit, non-encoded scale), packed BCD...
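To make the i16 point above concrete, here is a small sketch in Python (not Lua, and purely illustrative; the function names are my own) of what packing and unpacking a 16-byte signed integer involves. It shows why a runtime whose integers are 64-bit cannot round-trip the full 128-bit range, which is exactly why "REAL" support would need a LUA_128BIT build:

```python
def pack_i16(value: int) -> bytes:
    """Pack a signed integer into 16 little-endian bytes
    (like a hypothetical 'i16' format)."""
    return value.to_bytes(16, byteorder="little", signed=True)

def unpack_i16(data: bytes) -> int:
    """Unpack 16 little-endian bytes back into a signed integer."""
    return int.from_bytes(data, byteorder="little", signed=True)

packed = pack_i16(2**100)            # fine: Python ints are arbitrary precision
assert unpack_i16(packed) == 2**100

# A 64-bit integer type could only round-trip values in [-2**63, 2**63 - 1]:
fits_in_64bit = -2**63 <= unpack_i16(packed) <= 2**63 - 1
print(fits_in_64bit)  # False: this value needs real 128-bit support
```

Packing large values is the easy half; it is the unpack side (and arithmetic on the result) that forces the wider native integer type.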
- Variable-length encoding is also included (see the N type byte ... codes).
Consider what was done in recent audio/video/photo codecs, like WebP, in other open work released by Google as part of HTTP/2, and in other open formats developed by cloud storage providers and social networks (which saved them a lot of storage, including re-encoders that preserve full accuracy for JPEG photos). What they have in common is that they no longer restrict themselves to two's-complement notation: unary representation is used for variable-length data (e.g. for Huffman decompression lookups), as well as differential encodings. And they are not restricted to byte alignment: the unit of measurement is the single bit.

This is no longer a problem even on small systems, since bit handling benefits from local caches. The most restrictive constraint is not processor speed but external storage: in memory (for embedded devices like IoT, or in shared environments like wikis with tons of users, where scripts have to run under very strict limits to keep the servers responsive for everyone), in storage (for databases and cloud servers), or in transmission (for mobile networks whose data volume is capped even though they are now much faster, but also for webservers hosting websites). Research in efficient data compression and representation has never been more active, even on large servers used by many users.
There's also the goal of preserving energy, for mobile users as well as for servers in clouds (and not just commercial clouds). The final goal is also to allow easy redeployment and scalability by letting processes be relocated. For now this is largely based on static decisions, but dynamic redeployment on demand is on the rise, and this includes transparent changes of native architecture, plus reliability through failover to backup systems with minimal downtime: the systems must be able to restart very quickly and to reconfigure themselves with reasonable defaults, then use training to converge to a stable and reliable configuration that meets the demand. For that goal, being efficiently agnostic about architectures is important: the solution must be portable with minimal effort, or automatically with minimal code changes, and the system must work even across very heterogeneous machines at all sorts of scales.