lua-l archive


I'm talking about the upcoming processors that integrate more and more coprocessor features and vector instructions as extensions (grouping ranges of registers in pairs to create larger registers, with dedicated ALU/FPU units that are now increasingly parallelized across multiple processing channels). There's a trend now, with AI units and GPUs, and also a need for new kinds of applications that process huge amounts of data containing tiny bits of information that can only be detected within a large amount of noise, currently hidden/masked by limited precision.

Apple and Google are already selling their new processors; even though they are based on a 64-bit architecture, they include vector extensions and 128-bit extensions. We'll see more and more use of these types, because this is not for the common desktop or web applications we have used until now (including web apps for mobiles): those devices are massively interconnected over a faster and larger network. And the number of devices with processing capabilities is exploding; the industry finds new applications every day. We are no longer at the time of experiments with specialized applications for specific domains on specific installations; we find them now everywhere, in all sorts of objects (IoT devices have been around for a long time; their local processing may be limited, but they work within a very capable and very wide network that provides and largely extends their services and usability to more and more users). Even basic users could still use them with small Lua scripts that integrate into the compound system and will want to use the same datatype (it would not necessarily be slow).

On Wed, Aug 19, 2020 at 07:03, Sam Trenholme <> wrote:
> 128-bit quadruple precision

The thing about IEEE 128-bit quadruple precision floats is that they
only have hardware support in the POWER9, the IBM S/390 from the 1990s,
and z/Architecture systems.  We’re talking systems which cost at least
$3000 and go up very quickly from there.

It *is* possible to change the default number type in luaconf.h (the
#define LUA_NUMBER) to something else, such as IEEE binary128 or
decimal128 float, as long as one’s C compiler supports the type.  Of
course, if using a type like that, it’s probably a good idea to change
LUA_NUMBER_FMT to hold more digits (and make sure the buffers can hold
the digits).
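To make that concrete, here is a hypothetical luaconf.h sketch for an IEEE binary128 build, assuming GCC or Clang with the non-standard __float128 type and libquadmath; the macro names are real Lua configuration macros, but the values shown are illustrative, and a real port would also need the lua_number2str machinery to call quadmath_snprintf() rather than snprintf():

```c
/* luaconf.h sketch (not a drop-in patch): switch lua_Number to
   binary128, assuming a compiler that provides __float128. */
#define LUA_NUMBER          __float128

/* binary128 carries ~33-36 significant decimal digits, so the default
   "%.14g" would silently throw most of the precision away.  "Qg" is the
   libquadmath conversion specifier used by quadmath_snprintf(). */
#define LUA_NUMBER_FMT      "%.33Qg"

/* Enlarge the number-to-string buffer to hold the extra digits,
   sign, exponent, and terminator. */
#define LUAI_MAXNUMBER2STR  64
```

The other half of the job, not shown here, is auditing every place the core formats or parses a number (lua_number2str, lua_str2number, and friends) so they use the quadmath conversion routines instead of the plain C library ones.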

As it turns out, with mainstream processors, instead of having more
widespread support for 80-bit floats, ARM processors -- read, most of
the real-world processors out there in our smartphone-addicted age --
are instead increasing support for 16-bit float types.  Armv8.1-M and
ARMv8.2-A added 16-bit floats (1 sign bit, 5 exponent bits, 10 mantissa
bits), ARMv8.6-A added support for another 16-bit float format,
“BFloat16” (1 sign bit, 8 exponent bits, 7 mantissa bits).

Of course, someone *could* extend Lua to use the GNU Multiple Precision
Arithmetic Library or the like as the native number type, but Lua is not
Python, and doing so would make Lua a lot larger and a lot slower.
64-bit floats, the default, are usually good enough and keep Lua small (I
like having a full Lua-5.1-derived interpreter which fits in 118,784
bytes), and when they're not, there are solutions out there.