• Subject: Re: Formatting numbers without precision loss
• From: Lorenzo Donati <lorenzodonatibz@...>
• Date: Tue, 20 Jun 2023 14:14:41 +0200

(Also to Thijs)

On 20/06/2023 14:03, Lars Müller wrote:
> Exponential binary notation is indeed the only way to /exactly/
> represent floats. If you're in full control of the format, it's the
> way to go for serializing floats; it's effectively just the exponent
> plus a hexdump of the mantissa. This also makes it very efficient and
> easy to implement. You could even implement this formatting yourself
> in Lua.
>
> But if your hands are tied with JSON or similar formats - or you want
> human-readability - you need to format as decimal. This can't be
> exact, but it can be precise enough for the conversion from string
> back to number to yield the exact same number.
>
> For this, 17 significant digits should suffice:
>
> "The 53-bit significand precision gives from 15 to 17 significant
> decimal digits <https://en.wikipedia.org/wiki/Significant_figures>
> precision" -
> https://en.wikipedia.org/wiki/Double-precision_floating-point_format
[snip]

> In other words, there will always be some float number whose binary
> representation is only approximated by any fixed number of decimal
> figures in a decimal representation.
>
> There was some "good enough" approximation: IIRC "%.14f" worked well
> for 64-bit double-precision IEEE 754 floats. "Well" means that for
> most (all?) possible numbers the approximation was under some epsilon.
>
> Cheers!
>
> -- Lorenzo
I also found this:

It explains what I finally seem to remember: 17 digits are /theoretically/ sufficient to round-trip a double to/from a decimal representation, but the conversion routine must use the correct IEEE 754 rounding mode.
Actually, the problem is that converting between binary and decimal floats is rather hard, and some routines are buggy. Even when they are not, unless you are in an environment that lets you access the IEEE 754 floating-point flags implemented in the FPU, you cannot be sure that the underlying routine uses them correctly.
So a format like "%.17g" is not guaranteed to produce a representation that, when read back and parsed by C's "strtod" (or whatever other custom routine is used), will reproduce the initial number.
So yes, what I remembered correctly is that "%a" is /guaranteed/ to work, because it "simply" spits out the bits of the implementation with little processing, and the conversion from string back to the float representation is trivial and exact.
-- Lorenzo