lua-users home
lua-l archive


On Tue, Aug 17, 2021 at 1:58 PM Roberto Ierusalimschy
<> wrote:

> C does that mainly because that is what most (all?) CPUs do.

This is how it was standardized:

(begin quote)

        Currently, the C standard (C89) [...]
        differs from Fortran, which requires that the result
        must definitely be truncated toward zero.  Fortran-
        like behavior is permitted by C89, but it is not


        However, the
        argument that Fortran programmers are unpleasantly
        surprised by this aspect of C and that there would
        be negligible impact on code efficiency was accepted
        by WG14, who agreed to require Fortran-like behavior
        in C9x.


Change in 6.3.5 Multiplicative operators, Semantics,


                When integers are divided, the result of the /
                operator is the algebraic quotient with any
                fractional part discarded.  [FOOTNOTE: This is
                often called ``truncation toward zero''.]
        (Note that the relation between % and / is preserved.)


In the Rationale document, replace the first paragraph
        of 6.3.5 Multiplicative operators with:
                In C89, division of integers involving
                negative operands could round upward or
                downward, in an implementation-defined manner;
                the intent was to avoid incurring overhead in
                run-time code to check for special cases and
                enforce specific behavior.  However, in Fortran
                the result would always truncate toward zero,
                and the overhead seems to be acceptable to the
                numeric programming community.  Therefore, C9x
                now requires similar behavior, which should
                facilitate porting of code from Fortran to C.
                The table in subsection of this
                document illustrates the required semantics.

(end quote)

I do not know for sure how the Fortran rule came about, but one could
look at the applications of the truncating rule:

3 / 2 = 1
(-3) / 2 = -1
-(3 / 2) = -1
3 / (-2) = -1
(-3) / (-2) = 1

And the flooring rule:

3 // 2 = 1
(-3) // 2 = -2
-(3 // 2) = -1
3 // (-2) = -2
(-3) // (-2) = 1

And finally the rules we learn in school:

(-a) / (b) = -(a/b)
(a) / (-b) = -(a/b)
(-a) / (-b) = (a/b)

Also of note is that in Fortran, conversion from floating point to
fixed point (integer) likewise truncates toward zero. Thus floating
point division followed by conversion to fixed point is consistent
with fixed point division; a different rule for fixed point division
would not have this property.

> CPUs do
> that because that division is the only one that respects the law
> (-a)/b == a/(-b) == -(a/b), which simplifies the hardware. (It can
> always divide positive integers and apply the proper signal to the
> result.)

Fortran was made for IBM 704, whose fixed point division had a
different rule, viz., "the sign of the remainder always agrees with
the sign of the dividend" [1], which would result in:

 3 div  2 =  1 (rem  1)
-3 div  2 = -1 (rem -1)
 3 div -2 = -1 (rem  1)
-3 div -2 =  1 (rem -1)

This latter rule seems also to have been that of the IBM 701, IBM's
first computer in the modern sense. Even if some other
contemporaneous computers had different rules, I do not think we can
say that the Fortran rule simply followed the CPU rule; it looks like
a deliberate choice. It is more likely that the modern CPU rule
followed the Fortran rule.


[1] IBM 704 Electronic Data-processing Machine. Manual of Operation.