
The sign of zero is significant, and important, because it records the
creation of a value that is too small to represent, but that came from
either the negative or the positive axis. Such fine distinctions are
essential for proper handling of *branch cuts* in complex arithmetic
(see Complex Arithmetic).

The key point about signed zeros is that in comparisons, their sign
does not matter: `0.0 == -0.0` must *always* evaluate to `1` (true).
However, they are not *the same number*, and `-0.0` in C code stands
for a negative zero.