
CAUTION: Never depend on the exactness of floating-point arithmetic, even for apparently simple expressions!

Can arbitrary precision arithmetic give exact results? There are no easy answers. The standard rules of algebra often do not apply when using floating-point arithmetic. Among other things, the distributive and associative laws do not hold completely, and order of operation may be important for your computation. Rounding error, cumulative precision loss and underflow are often troublesome.
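For instance, associativity can fail even in ordinary double precision arithmetic. This sketch uses plain `awk` (no `-M` needed, since the effect is visible at hardware precision); the grouping of the additions alone changes the outcome:

```shell
# Double precision: (0.1 + 0.2) + 0.3 and 0.1 + (0.2 + 0.3) round
# differently, so the two groupings compare as unequal and this prints 0
awk 'BEGIN { print ((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)) }'
```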

When `gawk` tests the expressions ‘`0.1 + 12.2`’ and ‘`12.3`’ for equality using the machine double precision arithmetic, it decides that they are not equal! (See Floating-point Programming.) You can get the result you want by increasing the precision; 56 bits in this case will get the job done:

$ gawk -M -v PREC=56 'BEGIN { print (0.1 + 12.2 == 12.3) }'
-| 1

If adding more bits is good, perhaps adding even more bits of precision is better? Here is what happens if we use an even larger value of `PREC`:

$ gawk -M -v PREC=201 'BEGIN { print (0.1 + 12.2 == 12.3) }'
-| 0

This is not a bug in `gawk` or in the MPFR library.
It is easy to forget that the finite number of bits used to store the value
is often just an approximation after proper rounding.
The test for equality succeeds if and only if *all* bits in the two operands
are exactly the same. Since this is not necessarily true after floating-point
computations with a particular precision and effective rounding rule,
a straight test for equality may not work.

So, don’t assume that floating-point values can be compared for equality.
You should also exercise caution when using other forms of comparisons.
The standard way to compare floating-point numbers is to decide
how much error (or *tolerance*) you will allow in a comparison and
check to see if one value is within this error range of the other.
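A minimal sketch of that idea in awk. The function name `approx_equal` and the tolerance `1e-9` are illustrative choices for this example, not part of any standard library:

```shell
# Compare two floating-point values within a caller-chosen tolerance (eps)
# instead of testing for bit-exact equality
awk 'function approx_equal(a, b, eps) {
    return (a > b ? a - b : b - a) <= eps   # |a - b| <= eps
}
BEGIN { print approx_equal(0.1 + 12.2, 12.3, 1e-9) }'
```

Here the double precision sum ‘`0.1 + 12.2`’ differs from ‘`12.3`’ by only a few units in the last place, well inside the tolerance, so the comparison succeeds and prints 1.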

In applications where 15 or fewer decimal places suffice,
hardware double precision arithmetic can be adequate, and is usually much faster.
But you do need to keep in mind that every floating-point operation
can suffer a new rounding error with catastrophic consequences as illustrated
by our earlier attempt to compute the value of the constant *pi*
(see Floating-point Programming).
Extra precision can greatly enhance the stability and the accuracy
of your computation in such cases.

Repeated addition is not necessarily equivalent to multiplication in floating-point arithmetic. In the example in Floating-point Programming:

$ gawk 'BEGIN {
>   for (d = 1.1; d <= 1.5; d += 0.1)    # loop five times (?)
>       i++
>   print i
> }'
-| 4

you may or may not succeed in getting the correct result by choosing
an arbitrarily large value for `PREC`. Reformulation of
the problem at hand is often the correct approach in such situations.
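One common reformulation, shown here as an illustrative sketch rather than the manual's prescribed fix, is to drive the loop with an integer counter and derive the fractional value from it, so that no rounding error accumulates in the loop variable:

```shell
# Integer loop counter: i takes exactly the values 11..15, so the body
# runs five times; d is recomputed from i on each iteration
awk 'BEGIN {
    for (i = 11; i <= 15; i++) {
        d = i / 10
        n++
    }
    print n
}'
```

Because the loop condition now compares exact integers, the body reliably executes five times and prints 5, at any precision.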
