gawk uses a global working precision; it does not keep track of
the precision or accuracy of individual numbers. Performing an arithmetic
operation or calling a built-in function rounds the result to the current
working precision. The default working precision is 53 bits, which can be
modified using the built-in variable PREC. You can also set the
value to one of the predefined case-insensitive strings
shown in Table 15.3,
to emulate an IEEE 754 binary format.
PREC        IEEE 754 binary format
"half"      16-bit half precision.
"single"    Basic 32-bit single precision.
"double"    Basic 64-bit double precision.
"quad"      Basic 128-bit quadruple precision.
"oct"       256-bit octuple precision.

Table 15.3: Predefined precision strings for PREC
The following example illustrates the effects of changing precision on arithmetic operations:
$ gawk -M -v PREC=100 'BEGIN { x = 1.0e-400; print x + 0
>   PREC = "double"; print x + 0 }'
-| 1e-400
-| 0
Binary and decimal precisions are related approximately, according to the formula:

prec = 3.322 * dps

Here, prec denotes the binary precision (measured in bits) and dps (short for decimal places) is the number of decimal digits. We can easily calculate how many decimal digits the 53-bit significand of an IEEE double is equivalent to: 53 / 3.322, which is about 15.95. But what does 15.95 digits actually mean? It depends on whether you are concerned about how many digits you can rely on, or how many digits you need.
It is important to know how many decimal digits it takes to uniquely identify
a double-precision value (the C type double). If you want to
convert from double to decimal and back to double (e.g.,
saving a double representing an intermediate result to a file, and
later reading it back to restart the computation), then a few more decimal
digits are required: 17 digits is generally enough for a double.
It can also be important to know what decimal numbers can be uniquely
represented with a double. If you want to convert
from decimal to double and back again, 15 digits is the most that
you can get. Stated differently, you should not present
the numbers from your floating-point computations with more than 15
significant digits in them.
Conversely, it takes a precision of 332 bits to hold an approximation of the constant pi that is accurate to 100 decimal places.
You should always add some extra bits in order to avoid the confusing round-off issues that occur because numbers are stored internally in binary.