15.4.1 Setting the Working Precision

gawk uses a global working precision; it does not keep track of the precision or accuracy of individual numbers. Performing an arithmetic operation or calling a built-in function rounds the result to the current working precision. The default working precision is 53 bits, which can be modified using the built-in variable PREC. You can also set the value to one of the predefined case-insensitive strings shown in Table 15.3, to emulate an IEEE 754 binary format.

PREC       IEEE 754 binary format
--------------------------------------------------
"half"     16-bit half-precision.
"single"   Basic 32-bit single precision.
"double"   Basic 64-bit double precision.
"quad"     Basic 128-bit quadruple precision.
"oct"      256-bit octuple precision.

Table 15.3: Predefined precision strings for PREC

The following example illustrates the effects of changing precision on arithmetic operations:

     $ gawk -M -v PREC=100 'BEGIN { x = 1.0e-400; print x + 0
     >   PREC = "double"; print x + 0 }'
     -| 1e-400
     -| 0

Binary and decimal precisions are related approximately according to the formula:

prec = 3.322 * dps

Here, prec denotes the binary precision (measured in bits) and dps (short for decimal places) is the number of decimal digits. We can easily calculate how many decimal digits the 53-bit significand of an IEEE double is equivalent to: 53 / 3.322, which is about 15.95. But what does 15.95 digits actually mean? It depends on whether you are concerned with how many digits you can rely on, or with how many digits you need.
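
As a quick check, we can apply the formula to the significand widths of the IEEE 754 formats from Table 15.3 (11, 24, 53, 113, and 237 bits, respectively, counting the implicit leading bit):

     $ gawk 'BEGIN {
     >     n = split("half single double quad oct", name)
     >     split("11 24 53 113 237", bits)
     >     for (i = 1; i <= n; i++)
     >         printf("%-6s %3d bits ~= %5.2f decimal digits\n",
     >                name[i], bits[i], bits[i] / 3.322)
     > }'
     -| half    11 bits ~=  3.31 decimal digits
     -| single  24 bits ~=  7.22 decimal digits
     -| double  53 bits ~= 15.95 decimal digits
     -| quad   113 bits ~= 34.02 decimal digits
     -| oct    237 bits ~= 71.34 decimal digits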

It is important to know how many bits it takes to uniquely identify a double-precision value (the C type double). If you want to convert from double to decimal and back to double (e.g., saving a double representing an intermediate result to a file, and later reading it back to restart the computation), then a few more decimal digits are required. 17 digits is generally enough for a double.
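
You can check this round-trip property from within gawk itself: sprintf() converts the double to decimal, and adding zero converts the string back to a double. (This is ordinary double-precision arithmetic, so -M is not needed.)

     $ gawk 'BEGIN { x = 1 / 3
     >   y = sprintf("%.15g", x) + 0   # 15 digits: round trip fails
     >   z = sprintf("%.17g", x) + 0   # 17 digits: round trip succeeds
     >   printf("%d %d\n", x == y, x == z) }'
     -| 0 1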

It can also be important to know what decimal numbers can be uniquely represented with a double. If you want to convert from decimal to double and back again, 15 digits is the most that you can get. Stated differently, you should not present the numbers from your floating-point computations with more than 15 significant digits.
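
The effect is easy to see with the decimal constant 0.1, which has no exact binary representation: at 15 significant digits you get back the decimal number you started with, while 17 digits expose the underlying binary approximation.

     $ gawk 'BEGIN { printf("%.15g\n%.17g\n", 0.1, 0.1) }'
     -| 0.1
     -| 0.10000000000000001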

Conversely, by the same formula, it takes a precision of 3.322 * 100, or about 332 bits, to hold an approximation of the constant pi that is accurate to 100 decimal places.
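
To see the difference precision makes, you might compare the default 53 bits with PREC set to 332, computing pi with the usual awk idiom atan2(0, -1). (The output shown assumes an MPFR-enabled gawk; only 25 of the 100 digits are printed here.)

     $ gawk -M 'BEGIN { printf("%.25f\n", atan2(0, -1)) }'
     -| 3.1415926535897931159979635
     $ gawk -M -v PREC=332 'BEGIN { printf("%.25f\n", atan2(0, -1)) }'
     -| 3.1415926535897932384626434

At the default precision, only the first 16 or so digits are correct; at 332 bits, the result matches pi through all of the digits shown (and through roughly 100 decimal places in total).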

You should always add some extra bits in order to avoid the confusing round-off issues that occur because numbers are stored internally in binary.