At the lowest level, the computer stores everything in terms of 1 or 0.
For example, each program in Gnuastro, and each astronomical image you take with a telescope, is actually a string of millions of these zeros and ones.
The space required to keep a zero or one is the smallest unit of storage, and is known as a bit.
However, understanding and manipulating this string of bits is extremely hard for most people.
Therefore, different standards are defined to package the bits into separate types with a fixed interpretation of the bits in each package.
To store numbers, the most basic standard/type is for integers (\(..., -2, -1, 0, 1, 2, ...\)). The common integer types are 8, 16, 32, and 64 bits wide (more bits will give larger limits). Each bit corresponds to a power of 2 and they are summed to create the final number. In the integer types, for each width there are two standards for reading the bits: signed and unsigned. In the ‘signed’ convention, one bit is reserved for the sign (stating that the integer is positive or negative). The ‘unsigned’ integers use that bit in the actual number and thus contain only positive numbers (starting from zero).
Therefore, with the same number of bits, the signed and unsigned types can represent the same number of distinct integers, but the positive limit of an unsigned type is roughly double that of its signed counterpart of the same width (at the expense of not having negative numbers).
When the context of your work does not involve negative numbers (for example counting, where negative values are not defined), it is best to use the unsigned types.
For the full numerical range of all integer types, see below.
Another standard for converting a given number of bits to numbers is the floating point standard, which can approximately store any real number with a given precision. There are two common floating point types: 32-bit and 64-bit, for single- and double-precision floating point numbers respectively. The former is sufficient for data with fewer than 8 significant decimal digits (most astronomical data), while the latter is good for fewer than 16 significant decimal digits. The representation of real numbers as bits is much more complex than that of integers. If you are interested in learning more about it, you can start with the Wikipedia article.
Practically, you can use Gnuastro's Arithmetic program to convert/change the type of an image/data-cube (see Arithmetic), or Gnuastro's Table program to convert a table column's data type (see Column arithmetic).
Conversion of a dataset's type is necessary in some contexts.
For example, the program/library that you intend to feed the data into may only accept floating point values, but you have an integer image/column.
Conversion can also be helpful when you know that your data only contain values that fit within int8 or uint16, but the dataset is currently stored in the float64 type.
The important thing to consider is that operations involving wider, floating point, or signed types can be significantly slower than those on narrower, integer, or unsigned types respectively. Besides speed, a wider type also requires much more storage space (by 4 or 8 times). Therefore, when you encounter such situations and want to store/archive/transfer the data, it is best to use the most efficient type. For example, if your dataset (image or table column) only has positive integers less than 65535, store it as an unsigned 16-bit integer for faster processing, faster transfer, and less storage space.
The short and long names for the recognized numeric data types in Gnuastro are listed below. Both short and long names can be used when you want to specify a type. For example, as a value to the common option --type (see Input/Output options), or in the information comment lines of Gnuastro text table format. The ranges listed below are inclusive.
u8
uint8
8-bit unsigned integers, range:
\([0\rm{\ to\ }2^8-1]\) or \([0\rm{\ to\ }255]\).
i8
int8
8-bit signed integers, range:
\([-2^7\rm{\ to\ }2^7-1]\) or \([-128\rm{\ to\ }127]\).
u16
uint16
16-bit unsigned integers, range:
\([0\rm{\ to\ }2^{16}-1]\) or \([0\rm{\ to\ }65535]\).
i16
int16
16-bit signed integers, range:
\([-2^{15}\rm{\ to\ }2^{15}-1]\) or
\([-32768\rm{\ to\ }32767]\).
u32
uint32
32-bit unsigned integers, range:
\([0\rm{\ to\ }2^{32}-1]\) or
\([0\rm{\ to\ }4294967295]\).
i32
int32
32-bit signed integers, range:
\([-2^{31}\rm{\ to\ }2^{31}-1]\) or
\([-2147483648\rm{\ to\ }2147483647]\).
u64
uint64
64-bit unsigned integers, range:
\([0\rm{\ to\ }2^{64}-1]\) or
\([0\rm{\ to\ }18446744073709551615]\).
i64
int64
64-bit signed integers, range:
\([-2^{63}\rm{\ to\ }2^{63}-1]\) or
\([-9223372036854775808\rm{\ to\ }9223372036854775807]\).
f32
float32
32-bit (single-precision) floating point types. The maximum (minimum is its negative) possible value is \(3.402823\times10^{38}\). Single-precision floating points can accurately represent a floating point number up to \(\sim7.2\) significant decimals. Given the heavy noise in astronomical data, this is usually more than sufficient for storing results. For more, see Printing floating point numbers.
f64
float64
64-bit (double-precision) floating point types.
The maximum (minimum is its negative) possible value is \(\sim10^{308}\).
Double-precision floating points can accurately represent a floating point number up to \(\sim15.9\) significant decimals.
This is usually good for processing (mixing) the data internally, for example, a sum of single-precision data (and later storing the result as float32).
For more, see Printing floating point numbers.
Some file formats do not recognize all types. For example, the FITS standard (see Fits) does not define
GNU Astronomy Utilities 0.23 manual, July 2024.