Decimal128 floating-point format
In computing, decimal128 is a decimal floating-point number format that occupies 128 bits in memory. Formally introduced in IEEE 754-2008, it is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.
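The need for exact decimal rounding is easy to see in Python, whose `decimal` module implements decimal floating-point arithmetic in the same spirit (arbitrary precision by default, not decimal128 specifically): binary floats cannot represent 0.1 exactly, while a decimal format can.

```python
from decimal import Decimal

# Binary floating point: 0.1 and 0.2 are rounded to the nearest
# representable binary fractions, so the sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False

# Decimal floating point holds these values exactly.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```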
The decimal128 format supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000×10⁻⁶¹⁴³ to ±9.999999999999999999999999999999999×10⁶¹⁴⁴. Because the significand is not normalized, most values with fewer than 34 significant digits have multiple possible representations; 1 × 10² = 0.1 × 10³ = 0.01 × 10⁴, etc. Such a set of representations of the same value is called a cohort. Zero has 12288 possible representations (24576 if both signed zeros are included, forming two different cohorts).
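The cohort idea can be observed directly with Python's `decimal` module, which likewise preserves the (coefficient, exponent) pair a value was written with rather than normalizing it:

```python
from decimal import Decimal

# Two members of the same cohort: equal in value, but stored with
# different (coefficient, exponent) pairs.
a = Decimal("100")    # coefficient 100, exponent 0
b = Decimal("1E+2")   # coefficient 1, exponent 2

print(a == b)          # True: the values compare equal
print(a.as_tuple())    # DecimalTuple(sign=0, digits=(1, 0, 0), exponent=0)
print(b.as_tuple())    # DecimalTuple(sign=0, digits=(1,), exponent=2)
```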
The IEEE 754 standard allows two alternative encodings for decimal128 values: one using a binary integer significand, the other using densely packed decimal (DPD).
The standard does not specify how to signal which encoding is used, for instance when decimal128 values are communicated between systems.
Both alternatives provide exactly the same set of representable numbers: 34 digits of significand and 3 × 2¹² = 12288 possible exponent values.
In both cases, the most significant 4 bits of the significand (which actually take only 10 possible values, for the leading digits 0–9) are combined with the most significant 2 bits of the exponent (3 possible values) to use 30 of the 32 possible values of the 5-bit combination field. The remaining two combinations encode infinities and NaNs.
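A minimal sketch of this decode step, following the IEEE 754-2008 rules for the 5 most significant combination-field bits (the function name is mine, not from the standard):

```python
def decode_combo5(g):
    """Decode the 5 most significant combination-field bits (0..31).

    Returns ('inf',), ('nan',) or ('finite', exp_msb2, msd), where
    exp_msb2 holds the top two exponent bits (0..2) and msd the leading
    significand digit (0..9)."""
    if g >> 3 == 0b11:                  # combination starts with 11
        if (g >> 1) & 0b11 == 0b11:     # 1111x: infinity or NaN
            return ('nan',) if g & 1 else ('inf',)
        # 11ab c: exponent MSBs = ab, leading digit = 8 + c (i.e. 8 or 9)
        return ('finite', (g >> 1) & 0b11, 8 + (g & 1))
    # ab cde: exponent MSBs = ab, leading digit = cde (0..7)
    return ('finite', g >> 3, g & 0b111)

# 30 of the 32 patterns are finite; only 11110 and 11111 are special.
finite = [decode_combo5(g) for g in range(32)
          if decode_combo5(g)[0] == 'finite']
print(len(finite))  # 30
```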
In the case of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single repeated byte value.
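A sketch of that single-byte trick, assuming the most significant byte is stored first: byte 0x78 = 0b01111000 begins with sign bit 0 followed by the Infinity combination 11110, and since every remaining bit of an Infinity encoding is ignored, a buffer filled with that one byte is a valid +Infinity (0x7C works the same way for a quiet NaN).

```python
# 0x78 = 0b01111000: sign bit 0, then combination bits 11110 (Infinity).
# All remaining bits of an Infinity encoding are ignored, so memset-style
# filling with this one byte yields a valid +Infinity (assuming the
# most-significant byte comes first).
inf_pattern = bytes([0x78]) * 16          # 16 bytes = 128 bits

top5 = (inf_pattern[0] >> 2) & 0b11111    # the 5 bits after the sign bit
print(bin(top5))  # 0b11110 -> Infinity
```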
In the binary integer significand encoding, the significand is an unsigned binary integer from 0 to 10³⁴ − 1 = 9999999999999999999999999999999999 = 1ED09BEAD87C0378D8E63FFFFFFFF₁₆ = 011110110100001001101111101010110110000111110000000011011110001101100011100110001111111111111111111111111111111111₂. The encoding can represent binary significands up to 10 × 2¹¹⁰ − 1 = 12980742146337069071326240823050239, but values larger than 10³⁴ − 1 are illegal (and the standard requires implementations to treat them as 0 if encountered on input).
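These bounds are easy to check with Python's arbitrary-precision integers:

```python
# Largest legal significand: 34 decimal nines.
max_legal = 10**34 - 1
print(len(str(max_legal)))   # 34 digits
print(hex(max_legal))        # 0x1ed09bead87c0378d8e63ffffffff

# Largest value the bit pattern could hold (leading-digit bits 1001
# followed by 110 one bits); anything above max_legal is illegal and
# is treated as zero on input.
max_encodable = 10 * 2**110 - 1
print(max_encodable > max_legal)  # True
```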