Q (number format)
The Q notation is a way to specify the parameters of a binary fixed point number format. Specifically, how many bits are allocated for the integer portion, how many for the fractional portion, and whether there is a sign-bit.

For example, in Q notation, Q7.8 means that the signed fixed point numbers in this format have 7 bits for the integer part and 8 bits for the fraction part. One extra bit is implicitly added for signed numbers. Therefore, Q7.8 is a 16-bit word, with the most significant bit representing the two's complement sign bit.
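As an illustrative sketch (not part of the source), a Q7.8 value can be stored in a 16-bit two's-complement integer and converted to and from real numbers by scaling by 2⁸ = 256; the helper names here are hypothetical:

```python
# Illustrative sketch: encoding/decoding Q7.8 (signed, 1 + 7 + 8 = 16 bits).
# Helper names are hypothetical, not from any standard library.

Q7_8_FRAC_BITS = 8
Q7_8_SCALE = 1 << Q7_8_FRAC_BITS  # 256

def to_q7_8(x: float) -> int:
    """Encode a real number as a Q7.8 fixed-point integer (round to nearest)."""
    raw = round(x * Q7_8_SCALE)
    # Clamp to the 16-bit signed range [-32768, 32767].
    return max(-32768, min(32767, raw))

def from_q7_8(raw: int) -> float:
    """Decode a Q7.8 fixed-point integer back to a real number."""
    return raw / Q7_8_SCALE

# 1.5 is stored as 1.5 * 256 = 384; the raw value -64 decodes to -0.25.
assert to_q7_8(1.5) == 384
assert from_q7_8(-64) == -0.25
```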

There is an ARM variation of the Q notation that explicitly adds the sign bit to the integer part. In ARM Q notation, the above format would be called Q8.8.

A number of other notations have been used for the same purpose.

The Q notation, as defined by Texas Instruments, consists of the letter Q followed by a pair of numbers m.n, where m is the number of bits used for the integer part of the value, and n is the number of fraction bits.

By default, the notation describes signed binary fixed point format, with the unscaled integer being stored in two's complement format, used in most binary processors. As such, the first bit always gives the sign of the value (1 = negative, 0 = non-negative), and it is not counted in the m parameter. Thus, the total number w of bits used is 1 + m + n.
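The relationships above can be sketched in a small helper (hypothetical, shown only to illustrate the definitions): a signed Qm.n format uses w = 1 + m + n bits, has resolution 2⁻ⁿ, and spans the two's-complement range from −2ᵐ up to 2ᵐ − 2⁻ⁿ:

```python
# Hypothetical helper: properties of a signed TI-style Qm.n format.

def q_format_properties(m: int, n: int):
    """Return (word_size, min_value, max_value, resolution) for signed Qm.n."""
    w = 1 + m + n                  # sign bit + integer bits + fraction bits
    resolution = 2.0 ** -n         # smallest representable step
    min_value = -(2.0 ** m)        # most negative two's-complement value
    max_value = 2.0 ** m - resolution
    return w, min_value, max_value, resolution

# Q7.8: 16-bit word, values in [-128, 128 - 2**-8], step 2**-8.
w, lo, hi, step = q_format_properties(7, 8)
assert w == 16
assert lo == -128.0
assert hi == 128.0 - 2.0 ** -8
```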

For example, the specification Q3.12 describes a signed binary fixed-point number with a word size of w = 16 bits in total, comprising the sign bit, three bits for the integer part, and 12 bits for the fraction. This can be seen as a 16-bit signed (two's complement) integer that is implicitly multiplied by the scaling factor 2⁻¹².
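A minimal sketch of this interpretation (the function name is an assumption, chosen for illustration): a Q3.12 bit pattern is read as a 16-bit two's-complement integer and multiplied by 2⁻¹²:

```python
# Sketch: interpreting a 16-bit pattern as a signed Q3.12 value.

def q3_12_value(word: int) -> float:
    """Interpret a 16-bit word (0..0xFFFF) as a signed Q3.12 number."""
    # Reinterpret the unsigned 16-bit pattern as two's complement.
    raw = word - 0x10000 if word & 0x8000 else word
    # Apply the implicit scaling factor 2**-12.
    return raw * 2.0 ** -12

# 0x1000 is the raw integer 4096, so 4096 * 2**-12 = 1.0.
assert q3_12_value(0x1000) == 1.0
# 0xF000 reinterprets to the raw integer -4096, i.e. -1.0.
assert q3_12_value(0xF000) == -1.0
```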
