Ones' complement
| Bits | Unsigned value | Ones' complement value |
|---|---|---|
| 000 | 0 | 0 |
| 001 | 1 | 1 |
| 010 | 2 | 2 |
| 011 | 3 | 3 |
| 100 | 4 | −3 |
| 101 | 5 | −2 |
| 110 | 6 | −1 |
| 111 | 7 | −0 |
| Bits | Unsigned value | Ones' complement value |
|---|---|---|
| 0000 0000 | 0 | 0 |
| 0000 0001 | 1 | 1 |
| 0000 0010 | 2 | 2 |
| 0111 1110 | 126 | 126 |
| 0111 1111 | 127 | 127 |
| 1000 0000 | 128 | −127 |
| 1000 0001 | 129 | −126 |
| 1111 1101 | 253 | −2 |
| 1111 1110 | 254 | −1 |
| 1111 1111 | 255 | −0 |

The ones' complement of a binary number is the value obtained by inverting (flipping) all the bits in the binary representation of the number. The name "ones' complement"[1] refers to the fact that such an inverted value, if added to the original, always produces an "all ones" number (the term "complement" refers to such pairs of mutually additive inverse numbers, here with respect to the all-ones base number). This mathematical operation is primarily of interest in computer science, where it has varying effects depending on how a specific computer represents numbers.
A ones' complement system or ones' complement arithmetic is a system in which negative numbers are represented by the inverse of the binary representations of their corresponding positive numbers. In such a system, a number is negated (converted from positive to negative or vice versa) by computing its ones' complement. An N-bit ones' complement numeral system can only represent integers in the range −(2^{N−1} − 1) to 2^{N−1} − 1, while two's complement can express −2^{N−1} to 2^{N−1} − 1. It is one of three common representations for negative integers in binary computers, along with two's complement and sign-magnitude.
The ones' complement binary numeral system is characterized by the bit complement of any integer value being the arithmetic negative of the value. That is, inverting all of the bits of a number (the logical complement) produces the same result as subtracting the value from 0.
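As a concrete illustration, here is a minimal C sketch (assuming an 8-bit word; the function name `ones_negate` is illustrative, not from any library) that negates a value by inverting its bits within the word:

```c
#include <stdint.h>
#include <stdio.h>

/* Ones'-complement negation in an 8-bit word: invert every bit.
   The 0xFFu mask keeps the result inside the 8-bit field. */
static uint8_t ones_negate(uint8_t x)
{
    return (uint8_t)(~x & 0xFFu);
}

int main(void)
{
    uint8_t plus3  = 0x03;                 /* 0000 0011 = +3 */
    uint8_t minus3 = ones_negate(plus3);   /* 1111 1100 = -3 */
    printf("+3 = 0x%02X, -3 = 0x%02X\n", (unsigned)plus3, (unsigned)minus3);
    /* Negating twice restores the original bit pattern. */
    printf("back: 0x%02X\n", (unsigned)ones_negate(minus3));
    return 0;
}
```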
Many early computers, e.g., the UNIVAC 1101, CDC 160, CDC 1604, CDC 6600, LINC, DEC PDP-1, UNIVAC 1107, and their successors, used ones' complement arithmetic. Successors of the CDC 6600 continued to use ones' complement arithmetic until the late 1980s along with the descendants of the UNIVAC 1107 (the UNIVAC 1100/2200 series), but all modern computers use two's complement.
Number representation
Positive numbers use the same simple binary representation as in two's complement and sign-magnitude. Negative values are the bit complement of the corresponding positive value. The largest positive value has the sign (high-order) bit off (0) and all other bits on (1). The lowest negative value has the sign bit set to 1 and all other bits set to 0. The table below shows all possible values in a four-bit system, from −7 to +7.
| Magnitude | Positive (+) | Negative (−) |
|---|---|---|
| 0 | 0000 | 1111 |
| 1 | 0001 | 1110 |
| 2 | 0010 | 1101 |
| 3 | 0011 | 1100 |
| 4 | 0100 | 1011 |
| 5 | 0101 | 1010 |
| 6 | 0110 | 1001 |
| 7 | 0111 | 1000 |

Both +0 (0000) and −0 (1111) return TRUE when tested for zero and FALSE when tested for non-zero.
Basics
Adding two values is straightforward. Simply align the values on the least significant bit and add, propagating any carry to the bit one position left. If the carry extends past the end of the word it is said to have "wrapped around", a condition called an "end-around carry". When this occurs, the bit must be added back in at the right-most bit. This phenomenon does not occur in two's complement arithmetic.
      0001 0110    22
    + 0000 0011     3
    ===========  ====
      0001 1001    25
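The same rule can be written as a short C sketch, assuming an 8-bit word; `ones_add` is an illustrative name, not a standard routine:

```c
#include <stdint.h>
#include <stdio.h>

/* Ones'-complement addition in an 8-bit word: add as unsigned, then
   fold any carry out of bit 7 back into the least significant bit
   (the end-around carry). */
static uint8_t ones_add(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + (unsigned)b;
    if (sum > 0xFFu)                 /* carry wrapped past the end of the word */
        sum = (sum & 0xFFu) + 1u;    /* add it back at the right-most bit */
    return (uint8_t)(sum & 0xFFu);
}

int main(void)
{
    printf("0x%02X\n", (unsigned)ones_add(0x16, 0x03)); /* 22 + 3    -> 0x19 (25) */
    printf("0x%02X\n", (unsigned)ones_add(0x16, 0xFC)); /* 22 + (-3) -> 0x13 (19) */
    return 0;
}
```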
Subtraction is similar, except that borrows, rather than carries, are propagated to the left. If the borrow extends past the end of the word it is said to have "wrapped around", a condition called an "end-around borrow". When this occurs, the bit must be subtracted from the right-most bit. This phenomenon does not occur in two's complement arithmetic.
      0000 0110     6
    − 0001 0011    19
    ===========  ====
    1 1111 0011   −12   — An end-around borrow is produced, and the sign bit of the intermediate result is 1.
    − 0000 0001     1   — Subtract the end-around borrow from the result.
    ===========  ====
      1111 0010   −13   — The correct result (6 − 19 = −13)
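A corresponding C sketch for subtraction with an end-around borrow, under the same 8-bit assumption (`ones_sub` is an illustrative name):

```c
#include <stdint.h>
#include <stdio.h>

/* Ones'-complement subtraction in an 8-bit word: subtract as unsigned,
   and if a borrow ran past the end of the word (a < b as unsigned
   patterns), subtract it back from the least significant bit. */
static uint8_t ones_sub(uint8_t a, uint8_t b)
{
    uint8_t diff = (uint8_t)(a - b);   /* wraps modulo 256 */
    if (a < b)                         /* end-around borrow produced */
        diff = (uint8_t)(diff - 1u);
    return diff;
}

int main(void)
{
    printf("0x%02X\n", (unsigned)ones_sub(0x06, 0x13)); /* 6 - 19 -> 0xF2 (-13) */
    return 0;
}
```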
It is easy to demonstrate that the bit complement of a positive value is the negative of that value. The computation 19 + 3 produces the same result as 19 − (−3).
Add 3 to 19.
      0001 0011    19
    + 0000 0011     3
    ===========  ====
      0001 0110    22
Subtract −3 from 19.
      0001 0011    19
    − 1111 1100    −3
    ===========  ====
    1 0001 0111    23   — An end-around borrow is produced.
    − 0000 0001     1   — Subtract the end-around borrow from the result.
    ===========  ====
      0001 0110    22   — The correct result (19 − (−3) = 22).
Negative zero
Negative zero is the condition where all bits in a signed word are 1. This follows the ones' complement rules that a value is negative when the left-most bit is 1, and that a negative number is the bit complement of the number's magnitude. The value also behaves as zero when computing. Adding or subtracting negative zero to/from another value produces the original value.
Adding negative zero:
      0001 0110    22
    + 1111 1111    −0
    ===========  ====
    1 0001 0101    21   An end-around carry is produced.
    + 0000 0001     1
    ===========  ====
      0001 0110    22   The correct result (22 + (−0) = 22)
Subtracting negative zero:
      0001 0110    22
    − 1111 1111    −0
    ===========  ====
    1 0001 0111    23   An end-around borrow is produced.
    − 0000 0001     1
    ===========  ====
      0001 0110    22   The correct result (22 − (−0) = 22)
Negative zero is easily produced in a ones' complement adder. Simply add the positive and negative of the same magnitude.
      0001 0110    22
    + 1110 1001   −22
    ===========  ====
      1111 1111    −0   Negative zero.
Although the math always produces the correct results, a side effect of negative zero is that software must explicitly test for it: a zero result may appear as either the all-zeros or the all-ones bit pattern.
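In practice this means a zero test has to accept both bit patterns; a minimal C sketch, again assuming an 8-bit word and an illustrative helper name:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* In 8-bit ones' complement, both 0x00 (+0) and 0xFF (-0) mean zero,
   so a correct zero test must accept either pattern. */
static bool is_zero(uint8_t x)
{
    return x == 0x00u || x == 0xFFu;
}

int main(void)
{
    printf("%d %d %d\n", is_zero(0x00), is_zero(0xFF), is_zero(0x16)); /* 1 1 0 */
    return 0;
}
```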
Avoiding negative zero
The generation of negative zero becomes a non-issue if addition is achieved with a complementing subtractor. The first operand is passed to the subtractor unmodified, the second operand is complemented, and the subtraction generates the correct result, avoiding negative zero. The previous example added 22 and −22 and produced −0.
      0001 0110    22            0001 0110    22
    + 1110 1001   −22    but   − 0001 0110    22
    ===========  ====          ===========  ====
      1111 1111    −0            0000 0000     0

    and

      1110 1001   −22            1110 1001   −22
    + 0001 0110    22    but   − 1110 1001   −22
    ===========  ====          ===========  ====
      1111 1111    −0            0000 0000     0
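A C sketch of addition built on a complementing subtractor, under the same 8-bit assumption (the helper names are illustrative); adding 22 and −22 now yields +0 instead of −0:

```c
#include <stdint.h>
#include <stdio.h>

/* Subtract with end-around borrow (8-bit ones' complement). */
static uint8_t ones_sub(uint8_t a, uint8_t b)
{
    uint8_t diff = (uint8_t)(a - b);
    if (a < b)                         /* end-around borrow */
        diff = (uint8_t)(diff - 1u);
    return diff;
}

/* Addition through a complementing subtractor: complement the second
   operand and subtract it, so x + (-x) produces +0 rather than -0. */
static uint8_t ones_add_via_sub(uint8_t a, uint8_t b)
{
    return ones_sub(a, (uint8_t)(~b));
}

int main(void)
{
    printf("0x%02X\n", (unsigned)ones_add_via_sub(0x16, 0xE9)); /* 22 + (-22) -> 0x00 */
    return 0;
}
```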
"Corner cases" arise when one or both operands are zero and/or negative zero.
      0001 0010    18             0001 0010    18
    − 0000 0000     0           − 1111 1111    −0
    ===========  ====           ===========  ====
      0001 0010    18         1   0001 0011    19
                                − 0000 0001     1
                                ===========  ====
                                  0001 0010    18
Subtracting +0 is trivial (as shown above). If the second operand is negative zero it is inverted and the original value of the first operand is the result. Subtracting −0 is also trivial. The result can be only one of two cases. In case 1, operand 1 is −0 so the result is produced simply by subtracting 1 from 1 at every bit position. In case 2, the subtraction will generate a value that is 1 larger than operand 1 and an end-around borrow. Completing the borrow generates the same value as operand 1.
The next example shows what happens when both operands are plus or minus zero:
      0000 0000     0             0000 0000     0             1111 1111    −0             1111 1111    −0
    + 0000 0000     0           + 1111 1111    −0           + 0000 0000     0           + 1111 1111    −0
    ===========  ====           ===========  ====           ===========  ====           ===========  ====
      0000 0000     0             1111 1111    −0             1111 1111    −0         1   1111 1110    −1
                                                                                        + 0000 0001     1
                                                                                        ==================
                                                                                          1111 1111    −0
      0000 0000     0             0000 0000     0             1111 1111    −0             1111 1111    −0
    − 1111 1111    −0           − 0000 0000     0           − 1111 1111    −0           − 0000 0000     0
    ===========  ====           ===========  ====           ===========  ====           ===========  ====
    1 0000 0001     1             0000 0000     0             0000 0000     0             1111 1111    −0
    − 0000 0001     1
    ===========  ====
      0000 0000     0
This example shows that of the four possible conditions when adding only ±0, an adder will produce −0 in three of them. A complementing subtractor will produce −0 only when the first operand is −0 and the second is 0.
References
1. Knuth, Donald E. "4.1. Positional Number Systems". The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd ed.): "Detail-oriented readers and copy editors should notice the position of the apostrophe in terms like 'two's complement' and 'ones' complement': A two's complement number is complemented with respect to a single power of 2, while a ones' complement number is complemented with respect to a long sequence of 1s."
Fundamentals
Definition
Ones' complement is a method for representing signed integers in binary systems, where the negative counterpart of a positive number is obtained by applying a bitwise NOT operation—also known as bitwise inversion or complement—to its binary representation, flipping each bit from 0 to 1 and vice versa.[2][6] This approach contrasts with other signed representations by directly using the inversion to denote negation, allowing the most significant bit to implicitly indicate the sign (0 for positive or zero, 1 for negative) within the fixed-width bit field.[7]

Historically, ones' complement served as one of the earliest schemes for handling signed numbers in computer arithmetic, enabling efficient implementation of operations like subtraction through addition of complemented values, as seen in pioneering machines from the mid-20th century.[8] It was employed in systems such as the DEC PDP-1[9] and CDC 6600,[10] where it supported binary fixed-width representations without relying on explicit sign-magnitude formatting for all calculations. This representation was particularly suited to the hardware constraints of early computers, providing a straightforward way to encode both positive and negative values in n-bit words.[11]

A basic example illustrates the process: in a 4-bit representation, the positive integer +3 is encoded as 0011 in binary, and applying the bitwise NOT yields 1100, which represents -3.[12] Understanding ones' complement is a prerequisite for grasping binary fixed-width representations, such as those used in n-bit computer words, where the full range of values from -(2^{n-1} - 1) to +(2^{n-1} - 1) can be expressed, excluding the negative zero issue addressed elsewhere.[1]

Bitwise Operation
The bitwise NOT operation, commonly referred to as the ones' complement operation, forms the core of ones' complement by inverting every bit in a binary number's fixed-width representation. This unary operation processes the entire word length, transforming each bit independently without regard to its positional value.[13] To perform the bitwise NOT, begin with the binary form of the number and systematically flip each bit: a 0 becomes 1, and a 1 becomes 0, applied to all positions from the most significant bit to the least significant bit. Mathematically, for a bit b_i at position i, the complemented bit is given by 1 - b_i, written ¬b_i in bitwise NOT notation. This inversion ensures the operation is deterministic and reversible within the same word size.[14]

The fixed-width requirement is essential, as leading zeros in positive numbers must also be inverted to maintain the full bit length and avoid truncation errors in hardware implementations. For instance, in an 8-bit word, the positive value 5 is represented as 00000101; applying bitwise NOT inverts all bits to 11111010, which denotes -5 in ones' complement. This example illustrates how the operation preserves the word size while achieving bit-level negation.[13][15]
Number Representation
Positive and Zero Values
In ones' complement representation, positive integers and zero are encoded identically to their unsigned binary counterparts, with the most significant bit (MSB) serving as the sign bit set to 0 to indicate non-negativity. This direct mapping ensures that the bit patterns for these values remain unchanged from standard binary notation, preserving simplicity in their storage and interpretation.[16][17] For an n-bit ones' complement system, the range of representable non-negative values spans from 0 to 2^{n-1} - 1, allowing half of the total bit patterns to denote positives and zero while reserving the other half for negatives. This symmetric allocation (excluding the dual zero representations) provides a balanced capacity for signed integers compared to unsigned systems.[18][19]

To illustrate, consider a 4-bit ones' complement system: the value +0 is encoded as 0000, +1 as 0001, +3 as 0011, and the maximum positive +7 as 0111. These patterns mirror unsigned binary exactly, with the MSB consistently at 0. Zero specifically uses the all-zero bit pattern, termed positive zero, which contrasts with the all-ones pattern representing negative zero—a unique feature of ones' complement that arises from bitwise inversion for negatives but does not affect positive encodings.[20][21][22]
Negative Values
In ones' complement representation, negative integers are encoded by performing a bitwise inversion on the binary representation of their absolute (positive) value. This involves flipping every bit—changing each 0 to 1 and each 1 to 0—across the fixed number of bits used for the representation.[11][23] The resulting bit pattern serves as the encoding for the negative number, building directly on the standard binary encoding of the corresponding positive value.[22] For example, in a 4-bit system, the positive value +1 is represented as 0001 in binary. Inverting all bits yields 1110, which represents -1. Similarly, +4 is 0100, and its inversion is 1011, representing -4. These examples illustrate how the complemented form preserves the structure of binary while denoting negativity through the inversion process.[22][11]
The range of representable negative values in an n-bit ones' complement system spans from -(2^{n-1} - 1) to -1, providing symmetric magnitude coverage relative to the positive range, with the all-1s pattern separately representing negative zero (distinct in bit pattern from positive zero but equal in value). For a 4-bit system, this corresponds to -7 through -1.[11] In an n-bit encoding, the most negative value, -(2^{n-1} - 1), such as -7 in 4 bits (1000), arises from inverting the largest positive value (0111).[24]
The most significant bit (MSB) in ones' complement is always 1 for negative values, serving as a sign indicator, but unlike pure sign-magnitude systems, it is not isolated; the entire bit pattern, including the MSB, contributes integrally to the complemented magnitude calculation. To decode a negative representation, one inverts the bits back to obtain the positive magnitude and then applies the negative sign.[11][25] This holistic bit involvement distinguishes ones' complement from other signed representations.[24]
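A minimal C sketch of this decoding rule (the width parameter and the name `ones_decode` are illustrative assumptions, valid for widths of 1 to 31 bits):

```c
#include <stdio.h>

/* Decode an n-bit ones'-complement bit pattern into a native int.
   If the sign bit is 0 the pattern is the value itself; otherwise the
   value is minus the complement of the pattern within the n-bit mask.
   Assumes 1 <= n <= 31. */
static int ones_decode(unsigned pattern, unsigned n)
{
    unsigned mask = (1u << n) - 1u;
    pattern &= mask;
    if (pattern & (1u << (n - 1u)))        /* sign bit set: negative */
        return -(int)(~pattern & mask);    /* the -0 pattern decodes to 0 */
    return (int)pattern;
}

int main(void)
{
    printf("%d %d %d %d\n",
           ones_decode(0x7, 4),    /* 0111 -> +7 */
           ones_decode(0x8, 4),    /* 1000 -> -7 */
           ones_decode(0xF, 4),    /* 1111 -> -0, printed as 0 */
           ones_decode(0x0, 4));   /* 0000 -> +0 */
    return 0;
}
```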
Arithmetic Operations
Addition
In ones' complement arithmetic, addition of two operands proceeds via standard binary addition, treating the numbers as unsigned values regardless of their signs. The bits are added column by column from the least significant bit (LSB) to the most significant bit (MSB), propagating any carries as in binary addition. If a carry is generated out of the MSB (indicating overflow in the fixed-width representation), this carry bit is not discarded; instead, it is added back to the LSB of the sum in a process known as the end-around carry. This adjustment ensures the result correctly represents the sum modulo 2^n - 1, where n is the number of bits, preserving the arithmetic properties of the representation.[26]

Consider a 5-bit example adding +7 (00111) and +5 (00101). The binary addition yields 01100 with no carry out from the MSB, so no end-around carry is needed, resulting in +12 (01100). Now, adding -1 (11110) and -2 (11101): the initial sum is 11011 after discarding the carry out of 1 from the MSB, but applying the end-around carry adds 1 to the LSB, yielding 11100, which represents -3. For mixed signs, such as +1 (00001) and -3 (11100), the sum is 11101 with no carry out, directly giving -2. These steps demonstrate how the end-around carry corrects the result when necessary.[26]

The result of addition can be expressed as the sum bits plus the end-around carry if an overflow carry occurs: result = S + c_out, where S is the truncated n-bit sum and c_out is 1 if there is a carry out from the MSB and 0 otherwise. This mechanism handles cases where the sum exceeds the representable range but requires careful implementation in hardware to route the carry back efficiently.[26]

Overflow in ones' complement addition is detected when both operands have the same sign but the result has the opposite sign, indicating the sum has exceeded the bounds of the representation (e.g., adding two large positive numbers yielding a negative result). In such cases, the operation signals an overflow condition, though the end-around carry still produces a valid (but wrapped) value. For instance, in 5 bits, adding +11 (01011) and +9 (01001) gives a direct sum of 10100, which is negative despite positive operands, confirming overflow. No overflow occurs for operands of different signs, as the result stays within bounds.[26]
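A C sketch of this overflow check, using the 5-bit width of the examples above (the macros and the name `ones_add5` are illustrative assumptions):

```c
#include <stdbool.h>
#include <stdio.h>

#define WIDTH 5u
#define MASK  ((1u << WIDTH) - 1u)     /* 0x1F for 5 bits */
#define SIGN  (1u << (WIDTH - 1u))     /* 0x10 for 5 bits */

/* 5-bit ones'-complement addition with end-around carry; *overflow is
   set when both operands share a sign but the result's sign differs. */
static unsigned ones_add5(unsigned a, unsigned b, bool *overflow)
{
    unsigned sum = (a & MASK) + (b & MASK);
    if (sum > MASK)
        sum = (sum & MASK) + 1u;       /* end-around carry */
    sum &= MASK;
    *overflow = ((a & SIGN) == (b & SIGN)) && ((a & SIGN) != (sum & SIGN));
    return sum;
}

int main(void)
{
    bool ovf;
    unsigned r = ones_add5(0x0B, 0x09, &ovf);        /* +11 + +9 */
    printf("result 0x%02X overflow=%d\n", r, ovf);   /* 0x14, overflow=1 */
    return 0;
}
```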
Subtraction

In ones' complement arithmetic, subtraction of two numbers A and B (A - B) is implemented by adding A to the ones' complement of B, which effectively computes A + (-B), and then applying an end-around carry adjustment if a carry-out is generated during the addition.[27][28] This approach leverages the addition circuitry, as the ones' complement of B represents its negation.[29]

Consider a 4-bit example where A = 5 (0101₂) and B = 3 (0011₂). The ones' complement of B is 1100₂. Adding 0101₂ + 1100₂ yields 10001₂, producing a carry-out of 1 and a sum of 0001₂. The end-around carry adds this 1 back to the least significant bit of the sum: 0001₂ + 0001₂ = 0010₂, which correctly represents 2 (5 - 3 = 2).[27][29] If no carry-out occurs, the result is negative and is already in ones' complement form; complementing it again yields the magnitude of the difference.[30]

The ones' complement operation is its own inverse: applying it twice to a number yields the original value (within the fixed word length), which underpins the negation used in subtraction.[28] Overflow detection in subtraction follows the same principle as in addition, where inconsistency in the carries into and out of the sign bit signals an overflow condition.[27]
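A C sketch of this complement-and-add subtraction, reproducing the 4-bit example (the 4-bit width and helper names are assumptions for illustration):

```c
#include <stdio.h>

#define MASK4 0xFu

/* 4-bit ones'-complement addition with end-around carry. */
static unsigned add4(unsigned a, unsigned b)
{
    unsigned sum = (a & MASK4) + (b & MASK4);
    if (sum > MASK4)
        sum = (sum & MASK4) + 1u;      /* end-around carry */
    return sum & MASK4;
}

/* A - B computed as A + ~B: complement the subtrahend, then add. */
static unsigned sub4(unsigned a, unsigned b)
{
    return add4(a, ~b & MASK4);
}

int main(void)
{
    printf("0x%X\n", sub4(0x5, 0x3));  /* 5 - 3 -> 0x2 */
    return 0;
}
```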
Negative Zero Issue

Occurrence
In ones' complement arithmetic, negative zero occurs when a positive number is added to its exact negative complement, yielding a bit pattern consisting of all 1s.[31] This happens under the condition of symmetric addition of opposite values, where the bitwise inversion used to form the negative ensures that each bit position sums to 1 without generating carries that propagate across bits.[32] A representative example in a 4-bit system illustrates this: the positive value +1 is represented as 0001, while its negative -1 is 1110; adding these produces 1111, interpreted as negative zero.[33]
This negative zero (1111 in the 4-bit case) is distinct from positive zero (0000), as the former has the sign bit set to 1, creating two separate representations for the value zero despite their numerical equivalence.[31]
Consequences
In ones' complement representation, negative zero and positive zero are mathematically equivalent but possess distinct bit patterns—all ones for negative zero and all zeros for positive zero—which introduces logical inconsistencies in comparisons and equality tests. For instance, direct bitwise or numerical equality checks may fail to recognize them as identical, necessitating additional logic to treat both as zero and potentially leading to erroneous program behavior in conditional statements or loops.[34][35]

Arithmetic operations exacerbate these issues, as subtracting a number from itself can yield negative zero instead of positive zero, resulting in unexpected signs or infinite loops in algorithms that rely on zero detection without accounting for the dual representations. Such errors propagate in subsequent computations, altering results in ways that deviate from expected mathematical outcomes, particularly in subtraction-heavy routines.[33][34]

In early hardware like the UNIVAC 1100 series and UNIVAC 490, these dual zeros required specialized circuitry and instruction sets to handle inconsistencies, such as distinct skip conditions in shift and logical operations that could trigger unintended program branches when registers held negative zero. This increased design complexity and debugging challenges for programmers, as negative zero appearances in division quotients or remainders—especially in edge cases like dividing zero by a non-zero value with unlike signs—could lead to misinterpretations in further calculations.[34][36]

The presence of negative zero also disrupts sorting algorithms and relational tests, where positive zero may be treated as greater than negative zero due to bit pattern differences, inverting expected orderings in data structures or search operations and compromising the reliability of ordered outputs in applications like databases or numerical simulations.[34][35]
Mitigation Techniques

End-Around Carry Adjustment
In ones' complement arithmetic, the end-around carry adjustment is a corrective step applied after binary addition to ensure accurate results, particularly by normalizing the sum and handling overflow correctly. This technique involves detecting any carry-out from the most significant bit (MSB) during the addition of two operands and then adding that carry (equivalent to adding 1) back to the least significant bit (LSB) of the resulting sum. The adjustment propagates any resulting carries through the word, effectively restoring the correct value without requiring separate hardware for subtraction, as negative numbers are handled by complementing one operand and adding.[37]

The algorithm for performing addition with end-around carry proceeds as follows (a short sketch follows the list):

- Represent the operands in ones' complement form and add them using a standard binary adder, ignoring signs.
- Note any carry-out from the MSB.
- If a carry-out exists, add 1 to the LSB of the sum, allowing carries to propagate as needed to update the entire result.
- If no carry-out exists, the sum is already the final result.
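A compact C sketch of these steps, parameterized by word width (the name `ones_add_n` is illustrative; widths from 1 to 31 bits are assumed):

```c
#include <stdio.h>

/* n-bit ones'-complement addition with end-around carry:
   1. add the masked operands with an ordinary binary adder,
   2. note any carry out of the most significant bit,
   3. if there was a carry, add 1 back at the least significant bit.
   If no carry is produced, the masked sum is already the result. */
static unsigned ones_add_n(unsigned a, unsigned b, unsigned n)
{
    unsigned mask = (1u << n) - 1u;    /* assumes 1 <= n <= 31 */
    unsigned sum = (a & mask) + (b & mask);
    if (sum > mask)                    /* carry out of the MSB */
        sum = (sum & mask) + 1u;       /* end-around carry */
    return sum & mask;
}

int main(void)
{
    /* 5-bit trace from the Addition section: -1 + -2 = -3 (11100). */
    printf("0x%02X\n", ones_add_n(0x1E, 0x1D, 5u));
    return 0;
}
```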
Sign-Magnitude Alternative
In sign-magnitude representation, an alternative to ones' complement, the most significant bit serves as a dedicated sign bit (0 for positive or zero, 1 for negative), while the remaining bits represent the absolute magnitude of the value in standard unsigned binary form.[21] This approach inherently distinguishes the sign from the value, allowing for straightforward encoding of both positive and negative numbers without inverting bits for negation. For instance, in a 4-bit system, the number -3 is encoded as 1011, where the leading 1 indicates negativity and the trailing 011 (binary 3) gives the magnitude.[21]

Unlike ones' complement, which produces a negative zero as the bitwise inverse of positive zero (all 1s), sign-magnitude represents positive zero as all bits zero (0000 in 4 bits) and negative zero as sign bit 1 with magnitude zero (1000 in 4 bits). This separation simplifies zero handling, as negative zero can be detected by checking if the magnitude is zero and normalized to positive zero by clearing the sign bit, avoiding the propagation issues seen in ones' complement arithmetic.[39]

However, sign-magnitude introduces trade-offs in arithmetic operations, requiring separate logic for sign determination and magnitude computation: addition of numbers with the same sign involves adding magnitudes and retaining the sign, while differing signs necessitate subtracting the smaller magnitude from the larger and assigning the sign of the latter, often with an initial comparison step. This increases hardware complexity compared to the uniform adder used in ones' complement (with its end-around carry).[39] Historically, sign-magnitude was employed in early computers like the IBM 704 to represent fixed-point integers, providing a hybrid approach that bypassed some flaws of ones' complement by easing sign and zero management in software or hardware normalization routines.[40]
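A small C sketch of sign-magnitude encoding and decoding in 4 bits (the names and width are illustrative assumptions), showing how negative zero can be normalized during decoding:

```c
#include <stdio.h>

#define SIGN4 0x8u
#define MAG4  0x7u

/* Encode a small signed value (-7..+7) as a 4-bit sign-magnitude pattern. */
static unsigned sm_encode(int v)
{
    return (v < 0) ? (SIGN4 | ((unsigned)(-v) & MAG4)) : ((unsigned)v & MAG4);
}

/* Decode a 4-bit sign-magnitude pattern; 1000 (negative zero) decodes to 0. */
static int sm_decode(unsigned p)
{
    int mag = (int)(p & MAG4);
    return (p & SIGN4) ? -mag : mag;
}

int main(void)
{
    printf("0x%X -> %d\n", sm_encode(-3), sm_decode(0xB)); /* 0xB, -3 */
    printf("negative zero 0x8 -> %d\n", sm_decode(0x8));   /* 0 */
    return 0;
}
```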
Comparisons

With Two's Complement
Two's complement representation of negative numbers is obtained by taking the ones' complement and adding 1, which eliminates the negative zero present in ones' complement systems.[41] This adjustment ensures a single representation for zero, simplifying arithmetic operations and avoiding ambiguities in computations.[31] For example, in a 4-bit system, the representation of -1 in ones' complement is 1110, obtained by inverting the bits of +1 (0001). In two's complement, it becomes 1111, as the ones' complement 1110 plus 1 yields 1111.[41] This pattern holds generally: the two's complement of a number is its bitwise inversion plus one, shifting the encoding to include an additional negative value without duplicating zero.[42]
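This relationship can be checked with a small C sketch (the 4-bit width and function name are assumptions) that maps a ones'-complement pattern to the two's-complement pattern of the same value:

```c
#include <stdio.h>

#define MASK4 0xFu
#define SIGN4 0x8u

/* Convert a 4-bit ones'-complement pattern to the two's-complement
   pattern for the same value: negative patterns gain 1, others are
   unchanged (the -0 pattern 1111 maps onto 0000). */
static unsigned ones_to_twos(unsigned p)
{
    p &= MASK4;
    return (p & SIGN4) ? ((p + 1u) & MASK4) : p;
}

int main(void)
{
    printf("0x%X\n", ones_to_twos(0xE)); /* -1: 1110 -> 1111 */
    printf("0x%X\n", ones_to_twos(0x8)); /* -7: 1000 -> 1001 */
    return 0;
}
```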
In terms of arithmetic, two's complement streamlines addition and subtraction by treating all operations uniformly as addition, without the end-around carry that ones' complement addition requires.[21] For instance, adding two numbers in two's complement produces the correct result directly, including overflow detection via the sign bit. Ones' complement negation, on the other hand, is simpler, requiring only bit inversion, whereas two's complement negation needs an additional increment step.[31]
The range of representable values also differs: for n bits, ones' complement spans from -(2^{n-1} - 1) to +(2^{n-1} - 1), such as -7 to +7 in 4 bits. Two's complement extends from -2^{n-1} to +(2^{n-1} - 1), like -8 to +7 in 4 bits, providing one more negative value at the expense of symmetry.[42] This asymmetry arises from the absence of negative zero, allowing fuller utilization of the bit patterns for negative integers.[21]
