Ones' complement
from Wikipedia
Three-bit ones'-complement integers

Bits  Unsigned value  Ones' complement value
000        0                 0
001        1                 1
010        2                 2
011        3                 3
100        4                −3
101        5                −2
110        6                −1
111        7                −0
8-bit ones'-complement integers

Bits       Unsigned value  Ones' complement value
0000 0000        0                 0
0000 0001        1                 1
0000 0010        2                 2
…                …                 …
0111 1110      126               126
0111 1111      127               127
1000 0000      128              −127
1000 0001      129              −126
…                …                 …
1111 1101      253                −2
1111 1110      254                −1
1111 1111      255                −0
[Figure] Cyclic depiction of the unsigned (white ring), ones'-complement (orange), and two's-complement (teal) values of 4-bit integers (black), and the effect of adding 4 to an arbitrary value.

The ones' complement of a binary number is the value obtained by inverting (flipping) all the bits in the binary representation of the number. The name "ones' complement"[1] refers to the fact that such an inverted value, when added to the original, always produces an "all ones" number (the term "complement" refers to such pairs of mutually additive inverse numbers, here with respect to a non-zero base number). This mathematical operation is primarily of interest in computer science, where it has varying effects depending on how a specific computer represents numbers.

A ones' complement system or ones' complement arithmetic is a system in which negative numbers are represented by the inverse of the binary representations of their corresponding positive numbers. In such a system, a number is negated (converted from positive to negative or vice versa) by computing its ones' complement. An N-bit ones' complement numeral system can only represent integers in the range −(2^(N−1) − 1) to 2^(N−1) − 1, while two's complement can express −2^(N−1) to 2^(N−1) − 1. It is one of three common representations for negative integers in binary computers, along with two's complement and sign-magnitude.

The ones' complement binary numeral system is characterized by the bit complement of any integer value being the arithmetic negative of the value. That is, inverting all of the bits of a number (the logical complement) produces the same result as subtracting the value from 0.
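This bit-inversion operation can be sketched in Python. Because Python integers are unbounded, the result of `~` must be masked to the word width; the function name and `bits` parameter are illustrative, not from the article:

```python
def ones_complement(value, bits=8):
    """Invert every bit of `value` within a fixed word width.

    Python ints are unbounded, so ~value is masked down to the word size.
    """
    mask = (1 << bits) - 1
    return ~value & mask

# +5 is 0000 0101; its ones' complement 1111 1010 represents -5
print(format(ones_complement(0b00000101), "08b"))  # 11111010
```

Applying the function twice returns the original pattern, reflecting that the complement of the complement is the identity.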

Many early computers, e.g., the UNIVAC 1101, CDC 160, CDC 1604, CDC 6600, LINC, DEC PDP-1, UNIVAC 1107, and their successors, used ones' complement arithmetic. Successors of the CDC 6600 continued to use ones' complement arithmetic until the late 1980s along with the descendants of the UNIVAC 1107 (the UNIVAC 1100/2200 series), but all modern computers use two's complement.

Number representation


Positive numbers are represented in the same simple binary form used by two's complement and sign-magnitude. Negative values are the bit complement of the corresponding positive value. The largest positive value has the sign (high-order) bit off (0) and all other bits on (1). The lowest negative value has the sign bit 1 and all other bits 0. The table below shows all possible values in a four-bit system, from −7 to +7.

    +      −
0  0000  1111 — Both +0 and −0 return TRUE when tested for zero
1  0001  1110 — and FALSE when tested for non-zero. 
2  0010  1101
3  0011  1100
4  0100  1011
5  0101  1010
6  0110  1001
7  0111  1000

Basics


Adding two values is straightforward. Simply align the values on the least significant bit and add, propagating any carry to the bit one position left. If the carry extends past the end of the word it is said to have "wrapped around", a condition called an "end-around carry". When this occurs, the bit must be added back in at the right-most bit. This phenomenon does not occur in two's complement arithmetic.

  0001 0110     22
+ 0000 0011      3
===========   ====
  0001 1001     25
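The end-around carry rule can be sketched as a small Python helper (the helper name is an assumption, not from the article):

```python
def add_ones_complement(a, b, bits=8):
    """Add two ones'-complement bit patterns with end-around carry."""
    mask = (1 << bits) - 1
    total = (a & mask) + (b & mask)
    if total > mask:                   # carry wrapped past the word
        total = (total & mask) + 1     # add it back at the right-most bit
    return total & mask

# 22 + 3 = 25, as in the worked example above
assert add_ones_complement(0b00010110, 0b00000011) == 0b00011001
```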

Subtraction is similar, except that borrows, rather than carries, are propagated to the left. If the borrow extends past the end of the word it is said to have "wrapped around", a condition called an "end-around borrow". When this occurs, the bit must be subtracted from the right-most bit. This phenomenon does not occur in two's complement arithmetic.

  0000 0110      6
− 0001 0011     19
===========   ====
1 1111 0011    −12    —An end-around borrow is produced, and the sign bit of the intermediate result is 1.
− 0000 0001      1    —Subtract the end-around borrow from the result.
===========   ====
  1111 0010    −13    —The correct result (6 − 19 = −13)
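The end-around borrow rule can be sketched in the same style (the helper name is an assumption):

```python
def subtract_ones_complement(a, b, bits=8):
    """Subtract ones'-complement pattern b from a with end-around borrow."""
    mask = (1 << bits) - 1
    diff = (a & mask) - (b & mask)
    if diff < 0:                       # borrow wrapped past the word
        diff = (diff & mask) - 1       # subtract it from the right-most bit
    return diff & mask

# 6 - 19 = -13, whose bit pattern is 1111 0010, as in the example above
assert subtract_ones_complement(0b00000110, 0b00010011) == 0b11110010
```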

It is easy to demonstrate that the bit complement of a positive value is the negative of that value. The computation of 19 + 3 produces the same result as 19 − (−3).

Add 3 to 19.

  0001 0011     19
+ 0000 0011      3
===========   ====
  0001 0110     22

Subtract −3 from 19.

  0001 0011     19
− 1111 1100     −3
===========   ====
1 0001 0111     23    —An end-around borrow is produced.
− 0000 0001      1    —Subtract the end-around borrow from the result.
===========   ====
  0001 0110     22    —The correct result (19 − (−3) = 22).

Negative zero


Negative zero is the condition where all bits in a signed word are 1. This follows the ones' complement rules that a value is negative when the left-most bit is 1, and that a negative number is the bit complement of the number's magnitude. The value also behaves as zero when computing. Adding or subtracting negative zero to/from another value produces the original value.

Adding negative zero:

  0001 0110     22
+ 1111 1111     −0
===========   ====
1 0001 0101     21    An end-around carry is produced.
+ 0000 0001      1
===========   ====
  0001 0110     22    The correct result (22 + (−0) = 22)

Subtracting negative zero:

  0001 0110     22
− 1111 1111     −0
===========   ====
1 0001 0111     23    An end-around borrow is produced.
− 0000 0001      1
===========   ====
  0001 0110     22    The correct result (22 − (−0) = 22)

Negative zero is easily produced in a ones' complement adder. Simply add the positive and negative of the same magnitude.

  0001 0110     22
+ 1110 1001    −22
===========   ====
  1111 1111     −0    Negative zero.

Although the math always produces the correct results, a side effect of negative zero is that software must test for negative zero.
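A zero test that accepts both representations can be sketched as a decoder that maps the all-ones pattern to 0 (an illustrative helper, not a prescribed method):

```python
def to_int(pattern, bits=8):
    """Decode a ones'-complement bit pattern to a Python int,
    mapping both +0 (all zeros) and -0 (all ones) to the single value 0."""
    mask = (1 << bits) - 1
    if pattern >> (bits - 1):        # sign bit set: negative value
        return -(~pattern & mask)    # magnitude is the bit complement
    return pattern

assert to_int(0b11111111) == 0    # negative zero tests equal to zero
assert to_int(0b11101001) == -22  # the pattern for -22 used above
```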

Avoiding negative zero


The generation of negative zero becomes a non-issue if addition is achieved with a complementing subtractor. The first operand is passed to the subtract unmodified, the second operand is complemented, and the subtraction generates the correct result, avoiding negative zero. The previous example added 22 and −22 and produced −0.

  0001 0110     22         0001 0110     22            1110 1001  −22         1110 1001   −22
+ 1110 1001    −22       − 0001 0110     22          + 0001 0110   22       − 1110 1001   −22
===========   ====  but  ===========   ====   ; and  ===========  ===  but  ===========   ===
  1111 1111     −0         0000 0000      0            1111 1111   −0         0000 0000     0

"Corner cases" arise when one or both operands are zero and/or negative zero.

  0001 0010     18         0001 0010     18
− 0000 0000      0       − 1111 1111     −0
===========   ====       ===========   ====
  0001 0010     18       1 0001 0011     19
                         − 0000 0001      1
                         ===========   ====
                           0001 0010     18

Subtracting +0 is trivial (as shown above). If the second operand is negative zero it is inverted and the original value of the first operand is the result. Subtracting −0 is also trivial. The result can be only one of two cases. In case 1, operand 1 is −0 so the result is produced simply by subtracting 1 from 1 at every bit position. In case 2, the subtraction will generate a value that is 1 larger than operand 1 and an end-around borrow. Completing the borrow generates the same value as operand 1.

The next example shows what happens when both operands are plus or minus zero:

  0000 0000      0         0000 0000      0         1111 1111     −0         1111 1111     −0
+ 0000 0000      0       + 1111 1111     −0       + 0000 0000      0       + 1111 1111     −0
===========   ====       ===========   ====       ===========   ====       ===========   ====
  0000 0000      0         1111 1111     −0         1111 1111     −0       1 1111 1110     −1
                                                                           + 0000 0001      1
                                                                           ==================
                                                                             1111 1111     −0
  0000 0000      0         0000 0000      0         1111 1111     −0         1111 1111     −0
− 1111 1111     −0       − 0000 0000      0       − 1111 1111     −0       − 0000 0000      0
===========   ====       ===========   ====       ===========   ====       ===========   ====
1 0000 0001      1         0000 0000      0         0000 0000      0         1111 1111     −0
− 0000 0001      1
===========   ====
  0000 0000      0

This example shows that of the four possible conditions when adding only ±0, an adder will produce −0 in three of them. A complementing subtractor will produce −0 only when the first operand is −0 and the second is 0.

from Grokipedia
One's complement is a binary numeral system used to represent signed integers in computer arithmetic: the magnitude of a positive number is encoded in its standard binary form with a leading 0 as the sign bit, and the corresponding negative value is obtained by performing a bitwise inversion (complement) of all bits in the positive representation. This lets the most significant bit (MSB) serve as the sign indicator (0 for non-negative, 1 for negative) while allowing addition and subtraction to share circuitry. Unlike unsigned binary, one's complement supports both positive and negative values within a fixed bit width, ranging from -(2^{n-1} - 1) to +(2^{n-1} - 1) for an n-bit representation, with dual representations for zero (all zeros for +0 and all ones for -0).

In one's complement arithmetic, negating a number simply requires flipping all its bits, which simplifies the hardware implementation of additive inverses compared to other schemes such as sign-magnitude representation. Addition proceeds via standard binary addition, with one key modification: any carry out of the MSB is "end-around carried" back to the least significant bit (LSB). For example, in an 8-bit system, positive 5 (00000101) becomes -5 as 11111010, and adding -5 to 5 yields 11111111 (-0), illustrating the dual-zero issue. Subtraction reduces to addition by complementing the subtrahend and adding it to the minuend, eliminating the need for dedicated subtraction logic in early processors.

Historically, one's complement was employed in several pioneering computers during the mid-20th century, such as the UNIVAC 1100 series and CDC supercomputers, owing to its straightforward bit-flip negation and compatibility with binary adders for subtraction.
However, its drawbacks, including the dual zeros requiring special handling in comparisons and the added complexity of the end-around carry, led to its gradual replacement by two's complement in most modern architectures by the 1970s. Despite this, one's complement persists in specific applications such as error detection: notably, the Internet Protocol (IP) header checksum computes the 16-bit one's complement of the one's-complement sum of all 16-bit words in the header to verify integrity during transmission. This usage leverages the system's properties for efficient validation, since the received words sum to all ones (negative zero) if no errors occurred.

Fundamentals

Definition

Ones' complement is a method for representing signed integers in binary systems, where the negative counterpart of a positive number is obtained by applying a bitwise NOT operation, also known as bitwise inversion or complement, to its binary representation, flipping each bit from 0 to 1 and vice versa. This approach contrasts with other signed representations by directly using the inversion to denote negation, allowing the most significant bit to implicitly indicate the sign (0 for positive or zero, 1 for negative) within the fixed-width bit field. Historically, ones' complement served as one of the earliest schemes for handling signed numbers in computer arithmetic, enabling efficient implementation of operations like subtraction through addition of complemented values, as seen in pioneering machines from the mid-20th century. It was employed in systems such as the DEC PDP-1 and CDC 6600, where it supported binary fixed-width representations without relying on explicit sign-magnitude formatting for all calculations. This representation was particularly suited to the hardware constraints of early computers, providing a straightforward way to encode both positive and negative values in n-bit words. A basic example illustrates the process: in a 4-bit representation, the positive +3 is encoded as 0011 in binary, and applying the bitwise NOT yields 1100, which represents -3. Understanding ones' complement is a prerequisite for grasping binary fixed-width representations, such as those used in n-bit computer words, where the full range of values from -(2^{n-1} - 1) to +(2^{n-1} - 1) can be expressed, setting aside the negative zero issue addressed elsewhere.

Bitwise Operation

The bitwise NOT operation, commonly referred to as the one's complement operation, forms the core of ones' complement by inverting every bit in a binary number's fixed-width representation. It processes the entire word length, transforming each bit independently without regard to its positional value. To perform the bitwise NOT, begin with the binary form of the number and systematically flip each bit: a 0 becomes 1, and a 1 becomes 0, applied to all positions from the most significant bit to the least significant bit. Mathematically, for a bit b_i at position i, the complemented bit is given by 1 - b_i, or equivalently ~b_i using bitwise NOT notation. This inversion is deterministic and reversible within the same word size. The fixed-width requirement is essential, as leading zeros in positive numbers must also be inverted to maintain the full word width and avoid errors in hardware implementations. For instance, in an 8-bit word, the positive value 5 is represented as 00000101; applying bitwise NOT inverts all bits to 11111010, which denotes -5 in ones' complement. This example illustrates how the operation preserves the word size while achieving bit-level inversion.
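The per-bit rule b → 1 − b can be spelled out as an explicit loop in Python (an illustrative sketch; the function name is an assumption):

```python
def bitwise_not(value, bits=8):
    """Apply the per-bit rule b -> 1 - b across a fixed-width word.

    Equivalent to ~value masked to the word width; written as a loop
    to mirror the bit-by-bit description.
    """
    result = 0
    for i in range(bits):
        b = (value >> i) & 1          # extract bit i
        result |= (1 - b) << i        # store its complement at position i
    return result

assert bitwise_not(0b00000101) == 0b11111010          # +5 -> -5
assert bitwise_not(0b00000101) == ~0b00000101 & 0xFF  # agrees with masked ~
```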

Number Representation

Positive and Zero Values

In ones' complement representation, positive integers and zero are encoded identically to their unsigned binary counterparts, with the most significant bit (MSB) serving as the sign bit set to 0 to indicate non-negativity. This direct mapping ensures that the bit patterns for these values remain unchanged from standard binary notation, preserving simplicity in their storage and interpretation. For an n-bit ones' complement system, the range of representable non-negative values spans from 0 to 2^{n-1} - 1, allowing half of the total 2^n bit patterns to denote positives and zero while reserving the other half for negatives. This symmetric allocation (excluding the dual zero representations) provides a balanced capacity for signed integers compared to unsigned systems. To illustrate, consider a 4-bit ones' complement system: the value +0 is encoded as 0000, +1 as 0001, +3 as 0011, and the maximum positive +7 as 0111. These patterns mirror unsigned binary exactly, with the MSB consistently at 0. Zero specifically uses the all-zero bit pattern, termed positive zero, which contrasts with the all-ones pattern representing negative zero, a unique feature of ones' complement that arises from bitwise inversion for negatives but does not affect positive encodings.

Negative Values

In one's complement representation, negative integers are encoded by performing a bitwise inversion on the binary representation of their absolute (positive) value. This involves flipping every bit, changing each 0 to 1 and each 1 to 0, across the fixed number of bits used for the representation. The resulting bit pattern serves as the encoding for the negative value, building directly on the standard binary encoding of the corresponding positive value. For example, in a 4-bit system, the positive value +1 is represented as 0001 in binary. Inverting all bits yields 1110, which represents -1. Similarly, +4 is 0100, and its inversion is 1011, representing -4. These examples illustrate how the complemented form preserves the structure of binary while denoting negativity through the inversion process. The range of representable negative values in an n-bit one's complement system spans from -(2^{n-1} - 1) to -1, providing symmetric magnitude coverage relative to the positive range, with the all-1s pattern separately representing negative zero (distinct in bit pattern from positive zero but equal in value). For a 4-bit system, this corresponds to -7 through -1; the most negative value, -7 (1000), arises from inverting the largest positive value, +7 (0111). The most significant bit (MSB) is always 1 for negative values, serving as a sign indicator, but unlike pure sign-magnitude systems it is not isolated: the entire bit pattern, including the MSB, contributes to the complemented magnitude. To decode a negative representation, invert the bits back to obtain the positive magnitude and then apply the negative sign. This holistic bit involvement distinguishes one's complement from other signed representations.
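The encoding rule can be sketched as a small Python helper (the name and parameters are illustrative; the value is assumed to fit the representable range):

```python
def encode(value, bits=4):
    """Encode a small int as an n-bit ones'-complement pattern.

    Assumes abs(value) <= 2**(bits-1) - 1.
    """
    mask = (1 << bits) - 1
    if value < 0:
        return ~(-value) & mask   # invert the bits of the magnitude
    return value & mask

assert encode(-1) == 0b1110   # inversion of 0001
assert encode(-4) == 0b1011   # inversion of 0100
```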

Arithmetic Operations

Addition

In one's complement arithmetic, addition of two operands proceeds via standard binary addition, treating the numbers as unsigned values regardless of their signs. The bits are added column by column from the least significant bit (LSB) to the most significant bit (MSB), propagating carries as in ordinary binary addition. If a carry is generated out of the MSB, it is not discarded; instead, it is added back to the LSB of the sum in a process known as the end-around carry. This adjustment makes the arithmetic work modulo 2^n - 1, where n is the number of bits, preserving the properties of the representation. Consider a 5-bit example adding +7 (00111) and +5 (00101). The binary addition yields 01100 with no carry out of the MSB, so no end-around carry is needed, and the result is +12 (01100). Now add -1 (11110) and -2 (11101): the initial sum is 11011 with a carry out of 1 from the MSB; applying the end-around carry adds 1 to the LSB, yielding 11100, which represents -3. For mixed signs, such as +1 (00001) and -3 (11100), the sum is 11101 with no carry out, directly giving -2. These steps demonstrate how the end-around carry corrects the result when necessary. The result of addition can be expressed as r = ((a + b) + c_o) mod 2^n, where c_o is 1 if there is a carry out of the MSB and 0 otherwise. This mechanism handles sums that exceed the representable range but requires careful hardware implementation to route the carry back efficiently. Overflow in one's complement addition is detected when both operands have the same sign but the result has the opposite sign, indicating the sum has exceeded the bounds of the representation (e.g., adding two large positive numbers yielding a negative result).
In such cases the operation signals an overflow condition, though the end-around carry still produces a valid (but wrapped) value. For instance, in 5 bits, adding +11 (01011) and +9 (01001) gives a direct sum of 10100, which is negative despite positive operands, confirming overflow. No overflow occurs for operands of different signs, as the result stays within bounds.
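The addition and overflow rules above can be combined in one sketch (the helper name and default width are assumptions, not from the article):

```python
def add_with_overflow(a, b, bits=5):
    """Ones'-complement addition with end-around carry and
    same-sign overflow detection."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    total = (a & mask) + (b & mask)
    if total > mask:                      # carry out of the MSB
        total = (total & mask) + 1        # end-around carry
    total &= mask
    # Overflow: operands share a sign, but the result's sign differs.
    overflow = (a & sign) == (b & sign) and (total & sign) != (a & sign)
    return total, overflow

assert add_with_overflow(0b00111, 0b00101) == (0b01100, False)  # +7 + +5
assert add_with_overflow(0b01011, 0b01001) == (0b10100, True)   # +11 + +9 overflows
```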

Subtraction

In ones' complement arithmetic, subtraction of two numbers A and B (A - B) is implemented by adding A to the ones' complement of B, which effectively computes A + (-B), and then applying an end-around carry adjustment if a carry-out is generated during the addition. This approach reuses the addition circuitry, since the ones' complement of B represents its additive inverse. Consider a 4-bit example where A = 5 (0101₂) and B = 3 (0011₂). The ones' complement of B is 1100₂. Adding 0101₂ + 1100₂ yields 10001₂, producing a carry-out of 1 and a sum of 0001₂. The end-around carry adds this 1 back to the least significant bit of the sum: 0001₂ + 0001₂ = 0010₂, which correctly represents 2 (5 - 3 = 2). If no carry-out occurs, the result is the ones' complement of the true difference, indicating a negative value that requires further complementation to obtain the magnitude. The ones' complement operation is its own inverse: applying it twice to a number yields the original value (within the fixed word length), which is what makes complement-and-add subtraction work. Overflow detection in subtraction follows the same rule as in addition: an inconsistency between the carries into and out of the sign bit signals an overflow condition.
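The complement-and-add procedure can be sketched directly (an illustrative helper; the name and default width are assumptions):

```python
def subtract_via_complement(a, b, bits=4):
    """Compute a - b by adding a to the ones' complement of b,
    then applying the end-around carry if one is produced."""
    mask = (1 << bits) - 1
    total = (a & mask) + (~b & mask)   # A + (-B)
    if total > mask:                   # carry out of the MSB
        total = (total & mask) + 1     # end-around carry
    return total & mask

# 5 - 3: 0101 + 1100 = 1 0001; the end-around carry gives 0010 (+2)
assert subtract_via_complement(0b0101, 0b0011) == 0b0010
```

When no carry-out occurs, the returned pattern is already the ones'-complement encoding of the negative difference, matching the no-carry case described above.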

Negative Zero Issue

Occurrence

In one's complement arithmetic, negative zero occurs when a positive number is added to its exact negative complement, yielding a bit pattern of all 1s. This happens under symmetric addition of opposite values, where the bitwise inversion used to form the negative ensures that each bit position sums to 1 without generating any carries. A representative example in a 4-bit system illustrates this: the positive value +1 is represented as 0001, while its negative -1 is 1110; adding these produces 1111, interpreted as negative zero. This negative zero (1111 in the 4-bit case) is distinct from positive zero (0000), as the former has the sign bit set to 1, creating two separate representations for the value zero despite their numerical equivalence.

Consequences

In ones' complement representation, negative zero and positive zero are mathematically equivalent but possess distinct bit patterns, all ones for negative zero and all zeros for positive zero, which introduces logical inconsistencies in comparisons and equality tests. For instance, direct bitwise or numerical equality checks may fail to recognize them as identical, necessitating additional logic to treat both as zero and potentially leading to erroneous program behavior in conditional statements or loops. Arithmetic operations exacerbate these issues, as subtracting a number from itself can yield negative zero instead of positive zero, producing unexpected signs or infinite loops in algorithms that rely on zero detection without accounting for the dual representations. Such errors propagate into subsequent computations, altering results in ways that deviate from expected mathematical outcomes, particularly in subtraction-heavy routines. In early ones'-complement hardware such as the UNIVAC 490, these dual zeros required specialized circuitry and instruction sets to handle inconsistencies, such as distinct skip conditions in shift and logical operations that could trigger unintended program branches when registers held negative zero. This increased design complexity and created challenges for programmers, as negative zero appearing in division quotients or remainders (especially in edge cases like dividing zero by a non-zero value with unlike signs) could be misinterpreted in further calculations. The presence of negative zero also disrupts sorting algorithms and relational tests, where positive zero may be treated as greater than negative zero due to bit differences, inverting expected orderings in data structures or search operations and compromising the reliability of ordered outputs in applications such as numerical simulations.

Mitigation Techniques

End-Around Carry Adjustment

In ones' complement arithmetic, the end-around carry adjustment is a corrective step applied after binary addition to ensure accurate results, normalizing the sum and handling the carry-out correctly. The technique detects any carry-out from the most significant bit (MSB) during the addition of two operands and adds that carry (equivalent to adding 1) back to the least significant bit (LSB) of the resulting sum. The adjustment propagates any resulting carries through the word, restoring the correct value without requiring separate hardware for subtraction, as negative numbers are handled by complementing one operand and adding. The algorithm for performing addition with end-around carry proceeds as follows:
  1. Represent the operands in ones' complement form and add them using a standard binary adder, ignoring signs.
  2. Note any carry-out from the MSB.
  3. If a carry-out exists, add 1 to the LSB of the sum, allowing carries to propagate as needed to update the entire result.
This process keeps the arithmetic consistent modulo 2^n - 1 within the fixed word length. However, it does not fully resolve the negative zero issue; for example, adding +1 (0001) and -1 (1110) in 4 bits yields 1111 (-0) with no carry-out, requiring additional logic to normalize zeros in comparisons or other operations. A representative example illustrates the technique's role in correcting the sum. Consider adding +3 and -1 in a 4-bit ones' complement system, where +3 is 0011 and -1 is 1110. The initial addition yields 0011 + 1110 = 0001 with a carry-out of 1. Adding the carry back to the LSB gives 0001 + 0001 = 0010 (positive 2), the correct sum. Without this step, the result would be 0001 (+1), which is incorrect. This adjustment simplifies hardware design by allowing a single circuit to handle both addition and subtraction: subtraction is performed by complementing the subtrahend, adding it to the minuend, and applying the end-around carry if needed, reducing complexity compared to dedicated subtraction logic.

Sign-Magnitude Alternative

In sign-magnitude representation, an alternative to ones' complement, the most significant bit serves as a dedicated sign bit (0 for positive or zero, 1 for negative), while the remaining bits represent the absolute magnitude of the value in standard unsigned binary form. This approach inherently separates the sign from the value, allowing straightforward encoding of both positive and negative numbers without inverting bits for negation. For instance, in a 4-bit system, the number -3 is encoded as 1011, where the leading 1 indicates negativity and the trailing 011 (binary 3) gives the magnitude. Unlike ones' complement, which produces a negative zero as the bitwise inverse of positive zero (all 1s), sign-magnitude represents positive zero as all bits 0 (0000 in 4 bits) and negative zero as sign bit 1 with zero magnitude (1000 in 4 bits). This separation simplifies zero handling, as negative zero can be detected by checking whether the magnitude is zero and normalized to positive zero by clearing the sign bit, avoiding the propagation issues seen in ones' complement arithmetic. However, sign-magnitude introduces trade-offs in arithmetic, requiring separate logic for sign determination and magnitude computation: addition of numbers with the same sign involves adding magnitudes and retaining the sign, while differing signs require subtracting the smaller magnitude from the larger and assigning the sign of the latter, often with an initial comparison step. This increases hardware complexity compared to the uniform adder used in ones' complement (with its end-around carry). Historically, sign-magnitude was employed in some early computers to represent fixed-point integers, providing an approach that bypassed some flaws of ones' complement by easing sign and zero management in software or hardware normalization routines.
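The sign-magnitude encoding rule can be sketched in Python (an illustrative helper, not a library API; the magnitude is assumed to fit the field):

```python
def sign_magnitude(value, bits=4):
    """Encode an int in sign-magnitude: a dedicated sign bit
    followed by the unsigned magnitude."""
    sign = 1 << (bits - 1)
    magnitude = abs(value) & (sign - 1)
    return (sign | magnitude) if value < 0 else magnitude

# -3 encodes as sign bit 1 plus magnitude 011, as in the example above
assert sign_magnitude(-3) == 0b1011
assert sign_magnitude(+3) == 0b0011
```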

Comparisons

With Two's Complement

Two's complement representation of negative numbers is obtained by taking the ones' complement and adding 1, which eliminates the negative zero present in ones' complement systems. This adjustment ensures a single representation for zero, simplifying arithmetic operations and avoiding ambiguities in computations. For example, in a 4-bit system, the representation of -1 in ones' complement is 1110, obtained by inverting the bits of +1 (0001). In two's complement, it becomes 1111, as the ones' complement 1110 plus 1 yields 1111. This pattern holds generally: the two's complement of a number is its bitwise inversion plus one, shifting the encoding to include an additional negative value without duplicating zero. In terms of arithmetic, two's complement streamlines addition and subtraction by treating all operations uniformly as addition, without the end-around carry required in ones' complement. Adding two numbers in two's complement produces the correct result directly, including overflow detection via the sign bit; on the other hand, negation in ones' complement is simpler, requiring only bit inversion, whereas two's complement needs the additional increment step. The ranges of representable values also differ: for n bits, ones' complement spans -(2^{n-1} - 1) to +(2^{n-1} - 1), such as -7 to +7 in 4 bits, while two's complement extends from -2^{n-1} to +(2^{n-1} - 1), like -8 to +7 in 4 bits, providing one more negative value at the expense of symmetry. This asymmetry arises from the absence of negative zero, allowing fuller utilization of the bit patterns for negative integers.
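The two negation rules can be compared side by side in a short Python sketch (the 4-bit width and helper names are assumptions):

```python
BITS = 4
MASK = (1 << BITS) - 1

def ones_neg(x):
    """Ones' complement negation: invert every bit."""
    return ~x & MASK

def twos_neg(x):
    """Two's complement negation: invert, then add 1."""
    return (~x + 1) & MASK

assert ones_neg(0b0001) == 0b1110   # -1 in ones' complement
assert twos_neg(0b0001) == 0b1111   # -1 in two's complement
```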

With Sign-Magnitude

In sign-magnitude representation, the most significant bit serves as a dedicated sign bit (0 for positive or zero, 1 for negative), while the remaining bits encode the magnitude of the number in standard unsigned binary. In contrast, ones' complement has no separate magnitude field; negative numbers are formed by inverting all bits of the corresponding positive value, spreading the sign information across the entire word. This structural difference means sign-magnitude treats the sign and value asymmetrically, whereas ones' complement applies a uniform inversion to the whole number. For a concrete illustration in a 4-bit system, the number -3 is represented as 1011 in sign-magnitude (sign bit 1 followed by magnitude 011, binary 3) and as 1100 in ones' complement (the bitwise inversion of positive 3, 0011). Both formats support a symmetric range of values, such as -7 to +7 in 4 bits, but they allocate bit patterns differently: sign-magnitude dedicates half its codes to positive/negative pairs, while ones' complement distributes them through inversion. A shared trade-off is their handling of zero: both representations produce dual forms, positive zero (all bits 0) and negative zero (sign bit 1 with zero magnitude for sign-magnitude, or all bits 1 for ones' complement), leading to potential ambiguities in comparisons and arithmetic results. However, sign-magnitude arithmetic demands more complex hardware logic, as operations require separate evaluation of signs to decide whether to add or subtract magnitudes, often necessitating conditional circuits or dual paths (one for addition, one for subtraction of unsigned magnitudes), which increases gate count and delay. Ones' complement, by comparison, enables a more uniform approach: addition proceeds like unsigned binary addition with an end-around carry adjustment if a carry-out occurs, and subtraction is performed by adding the ones' complement of the subtrahend, allowing a single adder circuit to serve both operations.
Regarding the efficiency of negation, sign-magnitude requires only flipping the sign bit, a minimal single-bit operation, while ones' complement involves inverting every bit, which, though parallelizable in hardware, consumes more logic resources across the word width. Despite this, ones' complement's integrated inversion can simplify certain increment-decrement circuits in older designs, though both formats ultimately gave way to two's complement in modern systems for overall arithmetic simplicity.

History and Applications

Origins

The ones' complement representation for signed binary integers emerged in the late 1940s and early 1950s, coinciding with the shift from decimal-based to binary arithmetic in electronic digital computers. This period saw the design of pioneering machines that required efficient methods for handling negative numbers, drawing on earlier complement techniques from decimal systems but adapted to binary's simpler radix-2 structure. A significant early adoption occurred with the UNIVAC 1101 (originally the ERA 1101), developed by Engineering Research Associates starting in 1949 and first delivered in 1951. In this machine, the 48-bit accumulator employed ones' complement arithmetic, with negative numbers formed by inverting the bits of their positive counterparts, facilitating the subtractive operations fundamental to its design. The primary rationale for ones' complement lay in its hardware simplicity: negation required only bitwise inversion using NOT gates, avoiding the need for sequential carry propagation or dedicated subtraction circuitry. This contrasted with decimal complements (such as the 9's complement), which demanded more intricate digit-wise operations; in binary, it streamlined addition-based subtraction by complementing the subtrahend and adding an end-around carry when necessary. As a foundational approach, ones' complement influenced subsequent developments, acting as a precursor to two's complement by highlighting issues like the dual zero representations (+0 and -0), which later systems resolved through an additional increment step that unified arithmetic behavior.

Current Usage

Despite the widespread adoption of two's complement arithmetic in general-purpose computing since the 1960s, ones' complement has declined sharply, owing in large part to standardization efforts exemplified by IBM's System/360 architecture, which opted for two's complement to simplify hardware implementation and improve performance in bit-serial models. This transition influenced subsequent CPU designs, rendering ones' complement obsolete in most modern processors. In contemporary hardware, ones' complement persists primarily in legacy mainframe systems for compatibility reasons, such as the Unisys ClearPath Dorado series, which maintains ones' complement arithmetic in its OS 2200 environment to support long-running enterprise applications. While rare in general embedded systems, certain digital signal processing (DSP) contexts use ones' complement for operations like checksum calculations, since the ones'-complement sum is independent of byte order. A key niche application remains in network protocols, where ones' complement is employed for error detection in checksums; for instance, the IPv4 header checksum, as defined in RFC 791, is the ones' complement of the ones'-complement sum of the 16-bit words in the header, used to verify integrity during transmission. Similarly, TCP and UDP use ones' complement arithmetic in their checksums to detect transmission errors efficiently. In software, ones' complement is emulated when simulating vintage computers that originally used it, such as the CDC 6600 or the UNIVAC 1100 series, allowing accurate reproduction of historical behavior in emulators for retrocomputing enthusiasts. Bitwise operations also retain the terminology: Python's bitwise NOT operator (~) performs a ones' complement by inverting all bits of its operand, equivalent to -(x + 1) under two's complement semantics, facilitating low-level bit manipulation.
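The Internet checksum use just described can be sketched with the RFC 1071 fold-and-complement procedure (a simplified illustration over a list of 16-bit words; the header words below are made-up example values):

```python
def internet_checksum(words):
    """Ones'-complement sum of 16-bit words, then complemented
    (a simplified sketch of the RFC 1071 procedure)."""
    total = 0
    for w in words:
        total += w & 0xFFFF
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

# Example (made-up) header words; including the checksum in the sum
# makes the ones'-complement total all ones, so its complement is 0.
words = [0x4500, 0x0073, 0x0000, 0x4000]
checksum = internet_checksum(words)
assert internet_checksum(words + [checksum]) == 0
```

A receiver recomputes the sum over the received words plus the checksum field; a complemented result of zero indicates no detected corruption.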
