Arithmetic shift
from Wikipedia
A right arithmetic shift of a binary number by 1. The empty position in the most significant bit is filled with a copy of the original MSB.
A left arithmetic shift of a binary number by 1. The empty position in the least significant bit is filled with a zero.
Arithmetic shift operators in various programming languages and processors

Language or processor | Left | Right
ActionScript 3, Java, JavaScript, Python, PHP, Ruby, C, C++,[1] D, C#, Go, Julia, Rust (signed types only),[2] Swift (signed types only)[note 1] | << | >>
Ada | Shift_Left[3] | Shift_Right_Arithmetic
Kotlin | shl | shr
Fortran | SHIFTL | SHIFTA[note 2]
Standard ML | << | ~>>
Verilog | <<< | >>>[note 3]
OpenVMS macro language | @[note 4]
Scheme | arithmetic-shift[note 5]
Common Lisp | ash
OCaml | lsl | asr
Haskell | Data.Bits.shift[note 6]
VHDL | sla[note 7] | sra
Assembly: Z80 | SLA[5] | SRA
Assembly: x86 | SAL | SAR
Assembly: 68k | ASL | ASR
Assembly: RISC-V | sll, slli[6] | sra, srai

In computer programming, an arithmetic shift is a shift operator, sometimes termed a signed shift (though it is not restricted to signed operands). The two basic types are the arithmetic left shift and the arithmetic right shift. For binary numbers it is a bitwise operation that shifts all of the bits of its operand; every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in. Instead of being filled with all 0s, as in logical shift, when shifting to the right, the leftmost bit (usually the sign bit in signed integer representations) is replicated to fill in all the vacant positions (this is a kind of sign extension).

Some authors prefer the terms sticky right-shift and zero-fill right-shift for arithmetic and logical shifts respectively.[7]

Arithmetic shifts can be useful as efficient ways to perform multiplication or division of signed integers by powers of two. Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by 2^n. Shifting right by n bits on a two's complement signed binary number has the effect of dividing it by 2^n, but it always rounds down (towards negative infinity). This is different from the way rounding is usually done in signed integer division (which rounds towards 0). This discrepancy has led to bugs in a number of compilers.[8]

For example, in the x86 instruction set, the SAR instruction (arithmetic right shift) divides a signed number by a power of two, rounding towards negative infinity.[9] However, the IDIV instruction (signed divide) divides a signed number, rounding towards zero. So a SAR instruction cannot be substituted for an IDIV by power of two instruction nor vice versa.
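Python's built-in `>>` on negative integers floors like SAR, which makes the contrast with IDIV-style truncation easy to demonstrate. A small sketch (the helper names `sar_div` and `idiv_pow2` are illustrative, not x86 APIs):

```python
def sar_div(x, n):
    """Divide by 2**n the way an arithmetic right shift does (floors)."""
    return x >> n

def idiv_pow2(x, n):
    """Divide by 2**n with truncation toward zero, as IDIV would."""
    q = abs(x) >> n
    return -q if x < 0 else q

# The two agree for non-negative values but differ for negative odd ones:
# sar_div(-7, 1) == -4, while idiv_pow2(-7, 1) == -3.
```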

Formal definition


The formal definition of an arithmetic shift, from Federal Standard 1037C is that it is:

A shift, applied to the representation of a number in a fixed radix numeration system and in a fixed-point representation system, and in which only the characters representing the fixed-point part of the number are moved. An arithmetic shift is usually equivalent to multiplying the number by a positive or a negative integral power of the radix, except for the effect of any rounding; compare the logical shift with the arithmetic shift, especially in the case of floating-point representation.

An important word in the FS 1037C definition is "usually".

Non-equivalence of arithmetic right shift and division


However, arithmetic right shifts are major traps for the unwary, specifically in treating rounding of negative integers. For example, in the usual two's complement representation of negative integers, −1 is represented as all 1's. For an 8-bit signed integer this is 1111 1111. An arithmetic right-shift by 1 (or 2, 3, ..., 7) yields 1111 1111 again, which is still −1. This corresponds to rounding down (towards negative infinity), but is not the usual convention for division.

It is frequently stated that arithmetic right shifts are equivalent to division by a (positive, integral) power of the radix (e.g., a division by a power of 2 for binary numbers), and hence that division by a power of the radix can be optimized by implementing it as an arithmetic right shift. (A shifter is much simpler than a divider. On most processors, shift instructions will execute faster than division instructions.) A large number of 1960s and 1970s programming handbooks, manuals, and other specifications from companies and institutions such as DEC, IBM, Data General, and ANSI make such incorrect statements[10][page needed].

Logical right shifts are equivalent to division by a power of the radix (usually 2) only for positive or unsigned numbers. Arithmetic right shifts are equivalent to logical right shifts for positive signed numbers. Arithmetic right shifts for negative numbers in N's complement (usually two's complement) are roughly equivalent to division by a power of the radix (usually 2), where for odd numbers rounding downwards is applied (not towards 0 as usually expected).

Arithmetic right shifts for negative numbers are equivalent to division using rounding towards 0 in ones' complement representation of signed numbers as was used by some historic computers, but this is no longer in general use.

Handling the issue in programming languages


The (1999) ISO standard for the programming language C defines the right shift operator in terms of divisions by powers of 2.[11] Because of the above-stated non-equivalence, the standard explicitly excludes from that definition the right shifts of signed numbers that have negative values. It does not specify the behaviour of the right shift operator in such circumstances, but instead requires each individual C compiler to define the behaviour of shifting negative values right.[note 8]

Like C, C++ had an implementation-defined right shift for signed integers until C++20. Starting in the C++20 standard, right shift of a signed integer is defined to be an arithmetic shift.[13]
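Where a language leaves signed right shift implementation-defined, one portable tactic is to operate on the raw bit pattern and replicate the sign bit by hand. A minimal sketch in Python (the helper name `asr` and the 32-bit default width are illustrative choices):

```python
def asr(pattern, shift, width=32):
    """Arithmetic right shift of a width-bit two's complement bit pattern,
    built from a logical shift plus explicit sign-bit replication."""
    sign = pattern >> (width - 1) & 1
    out = pattern >> shift                            # logical shift first
    if sign and shift:
        out |= ((1 << shift) - 1) << (width - shift)  # replicate sign bit
    return out & ((1 << width) - 1)
```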

Applications


In applications where consistent rounding down is desired, arithmetic right shifts for signed values are useful. An example is in downscaling raster coordinates by a power of two, which maintains even spacing. For example, right shift by 1 sends 0, 1, 2, 3, 4, 5, ... to 0, 0, 1, 1, 2, 2, ..., and −1, −2, −3, −4, ... to −1, −1, −2, −2, ..., maintaining even spacing as −2, −2, −1, −1, 0, 0, 1, 1, 2, 2, ... In contrast, integer division with rounding towards zero sends −1, 0, and 1 all to 0 (3 points instead of 2), yielding −2, −1, −1, 0, 0, 0, 1, 1, 2, 2, ... instead, which is irregular at 0.
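The spacing behavior described above can be reproduced directly, since Python's `>>` floors while `int()` on a float truncates toward zero:

```python
# Downscaling coordinates by 2: the arithmetic shift keeps spacing even,
# while truncating division bunches three inputs at zero.
coords = list(range(-4, 5))                # -4 .. 4
shifted = [c >> 1 for c in coords]         # floor division by 2
trunc = [int(c / 2) for c in coords]       # rounds toward zero

# shifted -> [-2, -2, -1, -1, 0, 0, 1, 1, 2]   (even pairs)
# trunc   -> [-2, -1, -1, 0, 0, 0, 1, 1, 2]    (three zeros, irregular at 0)
```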

Notes


References

from Grokipedia
An arithmetic shift is a bitwise operation in computer programming that shifts the bits of a signed binary number left or right by a specified number of positions, while preserving the sign of the number by filling the vacated bit positions with copies of the sign bit (the most significant bit). In an arithmetic left shift, bits are moved to the left, the least significant bits are filled with zeros, and bits shifted beyond the most significant position are discarded, effectively multiplying the number by a power of two for positive values but potentially causing overflow for signed representations. For an arithmetic right shift, bits are moved to the right, the most significant bits are filled with the sign bit to maintain the number's sign (replicating 0 for positive numbers and 1 for negative numbers), and the least significant bit is discarded, which performs division by a power of two while rounding toward negative infinity in two's complement systems. This contrasts with a logical shift, which always fills vacated positions with zeros regardless of the sign, making it suitable for unsigned integers but altering the value of signed ones. Arithmetic shifts are fundamental in processor arithmetic logic units (ALUs) for efficient implementation of multiplication and division by powers of two, as well as in applications like fixed-point arithmetic, digital signal processing, and graphics in embedded systems.

Fundamentals

Definition

An arithmetic shift is a bitwise operation that shifts the bits of a binary number left or right by a specified number of positions, with the key characteristic of preserving the sign of the number when operating on signed integers represented in two's complement form. In this representation, the most significant bit (MSB) serves as the sign bit: 0 for positive (including zero) and 1 for negative values. To understand arithmetic shifts, consider the binary representation of signed integers using two's complement, the predominant method in modern architectures. For an n-bit integer, positive numbers are encoded in standard binary, while negative numbers are formed by inverting all bits of the positive equivalent and adding 1; this allows arithmetic operations to treat positive and negative values uniformly without separate sign handling. In an arithmetic left shift, vacated bit positions on the right are filled with zeros. In an arithmetic right shift, vacated bit positions on the left are filled by replicating the sign bit, ensuring the numerical value remains correctly interpreted as signed: zeros for positive numbers and ones for negative ones. This contrasts with logical shifts, which always fill with zeros regardless of sign.

For example, consider an 8-bit positive integer 00001010₂ (decimal 10). An arithmetic left shift by 1 position yields 00010100₂ (decimal 20), filling the right with a zero; a right shift by 1 yields 00000101₂ (decimal 5), filling the left with a zero due to the positive sign bit. For a negative 8-bit integer like 11110110₂ (decimal -10 in two's complement), a right shift by 1 becomes 11111011₂ (decimal -5), filling the left with 1s to extend the sign and preserve negativity. Left shifts on negative numbers fill with zeros on the right but may cause overflow if the sign bit is shifted out.
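The 8-bit examples can be modeled directly in Python; `to_u8`, `from_u8`, `asl8`, and `asr8` are illustrative helpers for encoding and shifting two's complement patterns, not standard library functions:

```python
def to_u8(x):
    """Encode a signed value as an 8-bit two's complement pattern."""
    return x & 0xFF

def from_u8(p):
    """Reinterpret an 8-bit pattern as a signed value."""
    return p - 256 if p & 0x80 else p

def asl8(x, n):
    """Arithmetic left shift within an 8-bit word (overflow wraps)."""
    return from_u8((to_u8(x) << n) & 0xFF)

def asr8(x, n):
    """Arithmetic right shift; Python's >> already sign-extends."""
    return x >> n
```

For instance, `asl8(10, 1)` gives 20 and `asr8(-10, 1)` gives -5, matching the worked examples above.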
Arithmetic shifts originated in early computer architectures to enable efficient signed arithmetic operations, notably in the IBM System/360 introduced in 1964, where instructions like Shift Right Arithmetic (SRA) and Shift Left Arithmetic (SLA) explicitly preserved the sign bit for fixed-point integers.

Comparison to logical shift

The logical shift operation treats the operand as an unsigned integer, filling any vacated bit positions with zeros during both left and right shifts, regardless of the original sign of the value. In contrast, the arithmetic right shift preserves the sign of signed integers by filling vacated positions on the left with copies of the sign bit (0 for positive, 1 for negative), which maintains the numerical value's intended signed magnitude during division-like operations. This difference becomes critical when processing signed data: a logical shift can inadvertently change the sign or produce an incorrect positive value, while an arithmetic shift ensures sign preservation. To illustrate, consider an 8-bit representation where the value -4 is encoded as 11111100. A right shift by 1 bit using an arithmetic shift fills with the sign bit (1), resulting in 11111110 (-2, preserving the negative sign and halving the magnitude). The same operation using a logical shift fills with 0, yielding 01111110 (126, a positive value that discards the sign).
Value | Binary (8-bit) | Operation | Arithmetic Right Shift Result | Logical Right Shift Result
Signed -4 | 11111100 | Right by 1 | 11111110 (-2) | 01111110 (126)
Confusion between these shifts often arises in mixed signed/unsigned contexts, such as when code intended for unsigned data is applied to signed integers, leading to errors and unpredictable behavior like incorrect negative-to-positive conversions. For instance, right-shifting a negative signed value with a logical shift can yield a large positive number, causing overflows or logical errors in algorithms relying on sign preservation.
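The contrast in the table can be checked with two small helpers that operate on 8-bit patterns; `logic_rshift8` and `arith_rshift8` are illustrative names, not library functions:

```python
def logic_rshift8(pattern, n):
    """Logical right shift of an 8-bit pattern: zero-fill from the left."""
    return (pattern & 0xFF) >> n

def arith_rshift8(pattern, n):
    """Arithmetic right shift of an 8-bit pattern: sign-bit fill."""
    signed = pattern - 256 if pattern & 0x80 else pattern
    return (signed >> n) & 0xFF

# -4 is 0b11111100: the arithmetic shift keeps the sign, the logical
# shift produces a large positive pattern instead.
# arith_rshift8(0b11111100, 1) == 0b11111110  (-2)
# logic_rshift8(0b11111100, 1) == 0b01111110  (126)
```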

Operations

Left shift

The arithmetic left shift operation moves each bit in a binary number to the left by a specified number of positions n, filling the vacated rightmost bits with zeros while discarding any bits shifted out from the left in fixed-width representations such as registers or words. This behavior is identical for both unsigned and signed integers, as the left shift does not depend on the sign bit for insertion. In two's complement representation for signed integers, an arithmetic left shift by n positions is mathematically equivalent to multiplying the original value x by 2^n, as long as the result does not cause overflow within the fixed bit width: x << n = x × 2^n. This equivalence holds because shifting left effectively scales the positional values of the bits by powers of 2, preserving the signed interpretation until overflow intervenes. For example, consider the 8-bit representation of the positive integer 5, which is 00000101₂. Performing a left shift by 2 positions yields 00010100₂, equivalent to 20 in decimal, demonstrating multiplication by 2^2 = 4. Similarly, for the negative integer -5, represented as 11111011₂, a left shift by 1 position results in 11110110₂, which is -10 in two's complement, corresponding to multiplication by 2^1 = 2. Overflow occurs in signed arithmetic if the shifted result exceeds the representable range for the bit width, potentially causing the sign bit to change and flipping the sign of the value (e.g., a positive number becoming negative or vice versa). For instance, in 4-bit two's complement, shifting 7 (0111₂) left by 1 produces 1110₂ (-2), as the overflowed bit sets the sign bit to 1. Such behavior underscores the need to check for overflow in arithmetic computations using shifts.
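The 4-bit overflow example can be verified mechanically; `asl4` below is a hypothetical helper that keeps the result in a 4-bit two's complement word:

```python
def asl4(x, n):
    """Arithmetic left shift in 4-bit two's complement (illustrative)."""
    p = ((x & 0xF) << n) & 0xF         # shift the pattern, keep 4 bits
    return p - 16 if p & 0x8 else p    # reinterpret the pattern as signed

# asl4(2, 1) == 4 (no overflow), but asl4(7, 1) == -2: the overflowed
# bit sets the sign bit, flipping the sign as described above.
```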

Right shift

The arithmetic right shift operation moves the bits of a binary number to the right by a specified number of positions, n, while filling the vacated most significant bits (leftmost) with copies of the original sign bit to preserve the number's sign in two's complement representation. This ensures that positive numbers remain positive and negative numbers remain negative after the shift. The least significant n bits are discarded (shifted out), effectively truncating the lower-order bits. For positive signed integers, an arithmetic right shift by n bits is equivalent to integer division by 2^n, as the operation aligns with standard unsigned right shifting in this case. However, for negative signed integers, the shift performs a division by 2^n rounding toward negative infinity rather than toward zero, due to the sign bit replication which adjusts the magnitude downward. For instance, shifting -5 (binary 11111011 in 8-bit two's complement) right by 1 bit yields -3 (11111101), since -5 divided by 2 is -2.5, and flooring gives -3. Consider an 8-bit example with the positive number 20, represented as 00010100. An arithmetic right shift by 2 bits moves the bits right, fills the left with 0 (the sign bit), and discards the last two bits (00), resulting in 00000101, which is 5—equivalent to 20 divided by 4. For the negative number -20, represented as 11101100, the same shift by 2 bits moves the bits right, fills the left with 1 (the sign bit), and discards the last two bits (00), yielding 11111011, which is -5—consistent with flooring division of -20 by 4 to -5. This discarding of least significant bits in signed arithmetic thus implements a form of division that prioritizes sign preservation over symmetric rounding.
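Because Python integers behave as infinitely sign-extended two's complement values, `>>` reproduces these flooring results directly:

```python
# Floor semantics of the arithmetic right shift on Python ints:
assert 20 >> 2 == 5       # 20 / 4 exactly
assert -20 >> 2 == -5     # -20 / 4 exactly
assert -5 >> 1 == -3      # floor(-2.5), not -2
```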

Properties

Sign preservation

In two's complement representation, the arithmetic right shift operation preserves the sign of a signed integer by replicating the sign bit (the most significant bit) into the vacated positions on the left, ensuring that a negative input remains negative and a positive input remains positive. For example, the 4-bit value -4 (binary 1100) shifted right by 1 bit becomes 1110 (-2), with the '1' replicated to maintain negativity. This replication distinguishes arithmetic right shifts from logical right shifts, which fill with zeros and could alter the sign interpretation. Arithmetic left shifts generally preserve the sign as well, since they fill the least significant bits with zeros and shift the existing bits leftward, but this holds only until overflow occurs, at which point the sign bit may change if the result exceeds the representable range. For instance, in an 8-bit system, shifting -1 (binary 11111111) left by 1 bit yields 11111110 (-2), preserving the negative sign, but further shifts can lead to overflow where the sign bit flips if the magnitude grows beyond the format's limits. Edge cases highlight the behavior clearly: shifting the all-zero pattern (representing 0) right arithmetically yields 0 regardless of shift amount, as the sign bit is 0 and replicates zeros; conversely, shifting the all-one pattern (representing -1) right replicates 1s, keeping the result as -1. In multi-word integers, such as extending a 32-bit signed value to 64 bits, arithmetic right shifts propagate the sign bit across word boundaries via sign extension, filling higher bits with the replicated sign to maintain the correct negative or positive interpretation in the larger format. These sign preservation properties enable efficient implementation of signed arithmetic operations, such as division by powers of 2, without requiring complex full adders for sign handling, as the replicated bits inherently adjust the magnitude while keeping the sign intact.

In contrast, for unsigned integers, the sign bit holds no interpretive meaning, so arithmetic shifts are typically replaced by logical shifts that zero-fill, avoiding unnecessary sign replication.
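The multi-word sign extension described above can be sketched as a small helper; `sign_extend` is an illustrative name, and the widths are parameters rather than anything mandated by a particular architecture:

```python
def sign_extend(pattern, from_bits, to_bits):
    """Widen a from_bits two's complement pattern to to_bits by
    replicating its sign bit into the new high-order positions."""
    sign = pattern >> (from_bits - 1) & 1
    if sign:
        pattern |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return pattern & ((1 << to_bits) - 1)

# Extending 32-bit -4 (0xFFFFFFFC) to 64 bits keeps the value -4:
# sign_extend(0xFFFFFFFC, 32, 64) == 0xFFFFFFFFFFFFFFFC
```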

Relation to multiplication and division

The arithmetic left shift operation by n bits on an integer x, denoted x << n, is exactly equivalent to multiplication by 2^n, assuming no overflow occurs in the representation. This holds for both signed and unsigned integers, as the shift fills the vacated least significant bits with zeros, effectively scaling the value by the power of two without altering the sign bit's influence in two's complement. In contrast, the arithmetic right shift x >> n approximates division by 2^n but implements a floor division, always rounding the quotient toward negative infinity. For nonnegative x, this matches the typical integer division semantics of rounding toward zero. However, for negative x, the result differs due to the flooring behavior; for example, in two's complement, -7 >> 1 = -4, whereas the true division -7 / 2 = -3.5 would truncate to -3 under toward-zero rounding. This discrepancy arises because the sign extension in arithmetic right shift preserves the negative sign while shifting, effectively rounding down rather than toward zero. These differences highlight key non-equivalences in signed arithmetic, particularly in rounding modes across systems. Modern languages like C and C++ specify signed integer division to truncate toward zero, independent of the shift operation's flooring. Arithmetic right shift, however, consistently floors, leading to potential mismatches for negative values unless adjusted. Other modes, such as rounding toward positive or negative infinity, are not natively supported by shifts but can influence division in floating-point contexts or specific hardware. To simulate toward-zero division for n = 1 using arithmetic right shift on negative odd integers, a common adjustment adds 1 post-shift when applicable: (x >> 1) + ((x < 0) && (x & 1) ? 1 : 0). This corrects the flooring bias only in cases where the least significant bit indicates a fractional remainder requiring upward adjustment toward zero.
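The fix-up idiom above translates directly to Python; `div2_toward_zero` is an illustrative helper name:

```python
def div2_toward_zero(x):
    """Divide by 2 rounding toward zero, using a shift plus a fix-up
    that adds 1 back for negative odd inputs."""
    return (x >> 1) + (1 if x < 0 and x & 1 else 0)

# div2_toward_zero(-7) == -3, matching toward-zero division,
# whereas the plain shift -7 >> 1 == -4 (floored).
```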

Implementation

In hardware

Arithmetic shifts are implemented in the arithmetic logic unit (ALU) of processors through dedicated shifter circuits that handle bit manipulations efficiently. These circuits often employ multiplexers (MUXes) arranged in layers to form a barrel shifter, enabling variable shift amounts (n) in a single clock cycle by routing bits through multiple stages of selection. For arithmetic right shifts, sign extension is achieved by wiring the sign bit (most significant bit) of the operand to the input lines that fill the vacated high-order positions, preserving the number's sign during the operation. One of the earliest commercial implementations of arithmetic shifts appeared in the IBM System/360 mainframe architecture, introduced in 1964, which included instructions like Shift Left Arithmetic (SLA) and Shift Right Arithmetic (SRA) in its fixed-point instruction set to support signed integer operations. These instructions were designed to integrate seamlessly with the system's ALU for efficient handling of two's complement arithmetic, marking a significant advancement in hardware support for signed bit shifts. In modern processors, arithmetic shifts are supported via specialized instructions that extend to both scalar and vector operations. For instance, the x86 architecture provides the SAR (Shift Arithmetic Right) instruction, which performs right shifts on registers while filling with the sign bit, and SIMD extensions like MMX and SSE include packed variants such as PSRAW for parallel arithmetic right shifts on multiple data elements. Similarly, RISC architectures like ARM incorporate the ASR (Arithmetic Shift Right) instruction, which differs from its logical counterpart LSR (Logical Shift Right) by sign-extending the operand, enabling efficient signed division in resource-constrained environments.
In contrast to CISC designs like x86, which may combine shifts with other operations in complex instructions, RISC approaches prioritize simple, single-cycle shifts to enhance pipeline efficiency and reduce power consumption in embedded systems.
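The layered-MUX structure of a barrel shifter can be modeled in software: each stage either passes the pattern through or shifts it by a power-of-two amount, selected by one bit of the shift count. This is a toy model for illustration, not a hardware description:

```python
def barrel_asr(pattern, n, width=8):
    """Arithmetic right shift built from log2(width) mux stages,
    mimicking how a barrel shifter routes bits (illustrative model)."""
    sign = pattern >> (width - 1) & 1
    stage = 1
    while stage < width:
        if n & stage:  # this stage's mux selects the shifted path
            fill = ((1 << stage) - 1) * sign << (width - stage)
            pattern = (pattern >> stage) | fill
        stage <<= 1
    return pattern & ((1 << width) - 1)
```

For example, `barrel_asr(0b11110110, 3)` (that is, -10 >> 3) activates the shift-by-1 and shift-by-2 stages in sequence.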

In programming languages

In C and C++, the right-shift operator >> performs an arithmetic shift for signed types on most implementations, preserving the sign by filling vacated positions with copies of the sign bit, while it performs a logical shift for unsigned types by filling with zeros. In C++, this behavior for signed integers was implementation-defined prior to C++20 but has been standardized as arithmetic since then, rounding toward negative infinity for negative values. To ensure logical shifting on signed types, programmers often cast to unsigned explicitly, such as (unsigned int)x >> n. In Java, the >> operator implements an arithmetic right shift for all integer types, sign-extending the result to preserve the sign, while the >>> operator provides a logical right shift by zero-filling regardless of sign. For positive values, this corresponds to truncation toward zero in division by powers of two; for negative values, it rounds toward negative infinity due to sign extension. The Java Language Specification defines this precisely for 32-bit int and 64-bit long types, with the shift amount masked to the appropriate bit width (five bits for int, six for long). Python's >> operator performs an arithmetic right shift on integers, treating them as arbitrary-precision values and filling with the sign bit for negative numbers, equivalent to floor division by 2^n. In JavaScript, bitwise operators including >> convert operands to 32-bit signed integers before operating, resulting in an arithmetic right shift that preserves the sign via sign extension; the result is then converted back to a floating-point number. Portability challenges arise primarily in C, where the behavior of >> on signed negative integers remains implementation-defined (as per ISO C99 and later), potentially leading to logical instead of arithmetic shifts on some platforms, though most common compilers use arithmetic.
Endianness does not affect shift operations directly, but multi-byte integer representations can complicate sign extension across architectures if shifts exceed the type's bit width, often requiring explicit masking of the shift amount to avoid undefined behavior. To achieve portable arithmetic right shifts in C, one common approach is to cast to unsigned, perform the shift, and adjust based on the original sign bit. When using arithmetic right shifts for signed integer division by powers of two, the truncation toward negative infinity for negative values can deviate from toward-zero semantics in some languages, necessitating adjustments for consistent "true" division. For positive or unsigned values seeking rounding up (ceiling division), a portable idiom in C/C++ and similar languages is (x + ((1U << n) - 1)) >> n, which adds the appropriate bias before shifting to handle the truncation correctly without branches. In Java and Python, where arithmetic shifts are reliably defined, such fixes are less critical but may still be applied for specific rounding modes.
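The bias-then-shift ceiling idiom mentioned above works the same way in Python for non-negative inputs; `ceil_div_pow2` is an illustrative helper name:

```python
def ceil_div_pow2(x, n):
    """Ceiling division of a non-negative integer by 2**n: add the
    largest value that still rounds down, then shift."""
    return (x + (1 << n) - 1) >> n

# ceil_div_pow2(10, 2) == 3  (ceil(10/4))
# ceil_div_pow2(8, 2) == 2   (exact, no rounding needed)
```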

Applications

In arithmetic computations

Arithmetic left shifts are commonly employed to perform multiplication by powers of 2, as shifting an integer left by k bits effectively multiplies it by 2^k. For instance, in compiler optimizations, expressions like x × 8 are transformed into x << 3 through strength reduction techniques, which replace more expensive multiply instructions with faster bit shifts. This optimization is enabled at optimization levels such as -O1 and higher in compilers like GCC, reducing execution time in loops and arithmetic-heavy code. Arithmetic right shifts facilitate division by powers of 2, where shifting right by k bits approximates division by 2^k, specifically performing floor division for positive integers without additional adjustments. For positive values, this yields exact results matching integer division semantics, as the operation discards the least significant bits equivalent to the remainder. In cases involving negative numbers, the shift rounds toward negative infinity due to sign extension, requiring adjustments like adding a bias before shifting to achieve toward-zero rounding if needed for exactness. Beyond basic scaling, arithmetic shifts combine with bit masking to enable efficient data alignment and packing in computations, such as extracting a byte with x & 0xFF and then shifting it left by 8 bits to position it in a 32-bit word. Compilers like GCC apply peephole optimizations to recognize and rewrite such patterns, converting sequences of masks and shifts into more efficient instruction streams during code generation. These shift-based techniques offer performance advantages over dedicated multiply and divide instructions, particularly in early processors lacking hardware support for multiplication and division, where shifts execute in fewer cycles. In modern embedded systems, they continue to provide cycle savings and reduced power consumption by avoiding slower division hardware, making them valuable for resource-constrained environments like microcontrollers.
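The mask-and-shift packing pattern can be sketched as follows; `pack16` and `unpack16` are illustrative names for the kind of byte-packing code compilers routinely optimize:

```python
def pack16(hi, lo):
    """Pack two bytes into a 16-bit word with masks and shifts."""
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

def unpack16(w):
    """Recover the two bytes from a 16-bit word."""
    return (w >> 8) & 0xFF, w & 0xFF

# pack16(0x12, 0x34) == 0x1234; unpack16(0x1234) == (0x12, 0x34)
```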

In data representation

Arithmetic shifts play a crucial role in fixed-point arithmetic, where numbers are represented with a fixed binary point position to handle fractional values using integer hardware. In signed fixed-point formats, such as Qm.n (m integer bits, n fractional bits), an arithmetic right shift of the stored integer effectively divides the value by 2 while preserving the sign through sign-bit extension. For example, in a Q1.1 format (1 integer bit, 1 fractional bit, signed, using 3 bits total), the value 1.0 is represented as binary 010 (decimal 2, representing 2/2 = 1.0), and an arithmetic right shift yields binary 001 (decimal 1, representing 1/2 = 0.5). Similarly, for -1.0 (binary 110 in two's complement, decimal -2), the shift produces binary 111 (decimal -1, interpreted as -1/2 = -0.5 in Q1.1), maintaining the negative sign. In multi-precision arithmetic, arithmetic shifts facilitate sign extension when combining multiple words to represent larger signed integers. This involves replicating the sign bit from the most significant word across higher-order words using arithmetic right shifts on zero-padded extensions, ensuring the overall value remains correctly signed without altering magnitude. For instance, extending a 32-bit signed integer to 64 bits requires an arithmetic shift to fill the upper 32 bits with the sign bit, preserving arithmetic properties across precision boundaries. Arithmetic shifts find applications in digital signal processing (DSP) and graphics for scaling signed values, such as pixel intensities or filter coefficients, while retaining the full signed range. In DSP, right shifts align fractional components during multiplications in fixed-point filters, allowing efficient computation of convolutions without floating-point overhead. In graphics, they scale signed color differences or vertex coordinates, maintaining negative values for offsets.
More recently, in machine learning, arithmetic shifts support INT8 quantized models by enabling bit-level scaling during inference, such as dequantizing activations via power-of-two adjustments to reduce computation in frameworks like TensorFlow. A key limitation of arithmetic shifts in fixed-point representation is precision loss from discarding least significant bits during right shifts, which accumulates in repeated operations and cannot be recovered, unlike floating-point formats that adjust exponents dynamically. This introduces quantization noise, particularly for fractional parts, potentially degrading accuracy in iterative DSP algorithms. Sign preservation, as noted in related properties, ensures consistent behavior but does not mitigate this loss.
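The fixed-point halving behavior can be demonstrated in a hypothetical Q4.4 format (8 bits: 4 integer bits, 4 fractional bits), where the stored integer is the real value scaled by 2^4; `q44` and `q44_to_float` are illustrative helpers:

```python
def q44(value):
    """Encode a real value as a Q4.4 fixed-point integer (scale by 16)."""
    return round(value * 16)

def q44_to_float(q):
    """Decode a Q4.4 integer back to a real value."""
    return q / 16

half = q44(1.5) >> 1          # 1.5 stored as 24; 24 >> 1 == 12 -> 0.75
neg_half = q44(-1.5) >> 1     # -24 >> 1 == -12 -> -0.75, sign preserved
```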

