Bitwise operation
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral (considered as a bit string) at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands.
On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources.[1]
Bitwise operators
In the explanations below, any indication of a bit's position is counted from the right (least significant) side, advancing left. For example, the binary value 0001 (decimal 1) has zeroes at every position but the first (i.e., the rightmost) one.
NOT
The bitwise NOT, or bitwise complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value. Bits that are 0 become 1, and those that are 1 become 0. For example:
NOT 0111 (decimal 7) = 1000 (decimal 8)
NOT 10101011 (decimal 171) = 01010100 (decimal 84)
The result is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = -x − 1.
For unsigned integers, the bitwise complement of a number is the "mirror reflection" of the number across the half-way point of the unsigned integer's range. For example, for 8-bit unsigned integers, NOT x = 255 - x, which can be visualized on a graph as a downward line that effectively "flips" an increasing range from 0 to 255, to a decreasing range from 255 to 0. A simple but illustrative example use is to invert a grayscale image where each pixel is stored as an unsigned integer.
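A minimal C sketch of the grayscale-inversion idea follows; the function name and the flat 8-bit pixel buffer are assumptions made for illustration, not part of any particular imaging API.

```c
#include <stdint.h>
#include <stddef.h>

/* Invert an 8-bit grayscale image in place. For unsigned bytes,
   NOT x equals 255 - x, so each pixel is "mirrored" across the
   middle of the 0-255 range. */
void invert_grayscale(uint8_t *pixels, size_t count) {
    for (size_t i = 0; i < count; i++) {
        pixels[i] = (uint8_t)~pixels[i];   /* bitwise complement per pixel */
    }
}
```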
AND
A bitwise AND is a binary operation that takes two equal-length binary representations and performs the logical AND operation on each pair of the corresponding bits. Thus, if both bits in the compared position are 1, the bit in the resulting binary representation is 1 (1 × 1 = 1); otherwise, the result is 0 (1 × 0 = 0 and 0 × 0 = 0). For example:
0101 (decimal 5) AND 0011 (decimal 3) = 0001 (decimal 1)
The operation may be used to determine whether a particular bit is set (1) or cleared (0). For example, given a bit pattern 0011 (decimal 3), to determine whether the second bit is set we use a bitwise AND with a bit pattern containing 1 only in the second bit:
0011 (decimal 3) AND 0010 (decimal 2) = 0010 (decimal 2)
Because the result 0010 is non-zero, we know the second bit in the original pattern was set. This is often called bit masking. (By analogy, the use of masking tape covers, or masks, portions that should not be altered or portions that are not of interest. In this case, the 0 values mask the bits that are not of interest.)
The bitwise AND may be used to clear selected bits (or flags) of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible.
For example, 0110 (decimal 6) can be considered a set of four flags numbered from right to left, where the first and fourth flags are clear (0), and the second and third flags are set (1). The third flag may be cleared by using a bitwise AND with the pattern that has a zero only in the third bit:
0110 (decimal 6) AND 1011 (decimal 11) = 0010 (decimal 2)
Because of this property, it is easy to check whether a binary number is odd or even by examining its lowest bit. Using the example above:
0110 (decimal 6) AND 0001 (decimal 1) = 0000 (decimal 0)
Because 6 AND 1 is zero, 6 is divisible by two and therefore even.
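The masking idioms above can be sketched in C as follows; the flag values mirror the 0110 example and are purely illustrative.

```c
#include <stdio.h>

int main(void) {
    unsigned flags = 0x6;               /* binary 0110, decimal 6 */
    if (flags & 0x2)                    /* test the second bit with a mask */
        printf("second flag is set\n");
    flags &= ~0x4u;                     /* clear the third bit: 0110 AND 1011 = 0010 */
    printf("flags = %u\n", flags);      /* prints 2 */
    printf("6 is %s\n", (6u & 1u) ? "odd" : "even");   /* AND with 1 tests even/odd */
    return 0;
}
```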
OR
A bitwise OR is a binary operation that takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example:
0101 (decimal 5) OR 0011 (decimal 3) = 0111 (decimal 7)
The bitwise OR may be used to set to 1 the selected bits of the register described above. For example, the fourth bit of 0010 (decimal 2) may be set by performing a bitwise OR with the pattern with only the fourth bit set:
0010 (decimal 2) OR 1000 (decimal 8) = 1010 (decimal 10)
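In C, the same bit-setting step looks like this (a small sketch of the 0010 OR 1000 example):

```c
#include <stdio.h>

int main(void) {
    unsigned reg = 0x2;     /* binary 0010, decimal 2 */
    reg |= 0x8;             /* OR with 1000 sets the fourth bit */
    printf("%u\n", reg);    /* prints 10 (binary 1010) */
    return 0;
}
```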
XOR
A bitwise XOR is a binary operation that takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if exactly one of the bits is 1, and 0 if both are 0 or both are 1; in other words, the comparison of two bits yields 1 if they differ and 0 if they are the same. For example:
0101 (decimal 5) XOR 0011 (decimal 3) = 0110 (decimal 6)
The bitwise XOR may be used to invert selected bits in a register (also called toggle or flip). Any bit may be toggled by XORing it with 1. For example, given the bit pattern 0010 (decimal 2) the second and fourth bits may be toggled by a bitwise XOR with a bit pattern containing 1 in the second and fourth positions:
0010 (decimal 2) XOR 1010 (decimal 10) = 1000 (decimal 8)
This technique may be used to manipulate bit patterns representing sets of Boolean states.
Assembly language programmers and optimizing compilers sometimes use XOR as a short-cut to setting the value of a register to zero. Performing XOR on a value against itself always yields zero, and on many architectures this operation requires fewer clock cycles and less memory than loading a zero value and saving it to the register.
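A short C sketch of both uses, bit toggling and the self-XOR zeroing idiom:

```c
#include <stdio.h>

int main(void) {
    unsigned bits = 0x2;    /* binary 0010 */
    bits ^= 0xA;            /* toggle second and fourth bits: 0010 XOR 1010 = 1000 */
    printf("%u\n", bits);   /* prints 8 */

    unsigned r = 12345;
    r ^= r;                 /* XOR of a value with itself is always zero */
    printf("%u\n", r);      /* prints 0 */
    return 0;
}
```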
If the set of bit strings of fixed length n (i.e. machine words) is thought of as an n-dimensional vector space over the field GF(2) (the integers mod 2), then vector addition corresponds to the bitwise XOR.
Mathematical equivalents
Assuming x ≥ y, for the non-negative integers the bitwise operations can be written as follows, where xₙ = ⌊x/2ⁿ⌋ mod 2 is the n-th bit of x (and similarly yₙ), and the sums run over n from 0 to ⌊log₂ x⌋:

- NOT x = 2^N − 1 − x (for an N-bit word)
- x AND y = Σₙ 2ⁿ xₙ yₙ
- x OR y = Σₙ 2ⁿ (xₙ + yₙ − xₙ yₙ)
- x XOR y = Σₙ 2ⁿ (xₙ + yₙ − 2 xₙ yₙ)
Truth table for all binary logical operators
There are 16 possible truth functions of two binary variables; together they define a truth table, known as a two-input lookup table (LUT2), i.e. a Boolean function of order k = 2.
The table below lists each of these operations on two bits p and q, together with its bitwise equivalent:
| # | Operation | p=1, q=1 | p=1, q=0 | p=0, q=1 | p=0, q=0 | Bitwise equivalent |
|---|---|---|---|---|---|---|
| 0 | FALSE | 0 | 0 | 0 | 0 | 0 |
| 1 | NOR | 0 | 0 | 0 | 1 | NOT (p OR q) |
| 2 | Converse nonimplication | 0 | 0 | 1 | 0 | (NOT p) AND q |
| 3 | NOT p | 0 | 0 | 1 | 1 | NOT p |
| 4 | Material nonimplication | 0 | 1 | 0 | 0 | p AND (NOT q) |
| 5 | NOT q | 0 | 1 | 0 | 1 | NOT q |
| 6 | XOR | 0 | 1 | 1 | 0 | p XOR q |
| 7 | NAND | 0 | 1 | 1 | 1 | NOT (p AND q) |
| 8 | AND | 1 | 0 | 0 | 0 | p AND q |
| 9 | XNOR | 1 | 0 | 0 | 1 | NOT (p XOR q) |
| 10 | q | 1 | 0 | 1 | 0 | q |
| 11 | If/then (p implies q) | 1 | 0 | 1 | 1 | (NOT p) OR q |
| 12 | p | 1 | 1 | 0 | 0 | p |
| 13 | Then/if (q implies p) | 1 | 1 | 0 | 1 | p OR (NOT q) |
| 14 | OR | 1 | 1 | 1 | 0 | p OR q |
| 15 | TRUE | 1 | 1 | 1 | 1 | 1 |
The ternary equivalent is LUT3, a Boolean function of order k = 3 (three inputs), giving a table of 256 possible operations; in computing, such an operation is termed a bitwise ternary logic instruction.
Bit shifts
The bit shifts are sometimes considered bitwise operations, because they treat a value as a series of bits rather than as a numerical quantity. In these operations, the digits are moved, or shifted, to the left or right. Registers in a computer processor have a fixed width, so some bits will be "shifted out" of the register at one end, while the same number of bits are "shifted in" from the other end; the differences between bit shift operators lie in how they determine the values of the shifted-in bits.
Bit addressing
If the width of the register (frequently 32 or even 64) is larger than the number of bits (usually 8) of the smallest addressable unit, usually called a byte, the shift operations induce an addressing scheme from the bytes to the bits. The orientations "left" and "right" are taken from the standard writing of numbers in place-value notation, so that a left shift increases and a right shift decreases the value of the number; if the left digits are read first, this corresponds to a big-endian orientation. Disregarding the boundary effects at both ends of the register, arithmetic and logical shift operations behave the same, and a shift by 8 bit positions transports the bit pattern by 1 byte position in the following way:
- Little-endian ordering: a left shift by 8 positions increases the byte address by 1, and a right shift by 8 positions decreases it by 1.
- Big-endian ordering: a left shift by 8 positions decreases the byte address by 1, and a right shift by 8 positions increases it by 1.
Arithmetic shift

In an arithmetic shift (sticky shift), the bits that are shifted out of either end are discarded. In a left arithmetic shift, zeros are shifted in on the right; in a right arithmetic shift, the sign bit (the MSB in two's complement) is shifted in on the left, thus preserving the sign of the operand.
This example uses an 8-bit register, interpreted as two's complement:
00010111 (decimal +23) LEFT-SHIFT = 00101110 (decimal +46)
10010111 (decimal −105) RIGHT-SHIFT = 11001011 (decimal −53)
In the first case, the leftmost digit was shifted past the end of the register, and a new 0 was shifted into the rightmost position. In the second case, the rightmost 1 was shifted out (perhaps into the carry flag), and a new 1 was copied into the leftmost position, preserving the sign of the number. Multiple shifts are sometimes shortened to a single shift by some number of digits. For example:
00010111 (decimal +23) LEFT-SHIFT-BY-TWO = 01011100 (decimal +92)
A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow), while a right arithmetic shift by n of a two's complement value is equivalent to taking the floor of division by 2^n. If the binary number is treated as ones' complement, then the same right-shift operation results in division by 2^n and rounding toward zero.
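The multiply/divide correspondence can be demonstrated in C; note that right-shifting a negative signed value is implementation-defined in C, though on common two's complement targets it is an arithmetic shift.

```c
#include <stdio.h>

int main(void) {
    int x = 23;
    printf("%d\n", x << 2);   /* 23 * 2^2 = 92, valid while no overflow occurs */

    int y = -105;
    printf("%d\n", y >> 1);   /* typically -53, i.e. floor(-105 / 2) */
    return 0;
}
```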
Logical shift
In a logical shift (zero-fill shift), zeros are shifted in to replace the discarded bits. Therefore, the logical and arithmetic left-shifts are exactly the same.
However, as the logical right-shift inserts value 0 bits into the most significant bit, instead of copying the sign bit, it is ideal for unsigned binary numbers, while the arithmetic right-shift is ideal for signed two's complement binary numbers.
Circular shift
Another form of shift is the circular shift, bitwise rotation or bit rotation.
Rotate
In this operation, sometimes called rotate no carry, the bits are "rotated" as if the left and right ends of the register were joined. The value that is shifted in on the right during a left-shift is whatever value was shifted out on the left, and vice versa for a right-shift operation. This is useful if it is necessary to retain all the existing bits, and is frequently used in digital cryptography.
Rotate through carry
Rotate through carry is a variant of the rotate operation, where the bit that is shifted in (on either end) is the old value of the carry flag, and the bit that is shifted out (on the other end) becomes the new value of the carry flag.
A single rotate through carry can simulate a logical or arithmetic shift of one position by setting up the carry flag beforehand. For example, if the carry flag contains 0, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is a logical right-shift, and if the carry flag contains a copy of the sign bit, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is an arithmetic right-shift. For this reason, some microcontrollers, such as low-end PICs, have only rotate and rotate through carry and omit dedicated arithmetic or logical shift instructions.
Rotate through carry is especially useful when performing shifts on numbers larger than the processor's native word size, because if a large number is stored in two registers, the bit that is shifted off one end of the first register must come in at the other end of the second. With rotate-through-carry, that bit is "saved" in the carry flag during the first shift, ready to shift in during the second shift without any extra preparation.
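In C, which has no direct access to the carry flag, the same double-register shift can be sketched with an ordinary variable standing in for the carry; the function below is an illustrative model, not a specific instruction sequence.

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Right-shift a 64-bit value stored as two 32-bit halves by one position.
   The bit falling off the high word is "saved", as a carry flag would
   save it, and shifted into the top of the low word. */
void shift_right_1(uint32_t *hi, uint32_t *lo) {
    uint32_t carry = *hi & 1u;            /* bit shifted out of the high half */
    *hi >>= 1;
    *lo = (*lo >> 1) | (carry << 31);     /* carry enters the low half's MSB */
}

int main(void) {
    uint32_t hi = 0x00000001u, lo = 0x00000000u;    /* the value 2^32 */
    shift_right_1(&hi, &lo);
    printf("%08" PRIX32 "%08" PRIX32 "\n", hi, lo); /* prints 0000000080000000 */
    return 0;
}
```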
In high-level languages
In the C family of languages
In C and C++, the shift operators are "<<" for left shift and ">>" for right shift. The number of places to shift is given as the second operand. For example,
x = y << 2;
assigns x the result of shifting y to the left by two bits, which is equivalent to a multiplication by four.
Shifts can result in implementation-defined behavior or undefined behavior, so care must be taken when using them. The result of shifting by a bit count greater than or equal to the word's size is undefined behavior in C and C++.[2][3] Right-shifting a negative value is implementation-defined and not recommended by good coding practice;[4] the result of left-shifting a signed value is undefined if the result cannot be represented in the result type.[2]
In C#, the right-shift is an arithmetic shift when the first operand is an int or long. If the first operand is of type uint or ulong, the right-shift is a logical shift.[5]
Circular shifts
The C-family of languages lack a rotate operator (although C++20 provides std::rotl and std::rotr), but one can be synthesized from the shift operators. Care must be taken to ensure the statement is well formed to avoid undefined behavior and timing attacks in software with security requirements.[6] For example, a naive implementation that left-rotates a 32-bit unsigned value x by n positions is simply
uint32_t x = ..., n = ...;
uint32_t y = (x << n) | (x >> (32 - n));
However, when n is 0 the right-hand expression (x >> (32 - n)) shifts by 32 bits, which is undefined behavior because 32 - 0 is 32, and 32 is outside the range 0–31 inclusive. A second try might result in
uint32_t x = ..., n = ...;
uint32_t y = n ? (x << n) | (x >> (32 - n)) : x;
where the shift amount is tested to ensure that it does not introduce undefined behavior. However, the branch adds an additional code path and presents an opportunity for timing analysis and attack, which is often not acceptable in high-integrity software.[6] In addition, the code compiles to multiple machine instructions, which is often less efficient than the processor's native instruction.
To avoid the undefined behavior and branches under GCC and Clang, the following is recommended. The pattern is recognized by many compilers, and the compiler will emit a single rotate instruction:[7][8][9]
uint32_t x = ..., n = ...;
uint32_t y = (x << n) | (x >> (-n & 31));
There are also compiler-specific intrinsics implementing circular shifts, like _rotl8, _rotl16, _rotr8, _rotr16 in Microsoft Visual C++. Clang provides some rotate intrinsics for Microsoft compatibility that suffer from the problems above.[9] GCC does not offer rotate intrinsics. Intel also provides x86 intrinsics.
Java
In Java, all integer types are signed, so the "<<" and ">>" operators perform arithmetic shifts. Java adds the operator ">>>" to perform logical right shifts, but since the logical and arithmetic left-shift operations are identical for signed integers, there is no "<<<" operator in Java.
More details of Java shift operators:[10]
- The operators << (left shift), >> (signed right shift), and >>> (unsigned right shift) are called the shift operators.
- The type of the shift expression is the promoted type of the left-hand operand. For example, aByte >>> 2 is equivalent to ((int) aByte) >>> 2.
- If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x1f (0b11111).[11] The shift distance actually used is therefore always in the range 0 to 31, inclusive.
- If the promoted type of the left-hand operand is long, then only the six lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x3f (0b111111).[11] The shift distance actually used is therefore always in the range 0 to 63, inclusive.
- The value of n >>> s is n right-shifted s bit positions with zero-extension.
- In bit and shift operations, the type byte is implicitly converted to int. If the byte value is negative, the highest bit is one, and ones are then used to fill up the extra bits in the int. So byte b1 = -5; int i = b1 | 0x0200; will result in i == -5.
JavaScript
JavaScript's bitwise operators convert their operands to 32-bit integers and evaluate each pair of corresponding bits to 1 or 0.[12]
Pascal
In Pascal, as well as in all its dialects (such as Object Pascal and Standard Pascal), the logical left and right shift operators are "shl" and "shr", respectively. Even for signed integers, shr behaves like a logical shift, and does not copy the sign bit. The number of places to shift is given as the second argument. For example, the following assigns x the result of shifting y to the left by two bits:
x := y shl 2;
Other
- popcount, used in cryptography
- count leading zeros
- Binary-coded decimal
Applications
Bitwise operations are particularly necessary in lower-level programming such as device drivers, low-level graphics, communications protocol packet assembly, and decoding.
Although machines often have efficient built-in instructions for performing arithmetic and logical operations, all these operations can be performed by combining the bitwise operators and zero-testing in various ways.[13] For example, here is a pseudocode implementation of ancient Egyptian multiplication showing how to multiply two arbitrary integers a and b (a greater than b) using only bitshifts and addition:
c ← 0
while b ≠ 0
if (b and 1) ≠ 0
c ← c + a
left shift a by 1
right shift b by 1
return c
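A direct C rendering of this pseudocode follows; unsigned operands keep the shifts well defined, and overflow of the product is the caller's concern.

```c
#include <stdio.h>

unsigned egyptian_multiply(unsigned a, unsigned b) {
    unsigned c = 0;
    while (b != 0) {
        if (b & 1)       /* lowest bit of b selects this power-of-two multiple of a */
            c += a;
        a <<= 1;         /* double a */
        b >>= 1;         /* move to the next bit of b */
    }
    return c;
}

int main(void) {
    printf("%u\n", egyptian_multiply(23, 5));   /* prints 115 */
    return 0;
}
```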
Another example is a pseudocode implementation of addition, showing how to calculate a sum of two integers a and b using bitwise operators and zero-testing:
while a ≠ 0
c ← b and a
b ← b xor a
left shift c by 1
a ← c
return b
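The same loop in C, again using unsigned arithmetic so the shift of the carries is well defined:

```c
#include <stdio.h>

unsigned bitwise_add(unsigned a, unsigned b) {
    while (a != 0) {
        unsigned c = a & b;   /* carries: positions where both bits are 1 */
        b ^= a;               /* sum without carries */
        a = c << 1;           /* carries move one position to the left */
    }
    return b;
}

int main(void) {
    printf("%u\n", bitwise_add(23, 5));   /* prints 28 */
    return 0;
}
```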
Boolean algebra
Sometimes it is useful to simplify complex expressions made up of bitwise operations, for example when writing compilers. The goal of a compiler is to translate a high-level programming language into the most efficient machine code possible. Boolean algebra is used to simplify complex bitwise expressions.
AND
- x & y = y & x
- x & (y & z) = (x & y) & z
- x & 0xFFFF = x[14]
- x & 0 = 0
- x & x = x
OR
- x | y = y | x
- x | (y | z) = (x | y) | z
- x | 0 = x
- x | 0xFFFF = 0xFFFF
- x | x = x
NOT
- ~(~x) = x
XOR
- x ^ y = y ^ x
- x ^ (y ^ z) = (x ^ y) ^ z
- x ^ 0 = x
- x ^ y ^ y = x
- x ^ x = 0
- x ^ 0xFFFF = ~x

Additionally, XOR can be composed using the 3 basic operations (AND, OR, NOT):

- a ^ b = (a | b) & (~a | ~b)
- a ^ b = (a & ~b) | (~a & b)
Others
- x | (x & y) = x
- x & (x | y) = x
- ~(x | y) = ~x & ~y
- ~(x & y) = ~x | ~y
- x | (y & z) = (x | y) & (x | z)
- x & (y | z) = (x & y) | (x & z)
- x & (y ^ z) = (x & y) ^ (x & z)
- x + y = (x ^ y) + ((x & y) << 1)
- x - y = ~(~x + y)
Inverses and solving equations
It can be hard to solve for variables in Boolean algebra, because unlike regular algebra, several operations do not have inverses. Operations without inverses lose some of the original data bits when they are performed, and it is not possible to recover this missing information.
- Has inverse:
  - NOT
  - XOR
  - Rotate left
  - Rotate right
- No inverse:
  - AND
  - OR
  - Shift left
  - Shift right
Order of operations
In C-family languages, bitwise NOT (~) has the highest precedence of the bitwise operators, followed by the shift operators (<< and >>), then AND (&), XOR (^), and finally OR (|). See the main article on operator precedence for a complete list.
References
- ^ "CMicrotek Low-power Design Blog". CMicrotek. Retrieved 2015-08-12.
- ^ a b JTC1/SC22/WG14 N843 "C programming language", section 6.5.7
- ^ "Arithmetic operators - cppreference.com". en.cppreference.com. Retrieved 2016-07-06.
- ^ "INT13-C. Use bitwise operators only on unsigned operands". CERT: Secure Coding Standards. Software Engineering Institute, Carnegie Mellon University. Retrieved 2015-09-07.
- ^ "Operator (C# Reference)". Microsoft. Retrieved 2013-07-14.
- ^ a b "Near constant time rotate that does not violate the standards?". Stack Exchange Network. Retrieved 2015-08-12.
- ^ "Poor optimization of portable rotate idiom". GNU GCC Project. Retrieved 2015-08-11.
- ^ "Circular rotate that does not violate C/C++ standard?". Intel Developer Forums. Retrieved 2015-08-12.
- ^ a b "Constant not propagated into inline assembly, results in "constraint 'I' expects an integer constant expression"". LLVM Project. Retrieved 2015-08-11.
- ^ The Java Language Specification, section 15.19. Shift Operators
- ^ a b "Chapter 15. Expressions". oracle.com.
- ^ "JavaScript Bitwise". W3Schools.com.
- ^ "Synthesizing arithmetic operations using bit-shifting tricks". Bisqwit.iki.fi. 2014-02-15. Retrieved 2014-03-08.
- ^ Throughout this article, 0xFFFF means that all the bits in your data type need to be set to 1. The exact number of bits depends on the width of the data type.
- ^ - is negation here, not subtraction
- ^ - is subtraction here, not negation
External links
- Online Bitwise Calculator supports Bitwise AND, OR and XOR
- XORcat, a tool for bitwise-XOR files/streams
- Division using bitshifts
- "Bitwise Operations Mod N" by Enrique Zeleny, Wolfram Demonstrations Project.
- "Plots Of Compositions Of Bitwise Operations" by Enrique Zeleny, The Wolfram Demonstrations Project.
Bitwise operation
Fundamentals
Definition and Purpose
Bitwise operations are fundamental primitives in computing that manipulate the individual bits of binary numbers, treating them as sequences of 0s and 1s without regard to their interpreted numerical value. These operations enable direct interaction with the binary representation of data, allowing for precise control over bit-level details in integers or other fixed-width data types.[4][5]

The purpose of bitwise operations lies in providing efficient, low-level mechanisms for data manipulation, which are essential for optimizing algorithms, interfacing with hardware, and conserving resources in software development. They facilitate tasks such as extracting specific bits (masking), combining flags into compact structures, and implementing high-performance computations that avoid the overhead of higher-level arithmetic. In embedded systems and performance-critical applications, bitwise operations often outperform equivalent arithmetic approaches due to their simplicity and direct hardware support.[6][7]

Bitwise operations trace their origins to the foundational designs of early digital computers in the 1940s, particularly John von Neumann's 1945 "First Draft of a Report on the EDVAC," which specified a central arithmetic unit capable of performing both arithmetic and logical operations on fixed-length machine words. This inclusion of logical operations, which equate to bitwise manipulations, became integral to the arithmetic logic unit (ALU) in the von Neumann architecture that underpins most modern processors. Unlike arithmetic operations, which process binary data as numerical quantities to yield mathematically meaningful results, bitwise operations focus exclusively on the structural patterns of bits, enabling independent bit-level transformations.[8][9][7]

Binary Representation
In computing, a bit is the fundamental unit of information, representing a binary digit that can hold one of two values: 0 or 1.[10] This binary nature stems from the physical implementation in digital circuits, where a bit is typically stored as a voltage level: low for 0 and high for 1.[11] Bits are grouped into larger units for practical storage and processing; a byte consists of exactly 8 bits, capable of representing 256 distinct values (from 0 to 255 in unsigned form).[12] Beyond bytes, data is often organized into words, which are fixed-size sequences of bits processed as a single unit by a computer's processor, commonly 32 bits or 64 bits in modern architectures, depending on the instruction set.[13][14]

When representing multi-byte values, such as integers spanning multiple bytes, the ordering of bytes in memory introduces the concept of endianness. Big-endian systems store the most significant byte (containing the highest-order bits) at the lowest memory address, followed by less significant bytes in increasing order of address.[15] In contrast, little-endian systems reverse this, placing the least significant byte at the lowest address and the most significant byte at the highest.[16] This byte-ordering convention affects how data is interpreted across different hardware architectures, such as x86 processors (little-endian) versus some network protocols (big-endian).[17]

Integer representations also distinguish between signed and unsigned formats, impacting how bit patterns encode values. Unsigned integers use all bits for magnitude, allowing a range from 0 to 2^n - 1 for n bits. Signed integers, however, reserve the most significant bit as a sign bit (0 for positive, 1 for negative) and employ two's complement encoding for negative numbers to simplify arithmetic operations.[18] In two's complement, a negative number is formed by inverting all bits of its positive counterpart and adding 1, which extends the range to approximately half positive and half negative values (e.g., -128 to 127 for 8 bits).[19] This method ensures that the bit patterns for negative values align seamlessly with unsigned patterns for addition and subtraction, avoiding separate handling for signs.[20]

For illustration, the decimal number 5 is represented in binary as 101, corresponding to 1·2^2 + 0·2^1 + 1·2^0.[21] In fixed-width formats like an 8-bit byte, this is padded with leading zeros to 00000101, preserving the value while filling the available bits.[22] For signed two's complement in 8 bits, -5 would be the inversion of 00000101 (yielding 11111010) plus 1, resulting in 11111011.[19]
Core Bitwise Operators
NOT Operation
The bitwise NOT operation, also known as the one's complement, is a unary operator that inverts the bits of its integer operand, transforming every 0 to 1 and every 1 to 0 in the binary representation.[23] In most programming languages, including C, C++, Java, and Python, this operator is denoted by the tilde symbol (~).[23] In systems using two's complement representation for signed integers, which is the predominant method in modern computer architectures, the bitwise NOT of a number x is mathematically equivalent to -x - 1.[24] This equivalence arises because inverting all bits of x yields the one's complement, and adding 1 would produce the two's complement negation; thus, the inversion alone equals the negation minus 1.[24]

For example, consider an 8-bit representation of the decimal number 5, which is 00000101. Applying the NOT operation yields 11111010, equivalent to 250 in unsigned interpretation but -6 in signed two's complement (since 250 - 256 = -6).[24]

The behavior of the NOT operation differs between unsigned and signed integers. For unsigned types, it simply produces the maximum value of the type minus the operand (e.g., for 8-bit unsigned, ~x = 255 - x), without any sign-related effects.[23] For signed types in two's complement, it inverts the sign bit along with the others, typically flipping the sign of the value.[23] The operation can be summarized in a single-bit truth table:

| Input | Output |
|---|---|
| 0 | 1 |
| 1 | 0 |
AND Operation
The bitwise AND operation is a binary operation that performs a logical AND on each pair of corresponding bits in two binary numbers, producing a result bit of 1 only if both input bits are 1, and 0 otherwise.[25] This operation is commonly denoted by the symbol & in most programming languages and mathematical contexts.[26] It can be viewed as a per-bit multiplication in binary arithmetic, where the operation aligns the bits of the operands by their position and applies the AND logic independently to each pair.
The truth table for the bitwise AND operation, showing all possible input combinations for two bits, is as follows:
| Input A | Input B | A & B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
For example, consider the 8-bit binary values 00000101 (decimal 5) and 00000011 (decimal 3). Applying bitwise AND bit by bit yields 00000001 (decimal 1), as only the least significant bit is 1 in both operands. This demonstrates how the operation clears bits that are not set in both inputs, preserving only the common set bits.
A primary use of the bitwise AND operation is in masking, where it extracts or isolates specific bits from a binary number by ANDing with a mask that has 1s in the desired positions and 0s elsewhere.[26] For instance, to check if a number is even or odd, one can perform the operation with the mask 00000001 (decimal 1), which examines the least significant bit: a result of 0 indicates even, while 1 indicates odd. This technique is efficient for bit-level manipulations in low-level programming and hardware design.[25]
OR Operation
The bitwise OR operation, denoted by the symbol |, is a binary bitwise operator that performs a logical OR on each pair of corresponding bits in two integer operands. For each bit position, the result is 1 if at least one of the input bits is 1, and 0 otherwise.[2][27][28] The truth table for the bitwise OR operation is as follows:

| Input A | Input B | A OR B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

This table illustrates all possible combinations for a single bit pair.[2][27][28] For example, applying bitwise OR to the 8-bit binary representations of 5 (00000101) and 2 (00000010) yields 7 (00000111), as the result combines the 1s from both inputs.[27] A primary use of the bitwise OR operation is setting specific bits in a bit field, such as enabling flags in a configuration register by ORing the value with a mask containing 1s in the desired positions (e.g., x = x | 0x01 sets the least significant bit).[2][28] This is common in low-level programming for tasks like assembling bit-packed structures or activating options in bitmasks.[27][28]
XOR Operation
The bitwise XOR (exclusive OR) operation is a fundamental binary bitwise operator that compares two input bits and outputs 1 only if the bits differ, and 0 if they are the same. This per-bit comparison results in a logical exclusive OR, where the output bit is set when exactly one of the input bits is 1. In digital logic and computing, XOR is defined as performing this operation independently on each corresponding pair of bits in the binary representations of two operands of equal length.[29] The truth table for the two-input XOR operation is as follows:| Input A | Input B | Output (A XOR B) |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
In most programming languages the operation is denoted by the caret symbol ^, which applies the operation bit by bit to integer operands.[30] For example, performing XOR on the 8-bit binary values 00000101 (decimal 5) and 00000011 (decimal 3) yields 00000110 (decimal 6), as the differing bits in positions 1 and 2 (0-based from the right) result in 1s, while matching bits remain 0. Mathematically, bitwise XOR is equivalent to addition modulo 2 without carry for each bit position, aligning it with operations in binary arithmetic and error-correcting codes.[31]
A distinctive property of XOR is that it is self-inverse: applying the operation with the same operand twice returns the original value, expressed as (x ^ y) ^ y = x, since y ^ y = 0 and x ^ 0 = x. This involutory nature enables efficient bit toggling, such as flipping specific bits in a register by XORing with a mask, and underpins its utility in reversible data transformations or basic encryption primitives.[32]
Bit Shifting Operations
Logical Shift
A logical shift is a bitwise operation that shifts the bits of an operand to the left or right by a specified number of positions, filling the empty bit positions with zeros regardless of the operand's sign. In a left logical shift, bits move toward the most significant bit (MSB), with zeros inserted into the least significant bit (LSB) positions; bits shifted beyond the MSB are typically discarded. Conversely, a right logical shift moves bits toward the LSB, filling the new MSB positions with zeros, and discards bits shifted beyond the LSB. This operation treats the binary representation as a simple sequence of bits without regard to numerical sign, making it suitable for unsigned integers.[33][34]

For unsigned integers, a left logical shift by n positions is mathematically equivalent to multiplying the value by 2^n, as each shift effectively doubles the value by repositioning bits to higher place values. Similarly, a right logical shift by n positions corresponds to integer division by 2^n, flooring the result by discarding the least significant bits. These equivalences hold because the operation preserves the unsigned interpretation and avoids sign-related adjustments. For example, consider the 8-bit unsigned binary value 00000101 (decimal 5): shifting left by 1 position yields 00001010 (decimal 10, equivalent to 5 × 2); shifting right by 1 position yields 00000010 (decimal 2, equivalent to ⌊5/2⌋).[35][36]

In programming languages such as C and its derivatives, the left shift operator is denoted by <<, which always performs a logical shift by filling with zeros. The right shift operator >> behaves as a logical shift for unsigned types, ensuring zero-filling of the MSB positions, while for signed types it may perform an arithmetic shift instead. This distinction allows developers to manipulate bit patterns predictably for unsigned data, enabling efficient implementations of multiplication and division by powers of 2 without invoking the processor's arithmetic units. The operation's behavior is defined at the hardware level in instruction sets like x86 and MIPS, where dedicated instructions (e.g., sll for shift left logical and srl for shift right logical) execute the zero-filling semantics.[37][34]
Arithmetic Shift
An arithmetic shift is a bit-shifting operation designed for signed integers in two's complement representation, where the goal is to preserve the number's sign during the shift. In a right arithmetic shift, the bits vacated on the left are filled with copies of the original sign bit (0 for positive numbers and 1 for negative numbers), effectively extending the sign to approximate division by a power of two. The left arithmetic shift behaves identically to a logical left shift, filling vacated bits on the right with zeros, which approximates multiplication by a power of two.[38][39] For example, consider the 8-bit two's complement representation of -5, which is
Arithmetic shifts are particularly useful for efficient signed integer division by powers of two, as the sign-preserving behavior aligns with two's complement arithmetic semantics. In programming languages, the right-shift operator >> often implements arithmetic shifting for signed types; for instance, in Java, >> performs sign extension, while the unsigned right-shift operator >>> fills with zeros regardless of sign. In C, the behavior of >> on signed integers is implementation-defined but commonly arithmetic on two's complement systems.[41][38][42]
However, arithmetic shifts—especially left shifts on signed integers—can introduce issues related to overflow. If a left shift causes the result to exceed the representable range (e.g., shifting a value such that bits enter the sign bit position), it may invoke undefined behavior in languages like C, where the standard does not specify outcomes for such cases. Programmers must ensure shifts do not overflow to avoid portability problems across implementations.[43][36]
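The rounding difference between the arithmetic shift and ordinary division is easy to observe in C, assuming a two's complement target where >> on a negative int is an arithmetic shift (the behavior is implementation-defined):

```c
#include <stdio.h>

int main(void) {
    int x = -5;
    printf("%d\n", x >> 1);   /* typically -3: the shift floors toward -infinity */
    printf("%d\n", x / 2);    /* -2: C integer division truncates toward zero */
    return 0;
}
```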
Circular Shift
A circular shift, also known as a bitwise rotation or bit rotation, is an operation that moves the bits of a binary number in a cyclic manner, where the bits shifted out from one end of the operand are reintroduced at the opposite end, preserving the total number of bits without loss or gain.[44] This contrasts with non-circular shifts, such as logical shifts, which discard the overflow bits and fill the vacated positions with zeros.[44] Circular shifts exist in two primary directions: left (or rotate left), which moves bits toward the most significant bit (MSB) with overflow from the MSB wrapping to the least significant bit (LSB), and right (or rotate right), which moves bits toward the LSB with overflow from the LSB wrapping to the MSB.[44]

There are two main subtypes of circular shifts: pure rotation, which operates solely on the bits within the operand without involving external flags, and rotation through carry, which incorporates a carry flag in the process.[45] In pure rotation, the operation is self-contained; for example, a left rotation shifts all bits leftward, directly inserting the overflow bit(s) from the MSB into the LSB position(s). In contrast, rotation through carry uses the processor's carry flag: the bit shifted out updates the carry flag, and the previous carry flag value is inserted into the vacated position, effectively treating the carry as an extension of the operand.[39]

To illustrate, consider an 8-bit binary value 00010111 (decimal 23). A left circular shift by 1 position yields 00101110 (decimal 46), as the MSB (0) wraps to the LSB, and all other bits move left.[44] Similarly, a right circular shift by 1 position on the same value produces 10001011 (decimal 139), with the LSB (1) wrapping to the MSB.[44]

Circular shifts are commonly implemented in assembly languages using instructions such as ROL (rotate left) and ROR (rotate right) for pure rotations, with variants like RCL and RCR for rotations through carry.[44] They find applications in hash functions, where operations like cyclic shifts help generate pseudorandom outputs efficiently, as seen in chaotic hash algorithms that employ variable circular shifts for one-way compression.[46] Additionally, circular shifts underpin cyclic error-correcting codes, where the code's invariance under cyclic permutations enables efficient encoding and syndrome computation via linear feedback shift registers.[47]

Language Implementations
C Family Languages
In the C family of languages, including C and C++, bitwise operations are fundamental for low-level manipulation of integer types, providing direct control over binary representations. These operations apply to integer operands after integer promotions or usual arithmetic conversions, enabling efficient bit-level computations essential for systems programming. The core bitwise operators are the unary NOT (~), binary AND (&), OR (|), and XOR (^), along with the shift operators left shift (<<) and right shift (>>).[23][48]

The bitwise NOT operator (~) inverts all bits of its operand, equivalent to subtracting the value from one less than the maximum representable value in the promoted type (e.g., for a 32-bit unsigned int, ~x = 0xFFFFFFFF - x). The binary AND (&) sets each bit in the result to 1 only if both corresponding bits in the operands are 1. The OR (|) sets a bit to 1 if at least one corresponding bit in the operands is 1, while XOR (^) sets it to 1 if the bits differ. These operations perform usual arithmetic conversions on operands, promoting them to a common type, typically int or unsigned int, before applying the bitwise logic.[23][48][49]

Shift operations move bits left (<<) or right (>>) by the number of positions specified by the right operand. The left shift fills vacated bits with zeros and discards overflowed bits, effectively multiplying by powers of two for positive values within the type's range. The right shift behaves similarly but fills vacated bits with zeros for unsigned types; for signed integers, the behavior is implementation-defined, often preserving the sign bit (arithmetic shift) to maintain the sign for negative values. In C++, the behavior aligns closely with C, though C++20 unified some shift semantics for consistency.[23][48]

Operator precedence in C family languages places unary ~ at a high level (precedence 2, right-to-left associativity), above multiplicative arithmetic operators (*, /, % at precedence 3). The shifts (<<, >>) follow at precedence 5 (left-to-right), below additive operators (+, - at precedence 4) but above relational operators. The binary bitwise operators then occur at lower levels: & at precedence 8, ^ at 9, and | at 10, all left-to-right, positioned after equality operators (==, != at precedence 7) but before logical AND (&& at precedence 11). This hierarchy ensures bitwise operations have lower priority than arithmetic but higher than assignments (precedence 14, right-to-left). Parentheses are recommended for clarity in mixed expressions.[50][48]

Type considerations are critical: for unary ~ and shifts, the operand undergoes integer promotion to int or unsigned int if smaller. Binary bitwise operators (&, |, ^) use usual arithmetic conversions, balancing signedness and potentially promoting to unsigned if one operand is unsigned. Shifts promote each operand independently; behavior is undefined if the right operand is negative or greater than or equal to the bit width of the promoted left operand (e.g., shifting a 32-bit int by 32 or more). For signed left shifts, if the result is not representable in the type, behavior is undefined per C11 (ISO/IEC 9899:2011, section 6.5.7). These rules promote portability but require careful handling to avoid undefined behavior, especially in C++ where similar constraints apply.[23][49][48]

A simple example demonstrates the AND operation:

#include <stdio.h>
int main(void) {
    int x = 5 & 3; // 5 is 101 in binary, 3 is 011; result is 001 (1)
    printf("%d\n", x); // Outputs: 1
    return 0;
}
Java and JavaScript
In Java, bitwise operations are performed on the integral primitive types—byte, short, int, and long—with int (32 bits) and long (64 bits) being the primary types for such computations. The operators include & for bitwise AND, | for bitwise OR, ^ for bitwise XOR, ~ for bitwise NOT, << for left shift, >> for signed right shift (which preserves the sign bit via arithmetic shift), and >>> for unsigned right shift (which fills the leftmost bits with zeros regardless of the sign). These operations treat operands as two's complement representations, ensuring consistent handling of negative values where the sign bit is propagated in signed shifts.[38][51]
Unlike languages such as C++, Java does not support operator overloading, meaning the bitwise operators always apply directly to the primitive types without custom redefinition for user-defined classes. For example, the expression 0x2222 & 0x000F evaluates to 2 (decimal), as the bitwise AND masks the lower 4 bits. The >>> operator is particularly useful for treating negative int values as unsigned during right shifts, converting them to positive results by zero-filling.[52][38]
In JavaScript, bitwise operators similarly perform computations on 32-bit signed integers, automatically converting non-integer operands (such as floating-point numbers or other types) to signed 32-bit integers before applying the operation, with any excess bits discarded through truncation. The operators are &, |, ^, ~, <<, >> (arithmetic right shift that preserves the sign bit), and >>> (logical right shift that zero-fills). Negative numbers are handled using two's complement representation, consistent with Java's approach. For instance, 5 | 2 evaluates to 7, as the binary operation 0101 | 0010 yields 0111.[53][54]
JavaScript's bitwise operations on standard Number values wrap around the 32-bit boundary; an expression such as Math.pow(2, 53) << 1 first truncates the operand to fit the 32-bit signed integer range, potentially producing unexpected results for values exceeding 2^31 - 1. To handle arbitrary-precision integers, JavaScript provides BigInt support for most bitwise operators (e.g., 5n & 3n results in 1n), but >>> throws a TypeError on BigInt due to the lack of a fixed bit width. The syntax for these operators in both Java and JavaScript derives from the C family languages, facilitating familiar usage in managed environments.[53]
Other Languages
In Pascal, bitwise operations are supported through the keywords and for bitwise AND, or for bitwise OR, xor for bitwise XOR, and not for bitwise NOT, while shifts are handled by shl for left shift and shr for right shift.[55][56] These operations apply to fixed-width integer types, such as the 32-bit Integer in most implementations, where overflow behavior depends on the compiler but typically wraps around.
Python provides bitwise operators including & for AND, | for OR, ^ for XOR, ~ for NOT (inversion), << for left shift, and >> for right shift, which operate on the binary representations of integers.[57] Unlike fixed-width languages, Python's integers are arbitrary-precision, allowing operations on numbers of any size without overflow, though shifts on very large integers may be computationally expensive.
Other languages exhibit variations in bitwise support. Rust mirrors C-style syntax with operators like &, |, ^, !, <<, and >>, but emphasizes safety through type-checked integers that prevent undefined behavior in shifts and overflows via checked variants. In assembly languages, such as x86, bitwise operations are implemented directly via hardware instructions like AND, OR, XOR, NOT, SHL, SHR, and ROL for rotate left, enabling low-level bit manipulation tied to the processor's architecture.[58]
Some languages, like standard SQL, lack native bitwise operators, restricting such functionality to database-specific extensions (e.g., T-SQL's &, |, ^) applied only to integer types, while others limit them to prevent misuse in non-numeric contexts.[59]
Applications
Data Manipulation
Bitwise operations play a crucial role in data manipulation by enabling precise control over individual bits within integers, facilitating tasks such as field extraction, value packing, and status checking without altering unrelated data.[60] Bit masking is a primary technique for extracting or setting specific fields in a data structure, typically using the bitwise AND operator to isolate bits by applying a mask that has 1s in the desired positions and 0s elsewhere. For example, to extract the lowest 8 bits (a byte) from a 32-bit integer value, one performs value & 0xFF, which clears all higher-order bits while retaining the target field intact.[61] To set bits in a field, the bitwise OR operator combines the original value with a mask containing the new bits to insert, ensuring other bits remain unchanged.[60]
Bit fields, or bit packing, allow multiple small values to be stored compactly within a single integer by assigning fixed bit widths to each component, which is particularly useful for memory-constrained environments or when transmitting data. In languages like C, bit fields are explicitly declared in structures to enforce this packing; for instance, a structure might define fields for day (5 bits), month (4 bits), and year (7 bits) within a single 16-bit integer, enabling efficient storage of date components.[62] A common application is packing RGB color values into a 24-bit or 32-bit integer, where the red component (8 bits) can be extracted using (color & 0xFF0000) >> 16, isolating and shifting the relevant bits to form an independent byte.[63]
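A small C sketch of this packing scheme; the red-in-high-byte layout follows the example above, and real pixel formats vary.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack three 8-bit channels into one 24-bit RGB value and extract red. */
uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

int main(void) {
    uint32_t color = pack_rgb(0x12, 0x34, 0x56);
    unsigned red = (color & 0xFF0000u) >> 16;        /* isolate and align the red byte */
    printf("%06X %02X\n", (unsigned)color, red);     /* prints 123456 12 */
    return 0;
}
```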
Flag operations leverage bitwise operations to manage status indicators or boolean options as individual bits within an integer, allowing multiple flags to coexist in a single variable for compact representation. To test a flag, such as determining if a number is even by checking its least significant bit, one uses num & 1, which yields 0 for even numbers and 1 for odd, as the mask targets only that bit.[64] Setting a flag involves ORing the value with a shifted mask, like flags |= (1 << position), while clearing it requires ANDing with the inverted mask, flags &= ~(1 << position).[60]
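In C these three idioms (set, clear, test) look as follows; the flag names are invented for illustration.

```c
#include <stdio.h>

#define FLAG_READY (1u << 0)   /* hypothetical flag bits */
#define FLAG_ERROR (1u << 1)
#define FLAG_DONE  (1u << 2)

int main(void) {
    unsigned flags = 0;
    flags |= FLAG_READY | FLAG_DONE;   /* set two flags with OR */
    flags &= ~FLAG_DONE;               /* clear one flag with AND of the inverted mask */
    if (flags & FLAG_READY)            /* test a flag with AND */
        printf("ready\n");
    printf("5 is %s\n", (5u & 1u) ? "odd" : "even");   /* least-significant-bit test */
    return 0;
}
```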
In modern data serialization, bitwise operations support bit-packing to produce efficient binary formats, as exemplified by Protocol Buffers, where variable-length integer encoding (varint) packs values by storing fewer bytes for smaller numbers, reducing overall payload size during transmission.[65] This approach, combined with packed representations for repeated primitive fields, minimizes storage and bandwidth needs in distributed systems.[66]
Cryptography and Security
Bitwise operations, particularly the exclusive-or (XOR), form the cornerstone of many cryptographic primitives by enabling efficient mixing and diffusion of data while preserving computational security. In the one-time pad encryption scheme, perfect secrecy is achieved by XORing the plaintext message bit-by-bit with a random key of equal length, ensuring that the ciphertext reveals no information about the original message to an eavesdropper, as proven by Claude Shannon in his foundational work on secrecy systems.[67] This property relies on XOR's self-inverse nature, where applying the same operation with the key decrypts the message.[67]

Block ciphers like the Advanced Encryption Standard (AES) incorporate bitwise operations extensively to provide confusion and diffusion across rounds of encryption. AES employs XOR in its AddRoundKey step to combine the state with a round key, alongside byte substitutions in SubBytes and linear mixes in MixColumns that operate on bytes using Galois field arithmetic, all of which contribute to its resistance against cryptanalytic attacks as standardized by NIST.[68] These operations ensure that small changes in the input propagate unpredictably throughout the output, enhancing security for symmetric encryption.

In hash functions such as SHA-256, bitwise operations drive the compression process to produce a fixed-size digest resistant to preimage and collision attacks. The algorithm uses right rotations (ROTR), XOR, and majority functions (MAJ) in its message schedule and compression steps, along with choice (CH) and sigma functions involving AND, XOR, and shifts, to achieve avalanche effects where altering one input bit affects roughly half the output bits, as detailed in the NIST Secure Hash Standard.[69] These bitwise chains promote rapid diffusion, making SHA-256 suitable for integrity verification in protocols like digital signatures.

Cyclic redundancy checks (CRC) leverage XOR and bit shifts for efficient error detection in data transmission and storage. CRC computation treats data as a polynomial over GF(2), performing modulo-2 division via XOR-based shifts with a generator polynomial, which detects burst errors up to the degree of the polynomial with high probability, as originally formalized by W. Wesley Peterson.

In post-quantum cryptography, lattice-based schemes selected by NIST, such as ML-KEM (FIPS 203), increasingly rely on bitwise operations for masking secrets to mitigate side-channel attacks. Implementations use higher-order masking, splitting sensitive values into shares combined via bitwise AND and XOR, to protect against power analysis on lattice operations like number-theoretic transforms, ensuring security in quantum-resistant key encapsulation as standardized in 2024.[70][71] This approach addresses vulnerabilities in arithmetic-heavy lattice computations, aligning with NIST's ongoing standardization efforts through 2025.[72]
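The XOR mechanics behind such schemes can be sketched in a few lines of C. This is a toy illustration, not a secure implementation: the hard-coded key below is invented for the example, and a true one-time pad needs a genuinely random, secret, never-reused key as long as the message.

```c
#include <stddef.h>
#include <stdio.h>

/* XOR a buffer with a key stream. Running the same call again restores the
   plaintext, since (m XOR k) XOR k = m. */
void xor_buffer(unsigned char *msg, const unsigned char *key, size_t n) {
    for (size_t i = 0; i < n; i++)
        msg[i] ^= key[i];
}

int main(void) {
    unsigned char msg[] = "attack at dawn";
    const unsigned char key[14] = { 0x3a, 0x91, 0x5c, 0x07, 0xe2, 0x48, 0xb3,
                                    0x6d, 0x1f, 0xa4, 0x70, 0xc9, 0x22, 0x8e };
    xor_buffer(msg, key, 14);   /* "encrypt" */
    xor_buffer(msg, key, 14);   /* "decrypt": the same operation */
    printf("%s\n", msg);        /* prints: attack at dawn */
    return 0;
}
```

Performance Optimization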
Bitwise operations provide substantial performance advantages in computing due to their implementation as single-cycle instructions on most modern CPUs, with latencies often around one cycle and high throughput, outperforming arithmetic operations or conditional logic that may involve multiple cycles or branch prediction overhead. For instance, bitwise AND can select values without branching, avoiding the penalties of mispredicted branches that can stall pipelines for 10-20 cycles on x86 architectures. Common optimization tricks leverage these operations for efficient code. The XOR swap technique exchanges two variables without a temporary storage using three XOR assignments:x ^= y; y ^= x; x ^= y;, which can be slightly faster in register-bound scenarios by minimizing memory accesses, though compilers often inline equivalent optimizations for standard swaps.[73] For parity computation (determining if the number of set bits is odd or even), successive XOR folds—such as v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1; return v & 1;—reduce the 32-bit word to a single bit in about nine operations, enabling fast checks in hashing or error detection without full population counts.[73] Population count (hamming weight), useful for algorithms like nearest neighbor search, can be computed via parallel prefix sums with shifts and masks, such as v = v - ((v >> 1) & 0x55555555); v = (v & 0x33333333) + ((v >> 2) & 0x33333333); followed by further reductions, achieving constant-time execution in 12 operations comparable to table lookups but without cache misses.[73]
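The parity fold and population count just described can be written out as self-contained C functions; this is a sketch of the classic bit-twiddling formulations quoted above.

```c
#include <stdint.h>
#include <stdio.h>

/* Parity: 1 if an odd number of bits are set, computed by folding with XOR. */
unsigned parity32(uint32_t v) {
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1u;
}

/* Population count via parallel partial sums of 2-bit, 4-bit, 8-bit fields. */
unsigned popcount32(uint32_t v) {
    v = v - ((v >> 1) & 0x55555555u);                  /* sums of bit pairs */
    v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);  /* sums of nibbles */
    v = (v + (v >> 4)) & 0x0F0F0F0Fu;                  /* sums of bytes */
    return (v * 0x01010101u) >> 24;                    /* add the four byte sums */
}

int main(void) {
    printf("parity=%u popcount=%u\n", parity32(7u), popcount32(7u));  /* 1 and 3 */
    return 0;
}
```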
Hardware accelerations further amplify these benefits through parallelism. Intel's Advanced Vector Extensions (AVX) and AVX-512 support bitwise operations on 256- or 512-bit vectors, allowing simultaneous processing of multiple 32- or 64-bit elements in a single instruction, which can yield 4-8x speedups in vectorizable workloads like data compression or image processing compared to scalar code.[74] On GPUs, bitwise operations benefit from massive parallelism across thousands of cores, with 32-bit integer bitwise throughput matching floating-point rates on NVIDIA architectures, enabling up to 10-100x gains in bulk operations like cryptographic primitives or bit-parallel string matching over CPU equivalents.[75] The ARM Scalable Vector Extension (SVE), introduced in Armv8-A and extended in 2025 updates, provides scalable bitwise operations on vectors from 128 to 2048 bits, facilitating hardware-agnostic vectorization for high-performance computing tasks such as genomics or simulations, with performance scaling linearly with vector length.[76] Bitwise shifts also serve as a related shortcut for multiplication and division by powers of two, which compilers automatically convert to shifts for 2-5x latency reductions in inner loops.
Theoretical Aspects
Boolean Algebra Integration
Bitwise operations are fundamentally rooted in Boolean algebra, where individual bits represent truth values (0 for false, 1 for true), enabling the manipulation of binary data through logical functions. This integration was pioneered by Claude Shannon in his 1938 master's thesis, which demonstrated how Boolean algebra could model the behavior of electrical switching circuits, laying the groundwork for digital computation by treating circuit states as binary propositions.

In this framework, bitwise operators directly correspond to Boolean logical connectives: the bitwise AND (&) maps to conjunction (∧), producing 1 only if both inputs are 1; bitwise OR (|) maps to disjunction (∨), producing 1 if at least one input is 1; bitwise NOT (~) maps to negation (¬), inverting the input bit; and bitwise XOR (^) maps to exclusive or, producing 1 if the inputs differ. These mappings allow bits to function as atomic propositions in Boolean expressions, with operations preserving the algebra's semantic structure.[77]

When applied to multi-bit integers, bitwise operations extend Boolean algebra pointwise across each corresponding bit position, treating the operands as vectors of independent Boolean variables. For instance, the bitwise AND of two n-bit numbers computes the conjunction for each bit pair separately, yielding an n-bit result whose i-th bit is xᵢ ∧ yᵢ. This parallelism enables efficient vectorized logical processing in hardware and software.[77][25]

Boolean algebraic laws apply seamlessly to these operations, facilitating simplification and optimization. Commutativity holds for AND and OR, such that x & y = y & x and x | y = y | x, allowing operand reordering without altering results. Associativity similarly applies, as (x & y) & z = x & (y & z) and (x | y) | z = x | (y | z), supporting grouped evaluations in expressions. These properties, inherited from the underlying algebra, ensure bitwise computations remain consistent and predictable across bit positions.[25]

Properties and Identities
Bitwise operations on integers, interpreted as bit vectors, satisfy the same algebraic properties as operations in Boolean algebra, where AND corresponds to conjunction (∧), OR to disjunction (∨), NOT to negation (¬), and XOR to exclusive or.[78] These properties hold bit by bit across the operands, enabling simplification of expressions involving bitwise operators.[79]

Idempotence is a fundamental property of the AND and OR operations: for any bit vector x, x & x = x and x | x = x. This means applying AND or OR with itself yields the original value, reflecting the idempotent nature of conjunction and disjunction in Boolean algebra.[79] The absorption laws further simplify expressions: x | (x & y) = x and x & (x | y) = x. These identities arise because any bit set in x dominates the result when combined with bits from y, absorbing the influence of the secondary operand.[80]

De Morgan's laws provide dualities involving negation: ~(x & y) = ~x | ~y and ~(x | y) = ~x & ~y. These allow rewriting negated conjunctions as disjunctions of negations (and vice versa), facilitating equivalence transformations in bitwise expressions.[79]

Distributivity holds for AND over OR: x & (y | z) = (x & y) | (x & z), and dually for OR over AND: x | (y & z) = (x | y) & (x | z). This mirrors the distributive property in Boolean lattices, allowing bitwise operations to factor across alternatives.[81]

The XOR operation exhibits an inverse property due to its group structure under addition modulo 2: if z = x ^ y, then x = z ^ y and y = z ^ x. Thus, XOR is its own inverse, enabling reversible computations such as swapping values without temporary storage via x ^= y; y ^= x; x ^= y.[82]

In linear algebra over the finite field GF(2), bitwise operations extend naturally to bit vectors, where addition is XOR and scalar multiplication (by 0 or 1) is AND. This framework supports efficient matrix operations, such as Gaussian elimination on binary matrices, using bitwise parallelism for solving systems modulo 2.[83] For instance, vector addition corresponds to componentwise XOR, preserving the vector space axioms over GF(2).[84]

Order of Operations
In programming languages that support bitwise operations, such as those in the C family and Java, the order of evaluation follows a strict precedence hierarchy among the operators. The bitwise NOT operator (~) has the highest precedence, binding more tightly than any other bitwise operator. This is followed by the shift operators (<< and >>), which have precedence between arithmetic operators like addition and subtraction on one hand, and relational operators on the other. The bitwise AND (&) comes next, with higher precedence than the bitwise XOR (^), which in turn precedes the bitwise OR (|).[50][85] Associativity determines how operators of equal precedence are grouped in expressions. For most bitwise operators, including &, ^, |, <<, and >>, associativity is left-to-right, meaning that in an expression likex & y & z, it is evaluated as (x & y) & z. The unary NOT operator (~), however, is right-to-left associative, so ~~x is parsed as ~(~x). This left-to-right convention applies consistently across C, Java, and similar languages, ensuring predictable parsing without ambiguity for binary operations.[50][85][86]
Parentheses are essential to override these default rules and enforce a desired order. For instance, the expression ~x & y evaluates as (~x) & y due to the high precedence of ~, potentially yielding different results from ~(x & y), which requires explicit grouping to apply NOT after the AND. Without parentheses, unintended groupings can lead to errors in bit manipulation tasks.[50][85]
While consistent in languages like C and Java, the precedence of bitwise operations integrates with broader order-of-operations rules, such as PEMDAS (parentheses, exponents, multiplication/division, addition/subtraction), where bitwise shifts occur after additive arithmetic but before comparisons. In mathematical contexts, such as Boolean algebra, precedence is more explicitly defined for logical equivalents: NOT holds the highest priority, followed by AND, then OR, with XOR often aligned at the level of OR; shifts are not typically part of these formal systems.[50][85][87]




