Bitwise operation

from Wikipedia

In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral (considered as a bit string) at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands.

On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources.[1]

Bitwise operators

In the explanations below, any indication of a bit's position is counted from the right (least significant) side, advancing left. For example, the binary value 0001 (decimal 1) has zeroes at every position but the first (i.e., the rightmost) one.

NOT

The bitwise NOT, or bitwise complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value. Bits that are 0 become 1, and those that are 1 become 0. For example:

NOT 0111  (decimal 7)
  = 1000  (decimal 8)
NOT 10101011  (decimal 171)
  = 01010100  (decimal 84)

The result is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = -x − 1.

For unsigned integers, the bitwise complement of a number is the "mirror reflection" of the number across the half-way point of the unsigned integer's range. For example, for 8-bit unsigned integers, NOT x = 255 - x, which can be visualized on a graph as a downward line that effectively "flips" an increasing range from 0 to 255, to a decreasing range from 255 to 0. A simple but illustrative example use is to invert a grayscale image where each pixel is stored as an unsigned integer.
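As a minimal C sketch of this complement, using the 8-bit example values from above (the cast keeps the promoted result within 8 bits):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 171;          /* 10101011 in binary */
    uint8_t y = (uint8_t)~x;  /* bitwise NOT: 01010100, i.e. 84 = 255 - 171 */
    printf("%u\n", y);        /* prints 84 */
    return 0;
}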

AND

(Figure: Bitwise AND of 4-bit integers)

A bitwise AND is a binary operation that takes two equal-length binary representations and performs the logical AND operation on each pair of the corresponding bits. Thus, if both bits in the compared position are 1, the bit in the resulting binary representation is 1 (1 × 1 = 1); otherwise, the result is 0 (1 × 0 = 0 and 0 × 0 = 0). For example:

    0101 (decimal 5)
AND 0011 (decimal 3)
  = 0001 (decimal 1)

The operation may be used to determine whether a particular bit is set (1) or cleared (0). For example, given a bit pattern 0011 (decimal 3), to determine whether the second bit is set we use a bitwise AND with a bit pattern containing 1 only in the second bit:

    0011 (decimal 3)
AND 0010 (decimal 2)
  = 0010 (decimal 2)

Because the result 0010 is non-zero, we know the second bit in the original pattern was set. This is often called bit masking. (By analogy, the use of masking tape covers, or masks, portions that should not be altered or portions that are not of interest. In this case, the 0 values mask the bits that are not of interest.)

The bitwise AND may be used to clear selected bits (or flags) of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible.

For example, 0110 (decimal 6) can be considered a set of four flags numbered from right to left, where the first and fourth flags are clear (0), and the second and third flags are set (1). The third flag may be cleared by using a bitwise AND with the pattern that has a zero only in the third bit:

    0110 (decimal 6)
AND 1011 (decimal 11)
  = 0010 (decimal 2)

Because of this property, it becomes easy to check the parity of a binary number by checking the value of the lowest valued bit. Using the example above:

    0110 (decimal 6)
AND 0001 (decimal 1)
  = 0000 (decimal 0)

Because 6 AND 1 is zero, 6 is divisible by two and therefore even.
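These masking idioms might be written in C as follows (a small sketch; the masks mirror the examples above):

#include <stdio.h>

int main(void) {
    unsigned flags = 0x6;                              /* 0110: second and third flags set */
    if (flags & 0x2)                                   /* test the second bit (mask 0010) */
        printf("second flag is set\n");
    printf("even: %s\n", (flags & 1) ? "no" : "yes");  /* 6 AND 1 = 0, so 6 is even */
    flags &= 0xB;                                      /* clear the third bit (mask 1011): 0010 */
    printf("flags = %u\n", flags);                     /* prints 2 */
    return 0;
}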

OR

(Figure: Bitwise OR of 4-bit integers)

A bitwise OR is a binary operation that takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example:

   0101 (decimal 5)
OR 0011 (decimal 3)
 = 0111 (decimal 7)

The bitwise OR may be used to set to 1 the selected bits of the register described above. For example, the fourth bit of 0010 (decimal 2) may be set by performing a bitwise OR with the pattern with only the fourth bit set:

   0010 (decimal 2)
OR 1000 (decimal 8)
 = 1010 (decimal 10)
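In C, the same flag-setting step might be sketched as:

#include <stdio.h>

int main(void) {
    unsigned reg = 0x2;   /* 0010 (decimal 2) */
    reg |= 0x8;           /* set the fourth bit: 1010 (decimal 10) */
    printf("%u\n", reg);  /* prints 10 */
    return 0;
}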

XOR

(Figure: Bitwise XOR of 4-bit integers)

A bitwise XOR is a binary operation that takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if only one of the bits is 1, but will be 0 if both are 0 or both are 1. In other words, each pair of bits is compared, giving 1 if the two bits are different, and 0 if they are the same. For example:

    0101 (decimal 5)
XOR 0011 (decimal 3)
  = 0110 (decimal 6)

The bitwise XOR may be used to invert selected bits in a register (also called toggle or flip). Any bit may be toggled by XORing it with 1. For example, given the bit pattern 0010 (decimal 2) the second and fourth bits may be toggled by a bitwise XOR with a bit pattern containing 1 in the second and fourth positions:

    0010 (decimal 2)
XOR 1010 (decimal 10)
  = 1000 (decimal 8)

This technique may be used to manipulate bit patterns representing sets of Boolean states.

Assembly language programmers and optimizing compilers sometimes use XOR as a short-cut to setting the value of a register to zero. Performing XOR on a value against itself always yields zero, and on many architectures this operation requires fewer clock cycles and less memory than loading a zero value and saving it to the register.
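A small C sketch of the toggling and zeroing idioms described above:

#include <stdio.h>

int main(void) {
    unsigned reg = 0x2;   /* 0010 (decimal 2) */
    reg ^= 0xA;           /* toggle second and fourth bits: 1000 (decimal 8) */
    printf("%u\n", reg);  /* prints 8 */
    reg ^= reg;           /* XOR of a value with itself is always zero */
    printf("%u\n", reg);  /* prints 0 */
    return 0;
}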

If the set of bit strings of fixed length n (i.e. machine words) is thought of as an n-dimensional vector space over the field GF(2), then vector addition corresponds to the bitwise XOR.

Mathematical equivalents

For the non-negative integers represented with n bits, the bitwise operations can be written in terms of ordinary arithmetic as follows:

NOT x = 2^n − 1 − x
x AND y = sum from i = 0 to n − 1 of 2^i · (⌊x/2^i⌋ mod 2) · (⌊y/2^i⌋ mod 2)
x OR y = x + y − (x AND y)
x XOR y = x + y − 2·(x AND y)

The last two identities follow bit by bit from a OR b = a + b − ab and a XOR b = a + b − 2ab for single bits a and b.
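These arithmetic forms can be spot-checked in C; a small sketch with arbitrary test values:

#include <assert.h>

int main(void) {
    unsigned x = 53, y = 21;                 /* arbitrary non-negative test values */
    assert((x | y) == x + y - (x & y));      /* OR expressed via +, -, and AND */
    assert((x ^ y) == x + y - 2 * (x & y));  /* XOR expressed via +, -, and AND */
    return 0;
}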

Truth table for all binary logical operators

There are 16 possible truth functions of two binary variables; together they define a truth table, termed a LUT2 lookup table (a Boolean function of order k = 2, i.e. with two inputs).

Here are the bitwise equivalents of the 16 operations on two bits p and q. Each row gives the operation's output for the four input combinations (p, q):

 #   Name     p,q = 1,1  1,0  0,1  0,0   Bitwise equivalent
 0   F            0       0    0    0    0
 1   NOR          0       0    0    1    NOT (p OR q)
 2   Xq           0       0    1    0    (NOT p) AND q
 3   ¬p           0       0    1    1    NOT p
 4   Xp           0       1    0    0    p AND (NOT q)
 5   ¬q           0       1    0    1    NOT q
 6   XOR          0       1    1    0    p XOR q
 7   NAND         0       1    1    1    NOT (p AND q)
 8   AND          1       0    0    0    p AND q
 9   XNOR         1       0    0    1    NOT (p XOR q)
10   q            1       0    1    0    q
11   If/then      1       0    1    1    (NOT p) OR q
12   p            1       1    0    0    p
13   Then/if      1       1    0    1    p OR (NOT q)
14   OR           1       1    1    0    p OR q
15   T            1       1    1    1    1

The ternary equivalent is a LUT3 Boolean function of order k = 3 (three inputs), yielding a table of 256 possible operations; in computing this is termed a bitwise ternary logic instruction.

Bit shifts

The bit shifts are sometimes considered bitwise operations, because they treat a value as a series of bits rather than as a numerical quantity. In these operations, the digits are moved, or shifted, to the left or right. Registers in a computer processor have a fixed width, so some bits will be "shifted out" of the register at one end, while the same number of bits are "shifted in" from the other end; the differences between bit shift operators lie in how they determine the values of the shifted-in bits.

Bit addressing

If the width of the register (frequently 32 or even 64) is larger than the number of bits (usually 8) of the smallest addressable unit, frequently called a byte, the shift operations induce an addressing scheme from the bytes to the bits. The orientations "left" and "right" are taken from the standard writing of numbers in place-value notation, such that a left shift increases and a right shift decreases the value of the number; if the left digits are read first, this makes up a big-endian orientation. Disregarding the boundary effects at both ends of the register, arithmetic and logical shift operations behave the same, and a shift by 8 bit positions transports the bit pattern by 1 byte position in the following way:

Little-endian ordering: a left shift by 8 positions increases the byte address by 1,
a right shift by 8 positions decreases the byte address by 1.
Big-endian ordering: a left shift by 8 positions decreases the byte address by 1,
a right shift by 8 positions increases the byte address by 1.

Arithmetic shift

(Figures: left and right arithmetic shift)

In an arithmetic shift (sticky shift), the bits that are shifted out of either end are discarded. In a left arithmetic shift, zeros are shifted in on the right; in a right arithmetic shift, the sign bit (the MSB in two's complement) is shifted in on the left, thus preserving the sign of the operand.

This example uses an 8-bit register, interpreted as two's complement:

   00010111 (decimal +23) LEFT-SHIFT
=  00101110 (decimal +46)
   10010111 (decimal −105) RIGHT-SHIFT
=  11001011 (decimal −53)

In the first case, the leftmost digit was shifted past the end of the register, and a new 0 was shifted into the rightmost position. In the second case, the rightmost 1 was shifted out (perhaps into the carry flag), and a new 1 was copied into the leftmost position, preserving the sign of the number. Multiple shifts are sometimes shortened to a single shift by some number of digits. For example:

   00010111 (decimal +23) LEFT-SHIFT-BY-TWO
=  01011100 (decimal +92)

A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow), while a right arithmetic shift by n of a two's complement value is equivalent to taking the floor of division by 2^n. If the binary number is treated as ones' complement, then the same right-shift operation results in division by 2^n and rounding toward zero.
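In C this correspondence might be exercised as follows (a sketch; note that right-shifting a negative signed value is implementation-defined in C, though commonly arithmetic on two's complement machines):

#include <stdio.h>

int main(void) {
    int a = 23;
    int b = -105;
    printf("%d\n", a << 2);  /* 92: same as 23 * 4 */
    printf("%d\n", b >> 1);  /* -53: floor(-105 / 2), assuming >> is an
                                arithmetic shift for signed values */
    return 0;
}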

Logical shift

(Figures: left and right logical shift)

In a logical shift (zero fill shift), zeros are shifted in to replace the discarded bits. Therefore, the logical and arithmetic left-shifts are exactly the same.

However, as the logical right-shift inserts value 0 bits into the most significant bit, instead of copying the sign bit, it is ideal for unsigned binary numbers, while the arithmetic right-shift is ideal for signed two's complement binary numbers.

Circular shift

Another form of shift is the circular shift, bitwise rotation or bit rotation.

Rotate

(Figures: left and right circular shift, or rotate)

In this operation, sometimes called rotate no carry, the bits are "rotated" as if the left and right ends of the register were joined. The value that is shifted into the right during a left-shift is whatever value was shifted out on the left, and vice versa for a right-shift operation. This is useful if it is necessary to retain all the existing bits, and is frequently used in digital cryptography.

Rotate through carry

(Figures: left and right rotate through carry)

Rotate through carry is a variant of the rotate operation, where the bit that is shifted in (on either end) is the old value of the carry flag, and the bit that is shifted out (on the other end) becomes the new value of the carry flag.

A single rotate through carry can simulate a logical or arithmetic shift of one position by setting up the carry flag beforehand. For example, if the carry flag contains 0, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is a logical right-shift, and if the carry flag contains a copy of the sign bit, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is an arithmetic right-shift. For this reason, some microcontrollers such as low end PICs just have rotate and rotate through carry, and don't bother with arithmetic or logical shift instructions.

Rotate through carry is especially useful when performing shifts on numbers larger than the processor's native word size, because if a large number is stored in two registers, the bit that is shifted off one end of the first register must come in at the other end of the second. With rotate-through-carry, that bit is "saved" in the carry flag during the first shift, ready to shift in during the second shift without any extra preparation.
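C provides no direct access to the carry flag, but the same bookkeeping can be done in software. The following sketch (names and values illustrative) right-shifts a 64-bit value held in two 32-bit words by one bit, carrying the boundary bit by hand:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t hi = 0x00000001, lo = 0x00000000;  /* the 64-bit value 0x1_00000000 */
    uint32_t carry = hi & 1;                    /* bit shifted out of the high word... */
    hi >>= 1;
    lo = (lo >> 1) | (carry << 31);             /* ...is shifted into the low word */
    printf("%08x %08x\n", (unsigned)hi, (unsigned)lo);  /* 00000000 80000000 */
    return 0;
}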

In high-level languages

In the C family of languages

In the C and C++ languages, the shift operators are "<<" for left shift and ">>" for right shift. The number of places to shift is given as the second argument to the operator. For example,

x = y << 2;

assigns x the result of shifting y to the left by two bits, which is equivalent to a multiplication by four.

Shifts can result in implementation-defined behavior or undefined behavior, so care must be taken when using them. The result of shifting by a bit count greater than or equal to the word's size is undefined behavior in C and C++.[2][3] Right-shifting a negative value is implementation-defined and not recommended by good coding practice;[4] the result of left-shifting a signed value is undefined if the result cannot be represented in the result type.[2]

In C#, the right-shift is an arithmetic shift when the first operand is an int or long. If the first operand is of type uint or ulong, the right-shift is a logical shift.[5]

Circular shifts

The C family of languages lacks a rotate operator (although C++20 provides std::rotl and std::rotr), but one can be synthesized from the shift operators. Care must be taken to ensure the statement is well formed to avoid undefined behavior and timing attacks in software with security requirements.[6] For example, a naive implementation that left-rotates a 32-bit unsigned value x by n positions is simply

uint32_t x = ..., n = ...;
uint32_t y = (x << n) | (x >> (32 - n));

However, a shift by 0 bits results in undefined behavior in the right-hand expression (x >> (32 - n)) because 32 - 0 is 32, and 32 is outside the range 0–31 inclusive. A second try might result in

uint32_t x = ..., n = ...;
uint32_t y = n ? (x << n) | (x >> (32 - n)) : x;

where the shift amount is tested to ensure that it does not introduce undefined behavior. However, the branch adds an additional code path and presents an opportunity for timing analysis and attack, which is often not acceptable in high-integrity software.[6] In addition, the code compiles to multiple machine instructions, which is often less efficient than the processor's native instruction.

To avoid the undefined behavior and branches under GCC and Clang, the following is recommended. The pattern is recognized by many compilers, and the compiler will emit a single rotate instruction:[7][8][9]

uint32_t x = ..., n = ...;
uint32_t y = (x << n) | (x >> (-n & 31));

There are also compiler-specific intrinsics implementing circular shifts, like _rotl8, _rotl16, _rotr8, _rotr16 in Microsoft Visual C++. Clang provides some rotate intrinsics for Microsoft compatibility that suffer from the problems above.[9] GCC does not offer rotate intrinsics. Intel also provides x86 intrinsics.

Java

In Java, all integer types are signed, so the "<<" and ">>" operators perform arithmetic shifts. Java adds the operator ">>>" to perform logical right shifts, but since the logical and arithmetic left-shift operations are identical for signed integers, there is no "<<<" operator in Java.

More details of Java shift operators:[10]

  • The operators << (left shift), >> (signed right shift), and >>> (unsigned right shift) are called the shift operators.
  • The type of the shift expression is the promoted type of the left-hand operand. For example, aByte >>> 2 is equivalent to ((int) aByte) >>> 2.
  • If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x1f (0b11111).[11] The shift distance actually used is therefore always in the range 0 to 31, inclusive.
  • If the promoted type of the left-hand operand is long, then only the six lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x3f (0b111111).[11] The shift distance actually used is therefore always in the range 0 to 63, inclusive.
  • The value of n >>> s is n right-shifted s bit positions with zero-extension.
  • In bit and shift operations, the type byte is implicitly converted to int. If the byte value is negative (its highest bit is one), then ones are used to fill up the extra bits in the int. So byte b1 = -5; int i = b1 | 0x0200; will result in i == -5.

JavaScript

JavaScript's bitwise operators convert their operands to 32-bit signed integers and then operate on each pair of corresponding bits.[12]

Pascal

In Pascal, as well as in all its dialects (such as Object Pascal and Standard Pascal), the logical left and right shift operators are "shl" and "shr", respectively. Even for signed integers, shr behaves like a logical shift, and does not copy the sign bit. The number of places to shift is given as the second argument. For example, the following assigns x the result of shifting y to the left by two bits:

x := y shl 2;


Applications

Bitwise operations are particularly necessary in lower-level programming such as device drivers, low-level graphics, communications protocol packet assembly, and decoding.

Although machines often have efficient built-in instructions for performing arithmetic and logical operations, all these operations can be performed by combining the bitwise operators and zero-testing in various ways.[13] For example, here is a pseudocode implementation of ancient Egyptian multiplication showing how to multiply two arbitrary integers a and b (a greater than b) using only bitshifts and addition:

c ← 0
while b ≠ 0
    if (b and 1) ≠ 0
        c ← c + a
    left shift a by 1
    right shift b by 1
return c
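A direct C rendering of this pseudocode might look like the following (unsigned operands assumed):

unsigned multiply(unsigned a, unsigned b) {
    unsigned c = 0;
    while (b != 0) {
        if ((b & 1) != 0)
            c = c + a;  /* accumulate a wherever b has a set bit */
        a <<= 1;        /* double a */
        b >>= 1;        /* halve b */
    }
    return c;
}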

Another example is a pseudocode implementation of addition, showing how to calculate a sum of two integers a and b using bitwise operators and zero-testing:

while a ≠ 0
    c ← b and a
    b ← b xor a
    left shift c by 1
    a ← c
return b
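Rendered in C under the same assumptions:

unsigned add(unsigned a, unsigned b) {
    while (a != 0) {
        unsigned c = b & a;  /* carries: positions where both bits are set */
        b = b ^ a;           /* sum without carries */
        a = c << 1;          /* shift the carries into the next position */
    }
    return b;
}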

Boolean algebra

Sometimes it is useful to simplify complex expressions made up of bitwise operations, for example when writing compilers. The goal of a compiler is to translate a high-level programming language into the most efficient machine code possible. Boolean algebra is used to simplify complex bitwise expressions.

AND

  • x & y = y & x
  • x & (y & z) = (x & y) & z
  • x & 0xFFFF = x[14]
  • x & 0 = 0
  • x & x = x

OR

  • x | y = y | x
  • x | (y | z) = (x | y) | z
  • x | 0 = x
  • x | 0xFFFF = 0xFFFF
  • x | x = x

NOT

  • ~(~x) = x

XOR

  • x ^ y = y ^ x
  • x ^ (y ^ z) = (x ^ y) ^ z
  • x ^ 0 = x
  • x ^ y ^ y = x
  • x ^ x = 0
  • x ^ 0xFFFF = ~x

Additionally, XOR can be composed using the 3 basic operations (AND, OR, NOT):

  • a ^ b = (a | b) & (~a | ~b)
  • a ^ b = (a & ~b) | (~a & b)

Others

  • x | (x & y) = x
  • x & (x | y) = x
  • ~(x | y) = ~x & ~y
  • ~(x & y) = ~x | ~y
  • x | (y & z) = (x | y) & (x | z)
  • x & (y | z) = (x & y) | (x & z)
  • x & (y ^ z) = (x & y) ^ (x & z)
  • x + y = (x ^ y) + ((x & y) << 1)
  • x - y = ~(~x + y)

Inverses and solving equations


It can be hard to solve for variables in Boolean algebra, because unlike regular algebra, several operations do not have inverses. Operations without inverses lose some of the original data bits when they are performed, and it is not possible to recover this missing information.

  • Has inverse
    • NOT
    • XOR
    • Rotate left
    • Rotate right
  • No inverse
    • AND
    • OR
    • Shift left
    • Shift right

Order of operations

Operations at the top of this list are executed first. In the C family, the bitwise and shift operators rank as follows, from highest to lowest precedence:

  • ~ (bitwise NOT)
  • << and >> (bit shifts)
  • & (bitwise AND)
  • ^ (bitwise XOR)
  • | (bitwise OR)

See the main article on operator precedence for a more complete list.

from Grokipedia
Bitwise operations are fundamental procedures in computing and digital electronics that manipulate the individual bits (binary digits of 0 or 1) within the binary representation of data, such as integers, enabling low-level control over processing at the hardware level. These operations differ from arithmetic or logical operations by directly applying functions to corresponding bits across operands, often performed by the CPU's arithmetic logic unit (ALU) for efficiency in tasks like compression, cryptography, and optimization.

The most common bitwise operators include AND (&), OR (|), XOR (^), NOT (~), and shift operators such as left shift (<<) and right shift (>>). The AND operator produces a 1 in each bit position only if both input bits are 1, as defined by its truth table: 0&0=0, 1&0=0, 0&1=0, 1&1=1; this is useful for masking or clearing specific bits. Similarly, OR sets a bit to 1 if at least one input bit is 1 (truth table: 0|0=0, 1|0=1, 0|1=1, 1|1=1), enabling bit setting or union operations. The XOR operator outputs 1 when input bits differ (truth table: 0^0=0, 1^0=1, 0^1=1, 1^1=0), commonly used for toggling bits or error detection. NOT inverts all bits of a single operand (0 becomes 1, and vice versa), while shift operators relocate bits horizontally, with left shifts multiplying by powers of 2 and right shifts dividing, though behavior on signed integers can vary by implementation.

In practice, bitwise operations are essential for low-level programming in languages like C and assembly, where they facilitate efficient data packing, flag manipulation in registers, and hardware interfacing in embedded systems. They also play a key role in algorithms for cryptography (e.g., bit scrambling), computer graphics (e.g., pixel manipulation), and network protocols (e.g., header field extraction), offering performance advantages over higher-level abstractions by reducing computational overhead.

Fundamentals

Definition and Purpose

Bitwise operations are fundamental primitives in computing that manipulate the individual bits of binary numbers, treating them as sequences of 0s and 1s without regard to their interpreted numerical value. These operations enable direct interaction with the binary representation of data, allowing for precise control over bit-level details in integers or other fixed-width data types.

The purpose of bitwise operations lies in providing efficient, low-level mechanisms for data manipulation, which are essential for optimizing algorithms, interfacing with hardware, and conserving resources in constrained systems. They facilitate tasks such as extracting specific bits (masking), combining flags into compact structures, and implementing high-performance computations that avoid the overhead of higher-level arithmetic. In embedded systems and performance-critical applications, bitwise operations often outperform equivalent arithmetic approaches due to their simplicity and direct hardware support.

Bitwise operations trace their origins to the foundational designs of early digital computers in the 1940s, particularly in John von Neumann's 1945 "First Draft of a Report on the EDVAC," which specified a central arithmetic unit capable of performing both arithmetic and logical operations on fixed-length machine words. This inclusion of logical operations, which equate to bitwise manipulations, became integral to the arithmetic logic unit (ALU) in the von Neumann architecture that underpins most modern processors. Unlike arithmetic operations, which process binary data as numerical quantities to yield mathematically meaningful results, bitwise operations focus exclusively on the structural patterns of bits, enabling independent bit-level transformations.

Binary Representation

In computing, a bit is the fundamental unit of information, representing a binary digit that can hold one of two values: 0 or 1. This binary nature stems from the physical implementation in digital circuits, where a bit is typically stored as a voltage level (low for 0 and high for 1). Bits are grouped into larger units for practical storage and processing; a byte consists of exactly 8 bits, capable of representing 256 distinct values (from 0 to 255 in unsigned form). Beyond bytes, data is often organized into words, which are fixed-size sequences of bits processed as a single unit by a computer's processor, commonly 32 bits or 64 bits in modern architectures, depending on the instruction set.

When representing multi-byte values, such as integers spanning multiple bytes, the ordering of bytes in memory introduces the concept of endianness. Big-endian systems store the most significant byte (containing the highest-order bits) at the lowest memory address, followed by less significant bytes in increasing order of address. In contrast, little-endian systems reverse this, placing the least significant byte at the lowest address and the most significant byte at the highest. This byte-ordering convention affects how data is interpreted across different hardware architectures, such as x86 processors (little-endian) versus some network protocols (big-endian).

Integer representations also distinguish between signed and unsigned formats, impacting how bit patterns encode values. Unsigned integers use all bits for magnitude, allowing a range from 0 to 2^n − 1 for n bits. Signed integers, however, reserve the most significant bit as a sign bit (0 for positive, 1 for negative) and employ two's complement encoding for negative numbers to simplify arithmetic operations. In two's complement, a negative number is formed by inverting all bits of its positive counterpart and adding 1, which extends the range to approximately half positive and half negative values (e.g., -128 to 127 for 8 bits). This method ensures that the bit patterns for negative values align seamlessly with unsigned patterns for addition and subtraction, avoiding separate handling for signs.

For illustration, the number 5 is represented in binary as 101, corresponding to 1×2^2 + 0×2^1 + 1×2^0 = 4 + 0 + 1 = 5. In fixed-width formats like an 8-bit byte, this is padded with leading zeros to 00000101, preserving the value while filling the available bits. For signed two's complement in 8 bits, -5 would be the inversion of 00000101 (yielding 11111010) plus 1, resulting in 11111011.

Core Bitwise Operators

NOT Operation

The bitwise NOT operation, also known as the ones' complement, is a unary operator that inverts the bits of its integer operand, transforming every 0 to 1 and every 1 to 0 in the binary representation. In most programming languages, including C, C++, Java, and Python, this operator is denoted by the tilde symbol (~).

In systems using two's complement representation for signed integers, which is the predominant method in modern computer architectures, the bitwise NOT of a number x is mathematically equivalent to −x − 1. This equivalence arises because inverting all bits of x yields the ones' complement, and adding 1 would produce the two's complement negation; thus, the inversion alone subtracts 1 from the negation. For example, consider an 8-bit representation of the decimal number 5, which is 00000101 in binary. Applying the NOT operation yields 11111010, equivalent to 250 in unsigned interpretation but -6 in signed (since −5 − 1 = −6).

The effect of the NOT operation differs between unsigned and signed integers. For unsigned types, it simply produces the maximum value of the type minus the operand (e.g., for 8-bit unsigned, ~x = 255 − x), without any sign-related effects. For signed types in two's complement, it inverts the sign bit along with the others, typically resulting in a sign flip; the inversion itself is well defined, though subsequent signed arithmetic that exceeds the representable range is typically undefined.

The operation can be summarized in a single-bit truth table:

Input  Output
0      1
1      0

This table illustrates the inversion for each bit independently.

AND Operation

The bitwise AND operation is a binary operation that performs a logical AND on each pair of corresponding bits in two binary numbers, producing a result bit of 1 only if both input bits are 1, and 0 otherwise. This operation is commonly denoted by the symbol & in most programming languages and mathematical contexts. It can be viewed as a per-bit multiplication in binary arithmetic, where the operation aligns the bits of the operands by their position and applies the AND logic independently to each pair. The truth table for the bitwise AND operation, showing all possible combinations of a single pair of bits, is as follows:

Input A  Input B  A & B
0        0        0
0        1        0
1        0        0
1        1        1

This table illustrates the conjunctive nature of the operation, where the output is true (1) exclusively when both inputs are true (1). For example, consider the binary numbers 00000101 (decimal 5) and 00000011 (decimal 3). Applying bitwise AND bit by bit yields 00000001 (decimal 1), as only the least significant bit is 1 in both operands. This demonstrates how the operation clears bits that are not set in both inputs, preserving only the common set bits.

A primary use of the bitwise AND operation is in masking, where it extracts or isolates specific bits from a value by ANDing with a mask that has 1s in the desired positions and 0s elsewhere. For instance, to check if a number is even or odd, one can perform the operation with the mask 00000001 (decimal 1), which examines the least significant bit: a result of 0 indicates even, while 1 indicates odd. This technique is efficient for bit-level manipulations in low-level programming and hardware design.

OR Operation

The bitwise OR operation, denoted by the symbol |, is a binary bitwise operator that performs a logical OR on each pair of corresponding bits in two operands. For each bit position, the result is 1 if at least one of the input bits is 1, and 0 otherwise. The truth table for the bitwise OR operation is as follows:

Input A  Input B  A | B
0        0        0
0        1        1
1        0        1
1        1        1

This table illustrates all possible combinations for a single bit pair. For example, applying bitwise OR to the 8-bit binary representations of 5 (00000101) and 2 (00000010) yields 7 (00000111), as the result combines the 1s from both inputs without overlap resolution.

A primary use of the bitwise OR operation is setting specific bits in a value, such as enabling flags in a configuration register by ORing the value with a mask containing 1s in the desired positions (e.g., x = x | 0x01 sets the least significant bit). This is common in low-level programming for tasks like assembling bit-packed structures or activating options in bitmasks.

XOR Operation

The bitwise XOR (exclusive OR) operation is a fundamental binary bitwise operator that compares two input bits and outputs 1 only if the bits differ, and 0 if they are the same. This per-bit comparison results in a logical exclusive OR, where the output bit is set when exactly one of the input bits is 1. In digital logic and computing, XOR is defined as performing this operation independently on each corresponding pair of bits in the binary representations of two operands of equal length. The truth table for the two-input XOR operation is as follows:

Input A  Input B  A XOR B
0        0        0
0        1        1
1        0        1
1        1        0

This table illustrates the core behavior: the output is 1 for differing inputs (01 or 10) and 0 for identical inputs (00 or 11).

In programming languages such as C, Java, and Python, the XOR operator is denoted by the symbol ^, which applies the operation bit by bit to integer operands. For example, performing XOR on the 8-bit binary values 00000101 (decimal 5) and 00000011 (decimal 3) yields 00000110 (decimal 6), as the differing bits in positions 1 and 2 (0-based from the right) result in 1s, while matching bits remain 0. Mathematically, bitwise XOR is equivalent to addition modulo 2 without carry for each bit position, aligning it with carry-free operations in binary arithmetic and error-correcting codes.

A distinctive property of XOR is that it is self-inverse: applying the operation with the same operand twice returns the original value, expressed as x XOR y XOR y = x, since y XOR y = 0 and x XOR 0 = x. This involutory nature enables efficient bit toggling, such as flipping specific bits in a register by XORing with a mask, and underpins its utility in reversible data transformations and basic cryptographic primitives.

Bit Shifting Operations

Logical Shift

A logical shift is a bitwise operation that shifts the bits of an operand to the left or right by a specified number of positions, filling the vacated bit positions with zeros regardless of the operand's sign. In a left logical shift, bits move toward the most significant bit (MSB), with zeros inserted into the least significant bit (LSB) positions; bits shifted beyond the MSB are typically discarded. Conversely, a right logical shift moves bits toward the LSB, filling the new MSB positions with zeros, and discards bits shifted beyond the LSB. This operation treats the binary representation as a simple sequence of bits without regard to numerical value, making it suitable for unsigned integers.

For unsigned integers, a left logical shift by n positions is mathematically equivalent to multiplying the value by 2^n, as each shift effectively doubles the value by repositioning bits to higher place values. Similarly, a right logical shift by n positions corresponds to division by 2^n, truncating the result by discarding the least significant bits. These equivalences hold because the operation preserves the unsigned interpretation and avoids sign-related adjustments. For example, consider the 8-bit unsigned binary value 00000101 (decimal 5): shifting left by 1 position yields 00001010 (decimal 10, equivalent to 5 × 2^1); shifting right by 1 position yields 00000010 (decimal 2, equivalent to 5 ÷ 2^1, rounded down).

In programming languages such as C and its derivatives, the left shift operator is denoted by <<, which always performs a logical shift by filling with zeros. The right shift operator >> behaves as a logical shift for unsigned types, ensuring zero-filling of the MSB positions, while for signed types it may perform an arithmetic shift instead. This distinction allows developers to manipulate bit patterns predictably for unsigned data, enabling efficient implementations of multiplication and division by powers of 2 without invoking the processor's arithmetic units. The operation's behavior is defined at the hardware level in instruction sets like x86 and MIPS, where dedicated instructions (e.g., sll for shift left logical and srl for shift right logical) execute the zero-filling semantics.

Arithmetic Shift

An arithmetic shift is a bit-shifting operation designed for signed integers in two's complement representation, where the goal is to preserve the number's sign during the shift. In a right arithmetic shift, the bits vacated on the left are filled with copies of the original sign bit (0 for positive numbers and 1 for negative numbers), effectively extending the sign to approximate division by a power of two. The left arithmetic shift behaves identically to a logical left shift, filling vacated bits on the right with zeros, which approximates multiplication by a power of two.

For example, consider the 8-bit two's complement representation of -5, which is 11111011. A right arithmetic shift by 1 bit produces 11111101, equivalent to -3 in two's complement, approximating the result of -5 divided by 2 (with truncation toward negative infinity). This ensures the shifted value remains negative for negative inputs, distinguishing it from logical shifts that always fill with zeros and are typically used for unsigned integers.

Arithmetic shifts are particularly useful for efficient signed division by powers of two, as the sign-preserving behavior aligns with arithmetic semantics. In programming languages, the right-shift operator >> often implements arithmetic shifting for signed types; for instance, in Java, >> performs an arithmetic shift, while the unsigned right-shift operator >>> fills with zeros regardless of sign. In C, the behavior of >> on signed integers is implementation-defined but commonly arithmetic on two's complement systems.

However, arithmetic shifts, especially left shifts on signed integers, can introduce issues related to overflow. If a left shift causes the result to exceed the representable range (e.g., shifting a value such that bits enter the sign bit position), it may invoke undefined behavior in languages like C, where the standard does not specify outcomes for such cases. Programmers must ensure shifts do not overflow to avoid portability problems across implementations.

Circular Shift

A circular shift, also known as a bitwise rotation, is an operation that moves the bits of a value in a cyclic manner, where the bits shifted out from one end of the operand are reintroduced at the opposite end, preserving the total number of bits without loss or gain. This contrasts with non-circular shifts, such as logical shifts, which discard the overflow bits and fill the vacated positions with zeros. Circular shifts exist in two primary directions: left (or rotate left), which moves bits toward the most significant bit (MSB) with overflow from the MSB wrapping to the least significant bit (LSB), and right (or rotate right), which moves bits toward the LSB with overflow from the LSB wrapping to the MSB.

There are two main subtypes of circular shifts: pure rotation, which operates solely on the bits within the operand without involving external flags, and rotation through carry, which incorporates a carry flag in the process. In pure rotation, the operation is self-contained; for example, a left rotation shifts all bits leftward, directly inserting the overflow bit(s) from the MSB into the LSB position(s). In contrast, rotation through carry uses the processor's carry flag: the bit shifted out updates the carry flag, and the previous carry flag value is inserted into the vacated position, effectively treating the carry as an extension of the operand.

To illustrate, consider an 8-bit binary value 00010111 (decimal 23). A left rotation by 1 position yields 00101110 (decimal 46), as the MSB (0) wraps to the LSB, and all other bits move left. Similarly, a right rotation by 1 position on the same value produces 10001011 (decimal 139), with the LSB (1) wrapping to the MSB.

Circular shifts are commonly implemented in assembly languages using instructions such as ROL (rotate left) and ROR (rotate right) for pure rotations, with variants like RCL and RCR for rotations through carry. They find applications in hash functions, where operations like cyclic shifts help generate pseudorandom outputs efficiently, as seen in chaotic hash algorithms that employ variable circular shifts for one-way compression. Additionally, circular shifts underpin cyclic error-correcting codes, where the code's invariance under cyclic permutations enables efficient encoding and computation via linear feedback shift registers.

Language Implementations

C Family Languages

In the C family of languages, including C and C++, bitwise operations are fundamental for low-level manipulation of integer types, providing direct control over binary representations. These operations apply to operands after integer promotions or usual arithmetic conversions, enabling efficient bit-level computations essential for systems programming. The core bitwise operators are the unary NOT (~), binary AND (&), OR (|), and XOR (^), along with the shift operators left shift (<<) and right shift (>>).

The bitwise NOT operator (~) inverts all bits of its operand, equivalent to subtracting the value from the maximum representable value of the promoted unsigned type (e.g., for a 32-bit unsigned int, ~x = 0xFFFFFFFF - x). The binary AND (&) sets each bit in the result to 1 only if both corresponding bits in the operands are 1. The OR (|) sets a bit to 1 if at least one corresponding bit in the operands is 1, while XOR (^) sets it to 1 if the bits differ. These operations perform usual arithmetic conversions on operands, promoting them to a common type, typically int or unsigned int, before applying the bitwise logic.

Shift operations move bits left (<<) or right (>>) by the number of positions specified by the right operand. The left shift fills vacated bits with zeros and discards overflowed bits, effectively multiplying by powers of two for positive values within the type's range. The right shift behaves similarly but fills vacated bits with zeros for unsigned types; for signed integers, the behavior is implementation-defined, often preserving the sign bit (an arithmetic shift) to maintain the sign for negative values. In C++, the behavior aligns closely with C, though C++20 unified some shift semantics for consistency.

Operator precedence in C family languages places unary ~ at a high level (precedence 2, right-to-left associativity), above multiplicative arithmetic operators (*, /, % at precedence 3). The shifts (<<, >>) follow at precedence 5 (left-to-right), below additive operators (+, - at precedence 4) but above relational operators. The binary bitwise operators then occur at lower levels: & at precedence 8, ^ at 9, and | at 10, all left-to-right, positioned after equality operators (==, != at precedence 7) but before logical AND (&& at precedence 11). This hierarchy ensures bitwise operations have lower priority than arithmetic but higher than assignments (precedence 14, right-to-left). Parentheses are recommended for clarity in mixed expressions.

Type considerations are critical: for unary ~ and shifts, the operand undergoes integer promotion to int or unsigned int if smaller. Binary bitwise operators (&, |, ^) use usual arithmetic conversions, balancing signedness and potentially promoting to unsigned if one operand is unsigned. Shifts promote the left operand via integer promotion and the right to int; however, behavior is undefined if the right operand is negative or greater than or equal to the bit width of the promoted left operand (e.g., shifting a 32-bit int by 32 or more). For signed left shifts, if the result is not representable in the type, behavior is undefined per C11 (ISO/IEC 9899:2011, section 6.5.7). These rules promote portability but require careful handling to avoid undefined behavior, especially in C++ where similar constraints apply.

A simple example demonstrates the AND operation:

#include <stdio.h>

int main(void) {
    int x = 5 & 3;      /* 5 is 101 in binary, 3 is 011; result is 001 (1) */
    printf("%d\n", x);  /* Outputs: 1 */
    return 0;
}

This illustrates bit masking, a common use where & extracts specific bits. Similar patterns apply to other operators, with C and C++ providing identical syntax for these fundamentals.

Java and JavaScript

In Java, bitwise operations are performed on the integral primitive types byte, short, int, and long, with int (32 bits) and long (64 bits) being the primary types for such computations. The operators include & for bitwise AND, | for bitwise OR, ^ for bitwise XOR, ~ for bitwise NOT, << for left shift, >> for signed right shift (which preserves the sign bit via arithmetic shift), and >>> for unsigned right shift (which fills the leftmost bits with zeros regardless of the sign). These operations treat operands as two's complement representations, ensuring consistent handling of negative values where the sign bit is propagated in signed shifts. Unlike languages such as C++, Java does not support operator overloading, meaning the bitwise operators always apply directly to the primitive types without custom redefinition for user-defined classes. For example, the expression 0x2222 & 0x000F evaluates to 2 (0x0002), as the bitwise AND masks the lower 4 bits. The >>> operator is particularly useful for treating negative int values as unsigned during right shifts, converting them to positive results by zero-filling.

In JavaScript, bitwise operators similarly perform computations on 32-bit signed integers, automatically converting non-integer operands (such as floating-point numbers or other types) to signed 32-bit integers before applying the operation, with any excess bits discarded through truncation. The operators are &, |, ^, ~, << , >> (arithmetic right shift that preserves the sign bit), and >>> (logical right shift that zero-fills). Negative numbers are handled using two's complement representation, consistent with Java's approach. For instance, 5 | 2 evaluates to 7, as the bitwise OR of 0101 and 0010 yields 0111.

JavaScript's bitwise operations on standard Number values wrap around the 32-bit boundary; shifting large numbers, such as Math.pow(2, 53) << 1, truncates to fit the 32-bit signed integer range, potentially leading to unexpected results for values exceeding 2^31 - 1. To handle arbitrary-precision integers, JavaScript provides BigInt support for most bitwise operators (e.g., 5n & 3n results in 1n), but >>> throws a TypeError on BigInt due to the lack of a fixed bit width. The syntax for these operators in both Java and JavaScript derives from the C family of languages, facilitating familiar usage in managed environments.

Other Languages

In Pascal, bitwise operations are supported through keywords such as and for bitwise AND, or for bitwise OR, xor for bitwise XOR, and not for bitwise NOT, while shifts are handled by shl for left shift and shr for right shift. These operations apply to fixed-width integer types, such as the 32-bit Integer in most implementations, where overflow behavior depends on the compiler but typically wraps around.

Python provides bitwise operators including & for AND, | for OR, ^ for XOR, ~ for NOT (inversion), << for left shift, and >> for right shift, which operate on the binary representations of integers. Unlike fixed-width languages, Python's integers are arbitrary-precision, allowing operations on numbers of any size without overflow, though shifts on very large integers may be computationally expensive.

Other languages exhibit variations in bitwise support. Rust mirrors C-style syntax with operators like &, |, ^, !, <<, and >>, but emphasizes safety through type-checked integers: out-of-range shift amounts and overflows are caught rather than silently producing undefined behavior, with checked variants available. In assembly languages, such as x86, bitwise operations are implemented directly via hardware instructions like AND, OR, XOR, NOT, SHL, SHR, and ROL for rotate left, enabling low-level manipulation tied to the processor's architecture. Some languages, like standard SQL, lack native bitwise operators, restricting such functionality to database-specific extensions (e.g., T-SQL's &, |, ^) applied only to integer types, while others limit them to prevent misuse in non-numeric contexts.

Applications

Data Manipulation

Bitwise operations play a crucial role in data manipulation by enabling precise control over individual bits within integers, facilitating tasks such as field extraction, value packing, and status checking without altering unrelated data.

Bit masking is a primary technique for extracting or setting specific fields in a value, typically using the bitwise AND operator to isolate bits by applying a mask that has 1s in the desired positions and 0s elsewhere. For example, to extract the lowest 8 bits (a byte) from a 32-bit value, one performs value & 0xFF, which clears all higher-order bits while retaining the target field intact. To set bits in a field, the bitwise OR operator combines the original value with a mask containing the new bits to insert, ensuring other bits remain unchanged.

Bit fields, or bit packing, allow multiple small values to be stored compactly within a single integer by assigning fixed bit widths to each component, which is particularly useful for memory-constrained environments or when transmitting data. In languages like C, bit fields are explicitly declared in structures to enforce this packing; for instance, a structure might define fields for day (5 bits), month (4 bits), and year (7 bits) within a single 16-bit integer, enabling efficient storage of date components. A common application is packing RGB color values into a 24-bit or 32-bit integer, where the red component (8 bits) can be extracted using (color & 0xFF0000) >> 16, isolating and shifting the relevant bits to form an independent byte.

Flag operations leverage bitwise operations to manage status indicators or options as individual bits within an integer, allowing multiple flags to coexist in a single variable for compact representation. To test a flag, such as determining if a number is even by checking its least significant bit, one uses num & 1, which yields 0 for even numbers and 1 for odd, as the mask targets only that bit. Setting a flag involves ORing the value with a shifted mask, like flags |= (1 << position), while clearing it requires ANDing with the inverted mask, flags &= ~(1 << position).

In modern data serialization, bitwise operations support bit-packing to produce efficient binary formats, as exemplified by Protocol Buffers, where variable-length integer encoding (varint) packs values by storing fewer bytes for smaller numbers, reducing overall payload size during transmission. This approach, combined with packed representations for repeated primitive fields, minimizes storage and bandwidth needs in distributed systems.
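A minimal C sketch of the packing and extraction just described (the component values are arbitrary):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Pack 8-bit red, green, and blue components into one 24-bit value. */
    uint32_t r = 0xCA, g = 0xFE, b = 0x42;
    uint32_t color = (r << 16) | (g << 8) | b;

    /* Extract the red field again by masking and shifting. */
    uint32_t red = (color & 0xFF0000) >> 16;
    printf("color=%06x red=%02x\n", (unsigned)color, (unsigned)red);  /* color=cafe42 red=ca */
    return 0;
}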

Cryptography and Security

Bitwise operations, particularly the exclusive-or (XOR), form the cornerstone of many cryptographic primitives by enabling efficient mixing and diffusion of data while preserving computational security. In the one-time pad encryption scheme, perfect secrecy is achieved by XORing the plaintext message bit-by-bit with a random key of equal length, ensuring that the ciphertext reveals no information about the original message to an eavesdropper, as proven by Claude Shannon in his foundational work on secrecy systems. This property relies on XOR's self-inverse nature, where applying the same operation with the key decrypts the message.

Block ciphers like the Advanced Encryption Standard (AES) incorporate bitwise operations extensively to provide confusion and diffusion across rounds of encryption. AES employs XOR in its AddRoundKey step to combine the state with a round key, alongside bitwise substitutions in SubBytes and linear mixes in MixColumns that operate on bytes using Galois field arithmetic, all of which contribute to its resistance against cryptanalytic attacks as standardized by NIST. These operations ensure that small changes in the input propagate unpredictably throughout the output, enhancing security for symmetric encryption.

In hash functions such as SHA-256, bitwise operations drive the compression process to produce a fixed-size digest resistant to preimage and collision attacks. The algorithm uses right rotations (ROTR), XOR, and majority functions (MAJ) in its message schedule and compression steps, along with choices (CH) and sigma functions involving AND, XOR, and shifts, to achieve avalanche effects where altering one input bit affects roughly half the output bits, as detailed in the NIST Secure Hash Standard. These bitwise chains promote rapid diffusion, making SHA-256 suitable for integrity verification in protocols like digital signatures.

Cyclic redundancy checks (CRC) leverage XOR and bit shifts for efficient error detection in data transmission and storage. CRC computation treats data as a polynomial over GF(2), performing modulo-2 division via XOR-based shifts with a generator polynomial, which detects burst errors up to the degree of the polynomial with high probability, as originally formalized by W. Wesley Peterson.

In post-quantum cryptography, lattice-based schemes selected by NIST, such as ML-KEM (FIPS 203), increasingly rely on bitwise operations for masking secrets to mitigate side-channel attacks. Implementations use higher-order masking, splitting sensitive values into shares combined via bitwise AND and XOR, to protect against power analysis on lattice operations like number-theoretic transforms, ensuring security in quantum-resistant key encapsulation as standardized in 2024. This approach addresses vulnerabilities in arithmetic-heavy lattice computations, aligning with NIST's ongoing standardization efforts through 2025.
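As a toy illustration of the XOR round trip (not a secure implementation: a real one-time pad requires a truly random, secret pad used only once, and the key below is arbitrary):

#include <stdio.h>
#include <string.h>

int main(void) {
    char msg[] = "ATTACK";        /* plaintext */
    const char key[] = "XMCKLQ";  /* pad; illustrative only */
    size_t n = strlen(msg);

    for (size_t i = 0; i < n; i++) msg[i] ^= key[i];  /* encrypt */
    for (size_t i = 0; i < n; i++) msg[i] ^= key[i];  /* decrypt: XOR again */
    printf("%s\n", msg);          /* prints ATTACK */
    return 0;
}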

Performance Optimization

Bitwise operations provide substantial performance advantages in computing due to their implementation as single-cycle instructions on most modern CPUs, with latencies often around one cycle and high throughput, outperforming arithmetic operations or conditional logic that may involve multiple cycles or branch prediction overhead. For instance, bitwise AND can select values without branching, avoiding the penalties of mispredicted branches that can stall pipelines for 10-20 cycles on x86 architectures.

Common optimization tricks leverage these operations for efficient code. The XOR swap technique exchanges two variables without temporary storage using three XOR assignments: x ^= y; y ^= x; x ^= y;, which can be slightly faster in register-bound scenarios by minimizing memory accesses, though compilers often inline equivalent optimizations for standard swaps. For parity computation (determining if the number of set bits is odd or even), successive XOR folds (v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1; return v & 1;) reduce the 32-bit word to a single bit in about nine operations, enabling fast checks in hashing or error detection without full population counts. Population count (popcount, the number of set bits), useful for algorithms such as Hamming-distance computation, can be computed via parallel prefix sums with shifts and masks, such as v = v - ((v >> 1) & 0x55555555); v = (v & 0x33333333) + ((v >> 2) & 0x33333333); followed by further reductions, achieving constant-time execution in 12 operations comparable to table lookups but without cache misses.

Hardware accelerations further amplify these benefits through parallelism. Intel's Advanced Vector Extensions (AVX) and AVX-512 support bitwise operations on 256- or 512-bit vectors, allowing simultaneous processing of multiple 32- or 64-bit elements in a single instruction, which can yield 4-8x speedups in vectorizable workloads like data compression or image processing compared to scalar code. On GPUs, bitwise operations benefit from massive parallelism across thousands of cores, with 32-bit integer bitwise throughput matching floating-point rates on recent architectures, enabling up to 10-100x gains in bulk operations like bit-parallel string matching over CPU equivalents. The ARM Scalable Vector Extension (SVE), introduced in Armv8-A and extended in 2025 updates, provides scalable bitwise operations on vectors from 128 to 2048 bits, facilitating hardware-agnostic vectorization for high-performance computing tasks such as simulations, with performance scaling linearly with vector length.

Relatedly, multiplication or division by a power of two can be replaced by a bitwise shift; compilers automatically convert such operations to shifts, yielding 2-5x latency reductions in inner loops.
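The parity fold and population-count sequences mentioned above can be sketched in C as follows (standard bit-manipulation idioms, shown here for 32-bit words):

#include <stdint.h>
#include <stdio.h>

/* Parity of a 32-bit word by successive XOR folding. */
static unsigned parity32(uint32_t v) {
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1;
}

/* Population count by parallel partial sums of bit pairs, nibbles, and bytes. */
static unsigned popcount32(uint32_t v) {
    v = v - ((v >> 1) & 0x55555555);
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
    v = (v + (v >> 4)) & 0x0F0F0F0F;
    return (v * 0x01010101) >> 24;  /* sum the byte counts into the top byte */
}

int main(void) {
    printf("%u %u\n", parity32(0xF0F0F0F1), popcount32(0xF0F0F0F1));  /* 1 17 */
    return 0;
}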

Theoretical Aspects

Boolean Algebra Integration

Bitwise operations are fundamentally rooted in Boolean algebra, where individual bits represent truth values (0 for false, 1 for true), enabling the manipulation of binary data through logical functions. This integration was pioneered by Claude Shannon in his 1938 master's thesis, which demonstrated how Boolean algebra could model the behavior of electrical switching circuits, laying the groundwork for digital computation by treating circuit states as binary propositions.

In this framework, bitwise operators directly correspond to Boolean logical connectives: the bitwise AND (&) maps to conjunction (∧), producing 1 only if both inputs are 1; bitwise OR (|) maps to disjunction (∨), producing 1 if at least one input is 1; bitwise NOT (~) maps to negation (¬), inverting the input bit; and bitwise XOR (^) maps to exclusive or, producing 1 if the inputs differ. These mappings allow bits to function as atomic propositions in Boolean expressions, with operations preserving the algebra's semantic structure.

When applied to multi-bit integers, bitwise operations extend pointwise across each corresponding bit position, treating the operands as vectors of independent Boolean variables. For instance, the bitwise AND of two n-bit numbers computes the conjunction for each bit pair separately, yielding an n-bit result whose i-th bit is a_i ∧ b_i. This parallelism enables efficient vectorized logical processing in hardware and software.

Boolean algebraic laws apply seamlessly to these operations, facilitating simplification and optimization. Commutativity holds for AND and OR, such that x ∧ y = y ∧ x and x ∨ y = y ∨ x, allowing operand reordering without altering results. Associativity similarly applies, as (x ∧ y) ∧ z = x ∧ (y ∧ z) and (x ∨ y) ∨ z = x ∨ (y ∨ z), supporting grouped evaluations in expressions. These properties, inherited from the underlying algebra, ensure bitwise computations remain consistent and predictable across bit positions.

Properties and Identities

Bitwise operations on integers, interpreted as bit vectors, satisfy the same algebraic properties as operations in Boolean algebra, where AND corresponds to conjunction (∧), OR to disjunction (∨), NOT to negation (¬), and XOR to exclusive disjunction (⊕). These properties hold bit by bit across the operands, enabling simplification of expressions involving bitwise operators.

Idempotence is a fundamental property of the AND and OR operations: for any bit vector x, x & x = x and x | x = x. This means applying AND or OR with itself yields the original value, reflecting the idempotent nature of conjunction and disjunction in Boolean logic.

The absorption laws further simplify expressions: x | (x & y) = x and x & (x | y) = x. These identities arise because any bit set in x dominates the result when combined with bits from y, absorbing the influence of the secondary operand.

De Morgan's laws provide dualities involving negation: ~(x & y) = ~x | ~y and ~(x | y) = ~x & ~y. These allow rewriting negated conjunctions as disjunctions of negations (and vice versa), facilitating equivalence transformations in bitwise expressions.

Distributivity holds for AND over OR: x & (y | z) = (x & y) | (x & z), and dually for OR over AND: x | (y & z) = (x | y) & (x | z). This mirrors the distributive laws in Boolean lattices, allowing bitwise operations to factor across alternatives.

The XOR operation exhibits an inverse property due to its group structure under addition modulo 2: if x = y ^ z, then x ^ z = y and x ^ y = z. Thus, XOR is its own inverse, enabling reversible computations such as swapping values without temporary storage via a ^= b; b ^= a; a ^= b.

In linear algebra over the field GF(2), bitwise operations extend naturally to bit vectors, where addition is XOR and multiplication (by 0 or 1) is AND. This framework supports efficient matrix operations, such as Gaussian elimination on binary matrices, using bitwise parallelism for solving systems modulo 2. For instance, vector addition u + v corresponds to u XOR v, preserving the vector space axioms over GF(2).

Order of Operations

In programming languages that support bitwise operations, such as those in the C family and Java, the order of evaluation follows a strict precedence hierarchy among the operators. The bitwise NOT operator (~) has the highest precedence, binding more tightly than any other bitwise operator. This is followed by the shift operators (<< and >>), which have precedence between arithmetic operators like addition and subtraction on one hand, and relational operators on the other. The bitwise AND (&) comes next, with higher precedence than the bitwise XOR (^), which in turn precedes the bitwise OR (|).

Associativity determines how operators of equal precedence are grouped in expressions. For most bitwise operators, including &, ^, |, <<, and >>, associativity is left-to-right, meaning that in an expression like x & y & z, it is evaluated as (x & y) & z. The unary NOT operator (~), however, is right-to-left associative, so ~~x is parsed as ~(~x). This left-to-right convention applies consistently across C, Java, and similar languages, ensuring predictable parsing without ambiguity for binary operations.

Parentheses are essential to override these default rules and enforce a desired order. For instance, the expression ~x & y evaluates as (~x) & y due to the high precedence of ~, potentially yielding different results from ~(x & y), which requires explicit grouping to apply NOT after the AND. Without parentheses, unintended groupings can lead to errors in bit manipulation tasks.

While consistent in languages like C and Java, the precedence of bitwise operations integrates with broader order-of-operations rules, such as PEMDAS (parentheses, exponents, multiplication/division, addition/subtraction), where bitwise shifts occur after additive arithmetic but before comparisons. In mathematical contexts, such as Boolean algebra, precedence is more explicitly defined for logical equivalents: NOT holds the highest priority, followed by AND, then OR, with XOR often aligned at the level of OR; shifts are not typically part of these formal systems.
