Modular arithmetic

from Wikipedia
Time-keeping on this clock uses arithmetic modulo 12. Adding 4 hours to 9 o'clock gives 1 o'clock, since 13 is congruent to 1 modulo 12.

In mathematics, modular arithmetic is a system of arithmetic operations for integers, other than the usual ones from elementary arithmetic, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.

A familiar example of modular arithmetic is the hour hand on a 12-hour clock. If the hour hand points to 7 now, then 8 hours later it will point to 3. Ordinary addition would result in 7 + 8 = 15, but 15 reads as 3 on the clock face. This is because the hour hand makes one rotation every 12 hours and the hour number starts over when the hour hand passes 12. We say that 15 is congruent to 3 modulo 12, written 15 ≡ 3 (mod 12), so that 7 + 8 ≡ 3 (mod 12).

Similarly, if one starts at 12 and waits 8 hours, the hour hand will be at 8. If one instead waited twice as long, 16 hours, the hour hand would be on 4. This can be written as 2 × 8 ≡ 4 (mod 12). Note that after a wait of exactly 12 hours, the hour hand will always be right where it was before, so 12 acts the same as zero, thus 12 ≡ 0 (mod 12).
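The clock examples above can be reproduced with ordinary integer arithmetic. A small illustrative Python sketch (the `clock_add` helper is ours, not a standard function):

```python
# Clock-face arithmetic: hours wrap around modulo 12.
# Python's % operator returns the least non-negative residue,
# so we map the result 0 back to 12 for display.

def clock_add(hour: int, elapsed: int) -> int:
    """Return the hour shown on a 12-hour clock after `elapsed` hours."""
    h = (hour + elapsed) % 12
    return 12 if h == 0 else h

print(clock_add(7, 8))    # 7 + 8 = 15 ≡ 3 (mod 12)
print(clock_add(12, 8))   # 12 acts like 0, so the hand points to 8
print(clock_add(12, 16))  # 2 × 8 ≡ 4 (mod 12)
```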

Congruence


Given an integer m ≥ 1, called a modulus, two integers a and b are said to be congruent modulo m, if m is a divisor of their difference; that is, if there is an integer k such that

a − b = k m.

Congruence modulo m is a congruence relation, meaning that it is an equivalence relation that is compatible with addition, subtraction, and multiplication. Congruence modulo m is denoted by

a ≡ b (mod m).

The parentheses mean that (mod m) applies to the entire equation, not just to the right-hand side (here, b).

This notation is not to be confused with the notation b mod m (without parentheses), which refers to the remainder of b when divided by m, known as the modulo operation: that is, b mod m denotes the unique integer r such that 0 ≤ r < m and r ≡ b (mod m).
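The distinction between the relation a ≡ b (mod m) and the operation b mod m maps directly onto code. A short Python sketch (the `congruent` helper is our own naming):

```python
# The modulo operation b mod m (Python's %) picks the unique
# representative r with 0 <= r < m, while congruence a ≡ b (mod m)
# is the relation "m divides a - b".

def congruent(a: int, b: int, m: int) -> bool:
    """True if a ≡ b (mod m), i.e. m divides a - b."""
    return (a - b) % m == 0

print(15 % 12)               # the remainder: 3
print(congruent(15, 3, 12))  # True: 12 divides 15 - 3 = 12
print(congruent(38, 14, 12)) # True: 12 divides 38 - 14 = 24
```

Note that for a positive modulus, Python's `%` already returns the least non-negative residue, matching the 0 ≤ r < m convention even for negative operands.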

The congruence relation may be rewritten as

a = k m + b,

explicitly showing its relationship with Euclidean division. However, the b here need not be the remainder in the division of a by m. Rather, a ≡ b (mod m) asserts that a and b have the same remainder when divided by m. That is,

a = p m + r,
b = q m + r,

where 0 ≤ r < m is the common remainder. We recover the previous relation (a − b = k m) by subtracting these two expressions and setting k = p − q.

Because the congruence modulo m is defined by the divisibility by m and because −1 is a unit in the ring of integers, a number is divisible by m exactly if it is divisible by −m. This means that every non-zero integer m may be taken as a modulus.

Examples


In modulus 12, one can assert that:

38 ≡ 14 (mod 12)

because the difference is 38 − 14 = 24 = 2 × 12, a multiple of 12. Equivalently, 38 and 14 have the same remainder 2 when divided by 12.

The definition of congruence also applies to negative values. For example, 2 ≡ −3 (mod 5), since 2 − (−3) = 5, and −8 ≡ 7 (mod 5), since −8 − 7 = −15, a multiple of 5.

Basic properties


The congruence relation satisfies all the conditions of an equivalence relation:

  • Reflexivity: a ≡ a (mod m)
  • Symmetry: a ≡ b (mod m) if b ≡ a (mod m).
  • Transitivity: If a ≡ b (mod m) and b ≡ c (mod m), then a ≡ c (mod m)

If a₁ ≡ b₁ (mod m) and a₂ ≡ b₂ (mod m), or if a ≡ b (mod m), then:[1]

  • a + k ≡ b + k (mod m) for any integer k (compatibility with translation)
  • k a ≡ k b (mod m) for any integer k (compatibility with scaling)
  • k a ≡ k b (mod k m) for any integer k
  • a₁ + a₂ ≡ b₁ + b₂ (mod m) (compatibility with addition)
  • a₁ − a₂ ≡ b₁ − b₂ (mod m) (compatibility with subtraction)
  • a₁ a₂ ≡ b₁ b₂ (mod m) (compatibility with multiplication)
  • aᵏ ≡ bᵏ (mod m) for any non-negative integer k (compatibility with exponentiation)
  • p(a) ≡ p(b) (mod m), for any polynomial p(x) with integer coefficients (compatibility with polynomial evaluation)
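These compatibility rules can be spot-checked mechanically. A Python sketch over sample values (a sanity check, not a proof):

```python
# Spot-check the compatibility rules: congruence modulo m is preserved
# by translation, scaling, addition, subtraction, multiplication,
# and exponentiation.

m = 12
a1, b1 = 38, 14    # 38 ≡ 14 (mod 12)
a2, b2 = 27, 3     # 27 ≡ 3  (mod 12)
assert (a1 - b1) % m == 0 and (a2 - b2) % m == 0

k = 7
assert (a1 + k) % m == (b1 + k) % m              # translation
assert (k * a1) % m == (k * b1) % m              # scaling
assert (k * a1) % (k * m) == (k * b1) % (k * m)  # scaled modulus
assert (a1 + a2) % m == (b1 + b2) % m            # addition
assert (a1 - a2) % m == (b1 - b2) % m            # subtraction
assert (a1 * a2) % m == (b1 * b2) % m            # multiplication
assert pow(a1, 5, m) == pow(b1, 5, m)            # exponentiation
print("all compatibility rules hold for the sample values")
```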

If a ≡ b (mod m), then it is generally false that kᵃ ≡ kᵇ (mod m). However, the following is true: if c ≡ d (mod φ(m)), where φ is Euler's totient function, then aᶜ ≡ aᵈ (mod m), provided a is coprime with m.

For cancellation of common terms, we have the following rules:

  • If a + k ≡ b + k (mod m), where k is any integer, then a ≡ b (mod m).
  • If k a ≡ k b (mod m) and k is coprime with m, then a ≡ b (mod m).
  • If k a ≡ k b (mod k m) and k ≠ 0, then a ≡ b (mod m).

The last rule can be used to move modular arithmetic into division. If b divides a, then (a/b) mod m = (a mod b m) / b.

The modular multiplicative inverse is defined by the following rules:

  • Existence: There exists an integer denoted a⁻¹ such that a a⁻¹ ≡ 1 (mod m) if and only if a is coprime with m. This integer a⁻¹ is called a modular multiplicative inverse of a modulo m.
  • If a ≡ b (mod m) and a⁻¹ exists, then a⁻¹ ≡ b⁻¹ (mod m) (compatibility with multiplicative inverse, and, if a = b, uniqueness modulo m).
  • If a x ≡ b (mod m) and a is coprime to m, then the solution to this linear congruence is given by x ≡ a⁻¹ b (mod m).

The multiplicative inverse x ≡ a⁻¹ (mod m) may be efficiently computed by solving Bézout's equation a x + m y = 1 for x, y, by using the Extended Euclidean algorithm.
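A minimal Python sketch of this computation, assuming the usual recursive formulation of the extended Euclidean algorithm (function names are ours):

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a: int, m: int) -> int:
    """Return a^(-1) mod m; raises ValueError if gcd(a, m) != 1."""
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError(f"{a} has no inverse modulo {m}")
    return x % m

print(mod_inverse(3, 11))   # 4, since 3 * 4 = 12 ≡ 1 (mod 11)
```

In Python 3.8+ the built-in `pow(a, -1, m)` computes the same inverse directly.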

In particular, if p is a prime number, then a is coprime with p for every a such that 0 < a < p; thus a multiplicative inverse exists for all a that is not congruent to zero modulo p.

Advanced properties


Some of the more advanced properties of congruence relations are the following:

  • Fermat's little theorem: If p is prime and does not divide a, then a^(p−1) ≡ 1 (mod p).
  • Euler's theorem: If a and m are coprime, then a^φ(m) ≡ 1 (mod m), where φ is Euler's totient function.
  • A simple consequence of Fermat's little theorem is that if p is prime, then a⁻¹ ≡ a^(p−2) (mod p) is the multiplicative inverse of 0 < a < p. More generally, from Euler's theorem, if a and m are coprime, then a⁻¹ ≡ a^(φ(m)−1) (mod m). Hence, if a x ≡ 1 (mod m), then x ≡ a^(φ(m)−1) (mod m).
  • Another simple consequence is that if a ≡ b (mod φ(m)), where φ is Euler's totient function, then kᵃ ≡ kᵇ (mod m) provided k is coprime with m.
  • Wilson's theorem: p is prime if and only if (p − 1)! ≡ −1 (mod p).
  • Chinese remainder theorem: For any a, b and coprime m, n, there exists a unique x (mod m n) such that x ≡ a (mod m) and x ≡ b (mod n). In fact, x ≡ b mₙ⁻¹ m + a nₘ⁻¹ n (mod m n), where mₙ⁻¹ is the inverse of m modulo n and nₘ⁻¹ is the inverse of n modulo m.
  • Lagrange's theorem: If p is prime and f(x) = a₀xᵈ + ... + a_d is a polynomial with integer coefficients such that p is not a divisor of a₀, then the congruence f(x) ≡ 0 (mod p) has at most d non-congruent solutions.
  • Primitive root modulo m: A number g is a primitive root modulo m if, for every integer a coprime to m, there is an integer k such that gᵏ ≡ a (mod m). A primitive root modulo m exists if and only if m is equal to 2, 4, pᵏ or 2pᵏ, where p is an odd prime number and k is a positive integer. If a primitive root modulo m exists, then there are exactly φ(φ(m)) such primitive roots, where φ is Euler's totient function.
  • Quadratic residue: An integer a is a quadratic residue modulo m, if there exists an integer x such that x² ≡ a (mod m). Euler's criterion asserts that, if p is an odd prime, and a is not a multiple of p, then a is a quadratic residue modulo p if and only if
    a^((p−1)/2) ≡ 1 (mod p).
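Several of these theorems are easy to verify numerically for small moduli. A Python sketch using the built-in three-argument `pow` for modular exponentiation (the `phi` helper is ours):

```python
from math import gcd, factorial

def phi(n: int) -> int:
    """Euler's totient, by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p not dividing a
p = 13
assert all(pow(a, p - 1, p) == 1 for a in range(1, p))

# Euler's theorem: a^φ(m) ≡ 1 (mod m) when gcd(a, m) = 1
m = 20
assert all(pow(a, phi(m), m) == 1 for a in range(1, m) if gcd(a, m) == 1)

# Wilson's theorem: (p-1)! ≡ -1 (mod p)
assert factorial(p - 1) % p == p - 1

# Modular inverse via Fermat: a^(p-2) mod p
assert (5 * pow(5, p - 2, p)) % p == 1
print("Fermat, Euler, and Wilson all check out for the samples")
```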

Congruence classes


The congruence relation is an equivalence relation. The equivalence class modulo m of an integer a is the set of all integers of the form a + k m, where k is any integer. It is called the congruence class or residue class of a modulo m, and may be denoted [a]ₘ, or simply [a] when the modulus m is known from the context.

Each residue class modulo m contains exactly one integer in the range 0, 1, ..., m − 1. Thus, these integers are representatives of their respective residue classes.

It is generally easier to work with integers than with sets of integers; that is, it is the representatives that are most often considered, rather than their residue classes.

Consequently, (a mod m) denotes generally the unique integer r such that 0 ≤ r < m and ra (mod m); it is called the residue of a modulo m.

In particular, (a mod m) = (b mod m) is equivalent to a ≡ b (mod m), and this explains why "=" is often used instead of "≡" in this context.

Residue systems


Each residue class modulo m may be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class[2] (since this is the proper remainder which results from division). Any two members of different residue classes modulo m are incongruent modulo m. Furthermore, every integer belongs to one and only one residue class modulo m.[3]

The set of integers {0, 1, 2, ..., m − 1} is called the least residue system modulo m. Any set of m integers, no two of which are congruent modulo m, is called a complete residue system modulo m.

The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely one representative of each residue class modulo m.[4] For example, the least residue system modulo 4 is {0, 1, 2, 3}. Some other complete residue systems modulo 4 include:

  • {1, 2, 3, 4}
  • {13, 14, 15, 16}
  • {−2, −1, 0, 1}
  • {−13, 4, 17, 18}
  • {−5, 0, 6, 21}
  • {27, 32, 37, 42}

Some sets that are not complete residue systems modulo 4 are:

  • {−5, 0, 6, 22}, since 6 is congruent to 22 modulo 4.
  • {5, 15}, since a complete residue system modulo 4 must have exactly 4 incongruent residue classes.
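Both the examples and the non-examples above can be tested with a one-line criterion. A Python sketch (the helper name is ours):

```python
def is_complete_residue_system(s: set[int], m: int) -> bool:
    """True if s contains exactly one representative of each class mod m."""
    return len(s) == m and {x % m for x in s} == set(range(m))

# The complete residue systems modulo 4 listed above:
for s in [{0, 1, 2, 3}, {1, 2, 3, 4}, {13, 14, 15, 16},
          {-2, -1, 0, 1}, {-13, 4, 17, 18}, {-5, 0, 6, 21},
          {27, 32, 37, 42}]:
    assert is_complete_residue_system(s, 4)

# ... and the non-examples:
assert not is_complete_residue_system({-5, 0, 6, 22}, 4)  # 6 ≡ 22 (mod 4)
assert not is_complete_residue_system({5, 15}, 4)         # too few elements
print("all residue-system checks pass")
```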

Reduced residue systems


Given Euler's totient function φ(m), any set of φ(m) integers that are relatively prime to m and mutually incongruent modulo m is called a reduced residue system modulo m.[5] The set {5, 15} from above, for example, is an instance of a reduced residue system modulo 4.
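A reduced residue system can be generated directly from the definition. An illustrative Python sketch (helper names are ours):

```python
from math import gcd

def reduced_residue_system(m: int) -> set[int]:
    """Least non-negative reduced residue system modulo m."""
    return {a for a in range(m) if gcd(a, m) == 1}

def is_reduced_residue_system(s: set[int], m: int) -> bool:
    """φ(m) integers, each coprime to m, pairwise incongruent mod m."""
    residues = {a % m for a in s}
    return (all(gcd(a, m) == 1 for a in s)
            and len(residues) == len(s) == len(reduced_residue_system(m)))

print(reduced_residue_system(4))              # {1, 3}
print(is_reduced_residue_system({5, 15}, 4))  # True, as noted above
print(reduced_residue_system(10))             # {1, 3, 7, 9}
```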

Covering systems


Covering systems represent yet another type of residue system that may contain residues with varying moduli.

Integers modulo m


In the context of this section, the modulus m is almost always taken as positive.

The set of all congruence classes modulo m is a ring called the ring of integers modulo m, and is denoted ℤ/mℤ, ℤ/m, or ℤₘ.[6] The ring ℤ/mℤ is fundamental to various branches of mathematics (see § Applications below). (In some parts of number theory the notation ℤₘ is avoided because it can be confused with the set of m-adic integers.)

For m > 0 one has

ℤ/mℤ = { [0], [1], ..., [m − 1] }.

When m = 1, ℤ/1ℤ is the zero ring; when m = 0, ℤ/0ℤ is not an empty set; rather, it is isomorphic to ℤ, since each class [a] is then the singleton {a}.

Addition, subtraction, and multiplication are defined on ℤ/mℤ by the following rules:

[a] + [b] = [a + b]
[a] − [b] = [a − b]
[a] [b] = [a b].

The properties given before imply that, with these operations, ℤ/mℤ is a commutative ring. For example, in the ring ℤ/24ℤ, one has

[12] + [21] = [9],

as in the arithmetic for the 24-hour clock.

The notation ℤ/mℤ is used because this ring is the quotient ring of ℤ by the ideal mℤ, the set formed by all multiples of m, that is, all numbers k m with k an integer.

Under addition, ℤ/mℤ is a cyclic group. All finite cyclic groups are isomorphic with ℤ/mℤ for some m.[7]

The ring of integers modulo m is a field; that is, every nonzero element has a multiplicative inverse, if and only if m is prime. If m = pᵏ is a prime power with k > 1, there exists a unique (up to isomorphism) finite field with m elements, which is not isomorphic to ℤ/mℤ, which fails to be a field because it has zero-divisors.

If m > 1, (ℤ/mℤ)ˣ denotes the multiplicative group of the integers modulo m that are invertible. It consists of the congruence classes [a], where a is coprime to m; these are precisely the classes possessing a multiplicative inverse. They form an abelian group under multiplication; its order is φ(m), where φ is Euler's totient function.

Applications


In pure mathematics, modular arithmetic is one of the foundations of number theory, touching on almost every aspect of its study, and it is also used extensively in group theory, ring theory, knot theory, and abstract algebra. In applied mathematics, it is used in computer algebra, cryptography, computer science, chemistry and the visual and musical arts.

A very practical application is to calculate checksums within serial number identifiers. For example, International Standard Book Number (ISBN) uses modulo 11 (for 10-digit ISBN) or modulo 10 (for 13-digit ISBN) arithmetic for error detection. Likewise, International Bank Account Numbers (IBANs) use modulo 97 arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of the CAS registry number (a unique identifying number for each chemical compound) is a check digit, which is calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and computing the sum modulo 10.
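As an illustration of such check digits, the ISBN-10 scheme chooses the final digit so that the weighted digit sum is divisible by 11. A Python sketch (the helper name is ours):

```python
def isbn10_check_digit(first9: str) -> str:
    """Check digit making the weighted sum ≡ 0 (mod 11).

    The first nine digits are weighted 10, 9, ..., 2; the check digit
    has weight 1, with 'X' standing for the value 10.
    """
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), first9))
    check = (-total) % 11
    return "X" if check == 10 else str(check)

# Well-known example: ISBN 0-306-40615-2
print(isbn10_check_digit("030640615"))  # 2
```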

In cryptography, modular arithmetic directly underpins public key systems such as RSA and Diffie–Hellman, and provides finite fields which underlie elliptic curves, and is used in a variety of symmetric key algorithms including Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), and RC4. RSA and Diffie–Hellman use modular exponentiation.

In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used in polynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations of polynomial greatest common divisor, exact linear algebra and Gröbner basis algorithms over the integers and the rational numbers. As posted on Fidonet in the 1980s and archived at Rosetta Code, modular arithmetic was used to disprove Euler's sum of powers conjecture on a Sinclair QL microcomputer using just one-fourth of the integer precision used by a CDC 6600 supercomputer to disprove it two decades earlier via a brute force search.[8]

In computer science, modular arithmetic is often applied in bitwise operations and other operations involving fixed-width, cyclic data structures. The modulo operation, as implemented in many programming languages and calculators, is an application of modular arithmetic that is often used in this context. The logical operator XOR sums 2 bits, modulo 2.

The use of long division to turn a fraction into a repeating decimal in any base b is equivalent to modular multiplication of b modulo the denominator. For example, for decimal, b = 10.
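This correspondence can be made concrete: each digit of the expansion comes from multiplying the current remainder by b and reducing modulo the denominator. A Python sketch for base 10 (the function name is ours):

```python
def decimal_expansion(numerator: int, denominator: int, digits: int) -> str:
    """First `digits` digits of numerator/denominator after the point,
    generated by repeated multiplication by 10 modulo the denominator."""
    r = numerator % denominator
    out = []
    for _ in range(digits):
        r *= 10
        out.append(str(r // denominator))
        r %= denominator          # the remainder drives the repeating cycle
    return "".join(out)

print(decimal_expansion(1, 7, 12))  # 142857142857: period 6, since
                                    # 10 has multiplicative order 6 modulo 7
```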

In music, arithmetic modulo 12 is used in the consideration of the system of twelve-tone equal temperament, where octave and enharmonic equivalency occurs (that is, pitches in a 1:2 or 2:1 ratio are equivalent, and C-sharp is considered the same as D-flat).

The method of casting out nines offers a quick check of decimal arithmetic computations performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9).
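A Python sketch of casting out nines (note the check can expose an error but never prove a result correct, since any error that preserves the value modulo 9 slips through):

```python
def digit_sum_mod9(n: int) -> int:
    """Repeatedly sum decimal digits; the result equals n mod 9,
    because 10 ≡ 1 (mod 9) makes every digit keep its value."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n % 9

# Check a hand computation: 1234 × 5678 = 7006652
a, b, claimed = 1234, 5678, 7006652
assert digit_sum_mod9(a) * digit_sum_mod9(b) % 9 == digit_sum_mod9(claimed)
print("the product passes the casting-out-nines check")
```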

Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In particular, Zeller's congruence and the Doomsday algorithm make heavy use of modulo-7 arithmetic.
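A Python sketch of Zeller's congruence for the Gregorian calendar, cross-checked against the standard library:

```python
from datetime import date

def zeller_day_of_week(y: int, m: int, d: int) -> int:
    """Zeller's congruence: 0=Saturday, 1=Sunday, ..., 6=Friday.
    January and February are treated as months 13 and 14 of the
    previous year, so the leap day falls at the end of the year."""
    if m < 3:
        m += 12
        y -= 1
    k, j = y % 100, y // 100           # year of century, century
    return (d + (13 * (m + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7

# Cross-check against datetime (which uses Monday=0 ... Sunday=6):
for y, m, d in [(2000, 1, 1), (1801, 1, 1), (2024, 2, 29)]:
    assert (zeller_day_of_week(y, m, d) + 5) % 7 == date(y, m, d).weekday()

print(zeller_day_of_week(2000, 1, 1))  # 0: 1 January 2000 was a Saturday
```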

More generally, modular arithmetic also has application in disciplines such as law (for example, apportionment), economics (for example, game theory) and other areas of the social sciences, where proportional division and allocation of resources plays a central part of the analysis.

Computational complexity


Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved in polynomial time with a form of Gaussian elimination; for details, see the linear congruence theorem. Algorithms, such as Montgomery reduction, also exist to allow simple arithmetic operations, such as multiplication and exponentiation modulo m, to be performed efficiently on large numbers.

Some operations, like finding a discrete logarithm or a quadratic congruence, appear to be as hard as integer factorization and thus are a starting point for cryptographic algorithms and encryption. These problems might be NP-intermediate.

Solving a system of non-linear modular arithmetic equations is NP-complete.[9]

from Grokipedia
Modular arithmetic is a fundamental branch of number theory that deals with integers in a cyclic manner, where equivalence is defined by congruence modulo a positive integer $ n $, known as the modulus, such that two integers $ a $ and $ b $ are congruent modulo $ n $ (written $ a \equiv b \pmod{n} $) if $ n $ divides their difference $ a - b $.[1] This system effectively "wraps around" numbers upon reaching multiples of $ n $, allowing computations to be reduced to remainders between 0 and $ n-1 $, which partitions the integers into $ n $ distinct residue classes.[2] For example, in modulo 12, the hours on a clock illustrate how 13 ≡ 1 and 14 ≡ 2, simplifying arithmetic by ignoring multiples of the modulus.[3]

The modern formalism of modular arithmetic was introduced by Carl Friedrich Gauss in his seminal 1801 work Disquisitiones Arithmeticae, where he systematically developed the theory of congruences.[2] Gauss's contributions included key results like the law of quadratic reciprocity, which relates the solvability of quadratic congruences across different moduli, laying the groundwork for advanced topics in analytic number theory.[3] Elements of related ideas appeared earlier in ancient Chinese mathematics, such as the Chinese Remainder Theorem in the Sunzi Suanjing (3rd–5th century CE).[4]

Key properties of modular arithmetic mirror those of standard integer arithmetic, ensuring it forms a ring structure for each modulus: congruence is an equivalence relation (reflexive, symmetric, and transitive), and operations like addition and multiplication are compatible with congruence, so if $ a \equiv b \pmod{n} $ and $ c \equiv d \pmod{n} $, then $ a + c \equiv b + d \pmod{n} $ and $ ac \equiv bd \pmod{n} $.[5] Multiplication and addition are well-defined on residue classes, enabling efficient computations, while inverses exist for elements coprime to the modulus, leading to theorems like Fermat's Little Theorem for primes $ p $: $ a^p \equiv a \pmod{p} $.[1] These properties make modular arithmetic indispensable for solving Diophantine equations and analyzing patterns in integers.[2]

Beyond pure mathematics, modular arithmetic underpins diverse applications, including cryptography (such as the RSA algorithm, which relies on the difficulty of factoring products of large primes modulo $ n $) and error-correcting codes like Reed-Solomon codes used in data storage and transmission. In computer science, it facilitates hash functions, pseudorandom number generation, and efficient large-integer arithmetic by reducing operations to bounded remainders, as seen in modular exponentiation for secure protocols.[2] Its elegance also drove breakthroughs like Andrew Wiles's 1994 proof of Fermat's Last Theorem, via connections to elliptic curves and modular forms.[3]

Foundations

Congruence relation

In modular arithmetic, two integers $ a $ and $ b $ are said to be congruent modulo a positive integer $ m $, denoted $ a \equiv b \pmod{m} $, if $ m $ divides the difference $ a - b $; that is, there exists an integer $ k $ such that $ a - b = km $.[6] Here, $ m $ is called the modulus, and the congruent integers $ a $ and $ b $ leave the same remainder when divided by $ m $.[6] This relation was first systematically introduced by Carl Friedrich Gauss in his 1801 treatise Disquisitiones Arithmeticae, where he developed it as a foundational tool for number theory.[7] The modulus $ m $ defines a partition of the set of all integers into equivalence classes, known as residue classes, each consisting of all integers congruent to a fixed representative modulo $ m $.[6] Congruence modulo $ m $ is an equivalence relation on the integers, satisfying the properties of reflexivity, symmetry, and transitivity.[6]
  • Reflexivity: For any integer $ a $, $ a \equiv a \pmod{m} $ because $ a - a = 0 = 0 \cdot m $, so $ m $ divides $ a - a $.[6]
  • Symmetry: If $ a \equiv b \pmod{m} $, then $ m $ divides $ a - b $, which implies $ m $ divides $ b - a = -(a - b) $, so $ b \equiv a \pmod{m} $.[6]
  • Transitivity: If $ a \equiv b \pmod{m} $ and $ b \equiv c \pmod{m} $, then $ m $ divides $ a - b $ and $ b - c $, so $ m $ divides $ (a - b) + (b - c) = a - c $, hence $ a \equiv c \pmod{m} $.[6]

Examples of congruence

A fundamental illustration of the congruence relation arises from the condition that two integers $ a $ and $ b $ are congruent modulo $ m $ if $ m $ divides $ a - b $.[8] Consider the integers 17 and 5 with modulus 12. Their difference is $ 17 - 5 = 12 $, which is divisible by 12, so $ 17 \equiv 5 \pmod{12} $.[9] This means 17 and 5 occupy the same position in the cyclic structure of residues modulo 12.

A practical everyday example is clock arithmetic, which operates modulo 12. Adding 2 hours to 12 o'clock yields 2 o'clock, since $ 14 \equiv 2 \pmod{12} $; the clock "wraps around" after 12, treating 14 as equivalent to 2.[10] Similarly, 11 o'clock plus 1 hour is 12 o'clock, or $ 12 \equiv 0 \pmod{12} $, highlighting how time cycles back to the starting point.[11]

Even and odd numbers provide another simple case using modulus 2. All even integers are congruent to 0 modulo 2, as their difference from 0 is even (divisible by 2); for instance, $ 4 \equiv 0 \pmod{2} $ because $ 4 - 0 = 4 $ and $ 2 \mid 4 $.[12] Odd integers, such as 7, satisfy $ 7 \equiv 1 \pmod{2} $ since $ 7 - 1 = 6 $ and $ 2 \mid 6 $.[12] This partitions the integers into two equivalence classes: evens and odds.

Another illustrative example using modulus 9 involves a number $ N $ that leaves a remainder of 8 when divided by 9, so $ N \equiv 8 \pmod{9} $. Then $ N + 1 \equiv 0 \pmod{9} $, meaning $ N + 1 $ is divisible by 9. For instance, if $ N = 17 $, then $ 17 \div 9 $ gives quotient 1 and remainder 8, while $ 18 \div 9 $ gives quotient 2 and remainder 0. This demonstrates the addition property in modular arithmetic: adding 1 to a number congruent to 8 modulo 9 results in a multiple of 9.[13]

Visually, congruence modulo $ m $ can be represented as a number line that wraps around at every multiple of $ m $, forming a loop where positions repeat every $ m $ units; for modulus 12, the line folds back after 12, 24, and so on, aligning congruent numbers at the same point on the circle. A common pitfall is confusing congruence with equality: while congruent numbers like 17 and 5 modulo 12 share properties under division by 12, they are distinct integers and not identical.[14] Congruence denotes equivalence in a modular sense, preserving remainders but allowing differences that are multiples of the modulus.[15]

Properties of congruences

Basic properties

The congruence relation preserves the basic arithmetic operations of addition, subtraction, and multiplication, allowing computations in modular arithmetic to mirror those in the integers under certain conditions.[16][17] Specifically, if $ a \equiv b \pmod{m} $ and $ c \equiv d \pmod{m} $, then $ a + c \equiv b + d \pmod{m} $. This follows from the definition of congruence: since $ m $ divides $ a - b $ and $ m $ divides $ c - d $, it divides $ (a + c) - (b + d) = (a - b) + (c - d) $.[16][17] Similarly, subtraction is preserved: $ a - c \equiv b - d \pmod{m} $. The proof is analogous to addition, as $ m $ divides $ (a - c) - (b - d) = (a - b) - (c - d) $, or by noting that subtraction can be expressed as addition with the additive inverse of $ c $.[17][18] For multiplication, if $ a \equiv b \pmod{m} $ and $ c \equiv d \pmod{m} $, then $ ac \equiv bd \pmod{m} $. To see this, expand $ ac - bd = a(c - d) + d(a - b) $; since $ m $ divides both $ a - b $ and $ c - d $, and integers are closed under multiplication and addition, $ m $ divides the difference.[16][17] Every integer $ a $ has an additive inverse modulo $ m $, denoted $ -a \pmod{m} $, such that $ a + (-a) \equiv 0 \pmod{m} $. Explicitly, $ -a \equiv m - (a \bmod m) \pmod{m} $ if $ a \not\equiv 0 \pmod{m} $, and $ 0 $ is its own inverse; this holds because $ a + (m - a) = m \equiv 0 \pmod{m} $.[18][17] These properties enable straightforward computations. For instance, since $ 10 \equiv 4 \pmod{6} $ and $ 15 \equiv 3 \pmod{6} $, it follows that $ 10 + 15 = 25 \equiv 4 + 3 = 7 \equiv 1 \pmod{6} $.[16]

Advanced properties

One advanced property of congruences concerns exponentiation. If $ a \equiv b \pmod{m} $ and $ k $ is a positive integer, then $ a^k \equiv b^k \pmod{m} $.[8] This result follows from mathematical induction on $ k $, using the basic property that congruences are preserved under multiplication: the base case holds for $ k = 1 $, and assuming it for $ k $, the inductive step shows $ a^{k+1} = a^k \cdot a \equiv b^k \cdot b = b^{k+1} \pmod{m} $.[8]

Linear congruences of the form $ ax \equiv b \pmod{m} $, where $ a $, $ b $, and $ m > 0 $ are integers, exhibit sophisticated solvability conditions. The congruence has solutions if and only if $ \gcd(a, m) $ divides $ b $; in this case, there are exactly $ \gcd(a, m) $ incongruent solutions modulo $ m $.[19] For instance, if $ \gcd(a, m) = 1 $, there is a unique solution modulo $ m $, corresponding to the existence of a modular inverse for $ a $ modulo $ m $.[19]

The Chinese Remainder Theorem provides a key result for systems of congruences with coprime moduli. If $ m $ and $ n $ are coprime positive integers and $ a $, $ b $ are integers, then the system

$$ \begin{cases} x \equiv a \pmod{m} \\ x \equiv b \pmod{n} \end{cases} $$

has a unique solution modulo $ mn $.[20] This theorem extends to any finite number of pairwise coprime moduli, enabling the decomposition of problems modulo a product into independent subproblems.[20]

Fermat's Little Theorem emerges as a special case of more general exponentiation properties in modular arithmetic. For a prime $ p $ and integer $ a $ with $ \gcd(a, p) = 1 $, it states that $ a^{p-1} \equiv 1 \pmod{p} $.[21] This can be viewed as a consequence of the multiplicative group structure modulo $ p $, where the order of $ a $ divides $ p-1 $.[21] Euler's theorem generalizes Fermat's Little Theorem to composite moduli. If $ \gcd(a, n) = 1 $, then $ a^{\phi(n)} \equiv 1 \pmod{n} $, where $ \phi $ is Euler's totient function, counting the integers up to $ n $ that are coprime to $ n $.[22] When $ n = p $ is prime, $ \phi(p) = p-1 $, recovering Fermat's Little Theorem exactly.[22] The proof relies on the fact that the units modulo $ n $ form a group of order $ \phi(n) $, so raising a unit to the $ \phi(n) $-th power yields the identity.[23]
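The two-modulus case of the Chinese Remainder Theorem is constructive. A Python sketch using Python 3.8+'s `pow(m, -1, n)` for the modular inverse (the `crt` function name is ours):

```python
from math import gcd

def crt(a: int, m: int, b: int, n: int) -> int:
    """Solve x ≡ a (mod m), x ≡ b (mod n) for coprime m, n,
    returning the unique solution in 0..m*n-1."""
    assert gcd(m, n) == 1, "moduli must be coprime"
    # Write x = a + m*t; then m*t ≡ b - a (mod n), so
    # t ≡ (b - a) * m^(-1) (mod n).
    t = (b - a) * pow(m, -1, n) % n
    return (a + m * t) % (m * n)

x = crt(2, 3, 3, 5)   # x ≡ 2 (mod 3) and x ≡ 3 (mod 5)
print(x)              # 8
```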

Algebraic structures

Congruence classes

In modular arithmetic, congruence classes arise from the equivalence relation defined by congruence modulo $ m $, where $ m $ is a positive integer, partitioning the set of all integers $ \mathbb{Z} $ into disjoint subsets. The congruence class of an integer $ a $ modulo $ m $, denoted $ [a]_m $, is the set of all integers $ b \in \mathbb{Z} $ such that $ b \equiv a \pmod{m} $, or equivalently, $ [a]_m = \{ a + km \mid k \in \mathbb{Z} \} $.[24][25] This concept was first systematically introduced by Carl Friedrich Gauss in his 1801 work Disquisitiones Arithmeticae, where congruences provided a framework for studying divisibility properties of integers. These classes form a partition of $ \mathbb{Z} $, meaning they are pairwise disjoint (if $ [a]_m \neq [b]_m $, then $ [a]_m \cap [b]_m = \emptyset $) and their union equals $ \mathbb{Z} $, ensuring every integer belongs to exactly one class.[15][25] There are precisely $ m $ distinct congruence classes modulo $ m $, corresponding to the possible remainders when integers are divided by $ m $.[24][15] Each individual class is infinite, as it contains infinitely many integers differing by multiples of $ m $, yet the finite number of classes captures the periodic structure inherent in modular arithmetic.[25][15] Any integer congruent to $ a $ modulo $ m $ can serve as a representative for $ [a]_m $, but a canonical choice is the least non-negative residue $ r $ with $ 0 \leq r < m $, giving the set $ \{ [0]_m, [1]_m, \dots, [m-1]_m \} $.[24][25] In the context of group theory, these congruence classes can be visualized as cosets of the subgroup $ m\mathbb{Z} $ (the multiples of $ m $) in the additive group $ \mathbb{Z} $, where $ [a]_m = a + m\mathbb{Z} $, highlighting their algebraic structure without delving into operations on the classes themselves.[24]

Integers modulo n

The ring of integers modulo $ n $, denoted $ \mathbb{Z}/n\mathbb{Z} $ (or $ \mathbb{Z}_n $), is the quotient ring formed by the integers $ \mathbb{Z} $ modulo the principal ideal $ n\mathbb{Z} $.[26] Its elements are the congruence classes $ [a] = \{ k \in \mathbb{Z} \mid k \equiv a \pmod{n} \} $, typically represented by the residues $ 0, 1, \dots, n-1 $. Addition and multiplication are defined via the corresponding operations on integers, reduced modulo $ n $: specifically, $ [a] + [b] = [a + b] $ and $ [a] \cdot [b] = [a \cdot b] $.[27] This structure satisfies the ring axioms: it is an abelian group under addition with identity $ [0] $, multiplication is associative and commutative, distributes over addition, and has a multiplicative identity $ [1] $, making $ \mathbb{Z}/n\mathbb{Z} $ a commutative ring with unity.[26] However, if $ n $ is composite, $ \mathbb{Z}/n\mathbb{Z} $ contains zero divisors (nonzero elements $ [a] $ and $ [b] $ such that $ [a] \cdot [b] = [0] $); for instance, in $ \mathbb{Z}/4\mathbb{Z} $, $ [2] \cdot [2] = [0] $.[28] The elements possessing multiplicative inverses are the units, namely the classes $ [a] $ where $ \gcd(a, n) = 1 $; these form the multiplicative group $ (\mathbb{Z}/n\mathbb{Z})^\times $ under the ring's multiplication.[28] The ideals of $ \mathbb{Z}/n\mathbb{Z} $ are principal and correspond to the divisors of $ n $: each ideal is generated by $ [d] $ where $ d $ divides $ n $, and takes the form $ d(\mathbb{Z}/n\mathbb{Z}) = \{ [d] \cdot [x] \mid x \in \mathbb{Z}/n\mathbb{Z} \} $.[27] When $ n = p $ is prime, $ \mathbb{Z}/p\mathbb{Z} $ has no zero divisors and every nonzero element is a unit, rendering it a field (also denoted $ \mathbb{F}_p $).[29]
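The zero divisors and units described here can be enumerated directly on the canonical representatives $ 0, \dots, n-1 $. A small Python sketch for $ \mathbb{Z}/4\mathbb{Z} $:

```python
# Arithmetic in Z/nZ on canonical representatives 0..n-1,
# illustrating zero divisors (composite n) and units.
from math import gcd

n = 4
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

print(mul(2, 2))    # 0: [2] is a zero divisor in Z/4Z
units = [a for a in range(1, n) if gcd(a, n) == 1]
print(units)        # [1, 3]: the group (Z/4Z)^x

# In Z/pZ with p prime, every nonzero class is a unit:
p = 7
assert all(gcd(a, p) == 1 for a in range(1, p))
```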

Residue systems

Complete residue systems

A complete residue system modulo $ m $ is a set of $ m $ integers $ \{a_0, a_1, \dots, a_{m-1}\} $ such that the residue classes $ [a_0], [a_1], \dots, [a_{m-1}] $ modulo $ m $ are all distinct and cover every congruence class in $ \mathbb{Z}/m\mathbb{Z} $.[30] This means that for any integer $ x $, there exists exactly one $ a_i $ in the set such that $ x \equiv a_i \pmod{m} $.[31] The standard example of a complete residue system modulo $ m $ is the set of least non-negative residues $ \{0, 1, 2, \dots, m-1\} $, where each element represents a unique congruence class.[30] Other sets can be formed by permuting these elements or adding the same multiple of $ m $ to each, preserving the distinct classes; for instance, $ \{km, km+1, \dots, km+(m-1)\} $ for any integer $ k $ is also complete.[32] Properties of complete residue systems include their flexibility in representation: any such system provides an exhaustive enumeration of the congruence classes modulo $ m $, and transformations like adding a fixed multiple of $ m $ or rearranging the elements yield equivalent systems.[30] They are particularly useful in computations for indexing all possible residues, such as in iterative algorithms that cycle through every class without repetition.[33] For example, modulo 5, both $ \{0, 1, 2, 3, 4\} $ and $ \{5, 6, 7, 8, 9\} $ form complete residue systems, as each integer is congruent to exactly one element in each set modulo 5.[30]

Reduced residue systems

A reduced residue system modulo $m$, where $m > 1$ is a positive integer, is a set of exactly $\phi(m)$ integers that are pairwise incongruent modulo $m$ and each relatively prime to $m$, with $\phi$ denoting Euler's totient function.[31] This set represents the distinct residue classes modulo $m$ consisting solely of units in the ring $\mathbb{Z}/m\mathbb{Z}$.[34] Such a system can be derived from any complete residue system modulo $m$ by excluding the elements not coprime to $m$.[35] The size of a reduced residue system modulo $m$ is precisely $\phi(m)$, which counts the integers in $\{1, \dots, m\}$ that are coprime to $m$.[31] These classes are exactly the elements of the multiplicative group $(\mathbb{Z}/m\mathbb{Z})^*$, the group of units modulo $m$, under multiplication modulo $m$.[34] For instance, when $m = 10$, $\phi(10) = 4$, and the set $\{1, 3, 7, 9\}$ serves as a reduced residue system modulo 10, as each element is coprime to 10 and they represent distinct classes.[35] To construct a reduced residue system modulo $m$, one standard method is to list all integers from 1 to $m-1$ that share no common prime factors with $m$, ensuring the count matches $\phi(m)$, which itself can be computed using the inclusion-exclusion principle based on the prime factorization of $m$.[31] In the special case where $m = p$ is prime, every integer from 1 to $p-1$ is coprime to $p$, so $\{1, 2, \dots, p-1\}$ forms a reduced residue system of size $p-1 = \phi(p)$.[34] This construction highlights the role of reduced residue systems in studying multiplicative properties within modular arithmetic.[35]
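The standard construction (filter a complete residue system by coprimality) is a one-liner; the sketch below also yields $\phi(m)$ as the length of the result:

```python
from math import gcd

def reduced_residue_system(m):
    """Least positive representatives coprime to m; the length is phi(m)."""
    return [a for a in range(1, m + 1) if gcd(a, m) == 1]
```

For $m = 10$ this returns $[1, 3, 7, 9]$, the example given in the text, and for prime $m = p$ it returns all of $1, \dots, p-1$.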

Covering systems

A covering system is a finite collection of arithmetic progressions whose union is the set of all integers.[36] Equivalently, it consists of congruences $x \equiv a_i \pmod{m_i}$ with $m_i > 1$ for each $i$, such that every integer $x$ satisfies at least one of these congruences.[37] The concept was introduced by Paul Erdős in 1950, initially to construct a counterexample refuting de Polignac's conjecture by showing that not every odd integer greater than 5 can be expressed as $2^k + p$ where $p$ is prime (or 1).[36] Erdős's work highlighted connections to density (the union covers the integers with density 1) and to minimality (systems from which no congruence can be removed without leaving gaps).[36] In 2015, Bob Hough resolved another of Erdős's conjectures on covering systems by proving that the minimum modulus of any covering system with distinct moduli is bounded (by approximately $10^{16}$), so no such system exists with all moduli exceeding this bound.[38]

Covering systems are classified as distinct if all moduli $m_i$ are unequal, and disjoint (or exact) if the arithmetic progressions do not overlap, meaning every integer satisfies exactly one congruence.[39] A minimal covering system is one where no proper subset still covers all integers.[36] The classic example of a distinct covering system, due to Erdős, is $\{0 \pmod{2},\ 0 \pmod{3},\ 1 \pmod{4},\ 5 \pmod{6},\ 7 \pmod{12}\}$: every integer satisfies at least one of these five congruences.[36] A famous open problem, posed by Erdős, asks whether there exists a covering system all of whose moduli are distinct and odd; he offered a $1000 prize for its resolution.[36] This remains unsolved, with partial results showing no such system exists if additional restrictions, such as square-free moduli, are imposed.[40]
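Because a covering system is periodic with period $\operatorname{lcm}(m_i)$, coverage can be verified exhaustively over a single period. A minimal Python sketch (an illustrative verifier, not drawn from the cited sources; `math.lcm` requires Python 3.9+):

```python
from math import lcm

def covers_all_integers(congruences):
    """congruences: list of (a, m) pairs meaning x ≡ a (mod m).
    By periodicity, checking one full period lcm(m_i) suffices."""
    period = lcm(*(m for _, m in congruences))
    return all(any(x % m == a % m for a, m in congruences)
               for x in range(period))
```

Erdős's five-congruence system passes this check, while dropping any of its congruences leaves some residue class uncovered.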

Applications

In number theory

Modular arithmetic plays a central role in the study of quadratic residues within number theory. An integer $a$ is a quadratic residue modulo an odd prime $p$ if $\gcd(a, p) = 1$ and there exists an integer $x$ such that $x^2 \equiv a \pmod{p}$; if $\gcd(a, p) = 1$ but no such $x$ exists, $a$ is a quadratic nonresidue modulo $p$.[41] The Legendre symbol $\left( \frac{a}{p} \right)$ provides a concise way to record this status: it equals $1$ if $a$ is a quadratic residue modulo $p$, $-1$ if $a$ is a quadratic nonresidue modulo $p$, and $0$ if $p$ divides $a$.[42] Euler's criterion establishes that $\left( \frac{a}{p} \right) \equiv a^{(p-1)/2} \pmod{p}$, linking the symbol directly to modular exponentiation.[42]

The law of quadratic reciprocity connects quadratic residues across different primes. For distinct odd primes $p$ and $q$, it states that $\left( \frac{p}{q} \right) \left( \frac{q}{p} \right) = (-1)^{\frac{p-1}{2} \cdot \frac{q-1}{2}}$.[43] This result, first proved by Carl Friedrich Gauss in his Disquisitiones Arithmeticae (1801), allows the Legendre symbol $\left( \frac{a}{p} \right)$ to be computed by reduction to smaller primes and facilitates the analysis of quadratic forms in number theory.[43]

Wilson's theorem provides another key application, characterizing primes via factorials in modular arithmetic. It asserts that for a prime $p$, $(p-1)! \equiv -1 \pmod{p}$.[44] The congruence holds if and only if $p$ is prime, offering a (highly inefficient) primality test based on modular computation of the factorial. The theorem, proposed by John Wilson and proved by Joseph-Louis Lagrange in 1773, underscores the structure of the multiplicative group modulo $p$.[44]

Primitive roots modulo $n$ are generators of the multiplicative group $(\mathbb{Z}/n\mathbb{Z})^*$, and their existence is tied to specific forms of $n$.
For an odd prime $p$, there exists a primitive root $g$ modulo $p$, meaning the order of $g$ in $(\mathbb{Z}/p\mathbb{Z})^*$ is $p-1$, so the powers of $g$ yield all units modulo $p$. Gauss proved this existence in Disquisitiones Arithmeticae (1801), and primitive roots also exist for moduli of the form $p^k$ and $2p^k$ with $p$ an odd prime and $k$ a positive integer. The number of primitive roots modulo $p$ is $\phi(p-1)$, where $\phi$ is Euler's totient function, highlighting the cyclic nature of these groups.

Modular arithmetic, particularly through the Chinese remainder theorem, aids in determining the solvability of Diophantine equations by checking consistency modulo primes. For instance, Fermat's theorem on sums of two squares can be phrased in these terms: an odd prime $p$ can be written as $p = x^2 + y^2$ if and only if $-1$ is a quadratic residue modulo $p$, that is, if and only if $p \equiv 1 \pmod{4}$. Arguments of this kind, in which a global solvability question is reduced to congruence conditions, exemplify the local-global viewpoint facilitated by modular reductions.

Dirichlet's theorem on arithmetic progressions demonstrates the density of primes in modular classes: if $\gcd(a, m) = 1$, then there are infinitely many primes congruent to $a$ modulo $m$. Proved by Peter Gustav Lejeune Dirichlet in 1837 using analytic methods involving L-functions, this result generalizes Euclid's infinitude of primes and relies on the non-vanishing at $s = 1$ of the L-functions attached to Dirichlet characters modulo $m$.
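Both Euler's criterion and the Chinese remainder theorem reduce to a few lines of integer arithmetic. The sketch below is illustrative only; it relies on Python 3.8+'s three-argument `pow(x, -1, m)` for modular inverses:

```python
def legendre(a, p):
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) mod p is 1 (residue), p-1 (nonresidue), or 0 (p | a)."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise coprime moduli; returns
    the unique solution modulo the product of the moduli."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # Choose t so that x + M*t ≡ r (mod m).
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x % M
```

For example, `crt([2, 3, 2], [3, 5, 7])` recovers the classic solution 23 of Sunzi's simultaneous congruences.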

In computing and cryptography

Modular arithmetic plays a central role in computing applications, particularly in hash functions and pseudorandom number generation, where it enables efficient mapping and sequence production within bounded ranges. The djb2 hash function, developed by Daniel J. Bernstein, computes a hash value through iterative shifts, additions, and implicit modular reduction via integer overflow, typically modulo $2^{32}$ or $2^{64}$, providing a simple yet effective distribution for hash tables and data integrity checks.[45] This approach leverages the modulo operation to fold large intermediate values back into a fixed-size output space, minimizing collisions in practical implementations.[45]

Pseudorandom number generators often employ linear congruential generators (LCGs), which produce sequences using the recurrence $x_{n+1} = (a x_n + c) \bmod m$, where $a$, $c$, and $m$ are chosen parameters and the modulus $m$ governs periodicity and uniformity.[46] Introduced by D. H. Lehmer in 1951, LCGs rely on modular multiplication and addition to generate numbers that approximate randomness for simulations, Monte Carlo methods, and procedural content in software.[46] The full period of $m$ is achieved only under specific conditions on $a$ and $c$, which ensure the sequence cycles through all residues before repeating.[46]

In cryptography, modular arithmetic forms the foundation of public-key systems, enabling secure key exchange and encryption through computationally hard problems in finite fields.
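The djb2 and LCG recurrences just described each fit in a few lines. In this sketch the bitmask plays the role of C's implicit unsigned overflow for djb2, and the LCG constants $a = 1664525$, $c = 1013904223$ are one commonly published choice (an assumption of this sketch, not fixed by the cited sources):

```python
def djb2(s, mask=0xFFFFFFFF):
    """djb2: h -> h*33 + byte, reduced modulo 2^32 via the mask."""
    h = 5381
    for byte in s.encode():
        h = (h * 33 + byte) & mask
    return h

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x
```

Two generators started from the same seed produce identical sequences, which is exactly the determinism that simulations rely on.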
The RSA cryptosystem, proposed by Ronald Rivest, Adi Shamir, and Leonard Adleman in 1978, uses modular exponentiation for both encryption and decryption: the ciphertext $c$ is computed as $c = m^e \bmod n$, where $m$ is the plaintext, $e$ is the public exponent, and $n = pq$ is the product of two large primes; decryption recovers $m = c^d \bmod n$ using the private exponent $d$, with security based on the difficulty of factoring $n$ and Euler's theorem ensuring correctness.[47] This modular framework allows asymmetric keys, where the public key $(e, n)$ is shared openly while the private key $d$ remains secret.[47]

The Diffie-Hellman key exchange, introduced by Whitfield Diffie and Martin Hellman in 1976, facilitates secure shared secret generation over insecure channels using modular exponentiation in a prime field: one party computes $g^a \bmod p$ and shares it, the other computes $g^b \bmod p$, and the shared secret is $(g^a)^b \equiv (g^b)^a \equiv g^{ab} \pmod{p}$, where $g$ is a generator and $p$ a large prime, relying on the discrete logarithm problem for security.[48] Here, $g$ is selected from the reduced residue system modulo $p$ so that it generates the multiplicative group.[48]

Elliptic curve cryptography (ECC) extends modular arithmetic to elliptic curves over finite fields, typically modulo a large prime $p$; point addition and scalar multiplication are the basic operations, and security rests on the hardness of the elliptic curve discrete logarithm problem.[49] Proposed independently by Neal Koblitz in 1987 and Victor Miller in 1986, ECC achieves security equivalent to larger-modulus systems like RSA with much shorter keys: a 256-bit ECC key, for instance, offers security comparable to a 3072-bit RSA key, due to the group structure of curves defined by Weierstrass equations reduced modulo $p$.[49] These operations underpin protocols like ECDSA for digital signatures and ECDH for key exchange, with standards bodies such as NIST specifying curves such as P-256 for practical deployment.[50]
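The Diffie-Hellman exchange can be demonstrated end to end with toy parameters. The values below ($p = 23$, $g = 5$, and the secrets 6 and 15) are illustrative only; real deployments use primes of 2048 bits or more:

```python
def dh_demo(p=23, g=5, a=6, b=15):
    """Toy Diffie-Hellman over a tiny prime. Each side publishes
    g^secret mod p; both then derive the same g^(a*b) mod p."""
    A = pow(g, a, p)              # Alice's public value
    B = pow(g, b, p)              # Bob's public value
    shared_alice = pow(B, a, p)   # Alice combines Bob's value
    shared_bob = pow(A, b, p)     # Bob combines Alice's value
    return A, B, shared_alice, shared_bob
```

Both derived secrets equal $g^{ab} \bmod p$, even though neither secret exponent ever crosses the channel.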

In other fields

In music theory, modular arithmetic underpins the analysis of pitch classes in Western twelve-tone equal temperament, where pitches are represented as equivalence classes modulo 12 semitones to account for octave equivalence. This allows transpositions to be computed as additions modulo 12; for instance, shifting a pitch class by 7 semitones corresponds to adding 7 and reducing modulo 12, facilitating the study of intervals and set classes in atonal music.[51]

In biology, modular arithmetic models circadian rhythms by treating time as periodic with a 24-hour cycle, enabling the representation of daily biological oscillations. For example, in systems biology simulations of clock models, time variables are adjusted using modulo operations relative to the cycle period (often 24 hours) to simulate phase shifts and light-dark photoperiods, as seen in the Input Signal Step Function for SBML models. This approach captures the repetitive nature of gene expression and behavioral patterns without unbounded time accumulation.[52]

Physics simulations frequently employ modular arithmetic to implement periodic boundary conditions, where particle positions are taken modulo the simulation box dimensions to mimic infinite domains and eliminate edge effects. In molecular dynamics and lattice-based studies, such as sedimentation of particles in a cubic lattice, coordinates exceeding the box size are wrapped around modulo the lattice length, ensuring translational invariance and enabling efficient computation of long-range interactions in materials science and fluid dynamics.[53]

In economics, seasonal adjustments to time series data remove predictable yearly cycles by estimating and subtracting periodic components, often indexed modulo 12 for monthly observations to isolate non-seasonal trends like economic growth. The U.S. Bureau of Labor Statistics applies methods such as X-13ARIMA-SEATS to decompose series into trend-cycle, seasonal, and irregular parts, where seasonal factors are derived from historical patterns repeating every 12 months, aiding accurate forecasting of indicators like employment and inflation.[54]

Modular arithmetic appears in art and design through patterns exhibiting rotational symmetry, notably in Penrose tilings, which use aperiodic arrangements of rhombi to create quasiperiodic structures inspired by five-fold symmetry. Constructions of such tilings can incorporate modular operations on angles or vectors to generate non-repeating motifs, as in vector-based methods that apply modulo arithmetic to directions for coloring or subdivision, influencing decorative arts, architecture, and fractal-inspired visuals.[55]
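The pitch-class arithmetic described above is a one-line modulo computation. The sketch below uses the conventional numbering C = 0 through B = 11 (an illustrative encoding, not taken from the cited source):

```python
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def transpose(pc, semitones):
    """Transpose a pitch class by a number of semitones, modulo 12."""
    return (pc + semitones) % 12
```

Transposing A (class 9) up a perfect fifth (7 semitones) yields class 4, that is, E, with the octave wrap handled automatically by the modulo.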

Computational aspects

Algorithms for modular operations

Modular addition and subtraction are fundamental operations that can be computed directly using the defining property of congruence: for integers $a$, $b$, and modulus $m > 0$, $(a + b) \bmod m = ((a \bmod m) + (b \bmod m)) \bmod m$, and similarly $(a - b) \bmod m = ((a \bmod m) - (b \bmod m)) \bmod m$, where negative results are adjusted by adding $m$ to ensure non-negativity. Reducing operands modulo $m$ beforehand keeps intermediate values small, avoiding overflow in computations with large integers, as described in standard algorithms for multiple-precision arithmetic.[56]

For modular multiplication of large integers, the binary method, also known as the Russian peasant algorithm, avoids computing the full product $a \times b$ before reduction. This approach decomposes $a$ into its binary representation and iteratively doubles $b$ (modulo $m$) while halving $a$, adding the current $b$ to the result whenever $a$ is odd; all operations are reduced modulo $m$ to prevent overflow. The algorithm requires $O(\log a)$ additions and shifts, making it efficient for big integers, and has historical roots traceable to ancient Egyptian mathematics around 1800 B.C.E., with modern formalization in computational texts.[57]

Modular exponentiation, computing $a^b \bmod m$ for large $b$, employs the square-and-multiply algorithm, which leverages the binary expansion of $b$ to reduce the number of multiplications.
Initialize the result as 1; for each bit of $b$ from most to least significant, square the current result modulo $m$ and, if the bit is 1, multiply by $a$ modulo $m$. This performs one squaring per bit and an extra multiplication per 1-bit, totalling $O(\log b)$ modular multiplications.[56] The method is ancient, with related exponentiation ideas dating back over 2,000 years, and was analyzed in detail for computational efficiency in seminumerical contexts.[58]

To compute the greatest common divisor $\gcd(a, m)$ and, if $\gcd(a, m) = 1$, the modular inverse of $a$ modulo $m$ via Bézout's identity ($ax + my = 1$ for integers $x, y$), the extended Euclidean algorithm applies successive divisions while back-substituting to find the coefficients. Start with the Euclidean algorithm steps: repeatedly replace $(a, m)$ by $(m, a \bmod m)$ until the remainder is 0, yielding the gcd; then reconstruct $x$ and $y$ from the quotients, where the inverse is $x \bmod m$.[56] This extension of Euclid's original algorithm from around 300 B.C.E. enables solving linear Diophantine equations efficiently in $O(\log m)$ division steps.[59]

For scenarios involving repeated modular multiplications, such as in cryptographic protocols, Montgomery multiplication optimizes by representing numbers in a special "Montgomery form" $xR \bmod N$, where $R$ is a power of the radix coprime to the modulus $N > 1$. The core REDC operation computes $z = (T + ((T \bmod R)\,N' \bmod R)\,N)/R$ from an input $T$, where $N'$ satisfies $N N' \equiv -1 \pmod{R}$; the numerator is divisible by $R$ by construction, so no division by $N$ is needed, and after a final conditional subtraction $z \equiv T R^{-1} \pmod{N}$. Converting inputs to Montgomery form allows a sequence of multiplications ending with a final REDC to recover the result.[60] Introduced in 1985, this method replaces trial division by cheap shifts and masks, making it particularly advantageous for multi-precision arithmetic with a fixed modulus.[60]
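Three of the algorithms above (Russian peasant multiplication, square-and-multiply exponentiation, and the extended Euclidean inverse) can be sketched as follows; this is an illustrative implementation, since Python's built-in `pow` already provides the latter two:

```python
def mulmod(a, b, m):
    """Russian peasant multiplication: never forms the full product,
    so intermediate values stay below 2m."""
    a %= m
    result = 0
    while b > 0:
        if b & 1:                       # add current a for each 1-bit of b
            result = (result + a) % m
        a = (a + a) % m                 # double a modulo m
        b >>= 1                         # halve b
    return result

def modexp(a, e, m):
    """Square-and-multiply: one squaring per bit of e, plus one
    multiplication per 1-bit."""
    result = 1
    a %= m
    while e > 0:
        if e & 1:
            result = mulmod(result, a, m)
        a = mulmod(a, a, m)
        e >>= 1
    return result

def modinv(a, m):
    """Modular inverse via the extended Euclidean algorithm;
    raises ValueError when gcd(a, m) != 1."""
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("no inverse: gcd(a, m) != 1")
    return old_s % m
```

The invariant maintained by `modinv` is that `old_s * a` is congruent to `old_r` modulo `m` at every step, so when the remainder reaches 1 the coefficient is the inverse.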

Complexity of modular computations

Modular addition and subtraction of two integers modulo $m$, where $m$ has $n = \lceil \log_2 m \rceil$ bits, can be performed in $O(n)$ bit operations. For multiplication, naive methods take $O(n^2)$ bit operations, and optimized algorithms like Karatsuba reduce this to $O(n^{1.585})$, though in practice, for cryptographic sizes, the complexity is often treated as $O(n^2)$ without specialized hardware.[61]

Modular exponentiation, computing $a^e \bmod m$, uses the square-and-multiply algorithm, which requires $O(\log e)$ modular multiplications, giving an overall cost of $O(n^2 \log e)$ bit operations with schoolbook multiplication.[62] This efficiency makes it suitable for the large exponents used in cryptography. The space complexity of basic modular operations such as addition, subtraction, and multiplication is $O(1)$ auxiliary space when operands fit in fixed-size words; for arbitrarily large $m$, big-integer representations require $O(n)$ space to store the numbers themselves.[63]

Computing the modular inverse of $a$ modulo $m$, when $\gcd(a, m) = 1$, via the extended Euclidean algorithm takes $O(n)$ division steps, since the number of steps is proportional to the number of bits in $m$; when $\gcd(a, m) \neq 1$ no inverse exists, and the algorithm instead yields the gcd.[64]

Factoring an integer $n$ (i.e., finding its non-trivial factors) is believed to be computationally hard in the classical setting, with no known polynomial-time algorithm for general $n$, forming the basis for the security of systems like RSA where $n$ is a product of two large primes.[65] The best classical algorithms, such as the general number field sieve, run in subexponential time $\exp(O((\log n)^{1/3} (\log \log n)^{2/3}))$.[65] In the quantum setting, Shor's algorithm factors $n$ in time polynomial in its bit length (roughly cubic with schoolbook arithmetic), by leveraging quantum modular exponentiation within a period-finding subroutine to efficiently compute the order of an element modulo $n$. This contrasts sharply with the classical bounds, highlighting the potential vulnerability of factorization-based cryptography to quantum computation.

References
