Fermat primality test
from Wikipedia

The Fermat primality test is a probabilistic test to determine whether a number is a probable prime.

Concept

Fermat's little theorem states that if $p$ is prime and $a$ is not divisible by $p$, then
$$a^{p-1} \equiv 1 \pmod{p}.$$

To test whether $p$ is prime, we can pick random integers $a$ not divisible by $p$ and see whether the congruence holds. If it does not hold for a value of $a$, then $p$ is composite. This congruence is unlikely to hold for a random $a$ if $p$ is composite.[1] Therefore, if the congruence does hold for one or more values of $a$, then we say that $p$ is probably prime.

However, note that the above congruence holds trivially for $a \equiv 1 \pmod{p}$, because the congruence relation is compatible with exponentiation. It also holds trivially for $a \equiv -1 \pmod{p}$ if $p$ is odd, for the same reason. That is why one usually chooses a random $a$ in the interval $1 < a < p - 1$.
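
One trial of this check can be sketched in a few lines of Python (a minimal sketch; the function name fermat_trial is illustrative, and $p$ is assumed to be an odd integer greater than 3). Python's built-in three-argument pow performs fast modular exponentiation:

    import random

    def fermat_trial(p):
        # One trial: pick a random base in [2, p - 2], avoiding the
        # trivial bases 1 and p - 1 discussed above.
        a = random.randint(2, p - 2)
        return pow(a, p - 1, p) == 1   # False proves p composite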

Any $a$ such that
$$a^{n-1} \equiv 1 \pmod{n}$$
when $n$ is composite is known as a Fermat liar. In this case $n$ is called a Fermat pseudoprime to base $a$.

If we do pick an $a$ such that
$$a^{n-1} \not\equiv 1 \pmod{n},$$
then $a$ is known as a Fermat witness for the compositeness of $n$.

Example

Suppose we wish to determine whether $n = 221$ is prime. Randomly pick $1 < a < 220$, say $a = 38$. We check the above congruence and find that it holds:
$$38^{220} \equiv 1 \pmod{221}.$$

Either 221 is prime, or 38 is a Fermat liar, so we take another $a$, say 24:
$$24^{220} \equiv 81 \not\equiv 1 \pmod{221}.$$

So 221 is composite and 38 was indeed a Fermat liar. Furthermore, 24 is a Fermat witness for the compositeness of 221.
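
These two computations are easy to reproduce, for example with Python's built-in three-argument pow:

    print(pow(38, 220, 221))   # 1  -> the congruence holds; 38 is a Fermat liar
    print(pow(24, 220, 221))   # 81 -> 24 is a Fermat witness; 221 is composite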

Algorithm

The algorithm can be written as follows:

Inputs: n: a value to test for primality, n>3; k: a parameter that determines the number of times to test for primality
Output: composite if n is composite, otherwise probably prime
Repeat k times:
Pick a randomly in the range [2, n − 2]
If $a^{n-1} \not\equiv 1 \pmod{n}$, then return composite
If composite is never returned: return probably prime

The $a$ values 1 and $n - 1$ are not used, since $1^{n-1} \equiv 1 \pmod{n}$ holds for all $n$ and $(n-1)^{n-1} \equiv 1 \pmod{n}$ holds for all odd $n$, so testing them adds no value.
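
A direct rendering of this algorithm in Python might look as follows (a sketch under the same assumption $n > 3$; the name fermat_test is illustrative):

    import random

    def fermat_test(n, k):
        for _ in range(k):
            a = random.randint(2, n - 2)    # pick a randomly in [2, n - 2]
            if pow(a, n - 1, n) != 1:       # a is a Fermat witness
                return "composite"
        return "probably prime"             # composite was never returned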

Complexity

Using fast algorithms for modular exponentiation and multiprecision multiplication, the running time of this algorithm is $O(k \log^2 n \log\log n) = \tilde{O}(k \log^2 n)$, where $k$ is the number of times we test a random $a$, and $n$ is the value we want to test for primality; see Miller–Rabin primality test for details.

Flaw

There are infinitely many Fermat pseudoprimes to any given base $a > 1$.[1]: Theorem 1  Even worse, there are infinitely many Carmichael numbers.[2] These are numbers for which all values of $a$ with $\gcd(a, n) = 1$ are Fermat liars. For these numbers, repeated application of the Fermat primality test performs the same as a simple random search for factors. While Carmichael numbers are substantially rarer than prime numbers (Erdős' upper bound for the number of Carmichael numbers[3] is lower than the prime-counting function $n/\log n$), there are enough of them that Fermat's primality test is not often used in the above form. Instead, more powerful extensions of the Fermat test, such as Baillie–PSW, Miller–Rabin, and Solovay–Strassen, are more commonly used.

In general, if $n$ is a composite number that is not a Carmichael number, then at least half of all
$$a \in (\mathbb{Z}/n\mathbb{Z})^*$$
(i.e. $a$ with $\gcd(a, n) = 1$)
are Fermat witnesses. For proof of this, let $a$ be a Fermat witness and $a_1$, $a_2$, ..., $a_s$ be the Fermat liars. Then
$$(a \cdot a_i)^{n-1} \equiv a^{n-1} \cdot a_i^{n-1} \equiv a^{n-1} \not\equiv 1 \pmod{n},$$
and so all $a \cdot a_i$ for $i = 1, 2, \ldots, s$ are Fermat witnesses. Since the products $a \cdot a_i$ are pairwise distinct modulo $n$, the witnesses are at least as numerous as the liars.
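
This bound is easy to check empirically for small composites. The following brute-force Python sketch (the function name is illustrative, and the approach is feasible only for small $n$) counts liars and witnesses among the bases $2 \le a \le n - 2$ coprime to $n$, using the non-Carmichael composite 221 from the example above:

    from math import gcd

    def liars_and_witnesses(n):
        coprime = [a for a in range(2, n - 1) if gcd(a, n) == 1]
        liars = [a for a in coprime if pow(a, n - 1, n) == 1]
        return len(liars), len(coprime) - len(liars)

    print(liars_and_witnesses(221))   # (14, 176): far more witnesses than liars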

Corollaries

Analogous to the Lucas–Lehmer residue, $a^{n-1} \bmod n$ is called the Fermat residue of $n$ to base $a$. There are a few variants that produce different types of residues,[4] most importantly the strong probable prime (SPRP) residue.[5]

If the residue $r$ for $n$ to base $a$ is known, then for any proper divisor $k$ of $n$, it is possible to perform a quick though weaker primality test on $n/k$. If $n/k$ is prime, by Fermat's little theorem $a^{n/k - 1} \equiv 1 \pmod{n/k}$, and raising this to the $k$-th power gives $a^{n-k} \equiv 1 \pmod{n/k}$. As a result $r \equiv a^{n-1} \equiv a^{k-1} \pmod{n/k}$, which can be much more efficiently checked for values of $k$ much smaller than $n$. (This is the method used by the Great Internet Mersenne Prime Search for testing cofactors.)[4]
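
A sketch of this cofactor check in Python (names are illustrative; $r$ is the stored Fermat residue $a^{n-1} \bmod n$ and $k$ a known proper divisor of $n$):

    def cofactor_check(n, k, a, r):
        c = n // k                          # the cofactor to test
        # If c is prime, r ≡ a^(n-1) ≡ a^(k-1) (mod c) must hold.
        return r % c == pow(a, k - 1, c)    # False -> c is definitely composite

    # Example: n = 221, known factor k = 13, cofactor c = 17 (prime).
    r = pow(2, 220, 221)
    print(cofactor_check(221, 13, 2, r))    # True: consistent with 17 prime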

An even weaker form of the test can be conducted with a truncated residue $r \bmod 2^t$ if storage space for the full residue is a concern, so long as $n/k$ is odd and $k < 2^t$. Let $s = a^{k-1} \bmod n$; if $n/k$ is prime, then $r \equiv s \pmod{n/k}$, so $r - s = w \cdot (n/k)$ for some $w$. Take this mod $2^t$ and we have $w \equiv (r - s)(n/k)^{-1} \pmod{2^t}$. $n/k$ is composite if the $w$ recovered this way is inconsistent with $|w| < k$. A similar test can be done on truncated Lucas–Lehmer residues.[6]

It is also true that if all bases $a$ in the interval $2 \le a \le \sqrt{n}$ are systematically checked, each demonstrating congruence to 1, then the test is effectively deterministic and we may say that $n$ is definitely prime. At cursory glance one may assume that such an $n$ could still be either a prime or a Carmichael number, but if the $a$-values are checked systematically, one of them is bound to be a prime factor of a composite $n$ at some point before $\sqrt{n}$ is reached, making $a$ and $n$ not coprime and failing the congruence, even if $n$ is a Carmichael number; Carmichael numbers do fail Fermat's congruence to 1 when the base is not coprime to them. Though this is more computationally expensive than brute-force divisibility checking (trial division), it is of theoretical value.
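
A small-scale demonstration in Python (practical only for tiny $n$, as noted above): systematically checking every base up to $\sqrt{n}$ catches even Carmichael numbers such as 561:

    from math import isqrt

    def exhaustive_fermat(n):
        # Check every base a with 2 <= a <= floor(sqrt(n)).
        for a in range(2, isqrt(n) + 1):
            if pow(a, n - 1, n) != 1:
                return "composite"
        return "definitely prime"

    print(exhaustive_fermat(561))   # composite: a = 3 divides 561 = 3*11*17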

Applications

As mentioned above, most applications use a Miller–Rabin or Baillie–PSW test for primality. Sometimes a Fermat test (along with some trial division by small primes) is performed first to improve performance. GMP since version 3.0 uses a base-210 Fermat test after trial division and before running Miller–Rabin tests. Libgcrypt uses a similar process with base 2 for the Fermat test, but OpenSSL does not.

In practice with most big number libraries such as GMP, the Fermat test is not noticeably faster than a Miller–Rabin test, and can be slower for many inputs.[7]

As an exception, OpenPFGW uses only the Fermat test for probable prime testing. The program is typically used with multi-thousand-digit inputs with a goal of maximum speed for very large inputs. Another well-known program that relies only on the Fermat test is PGP, where it is used only for testing self-generated large random values (an open-source counterpart, GNU Privacy Guard, uses a Fermat pretest followed by Miller–Rabin tests).

Prime number search projects

Internet volunteer computing projects such as Great Internet Mersenne Prime Search (GIMPS) and PrimeGrid use the Fermat primality test because there is an efficient proof scheme (Gerbicz-Li) for modular exponentiation. Selected intermediate results, combined with a verifiable delay function, are used to generate a "proof" file for verifying the authenticity and correctness of the computation, protecting against both hardware error and malicious actors. This proof is hard to forge given a low order assumption. The original form of the verification (Gerbicz-Pietrzak) only worked with n being derivable from powers of 2, such as in the case of Mersenne primes, Mersenne cofactors, and Proth primes; Li's modification generalizes it to any n.[8]

GIMPS in particular tests Mersenne primes and Mersenne cofactors. The default is to use a = 3, since all Mersenne numbers would pass the test with a = 2.
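
The base-2 behavior is easy to see on the composite Mersenne number $M_{11} = 2047 = 23 \times 89$ (a small illustration in Python):

    m = 2**11 - 1                     # 2047, composite
    print(pow(2, m - 1, m) == 1)      # True: every Mersenne number passes base 2
    print(pow(3, m - 1, m) == 1)      # False: base 3 exposes 2047 as composite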

from Grokipedia
The Fermat primality test is a probabilistic algorithm used to assess whether a given positive integer $n > 1$ is prime by leveraging Fermat's little theorem, which states that if $n$ is prime and $a$ is an integer not divisible by $n$, then $a^{n-1} \equiv 1 \pmod{n}$. The test operates by selecting one or more random bases $a$ (typically between 2 and $n-2$) coprime to $n$, computing $a^{n-1} \bmod n$, and checking whether the result equals 1; if it does not for some $a$, then $n$ is definitively composite, as this violates the theorem for primes. Conversely, if the congruence holds for multiple independent trials (say, $t$ bases), $n$ is declared a probable prime with high confidence, as the probability of a composite $n$ passing decreases exponentially with $t$, often to less than $1/2^t$ for randomly chosen bases, though it provides no absolute proof of primality. Named after the 17th-century French mathematician Pierre de Fermat, who formulated the underlying theorem in correspondence around 1640, the test was formalized as a practical primality procedure in the 1970s amid growing needs for efficient prime generation in cryptography and computing. It is computationally efficient, relying on fast algorithms like binary exponentiation, which run in $O(\log n)$ multiplications modulo $n$, making it suitable for testing large numbers up to hundreds of digits. However, its reliability is undermined by Fermat pseudoprimes (composite numbers that pass the test for specific bases) and especially Carmichael numbers, a class of square-free composites that satisfy the congruence for all bases coprime to $n$, such as 561 ($3 \times 11 \times 17$) or 1105; infinitely many Carmichael numbers exist, rendering the test non-deterministic without additional checks. Despite these limitations, the Fermat test remains a foundational tool in probabilistic primality testing, often serving as a quick initial filter before more robust methods like the Miller-Rabin test (a strengthened variant that resists Carmichael numbers) or deterministic algorithms such as the AKS primality test. Its simplicity has influenced applications in cryptography and computational number theory, where false positives can be mitigated by iterating with diverse bases or combining with trial division for small factors.

Background

Fermat's Little Theorem

Fermat's Little Theorem states that if $p$ is a prime and $a$ is an integer such that $\gcd(a, p) = 1$, then $a^{p-1} \equiv 1 \pmod{p}$. This congruence highlights a fundamental property of primes in modular arithmetic, where the exponent $p-1$ is the order of the multiplicative group modulo $p$. The theorem was first proposed by Pierre de Fermat in a 1640 letter to Frénicle de Bessy, though Fermat provided no proof and claimed only to have discovered one. The first published proof appeared in 1736 from Leonhard Euler, who used induction on the binomial expansion of $(1 + 1)^p$. Euler later offered additional proofs, including ones aligning with emerging group-theoretic ideas. A modern proof relies on group theory: consider the multiplicative group $G = (\mathbb{Z}/p\mathbb{Z})^\times$, which consists of the integers from 1 to $p-1$ under multiplication modulo $p$ and has order $|G| = p-1$. For $a$ coprime to $p$, $a$ generates a subgroup $H = \langle a \rangle$ of $G$, where the order $k$ of $a$ is the smallest positive integer such that $a^k \equiv 1 \pmod{p}$. By Lagrange's theorem, $k$ divides $|G| = p-1$, so $p-1 = k \cdot n$ for some integer $n$, and thus $a^{p-1} = (a^k)^n \equiv 1^n \equiv 1 \pmod{p}$. Special cases clarify the theorem's scope. For $a = 1$, the congruence holds trivially since $1^{p-1} \equiv 1 \pmod{p}$. However, if $\gcd(a, p) > 1$, then since $p$ is prime this implies $p$ divides $a$, so $a \equiv 0 \pmod{p}$ and $a^{p-1} \equiv 0 \not\equiv 1 \pmod{p}$; the theorem requires coprimality. The theorem underpins efficient modular exponentiation, as exponents can be reduced modulo $p-1$ when computing powers of bases coprime to $p$, enabling fast algorithms like binary exponentiation that are essential for primality testing.
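
The theorem is straightforward to verify numerically for a small prime; a brute-force check in Python:

    p = 97
    # Every base 1 <= a <= p - 1 satisfies a^(p-1) ≡ 1 (mod p) when p is prime.
    assert all(pow(a, p - 1, p) == 1 for a in range(1, p))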

Overview of Probabilistic Primality Tests

Primality testing involves algorithms designed to determine whether a given positive integer is prime or composite, a fundamental problem in number theory with applications in cryptography and computational mathematics. Efficient primality tests are essential for identifying large primes required in secure systems, such as generating keys for public-key encryption. Deterministic primality tests, which provide absolute certainty without randomization, face significant challenges for large numbers. Simple methods like trial division, which check divisibility up to the square root of the number, have a time complexity of $O(\sqrt{n})$, rendering them impractical for numbers with hundreds of bits, as in cryptographic contexts. Even advanced deterministic algorithms, such as the AKS test introduced in 2002, achieve polynomial time (originally $O(\log^{12+\epsilon} n)$ and later improved to $O(\log^{6} n)$) but remain computationally expensive in practice due to high constants and large memory requirements. Probabilistic primality tests address these limitations by using randomization to achieve high confidence in results with far greater efficiency. These tests select random witnesses (bases) and verify number-theoretic conditions that primes satisfy, declaring a number "probably prime" if it passes multiple trials; the probability of error (misidentifying a composite as prime) decreases exponentially, typically bounded by less than $(1/2)^k$ for $k$ independent trials. This approach ensures that for practical purposes, such as in cryptography, the error rate can be made negligibly small, like $2^{-80}$, with modest computational overhead. The development of probabilistic tests gained prominence in the 1970s, driven by the emergence of public-key cryptography, which necessitated fast primality verification for enormous integers. Early methods, including the Fermat test based on Fermat's little theorem, were among the first to exploit randomness for speed, predating more robust variants like Solovay-Strassen (1977). Their key advantage lies in leveraging efficient modular exponentiation, with complexity around $O(\log^3 n)$ per trial, enabling practical testing of numbers up to thousands of bits in seconds on modern hardware, vastly outperforming deterministic alternatives for real-world applications.

Description of the Test

Core Principle

The Fermat primality test is grounded in Fermat's little theorem, which states that if $p$ is a prime and $a$ is an integer coprime to $p$, then $a^{p-1} \equiv 1 \pmod{p}$. This theorem provides the foundational congruence that the test leverages to assess the primality of an odd integer $n > 2$. The core principle of the test relies on the contrapositive of the theorem: if there exists an $a$ coprime to $n$ such that $a^{n-1} \not\equiv 1 \pmod{n}$, then $n$ is definitely composite. In this scenario, such an $a$ serves as a witness to the compositeness of $n$. Conversely, if $a^{n-1} \equiv 1 \pmod{n}$ holds for a randomly selected $a$ coprime to $n$, then $n$ passes the test and is considered probably prime; however, if $n$ is actually composite, this $a$ acts as a liar. Bases $a$ for the test are typically chosen from the range $2 \le a < n$ with $\gcd(a, n) = 1$ to ensure coprimality. To increase confidence in the result, multiple distinct bases are tested iteratively, as a single liar does not rule out compositeness. For composite $n$ that are not Carmichael numbers, at least half of the possible bases coprime to $n$ are witnesses, yielding an error probability of at most $1/2$ per test under these conditions.

Testing Procedure and Witnesses

The Fermat primality test begins by handling trivial cases: numbers such as $n = 1$ or even $n > 2$ are immediately determined to be not prime, as they fail basic primality conditions and do not satisfy the hypotheses of Fermat's little theorem. For odd integers $n \ge 3$, the procedure relies on the contrapositive of the theorem, testing whether $n$ behaves like a prime under modular exponentiation. To apply the test, select $k$ random bases $a$ such that $1 < a < n$ and $\gcd(a, n) = 1$, ensuring coprimality to align with the theorem's assumptions. In practice, to avoid computing the gcd beforehand, a random $a$ in $[2, n-2]$ is selected; if $\gcd(a, n) > 1$, then $n$ is composite. For each base, compute $a^{n-1} \bmod n$ using efficient methods like binary exponentiation. If $a^{n-1} \not\equiv 1 \pmod{n}$ for any $a$, then $n$ is definitively composite. If the congruence holds for all $k$ bases, output "probably prime," with the error probability (a false positive for composites) bounded above by $2^{-k}$ for non-Carmichael composites, providing a confidence level of $1 - 2^{-k}$ under those conditions. A base $a$ (with $1 < a < n$ and $\gcd(a, n) = 1$) is termed a Fermat witness to the compositeness of $n$ if $a^{n-1} \not\equiv 1 \pmod{n}$, as this violation proves $n$ is not prime. Conversely, a Fermat liar is a base $a$ for which the congruence holds ($a^{n-1} \equiv 1 \pmod{n}$) despite $n$ being composite, potentially misleading the test into suggesting primality. The proportion of witnesses versus liars determines the test's reliability for a given composite $n$; at least half of the possible bases serve as witnesses for non-Carmichael composites.
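
The procedure just described translates directly into Python (a minimal sketch; the function name and the default $k = 20$ are illustrative, and $n$ is assumed to be an odd integer of at least 5):

    import random
    from math import gcd

    def fermat_probable_prime(n, k=20):
        for _ in range(k):
            a = random.randint(2, n - 2)
            if gcd(a, n) > 1:             # shared factor found: n is composite
                return False
            if pow(a, n - 1, n) != 1:     # a is a Fermat witness
                return False
        return True   # probably prime; error at most 2^-k for non-Carmichael n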

Examples and Illustrations

Basic Example with a Composite Number

Consider the composite number $n = 91$, which factors as $7 \times 13$. To apply the Fermat test, select bases $a$ coprime to 91 and compute $a^{90} \bmod 91$. If the result is not 1 for some such $a$, then $a$ is a Fermat witness proving $n$ composite; if it equals 1 for all tested $a$, $n$ may be prime or a pseudoprime to those bases. First, take $a = 2$ (with $\gcd(2, 91) = 1$). Compute $2^{90} \bmod 91$ using the binary exponentiation method, which efficiently calculates large powers via repeated squaring and multiplication. The binary representation of 90 is $1011010_2$. Initialize result $r = 1$ and base $b = 2$; process the exponent from the least significant bit:
  • Exponent 90 (even): $b \leftarrow 2^2 = 4 \bmod 91$, exponent $\leftarrow 45$.
  • Exponent 45 (odd): $r \leftarrow 1 \times 4 = 4 \bmod 91$, $b \leftarrow 4^2 = 16 \bmod 91$, exponent $\leftarrow 22$.
  • Exponent 22 (even): $b \leftarrow 16^2 = 256 \equiv 74 \bmod 91$ (since $256 - 2 \times 91 = 74$), exponent $\leftarrow 11$.
  • Exponent 11 (odd): $r \leftarrow 4 \times 74 = 296 \equiv 23 \bmod 91$ (since $296 - 3 \times 91 = 23$), $b \leftarrow 74^2 = 5476 \equiv 16 \bmod 91$ (since $5476 - 60 \times 91 = 16$), exponent $\leftarrow 5$.
  • Exponent 5 (odd): $r \leftarrow 23 \times 16 = 368 \equiv 4 \bmod 91$ (since $368 - 4 \times 91 = 4$), $b \leftarrow 16^2 = 256 \equiv 74 \bmod 91$, exponent $\leftarrow 2$.
  • Exponent 2 (even): $b \leftarrow 74^2 = 5476 \equiv 16 \bmod 91$, exponent $\leftarrow 1$.
  • Exponent 1 (odd): $r \leftarrow 4 \times 16 = 64 \bmod 91$, $b \leftarrow 16^2 = 256 \equiv 74 \bmod 91$, exponent $\leftarrow 0$.
Thus, $2^{90} \equiv 64 \bmod 91$, and $64 \not\equiv 1 \bmod 91$, so 2 is a Fermat witness confirming 91 is composite. Now consider $a = 10$ (with $\gcd(10, 91) = 1$). Direct computation via binary exponentiation yields $10^{90} \equiv 1 \bmod 91$, making 10 a Fermat liar that falsely suggests 91 might be prime. This example illustrates the probabilistic nature of the test: a single witness definitively proves compositeness, but the existence of liars (exactly 36 of the 72 coprime bases for 91) requires testing multiple random bases to confidently declare a number prime, as repeated liars reduce but do not eliminate the chance of error.
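All of the above can be confirmed with a few lines of Python:

    from math import gcd

    print(pow(2, 90, 91))     # 64 -> 2 is a Fermat witness; 91 is composite
    print(pow(10, 90, 91))    # 1  -> 10 is a Fermat liar for 91
    liars = [a for a in range(1, 91) if gcd(a, 91) == 1 and pow(a, 90, 91) == 1]
    print(len(liars))         # 36 of the 72 bases coprime to 91 are liars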

Example with a Prime Number

Consider the number $n = 97$, which is prime. To demonstrate the Fermat primality test succeeding on a prime, select bases $a = 5$ and $a = 12$ (both coprime to 97) and verify that $a^{96} \equiv 1 \pmod{97}$ in each case, as required by the test procedure. For $a = 5$, compute $5^{96} \bmod 97$ using repeated squaring, which reduces the exponentiation to $O(\log 96)$ multiplications modulo 97:
  • $5^1 \equiv 5 \pmod{97}$
  • $5^2 \equiv 25 \pmod{97}$
  • $5^4 \equiv 25^2 = 625 \equiv 43 \pmod{97}$ (since $625 - 6 \times 97 = 43$)
  • $5^8 \equiv 43^2 = 1849 \equiv 6 \pmod{97}$ (since $1849 - 19 \times 97 = 6$)
  • $5^{16} \equiv 6^2 = 36 \pmod{97}$
  • $5^{32} \equiv 36^2 = 1296 \equiv 35 \pmod{97}$ (since $1296 - 13 \times 97 = 35$)
  • $5^{64} \equiv 35^2 = 1225 \equiv 61 \pmod{97}$ (since $1225 - 12 \times 97 = 61$)
Since $96 = 64 + 32$, $5^{96} \equiv 5^{64} \times 5^{32} \equiv 61 \times 35 = 2135 \equiv 1 \pmod{97}$ (since $2135 - 22 \times 97 = 1$). The result is 1, so 5 is not a witness against primality. For $a = 12$, the computation simplifies further:
  • $12^1 \equiv 12 \pmod{97}$
  • $12^2 \equiv 144 \equiv 47 \pmod{97}$ (since $144 - 97 = 47$)
  • $12^4 \equiv 47^2 = 2209 \equiv 75 \pmod{97}$ (since $2209 - 22 \times 97 = 75$)
  • $12^8 \equiv 75^2 = 5625 \equiv 96 \pmod{97}$ (since $5625 - 58 \times 97 = -1 \equiv 96$)
  • $12^{16} \equiv 96^2 = 9216 \equiv 1 \pmod{97}$ (since $9216 - 94 \times 97 = 98$ and $98 - 97 = 1$)
Thus, $12^{96} = (12^{16})^6 \equiv 1^6 = 1 \pmod{97}$. Again, the result is 1. This outcome aligns with Fermat's Little Theorem, which guarantees $a^{p-1} \equiv 1 \pmod{p}$ for prime $p$ and $a$ not divisible by $p$, ensuring no witnesses exist for primes, unlike composites, where witnesses may appear probabilistically. Testing multiple bases, as done here, increases confidence in primality by reducing the chance of misleading results from pseudoprimes, though for true primes all valid bases pass. For small $n$ like 97, such computations are efficient and straightforward by hand or with basic software.
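Both verifications reduce to one call each in Python:

    print(pow(5, 96, 97))     # 1
    print(pow(12, 96, 97))    # 1
    # Since 97 is prime, every base coprime to it passes:
    assert all(pow(a, 96, 97) == 1 for a in range(1, 97))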

Algorithm and Implementation

Step-by-Step Algorithm

The Fermat primality test is implemented as a probabilistic algorithm that verifies whether an input $n > 1$ satisfies Fermat's little theorem for randomly selected bases, declaring $n$ composite if it fails for any base, or probably prime after a specified number of successful trials. The algorithm takes two inputs: an integer $n > 1$ to test for primality, and an integer $k \ge 1$ specifying the number of independent trials, with typical values of 10 to 20 used in cryptographic applications to achieve a low error probability (e.g., less than $2^{-20}$). The following pseudocode outlines the core procedure, assuming efficient modular exponentiation for the power computation:

function FermatPrimalityTest(n, k):
    if n < 2: return "Composite"
    if n == 2 or n == 3: return "Prime"
    if n % 2 == 0: return "Composite"
    for i from 1 to k:
        a = random integer in [2, n − 2]
        if gcd(a, n) > 1: return "Composite"
        r = a^(n−1) mod n    // using modular exponentiation
        if r != 1: return "Composite"
    return "Probably Prime"

This procedure first handles trivial cases, then performs $k$ iterations: in each, a random base $a$ is selected, its gcd with $n$ is checked (an optimization that detects shared factors early), and $a^{n-1} \bmod n$ is computed; failure in any trial definitively identifies a composite, while success in all trials suggests probable primality. For simplicity in non-randomized variants or initial screening, fixed small bases such as 2, 3, and 5 can be used instead of random selection, though this weakens the probabilistic guarantees unless more bases are added. The output is either "Composite" (a deterministic declaration, as any failure proves compositeness) or "Probably Prime" (a probabilistic result, with error probability bounded by the choice of $k$ and the proportion of misleading bases). Edge cases include $n = 1$ (always composite), $n = 2$ (prime), and powers of 2 greater than 2 (composite, handled by the even-number check).
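
A runnable Python version of the pseudocode above (a sketch, not a production implementation; the edge-case behavior matches the description):

    import random
    from math import gcd

    def fermat_primality_test(n, k=10):
        if n < 2:
            return "Composite"
        if n in (2, 3):
            return "Prime"
        if n % 2 == 0:
            return "Composite"
        for _ in range(k):
            a = random.randint(2, n - 2)
            if gcd(a, n) > 1:
                return "Composite"
            if pow(a, n - 1, n) != 1:
                return "Composite"
        return "Probably Prime"

    print(fermat_primality_test(97))    # Probably Prime
    print(fermat_primality_test(221))   # Composite (with overwhelming probability)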

Computational Complexity

The computational complexity of the Fermat primality test is primarily determined by the modular exponentiation step, which computes $a^{n-1} \bmod n$ (or an equivalent form) for each chosen base $a$. For a single trial, binary exponentiation requires $O(\log n)$ modular multiplications, each of which takes $O(\log^2 n)$ bit operations using standard algorithms, yielding an overall complexity of $O(\log^3 n)$. With $k$ trials, the total time complexity becomes $O(k \log^3 n)$. Employing advanced multiplication methods, such as those based on the fast Fourier transform, reduces the per-trial complexity to $O(\log^{2+\epsilon} n)$ for any $\epsilon > 0$, or approximately $O(\log^2 n \log \log n)$ in practice, resulting in $O(k \log^2 n \log \log n)$ overall. The space complexity is $O(\log n)$, as the algorithm stores only the input $n$ and intermediate values of comparable bit length. Compared to trial division, which requires $O(\sqrt{n})$ operations and is therefore exponential in the bit length of $n$, the Fermat test runs in polynomial time and scales to the very large inputs encountered in practice.