Dixon's factorization method
In number theory, Dixon's factorization method (also Dixon's random squares method[1] or Dixon's algorithm) is a general-purpose integer factorization algorithm; it is the prototypical factor base method. Unlike other factor base methods, its run-time bound comes with a rigorous proof that does not rely on conjectures about the smoothness properties of the values taken by a polynomial.
The algorithm was designed by John D. Dixon, a mathematician at Carleton University, and was published in 1981.[2]
Basic idea
Dixon's method is based on finding a congruence of squares modulo the integer N that we intend to factor. Fermat's factorization method finds such a congruence by selecting random or pseudo-random x values and hoping that the integer x² mod N is a perfect square (in the integers):

x² ≡ y² (mod N), with x ≢ ±y (mod N).
For example, if N = 84923, then (by starting at 292, the first number greater than √N, and counting up) 505² mod 84923 is 256, the square of 16. So (505 − 16)(505 + 16) ≡ 0 (mod 84923). Computing the greatest common divisor of 505 − 16 and N using Euclid's algorithm gives 163, which is a factor of N.
In practice, selecting random x values will take an impractically long time to find a congruence of squares, since there are only √N squares less than N.
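The scan just described is easy to express directly. The following sketch (Python, illustrative only) walks x upward from the first integer above √N and stops at the first x whose square modulo N is a perfect square; note that for N = 84923 this literal scan actually stops earlier than 505, at x = 342 with 342² mod N = 179², but it reveals the same factor 163:

from math import gcd, isqrt

def congruence_of_squares(N):
    # Walk x = isqrt(N) + 1, isqrt(N) + 2, ... until x^2 mod N is a
    # perfect square y^2 in the integers, then return the pair (x, y).
    x = isqrt(N) + 1
    while True:
        r = x * x % N
        y = isqrt(r)
        if y * y == r:
            return x, y
        x += 1

x, y = congruence_of_squares(84923)
print(x, y, gcd(x - y, 84923))   # 342 179 163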
Dixon's method replaces the condition "is the square of an integer" with the much weaker one "has only small prime factors"; for example, there are 292 squares smaller than 84923; 662 numbers smaller than 84923 whose prime factors are only 2, 3, 5, or 7; and 4767 whose prime factors are all less than 30. (Such numbers are called B-smooth with respect to some bound B.)
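These counts can be reproduced by brute force. A short check (Python, illustrative) counts the numbers below 84923 that factor completely over {2, 3, 5, 7}, and over the primes below 30:

def is_smooth(m, primes):
    # Divide out the allowed primes; m is smooth iff nothing is left over.
    for p in primes:
        while m % p == 0:
            m //= p
    return m == 1

print(sum(is_smooth(m, [2, 3, 5, 7]) for m in range(1, 84923)))   # 662 (the count quoted above)
primes_below_30 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(sum(is_smooth(m, primes_below_30) for m in range(1, 84923)))   # 4767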
If there are many numbers a₁, …, a_n whose squares can be factorized as a_i² mod N = ∏_j p_j^(e_ij) for a fixed set of small primes p_j, linear algebra modulo 2 on the matrix of exponents [e_ij] will give a subset of the a_i whose squares combine to a product of small primes to an even power; that is, a subset of the a_i whose squares multiply to the square of a (hopefully different) number mod N.
Method
Suppose the composite number N is being factored. A bound B is chosen, and the factor base P is identified: the set of all primes less than or equal to B. Next, positive integers z are sought such that z² mod N is B-smooth, so that we can write, for suitable exponents a_i,

z² mod N = ∏_{p_i ∈ P} p_i^(a_i).
When enough of these relations have been generated (it is generally sufficient that the number of relations be a few more than the size of P), the methods of linear algebra, such as Gaussian elimination, can be used to multiply together these various relations in such a way that the exponents of the primes on the right-hand side are all even:

(z₁ z₂ ⋯ z_k)² ≡ ∏_{p_i ∈ P} p_i^(a_{i,1} + a_{i,2} + ⋯ + a_{i,k}) (mod N), with every exponent sum even.
This yields a congruence of squares of the form a2 ≡ b2 (mod N), which can be turned into a factorization of N, N = gcd(a + b, N) × (N/gcd(a + b, N)). This factorization might turn out to be trivial (i.e. N = N × 1), which can only happen if a ≡ ±b (mod N), in which case another try must be made with a different combination of relations; but if a nontrivial pair of factors of N is reached, the algorithm terminates.
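As a concrete illustration of this final step, the sketch below (Python, illustrative) combines the two relations used in the step-by-step example later in this article, 513² ≡ 8400 and 537² ≡ 33600 (mod 84923), whose exponent vectors sum to even values:

from math import gcd

N = 84923
# 513^2 ≡ 8400  = 2^4 * 3 * 5^2 * 7  (mod N)
# 537^2 ≡ 33600 = 2^6 * 3 * 5^2 * 7  (mod N)
a = (513 * 537) % N              # a = 20712
b = (2**5 * 3 * 5**2 * 7) % N    # square root of the combined right side: b = 16800
assert (a * a - b * b) % N == 0  # a^2 ≡ b^2 (mod N)
print(gcd(a - b, N), gcd(a + b, N))   # 163 521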
Pseudocode
This section is adapted from Dixon (1981).[2]
Dixon's algorithm
Initialization. Let L be a list of integers in the range [1, n], and let P = {p1, ..., ph} be the list of the h primes ≤ v. Let B and Z be initially empty lists (Z will be indexed by B).
Step 1. If L is empty, exit (algorithm unsuccessful). Otherwise, take the first term z from L, remove it from L, and proceed to Step 2.
Step 2. Compute w as the least positive remainder of z² mod n. Factor w as:

w = p₁^(a₁) ⋯ p_h^(a_h) · w′,

where w′ has no factor in P. If w′ = 1, proceed to Step 3; otherwise, return to Step 1.
Step 3. Let a ← (a1, ..., ah). Add a to B and z to Z. If B has at most h elements, return to Step 1; otherwise, proceed to Step 4.
Step 4. Find the first vector c in B that is linearly dependent (mod 2) on earlier vectors in B. Remove c from B and the corresponding z from Z. Compute coefficients ε_b ∈ {0, 1} such that:

c ≡ Σ_{b ∈ B} ε_b b (mod 2).

Define:

(d₁, …, d_h) = ½ (c + Σ_{b ∈ B} ε_b b).

Proceed to Step 5.
Step 5. Compute:

x = z_c · ∏_{b ∈ B} z_b^(ε_b) mod n, y = p₁^(d₁) ⋯ p_h^(d_h) mod n,

so that:

x² ≡ y² (mod n).

If x ≡ y (mod n) or x ≡ −y (mod n), return to Step 1. Otherwise, return:

gcd(x + y, n),

which provides a nontrivial factor of n, and terminate successfully.
Step-by-step example: factorizing n = 84923 using Dixon's algorithm
This example is adapted from the LeetArxiv Substack walkthrough.[3]
- Initialization:
- Define a list of numbers L, ranging from 1 to 84923: L = {1, 2, …, 84923}.
- Define a value v, which is the smoothness bound: v = 7.
- Define a list P containing all the prime numbers less than or equal to v: P = {2, 3, 5, 7}.
- Define B and Z, two empty lists. B is a list of exponent vectors, while Z is a list of accepted integers.
- Step 1: Iterating values
- Write a for loop that indexes the list L. Each element in L is indexed as z. The for loop exits at the end of the list.
int n = 84923;
for (int i = 1; i <= n; i++) {
    int z = i;
}
- Step 2: Computing z² mod n and its v-smooth prime factorization
- To proceed, compute w = z² mod n for each z, then express the result as a prime factorization over P.
- This step continues for all values of z in the range.
- Step 3: If w is 7-smooth, then append its exponent vector to list B and append z to list Z.
- Step 4: This step is split into two parts.
- Part 1: Reduce the vectors in B modulo 2.
- Part 2: Check if any row combinations of B sum to even numbers.
- For example, summing Row 2 and Row 3 gives us a vector of even numbers.
- a₂ = (4, 1, 2, 1) and a₃ = (6, 1, 2, 1), then a₂ + a₃ = (10, 2, 4, 2).
- Step 5: This step is split into four parts.
- Part 1 (Finding x): Multiply the corresponding z values for the rows found in Step 4. This gives us x.
- For Row 2, we had z = 513, with 513² mod 84923 = 8400 = 2⁴ · 3 · 5² · 7.
- For Row 3, we had z = 537, with 537² mod 84923 = 33600 = 2⁶ · 3 · 5² · 7.
- Thus, we find x: x = 513 × 537 = 275481, and 275481 mod 84923 = 20712.
- Part 2 (Finding y): Multiply the corresponding smooth factorizations for the rows found in Step 4. Then find the square root. This gives us y: y = √(8400 × 33600) = √282240000 = 16800.
- Part 3 (Finding x + y and x − y), where x = 275481 and y = 16800: x + y = 292281 and x − y = 258681.
- Part 4: Compute GCD(x + y, n) and GCD(x − y, n), where n = 84923: GCD(292281, 84923) = 521 and GCD(258681, 84923) = 163.
A quick check shows 163 × 521 = 84923, so both prime factors have been recovered (the sketch below verifies this arithmetic).
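These last two steps can be checked mechanically; a few lines of Python (illustrative) reproduce the values above:

from math import gcd, isqrt

n = 84923
x = 513 * 537                 # 275481, which is 20712 mod n
y = isqrt(8400 * 33600)       # 16800
assert x + y == 292281 and x - y == 258681
print(gcd(x + y, n), gcd(x - y, n))   # 521 163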
Optimizations
The quadratic sieve is an optimization of Dixon's method. It selects values of x close to the square root of N such that x² modulo N is small, thereby greatly increasing the chance of obtaining a smooth number.
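The effect is easy to see numerically. In the sketch below (Python, illustrative), the values x² − N for x just above √N start near 2√N and grow slowly, far smaller than typical residues of random squares:

from math import isqrt

N = 84923
m = isqrt(N) + 1                 # 292
for x in range(m, m + 5):
    print(x, x * x - N)          # 341, 926, 1513, 2102, 2693: all tiny next to N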
Other ways to optimize Dixon's method include using a better algorithm to solve the matrix equation, taking advantage of the sparsity of the matrix: a number z cannot have more than log₂ N factors, so each row of the matrix is almost all zeros. In practice, the block Lanczos algorithm is often used. Also, the size of the factor base must be chosen carefully: if it is too small, it will be difficult to find numbers that factorize completely over it, and if it is too large, more relations will have to be collected.
A more sophisticated analysis, using the approximation that a number has all its prime factors less than N^(1/a) with probability about a^(−a) (an approximation to the Dickman–de Bruijn function), indicates that choosing too small a factor base is much worse than too large, and that the ideal factor base size is some power of exp(√(log N · log log N)).
The optimal complexity of Dixon's method is

O(exp(2√2 · √(log n · log log n)))

in big-O notation, or

L_n[1/2, 2√2]

in L-notation.
References
[edit]- ^ Kleinjung, Thorsten; et al. (2010). "Factorization of a 768-Bit RSA Modulus". Advances in Cryptology – CRYPTO 2010. Lecture Notes in Computer Science. Vol. 6223. pp. 333–350. doi:10.1007/978-3-642-14623-7_18. ISBN 978-3-642-14622-0. S2CID 11556080.
- ^ Dixon, J. D. (1981). "Asymptotically fast factorization of integers" (PDF). Math. Comp. 36 (153): 255–260. doi:10.1090/S0025-5718-1981-0595059-1. JSTOR 2007743.
- ^ Kibicho, Murage (2025). Hand-Written Paper Implementation: Asymptotically Fast Factorization of Integers.
Dixon's factorization method
Introduction
Overview
Dixon's factorization method is a probabilistic algorithm for integer factorization, designed to find non-trivial factors of a composite integer n by identifying relations among smooth numbers modulo n. Developed by John D. Dixon in 1981, it represents a foundational factor base approach that operates in subexponential time, making it suitable for factoring large semiprimes without relying on special form assumptions about n. The core principle relies on selecting random integers z and computing z² mod n, then factoring these residues completely over a carefully chosen factor base of small primes to obtain exponent vectors. Dependencies among these vectors are solved via linear algebra over the field GF(2), yielding a combination that produces a perfect square congruent to another square modulo n, from which factors are derived using the greatest common divisor. This process exploits the higher density of smooth values in the residues compared to random integers, enabling efficient relation collection.

In the 1980s, the method provided an efficient bridge between trial division and advanced sieving techniques like the quadratic sieve, feasible for relatively small composites up to around 20-30 digits on contemporary hardware, though its theoretical complexity of exp((c + o(1))√(ln n · ln ln n)) for some constant c highlighted its asymptotic advantages. The primary output is a pair of non-trivial factors p and q such that n = pq, assuming n is semiprime, with a high probability of success upon finding a suitable square congruence.

Historical development
Dixon's factorization method was invented by mathematician John D. Dixon at Carleton University and first published in 1981 in the paper "Asymptotically fast factorization of integers," appearing in Mathematics of Computation. The method emerged in the context of advancing integer factorization techniques amid the rise of public-key cryptography, particularly following the introduction of the RSA cryptosystem in 1978. It addressed limitations in prior approaches, such as the continued fraction factorization algorithm developed by Morrison and Brillhart in 1975, by providing the first probabilistic factoring algorithm with a rigorously proven subexponential expected running time of exp((β + o(1))√(ln n · ln ln n)), where β = 2√2. This marked a significant theoretical breakthrough, shifting from heuristic subexponential methods to one with formal guarantees.[1][3]

In its early years, Dixon's method had a profound influence on the development of more efficient factorization algorithms, notably inspiring the quadratic sieve introduced by Pomerance in 1981. The quadratic sieve, which improved upon Dixon's random squares approach by using sieving techniques for relation generation, achieved practical success in computational challenges; for instance, it was employed in the 1994 factorization of the 129-digit RSA-129 challenge number by a distributed team led by Atkins. Although RSA-129 was not factored using Dixon's method itself, this application underscored the foundational impact of Dixon's ideas on subsequent general-purpose factoring efforts.[3]

Despite its theoretical advances, Dixon's original algorithm faced significant practical limitations in the 1980s due to the era's computational constraints, including high storage requirements for the factor base and relations (the exponent matrix alone requires on the order of h² entries, where h is the number of relations needed) and intensive linear algebra computations. As noted in the original paper, the method was "more of theoretical than practical interest" at the time, with implementations feasible only for relatively small composites, typically up to around 20-30 decimal digits.[1]

Mathematical prerequisites
Smooth numbers and B-smoothness
A B-smooth number, also known as a smooth number with smoothness bound B, is a positive integer whose prime factors are all less than or equal to B. Equivalently, it is a number m whose largest prime factor (denoted P⁺(m)) satisfies P⁺(m) ≤ B. The density of B-smooth numbers up to a given n is a key property in analytic number theory, governing their distribution and frequency. Let ψ(n, B) denote the number of B-smooth positive integers ≤ n. Asymptotically, ψ(n, B) ~ n · ρ(u), where ρ(u) is the Dickman-de Bruijn function, which estimates the proportion of B-smooth numbers up to n when u = log n / log B.[4]

The function ρ(u) is defined recursively: ρ(u) = 1 for 0 ≤ u ≤ 1, and for u > 1 it satisfies the delay-differential equation u ρ′(u) + ρ(u − 1) = 0, or equivalently, in integral form, ρ(u) = (1/u) ∫ from u−1 to u of ρ(t) dt. This representation arises from considering the probabilistic model of generating random integers via products of primes, leading to the asymptotic behavior through renewal theory and the prime number theorem; for large u, ρ(u) = u^(−u(1+o(1))), capturing the rapid decrease in density as B becomes small relative to n.[4]

In Dixon's factorization method, B-smoothness plays a foundational role by ensuring that computed values, such as k² mod n for random k, can be fully factored using only small primes up to B, thus producing relations whose prime exponents form vectors over the factor base for subsequent linear algebra.[1] This requirement allows the accumulation of sufficient relations to identify a nontrivial kernel element, ultimately yielding a factorization of n, with the smoothness probability dictating the number of trials needed.[1] The factor base consists of small primes ≤ B, providing the algebraic structure for these relations (detailed in the factor bases section).
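The u^(−u) behavior is visible even in small experiments. The sketch below (Python; the values of n and B are arbitrary illustrative choices) compares the measured proportion of B-smooth integers up to n with the rough u^(−u) heuristic:

from math import log

def smooth_upto(m, B):
    # Trial division by every integer up to B; composites never divide
    # once their prime factors have been removed, so this is safe (if slow).
    for p in range(2, B + 1):
        while m % p == 0:
            m //= p
        if m == 1:
            return True
    return m == 1

n, B = 10**5, 50
u = log(n) / log(B)
density = sum(smooth_upto(m, B) for m in range(1, n + 1)) / n
print(density, u ** -u)   # the two agree in rough order of magnitude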
Factor bases
In Dixon's factorization method, the factor base is defined as a finite set of small primes P = {p₁, p₂, …, p_h}, typically consisting of the first h primes not exceeding a smoothness bound B, selected to balance the probability of finding smooth values against the cost of factorization checks.[1] The construction of P involves selecting such primes, with the set sometimes augmented by −1 to account for potential negative signs in factorizations.[1] The size h of the factor base satisfies h ≈ π(B), which is L(n)^(1/2 + o(1)) for the standard parameter choice, where L(n) = exp(√(ln n · ln ln n)) denotes the standard subexponential function used to express the asymptotic behavior of quantities in analytic number theory and factorization algorithms.[1] This structure ensures that any B-smooth number modulo n can be uniquely expressed as a product of nonnegative integer powers of primes from P; in Dixon's approach, the corresponding exponent vectors, reduced modulo 2, lie in GF(2)^h, enabling the identification of linear dependencies that produce squares modulo n.[1]
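A factor base of this kind is cheap to construct. The helper below (Python, illustrative) sieves the primes up to B and drops any prime dividing n, since such a prime is already a nontrivial factor of n:

def factor_base(B, n):
    # Sieve of Eratosthenes up to B, skipping any prime that divides n.
    is_prime = [True] * (B + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(B**0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, B + 1, p):
                is_prime[q] = False
    return [p for p in range(2, B + 1) if is_prime[p] and n % p != 0]

print(factor_base(7, 84923))   # [2, 3, 5, 7]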
Core algorithm
Selection of factor base and smoothness bound
The selection of the factor base and smoothness bound constitutes a critical initial phase in Dixon's factorization method, aimed at optimizing the efficiency of subsequent relation collection and linear algebra steps. The smoothness bound is asymptotically chosen as B = L(n)^(1/2) = exp((1/2)√(ln n · ln ln n)), where L(n) = exp(√(ln n · ln ln n)), to achieve subexponential runtime performance.[1] The size of the factor base is selected to be approximately h = π(B), ensuring that roughly h + O(1) B-smooth relations can be collected to form a full-rank matrix over the finite field GF(2) for dependency detection.[1] This parameter choice targets a balance where the expected number of smooth values aligns with the dimensionality required for the Gaussian elimination phase.

The factor base consists of the smallest prime numbers up to B, providing a compact set of primes for trial division during smoothness testing (as defined in the Mathematical prerequisites section).[1] Heuristics derived from the prime number theorem estimate π(B) ≈ B / ln B, allowing initial sizing of the factor base before runtime execution.[1] The optimal smoothness bound is derived by minimizing the total runtime, which encompasses the costs of generating and factoring candidate values against the factor base and solving the resulting linear system. This leads to the asymptotic form B = exp((1/2 + o(1))√(ln n · ln ln n)), where the exponent balances the exponential growth in trial division efforts (proportional to n/ψ(n, B), with ψ(n, B) the count of B-smooth numbers up to n) against the cubic cost of linear algebra over h variables.[1] This optimization yields the method's heuristic runtime of exp((2 + o(1))√(ln n · ln ln n)) for large composite n.[3]
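Under this parameterization the bound is simple to evaluate. The sketch below (Python, illustrative; it assumes the L(n)^(1/2) choice described above) prints B for a few input sizes:

from math import exp, log, sqrt

def smoothness_bound(n):
    # B = exp((1/2) * sqrt(ln n * ln ln n)), i.e. L(n)^(1/2)
    return exp(0.5 * sqrt(log(n) * log(log(n))))

for digits in (20, 40, 60):
    n = 10**digits
    print(digits, round(smoothness_bound(n)))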
Generating relations
In the relation generation phase of Dixon's factorization method, random integers z are selected from the interval [1, n − 1], and for each such z, the value w = z² mod n is computed, where n is the integer to be factored. If w = 0 or w = 1, the candidate is discarded as trivial, since these cases do not yield useful factorizations; otherwise, an attempt is made to factor w completely using the primes in the preselected factor base P.[1]

The factoring is performed via trial division by the primes in P, ordered by increasing size, to determine if w is B-smooth (i.e., all its prime factors are at most the smoothness bound B). If w factors fully as w = p₁^(a₁) ⋯ p_h^(a_h) with a_i ≥ 0, the exponent vector a = (a₁, …, a_h) is recorded; for efficiency in the subsequent phase, the exponents are often reduced modulo 2, yielding a vector over GF(2), as only the parity matters for constructing squares. The pair (z, a) is then stored as a relation. If w is not B-smooth (i.e., it has a prime factor larger than B), the candidate is discarded, and a new z is selected.[1][5]

Relations are collected until at least h + 1 are obtained, where h = |P|, ensuring a linear dependence among the exponent vectors. Each valid relation corresponds to the congruence z² ≡ p₁^(a₁) ⋯ p_h^(a_h) (mod n). In vector notation, the factorization satisfies log w = Σᵢ aᵢ log pᵢ, reflecting the logarithmic distribution of prime factors in smooth numbers.[1] The process of discarding non-smooth candidates and retrying is repeated as needed; the expected number of trials to obtain one relation equals the reciprocal of the probability that a random integer in [1, n] is B-smooth, namely n/ψ(n, B), where ψ(n, B) counts the B-smooth integers up to n. Smoothness testing, as detailed in the mathematical prerequisites, relies on the density of smooth numbers to make this phase feasible.[1][5]
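A single trial of this phase is only a few lines. The sketch below (Python, illustrative, with toy parameters) picks a random z, treats gcd(z, n) > 1 as an immediate factor, and trial-divides z² mod n over the base:

import random
from math import gcd

def try_relation(n, P):
    # One random trial: return (z, exponent vector) if z^2 mod n is P-smooth.
    z = random.randrange(2, n)
    if gcd(z, n) != 1:
        return None               # a lucky trial: gcd(z, n) is already a factor
    w = z * z % n
    exponents = []
    for p in P:
        a = 0
        while w % p == 0:
            w //= p
            a += 1
        exponents.append(a)
    return (z, exponents) if w == 1 else None

relations = []
while len(relations) < 5:         # collect a few 7-smooth relations mod 84923
    r = try_relation(84923, [2, 3, 5, 7])
    if r is not None:
        relations.append(r)
print(relations)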
Linear algebra phase
In the linear algebra phase of Dixon's factorization method, the collected relations are processed to identify linear dependencies that yield congruences of squares modulo the composite number n. Each relation corresponds to an integer z_i such that z_i² ≡ ∏_j p_j^(e_ij) (mod n), where P = {p₁, …, p_h} is the factor base of small primes and the product is B-smooth for some bound B. The exponent vector v_i = (e_i1, …, e_ih) mod 2 in GF(2)^h is formed for each relation i. These vectors comprise the rows of an m × h matrix M, where m is the number of relations, typically exceeding h to ensure dependencies exist.[1]

The goal is to find a non-trivial kernel vector x ∈ GF(2)^m such that xᵀM = 0 in GF(2)^h, which identifies a linear combination of the exponent vectors that sums to the zero vector modulo 2. This is achieved through Gaussian elimination on M, a standard method for solving linear systems over finite fields that row-reduces the matrix to identify dependencies efficiently. Each such dependency corresponds to a subset S = {i : x_i = 1} of relations where the total exponents Σ_{i∈S} e_ij are even for every prime p_j. Multiple dependencies may be found, increasing the chances of obtaining a useful factorization.[6]

For a dependency vector x, the corresponding square is constructed as follows: let a = ∏_{i∈S} z_i mod n, so a² ≡ ∏_j p_j^(Σ_{i∈S} e_ij) (mod n). Since the total exponents are even for each p_j, the right-hand side is a perfect square b², with b = ∏_j p_j^((Σ_{i∈S} e_ij)/2) mod n, yielding a² ≡ b² (mod n), or equivalently (a − b)(a + b) ≡ 0 (mod n). This congruence provides potential factor candidates.[1]

To extract a non-trivial factor, compute d₁ = gcd(a − b, n) and d₂ = gcd(a + b, n). If 1 < d₁ < n or 1 < d₂ < n, that divisor d splits n non-trivially into factors d and n/d; otherwise, if a ≡ b or a ≡ −b (mod n) (indicating a trivial congruence), the dependency is discarded, and another x is sought. This step succeeds with probability approximately 1/2 for random splits, assuming n = pq with distinct odd primes p and q. The process may require multiple dependencies to guarantee a successful factorization.[6]
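One compact realization (a sketch, not the only formulation) packs each mod-2 exponent vector into an integer bitmask and row-reduces incrementally, returning the first subset of rows that XORs to zero:

def find_dependency(rows):
    # rows: mod-2 exponent vectors encoded as ints (bit j <-> prime j).
    # Returns a set of row indices whose vectors sum to zero mod 2, or None.
    pivots = {}                        # leading bit -> (reduced row, combination)
    for i, row in enumerate(rows):
        combo = 1 << i                 # tracks which original rows were XORed in
        while row:
            b = row.bit_length() - 1
            if b not in pivots:
                pivots[b] = (row, combo)
                break
            prow, pcombo = pivots[b]
            row ^= prow
            combo ^= pcombo
        else:                          # row reduced to zero: dependency found
            return {j for j in range(len(rows)) if combo >> j & 1}
    return None

# The three vectors from the worked example below: all three rows sum to zero.
print(find_dependency([0b1010, 0b0101, 0b1111]))   # {0, 1, 2}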
Implementation details
Pseudocode
Dixon's factorization method can be outlined in pseudocode as a high-level procedure that initializes the factor base and smoothness bound, collects smooth relations through random sampling, performs linear algebra over GF(2) to find dependencies among the relations, and extracts factors via GCD computations. This structure captures the core phases without delving into parameter tuning or efficiency analysis.[1][7][8] The following pseudocode implements the full algorithm for factoring an odd composite integer n, assuming helper functions for smoothness testing (detailed in the Mathematical prerequisites section) and Gaussian elimination over GF(2). It collects relations until sufficiently many are obtained to guarantee a linear dependency, then processes dependencies to find a non-trivial factor.

function Factor(n):
if n is even or n < 2: return n // Base cases
if IsPrime(n): return n // Fallback for primes
// Step 1: Select factor base F and smoothness bound B
B = SelectSmoothnessBound(n) // e.g., exp(sqrt(2 * ln(n) * ln(ln(n))))
F = [p for p in Primes up to B if p does not divide n] // Small primes
f = |F| // Size of factor base
// Step 2: Collect relations (k, full exponents, mod2 vector)
relations = [] // List of (k, full_exponents, mod2_vector) where mod2_vector in {0,1}^f
while |relations| < f + 10: // Collect extra for redundancy
k = RandomInteger(1, n-1) // Random k in 1 to n-1
if gcd(k, n) > 1: return gcd(k, n) // Trivial factor found
a = (k^2 mod n)
full_exponents, mod2_vector = IsSmooth(a, F) // Returns None if not smooth, else (full, mod2)
if full_exponents is not None:
relations.append( (k, full_exponents, mod2_vector) )
// Step 3: Build matrix and find row dependencies over GF(2)
m = |relations|
M = Matrix with rows as mod2_vectors from relations // m x f matrix over GF(2)
// To find dependencies among rows: compute null space of M transpose (f x m)
MT = Transpose(M) // f x m
kernel = GaussianElimination(MT) // Null space yields x in {0,1}^m
// Step 4: Process each dependency to find factors
for each non-zero x in kernel:
prod_k = 1
total_exponents = [0] * f // Accumulate full exponents
for i in 0 to m-1:
if x[i] == 1:
k_i, full_exp_i, _ = relations[i]
prod_k = (prod_k * k_i) % n
for j in 0 to f-1:
total_exponents[j] += full_exp_i[j]
// Compute y from half exponents
y = 1
for j in 0 to f-1:
half_exp = total_exponents[j] / 2 // exact division: the totals are even
y = (y * pow(F[j], half_exp, n)) % n
d1 = gcd( (prod_k - y) % n, n )
d2 = gcd( (prod_k + y) % n, n )
if 1 < d1 < n: return d1
if 1 < d2 < n: return d2
// If no factor found (unlikely with sufficient relations), recurse or fail
return Factor(n) // Or report failure
IsSmooth(a, F) performs trial division by the primes in F to check B-smoothness, returning the pair (full_exponents, mod2_vector) on success and None otherwise; GaussianElimination(MT) computes the null space using standard row reduction over GF(2), yielding basis vectors for dependencies among the relations. The value y is computed directly from the accumulated total exponents, which are even by construction and are halved before modular exponentiation for each prime power. In practice, the loop collects more relations than minimally required to ensure the matrix has full rank with high probability.[1][7][9][8]
Edge cases encompass small or even n (handled directly), prime n (returned unchanged after a primality test), and trials where gcd(k, n) yields an early factor. Each processed dependency succeeds in splitting n with probability approximately 1/2, so multiple kernel vectors may need evaluation, and the algorithm may require restarts if no split is found after sufficient relations.[1][8]
Computational complexity
Dixon's factorization method achieves a subexponential running time of L_n[1/2, 2√2], which approximates exp(2√2 · √(ln n · ln ln n)). This complexity is dominated by the relation collection and linear algebra phases, marking it as the first integer factorization algorithm with a rigorously proven subexponential bound.[1] In the relation generation phase, roughly u^u trials are needed per smooth relation, where u = ln n / ln B, and each trial involves O(h) divisions against the h factor-base primes to verify B-smoothness.[10] The linear algebra phase employs Gaussian elimination on an approximately square matrix of dimension h, incurring O(h³) time; Strassen's algorithm can improve the exponent to roughly 2.807 but does not alter the subexponential form.[11] The overall time complexity is exp((β + o(1))√(ln n · ln ln n)) with β = 2√2 for the balanced case, derived heuristically from needing about h relations at subexponential cost each.[1] Space requirements are O(h²), primarily for storing the relation matrix.[10] The method is heuristic in practice, assuming a suitable density of smooth numbers; worst-case performance could be exponential if smooth relations prove elusive. It remains practical only for integers of roughly 20-30 digits and has been largely superseded for larger values by advanced variants.[12]
Worked example
Factorization of 84923
To illustrate Dixon's factorization method, consider the composite number n = 84923. For this small example, choose a smoothness bound B = 7, smaller than the theoretical optimum to keep the factor base manageable. The factor base F = {2, 3, 5, 7} consists of all primes ≤ B.[13] Relations are generated by selecting integers k, computing y = k² mod n, and checking whether y is B-smooth (all prime factors in F). If it is, the exponent vector of y, reduced mod 2, is recorded. The following three smooth relations are found (among others that could be used):

| k | k² mod n | Factorization | Vector (mod 2, for 2, 3, 5, 7) |
|---|---|---|---|
| 1965 | 39690 | 2 · 3⁴ · 5 · 7² | (1, 0, 1, 0) |
| 8954 | 6804 | 2² · 3⁵ · 7 | (0, 1, 0, 1) |
| 24524 | 1890 | 2 · 3³ · 5 · 7 | (1, 1, 1, 1) |

The mod-2 vectors form the rows of a matrix over GF(2):
| 2 | 3 | 5 | 7 | |
|---|---|---|---|---|
| Rel1 | 1 | 0 | 1 | 0 |
| Rel2 | 0 | 1 | 0 | 1 |
| Rel3 | 1 | 1 | 1 | 1 |

Summing all three rows gives (0, 0, 0, 0) mod 2, so the product of the three relations is a congruence of squares. Taking a = 1965 · 8954 · 24524 mod 84923 = 19406 and b = 2² · 3⁶ · 5 · 7² mod 84923 = 35036 (halving the combined exponents (4, 12, 2, 4)) yields a² ≡ b² (mod 84923); then gcd(a − b, 84923) = 521 and gcd(a + b, 84923) = 163, recovering 84923 = 163 × 521.
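The dependency and the resulting factorization can be verified in a few lines (Python, illustrative):

from math import gcd

n = 84923
a = 1
for z in (1965, 8954, 24524):          # the three accepted relations
    a = a * z % n                      # a = 19406
b = (2**2 * 3**6 * 5 * 7**2) % n       # halved combined exponents (2, 6, 1, 2): b = 35036
assert (a * a - b * b) % n == 0        # a^2 ≡ b^2 (mod n)
print(gcd(a - b, n), gcd(a + b, n))    # 521 163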
Extensions and optimizations
Early optimizations
One of the initial practical enhancements to Dixon's factorization method involved refining the linear algebra phase for greater computational simplicity. In the original algorithm, Gaussian elimination is performed over the finite field GF(2), where the exponents of primes in the factorizations are reduced modulo 2. This avoids the complexity of solving a full integer matrix equation and suffices because a linear dependence modulo 2 yields a product that is a square modulo n, enabling the construction of the desired congruence of squares. The operation count for this phase is O(h³), where h is the size of the factor base, making it feasible even with the modest computing resources available in the early 1980s.[1]

The early abort strategy further accelerated relation collection during trial division. After dividing out small factors, if the remaining cofactor is still too large relative to the primes left to be tested, the process aborts for that candidate, as it is unlikely to be B-smooth. This heuristic discards most non-smooth values early, reducing the expected time for the trial division phase without significantly risking the collection of sufficient relations. Pomerance analyzed this tweak in detail, showing it lowers the overall runtime constant in Dixon's subexponential complexity.[12]

Adjustments to the factor base selection also emerged as practical refinements. If the rate of smooth relations proves too low during initial runs, the base is dynamically expanded by including additional small primes, increasing the smoothness probability while keeping the linear algebra manageable. Moreover, partial relations were incorporated, retaining candidates where the cofactor after trial division is a single prime just outside the base; these partials could later be combined with others to form complete smooth relations. This approach, analyzed in early theoretical work, improved yield without altering the core algorithm.[14]
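One way to realize the abort (a sketch under assumed tuning values; the checkpoint position and threshold below are illustrative choices, not Pomerance's exact parameters) is to stop partway through the base unless the cofactor has already shrunk sufficiently:

def early_abort_smooth(w, primes, checkpoint=None, threshold=10**6):
    # Trial division with a single early-abort checkpoint: after testing the
    # first half of the base, give up unless the cofactor is already small.
    if checkpoint is None:
        checkpoint = len(primes) // 2
    for i, p in enumerate(primes):
        while w % p == 0:
            w //= p
        if i == checkpoint and w > threshold:
            return False              # abort: unlikely to finish B-smooth
    return w == 1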
Relation to advanced methods
Dixon's factorization method laid the groundwork for the quadratic sieve (QS), a direct successor developed by Carl Pomerance in 1981 that extends Dixon's core idea of collecting smooth relations over a factor base but shifts from random square selection to generating candidates via the quadratic polynomial Q(x) = x² − n for x near √n. These values are systematically smaller than random squares modulo n, increasing the probability of smoothness and thereby reducing the randomness inherent in Dixon's approach.[15] The QS improves upon Dixon by replacing trial division for smoothness testing with a sieving process that efficiently identifies potential smooth values across a range, allowing for a larger factor base while retaining the same linear algebra phase to solve for dependencies. This results in a superior asymptotic complexity of L_n[1/2, 1] for QS, compared to Dixon's L_n[1/2, 2√2], where L_n[1/2, c] = exp((c + o(1))√(ln n · ln ln n)).[16][1] Further evolutions include the multiple polynomial quadratic sieve (MPQS), introduced in 1987, which generalizes QS by employing multiple quadratic polynomials to cover a broader sieving interval with reduced computational overhead per polynomial. The number field sieve (NFS), originating in the early 1990s, extends this lineage by using higher-degree polynomials in algebraic number fields to generate smooth relations, further optimizing the balance between relation generation and smoothness probability; Dixon's random squares concept directly inspired the congruence generation strategies in these methods.[17] In modern cryptography and number theory, Dixon's method is rarely deployed standalone due to its inefficiency for large integers but remains a foundational algorithm taught in educational contexts for illustrating subexponential factorization principles, and it occasionally appears in hybrid implementations for factoring medium-sized numbers where simplicity outweighs speed.[18]

| Method | Asymptotic Complexity | Practical Range (digits) |
|---|---|---|
| Dixon | L_n[1/2, 2√2] | Up to ~20 |
| QS | L_n[1/2, 1] | Up to ~100 |
| NFS | L_n[1/3, (64/9)^(1/3)] | Beyond ~100 |
