Trial division
Trial division is the most laborious but easiest to understand of the integer factorization algorithms. The essential idea is to test whether the integer n to be factored is divisible by each number in turn that is less than or equal to the square root of n.
For example, to find the prime factors of n = 70, one can try to divide 70 by successive primes: first, 70 / 2 = 35; next, neither 2 nor 3 evenly divides 35; finally, 35 / 5 = 7, and 7 is itself prime. So 70 = 2 × 5 × 7.
Trial division was first described by Fibonacci in his book Liber Abaci (1202).[1]
Method
Given an integer n (the integer to be factored), trial division consists of systematically testing whether n is divisible by any smaller number. Clearly, it is only worthwhile to test candidate factors less than n, and in order from two upwards, because an arbitrary n is more likely to be divisible by two than by three, and so on. With this ordering, there is no point in testing for divisibility by four if the number has already been determined not to be divisible by two, and so on for three and any multiple of three, etc. Therefore, the effort can be reduced by selecting only prime numbers as candidate factors. Furthermore, the trial factors need go no further than √n because, if n is divisible by some number p, then n = p × q, and if q were smaller than p, n would have been detected earlier as being divisible by q or by a prime factor of q.
A definite bound on the prime factors is possible. Suppose P_i is the i-th prime, so that P_1 = 2, P_2 = 3, P_3 = 5, etc. Then the last prime number worth testing as a possible factor of n is P_i where P_{i+1}² > n; equality here would mean that P_{i+1} is a factor. Thus, testing with 2, 3, and 5 suffices up to n = 48, not just 25, because the square of the next prime is 49, and below n = 25 just 2 and 3 are sufficient. Should the square root of n be an integer, then it is a factor and n is a perfect square.
The trial division algorithm in pseudocode:
algorithm trial-division is
    input: Integer n to be factored
    output: List F of prime factors of n
    P ← set of all primes ≤ √n
    F ← empty list of factors
    for each prime p in P do
        while n mod p is 0 do
            add factor p to list F
            n ← n / p
    if n ≠ 1 then    (remaining n is a prime factor; if F was empty, the original n was prime)
        add factor n to list F
Determining the primes less than or equal to √n is not a trivial task as n gets larger, so the simplest computer programs to factor a number just try successive integers, prime and composite, from 2 up to √n as possible factors.
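This simplest variant, trying every successive integer from 2 upward, can be sketched in Python (an illustrative implementation, not taken from the article):

```python
def trial_division(n):
    """Factor n by trying every integer from 2 up to sqrt(n).

    Composite candidates never divide the remaining cofactor,
    because their prime factors were already divided out.
    """
    factors = []
    f = 2
    while f * f <= n:
        while n % f == 0:   # f divides n: record it and reduce n
            factors.append(f)
            n //= f
        f += 1
    if n > 1:               # leftover cofactor is prime
        factors.append(n)
    return factors

print(trial_division(70))   # the article's example: [2, 5, 7]
```

Note that the loop bound f * f <= n shrinks as factors are divided out, which is what makes composite candidates harmless: by the time f is reached, no smaller number divides n.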
Speed
In the worst case, trial division is a laborious algorithm. For a base-2, n-digit number a, if the algorithm starts from two and works up only to the square root of a, it requires at most

π(2^(n/2)) ≈ 2^(n/2) / ((n/2) ln 2)

trial divisions, where π(x) denotes the prime-counting function, the number of primes less than x. This does not take into account the overhead of primality testing to obtain the prime numbers as candidate factors. A useful table need not be large: P(3512) = 32749, the last prime that fits into a sixteen-bit signed integer, and P(6542) = 65521 for unsigned sixteen-bit integers. That would suffice to test primality for numbers up to 65537² = 4,295,098,369. Preparing such a table (usually via the Sieve of Eratosthenes) would only be worthwhile if many numbers were to be tested. If instead a variant is used without primality testing, simply dividing by every odd number less than the square root of the base-2, n-digit number a, prime or not, it can take up to about

2^(n/2) / 2 = 2^(n/2 − 1)

trial divisions.
In both cases, the required time grows exponentially with the digits of the number.
Even so, this is a quite satisfactory method, considering that even the best-known algorithms have superpolynomial time growth. For a number a chosen uniformly at random from integers of a given length, there is a 50% chance that 2 is a factor of a and a 33% chance that 3 is a factor of a, and so on. It can be shown that 88% of all positive integers have a factor under 100 and that 92% have a factor under 1000. Thus, when confronted by an arbitrary large a, it is worthwhile to check for divisibility by the small primes first.
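The 88% and 92% figures can be checked empirically. A small sketch, sampling the integers up to 10^5 (a cutoff chosen here only for speed; the stated figures are over all positive integers):

```python
def smallest_factor(n):
    """Return the smallest prime factor of n (n itself if prime)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

N = 100_000
under_100 = sum(1 for n in range(2, N + 1) if smallest_factor(n) < 100)
under_1000 = sum(1 for n in range(2, N + 1) if smallest_factor(n) < 1000)
# Both fractions come out close to the quoted 88% and 92%.
print(f"factor < 100:  {under_100 / N:.1%}")
print(f"factor < 1000: {under_1000 / N:.1%}")
```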
However, many-digit numbers that do not have factors among the small primes can require days or months to factor with trial division. In such cases other methods are used, such as the quadratic sieve and the general number field sieve (GNFS). Because these methods also have superpolynomial time growth, a practical limit of n digits is reached very quickly. For this reason, in public-key cryptography, values for a are chosen to have large prime factors of similar size so that they cannot be factored by any publicly known method in a useful time period on any available computer system or cluster, such as supercomputers and computer grids. The largest cryptography-grade number that has been factored is RSA-250, a 250-digit number, using the GNFS and the resources of several supercomputers; the running time was about 2700 core-years.
References
- Childs, Lindsay N. (2009). A concrete introduction to higher algebra. Undergraduate Texts in Mathematics (3rd ed.). New York, NY: Springer-Verlag. ISBN 978-0-387-74527-5. Zbl 1165.00002.
- Crandall, Richard; Pomerance, Carl (2005). Prime numbers. A computational perspective (2nd ed.). New York, NY: Springer-Verlag. ISBN 0-387-25282-7. Zbl 1088.11001.
External links
- Wikiversity offers a lesson on prime factorization using trial division with Python.
- Fast JavaScript Prime Factor Calculator using trial division. Can handle numbers up to about 2^53.
- Trial Division in Java, C and JavaScript (in Portuguese)
Fundamentals
Definition
Trial division is a straightforward and elementary method in number theory used for primality testing and integer factorization, involving the systematic checking of potential divisors of a given integer n starting from 2 and proceeding through successive integers up to √n.[1] This approach determines whether n is prime by verifying that it has no divisors other than 1 and itself; if no such divisors exist within the tested range, n is confirmed prime.[4] The core principle of trial division hinges on the divisibility property: if an integer d divides n evenly (i.e., n mod d = 0), then the quotient n/d serves as the complementary factor, allowing recursive application of the method to the smaller quotient until the complete prime factorization is obtained or primality is established.[5] If no divisor is found during the process, n must be prime, as any composite number would have been detected through this exhaustive search.[1] This method's mathematical foundation rests on the fundamental theorem of arithmetic and the pairing of factors: any composite integer n possesses at least one prime factor p such that p ≤ √n, because the factors of n occur in pairs d × (n/d) in which the smaller factor is at most √n.[1] Thus, testing beyond √n is unnecessary, as any larger potential factor would imply a corresponding smaller one already checked.[2] For example, applying trial division to factor 315 begins by testing small divisors: 315 is divisible by 3 (since 3 + 1 + 5 = 9 is divisible by 3), then 105 is divisible by 3 (105 / 3 = 35), 35 by 5 (35 / 5 = 7), and 7 is prime, yielding the prime factorization 315 = 3² × 5 × 7.[1]
Historical Development
The method of trial division traces its origins to ancient Greek mathematics, where Euclid in his Elements (c. 300 BCE) provided the foundational definition of a prime in Book VII as a number greater than unity that is measured by no other number except unity itself, implying exhaustive checking of potential divisors. This approach was essential in Euclid's proof of the infinitude of primes in Book IX, Proposition 20, where he constructs a number larger than any finite set of primes and argues it must have a new prime factor, relying on the absence of divisors from the existing set. While trial division applies to individual integers, related methods like the sieve of Eratosthenes (c. 250 BCE) enabled efficient generation of prime tables for ranges.[3][6] During the medieval period, systematic applications of division for factoring emerged in Islamic mathematics, with further explicit descriptions in European works. Although early Arab scholars like Al-Khwarizmi advanced algebraic methods in the 9th century, the first detailed account of trial division as a structured algorithm for prime factorization, including testing up to the square root, appears in Leonardo Fibonacci's Liber Abaci (1202), where Chapter 5 describes dividing composite numbers successively by small primes to resolve their factors, as exemplified by factoring 805 as 5 × 7 × 23 through sequential trials.[7] In the Renaissance and Enlightenment eras, trial division was implicitly employed in prime searches by figures such as Pierre de Fermat and Leonhard Euler. Fermat's 17th-century explorations of prime forms and factorization techniques, including the difference-of-squares method, built upon trial checks for efficiency. In the 18th century, Euler refined trial division using quadratic residues for efficiency. 
Large prime tables, such as those up to 1,020,000, were computed by Chernac in 1811 using systematic trial division, facilitating studies on infinite series and the distribution of primes.[3][8] The 19th century saw formalization of trial division within number theory texts, notably in Peter Gustav Lejeune Dirichlet's Vorlesungen über Zahlentheorie (1863), which discusses primes and factorization methods, including trial division as a baseline for analytic results on prime distribution. In the 20th century, despite its inefficiencies, trial division persisted into the early computing era of the 1940s and 1950s, where it served as a foundational algorithm for generating prime tables and initial factoring in cryptographic precursors on early computers like the SWAC in 1952, which identified large Mersenne primes through systematic divisibility tests.[9]
Algorithm Description
Basic Procedure
The basic procedure of trial division for factoring a positive integer n begins by handling special cases to identify and extract the smallest prime factor, 2, if applicable. If n is even, repeatedly divide n by 2 until it is no longer divisible, recording each division as a factor of 2 in the prime factorization. This step ensures all powers of 2 are accounted for efficiently before proceeding to odd divisors.[10] Next, consider odd potential divisors d starting from 3 up to √n, incrementing by 2 each time to skip even numbers. For each such d, check whether d divides the current n (i.e., n mod d = 0); if so, repeatedly divide n by d until it no longer divides evenly, recording the multiplicity of d in the factorization. Update the upper limit √n as n decreases during divisions to maintain correctness. This process continues until no further divisors are found within the range.[10] If, after these checks, the remaining n > 1, then it is itself a prime factor and is added to the factorization; otherwise, the process concludes with the collected prime factors. For input n = 1, the factorization is empty, as 1 has no prime factors, though the algorithm assumes n ≥ 2 for standard application.[10] To illustrate, consider factoring 91. First, 91 is odd, so there are no divisions by 2. Then test odd d from 3 up to ⌊√91⌋ = 9: 3 does not divide 91 (91 ÷ 3 ≈ 30.33); 5 does not (91 ÷ 5 = 18.2); but 7 does (91 ÷ 7 = 13), and the remaining 13 is prime with no further divisors needed. Thus, 91 = 7 × 13.[1]
Implementation in Pseudocode
The basic trial division algorithm for prime factorization can be implemented efficiently by first handling the factor 2 separately to eliminate even divisors, then checking odd potential divisors up to the square root of the current value of the input number. This approach avoids unnecessary checks of even numbers beyond 2, roughly halving the number of iterations. The pseudocode below assumes a positive integer input and returns a list of prime factors in non-decreasing order, with multiplicities.

function factorize(n):
    factors = empty list
    // Handle factor 2
    while n is even:
        append 2 to factors
        n = n / 2
    // Check odd divisors from 3 to sqrt(n)
    d = 3
    while d * d <= n:
        while n % d == 0:
            append d to factors
            n = n / d
        d = d + 2
    // If n is a prime greater than 2
    if n > 2:
        append n to factors
    return factors
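A direct Python translation of the pseudocode above (illustrative; integer floor division keeps n exact):

```python
def factorize(n):
    """Prime factorization by trial division: factor out 2, then odd d."""
    factors = []
    while n % 2 == 0:          # handle the only even prime separately
        factors.append(2)
        n //= 2
    d = 3
    while d * d <= n:          # only odd candidates up to sqrt(n)
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 2
    if n > 2:                  # remaining cofactor is an odd prime
        factors.append(n)
    return factors

print(factorize(91))   # the worked example above: [7, 13]
```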
Optimizations and Variants
Prime-Only Trial Division
Prime-only trial division refines the basic trial division procedure by limiting the candidate divisors to prime numbers up to the square root of the integer under test. This optimization stems from the fundamental theorem of arithmetic, which ensures that every composite number has a prime factor no larger than its square root; thus, if n had a composite divisor d, the smallest prime factor of d would already have been checked, as it is at most √n.[11] Consequently, testing only primes eliminates redundant checks against composites, whose prime factors would have been encountered earlier.[12] The modified procedure begins by generating a list of all prime numbers up to √n, commonly achieved through the Sieve of Eratosthenes, which efficiently identifies primes in this range.[11] Once the prime list is obtained, trial division proceeds by dividing n sequentially by each prime in the list, starting from the smallest. If a prime divides n evenly, it is a factor, and the process recurses on the quotient (with multiplicity if needed) until the quotient is 1 or prime. If no such division occurs, n is prime. This approach applies to both primality testing and full factorization. For example, to factorize n = 315, compute √315 ≈ 17.7, so generate the primes up to 17: 2, 3, 5, 7, 11, 13, 17. Testing these in order, 315 is divisible by 3 (315 / 3 = 105); 105 is divisible by 3 (105 / 3 = 35); 35 by 5 (35 / 5 = 7); 7 is prime, giving the factorization 315 = 3² × 5 × 7. Although the upfront cost of sieving the primes up to √n is O(√n log log n), this is often amortized for large n, since the subsequent trials number only π(√n), where π is the prime-counting function, compared to roughly √n divisions in the unoptimized basic method.[13] This reduction in trial count, roughly a factor of ln √n by the prime number theorem, makes prime-only trial division more efficient for numbers beyond small scales, despite the initial sieve overhead.[14]
Wheel Factorization
Wheel factorization is an optimization technique for trial division that leverages modular arithmetic to skip numbers that are multiples of small initial primes, thereby reducing the number of candidate divisors to test. It is based on the concept of prime wheels, where the wheel modulus m is the product of the first few small primes (e.g., m = 2 × 3 × 5 = 30), and candidate divisors are generated only in the residue classes modulo m that are coprime to m. This approach generates potential prime factors by stepping through these coprime residues, avoiding systematic checks of composites divisible by the initial primes.[15] In the standard wheel using modulus 30, the spoke positions, that is, the residue classes coprime to 30, are 1, 7, 11, 13, 17, 19, 23, and 29. After initially dividing the target number n by the small primes 2, 3, and 5 to remove any factors of these, trial division proceeds by testing only increments aligned with these residues up to √n. For instance, the sequence of candidates might start at 7 (the first spoke after 5) and advance by differences derived from the wheel, such as +4 to 11, +2 to 13, +4 to 17, and so on, cycling through the 8 coprime classes out of 30 possible residues.[15][10] The procedure begins by exhaustively dividing by 2, 3, and 5. If n remains unfactored, wheel candidates are generated starting from the next prime (7) and continuing via the modular steps up to √n, checking divisibility only at these positions. This skips all even numbers beyond 2, multiples of 3 beyond 3, and multiples of 5 beyond 5, focusing effort on likely prime candidates.[15] Consider the example of factoring n = 1001. First, confirm it is odd and not divisible by 3 (sum of digits 2) or 5 (ends in 1). Then, using the mod-30 wheel, test at 7: 1001 / 7 = 143; since it divides evenly, proceed to factor 143 similarly, yielding 143 = 11 × 13. Thus, 1001 = 7 × 11 × 13, with the wheel efficiently skipping irrelevant candidates like 9, 15, 21, etc.[15] Extensions to larger wheels incorporate additional small primes for denser skipping.
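The mod-30 stepping described above can be sketched in Python; the increments are the gaps between consecutive spokes starting from 7 (an illustrative implementation, not from the source):

```python
from itertools import cycle

def wheel_factorize(n):
    """Trial division with a mod-30 wheel: divide out 2, 3, 5, then test
    only candidates congruent to a residue coprime to 30."""
    factors = []
    for p in (2, 3, 5):                      # strip the wheel's base primes
        while n % p == 0:
            factors.append(p)
            n //= p
    d = 7                                    # first spoke after 5
    gaps = cycle([4, 2, 4, 2, 4, 6, 2, 6])   # gaps: 7→11→13→17→19→23→29→31→37
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += next(gaps)
    if n > 1:                                # leftover cofactor is prime
        factors.append(n)
    return factors

print(wheel_factorize(1001))   # the example above: [7, 11, 13]
```

Composite candidates such as 49 or 77 still appear among the spokes, but they are harmless: their prime factors have already been divided out by the time they are reached.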
For modulus m = 2 × 3 × 5 × 7 = 210, the wheel has 48 coprime residues out of 210, allowing trials to bypass multiples of 7 as well after initial divisions by 2, 3, 5, and 7; however, this increases preprocessing complexity due to the larger set of residues and step patterns. Larger wheels provide better asymptotic density but incur higher setup costs, making them suitable for applications with many factorizations.[16][15] The density improvement from wheel factorization reduces the number of trials by the factor φ(m)/m, where φ is Euler's totient function, counting the integers up to m that are coprime to m. For m = 30, φ(30) = 8, so trials are limited to approximately 8/30 ≈ 26.7% of the candidates up to √n; for m = 210, φ(210) = 48, yielding about 48/210 ≈ 22.9% of candidates.[15]
Performance and Analysis
Time Complexity
Trial division in its basic form tests potential divisors sequentially from 2 up to √n, resulting in a worst-case time complexity of O(√n) arithmetic operations; this bound is achieved when n is prime and all divisions must be performed. The space complexity is O(1), requiring only constant additional storage for variables such as counters and remainders. An optimized variant that handles the factor 2 separately and then checks only odd integers up to √n reduces the number of trial divisions to approximately √n / 2. In the prime-only variant, trials are restricted to precomputed prime numbers up to √n, yielding a time complexity of O(π(√n)) ≈ O(√n / ln √n), derived from the prime number theorem, which states that the number of primes up to x is asymptotically x / ln x. Precomputing these primes via the sieve of Eratosthenes adds O(√n log log n) to the overall time complexity, while the space required to store the prime list is O(π(√n)). The wheel factorization variant builds on prime-only trial division by excluding candidates sharing factors with a small wheel modulus m (typically the product of the first few primes), further reducing the number of trials by the factor φ(m)/m, where φ denotes Euler's totient function measuring the density of integers coprime to m. For m = 30, φ(30)/30 = 8/30 ≈ 0.267, corresponding to roughly 27% of the candidates up to √n. Trial division and its variants exhibit time complexity exponential in the bit length b of the input, since √n = 2^(b/2), rendering them impractical for large integers.
Practical Efficiency
In practice, basic trial division can factor a 30-bit integer (approximately 10^9) in under 1 millisecond on a standard modern CPU, processing around 2,000 such numbers per second.[17] For larger inputs of around 40 bits (approximately 10^12), the time rises into the millisecond range, owing to the roughly 78,000 trial divisions up to the square root (using prime-only candidates), with optimized implementations completing this in under 0.1 seconds on 2020s-era processors.[17] Several factors influence the speed of trial division, including hardware capabilities, input size, and the number's primality. Modern CPUs from the 2020s, with clock speeds exceeding 4 GHz and high-throughput integer division units, can sustain on the order of tens of millions of trial divisions per second in optimized loops. Larger numbers require more trials, in proportion to their square root, while prime numbers represent the worst case, necessitating checks up to √n without early termination. Input size dominates, as the number of potential divisors grows with √n, and primality forces exhaustive testing.[17] Trial division becomes infeasible for cryptographic-scale numbers, such as 1024-bit RSA moduli (around 308 digits), where the square root is about 2^512 (roughly 10^154), requiring an astronomical number of operations and an infeasible number of years on a single modern processor even assuming constant-time divisions. It remains suitable, however, for small-to-medium integers or as a preprocessing step to remove trivial factors before applying advanced methods.[18] Empirical benchmarks show that optimized wheel factorization variants provide significant speedups over basic trial division. For instance, a wheel excluding multiples of 2, 3, and 5 achieves approximately 3.75 times the throughput for 30-bit numbers, processing 7,600 per second compared to 2,000 for the naive approach.
More advanced wheel techniques, incorporating larger prime bases and sequential optimizations, yield average running-time reductions of 75% (equivalent to about a 4x speedup) on composites up to 20 digits (around 10^20), with gains increasing for larger inputs.[17][19] Trial division is commonly used in mathematical software libraries for quick factorization of small numbers. In Python's SymPy library, the factorint function employs trial division by default for small factors (typically up to about 10 digits), combined with Pollard's rho method for larger ones, reliably factoring 18-digit semiprimes in seconds on standard hardware.[20]
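Throughput figures like these depend heavily on hardware and implementation, so they are best measured locally. A minimal stdlib-only timing sketch, factoring a ~31-bit semiprime built from the two 16-bit primes quoted earlier in the article (the choice of test number is illustrative):

```python
import time

def factorize(n):
    """Basic odd-stepping trial division, as described above."""
    factors = []
    while n % 2 == 0:
        factors.append(2)
        n //= 2
    d = 3
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 2
    if n > 1:
        factors.append(n)
    return factors

n = 32749 * 65521            # semiprime of two 16-bit primes from the text
t0 = time.perf_counter()
factors = factorize(n)       # -> [32749, 65521]
elapsed = time.perf_counter() - t0
print(factors, f"{elapsed * 1e3:.2f} ms")   # wall time varies by machine
```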
Post-2020 developments have explored GPU acceleration for trial division, particularly in parallel batch factorization, where CUDA implementations on NVIDIA GPUs demonstrate potential speedups of 10-100x over CPU for sieving-like trial checks on multiple numbers, though single-number factorization benefits less due to sequential dependencies.[21]
