Generation of primes
In computational number theory, a variety of algorithms make it possible to generate prime numbers efficiently. These are used in various applications, for example hashing, public-key cryptography, and the search for prime factors of large numbers.
For relatively small numbers, it is possible to simply apply trial division to each successive odd number; prime sieves are almost always faster. Prime sieving is the fastest known way to deterministically enumerate the primes. There are formulas that can calculate the next prime, but there is no known way to express the next prime in terms of the previous primes, nor any known effective manipulation or extension of a mathematical expression (even one involving later primes) that deterministically calculates the next prime.
Prime sieves
A prime sieve or prime number sieve is a fast type of algorithm for finding primes. There are many prime sieves. The simple sieve of Eratosthenes (250s BCE), the sieve of Sundaram (1934), the still faster but more complicated sieve of Atkin[1] (2003), the sieve of Pritchard (1979), and various wheel sieves[2] are most common.
A prime sieve works by creating a list of all integers up to a desired limit and progressively removing composite numbers (which it directly generates) until only primes are left. This is the most efficient way to obtain a large range of primes; however, to find individual primes, direct primality tests are more efficient[citation needed]. Furthermore, some integer sequences constructed from sieve formalisms (sequence A240673 in the OEIS) can also be used to generate primes in certain intervals.
Large primes
Cryptography requires the use of very large primes: for example, with the RSA cryptosystem two primes of at least 2,048 bits (i.e. at least 2^2047) are recommended.[3] To generate these primes, the mainstream method is to generate random numbers in a target range and test them for primality using fast probabilistic methods: a short round of sieving (sieve of Eratosthenes or trial division) followed by the Baillie–PSW primality test or the Miller–Rabin primality test;[4] a probable prime with a chance of 2^(-112) of being composite is considered sufficient for the 2,048-bit case. Even if a composite number is chosen, it will likely be quickly discovered because it causes failed operations, except when a Carmichael number happens to be chosen in the case of RSA.[5]
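As a rough illustration of this pipeline (a short sieving step followed by a probabilistic test), the following Python sketch generates a probable prime of a given bit length. The function name is illustrative, and the use of sympy's isprime as the probabilistic test is an assumption of the example, not part of the cited recommendations.

```python
import secrets
from sympy import isprime  # assumption: sympy is available; any strong probable-prime test would do

SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def random_probable_prime(bits=2048):
    """Draw random odd candidates of the requested size, discard those with small
    factors (the cheap sieving step), then apply a probabilistic primality test."""
    while True:
        # Force the top bit (correct size) and the bottom bit (odd).
        candidate = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if any(candidate % p == 0 for p in SMALL_PRIMES):
            continue
        if isprime(candidate):  # Baillie-PSW / Miller-Rabin style test
            return candidate

p = random_probable_prime(512)  # 512 bits keeps the example fast
print(p.bit_length(), "bits")
```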
A less common choice is to use provable primes, which can be generated based on variants of the Pocklington primality test,[6] especially Maurer's algorithm. Both provable and probable primality tests rely on modular exponentiation.[4]
In addition, with RSA, so-called "strong primes", for which both p − 1 and p + 1 have a large prime factor, are sometimes preferred,[4] as this is expected to slow down factoring attempts using Pollard's p − 1 and Williams' p + 1 algorithms. Such a choice has little effect against elliptic-curve factoring methods, however.[7]
Integers of special forms, such as Mersenne numbers or Fermat numbers, can be efficiently tested for primality if the prime factorization of p − 1 or p + 1 is known.
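For instance, for Mersenne numbers M_p = 2^p − 1 the factorization of M_p + 1 = 2^p is fully known, which the Lucas–Lehmer test exploits. A minimal Python sketch (valid for prime exponents p):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for a prime exponent p, M_p = 2**p - 1 is prime
    iff s_(p-2) == 0, where s_0 = 4 and s_(k+1) = s_k**2 - 2 (mod M_p)."""
    if p == 2:
        return True  # M_2 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (2, 3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# [2, 3, 5, 7, 13, 17, 19]  (11 is excluded: 2**11 - 1 = 2047 = 23 * 89)
```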
Complexity
The sieve of Eratosthenes is generally considered the easiest sieve to implement, but it is not the fastest in terms of the number of operations for large sieving ranges. In its usual standard implementation (which may include basic wheel factorization for small primes), it can find all the primes up to N in time O(N log log N), while basic implementations of the sieve of Atkin and wheel sieves run in linear time O(N). Special versions of the sieve of Eratosthenes using wheel sieve principles can have this same linear O(N) time complexity. A special version of the sieve of Atkin and some special versions of wheel sieves, which may include sieving using the methods from the sieve of Eratosthenes, can run in sublinear time complexity of O(N / log log N). Note that just because an algorithm has decreased asymptotic time complexity does not mean that a practical implementation runs faster than an algorithm with a greater asymptotic time complexity: if, in order to achieve that lesser asymptotic complexity, the individual operations carry a constant factor of increased cost that may be many times greater than for the simpler algorithm, it may never be possible within practical sieving ranges for the advantage of the reduced number of operations to make up for this extra cost in time per operation.
Some sieving algorithms, such as the sieve of Eratosthenes with large amounts of wheel factorization, take much less time for smaller ranges than their asymptotic time complexity would indicate because they have large negative constant offsets in their complexity and thus don't reach that asymptotic complexity until far beyond practical ranges. For instance, the sieve of Eratosthenes with a combination of wheel factorization and pre-culling using small primes up to 19 uses about a factor of two less time than predicted for a total range of 10^19, a range that takes hundreds of core-years to sieve even with the best sieve algorithms.
The simple naive "one large sieving array" sieves of any of these sieve types take memory space of about O(N), which means that (1) they are very limited in the sieving ranges they can handle, being bounded by the amount of RAM available, and (2) they are typically quite slow, since memory access speed becomes the bottleneck rather than computational speed once the array size grows beyond the size of the CPU caches. The normally implemented page-segmented sieves of both Eratosthenes and Atkin take space of about O(√N / log N) plus small sieve segment buffers which are normally sized to fit within the CPU cache; page-segmented wheel sieves, including special variations of the sieve of Eratosthenes, typically take much more space than this by a significant factor in order to store the required wheel representations; Pritchard's variation of the linear-time sieve of Eratosthenes/wheel sieve takes O(√N log log N / log N) space. The better-time-complexity special version of the sieve of Atkin takes space N^(1/2+o(1)). Sorenson[8] shows an improvement to the wheel sieve that takes even less space, at O(N / (log N)^L) for any L > 1. However, the following is a general observation: the more the amount of memory is reduced, the greater the constant-factor increase in the cost in time per operation, even though the asymptotic time complexity may remain the same, meaning that the memory-reduced versions may run many times slower than the non-memory-reduced versions.
References
[edit]- ^ Atkin, A.; Bernstein, D. J. (2004). "Prime sieves using binary quadratic forms" (PDF). Mathematics of Computation. 73 (246): 1023–1030. Bibcode:2004MaCom..73.1023A. doi:10.1090/S0025-5718-03-01501-1.
- ^ Pritchard, Paul (1994). Improved Incremental Prime Number Sieves. Algorithmic Number Theory Symposium. pp. 280–288. CiteSeerX 10.1.1.52.835.
- ^ Barker, Elaine; Dang, Quynh (2015-01-22). "NIST Special Publication 800-57 Part 3 Revision 1: Recommendation for Key Management: Application-Specific Key Management Guidance" (PDF). National Institute of Standards and Technology. p. 12. doi:10.6028/NIST.SP.800-57pt3r1. Retrieved 2017-11-24.
- ^ a b c Švenda, Petr; Nemec, Matúš; Sekan, Peter; Kvašňovský, Rudolf; Formánek, David; Komárek, David; Matyáš, Vashek (August 2016). The Million-Key Question—Investigating the Origins of RSA Public Keys. 25th USENIX Security Symposium. Austin, TX, United States: USENIX Association. pp. 893–910. ISBN 978-1-931971-32-4.
- ^ "RSA with probable primes". Cryptography Stack Exchange.
- ^ Plaisted D. A. (1979). "Fast verification, testing, and generation of large primes". Theor. Comput. Sci. 9 (1): 1–16. doi:10.1016/0304-3975(79)90002-1.
- ^ "RSA Labs FAQ - What are strong primes and are they necessary for RSA?". security.nknu.edu.tw.
- ^ Sorenson, J. P. (1998). "Trading Time for Space in Prime Number Sieves". Algorithmic Number Theory. Lecture Notes in Computer Science. Vol. 1423. pp. 179–195. CiteSeerX 10.1.1.43.9487. doi:10.1007/BFb0054861. ISBN 978-3-540-64657-0.
Generation of primes
Basic and Historical Methods
Trial Division
Trial division is the most basic deterministic method for determining whether an integer n is prime: it systematically checks n for divisibility by all smaller integers up to √n. If no such divisor exists other than 1 and n itself, then n is prime.[5] The process begins by testing divisibility starting from 2: divide n by 2; if it divides evenly, n is composite. If not, proceed to check odd integers from 3 up to √n, incrementing by 2 each time. If any of these divides n evenly, n is composite; otherwise, it is prime.[6] Several optimizations make this method more efficient for manual or computational use. The primary one limits checks to √n, since if n had a factor larger than √n, it would also have a corresponding factor smaller than √n that would already have been detected.[5] After handling 2 separately, skipping all subsequent even numbers reduces the trials by roughly half, as an even divisor greater than 2 would imply that n is even and thus composite. Similarly, treating 3 as a special case and then checking only numbers of the form 6k ± 1 (skipping multiples of 2 and 3) further accelerates the process, though this borders on wheel factorization techniques.[7] This method originates from ancient Greek mathematics, with Euclid (c. 300 BCE) laying foundational work on primes in his Elements, including proofs of their infinitude that implicitly rely on basic divisibility checks for verification.[8] It was employed manually in antiquity to identify small primes, as seen in computations from Eratosthenes' era (c. 240 BCE), before more systematic approaches like sieving emerged.[8] For example, consider n = 17. Compute √17 ≈ 4.1, so test divisors from 2 to 4. First, 17/2 = 8.5 (not an integer). Next, 17/3 ≈ 5.67 (not an integer). Finally, 17/4 = 4.25 (not an integer). Since no divisors are found, 17 is prime.[5] The time complexity for testing a single n is O(√n), as it requires up to about √n division operations.[9] To generate the first n primes via trial division on consecutive integers up to the nth prime (approximately n ln n by the prime number theorem), the total cost sums √m over m from 2 to about n ln n, yielding roughly O((n ln n)^(3/2)) overall, since the integral of √x from 1 to N is (2/3)N^(3/2) (ignoring logarithmic factors for simplicity).[10] This makes trial division practical only for small n, serving as a foundation for more efficient range-based methods like the sieve of Eratosthenes.[5]
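A minimal Python sketch of trial division with the 6k ± 1 optimization described above (the function name is illustrative):

```python
import math

def is_prime_trial_division(n):
    """Deterministic trial division: check 2 and 3, then divisors 6k - 1, 6k + 1 up to sqrt(n)."""
    if n < 2:
        return False
    if n < 4:
        return True          # 2 and 3 are prime
    if n % 2 == 0 or n % 3 == 0:
        return False
    limit = math.isqrt(n)
    d = 5                    # candidates of the form 6k - 1, 6k + 1
    while d <= limit:
        if n % d == 0 or n % (d + 2) == 0:
            return False
        d += 6
    return True

print([m for m in range(2, 40) if is_prime_trial_division(m)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
```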
Sieve of Eratosthenes
The Sieve of Eratosthenes is a classical algorithm for generating all prime numbers up to a given limit N by systematically eliminating composite numbers through iterative marking of multiples. It begins by creating a boolean array of size N + 1, initializing all entries to true except for indices 0 and 1, which represent non-primes. Starting from the integer 2, the algorithm identifies the first unmarked number as prime and marks all its multiples (beginning from its square) as composite, repeating this process for each subsequent unmarked number up to √N. The remaining unmarked indices at the end correspond to primes.[11] This method was developed by the Greek mathematician Eratosthenes of Cyrene around 240 BCE while serving as chief librarian at the Library of Alexandria, marking it as one of the earliest known efficient techniques for batch-generating primes up to a specified limit by hand.[12] Prior to this, identifying primes relied on individual trial division, but the sieve enabled the simultaneous elimination of multiples across the entire range, making it practical to construct comprehensive lists of primes manually.[13] To illustrate the process, consider finding all primes up to N = 30:
- Initialize the array as true for indices 2 through 30.
- For i = 2, mark its multiples starting from 2² = 4 as composite: 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30.
- Next unmarked is i = 3; mark multiples from 3² = 9: 9, 12, 15, 18, 21, 24, 27, 30.
- Next unmarked is i = 5; mark from 5² = 25: 25, 30.
- Next unmarked is i = 7; marking would start from 7² = 49 > 30, so stop.
- The unmarked indices are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, yielding the primes.
In pseudocode:
input: integer n > 1
output: list of primes up to n
let is_prime be a boolean array of size n+1, initialized to true
is_prime[0] ← false
is_prime[1] ← false
for i ← 2 to floor(sqrt(n))
if is_prime[i]
for j ← i² to n step i
is_prime[j] ← false
primes ← all i where is_prime[i] is true and 1 < i ≤ n
return primes
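A direct Python rendering of this pseudocode, offered as a sketch (the bytearray simply serves as a compact boolean array):

```python
import math

def sieve_of_eratosthenes(n):
    """Return a list of all primes up to n, following the pseudocode above."""
    if n < 2:
        return []
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0] = is_prime[1] = 0
    for i in range(2, math.isqrt(n) + 1):
        if is_prime[i]:
            # Mark i*i, i*i + i, ... as composite; smaller multiples were already handled.
            is_prime[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if is_prime[i]]

print(sieve_of_eratosthenes(30))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```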
Advanced Sieving Algorithms
Sieve of Atkin
The Sieve of Atkin, developed by A. O. L. Atkin and D. J. Bernstein and published in 2004, represents a significant advancement in sieving algorithms for generating prime numbers, particularly in computational number theory, by leveraging properties of binary quadratic forms to achieve better asymptotic performance than the classical Sieve of Eratosthenes.[15] Unlike earlier methods that mark multiples linearly, this algorithm identifies primes through the enumeration of solutions to specific irreducible quadratic Diophantine equations modulo 60, enabling a more efficient marking process for large ranges.[15] The core of the algorithm relies on three quadratic forms whose solution counts determine primality for the residues modulo 60 of numbers not divisible by 2, 3, or 5 (the residues coprime to 60 being 1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53, 59). These forms, applied as shown in the sketch after this list, are:
- 4x² + y² = n, toggled for n with n mod 60 in {1, 13, 17, 29, 37, 41, 49, 53} (equivalently n ≡ 1 mod 4),
- 3x² + y² = n, toggled for n with n mod 60 in {7, 19, 31, 43} (equivalently n ≡ 7 mod 12),
- 3x² − y² = n (with x > y), toggled for n with n mod 60 in {11, 23, 47, 59} (equivalently n ≡ 11 mod 12).
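The following Python sketch implements the commonly used simplified (mod 12) formulation of this sieve, without the mod-60 wheel or the segmentation found in optimized implementations; it toggles candidacy for each solution of the three forms and then removes numbers divisible by the square of a prime:

```python
import math

def sieve_of_atkin(limit):
    """Basic Sieve of Atkin: toggle candidates via the three quadratic forms,
    then eliminate multiples of squares of primes."""
    is_prime = [False] * (limit + 1)
    root = math.isqrt(limit)
    for x in range(1, root + 1):
        for y in range(1, root + 1):
            n = 4 * x * x + y * y
            if n <= limit and n % 12 in (1, 5):
                is_prime[n] = not is_prime[n]
            n = 3 * x * x + y * y
            if n <= limit and n % 12 == 7:
                is_prime[n] = not is_prime[n]
            n = 3 * x * x - y * y
            if x > y and n <= limit and n % 12 == 11:
                is_prime[n] = not is_prime[n]
    for n in range(5, root + 1):
        if is_prime[n]:
            for k in range(n * n, limit + 1, n * n):
                is_prime[k] = False  # k is divisible by a square, so it cannot be prime
    return [p for p in (2, 3) if p <= limit] + [n for n in range(5, limit + 1) if is_prime[n]]

print(sieve_of_atkin(60))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]
```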
Wheel and Segmented Sieves
Wheel factorization is an optimization technique applied to sieving algorithms for generating prime numbers, where the residues coprime to a chosen wheel modulus M (typically the product of the first few small primes, such as M = 2 · 3 · 5 = 30) are precomputed, and the sieving process focuses only on those positions, effectively skipping multiples of the small primes factored into M. This reduces the density of numbers that need to be checked during sieving to the fraction φ(M)/M, where φ is Euler's totient function, leading to fewer operations overall.[16] The concept was formalized in the wheel sieve by Paul Pritchard in 1982, building on earlier ideas for efficient enumeration of candidates likely to be prime. A simple example of wheel factorization uses modulus 6 (M = 2 · 3), where the coprime residues are 1 and 5; this skips all even numbers and multiples of 3, sieving only on positions congruent to 1 or 5 modulo 6, so that only a third of the integers beyond 3 need be considered.[17] For a larger wheel like 30, the eight coprime residues (1, 7, 11, 13, 17, 19, 23, 29) further reduce the candidates to about 8/30 ≈ 0.267 of the original range, accelerating the process for subsequent prime eliminations.[18] When integrated with base sieves like the sieve of Eratosthenes, wheel factorization provides a constant-factor speedup, with the surviving fraction of candidates approaching the product of (1 − 1/p) over the first k primes used in the wheel, though larger wheels increase setup costs.[19]
The segmented sieve addresses memory constraints in generating primes over large ranges by dividing the interval into blocks of size at most about √N, sieving each segment independently using a precomputed list of the small primes up to √N. Introduced by Carter Bays and Richard H. Hudson in 1977, this variant avoids allocating a full array of size N, instead requiring only O(√N) space for the base primes and the current segment.[20] For each segment starting at offset L, the multiples of each small prime p are marked by computing the first multiple in the segment as ⌈L/p⌉ · p and stepping by p, ensuring efficient elimination without revisiting prior segments. An illustrative segmented sieve run targets the primes between 10^6 and 10^6 + 10^4: first, generate all primes up to about 1010 using a standard sieve (yielding the base primes 2, 3, 5, ..., 1009), then process 10 segments of 1000 numbers each, marking composites in each block by offsets from multiples of those small primes, repeating for subsequent blocks up to 10^6 + 10^4, and ultimately identifying 723 primes in the range, such as 1000003 and 1000033.[21] This approach maintains the time complexity of the base sieve while limiting space to O(√N).[16]
Combining wheel factorization with segmented sieving further optimizes large-scale computations; for instance, applying a wheel modulo 30 within segments reduces the array size per block by about 73%, enhancing cache efficiency.[18] These techniques are particularly valuable in distributed computing environments, where segmented wheel sieves enable parallel processing across nodes to enumerate all primes up to very large limits, as demonstrated in implementations for high-performance prime tabulation.[22]
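A minimal Python sketch of a segmented sieve along these lines (no wheel; the segment size and function names are illustrative):

```python
import math

def small_primes(limit):
    """Base primes up to `limit` via a plain sieve of Eratosthenes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for i in range(2, math.isqrt(limit) + 1):
        if flags[i]:
            flags[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i in range(2, limit + 1) if flags[i]]

def segmented_sieve(lo, hi, segment_size=32768):
    """Yield the primes in [lo, hi], sieving one fixed-size block at a time
    so that memory stays near O(sqrt(hi)) plus one segment buffer."""
    lo = max(lo, 2)
    base = small_primes(math.isqrt(hi))
    for start in range(lo, hi + 1, segment_size):
        end = min(start + segment_size - 1, hi)
        flags = bytearray([1]) * (end - start + 1)
        for p in base:
            # First multiple of p in [start, end], never below p*p so p itself survives.
            first = max(p * p, ((start + p - 1) // p) * p)
            for m in range(first, end + 1, p):
                flags[m - start] = 0
        for offset, f in enumerate(flags):
            if f:
                yield start + offset

count = sum(1 for _ in segmented_sieve(10**6, 10**6 + 10**4))
print(count)  # number of primes in the worked example's interval (723 per the text above)
```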
Primality Testing for Large Primes
Probabilistic Tests
Probabilistic primality tests provide efficient methods for verifying whether large numbers are prime with a tunable probability of error, making them indispensable for generating primes in cryptographic applications such as RSA key generation. These algorithms are randomized and witness-based, declaring a number composite if a chosen witness reveals inconsistency with Fermat's little theorem or related properties, while passing multiple trials suggests probable primality. The error probability decreases exponentially with the number of independent trials, allowing practical certainty for numbers of thousands of bits.[23]
The seminal Miller–Rabin test, introduced by Gary L. Miller in 1976 as a deterministic algorithm assuming the extended Riemann hypothesis, was adapted into a fully probabilistic version by Michael O. Rabin in 1980.[24][23] For an odd integer n > 3, the test proceeds as follows: first, express n − 1 = 2^s · d where d is odd. Select a random base a with 2 ≤ a ≤ n − 2 and compute x = a^d mod n. If x = 1 or x = n − 1, then n is a probable prime to base a. Otherwise, square repeatedly, setting x = x² mod n for r = 1 to s − 1; if at any point x = n − 1, then n is a probable prime to base a. If no such r is found, then n is composite. Each trial has an error probability of at most 1/4 for composite n, as at least three-quarters of the bases are witnesses detecting compositeness.[23][25] Repeating the test with k independent random bases yields an overall error probability bounded by 4^(-k). The running time is O(k (log n)³) with schoolbook arithmetic, dominated by the modular exponentiations performed via repeated squaring and modular multiplication, with polylogarithmic space usage.[25] For numbers below about 3.3 × 10^24, the test becomes deterministic (guaranteeing primality or compositeness) when using the first 12 primes (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37) as witnesses, since no composite below that limit is a strong pseudoprime to all of these bases.
An illustrative example is testing n = 91 (which factors as 7 × 13) with base a = 2. Here, n − 1 = 90 = 2 · 45, so s = 1 and d = 45; compute x = 2^45 mod 91 = 57. Since x ≠ 1 and x ≠ 90, and no further squarings are available (s − 1 = 0), 91 is declared composite, with 2 serving as a witness. (An Euler-type check comparing 2^45 against the Jacobi symbol (2/91) = −1 would also reveal compositeness, but the strong condition suffices on its own.)[26]
The Baillie–PSW test enhances reliability by combining a single Miller–Rabin trial with base 2 and a strong Lucas probable prime test, whose parameter D is chosen as the first value in the sequence 5, −7, 9, −11, ... for which the Jacobi symbol satisfies (D/n) = −1. This combination has no known composite pseudoprimes, and none exist below 2^64, so in practice its error rate is treated as negligible even for larger numbers.[27] The Miller–Rabin test, including variants like Baillie–PSW, is widely deployed; for instance, OpenSSL employs it during RSA key generation to verify candidate primes.[28] For 1024-bit numbers, 40 trials suffice to render the error probability negligible for cryptographic purposes.[29]
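A minimal Python sketch of the randomized test just described (names are illustrative; pow performs the modular exponentiation):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probable-prime test; for composite n the chance of passing
    is at most 4**(-rounds)."""
    if n < 2:
        return False
    small = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in small:
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness of compositeness
    return True

print(is_probable_prime(2**127 - 1))  # True: 2^127 - 1 is a Mersenne prime
```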
Deterministic Tests
Deterministic primality tests provide unconditional proofs of primality, relying on number-theoretic certificates rather than probabilistic witnesses and thus offering certainty without reliance on randomness. These methods contrast with faster probabilistic approaches by giving rigorous guarantees, though often at higher computational cost, making them suitable for applications requiring verified primes, such as cryptographic key generation where partial factorization of n − 1 is feasible.
The Agrawal–Kayal–Saxena (AKS) primality test, introduced in 2002, is the first general deterministic algorithm running in polynomial time, placing the problem of determining primality in the complexity class P.[30] It verifies whether an integer n is prime by checking a set of polynomial congruences: for a suitable parameter r chosen so that the multiplicative order of n modulo r exceeds (log₂ n)², and for a = 1 to ⌊√φ(r) · log₂ n⌋, it confirms that (x + a)^n ≡ x^n + a (mod x^r − 1, n). If all checks pass, n is not a perfect power, and n has no prime factor up to r, then n is prime; otherwise, it is composite. The identity holds for primes due to properties of finite fields and the binomial theorem modulo n. The algorithm as originally presented runs in roughly Õ((log n)^12) time, but subsequent analysis and the Lenstra–Pomerance refinement reduced this to Õ((log n)^6).[30] Despite these theoretical advances, the AKS test suffers from high constant factors, rendering it practical only for comparatively small numbers, beyond which execution times become prohibitive even on modern hardware.[31]
Another practical deterministic method for large primes is elliptic curve primality proving (ECPP), developed in the 1980s and refined since. ECPP produces a primality certificate by recursively constructing a chain of elliptic curves over finite fields, using curves with complex multiplication to reduce the problem to proving the primality of successively smaller numbers. The method is heuristic in its running-time analysis but in practice always succeeds for primes, with a heuristic running time polynomial in log n (roughly (log n)^(5+ε), or (log n)^(4+ε) for the fastECPP variant), making it suitable for proving the primality of numbers with thousands of digits, such as those used to set prime records. ECPP does not require factorization of n − 1 and is implemented in tools like Primo for certifying large primes.
Pocklington's theorem, formulated in 1914, enables primality certificates for numbers n where a large, fully factored divisor of n − 1 is known. Specifically, if n − 1 = F · R with F > √n, F factored into primes q₁, ..., q_k, and for each q_i there exists an integer a_i such that a_i^(n−1) ≡ 1 (mod n) and gcd(a_i^((n−1)/q_i) − 1, n) = 1, then n is prime. This Fermat-like condition forces every prime divisor of n to be congruent to 1 modulo F; since F > √n, no nontrivial divisor can exist, an argument that leverages the structure of the multiplicative group modulo n. The theorem is particularly effective for proving primality of large candidates in cryptography, where partial factorization of n − 1 (for example the safe-prime or Sophie Germain structure) is often engineered. For example, to prove 13 prime, note 13 − 1 = 12 = 2² · 3, so F = 12 > √13. Take a = 2: 2^12 ≡ 1 (mod 13), gcd(2^6 − 1, 13) = gcd(63, 13) = 1, and gcd(2^4 − 1, 13) = gcd(15, 13) = 1, satisfying the condition for both distinct prime factors 2 and 3.
Maurer's method, proposed in 1995, generates provable primes by recursively constructing certified smaller primes and using them, via Pocklington-style criteria combined with probable-prime testing, to witness the primality of the larger candidate; its expected running time is comparable to that of searching for probable primes of the same bit length.
This approach bridges probabilistic testing and deterministic proof, making it viable for generating RSA-sized primes (up to 2048 bits) with full certification when the needed partial factorizations are available, unlike the general but impractical AKS at such scales.[32] Overall, while AKS provides a universal theoretical foundation with Õ((log n)^6) complexity, methods like Pocklington, Maurer's algorithm, and ECPP excel in practice for structured or large primes due to their reliance on certificates and recursion.[30]
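A small Python checker for Pocklington's criterion, applied to the worked example above; this is a sketch, and the caller must supply the fully factored part F of n − 1 together with its distinct prime factors (names are illustrative):

```python
from math import gcd, isqrt

def pocklington_check(n, F, prime_factors_of_F, a):
    """Verify Pocklington's criterion for n, given n - 1 = F * R with F > sqrt(n),
    the distinct prime factors of F, and a witness a.
    Returns True if the certificate proves n prime."""
    if (n - 1) % F != 0 or F <= isqrt(n):
        return False
    if pow(a, n - 1, n) != 1:
        return False
    for q in prime_factors_of_F:
        if gcd(pow(a, (n - 1) // q, n) - 1, n) != 1:
            return False
    return True

# Worked example from the text: n = 13, n - 1 = 12 = 2^2 * 3 (fully factored), witness a = 2.
print(pocklington_check(13, 12, [2, 3], 2))  # True
```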
Complexity and Efficiency
Theoretical Complexity
The generation of all primes up to N requires at minimum on the order of N / log N time in standard computational models, since the algorithm must at least write out the roughly π(N) ≈ N / ln N primes, and sieve-style algorithms must additionally examine or initialize data structures covering the whole interval to identify composites, even though the output is only a logarithmic fraction of it.[33] For determining the primality of a single integer n, a lower bound of Ω(log n) follows from the need to read the input in binary representation.[34]
The sieve of Eratosthenes achieves O(N log log N) time for generating all primes up to N, arising from the harmonic sum over primes, whereby each composite is marked a constant number of times on average.[11] The sieve of Atkin improves this to O(N / log log N) time asymptotically, by using quadratic forms to directly identify square-free candidates in fixed residue classes, reducing the marking operations.[16] More advanced analytic methods, such as the Lagarias–Odlyzko method, enable computation of the prime-counting function π(x) in O(x^(1/2+ε)) time for any ε > 0, though adapting them to list all primes explicitly retains near-linear scaling because of the size of the output.[35]
For primality testing of large integers, deterministic algorithms based on the AKS framework run in polynomial time Õ((log n)^6), with refinements achieving slightly better exponents through improvements in modular composition and polynomial multiplication.[36] Randomized tests, such as Miller–Rabin, run in O(k (log n)³) time for k rounds with negligible error probability, and Miller's underlying test can be made deterministic in polynomial time under the generalized Riemann hypothesis.[37] Space requirements for sieving primes up to N can be reduced to O(√N) using wheel factorization or segmented variants that store only the base primes and process intervals of size about √N, avoiding full arrays.[38]
The prime number theorem establishes that π(x) ~ x / ln x, implying that the density of primes decreases logarithmically, which directly influences generation rates: producing the first n primes requires searching up to approximately n ln n.[39] This asymptotic for the n-th prime, p_n ~ n ln n, derives from inverting the theorem via the relation π(p_n) = n, yielding p_n / ln p_n ≈ n and solving iteratively.[40] Recent pseudodeterministic algorithms, such as the 2023 construction by Chen, Liao, and Ren, enable polynomial-time generation of canonical primes of specified bit lengths with overwhelming probability of consistent output across runs.[41]
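The inversion of the prime number theorem mentioned above can be made explicit with a single substitution step; in LaTeX notation:

```latex
\pi(x) \sim \frac{x}{\ln x}, \qquad
n = \pi(p_n) \approx \frac{p_n}{\ln p_n}
\;\Longrightarrow\; p_n \approx n \ln p_n .
% Substituting the first-order guess p_n \approx n \ln n into the right-hand side:
p_n \approx n \ln(n \ln n) = n\,(\ln n + \ln\ln n) \sim n \ln n .
```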
Practical Performance
In practical applications, the sieve of Eratosthenes, particularly in highly optimized implementations like PrimeSieve, demonstrates excellent performance on modern hardware. For instance, generating all primes up to 10^9 (about 50 million primes) takes approximately 0.4 seconds on a single core of an Intel Core i7-920 processor at 2.66 GHz.[42] For larger workloads, such as sieving a fixed-size interval located near a much larger bound, a cache-optimized segmented variant completes the job in about 2.63 seconds on a 900 MHz Athlon system, and contemporary multi-core CPUs with 3-4 GHz clocks can achieve this in under a second through parallelization.[43] The sieve of Atkin, while theoretically superior for extremely large limits due to its O(N / log log N) complexity, often underperforms optimized Eratosthenes implementations at practical limits, as the latter benefits from simpler memory access patterns and better cache utilization.[16]
Memory trade-offs are critical for scaling to terabyte-range sieving. Segmented sieves divide the range into blocks fitting within available RAM, such as 64 GB, enabling generation of primes over ranges far beyond what a flat array could hold without excessive swapping; each segment typically requires 6.5-8 bytes per sieving prime for storage of offsets and wheel state.[43] Wheel factorization further reduces memory by skipping multiples of small primes, allowing efficient handling of ranges exceeding available RAM on consumer hardware. GPU acceleration via CUDA implementations, like CUDASieve, provides significant speedups for the sieving phase: on an NVIDIA GTX 1080, the device-side sieving completes in 5.65 ms, up to 70x faster than single-threaded CPU sieving for the marking step alone, though total runtime including data transfer remains around 0.16 seconds.[44]
In cryptographic contexts, primality testing dominates the cost of generating large primes. The Miller–Rabin probabilistic test enables rapid verification of 2048-bit candidates: generating a single probable prime takes about 11 seconds on a mid-2010s laptop using Python, while optimized C++ libraries on 2025-era hardware reduce this to milliseconds per candidate, aided by the small number of iterations required (7-12 witnesses suffice for negligible error).[45] For RSA key pairs, two such primes are selected so that the modulus resists factoring, while safe primes (where (p − 1)/2 is also prime) for Diffie–Hellman require additional testing but add only modest overhead.
Software tools like PrimeSieve in C++ excel at bulk generation of small primes, while YAFU supports factoring-assisted prime finding for larger numbers, sieving a comparably large range in approximately 1.8 seconds on an Intel i7-6700K at 4.2 GHz (2018 benchmarks).[46] Optimizations such as multi-threading and SIMD instructions further enhance sieve efficiency; for example, a multithreaded incremental segmented sieve uses all cores to process chunks in parallel, achieving near-linear scaling on 16-core systems while minimizing synchronization overhead.[47] For special cases like Mersenne numbers, the Lucas–Lehmer test offers deterministic verification, and GPU-accelerated versions complete tests far faster than CPU baselines for large exponents.[48] Distributed projects like PrimeGrid employ wheel-based sieves across volunteer networks, routinely processing candidates such as generalized Fermat numbers with millions of digits as of 2025, enabling the discovery of record primes through collective effort.[49]
Challenges in practical performance include cache misses during sieving with large primes, which disrupt locality and inflate runtime by forcing frequent main-memory accesses; optimizations like wheel segmentation and prefetching mitigate this by up to 4x.[50] In Miller–Rabin, witness selection overhead arises from choosing random bases each iteration, but deterministic witness sets (the first 12 primes, up to 37) eliminate randomness for numbers up to about 3.3 × 10^24; for larger, 2048-bit numbers, probabilistic testing with 7-12 random witnesses is standard, balancing speed against a negligible error probability.
| Implementation | Limit | Time (s) | Hardware | Notes |
|---|---|---|---|---|
| PrimeSieve (Eratosthenes) | 10^9 | 0.4 | Single-core i7-920 @ 2.66 GHz | Counts 50M primes[42] |
| Segmented sieve (optimized) | Interval near a large bound | 2.63 | 900 MHz Athlon | Excludes init; modern CPUs faster[43] |
| YAFU sieve | | 1.8 | Intel i7-6700K @ 4.2 GHz | Includes counting (2018 benchmark)[46] |
| CUDASieve (GPU) | | 0.00565 (sieving only) | GTX 1080 | 70x vs. CPU sieving[44] |
| Miller-Rabin (2048-bit prime) | Single candidate | 11 | 2018 laptop (Python) | Probable prime; optimized <1 s on 2025 hardware[45] |
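As a concrete illustration of the fixed-witness variant mentioned above, here is a minimal Python sketch; it mirrors the Miller–Rabin sketch given earlier but with the 12 fixed bases, and the function name is illustrative:

```python
def is_prime_det_mr(n):
    """Miller-Rabin with the fixed witness set {2, 3, ..., 37}; this is deterministic
    for n below roughly 3.3 * 10**24 (no composite there is a strong pseudoprime
    to all 12 bases)."""
    if n < 2:
        return False
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n in witnesses:
        return True
    if any(n % p == 0 for p in witnesses):
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness
    return True

print(is_prime_det_mr(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime, well below the bound
```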
