Chudnovsky algorithm
The Chudnovsky algorithm is a fast method for calculating the digits of π, based on Ramanujan's π formulae. Published by the Chudnovsky brothers in 1988,[1] it was used to calculate π to a billion decimal places.[2]
It was used in the world record calculations of 2.7 trillion digits of π in December 2009,[3] 10 trillion digits in October 2011,[4][5] 22.4 trillion digits in November 2016,[6] 31.4 trillion digits in September 2018–January 2019,[7] 50 trillion digits on January 29, 2020,[8] 62.8 trillion digits on August 14, 2021,[9] 100 trillion digits on March 21, 2022,[10] 105 trillion digits on March 14, 2024,[11] and 202 trillion digits on June 28, 2024.[12] The record was broken again on November 23, 2025, with a computation of 314 trillion digits using the y-cruncher implementation of the algorithm.[13][14]
Algorithm
The algorithm is based on the negated Heegner number $d = -163$, the j-function $j\!\left(\tfrac{1+\sqrt{-163}}{2}\right) = -640320^3$, and on the following rapidly convergent generalized hypergeometric series:[15]

$$\frac{1}{\pi} = 12 \sum_{k=0}^{\infty} \frac{(-1)^k\,(6k)!\,(545140134k + 13591409)}{(3k)!\,(k!)^3\,(640320)^{3k+3/2}}$$
This identity is similar to some of Ramanujan's formulas involving π,[15] and is an example of a Ramanujan–Sato series.
The time complexity of the algorithm is $O\!\left(n (\log n)^3\right)$, where n is the number of digits desired.[16]
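As a quick numerical illustration (not from the original article), the series can be summed directly with Python's mpmath library; the helper name partial_sum below is our own, and the printed error should shrink by roughly fourteen orders of magnitude with each extra term:

from mpmath import mp, mpf, factorial, pi

mp.dps = 60  # working precision, generous for a handful of terms

def partial_sum(terms):
    # Sum the first `terms` terms of the series above (an approximation to 1/pi).
    s = mpf(0)
    for k in range(terms):
        num = (-1)**k * factorial(6*k) * (545140134*k + 13591409)
        den = factorial(3*k) * factorial(k)**3 * mpf(640320)**(3*k + mpf(3)/2)
        s += num / den
    return 12 * s

for n in (1, 2, 3):
    print(n, abs(partial_sum(n) - 1/pi))  # error drops ~14 digits per added term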
Optimizations
The optimization technique used for the world record computations is called binary splitting.[17]
References
[edit]- ^ Chudnovsky, David; Chudnovsky, Gregory (1988), Approximation and complex multiplication according to Ramanujan, Ramanujan revisited: proceedings of the centenary conference
- ^ Warsi, Karl; Dangerfield, Jan; Farndon, John; Griffiths, Johny; Jackson, Tom; Patel, Mukul; Pope, Sue; Parker, Matt (2019). The Math Book: Big Ideas Simply Explained. New York: Dorling Kindersley Limited. p. 65. ISBN 978-1-4654-8024-8.
- ^ Baruah, Nayandeep Deka; Berndt, Bruce C.; Chan, Heng Huat (2009-08-01). "Ramanujan's Series for 1/π: A Survey". American Mathematical Monthly. 116 (7): 567–587. doi:10.4169/193009709X458555.
- ^ Yee, Alexander; Kondo, Shigeru (2011), 10 Trillion Digits of Pi: A Case Study of summing Hypergeometric Series to high precision on Multicore Systems, Technical Report, Computer Science Department, University of Illinois, hdl:2142/28348
- ^ Aron, Jacob (March 14, 2012), "Constants clash on pi day", New Scientist
- ^ "22.4 Trillion Digits of Pi". www.numberworld.org.
- ^ "Google Cloud Topples the Pi Record". www.numberworld.org/.
- ^ "The Pi Record Returns to the Personal Computer". www.numberworld.org/.
- ^ "Pi-Challenge - Weltrekordversuch der FH Graubünden - FH Graubünden". www.fhgr.ch. Retrieved 2021-08-17.
- ^ "Calculating 100 trillion digits of pi on Google Cloud". cloud.google.com. Retrieved 2022-06-10.
- ^ Yee, Alexander J. (2024-03-14). "Limping to a new Pi Record of 105 Trillion Digits". NumberWorld.org. Retrieved 2024-03-16.
- ^ Ranous, Jordan (2024-06-28). "StorageReview Lab Breaks Pi Calculation World Record with Over 202 Trillion Digits". StorageReview.com. Retrieved 2024-07-20.
- ^ "StorageReview Sets New Pi Record: 314 Trillion Digits on a Dell PowerEdge R7725". StorageReview.com. Retrieved 2026-01-02.
- ^ OBrien, Kevin (2025-12-25). "Pi calculation world record shattered at 314 trillion digits with a four-month run on a single server — StorageReview retakes the crown, thanks to storage bandwidth". Tom's Hardware. Retrieved 2026-01-02.
- ^ a b Baruah, Nayandeep Deka; Berndt, Bruce C.; Chan, Heng Huat (2009), "Ramanujan's series for 1/π: a survey", American Mathematical Monthly, 116 (7): 567–587, doi:10.4169/193009709X458555, JSTOR 40391165, MR 2549375
- ^ "y-cruncher - Formulas". www.numberworld.org. Retrieved 2018-02-25.
- ^ Brent, Richard P.; Zimmermann, Paul (2010). Modern Computer Arithmetic. Vol. 18. Cambridge University Press. doi:10.1017/CBO9780511921698. ISBN 978-0-511-92169-8.
History and Development
Origins in Ramanujan's Work
The foundations of the Chudnovsky algorithm trace back to the groundbreaking work of Srinivasa Ramanujan in the early 20th century, particularly his explorations of modular forms and elliptic integrals for approximating π. In 1914, Ramanujan published a seminal paper detailing how modular equations of various degrees could yield highly efficient series expansions for 1/π, derived from the complete elliptic integral of the first kind and transformations under the modular group. These approximations leveraged the interplay between elliptic functions and hypergeometric series, providing convergence rates far superior to contemporary methods like those based on arctangents.[4]

Central to Ramanujan's approach were class invariants, algebraic numbers constructed from theta functions associated with singular moduli, which served as precursors to fast-converging π series. For imaginary quadratic fields with discriminant d, particularly Heegner numbers such as d = -163—where the class number is 1—these invariants enable extraordinarily accurate approximations, as the j-invariant at the corresponding point yields values extremely close to integers: $e^{\pi\sqrt{163}}$ differs from the integer $640320^3 + 744$ by less than $10^{-12}$. Ramanujan's insights into these structures, drawn from his notebooks and the 1914 paper, highlighted how such numbers amplify the precision of elliptic integral evaluations for π, foreshadowing later algorithmic accelerations.[5]

Ramanujan also developed specific formulae for 1/π incorporating continued fractions and theta functions, such as those linking the Rogers-Ramanujan continued fraction to modular eta functions and yielding hypergeometric representations. These expressions, often involving products of theta series, allowed for compact, rapidly convergent sums that approximated π to many decimal places with few terms—for instance, one such series converges at a rate of about 8 decimal digits per term. His work in the 1910s, though initially unpublished in full detail, laid the theoretical groundwork for these techniques.

Ramanujan's π series remained largely unproven until their rediscovery and rigorous validation in the 1980s by mathematicians including Jonathan and Peter Borwein, who provided analytic proofs using elliptic function theory and modular forms. This revival in the late 1980s spurred adaptations, such as those by the Chudnovsky brothers, who built upon these foundations to engineer practical computational algorithms.

Chudnovsky Brothers' Formulation
David Volfovich Chudnovsky (born January 22, 1947) and Gregory Volfovich Chudnovsky (born April 17, 1952) are Soviet-born American mathematicians who emigrated to the United States in the 1970s and established their careers at Columbia University as research associates in the Department of Mathematics.[6] Both brothers specialized in number theory and algebraic geometry, with Gregory receiving a MacArthur Fellowship in 1981 for his contributions to these fields, often in collaboration with David.[7]

In 1988, the Chudnovsky brothers published their formulation of an efficient series for computing π, derived from Srinivasa Ramanujan's foundational ideas on modular forms, incorporating the j-invariant and principles of complex multiplication.[8] This work appeared in the proceedings of the Ramanujan Centenary Conference, emphasizing approximations rooted in elliptic integrals and class invariants to achieve rapid convergence.[9] Motivated by the computational challenges of the 1980s, when supercomputers like Japan's NEC SX series were pushing records to hundreds of millions of digits, the brothers sought a more efficient algorithm to surpass these limits without relying on institutional resources.[10] Their approach addressed the era's hardware constraints by prioritizing series that maximized digits per computational step, enabling personal-scale high-precision calculations.

The brothers' initial application of their formulation came in 1989, when they computed π to 1,011,196,691 decimal places—over a billion digits—using the Cray 2 and IBM 3090 supercomputers at IBM's Thomas J. Watson Research Center, funded through grants including from the National Science Foundation.[10] This achievement marked a world record at the time and demonstrated the practical power of their algorithm amid the decade's competitive race for π precision.[11]

Mathematical Foundation
Hypergeometric Series Basis
The generalized hypergeometric series provides a foundational framework for many special functions in mathematics, extending the binomial series to more parameters. It is defined as

$${}_pF_q(a_1,\ldots,a_p;\, b_1,\ldots,b_q;\, z) = \sum_{n=0}^{\infty} \frac{(a_1)_n \cdots (a_p)_n}{(b_1)_n \cdots (b_q)_n} \frac{z^n}{n!},$$

where the Pochhammer symbol, or rising factorial, is given by $(a)_n = a(a+1)\cdots(a+n-1)$ for positive integer $n$, with $(a)_0 = 1$.[12] This series converges absolutely for $|z| < 1$ (in the balanced case $p = q + 1$) and can be analytically continued elsewhere, making it versatile for representing solutions to differential equations and integrals.[12]

In computations involving π, hypergeometric series arise through connections to modular forms and elliptic integrals, where the complete elliptic integral of the first kind, $K(k) = \tfrac{\pi}{2}\,{}_2F_1\!\left(\tfrac12, \tfrac12; 1; k^2\right)$, links the series directly to periods of elliptic curves, which are encoded by modular forms.[13] These relations stem from the transformation properties of modular functions, such as the elliptic modulus $k$, which parameterize families of hypergeometric evaluations yielding $1/\pi$.

Heegner numbers, which are square-free positive integers $d$ such that the class number of the imaginary quadratic field $\mathbb{Q}(\sqrt{-d})$ is 1, play a crucial role in selecting parameters for hypergeometric series that accelerate convergence in $1/\pi$ formulas.[14] Specifically, these numbers correspond to discriminants where the minimal class number ensures the highest order of vanishing in associated modular forms, leading to series with terms decreasing faster than exponential rates tied to larger class numbers.[15]

Hypergeometric series are particularly efficient for high-precision arithmetic in π computations due to their rapid convergence when optimized via such number-theoretic parameters, allowing fewer terms to achieve arbitrary digit precision with recursive term evaluation that minimizes overhead in multiple-precision operations.[16] This structure supports scalable summation without excessive intermediate swell, outperforming slower-converging alternatives like arctangent series for large-scale calculations.
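As an illustrative check of the elliptic-integral connection above (a sketch assuming mpmath's conventions, where ellipk takes the parameter m = k²), the identity K(m) = (π/2) · ₂F₁(1/2, 1/2; 1; m) can be verified numerically:

from mpmath import mp, mpf, pi, hyp2f1, ellipk

mp.dps = 30
m = mpf(1) / 3  # arbitrary parameter with |m| < 1
print(ellipk(m))                                  # K(m) computed directly
print(pi / 2 * hyp2f1(mpf(1)/2, mpf(1)/2, 1, m))  # (pi/2) * 2F1(1/2, 1/2; 1; m)

Both lines print the same value to the working precision.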
The Chudnovsky Formula
The Chudnovsky formula provides a rapidly convergent infinite series for the reciprocal of π, derived from the theory of modular functions and elliptic integrals. It expresses $1/\pi$ as:

$$\frac{1}{\pi} = 12 \sum_{k=0}^{\infty} \frac{(-1)^k\,(6k)!\,(545140134k + 13591409)}{(3k)!\,(k!)^3\,(640320)^{3k+3/2}}$$

This series originates from evaluating the j-function at a specific complex argument τ associated with the quadratic field of discriminant -163, which has class number 1, linking directly to Ramanujan's work on class number 1 invariants and their connections to nearly integer values of $e^{\pi\sqrt{d}}$ for Heegner discriminants $d$.[17]

The integer constants 545140134 and 13591409 arise from solving a modular equation of degree 17 that relates the singular moduli for the elliptic curve with complex multiplication by the ring of integers in ℚ(√-163). Meanwhile, the base 640320 is chosen such that $(640320)^3 = -j(\tau)$, where $\tau = (1 + \sqrt{-163})/2$ and $j(\tau)$ is the value of the modular j-invariant at this point, with the equivalent formulation using the constant $426880\sqrt{10005}$ satisfying $640320^{3/2}/12 = 426880\sqrt{10005}$ exactly, ensuring the series aligns with the algebraic structure of the period relations.[17]

The asymptotic convergence of the series is exceptionally fast, with each additional term contributing approximately 14.1816474627 digits of precision to the value of π, owing to the $(640320)^{-3k}$ decay of the terms. This rate stems from the hypergeometric nature of the series, where the ratio of consecutive terms approaches $1728/(640320)^3$ in magnitude.[17]
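The algebraic identities quoted above are easy to sanity-check with exact integer arithmetic; the short snippet below (added for illustration) confirms the 426880√10005 factorization and the quoted digits-per-term rate:

from math import log10

assert 64 * 10005 == 640320             # so sqrt(640320) = 8*sqrt(10005)
assert 640320**3 == 262537412640768000  # equals -j((1 + sqrt(-163))/2)
assert 640320 * 8 == 426880 * 12        # so 640320^(3/2)/12 = 426880*sqrt(10005)
print(log10(640320**3 / 1728))          # digits per term: 14.1816474627...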
Algorithm Mechanics
Iterative Computation Process
The iterative computation process for the Chudnovsky algorithm begins by estimating the number of terms required to achieve the desired precision in decimal digits of π. Specifically, $N = \lceil n/14.1816 \rceil$, where $n$ is the number of desired digits, as each term in the series contributes approximately 14.18 decimal digits of accuracy due to the rapid convergence rate determined by the modulus $640320^3$.[18][19]

Next, initialize a sum $S = 0$ and iterate over $k = 0$ to $N - 1$. For each $k$, compute the $k$-th term using arbitrary-precision integer arithmetic to handle the large numbers involved: the numerator is $(-1)^k\,(6k)!\,(545140134k + 13591409)$, and the denominator is $(3k)!\,(k!)^3\,(640320)^{3k}$. Add the term to $S$. Factorials are computed iteratively to avoid redundant calculations, with each subsequent factorial built from the previous one (e.g., $(6(k+1))! = (6k)! \cdot (6k+1)(6k+2)\cdots(6k+6)$), and powers of 640320 are accumulated multiplicatively.[20][2]

The constant factor involving $\sqrt{10005}$ (equivalent to $640320^{3/2}/12 = 426880\sqrt{10005}$) is handled separately to maintain exact rational arithmetic in the sum. This term is factored out, so the partial sum approximates the series without the square root; the final expression for π is then $\pi = 426880\sqrt{10005}/S$, or equivalently $\pi = 640320^{3/2}/(12S)$. In practice, $\sqrt{10005}$ is precomputed to sufficient precision using high-precision floating-point methods or approximated via integer square root algorithms, and the full constant is incorporated at the end. For record computations requiring extreme precision, modular arithmetic is employed to verify the square root integration without floating-point errors.[20][1]

After summing to $N$ terms, compute π by inverting the result: first obtain $1/\pi = 12S/640320^{3/2}$, then take the reciprocal. Truncation after $N$ terms guarantees the desired precision because the series is alternating and decreasing, with the error bounded by the magnitude of the first omitted term, which is less than $10^{-n}$ for the chosen $N$. This ensures all digits are correct, as subsequent terms contribute negligibly.[20][2]
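The following is a minimal sketch of this process in Python, keeping the partial sum in exact integer arithmetic (scaled by a power of ten) and incorporating the √10005 constant only at the end, as described; the function name pi_chudnovsky and the decimal-scaling scheme are illustrative choices rather than a prescribed implementation:

from mpmath import mp, mpf, sqrt

def pi_chudnovsky(n_digits):
    N = n_digits // 14 + 2   # terms needed at ~14.18 digits per term
    prec = n_digits + 20     # guard digits
    scale = 10**prec         # partial sum kept as an integer times 10^-prec
    # Running integer pieces of t_k = (-1)^k (6k)! (13591409 + 545140134 k)
    #                                / ((3k)! (k!)^3 640320^(3k))
    fact6k, fact3k, factk, power = 1, 1, 1, 1
    total = 0
    for k in range(N):
        num = fact6k * (13591409 + 545140134 * k)
        den = fact3k * factk**3 * power
        term = (num * scale) // den          # truncation error < 1 unit/term
        total += -term if k % 2 else term
        # Build the next factorials and power incrementally.
        for i in range(6*k + 1, 6*k + 7):
            fact6k *= i
        for i in range(3*k + 1, 3*k + 4):
            fact3k *= i
        factk *= k + 1
        power *= 640320**3
    mp.dps = prec
    S = mpf(total) / scale                   # the partial sum of the series
    return 426880 * sqrt(10005) / S          # pi = 640320^(3/2) / (12 S)

print(pi_chudnovsky(50))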
Precision and Convergence
The Chudnovsky algorithm demonstrates exceptional convergence properties, yielding approximately 14.18 decimal digits of π per term in its hypergeometric series summation. This rate arises from the dominant exponential decay factor in the general term: successive terms shrink in magnitude by a factor of roughly $640320^3/1728 \approx 1.5 \times 10^{14}$, ensuring that each additional term contributes a fixed, substantial increase in precision independent of the total number of digits computed.[21][22]

In terms of computational complexity, the algorithm achieves a time complexity of $O\!\left(n (\log n)^3\right)$ for producing $n$ digits of π, when employing binary splitting to evaluate the series and fast Fourier transform-based multiplication for handling large integers. This bound reflects the cost of recursively splitting the sum into subseries and performing multiplications on $O(n)$-bit numbers, which dominate the overall runtime. The space complexity is linear, $O(n)$, primarily due to the storage requirements for partial sums and intermediate values at full precision.[23]

Compared to Machin-like formulas relying on arctangent series, which often demand millions of terms for high-precision results owing to their slower convergence (typically yielding fewer than 2 digits per term in the primary series), the Chudnovsky approach requires significantly fewer iterations—on the order of $n/14$ terms—making it vastly more efficient for large $n$. This advantage is particularly pronounced in the iterative summation process, where the rapid decay minimizes the number of operations needed.[24]
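For a rough sense of the iteration counts involved, the comparison below (an illustration with hypothetical helper names) contrasts the ~14.18 digits per term of the Chudnovsky series with the roughly 1.4 digits per term of the arctan(1/5) series dominating Machin's formula:

from math import ceil, log10

def chudnovsky_terms(n_digits):
    return ceil(n_digits / 14.181647)        # ~14.18 digits gained per term

def machin_arctan5_terms(n_digits):
    return ceil(n_digits / (2 * log10(5)))   # ~1.4 digits gained per term

for n in (10**6, 10**9):
    print(n, chudnovsky_terms(n), machin_arctan5_terms(n))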
Implementations and Enhancements
Binary Splitting Optimization
Binary splitting is a divide-and-conquer recursion technique designed to efficiently evaluate the partial sums of hypergeometric series, such as the one underlying the Chudnovsky algorithm, by representing them as ratios of large integers rather than using floating-point arithmetic. This method avoids the accumulation of rounding errors and minimizes the growth of intermediate values, making it suitable for high-precision computations. In the context of the Chudnovsky series, it transforms the summation into a tree of recursive subcomputations, where each node combines results from child intervals.[25]

The core of binary splitting involves computing a partial sum $S(a,b) = \sum_{k=a}^{b-1} t_k$, where $t_k$ denotes the $k$-th term of the series, as $S(a,b) = T(a,b)/Q(a,b)$, with $T(a,b)$ and $Q(a,b)$ being multi-precision integers. To evaluate this, select a midpoint $m = \lfloor (a+b)/2 \rfloor$, and recursively compute the left and right sub-sums $S(a,m)$ and $S(m,b)$. The combined values follow the recurrences:

$$P(a,b) = P(a,m)\,P(m,b), \qquad Q(a,b) = Q(a,m)\,Q(m,b), \qquad T(a,b) = T(a,m)\,Q(m,b) + P(a,m)\,T(m,b),$$

where $P(a,b)$ tracks the running product of the term-ratio numerators. For the base case when $b - a = 1$, set $T$ to the numerator of $t_a$ and $Q$ to its denominator, both scaled appropriately to maintain integer arithmetic. These operations are performed depth-first or in a balanced tree fashion until the full sum from $0$ to $N$ is obtained.[25]

This approach offers significant computational advantages over naive term-by-term summation, which requires $O(N)$ full-precision multiplications for $N$ terms due to repeated scaling of accumulated sums. Binary splitting reduces the cost by exploiting the recursive structure to keep most multiplications between integers of comparable, mostly small size, with overall bit complexity $O\!\left(M(n) (\log n)^2\right)$, where $n$ is the precision in bits and $M(n)$ is the cost of multiplying two $n$-bit numbers (typically using fast algorithms like Schönhage–Strassen). Additionally, the method supports natural parallelism, as left and right subtrees can be evaluated concurrently on multiprocessor systems.[25][26]

In the Chudnovsky algorithm, binary splitting excels at managing the complex factorial components in each term, such as $(6k)!$, $(3k)!$, and $(k!)^3$, along with powers like $640320^{3k}$. These are computed using a multiplicative variant of the recurrence—$P(a,b) = P(a,m)\,P(m,b)$, and analogous for denominators—with base cases for single factors, enabling efficient evaluation of the products without computing enormous intermediate factorials in a linear pass. This integration keeps the overall summation scalable for the large $N$ (often millions of terms) needed for record-breaking precision in π.[25]
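The recurrences translate directly into a compact recursive implementation. In the sketch below (names are illustrative), the term-ratio factors p(k) = -(6k - 5)(2k - 1)(6k - 1) and q(k) = k³ · 640320³/24 follow from the standard simplification of the ratio of consecutive Chudnovsky terms:

from mpmath import mp, mpf, sqrt

def bs(a, b):
    # Return (P, Q, T) for the half-open range of terms [a, b),
    # so that the sum of terms a..b-1 equals T/Q (relative to term a's scale).
    if b - a == 1:
        if a == 0:
            P = Q = 1
        else:
            P = -(6*a - 5) * (2*a - 1) * (6*a - 1)
            Q = a**3 * (640320**3 // 24)
        T = P * (13591409 + 545140134 * a)
        return P, Q, T
    m = (a + b) // 2
    Pl, Ql, Tl = bs(a, m)
    Pr, Qr, Tr = bs(m, b)
    return Pl * Pr, Ql * Qr, Tl * Qr + Pl * Tr

def pi_binary_splitting(n_digits):
    N = n_digits // 14 + 2
    P, Q, T = bs(0, N)        # series sum = T / Q
    mp.dps = n_digits + 20
    return 426880 * sqrt(10005) * Q / T

print(pi_binary_splitting(100))

Because each merge multiplies integers of comparable size, the expensive multiplications are pushed toward the top of the tree, which is what enables the quasi-linear bit complexity cited above.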
Software and Hardware Applications
The Chudnovsky algorithm has been implemented in various software tools optimized for high-precision computations of π, with y-cruncher standing out as a primary multi-threaded program developed by Alexander J. Yee in the 2010s. This software leverages custom arbitrary-precision arithmetic routines and binary splitting techniques to achieve scalable performance across multiple cores, enabling computations up to trillions of digits.[27][28]

Other implementations utilize established libraries for arbitrary-precision arithmetic, such as the GNU Multiple Precision Arithmetic Library (GMP), which provides efficient handling of large integers and rationals in C-based programs. For instance, custom C++ implementations often integrate GMP to compute π via the Chudnovsky series, balancing speed and precision for educational or research purposes. In Python, the mpmath library facilitates straightforward implementations by supporting arbitrary-precision floating-point operations, allowing users to evaluate the algorithm's hypergeometric terms with configurable digit depths.[21]

An example implementation using mpmath is the following function, which approximates π to the specified number of digits:
from mpmath import mp, mpf, sqrt

def chudnovsky_pi(digits):
    mp.dps = digits + 20          # working precision with guard digits
    C = 426880 * sqrt(10005)      # constant factor 640320^(3/2) / 12
    K = mpf(6)                    # K = 12k - 6 for the next term
    M = mpf(1)                    # M_k = (6k)! / ((3k)! (k!)^3)
    X = mpf(1)                    # X_k = (-640320^3)^k
    L = mpf(13591409)             # L_k = 13591409 + 545140134 k
    S = L                         # start with the k = 0 term of the series
    for k in range(1, digits // 14 + 2):
        M = M * (K**3 - 16*K) / k**3
        L += 545140134
        X *= -262537412640768000
        S += M * L / X
        K += 12
    return C / S
Calling chudnovsky_pi(1000) computes an approximation of π to 1,000 decimal places. The function employs an iterative summation process, updating the variables M, L, X, and K in each loop iteration to evaluate the series terms efficiently, achieving approximately 14 digits of precision per term.[29] MATLAB and Simulink also offer support through symbolic math toolboxes, where the algorithm can be scripted for iterative summation and visualized in simulations.[2]
On the hardware side, early applications by the Chudnovsky brothers in the early 1990s involved a custom-built supercomputer named m zero, assembled from commercial off-the-shelf components to perform parallel arithmetic operations for π calculations exceeding one billion digits. Modern deployments favor high-memory CPU configurations in cloud environments, such as Google Cloud's N2-highmem virtual machines with up to 128 vCPUs and 864 GB of RAM per instance, which in 2022 supported the computation of π to 100 trillion digits using y-cruncher.[30][31] These setups prioritize CPU parallelism over GPUs due to the algorithm's reliance on sequential big-integer operations, though hybrid CPU-GPU explorations are emerging for preprocessing tasks.
A key challenge in these applications is memory management for intermediate results, which can reach terabyte scales for trillion-digit computations; records in 2024 and 2025 addressed this by employing high-capacity SSD arrays for temporary storage and data spilling, reducing RAM bottlenecks during binary splitting evaluations—for example, the April 2025 computation of 300 trillion digits utilized a KIOXIA NVMe SSD cluster with y-cruncher.[32][33][34]
