Series acceleration
from Wikipedia

In mathematics, a series acceleration method is any one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.

Definition

Given an infinite series with a sequence of partial sums

$$s_n = \sum_{k=0}^n a_k$$

having a limit

$$\lim_{n \to \infty} s_n = \ell,$$

an accelerated series is an infinite series with a second sequence of partial sums $s'_n$ which asymptotically converges faster to $\ell$ than the original sequence of partial sums would:

$$\lim_{n \to \infty} \frac{s'_n - \ell}{s_n - \ell} = 0.$$
A series acceleration method is a sequence transformation that transforms the convergent sequences of partial sums of a series into more quickly convergent sequences of partial sums of an accelerated series with the same limit. If a series acceleration method is applied to a divergent series then the proper limit of the series is undefined, but the sequence transformation can still act usefully as an extrapolation method to an antilimit of the series.

The mappings from the original to the transformed series may be linear sequence transformations or non-linear sequence transformations. In general, the non-linear sequence transformations tend to be more powerful.

Overview


Two classical techniques for series acceleration are Euler's transformation of series[1] and Kummer's transformation of series.[2] A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon method given by Peter Wynn in 1956; the Levin u-transform; and the Wilf-Zeilberger-Ekhad method or WZ method.

For alternating series, several powerful techniques, offering convergence rates from $5.828^{-n}$ all the way to $17.93^{-n}$ for a summation of $n$ terms, are described by Cohen et al.[3]
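One of those techniques, a Chebyshev-polynomial scheme of Cohen, Rodriguez Villegas and Zagier, is short enough to sketch here (a Python rendering of that scheme; the function name and test series are our illustration):

```python
from math import sqrt, log

def accelerated_alternating_sum(a, n):
    # Chebyshev-based scheme of Cohen, Rodriguez Villegas and Zagier
    # for sum_{k>=0} (-1)^k a(k); the error decays roughly like 5.828^-n.
    d = (3 + sqrt(8)) ** n
    d = (d + 1 / d) / 2
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a(k)
        b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1))
    return s / d

# ln 2 = sum_{k>=0} (-1)^k / (k+1): a dozen terms already give ~9-10 digits
approx = accelerated_alternating_sum(lambda k: 1.0 / (k + 1), 12)
print(approx, abs(approx - log(2)))
```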

Euler's transform

A basic example of a linear sequence transformation, offering improved convergence, is Euler's transform. It is intended to be applied to an alternating series; it is given by

$$\sum_{n=0}^\infty (-1)^n a_n = \sum_{n=0}^\infty (-1)^n \frac{(\Delta^n a)_0}{2^{n+1}},$$

where $\Delta$ is the forward difference operator, for which one has the formula

$$(\Delta^n a)_0 = \sum_{k=0}^n (-1)^k \binom{n}{k} a_{n-k}.$$
If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional power of two further improves the rate at which the right hand side converges.

A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation.[4]
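A minimal sketch (our illustration, not a reference implementation) that realizes the transform numerically: build the forward-difference table of the terms and sum $(-1)^n (\Delta^n a)_0 / 2^{n+1}$, here for the alternating harmonic series.

```python
from math import log

def euler_transform_sum(a, N):
    """Sum of sum_{n>=0} (-1)^n a(n) via Euler's transform, using the
    first N rows of the forward-difference table of the terms."""
    row = [a(n) for n in range(N)]
    total = 0.0
    for n in range(N):
        total += (-1) ** n * row[0] / 2 ** (n + 1)   # row[0] = (Delta^n a)_0
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return total

# Alternating harmonic series: ln 2 = sum_{n>=0} (-1)^n / (n+1)
est = euler_transform_sum(lambda n: 1 / (n + 1), 20)
print(est, abs(est - log(2)))
```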

Conformal mappings

A series

$$S = \sum_{n=0}^\infty a_n$$

can be written as $f(1)$, where the function $f$ is defined as

$$f(z) = \sum_{n=0}^\infty a_n z^n.$$

The function $f(z)$ can have singularities in the complex plane (branch point singularities, poles or essential singularities), which limit the radius of convergence of the series. If the point $z = 1$ is close to or on the boundary of the disk of convergence, the series for $S$ will converge very slowly. One can then improve the convergence of the series by means of a conformal mapping that moves the singularities such that the point that is mapped to $z = 1$ ends up deeper in the new disk of convergence.

The conformal transform $z = \Phi(w)$ needs to be chosen such that $\Phi(0) = 0$, and one usually chooses a function that has a finite derivative at $w = 0$. One can assume that $\Phi(1) = 1$ without loss of generality, as one can always rescale $w$ to redefine $\Phi$. We then consider the function

$$g(w) = f(\Phi(w)).$$

Since $\Phi(1) = 1$, we have $f(1) = g(1)$. We can obtain the series expansion of $g(w)$ by putting $z = \Phi(w)$ in the series expansion of $f(z)$ because $\Phi(0) = 0$; the first $n$ terms of the series expansion for $f(z)$ will yield the first $n$ terms of the series expansion for $g(w)$ if $\Phi'(0) \neq 0$. Putting $w = 1$ in that series expansion will thus yield a series such that if it converges, it will converge to the same value as the original series.
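The following is a minimal numeric sketch of this procedure (the function and mapping are our illustration): take $f(z) = \ln(1+z)$, so $S = f(1) = \ln 2$ with a singularity at $z = -1$, and $\Phi(w) = w/(2-w)$, which satisfies $\Phi(0) = 0$ and $\Phi(1) = 1$ and sends the preimage of the singularity to infinity. Composing the series and re-summing at $w = 1$ turns $O(1/n)$ convergence into geometric convergence with ratio $1/2$.

```python
from math import log

def compose_coeffs(a, phi, N):
    """Coefficients of g(w) = f(Phi(w)) up to order N, given the
    coefficients a[n] of f(z) and phi[n] of Phi(w) (with phi[0] = 0)."""
    g = [0.0] * (N + 1)
    g[0] = a[0]
    zpow = [1.0] + [0.0] * N          # holds Phi(w)^n, truncated at order N
    for n in range(1, N + 1):
        new = [0.0] * (N + 1)         # zpow * phi, truncated at order N
        for i, c in enumerate(zpow):
            if c:
                for j in range(1, min(N - i, len(phi) - 1) + 1):
                    new[i + j] += c * phi[j]
        zpow = new
        for k in range(N + 1):
            g[k] += a[n] * zpow[k]
    return g

N = 20
a = [0.0] + [(-1) ** (n + 1) / n for n in range(1, N + 1)]   # ln(1+z)
phi = [0.0] + [2.0 ** (-k) for k in range(1, N + 1)]         # w/(2-w) = sum (w/2)^k
g = compose_coeffs(a, phi, N)

original = sum(a[: N + 1])   # partial sum of f at z = 1: error ~ 1/N
mapped = sum(g)              # partial sum of g at w = 1: error ~ 2^-N
print(abs(original - log(2)), abs(mapped - log(2)))
```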

Non-linear sequence transformations


Examples of such nonlinear sequence transformations are Padé approximants, the Shanks transformation, and Levin-type sequence transformations.

Nonlinear sequence transformations in particular often provide powerful numerical methods for the summation of divergent series or asymptotic series, such as those that arise in perturbation theory, and may therefore be used as effective extrapolation methods.

Aitken method

A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method,

$$\mathbb{A} : (s_n) \mapsto (s'_n),$$

defined by

$$s'_n = s_{n+2} - \frac{(s_{n+2} - s_{n+1})^2}{s_{n+2} - 2s_{n+1} + s_n}.$$

This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error.

from Grokipedia
Series acceleration is a collection of numerical methods in mathematics designed to enhance the convergence rate of infinite series, particularly those that converge slowly or conditionally, by transforming them into equivalent series or sequences that approximate the sum more efficiently with fewer terms. These techniques are essential in numerical analysis for evaluating sums where direct partial summation demands excessively many terms to reach a specified precision. The origins of series acceleration trace back to the 18th century, with pioneering work by Leonhard Euler in the 1730s on accelerating slowly convergent series, such as the Basel problem series for $\pi^2/6$, and later developments including his transformation for alternating series in the 1760s, followed by Kummer's transformation in 1837. Key methods include Euler's transformation, which applies forward differences to alternating series like those for $\ln 2$ or $\pi/4$, yielding rapid convergence, as for the Leibniz series for $\pi/4$. Kummer's method leverages a known convergent series to accelerate positive-term series, for example reducing the terms needed for three-digit accuracy in $\sum 1/n^2$ from over 10,000 to 19. More modern approaches encompass Aitken's $\Delta^2$-process, effective for linearly convergent sequences by iteratively removing error terms, and Shanks' transformation, which generalizes it using Hankel determinants for broader applicability. For alternating series, advanced algorithms based on Chebyshev polynomials or moment-matching achieve exponential convergence rates superior to classical Euler-Van Wijngaarden methods, with error bounds like $5.828^{-n}$ or better, enabling high-precision computations up to thousands of digits. Other notable techniques include Levin's and Weniger's transformations, which handle logarithmic convergence via weighted sums and Pochhammer symbols. These methods find applications in evaluating mathematical constants, special functions, and numerical integrals, often integrated into software for reliable approximation. While effective for specific series types, their success depends on the underlying convergence behavior, and no universal accelerator exists for all cases.

Fundamentals

Definition

Series acceleration refers to a collection of mathematical techniques designed to enhance the rate of convergence of the partial sums of an infinite series $\sum_{n=1}^\infty a_n$ to its sum $S$, where the original series converges but does so slowly. These methods typically involve applying a transformation to the sequence of partial sums $s_n = \sum_{k=1}^n a_k$, yielding a new sequence that approaches $S$ more rapidly. The transformation preserves the limit, ensuring the accelerated series still sums to $S$, but achieves this with fewer terms for a given precision. The core objective of series acceleration is to identify a transformation $T$ such that the error in the transformed sequence $t_n = T(s_n)$ diminishes faster than the original error $s_n - S$. Mathematically, this means $\lim_{n \to \infty} \frac{t_n - S}{s_n - S} = 0$, implying that $t_n - S = o(s_n - S)$ as $n \to \infty$. For many slowly convergent series, where the original error behaves like $O(1/n)$, acceleration can improve this to $o(1/n^k)$ for some $k > 1$, significantly reducing computational effort. Unlike regularization or summation methods, which assign finite values to divergent series or extend convergence beyond standard limits, series acceleration is strictly applied to already convergent series, often conditionally convergent or logarithmically slow ones, merely to expedite their convergence without altering the underlying convergence properties. A classic example is the alternating harmonic series $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = \ln 2$, which converges conditionally but requires thousands of terms for modest accuracy due to its $O(1/n)$ error rate.
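To make the defining ratio concrete, here is a small sketch (our illustration, not from the text): averaging consecutive partial sums of the alternating harmonic series is itself a primitive accelerator, and the error ratio $(t_n - S)/(s_n - S)$ visibly tends to zero.

```python
from math import log

# Partial sums s_n of the alternating harmonic series (limit S = ln 2)
S = log(2)
s = []
total = 0.0
for n in range(1, 42):
    total += (-1) ** (n + 1) / n
    s.append(total)

# A simple transformation T: average consecutive partial sums.
t = [(s[i] + s[i + 1]) / 2 for i in range(len(s) - 1)]

# The ratio (t_n - S)/(s_n - S) tends to 0, so T accelerates this series.
for n in (5, 10, 20, 40):
    print(n, (t[n - 1] - S) / (s[n - 1] - S))
```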

Motivation and Applications

Series acceleration techniques are essential in numerical computations where infinite series converge too slowly to be practical, often requiring an impractically large number of terms to achieve desired accuracy, particularly for asymptotic expansions that may diverge after a certain point. These methods transform the original series into a faster-converging one, significantly reducing computational cost while maintaining or improving precision. For instance, in scenarios involving slowly converging alternating series or near-divergent expansions, acceleration not only hastens convergence but also enhances numerical stability by mitigating rounding errors associated with summing numerous terms. Key benefits include the ability to handle series that are conditionally convergent or asymptotically divergent, allowing reliable approximations in regimes where direct summation fails. In quantum mechanics, series acceleration is applied to Rayleigh-Schrödinger perturbation theory, where the perturbation series for energy levels often exhibits slow or erratic convergence due to strong interactions; methods like Padé approximants or nonlinear transformations accelerate this process, enabling accurate calculations for many-body systems. Similarly, in statistical mechanics, virial expansions for equations of state in interacting gases suffer from limited radii of convergence, and resummation techniques extrapolate beyond the perturbative regime to model real gases at higher densities. In numerical analysis, these methods aid in solving integral equations by accelerating series representations of kernels, improving efficiency in iterative solvers for Fredholm or Volterra equations. A representative application is the computation of the exponential integral $E_1(z) = \int_z^\infty \frac{e^{-t}}{t}\,dt$, which arises in heat transfer and radiation problems; its asymptotic series for large $|z|$ diverges, but acceleration via Laguerre polynomial expansions or similar methods yields rapidly converging approximations with high accuracy using few terms. Another example involves Bessel functions, whose asymptotic expansions for large arguments are useful in wave propagation but truncate optimally at finite terms; modified asymptotic expansions refine these for better uniform convergence across parameter ranges.

Historical Context

Early Developments

The recognition of slow convergence in infinite series emerged in the 18th century, particularly in expansions involving inverse trigonometric functions and logarithms, where partial sums required numerous terms for practical accuracy. Mathematicians encountered these issues while developing representations for such functions, as the rate of convergence often diminished near certain points, limiting computational utility. Leonhard Euler made significant early contributions to series acceleration starting in the 1730s, including a 1731 method to accelerate the Basel problem series $\sum 1/n^2 = \pi^2/6$ to six decimal places, and work in 1735 on transformations for alternating series like those for $\ln 2$ and $\pi/4$. These efforts predated his later explorations of divergent series in the 1740s and 1760s, which provided foundational ideas influencing subsequent acceleration methods. In his correspondence with Nicholas Bernoulli from 1742 to 1745, Euler explored ways to assign finite values to divergent series, recognizing their potential utility despite non-convergence. His major paper "De seriebus divergentibus" (written 1754/55, published 1760) further developed these techniques, proposing methods to extract meaningful sums from series that do not converge in the traditional sense, including evaluations related to hypergeometric series. Euler's transformation for alternating series, introduced in the 1730s, improved convergence rates for slowly convergent examples like the alternating harmonic series for $\ln 2$, demonstrating how reparameterization could yield faster approximations and predating formal series acceleration frameworks. In the 19th century, progress advanced through Augustin-Louis Cauchy's rigorous treatment of series convergence and remainder estimates, emphasizing the distinction between convergent and divergent cases. Cauchy, in his early 19th-century works, rejected heuristic summations of divergent series but provided foundational theorems on asymptotic behavior that informed later acceleration strategies. Ernst Kummer contributed a key transformation in 1837 for accelerating positive-term series using a known convergent series. Henri Poincaré extended this in 1886 by formalizing the theory of asymptotic series, introducing concepts for expansions valid in limiting regimes despite potential divergence.

Key Contributors

Leonhard Euler (1707–1783) laid foundational work in series acceleration through his early 1730s methods for the Basel series and alternating transformations, as well as his 1760 paper on divergent series (E247), where he explored summing such series by relating them to convergent ones via continued fractions and other techniques, including hypergeometric aspects. This work established early principles for accelerating slowly convergent series, influencing subsequent approaches to handling asymptotic expansions and divergent expressions. In the early 20th century, Walter B. Ford advanced the field by bridging linear summation methods toward nonlinear techniques in his studies on divergent and convergent infinite series, emphasizing practical applications to asymptotic developments of functions defined by Maclaurin series. Ford's contributions, along with those of contemporaries, helped systematize the theory of summability, providing tools to assign finite values to otherwise divergent expressions and paving the way for more sophisticated acceleration strategies. Daniel Shanks, in the 1950s, developed general nonlinear transformations that extended and unified earlier methods, such as the Aitken $\delta^2$ process, into a broader framework for accelerating both divergent and slowly convergent sequences. His $e_k$ transformations offered a flexible approach to decompose sequences into geometric components, significantly impacting numerical analysis by enabling efficient convergence for a wide class of series. David Levin, during the 1970s and 1980s, introduced the u-transform, a nonlinear method allowing variable acceleration orders to handle sequences with logarithmic or linear convergence rates more effectively. This transformation, based on assuming a model for the remainder, provided a universal tool for practical computations, particularly in alternating and monotonic series. Euler's pioneering transforms inspired later conformal mapping approaches in acceleration, where series are mapped to domains of faster convergence. Similarly, Shanks' work unified Aitken-like methods into a general nonlinear paradigm, facilitating the development of modern techniques.

Linear Methods

Euler Transform

The Euler transform is a linear method for accelerating the convergence of sequences of partial sums, particularly those arising from alternating or hypergeometric series. It constructs a new sequence by applying a weighted average of the original partial sums using alternating binomial coefficients, effectively reducing the dominant error term in the approach to the limit. This transformation is especially useful for series where the terms decrease slowly, such as those with logarithmic convergence rates. The core formula for the k-th accelerated partial sum, starting from the n-th original partial sum $s_n$, is given by

$$e_k = \frac{1}{\binom{2k}{k}} \sum_{m=0}^k (-1)^m \binom{k}{m} s_{n+m}.$$

The discrete sum is more commonly used in practice. The derivation relies on generating functions for the sequence of partial sums and the properties of difference equations. Specifically, if the sequence $s_n$ satisfies a linear difference equation with constant coefficients, the transform diagonalizes the difference operator, allowing the solution to be expressed in terms of accelerated terms that converge faster to the limit. This approach leverages the generating-function identity $\sum s_n x^n = f(x)/(1-x)$, where the Euler weights extract higher-order corrections via finite differences.

A representative example is the application to the alternating series $\sum_{k=1}^\infty \frac{(-1)^{k+1}}{k} = \ln 2$. The partial sums $s_n$ converge linearly with error $O(1/n)$. Applying the Euler transform produces accelerated estimates $e_k$ that exhibit quadratic convergence, with error $O(1/k^2)$, allowing high accuracy with fewer terms; for instance, 20 accelerated terms yield approximately six decimal places of $\ln 2 \approx 0.693147$, compared to only about three digits from 1000 original terms. The method was originally developed by Leonhard Euler in the mid-18th century to improve estimates for constants like $\pi/4$ from the Leibniz series. Despite its effectiveness, the Euler transform has limitations: it performs well for series with known asymptotic error expansions, such as monotone decreasing alternating terms, but may fail to accelerate, or even diverge, for non-alternating or irregularly behaved series without suitable remainder estimates.

The Euler-Maclaurin formula provides a linear method to accelerate the summation of series by approximating the sum $\sum_{k=a}^{b} f(k)$ through an integral plus correction terms involving Bernoulli numbers and derivatives of $f$. This formula links discrete sums to continuous integrals, enabling systematic acceleration for slowly converging series where $f$ is smooth. The explicit form is

$$\sum_{k=a}^{b} f(k) \approx \int_{a}^{b} f(x)\,dx + \frac{f(a) + f(b)}{2} + \sum_{k=1}^{m} \frac{B_{2k}}{(2k)!} \left( f^{(2k-1)}(b) - f^{(2k-1)}(a) \right) + R,$$

where $B_{2k}$ are Bernoulli numbers and $R$ is a remainder term that decreases with higher $m$, facilitating error control in numerical computations. For infinite series, setting $b \to \infty$ and assuming suitable decay of $f$ and its derivatives yields accelerated approximations, particularly useful for functions amenable to integration.

Boole summation extends this linear framework as a variant tailored for numerical integration and series summation, employing forward differences to derive summation rules that enhance convergence for certain classes of functions. Developed by George Boole, it reformulates sums using difference operators analogous to those in the Euler-Maclaurin approach, but optimized for alternating or oscillatory terms via expansions that incorporate Euler numbers. This method proves effective for accelerating finite and infinite series in computational settings, such as quadrature, by providing closed-form corrections based on difference tables, thereby reducing truncation errors without requiring explicit derivatives.

Kummer's method offers another linear acceleration technique, particularly for series expressible in terms of hypergeometric functions, by transforming the general term through comparison with a known convergent series. Introduced by Ernst Kummer in 1837, it applies to specific functions like logarithms or arctangents, where the series is rewritten using the confluent hypergeometric function $M(a, b, z)$ to accelerate convergence for alternating or slowly convergent representations. For instance, it relates the partial sums of a target series to those of a hypergeometric counterpart, yielding a series that converges faster, often by a factor involving binomial coefficients or factorials.

An illustrative application is the acceleration of the series $\zeta(2m) = \sum_{n=1}^{\infty} \frac{1}{n^{2m}}$ for positive integers $m$, where the Euler-Maclaurin formula derives exact closed forms involving Bernoulli numbers. By applying the formula to $f(x) = x^{-2m}$, the integral and endpoint corrections yield

$$\zeta(2m) = (-1)^{m+1} \frac{B_{2m} (2\pi)^{2m}}{2 (2m)!},$$

transforming the slow convergence into a finite expression computable via known Bernoulli values. This not only accelerates numerical evaluation but also reveals deep connections to number theory.

These linear extensions share the advantage of systematic error reduction by incorporating higher-order terms, such as additional Bernoulli corrections or difference orders, which diminish the remainder exponentially for analytic $f$, enabling precise approximations with fewer terms compared to direct summation.
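As a concrete sketch of Kummer's idea on the example mentioned earlier (our code, using the telescoping comparison series $\sum 1/(n(n+1)) = 1$): subtracting it termwise from $\sum 1/n^2$ gives $\zeta(2) = 1 + \sum_{n \ge 1} 1/(n^2(n+1))$, whose terms decay like $1/n^3$ instead of $1/n^2$.

```python
from math import pi

def zeta2_direct(N):
    """Partial sum of sum 1/n^2; the tail error is roughly 1/N."""
    return sum(1.0 / n**2 for n in range(1, N + 1))

def zeta2_kummer(N):
    """Kummer-accelerated form zeta(2) = 1 + sum 1/(n^2 (n+1)),
    obtained because sum 1/(n(n+1)) telescopes to 1."""
    return 1.0 + sum(1.0 / (n**2 * (n + 1)) for n in range(1, N + 1))

exact = pi**2 / 6
for N in (19, 100):
    print(N, abs(zeta2_direct(N) - exact), abs(zeta2_kummer(N) - exact))
```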

Nonlinear Methods

Aitken δ² Process

The Aitken δ² process is a nonlinear method designed to accelerate the convergence of a slowly converging sequence of partial sums by eliminating the leading geometric error term. Introduced by Alexander Aitken in 1926 as an extension of Bernoulli's method for solving algebraic equations, it applies a transformation to three consecutive terms in the sequence to produce an improved estimate of the limit. This process is particularly effective for sequences exhibiting linear convergence, where the error behaves asymptotically like a geometric sequence.

The core formula of the Aitken δ² process uses the forward difference operator $\Delta$, defined by $\Delta s_k = s_{k+1} - s_k$ and $\Delta^2 s_k = \Delta s_{k+1} - \Delta s_k = s_{k+2} - 2s_{k+1} + s_k$. For a sequence $\{s_n\}$, the accelerated term is given by

$$\hat{s}_n = s_n - \frac{(\Delta s_{n-1})^2}{\Delta^2 s_{n-1}},$$

which simplifies to

$$\hat{s}_n = s_n - \frac{(s_n - s_{n-1})^2}{s_{n+1} - 2s_n + s_{n-1}}.$$

This transformation can be applied iteratively to the resulting sequence for further acceleration.

The derivation assumes that the sequence satisfies $s_n = S + a r^n + o(r^n)$, where $S$ is the limit, $|r| < 1$, and $a$ is a constant. Under this model, the errors $\varepsilon_n = s_n - S$ satisfy $\varepsilon_{n+1}/\varepsilon_n \approx r$ and $\varepsilon_{n+2}/\varepsilon_{n+1} \approx r$. Solving the system $\varepsilon_{n+1} = r \varepsilon_n$, $\varepsilon_{n+2} = r \varepsilon_{n+1}$ for $r$ and substituting yields $S \approx \hat{s}_n$, effectively removing the dominant error term and leaving a higher-order remainder. If the error is exactly geometric, the transformation recovers the limit exactly.

A representative example is the geometric series $\sum_{k=0}^\infty x^k = 1/(1-x)$ for $|x| < 1$, with partial sums $s_n = (1 - x^{n+1})/(1-x) = S - x^{n+1}/(1-x)$, where the error is exactly geometric with ratio $x$. Applying the process to the first three partial sums $s_0 = 1$, $s_1 = 1 + x$, $s_2 = 1 + x + x^2$ yields $\hat{s}_2 = 1/(1-x)$ exactly, demonstrating convergence to the true sum in one application of the transformation using three terms.

The method assumes a single dominant geometric error term; it fails or performs poorly for sequences with non-geometric errors, such as logarithmically convergent sequences whose error decays like $1/n$ rather than geometrically, as for the series for $\ln(1+x)$ at $x = 1$. In such cases, extensions like modified δ² processes are needed.
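A minimal implementation sketch (ours), using the equivalent forward form $s'_n = s_{n+2} - (s_{n+2} - s_{n+1})^2/(s_{n+2} - 2s_{n+1} + s_n)$ and the alternating harmonic series as the test case, with one round of iteration as described above:

```python
from math import log

def aitken(s):
    """Aitken delta-squared transform of a list of partial sums."""
    out = []
    for n in range(len(s) - 2):
        denom = s[n + 2] - 2 * s[n + 1] + s[n]
        if denom == 0:                     # transformation undefined here
            break
        out.append(s[n + 2] - (s[n + 2] - s[n + 1]) ** 2 / denom)
    return out

# Partial sums of the alternating harmonic series (limit ln 2)
s = []
total = 0.0
for n in range(1, 12):
    total += (-1) ** (n + 1) / n
    s.append(total)

once = aitken(s)
twice = aitken(once)           # the process can be applied iteratively
print(abs(s[-1] - log(2)), abs(once[-1] - log(2)), abs(twice[-1] - log(2)))
```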

Shanks Transformation

The Shanks transformation, introduced by Daniel Shanks in 1955, is a nonlinear sequence transformation that extends the Aitken δ² process to higher orders, enabling the acceleration of slowly convergent series by eliminating multiple dominant error terms in a systematic table-based manner. It generates transformed values $e_k^{(n)}$ from the partial sums $s_n$ of a series, where $n$ indexes the starting point and $k$ denotes the order of acceleration, assuming the error $e_n = s - s_n$ can be approximated by a linear combination of $k+1$ geometric sequences. This method constructs a triangular table, analogous to finite difference tables but nonlinear, to produce entries that converge faster to the series limit.

The core formula for the Shanks transformation entry $e_k^{(n)}$ can be expressed in closed form, assuming known ratios $r_j$ (roots of the characteristic equation for the error recurrence), as

$$e_k^{(n)} = \frac{\sum_{j=0}^k (-1)^j \binom{k}{j} s_{n+j}}{\sum_{j=0}^k (-1)^j \binom{k}{j} / (1 - r_j)},$$

which weights consecutive terms to cancel the first $k$ error components. However, in practice the $r_j$ are not presupposed but determined implicitly from the sequence itself, and the transformation is computed efficiently via recursive differences in a table, avoiding direct solution of the characteristic equation. Equivalently, $e_k^{(n)}$ is given by a ratio of Hankel determinants:

$$e_k^{(n)} = \frac{\det\left( s_{n+i+j} \right)_{i,j=0}^{k}}{\det\left( \Delta^2 s_{n+i+j-2} \right)_{i,j=0}^{k}},$$

where $\Delta^2$ denotes the second forward difference operator, providing a stable numerical implementation for higher $k$. This determinant representation highlights the method's connection to Padé approximants, but it is typically evaluated recursively using algorithms like Wynn's ε-algorithm for computational efficiency.

A key property is its relation to the Aitken δ² process: the first-order case $e_1^{(n)}$ exactly recovers the Aitken transform $s_n - \frac{(\Delta s_n)^2}{\Delta^2 s_n}$, which achieves quadratic convergence by eliminating a single geometric error term, whereas higher $k$ enables cubic or greater convergence orders. Theoretically, the Shanks transformation yields the exact limit for any sequence whose error satisfies a linear homogeneous recurrence relation of order $k+1$ with constant coefficients, making it optimal for series where the remainder asymptotically follows such a form, such as alternating or hypergeometric series.

As an illustrative example, consider the Leibniz series $\arctan(1) = \pi/4 \approx 0.7853981633974483$, scaled by 4 to give partial sums $s_m = 4 \sum_{j=0}^{m-1} \frac{(-1)^j}{2j+1}$ converging to $\pi$, which converge linearly with error $O(1/m)$. Applying the Shanks transformation to the first 7 partial sums yields approximately 4 correct decimal digits (error $\sim 2 \times 10^{-5}$), while using 25 terms achieves about 18 digits (error $\sim 4 \times 10^{-19}$), demonstrating cubic convergence for $k=2$ by eliminating the leading three error terms and highlighting the method's efficacy for alternating series with multiple logarithmic or inverse-power error components.
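As noted above, higher-order Shanks values are usually computed recursively with Wynn's ε-algorithm rather than via determinants. The sketch below (our illustration) applies it to the Leibniz partial sums; the even columns $\varepsilon_{2k}$ of the table contain the Shanks transforms $e_k$.

```python
from math import pi

def wynn_epsilon(s):
    """Wynn's epsilon algorithm; returns columns eps_0, eps_1, ...
    Even columns eps_{2k} contain the Shanks transforms e_k."""
    cols = [[0.0] * (len(s) + 1), list(s)]   # eps_{-1} and eps_0
    while len(cols[-1]) > 1:
        prev2, prev = cols[-2], cols[-1]
        cols.append([prev2[n + 1] + 1.0 / (prev[n + 1] - prev[n])
                     for n in range(len(prev) - 1)])
    return cols[1:]                          # drop the eps_{-1} column

# Partial sums of the Leibniz series, scaled by 4 (limit pi)
s = []
total = 0.0
for j in range(7):
    total += 4 * (-1) ** j / (2 * j + 1)
    s.append(total)

table = wynn_epsilon(s)
# With an odd number of sums the last column is an even (Shanks) column.
best = table[-1][0]
print(abs(s[-1] - pi), abs(best - pi))   # raw error vs. accelerated error
```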

Mapping-Based Approaches

Conformal Mapping Principle

The conformal mapping principle in series acceleration utilizes conformal transformations of the complex plane to enhance the convergence of power series by altering the summation variable or contour, thereby extending the effective domain of analyticity through continuation. This geometric approach transforms a series with limited convergence radius into one that converges more rapidly in a mapped variable, avoiding proximity to singularities that impede the original expansion. By conformally mapping regions of the plane to reposition singularities farther from the origin or evaluation path, the method systematically improves numerical stability and summation efficiency for functions admitting analytic continuation. Mathematically, consider a power series $f(z) = \sum_{n=0}^{\infty} a_n z^n$, whose radius of convergence is restricted by singularities in the $z$-plane. A conformal mapping $\phi$, which is analytic and one-to-one in a suitable domain, transforms $z$ to $w$ such that the image avoids these singularities, enlarging the convergence radius in the $w$-plane. The transformed series becomes $f(\phi^{-1}(w)) = \sum_{n=0}^{\infty} a_n [\phi^{-1}(w)]^n$, where $\phi^{-1}$ maps a larger disk in $w$ back to the original domain in $z$. This re-expansion leverages the angle-preserving property of conformal maps to preserve the function's value while optimizing the series' asymptotic behavior. The principle originated in the 19th and early 20th centuries within complex analysis for deriving series representations of special functions, such as hypergeometric and elliptic functions, where mappings facilitated analytic continuations beyond natural boundaries. Specific techniques for series acceleration via conformal mappings have been applied in various fields, including perturbative expansions in physics. A key advantage of this method lies in its ability to effectively manage complex singularities, including branch points and essential singularities, by mapping them to the boundary of the convergence disk, which maximizes the decay rate of coefficients and handles non-perturbative effects that linear methods struggle with. In contrast to discrete linear transformations, the continuous geometric nature of conformal mappings provides a robust framework for domains with known singularity structures, ensuring verifiable improvements in convergence without altering the function's identity.

Applications in Series Acceleration

A representative example is the acceleration of perturbative series in quantum chromodynamics (QCD), where conformal mappings of the Borel plane are used to transform the variable and enlarge the domain of convergence, improving the summation of divergent expansions for quantities like the Adler function. These mappings, such as those optimizing the holomorphy domain onto a unit disk, enable better incorporation of nonperturbative effects and faster convergence rates. Overall, these applications often achieve exponential acceleration for meromorphic functions, where the mapping expands the effective domain of convergence, leading to practical improvements in computational efficiency for series summation in complex analysis and numerical methods.

Modern Extensions

Sequence Transformations

Sequence transformations represent advanced nonlinear extensions to earlier methods like the Shanks transformation, enabling adaptive acceleration for a broader class of slowly convergent or divergent series by incorporating variable parameters and recursive structures. These techniques, developed in the late 20th century, allow for higher-order error elimination and have been refined to handle quasi-linear convergence patterns where fixed-order methods fail.

Levin's u-transform, introduced in 1973, provides variable-order acceleration by assuming the remainder term after the partial sum $s_n$ follows a power-law decay form $r_n = \sum_{j=1}^k c_j / (n + \beta)^j$, where $\beta > 0$ is a shift parameter and $k$ denotes the order. The transformed sequence $u_k^{(n)}(\beta)$ of order $k$ is given by

$$u_k^{(n)}(\beta) = \frac{\sum_{j=0}^k (-1)^j \binom{k}{j} \dfrac{(n + \beta + j)^{k-1}\, s_{n+j}}{\omega_{n+j}}}{\sum_{j=0}^k (-1)^j \binom{k}{j} \dfrac{(n + \beta + j)^{k-1}}{\omega_{n+j}}},$$

where the auxiliary sequence is $\omega_m = (m + \beta)(s_m - s_{m-1})$. This structure ensures the transform is exact for model sequences with remainders of this form, making it suitable for series with algebraic convergence rates.

Wynn's $\varepsilon$-algorithm, proposed in 1956, generates accelerated sequences recursively from successive differences of the original terms, directly linking to Padé approximants for rational approximations of generating functions. The algorithm defines a table where even-order entries correspond to Shanks transforms and odd-order entries to auxiliary intermediate quantities:

$$\varepsilon_{-1}^{(n)} = 0, \qquad \varepsilon_0^{(n)} = s_n, \qquad \varepsilon_{k+1}^{(n)} = \varepsilon_{k-1}^{(n+1)} + \frac{1}{\varepsilon_k^{(n+1)} - \varepsilon_k^{(n)}}$$

for $k, n \geq 0$. This recursive generation efficiently computes higher-order accelerations without the explicit determinant evaluations required in the Shanks method.

As an illustrative application, Levin's u-transform accelerates the series expansion derived from the slowly convergent integral $\int_0^\infty e^{-x} x^{-1/2}\,dx = \sqrt{\pi}$.
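A minimal sketch of the u-transform exactly as defined above (the indexing, the customary default $\beta = 1$, and the alternating harmonic test series are our choices):

```python
from math import comb, log

def levin_u(s, n, k, beta=1.0):
    """Levin u-transform u_k^{(n)} from partial sums s[0] = 0, s[1], s[2], ...
    with omega_m = (m + beta) * (s[m] - s[m-1])."""
    num = den = 0.0
    for j in range(k + 1):
        m = n + j
        omega = (m + beta) * (s[m] - s[m - 1])
        w = (-1) ** j * comb(k, j) * (m + beta + j) ** (k - 1) / omega
        num += w * s[m]
        den += w
    return num / den

# Partial sums of the alternating harmonic series (limit ln 2), with s[0] = 0
s = [0.0]
for m in range(1, 15):
    s.append(s[-1] + (-1) ** (m + 1) / m)

# u_8^{(1)} uses s_1 ... s_9; compare against the raw partial sum s_14
print(abs(s[14] - log(2)), abs(levin_u(s, 1, 8) - log(2)))
```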