Trigonometric interpolation
In mathematics, trigonometric interpolation is interpolation with trigonometric polynomials. Interpolation is the process of finding a function which goes through some given data points. For trigonometric interpolation, this function has to be a trigonometric polynomial, that is, a sum of sines and cosines of given periods. This form is especially suited for interpolation of periodic functions.
An important special case is when the given data points are equally spaced, in which case the solution is given by the discrete Fourier transform.
Formulation of the interpolation problem
A trigonometric polynomial of degree K has the form

p(x) = a_0 + \sum_{k=1}^{K} \left( a_k \cos(kx) + b_k \sin(kx) \right). \qquad (1)
This expression contains 2K + 1 coefficients, a_0, a_1, …, a_K, b_1, …, b_K, and we wish to compute those coefficients so that the function passes through N points:

p(x_n) = y_n, \qquad n = 0, 1, \ldots, N - 1.
Since the trigonometric polynomial is periodic with period 2π, the N points can be distributed and ordered in one period as

0 \le x_0 < x_1 < \cdots < x_{N-1} < 2\pi.
(Note that we do not in general require these points to be equally spaced.) The interpolation problem is now to find coefficients such that the trigonometric polynomial p satisfies the interpolation conditions.
Formulation in the complex plane
The problem becomes more natural if we formulate it in the complex plane. We can rewrite the formula for a trigonometric polynomial as

p(x) = \sum_{k=-K}^{K} c_k e^{ikx},

where i is the imaginary unit. If we set z = e^{ix}, then this becomes

q(z) = \sum_{k=-K}^{K} c_k z^k,

with

c_0 = a_0, \qquad c_k = \frac{a_k - i b_k}{2}, \qquad c_{-k} = \frac{a_k + i b_k}{2} \quad \text{for } k = 1, \ldots, K.
This reduces the problem of trigonometric interpolation to that of polynomial interpolation on the unit circle. Existence and uniqueness for trigonometric interpolation now follows immediately from the corresponding results for polynomial interpolation.
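This reduction is easy to check numerically. The following is a minimal sketch (not from the article; the nodes, data values, and NumPy usage are illustrative choices): build the "polynomial in z" system at the nodes z_n = e^{ix_n}, solve for the coefficients, and confirm the interpolation conditions.

```python
import numpy as np

# Sketch: trigonometric interpolation recast as polynomial interpolation
# in z = e^{ix} on the unit circle. Nodes need not be equally spaced.
K = 2
x = np.array([0.3, 1.1, 2.0, 3.7, 5.2])    # N = 2K + 1 distinct nodes in [0, 2*pi)
y = np.array([1.0, -0.5, 2.0, 0.0, 1.5])   # data to interpolate

z = np.exp(1j * x)
powers = np.arange(-K, K + 1)
A = z[:, None] ** powers[None, :]          # Vandermonde-like matrix in powers of z
c = np.linalg.solve(A, y.astype(complex))  # coefficients c_{-K}, ..., c_K

def p(t):
    """Evaluate the trigonometric interpolant at t (scalar or array)."""
    return np.real(np.exp(1j * np.asarray(t))[..., None] ** powers @ c)

print(np.allclose(p(x), y))                # True: interpolation conditions hold
```

Existence and uniqueness show up here as the invertibility of the Vandermonde-like matrix for distinct nodes.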
For more information on formulation of trigonometric interpolating polynomials in the complex plane, see p. 156 of Interpolation using Fourier Polynomials.
Solution of the problem
Under the above conditions, there exists a solution to the problem for any given set of data points {xk, yk} as long as N, the number of data points, is not larger than the number of coefficients in the polynomial, i.e., N ≤ 2K + 1 (a solution may or may not exist if N > 2K + 1 depending upon the particular set of data points). Moreover, the interpolating polynomial is unique if and only if the number of adjustable coefficients is equal to the number of data points, i.e., N = 2K + 1. In the remainder of this article, we will assume this condition to hold true.
Odd number of points
If the number of points N is odd, say N = 2K + 1, applying the Lagrange formula for polynomial interpolation to the polynomial formulation in the complex plane yields that the solution can be written in the form

p(x) = \sum_{k=0}^{2K} y_k\, t_k(x), \qquad (5)

where

t_k(x) = e^{-iKx + iKx_k} \prod_{\substack{m=0 \\ m \ne k}}^{2K} \frac{e^{ix} - e^{ix_m}}{e^{ix_k} - e^{ix_m}}.
The factor e^{-iKx + iKx_k} in this formula compensates for the fact that the complex plane formulation also contains negative powers of e^{ix} and is therefore not a polynomial expression in e^{ix}. The correctness of this expression can easily be verified by observing that t_k(x_m) = \delta_{km} and that t_k(x) is a linear combination of the right powers of e^{ix}. Upon using the identity

e^{ix} - e^{ix_m} = 2i\, e^{i(x + x_m)/2} \sin\!\left(\frac{x - x_m}{2}\right), \qquad (2)

the coefficient t_k(x) can be written in the form

t_k(x) = \prod_{\substack{m=0 \\ m \ne k}}^{2K} \frac{\sin\!\left(\tfrac{1}{2}(x - x_m)\right)}{\sin\!\left(\tfrac{1}{2}(x_k - x_m)\right)}. \qquad (4)
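Formula (4) can be checked directly in code. A short sketch (in Python/NumPy rather than the article's MATLAB; the nodes and data values are illustrative): each basis function t_k is 1 at node k and 0 at the others, so the weighted sum interpolates.

```python
import numpy as np

# Sketch of formula (4): Lagrange-style trigonometric basis as a product
# of sines, for an odd number of (not necessarily equispaced) nodes.
x_nodes = np.array([0.0, 1.0, 2.5, 4.0, 5.5])   # N = 2K + 1 = 5 distinct nodes
y = np.array([2.0, 0.5, -1.0, 3.0, 1.0])        # data to interpolate

def t(k, x):
    """Basis function t_k(x) from formula (4)."""
    prod = 1.0
    for m in range(len(x_nodes)):
        if m != k:
            prod *= np.sin((x - x_nodes[m]) / 2) / np.sin((x_nodes[k] - x_nodes[m]) / 2)
    return prod

def p(x):
    """Interpolant (5): weighted sum of the cardinal basis functions."""
    return sum(y[k] * t(k, x) for k in range(len(x_nodes)))

# t_k(x_m) = delta_{km}, so p passes through every data point:
print(np.allclose([p(xi) for xi in x_nodes], y))   # True
```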
Even number of points
If the number of points N is even, say N = 2K, applying the Lagrange formula for polynomial interpolation to the polynomial formulation in the complex plane yields that the solution can be written in the form

p(x) = \sum_{k=0}^{2K-1} y_k\, t_k(x), \qquad (6)

where

t_k(x) = e^{-iKx + iKx_k}\, \frac{e^{ix} - e^{i\alpha_k}}{e^{ix_k} - e^{i\alpha_k}} \prod_{\substack{m=0 \\ m \ne k}}^{2K-1} \frac{e^{ix} - e^{ix_m}}{e^{ix_k} - e^{ix_m}}. \qquad (3)
Here, the constants \alpha_k can be chosen freely. This is caused by the fact that the interpolating function (1) contains an odd number of unknown constants. A common choice is to require that the highest frequency be of the form a constant times \cos(Kx), i.e. that the \sin(Kx) term vanish, but in general the phase of the highest frequency can be chosen to be some prescribed \varphi_K. Using the identity (2), the basis function (3) can be written in the form

t_k(x) = \frac{\sin\!\left(\tfrac{1}{2}(x - \alpha_k)\right)}{\sin\!\left(\tfrac{1}{2}(x_k - \alpha_k)\right)} \prod_{\substack{m=0 \\ m \ne k}}^{2K-1} \frac{\sin\!\left(\tfrac{1}{2}(x - x_m)\right)}{\sin\!\left(\tfrac{1}{2}(x_k - x_m)\right)},

and the value of \alpha_k is then fixed by the prescribed phase \varphi_K of the highest-frequency term.
Note that care must be taken in order to avoid infinities caused by zeros in the denominators.
Equidistant nodes
Further simplification of the problem is possible if the nodes x_m are equidistant, i.e.

x_m = \frac{2\pi m}{N}, \qquad m = 0, \ldots, N - 1;

see Zygmund for more details.
Odd number of points
Further simplification by using (4) would be an obvious approach, but is clearly rather involved. A much simpler approach is to consider the Dirichlet kernel

D(x, N) = \frac{1}{N} + \frac{2}{N} \sum_{k=1}^{(N-1)/2} \cos(kx) = \frac{\sin\!\left(\tfrac{1}{2}Nx\right)}{N \sin\!\left(\tfrac{1}{2}x\right)},

where N is odd. It can easily be seen that D(x, N) is a linear combination of the right powers of e^{ix} and satisfies

D(x_m, N) = \begin{cases} 1 & \text{for } m = 0, \\ 0 & \text{for } m = 1, \ldots, N - 1. \end{cases}

Since these two properties uniquely define the coefficients t_k(x) in (5), it follows that

t_k(x) = D(x - x_k, N) = \frac{\operatorname{sinc}\!\left(\tfrac{N}{2\pi}(x - x_k)\right)}{\operatorname{sinc}\!\left(\tfrac{1}{2\pi}(x - x_k)\right)}.

Here, the sinc function prevents any singularities and is defined by

\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.
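The cardinal property of the Dirichlet kernel is easy to verify numerically. A small sketch (illustrative, with N = 7 chosen arbitrarily and nodes x_m = 2πm/N):

```python
import numpy as np

# Sketch: the Dirichlet kernel D(x) = sin(Nx/2) / (N sin(x/2)) for odd N
# is 1 at x = 0 and 0 at the other equidistant nodes x_m = 2*pi*m/N.
N = 7                                        # odd number of nodes
x_nodes = 2 * np.pi * np.arange(N) / N

def D(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    s = np.sin(x / 2)
    out = np.ones_like(x)                    # limit value 1 at the 0/0 points
    nz = ~np.isclose(s, 0.0)
    out[nz] = np.sin(N * x[nz] / 2) / (N * s[nz])
    return out

# Cardinal property: 1 at the node itself, 0 at all other nodes.
print(np.allclose(D(x_nodes), np.eye(N)[0]))   # True
```

The interpolant for odd equidistant data is then simply p(x) = Σ_k y_k D(x − x_k), which is what the MATLAB code below implements (after rescaling the period).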
Even number of points
For N even, we define the Dirichlet kernel as

D(x, N) = \frac{1}{N} + \frac{2}{N} \sum_{k=1}^{N/2 - 1} \cos(kx) + \frac{1}{N} \cos\!\left(\frac{N}{2} x\right) = \frac{\sin\!\left(\tfrac{1}{2}Nx\right)}{N \tan\!\left(\tfrac{1}{2}x\right)}.

Again, it can easily be seen that D(x, N) is a linear combination of the right powers of e^{ix}, does not contain the term \sin\!\left(\tfrac{N}{2}x\right) and satisfies

D(x_m, N) = \begin{cases} 1 & \text{for } m = 0, \\ 0 & \text{for } m = 1, \ldots, N - 1. \end{cases}

Using these properties, it follows that the coefficients t_k(x) in (6) are given by

t_k(x) = D(x - x_k, N).

Note that t_k(x) does not contain the term \sin\!\left(\tfrac{N}{2}x\right) either. Finally, note that the function \sin\!\left(\tfrac{N}{2}x\right) vanishes at all the points x_m. Multiples of this term can, therefore, always be added, but it is commonly left out.
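The even-N kernel can be checked the same way; note the tangent in the denominator, matching the even branch of the MATLAB code below. A sketch with an arbitrary even N:

```python
import numpy as np

# Sketch: the modified Dirichlet kernel D(x) = sin(Nx/2) / (N tan(x/2))
# for even N, with equidistant nodes x_m = 2*pi*m/N.
N = 8                                        # even number of nodes
x_nodes = 2 * np.pi * np.arange(N) / N

def D(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)                    # limit value 1 at x = 0 (mod 2*pi)
    nz = ~np.isclose(np.sin(x / 2), 0.0)
    out[nz] = np.sin(N * x[nz] / 2) / (N * np.tan(x[nz] / 2))
    return out

print(np.allclose(D(x_nodes), np.eye(N)[0]))        # cardinal property holds
print(np.allclose(np.sin(N / 2 * x_nodes), 0.0))    # sin(Kx) vanishes at all nodes
```

The second check illustrates why the sin(Kx) term is invisible to the data and is conventionally dropped.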
Implementation
A MATLAB implementation of the above is given by:
function P = triginterp(xi,x,y)
% TRIGINTERP Trigonometric interpolation.
% Input:
% xi evaluation points for the interpolant (vector)
% x equispaced interpolation nodes (vector, length N)
% y interpolation values (vector, length N)
% Output:
% P values of the trigonometric interpolant (vector)
N = length(x);
% Adjust the spacing of the given independent variable.
h = 2/N;
scale = (x(2)-x(1)) / h;
x = x/scale; xi = xi/scale;
% Evaluate interpolant.
P = zeros(size(xi));
for k = 1:N
P = P + y(k)*trigcardinal(xi-x(k),N);
end
function tau = trigcardinal(x,N)
ws = warning('off','MATLAB:divideByZero');
% Form is different for even and odd N.
if rem(N,2)==1 % odd
tau = sin(N*pi*x/2) ./ (N*sin(pi*x/2));
else % even
tau = sin(N*pi*x/2) ./ (N*tan(pi*x/2));
end
warning(ws)
tau(x==0) = 1; % fix value at x=0
Relation with the discrete Fourier transform
The special case in which the points x_n are equally spaced is especially important. In this case, we have

x_n = \frac{2\pi n}{N}, \qquad 0 \le n < N.

The transformation that maps the data points y_n to the coefficients a_k, b_k is obtained from the discrete Fourier transform (DFT) of order N.
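For instance, with NumPy's FFT, whose forward-transform sign convention matches c_k = (1/N) Σ_n y_n e^{−ikx_n}, the coefficients drop out directly; the sample function and N below are arbitrary illustrative choices:

```python
import numpy as np

# Sketch: for equally spaced points x_n = 2*pi*n/N (N odd), the coefficients
# of p(x) = sum_{k=-K}^{K} c_k e^{ikx} are a DFT of the data values.
N, K = 5, 2
x = 2 * np.pi * np.arange(N) / N
y = np.cos(x) + 0.5 * np.sin(2 * x)      # samples of a degree-2 trig polynomial

c = np.fft.fft(y) / N                    # fft bins 0..K hold c_0..c_K,
                                         # bins N-K..N-1 hold c_{-K}..c_{-1}
def p(t):
    k = np.concatenate([np.arange(0, K + 1), np.arange(-K, 0)])
    return np.real(np.sum(c * np.exp(1j * np.outer(np.atleast_1d(t), k)), axis=1))

print(np.allclose(p(x), y))                                   # interpolates the samples
print(np.allclose(p(0.3), np.cos(0.3) + 0.5 * np.sin(0.6)))   # exact reconstruction
```

Because the sampled function is itself a trigonometric polynomial of degree ≤ K, the interpolant reproduces it exactly everywhere, not just at the nodes.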
(Because of the way the problem was formulated above, we have restricted ourselves to odd numbers of points. This is not strictly necessary; for even numbers of points, one includes another cosine term corresponding to the Nyquist frequency.)
The case of the cosine-only interpolation for equally spaced points, corresponding to a trigonometric interpolation when the points have even symmetry, was treated by Alexis Clairaut in 1754. In this case the solution is equivalent to a discrete cosine transform. The sine-only expansion for equally spaced points, corresponding to odd symmetry, was solved by Joseph Louis Lagrange in 1762, for which the solution is a discrete sine transform. The full cosine and sine interpolating polynomial, which gives rise to the DFT, was solved by Carl Friedrich Gauss in unpublished work around 1805, at which point he also derived a fast Fourier transform algorithm to evaluate it rapidly. Clairaut, Lagrange, and Gauss were all concerned with studying the problem of inferring the orbit of planets, asteroids, etc., from a finite set of observation points; since the orbits are periodic, a trigonometric interpolation was a natural choice. See also Heideman et al. (1984).
Applications in numerical computing
Chebfun, a fully integrated software system written in MATLAB for computing with functions, uses trigonometric interpolation and Fourier expansions for computing with periodic functions. Many algorithms related to trigonometric interpolation are readily available in Chebfun.
References
- Kendall E. Atkinson, An Introduction to Numerical Analysis (2nd edition), Section 3.8. John Wiley & Sons, New York, 1988. ISBN 0-471-50023-2.
- M. T. Heideman, D. H. Johnson, and C. S. Burrus, "Gauss and the history of the fast Fourier transform," IEEE ASSP Magazine 1 (4), 14–21 (1984).
- G. B. Wright, M. Javed, H. Montanelli, and L. N. Trefethen, "Extension of Chebfun to periodic functions," SIAM J. Sci. Comput., 37 (2015), C554–C573.
- A. Zygmund, Trigonometric Series, Volume II, Chapter X, Cambridge University Press, 1988.
Trigonometric interpolation
Introduction
Definition and motivation
Trigonometric interpolation is a technique in numerical analysis for approximating a given periodic function by determining a trigonometric polynomial of degree at most n that passes exactly through a set of 2n + 1 (or 2n) specified points.[4] Such a polynomial takes the form

p(x) = a_0 + \sum_{k=1}^{n} \left( a_k \cos(kx) + b_k \sin(kx) \right),

or equivalently p(x) = \sum_{k=-n}^{n} c_k e^{ikx} in complex form, where the coefficients a_k, b_k, or c_k are chosen to satisfy the interpolation conditions p(x_j) = y_j at the points x_j.[2] This approach is particularly suited to functions defined on a periodic interval, such as [0, 2π) or [−π, π), where the basis functions cos(kx) and sin(kx) inherently possess the desired periodicity of period 2π.[5] The primary motivation for trigonometric interpolation arises when dealing with periodic data, as standard polynomial interpolation on a finite interval often introduces artifacts like the Runge phenomenon, where the approximant oscillates wildly near the endpoints due to the non-periodic nature of polynomial basis functions.[2] In contrast, trigonometric interpolation employs a basis that matches the periodicity of the target function, thereby avoiding such boundary issues and ensuring the interpolant remains well behaved over the entire real line by periodic extension.[4] This makes it especially valuable in applications involving signals, Fourier analysis, and periodic phenomena in physics and engineering, where preserving the function's cyclic structure is essential for accurate representation and for avoiding artificial discontinuities.[6] A simple illustration of trigonometric interpolation involves approximating a degree-1 periodic function such as sin(x) at equally spaced points x_j = 2πj/N, j = 0, …, N − 1, on [0, 2π).
Since such a function is itself a trigonometric polynomial of degree 1, the interpolant recovers it exactly for any N ≥ 3, demonstrating the method's ability to faithfully reproduce smooth periodic functions without introducing extraneous oscillations.[2] In basic cases with smooth interpolants, this approach maintains periodicity while sidestepping the Gibbs phenomenon that can plague Fourier series approximations of discontinuous functions.[4]
Historical development
Trigonometric interpolation traces its origins to the early 19th century, closely intertwined with the development of Fourier series for representing periodic functions. In 1822, Joseph Fourier introduced the concept of expanding periodic functions as infinite sums of sines and cosines in his work Théorie analytique de la chaleur, laying the groundwork for trigonometric approximations that interpolate function values at discrete points.[7] The partial sums of these series naturally provide trigonometric polynomials that interpolate periodic data at equidistant points, marking the initial connection between Fourier analysis and interpolation techniques.[8] In the early 19th century, the general problem of trigonometric interpolation had been formalized, with Friedrich Bessel providing the first complete account in 1815, focusing on interpolating periodic data using finite trigonometric sums for astronomical applications.[9] This built on earlier contributions from Leonhard Euler and Joseph-Louis Lagrange in the 18th century, who developed finite trigonometric series for planetary orbit interpolation, though explicit formulas for equidistant cases emerged more systematically in interpolation theory texts around the 1910s, as numerical analysis advanced.[10] In the early 20th century, the field formalized alongside broader numerical methods, with key practical algorithms emerging in the 1930s; notably, Cornelius Lanczos's 1938 paper outlined efficient trigonometric interpolation for empirical and analytical functions, emphasizing convergence and applications in physics like X-ray analysis.[11] A major milestone occurred in 1965 with the publication of the Cooley-Tukey algorithm, which enabled fast computation of the discrete Fourier transform (DFT), directly facilitating efficient evaluation of trigonometric interpolants at equidistant nodes and integrating interpolation with digital signal processing. 
This built on precursors like Carl Friedrich Gauss's 1805 unpublished work on trigonometric interpolation for least-squares orbit fitting, later recognized as an early fast Fourier transform variant.[8] Post-2000 advances have extended trigonometric interpolation to non-equidistant points, leveraging algorithms like the non-equispaced fast Fourier transform (NFFT) for improved flexibility in applications such as image processing and irregular data fitting.
Mathematical Foundations
Trigonometric polynomials
A trigonometric polynomial of degree at most n is a function expressible as

p(x) = a_0 + \sum_{k=1}^{n} \left( a_k \cos(kx) + b_k \sin(kx) \right),

where the coefficients are real or complex numbers.[12] This representation spans the finite-dimensional vector space generated by the basis functions {1, cos(kx), sin(kx) : k = 1, …, n}.[12] Equivalently, in complex form, it can be written as p(x) = \sum_{k=-n}^{n} c_k e^{ikx}, where the coefficients satisfy c_{-k} = \overline{c_k} for real-valued p.[12] Trigonometric polynomials are inherently periodic with period 2π, as each basis function satisfies f(x + 2π) = f(x).[12] The space of all such polynomials of degree at most n forms a vector space of dimension 2n + 1.[12] The basis functions are orthogonal with respect to the inner product over [0, 2π] given by

\langle f, g \rangle = \int_0^{2\pi} f(x)\, \overline{g(x)}\, dx.

Specifically, \int_0^{2\pi} \cos^2(kx)\, dx = \int_0^{2\pi} \sin^2(kx)\, dx = \pi for k ≥ 1, and all cross integrals between distinct basis functions vanish.[13] For any set of 2n + 1 distinct points in [0, 2π) and arbitrary values y_j, there exists a unique trigonometric polynomial p of degree at most n such that p(x_j) = y_j for each j.[12] This uniqueness follows from the dimension of the space matching the number of interpolation conditions and the linear independence of the basis evaluated at distinct points.[12] As an illustrative example, consider interpolating the values 1 at x = 0 and −1 at x = π using a trigonometric polynomial of degree at most 1. One such polynomial is p(x) = cos(x), which satisfies p(0) = 1 and p(π) = −1.[12] Note that uniqueness requires three points for degree 1, so multiple polynomials of degree at most 1 can fit two points.
Comparison with polynomial interpolation
Trigonometric interpolation employs a basis consisting of trigonometric functions, such as sines and cosines, or equivalently complex exponentials e^{ikx} for integer k, in contrast to classical polynomial interpolation, which uses algebraic polynomials formed by powers of the variable x.[2] This fundamental difference in basis makes trigonometric interpolation particularly suited for approximating periodic functions on intervals like [0, 2π), where the inherent periodicity of the basis avoids artificial discontinuities at the boundaries, whereas polynomial interpolation is more general-purpose but can introduce such issues for periodic data.[3] In terms of applicability, trigonometric interpolation at equidistant nodes excels for periodic functions by maintaining stability and avoiding the severe oscillations known as the Runge phenomenon that plague high-degree polynomial interpolation on equispaced points in non-periodic settings.[14] The Runge phenomenon arises in polynomial interpolation due to the clustering of Lebesgue function peaks near the endpoints, leading to exponential error growth, but trigonometric interpolation circumvents this for periodic data by leveraging the uniform distribution of the nodes on the circle.[15] Regarding degree equivalence, a trigonometric polynomial of degree n, spanning a space of dimension 2n + 1, corresponds in complexity to an algebraic polynomial of degree up to 2n, yet it exhibits superior stability when adapted to circular or periodic domains.[3] Specifically, on the unit circle, trigonometric interpolation is equivalent to polynomial interpolation in the complex variable z = e^{ix}, where the trigonometric functions map to Laurent polynomials restricted to the unit circle, providing a direct algebraic correspondence without the instability issues of endpoint-focused polynomial methods.[3]
Problem Formulation
General interpolation problem
The general trigonometric interpolation problem seeks to approximate a periodic function by finding a trigonometric polynomial that exactly matches prescribed values at specified points within one period. Given N distinct points x_j ∈ [0, 2π) for j = 0, …, N − 1, and corresponding function values y_j, the goal is to determine a trigonometric polynomial p of degree at most n such that p(x_j) = y_j for each j, where the number of points satisfies N = 2n + 1 (odd case) or N = 2n (even case).[16][17] In the real-variable formulation, the interpolating polynomial takes the form

p(x) = a_0 + \sum_{k=1}^{n} \left( a_k \cos(kx) + b_k \sin(kx) \right),

which spans a vector space of dimension 2n + 1. The unknown coefficients are obtained by solving the linear system formed by evaluating the polynomial at the interpolation points; this yields a structured matrix analogous to the Vandermonde matrix for polynomial interpolation.[16][18] Existence and uniqueness of the solution are guaranteed when N = 2n + 1, as the associated linear system is square and the matrix is invertible for distinct points due to the linear independence of the trigonometric basis functions. In the even case N = 2n, the system is underdetermined relative to the full space dimension, necessitating a slight modification, such as restricting the polynomial space to dimension 2n by omitting one basis function (e.g., the highest-frequency sine term), to ensure a unique interpolant.[16][17] The interpolant can also be expressed in a Lagrange-like basis as p(x) = \sum_j y_j t_j(x), where each t_j is a trigonometric polynomial satisfying t_j(x_k) = δ_{jk} (Kronecker delta); for general points, these basis functions are constructed via products of shifted Dirichlet kernels or equivalent mechanisms, though explicit forms are more readily available for equidistant nodes.[16][18]
Formulation using complex exponentials
Trigonometric interpolation can be reformulated in the complex domain by expressing the interpolating trigonometric polynomial as a finite sum of complex exponentials. For a set of 2n + 1 distinct points x_j ∈ [0, 2π), j = 0, …, 2n, the interpolant takes the form

p(x) = \sum_{k=-n}^{n} c_k e^{ikx},

where the complex coefficients c_k are chosen to satisfy the interpolation conditions p(x_j) = y_j for a given function.[4] This leads to the system of equations

\sum_{k=-n}^{n} c_k e^{ikx_j} = y_j, \qquad j = 0, \ldots, 2n.

In matrix form, this is A c = y, where c and y are vectors of length 2n + 1, and A is the Vandermonde-like matrix with entries A_{jk} = e^{ikx_j} for row indices j = 0, …, 2n and column indices k = −n, …, n. The matrix is invertible provided the points are distinct, ensuring a unique solution for the coefficients c_k.[4] This complex formulation relates directly to the real trigonometric form via Euler's formula, e^{ikx} = cos(kx) + i sin(kx). Specifically, the coefficients satisfy a_0 = c_0, a_k = c_k + c_{−k}, and b_k = i(c_k − c_{−k}) for k ≥ 1, with the reality of p implying c_{−k} = \overline{c_k}. This mapping preserves the equivalence between the two representations while facilitating algebraic manipulation in the complex plane.[19] The complex exponential basis offers computational advantages, particularly for equidistant nodes, where the system can be solved efficiently using the fast Fourier transform, converting the interpolation into a discrete Fourier analysis problem.[4]
General Solutions
Solutions for odd number of interpolation points
When the number of interpolation points is odd, N = 2n + 1, the trigonometric interpolant is a trigonometric polynomial of degree at most n, expressed in complex exponential form as

p(x) = \sum_{k=-n}^{n} c_k e^{ikx},

where the coefficients are determined to satisfy p(x_j) = y_j for j = 0, …, 2n. For general nodes x_j, the coefficients solve a Vandermonde-like linear system. For equidistant nodes x_j = 2πj/N, where the basis functions exhibit discrete orthogonality, the coefficients take the explicit DFT-like form

c_k = \frac{1}{N} \sum_{j=0}^{N-1} y_j\, e^{-ikx_j}.

This formulation arises from the complex exponential basis introduced in the previous section on problem formulation.[2] To recover the real-valued trigonometric form, the coefficients are obtained from the complex ones assuming p is real, so c_{−k} = \overline{c_k}. Specifically, a_0 = c_0, a_k = 2 Re(c_k), and b_k = −2 Im(c_k) for k = 1, …, n. The uniqueness of this interpolant follows from the dimension of the space of trigonometric polynomials of degree at most n being 2n + 1, matching the number of points, and the linear independence of the functions e^{ikx} at points that are distinct modulo 2π, ensuring the evaluation matrix is invertible.[20]
Solutions for even number of interpolation points
When the number of interpolation points is even, N = 2n, the space of trigonometric polynomials of degree at most n has dimension 2n + 1, exceeding the number of conditions by one and rendering the interpolation problem underdetermined for general nodes. To resolve this and ensure uniqueness, the solution space is restricted to a subspace of dimension 2n, such as trigonometric polynomials of degree at most n − 1 augmented by a single additional term at frequency n, typically the cosine term cos(nx). A common choice is the "balanced" case, setting the sine coefficient at degree n to zero (b_n = 0). For general nodes, coefficients are found by solving the corresponding linear system.[21] In the complex exponential formulation, the interpolant is p(x) = \sum_{k=-n}^{n} c_k e^{ikx}, subject to the constraint c_n = c_{−n} (balanced case) to incorporate a cosine-like highest-frequency term, excluding the pure sine component at degree n. The coefficients are obtained by solving the linear system p(x_j) = y_j for j = 0, …, 2n − 1. For equidistant nodes x_j = 2πj/N, this reduces to the discrete Fourier transform formula c_k = \frac{1}{N}\sum_j y_j e^{-ikx_j} for |k| < n, with special handling for the Nyquist frequency k = n, where the coefficient is real-valued. Note that for equidistant nodes sin(nx) vanishes at all the nodes, so it is naturally excluded from the interpolation space.[21][22] The corresponding real-valued expression is

p(x) = a_0 + \sum_{k=1}^{n-1} \left( a_k \cos(kx) + b_k \sin(kx) \right) + a_n \cos(nx),

where the coefficients for lower degrees are the standard Fourier coefficients, and a_n (with b_n = 0) resolves the additional freedom in the balanced case. This form emphasizes the cosine augmentation for the highest term, promoting stability in the interpolant.[21][22]
Equidistant Nodes
Explicit formulas for odd points
For an odd number of equidistant interpolation points N = 2n + 1, the nodes are given by x_j = 2πj/N for j = 0, …, N − 1. The unique trigonometric polynomial of degree at most n that interpolates given values y_j at these points can be expressed in closed form using the Dirichlet kernel.[23] The kernel-based formula is

p(x) = \sum_{j=0}^{N-1} y_j\, D(x - x_j), \qquad D(x) = \frac{\sin(Nx/2)}{N \sin(x/2)}.

This expression follows from the fact that the Lagrange basis functions for trigonometric interpolation at these equidistant nodes are shifts of the Dirichlet kernel, ensuring D(x_j − x_k) = δ_{jk}, and D is a trigonometric polynomial of the required degree.[23] Equivalently, the interpolant admits an explicit Fourier series representation with coefficients obtained via the discrete Fourier transform:

c_k = \frac{1}{N} \sum_{j=0}^{N-1} y_j\, e^{-ikx_j}, \qquad |k| \le n.

This form directly computes the trigonometric coefficients from the data, leveraging the periodicity and equidistribution of the nodes. To illustrate, a smooth periodic test function can be sampled at N equidistant points on [0, 2π), the DFT coefficients computed by the formula above, and p assembled as the corresponding trigonometric polynomial, demonstrating the method's ability to approximate periodic functions with low-degree terms while respecting the interpolation conditions.[24]
Explicit formulas for even points
For an even number of equidistant interpolation points N = 2n, the nodes are defined as x_j = 2πj/N for j = 0, …, N − 1. The space of trigonometric polynomials that can interpolate arbitrary data at these points consists of functions of the form

p(x) = a_0 + \sum_{k=1}^{n-1} \left( a_k \cos(kx) + b_k \sin(kx) \right) + a_n \cos(nx),

where the term sin(nx) is excluded because it vanishes at all x_j. This space has dimension 2n, matching the number of points and ensuring uniqueness.[25] The coefficients in the Fourier form are computed similarly to the discrete Fourier transform, but with special handling for the Nyquist frequency k = n. For 1 ≤ k ≤ n − 1, the complex coefficients are c_k = \frac{1}{N}\sum_j y_j e^{-ikx_j}, with a_0 = c_0, a_k = 2 Re(c_k), and b_k = −2 Im(c_k). For the Nyquist term, a_n = \frac{1}{N}\sum_j (-1)^j y_j, since e^{-inx_j} = (−1)^j; the corresponding term a_n cos(nx) is real-valued by construction and correctly reproduces the data.[26]
Numerical implementation
The direct method for computing trigonometric interpolants involves formulating the interpolation problem as a linear system A c = y, where A is the matrix with entries from the trigonometric basis functions (e.g., cos(kx_j) and sin(kx_j)) evaluated at the nodes x_j, c contains the coefficients, and y holds the data values. For general nodes, this system is solved using Gaussian elimination, which requires O(N^3) operations for N interpolation points; for equidistant nodes the matrix in the complex exponential basis is unitary up to scaling, with condition number 1.[2][27] For equidistant nodes, an efficient alternative leverages the discrete Fourier transform (DFT) to obtain the coefficients directly from the data. The complex coefficients are given by the DFT of the values y_j, computed as c_k = \frac{1}{N}\sum_j y_j e^{-ikx_j} for each k, and this can be evaluated in O(N log N) time using the fast Fourier transform (FFT).[2] To obtain real trigonometric coefficients, the real and imaginary parts are separated: the constant term is a_0 = c_0, while a_k = 2 Re(c_k) and b_k = −2 Im(c_k) for k ≥ 1, with adjustments for the highest frequency if N is even.[28] Once coefficients are obtained, the interpolant is evaluated as a finite sum of cosines and sines, with the Nyquist cosine term included if N is even. Direct summation works for small N, but for stability and efficiency, especially at evaluation points near the data nodes, the barycentric form of the trigonometric interpolant with precomputed weights is preferred, avoiding ill-conditioned Lagrange bases.[29] In practice, languages like MATLAB and Julia provide built-in FFT functions for direct implementation. For example, in MATLAB, coefficients are computed as z = fft(f)/m;, followed by extraction of real and imaginary parts for a_k and b_k. Similarly, in Julia using the FFTW package, c = fft(y)/N yields the coefficients for equidistant nodes over [0, 2π). Pseudocode for real coefficients via FFT (assuming even m) is:
function real_trig_coeffs(y, m):          # m even
    z = fft(y) / m                        # complex DFT coefficients c_k
    a0 = real(z[0])
    for k = 1 .. m/2 - 1:
        a[k] = 2 * real(z[k])
        b[k] = -2 * imag(z[k])
    aK = real(z[m/2])                     # Nyquist cosine coefficient
    return a0, a, b, aK
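A runnable Python/NumPy version of the pseudocode above (assuming m even and data sampled at the equispaced nodes x_j = 2πj/m; the test polynomial is an arbitrary choice, including a Nyquist-frequency cosine term):

```python
import numpy as np

def real_trig_coeffs(y):
    """Real trigonometric coefficients from equispaced samples (len(y) even)."""
    m = len(y)
    z = np.fft.fft(y) / m               # complex DFT coefficients c_k
    a0 = z[0].real
    a = 2 * z[1:m // 2].real            # a_1 .. a_{m/2 - 1}
    b = -2 * z[1:m // 2].imag           # b_1 .. b_{m/2 - 1}
    aK = z[m // 2].real                 # Nyquist cosine coefficient
    return a0, a, b, aK

# Check on a known trigonometric polynomial sampled at m = 8 equispaced nodes.
m = 8
x = 2 * np.pi * np.arange(m) / m
y = 1.0 + 2 * np.cos(x) - 3 * np.sin(2 * x) + 0.5 * np.cos(4 * x)
a0, a, b, aK = real_trig_coeffs(y)
print(np.isclose(a0, 1.0), np.isclose(a[0], 2.0),
      np.isclose(b[1], -3.0), np.isclose(aK, 0.5))
# True True True True
```

Here a[0] is a_1 and b[1] is b_2, and the Nyquist coefficient aK is not doubled, since cos(nx) appears only once in the even-N interpolant.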
