Trigonometric interpolation

from Wikipedia

In mathematics, trigonometric interpolation is interpolation with trigonometric polynomials. Interpolation is the process of finding a function which goes through some given data points. For trigonometric interpolation, this function has to be a trigonometric polynomial, that is, a sum of sines and cosines of given periods. This form is especially suited for interpolation of periodic functions.

An important special case is when the given data points are equally spaced, in which case the solution is given by the discrete Fourier transform.

Formulation of the interpolation problem

A trigonometric polynomial of degree K has the form

p(x) = a_0 + \sum_{k=1}^{K} a_k \cos(kx) + \sum_{k=1}^{K} b_k \sin(kx).    (1)

This expression contains 2K + 1 coefficients, a_0, a_1, …, a_K, b_1, …, b_K, and we wish to compute those coefficients so that the function passes through N points:

p(x_n) = y_n, \qquad n = 0, \dots, N - 1.

Since the trigonometric polynomial p is periodic with period 2π, the N points can be distributed and ordered in one period as

0 \le x_0 < x_1 < \dots < x_{N-1} < 2\pi.

(Note that we do not in general require these points to be equally spaced.) The interpolation problem is now to find coefficients such that the trigonometric polynomial p satisfies the interpolation conditions.

Formulation in the complex plane

The problem becomes more natural if we formulate it in the complex plane. We can rewrite the formula for a trigonometric polynomial as

p(x) = \sum_{k=-K}^{K} c_k e^{ikx},

where i is the imaginary unit. If we set z = e^{ix}, then this becomes

q(z) = \sum_{k=-K}^{K} c_k z^k,

with

c_k = \begin{cases} (a_k - i b_k)/2 & 1 \le k \le K, \\ a_0 & k = 0, \\ (a_{-k} + i b_{-k})/2 & -K \le k \le -1. \end{cases}

This reduces the problem of trigonometric interpolation to that of polynomial interpolation on the unit circle, since z^K q(z) is an ordinary polynomial of degree 2K in z. Existence and uniqueness for trigonometric interpolation now follow immediately from the corresponding results for polynomial interpolation.

For more information on formulation of trigonometric interpolating polynomials in the complex plane, see p. 156 of Interpolation using Fourier Polynomials.
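This reduction can be made concrete in a few lines of code. The following Python sketch (the degree K, nodes, and data are invented for the example) solves for the coefficients c_k by polynomial interpolation in z = e^{ix}:

```python
import numpy as np

# Sketch (illustrative, not from the article): trigonometric interpolation
# as polynomial interpolation on the unit circle.  The nodes need not be
# equally spaced; they only need to be distinct in one period.
K = 2
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2 * np.pi, 2 * K + 1))  # 2K+1 distinct nodes
y = np.cos(2 * x) + 0.5 * np.sin(x)                  # data from a degree-2 trig poly

ks = np.arange(-K, K + 1)
A = np.exp(1j * np.outer(x, ks))      # matrix of powers of z = e^{ix}
c = np.linalg.solve(A, y)             # coefficients c_{-K}, ..., c_K

def p(t):
    """Evaluate the trigonometric interpolant at points t."""
    return (np.exp(1j * np.outer(np.atleast_1d(t), ks)) @ c).real

assert np.allclose(p(x), y)           # interpolation conditions hold
```

Because the data here come from a trigonometric polynomial of degree K, uniqueness implies the interpolant reproduces that function everywhere, not just at the nodes.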

Solution of the problem

Under the above conditions, there exists a solution to the problem for any given set of data points {xk, yk} as long as N, the number of data points, is not larger than the number of coefficients in the polynomial, i.e., N ≤ 2K+1 (a solution may or may not exist if N>2K+1 depending upon the particular set of data points). Moreover, the interpolating polynomial is unique if and only if the number of adjustable coefficients is equal to the number of data points, i.e., N = 2K + 1. In the remainder of this article, we will assume this condition to hold true.
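The counting argument can be checked directly: for N = 2K + 1 distinct nodes in one period, the interpolation conditions form a square, nonsingular linear system in the 2K + 1 coefficients. A hedged Python sketch (degree, nodes, and data invented for the example):

```python
import numpy as np

# Sketch: with N = 2K+1 distinct nodes in one period, the conditions
# p(x_n) = y_n form a square linear system in the 2K+1 coefficients
# a_0, a_1..a_K, b_1..b_K, and that system has a unique solution.
K = 3
N = 2 * K + 1
x = np.sort(np.random.default_rng(1).uniform(0.0, 2 * np.pi, N))
ks = np.arange(1, K + 1)
M = np.hstack([np.ones((N, 1)),          # column for a_0
               np.cos(np.outer(x, ks)),  # columns for a_1..a_K
               np.sin(np.outer(x, ks))]) # columns for b_1..b_K
assert np.linalg.matrix_rank(M) == N     # square system is nonsingular

y = np.cos(x) + 0.2                      # arbitrary sample data
coeffs = np.linalg.solve(M, y)
assert np.allclose(M @ coeffs, y)        # interpolation conditions satisfied
```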

Odd number of points

If the number of points N is odd, say N = 2K + 1, applying the Lagrange formula for polynomial interpolation to the polynomial formulation in the complex plane yields that the solution can be written in the form

p(x) = \sum_{k=0}^{2K} y_k\, t_k(x),

where

t_k(x) = e^{-iKx + iKx_k} \prod_{m=0,\, m \neq k}^{2K} \frac{e^{ix} - e^{ix_m}}{e^{ix_k} - e^{ix_m}}.

The factor e^{-iKx + iKx_k} in this formula compensates for the fact that the complex plane formulation contains also negative powers of e^{ix} and is therefore not a polynomial expression in e^{ix}. The correctness of this expression can easily be verified by observing that t_k(x_k) = 1 and that t_k(x) is a linear combination of the right powers of e^{ix}. Upon using the identity

e^{ix} - e^{ix_m} = 2i \sin\tfrac{1}{2}(x - x_m)\, e^{i(x + x_m)/2},    (2)

the coefficient t_k(x) can be written in the form

t_k(x) = \prod_{m=0,\, m \neq k}^{2K} \frac{\sin\frac{1}{2}(x - x_m)}{\sin\frac{1}{2}(x_k - x_m)}.    (5)
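The sine-product form of the basis functions translates directly into code. A hedged Python sketch (the function name and test nodes are invented for the example):

```python
import numpy as np

# Sketch of the odd-case formula: with N = 2K+1 nodes,
#   t_k(x) = prod_{m != k} sin((x - x_m)/2) / sin((x_k - x_m)/2),
#   p(x)   = sum_k y_k t_k(x).
def trig_lagrange(x_eval, nodes, values):
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    p = np.zeros_like(x_eval)
    for k, xk in enumerate(nodes):
        t = np.ones_like(x_eval)
        for m, xm in enumerate(nodes):
            if m != k:
                t *= np.sin((x_eval - xm) / 2) / np.sin((xk - xm) / 2)
        p += values[k] * t
    return p

nodes = np.array([0.3, 1.1, 2.0, 3.7, 5.9])   # N = 5, so K = 2
vals = np.sin(2 * nodes)                      # data from a degree-2 trig poly
assert np.allclose(trig_lagrange(nodes, nodes, vals), vals)  # hits the nodes
```

Since sin(2x) has degree 2 = K, uniqueness implies the interpolant agrees with it at every point, not only at the nodes.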

Even number of points

If the number of points N is even, say N = 2K, applying the Lagrange formula for polynomial interpolation to the polynomial formulation in the complex plane yields that the solution can be written in the form

p(x) = \sum_{k=0}^{2K-1} y_k\, t_k(x),

where

t_k(x) = e^{-iKx + iKx_k} \, \frac{e^{ix} - e^{i\alpha_k}}{e^{ix_k} - e^{i\alpha_k}} \prod_{m=0,\, m \neq k}^{2K-1} \frac{e^{ix} - e^{ix_m}}{e^{ix_k} - e^{ix_m}}.    (3)

Here, the constants \alpha_k can be chosen freely. This is caused by the fact that the interpolating function (1) contains an odd number of unknown constants, while there are only N = 2K interpolation conditions. A common choice is to require that the highest frequency is of the form a constant times \cos(Kx), i.e. that the \sin(Kx) term vanishes, but in general the phase of the highest frequency can be chosen to be \varphi_K. To get an expression for \alpha_k, we obtain by using (2) that (3) can be written in the form

t_k(x) = \frac{\sin\frac{1}{2}(x - \alpha_k)}{\sin\frac{1}{2}(x_k - \alpha_k)} \prod_{m=0,\, m \neq k}^{2K-1} \frac{\sin\frac{1}{2}(x - x_m)}{\sin\frac{1}{2}(x_k - x_m)}.    (6)

Matching the phase of the highest-frequency term of (3) to \varphi_K then yields

\alpha_k \equiv 2\varphi_K - \sum_{m=0,\, m \neq k}^{2K-1} x_m \pmod{2\pi},

and (6) with this choice of \alpha_k is the desired basis function. Note that care must be taken in order to avoid infinities caused by zeros in the denominators.
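The even-case basis with its free parameters can be sketched in Python. In this hedged example the nodes and the choice of the free parameters α_k are invented; the cardinal property t_k(x_j) = δ_{jk} holds for any admissible choice (denominators nonzero), which is what the sketch checks:

```python
import numpy as np

# Sketch of the even-case basis function with free parameters alpha_k:
#   t_k(x) = sin((x - alpha_k)/2) / sin((x_k - alpha_k)/2)
#            * prod_{m != k} sin((x - x_m)/2) / sin((x_k - x_m)/2).
def t_even(x_eval, k, nodes, alphas):
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    t = np.sin((x_eval - alphas[k]) / 2) / np.sin((nodes[k] - alphas[k]) / 2)
    for m, xm in enumerate(nodes):
        if m != k:
            t = t * np.sin((x_eval - xm) / 2) / np.sin((nodes[k] - xm) / 2)
    return t

nodes = np.array([0.2, 1.5, 3.0, 4.8])  # N = 4, so K = 2
alphas = nodes + 0.7                    # an arbitrary admissible choice
I = np.eye(len(nodes))
for k in range(len(nodes)):
    assert np.allclose(t_even(nodes, k, nodes, alphas), I[k])
```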

Equidistant nodes

Further simplification of the problem is possible if the nodes x_m are equidistant, i.e.

x_m = \frac{2\pi m}{N}, \qquad m = 0, \dots, N - 1;    (4)

see Zygmund for more details.

Odd number of points

Further simplification by substituting (4) directly into (5) would be an obvious approach, but is laborious. A much simpler approach is to consider the Dirichlet kernel

D(x, N) = \frac{1}{N} + \frac{2}{N} \sum_{k=1}^{(N-1)/2} \cos(kx) = \frac{\sin\frac{Nx}{2}}{N \sin\frac{x}{2}},

where N > 0 is odd. It can easily be seen that D(x, N) is a linear combination of the right powers of e^{ix} and satisfies

D(x_m - x_k, N) = \begin{cases} 1, & m = k, \\ 0, & m \neq k. \end{cases}

Since these two properties uniquely define the coefficients t_k(x) in (5), it follows that

t_k(x) = D(x - x_k, N) = \frac{\operatorname{sinc}\,\tfrac{N}{2}(x - x_k)}{\operatorname{sinc}\,\tfrac{1}{2}(x - x_k)}.

Here, the sinc function prevents any singularities and is defined by

\operatorname{sinc}(x) = \frac{\sin x}{x}, \qquad \operatorname{sinc}(0) = 1.
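The odd-N kernel and its cardinal property are easy to verify numerically. A hedged Python sketch (function name invented), assuming the sin(Nx/2)/(N sin(x/2)) form:

```python
import numpy as np

# Sketch: for odd N and equidistant nodes x_k = 2*pi*k/N, the cardinal
# function is D(x, N) = sin(N*x/2) / (N*sin(x/2)), with D(0, N) = 1.
def dirichlet_odd(x, N):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)                 # value 1 where sin(x/2) = 0
    nz = np.sin(x / 2) != 0
    out[nz] = np.sin(N * x[nz] / 2) / (N * np.sin(x[nz] / 2))
    return out

N = 7
nodes = 2 * np.pi * np.arange(N) / N
# Cardinal property: D(x_m - x_0, N) = delta_{m0}
assert np.allclose(dirichlet_odd(nodes, N), np.eye(N)[0], atol=1e-12)
```

The same function also matches the finite cosine sum (1 + 2 Σ cos kx)/N away from the singular points, which is the "right powers of e^{ix}" property.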

Even number of points

For N even, we define the Dirichlet kernel as

D(x, N) = \frac{1}{N} + \frac{2}{N} \sum_{k=1}^{N/2 - 1} \cos(kx) + \frac{1}{N} \cos\frac{Nx}{2} = \frac{\sin\frac{Nx}{2}}{N \tan\frac{x}{2}}.

Again, it can easily be seen that D(x, N) is a linear combination of the right powers of e^{ix}, does not contain the term \sin\frac{Nx}{2}, and satisfies

D(x_m - x_k, N) = \begin{cases} 1, & m = k, \\ 0, & m \neq k. \end{cases}

Using these properties, it follows that the coefficients in (6) are given by

t_k(x) = D(x - x_k, N).

Note that t_k(x) does not contain \sin\frac{Nx}{2} either. Finally, note that the function \sin\frac{Nx}{2} vanishes at all the points x_m. Multiples of this term can, therefore, always be added, but it is commonly left out.
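The even-N kernel can be checked the same way. A hedged Python sketch (function name invented), assuming the sin(Nx/2)/(N tan(x/2)) form:

```python
import numpy as np

# Sketch: for even N the kernel uses a tangent in the denominator,
#   D(x, N) = sin(N*x/2) / (N*tan(x/2)),  with D(0, N) = 1,
# which omits the sin(N*x/2) component.
def dirichlet_even(x, N):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)                 # value 1 where sin(x/2) = 0
    nz = np.sin(x / 2) != 0
    out[nz] = np.sin(N * x[nz] / 2) / (N * np.tan(x[nz] / 2))
    return out

N = 8
nodes = 2 * np.pi * np.arange(N) / N
# Cardinal property on the equidistant grid
assert np.allclose(dirichlet_even(nodes, N), np.eye(N)[0], atol=1e-12)
```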

Implementation

A MATLAB implementation of the above is given by:

function P = triginterp(xi,x,y)
% TRIGINTERP Trigonometric interpolation.
% Input:
%   xi  evaluation points for the interpolant (vector)
%   x   equispaced interpolation nodes (vector, length N)
%   y   interpolation values (vector, length N)
% Output:
%   P   values of the trigonometric interpolant (vector)
N = length(x);
% Adjust the spacing of the given independent variable.
h = 2/N;
scale = (x(2)-x(1)) / h;
x = x/scale;  xi = xi/scale;
% Evaluate interpolant.
P = zeros(size(xi));
for k = 1:N
  P = P + y(k)*trigcardinal(xi-x(k),N);
end

function tau = trigcardinal(x,N)
ws = warning('off','MATLAB:divideByZero');
% Form is different for even and odd N.
if rem(N,2)==1   % odd
  tau = sin(N*pi*x/2) ./ (N*sin(pi*x/2));
else             % even
  tau = sin(N*pi*x/2) ./ (N*tan(pi*x/2));
end
warning(ws)
tau(x==0) = 1;     % fix value at x=0

Relation with the discrete Fourier transform

The special case in which the points x_n are equally spaced is especially important. In this case, we have

x_n = \frac{2\pi n}{N}, \qquad 0 \le n < N.

The transformation that maps the data points y_n to the coefficients a_k, b_k is obtained from the discrete Fourier transform (DFT) of order N.

(Because of the way the problem was formulated above, we have restricted ourselves to odd numbers of points. This is not strictly necessary; for even numbers of points, one includes another cosine term corresponding to the Nyquist frequency.)
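This mapping is a one-liner with an FFT library. A hedged Python sketch (sizes and the sample function are invented; numpy's FFT is unnormalized, so we divide by N):

```python
import numpy as np

# Sketch: for N = 2K+1 equidistant nodes x_n = 2*pi*n/N, the interpolant's
# coefficients follow from the DFT of the samples:
#   c_k = (1/N) * sum_n y_n e^{-2*pi*i*k*n/N},
#   a_0 = c_0,  a_k = 2*Re(c_k),  b_k = -2*Im(c_k)  for 1 <= k <= K.
N, K = 9, 4
x = 2 * np.pi * np.arange(N) / N
y = 1.0 + np.cos(3 * x) - 2.0 * np.sin(x)   # samples of a known trig poly
c = np.fft.fft(y) / N                        # DFT coefficients
a0 = c[0].real
a = 2 * c[1:K + 1].real
b = -2 * c[1:K + 1].imag

assert np.isclose(a0, 1.0)
assert np.allclose(a, [0.0, 0.0, 1.0, 0.0], atol=1e-12)
assert np.allclose(b, [-2.0, 0.0, 0.0, 0.0], atol=1e-12)
```

The recovered coefficients match the function that generated the samples: a_0 = 1, a_3 = 1, b_1 = -2, all others zero.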

The case of the cosine-only interpolation for equally spaced points, corresponding to a trigonometric interpolation when the points have even symmetry, was treated by Alexis Clairaut in 1754. In this case the solution is equivalent to a discrete cosine transform. The sine-only expansion for equally spaced points, corresponding to odd symmetry, was solved by Joseph Louis Lagrange in 1762, for which the solution is a discrete sine transform. The full cosine and sine interpolating polynomial, which gives rise to the DFT, was solved by Carl Friedrich Gauss in unpublished work around 1805, at which point he also derived a fast Fourier transform algorithm to evaluate it rapidly. Clairaut, Lagrange, and Gauss were all concerned with studying the problem of inferring the orbit of planets, asteroids, etc., from a finite set of observation points; since the orbits are periodic, a trigonometric interpolation was a natural choice. See also Heideman et al. (1984).

Applications in numerical computing

Chebfun, a fully integrated software system written in MATLAB for computing with functions, uses trigonometric interpolation and Fourier expansions for computing with periodic functions. Many algorithms related to trigonometric interpolation are readily available in Chebfun; several examples are collected on the Chebfun website.

from Grokipedia
Trigonometric interpolation is a method for approximating a function by a trigonometric polynomial (a finite sum of sines and cosines) that exactly matches the function's values at a set of equally spaced points on an interval, often with periodic extension to suit periodic data. This approach is particularly effective for interpolating periodic functions, where it leverages the discrete Fourier transform (DFT) to compute coefficients efficiently, decomposing the data into its sinusoidal components via the formula X_k = \frac{1}{N} \sum_{n=0}^{N-1} x_n e^{-i k 2\pi n / N} for k = 0, 1, \dots, N-1, and reconstructing the interpolant as p(t) = \sum_{k=0}^{N-1} X_k e^{i k t} (or its real part for real-valued functions). Unlike polynomial interpolation, which can suffer from the Runge phenomenon (large oscillations near the endpoints for high-degree polynomials on non-periodic intervals), trigonometric interpolation maintains stability for equally spaced points due to its inherent periodicity and equivalence to polynomial interpolation in the complex plane via the substitution z = e^{i x}. The fast Fourier transform (FFT) algorithm further enhances its computational efficiency, reducing complexity from O(N^2) to O(N \log N), making it ideal for large datasets in signal processing and scientific computing. Key applications include numerical simulations and optics, where it approximates parametric curves and periodic phenomena, such as wave propagation, by ensuring exact reproduction at sample points while converging rapidly for smooth functions.

Introduction

Definition and motivation

Trigonometric interpolation is a technique in numerical analysis for approximating a given periodic function f(x) by determining a trigonometric polynomial of degree at most n that passes exactly through a set of 2n+1 or 2n specified points. Such a polynomial takes the form t_n(x) = \frac{a_0}{2} + \sum_{k=1}^n \left( a_k \cos(kx) + b_k \sin(kx) \right), or equivalently in complex form, t_n(x) = \sum_{k=-n}^n c_k e^{i k x}, where the coefficients a_k, b_k, or c_k are chosen to satisfy the interpolation conditions t_n(x_j) = f(x_j) at the points x_j. This approach is particularly suited to functions defined on a periodic interval, such as [-\pi, \pi) or [0, 2\pi), where the basis functions \cos(kx) and \sin(kx) inherently possess the desired periodicity of period 2\pi. The primary motivation for trigonometric interpolation arises when dealing with periodic data, as standard polynomial interpolation on a finite interval often introduces artifacts like the Runge phenomenon, where the approximant oscillates wildly near the endpoints due to the non-periodic nature of polynomial basis functions. In contrast, trigonometric interpolation employs a basis that matches the periodicity of the target function, thereby avoiding such boundary issues and ensuring the interpolant remains well-behaved over the entire real line by periodic extension. This makes it especially valuable in applications involving signals, spectra, and periodic phenomena in physics and engineering, where preserving the function's cyclic structure is essential for accurate representation and avoiding artificial discontinuities. A simple illustration of trigonometric interpolation involves approximating the function f(x) = \sin(x) at equally spaced points x_j = \frac{2\pi j}{m} for j = 0, 1, \dots, m-1 on [0, 2\pi), where m = 2n+1 with n \geq 1.
Since \sin(x) is itself a trigonometric polynomial of degree 1, the interpolant t_n(x) recovers f(x) exactly for every such n, demonstrating the method's ability to faithfully reproduce smooth periodic functions without introducing extraneous oscillations. In basic cases with smooth interpolants, this approach maintains periodicity while sidestepping the Gibbs phenomenon that can plague approximations of discontinuous functions.
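The sin(x) example above can be checked numerically. A hedged Python sketch using numpy's FFT (the grid size m = 7 is chosen arbitrarily for the example):

```python
import numpy as np

# Sketch of the example above: interpolating f(x) = sin(x) at m = 2n+1
# equispaced points reproduces sin exactly, since sin is itself a
# trigonometric polynomial of degree 1.
n = 3
m = 2 * n + 1
xj = 2 * np.pi * np.arange(m) / m
c = np.fft.fft(np.sin(xj)) / m          # DFT coefficients of the samples
k = np.fft.fftfreq(m, d=1.0 / m)        # integer frequencies -n..n (FFT order)

def interpolant(t):
    return (np.exp(1j * np.outer(np.atleast_1d(t), k)) @ c).real

t = np.linspace(0.0, 2 * np.pi, 40)
assert np.allclose(interpolant(t), np.sin(t), atol=1e-12)
```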

Historical development

Trigonometric interpolation traces its origins to the early nineteenth century, closely intertwined with the development of Fourier series for representing periodic functions. In 1822, Joseph Fourier introduced the concept of expanding periodic functions as infinite sums of sines and cosines in his work Théorie analytique de la chaleur, laying the groundwork for trigonometric approximations that interpolate function values at discrete points. The partial sums of these series naturally provide trigonometric polynomials that interpolate periodic data at equidistant points, marking the initial connection between Fourier analysis and interpolation techniques. By the early 19th century, the general problem of trigonometric interpolation had been formalized, with Friedrich Bessel providing the first complete account in 1815, focusing on interpolating periodic data using finite trigonometric sums for astronomical applications. This built on earlier contributions from Leonhard Euler and others in the eighteenth century, who developed finite trigonometric series for planetary orbit interpolation, though explicit formulas for equidistant cases emerged more systematically in interpolation theory texts around the 1910s, as the theory advanced. In the early twentieth century, the field formalized alongside broader numerical methods, with key practical algorithms emerging in the 1930s; notably, Cornelius Lanczos's 1938 paper outlined efficient trigonometric interpolation for empirical and analytical functions, emphasizing convergence and applications in physics such as spectral analysis. A major milestone occurred in 1965 with the publication of the Cooley–Tukey algorithm, which enabled fast computation of the discrete Fourier transform (DFT), directly facilitating efficient evaluation of trigonometric interpolants at equidistant nodes and integrating interpolation with fast numerical computation. This built on precursors like Carl Friedrich Gauss's 1805 unpublished work on trigonometric interpolation for least-squares orbit fitting, later recognized as an early FFT variant.
Post-2000 advances have extended trigonometric interpolation to non-equidistant points, leveraging algorithms like the non-equispaced fast Fourier transform (NFFT) for improved flexibility in applications such as image processing and irregular data fitting.

Mathematical Foundations

Trigonometric polynomials

A trigonometric polynomial of degree at most n is a function expressible as

T(x) = a_0 + \sum_{k=1}^n \left( a_k \cos(kx) + b_k \sin(kx) \right),

where the coefficients a_k, b_k are real or complex numbers. This representation lies in the finite-dimensional function space spanned by the basis functions \{ 1, \cos x, \sin x, \dots, \cos(nx), \sin(nx) \}. Equivalently, in complex form, it can be written as

T(x) = \sum_{k=-n}^n c_k e^{i k x},

where the coefficients c_k satisfy c_{-k} = \overline{c_k} for real-valued T. Trigonometric polynomials are inherently periodic with period 2\pi, as each basis function satisfies f(x + 2\pi) = f(x). The set of all such polynomials of degree at most n forms a vector space of dimension 2n + 1. The basis functions are orthogonal with respect to the L^2 inner product over [-\pi, \pi], given by

\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) \overline{g(x)} \, dx.

Specifically,

\int_{-\pi}^{\pi} \cos(mx) \cos(nx) \, dx = \begin{cases} 0 & \text{if } m \neq n, \\ \pi & \text{if } m = n \geq 1, \\ 2\pi & \text{if } m = n = 0. \end{cases}
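These orthogonality relations are easy to verify numerically. A hedged Python sketch using a rectangle rule on a uniform periodic grid (grid size invented; on such a grid the rule integrates low-order trigonometric polynomials exactly up to rounding):

```python
import numpy as np

# Sketch: numerical check of the cosine orthogonality relations on [-pi, pi].
M = 1000
x = -np.pi + 2 * np.pi * np.arange(M) / M   # periodic grid, endpoint excluded

def inner(f_vals, g_vals):
    # Rectangle rule; exact for low-order trig polynomials on a periodic grid.
    return np.sum(f_vals * g_vals) * (2 * np.pi / M)

assert np.isclose(inner(np.cos(2 * x), np.cos(3 * x)), 0.0, atol=1e-9)  # m != n
assert np.isclose(inner(np.cos(2 * x), np.cos(2 * x)), np.pi)           # m = n >= 1
assert np.isclose(inner(np.ones(M), np.ones(M)), 2 * np.pi)             # m = n = 0
```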