Sinc function
| Sinc | |
|---|---|
| Part of the normalized sinc (blue) and unnormalized sinc function (red) shown on the same scale | |
| General information | |
| General definition | sinc(x) = sin(x)/x |
| Fields of application | Signal processing, spectroscopy |
| Domain, codomain and image | |
| Domain | (−∞, +∞) |
| Image | [≈ −0.21723, 1] |
| Basic features | |
| Parity | Even |
| Specific values | |
| At zero | 1 |
| Value at +∞ | 0 |
| Value at −∞ | 0 |
| Maxima | 1 at x = 0 |
| Minima | ≈ −0.21723 at x ≈ ±4.49341 |
| Specific features | |
| Root | x = kπ, k a nonzero integer |
| Related functions | |
| Reciprocal | x/sin(x) |
| Derivative | sinc′(x) = (cos(x) − sinc(x))/x |
| Antiderivative | Si(x) + C |
| Series definition | |
| Taylor series | sinc(x) = ∑_{n=0}^∞ (−1)^n x^{2n}/(2n + 1)! |
In mathematics, physics and engineering, the sinc function (/ˈsɪŋk/ SINK), denoted by sinc(x), is defined as either sinc(x) = sin(x)/x or sinc(x) = sin(πx)/(πx),
the latter of which is sometimes referred to as the normalized sinc function. The only difference between the two definitions is in the scaling of the independent variable (the x axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function.
The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal. The sinc filter is used in signal processing.
The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's formula) for the zeroth-order spherical Bessel function of the first kind.
The sinc function is also called the cardinal sine function.
Definitions
The sinc function has two forms, normalized and unnormalized.[1]
In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by sinc(x) = sin(x)/x.
Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x).[2]
In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by sinc(x) = sin(πx)/(πx).
In either case, the value at x = 0 is defined to be the limiting value sinc(0) := lim x→0 sin(ax)/(ax) = 1 for all real a ≠ 0 (the limit can be proven using the squeeze theorem).
The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x.
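The two definitions and their basic properties above can be checked numerically. Below is a minimal sketch using only the standard library; the helper names `sinc_u` and `sinc_n` are illustrative, not from any library.

```python
import math

def sinc_u(x):
    """Unnormalized sinc: sin(x)/x, with the removable singularity filled in as 1."""
    return 1.0 if x == 0 else math.sin(x) / x

def sinc_n(x):
    """Normalized sinc: sin(pi x)/(pi x), again with value 1 at zero."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Value at the removable singularity is the limit 1 in both cases.
assert sinc_u(0) == 1.0 and sinc_n(0) == 1.0

# Zeros: unnormalized at nonzero multiples of pi, normalized at nonzero integers.
assert abs(sinc_u(3 * math.pi)) < 1e-12
assert abs(sinc_n(3)) < 1e-12

# The two forms differ only by the pi-scaling of the independent variable.
assert math.isclose(sinc_n(0.7), sinc_u(math.pi * 0.7))
```

The last assertion expresses exactly the relation between the two definitions: the normalized form is the unnormalized form with the x axis scaled by π.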
Etymology
The function has also been called the cardinal sine or sine cardinal function.[3][4] The term "sinc" is a contraction of the function's full Latin name, sinus cardinalis,[5] and was introduced by Philip M. Woodward and I. L. Davies in their 1952 article "Information theory and inverse probability in telecommunication", saying "This function occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own".[6] It is also used in Woodward's 1953 book Probability and Information Theory, with Applications to Radar.[5][7]
Properties
The zero crossings of the unnormalized sinc are at non-zero integer multiples of π, while zero crossings of the normalized sinc occur at non-zero integers.
The local maxima and minima of the unnormalized sinc correspond to its intersections with the cosine function. That is, sin(ξ)/ξ = cos(ξ) for all points ξ where the derivative of sin(x)/x is zero and thus a local extremum is reached. This follows from the derivative of the sinc function: d/dx (sin(x)/x) = (cos(x) − sin(x)/x)/x = (cos(x) − sinc(x))/x for x ≠ 0.
The first few terms of the infinite series for the x coordinate of the n-th extremum with positive x coordinate are xₙ = q − q⁻¹ − (2/3)q⁻³ − ⋯,[citation needed] where q = (n + 1/2)π, and where odd n lead to a local minimum, and even n to a local maximum. Because of symmetry around the y axis, there exist extrema with x coordinates −xₙ. In addition, there is an absolute maximum of 1 at x = 0.
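The extremum condition sin(ξ)/ξ = cos(ξ) is equivalent to tan(ξ) = ξ, which can be solved numerically. The sketch below (names `g` and `extremum` are illustrative) brackets the n-th positive root by bisection and compares it with the first terms of the asymptotic series with q = (n + 1/2)π.

```python
import math

def g(x):
    # Extrema of sin(x)/x satisfy x*cos(x) - sin(x) = 0, i.e. tan(x) = x.
    return x * math.cos(x) - math.sin(x)

def extremum(n, iters=200):
    """n-th positive extremum of the unnormalized sinc, by bisection on (n*pi, (n+1/2)*pi)."""
    lo, hi = n * math.pi, (n + 0.5) * math.pi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x1 = extremum(1)                       # first positive extremum (a minimum)
q = 1.5 * math.pi                      # q = (n + 1/2)*pi for n = 1
approx = q - 1 / q - (2 / 3) * q**-3   # leading terms of the series
assert abs(x1 - 4.493409) < 1e-5
assert abs(x1 - approx) < 1e-3
# Odd n gives a local minimum: the sinc value there is negative.
assert math.sin(x1) / x1 < 0
```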
The normalized sinc function has a simple representation as the infinite product:

sinc(x) = ∏_{n=1}^∞ (1 − x²/n²)

and is related to the gamma function Γ(x) through Euler's reflection formula:

sinc(x) = 1/(Γ(1 + x) Γ(1 − x)).
Euler discovered[8] that sin(x)/x = ∏_{n=1}^∞ cos(x/2ⁿ), and because of the product-to-sum identity[9]

∏_{n=1}^k cos(x/2ⁿ) = (1/2^(k−1)) ∑_{m=1}^{2^(k−1)} cos((m − 1/2)x/2^(k−1)), k ≥ 1,

Euler's product can be recast as a sum:

sin(x)/x = lim N→∞ (1/N) ∑_{n=1}^N cos((n − 1/2)x/N).
The continuous Fourier transform of the normalized sinc (to ordinary frequency) is rect(f): ∫_{−∞}^{∞} sinc(t) e^{−i2πft} dt = rect(f), where the rectangular function is 1 for argument between −1/2 and 1/2, and zero otherwise. This corresponds to the fact that the sinc filter is the ideal (brick-wall, meaning rectangular frequency response) low-pass filter.
This Fourier integral, including the special case ∫_{−∞}^{∞} sin(πx)/(πx) dx = rect(0) = 1, is an improper integral (see Dirichlet integral) and not a convergent Lebesgue integral, as ∫_{−∞}^{∞} |sin(πx)/(πx)| dx = +∞.
The normalized sinc function has properties that make it ideal in relationship to interpolation of sampled bandlimited functions:
- It is an interpolating function, i.e., sinc(0) = 1, and sinc(k) = 0 for nonzero integer k.
- The functions xk(t) = sinc(t − k) (k integer) form an orthonormal basis for bandlimited functions in the function space L2(R), with highest angular frequency ωH = π (that is, highest cycle frequency fH = 1/2).
Other properties of the two sinc functions include:
- The unnormalized sinc is the zeroth-order spherical Bessel function of the first kind, j0(x). The normalized sinc is j0(πx).
- ∫_0^x sin(t)/t dt = Si(x), where Si(x) is the sine integral.
- λ sinc(λx) (not normalized) is one of two linearly independent solutions to the linear ordinary differential equation x (d²y/dx²) + 2 (dy/dx) + λ²xy = 0. The other is cos(λx)/x, which is not bounded at x = 0, unlike its sinc function counterpart.
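That sin(λx)/x solves this equation can be spot-checked with central finite differences, a minimal sketch (the names `y` and `lam` are illustrative):

```python
import math

lam = 2.0

def y(x):
    # y(x) = sin(lam*x)/x, proportional to the unnormalized sinc of lam*x
    return math.sin(lam * x) / x

x, h = 1.3, 1e-4
y1 = (y(x + h) - y(x - h)) / (2 * h)             # central first derivative
y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2     # central second derivative

# Residual of x*y'' + 2*y' + lam^2 * x * y should vanish up to discretization error.
residual = x * y2 + 2 * y1 + lam**2 * x * y(x)
assert abs(residual) < 1e-4
```

The equation is the order-zero spherical Bessel equation in disguise, which is consistent with the identification of sinc with j0 above.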
- Using normalized sinc,
- The following improper integral involves the (not normalized) sinc function:
Relationship to the Dirac delta distribution
The normalized sinc function can be used as a nascent delta function, meaning that the following weak limit holds: lim a→0 (1/a) sinc(x/a) = δ(x).
This is not an ordinary limit, since the left side does not converge. Rather, it means that

lim a→0 ∫_{−∞}^{∞} (1/a) sinc(x/a) φ(x) dx = φ(0)

for every Schwartz function φ, as can be seen from the Fourier inversion theorem. In the above expression, as a → 0, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of ±1/(πx), regardless of the value of a.
This complicates the informal picture of δ(x) as being zero for all x except at the point x = 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in the Gibbs phenomenon.
We can also make an immediate connection with the standard Dirac representation of δ(x) by writing δ(x) = (1/2π) ∫_{−∞}^{∞} e^{ikx} dk and noting that (1/2π) ∫_{−K}^{K} e^{ikx} dk = sin(Kx)/(πx) = (1/a) sinc(x/a) with a = π/K, which makes clear the recovery of the delta as an infinite-bandwidth limit (K → ∞) of the integral.
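The weak limit can be observed numerically by pairing the scaled sinc kernel with a concrete Schwartz function, here a Gaussian. This is a rough quadrature sketch (the names `phi` and `pairing` are illustrative), truncating the integral where the Gaussian makes the tail negligible:

```python
import math

def phi(x):
    # A Schwartz test function (a Gaussian); phi(0) = 1.
    return math.exp(-x * x)

def pairing(a, half_width=20.0, h=1e-3):
    """Trapezoidal estimate of the pairing of (1/a)*sinc(x/a) (normalized sinc) with phi."""
    n = int(2 * half_width / h)
    total = 0.0
    for i in range(n + 1):
        x = -half_width + i * h
        # (1/a)*sinc(x/a) = sin(pi*x/a)/(pi*x), with the limit value at x = 0
        k = 1.0 / a if x == 0 else math.sin(math.pi * x / a) / (math.pi * x)
        w = 0.5 if i in (0, n) else 1.0
        total += w * k * phi(x) * h
    return total

# As a shrinks, the pairing approaches phi(0) = 1.
assert abs(pairing(0.2) - 1.0) < 0.01
assert abs(pairing(0.05) - 1.0) < 0.01
```

Even though the kernel itself never decays faster than 1/|x|, its pairing with a rapidly decaying test function stabilizes at φ(0), which is exactly what the weak limit asserts.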
Summation
All sums in this section refer to the unnormalized sinc function.
The sum of sinc(n) over integer n from 1 to ∞ equals (π − 1)/2: ∑_{n=1}^∞ sinc(n) = ∑_{n=1}^∞ sin(n)/n = (π − 1)/2.
The sum of the squares also equals (π − 1)/2:[10][11] ∑_{n=1}^∞ sinc²(n) = (π − 1)/2.
When the signs of the addends alternate and begin with +, the sum equals 1/2: ∑_{n=1}^∞ (−1)^{n+1} sinc(n) = 1/2.
The alternating sums of the squares and cubes also equal 1/2:[12] ∑_{n=1}^∞ (−1)^{n+1} sinc²(n) = ∑_{n=1}^∞ (−1)^{n+1} sinc³(n) = 1/2.
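These sums converge slowly but can be verified with large partial sums, a minimal sketch (the helper `sinc_u` is illustrative):

```python
import math

def sinc_u(x):
    # Unnormalized sinc, as used throughout this section.
    return 1.0 if x == 0 else math.sin(x) / x

N = 200_000
s1 = sum(sinc_u(n) for n in range(1, N + 1))                     # sum of sinc(n)
s2 = sum(sinc_u(n) ** 2 for n in range(1, N + 1))                # sum of sinc(n)^2
s3 = sum((-1) ** (n + 1) * sinc_u(n) for n in range(1, N + 1))   # alternating sum

target = (math.pi - 1) / 2
assert abs(s1 - target) < 1e-4   # both sums approach (pi - 1)/2
assert abs(s2 - target) < 1e-4
assert abs(s3 - 0.5) < 1e-4      # alternating sum approaches 1/2
```

The tails of these series are O(1/N), so a few hundred thousand terms suffice for four-digit agreement.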
Series expansion
The Taylor series of the unnormalized sinc function can be obtained from that of the sine (which also yields its value of 1 at x = 0):

sinc(x) = ∑_{n=0}^∞ (−1)^n x^{2n}/(2n + 1)! = 1 − x²/3! + x⁴/5! − ⋯

The series converges for all x. The normalized version follows easily:

sinc(x) = ∑_{n=0}^∞ (−1)^n (πx)^{2n}/(2n + 1)!
Euler famously compared this series to the expansion of the infinite product form to solve the Basel problem.
Higher dimensions
The product of 1-D sinc functions readily provides a multivariate sinc function for the square Cartesian grid (lattice): sincC(x, y) = sinc(x) sinc(y), whose Fourier transform is the indicator function of a square in the frequency space (i.e., the brick wall defined in 2-D space). The sinc function for a non-Cartesian lattice (e.g., hexagonal lattice) is a function whose Fourier transform is the indicator function of the Brillouin zone of that lattice. For example, the sinc function for the hexagonal lattice is a function whose Fourier transform is the indicator function of the unit hexagon in the frequency space. For a non-Cartesian lattice this function can not be obtained by a simple tensor product. However, the explicit formula for the sinc function for the hexagonal, body-centered cubic, face-centered cubic and other higher-dimensional lattices can be explicitly derived[13] using the geometric properties of Brillouin zones and their connection to zonotopes.
For example, a hexagonal lattice can be generated by the (integer) linear span of the vectors
Denoting one can derive[13] the sinc function for this hexagonal lattice as
This construction can be used to design Lanczos windows for general multidimensional lattices.[13]
Sinhc
Some authors, by analogy, define the hyperbolic sine cardinal function, sinhc(x) = sinh(x)/x.[14][15][16]
See also
- Anti-aliasing filter – Mathematical transformation reducing the damage caused by aliasing
- Borwein integral – Type of mathematical integrals
- Dirichlet integral – Integral of sin(x)/x from 0 to infinity
- Lanczos resampling – Technique in signal processing
- List of mathematical functions
- Shannon wavelet
- Sinc filter – Ideal low-pass filter or averaging filter
- Sinc numerical methods
- Trigonometric functions of matrices – Important functions in solving differential equations
- Trigonometric integral – Special function defined by an integral
- Whittaker–Shannon interpolation formula – Signal (re-)construction algorithm
- Winkel tripel projection – Pseudoazimuthal compromise map projection (cartography)
References
- ^ Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W., eds. (2010), "Numerical methods", NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
- ^ Singh, R. P.; Sapre, S. D. (2008). Communication Systems, 2E (illustrated ed.). Tata McGraw-Hill Education. p. 15. ISBN 978-0-07-063454-1. Extract of page 15
- ^ Weisstein, Eric W. "Sinc Function". mathworld.wolfram.com. Retrieved 2023-06-07.
- ^ Merca, Mircea (2016-03-01). "The cardinal sine function and the Chebyshev–Stirling numbers". Journal of Number Theory. 160: 19–31. doi:10.1016/j.jnt.2015.08.018. ISSN 0022-314X. S2CID 124388262.
- ^ a b Poynton, Charles A. (2003). Digital video and HDTV. Morgan Kaufmann Publishers. p. 147. ISBN 978-1-55860-792-7.
- ^ Woodward, P. M.; Davies, I. L. (March 1952). "Information theory and inverse probability in telecommunication" (PDF). Proceedings of the IEE - Part III: Radio and Communication Engineering. 99 (58): 37–44. doi:10.1049/pi-3.1952.0011.
- ^ Woodward, Phillip M. (1953). Probability and information theory, with applications to radar. London: Pergamon Press. p. 29. ISBN 978-0-89006-103-9. OCLC 488749777.
- ^ Euler, Leonhard (1735). "On the sums of series of reciprocals". arXiv:math/0506415.
- ^ Sanjar M. Abrarov; Brendan M. Quine (2015). "Sampling by incomplete cosine expansion of the sinc function: Application to the Voigt/complex error function". Appl. Math. Comput. 258: 425–435. arXiv:1407.0533. doi:10.1016/j.amc.2015.01.072.
- ^ "Advanced Problem 6241". American Mathematical Monthly. 87 (6). Washington, DC: Mathematical Association of America: 496–498. June–July 1980. doi:10.1080/00029890.1980.11995075.
- ^ Robert Baillie; David Borwein; Jonathan M. Borwein (December 2008). "Surprising Sinc Sums and Integrals". American Mathematical Monthly. 115 (10): 888–901. doi:10.1080/00029890.2008.11920606. hdl:1959.13/940062. JSTOR 27642636. S2CID 496934.
- ^ Baillie, Robert (2008). "Fun with Fourier series". arXiv:0806.0150v2 [math.CA].
- ^ a b c Ye, W.; Entezari, A. (June 2012). "A Geometric Construction of Multivariate Sinc Functions". IEEE Transactions on Image Processing. 21 (6): 2969–2979. Bibcode:2012ITIP...21.2969Y. doi:10.1109/TIP.2011.2162421. PMID 21775264. S2CID 15313688.
- ^ Ainslie, Michael (2010). Principles of Sonar Performance Modelling. Springer. p. 636. ISBN 9783540876625.
- ^ Günter, Peter (2012). Nonlinear Optical Effects and Materials. Springer. p. 258. ISBN 9783540497134.
- ^ Schächter, Levi (2013). Beam-Wave Interaction in Periodic and Quasi-Periodic Structures. Springer. p. 241. ISBN 9783662033982.
Further reading
- Stenger, Frank (1993). Numerical Methods Based on Sinc and Analytic Functions. Springer Series on Computational Mathematics. Vol. 20. Springer-Verlag New York, Inc. doi:10.1007/978-1-4612-2706-9. ISBN 9781461276371.
External links
Sinc function
Fundamental Definitions
Unnormalized Form
The unnormalized sinc function is defined mathematically as

sinc(x) = sin(x)/x for x ≠ 0, with sinc(0) = 1.

This piecewise definition addresses the indeterminate form at x = 0, where the expression encounters division by zero.[1] The point x = 0 represents a removable singularity, resolved by assigning the value given by the limit lim x→0 sin(x)/x = 1, which ensures the function is continuous everywhere, including at the origin.[1] This limit follows from the standard Taylor series expansion of the sine function around zero, sin(x) = x − x³/3! + x⁵/5! − ⋯, yielding sin(x)/x → 1 as x → 0. In certain contexts, particularly signal processing literature, the unnormalized sinc is denoted as Sa(x) to distinguish it from scaled variants.[6]

The function features an oscillatory profile that decays as 1/|x| for large |x|, with zeros at x = nπ for every nonzero integer n, corresponding to the roots of sin(x) away from the origin.[1] Its graph displays a principal lobe centered at zero, symmetric about the y-axis, and enveloped by successively smaller sidelobes that diminish in amplitude.[1] Unlike the normalized form, which scales the argument by π for unit integral over the real line, the unnormalized version integrates to π from −∞ to ∞.[1]

Normalized Form
The normalized sinc function is defined as

sinc(x) = sin(πx)/(πx) for x ≠ 0,

where the value at x = 0 is obtained by taking the limit as x approaches 0, yielding 1 by the standard limit sin(u)/u → 1 as u → 0 with u = πx.[7][3] This form differs from the unnormalized sinc by incorporating a π-scaling in the argument, which normalizes the function such that its integral over the entire real line equals 1: ∫_{−∞}^{∞} sin(πx)/(πx) dx = 1.[7][3] The normalized sinc exhibits zeros at all nonzero integers, i.e., sinc(k) = 0 for any nonzero integer k, due to sin(πk) = 0 while the denominator πk ≠ 0.[7][2] This interpolation property at integer points, combined with sinc(0) = 1, implies that the sum of its integer translates is identically 1: ∑_{k=−∞}^{∞} sinc(x − k) = 1 for all real x.[7] Often denoted simply as the normalized sinc in digital signal processing literature, this variant is particularly valued for its role in sampling theory, where the summation of its integer translates equals 1, facilitating ideal bandlimited signal reconstruction.[7][3]

Origins
Etymology
The term "sinc" was coined by Philip M. Woodward in a 1952 technical report co-authored with I. L. Davies on applications of information theory to telecommunication and radar systems.[8] In this work, Woodward introduced the abbreviation to denote the function sin(πx)/(πx), noting that it "occurs so often in Fourier analysis and its applications that it does seem to merit a name of its own." The name is a shorthand derived directly from the mathematical expression "sin x / x" for the unnormalized form of the function.[8] Woodward specified that "sinc" should be pronounced like "sink," emphasizing its phonetic simplicity in technical discourse.[9] This introduction occurred amid post-World War II advancements in radar signal processing, where the function's role in waveform analysis necessitated efficient notation.

The term's first formal publication appeared in Woodward's 1953 monograph Probability and Information Theory, with Applications to Radar, where it was defined on page 29 as sinc x = sin(πx)/(πx).[10] Initially used informally within radar research circles, "sinc" gained standardization through its adoption in subsequent signal processing literature, becoming a staple in texts on Fourier transforms and communications engineering by the late 20th century.[9]

Historical Development
The function sin(x)/x, now recognized as the unnormalized sinc function, first appeared in mathematical literature through Joseph Fourier's foundational 1822 treatise Théorie analytique de la chaleur, where it emerged in series expansions for solving the heat equation via Fourier series, foreshadowing the development of transform theory.[11]

In optics and wave theory, the sinc function gained significance as the envelope of diffraction patterns. Joseph von Fraunhofer described the single-slit diffraction pattern around 1821, which mathematically corresponds to the sinc function for a rectangular aperture. George Biddell Airy described the related Airy disk pattern for circular apertures in 1835, involving a form akin to 2J₁(x)/x that parallels the sinc profile for linear apertures.[12] Gustav Kirchhoff provided a rigorous scalar diffraction theory in 1882, deriving the Fresnel-Kirchhoff integral that yields the sinc function as the intensity distribution for diffraction through a rectangular slit, formalizing its application in wave optics.[13]

The sinc function played an implicit yet pivotal role in sampling theory through the Nyquist-Shannon theorem. Harry Nyquist's 1928 analysis of telegraph transmission established the minimum bandwidth requirements for signal reconstruction without aliasing, implying the need for adequate sampling rates.[14] Claude Shannon explicitly formulated the theorem in 1949 at Bell Laboratories, demonstrating that ideal bandlimited signal reconstruction from samples uses sinc interpolation, a breakthrough tied to wartime communications research.[15] Following World War II, the sinc function saw increased adoption in radar and telecommunications engineering, particularly at Bell Laboratories during the 1940s, where it informed signal processing for pulse compression and bandwidth optimization.

Mathematical Properties
Basic Analytic Properties
The sinc function, in both its unnormalized form (with sinc(x) = sin(x)/x) and normalized form (with sinc(x) = sin(πx)/(πx)), is an even function, satisfying sinc(−x) = sinc(x) for all real x.[1] This symmetry follows because both the sine in the numerator and the denominator are odd, so their quotient is even.[1] Additionally, the function is bounded on the real line, with |sinc(x)| ≤ 1 for all x, achieving equality only at x = 0.[1]

The sinc function is continuous everywhere on ℝ, including at x = 0, where the removable singularity is resolved by defining the value as the limit 1.[1] Moreover, it is infinitely differentiable on ℝ, as the function extends analytically to the entire complex plane, forming an entire function.[1] This smooth behavior ensures that all derivatives exist and are continuous at every point, including the origin.[16]

For large |x|, the sinc function displays asymptotic decay characterized by O(1/|x|), modulated by the oscillatory nature of the numerator, leading to persistent ripples that diminish in amplitude.[1] This slow decay, slower than exponential but faster than constant, contributes to the function's utility in applications requiring gradual roll-off.[1]

Beyond the central peak at x = 0, the unnormalized sinc function features alternating local maxima and minima in its side lobes, with subsequent lobes showing progressively decreasing amplitudes. These extrema occur at points solving tan(x) = x (for x ≠ 0), reflecting the interplay between the oscillatory sine and the decaying envelope. The normalized form exhibits analogous structure, scaled by π, which is particularly relevant in discrete signal processing contexts.[1]

Integral and Summation Formulas
The unnormalized sinc function, defined as sin(x)/x (with sinc(0) = 1), satisfies the principal definite integral ∫_{−∞}^{∞} sin(x)/x dx = π.[1] This result, known as the Dirichlet integral in its half-range form, implies ∫_{0}^{∞} sin(x)/x dx = π/2.[1] These integrals arise naturally in Fourier analysis, where the sinc function is the inverse transform of the rectangular function.[1] For the normalized sinc function, sin(πx)/(πx) (again with sinc(0) = 1), the corresponding integral evaluates to ∫_{−∞}^{∞} sin(πx)/(πx) dx = 1.[1] This normalization ensures the function integrates to unity over the real line, making it suitable as an interpolating kernel in signal processing.[3]

A key summation property holds for the normalized form: ∑_{n=−∞}^{∞} sinc(x − n) = 1 for all real x. This identity, central to the Shannon-Nyquist sampling theorem, can be derived via the Poisson summation formula, which equates the discrete sum to a sum over the Fourier transform of sinc (the rect function) evaluated at integer frequencies, yielding a constant value of 1. Alternatively, a direct derivation uses the partial fraction expansion π/sin(πx) = ∑_{n=−∞}^{∞} (−1)ⁿ/(x − n); since sin(π(x − n)) = (−1)ⁿ sin(πx), substituting into the expression for the sum gives ∑_{n=−∞}^{∞} sinc(x − n) = (sin(πx)/π) ∑_{n=−∞}^{∞} (−1)ⁿ/(x − n) = 1.

Series Expansions
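The partition-of-unity identity for integer translates can be checked with symmetric partial sums; since consecutive terms alternate in sign, truncation error shrinks like 1/N. A minimal sketch (the helpers `sinc_n` and `translate_sum` are illustrative):

```python
import math

def sinc_n(x):
    # Normalized sinc, with the limit value at zero.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def translate_sum(x, N=20_000):
    """Symmetric partial sum of sinc(x - n) over n = -N..N."""
    return sum(sinc_n(x - n) for n in range(-N, N + 1))

# The integer translates of the normalized sinc sum to 1 for every real x.
for x in (0.0, 0.25, 0.5, 1.7):
    assert abs(translate_sum(x) - 1.0) < 1e-3
```

At integer x the identity is exact even for finite N, since every translate except one vanishes; at non-integer x the partial sums converge to 1 as N grows.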
The unnormalized sinc function, defined as sin(x)/x for x ≠ 0 and sinc(0) = 1, possesses a Taylor series expansion around x = 0 derived from the power series of the sine function divided by x:

sinc(x) = ∑_{n=0}^∞ (−1)ⁿ x^{2n}/(2n + 1)! = 1 − x²/3! + x⁴/5! − ⋯

This expansion arises because the sinc function is infinitely differentiable at x = 0, with all odd-order derivatives vanishing there. The series converges for all complex x, reflecting the fact that the unnormalized sinc function is an entire function.

For the normalized sinc function, defined as sin(πx)/(πx) for x ≠ 0 and sinc(0) = 1, the Taylor series around x = 0 adjusts for the π scaling in the argument of the sine:

sinc(x) = ∑_{n=0}^∞ (−1)ⁿ (πx)^{2n}/(2n + 1)!

Like its unnormalized counterpart, this series converges everywhere in the complex plane, as the normalized sinc is also an entire function. These power series representations are especially valuable for approximating the sinc function when |x| is small, where truncating after a few terms yields high accuracy with minimal computational effort, such as in initial-value problems or local evaluations in numerical algorithms.

Relations to Other Concepts
Connection to Dirac Delta Distribution
The Dirac delta distribution admits an integral representation involving a limit of cosine functions, which directly connects to the sinc function through explicit evaluation of the integral. Specifically,

δ(x) = (1/2π) ∫_{−∞}^{∞} e^{ikx} dk = lim K→∞ (1/π) ∫_{0}^{K} cos(kx) dk.

Evaluating the integral yields ∫_{0}^{K} cos(kx) dk = sin(Kx)/x, so the expression simplifies to δ(x) = lim K→∞ sin(Kx)/(πx), where sin(Kx)/(πx) is a scaled form of the unnormalized sinc function sin(x)/x. This limit holds in the distributional sense, meaning that ∫_{−∞}^{∞} (sin(Kx)/(πx)) φ(x) dx → φ(0) for any smooth test function φ with compact support, as K → ∞.[17]

Equivalently, the sinc function provides a sequence of approximations to the delta distribution via scaling. For the unnormalized sinc, δ(x) = lim a→0⁺ sin(x/a)/(πx), where the factor 1/π ensures the integral over the real line remains unity for each a > 0. In the theory of distributions, the sinc function is not itself a Schwartz test function, as it decays only like 1/|x|; the key connection here is rather the limiting process that generates the delta from scaled sincs. This approximation is particularly useful in contexts requiring a smooth cutoff, as the sinc's oscillatory tails provide a natural bandwidth limitation.

Historically, this sinc-based representation has been employed in physics to regularize the Dirac delta distribution, replacing singular impulses with finite-bandwidth approximations to facilitate computations in quantum mechanics and field theory while preserving key distributional properties in the limit. For instance, early applications in wave mechanics used such limits to handle point sources without introducing divergences prematurely. Seminal treatments appear in Fourier analysis texts, where the connection arises from the inverse transform of a rectangular spectrum approaching a constant, yielding the delta.[18][19]

Hyperbolic Variant (Sinhc)
The hyperbolic variant of the sinc function, often denoted as sinhc(x), is defined as sinh(x)/x for x ≠ 0, with the value at x = 0 taken as the limit lim x→0 sinh(x)/x = 1.[20] This definition parallels the structure of the trigonometric sinc function but replaces the sine with its hyperbolic counterpart, yielding a function that grows exponentially for large |x| rather than oscillating.[20]

The function is an entire function of the complex variable, analytic everywhere in the complex plane, as its Taylor series expansion converges for all x:

sinhc(x) = ∑_{n=0}^∞ x^{2n}/(2n + 1)! = 1 + x²/3! + x⁴/5! + ⋯[21]

This series arises directly from the Taylor expansion of sinh(x). It has no real zeros, since sinh(x) = 0 only at x = 0 for real x, and sinhc approaches 1 there; for x > 0, sinhc is strictly monotonically increasing.[22]

Unlike the sinc function, whose integral over the real line equals π, the improper integral ∫_{−∞}^{∞} sinh(x)/x dx diverges. This follows from the asymptotic behavior sinh(x)/x ~ e^{|x|}/(2|x|) as |x| → ∞, which grows too rapidly for convergence.[21] The hyperbolic sinc appears in the analysis of modified Bessel functions, where it serves as a bounding or normalizing factor in inequalities and representations for generalized forms.[21] It also arises in solutions to certain diffusion equations, particularly in contexts involving hyperbolic modifications that account for finite propagation speeds.[23]

Multidimensional Extensions
The sinc function extends naturally to higher dimensions through separable Cartesian and isotropic radial forms, providing foundational tools for multidimensional signal representation and analysis. In the Cartesian form, the n-dimensional unnormalized sinc function is defined as the product

sinc(x) = ∏_{i=1}^n sinc(xᵢ), where x = (x₁, …, xₙ).

This separable structure allows independent computation along each coordinate axis, facilitating efficient numerical evaluation in applications requiring rectangular frequency-domain support.

The radial form generalizes the function isotropically. In two dimensions, a common extension is the sombrero function, defined as somb(ρ) = 2J₁(πρ)/(πρ), where J₁ is the Bessel function of the first kind of order one and ρ = √(x² + y²). This yields the rotationally symmetric "sombrero" profile, which is the Fourier transform of the circularly symmetric rectangular function and is used in filter design and diffraction modeling. In three dimensions, the isotropic form corresponding to spherical frequency support is employed in volume rendering and tomography for approximating reconstruction kernels.[24]

A key property of the Cartesian form is its separability, enabling decomposition into one-dimensional operations along orthogonal axes, which preserves orthogonality in discrete bases like the sinc discrete variable representation (DVR). The integral of the unnormalized Cartesian sinc over ℝⁿ evaluates to πⁿ, reflecting the product of individual one-dimensional integrals each equaling π. To achieve unit integral normalization in higher dimensions, each factor is replaced by the normalized form sin(πxᵢ)/(πxᵢ), analogous to the one-dimensional case where the normalized form integrates to 1; the multidimensional product then maintains unit volume. These extensions find use in multidimensional imaging and tomography, where the separable form supports efficient reconstruction on Cartesian grids, and radial variants aid in modeling isotropic point spread functions.[25]

Applications
In Signal Processing
In digital signal processing, the sinc function plays a central role as the impulse response of the ideal low-pass filter, whose frequency response is a rectangular function that passes all frequencies below a cutoff and attenuates all above it to zero. This filter is theoretically perfect for bandlimiting signals but non-causal and of infinite duration due to the sinc's unbounded support.[26][27]

The sinc function enables perfect reconstruction of bandlimited continuous-time signals from their discrete samples via the Shannon interpolation formula, also known as the Whittaker–Shannon interpolation formula. For a signal bandlimited to less than half the sampling frequency, the reconstructed signal is given by

x(t) = ∑_{n=−∞}^{∞} x(nT) sinc((t − nT)/T),

where T is the sampling interval and x(nT) are the samples; this formula stems from the Nyquist–Shannon sampling theorem, ensuring no information loss if the sampling rate exceeds twice the signal's bandwidth.[28][29]

To prevent aliasing distortion during sampling, an anti-aliasing filter, ideally a low-pass filter with sinc impulse response, is applied beforehand to remove frequency components above the Nyquist frequency, thereby avoiding spectral folding that would otherwise corrupt the signal.[28][27]

In practice, the sinc function's infinite extent makes ideal implementation impossible, leading to truncation and windowing of the impulse response to create finite impulse response (FIR) filters; for example, Lanczos resampling employs a sinc kernel windowed by another sinc function to approximate bandlimited interpolation while reducing computational demands. Truncation introduces artifacts such as the Gibbs phenomenon, manifesting as overshoots and ringing near signal discontinuities, with ripple amplitudes approaching 9% of the jump height regardless of filter length.[30][31]

In Fourier Analysis
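The Whittaker–Shannon formula can be demonstrated on a synthetic bandlimited signal: sample it below the Nyquist limit, then reconstruct off-grid values with a (truncated) sinc sum. A minimal sketch, with illustrative names `signal` and `reconstruct` and a truncation width chosen only for demonstration:

```python
import math

def sinc_n(x):
    # Normalized sinc, with the limit value at zero.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

T = 1.0  # sampling interval -> Nyquist frequency 1/(2T) = 0.5

def signal(t):
    # Two tones well below the Nyquist frequency, so the theorem applies.
    return math.sin(2 * math.pi * 0.10 * t) + 0.5 * math.cos(2 * math.pi * 0.23 * t)

def reconstruct(t, N=5000):
    """Truncated Whittaker-Shannon sum: x(t) ~ sum of x(nT) * sinc((t - nT)/T)."""
    return sum(signal(n * T) * sinc_n((t - n * T) / T) for n in range(-N, N + 1))

# Off-grid points are recovered from the samples alone, up to truncation error.
for t in (0.3, 1.75, -2.2):
    assert abs(reconstruct(t) - signal(t)) < 1e-2
```

The slow 1/|x| decay of the sinc kernel is visible here: thousands of terms are needed for two-digit accuracy, which is precisely why practical resamplers use windowed, truncated kernels instead of the ideal one.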
The sinc function plays a pivotal role in Fourier analysis due to its relationship with the rectangular function through the Fourier transform. The Fourier transform of the rectangular function rect(x), defined as 1 for |x| ≤ 1/2 and 0 otherwise, yields the unnormalized sinc function: ∫_{−∞}^{∞} rect(x) e^{−iωx} dx = sin(ω/2)/(ω/2), where ω = 2πf. This pair is fundamental, as the sinc function represents the frequency response of a time-limited uniform pulse.[32]

A key symmetry arises from the duality property of the Fourier transform, which states that if g(t) has transform G(f), then G(t) has transform g(−f). Applying this to the rect–sinc pair gives the inverse relationship: the Fourier transform of the sinc function is the rectangular function. This duality highlights the interchangeability between time-limited and bandlimited signals in the frequency domain, underscoring the sinc function's role in bridging compact support in one domain with infinite extent in the other.[32]

Parseval's theorem further illustrates the sinc function's properties by equating energy across domains: ∫_{−∞}^{∞} |g(t)|² dt = ∫_{−∞}^{∞} |G(f)|² df. For the normalized sinc function, whose transform is the rect function of unit height and width 1, the time-domain energy is ∫_{−∞}^{∞} sinc²(t) dt = 1, matching the frequency-domain integral over the rect's support. This application confirms the unit energy preservation and is used to evaluate sinc-related integrals analytically.[33]

In convolution operations, the sinc function acts as the ideal kernel for bandlimiting signals. Convolving an arbitrary signal with the normalized sinc implements perfect low-pass filtering, retaining only frequency components within the sinc's main lobe bandwidth while suppressing higher frequencies. This is because the sinc's Fourier transform, being a rect, enforces a sharp cutoff in the frequency domain.[34]

The scaling property of the Fourier transform affects the sinc function's behavior in the frequency domain inversely. If the time-domain function is scaled as g(at), its transform becomes (1/|a|)G(f/a), compressing or expanding the sinc's width in inverse proportion while adjusting the amplitude. This demonstrates how time scaling alters bandwidth, essential for analyzing resolution in Fourier representations.[35]

Numerical Computation and Approximations
The sinc function is typically evaluated using the direct formula sin(πx)/(πx) for x ≠ 0, with the value sinc(0) = 1 defined to resolve the removable singularity at the origin. This definition ensures continuity and is implemented in standard numerical libraries; for example, NumPy's numpy.sinc function computes this expression element-wise for input arrays, returning 1 at zero via the limit. Similarly, MATLAB's sinc function follows the same normalized definition and handles the zero case accordingly.[36]
For small |x|, particularly near zero where direct evaluation might introduce minor floating-point cancellation in the numerator and denominator, a truncated Taylor series provides an accurate alternative. The series for the unnormalized sinc(x) = sin(x)/x (with sinc(0) = 1) is derived from the Taylor expansion of sin(x) divided by x:

sinc(x) = ∑_{k=0}^N (−1)^k x^{2k}/(2k + 1)! + R_N(x),

where the remainder R_N(x) is bounded by the next term, |x|^{2N+2}/(2N + 3)!, for small |x|, enabling efficient polynomial evaluation up to desired order N. This approach is useful for high-precision arithmetic or when avoiding trigonometric function calls.
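Such a truncated series is naturally evaluated by Horner's rule in the variable u = x², a minimal sketch (the helper `sinc_u_small` is illustrative, not a library routine):

```python
import math

def sinc_u_small(x, N=8):
    """Truncated Taylor series of sin(x)/x, evaluated by Horner's rule in u = x^2."""
    u = x * x
    # Coefficients c_k = (-1)^k / (2k+1)!; accumulate from the highest order down.
    acc = (-1.0) ** N / math.factorial(2 * N + 1)
    for k in range(N - 1, -1, -1):
        acc = acc * u + (-1.0) ** k / math.factorial(2 * k + 1)
    return acc

for x in (0.0, 1e-8, 1e-3, 0.1, 0.5):
    exact = 1.0 if x == 0 else math.sin(x) / x
    # Remainder is bounded by the first omitted term, |x|^(2N+2)/(2N+3)!.
    assert abs(sinc_u_small(x) - exact) < 1e-12
```

For |x| ≤ 0.5 the first omitted term at N = 8 is already far below double-precision resolution, so the polynomial matches the closed form to machine accuracy without calling a trigonometric routine.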
Computing sinc(x) for large |x| relies on the same direct formula, but the term sin(πx) requires argument reduction modulo 2π to keep the input within the principal range. For very large |x| in IEEE 754 double precision, this reduction loses significant accuracy due to the irrationality of π, rendering the fractional part of πx imprecise and causing sin(πx) to deviate from its true value by up to the full range [−1, 1]. Specialized techniques, such as those employing higher-precision constants for reduction or extended-precision intermediates, are employed in advanced libraries to maintain accuracy in such cases.