Integral transform
In mathematics, an integral transform is a type of transform that maps a function from its original function space into another function space via integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform.
General form
An integral transform is any transform $T$ of the following form:

$$(Tf)(u) = \int_{t_1}^{t_2} f(t)\, K(t, u)\, dt$$

The input of this transform is a function $f$, and the output is another function $Tf$. An integral transform is a particular kind of mathematical operator.
There are numerous useful integral transforms. Each is specified by a choice of the function $K$ of two variables, called the kernel or nucleus of the transform.

Some kernels have an associated inverse kernel $K^{-1}(u, t)$ which (roughly speaking) yields an inverse transform:

$$f(t) = \int_{u_1}^{u_2} (Tf)(u)\, K^{-1}(u, t)\, du$$
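To make the kernel/inverse-kernel pairing concrete, here is a small numerical sketch (added for illustration, not part of the original article) that approximates a Fourier-type transform and its inverse by Riemann sums; the grid bounds and step size are arbitrary choices, and the kernel convention is one of several in use.

```python
import cmath
import math

def transform(f, kernel, grid):
    """Approximate (Tf)(u) = integral of f(t) * K(t, u) dt by a Riemann sum."""
    dt = grid[1] - grid[0]
    return lambda u: sum(f(t) * kernel(t, u) for t in grid) * dt

# One common Fourier convention: kernel e^{-iut}, inverse kernel e^{+iut}/(2*pi).
forward_kernel = lambda t, u: cmath.exp(-1j * u * t)
inverse_kernel = lambda u, t: cmath.exp(1j * u * t) / (2 * math.pi)

grid = [-10 + 0.05 * k for k in range(401)]   # covers [-10, 10]
f = lambda t: math.exp(-t * t / 2)            # a Gaussian test function

F = transform(f, forward_kernel, grid)        # forward transform
f_back = transform(F, inverse_kernel, grid)   # apply the inverse kernel

# With this convention the transform of this Gaussian is sqrt(2*pi)*e^{-u^2/2},
# and the inverse transform recovers f, up to discretization error.
print(abs(F(1.0) - math.sqrt(2 * math.pi) * math.exp(-0.5)))
print(abs(f_back(0.0) - f(0.0)))
```

Both printed residuals are tiny, which is the point of the inverse-kernel definition: integrating the transformed function against $K^{-1}$ undoes the transform.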
A symmetric kernel is one that is unchanged when the two variables are permuted; it is a kernel function $K$ such that $K(t, u) = K(u, t)$. In the theory of integral equations, symmetric kernels correspond to self-adjoint operators.[1]
Motivation
There are many classes of problems that are difficult to solve—or at least quite unwieldy algebraically—in their original representations. An integral transform "maps" an equation from its original "domain" into another domain, in which manipulating and solving the equation may be much easier than in the original domain. The solution can then be mapped back to the original domain with the inverse of the integral transform.
Many applications of probability rely on integral transforms, such as the "pricing kernel" or stochastic discount factor, and the smoothing of data recovered from robust statistics; see kernel (statistics).
History
The precursors of the integral transforms were the Fourier series, used to express functions on finite intervals. The Fourier transform was later developed to remove the requirement of finite intervals.
Using the Fourier series, just about any practical function of time (the voltage across the terminals of an electronic device for example) can be represented as a sum of sines and cosines, each suitably scaled (multiplied by a constant factor), shifted (advanced or retarded in time) and "squeezed" or "stretched" (increasing or decreasing the frequency). The sines and cosines in the Fourier series are an example of an orthonormal basis.
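As a concrete illustration of this scaling-and-summing idea (a sketch added here, not part of the original text), the square wave $\operatorname{sign}(\sin t)$ has the classical Fourier series $(4/\pi)\sum_{k \text{ odd}} \sin(kt)/k$, and its partial sums converge toward the wave as more harmonics are kept:

```python
import math

def square_wave_partial(t, n_terms):
    """Partial Fourier sum of the square wave sign(sin t):
    (4/pi) * (sin t + sin 3t / 3 + sin 5t / 5 + ...), using n_terms odd harmonics."""
    return (4 / math.pi) * sum(
        math.sin(k * t) / k for k in range(1, 2 * n_terms, 2)
    )

# On (0, pi) the square wave equals +1; the partial sums home in on that value.
for n in (1, 10, 100):
    print(n, square_wave_partial(math.pi / 2, n))
```

Each sine term is a scaled and frequency-stretched copy of the same basic oscillation, which is exactly the decomposition described above.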
Usage example
As an example of an application of integral transforms, consider the Laplace transform. This is a technique that maps differential or integro-differential equations in the "time" domain into polynomial equations in what is termed the "complex frequency" domain. (Complex frequency is similar to actual, physical frequency but rather more general. Specifically, the imaginary component ω of the complex frequency s = −σ + iω corresponds to the usual concept of frequency, viz., the rate at which a sinusoid cycles, whereas the real component σ of the complex frequency corresponds to the degree of "damping", i.e. an exponential decrease of the amplitude.)

The equation cast in terms of complex frequency is readily solved in the complex frequency domain (roots of the polynomial equations in the complex frequency domain correspond to eigenvalues in the time domain), leading to a "solution" formulated in the frequency domain. Employing the inverse transform, i.e., the inverse procedure of the original Laplace transform, one obtains a time-domain solution. In this example, polynomials in the complex frequency domain (typically occurring in the denominator) correspond to power series in the time domain, while axial shifts in the complex frequency domain correspond to damping by decaying exponentials in the time domain.
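To sketch this time-domain/complex-frequency correspondence numerically (an added illustration; the truncation bound and step count are arbitrary choices): the ODE $y' + ay = 0$, $y(0) = 1$ transforms to $sY(s) - 1 + aY(s) = 0$, giving the algebraic solution $Y(s) = 1/(s + a)$, whose inverse transform is $y(t) = e^{-at}$. A direct numerical Laplace integral of that time-domain solution reproduces $Y(s)$:

```python
import math

def laplace(f, s, T=40.0, n=200_000):
    """Approximate F(s) = integral_0^inf f(t) e^{-s t} dt with a midpoint rule,
    truncating the integral at T (adequate when f(t) e^{-s t} decays quickly)."""
    dt = T / n
    return sum(
        f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt) for k in range(n)
    ) * dt

a = 2.0
y = lambda t: math.exp(-a * t)   # time-domain solution of y' + a*y = 0, y(0) = 1
for s in (1.0, 3.0):
    print(s, laplace(y, s), 1 / (s + a))   # numeric integral vs algebraic Y(s)
```

The two printed columns agree to many digits, showing how differentiation in time has become simple algebra in $s$.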
The Laplace transform finds wide application in physics and particularly in electrical engineering, where the characteristic equations that describe the behavior of an electric circuit in the complex frequency domain correspond to linear combinations of exponentially scaled and time-shifted damped sinusoids in the time domain. Other integral transforms find special applicability within other scientific and mathematical disciplines.
Another usage example is the kernel in the path integral:

$$\psi(x, t) = \int_{-\infty}^{\infty} K(x, t; x', t')\, \psi(x', t')\, dx'$$

This states that the total amplitude $\psi(x, t)$ to arrive at $(x, t)$ is the sum (the integral) over all possible values of $x'$ of the total amplitude $\psi(x', t')$ to arrive at the point $(x', t')$, multiplied by the amplitude to go from $x'$ to $x$, i.e. $K(x, t; x', t')$.[2] It is often referred to as the propagator for a given system. This (physics) kernel is the kernel of the integral transform. However, for each quantum system, there is a different kernel.[3]
Table of transforms
| Transform | Symbol | K | f(t) | t1 | t2 | K⁻¹ | u1 | u2 |
|---|---|---|---|---|---|---|---|---|
| Abel transform | F, f | $\frac{2t}{\sqrt{t^2 - u^2}}$ | | $u$ | $\infty$ | $\frac{-1}{\pi\sqrt{u^2 - t^2}}\frac{d}{du}$[4] | $t$ | $\infty$ |
| Associated Legendre transform | | | | | | | | |
| Fourier transform | $\mathcal{F}$ | $\frac{e^{-iut}}{\sqrt{2\pi}}$ | | $-\infty$ | $\infty$ | $\frac{e^{iut}}{\sqrt{2\pi}}$ | $-\infty$ | $\infty$ |
| Fourier sine transform | $\mathcal{F}_s$ | $\sqrt{\tfrac{2}{\pi}}\sin(ut)$ | on $[0, \infty)$, real-valued | $0$ | $\infty$ | $\sqrt{\tfrac{2}{\pi}}\sin(ut)$ | $0$ | $\infty$ |
| Fourier cosine transform | $\mathcal{F}_c$ | $\sqrt{\tfrac{2}{\pi}}\cos(ut)$ | on $[0, \infty)$, real-valued | $0$ | $\infty$ | $\sqrt{\tfrac{2}{\pi}}\cos(ut)$ | $0$ | $\infty$ |
| Hankel transform | | $t\, J_\nu(ut)$ | | $0$ | $\infty$ | $u\, J_\nu(ut)$ | $0$ | $\infty$ |
| Hartley transform | | $\frac{\cos(ut) + \sin(ut)}{\sqrt{2\pi}}$ | | $-\infty$ | $\infty$ | $\frac{\cos(ut) + \sin(ut)}{\sqrt{2\pi}}$ | $-\infty$ | $\infty$ |
| Hermite transform | | | | | | | | |
| Hilbert transform | | $\frac{1}{\pi}\frac{1}{u - t}$ | | $-\infty$ | $\infty$ | | $-\infty$ | $\infty$ |
| Jacobi transform | | | | | | | | |
| Laguerre transform | | | | | | | | |
| Laplace transform | $\mathcal{L}$ | $e^{-ut}$ | | $0$ | $\infty$ | $\frac{e^{ut}}{2\pi i}$ | $c - i\infty$ | $c + i\infty$ |
| Legendre transform | | $P_n(t)$ | | $-1$ | $1$ | | | |
| Mellin transform | $\mathcal{M}$ | $t^{u-1}$ | | $0$ | $\infty$ | $\frac{t^{-u}}{2\pi i}$[5] | $c - i\infty$ | $c + i\infty$ |
| Two-sided Laplace transform | | $e^{-ut}$ | | $-\infty$ | $\infty$ | $\frac{e^{ut}}{2\pi i}$ | $c - i\infty$ | $c + i\infty$ |
| Poisson kernel | | $\frac{1 - r^2}{1 - 2r\cos\theta + r^2}$ | | $0$ | $2\pi$ | | | |
| Radon transform | Rƒ | $\delta(x\cos\theta + y\sin\theta - s)$ | | | | | | |
| Weierstrass transform | | $\frac{e^{-(u - t)^2/4}}{\sqrt{4\pi}}$ | | $-\infty$ | $\infty$ | | | |
| X-ray transform | Xƒ | | | | | | | |
In the limits of integration for the inverse transform, c is a constant which depends on the nature of the transform function. For example, for the one- and two-sided Laplace transform, c must be greater than the real part of every singularity of the transform function.
Note that there are alternative notations and conventions for the Fourier transform.
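As a quick numerical sanity check of one table entry (added here; the truncation bound and step count are arbitrary): with kernel $t^{u-1}$ on $(0, \infty)$, the Mellin transform of $f(t) = e^{-t}$ is the gamma function $\Gamma(u)$.

```python
import math

def mellin(f, u, X=40.0, n=200_000):
    """Approximate the Mellin transform integral_0^inf t^{u-1} f(t) dt
    with a midpoint rule truncated at X (fine for rapidly decaying f)."""
    dx = X / n
    return sum(
        ((k + 0.5) * dx) ** (u - 1) * f((k + 0.5) * dx) for k in range(n)
    ) * dx

f = lambda t: math.exp(-t)
for u in (2.0, 3.5):
    print(u, mellin(f, u), math.gamma(u))   # the two columns agree closely
```

This is the classical integral representation of the gamma function, recovered directly from the table's kernel convention.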
Different domains
Here integral transforms are defined for functions on the real numbers, but they can be defined more generally for functions on a group.
- If instead one uses functions on the circle (periodic functions), integration kernels are then biperiodic functions; convolution by functions on the circle yields circular convolution.
- If one uses functions on the cyclic group of order n (Cn or Z/nZ), one obtains n × n matrices as integration kernels; convolution corresponds to circulant matrices.
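A minimal sketch of the second point (illustrative, plain Python): convolution by a fixed function on Z/nZ is exactly multiplication by the circulant matrix built from that function.

```python
# Convolution on the cyclic group Z/nZ is multiplication by a circulant
# matrix whose columns are cyclic shifts of the kernel sequence.

def cyclic_convolve(f, g):
    """(f * g)[i] = sum_j f[j] * g[(i - j) mod n]."""
    n = len(f)
    return [sum(f[j] * g[(i - j) % n] for j in range(n)) for i in range(n)]

def circulant(g):
    """n x n matrix with entries M[i][j] = g[(i - j) mod n]."""
    n = len(g)
    return [[g[(i - j) % n] for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

f = [1, 2, 3, 4]
g = [1, 0, -1, 2]
print(cyclic_convolve(f, g))
print(matvec(circulant(g), f))   # identical to the convolution
```

The two printed lists coincide, which is the finite-group analogue of "convolution by a kernel is an integral transform".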
General theory
Although the properties of integral transforms vary widely, they have some properties in common. For example, every integral transform is a linear operator, since the integral is a linear operator, and in fact if the kernel is allowed to be a generalized function then all suitably continuous linear operators are integral transforms (a properly formulated version of this statement is the Schwartz kernel theorem).
The general theory of such integral equations is known as Fredholm theory. In this theory, the kernel is understood to be a compact operator acting on a Banach space of functions. Depending on the situation, the kernel is then variously referred to as the Fredholm operator, the nuclear operator or the Fredholm kernel.
References
- ^ Morse, P. M. & Feshbach, H., Methods of Theoretical Physics, Vol. I, Chapter 8.2.
- ^ Feynman, R. P. & Hibbs, A. R., Quantum Mechanics and Path Integrals, emended edition, Eq. 3.42.
- ^ "Mathematically, what is the kernel in path integral?"
- ^ Assuming the Abel transform is not discontinuous at $u = t$.
- ^ Some conditions apply; see Mellin inversion theorem for details.
Further reading
- A. D. Polyanin and A. V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, 1998. ISBN 0-8493-2876-4
- R. K. M. Thambynayagam, The Diffusion Handbook: Applied Solutions for Engineers, McGraw-Hill, New York, 2011. ISBN 978-0-07-175184-1
- "Integral transform", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
- Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
Integral transform
Fundamentals
General Form
An integral transform is a linear mapping that converts a function defined on a domain, typically time or space, into another function in a transformed domain via an integral operation. The general form of such a transform is

$$F(u) = \int_{t_1}^{t_2} f(t)\, K(t, u)\, dt,$$

where $K(t, u)$ is the kernel function that encodes the specific type of transform, and the limits $t_1$ to $t_2$ define the integration range over the original variable $t$.[1][5] This formulation assumes appropriate conditions on $f$ and $K$ to ensure convergence of the integral. The inverse transform recovers the original function from the transformed one, typically through a similar integral expression:

$$f(t) = \int_{u_1}^{u_2} F(u)\, K^{-1}(u, t)\, du,$$

where $K^{-1}$ is the inverse kernel, and the limits $u_1$ to $u_2$ correspond to the range in the transformed variable $u$.[5][2] The measure $du$ reflects standard Lebesgue integration in the transform space, with $u$ commonly denoting the transform variable, such as frequency or a complex parameter.

Integral transforms can be classified as unilateral or bilateral based on the integration limits. Bilateral transforms integrate over the entire real line, from $-\infty$ to $\infty$, and suit functions defined on all real numbers, as in the Fourier transform.[2] Unilateral transforms, like the Laplace transform, integrate from $0$ to $\infty$, applying to causal functions or those with support on the non-negative reals.[5] These distinctions affect the applicability and inversion procedures of the transform.

Motivation
Integral transforms play a pivotal role in mathematical analysis by converting complex differential equations into simpler algebraic equations, thereby facilitating their solution. For instance, differentiation in the original domain often becomes multiplication by a parameter in the transformed domain, while convolutions (integral operations that model systems such as linear time-invariant processes) transform into straightforward pointwise multiplications. This algebraic simplification is particularly valuable in engineering and physics, where differential equations describe dynamic systems, allowing analysts to leverage familiar techniques from algebra rather than advanced differential methods.[6]

A key advantage of integral transforms lies in their ability to handle boundary value problems and initial conditions through a natural domain shift, embedding these constraints directly into the transformed equations without explicit enforcement during solving. In boundary value problems, such as those arising in heat conduction or wave propagation, Fourier-type transforms incorporate spatial periodicity or decay conditions seamlessly, avoiding the need for series expansions or Green's functions in the original variables. Similarly, for initial value problems, the Laplace transform folds the time-zero state into the transform parameter, simplifying the treatment of transient behaviors in systems like electrical circuits or mechanical vibrations. This approach reduces computational complexity and error propagation in both analytical and numerical contexts.[6][7]

The conceptual shift enabled by integral transforms, from time or spatial domains to frequency or momentum domains, provides profound insight into oscillatory or periodic phenomena, where direct analysis in the original domain may obscure underlying patterns.
In the frequency domain, the components of a signal or wave are decomposed into their constituent frequencies, revealing resonances, damping, or harmonic structures that are difficult to discern amid time-varying complexities. This perspective is essential for understanding phenomena like vibrations in structures or electromagnetic waves, where the transformed representation highlights the energy distribution across scales.[8][9]

Beyond these core benefits, integral transforms find broad utility in signal processing for filtering noise and compressing data, in physics for modeling wave propagation and quantum scattering, and in numerical methods for efficient approximations via spectral techniques. In signal processing, they enable the isolation of frequency bands to enhance or suppress specific features, as in audio equalization or image enhancement. In physics, applications span optics and acoustics, where transforms simplify the solution of the Helmholtz equation governing wave behavior. Numerically, they underpin fast algorithms for partial differential equation solvers, improving accuracy and speed in simulations of fluid dynamics or electromagnetic fields. These applications underscore the transforms' versatility in bridging theoretical mathematics with practical problem-solving across disciplines.[10]

Historical Development
Early Contributions
The concept of integral transforms emerged from early efforts to solve differential equations arising in physics and astronomy during the 18th century, with Leonhard Euler laying foundational groundwork through his work on special functions that anticipated transform methods. In the 1760s, Euler explored integrals that would later be recognized as precursors to integral transforms, particularly through his investigations of the beta and gamma functions, which he used to generalize factorials and evaluate infinite products and series in problems of interpolation and summation. These functions, expressed as definite integrals, provided tools for transforming problems in analysis into more tractable forms, influencing subsequent developments in solving ordinary differential equations (ODEs). Euler's contributions in this period, detailed in his correspondence and publications with the St. Petersburg Academy, marked an early shift toward integral representations in mathematical physics.[11]

Pierre-Simon Laplace advanced these ideas significantly in the late 18th and early 19th centuries by developing what became known as the Laplace transform, initially as a method to solve linear ODEs encountered in celestial mechanics and astronomy. Beginning in the 1770s, Laplace applied integral transformations to analyze planetary perturbations and gravitational interactions, transforming differential equations into algebraic ones for easier resolution. His seminal work in this area appeared in papers from 1774 onward, where he used the transform to address probability distributions and mechanical systems, and was further elaborated in his multi-volume Mécanique Céleste (1799–1825), which applied these techniques to the stability of the solar system. Laplace's approach, rooted in operational calculus, demonstrated the power of integrals for inverting differential operators in physical contexts like orbital mechanics.[12]

Adrien-Marie Legendre contributed to the early theory in the 1780s through his studies of spherical harmonics, which involved integral expansions for representing gravitational potentials on spheres. In 1782, Legendre introduced polynomials that facilitated the decomposition of functions on the sphere into orthogonal series, serving as a transform for problems in geodesy and astronomy. These harmonics, derived from Legendre's work on the attraction of spheroids, provided a basis for integral representations of potentials, influencing later transform methods in three-dimensional settings. His developments, published in Mémoires de l'Académie Royale des Sciences, emphasized orthogonality and convergence, key features of modern integral transforms.[13]

Joseph Fourier's 1822 publication of Théorie Analytique de la Chaleur represented a pivotal advancement by introducing the Fourier series and integral as tools for solving the heat equation in conduction problems. Motivated by empirical studies of heat diffusion, Fourier expanded periodic functions into trigonometric series, enabling the transformation of partial differential equations into ordinary ones via separation of variables. This work, building on earlier trigonometric series by Euler and Bernoulli, established the Fourier transform's role in frequency analysis for physical phenomena, with applications to wave propagation and thermodynamics. Fourier's methods, rigorously justified through his prize-winning memoir of 1807 and the 1822 treatise, shifted the focus toward integral forms for non-periodic functions, setting the stage for broader applications.[14]

Modern Advancements
In the early 20th century, David Hilbert's work on spectral theory, spanning 1904 to 1912, laid the groundwork for understanding abstract integral operators through the analysis of integral equations. Hilbert's investigations into self-adjoint integral operators revealed the spectral decomposition, where operators could be diagonalized in a continuous spectrum, extending beyond discrete eigenvalues and influencing the formalization of integral transforms as operators on function spaces.[15] This spectral approach, detailed in his six papers on integral equations from 1904 to 1910 and culminating in his 1912 extension to infinite-dimensional spaces, provided a rigorous framework for treating integral transforms as bounded linear operators, bridging classical analysis with modern operator theory.[16]

The Mellin transform, developed in the 1890s, emerged as a key tool for handling multiplicative convolutions, particularly in problems involving products of functions or scaling properties. Hjalmar Mellin's foundational contributions around 1897 formalized the transform's role in converting multiplicative operations into additive ones via its kernel, enabling efficient solutions to integral equations with power-law behaviors, such as those in asymptotic analysis and special functions.[17] By the mid-1910s, extensions by Mellin and contemporaries such as Barnes emphasized its utility for Mellin-Barnes integrals, which resolved complex contour integrals arising in number theory and physics, solidifying its place in transform theory.[18]

In the 1940s, the Z-transform was introduced to address discrete-time signals in control systems, marking a shift toward digital applications of transform methods. Developed amid post-World War II advancements in sampled-data systems, particularly for radar and servo mechanisms, the transform was formalized by John R. Ragazzini and Lotfi A. Zadeh in their 1952 paper, which adapted continuous Laplace methods to discrete sequences using the generating-function approach. This innovation facilitated stability analysis and the design of feedback controllers, with early applications in the late 1940s at institutions like Columbia University, where it enabled the transition from analog to digital control theory.[19]

The 1980s saw the rise of wavelet transforms, offering superior localized analysis compared to traditional Fourier methods, especially for non-stationary signals. Jean Morlet's 1982 work on wave propagation in seismology introduced the continuous wavelet transform using Gaussian-modulated plane waves, providing the time-frequency resolution ideal for detecting transient features in geophysical data. Building on this, Ingrid Daubechies' 1988 construction of compactly supported orthonormal wavelets enabled discrete implementations with finite energy preservation, revolutionizing signal compression and multiresolution analysis in fields like image processing.[20]

Although introduced in 1917, the Radon transform experienced significant post-1970s advancements in quantum mechanics and tomography, leveraging computational power for practical reconstructions. In medical imaging, Godfrey Hounsfield's 1972 computed tomography (CT) scanner applied the inverse Radon transform to X-ray projections, enabling 3D density mapping with sub-millimeter resolution and transforming diagnostic radiology.[21] In quantum mechanics, more recent extensions incorporate the Radon transform into quantum tomography schemes, where it reconstructs quantum states from marginal distributions, as explored in symplectic formulations for phase-space representations since the 1990s.[22]

Mid-20th-century developments in functional analysis and operator theory profoundly shaped integral transforms by embedding them within Hilbert and Banach spaces. From the 1930s onward, the spectral theorem for compact operators, advanced by figures like John von Neumann, treated integral kernels as Hilbert-Schmidt operators, unifying transforms under bounded linear mappings and enabling convergence proofs for series expansions.[16] This operator-theoretic perspective, consolidated by the 1950s through works on unbounded operators and distributions, facilitated generalizations like pseudo-differential operators, influencing applications in partial differential equations and quantum field theory.[23]

Practical Applications
Illustrative Example
A classic illustrative example of an integral transform in action is the application of the Fourier transform to solve the one-dimensional heat equation, which models diffusion processes such as heat conduction in an infinite rod:

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2},$$

where $u(x, t)$ is the temperature at position $x$ and time $t$, and $\alpha$ is the thermal diffusivity.[24] To solve this partial differential equation (PDE) with initial condition $u(x, 0) = u_0(x)$, apply the Fourier transform to both sides with respect to the spatial variable $x$. The forward Fourier transform is defined as

$$\hat{u}(k, t) = \int_{-\infty}^{\infty} u(x, t)\, e^{-ikx}\, dx.$$

Transforming the PDE yields an ordinary differential equation (ODE) in the frequency domain:

$$\frac{d\hat{u}}{dt} = -\alpha k^2 \hat{u},$$

with initial condition $\hat{u}(k, 0) = \hat{u}_0(k)$.[24] This first-order ODE is straightforward to solve:

$$\hat{u}(k, t) = \hat{u}_0(k)\, e^{-\alpha k^2 t}.$$

Applying the inverse Fourier transform,

$$u(x, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{u}(k, t)\, e^{ikx}\, dk,$$

gives the solution in the spatial domain. This can equivalently be expressed as a convolution:

$$u(x, t) = \int_{-\infty}^{\infty} G(x - y, t)\, u_0(y)\, dy, \qquad G(x, t) = \frac{1}{\sqrt{4\pi\alpha t}}\, e^{-x^2/(4\alpha t)},$$

where the kernel $G$ is the fundamental solution representing instantaneous point-source diffusion.[24]

For a Gaussian initial condition with variance $\sigma^2$, the solution remains Gaussian but spreads over time, with variance $\sigma^2 + 2\alpha t$. This illustrates the physical interpretation of the heat equation, where the initial concentrated profile diffuses, with the variance increasing linearly in $t$, demonstrating how the Fourier transform simplifies the PDE to an algebraic multiplication in the frequency domain before inversion reveals the time-evolved spreading behavior.[25]

Table of Common Transforms
The table below compares several widely used integral transforms, detailing their forward and inverse formulas, kernels, domains, and primary applications.

| Transform Name | Forward Formula | Inverse Formula | Kernel | Original Domain | Transform Domain | Main Applications |
|---|---|---|---|---|---|---|
| Fourier | $\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt$ | $f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega)\, e^{i\omega t}\, d\omega$ | $e^{-i\omega t}$ | Real line ($t$, time or space) | Frequency ($\omega$) | Decomposition of signals; solving partial differential equations in physics and engineering.[26] |
| Laplace | $F(s) = \int_0^{\infty} f(t)\, e^{-st}\, dt$ | $f(t) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} F(s)\, e^{st}\, ds$ (Bromwich integral, $c$ to the right of all singularities) | $e^{-st}$ | Non-negative reals ($t \ge 0$) | Complex plane ($s$, $\operatorname{Re} s > c$) | Analysis of control systems and linear ordinary differential equations in electrical engineering.[27] |
| Mellin | $M(s) = \int_0^{\infty} x^{s-1} f(x)\, dx$ | $f(x) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} x^{-s} M(s)\, ds$ | $x^{s-1}$ | Positive reals ($x > 0$) | Complex plane ($s$, vertical strip) | Problems involving scaling and multiplicative convolutions; connections to number theory via the Riemann zeta function.[28] |
| Hankel (order zero) | $F(k) = \int_0^{\infty} f(r)\, J_0(kr)\, r\, dr$ | $f(r) = \int_0^{\infty} F(k)\, J_0(kr)\, k\, dk$ | $r\, J_0(kr)$ (Bessel function of the first kind, order zero) | Non-negative reals ($r \ge 0$, radial coordinate) | Non-negative reals ($k \ge 0$, radial frequency) | Solutions to partial differential equations with radial or cylindrical symmetry in two dimensions.[29] |
| Radon | $Rf(\theta, s) = \iint f(x, y)\, \delta(x\cos\theta + y\sin\theta - s)\, dx\, dy$ | Filtered backprojection | Line integral (delta-function projection) | Plane ($(x, y)$) | Projection space ($(\theta, s)$, angle and offset) | Image reconstruction in computed tomography and projection-based imaging.[30] |
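The worked heat-equation example above can also be checked numerically. The sketch below (an added illustration; grid extent, step, and the parameter values are arbitrary choices) convolves a Gaussian initial profile with the heat kernel and compares the result against a Gaussian whose variance has grown to $\sigma^2 + 2\alpha t$:

```python
import math

def heat_kernel(x, t, alpha):
    """Fundamental solution of u_t = alpha * u_xx on the real line."""
    return math.exp(-x * x / (4 * alpha * t)) / math.sqrt(4 * math.pi * alpha * t)

def evolve(u0, x, t, alpha, ys):
    """u(x, t) = integral of G(x - y, t) * u0(y) dy, by a Riemann sum."""
    dy = ys[1] - ys[0]
    return sum(heat_kernel(x - y, t, alpha) * u0(y) for y in ys) * dy

alpha, sigma, t = 0.5, 1.0, 2.0
u0 = lambda y: math.exp(-y * y / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)
ys = [-15 + 0.01 * k for k in range(3001)]   # covers [-15, 15]

# Prediction from the example: a Gaussian with variance sigma^2 + 2*alpha*t.
var = sigma ** 2 + 2 * alpha * t
exact = lambda x: math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
for x in (0.0, 1.0):
    print(x, evolve(u0, x, t, alpha, ys), exact(x))
```

The convolved and analytic values agree to within discretization error, confirming the linear-in-time growth of the variance.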
