Orthogonal functions

from Wikipedia

In mathematics, orthogonal functions belong to a function space that is a vector space equipped with a bilinear form. When the function space has an interval as the domain, the bilinear form may be the integral of the product of functions over the interval:

$$\langle f, g \rangle = \int f(x)\, g(x)\, dx.$$

The functions $f$ and $g$ are orthogonal when this integral is zero, i.e. $\langle f, g \rangle = 0$ whenever $f \neq g$. As with a basis of vectors in a finite-dimensional space, orthogonal functions can form an infinite basis for a function space. Conceptually, the above integral is the equivalent of a vector dot product; two vectors are mutually independent (orthogonal) if their dot product is zero.

Suppose $\{f_0, f_1, f_2, \ldots\}$ is a sequence of orthogonal functions of nonzero $L^2$-norms $\|f_n\|_2 = \sqrt{\langle f_n, f_n \rangle}$. It follows that the sequence $\{f_n / \|f_n\|_2\}$ consists of functions of $L^2$-norm one, forming an orthonormal sequence. To have a defined $L^2$-norm, the integral must be bounded, which restricts the functions to being square-integrable.
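These orthogonality and normalization conditions can be checked numerically. The following is a minimal sketch, assuming SciPy is available and using the unweighted inner product on $[-\pi, \pi]$; the helper name `inner` is illustrative, not a standard API.

```python
# Minimal numerical sketch of the L2 inner product on [-pi, pi];
# the helper name `inner` is illustrative, not part of any standard library.
import numpy as np
from scipy.integrate import quad

def inner(f, g, a=-np.pi, b=np.pi):
    """L2 inner product <f, g> = integral of f(x) g(x) over [a, b]."""
    val, _ = quad(lambda x: f(x) * g(x), a, b)
    return val

f = np.sin                       # f(x) = sin x
g = lambda x: np.sin(2 * x)      # g(x) = sin 2x

print(inner(f, g))               # ~0: the two functions are orthogonal
norm_f = np.sqrt(inner(f, f))    # L2-norm of sin x on [-pi, pi] is sqrt(pi)
f_hat = lambda x: f(x) / norm_f  # rescaled copy with L2-norm one
print(inner(f_hat, f_hat))       # ~1: a member of an orthonormal sequence
```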

Trigonometric functions

Several sets of orthogonal functions have become standard bases for approximating functions. For example, the sine functions $\sin nx$ and $\sin mx$ are orthogonal on the interval $(-\pi, \pi)$ when $m \neq n$ and $n$ and $m$ are positive integers. For then

$$2 \sin(mx)\, \sin(nx) = \cos\bigl((m - n)x\bigr) - \cos\bigl((m + n)x\bigr),$$

and the integral of the product of the two sine functions vanishes.[1] Together with cosine functions, these orthogonal functions may be assembled into a trigonometric polynomial to approximate a given function on the interval with its Fourier series.
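As a hedged illustration of these relations, the sketch below (assuming NumPy and SciPy; the function names are illustrative) evaluates the sine orthogonality integral and computes a few Fourier sine coefficients of a sample function.

```python
# Sketch verifying the sine orthogonality relation on (-pi, pi); the helper
# names are illustrative, not part of any standard library.
import numpy as np
from scipy.integrate import quad

def sine_product_integral(m, n):
    """Integral of sin(m x) sin(n x) over (-pi, pi)."""
    val, _ = quad(lambda x: np.sin(m * x) * np.sin(n * x), -np.pi, np.pi)
    return val

print(round(sine_product_integral(2, 3), 12))  # 0.0 for m != n
print(round(sine_product_integral(3, 3), 12))  # pi  for m == n

# Fourier sine coefficients b_n = (1/pi) * integral of f(x) sin(n x), used to
# assemble a trigonometric polynomial approximating f on (-pi, pi).
f = lambda x: x                                 # an odd test function
b = [quad(lambda x: f(x) * np.sin(n * x), -np.pi, np.pi)[0] / np.pi
     for n in range(1, 6)]
print(np.round(b, 4))                           # 2, -1, 2/3, -1/2, 2/5
```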

Polynomials

If one begins with the monomial sequence $1, x, x^2, \ldots$ on the interval $[-1, 1]$ and applies the Gram–Schmidt process, then one obtains the Legendre polynomials. Another collection of orthogonal polynomials are the associated Legendre polynomials.
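A short symbolic sketch of this construction, assuming SymPy is available (the variable names are illustrative), applies Gram–Schmidt to $1, x, x^2, x^3$ and recovers polynomials proportional to the first few Legendre polynomials.

```python
# Gram-Schmidt on the monomials 1, x, x^2, x^3 over [-1, 1] with SymPy;
# the results are rescaled so that p_n(1) = 1, matching Legendre's convention.
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    return sp.integrate(f * g, (x, -1, 1))

monomials = [x**n for n in range(4)]
orthogonal = []
for m in monomials:
    p = m
    for q in orthogonal:
        p -= inner(m, q) / inner(q, q) * q   # subtract projections onto earlier terms
    orthogonal.append(sp.expand(p))

legendre_like = [sp.expand(p / p.subs(x, 1)) for p in orthogonal]
print(legendre_like)   # [1, x, 3*x**2/2 - 1/2, 5*x**3/2 - 3*x/2]
```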

The study of orthogonal polynomials involves weight functions $w(x)$ that are inserted in the bilinear form:

$$\langle f, g \rangle = \int w(x)\, f(x)\, g(x)\, dx.$$

For Laguerre polynomials on $[0, \infty)$ the weight function is $w(x) = e^{-x}$.

Both physicists and probability theorists use Hermite polynomials on $(-\infty, \infty)$, where the weight function is $w(x) = e^{-x^2}$ or $w(x) = e^{-x^2/2}$.

Chebyshev polynomials are defined on $[-1, 1]$ and use weights $w(x) = \frac{1}{\sqrt{1 - x^2}}$ or $w(x) = \sqrt{1 - x^2}$.

Zernike polynomials are defined on the unit disk and have orthogonality of both radial and angular parts.
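The weighted orthogonality relations listed above can be spot-checked numerically. The sketch below assumes SciPy and its classical polynomial evaluators; the choices of degrees are illustrative.

```python
# Numerical check of the weighted orthogonality relations quoted above,
# using scipy.special's classical polynomial evaluators.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite, eval_chebyt, eval_laguerre

# Physicists' Hermite polynomials: weight e^{-x^2} on (-inf, inf).
h, _ = quad(lambda x: eval_hermite(2, x) * eval_hermite(3, x) * np.exp(-x**2),
            -np.inf, np.inf)
# Chebyshev polynomials of the first kind: weight 1/sqrt(1 - x^2) on [-1, 1].
t, _ = quad(lambda x: eval_chebyt(2, x) * eval_chebyt(4, x) / np.sqrt(1 - x**2),
            -1, 1)
# Laguerre polynomials: weight e^{-x} on [0, inf).
l, _ = quad(lambda x: eval_laguerre(2, x) * eval_laguerre(5, x) * np.exp(-x),
            0, np.inf)

print(np.round([h, t, l], 10))   # all approximately zero
```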

Binary-valued functions

Walsh functions and Haar wavelets are examples of orthogonal functions with discrete ranges.
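A discrete sketch, assuming SciPy's `hadamard` helper: the rows of a Sylvester-type Hadamard matrix sample the Walsh functions (in Hadamard ordering) on dyadic subintervals of $[0, 1)$, and distinct rows are orthogonal under the ordinary dot product.

```python
# Rows of a Hadamard matrix sample the Walsh functions on 2^k subintervals;
# distinct rows are mutually orthogonal.
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)            # 8 x 8 matrix with entries +1 / -1
print(H @ H.T)             # 8 * identity: rows are mutually orthogonal
print(H[3])                # one sampled Walsh function, values in {+1, -1}
```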

Rational functions

[Figure: plot of the Chebyshev rational functions of order n = 0, 1, 2, 3, and 4 between x = 0.01 and 100.]

Legendre and Chebyshev polynomials provide orthogonal families for the interval [−1, 1] while occasionally orthogonal families are required on [0, ∞). In this case it is convenient to apply the Cayley transform first, to bring the argument into [−1, 1]. This procedure results in families of rational orthogonal functions called Legendre rational functions and Chebyshev rational functions.
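The sketch below, assuming SciPy, illustrates the construction for the Chebyshev case: composing $T_n$ with the map $x \mapsto (x - 1)/(x + 1)$ gives rational functions $R_n(x) = T_n\bigl(\tfrac{x-1}{x+1}\bigr)$ that are orthogonal on $[0, \infty)$ with weight $\tfrac{1}{(x+1)\sqrt{x}}$, which follows from the change of variables in the Chebyshev orthogonality integral. The helper names are illustrative.

```python
# Chebyshev rational functions via the Cayley-type map x -> (x - 1)/(x + 1),
# checked against the weighted inner product on [0, inf).
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_chebyt

def cheb_rational(n, x):
    return eval_chebyt(n, (x - 1.0) / (x + 1.0))

def weighted_inner(m, n):
    integrand = lambda x: (cheb_rational(m, x) * cheb_rational(n, x)
                           / ((x + 1.0) * np.sqrt(x)))
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return val

print(round(weighted_inner(1, 3), 8))   # ~0: distinct orders are orthogonal
print(round(weighted_inner(2, 2), 8))   # ~pi/2: matches the Chebyshev norm
```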

In differential equations

Solutions of linear differential equations with boundary conditions can often be written as a weighted sum of orthogonal solution functions (a.k.a. eigenfunctions), leading to generalized Fourier series.
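As a hedged example of such an eigenfunction expansion, the sketch below (assuming SciPy; the names are illustrative) expands a function satisfying Dirichlet boundary conditions on $[0, \pi]$ in the eigenfunctions $\sin(nx)$ of $-y'' = \lambda y$.

```python
# Generalized Fourier series sketch: eigenfunctions of -y'' = lambda*y with
# y(0) = y(pi) = 0 are sin(n x); expand a compatible function in them.
import numpy as np
from scipy.integrate import quad

f = lambda x: x * (np.pi - x)            # example function with f(0) = f(pi) = 0

def coeff(n):
    """c_n = <f, sin(n x)> / <sin(n x), sin(n x)> on [0, pi]."""
    num, _ = quad(lambda x: f(x) * np.sin(n * x), 0, np.pi)
    return num / (np.pi / 2)             # squared norm of sin(n x) on [0, pi]

series = lambda x, N=25: sum(coeff(n) * np.sin(n * x) for n in range(1, N + 1))
xs = np.linspace(0, np.pi, 5)
print(np.round(series(xs) - f(xs), 6))   # the partial sum closely tracks f
```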

from Grokipedia
Orthogonal functions are a fundamental concept in mathematics, particularly in functional analysis and approximation theory, consisting of a set of functions that are pairwise orthogonal with respect to an inner product defined on a suitable function space, such that the inner product of any two distinct functions is zero. This orthogonality condition generalizes the notion of orthogonal vectors from finite-dimensional Euclidean spaces to infinite-dimensional Hilbert spaces, where the inner product is typically given by $\langle f, g \rangle = \int_a^b f(x)\, \overline{g(x)}\, dx$ over an interval $[a, b]$, or a real-valued variant $\int_a^b f(x)\, g(x)\, dx = 0$ for distinct non-zero functions $f$ and $g$. A collection of functions $\{\phi_n\}$ is said to be mutually orthogonal if $\langle \phi_m, \phi_n \rangle = 0$ for all $m \neq n$, and is often normalized to form an orthonormal set in which $\langle \phi_n, \phi_n \rangle = 1$ for each $n$. If the set is complete (meaning it spans the entire space), any function in the space can be uniquely expanded as a series $\sum_n c_n \phi_n(x)$, with coefficients $c_n = \langle f, \phi_n \rangle$.

Prominent examples include the trigonometric functions $\{\cos(n\pi x / L), \sin(n\pi x / L)\}$ on intervals like $[-L, L]$ or $[0, L]$, which satisfy orthogonality integrals yielding zero for distinct indices and positive norms for matching indices, as well as the complex exponentials $\{e^{inx}\}$ on $[-\pi, \pi]$. Orthogonal functions underpin key techniques in analysis, such as Fourier expansions for representing periodic functions and solving boundary value problems in partial differential equations via separation of variables. Other classical families, such as the Legendre, Hermite, and Chebyshev polynomials, provide orthogonal bases for expansions on specific intervals or with specific weight functions, enabling efficient approximations in physics, engineering, and numerical methods. Properties such as Parseval's identity, which equates the energy of a function to the sum of squared coefficients in its orthogonal expansion ($\int |f(x)|^2\, dx = \sum_n |c_n|^2$), highlight their role in preserving norms and facilitating energy-conservation arguments in applications such as signal processing.
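A numerical check of the Parseval-type identity stated above, assuming SciPy and using the orthonormal system $\{\sin(nx)/\sqrt{\pi}\}$ on $(-\pi, \pi)$ with an illustrative odd test function (so the cosine coefficients vanish):

```python
# Parseval check: the energy of f equals the sum of squared coefficients
# against an orthonormal system; the test function is illustrative.
import numpy as np
from scipy.integrate import quad

f = lambda x: x * (np.pi**2 - x**2)                  # odd, vanishes at +-pi

energy, _ = quad(lambda x: f(x)**2, -np.pi, np.pi)   # = 16*pi^7/105

def c(n):
    """Coefficient of f against the orthonormal function sin(n x)/sqrt(pi)."""
    val, _ = quad(lambda x: f(x) * np.sin(n * x) / np.sqrt(np.pi),
                  -np.pi, np.pi, limit=200)
    return val

print(energy, sum(c(n)**2 for n in range(1, 41)))    # the two values agree closely
```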

Fundamentals

Definition of orthogonality

In finite-dimensional Euclidean spaces, two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if their dot product $\mathbf{u} \cdot \mathbf{v} = 0$, meaning they are perpendicular and form a right angle. This concept provides intuition for orthogonality as a measure of independence or non-overlap in direction. For functions, the notion extends analogously to infinite-dimensional spaces. Two functions $f$ and $g$ are orthogonal over an interval $[a, b]$ if the integral $\int_a^b f(x)\, g(x)\, dx = 0$. This condition indicates that the functions do not "overlap" in a weighted-average sense across the interval, generalizing the vector case without assuming finite dimensions.

In broader mathematical frameworks, orthogonality is defined within inner product spaces, where two elements $f$ and $g$ (which may be functions or vectors) satisfy $\langle f, g \rangle = 0$. Geometrically, this implies that the angle between $f$ and $g$ is $\pi/2$ radians, since the cosine of the angle is $\cos\theta = \frac{\langle f, g \rangle}{\|f\|\, \|g\|} = 0$, preserving the perpendicularity interpretation from finite dimensions.

Inner products in function spaces

In function spaces, an inner product provides a bilinear form that generalizes the dot product from finite-dimensional vector spaces to infinite-dimensional settings, enabling the definition of orthogonality, norms, and distances between functions. The most fundamental example is the space $L^2(\Omega)$ of square-integrable functions over a domain $\Omega$, where the inner product measures the "overlap" between functions via integration. For real-valued functions in $L^2([a, b])$, the standard inner product is defined as $\langle f, g \rangle = \int_a^b f(x)\, g(x)\, dx$, where the integral exists and is finite for $f, g \in L^2([a, b])$. This form satisfies the axioms of an inner product: linearity in the second argument, $\langle f, \alpha g + \beta h \rangle = \alpha \langle f, g \rangle + \beta \langle f, h \rangle$ for scalars $\alpha, \beta$; symmetry, $\langle f, g \rangle = \langle g, f \rangle$; and positive definiteness, $\langle f, f \rangle \geq 0$ with equality if and only if $f = 0$. Common finite domains include $[-1, 1]$, as used for Legendre polynomials, or $[0, \pi]$ for Fourier sine series.

In the complex case, for functions in $L^2(\Omega)$ with complex values, the inner product incorporates the complex conjugate to ensure conjugate symmetry: $\langle f, g \rangle = \int_\Omega f(x)\, \overline{g(x)}\, dx$. This adjustment preserves linearity in the second argument and conjugate symmetry, $\langle g, f \rangle = \overline{\langle f, g \rangle}$, while maintaining positive definiteness via $\langle f, f \rangle = \int_\Omega |f(x)|^2\, dx > 0$ for $f \not\equiv 0$. For infinite domains like $\Omega = (-\infty, \infty)$, as in Hermite functions, the integral is over the entire real line, requiring the functions to decay sufficiently for square-integrability.

Weighted inner products extend the standard form by incorporating a positive weight function $w(x) > 0$ to emphasize certain regions of the domain, which is particularly useful for orthogonal polynomials on specific intervals. The general form is $\langle f, g \rangle = \int_a^b f(x)\, g(x)\, w(x)\, dx$ for real functions, or with $\overline{g(x)}$ in the complex case, where $w(x)$ ensures the integral defines a valid inner product satisfying the same axioms: positivity, linearity, and (conjugate) symmetry. Examples include $w(x) = 1$ on $[-\pi, \pi]$ for trigonometric functions or $w(x) = e^{-x^2}$ on $(-\infty, \infty)$ for Hermite polynomials, adapting the inner product to the natural measure of the space. These weights preserve the structure of a Hilbert space when the resulting space is complete with respect to the induced norm.

Normalization and orthonormal sets

In inner product spaces of functions, an orthogonal set $\{\phi_n\}$ can be normalized to form an orthonormal set by scaling each function by the reciprocal of its norm, defined as $\|\phi_n\| = \sqrt{\langle \phi_n, \phi_n \rangle}$, so that each rescaled function $\phi_n / \|\phi_n\|$ has unit norm while the pairwise orthogonality is preserved.
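A brief numerical sketch of this normalization step, assuming SciPy: the Legendre polynomials are orthogonal but not orthonormal on $[-1, 1]$ (their squared norms are $2/(2n+1)$), and dividing by the norms yields an orthonormal set.

```python
# Normalize the first few Legendre polynomials and verify orthonormality;
# the helper name `inner` is illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def inner(m, n):
    val, _ = quad(lambda x: eval_legendre(m, x) * eval_legendre(n, x), -1, 1)
    return val

norms = [np.sqrt(inner(n, n)) for n in range(4)]          # sqrt(2/(2n+1))
gram = [[round(inner(m, n) / (norms[m] * norms[n]), 10)   # <phi_m, phi_n>
         for n in range(4)] for m in range(4)]
print(np.array(gram))                                      # identity matrix
```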