Differential operator
from Wikipedia

A harmonic function defined on an annulus. Harmonic functions are exactly those functions which lie in the kernel of the Laplace operator, an important differential operator.

In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).

This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.

Definition


Given a nonnegative integer m, an order-$m$ linear differential operator is a map $P$ from a function space $\mathcal{F}_1$ on $\mathbb{R}^n$ to another function space $\mathcal{F}_2$ that can be written as:

$P = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha,$

where $\alpha = (\alpha_1, \dots, \alpha_n)$ is a multi-index of non-negative integers, $|\alpha| = \alpha_1 + \cdots + \alpha_n$, and for each $\alpha$, $a_\alpha(x)$ is a function on some open domain in n-dimensional space. The operator $D^\alpha$ is interpreted as

$D^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$

Thus for a function $f \in \mathcal{F}_1$:

$P f = \sum_{|\alpha| \le m} a_\alpha(x) \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$

The notation $D^\alpha$ is justified (i.e., independent of the order of differentiation) because of the symmetry of second derivatives.
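As a concrete illustration, the following sympy sketch applies an order-2 operator of this form to a function (the operator $P = x\,D^2 + D + 1$ is a hypothetical example chosen here, not one from the article):

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical order-2 operator P = x*D^2 + D + 1, with function coefficient a_2(x) = x
def P(u):
    return x * sp.diff(u, x, 2) + sp.diff(u, x) + u

f = sp.sin(x)
Pf = sp.expand(P(f))
# P(sin x) = x*(-sin x) + cos x + sin x
assert sp.simplify(Pf - (-x * sp.sin(x) + sp.cos(x) + sp.sin(x))) == 0
```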

The polynomial p obtained by replacing partials by variables $\xi$ in P is called the total symbol of P; i.e., the total symbol of P above is:

$p(x, \xi) = \sum_{|\alpha| \le m} a_\alpha(x) \xi^\alpha,$

where $\xi^\alpha = \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}$. The highest homogeneous component of the symbol, namely,

$\sigma(x, \xi) = \sum_{|\alpha| = m} a_\alpha(x) \xi^\alpha,$

is called the principal symbol of P.[1] While the total symbol is not intrinsically defined, the principal symbol is intrinsically defined (i.e., it is a function on the cotangent bundle).[2]

More generally, let E and F be vector bundles over a manifold X. Then the linear operator

$P : \Gamma(E) \to \Gamma(F)$

is a differential operator of order $k$ if, in local coordinates on X, we have

$P u(x) = \sum_{|\alpha| = k} P^\alpha(x) \frac{\partial^\alpha u}{\partial x^\alpha} + \text{lower-order terms},$

where, for each multi-index α, $P^\alpha(x) : E \to F$ is a bundle map, symmetric on the indices α.

The kth order coefficients of P transform as a symmetric tensor

$\sigma_P : S^k(T^*X) \otimes E \to F,$

whose domain is the tensor product of the kth symmetric power of the cotangent bundle of X with E, and whose codomain is F. This symmetric tensor is known as the principal symbol (or just the symbol) of P.

The coordinate system $x^i$ permits a local trivialization of the cotangent bundle by the coordinate differentials $dx^i$, which determine fiber coordinates $\xi_i$. In terms of a basis of frames $e_\mu$, $f_\nu$ of E and F, respectively, the differential operator P decomposes into components

$(P u)_\nu = \sum_\mu P_{\nu\mu} u_\mu$

on each section u of E. Here $P_{\nu\mu}$ is the scalar differential operator defined by

$P_{\nu\mu} = \sum_{|\alpha| \le k} P^{\alpha}_{\nu\mu} \frac{\partial^{|\alpha|}}{\partial x^{\alpha}}.$

With this trivialization, the principal symbol can now be written

$(\sigma_P(\xi) u)_\nu = \sum_{|\alpha| = k} \sum_{\mu} P^{\alpha}_{\nu\mu}\, \xi^{\alpha}\, u_{\mu}.$

In the cotangent space over a fixed point x of X, the symbol $\sigma_P$ defines a homogeneous polynomial of degree k in $T_x^* X$ with values in $\operatorname{Hom}(E_x, F_x)$.

Fourier interpretation


A differential operator P and its symbol appear naturally in connection with the Fourier transform as follows. Let ƒ be a Schwartz function. Then by the inverse Fourier transform,

$P f(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{i x \cdot \xi}\, p(x, i\xi)\, \hat{f}(\xi)\, d\xi.$

This exhibits P as a Fourier multiplier. A more general class of functions p(x,ξ) which satisfy at most polynomial growth conditions in ξ under which this integral is well-behaved comprises the pseudo-differential operators.
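For a constant-coefficient operator such as $d/dx$, the multiplier picture can be checked numerically. A minimal sketch using numpy's FFT (the grid size and test function are arbitrary choices for illustration) differentiates a periodic function by multiplying its transform by $i\xi$:

```python
import numpy as np

# Periodic grid on [0, 2*pi) and a test function
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(x)

# Angular wavenumbers xi; d/dx becomes multiplication by i*xi in Fourier space
xi = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
df = np.fft.ifft(1j * xi * np.fft.fft(f)).real

# Spectral derivative of sin is cos, to machine precision
assert np.allclose(df, np.cos(x), atol=1e-10)
```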

Examples

The del operator ($\nabla$) defines the gradient, and is used to calculate the curl, divergence, and Laplacian of various objects.

History


The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800.[3]

Notations


The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include:

$\frac{d}{dx}$, $D$, $D_x$, and $\partial_x$.

When taking higher, nth order derivatives, the operator may be written:

$\frac{d^n}{dx^n}$, $D^n$, $D_x^n$, or $\partial_x^n$.

The derivative of a function f of an argument x is sometimes given as either of the following:

$f'(x)$ or $\frac{df(x)}{dx}.$

The D notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form

$\sum_{k=0}^n c_k D^k$

in his study of differential equations.

One of the most frequently seen differential operators is the Laplacian operator, defined by

$\Delta = \nabla^2 = \sum_{k=1}^n \frac{\partial^2}{\partial x_k^2}.$

Another differential operator is the Θ operator, or theta operator, defined by[4]

$\Theta = z \frac{d}{dz}.$

This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:

$\Theta(z^k) = k z^k, \quad k = 0, 1, 2, \ldots$

In n variables the homogeneity operator is given by

$\Theta = \sum_{k=1}^n x_k \frac{\partial}{\partial x_k}.$

As in one variable, the eigenspaces of Θ are the spaces of homogeneous functions (this is Euler's homogeneous function theorem).
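The eigenfunction relation can be checked symbolically; a short sympy sketch:

```python
import sympy as sp

z = sp.symbols('z')
k = sp.symbols('k', integer=True, positive=True)

# Homogeneity (theta) operator: z * d/dz
theta = lambda u: z * sp.diff(u, z)

# Monomials are eigenfunctions: theta(z^k) = k * z^k
assert sp.simplify(theta(z**k) - k * z**k) == 0
```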

In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used: the result of applying the operator to the function on the left side of the operator, to the function on the right side, and the difference obtained when applying it to both sides, are denoted by arrows as follows:

$f \overleftarrow{\partial_x} g = g \cdot \partial_x f$

$f \overrightarrow{\partial_x} g = f \cdot \partial_x g$

$f \overleftrightarrow{\partial_x} g = f \cdot \partial_x g - g \cdot \partial_x f.$

Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.

Adjoint of an operator


Given a linear differential operator $T$, $Tu = \sum_{k=0}^n a_k(x) D^k u$, the adjoint of this operator is defined as the operator $T^*$ such that $\langle Tu, v \rangle = \langle u, T^* v \rangle$, where the notation $\langle \cdot, \cdot \rangle$ is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product (or inner product).

Formal adjoint in one variable


In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by

$\langle f, g \rangle = \int_a^b \overline{f(x)}\, g(x)\, dx,$

where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as $x \to a$ and $x \to b$, one can also define the adjoint of T by

$T^* u = \sum_{k=0}^n (-1)^k D^k \left[ \overline{a_k(x)}\, u \right].$

This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When is defined according to this formula, it is called the formal adjoint of T.
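The defining identity $\langle Tf, g \rangle = \langle f, T^* g \rangle$ can be confirmed symbolically for the simplest case $T = d/dx$, $T^* = -d/dx$; a sympy sketch using rapidly decaying real test functions (the Gaussians are arbitrary choices for illustration):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x**2)          # decays at infinity, so boundary terms vanish
g = x * sp.exp(-x**2)

lhs = sp.integrate(sp.diff(f, x) * g, (x, -sp.oo, sp.oo))     # <T f, g>
rhs = sp.integrate(f * (-sp.diff(g, x)), (x, -sp.oo, sp.oo))  # <f, T* g>
assert sp.simplify(lhs - rhs) == 0
```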

A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.

Several variables


If Ω is a domain in $\mathbb{R}^n$, and P a differential operator on Ω, then the adjoint of P is defined in $L^2(\Omega)$ by duality in the analogous manner:

$\langle f, P^* g \rangle_{L^2(\Omega)} = \langle P f, g \rangle_{L^2(\Omega)}$

for all smooth L2 functions f, g. Since smooth functions are dense in L2, this defines the adjoint on a dense subset of L2: P* is a densely defined operator.

Example


The Sturm–Liouville operator is a well-known example of a formally self-adjoint operator. This second-order linear differential operator L can be written in the form

$L u = -(p u')' + q u = -p u'' - p' u' + q u.$

This property can be proven using the formal adjoint definition above.[5]

This operator is central to Sturm–Liouville theory where the eigenfunctions (analogues to eigenvectors) of this operator are considered.
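Taking the Sturm–Liouville operator in the form $Lu = -(p u')' + qu$ and using the general formula $T^* g = \sum_k (-1)^k (a_k g)^{(k)}$ for real coefficients, formal self-adjointness can be verified symbolically; a sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')(x)
q = sp.Function('q')(x)
g = sp.Function('g')(x)

# Sturm-Liouville operator: L g = -(p g')' + q g
Lg = -sp.diff(p * sp.diff(g, x), x) + q * g

# Expanding gives L = a2*D^2 + a1*D + a0 with a2 = -p, a1 = -p', a0 = q
a2, a1, a0 = -p, -sp.diff(p, x), q
# Formal adjoint: L* g = (a2 g)'' - (a1 g)' + a0 g
Ladj_g = sp.diff(a2 * g, x, 2) - sp.diff(a1 * g, x) + a0 * g

assert sp.simplify(Lg - Ladj_g) == 0  # L is formally self-adjoint
```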

Properties


Differentiation is linear, i.e.

$D(f + g) = Df + Dg, \quad D(af) = a\,Df,$

where f and g are functions, and a is a constant.

Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule

$(D_1 \circ D_2)(f) = D_1(D_2(f)).$

Some care is then required: firstly any function coefficients in the operator $D_2$ must be differentiable as many times as the application of $D_1$ requires. To get a ring of such operators we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator gD is not in general the same as Dg. For example we have the relation basic in quantum mechanics:

$Dx - xD = 1.$

The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.
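The noncommutativity $Dx - xD = 1$ can be verified by applying both compositions to an arbitrary function; a sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# (D ∘ x)(f) = d/dx (x f)   versus   (x ∘ D)(f) = x f'
Dx_f = sp.diff(x * f, x)
xD_f = x * sp.diff(f, x)

# Their difference is f itself, so [D, x] acts as the identity operator
assert sp.simplify(Dx_f - xD_f - f) == 0
```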

The differential operators also obey the shift theorem.

Ring of polynomial differential operators


Ring of univariate polynomial differential operators


If R is a ring, let $R\langle D, X \rangle$ be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by $DX - XD - 1$. Then the ring of univariate polynomial differential operators over R is the quotient ring $R\langle D, X \rangle / I$. This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form $X^a D^b \bmod I$. It supports an analogue of Euclidean division of polynomials.

Differential modules over $R[X]$ (for the standard derivation) can be identified with modules over $R\langle D, X \rangle / I$.

Ring of multivariate polynomial differential operators


If R is a ring, let $R\langle D_1, \dots, D_n, X_1, \dots, X_n \rangle$ be the non-commutative polynomial ring over R in the variables $D_1, \dots, D_n, X_1, \dots, X_n$, and I the two-sided ideal generated by the elements

$D_i X_j - X_j D_i - \delta_{ij}, \quad D_i D_j - D_j D_i, \quad X_i X_j - X_j X_i$

for all $1 \le i, j \le n$, where $\delta$ is the Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring $R\langle D_1, \dots, D_n, X_1, \dots, X_n \rangle / I$.

This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form $X_1^{a_1} \cdots X_n^{a_n} D_1^{b_1} \cdots D_n^{b_n}$.

Coordinate-independent description


In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle Jk(E). In other words, there exists a linear mapping of vector bundles

$i_P : J^k(E) \to F$

such that

$P = i_P \circ j^k,$

where $j^k: \Gamma(E) \to \Gamma(J^k(E))$ is the prolongation that associates to any section of E its k-jet.

This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s in x. In particular this implies that P(s)(x) is determined by the germ of s in x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.

Relation to commutative algebra


An equivalent, but purely algebraic description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator, if for any k + 1 smooth functions $f_0, \dots, f_k$ we have

$[f_k, [f_{k-1}, [\cdots [f_0, P] \cdots ]]] = 0.$

Here the bracket is defined as the commutator

$[f, P](u) = P(f \cdot u) - f \cdot P(u).$
This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
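This characterization can be checked symbolically for the order-1 operator $P = d/dx$; a sympy sketch (the helper `bracket` is an illustrative construction, not standard API):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)
u = sp.Function('u')(x)

P = lambda v: sp.diff(v, x)  # order-1 operator

def bracket(op, h):
    """Commutator [h, op] of multiplication by h with an operator op."""
    return lambda v: op(h * v) - h * op(v)

inner = bracket(P, f)      # [f, P] is multiplication by f' (order 0)
outer = bracket(inner, g)  # [g, [f, P]] vanishes, so P has order <= 1

assert sp.simplify(inner(u) - sp.diff(f, x) * u) == 0
assert sp.simplify(outer(u)) == 0
```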

Variants


A differential operator of infinite order


A differential operator of infinite order is (roughly) a differential operator whose total symbol is a power series instead of a polynomial.

Invariant differential operator


An invariant differential operator is a differential operator that is also an invariant operator (e.g., commutes with a group action).

Bidifferential operator


A differential operator acting on two functions is called a bidifferential operator. The notion appears, for instance, in an associative algebra structure on a deformation quantization of a Poisson algebra.[6]

Microdifferential operator


A microdifferential operator is a type of operator on an open subset of a cotangent bundle, as opposed to an open subset of a manifold. It is obtained by extending the notion of a differential operator to the cotangent bundle.[7]

from Grokipedia
A differential operator is a mathematical operator that applies differentiation to functions, typically represented by the symbol $ D = \frac{d}{dx} $ for the first derivative in one variable, and extended to higher-order derivatives via powers like $ D^n = \frac{d^n}{dx^n} $.[1] These operators are linear, satisfying $ D(af + bg) = aD(f) + bD(g) $ for constants $ a, b $ and functions $ f, g $, and can be combined into polynomials such as $ L = a_n D^n + \cdots + a_1 D + a_0 $, where the coefficients $ a_k $ may be functions of the independent variable.[2] In multiple variables, they generalize to partial differential operators, like the Laplacian $ \Delta = \sum \frac{\partial^2}{\partial x_i^2} $, which measures the divergence of the gradient.[1] Differential operators form the foundation of the theory of differential equations, where equations of the form $ L(u) = f $ describe how functions evolve under differentiation, enabling the modeling of dynamic systems.[3] For instance, linear constant-coefficient operators facilitate the solution of ordinary differential equations by factoring into characteristic equations, revealing exponential solutions.[2] In partial differential equations, they underpin fundamental laws in physics, such as the heat equation $ \frac{\partial u}{\partial t} = k \Delta u $ for diffusion processes and the wave equation $ \frac{\partial^2 u}{\partial t^2} = c^2 \Delta u $ for vibrations.[4] Applications extend to engineering and mathematical physics, including electromagnetic fields via Maxwell's equations and fluid dynamics through the Navier-Stokes equations, where these operators capture spatial and temporal changes.[5][6] Advanced concepts, such as pseudodifferential operators, arise in microlocal analysis to handle singularities and wavefronts in solutions.[7]

Basic Concepts

Definition

A differential operator is a linear map $D: C^\infty(\Omega) \to C^\infty(\Omega)$, where $\Omega \subset \mathbb{R}^n$ is an open set and $C^\infty(\Omega)$ denotes the vector space of smooth real-valued functions on $\Omega$, that satisfies a generalized Leibniz rule characterizing its finite order.[8] Specifically, $D$ is a differential operator of order at most $k$ if, for every smooth function $g \in C^\infty(\Omega)$, the commutator $[D, m_g]: u \mapsto D(g u) - g D(u)$ (where $m_g$ denotes multiplication by $g$) is a differential operator of order at most $k-1$, with order-0 operators being precisely the multiplication operators.[8] The order of $D$ is the minimal nonnegative integer $k$ for which the $(k+1)$-fold iterated commutator with multiplication operators vanishes identically.[8] This inductive definition via commutators ensures that $D$ locally behaves like a finite combination of partial derivatives, up to smooth multiplication factors. In local coordinates $x = (x_1, \dots, x_n)$ on $\Omega$, any differential operator $D$ of order at most $k$ admits the explicit expression

$$D = \sum_{|\alpha| \leq k} a_\alpha(x)\, \partial^\alpha,$$

where $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n$ is a multi-index with $|\alpha| = \alpha_1 + \dots + \alpha_n$, $\partial^\alpha = \partial_{x_1}^{\alpha_1} \cdots \partial_{x_n}^{\alpha_n}$ denotes the corresponding partial derivative operator of order $|\alpha|$, and the coefficients $a_\alpha: \Omega \to \mathbb{R}$ are smooth functions.[9] The highest-order terms (those with $|\alpha| = k$) determine the principal symbol of $D$, which plays a key role in analyzing its global properties. More generally, differential operators extend to smooth manifolds $M$ of dimension $n$, acting as linear maps $D: C^\infty(M) \to C^\infty(M)$ that are locally of the above form in any coordinate chart.[9] On a manifold, the order $k$ is independent of the choice of coordinates, since the commutator characterization refers only to multiplication by smooth functions on $M$.[8] This framework captures extensions of classical differentiation while ensuring consistency across overlapping charts via partitions of unity.

Historical Development

The concept of differential operators traces its origins to the 18th century, amid the development of calculus, particularly in the calculus of variations. Leonhard Euler and Joseph-Louis Lagrange began treating higher-order derivatives as successive applications of a basic derivative operation acting on functions, facilitating the analysis of variational problems. Euler's foundational contributions in the 1740s, developed further through collaboration with Lagrange in the 1760s, marked an early recognition of derivatives as operator-like entities that could be composed and applied systematically to functionals.[10] This intuitive approach gained formal structure in the early 19th century with Louis François Antoine Arbogast's introduction of the standalone differential operator notation DD, which separated operational symbols from quantities and enabled algebraic manipulation of derivatives. Published in 1800 in Du calcul des dérivations et ses usages dans la théorie des suites et dans la géométrie, Arbogast's work represented a pivotal conceptual shift, allowing differential operators to be viewed independently of specific functions. Subsequent 19th-century advancements by Augustin-Louis Cauchy and Karl Weierstrass further embedded differential operators within the rigorous theory of partial differential equations (PDEs). Cauchy's 1827 analysis of the Cauchy-Riemann equations exemplified first-order differential operators in complex analysis, while his 1840 power series methods for nonlinear PDE initial value problems highlighted their role in solution existence and uniqueness. 
Weierstrass's mid-1870s emphasis on analytical rigor critiqued earlier informal approaches, influencing the study of elliptic operators and boundary value problems in PDEs.[11][12] In the early 20th century, Élie Cartan's development of exterior differential systems from 1899 onward provided a geometric framework for higher-order differential operators, integrating them with Lie groups and moving frames for solving systems of PDEs. The mid-20th century brought abstract generalizations: Laurent Schwartz's 1950-1951 theory of distributions extended differential operators to act on generalized functions, enabling solutions to PDEs beyond classical smoothness. Pseudodifferential operators, building on singular integral techniques, were advanced by Alberto Calderón in 1959 to address Cauchy problems for broad classes of PDEs. Algebraically, the ring of differential operators on smooth manifolds was formalized by Alexander Grothendieck in the 1960s, revealing deep ties to sheaf theory and D-modules. During the 1940s-1950s, connections to Lie algebras emerged, with the ring of constant-coefficient differential operators identified as the universal enveloping algebra of the translation Lie algebra, influencing representation theory and quantization. Quantum mechanics profoundly shaped this evolution, as Erwin Schrödinger in 1926 introduced differential operators like the momentum operator $-i\hbar \frac{d}{dx}$ to represent observables in wave mechanics, bridging classical analysis with operator algebras.[12][13][14]

Examples and Notation

Examples

A fundamental example of a first-order differential operator in one variable is the differentiation operator $D = \frac{d}{dx}$, which maps a smooth function $f(x)$ to its derivative $Df = f'(x)$. This operator has order 1, as its principal symbol is the nonzero linear function $\xi \mapsto i\xi$ in the cotangent variable $\xi$. It satisfies the Leibniz rule for derivations: for smooth functions $f$ and $g$, $D(fg) = f\,Dg + g\,Df = f g' + g f'$.[15] A typical second-order differential operator in one variable is $P = \frac{d^2}{dx^2} + x \frac{d}{dx}$, which arises in contexts such as the Airy differential equation. Its order is 2, determined by the highest-order term $\frac{d^2}{dx^2}$, with principal symbol $\xi \mapsto -\xi^2$. To verify that it qualifies as a differential operator of order at most 2, consider its action under multiplication: for smooth functions $f$ and $h$, compute $P(f h) - f\,P h = f'' h + 2 f' h' + x f' h$. This remainder equals $(f'' + x f') \cdot h + 2 f' \cdot h'$, which is a first-order differential operator in $h$ (order reduced by 1), confirming the Leibniz condition recursively.[15] In multiple variables, the divergence operator acts on a vector field $V = (V_1, \dots, V_n)$ by $\operatorname{div} V = \sum_{i=1}^n \frac{\partial V_i}{\partial x_i}$. This is a first-order differential operator, with principal symbol $\xi \mapsto i \sum_{i=1}^n \xi_i$ acting componentwise. It obeys the multivariable Leibniz rule: for a vector field $V$ and scalar function $f$, $\operatorname{div}(f V) = f \operatorname{div} V + \nabla f \cdot V = f \sum_{i=1}^n \frac{\partial V_i}{\partial x_i} + \sum_{i=1}^n \frac{\partial f}{\partial x_i} V_i$.
The Laplacian $\Delta = \sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}$ is a second-order operator on scalar functions, with principal symbol $\xi \mapsto -\lVert \xi \rVert^2$, and satisfies the corresponding higher-order Leibniz condition, such as $\Delta(f h) - f\,\Delta h = 2 \sum_{i=1}^n \frac{\partial f}{\partial x_i} \frac{\partial h}{\partial x_i} + h\,\Delta f$, where the remainder is of order at most 1 in $h$.[15] On Riemannian manifolds, the covariant derivative $\nabla$ provides a first-order differential operator that generalizes partial differentiation to tensor fields while respecting the manifold's geometry. For a vector field $Y$ along a curve with tangent $X$, $\nabla_X Y$ measures the rate of change of $Y$ relative to parallel transport by the connection, satisfying $\nabla_X (f Y) = f \nabla_X Y + (X f) Y$ as the Leibniz rule. Its order is 1, with principal symbol determined by the metric and connection form.[16] Similarly, the Dirac operator $D$ on a spinor bundle over a Riemannian manifold is a first-order elliptic differential operator, locally expressed as $D = \sum_{j=1}^n e_j \cdot \nabla_{e_j}$ for an orthonormal frame $\{e_j\}$, where $\cdot$ denotes Clifford multiplication. As a first-order operator, it satisfies the Leibniz rule $D(f s) = f\,D s + c(df)\, s$ for a smooth function $f$ and spinor section $s$, where $c$ denotes Clifford multiplication. Its principal symbol is $\sigma_D(\xi) = i\, c(\xi)$, which is invertible for $\xi \neq 0$.[17] An example of a differential operator mixing derivatives in several variables is the heat operator $L = \frac{\partial}{\partial t} - \Delta$, acting on functions $u(t, x)$ on $\mathbb{R} \times \mathbb{R}^n$. This has order 2, dominated by the second-order spatial Laplacian $\Delta$, with total symbol $i \tau + \lVert \xi \rVert^2$ in the dual variables $(\tau, \xi)$ and principal symbol $\lVert \xi \rVert^2$.
It satisfies the Leibniz condition for order 2; for instance, in one spatial dimension, $L(f u) = \partial_t (f u) - \partial_{xx} (f u) = (\partial_t f) u + f \partial_t u - [f_{xx} u + 2 f_x u_x + f u_{xx}]$ and $f\,L u = f (\partial_t u - u_{xx})$, so the difference is $(\partial_t f) u - f_{xx} u - 2 f_x u_x$, which rearranges to multiplication by $(\partial_t f - f_{xx})$ applied to $u$ plus multiplication by $-2 f_x$ applied to $u_x$, a first-order operator in $u$.[18]
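The commutator computation for the heat operator can be reproduced symbolically; a sympy sketch in one spatial dimension:

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')(t, x)
u = sp.Function('u')(t, x)

# Heat operator: L v = v_t - v_xx
L = lambda v: sp.diff(v, t) - sp.diff(v, x, 2)

remainder = sp.expand(L(f * u) - f * L(u))
expected = sp.expand((sp.diff(f, t) - sp.diff(f, x, 2)) * u
                     - 2 * sp.diff(f, x) * sp.diff(u, x))

# The remainder L(fu) - f L(u) is a first-order operator applied to u
assert sp.simplify(remainder - expected) == 0
```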

Notations

In the univariate case, the differential operator corresponding to the first derivative is commonly denoted by $D$, representing $\frac{d}{dx}$, with higher powers $D^k$ indicating the $k$-th derivative $\frac{d^k}{dx^k}$.[19] In the multivariate setting, partial derivatives with respect to variables $x_1, \dots, x_n$ are denoted by $\partial_i = \frac{\partial}{\partial x_i}$ for $i = 1, \dots, n$.[20] To compactly express higher-order partial derivatives, multi-index notation is standard: a multi-index $\alpha = (\alpha_1, \dots, \alpha_n)$ is an $n$-tuple of nonnegative integers, with $|\alpha| = \sum_{i=1}^n \alpha_i$ denoting its order, and $\partial^\alpha = \prod_{i=1}^n \partial_i^{\alpha_i}$ representing the corresponding mixed partial derivative.[20] This notation facilitates the summation over all partial derivatives of order at most $k$, as in $\sum_{|\alpha| \leq k} a_\alpha(x)\, \partial^\alpha f(x)$.[20] Polynomial differential operators are often symbolized as $P(x, D)$, where $D = (\partial_1, \dots, \partial_n)$ and $P$ is a polynomial in the variables $x$ and the formal symbols $D_j$, such as $P(x, D) = \sum_{|\alpha| \leq m} a_\alpha(x) D^\alpha$ with $D^\alpha = (-i)^{|\alpha|} \partial^\alpha$ in some conventions to align with Fourier analysis.[21] On a smooth manifold $M$, the space of differential operators of order at most $k$ acting on smooth sections of a vector bundle is denoted by $\mathrm{Diff}^k(M)$, forming a filtered algebra under composition.[22] Notational conventions vary between mathematics and physics: mathematicians typically use italicized or script $\mathcal{D}$ for formal differential operators and emphasize formal adjoints, while physicists often employ boldface or upright $\mathbf{D}$ and prioritize Hermitian adjoints in Hilbert space contexts.[23] For instance, the Laplacian operator may appear as $\Delta = \sum_i \partial_i^2$ in mathematical texts but as $-\hbar^2 \nabla^2$ in quantum mechanics, highlighting domain-specific adjustments.[19] Composition of differential operators can be left or right ordered, affecting the symbol in quantization schemes; in Weyl quantization, the symbol of the product $\mathrm{Op}(a)\,\mathrm{Op}(b)$ corresponds to a symmetric (Weyl) ordering where multiplication operators act midway between left and right derivatives, given by the oscillatory integral formula for the composed symbol.[23]

Fundamental Properties

General Properties

Differential operators are linear maps: for a differential operator $P$ of order $m$, functions $u, v$, and scalars $a, b$, $P(au + bv) = a P u + b P v$.[24] In appropriate function spaces, such as Sobolev spaces, these operators are continuous: a differential operator $P$ of order $m$ maps $H^{s+m}_{\mathrm{loc}}(\Omega)$ continuously to $H^s_{\mathrm{loc}}(\Omega)$ for all $s \in \mathbb{R}$.[24] This continuity holds in the topology induced by the Sobolev norms, ensuring well-defined behavior on spaces of functions with controlled derivatives.[25] The composition of two differential operators exhibits a Leibniz-type structure. If $P$ has order $k$ and $Q$ has order $m$, then $P \circ Q$ has order at most $k + m$, and the leading terms in the composition arise from the product of the leading coefficients via a generalized Leibniz rule.[24] Specifically, for operators $P = \sum_{|\alpha| \leq k} p_\alpha(x) \partial^\alpha$ and $Q = \sum_{|\beta| \leq m} q_\beta(x) \partial^\beta$, the highest-order part of $P \circ Q$ is simply the product of the principal parts.[26] Commutators with multiplication operators further illustrate this: for the first-order operator $D = \partial_j$ and a smooth function $f$, $[D, f] g = D(f g) - f D g = (\partial_j f)\, g$, which is a zero-order operator, and in general $[D, f]$ reduces the order by at least one.[24] Each differential operator $P$ of order $k$ has a principal symbol $\sigma_k(P)(x, \xi) = \sum_{|\alpha| = k} a_\alpha(x) (i \xi)^\alpha$, a homogeneous polynomial of degree $k$ in the cotangent variable $\xi$.[24] This symbol captures the highest-order behavior and is independent of lower-order terms.
An operator is elliptic if its principal symbol satisfies $|\sigma_k(P)(x, \xi)| \geq c |\xi|^k$ for some $c > 0$ and all $\xi \neq 0$, ensuring the operator is "invertible" in the high-frequency regime and leading to improved regularity properties for solutions.[24] For systems, ellipticity requires the symbol matrix to be invertible for $\xi \neq 0$.[25]
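As a small symbolic check, the principal symbol of the Laplacian can be computed by substituting $\partial_j \mapsto i\xi_j$ (here in two variables, as an illustration):

```python
import sympy as sp

xi1, xi2 = sp.symbols('xi1 xi2', real=True)

# Principal symbol of the 2-D Laplacian: sum over (i*xi_j)^2
symbol = (sp.I * xi1)**2 + (sp.I * xi2)**2

# sigma(xi) = -|xi|^2, so |sigma(xi)| = |xi|^2 >= |xi|^2 and the Laplacian is elliptic
assert sp.expand(symbol + xi1**2 + xi2**2) == 0
```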

Fourier Interpretation

The Fourier transform provides a powerful interpretation of differential operators by transforming them into multiplication operators in the frequency domain. For a function $u$ on $\mathbb{R}^n$, the Fourier transform $\hat{u}(\xi) = \int_{\mathbb{R}^n} u(x) e^{-i x \cdot \xi} \, dx$ (up to normalization constants) converts partial derivatives into multiplications: the operator $\partial_j$ acts as $\widehat{\partial_j u}(\xi) = i \xi_j \hat{u}(\xi)$. More generally, for a multi-index $\alpha$, $\partial^\alpha$ corresponds to $(i \xi)^\alpha$, so a linear differential operator $P = \sum_{|\alpha| \leq m} a_\alpha(x) \partial^\alpha$ with smooth coefficients $a_\alpha$ has symbol $\sigma_P(x, \xi) = \sum_{|\alpha| \leq m} a_\alpha(x) (i \xi)^\alpha$, a polynomial in $\xi$ of degree at most $m$. For constant coefficients, $\widehat{P u}(\xi) = \sigma_P(\xi) \hat{u}(\xi)$. For variable coefficients, $P$ is a pseudodifferential operator whose action involves an oscillatory integral with this symbol, and $\widehat{P u}(\xi)$ is no longer pointwise multiplication of $\hat{u}(\xi)$ by the symbol.[27][28][29] A concrete example illustrates this correspondence: consider the first-order operator $D = -i \frac{d}{dx}$ on $\mathbb{R}$, which is the momentum operator of quantum mechanics up to a factor of $\hbar$. Its Fourier transform yields $\widehat{D f}(\xi) = \xi \hat{f}(\xi)$, directly associating the operator with the frequency variable $\xi$ as a multiplier. This multiplier property extends to higher-order operators, such as the Laplacian $\Delta = \sum_{j=1}^n \partial_j^2$, where $\widehat{\Delta u}(\xi) = -|\xi|^2 \hat{u}(\xi)$, facilitating the solution of elliptic equations like $\Delta u = f$ via $\hat{u}(\xi) = -\hat{f}(\xi) / |\xi|^2$ for $\xi \neq 0$.
Such transformations underscore the role of differential operators as special cases of pseudodifferential operators, where the symbol is precisely a polynomial in $\xi$.[27] Beyond basic multiplication, the Fourier interpretation connects to the propagation of singularities in solutions to partial differential equations. Singularities in the wavefront set of a distribution propagate along bicharacteristic curves defined by the Hamilton flow of the principal symbol $\sigma_m(P)(x, \xi)$, the homogeneous leading term of degree $m$. This phenomenon is analyzed using Fourier integral operators, which generalize pseudodifferential operators to handle phase shifts and oscillatory integrals, ensuring that singularities neither appear nor disappear except along these flows for hyperbolic or properly supported operators. The propagation theorem, established through microlocal analysis, applies to solutions of $P u = f$, where the wavefront set of $u$ is contained in the union of the wavefront set of $f$ and the flow-out from the characteristic set.[30] Finally, the Fourier framework links differential operators to symbol quantization in the Weyl calculus, a symmetric quantization scheme in which the operator associated to a symbol $a(x, \xi)$ is $\mathrm{Op}_w(a) f(x) = (2\pi)^{-n} \iint e^{i (x-y) \cdot \xi}\, a\left(\frac{x+y}{2}, \xi\right) f(y) \, dy \, d\xi$, interpreted as an oscillatory integral. For polynomial symbols of differential operators, Weyl quantization agrees with the standard left and right quantizations up to lower-order terms, providing a bridge to semiclassical analysis and deformation quantization in phase space. This connection preserves the principal symbol and enables precise control over operator composition via the Moyal product.[31][29]

Adjoint Operators

Formal Adjoint in One Variable

In the context of linear differential operators acting on functions in one variable, the formal adjoint DD^* of an operator DD is defined such that for suitable test functions ff and gg with compact support, the integration by parts formula holds:
(Df)gdx=f(Dg)dx, \int_{-\infty}^{\infty} (D f) g \, dx = \int_{-\infty}^{\infty} f (D^* g) \, dx,
where boundary terms vanish due to the compact support assumption.[3] This definition ensures that the adjoint captures the duality between the operator and its action under the L2L^2 inner product, up to boundary contributions that are controlled in appropriate function spaces.[3] For a polynomial differential operator D=k=0mak(x)dkdxkD = \sum_{k=0}^m a_k(x) \frac{d^k}{dx^k} with smooth coefficients ak(x)a_k(x), the explicit form of the formal adjoint is
$$D^* g = \sum_{k=0}^m (-1)^k \frac{d^k}{dx^k} \left( a_k(x)\, g \right).$$
This formula arises from repeatedly applying the product (Leibniz) rule during integration by parts to transfer all derivatives from $f$ to $g$.[3] The derivation begins with the first-order case and proceeds inductively: for the differentiation operator $\frac{d}{dx}$, integration by parts gives $\int f' g \, dx = -\int f g' \, dx$, so $\left( \frac{d}{dx} \right)^* = -\frac{d}{dx}$.[3] For higher orders, the adjoint of a single term follows: if $D = P \frac{d^k}{dx^k}$, where $P$ denotes multiplication by $P(x)$, then $D^* = (-1)^k \frac{d^k}{dx^k} (P \,\cdot\,)$, and the adjoint of the full operator is the sum of these terms.[3] For a second-order operator $D = \frac{d^2}{dx^2} + b(x) \frac{d}{dx} + c(x)$, repeated integration by parts produces
$$D^* = \frac{d^2}{dx^2} - b \frac{d}{dx} + (c - b'),$$
where $b'$ denotes the derivative of $b$ with respect to $x$.[32] This reflects the sign flip for odd-order terms and the adjustment for variable coefficients via the product rule. An operator $D$ is formally self-adjoint if $D = D^*$, a condition that simplifies the analysis of symmetric problems in partial differential equations and ensures real eigenvalues under suitable boundary conditions.[3] For instance, the pure second derivative $\frac{d^2}{dx^2}$ satisfies this property, as its adjoint is itself.[3]
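The second-order formula above can be checked symbolically. A minimal SymPy sketch, expanding $D^* g = \sum_k (-1)^k \frac{d^k}{dx^k}(a_k g)$ for $a_2 = 1$, $a_1 = b$, $a_0 = c$:

```python
import sympy as sp

x = sp.symbols('x')
b = sp.Function('b')(x)
c = sp.Function('c')(x)
g = sp.Function('g')(x)

# D = d^2/dx^2 + b(x) d/dx + c(x); formal adjoint term by term:
# D* g = (g)'' - (b g)' + c g.
adjoint = sp.diff(g, x, 2) - sp.diff(b * g, x) + c * g

# Expected closed form: g'' - b g' + (c - b') g.
expected = sp.diff(g, x, 2) - b * sp.diff(g, x) + (c - sp.diff(b, x)) * g
assert sp.simplify(sp.expand(adjoint - expected)) == 0
```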

Formal Adjoint in Several Variables

In several variables, the formal adjoint of a linear partial differential operator extends the one-dimensional construction by means of the multivariable integration by parts formula, usually derived via the divergence theorem. For a domain $\Omega \subset \mathbb{R}^n$ with smooth boundary, consider smooth functions $f, g$ with support or boundary conditions chosen so that the boundary integrals vanish. The formal adjoint $D^*$ of a linear partial differential operator $D$ is defined by the relation
$$\int_\Omega (D f)\, g \, dx = \int_\Omega f\, (D^* g) \, dx,$$
where the integrals are taken with respect to Lebesgue measure on $\Omega$. This definition determines $D^*$ as the unique differential operator satisfying the bilinear pairing identity for test functions, with boundary terms ignored in the formal sense.[3] For a general linear partial differential operator of order at most $m$,
$$D f = \sum_{|\alpha| \leq m} a_\alpha(x)\, \partial^\alpha f,$$
where $\alpha = (\alpha_1, \dots, \alpha_n)$ is a multi-index with $|\alpha| = \sum_{i=1}^n \alpha_i$, $\partial^\alpha = \prod_{i=1}^n \frac{\partial^{\alpha_i}}{\partial x_i^{\alpha_i}}$, and the coefficients $a_\alpha(x)$ are smooth functions on $\Omega$, the formal adjoint is given by
$$D^* g = \sum_{|\alpha| \leq m} (-1)^{|\alpha|}\, \partial^\alpha \left( a_\alpha(x)\, g \right).$$
This expression preserves the order $m$ of the operator and transforms the leading terms accordingly.[3] The formula for $D^*$ arises from repeated applications of the multivariable integration by parts rule, which rests on the product rule for derivatives and the divergence theorem. For a single first-order partial derivative, integration by parts yields
$$\int_\Omega (\partial_i f)\, g \, dx = -\int_\Omega f\, (\partial_i g) \, dx + \int_{\partial \Omega} f g\, n_i \, dS,$$
where $n_i$ is the $i$-th component of the outward unit normal, showing that the formal adjoint of $\partial_i$ is $-\partial_i$ when boundary terms are neglected. For a higher-order term $\partial^\alpha f$, integration by parts is applied successively to each factor $\partial_i^{\alpha_i}$, introducing a factor of $(-1)^{\alpha_i}$ per variable and pulling the coefficient $a_\alpha$ inside the derivatives via the Leibniz rule: $\partial^\alpha (a_\alpha g) = \sum_{\beta \leq \alpha} \binom{\alpha}{\beta} (\partial^\beta a_\alpha)\, (\partial^{\alpha - \beta} g)$. The overall sign $(-1)^{|\alpha|}$ accounts for the total number of integrations by parts across all variables.[33] A fundamental illustration is the divergence operator $\operatorname{div}: C^\infty(\Omega; \mathbb{R}^n) \to C^\infty(\Omega)$, defined by $\operatorname{div} V = \sum_{i=1}^n \partial_i V_i$ for a vector field $V = (V_1, \dots, V_n)$. Its formal adjoint is the negative gradient, $(\operatorname{div})^* \phi = -\nabla \phi = (-\partial_1 \phi, \dots, -\partial_n \phi)$, satisfying
$$\int_\Omega (\operatorname{div} V)\, \phi \, dx = -\sum_{i=1}^n \int_\Omega V_i\, (\partial_i \phi) \, dx + \int_{\partial \Omega} \phi\, (V \cdot n) \, dS.$$
This relation follows from applying integration by parts to each component and confirms the structure for vector-valued operators.[34] The distinction between the formal adjoint and the $L^2$ adjoint lies in their settings: the formal adjoint $D^*$ is a purely differential expression without a specified domain, applicable to smooth functions, whereas the $L^2$ adjoint is the densely defined unbounded operator on the Hilbert space $L^2(\Omega)$ whose graph ensures the pairing holds for functions in its maximal domain, typically requiring $D^* g \in L^2(\Omega)$. When smooth compactly supported functions are dense in $L^2(\Omega)$ and the coefficients $a_\alpha$ are sufficiently regular (e.g., bounded and continuous), the $L^2$ adjoint coincides with the formal adjoint on this dense subspace, enabling extension by continuity.[35]
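The defining identity can be verified numerically for compactly supported functions, where boundary terms vanish exactly. A minimal NumPy sketch for the first-order operator $D = a(x)\,\partial_x$ with formal adjoint $D^* g = -\partial_x(a g)$ (the bump function and coefficient $\sin x$ are arbitrary illustrative choices):

```python
import numpy as np

x = np.linspace(-2, 2, 40001)

# Smooth bump supported in |x| < 1, so all boundary terms vanish.
inside = np.abs(x) < 1
denom = np.where(inside, 1 - x**2, 1.0)
f = np.where(inside, np.exp(-1 / denom), 0.0)
g = x * f                        # another compactly supported function
a = np.sin(x)                    # smooth variable coefficient

d = lambda u: np.gradient(u, x)  # second-order finite differences
Df = a * d(f)                    # D f  = a f'
Dstar_g = -d(a * g)              # D* g = -(a g)'

# np.trapz was renamed np.trapezoid in NumPy 2.
trap = np.trapezoid if hasattr(np, "trapezoid") else np.trapz
lhs = trap(Df * g, x)
rhs = trap(f * Dstar_g, x)
assert abs(lhs - rhs) < 1e-6     # equal up to discretization error
```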

Examples of Adjoints

A classic example in one variable is the Euler operator $E = x \frac{d}{dx}$, acting on smooth functions on $(0, \infty)$. Its formal adjoint with respect to the $L^2$ inner product $\langle f, g \rangle = \int_0^\infty f(x) g(x) \, dx$ is $E^* = -x \frac{d}{dx} - 1$. To verify this, consider $\langle E f, g \rangle = \int_0^\infty g(x)\, x f'(x) \, dx$. Integration by parts yields $[g(x)\, x f(x)]_0^\infty - \int_0^\infty f(x) \frac{d}{dx}\left( x g(x) \right) dx = B - \int_0^\infty f(x) \left( x g'(x) + g(x) \right) dx$, where $B$ denotes boundary terms that vanish for suitable test functions with compact support. Thus $\langle E f, g \rangle = B - \langle f, x g' + g \rangle$, so $E^* g = -x g' - g = -x \frac{d}{dx} g - g$.[32] In several variables, the gradient operator $\nabla: C_c^\infty(\mathbb{R}^n) \to C_c^\infty(\mathbb{R}^n, \mathbb{R}^n)$ has formal adjoint $\nabla^* = -\operatorname{div}$, where $\operatorname{div} \mathbf{v} = \sum_{i=1}^n \frac{\partial v_i}{\partial x_i}$. This follows from integration by parts: $\langle \nabla u, \mathbf{v} \rangle = \int_{\mathbb{R}^n} \nabla u \cdot \mathbf{v} \, dx = -\int_{\mathbb{R}^n} u \operatorname{div} \mathbf{v} \, dx + B$, with boundary terms $B = 0$ for compactly supported functions. Consequently, the Laplacian $\Delta = \operatorname{div} \circ \nabla$ is formally self-adjoint, $\Delta^* = \Delta$, since $\langle \Delta u, v \rangle = -\langle \nabla u, \nabla v \rangle = \langle u, \Delta v \rangle$ by applying the adjoint twice.[34] In physics, the momentum operator $p = -i \frac{d}{dx}$ on $L^2(\mathbb{R})$, restricted to the dense domain $C_c^\infty(\mathbb{R})$ of smooth compactly supported functions, is essentially self-adjoint.
This means its closure is self-adjoint, ensuring a unique self-adjoint extension used in quantum mechanics for the free particle Hamiltonian. Essential self-adjointness follows from the fact that the deficiency indices are zero, verified via the solutions of $p^* \psi = \pm i \psi$, which lie outside $L^2(\mathbb{R})$.[36] For a variable coefficient operator in $\mathbb{R}^2$, consider $P = \frac{\partial}{\partial x} + x \frac{\partial}{\partial y}$. The formal adjoint is $P^* = -\frac{\partial}{\partial x} - x \frac{\partial}{\partial y}$. Verification proceeds by integration by parts in the inner product: $\langle P u, v \rangle = \int_{\mathbb{R}^2} v \left( u_x + x u_y \right) dx\, dy = B - \int_{\mathbb{R}^2} u \left( v_x + \partial_y(x v) \right) dx\, dy = B - \int_{\mathbb{R}^2} u \left( v_x + x v_y \right) dx\, dy$; no zeroth-order term appears, because the coefficient $x$ does not depend on $y$ and hence $\partial_y(x v) = x v_y$. Thus $P^* v = -v_x - x v_y$.[32] The formal adjoint ignores boundary terms and is defined locally on smooth functions, but the actual adjoint in a Hilbert space like $L^2(\Omega)$ depends on the domain, incorporating boundary conditions to ensure $\langle L u, v \rangle = \langle u, L^* v \rangle$ without boundary contributions. For instance, on a bounded interval $[a,b]$, the operator $i \frac{d}{dx}$ becomes self-adjoint only with suitable boundary conditions (e.g., periodic conditions $u(a) = u(b)$), which alter the domain of the adjoint relative to the formal version.[32]
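The integrand identity behind the Euler-operator computation, namely $(Ef)\,g - f\,(E^* g) = \frac{d}{dx}(x f g)$, which shows that the two inner products differ only by a boundary term, can be checked symbolically. A minimal SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

Ef = x * sp.diff(f, x)                 # E f  = x f'
Estar_g = -x * sp.diff(g, x) - g       # E* g = -x g' - 1*g

# (E f) g - f (E* g) should be an exact derivative, so the two
# inner products agree up to a boundary term.
difference = Ef * g - f * Estar_g
assert sp.simplify(difference - sp.diff(x * f * g, x)) == 0
```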

Algebraic Structures

Ring of Univariate Polynomial Differential Operators

The ring of univariate polynomial differential operators, often denoted $\operatorname{Diff}(R)$ for a commutative ring $R$, is the associative algebra generated by the polynomial ring $R[x]$ and the differentiation operator $\frac{d}{dx}$; its elements are finite sums $\sum_{i=0}^n f_i \left(\frac{d}{dx}\right)^i$ with $f_i \in R[x]$.[37] Multiplication in $\operatorname{Diff}(R)$ is operator composition, governed by the Leibniz rule: for $f, g \in R[x]$, composing differentiation with multiplication by $f$ gives $\left(\frac{d}{dx} \circ f\right)(g) = f \frac{dg}{dx} + f' g$, where $f'$ denotes the formal derivative of $f$ with respect to $x$; equivalently, $\frac{d}{dx} \cdot f = f \cdot \frac{d}{dx} + f'$ as operators.[37] This structure ensures that $\operatorname{Diff}(R)$ acts naturally on $R[x]$ as a ring of endomorphisms. $\operatorname{Diff}(R)$ admits an Ore extension presentation: $\operatorname{Diff}(R) \cong R[x][\partial; \delta]$, where $\delta$ is the standard derivation on $R[x]$ given by $\delta(x) = 1$ and extended as an $R$-linear derivation, with the multiplication rule $\partial \cdot a = \sigma(a)\, \partial + \delta(a)$ for $a \in R[x]$ and $\sigma = \mathrm{id}$.[38] This construction highlights the non-commutative nature of the ring, arising from the commutation relation $[\frac{d}{dx}, x] = \frac{d}{dx} \cdot x - x \cdot \frac{d}{dx} = 1$.[37] When $R$ is a field of characteristic zero, $\operatorname{Diff}(R)$ is the first Weyl algebra, generated by $x$ and $\partial$ (with $\partial$ corresponding to $\frac{d}{dx}$) subject to the key relation $\partial x - x \partial = 1$.[37] This relation generates the entire non-commutative structure, distinguishing the Weyl algebra from commutative polynomial rings. For $R = \mathbb{C}$, the Weyl algebra is simple, possessing no nontrivial two-sided ideals.[39]
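The defining relation $[\frac{d}{dx}, x] = 1$ can be seen by applying the commutator to an arbitrary test function; a minimal SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# [d/dx, x] f = d/dx (x f) - x (d/dx f) = f + x f' - x f' = f,
# so the commutator acts as the identity operator, i.e. equals 1.
commutator = sp.diff(x * f, x) - x * sp.diff(f, x)
assert sp.simplify(commutator - f) == 0
```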

Ring of Multivariate Polynomial Differential Operators

The ring of multivariate polynomial differential operators, often denoted $\operatorname{Diff}(\mathbb{R}^n)$ or $D(\mathbb{R}^n)$, is the noncommutative associative algebra over $\mathbb{R}$ (or $\mathbb{C}$) generated by the coordinate multiplication operators $x_1, \dots, x_n$ and the partial differentiation operators $\partial_1 = \frac{\partial}{\partial x_1}, \dots, \partial_n = \frac{\partial}{\partial x_n}$, subject to the commutation relations $[\partial_i, x_j] = \delta_{ij}$, $[\partial_i, \partial_j] = 0$, and $[x_i, x_j] = 0$ for all $i, j = 1, \dots, n$, where $\delta_{ij}$ is the Kronecker delta.[40] These relations ensure that the algebra faithfully represents the action of linear partial differential operators with polynomial coefficients on the space of smooth functions on $\mathbb{R}^n$.[40] This algebra is known as the $n$-th Weyl algebra, denoted $A_n$, and can be constructed formally as the quotient
$$A_n = \mathbb{C}\langle x_1, \dots, x_n, \partial_1, \dots, \partial_n \rangle / I,$$
where $I$ is the two-sided ideal generated by the specified commutators.[40] The multivariate structure generalizes the univariate case, introducing additional generators and relations that capture interactions across multiple variables.[40] The Weyl algebra $A_n$ carries a natural filtration by operator order: the $j$-th filtered component $F^j A_n$ consists of elements of total degree at most $j$ in the $\partial_i$ (with the $x_i$ having order $0$).[40] The associated graded ring $\operatorname{gr} A_n = \bigoplus_j F^j A_n / F^{j-1} A_n$ is isomorphic to the commutative polynomial ring in $2n$ variables, reflecting the commutative approximation of the noncommutative structure.[40] In the global setting, $A_n$ acts on the entire space $\mathbb{R}^n$ or $\mathbb{C}^n$, whereas local versions arise as stalks of the sheaf of differential operators on algebraic varieties, such as the sheaf $\mathcal{D}_X$ over a smooth variety $X$.[40] Over an algebraically closed field of characteristic zero, such as $\mathbb{C}$, the Weyl algebra $A_n(\mathbb{C})$ is simple, possessing no nontrivial two-sided ideals.[40] A key application arises in quantum mechanics, where the commutation relations $[\partial_i, x_j] = \delta_{ij}$ (rescaled by $i\hbar$) encode the canonical commutation relations of the Heisenberg algebra, governing the position and momentum operators in $n$-dimensional phase space.[41]
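The relations $[\partial_i, x_j] = \delta_{ij}$ can likewise be checked on a test function in two variables; a minimal SymPy sketch:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Function('f')(x1, x2)
coords = [x1, x2]

def commutator(i, j):
    """[d_i, x_j] applied to the test function f."""
    d_i = lambda u: sp.diff(u, coords[i])
    return sp.simplify(d_i(coords[j] * f) - coords[j] * d_i(f))

# [d_i, x_j] = delta_ij: the identity when i == j, zero otherwise.
assert commutator(0, 0) == f and commutator(1, 1) == f
assert commutator(0, 1) == 0 and commutator(1, 0) == 0
```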

Coordinate-Independent Description

In the coordinate-independent setting, differential operators on a smooth manifold $M$ act between smooth sections of vector bundles $E \to M$ and $F \to M$. The space $\mathrm{Diff}^k(E, F)$ consists of all linear maps $P: C^\infty(M, E) \to C^\infty(M, F)$ of order at most $k$, characterized by the condition that the iterated commutator $[\cdots[[P, m_{f_1}], m_{f_2}], \dots, m_{f_{k+1}}]$ vanishes for all smooth functions $f_1, \dots, f_{k+1} \in C^\infty(M)$, where $m_f$ denotes pointwise multiplication by $f$.[42] This characterization ensures the operators satisfy a generalized Leibniz rule extending the product rule to higher orders: schematically, for an order-$k$ operator, $P(f \cdot s) = \sum_{j=0}^k \binom{k}{j} (P_j f) \cdot \nabla^j s$, where $\nabla$ is a connection and the $P_j$ are lower-order terms, though the precise form depends on the bundle structure.[42] Locally, in a trivialization of $E$ and $F$ over a chart $(U, \phi)$ on $M$, such operators reduce to the classical coordinate form $\sum_{|\alpha| \leq k} a_\alpha(x)\, \partial^\alpha$, where the $a_\alpha$ are smooth matrix-valued coefficients and the $\partial^\alpha$ are partial derivatives.[35] Globally, however, the coordinate-free description employs jet bundles $J^k(E)$, which parametrize $k$-th order jets of sections of $E$ (equivalence classes of sections agreeing up to $k$-th order derivatives at a point), allowing differential operators to be viewed as bundle morphisms from $J^k(E)$ to $F$.[43] Covariant derivatives induced by a connection on $E$ provide prototypical order-one differential operators in $\mathrm{Diff}^1(E, E \otimes T^*M)$, mapping sections to covector-valued sections while satisfying the Leibniz rule $\nabla_X (f s) = (X f)\, s + f\, \nabla_X s$ for vector fields $X$.[44] Higher-order operators arise naturally from compositions of
such covariant derivatives, generating the full space $\mathrm{Diff}^*(E, F)$ in a manner independent of local coordinates.[43] The collection of all differential operators $\bigcup_k \mathrm{Diff}^k(E, F)$ forms a filtered ring under composition, with the filtration $\mathrm{Diff}^k(E, F) \subseteq \mathrm{Diff}^{k+1}(E, F)$ respected in the sense that the product of order-$k$ and order-$l$ operators has order at most $k + l$.[42] The associated graded ring is isomorphic to the ring of symbols via the principal symbol map $\sigma_k: \mathrm{Diff}^k(E, F)/\mathrm{Diff}^{k-1}(E, F) \to \Gamma(S^k(T^*M) \otimes \mathrm{Hom}(E, F))$, where $S^k(T^*M)$ denotes the $k$-th symmetric power of the cotangent bundle, making the symbol sequence exact and facilitating algebraic analysis.[45] A canonical example is the de Rham complex on $M$, the chain complex of differential forms $\Omega^\bullet(M)$ in which the exterior derivative $d: \Omega^p(M) \to \Omega^{p+1}(M)$ is a first-order differential operator satisfying $d^2 = 0$ and the graded Leibniz rule $d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^{\deg \alpha}\, \alpha \wedge d\beta$.[35]
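On functions, the identity $d^2 = 0$ reduces in coordinates to the symmetry of second derivatives. A minimal SymPy sketch for a 0-form on $\mathbb{R}^2$:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

# df = f_x dx + f_y dy; the dx ^ dy component of d(df) is
# d_x(f_y) - d_y(f_x), which vanishes by symmetry of mixed partials.
component = sp.diff(sp.diff(f, y), x) - sp.diff(sp.diff(f, x), y)
assert sp.simplify(component) == 0
```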

Advanced Variants

Differential Operators of Infinite Order

Differential operators of infinite order generalize finite-order differential operators by allowing formal power series in the differentiation operator, typically expressed in exponential form to ensure convergence on suitable function spaces. Specifically, such an operator $D$ acts on functions $f$ by
$$Df(x) = \sum_{k=0}^\infty \frac{a_k}{k!}\, \partial_x^k f(x),$$
where the coefficient sequence $\{a_k\}$ has a positive radius of convergence, so that the series converges for analytic functions or in appropriate topologies. This form arises naturally from the exponential generating function $\phi(z) = \sum_{k=0}^\infty a_k \frac{z^k}{k!}$, which is entire, ensuring that the operator $\phi(\partial_x)$ acts continuously on spaces beyond smooth functions.[46][47] The operator satisfies a Leibniz rule inherited from that of the derivative:
$$D(fg)(x) = \sum_{k=0}^\infty \frac{a_k}{k!} \sum_{j=0}^k \binom{k}{j}\, \partial_x^j f(x)\, \partial_x^{k-j} g(x).$$
Prominent examples include the translation operator $e^{h \partial_x}$, which shifts functions via $e^{h \partial_x} f(x) = f(x + h)$ for $h \in \mathbb{R}$, and the heat semigroup operator $e^{t \Delta}$ for $t > 0$, where $\Delta$ is the Laplacian, generating solutions to the heat equation $\partial_t u = \Delta u$. These operators extend the finite-order case, where the series truncates, and are well defined on entire functions of exponential type, preserving properties such as reality of zeros under iteration.[46][48] As for topology, infinite-order differential operators are continuous when acting on Gevrey classes $\mathcal{G}^s$, spaces of functions whose derivatives satisfy $|\partial^k f(x)| \leq C^{k+1} (k!)^s$ for some $C > 0$ and $s \geq 1$, with analytic functions corresponding to $s = 1$. This continuity holds for symbols in Gevrey classes, ensuring boundedness in norms adapted to the growth of derivatives. The symbol of such an operator, obtained via the Fourier transform, is an entire function of the frequency variable $\xi$, extending the symbol concept from finite-order operators to $\phi(i\xi)$, where $\phi$ is of exponential type.[49][50][48] Applications of infinite-order differential operators appear prominently in solving partial differential equations (PDEs) through formal power series methods, where they facilitate the construction of fundamental solutions or semigroups for evolution equations. For instance, operators like $e^{t \Delta}$ directly yield the Green's function for the heat equation, while more general operators $\phi(\Delta_{\theta,\omega})$ solve Cauchy problems for second-order PDEs with variable coefficients, converging in spaces of entire functions to provide asymptotic behavior or zero distributions of solutions.[46][48]
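The action of the translation operator $e^{h \partial_x} f(x) = f(x+h)$ can be observed by truncating the exponential series; a minimal SymPy sketch for the analytic function $f = e^x$ (the truncation length and evaluation point are arbitrary choices):

```python
import math
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)
h = sp.Rational(1, 2)

# Truncate e^{h d/dx} f = sum_k h^k/k! f^{(k)} after 20 terms.
series = sum(h**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(20))

# The operator translates: the value at x = 0.3 approximates f(0.3 + 0.5).
approx = float(series.subs(x, sp.Rational(3, 10)))
assert abs(approx - math.exp(0.8)) < 1e-12
```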

Bidifferential Operators

A bidifferential operator is a bilinear map $B: C^\infty(M) \times C^\infty(M) \to C^\infty(M)$ that acts as a differential operator in each argument separately, generalizing bilinear forms to incorporate differentiation. Locally, on an open set in $\mathbb{R}^n$, a bidifferential operator of total order at most $k$ takes the form
$$B(f,g) = \sum_{|\alpha| + |\beta| \leq k} a_{\alpha\beta}(x) \, \partial^\alpha f(x) \, \partial^\beta g(x),$$
where the coefficients $a_{\alpha\beta}$ are smooth functions and $\alpha, \beta$ are multi-indices.[51] This expression ensures bilinearity and the property that, for fixed $g$, $B(\cdot, g)$ is a differential operator of order at most $k$ in the first variable, and analogously for fixed $f$.[52] The total order of $B$ is the maximum of $|\alpha| + |\beta|$ over all nonzero terms, measuring the highest combined degree of differentiation. Composition with differential operators preserves this structure: if $B$ has order $k$ and a differential operator $C$ has order $l$, then $C \circ B$ has order at most $k + l$, as derivatives compose additively.[53] The operators satisfy the standard Leibniz rule in each argument separately, arising from the product rule for derivatives.[52] Representative examples include the zeroth-order product operator $B(f,g) = f g$, which is simply multiplication, and the first-order operator $B(f,g) = f\, \partial_x g$ in one dimension, combining multiplication in the first argument with differentiation in the second. In number theory, Rankin–Cohen brackets provide higher-order examples, defined for modular forms $f_1, f_2$ of weights $\lambda', \lambda''$ and order $\ell$ by
$$[f_1, f_2]_\ell(z) = \sum_{j=0}^\ell (-1)^j \binom{\lambda' + \ell - 1}{\ell - j} \binom{\lambda'' + \ell - 1}{j}\, f_1^{(j)}(z)\, f_2^{(\ell - j)}(z),$$
a bidifferential operator of order $\ell$ that sends a pair of modular forms of weights $\lambda', \lambda''$ to a modular form of weight $\lambda' + \lambda'' + 2\ell$ (up to a conventional normalization by powers of $2\pi i$).[54] In quantum field theory, Wick products, such as the normal-ordered bilinear form $\partial \phi \cdot \partial \psi$ on fields, function as bidifferential operators by subtracting vacuum expectations to ensure proper renormalization. The formal adjoint $B^*$ of a bidifferential operator $B$ in its first argument is defined so that integration by parts yields $\int B(f,g) \, h = \int f \, B^*(g,h)$ (up to boundary terms), giving $B^*(g,h) = \sum_{\alpha,\beta} (-1)^{|\alpha|}\, \partial^\alpha\!\left( a_{\alpha\beta}\, \partial^\beta g \cdot h \right)$, with the signs coming from the adjoint of each derivative ($\partial^* = -\partial$).[55] Bidifferential operators are closely related to tensor products of differential operators: the space of such operators of order at most $k$ corresponds to elements of the tensor product $\mathrm{Diff}^k(M) \otimes \mathrm{Diff}^k(M)$, where $\mathrm{Diff}^k(M)$ is the module of differential operators of order $\leq k$ on $M$, acting on pairs of functions via $(P \otimes Q)(f, g) = P(f)\, Q(g)$. This structure underlies their role in deformation quantization and representation theory.[55]
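For the simplest nontrivial case $B(f,g) = (\partial_x f)\, g$, the adjoint formula gives $B^*(g,h) = -\partial_x(g h)$, and the defining identity holds because the difference of the two integrands is an exact derivative. A minimal SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f, g, h = (sp.Function(name)(x) for name in 'fgh')

B = lambda u, v: sp.diff(u, x) * v        # B(f, g) = f' g, order 1 in f
Bstar = lambda v, w: -sp.diff(v * w, x)   # B*(g, h) = -(g h)'

# B(f,g) h - f B*(g,h) is an exact derivative, so the integrals
# agree up to boundary terms.
difference = B(f, g) * h - f * Bstar(g, h)
assert sp.simplify(difference - sp.diff(f * g * h, x)) == 0
```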

Microdifferential Operators

Microdifferential operators arise in microlocal analysis as a refinement of differential operators, enabling precise localization of their action on the cotangent bundle $T^*M$ of a smooth manifold $M$. These operators act on Lagrangian distributions, which are distributions associated to Lagrangian submanifolds of $T^*(M \times M)$ and generalize smooth functions and their singularities in phase space. Formally, such an operator $P$ of order $m$ is given by an oscillatory integral representation:
$$(Pu)(x) = \int e^{i\phi(x,y,\theta)}\, a(x,y,\theta)\, u(y) \, dy \, d\theta,$$
where $\phi$ is a non-degenerate phase function whose associated graph is a Lagrangian submanifold $\Lambda \subset T^*(M \times M)$, and the amplitude $a$ belongs to the symbol class $S^m$, consisting of smooth functions satisfying the estimates $|\partial^\alpha_x \partial^\beta_y \partial^\gamma_\theta a| \leq C_{\alpha\beta\gamma} (1+|\theta|)^{m - |\gamma|}$ for all multi-indices $\alpha, \beta, \gamma$. This structure allows microdifferential operators to capture propagation phenomena in phase space, extending the classical Leibniz rule to infinite-order formal series while preserving algebraic properties such as composition.[56] A central concept in the theory is microlocal ellipticity, which holds where the principal symbol $p_m(x,y,\theta)$ is invertible on $\Lambda$ away from the characteristic variety $\mathrm{Char}(P) = \{(x,y,\theta) \in \Lambda \mid p_m(x,y,\theta) = 0\}$. Elliptic microdifferential operators propagate singularities along bicharacteristic strips in the cotangent bundle, ensuring that singularities of solutions to equations $Pu = f$ follow the Hamiltonian flow of the principal symbol. This propagation of singularities is crucial for analyzing hyperbolic and elliptic partial differential equations, where the operator dictates how wavefronts evolve microlocally. For instance, for wave equations, microlocal analysis reflects finite propagation speed, with singularities confined to conical neighborhoods of the light cone in phase space. Pseudodifferential operators provide the key example, corresponding to the standard phase function $\phi(x,y,\theta) = (x-y) \cdot \theta$, for which the Lagrangian $\Lambda$ is the conormal bundle of the diagonal in $M \times M$.
In this case, the full symbol admits an asymptotic expansion $a(x,\theta) \sim \sum_{j=0}^\infty a_{m-j}(x,\theta)$ with $a_{m-j}$ homogeneous of degree $m-j$ in $\theta$, through which the operator acts in local coordinates modulo smoothing terms. This representation highlights their role in smoothing or amplifying singularities based on the symbol's behavior.[57] The relation to wavefront sets underscores the microlocal nature of these operators: the wavefront set $\mathrm{WF}(u) \subset T^*M \setminus 0$ measures the singular directions of a distribution $u$, and such an operator $P$ is microlocally invertible outside its characteristic set, so that if $(x,\xi) \notin \mathrm{Char}(P)$ and $u$ is microlocally smooth near $(x,\xi)$, then $Pu$ is microlocally smooth there as well. More precisely, $\mathrm{WF}(Pu) \cap (T^*M \setminus \mathrm{Char}(P)) \subset \mathrm{WF}(u) \cap (T^*M \setminus \mathrm{Char}(P))$, ensuring that $P$ introduces no new singularities away from the characteristic variety. This property is foundational for proving local solvability and hypoellipticity in PDE theory. As an advanced generalization, Fourier integral operators extend this class by allowing arbitrary clean canonical relations, that is, Lagrangian immersions $\Lambda \subset (T^*M \setminus 0) \times (T^*N \setminus 0)$, rather than only graph-like structures over the diagonal. These operators, likewise realized via oscillatory integrals with symbols in appropriate classes, model changes of variables and scattering processes, preserving microlocal ellipticity and wavefront set propagation in broader geometric settings.[58]
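The smoothing effect of an operator whose symbol decays rapidly in the frequency variable can be illustrated numerically: applying the heat semigroup $e^{t\Delta}$, a Fourier multiplier with symbol $e^{-t\xi^2}$, to a discontinuous function suppresses its high-frequency (singular) content. A minimal NumPy sketch (the square wave and time parameter are arbitrary illustrative choices):

```python
import numpy as np

n = 512
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sign(np.sin(x))             # jump discontinuities: singular support

xi = np.fft.fftfreq(n, d=1.0 / n)  # integer frequencies for period 2*pi
u_hat = np.fft.fft(u)

# Apply e^{t*Laplacian}, a multiplier with rapidly decaying symbol.
t = 0.05
smooth_hat = np.exp(-t * xi**2) * u_hat
smooth = np.fft.ifft(smooth_hat).real

# High-frequency content of the smoothed function is strongly damped.
high = np.abs(xi) > 50
assert np.max(np.abs(smooth_hat[high])) < 1e-3 * np.max(np.abs(u_hat))
```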

References
