Differential operator
from Wikipedia

A harmonic function defined on an annulus. Harmonic functions are exactly those functions which lie in the kernel of the Laplace operator, an important differential operator.

In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).

This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.

Definition


Given a nonnegative integer m, an order-$m$ linear differential operator is a map $P$ from a function space $\mathcal{F}_1$ on $\mathbb{R}^n$ to another function space $\mathcal{F}_2$ that can be written as

$$P = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha,$$

where $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n)$ is a multi-index of non-negative integers, $|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$, and for each $\alpha$, $a_\alpha(x)$ is a function on some open domain in n-dimensional space. The operator $D^\alpha$ is interpreted as

$$D^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$$

Thus for a function $f \in \mathcal{F}_1$:

$$Pf = \sum_{|\alpha| \le m} a_\alpha(x) \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$$

The notation $D^\alpha$ is justified (i.e., independent of the order of differentiation) because of the symmetry of second derivatives.

The polynomial p obtained by replacing the partial derivatives in P by variables $\xi$ is called the total symbol of P; i.e., the total symbol of P above is

$$p(x, \xi) = \sum_{|\alpha| \le m} a_\alpha(x) \xi^\alpha,$$

where $\xi^\alpha = \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}$. The highest homogeneous component of the symbol, namely

$$\sigma(x, \xi) = \sum_{|\alpha| = m} a_\alpha(x) \xi^\alpha,$$

is called the principal symbol of P.[1] While the total symbol is not intrinsically defined, the principal symbol is intrinsically defined (i.e., it is a function on the cotangent bundle).[2]
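
As a worked illustration (the operator here is chosen for concreteness and is not taken from the text above), consider on $\mathbb{R}^2$ the order-2 operator

$$P = \frac{\partial^2}{\partial x_1^2} + x_1 \frac{\partial}{\partial x_2} + 1 .$$

Replacing $\partial/\partial x_1 \mapsto \xi_1$ and $\partial/\partial x_2 \mapsto \xi_2$ gives the total symbol $p(x, \xi) = \xi_1^2 + x_1 \xi_2 + 1$, whose highest homogeneous component, the principal symbol, is $\sigma(x, \xi) = \xi_1^2$.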

More generally, let E and F be vector bundles over a manifold X. Then the linear operator

$$P : C^\infty(E) \to C^\infty(F)$$

is a differential operator of order $k$ if, in local coordinates on X, we have

$$Pu = \sum_{|\alpha| = k} P^\alpha(x) \frac{\partial^{|\alpha|} u}{\partial x^\alpha} + \text{lower-order terms},$$

where, for each multi-index α, $P^\alpha(x) : E \to F$ is a bundle map, symmetric on the indices α.

The kth order coefficients of P transform as a symmetric tensor

$$\sigma_P : S^k(T^*X) \otimes E \to F$$

whose domain is the tensor product of the kth symmetric power of the cotangent bundle of X with E, and whose codomain is F. This symmetric tensor is known as the principal symbol (or just the symbol) of P.

The coordinate system $x^i$ permits a local trivialization of the cotangent bundle by the coordinate differentials $dx^i$, which determine fiber coordinates $\xi_i$. In terms of a basis of frames $e_\mu$, $f_\nu$ of E and F, respectively, the differential operator P decomposes into components

$$(Pu)_\nu = \sum_\mu P_{\nu\mu} u_\mu$$

on each section u of E. Here $P_{\nu\mu}$ is the scalar differential operator defined by

$$P_{\nu\mu} = \sum_{|\alpha| \le k} P^\alpha_{\nu\mu} \frac{\partial^{|\alpha|}}{\partial x^\alpha}.$$

With this trivialization, the principal symbol can now be written

$$(\sigma_P(\xi)\, u)_\nu = \sum_{|\alpha| = k} \sum_\mu P^\alpha_{\nu\mu}(x)\, \xi^\alpha\, u_\mu.$$

In the cotangent space over a fixed point x of X, the symbol $\sigma_P$ defines a homogeneous polynomial of degree k in $T_x^*X$ with values in $\operatorname{Hom}(E_x, F_x)$.

Fourier interpretation


A differential operator P and its symbol appear naturally in connection with the Fourier transform as follows. Let $f$ be a Schwartz function. Then by the inverse Fourier transform,

$$Pf(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{i x \cdot \xi}\, p(x, i\xi)\, \hat{f}(\xi)\, d\xi,$$

where $\hat{f}$ denotes the Fourier transform of $f$.

This exhibits P as a Fourier multiplier. A more general class of functions p(x,ξ) which satisfy at most polynomial growth conditions in ξ under which this integral is well-behaved comprises the pseudo-differential operators.
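
The multiplier picture can be checked numerically. The following sketch (grid size and test function are illustrative choices, not from the article) uses NumPy's FFT on a periodic grid to apply $P = d/dx$ as multiplication by $ik$ in frequency space:

import numpy as np

# Sketch: d/dx acts as multiplication by i*k on the Fourier side (periodic grid).
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = np.sin(3.0 * x)                                   # smooth periodic test function

k = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])    # angular wavenumbers
df_spectral = np.fft.ifft(1j * k * np.fft.fft(f)).real

df_exact = 3.0 * np.cos(3.0 * x)
print(abs(df_spectral - df_exact).max())              # ~1e-12: the multiplier reproduces d/dx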

Examples

Del defines the gradient, and is used to calculate the curl, divergence, and Laplacian of various objects.
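
In Cartesian coordinates on $\mathbb{R}^3$, these take the familiar forms (for a scalar field $f$ and a vector field $\mathbf{F} = (F_x, F_y, F_z)$):

$$\nabla f = \left(\frac{\partial f}{\partial x},\, \frac{\partial f}{\partial y},\, \frac{\partial f}{\partial z}\right), \qquad \nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z},$$

$$\nabla \times \mathbf{F} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z},\; \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x},\; \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right), \qquad \Delta f = \nabla \cdot \nabla f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.$$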

History


The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800.[3]

Notations


The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include:

$$\frac{d}{dx}, \quad D, \quad D_x, \quad \text{and} \quad \partial_x.$$

When taking higher, nth order derivatives, the operator may be written:

$$\frac{d^n}{dx^n}, \quad D^n, \quad D_x^n, \quad \text{or} \quad \partial_x^n.$$

The derivative of a function f of an argument x is sometimes given as either of the following:

$$[f(x)]' \quad \text{or} \quad f'(x).$$

The D notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form

$$\sum_{k=0}^{n} c_k D^k$$

in his study of differential equations.
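
A brief worked example of this operational viewpoint (the particular equation is chosen here for illustration): the constant-coefficient equation $y'' + 3y' + 2y = 0$ can be written and factored as

$$(D^2 + 3D + 2)\,y = (D + 1)(D + 2)\,y = 0,$$

so its general solution $c_1 e^{-x} + c_2 e^{-2x}$ is spanned by the kernels of the first-order factors $D + 1$ and $D + 2$.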

One of the most frequently seen differential operators is the Laplacian operator, defined by

$$\Delta = \nabla^2 = \sum_{k=1}^{n} \frac{\partial^2}{\partial x_k^2}.$$

Another differential operator is the Θ operator, or theta operator, defined by[4]

$$\Theta = z \frac{d}{dz}.$$

This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:

$$\Theta(z^k) = k z^k, \quad k = 0, 1, 2, \ldots$$

In n variables the homogeneity operator is given by

$$\Theta = \sum_{k=1}^{n} x_k \frac{\partial}{\partial x_k}.$$

As in one variable, the eigenspaces of Θ are the spaces of homogeneous functions (Euler's homogeneous function theorem).

In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used: the result of applying the operator to the function on the left side of the operator, the result of applying it to the function on the right side, and the difference obtained when applying the differential operator to the functions on both sides are denoted by arrows as follows:

$$f \overleftarrow{\partial_x} g \equiv g \cdot \partial_x f,$$
$$f \overrightarrow{\partial_x} g \equiv f \cdot \partial_x g,$$
$$f \overleftrightarrow{\partial_x} g \equiv f \cdot \partial_x g - g \cdot \partial_x f.$$

Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.
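
For instance, with the convention above (and standard quantum-mechanical notation, used here only as an illustration), the one-dimensional probability current can be written compactly as

$$j = \frac{\hbar}{2mi}\left(\psi^*\,\partial_x \psi - \psi\,\partial_x \psi^*\right) = \frac{\hbar}{2mi}\,\psi^*\,\overleftrightarrow{\partial_x}\,\psi .$$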

Adjoint of an operator


Given a linear differential operator

$$T u = \sum_{k=0}^{n} a_k(x) D^k u,$$

the adjoint of this operator is defined as the operator $T^*$ such that

$$\langle Tu, v \rangle = \langle u, T^* v \rangle,$$

where the notation $\langle \cdot, \cdot \rangle$ is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product (or inner product).

Formal adjoint in one variable


In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by

$$\langle f, g \rangle = \int_a^b \overline{f(x)}\, g(x)\, dx,$$

where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as $x \to a$ and $x \to b$, one can also define the adjoint of T by

$$T^* u = \sum_{k=0}^{n} (-1)^k D^k \left[ \overline{a_k(x)}\, u \right].$$

This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When is defined according to this formula, it is called the formal adjoint of T.

A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.
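
As a simple worked example, take $T = D = \frac{d}{dx}$ with $f$ or $g$ vanishing at the endpoints; integration by parts gives

$$\langle Tf, g \rangle = \int_a^b \overline{f'(x)}\, g(x)\, dx = \Big[\overline{f(x)}\, g(x)\Big]_a^b - \int_a^b \overline{f(x)}\, g'(x)\, dx = \langle f, -Dg \rangle,$$

so the formal adjoint of $D$ is $-D$; consequently $iD$ and $D^2$ are formally self-adjoint.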

Several variables


If Ω is a domain in $\mathbb{R}^n$, and P a differential operator on Ω, then the adjoint of P is defined in $L^2(\Omega)$ by duality in the analogous manner:

$$\langle f, P^* g \rangle_{L^2(\Omega)} = \langle P f, g \rangle_{L^2(\Omega)}$$

for all smooth $L^2$ functions f, g. Since smooth functions are dense in $L^2$, this defines the adjoint on a dense subset of $L^2$: P* is a densely defined operator.

Example


The Sturm–Liouville operator is a well-known example of a formally self-adjoint operator. This second-order linear differential operator L can be written in the form

$$L u = -\frac{d}{dx}\!\left[ p(x) \frac{du}{dx} \right] + q(x)\, u.$$

This property can be proven using the formal adjoint definition above.[5]

This operator is central to Sturm–Liouville theory where the eigenfunctions (analogues to eigenvectors) of this operator are considered.
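
A sketch of that verification: for real-valued $p$, $q$, $u$, and $v$, Lagrange's identity gives

$$v\,Lu - u\,Lv = \frac{d}{dx}\!\left[p(x)\big(u\,v' - v\,u'\big)\right],$$

so $\langle Lu, v \rangle - \langle u, Lv \rangle = \big[p\,(u v' - v u')\big]_a^b$, which vanishes under boundary conditions that kill the bracket at the endpoints; hence $L$ agrees with its formal adjoint.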

Properties


Differentiation is linear, i.e.

$$D(f + g) = Df + Dg, \qquad D(af) = a\, Df,$$

where f and g are functions, and a is a constant.

Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule

$$(D_1 \circ D_2)(f) = D_1(D_2(f)).$$

Some care is then required: firstly, any function coefficients in the operator $D_2$ must be differentiable as many times as the application of $D_1$ requires. To get a ring of such operators we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator $gD$ is not in general the same as $Dg$. For example, we have the relation, basic in quantum mechanics:

$$D x - x D = 1.$$

The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.

The differential operators also obey the shift theorem.
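
A minimal SymPy sketch of this non-commutativity (the symbols and the generic test function are illustrative choices):

import sympy as sp

# Sketch: verify (Dx - xD) f = f, i.e. the operator relation Dx - xD = 1.
x = sp.symbols('x')
f = sp.Function('f')(x)

xD = x * sp.diff(f, x)        # apply D first, then multiply by x  (the operator xD)
Dx = sp.diff(x * f, x)        # multiply by x first, then apply D  (the operator Dx)

print(sp.simplify(Dx - xD))   # prints f(x): the commutator acts as the identity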

Ring of polynomial differential operators


Ring of univariate polynomial differential operators


If R is a ring, let $R\langle D, X \rangle$ be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by $DX - XD - 1$. Then the ring of univariate polynomial differential operators over R is the quotient ring $R\langle D, X \rangle / I$. This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form $X^a D^b \bmod I$. It supports an analogue of Euclidean division of polynomials.
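
For instance, repeatedly applying the defining relation $DX = XD + 1$ brings any word in $D$ and $X$ into this normal form; e.g.

$$D X^2 = (XD + 1)X = X(XD + 1) + X = X^2 D + 2X .$$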

Differential modules over $R[X]$ (for the standard derivation) can be identified with modules over $R\langle D, X \rangle / I$.

Ring of multivariate polynomial differential operators


If R is a ring, let $R\langle D_1, \ldots, D_n, X_1, \ldots, X_n \rangle$ be the non-commutative polynomial ring over R in the variables $D_1, \ldots, D_n, X_1, \ldots, X_n$, and I the two-sided ideal generated by the elements

$$D_i X_j - X_j D_i - \delta_{ij}, \qquad D_i D_j - D_j D_i, \qquad X_i X_j - X_j X_i$$

for all $1 \le i, j \le n$, where $\delta$ is the Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring $R\langle D_1, \ldots, D_n, X_1, \ldots, X_n \rangle / I$.

This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form $X_1^{a_1} \cdots X_n^{a_n} D_1^{b_1} \cdots D_n^{b_n}$.

Coordinate-independent description


In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle $J^k(E)$. In other words, there exists a linear mapping of vector bundles

$$i_P : J^k(E) \to F$$

such that

$$P = i_P \circ j^k,$$

where $j^k: \Gamma(E) \to \Gamma(J^k(E))$ is the prolongation that associates to any section of E its k-jet.

This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s in x. In particular this implies that P(s)(x) is determined by the germ of s in x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.

Relation to commutative algebra


An equivalent, but purely algebraic description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator if for any k + 1 smooth functions $f_0, \ldots, f_k \in C^\infty(M)$ we have

$$[f_k, [f_{k-1}, [\cdots [f_0, P] \cdots ]]] = 0.$$

Here the bracket $[f, P] : \Gamma(E) \to \Gamma(F)$ is defined as the commutator

$$[f, P](s) = P(f \cdot s) - f \cdot P(s).$$

This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
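
As a sketch of how this characterization works, take $P = \frac{d}{dx}$ acting on $C^\infty(\mathbb{R})$. For smooth functions $f_0$, $f_1$ and any $s$,

$$[f_0, P](s) = P(f_0 s) - f_0\,P(s) = f_0'\, s, \qquad [f_1, [f_0, P]](s) = f_0'\,(f_1 s) - f_1\,(f_0'\, s) = 0,$$

so the twice-iterated bracket vanishes and $P$ is a linear differential operator of order at most 1, as expected.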

Variants


A differential operator of infinite order


A differential operator of infinite order is (roughly) a differential operator whose total symbol is a power series instead of a polynomial.

Invariant differential operator


An invariant differential operator is a differential operator that is also an invariant operator (e.g., commutes with a group action).

Bidifferential operator


A differential operator acting on two functions is called a bidifferential operator. The notion appears, for instance, in an associative algebra structure on a deformation quantization of a Poisson algebra.[6]

Microdifferential operator


A microdifferential operator is a type of operator on an open subset of a cotangent bundle, as opposed to an open subset of a manifold. It is obtained by extending the notion of a differential operator to the cotangent bundle.[7]

from Grokipedia
A differential operator is a mathematical operator that applies differentiation to functions, typically represented by the symbol $D = \frac{d}{dx}$ for the first derivative in one variable, and extended to higher-order derivatives via powers like $D^n = \frac{d^n}{dx^n}$. These operators are linear, satisfying $D(af + bg) = aD(f) + bD(g)$ for constants $a, b$ and functions $f, g$, and can be combined into polynomials such as $L = a_n D^n + \cdots + a_1 D + a_0$, where the coefficients $a_k$ may be functions of the independent variable. In multiple variables, they generalize to partial differential operators, like the Laplacian $\Delta = \sum \frac{\partial^2}{\partial x_i^2}$, which measures the divergence of the gradient of a function. Differential operators form the foundation of the theory of differential equations, where equations of the form $L(u) = f$ describe how functions evolve under differentiation, enabling the modeling of dynamic systems. For instance, linear constant-coefficient operators facilitate the solution of ordinary differential equations by factoring into characteristic equations, revealing exponential solutions. In partial differential equations, they underpin fundamental laws in physics, such as the heat equation $\frac{\partial u}{\partial t} = k \Delta u$ for diffusion processes and the wave equation $\frac{\partial^2 u}{\partial t^2} = c^2 \Delta u$ for wave propagation. Applications extend to electromagnetism and fluid dynamics, including electromagnetic fields via Maxwell's equations and fluid flow through the Navier-Stokes equations, where these operators capture spatial and temporal changes. Advanced concepts, such as pseudodifferential operators, arise in microlocal analysis to handle singularities and wavefronts in solutions.

Basic Concepts

Definition

A differential operator is a linear map $D: C^\infty(\Omega) \to C^\infty(\Omega)$, where $\Omega \subset \mathbb{R}^n$ is an open set and $C^\infty(\Omega)$ denotes the space of smooth real-valued functions on $\Omega$, that satisfies a generalized Leibniz rule characterizing its finite order. Specifically, $D$ is a differential operator of order at most $k$ if, for every smooth function $g \in C^\infty(\Omega)$, the commutator $[D, m_g]: m \mapsto D(g m) - g D(m)$ (where $m_g$ denotes multiplication by $g$) is a differential operator of order at most $k-1$, with order-0 operators being precisely the $C^\infty(\Omega)$-linear maps (i.e., multiplication operators). The order of $D$ is the minimal such nonnegative integer $k$ for which the $(k+1)$-fold iterated commutator with multiplication operators vanishes identically. This inductive definition via commutators ensures that $D$ locally behaves like a finite sum of partial derivatives, up to smooth multiplication factors.

In local coordinates $x = (x_1, \dots, x_n)$ on $\Omega$, any differential operator $D$ of order at most $k$ admits the explicit expression

$$D = \sum_{|\alpha| \leq k} a_\alpha(x) \partial^\alpha,$$

where $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n$ is a multi-index with $|\alpha| = \alpha_1 + \dots + \alpha_n$, $\partial^\alpha = \partial_{x_1}^{\alpha_1} \cdots \partial_{x_n}^{\alpha_n}$ denotes the corresponding partial differentiation operator of order $|\alpha|$, and the coefficients $a_\alpha: \Omega \to \mathbb{R}$ are smooth functions. The highest-order terms (with $|\alpha| = k$) determine the principal symbol of $D$, which plays a key role in analyzing its global properties.

More generally, differential operators extend to smooth manifolds $M$ of dimension $n$, acting as linear maps $D: C^\infty(M) \to C^\infty(M)$ that are locally of the above form in any coordinate chart. On a manifold, the order $k$ is independent of the choice of coordinates, and the commutator characterization relative to multiplication by smooth functions on $M$ is preserved. This framework captures extensions of classical differentiation while ensuring consistency across overlapping charts via partitions of unity.
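
A small SymPy sketch of this commutator test (the operator, the multiplier $g$, and the test function are illustrative choices): for the order-2 operator $D = \frac{d^2}{dx^2} + x\frac{d}{dx}$, the commutator with multiplication by $g$ should act as an operator of order at most 1.

import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')(x)     # multiplier
f = sp.Function('f')(x)     # test function

# Illustrative order-2 operator: D = d^2/dx^2 + x d/dx
D = lambda u: sp.diff(u, x, 2) + x * sp.diff(u, x)

# Commutator with multiplication by g: [D, m_g] f = D(g*f) - g*D(f)
commutator = sp.expand(D(g * f) - g * D(f))
print(commutator)
# Only f and f' appear (no f''), so [D, m_g] has order at most 1, as the definition requires.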

Historical Development

The concept of differential operators traces its origins to the 18th century, amid the development of the calculus, particularly the calculus of variations. Leonhard Euler and Joseph-Louis Lagrange began treating higher-order derivatives as successive applications of a basic operation acting on functions, facilitating the analysis of variational problems. Euler's foundational contributions in the 1740s, developed further through collaboration with Lagrange in the 1760s, marked an early recognition of derivatives as operator-like entities that could be composed and applied systematically to functionals. This intuitive approach gained formal structure in the early 19th century with Louis François Antoine Arbogast's introduction of the standalone differential operator notation $D$, which separated operational symbols from quantities and enabled algebraic manipulation of derivatives. Published in 1800 in Du calcul des dérivations et ses usages dans la théorie des suites et dans la géométrie, Arbogast's work represented a pivotal conceptual shift, allowing differential operators to be viewed independently of specific functions.

Subsequent 19th-century advancements by Augustin-Louis Cauchy and Karl Weierstrass further embedded differential operators within the rigorous theory of partial differential equations (PDEs). Cauchy's 1827 analysis of the Cauchy-Riemann equations exemplified first-order differential operators in complex analysis, while his 1840 power series methods for nonlinear PDE initial value problems highlighted their role in solution existence and uniqueness. Weierstrass's mid-1870s emphasis on analytical rigor critiqued earlier informal approaches, influencing the study of elliptic operators and boundary value problems in PDEs. In the early 20th century, Élie Cartan's development of exterior differential systems, begun around the turn of the century, provided a geometric framework for higher-order differential operators, integrating them with Lie groups and moving frames for solving systems of PDEs.

The mid-20th century brought abstract generalizations: Laurent Schwartz's 1950-1951 theory of distributions extended differential operators to act on generalized functions, enabling solutions to PDEs beyond classical smoothness. Pseudodifferential operators, building on singular integral techniques, were advanced by Alberto Calderón in 1959 to address Cauchy problems for broad classes of PDEs. Algebraically, the ring of differential operators on smooth varieties and manifolds was formalized by Alexander Grothendieck in the 1960s, revealing deep ties to sheaf theory and D-modules. During the 1940s-1950s, connections to Lie algebras emerged, with the ring of constant-coefficient differential operators identified as the universal enveloping algebra of the abelian Lie algebra of translations, influencing representation theory and quantization. Quantum mechanics profoundly shaped this evolution, as Erwin Schrödinger in the 1920s introduced differential operators like the momentum operator $-i\hbar \frac{d}{dx}$ to represent observables in wave mechanics, bridging classical analysis with operator algebras.

Examples and Notation

Examples

A fundamental example of a first-order differential operator in one variable is the differentiation operator $D = \frac{d}{dx}$, which maps a smooth function $f(x)$ to its derivative $Df = f'(x)$. This operator has order 1, as its principal symbol is the nonzero linear function $\xi \mapsto i\xi$ in the cotangent variable $\xi$. It satisfies the Leibniz rule for derivations: for smooth functions $f$ and $g$, $D(fg) = f\,Dg + g\,Df = f g' + g f'$.

A typical second-order differential operator in one variable is $P = \frac{d^2}{dx^2} + x \frac{d}{dx}$, which arises in contexts such as the Airy differential equation. Its order is 2, determined by the highest-order term $\frac{d^2}{dx^2}$, with principal symbol $\xi \mapsto -\xi^2$. To verify that it qualifies as a differential operator of order at most 2, consider its action under multiplication: for smooth functions $f$ and $h$, compute $P(fh) - f\,Ph = f'' h + 2 f' h' + x f' h$. This remainder equals $(f'' + x f') \cdot h + 2 f' \cdot h'$, which is a first-order differential operator in $h$ (order reduced by 1), confirming the Leibniz condition recursively.

In multiple variables, the divergence operator $\operatorname{div} = \sum_{i=1}^n \frac{\partial}{\partial x_i}$ acts on a vector field $V = (V_1, \dots, V_n)$ by $\operatorname{div} V = \sum_{i=1}^n \frac{\partial V_i}{\partial x_i}$. This is a first-order differential operator, with principal symbol $\xi \mapsto i \sum_{i=1}^n \xi_i$. It obeys the multivariable Leibniz rule: for a vector field $V$ and scalar function $f$, $\operatorname{div}(f V) = f\,\operatorname{div} V + \nabla f \cdot V = f \sum_{i=1}^n \frac{\partial V_i}{\partial x_i} + \sum_{i=1}^n \frac{\partial f}{\partial x_i} V_i$. The Laplacian $\Delta = \sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}$ is a second-order operator on scalar functions, with principal symbol $\xi \mapsto -\lVert \xi \rVert^2$, and satisfies the corresponding higher-order Leibniz condition, such as $\Delta(f h) - f\,\Delta h = 2 \sum_{i=1}^n \frac{\partial f}{\partial x_i} \frac{\partial h}{\partial x_i} + h\,\Delta f$, where the remainder is of order at most 1 in $h$.

On Riemannian manifolds, the covariant derivative $\nabla$ provides a differential operator that generalizes partial differentiation to tensor fields while respecting the manifold's metric structure. For a vector field $Y$ differentiated in the direction of a vector field $X$, $\nabla_X Y$ measures the rate of change of $Y$ relative to the parallel transport determined by the connection, satisfying $\nabla_X (f Y) = f \nabla_X Y + (X f) Y$ as the Leibniz rule. Its order is 1, with principal symbol $\sigma_\nabla(\xi) Y = i\,\xi \otimes Y$. Similarly, the Dirac operator $D$ on a spinor bundle over a spin manifold is a first-order elliptic differential operator, locally expressed as $D = \sum_{j=1}^n e_j \cdot \nabla_{e_j}$ for an orthonormal frame $\{e_j\}$, where $\cdot$ denotes Clifford multiplication. As a differential operator, it satisfies the Leibniz rule $D(f s) = f\,D s + c(df)\, s$ for a smooth function $f$ and section $s$, where $c$ denotes Clifford multiplication. Its principal symbol is $\sigma_D(\xi) = i \sum_{j=1}^n \xi(e_j)\, e_j \cdot$, which is invertible for $\xi \neq 0$.

An example of a differential operator combining derivatives in different variables is the heat operator $L = \frac{\partial}{\partial t} - \Delta$, acting on functions $u(t, x)$ in $\mathbb{R} \times \mathbb{R}^n$. It has order 2, dominated by the second-order spatial Laplacian $\Delta$, with total symbol $i\tau + \lVert \xi \rVert^2$ in the dual variables $(\tau, \xi)$ (its principal, second-order part being $\lVert \xi \rVert^2$).
It satisfies the Leibniz condition for order 2; for instance, in one spatial dimension, $L(f u) = \partial_t (f u) - \partial_{xx} (f u) = (\partial_t f) u + f \partial_t u - [f_{xx} u + 2 f_x u_x + f u_{xx}]$, and $f L u = f (\partial_t u - u_{xx})$, so the difference is $(\partial_t f) u - f_{xx} u - 2 f_x u_x$, which rearranges to multiplication by $(\partial_t f - f_{xx})$ times $u$ plus multiplication by $-2 f_x$ times $u_x$, a first-order operator in $u$.
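
A SymPy check of this last computation (a sketch; the function names are illustrative):

import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')(t, x)
u = sp.Function('u')(t, x)

# Heat operator L = d/dt - d^2/dx^2 in one spatial dimension
L = lambda w: sp.diff(w, t) - sp.diff(w, x, 2)

remainder = sp.expand(L(f * u) - f * L(u))
print(remainder)
# The result contains only u and u_x (no u_t or u_xx): a first-order operator in u.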

Notations

In the univariate case, the differential operator corresponding to the first derivative is commonly denoted by $D$, representing $\frac{d}{dx}$, with higher powers $D^k$ indicating the $k$-th derivative $\frac{d^k}{dx^k}$. In the multivariate setting, partial derivatives with respect to variables $x_1, \dots, x_n$ are denoted by $\partial_i = \frac{\partial}{\partial x_i}$ for $i = 1, \dots, n$. To compactly express higher-order partial derivatives, multi-index notation is standard: a multi-index $\alpha = (\alpha_1, \dots, \alpha_n)$ is an $n$-tuple of nonnegative integers, with $|\alpha| = \sum_{i=1}^n \alpha_i$ denoting its order, and $\partial^\alpha = \prod_{i=1}^n \partial_i^{\alpha_i}$ representing the corresponding mixed partial derivative. This notation facilitates sums over all partial derivatives of order at most $k$, as in $\sum_{|\alpha| \leq k} a_\alpha(x) \partial^\alpha f(x)$. Polynomial differential operators are often symbolized as $P(x, D)$, where $D = (\partial_1, \dots, \partial_n)$ and $P$ is a polynomial in the variables $x$ and the formal symbols $D_j$, such as $P(x, D) = \sum_{|\alpha| \leq m} a_\alpha(x) D^\alpha$ with $D^\alpha = (-i)^{|\alpha|} \partial^\alpha$ in some conventions to align with Fourier analysis.

On a smooth manifold $M$, the space of differential operators of order at most $k$ acting on smooth sections of a vector bundle is denoted by $\mathrm{Diff}^k(M)$, forming a filtered algebra under composition. Notational conventions vary between mathematics and physics: mathematicians typically use an italicized or script $\mathcal{D}$ for formal differential operators and emphasize formal adjoints, while physicists often employ a boldface or upright $\mathbf{D}$ and prioritize Hermitian adjoints in Hilbert-space contexts. For instance, the Laplacian operator may appear as $\Delta = \sum_i \partial_i^2$ in mathematical texts but as $-\hbar^2 \nabla^2$ in quantum mechanics, highlighting domain-specific adjustments. Composition of differential operators can be left or right ordered, affecting the symbol in quantization schemes; in Weyl quantization, the symbol of the product $\mathrm{Op}(a)\,\mathrm{Op}(b)$ corresponds to a symmetric (Weyl) ordering, in which multiplication operators act midway between left and right derivatives, given by the oscillatory integral formula for the composed symbol.
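
A brief sketch of this bookkeeping in SymPy (dimension, order, and test function are illustrative choices): enumerate all multi-indices $\alpha$ with $|\alpha| \le k$ in two variables and apply $\partial^\alpha$ to a test function.

import sympy as sp
from itertools import product

x1, x2 = sp.symbols('x1 x2')
f = x1**3 * x2**2            # illustrative test function
k = 2                        # maximum order |alpha| <= k

for alpha in product(range(k + 1), repeat=2):
    if sum(alpha) <= k:
        d_alpha = f
        for var, n in zip((x1, x2), alpha):
            for _ in range(n):
                d_alpha = sp.diff(d_alpha, var)   # apply d/dvar n times
        print(alpha, d_alpha)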

Fundamental Properties

General Properties

Differential operators are linear maps: for a differential operator $P$ of order $m$, functions $u, v$, and scalars $a, b$, $P(au + bv) = a P u + b P v$. In appropriate function spaces, such as Sobolev spaces $H^s(\mathbb{R}^n)$, these operators are continuous: a differential operator $P(D)$ of order $m$ with smooth coefficients maps $H^{s+m}_{\mathrm{loc}}(\Omega)$ continuously to $H^s_{\mathrm{loc}}(\Omega)$ for all $s \in \mathbb{R}$. This continuity holds in the topology induced by the Sobolev norms, ensuring well-defined behavior on spaces of functions with controlled derivatives.

The composition of two differential operators exhibits a Leibniz-type structure. If $P$ has order $k$ and $Q$ has order $m$, then $P \circ Q$ has order $k + m$, and the leading terms in the composition arise from the product of the leading coefficients via a generalized Leibniz rule. Specifically, for operators $P = \sum_{|\alpha| \leq k} p_\alpha(x) \partial^\alpha$ and $Q = \sum_{|\beta| \leq m} q_\beta(x) \partial^\beta$, the highest-order part of $P \circ Q$ is simply the product of the principal parts. Commutators with multiplication operators further illustrate this: for a first-order operator $D = \partial_j$ and a smooth function $f$, $[D, f]\, g = D(f g) - f D g = (\partial_j f)\, g$, which is a zero-order operator, and in general $[D, f]$ reduces the order by at least one.

Each differential operator $P$ of order $k$ has a principal symbol $\sigma_k(P)(x, \xi) = \sum_{|\alpha| = k} a_\alpha(x) (i \xi)^\alpha$, a homogeneous polynomial of degree $k$ in the cotangent variable $\xi$. This symbol captures the highest-order behavior and is independent of lower-order terms. An operator is elliptic if its principal symbol satisfies $|\sigma_k(P)(x, \xi)| \geq c |\xi|^k$ for some $c > 0$ and all $\xi \neq 0$, ensuring the operator is "invertible" in the high-frequency regime and leading to improved regularity properties for solutions. For systems, ellipticity requires the principal symbol matrix to be invertible for $\xi \neq 0$.
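
A standard comparison, stated here as an illustration: the Laplacian is elliptic, while the heat operator is not. With the convention above,

$$\sigma_2(\Delta)(x, \xi) = \sum_{i=1}^n (i\xi_i)^2 = -|\xi|^2, \qquad |\sigma_2(\Delta)(x, \xi)| = |\xi|^2,$$

so the ellipticity bound holds with $c = 1$, whereas the principal (order-2) symbol of $\partial_t - \Delta$ is $|\xi|^2$, which vanishes on the nonzero covectors $(\tau, 0)$; the heat operator therefore fails the bound and is not elliptic.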

Fourier Interpretation

The Fourier transform provides a powerful interpretation of differential operators by transforming them into multiplication operators in the frequency domain. For a function $u$ on $\mathbb{R}^n$, the Fourier transform $\hat{u}(\xi) = \int_{\mathbb{R}^n} u(x) e^{-i x \cdot \xi} \, dx$ (up to normalization constants) converts partial derivatives into multiplications: the operator $\partial_j$ acts as $\widehat{\partial_j u}(\xi) = i \xi_j \hat{u}(\xi)$, so differentiation corresponds to multiplication by $i\xi_j$ on the Fourier side.