Exact differential

from Wikipedia

In multivariate calculus, a differential or differential form is said to be exact or perfect (an exact differential), as contrasted with an inexact differential, if it is equal to the general differential $dQ$ for some differentiable function $Q$ in an orthogonal coordinate system (hence $Q$ is a multivariable function whose variables are independent, as they are always expected to be when treated in multivariable calculus).

An exact differential is sometimes also called a total differential, or a full differential, or, in the study of differential geometry, it is termed an exact form.

The integral of an exact differential between two endpoints is independent of the integral path chosen, and this fact is used to identify state functions in thermodynamics.

Overview


Definition


Even though we work in three dimensions here, the definitions of exact differentials for other dimensions are structurally similar. In three dimensions, a form of the type

$A(x, y, z)\,dx + B(x, y, z)\,dy + C(x, y, z)\,dz$

is called a differential form. This form is called exact on an open domain $D \subset \mathbb{R}^3$ in space if there exists some differentiable scalar function $Q = Q(x, y, z)$ defined on $D$ such that

$dQ = \left(\frac{\partial Q}{\partial x}\right)_{y,z} dx + \left(\frac{\partial Q}{\partial y}\right)_{x,z} dy + \left(\frac{\partial Q}{\partial z}\right)_{x,y} dz = A\,dx + B\,dy + C\,dz$

throughout $D$, where $x$, $y$, $z$ are orthogonal coordinates (e.g., Cartesian, cylindrical, or spherical coordinates). In other words, in some open domain of a space, a differential form is an exact differential if it is equal to the general differential of a differentiable function in an orthogonal coordinate system.

The subscripts outside the parentheses in the above mathematical expression indicate which variables are being held constant during differentiation. Due to the definition of the partial derivative, these subscripts are not required, but they are explicitly shown here as reminders.

Integral path independence


The exact differential for a differentiable scalar function $Q$ defined in an open domain $D \subset \mathbb{R}^n$ is equal to $dQ = \nabla Q \cdot d\mathbf{r}$, where $\nabla Q$ is the gradient of $Q$, $\cdot$ represents the scalar product, and $d\mathbf{r}$ is the general differential displacement vector, if an orthogonal coordinate system is used. If $Q$ is of differentiability class $C^1$ (continuously differentiable), then $\nabla Q$ is a conservative vector field for the corresponding potential $Q$ by definition. For three-dimensional spaces, expressions such as $d\mathbf{r} = (dx, dy, dz)$ and $dQ = A\,dx + B\,dy + C\,dz = \nabla Q \cdot d\mathbf{r}$ can be made.

The gradient theorem states

$\int_i^f dQ = \int_i^f \nabla Q \cdot d\mathbf{r} = Q(f) - Q(i),$

which does not depend on which integral path between the given path endpoints $i$ and $f$ is chosen. So it is concluded that the integral of an exact differential is independent of the choice of an integral path between given path endpoints (path independence).

For three-dimensional spaces, if $Q$ defined on an open domain $D$ is of differentiability class $C^2$ (equivalently $\nabla Q$ is of $C^1$), then this integral path independence can also be proved by using the vector calculus identity $\nabla \times (\nabla Q) = \mathbf{0}$ and Stokes' theorem:

$\oint_{\partial\Sigma} \nabla Q \cdot d\mathbf{r} = \iint_{\Sigma} (\nabla \times \nabla Q) \cdot d\mathbf{a} = 0$

for a simple closed loop $\partial\Sigma$ with the smooth oriented surface $\Sigma$ in it. If the open domain $D$ is simply connected (roughly speaking, a single-piece open space without a hole within it), then any irrotational vector field (defined as a vector field whose curl is zero, i.e., $\nabla \times \mathbf{v} = \mathbf{0}$) has path independence by Stokes' theorem, so the following statement is made: in a simply connected open region, any vector field that has the path-independence property (so it is a conservative vector field) must also be irrotational, and vice versa. This establishes the equivalence of path independence and conservative vector fields.

Thermodynamic state function


In thermodynamics, when $dQ$ is exact, the function $Q$ is a state function of the system: a mathematical function which depends solely on the current equilibrium state, not on the path taken to reach that state. Internal energy $U$, entropy $S$, enthalpy $H$, Helmholtz free energy $A$, and Gibbs free energy $G$ are state functions. Generally, neither work nor heat is a state function. (Note: $Q$ is commonly used to represent heat in physics. It should not be confused with its use earlier in this article as the potential of an exact differential.)

One dimension


In one dimension, a differential form

$A(x)\,dx$

is exact if and only if $A$ has an antiderivative (but not necessarily one in terms of elementary functions). If $A$ has an antiderivative, let $Q$ be an antiderivative of $A$, so $\frac{dQ}{dx} = A$; then $A(x)\,dx = dQ$ obviously satisfies the condition for exactness. If $A$ does not have an antiderivative, then we cannot write $A(x)\,dx = \frac{dQ}{dx}\,dx$ for a differentiable function $Q$, so $A(x)\,dx$ is inexact.

Two and three dimensions


By symmetry of second derivatives, for any "well-behaved" (non-pathological) function $Q$, we have

$\frac{\partial^2 Q}{\partial x\,\partial y} = \frac{\partial^2 Q}{\partial y\,\partial x}.$

Hence, in a simply-connected region R of the xy-plane, where $x$ and $y$ are independent,[1] a differential form

$A(x, y)\,dx + B(x, y)\,dy$

is an exact differential if and only if the equation

$\left(\frac{\partial A}{\partial y}\right)_x = \left(\frac{\partial B}{\partial x}\right)_y$

holds. If it is an exact differential, so $A = \frac{\partial Q}{\partial x}$ and $B = \frac{\partial Q}{\partial y}$, then $Q$ is a differentiable function along $x$ and $y$, so $\left(\frac{\partial A}{\partial y}\right)_x = \frac{\partial^2 Q}{\partial y\,\partial x} = \frac{\partial^2 Q}{\partial x\,\partial y} = \left(\frac{\partial B}{\partial x}\right)_y$. Conversely, if the equation holds in a simply-connected region, then $A$ and $B$ are the partial derivatives of some differentiable function $Q$, and the form is exact.

For three dimensions, in a simply-connected region R of the xyz-coordinate system, by a similar reasoning, a differential

$dQ = A(x, y, z)\,dx + B(x, y, z)\,dy + C(x, y, z)\,dz$

is an exact differential if and only if between the functions A, B and C there exist the relations

$\left(\frac{\partial A}{\partial y}\right)_{x,z} = \left(\frac{\partial B}{\partial x}\right)_{y,z};\quad \left(\frac{\partial A}{\partial z}\right)_{x,y} = \left(\frac{\partial C}{\partial x}\right)_{y,z};\quad \left(\frac{\partial B}{\partial z}\right)_{x,y} = \left(\frac{\partial C}{\partial y}\right)_{x,z}.$

These conditions are equivalent to the following statement: if G is the graph of this vector-valued function, then for all tangent vectors X, Y of the surface G, s(X, Y) = 0, with s the symplectic form.

These conditions, which are easy to generalize, arise from the independence of the order of differentiations in the calculation of the second derivatives. So, in order for a differential $dQ$ that is a function of four variables to be an exact differential, there are six conditions (the combination $\binom{4}{2} = 6$) to satisfy.

Partial differential relations


If a differentiable function $z(x, y)$ is one-to-one (injective) in each independent variable, e.g., $z(x, y)$ is one-to-one in $x$ at a fixed $y$ while it is not necessarily one-to-one in $(x, y)$ jointly, then the following total differentials exist because each variable is a differentiable function of the other two, e.g., $x(y, z)$:

$dx = \left(\frac{\partial x}{\partial y}\right)_z dy + \left(\frac{\partial x}{\partial z}\right)_y dz$

$dz = \left(\frac{\partial z}{\partial x}\right)_y dx + \left(\frac{\partial z}{\partial y}\right)_x dy.$

Substituting the first equation into the second and rearranging, we obtain

$dz = \left(\frac{\partial z}{\partial x}\right)_y \left[\left(\frac{\partial x}{\partial y}\right)_z dy + \left(\frac{\partial x}{\partial z}\right)_y dz\right] + \left(\frac{\partial z}{\partial y}\right)_x dy,$

$\left[1 - \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial z}\right)_y\right] dz = \left[\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_z + \left(\frac{\partial z}{\partial y}\right)_x\right] dy.$

Since $y$ and $z$ are independent variables, $dy$ and $dz$ may be chosen without restriction. For this last equation to hold in general, the bracketed terms must be equal to zero.[2] The left bracket equal to zero leads to the reciprocity relation, while the right bracket equal to zero gives the cyclic relation, as shown below.

Reciprocity relation


Setting the first term in brackets equal to zero yields

$\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial z}\right)_y = 1.$

A slight rearrangement gives a reciprocity relation,

$\left(\frac{\partial z}{\partial x}\right)_y = \frac{1}{\left(\frac{\partial x}{\partial z}\right)_y}.$

There are two more permutations of the foregoing derivation that give a total of three reciprocity relations between $x$, $y$ and $z$.

Cyclic relation


The cyclic relation is also known as the cyclic rule or the triple product rule. Setting the second term in brackets equal to zero yields

$\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_z = -\left(\frac{\partial z}{\partial y}\right)_x.$

Using a reciprocity relation for $\left(\frac{\partial z}{\partial y}\right)_x$ on this equation and reordering gives a cyclic relation (the triple product rule),

$\left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1.$

If, instead, reciprocity relations for $\left(\frac{\partial x}{\partial y}\right)_z$ and $\left(\frac{\partial y}{\partial z}\right)_x$ are used with subsequent rearrangement, a standard form for implicit differentiation is obtained:

$\left(\frac{\partial y}{\partial x}\right)_z = -\frac{\left(\frac{\partial z}{\partial x}\right)_y}{\left(\frac{\partial z}{\partial y}\right)_x}.$
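Both the cyclic relation and the implicit-differentiation form can be verified symbolically on a concrete state relation. A minimal sketch with sympy (an assumed dependency), using the hypothetical relation $z = xy$ for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# State relation z = x*y, solved for each variable in terms of the other two
z_xy = x * y   # z(x, y)
x_yz = z / y   # x(y, z)
y_xz = z / x   # y(x, z)

# Triple product (dx/dy)_z * (dy/dz)_x * (dz/dx)_y, evaluated on the surface z = x*y
prod = sp.diff(x_yz, y) * sp.diff(y_xz, z) * sp.diff(z_xy, x)
assert sp.simplify(prod.subs(z, x * y)) == -1

# Implicit differentiation: (dy/dx)_z = -(dz/dx)_y / (dz/dy)_x
lhs = sp.diff(y_xz, x)                      # direct derivative of y = z/x at fixed z
rhs = -sp.diff(z_xy, x) / sp.diff(z_xy, y)  # = -y/x
assert sp.simplify(lhs.subs(z, x * y) - rhs) == 0
```

Any other one-to-one state relation could be substituted; the product is $-1$ in every case, which is what the derivation above guarantees.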

Some useful equations derived from exact differentials in two dimensions


(See also Bridgman's thermodynamic equations for the use of exact differentials in the theory of thermodynamic equations)

Suppose we have five state functions $z, x, y, u$, and $v$. Suppose that the state space is two-dimensional and any of the five quantities are differentiable. Then by the chain rule

(1) $dz = \left(\frac{\partial z}{\partial x}\right)_y dx + \left(\frac{\partial z}{\partial y}\right)_x dy = \left(\frac{\partial z}{\partial u}\right)_v du + \left(\frac{\partial z}{\partial v}\right)_u dv$

but also by the chain rule:

(2) $dx = \left(\frac{\partial x}{\partial u}\right)_v du + \left(\frac{\partial x}{\partial v}\right)_u dv$

and

(3) $dy = \left(\frac{\partial y}{\partial u}\right)_v du + \left(\frac{\partial y}{\partial v}\right)_u dv$

so that (by substituting (2) and (3) into (1)):

(4) $dz = \left[\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial u}\right)_v + \left(\frac{\partial z}{\partial y}\right)_x \left(\frac{\partial y}{\partial u}\right)_v\right] du + \left[\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial v}\right)_u + \left(\frac{\partial z}{\partial y}\right)_x \left(\frac{\partial y}{\partial v}\right)_u\right] dv$

which implies that (by comparing (4) with (1)):

(5) $\left(\frac{\partial z}{\partial u}\right)_v = \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial u}\right)_v + \left(\frac{\partial z}{\partial y}\right)_x \left(\frac{\partial y}{\partial u}\right)_v$

Letting $v = y$ in (5) gives:

(6) $\left(\frac{\partial z}{\partial u}\right)_y = \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial u}\right)_y$

Letting $u = y$ in (5) gives:

(7) $\left(\frac{\partial z}{\partial y}\right)_v = \left(\frac{\partial z}{\partial y}\right)_x + \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_v$

Letting $v = z$ in (7) gives:

(8) $\left(\frac{\partial z}{\partial y}\right)_z = \left(\frac{\partial z}{\partial y}\right)_x + \left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_z$

Using $\left(\frac{\partial z}{\partial y}\right)_z = 0$ gives the triple product rule:

$\left(\frac{\partial z}{\partial x}\right)_y \left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x = -1.$


from Grokipedia
In mathematics and physics, an exact differential is a differential form $df = P(\mathbf{x}) \cdot d\mathbf{x}$ in multiple variables that arises as the total differential of a scalar potential function $f$, such that $P_i = \frac{\partial f}{\partial x_i}$ for each component, ensuring the line integral $\int df$ between two points is path-independent and equals $f(B) - f(A)$.[1] For a two-variable case, $df = P(x,y)\, dx + Q(x,y)\, dy$ is exact if there exists $f(x,y)$ satisfying these partial derivative relations.[2] A key test for exactness, assuming the functions are continuously differentiable, is the equality of mixed partial derivatives: $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}$.[1] This condition stems from Clairaut's theorem on the equality of mixed partials and guarantees the existence of the potential function locally.[2] Exact differentials play a central role in thermodynamics, where they distinguish state functions—like internal energy $U$ or entropy $S$, whose differentials $dU$ and $dS$ are exact—from path functions like heat $\delta Q$ and work $\delta W$, which are inexact and depend on the process path.[2] In this context, exactness implies the function's value depends only on the system's state, not its history, enabling the formulation of fundamental relations such as $dU = \delta Q - \delta W$.[2] In the study of differential equations, an equation $M(x,y)\, dx + N(x,y)\, dy = 0$ is termed exact if its left side is the exact differential of some function $\Psi(x,y)$, allowing direct integration to yield the implicit solution $\Psi(x,y) = C$.[3] This approach simplifies solving first-order equations without needing integrating factors, provided the exactness condition holds.[3]

Definition and Properties

Formal Definition

In multivariable calculus, an exact differential is a differential form that arises as the total differential of some scalar potential function. Consider a differential form in two variables, $\omega = P(x,y)\, dx + Q(x,y)\, dy$. This form is exact if there exists a scalar function $f(x,y)$ such that $df = \omega$, meaning $\frac{\partial f}{\partial x} = P(x,y)$ and $\frac{\partial f}{\partial y} = Q(x,y)$.[1][4] A necessary and sufficient condition for exactness in two variables, assuming sufficient smoothness of $P$ and $Q$ on a simply connected domain, is that the mixed partial derivatives satisfy $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}$. This equality follows from Clairaut's theorem (also known as Schwarz's theorem), which states that if the second partial derivatives of $f$ are continuous, then $\frac{\partial^2 f}{\partial y\,\partial x} = \frac{\partial^2 f}{\partial x\,\partial y}$.[1][4][5] This concept generalizes to $n$ variables, where a 1-form $\omega = \sum_{i=1}^n P_i(\mathbf{x})\, dx_i$ is exact if it is the exterior derivative of a 0-form (scalar function), or equivalently, if the associated vector field $\mathbf{P} = (P_1, \dots, P_n)$ is conservative; in particular, in three dimensions, its curl vanishes: $\nabla \times \mathbf{P} = \mathbf{0}$.[6][7] The formalization of exact differentials within the framework of vector calculus occurred in the late 19th century, primarily through the independent contributions of Josiah Willard Gibbs and Oliver Heaviside, who developed the modern notation and theorems for vector fields and their differentials.[8]

Path Independence

A defining characteristic of an exact differential $\omega$ is that the line integral $\int_C \omega$ between two fixed endpoints in the domain is independent of the path $C$ connecting them.[9] This property holds because an exact differential satisfies $\omega = df$ for some scalar potential function $f$, ensuring the integral captures only the net change in $f$.[10] The fundamental theorem for line integrals formalizes this path independence. For a smooth curve $C$ parameterized by $\vec{r}(t)$ with $a \leq t \leq b$, and $\omega = \nabla f \cdot d\vec{r}$ where $\nabla f$ is continuous, the theorem states:

$\int_C \nabla f \cdot d\vec{r} = f(\vec{r}(b)) - f(\vec{r}(a)).$

This equality depends solely on the endpoints, not the specific path.[9] To derive this, parameterize the curve and apply the chain rule:

$\nabla f \cdot \frac{d\vec{r}}{dt} = \frac{d}{dt}\left[f(\vec{r}(t))\right].$

Integrating both sides from $t = a$ to $t = b$ and invoking the fundamental theorem of calculus yields:

$\int_a^b \frac{d}{dt}\left[f(\vec{r}(t))\right] dt = f(\vec{r}(b)) - f(\vec{r}(a)).$

Thus, the line integral simplifies to the difference in the potential function values, confirming path independence.[10] In contrast, inexact differentials lead to path-dependent integrals, where the value varies with the chosen route. For instance, the work done by non-conservative forces, such as friction, depends on the trajectory taken, as energy dissipation accumulates differently along varied paths.[11][12] A simple example illustrates this for an exact differential. Consider $\omega = 2xy\, dx + x^2\, dy$, which is exact with potential $f(x,y) = x^2 y$. The line integral from $(0,0)$ to $(1,1)$ equals $f(1,1) - f(0,0) = 1 - 0 = 1$, regardless of path. Direct computation along the straight line $y = x$ (where $dy = dx$, $0 \leq x \leq 1$) gives:

$\int_0^1 (2x \cdot x + x^2 \cdot 1)\, dx = \int_0^1 3x^2\, dx = 1.$

Along the piecewise path (first along the x-axis to $(1,0)$, then up the vertical line $x = 1$ to $(1,1)$), the integral is $0 + \int_0^1 x^2\, dy = 1$ (with $x = 1$), confirming the same result.[9]
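The same check can be run numerically along arbitrary smooth paths. A sketch using a midpoint-rule approximation of the line integral (the helper name `line_integral` is illustrative):

```python
import math

def line_integral(path, n=20000):
    """Midpoint-rule approximation of the line integral of
    omega = 2*x*y dx + x**2 dy along path: t -> (x(t), y(t)), t in [0, 1]."""
    total = 0.0
    dt = 1.0 / n
    for i in range(n):
        x, y = path((i + 0.5) * dt)          # midpoint of the i-th step
        x0, y0 = path(i * dt)
        x1, y1 = path((i + 1) * dt)
        dx, dy = x1 - x0, y1 - y0            # coordinate increments over the step
        total += 2 * x * y * dx + x ** 2 * dy
    return total

straight = lambda t: (t, t)        # straight line y = x
parabola = lambda t: (t, t ** 2)   # curved path y = x**2

print(line_integral(straight))     # ≈ 1
print(line_integral(parabola))     # ≈ 1
```

Both paths share the endpoints $(0,0)$ and $(1,1)$, and both integrals agree with $f(1,1) - f(0,0) = 1$, as path independence requires.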

Thermodynamic Applications

State Functions

In thermodynamics, state functions are properties of a system that depend solely on its current state, defined by variables such as temperature, pressure, and volume, rather than the history or path taken to reach that state. Examples include internal energy $U$, enthalpy $H$, entropy $S$, Helmholtz free energy $F$, and Gibbs free energy $G$. In contrast, process functions like heat $Q$ and work $W$ are path-dependent, meaning their values vary with the specific process connecting initial and final states.[13] The differential of a state function is exact, ensuring that changes in the function are independent of the path. For instance, the first law of thermodynamics states that $dU = \delta Q - \delta W$, where $\delta Q$ and $\delta W$ are inexact differentials for infinitesimal heat and work transfers (with $\delta W$ denoting work done by the system), but their difference $dU$ is exact. This holds for any process, reversible or irreversible, implying that the finite change $\Delta U = U_{\text{final}} - U_{\text{initial}}$ depends only on the initial and final states, not the intermediate path.[14] A key criterion for identifying state functions is that the line integral of their differential around any closed cycle vanishes: $\oint d\phi = 0$, where $\phi$ is the state function. This property confirms path independence and distinguishes state functions from process functions, whose cyclic integrals are generally nonzero.[15][16] In modern contexts, such as non-equilibrium thermodynamics, the exactness of differentials for state functions may not hold in the traditional sense, as systems deviate from equilibrium states during irreversible processes like rapid expansions or transport phenomena. Here, internal energy and other state variables can exhibit path-dependent behaviors due to entropy production and spatial gradients, requiring extensions like local equilibrium approximations or additional flux variables to describe dynamics accurately.[17]

Examples in Thermodynamics

In thermodynamics, the internal energy $U$ serves as a fundamental example of an exact differential, expressed as $dU = T\, dS - P\, dV$, where $T$ is temperature, $S$ is entropy, $P$ is pressure, and $V$ is volume. This form is exact because $U$ is a state function depending solely on the state variables $S$ and $V$, ensuring that changes in $U$ are path-independent and determined only by the initial and final states of the system.[18][19] Similarly, the enthalpy $H$, defined as $H = U + PV$, has the exact differential $dH = T\, dS + V\, dP$. As a state function with natural variables $S$ and $P$, $H$ exhibits path independence, meaning the change $\Delta H$ for any process depends only on the endpoints, making it particularly useful for constant-pressure processes where $\Delta H$ equals the heat transferred.[18][19] The path independence of exact differentials like those for $U$ and $H$ highlights a key distinction in thermodynamic processes: reversible versus irreversible. In reversible processes, the equality $dU = T\, dS - P\, dV$ holds exactly, allowing full recovery of work and heat along the path, whereas irreversible processes involve inequalities (e.g., $\delta q < T\, dS$), but the net change in state functions such as $\Delta U$ or $\Delta H$ remains path-independent, relying only on initial and final states.[20] The Gibbs free energy $G = H - TS$ provides another illustration, with its exact differential $dG = -S\, dT + V\, dP$, exact due to its dependence on state variables $T$ and $P$. This form is crucial in phase equilibria, where at constant $T$ and $P$, the minimum $G$ determines stable phases, and equilibrium between phases occurs when their chemical potentials are equal, ensuring $dG = 0$.[19][21]

Mathematical Formulations

One-Dimensional Case

In the one-dimensional case, the differential of a function $f(x)$ is given by $df = f'(x)\, dx$, where $f'(x)$ is the derivative of $f$ with respect to $x$. This form is exact by definition, as it directly represents the infinitesimal change in $f$ corresponding to an infinitesimal change $dx$ in the independent variable $x$.[22] The integral of this exact differential from a point $a$ to $b$ yields $\int_a^b df = f(b) - f(a)$, which follows from the Fundamental Theorem of Calculus and holds regardless of the specific path taken, since in one dimension there is only a single possible path along the line.[22] This path independence is a trivial consequence of the linear nature of one-dimensional space. A representative example from mechanics illustrates this concept: the position $s(t)$ of a particle moving along a straight line serves as the potential function whose exact differential is the displacement $ds = v(t)\, dt$, where $v(t)$ is the velocity. Integrating this differential gives the change in position $\Delta s = \int_{t_1}^{t_2} v(t)\, dt = s(t_2) - s(t_1)$, directly linking the antiderivative of velocity to displacement.[23] This one-dimensional formulation is fundamentally tied to the concept of antiderivatives in calculus, where the exact differential $df = f'(x)\, dx$ implies that $f(x)$ is the antiderivative of $f'(x)$, up to a constant, ensuring that integration recovers the original function precisely.[22]

Multidimensional Case

In the multidimensional case, the concept of an exact differential extends beyond one variable to differential forms on $\mathbb{R}^n$, where a 1-form $\omega = P\, dx_1 + Q\, dx_2 + \cdots + R\, dx_n$ is exact if there exists a scalar potential function $f$ such that $\omega = df$, meaning the coefficients are the partial derivatives of $f$.[24] This corresponds to the vector field $(P, Q, \dots, R)$ being conservative, with $\nabla f = (P, Q, \dots, R)$.[25] In two dimensions, consider the 1-form $\omega = P(x,y)\, dx + Q(x,y)\, dy$. On a simply connected domain, the form is exact if and only if it is closed, satisfying the condition $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}$, which is equivalent to the scalar curl $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}$ of the associated vector field $(P, Q)$ being zero.[24] This test ensures path independence of the line integral $\int_C \omega$, a hallmark of conservative fields where the integral depends only on the endpoints.[25] For three dimensions, the 1-form $\omega = P(x,y,z)\, dx + Q(x,y,z)\, dy + R(x,y,z)\, dz$ is exact if the vector field $(P, Q, R)$ has zero curl: $\nabla \times (P, Q, R) = \mathbf{0}$.[26] The components of the curl are $\left( \frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z},\ \frac{\partial P}{\partial z} - \frac{\partial R}{\partial x},\ \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) = (0, 0, 0)$.[24] Again, this implies the line integral $\int_C \omega$ is path-independent, equaling $f(B) - f(A)$ for endpoints $A$ and $B$.[25] To find the potential function $f$ for an exact form, integrate one coefficient while treating the other variables as constants, then determine the remaining arbitrary functions using the other coefficients. In two dimensions, integrate $P$ with respect to $x$ to obtain $f(x,y) = \int P\, dx + h(y)$, differentiate with respect to $y$, and set the result equal to $Q$ to solve for $h'(y)$.[25] In three dimensions, similarly integrate $P$ with respect to $x$ to get $f(x,y,z) = \int P\, dx + g(y,z)$, then use $Q$ to find $\frac{\partial g}{\partial y}$ and $R$ to find $\frac{\partial g}{\partial z}$.[25] Verify by checking $\nabla f = (P, Q, R)$. A classic example is the gravitational field near Earth's surface, approximated as $\mathbf{F} = -g\hat{z}$, which is conservative since $\nabla \times \mathbf{F} = \mathbf{0}$, with potential energy per unit mass $U = gz + C$ (so $\mathbf{F} = -\nabla U$) and path-independent work done by the field.[25] In simply connected domains, such as $\mathbb{R}^2$, $\mathbb{R}^3$, or $\mathbb{R}^3$ minus isolated points, the Poincaré lemma guarantees that every closed 1-form is exact, ensuring the existence of a potential function under these topological conditions.[27]
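The integrate-and-match procedure above can be sketched symbolically. A minimal example with sympy (an assumed dependency), applied for illustration to the exact form $\omega = d(xyz + z^2)$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Coefficients of omega = P dx + Q dy + R dz; here omega = d(x*y*z + z**2)
P, Q, R = y * z, x * z, x * y + 2 * z

# Step 1: integrate P with respect to x, leaving an unknown g(y, z)
f = sp.integrate(P, x)                        # x*y*z (so far)

# Step 2: match Q = df/dy to pin down dg/dy (here already satisfied, g_y = 0)
assert sp.simplify(sp.diff(f, y) - Q) == 0

# Step 3: match R = df/dz to find dg/dz, then integrate it in z
g_z = sp.simplify(R - sp.diff(f, z))          # 2*z
f = f + sp.integrate(g_z, z)                  # x*y*z + z**2

# Verify grad f == (P, Q, R)
assert all(sp.simplify(sp.diff(f, v) - c) == 0
           for v, c in [(x, P), (y, Q), (z, R)])
```

The recovered potential is $f = xyz + z^2$, unique up to an additive constant.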

Differential Relations

Reciprocity Relation

In the context of exact differentials, the reciprocity relation arises as a direct consequence of the exactness condition for a differential form. Consider a function $Z(x, y)$ whose total differential is $dZ = M(x, y)\, dx + N(x, y)\, dy$, where the differential is exact. This exactness requires that the mixed second partial derivatives of $Z$ are equal, specifically $\frac{\partial^2 Z}{\partial x\,\partial y} = \frac{\partial^2 Z}{\partial y\,\partial x}$. Since $\frac{\partial M}{\partial y} = \frac{\partial^2 Z}{\partial y\,\partial x}$ and $\frac{\partial N}{\partial x} = \frac{\partial^2 Z}{\partial x\,\partial y}$, the reciprocity relation follows: $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$.[28][29] This relation can be proven using the equality of mixed partial derivatives, a fundamental theorem in multivariable calculus stating that if the second partial derivatives are continuous, then $\frac{\partial^2 Z}{\partial x\,\partial y} = \frac{\partial^2 Z}{\partial y\,\partial x}$. For the exact differential $dZ$, differentiating $M$ with respect to $y$ yields one mixed partial, while differentiating $N$ with respect to $x$ yields the other; their equality enforces the reciprocity condition. This underscores the path independence of exact differentials, as the condition ensures $Z$ is a state function.[28] In thermodynamics, the reciprocity relation applies to state functions like the internal energy $U(S, V)$, whose differential $dU = T\, dS - P\, dV$ is exact with $M = T$ and $N = -P$. Applying the relation gives $\left( \frac{\partial T}{\partial V} \right)_S = -\left( \frac{\partial P}{\partial S} \right)_V$. This thermodynamic reciprocity links measurable quantities such as temperature and pressure to entropy changes.[29] The reciprocity relation serves as the foundation for deriving Maxwell's relations, which are additional equalities among thermodynamic partial derivatives obtained by applying the reciprocity condition to various thermodynamic potentials. For instance, from the differential of the Gibbs free energy $dG = -S\, dT + V\, dP$, reciprocity yields $\left( \frac{\partial S}{\partial P} \right)_T = -\left( \frac{\partial V}{\partial T} \right)_P$, facilitating experimental determination of thermodynamic properties.[29]
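That the Maxwell relation from $dG$ is nothing but equality of mixed partials can be shown symbolically for an arbitrary smooth potential. A sketch with sympy (an assumed dependency), treating $G(T, P)$ as an unspecified function:

```python
import sympy as sp

T, P = sp.symbols('T P')
G = sp.Function('G')(T, P)   # an arbitrary smooth thermodynamic potential G(T, P)

S = -sp.diff(G, T)           # entropy:  S = -(dG/dT)_P
V = sp.diff(G, P)            # volume:   V =  (dG/dP)_T

# Maxwell relation (dS/dP)_T = -(dV/dT)_P reduces to equality of mixed partials
maxwell_residual = sp.diff(S, P) + sp.diff(V, T)
assert sp.simplify(maxwell_residual) == 0
```

Since sympy treats mixed partial derivatives of a smooth function as commuting, the residual vanishes identically, mirroring the Clairaut-based argument in the text.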

Cyclic Relation

In multivariable calculus, the cyclic relation, also known as the triple product rule or Euler's chain rule, emerges as a consequence of the exactness of a total differential for interdependent variables $x$, $y$, and $z$, where $z = z(x, y)$. The relation states that

$\left( \frac{\partial x}{\partial y} \right)_z \left( \frac{\partial y}{\partial z} \right)_x \left( \frac{\partial z}{\partial x} \right)_y = -1.$

This identity holds because the partial derivatives must satisfy consistency conditions for the differentials to be path-independent.[30] The derivation follows directly from the total differential $dz = \left( \frac{\partial z}{\partial x} \right)_y dx + \left( \frac{\partial z}{\partial y} \right)_x dy$, which is exact if the mixed partial derivatives are equal, i.e., $\frac{\partial^2 z}{\partial y\,\partial x} = \frac{\partial^2 z}{\partial x\,\partial y}$. To obtain the cyclic form, express $y$ as a function of $x$ and $z$, yielding $dy = \left( \frac{\partial y}{\partial x} \right)_z dx + \left( \frac{\partial y}{\partial z} \right)_x dz$. Substituting the expression for $dz$ and collecting terms leads to coefficients that must vanish for arbitrary $dx$ and $dy$, resulting in the product equaling $-1$. This ensures the differential form is integrable and the variables are related through a state function.[30] In thermodynamics, the cyclic relation applies to state functions derived from exact differentials like the internal energy differential $dU = T\, dS - P\, dV$, where $U = U(S, V)$, $T = \left( \frac{\partial U}{\partial S} \right)_V$, and $P = -\left( \frac{\partial U}{\partial V} \right)_S$. For instance, considering the interdependent variables $P$, $V$, and $T$, the relation takes the form

$\left( \frac{\partial P}{\partial T} \right)_V \left( \frac{\partial T}{\partial V} \right)_P \left( \frac{\partial V}{\partial P} \right)_T = -1,$

which links measurable properties like the thermal expansion coefficient $\alpha = \frac{1}{V} \left( \frac{\partial V}{\partial T} \right)_P$ and the isothermal compressibility $\beta = -\frac{1}{V} \left( \frac{\partial V}{\partial P} \right)_T$, yielding $\left( \frac{\partial P}{\partial T} \right)_V = \frac{\alpha}{\beta}$. This form is readily verified for ideal gases using the equation of state $PV = nRT$.[31][32] A common misconception is applying the cyclic relation to non-state variables, such as heat $Q$ or work $W$, whose differentials $\delta Q$ and $\delta W$ are inexact and path-dependent; the relation holds only for state functions, where the total differential is exact, ensuring the partial derivatives reflect intrinsic dependencies rather than process-specific paths.[33] The cyclic relation is distinct from but complementary to the reciprocity relation, which involves pairwise symmetry in second derivatives.[30]

Derived Equations and Identities

In Two Dimensions

In two dimensions, an exact differential takes the form $\omega = M(x,y)\, dx + N(x,y)\, dy$, where there exists a scalar potential function $f(x,y)$ such that $\omega = df$.[3] The total differential of $f$ is given by

$df = \frac{\partial f}{\partial x}\, dx + \frac{\partial f}{\partial y}\, dy,$

so $M = \partial f / \partial x$ and $N = \partial f / \partial y$. Differentiating $M$ yields

$dM = \frac{\partial M}{\partial x}\, dx + \frac{\partial M}{\partial y}\, dy,$

and since $M = \partial f / \partial x$, the equality of mixed partial derivatives implies $\partial M / \partial y = \partial^2 f / \partial y\,\partial x = \partial N / \partial x$. This leads to the key integrability condition for exactness: $\partial M / \partial y = \partial N / \partial x$.[3][24] If the differential is inexact, an integrating factor $\mu(x,y)$ may exist such that $\mu\omega$ becomes exact, satisfying $\partial(\mu M)/\partial y = \partial(\mu N)/\partial x$, though the focus here remains on exact cases.[34] A prominent application arises in thermodynamics, where the enthalpy differential $dH = T\, dS + V\, dP$ is exact, with $T = (\partial H / \partial S)_P$ and $V = (\partial H / \partial P)_S$. The exactness condition then yields the Maxwell relation

$\left( \frac{\partial T}{\partial P} \right)_S = \left( \frac{\partial V}{\partial S} \right)_P.$[35][36]

The reciprocity relation in two dimensions, stemming from the equality of mixed partials, ensures that the Jacobian matrix of the map defined by the partial derivatives is symmetric. For $df = M\, dx + N\, dy$, the Jacobian

$J = \begin{pmatrix} \frac{\partial M}{\partial x} & \frac{\partial M}{\partial y} \\ \frac{\partial N}{\partial x} & \frac{\partial N}{\partial y} \end{pmatrix}$

satisfies $J^T = J$ because $\partial M / \partial y = \partial N / \partial x$.[24]

Generalizations to Higher Dimensions

In the context of multivariable calculus and differential geometry, the concept of an exact differential extends naturally to $n$ dimensions through the framework of differential forms. A smooth 1-form $\omega$ on an open subset of $\mathbb{R}^n$ is said to be exact if there exists a smooth function $f: \mathbb{R}^n \to \mathbb{R}$ such that $\omega = df$; it is closed if its exterior derivative vanishes, i.e., $d\omega = 0$. On contractible domains, such as star-shaped open sets in $\mathbb{R}^n$, the Poincaré lemma guarantees that every closed form is exact, ensuring the existence of a potential function whose gradient yields the form.[37] This generalizes the two-dimensional condition where exactness implies path-independent line integrals, now holding for higher-dimensional integrals over simply connected regions.[38] From such exact differentials arise higher-order relations analogous to Maxwell's relations in thermodynamics, derived from the equality of mixed partial derivatives of the potential function. For a thermodynamic potential $\Phi$ depending on $n$ independent variables $x_1, \dots, x_n$, the exactness of $d\Phi = \sum_{i=1}^n y_i\, dx_i$ (where the $y_i$ are conjugate variables) implies that the Hessian matrix of second partial derivatives is symmetric, yielding $\frac{\partial^2 \Phi}{\partial x_i\,\partial x_j} = \frac{\partial^2 \Phi}{\partial x_j\,\partial x_i}$ for all $i, j$. This produces $\binom{n}{2}$ independent Maxwell-like relations connecting cross derivatives, such as $\frac{\partial y_i}{\partial x_j} = \frac{\partial y_j}{\partial x_i}$, enforcing the symmetry of the Hessian and reducing the number of independent second derivatives from $n^2$ to $\frac{n(n+1)}{2}$. These identities facilitate relating measurable quantities like pressure and entropy across multiple state variables.[39] A concrete example occurs in three-dimensional thermodynamics with the Helmholtz free energy $F(T, V, N)$, where $dF = -S\, dT - P\, dV + \mu\, dN$ and $S$, $P$, $\mu$ are entropy, pressure, and chemical potential, respectively. Exactness leads to cross-partial equalities, including $\left( \frac{\partial S}{\partial V} \right)_{T,N} = \left( \frac{\partial P}{\partial T} \right)_{V,N}$, $\left( \frac{\partial S}{\partial N} \right)_{T,V} = -\left( \frac{\partial \mu}{\partial T} \right)_{V,N}$, and $\left( \frac{\partial P}{\partial N} \right)_{T,V} = -\left( \frac{\partial \mu}{\partial V} \right)_{T,N}$, enabling derivations of phase behavior and response functions in multi-component systems.[35] The condition for closedness in $n$ dimensions corresponds to the vanishing of the $n$-dimensional analogue of the curl for the associated vector field. For a 1-form $\omega = \sum_{i=1}^n F_i\, dx_i$, $d\omega = 0$ implies that the 2-form components satisfy $\sum_{i<j} \left( \frac{\partial F_j}{\partial x_i} - \frac{\partial F_i}{\partial x_j} \right) dx_i \wedge dx_j = 0$; in vector terms, the antisymmetric part of the Jacobian vanishes pairwise. However, on non-contractible manifolds, closed forms need not be exact; de Rham cohomology quantifies this obstruction, with cohomology groups $H^k(M)$ measuring the dimension of closed $k$-forms modulo exact ones, as seen in examples like the punctured plane, where the angular form $d\theta$ is closed but not exact.[37][38]
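The punctured-plane example can be checked numerically: integrating $d\theta = \frac{-y\,dx + x\,dy}{x^2 + y^2}$ once around the unit circle gives $2\pi$ rather than $0$, confirming the form is closed but not exact there. A midpoint-rule sketch (the helper name `loop_integral` is illustrative):

```python
import math

def loop_integral(n=20000):
    """Integrate dtheta = (-y dx + x dy) / (x**2 + y**2) once around the
    unit circle, approximated with the midpoint rule."""
    total = 0.0
    for i in range(n):
        t0 = 2 * math.pi * i / n
        t1 = 2 * math.pi * (i + 1) / n
        tm = 0.5 * (t0 + t1)                      # midpoint parameter
        x, y = math.cos(tm), math.sin(tm)
        dx = math.cos(t1) - math.cos(t0)          # coordinate increments
        dy = math.sin(t1) - math.sin(t0)
        total += (-y * dx + x * dy) / (x * x + y * y)
    return total

print(loop_integral())   # ≈ 6.2831853 = 2*pi, nonzero, so dtheta is not exact
```

A nonzero integral over a closed loop is exactly the obstruction that the first de Rham cohomology group of the punctured plane detects.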

