Derivative
from Wikipedia

The graph of a function, drawn in black, and a tangent line to that graph, drawn in red. The slope of the tangent line is equal to the derivative of the function at the marked point.
The derivative at different points of a differentiable function. In this case, the derivative is equal to .

In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. The derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable.[1] The process of finding a derivative is called differentiation.

There are multiple different notations for differentiation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. Higher order derivatives are used in physics; for example, the first derivative with respect to time of the position of a moving object is its velocity, and the second derivative is its acceleration.

Derivatives can be generalized to functions of several real variables. In this case, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector.

Definition


As a limit


A function $f$ of a real variable is differentiable at a point $a$ of its domain, if its domain contains an open interval containing $a$, and the limit
$$L = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$$
exists.[2] This means that, for every positive real number $\varepsilon$, there exists a positive real number $\delta$ such that, for every $h$ such that $|h| < \delta$ and $h \neq 0$, then $f(a+h)$ is defined, and
$$\left| L - \frac{f(a+h) - f(a)}{h} \right| < \varepsilon,$$
where the vertical bars denote the absolute value. This is an example of the (ε, δ)-definition of limit.[3]

If the function $f$ is differentiable at $a$, that is if the limit $L$ exists, then this limit is called the derivative of $f$ at $a$. Multiple notations for the derivative exist.[4] The derivative of $f$ at $a$ can be denoted $f'(a)$, read as "$f$ prime of $a$"; or it can be denoted $\frac{df}{dx}(a)$, read as "the derivative of $f$ with respect to $x$ at $a$" or "$df$ by (or over) $dx$ at $a$". See § Notation below. If $f$ is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point $x$ to the value of the derivative of $f$ at $x$. This function is written $f'$ and is called the derivative function or the derivative of $f$. The function $f$ sometimes has a derivative at most, but not all, points of its domain. The function whose value at $a$ equals $f'(a)$ whenever $f'(a)$ is defined and elsewhere is undefined is also called the derivative of $f$. It is still a function, but its domain may be smaller than the domain of $f$.[5]

For example, let $f$ be the squaring function: $f(x) = x^2$. Then the quotient in the definition of the derivative is[6]
$$\frac{f(a+h) - f(a)}{h} = \frac{(a+h)^2 - a^2}{h} = \frac{a^2 + 2ah + h^2 - a^2}{h} = 2a + h.$$
The division in the last step is valid as long as $h \neq 0$. The closer $h$ is to $0$, the closer this expression becomes to the value $2a$. The limit exists, and for every input $a$ the limit is $2a$. So, the derivative of the squaring function is the doubling function: $f'(x) = 2x$.

The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function $f$, specifically the points $(a, f(a))$ and $(a+h, f(a+h))$. As $h$ is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of $f$ at $a$. In other words, the derivative is the slope of the tangent.[7]
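This limiting process can be explored numerically. The following minimal Python sketch (the function, point, and step sizes are chosen purely for illustration) evaluates the difference quotient of the squaring function at $a = 3$ for shrinking $h$ and shows it approaching the doubling function's value $2a = 6$.

def difference_quotient(f, a, h):
    # (f(a + h) - f(a)) / h: slope of the secant line through (a, f(a)) and (a + h, f(a + h))
    return (f(a + h) - f(a)) / h

square = lambda x: x * x
a = 3.0
for h in [1.0, 0.1, 0.01, 0.001, 1e-6]:
    print(h, difference_quotient(square, a, h))   # values approach 2 * a = 6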

Using infinitesimals


One way to think of the derivative is as the ratio of an infinitesimal change in the output of the function to an infinitesimal change in its input.[8] In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required.[9] The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form $1 + 1 + \cdots + 1$ for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the $d$ in the Leibniz notation. Thus, the derivative of $f(x)$ becomes
$$f'(x) = \operatorname{st}\left( \frac{f(x + dx) - f(x)}{dx} \right)$$
for an arbitrary infinitesimal $dx$, where $\operatorname{st}$ denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real.[10] Taking the squaring function $f(x) = x^2$ as an example again,
$$f'(x) = \operatorname{st}\left( \frac{(x + dx)^2 - x^2}{dx} \right) = \operatorname{st}(2x + dx) = 2x.$$

Continuity and differentiability

This function does not have a derivative at the marked point, as the function is not continuous there (specifically, it has a jump discontinuity).
The absolute value function is continuous but fails to be differentiable at x = 0 since the tangent slopes do not approach the same value from the left as they do from the right.

If $f$ is differentiable at $a$, then $f$ must also be continuous at $a$.[11] As an example, choose a point $a$ and let $f$ be the step function that returns the value 1 for all $x$ less than $a$, and returns a different value 10 for all $x$ greater than or equal to $a$. The function $f$ cannot have a derivative at $a$. If $h$ is negative, then $a + h$ is on the low part of the step, so the secant line from $a$ to $a + h$ is very steep; as $h$ tends to zero, the slope tends to infinity. If $h$ is positive, then $a + h$ is on the high part of the step, so the secant line from $a$ to $a + h$ has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by $f(x) = |x|$ is continuous at $x = 0$, but it is not differentiable there. If $h$ is positive, then the slope of the secant line from 0 to $h$ is one; if $h$ is negative, then the slope of the secant line from 0 to $h$ is $-1$.[12] This can be seen graphically as a "kink" or a "cusp" in the graph at $x = 0$. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: for instance, the function given by $f(x) = x^{1/3}$ is not differentiable at $x = 0$. In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative.[13]
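The behavior at a kink or vertical tangent can also be seen numerically. In the Python sketch below (the step size and test functions are illustrative assumptions), the one-sided difference quotients of the absolute value function at 0 disagree, while those of the cube-root function blow up.

def one_sided_slopes(f, a, h=1e-6):
    # slopes of the secant lines just to the right and just to the left of a
    right = (f(a + h) - f(a)) / h
    left = (f(a) - f(a - h)) / h
    return left, right

print(one_sided_slopes(abs, 0.0))      # (-1.0, 1.0): the slopes disagree, so no derivative at 0

def cbrt(x):
    # real cube root, defined for negative inputs as well
    return abs(x) ** (1 / 3) if x >= 0 else -(abs(x) ** (1 / 3))

print(one_sided_slopes(cbrt, 0.0))     # both about 1e4: the tangent is vertical at 0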

Most functions that occur in practice have derivatives at all points or almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points.[14] Under mild conditions (for example, if the function is a monotone or a Lipschitz function), this is true. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function.[15] In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point.[16]

Notation


One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as $dy$ and $dx$.[17] It is still commonly used when the equation $y = f(x)$ is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by $\frac{dy}{dx}$, read as "the derivative of $y$ with respect to $x$".[18] This derivative can alternately be treated as the application of a differential operator to a function, $\frac{dy}{dx} = \frac{d}{dx} f(x)$. Higher derivatives are expressed using the notation $\frac{d^n y}{dx^n}$ for the $n$-th derivative of $y = f(x)$. These are abbreviations for multiple applications of the derivative operator; for example, $\frac{d^2 y}{dx^2} = \frac{d}{dx}\left( \frac{d}{dx} f(x) \right)$.[19] Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if $y = f(u)$ and $u = g(x)$ then $\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$.[20]

Another common notation for differentiation is by using the prime mark in the symbol of a function $f(x)$. This notation, due to Joseph-Louis Lagrange, is now known as prime notation.[21] The first derivative is written as $f'(x)$, read as "$f$ prime of $x$", or $y'$, read as "$y$ prime".[22] Similarly, the second and the third derivatives can be written as $f''$ and $f'''$, respectively.[23] For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as $f^{\mathrm{iv}}(x)$ or $f^{(4)}(x)$.[24] The latter notation generalizes to yield the notation $f^{(n)}(x)$ for the $n$th derivative of $f$.[19]

In Newton's notation or the dot notation, a dot is placed over a symbol to represent a time derivative. If $y$ is a function of $t$, then the first and second derivatives can be written as $\dot{y}$ and $\ddot{y}$, respectively. This notation is used exclusively for derivatives with respect to time or arc length. It is typically used in differential equations in physics and differential geometry.[25] However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables.

Another notation is D-notation, which represents the differential operator by the symbol $D$.[19] The first derivative is written $Df(x)$ and higher derivatives are written with a superscript, so the $n$-th derivative is $D^n f(x)$. This notation is sometimes called Euler notation, although it seems that Leonhard Euler did not use it, and the notation was introduced by Louis François Antoine Arbogast.[26] To indicate a partial derivative, the variable differentiated by is indicated with a subscript; for example, given the function $u = f(x, y)$, its partial derivative with respect to $x$ can be written $D_x u$ or $D_x f(x, y)$. Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g. $D_{xy} f(x, y)$ and $D_x^2 f(x, y)$.[27]

Rules of computation


In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation.[28]

Rules for basic functions


The following are the rules for the derivatives of the most common basic functions. Here, $r$ is a real number, and $e$ is the base of the natural logarithm, approximately 2.71828.[29] A numerical spot-check of several of these rules follows the list.

  • Derivatives of powers: $\frac{d}{dx} x^r = r x^{r-1}$
  • Functions of exponential, natural logarithm, and logarithm with general base:
    $\frac{d}{dx} e^x = e^x$
    $\frac{d}{dx} a^x = a^x \ln a$, for $a > 0$
    $\frac{d}{dx} \ln x = \frac{1}{x}$, for $x > 0$
    $\frac{d}{dx} \log_a x = \frac{1}{x \ln a}$, for $x, a > 0$
  • Trigonometric functions: $\frac{d}{dx} \sin x = \cos x$, $\frac{d}{dx} \cos x = -\sin x$, $\frac{d}{dx} \tan x = \sec^2 x$
  • Inverse trigonometric functions:
    $\frac{d}{dx} \arcsin x = \frac{1}{\sqrt{1 - x^2}}$, for $-1 < x < 1$
    $\frac{d}{dx} \arccos x = -\frac{1}{\sqrt{1 - x^2}}$, for $-1 < x < 1$
    $\frac{d}{dx} \arctan x = \frac{1}{1 + x^2}$
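These tabulated derivatives can be spot-checked numerically with a symmetric difference quotient; the short Python sketch below does so at an arbitrary test point (the point and step size are illustrative choices, not part of the rules themselves).

import math

def numeric_derivative(f, x, h=1e-6):
    # symmetric (central) difference quotient, a standard numerical estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
checks = [
    ("power rule, x^3", lambda t: t**3, 3 * x**2),
    ("exponential e^x", math.exp, math.exp(x)),
    ("natural log", math.log, 1 / x),
    ("sine", math.sin, math.cos(x)),
    ("arctangent", math.atan, 1 / (1 + x**2)),
]
for name, f, exact in checks:
    print(name, numeric_derivative(f, x), exact)   # the two columns agree to high precision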

Rules for combined functions


The following rules allow deducing derivatives of many functions from the derivatives of the basic functions (a sketch illustrating the product and chain rules numerically appears after the list):[30]

  • Constant rule: if $f$ is a constant function, then for all $x$, $f'(x) = 0$.
  • Sum rule:
    $(\alpha f + \beta g)' = \alpha f' + \beta g'$ for all functions $f$ and $g$ and all real numbers $\alpha$ and $\beta$.
  • Product rule:
    $(fg)' = f'g + fg'$ for all functions $f$ and $g$. As a special case, this rule includes the fact $(\alpha f)' = \alpha f'$ whenever $\alpha$ is a constant, because $\alpha' f = 0 \cdot f = 0$ by the constant rule.
  • Quotient rule:
    $\left( \frac{f}{g} \right)' = \frac{f'g - fg'}{g^2}$ for all functions $f$ and $g$ at all inputs where $g \neq 0$.
  • Chain rule for composite functions: If $f(x) = h(g(x))$, then $f'(x) = h'(g(x)) \cdot g'(x)$.
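The sum, product, and chain rules are exactly what makes forward-mode automatic differentiation work. The following Python sketch of dual numbers is a minimal illustration (the class name, the seeded value, and the example function are assumptions made for the demonstration): each arithmetic operation propagates derivative information according to the corresponding rule.

import math

class Dual:
    """Number of the form a + b*eps with eps**2 = 0; the b part carries the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):                  # sum rule: (f + g)' = f' + g'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):                  # product rule: (f g)' = f' g + f g'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a * other.a, self.b * other.a + self.a * other.b)
    __rmul__ = __mul__

def sin(d):                                    # chain rule: (sin g)' = cos(g) * g'
    return Dual(math.sin(d.a), math.cos(d.a) * d.b)

x = Dual(1.0, 1.0)                             # seed x with derivative 1 at the point x = 1
y = x * x * sin(x)                             # f(x) = x**2 * sin(x)
print(y.b)                                     # ≈ 2.2232
print(2 * 1.0 * math.sin(1.0) + 1.0**2 * math.cos(1.0))   # product rule by hand, same value

Seeding $x$ with derivative 1 makes the second component of the result equal to the derivative of the whole expression, matching the value obtained by applying the product and chain rules by hand.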

Computation example


The derivative of the function given by $f(x) = x^4 + \sin(x^2) - \ln(x)\, e^x + 7$ is
$$f'(x) = 4x^3 + 2x \cos(x^2) - \frac{1}{x} e^x - \ln(x)\, e^x.$$
Here the second term was computed using the chain rule and the third term using the product rule. The known derivatives of the elementary functions $x^2$, $x^4$, $\sin(x)$, $\ln(x)$, and $e^x$, as well as the constant $7$, were also used.

Antidifferentiation


An antiderivative of a function $f$ is a function whose derivative is $f$. Antiderivatives are not unique: if $F$ is an antiderivative of $f$, then so is $F + c$, where $c$ is any constant, because the derivative of a constant is zero.[31] The fundamental theorem of calculus shows that finding an antiderivative of a function gives a way to compute the areas of shapes bounded by that function. More precisely, the integral of a function over a closed interval is equal to the difference between the values of an antiderivative evaluated at the endpoints of that interval.[32]

Higher-order derivatives


Higher order derivatives are the result of differentiating a function repeatedly. Given that $f$ is a differentiable function, the derivative of $f$ is the first derivative, denoted as $f'$. The derivative of $f'$ is the second derivative, denoted as $f''$, and the derivative of $f''$ is the third derivative, denoted as $f'''$. By continuing this process, if it exists, the $n$th derivative is the derivative of the $(n-1)$th derivative or the derivative of order $n$. As has been discussed above, the generalization of derivative of a function $f$ may be denoted as $f^{(n)}$.[33] A function that has $k$ successive derivatives is called $k$ times differentiable. If the $k$-th derivative is continuous, then the function is said to be of differentiability class $C^k$.[34] A function that has infinitely many derivatives is called infinitely differentiable or smooth.[35] Any polynomial function is infinitely differentiable; taking derivatives repeatedly will eventually result in a constant function, and all subsequent derivatives of that function are zero.[36]
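The claim that repeated differentiation of a polynomial eventually yields zero can be made concrete with a small Python sketch (the coefficient-list representation and the sample polynomial $x^4 + 7$ are assumptions for the illustration).

def poly_derivative(coeffs):
    # coeffs[k] is the coefficient of x**k; returns the coefficients of the derivative
    return [k * c for k, c in enumerate(coeffs)][1:]

p = [7.0, 0.0, 0.0, 0.0, 1.0]        # p(x) = x**4 + 7
order = 0
while p:                              # differentiate repeatedly until nothing is left
    p = poly_derivative(p)
    order += 1
    print("derivative of order", order, "has coefficients", p)
# order 4 gives the constant [24.0]; order 5 gives the empty list, i.e. the zero polynomial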

One application of higher-order derivatives is in physics. Suppose that a function $s(t)$ represents the position of an object at time $t$. The first derivative of that function is the velocity of the object with respect to time, the second derivative of the function is the acceleration of the object with respect to time,[28] and the third derivative is the jerk.[37]

In other dimensions


Vector-valued functions


A vector-valued function $\mathbf{y}$ of a real variable sends real numbers to vectors in some vector space $\mathbb{R}^n$. A vector-valued function can be split up into its coordinate functions $y_1(t), y_2(t), \dots, y_n(t)$, meaning that $\mathbf{y}(t) = (y_1(t), y_2(t), \dots, y_n(t))$. This includes, for example, parametric curves in $\mathbb{R}^2$ or $\mathbb{R}^3$. The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of $\mathbf{y}(t)$ is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is,[38]
$$\mathbf{y}'(t) = \lim_{h \to 0} \frac{\mathbf{y}(t+h) - \mathbf{y}(t)}{h},$$
if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of $\mathbf{y}$ exists for every value of $t$, then $\mathbf{y}'$ is another vector-valued function.[38]

Partial derivatives


Functions can depend upon more than one variable. A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function $f(x, y, \dots)$ with respect to the variable $x$ is variously denoted by

$f_x$, $f'_x$, $\partial_x f$, $\frac{\partial}{\partial x} f$, or $\frac{\partial f}{\partial x}$,

among other possibilities.[39] It can be thought of as the rate of change of the function in the $x$-direction.[40] Here $\partial$ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee".[41] For example, let $f(x, y) = x^2 + xy + y^2$; then the partial derivatives of $f$ with respect to the variables $x$ and $y$ are, respectively,
$$\frac{\partial f}{\partial x} = 2x + y, \qquad \frac{\partial f}{\partial y} = x + 2y.$$
In general, the partial derivative of a function $f(x_1, \dots, x_n)$ in the direction $x_i$ at the point $(a_1, \dots, a_n)$ is defined to be:[42]
$$\frac{\partial f}{\partial x_i}(a_1, \dots, a_n) = \lim_{h \to 0} \frac{f(a_1, \dots, a_i + h, \dots, a_n) - f(a_1, \dots, a_n)}{h}.$$

This is fundamental for the study of the functions of several real variables. Let $f(x_1, \dots, x_n)$ be such a real-valued function. If all partial derivatives $\partial f / \partial x_j$ of $f$ are defined at the point $(a_1, \dots, a_n)$, these partial derivatives define the vector
$$\nabla f(a_1, \dots, a_n) = \left( \frac{\partial f}{\partial x_1}(a_1, \dots, a_n), \dots, \frac{\partial f}{\partial x_n}(a_1, \dots, a_n) \right),$$
which is called the gradient of $f$ at $a$. If $f$ is differentiable at every point in some domain, then the gradient is a vector-valued function $\nabla f$ that maps the point $(a_1, \dots, a_n)$ to the vector $\nabla f(a_1, \dots, a_n)$. Consequently, the gradient determines a vector field.[43]
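Partial derivatives and the gradient can be estimated numerically by perturbing one coordinate at a time. The Python sketch below (the step size and example function are illustrative) recovers the gradient of $f(x, y) = x^2 + xy + y^2$ computed above.

def partial(f, point, i, h=1e-6):
    # central difference in the i-th coordinate, all other variables held fixed
    up = list(point); up[i] += h
    dn = list(point); dn[i] -= h
    return (f(*up) - f(*dn)) / (2 * h)

def gradient(f, point):
    return [partial(f, point, i) for i in range(len(point))]

f = lambda x, y: x**2 + x*y + y**2
print(gradient(f, (1.0, 2.0)))    # ≈ [4.0, 5.0], matching (2x + y, x + 2y) at (1, 2)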

Directional derivatives


If $f$ is a real-valued function on $\mathbb{R}^n$, then the partial derivatives of $f$ measure its variation in the direction of the coordinate axes. For example, if $f$ is a function of $x$ and $y$, then its partial derivatives measure the variation in $f$ in the $x$ and $y$ direction. However, they do not directly measure the variation of $f$ in any other direction, such as along the diagonal line $y = x$. These are measured using directional derivatives. Given a vector $\mathbf{v} = (v_1, \dots, v_n)$, the directional derivative of $f$ in the direction of $\mathbf{v}$ at the point $\mathbf{x}$ is:[44]
$$D_{\mathbf{v}} f(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h}.$$

If all the partial derivatives of $f$ exist and are continuous at $\mathbf{x}$, then they determine the directional derivative of $f$ in the direction $\mathbf{v}$ by the formula:[45]
$$D_{\mathbf{v}} f(\mathbf{x}) = \sum_{j=1}^{n} v_j \frac{\partial f}{\partial x_j}(\mathbf{x}).$$

Total derivative and Jacobian matrix


When $f$ is a function from an open subset of $\mathbb{R}^n$ to $\mathbb{R}^m$, the directional derivative of $f$ in a chosen direction is the best linear approximation to $f$ at that point and in that direction. However, when $n > 1$, no single directional derivative can give a complete picture of the behavior of $f$. The total derivative gives a complete picture by considering all directions at once. That is, for any vector $\mathbf{v}$ starting at $\mathbf{a}$, the linear approximation formula holds:[46]
$$f(\mathbf{a} + \mathbf{v}) \approx f(\mathbf{a}) + f'(\mathbf{a})\, \mathbf{v}.$$
As with the single-variable derivative, $f'(\mathbf{a})$ is chosen so that the error in this approximation is as small as possible. The total derivative of $f$ at $\mathbf{a}$ is the unique linear transformation $f'(\mathbf{a}) \colon \mathbb{R}^n \to \mathbb{R}^m$ such that[46]
$$\lim_{\mathbf{h} \to 0} \frac{\lVert f(\mathbf{a} + \mathbf{h}) - (f(\mathbf{a}) + f'(\mathbf{a})\, \mathbf{h}) \rVert}{\lVert \mathbf{h} \rVert} = 0.$$
Here $\mathbf{h}$ is a vector in $\mathbb{R}^n$, so the norm in the denominator is the standard length on $\mathbb{R}^n$. However, $f'(\mathbf{a})\, \mathbf{h}$ is a vector in $\mathbb{R}^m$, and the norm in the numerator is the standard length on $\mathbb{R}^m$.[46] If $\mathbf{v}$ is a vector starting at $\mathbf{a}$, then $f'(\mathbf{a})\, \mathbf{v}$ is called the pushforward of $\mathbf{v}$ by $f$.[47]

If the total derivative exists at $\mathbf{a}$, then all the partial derivatives and directional derivatives of $f$ exist at $\mathbf{a}$, and for all $\mathbf{v}$, $f'(\mathbf{a})\, \mathbf{v}$ is the directional derivative of $f$ in the direction $\mathbf{v}$. If $f$ is written using coordinate functions, so that $f = (f_1, f_2, \dots, f_m)$, then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of $f$ at $\mathbf{a}$:[48]
$$f'(\mathbf{a}) = \operatorname{Jac}_{\mathbf{a}} = \left( \frac{\partial f_i}{\partial x_j} \right)_{ij}.$$
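The Jacobian matrix can likewise be approximated column by column with central differences. The following Python sketch (the polar-to-Cartesian map is an assumed example) builds the matrix of estimated partial derivatives $\partial f_i / \partial x_j$.

import math

def jacobian(f, point, h=1e-6):
    # matrix of partial derivatives d f_i / d x_j estimated by central differences
    m = len(f(*point))
    n = len(point)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        up = list(point); up[j] += h
        dn = list(point); dn[j] -= h
        fu, fd = f(*up), f(*dn)
        for i in range(m):
            J[i][j] = (fu[i] - fd[i]) / (2 * h)
    return J

# polar-to-Cartesian map (r, t) -> (r cos t, r sin t); the exact Jacobian is
# [[cos t, -r sin t], [sin t, r cos t]]
polar = lambda r, t: (r * math.cos(t), r * math.sin(t))
for row in jacobian(polar, (2.0, math.pi / 6)):
    print(row)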

Generalizations


The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point.

  • An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers $\mathbb{C}$ to $\mathbb{C}$. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition.[49] If $\mathbb{C}$ is identified with $\mathbb{R}^2$ by writing a complex number $z$ as $x + iy$, then a differentiable function from $\mathbb{C}$ to $\mathbb{C}$ is certainly differentiable as a function from $\mathbb{R}^2$ to $\mathbb{R}^2$ (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear, and this imposes relations between the partial derivatives called the Cauchy–Riemann equations – see holomorphic functions.[50]
  • Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking, such a manifold $M$ is a space that can be approximated near each point $x$ by a vector space called its tangent space: the prototypical example is a smooth surface in $\mathbb{R}^3$. The derivative (or differential) of a (differentiable) map $f \colon M \to N$ between manifolds, at a point $x$ in $M$, is then a linear map from the tangent space of $M$ at $x$ to the tangent space of $N$ at $f(x)$. The derivative function becomes a map between the tangent bundles of $M$ and $N$. This definition is used in differential geometry.[51]
  • Differentiation can also be defined for maps between vector spaces, such as Banach spaces, in which case the generalizations are the Gateaux derivative and the Fréchet derivative.[52]
  • One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average".[53]
  • Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is differential algebra. Here, it consists of the derivation of some topics in abstract algebra, such as rings, ideals, fields, and so on.[54]
  • The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus.[55]
  • The arithmetic derivative is a function defined for the integers in terms of their prime factorization, by analogy with the product rule.[56]

from Grokipedia
In mathematics, particularly in calculus, the derivative of a function measures the instantaneous rate of change of the function with respect to one of its variables, equivalent to the slope of the tangent line to the function's graph at a given point. Formally, for a function $f$, the derivative $f'(x)$ at a point $x$ is defined as the limit
$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h},$$
provided this limit exists, representing the sensitivity of the function's output to changes in its input. This concept underpins much of differential calculus and extends to higher-order derivatives, such as the second derivative $f''(x)$, which describes concavity or acceleration-like behavior.
The development of the derivative traces back to the late 17th century, when Isaac Newton and Gottfried Wilhelm Leibniz independently formulated the foundations of calculus, introducing notation like $\frac{dy}{dx}$ for the derivative and applying it to problems in physics and geometry. Earlier precursors, including work on tangents and the Persian scholar Sharaf al-Dīn al-Ṭūsī's study of cubic polynomials in the 12th century, anticipated aspects of instantaneous rates of change, but Newton and Leibniz's general framework revolutionized mathematics. Augustin-Louis Cauchy later provided a more precise limit-based definition in the 19th century, solidifying the derivative's role in analysis.

Derivatives have broad applications across physics, engineering, and economics, enabling the modeling of dynamic systems and optimization. In physics, the first derivative of position with respect to time yields velocity, while the second derivative gives acceleration, fundamental to kinematics and Newtonian mechanics. In economics, derivatives quantify marginal quantities, such as the rate of change in cost or revenue functions, aiding production and pricing decisions. They also support techniques like related rates for solving real-world problems involving varying quantities and Newton's method for numerical root-finding of equations.

Definition

As a limit

In calculus, the derivative of a function $f$ at a point $a$ in its domain is formally defined as the limit
$$f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h},$$
provided this limit exists. This definition applies to functions $f$ defined on an open interval containing $a$, where the limit represents the instantaneous rate of change of $f$ at $a$. The expression $\frac{f(a + h) - f(a)}{h}$ is known as the difference quotient, which measures the average rate of change of $f$ over the small interval $[a, a + h]$.

Geometrically, this quotient equals the slope of the secant line connecting the points $(a, f(a))$ and $(a + h, f(a + h))$ on the graph of $f$. As $h$ approaches 0, the secant line approaches the tangent line to the curve at $(a, f(a))$, so the derivative $f'(a)$ gives the slope of this tangent line.

For the limit to exist at an interior point $a$, the two one-sided limits must agree: the right-hand derivative $\lim_{h \to 0^+} \frac{f(a + h) - f(a)}{h}$ and the left-hand derivative $\lim_{h \to 0^-} \frac{f(a + h) - f(a)}{h}$ must both exist and be equal. At an endpoint of the domain, such as the left endpoint of a closed interval, differentiability is defined using the appropriate one-sided limit, typically the right-hand derivative.

Consider the example of $f(x) = x^2$ at $a = 1$. The derivative is
$$f'(1) = \lim_{h \to 0} \frac{(1 + h)^2 - 1^2}{h}.$$
First, expand the numerator: $(1 + h)^2 - 1 = 1 + 2h + h^2 - 1 = 2h + h^2$. Then,
$$f'(1) = \lim_{h \to 0} \frac{2h + h^2}{h} = \lim_{h \to 0} \frac{h(2 + h)}{h} = \lim_{h \to 0} (2 + h) = 2,$$
since $h \neq 0$ in the intermediate step, and the limit exists.

Using infinitesimals

Gottfried Wilhelm Leibniz introduced the concept of infinitesimals in the late 17th century as a foundational tool for his development of calculus, viewing them as quantities smaller than any finite number but nonzero, which allowed for the representation of instantaneous rates of change in continuous motion. In his 1684 paper "Nova Methodus pro Maximis et Minimis," Leibniz employed infinitesimals to derive tangents and areas, treating differentials like $dx$ as infinitesimal increments that model the momentary variation of functions. Although criticized for lacking rigor, this approach provided an intuitive framework that influenced early practitioners until the 19th-century shift to limit-based definitions resolved foundational issues.

An intuitive definition of the derivative using infinitesimals expresses it as the ratio of infinitesimal changes: for a function $f$, the derivative at $x$ is $f'(x) = \frac{f(x + dx) - f(x)}{dx}$, where $dx$ is an infinitesimal quantity, treated as nonzero in computations. This heuristic avoids explicit limits by directly manipulating small increments, aligning with Leibniz's original vision of calculus as a method for handling continuous change through such "fictions."

The modern rigorous revival of infinitesimals occurred in the 1960s through Abraham Robinson's non-standard analysis, which constructs the hyperreal numbers ${}^*\mathbb{R}$ as an extension of the reals incorporating genuine infinitesimals and infinite numbers via ultrapowers of real sequences. The hyperreals form a totally ordered field in which the finite elements are those bounded by standard reals, and infinitesimals are nonzero hyperreals smaller in absolute value than any positive real. Central to this framework is the transfer principle, which states that a first-order sentence holds in the reals if and only if its non-standard counterpart holds in the hyperreals, enabling the rigorous importation of standard theorems into the extended system. This approach offers advantages in intuition by permitting direct computations with infinitesimals, bypassing the complexities of epsilon-delta limits while preserving logical equivalence to standard analysis, thus aiding pedagogical clarity and simplifying proofs of calculus rules.

For example, the derivative of $\sin(x)$ at a real number $x$ can be found by considering an infinitesimal $\epsilon \in {}^*\mathbb{R}$ with $\epsilon \approx 0$ but $\epsilon \neq 0$:
$$\frac{\sin(x + \epsilon) - \sin(x)}{\epsilon} = \frac{\sin(x)\cos(\epsilon) + \cos(x)\sin(\epsilon) - \sin(x)}{\epsilon} = \sin(x) \cdot \frac{\cos(\epsilon) - 1}{\epsilon} + \cos(x) \cdot \frac{\sin(\epsilon)}{\epsilon}.$$
By the transfer principle, $\frac{\cos(\epsilon) - 1}{\epsilon}$ is infinitesimal and $\frac{\sin(\epsilon)}{\epsilon} \approx 1$, so the first term is infinitely close to zero and the second to $\cos(x)$, yielding $\sin'(x) = \cos(x)$.

Notation and Representation

Standard notations

In mathematical analysis, the derivative of a function is expressed using several standard notations, each suited to different contexts in calculus and its applications. The two primary notations are the Lagrange notation and the Leibniz notation, which are widely adopted in textbooks and research for denoting the rate of change of a function.

The Lagrange notation, introduced by Joseph-Louis Lagrange, denotes the first derivative of a function $f$ at a point $x$ as $f'(x)$, where the prime symbol indicates differentiation with respect to the independent variable. For higher-order derivatives, this extends to multiple primes, such as $f''(x)$ for the second derivative, or more generally $f^{(n)}(x)$ for the $n$th derivative, providing a compact way to represent successive differentiations. This notation is particularly convenient for single-variable functions, as it treats the derivative as an operation on the function itself without explicitly referencing the variable of differentiation.

In contrast, the Leibniz notation, developed by Gottfried Wilhelm Leibniz, expresses the derivative of $y = f(x)$ as $\frac{dy}{dx}$ or $\frac{df}{dx}$, emphasizing the ratio of infinitesimal changes in the dependent and independent variables. For higher orders, it uses $\frac{d^n y}{dx^n}$ or $\frac{d^n f}{dx^n}$. This form is especially useful in contexts like related rates problems, where rates of change with respect to different variables (such as time) must be related, as it naturally accommodates implicit differentiation and applications.

For derivatives with respect to time, particularly in physics and engineering, Newton's dot notation is standard, denoting $\dot{f}(t) = \frac{df}{dt}$ for the first derivative and $\ddot{f}(t)$ for the second, among higher orders. In multivariable calculus, partial derivatives are conventionally written as $\frac{\partial f}{\partial x}$, indicating differentiation with respect to one variable while holding others constant. The choice of notation often depends on the problem: Lagrange notation excels for abstract function analysis in single-variable calculus, while Leibniz notation facilitates problems involving interrelated variables, such as in differential equations or optimization.

Historical notations

The development of notation for the derivative began in the late 17th century with Isaac Newton's introduction of fluxion notation, where he used a dot over the variable, such as $\dot{x}$, to denote the rate of change or "fluxion" of a quantity $x$. Newton conceived this notation around 1666 during his early work on what he called the "method of fluxions," though it was not published until 1693 in his work on quadratures and later fully in 1736. This notation emphasized the temporal or geometric flow of quantities, aligning with Newton's physical and geometric perspective on calculus, but it gradually fell out of favor in favor of more algebraic and analytic approaches.

Independently, Gottfried Wilhelm Leibniz developed a differential notation in 1675, using symbols like $\frac{dy}{dx}$ or $d/dx$ to represent the derivative as a ratio of infinitesimals, which profoundly influenced the analytic framework of calculus. Although conceived in a 1675 manuscript, this notation first appeared in print in Leibniz's 1684 "Nova methodus pro maximis et minimis" in Acta Eruditorum, where the lowercase $d$ signified an infinitesimal difference. Leibniz's system, with its operator-like $d/dx$, facilitated manipulations such as the chain rule and became the dominant notation due to its clarity in expressing differentials and its adaptability to integration and series expansions.

In the 18th century, Leonhard Euler employed variations including an increment-based approach with small quantities in expressions for differences, building on Newtonian and Leibnizian ideas but integrated into his analytic works, alongside the prime symbol $f'(x)$ and the operator $Df(x)$ for the derivative. These notations appeared in Euler's texts like his 1755 Institutiones calculi differentialis and highlighted infinitesimal methods, though they were eventually refined for greater conciseness in higher-order derivatives compared to emerging functional notations.

A significant shift occurred with Joseph-Louis Lagrange's introduction of the prime notation $f'$ in his 1797 treatise Théorie des fonctions analytiques, where he treated the derivative as a "derived function" to emphasize the algebraic treatment of functions without relying on infinitesimals or limits. This notation, which used successive primes for higher derivatives like $f''$, gained adoption in analysis for its simplicity and direct association with functions, influencing modern textbooks and theoretical work. These historical notations evolved into the standard modern forms like Leibniz's $\frac{dy}{dx}$ and Lagrange's $f'$, which remain prevalent today.

Differentiability

Conditions for differentiability

A function $f: D \to \mathbb{R}$, where $D \subseteq \mathbb{R}$ is an interval, is differentiable at a point $c \in D$ if the limit
$$\lim_{h \to 0} \frac{f(c + h) - f(c)}{h}$$
exists and is finite; this limit is denoted $f'(c)$. This condition is equivalent to the difference quotient approaching the same value along every sequence $(h_n)$ in $\mathbb{R}$ with $h_n \neq 0$ and $h_n \to 0$. A function is differentiable on an interval $I$ if it is differentiable at every point in $I$, with the understanding that for interior points this requires the two-sided limit to exist, while for endpoints of a closed interval one-sided limits may be used if specified.

Derivatives possess the intermediate value property, even if they are discontinuous: if $f$ is differentiable on an interval $I$ and $f'(a) < \lambda < f'(b)$ for $a, b \in I$ with $a < b$, then there exists $c \in (a, b)$ such that $f'(c) = \lambda$. This result, known as Darboux's theorem, follows from the mean value theorem applied to auxiliary functions.

Sufficient conditions for differentiability include membership in the class $C^1(I)$, meaning $f$ is differentiable on $I$ with continuous derivative $f'$; this implies differentiability everywhere on $I$. A weaker condition is the Lipschitz condition: if $|f(x) - f(y)| \leq K |x - y|$ for some constant $K > 0$ and all $x, y \in I$, then $f$ is differentiable almost everywhere on $I$ (with respect to Lebesgue measure), by Rademacher's theorem.

An example of a function differentiable everywhere on $\mathbb{R}$ but with a discontinuous derivative is
$$f(x) = \begin{cases} x^2 \sin(1/x) & \text{if } x \neq 0, \\ 0 & \text{if } x = 0. \end{cases}$$
Here, $f'(x) = 2x \sin(1/x) - \cos(1/x)$ for $x \neq 0$ and $f'(0) = 0$, but $\lim_{x \to 0} f'(x)$ does not exist due to the oscillation of $-\cos(1/x)$.

Relation to continuity

A fundamental result in calculus establishes that differentiability at a point implies continuity at that same point. Specifically, if a function $f$ is differentiable at a point $a$ in its domain, then $f$ is continuous at $a$.

To prove this theorem, consider the definition of the derivative:
$$f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}.$$
Since the limit exists and equals $f'(a)$, multiply by $h$ (noting that $\lim_{h \to 0} h = 0$):
$$\lim_{h \to 0} [f(a + h) - f(a)] = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h} \cdot \lim_{h \to 0} h = f'(a) \cdot 0 = 0.$$
This shows that $\lim_{h \to 0} f(a + h) = f(a)$, which is precisely the definition of continuity at $a$.

The converse does not hold: a function can be continuous at a point without being differentiable there. For example, the function $f(x) = |x|$ is continuous at $x = 0$ because $\lim_{x \to 0} |x| = 0 = f(0)$, but it is not differentiable at $x = 0$ since the left-hand derivative is $-1$ and the right-hand derivative is $1$, so the two-sided limit does not exist.

This one-way implication has significant consequences in analysis: all differentiable functions are continuous, but continuity alone does not guarantee differentiability, highlighting that differentiability is a stricter condition. It plays a crucial role in theorems like the mean value theorem, which requires a function to be continuous on a closed interval $[a, b]$ and differentiable on the open interval $(a, b)$; the implication ensures the continuity condition is satisfied on the interior points where differentiability holds. Moreover, differentiability is a strictly local property: it only requires the existence of the derivative (and thus continuity) at the specific point $a$, without implications for behavior elsewhere in the domain.

Computation of Derivatives

Derivatives of basic functions

The derivative of a constant function $f(x) = c$, where $c$ is a constant, is zero. This follows from the limit definition of the derivative:
$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = \lim_{h \to 0} \frac{c - c}{h} = \lim_{h \to 0} \frac{0}{h} = 0.$$
This constant rule holds for any real constant $c$.

The power rule states that for a function $f(x) = x^n$, where $n$ is a positive integer, the derivative is $f'(x) = n x^{n-1}$. To derive this using the limit definition, substitute into the definition:
$$f'(x) = \lim_{h \to 0} \frac{(x + h)^n - x^n}{h}.$$
Expand $(x + h)^n$ using the binomial theorem:
$$(x + h)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} h^k = x^n + n x^{n-1} h + \frac{n(n-1)}{2} x^{n-2} h^2 + \cdots + h^n.$$
Subtract $x^n$ and divide by $h$:
$$\frac{(x + h)^n - x^n}{h} = n x^{n-1} + \frac{n(n-1)}{2} x^{n-2} h + \cdots + h^{n-1}.$$
As $h \to 0$, all terms containing $h$ vanish, yielding $n x^{n-1}$. This derivation extends to rational exponents via roots and powers, and to real exponents through limits and continuity arguments, maintaining the form $f'(x) = n x^{n-1}$.

The derivative of the sine function is $\frac{d}{dx} \sin x = \cos x$. Using the limit definition:
$$(\sin x)' = \lim_{h \to 0} \frac{\sin(x + h) - \sin x}{h}.$$
Apply the angle addition formula $\sin(x + h) = \sin x \cos h + \cos x \sin h$ and substitute:
$$\frac{\sin x \cos h + \cos x \sin h - \sin x}{h} = \sin x \cdot \frac{\cos h - 1}{h} + \cos x \cdot \frac{\sin h}{h}.$$
As $h \to 0$, $\lim_{h \to 0} \frac{\cos h - 1}{h} = 0$ and $\lim_{h \to 0} \frac{\sin h}{h} = 1$, so the limit simplifies to $\cos x \cdot 1 = \cos x$. Similarly, for cosine, $\frac{d}{dx} \cos x = -\sin x$, derived analogously using $\cos(x + h) = \cos x \cos h - \sin x \sin h$, yielding $\lim_{h \to 0} \frac{\cos(x + h) - \cos x}{h} = -\sin x$. These derivations rely on the standard limits $\lim_{h \to 0} \frac{\sin h}{h} = 1$ and $\lim_{h \to 0} \frac{\cos h - 1}{h} = 0$.

The derivative of the exponential function $f(x) = e^x$ is $f'(x) = e^x$. From the limit definition:
$$(e^x)' = \lim_{h \to 0} \frac{e^{x + h} - e^x}{h} = e^x \lim_{h \to 0} \frac{e^h - 1}{h}.$$
The key limit $\lim_{h \to 0} \frac{e^h - 1}{h} = 1$ characterizes the base $e$, confirming the result. This property distinguishes the natural exponential from exponentials with other bases.

The derivative of the natural logarithm $f(x) = \ln x$ for $x > 0$ is $f'(x) = \frac{1}{x}$. Using the limit definition:
$$(\ln x)' = \lim_{h \to 0^+} \frac{\ln(x + h) - \ln x}{h} = \lim_{h \to 0^+} \frac{\ln\left(1 + \frac{h}{x}\right)}{h} = \frac{1}{x} \lim_{h \to 0^+} \frac{\ln\left(1 + \frac{h}{x}\right)}{\frac{h}{x}}.$$
Let $k = \frac{h}{x}$, so as $h \to 0^+$, $k \to 0^+$, and the limit becomes $\frac{1}{x} \lim_{k \to 0^+} \frac{\ln(1 + k)}{k} = \frac{1}{x} \cdot 1$, since $\lim_{k \to 0} \frac{\ln(1 + k)}{k} = 1$ follows from the definition of the derivative of $\ln$ at 1 or from the series expansion of $\ln(1 + k)$.

Derivatives of combined functions

In calculus, derivatives of combined functions are computed using specific rules that extend the differentiation of basic functions to products, quotients, compositions, and other forms. These rules, developed in the late 17th century, enable the analysis of more complex expressions without expanding them fully, preserving efficiency in calculations.

The product rule applies to the derivative of a product of two differentiable functions $u(x)$ and $v(x)$. It states that
$$(u v)'(x) = u'(x) v(x) + u(x) v'(x),$$
where the prime denotes differentiation with respect to $x$. This formula, first articulated by Leibniz in his 1684 paper Nova Methodus pro Maximis et Minimis, accounts for the rate of change of each factor while holding the other constant.

The quotient rule handles the derivative of a quotient of two differentiable functions $u(x)$ and $v(x)$, where $v(x) \neq 0$. It is given by
$$\left( \frac{u}{v} \right)'(x) = \frac{u'(x) v(x) - u(x) v'(x)}{[v(x)]^2}.$$
Originating from the foundational work of Leibniz and Johann Bernoulli in the development of infinitesimal calculus, this rule can be derived by applying the product rule to $u(x) \cdot [v(x)]^{-1}$.

For compositions of functions, the chain rule computes the derivative of $f(g(x))$, where $f$ and $g$ are differentiable. The rule states
$$(f \circ g)'(x) = f'(g(x)) \cdot g'(x).$$
Leibniz introduced an early form of this rule in a 1676 manuscript, formalizing it as a substitution in limits. A proof sketch proceeds by definition: the derivative is
$$\lim_{h \to 0} \frac{f(g(x + h)) - f(g(x))}{h} = \lim_{h \to 0} \left[ \frac{f(g(x + h)) - f(g(x))}{g(x + h) - g(x)} \cdot \frac{g(x + h) - g(x)}{h} \right].$$
Letting $k = g(x + h) - g(x)$, as $h \to 0$, $k \to 0$ by continuity of $g$, so the limit becomes $f'(g(x)) \cdot g'(x)$ (the sketch assumes $g(x + h) \neq g(x)$ for small $h$; the general proof handles the remaining case separately).

Implicit differentiation finds $\frac{dy}{dx}$ when $y$ is defined implicitly by an equation $F(x, y) = 0$, assuming $y$ is differentiable with respect to $x$. Differentiating both sides with respect to $x$ yields
$$\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y},$$
provided $\frac{\partial F}{\partial y} \neq 0$. This technique, rooted in the calculus of differentials, treats $y$ as a function of $x$ and applies the chain rule to terms involving $y$.

Logarithmic differentiation simplifies derivatives of products, quotients, or powers by taking the natural logarithm. For a function $y = u(x)^{v(x)}$ or a product $y = u(x) v(x)$, compute $\ln y = v(x) \ln u(x)$ or $\ln y = \ln u(x) + \ln v(x)$, differentiate implicitly to obtain $\frac{1}{y} y'$ equal to the derivative of the right-hand side, then multiply by $y$. This method leverages the chain rule and properties of logarithms, and is particularly useful for expressions with variable exponents or multiple factors.

Computation examples

To illustrate the practical computation of derivatives, the following examples apply the product rule, quotient rule, and chain rule to specific functions, followed by a related rates application involving the inflation of a spherical balloon. These computations demonstrate step-by-step differentiation and simplification where appropriate.

Consider the function $y = x^3 \sin(x)$. To find $\frac{dy}{dx}$, apply the product rule, which states that if $y = u(x) v(x)$, then $\frac{dy}{dx} = u'(x) v(x) + u(x) v'(x)$, where $u(x) = x^3$ and $v(x) = \sin(x)$. The derivative of $u(x)$ is $u'(x) = 3x^2$ by the power rule, and the derivative of $v(x)$ is $v'(x) = \cos(x)$. Substituting yields:
$$\frac{dy}{dx} = 3x^2 \sin(x) + x^3 \cos(x).$$
This can be factored as $x^2 (3 \sin(x) + x \cos(x))$ for simplification.

Next, differentiate $y = \frac{x^2 + 1}{x - 1}$ using the quotient rule: if $y = \frac{u(x)}{v(x)}$, then $\frac{dy}{dx} = \frac{u'(x) v(x) - u(x) v'(x)}{[v(x)]^2}$, with $u(x) = x^2 + 1$ and $v(x) = x - 1$. Here, $u'(x) = 2x$ and $v'(x) = 1$. Substituting gives:
$$\frac{dy}{dx} = \frac{2x (x - 1) - (x^2 + 1) \cdot 1}{(x - 1)^2} = \frac{2x^2 - 2x - x^2 - 1}{(x - 1)^2} = \frac{x^2 - 2x - 1}{(x - 1)^2}.$$
This simplified form highlights the algebraic reduction after applying the rule.

For the chain rule, consider $y = \sin(x^2)$. Identify the outer function as $f(u) = \sin(u)$ where $u = x^2$ is the inner function. The chain rule states $\frac{dy}{dx} = f'(u) \cdot \frac{du}{dx}$, so $f'(u) = \cos(u) = \cos(x^2)$ and $\frac{du}{dx} = 2x$. Thus:
$$\frac{dy}{dx} = \cos(x^2) \cdot 2x = 2x \cos(x^2).$$
This example underscores the need to differentiate the inner function separately.

In related rates problems, derivatives relate rates of change over time. For an inflating spherical balloon with volume $V = \frac{4}{3} \pi r^3$, where $r$ is the radius, differentiate implicitly with respect to time $t$: $\frac{dV}{dt} = 4 \pi r^2 \frac{dr}{dt}$. Suppose the volume increases at $\frac{dV}{dt} = 100 \pi$ cubic units per second when $r = 2$ units. Then:
$$100 \pi = 4 \pi (2)^2 \frac{dr}{dt} \implies 100 \pi = 16 \pi \frac{dr}{dt} \implies \frac{dr}{dt} = \frac{100}{16} = 6.25 \text{ units per second}.$$
This computes the radius growth rate from the known volume rate.

Derivatives can be verified numerically by approximating the derivative via finite differences, such as $f'(x) \approx \frac{f(x + h) - f(x)}{h}$ for small $h$, or by plotting the function and its derivative to check tangency. For instance, for $y = x^3 \sin(x)$ at $x = \frac{\pi}{2}$, the exact derivative is approximately 7.40, while a numerical estimate with $h = 0.001$ yields about 7.40, confirming closeness; plotting shows the derivative curve matching the function's slope visually.
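The numerical verification described above can be written out directly. The Python sketch below (the step size $h = 0.001$ follows the text; the central-difference variant is an added comparison) checks the product-rule result for $y = x^3 \sin(x)$ at $x = \pi/2$ and the balloon related-rates value.

import math

f = lambda x: x**3 * math.sin(x)
df = lambda x: 3 * x**2 * math.sin(x) + x**3 * math.cos(x)   # product-rule result

x = math.pi / 2
h = 0.001
forward = (f(x + h) - f(x)) / h            # one-sided estimate used in the text
central = (f(x + h) - f(x - h)) / (2 * h)  # usually more accurate
print(df(x), forward, central)             # all close to 3*(pi/2)**2 ≈ 7.402

# related-rates balloon check: dV/dt = 4*pi*r**2 * dr/dt  =>  dr/dt = 100*pi / (16*pi) = 6.25
print(100 * math.pi / (4 * math.pi * 2.0**2))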

Antiderivatives

Definition of antiderivative

In calculus, an antiderivative of a function $f$, denoted $F$, is a function such that its derivative equals $f$, that is, $F'(x) = f(x)$ for all $x$ in the domain of $f$. This relationship positions antidifferentiation as the inverse operation to differentiation. The general form of an antiderivative incorporates an arbitrary constant $C$, and the indefinite integral notation $\int f(x) \, dx$ represents the family of all such antiderivatives. This notation emphasizes that antiderivatives are unique only up to an additive constant: if $F$ and $G$ are two antiderivatives of $f$ on an interval, then $F(x) - G(x) = C$ for some constant $C$. For basic power functions, the antiderivative of $f(x) = x^n$ where $n \neq -1$ is given by
$$\int x^n \, dx = \frac{x^{n+1}}{n+1} + C.$$
Differentiating this returns the original function $x^n$, illustrating how differentiation reverses the antidifferentiation process while discarding the constant.

Fundamental theorem of calculus

The fundamental theorem of calculus (FTC) establishes the profound connection between differentiation and definite integration, demonstrating that these two core operations of calculus are inverses under appropriate conditions. It consists of two parts that together justify the use of antiderivatives to evaluate definite integrals and reveal the derivative of an accumulated integral.

The first part states that if $f$ is continuous on the closed interval $[a, b]$, then the function defined by $F(x) = \int_a^x f(t) \, dt$ is differentiable on $(a, b)$ (and continuous on $[a, b]$) with derivative $F'(x) = f(x)$ for all $x \in (a, b)$. This result shows that the definite integral from a fixed lower limit to a variable upper limit yields an antiderivative of $f$, interpreting integration as the accumulation of the rate of change given by $f$.

A standard proof sketch for the first part relies on the Mean Value Theorem for Integrals. Consider the difference quotient for $F'(x)$:
$$F'(x) = \lim_{h \to 0} \frac{F(x+h) - F(x)}{h} = \lim_{h \to 0} \frac{1}{h} \int_x^{x+h} f(t) \, dt.$$
Since $f$ is continuous on $[x, x+h]$, the Mean Value Theorem for Integrals guarantees a point $c_h \in [x, x+h]$ such that $\int_x^{x+h} f(t) \, dt = f(c_h) \cdot h$, so the quotient simplifies to $f(c_h)$. As $h \to 0$, continuity of $f$ implies $c_h \to x$ and thus $f(c_h) \to f(x)$, yielding $F'(x) = f(x)$.

The second part, known as the evaluation theorem, states that if $f$ is continuous on $[a, b]$ and $F$ is any antiderivative of $f$ (so $F'(x) = f(x)$ on $[a, b]$), then
$$\int_a^b f(x) \, dx = F(b) - F(a).$$
This allows definite integrals, representing net accumulation over $[a, b]$, to be computed directly from the values of an antiderivative at the endpoints, bypassing explicit Riemann sums or approximation.

The continuity assumption on $f$ ensures Riemann integrability over $[a, b]$ and the existence of the derivative $F'(x) = f(x)$ for all $x \in (a, b)$. Weaker conditions, such as $f$ being Riemann integrable on $[a, b]$ (e.g., bounded with discontinuities on a set of measure zero), yield versions in which $F$ is differentiable with $F'(x) = f(x)$ at every point of $(a, b)$ where $f$ is continuous. The FTC's implications extend to numerical methods, where it underpins algorithms for approximating integrals by estimating antiderivatives, and it conceptually frames derivatives as instantaneous rates within the total change captured by integrals.
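The evaluation theorem can be checked numerically by comparing a simple quadrature rule against $F(b) - F(a)$. The Python sketch below (the trapezoid rule, the integrand, and the interval are illustrative assumptions) shows the two agreeing for $f(x) = x^2$ on $[0, 2]$.

def trapezoid(f, a, b, n=100_000):
    # simple trapezoid-rule approximation of the definite integral of f over [a, b]
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return total * h

f = lambda x: x**2
F = lambda x: x**3 / 3            # an antiderivative of f
a, b = 0.0, 2.0
print(trapezoid(f, a, b), F(b) - F(a))   # both ≈ 8/3, as the evaluation theorem predicts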

Higher-Order Derivatives

Second and higher derivatives

The second derivative of a function $f$, denoted $f''(x)$, is obtained by differentiating the first derivative $f'(x)$ with respect to $x$, providing insight into the function's curvature or concavity. For a twice-differentiable function, if $f''(x) > 0$, the graph is concave up at $x$ (resembling a U-shape), indicating that the slope of the function is increasing; conversely, if $f''(x) < 0$, it is concave down (resembling an inverted U), indicating a decreasing slope. This interpretation extends the first derivative's role as a rate of change to the second derivative's role as the rate of change of the slope, a concept formalized in classical texts.

Higher-order derivatives generalize this process: the $n$-th derivative $f^{(n)}(x)$ is the result of differentiating $f$ $n$ times successively, capturing increasingly refined aspects of the function's behavior, such as jerk in mechanics or higher-order terms in approximation theory. These derivatives exist if the function is sufficiently smooth, typically in the class $C^n$ of $n$-times continuously differentiable functions. In physics, for position $s(t)$, the first derivative is the velocity $v(t) = s'(t)$, the second is the acceleration $a(t) = s''(t)$, and higher ones describe changes in acceleration, essential for modeling oscillatory or rapidly varying motion. Applications include identifying inflection points, where $f''(x)$ changes sign, marking transitions between concave up and concave down.

A key application of higher derivatives is in local approximation via Taylor's theorem, which expands a function around a point $a$ as
$$f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n + R_n(x),$$
where $R_n(x)$ is the remainder term, allowing precise estimation of function values near $a$ using derivative information up to order $n$. This theorem, attributed to Brook Taylor in 1715, underpins series expansions and numerical methods in analysis.

For example, consider $f(x) = x^4$. The first derivative is $f'(x) = 4x^3$, the second is $f''(x) = 12x^2$ (always non-negative, so the graph is concave up everywhere), and the third is $f'''(x) = 24x$, which changes sign at $x = 0$.
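Taylor's theorem turns a list of higher-order derivative values at a point into a polynomial approximation. The Python sketch below (the choice of $f(x) = e^x$, whose derivatives at 0 are all 1, is an assumption made to keep the example self-contained) shows the approximation improving as the order grows.

import math

def taylor_value(derivs_at_a, a, x):
    # derivs_at_a[k] = f^(k)(a); evaluates the Taylor polynomial of f about a at x
    return sum(d * (x - a)**k / math.factorial(k) for k, d in enumerate(derivs_at_a))

# f(x) = e^x has every derivative equal to e^a; expand about a = 0
a, x = 0.0, 0.5
for n in range(1, 7):
    approx = taylor_value([math.exp(a)] * (n + 1), a, x)
    print(n, approx, math.exp(x))    # the approximation converges to e^0.5 ≈ 1.64872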

Notation for higher derivatives

For the second derivative of a function $f(x)$, the prime notation extends the first derivative symbol by adding a second prime, denoted as $f''(x)$. This convention, part of Lagrange's notation, indicates successive differentiation with respect to the independent variable $x$. Similarly, the third derivative is written as $f'''(x)$, with additional primes for each higher order. To denote the $n$th derivative in a more general form, the prime notation uses a superscript in parentheses, expressed as $f^{(n)}(x)$. This avoids excessive primes for large $n$ and clearly specifies the order of differentiation. An alternative in some contexts is the operator notation $D^n f(x)$, where $D$ represents the differentiation operator applied $n$ times.

In Leibniz notation, higher-order derivatives generalize the first derivative form $\frac{dy}{dx}$ to $\frac{d^n y}{dx^n}$ for the $n$th order, where the superscripts indicate $n$-fold application of the differential operator. This is particularly useful when the function is expressed as $y = f(x)$, as in $\frac{d^2 y}{dx^2}$ for the second derivative. Evaluation of higher-order derivatives at a specific point $a$ follows standard function notation, such as $f''(a)$ or $f^{(n)}(a)$.

In the context of differential equations, especially those involving time as the independent variable, Newton's dot notation is commonly employed for higher orders. The first derivative is $\dot{y}$, the second is $\ddot{y}$, and higher orders use multiple dots, such as $\dddot{y}$ for the third. This contrasts with the prime notation $y''$ often used for the same second-order derivative in non-time-dependent equations; the prime notation $y''$ is standard in ordinary differential equations to denote second derivatives without specifying the variable explicitly. For functions of several variables, higher-order partial derivatives use analogous notations but with partial symbols, such as $\frac{\partial^n f}{\partial x^n}$ or $f_{xx\dots x}$ (with $n$ subscripts), distinguishing them from total derivatives; these are addressed in multivariable contexts.

Derivatives in Several Variables

Partial derivatives

In multivariable calculus, the partial derivative of a function $f: \mathbb{R}^n \to \mathbb{R}$ with respect to one of its variables, say $x_i$, at a point $(a_1, \dots, a_n)$ is defined as the limit
$$\frac{\partial f}{\partial x_i}(a_1, \dots, a_n) = \lim_{h \to 0} \frac{f(a_1, \dots, a_i + h, \dots, a_n) - f(a_1, \dots, a_n)}{h},$$
provided the limit exists; this measures the rate of change of $f$ in the direction of the $x_i$-axis while holding all other variables constant. The definition extends the single-variable derivative by fixing the other inputs, analogous to the ordinary limit-based derivative but applied to a univariate slice of the function.

Common notation for the partial derivative of $f$ with respect to $x$ in a function of two variables $f(x, y)$ includes the Leibniz symbol $\frac{\partial f}{\partial x}$ or the subscript form $f_x$; subscripts are extended for higher orders, such as $f_{xy}$ for a second-order mixed partial. To compute a partial derivative, treat all variables except the one of interest as constants and apply the rules of single-variable differentiation, such as the power rule or chain rule. For example, consider $f(x, y) = x^2 y + \sin y$; the partial with respect to $x$ is $\frac{\partial f}{\partial x} = 2xy$, obtained by differentiating $x^2 y$ as if $y$ were constant and treating $\sin y$ as a constant, while the partial with respect to $y$ is $\frac{\partial f}{\partial y} = x^2 + \cos y$, obtained by differentiating $x^2 y$ with $x^2$ treated as a constant coefficient and differentiating $\sin y$ directly.

Higher-order partial derivatives are obtained by successive partial differentiation; for a second-order mixed partial such as $\frac{\partial^2 f}{\partial x \partial y}$, first compute $\frac{\partial f}{\partial y}$ and then take its partial with respect to $x$. Under suitable conditions, Clairaut's theorem states that if the mixed partial derivatives $\frac{\partial^2 f}{\partial x \partial y}$ and $\frac{\partial^2 f}{\partial y \partial x}$ both exist and are continuous in a neighborhood of a point, then they are equal at that point. This equality holds for most continuously differentiable functions encountered in applications.

Directional derivatives

The directional derivative of a scalar-valued function $f: \mathbb{R}^n \to \mathbb{R}$ at a point $\mathbf{a}$ in the direction of a unit vector $\mathbf{u}$ is defined as
$$D_{\mathbf{u}} f(\mathbf{a}) = \lim_{h \to 0} \frac{f(\mathbf{a} + h \mathbf{u}) - f(\mathbf{a})}{h},$$
provided the limit exists. This measures the instantaneous rate of change of $f$ at $\mathbf{a}$ as one moves along the line through $\mathbf{a}$ in the direction $\mathbf{u}$. Geometrically, the directional derivative $D_{\mathbf{u}} f(\mathbf{a})$ represents the slope of the tangent line to the curve obtained by restricting $f$ to the line passing through $\mathbf{a}$ in the direction $\mathbf{u}$. Partial derivatives are special cases of directional derivatives, corresponding to directions along the coordinate axes.

If $f$ is differentiable at $\mathbf{a}$, then the directional derivative exists in every direction $\mathbf{u}$ and equals the dot product of the gradient vector $\nabla f(\mathbf{a})$ with $\mathbf{u}$. However, the existence of partial derivatives at $\mathbf{a}$ does not ensure that directional derivatives exist in all directions; full differentiability of $f$ at $\mathbf{a}$ is required for directional derivatives to exist universally.

For instance, consider $f(x, y) = xy$ at the point $(1, 1)$ in the direction $\mathbf{u} = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right)$. The gradient is $\nabla f(x, y) = (y, x)$, so $\nabla f(1, 1) = (1, 1)$ and
$$D_{\mathbf{u}} f(1, 1) = (1, 1) \cdot \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right) = \frac{2}{\sqrt{2}} = \sqrt{2}.$$
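The worked example can be confirmed with a central-difference estimate along the direction $\mathbf{u}$; in the Python sketch below (the step size and helper name are illustrative), the numerical value agrees with $\sqrt{2}$.

import math

def directional_derivative(f, point, u, h=1e-6):
    # central-difference estimate of D_u f at the given point (u should be a unit vector)
    forward = f(*(p + h * ui for p, ui in zip(point, u)))
    backward = f(*(p - h * ui for p, ui in zip(point, u)))
    return (forward - backward) / (2 * h)

f = lambda x, y: x * y
u = (1 / math.sqrt(2), 1 / math.sqrt(2))
print(directional_derivative(f, (1.0, 1.0), u), math.sqrt(2))   # both ≈ 1.41421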