Multivariable calculus
from Wikipedia

Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one.[1]

Multivariable calculus may be thought of as an elementary part of calculus on Euclidean space. The special case of calculus in three-dimensional space is often called vector calculus.

Introduction


In single-variable calculus, operations like differentiation and integration are applied to functions of a single variable. In multivariable calculus, these operations must be generalized to functions of multiple variables, so the domain is multi-dimensional. Care is required in these generalizations because of two key differences between 1D and higher-dimensional spaces:

  1. There are infinitely many ways to approach a single point in higher dimensions, as opposed to two (from the positive and negative directions) in 1D;
  2. There are multiple types of extended objects in higher dimensions; for example, a 1D function is represented as a curve on the 2D Cartesian plane, but a scalar-valued function of two variables is a surface in 3D, while curves can also live in 3D space.

The consequence of the first difference is a change in the definitions of limits and continuity. Directional limits and derivatives define the limit and differential along a 1D parametrized curve, reducing the problem to the 1D case. Further higher-dimensional objects can be constructed from these operators.

The consequence of the second difference is the existence of multiple types of integration, including line integrals, surface integrals and volume integrals. Due to the non-uniqueness of these integrals, an antiderivative or indefinite integral cannot be properly defined.

Limits


A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions.

A limit along a path may be defined by considering a parametrised path $s(t): \mathbb{R} \to \mathbb{R}^n$ in $n$-dimensional Euclidean space. Any function $f(\mathbf{x}): \mathbb{R}^n \to \mathbb{R}^m$ can then be projected onto the path as a 1D function $f(s(t))$. The limit of $f$ to the point $s(t_0)$ along the path $s(t)$ can hence be defined as

$$\lim_{t \to t_0} f(s(t)).$$

Note that the value of this limit can depend on the form of $s(t)$, i.e. the path chosen, not just the point which the limit approaches.[1]: 19–22  For example, consider the function

$$f(x, y) = \frac{x^2 y}{x^4 + y^2}.$$

[Plot of the function $f(x, y) = \frac{x^2 y}{x^4 + y^2}$]

If the point $(0, 0)$ is approached through the line $y = kx$, or in parametric form

$$x(t) = t, \quad y(t) = kt,$$

then the limit along the path will be

$$\lim_{t \to 0} f(x(t), y(t)) = \lim_{t \to 0} \frac{k t^3}{t^4 + k^2 t^2} = 0.$$

On the other hand, if the path $y = x^2$ (or parametrically, $x(t) = t,\ y(t) = t^2$) is chosen, then the limit becomes

$$\lim_{t \to 0} f(x(t), y(t)) = \lim_{t \to 0} \frac{t^4}{t^4 + t^4} = \frac{1}{2}.$$

Since taking different paths towards the same point yields different values, a general limit at the point cannot be defined for the function.
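
The path dependence can also be checked numerically. The following Python sketch (an added illustration, assuming a standard Python 3 interpreter; the choice $k = 2$ for the line is arbitrary) samples $f(x, y) = x^2 y / (x^4 + y^2)$ along the line $y = 2x$ and along the parabola $y = x^2$ as the parameter tends to zero; the values approach $0$ and $1/2$ respectively.

```python
def f(x, y):
    """f(x, y) = x^2 y / (x^4 + y^2), with f(0, 0) taken to be 0."""
    if x == 0 and y == 0:
        return 0.0
    return (x * x * y) / (x ** 4 + y * y)

# Sample the function along two paths approaching the origin.
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    along_line = f(t, 2 * t)      # path y = 2x  -> values tend to 0
    along_parabola = f(t, t * t)  # path y = x^2 -> values tend to 1/2
    print(f"t={t:.0e}  line: {along_line:.6f}  parabola: {along_parabola:.6f}")
```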

A general limit can be defined if the limits to a point along all possible paths converge to the same value, i.e. we say for a function $f: \mathbb{R}^n \to \mathbb{R}^m$ that the limit of $f$ to some point $\mathbf{x}_0$ is $L$, if and only if

$$\lim_{t \to t_0} f(s(t)) = L$$

for all continuous functions $s(t): \mathbb{R} \to \mathbb{R}^n$ such that $s(t_0) = \mathbf{x}_0$.

Continuity


From the concept of limit along a path, we can then derive the definition for multivariate continuity in the same manner, that is: we say for a function $f: \mathbb{R}^n \to \mathbb{R}^m$ that $f$ is continuous at the point $\mathbf{x}_0$, if and only if

$$\lim_{t \to t_0} f(s(t)) = f(\mathbf{x}_0)$$

for all continuous functions $s(t): \mathbb{R} \to \mathbb{R}^n$ such that $s(t_0) = \mathbf{x}_0$.

As with limits, being continuous along one path does not imply multivariate continuity.

Continuity in each argument is not sufficient for multivariate continuity, as the following example shows.[1]: 17–19  For a real-valued function with two real-valued parameters, $f(x, y)$, continuity of $f$ in $x$ for fixed $y$ and continuity of $f$ in $y$ for fixed $x$ do not imply continuity of $f$.

Consider

$$f(x, y) = \begin{cases} \dfrac{y}{x} - y & \text{if } 0 \leq y < x \leq 1 \\[4pt] \dfrac{x}{y} - x & \text{if } 0 \leq x < y \leq 1 \\[4pt] 1 - x & \text{if } 0 < x = y \\[4pt] 0 & \text{everywhere else.} \end{cases}$$

It is easy to verify that this function is zero by definition on the boundary and outside of the quadrangle $(0, 1) \times (0, 1)$. Furthermore, the functions defined for constant $x$ and $y$ and $0 \leq a \leq 1$ by

$$g_a(x) = f(x, a)$$

and

$$h_a(y) = f(a, y)$$

are continuous. Specifically,

$$f(0, y) = f(x, 0) = 0$$

for all $x$ and $y$. Therefore, $f(0, 0) = 0$, and moreover, along the coordinate axes, $\lim_{x \to 0} f(x, 0) = 0$ and $\lim_{y \to 0} f(0, y) = 0$. Therefore the function is continuous along both individual arguments.

However, consider the parametric path $x(t) = t,\ y(t) = t$. The parametric function becomes

$$f(t, t) = \begin{cases} 1 - t & \text{if } 0 < t \leq 1 \\ 0 & \text{everywhere else.} \end{cases}$$

Therefore,

$$\lim_{t \to 0^+} f(t, t) = 1 \neq 0 = f(0, 0).$$

It is hence clear that the function is not multivariate continuous, despite being continuous in both coordinates.
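
A short Python check of the piecewise function defined above (an added illustration) samples it along each coordinate axis and along the diagonal; the axis values stay at $0$ while the diagonal values approach $1$.

```python
def f(x, y):
    """The piecewise function from the example above."""
    if 0 <= y < x <= 1:
        return y / x - y
    if 0 <= x < y <= 1:
        return x / y - x
    if 0 < x == y:
        return 1 - x
    return 0.0

for t in [1e-1, 1e-2, 1e-3]:
    # Along the axes the values are identically 0; along the diagonal they are 1 - t.
    print(f"t={t:.0e}  axis x: {f(t, 0)}  axis y: {f(0, t)}  diagonal: {f(t, t):.4f}")
```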

Theorems regarding multivariate limits and continuity

  • All properties of linearity and superposition from single-variable calculus carry over to multivariate calculus.
  • Composition: If $f(x, y)$ and $g(z)$ are continuous at the points $(x_0, y_0)$ and $z_0 = f(x_0, y_0)$ respectively, then the composition $g \circ f$ is also a multivariate continuous function at the point $(x_0, y_0)$.
  • Multiplication: If $f(x, y)$ and $g(x, y)$ are both continuous functions at the point $(x_0, y_0)$, then $fg$ is continuous at $(x_0, y_0)$, and $f/g$ is also continuous at $(x_0, y_0)$ provided that $g(x_0, y_0) \neq 0$.
  • If $f(x, y)$ is a continuous function at the point $(x_0, y_0)$, then $|f(x, y)|$ is also continuous at the same point.
  • If $f(x, y)$ is Lipschitz continuous (with the appropriate normed spaces as needed) in the neighbourhood of the point $(x_0, y_0)$, then $f$ is multivariate continuous at $(x_0, y_0)$.

Differentiation


Directional derivative


The derivative of a single-variable function is defined as

$$\frac{df}{dx}(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.$$

Using the extension of limits discussed above, one can then extend the definition of the derivative to a scalar-valued function $f: \mathbb{R}^n \to \mathbb{R}$ along some path $s(t): \mathbb{R} \to \mathbb{R}^n$:

$$\left.\frac{d}{dt} f(s(t))\right|_{t = t_0} = \lim_{t \to t_0} \frac{f(s(t)) - f(s(t_0))}{t - t_0}.$$

Unlike limits, for which the value depends on the exact form of the path $s(t)$, it can be shown that the derivative along the path depends only on the tangent vector of the path at $s(t_0)$, i.e. $s'(t_0)$, provided that $f$ is Lipschitz continuous at $s(t_0)$ and that the limit exists for at least one such path.

It is therefore possible to generalize the definition of the directional derivative as follows: the directional derivative of a scalar-valued function $f(\mathbf{x}) = f(x_1, x_2, \dots, x_n)$ along the unit vector $\hat{\mathbf{u}}$ at some point $\mathbf{x}_0$ is

$$\nabla_{\hat{\mathbf{u}}} f(\mathbf{x}_0) = \lim_{h \to 0} \frac{f(\mathbf{x}_0 + h \hat{\mathbf{u}}) - f(\mathbf{x}_0)}{h},$$

or, when expressed in terms of ordinary differentiation,

$$\nabla_{\hat{\mathbf{u}}} f(\mathbf{x}_0) = \left.\frac{d}{dh} f(\mathbf{x}_0 + h \hat{\mathbf{u}})\right|_{h = 0},$$

which is a well-defined expression because $f(\mathbf{x}_0 + h \hat{\mathbf{u}})$ is a scalar function of the single variable $h$.

It is not possible to define a unique scalar derivative without a direction; it is clear, for example, that $\nabla_{\hat{\mathbf{u}}} f(\mathbf{x}_0) = -\nabla_{-\hat{\mathbf{u}}} f(\mathbf{x}_0)$. It is also possible for directional derivatives to exist for some directions but not for others.
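
Because $f(\mathbf{x}_0 + h \hat{\mathbf{u}})$ is a function of the single variable $h$, the directional derivative can be approximated with an ordinary finite difference in $h$. The following Python sketch is an added illustration using a hypothetical example function $f(x, y) = x^2 y + \sin y$ (not taken from the article).

```python
import math

def f(x, y):
    # Hypothetical example function f(x, y) = x^2 * y + sin(y).
    return x * x * y + math.sin(y)

def directional_derivative(f, point, direction, h=1e-6):
    """Approximate the derivative of f at `point` along `direction`
    using a symmetric difference in the single variable h."""
    (x, y), (ux, uy) = point, direction
    norm = math.hypot(ux, uy)
    ux, uy = ux / norm, uy / norm              # normalize to a unit vector
    forward = f(x + h * ux, y + h * uy)
    backward = f(x - h * ux, y - h * uy)
    return (forward - backward) / (2 * h)

# Along u = (1, 1)/sqrt(2) at the point (1, 0): the exact value is sqrt(2) ≈ 1.41421.
print(directional_derivative(f, (1.0, 0.0), (1.0, 1.0)))
```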

Partial derivative


The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant.[1]: 26ff 

A partial derivative may be thought of as the directional derivative of the function along a coordinate axis.

Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator ($\nabla$) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which varies from point to point in the domain of the function.
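
As a sketch of this idea (an added illustration, not part of the article), the Jacobian matrix of a map $f: \mathbb{R}^2 \to \mathbb{R}^2$ can be approximated column by column with finite differences, each column holding the partial derivatives with respect to one variable while the other is held constant. The polar-to-Cartesian map is used here as an assumed example.

```python
import math

def polar_to_cartesian(r, theta):
    # Assumed example map f(r, theta) = (r*cos(theta), r*sin(theta)).
    return (r * math.cos(theta), r * math.sin(theta))

def jacobian(f, point, h=1e-6):
    """Approximate the Jacobian of f: R^2 -> R^2 at `point`.
    Rows are output components, columns are input variables."""
    x, y = point
    fx_plus, fx_minus = f(x + h, y), f(x - h, y)
    fy_plus, fy_minus = f(x, y + h), f(x, y - h)
    col_x = [(fx_plus[i] - fx_minus[i]) / (2 * h) for i in range(2)]
    col_y = [(fy_plus[i] - fy_minus[i]) / (2 * h) for i in range(2)]
    return [[col_x[0], col_y[0]], [col_x[1], col_y[1]]]

# At (r, theta) = (2, pi/3) the exact Jacobian is [[cos t, -r sin t], [sin t, r cos t]].
for row in jacobian(polar_to_cartesian, (2.0, math.pi / 3)):
    print(row)
```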

Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable.[1]: 654ff 

Multiple integration


The multiple integral extends the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration.[1]: 367ff 
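
For instance, a double integral can be evaluated numerically as an iterated integral. The following Python sketch (an added illustration, assuming SciPy is installed; the integrand $f(x, y) = x y^2$ over the unit square is an arbitrary example with exact value $1/6$) performs the iterated integration in both orders, which agree as Fubini's theorem guarantees for a continuous integrand.

```python
from scipy.integrate import dblquad

# f(x, y) = x * y**2 over the unit square [0, 1] x [0, 1]; exact value is 1/6.
# dblquad integrates its first lambda argument as the inner variable.
inner_y_first, _ = dblquad(lambda y, x: x * y**2, 0, 1, lambda x: 0, lambda x: 1)

# Swapping the roles of the variables gives the other iterated integral.
inner_x_first, _ = dblquad(lambda x, y: x * y**2, 0, 1, lambda y: 0, lambda y: 1)

print(inner_y_first, inner_x_first)  # both approximately 0.166666...
```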

The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves.

Fundamental theorem of calculus in multiple dimensions


In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus:[1]: 543ff 

  • Gradient theorem: $\displaystyle \int_{\gamma} \nabla \varphi \cdot d\mathbf{r} = \varphi(\mathbf{q}) - \varphi(\mathbf{p})$ for a curve $\gamma$ from $\mathbf{p}$ to $\mathbf{q}$.
  • Green's theorem: $\displaystyle \oint_{C} (L\, dx + M\, dy) = \iint_{D} \left( \frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} \right) dx\, dy$ for a plane region $D$ bounded by the curve $C$.
  • Stokes' theorem: $\displaystyle \iint_{\Sigma} (\nabla \times \mathbf{F}) \cdot d\mathbf{\Sigma} = \oint_{\partial \Sigma} \mathbf{F} \cdot d\mathbf{r}$ for a surface $\Sigma$ with boundary curve $\partial \Sigma$.
  • Divergence theorem: $\displaystyle \iiint_{V} (\nabla \cdot \mathbf{F})\, dV = \iint_{\partial V} (\mathbf{F} \cdot \hat{\mathbf{n}})\, dS$ for a solid region $V$ with boundary surface $\partial V$.
In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds.[2]
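
To make the link concrete, the following Python sketch (an added illustration, not part of the article; the vector field $(L, M) = (-y, x)$ on the unit disk is chosen purely for convenience) numerically checks Green's theorem: both the line integral around the boundary circle and the double integral of $\partial M/\partial x - \partial L/\partial y = 2$ over the disk come out to approximately $2\pi$.

```python
import math

N = 100_000
dt = 2 * math.pi / N

# Line integral of (-y dx + x dy) around the unit circle, parametrized by t.
line_integral = 0.0
for i in range(N):
    t = i * dt
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t) * dt, math.cos(t) * dt
    line_integral += -y * dx + x * dy

# Double integral of (dM/dx - dL/dy) = 2 over the unit disk, in polar coordinates.
double_integral = 0.0
M_r, M_theta = 400, 400
dr, dtheta = 1 / M_r, 2 * math.pi / M_theta
for i in range(M_r):
    r = (i + 0.5) * dr          # midpoint rule in the radial direction
    for j in range(M_theta):
        double_integral += 2 * r * dr * dtheta

print(line_integral, double_integral, 2 * math.pi)  # all approximately 6.28319
```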

Applications and uses


Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular,

Type of functions | Applicable techniques
Curves, $f: \mathbb{R} \to \mathbb{R}^n$ for $n > 1$ | Lengths of curves, line integrals, and curvature.
Surfaces, $f: \mathbb{R}^2 \to \mathbb{R}^n$ for $n > 2$ | Areas of surfaces, surface integrals, flux through surfaces, and curvature.
Scalar fields, $f: \mathbb{R}^n \to \mathbb{R}$ | Maxima and minima, Lagrange multipliers, directional derivatives, level sets.
Vector fields, $f: \mathbb{R}^m \to \mathbb{R}^n$ | Any of the operations of vector calculus including gradient, divergence, and curl.

Multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics.

Multivariate calculus is used in the optimal control of continuous-time dynamic systems. It is used in regression analysis to derive formulas for estimating relationships among various sets of empirical data.

Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. In economics, for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus.

Non-deterministic, or stochastic, systems can be studied using a different kind of mathematics, such as stochastic calculus.

from Grokipedia
Multivariable calculus is the extension of single-variable calculus to functions of several variables, encompassing differential, integral, and vector calculus techniques applied to higher-dimensional spaces. It provides mathematical tools essential for modeling and analyzing phenomena in fields such as physics, engineering, and economics, where quantities depend on multiple independent variables. At its core, multivariable calculus introduces partial derivatives, which measure how a function changes with respect to one variable while holding others constant, along with concepts like the chain rule, directional derivatives, and gradients for optimization and tangent planes to surfaces. These tools enable the study of functions from $\mathbb{R}^n$ to $\mathbb{R}$, including level sets, extrema via Lagrange multipliers, and the geometry of curves and surfaces in space.

The integral calculus component generalizes to multiple integrals, such as double and triple integrals over regions in the plane or space, often computed using iterated integrals or coordinate transformations like polar, cylindrical, or spherical systems. These integrals quantify volumes, masses, and other accumulated quantities, with theorems like Fubini's allowing evaluation by successive single integrations.

A significant aspect is vector calculus, which deals with vector fields, line integrals along curves, surface integrals over parametrized surfaces, and fundamental theorems including Green's theorem for planar regions, Stokes' theorem relating line and surface integrals, and the divergence theorem connecting flux through closed surfaces to volume integrals. These results unify differential and integral forms, facilitating applications in fluid dynamics, electromagnetism, and the study of conservative fields. Overall, multivariable calculus forms a foundational framework for advanced mathematics and scientific computation, emphasizing both theoretical rigor and practical problem-solving.

Overview

Definition and Scope

Multivariable calculus is a branch of mathematics that extends the principles of single-variable calculus to functions involving two or more independent variables, generalizing concepts such as limits, derivatives, and integrals to higher-dimensional spaces. In this framework, functions map points in $\mathbb{R}^n$ (where $n \geq 2$) to values in $\mathbb{R}$ or $\mathbb{R}^m$, enabling the analysis of phenomena that depend on multiple inputs simultaneously. This generalization addresses the behavior of such functions over regions in multiple dimensions, building on foundational tools like vectors to represent points and directions in $\mathbb{R}^n$.

The scope of multivariable calculus includes key topics such as partial derivatives for measuring rates of change with respect to individual variables, multiple integrals for computing volumes and masses in higher dimensions, and the study of vector fields through line integrals, surface integrals, and theorems like Stokes' theorem and the divergence theorem. It contrasts with single-variable calculus by introducing complexities like path non-uniqueness in limits, where the approach to a point can vary along different directions, and the incorporation of higher-dimensional geometric objects, such as curves, surfaces, and manifolds, which require careful consideration of orientation and parametrization. These elements provide a rigorous toolkit for handling multivariable systems without the linear constraints of one-dimensional analysis.

Multivariable calculus holds profound importance in applied sciences, serving as a cornerstone for modeling real-world systems with interdependent variables, such as electromagnetic fields in physics, optimization of production functions in economics, and stress analysis in engineering structures. By enabling the quantification of gradients, fluxes, and extrema in multiple dimensions, it facilitates precise predictions and designs in these fields, revealing insights unattainable through single-variable methods alone. This field emerged in the 19th century through pivotal contributions from mathematicians including Carl Friedrich Gauss, who advanced surface theory and curvature, and Bernhard Riemann, who introduced concepts of n-dimensional manifolds, setting the stage for its formal development (see Historical Development for further details).

Historical Development

The foundations of multivariable calculus were established in the late 17th century through the independent development of single-variable calculus by Isaac Newton and Gottfried Wilhelm Leibniz, which provided the analytical tools necessary for extending differentiation and integration to functions of multiple variables. These early contributions focused primarily on one-dimensional problems in physics and geometry, but they set the stage for handling higher-dimensional phenomena by introducing concepts like limits, derivatives, and integrals that could be generalized.

In the 18th century, Leonhard Euler advanced the field by incorporating multivariable ideas into his studies of fluid dynamics, formulating equations that described the motion of inviscid fluids using partial differential equations around the 1750s. Toward the late 1700s, Joseph-Louis Lagrange further developed partial derivatives as a key tool in the calculus of variations, applying them to optimize functions subject to constraints and laying groundwork for variational problems involving multiple variables.

The 19th century marked significant milestones, beginning with Carl Friedrich Gauss's 1827 paper on the theory of curved surfaces, which introduced intrinsic measures of curvature independent of embedding in higher-dimensional space. Augustin-Louis Cauchy contributed to the study of double integrals in the 1810s, examining issues with changing the order of integration in his 1814 memoir on definite integrals. Key theorems emerged soon after: George Green published his theorem in 1828, relating line integrals to area integrals for potential functions; George Gabriel Stokes stated his generalization in 1850, connecting surface integrals to line integrals on boundaries; and Gauss formulated the divergence theorem around 1813, publishing it in 1833, linking volume integrals to surface fluxes. Bernhard Riemann advanced integration and geometric theories in the 1850s through his work on complex functions and manifolds, influencing multivariable calculus. By the 1880s, Josiah Willard Gibbs and Oliver Heaviside independently developed vector calculus, systematizing operations like the gradient, divergence, and curl to unify these theorems in a vector framework.

In the 20th century, multivariable calculus was refined through abstract formulations in differential geometry and analysis, with Bernhard Riemann's 1854 habilitation lecture influencing later generalizations to manifolds, and subsequent work by figures like Gregorio Ricci-Curbastro and Tullio Levi-Civita integrating tensor analysis for curved spaces in the early 1900s. These advancements, building on 19th-century foundations, enabled applications in general relativity and modern analysis by abstracting multivariable concepts to arbitrary dimensions without Euclidean assumptions.

Mathematical Foundations

Vectors and Vector Operations

In multivariable calculus, vectors in Euclidean space $\mathbb{R}^n$ are defined as ordered $n$-tuples of real numbers, such as $\mathbf{v} = (v_1, v_2, \dots, v_n)$ where each $v_i \in \mathbb{R}$. Geometrically, these vectors can be interpreted as points in $n$-dimensional space or as directed arrows originating from the origin, providing a foundation for representing positions and directions in higher dimensions.

Basic vector operations in $\mathbb{R}^n$ include addition and scalar multiplication. For two vectors $\mathbf{u} = (u_1, \dots, u_n)$ and $\mathbf{v} = (v_1, \dots, v_n)$, their sum is $\mathbf{u} + \mathbf{v} = (u_1 + v_1, \dots, u_n + v_n)$, which geometrically corresponds to the parallelogram rule of vector addition. Scalar multiplication by a scalar $c \in \mathbb{R}$ yields $c\mathbf{u} = (c u_1, \dots, c u_n)$, scaling the vector's magnitude and reversing its direction if $c < 0$.

The dot product, also known as the inner product, is a fundamental operation that produces a scalar from two vectors $\mathbf{a} = (a_1, \dots, a_n)$ and $\mathbf{b} = (b_1, \dots, b_n)$, defined algebraically as $\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^n a_i b_i$. The Euclidean norm, or length, of a vector $\mathbf{a}$ is then given by $\|\mathbf{a}\| = \sqrt{\mathbf{a} \cdot \mathbf{a}} = \sqrt{\sum_{i=1}^n a_i^2}$.
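
A minimal Python sketch of these operations (an added illustration, using plain lists rather than any particular library):

```python
import math

def dot(a, b):
    """Dot product of two equal-length vectors: sum of componentwise products."""
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    """Euclidean norm: square root of the dot product of a vector with itself."""
    return math.sqrt(dot(a, a))

u = [1.0, 2.0, 3.0]
v = [4.0, -5.0, 6.0]

print([ui + vi for ui, vi in zip(u, v)])  # vector addition: [5.0, -3.0, 9.0]
print([2 * ui for ui in u])               # scalar multiplication: [2.0, 4.0, 6.0]
print(dot(u, v))                          # 1*4 + 2*(-5) + 3*6 = 12.0
print(norm(u))                            # sqrt(14) ≈ 3.7417
```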