Covariance and contravariance of vectors

from Wikipedia
A vector v represented in terms of the tangent basis e_1, e_2, e_3 to the coordinate curves (left) and in terms of the dual basis, covector basis, or reciprocal basis e^1, e^2, e^3 to the coordinate surfaces (right), in 3-d general curvilinear coordinates (q^1, q^2, q^3), a tuple of numbers defining a point in a position space. Note that the basis and cobasis coincide only when the basis is orthonormal.[1]

In physics, especially in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis.[2] Briefly, a contravariant vector is a list of numbers that transforms oppositely to a change of basis, and a covariant vector is a list of numbers that transforms in the same way. Contravariant vectors are often just called vectors and covariant vectors are called covectors or dual vectors. The terms covariant and contravariant were introduced by James Joseph Sylvester in 1851.[3][4]

Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems. Associated with any coordinate system is a natural choice of coordinate basis for vectors based at each point of the space, and covariance and contravariance are particularly important for understanding how the coordinate description of a vector changes by passing from one coordinate system to another. Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance.

Introduction


In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list (or tuple) of numbers such as

$$(v_1, v_2, v_3).$$

The numbers in the list depend on the choice of coordinate system. For instance, if the vector represents position with respect to an observer (position vector), then the coordinate system may be obtained from a system of rigid rods, or reference axes, along which the components $v_1$, $v_2$, and $v_3$ are measured. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vector will transform in a certain way in passing from one coordinate system to another.

A simple illustrative case is that of a Euclidean vector. Once a set of basis vectors has been defined, the components of a vector always vary oppositely to the basis vectors; such a vector is therefore a contravariant tensor. Take a standard position vector for example. By changing the scale of the reference axes from meters to centimeters (that is, dividing the scale of the reference axes by 100, so that the basis vectors are now 0.01 meters long), the components of the measured position vector are multiplied by 100. A vector's components change scale inversely to changes in the scale of the reference axes, and consequently a vector is called a contravariant tensor.

A vector, which is an example of a contravariant tensor, has components that transform inversely to the transformation of the reference axes (example transformations include rotations and dilations). The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector would shrink in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an invertible matrix M, so that the basis vectors transform according to $\mathbf{e}'_j = \sum_i M_{ij}\,\mathbf{e}_i$, then the components of a vector v in the original basis ($v^i$) must be similarly transformed via $v'^j = \sum_i (M^{-1})_{ji}\,v^i$. The components of a vector are often represented arranged in a column.
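
A minimal numerical sketch of this rule (Python/NumPy, with made-up component values; the factor-of-100 rescaling mirrors the meters-to-centimeters example above):

import numpy as np

# Columns of E are the basis vectors; start with the standard basis (meters).
E = np.eye(3)
v_components = np.array([1.5, 2.0, 0.5])   # components of v in this basis
v = E @ v_components                        # the geometric vector itself

# Change of basis: each basis vector shrinks to 1/100 of its length
# (meters -> centimeters), so E' = E M with M = I/100.
M = np.eye(3) / 100
E_new = E @ M

# Contravariant rule: components transform with the inverse of M (here, x100).
v_components_new = np.linalg.inv(M) @ v_components

# The vector reconstructed in the new basis is unchanged.
assert np.allclose(E_new @ v_components_new, v)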

By contrast, a covector has components that transform like the reference axes themselves. It lives in the dual vector space, and represents a linear map from vectors to scalars. The dot-product operator involving vectors is a good example of a covector. To illustrate, assume we have a covector defined as $\alpha(\mathbf{v}) = \mathbf{w}\cdot\mathbf{v}$, where $\mathbf{w}$ is a fixed vector. The components of this covector in some arbitrary basis are $\alpha_i = \mathbf{w}\cdot\mathbf{e}_i$, with $\mathbf{e}_i$ being the basis vectors of the corresponding vector space. (This can be derived by noting that we want to get the correct answer for the dot-product operation when multiplying by an arbitrary vector $\mathbf{v}$ with components $v^i$.) The covariance of these covector components is then seen by noting that if a transformation described by an invertible matrix M were to be applied to the basis vectors of the corresponding vector space, $\mathbf{e}'_j = \sum_i M_{ij}\,\mathbf{e}_i$, then the components of the covector would transform with the same matrix M, namely $\alpha'_j = \sum_i M_{ij}\,\alpha_i$. The components of a covector are often represented arranged in a row.

A third concept related to covariance and contravariance is invariance. A scalar (also called a type-0 or rank-0 tensor) is an object that does not vary with the change in basis. An example of a physical observable that is a scalar is the mass of a particle. The single, scalar value of mass is independent of changes in basis vectors and consequently is called invariant. The magnitude of a vector (such as distance) is another example of an invariant, because it remains fixed even as the geometrical vector components vary. (For example, for a position vector measured in meters, if all Cartesian basis vectors are changed from 1 meter in length to 0.01 meters in length, the length of the position vector remains unchanged, although the vector components all increase by a factor of 100.) The scalar product of a vector and a covector is invariant, because one has components that vary with the base change, and the other has components that vary oppositely, and the two effects cancel out. One thus says that covectors are dual to vectors.

Thus, to summarize:

  • A vector, or tangent vector, has components that contra-vary with a change of basis to compensate. That is, the matrix that transforms the vector components must be the inverse of the matrix that transforms the basis vectors. The components of vectors (as opposed to those of covectors) are said to be contravariant. In Einstein notation (implicit summation over a repeated index), contravariant components are denoted with upper indices, as in $\mathbf{v} = v^i\,\mathbf{e}_i$.
  • A covector, or cotangent vector, has components that co-vary with a change of basis in the corresponding (initial) vector space. That is, the components must be transformed by the same matrix as the change-of-basis matrix in the corresponding (initial) vector space. The components of covectors (as opposed to those of vectors) are said to be covariant. In Einstein notation, covariant components are denoted with lower indices, as in $\boldsymbol{\alpha} = \alpha_i\,\mathbf{e}^i$.
  • The scalar product of a vector and a covector is the scalar $\alpha(\mathbf{v}) = \alpha_i v^i$, which is invariant. It is the duality pairing of vectors and covectors; a numerical sketch of all three points follows this list.
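
The following sketch (Python/NumPy, with an assumed rotation as the change of basis and made-up component values) checks the three points above: vector components transform with the inverse matrix, covector components with the matrix itself, and the pairing is invariant.

import numpy as np

# A hypothetical change of basis: rotate the plane basis by 30 degrees.
t = np.pi / 6
M = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])      # new basis vectors: e'_j = sum_i M_ij e_i

v = np.array([3.0, 4.0])        # contravariant components (a column)
a = np.array([0.5, -1.0])       # covariant components of a covector (a row)

v_new = np.linalg.inv(M) @ v    # vector components: transform with M^(-1)
a_new = a @ M                   # covector components: transform with M itself

# The scalar pairing alpha(v) = a_i v^i is invariant under the change of basis.
assert np.isclose(a @ v, a_new @ v_new)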

Definition


The general formulation of covariance and contravariance refers to how the components of a coordinate vector transform under a change of basis (passive transformation).[5] Thus let V be a vector space of dimension n over a field of scalars S, and let each of f = (X_1, ..., X_n) and f′ = (Y_1, ..., Y_n) be a basis of V.[note 1] Also, let the change of basis from f to f′ be given by

$$f \mapsto f' = fA \tag{1}$$

for some invertible n×n matrix A with entries $a^i_j$. Here, each vector Y_j of the f′ basis is a linear combination of the vectors X_i of the f basis, so that

$$Y_j = \sum_i a^i_j\, X_i,$$

which are the columns of the matrix product $fA$ (regarding f as a row of basis vectors).
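
For concreteness, unpacking this notation in the case n = 2 (a toy instance, not from the original):

$$(Y_1,\; Y_2) \;=\; (X_1,\; X_2)\begin{pmatrix} a^1_1 & a^1_2 \\ a^2_1 & a^2_2 \end{pmatrix}, \qquad\text{so}\qquad Y_1 = a^1_1 X_1 + a^2_1 X_2, \quad Y_2 = a^1_2 X_1 + a^2_2 X_2.$$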

Contravariant transformation


A vector v in V is expressed uniquely as a linear combination of the elements of the f basis as

$$v = \sum_i v^i[f]\, X_i, \tag{2}$$

where v^i[f] are elements of the field S known as the components of v in the f basis. Denote the column vector of components of v by v[f]:

$$\mathbf{v}[f] = \begin{bmatrix} v^1[f] \\ v^2[f] \\ \vdots \\ v^n[f] \end{bmatrix}$$

so that (2) can be rewritten as a matrix product

$$v = f\,\mathbf{v}[f].$$

The vector v may also be expressed in terms of the f′ basis, so that

$$v = f'\,\mathbf{v}[f'].$$

However, since the vector v itself is invariant under the choice of basis,

$$f\,\mathbf{v}[f] = f'\,\mathbf{v}[f'].$$

The invariance of v combined with the relationship (1) between f and f′ implies that

$$f\,\mathbf{v}[f] = fA\,\mathbf{v}[fA],$$

giving the transformation rule

$$\mathbf{v}[f'] = \mathbf{v}[fA] = A^{-1}\,\mathbf{v}[f].$$

In terms of components,

$$v^i[fA] = \sum_j \tilde{a}^i_j\, v^j[f],$$

where the coefficients $\tilde{a}^i_j$ are the entries of the inverse matrix of A.

Because the components of the vector v transform with the inverse of the matrix A, these components are said to transform contravariantly under a change of basis.
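
A quick numerical spot-check of this rule (Python/NumPy, with an arbitrarily chosen basis and an assumed-invertible random matrix A):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))      # columns: the f basis (assumed invertible)
A = rng.standard_normal((3, 3))      # assumed-invertible change of basis
Y = X @ A                            # columns: the f' = fA basis

v_f = np.array([1.0, -2.0, 0.5])     # components v[f]
v_fA = np.linalg.solve(A, v_f)       # contravariant rule: v[fA] = A^(-1) v[f]

# Both component lists reconstruct the same geometric vector.
assert np.allclose(X @ v_f, Y @ v_fA)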

The way A relates the two pairs is depicted in the following informal diagram using arrows. The reversal of the arrow indicates the contravariant change:

$$f \;\xrightarrow{\;A\;}\; fA, \qquad \mathbf{v}[f] \;\xleftarrow{\;A\;}\; \mathbf{v}[fA].$$

Covariant transformation


A linear functional α on V is expressed uniquely in terms of its components (elements in S) in the f basis as

$$\alpha(X_i) = \alpha_i[f], \qquad i = 1, 2, \dots, n.$$

These components are the action of α on the basis vectors X_i of the f basis.

Under the change of basis from f to f′ (via (1)), the components transform so that

$$\alpha_i[fA] = \alpha(Y_i) = \alpha\Big(\sum_j a^j_i\, X_j\Big) = \sum_j a^j_i\,\alpha_j[f]. \tag{3}$$

Denote the row vector of components of α by α[f]:

$$\boldsymbol{\alpha}[f] = \begin{bmatrix} \alpha_1[f] & \alpha_2[f] & \cdots & \alpha_n[f] \end{bmatrix}$$

so that (3) can be rewritten as the matrix product

$$\boldsymbol{\alpha}[fA] = \boldsymbol{\alpha}[f]\,A.$$

Because the components of the linear functional α transform with the matrix A, these components are said to transform covariantly under a change of basis.

The way A relates the two pairs is depicted in the following informal diagram using arrows. A covariant relationship is indicated since the arrows travel in the same direction:

$$f \;\xrightarrow{\;A\;}\; fA, \qquad \boldsymbol{\alpha}[f] \;\xrightarrow{\;A\;}\; \boldsymbol{\alpha}[fA].$$

Had a column vector representation been used instead, the transformation law would be the transpose

$$\boldsymbol{\alpha}^{\mathrm T}[fA] = A^{\mathrm T}\,\boldsymbol{\alpha}^{\mathrm T}[f].$$

Coordinates


The choice of basis f on the vector space V defines uniquely a set of coordinate functions on V, by means of

$$x^i(v) = v^i[f].$$

The coordinates on V are therefore contravariant in the sense that

$$x^i[fA] = \sum_k \tilde{a}^i_k\, x^k[f].$$

Conversely, a system of n quantities v^i that transform like the coordinates x^i on V defines a contravariant vector (or simply vector). A system of n quantities that transform oppositely to the coordinates is then a covariant vector (or covector).

This formulation of contravariance and covariance is often more natural in applications in which there is a coordinate space (a manifold) on which vectors live as tangent vectors or cotangent vectors. Given a local coordinate system x^i on the manifold, the reference axes for the coordinate system are the vector fields

$$X_1 = \frac{\partial}{\partial x^1}, \quad \dots, \quad X_n = \frac{\partial}{\partial x^n}.$$

This gives rise to the frame f = (X_1, ..., X_n) at every point of the coordinate patch.

If y^i is a different coordinate system and

$$Y_1 = \frac{\partial}{\partial y^1}, \quad \dots, \quad Y_n = \frac{\partial}{\partial y^n},$$

then the frame f′ is related to the frame f by the inverse of the Jacobian matrix of the coordinate transition:

$$f' = f\,J^{-1}, \qquad J = \left(\frac{\partial y^i}{\partial x^j}\right)_{i,j=1}^{n}.$$

Or, in indices,

$$\frac{\partial}{\partial y^i} = \sum_j \frac{\partial x^j}{\partial y^i}\,\frac{\partial}{\partial x^j}.$$

A tangent vector is by definition a vector that is a linear combination of the coordinate partials $\partial/\partial x^i$. Thus a tangent vector is defined by

$$v = \sum_i v^i[f]\, X_i = f\,\mathbf{v}[f].$$

Such a vector is contravariant with respect to change of frame. Under changes in the coordinate system, one has

$$\mathbf{v}[f'] = \mathbf{v}[f\,J^{-1}] = J\,\mathbf{v}[f].$$

Therefore, the components of a tangent vector transform via

$$v^i[f'] = \sum_j \frac{\partial y^i}{\partial x^j}\, v^j[f].$$

Accordingly, a system of n quantities v^i depending on the coordinates that transform in this way on passing from one coordinate system to another is called a contravariant vector.
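
A sketch of this transformation rule (Python/NumPy), using the polar-to-Cartesian change of coordinates as an assumed example:

import numpy as np

def jacobian_cart_wrt_polar(r, theta):
    # d(x, y)/d(r, theta) for x = r cos(theta), y = r sin(theta)
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, np.pi / 3
v_polar = np.array([0.3, -0.1])   # tangent-vector components (v^r, v^theta)

J = jacobian_cart_wrt_polar(r, theta)
v_cart = J @ v_polar              # contravariant: new^i = (dy^i/dx^j) old^j

# Consistency check: transforming back with the inverse Jacobian recovers v.
assert np.allclose(np.linalg.solve(J, v_cart), v_polar)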

Covariant and contravariant components of a vector with a metric

Covariant and contravariant components of a vector when the basis is not orthogonal.

In a finite-dimensional vector space V over a field K with a non-degenerate symmetric bilinear form g : V × V → K (which may be referred to as the metric tensor), there is little distinction between covariant and contravariant vectors, because the bilinear form allows covectors to be identified with vectors. That is, a vector v uniquely determines a covector α via

$$\alpha(w) = g(v, w)$$

for all vectors w. Conversely, each covector α determines a unique vector v by this equation. Because of this identification of vectors with covectors, one may speak of the covariant components or contravariant components of a vector; they are just representations of the same vector using the reciprocal basis.

Given a basis f = (X_1, ..., X_n) of V, there is a unique reciprocal basis f# = (Y^1, ..., Y^n) of V determined by requiring that

$$g(Y^i, X_j) = \delta^i_j,$$

the Kronecker delta. In terms of these bases, any vector v can be written in two ways:

$$v = \sum_i v^i[f]\, X_i = \sum_i v_i[f]\, Y^i.$$

The components v^i[f] are the contravariant components of the vector v in the basis f, and the components v_i[f] are the covariant components of v in the basis f. The terminology is justified because under a change of basis,

$$\mathbf{v}[fA] = A^{-1}\,\mathbf{v}[f], \qquad \mathbf{v}^{\#}[fA] = A^{\mathrm T}\,\mathbf{v}^{\#}[f],$$

where A is an invertible matrix, and the matrix transpose has its usual meaning.
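
A numerical sketch of this identification (Python/NumPy, with an arbitrarily chosen non-orthogonal basis):

import numpy as np

# Column j of E is the basis vector X_j (an assumed non-orthogonal basis).
E = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 3.0]])

g = E.T @ E                       # Gram matrix of the bilinear form: g_ij = X_i . X_j
E_dual = E @ np.linalg.inv(g)     # column i: reciprocal basis vector Y^i

# Reciprocity: Y^i . X_j = delta^i_j
assert np.allclose(E_dual.T @ E, np.eye(3))

v = np.array([1.0, 2.0, -1.0])    # a vector, in ambient coordinates
v_contra = np.linalg.solve(E, v)  # contravariant components: v = sum_i v^i X_i
v_co = E.T @ v                    # covariant components: v_i = g(v, X_i)

# Lowering the index with the metric gives exactly the covariant components.
assert np.allclose(g @ v_contra, v_co)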

Euclidean plane


In the Euclidean plane, the dot product allows for vectors to be identified with covectors. If $(e_1, e_2)$ is a basis, then the dual basis $(e^1, e^2)$ satisfies

$$e^1 \cdot e_1 = 1, \quad e^1 \cdot e_2 = 0, \quad e^2 \cdot e_1 = 0, \quad e^2 \cdot e_2 = 1.$$

Thus, e^1 and e_2 are perpendicular to each other, as are e^2 and e_1, and the lengths of e^1 and e^2 are normalized against e_1 and e_2, respectively.

Example


For example,[6] suppose that we are given a basis e_1, e_2 consisting of a pair of vectors making a 45° angle with one another, such that e_1 has length 2 and e_2 has length 1. Then the dual basis vectors are given as follows:

  • e^2 is the result of rotating e_1 through an angle of 90° (where the sense is measured by assuming the pair e_1, e_2 to be positively oriented), and then rescaling so that e^2 ⋅ e_2 = 1 holds.
  • e^1 is the result of rotating e_2 through an angle of −90° (in the same sense), and then rescaling so that e^1 ⋅ e_1 = 1 holds.

Applying these rules, we find

$$e^1 = \tfrac{1}{2}\, e_1 - \tfrac{1}{\sqrt{2}}\, e_2$$

and

$$e^2 = -\tfrac{1}{\sqrt{2}}\, e_1 + 2\, e_2.$$

Thus the change of basis matrix in going from the original basis to the reciprocal basis is

$$A = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{\sqrt{2}} \\[2pt] -\tfrac{1}{\sqrt{2}} & 2 \end{pmatrix},$$

since

$$A^{-1} = \begin{pmatrix} e_1 \cdot e_1 & e_1 \cdot e_2 \\ e_2 \cdot e_1 & e_2 \cdot e_2 \end{pmatrix} = \begin{pmatrix} 4 & \sqrt{2} \\ \sqrt{2} & 1 \end{pmatrix}.$$

For instance, a vector v with contravariant components v^1, v^2 in the basis e_1, e_2 has covariant components obtained by equating the two expressions for the vector v:

$$v = v^1 e_1 + v^2 e_2 = v_1 e^1 + v_2 e^2,$$

so (taking the dot product with e_1 and e_2 in turn)

$$v_1 = 4\,v^1 + \sqrt{2}\,v^2, \qquad v_2 = \sqrt{2}\,v^1 + v^2.$$
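
This worked example can be spot-checked numerically; the sketch below (Python/NumPy) rebuilds the dual basis directly from the Gram matrix:

import numpy as np

e1 = np.array([2.0, 0.0])                              # length 2
e2 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])  # length 1, at 45 degrees

E = np.column_stack([e1, e2])
g = E.T @ E                        # Gram matrix [[4, sqrt(2)], [sqrt(2), 1]]
E_dual = E @ np.linalg.inv(g)      # columns: e^1, e^2

# Dual-basis property: e^i . e_j = delta^i_j
assert np.allclose(E_dual.T @ E, np.eye(2))

# Coefficients of the dual basis in the original basis match the matrix A above.
A = np.linalg.inv(g)
assert np.allclose(A, [[0.5, -1 / np.sqrt(2)],
                       [-1 / np.sqrt(2), 2.0]])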

Three-dimensional Euclidean space


In the three-dimensional Euclidean space, one can also determine explicitly the dual basis to a given set of basis vectors e_1, e_2, e_3 of E^3 that are not necessarily assumed to be orthogonal nor of unit norm. The dual basis vectors are:

$$e^1 = \frac{e_2 \times e_3}{e_1 \cdot (e_2 \times e_3)}, \qquad e^2 = \frac{e_3 \times e_1}{e_2 \cdot (e_3 \times e_1)}, \qquad e^3 = \frac{e_1 \times e_2}{e_3 \cdot (e_1 \times e_2)}.$$

Even when the e_i and e^i are not orthonormal, they are still mutually reciprocal:

$$e^i \cdot e_j = \delta^i_j.$$

Then the contravariant components of any vector v can be obtained by the dot product of v with the dual basis vectors:

$$v^1 = v \cdot e^1, \qquad v^2 = v \cdot e^2, \qquad v^3 = v \cdot e^3.$$

Likewise, the covariant components of v can be obtained from the dot product of v with basis vectors, viz.

$$v_1 = v \cdot e_1, \qquad v_2 = v \cdot e_2, \qquad v_3 = v \cdot e_3.$$

Then v can be expressed in two (reciprocal) ways, viz.

$$v = v^1 e_1 + v^2 e_2 + v^3 e_3$$

or

$$v = v_1 e^1 + v_2 e^2 + v_3 e^3.$$

Combining the above relations, we have

$$v_i = v \cdot e_i = (e_i \cdot e_j)\, v^j, \qquad v^i = v \cdot e^i = (e^i \cdot e^j)\, v_j,$$

and we can convert between the basis and dual basis with

$$e_i = (e_i \cdot e_j)\, e^j$$

and

$$e^i = (e^i \cdot e^j)\, e_j.$$

If the basis vectors are orthonormal, then they are the same as the dual basis vectors.
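
A sketch verifying these cross-product formulas numerically (Python/NumPy, with an assumed non-orthogonal basis):

import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([0.0, 1.0, 2.0])    # together: a non-orthogonal basis

vol = e1 @ np.cross(e2, e3)       # scalar triple product e1 . (e2 x e3)
d1 = np.cross(e2, e3) / vol       # e^1
d2 = np.cross(e3, e1) / vol       # e^2
d3 = np.cross(e1, e2) / vol       # e^3

E = np.column_stack([e1, e2, e3])
D = np.column_stack([d1, d2, d3])
assert np.allclose(D.T @ E, np.eye(3))           # e^i . e_j = delta^i_j

v = np.array([2.0, -1.0, 0.5])
v_contra = np.array([v @ d1, v @ d2, v @ d3])    # v^i = v . e^i
v_co = np.array([v @ e1, v @ e2, v @ e3])        # v_i = v . e_i
assert np.allclose(E @ v_contra, v)              # v = v^i e_i
assert np.allclose(D @ v_co, v)                  # v = v_i e^i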

Vector spaces of any dimension


The following applies to any vector space of dimension n equipped with a non-degenerate, commutative and distributive dot product (i.e. a non-degenerate symmetric bilinear form), and thus also to Euclidean spaces of any dimension.

All indices in the formulas run from 1 to n. The Einstein notation for the implicit summation of the terms with the same upstairs (contravariant) and downstairs (covariant) indices is followed.

The historical and geometrical meaning of the terms contravariant and covariant will be explained at the end of this section.

Definitions

  1. Covariant basis of a vector space of dimension n: any linearly independent basis $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ for which in general $\mathbf{e}_i \cdot \mathbf{e}_j \neq \delta_{ij}$, i.e. not necessarily orthonormal (D.1).
  2. Contravariant components of a vector: $\mathbf{v} = v^i\,\mathbf{e}_i$ (D.2).
  3. Dual (contravariant) basis of a vector space of dimension n: the basis $\{\mathbf{e}^1, \dots, \mathbf{e}^n\}$ defined by $\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j$ (D.3).
  4. Covariant components of a vector: $\mathbf{v} = v_i\,\mathbf{e}^i$ (D.4).
  5. Components of the covariant metric tensor: $g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j$; the metric tensor can be considered a square matrix, since it only has two covariant indices: $(g_{ij})$; by the commutative property of the dot product, the $g_{ij}$ are symmetric (D.5).
  6. Components of the contravariant metric tensor: $g^{ij}$; these are the elements of the inverse of the covariant metric tensor/matrix $(g_{ij})$, and by the properties of the inverse of a symmetric matrix, they are also symmetric (D.6).

Corollaries

  • $g^{ik}\,g_{kj} = \delta^i_j$ (1).
    Proof: this follows from the properties of the inverse matrix (D.6).
  • $\mathbf{e}^i = g^{ij}\,\mathbf{e}_j$ (2).
    Proof: let's suppose that $\mathbf{e}^i = c^{ij}\,\mathbf{e}_j$; we will show that $c^{ij} = g^{ij}$. Taking the dot product of both sides with $\mathbf{e}_k$:
    $\delta^i_k = c^{ij}\,g_{jk}$ (D.3, D.5); multiplying both sides by $g^{kl}$: $c^{il} = g^{il}$ (1).
  • $\mathbf{e}^i \cdot \mathbf{e}^j = g^{ij}$ (3).
    Proof: $\mathbf{e}^i \cdot \mathbf{e}^j = g^{ik}\,\mathbf{e}_k \cdot \mathbf{e}^j = g^{ik}\,\delta^j_k = g^{ij}$ (2, D.3).
  • $\mathbf{e}_i = g_{ij}\,\mathbf{e}^j$ (4).
    Proof: let's suppose that $\mathbf{e}_i = c_{ij}\,\mathbf{e}^j$; we will show that $c_{ij} = g_{ij}$. Taking the dot product of both sides with $\mathbf{e}^k$: $\delta^k_i = c_{ij}\,g^{jk}$ (D.3, 3); multiplying both sides by $g_{kl}$: $c_{il} = g_{il}$ (1).
  • $v_i = g_{ij}\,v^j$ (5).
    Proof: $\mathbf{v} = v^j\,\mathbf{e}_j = v^j g_{jk}\,\mathbf{e}^k$ (4), and comparing with $\mathbf{v} = v_k\,\mathbf{e}^k$ (D.4) gives $v_k = g_{jk}\,v^j$.
  • $v^i = g^{ij}\,v_j$ (6).
    Proof: specular to (5).
  • $v_i = \mathbf{v} \cdot \mathbf{e}_i$ (7).
    Proof: $\mathbf{v} \cdot \mathbf{e}_i = v^j\,\mathbf{e}_j \cdot \mathbf{e}_i = v^j g_{ji} = v_i$ (D.5, 5).
  • $v^i = \mathbf{v} \cdot \mathbf{e}^i$ (8).
    Proof: specular to (7).
  • $\mathbf{v} \cdot \mathbf{w} = v^i\,w_i$ (9).
    Proof: $\mathbf{v} \cdot \mathbf{w} = v^i\,\mathbf{e}_i \cdot w_j\,\mathbf{e}^j = v^i w_j\,\delta^j_i = v^i w_i$ (D.2, D.4, D.3).
  • $\mathbf{v} \cdot \mathbf{w} = v_i\,w^i$ (10).
    Proof: specular to (9).
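
These identities lend themselves to a numerical spot-check; below is a sketch (Python/NumPy, arbitrarily chosen basis, corollary numbers in the comments):

import numpy as np

E = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])          # column i: covariant basis vector e_i
g_co = E.T @ E                           # g_ij = e_i . e_j            (D.5)
g_contra = np.linalg.inv(g_co)           # g^ij: inverse matrix        (D.6)
E_dual = E @ g_contra                    # e^i = g^ij e_j              (2)

assert np.allclose(E_dual.T @ E, np.eye(3))       # e^i . e_j = delta   (D.3)
assert np.allclose(E_dual.T @ E_dual, g_contra)   # e^i . e^j = g^ij    (3)

v = np.array([1.0, -2.0, 3.0])
v_contra = np.linalg.solve(E, v)         # v = v^i e_i                 (D.2)
v_co = E.T @ v                           # v_i = v . e_i               (7)
assert np.allclose(g_co @ v_contra, v_co)         # v_i = g_ij v^j      (5)
assert np.allclose(g_contra @ v_co, v_contra)     # v^i = g^ij v_j      (6)

w = np.array([0.5, 0.0, -1.0])
assert np.isclose(v @ w, v_contra @ (E.T @ w))    # v . w = v^i w_i     (9)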

Historical and geometrical meaning

Aid for explaining the geometrical meaning of covariant and contravariant vector components.

Considering this figure for the case of a Euclidean space with $n = 2$, since $\mathbf{v} = v^1\mathbf{e}_1 + v^2\mathbf{e}_2$ (D.2), if we want to express $\mathbf{v}$ in terms of the covariant basis, we have to multiply the basis vectors by the coefficients $v^i$.

With $\mathbf{v}$, and thus each product $v^i\mathbf{e}_i$, fixed, if the module of $\mathbf{e}_i$ increases, the value of the component $v^i$ decreases, and that's why they're called contra-variant (with respect to the variation of the basis vectors' module).

Symmetrically, corollary (7) states that the components $v_i$ equal the dot product between the vector and the covariant basis vectors, and since this is directly proportional to the basis vectors' module, they're called co-variant.

If we consider the dual (contravariant) basis, the situation is perfectly specular: the covariant components are contra-variant with respect to the module of the dual basis vectors, while the contravariant components are co-variant.

So in the end it all boils down to a matter of convention: historically the first non-orthonormal basis of the vector space of choice was called "covariant", its dual basis "contravariant", and the corresponding components named specularly.

If the covariant basis becomes orthonormal, the dual contravariant basis aligns with it and the covariant components collapse into the contravariant ones, which is the most familiar situation when dealing with geometrical Euclidean vectors. $(g_{ij})$ and $(g^{ij})$ both become the identity matrix $I$, and:

$$v_i = v^i.$$

If the metric is non-Euclidean but, for instance, Minkowskian as in the theories of special and general relativity, the bases are never orthonormal, even in the case of special relativity, where $(g_{\mu\nu})$ and $(g^{\mu\nu})$ become, for example, the Minkowski metric $\eta = \operatorname{diag}(-1, +1, +1, +1)$ (in one common sign convention). In this scenario, the covariant and contravariant components always differ.
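
A sketch of this difference (Python/NumPy, assuming the diag(−1, +1, +1, +1) sign convention mentioned above):

import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, signature (-, +, +, +)
v_contra = np.array([2.0, 1.0, 0.0, 0.0]) # contravariant components v^mu
v_co = eta @ v_contra                     # lowered index: v_mu = [-2, 1, 0, 0]

# Covariant and contravariant components differ (the time component flips sign),
# yet their pairing is an invariant scalar.
assert not np.allclose(v_co, v_contra)
assert np.isclose(v_contra @ v_co, -3.0)  # -(2**2) + 1**2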

Use in tensor analysis


The distinction between covariance and contravariance is particularly important for computations with tensors, which often have mixed variance. This means that they have both covariant and contravariant components, or both vector and covector components. The valence of a tensor is the number of covariant and contravariant terms, and in Einstein notation, covariant components have lower indices, while contravariant components have upper indices. The duality between covariance and contravariance intervenes whenever a vector or tensor quantity is represented by its components, although modern differential geometry uses more sophisticated index-free methods to represent tensors.

In tensor analysis, a covariant vector varies more or less reciprocally to a corresponding contravariant vector. Expressions for lengths, areas and volumes of objects in the vector space can then be given in terms of tensors with covariant and contravariant indices. Under simple expansions and contractions of the coordinates, the reciprocity is exact; under affine transformations the components of a vector intermingle on going between covariant and contravariant expression.

On a manifold, a tensor field will typically have multiple, upper and lower indices, where Einstein notation is widely used. When the manifold is equipped with a metric, covariant and contravariant indices become very closely related to one another. Contravariant indices can be turned into covariant indices by contracting with the metric tensor. The reverse is possible by contracting with the (matrix) inverse of the metric tensor. Note that in general, no such relation exists in spaces not endowed with a metric tensor. Furthermore, from a more abstract standpoint, a tensor is simply "there" and its components of either kind are only calculational artifacts whose values depend on the chosen coordinates.

The explanation in geometric terms is that a general tensor will have contravariant indices as well as covariant indices, because it has parts that live in the tangent bundle as well as the cotangent bundle.

A contravariant vector is one which transforms like $\dfrac{dx^\mu}{d\tau}$, where $x^\mu$ are the coordinates of a particle at its proper time $\tau$. A covariant vector is one which transforms like $\dfrac{\partial \varphi}{\partial x^\mu}$, where $\varphi$ is a scalar field.

Algebra and geometry


In category theory, there are covariant functors and contravariant functors. The assignment of the dual space to a vector space is a standard example of a contravariant functor. Contravariant (resp. covariant) vectors are contravariant (resp. covariant) functors from a $GL(n)$-torsor to the fundamental representation of $GL(n)$. Similarly, tensors of higher degree are functors with values in other representations of $GL(n)$. However, some constructions of multilinear algebra are of "mixed" variance, which prevents them from being functors.

In differential geometry, the components of a vector relative to a basis of the tangent bundle are covariant if they change with the same linear transformation as a change of basis. They are contravariant if they change by the inverse transformation. This is sometimes a source of confusion for two distinct but related reasons. The first is that vectors whose components are covariant (called covectors or 1-forms) actually pull back under smooth functions, meaning that the operation assigning the space of covectors to a smooth manifold is actually a contravariant functor. Likewise, vectors whose components are contravariant push forward under smooth mappings, so the operation assigning the space of (contravariant) vectors to a smooth manifold is a covariant functor. Secondly, in the classical approach to differential geometry, it is not bases of the tangent bundle that are the most primitive object, but rather changes in the coordinate system. Vectors with contravariant components transform in the same way as changes in the coordinates (because these actually change oppositely to the induced change of basis). Likewise, vectors with covariant components transform in the opposite way as changes in the coordinates.


Notes

  1. ^ A basis f may here profitably be viewed as a linear isomorphism from R^n to V.

Citations

  1. ^ Misner, C.; Thorne, K.S.; Wheeler, J.A. (1973). Gravitation. W.H. Freeman. ISBN 0-7167-0344-0.
  2. ^ Frankel, Theodore (2012). The geometry of physics : an introduction. Cambridge: Cambridge University Press. p. 42. ISBN 978-1-107-60260-1. OCLC 739094283.
  3. ^ Sylvester, J.J. (1851). "On the general theory of associated algebraical forms". Cambridge and Dublin Mathematical Journal. Vol. 6. pp. 289–293.
  4. ^ Sylvester, J.J. (16 February 2012). The Collected Mathematical Papers of James Joseph Sylvester. Vol. 3, 1870–1883. Cambridge University Press. ISBN 978-1107661431. OCLC 758983870.
  5. ^ J A Schouten (1954). Ricci calculus (2 ed.). Springer. p. 6.
  6. ^ Bowen, Ray; Wang, C.-C. (2008) [1976]. "§3.14 Reciprocal Basis and Change of Basis". Introduction to Vectors and Tensors. Dover. pp. 78, 79, 81. ISBN 9780486469140.

from Grokipedia
In the context of multilinear algebra and tensor analysis, the concepts of covariance and contravariance distinguish between two types of vectors based on their transformation behavior under coordinate changes. Contravariant vectors, often simply called vectors, have components that transform linearly with the Jacobian matrix of the coordinate transformation, ensuring that their directional properties are preserved across different bases. In contrast, covariant vectors, also known as covectors or dual vectors, transform with the inverse of that matrix, reflecting their role as linear functionals that map vectors to scalars while maintaining invariance in their action. This duality is fundamental to tensor analysis, where contravariant components are denoted by upper indices (e.g., $V^\mu$) and covariant components by lower indices (e.g., $V_\mu$), facilitating the description of physical laws in general coordinate systems.

The transformation rules arise from the need to ensure that inner products and other multilinear operations remain consistent regardless of the chosen coordinates. For a contravariant vector $\mathbf{V}$, if the coordinates change via $x'^\alpha = x'^\alpha(x^\beta)$, the components satisfy $V'^\alpha = \frac{\partial x'^\alpha}{\partial x^\beta} V^\beta$, mirroring the transformation of the coordinate differentials themselves. Covariant vectors, such as the gradient of a scalar field, obey $V'_\alpha = \frac{\partial x^\beta}{\partial x'^\alpha} V_\beta$, which aligns with the change in the dual basis to preserve the scalar product $V^\alpha V_\alpha$. In Euclidean spaces with orthogonal coordinates, the distinction may appear subtle, but it becomes essential in curvilinear or non-orthogonal systems, such as those in general relativity.

These notions extend to tensors of higher rank, where each contravariant index follows the direct transformation and each covariant index the inverse, enabling the formulation of quantities like the metric tensor $g_{\mu\nu}$ that raises and lowers indices to interconvert between the two types. In physics, contravariant vectors often represent displacements or velocities, while covariant ones describe forces or gradients, ensuring that equations like Maxwell's equations or Einstein's field equations are form-invariant. The Einstein summation convention, implied over repeated indices, further streamlines these expressions, underscoring the elegance of this framework in modern differential geometry and theoretical physics.

Fundamentals

Introduction

Covariance and contravariance of vectors refer to the distinct transformation behaviors exhibited by certain mathematical objects under changes in coordinate systems, setting them apart from scalars, which remain invariant. These properties are fundamental to tensor calculus in mathematics and physics, enabling the consistent description of quantities like displacements and gradients in varying frames. Unlike scalars, which do not depend on the choice of coordinates, vectors must adjust their components to reflect the geometry of the underlying space, ensuring that physical or geometric relations hold universally.

The origins of these concepts trace back to the late 19th and early 20th centuries, emerging from efforts to develop a coordinate-independent calculus for curved spaces and general coordinate systems. Italian mathematicians Gregorio Ricci-Curbastro and Tullio Levi-Civita formalized them within the framework of absolute differential calculus, or tensor calculus, in their seminal 1900–1901 publication. This work introduced the systematic treatment of covariant and contravariant entities as part of a broader formalism, which proved crucial for applications in differential geometry and physics, including the eventual formulation of general relativity by Albert Einstein.

Conceptually, contravariant vectors align with the transformation of the coordinates themselves, much like a displacement vector whose components stretch or contract with the grid lines in a new coordinate chart to maintain its directional magnitude. Covariant vectors, on the other hand, transform like the dual basis, as seen in the gradient of a scalar function, which pairs with displacements to yield coordinate-independent changes in the scalar. This duality ensures that the inner product between a contravariant and a covariant vector remains a scalar, preserving essential invariances.

Such distinctions become indispensable in non-Cartesian coordinate systems or on curved manifolds, where basis vectors vary in scale and orientation, requiring vector components to adapt appropriately to avoid distortions in physical interpretations. For instance, in spherical or general curvilinear coordinates, failing to account for these transformation properties would misrepresent quantities like velocity or force fields. The detailed rules governing these transformations are addressed in subsequent sections.

Contravariant vectors

A contravariant vector $\mathbf{v}$ at a point in a manifold is a tangent vector that can be expressed in terms of a local coordinate basis as $\mathbf{v} = v^i \mathbf{e}_i$, where $\{\mathbf{e}_i\}$ denotes the basis vectors associated with the coordinates $\{x^i\}$ and $v^i$ are the contravariant components (with upper indices). This representation emphasizes that the vector is a linear combination of the basis vectors, with the components scaling according to the basis choice.

Under a change of coordinates from $\{x^i\}$ to $\{x'^j\}$, the components of a contravariant vector transform linearly according to the rule $v'^j = \frac{\partial x'^j}{\partial x^i} v^i$, where the partial derivatives form the Jacobian matrix of the transformation. This law ensures that the vector itself remains unchanged as an abstract entity, while its components adjust to reflect the new basis.

In physics, common examples of contravariant vectors include velocity, which describes the rate of change of position along coordinate directions; displacement, representing infinitesimal changes $dx^i$; and four-velocity, particularly in relativistic contexts, where it aligns with the coordinate differentials. These quantities transform with the coordinate grid, meaning their components increase if the basis vectors shorten and decrease if the basis vectors lengthen.

Geometrically, a contravariant vector can be interpreted as an arrow whose components stretch or contract in proportion to the spacing of the coordinate grid lines, preserving its intrinsic direction and magnitude across different coordinate systems. This behavior contrasts with that of covariant vectors, which act on the dual space.

Covariant vectors

Covariant vectors, also known as covectors, are linear functionals on the tangent space at a point of a smooth manifold, belonging to the cotangent space $T_p^*M$, which is the dual vector space to the tangent space $T_pM$. Formally, a covariant vector $\omega \in T_p^*M$ satisfies $\omega(av + bw) = a\,\omega(v) + b\,\omega(w)$ for all scalars $a, b \in \mathbb{R}$ and tangent vectors $v, w \in T_pM$. In a local coordinate chart with basis vectors $\{\partial/\partial x^i\}$ for $T_pM$, the covariant vector is expressed as $\omega = \omega_i \mathbf{e}^i$, where $\{\mathbf{e}^i\}$ (often denoted $dx^i$) forms the dual basis satisfying the duality relation $\mathbf{e}^i(\partial/\partial x^j) = \delta^i_j$, the Kronecker delta. This representation ensures that the components $\omega_i$ encode how $\omega$ evaluates tangent vectors to yield scalars.

Under a change of coordinates from $(x^i)$ to $(x'^j)$, the components of a covariant vector transform in the same way as the basis vectors, and inversely to the contravariant components, according to the law $\omega'_j = \frac{\partial x^i}{\partial x'^j} \omega_i$. This transformation, involving the inverse Jacobian matrix of partial derivatives, preserves the intrinsic value of $\omega(v)$ for any $v$, as the scaling of the covector components compensates for the scaling of the tangent-vector components.

Physically, covariant vectors arise as the differentials of scalar fields; for a smooth function $f: M \to \mathbb{R}$, the covector $df_p \in T_p^*M$ is defined by $df_p(v) = v(f)$, the directional derivative of $f$ at $p$ along $v$, with components $df_p = \frac{\partial f}{\partial x^i}(p)\, dx^i$. In mechanics and field theory, forces can also manifest as covariant vectors in certain formulations, such as force components $F_\mu$ or the electric field $E_\mu = -\frac{\partial V}{\partial x^\mu}$ derived from a potential $V$, where the lower index denotes the covariant nature.

Geometrically, the dual-basis interpretation highlights that dual basis vectors contract when the tangent basis expands: under a linear stretching of coordinates, the dual basis $\mathbf{e}^i$ scales by the reciprocal factors to maintain the invariant relation $\mathbf{e}^i(\partial/\partial x^j) = \delta^i_j$. This behavior contrasts with that of contravariant components, which scale inversely to the basis so that the vector itself is preserved.

Transformation Laws

Coordinate changes

In differential geometry, a coordinate transformation on a smooth manifold is defined by a diffeomorphism, which is a smooth, bijective map with a smooth inverse, between overlapping coordinate charts. This maps old coordinates $x^i$ on an open set $U \subset M$ to new coordinates $x'^j = x'^j(x)$ on the corresponding set in the image chart, ensuring the transition functions are smooth to preserve the manifold's differentiable structure. Such transformations allow local descriptions of geometric objects in different coordinate systems while maintaining global consistency.

The local behavior of this transformation is captured by the Jacobian matrix, whose entries are the partial derivatives $J^j_i = \frac{\partial x'^j}{\partial x^i}$, representing the linearization of the map at each point. The inverse transformation, from new to old coordinates, has entries $\frac{\partial x^i}{\partial x'^k}$, which form the matrix inverse of $J$, assuming non-degeneracy for the diffeomorphism. This matrix is crucial for propagating differential structures across charts.

Under these coordinate changes, scalar functions remain invariant, meaning their values $f(p)$ at a point $p \in M$ are unchanged regardless of the chart used to express them. Tensors, as multilinear maps on tangent and cotangent spaces, are also invariant as geometric objects, though their components transform according to rules involving the Jacobian and its inverse to ensure coordinate-independent properties. These transformations set the foundation for how vectors and other tensors behave under diffeomorphisms.

A concrete example occurs in the two-dimensional Euclidean plane, where polar coordinates $(r, \theta)$ transform to Cartesian coordinates $(x, y)$ via the diffeomorphism $x = r\cos\theta$, $y = r\sin\theta$. The Jacobian matrix is

$$\begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\[2pt] \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} \end{pmatrix} = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix},$$

with determinant $r$, illustrating how the transformation stretches areas radially. The inverse map from Cartesian to polar coordinates, $r = \sqrt{x^2 + y^2}$, $\theta = \arctan(y/x)$ (on the appropriate branch), has as its Jacobian the matrix inverse of the one above.
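
A quick numerical check of this Jacobian and of the inverse map's Jacobian (a Python/NumPy sketch):

import numpy as np

r, theta = 1.5, 0.7
J = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta),  r * np.cos(theta)]])   # d(x,y)/d(r,theta)

assert np.isclose(np.linalg.det(J), r)                # determinant is r

# The Jacobian of the inverse map, d(r,theta)/d(x,y), is the matrix inverse of J.
x, y = r * np.cos(theta), r * np.sin(theta)
J_inv = np.array([[x / r,      y / r],                # dr/dx,     dr/dy
                  [-y / r**2,  x / r**2]])            # dtheta/dx, dtheta/dy
assert np.allclose(J @ J_inv, np.eye(2))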