Dimension (vector space)

from Wikipedia
A diagram of dimensions 1, 2, 3, and 4

In mathematics, the dimension of a vector space V is the cardinality (i.e., the number of vectors) of a basis of V over its base field.[1][2] It is sometimes called Hamel dimension (after Georg Hamel) or algebraic dimension to distinguish it from other types of dimension.

For every vector space there exists a basis,[a] and all bases of a vector space have equal cardinality;[b] as a result, the dimension of a vector space is uniquely defined. We say $V$ is finite-dimensional if the dimension of $V$ is finite, and infinite-dimensional if its dimension is infinite.

The dimension of the vector space $V$ over the field $F$ can be written as $\dim_F(V)$ or as $[V : F]$, read "dimension of $V$ over $F$". When $F$ can be inferred from context, $\dim(V)$ is typically written.

Examples


The vector space $\mathbb{R}^3$ has $\{(1,0,0), (0,1,0), (0,0,1)\}$ as a standard basis, and therefore $\dim_{\mathbb{R}}(\mathbb{R}^3) = 3$. More generally, $\dim_{\mathbb{R}}(\mathbb{R}^n) = n$, and even more generally, $\dim_F(F^n) = n$ for any field $F$.

The complex numbers $\mathbb{C}$ are both a real and complex vector space; we have $\dim_{\mathbb{R}}(\mathbb{C}) = 2$ and $\dim_{\mathbb{C}}(\mathbb{C}) = 1$. So the dimension depends on the base field.

The only vector space with dimension $0$ is $\{0\}$, the vector space consisting only of its zero element.
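These examples can be checked numerically. The following NumPy sketch (an illustration added here, not part of the original article) computes the dimension of a span as the rank of a matrix whose rows are the spanning vectors; representing $a + bi \in \mathbb{C}$ as the pair $(a, b)$ is an assumed encoding:

```python
import numpy as np

# dim_R(R^3) = 3: the standard basis vectors e_1, e_2, e_3 are linearly
# independent and span R^3, so every basis has exactly 3 elements.
standard_basis = np.eye(3)                  # rows e_1, e_2, e_3
dim = np.linalg.matrix_rank(standard_basis)
print(dim)                                  # 3

# C as a real vector space has basis {1, i}, so dim_R(C) = 2;
# modeling a + bi as the pair (a, b) in R^2 makes this concrete.
one = np.array([1.0, 0.0])                  # represents 1
i_unit = np.array([0.0, 1.0])               # represents i
dim_R_C = np.linalg.matrix_rank(np.vstack([one, i_unit]))
print(dim_R_C)                              # 2
```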

Properties


If $W$ is a linear subspace of $V$, then $\dim(W) \leq \dim(V)$.

To show that two finite-dimensional vector spaces are equal, the following criterion can be used: if $V$ is a finite-dimensional vector space and $W$ is a linear subspace of $V$ with $\dim(W) = \dim(V)$, then $W = V$.

The space $F^n$ has the standard basis $\{e_1, \dots, e_n\}$, where $e_i$ is the $i$-th column of the corresponding identity matrix. Therefore, $F^n$ has dimension $n$.

Any two finite dimensional vector spaces over $F$ with the same dimension are isomorphic. Any bijective map between their bases can be uniquely extended to a bijective linear map between the vector spaces. If $B$ is some set, a vector space with dimension $|B|$ over $F$ can be constructed as follows: take the set $F^{(B)}$ of all functions $f : B \to F$ such that $f(b) = 0$ for all but finitely many $b$ in $B$. These functions can be added and multiplied with elements of $F$ to obtain the desired $F$-vector space.

An important result about dimensions is given by the rank–nullity theorem for linear maps.
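The rank–nullity theorem states that for a linear map given by a matrix $A$, the dimension of the domain equals the rank of $A$ plus the dimension of its null space. A small NumPy check (an added illustration; the matrix is an arbitrary example) verifies this count:

```python
import numpy as np

# Rank-nullity: for T: R^4 -> R^3 given by a 3x4 matrix A,
# dim(domain) = rank(A) + nullity(A).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])      # third row = first + second, so rank 2
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank           # nullity = dim of domain - rank
print(rank, nullity, rank + nullity)  # 2 2 4
```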

If $F/K$ is a field extension, then $F$ is in particular a vector space over $K$. Furthermore, every $F$-vector space $V$ is also a $K$-vector space. The dimensions are related by the formula $\dim_K(V) = \dim_K(F) \, \dim_F(V)$. In particular, every complex vector space of dimension $n$ is a real vector space of dimension $2n$.

Some formulae relate the dimension of a vector space with the cardinality of the base field and the cardinality of the space itself. If $V$ is a vector space over a field $F$ and the dimension of $V$ is denoted by $\dim V$, then:

If $\dim V$ is finite, then $|V| = |F|^{\dim V}$.
If $\dim V$ is infinite, then $|V| = \max(|F|, \dim V)$.

Generalizations


A vector space can be seen as a particular case of a matroid, and in the latter there is a well-defined notion of dimension. The length of a module and the rank of an abelian group both have several properties similar to the dimension of vector spaces.

The Krull dimension of a commutative ring, named after Wolfgang Krull (1899–1971), is defined to be the maximal number of strict inclusions in an increasing chain of prime ideals in the ring.

Trace


The dimension of a vector space may alternatively be characterized as the trace of the identity operator. For instance, $\operatorname{tr}(\operatorname{id}_{\mathbb{R}^2}) = \operatorname{tr}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1 + 1 = 2$. This appears to be a circular definition, but it allows useful generalizations.

Firstly, it allows for a definition of a notion of dimension when one has a trace but no natural sense of basis. For example, one may have an algebra $A$ with maps $\eta : K \to A$ (the inclusion of scalars, called the unit) and a map $\epsilon : A \to K$ (corresponding to trace, called the counit). The composition $\epsilon \circ \eta : K \to K$ is a scalar (being a linear operator on a 1-dimensional space) that corresponds to "trace of identity", and gives a notion of dimension for an abstract algebra. In practice, in bialgebras, this map is required to be the identity, which can be obtained by normalizing the counit by dividing by dimension ($\epsilon := \tfrac{1}{n} \operatorname{tr}$), so in these cases the normalizing constant corresponds to dimension.

Alternatively, it may be possible to take the trace of operators on an infinite-dimensional space; in this case a (finite) trace is defined, even though no (finite) dimension exists, and gives a notion of "dimension of the operator". These fall under the rubric of "trace class operators" on a Hilbert space, or more generally nuclear operators on a Banach space.

A subtler generalization is to consider the trace of a family of operators as a kind of "twisted" dimension. This occurs significantly in representation theory, where the character of a representation is the trace of the representation, hence a scalar-valued function on a group $\chi : G \to K$, whose value on the identity is the dimension of the representation, as a representation sends the identity in the group to the identity matrix: $\chi(1_G) = \operatorname{tr}(I_V) = \dim V$. The other values $\chi(g)$ of the character can be viewed as "twisted" dimensions, and one finds analogs or generalizations of statements about dimensions to statements about characters or representations. A sophisticated example of this occurs in the theory of monstrous moonshine: the $j$-invariant is the graded dimension of an infinite-dimensional graded representation of the monster group, and replacing the dimension with the character gives the McKay–Thompson series for each element of the Monster group.[3]

See also

  • Fractal dimension – Ratio providing a statistical index of complexity variation with scale
  • Krull dimension – In mathematics, dimension of a ring
  • Matroid rank – Maximum size of an independent set of the matroid
  • Rank (linear algebra) – Dimension of the column space of a matrix
  • Topological dimension – Topologically invariant definition of the dimension of a space, also called Lebesgue covering dimension

from Grokipedia
In linear algebra, the dimension of a vector space $V$ is the number of vectors in any basis for $V$, assuming $V$ has a finite basis.[1] This cardinality is independent of the particular basis chosen, ensuring a well-defined measure of the vector space's "size."[2] The dimension of the zero vector space $\{0\}$ is defined to be 0, as it has no nonzero vectors and thus no basis vectors.[3] Vector spaces that cannot be spanned by a finite set of vectors are termed infinite-dimensional, such as the space of all polynomials with real coefficients.[3] The concept of dimension extends naturally to subspaces and quotients, where key properties like the dimension formula hold: for subspaces $U$ and $W$ of $V$, $\dim(U + W) + \dim(U \cap W) = \dim U + \dim W$.[4] This invariance and additivity make dimension a fundamental invariant in linear algebra, facilitating the classification of vector spaces up to isomorphism: two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension.[5] In practice, computing the dimension involves finding the rank of a matrix representation or verifying linear independence and spanning properties of candidate bases.[6] Dimension plays a crucial role in applications beyond pure mathematics, such as in systems of linear equations, where the solution space's dimension equals the number of free variables, and in functional analysis, where infinite-dimensional spaces model phenomena like Hilbert spaces in quantum mechanics.[7] It also underpins theorems like the rank-nullity theorem, which states that for a linear transformation $T : V \to W$, $\dim V = \dim(\ker T) + \dim(\operatorname{im} T)$.[8]

Basic Concepts

Definition

A vector space $ V $ over a field $ F $ is an abelian group under vector addition equipped with a scalar multiplication operation satisfying compatibility axioms, including distributivity and associativity.[9] The dimension of a vector space $ V $ over a field $ F $, denoted $ \dim V $, is defined as the cardinality of any basis for $ V $.[10] All bases of $ V $ have the same cardinality, ensuring the dimension is well-defined.[11] If $ \dim V = n $ is finite, then $ V $ is isomorphic to the coordinate space $ F^n $.[12] In the infinite-dimensional case, $ \dim V $ is an infinite cardinal equal to the cardinality of any basis.[13]

Basis

In a vector space $V$ over a field $F$, a basis is a subset $\{e_1, \dots, e_n\}$ of $V$ that is both linearly independent and spans $V$.[1] This means every vector in $V$ can be expressed as a finite linear combination of the basis vectors, and no nontrivial linear combination of them equals the zero vector. A key characterization of a basis is that it provides a unique representation for every vector in the space: for any $v \in V$, there exist unique scalars $a_1, \dots, a_n \in F$ such that
$$ v = \sum_{i=1}^n a_i e_i. $$
This uniqueness follows directly from the linear independence of the set, ensuring that the coefficients are determined solely by $v$ and the basis.[14] The Steinitz exchange lemma establishes that any two finite bases of a finite-dimensional vector space have the same cardinality. In the general case, including infinite dimensions, all bases of the same vector space have the same cardinality, which underpins the well-definedness of the dimension. Specifically, if one basis spans the space and another is linearly independent, the spanning basis must contain at least as many elements (in the sense of cardinalities), allowing arguments to match sizes.[15] The existence of a basis in any vector space relies on Zorn's lemma, an equivalent form of the axiom of choice. Consider the partially ordered set of all linearly independent subsets of $V$, ordered by inclusion; every chain has an upper bound (its union, which remains linearly independent), so Zorn's lemma guarantees a maximal element, which must span $V$ and thus form a basis.[16]
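The unique-representation property is easy to demonstrate computationally: finding the coordinates of a vector in a basis amounts to solving a linear system with the basis vectors as columns. The basis and vector below are arbitrary illustrative choices, not taken from the article:

```python
import numpy as np

# Unique representation: with basis B = {b1, b2} of R^2, the coordinates
# a1, a2 of v satisfy [b1 b2] @ a = v, and the solution is unique
# because the basis matrix is invertible.
b1 = np.array([1., 1.])
b2 = np.array([1., -1.])
B = np.column_stack([b1, b2])      # basis vectors as columns
v = np.array([3., 1.])
a = np.linalg.solve(B, v)          # unique coordinate vector
print(a)                           # [2. 1.]  since v = 2*b1 + 1*b2
```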

Finite-Dimensional Cases

Examples

The dimension of a vector space can be intuitively understood as the number of independent "degrees of freedom" required to specify any vector in the space. In the real vector space $\mathbb{R}^2$, which models the plane, two coordinates are needed to locate a point, corresponding to two degrees of freedom along perpendicular directions; thus, its dimension is 2. Similarly, $\mathbb{R}^3$, representing three-dimensional space, requires three coordinates and has dimension 3.[17][18] A fundamental example of a finite-dimensional vector space is $\mathbb{R}^n$, the set of all $n$-tuples of real numbers under componentwise addition and scalar multiplication. This space has dimension $n$, with the standard basis consisting of the unit vectors $e_1, \dots, e_n$, where $e_i$ has a 1 in the $i$-th position and 0s elsewhere. Any vector in $\mathbb{R}^n$ can be uniquely expressed as a linear combination of these basis vectors, confirming the dimension.[19][20] Another illustrative case is the vector space of polynomials over a field $F$ with degree less than $n$, denoted $P_n(F)$. This space has dimension $n$, and a basis is given by the monomials $\{1, x, x^2, \dots, x^{n-1}\}$. Each polynomial of degree at most $n-1$ is uniquely determined by its $n$ coefficients, aligning with the basis size.[21][22] The space of all $m \times n$ matrices over a field $F$, denoted $M_{m \times n}(F)$, forms a vector space under matrix addition and scalar multiplication, with dimension $mn$. A basis consists of the matrix units $E_{ij}$ for $1 \leq i \leq m$ and $1 \leq j \leq n$, where $E_{ij}$ has a 1 in the $(i,j)$-entry and 0s elsewhere.
Every matrix is a unique linear combination of these $mn$ basis elements, corresponding to its entries.[23][24] In contrast, the vector space of continuous real-valued functions on the interval $[0,1]$, denoted $C([0,1])$, is infinite-dimensional, as it lacks a finite basis and requires infinitely many parameters to specify arbitrary continuous functions.[25][26]
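The polynomial and matrix examples can be verified by identifying each space with a coordinate space and checking the rank of the resulting basis. This NumPy sketch is an added illustration; the encoding of polynomials as coefficient vectors and matrices as flattened vectors is an assumed convention:

```python
import numpy as np

# P_4(R): polynomials of degree < 4, identified with coefficient vectors
# in R^4; the monomials 1, x, x^2, x^3 map to the standard basis vectors.
monomial_basis = np.eye(4)
dim_P4 = np.linalg.matrix_rank(monomial_basis)
print(dim_P4)                                  # 4

# M_{2x3}(R): flatten each 2x3 matrix unit E_ij to a vector in R^6;
# the 6 matrix units form a basis, so the dimension is 2*3 = 6.
units = [np.eye(1, 6, k).reshape(2, 3) for k in range(6)]
flattened = np.vstack([E.ravel() for E in units])
dim_M23 = np.linalg.matrix_rank(flattened)
print(dim_M23)                                 # 6
```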

Dimension Theorem

The dimension theorem for vector spaces establishes a fundamental relationship between the dimensions of a vector space, one of its subspaces, and the corresponding quotient space. Specifically, if $ V $ is a vector space over a field $ F $ and $ W $ is a subspace of $ V $, then the quotient space $ V/W $ is also a vector space, and the dimensions satisfy
$$ \dim V = \dim W + \dim(V/W). $$
This equality holds whenever the dimensions are defined (finite or infinite), but it is particularly straightforward in the finite-dimensional case.[27][28] To prove this, extend a basis $ \{w_1, \dots, w_n\} $ of $ W $ to a basis $ \{w_1, \dots, w_n, v_1, \dots, v_k\} $ of $ V $, which is possible by the properties of bases in vector spaces. The cosets $ \{v_1 + W, \dots, v_k + W\} $ then form a basis for $ V/W $: they span $ V/W $ because any element $ v + W $ in the quotient can be expressed using the basis of $ V $ and projected modulo $ W $, and they are linearly independent since any linear dependence relation in the quotient would imply a dependence in $ V $ that contradicts the basis extension. Thus, $ \dim(V/W) = k $, and with $ \dim V = n + k $ and $ \dim W = n $, the formula follows.[27][28] When $ V $ is finite-dimensional, the theorem implies that $ V/W $ is also finite-dimensional and $ \dim(V/W) = \dim V - \dim W $, providing a direct way to compute quotient dimensions from subspace dimensions. For instance, if $ W $ has dimension 1 in a space of dimension 3, then $ \dim(V/W) = 2 $, illustrating how the theorem quantifies the "size" reduction by the subspace. This result previews the kernel-image relation in linear transformations, where the dimension of the domain equals the sum of the kernel dimension and the image dimension, though full details appear elsewhere.[27][28]
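Since $ \dim(V/W) = \dim V - \dim W $ in the finite-dimensional case, the quotient dimension can be computed numerically from the rank of a spanning set for the subspace. The subspace below is an arbitrary illustrative choice:

```python
import numpy as np

# dim(V/W) = dim V - dim W for a subspace W of V = R^3:
# here W is the plane spanned by w1 and w2.
w1 = np.array([1., 0., 1.])
w2 = np.array([0., 1., 1.])
dim_V = 3
dim_W = np.linalg.matrix_rank(np.vstack([w1, w2]))   # 2
dim_quotient = dim_V - dim_W
print(dim_quotient)                                  # 1
```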

Key Properties

Invariance

One fundamental property of the dimension of a vector space is its invariance under linear isomorphisms. For finite-dimensional vector spaces $V$ and $W$ over the same field $F$ with $\dim V = \dim W = n < \infty$, there exists a linear isomorphism $T : V \to W$, meaning $V \cong W$.[29] This theorem establishes that the dimension uniquely classifies finite-dimensional vector spaces up to isomorphism.[30] To prove this, select a basis $\{v_1, \dots, v_n\}$ for $V$ and a basis $\{w_1, \dots, w_n\}$ for $W$. Define $T$ on the basis by $T(v_i) = w_i$ for each $i$, and extend linearly to all of $V$ by $T\left(\sum_{i=1}^n a_i v_i\right) = \sum_{i=1}^n a_i w_i$ for scalars $a_i \in F$. Since the images form a basis for $W$, $T$ is bijective and thus an isomorphism.[29] This invariance extends to infinite-dimensional cases using Hamel bases. Two vector spaces over the same field are isomorphic if and only if the cardinalities of their Hamel bases are equal, where the dimension is defined as this cardinality.[30] A key consequence is that all vector spaces of dimension $n$ over a fixed field $F$ are isomorphic to each other and to the standard space $F^n$. However, dimensions depend on the scalar field; for instance, $\mathbb{C}^n$ has dimension $n$ as a complex vector space but dimension $2n$ as a real vector space, so $\mathbb{R}^n \not\cong \mathbb{C}^n$ over $\mathbb{R}$.[2][31]
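The basis-to-basis construction in the proof can be carried out concretely: in standard coordinates, the map sending one basis to another has matrix $C B^{-1}$, where $B$ and $C$ hold the two bases as columns. The bases below are arbitrary illustrative choices:

```python
import numpy as np

# Two 2-dimensional real spaces are isomorphic: mapping basis {b1, b2}
# to basis {c1, c2} defines T with matrix C @ inv(B) in standard coordinates.
B = np.column_stack([[1., 1.], [1., -1.]])   # basis of V (columns)
C = np.column_stack([[2., 0.], [0., 3.]])    # basis of W (columns)
T = C @ np.linalg.inv(B)                     # T(b_i) = c_i, extended linearly
print(np.allclose(T @ B, C))                 # True: each basis vector maps over
print(abs(np.linalg.det(T)) > 0)             # True: T is bijective
```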

Subspace Dimensions

In finite-dimensional vector spaces, the dimensions of subspaces exhibit additive properties when considering their sums and intersections. Let $ V $ be a finite-dimensional vector space over a field $ F $, and let $ U $ and $ W $ be subspaces of $ V $. The sum $ U + W $ is defined as the set of all elements of the form $ u + w $ where $ u \in U $ and $ w \in W $; this forms a subspace of $ V $. Similarly, the intersection $ U \cap W $ consists of elements common to both $ U $ and $ W $, which is also a subspace. The relationship between these dimensions is captured by the fundamental formula
$$ \dim(U + W) = \dim U + \dim W - \dim(U \cap W). $$
This equation, known as Grassmann's formula, quantifies how the dimension of the sum exceeds that of the individual subspaces by accounting for their overlap.[32][33] A rearrangement of this formula yields the Grassmann identity:
$$ \dim(U + W) + \dim(U \cap W) = \dim U + \dim W. $$
This identity highlights the balance between the "new" directions introduced by the sum and the shared directions in the intersection, providing a symmetric view of subspace dimensions. It holds under the finite-dimensional assumption and is a cornerstone for analyzing subspace decompositions.[32] When the subspaces have trivial intersection, i.e., $ U \cap W = \{ 0 \} $, the sum $ U + W $ is called the direct sum, denoted $ U \oplus W $. In this case, every element of $ U \oplus W $ can be uniquely expressed as $ u + w $ with $ u \in U $ and $ w \in W $, and the dimension formula simplifies to
$$ \dim(U \oplus W) = \dim U + \dim W. $$
This additivity is essential for constructing bases of larger spaces from disjoint bases of the summands and follows directly from Grassmann's formula by setting $ \dim(U \cap W) = 0 $. The result can be established using the dimension theorem for quotient spaces, where the intersection serves as the kernel in an appropriate quotient construction.[32][34]
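Grassmann's formula can be checked numerically: $\dim(U + W)$ is the rank of the stacked spanning sets, and the intersection dimension follows by rearranging the identity. The two planes below are illustrative choices:

```python
import numpy as np

# Grassmann's formula in R^3 for two planes U and W:
# dim(U + W) = dim U + dim W - dim(U ∩ W).
U = np.array([[1., 0., 0.],
              [0., 1., 0.]])       # xy-plane, dim 2
W = np.array([[0., 1., 0.],
              [0., 0., 1.]])       # yz-plane, dim 2
dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))   # U + W = R^3, dim 3
dim_intersection = dim_U + dim_W - dim_sum           # the y-axis, dim 1
print(dim_U, dim_W, dim_sum, dim_intersection)       # 2 2 3 1
```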

Rank-Nullity Theorem

The rank-nullity theorem, also known as the dimension theorem for linear maps, states that if $ T: V \to W $ is a linear map between vector spaces with $ V $ finite-dimensional, then
$$ \dim(V) = \dim(\ker T) + \dim(\operatorname{im} T), $$
where $ \ker T $ is the kernel of $ T $ and $ \operatorname{im} T $ is its image.[35] This relation holds regardless of whether $ W $ is finite- or infinite-dimensional, as the theorem depends only on the finite dimensionality of the domain.[8] The dimension of the kernel, denoted $ \operatorname{nullity}(T) $ or $ \dim(\ker T) $, measures the "degeneracy" of the map, while the dimension of the image, denoted $ \operatorname{rank}(T) $ or $ \dim(\operatorname{im} T) $, quantifies the "span" it achieves in the codomain.[35] These terms extend the matrix concepts of nullity (dimension of the null space) and rank (dimension of the column space) to abstract linear maps.[36] To prove the theorem, let $ \{ \mathbf{u}_1, \dots, \mathbf{u}_k \} $ be a basis for $ \ker T $, where $ k = \dim(\ker T) $. Extend this to a basis $ \{ \mathbf{u}_1, \dots, \mathbf{u}_k, \mathbf{v}_1, \dots, \mathbf{v}_m \} $ for $ V $, so $ \dim(V) = k + m $. The images $ T(\mathbf{v}_1), \dots, T(\mathbf{v}_m) $ are linearly independent in $ \operatorname{im} T $ (since any dependence would imply a nontrivial relation in $ V $ involving kernel elements) and span $ \operatorname{im} T $ (as every element is $ T $ of some vector in $ V $, expressible via the basis). Thus, $ \{ T(\mathbf{v}_1), \dots, T(\mathbf{v}_m) \} $ is a basis for $ \operatorname{im} T $, so $ \dim(\operatorname{im} T) = m $, and $ \dim(V) = \dim(\ker T) + \dim(\operatorname{im} T) $.[35] A key application is to invertibility: $ T $ is injective if and only if $ \operatorname{nullity}(T) = 0 $ (i.e., $ \ker T = \{ \mathbf{0} \} $), equivalently $ \operatorname{rank}(T) = \dim(V) $; if in addition $ \dim(W) = \dim(V) $, full rank forces $ T $ to be surjective as well, hence invertible.[35] This follows directly from the theorem, as full rank implies the image has the same dimension as the domain. The result aligns with the dimension theorem applied to the isomorphism $ V / \ker T \cong \operatorname{im} T $.[8]
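The kernel and image bases appearing in the proof can be computed exactly with SymPy's `nullspace` and `columnspace` methods; the matrix below is an arbitrary illustrative choice:

```python
import sympy as sp

# Exact kernel and image bases for T: Q^4 -> Q^3 given by matrix A.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 3, 1, 1]])        # third row = first + second
kernel_basis = A.nullspace()         # basis of ker T
image_basis = A.columnspace()        # basis of im T
# nullity + rank recovers the dimension of the domain, Q^4.
print(len(kernel_basis), len(image_basis))   # 2 2
```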

Infinite-Dimensional Cases

Hamel Bases

In a vector space $V$ over a field $K$, a Hamel basis is a linearly independent subset $B \subseteq V$ such that every vector in $V$ can be uniquely expressed as a finite linear combination of elements from $B$. This algebraic notion of basis generalizes the finite-dimensional case but restricts expansions to finite sums, distinguishing it from topological bases that allow infinite series. The concept was introduced by Georg Hamel in his 1905 paper, where it was used to construct discontinuous solutions to Cauchy's functional equation.[37] The existence of a Hamel basis for any vector space relies on the axiom of choice, typically proved using Zorn's lemma applied to the partially ordered set of linearly independent subsets of $V$. Specifically, Zorn's lemma guarantees a maximal linearly independent subset, which spans $V$ via finite combinations. All Hamel bases of a given vector space have the same cardinality, known as the Hamel dimension of the space.[38] A prominent example is the vector space $\mathbb{R}$ over $\mathbb{Q}$, where a Hamel basis exists but cannot be explicitly constructed without the axiom of choice. Such a basis has cardinality equal to the continuum, $2^{\aleph_0}$, as the dimension must match the cardinality of $\mathbb{R}$ itself, and any countable basis would imply $\mathbb{R}$ is countable, a contradiction.[39] Hamel bases exhibit pathological properties in infinite-dimensional settings, particularly in Banach spaces, rendering them unsuitable for analytic applications. In an infinite-dimensional Banach space, the coordinate functionals associated with a Hamel basis, mapping each vector to its coefficient relative to a basis element, are discontinuous except for at most finitely many. Consequently, no infinite-dimensional Banach space admits a Hamel basis for which all coordinate functionals are continuous, as continuity of these functionals would imply finite dimensionality.
This limitation arises because finite expansions fail to capture the convergent infinite series essential for normed space analysis.

Schauder Bases

In a Banach space $X$, a Schauder basis is a sequence $\{e_n\}_{n=1}^\infty$ such that every $x \in X$ can be uniquely expressed as $x = \sum_{n=1}^\infty a_n e_n$, where the infinite series converges in the norm of $X$.[40] This topological condition distinguishes Schauder bases from Hamel bases, which use only finite linear combinations without regard for convergence.[41] The uniqueness of the coefficients $a_n$ follows from the completeness of the space, and it implies the existence of a biorthogonal sequence of continuous linear functionals $\{f_n\}_{n=1}^\infty$ on $X$ satisfying $f_n(e_m) = \delta_{nm}$ for all $n, m$, with $a_n = f_n(x)$ for each $x$. These functionals are bounded, ensuring the basis projection operators $P_N x = \sum_{n=1}^N f_n(x) e_n$ are uniformly bounded in norm.[42] A canonical example is the Haar system in $L^p[0,1]$ for $1 \leq p < \infty$, consisting of constant functions on dyadic intervals and their differences, which forms an unconditional Schauder basis allowing representation of integrable functions via converging wavelet-like expansions.[41] In such spaces, any Hamel basis fails to be Schauder, as infinite-dimensional Banach spaces require uncountable Hamel bases, precluding countable series convergence.[43] Banach spaces admitting Schauder bases are necessarily separable, as the closed linear span of the basis is the whole space and rational combinations of initial segments form a countable dense set.[40] It was long conjectured that every separable Banach space possesses a Schauder basis, but Per Enflo constructed a counterexample in 1973: a separable, reflexive Banach space lacking both the approximation property and any Schauder basis.[44]

Generalizations

Modules

The concept of dimension extends from vector spaces over fields to modules over rings, where the analogous notion is the rank of a free module. A free module $M$ over a ring $R$ (typically assumed commutative with unity) is one that admits a basis, meaning a linearly independent generating set $\{e_1, \dots, e_n\}$ such that every element of $M$ can be uniquely expressed as a finite $R$-linear combination of the basis elements.[45] The rank of such a free module is defined as the cardinality of any basis, which serves as the module-theoretic counterpart to the dimension of a vector space.[45] For commutative rings $R$, the rank is well-defined and invariant under isomorphism: if two free modules $M \cong R^n$ and $N \cong R^m$ are isomorphic, then $n = m$, as localization at a maximal ideal reduces the problem to comparing dimensions of vector spaces over the residue field.[45] However, unlike vector spaces over fields, not every module over a ring is free, and submodules of free modules need not be free; for instance, over the integers $\mathbb{Z}$, the module $\mathbb{Z}/n\mathbb{Z}$ for $n > 1$ is a torsion module with no basis and thus rank 0, since every nonzero element is annihilated by a nonzero integer.[46] In contrast, $\mathbb{Z}^n$ is a free $\mathbb{Z}$-module of rank $n$, generated by the standard basis vectors.[47] Over general rings, the absence of division (i.e., zero divisors or non-invertible elements) means that linear independence and spanning do not always interact as seamlessly as in vector spaces; a set may generate a module without being a basis if relations exist beyond scalar multiples.[45] In algebraic K-theory, the rank of a projective module can relate to the Euler characteristic via a rank function $\rho : K_0(R) \to \mathbb{Z}$ on the Grothendieck group, where $\rho([M])$ provides a dimension-like invariant capturing the "size" of $M$.[48]

Trace

In the context of finite-dimensional vector spaces, the trace provides an invariant associated to endomorphisms that generalizes the notion of dimension. For an endomorphism $ T: V \to V $ on a finite-dimensional vector space $ V $ over a field $ F $, the trace $ \operatorname{tr}(T) $ is defined as the sum of the diagonal entries of the matrix representing $ T $ with respect to any basis of $ V $.[32] This definition is independent of the choice of basis, making the trace a well-defined property of the operator itself.[32] Equivalently, $ \operatorname{tr}(T) $ equals the sum of the eigenvalues of $ T $, counted with algebraic multiplicity.[32] A direct connection to dimension arises with the identity endomorphism $ I: V \to V $, where $ \operatorname{tr}(I) = \dim(V) $, as the matrix of $ I $ consists entirely of 1s on the diagonal.[32] The trace exhibits several key properties that underscore its role as an invariant. Notably, for endomorphisms $ S, T: V \to V $, $ \operatorname{tr}(ST) = \operatorname{tr}(TS) $.[32] Additionally, the trace is invariant under similarity transformations: if $ S = P^{-1} T P $ for some invertible $ P $, then $ \operatorname{tr}(S) = \operatorname{tr}(T) $.[32] This concept extends to free modules of finite rank over commutative rings. For a free $ R $-module $ M $ of rank $ n $ and an endomorphism $ f: M \to M $, the trace $ \operatorname{tr}(f) $ is defined analogously as the sum of the diagonal entries of any matrix representing $ f $ with respect to a basis of $ M $.[49] This yields an element of $ R $, and the definition is independent of the basis chosen.[49] For the identity endomorphism on such a module, $ \operatorname{tr}(I) = n \cdot 1_R $, linking back to the rank (generalizing dimension).[49] The properties $ \operatorname{tr}(fg) = \operatorname{tr}(gf) $ and invariance under "similarity" (via change of basis) hold similarly, provided the representations are compatible.[50]
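The facts that $ \operatorname{tr}(I) = \dim(V) $, that $ \operatorname{tr}(ST) = \operatorname{tr}(TS) $, and that trace is similarity-invariant are all easy to verify numerically; the random matrices below are illustrative:

```python
import numpy as np

# The trace of the identity on R^4 recovers the dimension: tr(I) = 4.
n = 4
print(np.trace(np.eye(n)))                                   # 4.0

# tr(ST) = tr(TS), and trace is invariant under similarity S -> P^{-1} S P.
rng = np.random.default_rng(0)
S = rng.standard_normal((n, n))
T = rng.standard_normal((n, n))
print(np.isclose(np.trace(S @ T), np.trace(T @ S)))          # True
P = np.eye(n) + np.triu(np.ones((n, n)), 1)                  # invertible
print(np.isclose(np.trace(np.linalg.inv(P) @ T @ P), np.trace(T)))  # True
```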
