Linear independence
from Wikipedia
[Figure: Linearly independent vectors in $\mathbb{R}^3$]
[Figure: Linearly dependent vectors in a plane in $\mathbb{R}^3$]

In linear algebra, a set of vectors is said to be linearly independent if there exists no vector in the set that is equal to a linear combination of the other vectors in the set. If such a vector exists, then the vectors are said to be linearly dependent. Linear independence is part of the definition of linear basis.[1]

A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space.

Definition

A sequence of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ from a vector space $V$ is said to be linearly dependent if there exist scalars $a_1, a_2, \dots, a_k,$ not all zero, such that

$$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k = \mathbf{0},$$

where $\mathbf{0}$ denotes the zero vector.

If $k = 1,$ this means that a single vector is linearly dependent if and only if it is the zero vector.

If $k > 1,$ linear dependence implies that at least one of the scalars is nonzero, say $a_1,$ and the above equation can be written as

$$\mathbf{v}_1 = -\frac{a_2}{a_1}\mathbf{v}_2 - \cdots - \frac{a_k}{a_1}\mathbf{v}_k.$$

Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of the others.

A sequence of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ is said to be linearly independent if it is not linearly dependent, that is, if the equation

$$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k = \mathbf{0}$$

can only be satisfied by $a_i = 0$ for $i = 1, \dots, k.$ This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of $\mathbf{0}$ as a linear combination of its vectors is the trivial representation in which all the scalars $a_i$ are zero.[2] Even more concisely, a sequence of vectors is linearly independent if and only if $\mathbf{0}$ can be represented as a linear combination of its vectors in a unique way.

If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependency of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for a finite set of vectors: A finite set of vectors is linearly independent if the sequence obtained by ordering them is linearly independent. In other words, one has the following result that is often useful.

A sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent.

Infinite case


An infinite set of vectors is linearly independent if every finite subset is linearly independent. This definition applies also to finite sets of vectors, since a finite set is a finite subset of itself, and every subset of a linearly independent set is also linearly independent.

Conversely, an infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set.

An indexed family of vectors is linearly independent if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise, the family is said to be linearly dependent.

A set of vectors which is linearly independent and spans some vector space, forms a basis for that vector space. For example, the vector space of all polynomials in $x$ over the reals has the (infinite) subset $\{1, x, x^2, \dots\}$ as a basis.

Definition via span

Let $V$ be a vector space. A set $S$ of vectors in $V$ is linearly independent if and only if $S$ is a minimal element of

$$\{T \subseteq V : \operatorname{span}(T) = \operatorname{span}(S)\}$$

by the inclusion order. In contrast, $S$ is linearly dependent if it has a proper subset whose span contains every vector of $S$ (equivalently, a proper subset with the same span as $S$).

Geometric examples

  • $\vec{u}$ and $\vec{v}$ are independent and define the plane $P$.
  • $\vec{u}$, $\vec{v}$ and $\vec{w}$ are dependent because all three are contained in the same plane.
  • $\vec{u}$ and $\vec{j}$ are dependent because they are parallel to each other.
  • $\vec{u}$, $\vec{v}$ and $\vec{k}$ are independent because $\vec{u}$ and $\vec{v}$ are independent of each other and $\vec{k}$ is not a linear combination of them or, equivalently, because they do not belong to a common plane. The three vectors define a three-dimensional space.
  • The vectors $\vec{o}$ (null vector, whose components are equal to zero) and $\vec{k}$ are dependent since $\vec{o} = 0\vec{k}$.

Geographic location


A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is true, but it is not necessary to find the location.

In this example the "3 miles north" vector and the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors linearly dependent, that is, one of the three vectors is unnecessary to define a specific location on a plane.

Also note that if altitude is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general, n linearly independent vectors are required to describe all locations in n-dimensional space.

Evaluating linear independence


The zero vector

If one or more vectors from a given sequence of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ is the zero vector $\mathbf{0}$ then the vectors are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that $i$ is an index (i.e. an element of $\{1, \dots, k\}$) such that $\mathbf{v}_i = \mathbf{0}.$ Then let $a_i := 1$ (alternatively, letting $a_i$ be equal to any other non-zero scalar will also work) and then let all other scalars be $0$ (explicitly, this means that for any index $j$ other than $i$ (i.e. for $j \neq i$), let $a_j := 0$ so that consequently $a_j\mathbf{v}_j = 0\mathbf{v}_j = \mathbf{0}$). Simplifying $a_1\mathbf{v}_1 + \cdots + a_k\mathbf{v}_k$ gives:

$$a_1\mathbf{v}_1 + \cdots + a_k\mathbf{v}_k = a_i\mathbf{v}_i = a_i\mathbf{0} = \mathbf{0}.$$

Because not all scalars are zero (in particular, $a_i \neq 0$), this proves that the vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ are linearly dependent.

As a consequence, the zero vector cannot possibly belong to any collection of vectors that is linearly independent.

Now consider the special case where the sequence of vectors has length $1$ (i.e. the case where $k = 1$). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. Explicitly, if $\mathbf{v}_1$ is any vector then the sequence $\mathbf{v}_1$ (which is a sequence of length $1$) is linearly dependent if and only if $\mathbf{v}_1 = \mathbf{0}$; alternatively, the collection $\{\mathbf{v}_1\}$ is linearly independent if and only if $\mathbf{v}_1 \neq \mathbf{0}.$

Linear dependence and independence of two vectors

This example considers the special case where there are exactly two vectors $\mathbf{u}$ and $\mathbf{v}$ from some real or complex vector space. The vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent if and only if at least one of the following is true:

  1. $\mathbf{u}$ is a scalar multiple of $\mathbf{v}$ (explicitly, this means that there exists a scalar $c$ such that $\mathbf{u} = c\mathbf{v}$) or
  2. $\mathbf{v}$ is a scalar multiple of $\mathbf{u}$ (explicitly, this means that there exists a scalar $c$ such that $\mathbf{v} = c\mathbf{u}$).

If $\mathbf{u} = \mathbf{0}$ then by setting $c := 0$ we have $c\mathbf{v} = 0\mathbf{v} = \mathbf{0} = \mathbf{u}$ (this equality holds no matter what the value of $\mathbf{v}$ is), which shows that (1) is true in this particular case. Similarly, if $\mathbf{v} = \mathbf{0}$ then (2) is true because $\mathbf{v} = 0\mathbf{u}.$ If $\mathbf{u} = \mathbf{v}$ (for instance, if they are both equal to the zero vector $\mathbf{0}$) then both (1) and (2) are true (by using $c := 1$ for both).

If $\mathbf{u} = c\mathbf{v}$ then $\mathbf{u} \neq \mathbf{0}$ is only possible if $c \neq 0$ and $\mathbf{v} \neq \mathbf{0}$; in this case, it is possible to multiply both sides by $\frac{1}{c}$ to conclude $\mathbf{v} = \frac{1}{c}\mathbf{u}.$ This shows that if $\mathbf{u} \neq \mathbf{0}$ and $\mathbf{v} \neq \mathbf{0}$ then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly independent). If $\mathbf{u} = c\mathbf{v}$ but instead $\mathbf{u} = \mathbf{0}$ then at least one of $c$ and $\mathbf{v}$ must be zero. Moreover, if exactly one of $\mathbf{u}$ and $\mathbf{v}$ is $\mathbf{0}$ (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false).

The vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly independent if and only if $\mathbf{u}$ is not a scalar multiple of $\mathbf{v}$ and $\mathbf{v}$ is not a scalar multiple of $\mathbf{u}$.
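
This scalar-multiple criterion is easy to check numerically for coordinate vectors. The following is a minimal sketch assuming NumPy; the helper names `is_scalar_multiple` and `two_vectors_dependent` are ours, not a library API.

```python
import numpy as np

def is_scalar_multiple(u, v, tol=1e-12):
    """True if u = c*v for some scalar c."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    if np.allclose(v, 0, atol=tol):           # u = c*0 is possible only for u = 0
        return np.allclose(u, 0, atol=tol)
    c = np.dot(u, v) / np.dot(v, v)           # best candidate scalar for u = c*v
    return np.allclose(u, c * v, atol=tol)

def two_vectors_dependent(u, v):
    """{u, v} is linearly dependent iff one is a scalar multiple of the other."""
    return is_scalar_multiple(u, v) or is_scalar_multiple(v, u)

print(two_vectors_dependent([2, 4], [1, 2]))   # True:  (2, 4) = 2 * (1, 2)
print(two_vectors_dependent([1, 1], [-3, 2]))  # False: linearly independent
```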

Vectors in R2

Three vectors: Consider the set of vectors $\mathbf{v}_1 = (1, 1),$ $\mathbf{v}_2 = (-3, 2)$ and $\mathbf{v}_3 = (2, 4),$ then the condition for linear dependence seeks a set of non-zero scalars, such that

$$a_1\begin{bmatrix}1\\1\end{bmatrix} + a_2\begin{bmatrix}-3\\2\end{bmatrix} + a_3\begin{bmatrix}2\\4\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix},$$

or

$$\begin{bmatrix}1 & -3 & 2\\1 & 2 & 4\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$$

Row reduce this matrix equation by subtracting the first row from the second to obtain,

$$\begin{bmatrix}1 & -3 & 2\\0 & 5 & 2\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$$

Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying by 3 and adding to the first row, that is

$$\begin{bmatrix}1 & 0 & 16/5\\0 & 1 & 2/5\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$$

Rearranging this equation allows us to obtain

$$\begin{bmatrix}a_1\\a_2\end{bmatrix} = -a_3\begin{bmatrix}16/5\\2/5\end{bmatrix},$$

which shows that non-zero $a_i$ exist such that $\mathbf{v}_3 = (2, 4)$ can be defined in terms of $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2).$ Thus, the three vectors are linearly dependent.

Two vectors: Now consider the linear dependence of the two vectors $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2),$ and check,

$$a_1\begin{bmatrix}1\\1\end{bmatrix} + a_2\begin{bmatrix}-3\\2\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix},$$

or

$$\begin{bmatrix}1 & -3\\1 & 2\end{bmatrix}\begin{bmatrix}a_1\\a_2\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$$

The same row reduction presented above yields,

$$\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}\begin{bmatrix}a_1\\a_2\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$$

This shows that $a_1 = a_2 = 0,$ which means that the vectors $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2)$ are linearly independent.
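
The row reduction above can be cross-checked by a rank computation: the vectors are linearly independent exactly when the matrix having them as columns has full column rank. A minimal sketch assuming NumPy:

```python
import numpy as np

A3 = np.array([[1, -3, 2],
               [1,  2, 4]])   # columns: v1 = (1,1), v2 = (-3,2), v3 = (2,4)
A2 = A3[:, :2]                # columns: v1 and v2 only

print(np.linalg.matrix_rank(A3))  # 2 < 3 columns -> v1, v2, v3 are dependent
print(np.linalg.matrix_rank(A2))  # 2 = 2 columns -> v1, v2 are independent
```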

Vectors in R4

In order to determine if the three vectors in $\mathbb{R}^4,$

$$\mathbf{v}_1 = \begin{bmatrix}1\\4\\2\\-3\end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix}7\\10\\-4\\-1\end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix}-2\\1\\5\\-4\end{bmatrix}$$

are linearly dependent, form the matrix equation,

$$\begin{bmatrix}1 & 7 & -2\\4 & 10 & 1\\2 & -4 & 5\\-3 & -1 & -4\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}.$$

Row reduce this equation to obtain,

$$\begin{bmatrix}1 & 7 & -2\\0 & -18 & 9\\0 & 0 & 0\\0 & 0 & 0\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}.$$

Rearrange to solve for $\mathbf{v}_3$ and obtain,

$$\begin{bmatrix}1 & 7\\0 & -18\end{bmatrix}\begin{bmatrix}a_1\\a_2\end{bmatrix} = -a_3\begin{bmatrix}-2\\9\end{bmatrix}.$$

This equation is easily solved to define non-zero $a_i,$

$$a_1 = -3a_3/2, \quad a_2 = a_3/2,$$

where $a_3$ can be chosen arbitrarily. Thus, the vectors $\mathbf{v}_1,$ $\mathbf{v}_2$ and $\mathbf{v}_3$ are linearly dependent.
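
The coefficients found above can be verified directly by forming the corresponding linear combination; choosing $a_3 = 2$ clears the fractions. A small check assuming NumPy:

```python
import numpy as np

v1 = np.array([1, 4, 2, -3])
v2 = np.array([7, 10, -4, -1])
v3 = np.array([-2, 1, 5, -4])

# With a3 = 2: a1 = -3*a3/2 = -3 and a2 = a3/2 = 1.
print(-3*v1 + 1*v2 + 2*v3)   # [0 0 0 0] -> a nontrivial dependency exists
```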

Alternative method using determinants

An alternative method relies on the fact that $n$ vectors in $\mathbb{R}^n$ are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero.

In this case, the matrix formed by the vectors $(1, 1)$ and $(-3, 2)$ is

$$A = \begin{bmatrix}1 & -3\\1 & 2\end{bmatrix}.$$

We may write a linear combination of the columns as

$$A\Lambda = \begin{bmatrix}1 & -3\\1 & 2\end{bmatrix}\begin{bmatrix}\lambda_1\\\lambda_2\end{bmatrix}.$$

We are interested in whether $A\Lambda = \mathbf{0}$ for some nonzero vector $\Lambda.$ This depends on the determinant of $A,$ which is

$$\det A = (1)(2) - (1)(-3) = 5 \neq 0.$$

Since the determinant is non-zero, the vectors $(1, 1)$ and $(-3, 2)$ are linearly independent.

Otherwise, suppose we have $m$ vectors of $n$ coordinates, with $m < n.$ Then $A$ is an $n \times m$ matrix and $\Lambda$ is a column vector with $m$ entries, and we are again interested in $A\Lambda = \mathbf{0}.$ As we saw previously, this is equivalent to a list of $n$ equations. Consider the first $m$ rows of $A,$ the first $m$ equations; any solution of the full list of equations must also be true of the reduced list. In fact, if $\langle i_1, \dots, i_m \rangle$ is any list of $m$ rows, then the equation must be true for those rows:

$$A_{\langle i_1, \dots, i_m \rangle}\Lambda = \mathbf{0}.$$

Furthermore, the reverse is true. That is, we can test whether the $m$ vectors are linearly dependent by testing whether

$$\det A_{\langle i_1, \dots, i_m \rangle} = 0$$

for all possible lists of $m$ rows. (In case $m = n,$ this requires only one determinant, as above. If $m > n,$ then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available.
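
The all-minors criterion can be implemented directly, although, as noted, it is mainly of theoretical interest. A sketch assuming NumPy; `dependent_by_minors` is our name for the helper:

```python
import numpy as np
from itertools import combinations

def dependent_by_minors(A, tol=1e-9):
    """A is n x m with m <= n, columns holding the vectors. The vectors are
    linearly dependent iff every m x m minor (choice of m rows) vanishes."""
    n, m = A.shape
    return all(abs(np.linalg.det(A[list(rows), :])) < tol
               for rows in combinations(range(n), m))

A = np.array([[ 1,   7, -2],
              [ 4,  10,  1],
              [ 2,  -4,  5],
              [-3,  -1, -4]], dtype=float)   # the R^4 example above
print(dependent_by_minors(A))                # True: all four 3x3 minors are 0
```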

More vectors than dimensions

If there are more vectors than dimensions, the vectors are linearly dependent. This is illustrated in the example above of three vectors in $\mathbb{R}^2.$

Natural basis vectors

Let $V = \mathbb{R}^n$ and consider the following elements in $V,$ known as the natural basis vectors:

$$\begin{aligned}
\mathbf{e}_1 &= (1, 0, 0, \dots, 0)\\
\mathbf{e}_2 &= (0, 1, 0, \dots, 0)\\
&\;\;\vdots\\
\mathbf{e}_n &= (0, 0, 0, \dots, 1).
\end{aligned}$$

Then $\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n$ are linearly independent.

Proof

Suppose that $a_1, a_2, \dots, a_n$ are real numbers such that

$$a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + \cdots + a_n\mathbf{e}_n = \mathbf{0}.$$

Since

$$a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + \cdots + a_n\mathbf{e}_n = (a_1, a_2, \dots, a_n),$$

then $a_i = 0$ for all $i = 1, \dots, n.$

Linear independence of functions

Let $V$ be the vector space of all differentiable functions of a real variable $t.$ Then the functions $e^t$ and $e^{2t}$ in $V$ are linearly independent.

Proof

Suppose $a$ and $b$ are two real numbers such that

$$ae^t + be^{2t} = 0$$

for all values of $t.$ Take the first derivative of the above equation:

$$ae^t + 2be^{2t} = 0$$

for all values of $t.$ We need to show that $a = 0$ and $b = 0.$ In order to do this, we subtract the first equation from the second, giving $be^{2t} = 0.$ Since $e^{2t}$ is not zero for any $t,$ it follows that $b = 0.$ From the first equation it then follows that $a = 0$ too. Therefore, according to the definition of linear independence, $e^t$ and $e^{2t}$ are linearly independent.
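
The argument in the proof can be replayed symbolically: if $ae^t + be^{2t}$ vanishes identically, then both it and its derivative vanish at $t = 0,$ and that two-equation linear system forces $a = b = 0.$ A sketch assuming SymPy:

```python
from sympy import symbols, exp, diff, solve, Eq

t, a, b = symbols('t a b')
f = a*exp(t) + b*exp(2*t)

# Evaluate f and f' at t = 0: a + b = 0 and a + 2b = 0.
eqs = [Eq(f.subs(t, 0), 0), Eq(diff(f, t).subs(t, 0), 0)]
print(solve(eqs, [a, b]))   # {a: 0, b: 0} -> only the trivial combination
```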

Space of linear dependencies

A linear dependency or linear relation among vectors $\mathbf{v}_1, \dots, \mathbf{v}_n$ is a tuple $(a_1, \dots, a_n)$ with $n$ scalar components such that

$$a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n = \mathbf{0}.$$

If such a linear dependence exists with at least one nonzero component, then the $n$ vectors are linearly dependent. Linear dependencies among $\mathbf{v}_1, \dots, \mathbf{v}_n$ form a vector space.

If the vectors are expressed by their coordinates, then the linear dependencies are the solutions of a homogeneous system of linear equations, with the coordinates of the vectors as coefficients. A basis of the vector space of linear dependencies can therefore be computed by Gaussian elimination.
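
As a concrete illustration, the dependencies among the three plane vectors $(1, 1),$ $(-3, 2),$ $(2, 4)$ from the earlier example form a one-dimensional space, whose basis Gaussian elimination recovers. A sketch assuming SymPy:

```python
from sympy import Matrix

V = Matrix([[1, -3, 2],
            [1,  2, 4]])    # columns: (1,1), (-3,2), (2,4)

print(V.nullspace())        # [Matrix([[-16/5], [-2/5], [1]])]: one basis relation
print(V.cols - V.rank())    # 1 = dimension of the space of linear dependencies
```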

Generalizations


Affine independence


A set of vectors is said to be affinely dependent if at least one of the vectors in the set can be defined as an affine combination of the others. Otherwise, the set is called affinely independent. Any affine combination is a linear combination; therefore every affinely dependent set is linearly dependent. Contrapositively, every linearly independent set is affinely independent. Note that an affinely independent set is not necessarily linearly independent.

Consider a set of $m$ vectors $\mathbf{v}_1, \dots, \mathbf{v}_m$ of size $n$ each, and consider the set of $m$ augmented vectors $\begin{pmatrix}1\\ \mathbf{v}_1\end{pmatrix}, \dots, \begin{pmatrix}1\\ \mathbf{v}_m\end{pmatrix}$ of size $n + 1$ each. The original vectors are affinely independent if and only if the augmented vectors are linearly independent.[3]: 256
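
This augmentation trick translates directly into a rank test. A minimal sketch assuming NumPy; `affinely_independent` is our name for the helper:

```python
import numpy as np

def affinely_independent(points):
    """points: sequence of equal-length coordinate tuples."""
    P = np.asarray(points, dtype=float)              # one point per row (m x n)
    aug = np.hstack([np.ones((P.shape[0], 1)), P])   # prepend 1 to each point
    return np.linalg.matrix_rank(aug) == P.shape[0]  # full rank <-> independent

print(affinely_independent([(0, 0), (1, 0), (0, 1)]))  # True: triangle vertices
print(affinely_independent([(0, 0), (1, 1), (2, 2)]))  # False: collinear points
```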

Linearly independent vector subspaces

Two vector subspaces $M$ and $N$ of a vector space $X$ are said to be linearly independent if $M \cap N = \{0\}.$[4] More generally, a collection $M_1, \dots, M_d$ of subspaces of $X$ are said to be linearly independent if $M_i \cap \sum_{k \neq i} M_k = \{0\}$ for every index $i.$[4] The vector space $X$ is said to be a direct sum of $M_1, \dots, M_d$ if these subspaces are linearly independent and $X = M_1 + \cdots + M_d.$

See also

  • Matroid – Abstraction of linear independence of vectors

from Grokipedia
Linear independence is a fundamental concept in linear algebra that characterizes sets or families of vectors within a vector space. A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ is linearly independent if the equation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \dots + c_k\mathbf{v}_k = \mathbf{0}$ holds only when all scalars $c_1 = c_2 = \dots = c_k = 0$, meaning no vector in the set can be expressed as a nontrivial linear combination of the others.[1] This property distinguishes linearly independent collections from linearly dependent ones, where at least one vector is redundant as a linear combination of the rest.[2]

Linear independence is essential for defining the structure of vector spaces, particularly in relation to span, basis, and dimension. A basis is a linearly independent set of vectors that spans the entire vector space, providing a minimal generating set for all elements in the space.[3] The dimension of a vector space is the cardinality of any such basis, which remains consistent regardless of the choice of basis, and it determines the maximum size of a linearly independent subset.[4] For instance, in $\mathbb{R}^n$, the standard basis vectors form a linearly independent set of size $n$, confirming the dimension is $n$.[5]

Beyond theoretical foundations, linear independence has practical implications in matrix theory and applications. The columns (or rows) of a matrix are linearly independent if and only if the associated homogeneous system has only the trivial solution, which relates directly to the matrix's rank and invertibility.[6] This concept extends to solving systems of linear equations,[7] optimization in machine learning,[8] and analyzing data structures in fields like physics and computer science, where it ensures non-redundant representations.[7]

Definitions

Finite-dimensional vector spaces

In the context of finite-dimensional vector spaces, the concepts of vector spaces, linear combinations, and the zero vector are foundational prerequisites. A vector space $V$ over a field $F$ consists of vectors that can be added and scaled by elements of $F$, with the zero vector $\mathbf{0}$ serving as the additive identity. Linear combinations involve sums of the form $\sum a_i \mathbf{v}_i$, where $a_i \in F$ and $\mathbf{v}_i \in V$.[9]

A finite set of vectors $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ in a vector space $V$ over a field $F$ is linearly independent if the only solution in $F$ to the equation $a_1\mathbf{v}_1 + \dots + a_n\mathbf{v}_n = \mathbf{0}$ is $a_1 = \dots = a_n = 0$.[10] This condition ensures that no vector in the set can be expressed as a linear combination of the others. Formally, the set is linearly independent if

$$\sum_{i=1}^n a_i \mathbf{v}_i = \mathbf{0} \quad \implies \quad a_i = 0 \quad \forall i = 1, \dots, n.$$

[9] The negation of this property defines linear dependence: a set is linearly dependent if there exist scalars $a_1, \dots, a_n \in F$, not all zero, such that $\sum_{i=1}^n a_i \mathbf{v}_i = \mathbf{0}$.[10]

The concept of linear independence was formalized by Giuseppe Peano in his 1888 work Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann, where he provided the first axiomatic treatment of vector spaces over the reals, building on earlier ideas from mathematicians like Grassmann and Möbius.[11] Although detailed discussions of span and bases appear later in the theory, linear independence is essential for identifying sets that form bases in finite-dimensional spaces.[9]

Infinite-dimensional vector spaces

In infinite-dimensional vector spaces, the notion of linear independence extends to possibly infinite families of vectors indexed by a set $I$, which may be countably or uncountably infinite. A family $\{v_i \mid i \in I\}$ is linearly independent if, for every finite subset $J \subseteq I$, the only solution to the equation

$$\sum_{j \in J} a_j v_j = 0$$

is $a_j = 0$ for all $j \in J$.[12] This condition ensures that no nontrivial finite linear combination of the vectors vanishes, mirroring the finite-dimensional case but restricting nontrivial relations to finite subcollections.[12]

A key structure in this context is the Hamel basis, defined as a linearly independent set that spans the vector space algebraically, meaning every vector in the space can be expressed as a finite linear combination of basis elements.[13] By Zorn's lemma (an equivalent of the axiom of choice), every vector space possesses a Hamel basis, though in infinite dimensions, this basis is typically uncountable and its explicit construction is impossible without additional assumptions.[13] For instance, in the Hilbert space $\ell^2$ of square-summable sequences, any Hamel basis must be uncountable, as the space has cardinality $2^{\aleph_0}$ and cannot be spanned algebraically by a countable set using only finite combinations.[14]

The standard orthonormal basis $\{e_n\}_{n=1}^\infty$ in $\ell^2$, where $e_n$ has a 1 in the $n$th position and zeros elsewhere, provides an example of a countably infinite linearly independent set.[15] However, this set does not form a Hamel basis, as its algebraic span consists only of sequences with finite support, which is a proper subspace of $\ell^2$. In contrast, Schauder bases in topological vector spaces like $\ell^2$ permit spanning via infinite convergent linear combinations, highlighting a distinction from the purely algebraic Hamel framework.[16]

A fundamental subtlety in infinite dimensions is that linear independence governs only finite combinations, so spanning sets like Hamel bases require potentially uncountably many elements to cover the space without infinite sums, unlike finite-dimensional cases where independence directly ties to dimension.[16] This algebraic restriction often renders Hamel bases impractical for analysis in spaces equipped with topology, such as Banach or Hilbert spaces.[14]

Equivalent characterizations

A finite set of vectors $\{v_1, \dots, v_n\}$ in a vector space $V$ over a field $F$ is linearly independent if and only if the dimension of its span is $n$, meaning the vectors form a basis for $\operatorname{span}\{v_1, \dots, v_n\}$.[17] This equivalence holds because linear independence ensures no redundancies, allowing the set to achieve the maximal dimension equal to its cardinality within the subspace it generates.[18]

To see this, consider a proof by induction on $n$. For $n = 1$, the set $\{v_1\}$ is linearly independent if $v_1 \neq 0$, in which case $\dim \operatorname{span}\{v_1\} = 1$. Assume the statement holds for sets of size $k - 1$. For a set of size $k$, the subset $\{v_1, \dots, v_{k-1}\}$ is linearly independent by the definition, so $\dim \operatorname{span}\{v_1, \dots, v_{k-1}\} = k - 1$ by the induction hypothesis. The full set is linearly independent if and only if $v_k \notin \operatorname{span}\{v_1, \dots, v_{k-1}\}$, which increases the dimension to $k$.[19]

Equivalently, the set $\{v_1, \dots, v_n\}$ is linearly independent if and only if every vector in $\operatorname{span}\{v_1, \dots, v_n\}$ has a unique representation as a linear combination of the vectors in the set.[19] This uniqueness follows directly from the triviality of the kernel of the coordinate map associating coefficients to linear combinations. Another characterization uses linear maps: the vectors $\{v_1, \dots, v_n\}$ are linearly independent if and only if the linear map $T: F^n \to V$ defined by $T(e_i) = v_i$, where $\{e_1, \dots, e_n\}$ is the standard basis of $F^n$, is injective.[18] Injectivity means the kernel is trivial, which corresponds precisely to the only solution of $\sum a_i v_i = 0$ being $a_i = 0$ for all $i$.

A set $S$ is linearly independent if and only if it can be extended to a basis of the ambient vector space $V$ (assuming $V$ is finite-dimensional).[20] In finite dimensions, starting from a linearly independent $S$, one can iteratively add vectors from a spanning set until spanning $V$, preserving independence at each step; the converse follows from subsets of bases being independent. Thus, the vectors $\{v_1, \dots, v_n\}$ form a basis for their span if and only if they are linearly independent, as they trivially span $\operatorname{span}\{v_1, \dots, v_n\}$.[19]

Geometric and Visual Interpretations

In two-dimensional space

In two-dimensional Euclidean space $\mathbb{R}^2$, the concept of linear independence for vectors gains an intuitive geometric interpretation. A set consisting of two vectors is linearly independent if they are not collinear, meaning neither vector is a scalar multiple of the other. Geometrically, such vectors point in different directions and form the sides of a parallelogram with positive area, allowing them to span the entire plane. In contrast, collinear vectors lie along the same line and span only that line, forming a degenerate parallelogram with zero area.[21]

For example, the standard basis vectors $\mathbf{e}_1 = (1, 0)$ and $\mathbf{e}_2 = (0, 1)$ are linearly independent, as they are perpendicular and together span all of $\mathbb{R}^2$. On the other hand, the vectors $(1, 0)$ and $(2, 0)$ are linearly dependent, since $(2, 0) = 2 \cdot (1, 0)$, and they both lie along the x-axis. Similarly, $(1, 2)$ and $(2, 4)$ are dependent because $(2, 4) = 2 \cdot (1, 2)$. A single nonzero vector in $\mathbb{R}^2$, such as $(1, 1)$, is linearly independent, as the equation $c \cdot (1, 1) = (0, 0)$ implies $c = 0$. However, the zero vector $(0, 0)$ by itself is linearly dependent, since $1 \cdot (0, 0) = (0, 0)$ with a nonzero scalar.[22][21][23]

Any set of three or more vectors in $\mathbb{R}^2$ is always linearly dependent, as the space has dimension 2 and cannot be spanned by more than two linearly independent vectors; at least one vector must lie in the span of the others, akin to the pigeonhole principle in geometry. Non-collinear pairs span the full plane, while collinear ones are confined to a one-dimensional line. To check dependence for two vectors $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$, compute the determinant $u_1 v_2 - u_2 v_1$; the vectors are dependent if this equals zero, which measures the signed area of the parallelogram they form.[24][21]

In higher-dimensional spaces

In three-dimensional space $\mathbb{R}^3$, three vectors are linearly independent if they span the entire space without being coplanar, forming a parallelepiped of nonzero volume (equivalently, the tetrahedron they define together with the origin has positive volume).[25] For instance, the standard basis vectors $\mathbf{e}_1 = (1, 0, 0)$, $\mathbf{e}_2 = (0, 1, 0)$, and $\mathbf{e}_3 = (0, 0, 1)$ are linearly independent, as they align along mutually orthogonal axes and collectively span $\mathbb{R}^3$.[26] In contrast, any set including the zero vector or where one vector is a scalar multiple of another fails to add a new dimension and is thus dependent.[25]

This geometric intuition generalizes to $\mathbb{R}^n$ for $n > 3$, where a set of $k$ vectors (with $k \leq n$) is linearly independent if their span forms a full $k$-dimensional subspace without dimensional collapse, meaning each successive vector extends the span by one dimension.[25] However, in $\mathbb{R}^n$, any collection of $n + 1$ vectors must be linearly dependent, as they can occupy at most an $n$-dimensional space and thus cannot all contribute unique directions.[27]

Visually, linear independence in higher dimensions preserves a "full rank" orientation, where the vectors maintain their maximal possible spread; dependence, conversely, causes a flattening into a lower-dimensional subspace, such as vectors collapsing onto a hyperplane.[25] Fundamentally, a set of vectors is linearly independent if and only if they do not all lie within any proper subspace of dimension less than the size of the set.[25]

Determination Methods

For two or three vectors

For two vectors $\mathbf{u}$ and $\mathbf{v}$ in a vector space over a field, the set $\{\mathbf{u}, \mathbf{v}\}$ is linearly independent if and only if neither vector is the zero vector and $\mathbf{v}$ is not a scalar multiple of $\mathbf{u}$.[1] This condition ensures that the only solution to the equation $a\mathbf{u} + b\mathbf{v} = \mathbf{0}$ is the trivial solution $a = b = 0$.[18]

To verify linear independence for two nonzero vectors, one can check whether $\mathbf{v}$ lies in the span of $\{\mathbf{u}\}$, which occurs if and only if there exists a scalar $c$ such that $\mathbf{v} = c\mathbf{u}$.[18] If no such scalar exists, the vectors are linearly independent. Geometrically, in the plane, this corresponds to the vectors not being collinear.[18]

For three vectors $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$ in $\mathbb{R}^3$, the set $\{\mathbf{u}, \mathbf{v}, \mathbf{w}\}$ is linearly independent if the equation $a\mathbf{u} + b\mathbf{v} + c\mathbf{w} = \mathbf{0}$ has only the trivial solution $a = b = c = 0$.[28] The vectors are linearly dependent if there exists a nontrivial solution, meaning at least one coefficient is nonzero.[28] Three vectors in $\mathbb{R}^3$ can be checked by forming the matrix with them as columns and computing its determinant; independence holds if and only if $\det \neq 0$. Geometrically, this means the vectors are not coplanar.[29]

Consider the vectors $(1, 1)$, $(1, 2)$, and $(2, 3)$ in $\mathbb{R}^2$: these are linearly dependent because $(2, 3) = 1 \cdot (1, 1) + 1 \cdot (1, 2)$.[30] Any set containing the zero vector is linearly dependent, as $1 \cdot \mathbf{0} + 0 \cdot \mathbf{u} + 0 \cdot \mathbf{v} = \mathbf{0}$ provides a nontrivial linear combination yielding zero.[18] In two dimensions, a step-by-step check for two vectors $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$ involves computing the 2D cross product analog, given by the determinant $u_1 v_2 - u_2 v_1$; the vectors are linearly independent if and only if this value is nonzero.[29]

Matrix-based approaches

One effective way to determine the linear independence of a set of $k$ vectors in $\mathbb{R}^n$ is to form an $n \times k$ matrix $A$ whose columns are these vectors. The set is linearly independent if and only if $A$ has full column rank, meaning $\operatorname{rank}(A) = k$.[18] This condition ensures that the columns span a $k$-dimensional subspace without redundancy.[18]

Equivalently, the columns of $A$ are linearly independent if and only if the homogeneous equation $A\mathbf{x} = \mathbf{0}$ has only the trivial solution $\mathbf{x} = \mathbf{0}$, indicating that the kernel (null space) of $A$ is trivial.[6] This kernel characterization directly ties linear independence to the invertibility properties of the linear transformation represented by $A$.[6]

This matrix-based approach relies on the vectors residing in a coordinate space such as $\mathbb{R}^n$, where they possess natural coordinate representations with respect to the standard basis, allowing direct construction of the numerical matrix $A$ and application of the null space test. In a general vector space $V$, however, vectors lack inherent coordinate representations, so such a numerical matrix cannot be constructed directly. To apply an analogous matrix method in a general vector space $V$, one must first select a basis for $V$, express the given vectors as coordinate vectors with respect to that basis, and then form the corresponding matrix. Otherwise, linear dependence must be assessed using the abstract definition: the vectors are linearly dependent if there exist scalars, not all zero, such that their linear combination equals the zero vector.

To compute the rank and verify full column rank, Gaussian elimination can be applied to row-reduce $A$ to row echelon form. The vectors are linearly independent if the reduced form has $k$ pivot positions (one in each column).[31] This method systematically identifies dependencies by revealing the number of independent columns through the pivot count.[31] For the special case where $k = n$ (a square matrix), the vectors are linearly independent if and only if $\det(A) \neq 0$.[32] A nonzero determinant confirms that $A$ is invertible, implying full rank and thus independence of the columns.[32]

Consider an example in $\mathbb{R}^3$ with vectors $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$, $\mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$, and $\mathbf{v}_3 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$. Form the matrix

$$A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$

This matrix is already in row echelon form, with only two pivots, so $\operatorname{rank}(A) = 2 < 3$, confirming linear dependence.[18] In contrast, replacing $\mathbf{v}_3$ with $\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$ gives full rank 3 via reduction to the identity matrix.[18]

The general algorithm involves placing the vectors as columns (or rows, equivalently, since row rank equals column rank) in a matrix, performing Gaussian elimination to echelon form, and counting the pivots: independence holds if this count equals $k$.[31] This approach scales efficiently for computational verification in larger dimensions.[31]
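
The pivot count in the example can be reproduced mechanically with a reduced-row-echelon-form routine. A sketch assuming SymPy:

```python
from sympy import Matrix

A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [0, 0, 0]])       # columns: v1, v2, v3 from the example

rref, pivots = A.rref()       # reduced row echelon form, pivot column indices
print(pivots)                 # (0, 1): two pivots for three columns
print(len(pivots) == A.cols)  # False -> the columns are linearly dependent
```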

Dimension and spanning set relations

In a finite-dimensional vector space $V$ over a field $F$ with $\dim V = n$, the maximum size of a linearly independent set is $n$, and any linearly independent set of exactly $n$ vectors forms a basis for $V$.[33] This result establishes that the dimension $n$ precisely measures the "size" of the space in terms of independent directions, ensuring all bases share the same cardinality.[5]

A linearly independent set $\{v_1, \dots, v_k\}$ in $V$ spans a subspace of dimension exactly $k$, as the vectors can be extended to a basis for that subspace while preserving independence.[34] Conversely, if a set spans a subspace but contains redundant vectors, removing them yields a basis whose size equals the subspace dimension. This bidirectional relation underscores how linear independence determines the minimal number of vectors needed to generate a given span.[33]

The Steinitz exchange lemma provides a key mechanism for relating different bases: if $\{u_1, \dots, u_m\}$ and $\{v_1, \dots, v_n\}$ are bases for $V$, then $m = n$. Moreover, if $\mathcal{B}$ is a basis and $w$ is any nonzero vector of $V$, then there exists some $u_i \in \mathcal{B}$ such that $(\mathcal{B} \setminus \{u_i\}) \cup \{w\}$ remains a basis.[35] This exchange property allows iterative replacement of basis vectors without altering the spanning or independence properties, proving the invariance of basis size across all bases.[35]

If $S$ spans $V$ and $T$ is a linearly independent subset of $V$, then $|T| \leq n = \dim V$, with equality holding if and only if $T$ is a basis for $V$.[5] This bound follows directly from the Steinitz exchange lemma applied to extend $T$ within the span of $S$, showing that no independent set can exceed the dimension without redundancy.[35] Consequently, any set of more than $n$ vectors in $V$ must be linearly dependent, as it surpasses the maximum possible independence cardinality.[36]

To see why sets larger than the dimension are dependent, consider a set $\{v_1, \dots, v_{n+1}\}$ in $V$. Define the linear map $\phi: F^{n+1} \to V$ by $\phi(e_i) = v_i$, where the $e_i$ are the standard basis vectors. The image of $\phi$ has dimension at most $n$, so by the rank-nullity theorem, $\dim \ker \phi = (n + 1) - \operatorname{rank}\phi \geq 1$. A nonzero vector in the kernel yields nontrivial coefficients showing $\sum c_i v_i = 0$ with not all $c_i = 0$, confirming dependence.[37]

Key Properties and Special Cases

Involvement of the zero vector

The zero vector occupies a distinctive position in the theory of linear independence, as its presence in any set of vectors guarantees linear dependence. Specifically, if a set $S = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\}$ in a vector space contains the zero vector $\mathbf{0}$, say $\mathbf{v}_1 = \mathbf{0}$, then $S$ is linearly dependent. This holds because there exists a nontrivial linear combination equaling the zero vector: assign the coefficient 1 to $\mathbf{0}$ and 0 to all other vectors, yielding

$$1 \cdot \mathbf{0} + 0 \cdot \mathbf{v}_2 + \cdots + 0 \cdot \mathbf{v}_n = \mathbf{0},$$

which is nontrivial since not all coefficients are zero.[6][18]

This theorem has direct implications for the structure of linearly independent sets and bases. Since bases must be linearly independent and span the vector space, they cannot include the zero vector; thus, every linearly independent set consists exclusively of nonzero vectors. For instance, the singleton set $\{\mathbf{v}\}$ is linearly independent if and only if $\mathbf{v} \neq \mathbf{0}$, as the equation $c\mathbf{v} = \mathbf{0}$ implies $c = 0$ precisely when $\mathbf{v}$ is nonzero. Additionally, the empty set is conventionally regarded as linearly independent, as the only linear combination (with no vectors) is the trivial one equaling $\mathbf{0}$; it spans the trivial subspace $\{\mathbf{0}\}$ and serves as a basis for the zero vector space, which has dimension 0.[38][23][39]

A practical consequence arises when dealing with dependent sets: if linear dependence stems from the inclusion of the zero vector, removing $\mathbf{0}$ from the set may yield a linearly independent subset that spans the same subspace as the original set minus $\mathbf{0}$. This removal process preserves the span while eliminating the dependence introduced by the zero vector, facilitating the extraction of maximal independent subsets.[40][24]

Standard basis vectors

In the vector space $F^n$, where $F$ is a field, the standard basis is the set of vectors $\{e_1, e_2, \dots, e_n\}$, with $e_i$ having a 1 in the $i$-th position and 0 elsewhere for $i = 1, \dots, n$. These vectors are linearly independent by construction: a linear combination $c_1 e_1 + \dots + c_n e_n$ equals the vector $(c_1, \dots, c_n)$, which is the zero vector only when every coefficient is zero.[41]

The standard basis vectors span $F^n$ and are linearly independent, thereby forming a basis for the space. To verify their linear independence, consider the $n \times n$ matrix whose columns are these vectors; it is the identity matrix $I_n$, which has determinant 1 (nonzero), confirming that the only solution to $I_n\mathbf{c} = \mathbf{0}$ is the zero vector $\mathbf{c} = \mathbf{0}$.[39] As a basis, they provide unique coordinates for any vector in $F^n$, meaning every $\mathbf{v} \in F^n$ can be expressed uniquely as $\mathbf{v} = c_1 e_1 + \dots + c_n e_n$, where the $c_i$ are the components of $\mathbf{v}$.[42]

This concept generalizes to other finite-dimensional spaces, such as the space of polynomials of degree at most $n - 1$, denoted $P_{n-1}$, where the standard basis is the set of monomials $\{1, x, x^2, \dots, x^{n-1}\}$. These monomials are linearly independent and span $P_{n-1}$, analogous to the coordinate basis in $F^n$.[43] For example, in $\mathbb{R}^2$, the vectors $e_1 = (1, 0)$ and $e_2 = (0, 1)$ are linearly independent and form a basis; adding any third vector, such as $(1, 1)$, results in a linearly dependent set, as $(1, 1) = e_1 + e_2$. The nonzero nature of these standard basis vectors ensures their independence, in contrast to including the zero vector, which would introduce dependence.[44]

Independence in function spaces

In function spaces, such as the vector space of continuous functions on an interval or polynomials over a field, a set of functions $\{f_1, \dots, f_n\}$ is linearly independent if whenever $\sum_{i=1}^n a_i f_i(x) = 0$ for all $x$ in the domain, it follows that $a_1 = \dots = a_n = 0$.[45] This definition mirrors that in finite-dimensional spaces but applies to the pointwise addition and scalar multiplication of functions.[45]

A classic example occurs in the space of polynomials of degree at most 2, denoted $\mathbb{P}_2$, over the reals. The set $\{1, x, x^2\}$ is linearly independent because if $a + bx + cx^2 = 0$ for all $x$, equating coefficients yields $a = b = c = 0$.[46] However, adjoining a fourth polynomial, such as $1 + x + x^2$, to form a four-element subset of $\mathbb{P}_2$ results in linear dependence, as the dimension of $\mathbb{P}_2$ is 3, so any four elements must satisfy a nontrivial linear relation.[46]

For exponential functions, the set $\{e^{\lambda_1 t}, \dots, e^{\lambda_k t}\}$ with distinct $\lambda_i \in \mathbb{C}$ is linearly independent over the reals or complexes.[47] This follows from the Wronskian determinant

$$W(e^{\lambda_1 t}, \dots, e^{\lambda_k t})(t) = \det\begin{pmatrix} e^{\lambda_1 t} & \cdots & e^{\lambda_k t} \\ \lambda_1 e^{\lambda_1 t} & \cdots & \lambda_k e^{\lambda_k t} \\ \vdots & \ddots & \vdots \\ \lambda_1^{k-1} e^{\lambda_1 t} & \cdots & \lambda_k^{k-1} e^{\lambda_k t} \end{pmatrix} = e^{(\lambda_1 + \dots + \lambda_k)t} \prod_{1 \leq i < j \leq k} (\lambda_j - \lambda_i),$$

which is nonzero for all $t$ when the $\lambda_i$ are distinct.[48]

More generally, for a set of $n$ sufficiently differentiable functions $f_1, \dots, f_n$, linear independence holds if their Wronskian $W(f_1, \dots, f_n)(t) \neq 0$ at some point $t$ in the domain; conversely, if the functions are linearly dependent, the Wronskian vanishes identically (an identically vanishing Wronskian does not by itself imply dependence without further hypotheses).[49] To see why dependence forces the Wronskian to vanish, suppose $\sum_{i=1}^n a_i f_i = 0$ identically with not all $a_i = 0$. Differentiating this relation $n - 1$ times shows that the columns of the Wronskian matrix satisfy the same nontrivial linear relation at every point $t$, so the determinant is identically zero.[48][49]

In the infinite-dimensional case, the monomials $\{x^n \mid n = 0, 1, 2, \dots\}$ form a linearly independent set in the vector space of formal power series over a field, as any finite linear combination equaling zero implies all coefficients are zero by uniqueness of series representations.[50]
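
The Wronskian test runs easily in a computer algebra system; here is a sketch assuming SymPy, building the Wronskian matrix explicitly from derivatives:

```python
from sympy import symbols, exp, diff, Matrix, simplify

t = symbols('t')
fs = [exp(t), exp(2*t), exp(3*t)]    # distinct exponents, expected independent
n = len(fs)

# Row i holds the i-th derivatives of the functions (row 0 is the functions).
W = Matrix(n, n, lambda i, j: diff(fs[j], t, i))
print(simplify(W.det()))   # 2*exp(6*t): nonzero for every t -> independent
```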

Advanced Concepts and Generalizations

Linear dependence relations

In the context of a finite set of vectors $\{v_1, \dots, v_n\}$ in a vector space over a field $F$, the linear dependence relations are the tuples $(a_1, \dots, a_n) \in F^n$ satisfying $\sum_{i=1}^n a_i v_i = 0$; any such tuple with at least one nonzero component witnesses linear dependence.[51] These relations correspond precisely to the elements of the kernel of the linear map defined by the matrix $A$ with columns $v_1, \dots, v_n$, where the kernel consists of all solutions to $A\mathbf{a} = \mathbf{0}$.[51]

The set of all such coefficient tuples forms a subspace of $F^n$, known as the dependence space, and its dimension equals $n - r$, where $r$ is the rank of $A$ (equivalently, the dimension of the span of $\{v_1, \dots, v_n\}$).[52] This follows directly from the rank-nullity theorem applied to the linear map from $F^n$ to the ambient space induced by $A$.[52]

For example, consider two collinear vectors in $\mathbb{R}^2$, such as $v_1 = (1, 0)$ and $v_2 = (2, 0)$. The span has dimension 1, so the dependence space has dimension $2 - 1 = 1$, yielding essentially one relation up to scalar multiple: $2v_1 - v_2 = 0$.[51]

A key property is that minimal linearly dependent sets (those where the full set is dependent but every proper subset is independent) have dependence space of dimension 1, meaning exactly one relation up to scalar multiplication.[52] In more algebraic terms, the dependence relations form a module over the field $F$ (specifically, a vector subspace of $F^n$), and in module-theoretic contexts, these are studied as syzygies among the vectors.[53] The vectors $\{v_1, \dots, v_n\}$ are linearly independent if and only if the dependence space is the trivial subspace $\{\mathbf{0}\}$.[51]

Affine independence

Affine independence generalizes the concept of linear independence to points in an affine space, focusing on affine combinations rather than linear ones. A finite set of points $\{p_0, p_1, \dots, p_k\}$ in a vector space over $\mathbb{R}$ (or more generally, over a field) is affinely independent if the associated difference vectors $\{p_1 - p_0, p_2 - p_0, \dots, p_k - p_0\}$ form a linearly independent set.[54] This condition ensures that the points do not lie in a lower-dimensional affine subspace than expected from their count.[55]

Equivalently, the points $p_0, \dots, p_k$ are affinely independent if there is no nontrivial affine relation, meaning no scalars $\lambda_0, \lambda_1, \dots, \lambda_k$, not all zero, satisfying both $\sum_{i=0}^k \lambda_i p_i = 0$ and $\sum_{i=0}^k \lambda_i = 0$.[56] This formulation captures the idea that no point lies in the affine hyperplane defined by a nontrivial combination of the others with weights summing to zero.[57] Affine independence thus provides a coordinate-free perspective, independent of the choice of origin, unlike linear independence which is tied to the vector space structure.[54]

Geometrically, in Euclidean space $\mathbb{R}^n$, any set of affinely independent points spans an affine subspace of dimension equal to the number of points minus one, forming the vertices of a simplex.[57] The maximum size of an affinely independent set in $\mathbb{R}^n$ is $n + 1$; for instance, in $\mathbb{R}^2$, up to three non-collinear points can be affinely independent, as they form a triangle, while any four points in the plane are necessarily affinely dependent.[56][57]

Affine independence relates directly to linear independence through translation: if points are affinely independent, then the differences $p_i - p_0$ for $i = 1, \dots, k$ are linearly independent, and the affine span of the points has dimension equal to the linear dimension of this difference set.[54] A key theorem states that the affine dimension of a set $S$ is the dimension of the linear span of $\{x - y \mid x, y \in S\}$, which is one less than the maximum number of affinely independent points in $S$.[58] This equivalence underscores the role of affine independence in defining the intrinsic geometry of point sets without reference to a fixed origin.[55]

Independence of subspaces

In linear algebra, a family of subspaces $\{U_1, \dots, U_k\}$ of a vector space $V$ over a field $F$ is said to be linearly independent if, for each $i = 1, \dots, k$, the intersection $U_i \cap \left( \sum_{j \neq i} U_j \right) = \{0\}$. This condition ensures that no nonzero element of $U_i$ can be expressed as a linear combination of elements from the other subspaces. Equivalently, the family is linearly independent if the sum $\sum_{i=1}^k U_i$ is a direct sum, meaning every element in the sum admits a unique representation as $\sum_{i=1}^k u_i$ with $u_i \in U_i$ for each $i$.[59]

The direct sum notation $V = \bigoplus_{i=1}^k U_i$ is used when the family $\{U_1, \dots, U_k\}$ is linearly independent and their sum spans the entire space $V$, i.e., $V = \sum_{i=1}^k U_i$. In this case, every vector in $V$ decomposes uniquely into components from each $U_i$, providing a canonical decomposition of the space. This structure is fundamental in decomposing vector spaces into simpler components, such as in the study of invariant subspaces under linear transformations.[60]

A concrete example occurs in the Euclidean space $\mathbb{R}^n$, where the standard coordinate axes, such as the $x$-axis spanned by $(1, 0, \dots, 0)$ and the $y$-axis spanned by $(0, 1, 0, \dots, 0)$ for $n \geq 2$, form a linearly independent family. Here, the intersection of one axis with the sum of the others is trivially $\{0\}$, and the direct sum of all $n$ coordinate axes yields the full space $\mathbb{R}^n$. This illustrates how orthogonal directions contribute independently to the overall structure.[61]

One characterization of linear independence for such a family is that the natural map $\iota: \bigoplus_{i=1}^k U_i \to V$, which sends $(u_1, \dots, u_k) \mapsto \sum_{i=1}^k u_i$, is an isomorphism onto its image $\sum_{i=1}^k U_i$. This isomorphism property highlights the absence of relations between the subspaces beyond their trivial overlaps at the zero vector.[60]

A key consequence of linear independence is the additivity of dimensions: if $\{U_1, \dots, U_k\}$ is linearly independent, then $\dim\left( \sum_{i=1}^k U_i \right) = \sum_{i=1}^k \dim U_i$. This equality holds because bases of the individual subspaces can be concatenated to form a basis for the sum, without redundancy. Conversely, if the dimensions add up in this way for a sum of subspaces, the family must be linearly independent.[61]

While the primary focus here is on vector spaces, the notion of linear independence extends analogously to modules over a ring, where a family of submodules is independent if each intersects the sum of the others trivially, leading to a direct sum decomposition. This generalization appears in the study of module theory, preserving the core ideas of unique decompositions and dimension-like invariants where applicable.[62]
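
The dimension-additivity criterion yields a quick numerical test for independence of subspaces given spanning sets. A sketch assuming NumPy; `subspaces_independent` is our name for the helper:

```python
import numpy as np

def subspaces_independent(*bases):
    """Each argument: a matrix whose columns span one subspace of R^n.
    The family is independent iff dim(sum) equals the sum of the dims."""
    dims = [np.linalg.matrix_rank(B) for B in bases]
    total = np.linalg.matrix_rank(np.hstack(bases))
    return total == sum(dims)

x_axis = np.array([[1.0], [0.0], [0.0]])
y_axis = np.array([[0.0], [1.0], [0.0]])
xy_plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

print(subspaces_independent(x_axis, y_axis))    # True
print(subspaces_independent(x_axis, xy_plane))  # False: axis lies in the plane
```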
