Linear independence
from Wikipedia
[Figure: linearly independent vectors in $\mathbb{R}^3$. Figure: linearly dependent vectors in a plane in $\mathbb{R}^3$.]

In linear algebra, a set of vectors is said to be linearly independent if there exists no vector in the set that is equal to a linear combination of the other vectors in the set. If such a vector exists, then the vectors are said to be linearly dependent. Linear independence is part of the definition of a linear basis.[1]

A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space.

Definition


A sequence of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ from a vector space $V$ is said to be linearly dependent, if there exist scalars $a_1, a_2, \dots, a_k$, not all zero, such that

$$a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_k \mathbf{v}_k = \mathbf{0},$$

where $\mathbf{0}$ denotes the zero vector.

If $k = 1$, this implies that a single vector is linearly dependent if and only if it is the zero vector.

If $k > 1$, linear dependence implies that at least one of the scalars is nonzero, say $a_1$, and the above equation can be written as

$$\mathbf{v}_1 = -\frac{a_2}{a_1}\mathbf{v}_2 - \cdots - \frac{a_k}{a_1}\mathbf{v}_k.$$

Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of the others.

A sequence of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ is said to be linearly independent if it is not linearly dependent, that is, if the equation

$$a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_k \mathbf{v}_k = \mathbf{0}$$

can only be satisfied by $a_i = 0$ for $i = 1, \dots, k$. This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of $\mathbf{0}$ as a linear combination of its vectors is the trivial representation in which all the scalars are zero.[2] Even more concisely, a sequence of vectors is linearly independent if and only if $\mathbf{0}$ can be represented as a linear combination of its vectors in a unique way.

If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependency of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for a finite set of vectors: a finite set of vectors is linearly independent if the sequence obtained by ordering them is linearly independent. In other words, one has the following result that is often useful.

A sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent.

Infinite case


An infinite set of vectors is linearly independent if every finite subset is linearly independent. This definition applies also to finite sets of vectors, since a finite set is a finite subset of itself, and every subset of a linearly independent set is also linearly independent.

Conversely, an infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set.

An indexed family of vectors is linearly independent if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise, the family is said to be linearly dependent.

A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in x over the reals has the (infinite) subset {1, x, x2, ...} as a basis.

Definition via span


Let $V$ be a vector space. A set $S$ of vectors in $V$ is linearly independent if and only if $S$ is a minimal element of

$$\{\, T \subseteq V : \operatorname{span}(T) = \operatorname{span}(S) \,\}$$

by the inclusion order. In contrast, $S$ is linearly dependent if it has a proper subset whose span contains $\operatorname{span}(S)$.

Geometric examples

  • $\vec u$ and $\vec v$ are independent and define the plane P.
  • $\vec u$, $\vec v$ and $\vec w$ are dependent because all three are contained in the same plane.
  • $\vec u$ and $\vec j$ are dependent because they are parallel to each other.
  • $\vec u$, $\vec v$ and $\vec k$ are independent because $\vec u$ and $\vec v$ are independent of each other and $\vec k$ is not a linear combination of them or, equivalently, because they do not belong to a common plane. The three vectors define a three-dimensional space.
  • The vectors $\vec o$ (null vector, whose components are equal to zero) and $\vec k$ are dependent since $\vec o = 0 \vec k$.

Geographic location


A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is true, but it is not necessary to find the location.

In this example the "3 miles north" vector and the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors linearly dependent, that is, one of the three vectors is unnecessary to define a specific location on a plane.

Also note that if altitude is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general, n linearly independent vectors are required to describe all locations in n-dimensional space.
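The geographic example can be sketched numerically. This is an illustrative check (the coordinate convention, x = miles east and y = miles north, is an assumption): the "north" and "east" vectors already have rank 2, and adding the "northeast" vector, which is their sum, does not raise the rank.

```python
import numpy as np

# Hypothetical coordinates: x = miles east, y = miles north.
north = np.array([0.0, 3.0])   # "3 miles north"
east = np.array([4.0, 0.0])    # "4 miles east"
northeast = north + east       # "5 miles northeast": length 5, redundant

# The first two vectors are linearly independent: the matrix has rank 2.
rank_two = np.linalg.matrix_rank(np.column_stack([north, east]))

# Adding the third vector does not increase the rank: the set is dependent.
rank_three = np.linalg.matrix_rank(np.column_stack([north, east, northeast]))

print(rank_two, rank_three, np.linalg.norm(northeast))  # 2 2 5.0
```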

Evaluating linear independence


The zero vector


If one or more vectors from a given sequence of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ is the zero vector $\mathbf{0}$ then the vectors are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that $i$ is an index (i.e. an element of $\{1, \dots, k\}$) such that $\mathbf{v}_i = \mathbf{0}$. Then let $a_i := 1$ (alternatively, letting $a_i$ be equal to any other non-zero scalar will also work) and then let all other scalars be $0$ (explicitly, this means that for any index $j$ other than $i$, let $a_j := 0$, so that consequently $a_j \mathbf{v}_j = 0 \mathbf{v}_j = \mathbf{0}$). Simplifying $a_1 \mathbf{v}_1 + \cdots + a_k \mathbf{v}_k$ gives:

$$a_1 \mathbf{v}_1 + \cdots + a_k \mathbf{v}_k = a_i \mathbf{v}_i = 1 \cdot \mathbf{0} = \mathbf{0}.$$

Because not all scalars are zero (in particular, $a_i = 1 \neq 0$), this proves that the vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ are linearly dependent.

As a consequence, the zero vector cannot possibly belong to any collection of vectors that is linearly independent.

Now consider the special case where the sequence has length $1$ (i.e. the case where $k = 1$). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. Explicitly, if $\mathbf{v}$ is any vector then the sequence $(\mathbf{v})$ (which is a sequence of length $1$) is linearly dependent if and only if $\mathbf{v} = \mathbf{0}$; alternatively, the collection $\{\mathbf{v}\}$ is linearly independent if and only if $\mathbf{v} \neq \mathbf{0}$.

Linear dependence and independence of two vectors


This example considers the special case where there are exactly two vectors $\mathbf{u}$ and $\mathbf{v}$ from some real or complex vector space. The vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent if and only if at least one of the following is true:

  1. $\mathbf{u}$ is a scalar multiple of $\mathbf{v}$ (explicitly, this means that there exists a scalar $c$ such that $\mathbf{u} = c \mathbf{v}$) or
  2. $\mathbf{v}$ is a scalar multiple of $\mathbf{u}$ (explicitly, this means that there exists a scalar $c$ such that $\mathbf{v} = c \mathbf{u}$).

If $\mathbf{u} = \mathbf{0}$ then by setting $c := 0$ we have $\mathbf{u} = c \mathbf{v}$ (this equality holds no matter what the value of $\mathbf{v}$ is), which shows that (1) is true in this particular case. Similarly, if $\mathbf{v} = \mathbf{0}$ then (2) is true because $\mathbf{v} = 0 \mathbf{u}$. If $\mathbf{u} = \mathbf{v}$ (for instance, if they are both equal to the zero vector $\mathbf{0}$) then both (1) and (2) are true (by using $c := 1$ for both).

If $\mathbf{u} = c \mathbf{v}$ then $\mathbf{u} \neq \mathbf{0}$ is only possible if $c \neq 0$ and $\mathbf{v} \neq \mathbf{0}$; in this case, it is possible to multiply both sides by $\frac{1}{c}$ to conclude $\mathbf{v} = \frac{1}{c} \mathbf{u}$. This shows that if $\mathbf{u} \neq \mathbf{0}$ and $\mathbf{v} \neq \mathbf{0}$ then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly independent). If $\mathbf{u} = c \mathbf{v}$ but instead $\mathbf{u} = \mathbf{0}$ then at least one of $c$ and $\mathbf{v}$ must be zero. Moreover, if exactly one of $\mathbf{u}$ and $\mathbf{v}$ is $\mathbf{0}$ (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false).

The vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly independent if and only if $\mathbf{u}$ is not a scalar multiple of $\mathbf{v}$ and $\mathbf{v}$ is not a scalar multiple of $\mathbf{u}$.
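For plane vectors, the scalar-multiple criterion reduces to a single determinant. The helper below is a sketch of that test (the function name and tolerance are choices made here, not part of the article):

```python
import numpy as np

def independent_2d(u, v, tol=1e-12):
    """Two vectors in R^2 are linearly independent iff the determinant
    u1*v2 - u2*v1 (the signed parallelogram area) is nonzero, i.e.
    iff neither is a scalar multiple of the other."""
    return abs(u[0] * v[1] - u[1] * v[0]) > tol

print(independent_2d((1, 0), (0, 1)))  # True: not scalar multiples
print(independent_2d((1, 2), (2, 4)))  # False: (2, 4) = 2 * (1, 2)
print(independent_2d((0, 0), (3, 4)))  # False: contains the zero vector
```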

Vectors in R2


Three vectors: Consider the set of vectors $\mathbf{v}_1 = (1, 1)$, $\mathbf{v}_2 = (-3, 2)$ and $\mathbf{v}_3 = (2, 4)$, then the condition for linear dependence seeks a set of non-zero scalars $a_1, a_2, a_3$ such that

$$a_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + a_2 \begin{pmatrix} -3 \\ 2 \end{pmatrix} + a_3 \begin{pmatrix} 2 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

or

$$\begin{pmatrix} 1 & -3 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = -a_3 \begin{pmatrix} 2 \\ 4 \end{pmatrix}.$$

Row reduce this matrix equation by subtracting the first row from the second to obtain

$$\begin{pmatrix} 1 & -3 \\ 0 & 5 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = -a_3 \begin{pmatrix} 2 \\ 2 \end{pmatrix}.$$

Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying by 3 and adding to the first row, that is

$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = -a_3 \begin{pmatrix} 16/5 \\ 2/5 \end{pmatrix}.$$

Rearranging this equation allows us to obtain

$$\begin{pmatrix} 2 \\ 4 \end{pmatrix} = \frac{16}{5} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + \frac{2}{5} \begin{pmatrix} -3 \\ 2 \end{pmatrix},$$

which shows that non-zero $a_i$ exist such that $\mathbf{v}_3$ can be defined in terms of $\mathbf{v}_1$ and $\mathbf{v}_2$. Thus, the three vectors are linearly dependent.
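The hand row reduction can be cross-checked numerically. Assuming, for concreteness, the example vectors $(1,1)$, $(-3,2)$ and $(2,4)$ (an illustrative choice), solving a $2 \times 2$ system recovers the coefficients expressing the third vector in terms of the first two:

```python
import numpy as np

# Illustrative example vectors in R^2 (an assumed choice):
v1, v2, v3 = np.array([1.0, 1.0]), np.array([-3.0, 2.0]), np.array([2.0, 4.0])

# Solve [v1 v2] c = v3; a solution means v3 is a combination of v1 and v2,
# so the three vectors are linearly dependent.
c = np.linalg.solve(np.column_stack([v1, v2]), v3)

print(c)                                       # approx. [3.2, 0.4] = [16/5, 2/5]
print(np.allclose(c[0] * v1 + c[1] * v2, v3))  # True
```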

Two vectors: Now consider the linear dependence of the two vectors $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2)$, and check

$$a_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + a_2 \begin{pmatrix} -3 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

or

$$\begin{pmatrix} 1 & -3 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

The same row reduction presented above yields

$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

This shows that $a_1 = a_2 = 0$, which means that the vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent.

Vectors in R4


In order to determine if the three vectors in $\mathbb{R}^4$

$$\mathbf{v}_1 = \begin{pmatrix} 1 \\ 4 \\ 2 \\ -3 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 7 \\ 10 \\ -4 \\ -1 \end{pmatrix}, \quad \mathbf{v}_3 = \begin{pmatrix} -2 \\ 1 \\ 5 \\ -4 \end{pmatrix}$$

are linearly dependent, form the matrix equation

$$\begin{pmatrix} 1 & 7 & -2 \\ 4 & 10 & 1 \\ 2 & -4 & 5 \\ -3 & -1 & -4 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.$$

Row reduce this equation to obtain

$$\begin{pmatrix} 1 & 7 & -2 \\ 0 & -18 & 9 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.$$

Rearrange to solve for $\mathbf{v}_3$ and obtain

$$\mathbf{v}_3 = \tfrac{3}{2} \mathbf{v}_1 - \tfrac{1}{2} \mathbf{v}_2.$$

This equation is easily solved to define non-zero $a_i$:

$$a_1 = -\tfrac{3}{2} a_3, \quad a_2 = \tfrac{1}{2} a_3,$$

where $a_3$ can be chosen arbitrarily. Thus, the vectors $\mathbf{v}_1$, $\mathbf{v}_2$ and $\mathbf{v}_3$ are linearly dependent.
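The same conclusion follows from a rank computation. Assuming, for illustration, the vectors $(1,4,2,-3)$, $(7,10,-4,-1)$ and $(-2,1,5,-4)$ in $\mathbb{R}^4$, the $4 \times 3$ matrix of columns has rank 2, which is less than 3, so the columns are dependent:

```python
import numpy as np

# Assumed example vectors in R^4 (for illustration):
v1 = np.array([1.0, 4.0, 2.0, -3.0])
v2 = np.array([7.0, 10.0, -4.0, -1.0])
v3 = np.array([-2.0, 1.0, 5.0, -4.0])

A = np.column_stack([v1, v2, v3])   # 4x3 matrix with the vectors as columns
rank = np.linalg.matrix_rank(A)

print(rank)                                    # 2, less than 3 => dependent
print(np.allclose(-3 * v1 + v2 + 2 * v3, 0))   # one explicit dependency
```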

Alternative method using determinants


An alternative method relies on the fact that $n$ vectors in $\mathbb{R}^n$ are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero.

In this case, the matrix formed by the vectors $(1, 1)$ and $(-3, 2)$ is

$$A = \begin{pmatrix} 1 & -3 \\ 1 & 2 \end{pmatrix}.$$

We may write a linear combination of the columns as

$$A \Lambda = \begin{pmatrix} 1 & -3 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} \lambda_1 \\ \lambda_2 \end{pmatrix}.$$

We are interested in whether $A \Lambda = \mathbf{0}$ for some nonzero vector $\Lambda$. This depends on the determinant of $A$, which is

$$\det A = 1 \cdot 2 - 1 \cdot (-3) = 5 \neq 0.$$

Since the determinant is non-zero, the vectors $(1, 1)$ and $(-3, 2)$ are linearly independent.
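The determinant test is a one-liner in NumPy; the matrix below assumes the illustrative vectors $(1,1)$ and $(-3,2)$ as columns:

```python
import numpy as np

# Columns are the two example vectors (an assumed illustrative choice):
A = np.array([[1.0, -3.0],
              [1.0, 2.0]])

d = np.linalg.det(A)
print(d)  # 1*2 - 1*(-3) = 5; nonzero, so the columns are independent
```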

Otherwise, suppose we have $m$ vectors of $n$ coordinates, with $m \leq n$. Then $A$ is an $n \times m$ matrix and $\Lambda$ is a column vector with $m$ entries, and we are again interested in $A \Lambda = \mathbf{0}$. As we saw previously, this is equivalent to a list of $n$ equations. Consider the first $m$ rows of $A$, giving the first $m$ equations; any solution of the full list of equations must also be true of the reduced list. In fact, if $\langle i_1, \dots, i_m \rangle$ is any list of $m$ rows, then the equation

$$A_{\langle i_1, \dots, i_m \rangle} \Lambda = \mathbf{0}$$

must be true for those rows, where $A_{\langle i_1, \dots, i_m \rangle}$ denotes the $m \times m$ submatrix formed from those rows of $A$.

Furthermore, the reverse is true. That is, we can test whether the $m$ vectors are linearly dependent by testing whether

$$\det A_{\langle i_1, \dots, i_m \rangle} = 0$$

for all possible lists of $m$ rows. (In case $m = n$, this requires only one determinant, as above. If $m > n$, then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available.
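A direct, if inefficient, implementation of this all-minors test can be sketched as follows (the function name and tolerance are choices made here; as the text notes, rank-based methods are preferable in practice):

```python
import numpy as np
from itertools import combinations

def dependent_by_minors(A, tol=1e-9):
    """Test the m columns of an n x m matrix (n >= m) for linear
    dependence by checking every m x m minor built from m of the n
    rows; the columns are dependent iff all such determinants vanish."""
    n, m = A.shape
    return all(abs(np.linalg.det(A[list(rows), :])) <= tol
               for rows in combinations(range(n), m))

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # 2nd column = 2 * 1st
print(dependent_by_minors(A))  # True: every 2x2 minor is zero
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(dependent_by_minors(B))  # False: some 2x2 minor is nonzero
```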

More vectors than dimensions


If there are more vectors than dimensions, the vectors are linearly dependent. This is illustrated in the example above of three vectors in $\mathbb{R}^2$.

Natural basis vectors


Let $V = \mathbb{R}^n$ and consider the following elements in $V$, known as the natural basis vectors:

$$\mathbf{e}_1 = (1, 0, 0, \dots, 0), \quad \mathbf{e}_2 = (0, 1, 0, \dots, 0), \quad \dots, \quad \mathbf{e}_n = (0, 0, 0, \dots, 1).$$

Then $\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n$ are linearly independent.

Proof

Suppose that $a_1, a_2, \dots, a_n$ are real numbers such that

$$a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n = \mathbf{0}.$$

Since

$$a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n = (a_1, a_2, \dots, a_n),$$

then $a_i = 0$ for all $i = 1, \dots, n$.

Linear independence of functions


Let $V$ be the vector space of all differentiable functions of a real variable $t$. Then the functions $e^t$ and $e^{2t}$ in $V$ are linearly independent.

Proof


Suppose $a$ and $b$ are two real numbers such that

$$a e^t + b e^{2t} = 0$$

for all values of $t$. Take the first derivative of the above equation:

$$a e^t + 2b e^{2t} = 0$$

for all values of $t$. We need to show that $a = 0$ and $b = 0$. In order to do this, we subtract the first equation from the second, giving $b e^{2t} = 0$. Since $e^{2t}$ is not zero for some $t$, $b = 0$. It follows that $a = 0$ too, because the first equation then reduces to $a e^t = 0$ with $e^t \neq 0$. Therefore, according to the definition of linear independence, $e^t$ and $e^{2t}$ are linearly independent.
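A numerical sanity check is possible by sampling: if $a e^t + b e^{2t} = 0$ held for all $t$ with $(a, b) \neq (0, 0)$, it would in particular hold at any chosen sample points, so the sampled column vectors would be dependent. Finding that the samples have full rank therefore confirms independence of the functions (the sample points are an arbitrary choice made here):

```python
import numpy as np

# Sample t -> e^t and t -> e^(2t) at a few arbitrary points; a nontrivial
# relation a*e^t + b*e^(2t) = 0 for all t would force these columns to be
# linearly dependent.
t = np.array([0.0, 0.5, 1.0])
A = np.column_stack([np.exp(t), np.exp(2 * t)])

print(np.linalg.matrix_rank(A))  # 2: only a = b = 0 fits all samples
```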

Space of linear dependencies


A linear dependency or linear relation among vectors $\mathbf{v}_1, \dots, \mathbf{v}_n$ is a tuple $(a_1, \dots, a_n)$ with $n$ scalar components such that

$$a_1 \mathbf{v}_1 + \cdots + a_n \mathbf{v}_n = \mathbf{0}.$$

If such a linear dependence exists with at least one nonzero component, then the $n$ vectors are linearly dependent. Linear dependencies among $\mathbf{v}_1, \dots, \mathbf{v}_n$ form a vector space.

If the vectors are expressed by their coordinates, then the linear dependencies are the solutions of a homogeneous system of linear equations, with the coordinates of the vectors as coefficients. A basis of the vector space of linear dependencies can therefore be computed by Gaussian elimination.
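One way to compute a basis of this space of dependencies numerically can be sketched as follows. This uses the SVD rather than the Gaussian elimination mentioned above (a numerically robust substitute; NumPy has no built-in row-reduction helper): the right-singular vectors associated with zero singular values span the null space of the coefficient matrix.

```python
import numpy as np

def dependency_basis(vectors, tol=1e-10):
    """Columns of the returned matrix form a basis for the space of
    linear dependencies, i.e. the null space of the matrix whose
    columns are the given vectors, computed via the SVD."""
    A = np.column_stack(vectors)
    _, s, vh = np.linalg.svd(A)
    null_mask = np.zeros(vh.shape[0], dtype=bool)
    null_mask[: len(s)] = s <= tol   # rows of Vh with tiny singular values
    null_mask[len(s):] = True        # extra rows of Vh are null vectors too
    return vh[null_mask].T

# (1,1), (-3,2), (2,4): one dependency, proportional to (16/5, 2/5, -1).
vecs = [np.array([1.0, 1.0]), np.array([-3.0, 2.0]), np.array([2.0, 4.0])]
N = dependency_basis(vecs)
print(N.shape)  # (3, 1): a one-dimensional space of dependencies
```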

Generalizations


Affine independence


A set of vectors is said to be affinely dependent if at least one of the vectors in the set can be defined as an affine combination of the others. Otherwise, the set is called affinely independent. Any affine combination is a linear combination; therefore every affinely dependent set is linearly dependent. Contrapositively, every linearly independent set is affinely independent. Note that an affinely independent set is not necessarily linearly independent.

Consider a set of $m$ vectors $\mathbf{v}_1, \dots, \mathbf{v}_m$ of size $n$ each, and consider the set of $m$ augmented vectors $\begin{pmatrix} 1 \\ \mathbf{v}_1 \end{pmatrix}, \dots, \begin{pmatrix} 1 \\ \mathbf{v}_m \end{pmatrix}$ of size $n + 1$ each. The original vectors are affinely independent if and only if the augmented vectors are linearly independent.[3]: 256 
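The augmentation trick translates directly into code: append a constant 1 to each vector and test the augmented set for linear independence via rank (the helper name and tolerance are choices made here for illustration):

```python
import numpy as np

def affinely_independent(points, tol=1e-10):
    """Points are affinely independent iff the augmented vectors (1, x)
    are linearly independent, i.e. the augmented matrix has full
    column rank."""
    P = np.column_stack(points)
    augmented = np.vstack([P, np.ones(P.shape[1])])  # append the 1-row
    return np.linalg.matrix_rank(augmented, tol=tol) == P.shape[1]

# Three non-collinear points in the plane are affinely independent:
print(affinely_independent([np.array([0.0, 0.0]),
                            np.array([1.0, 0.0]),
                            np.array([0.0, 1.0])]))  # True
# Three collinear points are not:
print(affinely_independent([np.array([0.0, 0.0]),
                            np.array([1.0, 1.0]),
                            np.array([2.0, 2.0])]))  # False
```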

Linearly independent vector subspaces


Two vector subspaces $M$ and $N$ of a vector space $X$ are said to be linearly independent if $M \cap N = \{0\}$.[4] More generally, a collection $M_1, \dots, M_d$ of subspaces of $X$ are said to be linearly independent if $M_i \cap \sum_{k \neq i} M_k = \{0\}$ for every index $i$.[4] The vector space $X$ is said to be a direct sum of $M_1, \dots, M_d$ if these subspaces are linearly independent and $X = M_1 + \cdots + M_d$.

See also

  • Matroid – Abstraction of linear independence of vectors

References

from Grokipedia
Linear independence is a fundamental concept in linear algebra that characterizes sets or families of vectors within a vector space. A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ is linearly independent if the equation $c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \dots + c_k \mathbf{v}_k = \mathbf{0}$ holds only when all scalars $c_1 = c_2 = \dots = c_k = 0$, meaning no vector in the set can be expressed as a nontrivial linear combination of the others. This property distinguishes linearly independent collections from linearly dependent ones, where at least one vector is redundant as a linear combination of the rest.

Linear independence is essential for defining the structure of vector spaces, particularly in relation to span, basis, and dimension. A basis is a linearly independent set of vectors that spans the entire vector space, providing a minimal generating set for all elements in the space. The dimension of a vector space is the cardinality of any such basis, which remains consistent regardless of the choice of basis, and it determines the maximum size of a linearly independent subset. For instance, in $\mathbb{R}^n$, the standard basis vectors form a linearly independent set of size $n$, confirming the dimension is $n$.

Beyond theoretical foundations, linear independence has practical implications in matrix theory and applications. The columns (or rows) of a matrix are linearly independent if and only if the associated homogeneous system has only the trivial solution, which relates directly to the matrix's rank and invertibility. This concept extends to solving systems of linear equations, to optimization, and to analyzing data structures in fields such as physics and engineering, where it ensures non-redundant representations.

Definitions

Finite-dimensional vector spaces

In the context of finite-dimensional vector spaces, the concepts of vector spaces, linear combinations, and the zero vector are foundational prerequisites. A vector space $V$ over a field $F$ consists of vectors that can be added and scaled by elements of $F$, with the zero vector $\mathbf{0}$ serving as the additive identity. Linear combinations are sums of the form $\sum a_i \mathbf{v}_i$, where $a_i \in F$ and $\mathbf{v}_i \in V$.

A finite set of vectors $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ in a vector space $V$ over a field $F$ is linearly independent if the only solution in $F$ to the equation $a_1 \mathbf{v}_1 + \dots + a_n \mathbf{v}_n = \mathbf{0}$ is $a_1 = \dots = a_n = 0$. This condition ensures that no vector in the set can be expressed as a linear combination of the others. Formally, the set is linearly independent if

$$\sum_{i=1}^n a_i \mathbf{v}_i = \mathbf{0} \quad \implies \quad a_i = 0 \quad \forall i = 1, \dots, n.$$

The negation of this property defines linear dependence: a set is linearly dependent if there exist scalars $a_1, \dots, a_n \in F$, not all zero, such that $\sum_{i=1}^n a_i \mathbf{v}_i = \mathbf{0}$.

The concept of linear independence was formalized by Giuseppe Peano in his 1888 work Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann, where he provided the first axiomatic treatment of vector spaces over the reals, building on earlier ideas from mathematicians like Grassmann and Möbius. Although detailed discussions of span and bases appear later in the theory, linear independence is essential for identifying sets that form bases in finite-dimensional spaces.

Infinite-dimensional vector spaces

In infinite-dimensional vector spaces, the notion of linear independence extends to possibly infinite families of vectors indexed by a set $I$, which may be countably or uncountably infinite. A family $\{v_i \mid i \in I\}$ is linearly independent if, for every finite $J \subseteq I$, the only solution to the equation $\sum_{j \in J} a_j v_j = 0$ is $a_j = 0$ for all $j \in J$. This condition ensures that no nontrivial finite linear combination of the vectors vanishes, mirroring the finite-dimensional case but restricting nontrivial relations to finite subcollections.

A key structure in this context is the Hamel basis, defined as a linearly independent set that spans the vector space algebraically, meaning every vector in the space can be expressed as a finite linear combination of basis elements. By Zorn's lemma (an equivalent of the axiom of choice), every vector space possesses a Hamel basis, though in infinite dimensions this basis is typically uncountable and its explicit construction is impossible without additional assumptions. For instance, in the Hilbert space $\ell^2$ of square-summable sequences, any Hamel basis must be uncountable, as the space has cardinality $2^{\aleph_0}$ and cannot be spanned algebraically by a countable set using only finite combinations.

The standard orthonormal basis $\{e_n\}_{n=1}^\infty$ in $\ell^2$, where $e_n$ has a 1 in the $n$th position and zeros elsewhere, provides an example of a countably infinite linearly independent set. However, this set does not form a Hamel basis, as its algebraic span consists only of sequences with finite support, which is a proper subspace of $\ell^2$. In contrast, Schauder bases in topological vector spaces like $\ell^2$ permit spanning via infinite convergent linear combinations, highlighting a distinction from the purely algebraic Hamel framework.

A fundamental subtlety in infinite dimensions is that linear independence governs only finite combinations, so spanning sets like Hamel bases require potentially uncountably many elements to cover the space without infinite sums, unlike finite-dimensional cases where independence ties directly to dimension. This algebraic restriction often renders Hamel bases impractical for analysis in spaces equipped with a topology, such as Banach or Hilbert spaces.

Equivalent characterizations

A finite set of vectors $\{v_1, \dots, v_n\}$ in a vector space $V$ over a field $F$ is linearly independent if and only if the dimension of its span is $n$, meaning the vectors form a basis for $\operatorname{span}\{v_1, \dots, v_n\}$. This equivalence holds because linear independence ensures no redundancies, allowing the set to achieve the maximal dimension, equal to its cardinality, within the subspace it generates. To see this, consider a proof by induction on $n$. For $n = 1$, the set $\{v_1\}$ is linearly independent if $v_1 \neq 0$, in which case $\dim \operatorname{span}\{v_1\} = 1$. Assume the statement holds for sets of size $k - 1$. For a set of size $k$, the subset $\{v_1, \dots, v_{k-1}\}$ is linearly independent, so $\dim \operatorname{span}\{v_1, \dots, v_{k-1}\} = k - 1$ by the induction hypothesis. The full set is linearly independent if and only if $v_k \notin \operatorname{span}\{v_1, \dots, v_{k-1}\}$, which increases the dimension to $k$.

Equivalently, the set $\{v_1, \dots, v_n\}$ is linearly independent if and only if every vector in $\operatorname{span}\{v_1, \dots, v_n\}$ has a unique representation as a linear combination of the vectors in the set. This uniqueness follows directly from the triviality of the kernel of the coordinate map associating coefficients to linear combinations.

Another characterization uses linear maps: the vectors $\{v_1, \dots, v_n\}$ are linearly independent if and only if the linear map $T: F^n \to V$ defined by $T(e_i) = v_i$, where $\{e_1, \dots, e_n\}$ is the standard basis of $F^n$, is injective. Injectivity means the kernel is trivial, which corresponds precisely to the only solution of $\sum a_i v_i = 0$ being $a_i = 0$ for all $i$.

A set $S$ is linearly independent if and only if it can be extended to a basis of the ambient vector space $V$ (assuming $V$ is finite-dimensional). In finite dimensions, starting from a linearly independent $S$, one can iteratively add vectors from a spanning set until spanning $V$, preserving independence at each step; the converse follows from subsets of bases being independent. Thus, the vectors $\{v_1, \dots, v_n\}$ form a basis for their span if and only if they are linearly independent, as they trivially span $\operatorname{span}\{v_1, \dots, v_n\}$.

Geometric and Visual Interpretations

In two-dimensional space

In two-dimensional Euclidean space $\mathbb{R}^2$, the concept of linear independence for vectors gains an intuitive geometric interpretation. A set consisting of two vectors is linearly independent if they are not collinear, meaning neither vector is a scalar multiple of the other. Geometrically, such vectors point in different directions and form the sides of a parallelogram with positive area, allowing them to span the entire plane. In contrast, collinear vectors lie along the same line and span only that line, forming a degenerate parallelogram with zero area.

For example, the vectors $\mathbf{e}_1 = (1, 0)$ and $\mathbf{e}_2 = (0, 1)$ are linearly independent, as they are orthogonal and together span all of $\mathbb{R}^2$. On the other hand, the vectors $(1, 0)$ and $(2, 0)$ are linearly dependent, since $(2, 0) = 2 \cdot (1, 0)$, and they both lie along the x-axis. Similarly, $(1, 2)$ and $(2, 4)$ are dependent because $(2, 4) = 2 \cdot (1, 2)$. A single nonzero vector in $\mathbb{R}^2$, such as $(1, 1)$, is linearly independent, as the equation $c \cdot (1, 1) = (0, 0)$ implies $c = 0$. However, the zero vector $(0, 0)$ by itself is linearly dependent, since $1 \cdot (0, 0) = (0, 0)$ with a nonzero scalar.

Any set of three or more vectors in $\mathbb{R}^2$ is always linearly dependent, as the space has dimension 2 and cannot contain more than two linearly independent vectors; at least one vector must lie in the span of the others. Non-collinear pairs span the full plane, while collinear ones are confined to a one-dimensional line. To check dependence for two vectors $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$, compute the determinant $u_1 v_2 - u_2 v_1$; the vectors are dependent if this equals zero, since it measures the signed area of the parallelogram they form.

In higher-dimensional spaces

In three-dimensional space $\mathbb{R}^3$, three vectors are linearly independent if they span the entire space without being coplanar, so that the parallelepiped they define with the origin has positive volume. For instance, the vectors $\mathbf{e}_1 = (1,0,0)$, $\mathbf{e}_2 = (0,1,0)$, and $\mathbf{e}_3 = (0,0,1)$ are linearly independent, as they align along mutually orthogonal axes and collectively span $\mathbb{R}^3$. In contrast, any set including the zero vector, or in which one vector is a scalar multiple of another, fails to add a new direction and is thus dependent.

This geometric intuition generalizes to $\mathbb{R}^n$ for $n > 3$, where a set of $k$ vectors (with $k \leq n$) is linearly independent if their span forms a full $k$-dimensional subspace without dimensional collapse, meaning each successive vector extends the span by one dimension. However, in $\mathbb{R}^n$, any collection of $n + 1$ vectors must be linearly dependent, as they can occupy at most an $n$-dimensional space and thus cannot all contribute unique directions. Visually, linear independence in higher dimensions preserves a "full rank" configuration, where the vectors maintain their maximal possible spread; dependence, conversely, causes a flattening into a lower-dimensional subspace, such as vectors collapsing onto a hyperplane. Fundamentally, a set of vectors is linearly independent if and only if they do not all lie within any proper subspace of dimension less than the size of the set.

Determination Methods

For two or three vectors

For two vectors $\mathbf{u}$ and $\mathbf{v}$ in a vector space over a field, the set $\{\mathbf{u}, \mathbf{v}\}$ is linearly independent if and only if neither vector is the zero vector and $\mathbf{v}$ is not a scalar multiple of $\mathbf{u}$. This condition ensures that the only solution to the equation $a \mathbf{u} + b \mathbf{v} = \mathbf{0}$ is the trivial solution $a = b = 0$. To verify linear independence for two nonzero vectors, one can check whether $\mathbf{v}$ lies in the span of $\{\mathbf{u}\}$, which occurs if and only if there exists a scalar $c$ such that $\mathbf{v} = c \mathbf{u}$. If no such scalar exists, the vectors are linearly independent. Geometrically, in the plane, this corresponds to the vectors not being collinear.

For three vectors $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$ in $\mathbb{R}^3$, the set $\{\mathbf{u}, \mathbf{v}, \mathbf{w}\}$ is linearly independent if the equation $a \mathbf{u} + b \mathbf{v} + c \mathbf{w} = \mathbf{0}$ has only the trivial solution $a = b = c = 0$. The vectors are linearly dependent if there exists a nontrivial solution, meaning at least one scalar is nonzero. Three vectors in $\mathbb{R}^3$ can be checked by forming the matrix with them as columns and computing its determinant; independence holds if and only if $\det \neq 0$. Geometrically, this means the vectors are not coplanar.

Consider the vectors $(1,1)$, $(1,2)$, and $(2,3)$ in $\mathbb{R}^2$: these are linearly dependent because $(2,3) = 1 \cdot (1,1) + 1 \cdot (1,2)$. Any set containing the zero vector is linearly dependent, as $1 \cdot \mathbf{0} + 0 \cdot \mathbf{u} + 0 \cdot \mathbf{v} = \mathbf{0}$ provides a nontrivial linear combination yielding zero.

In two dimensions, a step-by-step check for two vectors $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$ involves computing the 2D cross product analog, given by the determinant $u_1 v_2 - u_2 v_1$; the vectors are linearly independent if and only if this value is nonzero.

Matrix-based approaches

One effective way to determine the linear independence of a set of $k$ vectors in $\mathbb{R}^n$ is to form an $n \times k$ matrix $A$ whose columns are these vectors. The set is linearly independent if and only if $A$ has full column rank, meaning $\operatorname{rank}(A) = k$. This condition ensures that the columns span a $k$-dimensional subspace without redundancy. Equivalently, the columns of $A$ are linearly independent if and only if the homogeneous equation $A\mathbf{x} = \mathbf{0}$ has only the trivial solution $\mathbf{x} = \mathbf{0}$, indicating that the kernel (null space) of $A$ is trivial. This kernel characterization directly ties linear independence to the invertibility properties of the linear transformation represented by $A$.

To compute the rank and verify full column rank, Gaussian elimination can be applied to row-reduce $A$ to echelon form. The vectors are linearly independent if and only if the reduced form has $k$ pivot positions, one in each column. This method systematically identifies dependencies by revealing the number of independent columns through the pivot count.

For the special case where $k = n$ (a square matrix), the vectors are linearly independent if and only if $\det(A) \neq 0$. A nonzero determinant confirms that $A$ is invertible, implying full rank and thus linear independence of the columns.