Orthogonal complement
from Wikipedia

In the mathematical fields of linear algebra and functional analysis, the orthogonal complement of a subspace $W$ of a vector space $V$ equipped with a bilinear form $B$ is the set $W^\perp$ of all vectors in $V$ that are orthogonal to every vector in $W$. Informally, it is called the perp, short for perpendicular complement. It is a subspace of $V$.

Example

Let $V = \mathbb{R}^5$ be the vector space equipped with the usual dot product $\langle \cdot, \cdot \rangle$ (thus making it an inner product space), and let $W = \{ u \in V : Ax = u,\ x \in \mathbb{R}^2 \}$ with
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 2 & 6 \\ 3 & 9 \\ 5 & 3 \end{pmatrix};$$
then its orthogonal complement $W^\perp = \{ v \in V : \langle u, v \rangle = 0 \text{ for all } u \in W \}$ can also be defined as $W^\perp = \{ v \in V : \tilde{A} y = v,\ y \in \mathbb{R}^3 \}$, with
$$\tilde{A} = \begin{pmatrix} -2 & -3 & -5 \\ -6 & -9 & -3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

The fact that every column vector in $A$ is orthogonal to every column vector in $\tilde{A}$ can be checked by direct computation. The fact that the spans of these vectors are orthogonal then follows by bilinearity of the dot product. Finally, the fact that these spaces are orthogonal complements follows from the dimension relationships given below.
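
The orthogonality and dimension claims can be checked directly; the following is a minimal numerical sketch using NumPy, with the matrices $A$ and $\tilde{A}$ taken from the example above:

```python
import numpy as np

# Columns of A span W; columns of A_tilde span its orthogonal complement.
A = np.array([[1, 0], [0, 1], [2, 6], [3, 9], [5, 3]], dtype=float)
A_tilde = np.array([[-2, -3, -5], [-6, -9, -3],
                    [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)

# Every column of A is orthogonal to every column of A_tilde:
# the product A^T A_tilde should be the zero matrix.
print(np.allclose(A.T @ A_tilde, 0))  # True

# Dimension check: dim W + dim W-perp = 2 + 3 = 5 = dim R^5.
print(np.linalg.matrix_rank(A) + np.linalg.matrix_rank(A_tilde) == 5)  # True
```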

General bilinear forms

Let $V$ be a vector space over a field $\mathbb{F}$ equipped with a bilinear form $B$. We define $u$ to be left-orthogonal to $v$, and $v$ to be right-orthogonal to $u$, when $B(u, v) = 0$. For a subset $W$ of $V$, define the left-orthogonal complement $W^\perp$ to be
$$W^\perp = \{ x \in V : B(x, y) = 0 \text{ for all } y \in W \}.$$

There is a corresponding definition of the right-orthogonal complement. For a reflexive bilinear form, where $B(u, v) = 0$ implies $B(v, u) = 0$ for all $u, v \in V$, the left and right complements coincide. This will be the case if $B$ is a symmetric or an alternating form.

The definition extends to a bilinear form on a free module over a commutative ring, and to a sesquilinear form extended to include any free module over a commutative ring with conjugation.[1]

Properties

  • An orthogonal complement $X^\perp$ is a subspace of $V$;
  • If $X \subseteq Y$ then $Y^\perp \subseteq X^\perp$;
  • The radical $V^\perp$ of $V$ is a subspace of every orthogonal complement;
  • $X \subseteq (X^\perp)^\perp$;
  • If $B$ is non-degenerate and $V$ is finite-dimensional, then $\dim(X) + \dim(X^\perp) = \dim V$ (a relation verified numerically in the sketch below).
  • If $L_1, \ldots, L_r$ are subspaces of a finite-dimensional space $V$ and $L_* = L_1 \cap \cdots \cap L_r$, then $L_*^\perp = L_1^\perp + \cdots + L_r^\perp$.
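
The dimension relation can be checked computationally for a concrete non-degenerate form. The following is a minimal sketch, assuming the symmetric bilinear form $B(u, v) = u^{\mathsf{T}} G v$ on $\mathbb{R}^3$ with the non-degenerate, indefinite Gram matrix $G = \operatorname{diag}(1, 1, -1)$; the subspace and variable names are illustrative choices, not from the original text:

```python
import numpy as np
from scipy.linalg import null_space

# Non-degenerate symmetric bilinear form B(u, v) = u^T G v on R^3.
G = np.diag([1.0, 1.0, -1.0])

# X spanned by the columns of M (an illustrative choice).
M = np.array([[1.0], [0.0], [1.0]])

# x lies in the orthogonal complement of X iff M^T G x = 0.
X_perp = null_space(M.T @ G)

dim_X = np.linalg.matrix_rank(M)
dim_X_perp = X_perp.shape[1]
print(dim_X + dim_X_perp == 3)        # True: dim X + dim X-perp = dim V

# For an indefinite form the complement need not be a vector-space
# complement: here B(x, x) = 0 on X, so X lies inside its own complement.
print(np.allclose(M.T @ G @ M, 0))    # True
```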

Inner product spaces

This section considers orthogonal complements in an inner product space $H$.[2]

Two vectors $x$ and $y$ are called orthogonal if $\langle x, y \rangle = 0$, which happens if and only if $\|x\| \leq \|x + s y\|$ for all scalars $s$.[3]

If $C$ is any subset of an inner product space $H$ then its orthogonal complement in $H$ is the vector subspace
$$C^\perp := \{ x \in H : \langle x, c \rangle = 0 \text{ for all } c \in C \},$$
which is always a closed subset (hence, a closed vector subspace) of $H$[3][proof 1] that satisfies:

  • $C^\perp = (\operatorname{cl}_H(\operatorname{span} C))^\perp$;
  • $C^\perp \cap \operatorname{cl}_H(\operatorname{span} C) = \{0\}$;
  • $C^\perp \cap \operatorname{span} C = \{0\}$;
  • $C \subseteq (C^\perp)^\perp$;
  • $\operatorname{cl}_H(\operatorname{span} C) \subseteq (C^\perp)^\perp$.

If $C$ is a vector subspace of an inner product space $H$ then $C^\perp = \{ x \in H : \|x\| \leq \|x + c\| \text{ for all } c \in C \}.$ If $C$ is a closed vector subspace of a Hilbert space $H$ then[3] $H = C \oplus C^\perp$, where $H = C \oplus C^\perp$ is called the orthogonal decomposition of $H$ into $C$ and $C^\perp$, and it indicates that $C$ is a complemented subspace of $H$ with complement $C^\perp$.

Properties

The orthogonal complement is always closed in the metric topology. In finite-dimensional spaces, that is merely an instance of the fact that all subspaces of a vector space are closed. In infinite-dimensional Hilbert spaces, some subspaces are not closed, but all orthogonal complements are closed. If $C$ is a vector subspace of a Hilbert space $H$, the orthogonal complement of the orthogonal complement of $C$ is the closure of $C$; that is, $(C^\perp)^\perp = \overline{C}$.

Some other useful properties that always hold are the following. Let $H$ be a Hilbert space and let $X$ and $Y$ be linear subspaces. Then:

  • $X^\perp = \overline{X}^{\,\perp}$;
  • if $Y \subseteq X$ then $X^\perp \subseteq Y^\perp$;
  • $X \cap X^\perp = \{0\}$;
  • $X \subseteq (X^\perp)^\perp$;
  • if $X$ is a closed linear subspace of $H$ then $(X^\perp)^\perp = X$;
  • if $X$ is a closed linear subspace of $H$ then $H = X \oplus X^\perp$, the (inner) direct sum.

The orthogonal complement generalizes to the annihilator, and gives a Galois connection on subsets of the inner product space, with associated closure operator the topological closure of the span.

Finite dimensions

For a finite-dimensional inner product space of dimension $n$, the orthogonal complement of a $k$-dimensional subspace is an $(n-k)$-dimensional subspace, and the double orthogonal complement is the original subspace: $(W^\perp)^\perp = W$.

If $A$ is an $m \times n$ matrix, where $\operatorname{Row} A$, $\operatorname{Col} A$, and $\operatorname{Null} A$ refer to the row space, column space, and null space of $A$ (respectively), then[4]
$$(\operatorname{Row} A)^\perp = \operatorname{Null} A \quad \text{and} \quad (\operatorname{Col} A)^\perp = \operatorname{Null} A^{\mathsf{T}}.$$
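
These identities are easy to check numerically. A minimal sketch using SciPy's `null_space` helper, reusing the $5 \times 2$ matrix $A$ from the earlier example:

```python
import numpy as np
from scipy.linalg import null_space

# The 5x2 matrix A from the R^5 example above.
A = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 6.0], [3.0, 9.0], [5.0, 3.0]])

# (Col A)-perp = Null(A^T): every column of A is orthogonal to Null(A^T).
Nt = null_space(A.T)                 # orthonormal basis, shape (5, 3)
print(np.allclose(A.T @ Nt, 0))      # True

# (Row A)-perp = Null(A): here the rows span R^2, so Null(A) = {0}.
print(null_space(A).shape[1] == 0)   # True

# Dimension count: rank(A) + dim Null(A^T) = 2 + 3 = 5 = m.
print(np.linalg.matrix_rank(A) + Nt.shape[1] == A.shape[0])  # True
```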

Banach spaces

There is a natural analog of this notion in general Banach spaces. In this case one defines the orthogonal complement of $W$ to be a subspace of the dual of $V$ defined similarly as the annihilator
$$W^\perp = \{ x \in V^* : x(y) = 0 \text{ for all } y \in W \}.$$

It is always a closed subspace of $V^*$. There is also an analog of the double complement property. $W^{\perp\perp}$ is now a subspace of $V^{**}$ (which is not identical to $V$). However, the reflexive spaces have a natural isomorphism $i$ between $V$ and $V^{**}$. In this case we have
$$i\overline{W} = W^{\perp\perp}.$$

This is a rather straightforward consequence of the Hahn–Banach theorem.

Applications

In special relativity the orthogonal complement is used to determine the simultaneous hyperplane at a point of a world line. The bilinear form used in Minkowski space determines a pseudo-Euclidean space of events.[5] The origin and all events on the light cone are self-orthogonal. When a time event and a space event evaluate to zero under the bilinear form, then they are hyperbolic-orthogonal. This terminology stems from the use of conjugate hyperbolas in the pseudo-Euclidean plane: conjugate diameters of these hyperbolas are hyperbolic-orthogonal.

from Grokipedia
In linear algebra, the orthogonal complement of a subset $S$ of an inner product space $V$ is the set $S^\perp = \{ v \in V \mid \langle v, s \rangle = 0 \text{ for all } s \in S \}$, consisting of all elements in $V$ that are orthogonal to every element of $S$. This concept generalizes the notion of perpendicularity from Euclidean geometry to abstract vector spaces equipped with an inner product, such as the dot product in $\mathbb{R}^n$. When $S$ is a subspace $W$ of a finite-dimensional inner product space $V$, $W^\perp$ is itself a subspace of $V$, and $V$ decomposes orthogonally as the direct sum $V = W \oplus W^\perp$, meaning every vector in $V$ can be uniquely expressed as the sum of a vector in $W$ and a vector in $W^\perp$. Furthermore, the double orthogonal complement satisfies $(W^\perp)^\perp = W$, and the dimensions relate additively by $\dim W + \dim W^\perp = \dim V$. In the specific case of $\mathbb{R}^n$ with the standard dot product, the orthogonal complement of the row space of a matrix $A$ is the null space of $A$, and vice versa for the column space and left null space, underpinning key results like the rank-nullity theorem. The orthogonal complement plays a central role in applications such as orthogonal projections, where the projection of a vector onto $W$ is the closest point in $W$ to the vector, with the error vector lying in $W^\perp$; this is foundational in least-squares problems. In Hilbert spaces, which are complete inner product spaces, the orthogonal complement extends to infinite dimensions, enabling decompositions essential for spectral theory and the analysis of partial differential equations.

Fundamentals

Definition

In an inner product space, which is a vector space equipped with an inner product (a positive-definite form that generalizes the dot product and allows measurement of lengths and angles), orthogonality between vectors is defined via this structure. Specifically, two vectors $u$ and $v$ in the space are orthogonal if their inner product satisfies $\langle u, v \rangle = 0$, indicating perpendicularity in the geometric sense induced by the inner product. For a subspace $W$ of an inner product space $V$, the orthogonal complement of $W$, denoted $W^\perp$, is the set of all vectors in $V$ that are orthogonal to every vector in $W$. Formally,
$$W^\perp = \{ v \in V \mid \langle v, w \rangle = 0 \ \forall\, w \in W \}.$$
This definition captures the collection of all elements perpendicular to the entire subspace $W$ with respect to the inner product $\langle \cdot, \cdot \rangle$. The notation $W^\perp$ is standard in linear algebra texts, though in some contexts involving dual spaces, the orthogonal complement relates to the annihilator of $W$ under the identification provided by the inner product.

Example

Consider the Euclidean plane $\mathbb{R}^2$ equipped with the standard inner product, which is the dot product. Let $W$ be the one-dimensional subspace spanned by the vector $(1,0)$, corresponding to the x-axis. To compute the orthogonal complement $W^\perp$ algebraically, identify all vectors $(x, y) \in \mathbb{R}^2$ such that $\langle (x, y), (1, 0) \rangle = x \cdot 1 + y \cdot 0 = x = 0$. This condition holds for all vectors of the form $(0, y)$, so $W^\perp = \operatorname{span}\{(0,1)\}$, which is the y-axis. Geometrically, $W^\perp$ consists of all vectors through the origin that are perpendicular to $W$; in this case, it forms the vertical line along the y-axis, orthogonal to the horizontal x-axis. This example demonstrates how the orthogonal complement decomposes the plane into mutually perpendicular directions. In higher dimensions, the orthogonal complement of a one-dimensional subspace like this generalizes to an $(n-1)$-dimensional hyperplane orthogonal to the original direction.

Inner Product Spaces

Properties

In an inner product space $V$, if $W$ is a subspace, then the orthogonal complement $W^\perp$ satisfies $W \cap W^\perp = \{0\}$. To see this, suppose $x \in W \cap W^\perp$; then $\langle x, x \rangle = 0$, which implies $\|x\|^2 = 0$ and thus $x = 0$, using the positive-definiteness of the inner product. The set $W^\perp$ is itself a subspace of $V$, closed under vector addition and scalar multiplication. For vectors $x, y \in W^\perp$ and a scalar $\alpha$, linearity of the inner product gives $\langle x + y, w \rangle = \langle x, w \rangle + \langle y, w \rangle = 0 + 0 = 0$ for all $w \in W$, and similarly $\langle \alpha x, w \rangle = \alpha \langle x, w \rangle = 0$.

For a closed subspace $W$ of a Hilbert space $V$, the double complement property holds: $(W^\perp)^\perp = W$. This follows from showing $W \subseteq (W^\perp)^\perp$ (since if $w \in W$, then $\langle w, z \rangle = 0$ for all $z \in W^\perp$) and using the closedness to ensure equality via the orthogonal projection onto $W$. In a Hilbert space, every closed subspace $W$ admits an orthogonal decomposition: $V = W \oplus W^\perp$. For any $v \in V$, the orthogonal projection $Pv \in W$ satisfies $v - Pv \in W^\perp$, and uniqueness arises because if $v = w_1 + z_1 = w_2 + z_2$ with $w_i \in W$ and $z_i \in W^\perp$, then $w_1 - w_2 = z_2 - z_1 \in W \cap W^\perp = \{0\}$.

If $\{w_1, \dots, w_k\}$ is a basis for the subspace $W$, then $W^\perp$ is the null space of the matrix whose rows are the coordinates of the $w_i$ with respect to an orthonormal basis of $V$. Equivalently, $x \in W^\perp$ if and only if $\langle x, w_i \rangle = 0$ for each $i = 1, \dots, k$, which uses the linearity of the inner product to extend from the basis to the entire span of $W$.
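
As a numerical illustration of the decomposition $v = Pv + (v - Pv)$ and of the null-space characterization of $W^\perp$, here is a minimal NumPy/SciPy sketch; the basis matrix and test vector are illustrative choices:

```python
import numpy as np
from scipy.linalg import null_space

# Basis for W in R^4 as the rows of M (coordinates with respect to the
# standard orthonormal basis); an illustrative choice.
M = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])

W_perp = null_space(M)                # columns: orthonormal basis of W-perp

Q, _ = np.linalg.qr(M.T)              # orthonormal basis for W itself
v = np.array([3.0, 1.0, -2.0, 5.0])
Pv = Q @ (Q.T @ v)                    # orthogonal projection of v onto W

print(np.allclose(M @ (v - Pv), 0))   # True: the error v - Pv lies in W-perp
print(np.allclose(W_perp.T @ Pv, 0))  # True: Pv in W is orthogonal to W-perp
print(np.allclose(Pv + (v - Pv), v))  # True: decomposition v = Pv + (v - Pv)
```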

Finite Dimensions

In finite-dimensional inner product spaces, the orthogonal complement exhibits particularly tractable properties due to the existence of bases and the ability to compute dimensions directly. For an inner product space $V$ of dimension $n$ and a subspace $W \subseteq V$, the orthogonal complement $W^\perp$ satisfies the dimension theorem: $\dim W + \dim W^\perp = n$. This result follows from the direct sum decomposition $V = W \oplus W^\perp$, which holds in finite dimensions, ensuring that every vector in $V$ can be expressed as the sum of a unique component in $W$ and one in $W^\perp$.

The dimension of the orthogonal complement also connects to the rank-nullity theorem in matrix terms. If $W$ is the column space of a matrix $A \in \mathbb{R}^{n \times k}$ whose columns form a basis for $W$, then $W^\perp$ is the null space of $A^T$, so $\dim W^\perp = n - \operatorname{rank}(A)$. This relation highlights how the "deficiency" in the spanning power of the basis vectors for $W$ directly determines the size of its orthogonal complement.

A key application in finite dimensions is the orthogonal projection onto $W$. For an orthonormal basis $\{u_1, \dots, u_k\}$ of $W$, the orthogonal projection of a vector $v \in V$ onto $W$ is given by
$$\operatorname{proj}_W v = \sum_{i=1}^k \langle v, u_i \rangle u_i.$$
This formula provides the unique vector in $W$ closest to $v$ in the inner product norm, with the error $v - \operatorname{proj}_W v$ lying in $W^\perp$. It is computationally efficient when an orthonormal basis is available, often obtained via the Gram-Schmidt process.

To illustrate, consider $\mathbb{R}^3$ with the standard dot product and the plane $W$ spanned by $\{(1,0,0), (0,1,0)\}$, which is the $xy$-plane. The orthogonal complement $W^\perp$ consists of vectors $(0,0,z)$ for $z \in \mathbb{R}$, forming the $z$-axis, a line perpendicular to the plane. Here, $\dim W = 2$ and $\dim W^\perp = 1$, verifying the dimension theorem. The codimension of $W$ in $V$, defined as $\operatorname{codim} W = n - \dim W$, equals $\dim W^\perp$, offering an interpretation of the orthogonal complement as measuring the dimensional "deficiency" of $W$ relative to the full space. This perspective is useful in applications like solving systems of linear equations, where $W^\perp$ captures the solution space of homogeneous constraints.
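
The projection formula is straightforward to implement. A minimal sketch, using NumPy's QR factorization as a stand-in for classical Gram-Schmidt to orthonormalize the basis (the function name `proj_onto` is ours, for illustration):

```python
import numpy as np

def proj_onto(W_basis: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Project v onto the span of the columns of W_basis via
    proj_W v = sum_i <v, u_i> u_i for an orthonormal basis {u_i}."""
    Q, _ = np.linalg.qr(W_basis)          # orthonormalize (Gram-Schmidt-like)
    return sum((v @ u) * u for u in Q.T)  # apply the projection formula

# W = xy-plane in R^3, spanned by (1,0,0) and (0,1,0).
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
v = np.array([2.0, -1.0, 7.0])

p = proj_onto(W, v)
print(p)      # [ 2. -1.  0.]  closest point to v in the plane
print(v - p)  # [ 0.  0.  7.]  error lies on the z-axis, i.e. in W-perp
```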

Generalizations

Bilinear Forms

In the context of a vector space $V$ equipped with a bilinear form $B: V \times V \to \mathbb{F}$, where $\mathbb{F}$ is a field, the orthogonal complement of a subspace $W \subseteq V$ is defined as $W^{\perp_B} = \{ v \in V \mid B(v, w) = 0 \ \forall\, w \in W \}$. This generalizes the standard notion from inner product spaces, where $B$ is a symmetric positive-definite form, to arbitrary bilinear forms that may not possess such properties. The set $W^{\perp_B}$ is always a subspace of $V$. The radical of the form $B$, denoted $\mathrm{rad}(B) = V^{\perp_B}$, consists of all vectors in $V$ orthogonal to the entire space and measures the degeneracy of $B$; specifically, $B$ is non-degenerate if and only if $\mathrm{rad}(B) = \{0\}$. In finite dimensions, for any subspace $W$, the dimensions satisfy $\dim W + \dim W^{\perp_B} \geq \dim V$, with equality holding if $B$ is non-degenerate (or, more precisely, if $W$ meets the radical of $B$ trivially). When $B$ is alternating (hence skew-symmetric), as in symplectic forms, or symmetric, as in quadratic forms, the orthogonal complement plays a key role in identifying isotropic subspaces, which are subspaces $W$ satisfying $W \subseteq W^{\perp_B}$. For non-degenerate alternating forms on even-dimensional spaces, maximal isotropic subspaces have dimension equal to half the dimension of $V$.

Consider the plane $\mathbb{R}^2$ with the form $B((x_1, y_1), (x_2, y_2)) = x_1 y_2 - y_1 x_2$, which is the standard symplectic (alternating) form given by the determinant. This form is non-degenerate, as its Gram matrix $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ is invertible. For $W = \mathrm{span}\{(1, 0)\}$, the orthogonal complement is $W^{\perp_B} = \{ (a, b) \in \mathbb{R}^2 \mid B((a, b), (1, 0)) = -b = 0 \} = \mathrm{span}\{(1, 0)\} = W$, illustrating that $W$ is isotropic since $B$ vanishes on $W \times W$.
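
The symplectic example can be reproduced numerically. A minimal sketch, computing $W^{\perp_B}$ as the null space of $M G^{\mathsf{T}}$, where $G$ is the Gram matrix above and the rows of $M$ span $W$ (the setup encodes the condition $B(x, w) = x^{\mathsf{T}} G w = 0$ for each basis vector $w$):

```python
import numpy as np
from scipy.linalg import null_space

# Standard symplectic form on R^2: B(v, w) = v^T G w.
G = np.array([[0.0, 1.0], [-1.0, 0.0]])

# W spanned by the rows of M.
M = np.array([[1.0, 0.0]])

# x is B-orthogonal to W iff (M G^T) x = 0.
W_perp_B = null_space(M @ G.T)
print(W_perp_B.ravel())              # spans (1, 0) up to sign: W-perp_B = W

# W is isotropic: B vanishes identically on W x W.
print(np.allclose(M @ G @ M.T, 0))   # True
```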

Banach Spaces

In normed linear spaces, the orthogonal complement generalizes the inner product notion through duality with the continuous dual space $X^*$. For a subspace $W$ of a normed space $X$, the annihilator $W^0$ is the closed subspace of $X^*$ consisting of all continuous linear functionals $f \in X^*$ such that $f(w) = 0$ for every $w \in W$. The orthogonal complement is then defined as the preannihilator of this annihilator: $W^\perp = \{ v \in X \mid \langle f, v \rangle = 0 \ \forall\, f \in W^0 \}.$ This set $W^\perp$ is always a closed subspace of $X$. The Hahn-Banach theorem plays a central role in characterizing this construction. It ensures the existence of non-zero continuous linear functionals that separate a given point from a proper closed convex set, implying that the double annihilator precisely recovers the norm closure: $W^\perp = (W^0)^0 = \overline{W}$, the closure of $W$ in the norm topology of $X$. If $W$ is already closed, then $W^\perp = W$, which is thus closed. This topological closure property highlights the emphasis on continuity and completeness in Banach spaces, where $X$ is complete.

In relation to the weak topology on $X$, induced by the seminorms $|\langle f, \cdot \rangle|$ for $f \in X^*$, the orthogonal complement $W^\perp$ can be described through the inclusion map $i: W \hookrightarrow X$. Specifically, the adjoint $i^*: X^* \to W^*$ has kernel $W^0$, and $W^\perp$ consists of the elements of $X$ annihilated by every functional in this kernel, aligning with weak continuity properties of bounded operators.

A concrete example arises in the sequence space $\ell^\infty$ of bounded real sequences with the supremum norm, where the subspace $c_0$ consists of sequences converging to zero. Since $c_0$ is closed in $\ell^\infty$, its annihilator $c_0^0$ in $(\ell^\infty)^*$ leads to the orthogonal complement $c_0^\perp = c_0$. However, unlike Hilbert spaces, this does not yield a decomposition $\ell^\infty = c_0 \oplus c_0^\perp$ with a bounded projection onto $c_0$; in fact, $c_0$ admits no closed topological complement in $\ell^\infty$. In contrast to Hilbert spaces, where the Riesz representation theorem identifies $X^*$ with $X$ via the inner product and guarantees an orthogonal decomposition $X = W \oplus W^\perp$ for closed $W$, general Banach spaces lack such an identification. Thus, while $W^\perp$ provides a topological closure, it does not ensure a complementary subspace that is "orthogonal" in a geometric sense, underscoring the absence of guaranteed orthogonal projections without an inner product structure.

Applications

Linear Algebra

In finite-dimensional linear algebra, orthogonal complements play a key role in solving systems of linear equations $Ax = b$, where $A$ is an $m \times n$ matrix. By the fundamental theorem of linear algebra, the orthogonal complement of the column space of $A$ in $\mathbb{R}^m$ is the null space of $A^T$, consisting of all vectors $y$ such that $A^T y = 0$. This relationship ensures that the system $Ax = b$ is consistent if and only if $b$ is orthogonal to the null space of $A^T$, meaning no vector in $\operatorname{Null}(A^T)$ has a nonzero inner product with $b$. Similarly, the orthogonal complement of the row space of $A$ in $\mathbb{R}^n$ is the null space of $A$, which describes the solution set as a particular solution plus homogeneous solutions orthogonal to the rows of $A$.

The Gram-Schmidt process utilizes orthogonal complements to construct orthonormal bases from a linearly independent set of vectors in an inner product space such as $\mathbb{R}^n$. Given vectors $\{v_1, v_2, \dots, v_k\}$, the process iteratively projects each $v_i$ onto the span of the previous orthogonalized vectors and subtracts that projection, effectively placing the result in the orthogonal complement of the previous span. For instance, the second vector becomes $v_2^\perp = v_2 - \frac{v_1 \cdot v_2}{v_1 \cdot v_1} v_1$, which is orthogonal to $v_1$. Normalizing these yields an orthonormal basis, enabling efficient computations in algorithms reliant on orthogonality.

Orthogonal complements are central to the least squares method for approximating solutions to overdetermined systems $Ax = b$, where no exact solution exists. The goal is to minimize $\|Ax - b\|^2$ by finding the projection of $b$ onto the column space of $A$, denoted $\operatorname{proj}_{\operatorname{Col}(A)} b = A\hat{x}$, where $\hat{x} = (A^T A)^{-1} A^T b$ assuming $A$ has full column rank. The error vector $b - \operatorname{proj}_{\operatorname{Col}(A)} b$ then lies in the orthogonal complement of $\operatorname{Col}(A)$, which is $\operatorname{Null}(A^T)$, ensuring the residual is perpendicular to every column of $A$.

This projection property extends to the QR factorization, where a matrix $A \in \mathbb{R}^{m \times n}$ with full column rank is factored as $A = QR$, with $Q$ having orthonormal columns spanning $\operatorname{Col}(A)$ and $R$ upper triangular. In the full QR factorization $A = [Q_1 \, Q_2] \begin{bmatrix} R_1 \\ 0 \end{bmatrix}$, the columns of $Q_2$ form an orthonormal basis for the orthogonal complement $\operatorname{Col}(A)^\perp = \operatorname{Null}(A^T)$, providing a complete decomposition of $\mathbb{R}^m$ into orthogonal subspaces. This factorization aids in least-squares computations by solving $R_1 \hat{x} = Q_1^T b$.
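
A short NumPy sketch of these facts for a small overdetermined system; the data are illustrative, not from the original text:

```python
import numpy as np

# Overdetermined system: 4 equations, 2 unknowns (illustrative data).
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Least-squares solution minimizes ||Ax - b||^2.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = b - A @ x_hat

# Residual lies in Col(A)-perp = Null(A^T): orthogonal to every column of A.
print(np.allclose(A.T @ residual, 0))    # True

# Full QR: the columns of Q2 give an orthonormal basis for Col(A)-perp.
Q, R = np.linalg.qr(A, mode='complete')  # Q is 4x4, R is 4x2
Q1, Q2 = Q[:, :2], Q[:, 2:]
print(np.allclose(A.T @ Q2, 0))          # True: Q2 spans Null(A^T)
print(np.allclose(np.linalg.solve(R[:2], Q1.T @ b), x_hat))  # R1 x = Q1^T b
```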

Functional Analysis

In functional analysis, the orthogonal complement plays a central role in Hilbert spaces, where every closed subspace $M$ admits an orthogonal projection $P_M: H \to H$ onto $M$, defined by $P_M x = y$ where $y \in M$ minimizes $\|x - y\|_H$. This projection is a bounded linear operator with $\|P_M\| \leq 1$, self-adjoint ($P_M^* = P_M$), and idempotent ($P_M^2 = P_M$), satisfying $H = M \oplus M^\perp$ with $M^\perp = \ker P_M$. The Riesz representation theorem further connects orthogonal complements to duality: for a Hilbert space $H$, the dual $H^*$ is isometrically isomorphic to $H$ via $\ell \in H^* \mapsto z \in H$ where $\ell(x) = \langle x, z \rangle_H$, and the kernel of $\ell$ is the orthogonal complement of the span of $z$.

The spectral theorem for compact self-adjoint operators on Hilbert spaces decomposes the space into orthogonal eigenspaces: for a compact self-adjoint $T: H \to H$, $H$ is the closure $\overline{\bigoplus_{\lambda \in \sigma(T)} \ker(T - \lambda I)}$, where the eigenspaces $\ker(T - \lambda I)$ are pairwise orthogonal, closed, and finite-dimensional (except possibly for $\lambda = 0$). For non-compact operators, the decomposition is more general, involving both discrete and continuous parts; the orthogonal complement of the closure of the span of eigenvectors corresponds to the subspace associated with the continuous spectrum. This decomposition relies on the orthogonal complement to ensure the direct sum is orthogonal, enabling the representation $Tx = \sum_\lambda \lambda \langle x, e_\lambda \rangle_H e_\lambda$ for an orthonormal basis of eigenvectors $\{e_\lambda\}$ in the discrete case.

In Sobolev spaces, which are Hilbert spaces of functions with weak derivatives, orthogonal complements appear in weak formulations of partial differential equations (PDEs). For the Dirichlet problem $-\Delta u = f$ on a domain $\Omega$ with $u = 0$ on $\partial\Omega$, the weak formulation seeks $u \in H_0^1(\Omega)$ such that $\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f v \, dx$ for all test functions $v \in H_0^1(\Omega)$, where $H_0^1(\Omega)$ is the closure of the compactly supported smooth functions. The variational problem is well-posed via the Lax-Milgram theorem, as the bilinear form is continuous and coercive on $H_0^1(\Omega)$ with respect to its inner product.

For Fredholm operators $T: H \to H$ on Hilbert spaces, which are bounded with finite-dimensional kernel and cokernel and closed range, the index $\operatorname{ind}(T) = \dim \ker T - \dim \operatorname{coker} T$ is invariant under compact perturbations. The cokernel identifies with the orthogonal complement of the range, $\operatorname{coker} T \cong (\operatorname{ran} T)^\perp \cong \ker T^*$, via the Riesz representation, since $H$ is self-dual. This connection via orthogonal complements in dual spaces underpins index theory, as in the Atiyah-Singer theorem for elliptic operators on manifolds.

A concrete example arises in the space $L^2[0,1]$ with inner product $\langle f, g \rangle = \int_0^1 f(t) \overline{g(t)} \, dt$: the subspace of constant functions, spanned by the constant function $1$, has orthogonal complement consisting of the mean-zero functions $\{ f \in L^2[0,1] : \int_0^1 f(t) \, dt = 0 \}$, since $\langle f, 1 \rangle = 0$ precisely when the integral vanishes. This decomposition $L^2[0,1] = \mathbb{C} \cdot 1 \oplus (\mathbb{C} \cdot 1)^\perp$ illustrates the projection onto constants as integration against $1$, a projection of norm 1.
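
The $L^2[0,1]$ example can be approximated numerically by discretizing functions on a grid. A minimal sketch, assuming simple midpoint-rule quadrature for the inner product (the sample function is an illustrative choice):

```python
import numpy as np

# Discretize L^2[0,1]: functions become vectors on a uniform grid, and
# <f, g> = integral of f*g dt is approximated by a Riemann sum.
n = 1000
t = (np.arange(n) + 0.5) / n
dt = 1.0 / n
inner = lambda f, g: np.sum(f * g) * dt

f = np.sin(2 * np.pi * t) + 3.0           # a sample function with mean 3
one = np.ones(n)                          # the constant function 1

# Projection onto the constants is integration against 1: P f = <f, 1> * 1.
Pf = inner(f, one) * one
print(np.isclose(Pf[0], 3.0, atol=1e-6))  # recovers the mean of f

# The remainder is (approximately) mean-zero: it lies in the complement.
print(np.isclose(inner(f - Pf, one), 0.0, atol=1e-9))  # True
```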