Generalized inverse

from Wikipedia

In mathematics, and in particular, algebra, a generalized inverse (or, g-inverse) of an element $x$ is an element $y$ that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix $A$.

A matrix $A^{\mathrm{g}}$ of order $m \times n$ is a generalized inverse of a matrix $A$ of order $n \times m$ if $A A^{\mathrm{g}} A = A$.[1][2][3] A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse.[1]

Motivation


Consider the linear system

$$Ax = b,$$

where $A$ is an $n \times m$ matrix and $b \in \mathcal{R}(A)$, the column space of $A$. If $n = m$ and $A$ is nonsingular then $x = A^{-1}b$ will be the solution of the system. Note that, if $A$ is nonsingular, then

$$A A^{-1} A = A.$$

Now suppose $A$ is rectangular ($n \neq m$), or square and singular. Then we need a right candidate $G$ of order $m \times n$ such that for all $b \in \mathcal{R}(A)$,

$$A G b = b.$$[4]

That is, $x = Gb$ is a solution of the linear system $Ax = b$. Equivalently, we need a matrix $G$ of order $m \times n$ such that

$$A G A = A.$$

Hence we can define the generalized inverse as follows: Given an $n \times m$ matrix $A$, an $m \times n$ matrix $G$ is said to be a generalized inverse of $A$ if $AGA = A$.[1][2][3] The matrix $A^{-1}$ has been termed a regular inverse of $A$ by some authors.[5]

Types


Important types of generalized inverse include:

  • One-sided inverse (right inverse or left inverse)
    • Right inverse: If the matrix $A$ has dimensions $n \times m$ and $\operatorname{rank}(A) = n$, then there exists an $m \times n$ matrix $A_{\mathrm{R}}^{-1}$ called the right inverse of $A$ such that $A A_{\mathrm{R}}^{-1} = I_n$, where $I_n$ is the $n \times n$ identity matrix.
    • Left inverse: If the matrix $A$ has dimensions $n \times m$ and $\operatorname{rank}(A) = m$, then there exists an $m \times n$ matrix $A_{\mathrm{L}}^{-1}$ called the left inverse of $A$ such that $A_{\mathrm{L}}^{-1} A = I_m$, where $I_m$ is the $m \times m$ identity matrix.[6]
  • Bott–Duffin inverse
  • Drazin inverse
  • Moore–Penrose inverse

Some generalized inverses are defined and classified based on the Penrose conditions:

  1. $A A^{\mathrm{g}} A = A$
  2. $A^{\mathrm{g}} A A^{\mathrm{g}} = A^{\mathrm{g}}$
  3. $(A A^{\mathrm{g}})^* = A A^{\mathrm{g}}$
  4. $(A^{\mathrm{g}} A)^* = A^{\mathrm{g}} A$,

where $^*$ denotes conjugate transpose. If $A^{\mathrm{g}}$ satisfies the first condition, then it is a generalized inverse of $A$. If it satisfies the first two conditions, then it is a reflexive generalized inverse of $A$. If it satisfies all four conditions, then it is the pseudoinverse of $A$, which is denoted by $A^+$ and also known as the Moore–Penrose inverse, after the pioneering works by E. H. Moore and Roger Penrose.[2][7][8][9][10][11] It is convenient to define an $I$-inverse of $A$ as an inverse that satisfies the subset $I \subseteq \{1, 2, 3, 4\}$ of the Penrose conditions listed above. Relations, such as $A^{(1,4)} A A^{(1,3)} = A^+$, can be established between these different classes of $I$-inverses.[1]
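
The conditions are mechanically checkable. The following minimal Python/NumPy sketch (illustrative, not part of the original article; the helper name penrose_conditions is ours) reports which of the four conditions a candidate $G$ satisfies for a given $A$:

import numpy as np

def penrose_conditions(A, G, tol=1e-9):
    # Return the subset of Penrose conditions (1)-(4) that G satisfies for A.
    close = lambda X, Y: np.allclose(X, Y, atol=tol)
    conds = {
        1: close(A @ G @ A, A),             # (1)  A G A = A
        2: close(G @ A @ G, G),             # (2)  G A G = G
        3: close((A @ G).conj().T, A @ G),  # (3)  (AG)* = AG
        4: close((G @ A).conj().T, G @ A),  # (4)  (GA)* = GA
    }
    return {k for k, ok in conds.items() if ok}

For a {1}-inverse the returned set contains 1, for a reflexive generalized inverse it contains 1 and 2, and for the pseudoinverse it is {1, 2, 3, 4}.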

When $A$ is non-singular, any generalized inverse $A^{\mathrm{g}} = A^{-1}$, and it is therefore unique. For a singular $A$, some generalized inverses, such as the Drazin inverse and the Moore–Penrose inverse, are unique, while others are not necessarily uniquely defined.

Examples


Reflexive generalized inverse


Let

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \qquad G = \begin{bmatrix} -\frac{5}{3} & \frac{2}{3} & 0 \\ \frac{4}{3} & -\frac{1}{3} & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

Since $\det(A) = 0$, $A$ is singular and has no regular inverse. However, $A$ and $G$ satisfy Penrose conditions (1) and (2), but not (3) or (4). Hence, $G$ is a reflexive generalized inverse of $A$.
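
As a numerical check of this example, using the penrose_conditions sketch from the Types section (illustrative):

import numpy as np

A = np.array([[1., 2, 3], [4, 5, 6], [7, 8, 9]])
G = np.array([[-5/3, 2/3, 0], [4/3, -1/3, 0], [0, 0, 0]])

print(penrose_conditions(A, G))                  # {1, 2}: reflexive, not Moore-Penrose
print(penrose_conditions(A, np.linalg.pinv(A)))  # {1, 2, 3, 4}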

One-sided inverse


Let

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}.$$

Since $A$ is not square, $A$ has no regular inverse. However,

$$A_{\mathrm{R}}^{-1} = \frac{1}{18}\begin{bmatrix} -17 & 8 \\ -2 & 2 \\ 13 & -4 \end{bmatrix}$$

is a right inverse of $A$, since $A A_{\mathrm{R}}^{-1} = I_2$. The matrix $A$ has no left inverse, since its rank (two) is smaller than its number of columns.

Inverse of other semigroups (or rings)


The element $b$ is a generalized inverse of an element $a$ if and only if $a \cdot b \cdot a = a$, in any semigroup (or ring, since any ring is a semigroup under multiplication).

The generalized inverses of the element 3 in the ring $\mathbb{Z}/12\mathbb{Z}$ are 3, 7, and 11, since in the ring $\mathbb{Z}/12\mathbb{Z}$:

$$3 \cdot 3 \cdot 3 = 27 \equiv 3, \qquad 3 \cdot 7 \cdot 3 = 63 \equiv 3, \qquad 3 \cdot 11 \cdot 3 = 99 \equiv 3 \pmod{12}.$$

The generalized inverses of the element 4 in the ring $\mathbb{Z}/12\mathbb{Z}$ are 1, 4, 7, and 10, since in the ring $\mathbb{Z}/12\mathbb{Z}$:

$$4 \cdot 1 \cdot 4 = 16 \equiv 4, \qquad 4 \cdot 4 \cdot 4 = 64 \equiv 4, \qquad 4 \cdot 7 \cdot 4 = 112 \equiv 4, \qquad 4 \cdot 10 \cdot 4 = 160 \equiv 4 \pmod{12}.$$

If an element $a$ in a semigroup (or ring) has an inverse, the inverse must be the only generalized inverse of this element, like the elements 1, 5, 7, and 11 in the ring $\mathbb{Z}/12\mathbb{Z}$.

In the ring $\mathbb{Z}/12\mathbb{Z}$, any element is a generalized inverse of 0; however, 2 has no generalized inverse, since there is no $b$ in $\mathbb{Z}/12\mathbb{Z}$ such that $2 \cdot b \cdot 2 = 2$.
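
Because $\mathbb{Z}/12\mathbb{Z}$ is finite, all of these statements can be confirmed by exhaustive search, as in this short illustrative Python sketch:

# b is a generalized inverse of a in Z/12Z iff a*b*a = a (mod 12).
n = 12
for a in range(n):
    g_inverses = [b for b in range(n) if (a * b * a) % n == a]
    print(a, g_inverses)

The output shows [3, 7, 11] for the element 3, [1, 4, 7, 10] for 4, all twelve elements for 0, and an empty list for 2, matching the claims above.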

Construction


The following characterizations are easy to verify:

  • A right inverse of a non-square matrix $A$ is given by $A_{\mathrm{R}}^{-1} = A^{\mathsf{T}}(A A^{\mathsf{T}})^{-1}$, provided $A$ has full row rank (see the sketch after this list).[6]
  • A left inverse of a non-square matrix $A$ is given by $A_{\mathrm{L}}^{-1} = (A^{\mathsf{T}} A)^{-1} A^{\mathsf{T}}$, provided $A$ has full column rank.[6]
  • If $A = BC$ is a rank factorization, then $G = C_{\mathrm{R}}^{-1} B_{\mathrm{L}}^{-1}$ is a g-inverse of $A$, where $C_{\mathrm{R}}^{-1}$ is a right inverse of $C$ and $B_{\mathrm{L}}^{-1}$ is a left inverse of $B$.
  • If $A = P\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q$ for any non-singular matrices $P$ and $Q$, then $G = Q^{-1}\begin{bmatrix} I_r & U \\ W & V \end{bmatrix} P^{-1}$ is a generalized inverse of $A$ for arbitrary $U$, $V$ and $W$.
  • Let $A$ be of rank $r$. Without loss of generality, let $A = \begin{bmatrix} B & C \\ D & E \end{bmatrix}$, where $B_{r \times r}$ is the non-singular submatrix of $A$. Then, $G = \begin{bmatrix} B^{-1} & 0 \\ 0 & 0 \end{bmatrix}$ is a generalized inverse of $A$ if and only if $E = D B^{-1} C$.
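
The first two formulas translate directly into code; here is a minimal Python/NumPy sketch (illustrative, with hypothetical helper names):

import numpy as np

def right_inverse(A):
    # A_R = A^T (A A^T)^{-1}; valid when A has full row rank.
    return A.T @ np.linalg.inv(A @ A.T)

def left_inverse(A):
    # A_L = (A^T A)^{-1} A^T; valid when A has full column rank.
    return np.linalg.inv(A.T @ A) @ A.T

A = np.array([[1., 2, 3], [4, 5, 6]])                # full row rank
print(np.allclose(A @ right_inverse(A), np.eye(2)))  # True: A A_R = I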

Uses


Any generalized inverse can be used to determine whether a system of linear equations has any solutions, and if so to give all of them. If any solutions exist for the $n \times m$ linear system

$$Ax = b,$$

with vector $x$ of unknowns and vector $b$ of constants, all solutions are given by

$$x = A^{\mathrm{g}} b + \left[I - A^{\mathrm{g}} A\right] w,$$

parametric on the arbitrary vector $w$, where $A^{\mathrm{g}}$ is any generalized inverse of $A$. Solutions exist if and only if $A^{\mathrm{g}} b$ is a solution, that is, if and only if $A A^{\mathrm{g}} b = b$. If $A$ has full column rank, the bracketed expression in this equation is the zero matrix and so the solution is unique.[12]
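
A minimal Python/NumPy sketch of this recipe (illustrative; here the pseudoinverse serves as the particular choice of $A^{\mathrm{g}}$):

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1., 2, 3], [4, 5, 6]])  # underdetermined system
b = np.array([1., 1])
Ag = np.linalg.pinv(A)                 # one valid generalized inverse

assert np.allclose(A @ Ag @ b, b)      # consistency: A A^g b = b

for _ in range(3):                     # sample the solution set
    w = rng.standard_normal(3)
    x = Ag @ b + (np.eye(3) - Ag @ A) @ w
    assert np.allclose(A @ x, b)       # every parametrized x solves A x = b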

Generalized inverses of matrices


The generalized inverses of matrices can be characterized as follows. Let $A \in \mathbb{R}^{m \times n}$, and

$$A = U \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} V^{\mathsf{T}}$$

be its singular-value decomposition. Then for any generalized inverse $A^{\mathrm{g}}$, there exist[1] matrices $X$, $Y$, and $Z$ such that

$$A^{\mathrm{g}} = V \begin{bmatrix} \Sigma_1^{-1} & X \\ Y & Z \end{bmatrix} U^{\mathsf{T}}.$$

Conversely, any choice of $X$, $Y$, and $Z$ for a matrix of this form is a generalized inverse of $A$.[1] The $\{1,2\}$-inverses are exactly those for which $Z = Y \Sigma_1 X$, the $\{1,3\}$-inverses are exactly those for which $X = 0$, and the $\{1,4\}$-inverses are exactly those for which $Y = 0$. In particular, the pseudoinverse is given by $X = Y = Z = 0$:

$$A^+ = V \begin{bmatrix} \Sigma_1^{-1} & 0 \\ 0 & 0 \end{bmatrix} U^{\mathsf{T}}.$$
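
The following illustrative Python/NumPy sketch builds one member of this family with randomly chosen $X$, $Y$, $Z$ and checks that it is indeed a generalized inverse:

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1., 2, 3], [4, 5, 6], [7, 8, 9]])  # rank 2
U, s, Vt = np.linalg.svd(A)
m, n = A.shape
r = int(np.sum(s > 1e-10))

M = np.zeros((n, m))                               # middle factor of A^g
M[:r, :r] = np.diag(1.0 / s[:r])                   # Sigma_1^{-1}
M[:r, r:] = rng.standard_normal((r, m - r))        # arbitrary X
M[r:, :r] = rng.standard_normal((n - r, r))        # arbitrary Y
M[r:, r:] = rng.standard_normal((n - r, m - r))    # arbitrary Z

G = Vt.T @ M @ U.T
print(np.allclose(A @ G @ A, A))                   # True for any X, Y, Z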

Transformation consistency properties


In practical applications it is necessary to identify the class of matrix transformations that must be preserved by a generalized inverse. For example, the Moore–Penrose inverse $A^+$ satisfies the following definition of consistency with respect to transformations involving unitary matrices $U$ and $V$:

$$(UAV)^+ = V^* A^+ U^*.$$

The Drazin inverse $A^{\mathrm{D}}$ satisfies the following definition of consistency with respect to similarity transformations involving a nonsingular matrix $S$:

$$(SAS^{-1})^{\mathrm{D}} = S A^{\mathrm{D}} S^{-1}.$$

The unit-consistent (UC) inverse $A^{\mathrm{U}}$[13] satisfies the following definition of consistency with respect to transformations involving nonsingular diagonal matrices $D$ and $E$:

$$(DAE)^{\mathrm{U}} = E^{-1} A^{\mathrm{U}} D^{-1}.$$

The fact that the Moore–Penrose inverse provides consistency with respect to rotations (which are orthonormal transformations) explains its widespread use in physics and other applications in which Euclidean distances must be preserved. The UC inverse, by contrast, is applicable when system behavior is expected to be invariant with respect to the choice of units on different state variables, e.g., miles versus kilometers.
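
The unitary-consistency identity for the pseudoinverse is easy to confirm numerically; a minimal Python/NumPy sketch (illustrative):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))

# Random orthogonal U (3x3) and V (4x4) from QR factorizations.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))

lhs = np.linalg.pinv(U @ A @ V)
rhs = V.T @ np.linalg.pinv(A) @ U.T  # V* A^+ U* (transposes suffice over the reals)
print(np.allclose(lhs, rhs))         # True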

Citations

  1. ^ a b c d e f Ben-Israel & Greville 2003, pp. 2, 7
  2. ^ a b c Nakamura 1991, pp. 41–42
  3. ^ a b Rao & Mitra 1971, pp. vii, 20
  4. ^ Rao & Mitra 1971, p. 24
  5. ^ Rao & Mitra 1971, pp. 19–20
  6. ^ a b c Rao & Mitra 1971, p. 19
  7. ^ Rao & Mitra 1971, pp. 20, 28, 50–51
  8. ^ Ben-Israel & Greville 2003, p. 7
  9. ^ Campbell & Meyer 1991, p. 10
  10. ^ James 1978, p. 114
  11. ^ Nakamura 1991, p. 42
  12. ^ James 1978, pp. 109–110
  13. ^ Uhlmann 2018

Sources


Textbook

  • Ben-Israel, Adi; Greville, Thomas Nall Eden (2003). Generalized Inverses: Theory and Applications (2nd ed.). New York, NY: Springer. doi:10.1007/b97366. ISBN 978-0-387-00293-4.
  • Campbell, Stephen L.; Meyer, Carl D. (1991). Generalized Inverses of Linear Transformations. Dover. ISBN 978-0-486-66693-8.
  • Horn, Roger Alan; Johnson, Charles Royal (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
  • Nakamura, Yoshihiko (1991). Advanced Robotics: Redundancy and Optimization. Addison-Wesley. ISBN 978-0201151985.
  • Rao, C. Radhakrishna; Mitra, Sujit Kumar (1971). Generalized Inverse of Matrices and its Applications. New York: John Wiley & Sons. pp. 240. ISBN 978-0-471-70821-6.

Publication

  • James, M. (1978). "The generalised inverse". The Mathematical Gazette. 62 (420): 109–114.
  • Uhlmann, Jeffrey K. (2018). "A Generalized Matrix Inverse that is Consistent with Respect to Diagonal Transformations". SIAM Journal on Matrix Analysis and Applications. 39 (2): 781–800.
from Grokipedia
In linear algebra, a generalized inverse (also known as a g-inverse) of an $m \times n$ matrix $A$ is any $n \times m$ matrix $G$ that satisfies the equation $AGA = A$. This condition extends the notion of a standard inverse to non-square or singular matrices, where no true inverse exists, and underlies least-squares treatments of linear systems $Ax = b$ even when $b$ is not in the column space of $A$. Generalized inverses always exist for any matrix but are generally not unique unless additional constraints are imposed.

Generalized inverses are categorized by the subset of the four Penrose equations they satisfy, introduced by Roger Penrose in 1955: (1) $AGA = A$, (2) $GAG = G$, (3) $(AG)^* = AG$, and (4) $(GA)^* = GA$, where $^*$ denotes the conjugate transpose (or transpose for real matrices). A $\{1\}$-inverse satisfies only the first equation, while a $\{1,2\}$-inverse (reflexive generalized inverse) also satisfies the second. The Moore–Penrose pseudoinverse, denoted $A^\dagger$, satisfies all four equations and is unique for every matrix; it provides the minimum-norm least-squares solution to $Ax = b$ and projects orthogonally onto the column space of $A$.

The concept traces its origins to E. H. Moore's 1920 work on the "general reciprocal" of algebraic matrices, which laid foundational ideas for handling non-invertible cases, though published only as an abstract. Penrose formalized the four equations in his seminal paper, establishing the pseudoinverse as a canonical choice. The broader theory of generalized inverses, including classifications and applications, was systematically developed by C. Radhakrishna Rao and Sujit Kumar Mitra in their 1971 monograph, which emphasized statistical uses such as linear estimation and hypothesis testing. These tools are essential in statistics and many applied fields for solving underdetermined or overdetermined systems.

Fundamentals

Definition and Motivation

In algebraic structures such as semigroups or rings, a generalized inverse of an element $A$ is an element $g$ (often denoted $A^g$) that satisfies the equation $AgA = A$. This condition represents a minimal form of partial invertibility, where $g$ acts as an inverse for $A$ restricted to the image of $A$, without requiring full invertibility or the existence of a two-sided inverse. Weaker variants include one-sided generalized inverses, such as those satisfying only $AgA = A$ (an inner inverse) or only $gAg = g$ (an outer inverse), which capture asymmetric notions of partial reversal in non-commutative settings.

In the context of linear algebra, the concept applies to matrices over fields like the real or complex numbers, where $A$ is an arbitrary $m \times n$ matrix and $g$ (or $A^-$) is an $n \times m$ matrix satisfying $AgA = A$. This extends the classical matrix inverse, which exists only for square nonsingular matrices (where $\det(A) \neq 0$), to handle singular square matrices or rectangular ones, where the determinant vanishes or the dimensions preclude a two-sided inverse. The equation $AgA = A$ ensures that $Ag$ is a projection onto the column space of $A$, providing a way to "undo" the action of $A$ where possible.

The primary motivation for generalized inverses arises in solving linear systems $Ax = b$, where $A$ may be singular or rectangular, rendering standard inversion impossible. Here, if the system is consistent (i.e., $b$ lies in the column space of $A$), a generalized inverse $g$ yields a particular solution $x = gb$, while the full general solution is given by $x = gb + (I_n - gA)z$ for arbitrary $z \in \mathbb{R}^n$ (or $\mathbb{C}^n$), capturing the null space contributions. To derive this, note that substituting $x = gb + (I_n - gA)z$ into $Ax$ gives $Agb + A(I_n - gA)z = Agb + (A - AgA)z = Agb + (A - A)z = Agb = b$ (using $Agb = b$ for consistent systems), confirming that every such $x$ is a solution; the term $(I_n - gA)$ projects onto the kernel of $A$, parameterizing all solutions. This framework addresses the failure of regular inverses in underdetermined or overdetermined systems, enabling systematic treatment of ill-posed problems.

As a foundational concept, the generalized inverse establishes the limitations of classical invertibility and sets the stage for exploring specialized types, such as the Moore-Penrose inverse, which refines the basic definition with additional symmetry and orthogonality properties.

Historical Background

The concept of the generalized inverse emerged in the early 20th century as a means to extend the notion of matrix inversion beyond nonsingular square matrices, particularly to address reciprocal systems in linear equations. In 1920, E. H. Moore introduced the idea in his abstract "On the Reciprocal of the General Algebraic Matrix," where he described a generalized reciprocal for arbitrary algebraic matrices, laying foundational groundwork for handling singular and rectangular cases. This work, later elaborated in a posthumously published treatise, motivated solutions to inconsistent linear systems by generalizing the inverse to encompass projections onto relevant subspaces.

Mid-20th-century advancements formalized and expanded these ideas into broader algebraic structures. Roger Penrose's 1955 paper "A Generalized Inverse for Matrices" provided a rigorous definition through four axiomatic conditions, establishing the unique pseudoinverse now known as the Moore-Penrose inverse, which satisfies symmetry, idempotence, and orthogonality properties for real or complex matrices. An explicit formula for the Moore-Penrose inverse via full-rank factorization of matrices was first pointed out by C. C. MacDuffee in private communications, bridging linear algebra with ring theory. Parallel developments in semigroup theory introduced partial inverses; the algebraic framework of inverse semigroups, which model partial symmetries through unique idempotent inverses, was pioneered in the 1950s by Gordon B. Preston and Viktor V. Wagner, extending generalized inversion to non-invertible transformations.

In the late 1950s and afterward, specialized types of generalized inverses proliferated to address singular operators and ring elements. Michael P. Drazin introduced the Drazin inverse in 1958 for elements in associative rings, defined via a power condition that captures the invertible part of perturbations, proving useful for differential equations and Markov chains. The group inverse, applicable to index-1 elements where the kernel and range align appropriately, was developed in semigroup and matrix contexts in the following years, emphasizing reflexive properties in partial algebraic structures.

Extensions into the 21st century have refined these concepts with new characterizations and broader applicability. The core inverse, introduced by Oskar M. Baksalary and Götz Trenkler in 2010 as an alternative to the group inverse for index-1 matrices, combines outer inverse properties with range conditions to preserve core-EP structures. Recent work includes a 2023 geometric characterization of the Moore-Penrose inverse using polar decompositions of operator perturbations in Hilbert spaces. Ongoing refinements, such as the W-weighted m-weak core inverse proposed in 2024, continue to extend these inverses to rectangular matrices without introducing major paradigm shifts.

Types

One-Sided and Reflexive Inverses

In the context of generalized inverses within semigroups and matrix algebras, one-sided inverses provide a minimal extension of classical left and right inverses to non-invertible elements. A right one-sided inverse $g$ of an element $A$ satisfies $AgA = A$, which captures a partial inversion property without requiring full invertibility. This condition holds for a strong right inverse when $Ag = I$, applicable to surjective linear maps or matrices where the number of rows $m$ is at most the number of columns $n$ with full row rank. Similarly, a left one-sided inverse $g$ satisfies $gAg = g$, representing a complementary partial inversion, and aligns with the strong left inverse $gA = I$ for injective maps or matrices with $m \geq n$ and full column rank. These definitions arise naturally in semigroup theory, where they characterize elements within Green's $\mathcal{L}$- and $\mathcal{R}$-classes relative to the variety of inverses $V(A) = \{g \mid AgA = A,\ gAg = g\}$.

A reflexive generalized inverse combines both one-sided conditions, satisfying $AgA = A$ and $gAg = g$ simultaneously, thereby acting as a two-sided partial inverse that preserves $A$ under composition with $g$ from either side. This makes $g$ a von Neumann regular inverse in ring-theoretic terms, ensuring $A$ is regular (i.e., $A \in ASA$ in the ambient semigroup or ring $S$). Unlike one-sided inverses, which apply to rectangular matrices or asymmetric elements (e.g., enabling solutions in over- or under-determined systems), reflexive inverses exhibit square-like behavior, requiring compatible dimensions or class structures where left and right properties align. The Moore-Penrose inverse represents a special reflexive type augmented with orthogonality conditions.

Key properties of these inverses include their non-uniqueness (multiple $g$ may satisfy the equations for a given $A$) and a close relation to idempotents: for a right one-sided inverse, $Ag$ is idempotent since $(Ag)^2 = AgAg = Ag$, projecting onto the image of $A$; analogously, $gA$ is idempotent for a left one-sided inverse. In finite semigroups, reflexive generalized inverses exist for every element, as finiteness implies regularity (every $a$ admits $g$ with $aga = a$ and $gag = g$), though one-sided versions may exist more broadly via class decompositions like $V_l(A) = \{f A^{-1} \mid f \in E(L_A)\}$ for left inverses, where $E(L_A)$ denotes idempotents in the left principal ideal. These structures facilitate applications in solving inconsistent equations or analyzing partial orderings in algebraic settings without full invertibility.

Moore-Penrose Inverse

The Moore-Penrose inverse, also known as the pseudoinverse, of a matrix $A$ is a unique matrix $A^+$ that generalizes the concept of the inverse for non-square or singular matrices, satisfying a specific set of four conditions introduced by Roger Penrose. These conditions ensure that $A^+$ provides a way to solve linear systems in Hilbert spaces, particularly for complex matrices. The notion traces back to E. H. Moore's earlier work on the "general reciprocal" of matrices, which laid foundational ideas for handling divisors of zero in algebraic structures. The four Penrose conditions defining $A^+$ are:
  1. $AXA = A$
  2. $XAX = X$
  3. $(AX)^* = AX$
  4. $(XA)^* = XA$
where $^*$ denotes the conjugate transpose. These equations capture the essential properties of an inverse while incorporating symmetry to ensure uniqueness in the Euclidean structure of complex vector spaces. For any complex matrix $A \in \mathbb{C}^{m \times n}$, there exists a unique $X = A^+$ satisfying all four conditions simultaneously.

Geometrically, the Moore-Penrose inverse provides the minimum-norm least-squares solution to the linear system $Ax = b$. Specifically, for a given $b \in \mathbb{C}^m$, the vector $x = A^+ b$ minimizes $\|Ax - b\|_2$ among all vectors and, among all least-squares solutions, has the smallest Euclidean norm $\|x\|_2$. This interpretation arises from the requirement of orthogonal projections in Hilbert spaces, where the solution projects $b$ onto the range of $A$ in a way that respects the inner product structure.

To derive these conditions from the least-squares minimization framework, consider the problem of solving $Ax = b$ where $A$ may not have full rank or $b$ may not lie in the range of $A$. The least-squares solutions satisfy the normal equations $A^* A x = A^* b$, but these may have multiple solutions if $A^* A$ is singular. To select the unique minimum-norm solution, impose the condition that $x$ is orthogonal to the null space of $A$, i.e., $x \in \operatorname{range}(A^*)$. Let $P = AA^+$ denote the orthogonal projection onto $\operatorname{range}(A)$. Then, the least-squares condition requires $Ax - b \perp \operatorname{range}(A)$, or equivalently, $Pb = Ax$. Substituting $x = A^+ b$ yields $Pb = AA^+ b$, confirming that $AA^+$ is indeed the projection operator. To connect this to the Penrose conditions, note first that $P$ acts as the identity on $\operatorname{range}(A)$, so $AA^+ (Ax') = Ax'$ for every $x'$, which is condition 1: $AA^+ A = A$. For condition 2, $A^+ A A^+ = A^+$, note that $A^+ b$ lies in $\operatorname{range}(A^*)$, and the idempotence follows from the projection properties: $A^+(AA^+ b) = A^+ Pb = A^+ b$, since $Pb$ is the closest point in the range. The symmetry conditions 3 and 4 arise from the self-adjoint nature of orthogonal projections: $AA^+$ is self-adjoint because it projects orthogonally, so $(AA^+)^* = AA^+$, and similarly for $A^+ A$, which projects onto $\operatorname{range}(A^*)$. This derivation shows how the conditions encode the variational principles of least squares and minimum norm in Hilbert spaces.

The relation to orthogonal projections is explicit: $P = AA^+$ is the orthogonal projection onto $\operatorname{range}(A)$, and $Q = A^+ A$ is the orthogonal projection onto $\operatorname{range}(A^*)$. These projectors satisfy $P^2 = P$, $Q^2 = Q$, and $P^* = P$, $Q^* = Q$, directly following from the Penrose conditions. For example, from conditions 1 and 3, $(AA^+)^2 = AA^+ AA^+ = A(A^+ A A^+) = AA^+$, confirming idempotence and self-adjointness. This framework underscores the Moore-Penrose inverse's role in decomposing spaces into orthogonal complements, essential for applications in linear algebra over Hilbert spaces.
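
A short Python/NumPy sketch of the minimum-norm least-squares property (illustrative):

import numpy as np

A = np.array([[1., 1], [1, 1]])             # singular, rank 1
b = np.array([1., 0])                       # b is not in range(A)
x = np.linalg.pinv(A) @ b                   # minimum-norm least-squares solution

null_dir = np.array([1., -1]) / np.sqrt(2)  # spans the null space of A
for t in (0.5, -2.0):
    y = x + t * null_dir                    # another least-squares solution
    assert np.isclose(np.linalg.norm(A @ y - b), np.linalg.norm(A @ x - b))
    assert np.linalg.norm(x) < np.linalg.norm(y)  # x has strictly smaller norm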

Drazin and Group Inverses

The index of a square matrix $A$, denoted $\operatorname{ind}(A)$, is defined as the smallest nonnegative integer $k$ such that $\ker(A^k) = \ker(A^{k+1})$, or equivalently, $\operatorname{rank}(A^k) = \operatorname{rank}(A^{k+1})$. This index measures the "singularity depth" of $A$ and is finite if and only if the ascent (the growth of the generalized kernel) stabilizes. The Drazin inverse of $A$, denoted $A^D$, is a generalized inverse that exists if and only if $\operatorname{ind}(A)$ is finite. It is the unique matrix satisfying the conditions

$$A^{k+1} A^D = A^k, \quad A^D A A^D = A^D, \quad A A^D = A^D A,$$

where $k = \operatorname{ind}(A)$. The first equation ensures that $A^D$ "inverts" $A$ on the range of $A^k$, while the latter two impose idempotence and commutativity. The Drazin inverse commutes with $A$ and is idempotent on the core subspace, with $AA^D$ being the spectral idempotent projecting onto the range of $A^k$. When $\operatorname{ind}(A) = 0$, $A$ is invertible and $A^D = A^{-1}$, which is a reflexive generalized inverse.

A special case arises when $\operatorname{ind}(A) = 1$, in which the Drazin inverse is called the group inverse, denoted $A^\#$. It satisfies

$$A A^\# A = A, \quad A^\# A A^\# = A^\#, \quad A A^\# = A^\# A.$$

These equations characterize $A^\#$ uniquely when it exists, and it arises naturally in the study of power-regular elements in semigroups where the index is at most 1.

For square matrices over the complex numbers, the Drazin inverse relates closely to the Jordan canonical form of $A$. Specifically, if $A = PJP^{-1}$ where $J$ is the Jordan form, then $A^D = PJ^D P^{-1}$, with $J^D$ obtained by inverting each Jordan block for a nonzero eigenvalue $\lambda$ and setting the blocks for eigenvalue 0 to zero. This construction inverts the invertible part while annihilating the nilpotent component beyond the index. An equivalent characterization of the Drazin inverse involves the core polynomial equation

$$A^{k+1}(AA^D - I) = 0,$$

which highlights that $AA^D$ acts as an identity on the image of $A^{k+1}$. This equation, along with commutativity and idempotence, ensures uniqueness and ties the inverse to the minimal polynomial of $A$ restricted to the non-nilpotent part.
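
For computation, one convenient route uses the known identity $A^D = A^k (A^{2k+1})^+ A^k$ with $k = \operatorname{ind}(A)$, where $+$ denotes the Moore-Penrose inverse; a minimal Python/NumPy sketch (illustrative):

import numpy as np

def drazin(A, tol=1e-10):
    # Find the index: smallest k with rank(A^k) == rank(A^(k+1)).
    n = A.shape[0]
    k, Ak = 0, np.eye(n)
    while np.linalg.matrix_rank(Ak, tol) != np.linalg.matrix_rank(Ak @ A, tol):
        Ak, k = Ak @ A, k + 1
    # A^D = A^k (A^(2k+1))^+ A^k.
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1., 1], [0, 0]])  # idempotent, so ind(A) = 1
print(drazin(A))                 # equals A itself: the group inverse of an idempotent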

Other Types

The core inverse, applicable to square matrices of index at most one, is defined as the unique matrix $A^c$ satisfying the equations $AA^cA = A$, $A^cAA^c = A^c$, and $\mathcal{R}(A^c) \subseteq \mathcal{R}(A)$. For such matrices it can be expressed explicitly as $A^c = A^\# A A^+$, where $\#$ denotes the group inverse and $+$ the Moore-Penrose inverse. It serves as an intermediate between the group inverse and the Moore–Penrose inverse, particularly useful for matrices where the index condition holds, and is extended to higher indices in specialized forms.

The Bott-Duffin inverse, originally developed for analyzing electrical networks, provides a generalized inverse for square matrices that minimizes a specific norm in representations involving projections. For positive operators, it arises in contexts where an operator $A$ is represented through $PAP + (I - P)$ for a suitable projection $P$, yielding the inverse as the minimizer over such decompositions. This construction ensures invertibility in constrained subspaces and has niche applications in optimization problems requiring bounded representations.

Recent extensions include the extended core inverse, introduced in 2024, which for square complex matrices combines the sum and difference of the Moore-Penrose inverse, core-EP inverse, and MPCEP inverse to form a unique inner inverse satisfying specific matrix equations. This variant reduces to the standard core inverse for index-one matrices and addresses limitations in prior extensions by preserving inner inverse properties. In 2025, the generalized right core inverse was defined in Banach *-algebras as an extension of the pseudo right core inverse, characterized via right core decompositions and quasi-nilpotent parts, with polar-like properties that facilitate algebraic manipulations in non-commutative settings.

Constructions

For Matrices

One practical method for constructing a generalized inverse of a finite-dimensional matrix $A \in \mathbb{R}^{m \times n}$ with rank $r$ relies on its rank factorization $A = BC$, where $B \in \mathbb{R}^{m \times r}$ has full column rank and $C \in \mathbb{R}^{r \times n}$ has full row rank. To obtain a reflexive generalized inverse (satisfying both $AGA = A$ and $GAG = G$), compute the Moore-Penrose inverses $B^+$ and $C^+$ explicitly using their full-rank properties: $B^+ = (B^{\mathsf{T}} B)^{-1} B^{\mathsf{T}}$ and $C^+ = C^{\mathsf{T}}(C C^{\mathsf{T}})^{-1}$. Then set $G = C^+ B^+$. This $G$ satisfies $AGA = BCC^+ B^+ BC = B(CC^+)(B^+ B)C = B I_r I_r C = BC = A$, confirming the {1}-inverse property; the reflexive property follows similarly from $GAG = C^+ B^+ BCC^+ B^+ = C^+ (B^+ B)(CC^+) B^+ = C^+ I_r I_r B^+ = C^+ B^+ = G$.

The Moore-Penrose inverse $A^+$, a specific reflexive generalized inverse satisfying all four Penrose conditions, can be constructed via the singular value decomposition (SVD) of $A$. Compute the SVD $A = U \Sigma V^{\mathsf{T}}$, where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal matrices (unitary, with conjugate transposes, in the complex case), $\Sigma \in \mathbb{R}^{m \times n}$ is diagonal with nonnegative singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$ on the main diagonal (and zeros elsewhere), and $r = \operatorname{rank}(A)$. Form $\Sigma^+$ as the $n \times m$ matrix with diagonal entries $1/\sigma_i$ for $i = 1, \dots, r$ and zeros otherwise. Then $A^+ = V \Sigma^+ U^{\mathsf{T}}$. This construction satisfies the Penrose conditions. The following steps outline the computation of $A^+$ using SVD in practice:

function A_plus = moore_penrose(A)
    % Compute the SVD: A = U * Sigma * V'
    [U, Sigma, V] = svd(A);
    [m, n] = size(A);
    r = rank(Sigma);              % number of nonzero singular values
    Sigma_plus = zeros(n, m);
    for i = 1:r
        Sigma_plus(i, i) = 1 / Sigma(i, i);
    end
    A_plus = V * Sigma_plus * U'; % for real matrices, ' is the transpose
end


This algorithm leverages standard SVD routines available in numerical libraries, with computational complexity dominated by the SVD step, typically $O(\min(mn^2, m^2n))$.

Another approach for a reflexive generalized inverse uses a maximal nonsingular submatrix. Select index sets $I \subset \{1, \dots, m\}$ and $J \subset \{1, \dots, n\}$ with $|I| = |J| = r$ such that the submatrix $A_{IJ}$ is nonsingular (invertible). Permute rows and columns of $A$ so that $A_{IJ}$ occupies the leading $r \times r$ block. A reflexive generalized inverse $G$ is then obtained by placing $(A_{IJ})^{-1}$ in the leading $r \times r$ block of $G$ (corresponding to the permuted positions) and setting all other entries to zero; adjustments via permutation matrices ensure compatibility with the original indexing. This method reduces computation to inverting an $r \times r$ nonsingular matrix after rank-revealing permutation.
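
The rank-factorization construction described earlier can likewise be sketched in a few lines of Python/NumPy (illustrative; here the factorization $A = BC$ is taken from the thin SVD, though any full-rank factorization works):

import numpy as np

A = np.array([[1., 2, 3], [4, 5, 6], [7, 8, 9]])  # rank 2

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
B = U[:, :r] * s[:r]   # m x r, full column rank
C = Vt[:r, :]          # r x n, full row rank

B_plus = np.linalg.inv(B.T @ B) @ B.T   # (B^T B)^{-1} B^T
C_plus = C.T @ np.linalg.inv(C @ C.T)   # C^T (C C^T)^{-1}
G = C_plus @ B_plus

print(np.allclose(A @ G @ A, A), np.allclose(G @ A @ G, G))  # True True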

In Algebraic Structures

In semigroups, a generalized inverse of an element $A$ is defined as an element $g$ satisfying $AgA = A$. This notion is analyzed through Green's relations, which partition the semigroup into equivalence classes: the $\mathcal{R}$-class for elements generating the same right principal ideal, the $\mathcal{L}$-class for left principal ideals, and the $\mathcal{H}$-class as their intersection. To construct such a $g$, one identifies an "inverse along an element $d$", where $g$ belongs to the $\mathcal{H}$-class of $d$ and satisfies $gAd = d = dAg$, ensuring $g \in dS \cap Sd$. Existence relies on the presence of idempotents in these classes; specifically, an inverse along $d$ exists and is unique if there is an idempotent $e \in \mathcal{R}_d \cap E(S)$ such that $Aed$ and $deA$ form trace products in the semigroup, or equivalently if $Ad \mathrel{\mathcal{L}} d$ and the $\mathcal{H}$-class of $Ad$ is a group.

In rings, generalized inverses are particularly well-developed for regular elements, where an element $A$ is regular if there exists $x$ such that $A = AxA$. In this case, a reflexive generalized inverse is given by $g = xAx$, satisfying both $AgA = A$ and $gAg = g$. Von Neumann regular rings, in which every element is regular, admit such inverses for all elements. The Peirce decomposition facilitates the construction by decomposing the ring relative to the idempotents $e = Ax$ and $f = xA$: the ring $R$ splits into the components $eRe$, $eR(1-e)$, $(1-e)Re$, and $(1-e)R(1-e)$, allowing explicit representation of $A$ and $g$ in matrix-like form over these corner rings and enabling study of uniqueness and absorption properties.

In Banach algebras, constructions of Drazin-like generalized inverses extend these ideas to infinite-dimensional settings, often via perturbation theory for elements with isolated spectral points. For an element $A$ whose spectrum $\sigma(A)$ has 0 as an isolated point of finite algebraic multiplicity $m$, the Drazin inverse $A^D$ satisfies $A^{m+1}A^D = A^m$, $A^D A A^D = A^D$, and $AA^D = A^D A$. This is constructed using the holomorphic functional calculus: if $\Gamma$ is a contour enclosing the nonzero part of $\sigma(A)$ while excluding the isolated point 0, then

$$A^D = \frac{1}{2\pi i} \int_\Gamma \lambda^{-1} R(\lambda, A)\, d\lambda,$$

where $R(\lambda, A) = (\lambda I - A)^{-1}$ is the resolvent, effectively projecting onto the generalized eigenspace for nonzero eigenvalues while handling the pole at 0 perturbatively. Perturbation methods bound the stability of $A^D$ under small changes $E$, yielding estimates like $\|(A+E)^D - A^D\| \leq C\|E\|$ for sufficiently small $\|E\|$ when the index remains finite. The matrix case exemplifies these constructions as a special instance of von Neumann regular rings.

Examples

Matrix Examples

A simple example of a reflexive generalized inverse arises with the singular matrix

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$

Here, the matrix $g = A$ itself serves as a reflexive generalized inverse, satisfying $AgA = A$ and $gAg = g$. To verify, first compute $A^2 = A$, so $AgA = A^3 = A$; similarly, $gAg = A^3 = A = g$. This example illustrates an idempotent matrix, where the reflexive inverse coincides with the original element due to idempotence.

For a one-sided generalized inverse, consider the rectangular matrix

$$A = \begin{pmatrix} 1 \\ 2 \end{pmatrix},$$

a 2×1 column vector of full column rank. A generalized inverse $g$ satisfies $AgA = A$. One such $g$ is the row vector $g = \begin{pmatrix} \frac{1}{5} & \frac{2}{5} \end{pmatrix}$, a left inverse of $A$ corresponding to the minimal-norm choice. First, compute the scalar $gA = \frac{1}{5} \cdot 1 + \frac{2}{5} \cdot 2 = \frac{1}{5} + \frac{4}{5} = 1$. Then, $AgA = A(gA) = A \cdot 1 = A$, confirming the condition. The product $Ag = \begin{pmatrix} \frac{1}{5} & \frac{2}{5} \\ \frac{2}{5} & \frac{4}{5} \end{pmatrix}$ is the orthogonal projection onto the column space of $A$, a rank-1 matrix that is not the full identity but acts as a partial inverse on the range. Other choices, such as $g = \begin{pmatrix} 1 & 0 \end{pmatrix}$, also satisfy the condition, since $gA = 1$ and $AgA = A$, but yield a different (oblique) projector $Ag = \begin{pmatrix} 1 & 0 \\ 2 & 0 \end{pmatrix}$.

The Moore-Penrose inverse provides a unique generalized inverse satisfying additional symmetry and orthogonality conditions. Consider the rank-1 singular matrix

$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.$$

Its Moore-Penrose inverse is $A^+ = \frac{1}{4}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$. To compute this via the singular value decomposition (SVD), first find $AA^{\mathsf{T}} = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix}$, with eigenvalues 4 and 0. The nonzero singular value is $\sigma_1 = 2$, with left singular vector $u_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and right singular vector $v_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Then $A^+ = v_1 \sigma_1^{-1} u_1^{\mathsf{T}} = \frac{1}{2} \cdot \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \frac{1}{4}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, confirming the stated result.
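
Each of these worked examples can be confirmed numerically; a closing Python/NumPy sketch (illustrative):

import numpy as np

# Idempotent example: g = A is a reflexive generalized inverse of itself.
A1 = np.array([[1., 0], [0, 0]])
assert np.allclose(A1 @ A1 @ A1, A1)

# One-sided example: both choices of g satisfy A g A = A.
A2 = np.array([[1.], [2]])
for g in (np.array([[0.2, 0.4]]), np.array([[1., 0]])):
    assert np.allclose(A2 @ g @ A2, A2)

# Moore-Penrose example: the pseudoinverse of the all-ones 2x2 matrix is itself over 4.
A3 = np.ones((2, 2))
assert np.allclose(np.linalg.pinv(A3), A3 / 4)

print("all examples verified")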