Commuting matrices
from Wikipedia

In linear algebra, two matrices A and B are said to commute if AB = BA, or equivalently if their commutator AB − BA is zero. The matrices that commute with a matrix A are called the commutant of A (and vice versa).[1]

A set of matrices is said to commute if they commute pairwise, meaning that every pair of matrices in the set commutes.

Characterizations and properties

  • Commuting matrices preserve each other's eigenspaces.[2] As a consequence, commuting matrices over an algebraically closed field are simultaneously triangularizable; that is, there are bases over which they are both upper triangular. In other words, if the matrices A_1, ..., A_k commute, there exists an invertible matrix P such that P^{-1} A_i P is upper triangular for every i. The converse is not necessarily true: two upper triangular matrices are trivially simultaneously triangularizable, yet need not commute. However, if the square of the commutator of two matrices is zero, that is, (AB − BA)^2 = 0, then the converse does hold.[3]
  • Two diagonalizable matrices A and B commute (AB = BA) if they are simultaneously diagonalizable (that is, there exists an invertible matrix P such that both P^{-1}AP and P^{-1}BP are diagonal).[4]: p. 64  The converse is also true: if two diagonalizable matrices commute, they are simultaneously diagonalizable.[5] Moreover, two commuting matrices (not assumed diagonalizable) are simultaneously diagonalizable whenever one of them has no multiple eigenvalues.[6]
  • If A and B commute, they have a common eigenvector. If A has distinct eigenvalues, and A and B commute, then every eigenvector of A is also an eigenvector of B.
  • If one of the matrices has the property that its minimal polynomial coincides with its characteristic polynomial (that is, it has the maximal degree), which happens in particular whenever the characteristic polynomial has only simple roots, then the other matrix can be written as a polynomial in the first.
  • As a direct consequence of simultaneous triangularizability, the eigenvalues of two commuting complex matrices A, B with their algebraic multiplicities (the multisets of roots of their characteristic polynomials) can be matched up as λ_i ↔ μ_i in such a way that the multiset of eigenvalues of any polynomial P(A, B) in the two matrices is the multiset of the values P(λ_i, μ_i). This theorem is due to Frobenius.[7]
  • Two Hermitian matrices commute if their eigenspaces coincide. In particular, two Hermitian matrices without multiple eigenvalues commute if they share the same set of eigenvectors. This follows by considering the eigenvalue decompositions of both matrices. Let A and B be two Hermitian matrices. A and B have common eigenspaces when they can be written as A = U Λ_1 U^* and B = U Λ_2 U^* with the same unitary U and diagonal Λ_1, Λ_2. It then follows that AB = U Λ_1 Λ_2 U^* = U Λ_2 Λ_1 U^* = BA.
  • The property of two matrices commuting is not transitive: a matrix A may commute with both B and C, and still B and C need not commute with each other. As an example, the identity matrix commutes with all matrices, which between them do not all commute. If the set of matrices considered is restricted to Hermitian matrices without multiple eigenvalues, then commutativity is transitive, as a consequence of the characterization in terms of eigenvectors.
  • Lie's theorem, which shows that any representation of a solvable Lie algebra is simultaneously upper triangularizable, may be viewed as a generalization of this result.
  • An n × n matrix A commutes with every other n × n matrix if and only if it is a scalar matrix, that is, a matrix of the form λI, where I is the n × n identity matrix and λ is a scalar. In other words, the center of the group of n × n matrices under multiplication is the subgroup of scalar matrices.
  • Fix a finite field F_q and let C(n) denote the number of ordered pairs of commuting n × n matrices over F_q. W. Feit and N. J. Fine[8] derived a product formula for the generating function of the numbers C(n).
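Some of the claims above are easy to check numerically. The numpy sketch below (the specific matrices are our own illustrative choices, not taken from the article) verifies that two upper triangular matrices, which are trivially simultaneously triangularizable, need not commute, and that a scalar matrix λI commutes with an arbitrary matrix:

```python
import numpy as np

# Two upper triangular 2x2 matrices: simultaneously triangularizable
# (they are already triangular in the standard basis), yet AB != BA.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
commutator = A @ B - B @ A
triangular_but_not_commuting = not np.allclose(commutator, 0)

# A scalar matrix c*I lies in the center of M_n: it commutes with
# every matrix.
c = 3.0
S = c * np.eye(2)
R = np.array([[0.0, 1.0], [2.0, 5.0]])  # arbitrary test matrix
scalar_commutes = np.allclose(S @ R, R @ S)
```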

Examples

  • The identity matrix commutes with all matrices.
  • Jordan blocks commute with upper triangular matrices that have the same value along bands.
  • If the product of two symmetric matrices is symmetric, then they must commute. That also means that every diagonal matrix commutes with all other diagonal matrices.[9][10]
  • Circulant matrices commute. They form a commutative ring since the sum of two circulant matrices is circulant.
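The circulant example can be checked directly; a numpy sketch (the `circulant` helper and the sample rows are our own illustrative choices) verifies that circulant matrices commute and that their sum is again circulant:

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix: each row is the previous one
    cyclically shifted one step to the right."""
    c = np.asarray(first_row, dtype=float)
    return np.array([np.roll(c, k) for k in range(len(c))])

C1 = circulant([1.0, 2.0, 3.0])
C2 = circulant([4.0, 0.0, -1.0])

circulants_commute = np.allclose(C1 @ C2, C2 @ C1)
sum_is_circulant = np.allclose(C1 + C2, circulant([5.0, 2.0, 2.0]))
```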

History


The notion of commuting matrices was introduced by Cayley in his memoir on the theory of matrices, which also provided the first axiomatization of matrices. The first significant results on commuting matrices were proved by Frobenius in 1878.[11]

from Grokipedia
In linear algebra, two square matrices A and B over a field (typically the real or complex numbers) are said to commute if AB = BA, meaning the order of multiplication does not affect the result. This property is expressed by the vanishing of the commutator [A, B] = AB − BA = 0. While matrix multiplication is generally non-commutative (unlike multiplication of scalars), the commuting case is central to many advanced results in the field. A key theorem states that if A and B are both diagonalizable and commute, then there exists a single invertible matrix S such that S^{-1}AS and S^{-1}BS are both diagonal, allowing them to share a common eigenbasis. This simultaneous diagonalization simplifies computations and reveals structural similarities between the matrices. For symmetric (Hermitian) matrices, which arise frequently in applications such as optimization, commutativity implies simultaneous diagonalization via a unitary transformation, which preserves inner products. In quantum theory, commuting Hermitian operators represent compatible observables that can be measured simultaneously with definite outcomes, as they share common eigenvectors. More broadly, over the complex numbers, a family of commuting matrices is simultaneously upper triangularizable, with further implications for representations of algebras, Jordan canonical forms, and stability analysis in dynamical systems.

Fundamentals

Definition

In linear algebra, matrix multiplication is generally non-commutative: for arbitrary square matrices A and B of the same order n × n, the product AB does not necessarily equal BA. Two such matrices A, B ∈ M_n(F), where F is a field (such as the real or complex numbers), are said to commute if AB = BA. This condition is equivalently expressed through the vanishing of the commutator [A, B] = AB − BA = 0. The concept applies specifically to square matrices of the same size, since matrix multiplication requires compatible dimensions for both AB and BA to be defined and comparable; non-square matrices do not admit a standard commutativity relation in this sense.
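Under this definition, checking commutativity reduces to testing whether the commutator vanishes; a minimal numpy sketch (the helper name `commute` and the sample matrix are our own illustrative choices):

```python
import numpy as np

def commute(A, B, tol=1e-10):
    """Check AB == BA via the commutator [A, B] = AB - BA."""
    A, B = np.asarray(A), np.asarray(B)
    if A.shape != B.shape or A.shape[0] != A.shape[1]:
        raise ValueError("commutativity requires square matrices of equal size")
    return np.linalg.norm(A @ B - B @ A) <= tol

A = np.array([[1.0, 2.0], [0.0, 3.0]])
assert commute(A, A @ A)      # any matrix commutes with its own powers
assert commute(A, np.eye(2))  # and with the identity
```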

Basic Properties

The set of all matrices that commute with a fixed matrix A ∈ M_n(F), called the centralizer C(A) = { B ∈ M_n(F) | AB = BA }, forms a subalgebra of the full matrix algebra M_n(F) over the field F. It is closed under addition and scalar multiplication: if B, C ∈ C(A) and α, β ∈ F, then A(αB + βC) = αAB + βAC = αBA + βCA = (αB + βC)A. It is also closed under matrix multiplication, since if B, C ∈ C(A), then A(BC) = (AB)C = (BA)C = B(AC) = B(CA) = (BC)A. The identity matrix I commutes with every matrix A, as IA = AI = A, so I ∈ C(A) for all A. Similarly, every scalar multiple of the identity, cI for c ∈ F, commutes with A, because (cI)A = cA = A(cI). Consequently, the centralizer of the identity is the entire matrix algebra: C(I) = M_n(F). A simple but illustrative case arises with diagonal matrices: any two diagonal matrices D_1, D_2 ∈ D_n(F) (the set of n × n diagonal matrices over F) commute, as their product is the entrywise product of their diagonals, which is commutative. In fact, the centralizer of a diagonal matrix with distinct diagonal entries is precisely the set of diagonal matrices. While commuting matrices preserve each other's eigenspaces (if Av = λv, then A(Bv) = B(Av) = λ(Bv), so Bv is again an eigenvector of A with eigenvalue λ unless Bv = 0), they do not necessarily share a complete set of common eigenvectors. For instance, the identity matrix commutes with every matrix but has the full space as its eigenspace, whereas a non-diagonalizable matrix such as a Jordan block has a proper eigenspace of dimension less than n.
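The closure properties of the centralizer and the eigenspace-preservation argument can be illustrated concretely; a numpy sketch assuming a diagonal matrix with distinct entries (the sample values are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A diagonal matrix with distinct entries: its centralizer is exactly
# the diagonal matrices, so diagonal B and C must lie in C(D).
D = np.diag([1.0, 2.0, 3.0])
B = np.diag(rng.standard_normal(n))
C = np.diag(rng.standard_normal(n))

# Closure of the centralizer under addition, scaling, and multiplication.
for X in (B + C, 2.5 * B, B @ C):
    assert np.allclose(D @ X, X @ D)

# Eigenspace preservation: if Dv = 1*v and DB = BD, then Bv stays in
# the eigenspace of D for eigenvalue 1.
v = np.array([1.0, 0.0, 0.0])  # eigenvector of D for eigenvalue 1
w = B @ v
eigen_preserved = np.allclose(D @ w, 1.0 * w)
```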

Advanced Properties

Commuting Families

A commuting family of matrices is a finite or infinite collection {A_1, A_2, ..., A_k} of matrices in M_n(F), where F is a field, such that A_i A_j = A_j A_i for all i, j. These families extend the notion of pairwise commutativity to multiple matrices and play a key role in understanding the structure of commutative substructures within the full matrix algebra M_n(F). The centralizer of a single matrix A ∈ M_n(F), denoted Z(A), is the subalgebra consisting of all matrices B ∈ M_n(F) that commute with A, i.e., Z(A) = { B ∈ M_n(F) | AB = BA }. This set forms an associative algebra under matrix addition and multiplication, and any commuting family containing A is contained in Z(A). For n × n matrices over an algebraically closed field, the dimension of Z(A) satisfies dim Z(A) ≥ n, with equality if and only if A is a cyclic (non-derogatory) matrix, i.e., the minimal polynomial of A has degree n. In this minimal case, Z(A) coincides with the algebra of all polynomials in A. A commuting family {A_1, ..., A_k} generates a commutative subalgebra of M_n(F): the F-span of all products of the A_i (including the identity) is abelian under multiplication. Such subalgebras are of particular interest because their structure reflects the joint properties of the family. For instance, if the matrices are diagonalizable over C, the family admits simultaneous diagonalization. A canonical example of a commuting family is the set of all diagonal n × n matrices over F, which pairwise commute since multiplication of diagonal matrices acts entrywise on the diagonals. This family generates a commutative subalgebra of dimension n.
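The bound dim Z(A) ≥ n, with equality for cyclic matrices, can be tested numerically by viewing AB − BA = 0 as a linear system in the entries of B; a numpy sketch (the Kronecker-product formulation is standard; the sample companion matrix is our own illustrative choice):

```python
import numpy as np

def centralizer_dim(A, tol=1e-9):
    """dim Z(A), computed by solving AB - BA = 0 for vec(B).

    With column-major vec, vec(AB) = (I (x) A) vec(B) and
    vec(BA) = (A^T (x) I) vec(B), so Z(A) is the kernel of
    L = I (x) A - A^T (x) I; its dimension is n^2 - rank(L).
    """
    n = A.shape[0]
    L = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
    s = np.linalg.svd(L, compute_uv=False)
    return int(np.sum(s <= tol * max(1.0, s.max())))

# A cyclic (non-derogatory) matrix: the companion matrix of x^3 - 1.
A = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
dim_cyclic = centralizer_dim(A)            # minimal case: equals n = 3

dim_identity = centralizer_dim(np.eye(3))  # maximal case: n^2 = 9
```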

Simultaneous Diagonalization and Triangularization

A commuting family of diagonalizable matrices over the complex numbers can be simultaneously diagonalized via a single invertible similarity transformation. Specifically, if {A_α} ⊂ M_n(C) is a family of pairwise commuting diagonalizable matrices, there exists an invertible matrix S ∈ M_n(C) such that S^{-1} A_α S is diagonal for every α. This result follows from the fact that such families share a common eigenbasis, allowing a unified diagonal form. For the special case of normal matrices, which are unitarily diagonalizable individually, the theorem strengthens: a family of commuting normal matrices is simultaneously unitarily diagonalizable, meaning there exists a unitary U ∈ M_n(C) such that U^* A_α U is diagonal for all α. This extends Schur's unitary triangularization theorem to families and underscores the role of normality in preserving unitarity. More generally, any family of pairwise commuting complex matrices admits simultaneous upper triangularization, regardless of diagonalizability. That is, for commuting A, B ∈ M_n(C), there exists an invertible P ∈ M_n(C) such that both P^{-1} A P and P^{-1} B P are upper triangular. This holds for arbitrary finite or infinite commuting families {A_α} ⊂ M_n(C): a single invertible P (or, equivalently, a unitary U) renders all P^{-1} A_α P upper triangular. The diagonal entries in this form are the eigenvalues, ordered compatibly across the family, reflecting shared spectral structure such as common invariant subspaces. A deeper algebraic condition governs simultaneous diagonalizability: a commuting family is simultaneously diagonalizable if and only if the associative algebra it generates is semisimple.
In this context, the generated algebra is commutative due to pairwise commutativity, and semisimplicity ensures a decomposition into a direct sum of simple components (copies of C), enabling a basis of common eigenvectors. However, not all commuting families satisfy this; for instance, a family containing a nonzero nilpotent Jordan block of size greater than 1 generates a non-semisimple algebra and cannot be simultaneously diagonalized, though it remains simultaneously triangularizable. Such limitations highlight the distinction between triangular and diagonal forms in non-diagonalizable cases.
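Simultaneous diagonalization of a commuting pair can be demonstrated by taking B to be a polynomial in a matrix A with distinct eigenvalues; a numpy sketch under those assumptions (the eigenbasis and polynomial are our own illustrative choices):

```python
import numpy as np

# Build A with distinct eigenvalues 1, 2, 5 and a known (non-orthogonal)
# eigenbasis S0; then B = p(A) commutes with A by construction.
S0 = np.array([[1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0],
               [1.0, 0.0, 1.0]])
A = S0 @ np.diag([1.0, 2.0, 5.0]) @ np.linalg.inv(S0)
B = A @ A - 3.0 * A + np.eye(3)  # p(t) = t^2 - 3t + 1

assert np.allclose(A @ B, B @ A)

# Since A has distinct eigenvalues, any eigenvector matrix S of A
# diagonalizes B as well: the two share a common eigenbasis.
eigvals, S = np.linalg.eig(A)
DA = np.linalg.inv(S) @ A @ S
DB = np.linalg.inv(S) @ B @ S
both_diagonal = (np.allclose(DA, np.diag(np.diag(DA)))
                 and np.allclose(DB, np.diag(np.diag(DB))))
```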

Characterizations

Algebraic Characterizations

Algebraic characterizations of commuting matrices focus on conditions expressed through identities, algebraic structures, and trace properties within the ring of matrices. A fundamental result states that if two n × n matrices A and B over a field commute (i.e., AB = BA), and A is non-derogatory, meaning its minimal polynomial has degree n, equal to its characteristic polynomial, then B can be expressed as a polynomial in A of degree at most n − 1. This characterization shows how commutativity restricts the centralizer of A to the algebra of polynomials in A whenever A achieves the maximal possible degree for its minimal polynomial. The converse also holds: if B = p(A) for some polynomial p, then AB = BA trivially, as polynomials in A commute with A. This equivalence is central to understanding the structure of commutative subalgebras in matrix rings. A related perspective arises from the algebra generated by A and B: if A and B commute, the algebra they generate, consisting of all linear combinations of products of A and B, is commutative; conversely, commutativity of this generated algebra forces AB = BA. From a Lie-theoretic viewpoint, the commutator bracket [·, ·] defines the Lie algebra gl(n) of n × n matrices. If A and B commute, the Lie algebra generated by A and B, spanned by the linear combinations xA + yB for scalars x, y, is abelian, as [xA + yB, x'A + y'B] = (xy' − yx')[A, B] = 0 for all scalars.
This abelianity is a direct algebraic consequence of [A, B] = 0 and characterizes pairs for which the generated Lie structure has vanishing brackets, distinguishing them from non-abelian Lie algebras. An additional algebraic condition involves traces. The cyclic property of the trace gives tr(A^k B^m) = tr(B^m A^k) for all matrices, commuting or not, so this identity alone cannot detect commutativity. A genuine trace criterion over the complex numbers is tr([A, B][A, B]^*) = 0, which holds if and only if [A, B] = 0, since tr(CC^*) is the squared Frobenius norm of C. Finally, a variant of Frobenius's theorem ties commutativity to invariant subspaces: if A and B commute, then every eigenspace of A, and more generally the kernel and image of any polynomial in A, is invariant under B, so the two matrices preserve a common lattice of invariant subspaces. This result, rooted in early matrix theory, algebraically links commutativity to the coincidence of invariant subspace structures without invoking spectral data.
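The Frobenius-norm trace criterion lends itself to a direct numerical test; a numpy sketch (the sample matrices are our own illustrative choices):

```python
import numpy as np

def commutator_gram_trace(A, B):
    """tr([A,B] [A,B]^H) = ||AB - BA||_F^2; zero iff A and B commute."""
    C = A @ B - B @ A
    return np.trace(C @ C.conj().T).real

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
noncommuting_value = commutator_gram_trace(A, B)           # strictly positive

commuting_value = commutator_gram_trace(A, A @ A + 2 * A)  # exactly zero
```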

Spectral and Geometric Characterizations

Commuting matrices exhibit profound connections to spectral theory, particularly through their interaction with eigenvalues and eigenspaces. A fundamental property arises when considering an eigenvector of one matrix under the action of a commuting partner. Suppose A and B are matrices satisfying AB = BA, and let v be an eigenvector of A with eigenvalue λ, so Av = λv. Then A(Bv) = B(Av) = B(λv) = λ(Bv), which implies that Bv is also an eigenvector of A for the same eigenvalue λ (or Bv = 0). This shows that B maps the eigenspace of A for λ into itself, preserving the eigenspace. More generally, if A and B commute, each preserves the eigenspaces of the other: the eigenspace E_λ(A) = { v | Av = λv } is invariant under B, meaning B(E_λ(A)) ⊆ E_λ(A). If A and B are both diagonalizable, then they are simultaneously diagonalizable, admitting a common eigenbasis in which both are diagonal. From a geometric perspective, commutativity of linear operators on a vector space implies the existence of joint invariant subspaces: the eigenspaces of one operator serve as invariant subspaces for the other, allowing the operators to act compatibly on these subspaces. This joint invariance permits a decomposition of the space into common invariant components, underscoring commutativity as a condition for aligned geometric structure. For non-diagonalizable matrices, the spectral characterization extends to generalized eigenspaces. If AB = BA, then B preserves each generalized eigenspace of A, defined as G_λ(A) = { v | (A − λI)^k v = 0 for some k }.
Within these spaces, the Jordan structures are compatible: B maps the Jordan chains of A in a manner that respects chain lengths and eigenvalue associations, enabling simultaneous upper triangularization with aligned blocks. This compatibility ensures that the corresponding parts of the Jordan decompositions interact consistently under commutation.
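Eigenspace preservation in the non-diagonalizable case can be seen with a single Jordan block and a commuting polynomial in it; a numpy sketch (the block size and polynomial are our own illustrative choices):

```python
import numpy as np

# A: a single 3x3 Jordan block with eigenvalue 2 (not diagonalizable).
lam = 2.0
A = lam * np.eye(3) + np.diag([1.0, 1.0], k=1)
B = 4.0 * np.eye(3) + A @ A  # commutes with A (a polynomial in A)
assert np.allclose(A @ B, B @ A)

# The ordinary eigenspace of A for lam is span{e1}; B maps it into itself.
v = np.array([1.0, 0.0, 0.0])
w = B @ v
in_eigenspace = np.allclose(A @ w, lam * w)

# The generalized eigenspace here is the whole space, since
# (A - lam*I)^3 = 0, so its preservation by B is automatic.
assert np.allclose(np.linalg.matrix_power(A - lam * np.eye(3), 3), 0)
```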

Examples and Applications

Canonical Examples

One canonical class of commuting matrices consists of all diagonal matrices over the complex numbers. For any two n × n diagonal matrices A = diag(a_1, ..., a_n) and B = diag(b_1, ..., b_n), the product is AB = diag(a_1 b_1, ..., a_n b_n) = BA, demonstrating commutativity directly from the absence of off-diagonal terms in the multiplication. This holds because diagonal matrices have the standard basis vectors as eigenvectors, so they are simultaneously diagonal in the same basis. In contrast, rotation matrices in three dimensions provide a fundamental example of non-commutativity. Consider the matrices for 90-degree rotations about the x- and y-axes:

R_x(90°) = [ 1  0   0 ;  0  0  -1 ;  0  1   0 ],   R_y(90°) = [ 0  0  1 ;  0  1  0 ; -1  0  0 ].

Composing these rotations in the two possible orders yields different results, so R_x(90°) R_y(90°) ≠ R_y(90°) R_x(90°).
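The non-commutativity of these two rotations, and the commutativity of diagonal matrices, can be verified directly; a numpy sketch (the diagonal entries are our own illustrative choices):

```python
import numpy as np

Rx = np.array([[1, 0, 0],
               [0, 0, -1],
               [0, 1, 0]], dtype=float)   # 90-degree rotation about x
Ry = np.array([[0, 0, 1],
               [0, 1, 0],
               [-1, 0, 0]], dtype=float)  # 90-degree rotation about y

rotations_commute = np.allclose(Rx @ Ry, Ry @ Rx)  # False: order matters

D1 = np.diag([1.0, 2.0, 3.0])
D2 = np.diag([4.0, 5.0, 6.0])
diagonals_commute = np.allclose(D1 @ D2, D2 @ D1)  # True: entrywise product
```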