Exchange matrix

from Wikipedia

In mathematics, especially linear algebra, the exchange matrix (also called the reversal matrix, backward identity, or standard involutory permutation) is a special case of a permutation matrix, where the 1 elements reside on the antidiagonal and all other elements are zero. In other words, it is a 'row-reversed' or 'column-reversed' version of the identity matrix.[1]

Definition

If J is an n × n exchange matrix, then the elements of J are

$ J_{i,j} = \begin{cases} 1, & j = n + 1 - i, \\ 0, & \text{otherwise.} \end{cases} $

Properties

  • Premultiplying a matrix by an exchange matrix flips vertically the positions of the former's rows, i.e., $ (J_n A)_{i,j} = A_{n+1-i,\, j} $.
  • Postmultiplying a matrix by an exchange matrix flips horizontally the positions of the former's columns, i.e., $ (A J_n)_{i,j} = A_{i,\, n+1-j} $.
  • Exchange matrices are symmetric; that is, $ J_n^T = J_n $.
  • For any integer k, $ J_n^k = J_n $ if k is odd and $ J_n^k = I_n $ if k is even. In particular, $ J_n $ is an involutory matrix; that is, $ J_n^2 = I_n $.
  • The trace of $ J_n $ is 1 if n is odd and 0 if n is even. In other words, $ \operatorname{tr}(J_n) = n \bmod 2 $.
  • The determinant of $ J_n $ is $ (-1)^{\lfloor n/2 \rfloor} $. As a function of n, it has period 4, giving 1, 1, −1, −1 when n is congruent modulo 4 to 0, 1, 2, and 3 respectively.
  • The characteristic polynomial of $ J_n $ is $ (\lambda - 1)^{\lceil n/2 \rceil} (\lambda + 1)^{\lfloor n/2 \rfloor} $;

its eigenvalues are 1 (with multiplicity $ \lceil n/2 \rceil $) and −1 (with multiplicity $ \lfloor n/2 \rfloor $).

  • The adjugate matrix of $ J_n $ is $ \operatorname{adj}(J_n) = \operatorname{sgn}(\pi_n)\, J_n $ (where sgn is the sign of the permutation $ \pi_k $ of k elements).
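These properties are straightforward to check numerically. A minimal NumPy sketch (the `exchange` helper is ours, not a library function, built from `np.fliplr`, which reverses the columns of the identity):

```python
import numpy as np

def exchange(n):
    """n x n exchange matrix J_n: the identity with its columns reversed."""
    return np.fliplr(np.eye(n, dtype=int))

for n in range(1, 9):
    J = exchange(n)
    # Symmetric and involutory: J^T = J and J^2 = I.
    assert np.array_equal(J.T, J)
    assert np.array_equal(J @ J, np.eye(n, dtype=int))
    # Trace is 1 for odd n, 0 for even n; determinant is (-1)^floor(n/2).
    assert np.trace(J) == n % 2
    assert round(np.linalg.det(J)) == (-1) ** (n // 2)
```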

Relationships

  • An exchange matrix is the simplest anti-diagonal matrix.
  • Any matrix A satisfying the condition AJ = JA is said to be centrosymmetric.
  • Any matrix A satisfying the condition AJ = JAT is said to be persymmetric.
  • Symmetric matrices A that satisfy the condition AJ = JA are called bisymmetric matrices. Bisymmetric matrices are both centrosymmetric and persymmetric.

See also

  • Pauli matrices (the first Pauli matrix is a 2 × 2 exchange matrix)

from Grokipedia
In linear algebra, the exchange matrix (also known as the reversal matrix, backward identity, or standard involutory permutation matrix) is a square matrix of order $ n $ with ones along the main anti-diagonal (from the top-right to the bottom-left corner) and zeros elsewhere.[1] For example, the 3×3 exchange matrix $ J $ is given by
J = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.
This matrix reverses the order of the entries in any vector it multiplies (on the left or right, due to its symmetry), effectively permuting the components from first to last.[2] The exchange matrix possesses several key properties that make it fundamental in matrix analysis. It is symmetric ($ J = J^T $), orthogonal ($ J^T J = I $), and an involution ($ J^2 = I $), meaning it is its own inverse and transpose.[2][1] These attributes ensure that multiplying by $ J $ is a reversible operation without scaling, preserving norms and inner products in vector spaces. Additionally, a matrix $ A $ is centrosymmetric if it is invariant under conjugation by $ J $, i.e., $ J A J = A $, which implies symmetry with respect to the center of the matrix.[2]

Exchange matrices play a crucial role in various applications, including the study of symmetric and persymmetric matrices, where conjugation with $ J $ transforms or symmetrizes structures like Hankel or Toeplitz matrices. They also appear in numerical algorithms for Gaussian elimination, eigenvalue computations, and the analysis of structured matrices in signal processing and control theory, facilitating efficient reversals and permutations without full recomputation.[1]

Definition and Examples

Formal Definition

The $ n \times n $ exchange matrix $ J_n $, also known as the reversal matrix, is a specific permutation matrix whose entries are given by $ J_{i,j} = \delta_{i,\, n+1-j} $, where $ \delta $ denotes the Kronecker delta; this configuration places 1s exclusively along the anti-diagonal and 0s elsewhere.[3][4] Equivalently, $ J_n $ is the identity matrix with its rows (or columns) reversed in order.[2] As a permutation matrix, $ J_n $ represents the reversal permutation $ \sigma: \{1, \dots, n\} \to \{1, \dots, n\} $ defined by $ \sigma(k) = n+1-k $, which acts on the standard basis by mapping the vector $ e_k $ to $ e_{n+1-k} $.[3] When applied to an arbitrary matrix $ A $, premultiplication by $ J_n $ reverses the order of the rows of $ A $, while postmultiplication $ A J_n $ reverses the order of the columns of $ A $.[5] Furthermore, the conjugation (similarity transformation) $ J_n A J_n $ reverses both the rows and columns of $ A $, producing a matrix whose entries are given by $ (J_n A J_n)_{i,j} = A_{n-i+1,\, n-j+1} $. This corresponds to a 180-degree rotation of the entries of $ A $ (reflection across the center), which is distinct from the transpose $ A^T $, a reflection across the main diagonal. This has a direct interpretation in terms of change of basis for linear operators. If $ M $ is the matrix representation of a linear operator $ T $ with respect to the ordered basis $ \mathcal{B} = \{v_1, v_2, \dots, v_n\} $, then in the reversed basis $ \mathcal{B}' = \{v_n, v_{n-1}, \dots, v_1\} $, the matrix representation $ M' $ of $ T $ satisfies
M'_{i,j} = M_{n-i+1,\, n-j+1},
or equivalently, $ M' = J_n M J_n $. Thus, reversing the basis order conjugates the operator's matrix by the exchange matrix, rotating its entries 180 degrees rather than transposing them.
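The distinction between conjugation by $ J_n $ (a 180-degree rotation) and transposition can be checked directly. A small NumPy sketch, using `np.fliplr(np.eye(n))` to build $ J_n $:

```python
import numpy as np

n = 4
J = np.fliplr(np.eye(n, dtype=int))   # J_n as the reversed identity
A = np.arange(n * n).reshape(n, n)

# Conjugation J A J rotates the entries of A by 180 degrees...
rotated = J @ A @ J
assert np.array_equal(rotated, np.rot90(A, 2))

# ...which is not the same as the transpose (reflection across the diagonal).
assert not np.array_equal(rotated, A.T)

# Premultiplication reverses rows; postmultiplication reverses columns.
assert np.array_equal(J @ A, A[::-1, :])
assert np.array_equal(A @ J, A[:, ::-1])
```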

Examples in Low Dimensions

In low dimensions, the exchange matrix provides concrete illustrations of its role as a reversal operator on basis vectors. For dimension $ n=2 $, the exchange matrix $ J_2 $ is given by
J_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
which swaps the standard basis vectors $ e_1 = (1, 0)^T $ and $ e_2 = (0, 1)^T $, as $ J_2 e_1 = e_2 $ and $ J_2 e_2 = e_1 $.[6] This structure places 1s on the anti-diagonal, reflecting the general form where entries are 1 only when the row index plus column index equals $ n+1 $.[3] For dimension $ n=3 $, the exchange matrix $ J_3 $ takes the form
J_3 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix},
reversing the order of the standard basis vectors such that $ J_3 e_1 = e_3 $, $ J_3 e_2 = e_2 $, and $ J_3 e_3 = e_1 $.[3] This anti-diagonal pattern emerges consistently, positioning 1s to permute coordinates in reverse. To verify its reversal effect, consider the action on a sample vector $ v = (1, 2, 3)^T $: multiplying yields $ J_3 v = (3, 2, 1)^T $, directly exchanging the first and third components while leaving the middle unchanged.[3]
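These low-dimensional actions can be verified in a few lines of NumPy:

```python
import numpy as np

J2 = np.array([[0, 1],
               [1, 0]])
# J_2 swaps the standard basis vectors e1 and e2.
assert np.array_equal(J2 @ [1, 0], [0, 1])
assert np.array_equal(J2 @ [0, 1], [1, 0])

J3 = np.array([[0, 0, 1],
               [0, 1, 0],
               [1, 0, 0]])
# J_3 reverses a vector's components, fixing the middle one.
assert np.array_equal(J3 @ [1, 2, 3], [3, 2, 1])
```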

Algebraic Properties

Symmetry and Involutory Nature

The exchange matrix $ J_n $, also known as the reversal matrix, is symmetric, satisfying $ J_n^T = J_n $. This property arises from its structure, where the entries are 1 on the anti-diagonal and 0 elsewhere, such that $ (J_n)_{i,j} = \delta_{i,\, n+1-j} $. Transposing the matrix reflects it across the main diagonal, but due to the anti-diagonal placement of the 1s, the result remains unchanged, preserving the original form.[2] Furthermore, $ J_n $ is involutory, meaning $ J_n^2 = I_n $, where $ I_n $ is the $ n \times n $ identity matrix. This follows from the reversal permutation it induces: multiplying a vector by $ J_n $ reverses its components, and applying the reversal again restores the original order. To see this algebraically, the $ (i,j) $-th entry of $ J_n^2 $ is $ \sum_{k=1}^n (J_n)_{i,k} (J_n)_{k,j} = \sum_{k=1}^n \delta_{i,\, n+1-k} \delta_{k,\, n+1-j} $, which simplifies to $ \delta_{i,j} $ because the sum picks out the single term $ k = n+1-j $, where $ \delta_{i,\, n+1-k} = \delta_{i,j} $. This confirms that double reversal yields the identity.[2] As a symmetric involutory matrix, $ J_n $ is orthogonal, satisfying $ J_n J_n^T = I_n $. Substituting the symmetry gives $ J_n J_n = I_n $, which aligns directly with the involutory property, implying that $ J_n $ preserves the Euclidean norm of vectors under multiplication. This orthogonality underscores its role as a permutation matrix corresponding to the reversal operation.[2]
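The orthogonality and its norm-preserving consequence can be exercised directly; a minimal NumPy sketch:

```python
import numpy as np

n = 6
J = np.fliplr(np.eye(n))        # the exchange matrix J_n

# Symmetric and orthogonal: J^T = J and J J^T = I, hence J^2 = I.
assert np.array_equal(J.T, J)
assert np.allclose(J @ J.T, np.eye(n))

# Orthogonality preserves Euclidean norms and inner products.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose(np.linalg.norm(J @ x), np.linalg.norm(x))
assert np.isclose((J @ x) @ (J @ y), x @ y)
```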

Powers and Inverse

The exchange matrix $ J_n $, being the permutation matrix associated with the reversal permutation $ \sigma(i) = n+1-i $, satisfies $ J_n^2 = I_n $, where $ I_n $ is the $ n \times n $ identity matrix.[7] This relation holds because composing the reversal permutation with itself yields the identity permutation: reversing the order twice restores the original sequence. Explicitly, the anti-diagonal structure ensures that $ J_n $ maps each standard basis vector $ e_i $ to $ e_{n+1-i} $, and a second application maps $ e_{n+1-i} $ back to $ e_i $.[7] Consequently, the powers of $ J_n $ exhibit a simple periodicity: $ J_n^k = I_n $ for even positive integers $ k $, and $ J_n^k = J_n $ for odd positive integers $ k $. This order-2 cycling distinguishes the exchange matrix from permutation matrices corresponding to higher-order permutations, such as cycles of length greater than 2, whose powers require more steps to return to the identity.[7] The self-inverse property follows directly from the squared identity: $ J_n^{-1} = J_n $, confirming that $ J_n $ is an involution in the group of invertible matrices.[7]
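The period-2 cycling of powers and the self-inverse property can be confirmed numerically; a short NumPy sketch:

```python
import numpy as np

n, J = 5, np.fliplr(np.eye(5))
I = np.eye(n)

# J^k cycles with period 2: the identity for even k, J itself for odd k.
for k in range(1, 9):
    P = np.linalg.matrix_power(J, k)
    assert np.array_equal(P, I if k % 2 == 0 else J)

# Self-inverse: J^{-1} = J.
assert np.allclose(np.linalg.inv(J), J)
```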

Spectral Properties

Eigenvalues and Eigenvectors

The exchange matrix $ J_n \in \mathbb{R}^{n \times n} $, defined as the permutation matrix corresponding to the reversal permutation $ \sigma(i) = n+1-i $, possesses eigenvalues solely of $ +1 $ and $ -1 $. The algebraic multiplicity of $ +1 $ is $ \lceil n/2 \rceil $, while that of $ -1 $ is $ \lfloor n/2 \rfloor $. These values arise from the cycle decomposition of the reversal permutation, which consists of $ \lfloor n/2 \rfloor $ disjoint 2-cycles (each pairing indices $ i $ and $ n+1-i $ for $ i = 1, \dots, \lfloor n/2 \rfloor $) and, when $ n $ is odd, one additional 1-cycle at the central index $ (n+1)/2 $. For a permutation matrix, the eigenvalues are the roots of unity matching the lengths of its cycles: a 2-cycle yields eigenvalues $ +1 $ and $ -1 $, while a 1-cycle yields $ +1 $. Thus, the $ \lfloor n/2 \rfloor $ 2-cycles contribute $ \lfloor n/2 \rfloor $ instances each of $ +1 $ and $ -1 $, augmented by an extra $ +1 $ for odd $ n $.[8] The corresponding eigenspaces reflect the symmetry properties induced by the reversal action. The eigenspace for eigenvalue $ +1 $ comprises all palindromic vectors $ \mathbf{v} $ satisfying $ v_i = v_{n+1-i} $ for $ i = 1, \dots, n $, forming a subspace of dimension $ \lceil n/2 \rceil $ (spanned by basis vectors where the first $ \lceil n/2 \rceil $ components are chosen freely and the remainder are mirrored). The eigenspace for eigenvalue $ -1 $ comprises all skew-palindromic (or anti-palindromic) vectors satisfying $ v_i = -v_{n+1-i} $ for $ i = 1, \dots, n $ (with the central component zero if $ n $ is odd), forming a subspace of dimension $ \lfloor n/2 \rfloor $. These characterizations follow from the fact that $ J_n $ is symmetric and centrosymmetric (commuting with itself), enabling a decomposition of the space into symmetric and skew-symmetric subspaces with respect to reversal. For illustration with $ n=3 $, the exchange matrix is
J_3 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.
To find the eigenvectors, solve $ (J_3 - \lambda I_3) \mathbf{v} = \mathbf{0} $ for each eigenvalue. For $ \lambda = -1 $, the equation $ J_3 \mathbf{v} = -\mathbf{v} $ yields $ v_3 = -v_1 $, $ v_2 = -v_2 $ (implying $ v_2 = 0 $), and $ v_1 = -v_3 $, so a basis vector is $ (1, 0, -1)^T $ (up to scaling); direct verification confirms $ J_3 (1, 0, -1)^T = (-1, 0, 1)^T = -(1, 0, -1)^T $. For $ \lambda = +1 $ (multiplicity 2), the equation $ J_3 \mathbf{v} = \mathbf{v} $ yields $ v_1 = v_3 $ with $ v_2 $ free, so the eigenspace consists of all vectors of the form $ (a, b, a)^T $ for scalars $ a, b $; a basis is $ (1, 0, 1)^T $ and $ (0, 1, 0)^T $, and any palindromic vector such as $ (1, 2, 1)^T $ satisfies $ J_3 (1, 2, 1)^T = (1, 2, 1)^T $.
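These multiplicities can be checked numerically for a range of dimensions. A NumPy sketch; `np.linalg.eigvalsh` returns the eigenvalues of a symmetric matrix in ascending order, so the $-1$s come first:

```python
import numpy as np

# Eigenvalues of J_n: -1 with multiplicity floor(n/2), +1 with ceil(n/2).
for n in range(2, 9):
    J = np.fliplr(np.eye(n))
    w = np.linalg.eigvalsh(J)   # ascending order
    assert np.allclose(w, [-1.0] * (n // 2) + [1.0] * ((n + 1) // 2))

# Palindromic vectors are +1 eigenvectors; anti-palindromic ones, -1.
J3 = np.fliplr(np.eye(3))
assert np.allclose(J3 @ [1, 2, 1], [1, 2, 1])
assert np.allclose(J3 @ [1, 0, -1], [-1, 0, 1])
```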

Trace and Determinant

The trace of the exchange matrix $ J_n $, defined as the sum of its diagonal entries, equals 1 when $ n $ is odd due to the single 1 at the central position $ i = (n+1)/2 $ where the row and anti-diagonal indices coincide, and equals 0 when $ n $ is even as no diagonal entries are 1 in that case. The determinant of $ J_n $ is $ (-1)^{\lfloor n/2 \rfloor} $, which follows from viewing $ J_n $ as the permutation matrix corresponding to the reversal permutation $ \sigma(i) = n+1-i $; the sign of this permutation is determined by the parity of the number of inversions, equal to $ \binom{n}{2} = n(n-1)/2 $, equivalent modulo 2 to $ \lfloor n/2 \rfloor $. Alternatively, this can be computed via cofactor expansion along the first row, yielding a recurrence that confirms the closed form.[9] These scalars align with the spectral properties of $ J_n $: the trace equals the sum of the eigenvalues (with multiplicity), while the determinant equals their product, providing invariants that reflect the matrix's involutory and permutation structure.

Relationships to Other Concepts

As a Permutation Matrix

The exchange matrix $ J_n $, also known as the reversal matrix, belongs to the class of permutation matrices and specifically represents the reversal permutation $ \sigma $, defined by $ \sigma(i) = n + 1 - i $ for $ i = 1, 2, \dots, n $. Like all permutation matrices, $ J_n $ is a square matrix with exactly one 1 in each row and each column, and zeros elsewhere; the entry $ (J_n)_{i,j} = 1 $ if and only if $ j = n + 1 - i $, ensuring it permutes the standard basis vectors by reversing their order.[3][10] The reversal permutation $ \sigma $ can be expressed as a product of $ \lfloor n/2 \rfloor $ disjoint transpositions, each swapping positions $ i $ and $ n + 1 - i $ for $ i = 1 $ to $ \lfloor n/2 \rfloor $. Since each transposition has odd parity, the overall sign of $ \sigma $ (and thus the determinant of $ J_n $) is $ (-1)^{\lfloor n/2 \rfloor} $, making it even when $ \lfloor n/2 \rfloor $ is even and odd otherwise.[10] In contrast to general permutation matrices, which may correspond to arbitrary bijections and often involve longer cycles, the reversal permutation is an involution, satisfying $ \sigma^2 = \mathrm{id} $ (the identity permutation) and hence $ J_n^2 = I_n $. This property arises directly from the reversal action, as applying it twice restores the original order. Regarding fixed points, $ \sigma $ has none when $ n $ is even, but for odd $ n $, there is exactly one fixed point at the middle index $ i = (n+1)/2 $, where $ \sigma(i) = i $.
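The sign of the reversal permutation can be computed from its inversion count and compared with the determinant of $ J_n $; a small sketch (the `reversal_sign` helper is ours, for illustration):

```python
import numpy as np
from itertools import combinations

def reversal_sign(n):
    """Sign of the reversal permutation sigma(i) = n + 1 - i, via inversions."""
    sigma = [n - 1 - i for i in range(n)]   # 0-indexed reversal
    inversions = sum(1 for i, j in combinations(range(n), 2)
                     if sigma[i] > sigma[j])
    return (-1) ** inversions

# The sign equals det(J_n) = (-1)^floor(n/2) for every n.
for n in range(1, 9):
    J = np.fliplr(np.eye(n))
    assert reversal_sign(n) == round(np.linalg.det(J)) == (-1) ** (n // 2)
```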

Connections to Symmetric Matrix Classes

The exchange matrix $ J_n $, also known as the reversal or counteridentity matrix, serves as a fundamental tool for characterizing various classes of symmetric matrices that exhibit reversal-based symmetries. These classes arise from specific commutation or conjugation relations involving $ J_n $, which enforce structural properties invariant under row or column reversals. Such matrices are prevalent in numerical linear algebra and applications requiring symmetry exploitation for efficient computation.[11] Centrosymmetric matrices are defined as square matrices $ A $ satisfying $ J_n A J_n = A $, a condition that implies symmetry under a 180-degree rotation about the matrix center, or equivalently, $ a_{i,j} = a_{n+1-i, n+1-j} $ for all entries. This conjugation with $ J_n $ preserves the matrix structure, making centrosymmetric matrices closed under inversion and multiplication when applicable. The exchange matrix thus generates this class by enforcing bilateral reversal invariance, which simplifies eigenvalue computations and decompositions in structured problems.[12][13] Persymmetric matrices, in contrast, satisfy $ A J_n = J_n A^T $, reflecting symmetry across the anti-diagonal or row-reversal invariance, where $ a_{i,j} = a_{n+1-j, n+1-i} $. This relation highlights the role of $ J_n $ in transposing while reversing, a property that ensures the class is preserved under inversion for nonsingular cases. Persymmetric structures often intersect with Toeplitz matrices, aiding in banded or constant-diagonal analyses.[14][11] Bisymmetric matrices combine centrosymmetry and persymmetry, satisfying both $ J_n A J_n = A $ and $ A J_n = J_n A^T $, or equivalently for symmetric $ A $, $ A = A^T $ and $ J_n A = A J_n $. This commutation property with $ J_n $ implies symmetry about both main and anti-diagonals, forming a subclass where the exchange matrix acts as a symmetry operator. 
The exchange matrix's involutory nature ($ J_n^2 = I $) ensures these classes are well-defined and algebraically tractable, as seen in bisymmetric Toeplitz matrices used in signal processing for symmetric filter design.[11][15][16]
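These defining relations translate directly into simple predicates. A NumPy sketch (the helper functions are ours, for illustration), using a symmetric Toeplitz matrix as a concrete bisymmetric example:

```python
import numpy as np

def J(n):
    return np.fliplr(np.eye(n, dtype=int))

def is_centrosymmetric(A):
    n = A.shape[0]
    return np.array_equal(J(n) @ A @ J(n), A)   # J A J = A

def is_persymmetric(A):
    n = A.shape[0]
    return np.array_equal(A @ J(n), J(n) @ A.T)  # A J = J A^T

# A symmetric Toeplitz matrix is bisymmetric: both conditions hold.
c = [4, 1, 2, 3]
T = np.array([[c[abs(i - j)] for j in range(4)] for i in range(4)])
assert np.array_equal(T, T.T)
assert is_centrosymmetric(T) and is_persymmetric(T)

# A generic matrix is neither.
G = np.arange(16).reshape(4, 4)
assert not is_centrosymmetric(G) and not is_persymmetric(G)
```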

Applications

Row and Column Operations

The exchange matrix $ J_n $, also known as the reversal matrix, facilitates row and column reversals in linear algebra computations by pre- or post-multiplying a given matrix, effectively reordering elements along the anti-diagonal. When $ J_n $ is multiplied on the left of a matrix $ A $, it reverses the rows of $ A $; right multiplication reverses the columns. This operation is particularly useful in algorithms requiring standardized matrix forms without altering the underlying linear structure. In Gaussian elimination, $ J_n $ functions as an elementary-like matrix to achieve full row reversal, equivalent to a sequence of pairwise row interchanges, although standard implementations prefer individual swaps for numerical stability during pivoting to handle zero or small pivots. The determinant of $ J_n $, given by $ \det(J_n) = (-1)^{n(n-1)/2} $, reflects the parity of this sequence of swaps and thus influences the sign change in the overall determinant computation when reversals are applied.[17] For broader matrix manipulations, the exchange matrix enables transformations to symmetrize or standardize structured forms, such as converting a Toeplitz matrix $ T $ to a Hankel matrix via $ J_n T $ (or $ T J_n $), which flips the constant diagonals to constant anti-diagonals and is commonly employed in algorithms for structured linear systems. This reversal aids in exploiting symmetries for efficient solving of systems involving Toeplitz-plus-Hankel matrices.[18][19] Computationally, multiplying a matrix by $ J_n $ to perform reversal incurs only $ O(n^2) $ cost, a copy of the entries in reversed order, allowing efficient implementation in numerical software for large-scale linear algebra tasks without excessive overhead.[20]
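The Toeplitz-to-Hankel conversion can be demonstrated concretely: a Toeplitz matrix has entries depending only on $ i - j $, and reversing its rows with $ J_n $ yields entries depending only on $ i + j $, the defining property of a Hankel matrix. A NumPy sketch:

```python
import numpy as np

n = 4
J = np.fliplr(np.eye(n, dtype=int))
c = np.arange(1, 2 * n)   # the 2n-1 values defining the Toeplitz diagonals
T = np.array([[c[i - j + n - 1] for j in range(n)] for i in range(n)])

# Toeplitz: constant along diagonals (entries depend only on i - j).
assert all(T[i, j] == T[i + 1, j + 1]
           for i in range(n - 1) for j in range(n - 1))

H = J @ T   # row reversal: entries now depend only on i + j, i.e. Hankel
assert all(H[i, j] == H[i + 1, j - 1]
           for i in range(n - 1) for j in range(1, n))
```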

Signal Processing and Reversal

In digital signal processing, the exchange matrix $ J_n $ facilitates time reversal of a discrete-time signal represented as an $ n $-dimensional vector $ \mathbf{x} $, yielding $ J_n \mathbf{x} $ where the components are reordered such that the $ k $-th element becomes $ x(n-1-k) $ for $ k = 0, 1, \dots, n-1 $. This operation implements the signal transformation $ y(k) = x(-k) $ in discrete time, preserving the signal's energy while flipping its temporal sequence around the origin. It is particularly valuable in applications requiring symmetry, such as the design of linear-phase FIR filters, where $ J_n $ enforces the antisymmetric or symmetric properties needed for zero-phase distortion.[21] Additionally, in computing autocorrelation functions for stationary processes, $ J_n $ relates the forward and backward prediction error filters, enabling efficient estimation of the autocorrelation matrix's Toeplitz structure via manipulations like $ R_{xx} = J_n R_{xx} J_n $, where $ R_{xx} $ is the signal's autocorrelation matrix.[22] The exchange matrix extends to polynomial processing, where it reverses the coefficients of a polynomial $ p(z) = \sum_{k=0}^{n-1} a_k z^k $ represented by the vector $ \mathbf{a} = [a_0, a_1, \dots, a_{n-1}]^T $, producing $ J_n \mathbf{a} $ and transforming $ p(z) $ into the reversed form $ z^{n-1} p(1/z) = \sum_{k=0}^{n-1} a_{n-1-k} z^k $. This reversal is essential in polynomial matrix theory for tasks like stabilizing control systems or analyzing reciprocal polynomials in filter design. 
In eigenvalue problems for matrix polynomials, the reversal operation via $ J_n $ helps identify finite and infinite eigenvalues symmetrically, as seen in the definition of the reversal matrix polynomial $ P^R(\lambda) = \lambda^{\deg P} P(1/\lambda) $, which aids in constructing canonical forms.[23] For the two-dimensional case, the exchange matrix $ J_2 $ equals the Pauli X matrix $ \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} $, a building block in quantum mechanics and quantum information processing. A representative application is matched filtering in discrete-time radar or communications systems, where the optimal receiver filter for a known signal $ \mathbf{s} $ uses the time-reversed conjugate $ h = J_n \mathbf{s}^* $ (for real signals, simply $ J_n \mathbf{s} $), correlating the received signal with this reversed template to achieve maximum signal-to-noise ratio at the decision instant. This reversal ensures the filter's impulse response aligns the signal's energy peak with the output sampling time, improving detection probability by concentrating the correlated components.[24]
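The time-reversal and matched-filter ideas above can be sketched in a few lines of NumPy (a toy real-signal example, not a full detector; the zero-padding and seed are arbitrary choices for illustration):

```python
import numpy as np

n = 8
J = np.fliplr(np.eye(n))
rng = np.random.default_rng(1)
s = rng.standard_normal(n)          # known real signal template

# Time reversal via the exchange matrix equals flipping the sample order.
h = J @ s
assert np.allclose(h, s[::-1])

# Matched filtering: convolving with the time-reversed template is the
# same as correlating with the template; at the aligned lag the output
# equals the signal energy s.s.
received = np.concatenate([np.zeros(5), s, np.zeros(5)])
out = np.convolve(received, h)
assert np.isclose(out[5 + n - 1], s @ s)
```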