Modal matrix

from Wikipedia

In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors.[1]

Specifically, the modal matrix $M$ for the matrix $A$ is the n × n matrix formed with the eigenvectors of $A$ as columns in $M$. It is utilized in the similarity transformation

$D = M^{-1}AM,$

where $D$ is an n × n diagonal matrix with the eigenvalues of $A$ on the main diagonal of $D$ and zeros elsewhere. The matrix $D$ is called the spectral matrix for $A$. The eigenvalues must appear left to right, top to bottom in the same order as their corresponding eigenvectors are arranged left to right in $M$.[2]

Example

The matrix $A$

has eigenvalues and corresponding eigenvectors

A diagonal matrix $D$, similar to $A$, is

One possible choice for an invertible matrix $M$ such that $D = M^{-1}AM$ is

[3]

Note that since eigenvectors themselves are not unique, and since the columns of both $M$ and $D$ may be interchanged, it follows that both $M$ and $D$ are not unique.[4]
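A diagonalization of this kind can be sketched numerically. The 2×2 matrix below is a hypothetical stand-in (the article's own example matrix is not reproduced here); `np.linalg.eig` returns the eigenvector columns that form a modal matrix directly, and swapping columns illustrates the non-uniqueness noted above.

```python
import numpy as np

# Hypothetical 2x2 example: columns of M are eigenvectors of A,
# and D = M^{-1} A M is the spectral matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, M = np.linalg.eig(A)      # columns of M are eigenvectors of A
D = np.linalg.inv(M) @ A @ M       # similarity transformation

# D is diagonal, with eigenvalues ordered to match the columns of M.
assert np.allclose(D, np.diag(eigvals))

# Neither M nor D is unique: swapping the columns of M (and the
# corresponding eigenvalues in D) gives another valid modal matrix.
M2 = M[:, ::-1]
D2 = np.linalg.inv(M2) @ A @ M2
assert np.allclose(D2, np.diag(eigvals[::-1]))
```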

Generalized modal matrix

Let $A$ be an n × n matrix. A generalized modal matrix $M$ for $A$ is an n × n matrix whose columns, considered as vectors, form a canonical basis for $A$ and appear in $M$ according to the following rules:

  • All Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of $M$.
  • All vectors of one chain appear together in adjacent columns of $M$.
  • Each chain appears in $M$ in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, etc.).[5]

One can show that

$AM = MJ, \quad (1)$

where $J$ is a matrix in Jordan normal form. By premultiplying by $M^{-1}$, we obtain

$J = M^{-1}AM. \quad (2)$

Note that when computing these matrices, equation (1) is the easiest of the two equations to verify, since it does not require inverting a matrix.[6]
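The inversion-free check of equation (1) can be sketched numerically. The defective 2×2 matrix below is a hypothetical illustration (not the example that follows): it has a single eigenvalue 3 and one Jordan chain of length two.

```python
import numpy as np

# Hypothetical defective matrix: eigenvalue 3 with algebraic
# multiplicity 2 but only a one-dimensional eigenspace.
A = np.array([[5.0, 4.0],
              [-1.0, 1.0]])

v1 = np.array([2.0, -1.0])          # eigenvector: (A - 3I) v1 = 0
v2 = np.array([1.0, 0.0])           # generalized: (A - 3I) v2 = v1

M = np.column_stack([v1, v2])       # chain in order of increasing rank
J = np.array([[3.0, 1.0],
              [0.0, 3.0]])          # a single Jordan block

# Equation (1), A M = M J, holds and needs no matrix inversion to verify.
assert np.allclose(A @ M, M @ J)
```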

Example

This example illustrates a generalized modal matrix with four Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order.[7] The matrix $A$

has a single eigenvalue $\lambda$ with algebraic multiplicity $\mu = 7$. A canonical basis for $A$ will consist of one linearly independent generalized eigenvector of rank 3 (see generalized eigenvector), two of rank 2, and four of rank 1; or equivalently, one chain of three vectors, one chain of two vectors, and two chains of one vector.

An "almost diagonal" matrix $J$ in Jordan normal form, similar to $A$, is obtained as follows:

where $M$ is a generalized modal matrix for $A$, the columns of $M$ are a canonical basis for $A$, and $AM = MJ$.[8] Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both $M$ and $J$ may be interchanged, it follows that both $M$ and $J$ are not unique.[9]

from Grokipedia
In linear algebra, a modal matrix is an $ n \times n $ matrix whose columns consist of the eigenvectors of a square matrix $ A $, enabling the diagonalization of $ A $ through the similarity transformation $ A = M \Lambda M^{-1} $, where $ \Lambda $ is the diagonal matrix of eigenvalues and $ M $ is the modal matrix.[1] This construction assumes $ A $ is diagonalizable, with eigenvectors satisfying $ A x_i = \lambda_i x_i $ for each eigenvalue $ \lambda_i $, and allows for the simplification of matrix powers and exponentials, such as in solving differential equations.[1]

In control theory, the modal matrix plays a crucial role in analyzing linear time-invariant systems described by state-space equations $ \dot{x} = A x + B u $, where it decouples the system dynamics into independent modal coordinates via the transformation $ x = M z $, yielding $ \dot{z} = \Lambda z + M^{-1} B u $.[2] This facilitates the computation of the state-transition matrix $ \Phi(t) = M e^{\Lambda t} M^{-1} $ and assessment of stability, as the eigenvalues in $ \Lambda $ determine whether the system is asymptotically stable (all real parts negative).[2]

In structural dynamics and vibration analysis, the modal matrix comprises mass-normalized mode shapes (eigenvectors) of the system's stiffness and mass matrices, transforming coupled equations of motion into uncoupled modal equations for free vibration: $ \ddot{q}_i + \omega_i^2 q_i = 0 $, where $ q_i $ are modal coordinates and $ \omega_i $ are natural frequencies.[3] This orthogonality property, satisfying $ \Phi^T M \Phi = I $ and $ \Phi^T K \Phi = \Omega^2 $, reduces computational complexity in simulating multi-degree-of-freedom systems, such as in finite element analysis for engineering structures.[3]

Fundamentals

Definition

In linear algebra, a modal matrix is a square matrix constructed from the eigenvectors of a diagonalizable matrix, facilitating its diagonalization through a similarity transformation. For a diagonalizable $ n \times n $ matrix $ A $ over the complex numbers, with distinct or repeated eigenvalues $ \lambda_1, \dots, \lambda_n $ and corresponding eigenvectors $ \mathbf{v}_1, \dots, \mathbf{v}_n $, the modal matrix $ P $ is defined as the $ n \times n $ matrix whose columns are these eigenvectors: $ P = [\mathbf{v}_1 \ \mathbf{v}_2 \ \dots \ \mathbf{v}_n] $. This matrix satisfies the relation
$ P^{-1} A P = D, $
where $ D = \operatorname{diag}(\lambda_1, \dots, \lambda_n) $ is the diagonal matrix containing the eigenvalues on its main diagonal.[4] The existence of such a modal matrix presupposes that $ A $ is diagonalizable, meaning it possesses a full set of $ n $ linearly independent eigenvectors spanning the $ n $-dimensional vector space.[5] Without this linear independence, $ A $ cannot be diagonalized, and no modal matrix exists in this form. Over the complex numbers, every square matrix has at least one eigenvalue by the fundamental theorem of algebra, but diagonalizability depends on the geometric multiplicity matching the algebraic multiplicity for each eigenvalue.[5] Notation for the modal matrix conventionally uses $ P $ in many linear algebra contexts, distinguishing it from other transformation matrices, though in engineering applications it is often denoted by $ \Phi $ to emphasize mode shapes.[6] The term "modal matrix" originates from modal analysis in structural engineering and vibrations, where columns represent natural modes of a system, but the concept applies broadly to any diagonalizable matrix in linear algebra.[6]

Basic Properties

The modal matrix $ P $, whose columns are the eigenvectors of a diagonalizable matrix $ A $, is invertible if and only if these eigenvectors are linearly independent, ensuring the existence of $ P^{-1} $ for the diagonalization $ A = P D P^{-1} $, where $ D $ is the diagonal matrix of eigenvalues.[5] This invertibility holds precisely for diagonalizable matrices, as the linear independence of $ n $ eigenvectors in $ \mathbb{R}^n $ or $ \mathbb{C}^n $ guarantees a full basis for the vector space.[7] The columns of $ P $ corresponding to a given eigenvalue span the eigenspace associated with that eigenvalue, providing a basis for the subspace of vectors scaled by that eigenvalue under $ A $. Each such column can be scaled by an arbitrary nonzero scalar without altering the diagonalization property, since eigenvectors are defined only up to scaling; however, normalization, often to unit length ($ \|\mathbf{v}_i\| = 1 $), is commonly applied to standardize the matrix and simplify computations.[8] In symmetric cases, orthogonal normalization further ensures $ P $ is unitary, but this is a special instance rather than a general requirement.[9] A key advantage of the modal matrix arises in exponentiation: for any positive integer $ k $, $ A^k = P D^k P^{-1} $, where $ D^k $ is the diagonal matrix with entries raised to the $ k $-th power, decoupling the computation into simple scalar operations on the eigenvalues. The modal matrix is not unique; different choices of scaling for individual columns or permutations of column order (matching the corresponding eigenvalues in $ D $) yield equivalent diagonalizations, though the diagonal form $ D $ itself is unique up to permutation of its entries.[5]
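The power identity $ A^k = P D^k P^{-1} $ can be sketched in a few lines; the symmetric 2×2 matrix below is a hypothetical example chosen only so the eigenvalues are simple.

```python
import numpy as np

# Hypothetical diagonalizable matrix (eigenvalues 3 and 1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)

# A^k via the modal matrix: only scalar powers of the eigenvalues
# are needed once P and P^{-1} are known.
k = 5
Ak = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)

assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```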

Diagonalization Process

Construction of Modal Matrix

The construction of a modal matrix $ P $ for a diagonalizable square matrix $ A \in \mathbb{R}^{n \times n} $ involves systematically identifying its eigenvalues and corresponding eigenvectors to form the columns of $ P $. This process assumes that $ A $ admits a full set of $ n $ linearly independent eigenvectors, ensuring $ P $ is invertible.[10] The first step is to compute the eigenvalues by solving the characteristic equation $ \det(A - \lambda I) = 0 $, where $ I $ is the identity matrix. This yields the eigenvalues $ \lambda_1, \lambda_2, \dots, \lambda_n $, which may include multiplicities. The characteristic polynomial is typically found by expanding the determinant, and its roots can be obtained analytically for small $ n $ or numerically for larger matrices.[10] For each distinct eigenvalue $ \lambda_i $, the next step is to determine the corresponding eigenvectors by solving the homogeneous system $ (A - \lambda_i I)v = 0 $ for nonzero vectors $ v $. This equation defines the eigenspace, and a basis for it consists of linearly independent eigenvectors associated with $ \lambda_i $. One or more eigenvectors are selected per eigenvalue, scaled arbitrarily (often to unit length for normalization), to span the eigenspace.[10] In the case of repeated eigenvalues, where the algebraic multiplicity exceeds one, the matrix $ A $ remains diagonalizable if the geometric multiplicity (dimension of the eigenspace) equals the algebraic multiplicity. Under this condition, a set of linearly independent eigenvectors equal in number to the multiplicity can be found for that eigenvalue, allowing their inclusion as separate columns in $ P $. If the geometric multiplicity is lower, $ A $ is not diagonalizable, and a modal matrix cannot be constructed in the standard sense.[7] The final step is to assemble the modal matrix $ P $ by arranging the selected eigenvectors as columns: $ P = [v_1 \mid v_2 \mid \dots \mid v_n] $. The columns must be verified to be linearly independent, which is guaranteed if $ A $ is diagonalizable. This ensures $ P $ transforms $ A $ into diagonal form via similarity.[10]

In practice, especially for large matrices, numerical software is employed to compute eigenvalues and eigenvectors reliably. For instance, MATLAB's eig function computes both: [V, D] = eig(A) returns $ D $ as the diagonal matrix of eigenvalues and $ V $ as the modal matrix with eigenvectors as columns. This method uses algorithms like the QZ algorithm for generalized problems but applies directly to standard eigenvalue decomposition, handling numerical stability and repeated roots.[11]
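A NumPy analogue of the MATLAB call above is `np.linalg.eig`. The 3×3 matrix below is a hypothetical example with a repeated eigenvalue whose geometric multiplicity matches its algebraic multiplicity, so the construction succeeds; the rank check mirrors the linear-independence verification described in the text.

```python
import numpy as np

# Hypothetical example: eigenvalue 2 is repeated, but its eigenspace
# is two-dimensional, so A is still diagonalizable.
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])

# NumPy analogue of MATLAB's [V, D] = eig(A): columns of V are
# unit-norm eigenvectors, w holds the matching eigenvalues.
w, V = np.linalg.eig(A)
D = np.diag(w)

# A is diagonalizable iff the eigenvector columns are linearly independent.
assert np.linalg.matrix_rank(V) == A.shape[0]
assert np.allclose(V @ D @ np.linalg.inv(V), A)
```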

Similarity Transformation

The similarity transformation facilitated by the modal matrix $ P $, whose columns are the eigenvectors of a diagonalizable matrix $ A $, allows $ A $ to be expressed in a diagonal form. Specifically, if $ A \mathbf{v}_i = \lambda_i \mathbf{v}_i $ for each eigenvector $ \mathbf{v}_i $ and corresponding eigenvalue $ \lambda_i $, then multiplying $ A $ by $ P $ yields $ A P = P D $, where $ D $ is the diagonal matrix with $ \operatorname{diag}(D) = (\lambda_1, \dots, \lambda_n) $.[4] This relation follows directly from the eigenvector equation applied column-wise to $ P $. Premultiplying both sides by $ P^{-1} $ (which exists since the eigenvectors are linearly independent for diagonalizable $ A $) gives the core diagonalization equation $ A = P D P^{-1} $. Equivalently, the inverse similarity transformation $ P^{-1} A P = D $ represents $ A $ in the eigenbasis defined by the columns of $ P $. The inverse $ P^{-1} $ can be computed using Gaussian elimination on the augmented matrix $ [P \mid I] $ or via the adjugate formula $ P^{-1} = \frac{1}{\det(P)} \operatorname{adj}(P) $, though numerical stability favors the former for large matrices.[4] This transformation interprets the columns of $ P $ as a change of basis in which the linear operator corresponding to $ A $ acts diagonally, scaling each basis vector by its associated eigenvalue. Verification of the diagonalization involves direct matrix multiplication: $ P D P^{-1} $ should recover the original $ A $, confirming the transformation's accuracy. A key implication of this similarity is the simplification of matrix functions. For any analytic function $ f $, $ f(A) = P f(D) P^{-1} $, where $ f(D) $ is diagonal with entries $ f(\lambda_i) $; this is particularly useful for computing exponentials $ e^{At} $ or polynomials in control systems and dynamical analysis, reducing complexity from $ O(n^3) $ to $ O(n) $ operations after the initial transformation.[4]
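The function identity $ f(A) = P f(D) P^{-1} $ can be sketched for $ f = \exp $; the 2×2 matrix below is a hypothetical example with real eigenvalues, and the result is cross-checked against a truncated Taylor series rather than a library matrix exponential.

```python
import math
import numpy as np

# Hypothetical matrix with eigenvalues -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
w, P = np.linalg.eig(A)
t = 0.5

# f(A) = P f(D) P^{-1} with f = exp applied entrywise to the eigenvalues.
eAt = P @ np.diag(np.exp(w * t)) @ np.linalg.inv(P)

# Cross-check against a truncated Taylor series of e^{At}.
taylor = sum(np.linalg.matrix_power(A * t, k) / math.factorial(k)
             for k in range(20))
assert np.allclose(eAt, taylor)
```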

Illustrative Example

Consider the matrix $ A = \begin{pmatrix} 6 & -1 \\ 2 & 3 \end{pmatrix} $. To construct its modal matrix, first compute the characteristic polynomial $ \det(A - \lambda I) = (\lambda - 5)(\lambda - 4) = 0 $, yielding eigenvalues $ \lambda_1 = 5 $ and $ \lambda_2 = 4 $.[7] The corresponding eigenvectors are found by solving $ (A - \lambda_i I) \mathbf{v}_i = 0 $. For $ \lambda_1 = 5 $, the eigenvector is $ \mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} $. For $ \lambda_2 = 4 $, the eigenvector is $ \mathbf{v}_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} $.[7] The modal matrix $ P $ is formed by taking these eigenvectors as columns: $ P = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} $. The inverse is $ P^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix} $, since $ \det(P) = 1 $. The diagonal matrix is $ D = \begin{pmatrix} 5 & 0 \\ 0 & 4 \end{pmatrix} $.[7] To verify the diagonalization, compute $ P D P^{-1} $:

$ P D = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} 5 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 5 & 4 \\ 5 & 8 \end{pmatrix}, $

$ P D P^{-1} = \begin{pmatrix} 5 & 4 \\ 5 & 8 \end{pmatrix} \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 10 - 4 & -5 + 4 \\ 10 - 8 & -5 + 8 \end{pmatrix} = \begin{pmatrix} 6 & -1 \\ 2 & 3 \end{pmatrix} = A. $

This confirms the similarity transformation.[7] The modal form simplifies powers of $ A $. Direct computation gives $ A^2 = \begin{pmatrix} 34 & -9 \\ 18 & 7 \end{pmatrix} $. Using the modal decomposition, $ A^2 = P D^2 P^{-1} $, where $ D^2 = \begin{pmatrix} 25 & 0 \\ 0 & 16 \end{pmatrix} $:

$ P D^2 = \begin{pmatrix} 25 & 16 \\ 25 & 32 \end{pmatrix}, \quad P D^2 P^{-1} = \begin{pmatrix} 25 & 16 \\ 25 & 32 \end{pmatrix} \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 50 - 16 & -25 + 16 \\ 50 - 32 & -25 + 32 \end{pmatrix} = \begin{pmatrix} 34 & -9 \\ 18 & 7 \end{pmatrix}. $

This matches the direct result, illustrating the computational efficiency for higher powers.[7]
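The worked example can be reproduced numerically with the same matrices:

```python
import numpy as np

# The example from the text: A = P D P^{-1} with eigenvectors
# (1, 1) and (1, 2) as the columns of P.
A = np.array([[6.0, -1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])
D = np.diag([5.0, 4.0])
P_inv = np.array([[2.0, -1.0],
                  [-1.0, 1.0]])     # det(P) = 1

assert np.allclose(P @ D @ P_inv, A)                      # diagonalization
assert np.allclose(P @ np.diag([25.0, 16.0]) @ P_inv,
                   np.array([[34.0, -9.0],
                             [18.0, 7.0]]))               # A^2 = P D^2 P^{-1}
```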

Extensions

Generalized Modal Matrix

In the context of square matrices over an algebraically closed field where, for at least one eigenvalue, the algebraic multiplicity exceeds the geometric multiplicity, the matrix is not diagonalizable by a modal matrix of standard eigenvectors alone. This situation necessitates the Jordan canonical form, and the generalized modal matrix provides the similarity transformation to achieve it.[12] A generalized modal matrix $ P $ for an $ n \times n $ matrix $ A $ is defined as an invertible matrix whose columns form a basis of generalized eigenvectors for $ A $, specifically arranged into chains corresponding to the Jordan blocks in the Jordan canonical form. To construct these chains for a Jordan block of size $ k $ associated with eigenvalue $ \lambda $, begin with a standard eigenvector $ v_1 $ satisfying

$ (A - \lambda I) v_1 = 0, $

where $ \lambda I $ is the scalar multiple of the identity matrix. Then, for each subsequent $ m = 2, \dots, k $, solve the equation

$ (A - \lambda I) v_m = v_{m-1} $

to obtain $ v_m $, ensuring the chain $ \{v_1, v_2, \dots, v_k\} $ lies within the generalized eigenspace for $ \lambda $ and maintains linear independence. The number and sizes of such chains are determined by the dimensions of the kernels of powers of $ (A - \lambda I) $.[12] Assembling the chains for all eigenvalues into the columns of $ P $ yields the decomposition $ A = P J P^{-1} $, where $ J $ is the Jordan canonical form, a block-diagonal matrix with Jordan blocks along the diagonal. In contrast to the standard modal matrix, where every column is an eigenvector, the generalized modal matrix includes higher-order generalized eigenvectors beyond the chain starters; yet, a complete set of chains guarantees that $ P $ is invertible and spans the entire space.

Jordan Modal Matrix

The Jordan canonical form of a square matrix $ A $ over the complex numbers consists of Jordan blocks arranged on the block diagonal, where each Jordan block $ J_k(\lambda) $ is an upper triangular matrix given by $ J_k(\lambda) = \lambda I_k + N_k $. Here, $ I_k $ is the $ k \times k $ identity matrix, and $ N_k $ is the nilpotent matrix with 1's on the superdiagonal and zeros elsewhere, satisfying $ N_k^k = 0 $ but $ N_k^{k-1} \neq 0 $. This form generalizes the diagonal case to handle defective matrices where the geometric multiplicity is less than the algebraic multiplicity of an eigenvalue.[13] In the Jordan modal matrix context, the modal matrix $ P $ is an invertible matrix whose columns form chains of generalized eigenvectors, one chain per Jordan block. For a Jordan block of size $ k $ corresponding to eigenvalue $ \lambda $, the chain consists of vectors $ v_1, v_2, \dots, v_k $, where $ v_1 $ is an eigenvector satisfying $ (A - \lambda I) v_1 = 0 $, and each subsequent vector satisfies $ (A - \lambda I) v_i = v_{i-1} $ for $ i = 2, \dots, k $. The columns of $ P $ are ordered by grouping these chains for each block, with each chain listed in order of increasing rank, starting from its eigenvector. This structure ensures that the action of $ A $ on the basis translates to the shift-and-scale behavior in the Jordan form.[14] The complete decomposition is $ A = P J P^{-1} $, where $ J $ is the block-diagonal Jordan canonical form with the specified blocks. This similarity transformation decouples the system into independent Jordan chains, facilitating analysis of dynamics even for non-diagonalizable matrices. For instance, consider the matrix

$ A = \begin{pmatrix} 4 & -2 & 5 \\ -2 & 4 & -3 \\ 0 & 0 & 2 \end{pmatrix}. $

The eigenvalues are $ \lambda = 6 $ (multiplicity 1) and $ \lambda = 2 $ (multiplicity 2). For $ \lambda = 2 $, an eigenvector is $ v_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} $, and a generalized eigenvector $ v_2 $ satisfies $ (A - 2I) v_2 = v_1 $, yielding $ v_2 = \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} $. For $ \lambda = 6 $, the eigenvector is $ v_3 = \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} $. The modal matrix is

$ P = \begin{pmatrix} 1 & 1 & -1 \\ -1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}, $

and the Jordan form is

$ J = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}, $

verifying $ A = P J P^{-1} $. For the simple defective case $ A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} $, which is already in Jordan form, $ P $ is the identity matrix, with chain $ v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} $ (eigenvector) and $ v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} $ satisfying $ (A - \lambda I) v_2 = v_1 $.[15] The Jordan canonical form $ J $ is unique up to the ordering of the blocks, while the modal matrix $ P $ is unique up to scaling of vectors within each chain and reordering of chains for the same eigenvalue.[16]

Applications

In Linear Systems

In linear systems theory, modal matrices play a central role in solving systems of linear ordinary differential equations of the form $ \dot{x} = A x $, where $ A $ is an $ n \times n $ constant matrix and $ x $ is the state vector. Assuming $ A $ is diagonalizable, the modal matrix $ P $ consists of the right eigenvectors of $ A $ as its columns, enabling a similarity transformation $ A = P D P^{-1} $ where $ D $ is diagonal with eigenvalues $ \lambda_i $ on the diagonal. The solution is then $ x(t) = P e^{D t} P^{-1} x(0) $, where $ e^{D t} = \operatorname{diag}(e^{\lambda_1 t}, \dots, e^{\lambda_n t}) $, decoupling the system into independent scalar equations $ \dot{z}_i = \lambda_i z_i $ in the modal coordinates $ z = P^{-1} x $.[17] Modal decomposition further interprets this solution as a superposition of modes: $ x(t) = \sum_{i=1}^n c_i e^{\lambda_i t} v_i $, where $ v_i $ is the $ i $-th column of $ P $ (the eigenvector) and $ c_i $ are coefficients determined by initial conditions. Each column of $ P $ thus corresponds to a mode, with the time evolution $ e^{\lambda_i t} $ governing the amplitude and the eigenvector $ v_i $ defining the mode shape, which describes the relative contributions of each state variable to that mode. This decomposition reveals the intrinsic dynamics of the system, such as exponential growth for $ \operatorname{Re}(\lambda_i) > 0 $ or oscillatory behavior for complex conjugate pairs.[18] In control theory, for state-space models $ \dot{x} = A x + B u $, $ y = C x + D u $, the modal matrix facilitates analysis of controllability and observability by transforming the system into modal coordinates $ \tilde{x} = P^{-1} x $, yielding $ \dot{\tilde{x}} = D \tilde{x} + (P^{-1} B) u $ and $ y = (C P) \tilde{x} + D u $.

The system is controllable if the matrix $ P^{-1} B $ has no zero rows (ensuring every mode can be influenced by the input) and observable if $ C P $ has no zero columns (ensuring every mode affects the output); these conditions align with the full rank of the standard controllability and observability matrices. Additionally, state feedback $ u = -K x $ can be designed in modal coordinates to decouple the modes, allowing independent control of each via diagonal gain matrices.[18] The eigenvalues of $ A $ represent the open-loop poles of the system, dictating its natural response, and the modal matrix aids pole placement by enabling feedback designs that assign desired closed-loop poles while preserving mode shapes. For multi-input controllable systems, linear state feedback exists to place the poles at arbitrary locations, as established in foundational work on pole assignment. This technique is crucial for achieving specified performance, such as faster response or reduced overshoot.[19] Stability analysis leverages the modal decomposition: the system is asymptotically stable if all $ \operatorname{Re}(\lambda_i) < 0 $, with the modal matrix providing insight into mode shapes to identify dominant or unstable directions in the state space. Unstable modes ($ \operatorname{Re}(\lambda_i) \geq 0 $) can be stabilized via feedback that shifts those poles into the left half-plane, while the eigenvectors reveal the subspaces associated with each mode's stability properties.[17]
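The modal solution $ x(t) = P e^{Dt} P^{-1} x(0) $ and the stability criterion can be sketched together; the second-order system below is a hypothetical stable example with eigenvalues $-1$ and $-2$.

```python
import numpy as np

# Hypothetical stable system: eigenvalues of A are -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
w, P = np.linalg.eig(A)
x0 = np.array([1.0, 0.0])

def x(t):
    # Modal solution x(t) = P e^{Dt} P^{-1} x(0).
    return (P @ np.diag(np.exp(w * t)) @ np.linalg.inv(P) @ x0).real

# Asymptotic stability: all eigenvalues have negative real part,
# so the state decays toward the origin.
assert np.all(w.real < 0)
assert np.linalg.norm(x(10.0)) < 1e-3
```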

In Mechanical Vibrations

In the analysis of multi-degree-of-freedom (MDOF) undamped mechanical systems, the equations of motion are expressed as $ M \ddot{q} + K q = 0 $, where $ M $ is the symmetric positive-definite mass matrix, $ K $ is the symmetric positive semi-definite stiffness matrix, and $ q $ is the displacement vector.[20] Assuming a solution of the form $ q = \phi e^{i \omega t} $, where $ \phi $ is the mode shape vector and $ \omega $ is the natural frequency, substitution yields the generalized eigenvalue problem $ (K - \omega^2 M) \phi = 0 $.[21] This problem determines the natural frequencies $ \omega_i $ (as square roots of the eigenvalues) and corresponding mode shapes $ \phi_i $ (as eigenvectors), which describe the system's oscillatory patterns.[22] The modal matrix $ \Phi $ is constructed with the mode shapes as its columns: $ \Phi = [\phi_1 , \phi_2 , \dots , \phi_n] $, where $ n $ is the number of degrees of freedom.[20] These mode shapes represent the independent vibrational modes of the system, each associated with a distinct natural frequency $ \omega_i $. Due to the symmetry of $ M $ and $ K $, the eigenvectors satisfy orthogonality conditions with respect to these matrices: $ \phi_r^T M \phi_s = 0 $ and $ \phi_r^T K \phi_s = 0 $ for $ r \neq s $.[22] Consequently, the transformed matrices are diagonal: $ \Phi^T M \Phi = \text{diag}(m_1, m_2, \dots, m_n) $ and $ \Phi^T K \Phi = \text{diag}(k_1, k_2, \dots, k_n) $, where $ k_i = \omega_i^2 m_i $.[21] This orthogonality simplifies computations and highlights the uncoupled nature of the modes. To decouple the equations of motion, a similarity transformation is applied using the modal matrix: introduce modal coordinates $ \eta = \Phi^{-1} q $, so $ q = \Phi \eta $. 
Substituting into the original equation and premultiplying by $ \Phi^T $ yields the decoupled set $ \ddot{\eta}_i + \omega_i^2 \eta_i = 0 $ for each mode $ i $, where each equation behaves as an independent single-degree-of-freedom oscillator.[20] The general solution is then a linear superposition $ q(t) = \Phi \eta(t) $, with $ \eta_i(t) = A_i \cos(\omega_i t) + B_i \sin(\omega_i t) $, determined by initial conditions.[22] For systems with viscous damping, the modal matrix from the undamped problem applies when the damping is proportional, meaning the damping matrix $ C $ can be expressed as $ C = \alpha M + \beta K $ for scalars $ \alpha $ and $ \beta $. In this case, the same $ \Phi $ diagonalizes $ C $, resulting in decoupled modal equations $ \ddot{\eta}_i + 2 \zeta_i \omega_i \dot{\eta}_i + \omega_i^2 \eta_i = 0 $, where $ \zeta_i $ is the modal damping ratio.[20] This extension preserves the utility of modal analysis for lightly damped structures, such as in aerospace and civil engineering applications.