
Nonnegative matrix

from Wikipedia

In mathematics, a nonnegative matrix, written $X \geq 0$, is a matrix in which all the elements are equal to or greater than zero, that is, $x_{ij} \geq 0$ for all $i, j$.

A positive matrix is a matrix in which all the elements are strictly greater than zero. The set of positive matrices is the interior of the set of all non-negative matrices. While such matrices are commonly found, the term "positive matrix" is only occasionally used due to the possible confusion with positive-definite matrices, which are different. A matrix which is both non-negative and positive semidefinite is called a doubly non-negative matrix.

A rectangular non-negative matrix can be approximated by a decomposition with two other non-negative matrices via non-negative matrix factorization.
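
To make this concrete, here is a minimal sketch of such a factorization using scikit-learn's `NMF` class; the data matrix, rank, and solver parameters below are illustrative assumptions rather than part of the article:

```python
import numpy as np
from sklearn.decomposition import NMF

# A rectangular nonnegative matrix to factor (values are illustrative).
X = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 1.0],
              [1.5, 0.0, 0.5],
              [0.5, 1.0, 2.0]])

# Approximate X ~ W @ H with W >= 0 (4x2) and H >= 0 (2x3).
model = NMF(n_components=2, init="random", random_state=0, max_iter=1000)
W = model.fit_transform(X)
H = model.components_

assert (W >= 0).all() and (H >= 0).all()
print("reconstruction error:", np.linalg.norm(X - W @ H))
```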

Eigenvalues and eigenvectors of square positive matrices are described by the Perron–Frobenius theorem.

Properties

  • The trace and every row and column sum/product of a nonnegative matrix are nonnegative.

Inversion


The inverse of any non-singular M-matrix is a non-negative matrix. If the non-singular M-matrix is also symmetric then it is called a Stieltjes matrix.

The inverse of a non-negative matrix is usually not non-negative. The exception is the non-negative monomial matrices: a non-negative matrix has a non-negative inverse if and only if it is a (non-negative) monomial matrix. It follows that the inverse of a positive matrix is neither positive nor even non-negative, since positive matrices are not monomial for dimension n > 1.

Specializations


Several classes of matrices form specializations of non-negative matrices, e.g. stochastic matrices, doubly stochastic matrices, and symmetric non-negative matrices.

from Grokipedia
A nonnegative matrix is a matrix whose entries are all real numbers greater than or equal to zero.[1] This article primarily concerns square nonnegative matrices.[2] These matrices form a fundamental class in linear algebra, generalizing positive matrices (where all entries are strictly positive) and playing a central role in the study of spectral theory, graph theory, and applied mathematics.[2] Key subclasses include irreducible nonnegative matrices, which cannot be permuted into a block upper triangular form with more than one block, and primitive matrices, for which some power has all positive entries.[3]

The spectral properties of nonnegative matrices are governed by the Perron-Frobenius theorem, first established by Oskar Perron in 1907 for positive matrices and extended by Ferdinand Georg Frobenius in 1912 to irreducible nonnegative matrices.[2] For an irreducible nonnegative matrix $A$, the theorem asserts that the spectral radius $\rho(A)$ is a positive real eigenvalue of algebraic and geometric multiplicity one, with a corresponding positive eigenvector; all other eigenvalues $\lambda$ satisfy $|\lambda| \leq \rho(A)$. For primitive matrices, the strict inequality $|\lambda| < \rho(A)$ holds, and the powers $A^m$ normalized by $\rho(A)^m$ converge to a rank-one matrix of the form $\mathbf{u}\mathbf{v}^T$, where $\mathbf{u}$ is the right Perron eigenvector and $\mathbf{v}$ the left Perron eigenvector (normalized such that $\mathbf{v}^T \mathbf{u} = 1$); the columns of this limit are multiples of $\mathbf{u}$.[2][3]

Nonnegative matrices arise naturally in modeling phenomena involving nonnegative quantities, such as population dynamics in Leslie matrices, where the dominant eigenvalue determines growth rates.[2] They also underpin Markov chains through stochastic matrices (nonnegative with row or column sums equal to one), whose stationary distributions are given by the Perron eigenvector.[3] In computer science, the Google PageRank algorithm employs a primitive stochastic matrix to rank web pages, relying on the convergence guaranteed by the theorem.[2] Further applications include ranking in tournaments and congestion control in TCP protocols.[3]

Fundamentals

Definition

A nonnegative matrix is a matrix whose entries are nonnegative real numbers. Formally, an $n \times m$ matrix $A = (a_{ij})$ is nonnegative if $a_{ij} \geq 0$ for all $i = 1, \dots, n$ and $j = 1, \dots, m$.[4] The notation $A \geq 0$ is commonly used to indicate that $A$ is nonnegative, with the inequality understood entrywise; this extends to the partial ordering on matrices where $A \geq B$ if and only if $a_{ij} \geq b_{ij}$ for all $i, j$.[4][5] When the matrix is square ($n = m$), it is often simply called a nonnegative square matrix. A related class is the positive matrix, defined as a matrix with all entries strictly positive ($a_{ij} > 0$ for all $i, j$).[6] The study of nonnegative matrices originated in the context of the Perron-Frobenius theorem, introduced by Oskar Perron in 1907 for positive matrices.[7]
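
The entrywise definition and partial order translate directly into code. A minimal sketch in Python with NumPy (the helper names are invented for illustration):

```python
import numpy as np

def is_nonnegative(A: np.ndarray) -> bool:
    """Entrywise test A >= 0, straight from the definition."""
    return bool((A >= 0).all())

def entrywise_geq(A: np.ndarray, B: np.ndarray) -> bool:
    """Partial order A >= B, meaning a_ij >= b_ij for all i, j."""
    return bool((A >= B).all())

A = np.array([[0.0, 2.0], [1.0, 1.0]])
B = np.zeros((2, 2))
print(is_nonnegative(A))    # True
print(entrywise_geq(A, B))  # True: A >= 0 entrywise
```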

Basic Examples

A fundamental example of a nonnegative matrix is the identity matrix scaled by a nonnegative scalar $c \geq 0$. For $n = 2$, the matrix $cI_2 = \begin{pmatrix} c & 0 \\ 0 & c \end{pmatrix}$ has all entries nonnegative, satisfying the definition directly.[8] Another common instance arises as the adjacency matrix of a directed graph whose edge weights are nonnegative real numbers. Consider the 2×2 matrix $A = \begin{pmatrix} 0 & 2 \\ 1 & 1 \end{pmatrix}$, which encodes a graph with a directed edge of weight 2 from vertex 1 to vertex 2, an edge of weight 1 from vertex 2 to vertex 1, and a self-loop of weight 1 at vertex 2; all entries are nonnegative.[9] For a positive matrix, where every entry exceeds zero, the all-ones matrix $J_n$ provides a clear case. The 2×2 version is $J_2 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, with all entries positive.[8] Nonnegative matrices need not be square; a non-square example is the incidence matrix of a bipartite graph, which records the presence (1) or absence (0) of edges between partite sets. For a bipartite graph with partite sets of sizes 2 and 3, with edges connecting vertex 1 to the first two vertices of the second set and vertex 2 to the last two, the resulting 2×3 matrix is $\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$, featuring only nonnegative entries.[10]
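
As a sketch of how such an adjacency matrix is used, the following NumPy snippet builds the 2×2 example above and illustrates the standard fact that entry $(i, j)$ of $A^k$ accumulates the weights of directed walks of length $k$ from $i$ to $j$:

```python
import numpy as np

# Weighted adjacency matrix from the example: edge 1->2 of weight 2,
# edge 2->1 of weight 1, and a self-loop of weight 1 at vertex 2.
A = np.array([[0.0, 2.0],
              [1.0, 1.0]])

# Entry (i, j) of A^k sums, over all directed walks of length k from
# i to j, the product of the edge weights along the walk.
print(np.linalg.matrix_power(A, 2))
# [[2. 2.]
#  [1. 3.]]
```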

Algebraic Properties

Sums and Products

A fundamental property of nonnegative matrices is their closure under entrywise addition and nonnegative scalar multiplication. If $ A $ and $ B $ are $ m \times n $ nonnegative matrices, then each entry of $ A + B $ is the sum of two nonnegative real numbers and hence nonnegative, making $ A + B $ nonnegative. Similarly, if $ c \geq 0 $ is a scalar and $ A $ is nonnegative, then $ cA $ has entries $ c a_{ij} \geq 0 $, preserving nonnegativity. These operations form the basis of the convex cone structure of the set of nonnegative matrices.[11] The set of nonnegative matrices is also closed under matrix multiplication. If $ A $ is an $ m \times p $ nonnegative matrix with entries $ a_{ij} $ and $ B $ is a $ p \times n $ nonnegative matrix with entries $ b_{jk} $, then the product $ AB $ is an $ m \times n $ matrix whose $ (i,k) $-th entry is given by
$$(AB)_{ik} = \sum_{j=1}^p a_{ij} b_{jk} \geq 0,$$
as each term $ a_{ij} b_{jk} \geq 0 $ and the sum of nonnegative terms is nonnegative. This property ensures that the product remains entrywise nonnegative, establishing the nonnegative matrices as a semigroup under multiplication.[11] For square nonnegative matrices, the trace provides another nonnegative scalar invariant. If $ A = (a_{ij}) $ is an $ n \times n $ nonnegative matrix, the trace is $ \operatorname{tr}(A) = \sum_{i=1}^n a_{ii} \geq 0 $, since it sums the nonnegative diagonal entries. This follows directly from the definition of the trace as a sum over the diagonal.[11] Row and column sums offer useful summaries of the magnitude of nonnegative matrices. For an $ m \times n $ nonnegative matrix $ A = (a_{ij}) $, the $ i $-th row sum is $ s_i = \sum_{j=1}^n a_{ij} \geq 0 $, and the $ j $-th column sum is $ t_j = \sum_{i=1}^m a_{ij} \geq 0 $, as each is a finite sum of nonnegative entries. The maximum row sum, $ \max_i s_i $, defines the induced infinity norm $ \|A\|_\infty $, which bounds the action of $ A $ on vectors with infinity norm at most 1, satisfying $ \|Ax\|_\infty \leq \|A\|_\infty \|x\|_\infty $ for any vector $ x $.[11]
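
A short NumPy sketch verifying these closure properties and the row-sum formula for the infinity norm (the matrices are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[1.0, 0.0], [2.0, 3.0]])
B = np.array([[0.5, 1.0], [0.0, 4.0]])

# Closure: sums, nonnegative scalings, and products stay nonnegative.
for M in (A + B, 2.5 * A, A @ B):
    assert (M >= 0).all()

# Trace and row/column sums are sums of nonnegative entries.
print(np.trace(A))     # 4.0
print(A.sum(axis=1))   # row sums s_i
print(A.sum(axis=0))   # column sums t_j

# For A >= 0 the induced infinity norm is just the maximum row sum.
assert np.isclose(np.linalg.norm(A, np.inf), A.sum(axis=1).max())
```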

Powers and Norms

For a square nonnegative matrix $A \geq 0$, the powers $A^k$ for positive integers $k$ are also nonnegative, as matrix multiplication preserves nonnegativity entrywise: each entry of $A^k$ is a finite sum of products of nonnegative entries from $A$. The entries of $A^k$ exhibit growth governed by the spectral radius $\rho(A)$, with typical asymptotic behavior $(A^k)_{ij} \sim c_{ij}\, \rho(A)^k$ for some nonnegative constants $c_{ij}$ depending on the structure of $A$.[8]

The spectral radius $\rho(A)$ of a nonnegative matrix $A$ admits the Collatz–Wielandt characterization: $\rho(A) = \max_{x > 0} \min_i \frac{(Ax)_i}{x_i}$, where the maximum is over positive vectors $x$ and the minimum is over indices $i$. This variational formula provides a min-max principle for computing or bounding $\rho(A)$ without computing eigenvalues. For positive matrices $A > 0$, an equivalent formulation is $\rho(A) = \inf \{ \lambda > 0 : \exists x > 0 \text{ with } Ax \leq \lambda x \}$, highlighting the smallest scaling factor for which $A$ maps some positive vector into a scaled version of itself.[8]

Matrix norms play a key role in bounding powers and the spectral radius of nonnegative matrices. The infinity norm is defined as $\|A\|_\infty = \max_i \sum_j |a_{ij}|$; for nonnegative $A$, this simplifies to the maximum row sum $\max_i \sum_j a_{ij}$, since all entries are nonnegative. For any consistent matrix norm $\|\cdot\|$, the spectral radius satisfies $\rho(A) \leq \|A\|$, with equality for the infinity norm when $A$ has a positive eigenvector proportional to the all-ones vector. This inequality extends to powers: $\rho(A^k) = \rho(A)^k \leq \|A^k\| \leq \|A\|^k$.[8]

Gelfand's formula relates the spectral radius to the growth of powers via norms: $\rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k}$ for any matrix norm $\|\cdot\|$. In the general case, the proof relies on the Jordan form or resolvent estimates, showing $\rho(A) \leq \liminf_{k \to \infty} \|A^k\|^{1/k} \leq \limsup_{k \to \infty} \|A^k\|^{1/k} \leq \max_i |\lambda_i(A)|$, where equality throughout follows from the fact that the spectral radius equals the maximum eigenvalue modulus. For nonnegative matrices, the formula holds identically, but nonnegativity simplifies verification: since $A^k \geq 0$, the powers remain in the nonnegative cone, and the Collatz–Wielandt formula bounds the corresponding ratios for the normalized powers. For instance, using the infinity norm, $\|A^k\|_\infty^{1/k}$ converges to $\rho(A)$ because the row sums of $A^k$ grow like $\rho(A)^k$.[8]
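
The following sketch illustrates Gelfand's formula and the Collatz–Wielandt bounds numerically; the 2×2 matrix is an illustrative choice whose spectral radius is 2:

```python
import numpy as np

A = np.array([[0.0, 2.0], [1.0, 1.0]])
rho = max(abs(np.linalg.eigvals(A)))   # spectral radius, here 2

# Gelfand's formula: ||A^k||^(1/k) -> rho(A) for any matrix norm.
for k in (1, 5, 20, 80):
    Ak = np.linalg.matrix_power(A, k)
    print(k, np.linalg.norm(Ak, np.inf) ** (1.0 / k))

# Collatz-Wielandt: for any x > 0,
#   min_i (Ax)_i / x_i  <=  rho  <=  max_i (Ax)_i / x_i.
x = np.array([1.0, 2.0])
ratios = (A @ x) / x
print(ratios.min(), "<=", rho, "<=", ratios.max())   # 1.5 <= 2 <= 4
```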

Spectral Theory

Perron-Frobenius Theorem for Positive Matrices

The Perron-Frobenius theorem for positive matrices, originally established by Oskar Perron in 1907, provides fundamental insights into the spectral properties of matrices with strictly positive entries.[12] In his seminal paper "Zur Theorie der Matrizen," Perron proved that such matrices possess a positive eigenvalue that strictly dominates the rest of the spectrum in modulus.[12] This result laid the groundwork for broader applications in linear algebra and related fields, emphasizing the role of positivity in ensuring the existence of a real eigenvalue larger in magnitude than all others.[13]

For a square matrix $ A > 0 $ with all entries strictly positive, the theorem states that there exists a positive real eigenvalue $ \rho $, known as the Perron root, which is simple and satisfies $ |\lambda| < \rho $ for every other eigenvalue $ \lambda $ of $ A $.[12] Moreover, $ \rho $ is the spectral radius of $ A $, and it admits a corresponding positive eigenvector $ x > 0 $ such that $ Ax = \rho x $.[13] The algebraic and geometric multiplicity of $ \rho $ is one, ensuring uniqueness (up to scaling) of the positive eigenvector.[12]

A standard proof outline employs the Collatz-Wielandt characterization, which leverages variational principles on the positive orthant. Define the function $ f(y) = \min_{1 \leq i \leq n} \frac{(Ay)_i}{y_i} $ for $ y > 0 $; the maximum value of $ f(y) $ over all positive vectors $ y $ equals $ \rho $, achieved precisely when $ y $ is a positive eigenvector for $ \rho $.[14] Similarly, the minimum of $ g(y) = \max_{1 \leq i \leq n} \frac{(Ay)_i}{y_i} $ also yields $ \rho $. To establish existence, consider the compactness of the simplex of normalized positive vectors and apply the Brouwer fixed-point theorem to a suitable map derived from $ A $, confirming a fixed point corresponding to the eigenvector equation. Uniqueness and simplicity follow by contradiction: supposing another eigenvalue $ \lambda $ with $ |\lambda| \geq \rho $ leads to inconsistencies with positivity and the variational maximum, while multiplicity greater than one would imply a non-positive direction in the eigenspace, violating the positive eigenvector property.[13]
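
A minimal power-iteration sketch, assuming a strictly positive matrix, shows how the Collatz–Wielandt ratios squeeze the Perron root (the function name and iteration count are illustrative):

```python
import numpy as np

def perron(A: np.ndarray, iters: int = 200):
    """Power iteration for the Perron root/vector of a positive matrix.

    For A > 0 the normalized powers A^k y converge to the positive
    Perron eigenvector, and the Collatz-Wielandt ratios
    min_i (Ay)_i / y_i and max_i (Ay)_i / y_i squeeze the Perron root.
    """
    y = np.ones(A.shape[0])
    for _ in range(iters):
        y = A @ y
        y /= y.sum()          # keep y on the simplex
    r = A @ y
    return (r / y).min(), (r / y).max(), y

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lo, hi, v = perron(A)
print(lo, hi)                             # both approach the Perron root
print(max(abs(np.linalg.eigvals(A))))     # compare: spectral radius
```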

Extensions to Irreducible Nonnegative Matrices

A nonnegative matrix $ A \in \mathbb{R}^{n \times n} $ with $ A \geq 0 $ is defined to be irreducible if its associated directed graph $ G(A) $, where vertices correspond to indices and a directed edge from $ i $ to $ j $ exists if $ a_{ij} > 0 $, is strongly connected. This means there is a directed path from every vertex to every other vertex in $ G(A) $. Equivalently, for every pair of indices $ i, j $, there exists a positive integer $ k $ (depending on $ i $ and $ j $) such that $ (A^k)_{ij} > 0 $.[3]

The extension of the Perron-Frobenius theorem to irreducible nonnegative matrices, due to Frobenius, states that if $ A \geq 0 $ is irreducible of size $ n \times n $, then the spectral radius $ \rho(A) $ is a positive real number that is a simple eigenvalue of $ A $, with a corresponding positive eigenvector $ x > 0 $ such that $ Ax = \rho(A) x $. Moreover, $ \rho(A) $ has strictly positive left and right eigenvectors, and for any other eigenvalue $ \lambda $ of $ A $, $ |\lambda| \leq \rho(A) $, with equality holding if and only if $ \lambda = \rho(A) \omega $ for some $ h $-th root of unity $ \omega $, where $ h $ is the imprimitivity index of $ A $. There are exactly $ h $ distinct eigenvalues of modulus $ \rho(A) $, each simple.[3]

An irreducible nonnegative matrix $ A $ is called primitive if there exists a positive integer $ k $ such that $ A^k > 0 $ (i.e., all entries of $ A^k $ are positive). For primitive matrices, the Perron-Frobenius eigenvalue $ \rho(A) $ is strictly dominant: $ |\lambda| < \rho(A) $ for all other eigenvalues $ \lambda $. In this case, the imprimitivity index is $ h = 1 $, so $ \rho(A) $ is the only eigenvalue on the circle $ |\lambda| = \rho(A) $ in the complex plane. Additionally, once $ A^k > 0 $ for some $ k $, every higher power $ A^m $ with $ m \geq k $ is positive as well, since multiplying a positive matrix by an irreducible nonnegative matrix (which has no zero rows) preserves positivity.[3]

The imprimitivity index $ h $ of an irreducible nonnegative matrix $ A $ is defined as the greatest common divisor of the lengths of all directed cycles in its graph $ G(A) $, or equivalently, the number of distinct eigenvalues of modulus $ \rho(A) $. If $ h > 1 $, the matrix is imprimitive, and there are exactly $ h $ eigenvalues on the circle $ |\lambda| = \rho(A) $, located at $ \rho(A) e^{2\pi i j / h} $ for $ j = 0, 1, \dots, h-1 $. This periodicity reflects the cyclic structure of $ G(A) $, where the vertices decompose into $ h $ disjoint classes with transitions only between consecutive classes.[3]
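
These graph-theoretic conditions can be tested numerically. The sketch below uses two standard facts: $A \geq 0$ is irreducible iff $(I + A)^{n-1} > 0$, and a primitive $n \times n$ matrix already satisfies $A^k > 0$ for $k = n^2 - 2n + 2$ (Wielandt's bound), so checking that single power suffices; the helper names are illustrative:

```python
import numpy as np

def is_irreducible(A: np.ndarray) -> bool:
    """A >= 0 (n x n) is irreducible iff (I + A)^(n-1) > 0,
    i.e. the digraph of A is strongly connected."""
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1)
    return bool((M > 0).all())

def is_primitive(A: np.ndarray) -> bool:
    """A is primitive iff A^k > 0 for k = n^2 - 2n + 2 (Wielandt)."""
    n = A.shape[0]
    k = n * n - 2 * n + 2
    P = np.linalg.matrix_power((A > 0).astype(float), k)
    return bool((P > 0).all())

C = np.array([[0.0, 1.0], [1.0, 0.0]])     # 2-cycle: irreducible, h = 2
print(is_irreducible(C), is_primitive(C))  # True False
```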

Invertibility

Conditions for Invertibility

A square nonnegative matrix $A \geq 0$ is invertible if and only if its determinant is nonzero, $\det(A) \neq 0$. However, nonnegativity does not guarantee invertibility, as the all-ones matrix $J$ shows: every entry is 1, the rank is 1, and $\det(J) = 0$ for dimensions greater than 1. The rank of a nonnegative matrix coincides with its conventional rank over the reals, and full rank is necessary for invertibility. A necessary condition for full rank is the absence of zero rows or columns, since such a row or column implies linear dependence. For irreducible nonnegative matrices the Perron root $\rho > 0$, but this alone does not ensure full rank, as irreducible matrices can still be singular.

The Gershgorin circle theorem provides a sufficient condition for invertibility adapted to nonnegative matrices. All eigenvalues of $A$ lie in the union of disks centered at the diagonal entries $a_{ii} \geq 0$ with radii $R_i = \sum_{j \neq i} a_{ij} \geq 0$. If the union of these disks excludes 0 (e.g., if $A$ is strictly diagonally dominant with positive diagonal, so that $a_{ii} > R_i$ for all $i$, placing all disks in the open right half-plane), then 0 is not an eigenvalue and $A$ is invertible.

For nonnegative matrices the permanent satisfies $\operatorname{per}(A) \geq 0$, and $|\det(A)| \leq \operatorname{per}(A)$, since the permanent sums the same products as the determinant but without sign cancellations. The property $\det(A) \geq 0$ holds only for subclasses such as totally nonnegative matrices; in general, $\det(A)$ can be negative or zero (with zero indicating singularity). Counterexamples include nilpotent nonnegative matrices, such as the forward shift matrix $N$ with 1's on the superdiagonal and zeros elsewhere, where $N^k = 0$ for $k$ equal to the dimension, ensuring $\det(N) = 0$.
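
A sketch of the Gershgorin-based sufficient test, assuming the strict-diagonal-dominance form described above (the helper name is illustrative):

```python
import numpy as np

def gershgorin_invertible(A: np.ndarray) -> bool:
    """Sufficient test: if no Gershgorin disk contains 0, A is invertible.
    For A >= 0 the disk for row i is centered at a_ii with radius
    R_i = sum_{j != i} a_ij, so it misses 0 exactly when a_ii > R_i."""
    d = np.diag(A)
    R = A.sum(axis=1) - d
    return bool((d > R).all())   # strict row diagonal dominance

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant
J = np.ones((2, 2))                      # singular all-ones matrix
print(gershgorin_invertible(A))  # True: 0 lies outside every disk
print(gershgorin_invertible(J))  # False: the test is inconclusive here
```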

Nonnegative Inverses

A nonnegative square matrix $A$ is said to have a nonnegative inverse if $A$ is invertible and $A^{-1} \geq 0$, meaning all entries of the inverse are nonnegative. Such matrices arise in applications like Markov chains and optimization, where preserving nonnegativity under inversion is desirable. Unlike general nonnegative matrices, which may have inverses with negative entries, this property imposes a strict structural constraint on $A$.

The class of invertible nonnegative matrices with nonnegative inverses is precisely the set of positive monomial matrices. A positive monomial matrix is the product of a permutation matrix and a diagonal matrix with positive diagonal entries, i.e., $A = DP$, where $D = \operatorname{diag}(d_1, \dots, d_n)$ with $d_i > 0$ for all $i$, and $P$ is a permutation matrix. In this form, each row and each column of $A$ contains exactly one positive entry, with the rest being zero. The inverse is then $A^{-1} = P^{-1} D^{-1}$, which is also a positive monomial matrix with nonnegative entries (positive in the permuted diagonal positions and zero elsewhere).[15][16]

This characterization ensures that the matrix is invertible, since the determinant is, up to the sign of the permutation, the product of the positive diagonal entries, hence nonzero. For instance, a diagonal matrix with positive entries is a trivial positive monomial matrix (with the identity permutation), and its inverse is the diagonal matrix of reciprocal entries, all positive. Similarly, a scaled permutation matrix such as $A = \begin{pmatrix} 0 & 2 \\ 3 & 0 \end{pmatrix}$ has inverse $A^{-1} = \begin{pmatrix} 0 & 1/3 \\ 1/2 & 0 \end{pmatrix}$, both nonnegative. These examples illustrate how the structure limits off-diagonal interactions, preventing the negative entries in the inverse that typically arise when solving systems with coupled positive terms.[15]
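
A short sketch checking the monomial structure and contrasting the two cases discussed above (the helper name is illustrative):

```python
import numpy as np

def is_monomial(A: np.ndarray) -> bool:
    """Exactly one positive entry per row and per column, rest zero."""
    return bool((A >= 0).all()
                and ((A > 0).sum(axis=0) == 1).all()
                and ((A > 0).sum(axis=1) == 1).all())

A = np.array([[0.0, 2.0], [3.0, 0.0]])   # scaled permutation (monomial)
print(is_monomial(A))                    # True
print(np.linalg.inv(A))                  # [[0, 1/3], [1/2, 0]] >= 0

B = np.array([[1.0, 1.0], [0.0, 1.0]])   # nonnegative but not monomial
print(np.linalg.inv(B))                  # [[1, -1], [0, 1]]: negative entry
```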

Special Classes

Stochastic Matrices

A stochastic matrix is a square matrix whose entries are nonnegative real numbers and whose rows each sum to 1; such matrices are termed row-stochastic. This normalization ensures that the matrix can represent transition probabilities in a Markov chain, where the entry $p_{ij}$ denotes the probability of transitioning from state $i$ to state $j$. The defining equation for an $n \times n$ row-stochastic matrix $P = (p_{ij})$ is

$$\sum_{j=1}^n p_{ij} = 1$$

for each row index $i = 1, \dots, n$. Column-stochastic matrices are defined analogously, with each column summing to 1 instead.[17]

Row-stochastic matrices possess several key spectral properties arising from their nonnegative entries and normalization. The spectral radius $\rho(P)$ equals 1, and 1 is an eigenvalue with right eigenvector the all-ones vector $\mathbf{1} = (1, \dots, 1)^T$, satisfying $P \mathbf{1} = \mathbf{1}$. This follows directly from the row-sum condition, as the $i$-th entry of $P \mathbf{1}$ is $\sum_j p_{ij} = 1$. For irreducible row-stochastic matrices, meaning the associated directed graph is strongly connected, the Perron-Frobenius theorem, originally established by Oskar Perron for positive matrices and extended by Ferdinand Georg Frobenius to irreducible nonnegative ones, implies that 1 is a simple eigenvalue and dominant in modulus, with a unique (up to scaling) positive right eigenvector $\mathbf{1}$ and a unique positive left eigenvector $\pi$ normalized such that $\pi \mathbf{1} = 1$, representing the stationary distribution of the corresponding Markov chain.[12][18]

If the irreducible row-stochastic matrix is primitive, i.e., some power $P^k$ has all positive entries, then the powers $P^k$ converge as $k \to \infty$ to the rank-1 matrix whose rows are all equal to the stationary distribution $\pi$. This convergence reflects the long-term behavior of Markov chains, where the state distribution approaches the unique stationary measure regardless of the initial distribution. Irreducibility, as discussed in the spectral theory of nonnegative matrices, is essential for these uniqueness and convergence guarantees.[18]

A special subclass consists of doubly stochastic matrices, which are both row- and column-stochastic, so that $\mathbf{1}^T P = \mathbf{1}^T$ as well. Such matrices preserve the uniform distribution as a stationary measure. The Birkhoff-von Neumann theorem asserts that every doubly stochastic matrix is a convex combination of permutation matrices, providing a combinatorial interpretation of these matrices as probabilistic mixtures of permutations; it is known that at most $n^2 - 2n + 2$ permutation matrices suffice for such a decomposition. This result, proved by Garrett Birkhoff, has applications in assignment problems and optimal transport.[19][20]
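
The following NumPy sketch illustrates these facts on a small primitive row-stochastic matrix (the transition probabilities are an illustrative choice):

```python
import numpy as np

# A primitive row-stochastic matrix: rows are probability distributions.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1)

# P @ 1 = 1: the all-ones vector is a right eigenvector for eigenvalue 1.
assert np.allclose(P @ np.ones(2), np.ones(2))

# Powers of a primitive stochastic matrix converge to a rank-1 matrix
# whose rows all equal the stationary distribution pi (pi P = pi).
Pk = np.linalg.matrix_power(P, 100)
pi = Pk[0]
print(pi)                        # ~ [0.8, 0.2]
assert np.allclose(pi @ P, pi)   # stationarity
```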

M-Matrices

M-matrices, while not nonnegative themselves due to their nonpositive off-diagonal entries, are closely related to nonnegative matrices via representations of the form $A = sI - B$, where $B$ is a nonnegative matrix and $s > \rho(B)$. An M-matrix is a real square matrix $A = (a_{ij})$ such that $a_{ij} \leq 0$ for all $i \neq j$ (i.e., nonpositive off-diagonal entries) and all principal minors of $A$ are nonnegative.[21] Equivalently, a nonsingular M-matrix is a Z-matrix (nonpositive off-diagonals) with all eigenvalues having positive real parts.[22] The term "M-matrix" was first introduced by Alexander Ostrowski in 1937, in reference to Hermann Minkowski's work on related sign patterns and stability, though systematic characterizations emerged in the early 1950s through contributions by Lothar Collatz on monotone matrices and Hans Schneider on eigenvalue inequalities.[22][21]

A key representation of an M-matrix is $A = sI - B$, where $B$ is a nonnegative matrix, $s \geq 0$, and $s \geq \rho(B)$ (the spectral radius of $B$); for nonsingularity, the inequality is strict: $s > \rho(B)$.[21] If $A$ is nonsingular, its inverse satisfies $A^{-1} \geq 0$ (entrywise nonnegative), a property that distinguishes M-matrices among Z-matrices and links them to inverse-positive operators.[22] When $s > \rho(B)$, the inverse can be expressed via the Neumann series

$$A^{-1} = \frac{1}{s} \sum_{k=0}^{\infty} \left( \frac{B}{s} \right)^k,$$

which converges because $\rho(B/s) < 1$.[22] This series expansion underscores the stability inherent in M-matrices, as their positive-real-part eigenvalues ensure asymptotic stability of linear systems such as $\dot{x} = -Ax$.[21]

The Z-signature of a Z-matrix, defined as the maximum number of negative eigenvalues across all its principal submatrices, provides another characterization: an M-matrix has Z-signature zero, meaning no principal submatrix admits negative eigenvalues.[22] This property, formalized in works by Fiedler and Pták in 1962, reinforces the all-positive-real-parts eigenvalue condition and facilitates comparisons with other matrix classes in spectral theory.[22] Overall, these features position M-matrices as a foundational class in the study of sign-patterned matrices, with characterizations compiled in surveys such as Plemmons's 1977 article.[22]
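
A minimal sketch of the Neumann-series representation, assuming an illustrative choice of $B$ and $s$ with $s > \rho(B)$:

```python
import numpy as np

# An M-matrix A = sI - B with B >= 0 and s > rho(B).
B = np.array([[0.0, 1.0], [2.0, 1.0]])
s = 4.0
assert s > max(abs(np.linalg.eigvals(B)))   # nonsingular case: s > rho(B)
A = s * np.eye(2) - B

# Neumann series A^{-1} = (1/s) * sum_k (B/s)^k, convergent as rho(B/s) < 1.
inv_approx = sum(np.linalg.matrix_power(B / s, k) for k in range(200)) / s
assert np.allclose(inv_approx, np.linalg.inv(A))
print(inv_approx)   # entrywise nonnegative, as the theory predicts
```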