Tridiagonal matrix
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below the main diagonal), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:

\begin{pmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3
\end{pmatrix}
The determinant of a tridiagonal matrix is given by the continuant of its elements.[1]
An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.
Properties
A tridiagonal matrix is a matrix that is both upper and lower Hessenberg.[2] In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + 2q = n, the dimension of the matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies a_{k,k+1} a_{k+1,k} > 0 for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix, by a diagonal change of basis matrix. Hence, its eigenvalues are real. If we replace the strict inequality by a_{k,k+1} a_{k+1,k} ≥ 0, then by continuity, the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.[3]
The set of all n × n tridiagonal matrices forms a (3n − 2)-dimensional vector space.
Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.
Determinant
The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation.[4] Denote the diagonal entries of A by a_1, …, a_n, the superdiagonal entries by b_1, …, b_{n−1}, and the subdiagonal entries by c_1, …, c_{n−1}. Write f_1 = |a_1| = a_1 (i.e., f_1 is the determinant of the 1-by-1 matrix consisting only of a_1), and let f_n = |A|.
The sequence (f_i) is called the continuant and satisfies the recurrence relation

f_n = a_n f_{n-1} - c_{n-1} b_{n-1} f_{n-2}

with initial values f_0 = 1 and f_{−1} = 0. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix.
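A minimal sketch of this recurrence in Python (assuming NumPy arrays diag, sup, and sub holding the diagonal, superdiagonal, and subdiagonal entries; the names are illustrative):

import numpy as np

def tridiag_det(diag, sup, sub):
    # Continuant recurrence: f_k = diag[k]*f_{k-1} - sub[k-1]*sup[k-1]*f_{k-2}
    # with f_{-1} = 0 and f_0 = 1 (0-based indexing of the entry arrays).
    n = len(diag)
    f_prev2, f_prev1 = 0.0, 1.0   # f_{-1}, f_0
    for k in range(n):
        f = diag[k] * f_prev1 - (sub[k-1] * sup[k-1] * f_prev2 if k > 0 else 0.0)
        f_prev2, f_prev1 = f_prev1, f
    return f_prev1

# Example: determinant of the 4x4 matrix shown above
diag = np.array([1., 4., 3., 3.])
sup  = np.array([4., 1., 4.])
sub  = np.array([3., 2., 1.])
print(tridiag_det(diag, sup, sub))                                          # recurrence
print(np.linalg.det(np.diag(diag) + np.diag(sup, 1) + np.diag(sub, -1)))    # dense check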
Inversion
The inverse of a non-singular tridiagonal matrix T

T = \begin{pmatrix}
a_1 & b_1 & & & \\
c_1 & a_2 & b_2 & & \\
 & c_2 & \ddots & \ddots & \\
 & & \ddots & \ddots & b_{n-1} \\
 & & & c_{n-1} & a_n
\end{pmatrix}

is given by

(T^{-1})_{ij} = \begin{cases}
(-1)^{i+j} \, b_i b_{i+1} \cdots b_{j-1} \, \theta_{i-1} \phi_{j+1} / \theta_n & \text{if } i \le j \\
(-1)^{i+j} \, c_j c_{j+1} \cdots c_{i-1} \, \theta_{j-1} \phi_{i+1} / \theta_n & \text{if } i > j
\end{cases}

where the θ_i satisfy the recurrence relation

\theta_i = a_i \theta_{i-1} - b_{i-1} c_{i-1} \theta_{i-2}, \qquad i = 2, 3, \ldots, n,

with initial conditions θ_0 = 1, θ_1 = a_1, and the ϕ_i satisfy

\phi_i = a_i \phi_{i+1} - b_i c_i \phi_{i+2}, \qquad i = n - 1, \ldots, 1,

with initial conditions ϕ_{n+1} = 1 and ϕ_n = a_n.[5][6]
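A small Python sketch of these recurrences (function and variable names are illustrative, not taken from the cited sources):

import numpy as np

def tridiag_inverse(a, b, c):
    """Inverse of a tridiagonal matrix with diagonal a (length n),
    superdiagonal b and subdiagonal c (length n-1), via the theta/phi recurrences."""
    n = len(a)
    theta = np.zeros(n + 1)                # theta[i] = theta_i, theta[0] = 1
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i-1] * theta[i-1] - b[i-2] * c[i-2] * theta[i-2]
    phi = np.zeros(n + 2)                  # phi[i] = phi_i, phi[n+1] = 1
    phi[n+1], phi[n] = 1.0, a[n-1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i-1] * phi[i+1] - b[i-1] * c[i-1] * phi[i+2]
    Tinv = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= j:
                Tinv[i-1, j-1] = (-1)**(i+j) * np.prod(b[i-1:j-1]) * theta[i-1] * phi[j+1] / theta[n]
            else:
                Tinv[i-1, j-1] = (-1)**(i+j) * np.prod(c[j-1:i-1]) * theta[j-1] * phi[i+1] / theta[n]
    return Tinv

# Check against NumPy on the 4x4 example above
a = np.array([1., 4., 3., 3.]); b = np.array([4., 1., 4.]); c = np.array([3., 2., 1.])
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.allclose(tridiag_inverse(a, b, c), np.linalg.inv(T))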
Closed form solutions can be computed for special cases such as symmetric matrices with all diagonal and off-diagonal elements equal[7] or Toeplitz matrices[8] and for the general case as well.[9][10]
In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa.[11] The inverse of a symmetric tridiagonal matrix can be written as a single-pair matrix (a.k.a. generator-representable semiseparable matrix) of the form[12][13]

(T^{-1})_{ij} = u_{\min(i,j)} \, v_{\max(i,j)},

where u and v are vectors determined by recurrences in the entries of T, analogous to the θ_i and ϕ_i above.
Solution of linear system
A system of equations Ax = b for b ∈ ℝⁿ can be solved by an efficient form of Gaussian elimination when A is tridiagonal, called the tridiagonal matrix algorithm, requiring O(n) operations.[14]
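In practice such systems are usually handed to a banded solver; as an illustration (not part of the cited sources), SciPy's solve_banded accepts the three diagonals in banded storage:

import numpy as np
from scipy.linalg import solve_banded

# Diagonals of the 4x4 example matrix above (a: main, b: super, c: sub)
a = np.array([1., 4., 3., 3.])
b = np.array([4., 1., 4.])
c = np.array([3., 2., 1.])
rhs = np.array([1., 2., 3., 4.])

# Banded ("matrix diagonal ordered") form: row 0 = superdiagonal, row 1 = diagonal, row 2 = subdiagonal
ab = np.zeros((3, 4))
ab[0, 1:] = b
ab[1, :] = a
ab[2, :-1] = c

x = solve_banded((1, 1), ab, rhs)   # (1, 1) = one subdiagonal and one superdiagonal
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
assert np.allclose(T @ x, rhs)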
Eigenvalues
When a tridiagonal matrix is also Toeplitz, with constant diagonal entries a and constant off-diagonal entries b (superdiagonal) and c (subdiagonal), there is a simple closed-form solution for its eigenvalues, namely:[15][16]

\lambda_k = a + 2\sqrt{bc}\,\cos\!\left(\frac{k\pi}{n+1}\right), \qquad k = 1, \ldots, n.
A real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero.[17] Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring O(n²) operations for a matrix of size n × n, although fast algorithms exist which (without parallel computation) require only O(n log n).[18]
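A quick numerical check of the Toeplitz formula against a general symmetric tridiagonal eigensolver (a sketch; the specific values are arbitrary):

import numpy as np
from scipy.linalg import eigh_tridiagonal

n, a, b = 8, 2.0, -1.0               # Toeplitz tridiagonal: a on the diagonal, b on both off-diagonals
k = np.arange(1, n + 1)
closed_form = a + 2 * b * np.cos(k * np.pi / (n + 1))

# eigh_tridiagonal takes the diagonal and the off-diagonal of a symmetric tridiagonal matrix
numeric = eigh_tridiagonal(np.full(n, a), np.full(n - 1, b), eigvals_only=True)

assert np.allclose(np.sort(closed_form), np.sort(numeric))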
As a side note, an unreduced symmetric tridiagonal matrix is one whose off-diagonal elements are all nonzero; its eigenvalues are distinct, and its eigenvectors are unique up to a scale factor and mutually orthogonal.[19]
Similarity to symmetric tridiagonal matrix
For unsymmetric or nonsymmetric tridiagonal matrices one can compute the eigendecomposition using a similarity transformation. Given a real tridiagonal, nonsymmetric matrix

A = \begin{pmatrix}
a_1 & b_1 & & & \\
c_1 & a_2 & b_2 & & \\
 & c_2 & \ddots & \ddots & \\
 & & \ddots & \ddots & b_{n-1} \\
 & & & c_{n-1} & a_n
\end{pmatrix}

where b_i ≠ c_i in general, assume that each product of off-diagonal entries is strictly positive, b_i c_i > 0, and define a transformation matrix D by[20]

D := \operatorname{diag}(\delta_1, \ldots, \delta_n), \qquad
\delta_i := \begin{cases} 1 & i = 1 \\ \sqrt{\dfrac{c_1 \cdots c_{i-1}}{b_1 \cdots b_{i-1}}} & i = 2, \ldots, n. \end{cases}

The similarity transformation D^{-1} A D yields a symmetric tridiagonal matrix J with the same diagonal entries and off-diagonal entries sgn(b_i) √(b_i c_i):[21][20]

J := D^{-1} A D = \begin{pmatrix}
a_1 & \operatorname{sgn}(b_1)\sqrt{b_1 c_1} & & \\
\operatorname{sgn}(b_1)\sqrt{b_1 c_1} & a_2 & \operatorname{sgn}(b_2)\sqrt{b_2 c_2} & \\
 & \ddots & \ddots & \ddots \\
 & & \operatorname{sgn}(b_{n-1})\sqrt{b_{n-1} c_{n-1}} & a_n
\end{pmatrix}

Note that A and J have the same eigenvalues.
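A small sketch verifying this transformation numerically (variable names are illustrative):

import numpy as np

def symmetrize_tridiagonal(a, b, c):
    """Given diagonal a, superdiagonal b, subdiagonal c with b[i]*c[i] > 0,
    return D and the symmetric tridiagonal J = D^{-1} A D."""
    n = len(a)
    delta = np.ones(n)
    for i in range(1, n):
        delta[i] = delta[i-1] * np.sqrt(c[i-1] / b[i-1])
    D = np.diag(delta)
    off = np.sign(b) * np.sqrt(b * c)
    J = np.diag(a) + np.diag(off, 1) + np.diag(off, -1)
    return D, J

a = np.array([2., 3., 4.])
b = np.array([1., 4.])      # superdiagonal
c = np.array([4., 1.])      # subdiagonal (b[i]*c[i] > 0)
A = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
D, J = symmetrize_tridiagonal(a, b, c)
assert np.allclose(np.linalg.inv(D) @ A @ D, J)
assert np.allclose(np.sort(np.linalg.eigvals(A).real), np.sort(np.linalg.eigvals(J).real))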
Computer programming
A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to (symmetric real) tridiagonal form as a first step.[22]
A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order n in three one-dimensional arrays, one of length n containing the diagonal elements, and two of length n − 1 containing the subdiagonal and superdiagonal elements.
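For instance, a matrix-vector product can be formed directly from the three arrays without ever assembling the full matrix (a sketch in NumPy, not LAPACK code):

import numpy as np

def tridiag_matvec(d, du, dl, x):
    """y = T x, where T has diagonal d (length n), superdiagonal du and subdiagonal dl (length n-1)."""
    y = d * x
    y[:-1] += du * x[1:]
    y[1:]  += dl * x[:-1]
    return y

d  = np.array([1., 4., 3., 3.])
du = np.array([4., 1., 4.])
dl = np.array([3., 2., 1.])
x  = np.array([1., 2., 3., 4.])
T  = np.diag(d) + np.diag(du, 1) + np.diag(dl, -1)
assert np.allclose(tridiag_matvec(d, du, dl, x), T @ x)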
Applications
The discretization in space of the one-dimensional diffusion or heat equation

\frac{\partial u(t,x)}{\partial t} = \alpha \frac{\partial^2 u(t,x)}{\partial x^2}

using second order central finite differences results in

\frac{\partial}{\partial t}
\begin{pmatrix} u_1(t) \\ u_2(t) \\ \vdots \\ u_N(t) \end{pmatrix}
= \frac{\alpha}{\Delta x^2}
\begin{pmatrix}
-2 & 1 & & \\
1 & -2 & 1 & \\
 & \ddots & \ddots & \ddots \\
 & & 1 & -2
\end{pmatrix}
\begin{pmatrix} u_1(t) \\ u_2(t) \\ \vdots \\ u_N(t) \end{pmatrix}

with discretization constant Δx. The matrix is tridiagonal with a_i = −2 and b_i = c_i = 1. Note: no boundary conditions are used here.
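A short sketch building this discretization matrix (grid size and α are arbitrary illustration values):

import numpy as np

N, alpha, dx = 6, 1.0, 0.1

# Second-order central-difference Laplacian: -2 on the diagonal, 1 on the off-diagonals
L = np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
A = alpha / dx**2 * L

# Semi-discrete heat equation: du/dt = A u (no boundary conditions applied here)
u0 = np.sin(np.linspace(0.0, np.pi, N))
dudt = A @ u0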
Notes
1. Thomas Muir (1960). A treatise on the theory of determinants. Dover Publications. pp. 516–525.
2. Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. p. 28. ISBN 0521386322.
3. Horn & Johnson, page 174.
4. El-Mikkawy, M. E. A. (2004). "On the inverse of a general tridiagonal matrix". Applied Mathematics and Computation. 150 (3): 669–679. doi:10.1016/S0096-3003(03)00298-4.
5. Da Fonseca, C. M. (2007). "On the eigenvalues of some tridiagonal matrices". Journal of Computational and Applied Mathematics. 200: 283–286. doi:10.1016/j.cam.2005.08.047.
6. Usmani, R. A. (1994). "Inversion of a tridiagonal jacobi matrix". Linear Algebra and Its Applications. 212–213: 413–414. doi:10.1016/0024-3795(94)90414-6.
7. Hu, G. Y.; O'Connell, R. F. (1996). "Analytical inversion of symmetric tridiagonal matrices". Journal of Physics A: Mathematical and General. 29 (7): 1511. Bibcode:1996JPhA...29.1511H. doi:10.1088/0305-4470/29/7/020.
8. Huang, Y.; McColl, W. F. (1997). "Analytical inversion of general tridiagonal matrices". Journal of Physics A: Mathematical and General. 30 (22): 7919. Bibcode:1997JPhA...30.7919H. doi:10.1088/0305-4470/30/22/026.
9. Mallik, R. K. (2001). "The inverse of a tridiagonal matrix". Linear Algebra and Its Applications. 325 (1–3): 109–139. doi:10.1016/S0024-3795(00)00262-7.
10. Kılıç, E. (2008). "Explicit formula for the inverse of a tridiagonal matrix by backward continued fractions". Applied Mathematics and Computation. 197: 345–357. doi:10.1016/j.amc.2007.07.046.
11. Raf Vandebril; Marc Van Barel; Nicola Mastronardi (2008). Matrix Computations and Semiseparable Matrices. Volume I: Linear Systems. JHU Press. Theorem 1.38, p. 41. ISBN 978-0-8018-8714-7.
12. Meurant, Gerard (1992). "A review on the inverse of symmetric tridiagonal and block tridiagonal matrices". SIAM Journal on Matrix Analysis and Applications. 13 (3): 707–728. doi:10.1137/0613045.
13. Bossu, Sebastien (2024). "Tridiagonal and single-pair matrices and the inverse sum of two single-pair matrices". Linear Algebra and Its Applications. 699: 129–158. arXiv:2304.06100. doi:10.1016/j.laa.2024.06.018.
14. Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). The Johns Hopkins University Press. ISBN 0-8018-5414-8.
15. Noschese, S.; Pasquini, L.; Reichel, L. (2013). "Tridiagonal Toeplitz matrices: Properties and novel applications". Numerical Linear Algebra with Applications. 20 (2): 302. doi:10.1002/nla.1811.
16. This can also be written as a − 2√(bc) cos(kπ/(n+1)), because cos(x) = −cos(π − x), as is done in: Kulkarni, D.; Schmidt, D.; Tsui, S. K. (1999). "Eigenvalues of tridiagonal pseudo-Toeplitz matrices" (PDF). Linear Algebra and Its Applications. 297 (1–3): 63–80. doi:10.1016/S0024-3795(99)00114-7.
17. Parlett, B. N. (1997) [1980]. The Symmetric Eigenvalue Problem. Classics in Applied Mathematics. Vol. 20. SIAM. ISBN 0-89871-402-8. OCLC 228147822.
18. Coakley, E. S.; Rokhlin, V. (2012). "A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices". Applied and Computational Harmonic Analysis. 34 (3): 379–414. doi:10.1016/j.acha.2012.06.003.
19. Dhillon, Inderjit Singh (1997). A New O(n²) Algorithm for the Symmetric Tridiagonal Eigenvalue/Eigenvector Problem (PDF) (PhD). University of California, Berkeley. p. 8. CSD-97-971, ADA637073.
20. Kreer, M. (1994). "Analytic birth-death processes: a Hilbert space approach". Stochastic Processes and Their Applications. 49 (1): 65–74. doi:10.1016/0304-4149(94)90112-0.
21. Meurant, Gérard (October 2008). "Tridiagonal matrices" (PDF) – via Institute for Computational Mathematics, Hong Kong Baptist University.
22. Eidelman, Yuli; Gohberg, Israel; Gemignani, Luca (2007). "On the fast reduction of a quasiseparable matrix to Hessenberg and tridiagonal forms". Linear Algebra and Its Applications. 420 (1): 86–101. doi:10.1016/j.laa.2006.06.028. ISSN 0024-3795.
External links
- Tridiagonal and Bidiagonal Matrices in the LAPACK manual.
- Moawwad El-Mikkawy, Abdelrahman Karawia (2006). "Inversion of general tridiagonal matrices". Applied Mathematics Letters. 19 (8): 712–720. doi:10.1016/j.aml.2005.11.012.
- High performance algorithms for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form
- Tridiagonal linear system solver in C++
Tridiagonal matrix
Definition and Notation
Formal Definition
A tridiagonal matrix is a square matrix of order n in which all entries are zero except those on the main diagonal, the subdiagonal (immediately below the main diagonal), and the superdiagonal (immediately above the main diagonal).[2] This structure confines the nonzero elements to a narrow band around the main diagonal, distinguishing it from denser matrices.[8] In terms of bandwidth, a tridiagonal matrix has a lower bandwidth of 1 and an upper bandwidth of 1, meaning the nonzero entries extend at most one position below and above the main diagonal.[9] More formally, an n × n matrix A = (a_{ij}) is tridiagonal if a_{ij} = 0 whenever |i − j| > 1.[2] This contrasts with a diagonal matrix, which has bandwidth (0, 0) and only nonzero main diagonal entries, and with bidiagonal matrices, which have bandwidth (1, 0) or (0, 1) by including only one off-diagonal. Broader banded matrices, such as pentadiagonal ones, allow nonzeros up to two positions away from the main diagonal, resulting in bandwidth (2, 2).[9]

Under certain conditions, such as the matrix being irreducible and diagonally dominant (each diagonal entry's absolute value is at least the sum of the absolute values of the off-diagonal entries in its row), tridiagonal matrices exhibit desirable properties such as nonsingularity.[2] A tridiagonal matrix is irreducible if all subdiagonal and superdiagonal entries are nonzero, ensuring the associated directed graph is strongly connected.[2] These matrices are particularly valuable in sparse matrix contexts for efficient storage and computation in numerical linear algebra.[8]
Matrix Representation and Examples
A tridiagonal matrix of order n is typically specified by three vectors holding its nonzero diagonals: the main diagonal with elements b_i for i = 1, …, n, the subdiagonal (immediately below the main diagonal) with elements a_i for i = 2, …, n, and the superdiagonal (immediately above the main diagonal) with elements c_i for i = 1, …, n − 1.[10] This notation compactly represents the sparse structure where all other entries are zero, as seen in the general form

T = \begin{pmatrix}
b_1 & c_1 & & & \\
a_2 & b_2 & c_2 & & \\
 & a_3 & \ddots & \ddots & \\
 & & \ddots & \ddots & c_{n-1} \\
 & & & a_n & b_n
\end{pmatrix}.

For a concrete illustration, a 3 × 3 tridiagonal matrix with numerical entries might be

\begin{pmatrix}
2 & 1 & 0 \\
3 & 5 & 4 \\
0 & 6 & 7
\end{pmatrix},

and a 4 × 4 tridiagonal matrix is built the same way from four diagonal, three subdiagonal, and three superdiagonal values. Such instances show how varying the diagonal elements allows for diverse applications while preserving the tridiagonal form.[10]

In the symmetric case, the subdiagonal and superdiagonal elements are equal, i.e., a_{i+1} = c_i for i = 1, …, n − 1, resulting in a matrix that equals its own transpose, for example

\begin{pmatrix}
2 & 1 & 0 \\
1 & 5 & 4 \\
0 & 4 & 7
\end{pmatrix}.

This structure is prevalent in applications like quantum mechanics and orthogonal polynomials, where self-adjoint operators are modeled.[11]

A special subclass is the tridiagonal Toeplitz matrix, characterized by constant values along each diagonal: a fixed value b on the main diagonal, a fixed value a on the subdiagonal, and a fixed value c on the superdiagonal. For n = 4, such a matrix appears as

\begin{pmatrix}
b & c & 0 & 0 \\
a & b & c & 0 \\
0 & a & b & c \\
0 & 0 & a & b
\end{pmatrix}.

This constant-diagonal property simplifies analytical computations, such as finding closed-form expressions for the eigenvalues, and arises in signal processing and difference equations.[6]

Tridiagonal matrices also arise naturally from second-order linear recurrence relations. Specifically, solving a tridiagonal system Tx = d, where T has subdiagonal elements a_i, diagonal elements b_i, and superdiagonal elements c_i, yields components x_i that satisfy a second-order inhomogeneous linear recurrence of the form

a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i,

with coefficients given by the matrix entries and boundary conditions supplied by the first and last equations. In the Toeplitz case with constant coefficients, this recurrence admits explicit solutions via characteristic equations.[12]
Algebraic Properties
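A small sketch constructing tridiagonal matrices of these kinds from their three diagonals (the values are arbitrary):

import numpy as np

def tridiag(sub, diag, sup):
    """Assemble a dense tridiagonal matrix from its subdiagonal, diagonal, and superdiagonal."""
    return np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)

T = tridiag(sub=[3., 6.], diag=[2., 5., 7.], sup=[1., 4.])      # general 3x3
S = tridiag(sub=[1., 4.], diag=[2., 5., 7.], sup=[1., 4.])      # symmetric: sub == sup
Z = tridiag(sub=[-1.]*3, diag=[2.]*4, sup=[-1.]*3)              # Toeplitz: constant diagonals
print(T, S, Z, sep="\n\n")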
Basic Structure and Operations
A tridiagonal matrix of order n is sparse, with at most 3n − 2 nonzero entries: n on the main diagonal, n − 1 on the superdiagonal, and n − 1 on the subdiagonal.[13] This sparsity facilitates efficient storage and computation, requiring only O(n) space compared to O(n²) for a dense matrix of the same order.[13]

The set of tridiagonal matrices forms a vector subspace of the space of all n × n matrices, as it is closed under addition and scalar multiplication. The sum of two tridiagonal matrices is tridiagonal, since adding corresponding entries preserves the zeros outside the three central diagonals, and multiplying a tridiagonal matrix by a scalar yields another tridiagonal matrix with all nonzero entries scaled accordingly.[13]

The product of two tridiagonal matrices is generally pentadiagonal, with nonzero entries potentially extending to the second superdiagonal and second subdiagonal, resulting in a bandwidth of up to 5 (or semi-bandwidth 2). This follows from the property that the product of two band matrices with semi-bandwidths p and q has semi-bandwidth at most p + q. The product remains tridiagonal only in special cases, for example when one factor is diagonal or when the factors interact so that no fill-in appears on the outer diagonals.[13]

The transpose of a tridiagonal matrix is also tridiagonal, with the superdiagonal and subdiagonal interchanged while the main diagonal remains unchanged; a symmetric tridiagonal matrix (superdiagonal equal to subdiagonal) equals its transpose.[13] The trace of a tridiagonal matrix, the sum of its main diagonal entries, equals the sum of its eigenvalues and does not depend on the off-diagonal elements. For norms, the Frobenius norm is the square root of the sum of squares of all entries, which for a tridiagonal matrix involves only the diagonal and the two adjacent off-diagonals; the infinity norm, the maximum absolute row sum, equals the largest value over rows of the absolute diagonal entry plus the absolute values of the (at most two) adjacent off-diagonal entries.[13] These properties underpin the use of tridiagonal matrices in solving sparse linear systems efficiently.[13]
Determinant Computation
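A quick check, for illustration, that sums stay tridiagonal while products generally become pentadiagonal:

import numpy as np

def bandwidth(M, tol=1e-12):
    """Return the (lower, upper) bandwidth of a square matrix."""
    rows, cols = np.nonzero(np.abs(M) > tol)
    return int(np.max(rows - cols)), int(np.max(cols - rows))

rng = np.random.default_rng(0)
n = 6
def random_tridiag():
    return (np.diag(rng.standard_normal(n)) +
            np.diag(rng.standard_normal(n - 1), 1) +
            np.diag(rng.standard_normal(n - 1), -1))

A, B = random_tridiag(), random_tridiag()
print(bandwidth(A + B))   # (1, 1): still tridiagonal
print(bandwidth(A @ B))   # typically (2, 2): pentadiagonal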
The determinant of an n × n tridiagonal matrix T, with main diagonal entries b_i, subdiagonal entries a_i, and superdiagonal entries c_i, satisfies the three-term recurrence relation

f_k = b_k f_{k-1} - a_k c_{k-1} f_{k-2},

with base cases f_0 = 1 and f_1 = b_1, where f_k is the determinant of the leading k × k principal submatrix and f_n = det T.[14] This relation arises from expanding the determinant along the last row and applying properties of determinants for bordered matrices, enabling computation in linear time complexity O(n).[15]

For the special case of a Toeplitz tridiagonal matrix with constant main diagonal b, constant subdiagonal a, and constant superdiagonal c, the recurrence becomes a linear homogeneous relation with constant coefficients, f_k = b f_{k-1} − ac f_{k-2}. The characteristic equation is x² − bx + ac = 0, with roots x_± = (b ± √(b² − 4ac))/2. Assuming the roots are distinct, the closed-form solution is

f_n = \frac{x_+^{\,n+1} - x_-^{\,n+1}}{x_+ - x_-}.

If the discriminant b² − 4ac is negative, the roots are complex conjugates and the expression can be rewritten in trigonometric form; writing b = 2√(ac) cos θ gives f_n = (ac)^{n/2} sin((n+1)θ)/sin θ.[16] In the case of a repeated root x (when b² = 4ac), the form adjusts to f_n = (n+1) xⁿ. For constant diagonals, the ratio f_n / f_{n−1} admits a continued fraction representation,

\frac{f_n}{f_{n-1}} = b - \cfrac{ac}{b - \cfrac{ac}{b - \ddots}},

which converges to the dominant root for large n.[16]

The Gershgorin circle theorem provides bounds on the eigenvalues of a tridiagonal matrix, with each disk centered at a diagonal entry b_i and radius |a_i| + |c_i| (adjusted at the boundary rows); since the determinant equals the product of the eigenvalues, these bounds indirectly constrain the possible magnitude of the determinant. However, direct recursive computation via the three-term relation can exhibit numerical instability for large n in regimes where the recurrence admits strongly growing and decaying solutions, due to amplification of roundoff errors in the recessive mode. The closed-form expression can suffer from cancellation when the roots are close, leading to loss of significant digits in floating-point arithmetic.
Eigenvalues and Eigenvectors
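A sketch comparing the recurrence with the closed form for a Toeplitz tridiagonal matrix (arbitrary illustrative values):

import numpy as np

def det_recurrence(b_diag, a_sub, c_sup, n):
    # f_k = b*f_{k-1} - a*c*f_{k-2} with constant coefficients (Toeplitz case)
    f_prev, f = 1.0, b_diag            # f_0, f_1
    for _ in range(2, n + 1):
        f_prev, f = f, b_diag * f - a_sub * c_sup * f_prev
    return f

def det_closed_form(b_diag, a_sub, c_sup, n):
    disc = b_diag**2 - 4 * a_sub * c_sup
    xp = (b_diag + np.sqrt(complex(disc))) / 2
    xm = (b_diag - np.sqrt(complex(disc))) / 2
    return ((xp**(n + 1) - xm**(n + 1)) / (xp - xm)).real

b, a, c, n = 3.0, 1.0, 1.0, 8          # distinct roots, so the closed form applies
print(det_recurrence(b, a, c, n), det_closed_form(b, a, c, n))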
The characteristic polynomial of a tridiagonal matrix T of order n, with diagonal entries b_i, subdiagonal entries a_i, and superdiagonal entries c_i, is defined as p_n(λ) = det(T − λI). It satisfies the three-term recurrence relation

p_k(λ) = (b_k - λ) p_{k-1}(λ) - a_k c_{k-1} p_{k-2}(λ),

with initial conditions p_0(λ) = 1 and p_1(λ) = b_1 − λ.[17] The eigenvalues of T are the roots of p_n, and these roots inherit the recursive structure through the continued fraction representation or generating function solutions of the recurrence.[17]

For the special case of a symmetric tridiagonal Toeplitz matrix with constant diagonal entries α and constant off-diagonal entries β, the eigenvalues admit an explicit closed-form expression:

\lambda_k = \alpha + 2\beta \cos\!\left(\frac{k\pi}{n+1}\right), \qquad k = 1, \ldots, n.[6]

The corresponding eigenvectors v^{(k)} have components

v^{(k)}_j = \sin\!\left(\frac{jk\pi}{n+1}\right), \qquad j = 1, \ldots, n.[6]

Since the matrix is symmetric, its eigenvectors form an orthogonal basis for ℝⁿ. This orthogonality follows from the spectral theorem for symmetric matrices and is preserved in the explicit sine form, where the inner product between distinct eigenvectors v^{(k)} and v^{(l)} vanishes due to the orthogonality of sine functions over the discrete grid.[18]

As n → ∞, the eigenvalues of the symmetric tridiagonal Toeplitz matrix densely fill the interval [α − 2|β|, α + 2|β|], approximating the continuous spectrum of the associated infinite Toeplitz operator. The asymptotic distribution of the normalized eigenvalues follows the arcsine law with density 1/(π√(1 − x²)) on (−1, 1), reflecting the uniform spacing in the cosine argument.[6][19]

Perturbation theory for nearly tridiagonal matrices, such as those where small fill-in entries are introduced outside the tridiagonal band, shows that eigenvalues remain close to those of the original tridiagonal form if the perturbations are block-structured or localized. For a symmetric tridiagonal matrix perturbed by setting one off-diagonal entry to zero, the eigenvalue shifts are bounded by the square root of the perturbation magnitude, with explicit bounds derived from interlacing properties.[20] More generally, for Hermitian block tridiagonal matrices under blockwise perturbations, well-separated eigenvalues exhibit insensitivity, with perturbation bounds scaling as the norm of the off-block entries relative to eigenvalue gaps.[21]
Advanced Properties
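A short check of the sine-form eigenpairs, for illustration:

import numpy as np

n, alpha, beta = 6, 2.0, -1.0
T = np.diag(np.full(n, alpha)) + np.diag(np.full(n - 1, beta), 1) + np.diag(np.full(n - 1, beta), -1)

j = np.arange(1, n + 1)
for k in range(1, n + 1):
    lam = alpha + 2 * beta * np.cos(k * np.pi / (n + 1))
    v = np.sin(j * k * np.pi / (n + 1))
    assert np.allclose(T @ v, lam * v)     # (lam, v) is an eigenpair of T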
Matrix Inversion
The inverse of a nonsingular tridiagonal matrix T is generally a full (dense) matrix, but its entries possess explicit closed-form expressions that can be derived using the Green's function approach, which interprets the inverse as the discrete Green's function for the second-order linear difference equation associated with the matrix.[22] For a general tridiagonal matrix with diagonal entries b_i, subdiagonal entries a_i, and superdiagonal entries c_i, the entries of the inverse are given by ratios involving the determinants of appropriate principal submatrices. Define the leading principal minors θ_i for i = 1, …, n, where θ_0 = 1 and θ_1 = b_1, satisfying the three-term recurrence

\theta_i = b_i \theta_{i-1} - a_i c_{i-1} \theta_{i-2}, \qquad i = 2, \ldots, n.

Similarly, define the trailing principal minors ϕ_i for i = n, …, 1, where ϕ_{n+1} = 1 and ϕ_n = b_n, with the recurrence

\phi_i = b_i \phi_{i+1} - c_i a_{i+1} \phi_{i+2}, \qquad i = n - 1, \ldots, 1.

Then, for i ≤ j,

(T^{-1})_{ij} = (-1)^{i+j} \, c_i c_{i+1} \cdots c_{j-1} \, \theta_{i-1} \phi_{j+1} / \theta_n,

and for i > j,

(T^{-1})_{ij} = (-1)^{i+j} \, a_{j+1} a_{j+2} \cdots a_i \, \theta_{j-1} \phi_{i+1} / \theta_n.

In the symmetric case (a_{i+1} = c_i for all i), the lower triangle is the transpose of the upper-triangle formula.[23][24] The matrix is invertible if and only if θ_n = det T ≠ 0, which ensures the existence of the expressions above; if, in addition, all leading principal minors θ_i are nonzero for i = 1, …, n, Gaussian elimination without pivoting encounters no zero pivots.[23]

In the special case of a symmetric positive definite tridiagonal matrix (where a_{i+1} = c_i for all i and all leading principal minors are positive), the entries of the inverse decay monotonically away from the main diagonal, with the rate of decay depending on the minimum eigenvalue and the off-diagonal magnitudes; specifically, for strictly diagonally dominant matrices, |(T^{-1})_{ij}| ≤ C ρ^{|i−j|} for some constants C > 0 and 0 < ρ < 1.[25] The leading and trailing minors can be computed in O(n) time via their respective recurrence relations, enabling the full inverse to be assembled in O(n²) arithmetic operations by evaluating the entry formulas (the off-diagonal products can be accelerated using prefix products).[26]
Similarity to Symmetric Forms
A similarity transformation preserves the eigenvalues of a matrix: the transformed matrix B = S^{-1} A S shares the same spectrum as the original matrix A. This holds because the characteristic polynomials of A and B are identical, ensuring that the eigenvalues and their algebraic multiplicities remain unchanged under such transformations.

For Hermitian matrices, orthogonal (or unitary) similarity transformations can reduce the matrix to a symmetric tridiagonal form, which simplifies subsequent computations while preserving the spectrum. The Householder reduction method achieves this by applying a sequence of Householder reflections (orthogonal matrices that zero out the elements below the subdiagonal in successive columns, starting from the first). This process eliminates entries outside the tridiagonal band in a stable manner, requiring on the order of n³ floating-point operations for an n × n matrix (about 4n³/3 when the orthogonal factor is not accumulated). Alternatively, the Lanczos algorithm, originally proposed in 1950, performs tridiagonalization through an iterative process that generates an orthonormal basis via three-term recurrence relations, effectively projecting the matrix onto a Krylov subspace to yield a symmetric tridiagonal representation.[27]

For real nonsymmetric matrices, the standard similarity reduction for eigenvalue computations is to upper Hessenberg form using Householder reflections or Givens rotations, which eliminates elements below the subdiagonal. Reduction all the way to tridiagonal form is possible but less common, owing to numerical stability issues and higher cost. In the QR algorithm, Hessenberg (or tridiagonal, in the symmetric case) reduction accelerates the iterations by banding the matrix.[28]

After reduction, the tridiagonal matrix may be unreduced, meaning all subdiagonal elements are nonzero, which implies it has no multiple eigenvalues and corresponds to an irreducible structure. In contrast, a block tridiagonal form arises if some subdiagonal elements are exactly zero, indicating the presence of invariant subspaces or multiple eigenvalues that partition the matrix into smaller tridiagonal blocks. This distinction matters for further analysis, as block forms allow divide-and-conquer strategies in eigenvalue solvers.[29]
Solution of Linear Systems
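A minimal Lanczos tridiagonalization sketch (full reorthogonalization is omitted, so this is illustrative rather than numerically robust; the names are mine):

import numpy as np

def lanczos_tridiagonalize(A, v0):
    """Return (alpha, beta): diagonal and off-diagonal of the symmetric tridiagonal
    matrix produced by the Lanczos three-term recurrence applied to symmetric A."""
    n = A.shape[0]
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros(n)
    b = 0.0
    for k in range(n):
        w = A @ q - b * q_prev
        alpha[k] = q @ w
        w -= alpha[k] * q
        if k < n - 1:
            b = np.linalg.norm(w)
            beta[k] = b
            q_prev, q = q, w / b
    return alpha, beta

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)); A = (M + M.T) / 2          # random symmetric test matrix
alpha, beta = lanczos_tridiagonalize(A, rng.standard_normal(6))
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
# In exact arithmetic T is orthogonally similar to A, so the spectra agree
assert np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A))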
Solving a linear system Tx = d, where T is an n × n tridiagonal matrix with subdiagonal entries a_i (for i = 2, …, n), diagonal entries b_i (for i = 1, …, n), and superdiagonal entries c_i (for i = 1, …, n − 1), and d ∈ ℝⁿ, requires ensuring the existence and uniqueness of the solution. A sufficient condition for T to be nonsingular, and thus for the system to have a unique solution, is strict diagonal dominance: |b_1| > |c_1|, |b_n| > |a_n|, and |b_i| > |a_i| + |c_i| for 1 < i < n.[30] This property guarantees invertibility for general matrices, including tridiagonal ones, via the Gershgorin circle theorem, as all eigenvalues lie within disks centered at the diagonal entries that do not include the origin.[30]

One theoretical approach to solving the system adapts Cramer's rule, which expresses each component x_i as the ratio of two determinants: the determinant of the matrix obtained by replacing the i-th column of T with d, divided by det T. For tridiagonal matrices, these determinants (and subdeterminants) can be computed efficiently using a three-term recurrence relation, so the overall cost drops to O(n²), as each of the n + 1 determinants requires O(n) operations, far below the cost of Cramer's rule on dense matrices.[31] Even so, this method remains inefficient compared to direct factorization techniques, as it does not achieve linear-time performance and can accumulate rounding errors in finite precision.[32]

More efficient resolution uses factorization, such as the LU decomposition, which for tridiagonal matrices can be performed in O(n) time due to the limited bandwidth.[33] The resulting lower and upper triangular factors then allow solution via forward substitution to solve Lz = d followed by backward substitution to solve Ux = z, each also requiring O(n) operations, yielding an overall complexity of O(n) for the system.[33] This underscores the structural advantage of tridiagonal systems over dense ones, where substitution alone costs O(n²).

The well-posedness of the problem depends on the condition number, typically measured in the 2-norm as κ₂(T) = ‖T‖₂ ‖T^{-1}‖₂, which bounds the relative error amplification: to first order, ‖δx‖/‖x‖ ≤ κ(T) (‖δT‖/‖T‖ + ‖δd‖/‖d‖). For tridiagonal matrices, O(n)-time algorithms exist to compute or estimate such condition numbers, exploiting QR factorizations or recurrence relations for the norms of T and T^{-1}.[34] Well-posedness holds when κ(T) remains bounded independently of n, as in certain diagonally dominant Toeplitz tridiagonal matrices whose condition number stays bounded by a modest constant regardless of dimension; otherwise, a growing condition number (in some cases exponential in n) indicates sensitivity to perturbations.[35][34]

Perturbation analysis reveals the vulnerability of ill-conditioned tridiagonal systems, where small changes in the entries or the right-hand side can produce large deviations in x. For instance, a symmetric tridiagonal M-matrix whose diagonal only barely dominates its off-diagonals can have a condition number in the hundreds even at moderate dimensions, amplifying relative errors by a corresponding factor despite the matrix being diagonally dominant. Wilkinson's seminal work on error analysis, extended from eigenvalue problems to linear systems, highlights such cases: Gaussian elimination on near-singular tridiagonal matrices incurs backward errors bounded by a small multiple of the unit roundoff, but forward errors can approach κ(T) times the backward error, emphasizing the need for careful numerical handling.[36]
Numerical Algorithms
LU Decomposition
The LU decomposition of a tridiagonal matrix T with subdiagonal entries a_i (i = 2, …, n), diagonal entries b_i (i = 1, …, n), and superdiagonal entries c_i (i = 1, …, n − 1) factors T = LU, where L is a unit lower bidiagonal matrix with 1's on the main diagonal and subdiagonal entries ℓ_i (i = 2, …, n), and U is an upper bidiagonal matrix with diagonal entries u_i (i = 1, …, n) and superdiagonal entries c_i (i = 1, …, n − 1).[37] The factors are computed using the following forward recurrence relations without pivoting:

u_1 = b_1, \qquad \ell_i = \frac{a_i}{u_{i-1}}, \qquad u_i = b_i - \ell_i c_{i-1}, \qquad i = 2, \ldots, n.

This process requires O(n) arithmetic operations and O(n) storage, as only the nonzero elements of L and U need to be retained.[37] No pivoting is required if T is strictly diagonally dominant, i.e., |b_1| > |c_1|, |b_n| > |a_n|, and |b_i| > |a_i| + |c_i| for 1 < i < n, which ensures all u_i ≠ 0 and numerical stability in floating-point arithmetic.[38] Otherwise, the algorithm may encounter breakdown if some u_i = 0, necessitating pivoting strategies that disrupt the bidiagonal structure.[38] This factorization is closely related to the evaluation of continued fractions, where the recurrence relations mirror the iterative computation of convergents in a continued fraction expansion for solving tridiagonal systems.[39]
Thomas Algorithm
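A compact sketch of this factorization (illustrative names):

import numpy as np

def tridiag_lu(a, b, c):
    """LU factors of a tridiagonal matrix: subdiagonal a (length n-1, entries a_2..a_n),
    diagonal b (length n), superdiagonal c (length n-1). Returns (l, u) with
    L = I + diag(l, -1) and U = diag(u) + diag(c, +1). No pivoting."""
    n = len(b)
    u = np.zeros(n)
    l = np.zeros(n - 1)
    u[0] = b[0]
    for i in range(1, n):
        l[i-1] = a[i-1] / u[i-1]
        u[i] = b[i] - l[i-1] * c[i-1]
    return l, u

a = np.array([1., 2., 3.])          # subdiagonal
b = np.array([4., 5., 6., 7.])      # diagonal (strictly dominant here, so no breakdown)
c = np.array([1., 1., 1.])          # superdiagonal
l, u = tridiag_lu(a, b, c)
T = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
L = np.eye(4) + np.diag(l, -1)
U = np.diag(u) + np.diag(c, 1)
assert np.allclose(L @ U, T)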
The Thomas algorithm, also known as the tridiagonal matrix algorithm (TDMA), is a direct method for solving a system of linear equations Tx = d, where T is an n × n tridiagonal matrix with subdiagonal elements a_i (for i = 2, …, n), diagonal elements b_i (for i = 1, …, n), and superdiagonal elements c_i (for i = 1, …, n − 1), and d is the right-hand side vector.[40] Named after physicist Llewellyn Thomas, who introduced it in 1949, the algorithm exploits the banded structure of T to perform Gaussian elimination without fill-in, reducing computational complexity from O(n³) for general dense matrices to O(n).[41] It consists of a forward elimination phase that transforms the system into an upper triangular form and a subsequent back-substitution phase to recover the solution x.[40]

In the forward elimination phase, the algorithm computes the diagonal elements u_i of the upper triangular factor in the LU decomposition of T (with unit diagonal in L), while updating the right-hand side d to a modified vector z. Specifically, set u_1 = b_1 and z_1 = d_1/u_1; then, for i = 2 to n, compute the multiplier w = a_i/u_{i−1}, u_i = b_i − w c_{i−1}, and z_i = (d_i − w z_{i−1})/u_i.[40] This phase effectively solves Ly = d, storing the scaled values z_i = y_i/u_i, while constructing U, avoiding storage of the full L or U matrices beyond the necessary scalars. The back-substitution phase then solves Ux = y by setting x_n = z_n and, for i = n − 1 down to 1, x_i = z_i − (c_i/u_i) x_{i+1}.[40]

The following pseudocode (1-based indexing) outlines the algorithm:

# Forward elimination
u[1] = b[1]
z[1] = d[1] / u[1]
for i = 2 to n:
    w = a[i] / u[i-1]
    u[i] = b[i] - w * c[i-1]
    z[i] = (d[i] - w * z[i-1]) / u[i]

# Back-substitution
x[n] = z[n]
for i = n-1 downto 1:
    x[i] = z[i] - (c[i] / u[i]) * x[i+1]
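For reference, a runnable 0-indexed Python version of the same procedure (a sketch; it assumes no pivoting is needed):

import numpy as np

def thomas_solve(a, b, c, d):
    """Solve T x = d for tridiagonal T with subdiagonal a (length n-1),
    diagonal b (length n), superdiagonal c (length n-1). No pivoting."""
    n = len(b)
    u = np.empty(n); z = np.empty(n)
    u[0] = b[0]
    z[0] = d[0] / u[0]
    for i in range(1, n):                      # forward elimination
        w = a[i-1] / u[i-1]
        u[i] = b[i] - w * c[i-1]
        z[i] = (d[i] - w * z[i-1]) / u[i]
    x = np.empty(n)
    x[n-1] = z[n-1]
    for i in range(n - 2, -1, -1):             # back-substitution
        x[i] = z[i] - (c[i] / u[i]) * x[i+1]
    return x

a = np.array([1., 1., 1.])                     # subdiagonal
b = np.array([4., 4., 4., 4.])                 # diagonal
c = np.array([1., 1., 1.])                     # superdiagonal
d = np.array([5., 6., 6., 5.])
x = thomas_solve(a, b, c, d)
T = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
assert np.allclose(T @ x, d)                   # solution is x = [1, 1, 1, 1]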
