Augmented matrix
In linear algebra, an augmented matrix is a matrix obtained by appending an m-dimensional column vector b, on the right, as a further column to an m × n matrix A. This is usually done for the purpose of performing the same elementary row operations on the augmented matrix as are performed on the original one when solving a system of linear equations by Gaussian elimination.
For example, given the matrix A and column vector b, where

    A = [ 1 3 2 ]      b = [ 4 ]
        [ 2 0 1 ]          [ 3 ]
        [ 5 2 2 ]          [ 1 ]

the augmented matrix is

    (A | b) = [ 1 3 2 | 4 ]
              [ 2 0 1 | 3 ]
              [ 5 2 2 | 1 ]
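For readers following along computationally, the same construction can be sketched in NumPy (an illustrative sketch, not part of the original article):

```python
import numpy as np

# Coefficient matrix A and column vector b from the example above
A = np.array([[1, 3, 2],
              [2, 0, 1],
              [5, 2, 2]])
b = np.array([[4],
              [3],
              [1]])

# Appending b as a further column yields the augmented matrix (A | b)
augmented = np.hstack([A, b])
print(augmented)
```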
For a given number of unknowns, the number of solutions to a system of linear equations depends only on the rank of the matrix A of coefficients representing the system and the rank of the corresponding augmented matrix (A | b), where the components of b consist of the right-hand sides of the successive linear equations. According to the Rouché–Capelli theorem, any system of linear equations Ax = b,
where x is the n-component column vector whose entries are the unknowns of the system, is inconsistent (has no solutions) if the rank of the augmented matrix (A | b) is greater than the rank of the coefficient matrix A. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank r equals the number n of variables. Otherwise the general solution has n − r free parameters, where n − r is the difference between the number of variables and the rank. In such a case the solutions form an affine space of dimension equal to this difference.
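The rank test of the Rouché–Capelli theorem can be checked numerically; the following is a minimal sketch using NumPy's matrix_rank (the function name classify_system is my own, not from the article):

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b by comparing rank(A) with rank(A | b)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    n = A.shape[1]  # number of unknowns
    if rank_Ab > rank_A:
        return "inconsistent"          # no solutions
    if rank_A == n:
        return "unique solution"       # rank equals number of variables
    return f"{n - rank_A} free parameters"  # affine solution space

# A consistent system with a unique solution
print(classify_system([[1, 2], [3, 4]], [5, 6]))  # unique solution
```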
The inverse of a nonsingular square matrix A of dimension n × n may be found by appending the n × n identity matrix I to the right of A to form the n × 2n augmented matrix (A | I). Applying elementary row operations to transform the left-hand block to the identity matrix I, the right-hand block is then the inverse matrix A⁻¹.
Example of finding the inverse of a matrix
Let C be the square 2×2 matrix

    C = [  1 3 ]
        [ −5 0 ]

To find the inverse of C we form the augmented matrix (C | I), where I is the 2×2 identity matrix:

    (C | I) = [  1 3 | 1 0 ]
              [ −5 0 | 0 1 ]

We then reduce the part of (C | I) corresponding to C to the identity matrix using elementary row operations on (C | I); the right part is then the inverse C⁻¹:

    (I | C⁻¹) = [ 1 0 | 0    −1/5  ]
                [ 0 1 | 1/3   1/15 ]
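The (A | I) procedure can be carried out programmatically; below is a hedged sketch of Gauss–Jordan inversion on the augmented matrix (the function name is hypothetical):

```python
import numpy as np

def inverse_via_augmentation(A):
    """Invert A by row-reducing the augmented matrix (A | I)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])        # form (A | I)
    for j in range(n):
        # Partial pivoting: bring the largest entry in column j up
        p = j + np.argmax(np.abs(M[j:, j]))
        M[[j, p]] = M[[p, j]]
        M[j] /= M[j, j]                  # scale pivot row to get a leading 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]   # eliminate column j in other rows
    return M[:, n:]                      # right-hand block is the inverse

C = np.array([[1, 3], [-5, 0]])
print(inverse_via_augmentation(C))       # matches np.linalg.inv(C)
```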
Existence and number of solutions
Consider the system of equations

    x + y + 2z = 3
    x + y + z = 1
    2x + 2y + 3z = 4

The coefficient matrix is

    A = [ 1 1 2 ]
        [ 1 1 1 ]
        [ 2 2 3 ]

and the augmented matrix is

    (A | b) = [ 1 1 2 | 3 ]
              [ 1 1 1 | 1 ]
              [ 2 2 3 | 4 ]
Since both of these have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, the latter being 3, there are an infinite number of solutions.
In contrast, consider the system

    x + y + 2z = 3
    x + y + z = 1
    2x + 2y + 3z = 5

The coefficient matrix is

    A = [ 1 1 2 ]
        [ 1 1 1 ]
        [ 2 2 3 ]

and the augmented matrix is

    (A | b) = [ 1 1 2 | 3 ]
              [ 1 1 1 | 1 ]
              [ 2 2 3 | 5 ]
In this example the coefficient matrix has rank 2 while the augmented matrix has rank 3; so this system of equations has no solution. Indeed, an increase in the number of linearly independent rows has made the system of equations inconsistent.
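The two rank computations can be reproduced numerically; this is an illustrative check, assuming constant vectors whose last entries differ as described (rank 2 for the consistent system, rank 3 for the inconsistent one):

```python
import numpy as np

coeff = np.array([[1, 1, 2],
                  [1, 1, 1],
                  [2, 2, 3]], dtype=float)

consistent   = np.hstack([coeff, [[3.], [1.], [4.]]])  # first system
inconsistent = np.hstack([coeff, [[3.], [1.], [5.]]])  # second system

print(np.linalg.matrix_rank(coeff))         # 2
print(np.linalg.matrix_rank(consistent))    # 2 -> at least one solution
print(np.linalg.matrix_rank(inconsistent))  # 3 -> no solution
```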
Solution of a linear system
As used in linear algebra, an augmented matrix represents the coefficients and the constant term of each equation in a set. For the set of equations

    x + 2y + 3z = 0
    3x + 4y + 7z = 2
    6x + 5y + 9z = 11

the coefficients and constant terms give the matrices

    A = [ 1 2 3 ]      b = [  0 ]
        [ 3 4 7 ]          [  2 ]
        [ 6 5 9 ]          [ 11 ]

and hence give the augmented matrix

    (A | b) = [ 1 2 3 |  0 ]
              [ 3 4 7 |  2 ]
              [ 6 5 9 | 11 ]
Note that the rank of the coefficient matrix, which is 3, equals the rank of the augmented matrix, so at least one solution exists; and since this rank equals the number of unknowns, there is exactly one solution.
To obtain the solution, row operations can be performed on the augmented matrix to obtain the identity matrix on the left side, yielding

    [ 1 0 0 |  4 ]
    [ 0 1 0 |  1 ]
    [ 0 0 1 | −2 ]

so the solution of the system is (x, y, z) = (4, 1, −2).
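The result can be double-checked with a standard solver; a quick verification sketch, assuming the system as given above:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [3, 4, 7],
              [6, 5, 9]], dtype=float)
b = np.array([0, 2, 11], dtype=float)

# rank(A) == rank(A | b) == 3, so exactly one solution exists
assert np.linalg.matrix_rank(np.hstack([A, b[:, None]])) == 3

x = np.linalg.solve(A, b)
print(x)  # approximately (4, 1, -2)
```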
Fundamentals
Definition
An augmented matrix is a matrix formed by adjoining a column vector of constants to the coefficient matrix of a system of linear equations, thereby representing the entire system in a compact matrix form.[4] This structure facilitates the application of systematic algebraic manipulations to solve or analyze the system without explicitly retaining the variables in each equation.[4] For a system of linear equations given by Ax = b, where A is an m × n coefficient matrix and b is an m × 1 column vector of constants, the augmented matrix is denoted [A | b], resulting in an m × (n + 1) matrix.[4] The vertical bar conventionally separates the coefficients from the constants, though it is often omitted in formal notation.[1] The concept of the augmented matrix emerged in the context of Gaussian elimination during the 19th- and 20th-century development of linear algebra, building on earlier elimination methods traced back to ancient Chinese mathematics and refined by European mathematicians such as Carl Friedrich Gauss, though no specific inventor is attributed to the term itself.[5] A key property is that the solution set of the original system remains unchanged under permissible row operations applied to the augmented matrix, as these operations correspond to equivalent transformations of the equations.[4]

Construction from Linear Equations
To construct an augmented matrix from a system of linear equations, first identify the coefficient matrix A and the constant vector b, forming the augmented matrix [A | b].[1] The process begins by rewriting the system so that all variables are on the left side of the equals sign and constants on the right, ensuring each equation is in the standard form a₁x₁ + a₂x₂ + ⋯ + aₙxₙ = b. For each equation, the coefficients form a row in the matrix, with the constant appended as an additional entry in a final column separated by a vertical bar to denote the equals sign. If a variable is missing from an equation, insert a zero coefficient in the corresponding column to maintain alignment across rows. The number of rows equals the number of equations, and the number of coefficient columns equals the number of variables.[6][7] Consider the system of two equations in two variables:

    2x + 3y = 5
    x − y = 1

The augmented matrix is

    [ 2  3 | 5 ]
    [ 1 −1 | 1 ]
Here, the first row captures the coefficients 2 and 3 with constant 5, and the second row uses 1 and −1 with constant 1.[1] This construction applies uniformly to both homogeneous systems, where all constants are zero (resulting in a zero augmented column), and non-homogeneous systems, where at least one constant is non-zero. For instance, the homogeneous system 2x + 3y = 0, x − y = 0 yields

    [ 2  3 | 0 ]
    [ 1 −1 | 0 ]
while a non-homogeneous counterpart like 2x + 3y = 5, x − y = 1 produces

    [ 2  3 | 5 ]
    [ 1 −1 | 1 ]
The presence of non-zero constants in the augmented column distinguishes non-homogeneous systems but does not alter the assembly steps.[6][7] Edge cases arise in systems with zero equations (an empty matrix) or zero rows, such as the trivial equation 0 = 0, which forms a row [ 0 ⋯ 0 | 0 ] in homogeneous contexts or [ 0 ⋯ 0 | b ] otherwise (with b ≠ 0 implying inconsistency, though not analyzed here). Underdetermined systems, featuring more variables than equations, result in augmented matrices with more coefficient columns than rows; for example, the single equation x + y = 3 constructs as [ 1 1 | 3 ], accommodating free variables during later analysis.[1][7]
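The construction steps above, including the zero-filling rule for missing variables, can be sketched programmatically (the helper below is a hypothetical illustration, not from the text):

```python
import numpy as np

def build_augmented(equations, variables):
    """Build [A | b] from equations given as ({var: coeff}, constant) pairs.

    A variable missing from an equation gets a zero coefficient
    in its column, keeping the rows aligned.
    """
    rows = []
    for coeffs, const in equations:
        # One coefficient column per variable, in a fixed left-to-right order,
        # then the constant as the final (augmented) entry
        rows.append([coeffs.get(v, 0) for v in variables] + [const])
    return np.array(rows, dtype=float)

# 2x + 3y = 5 and x - y = 1
M = build_augmented([({"x": 2, "y": 3}, 5),
                     ({"x": 1, "y": -1}, 1)], ["x", "y"])
print(M)
```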
Notation and Representation
An augmented matrix is conventionally denoted using the bracket notation [A | b], where A is the coefficient matrix and b is the column vector of constants from the right-hand side of the linear system Ax = b, with the vertical bar serving as a delimiter to clearly separate the coefficients from the constants.[8] This notation emphasizes the structure of the original system while facilitating matrix operations like row reduction.[9] The dimensions of an augmented matrix are always m × (n + 1), where m represents the number of equations (corresponding to the rows) and n the number of variables (corresponding to the columns of A), with the extra column accommodating the constants in b.[8] For instance, a system with three equations and two variables yields a 3 × 3 augmented matrix.[9] In visual representation, augmented matrices are typically displayed in a tabular array format, such as using LaTeX's bmatrix environment with an explicit vertical bar for separation, as in the following example for the system 2x + 3y = 5, x − y = 1:
    [ 2  3 | 5 ]
    [ 1 −1 | 1 ]

[9]
This distinguishes it from a plain coefficient matrix by appending the constant column, often aligned with the equals signs in the original equations. In plain text or handwritten notes, the bar may be represented as a simple vertical line or omitted in compact forms, though the standard printed convention includes it to avoid ambiguity.[8]
Common conventions include ordering variables sequentially from left to right as x₁, x₂, …, xₙ across the coefficient columns, ensuring consistent alignment with the system's variable indices.[9] Leading entries in rows may include zeros if absent variables are implied (e.g., a missing term is entered as 0 in that column), and parameters or vectors in the constants are treated as fixed entries in the final column without special notation beyond their scalar or vector form.[8] Some texts use alternative separators like commas or brackets instead of the vertical bar, particularly in computational contexts where the full matrix is parsed programmatically, but the bar remains the predominant symbolic choice in pedagogical materials.[9]
Row Reduction Techniques
Elementary Row Operations
Elementary row operations are the fundamental manipulations performed on an augmented matrix [A | b], where A is the coefficient matrix and b is the constant vector, to simplify the representation of a system of linear equations without altering its solution set.[10] There are three types of these operations: interchanging two rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another row.[11] The first operation, interchanging two rows, reorders the equations in the system but leaves the solution set unchanged, as the order of equations does not affect their collective solutions.[12] The second operation multiplies all entries in a single row by a nonzero constant c, which corresponds to multiplying the corresponding equation by c, preserving the equality and thus the solutions.[12] The third operation adds k times one row to another row, equivalent to adding k times one equation to another, which generates a new equation satisfied by the same solutions without introducing inconsistencies.[12] These operations are reversible: swapping rows again restores the original, multiplying by 1/c undoes scaling, and adding −k times the same row back reverses the addition.[10] They also preserve the row space of the matrix, as each type produces linear combinations that span the same subspace.[13]

Consider the augmented matrix for the system x + 3y = 4 and 2x + 4y = 6:

    [ 1 3 | 4 ]
    [ 2 4 | 6 ]

Swapping the rows yields:

    [ 2 4 | 6 ]
    [ 1 3 | 4 ]

To demonstrate the second operation, multiply the first row (after the swap) by 1/2 to get a leading 1:

    [ 1 2 | 3 ]
    [ 1 3 | 4 ]

To demonstrate the third operation, add −1 times the first row to the second row:

    [ 1 2 | 3 ]
    [ 0 1 | 1 ]

Each transformation maintains the original solution set.[11]

Gaussian Elimination Process
The Gaussian elimination process is a systematic algorithm that employs elementary row operations to transform the augmented matrix of a system of linear equations into row echelon form, enabling efficient determination of solutions through subsequent back-substitution.[14] The procedure builds on the three fundamental elementary row operations (swapping rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another) to progressively eliminate variables below pivot positions.[14] The algorithm operates column by column, starting from the leftmost column and proceeding to the right, for columns j = 1 to min(m, n), where m is the number of rows (equations) and n is the number of columns excluding the augmented part (variables). In each column j:

- Identify a pivot: Locate the entry with the largest absolute value in column j from row j to row m (partial pivoting); if all of these entries are zero, skip to the next column.
- Swap rows if necessary to place this pivot entry in position (j, j).
- Eliminate below the pivot: For each row i > j, subtract an appropriate multiple of row j from row i to set the entry in column j to zero.
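The column-by-column procedure above can be sketched as follows; this is an illustrative implementation of the forward-elimination phase only (back-substitution is omitted), and it tracks the current pivot row separately so that skipped all-zero columns are handled:

```python
import numpy as np

def forward_eliminate(M):
    """Reduce an augmented matrix M to row echelon form with partial pivoting."""
    M = np.asarray(M, dtype=float).copy()
    m, cols = M.shape
    n = cols - 1                    # columns excluding the augmented part
    row = 0                         # next pivot row
    for j in range(n):
        # 1. Pivot: largest absolute value in column j, from the pivot row down
        p = row + np.argmax(np.abs(M[row:, j]))
        if np.isclose(M[p, j], 0.0):
            continue                # all entries zero: skip to the next column
        # 2. Swap the pivot row into position
        M[[row, p]] = M[[p, row]]
        # 3. Eliminate below the pivot
        for i in range(row + 1, m):
            M[i] -= (M[i, j] / M[row, j]) * M[row]
        row += 1
    return M

# The system x + 3y = 4 and 2x + 4y = 6
M = forward_eliminate([[1, 3, 4],
                       [2, 4, 6]])
print(M)  # upper-triangular left block
```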
