Augmented matrix
from Wikipedia

In linear algebra, an augmented matrix is a matrix obtained by appending an m-dimensional column vector b, on the right, as a further column to an m × n matrix A. This is usually done for the purpose of performing the same elementary row operations on the augmented matrix as are done on the original one when solving a system of linear equations by Gaussian elimination.

For example, given an m × n matrix A and an m-dimensional column vector b, the augmented matrix (A | b) is formed by writing the entries of b as an additional column to the right of the columns of A.

For a given number of unknowns, the number of solutions to a system of linear equations depends only on the rank of the matrix of coefficients A representing the system and the rank of the corresponding augmented matrix (A | b), where the components of b consist of the right-hand sides of the successive linear equations. According to the Rouché–Capelli theorem, any system of linear equations Ax = b,

where x is the n-component column vector whose entries are the unknowns of the system, is inconsistent (has no solutions) if the rank of the augmented matrix (A | b) is greater than the rank of the coefficient matrix A. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables n. Otherwise the general solution has n − rank(A) free parameters, the difference between the number of variables and the rank; in such a case there is an affine space of solutions of dimension equal to this difference.

The inverse of a nonsingular square matrix A of dimension n × n may be found by appending the n × n identity matrix I to the right of A to form the n × 2n augmented matrix (A | I). Applying elementary row operations to transform the left-hand block into the identity matrix I, the right-hand block is then the inverse matrix A⁻¹.

Example of finding the inverse of a matrix


Let A be a nonsingular square 2×2 matrix. To find the inverse of A we form the augmented matrix (A | I), where I is the 2×2 identity matrix. We then reduce the part of (A | I) corresponding to A to the identity matrix using elementary row operations on (A | I); the right part of the result is then the inverse A⁻¹.
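The reduction described above can be sketched in code. The matrix below is a hypothetical example (the article's original matrix is not reproduced here); the routine performs Gauss–Jordan reduction on the block [A | I] with exact rational arithmetic and reads off A⁻¹ from the right-hand block.

```python
from fractions import Fraction

def invert_via_augmented(A):
    """Invert a nonsingular square matrix by row-reducing [A | I] to [I | A^-1]."""
    n = len(A)
    # Build the augmented block [A | I] using exact rational arithmetic.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a nonzero pivot and swap it into the pivot position.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # The right-hand block is now the inverse.
    return [row[n:] for row in M]

A = [[2, 1], [1, 1]]  # hypothetical example matrix
print(invert_via_augmented(A))  # the inverse, [[1, -1], [-1, 2]], as Fractions
```

Exact `Fraction` arithmetic avoids the rounding issues a floating-point reduction would introduce for small hand-checked examples.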

Existence and number of solutions


Consider a system of equations in three unknowns whose coefficient matrix A and augmented matrix (A | b) both have rank 2. Since both of these have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, the latter being 3, there are an infinite number of solutions.

In contrast, consider a system whose coefficient matrix has rank 2 while the augmented matrix has rank 3; this system of equations has no solution. Indeed, an increase in the number of linearly independent rows has made the system of equations inconsistent.
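The rank comparison behind the Rouché–Capelli criterion can be checked mechanically. The system below is a hypothetical example chosen so that the third equation contradicts the sum of the first two; the `rank` helper computes rank by forward elimination.

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix, computed by forward elimination with exact arithmetic."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Hypothetical inconsistent system: x + y = 2, y + z = 3, x + 2y + z = 10
# (the third left-hand side is the sum of the first two, but 2 + 3 != 10).
A = [[1, 1, 0], [0, 1, 1], [1, 2, 1]]
b = [2, 3, 10]
Ab = [row + [c] for row, c in zip(A, b)]
print(rank(A), rank(Ab))  # coefficient rank 2, augmented rank 3: no solution
```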

Solution of a linear system


As used in linear algebra, an augmented matrix represents the coefficients and the constant terms of each equation in a set. For a set of three equations in the unknowns x, y, and z, the coefficients and constant terms give a 3 × 3 coefficient matrix and a 3 × 4 augmented matrix.

Note that the rank of the coefficient matrix, which is 3, equals the rank of the augmented matrix, so at least one solution exists; and since this rank equals the number of unknowns, there is exactly one solution.

To obtain the solution, row operations can be performed on the augmented matrix to obtain the identity matrix on the left side; the final column then gives the solution of the system, (x, y, z) = (4, 1, −2).
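A sketch of this Gauss–Jordan procedure, reducing [A | b] to [I | x], is given below. Since the article's original equations are not reproduced, the code uses a hypothetical system chosen so the answer is easy to check by hand.

```python
from fractions import Fraction

def gauss_jordan_solve(A, b):
    """Reduce the augmented matrix [A | b] to [I | x] and read off the solution x."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for col in range(n):
        # Swap a nonzero pivot into place.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Normalize the pivot row so the pivot entry is 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Clear the pivot column in all other rows.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [row[n] for row in M]  # the last column is the solution

# Hypothetical system: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27.
x = gauss_jordan_solve([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27])
print(x)  # solution (5, 3, -2), as Fractions
```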

from Grokipedia
An augmented matrix is a rectangular array in linear algebra formed by adjoining a column of constant terms from a system of linear equations to its coefficient matrix, enabling efficient solution via row reduction methods like Gaussian elimination. This structure represents the system Ax = b as the matrix [A | b], where A is the m × n coefficient matrix and b is the m × 1 vector of constants, with each row corresponding to one equation and each of the first n columns to the coefficients of the variables.

The origins of the augmented matrix concept lie in ancient China, documented in The Nine Chapters on the Mathematical Art (compiled by the 1st century BCE), where the Fangcheng Rule employed rectangular arrays on a counting board to solve systems of linear equations through a process equivalent to Gaussian elimination, predating Western developments by nearly two millennia. In the West, the method gained prominence through Carl Friedrich Gauss (1777–1855), who applied it in 1809 for least-squares problems, leading to its modern naming despite earlier European explorations in the 16th–17th centuries using substitution-based elimination. Formal matrix theory, underpinning the augmented form, was later developed by Arthur Cayley in 1858.

To form an augmented matrix, the coefficients of each variable in the equations populate the left columns, separated by a line (often dashed in notation) from the rightmost column of constants; for example, the system 3x + 2y = 1 and 2x − y = −2 yields
\begin{bmatrix} 3 & 2 & \mid & 1 \\ 2 & -1 & \mid & -2 \end{bmatrix}.
Solving proceeds by applying elementary row operations (interchanging rows, multiplying a row by a nonzero scalar, or adding a multiple of one row to another), which preserve the solution set while transforming the matrix into row echelon form (upper triangular with leading 1s) for back-substitution or reduced row echelon form (identity-like on the left) for direct variable assignment.

The resulting form determines the system's solution type: a unique solution if the coefficient part is full rank (a pivot in every column); infinitely many solutions if there are free variables (rank less than the number of variables); or no solution if the system is inconsistent (e.g., a row like [0 0 | 1]). Augmented matrices are fundamental in computational linear algebra, extending to applications in modeling and optimization problems.

Fundamentals

Definition

An augmented matrix is a matrix formed by adjoining a column vector of constants to the coefficient matrix of a system of linear equations, thereby representing the entire system in a compact matrix form. This structure facilitates the application of systematic algebraic manipulations to solve or analyze the system without explicitly retaining the variables in each equation. For a system given by Ax = b, where A is an m × n coefficient matrix and b is an m × 1 column vector of constants, the augmented matrix is denoted [A | b], resulting in an m × (n + 1) matrix. The vertical bar conventionally separates the coefficients from the constants, though it is often omitted in formal notation.

The concept of the augmented matrix emerged in the context of Gaussian elimination during the 19th- and 20th-century developments in linear algebra, building on earlier elimination methods traced back to ancient China and refined by European mathematicians, though no specific inventor is attributed to the term itself. A key property is that the solution set of the original system remains unchanged under permissible row operations applied to the augmented matrix, as these operations correspond to equivalent transformations of the equations.

Construction from Linear Equations

To construct an augmented matrix from a system of linear equations, first identify the coefficient matrix A and the constant vector b, forming the augmented matrix [A | b]. The process begins by rewriting the system so that all variables are on the left side of the equals sign and constants on the right, ensuring each equation is in the standard form a_1 x_1 + a_2 x_2 + \dots + a_n x_n = b. For each equation, the coefficients a_1, a_2, \dots, a_n form a row in the matrix, with the constant b appended as an additional entry in a final column separated by a vertical bar to denote the equals sign. If a variable is missing from an equation, insert a zero coefficient in the corresponding column to maintain alignment across rows. The number of rows equals the number of equations, and the number of coefficient columns equals the number of variables. Consider the system of two equations in two variables:
2x + 3y = 5
x − y = 1.
The augmented matrix is
\begin{bmatrix} 2 & 3 & \mid & 5 \\ 1 & -1 & \mid & 1 \end{bmatrix}.
Here, the first row captures the coefficients 2 and 3 with constant 5, and the second row uses 1 and -1 with constant 1.
This construction applies uniformly to both homogeneous systems, where all constants b_i = 0 (resulting in a zero augmented column), and non-homogeneous systems, where at least one b_i \neq 0. For instance, the homogeneous system x + y = 0, 2x − y = 0 yields
\begin{bmatrix} 1 & 1 & \mid & 0 \\ 2 & -1 & \mid & 0 \end{bmatrix},
while a non-homogeneous counterpart like x + y = 2, 2x − y = 1 produces
\begin{bmatrix} 1 & 1 & \mid & 2 \\ 2 & -1 & \mid & 1 \end{bmatrix}.
The presence of non-zero constants in the augmented column distinguishes non-homogeneous systems but does not alter the assembly steps.
Edge cases arise in systems with zero equations (an empty matrix) or zero rows, such as the trivial equation 0 = 0, which forms a row [0 | 0] in homogeneous contexts or [0 | b] otherwise (with b \neq 0 implying inconsistency, though not analyzed here). Underdetermined systems, featuring more variables than equations, result in augmented matrices with more coefficient columns than rows; for example, the single equation x + 2y + 3z = 4 constructs as [1 2 3 | 4], accommodating free variables during later analysis.
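The assembly steps above, including the zero-fill for missing variables, can be sketched as a small helper. The `({var: coeff}, constant)` input format is my own choice for the sketch, not a standard convention.

```python
def augmented_matrix(equations, variables):
    """Build the rows of [A | b] from equations given as ({var: coeff}, constant)
    pairs. Variables absent from an equation get coefficient 0."""
    return [[coeffs.get(v, 0) for v in variables] + [const]
            for coeffs, const in equations]

# The two-variable example from the text: 2x + 3y = 5 and x - y = 1.
system = [({"x": 2, "y": 3}, 5), ({"x": 1, "y": -1}, 1)]
print(augmented_matrix(system, ["x", "y"]))  # → [[2, 3, 5], [1, -1, 1]]
```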

Notation and Representation

An augmented matrix is conventionally denoted using the bracket notation [A \mid b], where A is the coefficient matrix and b is the column vector of constants from the right-hand side of the linear system Ax = b, with the vertical bar \mid serving as a visual separator between the coefficients and the constants. This notation emphasizes the structure of the original system while facilitating matrix operations like row reduction. The dimensions of an augmented matrix are always m \times (n + 1), where m represents the number of equations (corresponding to the rows) and n the number of variables (corresponding to the columns of A), with the extra column accommodating the constants in b. For instance, a system with three equations and two variables yields a 3 \times 3 augmented matrix.

In visual representation, augmented matrices are typically displayed in a tabular format, such as using LaTeX's bmatrix environment with an explicit vertical bar for separation, as in the following example for the system x + y = 27, 2x − y = 0:
\begin{bmatrix} 1 & 1 & \mid & 27 \\ 2 & -1 & \mid & 0 \end{bmatrix}
This distinguishes it from a plain coefficient matrix by appending the constant column, often aligned with the equals signs in the original equations. In plain text or handwritten notes, the bar may be represented as a simple vertical line or omitted in compact forms, though the standard printed convention includes it to avoid ambiguity.

Common conventions include ordering variables sequentially from left to right as x_1, x_2, \dots, x_n across the coefficient columns, ensuring consistent alignment with the system's variable indices. Leading entries in rows may include zeros if absent variables are implied (e.g., a missing x_2 term is entered as 0 in that column), and parameters or vectors in the constants are treated as fixed entries in the final column without special notation beyond their scalar or vector form. Some texts use alternative separators like commas or brackets instead of the vertical bar, particularly in computational contexts where the full matrix is parsed programmatically, but the bar remains the predominant symbolic choice in pedagogical materials.

Row Reduction Techniques

Elementary Row Operations

Elementary row operations are the fundamental manipulations performed on an augmented matrix [A \mid b], where A is the coefficient matrix and b is the constant vector, to simplify the representation of a linear system without altering its solution set. There are three types of these operations: interchanging two rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another row. The first operation, interchanging two rows, reorders the equations in the system but leaves the solution set unchanged, as the order of the equations does not affect their collective solutions. The second operation multiplies all entries in a single row by a nonzero constant k, which corresponds to multiplying the corresponding equation by k, preserving the equality and thus the solutions. The third operation adds a multiple k of one row to another row, equivalent to adding k times one equation to another, which generates a new equation satisfied by the same solutions without introducing inconsistencies.

These operations are reversible: swapping rows again restores the original, multiplying by 1/k undoes scaling, and adding −k times the modified row back reverses the addition. They also preserve the row space of the matrix, as each type produces linear combinations that span the same subspace.
Consider the augmented matrix for the system x + 2y = 3 and 4x + 5y = 6:
\begin{bmatrix} 1 & 2 & \mid & 3 \\ 4 & 5 & \mid & 6 \end{bmatrix}
Swapping the rows yields:
\begin{bmatrix} 4 & 5 & \mid & 6 \\ 1 & 2 & \mid & 3 \end{bmatrix}
Multiplying the first row by 1/2 gives:
\begin{bmatrix} 2 & 2.5 & \mid & 3 \\ 1 & 2 & \mid & 3 \end{bmatrix}
To demonstrate the third operation, first scale the first row (after the swap) by 1/4 to get a leading 1:
\begin{bmatrix} 1 & 1.25 & \mid & 1.5 \\ 1 & 2 & \mid & 3 \end{bmatrix}
Adding −1 times the first row to the second row results in:
\begin{bmatrix} 1 & 1.25 & \mid & 1.5 \\ 0 & 0.75 & \mid & 1.5 \end{bmatrix}
Each transformation maintains the original solution set.
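The three operations, and the swap → scale → add sequence worked above, can be expressed directly in code:

```python
def swap(M, i, j):
    """Interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale(M, i, k):
    """Multiply row i by the nonzero scalar k."""
    M[i] = [k * v for v in M[i]]

def add_multiple(M, i, j, k):
    """Add k times row j to row i."""
    M[i] = [a + k * b for a, b in zip(M[i], M[j])]

# Replay the sequence from the text on [A | b] for x + 2y = 3, 4x + 5y = 6.
M = [[1, 2, 3], [4, 5, 6]]
swap(M, 0, 1)               # rows become [4, 5, 6], [1, 2, 3]
scale(M, 0, 1 / 4)          # leading 1:  [1.0, 1.25, 1.5], [1, 2, 3]
add_multiple(M, 1, 0, -1)   # eliminate:  [1.0, 1.25, 1.5], [0.0, 0.75, 1.5]
print(M)  # → [[1.0, 1.25, 1.5], [0.0, 0.75, 1.5]]
```

Each helper mutates the matrix in place, mirroring how the operations are applied one after another on paper.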

Gaussian Elimination Process

The Gaussian elimination process is a systematic algorithm that employs elementary row operations to transform the augmented matrix of a system of linear equations into row echelon form, enabling efficient determination of solutions through subsequent back-substitution. The procedure builds on the three fundamental elementary row operations (swapping rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another) to progressively eliminate variables below pivot positions. The algorithm operates column by column, starting from the leftmost column and proceeding to the right, for columns k = 1 to \min(m, n), where m is the number of rows (equations) and n is the number of columns excluding the augmented part (variables). In each column k:
  1. Identify a pivot: locate the entry with the largest absolute value in column k from row k to row m (partial pivoting); if all entries are zero, skip to the next column.
  2. Swap rows if necessary to place this pivot entry in position (k, k).
  3. Eliminate below the pivot: for each row j > k, subtract an appropriate multiple of row k from row j to set the entry in column k to zero.
This forward elimination phase continues until the matrix achieves row echelon form, where all entries below each pivot are zero and pivots are the first nonzero entries in their rows. Partial pivoting enhances numerical stability by selecting the largest possible pivot, which reduces the growth of rounding errors during floating-point computations in practical implementations. Consider the following example system:
\begin{cases} 3x_1 - 2x_2 + 2x_3 = 9 \\ x_1 - 2x_2 + x_3 = 5 \\ 2x_1 - x_2 - 2x_3 = -1 \end{cases}
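A minimal sketch of the algorithm as described (forward elimination with partial pivoting, then back-substitution), applied to the example system above:

```python
def gaussian_eliminate(A, b):
    """Solve Ax = b by forward elimination with partial pivoting on [A | b],
    then back-substitution on the resulting row echelon form."""
    n = len(A)
    M = [[float(v) for v in row] + [float(c)] for row, c in zip(A, b)]
    for k in range(n):
        # Partial pivoting: bring the largest-magnitude entry in column k
        # (from row k down) into the pivot position.
        pivot = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[pivot] = M[pivot], M[k]
        # Eliminate the entries below the pivot.
        for j in range(k + 1, n):
            f = M[j][k] / M[k][k]
            M[j] = [a - f * c for a, c in zip(M[j], M[k])]
    # Back-substitution, from the last row upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# The example system from the text.
x = gaussian_eliminate([[3, -2, 2], [1, -2, 1], [2, -1, -2]], [9, 5, -1])
print(x)  # ≈ [1.0, -1.0, 2.0]
```

Checking against the original equations: 3(1) − 2(−1) + 2(2) = 9, 1 − 2(−1) + 2 = 5, and 2(1) − (−1) − 2(2) = −1, so (x₁, x₂, x₃) = (1, −1, 2).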