Quadratic form
from Wikipedia

In mathematics, a quadratic form is a polynomial with terms all of degree two ("form" is another name for a homogeneous polynomial). For example,

4x² + 2xy − 3y²

is a quadratic form in the variables x and y. The coefficients usually belong to a fixed field K, such as the real or complex numbers, and one speaks of a quadratic form over K. Over the reals, a quadratic form is said to be definite if it takes the value zero only when all its variables are simultaneously zero; otherwise it is isotropic.

Quadratic forms occupy a central place in various branches of mathematics, including number theory, linear algebra, group theory (orthogonal groups), differential geometry (the Riemannian metric, the second fundamental form), differential topology (intersection forms of manifolds, especially four-manifolds), Lie theory (the Killing form), and statistics (where the exponent of a zero-mean multivariate normal distribution has the quadratic form −½xᵀΣ⁻¹x).

Quadratic forms are not to be confused with quadratic equations, which have only one variable and may include terms of degree less than two. A quadratic form is a specific instance of the more general concept of forms.

Introduction

Quadratic forms are homogeneous quadratic polynomials in n variables. In the cases of one, two, and three variables they are called unary, binary, and ternary and have the following explicit form:

q(x) = ax²
q(x, y) = ax² + bxy + cy²
q(x, y, z) = ax² + by² + cz² + dxy + exz + fyz

where a, ..., f are the coefficients.[1]

The theory of quadratic forms and methods used in their study depend in a large measure on the nature of the coefficients, which may be real or complex numbers, rational numbers, or integers. In linear algebra, analytic geometry, and in the majority of applications of quadratic forms, the coefficients are real or complex numbers. In the algebraic theory of quadratic forms, the coefficients are elements of a certain field. In the arithmetic theory of quadratic forms, the coefficients belong to a fixed commutative ring, frequently the integers Z or the p-adic integers Zp.[2] Binary quadratic forms have been extensively studied in number theory, in particular, in the theory of quadratic fields, continued fractions, and modular forms. The theory of integral quadratic forms in n variables has important applications to algebraic topology.

Using homogeneous coordinates, a non-zero quadratic form in n variables defines an (n − 2)-dimensional quadric in the (n − 1)-dimensional projective space. This is a basic construction in projective geometry. In this way one may visualize 3-dimensional real quadratic forms as conic sections. An example is given by the three-dimensional Euclidean space and the square of the Euclidean norm expressing the distance between a point with coordinates (x, y, z) and the origin:

q(x, y, z) = x² + y² + z²

A closely related notion with geometric overtones is a quadratic space, which is a pair (V, q), with V a vector space over a field K, and q : V → K a quadratic form on V. See § Definitions below for the definition of a quadratic form on a vector space.

History

The study of quadratic forms, in particular the question of whether a given integer can be the value of a quadratic form over the integers, dates back many centuries. One such case is Fermat's theorem on sums of two squares, which determines when an integer may be expressed in the form x² + y², where x, y are integers. This problem is related to the problem of finding Pythagorean triples, which appeared in the second millennium BCE.[3]

In 628, the Indian mathematician Brahmagupta wrote Brāhmasphuṭasiddhānta, which includes, among many other things, a study of equations of the form x² − ny² = c. He considered what is now called Pell's equation, x² − ny² = 1, and found a method for its solution.[4] In Europe this problem was studied by Brouncker, Euler and Lagrange.

In 1801 Gauss published Disquisitiones Arithmeticae, a major portion of which was devoted to a complete theory of binary quadratic forms over the integers. Since then, the concept has been generalized, and the connections with quadratic number fields, the modular group, and other areas of mathematics have been further elucidated.

Associated symmetric matrix

Any n × n matrix A determines a quadratic form qA in n variables by

qA(x₁, ..., xₙ) = ∑ᵢ,ⱼ aᵢⱼxᵢxⱼ = xᵀAx,

where A = (aᵢⱼ) and x = (x₁, ..., xₙ)ᵀ.

Example

Consider the case of quadratic forms in three variables x, y, z. The matrix A has the form

A = ⎡a b c⎤
    ⎢d e f⎥
    ⎣g h i⎦

The above formula gives

qA(x, y, z) = ax² + ey² + iz² + (b + d)xy + (c + g)xz + (f + h)yz

So, two different matrices define the same quadratic form if and only if they have the same elements on the diagonal and the same values for the sums b + d, c + g and f + h. In particular, the quadratic form qA is defined by a unique symmetric matrix

⎡   a     (b+d)/2  (c+g)/2⎤
⎢(b+d)/2     e     (f+h)/2⎥
⎣(c+g)/2  (f+h)/2     i   ⎦

This generalizes to any number of variables as follows.

General case

Given a quadratic form qA over the real numbers, defined by the matrix A = (aᵢⱼ), the matrix

B = (A + Aᵀ)/2

is symmetric, defines the same quadratic form as A, and is the unique symmetric matrix that defines qA.

So, over the real numbers (and, more generally, over a field of characteristic different from two), there is a one-to-one correspondence between quadratic forms and symmetric matrices that determine them.
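The symmetrization step above can be checked numerically; a minimal pure-Python sketch (the helper names quad_form and symmetrize are illustrative, not from the text):

```python
from fractions import Fraction

def quad_form(A, x):
    """Evaluate q_A(x) = sum over i, j of a_ij * x_i * x_j."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def symmetrize(A):
    """Return (A + A^T)/2, the unique symmetric matrix with the same form."""
    n = len(A)
    return [[Fraction(A[i][j] + A[j][i], 2) for j in range(n)] for i in range(n)]

A = [[1, 3, 0], [1, 2, 5], [2, 1, 4]]   # an arbitrary non-symmetric matrix
S = symmetrize(A)
x = [2, -1, 3]
assert quad_form(A, x) == quad_form(S, x)                      # same form
assert all(S[i][j] == S[j][i] for i in range(3) for j in range(3))
```

The same check works for any matrix, since the skew-symmetric part of A contributes nothing to xᵀAx.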

Real quadratic forms

A fundamental problem is the classification of real quadratic forms under a linear change of variables.

Jacobi proved that, for every real quadratic form, there is an orthogonal diagonalization; that is, an orthogonal change of variables that puts the quadratic form in a "diagonal form"

λ₁x₁² + λ₂x₂² + ⋯ + λₙxₙ²,

where the associated symmetric matrix is diagonal. Moreover, the coefficients λ₁, λ₂, ..., λₙ are determined uniquely up to a permutation.[5]

If the change of variables is given by an invertible matrix that is not necessarily orthogonal, one can suppose that all coefficients λᵢ are 0, 1, or −1. Sylvester's law of inertia states that the numbers of 0s, 1s, and −1s are invariants of the quadratic form, in the sense that any other diagonalization will contain the same number of each. The signature of the quadratic form is the triple (n₀, n₊, n₋), where these components count the number of 0s, the number of 1s, and the number of −1s, respectively. Sylvester's law of inertia shows that this is a well-defined quantity attached to the quadratic form.

The case when all λᵢ have the same sign is especially important: in this case the quadratic form is called positive definite (all 1) or negative definite (all −1). If none of the terms are 0, then the form is called nondegenerate; this includes positive definite, negative definite, and isotropic quadratic forms (a mix of 1 and −1); equivalently, a nondegenerate quadratic form is one whose associated symmetric form is a nondegenerate bilinear form. A real vector space with an indefinite nondegenerate quadratic form of index (p, q) (denoting p 1s and q −1s) is often denoted as R^{p,q}, particularly in the physical theory of spacetime.

The discriminant of a quadratic form, concretely the class of the determinant of a representing matrix in K/(K^×)² (that is, up to non-zero squares), can also be defined; for a real quadratic form it is a cruder invariant than the signature, taking only the values "positive, zero, or negative". Zero corresponds to a degenerate form, while for a non-degenerate form it is the sign of (−1)^{n₋}, i.e. the parity of the number of negative coefficients.

These results are reformulated in a different way below.

Let q be a quadratic form defined on an n-dimensional real vector space. Let A be the matrix of the quadratic form q in a given basis. This means that A is a symmetric n × n matrix such that

q(v) = xᵀAx,

where x is the column vector of coordinates of v in the chosen basis. Under a change of basis, the column x is multiplied on the left by an n × n invertible matrix S, and the symmetric square matrix A is transformed into another symmetric square matrix B of the same size according to the formula

B = SᵀAS.
Any symmetric matrix A can be transformed into a diagonal matrix by a suitable choice of an orthogonal matrix S, and the diagonal entries of B are uniquely determined – this is Jacobi's theorem. If S is allowed to be any invertible matrix then B can be made to have only 0, 1, and −1 on the diagonal, and the number of the entries of each type (n₀ for 0, n₊ for 1, and n₋ for −1) depends only on A. This is one of the formulations of Sylvester's law of inertia and the numbers n₊ and n₋ are called the positive and negative indices of inertia. Although their definition involved a choice of basis and consideration of the corresponding real symmetric matrix A, Sylvester's law of inertia means that they are invariants of the quadratic form q.
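Sylvester's law of inertia can be illustrated by diagonalizing a symmetric matrix by congruence (symmetric Gaussian elimination) and counting diagonal signs; a pure-Python sketch with exact rational arithmetic (the helper name signature is mine):

```python
from fractions import Fraction

def signature(A):
    """Congruence-diagonalize a symmetric rational matrix and count the
    diagonal signs; returns (n0, n_plus, n_minus)."""
    n = len(A)
    M = [[Fraction(v) for v in row] for row in A]
    for k in range(n):
        if M[k][k] == 0:
            for j in range(k + 1, n):
                if M[j][j] != 0:          # swap variables j and k
                    M[k], M[j] = M[j], M[k]
                    for row in M:
                        row[k], row[j] = row[j], row[k]
                    break
            else:
                for j in range(k + 1, n):
                    if M[k][j] != 0:      # x_k -> x_k + x_j makes pivot nonzero
                        for c in range(n):
                            M[k][c] += M[j][c]
                        for r in range(n):
                            M[r][k] += M[r][j]
                        break
        if M[k][k] == 0:
            continue                       # the whole row/column is zero
        for i in range(k + 1, n):          # clear row/column below the pivot
            m = M[i][k] / M[k][k]
            for c in range(n):
                M[i][c] -= m * M[k][c]
            for r in range(n):
                M[r][i] -= m * M[r][k]
    d = [M[i][i] for i in range(n)]
    return (sum(v == 0 for v in d), sum(v > 0 for v in d), sum(v < 0 for v in d))

# q(x, y) = 2xy has symmetric matrix [[0, 1], [1, 0]]: signature (0, 1, 1)
assert signature([[0, 1], [1, 0]]) == (0, 1, 1)
# q(x, y, z) = x^2 + y^2 + z^2 is positive definite: signature (0, 3, 0)
assert signature([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == (0, 3, 0)
```

Each elementary row operation is paired with the matching column operation, so every step is a congruence B = SᵀAS and the counted signs are exactly the invariants of the law of inertia.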

The quadratic form q is positive definite if q(v) > 0 (similarly, negative definite if q(v) < 0) for every nonzero vector v.[6] When q(v) assumes both positive and negative values, q is an isotropic quadratic form. The theorems of Jacobi and Sylvester show that any positive definite quadratic form in n variables can be brought to the sum of n squares by a suitable invertible linear transformation: geometrically, there is only one positive definite real quadratic form of every dimension. Its isometry group is the compact orthogonal group O(n). This stands in contrast with the case of isotropic forms, when the corresponding group, the indefinite orthogonal group O(p, q), is non-compact. Further, the isometry groups of Q and −Q are the same (O(p, q) ≈ O(q, p)), but the associated Clifford algebras (and hence pin groups) are different.
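Positive definiteness can also be tested directly on the symmetric matrix via Sylvester's criterion (all leading principal minors positive), a standard linear-algebra fact not stated above; a pure-Python sketch:

```python
def det(M):
    """Determinant by cofactor expansion; fine for the small matrices used here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_positive_definite(A):
    """Sylvester's criterion: every leading principal minor is positive."""
    n = len(A)
    return all(det([row[:k] for row in A[:k]]) > 0 for k in range(1, n + 1))

assert is_positive_definite([[2, -1], [-1, 2]])       # minors: 2, 3
assert not is_positive_definite([[1, 2], [2, 1]])     # minors: 1, -3 (isotropic)
assert not is_positive_definite([[0, 1], [1, 0]])     # first minor is 0
```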

Definitions

A quadratic form over a field K is a map q : V → K from a finite-dimensional K-vector space to K such that q(av) = a²q(v) for all a ∈ K, v ∈ V and the function q(u + v) − q(u) − q(v) is a bilinear form.

More concretely, an n-ary quadratic form over a field K is a homogeneous polynomial of degree 2 in n variables with coefficients in K:

q(x₁, ..., xₙ) = ∑ᵢ,ⱼ aᵢⱼxᵢxⱼ,  aᵢⱼ ∈ K.
This formula may be rewritten using matrices: let x be the column vector with components x₁, ..., xₙ and A = (aᵢⱼ) be the n × n matrix over K whose entries are the coefficients of q. Then

q(x) = xᵀAx.
A vector v = (x1, ..., xn) is a null vector if q(v) = 0.

Two n-ary quadratic forms φ and ψ over K are equivalent if there exists a nonsingular linear transformation C ∈ GL(n, K) such that

ψ(x) = φ(Cx).
Let the characteristic of K be different from 2.[7] The coefficient matrix A of q may be replaced by the symmetric matrix (A + Aᵀ)/2 with the same quadratic form, so it may be assumed from the outset that A is symmetric. Moreover, a symmetric matrix A is uniquely determined by the corresponding quadratic form. Under an equivalence C, the symmetric matrix A of φ and the symmetric matrix B of ψ are related as follows:

B = CᵀAC.
The associated bilinear form of a quadratic form q is defined by

bq(x, y) = ½(q(x + y) − q(x) − q(y)) = xᵀAy = yᵀAx.
Thus, bq is a symmetric bilinear form over K with matrix A. Conversely, any symmetric bilinear form b defines a quadratic form q(x) = b(x, x), and these two processes are the inverses of each other. As a consequence, over a field of characteristic not equal to 2, the theories of symmetric bilinear forms and of quadratic forms in n variables are essentially the same.
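The polarization formula bq(x, y) = ½(q(x + y) − q(x) − q(y)) can be exercised on a concrete form; a small sketch (the example form is chosen for illustration):

```python
from fractions import Fraction

def q(v):
    """Example binary form: q(x, y) = x^2 + 3xy - 2y^2."""
    x, y = v
    return x * x + 3 * x * y - 2 * y * y

def b(u, v):
    """Associated symmetric bilinear form via polarization (char != 2)."""
    s = [ui + vi for ui, vi in zip(u, v)]
    return Fraction(q(s) - q(u) - q(v), 2)

u, v = (1, 2), (3, -1)
assert b(u, v) == b(v, u)                        # symmetric
assert b([2 * t for t in u], v) == 2 * b(u, v)   # linear in the first slot
assert b(u, u) == q(u)                           # recovers q on the diagonal
```

Fractions are used because the polarized values can be half-integers even when q has integer coefficients, which is exactly the "twos in / twos out" issue discussed later for integral forms.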

Quadratic space

Given an n-dimensional vector space V over a field K, a quadratic form on V is a function Q : V → K that has the following property: for some basis, the function q that maps the coordinates of v ∈ V to Q(v) is a quadratic form. In particular, if V = Kⁿ with its standard basis, one has

Q(v) = q(x₁, ..., xₙ) for every v = (x₁, ..., xₙ) ∈ Kⁿ.
The change of basis formulas show that the property of being a quadratic form does not depend on the choice of a specific basis in V, although the quadratic form q depends on the choice of the basis.

A finite-dimensional vector space with a quadratic form is called a quadratic space.

The map Q is a homogeneous function of degree 2, which means that it has the property that, for all a in K and v in V:

Q(av) = a²Q(v).
When the characteristic of K is not 2, the bilinear map B : V × V → K is defined by

B(v, w) = ½(Q(v + w) − Q(v) − Q(w)).

This bilinear form B is symmetric. That is, B(x, y) = B(y, x) for all x, y in V, and it determines Q: Q(x) = B(x, x) for all x in V.

When the characteristic of K is 2, so that 2 is not a unit, it is still possible to use a quadratic form to define a symmetric bilinear form B′(x, y) = Q(x + y) − Q(x) − Q(y). However, Q(x) can no longer be recovered from this B′ in the same way, since B′(x, x) = 0 for all x (and B′ is thus alternating).[8] Alternatively, there always exists a bilinear form B″ (not in general either unique or symmetric) such that B″(x, x) = Q(x).
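The characteristic-2 phenomenon can be seen concretely over GF(2); a sketch (the form Q(x, y) = x² + xy + y² is chosen for illustration):

```python
def Q(v):
    """Q(x, y) = x^2 + xy + y^2 over GF(2)."""
    x, y = v
    return (x * x + x * y + y * y) % 2

def Bp(u, v):
    """Polarized form B'(u, v) = Q(u + v) - Q(u) - Q(v), computed mod 2."""
    s = [(a + b) % 2 for a, b in zip(u, v)]
    return (Q(s) - Q(u) - Q(v)) % 2

vecs = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert all(Bp(v, v) == 0 for v in vecs)   # B'(x, x) = 0: B' is alternating
assert any(Q(v) != 0 for v in vecs)       # yet Q is nonzero, so Q is not
                                          # recoverable from B' on the diagonal
```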

The pair (V, Q) consisting of a finite-dimensional vector space V over K and a quadratic map Q from V to K is called a quadratic space, and B as defined here is the associated symmetric bilinear form of Q. The notion of a quadratic space is a coordinate-free version of the notion of quadratic form. Sometimes, Q is also called a quadratic form.

Two n-dimensional quadratic spaces (V, Q) and (V′, Q′) are isometric if there exists an invertible linear transformation T : V → V′ (isometry) such that

Q(v) = Q′(Tv) for all v ∈ V.
The isometry classes of n-dimensional quadratic spaces over K correspond to the equivalence classes of n-ary quadratic forms over K.

Generalization

Let R be a commutative ring, M be an R-module, and b : M × M → R be an R-bilinear form.[9] A mapping q : M → R : v ↦ b(v, v) is the associated quadratic form of b, and B : M × M → R : (u, v) ↦ q(u + v) − q(u) − q(v) is the polar form of q.

A quadratic form q : M → R may be characterized in the following equivalent ways:

  • There exists an R-bilinear form b : M × M → R such that q(v) = b(v, v), i.e. q is the associated quadratic form of b.
  • q(av) = a²q(v) for all a ∈ R and v ∈ M, and the polar form of q is R-bilinear.

Related concepts

Two elements v and w of V are called orthogonal if B(v, w) = 0. The kernel of a bilinear form B consists of the elements that are orthogonal to every element of V. Q is non-singular if the kernel of its associated bilinear form is {0}. If there exists a non-zero v in V such that Q(v) = 0, the quadratic form Q is isotropic, otherwise it is definite. This terminology also applies to vectors and subspaces of a quadratic space. If the restriction of Q to a subspace U of V is identically zero, then U is totally singular.

The orthogonal group of a non-singular quadratic form Q is the group of the linear automorphisms of V that preserve Q: that is, the group of isometries of (V, Q) into itself.

If a quadratic space (A, Q) has a product so that A is an algebra over a field, and satisfies Q(xy) = Q(x)Q(y) for all x, y, then it is a composition algebra.

Equivalence of forms

Every quadratic form q in n variables over a field of characteristic not equal to 2 is equivalent to a diagonal form

q(x) = a₁x₁² + a₂x₂² + ⋯ + aₙxₙ².

Such a diagonal form is often denoted by ⟨a₁, ..., aₙ⟩. Classification of all quadratic forms up to equivalence can thus be reduced to the case of diagonal forms.

Geometric meaning

Using Cartesian coordinates in three dimensions, let x = (x, y, z)ᵀ, and let A be a symmetric 3-by-3 matrix. Then the geometric nature of the solution set of the equation xᵀAx + bᵀx = 1 depends on the eigenvalues of the matrix A.

If all eigenvalues of A are non-zero, then the solution set is an ellipsoid or a hyperboloid.[citation needed] If all the eigenvalues are positive, then it is an ellipsoid; if all the eigenvalues are negative, then it is an imaginary ellipsoid (we get the equation of an ellipsoid but with imaginary radii); if some eigenvalues are positive and some are negative, then it is a hyperboloid; if the eigenvalues are all equal and positive, then it is a sphere (special case of an ellipsoid with equal semi-axes corresponding to the presence of equal eigenvalues).

If there exist one or more eigenvalues λi = 0, then the shape depends on the corresponding bi. If the corresponding bi ≠ 0, then the solution set is a paraboloid (either elliptic or hyperbolic); if the corresponding bi = 0, then the dimension i degenerates and does not come into play, and the geometric meaning will be determined by other eigenvalues and other components of b. When the solution set is a paraboloid, whether it is elliptic or hyperbolic is determined by whether all other non-zero eigenvalues are of the same sign: if they are, then it is elliptic; otherwise, it is hyperbolic.
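The full-rank part of this classification can be sketched as a small helper (the function name quadric_type is illustrative, and it assumes all eigenvalues are nonzero):

```python
def quadric_type(eigenvalues):
    """Classify the surface x^T A x = 1 from the (nonzero) eigenvalues of A."""
    pos = sum(l > 0 for l in eigenvalues)
    neg = sum(l < 0 for l in eigenvalues)
    assert pos + neg == len(eigenvalues), "expects nonzero eigenvalues"
    if neg == 0:
        # all equal and positive is the sphere, a special case of the ellipsoid
        return "sphere" if len(set(eigenvalues)) == 1 else "ellipsoid"
    if pos == 0:
        return "imaginary ellipsoid"
    return "hyperboloid"

assert quadric_type([2, 2, 2]) == "sphere"
assert quadric_type([1, 2, 3]) == "ellipsoid"
assert quadric_type([1, -1, 2]) == "hyperboloid"
assert quadric_type([-1, -2, -3]) == "imaginary ellipsoid"
```

The degenerate case (some λᵢ = 0) would additionally need the components of b, as described above, and is left out of this sketch.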

Integral quadratic forms

Quadratic forms over the ring of integers are called integral quadratic forms, whereas the corresponding modules are quadratic lattices (sometimes, simply lattices). They play an important role in number theory and topology.

An integral quadratic form has integer coefficients, such as x2 + xy + y2; equivalently, given a lattice Λ in a vector space V (over a field with characteristic 0, such as Q or R), a quadratic form Q is integral with respect to Λ if and only if it is integer-valued on Λ, meaning Q(x, y) ∈ Z if x, y ∈ Λ.
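The integrality of x² + xy + y² on the lattice Z², despite its half-integer matrix entries, can be verified directly; a pure-Python sketch:

```python
from fractions import Fraction

def Q(x, y):
    """Q(x, y) = x^2 + xy + y^2, evaluated through its symmetric matrix,
    which has half-integer off-diagonal entries: [[1, 1/2], [1/2, 1]]."""
    A = [[Fraction(1), Fraction(1, 2)], [Fraction(1, 2), Fraction(1)]]
    v = [x, y]
    return sum(A[i][j] * v[i] * v[j] for i in range(2) for j in range(2))

vals = [Q(x, y) for x in range(-5, 6) for y in range(-5, 6)]
assert all(v.denominator == 1 for v in vals)   # integer-valued on the lattice
```

This is the "twos out" situation described in the next subsection: the polynomial is integral even though its matrix is not.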

This is the current use of the term; in the past it was sometimes used differently, as detailed below.

Historical use

Historically there was some confusion and controversy over whether the notion of integral quadratic form should mean:

  • "twos in": the quadratic form associated to a symmetric matrix with integer coefficients
  • "twos out": a polynomial with integer coefficients (so the associated symmetric matrix may have half-integer coefficients off the diagonal)

This debate was due to the confusion of quadratic forms (represented by polynomials) and symmetric bilinear forms (represented by matrices), and "twos out" is now the accepted convention; "twos in" is instead the theory of integral symmetric bilinear forms (integral symmetric matrices).

In "twos in", binary quadratic forms are of the form ax² + 2bxy + cy², represented by the symmetric matrix

⎡a b⎤
⎣b c⎦

This is the convention Gauss uses in Disquisitiones Arithmeticae.

In "twos out", binary quadratic forms are of the form ax² + bxy + cy², represented by the symmetric matrix

⎡ a   b/2⎤
⎣b/2   c ⎦

Several points of view mean that twos out has been adopted as the standard convention. Those include:

  • better understanding of the 2-adic theory of quadratic forms, the 'local' source of the difficulty;
  • the lattice point of view, which was generally adopted by the experts in the arithmetic of quadratic forms during the 1950s;
  • the actual needs for integral quadratic form theory in topology for intersection theory;
  • the Lie group and algebraic group aspects.

Universal quadratic forms

An integral quadratic form whose image consists of all the positive integers is sometimes called universal. Lagrange's four-square theorem shows that w² + x² + y² + z² is universal. Ramanujan generalized this to aw² + bx² + cy² + dz² and found 54 multisets {a, b, c, d} that can each generate all positive integers, namely:

  • {1, 1, 1, d}, 1 ≤ d ≤ 7
  • {1, 1, 2, d}, 2 ≤ d ≤ 14
  • {1, 1, 3, d}, 3 ≤ d ≤ 6
  • {1, 2, 2, d}, 2 ≤ d ≤ 7
  • {1, 2, 3, d}, 3 ≤ d ≤ 10
  • {1, 2, 4, d}, 4 ≤ d ≤ 14
  • {1, 2, 5, d}, 6 ≤ d ≤ 10

There are also forms whose image consists of all but one of the positive integers. For example, {1, 2, 5, 5} has 15 as the exception. Recently, the 15 and 290 theorems have completely characterized universal integral quadratic forms: if all coefficients are integers, then it represents all positive integers if and only if it represents all integers up through 290; if it has an integral matrix, it represents all positive integers if and only if it represents all integers up through 15.
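These representability claims can be checked by brute force over a finite range; a sketch (the helper name represented is mine, and the search bound is modest):

```python
def represented(coeffs, bound):
    """Return the set of integers in [1, bound] of the form
    a*w^2 + b*x^2 + c*y^2 + d*z^2 with nonnegative integer w, x, y, z."""
    a, b, c, d = coeffs
    hit = set()
    r = int(bound ** 0.5) + 1
    for w in range(r + 1):
        for x in range(r + 1):
            for y in range(r + 1):
                for z in range(r + 1):
                    v = a * w * w + b * x * x + c * y * y + d * z * z
                    if 0 < v <= bound:
                        hit.add(v)
    return hit

# Lagrange: {1, 1, 1, 1} represents every positive integer (checked up to 200)
assert represented((1, 1, 1, 1), 200) == set(range(1, 201))
# {1, 2, 5, 5} represents everything up to 200 except 15
assert represented((1, 2, 5, 5), 200) == set(range(1, 201)) - {15}
```

A finite check like this cannot prove universality by itself, but by the 15 and 290 theorems quoted above, checking a bounded initial segment is in fact enough for integer-coefficient forms.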

from Grokipedia
A quadratic form is a homogeneous polynomial of degree two in a finite number of variables, generalizing the notion of a quadratic equation to multiple variables and expressible as q(x) = xᵀAx, where A is a symmetric matrix and x is a column vector. These forms arise naturally in diverse mathematical contexts, including linear algebra, where they are associated with symmetric matrices, and number theory, where binary quadratic forms like ax² + bxy + cy² are used to study Diophantine equations and class numbers. Over the real numbers, quadratic forms can be diagonalized by orthogonal transformations, and Sylvester's law of inertia classifies them by their signature (the numbers of positive, negative, and zero eigenvalues), determining whether they are positive definite, indefinite, or semidefinite. Historically, quadratic forms have roots in ancient mathematics, with developments by Babylonian mathematicians around 1800 BCE, Brahmagupta in 628 CE, and Gauss in 1801. Applications include defining conic sections and quadrics in geometry, principal component analysis in statistics via covariance matrices, and modeling spacetime in physics, such as the (3,1) signature of Minkowski spacetime.

Fundamentals

Definition and basic properties

A quadratic form over a field F (of characteristic not equal to 2) is a homogeneous polynomial of degree two in n variables x₁, ..., xₙ with coefficients in F. It takes the general form

Q(x) = ∑_{1 ≤ i ≤ j ≤ n} aᵢⱼxᵢxⱼ,

which defines a map Q : Fⁿ → F that captures the quadratic nature through its terms.

The defining property of quadratic homogeneity distinguishes quadratic forms from general homogeneous polynomials, which may have terms of arbitrary equal degree greater than two. Specifically, for any scalar λ ∈ F and vector x ∈ Fⁿ,

Q(λx) = λ²Q(x).

This scaling behavior ensures all monomials contribute equally to the degree-two structure, enabling algebraic manipulations not available to higher-degree forms. A key immediate property arising from this homogeneity is the parallelogram-type relation

Q(x + y) + Q(x − y) = 2Q(x) + 2Q(y),

which holds for all x, y ∈ Fⁿ and allows the quadratic form to be reconstructed from its associated bilinear form via polarization.

Basic examples illustrate these properties. Consider the binary quadratic form Q(x, y) = x² + 2xy + y² over the real numbers, which simplifies to (x + y)² and satisfies the homogeneity Q(λx, λy) = λ²(x + y)² as well as the parallelogram-type relation. This form demonstrates how cross terms like 2xy arise naturally in the expansion while preserving the overall quadratic structure.
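Both identities can be checked numerically for the example form Q(x, y) = (x + y)²; a minimal sketch:

```python
def Q(v):
    """Q(x, y) = x^2 + 2xy + y^2 = (x + y)^2."""
    x, y = v
    return (x + y) ** 2

u, w, lam = (3, -1), (2, 5), 7
scaled = tuple(lam * t for t in u)
assert Q(scaled) == lam ** 2 * Q(u)                 # homogeneity Q(lam*x) = lam^2 Q(x)
s = tuple(a + b for a, b in zip(u, w))
d = tuple(a - b for a, b in zip(u, w))
assert Q(s) + Q(d) == 2 * Q(u) + 2 * Q(w)           # parallelogram-type relation
```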

Associated bilinear form

Every quadratic form Q on a vector space V over a field F of characteristic not 2, such as the real or complex numbers, induces an associated symmetric bilinear form B : V × V → F. This association is given by the polarization identity

B(x, y) = (Q(x + y) − Q(x − y))/4,

which expresses the bilinear form directly in terms of the quadratic form and establishes a bridge between quadratic and bilinear structures.

The form B is bilinear by construction, as the right-hand side is a combination of quadratic terms that linearize under the vector-space operations. Moreover, B is symmetric, satisfying B(x, y) = B(y, x) for all x, y ∈ V, since Q(x + y) − Q(x − y) = Q(y + x) − Q(y − x). Conversely, the quadratic form is recovered from the bilinear form via Q(x) = B(x, x), confirming the direct correspondence between the two.

This association is unique: for any quadratic form Q, there exists exactly one symmetric bilinear form B such that Q(x) = B(x, x) for all x ∈ V, and every symmetric bilinear form arises this way from its associated quadratic form. The uniqueness follows from the polarization identity, which fully determines B from Q, and the symmetry ensures that no other symmetric bilinear form can produce the same quadratic values on the diagonal.

For a concrete illustration, consider the quadratic form Q(x, y) = x² + 2xy + y² on R². Applying the polarization identity yields the associated symmetric bilinear form

B((x₁, y₁), (x₂, y₂)) = x₁x₂ + x₁y₂ + y₁x₂ + y₁y₂,

which is symmetric and satisfies Q(x, y) = B((x, y), (x, y)). This example demonstrates how cross terms in Q distribute evenly in B, reflecting the underlying symmetry. This coordinate-free construction of B from Q facilitates later representations in terms of matrices, though the bilinear form itself remains independent of any basis choice.

Matrix representation

In finite-dimensional vector spaces over the reals or complex numbers, quadratic forms are concretely represented using symmetric matrices. Consider a quadratic form Q : Rⁿ → R defined on column vectors x = (x₁, ..., xₙ)ᵀ. It can be expressed as Q(x) = xᵀAx, where A = (aᵢⱼ) is an n × n symmetric matrix with real entries satisfying A = Aᵀ, meaning aᵢⱼ = aⱼᵢ for all i, j. The off-diagonal entries capture the cross terms: the coefficient of xᵢxⱼ for i ≠ j is 2aᵢⱼ.

Every quadratic form on Rⁿ admits a unique such symmetric representation. To see this, suppose a quadratic form is initially given by Q(x) = xᵀBx for some (possibly nonsymmetric) matrix B. Then Q(x) = xᵀ((B + Bᵀ)/2)x, since the skew-symmetric part C = (B − Bᵀ)/2 contributes nothing: (xᵀCx)ᵀ = xᵀCᵀx = −xᵀCx, and a scalar equal to its own negative must vanish. Thus, replacing B with its symmetric part yields the desired form, proving existence and uniqueness.

Under a change of basis, the matrix representation transforms accordingly. If P is an invertible n × n matrix whose columns are the new basis vectors, and x = Py expresses coordinates in the new basis, then Q(x) = yᵀ(PᵀAP)y, so the new matrix is A′ = PᵀAP. This congruence transformation preserves the quadratic nature of the form while reflecting the coordinate change.

For a concrete example, consider the quadratic form Q(x, y) = x² + 2xy + y² in two variables. Its matrix representation is

A = ⎡1 1⎤
    ⎣1 1⎦

since (x y) A (x y)ᵀ = x² + 2xy + y².

Quadratic forms over the reals

Diagonalization and spectral theorem

The spectral theorem provides a fundamental tool for analyzing quadratic forms over the real numbers, as these forms are represented by symmetric matrices. It states that every real symmetric matrix A is orthogonally diagonalizable: there exist an orthogonal matrix O (satisfying OᵀO = I) and a diagonal matrix D = diag(λ₁, ..., λₙ) with real entries λᵢ such that A = ODOᵀ. This decomposition arises because symmetric matrices are self-adjoint with respect to the standard Euclidean inner product, ensuring all eigenvalues are real and eigenvectors can be chosen to form an orthonormal basis.

In the context of a quadratic form Q(x) = xᵀAx, the spectral theorem enables a change of variables x = Oy that transforms Q into a sum of squares:

Q(x) = yᵀDy = λ₁y₁² + ⋯ + λₙyₙ².

This orthogonal transformation preserves the Euclidean norm and simplifies the form to a diagonal expression without cross terms, facilitating geometric and analytic interpretations. The eigenvalues λᵢ on the diagonal directly reflect the scaling factors along the principal axes defined by the columns of O.

A proof sketch proceeds by induction on the dimension n. For symmetric A, there is at least one real eigenvalue λ with eigenvector v; normalizing gives a unit eigenvector. The eigenspace and its orthogonal complement are invariant under A, reducing the problem to smaller symmetric matrices on these subspaces. Distinct eigenvalues yield orthogonal eigenvectors: if Au = λu and Av = μv, then uᵀAv = μuᵀv and, by symmetry, uᵀAv = λuᵀv, so (λ − μ)uᵀv = 0. Within an eigenspace of higher multiplicity, Gram–Schmidt orthogonalization produces an orthonormal basis, and together these give the columns of O.

For example, consider the symmetric matrix

A = ⎡1 1⎤
    ⎣1 1⎦

The characteristic equation det(A − λI) = λ(λ − 2) = 0 yields eigenvalues λ₁ = 2 and λ₂ = 0, with corresponding orthonormal eigenvectors (1, 1)ᵀ/√2 and (1, −1)ᵀ/√2.
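This example decomposition A = ODOᵀ, with orthonormal eigenvectors (1, 1)ᵀ/√2 and (1, −1)ᵀ/√2, can be verified without external libraries; a minimal sketch:

```python
import math

s = 1 / math.sqrt(2)
O = [[s, s], [s, -s]]        # columns: eigenvectors for lambda = 2 and lambda = 0
D = [[2, 0], [0, 0]]

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

OT = [[O[j][i] for j in range(2)] for i in range(2)]   # transpose of O
A = matmul(matmul(O, D), OT)

# reconstructs [[1, 1], [1, 1]] up to floating-point error
for i in range(2):
    for j in range(2):
        assert abs(A[i][j] - 1) < 1e-12
```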