Householder transformation
In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin. The Householder transformation was introduced in a 1958 paper by Alston Scott Householder.[1]
Definition
Operator and transformation
The Householder operator[2] may be defined over any finite-dimensional inner product space \(V\) with inner product \(\langle \cdot, \cdot \rangle\) and unit vector \(u \in V\) as
\[ H_u(x) = x - 2\,\langle x, u \rangle\, u. \]
It is also common to choose a non-unit vector \(v\), and normalize it directly in the Householder operator's expression:[4]
\[ H_v(x) = x - 2\,\frac{\langle x, v \rangle}{\langle v, v \rangle}\, v. \]
Such an operator is linear and self-adjoint.
If \(V = \mathbb{C}^n\), note that the reflection hyperplane can be defined by its normal vector, a unit vector \(v\) (a vector with length \(1\)) that is orthogonal to the hyperplane. The reflection of a point \(x\) about this hyperplane is the Householder transformation:
\[ x - 2\,\langle x, v \rangle\, v = x - 2\,v\,(v^* x), \]
where \(x\) is the vector from the origin to the point, and \(v^*\) is the conjugate transpose of \(v\).

Householder matrix
The matrix constructed from this transformation can be expressed in terms of an outer product as
\[ P = I - 2\,v v^*, \]
known as the Householder matrix, where \(I\) is the identity matrix.
Properties
The Householder matrix \(P = I - 2vv^*\) has the following properties:

- it is Hermitian: \(P = P^*\),
- it is unitary: \(P^{-1} = P^*\) (via the Sherman–Morrison formula),
- hence it is involutory: \(P^2 = I\).
- A Householder matrix has eigenvalues \(\pm 1\). To see this, notice that if \(u\) is orthogonal to the vector \(v\) which was used to create the reflector, then \(Pu = u\), i.e., \(1\) is an eigenvalue of multiplicity \(n - 1\), since there are \(n - 1\) independent vectors orthogonal to \(v\). Also, notice \(Pv = -v\) (since \(v\) is by definition a unit vector), and so \(-1\) is an eigenvalue with multiplicity \(1\).
- The determinant of a Householder reflector is \(-1\), since the determinant of a matrix is the product of its eigenvalues, in this case one of which is \(-1\) with the remainder being \(1\) (as in the previous point), or via the matrix determinant lemma.
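These properties can be checked numerically. The following is a small sketch using NumPy; the vector `v` is an arbitrary example, not one taken from the text:

```python
import numpy as np

# Build a Householder matrix P = I - 2 v v* from an arbitrary unit vector v.
rng = np.random.default_rng(0)
v = rng.standard_normal(4)
v /= np.linalg.norm(v)                    # normalize so that ||v|| = 1
P = np.eye(4) - 2.0 * np.outer(v, v)

# Hermitian (real symmetric here): P == P^T
assert np.allclose(P, P.T)
# Unitary/orthogonal: P^T P == I
assert np.allclose(P.T @ P, np.eye(4))
# Involutory: P^2 == I
assert np.allclose(P @ P, np.eye(4))
# Eigenvalues: one -1 (eigenvector v), the rest +1
eigvals = np.sort(np.linalg.eigvalsh(P))
assert np.allclose(eigvals, [-1.0, 1.0, 1.0, 1.0])
# Determinant -1
assert np.isclose(np.linalg.det(P), -1.0)
```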
Example
Consider the normalization of a vector containing \(1\) in each entry,
\[ v = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}. \]
Then the Householder matrix corresponding to the vector \(v\) is
\[ P = I - 2vv^{\mathsf T} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}. \]
Note that if we have another vector representing a coordinate in the 2D plane,
\[ x = \begin{bmatrix} x \\ y \end{bmatrix}, \]
then in this case \(P\) swaps and negates the \(x\) and \(y\) coordinates; in other words we have
\[ Px = \begin{bmatrix} -y \\ -x \end{bmatrix}, \]
which corresponds to reflecting the vector across the line \(y = -x\), to which our original vector \(v\) is normal.
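This 2D example can be verified directly; a short NumPy sketch, with an arbitrary sample point:

```python
import numpy as np

v = np.array([1.0, 1.0]) / np.sqrt(2)     # unit normal to the line y = -x
P = np.eye(2) - 2.0 * np.outer(v, v)

assert np.allclose(P, [[0.0, -1.0], [-1.0, 0.0]])

x = np.array([3.0, 5.0])                  # an arbitrary point in the plane
assert np.allclose(P @ x, [-5.0, -3.0])   # coordinates swapped and negated

# Points on the line y = -x lie in the reflection hyperplane and are fixed.
p = np.array([2.0, -2.0])
assert np.allclose(P @ p, p)
```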
Applications
Geometric optics
In geometric optics, specular reflection can be expressed in terms of the Householder matrix (see Specular reflection § Vector formulation).
Numerical linear algebra
Note that a Householder matrix is represented by the \(n\) entries of a single vector rather than the \(n^2\) entries of a full matrix (which in most algorithms is never explicitly formed), minimizing the storage and memory references required to use it.
Further, multiplying a Householder matrix by a vector does not require a full matrix–vector multiplication, but only one vector dot product followed by one saxpy operation:
\[ Px = x - 2\,(v^* x)\,v. \]
Its arithmetic complexity is therefore of the same order as two low-level BLAS-1 operations, making Householder matrices extremely arithmetically efficient.[5]
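This implicit application (dot product plus axpy, never forming \(P\)) can be sketched as follows; the explicit matrix is built only for comparison:

```python
import numpy as np

def apply_householder(v, x):
    """Compute P @ x for P = I - 2 v v^T without forming P.

    v is assumed to be a unit vector; the cost is one dot product
    (BLAS-1) and one axpy-style update (BLAS-1).
    """
    return x - 2.0 * np.dot(v, x) * v

rng = np.random.default_rng(1)
v = rng.standard_normal(6)
v /= np.linalg.norm(v)
x = rng.standard_normal(6)

P = np.eye(6) - 2.0 * np.outer(v, v)      # explicit matrix, comparison only
assert np.allclose(apply_householder(v, x), P @ x)
```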
Finally, using \(\mathrm{fl}(\cdot)\) to denote a computed (floating-point) value and plain symbols to denote mathematically exact values, for a given Householder matrix \(P\),
\[ \mathrm{fl}(Px) = P(x + w), \qquad \|w\|_2 \le c\,n\,u\,\|x\|_2, \]
where \(u\) is the unit roundoff, \(n\) the size of the matrix \(P\), and \(c\) some small constant. In other words, multiplication by a Householder matrix is extremely backward stable.[6]
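The norm preservation implied by this stability can be observed numerically. A small sketch (the tolerance below is a loose bound; the observed error is typically on the order of machine precision):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
x = rng.standard_normal(n)

y = x - 2.0 * np.dot(v, x) * v            # fl(P x), computed implicitly

# An orthogonal P preserves the 2-norm exactly; in floating point the
# relative discrepancy stays within a small multiple of unit roundoff.
rel_err = abs(np.linalg.norm(y) - np.linalg.norm(x)) / np.linalg.norm(x)
assert rel_err < 1e-12
```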
Because Householder transformations minimize storage, memory references, and arithmetic complexity while remaining numerically stable, they are widely used in numerical linear algebra: for example, to annihilate the entries below the main diagonal of a matrix,[7] to perform QR decompositions, and in the first step of the QR algorithm. They are also widely used for transforming a matrix to Hessenberg form. For symmetric or Hermitian matrices, the symmetry can be preserved, resulting in tridiagonalization.[8][9]
QR decomposition
Householder transformations can be used to calculate a QR decomposition. Consider a matrix \(A_j\) already triangularized up to column \(j\); our goal is to construct Householder matrices that act upon the trailing principal submatrices of the given matrix,
\[ A_{j+1} = Q_{j+1} A_j, \]
via matrices of the form
\[ Q_{j+1} = \begin{bmatrix} I_j & 0 \\ 0 & P_{j+1} \end{bmatrix}, \]
where each \(P_{j+1}\) is a Householder matrix chosen to zero the subdiagonal entries of the next column. (Note that we already established that Householder transformations are unitary matrices, and since the product of unitary matrices is itself unitary, this gives us the unitary matrix \(Q\) of the QR decomposition.)
Let \(x\) be the column to be reduced. If we can find a Householder matrix \(P = I - 2vv^*\) so that
\[ Px = \alpha e_1, \qquad (1) \]
for some constant \(\alpha\), we could accomplish this. Thinking geometrically, we are looking for a hyperplane so that the reflection of \(x\) about this plane happens to land directly on the basis vector \(e_1\). For this to happen, we must have \(v\) parallel to \(x - \alpha e_1\), and since \(v\) is a unit vector, this means that we must have
\[ v = \frac{x - \alpha e_1}{\|x - \alpha e_1\|_2}. \qquad (2) \]
Now if we apply equation (2) back into equation (1), we get
\[ x - 2\,\frac{(x - \alpha e_1)^* x}{\|x - \alpha e_1\|_2^2}\,(x - \alpha e_1) = \alpha e_1. \]
Or, in other words, by comparing the scalars in front of the vector \(x - \alpha e_1\) we must have
\[ 2\,\frac{(x - \alpha e_1)^* x}{\|x - \alpha e_1\|_2^2} = 1. \]
Expanding both sides (using \(x^* x = \|x\|_2^2\)) reduces this to \(\alpha^2 = \|x\|_2^2\), which means that we can solve for \(\alpha\) as
\[ \alpha = \pm \|x\|_2. \]
This completes the construction; however, in practice we want to avoid catastrophic cancellation in equation (2). To do so, we choose[5] the sign of \(\alpha\) as
\[ \alpha = -\operatorname{sign}(x_1)\,\|x\|_2, \]
so that the first component of \(x - \alpha e_1\) is computed by an addition rather than a subtraction.
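The construction above can be sketched as a compact QR factorization. This is an illustrative NumPy implementation with an arbitrary test matrix, not a library routine:

```python
import numpy as np

def householder_qr(A):
    """QR factorization via Householder reflections.

    Returns Q (orthogonal) and R (upper triangular) with A = Q @ R.
    The sign of alpha is chosen opposite to x[0] to avoid cancellation.
    """
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(min(m - 1, n)):
        x = R[j:, j]
        alpha = -np.copysign(np.linalg.norm(x), x[0])
        v = x.copy()
        v[0] -= alpha                      # v = x - alpha * e1
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:                  # column already reduced
            continue
        v /= norm_v
        # Apply P = I - 2 v v^T to the trailing submatrix; accumulate Q.
        R[j:, j:] -= 2.0 * np.outer(v, v @ R[j:, j:])
        Q[:, j:] -= 2.0 * np.outer(Q[:, j:] @ v, v)
    return Q, R

A = np.array([[4.0, 1.0], [3.0, 2.0], [0.0, 1.0]])
Q, R = householder_qr(A)
assert np.allclose(Q @ R, A)               # factorization reproduces A
assert np.allclose(Q.T @ Q, np.eye(3))     # Q is orthogonal
assert np.allclose(np.tril(R, -1), 0.0)    # R is upper triangular
```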
Tridiagonalization (Hessenberg)
This procedure is presented in Numerical Analysis by Burden and Faires, and works when the matrix is symmetric. In the non-symmetric case, it is still useful, as a similar procedure results in a Hessenberg matrix.
It uses a slightly altered sign function with \(\operatorname{sgn}(0) = 1\).[10] In the first step, to form the Householder matrix we need to determine \(\alpha\) and \(r\), which are:
\[ \alpha = -\operatorname{sgn}(a_{21})\sqrt{\sum_{j=2}^{n} a_{j1}^2}, \qquad r = \sqrt{\tfrac{1}{2}\left(\alpha^2 - a_{21}\,\alpha\right)}. \]
From \(\alpha\) and \(r\), construct the vector \(v^{(1)} = (v_1, v_2, \dots, v_n)^{\mathsf T}\):
where \(v_1 = 0\), \(v_2 = \dfrac{a_{21} - \alpha}{2r}\), and
- \(v_k = \dfrac{a_{k1}}{2r}\) for each \(k = 3, 4, \dots, n.\)

Then compute:
\[ P^1 = I - 2\,v^{(1)} \left(v^{(1)}\right)^{\mathsf T}, \qquad A^{(2)} = P^1 A P^1. \]
Having found \(P^1\) and computed \(A^{(2)}\), the process is repeated for \(k = 2, 3, \dots, n-2\) as follows:
\[ \alpha = -\operatorname{sgn}\!\left(a^{(k)}_{k+1,k}\right)\sqrt{\sum_{j=k+1}^{n} \left(a^{(k)}_{jk}\right)^2}, \qquad r = \sqrt{\tfrac{1}{2}\left(\alpha^2 - a^{(k)}_{k+1,k}\,\alpha\right)}, \]
\[ v^{(k)}_1 = \cdots = v^{(k)}_k = 0, \qquad v^{(k)}_{k+1} = \frac{a^{(k)}_{k+1,k} - \alpha}{2r}, \qquad v^{(k)}_j = \frac{a^{(k)}_{jk}}{2r} \ \text{for } j = k+2, \dots, n, \]
\[ P^k = I - 2\,v^{(k)} \left(v^{(k)}\right)^{\mathsf T}, \qquad A^{(k+1)} = P^k A^{(k)} P^k. \]
Continuing in this manner, the tridiagonal and symmetric matrix is formed.
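The procedure can be sketched in NumPy as follows; this is an illustrative implementation of the Burden–Faires steps for symmetric input, with a sample symmetric matrix:

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by similarity
    transforms A <- P A P with Householder matrices P (Burden & Faires)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    sgn = lambda t: 1.0 if t >= 0 else -1.0   # altered sign: sgn(0) = 1
    for k in range(n - 2):
        alpha = -sgn(A[k + 1, k]) * np.sqrt(np.sum(A[k + 1:, k] ** 2))
        r = np.sqrt(0.5 * (alpha ** 2 - A[k + 1, k] * alpha))
        if r == 0.0:                          # column already tridiagonal
            continue
        v = np.zeros(n)
        v[k + 1] = (A[k + 1, k] - alpha) / (2.0 * r)
        v[k + 2:] = A[k + 2:, k] / (2.0 * r)
        P = np.eye(n) - 2.0 * np.outer(v, v)
        A = P @ A @ P
    return A

A = np.array([[4.0, 1.0, -2.0, 2.0],
              [1.0, 2.0, 0.0, 1.0],
              [-2.0, 0.0, 3.0, -2.0],
              [2.0, 1.0, -2.0, -1.0]])
T = householder_tridiagonalize(A)
# Entries outside the three central diagonals vanish.
assert np.allclose(T - np.triu(np.tril(T, 1), -1), 0.0)
# Similarity transforms preserve the eigenvalues.
assert np.allclose(np.sort(np.linalg.eigvalsh(T)),
                   np.sort(np.linalg.eigvalsh(A)))
```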
Examples
In this example, also from Burden and Faires,[10] the given matrix
\[ A = \begin{bmatrix} 4 & 1 & -2 & 2 \\ 1 & 2 & 0 & 1 \\ -2 & 0 & 3 & -2 \\ 2 & 1 & -2 & -1 \end{bmatrix} \]
is transformed to the similar tridiagonal matrix \(A^{(3)}\) by using the Householder method. Following those steps in the Householder method, we have:
\[ \alpha = -\operatorname{sgn}(a_{21})\sqrt{1^2 + (-2)^2 + 2^2} = -3, \qquad r = \sqrt{\tfrac{1}{2}\left(9 - 1 \cdot (-3)\right)} = \sqrt{6}, \]
\[ v^{(1)} = \left(0,\ \tfrac{2}{\sqrt{6}},\ -\tfrac{1}{\sqrt{6}},\ \tfrac{1}{\sqrt{6}}\right)^{\mathsf T}. \]
The first Householder matrix,
\[ P^1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -\tfrac{1}{3} & \tfrac{2}{3} & -\tfrac{2}{3} \\ 0 & \tfrac{2}{3} & \tfrac{2}{3} & \tfrac{1}{3} \\ 0 & -\tfrac{2}{3} & \tfrac{1}{3} & \tfrac{2}{3} \end{bmatrix}, \]
is used to form
\[ A^{(2)} = P^1 A P^1 = \begin{bmatrix} 4 & -3 & 0 & 0 \\ -3 & \tfrac{10}{3} & 1 & \tfrac{4}{3} \\ 0 & 1 & \tfrac{5}{3} & -\tfrac{4}{3} \\ 0 & \tfrac{4}{3} & -\tfrac{4}{3} & -1 \end{bmatrix}. \]
A second step, with \(\alpha = -\tfrac{5}{3}\) and \(r = \tfrac{2\sqrt{5}}{3}\), then yields
\[ A^{(3)} = P^2 A^{(2)} P^2 = \begin{bmatrix} 4 & -3 & 0 & 0 \\ -3 & \tfrac{10}{3} & -\tfrac{5}{3} & 0 \\ 0 & -\tfrac{5}{3} & -\tfrac{33}{25} & \tfrac{68}{75} \\ 0 & 0 & \tfrac{68}{75} & \tfrac{149}{75} \end{bmatrix}. \]
As we can see, the final result is a tridiagonal symmetric matrix which is similar to the original one. The process is finished after two steps.
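The two steps of this worked example can be verified mechanically; a NumPy sketch:

```python
import numpy as np

A = np.array([[4.0, 1.0, -2.0, 2.0],
              [1.0, 2.0, 0.0, 1.0],
              [-2.0, 0.0, 3.0, -2.0],
              [2.0, 1.0, -2.0, -1.0]])

# First step: alpha = -3, r = sqrt(6), v = (0, 2, -1, 1)/sqrt(6)
v1 = np.array([0.0, 2.0, -1.0, 1.0]) / np.sqrt(6.0)
P1 = np.eye(4) - 2.0 * np.outer(v1, v1)
A2 = P1 @ A @ P1
assert np.allclose(A2[0], [4.0, -3.0, 0.0, 0.0])

# Second step: alpha = -5/3, r = 2*sqrt(5)/3, v = (0, 0, 2, 1)/sqrt(5)
v2 = np.array([0.0, 0.0, 2.0, 1.0]) / np.sqrt(5.0)
P2 = np.eye(4) - 2.0 * np.outer(v2, v2)
A3 = P2 @ A2 @ P2
assert np.allclose(A3[1], [-3.0, 10.0 / 3.0, -5.0 / 3.0, 0.0])
assert np.allclose(A3[3], [0.0, 0.0, 68.0 / 75.0, 149.0 / 75.0])
```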
Quantum computation
As unitary matrices are useful in quantum computation, and Householder transformations are unitary, they are very useful in quantum computing. One of the central algorithms where they are useful is Grover's algorithm, in which we are trying to implement an oracle, which turns out to be a Householder transformation:
\[ U_\omega = I - 2\,|\omega\rangle\langle\omega| \]
(here \(|\omega\rangle\) is part of the bra–ket notation and is analogous to the vector \(v\) which we were using previously).

This is done via an algorithm that iterates the oracle \(U_\omega\) together with another operator, known as the Grover diffusion operator, defined by
\[ |s\rangle = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} |x\rangle \quad \text{and} \quad U_s = 2\,|s\rangle\langle s| - I. \]
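A single Grover iteration built from these two reflections can be simulated directly. In this NumPy sketch for \(N = 4\) (the marked index is an arbitrary choice), one iteration already lands exactly on the marked state:

```python
import numpy as np

N = 4                                     # 2-qubit search space
omega = 2                                 # marked state index (arbitrary choice)

e_omega = np.zeros(N); e_omega[omega] = 1.0
s = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition |s>

U_omega = np.eye(N) - 2.0 * np.outer(e_omega, e_omega)  # oracle reflection
U_s = 2.0 * np.outer(s, s) - np.eye(N)                  # diffusion operator

# Both operators are Householder reflections up to sign, hence unitary.
assert np.allclose(U_omega @ U_omega.T, np.eye(N))
assert np.allclose(U_s @ U_s.T, np.eye(N))

state = U_s @ (U_omega @ s)               # one Grover iteration
probs = state ** 2
assert np.isclose(probs[omega], 1.0)      # for N = 4, one iteration suffices
```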
Computational and theoretical relationship to other unitary transformations
The Householder transformation is a reflection about a hyperplane with unit normal vector \(v\), as stated earlier. An \(n\)-by-\(n\) unitary transformation \(U\) satisfies \(UU^* = I\). Taking the determinant (\(n\)-th power of the geometric mean) and trace (proportional to the arithmetic mean) of a unitary matrix reveals that its eigenvalues \(\lambda_i\) have unit modulus. This can be seen directly and swiftly:
\[ \frac{\operatorname{Trace}(UU^*)}{n} = \frac{\sum_{j=1}^{n} \left|\lambda_j\right|^2}{n} = 1, \qquad \operatorname{det}(UU^*) = \prod_{j=1}^{n} \left|\lambda_j\right|^2 = 1. \]
Since the arithmetic and geometric means are equal only if the variables are constant (see inequality of arithmetic and geometric means), we establish the claim of unit modulus.
For the case of real-valued unitary matrices we obtain orthogonal matrices, \(U U^{\mathsf T} = I\). It follows rather readily (see orthogonal matrix) that any orthogonal matrix can be decomposed into a product of 2-by-2 rotations, called Givens rotations, and Householder reflections. This is appealing intuitively, since multiplication of a vector by an orthogonal matrix preserves the length of that vector, and rotations and reflections exhaust the set of (real-valued) geometric operations that leave a vector's length invariant.
The Householder transformation was shown to have a one-to-one relationship with the canonical coset decomposition of unitary matrices defined in group theory, which can be used to parametrize unitary operators in a very efficient manner.[11]
Finally, we note that a single Householder transformation, unlike a solitary Givens transformation, can act on all columns of a matrix, and as such exhibits the lowest computational cost for QR decomposition and tridiagonalization. The penalty for this "computational optimality" is, of course, that Householder operations cannot be as deeply or efficiently parallelized. As such, Householder is preferred for dense matrices on sequential machines, whilst Givens is preferred for sparse matrices and/or parallel machines.
Notes
- ^ Householder, A. S. (1958). "Unitary Triangularization of a Nonsymmetric Matrix" (PDF). Journal of the ACM. 5 (4): 339–342. doi:10.1145/320941.320947. MR 0111128. S2CID 9858625.
- ^ Roman 2008, pp. 243–244
- ^ Methods of Applied Mathematics for Engineers and Scientists. Cambridge University Press. 28 June 2013. Section E.4.11. ISBN 9781107244467.
- ^ Roman 2008, p. 244
- ^ a b Saad, Yousef (2003). Iterative methods for sparse linear systems. Society for Industrial and Applied Mathematics. pp. 11–14.
- ^ Higham, Nicholas J. (2002). Accuracy and stability of numerical algorithms (2nd ed.). Philadelphia: Society for Industrial and Applied Mathematics. p. 358. ISBN 0-89871-521-0.
- ^ Taboga, Marco. "Householder matrix, Lectures on matrix algebra".
- ^ Schabauer, Hannes; Pacher, Christoph; Sunderland, Andrew G.; Gansterer, Wilfried N. (2010-05-01). "Toward a parallel solver for generalized complex symmetric eigenvalue problems". Procedia Computer Science. 1 (1): 437–445. doi:10.1016/j.procs.2010.04.047.
- ^ Golub, Gene Howard; Van Loan, Charles F. (1996). Matrix computations (3rd ed.). Baltimore London: Johns Hopkins university press. p. 211. ISBN 0-8018-5414-8.
- ^ a b Burden, Richard; Faires, Douglas; Burden, Annette (2016). Numerical analysis (10th ed.). Thomson Brooks/Cole. ISBN 9781305253667.
- ^ Renan Cabrera; Traci Strohecker; Herschel Rabitz (2010). "The canonical coset decomposition of unitary matrices through Householder transformations". Journal of Mathematical Physics. 51 (8): 082101. arXiv:1008.2477. Bibcode:2010JMP....51h2101C. doi:10.1063/1.3466798. S2CID 119641896.
References
- LaBudde, C.D. (1963). "The reduction of an arbitrary real square matrix to tridiagonal form using similarity transformations". Mathematics of Computation. 17 (84). American Mathematical Society: 433–437. doi:10.2307/2004005. JSTOR 2004005. MR 0156455.
- Morrison, D.D. (1960). "Remarks on the Unitary Triangularization of a Nonsymmetric Matrix". Journal of the ACM. 7 (2): 185–186. doi:10.1145/321021.321030. MR 0114291. S2CID 23361868.
- Cipra, Barry A. (2000). "The Best of the 20th Century: Editors Name Top 10 Algorithms". SIAM News. 33 (4): 1. (Herein the Householder transformation is cited as a top-10 algorithm of the 20th century.)
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 11.3.2. Householder Method". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-13.
- Roman, Steven (2008), Advanced Linear Algebra, Graduate Texts in Mathematics (3rd ed.), Springer, ISBN 978-0-387-72828-5