HHL algorithm

from Wikipedia

The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for obtaining certain limited information about the solution to a system of linear equations, introduced by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. Specifically, the algorithm estimates quadratic functions of the solution vector to a given system.[1]

The algorithm is one of the fundamental quantum algorithms expected to provide a speedup over classical counterparts, along with Shor's factoring algorithm and Grover's search algorithm. Assuming the linear system is sparse[2] and has a low condition number κ, and that the user is interested only in certain information about the solution vector x rather than the entire vector itself, the algorithm has a runtime of O(log(N)κ²), where N is the number of variables. This offers an exponential speedup over the fastest classical algorithm, which runs in O(Nκ) (or O(N√κ) for positive semidefinite matrices).

An implementation of the HHL algorithm was first demonstrated in 2013 by three independent publications, consisting of simple systems on specially designed devices.[3][4][5] The first demonstration of a general-purpose version of the algorithm appeared in 2018.[6]

Overview


Given an N × N Hermitian matrix A and a unit vector b, the HHL algorithm prepares the quantum state |x⟩ whose amplitudes are the entries of the solution x to the linear system Ax = b. The algorithm cannot efficiently output the solution x itself, but it allows one to efficiently estimate x†Mx for a Hermitian matrix M.

The algorithm first prepares the quantum state |b⟩ whose amplitudes are equal to the entries of b. Using Hamiltonian simulation, the unitary operator e^{iAt} is applied to |b⟩ for a superposition of different times t. The algorithm then uses quantum phase estimation to decompose |b⟩ in the eigenbasis of A and find the corresponding eigenvalues λ_j. The state of the system after this step is approximately

Σ_j β_j |u_j⟩ |λ_j⟩,

where u_j are the eigenvectors of A and β_j = ⟨u_j|b⟩ is the j-th coefficient of b in the eigenbasis of A.

We would then like to apply the linear map taking |λ_j⟩ to Cλ_j⁻¹|λ_j⟩ for some constant C. This map is not unitary and must be implemented using a quantum measurement with a nonzero probability of failure. After it succeeds, we uncompute the |λ_j⟩ register and are left with a state proportional to

Σ_j β_j λ_j⁻¹ |u_j⟩ = A⁻¹|b⟩ = |x⟩.

By performing the quantum measurement corresponding to M, we get an estimate of x†Mx. One could use quantum tomography to retrieve all components of x, but this would require repeating the algorithm roughly N times.
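The linear-algebra content of this overview can be illustrated classically. The sketch below (plain numpy, not quantum code; the matrices A and M are illustrative) shows the two objects HHL is concerned with: the normalized solution state |x⟩ of Ax = b, and the scalar x†Mx that the algorithm lets one estimate.

```python
import numpy as np

# A classical numpy illustration (not quantum code) of what HHL outputs:
# the normalized solution state |x> of A x = b and the quadratic form x†Mx.
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
A = (G + G.T) / 2 + 5 * np.eye(4)   # Hermitian, shifted to stay well-conditioned
b = rng.standard_normal(4)
b /= np.linalg.norm(b)              # unit vector, as |b> must be

x = np.linalg.solve(A, b)
x_state = x / np.linalg.norm(x)     # the amplitudes HHL prepares, up to normalization

M = np.diag([1.0, 2.0, 3.0, 4.0])   # a Hermitian observable (illustrative)
expectation = x_state @ M @ x_state # the quantity HHL lets one estimate
print(expectation)
```

Note that only the N-number summary x†Mx is cheap to extract; reading out all N amplitudes of x_state is exactly the tomography step that would cost roughly N repetitions.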

Detailed description


Assumptions and initialization


The algorithm requires the following assumptions to hold:

  1. The algorithm requires A to be Hermitian so that it can be exponentiated into a unitary operator. If A is not Hermitian, one can define the Hermitian matrix C = ((0, A), (A†, 0)) and solve Cy = (b, 0)ᵀ to obtain y = (0, x)ᵀ.
  2. The algorithm requires an efficient procedure to prepare |b⟩. It is assumed that either |b⟩ has already been prepared or there exists some unitary B which takes some arbitrary quantum state |initial⟩ to |b⟩ efficiently. Any error in the preparation of |b⟩ is ignored.
  3. The algorithm assumes that the state |Ψ₀⟩ can be prepared efficiently, where |Ψ₀⟩ = √(2/T) Σ_{τ=0}^{T−1} sin(π(τ+½)/T) |τ⟩ for some large T. The coefficients of |Ψ₀⟩ are chosen to minimize a certain quadratic loss function which induces error in the U_invert subroutine described below.
  4. The algorithm assumes that the unitary operator e^{iAt} can be applied efficiently. This is possible using Hamiltonian simulation if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row which can be computed in time O(s) given a row index. One can then apply e^{iAt} in time Õ(log(N)s²t).
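Assumption 1 can be checked concretely: the sketch below (numpy, names illustrative) builds the Hermitian dilation C = ((0, A), (A†, 0)) for a non-Hermitian A and confirms that solving Cy = (b, 0) recovers x in the lower block of y.

```python
import numpy as np

# Sketch of assumption 1 (illustrative names): embed a non-Hermitian A into
# the Hermitian dilation C = [[0, A], [A†, 0]] and solve C y = (b, 0)
# so that x appears in the lower block of y.
rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # not Hermitian
b = rng.standard_normal(n).astype(complex)

C = np.block([[np.zeros((n, n)), A],
              [A.conj().T, np.zeros((n, n))]])

y = np.linalg.solve(C, np.concatenate([b, np.zeros(n)]))
x = y[n:]                            # solution block: A x = b
print(np.allclose(A @ x, b))
```

The block rows of Cy = (b, 0) read A·y_lower = b and A†·y_upper = 0, so for invertible A the upper block vanishes and the lower block is exactly the desired solution.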

Uinvert subroutine


The key subroutine to the algorithm, denoted U_invert, is defined as follows using phase estimation:

  1. Prepare |Ψ₀⟩ on register C.
  2. Apply the conditional Hamiltonian evolution Σ_{τ=0}^{T−1} |τ⟩⟨τ|_C ⊗ e^{iAτt₀/T}.
  3. Apply the Fourier transform to the register C. Denote the resulting basis states with |k⟩ for k = 0, ..., T − 1. Define λ̃_k := 2πk/t₀.
  4. Adjoin a three-dimensional register S in the state |h(λ̃_k)⟩ := √(1 − f(λ̃_k)² − g(λ̃_k)²) |nothing⟩ + f(λ̃_k) |well⟩ + g(λ̃_k) |ill⟩, where f and g are filter functions with f(λ̃_k) ≈ C/λ̃_k for well-conditioned eigenvalues.
  5. Reverse steps 1–3, uncomputing any garbage produced along the way.

The phase estimation procedure in steps 1–3 estimates the eigenvalues of A up to additive error O(1/t₀).

The ancilla register in step 4 is needed to construct a state with inverted eigenvalues corresponding to the diagonalized inverse of A. The states 'nothing', 'well' and 'ill' are used to direct the loop body; 'nothing' indicates that the matrix inversion has not yet taken place, 'well' indicates that it has and the loop should halt, and 'ill' indicates that part of |b⟩ lies in the ill-conditioned subspace of A and the algorithm cannot produce the desired inversion. Producing a state proportional to the inverse of A requires 'well' to be measured, after which the overall state collapses to the desired output.
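The effect of the inversion step can be mimicked with classical linear algebra. In the sketch below (numpy; exact eigenvalues stand in for the QPE estimates, and all names are illustrative), each eigencomponent receives amplitude C/λ_j on the 'well' branch, so conditioning on 'well' leaves a state proportional to A⁻¹|b⟩.

```python
import numpy as np

# Classical sketch of the eigenvalue-inversion step: each eigencomponent
# beta_j |u_j> gets amplitude C/lambda_j on the 'well' flag, so post-selecting
# 'well' leaves a state proportional to sum_j (beta_j/lambda_j)|u_j> = A^{-1}|b>.
rng = np.random.default_rng(2)
G = rng.standard_normal((4, 4))
A = (G + G.T) / 2 + 5 * np.eye(4)    # Hermitian, positive definite here
b = rng.standard_normal(4)
b /= np.linalg.norm(b)

lam, U = np.linalg.eigh(A)           # exact eigenvalues stand in for QPE output
beta = U.T @ b                       # coefficients of b in the eigenbasis

C = np.min(np.abs(lam))              # C <= min |lambda| keeps amplitudes <= 1
well_amp = beta * (C / lam)          # amplitude routed to the 'well' branch
p_success = float(np.sum(well_amp ** 2))   # probability of measuring 'well'

x_state = U @ well_amp
x_state /= np.linalg.norm(x_state)   # renormalize after post-selection

x_exact = np.linalg.solve(A, b)
x_exact /= np.linalg.norm(x_exact)
print(np.allclose(x_state, x_exact))
```

The success probability p_success shrinks as C must be taken below the smallest eigenvalue, which is why the main loop uses amplitude amplification rather than naive repetition.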

Main loop


The main loop follows the amplitude amplification procedure: starting with U_invert B|initial⟩, repeatedly apply the operator

U_invert B R_init B† U_invert† R_succ,

where R_succ = I − 2|well⟩⟨well| and R_init = I − 2|initial⟩⟨initial|.

After each iteration, the register S is measured and produces a value of 'nothing', 'well', or 'ill'. The loop is repeated until 'well' is measured, which occurs with some probability p. Amplitude amplification achieves a given error using O(1/√p) iterations, as opposed to the O(1/p) repetitions required by naive measure-and-restart.

After 'well' is successfully measured on S, the system will be in a state proportional to

Σ_j β_j λ_j⁻¹ |u_j⟩ ∝ A⁻¹|b⟩ = |x⟩.

The quantum measurement corresponding to M then gives an estimate of x†Mx.

Analysis


Classical efficiency


The best classical algorithm which produces the actual solution vector x is Gaussian elimination, which runs in O(N³) time.

If A is s-sparse and positive semidefinite, then the conjugate gradient method can be used to find the solution vector x in O(Ns√κ log(1/ε)) time by minimizing the quadratic function ‖Ax − b‖₂².

When only a summary statistic of the solution vector x is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of x†Mx in O(N√κ) time.
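For reference, the classical baseline is short enough to write down. The sketch below is a minimal conjugate-gradient solver (dense numpy for brevity; the sparse case only changes the cost of the matrix-vector products):

```python
import numpy as np

# Minimal conjugate-gradient solver, the classical baseline cited above for
# sparse positive-definite systems (dense numpy here for brevity).
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
G = rng.standard_normal((20, 20))
A = G @ G.T + 20 * np.eye(20)        # symmetric positive definite
b = rng.standard_normal(20)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

Each iteration costs one matrix-vector product, which is O(sN) for an s-sparse matrix; the iteration count grows with √κ, giving the classical scaling quoted above.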

Quantum efficiency


The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be O(κ² log(N)/ε), where ε is the error parameter and κ is the condition number of A. This was subsequently improved to O(κ log³(κ) log(N)/ε³) by Andris Ambainis[7] and further improved for large condition number cases by Peniel Tsemo et al.,[8] and a quantum algorithm with runtime polynomial in log(1/ε) was developed by Childs et al.[9] Since the HHL algorithm maintains its logarithmic scaling in N only for sparse or low-rank matrices, Wossnig et al.[10] extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in O(√N log(N)κ²) time, compared to the O(N log(N)κ²) of the standard HHL algorithm.

Optimality


An important factor in the performance of the matrix inversion algorithm is the condition number κ, the ratio of A's largest and smallest eigenvalues. As the condition number increases, A becomes closer to a matrix which cannot be inverted, the solution vector becomes less stable, and finding it using gradient-descent methods such as the conjugate gradient method becomes harder. This algorithm assumes that all singular values of A lie between 1/κ and 1, in which case the claimed run-time proportional to κ² will be achieved. Therefore, the speedup over classical algorithms is increased further when κ is poly-logarithmic in N.[1]

If the run-time of the algorithm were made poly-logarithmic in κ, then problems solvable on n qubits could be solved in poly(n) time, causing the complexity class BQP to be equal to PSPACE.[1]

Error analysis


Performing the Hamiltonian simulation, which is the dominant source of error, is done by simulating e^{iAt}. Assuming that A is s-sparse, this can be done with an error bounded by a constant ε_H, which translates into an additive error of ε_H in the output state |x⟩.

The phase estimation step errs by O(1/t₀) in estimating λ, which translates into a relative error of O(1/(λt₀)) in λ⁻¹. If λ ≥ 1/κ, taking t₀ = O(κ/ε) induces a final error of ε. This requires that the overall run-time be increased proportionally to O(κ/ε) to minimize error.
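This scaling can be checked with a few lines of arithmetic. The sketch below (parameter values are illustrative, chosen to match the standard analysis in which the smallest allowed eigenvalue is 1/κ) confirms that an eigenvalue error of 1/t₀ with t₀ = κ/ε yields a relative error in λ⁻¹ just under ε.

```python
# Quick numeric check of the error-propagation argument (parameter values are
# illustrative): a phase-estimation error of order 1/t0 in lambda becomes a
# relative error of order 1/(lambda*t0) in 1/lambda, so t0 ~ kappa/eps suffices
# when lambda >= 1/kappa.
kappa, eps = 100.0, 0.01
t0 = kappa / eps                     # evolution time chosen as O(kappa/eps)

lam = 1.0 / kappa                    # worst case: smallest allowed eigenvalue
delta = 1.0 / t0                     # phase-estimation error in lambda

rel_err = abs(1.0 / (lam + delta) - 1.0 / lam) / (1.0 / lam)
print(rel_err)                       # just under eps
```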

Experimental realization


While a general-purpose quantum computer does not yet exist, one can still try to execute a proof-of-concept implementation of the HHL algorithm. This remained a challenge for years, until three groups independently did so in 2013.

On February 5, 2013, a group led by Stefanie Barz reported an implementation of the HHL algorithm on a photonic quantum computer. The implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Experimental measurements of the fidelity in the obtained output state ranged from 64.7% to 98.1% due to the influence of higher-order emissions from spontaneous parametric down-conversion.[4]

On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit NMR quantum computer. The implementation was tested using linear systems of 2 variables. Across three experiments, the solution vector was obtained with over 96% fidelity.[5]

On February 18, 2013, Cai et al. reported an experimental demonstration solving 2-by-2 linear systems. The quantum circuit was optimized and compiled into a linear optical network with four photonic qubits and four controlled logic gates, which were used to coherently implement the subroutines of the HHL algorithm. For various input vectors, the realization gave solutions with fidelities ranging from 0.825 to 0.993.[11]

Another experimental demonstration using NMR for solving an 8×8 system was reported by Wen et al.[12] in 2018, using the algorithm developed by Subaşı et al.[13]

Proposed applications


Several concrete applications of the HHL algorithm have been proposed, which analyze the algorithm's input assumptions and output guarantees for particular problems.

Electromagnetic scattering
Clader et al. gave a version of the HHL algorithm which allows a preconditioner to be included, which can be used to improve the dependence on the condition number. The algorithm was applied to solving for the radar cross-section of a complex shape, which was one of the first examples of an application of the HHL algorithm to a concrete problem.[14]
Solving linear differential equations
Berry proposed an algorithm for solving linear, time-dependent initial value problems using the HHL algorithm.[15]
Solving nonlinear differential equations
Two groups proposed[16] efficient algorithms for numerically integrating dissipative nonlinear ordinary differential equations. Liu et al.[17] utilized Carleman linearization for second order equations and Lloyd et al.[18] used a mean field linearization method inspired by the nonlinear Schrödinger equation for general order nonlinearities. The resulting linear equations are solved using quantum algorithms for linear differential equations.
Finite element method
The finite element method approximates linear partial differential equations using large systems of linear equations. Montanaro and Pallister demonstrate that the HHL algorithm can achieve a polynomial quantum speedup for the resulting linear systems. Exponential speedups are not expected for problems in a fixed dimension or for which the solution meets certain smoothness conditions, such as certain high-order problems in many-body dynamics, or some problems in computational finance.[19]
Least-squares fitting
Wiebe et al. gave a quantum algorithm to determine the quality of a least-squares fit. The optimal coefficients cannot be calculated directly from the output of the quantum algorithm, but the algorithm still outputs the optimal least-squares error.[20]
Machine learning
Many quantum machine learning algorithms have been developed, a large number of which use the HHL algorithm as a subroutine. The runtime of certain classical algorithms is often polynomial in the size and dimension of a dataset, while the HHL algorithm can give an exponential speedup in some cases. However, a line of work initiated by Ewin Tang has found that for many of these quantum machine learning algorithms there are classical algorithms which, under analogous sampling-based input assumptions, match the quantum runtimes up to polynomial overhead, eliminating the claimed exponential speedup.
Finance
Proposals for using HHL in finance include solving partial differential equations for the Black–Scholes equation and determining portfolio optimization via a Markowitz solution.[21]
Quantum chemistry
The linearized coupled cluster method in quantum chemistry can be recast as a system of linear equations. In 2023, Baskaran et al. proposed the use of HHL algorithm to solve the resulting linear systems.[22] The number of state register qubits in the quantum algorithm is the logarithm of the number of excitations, offering an exponential reduction in the number of required qubits when compared to using the variational quantum eigensolver or quantum phase estimation.

Implementation difficulties


Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson[23] analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm.

  1. the input vector b has to be efficiently prepared as the quantum state |b⟩. If the vector is not close to uniform, the state preparation is likely to be costly, and if it takes poly(N) steps the exponential advantage of HHL would vanish.
  2. the QPE step calls for the generation of the unitary e^{iAt}, and its controlled application. The efficiency of this step depends on the matrix A being sparse and 'well conditioned' (low κ). Otherwise, the application of e^{iAt} would grow as poly(N) and once again, the algorithm's quantum advantage would vanish.
  3. lastly, the solution vector x is not readily accessible. The HHL algorithm enables learning a 'summary' of the vector, namely the result of measuring the expectation of an operator M, ⟨x|M|x⟩. If actual values of x are needed, then HHL would need to be repeated O(N) times, killing the exponential speed-up. However, three ways of avoiding getting the actual values have been proposed: first, if only some properties of the solution are needed;[24] second, if the results are needed only to feed downstream matrix operations; third, if only a sample of the solution is needed.[25]

See also


References

from Grokipedia
The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for solving linear systems of the form $ Ax = b $, where $ A $ is an $ N \times N $ sparse Hermitian matrix with condition number $ \kappa $, by preparing a quantum state $ |\mathbf{x}\rangle $ proportional to the classical solution vector $ \mathbf{x} $ and enabling the estimation of expectation values such as $ \mathbf{x}^\dagger M \mathbf{x} $ for some observable $ M $.[1] Introduced in 2009 by Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd, it achieves an exponential speedup over classical methods, running in time polynomial in $ \log N $, $ \kappa $, and the inverse precision $ 1/\epsilon $, compared to the classical $ \mathcal{O}(N\sqrt{\kappa}) $ complexity for sparse matrices.[2] The algorithm proceeds in three main stages: first, it uses quantum phase estimation (QPE) to encode the eigenvalues of $ A $ into a register of qubits while applying powers of $ A $ via its sparse oracle access; second, it performs controlled rotations on an ancillary qubit to apply the inversion $ A^{-1} $ effectively by scaling with the reciprocal eigenvalues; and third, it applies inverse QPE and post-selection to yield the solution state $ |\mathbf{x}\rangle $, from which inner products can be sampled.[3] This process exploits quantum superposition and interference to handle the full matrix implicitly, without explicitly storing or manipulating the $ N $-dimensional vector.[4] Key requirements include efficient implementation of the matrix $ A $ as a sparse unitary operator, Hermitian structure (or extensions to normal matrices), and a low condition number $ \kappa $ to avoid amplification of errors.[1] The HHL algorithm has foundational importance in quantum computing, demonstrating one of the first provable exponential speedups for a practically relevant problem and serving as a building block for advanced applications in quantum simulation, optimization, and quantum machine learning (QML), such as linear regression, support vector machines, and recommender systems.[5] For instance, it accelerates computations in electromagnetic scattering and electrical network analysis by solving large-scale linear systems quantumly.[6][7] However, practical limitations persist, including sensitivity to decoherence and noise on current noisy intermediate-scale quantum (NISQ) devices, the need for fault-tolerant quantum computers for large $ N $, and the fact that it outputs a quantum state rather than a classical vector, necessitating additional measurements for full solution recovery.[3] Subsequent improvements, such as variable-time amplitude amplification and linear combination of unitaries, have addressed some runtime dependencies on $ \kappa $.[3]

Introduction

Overview

The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm designed to solve systems of linear equations of the form $ Ax = b $, where $ A $ is an $ N \times N $ matrix and $ b $ is a known input vector.[2] It achieves an exponential speedup compared to classical methods for large-scale problems involving specific matrix properties, enabling efficient estimation of solution features such as expectation values $ x^\dagger M x $ for some observable matrix $ M $.[2] The core output of the HHL algorithm is a quantum state $ |x\rangle $ proportional to the classical solution vector $ x $, normalized as $ |x\rangle \propto A^{-1} |b\rangle $.[2] This state representation does not yield the full vector explicitly but allows subsequent quantum algorithms—such as those in quantum machine learning—to access and manipulate the solution with potential quadratic or exponential speedups inherited from the encoding.[2] In outline, the algorithm encodes the input $ b $ as a quantum state $ |b\rangle $, applies quantum phase estimation to decompose it into the eigenbasis of a Hamiltonian simulation of $ A $, performs eigenvalue inversion via controlled operations on an ancillary register, and post-selects the output state upon measuring the ancillary qubit in a specific basis.[2] The quantum speedup holds under the conditions that $ A $ is Hermitian, sparse (with a constant number of nonzero entries per row), and well-conditioned (with small condition number $ \kappa $), ensuring the runtime scales favorably with system size $ N $ for such matrices.[2]

Historical development

The Harrow–Hassidim–Lloyd (HHL) algorithm was first proposed in 2009 by Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd in their seminal paper titled "Quantum algorithm for solving linear systems of equations."[2] This work introduced a quantum method to approximate solutions to sparse linear systems $ A\mathbf{x} = \mathbf{b} $ with exponential speedup over classical approaches under certain conditions, such as when the matrix $ A $ is well-conditioned and Hermitian. The primary motivation behind the HHL algorithm stemmed from the desire to extend the paradigm-shifting speedups of earlier quantum algorithms, such as Shor's algorithm for integer factorization and Grover's algorithm for unstructured search, to fundamental linear algebra problems.[2] These problems underpin a wide array of applications in quantum simulation of physical systems, optimization, and machine learning, where classical methods scale poorly with system size.[2] By leveraging quantum phase estimation and Hamiltonian simulation techniques, HHL enabled the preparation of a quantum state proportional to the solution vector, facilitating efficient estimation of expectation values rather than full classical readout.[2] In the years immediately following its proposal, several theoretical extensions enhanced the algorithm's robustness and applicability. A key improvement came in 2012 from Andris Ambainis, who refined the dependence on the matrix condition number $ \kappa $ from quadratic to nearly linear using variable-time amplitude amplification, making the algorithm more efficient for moderately ill-conditioned systems. Another significant early advancement was the 2013 preconditioned quantum linear system algorithm by Brian D. Clader, Brent C. Jacobs, and Christopher R. Sprouse, which addressed limitations for non-sparse matrices by incorporating classical preconditioning to reduce the effective condition number, thereby broadening HHL's practical utility.
The HHL algorithm quickly gained recognition as a cornerstone of quantum linear algebra. It was integrated into influential reviews and textbooks on quantum computing, such as the comprehensive primer by M. Dervović, P. Herbster, M. Mountney, S. Severini, N. Usher, and L. Wossnig, which detailed its foundational role and subsequent developments.[3] This early acknowledgment underscored HHL's potential to drive progress in quantum-enhanced numerical methods.[3]

Background

Classical linear systems solving

The problem of solving a linear system consists of finding a vector $ x \in \mathbb{R}^N $ (or $ \mathbb{C}^N $) such that $ Ax = b $, where $ A $ is an $ N \times N $ nonsingular matrix and $ b $ is a known vector of length $ N $.[8] This task arises in numerous scientific and engineering applications, including simulations in physics, optimization, and data analysis.[8] Classical direct methods, such as Gaussian elimination, transform the system into an equivalent upper triangular form through row operations, enabling back-substitution to obtain $ x $. For dense matrices, this approach requires $ O(N^3) $ arithmetic operations in the worst case.[9] Gaussian elimination traces its roots to the early 19th century, when Carl Friedrich Gauss developed systematic elimination procedures for solving least-squares problems in astronomy.[10] These methods are reliable for small to moderate $ N $ but become computationally prohibitive for large $ N $ due to their cubic scaling. For sparse matrices, where $ A $ has only $ s \ll N $ nonzero entries per row on average, iterative methods are preferred to exploit structure and reduce costs.
The conjugate gradient (CG) method, introduced by Magnus Hestenes and Eduard Stiefel in 1952, is a seminal Krylov subspace algorithm for symmetric positive definite systems, converging in at most $ N $ iterations but typically requiring $ O(\sqrt{\kappa}) $ iterations in practice, with each iteration costing $ O(s N) $ operations, where $ \kappa $ is the condition number of $ A $ (defined as $ \kappa = \|A\|\,\|A^{-1}\| $, the ratio of the largest to smallest singular values).[11][8] Similarly, the generalized minimal residual (GMRES) method, developed by Yousef Saad and Martin Schultz in 1986, extends this to nonsymmetric systems by minimizing the residual in a Krylov subspace, with convergence depending on $ \kappa $ and sparsity $ s $, often requiring restarts to manage storage.[8] Modern implementations incorporate preconditioning, using techniques like incomplete LU factorization to approximate $ A^{-1} $, to reduce the effective $ \kappa $ and accelerate convergence, enabling solutions for systems with millions of unknowns.[8] Despite these advances, classical solvers face significant limitations for high-dimensional problems. Direct methods scale as $ O(N^3) $, demanding infeasible memory and time for $ N > 10^4 $ or so, while iterative methods scale polynomially in $ N $ and $ \kappa $ but can stall if $ \kappa $ is large (e.g., ill-conditioned matrices from discretized PDEs), leading to slow convergence or numerical instability. In very large dimensions, even sparse iterative solvers struggle with storage for vectors and the need for efficient matrix-vector products, rendering exponentially scaled systems (in terms of problem parameters) intractable on classical hardware.[12][8]

Relevant quantum computing concepts

In quantum computing, the basic unit of information is the qubit, which can exist in a superposition of its basis states, unlike classical bits that are strictly 0 or 1. The general state of a single qubit is given by

$ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, $

where $ \alpha $ and $ \beta $ are complex amplitudes satisfying $ |\alpha|^2 + |\beta|^2 = 1 $, representing the probabilities of measuring the qubit in $ |0\rangle $ or $ |1\rangle $. This superposition allows a quantum system of $ n $ qubits to represent $ 2^n $ states simultaneously, enabling parallel processing that underpins algorithms like HHL.[13] Quantum gates form the building blocks for manipulating these superposition states through unitary transformations. The Hadamard gate $ H $, for instance, generates equal superposition from a basis state:

$ H|0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}, \quad H|1\rangle = \frac{|0\rangle - |1\rangle}{\sqrt{2}}, $

creating balanced superpositions essential for exploring multiple computational branches. Controlled gates, such as the controlled-NOT (CNOT), introduce entanglement by applying an operation on a target qubit only if the control qubit is in $ |1\rangle $, linking the states of multiple qubits to perform correlated operations. These gates enable the construction of universal quantum circuits, assuming a set like Pauli gates, Hadamard, and phase shifts suffices for arbitrary computations.[13] For algorithms addressing classical problems, such as linear systems, classical data must be encoded into quantum states with sufficient fidelity. In the HHL algorithm, the input vector $ b $ is prepared as the quantum state $ |b\rangle = \sum_i \beta_i |i\rangle $, where the $ \beta_i $ are the normalized entries of $ b $, typically using amplitude encoding to achieve logarithmic qubit usage relative to the vector dimension. Efficient state preparation requires the encoding circuit to run in polylogarithmic time and ensure good overlap between $ |b\rangle $ and the eigenvectors of the system matrix, as low overlap reduces the algorithm's effective precision.[2][14] The HHL algorithm operates under the assumption of fault-tolerant quantum computing, where logical qubits are protected by quantum error correction codes to maintain coherence over extended circuit depths. This necessitates universal gate sets implementable with error rates below the fault-tolerance threshold (typically around $ 10^{-3} $ to $ 10^{-4} $ per gate), allowing polylogarithmic-depth circuits in the matrix size $ N $ and condition number $ \kappa $. Without such fault tolerance, decoherence would overwhelm the subtle phase information critical to the algorithm.[2][13]

Algorithm Description

Setup and assumptions

The HHL algorithm addresses the problem of solving a system of linear equations $ A \mathbf{x} = \mathbf{b} $, where $ A $ is an $ N \times N $ Hermitian matrix and $ \mathbf{b} $ is a known vector, by producing a quantum state encoding the solution $ \mathbf{x} $. If $ A $ is not Hermitian, it can be embedded into a larger block-diagonal Hermitian matrix $ \tilde{A} = \begin{pmatrix} 0 & A \ A^\dagger & 0 \end{pmatrix} $, effectively doubling the dimension while preserving the solution structure. Without loss of generality, assume $ A $ is scaled such that its eigenvalues lie in the interval $ [1/\kappa, 1] $, where $ \kappa $ is the condition number and $ |A| \leq 1 $. The matrix $ A $ is required to be $ s $-sparse, with at most $ s $ non-zero entries per row, and these entries must be computable in $ O(s) $ time to enable efficient Hamiltonian simulation. The condition number $ \kappa \ll \mathrm{poly}(N) $ to ensure the algorithm's runtime remains polynomial in $ \log N $.[2] The input vector $ \mathbf{b} $ is encoded as a normalized quantum state $ |\mathbf{b}\rangle = \sum_j \beta_j |u_j\rangle $, where $ |u_j\rangle $ are the eigenvectors of $ A $ with eigenvalues $ \lambda_j $, and this state must be preparable in time polynomial in $ \log N $. For the solution to have unit $ \ell_2 $-norm (up to constants), $ \mathbf{b} $ should have limited overlap with eigenvectors corresponding to very small eigenvalues, ensuring $ |\mathbf{x}| = O(1) $.[2] The output of the algorithm is a quantum state $ |\mathbf{x}\rangle $ approximating the normalized solution $ |\mathbf{x}\rangle \approx \sum_j (\beta_j / \lambda_j) |u_j\rangle / |\sum_j (\beta_j / \lambda_j) |u_j\rangle| $, with the approximation error controlled by the algorithm's precision parameters. 
This state encodes $ \mathbf{x} $ in superposition, allowing extraction of expectation values $ \langle \mathbf{x} | M | \mathbf{x} \rangle $ for suitable observables $ M $ without reconstructing the full vector classically. The sparsity assumption on $ A $ aligns with classical methods for sparse linear systems, enabling quantum speedup under similar structural constraints.[2]

Hamiltonian simulation

In the HHL algorithm, Hamiltonian simulation constructs the unitary time evolution operator $ U(t) = e^{-i A t} $ for the scaled Hermitian matrix $ A $, which is assumed to be $ s $-sparse with at most $ s $ nonzero entries per row.[15][16] This operator approximates the continuous-time dynamics generated by $ A $ over time $ t $, enabling the encoding of spectral properties into quantum states.[15] Efficient simulation relies on sparse oracle access to $ A $, defined as an oracle $ O_{A} $ that, on input $ |i\rangle |0\rangle $, outputs $ \sum_j A_{ij} |j\rangle |A_{ij}\rangle $ for the relevant nonzero entries $ j $ in row $ i $, computable in $ O(s \mathrm{polylog} N) $ time where $ N $ is the dimension.[15][16] This access model allows querying the matrix structure and values without storing the full $ A $, which is crucial for scalability in large systems.[16] The simulation is typically achieved using sparse Hamiltonian simulation techniques, such as Trotter-Suzuki decomposition for structured forms of $ A $, or linear combinations of unitaries (LCU), which expresses $ A $ as a weighted sum of implementable unitaries and selects the desired evolution via post-selection or amplification.[16] Both methods leverage the sparsity to decompose the exponential into gate-efficient circuits.[16] The computational cost of simulating $ U(t) $ scales as $ O(t |A| / \epsilon_{\mathrm{sim}}) $ queries to the oracle, where $ \epsilon_{\mathrm{sim}} $ is the simulation error, but optimized implementations achieve polylogarithmic dependence on $ N $ in gate count for fixed $ t $ and constant error.[15][16] This simulated unitary serves as the core primitive in the HHL algorithm, supplying the controlled operations needed for subsequent eigenvalue extraction.[15]
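The Trotter-Suzuki idea mentioned above can be demonstrated with small matrices. The sketch below (a classical matrix illustration, not a circuit; the splitting $ A = A_1 + A_2 $ and the names are illustrative) approximates $ e^{-iAt} $ by $ (e^{-iA_1 t/r} e^{-iA_2 t/r})^r $ and shows the error shrinking as the number of Trotter steps $ r $ grows:

```python
import numpy as np

# First-order Trotter sketch: split A = A1 + A2 and approximate
# e^{-iAt} by (e^{-iA1 t/r} e^{-iA2 t/r})^r; the error shrinks roughly as 1/r.
def evolve(M, t):
    # e^{-iMt} for Hermitian M via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(4)
G1 = rng.standard_normal((4, 4)); A1 = (G1 + G1.T) / 2
G2 = rng.standard_normal((4, 4)); A2 = (G2 + G2.T) / 2
A, t = A1 + A2, 1.0

exact = evolve(A, t)

def trotter(r):
    step = evolve(A1, t / r) @ evolve(A2, t / r)
    return np.linalg.matrix_power(step, r)

errors = [np.linalg.norm(trotter(r) - exact) for r in (1, 10, 100)]
print(errors)   # decreasing with r
```

The residual error is governed by the commutator $ [A_1, A_2] $, which is why the decomposition is exact when the pieces commute and why higher-order product formulas are used in practice.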

Phase estimation

The quantum phase estimation (QPE) subroutine in the HHL algorithm is applied to the unitary time evolution operator $ U(t) = e^{-iAt} $ derived from the Hamiltonian simulation of the Hermitian matrix $ A $, where $ t $ is a scaling time parameter. For an eigenvector $ |u_j\rangle $ of $ A $ with eigenvalue $ \lambda_j $, the corresponding eigenvalue of $ U(t) $ is $ e^{-i\lambda_j t} = e^{-i 2\pi \phi_j} $, with phase $ \phi_j = \lambda_j t / 2\pi $. The QPE estimates $ \phi_j $ to high precision, enabling recovery of an approximation $ \tilde{\lambda}_j \approx \lambda_j $ by scaling back with $ t $. This step entangles an ancillary clock register with the input state to encode the eigenvalues in the computational basis.[2] The circuit for QPE begins by initializing the clock register to $ |0\rangle^{\otimes m} $ and the system register to the input state $ |\psi\rangle $, where $ m $ is the number of qubits in the clock register. An $ m $-qubit Hadamard gate $ H^{\otimes m} $ is applied to the clock register, creating an equal superposition $ \frac{1}{\sqrt{2^m}} \sum_{k=0}^{2^m - 1} |k\rangle |\psi\rangle $. For each clock qubit $ l = 0 $ to $ m-1 $, a controlled operation applies $ U(2^l t) $ to the system register, controlled on the $ l $-th clock qubit being in state $ |1\rangle $. Finally, an inverse quantum Fourier transform (QFT) is applied to the clock register, yielding the state $ |\tilde{\phi}_j\rangle |\psi\rangle \approx |\phi_j\rangle |\psi\rangle $ for eigenvector inputs, where $ |\tilde{\phi}_j\rangle $ is the binary approximation of $ \phi_j $. 
The scaling $ \tilde{\lambda}_j = 2\pi \tilde{\phi}_j / t $ directly provides the eigenvalue estimate in the clock register.[2][17] The precision of the eigenvalue estimate is controlled by the clock register size $ m = O(\log(1/\varepsilon_{\mathrm{QPE}})) $, achieving an additive error $ |\tilde{\lambda}_j - \lambda_j| \leq \varepsilon_{\mathrm{QPE}} $ with success probability at least $ 1 - \delta $, where $ \delta $ can be made small (e.g., $ O(1/m) $) by standard QPE analysis assuming $ \lambda_j $ lies within the estimable range. This logarithmic scaling in precision qubits is a key efficiency feature of QPE, contrasting with classical methods that require linear resources for similar accuracy. In the HHL context, $ \varepsilon_{\mathrm{QPE}} $ is chosen proportionally to the desired overall solution error, typically $ O(\varepsilon / \kappa) $ where $ \kappa $ is the condition number of $ A $, but the subroutine itself operates independently of subsequent inversion steps.[2][17] When integrated into HHL, the input to QPE is the normalized right-hand-side state $ |b\rangle \approx \sum_j \beta_j |u_j\rangle $, where $ \beta_j $ are the coefficients in the eigenbasis of $ A $. The output is the entangled state $ \sum_j \beta_j |\tilde{\lambda}_j\rangle |u_j\rangle $, with the clock register holding the approximate eigenvalues conditional on each eigenvector component. This superposition preserves the amplitudes $ \beta_j $ while attaching eigenvalue information, setting up the algorithm for conditional operations on $ |\tilde{\lambda}_j\rangle $ without collapsing the coherent superposition. The success probability for the full input state inherits the per-eigenvalue guarantees, assuming the eigenvalues are well-conditioned and $ t $ is chosen appropriately (e.g., $ t = O(\kappa) $) to map phases into [0,1).[2]
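The phase-to-eigenvalue bookkeeping above can be checked numerically (a classical sketch; the eigenvalue, evolution time, and register size are illustrative, and ideal rounding stands in for the QPE measurement):

```python
import numpy as np

# Numerical sketch of the QPE phase/eigenvalue relation (values illustrative).
# QPE with m clock qubits returns an m-bit approximation of
# phi_j = lam_j * t / (2*pi); scaling back gives lam_tilde = 2*pi*phi_tilde / t.
lam = 0.7361               # an eigenvalue of A (assumed)
t = 2 * np.pi * 0.25       # evolution time chosen so the phase lies in [0, 1)
m = 8                      # clock qubits

phi = lam * t / (2 * np.pi)              # exact phase
phi_tilde = round(phi * 2**m) / 2**m     # ideal m-bit QPE outcome (nearest grid point)
lam_tilde = 2 * np.pi * phi_tilde / t    # eigenvalue estimate read from the clock

# Rounding error: at most half a grid cell in phase, i.e. pi / (2**m * t) in lambda.
assert abs(lam_tilde - lam) <= np.pi / (2**m * t)
```

Each extra clock qubit halves the grid spacing, which is the logarithmic precision scaling described above.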

Eigenvalue inversion

In the HHL algorithm, an ancillary qubit, initialized in the state $ |0\rangle_A $, is introduced to enable the conditional inversion of the estimated eigenvalues obtained from the phase estimation subroutine.[2] This ancilla allows the inversion to be encoded as an amplitude in its state, facilitating the extraction of the solution vector upon postselection. The inversion is implemented via controlled rotations on the ancilla, conditioned on the clock register that encodes the estimated eigenvalues $ \tilde{\lambda}_k $. Specifically, for each basis state $ |k\rangle $ in the clock register corresponding to $ \tilde{\lambda}_k $, a controlled-$ R_y(\theta_k) $ gate is applied to the ancilla, where $ \theta_k = 2 \arcsin(C / \tilde{\lambda}_k) $ and $ C $ is a normalization constant chosen such that $ C \leq \min_j \lambda_j $, typically $ C = O(1/\kappa) $ with $ \kappa $ being the condition number of the matrix.[2] This rotation ensures that the amplitude of $ |1\rangle_A $ is proportional to $ 1/\tilde{\lambda}_k $, while avoiding over-rotation for small eigenvalues by the bound on $ C $. The controlled rotations are realized by decomposing the multi-qubit clock register into single-qubit controls, applying a series of two-qubit gates tailored to each $ |k\rangle $.[2] Following the phase estimation step, which yields the entangled state $ \sum_j \beta_j |\tilde{\lambda}_j\rangle |u_j\rangle |0\rangle_A $ (where $ |b\rangle = \sum_j \beta_j |u_j\rangle $ and $ |u_j\rangle $ are the eigenvectors of the Hermitian matrix $ A $ with eigenvalues $ \lambda_j $), the controlled rotations evolve the state to:
\sum_j \beta_j |\tilde{\lambda}_j\rangle |u_j\rangle \left( \sqrt{1 - \left( \frac{C}{\tilde{\lambda}_j} \right)^2 } \, |0\rangle_A + \frac{C}{\tilde{\lambda}_j} |1\rangle_A \right).
[2] This transformation inverts the eigenvalues conditionally, encoding the reciprocals in the ancilla's $ |1\rangle_A $ component. The goal of this step is to prepare a state where postselection on the ancilla in $ |1\rangle_A $ projects the system onto $ \sum_j \beta_j (C / \lambda_j) |u_j\rangle $, which is proportional to $ C A^{-1} |b\rangle \approx C |x\rangle $, the normalized solution to the linear system $ A |x\rangle = |b\rangle $, up to the approximation in the eigenvalue estimates.[2] The success probability of this projection is $ \Omega(1/\kappa^2) $, reflecting the conditioning of the problem.
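The rotation angles and resulting ancilla amplitudes can be verified with a few lines of classical arithmetic (a sketch; the estimated eigenvalues are illustrative):

```python
import numpy as np

# Sketch of the controlled-rotation amplitudes (estimated eigenvalues assumed).
# R_y(theta_k) with theta_k = 2*arcsin(C/lam_k) places amplitude C/lam_k on the
# ancilla |1> state and sqrt(1 - (C/lam_k)^2) on |0>.
lams = np.array([0.25, 0.5, 1.0])   # estimated eigenvalues lam_tilde_k (illustrative)
C = lams.min()                      # normalization constant, C <= min_k lam_tilde_k

thetas = 2 * np.arcsin(C / lams)
amp1 = np.sin(thetas / 2)           # ancilla |1> amplitude
amp0 = np.cos(thetas / 2)           # ancilla |0> amplitude

assert np.allclose(amp1, C / lams)            # reciprocals of the eigenvalues encoded
assert np.allclose(amp0**2 + amp1**2, 1.0)    # amplitudes remain normalized
```

The bound $ C \leq \min_k \tilde{\lambda}_k $ keeps every $ \arcsin $ argument in $ [0, 1] $, which is the over-rotation safeguard mentioned above.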

Measurement and output

After the eigenvalue inversion step, the quantum state includes an ancilla qubit that is measured in the computational basis. A measurement outcome of 1 on this ancilla post-selects the solution subspace, yielding a state proportional to $ |x\rangle = \sum_j \beta_j \lambda_j^{-1} |u_j\rangle $, where $ \beta_j $ are the coefficients of the input vector $ |b\rangle $ in the eigenbasis of $ A $, $ \lambda_j $ are the eigenvalues, and $ |u_j\rangle $ are the corresponding eigenvectors. Upon successful post-selection, the clock register from the phase estimation is uncomputed to disentangle it, leaving the normalized solution state $ |x\rangle \approx A^{-1} |b\rangle / \| A^{-1} |b\rangle \| $ in the register. The success probability of this post-selection is at least $ \Omega(1/\kappa^2) $, where $ \kappa $ is the condition number of $ A $, due to the scaling introduced by the controlled inversion on the ancilla. If the probability is low, amplitude amplification can be applied to boost it, requiring $ O(\kappa) $ repetitions of the algorithm to obtain the state with high fidelity, though post-selection alone often suffices for many applications. The output state $ |x\rangle $ is not typically read out via full quantum state tomography, which would be inefficient; instead, it is used directly for computing expectation values such as $ \langle x | M | x \rangle $ for some observable $ M $, often via the Hadamard test to estimate inner products without collapsing the full vector. For norms or specific components, amplitude estimation techniques provide efficient readout of quantities like $ \| A^{-1} |b\rangle \| $ or $ |\langle i | x \rangle|^2 $ by sampling in the computational basis. This approach leverages the quantum superposition to extract solution properties probabilistically while preserving the advantages of the algorithm's exponential speedup.
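The post-selection step can be emulated classically for a toy system (a sketch; the matrix and right-hand side are illustrative, and exact linear algebra stands in for the quantum registers):

```python
import numpy as np

# Classical emulation of the ancilla post-selection (toy 2x2 system, values assumed).
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])              # Hermitian, eigenvalues 1 and 2, kappa = 2
b = np.array([1.0, 0.0])

lam, U = np.linalg.eigh(A)
beta = U.conj().T @ b                   # coefficients of |b> in the eigenbasis
C = lam.min()                           # rotation constant, C <= min eigenvalue

# Probability of measuring the ancilla in |1>: sum_j |beta_j|^2 (C/lam_j)^2.
p_success = np.sum(np.abs(beta) ** 2 * (C / lam) ** 2)

# The state conditioned on outcome 1 is proportional to sum_j (beta_j/lam_j) |u_j>,
# i.e. to A^{-1} b, matching the normalized classical solution.
x = U @ (beta / lam)
x /= np.linalg.norm(x)
x_ref = np.linalg.solve(A, b)
x_ref /= np.linalg.norm(x_ref)

assert np.allclose(x, x_ref)
assert p_success >= (lam.min() / lam.max()) ** 2   # Omega(1/kappa^2) bound
```

For this well-conditioned example the success probability is comfortably above the $ 1/\kappa^2 $ floor; as $ \kappa $ grows, amplitude amplification becomes necessary, as described above.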

Performance analysis

Runtime complexity

The runtime complexity of the Harrow–Hassidim–Lloyd (HHL) algorithm is analyzed in terms of query complexity to sparse oracles and overall gate count, assuming access to an $ s $-sparse Hermitian matrix $ A $ via an efficient oracle that allows row computations. In the original formulation, the total query complexity scales as $ \tilde{O}(s^2 \kappa^2 \log N / \epsilon) $, where $ \kappa $ is the condition number of $ A $, $ N $ is the dimension of the system, and $ \epsilon $ is the desired solution precision.[18] Subsequent optimizations, particularly using variable-time amplitude amplification, improve the dependence on $ \kappa $ to linear while worsening the dependence on $ \epsilon $, yielding $ \tilde{O}(\kappa \log^3 \kappa \, \log N / \epsilon^3) $.[19] The complexity breakdown reveals that quantum phase estimation (QPE) often dominates the cost, especially in high-precision regimes, requiring $ \tilde{O}\left( \left( \kappa \log N / \epsilon \right)^2 / \log(\kappa / \epsilon) \right) $ queries under optimized conditions with constant-depth simulations. In contrast, the Hamiltonian simulation subroutine incurs a cost of $ \tilde{O}(s \kappa \log N / \epsilon) $, which is typically lower when the sparsity $ s $ is small.[18][19] Regarding gate counts, the HHL algorithm achieves a total depth polylogarithmic in $ N $, specifically $ \tilde{O}(\mathrm{poly}(\log N, \log(1/\epsilon))) $, when implemented with fault-tolerant quantum computing resources. This polylogarithmic scaling in $ N $ provides an exponential advantage in matrix dimension over classical methods that scale linearly or sublinearly for sparse systems. The dependence on parameters is linear in $ \kappa $ after optimizations, logarithmic in $ N $, and worse than linear in $ 1/\epsilon $ in early improved versions, though quadratic scaling in $ \kappa $ appeared in the original formulation due to unoptimized amplification steps.[18][19]

Classical comparison

The classical algorithms for solving linear systems of equations $ A\mathbf{x} = \mathbf{b} $ provide baselines against which the HHL algorithm's performance can be evaluated, particularly in terms of computational complexity. Direct solvers, such as Gaussian elimination with partial pivoting, require $ O(N^3) $ operations for an $ N \times N $ dense matrix, rendering them inefficient for large-scale problems where $ N $ exceeds millions. For sparse matrices with at most $ s $ non-zero entries per row, iterative methods like the conjugate gradient (CG) algorithm offer improved efficiency, achieving convergence in $ O(\sqrt{\kappa} \log(1/\epsilon)) $ iterations, where $ \kappa $ is the condition number of $ A $ (the ratio of its largest to smallest singular value) and $ \epsilon $ is the desired relative error; each iteration costs $ O(sN) $ time, yielding an overall complexity of $ \tilde{O}(s N \sqrt{\kappa}) $. These classical approaches scale linearly with $ N $, lacking any logarithmic dependence on the system dimension. In contrast, the HHL algorithm demonstrates a theoretical exponential speedup over classical methods for sparse, well-conditioned linear systems under specific conditions. Its runtime of $ \tilde{O}(s^2 \kappa^2 \log N / \epsilon) $ replaces the linear $ N $ factor with $ \log N $, providing an advantage that grows exponentially with the logarithm of the matrix size, effectively trading polynomial scaling in $ \kappa $ for superpolynomial gains in $ N $. This speedup materializes primarily for $ s $-sparse matrices where $ \kappa $ and $ 1/\epsilon $ are polylogarithmic in $ N $, and the post-selection step (which succeeds with probability $ \Omega(1/\kappa^2) $) can be repeated efficiently without dominating the cost. For instance, in applications like quantum chemistry simulations involving large but sparse Hamiltonians, HHL can theoretically outperform CG by orders of magnitude when the system's ill-conditioning is mild relative to its size.
The quantum advantage of HHL is not universal and can fail in several regimes. For dense matrices where $ s \approx N $, the effective speedup reduces to at most quadratic, as the sparsity assumption no longer holds and classical direct or preconditioned iterative solvers regain competitiveness. Similarly, when $ \kappa $ grows faster than polylogarithmic in $ N $, as is common in highly ill-conditioned systems, the $ \kappa^2 $ scaling in HHL's runtime erodes the benefit, often making classical methods faster overall. In practice, hybrid classical-quantum strategies, which leverage classical preprocessing to reduce $ \kappa $ or handle dense components, frequently outperform pure HHL implementations in these scenarios. To illustrate the speedup threshold, consider a large sparse system with $ N = 2^{50} $; here, HHL is theoretically faster than classical iterative solvers provided $ \kappa \lesssim 10^9 $, assuming constant sparsity $ s = 1 $ and unit precision $ \epsilon = 1 $, as the quantum log-factor dominates the classical linear term below this condition-number boundary.[18]
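The quoted crossover can be reproduced with a back-of-envelope cost model (a sketch with all constants and polylog refinements dropped; the $ s N \sqrt{\kappa} $ cost for CG and the quadratic-in-$ \kappa $ form of the original HHL bound are modeling assumptions of this illustration):

```python
import math

# Back-of-envelope crossover between classical CG and HHL (cost models assumed).
N = 2**50
s, eps = 1, 1.0

def cg_cost(kappa):                 # conjugate gradient ~ s * N * sqrt(kappa)
    return s * N * math.sqrt(kappa)

def hhl_cost(kappa):                # HHL bound ~ s^2 * kappa^2 * log2(N) / eps
    return s**2 * kappa**2 * math.log2(N) / eps

# Equating the two costs gives kappa_star = (N / log2(N)) ** (2/3).
kappa_star = (N / math.log2(N)) ** (2 / 3)

assert 1e8 < kappa_star < 2e9                                  # roughly 10^9
assert hhl_cost(kappa_star / 10) < cg_cost(kappa_star / 10)    # quantum wins below
assert hhl_cost(kappa_star * 10) > cg_cost(kappa_star * 10)    # classical wins above
```

Under this model the threshold lands near $ \kappa \approx 10^9 $ for $ N = 2^{50} $, consistent with the figure quoted above.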

Optimality

The query complexity of the HHL algorithm for solving sparse linear systems has been shown to match known lower bounds up to logarithmic factors. Specifically, any quantum algorithm solving the quantum linear systems problem (QLSP) with an $ s $-sparse matrix of condition number $ \kappa $ requires $ \Omega(\kappa) $ queries to the sparse access oracle, even for constant precision $ \varepsilon $ and matrix dimension $ N $.[20] This lower bound is established through reductions from hard search problems in the sparse oracle model, demonstrating that the $ \kappa $ dependence in HHL is unavoidable without additional assumptions.[20] Subsequent improvements have refined HHL's complexity while preserving near-optimality. In 2017, Childs, Kothari, and Somma introduced a quantum linear systems solver that exponentially improves the dependence on precision $ \varepsilon $, changing it from linear in $ 1/\varepsilon $ to polylogarithmic in $ 1/\varepsilon $, yielding a total query complexity of $ \tilde{O}(\kappa^2 \log N \cdot \mathrm{polylog}(1/\varepsilon)) $ under the original HHL assumptions.[21] This polynomial method achieves the improved $ \varepsilon $ scaling but retains higher powers of $ \kappa $ compared to later frameworks. The quantum singular value transformation (QSVT) framework further generalizes HHL to handle non-invertible matrices by applying optimal-degree polynomial transformations to the singular values of the input matrix, without relying on phase estimation.
In the block-encoding model, QSVT-based linear system solvers achieve query complexity $ \tilde{O}(\kappa \, \mathrm{polylog}(N, 1/\varepsilon)) $, which is optimal for the eigenvalue inversion step and extends HHL's applicability to singular value decompositions and other non-Hermitian cases with minimal overhead.[22] As of 2025, recent advances include randomized QSVT variants that achieve gate complexity independent of the number of terms in sparse Hamiltonians while maintaining the near-optimal $ \tilde{O}(\kappa \, \mathrm{polylog}(N, 1/\varepsilon)) $ scaling, enhancing practicality for large-scale simulations.[23] Despite these advances, open questions remain regarding optimality beyond the sparse, Hermitian setting. In particular, achieving similar $ \kappa $-independent or low-overhead performance for non-sparse matrices or directly solving non-Hermitian systems without embedding into larger Hermitian forms lacks tight bounds, with current methods incurring additional logarithmic or polynomial factors.

Error analysis

Error sources

The Harrow–Hassidim–Lloyd (HHL) algorithm relies on several approximations inherent to its quantum subroutines, which introduce errors that must be controlled to achieve the desired solution precision. These errors primarily stem from the discretization in quantum phase estimation (QPE), the approximate inversion of eigenvalues, imperfections in initial state preparation, and inaccuracies in Hamiltonian simulation. Each contributes to the overall infidelity between the output state and the ideal solution, with bounds derived from the algorithm's analysis.[2][24] A key source of error arises in the QPE subroutine, which discretizes the continuous eigenvalues of the normalized Hamiltonian into a finite binary representation using $ m $ ancillary qubits. This leads to an eigenvalue rounding error bounded by $ O(1/2^m) $, where the estimated eigenvalue $ \tilde{\lambda}_j $ satisfies $ |\tilde{\lambda}_j - \lambda_j| \leq 1/2^m $. Consequently, the inverted estimate satisfies $ 1/\tilde{\lambda}_j \approx 1/\lambda_j + O(\epsilon_{\mathrm{QPE}} / \lambda_{\min}^2) $, with $ \epsilon_{\mathrm{QPE}} = O(1/2^m) $, amplifying for small eigenvalues near the minimum $ \lambda_{\min} $. This discretization is fundamental to QPE, as detailed in the phase estimation procedure.[2][24] The eigenvalue inversion step introduces approximation errors due to the precision of the controlled rotation gates approximating the reciprocal eigenvalues. The inversion is implemented via a rotation whose $ |1\rangle $-amplitude is $ C/\tilde{\lambda}_j $, where $ C = O(\lambda_{\min}) $ ensures the amplitudes remain valid for the rotation. However, the choice of finite $ C $ affects the success probability of post-selection, scaling as $ \Omega(1/\kappa^2) $, but does not introduce error in the fidelity of the post-selected solution state, which remains proportional to the ideal solution.
Rotation gate precision further contributes to infidelity if not sufficiently accurate.[2][24] Errors in preparing the initial state $ |b\rangle $, which encodes the right-hand side vector of the linear system, propagate through the algorithm. If the preparation circuit achieves the state with infidelity $ \delta $ (i.e., the prepared state has overlap $ 1 - \delta $ with the ideal $ |b\rangle $), this infidelity leads to an output error bounded by $ O(\sqrt{\delta}) $ in the solution state, as the subsequent operations amplify the initial deviation proportionally to the square root of the overlap. Efficient state preparation, often via quantum random access memory (qRAM), is assumed but introduces this controllable error term.[2][24] Hamiltonian simulation errors, arising from approximations like Trotterization or linear combination of unitaries (LCU), add unitary infidelity $ \epsilon_{\mathrm{sim}} $ to the controlled evolutions used in QPE. These errors accumulate over the multiple applications required for phase estimation, leading to amplified distortions in the eigenvalue estimates; specifically, the QPE output fidelity degrades by a factor related to $ 1 + O(\epsilon_{\mathrm{sim}} \cdot t_0) $, where $ t_0 $ is the simulation evolution time scaling with the condition number and desired precision. For sparse Hamiltonians, higher-order Trotter or LCU methods can reduce $ \epsilon_{\mathrm{sim}} $, but the error remains a primary approximation source.[2][24]
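The amplification of QPE rounding error by the inversion can be seen with a one-line estimate (a sketch; the eigenvalue and register size are illustrative):

```python
# Sketch: a QPE rounding error eps on a small eigenvalue is amplified by
# roughly 1/lam^2 after inversion (values illustrative).
lam = 0.01            # small eigenvalue near lambda_min
eps = 1 / 2**10       # rounding error with m = 10 clock qubits
lam_tilde = lam + eps

inv_error = abs(1 / lam_tilde - 1 / lam)
# First-order estimate: eps / lam^2, up to higher-order corrections.
assert 0.5 * eps / lam**2 <= inv_error <= eps / lam**2
```

Here a rounding error of about $ 10^{-3} $ on an eigenvalue of $ 10^{-2} $ shifts the reciprocal by order 10, illustrating why the smallest eigenvalues dominate the error budget.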

Precision scaling

The precision of the HHL algorithm's output state is fundamentally limited by errors arising from quantum phase estimation (QPE), Hamiltonian simulation, and post-selection effects, with the total error bound given by $ \| |x\rangle - |x_{\mathrm{ideal}}\rangle \| \leq O(\kappa \varepsilon_{\mathrm{QPE}} + \varepsilon_{\mathrm{sim}} + \delta) $, where $ \kappa $ is the condition number of the input matrix, $ \varepsilon_{\mathrm{QPE}} $ is the QPE precision, $ \varepsilon_{\mathrm{sim}} $ is the simulation error, and $ \delta $ accounts for post-selection inaccuracies and initial state infidelity.[2] To achieve an overall output precision of $ \varepsilon $, the QPE precision must be set to $ \varepsilon_{\mathrm{QPE}} = O(\varepsilon / \kappa) $, as errors in eigenvalue inversion amplify linearly with $ \kappa $ due to the combined effect of eigenvalue sensitivity and the $ \lambda_j^{-1} $ scaling.[2] This precision requirement imposes significant overhead on resource demands: the number of ancillary qubits for QPE increases to $ O(\log(\kappa / \varepsilon)) $ to resolve eigenvalues finely enough, while the worst-case runtime scales as $ O(\kappa^3 \log N / \varepsilon) $ due to repeated circuit executions for error suppression and success probability boosting, where $ N $ is the matrix dimension.[2] However, optimizations can reduce this to $ O(\kappa \log N / \varepsilon) $ by leveraging advanced techniques that minimize full-circuit repetitions.
The condition number $ \kappa $ exacerbates error amplification, as perturbations in eigenvalue estimates propagate through the inversion, necessitating well-conditioned matrices (small $ \kappa $) for practical viability; ill-conditioned systems can render the output unreliable even with high precision settings.[2] Mitigations focus on efficient error control without excessive overhead: oblivious amplitude amplification boosts the low success probability (initially $ \Omega(1/\kappa^2) $) to near-constant levels using $ O(\kappa) $ queries, avoiding full HHL circuit repetitions by amplifying directly on the clock register.[2] Additionally, variable-time QPE enhances efficiency by adaptively adjusting evolution times based on measured phases, reducing the average runtime while maintaining precision bounds. These approaches collectively ensure that precision scaling remains polynomial in $ \kappa $ and $ 1/\varepsilon $, preserving the algorithm's theoretical advantages over classical methods for sparse, well-conditioned problems.[2]
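The clock-register sizing implied by these bounds can be computed directly (a sketch; the condition number and target precision are illustrative):

```python
import math

# Resource sketch (model from the text): to reach overall precision eps, QPE
# must resolve eigenvalues to eps_qpe = O(eps / kappa), which needs
# m = O(log2(kappa / eps)) clock qubits.
kappa, eps = 100, 1e-3
eps_qpe = eps / kappa                    # QPE precision target
m = math.ceil(math.log2(1 / eps_qpe))    # clock-register size

assert m == 17                           # log2(1e5) ~ 16.6, rounded up
```

The logarithmic growth of $ m $ in $ \kappa/\varepsilon $ is mild, but each extra clock qubit doubles the number of controlled-evolution applications, which is where the runtime cost accrues.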

Applications

Core applications

The HHL algorithm finds primary application in quantum machine learning, particularly for solving least-squares problems central to linear regression tasks. By efficiently inverting sparse, well-conditioned matrices, HHL enables the preparation of a quantum state proportional to the solution vector, allowing estimation of regression coefficients through measurement of expectation values. This approach achieves an exponential speedup over classical methods for high-dimensional datasets, provided the input data can be encoded into a quantum state. A representative example is the use of HHL to compute the solution to $ A \mathbf{w} = \mathbf{b} $, where $ A $ is the design matrix and $ \mathbf{w} $ the weight vector, facilitating tasks like supervised learning on exponentially large feature spaces. In kernel-based methods, HHL supports the computation of inner products between solution states $ \langle x | x \rangle $, which underpins algorithms such as quantum support vector machines (QSVMs). These methods leverage HHL to solve the dual optimization problem for classification, where kernel evaluations replace explicit feature mappings, yielding logarithmic dependence on data size rather than polynomial. This has been demonstrated in binary classification problems, where HHL accelerates the training phase by preparing the support vector solution state for subsequent amplitude estimation.[25] For optimization, HHL serves as a subroutine in solving linear programs through dual formulations, particularly by speeding up iterations in interior-point methods (IPMs). In these methods, repeated solutions to linear systems arising from Newton steps are required; HHL's polylogarithmic scaling in matrix dimension can quadratically accelerate convergence compared to classical IPMs, assuming sparse and Hermitian matrices. 
This application is especially relevant for dense linear programs, where the algorithm inverts the Hessian-like matrices to find feasible directions toward the optimum. Seminal work has shown that hybrid quantum-classical IPMs using HHL achieve exponential speedup in the bit precision for certain linear programs.[26] In quantum simulation, HHL enables the solution of time-dependent linear differential equations, including the Schrödinger equation, by reformulating the dynamics as a linear system inversion. Specifically, the time evolution can be addressed by solving equations of the form $ (i \partial_t - H) |\psi(t)\rangle = 0 $, where $ H $ is the Hamiltonian, through phase estimation and matrix inversion to propagate the state efficiently. This provides an exponential advantage over classical simulation for sparse Hamiltonians, allowing high-precision evolution over long times with query complexity scaling as $ \kappa \log(1/\epsilon) / \log(\log(1/\epsilon)) $, where $ \kappa $ is the condition number and $ \epsilon $ the error. The approach has been applied to initial-value problems in quantum dynamics, demonstrating superconvergence for oscillatory systems. Data fitting represents another core use, where HHL delivers exponential speedup for sparse signal processing, such as in compressed sensing scenarios. By solving underdetermined least-squares systems $ A \mathbf{x} = \mathbf{b} $ with sparse $ \mathbf{x} $, HHL prepares a state encoding the minimal-norm solution, from which sparsity-promoting norms can be estimated via post-selection or amplitude amplification. This is particularly effective for reconstructing signals from incomplete measurements, as the algorithm's runtime depends logarithmically on the signal dimension while exponentially improving over classical compressed sensing for well-conditioned, low-rank data. 
For instance, fitting exponential models to large datasets uses HHL to evaluate fit quality and parameter estimates in superposition, enabling applications like quantum state tomography with reduced sampling overhead.
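The least-squares reduction that HHL targets can be set up classically via the normal equations, whose Gram matrix is Hermitian positive semidefinite (a toy sketch; the data are randomly generated for illustration):

```python
import numpy as np

# Classical setup of the least-squares system HHL targets (toy data assumed):
# the normal equations (A^T A) w = A^T b have a Hermitian (here real symmetric)
# positive semidefinite matrix, so HHL-style inversion applies to them directly.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))          # design matrix (rows = samples)
w_true = np.array([1.0, -2.0, 0.5])  # ground-truth weights
b = A @ w_true                       # noiseless targets, so the fit is exact

G = A.T @ A                          # Gram matrix of the normal equations
w = np.linalg.solve(G, A.T @ b)

assert np.allclose(w, w_true)
```

A quantum version would prepare a state proportional to $ \mathbf{w} $ rather than output the weights, so only expectation values of the fit are directly accessible, as discussed above.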

Extensions and variants

One prominent extension of the HHL algorithm addresses the limitation that the original formulation requires the input matrix $ A $ to be Hermitian. For non-Hermitian matrices, the algorithm can be adapted by embedding $ A $ into a larger Hermitian matrix of the form $ \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix} $, which doubles the matrix dimension but introduces only a constant-factor overhead in resources. This embedding preserves the essential structure needed for quantum phase estimation while enabling the solution of the original non-Hermitian system through post-processing on the doubled space. More recent variants focus on iterative approaches to mitigate the strong dependence of HHL on the condition number $ \kappa $ of $ A $. In 2025, an iterative HHL algorithm was proposed specifically for computing nuclear resonances, leveraging eigenvector continuation to analytically extend solutions beyond physical boundaries and iteratively refine estimates with reduced sensitivity to $ \kappa $.[27] This method applies successive HHL iterations on modified systems, using classical projections to stabilize convergence and achieve accurate resonance energies in few-body nuclear systems after a small number of steps, such as 5–6 iterations for targeted eigenvalues.[27] Hybrid variants integrate classical preprocessing with the quantum core of HHL to enhance compatibility with noisy intermediate-scale quantum (NISQ) devices.
A 2024 hybrid HHL method employs classical preconditioning to guide eigenvalue inversion, estimating singular values with higher precision (e.g., two extra bits) before quantum processing, which reduces overall error by up to 57% in simulations and demonstrates feasibility on hardware like IBM Torino for small 2×2 systems.[28] This approach uses semiclassical quantum phase estimation with mid-circuit measurements, reducing the ancillary-qubit count to one and shrinking gate depth, while classical steps handle spectral scaling for better eigenvalue resolution.[29] A broader generalization of HHL arises through the quantum singular value transformation (QSVT) framework, which applies arbitrary polynomials to the singular values of a block-encoded matrix without requiring full quantum phase estimation. QSVT-based HHL variants perform matrix inversion by selecting a polynomial that approximates $ 1/\sigma_i $ for singular values $ \sigma_i $, achieving optimal query complexity of $ O(\kappa \log(1/\epsilon)) $ to precision $ \epsilon $, independent of matrix sparsity assumptions in the original HHL. This enables extensions to non-unitary operations and handles ill-conditioned systems more robustly, unifying HHL with other linear algebra tasks under a single protocol.
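The Hermitian dilation used for non-Hermitian $ A $ can be checked classically (a toy sketch; the matrix and right-hand side are illustrative, and a classical solve stands in for the quantum inversion):

```python
import numpy as np

# Sketch of the Hermitian dilation for a non-Hermitian A (toy values assumed):
# solve the doubled system H y = (b, 0)^T with H = [[0, A], [A^H, 0]];
# the lower block of y carries the solution of A x = b.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # non-Hermitian example
b = np.array([1.0, 1.0])

n = A.shape[0]
H = np.block([[np.zeros((n, n)), A],
              [A.conj().T, np.zeros((n, n))]])
assert np.allclose(H, H.conj().T)   # the embedding is Hermitian

rhs = np.concatenate([b, np.zeros(n)])
y = np.linalg.solve(H, rhs)
x = y[n:]                           # lower block is A^{-1} b

assert np.allclose(A @ x, b)
```

The block structure makes the constant-factor overhead concrete: the dimension doubles, but the solve on the dilation recovers exactly the original system's solution.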

Implementations

Early experiments

The first experimental realization of the HHL algorithm occurred in 2013 using photonic quantum hardware. Cai et al. demonstrated the algorithm on a linear optical network with four photonic qubits and four controlled-NOT gates, solving 2×2 linear systems with solution fidelities ranging from 0.825 to 0.993.[30] This proof-of-concept highlighted the feasibility of quantum linear solving but was constrained to small-scale systems due to the probabilistic nature of photonic gates. Independently, Barz et al. implemented a two-qubit photonic processor based on polarization-encoded qubits and entangling operations, applying the HHL algorithm to simple linear equations with process fidelities of 64.7% to 98.1%.[31] NMR-based experiments followed shortly after, providing deterministic control over qubit interactions. In 2014, Pan et al. executed the HHL algorithm on a four-qubit liquid-state NMR quantum computer, solving 2×2 systems across three distinct test cases and achieving overall fidelities exceeding 96%.[32] These implementations explicitly demonstrated the core subroutines of quantum phase estimation for eigenvalue extraction and controlled inversion for matrix conditioning, with errors primarily arising from pulse imperfections rather than decoherence. Subsequent NMR work in the mid-2010s extended to 3–4 qubit configurations, refining these steps for improved scalability in liquid-state systems. Superconducting qubit platforms enabled further progress toward larger matrices in the late 2010s. In 2017, Zheng et al. reported an implementation on a four-qubit superconducting processor from the University of Science and Technology of China, solving 2×2 linear systems with a process fidelity of 0.837 ± 0.006.[33] Practical performance was bottlenecked by gate errors and qubit decoherence times on the order of microseconds. Efforts by other groups have explored larger systems but underscored noise as the dominant limitation. 
These early demonstrations established proof-of-principle success for the HHL algorithm across diverse hardware modalities, achieving high-fidelity solutions for toy problems. Key findings revealed that experimental errors were overwhelmingly due to hardware decoherence and imperfect gates, rather than approximations inherent to the algorithm itself, paving the way for noise-mitigation strategies in subsequent implementations.

Recent developments

In 2024, researchers demonstrated a hybrid variant of the HHL algorithm tailored for noisy intermediate-scale quantum (NISQ) devices, incorporating classical optimizations to enhance feasibility on current hardware. This approach, termed hybrid HHL++, leverages semiclassical quantum phase estimation and spectral scaling to reduce qubit and gate requirements while maintaining solution fidelity. Implemented on Quantinuum's H-series trapped-ion processors, it successfully solved linear systems for portfolio optimization problems with matrix sizes up to 8×8, achieving fidelity close to 1 as verified by controlled-SWAP tests, at two-qubit gate depths of up to 291 on real hardware.[29] Advancements in software frameworks have enabled more practical simulations and hybrid executions of the HHL algorithm in 2025. Using the Qrisp high-level quantum programming language integrated with Xanadu's Catalyst compiler, demonstrations showcased hybrid quantum-classical workflows for solving linear systems, including applications to finite-difference approximations of Poisson equations in one and two dimensions. These implementations achieved numerical accuracy on the order of $ 10^{-3} $, comparable to classical methods like the Thomas algorithm for one-dimensional cases, while highlighting the algorithm's potential for partial differential equation solvers despite limitations in conditioning.[34][35] Iterative refinements to the HHL algorithm have shown promise in specialized domains such as nuclear physics. A 2025 study introduced an iterative HHL framework combined with eigenvector continuation and complex scaling to compute nuclear resonances, particularly for the α-α system.
This method reduces the number of oracle queries by approximately 50% compared to standard approaches by leveraging continuation techniques to approximate non-Hermitian eigenvalues, yielding resonant states in good agreement with classical computations and offering efficiency gains for coupled-channel problems.[27] Security assessments of the HHL algorithm have gained attention amid growing concerns over quantum hardware vulnerabilities. In 2025, evaluations targeted fault injection-like attacks, including improper initialization and higher-energy disruptions on critical qubits such as ancilla, clock, and input states during matrix inversion steps. A proposed defense mechanism, involving circuit redesigns for self-detection and error mitigation, was validated through simulations and executions on IBM quantum hardware, preserving solution accuracy with minimal overhead and resilience to noise-induced faults.[36]

Challenges

Theoretical limitations

The HHL algorithm's efficiency is highly sensitive to the condition number $ \kappa $ of the input matrix $ A $, defined as the ratio of its largest to smallest singular values. The runtime scales as $ \tilde{O}(\kappa^2 \log N / \epsilon) $, where $ N $ is the matrix dimension and $ \epsilon $ is the desired precision, leading to no quantum speedup over classical methods if $ \kappa = \Omega(\mathrm{poly}(N)) $, as the overall complexity then becomes polynomial in $ N $ rather than logarithmic.[2] Moreover, errors in the phase estimation step are strongly amplified by the inversion, since the coefficients $ 1/\lambda_j $ magnify small eigenvalue perturbations, potentially rendering the solution unreliable for ill-conditioned systems without additional conditioning techniques.[2] The algorithm operates within an oracle model that demands efficient sparse access to $ A $, requiring it to be $ s $-sparse (at most $ s $ non-zero entries per row, with $ s = O(\mathrm{poly}(\log N)) $) and row-computable in $ O(s) $ classical time. This enables Hamiltonian simulation of $ e^{iAt} $ in $ \tilde{O}(\log N \cdot s^2 \kappa^2 / \epsilon) $ time, but random or dense matrices lack such sparse structure, necessitating costly preprocessing to construct an appropriate oracle and often eliminating the exponential advantage.[2] The output of HHL is a quantum state $ |x\rangle $ proportional to the solution vector, which provides no direct classical access to the full solution; efficient readout of all components would require $ \Omega(N) $ measurements, collapsing the speedup.
This state is only useful in contexts requiring further quantum processing, such as amplitude amplification or integration into larger quantum algorithms, without which its practical utility is limited.[37][2] HHL assumes the matrix AA corresponds to a Hamiltonian that can be efficiently simulated unitarily via sparse access, restricting applicability to structured problems; for general non-unitarily simulable Hamiltonians, embedding into a larger unitary (e.g., via block encoding) introduces significant overhead in query complexity and ancilla qubits, often scaling as O(κ)O(\kappa) or worse and undermining scalability.[2][37]
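The sensitivity to the condition number can be seen classically as well. The following sketch (illustrative, not drawn from the article's references) constructs a deliberately ill-conditioned diagonal matrix and shows that a tiny relative perturbation of b along the small-eigenvalue direction is amplified by a factor of κ in the solution, mirroring how HHL's 1/λⱼ inversion step magnifies small-eigenvalue errors:

```python
import numpy as np

# Illustrative sketch: the condition number kappa = sigma_max / sigma_min
# bounds how much a relative error in b can be amplified when solving
# A x = b.  HHL's inversion step multiplies each eigencomponent by
# 1/lambda_j, so small eigenvalues produce exactly this kind of blow-up.
A = np.diag([1.0, 1e-8])          # ill-conditioned: kappa = 1e8
kappa = np.linalg.cond(A)

b = np.array([1.0, 0.0])          # b along the large-eigenvalue direction
db = np.array([0.0, 1e-10])       # tiny perturbation along the small one

x = np.linalg.solve(A, b)
x_perturbed = np.linalg.solve(A, b + db)

relative_input_error = np.linalg.norm(db) / np.linalg.norm(b)
relative_output_error = np.linalg.norm(x_perturbed - x) / np.linalg.norm(x)
amplification = relative_output_error / relative_input_error
# amplification equals kappa here: this choice of b and db attains the
# worst case.
```

With b and db chosen adversarially as above, the amplification factor exactly matches κ = 10⁸; for generic perturbations it can be smaller, but κ always gives the worst-case bound.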

Practical difficulties

Implementing the HHL algorithm on current noisy intermediate-scale quantum (NISQ) devices faces significant hurdles due to inherent hardware limitations, particularly noise and decoherence. NISQ systems exhibit two-qubit gate error rates typically ranging from 0.05% to 0.5%, with leading devices approaching 0.1% or better; these errors accumulate rapidly in the deep circuits HHL requires, leading to fidelity degradation and unreliable outputs.[29] Without full quantum error correction, such errors necessitate hybrid approaches or mitigation techniques, but even then, achieving meaningful precision remains challenging on devices with 50–100 qubits.[14] Moreover, while decoherence times on advanced superconducting qubits now reach 0.3–1.6 ms, they still constrain the total circuit depth before quantum information is lost, which is particularly problematic for HHL's phase estimation steps, which demand prolonged coherence.[38]

State preparation of the input vector |b⟩ represents another major overhead in HHL implementations. For arbitrary vectors, quantum amplitude encoding requires a circuit depth scaling as O(√N) in the worst case, where N is the matrix dimension, imposing significant resource demands on NISQ hardware lacking efficient quantum random access memory (QRAM).[14] For structured data, however, such as vectors arising from Fourier transforms, state preparation can leverage the quantum Fourier transform (QFT) to reduce this cost to polylogarithmic in N, enabling more feasible encoding in applications like signal processing.[14] This overhead not only increases gate counts but also amplifies susceptibility to noise, often requiring multiple repetitions to achieve acceptable success probabilities.
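The classical side of amplitude encoding can be sketched in a few lines. The function below (its name and interface are ours, not a library API) pads a vector to a power-of-two length and normalises it to unit 2-norm, since the entries of b become the amplitudes of |b⟩ and an N-dimensional vector occupies ⌈log₂ N⌉ qubits:

```python
import numpy as np

# Minimal sketch of the classical preprocessing behind amplitude encoding
# (prepare_amplitudes is a hypothetical helper, not a library function).
# The entries of b become the amplitudes of |b>, so b must be padded to a
# power-of-two length and normalised to unit 2-norm.
def prepare_amplitudes(b):
    b = np.asarray(b, dtype=complex)
    n_qubits = max(1, int(np.ceil(np.log2(len(b)))))
    padded = np.zeros(2 ** n_qubits, dtype=complex)
    padded[: len(b)] = b
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits

# A length-3 vector is padded to length 4 and encoded on 2 qubits.
amplitudes, n_qubits = prepare_amplitudes([3.0, 4.0, 0.0])
```

This preprocessing is cheap; the O(√N) cost quoted above refers to the depth of the quantum circuit that actually loads these amplitudes onto the qubits, which is where the NISQ bottleneck lies.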
Recent hybrid HHL variants integrate classical preprocessing to reduce quantum circuit depth, enabling demonstrations on matrices up to 64×64 as of 2025, though full fault tolerance remains necessary for quantum advantage.[29]

Scalability is further impeded by the qubit and depth requirements of the core quantum phase estimation (QPE) subroutine. QPE demands O(log N) ancillary qubits for precision scaling with the system size N, yet the associated controlled operations produce circuit depths that exceed available coherence times on current hardware, limiting demonstrations to small matrices far from the regime where the exponential scaling would matter.[29] Even modest values of N lead to control depths incompatible with NISQ constraints, necessitating circuit optimizations such as semiclassical QPE to compress ancilla usage and mitigate decoherence.[29] As of 2025, additional concerns include vulnerability to hybrid attacks, such as side-channel exploits targeting the oracle implementations in HHL's Hamiltonian simulation phase, where timing or power variations could leak sensitive circuit details.[39]

Resource estimates for fault-tolerant scaling highlight the path forward: achieving quantum advantage for N ≈ 10^6 would require on the order of 10^5 physical qubits under surface-code error correction, with runtimes inflated polynomially by the noise tolerance (e.g., a poly(1/ε_noise) overhead) and total execution times reaching 10^6 seconds or more.[40] These projections underscore the need for advances in error-corrected architectures to realize HHL's potential beyond NISQ limitations.[40]
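A back-of-envelope calculation makes these resource figures concrete. The sketch below uses the scalings quoted above; the register breakdown and the surface-code overhead factor of roughly 2d² physical qubits per logical qubit at code distance d are illustrative assumptions of ours, not figures taken from the cited estimates:

```python
import math

# Back-of-envelope resource sketch (illustrative assumptions, not figures
# from the cited resource estimates).
def hhl_logical_qubits(N, precision_bits):
    # |b> register: ceil(log2 N) qubits; QPE ancilla register:
    # precision_bits qubits; one extra ancilla for the conditional
    # 1/lambda rotation.
    return math.ceil(math.log2(N)) + precision_bits + 1

def surface_code_physical_qubits(logical_qubits, code_distance):
    # Rough rule of thumb: about 2 * d^2 physical qubits per logical qubit
    # at code distance d.
    return logical_qubits * 2 * code_distance ** 2

logical = hhl_logical_qubits(N=10**6, precision_bits=30)
physical = surface_code_physical_qubits(logical, code_distance=25)
# For N ~ 10^6 this lands in the tens of thousands of physical qubits,
# broadly consistent with the order-of-10^5 figure quoted above.
```

Varying the assumed precision or code distance moves the estimate by small polynomial factors, but the qualitative conclusion stands: fault-tolerant HHL at useful problem sizes needs orders of magnitude more qubits than any NISQ device provides.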

References
