Hamiltonian simulation

from Wikipedia

Hamiltonian simulation (also referred to as quantum simulation) is a problem in quantum information science concerned with the computational complexity of, and quantum algorithms for, simulating quantum systems; it asks for algorithms that implement the evolution of a quantum state efficiently. The problem was posed by Richard Feynman in 1982, who proposed a quantum computer as a possible solution, since the cost of simulating general Hamiltonians classically seems to grow exponentially with the system size.[1]

Problem statement

In the Hamiltonian simulation problem, given a Hamiltonian H (a 2^n × 2^n Hermitian matrix acting on n qubits), a time t and a maximum simulation error ε, the goal is to find an algorithm that approximates U such that ‖U − e^{−iHt}‖ ≤ ε, where e^{−iHt} is the ideal evolution and ‖·‖ is the spectral norm. A special case of the Hamiltonian simulation problem is the local Hamiltonian simulation problem, in which H is a k-local Hamiltonian on n qubits, H = Σ_{j=1}^{m} H_j, where each H_j acts non-trivially on at most k qubits instead of all n qubits.[2] The local Hamiltonian simulation problem is important because most Hamiltonians that occur in nature are k-local.[2]

Techniques

Product formulas

Also known as Trotter formulas or Trotter–Suzuki decompositions, product formulas simulate the sum of terms of a Hamiltonian by simulating each term separately for a small time slice.[3][4] If H = A + B, then

    e^{−iHt} ≈ (e^{−iAt/r} e^{−iBt/r})^r

for a large r, where r is the number of time steps to simulate for. The larger the r, the more accurate the simulation.
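
The product-formula approximation is easy to check numerically on small matrices. The following is a minimal classical sketch (matrix arithmetic stands in for the quantum circuit; the random 4×4 Hermitian terms A and B are illustrative assumptions, rescaled to unit spectral norm):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    """Random n x n Hermitian matrix, rescaled to unit spectral norm."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    M = (M + M.conj().T) / 2
    return M / np.linalg.norm(M, 2)

A, B = random_hermitian(4), random_hermitian(4)
t = 1.0
exact = expm(-1j * (A + B) * t)          # ideal evolution e^{-iHt}, H = A + B

def trotter1(A, B, t, r):
    """First-order product formula: (e^{-iAt/r} e^{-iBt/r})^r."""
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    return np.linalg.matrix_power(step, r)

for r in (1, 10, 100):
    print(r, np.linalg.norm(trotter1(A, B, t, r) - exact, 2))
```

The printed spectral-norm error shrinks as r grows, matching the claim that a larger r gives a more accurate simulation.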

If the Hamiltonian is represented as a sparse matrix, the distributed edge-coloring algorithm can be used to decompose it into a sum of terms, which can then be simulated by a Trotter–Suzuki algorithm.[5]

Taylor series

The evolution operator can be expanded as

    e^{−iHt} = Σ_{n=0}^{∞} (−iHt)^n / n!

by the Taylor series expansion.[6] This says that during the evolution of a quantum state, the Hamiltonian is applied over and over again to the system with a varying number of repetitions. The first term is the identity matrix, so the system doesn't change when it is applied, but in the second term the Hamiltonian is applied once. For practical implementations, the series has to be truncated at some order K, where the bigger the K, the more accurate the simulation.[7] This truncated expansion is then implemented via the linear combination of unitaries (LCU) technique for Hamiltonian simulation.[6] Namely, one decomposes the Hamiltonian as H = Σ_l α_l H_l such that each H_l is unitary (for instance, the Pauli operators always provide such a basis), and so each power H^n appearing in the series is also a linear combination of unitaries.
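
The effect of the truncation order can be checked classically. A small sketch (the random 4×4 Hermitian H is an illustrative assumption; it is rescaled so that ‖Ht‖ ≤ 1, the regime in which the series converges rapidly):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
H = H / np.linalg.norm(H, 2)     # rescale so ||H|| = 1
t = 1.0
exact = expm(-1j * H * t)

def taylor_truncated(H, t, K):
    """Sum_{k=0}^{K} (-iHt)^k / k!, the truncated Taylor series of e^{-iHt}."""
    U = np.zeros_like(H)
    term = np.eye(H.shape[0], dtype=complex)   # holds (-iHt)^k, starting at k = 0
    for k in range(K + 1):
        U = U + term / factorial(k)
        term = term @ (-1j * H * t)
    return U

for K in (2, 4, 8):
    print(K, np.linalg.norm(taylor_truncated(H, t, K) - exact, 2))
```

Since ‖Ht‖ ≤ 1, the truncation error falls off factorially in K, so only a handful of terms are needed for high accuracy.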

Quantum walk

In the quantum walk approach, a unitary operation whose spectrum is related to the Hamiltonian is implemented, and then the quantum phase estimation algorithm is used to adjust the eigenvalues. This makes it unnecessary to decompose the Hamiltonian into a sum of terms as the Trotter–Suzuki methods do.[6]

Quantum signal processing

The quantum signal processing algorithm works by transducing the eigenvalues of the Hamiltonian into an ancilla qubit, transforming the eigenvalues with single-qubit rotations, and finally projecting the ancilla.[8] It has been proven optimal in query complexity for Hamiltonian simulation.[8]
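
At the heart of quantum signal processing is the fact that e^{−ixt} can be approximated on [−1, 1] by a low-degree polynomial in x, via the Jacobi–Anger expansion e^{−ixt} = J₀(t) + 2 Σ_{k≥1} (−i)^k J_k(t) T_k(x), where J_k are Bessel functions and T_k are Chebyshev polynomials. A classical sketch of the truncated expansion (the choices t = 2 and degree K = 20 are illustrative):

```python
import numpy as np
from scipy.special import jv                    # Bessel functions J_k
from numpy.polynomial.chebyshev import chebval  # evaluate a Chebyshev series

t, K = 2.0, 20
x = np.linspace(-1, 1, 201)

# Chebyshev coefficients of e^{-ixt}: c_0 = J_0(t), c_k = 2 (-i)^k J_k(t)
c = np.array([jv(0, t)] + [2 * (-1j) ** k * jv(k, t) for k in range(1, K + 1)])

approx = chebval(x, c)                  # degree-K polynomial approximation
err = np.max(np.abs(approx - np.exp(-1j * x * t)))
print(err)                              # uniform error over [-1, 1]
```

Because J_k(t) decays rapidly once k exceeds |t|, the uniform error is tiny already at modest degree, which is why QSP achieves only logarithmic dependence on 1/ε.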

Complexity

Hamiltonian simulation can be studied in two settings, depending on how the Hamiltonian is given. If it is given explicitly, then gate complexity matters more than query complexity. If the Hamiltonian is instead described as an oracle (black box), then the number of queries to the oracle matters more than the gate count of the circuit. The following table shows the query complexity of the previously mentioned techniques for a d-sparse Hamiltonian H; the gate complexities of the same methods are analyzed in [7].

Technique                     Query complexity
Product formula (1st order)   O((d²‖H‖max t)²/ε) [9]
Taylor series                 O(τ log(τ/ε)/log log(τ/ε)), with τ = d²‖H‖max t [6]
Quantum walk                  O(d‖H‖max t/√ε) [5]
Quantum signal processing     O(d‖H‖max t + log(1/ε)/log log(1/ε)) [8]

where ‖H‖max is the largest entry of H in absolute value.

from Grokipedia
Hamiltonian simulation is a cornerstone problem in quantum computing, involving the approximation of the unitary time-evolution operator e^{−iHt} that describes the evolution of a quantum system under a given Hermitian Hamiltonian H for time t, typically within a specified error ε, to model the dynamics of quantum systems efficiently on quantum hardware. This task is central to realizing Richard Feynman's vision of using quantum computers to simulate physical processes that are intractable for classical computers due to the exponential scaling of quantum state spaces. The concept traces its origins to Feynman's 1982 proposal, where he argued that quantum mechanical systems, particularly those involving many particles, cannot be accurately simulated classically without exponential resource overhead, advocating instead for programmable quantum simulators to mimic such evolutions. Early algorithmic developments in the 1990s, such as Lloyd's use of Trotter–Suzuki product formulas to decompose e^{−iHt} into short-time evolutions under summands of H, laid the groundwork for digital quantum simulation, achieving query complexities scaling as O((t‖H‖/ε)^{1+o(1)}) for sparse Hamiltonians. Subsequent advances, including quantum-walk methods and truncated Taylor series, improved efficiency, but these were superseded by optimal techniques like quantum signal processing (QSP) and qubitization, which achieve near-linear dependence on the simulation time t‖H‖max and polylogarithmic scaling in 1/ε, with query complexity O(t‖H‖max + polylog(t‖H‖max/ε)). Key methods for Hamiltonian simulation include product formulas, which approximate the evolution via repeated applications of exponentials of Hamiltonian terms, suitable for near-term noisy intermediate-scale quantum (NISQ) devices despite higher-order errors; linear combinations of unitaries (LCU) and select-apply oracles for sparse access models; and advanced frameworks like QSP, which leverage block encodings to implement the evolution with minimal overhead.
These algorithms often assume access to oracles that prepare or block-encode H, enabling efficient simulations of local Hamiltonians. Applications span quantum chemistry for molecular energy calculations, materials science for modeling Hubbard models, and subroutines in algorithms like the Harrow-Hassidim-Lloyd (HHL) solver for linear systems, promising exponential speedups over classical methods for certain problems. However, challenges persist, including the need for fault-tolerant quantum computers to handle long simulation times, sensitivity to noise in NISQ hardware, and the development of efficient encodings for complex, time-dependent Hamiltonians. In October 2025, Google Quantum AI demonstrated a verifiable quantum advantage using their Willow processor, achieving a 13,000-fold speedup over the world's fastest supercomputer in simulating quantum dynamics for physics tasks. Ongoing research focuses on optimal sample complexity and low-energy subspace simulations to mitigate these issues.

Introduction

Definition and motivation

In quantum mechanics, the dynamics of a closed quantum system are governed by the Hamiltonian H, a Hermitian operator that encodes the total energy of the system. This operator generates the time evolution through the time-dependent Schrödinger equation, iℏ d/dt |ψ(t)⟩ = H|ψ(t)⟩, where |ψ(t)⟩ is the state vector at time t, i is the imaginary unit, and ℏ is the reduced Planck constant. The formal solution to this equation is the unitary time-evolution operator U(t) = e^{−iHt/ℏ}, which advances an initial state |ψ(0)⟩ to |ψ(t)⟩ = U(t)|ψ(0)⟩. Hamiltonian simulation refers to the problem of implementing, or approximately implementing, this U(t) (or equivalently, evolving the state |ψ(t)⟩) on a quantum computer, given access to a description of the Hamiltonian H and the evolution time t. The goal is to produce an output state that approximates |ψ(t)⟩ within a specified error tolerance, leveraging the quantum computer's ability to naturally represent and manipulate quantum states. The primary motivation for Hamiltonian simulation stems from the challenge of modeling complex quantum systems classically, where the exponential growth in Hilbert-space dimension renders exact simulations infeasible for large systems, such as molecular systems or many-body interactions. This task realizes Richard Feynman's 1982 vision of quantum computers as tools for simulating the inherently quantum behavior of nature, particularly for problems like quantum field theories or condensed matter systems that defy efficient classical computation due to entanglement and superposition. By enabling such simulations, Hamiltonian simulation underpins broader applications in quantum chemistry and materials science, where understanding quantum dynamics is essential.
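
The definitions above can be illustrated directly on a small system. A minimal sketch, in units with ℏ = 1 (the two-qubit Pauli Hamiltonian below is an illustrative choice):

```python
import numpy as np
from scipy.linalg import expm

# Two-qubit Hamiltonian built from Pauli matrices (an illustrative choice).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.kron(X, X) + np.kron(Z, Z)

t = 0.7                          # units with hbar = 1
U = expm(-1j * H * t)            # U(t) = e^{-iHt}

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                    # initial state |00>
psi_t = U @ psi0                 # |psi(t)> = U(t)|psi(0)>

print(np.allclose(U.conj().T @ U, np.eye(4)))    # U(t) is unitary
print(abs(np.linalg.norm(psi_t) - 1.0) < 1e-12)  # so the state norm is preserved
```

Because H is Hermitian, U(t) is unitary, and the evolved state keeps unit norm, exactly as the Schrödinger equation requires.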

Historical development

The concept of using quantum systems to simulate other quantum systems originated with Richard Feynman's 1982 proposal, where he argued that classical computers struggle with the exponential complexity of simulating quantum physics, suggesting instead that a quantum computer could efficiently mimic the dynamics of physical Hamiltonians. This idea laid the foundational motivation for Hamiltonian simulation as a core application of quantum computing. In 1996, Seth Lloyd formalized the first efficient algorithm for universal quantum simulation, demonstrating that any local quantum system could be simulated on a quantum computer using Trotterization—a product-formula approximation of the time-evolution operator—achieving polynomial scaling in the system size and evolution time. During the 2000s, advancements built on Lloyd's work by refining Trotterization through higher-order formulas originally developed by Masuo Suzuki in the 1990s, which were adapted to quantum contexts to reduce approximation errors. These higher-order Trotter–Suzuki decompositions enabled more accurate simulations for increasingly complex Hamiltonians, such as those in quantum chemistry, paving the way for practical applications. The decade also saw extensions to sparse Hamiltonians, with algorithms leveraging the structure of physically realistic systems to improve efficiency. The 2010s marked a shift toward optimal simulation methods, including the development of truncated Taylor-series approaches for approximating the evolution operator, which offered advantages in gate complexity for certain input models. Contributions in the late 2010s included the qubitization technique, rooted in Alexei Kitaev's 1995 ideas on phase estimation and block-encoding, which was rigorously developed by Guang Hao Low and Isaac L. Chuang to achieve near-optimal query complexity; and the quantum singular value transformation (QSVT), introduced by András Gilyén and colleagues, which further generalized the framework by allowing polynomial transformations of block-encoded operators, achieving optimality for a wide class of problems.
This method subsumed prior techniques like linear combinations of unitaries and sparse access models, enabling simulations with scaling close to the theoretical lower bound. Randomized methods like qDRIFT, proposed by Earl Campbell, provided practical error mitigation for noisy intermediate-scale quantum devices by randomizing term orders in product formulas. In the 2020s, extensions to time-dependent Hamiltonians advanced rapidly, with algorithms scaling with the L¹-norm of the Hamiltonian over time rather than the spectral norm, as shown by Dominic W. Berry, Andrew M. Childs, and Yuan Su. Recent advances (2023–2025) include hybrid real- and imaginary-time methods for low-depth Hamiltonian simulation in quantum optimization and efficient algorithms for quantum thermal simulation, enhancing applicability to near-term devices. These milestones have solidified Hamiltonian simulation as a benchmark for quantum advantage, with ongoing refinements targeting fault-tolerant implementations.

Problem formulation

Time-independent Hamiltonians

In the time-independent Hamiltonian simulation problem, the goal is to approximate the unitary time-evolution operator U(t) = e^{−iHt} generated by a fixed Hermitian Hamiltonian H acting on an n-qubit system, where the dimension of the Hilbert space is d = 2^n. This operator describes the dynamics of a closed quantum system evolving under the time-independent Schrödinger equation i d/dt |ψ(t)⟩ = H|ψ(t)⟩, with the solution |ψ(t)⟩ = e^{−iHt}|ψ(0)⟩. The problem requires implementing a quantum circuit V that approximates U(t) using queries to oracles providing access to H, such that the implemented unitary can be applied to an initial state with high fidelity. The approximation is typically required to satisfy ‖U(t) − V‖ ≤ ε in the spectral norm (the operator norm induced by the vector 2-norm), where ε > 0 is a user-specified tolerance; this ensures that for any initial state |ψ⟩, the output state satisfies ‖U(t)|ψ⟩ − V|ψ⟩‖₂ ≤ ε. For scenarios involving quantum channels or open systems, the diamond norm may be used instead to bound the error in the completely bounded trace-preserving sense, though the spectral norm suffices for unitary simulation of closed systems. The input Hamiltonian H is accessed via black-box oracles that allow efficient queries to its matrix elements or nonzero positions, such as selecting rows and retrieving nonzero entries. Common assumptions include H being Hermitian to guarantee unitarity of U(t), and often sparse, with at most a small (e.g., polylogarithmic in d) number of nonzero entries per row, or local in the sense of arising from interactions involving a constant number of qubits. To simplify complexity statements, H is frequently normalized such that its spectral norm satisfies ‖H‖ ≤ 1, though bounds generally scale with the actual norm.
These assumptions enable efficient simulation while capturing physically relevant systems like lattice models or molecular Hamiltonians. The problem was first formalized in the context of universal quantum simulation, demonstrating that any local Hamiltonian can be efficiently simulated on a quantum computer. Modern formulations emphasize query complexity to the oracles as the primary resource measure, with gate counts following from standard circuit models.
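
The spectral-norm error criterion can be checked numerically: since ‖U − V‖ is the operator norm induced by the vector 2-norm, the per-state error never exceeds it. A small sketch (the random 8×8 Hermitian H is an illustrative assumption, and a slightly mis-timed evolution stands in for the approximate circuit V):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (M + M.conj().T) / 2
t = 1.0

U = expm(-1j * H * t)            # ideal evolution U(t)
V = expm(-1j * H * t * 1.001)    # slightly mis-timed evolution, standing in for circuit V

eps = np.linalg.norm(U - V, 2)   # spectral-norm error ||U(t) - V||

# ||U|psi> - V|psi>||_2 <= ||U - V|| for every unit vector |psi>.
worst = 0.0
for _ in range(100):
    psi = rng.normal(size=8) + 1j * rng.normal(size=8)
    psi /= np.linalg.norm(psi)
    worst = max(worst, np.linalg.norm((U - V) @ psi))
print(worst <= eps + 1e-12)
```

This is exactly why bounding ‖U(t) − V‖ ≤ ε guarantees the output state is within ε of |ψ(t)⟩ for any input.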

Time-dependent Hamiltonians

In the context of quantum simulation, time-dependent Hamiltonians arise when the system's energy operator varies with time, such as in driven systems or those subject to external fields. This generalizes the time-independent case, which is recovered as the special instance H(s) = H for all s. The primary goal is to approximate the unitary time-evolution operator that propagates the state from initial time 0 to final time t, enabling the study of dynamic phenomena like coherent control or nonequilibrium processes. The evolution operator for a time-dependent Hamiltonian H(s) is formally defined as the time-ordered exponential

    U(t) = T exp(−(i/ℏ) ∫₀ᵗ H(s) ds),

where T denotes the time-ordering operator that ensures non-commuting operators at different times are multiplied in chronological order. This expression cannot be simplified to a single exponential unless [H(s), H(s′)] = 0 for all s, s′. A brief expansion via the Dyson series provides insight into its structure:

    U(t) = Σ_{n=0}^{∞} ((−i/ℏ)^n / n!) ∫₀ᵗ ds₁ ⋯ ∫₀ᵗ dsₙ T{H(s₁) ⋯ H(sₙ)},

where the time-ordering applies to the product inside the integrals. In practice, H(s) is often approximated as piecewise constant over small intervals Δs_k, allowing the overall evolution to be decomposed into a sequence of time-independent exponentials, U(t) ≈ Π_k exp(−i H_k Δs_k / ℏ), with the number of pieces scaling with the desired accuracy. Simulating such dynamics introduces significant challenges beyond the time-independent setting, particularly when the H(s) at different times do not commute, necessitating explicit handling of the time-ordering to avoid errors from naive approximations.
This non-commutativity increases the number of queries required to access the Hamiltonian compared to fixed-H cases, as algorithms must resolve interactions across time slices. Additionally, errors from the approximations in each interval accumulate over the total evolution time t, demanding finer discretizations or higher-order methods to maintain overall precision ε. For specific forms of time dependence, such as periodic H(s + T) = H(s) in driven systems or linear ramps H(s) = H₀ + sV, tailored approaches can mitigate these issues by exploiting periodicity or perturbative structure. Recent algorithmic progress has addressed these challenges with near-optimal scalings. For instance, the method introduced by Berry, Childs, Su, Wang, and Wiebe achieves a gate complexity of O((∫₀ᵗ ‖H(s)‖ ds) · polylog(t/ε, ‖H‖max/ε)) for sparse Hamiltonians under the L¹-norm input model, where the integral captures the cumulative strength of the Hamiltonian over time; this improves upon prior bounds scaling with t·max_s ‖H(s)‖ and is nearly optimal given known lower bounds. Such advancements enable efficient simulation of realistic time-varying systems, like those in quantum control or condensed matter, while bounding error accumulation.
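
The piecewise-constant decomposition described above is easy to sketch classically, in units with ℏ = 1. Here a linear ramp H(s) = H₀ + sV (with random 4×4 Hermitian H₀ and V as illustrative assumptions) is sliced into K intervals, and a very fine discretization stands in for the exact time-ordered exponential:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def herm(n):
    """Random Hermitian matrix, rescaled to unit spectral norm (illustrative)."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    M = (M + M.conj().T) / 2
    return M / np.linalg.norm(M, 2)

H0, V = herm(4), herm(4)         # linear ramp H(s) = H0 + s*V
t = 1.0

def piecewise(K):
    """K piecewise-constant slices of the time-ordered exponential, with H(s)
    frozen at each slice midpoint; later slices multiply on the left."""
    ds = t / K
    U = np.eye(4, dtype=complex)
    for k in range(K):
        s = (k + 0.5) * ds
        U = expm(-1j * (H0 + s * V) * ds) @ U
    return U

ref = piecewise(4096)            # very fine discretization, standing in for exact U(t)
for K in (4, 16, 64):
    print(K, np.linalg.norm(piecewise(K) - ref, 2))
```

Multiplying later slices on the left enforces the chronological order that the time-ordering operator T prescribes; the error shrinks as the slices get finer.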

Input access models

In Hamiltonian simulation, the input access model specifies how the Hamiltonian operator is provided to the algorithm, enabling the implementation of its action without storing the full exponential-sized matrix. Common models balance efficiency with the structure of the Hamiltonian, such as sparsity or locality, to minimize the number of queries required for simulation. These models define oracles or unitaries that allow the algorithm to access relevant entries or embeddings of the Hamiltonian, with query complexity measured by the number of calls to these access routines. The sparse access model applies to Hamiltonians that are d-sparse, meaning each row or column has at most d non-zero entries, which is suitable for many physical systems like tight-binding models. In this model, two oracles are provided: a SELECT oracle that, given an index j for the j-th non-zero entry in a column or row, applies the corresponding unitary to shift the state to that position, and a PREPARE oracle that prepares a uniform superposition over the indices of non-zero positions. The query complexity of optimal simulation algorithms under this model scales as Õ(d t‖H‖ + polylog(t‖H‖/ε)), achieving near-optimal dependence on all parameters, including polylogarithmic scaling in 1/ε. This framework was introduced for efficient simulation of sparse Hamiltonians, enabling sublinear dependence on system size for certain classes. The dense access model assumes direct access to the full matrix entries of the Hamiltonian, such as through an oracle that outputs the (i, j)-th entry upon query. However, this requires specifying up to 2^n × 2^n entries for an n-qubit system, rendering it exponentially costly and impractical for large n, though it serves as a theoretical baseline for non-sparse cases. Query complexity in this model would naively scale with the matrix dimension, but it is rarely used due to storage and access overheads.
A more versatile model is block-encoding, where the Hamiltonian H is embedded into a larger unitary U acting on ancillary qubits such that the top-left block approximates H normalized by a scaling factor α ≥ ‖H‖. Formally, an (α, m, ε)-block-encoding satisfies

    ‖(I ⊗ ⟨0^m|) U (I ⊗ |0^m⟩) − H/α‖ ≤ ε.
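
A block-encoding can be constructed explicitly for a small Hermitian H using one ancilla qubit via the standard reflection-style embedding U = [[A, S], [S, −A]] with A = H/α and S = √(I − A²); since S is a function of A, the two commute, and U is both Hermitian and unitary. A minimal numerical sketch (the random 4×4 H and the choice α = 1.1‖H‖ are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2

alpha = 1.1 * np.linalg.norm(H, 2)   # scaling factor alpha >= ||H||
A = H / alpha                        # normalized block, ||A|| < 1

# One-ancilla block-encoding: U = [[A, S], [S, -A]] with S = sqrt(I - A^2).
# S is a function of A (so A and S commute), making U Hermitian with U^2 = I.
S = sqrtm(np.eye(n) - A @ A)
U = np.block([[A, S], [S, -A]])

print(np.allclose(U.conj().T @ U, np.eye(2 * n)))  # U is unitary
print(np.allclose(U[:n, :n], H / alpha))           # top-left block equals H / alpha
```

Projecting the ancilla onto |0⟩ before and after U recovers H/α exactly (ε = 0 here), matching the defining inequality above.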