Metadynamics
from Wikipedia

Metadynamics (MTD; also abbreviated as METAD or MetaD) is a computer simulation method in computational physics, chemistry and biology. It is used to estimate the free energy and other state functions of a system, where ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Alessandro Laio and Michele Parrinello in 2002[1] and is usually applied within molecular dynamics simulations. MTD closely resembles a number of newer methods such as adaptively biased molecular dynamics,[2] adaptive reaction coordinate forces[3] and local elevation umbrella sampling.[4] More recently, both the original and well-tempered metadynamics[5] were derived in the context of importance sampling and shown to be a special case of the adaptive biasing potential setting.[6] MTD is related to the Wang–Landau sampling.[7]

Introduction


The technique builds on a large number of related methods including (in chronological order) the deflation,[8] tunneling,[9] tabu search,[10] local elevation,[11] conformational flooding,[12] Engkvist–Karlström[13] and Adaptive Biasing Force methods.[14]

Metadynamics has been informally described as "filling the free energy wells with computational sand".[15] The algorithm assumes that the system can be described by a few collective variables (CVs). During the simulation, the location of the system in the space spanned by the collective variables is computed, and a positive Gaussian potential is added to the true energy landscape of the system, discouraging the system from returning to its previous point. As the simulation evolves, more and more Gaussians accumulate, increasingly discouraging the system from revisiting earlier states, until it has explored the full energy landscape. At this point the modified free energy becomes approximately constant as a function of the collective variables, so the collective variables begin to fluctuate heavily. The energy landscape can then be recovered as the negative of the sum of all Gaussians.

The time interval between the addition of two Gaussian functions, as well as the Gaussian height and width, are tuned to balance accuracy against computational cost. By simply changing the size of the Gaussians, metadynamics can quickly yield a rough map of the energy landscape (large Gaussians) or provide a finer-grained description (small Gaussians).[1] Usually, well-tempered metadynamics[5] is used to adapt the Gaussian height during the simulation, and the Gaussian width can be adapted with adaptive Gaussian metadynamics.[16]

Metadynamics has the advantage over methods like adaptive umbrella sampling of not requiring an initial estimate of the energy landscape to be explored.[1] However, it is not trivial to choose proper collective variables for a complex simulation: typically several trials are needed to find a good set, although several automatic procedures have been proposed, including essential coordinates,[17] Sketch-Map,[18] and non-linear data-driven collective variables.[19]

Multi-replica approach


Independent metadynamics simulations (replicas) can be coupled together to improve usability and parallel performance. Several such methods have been proposed: multiple-walker MTD,[20] parallel-tempering MTD,[21] bias-exchange MTD,[22] and collective-variable-tempering MTD.[23] The last three are similar to the parallel tempering method and use replica exchanges to improve sampling. Typically, the Metropolis–Hastings algorithm is used for replica exchanges, but the infinite swapping[24] and Suwa–Todo[25] algorithms give better replica exchange rates.[26]

High-dimensional approach


Typical (single-replica) MTD simulations can include up to about 3 CVs; even with the multi-replica approach, it is hard to exceed 8 CVs in practice. This limitation comes from the bias potential, which is constructed by adding Gaussian functions (kernels) and is thus a special case of the kernel density estimator (KDE). The number of kernels required for a constant KDE accuracy increases exponentially with the number of dimensions, so the MTD simulation length has to increase exponentially with the number of CVs to maintain the same accuracy of the bias potential. Also, for fast evaluation, the bias potential is typically approximated with a regular grid,[27] and the memory required to store the grid likewise increases exponentially with the number of dimensions (CVs).

A high-dimensional generalization of metadynamics is NN2B.[28] It is based on two machine learning algorithms: the nearest-neighbor density estimator (NNDE) and the artificial neural network (ANN). NNDE replaces KDE to estimate the updates of bias potential from short biased simulations, while ANN is used to approximate the resulting bias potential. ANN is a memory-efficient representation of high-dimensional functions, where derivatives (biasing forces) are effectively computed with the backpropagation algorithm.[28][29]

An alternative method, exploiting ANN for the adaptive bias potential, uses mean potential forces for the estimation.[30] This method is also a high-dimensional generalization of the Adaptive Biasing Force (ABF) method.[31] Additionally, the training of ANN is improved using Bayesian regularization,[32] and the error of approximation can be inferred by training an ensemble of ANNs.[30]

Developments since 2015


In 2015, White, Dama, and Voth introduced experiment-directed metadynamics, a method that allows for shaping molecular dynamics simulations to match a desired free energy surface. This technique guides the simulation towards conformations that align with experimental data, enhancing our understanding of complex molecular systems and their behavior.[33]

In 2020, an evolution of metadynamics was proposed, the on-the-fly probability enhanced sampling method (OPES),[34][35][36] which is now the method of choice of Michele Parrinello's research group.[37] The OPES method has only a few robust parameters, converges faster than metadynamics, and has a straightforward reweighting scheme.[38] In 2024, a replica-exchange variant of OPES was developed, named OneOPES,[39] designed to exploit a thermal gradient and multiple CVs to sample large biochemical systems with several degrees of freedom. This variant aims to address the challenge of describing such systems, where the numerous degrees of freedom are often difficult to capture with only a few CVs. OPES has been implemented in the PLUMED library since version 2.7.[40]

Algorithm


Assume we have a classical N-particle system with positions {r₁, …, r_N} in Cartesian coordinates. The particle interactions are described by a potential function E(r). The form of the potential function (e.g. two local minima separated by a high energy barrier) prevents ergodic sampling with molecular dynamics or Monte Carlo methods.

Original metadynamics


A general idea of MTD is to enhance the system sampling by discouraging revisiting of sampled states. It is achieved by augmenting the system Hamiltonian H with a bias potential V_bias:

    H → H + V_bias(s).

The bias potential is a function of collective variables, V_bias(s). A collective variable is a function of the particle positions, s = s(r). The bias potential is continuously updated by adding bias at rate ω at the instantaneous collective variable value s(t) at time t:

    ∂V_bias(s, t)/∂t = ω δ(s − s(t)).

At infinitely long simulation time t_sim, the accumulated bias potential converges to the free energy with opposite sign (up to an irrelevant constant C):

    V_bias(s, t_sim → ∞) = −F(s) + C.

For a computationally efficient implementation, the update process is discretised into time intervals of length τ (⌊ ⌋ denotes the floor function) and the δ-function is replaced with a localized positive kernel function K. The bias potential becomes a sum of kernel functions centred at the instantaneous collective variable values s(jτ):

    V_bias(s, t) = ω τ Σ_{j=0}^{⌊t/τ⌋} K(s − s(jτ)).

Typically, the kernel is a multi-dimensional Gaussian function whose covariance matrix has non-zero diagonal elements only:

    K(Δs) = exp( − Σ_{i=1}^{d} Δs_i² / (2σ_i²) ).

The parameters ω, τ, and σ_i are determined a priori and kept constant during the simulation.
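The discretised bias above can be sketched in Python (a minimal illustration; the function name `eval_bias` and all parameter values are assumptions for this sketch, not part of any MTD package):

```python
import numpy as np

def eval_bias(s, centers, omega, tau, sigma):
    """Evaluate the discretised metadynamics bias V_bias(s, t).

    s       : (d,) point in CV space where the bias is evaluated
    centers : (n, d) CV values s(j*tau) recorded at each deposition
    omega   : bias deposition rate (energy / time)
    tau     : deposition interval (time)
    sigma   : (d,) Gaussian widths per CV
    """
    s = np.atleast_1d(s)
    diff = (centers - s) / sigma          # (n, d) scaled displacements
    # One Gaussian kernel per deposited hill, summed up
    kernels = np.exp(-0.5 * np.sum(diff**2, axis=1))
    return omega * tau * kernels.sum()

# Hills deposited at s = 0.0 and s = 0.5 along a single CV
centers = np.array([[0.0], [0.5]])
v = eval_bias([0.0], centers, omega=1.0, tau=1.0, sigma=np.array([0.1]))
```

Evaluating at one of the hill centres returns roughly one hill height ωτ, since the other hill is several widths away and contributes almost nothing.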

Implementation


Below is pseudocode for MTD based on molecular dynamics (MD), where r and v are the N-particle system positions and velocities, respectively. The bias V_bias is updated every n MD steps, and its contribution to the system forces F is −∇_r V_bias(s(r)).

set initial r and v
set V_bias(s) := 0

every MD step:
    compute CV values:
        s := s(r)

    every n MD steps:
        update bias potential:
            V_bias(s) := V_bias(s) + ω τ K(s − s(r))

    compute atomic forces:
        F := −∇_r [ E(r) + V_bias(s(r)) ]

    propagate r and v by Δt
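The MD loop above can be sketched end-to-end for a toy one-dimensional double well, using overdamped Langevin dynamics in place of full MD (a minimal illustration; all names and parameter values here are assumptions for the sketch, not from any MD package):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D system: double-well potential E(x) = x^4 - 2x^2 (minima at x = ±1,
# barrier of height 1 at x = 0); the collective variable is simply s(x) = x.
def force(x):
    return -(4 * x**3 - 4 * x)

# Metadynamics parameters (illustrative, not tuned for production use)
W, sigma, stride = 0.1, 0.2, 100    # hill height, hill width, deposition stride
dt, gamma, kT = 1e-3, 1.0, 0.2      # overdamped Langevin parameters

hills = []                           # centres of deposited Gaussian hills

def bias_potential(x):
    if not hills:
        return 0.0
    c = np.asarray(hills)
    return float(np.sum(W * np.exp(-(x - c)**2 / (2 * sigma**2))))

def bias_force(x):
    """Force from the accumulated bias, -dV_bias/dx."""
    if not hills:
        return 0.0
    c = np.asarray(hills)
    d = x - c
    return float(np.sum(W * d / sigma**2 * np.exp(-d**2 / (2 * sigma**2))))

x = -1.0                             # start in the left well
for step in range(50_000):
    f = force(x) + bias_force(x)
    # Brownian (overdamped Langevin) step
    x += f * dt / gamma + np.sqrt(2 * kT * dt / gamma) * rng.normal()
    if step % stride == 0:
        hills.append(x)              # deposit a hill at the current CV value

# The bias accumulated in the starting well grows past the well depth (~1)
# as the well fills, after which the walker can cross the barrier at x = 0.
fill_depth = bias_potential(-1.0)
```

With a barrier of five kT, an unbiased walker would rarely cross within this simulation length; the accumulating hills fill the starting well and drive the crossing.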

Free energy estimator


The finite size of the kernel makes the bias potential fluctuate around a mean value. A converged free energy can be obtained by time-averaging the bias potential. The averaging is started from t_diff, the time when the motion along the collective variable becomes diffusive:

    F̄(s) = − 1/(t_sim − t_diff) ∫_{t_diff}^{t_sim} V_bias(s, t) dt.

Applications


Metadynamics has been used to study processes including ligand binding, chemical reactions and catalysis, phase transitions in solids, and solvation dynamics.

Implementations


PLUMED


PLUMED[47] is an open-source library implementing many MTD algorithms and collective variables. It has a flexible object-oriented design[48][49] and can be interfaced with several MD programs (AMBER, GROMACS, LAMMPS, NAMD, Quantum ESPRESSO, DL_POLY_4, CP2K, and OpenMM).[50][51]
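A minimal PLUMED-style input for well-tempered metadynamics on a single distance CV might look as follows (an illustrative sketch; the atom indices and parameter values are arbitrary, and the exact keywords should be checked against the PLUMED manual):

```
# Distance between atoms 1 and 2 as the collective variable
d1: DISTANCE ATOMS=1,2

# Well-tempered metadynamics: Gaussian hills of height 1.2 kJ/mol and
# width 0.05 nm, deposited every 500 MD steps
metad: METAD ARG=d1 SIGMA=0.05 HEIGHT=1.2 PACE=500 BIASFACTOR=10 TEMP=300

# Record the CV and the instantaneous bias for later reweighting
PRINT ARG=d1,metad.bias FILE=COLVAR STRIDE=500
```

Such an input file is passed to the host MD engine alongside its normal configuration; PLUMED computes the CV, deposits the hills, and feeds the bias forces back to the engine at every step.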

Other


Other MTD implementations exist in the Collective Variables Module[52] (for LAMMPS, NAMD, and GROMACS), ORAC, CP2K,[53] EDM,[54] and Desmond.

from Grokipedia
Metadynamics is a computational method in molecular dynamics simulations designed to enhance the sampling of rare events and reconstruct the underlying free energy landscape of complex systems. By introducing a history-dependent bias potential, typically in the form of Gaussian-shaped hills deposited along selected collective variables, the technique discourages the system from revisiting previously explored regions, thereby facilitating the escape from local free energy minima and enabling efficient exploration of multidimensional potential energy surfaces. Developed in 2002 by Alessandro Laio and Michele Parrinello, metadynamics addresses key limitations of standard molecular dynamics simulations, which often fail to capture infrequent transitions due to temporal and energetic barriers in high-dimensional systems.

The method's core principle relies on the bias potential converging to the negative of the free energy surface along the chosen collective variables, such as interatomic distances, dihedral angles, or coordination numbers, allowing for the quantitative estimation of thermodynamic properties without prior knowledge of reaction coordinates. Practical implementations involve tuning parameters like the height and width of the Gaussian hills to balance exploration speed and accuracy, with convergence typically assessed through the flattening of the bias potential.

Since its inception, metadynamics has become a mainstay of enhanced sampling techniques, applied across physics, chemistry, and biology to study processes including protein folding and ligand binding, chemical reactions and catalysis, phase transitions in solids, and solvation dynamics. Notable examples include the dissociation of NaCl in water and conformational changes in dialanine peptides, demonstrating its efficacy for events occurring on timescales beyond standard simulations.
Extensions such as well-tempered metadynamics have improved convergence and control, while recent integrations with machine learning for collective variable selection and infrequent metadynamics for kinetic rate calculations have expanded its scope to predict not only equilibrium free energies but also transition dynamics in complex biomolecular and catalytic systems. As of 2025, ongoing developments, including hybrid approaches, continue to refine its accuracy and applicability, particularly in high-throughput simulations.

Introduction

Definition and Purpose

Metadynamics is a computational technique employed in molecular dynamics simulations to enhance sampling of rare events and reconstruct free-energy surfaces (FESs) in complex systems, such as biomolecules, chemical reactions, and materials. It operates by introducing a history-dependent bias potential into the system's Hamiltonian, typically in the form of Gaussian-shaped hills deposited along the system's trajectory in a reduced space of collective variables (CVs). This bias discourages revisitation of previously explored regions, compelling the system to escape local free-energy minima and explore the broader FES, thereby addressing the timescale limitations of standard simulations, where rare transitions occur infrequently.

The primary purpose of metadynamics is to enable the quantitative estimation of free energies associated with conformational changes, reaction pathways, and other slow processes that are challenging to observe directly in unbiased simulations. By filling the FES with the bias potential over time, the method not only accelerates exploration but also allows reconstruction of the underlying unbiased FES, as the negative of the bias at convergence approximates the free energy. This approach is particularly valuable in fields like biophysics and chemistry, where understanding energy barriers (e.g., in protein folding or ligand binding) provides insights into thermodynamic stability and kinetics.

Originally proposed to overcome the trapping of systems in metastable states during coarse-grained dynamics, metadynamics has evolved into a versatile tool for studying multidimensional FESs without requiring prior knowledge of transition mechanisms. Its non-Markovian nature, driven by the adaptive bias, ensures efficient sampling of saddle points and alternative pathways across a wide range of applications.

Historical Development

Metadynamics was first introduced in 2002 by Alessandro Laio and Michele Parrinello as a method to enhance sampling in molecular dynamics simulations and reconstruct free energy surfaces along selected collective variables. The approach addresses the limitations of standard simulations, which often remain trapped in local free energy minima due to high barriers separating relevant states, such as in conformational changes or chemical reactions. By depositing small Gaussian-shaped bias potentials at previously visited positions in the collective variable space, the method gradually fills these minima, driving the system to explore rare events; the accumulated bias converges toward a potential whose negative represents the underlying free energy.

Early applications and refinements followed shortly after, with the technique demonstrating efficacy in systems like the dissociation of NaCl in water and conformational changes in dialanine dipeptide. In 2008, Laio and Francesco L. Gervasio provided a comprehensive review of metadynamics, highlighting its versatility across chemistry, materials science, and related fields, while noting challenges in convergence and bias deposition for multidimensional spaces. That same year, a significant advancement came with well-tempered metadynamics, proposed by Alessandro Barducci, Giovanni Bussi, and Michele Parrinello, which modifies the Gaussian height to decrease over time based on an effective temperature parameter, ensuring smoother convergence and ergodic exploration without overfilling the free energy surface.

Subsequent developments integrated metadynamics with other enhanced sampling techniques, such as replica-exchange methods, to handle multiple collective variables more effectively. Open-source implementations in plugins like PLUMED, starting around 2013, broadened its accessibility and adoption in major simulation packages.
By 2020, Giovanni Bussi and Alessandro Laio reflected on two decades of progress, emphasizing metadynamics' evolution into a robust tool for complex systems, including machine learning-assisted collective variable selection and applications to non-equilibrium processes.

Theoretical Foundations

Collective Variables

In metadynamics, collective variables (CVs), often denoted as \mathbf{s}(\mathbf{R}), represent a reduced set of coordinates that capture the slow, relevant degrees of freedom of a complex molecular system, where \mathbf{R} denotes the full set of atomic positions. These variables project the high-dimensional configuration space onto a lower-dimensional subspace, enabling the reconstruction of the free energy surface (FES) along pathways of interest, such as chemical reactions or conformational changes. By focusing on CVs, metadynamics avoids the curse of dimensionality inherent in unbiased simulations of many-body systems.

The role of CVs in the metadynamics algorithm is central: a history-dependent potential V(\mathbf{s}, t) is deposited as a sum of Gaussian-shaped hills centered at the system's current position in CV space at discrete times t', discouraging revisits to previously explored regions and filling free energy basins to promote exploration. Mathematically, the bias evolves as

V(\mathbf{s}, t) = \sum_{t' = t_G, 2t_G, \dots, t} W \exp\left( -\frac{(\mathbf{s} - \mathbf{s}(t'))^2}{2\Delta s^2} \right),

where W is the Gaussian height, \Delta s the width, and t_G the deposition interval. For large t, the negative of this bias approximates the FES, F(\mathbf{s}) \approx -V(\mathbf{s}, t), up to a constant. The forces derived from the CVs drive the system's dynamics, ensuring that the bias acts only along these coordinates while the full Cartesian coordinates evolve naturally. Poorly chosen CVs can lead to incomplete sampling or distorted FES reconstruction, as they must encompass all significant slow modes to avoid hidden barriers. Selecting appropriate CVs relies on physical intuition, experimental data, or preliminary simulations, prioritizing those that distinguish metastable states and align with the reaction coordinate.
Common examples include interatomic distances (e.g., for bond breaking in Na-Cl dissociation), dihedral angles (e.g., \phi, \psi torsions in protein folding), coordination numbers (e.g., the number of hydrogen bonds in secondary structure transitions), or path collective variables for multi-step processes. For example, in the original metadynamics application to alanine dipeptide, the backbone dihedral angles \phi and \psi were used as CVs, effectively capturing the Ramachandran basin transitions. For multi-dimensional cases, 2-4 CVs are typical to balance computational cost and accuracy.

Challenges in CV selection arise from the need to identify slow variables without prior knowledge, potentially leading to inefficient biasing if fast modes are overlooked. To address this, data-driven approaches applied to short unbiased trajectories, or variational methods like time-lagged independent component analysis (tICA), can automatically derive CVs that maximize the eigenvalue spectrum of the transition matrix, ensuring they project onto the slowest relaxation modes. These methods have been integrated into metadynamics variants to enhance reliability in biomolecular and materials applications.
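A dihedral-angle CV of the kind mentioned above can be computed from four atomic positions; the following is a minimal sketch (the function name and test geometry are illustrative, not from any MD package):

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle (radians) defined by four points, e.g. backbone atoms
    defining a phi or psi angle. Uses the standard atan2 formulation."""
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    n1 = np.cross(b0, b1)                     # normal of the first plane
    n2 = np.cross(b1, b2)                     # normal of the second plane
    m1 = np.cross(n1, b1 / np.linalg.norm(b1))
    return np.arctan2(np.dot(m1, n2), np.dot(n1, n2))

# A planar cis arrangement gives ~0, a planar trans arrangement gives ~pi
p = [np.array(v, float) for v in [(0, 1, 0), (0, 0, 0), (1, 0, 0), (1, 1, 0)]]
angle_cis = dihedral(*p)
```

To bias such a CV, an MD code also needs its gradient with respect to the atomic positions (the chain-rule term that turns the bias into atomic forces), which is omitted here for brevity.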

Free Energy Surfaces

In metadynamics, the free energy surface (FES) is defined as the free energy \mathcal{F}(\mathbf{s}) expressed as a function of a set of collective variables \mathbf{s}, which are low-dimensional coordinates chosen to describe the slow degrees of freedom of the system, such as interatomic distances or dihedral angles. These surfaces encapsulate the underlying thermodynamics of complex systems, where minima correspond to stable states and barriers represent activation energies for rare events like conformational transitions or chemical reactions. The FES guides the system's dynamics through the mean force \mathbf{F} = -\nabla_{\mathbf{s}} \mathcal{F}, making its accurate reconstruction essential for understanding processes that are inaccessible to standard molecular dynamics due to high barriers.

In the original metadynamics algorithm, the FES is reconstructed by introducing a history-dependent bias potential V(\mathbf{s}, t) that discourages the system from revisiting previously explored regions, effectively flattening the FES over time. This bias is built by depositing small Gaussian-shaped hills at the current position of the collective variables every \tau time steps:

V(\mathbf{s}, t) = \sum_{t' = \tau, 2\tau, \dots \leq t} w \exp\left( -\sum_{i=1}^{M} \frac{(s_i - s_i(t'))^2}{2 \delta s_i^2} \right),

where w is the height of each Gaussian, \delta s_i is the width along the i-th collective variable, and M is the dimensionality of \mathbf{s}. As the simulation progresses and the bias fills the FES wells, the system diffuses freely, and for sufficiently long times t \gg \tau, the negative of the bias potential converges to the FES up to an additive constant:

\mathcal{F}(\mathbf{s}) \approx -V(\mathbf{s}, t \to \infty) + C.

This approach allows quantitative estimation of free energy differences, as demonstrated in applications like the dissociation of NaCl in water, where barriers of several kcal/mol were accurately mapped.
However, the original method can lead to oscillations and overcompensation if the Gaussian parameters are not finely tuned, potentially distorting the FES. To address these convergence issues, well-tempered metadynamics modifies the bias deposition by making the Gaussian height adaptive and decreasing over time, ensuring smoother filling of the FES without overbiasing. The height of the k-th Gaussian is given by

w_k = w_0 \exp\left( -\frac{V(\mathbf{s}(t_k), t_k)}{k_B \Delta T} \right),

where w_0 is the initial height, k_B is the Boltzmann constant, and \Delta T is a parameter controlling the bias intensity (typically \Delta T \approx T/4, with T the system temperature). The total bias evolves as in the original method but with this variable height, leading to an equilibrium distribution where the effective temperature along \mathbf{s} is T + \Delta T. The FES is then recovered by reweighting the biased probability distribution P(\mathbf{s}):

\mathcal{F}(\mathbf{s}) = (1 + \Delta T / T)\, \mathcal{F}^\dagger(\mathbf{s}) + C,

with \mathcal{F}^\dagger(\mathbf{s}) = -k_B T \ln P(\mathbf{s}) the free energy of the biased ensemble. This formulation guarantees asymptotic convergence to the exact FES and allows tuning of the exploration depth, as validated on systems like alanine dipeptide folding. Well-tempered metadynamics has become the standard for FES reconstruction due to its robustness and error control.

Original Algorithm

Procedure and Implementation

The original metadynamics algorithm integrates a history-dependent bias potential into standard molecular dynamics (MD) simulations to enhance sampling of rare events and reconstruct free energy surfaces (FES) along chosen collective variables (CVs). The procedure begins with the selection of appropriate CVs, which are low-dimensional coordinates that capture the slow degrees of freedom relevant to the process of interest, such as dihedral angles in biomolecules or reaction coordinates in chemical reactions. These CVs, denoted as \mathbf{s}(\mathbf{R}) where \mathbf{R} represents atomic positions, must be differentiable to allow computation of bias forces on the atoms via the chain rule.

The core implementation involves running an MD simulation at a fixed temperature T, where the bias potential V(\mathbf{s}, t) is added to the underlying potential energy U(\mathbf{R}), modifying the effective Hamiltonian to H = U(\mathbf{R}) + V(\mathbf{s}(\mathbf{R}), t) + K(\mathbf{p}), with K the kinetic energy. At regular intervals \tau_G (typically 500–1000 MD steps, chosen to avoid excessive overlap of bias terms while ensuring efficient filling), a small Gaussian "hill" is deposited at the system's current CV position \mathbf{s}(t). This Gaussian is defined in CV space as

\exp\left( -\sum_{i=1}^{M} \frac{ (s_i - s_i(t) )^2 }{2 \sigma_i^2 } \right),

where M is the number of CVs, and \sigma_i is the width for the i-th CV, selected to match the equilibrium fluctuations of s_i (e.g., 0.1–0.35 rad for angles). The height w of each Gaussian (e.g., 0.5–2 kJ/mol) is kept constant and small relative to the thermal energy k_B T to ensure gradual bias accumulation without over-distorting the dynamics initially.
The total bias potential at time t is the discrete sum

V(\mathbf{s}, t) = w \sum_{k=1}^{N_G} \exp\left( -\sum_{i=1}^{M} \frac{ (s_i - s_i(k \tau_G) )^2 }{2 \sigma_i^2 } \right),

where N_G = t / \tau_G is the number of deposited hills. The bias force on the CVs is -\nabla_{\mathbf{s}} V(\mathbf{s}, t), which is projected back onto atomic coordinates to influence the trajectory. As the simulation proceeds, the accumulating Gaussians fill free energy basins, flattening the FES and driving the system to explore higher-energy regions and transitions. Convergence is monitored by observing when the CV trajectory shows diffusive behavior without recrossing the same basin repeatedly, typically after the bias has compensated the underlying FES barriers (on the order of 10–100 ns for biomolecular systems, depending on complexity). At this point, the reconstructed FES is obtained as \mathcal{F}(\mathbf{s}) = -V(\mathbf{s}, t) + C, where C is a constant, since the bias asymptotically equals the negative of the free energy up to a shift.

Practical implementation requires computational efficiency in evaluating the sum over hills, often achieved by grid-based storage or reweighting techniques to handle the growing number of terms (up to thousands). Parameter tuning is empirical: an overly large w or small \tau_G can lead to rough FES estimates, while poor CV choice may result in incomplete sampling or hysteresis.
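The grid-based storage mentioned above can be sketched as follows: instead of re-summing all deposited hills at every evaluation, each Gaussian is added once onto a fixed grid, and later evaluations interpolate from it (a minimal 1D illustration; all names and values are assumptions for this sketch):

```python
import numpy as np

# 1D grid over the CV range [-2, 2]
grid = np.linspace(-2.0, 2.0, 401)
bias_on_grid = np.zeros_like(grid)

def deposit_hill(center, w=1.0, sigma=0.1):
    """Add one Gaussian hill to the stored grid: O(grid size) once per hill,
    instead of O(number of hills) at every later bias evaluation."""
    global bias_on_grid
    bias_on_grid += w * np.exp(-(grid - center)**2 / (2 * sigma**2))

def bias_at(s):
    """Evaluate the accumulated bias by linear interpolation on the grid."""
    return float(np.interp(s, grid, bias_on_grid))

deposit_hill(0.0)
deposit_hill(0.5)
```

The trade-off is the exponential growth of grid memory with the number of CVs noted earlier; for 1–3 CVs the grid is usually small enough to be the faster option.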

Free Energy Reconstruction

In the original metadynamics formulation, the free energy surface (FES) along the chosen collective variables \mathbf{s} is reconstructed by accumulating a history-dependent bias potential V_G(\mathbf{s}, t) composed of small Gaussian functions added at regular time intervals. Each Gaussian is centered at the current position of the collective variables \mathbf{s}(t') and has a fixed width \omega_i and initial height w, with the bias updated as

V_G(\mathbf{s}, t + \Delta t) = V_G(\mathbf{s}, t) + w \exp\left( -\sum_{i=1}^d \frac{(s_i - s_i(t))^2}{2\omega_i^2} \right),

where d is the dimensionality of the collective variable space; the total bias is the sum over all previously deposited Gaussians. This discourages revisits to previously explored regions, filling the free energy wells and enabling the system to escape metastable states and explore the entire FES. As the simulation progresses, the deposited Gaussians collectively oppose the underlying free energy F(\mathbf{s}), compensating its shape. At long times t, when the system diffuses freely over the FES without being trapped, the bias potential converges to the negative of the free energy up to an additive constant:

F(\mathbf{s}) = -V_G(\mathbf{s}, t) + C.

The constant C can be determined by setting the free energy minimum to zero or by referencing known values from unbiased simulations. This reconstruction assumes the Gaussian width is small compared to FES variations, ensuring accurate resolution of barriers and minima. The method's cost scales with the dimensionality as (1/\omega)^d, making it suitable for low-dimensional collective variables (typically 1–3). Convergence of the reconstruction is monitored by observing when the bias potential stabilizes, indicated by the system performing unbiased-like oscillations around the FES or by the rate of Gaussian deposition becoming uniform across the space.
In practice, since Gaussians continue to be added indefinitely in the original formulation, the FES is estimated using a time average of the bias after an initial filling period t_fill, where t_fill is chosen such that the lowest free energy region is sufficiently filled:

\bar{V}_G(\mathbf{s}) = \frac{1}{t - t_{\rm fill}} \int_{t_{\rm fill}}^t V_G(\mathbf{s}, t') \, dt'.

This mitigates the effect of ongoing deposition and provides a smoother estimate. Errors arise from finite simulation time, Gaussian parameters, or incomplete sampling; statistical uncertainty can be assessed via block averaging of the bias history, with typical errors on the order of a few k_B T for well-converged 1D FES in biomolecular systems. Limitations include potential overestimation of barriers if the collective variables do not fully capture the slow modes, and computational cost from summing many Gaussians, often alleviated by grid-based storage.

Advanced Variants

Well-Tempered Metadynamics

Well-tempered metadynamics is an advanced variant of the original metadynamics algorithm designed to address its limitations in convergence and over-exploration of the collective variable (CV) space. Introduced by Barducci, Bussi, and Parrinello in 2008, it modifies the bias deposition process so that the added Gaussian hills decrease in height over time, leading to a smoothly converging estimate of the free energy surface (FES). This approach unifies the principles of metadynamics with tempered sampling, allowing for tunable control over the extent of sampling enhancement while preventing the bias potential from exceeding the true FES.

In standard metadynamics, Gaussian-shaped potentials are added at fixed intervals with constant height, which can cause the bias to oscillate around the FES and lead to numerical instabilities or diffusion into irrelevant regions of the CV space. Well-tempered metadynamics overcomes this by making the height of each Gaussian depend on the bias potential already accumulated at the deposition site, specifically

w(t) = w_0 \exp\left(-\frac{V(s(t), t)}{\Delta T}\right),

where w_0 is the initial height, V(s, t) is the bias potential, and \Delta T is a parameter analogous to an additional temperature (here in energy units) that controls the bias strength. Equivalently, the bias potential evolves as

\frac{\partial V(s, t)}{\partial t} = \omega_0 \exp\left(-\frac{V(s(t), t)}{\Delta T}\right) \delta(s - s(t)),

where \omega_0 is the initial bias deposition rate. As the simulation progresses, the deposition rate slows roughly as 1/t, ensuring asymptotic convergence to the target FES without overfilling. The key parameter in well-tempered metadynamics is the bias factor \gamma = \frac{T + \Delta T}{T}, where T is the physical temperature in energy units.
Larger \gamma values enhance fluctuations more aggressively but increase the risk of ergodicity breaking, while smaller values promote gentler sampling closer to the unbiased ensemble. In the long-time limit, the bias converges to

V(s) = -\left(1 - \frac{1}{\gamma}\right) F(s),

allowing direct reconstruction of the FES via F(s) = -\frac{\gamma}{\gamma - 1} V(s). This formulation also enables reweighting to compute unbiased averages of other observables during the same simulation, enhancing efficiency. Convergence is typically assessed by monitoring the time evolution of the CV or the stabilization of the FES estimate, with error scaling as O(1/\sqrt{t}).
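The adaptive hill height above can be illustrated with a short sketch: repeated deposition at one point yields monotonically shrinking hills (illustrative names and parameter values; a grid-based 1D bias, with \Delta T in energy units):

```python
import numpy as np

# Well-tempered hill deposition on a 1D CV grid (illustrative parameters)
w0, sigma, dT = 1.0, 0.1, 4.0        # initial height, width, Delta T (energy units)
grid = np.linspace(-1.0, 1.0, 201)
V = np.zeros_like(grid)              # accumulated bias on the grid

def deposit(center):
    """Deposit one well-tempered hill: its height shrinks with the bias
    already accumulated at the deposition point, w_k = w0*exp(-V(s_k)/dT)."""
    global V
    v_here = np.interp(center, grid, V)
    w = w0 * np.exp(-v_here / dT)
    V += w * np.exp(-(grid - center)**2 / (2 * sigma**2))
    return w

# Repeatedly depositing at the same point: heights decay monotonically,
# so the bias saturates instead of growing without bound
heights = [deposit(0.0) for _ in range(20)]
```

This decay is what distinguishes well-tempered from standard metadynamics, where every hill would keep the full height w0 and the bias would keep oscillating around the FES.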