Dynamic mode decomposition
from Wikipedia

In data science, dynamic mode decomposition (DMD) is a dimensionality reduction algorithm developed by Peter J. Schmid and Jörn Sesterhenn in 2008.[1][2] Given a time series of data, DMD computes a set of modes, each of which is associated with a fixed oscillation frequency and decay/growth rate. For linear systems in particular, these modes and frequencies are analogous to the normal modes of the system, but more generally, they are approximations of the modes and eigenvalues of the composition operator (also called the Koopman operator). Due to the intrinsic temporal behaviors associated with each mode, DMD differs from dimensionality reduction methods such as principal component analysis (PCA), which computes orthogonal modes that lack predetermined temporal behaviors. Because its modes are not orthogonal, DMD-based representations can be less parsimonious than those generated by PCA. However, they can also be more physically meaningful because each mode is associated with a damped (or driven) sinusoidal behavior in time.

Overview


Dynamic mode decomposition was first introduced by Schmid as a numerical procedure for extracting dynamical features from flow data.[3]

The data takes the form of a snapshot sequence

$V_1^N = \{v_1, v_2, \dots, v_N\},$

where $v_i$ is the $i$-th snapshot of the flow field, and $V_1^N$ is a data matrix whose columns are the individual snapshots. These snapshots are assumed to be related via a linear mapping $A$ that defines a linear dynamical system

$v_{i+1} = A v_i,$

that remains approximately the same over the duration of the sampling period. Written in matrix form, this implies that

$V_2^N = A V_1^{N-1} + r e_{N-1}^T,$

where $r$ is the vector of residuals that accounts for behaviors that cannot be described completely by $A$, $e_{N-1} = \{0, 0, \dots, 1\} \in \mathbb{R}^{N-1}$, $V_1^{N-1} = \{v_1, v_2, \dots, v_{N-1}\}$, and $V_2^N = \{v_2, v_3, \dots, v_N\}$. Regardless of the approach, the output of DMD is the eigenvalues and eigenvectors of $A$, which are referred to as the DMD eigenvalues and DMD modes respectively.

Algorithm


There are two methods for obtaining these eigenvalues and modes. The first is Arnoldi-like, which is useful for theoretical analysis due to its connection with Krylov methods. The second is a singular value decomposition (SVD) based approach that is more robust to noise in the data and to numerical errors.

The Arnoldi approach


In fluids applications, the size of a snapshot, $M$, is assumed to be much larger than the number of snapshots $N$, so there are many equally valid choices of $A$. The original DMD algorithm picks $A$ so that each of the snapshots in $V_2^N$ can be expressed as a linear combination of the snapshots in $V_1^{N-1}$. Because most of the snapshots appear in both data sets, this representation is error free for all snapshots except $v_N$, which is written as

$v_N = a_1 v_1 + a_2 v_2 + \cdots + a_{N-1} v_{N-1} + r = V_1^{N-1} a + r,$

where $a$ is a set of coefficients DMD must identify and $r$ is the residual. In total,

$V_2^N = A V_1^{N-1} = V_1^{N-1} S + r e_{N-1}^T,$

where $S$ is the companion matrix

$S = \begin{pmatrix} 0 & 0 & \cdots & 0 & a_1 \\ 1 & 0 & \cdots & 0 & a_2 \\ 0 & 1 & \cdots & 0 & a_3 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & a_{N-1} \end{pmatrix}.$

The vector $a$ can be computed by solving a least squares problem, which minimizes the overall residual. In particular, if we take the QR decomposition of $V_1^{N-1} = QR$, then $a = R^{-1} Q^* v_N$.

In this form, DMD is a type of Arnoldi method, and therefore the eigenvalues of $S$ are approximations of the eigenvalues of $A$. Furthermore, if $y$ is an eigenvector of $S$, then $V_1^{N-1} y$ is an approximate eigenvector of $A$. The reason an eigendecomposition is performed on $S$ rather than $A$ is that $S$ is much smaller than $A$, so the computational cost of DMD is determined by the number of snapshots rather than the size of a snapshot.
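As a concrete illustration, the companion-matrix construction can be sketched in NumPy. The test system below (a diagonal map with known eigenvalues, confined to an $(N-1)$-dimensional subspace) and all variable names are our own choices, not taken from the original references:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test data: dynamics confined to an (N-1)-dimensional
# subspace with known, distinct eigenvalues (diagonal A).
M, N = 50, 12                          # snapshot size M >> number of snapshots N
eigs_true = np.linspace(0.30, 0.95, N - 1)
x = np.zeros(M)
x[: N - 1] = rng.uniform(1.0, 2.0, N - 1)

V = np.empty((M, N))                   # columns are the snapshots v_1 ... v_N
for k in range(N):
    V[:, k] = x
    x[: N - 1] *= eigs_true            # apply the diagonal map: v_{k+1} = A v_k

V1, vN = V[:, :-1], V[:, -1]           # V_1^{N-1} and the final snapshot v_N

# Coefficients a in v_N = V_1^{N-1} a + r via QR: a = R^{-1} Q* v_N.
Q, R = np.linalg.qr(V1)
a = np.linalg.solve(R, Q.conj().T @ vN)

# Companion matrix S: ones on the sub-diagonal, a in the last column.
S = np.diag(np.ones(N - 2), -1)
S[:, -1] = a

lam, Y = np.linalg.eig(S)              # DMD eigenvalues; V1 @ Y are approximate modes
print(np.sort(lam.real))               # recovers eigs_true for this noise-free data
```

Because the data here are noise-free and span exactly $N-1$ directions, the companion-matrix eigenvalues reproduce the true spectrum; on experimental data the SVD-based variant is preferred.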

The SVD-based approach


Instead of computing the companion matrix $S$, the SVD-based approach yields the matrix $\tilde{S}$ that is related to $A$ via a similarity transform. To do this, assume we have the SVD of $V_1^{N-1} = U \Sigma W^*$. Then

$V_2^N = A V_1^{N-1} + r e_{N-1}^T = A U \Sigma W^* + r e_{N-1}^T.$

Equivalent to the assumption made by the Arnoldi-based approach, we choose $A$ such that the snapshots in $V_2^N$ can be written as linear superpositions of the columns of $U$, which is equivalent to requiring that they can be written as superpositions of POD modes. With this restriction, minimizing the residual requires that it be orthogonal to the POD basis (i.e., $U^* r = 0$). Then multiplying both sides of the equation above by $U^*$ yields $U^* V_2^N = U^* A U \Sigma W^*$, which can be manipulated to obtain

$\tilde{S} = U^* A U = U^* V_2^N W \Sigma^{-1}.$

Because $\tilde{S}$ and $A$ are related via a similarity transform, the eigenvalues of $\tilde{S}$ are eigenvalues of $A$, and if $y$ is an eigenvector of $\tilde{S}$, then $U y$ is an eigenvector of $A$.

In summary, the SVD-based approach is as follows:

  1. Split the time series of data in $V_1^N$ into the two matrices $V_1^{N-1}$ and $V_2^N$.
  2. Compute the SVD of $V_1^{N-1} = U \Sigma W^*$.
  3. Form the matrix $\tilde{S} = U^* V_2^N W \Sigma^{-1}$, and compute its eigenvalues $\lambda_i$ and eigenvectors $y_i$.
  4. The $i$-th DMD eigenvalue is $\lambda_i$ and the $i$-th DMD mode is $U y_i$.

The advantage of the SVD-based approach over the Arnoldi-like approach is that noise in the data and numerical truncation issues can be compensated for by truncating the SVD of $V_1^{N-1}$. As noted in [3], accurately computing more than the first couple of modes and eigenvalues can be difficult on experimental data sets without this truncation step.
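The four steps above can be sketched in NumPy. The snapshot data here are a synthetic superposition of two decaying traveling waves plus weak noise; the wave parameters and all variable names are our own assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two decaying traveling waves plus weak noise (assumed values).
n, N, dt = 200, 40, 0.1
xg = np.linspace(0, 1, n)[:, None]
t = np.arange(N) * dt
V = ((np.exp(3j * np.pi * xg) * np.exp((-0.1 + 2j) * t)).real
     + 0.5 * (np.exp(7j * np.pi * xg) * np.exp((-0.3 + 5j) * t)).real
     + 1e-3 * rng.standard_normal((n, N)))

# 1. Split the series into V_1^{N-1} and V_2^N.
V1, V2 = V[:, :-1], V[:, 1:]

# 2. Truncated SVD of V_1^{N-1}; the rank is read off the singular-value decay.
U, s, Wh = np.linalg.svd(V1, full_matrices=False)
r = 4                                  # two complex-conjugate pairs
U, s, W = U[:, :r], s[:r], Wh.conj().T[:, :r]

# 3. Form S~ = U* V_2^N W Sigma^{-1} and compute its eigendecomposition.
S_tilde = (U.conj().T @ V2 @ W) / s
lam, Y = np.linalg.eig(S_tilde)

# 4. DMD eigenvalues are lam; the DMD modes are the columns of U @ Y.
modes = U @ Y
growth = np.log(np.abs(lam)) / dt      # continuous-time decay rates, ~ -0.1 and -0.3
```

With the noise three orders of magnitude below the signal, the rank-4 truncation recovers both conjugate pairs cleanly.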

Theoretical and algorithmic advancements


Since its inception in 2010, a considerable amount of work has focused on understanding and improving DMD. One of the first analyses of DMD by Rowley et al.[4] established the connection between DMD and the Koopman operator, and helped to explain the output of DMD when applied to nonlinear systems. Since then, a number of modifications have been developed that either strengthen this connection further or enhance the robustness and applicability of the approach.

  • Optimized DMD: Optimized DMD is a modification of the original DMD algorithm designed to compensate for two limitations of that approach: (i) the difficulty of DMD mode selection, and (ii) the sensitivity of DMD to noise or other errors in the last snapshot of the time series.[5] Optimized DMD recasts the DMD procedure as an optimization problem where the identified linear operator has a fixed rank. Furthermore, unlike DMD which perfectly reproduces all of the snapshots except for the last, Optimized DMD allows the reconstruction errors to be distributed throughout the data set, which appears to make the approach more robust in practice.
  • Optimal Mode Decomposition: Optimal Mode Decomposition (OMD) recasts the DMD procedure as an optimization problem and allows the user to directly impose the rank of the identified system.[6] Provided this rank is chosen properly, OMD can produce linear models with smaller residual errors and more accurate eigenvalues on both synthetic and experimental data sets.
  • Exact DMD: The Exact DMD algorithm generalizes the original DMD algorithm in two ways. First, in the original DMD algorithm the data must be a time series of snapshots, but Exact DMD accepts a data set of snapshot pairs.[7] The snapshots in the pair must be separated by a fixed , but do not need to be drawn from a single time series. In particular, Exact DMD can allow data from multiple experiments to be aggregated into a single data set. Second, the original DMD algorithm effectively pre-processes the data by projecting onto a set of POD modes. The Exact DMD algorithm removes this pre-processing step, and can produce DMD modes that cannot be written as the superposition of POD modes.
  • Sparsity Promoting DMD: Sparsity promoting DMD is a post-processing procedure for DMD mode and eigenvalue selection.[8] Sparsity promoting DMD uses an $\ell_1$ penalty to identify a smaller set of important DMD modes, and is an alternative approach to the DMD mode selection problem that can be solved efficiently using convex optimization techniques.
  • Multi-Resolution DMD: Multi-Resolution DMD (mrDMD) combines the techniques of multiresolution analysis with Exact DMD, and is designed to robustly extract DMD modes and eigenvalues from data sets containing multiple timescales.[9] The mrDMD approach was applied to global surface temperature data, and identifies a DMD mode that appears during El Niño years.
  • Extended DMD: Extended DMD is a modification of Exact DMD that strengthens the connection between DMD and the Koopman operator.[10] As the name implies, Extended DMD is an extension of DMD that uses a richer set of observable functions to produce more accurate approximations of the Koopman operator. This extended set could be chosen a priori or learned from data.[11][12] This work also demonstrated that DMD and related methods produce approximations of the Koopman eigenfunctions, in addition to the more commonly used eigenvalues and modes.
  • Residual DMD: Residual DMD provides a means to control the projection errors of DMD and Extended DMD that arise from finite-dimensional approximations of the Koopman operator.[13][14] The method utilizes the same snapshot data but introduces an additional finite matrix that captures infinite-dimensional residuals exactly in the large data limit. This enables users to sidestep spectral pollution (spurious modes), verify Koopman mode decompositions and learned dictionaries, and compute continuous spectra. Moreover, the method further bolsters the link between DMD and the Koopman operator by demonstrating how the spectral content of the latter can be computed with verification and error control.
  • Physics-informed DMD: Physics-informed DMD forms a Procrustes problem that restricts the family of admissible models to a matrix manifold that respects the physical structure of the system.[15] This allows physical structures to be incorporated into DMD. This approach is less prone to overfitting, requires less training data, and is often less computationally expensive to build than standard DMD models.
  • Measure-preserving EDMD: Measure-preserving extended DMD (mpEDMD) offers a Galerkin method whose eigendecomposition converges to the spectral quantities of the Koopman operators for general measure-preserving dynamical systems.[16] This method employs an orthogonal Procrustes problem (essentially a polar decomposition) to DMD and extended DMD. Beyond convergence, mpEDMD upholds physical conservation laws, and exhibits enhanced robustness to noise as well as improved long-term behavior.
  • DMD with Control: Dynamic mode decomposition with control (DMDc)[17] is a modification of the DMD procedure designed for data obtained from input-output systems. One unique feature of DMDc is the ability to disambiguate the effects of system actuation from the open-loop dynamics, which is useful when data are obtained in the presence of actuation.
  • Total Least Squares DMD: Total Least Squares DMD is a modification of Exact DMD meant to address issues of robustness to measurement noise in the data. In [18], the authors interpret Exact DMD as a regression problem that is solved using ordinary least squares (OLS), which assumes that the regressors are noise free. This assumption creates a bias in the DMD eigenvalues when it is applied to experimental data sets where all of the observations are noisy. Total least squares DMD replaces the OLS problem with a total least squares problem, which eliminates this bias.
  • Dynamic Distribution Decomposition: Dynamic Distribution Decomposition (DDD) focuses on the forward problem in continuous time, i.e., the transfer operator. However, the method developed can also be used for fitting DMD problems in continuous time.[19]

In addition to the algorithms listed here, similar application-specific techniques have been developed. For example, like DMD, Prony's method represents a signal as the superposition of damped sinusoids. In climate science, linear inverse modeling is also strongly connected with DMD.[20] For a more comprehensive list, see Tu et al.[7]

Examples


Trailing edge of a profile

Fig. 1: Trailing-edge vortices (entropy)

The wake of an obstacle in the flow may develop a Kármán vortex street. Fig. 1 shows the shedding of a vortex behind the trailing edge of a profile. The DMD analysis was applied to 90 sequential entropy fields and yielded an approximate eigenvalue spectrum as depicted below. The analysis was applied to the numerical results, without referring to the governing equations. The profile is seen in white. The white arcs are the processor boundaries, since the computation was performed on a parallel computer using different computational blocks.

Fig. 2: DMD spectrum

Roughly a third of the spectrum was highly damped (large negative real part) and is not shown. The dominant shedding mode is shown in the following pictures. The image to the left is the real part; the image to the right, the imaginary part of the eigenvector.

Again, the entropy eigenvector is shown in this picture. The acoustic contents of the same mode are seen in the bottom half of the next plot. The top half corresponds to the entropy mode, as above.

Synthetic example of a traveling pattern


The DMD analysis assumes a pattern of the form

$q(x_1, x_2, x_3, \ldots) = e^{c x_1} \hat{q}(x_2, x_3, \ldots),$

where $x_1$ is any of the independent variables of the problem, but has to be selected in advance. Take for example the pattern

$q(x, y, t) = e^{-i \omega t} \hat{q}(x, y) + \text{c.c.},$

with the time $t$ as the preselected exponential factor.

A sample is given in the following figure for a particular choice of frequency and wavenumber. The left picture shows the pattern without noise, the right with noise added. The amplitude of the random noise is the same as that of the pattern.

A DMD analysis is performed with 21 synthetically generated fields using a fixed time interval between snapshots, which limits the frequencies the analysis can resolve.

The spectrum is symmetric and shows three almost undamped modes (small negative real part), whereas the other modes are heavily damped. The real eigenvalue corresponds to the mean of the field, whereas the complex-conjugate pair corresponds to the imposed pattern at the frequency $\omega$, recovered with a relative error of −1/1000. Increasing the noise to 10 times the signal value yields about the same error. The real and imaginary parts of one of the latter two eigenmodes are depicted in the following figure.
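The experiment can be reproduced in spirit with a short NumPy sketch: a constant mean plus an undamped traveling wave, with additive noise. The grid, frequency, wavenumber, and the (milder) noise level below are our own choices, not the values behind the original figures:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters (not those of the original figures).
n, N, dt = 400, 21, 0.05
xs = np.linspace(0, 2 * np.pi, n)[:, None]
t = np.arange(N) * dt
omega, k = 2 * np.pi, 4.0
signal = 1.0 + (np.exp(1j * k * xs) * np.exp(-1j * omega * t)).real
V = signal + 0.1 * rng.standard_normal((n, N))   # noisy snapshots

# SVD-based DMD, truncated to the mean mode plus one conjugate pair.
V1, V2 = V[:, :-1], V[:, 1:]
U, s, Wh = np.linalg.svd(V1, full_matrices=False)
r = 3
U, s, W = U[:, :r], s[:r], Wh.conj().T[:, :r]
lam = np.linalg.eigvals((U.conj().T @ V2 @ W) / s)

freqs = np.imag(np.log(lam)) / dt      # close to {-omega, 0, +omega}
damping = np.real(np.log(lam)) / dt    # near zero: almost undamped modes
```

The three retained eigenvalues sit near the unit circle: one real eigenvalue for the mean field and a conjugate pair at the imposed frequency.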

See also


Several other decompositions of experimental data exist. If the governing equations are available, an eigenvalue decomposition might be feasible.

References

from Grokipedia
Dynamic mode decomposition (DMD) is a data-driven algorithm that approximates the dynamics of complex systems by decomposing sequences of spatiotemporal snapshots into a set of spatial modes associated with linear temporal evolution operators, effectively capturing dominant coherent structures and their growth or decay rates. Developed initially for analyzing fluid flows, DMD generalizes traditional techniques such as proper orthogonal decomposition (POD) by extracting the system's temporal dynamics directly from data, without requiring an underlying governing equation. The core of DMD involves constructing a linear operator that best fits the mapping between consecutive data snapshots, typically through a least-squares approximation, followed by an eigendecomposition to yield the modes, the eigenvalues (indicating frequencies and growth rates), and temporal coefficients. This process reveals the system's spectral properties and is particularly effective for high-dimensional data where the spatial dimension far exceeds the number of snapshots. Mathematically, for a sequence of snapshots $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_m]$ and $\mathbf{Y} = [\mathbf{x}_2, \mathbf{x}_3, \dots, \mathbf{x}_{m+1}]$, DMD computes the operator $\mathbf{A}$ such that $\mathbf{Y} \approx \mathbf{A} \mathbf{X}$, with the eigendecomposition $\mathbf{A} = \mathbf{\Phi} \mathbf{\Lambda} \mathbf{\Psi}^*$ providing the modes $\mathbf{\Phi}$, eigenvalues $\mathbf{\Lambda}$, and amplitudes $\mathbf{b}$ for the reconstruction $\mathbf{x}(t) \approx \sum_{k=1}^r \phi_k b_k \lambda_k^{t/\Delta t}$.

Originally introduced by Peter J. Schmid and Jörn Sesterhenn in 2008 for extracting dynamic information from numerical simulations, and extended to experimental flow fields by Schmid in 2010, DMD has roots in earlier work on Arnoldi-like decompositions and has since been formalized as an approximation to the Koopman operator, which linearizes nonlinear dynamics in an infinite-dimensional space. Its applications began in fluid dynamics, for instability analysis of flows such as wakes and jets, but have expanded to computer vision for background subtraction, epidemiology for modeling disease spread, neuroscience for neural signal decoding, and finance. Extensions such as exact DMD, sparse DMD, and physics-informed variants address noise, control inputs, and nonlinear enhancements, making it robust for real-world, noisy datasets.

Introduction

Definition and Motivation

Dynamic mode decomposition (DMD) is a data-driven method that approximates the spectrum of a linear operator underlying high-dimensional sequential data, obtained by performing an eigendecomposition of the approximating linear operator $A = Y X^\dagger$, where $X$ and $Y$ are matrices of snapshot pairs and $X^\dagger$ denotes the Moore-Penrose pseudoinverse. This approach extracts spatiotemporal patterns without requiring knowledge of the governing equations, making it suitable for analyzing time-evolving systems from observational data alone.

The motivation for DMD arises in the study of complex dynamical systems, such as fluid flows governed by the nonlinear Navier-Stokes equations, where traditional modal decomposition techniques such as proper orthogonal decomposition capture spatial correlations but not the underlying temporal dynamics. In these systems, which often exhibit low-dimensional behavior despite their high dimensionality, DMD provides a means to identify dominant coherent structures and their temporal evolution, facilitating the analysis of instabilities and transitions in fields such as fluid mechanics.

Key benefits of DMD include its ability to identify dynamic modes, which represent spatial structures, and associated eigenvalues that encode growth or decay rates and frequencies, enabling a reduced-order description of the system's dynamics for analysis and prediction. The input consists of time-resolved snapshots arranged into data matrices $X = [x_1, x_2, \dots, x_{m-1}]$ and $Y = [x_2, x_3, \dots, x_m]$, where the $x_k$ are high-dimensional state vectors at discrete times. The output comprises the DMD modes $\phi_j$, eigenvalues $\lambda_j$, and amplitudes $b_j$, allowing reconstruction of the dynamics via the modal expansion $x_k \approx \sum_{j=1}^r b_j \phi_j \lambda_j^k$, where $r$ is the rank of the approximation. This framework has been applied particularly in fluid dynamics to decompose experimental and numerical flow data into dynamically relevant modes.
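The definition $A = Y X^\dagger$ can be exercised directly on a toy linear system; the matrix and values below are invented for illustration:

```python
import numpy as np

# Toy linear system with known eigenvalues 0.9 and 0.8 (invented for illustration).
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])
snaps = [np.array([1.0, 1.0])]
for _ in range(9):
    snaps.append(A_true @ snaps[-1])

X = np.column_stack(snaps[:-1])        # x_1 ... x_{m-1}
Y = np.column_stack(snaps[1:])         # x_2 ... x_m
A = Y @ np.linalg.pinv(X)              # least-squares fit of the dynamics

print(sorted(np.linalg.eigvals(A).real))   # recovers [0.8, 0.9]
```

Since the snapshots span the full (two-dimensional) state space, the pseudoinverse fit reproduces $A$ exactly here; in realistic settings it only gives the best least-squares approximation.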

Historical Background

Dynamic mode decomposition (DMD) was first introduced by Peter J. Schmid and Jörn Sesterhenn in 2008 at the 61st Annual Meeting of the APS Division of Fluid Dynamics. The method emerged as a data-driven technique for extracting dynamic structures from time-resolved flow data, initially developed to analyze both numerical simulations and experimental measurements in fluid mechanics. The initial formulation positioned DMD as an extension of proper orthogonal decomposition (POD), incorporating a temporal constraint to capture the evolution of coherent structures rather than just spatial correlations. This approach allowed for the identification of modes that reveal the underlying dynamics of complex flows, bridging the gap between spatial and temporal analyses in turbulent systems.

In 2010, Schmid formalized and expanded the method in a seminal paper published in the Journal of Fluid Mechanics, providing a rigorous framework for applying DMD to flow analysis and demonstrating its efficacy on both simulated and experimental datasets. This publication marked a key milestone, establishing DMD as a versatile tool for modal analysis in fluids and influencing subsequent developments in data-driven modeling.

Throughout the 2010s, DMD saw increasing adoption across scientific computing, particularly following the 2016 book Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems by J. Nathan Kutz, Steven L. Brunton, Bingni W. Brunton, and Joshua L. Proctor, which provided a comprehensive pedagogical treatment and broadened its accessibility beyond fluid dynamics. During this period, connections were drawn to control theory, with extensions like dynamic mode decomposition with control enabling the incorporation of input signals for system identification and feedback design. These links also highlighted DMD's role in spectral analysis, approximating eigenvalues of linear operators to characterize system behavior.

Mathematical Prerequisites

Snapshot Matrices and Data Representation

In dynamic mode decomposition (DMD), sequential measurements from a dynamical system are organized into snapshot matrices to capture the system's evolution over time. The snapshot matrix $X$ is constructed as $X = [x_1, x_2, \dots, x_m]$, where each column $x_i \in \mathbb{R}^n$ represents a spatial snapshot of the system's state at time $t_i$, and $m$ is the number of snapshots. The delayed snapshot matrix $X'$ is then formed as $X' = [x_2, x_3, \dots, x_{m+1}]$, shifting the sequence by one time step to pair consecutive states. These matrices assume a discrete-time linear evolution model, where $X' \approx A X$ and $A \in \mathbb{R}^{n \times n}$ is a constant system matrix approximating the underlying dynamics. This approximation holds exactly for linear systems and serves as a local linearization for nonlinear systems over short time intervals.

For high-dimensional data where the spatial dimension $n$ greatly exceeds the number of snapshots $m$ (i.e., $n \gg m$), direct computation with $A$ becomes infeasible due to storage and numerical costs. Low-rank approximations address this by projecting the data onto a reduced subspace, typically via a truncated singular value decomposition of $X$, retaining only the dominant modes to capture essential dynamics while discarding noise.

Preprocessing the snapshot data is crucial for robust DMD application. Centering subtracts the temporal mean from each snapshot to remove steady-state biases, though it is not strictly required, as DMD can handle non-centered data. Handling missing snapshots often involves imputation techniques, such as expectation-maximization algorithms within a state-space framework, to reconstruct gaps and maintain matrix structure. Uniform time sampling at constant intervals $\Delta t$ is assumed in standard DMD for simplicity; non-uniform sampling introduces challenges such as irregular time shifts, which can be mitigated by specialized variants or resampling, though these may increase sensitivity to noise.

Singular Value Decomposition Basics

Singular value decomposition (SVD) is a matrix factorization technique that decomposes an $m \times n$ complex matrix $X$ into the form $X = U \Sigma V^*$, where $U$ is an $m \times m$ unitary matrix whose columns are the left singular vectors, $V$ is an $n \times n$ unitary matrix whose columns are the right singular vectors, and $\Sigma$ is an $m \times n$ diagonal matrix containing the singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{\min(m,n)} \geq 0$ along its main diagonal. The singular values quantify the importance of each mode in the decomposition, representing the gain or stretching factors associated with the corresponding singular vectors.

Truncated SVD provides a low-rank approximation of $X$ by retaining only the first $r$ singular values and vectors, where $r \ll \min(m,n)$, yielding $X_r = U_r \Sigma_r V_r^*$. This approximation captures a significant portion of the matrix's energy, often selecting $r$ such that the retained singular values account for 99% of the total variance, thereby enabling effective dimensionality reduction while bounding the approximation error by $\|X - X_r\| \leq \sigma_{r+1}$. In data analysis, such truncation filters noise and identifies dominant structures in high-dimensional datasets.

The Moore-Penrose pseudoinverse of $X$, denoted $X^\dagger$, is computed via the SVD as $X^\dagger = V \Sigma^\dagger U^*$, where $\Sigma^\dagger$ is the diagonal matrix with entries $1/\sigma_i$ for $\sigma_i > 0$ and zero otherwise. The pseudoinverse solves least-squares problems, such as minimizing $\|X b - a\|$ over $b$, and is particularly useful when $X$ is rank-deficient.

In dynamic mode decomposition (DMD), the SVD of the snapshot matrix projects the data onto a lower-dimensional proper orthogonal decomposition (POD) basis, reducing the rank to mitigate noise and ill-conditioning. Specifically, the SVD enables computation of a reduced operator $\tilde{S} = U^* A U$ via the pseudoinverse $X^\dagger$, where $A \approx X' X^\dagger$ approximates the linear dynamics operator, allowing extraction of dynamic modes as $\Phi = U \Psi$ from the eigendecomposition of $\tilde{S}$, where $\Psi$ contains the eigenvectors. This step ensures robust mode identification by focusing on the most energetic directions in the data.
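These properties are easy to verify numerically. The sketch below builds a nearly rank-2 matrix (sizes and scales are arbitrary choices of ours) and checks the truncation error bound and the least-squares property of the pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(3)

# Nearly rank-2 matrix: two outer products plus small noise (arbitrary scales).
u1, u2 = rng.standard_normal((2, 60))
v1, v2 = rng.standard_normal((2, 20))
X = 5.0 * np.outer(u1, v1) + 2.0 * np.outer(u2, v2) \
    + 0.01 * rng.standard_normal((60, 20))

U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2
X_r = (U[:, :r] * s[:r]) @ Vh[:r]      # truncated SVD approximation

# The spectral-norm truncation error equals the first discarded singular value.
err = np.linalg.norm(X - X_r, 2)       # matches s[2] up to round-off

# The pseudoinverse X^+ = V Sigma^+ U* solves min_b ||X b - a||.
a = rng.standard_normal(60)
b = np.linalg.pinv(X) @ a
residual_gradient = X.T @ (X @ b - a)  # normal equations: should be ~ 0
```

The sharp drop after the second singular value is what the rank-selection step in DMD looks for in practice.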

Core Algorithms

Arnoldi Approach

The Arnoldi approach to dynamic mode decomposition constructs a Krylov subspace from a sequence of data snapshots to approximate the eigenvalues and eigenvectors of the underlying linear operator $A$ without explicitly forming or storing the full high-dimensional matrix $A$. This method, rooted in the classical Arnoldi iteration, projects the dynamics onto a low-dimensional subspace, enabling the analysis of large-scale systems such as fluid flows where direct computation with $A$ is computationally prohibitive. By leveraging successive matrix-vector multiplications with snapshots, it efficiently captures dominant dynamic modes while minimizing memory requirements.

The process begins with an initial vector, typically the first snapshot $x_1$, which serves as $q_1$ after normalization. Subsequent basis vectors are generated iteratively: for $k = 1, 2, \dots$, compute $A q_k$ (implicitly, via the next snapshot or linear mapping), then orthogonalize against the previous vectors to obtain coefficients $h_{ik} = q_i^* (A q_k)$ for $i = 1, \dots, k$, and form the new vector $q_{k+1} = \left( A q_k - \sum_{i=1}^k h_{ik} q_i \right) / h_{k+1,k}$, where $h_{k+1,k} = \left\| A q_k - \sum_{i=1}^k h_{ik} q_i \right\|$. These steps build an orthonormal basis $Q = [q_1, \dots, q_m]$ spanning the Krylov subspace, while the coefficients populate the upper Hessenberg matrix $H$ of size $m \times m$, satisfying the relation $A Q \approx Q H$. The eigenvalues of $A$ are then approximated by those of the much smaller $H$.

To obtain the dynamic modes, perform the eigendecomposition $H W = W \Lambda$, where $\Lambda$ contains the approximate eigenvalues and $W$ the corresponding eigenvectors. The DMD modes are computed as $\Phi = X B W$, where $X$ is the matrix of initial snapshots and $B$ solves the least-squares problem $\min_B \| Y - X B \|$ with $Y$ the shifted snapshots, ensuring that the modes align with the data evolution. This yields $A \approx Q H Q^*$, providing a reduced-order representation of the dynamics.

The approach excels in efficiency for systems with large state dimension $n$, as it requires only $O(m^2 n)$ operations for subspace size $m \ll n$ and avoids full matrix storage, making it suitable for large numerical simulations. However, it can be sensitive to noise in experimental data, potentially leading to inaccurate eigenvalue estimates without additional stabilization.

SVD-Based Approach

The SVD-based approach to dynamic mode decomposition (DMD) represents the standard modern implementation for computing dynamic modes and eigenvalues from snapshot data, leveraging the singular value decomposition (SVD) to ensure numerical robustness and efficiency. This method approximates the linear operator governing the system's evolution by projecting the data onto a low-rank subspace, making it particularly effective for high-dimensional datasets such as those from fluid flows or other spatiotemporal measurements. Introduced by Schmid in 2010, it addresses limitations in earlier iterative techniques by using orthogonal bases from the SVD to mitigate ill-conditioning.

The algorithm begins with the construction of two snapshot matrices from a time series of state vectors $\{ \mathbf{x}_j \in \mathbb{R}^n \}_{j=0}^{m}$, sampled at discrete times $t_j = j \Delta t$. The first matrix is $X = [\mathbf{x}_0, \mathbf{x}_1, \dots, \mathbf{x}_{m-1}] \in \mathbb{R}^{n \times m}$, and the shifted matrix is $X' = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_m] \in \mathbb{R}^{n \times m}$. A truncated (economy) SVD is then computed on $X$:

$X \approx U_r \Sigma_r V_r^*,$

where $U_r \in \mathbb{R}^{n \times r}$ contains the first $r$ left singular vectors, $\Sigma_r \in \mathbb{R}^{r \times r}$ is the diagonal matrix of the $r$ largest singular values, $V_r \in \mathbb{R}^{m \times r}$ contains the corresponding right singular vectors, and $r \ll \min(n, m)$ is chosen based on the decay of the singular values. This step identifies the dominant coherent structures in the data.

Next, a reduced-order representation of the dynamics matrix $A$ (such that $X' \approx A X$) is formed in the reduced coordinates:

$\tilde{A} = U_r^* X' V_r \Sigma_r^{-1} \in \mathbb{C}^{r \times r}.$

This matrix $\tilde{A}$ captures the action of $A$ projected onto the modes given by the columns of $U_r$. An eigendecomposition is then performed:

$\tilde{A} W = W \Lambda,$

where $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_r)$ contains the eigenvalues $\lambda_j$ (Ritz values approximating those of $A$), and $W \in \mathbb{C}^{r \times r}$ contains the corresponding eigenvectors. The DMD modes are obtained by projecting back to the full space:

$\Phi = U_r W \in \mathbb{C}^{n \times r},$

with the $j$-th column $\phi_j$ representing the spatial structure associated with $\lambda_j$. For more accurate "exact" modes in noisy data, an alternative form is $\Phi = X' V_r \Sigma_r^{-1} W$, which directly fits the shifted snapshots.

To reconstruct the dynamics, the amplitudes $\mathbf{b} \in \mathbb{C}^r$ are computed by projecting the initial condition onto the modes:

$\mathbf{b} = \Phi^\dagger \mathbf{x}_0,$

where $\Phi^\dagger$ is the Moore-Penrose pseudoinverse of $\Phi$. The state at time step $k$ is then approximated via the modal expansion

$\mathbf{x}_k \approx \sum_{j=1}^r b_j \phi_j \lambda_j^k.$

This enables forward prediction and reveals the temporal evolution driven by each mode, with $|\lambda_j|$ indicating growth or decay and $\arg(\lambda_j)$ the associated oscillation frequency. For a continuous-time interpretation, the growth rates are given by $\operatorname{Re}(\log(\lambda_j)/\Delta t)$ and the frequencies by $\operatorname{Im}(\log(\lambda_j)/\Delta t)$.

Noise handling is inherent in the truncation to rank $r$, where $r$ is selected by inspecting the singular value spectrum; rapid decay typically separates signal from noise, allowing small singular values ($\sigma_{r+1} \approx 0$) to be discarded to filter out measurement errors without losing essential dynamics. This truncation projects the data onto a denoised subspace, improving the conditioning of $\tilde{A}$ and the accuracy of the eigendecomposition. Compared to the precursor Arnoldi approach, the SVD-based method offers superior stability for ill-conditioned snapshot matrices by employing orthogonal SVD projections rather than an iterative construction.
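A minimal end-to-end sketch of this pipeline, including the amplitude fit and a forward prediction, on a noise-free decaying traveling wave (all parameters below are our own assumptions):

```python
import numpy as np

# Noise-free decaying traveling wave (assumed parameters).
n, m, dt = 128, 30, 0.1
xg = np.linspace(0, 2 * np.pi, n)[:, None]
ts = np.arange(m + 10) * dt
data = (np.exp(2j * xg) * np.exp((-0.2 + 3j) * ts)).real

# Snapshot matrices, truncated SVD, reduced operator, modes.
X, Xp = data[:, :m], data[:, 1:m + 1]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                   # one complex-conjugate pair
U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
A_tilde = (U.conj().T @ Xp @ V) / s
lam, W = np.linalg.eig(A_tilde)
Phi = U @ W                             # projected DMD modes

# Amplitudes from the initial condition, then prediction beyond the fit window.
b = np.linalg.lstsq(Phi, data[:, 0].astype(complex), rcond=None)[0]
k = m + 5
x_pred = (Phi * b) @ lam**k

mu = np.log(lam) / dt                   # continuous-time exponents, ~ -0.2 ± 3i
err = np.abs(x_pred.real - data[:, k]).max()
```

Because the data are exactly rank 2 and linear, the modal expansion extrapolates the wave essentially to machine precision; noise or nonlinearity would degrade this.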

Theoretical Foundations

Connection to Koopman Operator Theory

The Koopman operator provides a theoretical framework for analyzing nonlinear s by linearizing them in an infinite-dimensional space of observables. For a nonlinear discrete-time defined by the map F:RnRnF: \mathbb{R}^n \to \mathbb{R}^n, where xk+1=F(xk)\mathbf{x}_{k+1} = F(\mathbf{x}_k), the Koopman operator KK acts on scalar observables g:RnCg: \mathbb{R}^n \to \mathbb{C} such that Kg(xk)=g(F(xk))=g(xk+1)K g(\mathbf{x}_k) = g(F(\mathbf{x}_k)) = g(\mathbf{x}_{k+1}). This operator is linear despite the nonlinearity of FF, and its spectral decomposition yields eigenvalues μj\mu_j and eigenfunctions ψj\psi_j that characterize the system's global dynamics, including growth rates and frequencies. Dynamic mode decomposition (DMD) establishes a connection to Koopman operator theory by approximating KK in a finite-dimensional subspace spanned by data snapshots. Specifically, DMD constructs a linear operator AA from paired snapshot matrices XX and YY, where YAXY \approx A X, such that AA approximates the action of KK on the subspace of formed by the data. The DMD modes ϕj\phi_j and eigenvalues λj\lambda_j then serve as finite-dimensional surrogates for the Koopman eigenfunctions ψj\psi_j and eigenvalues μj\mu_j, enabling modal decomposition of the dynamics in this projected space. For linear systems, where xk+1=Axk\mathbf{x}_{k+1} = A \mathbf{x}_k, the Koopman operator coincides exactly with AA when are linear functions, providing a direct link. In nonlinear cases, however, DMD relies on data-driven , capturing only the Koopman spectrum observable within the snapshot subspace. Despite this approximation, DMD has limitations in fully representing the Koopman operator, particularly for strongly nonlinear systems. It projects dynamics onto a defined by the data, potentially missing eigenfunctions outside this span and leading to incomplete capture of the full Koopman spectrum. 
This restriction arises because DMD assumes linearity in the observable space spanned by the snapshots, which may not encompass the infinite-dimensional nature of $K$ for highly nonlinear $F$. The theoretical ties between DMD and Koopman theory were strengthened in foundational work by Tu et al. (2014), which formalized DMD as a data-driven method for approximating Koopman modes and introduced exact DMD as a refinement linking the two frameworks.
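
As a minimal illustration of this connection, consider a nonlinear map whose observables $(x_1, x_2, x_1^2)$ span a Koopman-invariant subspace, a standard textbook construction. The sketch below (Python with NumPy; the constants `a`, `b`, `c` are illustrative choices, not drawn from any cited study) applies DMD to the lifted snapshots and recovers the Koopman eigenvalues $a$, $b$, and $a^2$:

```python
import numpy as np

# Nonlinear map: x1 <- a*x1, x2 <- b*x2 + c*x1^2. The observables
# g = (x1, x2, x1^2) span a Koopman-invariant subspace, so DMD applied
# to the lifted snapshots recovers the Koopman eigenvalues a, b, a^2.
a, b, c = 0.9, 0.5, 1.0
x = np.array([1.0, 1.0])
snapshots = []
for _ in range(20):
    snapshots.append([x[0], x[1], x[0] ** 2])      # lift state to observables
    x = np.array([a * x[0], b * x[1] + c * x[0] ** 2])
G = np.array(snapshots).T                          # 3 x 20 observable matrix
X, Y = G[:, :-1], G[:, 1:]                         # paired snapshot matrices
A = Y @ np.linalg.pinv(X)                          # finite-dim. Koopman approximation
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)                                        # ~ [0.5, 0.81, 0.9] = {b, a^2, a}
```

Because the lifted observables evolve exactly linearly here, DMD recovers the Koopman eigenvalues to numerical precision; for a generic nonlinear system no finite lifting is exact, which is the limitation discussed above.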

Exact Dynamic Mode Decomposition

Exact dynamic mode decomposition (exact DMD) provides a precise numerical method for approximating the linear operator that maps one set of system snapshots to the next, by minimizing the Frobenius norm of the reconstruction error $\|X' - A X\|_F$, where $X \in \mathbb{C}^{n \times m}$ and $X' \in \mathbb{C}^{n \times m}$ are data matrices containing $m$ snapshots of an $n$-dimensional state at consecutive time steps, and $A \in \mathbb{C}^{n \times n}$ is the sought-after operator. This optimization yields the closed-form solution $A = X' X^\dagger$, with $X^\dagger$ denoting the Moore–Penrose pseudoinverse of $X$; the DMD eigenvalues and modes can be computed efficiently without forming the full $A$ explicitly. The DMD modes and eigenvalues are obtained by eigendecomposing $A$, providing an exact representation of the best linear approximation to the data-generating dynamics. In contrast to standard DMD, which applies a low-rank truncation to the singular value decomposition (SVD) of $X$ prior to operator approximation and projects modes onto the leading proper orthogonal decomposition (POD) directions, exact DMD employs the full-rank pseudoinverse and defers any truncation until after the eigendecomposition, thereby avoiding information loss from premature truncation. This distinction is particularly advantageous for short datasets, where the number of snapshots $m$ is smaller than the state dimension $n$, as the full pseudoinverse preserves all available data correlations without artificial rank constraints. The algorithmic implementation of exact DMD, as formalized by Tu et al. (2014), begins with the SVD $X = U \Sigma V^*$, followed by construction of the reduced operator $\tilde{A} = U^* X' V \Sigma^{-1}$. An eigendecomposition of this low-dimensional $\tilde{A}$ (at most $m \times m$) yields eigenvalues $\lambda_j$ and eigenvectors $w_j$, from which the full DMD modes are reconstructed as the columns of $\Phi = X' V \Sigma^{-1} W$, where $W$ collects the eigenvectors $w_j$.
This procedure effectively projects the snapshots onto the POD basis spanned by $U$ for computational efficiency, performs the minimization in the reduced space, and lifts the modes back to the original coordinates, ensuring that $\Phi$ lies in the span of $X'$ rather than $X$. These modes satisfy $A \Phi = \Phi \Lambda$ exactly for the least-squares $A$, establishing a rigorous link to Koopman operator theory by providing the eigenvectors of a linear subspace approximation to the true nonlinear evolution operator whenever the data admits a low-dimensional linear embedding.
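
The algorithmic steps above can be sketched in a few lines of Python with NumPy. The data here are generated from a known two-mode linear system embedded in a higher-dimensional space (all sizes and matrices are illustrative), so the recovered DMD eigenvalues can be checked against ground truth:

```python
import numpy as np

# Exact DMD (Tu et al. 2014): SVD of X, reduced operator, eigendecomposition,
# and lift of the eigenvectors back to full dimension.
rng = np.random.default_rng(0)
n, m = 10, 50
growth = np.array([0.95, 0.8])                     # true eigenvalues
Q = np.linalg.qr(rng.standard_normal((n, 2)))[0]   # random embedding of the 2-dim system
z, Z = np.array([1.0, 1.0]), np.empty((2, m))
for k in range(m):
    Z[:, k] = z
    z = growth * z
data = Q @ Z
X, Y = data[:, :-1], data[:, 1:]

U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))                  # numerical rank (here r = 2)
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
Atilde = U.conj().T @ Y @ Vh.conj().T / s          # reduced operator U* X' V S^-1
lam, W = np.linalg.eig(Atilde)
Phi = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W       # exact DMD modes, in span of X'
print(np.sort(lam.real))                           # ~ [0.8, 0.95]
```

Note that the modes `Phi` are built from `Y` (the shifted snapshots $X'$), which is what distinguishes the exact modes from the projected modes $U W$ of standard DMD.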

Advanced Variants and Extensions

Optimized and Sparse DMD

Optimized dynamic mode decomposition (OPT-DMD) enhances standard DMD by globally minimizing the reconstruction error through nonlinear optimization techniques, such as variable projection, to better approximate the underlying linear dynamics from snapshot data. This approach addresses limitations of traditional DMD, in which the least-squares approximation of the dynamics matrix can introduce bias, particularly for unevenly sampled or noisy data. In 2023, OPT-DMD was applied to develop low-cost reduced-order models in plasma physics, demonstrating superior predictive accuracy over basic DMD in identifying quasi-periodic behaviors. Sparse DMD extends the framework by incorporating sparsity constraints, typically via L1 regularization, to select a subset of dominant modes that best capture the data's essential dynamics while discarding redundant or noisy components. This promotes interpretability and computational efficiency by reducing the number of active modes. The optimization problem for sparse DMD is formulated as minimizing the Frobenius norm of the reconstruction error plus a sparsity penalty on the vector of DMD amplitudes $\mathbf{b}$:

$$\min_{\mathbf{b}} \| \mathbf{Y} - \mathbf{\Phi} \mathbf{b} \|_F + \gamma \| \mathbf{b} \|_1$$

where $\mathbf{Y}$ is the snapshot matrix, $\mathbf{\Phi}$ contains the DMD modes, and $\gamma > 0$ is the regularization parameter; the problem is efficiently solved using the alternating direction method of multipliers (ADMM). Seminal work by Jovanović, Schmid, and Nichols (2014) established this sparsity-promoting variant, enabling mode selection for complex flows. More recent applications, such as 2023 studies on spatiotemporal data, leverage sparse DMD for automated mode selection in high-dimensional systems, enhancing model parsimony. Advancements in globally optimized DMD appeared in 2024, tailoring the method to model breakage kinetics in particulate systems by optimizing mode amplitudes and eigenvalues to fit experimental distributions over time.
This globally optimized DMD variant improves accuracy in capturing nonlinear breakage processes compared to standard approximations. In 2025, extensions incorporating time-varying amplitudes addressed transient dynamics, allowing modes to adapt their contributions over time for better reconstruction of non-stationary signals, such as sudden activity onset in video streams. These optimizations yield significant computational gains, particularly through mode reduction, enabling real-time applications such as motion detection in streaming video, where sparse representations process high-frame-rate data with low latency. For instance, 2025 implementations achieve efficient processing by retaining only key dynamic modes, reducing computational overhead by orders of magnitude relative to full DMD.
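
A heavily simplified sketch of the sparsity-promoting idea: soft-thresholding, the proximal operator of the L1 penalty that ADMM applies internally, shrinks the least-squares mode amplitudes and zeroes out weak modes. The mode matrix and amplitudes below are synthetic, and a single proximal step stands in for the full ADMM iteration of the original method:

```python
import numpy as np

# Synthetic setup: 5 candidate DMD modes, of which only 2 carry
# significant amplitude in the data. One soft-thresholding step
# illustrates how the L1 penalty discards the weak modes.
rng = np.random.default_rng(1)
Phi = np.linalg.qr(rng.standard_normal((40, 5)))[0]   # orthonormal mode matrix
b_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])         # only modes 0 and 3 active
x0 = Phi @ b_true + 0.01 * rng.standard_normal(40)    # noisy snapshot

b_ls = np.linalg.lstsq(Phi, x0, rcond=None)[0]        # dense least-squares amplitudes
gamma = 0.1                                           # sparsity weight (tuning parameter)
b_sparse = np.sign(b_ls) * np.maximum(np.abs(b_ls) - gamma, 0.0)
print(np.nonzero(b_sparse)[0])                        # -> [0 3]: weak modes discarded
```

In the full method the threshold step alternates with a least-squares update until convergence, and $\gamma$ is swept to trade reconstruction error against the number of retained modes.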

Physics-Informed and Kernel DMD

Physics-informed dynamic mode decomposition (piDMD) integrates physical principles, such as symmetries, invariances, and conservation laws, directly into the DMD framework to enhance model robustness and prevent overfitting in high-dimensional systems. By restricting the dynamics matrix to a manifold that respects these principles, piDMD formulates the problem as a constrained (Procrustes-type) optimization, yielding closed-form solutions for constraints such as energy preservation in fluid flows. For instance, energy-preserving variants enforce unitary transformations to maintain conservation of energy in simulations of incompressible flows, improving accuracy over standard DMD by aligning modes with physical laws.

Kernel dynamic mode decomposition (kernel DMD) extends DMD to nonlinear systems by employing the kernel trick to map data into high-dimensional reproducing kernel Hilbert spaces (RKHS), where the Koopman operator can be approximated linearly. This approach avoids explicit feature engineering by using kernel functions, such as Gaussian or polynomial kernels, to compute inner products implicitly, enabling the extraction of dynamic modes from nonlinear manifolds. The core approximation is

$$\mathbf{K}(\mathbf{X}, \mathbf{X}') \approx \mathbf{K}(\mathbf{X}, \mathbf{X}) \mathbf{A},$$

where $\mathbf{K}(\mathbf{X}, \mathbf{X}')$ is the kernel matrix between consecutive snapshot matrices $\mathbf{X}$ and $\mathbf{X}'$, and $\mathbf{A}$ is the transition matrix; an eigendecomposition involving the kernel matrix $\mathbf{K}(\mathbf{X}, \mathbf{X})$ then yields approximate Koopman eigenvalues, eigenfunctions, and modes. Originally developed for spectral analysis of the Koopman operator in time-series data, kernel DMD has been applied to problems such as nonlinear attractors, outperforming linear DMD in capturing nonlinear evolution.
Recent advancements include time-delay embedding DMD, which augments standard DMD with lagged snapshots to reconstruct nonlinear dynamics from univariate or sparse measurements, enabling improved long-term predictions in systems such as metabolic oscillations. Published in 2025, this variant uses high embedding dimensions (e.g., $d = 150$) to approximate nonlinear attractors linearly, achieving forecast accuracy comparable to neural networks for damped oscillations. Quantum DMD, introduced in 2023, leverages quantum computing for high-dimensional many-body systems, offering exponential speedup over classical methods by processing observables with reduced sampling, as demonstrated in many-body and fluid analyses. Additionally, geostrophic DMD, developed in 2025, applies multi-resolution DMD to sea-surface height data from satellite observations, isolating balanced geostrophic motions from internal gravity waves in ocean currents, with correlations exceeding 0.99 in simulations and over 0.9 in real data. These extensions enhance applicability to complex systems by incorporating domain-specific structure, such as time delays for nonlinearity or quantum resources for scalability.
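
A minimal sketch of the time-delay embedding idea, assuming a scalar oscillatory signal and a modest embedding dimension (far smaller than the $d = 150$ cited above): stacking lagged copies of the signal into a Hankel matrix lets DMD recover an oscillation frequency that no single scalar snapshot exposes.

```python
import numpy as np

# Time-delay (Hankel) DMD on a scalar signal: d lagged copies expose the
# oscillation to the linear operator. Signal and d are illustrative.
dt, d = 0.1, 20
t = np.arange(400) * dt
signal = np.cos(2.0 * t)                              # angular frequency omega = 2
H = np.array([signal[i:i + len(t) - d] for i in range(d)])  # d x (N - d) Hankel matrix
X, Y = H[:, :-1], H[:, 1:]                            # column shift = one time step
A = Y @ np.linalg.pinv(X, rcond=1e-10)                # rcond truncates the rank-2 data safely
lam = np.linalg.eigvals(A)
dominant = lam[np.argmax(np.abs(lam.imag))]           # oscillatory eigenvalue pair
omega = abs(np.log(dominant).imag) / dt               # continuous-time frequency
print(round(omega, 6))                                # -> 2.0
```

A pure cosine is exactly rank 2 in delay coordinates, so the recovery here is exact; for nonlinear signals, larger $d$ enlarges the space of delay observables in which an approximately linear evolution is sought.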

Applications

Fluid Dynamics Cases

Dynamic mode decomposition (DMD) has been applied to analyze the flow in the wake of a circular cylinder, a classic benchmark in fluid dynamics. In a numerical simulation at Re = 100, snapshots of the flow field were used to extract DMD modes that capture the dominant oscillatory behavior of the von Kármán vortex street. The leading modes reveal pairs of counter-rotating vortices shedding alternately from the cylinder, with the associated eigenvalues indicating a Strouhal number of approximately 0.165, aligning closely with the expected value for this flow regime. Spatial mode visualizations show streamwise elongated structures in the wake, while the temporal evolution demonstrates periodic growth and decay modulated by the imaginary part of the eigenvalues. A synthetic traveling wave example illustrates DMD's ability to decompose propagating patterns into coherent modes. Consider a two-dimensional dataset generated by a superposition of sinusoidal waves traveling at constant speed, such as $u(x,y,t) = \sin(2\pi (kx - \omega t)) + \sin(2\pi (ky - \omega t))$, where $k$ is the wavenumber and $\omega$ is the temporal frequency. DMD applied to snapshots of this field extracts modes with eigenvalues $\lambda = e^{-2\pi i \omega \Delta t}$, precisely recovering the wave's frequency and direction of travel through the argument of $\lambda$. This decomposition separates the traveling components, with spatial modes representing wavefronts and temporal coefficients showing linear phase evolution without dispersion. Recent advancements include optimized DMD for reconstructing atmospheric flows from sparse data. In a 2025 study, optimized DMD was used to build reduced-order models for global dynamics, effectively reconstructing tracer transport patterns in chemically reactive flows simulated by the GEOS-Chem model at 4°×5° resolution. The method adapts mode selection to minimize reconstruction error, achieving mean relative errors below 10% over 20-day forecasts, and highlights dominant transport modes akin to advective structures in fluid flows.
DMD also aids in predicting turbulence in shear flows by approximating Koopman operators from simulation data. Modes reveal shear-layer instabilities, with frequencies derived from ω=(log(λ)/Δt)/(2π)\omega = \Im(\log(\lambda)/\Delta t) / (2\pi) corresponding to observed turbulent scales. In general, DMD modes in fluid dynamics uncover instability frequencies via the Strouhal number St=DU(log(λ)/Δt)2πSt = \frac{D}{U} \cdot \frac{\Im(\log(\lambda)/\Delta t)}{2\pi}, where DD is a characteristic length, UU is the free-stream velocity, and Δt\Delta t is the snapshot interval; this quantifies shedding rates in wakes and shear layers. Spatial plots of modes often use contour or vector fields to depict coherent structures, while temporal plots show exponential evolution, aiding interpretation of flow stability.
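
The frequency and Strouhal-number formulas above amount to a one-line computation once a DMD eigenvalue is in hand. The sketch below uses a synthetic eigenvalue placed at the cylinder-wake value $St = 0.165$; the values of $D$, $U$, and $\Delta t$ are illustrative, not taken from any particular simulation:

```python
import numpy as np

# From a discrete-time DMD eigenvalue to shedding frequency and Strouhal
# number, St = (D/U) * Im(log(lambda)/dt) / (2*pi).
D, U, dt = 1.0, 1.0, 0.2
f = 0.165 * U / D                                 # target shedding frequency
lam = np.exp(1j * 2 * np.pi * f * dt)             # synthetic eigenvalue on the unit circle
freq = np.log(lam).imag / dt / (2 * np.pi)        # frequency recovered from the eigenvalue
St = D / U * freq                                 # Strouhal number
print(round(St, 6))                               # -> 0.165
```

An eigenvalue magnitude $|\lambda| < 1$ would additionally indicate decay of the corresponding mode, via the real part of $\log(\lambda)/\Delta t$.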

Broader Scientific Applications

Dynamic mode decomposition (DMD) has found applications in quantum many-body systems, where it enables long-time forecasting of observables from limited data snapshots. In a 2025 study, DMD was applied to simulate the evolution of quantum states in many-body systems, capturing non-linear interactions and predicting behaviors over extended timescales that are computationally prohibitive for direct simulation. This approach leverages DMD's ability to extract dominant modes from time-series data of observables, facilitating reduced-order models for quantum process optimization. In oceanography and atmospheric science, DMD analyzes large-scale oceanic and atmospheric motions by decomposing spatiotemporal fields into coherent modes. For instance, a 2025 analysis used DMD on satellite altimetry from the Surface Water and Ocean Topography (SWOT) mission to identify geostrophically balanced flows, revealing dominant wave patterns and their contributions to regional circulation. Similarly, in particulate geophysics, globally optimized DMD modeled breakage kinetics in granular systems, providing data-driven predictions of particle size evolution under mechanical stress with improved accuracy over traditional population balance equations. Biological and epidemiological applications of DMD focus on extracting dynamic patterns from spatiotemporal disease data. A seminal 2015 work applied DMD to epidemiological surveillance data, including historical measles records, identifying propagating modes that correspond to infection waves and enabling early detection of outbreaks without assuming underlying epidemiological models. Recent extensions incorporate sparse DMD variants to handle noisy, high-dimensional datasets, enhancing mode extraction for modern systems tracking vector-borne or respiratory diseases. In engineering contexts, DMD supports real-time analysis of dynamic processes across diverse media.
For video data, a 2025 method employed DMD for motion detection in streaming footage, achieving low-latency foreground-background separation by decomposing pixel trajectories into low-rank modes, suitable for surveillance and autonomous systems. In plasma engineering, DMD was used in 2023 to model E × B plasma drifts from simulation data, extracting spatiotemporal patterns for reduced-order representations that predict instability growth in fusion devices and thrusters. Across these fields, a key benefit of DMD lies in its role as a reduced-order modeling tool for control applications, where extracted modes inform feedback strategies to stabilize or manipulate systems. For example, DMD with control (DMDc) has been integrated into linear quadratic regulators for high-dimensional processes, reducing computational demands while maintaining predictive fidelity. This versatility underscores DMD's value in bridging data-driven insights with actionable interventions in non-fluid domains.

Challenges and Limitations

Dynamic mode decomposition (DMD) exhibits significant sensitivity to noise in the input data, particularly during computation of the pseudoinverse, where small perturbations can be amplified by ill-conditioning of the snapshot matrix. This amplification arises in the singular value decomposition (SVD) step, where the pseudoinverse $X^\dagger = V \Sigma^{-1} U^*$ inverts the singular values, magnifying errors when the condition number of the snapshot matrix is large. To mitigate this, practitioners often apply SVD truncation to retain only the dominant $r$ modes with $r \ll \min(m, n)$, or employ regularization techniques such as optimized DMD or total least-squares variants, which reduce bias and enhance robustness in noisy environments. Standard DMD assumes uniformly spaced snapshots, but real-world data frequently involves non-uniform sampling due to irregular time steps in experiments or adaptive simulations, necessitating specialized extensions. Methods like θ-DMD address this by reformulating the linear mapping to accommodate variable intervals, enabling extraction of coherent modes from subsampled or irregularly timed flow data without significant loss in accuracy compared to uniform cases. For high-dimensional systems where the state dimension $n$ is large, computational cost poses a challenge, as the full SVD of the snapshot matrices becomes prohibitive in time and memory. Randomized SVD approximations alleviate this by computing a low-rank factorization efficiently, scaling with the intrinsic rank rather than with $n$, and enabling near-optimal DMD on massive datasets. Additionally, streaming variants support real-time processing of sequentially arriving data, reformulating DMD to update modes incrementally without storing the entire snapshot history. DMD requires a sufficient number of snapshots, $m > r$, where $r$ is the rank of the dynamics, to reliably approximate the Koopman operator; fewer snapshots lead to underdetermined problems and poor mode resolution.
Short sequences from multiple initial conditions often outperform long single-trajectory data in well-conditioned systems, as extended records can accumulate noise without improving eigenvalue estimates.
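
The noise-mitigation strategy described above can be demonstrated directly: projecting the DMD operator onto the dominant $r$ singular directions before the eigendecomposition suppresses noise amplification. All sizes, decay rates, and the noise level below are illustrative:

```python
import numpy as np

# Rank-truncated DMD on noisy snapshots of a known rank-2 linear system:
# restricting to the leading r singular directions keeps the small, noisy
# singular values out of the inverted Sigma.
rng = np.random.default_rng(2)
n, m, r = 50, 200, 2
modes = np.linalg.qr(rng.standard_normal((n, r)))[0]
decay = np.array([0.99, 0.9])                         # true eigenvalues
z, clean = np.ones(r), np.empty((n, m))
for k in range(m):
    clean[:, k] = modes @ z
    z = decay * z
data = clean + 1e-4 * rng.standard_normal((n, m))     # additive measurement noise
X, Y = data[:, :-1], data[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Atilde = U[:, :r].conj().T @ Y @ Vh[:r].conj().T / s[:r]  # rank-r projected operator
eigs = np.sort(np.linalg.eigvals(Atilde).real)
print(eigs)                                           # ~ [0.9, 0.99] despite the noise
```

Choosing $r$ too large re-admits noise-dominated singular values into $\Sigma^{-1}$, which is exactly the amplification mechanism the truncation is meant to avoid.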

Interpretability and Predictive Accuracy

In dynamic mode decomposition (DMD), the eigenvalues $\lambda_j$ associated with each mode provide key insights into the system's dynamics. The continuous-time eigenvalues are obtained as $\mu_j = \log(\lambda_j)/\Delta t$, where the real part $\operatorname{Re}(\mu_j)$ indicates growth or decay rates, and the imaginary part $\operatorname{Im}(\mu_j)$ corresponds to frequencies. This association enables the interpretation of modes as coherent structures that capture dominant spatiotemporal patterns, such as propagating waves or decaying transients in fluid flows. However, in nonlinear regimes, these modes can mix non-physical components, reducing interpretability, as the linear approximation fails to disentangle true dynamical features from nonlinear interactions. DMD excels in short-term prediction by approximating nonlinear dynamics with a linear operator, but its accuracy diminishes over longer horizons, particularly in systems where trajectories diverge rapidly due to sensitivity to initial conditions. A 2025 study on periodic and quasi-periodic solutions demonstrated that DMD predictions remain reliable for short times but exhibit significant errors in chaotic regimes, with divergence observed beyond a few Lyapunov times. Validation of DMD typically involves assessing the reconstruction error, defined as the norm of the difference between the original data and the mode-reconstructed snapshots, and comparing computed eigenvalues to known true values in benchmark problems such as the flow past a cylinder. In such benchmarks, low reconstruction errors (e.g., below 5% for dominant modes) confirm fidelity, though discrepancies arise when data noise exceeds 10%. Recent assessments highlight ongoing challenges in handling transients, where standard DMD assumes constant mode amplitudes, leading to inaccuracies in time-varying systems; extensions incorporating time-varying amplitudes with sparsity and smoothness regularization have been developed to improve transient detection.
Additionally, predictive accuracy depends heavily on data quality, with noisy or sparse snapshots amplifying mode mixing and eigenvalue inaccuracies. To address these limitations, integrating physics-informed constraints, such as enforcing conservation laws in the identified operator, enhances both interpretability and long-term accuracy, providing more accurate eigenvalue estimates than standard DMD in nonlinear benchmarks.
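
The validation procedure described above — reconstructing every snapshot from DMD modes, eigenvalues, and amplitudes, then measuring the relative error — can be sketched as follows, using noiseless synthetic data of known rank so the error lands near machine precision:

```python
import numpy as np

# Reconstruction-error validation: fit exact DMD, rebuild the snapshots
# as Phi * diag(b) * Vandermonde(lambda), and compute the relative error.
# Synthetic data: one decaying mode plus one oscillation (rank 3).
rng = np.random.default_rng(3)
n, m, r, dt = 30, 60, 3, 0.1
t = np.arange(m) * dt
temporal = np.vstack([np.exp(-0.1 * t), np.cos(3 * t), np.sin(3 * t)])
data = rng.standard_normal((n, 3)) @ temporal
X, Y = data[:, :-1], data[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Atilde = U[:, :r].conj().T @ Y @ Vh[:r].conj().T / s[:r]
lam, W = np.linalg.eig(Atilde)
Phi = Y @ Vh[:r].conj().T @ np.diag(1.0 / s[:r]) @ W      # exact DMD modes
b = np.linalg.lstsq(Phi, data[:, 0], rcond=None)[0]       # amplitudes from first snapshot
recon = np.real(Phi @ (b[:, None] * lam[:, None] ** np.arange(m)))
err = np.linalg.norm(recon - data) / np.linalg.norm(data) # relative Frobenius error
print(err < 1e-8)                                         # -> True for this clean data
```

On noisy or genuinely nonlinear data, the same error metric quantifies how much of the dynamics the linear modal expansion captures, which is the diagnostic role it plays in the benchmarks cited above.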

References
