Dynamical system

from Wikipedia
The Lorenz attractor arises in the study of the Lorenz oscillator, a dynamical system.

In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.

At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state.[1][2] However, some systems are stochastic, in that random events also affect the evolution of the state variables.

The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics,[3][4] biology,[5] chemistry, engineering,[6] economics,[7] history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.

Overview

The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.
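The iteration procedure described above can be sketched in code. The example below (Python, using a linearized frictionless pendulum θ'' = −θ and a semi-implicit Euler update rule, both assumed here purely for illustration) advances the state a small time step at a time and collects the resulting orbit.

```python
# Sketch of "integrating the system": the evolution rule of a frictionless
# pendulum (linearized as theta'' = -theta, an assumed toy model) only gives
# the state a short step into the future; iterating it traces out an orbit.
def step(theta, omega, dt):
    # semi-implicit Euler: update the velocity first, then the position
    omega = omega - theta * dt
    theta = theta + omega * dt
    return theta, omega

theta, omega = 1.0, 0.0      # initial state (angle, angular velocity)
orbit = [(0.0, theta)]
dt, t = 0.01, 0.0
for _ in range(628):         # roughly one period (about 2*pi seconds)
    theta, omega = step(theta, omega, dt)
    t += dt
    orbit.append((t, theta))

print(orbit[-1][1])          # close to the initial angle 1.0 after one period
```

The collected list of (time, state) pairs is exactly the trajectory through the initial point, computed one small step at a time.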

Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.

For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:

  • The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
  • The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
  • The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
  • The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.
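The last point, computing averages along one very long trajectory, can be illustrated with the logistic map x → 4x(1 − x). A standard such average is the largest Lyapunov exponent, the time average of log|f'(x)| along an orbit; for this map the exact value is known to be ln 2. The sketch below assumes an arbitrary generic initial condition.

```python
import math

# Estimate the largest Lyapunov exponent of the logistic map x -> 4x(1-x)
# as a time average of log|f'(x)| along one long trajectory.
f = lambda x: 4.0 * x * (1.0 - x)
df = lambda x: 4.0 - 8.0 * x          # derivative of the map

x = 0.3141592                         # a generic initial condition (arbitrary)
for _ in range(1000):                 # discard a transient
    x = f(x)

n, total = 100_000, 0.0
for _ in range(n):
    total += math.log(abs(df(x)))
    x = f(x)

print(total / n)  # close to ln 2 ~ 0.693 for typical initial conditions
```

A positive value of this average signals the exponential separation of nearby orbits that makes individual trajectories unpredictable.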

History

Many people regard French mathematician Henri Poincaré as the founder of dynamical systems.[8] Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (their frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.

Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.

In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics.

Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.

Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period.

In the late 20th century the dynamical-systems perspective on partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems.[9] His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.[10]

Formal definition

In the most general sense,[11][12] a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function

Φ : U ⊆ (T × X) → X

with

proj₂(U) = X (where proj₂ is the 2nd projection map)

and for any x in X:

Φ(0, x) = x
Φ(t₂, Φ(t₁, x)) = Φ(t₂ + t₁, x)

for t₁, t₂ + t₁ ∈ I(x), where we have defined the set I(x) := { t ∈ T : (t, x) ∈ U } for any x in X.

In particular, in the case that U = T × X we have for every x in X that I(x) = T and thus that Φ defines a monoid action of T on X.

The function Φ(t, x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system.

We often write

Φ_x(t) ≡ Φ(t, x),   Φ^t(x) ≡ Φ(t, x),

if we take one of the variables as constant. The function

Φ_x : I(x) → X

is called the flow through x and its graph is called the trajectory through x. The set

γ_x ≡ { Φ(t, x) : t ∈ I(x) }

is called the orbit through x. The orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T

Φ(t, x) ∈ S.

Thus, in particular, if S is Φ-invariant, I(x) = T for all x in S. That is, the flow through x must be defined for all time for every element of S.
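The two defining axioms, the identity Φ(0, x) = x and the composition law Φ(t₂, Φ(t₁, x)) = Φ(t₂ + t₁, x), can be checked numerically for a concrete evolution function. The sketch below assumes Φ(t, x) = x·eᵗ, the flow of ẋ = x on X = R with T = (R, +), as an illustrative example.

```python
import math

# A concrete evolution function: Phi(t, x) = x * e^t, the flow of x' = x.
# This is an assumed illustrative choice; any solution flow of an
# autonomous ODE satisfies the same two axioms.
def phi(t, x):
    return x * math.exp(t)

# Check the monoid-action axioms on sample points:
#   Phi(0, x) = x                          (identity)
#   Phi(s, Phi(t, x)) = Phi(s + t, x)      (composition)
for x in (-2.0, 0.5, 3.0):
    assert abs(phi(0.0, x) - x) < 1e-12
    for s in (0.3, 1.0):
        for t in (-0.7, 2.0):
            assert abs(phi(s, phi(t, x)) - phi(s + t, x)) < 1e-9
print("flow axioms hold on the sample points")
```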

More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor.

Geometrical definition

In the geometrical definition, a dynamical system is the tuple (T, M, f). T is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f^t (with t ∈ T) such that f^t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain T into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T.

Real dynamical system

A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.

Discrete dynamical system

A discrete dynamical system or discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade.[13]

Cellular automaton

A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such, cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice.
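A minimal sketch of such a system is an elementary cellular automaton. The example below (Python, Rule 90 on a finite row with fixed zero boundaries, an assumed simplification of the infinite lattice) shows the locally defined evolution function: each cell's next state depends only on its two neighbours.

```python
# Elementary cellular automaton, Rule 90: new[i] = left XOR right.
# A finite row with zero boundaries stands in for the infinite lattice.
def rule90_step(row):
    n = len(row)
    return [
        (row[i - 1] if i > 0 else 0) ^ (row[i + 1] if i < n - 1 else 0)
        for i in range(n)
    ]

# Evolve a single live cell; Rule 90 famously traces a Sierpinski pattern.
# The time lattice is the step counter, the space lattice is the row index.
row = [0, 0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(3):
    row = rule90_step(row)
print(row)  # -> [0, 1, 0, 1, 0, 1, 0, 1, 0]
```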

Multidimensional generalization

Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.

Compactification of a dynamical system

Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*).

In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected.

Measure theoretical definition

A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ⁻¹(σ) ∈ Σ. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ(Φ⁻¹(σ)) = μ(σ). Combining the above, a map Φ is said to be a measure-preserving transformation of X if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.

The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φⁿ = Φ ∘ Φ ∘ ⋯ ∘ Φ for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
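The measure-preserving condition μ(Φ⁻¹(σ)) = μ(σ) can be made concrete with the doubling map Φ(x) = 2x mod 1 on [0, 1) with Lebesgue measure, a standard example: the preimage of an interval [a, b) consists of two intervals of half the length, so its total measure equals b − a. A small sketch (the helper name is hypothetical):

```python
# For the doubling map Phi(x) = 2x mod 1 on [0, 1), the preimage of an
# interval [a, b) is [a/2, b/2) union [(a+1)/2, (b+1)/2): two pieces of
# half the length each, so Lebesgue measure is preserved.
def preimage_measure(a, b):
    piece1 = (b / 2) - (a / 2)
    piece2 = ((b + 1) / 2) - ((a + 1) / 2)
    return piece1 + piece2

# mu(Phi^-1([a, b))) equals mu([a, b)) = b - a for any 0 <= a < b <= 1
for a, b in [(0.0, 0.25), (0.1, 0.7), (0.5, 1.0)]:
    assert abs(preimage_measure(a, b) - (b - a)) < 1e-12
print("doubling map preserves Lebesgue measure on these intervals")
```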

Relation to geometric definition

The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.

Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.

For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.

Construction of dynamical systems

The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of the time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:

ẋ = v(t, x)
x|_{t=0} = x₀

where

  • ẋ represents the velocity of the material point x
  • M is a finite dimensional manifold
  • v : T × M → TM is a vector field in Rn or Cn and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM.

There is no need for higher order derivatives in the equation, nor for the parameter t in v(t, x), because these can be eliminated by considering systems of higher dimensions.

Depending on the properties of this vector field, the mechanical system is called

  • autonomous, when v(t, x) = v(x)
  • homogeneous when v(t, 0) = 0 for all t

The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above

x(t) = Φ(t, x₀)

The dynamical system is then (T, M, Φ).

Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy

ẋ − v(t, x) = 0  ⟺  𝔊(t, Φ(t, x₀)) = 0

where 𝔊 is a functional from the set of evolution functions to the field of the complex numbers.

This equation is useful when modeling mechanical systems with complicated constraints.
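In practice, the evolution function Φ(t, x₀) is usually obtained by numerical integration. The sketch below builds Φ for an autonomous scalar equation ẋ = v(x) using a classical 4th-order Runge–Kutta scheme with a fixed step; the method and step count are assumed choices, not the only possibility.

```python
import math

# Turn an ODE x' = v(x) into a dynamical system: build the evolution
# function Phi(t, x0) by numerical integration (classical RK4, fixed step).
def make_flow(v, steps_per_unit=1000):
    def phi(t, x0):
        n = max(1, int(abs(t) * steps_per_unit))
        h = t / n
        x = x0
        for _ in range(n):
            k1 = v(x)
            k2 = v(x + 0.5 * h * k1)
            k3 = v(x + 0.5 * h * k2)
            k4 = v(x + h * k3)
            x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return x
    return phi

# For v(x) = -x the exact flow is Phi(t, x0) = x0 * exp(-t); compare:
phi = make_flow(lambda x: -x)
print(abs(phi(1.0, 1.0) - math.exp(-1.0)) < 1e-9)  # True
```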

Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.

Examples

Linear dynamical systems

Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).

Flows

For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,

ẋ = v(x) = A x + b

with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b:

Φ^t(x₀) = x₀ + b t

When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x₀ = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x₀,

Φ^t(x₀) = e^{tA} x₀
When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.

The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
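For a concrete diagonal example (assumed purely for illustration), take A = diag(−1, −2) and b = 0: the flow is x(t) = (x₁e^{−t}, x₂e^{−2t}), both eigenvalues are negative, and the distance between two nearby initial conditions shrinks exponentially.

```python
import math

# Linear flow x' = A x with the assumed diagonal matrix A = diag(-1, -2):
# the exact solution is x(t) = (x1*e^-t, x2*e^-2t), so negative eigenvalues
# force every initial condition toward the equilibrium at the origin.
def flow(t, x):
    return (x[0] * math.exp(-1.0 * t), x[1] * math.exp(-2.0 * t))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b = (1.0, 1.0), (1.1, 1.0)           # two nearby initial conditions
d0 = dist(a, b)
d5 = dist(flow(5.0, a), flow(5.0, b))
print(d5 < d0)                           # True: separation decays like e^-t
# With A = diag(+1, +2) instead, the same gap would grow exponentially,
# the sensitive dependence on initial conditions that underlies chaos.
```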

Linear vector fields and a few trajectories.

Maps

A discrete-time, affine dynamical system has the form of a matrix difference equation:

x_{n+1} = A x_n + b

with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)⁻¹ b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system Aⁿ x₀. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.

As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u₁ is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u₁, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
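The scalar case makes the fixed-point formula concrete. The sketch below (assumed one-dimensional instance of the matrix difference equation) iterates x_{n+1} = a·x_n + b with |a| < 1 and checks convergence to x* = b/(1 − a), the scalar version of (1 − A)⁻¹b.

```python
# One-dimensional affine map x_{n+1} = a*x_n + b: for |a| < 1 the iterates
# converge to the fixed point x* = b / (1 - a).
a, b = 0.5, 3.0
x_star = b / (1.0 - a)      # fixed point, here 6.0

x = 0.0
for _ in range(60):
    x = a * x + b           # the orbit "hops" toward the fixed point

print(abs(x - x_star) < 1e-12)  # True: the gap shrinks like a^n
```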

There are also many other discrete dynamical systems.

Local dynamics

The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.

Rectification

A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.

The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.

Near periodic orbits

In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x₀ in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x₀). These points form a Poincaré section S(γ, x₀) of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x₀.

The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x²), so a change of coordinates h can only be expected to simplify F to its linear part

h ∘ F = J ∘ h

This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ₁, ..., λ_ν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λᵢ − Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.

Conjugation results

The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.

In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.

The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.

Bifurcation theory

When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.

Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.

The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.

Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.
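The period-doubling cascade can be observed numerically with the logistic map x → r·x·(1 − x). The sketch below (transient length and rounding precision are assumed choices) detects the period of the attractor at two standard parameter values: r = 3.2 lies on the 2-cycle branch, r = 3.5 on the 4-cycle branch.

```python
# Detect the period of the logistic-map attractor x -> r*x*(1-x) at
# parameter values on the period-doubling route.
def attractor_period(r, transient=10_000, sample=64):
    x = 0.5
    for _ in range(transient):          # let the orbit settle on the attractor
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(sample):             # sample the settled orbit
        x = r * x * (1.0 - x)
        seen.add(round(x, 6))           # distinct rounded values = period
    return len(seen)

print(attractor_period(3.2), attractor_period(3.5))  # -> 2 4
```

Between these two parameter values the 2-cycle loses stability and splits into a 4-cycle: a period-doubling bifurcation.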

Ergodic systems

In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ^t(A) and invariance of the phase space means that

vol(A) = vol(Φ^t(A))

In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.

In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.

For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.

One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
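The equality of time and space averages can be illustrated with an irrational rotation of the circle, a standard ergodic example; the particular observable below is an arbitrary assumed choice.

```python
import math

# Ergodic hypothesis sketch on the circle rotation x -> x + alpha mod 1
# with irrational alpha: the time average of an observable along one orbit
# should match its space average over the whole circle.
alpha = (math.sqrt(5) - 1) / 2                    # irrational rotation number
obs = lambda x: math.sin(2 * math.pi * x) ** 2    # arbitrary observable

x, n, total = 0.123, 100_000, 0.0
for _ in range(n):
    total += obs(x)
    x = (x + alpha) % 1.0

time_avg = total / n
space_avg = 0.5            # integral of sin^2(2*pi*x) over [0, 1)
print(abs(time_avg - space_avg) < 0.01)  # True for this ergodic rotation
```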

The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function Φ^t. This introduces an operator U^t, the transfer operator,

(U^t a)(x) = a(Φ^{−t}(x))
By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ t gets mapped into an infinite-dimensional linear problem involving U.

The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.

Nonlinear dynamical systems and chaos

Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold).

This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"

The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear.

Solutions of finite duration

For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration,[14] meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations, according to the proof of the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.

As an example, the equation

$$y' = -\operatorname{sgn}(y)\sqrt{|y|}, \quad y(0) = 1$$

admits the finite-duration solution

$$y(x) = \frac{1}{4}\left(1 - \frac{x}{2} + \left|1 - \frac{x}{2}\right|\right)^2,$$

which is zero for $x \geq 2$ and is not Lipschitz continuous at its ending time $x = 2$.
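This finite-duration behavior can be checked numerically. The sketch below assumes the standard example $y' = -\operatorname{sgn}(y)\sqrt{|y|}$, $y(0) = 1$, with closed-form solution $y(x) = \tfrac{1}{4}(1 - x/2 + |1 - x/2|)^2$, and verifies the claims pointwise.

```python
import math

# Closed-form candidate solution: zero from the ending time x = 2 onward.
def y(x):
    return 0.25 * (1 - x / 2 + abs(1 - x / 2)) ** 2

# Right-hand side of the ODE y' = -sgn(y) * sqrt(|y|).
def rhs(v):
    return -math.copysign(1.0, v) * math.sqrt(abs(v)) if v != 0 else 0.0

# Initial condition, and identically zero at and beyond the ending time.
assert y(0) == 1.0
assert y(2) == 0.0 and y(5) == 0.0

# Before the ending time, a centered difference matches the right-hand side.
x, h = 1.0, 1e-6
dy = (y(x + h) - y(x - h)) / (2 * h)
assert abs(dy - rhs(y(x))) < 1e-6
print("finite-duration solution verified")
```

Note that uniqueness fails here exactly because the right-hand side is not Lipschitz at $y = 0$: once the solution reaches zero, both "stay at zero" and time-reversed departures are consistent with the equation.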

from Grokipedia
A dynamical system is a mathematical model that describes how the state of a system evolves over time, where the state is represented by variables in a state space, and the evolution follows deterministic rules such as ordinary differential equations for continuous time or iterative maps for discrete time. This framework captures a wide range of behaviors, including growth, decay, oscillation, evolution, collapse, and chaos, by focusing on both quantitative trajectories and qualitative long-term dynamics. Dynamical systems are broadly classified into continuous and discrete types, with continuous systems modeled by flows generated by differential equations that describe smooth changes, and discrete systems by iterations of functions that update states in successive steps. Key concepts in the theory include the state space (the set of all possible states), orbits (the paths traced by states over time), fixed points or equilibria (states that do not change), and stability (whether nearby states converge to or diverge from equilibria). Additional notions such as attractors (sets toward which orbits converge), bifurcations (qualitative changes in behavior as parameters vary), and chaotic dynamics (sensitive dependence on initial conditions) are central to understanding complex evolutions. The field traces its origins to the late 19th century, particularly Henri Poincaré's foundational work on the three-body problem in celestial mechanics, which introduced qualitative methods for analyzing nonlinear systems and highlighted the limitations of explicit analytical solutions. This was expanded in the 20th century by George Birkhoff's and Stephen Smale's topological approaches, establishing dynamical systems as an independent mathematical discipline with deep ties to analysis, topology, and probability. The theory's maturation into a core area of mathematics reflects its role in addressing global orbit structures and invariant sets, often without requiring explicit solutions.
Applications of dynamical systems are vast and interdisciplinary, modeling physical phenomena such as planetary motion and fluid flow in physics, disease spread in epidemiology, control strategies in engineering, and even neural network behaviors in neuroscience. In complex settings, such as interdependent nonlinear systems, emergent patterns arise from interactions far from equilibrium, influencing fields such as climate modeling. These models not only predict behaviors but also inform decision-making by revealing stability thresholds and chaotic regimes.

Introduction

Core Concepts

A dynamical system formalizes the evolution of a physical, biological, or abstract process over time through a deterministic rule that maps states to subsequent states. At its core, it consists of a state space $X$, which is the set of all possible configurations or conditions of the system, and an evolution rule $\phi$, which describes how the system transitions from one state to another, either continuously via a flow or discretely via a map. In the continuous case, $\phi$ is often a flow $\Phi: \mathbb{R} \times X \to X$ that parametrizes the progression along time, while in the discrete case, it is an iterated map $T: X \to X$ applied successively. This pair $(X, \phi)$ captures the essence of predictability in deterministic processes, where the system's behavior is fully specified by its current state without external randomness. Dynamical systems are inherently deterministic, meaning that given the evolution rule and an initial state, the entire future (and often past) path is uniquely determined, contrasting with systems that incorporate probabilistic elements. The initial-value problem plays a central role here: it specifies a starting point $x_0 \in X$ at some initial time, allowing the evolution rule to generate the system's history from that point onward. This setup ensures that solutions to the dynamical system are well-defined and unique under suitable conditions on $\phi$, such as continuity or Lipschitz continuity, enabling the study of long-term behavior without ambiguity. The path traced by the system from an initial state under the evolution rule is known as a trajectory, while the set of all points visited along this path forms the orbit. In discrete systems, the orbit is the sequence $\{ \phi^n(x_0) \mid n = 0, 1, 2, \dots \}$, where $\phi^n$ denotes $n$-fold composition; in continuous systems, it is the image of the flow curve $\{ \phi(t, x_0) \mid t \in \mathbb{R} \}$.
Orbits provide the fundamental building blocks for analyzing qualitative features like stability or periodicity, as multiple trajectories may converge or diverge based on their starting points in the state space. In the phase space, which visualizes the state space $X$ (often with coordinates representing variables like position and momentum), trajectories appear as distinct curves or lines illustrating the system's evolution; for instance, closed loops indicate periodic orbits, while diverging paths suggest instability. Such representations highlight how nearby initial conditions can lead to similar or wildly different orbits, a key intuition for understanding chaos. Dynamical systems find applications in modeling physical processes like pendulum motion and biological ones like population dynamics.
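The notion of an orbit converging to an attracting fixed point can be made concrete with a small discrete example. The map $T(x) = \cos x$ and starting point below are our own illustrative choices, not from the article's sources; $T$ has a unique attracting fixed point $x^* = \cos x^*$ (the Dottie number, roughly 0.739085).

```python
import math

# Orbit of the discrete dynamical system x_{n+1} = cos(x_n): the sequence
# {T^n(x0)} converges to the attracting fixed point x* = cos(x*).

def iterate(T, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(T(xs[-1]))
    return xs

xs = iterate(math.cos, 1.0, 100)
x_star = xs[-1]

# A fixed point satisfies T(x*) = x*; nearby starting points share this limit.
assert abs(math.cos(x_star) - x_star) < 1e-10
print(round(x_star, 6))
```

Because $|T'(x^*)| = |\sin x^*| < 1$, the fixed point is asymptotically stable, so the orbit contracts toward it geometrically regardless of the (reasonable) starting value.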

Scope and Applications

Dynamical systems theory encompasses a wide range of applications in the natural and social sciences, providing mathematical frameworks to model and predict the evolution of complex processes over time. In astronomy, it is used to describe the motion of celestial bodies, such as planetary orbits under gravitational forces. In ecology, dynamical systems model population dynamics, capturing interactions between species and environmental factors to forecast ecological changes. Similarly, in economics, these models analyze market fluctuations and resource allocation, simulating how economic variables such as prices evolve. In engineering, particularly control systems, dynamical systems theory is used to design feedback mechanisms that stabilize processes in mechanical and electrical systems. A key strength of dynamical systems theory lies in its focus on qualitative behaviors, such as stability, where systems return to equilibrium after perturbations, and periodicity, where trajectories exhibit repeating cycles. These properties allow researchers to assess long-term outcomes without solving equations explicitly, revealing patterns like attractors in state space that govern system evolution. The theory emerged as a vital bridge between pure mathematics and the applied sciences in the mid-20th century, integrating differential equations with interdisciplinary problems to address nonlinear phenomena across fields. Computational simulations, including numerical methods, further enable the approximation of trajectories for systems too complex for analytical solutions.
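A minimal numerical simulation in this spirit applies forward Euler to the logistic growth ODE $\dot{x} = r x (1 - x/K)$, a standard population model. The model, parameters, and step size here are our own illustrative choices.

```python
# Forward Euler approximation of logistic growth dx/dt = r*x*(1 - x/K).
# Trajectories from any positive starting population approach the
# stable equilibrium x = K, illustrating stability analysis numerically.

def simulate(x0, r=1.0, K=1.0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * r * x * (1 - x / K)
    return x

# Both a small and a large initial population settle near the carrying
# capacity K, from opposite sides of the equilibrium.
low, high = simulate(0.05), simulate(3.0)
print(low, high)
```

This is exactly the kind of qualitative conclusion (convergence to an attractor) that the theory lets one draw without an explicit closed-form solution, with the simulation serving as a check.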

Historical Context

Early Developments

The study of dynamical systems traces its origins to the foundational work in classical mechanics during the Scientific Revolution. Galileo Galilei (1564–1642) made pivotal contributions through his experiments on inclined planes and pendulums, demonstrating that objects accelerate uniformly under gravity and formulating the principle of inertia, which posits that bodies maintain their state of motion unless acted upon by external forces. These ideas, detailed in his Dialogue Concerning the Two Chief World Systems (1632) and Discourses and Mathematical Demonstrations Relating to Two New Sciences (1638), shifted the analysis of motion from qualitative descriptions to quantitative laws, serving as precursors to systematic studies of evolving physical systems. Galileo's emphasis on empirical observation and mathematical modeling laid the groundwork for deterministic approaches to motion. Building directly on Galileo's insights, Isaac Newton synthesized these concepts in his seminal Philosophiæ Naturalis Principia Mathematica (1687), where he articulated the three laws of motion and the law of universal gravitation. Newton's framework established classical mechanics as a deterministic theory, in which the future state of any mechanical configuration evolves predictably from initial conditions and governing forces, enabling precise predictions such as planetary orbits. The Principia marked a cornerstone for dynamical systems by introducing differential equations to describe continuous evolution, influencing subsequent mathematical physics and highlighting the predictability inherent in classical laws. By the late 19th century, the limitations of purely analytical solutions in complex systems prompted qualitative investigations. Henri Poincaré (1854–1912), in response to a prize competition posed by King Oscar II of Sweden in 1889, examined the three-body problem in celestial mechanics during the 1880s and 1890s, revealing that small perturbations could lead to intricate, non-periodic behaviors in planetary motions.
In his memoir published in Acta Mathematica (1890), Poincaré pioneered qualitative methods, such as recurrence theorems and the analysis of invariant manifolds, to study the global structure of solutions without solving equations explicitly. These approaches, applied to the restricted three-body problem, demonstrated the inherent instability in certain configurations and foreshadowed the complexities of nonlinear dynamics. A parallel advancement came from Alexander Lyapunov (1857–1918), whose 1892 doctoral thesis The General Problem of the Stability of Motion provided the first comprehensive theory for assessing equilibrium stability in dynamical systems governed by differential equations. Lyapunov defined stability as the property that solutions remain near an equilibrium under small perturbations, and introduced asymptotic stability, where solutions converge to the equilibrium over time. His direct method, using auxiliary functions to bound solution behavior, offered practical tools for mechanical and celestial applications, bridging physical intuition with rigorous analysis.

20th-Century Foundations

In the early 20th century, George David Birkhoff advanced the abstract study of dynamical systems through his foundational work on topological dynamics, introduced in his 1927 monograph Dynamical Systems, where he analyzed the qualitative behavior of flows on compact phase spaces using topological methods. This framework emphasized the invariance of topological properties under homeomorphisms, providing a rigorous basis for understanding long-term dynamics without explicit solutions. Complementing this, Birkhoff's 1931 pointwise ergodic theorem established that, for an ergodic measure-preserving transformation on a probability space, the time average of an integrable function along almost every orbit converges to its space average, linking statistical mechanics to abstract dynamics. Building on these ideas, Aleksandr Andronov and Lev Pontryagin introduced the concept of structural stability in the 1930s, formalized in their 1937 paper on "rough systems" (systèmes grossiers), which defined systems whose qualitative phase portraits remain unchanged under small perturbations of the defining equations. This notion, applied initially to planar systems, prioritized robustness in qualitative theory, ensuring that generic dynamical behaviors persist despite imperfections in models derived from physical observations. Their work shifted focus from exact solvability to the topological equivalence of nearby systems, influencing later classifications of stable flows. The mid-20th century saw further maturation of qualitative theory through contributions from Andrey Kolmogorov and Stephen Smale. In the 1950s, Kolmogorov's seminar in Moscow fostered developments in ergodic and Hamiltonian dynamics, including his 1954 theorem on the persistence of quasi-periodic motions in nearly integrable systems under small perturbations, which quantified stability in multi-dimensional phase spaces.
By the 1960s, Smale extended these ideas globally, proving in his 1960 paper on Morse inequalities that certain gradient-like systems satisfy topological constraints on fixed points and periodic orbits, and in his seminal 1967 Bulletin article, "Differentiable Dynamical Systems", he unified local and global analyses for differentiable flows on manifolds and examined whether structural stability is a dense property in the space of smooth vector fields. A pivotal event signaling the field's maturity was the 1963 International Symposium on Nonlinear Differential Equations and Nonlinear Mechanics, organized by Joseph P. LaSalle and Solomon Lefschetz, which gathered leading researchers to discuss qualitative methods, stability, and applications, resulting in proceedings that synthesized progress in abstract formulations.

Definitions and Formulations

Geometric Approach

In the geometric approach, a continuous-time dynamical system, often called a real dynamical system, is defined on a smooth manifold $X$ as a one-parameter group of diffeomorphisms, or flow, $\phi: \mathbb{R} \times X \to X$, written $\phi_t(x) = \phi(t, x)$, where $t \in \mathbb{R}$ parameterizes time. This flow satisfies the group properties: the identity axiom $\phi_0(x) = x$ for all $x \in X$, and the composition axiom $\phi_{t+s}(x) = \phi_t(\phi_s(x))$ for all $t, s \in \mathbb{R}$ and $x \in X$, ensuring invertibility via $\phi_{-t} = \phi_t^{-1}$. These axioms capture the deterministic evolution of states in the phase space $X$, where trajectories $\{ \phi_t(x) \mid t \in \mathbb{R} \}$ trace the system's paths under smooth transformations preserving the manifold's topology. The flow $\phi_t$ is generated by a smooth vector field $V: X \to TX$, where $TX$ is the tangent bundle of $X$. Specifically, for each $x \in X$, the curve $t \mapsto \phi_t(x)$ satisfies the autonomous ordinary differential equation

$$\frac{d}{dt} \phi_t(x) = V(\phi_t(x)), \quad \phi_0(x) = x.$$

To derive this, use the composition property: fix $x$ and differentiate $\phi_{t+h}(x) = \phi_h(\phi_t(x))$ with respect to $h$ at $h = 0$, yielding $\frac{d}{dt} \phi_t(x) = \frac{d}{dh}\big|_{h=0} \phi_h(\phi_t(x)) = V(\phi_t(x))$, since $V$ is by definition the time-zero derivative of the flow.
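The flow axioms and the generating ODE can be checked concretely on a simple example. The sketch below assumes the vector field $V(x) = -x$ on $X = \mathbb{R}$ (our own illustrative choice), whose flow has the closed form $\phi_t(x) = x e^{-t}$.

```python
import math

# Flow of the vector field V(x) = -x on the real line: phi_t(x) = x * e^(-t).
# We verify the one-parameter group axioms and the generating ODE numerically.

def phi(t, x):
    return x * math.exp(-t)

x, t, s = 2.0, 0.7, 1.3

# Identity axiom: phi_0 is the identity map.
assert phi(0.0, x) == x

# Composition axiom: phi_{t+s} = phi_t o phi_s.
assert abs(phi(t + s, x) - phi(t, phi(s, x))) < 1e-12

# Invertibility: phi_{-t} undoes phi_t.
assert abs(phi(-t, phi(t, x)) - x) < 1e-12

# Generating ODE: d/dt phi_t(x) = V(phi_t(x)) with V(x) = -x,
# checked with a centered finite difference.
h = 1e-6
lhs = (phi(t + h, x) - phi(t - h, x)) / (2 * h)
assert abs(lhs - (-phi(t, x))) < 1e-6
print("flow axioms verified")
```

For general vector fields the flow has no closed form and must be approximated numerically, but the same axioms hold for the exact solution operator.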