Iterated function

from Wikipedia
Iterated transformations of the object on the left
On top is a clockwise rotation by 90°. It has order 4, because that is the smallest positive exponent that produces the identity. Below is a shear mapping with infinite order.
Below that are their compositions, which both have order 3.

In mathematics, an iterated function is a function that is obtained by composing another function with itself two or several times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated.

For example, the image above shows iterated transformations: the rotation has order 4, the shear mapping has infinite order, and their compositions both have order 3.

Iterated functions are studied in computer science, fractals, dynamical systems, mathematics and renormalization group physics.

Definition


The formal definition of an iterated function on a set X follows.

Let X be a set and f : X → X be a function.

Define f n, the n-th iterate of f for a non-negative integer n, by

f 0 = idX   and   f n+1 = f ∘ f n,

where idX is the identity function on X and (f ∘ g)(x) = f(g(x)) denotes function composition. This notation has been traced to Hans Heinrich Bürmann and John Frederick William Herschel in 1813.[1][2][3][4] Herschel credited Bürmann for it, but without giving a specific reference to the work of Bürmann, which remains undiscovered.[5]
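The recursion above translates directly into code. A minimal Python sketch (the helper name `iterate` is mine); note that f 0 is the identity because the loop body runs zero times:

```python
# n-th iterate of f via the recursion f^0 = id, f^(n+1) = f ∘ f^n.
def iterate(f, n):
    """Return the n-th iterate of f as a new function."""
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    def f_n(x):
        for _ in range(n):  # for n = 0 the loop never runs: f^0 = id
            x = f(x)
        return x
    return f_n

double = lambda x: 2 * x
# iterate(double, 3)(5) computes double(double(double(5))) = 40
```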

Because the notation f n may refer either to iteration (composition) of the function f or to exponentiation of the function f (the latter is common in trigonometry), some mathematicians[citation needed] choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))). For the same purpose, f [n](x) was used by Benjamin Peirce,[6][4][nb 1] whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead.[7][4][nb 2]

Abelian property and iteration sequences


In general, the following identity holds for all non-negative integers m and n,

f m ∘ f n = f n ∘ f m = f m+n.

This is structurally identical to the property of exponentiation that aᵐaⁿ = aᵐ⁺ⁿ.

In general, for arbitrary (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation; cf. Schröder's equation and the Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, Tm(Tn(x)) = Tmn(x), since Tn(x) = cos(n arccos(x)).

The relation (f m)n(x) = (f n)m(x) = f mn(x) also holds, analogous to the property of exponentiation that (aᵐ)ⁿ = (aⁿ)ᵐ = aᵐⁿ.

The sequence of functions f n is called a Picard sequence,[8][9] named after Charles Émile Picard.

For a given x in X, the sequence of values fn(x) is called the orbit of x.

If f n (x) = f n+m (x) for some integer m > 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit.
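Floyd's tortoise-and-hare algorithm is one standard solution to this cycle-detection problem, finding the index μ of the first periodic point and the period λ without storing the whole orbit. A minimal Python sketch with an illustrative map on a finite set:

```python
# Floyd's tortoise-and-hare cycle detection: returns (mu, lam), where mu is
# the index of the first periodic point of the orbit of x0 under f, and lam
# is the period. Assumes the orbit is eventually periodic (always true on a
# finite set X).
def floyd(f, x0):
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Find mu: restart the tortoise; both now advance at the same speed.
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Find lam: advance the hare around the cycle once.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

f = lambda x: (x * x + 1) % 255   # illustrative map on {0, ..., 254}
mu, lam = floyd(f, 3)
```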

Fixed points


If x = f(x) for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem.

There are several techniques for convergence acceleration of the sequences produced by fixed point iteration.[10] For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence.
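A minimal Python sketch of Steffensen's method, i.e. Aitken's Δ² formula applied to the fixed-point iteration (the example map cos x, whose fixed point is the Dottie number ≈ 0.739085, is illustrative):

```python
# Steffensen's method: Aitken Delta-squared acceleration of fixed-point
# iteration x -> f(x). Each step uses f(x) and f(f(x)).
import math

def steffensen(f, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx, ffx = f(x), f(f(x))
        denom = ffx - 2 * fx + x
        if denom == 0:          # already (numerically) at the fixed point
            return x
        x_new = x - (fx - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

p = steffensen(math.cos, 1.0)   # converges quadratically to cos(p) = p
```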

Limiting behaviour


Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point.[11]

When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set.

The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behavior of small neighborhoods under iteration. Also see infinite compositions of analytic functions.

Other limiting behaviors are possible; for example, wandering points are points that move away, and never come back even close to where they started.

Invariant measure


If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states.

In general, because repeated iteration corresponds to a shift, the transfer operator and its adjoint, the Koopman operator, can both be interpreted as shift operators acting on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos.

Fractional iterates and flows, and negative iterates

g: ℝ → ℝ is a trivial functional 5th root of f: ℝ⁺ → ℝ⁺, f(x) = sin(x). The computation of f(π/6) = 1/2 = g 5(π/6) is shown.

The notion f1/n must be used with care when the equation gn(x) = f(x) has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for n = 2 and f(x) = 4x − 6, both g(x) = 6 − 2x and g(x) = 2x − 2 are solutions; so the expression f 1/2(x) does not denote a unique function, just as numbers have multiple algebraic roots. A trivial root of f can always be obtained if f's domain can be extended sufficiently, cf. picture. The roots chosen are normally the ones belonging to the orbit under study.
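Both claimed functional square roots of f(x) = 4x − 6 are easy to verify numerically; a quick Python check at a few sample points:

```python
# Two distinct functional square roots of the same f(x) = 4x - 6:
# g(g(x)) = f(x) holds for both, illustrating the non-uniqueness of f^(1/2).
f  = lambda x: 4 * x - 6
g1 = lambda x: 6 - 2 * x
g2 = lambda x: 2 * x - 2

for x in (-3, 0, 1.5, 7):
    assert g1(g1(x)) == f(x)
    assert g2(g2(x)) == f(x)
```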

Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that g(g(x)) = f(x).[12] This function g(x) can be written using the index notation as f 1/2(x) . Similarly, f 1/3(x) is the function defined such that f1/3(f1/3(f1/3(x))) = f(x), while f2/3(x) may be defined as equal to f 1/3(f1/3(x)), and so forth, all based on the principle, mentioned earlier, that f mf n = f m + n. This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit.[13][14]

In such cases, one refers to the system as a flow (cf. section on conjugacy below.)

If a function is bijective (and so possesses an inverse function), then negative iterates correspond to function inverses and their compositions. For example, f −1(x) is the normal inverse of f, while f −2(x) is the inverse composed with itself, i.e. f −2(x) = f −1(f −1(x)). Fractional negative iterates are defined analogously to fractional positive ones; for example, f −1/2(x) is defined such that f −1/2(f −1/2(x)) = f −1(x), or, equivalently, such that f −1/2(f 1/2(x)) = f 0(x) = x.

Some formulas for fractional iteration


One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows.[15]

  1. First determine a fixed point a for the function such that f(a) = a.
  2. Define f n(a) = a for all n belonging to the reals. This, in some ways, is the most natural extra condition to place upon the fractional iterates.
  3. Expand f n(x) around the fixed point a as a Taylor series,
     f n(x) = f n(a) + (x − a) d/dx f n(x)|x=a + ((x − a)²/2) d²/dx² f n(x)|x=a + ⋯
  4. Expand out the derivatives of f n at a, using the chain rule, in terms of the derivatives of f evaluated at the intermediate points f k(a).
  5. Substitute in f k(a) = a, for any k,
     f n(x) = a + (x − a) f ′(a)ⁿ + ((x − a)²/2) f ″(a) f ′(a)ⁿ⁻¹ (1 + f ′(a) + ⋯ + f ′(a)ⁿ⁻¹) + ⋯
  6. Make use of the geometric progression to simplify terms,
     f n(x) = a + (x − a) f ′(a)ⁿ + ((x − a)²/2) f ″(a) f ′(a)ⁿ⁻¹ (f ′(a)ⁿ − 1)/(f ′(a) − 1) + ⋯
     There is a special case when f ′(a) = 1,
     f n(x) = x + ((x − a)²/2) n f ″(a) + ⋯

This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy.

Example 1


For example, setting f(x) = Cx + D gives the fixed point a = D/(1 − C), so the above formula terminates to just

f n(x) = Cⁿ(x − a) + a = Cⁿx + (Cⁿ − 1) D/(C − 1),

which is trivial to check.
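A quick Python check of this closed form, f n(x) = Cⁿ(x − a) + a for f(x) = Cx + D with fixed point a = D/(1 − C), using illustrative values of C and D:

```python
# Verify the closed-form n-th iterate of the affine map f(x) = Cx + D.
C, D = 3.0, 4.0
a = D / (1 - C)            # fixed point, here -2.0
f = lambda x: C * x + D

x, n = 0.5, 6
y = x
for _ in range(n):         # direct iteration
    y = f(y)
closed = C ** n * (x - a) + a
assert abs(y - closed) < 1e-9
```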

Example 2


Find the value of √2^√2^√2^⋯ where this is done n times (and possibly the interpolated values when n is not an integer). We have f(x) = √2^x. A fixed point is a = f(2) = 2.

So set x = 1; f n(1), expanded around the fixed point value 2, is then an infinite series which, taking just the first three terms, is correct to the first decimal place when n is positive. Also see Tetration: f n(1) = ⁿ√2, the n-th tetration of √2. Using the other fixed point a = f(4) = 4 causes the series to diverge.

For n = −1, the series computes the inverse function, 2 ln x/ln 2.

Example 3


With the function f(x) = x^b, expand around the fixed point 1 to get the series, which is simply the Taylor series of x^(bⁿ) expanded around 1.

Conjugacy


If f and g are two iterated functions, and there exists a homeomorphism h such that g = h−1 ∘ f ∘ h, then f and g are said to be topologically conjugate.

Clearly, topological conjugacy is preserved under iteration, as g n = h−1 ∘ f n ∘ h. Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking f(x) = x + 1, one has the iteration of g(x) = h−1(h(x) + 1) as

gn(x) = h−1(h(x) + n),   for any function h.

Making the substitution x = h−1(y) = ϕ(y) yields

g(ϕ(y)) = ϕ(y+1),   a form known as the Abel equation.

Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve[16] Schröder's equation for a function Ψ, which makes f(x) locally conjugate to a mere dilation, g(x) = f '(0) x, that is

f(x) = Ψ−1(f '(0) Ψ(x)).

Thus, its iteration orbit, or flow, under suitable provisions (e.g., f '(0) ≠ 1), amounts to the conjugate of the orbit of the monomial,

Ψ−1(f ′(0)ⁿ Ψ(x)),

where n in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent n no longer needs to be an integer or positive, and is a continuous "time" of evolution for the full orbit:[17] the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group.[18]

Iterates of the sine function (blue), in the first half-period. Half-iterate (orange), i.e., the sine's functional square root; the functional square root of that, the quarter-iterate (black) above it; and further fractional iterates up to the 1/64th. The functions below the (blue) sine are six integral iterates below it, starting with the second iterate (red) and ending with the 64th iterate. The green envelope triangle represents the limiting null iterate, a triangular function serving as the starting point leading to the sine function. The dashed line is the negative first iterate, i.e. the inverse of sine (arcsin). (From the general pedagogy web-site.[19] For the notation, see [2].)

This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic.

Markov chains


If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain.
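A two-state example: repeatedly applying a row-stochastic transition matrix to a distribution converges to the stationary distribution. A minimal Python sketch (matrix values are illustrative; for P below the stationary distribution is (5/6, 1/6)):

```python
# A 2-state Markov chain as an iterated linear map on distributions.
P = [[0.9, 0.1],
     [0.5, 0.5]]           # row-stochastic: each row sums to 1

def step(dist):
    # dist_{n+1}[j] = sum_i dist_n[i] * P[i][j]
    return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

dist = [1.0, 0.0]           # start entirely in state 0
for _ in range(100):
    dist = step(dist)
# dist has converged to the stationary distribution pi with pi P = pi
```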

Examples


There are many chaotic maps. Well-known iterated functions include the Mandelbrot set and iterated function systems.

Ernst Schröder,[20] in 1870, worked out special cases of the logistic map, such as the chaotic case f(x) = 4x(1 − x), so that Ψ(x) = arcsin²(√x), hence f n(x) = sin²(2ⁿ arcsin(√x)).

A nonchaotic case Schröder also illustrated with his method, f(x) = 2x(1 − x), yielded Ψ(x) = −1/2 ln(1 − 2x), and hence f n(x) = −1/2((1 − 2x)^(2ⁿ) − 1).
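Both of Schröder's closed forms for these logistic cases can be checked against direct iteration; a Python sketch (starting point and iteration count are arbitrary):

```python
# Numerical check of Schröder's closed forms for the two logistic cases:
# chaotic    f(x) = 4x(1-x):  f^n(x) = sin^2(2^n * arcsin(sqrt(x)))
# nonchaotic f(x) = 2x(1-x):  f^n(x) = (1 - (1 - 2x)^(2^n)) / 2
import math

def iterate(f, n, x):
    for _ in range(n):
        x = f(x)
    return x

x, n = 0.2, 5

chaotic = lambda x: 4 * x * (1 - x)
closed_chaotic = math.sin(2 ** n * math.asin(math.sqrt(x))) ** 2
assert abs(iterate(chaotic, n, x) - closed_chaotic) < 1e-6

nonchaotic = lambda x: 2 * x * (1 - x)
closed_nonchaotic = 0.5 * (1 - (1 - 2 * x) ** (2 ** n))
assert abs(iterate(nonchaotic, n, x) - closed_nonchaotic) < 1e-9
```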

If f is the action of a group element on a set, then the iterated function corresponds to a free group.

Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some[20] that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n.

f(x) = ax + b, a ≠ 1 : f n(x) = aⁿx + (aⁿ − 1)b/(a − 1)
f(x) = ax² + bx + c : closed forms exist only in the two special cases given in the note below, via conjugacy to the logistic cases
f(x) = (ax + b)/(cx + d)   (fractional linear transformation):[21] f n is obtained from the n-th power of the coefficient matrix
f(x) = ψ−1(ψ(x) + β)   (generic Abel equation) : f n(x) = ψ−1(ψ(x) + nβ)
Tm(x)   (Chebyshev polynomial for integer m) : Tm n(x) = Tmⁿ(x) = cos(mⁿ arccos(x))

Note: these two special cases of ax² + bx + c are the only cases that have a closed-form solution. Choosing b = 2 = −a and b = 4 = −a, respectively, reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table.

Some of these examples are related among themselves by simple conjugacies.

Means of study


Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators.

In computer science


In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs.

Definitions in terms of iterated functions


Two important functionals can be defined in terms of iterated functions. These are summation:

{b + 1, Σᵢ₌ₐᵇ g(i)} = ({i, x} ↦ {i + 1, x + g(i)})^(b − a + 1) {a, 0}

and the equivalent product:

{b + 1, Πᵢ₌ₐᵇ g(i)} = ({i, x} ↦ {i + 1, x g(i)})^(b − a + 1) {a, 1}
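Summation can indeed be realized as iteration of a single step map on (index, accumulator) pairs; a minimal Python sketch (the names `summation` and `step` are mine):

```python
# Summation as an iterated function on pairs (i, acc): one application of
# `step` adds g(i) to the accumulator and advances the index, so b - a + 1
# applications starting from (a, 0) compute sum_{i=a}^{b} g(i).
def summation(g, a, b):
    step = lambda p: (p[0] + 1, p[1] + g(p[0]))
    pair = (a, 0)
    for _ in range(b - a + 1):
        pair = step(pair)
    return pair[1]

# summation(lambda i: i * i, 1, 4) computes 1 + 4 + 9 + 16 = 30
```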

Functional derivative


The functional derivative of an iterated function is given by the recursive formula:

δf N(x)/δf(y) = f ′(f N−1(x)) δf N−1(x)/δf(y) + δ(f N−1(x) − y)

Lie's data transport equation


Iterated functions crop up in the series expansion of combined functions, such as g(f(x)).

Given the iteration velocity, or beta function (physics),

v(x) = ∂f n(x)/∂n |n=0

for the n-th iterate of the function f, we have[22]

g(f(x)) = exp[v(x) ∂/∂x] g(x).

For example, for rigid advection, if f(x) = x + t, then v(x) = t. Consequently, g(x + t) = exp(t ∂/∂x) g(x), action by a plain shift operator.

Conversely, one may specify f(x) given an arbitrary v(x), through the generic Abel equation discussed above,

f(x) = Ω−1(Ω(x) + 1),

where

Ω(x) = ∫ dx/v(x).

This is evident by noting that

f n(x) = Ω−1(Ω(x) + n).

For continuous iteration index t, then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group,

f t(x) = exp(t v(x) ∂/∂x) x = Ω−1(Ω(x) + t).

The initial flow velocity v suffices to determine the entire flow, given this exponential realization, which automatically provides the general solution to the translation functional equation,[23]

f t(f τ(x)) = f t+τ(x).

from Grokipedia
In mathematics, an iterated function is obtained by composing a given function $f$ with itself multiple times, where the $n$-th iterate $f^{(n)}$ denotes the $n$-fold composition applied to an initial input.[1] This process generates a sequence of states $x_n = f^{(n)}(x_0)$ from an initial value $x_0$, modeling discrete-time evolution in dynamical systems.[1] Iterated functions form the basis for analyzing long-term behavior, including convergence to fixed points where $f(x) = x$, periodic orbits forming cycles of period $n$ where $f^{(n)}(x) = x$ but $f^{(k)}(x) \neq x$ for $1 \leq k < n$, and chaotic dynamics in nonlinear examples.[1] Key applications span numerical analysis, such as Newton's method for root-finding, where iterates $x_{n+1} = x_n - f(x_n)/f'(x_n)$ converge quadratically to solutions under suitable conditions.[1] In population biology, the logistic map $f(x) = \lambda x (1 - x)$ illustrates bifurcations and chaos as the parameter $\lambda$ varies, with fixed points at $x = 0$ and $x = 1 - 1/\lambda$ for $1 < \lambda \leq 3$.[1] Iterated functions also underpin fractal geometry through iterated function systems (IFS), collections of contractive mappings whose fixed points yield self-similar attractors like the Sierpinski triangle.[2] These concepts extend to continuous-time analogs via functional iterates and flows, enabling broader studies in ergodic theory and complex dynamics.[3]

Fundamentals

Definition

In mathematics, an iterated function arises from the repeated composition of a given function with itself. Consider a function $ f: X \to X $, where $ X $ is a set, ensuring that the codomain of $ f $ aligns with its domain to permit successive applications. The $ n $-th iterate of $ f $, denoted $ f^n $, is formally defined recursively for positive integers $ n $ as $ f^1 = f $ and $ f^{n+1} = f \circ f^n $, where $ \circ $ denotes function composition.[4] Function composition serves as the foundational operation for iteration: given functions $ g: X \to Y $ and $ f: Y \to Z $, the composite $ f \circ g: X \to Z $ is defined by $ (f \circ g)(x) = f(g(x)) $ for all $ x \in X $. For self-composition in iteration, requiring the codomain to lie within the domain guarantees that the output of each application remains within the domain, allowing the process to continue indefinitely without domain restrictions disrupting the sequence. This consistency is essential, as mismatches could render higher iterates undefined on portions of $ X $.[4][5] Notation for iterates varies across mathematical literature; the superscript $ f^n $ is common and traces back to early 19th-century usage, while alternatives like $ f^{(n)} $ emphasize the iterative exponent to distinguish it from functional powers in other contexts. For instance, $ f^2(x) = f(f(x)) $ and $ f^{(2)}(x) = f(f(x)) $ both represent the second iterate. These conventions facilitate clear expression in dynamical systems and related fields where repeated application is analyzed.[4][6]

Iteration Sequences

In the study of iterated functions, the iteration sequence, often referred to as the orbit or trajectory of a point $x$ in the domain $X$, is defined as the sequence $\{f^n(x)\}_{n=0}^\infty$, where $f^0(x) = x$ is the identity map and $f^{n+1}(x) = f(f^n(x))$ for $n \geq 0$.[7] This construction captures the successive applications of the function $f: X \to X$ starting from the initial point $x$.[7] The forward orbit specifically denotes this one-sided sequence for nonnegative integers $n$, highlighting the directional progression from the initial condition without requiring the function to be invertible.[7] The choice of initial point $x$ plays a pivotal role, as different starting values generally produce distinct sequences, even under the same function $f$.[8] Simple examples demonstrate how iteration sequences can exhibit varied initial behaviors. For the linear function $f(x) = 2x$ on the real line with initial point $x = 1$, the sequence is $1, 2, 4, 8, \dots$, illustrating unbounded growth.[8] In a nonlinear case, consider the logistic map $f(x) = 4x(1 - x)$ on the interval $[0, 1]$ starting from $x = 0.1$; the sequence begins $0.1, 0.36, 0.9216, 0.2890, \dots$ and oscillates across the interval.[7] Periodic points emerge within these sequences as points where the iteration returns to the starting value after a finite number of steps: $x$ is a periodic point of period $k > 0$ if $f^k(x) = x$ but $f^m(x) \neq x$ for $0 < m < k$, creating a finite cycle in the trajectory.[9]
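A short Python sketch of the forward orbit, reproducing the logistic-map values quoted above (the helper name `orbit` is mine):

```python
# Forward orbit {f^n(x)} of the logistic map f(x) = 4x(1-x) from x = 0.1.
def orbit(f, x, n):
    """Return [x, f(x), f^2(x), ..., f^n(x)]."""
    seq = [x]
    for _ in range(n):
        x = f(x)
        seq.append(x)
    return seq

f = lambda x: 4 * x * (1 - x)
seq = orbit(f, 0.1, 3)
# seq is approximately [0.1, 0.36, 0.9216, 0.28901376]
```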

Basic Properties

Fixed Points

In the context of iterated functions, a fixed point of a function $ f $ is a point $ x^* $ in its domain such that $ f(x^*) = x^* $.[10] More generally, for the $ n $-th iterate $ f^n $, a point $ x^* $ is a periodic point of period $ n $ if $ f^n(x^*) = x^* $ but $ f^k(x^*) \neq x^* $ for all positive integers $ k < n $; fixed points are thus periodic points of period 1. Fixed points of iterated functions are classified by their stability, particularly in one-dimensional real dynamical systems where $ f $ is differentiable at $ x^* $. A fixed point $ x^* $ is attracting if $ |f'(x^*)| < 1 $, meaning nearby points converge to $ x^* $ under iteration; it is superattracting if $ f'(x^*) = 0 $, leading to quadratic convergence. Conversely, $ x^* $ is repelling if $ |f'(x^*)| > 1 $, causing nearby points to diverge; if $ |f'(x^*)| = 1 $, the fixed point is neutral (or indifferent), with behavior depending on higher-order terms.[11] The existence of fixed points for continuous functions is guaranteed by several fundamental theorems. Brouwer's fixed-point theorem states that every continuous function $ f: B^n \to B^n $, where $ B^n $ is the closed unit ball in $ \mathbb{R}^n $ (a compact convex set), has at least one fixed point.[12] For contractions in metric spaces, Banach's fixed-point theorem asserts that if $ (X, d) $ is a complete metric space and $ f: X \to X $ is a contraction mapping (i.e., there exists $ k < 1 $ such that $ d(f(x), f(y)) \leq k \, d(x, y) $ for all $ x, y \in X $), then $ f $ has a unique fixed point, and iterations converge to it from any starting point. (Banach, 1922) A classic example is the logistic map $ f(x) = r x (1 - x) $ for $ x \in [0, 1] $ and parameter $ r > 0 $, where fixed points solve $ x = r x (1 - x) $, yielding $ x^* = 0 $ and $ x^* = 1 - 1/r $ (for $ r \neq 1 $).
The stability of $ x^* = 0 $ is attracting for $ 0 < r < 1 $ (since $ |f'(0)| = |r| < 1 $) but repelling for $ r > 1 $; meanwhile, $ x^* = 1 - 1/r $ is attracting for $ 1 < r < 3 $ (where $ |f'(x^*)| = |2 - r| < 1 $) and becomes repelling for $ r > 3 $, marking the onset of period-doubling bifurcations.[13]

Abelian Property

The Abelian property in the context of iterated functions refers to the commutativity of iterations when the original functions commute under composition. Specifically, the iteration is Abelian if, for functions $ f $ and $ g $ satisfying $ f \circ g = g \circ f $, the higher-order iterates satisfy $ f^n \circ g^m = g^m \circ f^n $ for all non-negative integers $ n $ and $ m $.[14] This property arises naturally in the semigroup generated by $ f $ and $ g $ under composition, where the commutativity of generators ensures the entire subsemigroup is commutative. The property can be proved by induction: the base case holds by assumption, and assuming $ f^k \circ g = g \circ f^k $ for $ k < n $, the inductive step follows from $ f^n \circ g = f^{n-1} \circ (f \circ g) = f^{n-1} \circ (g \circ f) = (f^{n-1} \circ g) \circ f = (g \circ f^{n-1}) \circ f = g \circ f^n $, with similar reasoning for powers of $ g $. Conditions for Abelian iteration often involve solutions to functional equations that linearize the iteration, such as Schröder's equation $ \psi(f(x)) = s \psi(x) $, where $ s $ is a constant and $ \psi $ conjugates $ f $ to multiplication by $ s $.[15] Under this conjugacy, iterates correspond to powers of $ s $, which commute since multiplication is commutative, thereby ensuring the iterates of $ f $ form an Abelian semigroup.[15] This property has significant implications for solving functional equations in iteration theory, as it reduces non-commutative problems to commutative ones via conjugacy, facilitating the construction of continuous or fractional iterates in dynamical systems. The concept of Abelian iteration traces its origins to Henri Poincaré's work in the late 19th century, where he explored functional equations for iterating solutions to differential equations, laying foundational ideas for commutativity in continuous dynamics.

Asymptotic Behaviors

Limiting Behavior

The limiting behavior of an iterated function $f^n(x)$ describes the long-term dynamics of the sequence generated by repeated application of $f$, which may converge to a fixed point, diverge to infinity, or exhibit other patterns such as dense orbits or chaos.[16] Convergence to a fixed point $p$ occurs when the sequence $\{f^n(x)\}$ approaches $p$ as $n \to \infty$. For a fixed point $p$ where $f(p) = p$, it is attracting if nearby points move toward it under iteration; a sufficient local condition is $|f'(p)| < 1$, ensuring that the distance to $p$ decreases geometrically for initial points sufficiently close to $p$.[10] Globally, in a complete metric space, if $f$ is a contraction mapping with Lipschitz constant $k < 1$, the Banach fixed-point theorem guarantees a unique fixed point, and the iteration converges to it from any starting point, with error bounded by $k^n d(x_0, p)$.[17] For example, the function $f(x) = \frac{x + 1}{2}$ on $[0, 1]$ is a contraction with $k = \frac{1}{2}$, and iterations converge monotonically to the fixed point $p = 1$.[16] Divergence arises when orbits move away from fixed points or bounded regions. If $|f'(p)| > 1$, the fixed point is repelling, and sequences starting near $p$ typically diverge, potentially escaping to infinity in unbounded domains.[10] In nonlinear systems, such as quadratic maps over the complex plane, orbits may escape to infinity if the magnitude grows without bound under iteration, as seen in the definition of the Mandelbrot set, where points with bounded orbits remain inside the set.[18] Alternatively, in some nonlinear cases, orbits can remain bounded but exhibit chaotic behavior, densely filling certain regions without converging to a single limit.[19] Stability in more general settings, including continuous-time analogs like flows, is quantified by Lyapunov exponents, which measure the average exponential rate of separation of nearby trajectories.
For a discrete dynamical system given by iteration of $f$, the largest Lyapunov exponent is $\lambda = \lim_{n \to \infty} \frac{1}{n} \log |(f^n)'(x)|$, where $\lambda < 0$ indicates attracting behavior and convergence, while $\lambda > 0$ signals instability and potential divergence or chaos; these exponents exist almost everywhere by the Oseledets multiplicative ergodic theorem.[20] For instance, iterating a rotation $R_\alpha(x) = x + \alpha \pmod{1}$ on the circle with irrational $\alpha$ produces dense orbits that oscillate without converging to a fixed point, corresponding to Lyapunov exponent zero because the map is an isometry.[21]
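The Lyapunov exponent of the chaotic logistic map $f(x) = 4x(1-x)$ is known to be $\ln 2$, and by the chain rule $\log|(f^n)'(x)| = \sum_k \log|f'(x_k)|$, so a direct orbit average reproduces it; a Python sketch (starting point and iteration count are arbitrary):

```python
# Estimate the largest Lyapunov exponent of f(x) = 4x(1-x) by averaging
# log|f'(x_k)| along an orbit; the known value is ln 2 ≈ 0.6931.
import math

f  = lambda x: 4 * x * (1 - x)
df = lambda x: 4 - 8 * x          # f'(x)

x, total, n = 0.3, 0.0, 100_000
for _ in range(n):
    total += math.log(abs(df(x)))
    x = f(x)
lyap = total / n                  # should be close to log(2)
```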

Invariant Measures

In the context of iterated functions, an invariant measure for a measurable transformation $f: X \to X$ on a measurable space $(X, \mathcal{B})$ is a probability measure $\mu$ on $\mathcal{B}$ satisfying $\mu(f^{-1}(A)) = \mu(A)$ for every measurable set $A \in \mathcal{B}$. This condition ensures that the measure of any set remains unchanged when pulled back under the dynamics of $f$, reflecting the preservation of probabilistic structure under iteration. Such measures capture the long-term statistical properties of orbits generated by $f$. Among invariant measures, ergodic ones play a central role in ergodic theory. An invariant measure $\mu$ is ergodic if every $f$-invariant measurable set $A$ (i.e., $f^{-1}(A) = A$) satisfies $\mu(A) = 0$ or $\mu(A) = 1$. The ergodic decomposition theorem states that any invariant probability measure $\mu$ can be uniquely represented as an integral over a family of ergodic invariant measures: specifically, there exists a probability measure $\lambda$ on the space of ergodic measures such that $\mu = \int \nu \, d\lambda(\nu)$. This decomposition allows complex invariant measures to be analyzed through their ergodic components, facilitating the study of mixing and recurrence properties in dynamical systems. For certain classes of maps, invariant measures can be constructed explicitly using the Perron-Frobenius operator. Consider a piecewise expanding $C^2$ map $f$ on an interval, divided into finitely many monotonic branches where $|f'(x)| \geq \lambda > 1$. The associated Perron-Frobenius operator $P$ acts on integrable densities $h$ by
$$(Ph)(x) = \sum_{y \in f^{-1}(x)} \frac{h(y)}{|f'(y)|},$$
and an invariant density $h$ satisfies $Ph = h$, yielding an absolutely continuous invariant measure $\mu$ with density $h$ with respect to Lebesgue measure. The Lasota-Yorke theorem guarantees the existence of such a measure for these maps, establishing a Lasota-Yorke inequality that bounds the variation of $Ph$ and ensures the operator's contractivity in appropriate norms.[22] Invariant measures find applications in stochastic processes, where they correspond to stationary distributions that remain unaltered under the evolution of the process. In probabilistic interpretations of iterated functions, these measures describe equilibrium states, linking deterministic dynamics to statistical long-run behavior.
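For the doubling map $f(x) = 2x \pmod 1$, whose preimages of $x$ are $x/2$ and $(x+1)/2$ with $|f'| = 2$, the Perron-Frobenius operator can be discretized on piecewise-constant densities; iterating it recovers the invariant density $h \equiv 1$ (Lebesgue measure). A Python sketch (grid size and initial density are arbitrary):

```python
# Perron-Frobenius operator of the doubling map f(x) = 2x mod 1 on N bins:
# (Ph)(x) = h(x/2)/2 + h((x+1)/2)/2; for piecewise-constant densities,
# bin i pulls back from bins i//2 and (i + N)//2.
N = 64
h = [2.0 * (i + 0.5) / N for i in range(N)]   # arbitrary start, mean 1

def transfer(h):
    n = len(h)
    return [0.5 * h[i // 2] + 0.5 * h[(i + n) // 2] for i in range(n)]

for _ in range(20):
    h = transfer(h)
# h has converged (here, exactly after log2(N) steps) to the density 1
```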

Advanced Iteration Techniques

Fractional Iterates

Fractional iterates extend the concept of integer-order iteration to non-integer orders, allowing the composition of a function $f$ a rational number $p/q$ times, where $p$ and $q$ are integers with $q > 0$. A function $g$ is a fractional iterate of order $p/q$ if it satisfies the relation $g^q = f^p$, meaning that applying $g$ $q$ times yields the $p$-th iterate of $f$. This definition generalizes the integer case, where $f^n$ denotes $n$ compositions of $f$, and enables the study of continuous parameterizations of iterations under suitable conditions.[23] One primary method to construct fractional iterates near a fixed point involves solving Schröder's functional equation, which linearizes the dynamics through a conjugacy. Suppose $f$ has a fixed point at $x_0$ with multiplier $s = f'(x_0) \neq 0, 1$, and let $\psi$ be a function satisfying $\psi(f(x)) = s \cdot \psi(x)$ in a neighborhood of $x_0$. Then, the fractional iterate of order $t$ (for real $t$) is given by
$$f^t(x) = \psi^{-1}\left(s^t \cdot \psi(x)\right),$$
provided $\psi$ is invertible. This solution arises from the eigenfunction property of $\psi$, which conjugates $f$ to multiplication by $s$, allowing exponentiation for fractional powers; the existence of analytic solutions to Schröder's equation was established by Koenigs in 1884 for $|s| \neq 1$.[24] For functions without suitable fixed points or exhibiting translational behavior, Abel's functional equation provides an alternative approach, particularly for solving translation equations like $\alpha(f(x)) = \alpha(x) + 1$, the Abel functional equation, introduced by Schröder in 1871. Here, $\alpha$ conjugates $f$ to a unit translation, and the fractional iterate follows as $f^t(x) = \alpha^{-1}(\alpha(x) + t)$. This method is effective for functions with no fixed points, such as certain exponential maps. In cases of translational dynamics, the equation facilitates embedding the iteration into a continuous flow.[24] Despite these constructions, fractional iterates are generally non-unique without additional constraints, as multiple functions can satisfy the defining relation $g^q = f^p$. For instance, solutions to Schröder's or Abel's equations may differ by periodic perturbations or require specification of growth conditions; uniqueness can be ensured by imposing analyticity, monotonicity, or bounded variation on the iterates. Such limitations highlight the need for context-specific assumptions in dynamical systems to select a principal branch.[25][26]
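As a concrete instance, for $f(x) = 2x(1-x)$ near its fixed point 0 (multiplier $s = 2$), Schröder's equation is solved by $\psi(x) = -\tfrac{1}{2}\ln(1-2x)$, with inverse $\psi^{-1}(y) = \tfrac{1}{2}(1 - e^{-2y})$; a Python check that the resulting half-iterate composes to $f$:

```python
# Fractional iterates of f(x) = 2x(1-x) via the Schröder conjugacy:
# f^t(x) = psi_inv(2**t * psi(x)), valid near the fixed point 0 (x < 1/2).
import math

psi     = lambda x: -0.5 * math.log(1 - 2 * x)
psi_inv = lambda y: 0.5 * (1 - math.exp(-2 * y))

def frac_iter(t, x):
    return psi_inv(2 ** t * psi(x))

f = lambda x: 2 * x * (1 - x)
half = lambda x: frac_iter(0.5, x)
x = 0.3
assert abs(half(half(x)) - f(x)) < 1e-10   # half-iterate composed twice = f
```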

Negative Iterates and Flows

In dynamical systems, a flow associated with an iterated function $f$ is a one-parameter family of maps $\{\phi_t : X \to X \mid t \in \mathbb{R}\}$ on a space $X$, satisfying the axioms $\phi_0 = \mathrm{id}$, the identity map, and $\phi_{s+t} = \phi_s \circ \phi_t$ for all $s, t \in \mathbb{R}$, with $\phi_1 = f$.[27] This structure forms a one-parameter group under composition, extending the discrete iteration $f^n = \phi_n$ for integer $n \geq 0$ to a continuous parameterization in time.[28] Such flows generalize iterated functions by embedding them into a semigroup or group action, often on manifolds or Banach spaces. Flows are typically constructed by embedding the iteration into an autonomous ordinary differential equation (ODE) $\dot{x} = F(x)$, where $\phi_t(x)$ is the solution at time $t$ starting from initial condition $x \in X$, and the vector field satisfies $F(x) = \frac{d}{dt} \phi_t(x) \big|_{t=0}$.[27] If $F$ is continuously differentiable or Lipschitz continuous, the Picard–Lindelöf theorem guarantees local existence and uniqueness of solutions, ensuring a unique flow in a neighborhood of each point.[28] Alternatively, for iterations on Lie groups or matrix semigroups, flows can be realized via the matrix exponential or one-parameter subgroups, where the discrete step $f$ corresponds to evaluation at $t = 1$.
However, global uniqueness may fail without additional assumptions, such as completeness of the vector field or compactness of $X$, leading to potential non-uniqueness in extensions beyond local solutions.[27] The negative parameter $t < 0$ in flows generalizes the concept of functional inverses, with $\phi_{-t} = (\phi_t)^{-1}$, allowing backward evolution along trajectories and enabling analysis of pre-images in a continuous manner.[28] This reversibility holds for invertible systems, such as orientation-preserving diffeomorphisms, but can break down for non-invertible maps unless they embed in a larger reversible flow. A canonical example is the linear flow on $\mathbb{R}^n$ generated by $\dot{x} = Ax$, yielding $\phi_t(x) = e^{tA} x$, where $A$ is a constant matrix and $e^{tA}$ is the matrix exponential; for $t < 0$, this contracts or expands depending on the eigenvalues of $A$.[27] Uniqueness in this case follows from the analyticity of the exponential, though care is needed when $A$ has nontrivial Jordan blocks.[28] Fractional iterates can be viewed as discrete samples of such flows at non-integer times.[29]
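A minimal sketch of the flow axioms (using a rotation generator $A = \begin{pmatrix} 0 & -\omega \\ \omega & 0 \end{pmatrix}$, for which $e^{tA}$ is a plane rotation by angle $\omega t$; the constant and the test points are chosen arbitrarily for illustration) checks the group law $\phi_{s+t} = \phi_s \circ \phi_t$ and the inverse property $\phi_{-t} = (\phi_t)^{-1}$ numerically:

```python
import math

OMEGA = 0.7  # rotation rate; generator A = [[0, -OMEGA], [OMEGA, 0]]

def phi(t, x):
    """Flow map phi_t of xdot = A x: rotation of the plane by angle OMEGA * t."""
    c, s = math.cos(OMEGA * t), math.sin(OMEGA * t)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

x0 = (1.0, 2.0)
lhs = phi(0.3 + 0.5, x0)        # phi_{s+t}
rhs = phi(0.3, phi(0.5, x0))    # phi_s composed with phi_t
back = phi(-0.8, phi(0.8, x0))  # phi_{-t} after phi_t should return x0

print(lhs, rhs, back)
```

Here the discrete map $f = \phi_1$ is a rotation by $\omega$, and every real $t$ supplies a fractional iterate of it.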

Transformations and equivalences

Conjugacy

In the context of iterated functions, two functions $f: X \to X$ and $g: Y \to Y$ defined on topological spaces are said to be topologically conjugate if there exists a homeomorphism $h: X \to Y$ such that $g = h \circ f \circ h^{-1}$, or equivalently, $h \circ f = g \circ h$.[30] This relation establishes a structural equivalence between the dynamics generated by $f$ and $g$, preserving essential topological features of their iterations.[31] A key property of conjugacy is its preservation of iterates: if $f$ and $g$ are conjugate via $h$, then the $n$-th iterates satisfy $g^n = h \circ f^n \circ h^{-1}$ for all positive integers $n$, meaning orbits under $f$ map bijectively to orbits under $g$ while maintaining their temporal structure.[30] Consequently, dynamical invariants such as the existence and periods of periodic points, as well as qualitative behaviors like attraction or repulsion, are identical for conjugate systems.[32] Topological conjugacy relies solely on continuous invertibility through homeomorphisms, which may distort distances and angles. Smoother forms of conjugacy, such as $C^1$ conjugacy, require the conjugating map $h$ to be continuously differentiable, preserving not only topological structure but also first-order differential properties like derivatives at fixed points; this is more restrictive and holds only under additional regularity assumptions on $f$ and $g$, such as $C^2$ smoothness.[33] For instance, $C^1$ conjugacy ensures that the Jacobians align appropriately, facilitating linear approximations in analytic settings.[34] Conjugacy finds significant applications in simplifying the analysis of iterated functions, particularly by reducing complex nonlinear dynamics to more tractable forms.
A prominent example is the Hartman–Grobman theorem, which states that near a hyperbolic fixed point (where the Jacobian has no eigenvalues on the unit circle), a $C^1$ diffeomorphism is topologically conjugate to its linearization via a homeomorphism defined on a neighborhood of the point.[34] This linearization reveals the local phase portrait, including stable and unstable manifolds, thereby aiding the study of asymptotic behavior and stability without solving the full nonlinear system; the theorem, established independently by Grobman (1959) and Hartman (1960), extends to higher smoothness under further conditions such as those in Sternberg's work (1958).[34]
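A classical illustration of conjugacy (a standard textbook example, not taken from the cited sources) is that the logistic map $f(x) = 4x(1-x)$ is topologically conjugate to the tent map $T(y) = 1 - |1 - 2y|$ on $[0, 1]$ via $h(y) = \sin^2(\pi y / 2)$, so $f \circ h = h \circ T$ and hence $f^n \circ h = h \circ T^n$:

```python
import math

def logistic(x):
    return 4 * x * (1 - x)

def tent(y):
    return 1 - abs(1 - 2 * y)

def h(y):
    """Conjugacy h(y) = sin^2(pi*y/2) from tent-map to logistic-map coordinates."""
    return math.sin(math.pi * y / 2) ** 2

# The conjugacy identity logistic(h(y)) == h(tent(y)) holds at every point,
# so orbits of the two maps correspond through h.
for y in (0.1, 0.25, 0.377, 0.8):
    print(y, logistic(h(y)) - h(tent(y)))  # differences at roundoff level
```

Because the tent map's dynamics are piecewise linear and easy to analyze, the conjugacy transfers conclusions (periodic points, chaos) to the logistic map for free.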

Schröder's equation

Schröder's equation serves as a key tool for analyzing the iteration of functions near a fixed point by linearizing the dynamics through a change of variables. For a function $f$ with a fixed point $x^*$, meaning $f(x^*) = x^*$, and multiplier $s = f'(x^*)$ satisfying $s \neq 0, 1$, the equation takes the form
$$\psi(f(x)) = s \, \psi(x),$$
with $\psi$ an invertible function defined in a neighborhood of $x^*$.[35] This setup assumes $f$ is analytic near $x^*$, enabling the equation to capture the local scaling behavior induced by iterations of $f$.[35] The equation was introduced by Ernst Schröder in 1871 as part of his foundational work on the iteration of analytic functions in the complex plane, motivated by the problem of expressing functional iteration explicitly.[36] Schröder's approach emphasized constructing solutions to facilitate the study of repeated compositions, particularly for functions without closed-form iterates.[24] Solutions to Schröder's equation typically exist as analytic functions when $0 < |s| < 1$, corresponding to an attracting fixed point. In such cases, a unique (up to scalar multiple) solution $\psi$ can be expressed as a convergent power series centered at $x^*$, normalized so that $\psi(x^*) = 0$ and $\psi'(x^*) = 1$.[37] This power series is constructed recursively by substituting the series expansion of $f$ into the equation and equating coefficients, yielding
$$\psi(x) = (x - x^*) + \sum_{k=2}^{\infty} c_k (x - x^*)^k,$$
where each coefficient $c_k$ satisfies a relation of the form $(s^k - s)\, c_k = (\text{terms involving } c_2, \dots, c_{k-1} \text{ and the Taylor coefficients of } f)$.[38] For broader analyticity, Gabriel Koenigs established in 1884 the existence of such a solution via a limit process on normalized iterates, ensuring convergence in a neighborhood of the fixed point.[39] With a solution $\psi$ in hand, fractional iterates of $f$ can be defined explicitly as
$$f^t(x) = \psi^{-1}\!\left( s^t \, \psi(x) \right),$$
for real or complex $t$, provided $\psi^{-1}$ is well defined in the relevant range.[24] This formula extends integer iterations to continuous flows, preserving the semigroup property $f^{t+u} = f^t \circ f^u$, and is particularly useful for approximating dynamics in cases where direct computation of iterates is intractable.[24]
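As a minimal worked sketch of this construction (an example chosen because its Koenigs function is known in closed form, not drawn from the cited sources): the Möbius map $f(x) = x/(2 - x)$ fixes $x^* = 0$ with multiplier $s = f'(0) = 1/2$, and $\psi(x) = x/(1 - x)$ solves Schröder's equation, so $f^t = \psi^{-1} \circ (s^t \,\cdot) \circ \psi$ yields every fractional iterate:

```python
S = 0.5  # multiplier s = f'(0) of the map f below

def f(x):
    """Mobius map with fixed point 0 and multiplier 1/2."""
    return x / (2 - x)

def psi(x):
    """Koenigs/Schroeder function: satisfies psi(f(x)) = S * psi(x)."""
    return x / (1 - x)

def psi_inv(y):
    return y / (1 + y)

def f_t(x, t):
    """Fractional iterate f^t(x) = psi^{-1}(S**t * psi(x))."""
    return psi_inv(S ** t * psi(x))

x = 0.3
half = f_t(x, 0.5)            # a functional square root of f, applied once
print(f_t(half, 0.5), f(x))   # composing the half-iterate twice recovers f
```

The semigroup property $f^{1/2} \circ f^{1/2} = f$ is what makes $f^{1/2}$ a legitimate half-iterate rather than an arbitrary interpolation.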

Applications and examples

Markov chains

In the context of iterated functions, Markov chains model probabilistic transitions between discrete states as iterations of a stochastic matrix. A Markov chain is defined by a finite or countable state space and a transition probability function that determines the probability of moving from one state to another, independent of prior history. This transition structure is represented by a stochastic matrix $P$, where each entry $p_{ij}$ denotes the probability of transitioning from state $i$ to state $j$, and each row sums to 1. The state distribution evolves iteratively: if $\mathbf{p}_n$ is the probability distribution (row) vector at step $n$, then $\mathbf{p}_{n+1} = \mathbf{p}_n P$, with the $n$-step distribution given by $\mathbf{p}_n = \mathbf{p}_0 P^n$.[40][41] Absorbing Markov chains feature one or more absorbing states, which once entered are never left, corresponding to fixed points of the iteration. In such chains, repeated application of $P$ leads to convergence to a limiting distribution concentrated on the absorbing states, regardless of the initial distribution, as probability mass drains from the transient states over iterations. Ergodic Markov chains, in contrast, are irreducible and aperiodic, ensuring that the iteration $P^n$ converges to a unique stationary distribution $\pi$ satisfying $\pi = \pi P$, so the long-term behavior mixes across all states. This convergence is guaranteed by the Perron–Frobenius theorem, which implies that powers of an irreducible, aperiodic stochastic matrix approach a rank-1 matrix whose identical rows equal the stationary distribution.[40][41] Classification of Markov chains relies on irreducibility and periodicity to predict iterative behavior.
A chain is irreducible if every state is reachable from every other state, meaning the state space forms a single communicating class under the transition graph of $P$; otherwise, it decomposes into transient and recurrent classes. Periodicity measures the cyclic structure: the period of a state $i$ is the greatest common divisor of the set $\{n \geq 1 : p_{ii}^{(n)} > 0\}$, where $p_{ii}^{(n)}$ is the $n$-step return probability, and the chain is periodic if this period exceeds 1 for some state, leading to oscillatory rather than monotonic convergence of the iterates. Aperiodic chains (period 1) admit genuine convergence to stationarity.[40] The stationary distribution $\pi$ serves as an invariant measure for the iterated map defined by $P$, preserving probability mass under application: $\pi P = \pi$. For irreducible, positive recurrent chains, this $\pi$ is unique and is the long-term limiting distribution, directly linking probabilistic invariance to the fixed points of the iteration operator. In ergodic chains, the time average of the state distribution converges to this invariant measure, underscoring the role of iteration in revealing equilibrium properties.[40][41]
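A minimal sketch of this iteration (with an arbitrary two-state chain chosen for illustration): repeatedly applying $\mathbf{p}_{n+1} = \mathbf{p}_n P$ converges to the stationary distribution $\pi = \pi P$, here $(5/6, 1/6)$:

```python
# Two-state chain: from state 0 stay w.p. 0.9; from state 1 return w.p. 0.5.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(p, P):
    """One iteration of the distribution: p_{n+1} = p_n P (row-vector convention)."""
    return [sum(p[i] * P[i][j] for i in range(len(P))) for j in range(len(P[0]))]

p = [0.0, 1.0]      # start entirely in state 1
for _ in range(60):
    p = step(p, P)

print(p)            # approaches the stationary distribution (5/6, 1/6)
```

The convergence rate is governed by the second eigenvalue of $P$ (here $0.4$), so the error shrinks geometrically with each iteration.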

Dynamical systems and chaos

Iterated functions serve as fundamental models in discrete dynamical systems, where the evolution of a state is described by repeated application of a map $ f: X \to X $ on a space $ X $, often an interval or the complex plane. These systems capture the behavior of nonlinear processes, such as population dynamics or orbital mechanics, where simple rules lead to complex outcomes. A canonical example is the logistic map $ f(x) = r x (1 - x) $ for $ x \in [0, 1] $ and parameter $ r > 0 $, originally studied as a discrete analog to continuous population growth models. For $ r = 4 $, the map exhibits fully developed chaos, with orbits densely filling the interval and being ergodic with respect to the invariant measure $ \mu(dx) = \frac{1}{\pi \sqrt{x(1-x)}} dx $.[42] Chaotic behavior in such iterated systems is characterized by sensitivity to initial conditions, where infinitesimally close starting points diverge exponentially over iterations. This sensitivity is quantified by the Lyapunov exponent $ \lambda $, defined as $ \lambda = \lim_{n \to \infty} \frac{1}{n} \ln \left| \frac{df^n(x)}{dx} \right| $ for typical $ x $, with $ \lambda > 0 $ indicating chaos. In the logistic map at $ r = 4 $, $ \lambda = \ln 2 > 0 $, confirming exponential divergence and the hallmark unpredictability of chaotic dynamics despite determinism. Positive Lyapunov exponents distinguish chaotic attractors from regular ones, ensuring that long-term predictions require infinite precision in initial states.[43] The onset of chaos often occurs via bifurcations, particularly the period-doubling route, where stable periodic orbits double in period as a parameter like $ r $ increases, culminating in an infinite cascade at a finite value. 
For the logistic map, period-doubling bifurcations occur at parameter values $ r_n $ approaching the Feigenbaum point $ r_\infty \approx 3.56995 $, governed by the universal Feigenbaum constant $ \delta \approx 4.6692 $, which describes the scaling $ r_{n+1} - r_n \sim \delta^{-1} (r_n - r_{n-1}) $. Beyond $ r_\infty $, the attractor becomes strange, with fractal structure and non-integer dimension. This route exemplifies how iterated functions transition from order to chaos universally across one-dimensional unimodal maps.[44] Strange attractors in higher-dimensional iterated systems extend these ideas, featuring fractal geometry where trajectories remain bounded yet never repeat, combining sensitivity with structural complexity. In complex dynamics, the Mandelbrot set emerges from iterating the quadratic maps $ z_{n+1} = z_n^2 + c $ with $ c \in \mathbb{C} $: it is the set of parameters for which the critical orbit remains bounded, and its boundary is a fractal with Hausdorff dimension 2. The associated Julia set $ J_c $ is the boundary of the filled Julia set $ K_c = \{ z \in \mathbb{C} : f_c^{\,n}(z) \not\to \infty \} $, where $ f_c(z) = z^2 + c $; for $ c $ outside the Mandelbrot set, $ J_c $ is totally disconnected, and the dynamics of $ f_c $ restricted to $ J_c $ are chaotic, illustrating how iteration generates the self-similar fractals central to understanding turbulent or irregular dynamics.[45]
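The Lyapunov exponent defined earlier can be estimated directly from an orbit; a minimal sketch for the logistic map at $ r = 4 $ (where $ f'(x) = 4 - 8x $ and the theoretical value is $ \lambda = \ln 2 $; the seed, burn-in, and orbit length are arbitrary choices for illustration):

```python
import math

R = 4.0

def f(x):
    return R * x * (1 - x)

def lyapunov(x0, n=200_000, burn=1_000):
    """Estimate lambda = (1/n) * sum_k ln|f'(x_k)| along the orbit of x0."""
    x = x0
    for _ in range(burn):      # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(R - 2 * R * x))  # |f'(x)| = |4 - 8x|
        x = f(x)
    return total / n

lam = lyapunov(0.2)
print(lam, math.log(2))   # the estimate should lie close to ln 2
```

A positive estimate confirms the exponential divergence of nearby orbits; the same routine applied at, say, $ r = 3.2 $ (a stable 2-cycle) would return a negative value.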

Computational and analytical methods

Means of study

Analytical tools for studying iterated functions often rely on functional equations, which relate the iterates of a function to its properties at fixed points or periodic orbits. For instance, systems of functional equations can be formulated to characterize the behavior of iterations near fixed points, providing insights into convergence and stability without explicit computation of higher iterates.[24] Iteration theory further employs such equations to embed discrete iterations into continuous dynamical systems, facilitating the analysis of long-term behavior through embedding theorems.[46] Perturbation theory complements these approaches by approximating the effects of small deviations in the function or initial conditions on the iterates. In the context of iterated function systems, perturbative methods adjust moments or statistical properties of orbits under polynomial maps, enabling quantitative predictions of attractor dimensions and measures.[47] This technique is particularly useful for systems where exact solutions are intractable, allowing series expansions to estimate iterative trajectories.[48] Numerical methods provide practical means to explore iterated functions, with fixed-point iteration serving as a foundational algorithm to approximate solutions by repeatedly applying the function until convergence. The method converges linearly near attracting fixed points if the derivative's absolute value is less than 1, offering a simple way to trace orbits.[16] For visualization, cobweb plots graphically depict the iterative process by plotting the function alongside the line $ y = x $, revealing convergence, divergence, or cycles through the trajectory's path.[49] Software implementations facilitate these simulations, with Python's NumPy and SciPy libraries enabling efficient computation of function orbits via vectorized array operations and optimization routines. 
For example, iterative applications can be simulated using loops over NumPy arrays to generate long sequences of iterates for analysis. MATLAB similarly supports orbit simulation through built-in functions for solving difference equations and plotting. A key challenge in numerical studies arises in chaotic regimes, where small rounding errors amplify exponentially due to sensitive dependence on initial conditions, making long-term orbit predictions unreliable. This instability necessitates high-precision arithmetic or shadowing techniques to validate chaotic behaviors in iterated maps.[50] Such issues underscore the need for careful error analysis in simulations of nonlinear iterations.[51]
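A minimal sketch of fixed-point iteration (using the classic example $ f(x) = \cos x $, whose attracting fixed point is the Dottie number $ \approx 0.739 $; the recorded pairs are the points a cobweb plot would draw, and the tolerance and iteration cap are arbitrary choices):

```python
import math

def fixed_point_iterate(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- f(x) until successive values agree to within tol."""
    x = x0
    cobweb = []                    # (x_n, x_{n+1}) pairs for a cobweb plot
    for _ in range(max_iter):
        x_next = f(x)
        cobweb.append((x, x_next))
        if abs(x_next - x) < tol:
            return x_next, cobweb
        x = x_next
    return x, cobweb

root, path = fixed_point_iterate(math.cos, 1.0)
print(root)   # converges to the Dottie number, approximately 0.7390851332
```

Convergence is linear because $ |f'(x^*)| = |\sin x^*| \approx 0.674 < 1 $ at the fixed point; plotting the recorded pairs against the curve $ y = f(x) $ and the line $ y = x $ reproduces the cobweb diagram.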

Functional derivatives

In the context of iterated functions, functional derivatives extend classical differentiation to examine variations in the $n$-th iterate $f^n(x)$ with respect to the iteration index $n$ or parameters of the base function $f$. The derivative with respect to $n$, sometimes termed the iterating velocity, quantifies the rate of change of the iterate as the number of compositions increases, enabling analysis of convergence and stability in iterative processes. For instance, in frameworks such as operational calculus for differentiable programming, this velocity is expressed as $\partial_n f^n = \nu \cdot (\partial h)^{-1} \cdot h$, where $h$ is an eigenmap and $\nu$ relates to the function's eigenvalue, facilitating explicit computations for specific iterates such as summation reductions.[52] A foundational tool for these derivatives is the chain rule applied iteratively to compositions. For the derivative with respect to the input $x$, repeated application yields
$$(f^n)'(x) = \prod_{k=0}^{n-1} f'\!\left(f^k(x)\right),$$
which captures the cumulative effect of local sensitivities along the orbit of iterations; this holds under standard differentiability assumptions, by the same outside-in differentiation that handles nested compositions such as $e^{\sin(x^2)}$.[53] For parameter-dependent iterates in systems such as contracting maps with probabilities, derivatives of long-term averages with respect to parameters are computable with linear effort relative to the average itself, assuming contractivity and ergodicity.[54] These derivatives find key applications in sensitivity analysis for optimization, where they assess how perturbations in parameters or iterations affect the outcomes of iterative algorithms. In parameter identification for models such as automotive thermal simulations, sensitivity-guided iterations using first-order Sobol indices prioritize identifiable parameters, achieving average relative errors of 1.62% and model output errors of 0.108 °C in under a day, far outperforming traditional methods requiring weeks.[55] Post-2000 developments have integrated such derivatives into variational principles for iterates; for example, operational calculus enables higher-order iterating velocities for machine-learning applications such as early stopping, while numerical approximations minimize error functionals $E(g) = \int (g(g(x)) - f(x))^2 \, dx$ to construct functional square roots of nonlinear maps such as the exponential.[52][56]
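The chain-rule product above can be checked directly; a minimal sketch using $f(x) = x^2$, for which $f^n(x) = x^{2^n}$ gives the closed form $(f^n)'(x) = 2^n x^{2^n - 1}$ to compare against:

```python
def f(x):
    return x * x

def fprime(x):
    return 2 * x

def iterate_derivative(x, n):
    """(f^n)'(x) via the chain-rule product  prod_{k=0}^{n-1} f'(f^k(x))."""
    prod = 1.0
    for _ in range(n):
        prod *= fprime(x)   # multiply in f'(f^k(x))
        x = f(x)            # advance along the orbit to f^{k+1}(x)
    return prod

x, n = 1.1, 4
closed_form = 2 ** n * x ** (2 ** n - 1)   # since f^n(x) = x^(2^n)
print(iterate_derivative(x, n), closed_form)
```

The same product is what the Lyapunov exponent averages in logarithm, linking this formula to the sensitivity analysis of chaotic orbits.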

References
