Iterated function
from Wikipedia

Iterated transformations of the object on the left
On top is a clockwise rotation by 90°. It has order 4, because that is the smallest positive exponent that produces the identity. Below is a shear mapping with infinite order.
Below that are their compositions, which both have order 3.

In mathematics, an iterated function is a function that is obtained by composing another function with itself two or several times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated.


Iterated functions are studied in computer science, fractals, dynamical systems, mathematics and renormalization group physics.

Definition


The formal definition of an iterated function on a set X follows.

Let X be a set and f : X → X be a function.

Define f n, the n-th iterate of f, where n is a non-negative integer, by

f 0 = idX   and   f n+1 = f ∘ f n,

where idX is the identity function on X and (f ∘ g)(x) = f(g(x)) denotes function composition. This notation has been traced to Hans Heinrich Bürmann and John Frederick William Herschel in 1813.[1][2][3][4] Herschel credited Bürmann for it, but without giving a specific reference to the work of Bürmann, which remains undiscovered.[5]
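The recursive definition above can be sketched directly in code; `iterate` here is an illustrative helper, not a standard library function:

```python
def iterate(f, n):
    """Return the n-th iterate f^n: f composed with itself n times.
    f^0 is the identity, matching the recursive definition above."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

double = lambda x: 2 * x
assert iterate(double, 0)(5) == 5      # f^0 = identity
assert iterate(double, 3)(5) == 40     # f^3(5) = 2 * 2 * 2 * 5
```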

Because the notation f n may refer both to iteration (composition) of the function f and to exponentiation of the function f (the latter is commonly used in trigonometry), some mathematicians[citation needed] choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))). For the same purpose, f [n](x) was used by Benjamin Peirce[6][4][nb 1] whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead.[7][4][nb 2]

Abelian property and iteration sequences


In general, the following identity holds for all non-negative integers m and n:

f m ∘ f n = f n ∘ f m = f m+n.

This is structurally identical to the property of exponentiation that a^m a^n = a^(m+n).

In general, for arbitrary (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, Tm(Tn(x)) = Tmn(x), since Tn(x) = cos(n arccos(x)).

The relation (f m)n(x) = (f n)m(x) = f mn(x) also holds, analogous to the property of exponentiation that (am)n = (an)m = amn.
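Both identities are easy to verify numerically; a small sketch (the helper `iterate` and the affine map are illustrative):

```python
def iterate(f, n, x):
    """Apply f to x n times, returning f^n(x)."""
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: 3 * x + 1
x = 7

# f^m(f^n(x)) = f^n(f^m(x)) = f^(m+n)(x)
assert iterate(f, 2, iterate(f, 3, x)) \
    == iterate(f, 3, iterate(f, 2, x)) \
    == iterate(f, 5, x)

# (f^m)^n = f^(mn): iterating the 2nd iterate 3 times equals f^6
f2 = lambda y: iterate(f, 2, y)
assert iterate(f2, 3, x) == iterate(f, 6, x)
```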

The sequence of functions f n is called a Picard sequence,[8][9] named after Charles Émile Picard.

For a given x in X, the sequence of values fn(x) is called the orbit of x.

If f n (x) = f n+m (x) for some integer m > 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit.
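The cycle-detection problem mentioned above is classically solved by Floyd's tortoise-and-hare algorithm, sketched here (the example map is an arbitrary finite function given as a lookup table):

```python
def find_cycle(f, x0):
    """Floyd's tortoise-and-hare: return (mu, lam), where mu is the index
    of the first periodic point in the orbit of x0 and lam is the period."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                  # phase 1: detect a repeat
        tortoise, hare = f(tortoise), f(f(hare))
    tortoise, mu = x0, 0
    while tortoise != hare:                  # phase 2: find the entry point mu
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    lam, hare = 1, f(tortoise)
    while tortoise != hare:                  # phase 3: measure the period
        hare = f(hare)
        lam += 1
    return mu, lam

# Orbit 0 -> 1 -> 2 -> 3 -> 4 -> 2 -> ... enters the 3-cycle {2, 3, 4} at index 2.
f = lambda x: [1, 2, 3, 4, 2][x]
assert find_cycle(f, 0) == (2, 3)
```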

Fixed points


If x = f(x) for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem.

There are several techniques for convergence acceleration of the sequences produced by fixed point iteration.[10] For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence.
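Steffensen's method as described can be sketched as follows (a minimal illustration, not a production root-finder):

```python
import math

def steffensen(f, x, tol=1e-12, max_iter=50):
    """Aitken's delta-squared process applied to fixed-point iteration
    of f (Steffensen's method), converging quadratically near the fixed point."""
    for _ in range(max_iter):
        fx, ffx = f(x), f(f(x))
        denom = ffx - 2 * fx + x
        if denom == 0:           # already (numerically) at the fixed point
            return x
        x_new = x - (fx - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Fixed point of cos near 0.739085 (the Dottie number).
p = steffensen(math.cos, 1.0)
assert abs(math.cos(p) - p) < 1e-10
```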

Limiting behaviour


Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point.[11]

When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set.

The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behavior of small neighborhoods under iteration. Also see infinite compositions of analytic functions.

Other limiting behaviors are possible; for example, wandering points are points that move away, and never come back even close to where they started.

Invariant measure


If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states.

In general, because repeated iteration corresponds to a shift, the transfer operator and its adjoint, the Koopman operator, can both be interpreted as shift operators acting on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos.

Fractional iterates and flows, and negative iterates

g: R → R is a trivial functional 5th root of f: R+ → R+, f(x) = sin(x). The computation of f(π/6) = 1/2 = g 5(π/6) is shown.

The notion f1/n must be used with care when the equation gn(x) = f(x) has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for n = 2 and f(x) = 4x − 6, both g(x) = 6 − 2x and g(x) = 2x − 2 are solutions; so the expression f 1/2(x) does not denote a unique function, just as numbers have multiple algebraic roots. A trivial root of f can always be obtained if f's domain can be extended sufficiently, cf. picture. The roots chosen are normally the ones belonging to the orbit under study.

Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that g(g(x)) = f(x).[12] This function g(x) can be written using the index notation as f 1/2(x) . Similarly, f 1/3(x) is the function defined such that f1/3(f1/3(f1/3(x))) = f(x), while f2/3(x) may be defined as equal to f 1/3(f1/3(x)), and so forth, all based on the principle, mentioned earlier, that f mf n = f m + n. This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit.[13][14]

In such cases, one refers to the system as a flow (cf. section on conjugacy below.)

If a function is bijective (and so possesses an inverse function), then negative iterates correspond to function inverses and their compositions. For example, f −1(x) is the normal inverse of f, while f −2(x) is the inverse composed with itself, i.e. f −2(x) = f −1(f −1(x)). Fractional negative iterates are defined analogously to fractional positive ones; for example, f −1/2(x) is defined such that f −1/2(f −1/2(x)) = f −1(x), or, equivalently, such that f −1/2(f 1/2(x)) = f 0(x) = x.

Some formulas for fractional iteration


One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows.[15]

  1. First determine a fixed point for the function such that f(a) = a.
  2. Define f n(a) = a for all n belonging to the reals. This, in some ways, is the most natural extra condition to place upon the fractional iterates.
  3. Expand fn(x) around the fixed point a as a Taylor series,
  4. Expand out
  5. Substitute in for fk(a) = a, for any k,
  6. Make use of the geometric progression to simplify terms. There is a special case when f '(a) = 1.
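The displayed expansions in the steps above were lost in extraction; to second order, the standard result of this procedure can be written as follows (a reconstruction, assuming f '(a) ≠ 1, with a the fixed point):

```latex
f^n(x) = a + f'(a)^n (x-a)
       + \frac{f''(a)}{2}\, f'(a)^{n-1}\, \frac{f'(a)^n - 1}{f'(a) - 1}\, (x-a)^2 + \cdots
```

For n = 2 this reproduces direct composition: the quadratic coefficient of f(f(x)) at a is (f''(a)/2) f'(a)(1 + f'(a)), in agreement with the formula.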

This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy.

Example 1


For example, setting f(x) = Cx + D gives the fixed point a = D/(1 − C), so the above formula terminates to just

f n(x) = C n(x − a) + a = C n x + (1 − C n) D/(1 − C),

which is trivial to check.
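Assuming the closed form f n(x) = C^n (x − a) + a that the terminating series yields, a quick numerical check (constants are illustrative):

```python
C, D = 3.0, 4.0
a = D / (1 - C)                 # fixed point, here -2.0
f = lambda x: C * x + D

def f_n_closed(x, n):
    """Closed form for the n-th iterate of the affine map."""
    return C ** n * (x - a) + a

x, y = 1.5, 1.5
for _ in range(5):
    y = f(y)                    # direct iteration, 5 times
assert abs(y - f_n_closed(x, 5)) < 1e-9
```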

Example 2


Find the value of the tower √2^(√2^(⋯^(√2^x))) where √2 appears n times (and possibly the interpolated values when n is not an integer). We have f(x) = √2^x. A fixed point is a = f(2) = 2.

So set x = 1; f n(1), expanded around the fixed point value of 2, is then an infinite series, which, taking just the first three terms, is correct to the first decimal place when n is positive. Also see Tetration: f n(1) is the n-th tetration of √2. Using the other fixed point a = f(4) = 4 causes the series to diverge.

For n = −1, the series computes the inverse function 2 ln x/ln 2.

Example 3


With the function f(x) = x^b, expand around the fixed point 1 to get the series, which is simply the Taylor series of x^(b^n) expanded around 1.

Conjugacy


If f and g are two iterated functions, and there exists a homeomorphism h such that g = h−1 ∘ f ∘ h, then f and g are said to be topologically conjugate.

Clearly, topological conjugacy is preserved under iteration, as g n = h−1 ∘ f n ∘ h. Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking f(x) = x + 1, one has the iteration of g(x) = h−1(h(x) + 1) as

gn(x) = h−1(h(x) + n),   for any function h.
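The identity g n(x) = h−1(h(x) + n) is easy to verify numerically for a concrete invertible h (here h = exp, an arbitrary illustrative choice):

```python
import math

h, h_inv = math.exp, math.log    # h must be invertible; log inverts exp

def g(x):
    """g = h^{-1}(h(x) + 1): the conjugate of the unit shift x -> x + 1."""
    return h_inv(h(x) + 1)

def g_n(x, n):
    """Closed form for the n-th iterate: g^n = h^{-1}(h(x) + n)."""
    return h_inv(h(x) + n)

x, y = 0.3, 0.3
for _ in range(4):
    y = g(y)                     # direct iteration, 4 times
assert abs(y - g_n(x, 4)) < 1e-9
```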

Making the substitution x = h−1(y) = ϕ(y) yields

g(ϕ(y)) = ϕ(y+1),   a form known as the Abel equation.

Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve[16] Schröder's equation for a function Ψ, which makes f(x) locally conjugate to a mere dilation, g(x) = f '(0) x, that is

f(x) = Ψ−1(f '(0) Ψ(x)).

Thus, its iteration orbit, or flow, under suitable provisions (e.g., f '(0) ≠ 1), amounts to the conjugate of the orbit of the monomial,

Ψ−1(f '(0)n Ψ(x)),

where n in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent n no longer need be integer or positive, and is a continuous "time" of evolution for the full orbit:[17] the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group.[18]

Iterates of the sine function (blue), in the first half-period. Half-iterate (orange), i.e., the sine's functional square root; the functional square root of that, the quarter-iterate (black) above it; and further fractional iterates up to the 1/64th. The functions below the (blue) sine are six integral iterates below it, starting with the second iterate (red) and ending with the 64th iterate. The green envelope triangle represents the limiting null iterate, a triangular function serving as the starting point leading to the sine function. The dashed line is the negative first iterate, i.e. the inverse of sine (arcsin). (From the general pedagogy web-site.[19] For the notation, see [2].)

This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic.

Markov chains


If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain.
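A minimal sketch of such a Markov chain: iterating the distribution-update map p ↦ pP converges to the stationary distribution (the matrix values are illustrative):

```python
# Row-stochastic transition matrix: each row sums to one.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(p):
    """One iteration of the distribution map p -> p P."""
    return [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)]

p = [1.0, 0.0]
for _ in range(100):
    p = step(p)

# The stationary distribution solves pi = pi P; here pi = (5/6, 1/6).
assert abs(p[0] - 5 / 6) < 1e-9 and abs(p[1] - 1 / 6) < 1e-9
```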

Examples


There are many chaotic maps. Well-known iterated functions include the Mandelbrot set and iterated function systems.

Ernst Schröder,[20] in 1870, worked out special cases of the logistic map, such as the chaotic case f(x) = 4x(1 − x), so that Ψ(x) = (arcsin √x)², hence f n(x) = sin²(2^n arcsin √x).

A nonchaotic case Schröder also illustrated with his method, f(x) = 2x(1 − x), yielded Ψ(x) = −1/2 ln(1 − 2x), and hence f n(x) = −1/2((1 − 2x)^(2^n) − 1).
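Both closed forms can be checked against direct iteration; here the nonchaotic case (a sketch):

```python
def f(x):
    """Nonchaotic logistic case."""
    return 2 * x * (1 - x)

def f_n_closed(x, n):
    """Schroeder's closed form f^n(x) = -(1/2)((1 - 2x)^(2^n) - 1)."""
    return -0.5 * ((1 - 2 * x) ** (2 ** n) - 1)

x, y = 0.3, 0.3
for _ in range(6):
    y = f(y)                     # direct iteration, 6 times
assert abs(y - f_n_closed(x, 6)) < 1e-12
```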

If f is the action of a group element on a set, then the iterated function corresponds to a free group.

Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some[20] that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n.

f(x) = 2x(1 − x)   (see note):   f n(x) = −1/2((1 − 2x)^(2^n) − 1)

f(x) = 4x(1 − x)   (see note):   f n(x) = sin²(2^n arcsin √x)

f(x) = (ax + b)/(cx + d)   (fractional linear transformation):[21]   f n(x) = (a_n x + b_n)/(c_n x + d_n), where a_n, b_n, c_n, d_n are the entries of the n-th power of the coefficient matrix with rows (a, b) and (c, d)

f(x) = h−1(h(x) + b)   (generic Abel equation):   f n(x) = h−1(h(x) + nb)

f(x) = cos(m arccos x)   (Chebyshev polynomial for integer m):   f n(x) = cos(m^n arccos x)
Note: these two special cases of ax2 + bx + c are the only cases that have a closed-form solution. Choosing b = 2 = –a and b = 4 = –a, respectively, further reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table.

Some of these examples are related among themselves by simple conjugacies.

Means of study


Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators.

In computer science


In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs.

Definitions in terms of iterated functions


Two important functionals can be defined in terms of iterated functions: summation and the equivalent product.
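The displayed formulas did not survive extraction; the standard construction realizes each functional by iterating a map that updates an (accumulator, index) pair, sketched here with illustrative helper names:

```python
def summation(g, a, b):
    """Compute sum of g(i) for i = a..b by iterating the pair map
    (acc, i) -> (acc + g(i), i + 1) starting from (0, a)."""
    step = lambda s: (s[0] + g(s[1]), s[1] + 1)
    state = (0, a)
    for _ in range(b - a + 1):
        state = step(state)
    return state[0]

def product(g, a, b):
    """Same idea with (acc, i) -> (acc * g(i), i + 1) starting from (1, a)."""
    step = lambda s: (s[0] * g(s[1]), s[1] + 1)
    state = (1, a)
    for _ in range(b - a + 1):
        state = step(state)
    return state[0]

assert summation(lambda i: i, 1, 10) == 55    # 1 + 2 + ... + 10
assert product(lambda i: i, 1, 5) == 120      # 5!
```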

Functional derivative


The functional derivative of an iterated function is given by a recursive formula.

Lie's data transport equation


Iterated functions crop up in the series expansion of combined functions, such as g(f(x)).

Given the iteration velocity, or beta function (physics),

v(x) = ∂f n(x)/∂n |n=0

for the nth iterate of the function f, we have[22]

g(f(x)) = exp[v(x) ∂/∂x] g(x).

For example, for rigid advection, if f(x) = x + t, then v(x) = t. Consequently, g(x + t) = exp(t ∂/∂x) g(x), action by a plain shift operator.

Conversely, one may specify f(x) given an arbitrary v(x), through the generic Abel equation discussed above,

f(x) = α−1(α(x) + 1),

where

α(x) = ∫ dx/v(x).

This is evident by noting that f n(x) = α−1(α(x) + n).

For continuous iteration index t, then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group,

f t(x) = exp[t v(x) ∂/∂x] x.

The initial flow velocity v suffices to determine the entire flow, given this exponential realization, which automatically provides the general solution to the translation functional equation,[23]

f t(f τ(x)) = f t+τ(x).

from Grokipedia
In mathematics, an iterated function is obtained by composing a given function f with itself multiple times, where the n-th iterate f^(n) denotes the n-fold composition applied to an initial input. This process generates a sequence of states x_n = f^(n)(x_0) from an initial value x_0, modeling discrete-time evolution in dynamical systems. Iterated functions form the basis for analyzing long-term behavior, including convergence to fixed points where f(x) = x, periodic orbits forming cycles of period n where f^(n)(x) = x but f^(k)(x) ≠ x for 1 ≤ k < n, and chaotic dynamics in nonlinear examples. Key applications span numerical analysis, such as Newton's method for root-finding, where iterates x_{n+1} = x_n − f(x_n)/f′(x_n) converge quadratically to solutions under suitable conditions. In population biology, the logistic map f(x) = λx(1 − x) illustrates bifurcations and chaos as the parameter λ varies, with fixed points at x = 0 and x = 1 − 1/λ for 1 < λ ≤ 3. Iterated functions also underpin fractal geometry through iterated function systems, collections of contractive mappings whose fixed points yield self-similar attractors like the Sierpinski triangle. These concepts extend to continuous-time analogs via functional iterates and flows, enabling broader studies in ergodic theory and complex dynamics.

Fundamentals

Definition

In mathematics, an iterated function arises from the repeated composition of a given function with itself. Consider a function f: X → X, where X is a set, ensuring that the codomain of f aligns with its domain to permit successive applications. The n-th iterate of f, denoted f^n, is formally defined recursively for positive integers n as f^1 = f and f^(n+1) = f ∘ f^n, where ∘ denotes function composition. Function composition serves as the foundational operation for iteration: given functions g: X → Y and f: Y → Z, the composite (f ∘ g): X → Z is defined by (f ∘ g)(x) = f(g(x)) for all x ∈ X. For self-composition in iteration, the requirement that the codomain lie within the domain (Y ⊆ X) guarantees that the output of each application remains within the domain, allowing the process to continue indefinitely without domain restrictions disrupting the sequence. This consistency is essential, as mismatches could render higher iterates undefined on portions of X. Notation for iterates varies across mathematical literature; the superscript f^n is common and traces back to early 19th-century usage, while alternatives like f^(n) emphasize the iterative exponent to distinguish it from functional powers in other contexts. For instance, f^2(x) = f(f(x)) and f^(2)(x) = f(f(x)) both represent the second iterate. These conventions facilitate clear expression in dynamical systems and related fields where repeated application is analyzed.

Iteration Sequences

In the study of iterated functions, the iteration sequence, often referred to as the orbit or trajectory of a point x in the domain X, is defined as the sequence {f^n(x)} for n = 0, 1, 2, …, where f^0(x) = x is the identity and f^(n+1)(x) = f(f^n(x)) for n ≥ 0. This construction captures the successive applications of the function f: X → X starting from the initial point x. The forward orbit specifically denotes this one-sided sequence for nonnegative integers n, highlighting the directional progression from the initial condition without requiring the function to be invertible. The choice of initial point x plays a pivotal role, as different starting values generally produce distinct sequences, even under the same function f. Simple examples demonstrate how iteration sequences can exhibit varied initial behaviors. For the linear function f(x) = 2x on the real line with initial point x = 1, the sequence is 1, 2, 4, 8, …, illustrating unbounded growth. In a nonlinear case, consider the logistic map f(x) = 4x(1 − x) on the interval [0, 1] starting from x = 0.1; the sequence begins 0.1, 0.36, 0.9216, 0.2890, … and oscillates across the interval. Periodic points emerge within these sequences as points where the iteration returns to the starting value after a finite number of steps: specifically, x is a periodic point of period k > 0 if f^k(x) = x but f^m(x) ≠ x for 0 < m < k, creating a finite cycle in the trajectory.
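The orbits described above, including the logistic example, can be generated with a short sketch (`orbit` is an illustrative helper):

```python
def orbit(f, x0, n):
    """Return the first n+1 points of the forward orbit x0, f(x0), f^2(x0), ..."""
    pts = [x0]
    for _ in range(n):
        pts.append(f(pts[-1]))
    return pts

# Linear map: unbounded growth.
assert orbit(lambda x: 2 * x, 1, 3) == [1, 2, 4, 8]

# Logistic map: the oscillating sequence quoted above.
logistic = lambda x: 4 * x * (1 - x)
o = orbit(logistic, 0.1, 3)
assert abs(o[1] - 0.36) < 1e-12 and abs(o[2] - 0.9216) < 1e-12
```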

Basic Properties

Fixed Points

In the context of iterated functions, a fixed point of a function f is a point x* in its domain such that f(x*) = x*. More generally, for the n-th iterate f^n, a point x* is a periodic point of period n if f^n(x*) = x* but f^k(x*) ≠ x* for all positive integers k < n; fixed points are thus periodic points of period 1. Fixed points of iterated functions are classified by their stability, particularly in one-dimensional real dynamical systems where f is differentiable at x*. A fixed point x* is attracting if |f′(x*)| < 1, meaning nearby points converge to x* under iteration; it is superattracting if f′(x*) = 0, leading to quadratic convergence. Conversely, x* is repelling if |f′(x*)| > 1, causing nearby points to diverge; if |f′(x*)| = 1, the fixed point is neutral (or indifferent), with behavior depending on higher-order terms. The existence of fixed points for continuous functions is guaranteed by several fundamental theorems. Brouwer's fixed-point theorem states that every continuous function f: B^n → B^n, where B^n is the closed unit ball in R^n (a compact convex set), has at least one fixed point. For contractions in metric spaces, Banach's fixed-point theorem asserts that if (X, d) is a complete metric space and f: X → X is a contraction mapping (i.e., there exists k < 1 such that d(f(x), f(y)) ≤ k d(x, y) for all x, y ∈ X), then f has a unique fixed point, and iterations converge to it from any starting point (Banach, 1922). A classic example is the logistic map f(x) = rx(1 − x) for x ∈ [0, 1] and parameter r > 0, where fixed points solve x = rx(1 − x), yielding x* = 0 and x* = 1 − 1/r (for r ≠ 1).
The fixed point x* = 0 is attracting for 0 < r < 1 (since |f′(0)| = |r| < 1) but repelling for r > 1; meanwhile, x* = 1 − 1/r is attracting for 1 < r < 3 (where |f′(x*)| = |2 − r| < 1) and becomes repelling for r > 3, marking the onset of period-doubling bifurcations.
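A quick numerical check of these logistic-map stability claims (an illustrative sketch with r = 2.5):

```python
r = 2.5
f = lambda x: r * x * (1 - x)
df = lambda x: r - 2 * r * x          # derivative of the logistic map

x_star = 1 - 1 / r                    # nontrivial fixed point, here 0.6
assert abs(f(x_star) - x_star) < 1e-12
assert abs(df(x_star)) < 1            # |2 - r| = 0.5 < 1: attracting

# Iteration from a nearby point converges to the attracting fixed point.
x = 0.2
for _ in range(100):
    x = f(x)
assert abs(x - x_star) < 1e-6
```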

Abelian Property

The Abelian property in the context of iterated functions refers to the commutativity of iterations when the original functions commute under composition. Specifically, the iteration is Abelian if, for functions f and g satisfying f ∘ g = g ∘ f, the higher-order iterates satisfy f^n ∘ g^m = g^m ∘ f^n for all non-negative integers n and m. This property arises naturally in the semigroup generated by f and g under composition, where the commutativity of the generators ensures the entire subsemigroup is commutative. The property can be proved by induction: the base case holds by assumption, and assuming f^k ∘ g = g ∘ f^k for k < n, the inductive step follows from f^n ∘ g = f^(n−1) ∘ (f ∘ g) = f^(n−1) ∘ (g ∘ f) = (f^(n−1) ∘ g) ∘ f = (g ∘ f^(n−1)) ∘ f = g ∘ f^n, with similar reasoning for powers of g. Conditions for Abelian iteration often involve solutions to functional equations that linearize the iteration, such as Schröder's equation ψ(f(x)) = sψ(x), where s is a constant and ψ conjugates f to multiplication by s. Under this conjugacy, iterates correspond to powers of s, which commute since multiplication is commutative, thereby ensuring the iterates of f form an Abelian semigroup. This property has significant implications for solving functional equations in iteration theory, as it reduces non-commutative problems to commutative ones via conjugacy, facilitating the construction of continuous or fractional iterates in dynamical systems. The concept of Abelian iteration traces its origins to Henri Poincaré's work in the late 19th century, where he explored functional equations for iterating solutions to differential equations, laying foundational ideas for commutativity in continuous dynamics.

Asymptotic Behaviors

Limiting Behavior

The limiting behavior of an iterated function f^n(x) describes the long-term dynamics of the sequence generated by repeated application of f, which may converge to a fixed point, diverge to infinity, or exhibit other patterns such as dense orbits or chaos. Convergence to a fixed point p occurs when the sequence {f^n(x)} approaches p as n → ∞. For a fixed point p where f(p) = p, it is attracting if nearby points move toward it under iteration; a sufficient local condition is |f′(p)| < 1, ensuring that the distance to p decreases geometrically for initial points sufficiently close to p. Globally, in a complete metric space, if f is a contraction mapping with Lipschitz constant k < 1, the Banach fixed-point theorem guarantees a unique fixed point, and the iteration converges to it from any starting point, with error bounded by k^n d(x_0, p). For example, the function f(x) = (x + 1)/2 on [0, 1] is a contraction with k = 1/2, and iterations converge monotonically to the fixed point p = 1. Divergence arises when orbits move away from fixed points or bounded regions. If |f′(p)| > 1, the fixed point is repelling, and sequences starting near p typically diverge, potentially escaping to infinity in unbounded domains. In nonlinear systems, such as quadratic maps over the complex plane, orbits may escape to infinity if the magnitude grows without bound under iteration, as seen in the definition of the Mandelbrot set, where points with bounded orbits remain inside the set. Alternatively, in some nonlinear cases, orbits can remain bounded but exhibit chaotic behavior, densely filling certain regions without converging to a single limit. Stability in more general settings, including continuous-time analogs like flows, is quantified by Lyapunov exponents, which measure the average exponential rate of separation of nearby trajectories.
For a discrete system given by iteration of f, the largest Lyapunov exponent is λ = lim_{n→∞} (1/n) log |(f^n)′(x)|, where λ < 0 indicates attracting behavior and convergence, while λ > 0 signals instability and potential divergence or chaos; these exponents exist almost everywhere by the Oseledets multiplicative ergodic theorem. For instance, iterating a rotation R_α(x) = x + α (mod 1) on the circle with α irrational produces dense orbits that oscillate without converging to a fixed point, corresponding to a zero Lyapunov exponent, since rotations preserve distances.
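The Lyapunov-exponent limit above can be estimated as a finite orbit average of log |f′(x_k)|; for the chaotic logistic map f(x) = 4x(1 − x) the known value is ln 2 (a sketch; the finite average is only an approximation):

```python
import math

def lyapunov(f, df, x0, n=200000, burn=1000):
    """Estimate the largest Lyapunov exponent of a 1-D map as the
    orbit average of log|f'(x_k)| (finite-n approximation of the limit)."""
    x = x0
    for _ in range(burn):        # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(df(x)))
        x = f(x)
    return total / n

f = lambda x: 4 * x * (1 - x)
df = lambda x: 4 - 8 * x
lam = lyapunov(f, df, 0.2)
assert abs(lam - math.log(2)) < 0.05   # known value ln 2 for this map
```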

Invariant Measures

In the context of iterated functions, an invariant measure for a measurable transformation f: X → X on a measurable space (X, B) is a measure μ on B satisfying μ(f^(−1)(A)) = μ(A) for every measurable set A ∈ B. This condition ensures that the measure of any set remains unchanged when pulled back under the dynamics of f, reflecting the preservation of probabilistic structure under iteration. Such measures capture the long-term statistical properties of orbits generated by f. Among invariant measures, ergodic ones play a central role in ergodic theory. An invariant measure μ is ergodic if every f-invariant measurable set A (i.e., f^(−1)(A) = A) satisfies μ(A) = 0 or μ(A) = 1. The ergodic decomposition theorem states that any invariant probability measure μ can be uniquely represented as an integral over a family of ergodic invariant measures: specifically, there exists a probability measure λ on the space of ergodic measures such that μ = ∫ ν dλ(ν). This decomposition allows complex invariant measures to be analyzed through their ergodic components, facilitating the study of mixing and recurrence properties in dynamical systems. For certain classes of maps, invariant measures can be constructed explicitly using the Perron-Frobenius operator. Consider a piecewise expanding C² map f on an interval, divided into finitely many monotonic branches where |f′(x)| ≥ λ > 1. The associated Perron-Frobenius operator P acts on integrable densities h by (Ph)(x) = Σ_{y ∈ f^(−1)(x)} h(y)/|f′(y)|, and an invariant density h satisfies Ph = h, yielding an absolutely continuous invariant measure μ with density h with respect to Lebesgue measure. The Lasota-Yorke theorem guarantees the existence of such a measure for these maps, establishing a Lasota-Yorke inequality that bounds the variation of Ph and ensures the operator's contractivity in appropriate norms.
Invariant measures find applications in stochastic processes, where they correspond to stationary distributions that remain unaltered under the evolution of the process. In probabilistic interpretations of iterated functions, these measures describe equilibrium states, linking deterministic dynamics to statistical long-run behavior.
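For the chaotic logistic map the absolutely continuous invariant density is known to be 1/(π√(x(1 − x))); a histogram of a long orbit approximates the corresponding measure (a sketch; the bin count and orbit length are arbitrary choices):

```python
import math

f = lambda x: 4 * x * (1 - x)

# Histogram a long orbit into 10 equal bins on [0, 1].
bins = [0] * 10
x, N = 0.2, 500_000
for _ in range(N):
    x = f(x)
    bins[min(int(x * 10), 9)] += 1

# Compare the first bin's mass with the invariant measure of [0, 0.1]:
# mu([0, a]) = (2/pi) arcsin(sqrt(a)) for the density 1/(pi*sqrt(x(1-x))).
expected = (2 / math.pi) * math.asin(math.sqrt(0.1))
assert abs(bins[0] / N - expected) < 0.01
```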

Advanced Iteration Techniques

Fractional Iterates

Fractional iterates extend the concept of integer-order iteration to non-integer orders, allowing the composition of a function f with itself a fractional number p/q of times, where p and q are integers with q > 0. A function g is a fractional iterate of order p/q if it satisfies the relation g^q = f^p, meaning that applying g q times yields the p-th iterate of f. This definition generalizes the integer case, where f^n denotes n compositions of f, and enables the study of continuous parameterizations of iterations under suitable conditions. One primary method to construct fractional iterates near a fixed point involves solving Schröder's functional equation, which linearizes the dynamics through a conjugacy. Suppose f has a fixed point at x_0 with multiplier s = f′(x_0) ≠ 0, 1, and let ψ be a function satisfying ψ(f(x)) = s·ψ(x) in a neighborhood of x_0. Then the fractional iterate of order t (for real t) is given by f^t(x) = ψ^(−1)(s^t · ψ(x)), provided ψ is invertible. This solution arises from the eigenfunction property of ψ, which conjugates f to multiplication by s, allowing exponentiation for fractional powers; the existence of analytic solutions to Schröder's equation was established by Koenigs in 1884 for |s| ≠ 1. For functions without suitable fixed points or exhibiting translational behavior, Abel's equation provides an alternative approach, particularly for solving equations like α(f(x)) = α(x) + 1, the Abel equation, introduced by Schröder in 1871. Here, α conjugates f to a unit translation, and the fractional iterate follows as f^t(x) = α^(−1)(α(x) + t). This method is effective for functions with no fixed points, such as certain exponential maps. In cases of translational dynamics, the Abel equation facilitates the embedding of the iteration into a continuous flow.
Despite these constructions, fractional iterates are generally non-unique without additional constraints, as multiple functions can satisfy the defining relation g^q = f^p. For instance, solutions to Schröder's or Abel's equations may differ by periodic perturbations or require specification of growth conditions; uniqueness can be ensured by imposing analyticity, monotonicity, or convexity conditions on the iterates. Such limitations highlight the need for context-specific assumptions in dynamical systems to select a principal iterate.
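The Schröder-equation construction above can be made concrete for f(x) = 2x(1 − x), whose eigenfunction ψ(x) = −½ ln(1 − 2x) with multiplier s = 2 is known in closed form (valid for x < 1/2; a sketch):

```python
import math

def psi(x):
    """Schroeder eigenfunction for f(x) = 2x(1-x): psi(f(x)) = 2 psi(x)."""
    return -0.5 * math.log(1 - 2 * x)      # requires x < 1/2

def psi_inv(y):
    return 0.5 * (1 - math.exp(-2 * y))

def f_t(x, t):
    """Fractional iterate f^t(x) = psi^{-1}(2^t psi(x))."""
    return psi_inv(2 ** t * psi(x))

f = lambda x: 2 * x * (1 - x)
x = 0.3
# The half-iterate composed with itself recovers one application of f:
assert abs(f_t(f_t(x, 0.5), 0.5) - f(x)) < 1e-12
# Integer t matches ordinary iteration:
assert abs(f_t(x, 2) - f(f(x))) < 1e-12
```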

Negative Iterates and Flows

In dynamical systems, a flow associated with an iterated function f is a one-parameter family of maps {φ_t: X → X | t ∈ R} on a space X, satisfying the axioms φ_0 = id, the identity map, and φ_{s+t} = φ_s ∘ φ_t for all s, t ∈ R, with φ_1 = f. This structure forms a one-parameter group under composition, extending the discrete iteration f^n = φ_n for integer n ≥ 0 to continuous time, and provides a continuous parameterization of iterations. Such flows generalize iterated functions by embedding them into a semigroup or group action, often on manifolds or Banach spaces. Flows are typically constructed by embedding the iteration into an autonomous ordinary differential equation (ODE) ẋ = F(x), where φ_t(x) is the solution at time t starting from initial condition x ∈ X, and the vector field F satisfies F(x) = d/dt φ_t(x)|_{t=0}