Fixed point (mathematics)
from Wikipedia
Figure: The function shown in red has the fixed points 0, 1, and 2.

In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically, for functions, a fixed point is an element that is mapped to itself by the function. Any set of fixed points of a transformation is also an invariant set.

Fixed point of a function


Formally, c is a fixed point of a function f if c belongs to both the domain and the codomain of f, and f(c) = c. In particular, f cannot have any fixed point if its domain is disjoint from its codomain. If f is defined on the real numbers, it corresponds, in graphical terms, to a curve in the Euclidean plane, and each fixed point c corresponds to an intersection of the curve with the line y = x; cf. the picture.

For example, if f is defined on the real numbers by f(x) = x^2 - 3x + 4, then 2 is a fixed point of f, because f(2) = 2.
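To see this directly, the fixed points of this f are the solutions of f(x) = x (a short derivation based on the quadratic example above):

    x^2 - 3x + 4 = x
    x^2 - 4x + 4 = 0
    (x - 2)^2 = 0
    x = 2

so 2 is the only fixed point; since it is a double root, the curve is tangent to the line y = x there.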

Not all functions have fixed points: for example, f(x) = x + 1 has no fixed points because x + 1 is never equal to x for any real number.

Fixed point iteration


In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. Specifically, given a function f with the same domain and codomain and a point x0 in the domain of f, the fixed-point iteration is

x_{n+1} = f(x_n),   n = 0, 1, 2, ...,

which gives rise to the sequence x0, f(x0), f(f(x0)), ... of iterated function applications, which is hoped to converge to a point x*. If f is continuous, then one can prove that the obtained x* is a fixed point of f.
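As a concrete illustration (a minimal sketch, not from the original article; the function g(x) = (x + 2/x)/2 and the tolerance are chosen only for demonstration), the iteration can be coded as a simple loop:

    def fixed_point_iteration(g, x0, tol=1e-12, max_iter=100):
        """Iterate x_{n+1} = g(x_n) until successive iterates are within tol."""
        x = x0
        for _ in range(max_iter):
            x_next = g(x)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x  # may not have converged

    # Example: g(x) = (x + 2/x) / 2 has the fixed point sqrt(2),
    # since x = (x + 2/x)/2 implies x**2 = 2.
    print(fixed_point_iteration(lambda x: (x + 2.0 / x) / 2.0, x0=1.0))  # ~1.41421356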

The notions of attracting fixed points, repelling fixed points, and periodic points are defined with respect to fixed-point iteration.

Fixed-point theorems


A fixed-point theorem is a result saying that at least one fixed point exists, under some general condition.[1]

For example, the Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, fixed-point iteration will always converge to a fixed point.

The Brouwer fixed-point theorem (1911) says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point.

The Lefschetz fixed-point theorem (and the Nielsen fixed-point theorem) from algebraic topology give a way to count fixed points.

Fixed point of a group action


In algebra, for a group G acting on a set X with a group action ·, an element x in X is said to be a fixed point of g if g · x = x.

The fixed-point subgroup G^f of an automorphism f of a group G is the subgroup of G:

G^f = { g ∈ G | f(g) = g }.

Similarly, the fixed-point subring R^f of an automorphism f of a ring R is the subring of the fixed points of f, that is,

R^f = { r ∈ R | f(r) = r }.

In Galois theory, the set of the fixed points of a set of field automorphisms is a field called the fixed field of the set of automorphisms.

Topological fixed point property


A topological space X is said to have the fixed point property (FPP) if for any continuous function

f : X → X

there exists an x in X such that f(x) = x.

The FPP is a topological invariant, i.e., it is preserved by any homeomorphism. The FPP is also preserved by any retraction.

According to the Brouwer fixed-point theorem, every compact and convex subset of a Euclidean space has the FPP. Compactness alone does not imply the FPP, and convexity is not even a topological property, so it makes sense to ask how to topologically characterize the FPP. In 1932 Borsuk asked whether compactness together with contractibility could be a necessary and sufficient condition for the FPP to hold. The problem was open for 20 years until the conjecture was disproved by Kinoshita, who found an example of a compact contractible space without the FPP.[2]

Fixed points of partial orders


In domain theory, the notion and terminology of fixed points is generalized to a partial order. Let ≤ be a partial order over a set X and let f: X → X be a function over X. Then a prefixed point (also spelled pre-fixed point, sometimes shortened to prefixpoint or pre-fixpoint)[citation needed] of f is any p such that f(p) ≤ p. Analogously, a postfixed point of f is any p such that p ≤ f(p).[3] The opposite usage occasionally appears.[4] Malkis justifies the definition presented here as follows: "since f is before the inequality sign in the term f(x) ≤ x, such x is called a prefix point."[5] A fixed point is a point that is both a prefixpoint and a postfixpoint. Prefixpoints and postfixpoints have applications in theoretical computer science.[6]

Least fixed point


In order theory, the least fixed point of a function from a partially ordered set (poset) to itself is the fixed point which is less than each other fixed point, according to the order of the poset. A function need not have a least fixed point, but if it does then the least fixed point is unique.

One way to express the Knaster–Tarski theorem is to say that a monotone function on a complete lattice has a least fixed point that coincides with its least prefixpoint (and similarly its greatest fixed point coincides with its greatest postfixpoint).[7]

Fixed-point combinator


In combinatory logic for computer science, a fixed-point combinator is a higher-order function fix that returns a fixed point of its argument function, if one exists. Formally, if the function f has one or more fixed points, then

fix f = f(fix f).
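As an illustration (a minimal sketch in Python rather than in the lambda calculus; the names Z and fact_step are our own), a strict fixed-point combinator can be written and used to define factorial without explicit self-reference:

    # Z combinator: a fixed-point combinator usable in an eagerly evaluated language.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # A non-recursive "step": given an approximation of factorial, produce a better one.
    fact_step = lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1)

    factorial = Z(fact_step)
    print(factorial(5))   # 120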

Fixed-point logics


In mathematical logic, fixed-point logics are extensions of classical predicate logic that have been introduced to express recursion. Their development has been motivated by descriptive complexity theory and their relationship to database query languages, in particular to Datalog.

Applications


In many fields, equilibria or stability are fundamental concepts that can be described in terms of fixed points. Some examples follow.

from Grokipedia
In mathematics, a fixed point of a function $f: X \to X$ is an element $x \in X$ such that $f(x) = x$. This concept arises in various contexts, including iterations of mappings, solutions to equations, and equilibrium states in dynamical systems. Fixed point theory explores conditions guaranteeing the existence, uniqueness, and properties of such points for mappings on different spaces, encompassing metric, topological, and order-theoretic approaches. Central results include the Banach fixed-point theorem, which states that a contraction mapping on a complete metric space has a unique fixed point, enabling proofs of existence and uniqueness for solutions to differential equations and optimization problems.

Another foundational theorem is Brouwer's fixed point theorem, asserting that every continuous function from a nonempty, compact, convex subset of $\mathbb{R}^n$ to itself has at least one fixed point; this result underpins much of algebraic topology and has broad implications for proving non-retractability properties. Extensions of these theorems address more general settings, such as Schauder's fixed point theorem for compact operators on Banach spaces and Kakutani's fixed point theorem for upper hemicontinuous correspondences with convex values on compact convex sets in $\mathbb{R}^n$. In ordered structures, Tarski's fixed point theorem guarantees that an order-preserving map on a complete lattice has least and greatest fixed points. These theorems find extensive applications beyond pure mathematics, including the existence of Nash equilibria in game theory via Kakutani's theorem and Walrasian equilibria in economic models through Brouwer's result.

Fixed Points in Functions

Definition and Examples

In mathematics, a fixed point of a function $f: X \to X$, where $X$ is a set, is an element $x \in X$ such that $f(x) = x$. This concept arises in various branches of mathematics, including analysis, topology, and dynamical systems, where fixed points represent equilibria or invariant points under the mapping defined by $f$. Graphically, for functions from $\mathbb{R}$ to $\mathbb{R}$, fixed points correspond to the intersection points of the curve $y = f(x)$ with the line $y = x$. In higher dimensions, such as for $f: \mathbb{R}^n \to \mathbb{R}^n$, these intersections occur where the graph of $f$ meets the diagonal subspace $\mathbf{y} = \mathbf{x}$.

Basic examples illustrate the diversity of fixed points. The identity function $f(x) = x$ has every point in its domain as a fixed point. A constant function $f(x) = c$ for all $x \in X$, where $c \in X$, has exactly one fixed point at $x = c$. The linear function $f(x) = 2x$ on $\mathbb{R}$ has a unique fixed point at $x = 0$, since solving $2x = x$ yields $x = 0$. In contrast, the translation $f(x) = x + 1$ on $\mathbb{R}$ has no fixed points, as $x + 1 = x$ implies $1 = 0$, which is impossible. Functions can thus possess zero, one, multiple, or infinitely many fixed points, depending on their form and domain.

The concept of fixed points received early recognition in the 19th century, particularly through Henri Poincaré's work on periodic solutions to differential equations, where his 1886 development of the continuation invariance of the index served as a precursor to modern fixed point theory. Theorems like Brouwer's later guarantee the existence of fixed points for continuous functions on compact convex sets in Euclidean space.
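These algebraic examples can be checked mechanically (a minimal sketch; the use of SymPy here is our own illustration, not part of the original text):

    from sympy import symbols, Eq, solve

    x = symbols('x', real=True)

    # Fixed points are the real solutions of f(x) = x.
    print(solve(Eq(2 * x, x), x))            # [0]  -> unique fixed point of f(x) = 2x
    print(solve(Eq(x + 1, x), x))            # []   -> f(x) = x + 1 has no fixed point
    print(solve(Eq(x**2 - 3*x + 4, x), x))   # [2]  -> the quadratic example from earlier in the article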

Fixed Point Iteration

Fixed-point iteration is a fundamental numerical technique for approximating solutions to equations of the form $x = g(x)$, where $g$ is a continuous function. The method begins with an initial approximation $x_0$ and generates a sequence defined by $x_{n+1} = g(x_n)$ for $n = 0, 1, 2, \dots$, aiming for the sequence to converge to a fixed point $x^*$ satisfying $x^* = g(x^*)$. This approach transforms root-finding problems $f(x) = 0$ into fixed-point problems by rearranging to $g(x) = x - \alpha f(x)$ for a suitable scaling factor $\alpha > 0$, though the choice of $g$ significantly affects performance.

Convergence of the iteration requires that $g$ be a contraction mapping in a suitable interval around $x^*$. For a differentiable $g$, local convergence occurs if $|g'(x^*)| < 1$, ensuring the sequence approaches $x^*$ at a linear rate with asymptotic error constant $|g'(x^*)|$, meaning $|x_{n+1} - x^*| \approx |g'(x^*)| \, |x_n - x^*|$. The rate is determined by analyzing the error propagation: if $e_n = x_n - x^*$, then $e_{n+1} = g(x_n) - g(x^*) \approx g'(x^*) e_n$ by the mean value theorem. To accelerate this linear convergence, techniques like Steffensen's method modify the iteration to achieve quadratic convergence without requiring derivatives; it computes an accelerated step by evaluating $g$ twice per iteration (at $x_n$ and at $g(x_n)$), effectively mimicking the secant method for root-finding. Steffensen's method, originally proposed in 1933, updates via
$$p = x_n - \frac{(g(x_n) - x_n)^2}{g(g(x_n)) - 2g(x_n) + x_n}$$
before resuming fixed-point steps.

Error estimation in fixed-point iteration often relies on the Lipschitz constant $L$ of $g$, where $|g(x) - g(y)| \leq L |x - y|$ for all $x, y$ in the domain. If $L < 1$, the a posteriori error bound is $|x_n - x^*| \leq \frac{L}{1 - L} |x_n - x_{n-1}|$, providing a practical stopping criterion when consecutive iterates are sufficiently close. The corresponding a priori estimate is $|x_n - x^*| \leq \frac{L^n}{1 - L} |x_1 - x_0|$, highlighting the geometric decay of the error. These bounds assume $g$ maps an interval containing $x_0$ into itself, ensuring the iterates remain in the region where the Lipschitz bound holds.

A classic practical example is solving $x = \cos x$ (in radians), which has a unique real fixed point known as the Dottie number, approximately 0.739085. Using $g(x) = \cos x$ and starting from $x_0 = 0$, the iterates converge while alternating around the fixed point: $x_1 = 1.000000$, $x_2 \approx 0.540302$, $x_3 \approx 0.857553$, and so on, reaching double-precision accuracy after roughly 90 iterations, since $|g'(x^*)| = |\sin(x^*)| \approx 0.6736 < 1$. In programming languages such as Python, the implementation is a loop that stops when $|x_{n+1} - x_n| < 10^{-10}$, with some care for floating-point rounding near the fixed point.

Despite its simplicity, fixed-point iteration has limitations, including potential oscillation or divergence. Oscillation arises when $g'(x) < 0$ near $x^*$ with $|g'(x^*)| < 1$, causing the sequence to alternate around the fixed point, as in $g(x) = -0.5x + 1$, where the iterates zigzag toward $2/3$. Divergence occurs if $|g'(x^*)| > 1$, with errors amplified at each step; for instance, $g(x) = 1.5x$ starting from $x_0 = 1$ yields $x_1 = 1.5$, $x_2 = 2.25$, growing without bound. Even when $|g'(x^*)| = 1$, convergence may fail or be sublinear, necessitating alternative formulations of $g$. This local behavior aligns with the Banach fixed-point theorem, which guarantees convergence to a unique fixed point in complete metric spaces for global contractions.
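The cosine example can be coded directly (a minimal sketch following the description above; the function names, the tolerance of 1e-10, and the Steffensen variant shown for comparison are our own choices):

    import math

    def fixed_point(g, x0, tol=1e-10, max_iter=1000):
        """Plain fixed-point iteration x_{n+1} = g(x_n)."""
        x = x0
        for n in range(1, max_iter + 1):
            x_next = g(x)
            if abs(x_next - x) < tol:
                return x_next, n
            x = x_next
        raise RuntimeError("did not converge")

    def steffensen(g, x0, tol=1e-10, max_iter=100):
        """Steffensen's acceleration: roughly quadratic convergence without derivatives."""
        x = x0
        for n in range(1, max_iter + 1):
            g1 = g(x)
            g2 = g(g1)
            denom = g2 - 2 * g1 + x
            if denom == 0:           # already (numerically) at the fixed point
                return x, n
            x_next = x - (g1 - x) ** 2 / denom
            if abs(x_next - x) < tol:
                return x_next, n
            x = x_next
        raise RuntimeError("did not converge")

    print(fixed_point(math.cos, 0.0))   # (0.7390851332..., roughly 60 iterations)
    print(steffensen(math.cos, 0.0))    # (0.7390851332..., a handful of iterations)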

Fixed-Point Theorems

Theorems in Metric and Normed Spaces

In complete metric spaces, the Banach fixed-point theorem establishes the existence and uniqueness of fixed points for contraction mappings. Specifically, if $(X, d)$ is a complete metric space and $f: X \to X$ is a contraction, meaning there exists $k \in [0, 1)$ such that $d(f(x), f(y)) \leq k \, d(x, y)$ for all $x, y \in X$, then $f$ has a unique fixed point $x^* \in X$ satisfying $x^* = f(x^*)$. This theorem, first proved by Stefan Banach in 1922 as part of his doctoral work at the Lwów school of mathematics, an influential center of functional analysis in interwar Poland, provides a constructive method for locating the fixed point through fixed-point iteration.

The proof relies on successive approximations starting from an arbitrary $x_0 \in X$, defining the sequence $\{x_n\}$ by $x_{n+1} = f(x_n)$. The distances satisfy $d(x_{n+1}, x_n) \leq k^n d(x_1, x_0)$, so the partial sums $\sum_{i=0}^{n-1} d(x_{i+1}, x_i) \leq d(x_1, x_0)/(1 - k)$ are bounded, making $\{x_n\}$ a Cauchy sequence that converges to a limit $x^* \in X$ by completeness. Continuity of $f$ then implies $x^* = f(x^*)$, and uniqueness follows because two fixed points $x^*, y^*$ would satisfy $d(x^*, y^*) = d(f(x^*), f(y^*)) \leq k \, d(x^*, y^*)$, forcing $d(x^*, y^*) = 0$ since $k < 1$. The error estimate $d(x_n, x^*) \leq \frac{k^n}{1 - k} \, d(x_1, x_0)$ ensures linear convergence of the iteration, highlighting the theorem's algorithmic value.

A key application arises in ordinary differential equations (ODEs), where the Picard-Lindelöf theorem guarantees local existence and uniqueness of solutions to initial value problems $y' = f(t, y)$, $y(t_0) = y_0$, under Lipschitz continuity in $y$. This reformulates the ODE as finding a fixed point of the integral operator $T(y)(t) = y_0 + \int_{t_0}^t f(s, y(s)) \, ds$ on a suitable complete metric space of continuous functions, often a closed ball in the Banach space $C(I)$ with the sup norm, where $T$ acts as a contraction for small intervals. Banach's theorem thus underpins the iterative Picard method, converging to the unique solution and extending classical results from the late 19th century to infinite-dimensional settings.

Extending beyond contractions, the Schauder fixed-point theorem addresses continuous mappings on infinite-dimensional spaces. In a Banach space $E$, if $K$ is a nonempty, compact, convex subset and $f: K \to K$ is continuous, then $f$ has at least one fixed point. Proved by Juliusz Schauder in 1930, also from the Polish school, this result relies on compactness to apply finite-dimensional analogs such as Brouwer's theorem via approximation by finite-dimensional subspaces, ensuring the image remains contained within $K$. Unlike Banach's theorem, it guarantees existence but not uniqueness, and it applies to nonlinear problems in functional analysis, such as proving the existence of solutions to elliptic partial differential equations via compact embeddings.
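As an illustration of the Picard iteration just described (a minimal numerical sketch under assumptions of our own choosing: the test problem $y' = y$, $y(0) = 1$ on $[0, 1]$, a uniform grid, and trapezoidal quadrature), repeated application of the integral operator converges toward the exact solution $e^t$:

    import numpy as np

    # Picard iteration for y' = f(t, y), y(0) = y0:  y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds.
    f = lambda t, y: y           # test problem y' = y, exact solution exp(t)
    y0 = 1.0
    t = np.linspace(0.0, 1.0, 201)

    def cumulative_trapezoid(values, grid):
        """Cumulative trapezoidal integral of sampled values over grid, starting at 0."""
        dt = np.diff(grid)
        increments = 0.5 * (values[1:] + values[:-1]) * dt
        return np.concatenate(([0.0], np.cumsum(increments)))

    y = np.full_like(t, y0)      # initial guess y_0(t) = y0
    for _ in range(30):          # successive approximations (Picard steps)
        y = y0 + cumulative_trapezoid(f(t, y), t)

    print(abs(y[-1] - np.e))     # small error at t = 1, limited by the quadrature grid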

Theorems in Topology

In topology, fixed-point theorems establish the existence of points invariant under continuous mappings on certain spaces, often without requiring contractivity. A foundational result is the Brouwer fixed-point theorem, which asserts that any continuous function $f: D^n \to D^n$, where $D^n$ denotes the closed unit ball in $\mathbb{R}^n$, possesses at least one fixed point, i.e., there exists $x \in D^n$ such that $f(x) = x$. This theorem, proved by Luitzen E. J. Brouwer in 1911, relies on topological invariants; one classical proof employs simplicial homology to show that the mapping degree is nonzero, implying a fixed point, while an alternative combinatorial approach uses Sperner's lemma on triangulations of the simplex. Notably, the theorem guarantees existence but not uniqueness, as multiple fixed points may occur, and counterexamples exist for open or noncompact domains.

Generalizations extend Brouwer's result to broader classes of mappings and spaces. Kakutani's theorem, established in 1941, applies to set-valued maps: if $X$ is a compact convex subset of $\mathbb{R}^n$ and $\Phi: X \to 2^X$ is upper hemicontinuous with nonempty convex values, then $\Phi$ has a fixed point, meaning there exists $x \in X$ such that $x \in \Phi(x)$. This builds directly on Brouwer's framework by replacing single-valued continuity with hemicontinuity, preserving the existence guarantee through approximation by single-valued functions.

More advanced theorems incorporate algebraic topology to quantify fixed points. The Lefschetz fixed-point theorem, developed by Solomon Lefschetz in the 1930s, generalizes Brouwer's result to compact triangulable spaces: for a continuous self-map $f: X \to X$, the Lefschetz number
$$\Lambda(f) = \sum_{q=0}^{\dim X} (-1)^q \operatorname{tr}(f_{q*}),$$
where $f_{q*}$ is the induced map on the $q$-th homology group with rational coefficients and $\operatorname{tr}$ denotes the trace, has the property that $\Lambda(f) \neq 0$ implies the existence of a fixed point. This algebraic formulation uses traces on homology to detect nontrivial fixed-point indices, extending to manifolds and CW-complexes.

Nielsen fixed-point theory, initiated by Jakob Nielsen in 1927, provides a homotopy-invariant approach to counting essential fixed points. It partitions the fixed-point set of a map $f: X \to X$ into fixed-point classes via lifts to the universal covering space, defining the Nielsen number $N(f)$ as the number of essential classes; $N(f)$ lower bounds the minimal number of fixed points over all maps homotopic to $f$, and $N(f) > 0$ ensures existence. This theory refines Lefschetz's by incorporating path-lifting indices, and it is most informative for manifolds whose fundamental group is nontrivial.

An important variant for set-valued maps is the Eilenberg–Montgomery theorem from 1946, which states that if $Y$ is a compact acyclic absolute neighborhood retract (with vanishing Čech homology in positive dimensions) and $\Phi: Y \to 2^Y$ is an upper semicontinuous map with nonempty, compact, acyclic values, then $\Phi$ admits a fixed point. This result, proved using Čech homology techniques, applies for instance to maps preserving acyclicity that arise from group actions on spheres. Brouwer's theorem has profoundly influenced game theory and mathematical economics, notably in John Nash's 1951 proof of equilibrium existence via fixed points of best-response maps.
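For a concrete instance (a standard computation included here as an illustration, not taken from the original text), consider a continuous map $f: S^n \to S^n$. Only $H_0$ and $H_n$ are nonzero (both isomorphic to $\mathbb{Q}$), with traces $1$ and $\deg f$, so
$$\Lambda(f) = 1 + (-1)^n \deg f.$$
Hence $f$ must have a fixed point unless $\deg f = (-1)^{n+1}$; the antipodal map, which has degree $(-1)^{n+1}$ and no fixed points, shows that this exception is realized.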

Fixed Points in Algebraic Structures

Fixed Points of Group Actions

In group theory, a group $G$ acts on a set $X$ via a homomorphism from $G$ to the symmetric group on $X$. A point $x \in X$ is fixed by an element $g \in G$ if $g \cdot x = x$. The fixed point set of $g$ is $\operatorname{Fix}(g) = \{ x \in X \mid g \cdot x = x \}$. The fixed points of the entire action are the points invariant under all group elements, given by $\operatorname{Fix}(G) = \bigcap_{g \in G} \operatorname{Fix}(g)$.

Fixed points relate closely to the structure of orbits and stabilizers in group actions. The orbit of $x$ is $\operatorname{Orb}(x) = \{ g \cdot x \mid g \in G \}$, and the stabilizer $\operatorname{Stab}(x) = \{ g \in G \mid g \cdot x = x \}$ consists of the elements fixing $x$. By the orbit-stabilizer theorem, $|\operatorname{Orb}(x)| = |G| / |\operatorname{Stab}(x)|$, so points with large stabilizers (including the full group for fixed points) have trivial orbits of size 1. For example, consider the group of rotations $SO(2)$ acting on the plane $\mathbb{R}^2$ by rotations around the origin; the origin is fixed by every rotation, forming $\operatorname{Fix}(SO(2)) = \{ (0,0) \}$, while other points have orbits that are circles.

Burnside's lemma provides a method to count orbits using fixed points. For a finite group $G$ acting on a finite set $X$, the number of orbits is the average number of fixed points over all group elements:
$$|X/G| = \frac{1}{|G|} \sum_{g \in G} |\operatorname{Fix}(g)|.$$
This lemma, originally formulated in the context of counting distinct objects under symmetry, equates orbit enumeration to averaging fixed points and has applications in combinatorics and enumeration problems.

In representation theory, where a group $G$ acts linearly on a vector space $V$ over a field (forming a representation $\rho: G \to \mathrm{GL}(V)$), the fixed subspace is the set of vectors invariant under the action: $V^G = \{ v \in V \mid \rho(g)v = v \ \forall g \in G \}$. This is the subspace on which $G$ acts trivially, corresponding to the trivial representation component in the decomposition of $V$ into irreducibles. Its dimension can be computed using character theory as the inner product $\langle \chi_V, 1 \rangle$, where $\chi_V$ is the character of the representation and $1$ is the trivial character. For instance, in the regular representation of a finite group, the fixed subspace has dimension 1, spanned by the sum of the basis elements.
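A small brute-force check of Burnside's lemma (our own example: 2-colourings of the vertices of a square under the rotation group $C_4$):

    from itertools import product
    from fractions import Fraction

    # All 2-colourings of the 4 vertices of a square, acted on by the rotation group C_4.
    colourings = list(product([0, 1], repeat=4))

    def rotate(c, k):
        """Rotate a colouring c of the square's vertices by k quarter turns."""
        return c[k:] + c[:k]

    # Burnside: number of orbits = average number of colourings fixed by each rotation.
    fixed_counts = [sum(1 for c in colourings if rotate(c, k) == c) for k in range(4)]
    print(fixed_counts)                    # [16, 2, 4, 2]
    print(Fraction(sum(fixed_counts), 4))  # 6 distinct colourings up to rotation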

Fixed Points in Partial Orders

In a partially ordered set (poset) $(P, \leq)$, consider a function $f: P \to P$. An element $x \in P$ is called a prefixpoint of $f$ if $f(x) \leq x$, and a postfixpoint if $f(x) \geq x$. A fixed point satisfies both conditions simultaneously, i.e., $f(x) = x$. These notions extend the standard fixed point concept to ordered structures, where the order captures approximations or inclusions relevant to iterative processes.

The Knaster–Tarski theorem provides a foundational result for the existence of fixed points in such settings. Specifically, if $(L, \leq)$ is a complete lattice and $f: L \to L$ is a monotone function (i.e., $x \leq y$ implies $f(x) \leq f(y)$), then $f$ has a least fixed point, given by the infimum of the set of all prefixpoints of $f$, and a greatest fixed point, given by the supremum of the set of all postfixpoints of $f$. Moreover, the collection of all fixed points of $f$ itself forms a complete lattice under the induced order. This theorem, originally proved by Alfred Tarski, guarantees existence without requiring any continuity assumption beyond monotonicity.

The least fixed point can be constructed via iterative application starting from the bottom element $\bot$ of the lattice: define the sequence $x_0 = \bot$ and $x_{n+1} = f(x_n)$ for $n \in \mathbb{N}$; for Scott-continuous $f$ the least fixed point is the supremum $\bigvee_{n < \omega} x_n$ (for merely monotone $f$, transfinite iteration may be needed). Dually, the greatest fixed point arises from iteration starting from the top element $\top$, yielding the infimum of the descending chain. These constructions leverage the completeness of the lattice to ensure convergence to fixed points.

A prominent example arises in denotational semantics, where complete partial orders (cpos) model computational domains and monotone semantic functions interpret program constructs; the least fixed point of such a function defines the meaning of recursive definitions, capturing all finite approximations of program behavior. This order-theoretic framework underpins applications in program semantics, as explored in later sections.
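A minimal computational sketch of the bottom-up construction (our own example: subsets of {0, ..., 9} ordered by inclusion, with a monotone map whose least fixed point is the set of even numbers below 10):

    def lfp(f, bottom=frozenset()):
        """Least fixed point of a monotone map on a finite powerset lattice,
        obtained by iterating from the bottom element (Kleene/Knaster-Tarski construction)."""
        x = bottom
        while True:
            x_next = f(x)
            if x_next == x:
                return x
            x = x_next

    # Monotone on subsets of {0,...,9}: always contains 0 and is closed under adding 2.
    f = lambda s: frozenset({0}) | frozenset(n + 2 for n in s if n + 2 < 10)

    print(sorted(lfp(f)))   # [0, 2, 4, 6, 8] -- the least fixed point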

Advanced Fixed Point Concepts

Topological Fixed Point Property

In topology, a topological space $X$ has the fixed-point property (FPP) if every continuous self-map $f: X \to X$ admits at least one fixed point, that is, there exists $x \in X$ such that $f(x) = x$. This property captures an intrinsic stability of the space under continuous deformations, ensuring that no self-map can "move everything away from itself" without leaving some point unchanged. Spaces with the FPP are of interest in continuum theory, as they generalize behaviors observed in convex sets.

Brouwer's fixed-point theorem establishes that the closed unit ball in Euclidean space has the FPP, as any continuous self-map of the ball must fix some point. This extends to simplices, which are homeomorphic to convex sets and thus inherit the FPP via Brouwer's result. Similarly, tree-like continua (compact connected metric spaces that are inverse limits of trees) often possess the FPP, particularly for maps homotopic to the identity, as shown in characterizations of their deformation retract properties. However, not all tree-like continua have the full FPP for arbitrary continuous maps, with counterexamples arising in more complex inverse limit constructions.

A notable counterexample to the conjecture that all contractible continua have the FPP is provided by Kinoshita's construction of a two-dimensional contractible continuum in $\mathbb{R}^3$ that admits a fixed-point-free continuous self-map. This continuum is not an absolute retract, highlighting that contractibility alone does not guarantee the FPP and underscoring the role of local structure in fixed-point guarantees. Such examples demonstrate the subtlety of the property in non-convex spaces.

The FPP is preserved under retractions: if $A$ is a retract of a space $X$ with the FPP, then $A$ also has the FPP, as any self-map on $A$ extends to one on $X$ via the retraction and inclusion. In particular, retracts of absolute neighborhood retracts (ANRs) like balls or simplices inherit the FPP when the ambient ANR possesses it. This provides a partial topological characterization of the FPP, though a complete one remains elusive for general continua. Ongoing research addresses the FPP for non-compact spaces, where the compactness assumptions of classical theorems fail; post-2020 studies explore nonexpansive mappings in spaces like $L^1$ and limits in categories to extend fixed-point guarantees beyond compact settings.

Fixed-Point Combinators and Logics

In untyped lambda calculus, fixed-point combinators enable the definition of recursive functions without explicit self-reference, by constructing terms that satisfy the equation $Y f = f (Y f)$ for any function $f$. The canonical example is the Y combinator, defined as
$$Y = \lambda f.\ (\lambda x.\ f (x\, x))\ (\lambda x.\ f (x\, x)),$$
which allows the encoding of recursive procedures such as the factorial function: applying $Y$ to a non-recursive step functional yields the full recursive version. This construction demonstrates the expressive power of the untyped lambda calculus for general recursion, as every lambda term $f$ admits a fixed point.

In computability theory, a fixed-point construction based on diagonalization enables self-referential programs: for any computable transformation $\phi$ of programs, there exists a program whose computation mimics its own transform under $\phi$. This arises from the diagonal argument for Turing machines, where a machine $e$ can be built to compute $\phi_e(e)$, the application of $\phi$ to itself, facilitating quines and undecidability proofs. Kleene's work on lambda-definability further links this to Church's system, proving the equivalence of general recursiveness and lambda-definability via fixed points.

Fixed-point logics extend classical logics with operators for least and greatest fixed points, capturing inductive definitions and infinite behaviors in verification. The propositional μ-calculus, introduced by Dexter Kozen in 1983, augments basic modal logic with a least fixed-point operator $\mu X.\ \phi(X)$, where $X$ is a proposition variable, defining the smallest set satisfying a monotone formula $\phi$; dual greatest fixed points use $\nu$. This logic expresses properties such as reachability in transition systems via fixed points of semantic operators, subsuming many temporal logics and enabling model checking.

In database theory, fixed points underpin recursive query languages like Datalog, where programs are sets of Horn clauses evaluated bottom-up to compute the least fixed point of a monotone operator on relations. For instance, transitive-closure queries iteratively expand paths until stabilization, converging to the fixed point representing all reachable pairs; this semantics ensures termination for stratified programs. Aho and Ullman proposed augmenting relational query languages with fixed-point operators to handle such recursion in their work from the late 1970s, while Ashok Chandra and David Harel formalized the corresponding class of computable queries within this framework in 1982.

The resolution of Zeno's paradoxes through infinite iterations also evokes fixed-point ideas: the dichotomy paradox, requiring traversal of infinitely many subintervals to cover a distance, converges to a fixed total via the limit of partial sums $\sum_{n=1}^{\infty} 1/2^n = 1$, attained in finite time under constant velocity. Similarly, Achilles and the tortoise is resolved when the infinite catch-up steps accumulate to a fixed meeting point, aligning supertasks with convergent limits in analysis.
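A small sketch of the bottom-up evaluation just described (our own example graph and relation names; the immediate-consequence operator for a reachability program is iterated until the relation stops growing):

    # Edge facts of a small directed graph (illustrative data).
    edges = {(1, 2), (2, 3), (3, 4)}

    def immediate_consequence(reach):
        """One bottom-up step for:  reach(x,y) :- edge(x,y).   reach(x,y) :- reach(x,z), edge(z,y)."""
        derived = set(edges)
        derived |= {(x, y) for (x, z) in reach for (w, y) in edges if z == w}
        return reach | derived

    # Iterate from the empty relation to the least fixed point.
    reach = set()
    while True:
        updated = immediate_consequence(reach)
        if updated == reach:
            break
        reach = updated

    print(sorted(reach))   # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]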

Applications of Fixed Points

In Analysis and Dynamical Systems

In the context of ordinary differential equations (ODEs), fixed points serve as equilibria, representing rest points where the system's dynamics halt. For an autonomous ODE of the form $\dot{x} = f(x)$ in $\mathbb{R}^n$, a fixed point $x^*$ satisfies $f(x^*) = 0$, meaning trajectories starting at $x^*$ remain stationary. This perspective arises naturally via a change of variables, where the deviation $y = x - x^*$ transforms the equation to $\dot{y} = f(y + x^*)$, centering the analysis around the equilibrium.

Stability analysis of these fixed points classifies their behavior under perturbations, typically via linearization. A fixed point is hyperbolic if all eigenvalues of the Jacobian matrix $Df(x^*)$ have nonzero real parts; those with negative real parts indicate attracting directions, while positive ones indicate repelling directions. Lyapunov exponents provide a quantitative measure in both continuous and discrete systems: at a fixed point, they correspond to the real parts of the eigenvalues (or the logarithms of the eigenvalue moduli for maps), with all negative exponents signifying an attracting (stable) fixed point and at least one positive exponent indicating repelling (unstable) behavior. For instance, in low-dimensional systems, a sink has all exponents negative, ensuring nearby trajectories converge exponentially.

Bifurcations describe qualitative changes in fixed points as system parameters vary, altering their number or stability. In a saddle-node bifurcation, two fixed points, one stable and one unstable, collide and annihilate as the parameter crosses a critical value, often modeled by the normal form $\dot{x} = \mu + x^2$, where $\mu$ is the bifurcation parameter; for $\mu < 0$, two real fixed points exist, merging at $\mu = 0$. This phenomenon is generic in one-dimensional systems and extends to higher dimensions, marking transitions from stable equilibria to oscillatory or chaotic dynamics.

In chaos theory, fixed points play a role within strange attractors, fractal structures where trajectories exhibit dense orbits sensitive to initial conditions. These attractors, such as the Lorenz attractor, contain unstable fixed points embedded in a bounded region, with positive Lyapunov exponents quantifying the exponential divergence of nearby paths, contrasting with the convergence to simple fixed points or limit cycles. The presence of at least one positive exponent alongside negative ones ensures the attractor's complexity, as seen in dissipative systems where volume contracts overall but stretches locally near fixed points.

The Poincaré-Bendixson theorem provides a foundational result for planar dynamical systems, restricting the possible long-term behaviors. For a $C^1$ flow in $\mathbb{R}^2$ confined to a compact invariant set without fixed points, the $\omega$-limit set of any trajectory must be a periodic orbit (a limit cycle); if fixed points are present, the limit set is either a fixed point, a periodic orbit, or a graphic connecting them. This theorem underscores the absence of chaos in two dimensions, as strange attractors cannot form without higher-dimensional embedding. Existence of fixed points in such systems can often be established using the Banach fixed-point theorem applied to equivalent integral equations, guaranteeing unique solutions under contractive conditions.
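A tiny numerical illustration of the saddle-node normal form above (our own sketch: we locate the fixed points of $\dot{x} = \mu + x^2$ for one parameter value below the bifurcation and classify them by the sign of $f'(x^*) = 2x^*$):

    import numpy as np

    mu = -1.0                          # parameter value below the bifurcation at mu = 0
    f = lambda x: mu + x**2            # right-hand side of x' = mu + x^2
    df = lambda x: 2 * x               # its derivative (1x1 Jacobian)

    fixed_points = np.array([-np.sqrt(-mu), np.sqrt(-mu)])   # solutions of mu + x^2 = 0
    for xstar in fixed_points:
        stability = "stable" if df(xstar) < 0 else "unstable"
        print(f"x* = {xstar:+.3f}: f'(x*) = {df(xstar):+.3f} -> {stability}")
    # x* = -1.000: f'(x*) = -2.000 -> stable
    # x* = +1.000: f'(x*) = +2.000 -> unstable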

In Computer Science and Economics

In denotational semantics, the meaning of programs, particularly those involving recursion, is defined as the fixed point of a monotone operator on a complete partial order (cpo) of semantic domains. This approach, pioneered by Dana Scott and Christopher Strachey, models recursive definitions by applying the least fixed-point theorem to ensure the semantics of a recursive function is the smallest solution satisfying its own equation. For instance, in Scott-Strachey semantics, the denotation of a recursive procedure is computed as the least fixed point of a functional that maps approximate solutions to more precise ones, enabling rigorous proofs of program equivalence and termination properties.

In machine learning, fixed points arise in the training dynamics of generative adversarial networks (GANs), where the generator and discriminator reach an equilibrium that corresponds to a fixed point of their joint optimization process. This equilibrium ensures the generator produces data indistinguishable from the true distribution, as formalized in the original GAN framework. Post-2020 research has extended this to transformer architectures, where self-attention mechanisms exhibit attractor-like fixed points that stabilize transient memories during processing, enhancing the model's ability to capture long-range dependencies without explicit recurrence.

In game theory, a Nash equilibrium is defined as a strategy profile that forms a fixed point of the best-response mapping, where no player can improve their payoff by unilaterally deviating. The existence of such equilibria in finite games is guaranteed by Kakutani's fixed-point theorem applied to the best-response correspondence on the space of mixed strategies. This formulation, introduced by John Nash, underpins analyses of strategic interactions in non-cooperative settings.

In economics, the Arrow-Debreu model establishes the existence of general equilibrium as a fixed point of the excess demand function, where prices adjust to clear all markets simultaneously. Under assumptions of convex preferences and continuous production sets, this equilibrium balances supply and demand across commodities and time states, as proven using fixed-point theorems on compact convex sets.

Fixed-point semantics also underpin recursive queries in deductive databases, where inflationary fixed points compute the model for queries with negation or aggregation by iteratively expanding the result set until no new tuples are added. This approach, adapted to relational systems such as SQL extensions, ensures termination and correctness for stratified programs, mirroring the monotone fixed-point computation in Datalog. Fixed-point logics further support verification in these systems by expressing recursive properties as fixed points of transition relations.
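As a toy illustration of an equilibrium computed as a fixed point (entirely our own example: a single market with linear demand and supply, and a simple price-adjustment map whose fixed point clears the market):

    # Toy tatonnement: excess demand z(p) = demand(p) - supply(p) for one good.
    demand = lambda p: 10.0 - p
    supply = lambda p: 2.0 * p
    z = lambda p: demand(p) - supply(p)        # the market clears where z(p) = 0

    alpha = 0.1                                # adjustment speed
    g = lambda p: p + alpha * z(p)             # price update; fixed points of g are exactly the zeros of z

    p = 1.0                                    # initial price guess
    for _ in range(200):
        p = g(p)

    print(round(p, 6))                         # ~3.333333, the market-clearing price (10 - p = 2p)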
