Contraction mapping
from Wikipedia

In mathematics, a contraction mapping, or contraction or contractor, on a metric space (M, d) is a function f from M to itself, with the property that there is some real number 0 ≤ k < 1 such that for all x and y in M,

d(f(x), f(y)) ≤ k d(x, y).

The smallest such value of k is called the Lipschitz constant of f. Contractive maps are sometimes called Lipschitzian maps. If the above condition is instead satisfied for k ≤ 1, then the mapping is said to be a non-expansive map.

More generally, the idea of a contractive mapping can be defined for maps between metric spaces. Thus, if (M, d) and (N, d′) are two metric spaces, then f : M → N is a contractive mapping if there is a constant 0 ≤ k < 1 such that d′(f(x), f(y)) ≤ k d(x, y) for all x and y in M.

Every contraction mapping is Lipschitz continuous and hence uniformly continuous (for a Lipschitz continuous function, the constant k is no longer necessarily less than 1).

A contraction mapping has at most one fixed point. Moreover, the Banach fixed-point theorem states that every contraction mapping on a non-empty complete metric space has a unique fixed point, and that for any x in M the iterated function sequence x, f (x), f (f (x)), f (f (f (x))), ... converges to the fixed point. This concept is very useful for iterated function systems where contraction mappings are often used. Banach's fixed-point theorem is also applied in proving the existence of solutions of ordinary differential equations, and is used in one proof of the inverse function theorem.[1]
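
As a minimal illustrative sketch (the particular contraction f(x) = x/2 + 1 and the stopping tolerance are chosen here for illustration, not taken from the article), the fixed point guaranteed by Banach's theorem can be computed by simply iterating f until successive values stop changing:

```python
# Sketch of fixed-point iteration for a contraction: stop when successive
# iterates are within a tolerance; Banach's theorem guarantees convergence.
def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# f(x) = x/2 + 1 is a contraction on R with constant 1/2; its fixed point is 2.
print(fixed_point(lambda x: 0.5 * x + 1.0, 100.0))  # -> 2.0
```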

Contraction mappings play an important role in dynamic programming problems.[2][3]

Firmly non-expansive mapping


A non-expansive mapping with k = 1 can be generalized to a firmly non-expansive mapping in a Hilbert space H if the following holds for all x and y in H:

‖f(x) − f(y)‖² ≤ ⟨x − y, f(x) − f(y)⟩,

where d(x, y) = ‖x − y‖. This is a special case of α-averaged nonexpansive operators with α = 1/2.[4] A firmly non-expansive mapping is always non-expansive, via the Cauchy–Schwarz inequality.

The class of firmly non-expansive maps is closed under convex combinations, but not compositions.[5] This class includes proximal mappings of proper, convex, lower-semicontinuous functions, hence it also includes orthogonal projections onto non-empty closed convex sets. The class of firmly nonexpansive operators is equal to the set of resolvents of maximally monotone operators.[6] Surprisingly, while iterating non-expansive maps has no guarantee to find a fixed point (e.g. multiplication by −1), firm non-expansiveness is sufficient to guarantee global convergence to a fixed point, provided a fixed point exists. More precisely, if Fix f := {x ∈ H | f(x) = x} ≠ ∅, then for any initial point x0 ∈ H, iterating x_{n+1} = f(x_n) yields convergence to a fixed point z ∈ Fix f. This convergence might be weak in an infinite-dimensional setting.[5]

Subcontraction map


A subcontraction map or subcontractor is a map f on a metric space (M, d) such that

d(f(x), f(y)) ≤ d(x, y);
d(f(f(x)), f(x)) < d(f(x), x) unless x = f(x).

If the image of a subcontractor f is compact, then f has a fixed point.[7]

Locally convex spaces


In a locally convex space (E, P) with topology given by a set P of seminorms, one can define for any p ∈ P a p-contraction as a map f such that there is some kp < 1 such that p(f(x) − f(y)) ≤ kp p(x − y). If f is a p-contraction for all p ∈ P and (E, P) is sequentially complete, then f has a fixed point, given as limit of any sequence xn+1 = f(xn), and if (E, P) is Hausdorff, then the fixed point is unique.[8]

from Grokipedia
In mathematics, a contraction mapping, also known as a contractive mapping, is a function $f: X \to X$ defined on a metric space $(X, d)$ such that there exists a constant $k \in [0, 1)$ satisfying $d(f(x), f(y)) \leq k \cdot d(x, y)$ for all $x, y \in X$, meaning it uniformly reduces distances between points by a factor strictly less than 1. The Banach fixed-point theorem, the cornerstone result associated with contraction mappings, asserts that if $X$ is a non-empty complete metric space and $f$ is a contraction on $X$, then $f$ has a unique fixed point $x^* \in X$ where $f(x^*) = x^*$, and the iterative sequence defined by $x_{n+1} = f(x_n)$ for any initial $x_0 \in X$ converges to $x^*$ at a geometric rate bounded by $k^n / (1 - k)$. This theorem guarantees not only existence and uniqueness but also an effective constructive method for approximating the fixed point via successive iterations.

The concept originated with Stefan Banach in his 1922 doctoral dissertation published in Fundamenta Mathematicae, where it was presented as a key tool for solving integral equations in abstract sets, though similar ideas appeared earlier in works by mathematicians such as Émile Picard for solving differential equations. Independently generalized by Renato Caccioppoli in 1930, it is sometimes called the Banach–Caccioppoli theorem, and its influence extended Banach's foundational contributions to functional analysis, formalized in his 1932 book Théorie des opérations linéaires. Over the century since its inception, the theorem has seen numerous extensions, such as those by Kolmogorov and Fomin in the 1950s–1960s relaxing the contraction condition to apply to iterates, and further generalizations to non-continuous or probabilistic settings, underscoring its robustness and broad applicability.

Contraction mappings find extensive use across pure and applied mathematics, including proving local existence and uniqueness of solutions to ordinary differential equations via Picard iteration, where the associated integral operator is shown to be a contraction under Lipschitz conditions on the right-hand side. They also underpin numerical methods such as Newton's method for root-finding, which converges quadratically under suitable conditions akin to contractions, and appear in integral equations of the second kind, where successive approximations yield solutions. Beyond pure mathematics, applications extend to computer science and engineering, such as Google's PageRank algorithm modeling web link structures as contractions on graph spaces, and fractal geometry, where iterated function systems generate self-similar sets via contractive transformations. These properties make contraction mappings indispensable for establishing convergence in iterative algorithms and solving nonlinear problems in diverse fields.

Fundamentals

Definition

In a metric space $(X, d)$, a mapping $f: X \to X$ is called a contraction mapping (or simply a contraction) if there exists a constant $k \in [0, 1)$ such that $d(f(x), f(y)) \leq k \cdot d(x, y)$ for all $x, y \in X$. This inequality implies that the mapping reduces distances between points by a uniform factor less than 1 across the entire space. The constant $k$ is known as the Lipschitz constant of the mapping, and the condition $k < 1$ distinguishes strict contractions from non-strict ones (where $k \leq 1$), with the former ensuring a uniform shrinkage that is the focus of classical theory. In contrast to pointwise contractions, where the contraction factor may vary by location or apply only locally, the standard definition requires a single $k < 1$ to hold globally and uniformly for all pairs of points. The concept of contraction mappings was introduced by Stefan Banach in 1922, originally in the context of solving functional equations within abstract sets.

Basic Properties

Contraction mappings form a subclass of Lipschitz continuous mappings, distinguished by having a Lipschitz constant $k$ satisfying $0 \le k < 1$. This stricter condition ensures that distances between images of points are uniformly reduced relative to their original distances, unlike general Lipschitz mappings where the constant can be any nonnegative number (including values greater than or equal to 1). Every contraction mapping is continuous. Suppose $f: (X, d) \to (X, d)$ is a contraction with constant $k < 1$; the case $k = 0$ is trivial (the mapping is constant), so assume $k > 0$. To verify continuity at an arbitrary point $x \in X$, fix $\varepsilon > 0$ and set $\delta = \varepsilon / k > 0$. Then, for any $y \in X$ with $d(y, x) < \delta$,
$$d(f(y), f(x)) \le k \, d(y, x) < k \cdot \frac{\varepsilon}{k} = \varepsilon.$$
Thus, $f$ is continuous at $x$. Since the choice of $\delta$ depends only on $\varepsilon$ and not on $x$, $f$ is in fact uniformly continuous on $X$.

Fixed-Point Theorems

Banach Fixed-Point Theorem

The Banach fixed-point theorem, also known as the contraction mapping theorem or Banach contraction principle, provides a fundamental guarantee of existence and uniqueness for fixed points of certain mappings in metric spaces. Specifically, if $(X, d)$ is a complete metric space and $f: X \to X$ is a contraction mapping with Lipschitz constant $k \in [0, 1)$, then $f$ has a unique fixed point $x^* \in X$ satisfying $f(x^*) = x^*$. Moreover, for any initial point $x_0 \in X$, the sequence of Picard iterates defined by $x_{n+1} = f(x_n)$ for $n \geq 0$ converges to $x^*$ in the metric $d$. A key aspect of the theorem is the explicit rate of convergence provided by the error estimate
$$d(x_n, x^*) \leq \frac{k^n}{1 - k} \, d(x_0, x_1)$$
for all $n \geq 0$. This bound highlights the geometric convergence of the iterates, with the factor $k^n$ ensuring rapid approach to the fixed point as $n$ increases, provided $k$ is sufficiently small. The theorem's strength lies in its constructive nature, allowing the fixed point to be approximated iteratively without additional assumptions beyond the contraction property. The theorem requires two essential conditions: the completeness of the metric space $(X, d)$, which ensures the Cauchy sequence of iterates converges, and the global contraction property, meaning $d(f(x), f(y)) \leq k \, d(x, y)$ holds for all $x, y \in X$ with the same $k < 1$. Without completeness, counterexamples exist where contractions fail to have fixed points, such as in the rationals with the standard metric. Similarly, relaxing the global $k < 1$ to local contractions may yield fixed points but not necessarily uniqueness or guaranteed convergence via iterates. Named after the Polish mathematician Stefan Banach, the theorem was first proved in 1922 as part of his doctoral dissertation, generalizing earlier fixed-point results by Henri Poincaré in 1886 and Joseph Liouville on specific analytic mappings. Banach's formulation abstracted these ideas to general complete metric spaces, establishing a cornerstone of modern functional analysis and the study of nonlinear problems.
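
The error estimate above is easy to check numerically. Below is a minimal sketch (the contraction $f(x) = x/2 + 1$ on $\mathbb{R}$, with $k = 1/2$ and fixed point $x^* = 2$, is an illustrative choice, not taken from the text) comparing the actual error of the Picard iterates with the a priori bound $k^n d(x_0, x_1)/(1 - k)$.

```python
# Sketch: verify the a priori error bound of the Banach fixed-point theorem
# for the contraction f(x) = x/2 + 1 on the real line (k = 1/2, fixed point 2).
k = 0.5
f = lambda x: 0.5 * x + 1.0
x_star = 2.0

x0 = 10.0
d01 = abs(f(x0) - x0)          # d(x0, x1)

x = x0
for n in range(10):
    error = abs(x - x_star)                 # actual distance to the fixed point
    bound = k**n / (1.0 - k) * d01          # a priori estimate k^n/(1-k) * d(x0, x1)
    assert error <= bound + 1e-12
    print(f"n={n}: error={error:.3e}, bound={bound:.3e}")
    x = f(x)
```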

Proof of the Banach Fixed-Point Theorem

The proof of the Banach fixed-point theorem proceeds by constructing a sequence of iterates and demonstrating its convergence to a unique fixed point in the complete metric space $(X, d)$, where $f: X \to X$ is a contraction mapping with constant $k < 1$. Begin by selecting an arbitrary initial point $x_0 \in X$. Define the sequence of Picard iterates recursively as $x_{n+1} = f(x_n)$ for $n \geq 0$. This sequence is well-defined since $f$ maps $X$ into itself. To establish convergence, first show that the sequence is Cauchy. For integers $m > n \geq 0$,
$$d(x_m, x_n) \leq \sum_{i=n}^{m-1} d(x_{i+1}, x_i) \leq \sum_{i=n}^{m-1} k^i \, d(x_1, x_0) \leq k^n \, d(x_1, x_0) \sum_{i=0}^{\infty} k^i = \frac{k^n}{1 - k} \, d(x_1, x_0),$$
where the middle inequality follows from the contraction property $d(f(x), f(y)) \leq k \, d(x, y)$ applied successively, and the final bound uses the geometric series sum $\sum_{i=0}^{\infty} k^i = 1/(1-k)$, valid since $0 \le k < 1$. As $n \to \infty$, the right-hand side tends to 0 independently of $m > n$, so $\{x_n\}$ is a Cauchy sequence. Since $X$ is complete, the Cauchy sequence $\{x_n\}$ converges to some limit $x^* \in X$. To verify that $x^*$ is a fixed point, note that $f$ is continuous as a contraction (with $d(f(x), f(y)) \leq k \, d(x, y)$ implying uniform continuity). Thus, taking the limit in $x_{n+1} = f(x_n)$ yields $x^* = f(x^*)$. For uniqueness, suppose $y^* \in X$ is another fixed point, so $f(y^*) = y^*$. Then $d(x^*, y^*) = d(f(x^*), f(y^*)) \leq k \, d(x^*, y^*)$. Since $k < 1$, it follows that $(1 - k) \, d(x^*, y^*) \leq 0$, implying $d(x^*, y^*) = 0$ and hence $x^* = y^*$. The convergence rate of the iterates to the fixed point is given by $d(x_n, x^*) \leq \frac{k^n}{1 - k} \, d(x_1, x_0)$, derived by letting $m \to \infty$ in the Cauchy estimate, or directly from the tail bound $d(x^*, x_n) \leq \sum_{i=n}^{\infty} d(x_{i+1}, x_i) \leq \frac{k^n}{1 - k} \, d(x_1, x_0)$.

Examples and Applications

Metric Space Examples

A classic example of a contraction mapping occurs in the complete metric space $(\mathbb{R}, d)$, where $d(x, y) = |x - y|$, with the function $f(x) = x/2$. This mapping satisfies the contraction condition with constant $k = 1/2 < 1$, since $|f(x) - f(y)| = |(x - y)/2| = (1/2)|x - y|$ for all $x, y \in \mathbb{R}$. The unique fixed point is $x = 0$, obtained by solving $x = x/2$. Starting from any initial $x_0 \in \mathbb{R}$, the iterates $x_{n+1} = f(x_n)$ converge geometrically to 0 at rate $k = 1/2$, as the error satisfies $|x_n - 0| \leq (1/2)^n |x_0|$. Another illustrative example is the function $f(x) = \cos x$ on the closed interval $[-1, 1] \subset \mathbb{R}$, equipped with the standard metric $d(x, y) = |x - y|$. Here, $f$ maps $[-1, 1]$ into itself, since $\cos x \in [-1, 1]$ for $x \in [-1, 1]$. It is a contraction with constant $k = \sin 1 \approx 0.8415 < 1$, derived from the mean value theorem: $|f(x) - f(y)| = |{-\sin c}| \cdot |x - y|$ for some $c$ between $x$ and $y$, and $\sup_{[-1,1]} |\sin c| = \sin 1$. The equation $x = \cos x$ thus has a unique solution in $[-1, 1]$, which has no closed-form expression but can be approximated iteratively, converging to approximately 0.739085. In the space of bounded continuous functions $C[0, 1]$ with the supremum norm $\|y\|_\infty = \sup_{x \in [0,1]} |y(x)|$, consider the integral equation $y(x) = g(x) + \int_0^1 K(x, t) y^2(t) \, dt$, where $g \in C[0, 1]$, $K \in C([0, 1] \times [0, 1])$, and $M = \max_{(x,t) \in [0,1]^2} |K(x, t)|$. Define the operator $T(y)(x) = g(x) + \int_0^1 K(x, t) y^2(t) \, dt$; under the condition $\|g\|_\infty < 1/(8M)$, $T$ maps the closed ball of radius $r = 1/(4M)$ into itself and is a contraction with constant $k = 2Mr < 1$. The unique fixed point in this ball solves the equation, with no explicit closed form in general, but the contraction ensures iterative convergence in the sup norm.
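
The $x = \cos x$ example is easy to reproduce numerically; the following minimal sketch (standard library only, iteration count chosen for illustration) iterates $f(x) = \cos x$ from $x_0 = 1$ and recovers the fixed point near 0.739085.

```python
import math

# Iterate f(x) = cos x on [-1, 1]; the contraction constant is sin(1) ~ 0.8415,
# so convergence is geometric but fairly slow (~100 iterations suffice here).
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)  # ~0.7390851, the unique solution of x = cos x in [-1, 1]
```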

Real-World Applications

Contraction mappings play a pivotal role in numerical analysis through the method of successive approximations, which iteratively solves nonlinear equations by finding fixed points of a mapping $g$ where $x = g(x)$. For root-finding problems like $f(x) = 0$, the iteration $x_{k+1} = g(x_k)$ converges to the root if $g$ is a contraction on a complete metric space, ensuring linear convergence with rate determined by the contraction constant $q < 1$. In particular, Newton's method, defined by $x_{k+1} = x_k - f(x_k)/f'(x_k)$, can be analyzed as a fixed-point iteration under contraction conditions; near a simple root $x^*$ where $f(x^*) = 0$ and $f'(x^*) \neq 0$, the associated mapping exhibits contraction properties that guarantee local convergence, and the convergence is in fact quadratic, assuming $f$ is twice continuously differentiable in a neighborhood of $x^*$ and the initial guess is sufficiently close to $x^*$. The error satisfies $e_{k+1} \approx \frac{|f''(x^*)|}{2 |f'(x^*)|} e_k^2$. The simplified variant, using a fixed initial Jacobian $A \approx f'(x^{(0)})$ in $x_{k+1} = x_k - A^{-1} f(x_k)$, converges linearly if the mapping is a contraction with constant $q < 1$ in a ball around the solution, provided second derivatives are bounded and the initial guess is sufficiently close. In the theory of differential equations, contraction mappings underpin the Picard–Lindelöf theorem, which establishes local existence and uniqueness for solutions to initial value problems $y' = f(x, y)$, $y(x_0) = y_0$. The theorem considers the integral operator $T[y](x) = y_0 + \int_{x_0}^x f(t, y(t)) \, dt$ on the space of continuous functions over a closed interval $[x_0 - h, x_0 + h]$, where $h > 0$ is small enough that $T$ becomes a contraction with constant $k = L h < 1$, with $L$ bounding $|\partial f / \partial y|$. The Picard iteration $y_{n+1} = T[y_n]$, starting from $y_0(x) = y_0$, then converges uniformly to the unique solution on this interval. In economics, contraction mappings facilitate fixed-point iterations to compute equilibrium prices in general equilibrium models, such as those involving dynamic principal-agent problems with continuous choice sets. By showing that the Bellman operator for the principal's value function is a contraction under discounting and compactness assumptions, the theorem ensures a unique equilibrium policy and value function, computable via value function iteration. For instance, in one-to-one matching models with linear transferable utility, the system of fixed-point equations for equilibrium transfers forms a contraction mapping, guaranteeing unique transfers and stable outcomes. In computer science, contraction mappings ensure convergence in algorithms like PageRank, which computes webpage importance as the fixed point of the iteration $\rho = \alpha P \rho + (1 - \alpha) v$, where $P$ is the transition matrix and $\alpha < 1$ makes the mapping a contraction under the 1-norm, yielding a unique steady-state ranking vector via power iteration.
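
As a concrete illustration of the PageRank fixed point, the following hedged sketch (a toy three-page graph invented for illustration, using NumPy) iterates $\rho \mapsto \alpha P \rho + (1 - \alpha) v$; since this map is a contraction with constant $\alpha$ in the 1-norm, the iteration converges to the unique ranking vector from any starting distribution.

```python
import numpy as np

# Toy PageRank: the update rho <- alpha * P @ rho + (1 - alpha) * v is a
# contraction with constant alpha in the 1-norm, so fixed-point iteration
# converges to the unique ranking vector regardless of the starting point.
alpha = 0.85
P = np.array([[0.0, 0.5, 1.0],     # column-stochastic link matrix:
              [0.5, 0.0, 0.0],     # column j describes the outlinks of page j
              [0.5, 0.5, 0.0]])
v = np.ones(3) / 3                 # uniform teleportation vector
rho = np.ones(3) / 3
for _ in range(200):
    rho = alpha * P @ rho + (1 - alpha) * v
print(rho, rho.sum())              # unique ranking; components sum to 1
```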
Variants of k-means clustering, such as contraction clustering algorithms like RASTER and S-RASTER for data streams, leverage contraction principles by progressively shrinking the data space through grid scaling and density-based projections, enabling single-pass convergence to clusters without the local optima issues of standard k-means. In fractal geometry, an iterated function system (IFS) consists of a finite collection of contraction mappings $\{f_1, \dots, f_n\}$ on a complete metric space $X$; the induced dynamics are studied on the space $\mathbb{H}$ of nonempty compact subsets of $X$ equipped with the Hausdorff metric. The associated Hutchinson operator $F(A) = \bigcup_{i=1}^n f_i(A)$ is a contraction on $\mathbb{H}$ with constant $k = \max_i k_i < 1$, where $k_i$ is the Lipschitz constant of $f_i$. The Banach fixed-point theorem guarantees a unique nonempty compact attractor $K \in \mathbb{H}$ satisfying $F(K) = K$, which generates self-similar fractals such as the Sierpinski triangle.
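
As a hedged sketch (the specific maps and vertex coordinates are illustrative choices, not from the text), the random-iteration ("chaos game") variant of an IFS applies one of three contractions $f_i(x) = (x + v_i)/2$, each with constant $1/2$, at every step; the orbit rapidly approaches the attractor of the Hutchinson operator, here the Sierpinski triangle.

```python
import random

# Chaos-game sampling of the Sierpinski triangle: each map halves the distance
# to one of the three triangle vertices, so every f_i is a contraction with k = 1/2.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.1, 0.1
points = []
for n in range(20000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2.0, (y + vy) / 2.0
    if n > 100:          # discard the transient before the orbit reaches the attractor
        points.append((x, y))
# `points` now samples the attractor; scatter-plot it (e.g. with matplotlib) to see the fractal.
```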

Generalizations and Variants

Non-Expansive Mappings

A non-expansive mapping, also known as a 1-Lipschitz mapping, is a function $f: X \to X$ defined on a metric space $(X, d)$ satisfying $d(f(x), f(y)) \leq d(x, y)$ for all $x, y \in X$. This condition is a relaxation of the strict contraction condition, in which the Lipschitz constant satisfies $k < 1$: the distance between images may equal, but never exceed, the original distance. Non-expansive mappings inherit several beneficial properties from their Lipschitz nature. They are uniformly continuous, ensuring that small changes in input produce bounded changes in output relative to the metric. However, unlike strict contractions, non-expansive mappings do not guarantee a unique fixed point in general complete metric spaces; for instance, the identity mapping $f(x) = x$ is non-expansive on any metric space but admits fixed points at every point. Fixed-point existence for non-expansive mappings requires additional structural assumptions. In finite-dimensional Euclidean spaces, Brouwer's fixed-point theorem ensures a fixed point for any continuous self-mapping of a compact convex set, and since non-expansiveness implies continuity, the result applies directly. For set-valued non-expansive mappings—where the Hausdorff distance between images satisfies a similar inequality—Kakutani's fixed-point theorem guarantees existence when the mapping is upper semicontinuous with nonempty, closed, convex values on a compact convex subset of $\mathbb{R}^n$. In infinite-dimensional settings, such as Hilbert spaces, the Browder–Göhde–Kirk theorem establishes fixed-point existence for non-expansive mappings on nonempty, closed, bounded, convex subsets, though uniqueness remains absent without further conditions. Every contraction mapping is non-expansive, but the converse fails, highlighting the broader applicability of this class at the cost of weakened convergence guarantees.

Firmly Non-Expansive Mappings

In a Hilbert space $H$, a mapping $f: H \to H$ is firmly nonexpansive if, for all $x, y \in H$,
$$\|f(x) - f(y)\|^2 + \|(I - f)(x) - (I - f)(y)\|^2 \leq \|x - y\|^2.$$
This condition implies that $f$ is nonexpansive, as the first term on the left-hand side is bounded above by the right-hand side. An equivalent formulation is that $f$ satisfies $\langle f(x) - f(y), x - y \rangle \geq \|f(x) - f(y)\|^2$ for all $x, y \in H$, which aligns with monotonicity properties in inner product spaces. Firmly nonexpansive mappings are precisely the $\frac{1}{2}$-averaged operators, meaning $f = (1 - \alpha)I + \alpha T$ for some nonexpansive $T$ and $\alpha = \frac{1}{2}$. Equivalently, the operator $2f - I$ is nonexpansive. This characterization extends to the difference operator $I - f$, which satisfies a 2-Lipschitz condition in certain contexts, reinforcing the mapping's stability. In fixed-point theory, firmly nonexpansive mappings, being a subclass of nonexpansive mappings, admit fixed points on nonempty, compact, convex subsets of a Hilbert space by the Browder–Göhde–Kirk theorem. They are particularly amenable to iterative methods, such as the Krasnoselskii–Mann iteration $x_{n+1} = (1 - \lambda_n) x_n + \lambda_n f(x_n)$ with $\lambda_n \in (0,1)$, which converges weakly to a fixed point under standard conditions on the sequence $\{\lambda_n\}$. This iteration leverages the averaged nature of firmly nonexpansive operators for improved convergence behavior compared to general nonexpansive cases. Firmly nonexpansive mappings play a central role in the theory of monotone operators, where the resolvent $J_A = (I + A)^{-1}$ of a maximal monotone operator $A: H \rightrightarrows H$ is firmly nonexpansive. This connection is foundational in solving variational inequalities, as the fixed points of such resolvents correspond to zeros of $A$. In optimization, they underpin proximal algorithms for minimizing convex functions, where the proximity operator $\operatorname{prox}_{\gamma g}(x) = \operatorname{arg\,min}_y \left( g(y) + \frac{1}{2\gamma} \|y - x\|^2 \right)$ is firmly nonexpansive for $\gamma > 0$. These properties ensure reliable convergence in methods addressing monotone inclusions and related problems.
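
For a concrete check, the proximity operator of $g(x) = |x|$ on the real line is the soft-thresholding map (this specific example is chosen here for illustration); the hedged sketch below samples random pairs of points and verifies the defining firm non-expansiveness inequality numerically.

```python
import random

# prox of gamma*|.| is soft-thresholding; verify numerically that it satisfies
#   |f(x)-f(y)|^2 + |(x-f(x)) - (y-f(y))|^2 <= |x-y|^2   (firm non-expansiveness).
gamma = 0.7

def prox_abs(x):
    # soft-thresholding: shrink toward 0 by gamma, and map [-gamma, gamma] to 0
    if x > gamma:
        return x - gamma
    if x < -gamma:
        return x + gamma
    return 0.0

for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    fx, fy = prox_abs(x), prox_abs(y)
    lhs = (fx - fy) ** 2 + ((x - fx) - (y - fy)) ** 2
    assert lhs <= (x - y) ** 2 + 1e-12
print("firm non-expansiveness held on all sampled pairs")
```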

Subcontraction Maps

A subcontraction mapping on a metric space $(X, d)$ is defined such that for every point $x \in X$, there exists a constant $k_x \in [0, 1)$ satisfying $d(f(x), f(y)) \leq k_x \, d(x, y)$ for all $y$ in some neighborhood of $x$. In the global variant, the inequality holds for all $y \in X$ with the point-dependent constant $k_x$. This notion, also termed a pointwise contraction, relaxes the contraction condition by allowing the contraction factor to vary by location, accommodating mappings that shrink distances unevenly across the space. Unlike the classical contraction mapping, which requires a single uniform constant $k < 1$ for all pairs of points, subcontractions permit the constants $k_x$ to approach 1 as $x$ varies, facilitating local or partial contraction analysis in spaces where global uniformity fails. This flexibility is particularly valuable for investigating fixed points of mappings that exhibit contraction behavior only in restricted regions or with point-specific rates. Fixed-point results for subcontractions include Edelstein's theorem, which applies to compact metric spaces: a continuous self-mapping $f$ satisfying $d(f(x), f(y)) < d(x, y)$ for all $x \neq y$ has a unique fixed point, interpretable as a strict subcontraction without an explicit uniform bound. For complete metric spaces, Caristi's theorem extends this by guaranteeing a fixed point for mappings where $d(x, f(x)) \leq \psi(x) - \psi(f(x))$ for a lower semicontinuous function $\psi: X \to \mathbb{R}$ with closed bounded sublevel sets $\{z \in X : \psi(z) \leq \alpha\}$, encompassing subcontractions via appropriate choices of $\psi$. These theorems ensure existence without requiring global uniformity, though uniqueness may depend on additional path-connectedness or compactness assumptions. An illustrative example arises in non-complete metric spaces, such as the open unit interval $(0, 1)$ with the Euclidean metric, where a mapping may possess a global Lipschitz constant $k \geq 1$ but admit local contraction constants $k_x < 1$ at each interior point. For a differentiable $f: (0,1) \to (0,1)$ with $|f'(x)| < 1$ for all $x \in (0,1)$ yet $\sup_{x \in (0,1)} |f'(x)| = 1$ (approached near the endpoints), the mapping fails to be a global contraction because the supremum reaches 1, and the Banach theorem does not apply because the space is incomplete; nevertheless, the pointwise-local property supports fixed-point analysis under supplementary conditions like those in Caristi's framework.
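
The role of compactness in Edelstein's theorem can be seen numerically. The following hedged sketch (the example $f(x) = x + e^{-x}$ on the non-compact space $[0, \infty)$ is chosen here for illustration) satisfies the strict inequality $d(f(x), f(y)) < d(x, y)$ for $x \neq y$, since $f'(x) = 1 - e^{-x} \in [0, 1)$, yet has no fixed point because $f(x) > x$ everywhere.

```python
import math
import random

# f(x) = x + exp(-x) on [0, inf): strictly distance-decreasing but fixed-point free.
f = lambda x: x + math.exp(-x)

# Spot-check the strict inequality |f(x) - f(y)| < |x - y| on random pairs.
for _ in range(10_000):
    x, y = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    if x != y:
        assert abs(f(x) - f(y)) < abs(x - y)

# The iterates increase without bound instead of converging to a fixed point.
x = 0.0
for _ in range(5):
    x = f(x)
print(x)   # ~1.98 after 5 steps, and the iterates keep drifting upward (roughly like log n)
```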

Extensions to Specific Spaces

Contractions in Locally Convex Spaces

In locally convex topological vector spaces, the concept of a contraction mapping is adapted to the absence of a single global norm by leveraging the underlying family of seminorms or the uniform structure induced by balanced convex neighborhoods of the origin. A mapping $T: X \to X$ on a locally convex space $X$ is defined as contractive if there exists $h \in (0,1)$ such that for every balanced convex neighborhood $U$ of the origin and all $x, y \in X$, if $x - y \in tU$ for some $t > 0$, then $Tx - Ty \in h t U$. This condition ensures that $T$ uniformly scales distances, in a topological sense, across the generating neighborhoods, generalizing the metric contraction while respecting the locally convex structure. More generally, a $(\psi, \phi)$-contractive mapping satisfies a similar inclusion with continuous, strictly increasing functions $\psi, \phi: [0, \infty) \to [0, \infty)$ where $\psi(0) = \phi(0)$ and $\psi(t) < \phi(t)$ for $t > 0$, allowing for broader classes of mappings that still contract neighborhoods asymptotically. Fixed-point theorems in this setting extend the Banach fixed-point theorem to non-metric topologies, often requiring sequential completeness or additional structural assumptions. For instance, in a complete locally convex space, a $(\psi, \phi)$-contractive mapping admits a unique fixed point, with the iterative sequence $T^n x_0$ converging to it for any initial $x_0 \in X$. Michael's selection theorem plays a key role for set-valued contractions or lower hemicontinuous maps with convex values, guaranteeing a continuous selection that reduces the problem to a single-valued contraction, thereby ensuring fixed points in paracompact domains mapping to Banach spaces. Similarly, the Eilenberg–Montgomery fixed-point theorem applies to multi-valued contractions, asserting the existence of fixed points for upper semicontinuous maps with acyclic values on absolute neighborhood retracts, facilitating analysis in topological dynamics. The lack of a global metric in locally convex spaces introduces challenges for convergence, necessitating tools like asymptotic centers—points minimizing the asymptotic behavior of distances along an iterative sequence—or absorbing sets, which ensure iterates remain bounded within convex, topology-generating neighborhoods. For asymptotically nonexpansive mappings, fixed points exist in weakly compact convex subsets via asymptotic methods, addressing non-contractive behaviors that approximate contractions over iterations. These techniques are essential for proving convergence in the absence of a single distance function. The development of contraction mappings in locally convex spaces emerged in the mid-20th century, building on foundational work in topological fixed-point theory to handle infinite-dimensional settings in dynamics and analysis. Seminal contributions, such as Krasnoselskii's extensions for nonlinear contractions and Michael's selections, addressed uniform structures beyond norms, influencing applications in topological vector spaces.

Contractions in Normed Spaces

In a normed linear space $(X, \|\cdot\|)$, a mapping $f: X \to X$ is called a contraction if there exists a constant $k$ with $0 \leq k < 1$ such that $\|f(x) - f(y)\| \leq k \|x - y\|$ for all $x, y \in X$. This leverages the metric $d(x, y) = \|x - y\|$ induced by the norm, specializing the general notion of contractions in metric spaces to the structured setting of normed spaces. Unlike arbitrary metrics, the norm-induced metric preserves the vector space operations, which facilitates analysis of mappings that respect linearity or affinity. When $f$ is a bounded linear operator $T: X \to X$, the contraction condition simplifies to the operator norm satisfying $\|T\| < 1$. In this case, the spectral radius $\rho(T)$ obeys $\rho(T) \leq \|T\| < 1$ by the spectral radius formula $\rho(T) = \lim_{n \to \infty} \|T^n\|^{1/n}$. Consequently, the resolvent $(I - T)^{-1}$ exists as a bounded linear operator and equals the Neumann series
$$(I - T)^{-1} = \sum_{n=0}^\infty T^n,$$
which converges absolutely in the operator norm. In complete normed spaces (Banach spaces), the iteration $x_{n+1} = f(x_n)$ for a contraction $f$ converges to the unique fixed point by the Banach fixed-point theorem. This applies particularly to solving linear systems of the form $(I - A)x = b$, rewritten as the fixed-point equation $x = Ax + b$; if $\|A\| < 1$, then $f(x) = Ax + b$ is a contraction, and the iteration converges to the solution $x = (I - A)^{-1}b$. The linear structure distinguishes contractions in normed spaces from those in general metric spaces, as it admits affine contractions of the form $f(x) = Tx + c$ where $T$ is linear with $\|T\| < 1$ and $c \in X$; such mappings satisfy the contraction inequality with the same constant as $T$. This affinity enables algebraic manipulations and insights unavailable in purely metric settings. In broader locally convex spaces, contractions can be defined via families of seminorms, extending these properties topologically.
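
A hedged numerical sketch of the last point (the $2 \times 2$ matrix and right-hand side below are illustrative choices, using NumPy): because $\|A\| < 1$, the affine map $f(x) = Ax + b$ is a contraction, and fixed-point iteration reproduces the solution $(I - A)^{-1}b$.

```python
import numpy as np

# If ||A|| < 1, f(x) = A x + b is an affine contraction, so x_{n+1} = A x_n + b
# converges to the unique fixed point x = (I - A)^{-1} b (Neumann series limit).
A = np.array([[0.0, 0.2],
              [0.1, 0.0]])
b = np.array([1.0, 2.0])
print(np.linalg.norm(A, 2))                     # ~0.2 < 1, so f is a contraction

x = np.zeros(2)
for _ in range(60):
    x = A @ x + b
print(x)
print(np.linalg.solve(np.eye(2) - A, b))        # matches the iterated fixed point
```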
