Limit of a sequence
from Wikipedia
diagram of a hexagon and pentagon circumscribed outside a circle
The sequence given by the perimeters of regular n-sided polygons that circumscribe the unit circle has a limit equal to the perimeter of the circle, i.e. $2\pi$. The corresponding sequence for inscribed polygons has the same limit.
n	$n \sin\tfrac{1}{n}$
1	0.841471
2	0.958851
...
10	0.998334
...
100	0.999983

As the positive integer $n$ becomes larger and larger, the value $n \sin\tfrac{1}{n}$ becomes arbitrarily close to $1$. We say that "the limit of the sequence $n \sin\tfrac{1}{n}$ equals $1$."
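The table above can be reproduced numerically. A minimal Python sketch (the helper name `a` is ours, not from the article):

```python
import math

# Terms of the sequence a_n = n * sin(1/n), which tends to 1 as n grows.
def a(n: int) -> float:
    return n * math.sin(1.0 / n)

# The printed values match the table: 0.841471, 0.958851, 0.998334, 0.999983.
for n in (1, 2, 10, 100):
    print(n, round(a(n), 6))
```

Evaluating still larger indices (say `n = 10**6`) shows the terms settling ever closer to 1.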

In mathematics, the limit of a sequence is the value that the terms of a sequence "tend to", and is often denoted using the $\lim$ symbol (e.g., $\lim_{n\to\infty} a_n$).[1] If such a limit exists and is finite, the sequence is called convergent.[2] A sequence that does not converge is said to be divergent.[3] The limit of a sequence is said to be the fundamental notion on which the whole of mathematical analysis ultimately rests.[1]

Limits can be defined in any metric or topological space, but are usually first encountered in the real numbers.

History


The Greek philosopher Zeno of Elea is famous for formulating paradoxes that involve limiting processes.

Leucippus, Democritus, Antiphon, Eudoxus, and Archimedes developed the method of exhaustion, which uses an infinite sequence of approximations to determine an area or a volume. Archimedes succeeded in summing what is now called a geometric series.

Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment."[4]

Pietro Mengoli anticipated the modern idea of limit of a sequence with his study of quasi-proportions in Geometriae speciosae elementa (1659). He used the term quasi-infinite for unbounded and quasi-null for vanishing.

Newton dealt with series in his works on Analysis with infinite series (written in 1669, circulated in manuscript, published in 1711), Method of fluxions and infinite series (written in 1671, published in English translation in 1736, Latin original published much later) and Tractatus de Quadratura Curvarum (written in 1693, published in 1704 as an appendix to his Opticks). In the latter work, Newton considers the binomial expansion of $(x + o)^n$, which he then linearizes by taking the limit as $o$ tends to $0$.

In the 18th century, mathematicians such as Euler succeeded in summing some divergent series by stopping at the right moment; they did not much care whether a limit existed, as long as it could be calculated. At the end of the century, Lagrange in his Théorie des fonctions analytiques (1797) opined that the lack of rigour precluded further development in calculus. Gauss in his study of hypergeometric series (1813) for the first time rigorously investigated the conditions under which a series converged to a limit.

The modern definition of a limit (for any $\varepsilon$ there exists an index $N$ so that ...) was given by Bernard Bolzano (Der binomische Lehrsatz, Prague 1816, which was little noticed at the time), and by Karl Weierstrass in the 1870s.

Real numbers

The plot of a convergent sequence {an} is shown in blue. Here, one can see that the sequence is converging to the limit 0 as n increases.

In the real numbers, a number $L$ is the limit of the sequence $(a_n)$ if the numbers in the sequence become closer and closer to $L$, and not to any other number.

Examples

  • If $a_n = c$ for a constant $c$, then $a_n \to c$.[proof 1][5]
  • If $a_n = \frac{1}{n}$, then $a_n \to 0$.[proof 2][5]
  • If $a_n = \frac{1}{n}$ when $n$ is even, and $a_n = \frac{1}{n^2}$ when $n$ is odd, then $a_n \to 0$. (The fact that $a_{n+1} > a_n$ whenever $n$ is odd is irrelevant.)
  • Given any real number, one may easily construct a sequence that converges to that number by taking decimal approximations. For example, the sequence $0.3, 0.33, 0.333, 0.3333, \ldots$ converges to $\frac{1}{3}$. The decimal representation $0.3333\ldots$ is the limit of the previous sequence, defined by $0.3333\ldots := \lim_{n\to\infty} \sum_{i=1}^{n} \frac{3}{10^i}$.
  • Finding the limit of a sequence is not always obvious. Two examples are $\lim_{n\to\infty} \left(1 + \tfrac{1}{n}\right)^n$ (the limit of which is the number e) and the arithmetic–geometric mean. The squeeze theorem is often useful in the establishment of such limits.
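The first of these limits can be explored numerically. A small Python sketch (our own illustration; note the slow convergence):

```python
import math

# Terms of a_n = (1 + 1/n)^n, whose limit is Euler's number e = 2.71828...
def a(n: int) -> float:
    return (1.0 + 1.0 / n) ** n

# The sequence increases toward e, but slowly: the error behaves like e/(2n).
for n in (10, 1000, 100000):
    print(n, a(n), math.e - a(n))
```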

Definition


We call $L$ the limit of the sequence $(a_n)$, which is written

$a_n \to L$, or
$\lim_{n\to\infty} a_n = L$,

if the following condition holds:

For each real number $\varepsilon > 0$, there exists a natural number $N$ such that, for every natural number $n \geq N$, we have $|a_n - L| < \varepsilon$.[6]

In other words, for every measure of closeness $\varepsilon$, the sequence's terms are eventually that close to the limit. The sequence $(a_n)$ is said to converge to or tend to the limit $L$.

Symbolically, this is:

$\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n \geq N : |a_n - L| < \varepsilon$.

If a sequence converges to some limit $L$, then it is convergent and $L$ is the only limit; otherwise $(a_n)$ is divergent. A sequence that has zero as its limit is sometimes called a null sequence.
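The ε-N condition can be probed numerically. The sketch below (helper name ours; it inspects finitely many terms, so it illustrates rather than proves convergence) finds, for a given ε, the last sampled index at which the closeness condition fails:

```python
# For a_n = 1/n and L = 0, |a_n - 0| = 1/n < eps exactly when n > 1/eps,
# so the witness N can be taken to be floor(1/eps).
def last_failure(seq, L, eps, n_max=10**4):
    """Largest sampled n with |seq(n) - L| >= eps (0 if the bound always holds)."""
    N = 0
    for n in range(1, n_max + 1):
        if abs(seq(n) - L) >= eps:
            N = n
    return N

assert last_failure(lambda n: 1.0 / n, 0.0, 0.1) == 10   # beyond N = 10, 1/n < 0.1
```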

Illustration


Properties


Some other important properties of limits of real sequences include the following:

  • When it exists, the limit of a sequence is unique.[5]
  • Limits of sequences behave well with respect to the usual arithmetic operations. If $a_n \to a$ and $b_n \to b$ exist, then
$\lim_{n\to\infty} (a_n \pm b_n) = a \pm b$[5]
$\lim_{n\to\infty} c\,a_n = c\,a$[5]
$\lim_{n\to\infty} (a_n \cdot b_n) = ab$[5]
$\lim_{n\to\infty} \frac{a_n}{b_n} = \frac{a}{b}$ provided $b \neq 0$[5]
  • For any continuous function $f$, if $\lim_{n\to\infty} a_n$ exists, then $\lim_{n\to\infty} f(a_n)$ exists too. In fact, any real-valued function $f$ is continuous if and only if it preserves the limits of sequences (though this is not necessarily true when using more general notions of continuity).
  • If $a_n \leq b_n$ for all $n$ greater than some $N$, then $a \leq b$.
  • (Squeeze theorem) If $c_n \leq a_n \leq d_n$ for all $n$ greater than some $N$, and $c_n \to L$ and $d_n \to L$, then $a_n \to L$.
  • (Monotone convergence theorem) If $(a_n)$ is bounded and monotonic for all $n$ greater than some $N$, then it is convergent.
  • A sequence is convergent if and only if every subsequence is convergent.
  • If every subsequence of a sequence has its own subsequence which converges to the same point, then the original sequence converges to that point.

These properties are extensively used to prove limits, without the need to directly use the cumbersome formal definition. For example, once it is proven that $\frac{1}{n} \to 0$, it becomes easy to show, using the properties above, that $\frac{b}{a + \frac{c}{n}} \to \frac{b}{a}$ (assuming $a \neq 0$).
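These arithmetic rules can be spot-checked numerically. A hedged Python sketch (our own example sequences; a numerical illustration, not a proof):

```python
# a_n = 1/n -> 0 and b_n = n/(n+1) -> 1; check the sum, product, and
# quotient rules at a large index.
def a(n): return 1.0 / n
def b(n): return n / (n + 1.0)

n = 10**6
assert abs((a(n) + b(n)) - (0.0 + 1.0)) < 1e-5          # sum rule
assert abs((a(n) * b(n)) - (0.0 * 1.0)) < 1e-5          # product rule
assert abs((b(n) / (1.0 + a(n))) - (1.0 / 1.0)) < 1e-5  # quotient rule (nonzero denominator limit)
```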

Infinite limits


A sequence $(a_n)$ is said to tend to infinity, written

$a_n \to \infty$, or
$\lim_{n\to\infty} a_n = \infty$,

if the following holds:

For every real number $M$, there is a natural number $N$ such that for every natural number $n \geq N$, we have $a_n > M$; that is, the sequence terms are eventually larger than any fixed $M$.

Symbolically, this is:

$\forall M \in \mathbb{R} \;\; \exists N \in \mathbb{N} \;\; \forall n \geq N : a_n > M$.

Similarly, we say a sequence tends to minus infinity, written

$a_n \to -\infty$, or
$\lim_{n\to\infty} a_n = -\infty$,

if the following holds:

For every real number $M$, there is a natural number $N$ such that for every natural number $n \geq N$, we have $a_n < M$; that is, the sequence terms are eventually smaller than any fixed $M$.

Symbolically, this is:

$\forall M \in \mathbb{R} \;\; \exists N \in \mathbb{N} \;\; \forall n \geq N : a_n < M$.

If a sequence tends to infinity or minus infinity, then it is divergent. However, a divergent sequence need not tend to plus or minus infinity, and the sequence $a_n = (-1)^n$ provides one such example.
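The defining condition for tending to infinity can be illustrated concretely. For $a_n = n^2$ and a given bound $M$, every index beyond $N = \lfloor \sqrt{M} \rfloor$ satisfies $a_n > M$; a short sketch (helper name ours):

```python
import math

# For a_n = n^2, terms exceed a bound M once n > sqrt(M); since n^2 is
# increasing, N = isqrt(M) is a valid witness for the definition.
def witness(M: int) -> int:
    return math.isqrt(M)

M = 10**6
N = witness(M)
assert all(n * n > M for n in range(N + 1, N + 1000))
```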

Metric spaces


Definition


A point $x$ of the metric space $(X, d)$ is the limit of the sequence $(x_n)$ if:

For each real number $\varepsilon > 0$, there is a natural number $N$ such that, for every natural number $n \geq N$, we have $d(x_n, x) < \varepsilon$.

Symbolically, this is:

$\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n \geq N : d(x_n, x) < \varepsilon$.

This coincides with the definition given for real numbers when $X = \mathbb{R}$ and $d(x, y) = |x - y|$.

Properties

  • When it exists, the limit of a sequence is unique, as distinct points are separated by some positive distance, so for $\varepsilon$ less than half this distance, sequence terms cannot be within a distance $\varepsilon$ of both points.
  • For any continuous function $f$, if $\lim_{n\to\infty} x_n$ exists, then $\lim_{n\to\infty} f(x_n) = f\left(\lim_{n\to\infty} x_n\right)$. In fact, a function $f$ is continuous if and only if it preserves the limits of sequences.
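Convergence in a metric space depends only on the distance function. A small sketch in the plane with the Euclidean metric (our own example):

```python
import math

# In (R^2, Euclidean metric), x_n = (1/n, 1 - 1/n) converges to (0, 1),
# since d(x_n, (0, 1)) = sqrt(2)/n -> 0.
def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def x(n):
    return (1.0 / n, 1.0 - 1.0 / n)

limit = (0.0, 1.0)
assert d(x(10**6), limit) < 1e-5
```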

Cauchy sequences

The plot of a Cauchy sequence $(x_n)$, shown in blue, as $x_n$ versus $n$. Visually, we see that the sequence appears to be converging to a limit point, as the terms in the sequence become closer together as $n$ increases. In the real numbers every Cauchy sequence converges to some limit.

A Cauchy sequence is a sequence whose terms ultimately become arbitrarily close together, after sufficiently many initial terms have been discarded. The notion of a Cauchy sequence is important in the study of sequences in metric spaces, and, in particular, in real analysis. One particularly important result in real analysis is the Cauchy criterion for convergence of sequences: a sequence of real numbers is convergent if and only if it is a Cauchy sequence. This remains true in other complete metric spaces.
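The Cauchy property can be checked on initial segments. Below, the partial sums of $\sum 1/k^2$ (our example; they converge to $\pi^2/6$) are shown to cluster: the tail beyond index $n$ is bounded by $1/n$, so any two later partial sums differ by less than that:

```python
# Partial sums s_n = 1/1^2 + 1/2^2 + ... + 1/n^2 form a Cauchy sequence:
# for m > n, |s_m - s_n| = sum_{k=n+1}^{m} 1/k^2 < 1/n (integral bound).
def s(n: int) -> float:
    return sum(1.0 / (k * k) for k in range(1, n + 1))

n, m = 1000, 5000
assert abs(s(m) - s(n)) < 1.0 / n
```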

Topological spaces


Definition


A point $x$ of the topological space $(X, \tau)$ is a limit or limit point[7][8] of the sequence $(x_n)$ if:

For every neighbourhood $U$ of $x$, there exists some $N$ such that for every $n \geq N$, we have $x_n \in U$.[9]

This coincides with the definition given for metric spaces, if $(X, d)$ is a metric space and $\tau$ is the topology generated by $d$.

A limit of a sequence of points $(x_n)$ in a topological space $T$ is a special case of a limit of a function: the domain is $\mathbb{N}$ in the space $\mathbb{N} \cup \{+\infty\}$, with the induced topology of the affinely extended real number system, the range is $T$, and the function argument $n$ tends to $+\infty$, which in this space is a limit point of $\mathbb{N}$.

Properties


In a Hausdorff space, limits of sequences are unique whenever they exist. This need not be the case in non-Hausdorff spaces; in particular, if two points $x$ and $y$ are topologically indistinguishable, then any sequence that converges to $x$ must converge to $y$ and vice versa.

Hyperreal numbers


The definition of the limit using the hyperreal numbers formalizes the intuition that for a "very large" value of the index, the corresponding term is "very close" to the limit. More precisely, a real sequence $(a_n)$ tends to $L$ if for every infinite hypernatural $H$, the term $a_H$ is infinitely close to $L$ (i.e., the difference $a_H - L$ is infinitesimal). Equivalently, $L$ is the standard part of $a_H$:

$L = \operatorname{st}(a_H)$.

Thus, the limit can be defined by the formula

$\lim_{n\to\infty} a_n = \operatorname{st}(a_H),$

where the limit exists if and only if the right-hand side is independent of the choice of an infinite $H$.

Sequence of more than one index


Sometimes one may also consider a sequence with more than one index, for example, a double sequence $(a_{n,m})$. This sequence has a limit $L$ if it becomes closer and closer to $L$ when both $n$ and $m$ become very large.

Example

  • If $a_{n,m} = c$ for a constant $c$, then $a_{n,m} \to c$.
  • If $a_{n,m} = \frac{1}{n+m}$, then $a_{n,m} \to 0$.
  • If $a_{n,m} = \frac{n}{n+m}$, then the limit does not exist. Depending on the relative "growing speed" of $n$ and $m$, this sequence can get closer to any value between $0$ and $1$.

Definition


We call $L$ the double limit of the sequence $(a_{n,m})$, written

$a_{n,m} \to L$, or
$\lim_{n,m\to\infty} a_{n,m} = L$,

if the following condition holds:

For each real number $\varepsilon > 0$, there exists a natural number $N$ such that, for every pair of natural numbers $n, m \geq N$, we have $|a_{n,m} - L| < \varepsilon$.[10]

In other words, for every measure of closeness $\varepsilon$, the sequence's terms are eventually that close to the limit. The sequence $(a_{n,m})$ is said to converge to or tend to the limit $L$.

Symbolically, this is:

$\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n, m \geq N : |a_{n,m} - L| < \varepsilon$.

The double limit is different from taking the limit in $n$ first, and then in $m$; the latter is known as an iterated limit. If both the double limit and the iterated limit exist, they have the same value. However, it is possible that one of them exists but the other does not.
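The sequence $a_{n,m} = \frac{n}{n+m}$ from the examples above makes the distinction concrete: both iterated limits exist but disagree, so no double limit exists. A numerical sketch (illustrative only):

```python
# a(n, m) = n/(n+m): lim_m a(n, m) = 0 for fixed n, but lim_n a(n, m) = 1
# for fixed m, so the two iterated limits differ and no double limit exists.
def a(n, m):
    return n / (n + m)

big = 10**9
assert a(5, big) < 1e-8          # inner limit in m first: tends to 0
assert a(big, 5) > 1 - 1e-8      # inner limit in n first: tends to 1
assert a(big, big) == 0.5        # along the diagonal the terms sit at 1/2
```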

Infinite limits


A sequence $(a_{n,m})$ is said to tend to infinity, written

$a_{n,m} \to \infty$, or
$\lim_{n,m\to\infty} a_{n,m} = \infty$,

if the following holds:

For every real number $M$, there is a natural number $N$ such that for every pair of natural numbers $n, m \geq N$, we have $a_{n,m} > M$; that is, the sequence terms are eventually larger than any fixed $M$.

Symbolically, this is:

$\forall M \in \mathbb{R} \;\; \exists N \in \mathbb{N} \;\; \forall n, m \geq N : a_{n,m} > M$.

Similarly, a sequence $(a_{n,m})$ tends to minus infinity, written

$a_{n,m} \to -\infty$, or
$\lim_{n,m\to\infty} a_{n,m} = -\infty$,

if the following holds:

For every real number $M$, there is a natural number $N$ such that for every pair of natural numbers $n, m \geq N$, we have $a_{n,m} < M$; that is, the sequence terms are eventually smaller than any fixed $M$.

Symbolically, this is:

$\forall M \in \mathbb{R} \;\; \exists N \in \mathbb{N} \;\; \forall n, m \geq N : a_{n,m} < M$.

If a sequence tends to infinity or minus infinity, then it is divergent. However, a divergent sequence need not tend to plus or minus infinity, and the sequence $a_{n,m} = (-1)^{n+m}$ provides one such example.

Pointwise limits and uniform limits


For a double sequence $(a_{n,m})$, we may take the limit in one of the indices, say, $n$, to obtain a single sequence $(b_m)$. In fact, there are two possible meanings when taking this limit. The first one is called the pointwise limit, denoted

$a_{n,m} \to b_m$ pointwise, or
$\lim_{n\to\infty} a_{n,m} = b_m$ pointwise,

which means:

For each real number $\varepsilon > 0$ and each fixed natural number $m$, there exists a natural number $N(\varepsilon, m)$ such that, for every natural number $n \geq N$, we have $|a_{n,m} - b_m| < \varepsilon$.[11]

Symbolically, this is:

$\forall \varepsilon > 0 \;\; \forall m \in \mathbb{N} \;\; \exists N \in \mathbb{N} \;\; \forall n \geq N : |a_{n,m} - b_m| < \varepsilon$.

When such a limit exists, we say the sequence $(a_{n,m})$ converges pointwise to $(b_m)$.

The second one is called the uniform limit, denoted

$a_{n,m} \to b_m$ uniformly,
$\lim_{n\to\infty} a_{n,m} = b_m$ uniformly,
$a_{n,m} \rightrightarrows b_m$, or
$\operatorname{unif\,lim}_{n\to\infty} a_{n,m} = b_m$,

which means:

For each real number $\varepsilon > 0$, there exists a natural number $N(\varepsilon)$ such that, for every natural number $n \geq N$ and for every natural number $m$, we have $|a_{n,m} - b_m| < \varepsilon$.[11]

Symbolically, this is:

$\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall m \in \mathbb{N} \;\; \forall n \geq N : |a_{n,m} - b_m| < \varepsilon$.

In this definition, the choice of $N$ is independent of $m$. In other words, the choice of $N$ is uniformly applicable to all natural numbers $m$. Hence, one can easily see that uniform convergence is a stronger property than pointwise convergence: the existence of the uniform limit implies the existence and equality of the pointwise limit:

If $a_{n,m} \to b_m$ uniformly, then $a_{n,m} \to b_m$ pointwise.

When such a limit exists, we say the sequence $(a_{n,m})$ converges uniformly to $(b_m)$.
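The gap between the two notions can be seen numerically. For $b(n, m) = \frac{m}{m+n}$ (our example), the limit in $n$ is $0$ pointwise in $m$, yet the supremum over $m$ of $|b(n, m)|$ stays near $1$ for every $n$, so the convergence is not uniform:

```python
# b(n, m) = m/(m+n) -> 0 as n -> infinity for each fixed m (pointwise),
# but sup_m b(n, m) = 1 for every n, so the convergence is not uniform.
def b(n, m):
    return m / (m + n)

n = 10**6
assert b(n, 5) < 1e-5            # pointwise: fixed m, large n
assert b(n, 10**12) > 0.99       # the sup over m never shrinks
```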

Iterated limit


For a double sequence $(a_{n,m})$, we may take the limit in one of the indices, say, $n$, to obtain a single sequence $(b_m)$, and then take the limit in the other index, namely $m$, to get a number $y$. Symbolically,

$\lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} = \lim_{m\to\infty} b_m = y$.

This limit is known as the iterated limit of the double sequence. The order of taking limits may affect the result, i.e.,

$\lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} \neq \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m}$

in general.

A sufficient condition for equality is given by the Moore–Osgood theorem, which requires the limit $\lim_{n\to\infty} a_{n,m} = b_m$ to be uniform in $m$.[10]

from Grokipedia
In mathematics, the limit of a sequence is a foundational concept of analysis that describes the value $L$ toward which the terms of an infinite sequence $\{a_n\}$ of real numbers approach as the index $n$ tends to infinity, provided such a value exists. Informally, the sequence converges to $L$ if, for all sufficiently large $n$, the terms $a_n$ can be made arbitrarily close to $L$. Formally, $\lim_{n \to \infty} a_n = L$ if for every $\varepsilon > 0$, there exists a positive integer $N$ such that $|a_n - L| < \varepsilon$ whenever $n > N$. This $\varepsilon$-$N$ definition ensures that the tail of the sequence lies within any given neighborhood of $L$, capturing the intuitive notion of eventual proximity.

If no such finite $L$ exists, the sequence may diverge to $\pm\infty$ or oscillate without converging; for instance, $a_n = n$ diverges to $+\infty$, while $a_n = (-1)^n$ fails to converge due to perpetual alternation between $-1$ and $1$. A convergent sequence has a unique limit, a property proven using the triangle inequality in the real numbers. Limits of sequences underpin key theorems in analysis, such as the algebraic limit laws, e.g. $\lim_{n \to \infty} (a_n + b_n) = \lim_{n \to \infty} a_n + \lim_{n \to \infty} b_n$ for convergent sequences $\{a_n\}$ and $\{b_n\}$, and the squeeze theorem, which states that if $a_n \leq c_n \leq b_n$ for large $n$ and both $\{a_n\}$ and $\{b_n\}$ converge to $L$, then so does $\{c_n\}$. This concept is among the most subtle and essential in analysis, serving as the basis for defining continuity of functions, derivatives, integrals, and more advanced structures such as metric and topological spaces. Sequences without limits, or those diverging in specific ways, are crucial for studying series convergence and asymptotic behavior in applied fields such as physics.

Historical Development

Early Intuitive Notions

The concept of limits of sequences emerged intuitively in ancient Greece through paradoxes that challenged notions of motion and infinity. Zeno of Elea, around the 5th century BCE, posed the paradox of Achilles and the tortoise, in which the swift Achilles appears unable to overtake a slower tortoise because he must traverse an infinite series of ever-diminishing intervals. This puzzle intuitively suggested that infinite processes could converge to a finite outcome, foreshadowing the idea of a limit without providing a resolution.

Archimedes of Syracuse (c. 287–212 BCE) advanced these ideas through the method of exhaustion, a technique for approximating areas by inscribing and circumscribing polygons that increasingly approached the curved boundary. In his treatise Quadrature of the Parabola, Archimedes demonstrated that the area of a parabolic segment equals four-thirds the area of the inscribed triangle by iteratively adding triangles whose areas summed in a geometric series, effectively bounding the region between lower and upper estimates that converged to the exact value. This approach relied on the principle that if the difference between two quantities can be made smaller than any assigned magnitude, the quantities must be equal, providing an early, rigorous yet intuitive handling of convergence.

Medieval and Renaissance mathematics further explored infinite processes, particularly in Indian traditions. While Aryabhata (476–550 CE) contributed rational approximations, such as π ≈ 3.1416 derived from circumference-to-diameter ratios, the Kerala school in the 14th–16th centuries developed infinite series expansions for trigonometric functions and π, such as the series for the arctangent, computing precise values through partial sums approaching a limit. These methods echoed exhaustion by summing infinitely many terms to approximate transcendental quantities. In Europe, Renaissance scholars revisited Archimedean techniques, applying them to volumes and areas in preparation for calculus. By the 17th century, intuitive notions of limits underpinned the invention of calculus.

Isaac Newton developed fluxions around 1665–1666, treating quantities as flowing variables whose instantaneous rates of change (moments, or infinitesimally small increments) approximated tangents and areas through limiting processes. Independently, Gottfried Wilhelm Leibniz formulated infinitesimals in the 1670s as "inassignable" quantities smaller than any given positive number yet non-zero, using them to derive rules for differentiation and integration as ratios of these evanescent differences. These precursors treated limits as the outcome of infinite approximations in continuous change, setting the stage for 19th-century formalization.

Formalization in the 19th Century

The formalization of the limit concept for sequences in the 19th century emerged as a direct response to longstanding philosophical critiques of infinitesimal methods in calculus, particularly George Berkeley's 1734 attack in The Analyst, where he derided fluxions as "ghosts of departed quantities" lacking logical foundation. This prompted mathematicians to develop rigorous, non-infinitesimal definitions grounded in inequalities, transforming intuitive notions from ancient paradoxes, such as Zeno's, into precise analytical tools. By mid-century, these efforts had established the epsilon-based framework that underpins modern analysis.

Bernard Bolzano laid early groundwork in his 1817 pamphlet Rein analytischer Beweis des Lehrsatzes, daß zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung vorhanden sey. While primarily proving the intermediate value theorem for continuous functions, Bolzano introduced a definition of continuity that implicitly relied on limit concepts: a function is continuous if, for points sufficiently close, the difference in function values can be made arbitrarily small. This approach ensured the existence of limit points in infinite sets, bridging geometric intuition and algebraic precision without invoking infinitesimals.

Augustin-Louis Cauchy advanced this rigor in his 1821 textbook Cours d'analyse de l'École Polytechnique, where he provided the first systematic definition of the limit of a sequence. Cauchy stated: "When the successive values attributed to the same variable indefinitely approach a fixed value, so as to end by differing from it by as little as one wishes, this last is called the limit of all the others." For proofs, he operationalized this with an inequality condition, paraphrased as: a sequence converges to $L$ if for every $\epsilon > 0$, there exists an $N$ such that for all $n > N$, $|a_n - L| < \epsilon$. This formulation, applied extensively to series and functions, eliminated reliance on fluxions and established limits as the cornerstone of calculus.

Karl Weierstrass further refined these ideas in his Berlin University lectures beginning in the 1850s, culminating in a fully epsilon-N formalization by 1861 that dispelled any residual ambiguity. He defined the limit of a sequence $p_n$ as $L$ if, for every $\epsilon > 0$, there exists an $N$ such that for all $n > N$, $|p_n - L| < \epsilon$, emphasizing arithmetic verification over geometric intuition. Delivered to students such as Hermann Amandus Schwarz, these lectures, later disseminated through notes, ensured the epsilon method's adoption, purging infinitesimals entirely and solidifying sequence limits as a discrete, verifiable process.

Limits over the Real Numbers

Formal Definition

A sequence of real numbers is a function $a: \mathbb{N} \to \mathbb{R}$, where $\mathbb{N}$ denotes the set of positive integers, often denoted $\{a_n\}_{n=1}^\infty$ or simply $\{a_n\}$. The real numbers $\mathbb{R}$ are equipped with the standard metric given by the absolute value $|x - y|$ for $x, y \in \mathbb{R}$, which measures the distance between points.

The formal definition of the limit of a sequence in $\mathbb{R}$, known as the $\varepsilon$-$N$ definition, is as follows: a sequence $\{a_n\}$ converges to a limit $L \in \mathbb{R}$ if for every $\varepsilon > 0$, there exists $N \in \mathbb{N}$ such that for all $n > N$, $|a_n - L| < \varepsilon$:

$\lim_{n \to \infty} a_n = L \iff \forall \varepsilon > 0 \; \exists N \in \mathbb{N} \; \forall n > N, \; |a_n - L| < \varepsilon.$

This definition was introduced by Cauchy in 1821 and rigorously formalized by Weierstrass in the mid-19th century. Common notations for this convergence include $\lim_{n \to \infty} a_n = L$ or $a_n \to L$.

If a limit exists, it is unique. To see this, suppose $\lim_{n \to \infty} a_n = L$ and $\lim_{n \to \infty} a_n = M$ with $L \neq M$. Let $\varepsilon = |L - M|/2 > 0$. Then there exists $N_1 \in \mathbb{N}$ such that for all $n > N_1$, $|a_n - L| < \varepsilon$, and $N_2 \in \mathbb{N}$ such that for all $n > N_2$, $|a_n - M| < \varepsilon$. For $n > \max(N_1, N_2)$, the triangle inequality yields $|L - M| \leq |L - a_n| + |a_n - M| < 2\varepsilon = |L - M|$, a contradiction. Thus $L = M$.

A constant sequence $\{a_n\}$ with $a_n = c \in \mathbb{R}$ for all $n$ converges to $c$, since $|a_n - c| = 0 < \varepsilon$ holds for any $\varepsilon > 0$ and any $N \in \mathbb{N}$ (e.g., $N = 1$).

Illustrative Examples

To illustrate the concept of limits for sequences in the real numbers, consider the sequence defined by $a_n = \frac{1}{n}$ for $n \in \mathbb{N}$. This sequence converges to $0$: for any $\epsilon > 0$, choosing $N = \lceil 1/\epsilon \rceil$ ensures that for all $n > N$, $|a_n - 0| = \frac{1}{n} < \epsilon$. The following table shows the first ten terms of this sequence, demonstrating its approach to $0$:
n	$a_n = 1/n$
1	1.000
2	0.500
3	0.333
4	0.250
5	0.200
6	0.167
7	0.143
8	0.125
9	0.111
10	0.100
As $n$ increases, the terms decrease monotonically toward $0$. In contrast, the sequence $a_n = n$ diverges to $+\infty$, meaning that for every $M > 0$, there exists $N \in \mathbb{N}$ such that for all $n > N$, $a_n > M$. For instance, selecting $N = \lfloor M \rfloor + 1$ satisfies this condition. Another example is the sequence $a_n = (-1)^n$, which alternates between $-1$ and $1$ and does not converge to any real number, as the terms fail to get arbitrarily close to any single limit point. Finally, the sequence $a_n = \sin n$ is bounded between $-1$ and $1$ but does not converge, because its terms are dense in the interval $[-1, 1]$, visiting every subinterval infinitely often due to the irrationality of $\pi$.
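The failure of convergence for $a_n = (-1)^n$ can be phrased via subsequences: the even-index and odd-index subsequences settle at different values, which rules out a single limit. A minimal check (our own sketch):

```python
# a_n = (-1)^n: the even-index subsequence is constantly +1 and the
# odd-index subsequence constantly -1, so the full sequence has no limit.
def a(n: int) -> int:
    return (-1) ** n

evens = {a(n) for n in range(2, 100, 2)}
odds = {a(n) for n in range(1, 100, 2)}
assert evens == {1} and odds == {-1}
```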

Fundamental Properties

The algebra of limits for sequences in the real numbers allows convergent sequences to be combined using field operations. Specifically, if $\{a_n\}$ and $\{b_n\}$ are sequences converging to limits $A$ and $B$ respectively, and $c \in \mathbb{R}$ is a constant, then the sequence $\{c a_n + b_n\}$ converges to $cA + B$. Similarly, the product sequence $\{a_n b_n\}$ converges to $AB$, and if $B \neq 0$, the quotient sequence $\{a_n / b_n\}$ converges to $A/B$. A brief sketch of the proof of the sum rule relies on the triangle inequality: for any $\epsilon > 0$, there exist $N_1, N_2$ such that for $n > \max(N_1, N_2)$, $|a_n - A| < \epsilon/2$ and $|b_n - B| < \epsilon/2$, so $|(a_n + b_n) - (A + B)| \leq |a_n - A| + |b_n - B| < \epsilon$.

Limits of sequences also preserve order relations when they exist. If $\{a_n\}$ and $\{b_n\}$ converge to $A$ and $B$, and $a_n \leq b_n$ for all sufficiently large $n$, then $A \leq B$. This follows from the contrapositive: assuming $A > B$ leads to a contradiction via the definition of convergence and the order properties of the reals.

The squeeze theorem provides a powerful tool for establishing convergence indirectly. If $\{g_n\}$, $\{f_n\}$, and $\{h_n\}$ are sequences such that $g_n \leq f_n \leq h_n$ for all sufficiently large $n$, and both $\{g_n\}$ and $\{h_n\}$ converge to the same limit $L$, then $\{f_n\}$ also converges to $L$. The proof proceeds by showing that for any $\epsilon > 0$, the inequalities imply $|f_n - L| \leq \max(|g_n - L|, |h_n - L|) < \epsilon$ for large $n$.

Monotone sequences exhibit particularly nice convergence behavior in the reals. A sequence $\{a_n\}$ is monotone if it is either non-decreasing ($a_{n+1} \geq a_n$ for all $n$) or non-increasing ($a_{n+1} \leq a_n$ for all $n$). The monotone convergence theorem states that every bounded monotone sequence converges: if non-decreasing and bounded above, it converges to its supremum; if non-increasing and bounded below, to its infimum. This result is closely tied to the Bolzano–Weierstrass theorem, which guarantees that every bounded sequence has a convergent subsequence, allowing the limit of a monotone sequence to be identified as the least upper bound of its range.
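The monotone convergence theorem can be watched in action on the classical recursion $x_1 = \sqrt{2}$, $x_{k+1} = \sqrt{2 + x_k}$, which is increasing and bounded above by $2$ (our example; the limit is $2$, the positive root of $L = \sqrt{2 + L}$):

```python
import math

# x_1 = sqrt(2), x_{k+1} = sqrt(2 + x_k): increasing and bounded above by 2,
# hence convergent by the monotone convergence theorem; the limit is 2.
x = math.sqrt(2.0)
for _ in range(60):
    nxt = math.sqrt(2.0 + x)
    assert nxt >= x and nxt <= 2.0   # monotone and bounded at every step
    x = nxt
assert abs(x - 2.0) < 1e-9
```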

Infinite and Oscillatory Limits

In the context of sequences of real numbers, a sequence $\{a_n\}$ is said to diverge to $+\infty$ if, for every $M > 0$, there exists $N \in \mathbb{N}$ such that $a_n > M$ for all $n > N$. Similarly, the sequence diverges to $-\infty$ if, for every $M > 0$, there exists $N \in \mathbb{N}$ such that $a_n < -M$ for all $n > N$. This extends the notion of convergence beyond finite limits, capturing unbounded growth in a precise manner. A classic example is the sequence $a_n = n^2$, which diverges to $+\infty$ since the quadratic growth ensures terms exceed any positive bound for sufficiently large $n$. Another is $a_n = -n$, diverging to $-\infty$. These cases illustrate how sequences can "tend to infinity" without converging to a real number.

Oscillatory divergence occurs when a sequence fails to converge to any real limit $L$, meaning there exists some $\epsilon > 0$ such that for every $N \in \mathbb{N}$, the $\epsilon$-neighborhood of $L$ does not contain all but finitely many terms of the sequence. For instance, the sequence $a_n = \sin(n\pi/2)$ oscillates between $-1$, $0$, and $1$ indefinitely, preventing convergence to any single value. A more dramatic case is $a_n = n \sin(n)$, which oscillates with increasing amplitude and is thus unbounded.

One-sided limits are less common for sequences than for functions, but related behavior arises along subsequences, such as those of even or odd indices. If the subsequence of even terms converges to one value and the odd terms to another, the full sequence oscillates and diverges. For example, in $a_n = (-1)^n$, the even terms are constantly $1$ while the odd terms are $-1$, yielding distinct limiting behaviors along these subsequences.

All convergent sequences are bounded, meaning there exists $M > 0$ such that $|a_n| \leq M$ for all $n$. To see this, if $\{a_n\}$ converges to $L$, choose $\epsilon = 1$; then for $n > N$, $|a_n| < |L| + 1$, and bounding the finitely many initial terms yields an overall bound. However, the converse fails: bounded sequences like $\sin(n\pi/2)$ need not converge. In oscillatory cases, the squeeze theorem can sometimes be used to bound terms, as with alternating sequences trapped between constants.

Generalizations to Metric Spaces

Definition in Metric Spaces

In a metric space $(X, d)$, where $X$ is a set and $d: X \times X \to [0, \infty)$ is a metric satisfying the usual axioms of non-negativity, symmetry, and the triangle inequality, the notion of convergence of a sequence generalizes the real-number case. The real line $\mathbb{R}$ equipped with the standard metric $d(a, b) = |a - b|$ serves as the prototypical example.

A sequence $\{x_n\}_{n=1}^\infty$ in $X$ converges to a limit $x \in X$, denoted $\lim_{n \to \infty} x_n = x$ or $x_n \to x$, if for every $\epsilon > 0$, there exists a positive integer $N$ such that $d(x_n, x) < \epsilon$ for all $n > N$. This definition captures the intuitive idea that the terms $x_n$ get arbitrarily close to $x$ as $n$ increases, measured via the metric $d$. The condition $d(x_n, x) < \epsilon$ is equivalent to stating that the sequence terms eventually lie inside the open ball centered at the limit point, defined as $B(x, \epsilon) = \{ y \in X \mid d(y, x) < \epsilon \}$. These open balls form the basic neighborhoods in the topology induced by the metric, providing a geometric foundation for convergence without relying on any structure beyond the distance function.

In contrast to the real numbers, where the total order enables properties such as the monotone convergence theorem for bounded increasing sequences, general metric spaces lack a canonical ordering; thus, convergence depends purely on the metric-induced topology rather than on order-based supremum or infimum principles. A concrete illustration occurs in Euclidean space $\mathbb{R}^m$ with the standard Euclidean metric $d(\mathbf{u}, \mathbf{v}) = \sqrt{\sum_{k=1}^m (u_k - v_k)^2}$.