Conditional convergence
In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely.
Definition
More precisely, a series of real numbers $\sum_{n=1}^\infty a_n$ is said to converge conditionally if $\lim_{m\to\infty}\sum_{n=1}^m a_n$ exists (as a finite real number, i.e. not $\infty$ or $-\infty$), but $\sum_{n=1}^\infty |a_n| = \infty$.
A classic example is the alternating harmonic series given by $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$, which converges to $\ln 2$, but is not absolutely convergent (see Harmonic series).
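As a quick numerical check of this behaviour, the sketch below (plain Python; the truncation lengths are arbitrary choices) sums the first terms of the alternating harmonic series and of its term-by-term absolute values: the former settles near $\ln 2 \approx 0.693$, while the latter keeps growing.

```python
import math

# Partial sums of the alternating harmonic series and of its absolute values.
# The alternating sums approach ln 2, while the absolute sums (the harmonic
# series) grow without bound -- the signature of conditional convergence.
def partial_sums(N):
    alt, absolute = 0.0, 0.0
    for n in range(1, N + 1):
        term = (-1) ** (n + 1) / n
        alt += term
        absolute += abs(term)
    return alt, absolute

for N in (10, 100, 10_000, 1_000_000):
    alt, absolute = partial_sums(N)
    print(f"N={N:>9}:  alternating sum = {alt:.6f}   absolute sum = {absolute:10.3f}")

print(f"ln 2 = {math.log(2):.6f}")
```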
Bernhard Riemann proved that a conditionally convergent series may be rearranged to converge to any value at all, including ∞ or −∞; see Riemann series theorem. Agnew's theorem describes rearrangements that preserve convergence for all convergent series.
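The idea behind Riemann's theorem can be illustrated numerically: greedily take positive terms of the alternating harmonic series until the running sum exceeds a chosen target, then negative terms until it drops below, and repeat. The sketch below is a minimal illustration of that greedy strategy (the target value 2.5 and the term counts are arbitrary choices), not a proof of the theorem.

```python
# Greedy rearrangement of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# toward an arbitrary target, illustrating the Riemann series theorem.
def rearranged_partial_sum(target, num_terms):
    pos = 1      # next positive term is 1/pos with pos odd: 1, 1/3, 1/5, ...
    neg = 2      # next negative term is -1/neg with neg even: -1/2, -1/4, ...
    total = 0.0
    for _ in range(num_terms):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

target = 2.5  # any real number can be approached this way
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} terms: rearranged sum = {rearranged_partial_sum(target, n):.6f}")
```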
The Lévy–Steinitz theorem identifies the set of values to which a series of terms in $\mathbb{R}^n$ can converge.
Improper integrals may also be conditionally convergent. A typical example of a conditionally convergent integral is $\int_0^\infty \sin(x^2)\,dx$ (see Fresnel integral), where the integrand oscillates between positive and negative values indefinitely, with each oscillation enclosing a smaller area than the last.
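A quick numerical check of this behaviour, assuming the Fresnel-type integrand $\sin(x^2)$ given above (the grid spacing and cut-offs below are arbitrary choices): the signed integral up to $b$ settles near $\sqrt{\pi/8} \approx 0.6267$ as $b$ grows, while the integral of $|\sin(x^2)|$ keeps growing, so the convergence is only conditional.

```python
import numpy as np

# Compare the oscillatory Fresnel-type integral of sin(x^2) (conditionally
# convergent) with the integral of |sin(x^2)| (divergent) over [0, b].
def integrals_up_to(b, dx=1e-4):
    x = np.arange(0.0, b, dx) + dx / 2.0    # midpoints of a uniform grid
    y = np.sin(x ** 2)
    signed = np.sum(y) * dx                 # approaches sqrt(pi/8) as b grows
    unsigned = np.sum(np.abs(y)) * dx       # grows without bound as b grows
    return signed, unsigned

for b in (10.0, 30.0, 100.0):
    signed, unsigned = integrals_up_to(b)
    print(f"b = {b:6.1f}:  signed integral ~ {signed:.4f}   unsigned integral ~ {unsigned:.2f}")

print(f"sqrt(pi/8) ~ {np.sqrt(np.pi / 8):.4f}")
```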
Conditional convergence
Convergence Basics
Absolute Convergence
In mathematics, a series of real or complex numbers $\sum a_n$ is said to converge absolutely if the series of the absolute values $\sum |a_n|$ converges to a finite limit.[10] This condition ensures a stronger form of convergence than ordinary (or conditional) convergence, where the partial sums $s_n = \sum_{k=1}^n a_k$ approach a limit without regard to the signs of the terms. Absolute convergence implies ordinary convergence because, for $m > n$, $|s_m - s_n| = \left|\sum_{k=n+1}^m a_k\right| \le \sum_{k=n+1}^m |a_k|$, and the right-hand side approaches 0 as $n, m \to \infty$ since $\sum |a_n|$ converges; thus $\{s_n\}$ is a Cauchy sequence and converges to some finite $S$ with $|S| \le \sum_{n=1}^\infty |a_n|$. A key property is that absolute convergence is invariant under permutations of the terms, meaning the sum remains the same regardless of the order in which the terms are added.[10] The concept of absolute convergence was first rigorously defined by Augustin-Louis Cauchy in his 1821 work Cours d'analyse, where he distinguished it from ordinary convergence to address issues in series summation.[11] Karl Weierstrass later emphasized its importance in the mid-19th century through his lectures on function theory, highlighting its role in ensuring robust behavior of series under rearrangements and in uniform convergence tests.[12] A classic example is the geometric series $\sum_{n=0}^\infty r^n$ for $|r| < 1$, which converges absolutely because $\sum_{n=0}^\infty |r|^n$ is also a geometric series with ratio $|r| < 1$, summing to $\frac{1}{1-|r|}$.[13] In contrast, conditional convergence occurs when $\sum a_n$ converges but $\sum |a_n|$ diverges, making the sum sensitive to term order.[10]
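A small numerical sketch of the comparison argument above, using the geometric series with ratio $r = -0.9$ (an arbitrary choice): the tail bound $|s_m - s_n| \le \sum_{k=n+1}^m |a_k|$ is checked directly, and the sum is seen to satisfy $|S| \le \sum |a_n|$.

```python
# Geometric series a_k = r**k with |r| < 1: check the tail estimate
# |s_m - s_n| <= sum_{k=n+1}^m |a_k| and the bound |S| <= sum |a_k|.
r = -0.9  # arbitrary ratio with |r| < 1

def tail(n, m):
    """Return (|s_m - s_n|, sum of |a_k| for k = n+1 .. m)."""
    signed = sum(r ** k for k in range(n + 1, m + 1))
    unsigned = sum(abs(r) ** k for k in range(n + 1, m + 1))
    return abs(signed), unsigned

for n, m in ((10, 20), (50, 200), (100, 1000)):
    lhs, rhs = tail(n, m)
    print(f"n={n:4d}, m={m:4d}:  |s_m - s_n| = {lhs:.3e}  <=  {rhs:.3e}")

S = 1.0 / (1.0 - r)             # sum of the geometric series
abs_sum = 1.0 / (1.0 - abs(r))  # sum of the absolute series
print(f"|S| = {abs(S):.4f}  <=  sum |a_n| = {abs_sum:.4f}")
```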
Conditional Convergence
In mathematics, particularly in the study of infinite series, conditional convergence describes a scenario where a series converges, but only in a manner dependent on the specific order of its terms. A series $\sum_{n=1}^\infty a_n$ is said to converge conditionally if the sequence of its partial sums $s_n = \sum_{k=1}^n a_k$ converges to a finite limit $S$, yet the series does not converge absolutely.[14] This contrasts with absolute convergence, where the series $\sum_{n=1}^\infty |a_n|$ converges, providing a stronger form of convergence that implies ordinary convergence regardless of term arrangement.[15] Formally, $\sum_{n=1}^\infty a_n$ converges conditionally to $S$ if $\lim_{n\to\infty} s_n = S$ and $\sum_{n=1}^\infty |a_n| = \infty$.[14] Here, the failure of absolute convergence serves as the defining prerequisite, as established in standard real analysis texts: every absolutely convergent series is convergent, making conditional convergence the residual case of convergence without this absolute property.[15] A key implication of conditional convergence is its sensitivity to the order of terms; unlike absolutely convergent series, rearrangements of the terms in a conditionally convergent series can yield a different sum or even cause divergence.[14] This order dependence arises precisely because the absolute series diverges, allowing the partial sums to be influenced by how positive and negative terms are interleaved.[15] The distinction from absolute convergence underpins the basic characterization of conditional convergence through a simple proof sketch: since absolute convergence implies convergence via the triangle inequality—specifically, for $m > n$, $\left|\sum_{k=n+1}^m a_k\right| \le \sum_{k=n+1}^m |a_k|$, which approaches 0 as $n, m \to \infty$ if $\sum |a_n|$ converges—any convergent series that lacks absolute convergence must be conditionally convergent.[16]
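One way to see this order sensitivity numerically: for a conditionally convergent series, the positive terms alone and the negative terms alone each sum to a divergent series, which is what gives a rearrangement room to steer the partial sums. The sketch below (plain Python; the truncation lengths are arbitrary choices) illustrates this for the alternating harmonic series.

```python
# For the alternating harmonic series, the sums of the positive terms alone
# and of the negative terms alone both diverge, even though the full
# (interleaved) series converges -- the mechanism behind order dependence.
def split_sums(N):
    pos = sum(1.0 / n for n in range(1, N + 1, 2))    # 1 + 1/3 + 1/5 + ...
    neg = sum(-1.0 / n for n in range(2, N + 1, 2))   # -1/2 - 1/4 - 1/6 - ...
    return pos, neg

for N in (100, 10_000, 1_000_000):
    pos, neg = split_sums(N)
    print(f"N={N:>9}:  positive part = {pos:8.3f}   negative part = {neg:9.3f}"
          f"   interleaved total = {pos + neg:.6f}")
```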
Examples and Illustrations
Alternating Harmonic Series
The alternating harmonic series is defined as the infinite series $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$. This series serves as the canonical example of conditional convergence in the context of infinite series.[17]
The convergence of the alternating harmonic series follows from the alternating series test, which states that an alternating series $\sum_{n=1}^\infty (-1)^{n+1} b_n$ with $b_n > 0$ converges if the sequence $(b_n)$ is decreasing and $\lim_{n\to\infty} b_n = 0$. Here, $b_n = \frac{1}{n}$, which is positive, decreasing, and approaches 0 as $n \to \infty$. The proof relies on showing that the sequence of partial sums is bounded and monotonic in subsequences: the odd-indexed partial sums decrease and are bounded below by 0, while the even-indexed partial sums increase and are bounded above by 1, implying both converge to the same limit by the monotone convergence theorem.[18]
The sum of the series equals $\ln 2$. This result arises as the special case $x = 1$ of the Mercator series (the Taylor expansion of $\ln(1+x)$), extended to the endpoint by Abel summation since the series converges at the endpoint of its interval of convergence $(-1, 1]$.[17][19]
The series does not converge absolutely because the absolute value series $\sum_{n=1}^\infty \frac{1}{n}$ is the harmonic series, which diverges. The divergence of the harmonic series can be established using the integral test: consider $f(x) = \frac{1}{x}$, which is positive, continuous, and decreasing on $[1, \infty)$; the improper integral $\int_1^\infty \frac{dx}{x}$ diverges, so the series diverges by comparison, as the partial sums exceed the integral from 1 to $n+1$. Alternatively, it is a $p$-series with $p = 1$, which diverges.[20][21]
For approximation, the error when truncating at the $n$th partial sum $s_n$ is bounded by the magnitude of the next term: $|\ln 2 - s_n| \le \frac{1}{n+1}$, with the true sum lying between $s_n$ and $s_{n+1}$. An explicit formula for the partial sum is $s_n = \ln 2 + (-1)^{n+1} \int_0^1 \frac{x^n}{1+x}\,dx$, where the integral term represents the remainder and alternates in sign while decreasing to 0.[18][22]
Numerical illustration of the partial sums approaching $\ln 2 \approx 0.693147$ is shown below for the first six terms; a short script reproducing these values follows the table.

| $n$ | Partial Sum $s_n$ | Approximation Error |
|---|---|---|
| 1 | 1.000000 | 0.306853 |
| 2 | 0.500000 | 0.193147 |
| 3 | 0.833333 | 0.140186 |
| 4 | 0.583333 | 0.109814 |
| 5 | 0.783333 | 0.090186 |
| 6 | 0.616667 | 0.076480 |
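
The table's entries, the error bound $|\ln 2 - s_n| \le \frac{1}{n+1}$, and the integral form of the remainder given above can be reproduced with a few lines of Python as a sanity check (the number of integration steps is an arbitrary choice):

```python
import math

# Reproduce the table: partial sums s_n of the alternating harmonic series,
# the error |s_n - ln 2|, the remainder integral_0^1 x^n/(1+x) dx (midpoint
# rule), and the alternating series test bound 1/(n+1).
ln2 = math.log(2)
s = 0.0
steps = 100_000
dx = 1.0 / steps
for n in range(1, 7):
    s += (-1) ** (n + 1) / n
    error = abs(s - ln2)
    remainder = sum((((k + 0.5) * dx) ** n) / (1 + (k + 0.5) * dx)
                    for k in range(steps)) * dx
    print(f"n={n}:  s_n = {s:.6f}   error = {error:.6f}   "
          f"remainder integral = {remainder:.6f}   bound 1/(n+1) = {1 / (n + 1):.6f}")
```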
