Iteration
from Wikipedia

Iteration means repeating a process to generate a (possibly unbounded) sequence of outcomes. Each repetition of the process is a single iteration, and the outcome of each iteration is the starting point of the next iteration.

In mathematics and computer science, iteration (along with the related technique of recursion) is a standard element of algorithms.

Mathematics

[Image: An iteration of nine squeeze mappings applied to a hyperbolic sector]

In mathematics, iteration may refer to the process of iterating a function, i.e. applying a function repeatedly, using the output from one iteration as the input to the next. Iteration of apparently simple functions can produce complex behaviors and difficult problems – for examples, see the Collatz conjecture and juggler sequences.
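As a brief illustration, here is a minimal Python sketch of iterating a function in this way, using one step of the Collatz map; the starting value 27 is illustrative, not from the original text:

# Function iteration: each output becomes the next input.
def collatz_step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

n = 27
steps = 0
while n != 1:              # feed the output back in as input
    n = collatz_step(n)
    steps += 1
print(steps)               # 111 steps for a start of 27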

Another use of iteration in mathematics is in iterative methods which are used to produce approximate numerical solutions to certain mathematical problems. Newton's method is an example of an iterative method. Manual calculation of a number's square root is a common use and a well-known example.
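A minimal sketch of this kind of square-root iteration in Python, using the repeated-averaging (Babylonian/Heron) scheme, which is a special case of Newton's method; the tolerance and names are illustrative assumptions:

# Iteratively refine a square-root estimate until it is accurate enough.
def sqrt_iter(s, guess=1.0, tolerance=1e-10):
    while abs(guess * guess - s) > tolerance:
        guess = (guess + s / guess) / 2   # average the guess with s/guess
    return guess

print(sqrt_iter(2.0))   # approx 1.41421356...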

Computing


In computing, iteration is a technique that marks out a block of statements within a computer program for a defined number of repetitions. That block of statements is said to be iterated. A computer programmer might also refer to that block of statements as an iteration.

Implementations


Loops constitute the most common language constructs for performing iterations. The following pseudocode "iterates" the line of code between begin and end three times through a for loop, adding the successive values of i to a.

a := 0
for i := 1 to 3 do       { loop three times }
begin
    a := a + i;          { add the current value of i to a }
end;
print(a);                { the number 6 is printed (0 + 1 = 1; 1 + 2 = 3; 3 + 3 = 6) }

It is permissible, and often necessary, to use values from other parts of the program outside the bracketed block of statements, to perform the desired function.

Iterators constitute alternative language constructs to loops, which ensure consistent iteration over specific data structures and can save time and effort in later coding. In particular, an iterator allows one to repeat the same kind of operation at each node of such a data structure, often in some pre-defined order.
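As a hedged illustration, the following Python sketch drives an iterator explicitly and then with the idiomatic for loop; the list and names are illustrative, not from the original text:

values = [10, 20, 30]
it = iter(values)          # obtain an iterator over the data structure
while True:
    try:
        v = next(it)       # advance to the next element
    except StopIteration:  # raised when the structure is exhausted
        break
    print(v)

# Idiomatic form: the for loop drives the same iterator protocol.
for v in values:
    print(v)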

Iteratees are purely functional language constructs, which accept or reject data during the iterations.

Relation with recursion


Recursion and iteration have different algorithmic definitions, even though they can generate identical results. The primary difference is that recursion can be used without prior knowledge of how many times the action must repeat, while a successful iteration requires that foreknowledge.

Some types of programming languages, known as functional programming languages, are designed such that they do not set up a block of statements for explicit repetition, as with the for loop. Instead, those programming languages exclusively use recursion. Rather than call out a block of code to repeat a pre-defined number of times, the executing code block instead "divides" the work into a number of separate pieces, after which the code block executes itself on each individual piece. Each piece of work is divided repeatedly until the "amount" of work is as small as possible, at which point the algorithm does that work very quickly. The algorithm then "reverses" and reassembles the pieces into a complete whole.

The classic example of recursion is in list-sorting algorithms, such as merge sort. The merge sort recursive algorithm first repeatedly divides the list into consecutive pairs. Each pair is then ordered, then each consecutive pair of pairs, and so forth until the elements of the list are in the desired order.
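As a sketch of the divide-and-reassemble process just described, here is a minimal recursive merge sort in Python; the names and sample list are illustrative, not from the original text:

def merge_sort(items):
    if len(items) <= 1:               # smallest piece of work: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide: sort each half recursively
    right = merge_sort(items[mid:])
    merged = []                       # reassemble: merge the ordered halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]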

The code below is an example of a recursive algorithm in the Scheme programming language that outputs the same result as the pseudocode under the previous heading.

(let iterate ((i 1) (a 0))      ; named let: i is the counter, a the accumulator
  (if (<= i 3)
    (iterate (+ i 1) (+ a i))   ; tail call stands in for the loop's next pass
    (display a)))               ; prints 6, matching the pseudocode above

Education


In some schools of pedagogy, iterations are used to describe the process of teaching or guiding students to repeat experiments, assessments, or projects, until more accurate results are found, or the student has mastered the technical skill. This idea is found in the old adage, "Practice makes perfect." In particular, "iterative" is defined as the "process of learning and development that involves cyclical inquiry, enabling multiple opportunities for people to revisit ideas and critically reflect on their implication."[1]

Unlike computing and math, educational iterations are not predetermined; instead, the task is repeated until success according to some external criteria (often a test) is achieved.

from Grokipedia
Iteration is the repeated application of a procedure, where the output of each repetition serves as the input for the subsequent one, often until a desired condition or convergence criterion is achieved. The term originates from the Latin iteratio, meaning "repetition" or "doing again". This concept is foundational across disciplines, enabling the generation of sequences that approximate solutions to complex problems.

In mathematics, iteration typically involves applying a function f successively to an initial value x_0, producing a sequence x_{n+1} = f(x_n) that may converge to a fixed point where x = f(x). Such methods are central to numerical analysis for solving equations, as seen in techniques like the Newton-Raphson method, which iteratively refines root approximations using the formula x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}. Iterative processes are particularly valuable for problems lacking closed-form solutions, such as finding eigenvalues or optimizing functions, and their convergence depends on properties like the Lipschitz constant of the function.

In computing, iteration manifests as a control structure that repeats a block of code, contrasting with recursion by avoiding stack overhead through explicit loops. Common implementations include for loops for a fixed number of repetitions, while loops for condition-based repetition, and do-while loops that ensure at least one execution. These constructs underpin algorithms like sorting (e.g., bubble sort, which iteratively scans and swaps elements) and searching, enhancing computational efficiency for tasks involving large datasets. Iteration's role extends to software development practices, such as agile methodologies, where it denotes incremental cycles of development, testing, and refinement to evolve products iteratively.

Beyond these core areas, iteration influences fields like science and engineering through simulation models that repeat scenarios to predict outcomes, always grounded in the principle of progressive refinement.

Definition and Overview

Basic Concept

Iteration refers to the repetition of a set of operations or instructions until a desired condition is met, serving as a core mechanism for achieving outcomes through successive applications of a procedure. The term derives from the Latin iteratio ("repetition"), from iterare ("to do again"). For instance, in approximating the value of a constant like pi, one might repeatedly apply a simple arithmetic rule, refining the estimate with each cycle until sufficient accuracy is reached.

Iteration can be finite, involving a predetermined number of repetitions, or potentially infinite, continuing until a specific termination criterion such as convergence to a target value is satisfied. In finite cases, the process halts after a fixed number of steps, ensuring predictability, whereas infinite iteration risks endless execution unless guarded by a stopping rule.

In everyday routines, iteration manifests as sequential repetitions, akin to the step-by-step actions in following a recipe, where one measures ingredients, stirs, and checks cyclically to perfect the result. This mirrors broader problem-solving, where iteration acts as a foundational building block, enabling incremental refinement to construct more intricate systems in fields such as engineering and optimization. In mathematics and computing, it underpins methods for successive approximations and loop-based executions, respectively.
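As an illustrative sketch, the following Python function approximates pi by iteration with a convergence-based stopping rule, using the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...; the tolerance and names are assumptions for the example:

def approximate_pi(tolerance=1e-6):
    estimate = 0.0
    k = 0
    while True:
        term = (-1) ** k / (2 * k + 1)   # next term of the series
        if abs(term) < tolerance:        # termination criterion: convergence
            break
        estimate += term
        k += 1
    return 4 * estimate

print(approximate_pi())   # approx 3.14159...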

Historical Origins

The concept of iteration traces its origins to ancient mathematical practices, particularly the solving of equations through successive approximations. Around 2000 BCE, Babylonian mathematicians employed iterative techniques to compute square roots, using a method that involved repeatedly averaging an initial guess with the quotient obtained by dividing the target number by that guess, yielding increasingly accurate results. This approach represented an early recognition of the power of repetition to refine solutions, applied to problems in geometry and algebra recorded on clay tablets.

In the 7th century CE, the Indian mathematician Brahmagupta advanced iterative approximations in his treatise Brahmasphutasiddhanta, providing an algorithm for square roots that built upon earlier Indian traditions by iteratively adjusting estimates through arithmetic operations. His method, which involved successive refinements to achieve practical accuracy, exemplified the growing sophistication of iterative processes in arithmetic and their integration into astronomical calculations.

The 17th century marked a pivotal development with Newton's formulation of an iterative technique for root-finding, detailed in his manuscript De analysi per aequationes numero terminorum infinitas, which used tangent lines to approximate roots through repeated corrections. This method, later known as Newton's method, formalized iteration as a systematic tool for solving nonlinear equations, influencing subsequent numerical analysis.

By the late 19th century, Henri Poincaré contributed to the formalization of iterative functions in mathematics, particularly through his 1880s investigations into dynamical systems and celestial mechanics, where he analyzed the behavior of repeated mappings to study stability and chaos. His work in Les Méthodes Nouvelles de la Mécanique Céleste (1892–1899), rooted in earlier papers, established iteration as a core concept in understanding periodic and recurrent phenomena.

In the 20th century, Alan Turing's theoretical model of computation introduced iteration into formal computing frameworks via the Turing machine, described in his 1936 paper "On Computable Numbers", which enabled repetitive state transitions to simulate loops and algorithmic repetition. This abstraction laid the groundwork for programmable iteration in digital computers. The mid-20th century saw iteration fully integrated into practical computing through John von Neumann's stored-program concept, outlined in the 1945 First Draft of a Report on the EDVAC, which allowed programs, including iterative loops, to be stored and executed in the same memory as data, revolutionizing electronic computation. This architecture enabled efficient repetition in software, transforming iteration from a mathematical tool into a foundational element of digital systems.

Mathematics

Iterative Methods

Iterative methods in mathematics involve generating a sequence of approximations where each subsequent term is computed based on the previous ones, typically expressed as x_{n+1} = g(x_n) for some function g, to solve equations or systems that lack closed-form solutions. These methods are particularly valuable for addressing problems where direct analytical approaches are infeasible, such as finding roots of nonlinear equations or solving large sparse linear systems.

Common examples include the bisection method for root-finding, which iteratively halves an interval [a, b] containing a root of a function f(x) = 0 (where f(a) f(b) < 0) by evaluating the midpoint and updating the bracket based on sign changes, ensuring steady convergence regardless of the function's derivative. For linear systems Ax = b, the Jacobi method updates each component independently using values from the previous iteration, formulated as

x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right),

while the Gauss-Seidel method incorporates newly computed values immediately, as in

x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right),

often leading to faster convergence under suitable conditions.

A foundational result for fixed-point iteration is the fixed-point theorem, which states that if g is continuous on [a, b] with g(x) \in [a, b] for all x \in [a, b], and if |g'(x)| \leq k < 1 for some constant k and all x \in (a, b), then there exists a unique fixed point p \in [a, b] such that g(p) = p, and the sequence x_{n+1} = g(x_n) converges to p for any initial x_0 \in [a, b]. This condition on the derivative ensures the iteration acts as a contraction mapping, guaranteeing linear convergence to the solution.

In numerical analysis, iterative methods are essential for tackling intractable problems like nonlinear equations and large-scale linear systems arising from discretizations, where direct methods such as Gaussian elimination become computationally prohibitive due to time and memory demands. They enable practical solutions by exploiting problem structure, such as sparsity or diagonal dominance, to achieve efficient approximations without requiring exact inversion.
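A minimal Python sketch of the Jacobi update described above, assuming a small diagonally dominant system; the matrix, vector, and iteration count are illustrative:

def jacobi(A, b, x0, num_iters=25):
    n = len(b)
    x = x0[:]
    for _ in range(num_iters):
        x_new = [0.0] * n
        for i in range(n):
            # Use only values from the previous iterate (the Jacobi update).
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

A = [[4.0, 1.0], [2.0, 5.0]]        # diagonally dominant, so Jacobi converges
b = [9.0, 12.0]
print(jacobi(A, b, [0.0, 0.0]))     # approaches the exact solution [11/6, 5/3]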

Convergence Analysis

In the analysis of iterative methods, convergence refers to the process by which the sequence generated by an iteration approaches a fixed point or solution under specified conditions. A foundational result is the Banach fixed-point theorem, which states that if (X, d) is a complete metric space and T: X \to X is a contraction mapping, meaning there exists k < 1 such that d(T(x), T(y)) \leq k \, d(x, y) for all x, y \in X, then T has a unique fixed point x^* \in X, and the sequence defined by x_{n+1} = T(x_n) converges to x^* from any initial x_0 \in X. This convergence is linear, with the error satisfying d(x_{n+1}, x^*) \leq k \, d(x_n, x^*), ensuring the distance to the fixed point decreases by at least the factor k at each step.

The order of convergence quantifies the speed at which the error diminishes. For linear convergence, the error e_n = |x_n - x^*| reduces asymptotically by a constant factor \rho < 1, so e_{n+1} \approx \rho e_n, as seen in the iteration from a contraction mapping under the Banach theorem. Higher orders provide faster reduction; quadratic convergence occurs when e_{n+1} \approx C e_n^2 for some constant C > 0, meaning the number of correct digits roughly doubles with each iteration. Newton's method for root-finding, given by x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, achieves quadratic convergence near a simple root x^* where f(x^*) = 0 and f'(x^*) \neq 0, provided the initial guess is sufficiently close and f is twice continuously differentiable.

For fixed-point iteration x_{n+1} = g(x_n), error bounds can be derived from the Lipschitz constant of g. If |g'(x)| \leq K < 1 on an interval containing the fixed point x^* and the iterates, then the error satisfies |x_{n+1} - x^*| \leq K |x_n - x^*|, implying linear convergence with rate K. More generally, the a posteriori error estimate |x_{n+1} - x^*| \leq \frac{K}{1 - K} |x_{n+1} - x_n| provides a practical way to assess convergence without knowing x^*, assuming the iterates remain in the interval where the bound holds.

Convergence is not guaranteed in all cases; divergence or instability arises when the iteration amplifies errors. For fixed-point iteration, if |g'(x^*)| > 1, the fixed point is repulsive, and the sequence typically diverges from x^* unless starting exactly at it, as small perturbations grow exponentially. Even if |g'(x^*)| = 1, convergence may fail or be very slow, highlighting the need for |g'(x^*)| < 1 as a sufficient local condition for attraction.
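The following Python sketch applies Newton's method to f(x) = x^2 - 2 (root sqrt(2)) and prints the error at each step, which roughly squares per iteration as quadratic convergence predicts; the function and starting guess are illustrative assumptions:

import math

def newton(f, fprime, x0, num_iters=6):
    x = x0
    for n in range(num_iters):
        x = x - f(x) / fprime(x)                  # x_{n+1} = x_n - f(x_n)/f'(x_n)
        print(n + 1, x, abs(x - math.sqrt(2)))    # iterate and current error
    return x

newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)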

Computer Science

Algorithmic Iteration

In algorithm design, iteration serves as a core mechanism for executing repetitive operations on discrete data structures, enabling efficient processing of inputs by repeating a fixed set of steps until a specified condition is met. This technique is particularly vital in handling collections like arrays, where operations such as searching or sorting require systematic traversal without redundant computations. Unlike one-pass direct solutions that may overlook elements, iterative approaches ensure completeness by cycling through elements in controlled loops, optimizing for efficiency in computational problems.

A prominent example is the bubble sort algorithm, which iteratively compares adjacent elements in an array and swaps them if they are out of order, propagating larger elements toward the end in each pass until no further swaps are needed. This process typically involves nested loops: an outer loop for passes and an inner loop for comparisons, continuing until the array achieves sorted order. The algorithm's simplicity makes it illustrative of iteration's role in transforming unsorted data, though its quadratic performance limits practical use for large datasets.

Time complexity analysis highlights iteration's efficiency in basic algorithms; for instance, a simple loop traversing an array once, as in linear search, incurs O(n) time, where n is the array length, since each element is examined proportionally to the input size. This contrasts with direct solutions like constant-time lookups in pre-indexed structures, but iteration dominates for unprocessed data where preprocessing is infeasible. In pseudocode, a loop for linear search might appear as follows, initializing an index and advancing until the target is found or the end is reached:

procedure LinearSearch(array A, target t)
    i ← 0
    while i < length(A) and A[i] ≠ t
        i ← i + 1
    if i < length(A)
        return i         // found at index i
    else
        return -1        // not found

This structure executes up to n iterations in the worst case, establishing linear scaling. Iteration also plays a key role in adapting divide-and-conquer strategies to loop-based implementations, avoiding recursive overhead while preserving logarithmic complexity. For example, iterative binary search on a sorted array repeatedly bisects the search interval using low and high pointers, updating bounds based on comparisons until the target is located or the interval empties. This approach mirrors the recursive variant but uses a single loop for all halvings, achieving O(log n) time by reducing the problem size exponentially per iteration. Pseudocode for this is:

procedure IterativeBinarySearch(sorted array A, target t)
    low ← 0
    high ← length(A) - 1
    while low ≤ high
        mid ← floor((low + high) / 2)
        if A[mid] = t
            return mid
        else if A[mid] < t
            low ← mid + 1
        else
            high ← mid - 1
    return -1            // not found

Such iterative formulations enhance space efficiency, as they require only O(1) extra memory beyond the input.

Programming Constructs

In imperative programming languages, iteration is primarily implemented through loop constructs that allow repeated execution of code blocks based on predefined conditions or counters. These structures enable developers to express repetitive tasks directly in the source code, facilitating efficient processing of sequences or collections. The for loop is a common construct for fixed-iteration scenarios, where the number of repetitions is known in advance, such as iterating over a range from 1 to n by initializing a counter, checking its bounds, and incrementing it after each iteration. This design minimizes boilerplate code for counting loops, making it suitable for tasks like array traversal. In contrast, the while loop supports condition-based iteration, evaluating a boolean expression before executing the loop body and continuing only if the condition remains true, which is ideal for scenarios where the iteration count is indeterminate, such as processing input until an end-of-file signal. The do-while loop, a variant of the while loop, performs the condition check after the body executes at least once, ensuring initial execution regardless of the condition and proving useful for interactive prompts or validation cycles.

Infinite loops occur when a loop's termination condition is never met, such as a while loop with a perpetually true condition like while(true), potentially causing program hangs unless interrupted externally or by design for event-driven systems. To manage such loops and provide fine-grained control, statements like break terminate the loop prematurely upon meeting a specific criterion, while continue skips the remainder of the current iteration and advances to the next, allowing selective omission of operations without restructuring the loop. These mechanisms enhance flexibility in handling exceptional cases within iterative logic.

In functional languages like Haskell, iteration eschews explicit loops in favor of higher-order functions such as folds, which recursively apply an operation across a data structure like a list to accumulate a result, promoting immutability and composability over the mutable state changes seen in imperative loops. For instance, foldr processes elements from right to left, enabling declarative expressions of iteration that align with the paradigm's emphasis on pure functions.

Performance optimizations for iterative constructs often involve loop unrolling, a technique that replicates the loop body multiple times to reduce overhead from branch instructions and condition checks, thereby increasing instruction-level parallelism and execution speed, particularly in tight loops with simple bodies. This method can yield significant speedups in compute-intensive applications but requires careful factor selection to avoid excessive code growth.
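The following Python sketches illustrate these constructs under stated assumptions; Python has no built-in do-while, so it is emulated with while True plus break, and the values are illustrative:

total = 0
for i in range(1, 4):          # fixed-iteration for loop: i = 1, 2, 3
    total += i

n = total
while n > 0:                   # condition-based while loop
    n -= 1

while True:                    # do-while emulation: body runs at least once
    response = "done"          # e.g., read user input here
    if response == "done":     # condition checked after the body
        break

for i in range(10):
    if i % 2 == 0:
        continue               # skip even values, advance to the next iteration
    if i > 7:
        break                  # terminate the loop early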

Comparison with Recursion

Iteration and recursion represent alternative approaches to repetition in computing: recursion achieves repetition through a function invoking itself to address subproblems via self-reference, whereas iteration uses loop structures to execute code blocks repeatedly without expanding the call stack. Tail recursion, a specific form where the recursive call occurs as the final operation without subsequent computations, enables optimization in certain languages by converting the recursion into an iterative process, thereby reusing the existing stack frame and preventing excessive memory allocation. In terms of space complexity, iteration generally requires constant space O(1), relying on fixed variables within the loop, whereas recursion demands O(n) space due to the accumulation of stack frames for each call level. For instance, a recursive factorial function for input n creates n stack frames, each storing parameters and return states, while an iterative equivalent employs a single loop with a running product variable, maintaining constant space usage.

// Recursive factorial (O(n) space)
function factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

// Iterative factorial (O(1) space)
function factorial(n):
    result = 1
    for i from 1 to n:
        result = result * i
    return result

Iteration suits straightforward linear tasks where performance and memory matter, whereas recursion excels in modeling hierarchical structures such as trees, where traversals leverage the call stack to track paths implicitly.

Applications

Educational Contexts

Iteration plays a foundational role in K-12 computer science curricula, where it is often introduced through visual and interactive tools to build early computational thinking skills. One seminal example is the Logo programming language, developed in the late 1960s and widely adopted in the 1970s for elementary education, which uses iterative commands to control a "turtle" graphic that draws shapes on the screen. Students learn repetition by issuing commands like REPEAT 4 [FORWARD 100 RIGHT 90] to create a square, fostering an intuitive understanding of loops through immediate visual feedback. This approach emphasizes problem-solving in a low-stakes environment, helping young learners grasp iteration without abstract syntax.

At the university level, iteration is a core topic in introductory algorithms courses, where it is analyzed formally through concepts like loop invariants to ensure algorithmic correctness. These courses typically include exercises where students identify invariants, properties that remain true before and after each loop iteration, for algorithms such as summing elements or searching sorted lists. For instance, in a loop computing the sum of numbers from 1 to n, the invariant might state that the partial sum up to the current index equals the closed-form sum for that range, allowing proof of termination and accuracy. Such pedagogical strategies bridge theoretical reasoning with practical programming, preparing students for advanced algorithm design.

The pedagogical benefits of teaching iteration extend to enhancing problem-solving abilities, particularly by promoting decomposition and abstraction as outlined in Jeannette Wing's computational thinking framework. Iteration encourages students to break complex tasks into repeatable steps, such as decomposing a problem into looped segments, which cultivates an algorithmic mindset essential for broader STEM disciplines. However, learners often encounter challenges like off-by-one errors, where loop bounds are mis-set, leading to incorrect iterations such as processing one extra or missing element in an array. To address these, educators employ strategies like pair programming, where students collaborate in driver-navigator roles to debug loops in real-time, reducing error rates and boosting confidence in iterative constructs.
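Returning to the summation example above, here is a minimal Python sketch of the loop invariant and the off-by-one pitfall; the names and the assert are illustrative, not from the original text:

def sum_to(n):
    # Invariant: before each pass, total == sum of 1..(i - 1).
    total = 0
    for i in range(1, n + 1):          # range(1, n) would be an off-by-one error,
        total += i                     # silently dropping the final element n
    assert total == n * (n + 1) // 2   # matches the closed-form sum
    return total

print(sum_to(10))   # 55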

Scientific and Engineering Uses

In physics, iteration plays a crucial role in simulating chaotic systems, where small changes in initial conditions can lead to vastly different outcomes, as seen in atmospheric dynamics. Numerical weather prediction relies on iterative methods to approximate solutions to the partial differential equations governing fluid motion and thermodynamics. These methods discretize space and time, iteratively updating grid-point values over successive time steps to forecast weather patterns. A seminal application was the 1950 numerical integration of the barotropic vorticity equation, which used finite-difference approximations to iteratively propagate atmospheric variables, marking the first successful computer-based weather forecast, performed on the ENIAC. This iterative approach remains foundational in modern global circulation models, enabling predictions of phenomena like storm development despite sensitivity to perturbations.

In engineering, iterative processes underpin control systems that maintain stability and performance through continuous feedback adjustments. Proportional-integral-derivative (PID) controllers exemplify this by iteratively computing control outputs based on error signals, integrating past errors, and anticipating future trends to regulate systems like temperature or speed. The iterative nature arises in the feedback loop, where the controller repeatedly samples the process variable and adjusts the manipulated variable until the setpoint is reached. The Ziegler-Nichols tuning method, introduced in 1942, provides a systematic iterative procedure to determine optimal PID parameters by inducing sustained oscillations and deriving gains from their amplitude and period, and is widely adopted in industrial automation. Such iterative feedback ensures robust performance in applications such as chemical processing, where real-time corrections mitigate disturbances.

In economics, iteration facilitates the computation of equilibria in game-theoretic models, where agents update strategies based on perceived opponent behaviors. The fictitious play algorithm models this by having players iteratively best-respond to the empirical frequency of others' past actions, converging to equilibria in certain games. Proposed by George W. Brown in 1951, it simulates repeated play where each iteration assumes opponents follow a stationary mixed strategy derived from historical data, applicable to market competition and auction design. This iterative learning process captures adaptive behavior in economic interactions, influencing analyses of oligopolistic pricing and bargaining outcomes.

In biology, particularly bioinformatics, iterative algorithms enhance the alignment of biological sequences to infer evolutionary relationships and functional similarities. Dynamic programming methods, applied iteratively across sequence positions, build optimal alignments by filling scoring matrices that account for matches, mismatches, and gaps. The Needleman-Wunsch algorithm, developed in 1970, uses this iterative fill-in process for global alignments, computing scores row-by-row to maximize similarity between protein or DNA sequences. This approach has become integral to phylogenetic studies and genome annotation, where iterations reveal conserved motifs amid sequence divergence.
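As a hedged illustration of the PID feedback loop described above, the following Python sketch iteratively samples and corrects a toy first-order plant; the plant model, gains, and time step are illustrative assumptions, not from any cited source:

def pid_control(setpoint, measurement, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    error = setpoint - measurement
    state["integral"] += error * dt                   # accumulate past error
    derivative = (error - state["prev_error"]) / dt   # anticipate the trend
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
temperature = 20.0
for step in range(200):                               # each iteration: sample, correct
    output = pid_control(70.0, temperature, state)
    temperature += 0.05 * output                      # toy first-order plant response
print(round(temperature, 1))                          # settles close to the 70.0 setpoint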

References

  1. https://sebokwiki.org/wiki/Iteration_%28glossary%29