Iteration
Iteration means repeating a process to generate a (possibly unbounded) sequence of outcomes. Each repetition of the process is a single iteration, and the outcome of each iteration is the starting point of the next iteration.
In mathematics and computer science, iteration (along with the related technique of recursion) is a standard element of algorithms.
Mathematics
In mathematics, iteration may refer to the process of iterating a function, i.e. applying a function repeatedly, using the output from one iteration as the input to the next. Iteration of apparently simple functions can produce complex behaviors and difficult problems – for examples, see the Collatz conjecture and juggler sequences.
Another use of iteration in mathematics is in iterative methods, which are used to produce approximate numerical solutions to certain mathematical problems. Newton's method is an example of an iterative method; the manual calculation of a number's square root is a common and well-known application.
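A minimal sketch of such manual square-root calculation is the classic divide-and-average iteration, a special case of Newton's method (the function name and tolerance here are illustrative, not from the article):

def square_root(s, tolerance=1e-12):
    """Iteratively refine a guess for sqrt(s) by averaging it with s/guess."""
    guess = s / 2.0 if s > 1 else 1.0  # any positive starting guess works
    while abs(guess * guess - s) > tolerance:
        guess = (guess + s / guess) / 2.0  # each output feeds the next iteration
    return guess

print(square_root(2.0))  # ≈ 1.4142135623730951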
Computing
In computing, iteration is a technique that marks out a block of statements within a computer program for a defined number of repetitions. That block of statements is said to be iterated. A computer programmer might also refer to that block of statements as an iteration.
Implementations
Loops constitute the most common language constructs for performing iterations. The following pseudocode uses a for loop to "iterate" the line of code between begin and end three times, adding the value of i to a on each pass.
a := 0
for i := 1 to 3 do { loop three times }
begin
a := a + i; { add the current value of i to a }
end;
print(a); { the number 6 is printed (0 + 1 = 1; 1 + 2 = 3; 3 + 3 = 6) }
It is permissible, and often necessary, to use values from other parts of the program outside the bracketed block of statements, to perform the desired function.
Iterators constitute alternative language constructs to loops, which ensure consistent iteration over specific data structures, and can save time and effort in later coding. In particular, an iterator allows one to repeat the same kind of operation at each node of such a data structure, often in some pre-defined order.
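As a concrete illustration (Python shown here; the pattern exists in most modern languages), a for loop drives an iterator implicitly, and the same traversal can be driven explicitly with iter() and next():

numbers = [3, 1, 4, 1, 5]

# Implicit: the for construct obtains the list's iterator and visits
# each node of the structure in its pre-defined (front-to-back) order.
for n in numbers:
    print(n)

# Explicit: the same protocol by hand; next() performs one iteration step
# and raises StopIteration when the structure is exhausted.
it = iter(numbers)
print(next(it))  # 3
print(next(it))  # 1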
Iteratees are purely functional language constructs, which accept or reject data during the iterations.
Relation with recursion
Recursion and iteration have different algorithmic definitions, even though they can generate identical results. The primary difference is that recursion can be used as a solution without prior knowledge of how many times the action must repeat, while a successful iteration requires that foreknowledge.
Some types of programming languages, known as functional programming languages, are designed such that they do not set up a block of statements for explicit repetition, as with the for loop. Instead, those programming languages exclusively use recursion. Rather than calling out a block of code to repeat a pre-defined number of times, the executing code block instead "divides" the work into a number of separate pieces, after which the code block executes itself on each individual piece. Each piece of work is divided repeatedly until the "amount" of work is as small as possible, at which point the algorithm does that work very quickly. The algorithm then "reverses" and reassembles the pieces into a complete whole.
The classic example of recursion is in list-sorting algorithms, such as merge sort. The merge sort recursive algorithm first repeatedly divides the list into consecutive pairs. Each pair is then ordered, then each consecutive pair of pairs, and so forth until the elements of the list are in the desired order.
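A minimal sketch of that divide-and-reassemble structure (illustrative, not drawn from the article's sources):

def merge_sort(items):
    """Recursively halve the list, then merge the sorted halves in order."""
    if len(items) <= 1:  # smallest piece of work: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # divide ...
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0          # ... then reassemble into a whole
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]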
The code below is an example of a recursive algorithm in the Scheme programming language that outputs the same result as the pseudocode under the previous heading.
(let iterate ((i 1) (a 0))
(if (<= i 3)
(iterate (+ i 1) (+ a i))
(display a)))
Education
In some schools of pedagogy, iterations are used to describe the process of teaching or guiding students to repeat experiments, assessments, or projects until more accurate results are found, or the student has mastered the technical skill. This idea is found in the old adage, "Practice makes perfect." In particular, "iterative" is defined as the "process of learning and development that involves cyclical inquiry, enabling multiple opportunities for people to revisit ideas and critically reflect on their implication."[1]
Unlike computing and math, educational iterations are not predetermined; instead, the task is repeated until success according to some external criterion (often a test) is achieved.
References
1. Helen Timperley; Aaron Wilson; Heather Barrar; Irene Fung. "Teacher Professional Learning and Development: Best Evidence Synthesis Iteration [BES]" (PDF). OECD. p. 238. Archived from the original (PDF) on 26 June 2013. Retrieved 4 April 2013.
Definition and Overview
Basic Concept
Iteration refers to the repetition of a set of operations or instructions until a desired condition is met, serving as a core mechanism for achieving outcomes through successive applications of a process. The term derives from the Latin iteratio ("repetition"), from iterare ("to do again").[1] For instance, in approximating the value of a mathematical constant like pi, one might repeatedly apply a simple arithmetic formula, refining the estimate with each cycle until sufficient accuracy is reached.[11] Iteration can be finite, involving a predetermined number of repetitions, or potentially unbounded, continuing until a specific termination criterion such as convergence to a target value is satisfied.[12] In finite cases, the process halts after a fixed count, ensuring predictability, whereas unbounded iteration risks endless execution unless guarded by a stopping rule.[13]

In everyday routines, iteration manifests as sequential repetitions, akin to the step-by-step actions in preparing a meal, where one measures ingredients, stirs, and checks doneness cyclically to perfect the result.[14] This mirrors broader problem-solving, where iteration acts as a foundational building block, enabling incremental refinement and adaptation to construct more intricate systems across fields like design and optimization.[15] In mathematics and computing, it underpins methods for successive approximations and loop-based executions, respectively.[16]
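As an illustrative sketch (the particular series and tolerance are assumptions, not drawn from the cited sources), the pi example might use the Leibniz series pi/4 = 1 − 1/3 + 1/5 − 1/7 + …, iterating until successive estimates agree within a tolerance:

def approximate_pi(tolerance=1e-6):
    """Iterate the Leibniz series, stopping when the estimate stabilizes."""
    estimate, previous, k = 0.0, float("inf"), 0
    while abs(estimate - previous) > tolerance:  # termination criterion
        previous = estimate
        estimate += 4.0 * (-1) ** k / (2 * k + 1)  # add the next term
        k += 1
    return estimate

print(approximate_pi())  # ≈ 3.14159 after roughly two million iterations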
Historical Origins
The concept of iteration traces its origins to ancient mathematical practices, particularly in the solving of equations through successive approximations. Around 2000 BCE, Babylonian mathematicians employed iterative techniques to compute square roots, using a method that involved repeated averaging of an initial guess and the quotient obtained by dividing the target number by that guess, yielding increasingly accurate results.[17] This approach represented an early recognition of the power of repetition to refine solutions, applied to problems in geometry and algebra recorded on clay tablets.[17]

In the 7th century CE, Indian mathematician Brahmagupta advanced iterative approximations in his treatise Brahmasphutasiddhanta, providing an algorithm for square roots that built upon earlier Indian traditions by iteratively adjusting estimates through arithmetic operations.[18] His method, which involved successive refinements to achieve practical accuracy, exemplified the growing sophistication of iterative processes in arithmetic and their integration into astronomical calculations.[18]

The 17th century marked a pivotal development with Isaac Newton's formulation of an iterative technique for root-finding, detailed in his 1669 manuscript De analysi per aequationes numero terminorum infinitas, which used tangent lines to approximate roots through repeated corrections.[19] This method, later known as Newton's method, formalized iteration as a systematic tool for solving nonlinear equations, influencing subsequent calculus and numerical analysis.[19]

By the late 19th century, Henri Poincaré contributed to the formalization of iterative functions in mathematics, particularly through his 1880s investigations into dynamical systems and celestial mechanics, where he analyzed the behavior of repeated mappings to study stability and chaos.[20] His work in Les Méthodes Nouvelles de la Mécanique Céleste (1892–1899), rooted in earlier papers, established iteration as a core concept in understanding periodic and recurrent phenomena.[21]

In the 1930s, Alan Turing's theoretical model of computation introduced iteration into formal computing frameworks via the Turing machine, described in his 1936 paper "On Computable Numbers," which enabled repetitive state transitions to simulate loops and algorithmic repetition.[22] This abstraction laid the groundwork for programmable iteration in theoretical computer science.[22]

The mid-20th century saw iteration fully integrated into practical computing through John von Neumann's stored-program concept, outlined in the 1945 EDVAC report, which allowed programs—including iterative loops—to be stored and executed in the same memory as data, revolutionizing electronic computation.[23] This architecture enabled efficient repetition in software, transforming iteration from a mathematical tool into a foundational element of digital systems.[23]

Mathematics
Iterative Methods
Iterative methods in numerical analysis involve generating a sequence of approximations where each subsequent term is computed based on the previous ones, typically expressed as x_{n+1} = g(x_n) for some function g, to solve equations or systems that lack closed-form solutions.[24] These methods are particularly valuable for addressing problems where direct analytical approaches are infeasible, such as finding roots of nonlinear equations or solving large sparse linear systems.[4]

Common examples include the bisection method for root-finding, which iteratively halves an interval [a, b] containing a root of a continuous function f (where f(a)f(b) < 0) by evaluating the midpoint and updating the bracket based on sign changes, ensuring steady convergence regardless of the function's derivative.[25] For linear systems Ax = b, the Jacobi method updates each component independently using values from the previous iteration, formulated as x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii, while the Gauss-Seidel method incorporates newly computed values immediately, as in x_i^(k+1) = (b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)) / a_ii, often leading to faster convergence under suitable conditions.[26]

A foundational result for fixed-point iteration is the fixed-point theorem, which states that if g is continuous on [a, b] with g(x) ∈ [a, b] for all x ∈ [a, b], and if |g′(x)| ≤ k for some constant k < 1 and all x ∈ (a, b), then there exists a unique fixed point p such that g(p) = p, and the sequence x_{n+1} = g(x_n) converges to p for any initial x_0 ∈ [a, b].[27] This condition on the derivative ensures the iteration acts as a contraction mapping, guaranteeing linear convergence to the solution.[24]

In numerical analysis, iterative methods are essential for tackling intractable problems like nonlinear equations and large-scale linear systems arising from discretizations, where direct methods such as Gaussian elimination become computationally prohibitive due to time and memory demands.[4] They enable practical solutions by exploiting problem structure, such as sparsity or diagonal dominance, to achieve efficient approximations without requiring exact inversion.[26]
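A minimal sketch of the Jacobi update in Python (the example system and fixed sweep count are illustrative; practical codes test a convergence tolerance instead):

def jacobi(A, b, sweeps=25):
    """Solve A x = b iteratively: each sweep recomputes every component
    from the previous iterate only, per the Jacobi update rule."""
    n = len(b)
    x = [0.0] * n  # initial guess x^(0) = 0
    for _ in range(sweeps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant 2x2 system, so the iteration converges.
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 13.0]
print(jacobi(A, b))  # approaches the exact solution x = [16/9, 17/9]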
Convergence Analysis
In the analysis of iterative methods, convergence refers to the process by which the sequence generated by an iteration approaches a fixed point or solution under specified conditions. A foundational result is the Banach fixed-point theorem, which states that if (X, d) is a complete metric space and T : X → X is a contraction mapping—meaning there exists q ∈ [0, 1) such that d(T(x), T(y)) ≤ q·d(x, y) for all x, y ∈ X—then T has a unique fixed point x*, and the sequence defined by x_{n+1} = T(x_n) converges to x* from any initial x_0.[28] This convergence is linear, with the error satisfying d(x_{n+1}, x*) ≤ q·d(x_n, x*), ensuring the distance to the fixed point decreases by at least the factor q at each step.[29]

The order of convergence quantifies the speed at which the error e_n = x_n − x* diminishes. For linear convergence, the error reduces asymptotically by a constant factor C < 1, so |e_{n+1}| ≈ C|e_n|, as seen in the iteration from a contraction mapping under the Banach theorem.[30] Higher orders provide faster reduction; quadratic convergence occurs when |e_{n+1}| ≤ C|e_n|² for some constant C, meaning the number of correct digits roughly doubles with each iteration.[31] Newton's method for root-finding, given by x_{n+1} = x_n − f(x_n)/f′(x_n), achieves quadratic convergence near a simple root r where f(r) = 0 and f′(r) ≠ 0, provided the initial guess is sufficiently close and f is twice continuously differentiable.[32]

For fixed-point iteration x_{n+1} = g(x_n), error bounds can be derived from the Lipschitz constant of g. If |g′(x)| ≤ k < 1 on an interval containing the fixed point x* and the iterates, then the error satisfies |x_n − x*| ≤ kⁿ|x_0 − x*|, implying linear convergence with rate k.[27] More generally, the a posteriori error estimate |x_n − x*| ≤ (k/(1 − k))|x_n − x_{n−1}| provides a practical way to assess convergence without knowing x*, assuming the iterates remain in the interval where the bound holds.[33]

Convergence is not guaranteed in all cases; divergence or instability arises when the iteration amplifies errors. For fixed-point iteration, if |g′(x*)| > 1, the fixed point is repulsive, and the sequence typically diverges from x* unless starting exactly at it, as small perturbations grow exponentially.[34] Even if |g′(x*)| = 1, convergence may fail or be very slow, highlighting the need for |g′(x*)| < 1 as a sufficient local condition for attraction.[5]
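The digit-doubling behavior can be observed directly. The sketch below (an illustrative choice, not from the cited sources) applies Newton's method to f(x) = x² − 2, whose positive root is √2, and prints the error after each update:

def newton_sqrt2(x, steps=5):
    """Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n) for f(x) = x^2 - 2."""
    root = 2 ** 0.5
    for n in range(steps):
        print(f"step {n}: error = {abs(x - root):.1e}")
        x = x - (x * x - 2.0) / (2.0 * x)  # Newton update
    return x

newton_sqrt2(1.0)
# step 0: error = 4.1e-01
# step 1: error = 8.6e-02
# step 2: error = 2.5e-03   <- the error is roughly squared each step,
# step 3: error = 2.1e-06      doubling the number of correct digits
# step 4: error = 1.6e-12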
Computer Science
Algorithmic Iteration
In algorithm design, iteration serves as a core mechanism for executing repetitive operations on discrete data structures, enabling efficient processing of inputs by repeating a fixed set of steps until a specified condition is met. This technique is particularly vital in handling collections like arrays, where operations such as searching or sorting require systematic traversal without redundant computations. Unlike one-pass direct solutions that may overlook elements, iterative approaches ensure completeness by cycling through data in controlled loops, optimizing for scalability in computational problems.[35]

A prominent example is the bubble sort algorithm, which iteratively compares adjacent elements in an array and swaps them if they are out of order, propagating larger elements toward the end in each pass until no further swaps are needed. This process typically involves nested loops: an outer loop for passes and an inner loop for comparisons, continuing until the array achieves sorted order. The algorithm's simplicity makes it illustrative of iteration's role in transforming unsorted data, though its quadratic performance limits practical use for large datasets.[36]

Time complexity analysis highlights iteration's efficiency in basic algorithms; for instance, a simple loop traversing an array once, as in linear search, incurs O(n) time, where n is the array length, since each element is examined proportionally to the input size. This contrasts with direct solutions like constant-time lookups in pre-indexed structures, but iteration dominates for unstructured data where preprocessing is infeasible. In pseudocode, a while loop for linear search might appear as follows, initializing an index and advancing until the target is found or the end is reached:

procedure LinearSearch(array A, target t)
    i ← 0
    while i < length(A) and A[i] ≠ t
        i ← i + 1
    if i < length(A)
        return i // found at index i
    else
        return -1 // not found
Binary search over a sorted array follows the same iterative pattern, halving the search interval on each pass:

procedure IterativeBinarySearch(sorted array A, target t)
    low ← 0
    high ← length(A) - 1
    while low ≤ high
        mid ← ⌊(low + high) / 2⌋
        if A[mid] = t
            return mid
        else if A[mid] < t
            low ← mid + 1
        else
            high ← mid - 1
    return -1 // not found
Programming Constructs
In imperative programming languages, iteration is primarily implemented through loop constructs that allow repeated execution of code blocks based on predefined conditions or counters. These structures enable developers to express repetitive tasks directly in the source code, facilitating efficient processing of sequences or collections.[39]

The for loop is a common construct for fixed-iteration scenarios, where the number of repetitions is known in advance, such as iterating over a range from 1 to n by initializing a counter, checking its bounds, and incrementing it after each iteration. This design minimizes boilerplate code for counting loops, making it suitable for tasks like array traversal.[40] In contrast, the while loop supports condition-based iteration, evaluating a boolean expression before executing the loop body and continuing only if the condition remains true, which is ideal for scenarios where the iteration count is indeterminate, such as processing input until an end-of-file signal.[41] The do-while loop, a variant of the while loop, performs the condition check after the body executes at least once, ensuring initial execution regardless of the condition and proving useful for interactive prompts or validation cycles.[42]

Infinite loops occur when a loop's termination condition is never met, such as a while loop with a perpetually true condition like while(true), potentially causing program hangs unless interrupted externally or by design for event-driven systems. To manage such loops and provide fine-grained control, statements like break terminate the loop prematurely upon meeting a specific criterion, while continue skips the remainder of the current iteration and advances to the next, allowing selective omission of operations without restructuring the loop. These mechanisms enhance flexibility in handling exceptional cases within iterative logic.[43]

In functional programming languages like Haskell, iteration eschews explicit loops in favor of higher-order functions such as folds, which recursively apply an operation across a data structure like a list to accumulate a result, promoting immutability and composability over mutable state changes seen in imperative loops. For instance, foldr processes elements from right to left, enabling declarative expressions of iteration that align with the paradigm's emphasis on pure functions.[44]

Performance optimizations for iterative constructs often involve loop unrolling, a compiler technique that replicates the loop body multiple times to reduce overhead from branch instructions and condition checks, thereby increasing instruction-level parallelism and execution speed, particularly in tight loops with simple bodies. This method can yield significant speedups in compute-intensive applications but requires careful factor selection to avoid excessive code bloat.[45]
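A brief sketch contrasting these constructs, with Python standing in for illustration (Python has no built-in do-while, so the post-test form is emulated with break, and the fold uses functools.reduce rather than Haskell's foldr):

from functools import reduce

# for loop: repetition count known in advance
total = 0
for i in range(1, 4):
    total += i  # 1 + 2 + 3 = 6

# while loop: condition-based, count indeterminate
n = 100
halvings = 0
while n > 1:
    n //= 2
    halvings += 1  # repeats until the condition fails

# do-while emulation: body always runs at least once, test afterwards
attempts = 0
while True:
    attempts += 1
    if attempts >= 1:  # post-test condition
        break

# break and continue: fine-grained control inside a loop
for i in range(10):
    if i % 2 == 0:
        continue  # skip the rest of this iteration
    if i > 7:
        break     # leave the loop early
    print(i)      # prints 1, 3, 5, 7

# fold: the functional analogue, accumulating a result across a list
print(reduce(lambda acc, x: acc + x, [1, 2, 3], 0))  # 6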
Comparison with Recursion
Iteration and recursion represent alternative approaches to repetition in computing, where recursion achieves repetition through a function invoking itself to address subproblems via self-reference, in contrast to iteration, which utilizes loop structures to execute code blocks repeatedly without expanding the call stack.[46] Tail recursion, a specific form where the recursive call occurs as the final operation without subsequent computations, enables optimization in certain languages by converting the recursion into an iterative process, thereby reusing the existing stack frame and preventing excessive memory allocation.[47]

In terms of space complexity, iteration generally requires constant O(1) space, relying on fixed variables within the loop, whereas recursion demands O(n) space due to the accumulation of stack frames for each call level. For instance, a recursive factorial function for input n creates n stack frames, each storing parameters and return states, while an iterative equivalent employs a single loop with a running product variable, maintaining constant space usage.[48][46]

// Recursive factorial (O(n) space)
function factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

// Iterative factorial (O(1) space)
function factorial(n):
    result = 1
    for i from 1 to n:
        result = result * i
    return result
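The tail-recursive form mentioned above can be sketched as follows (illustrative; note that CPython performs no tail-call optimization, so the space benefit applies in languages whose compilers do):

def factorial_tail(n, accumulator=1):
    """Tail-recursive factorial: the recursive call is the final operation,
    so a tail-call-optimizing compiler can reuse the stack frame (O(1) space)."""
    if n == 0:
        return accumulator
    return factorial_tail(n - 1, accumulator * n)  # nothing remains after the call

print(factorial_tail(5))  # 120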
Applications
Educational Contexts
Iteration plays a foundational role in K-12 computer science curricula, where it is often introduced through visual and interactive tools to build early computational thinking skills. One seminal example is the Logo programming language, developed in the late 1960s and widely adopted in the 1970s for elementary education, which uses iterative commands to control a "turtle" graphic that draws shapes on the screen.[49] Students learn repetition by issuing commands like REPEAT 4 [FORWARD 100 RIGHT 90] to create a square, fostering an intuitive understanding of loops through immediate visual feedback. This approach emphasizes problem-solving in a low-stakes environment, helping young learners grasp iteration without abstract syntax.[50]

At the university level, iteration is a core topic in discrete mathematics courses, where it is analyzed formally through concepts like loop invariants to ensure algorithmic correctness. These courses typically include exercises where students identify invariants—properties that remain true before and after each loop iteration—for algorithms such as summing array elements or searching sorted lists.[51] For instance, in a loop computing the sum of numbers from 1 to n, the invariant might state that the partial sum up to the current index equals the closed-form value for that range, allowing proof of termination and accuracy; the sketch after this section makes that concrete.[52] Such pedagogical strategies bridge theoretical reasoning with practical programming, preparing students for advanced algorithm design.

The pedagogical benefits of teaching iteration extend to enhancing problem-solving abilities, particularly by promoting pattern recognition and decomposition as outlined in Jeannette Wing's computational thinking framework.[53] Iteration encourages students to break complex tasks into repeatable steps, such as decomposing a drawing into looped segments, which cultivates abstraction and an algorithmic mindset essential for broader STEM disciplines.[54] However, learners often encounter challenges like off-by-one errors, where loop bounds are mis-set, leading to incorrect iterations such as processing one extra or missing element in an array.[55] To address these, educators employ strategies like pair programming, where students collaborate in driver-navigator roles to debug loops in real time, reducing error rates and boosting confidence in iterative constructs.[56]
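An illustrative sketch of that loop-invariant exercise (not from a specific curriculum), in which an assertion checks after each pass that the running total equals the closed form i(i + 1)/2:

def sum_to(n):
    """Sum 1..n while asserting the loop invariant on every iteration."""
    total = 0
    for i in range(1, n + 1):
        total += i
        # Invariant: after processing i, total == i * (i + 1) / 2.
        assert total == i * (i + 1) // 2
    return total

print(sum_to(10))  # 55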
Scientific and Engineering Uses
In physics, iteration plays a crucial role in simulating chaotic systems, where small changes in initial conditions can lead to vastly different outcomes, as seen in atmospheric dynamics. Numerical weather prediction relies on iterative finite difference methods to approximate solutions to partial differential equations governing fluid motion and thermodynamics. These methods discretize space and time, iteratively updating grid-point values over successive time steps to forecast weather patterns. A seminal application was the 1950 numerical integration of the barotropic vorticity equation, which used finite difference approximations to iteratively propagate weather variables, marking the first successful computer-based forecast on the ENIAC.[57] This iterative approach remains foundational in modern global circulation models, enabling predictions of chaotic phenomena like storm development despite sensitivity to perturbations.[58]

In engineering, iterative processes underpin control systems that maintain stability and performance through continuous feedback adjustments. Proportional-integral-derivative (PID) controllers exemplify this by iteratively computing control outputs based on error signals, integrating past errors, and anticipating future trends to regulate systems like temperature or speed. The iterative nature arises in the feedback loop, where the controller repeatedly samples the process variable and adjusts the manipulated variable until the setpoint is reached. The Ziegler-Nichols tuning method, introduced in 1942, provides a systematic iterative procedure to determine optimal PID parameters by inducing sustained oscillations and deriving gains from their amplitude and period, and it is widely adopted in industrial automation. Such iterative feedback ensures robust performance in applications ranging from robotics to chemical processing, where real-time corrections mitigate disturbances.[59]

In economics, iteration facilitates the computation of equilibria in game-theoretic models, where agents update strategies based on perceived opponent behaviors. The fictitious play algorithm models this by having players iteratively best-respond to the empirical frequency of others' past actions, converging to Nash equilibria in certain games. Proposed by George W. Brown in 1951, it simulates repeated play where each iteration assumes opponents follow a stationary mixed strategy derived from historical data, applicable to market competition and auction design. This iterative learning process captures bounded rationality in economic interactions, influencing analyses of oligopolistic pricing and bargaining outcomes.[60]

In biology, particularly bioinformatics, iterative algorithms enhance the alignment of biological sequences to infer evolutionary relationships and functional similarities. Dynamic programming methods, applied iteratively across sequence positions, build optimal alignments by filling scoring matrices that account for matches, mismatches, and gaps. The Needleman-Wunsch algorithm, developed in 1970, uses this iterative fill-in process for global alignments, computing scores row-by-row to maximize similarity between protein or nucleotide sequences. This approach has become integral to phylogenetic studies and genome annotation, where iterations reveal conserved motifs amid sequence divergence.[61]
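A minimal sketch of one such feedback loop (the gains, time step, and plant model here are illustrative assumptions, not Ziegler-Nichols tuned values):

def pid_step(error, state, kp=0.8, ki=0.5, kd=0.05, dt=0.1):
    """One iteration of a discrete PID loop: proportional term on the current
    error, accumulated integral of past errors, derivative by finite difference."""
    state["integral"] += error * dt
    derivative = (error - state["previous"]) / dt
    state["previous"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Drive a toy first-order plant toward a setpoint of 1.0.
state = {"integral": 0.0, "previous": 0.0}
measurement = 0.0
for _ in range(200):                              # one feedback iteration per step
    control = pid_step(1.0 - measurement, state)
    measurement += 0.1 * (control - measurement)  # crude plant model
print(round(measurement, 2))  # ≈ 1.0: integral action removes steady-state error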
References
- https://sebokwiki.org/wiki/Iteration_%28glossary%29
