Evolutionary programming

from Wikipedia

Evolutionary programming is an evolutionary algorithm in which a share of the new population is created by mutation of the previous population, without crossover.[1][2] Evolutionary programming differs from the evolution strategy ES(μ+λ) in one detail:[1] all individuals are selected into the new population, while in ES(μ+λ) every individual has the same probability of being selected. It is one of the four major evolutionary algorithm paradigms.[3]
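The generational scheme above can be sketched in a few lines of Python. This is a minimal illustration, not a canonical implementation: the `sphere` objective, the population size of 20, the fixed mutation step, and truncation survivor selection (used here for brevity in place of EP's usual stochastic tournament) are all illustrative choices.

```python
import random

def sphere(x):
    """Illustrative fitness function (lower is better)."""
    return sum(v * v for v in x)

def ep_step(population, sigma=0.1):
    """One EP generation: every parent produces one offspring by Gaussian
    mutation (no crossover), then parents and offspring compete."""
    offspring = [[v + random.gauss(0.0, sigma) for v in ind] for ind in population]
    pool = population + offspring
    # Truncation for brevity; classical EP uses a stochastic tournament here.
    pool.sort(key=sphere)
    return pool[:len(population)]

population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
init_best = min(sphere(ind) for ind in population)
for _ in range(100):
    population = ep_step(population)
best = min(sphere(ind) for ind in population)
```

Because parents stay in the selection pool, the best fitness never worsens from one generation to the next under this elitist truncation.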

History

It was first used by Lawrence J. Fogel in the US in 1960 in order to use simulated evolution as a learning process aiming to generate artificial intelligence.[4] It was used to evolve finite-state machines as predictors.[5]

Timeline of EP - selected algorithms[1]
Year Description Reference
1966 EP introduced by Fogel et al. [6]
1992 Improved fast EP - Cauchy mutation is used instead of Gaussian mutation [7]
2002 Generalized EP - usage of Lévy-type mutation [8]
2012 Diversity-guided EP - Mutation step size is guided by diversity [9]
2013 Adaptive EP - The number of successful mutations determines the strategy parameter [10]
2014 Social EP - Social cognitive model is applied meaning replacing individuals with cognitive agents [11]
2015 Immunised EP - Artificial immune system inspired mutation and selection [12]
2016 Mixed mutation strategy EP - Gaussian, Cauchy and Lévy mutations are used [13]
2017 Fast Convergence EP - An algorithm, which boosts convergence speed and solution quality [14]
2017 Immune log-normal EP - log-normal mutation combined with artificial immune system [15]
2018 ADM-EP - automatically designed mutation operators [16]

from Grokipedia
Evolutionary programming (EP) is a stochastic optimization paradigm within the field of evolutionary computation that simulates the process of natural evolution to generate adaptive solutions for complex problems, primarily through mechanisms of mutation and selection without the use of genetic crossover. Originating in the early 1960s, EP was pioneered by Lawrence J. Fogel as an approach to artificial intelligence, focusing on the evolution of finite state machines to perform tasks such as prediction and control. This method emphasizes behavioral adaptation over explicit genetic representation, allowing populations of candidate solutions to iteratively improve their fitness in response to environmental pressures.

The foundational work on EP traces back to Fogel's 1962 exploration of autonomous automata and was formalized in his 1966 book Artificial Intelligence through Simulated Evolution, co-authored with A.J. Owens and M.J. Walsh, where experiments demonstrated the evolution of predictive models outperforming fixed strategies. In the decades following, David B. Fogel, Lawrence's son, extended EP to broader applications, including continuous parameter optimization and the training of neural networks, introducing self-adaptive mutation operators that dynamically adjust variation rates based on performance feedback. These advancements positioned EP as a robust alternative to other evolutionary algorithms, such as genetic algorithms (which rely heavily on crossover) and evolution strategies (which focus on real-valued parameters), by prioritizing domain-independent behavioral evolution.

Key features of EP include a population-based search in which each individual represents a potential solution, often encoded as a finite state machine or real-valued vector, that undergoes Gaussian mutation to produce offspring, followed by tournament selection to retain the fittest members for the next generation. This process enables EP to handle non-differentiable, noisy, or multimodal objective functions effectively, making it suitable for engineering design, scheduling, and biological modeling.
Notable applications have included evolving controllers for robotic systems, optimizing antenna designs, and developing strategies for game playing, such as chess programs that learn without explicit rule encoding. Despite its strengths in self-adaptation, EP's computational intensity for large-scale problems has led to hybrid integrations with local search techniques in contemporary research.

Overview

Definition and core principles

Evolutionary programming (EP) is a stochastic, population-based optimization technique that simulates the process of biological evolution to search for solutions to complex problems. Developed as a method for generating machine intelligence, EP evolves a population of candidate solutions, initially represented as finite state machines (FSMs), across generations by applying variation operators and selection pressures to minimize or maximize a predefined fitness function. Unlike traditional optimization approaches, EP treats the solutions as adaptive organisms that improve their performance in response to an environmental challenge, such as predicting sequential data.

At its core, EP draws on Darwinian principles of variation and natural selection, but it uniquely emphasizes phenotypic evolution, focusing on behavioral outcomes rather than explicit genetic structures. Variation is achieved primarily through mutation, which introduces random changes to individual solutions to generate offspring, while recombination (crossover) is absent from the classical formulation to maintain a direct linkage between parental and offspring behaviors. Selection then favors individuals with superior fitness, typically measured by their ability to perform well in the target task, ensuring that the population progressively adapts over iterations. This process continues until a satisfactory solution is found or convergence is reached, mimicking how natural populations evolve without relying on recombination mechanisms.

Central to EP are several key concepts that define its operational framework. A population consists of multiple individuals, each encoding a potential solution in a problem-specific representation, such as the state transitions and outputs of an FSM. Fitness evaluation quantifies an individual's effectiveness in the problem domain, for instance by scoring the accuracy of predictions against observed data sequences. Generational progression involves creating a new population from the fittest parents via mutation, followed by competitive selection to retain the most promising candidates, thereby guiding the search toward optimal or near-optimal behaviors.

The historical motivation for EP stemmed from efforts to evolve intelligent behaviors in artificial systems, particularly by treating programs as living entities that adapt to dynamic environments. Originally applied to the evolution of FSMs for time-series prediction tasks, EP viewed these machines as organisms whose structures and parameters self-adapt through evolutionary processes, enabling them to forecast future events based on past observations without human-designed rules. This approach highlighted EP's role within the broader field of evolutionary computation as a technique for behavioral optimization.
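The competitive selection step described above is commonly realized in EP as a stochastic q-tournament: each candidate scores a "win" for every one of q randomly drawn opponents it beats, and the candidates with the most wins survive. The sketch below assumes fitness is minimized; the function name `tournament_survivors` and the default q = 10 are illustrative choices, not taken from the source.

```python
import random

def tournament_survivors(pool, fitness, mu, q=10):
    """EP-style stochastic tournament over a parent+offspring pool.

    Each candidate faces q randomly sampled opponents and earns one win
    per opponent with equal-or-worse (higher) fitness; the mu candidates
    with the most wins survive to the next generation.
    """
    scores = []
    for ind in pool:
        opponents = random.sample(pool, q)
        wins = sum(1 for opp in opponents if fitness(ind) <= fitness(opp))
        scores.append((wins, ind))
    # Stable sort: ties in win count keep their original pool order.
    scores.sort(key=lambda pair: pair[0], reverse=True)
    return [ind for _, ind in scores[:mu]]
```

Unlike deterministic truncation, this scheme lets occasionally lucky mediocre candidates survive, which softens selection pressure and helps preserve diversity.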

Relation to other evolutionary algorithms

Evolutionary programming (EP) is one of the four primary paradigms within the broader field of evolutionary computation (EC), alongside genetic algorithms (GA), evolution strategies (ES), and genetic programming (GP). These paradigms collectively employ population-based optimization techniques inspired by biological evolution, featuring iterative processes of variation and selection to evolve solutions toward problem optima. While sharing this foundational structure, they diverge significantly in solution representations and variation mechanisms, allowing EP to emphasize behavioral adaptation in domains like finite state machines and real-valued optimization.

A core distinction of EP lies in its reliance on mutation as the sole variation operator, applied to real-valued or behavioral representations without crossover, in contrast to GA's binary string encodings and heavy use of crossover to simulate genetic recombination. EP also differs from evolution strategies, which target continuous optimization through self-adaptive mutation rates and often include recombination, positioning ES more toward numerical fine-tuning. Furthermore, EP contrasts with GP's tree-based evolution of computer programs, where crossover operates on program subtrees rather than EP's phenotypic behaviors. Despite these differences, all four paradigms incorporate selection to favor superior individuals for propagation, ensuring survival of the fittest across generations. EP's representational flexibility particularly bridges GA's discrete search spaces and ES's continuous ones, enabling versatile applications that draw from multiple influences.

The nomenclature surrounding these techniques evolved in the 1990s, shifting from isolated terms like "evolutionary programming" to the unified "evolutionary computation" framework, which encouraged hybridization and cross-pollination of operators and representations among EP, GA, ES, and GP.

History

Origins in the 1960s

Evolutionary programming (EP) was invented in 1960 by Lawrence J. Fogel while serving as a program director at the National Science Foundation. Motivated by the limitations of early artificial intelligence approaches that relied on explicit rule-based programming, Fogel sought to simulate the evolutionary process to develop machine intelligence, emphasizing adaptation through variation and selection rather than predefined instructions. This work positioned EP as a response to the challenges of modeling intelligent behavior in uncertain environments, drawing inspiration from biological evolution to evolve predictive capabilities autonomously.

In its initial formulation, EP represented candidate solutions as finite state machines (FSMs), treating these machines as "organisms" in a population. The FSMs were tasked with predicting the next symbol in sequences of binary symbols, simulating simple language prediction or pattern recognition. Evolution proceeded by mutating FSM components, such as altering outputs or adding and deleting states, to generate offspring, with selection based on a payoff function measuring predictive accuracy, often the squared error between predicted and actual symbols. This behavioral focus, rather than genetic encoding, distinguished EP from contemporaneous methods and aimed to evolve adaptive strategies without human-designed rules.

Fogel's foundational ideas were first detailed in his 1962 paper "Autonomous Automata," published in Industrial Research, which introduced the concept of self-organizing machines through simulated evolution. This was expanded in his 1964 Ph.D. dissertation, "On the Organization of Intellect," at the University of California, Los Angeles, where he formalized the evolutionary process for intellect simulation. The seminal 1966 book, Artificial Intelligence through Simulated Evolution, co-authored with Alvin J. Owens and Michael J. Walsh, synthesized these efforts and provided comprehensive experiments demonstrating EP's viability.
Early applications of EP in the 1960s focused on evolving FSMs for time-series prediction and pattern recognition, such as predicting binary sequences to model basic linguistic structures. Additional demonstrations included game-playing strategies, like simulating air-to-air combat tactics through coevolved opponent behaviors. These efforts culminated in the founding of Decision Science, Inc. in 1965 by Fogel, Owens, and Walsh, the first company dedicated to commercial applications of evolutionary computation for real-world problems in prediction and control.

Developments from the 1980s onward

In the 1980s, David Fogel revived and extended evolutionary programming (EP) beyond its original focus on finite state machines, applying it to generalized optimization problems using representations such as real-valued vectors, permutations, and matrices. This shift emphasized EP's versatility for numerical and combinatorial optimization, building on Lawrence Fogel's foundational ideas while adapting the method to broader computational challenges. In 1988, David Fogel introduced tournament selection as a probabilistic mechanism to replace the original fitness-proportional selection, enabling softer competition among solutions and improving convergence on problems like the traveling salesman problem.

A key milestone in the early 1990s was the incorporation of self-adaptation, where strategy parameters controlling mutation rates were evolved alongside the primary solutions, allowing the algorithm to adjust variation intensity automatically without external tuning. This innovation, detailed in papers by Fogel et al. in 1991 and 1992, introduced meta-evolutionary programming, enhancing EP's robustness in dynamic environments by permitting correlations between object and strategy variables.

During the 1990s, EP integrated more deeply into the broader field of evolutionary computation, gaining traction through theoretical advancements and applications in areas such as control systems. David Fogel's 1995 book, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, synthesized these developments, arguing for EP's role in advancing machine intelligence through simulated evolution and highlighting its philosophical implications for computation. By the decade's end, enhancements like adaptive mutation strategies further improved EP's efficiency on benchmark functions. In the 2000s, EP saw commercialization through ventures like Natural Selection, Inc., founded by David Fogel in 1993 to apply evolutionary methods to real-world engineering problems, including defense applications.
From the 2010s to 2025, developments emphasized scalability via parallel computing, with distributed implementations on high-performance clusters enabling EP to handle large-scale optimization, such as multi-objective problems and dynamic scenarios, though it remained less dominant than hybrid approaches with other metaheuristics. Integration with contemporary AI, particularly neural networks, was limited until the 2020s, when evolutionary techniques began supporting deep learning for tasks like architecture search and plasticity adaptation.

Algorithmic components

Solution representation

In classical evolutionary programming, candidate solutions are represented as finite state machines (FSMs), each comprising a finite set of states, transitions between those states triggered by input symbols, and outputs associated with each state or transition. These FSMs form complete behavioral diagrams for each individual in the population, enabling the evolution of predictive behaviors for tasks such as sequence prediction in non-stationary environments.

Modern evolutionary programming employs more adaptable representations tailored to the problem domain, with a prevalent shift to real-valued vectors for continuous optimization, where each individual is encoded as a D-dimensional array of real numbers denoting adjustable parameters. Binary strings serve as an alternative for discrete challenges, while other formats like permutations or variable-length lists accommodate specific needs such as scheduling or evolving structures.

Central to these encodings in evolutionary programming is the mapping of "genotypes" (the internal representations such as FSM diagrams or vectors) to "phenotypes" (the observable behaviors or functional outputs they produce), without imposing a rigid encoding scheme, so as to allow domain-specific customization. For instance, in function minimization, real-valued vectors directly embody the candidate solution, yielding phenotypes as the evaluated function values. This representational approach emphasizes behavioral efficacy over syntactic structure, facilitating the direct assessment of performance in diverse applications like control policy design, where parameter vectors capture essential dynamics without extraneous constraints.
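A real-valued EP individual, with its genotype (parameter vector plus per-dimension mutation step sizes) kept distinct from the phenotype it produces, might be sketched as below; the `Individual` class, its field names, and the initialization bounds are illustrative assumptions, not part of any standard API.

```python
import random
from dataclasses import dataclass

@dataclass
class Individual:
    """Real-valued EP genotype: object variables plus the per-dimension
    mutation step sizes (strategy parameters) evolved alongside them."""
    x: list       # object variables (the genotype)
    sigma: list   # self-adapted mutation step sizes

    def phenotype(self, objective):
        """The phenotype is the behavior the genotype produces; for
        function minimization it is simply the objective value."""
        return objective(self.x)

def random_individual(dim, low=-5.0, high=5.0, init_sigma=0.5):
    """Sample a random individual within [low, high] per dimension."""
    return Individual(
        x=[random.uniform(low, high) for _ in range(dim)],
        sigma=[init_sigma] * dim,
    )
```

Keeping `sigma` inside the individual is what later allows self-adaptation: the step sizes are inherited and mutated along with the solution itself.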

Mutation and variation operators

In evolutionary programming (EP), mutation acts as the primary mechanism for introducing variation into the population, supplanting the crossover operators common in other evolutionary algorithms; it generates diversity by applying small, random perturbations to parent solutions, enabling gradual exploration of the search space. Classical EP, as originally formulated by Fogel, targets finite state machines (FSMs) representing predictive models, where mutation involves probabilistic alterations to machine components such as transitions or outputs to evolve behavioral capabilities. Specifically, each offspring is produced from a parent by applying a Poisson-distributed number of mutations (with mean rate typically 3.0), where each mutation equally likely selects one of five operations: adding a state (up to a maximum of 25 states), deleting a state (maintaining at least 3 states), changing the initial state, altering an output symbol, or modifying a next-state transition, with affected elements chosen uniformly at random. Early implementations, including Gaussian perturbations for continuous aspects, emphasized these structural changes to adapt FSMs for tasks like time-series prediction.

Modern EP shifts the focus to real-valued representations for numerical optimization, employing Gaussian mutation on parameter vectors to produce offspring. For an individual with object variables \(\mathbf{x} = (x_1, \dots, x_n)\) and associated strategy parameters \(\boldsymbol{\sigma} = (\sigma_1, \dots, \sigma_n)\), each component is mutated independently as

\[ x_i' = x_i + N(0, \sigma_i), \]

where \(N(0, \sigma_i)\) denotes a normally distributed random deviate with mean 0 and standard deviation \(\sigma_i\), controlling the perturbation magnitude for that dimension. This approach, without recombination, relies on mutation alone to navigate continuous search spaces, often with fixed population sizes of 50–100 individuals to balance computational efficiency and diversity. Boundary handling for constrained domains typically involves reflection (folding out-of-range values back into the feasible interval) or repair to ensure feasibility.

To enhance adaptability, modern EP incorporates self-adaptation of the strategy parameters \(\boldsymbol{\sigma}\), allowing mutation strengths to evolve alongside the solutions. The update for each \(\sigma_i\) follows a log-normal scheme:

\[ \sigma_i' = \sigma_i \exp\left( \tau' N(0,1) + \tau N_i(0,1) \right), \]

where \(N(0,1)\) and \(N_i(0,1)\) are independent standard normal random variables (with \(N(0,1)\) shared globally across dimensions), and \(\tau, \tau'\) are exogenous learning rates, commonly set to \(\tau = 1/\sqrt{2\sqrt{n}}\) and \(\tau' = 1/\sqrt{2n}\).
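The log-normal self-adaptation rule and reflection-based boundary handling described above can be sketched as follows, assuming a fixed search interval; the function name and the interval [-5, 5] are illustrative, and the learning rates use the common settings τ' = 1/√(2n) and τ = 1/√(2√n).

```python
import math
import random

def self_adaptive_mutate(x, sigma, low=-5.0, high=5.0):
    """Mutate the strategy parameters log-normally, then the object
    variables with Gaussian noise, reflecting out-of-range values
    back into [low, high]."""
    n = len(x)
    tau_prime = 1.0 / math.sqrt(2.0 * n)           # global learning rate
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))      # per-dimension rate
    g = random.gauss(0.0, 1.0)                     # N(0,1) shared across dimensions
    new_sigma = [s * math.exp(tau_prime * g + tau * random.gauss(0.0, 1.0))
                 for s in sigma]
    new_x = []
    for xi, si in zip(x, new_sigma):
        v = xi + random.gauss(0.0, si)
        while v < low or v > high:                 # reflect into the interval
            if v < low:
                v = 2 * low - v
            else:
                v = 2 * high - v
        new_x.append(v)
    return new_x, new_sigma
```

The exponential update keeps every step size strictly positive, and the shared global deviate couples the dimensions so the overall mutation scale can drift up or down together.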