Evolutionary programming
Evolutionary programming is an evolutionary algorithm in which a share of the new population is created by mutation of the previous population, without crossover.[1][2] Evolutionary programming differs from the evolution strategy ES(μ+λ) in one detail:[1] all individuals are selected for the new population, while in ES(μ+λ) every individual has the same probability of being selected. It is one of the four major evolutionary algorithm paradigms.[3]
History
Evolutionary programming was first used by Lawrence J. Fogel in the US in 1960 as a learning process that employed simulated evolution to generate artificial intelligence.[4] It was used to evolve finite-state machines as predictors.[5]
| Year | Description | Reference |
|---|---|---|
| 1966 | EP introduced by Fogel et al. | [6] |
| 1999 | Improved fast EP - Cauchy mutation is used instead of Gaussian mutation | [7] |
| 2002 | Generalized EP - usage of Lévy-type mutation | [8] |
| 2012 | Diversity-guided EP - Mutation step size is guided by diversity | [9] |
| 2013 | Adaptive EP - The number of successful mutations determines the strategy parameter | [10] |
| 2014 | Social EP - Social cognitive model is applied meaning replacing individuals with cognitive agents | [11] |
| 2015 | Immunised EP - Artificial immune system inspired mutation and selection | [12] |
| 2016 | Mixed mutation strategy EP - Gaussian, Cauchy and Lévy mutations are used | [13] |
| 2017 | Fast Convergence EP - An algorithm, which boosts convergence speed and solution quality | [14] |
| 2017 | Immune log-normal EP - log-normal mutation combined with artificial immune system | [15] |
| 2018 | ADM-EP - automatically designed mutation operators | [16] |
References
[edit]- ^ a b c Slowik, Adam; Kwasnicka, Halina (1 August 2020). "Evolutionary algorithms and their applications to engineering problems". Neural Computing and Applications. 32 (16): 12363–12379. doi:10.1007/s00521-020-04832-8. ISSN 1433-3058.
- ^ Abido, Mohammad A.; Elazouni, Ashraf (30 November 2021). "Modified multi-objective evolutionary programming algorithm for solving project scheduling problems". Expert Systems with Applications. 183: 115338. doi:10.1016/j.eswa.2021.115338. ISSN 0957-4174.
- ^ Brameier, Markus (2004). "On Linear Genetic Programming". Dissertation. Retrieved 27 December 2024.
- ^ "Artificial Intelligence through Simulated Evolution". Evolutionary Computation. 2009. doi:10.1109/9780470544600.ch7. ISBN 978-0-470-54460-0.
- ^ Abraham, Ajith; Nedjah, Nadia; Mourelle, Luiza de Macedo (2006). "Evolutionary Computation: from Genetic Algorithms to Genetic Programming". Genetic Systems Programming: Theory and Experiences. Studies in Computational Intelligence. 13. Springer: 1–20. doi:10.1007/3-540-32498-4_1. ISBN 978-3-540-29849-6.
- ^ Fogel, LJ; Owens, AJ; Walsh, MJ (1966). Artificial Intelligence through Simulated Evolution. New York: Wiley.
- ^ Xin Yao; Yong Liu; Guangming Lin (July 1999). "Evolutionary programming made faster". IEEE Transactions on Evolutionary Computation. 3 (2): 82–102. doi:10.1109/4235.771163.
- ^ Iwamatsu, Masao (1 August 2002). "Generalized evolutionary programming with Lévy-type mutation". Computer Physics Communications. 147 (1): 729–732. Bibcode:2002CoPhC.147..729I. doi:10.1016/S0010-4655(02)00386-7. ISSN 0010-4655.
- ^ Alam, Mohammad Shafiul; Islam, Md. Monirul; Yao, Xin; Murase, Kazuyuki (1 June 2012). "Diversity Guided Evolutionary Programming: A novel approach for continuous optimization". Applied Soft Computing. 12 (6): 1693–1707. doi:10.1016/j.asoc.2012.02.002. ISSN 1568-4946.
- ^ Das, Swagatam; Mallipeddi, Rammohan; Maity, Dipankar (1 April 2013). "Adaptive evolutionary programming with p-best mutation strategy". Swarm and Evolutionary Computation. 9: 58–68. doi:10.1016/j.swevo.2012.11.002. ISSN 2210-6502.
- ^ Nan, LI; Xiaomin, BAI; Shouzhen, ZHU; Jinghong, ZHENG (1 January 2014). "Social Evolutionary Programming Algorithm on Unit Commitment in Wind Power Integrated System". IFAC Proceedings Volumes. 47 (3): 3611–3616. doi:10.3182/20140824-6-ZA-1003.00384. ISSN 1474-6670.
- ^ Gao, Wei (1 August 2015). "Slope stability analysis based on immunised evolutionary programming". Environmental Earth Sciences. 74 (4): 3357–3369. Bibcode:2015EES....74.3357G. doi:10.1007/s12665-015-4372-0. ISSN 1866-6299.
- ^ Pang, Jinwei; Dong, Hongbin; He, Jun; Feng, Qi (July 2016). "Mixed mutation strategy evolutionary programming based on Shapley value". 2016 IEEE Congress on Evolutionary Computation (CEC). pp. 2805–2812. doi:10.1109/CEC.2016.7744143. ISBN 978-1-5090-0623-6.
- ^ Basu, Mousumi (14 September 2017). "Fast Convergence Evolutionary Programming for Multi-area Economic Dispatch". Electric Power Components and Systems. 45 (15): 1629–1637. doi:10.1080/15325008.2017.1376234. ISSN 1532-5008.
- ^ Mansor, M.H.; Musirin, I.; Othman, M.M. (April 2017). "Immune Log-Normal Evolutionary Programming (ILNEP) for solving economic dispatch problem with prohibited operating zones". 2017 4th International Conference on Industrial Engineering and Applications (ICIEA). pp. 163–167. doi:10.1109/IEA.2017.7939199. ISBN 978-1-5090-6774-9.
- ^ Hong, Libin; Drake, John H.; Woodward, John R.; Özcan, Ender (1 January 2018). "A hyper-heuristic approach to automated generation of mutation operators for evolutionary programming". Applied Soft Computing. 62: 162–175. doi:10.1016/j.asoc.2017.10.002. ISSN 1568-4946.
Overview
Definition and core principles
Evolutionary programming (EP) is a stochastic, population-based optimization technique that simulates the process of biological evolution to search for solutions to complex problems. Developed as a method for generating artificial intelligence, EP evolves a population of candidate solutions, initially represented as finite state machines (FSMs), across generations by applying variation operators and selection pressures to minimize or maximize a predefined fitness function. Unlike traditional optimization approaches, EP treats the solutions as adaptive organisms that improve their performance in response to an environmental challenge, such as predicting sequential data.[3]

At its core, EP draws on Darwinian principles of variation and natural selection, but it uniquely emphasizes phenotypic evolution, focusing on behavioral outcomes rather than explicit genetic structures. Variation is achieved primarily through mutation, which introduces random changes to individual solutions to generate offspring, while recombination (crossover) is absent in the classical formulation to maintain a direct linkage between parental and offspring behaviors. Selection then favors individuals with superior fitness, typically measured by their ability to perform well in the target task, ensuring that the population progressively adapts over iterations. This process continues until a satisfactory solution is found or convergence is reached, mimicking how natural populations evolve without relying on sexual reproduction mechanisms.[4][3]

Central to EP are several key concepts that define its operational framework. A population consists of multiple individuals, each encoding a potential solution in a problem-specific representation, such as the state transitions and outputs of an FSM. Fitness evaluation quantifies an individual's effectiveness in the domain, for instance, by scoring the accuracy of predictions against observed data sequences. Generational progression involves creating a new population from the fittest parents via mutation, followed by competitive selection to retain the most promising candidates, thereby guiding the search toward optimal or near-optimal behaviors.[5][3]

The historical motivation for EP stemmed from efforts to evolve intelligent behaviors in artificial systems, particularly by treating programs as living entities that adapt to dynamic environments. Originally applied to the evolution of FSMs for time-series prediction tasks, EP viewed these machines as organisms whose structures and parameters self-adapt through evolutionary processes, enabling them to forecast future events based on past observations without human-designed rules. This approach highlighted EP's role within the broader field of evolutionary computation as a technique for behavioral optimization.[3][4]
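To make this generational cycle concrete, the following is a minimal sketch of a single EP generation, assuming a user-supplied fitness function together with the mutation and tournament-selection operators described later in this article; the names fitness_fn, mutate, and tournament_select are illustrative placeholders, not a standard API.

```python
def ep_generation(population, fitness_fn, mutate, tournament_select):
    """One generation of classical EP: each parent produces exactly one
    offspring by mutation, then parents and offspring compete for survival."""
    offspring = [mutate(parent) for parent in population]   # variation: mutation only, no crossover
    pool = population + offspring                           # combined pool of size 2*mu
    scores = [fitness_fn(ind) for ind in pool]              # evaluate every candidate
    # keep the mu most competitive individuals (stochastic tournament)
    return tournament_select(pool, scores, mu=len(population))
```

Iterating this function until a stopping criterion is met (a fitness threshold or a generation budget) yields the overall EP search loop.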
Relation to other evolutionary algorithms

Evolutionary programming (EP) is one of the four primary paradigms within the broader field of evolutionary computation (EC), alongside genetic algorithms (GA), evolution strategies (ES), and genetic programming (GP).[3][6] These paradigms collectively employ population-based optimization techniques inspired by biological evolution, featuring iterative processes of variation and selection to evolve solutions toward problem optima.[7] While sharing this foundational structure, they diverge significantly in solution representations and variation mechanisms, allowing EP to emphasize behavioral adaptation in domains like finite state machines and real-valued optimization.[3]

A core distinction of EP lies in its reliance on mutation as the sole variation operator, applied to real-valued or behavioral representations without crossover, in contrast to GA's binary string encodings and heavy use of crossover to simulate genetic recombination.[7][3] EP also differs from ES, which targets continuous parameter optimization through self-adaptive mutation rates and often includes recombination, positioning ES more toward numerical fine-tuning.[6] Furthermore, EP contrasts with GP's tree-based evolution of computer programs, where crossover operates on syntactic structures rather than EP's focus on phenotypic behavior.[3]

Despite these differences, all four paradigms incorporate selection to favor superior individuals for propagation, ensuring survival of the fittest across generations.[7] EP's representational flexibility particularly bridges GA's discrete search spaces and ES's continuous ones, enabling versatile applications that draw from multiple influences.[6] The nomenclature surrounding these techniques evolved in the 1990s, shifting from isolated terms like "evolutionary programming" to the unified "evolutionary computation" framework, which encouraged hybridization and cross-pollination of operators and representations among EP, GA, ES, and GP.[3][7]

History
Origins in the 1960s
Evolutionary programming (EP) was invented in 1960 by Lawrence J. Fogel while serving as a program director at the National Science Foundation.[3] Motivated by the limitations of early artificial intelligence approaches that relied on explicit rule-based programming, Fogel sought to simulate the evolutionary process to develop machine intelligence, emphasizing adaptation through variation and selection rather than predefined instructions.[7] This work positioned EP as a response to the challenges in modeling intelligent behavior in uncertain environments, drawing inspiration from biological evolution to evolve predictive capabilities autonomously.[3]

In its initial formulation, EP represented candidate solutions as finite state machines (FSMs), treating these machines as "organisms" in a population.[7] The FSMs were tasked with predicting the next symbol in sequences of binary symbols, simulating simple language prediction or pattern recognition.[3] Evolution proceeded by mutating FSM transitions (such as altering outputs, or adding or deleting states) to generate offspring, with selection based on a payoff function measuring predictive accuracy, often the squared error between predicted and actual symbols.[7] This behavioral focus, rather than genetic encoding, distinguished EP from contemporaneous methods and aimed to evolve adaptive strategies without human-designed rules.[3]

Fogel's foundational ideas were first detailed in his 1962 paper "Autonomous Automata," published in Industrial Research, which introduced the concept of self-organizing machines through simulated evolution.[3] This was expanded in his 1964 Ph.D. dissertation, "On the Organization of Intellect," at the University of California, Los Angeles, where he formalized the evolutionary process for intellect simulation.[7] The seminal 1966 book, Artificial Intelligence through Simulated Evolution, co-authored with Alvin J. Owens and Michael J. Walsh, synthesized these efforts and provided comprehensive experiments demonstrating EP's viability.

Early applications of EP in the 1960s focused on evolving FSMs for time-series prediction and pattern recognition, such as forecasting binary sequences to model basic linguistic structures.[7] Additional demonstrations included game-playing strategies, like simulating air-to-air combat tactics through coevolved opponent behaviors. These efforts culminated in the founding of Decision Science, Inc. in 1965 by Fogel, Owens, and Walsh, the first company dedicated to commercial applications of evolutionary computation for real-world problems in prediction and control.[3]

Developments from the 1980s onward
In the 1980s, David Fogel revived and extended evolutionary programming (EP) beyond its original focus on finite state machines, applying it to generalized optimization problems using representations such as real-valued vectors, permutations, and matrices.[3] This shift emphasized EP's versatility for numerical and combinatorial optimization, building on Lawrence Fogel's foundational ideas while adapting the method to broader computational challenges. In 1988, David Fogel introduced tournament selection as a probabilistic mechanism to replace the original fitness-proportional selection, enabling softer competition among solutions and improving convergence on problems like the traveling salesman.[3]

A key milestone in the early 1990s was the incorporation of self-adaptation, where strategy parameters for controlling mutation rates were evolved alongside the primary solutions, allowing the algorithm to automatically adjust variation intensity without external tuning.[3] This innovation, detailed in papers by Fogel et al. in 1991 and 1992, introduced meta-evolutionary programming, enhancing EP's robustness for dynamic environments by permitting correlations between object and strategy variables.[3]

During the 1990s, EP integrated more deeply into the broader field of evolutionary computation, gaining traction through theoretical advancements and applications in areas such as control systems and pattern recognition. David Fogel's 1995 book, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, synthesized these developments, arguing for EP's role in advancing machine intelligence through simulated evolution and highlighting its philosophical implications for computation. By the decade's end, enhancements like adaptive mutation strategies further improved EP's efficiency on benchmark functions.[3]

In the 2000s, EP saw commercialization through ventures like Natural Selection, Inc., founded by David Fogel in 1993 to apply evolutionary methods to real-world engineering problems, including signal processing and defense applications.[8] From the 2010s to 2025, developments emphasized scalability via parallel computing, with distributed implementations on high-performance clusters enabling EP to handle large-scale optimization, such as in multi-objective problems and big data scenarios, though it remained less dominant than hybrid approaches with deep learning.[9] Integration with contemporary AI, particularly neural networks, was limited until the 2020s, when evolutionary techniques began supporting neuroevolution for tasks like architecture search and plasticity adaptation.[10]

Algorithmic components
Solution representation
In classical evolutionary programming, candidate solutions are represented as finite state machines (FSMs), each comprising a finite set of states, transitions between those states triggered by input symbols, and outputs associated with each state or transition.[4] These FSMs form complete diagrams for each individual in the population, enabling the evolution of predictive behaviors for tasks such as symbol sequence forecasting from non-stationary time series.[5]

Modern evolutionary programming employs more adaptable representations tailored to the problem domain, with a prevalent shift to real-valued vectors for continuous optimization, where each individual is encoded as a D-dimensional array of real numbers denoting adjustable parameters.[11] Binary strings serve as an alternative for discrete optimization challenges, while other formats like permutations or variable-length lists accommodate specific needs such as scheduling or evolving structures.[11]

Central to these encodings in evolutionary programming is the principle of mapping "genotypes" (the internal representations such as FSM diagrams or parameter vectors) to "phenotypes" (the observable behaviors or functional outputs they produce), without imposing a rigid schema, to allow domain-specific customization.[4] For instance, in parameter optimization for function minimization, real-valued vectors directly embody the genotype, yielding phenotypes as the evaluated function values.[11] This representational approach emphasizes behavioral efficacy over syntactic structure, facilitating the direct evolution of performance in diverse applications like control policy design, where parameter vectors capture essential dynamics without extraneous constraints.[11]
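To illustrate the genotype side of this mapping, here is a minimal sketch of the two representations discussed above, assuming real-valued object variables paired with per-dimension strategy parameters for modern EP, and a transition-table encoding for a classical FSM; the class and field names are illustrative choices, not a standard convention.

```python
from dataclasses import dataclass, field

@dataclass
class RealIndividual:
    """Modern EP genotype: object variables plus per-dimension step sizes."""
    x: list[float]       # object variables x_1..x_n (the candidate solution)
    sigma: list[float]   # self-adapted strategy parameters sigma_1..sigma_n

@dataclass
class FSMIndividual:
    """Classical EP genotype: a finite state machine for symbol prediction."""
    initial_state: int
    # (state, input symbol) -> (next state, output symbol)
    transitions: dict[tuple[int, str], tuple[int, str]] = field(default_factory=dict)
```

In both cases the phenotype is the behavior the structure produces, e.g. the objective value reached at x or the symbol sequence the FSM predicts.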
Mutation and variation operators

In evolutionary programming (EP), mutation acts as the primary mechanism for introducing variation into the population, supplanting the crossover operators common in other evolutionary algorithms; it generates diversity by applying small, random perturbations to parent solutions, enabling gradual exploration of the search space.[12] Classical EP, as originally formulated by Fogel, targets finite state machines (FSMs) representing predictive models, where mutation involves probabilistic alterations to machine components such as transitions or outputs to evolve behavioral capabilities. Specifically, each offspring is produced from a parent by applying a Poisson-distributed number of mutations (with mean rate typically 3.0), where each mutation equally likely selects one of five operations: adding a state (up to a maximum of 25 states), deleting a state (maintaining at least 3 states), changing the initial state, altering an output symbol, or modifying a next-state transition, with affected elements chosen uniformly at random. Early implementations, including Gaussian perturbations for continuous aspects, emphasized these structural changes to adapt FSMs for tasks like time-series prediction.[4][13]

Modern EP shifts focus to real-valued representations for numerical optimization, employing Gaussian mutation on parameter vectors to produce offspring. For an individual with object variables x = (x_1, …, x_n) and associated strategy parameters σ = (σ_1, …, σ_n), each component is mutated independently as

x′_i = x_i + N_i(0, σ_i),

where N_i(0, σ_i) denotes a normally distributed random deviate with mean 0 and standard deviation σ_i, controlling the perturbation magnitude for that dimension. This approach, without recombination, relies on mutation alone to navigate continuous search spaces, often with fixed population sizes of 50–100 individuals to balance computational efficiency and diversity. Boundary handling for constrained domains typically involves reflection (e.g., mirroring values beyond limits) or repair to ensure feasibility.[12][14]

To enhance adaptability, modern EP incorporates self-adaptation of the strategy parameters σ_i, allowing the algorithm to evolve mutation strengths alongside solutions. The update for each σ_i follows a log-normal scheme:

σ′_i = σ_i · exp(τ′ · N(0, 1) + τ · N_i(0, 1)),

where N(0, 1) and N_i(0, 1) are independent standard normal random variables (with N(0, 1) shared globally across dimensions), and τ′ and τ are exogenous learning rates often set to τ′ = 1/√(2n) and τ = 1/√(2√n) for n-dimensional problems. This scheme stems from the need to maintain positive σ values while enabling both global (via τ′, promoting uniform scaling) and local (via τ, allowing dimension-specific tuning) adjustments; the exponential multiplier ensures multiplicative changes that scale with the current σ_i, fostering progressive refinement from broad exploration to precise exploitation as evolution proceeds. Self-adaptation was integrated into EP in the early 1990s, drawing parallels to evolution strategies but tailored for EP's mutation-centric framework.[14][15]
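The following is a minimal sketch of this self-adaptive Gaussian mutation, assuming the RealIndividual representation sketched earlier and a box-constrained domain handled by reflection; applying the log-normal σ update before perturbing each variable is a common ordering, but neither the example bounds nor the helper name mutate is prescribed by EP itself.

```python
import math
import random

def mutate(ind, lower=-10.0, upper=10.0):
    """Self-adaptive EP mutation: log-normal update of each step size,
    then a Gaussian perturbation of the corresponding object variable."""
    n = len(ind.x)
    tau_prime = 1.0 / math.sqrt(2.0 * n)         # global learning rate
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))    # per-dimension learning rate
    g = random.gauss(0.0, 1.0)                   # N(0,1) shared across all dimensions
    new_x, new_sigma = [], []
    for x_i, s_i in zip(ind.x, ind.sigma):
        s_new = s_i * math.exp(tau_prime * g + tau * random.gauss(0.0, 1.0))
        x_new = x_i + random.gauss(0.0, s_new)   # N(0, sigma) perturbation
        while x_new < lower or x_new > upper:    # reflect back into the feasible box
            x_new = 2 * lower - x_new if x_new < lower else 2 * upper - x_new
        new_x.append(x_new)
        new_sigma.append(s_new)
    return RealIndividual(x=new_x, sigma=new_sigma)
```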
Selection mechanisms

In evolutionary programming (EP), the fitness function provides a problem-specific evaluation of an individual's solution quality, typically measuring performance against a defined objective such as prediction accuracy or error minimization. In classical EP applications, such as those involving finite state machines for sequence prediction, fitness is calculated as the fraction of correct predictions over a series of input symbols or as an average payoff derived from a comparison matrix of expected versus actual outputs.[4] These raw fitness values can be used directly or converted to rankings to normalize comparisons across the population, ensuring fair assessment regardless of scale.[3]

Modern extensions of EP adapt the fitness function to handle complex scenarios, including multi-objective optimization, where trade-offs among conflicting goals (e.g., cost versus performance) are evaluated using Pareto dominance or scalarization techniques.[16] In noisy fitness landscapes, common in real-world simulations or sensor-based problems, evaluations incorporate stochastic variations; strategies like averaging multiple fitness assessments or robust ranking mitigate noise effects to maintain reliable selection.[17] In classical EP, fitness directly represents a payoff, such as the number of correct behavioral predictions in predictive tasks, emphasizing survival based on demonstrated competence.[4]

The core selection mechanism in EP is tournament selection, pioneered by David Fogel in 1988, which introduces stochastic competition to balance exploitation and exploration. Each individual competes against q randomly chosen opponents (often q = 10), earning a "win" if its fitness equals or exceeds that of the opponent; the total wins determine survival probability, allowing even suboptimal solutions a chance to persist and avoiding premature convergence to local optima.[3][2] Elitism may optionally be applied by guaranteeing the retention of the highest-fitness individual, though the inherent randomness of tournaments frequently preserves top performers without explicit intervention.[18]

EP follows a generational replacement model in which offspring fully supplant parents, commonly using a (μ, λ) scheme with λ = μ: each of the μ parents generates one mutated offspring, and tournament selection chooses the next μ individuals from the combined pool of 2μ.[4] This comma-style approach, where only offspring compete for advancement, fosters convergence through progressive diversity reduction as lower-fitness variants are culled.[19] Variants like (μ + λ) or (μ, μ) are less prevalent in standard EP, as the pure replacement emphasizes adaptation via competition among new candidates.[3]

A simplified pseudocode representation of the tournament selection process in EP is:
```python
import random

def tournament_select(P, f, mu, q=10):
    """Select the next generation of mu individuals from a pool P of size
    2*mu (mu parents + mu offspring), where f[i] is the fitness of P[i]."""
    n = len(P)                      # n == 2*mu
    wins = [0] * n
    for i in range(n):
        for _ in range(q):          # q randomly chosen opponents
            k = random.randrange(n - 1)
            if k >= i:              # remap so an individual never meets itself
                k += 1
            if f[i] >= f[k]:        # a win: at least as fit as the opponent
                wins[i] += 1
    # rank by wins descending, ties broken by raw fitness
    order = sorted(range(n), key=lambda i: (wins[i], f[i]), reverse=True)
    return [P[i] for i in order[:mu]]
```