Genetic operator
A genetic operator is an operator used in evolutionary algorithms (EA) to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another in order for the algorithm to be successful.[1] Genetic operators are used to create and maintain genetic diversity (mutation operator), combine existing solutions (also known as chromosomes) into new solutions (crossover) and select between solutions (selection).[2][3]
The classic representatives of evolutionary algorithms include genetic algorithms, evolution strategies, genetic programming and evolutionary programming. In his book discussing the use of genetic programming for the optimization of complex problems, computer scientist John Koza has also identified an 'inversion' or 'permutation' operator; however, the effectiveness of this operator has never been conclusively demonstrated and this operator is rarely discussed in the field of genetic programming.[4][5] For combinatorial problems, however, these and other operators tailored to permutations are frequently used by other EAs.[6][7]
Mutation (or mutation-like) operators are said to be unary operators, as they only operate on one chromosome at a time. In contrast, crossover operators are said to be binary operators, as they operate on two chromosomes at a time, combining two existing chromosomes into one new chromosome.[8][9]
Operators
Genetic variation is a necessity for the process of evolution. Genetic operators used in evolutionary algorithms are analogous to those in the natural world: survival of the fittest, or selection; reproduction (crossover, also called recombination); and mutation.
Selection
Selection operators give preference to better candidate solutions (chromosomes), allowing them to pass on their 'genes' to the next generation (iteration) of the algorithm. The best solutions are determined using some form of objective function (also known as a 'fitness function' in evolutionary algorithms) before being passed to the crossover operator. Different methods for choosing the best solutions exist, for example fitness proportionate selection and tournament selection.[10] A further selection operator, or the same one, is then used to determine which individuals form the next parent generation. The selection operator may also ensure that the best solution(s) from the current generation always become(s) a member of the next generation without being altered;[11] this is known as elitism or elitist selection.[2][12][13]
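Tournament selection can be sketched in a few lines; the population, fitness function, and parameter values below are illustrative choices, not drawn from the cited sources.

```python
import random

def tournament_select(population, fitness, k=3):
    """Sample k individuals at random and return the fittest one."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

# Hypothetical example: fitness is the number of 1-bits in a chromosome.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
parent = tournament_select(population, fitness=sum, k=3)
```

Raising `k` increases selection pressure; with `k` equal to the population size the operator always returns the best individual.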
Crossover
Crossover is the process of taking more than one parent solution (chromosome) and producing a child solution from them. By recombining portions of good solutions, the evolutionary algorithm is more likely to create a better solution.[2] As with selection, there are a number of different methods for combining the parent solutions, including the edge recombination operator (ERO) and the 'cut and splice' and 'uniform crossover' methods. The crossover method is often chosen to closely match the chromosome's representation of the solution; this may become particularly important when variables are grouped together as building blocks, which might be disrupted by a non-respectful crossover operator. Similarly, particular crossover methods may suit certain problems; the ERO is considered a good option for solving the travelling salesman problem.[14]
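A minimal single-point crossover for fixed-length chromosomes might look as follows; this is an illustrative sketch, not one of the specific operators cited above.

```python
import random

def single_point_crossover(parent1, parent2):
    """Cut both chromosomes at one random point and swap the tails."""
    assert len(parent1) == len(parent2)
    point = random.randint(1, len(parent1) - 1)  # cut strictly inside
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

child1, child2 = single_point_crossover([0] * 6, [1] * 6)
```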
Mutation
The mutation operator encourages genetic diversity amongst solutions and attempts to prevent the evolutionary algorithm converging to a local minimum by stopping the solutions from becoming too similar to one another. In mutating the current pool of solutions, a given solution may change anywhere from slightly to entirely from its previous state.[15] By mutating the solutions, an evolutionary algorithm can reach an improved solution solely through the mutation operator.[2] Again, different methods of mutation may be used; these range from a simple bit mutation (flipping random bits in a binary string chromosome with some low probability) to more complex methods in which genes in the solution are changed, for example by adding a random value drawn from a Gaussian distribution to the current gene value. As with the crossover operator, the mutation method is usually chosen to match the representation of the solution within the chromosome.[15][3]
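Both flavours mentioned above, bit-flip for binary strings and Gaussian perturbation for real-valued genes, can be sketched as follows (the probabilities and sigma are illustrative, not values from the sources):

```python
import random

def bit_flip_mutation(chromosome, p=0.01):
    """Flip each bit independently with a small probability p."""
    return [1 - gene if random.random() < p else gene for gene in chromosome]

def gaussian_mutation(vector, sigma=0.1, p=0.1):
    """Add N(0, sigma) noise to each real-valued gene with probability p."""
    return [gene + random.gauss(0.0, sigma) if random.random() < p else gene
            for gene in vector]

mutant = bit_flip_mutation([0, 1, 0, 1, 1], p=0.5)
```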
Combining operators
While each operator, working individually, acts to improve the solutions produced by the evolutionary algorithm, the operators must work in conjunction with each other for the algorithm to be successful in finding a good solution.[3] Using the selection operator on its own will tend to fill the solution population with copies of the best solution from the population. If the selection and crossover operators are used without the mutation operator, the algorithm will tend to converge to a local minimum, that is, a good but sub-optimal solution to the problem. Using the mutation operator on its own leads to a random walk through the search space. Only by using all three operators together can the evolutionary algorithm become a noise-tolerant global search algorithm, yielding good solutions to the problem at hand.[2]
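The interplay of all three operators can be made concrete with a minimal generational GA for the OneMax problem (maximizing the number of 1-bits in a binary string). All names and parameter values here are illustrative choices, not prescribed by the sources above.

```python
import random

def evolve(pop_size=30, length=20, generations=60,
           crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Minimal generational GA for OneMax: selection + crossover + mutation."""
    rng = random.Random(seed)
    fitness = sum  # count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament(k=2):
        return max(rng.sample(pop, k), key=fitness)

    for _ in range(generations):
        elite = max(pop, key=fitness)           # elitism: keep the best unchanged
        nxt = [elite[:]]
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:   # single-point crossover
                cut = rng.randint(1, length - 1)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]            # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Dropping any one operator reproduces the failure modes described above: selection alone duplicates the current best, and mutation alone degenerates into a random walk.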
References
- ^ Jiang, Dazhi; Tian, Zhihang; He, Zhihui; Tu, Geng; Huang, Ruixiang (1 September 2021). "A framework for designing of genetic operators automatically based on gene expression programming and differential evolution". Natural Computing. 20 (3): 395–411. doi:10.1007/s11047-020-09830-2. ISSN 1572-9796.
- ^ a b c d e "Introduction to Genetic Algorithms". Archived from the original on 11 August 2015. Retrieved 20 August 2015.
- ^ a b c Eiben, A.E.; Smith, J.E. (2015). "Representation, Mutation, and Recombination". Introduction to Evolutionary Computing. Natural Computing Series. Berlin, Heidelberg: Springer. pp. 49–78. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1. S2CID 20912932.
- ^ Koza, John R. (1996). Genetic Programming: On the Programming of Computers by Means of Natural Selection (6th ed.). Cambridge, Mass.: MIT Press. ISBN 0-262-11170-5.
- ^ "Genetic programming operators". FTP server (FTP). Retrieved 20 August 2015. [dead ftp link]
- ^ Eiben, A.E.; Smith, J.E. (2015). "Mutation for Permutation Representation". Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. pp. 69–70. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
- ^ Yu, Xinjie; Gen, Mitsuo (2010). "Mutation Operators". Introduction to Evolutionary Algorithms. Decision Engineering. London: Springer. pp. 286–288. doi:10.1007/978-1-84996-129-5. ISBN 978-1-84996-128-8.
- ^ "Genetic operators". Archived from the original on 30 December 2017. Retrieved 20 August 2015.
- ^ Eiben, A.E.; Smith, J.E. (2015). "Variation Operators (Mutation and Recombination)". Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. pp. 31–33. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
- ^ Eiben, A.E.; Smith, J.E. (2015). "Parent Selection". Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. pp. 80–87. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
- ^ Eiben, A.E.; Smith, J.E. (2015). "Survivor Selection". Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. pp. 87–90. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
- ^ "Introduction to Genetic Algorithm". Retrieved 20 August 2015.
- ^ Eiben, A.E.; Smith, J.E. (2015). Introduction to Evolutionary Computing. Natural Computing Series (2nd ed.). Berlin, Heidelberg: Springer. p. 89. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1.
- ^ Whitley, Darrell; Starkweather, Timothy; Fuquay, D'Ann (1989), Schaffer, J.D. (ed.), "Scheduling Problems and Traveling Salesmen: The Genetic Edge Recombination Operator", Proceedings of the 3rd International Conference on Genetic Algorithms (ICGA), San Francisco: Morgan Kaufmann, pp. 133–140, ISBN 1558600663
- ^ a b Bäck, Thomas; Fogel, David B.; Whitley, Darrell; Angeline, Peter J. (1999). "Mutation operators". In Bäck, Thomas; Fogel, David B.; Michalewicz, Zbigniew (eds.). Evolutionary Computation. Vol. 1: Basic Algorithms and Operators. Boca Raton: CRC Press. pp. 237–255. ISBN 0-585-30560-9. OCLC 45730387.
Fundamentals
Definition and Purpose
Genetic operators are computational procedures employed in genetic algorithms (GAs) and broader evolutionary computation frameworks to manipulate a population of candidate solutions, referred to as individuals, thereby generating successive generations of solutions modeled after biological evolution. [http://www.scholarpedia.org/article/Genetic_algorithms] These operators emulate natural processes by encoding solutions as chromosomes, typically binary strings or real-valued vectors, and applying transformations to evolve the population toward improved outcomes. [https://link.springer.com/article/10.1007/s11042-020-10139-6] The primary purpose of genetic operators is to facilitate efficient navigation of complex search spaces, where promoting fitter individuals drives convergence to high-quality solutions while preserving diversity to avoid premature stagnation. [https://link.springer.com/article/10.1007/s11042-020-10139-6] This involves balancing exploration, which generates novel variations to broadly sample the solution space, and exploitation, which intensifies the focus on promising regions to refine solutions, a balance essential for tackling optimization problems with multiple local optima. [https://mitpress.mit.edu/9780262581110/adaptation-in-natural-and-artificial-systems/] Effective operator application thus enhances the algorithm's ability to approximate global optima in non-linear, high-dimensional domains. [http://www.scholarpedia.org/article/Genetic_algorithms]

Central to their function are prerequisites like population representation, which structures individuals for manipulation (e.g., binary encodings for discrete problems or real vectors for continuous ones), and fitness evaluation, a scalar function that quantifies each individual's performance relative to the optimization objective. [https://link.springer.com/article/10.1007/s11042-020-10139-6] Without these, operators cannot selectively propagate advantageous traits across generations. [https://mitpress.mit.edu/9780262581110/adaptation-in-natural-and-artificial-systems/] The core operators, selection, crossover, and mutation, collectively orchestrate this process. [http://www.scholarpedia.org/article/Genetic_algorithms] In practice, consider a GA for function optimization: an initial random population of parameter vectors undergoes operator-induced transformations over iterations, progressively converging to near-optimal configurations that minimize or maximize the target function. [https://link.springer.com/article/10.1007/s11042-020-10139-6] This evolutionary progression mirrors natural adaptation, yielding robust solutions for applications like parameter tuning in engineering design. [http://www.scholarpedia.org/article/Genetic_algorithms]

Historical Context
The foundations of genetic operators trace back to mid-20th-century cybernetics, where concepts of self-reproduction and adaptive systems laid groundwork for evolutionary computation. In the 1950s, John von Neumann explored self-reproducing automata as theoretical models for reliable computation and biological replication, influencing later ideas of algorithmic evolution through mechanisms that could generate and modify structures autonomously.[6] Building on this, Richard M. Friedberg's 1958 work introduced machine learning via evolutionary processes, using random mutations and selection to evolve programs on an IBM 704 computer, marking an early empirical attempt to simulate adaptive improvement without explicit programming.[7] In the 1960s, parallel developments in evolution strategies (ES) by Ingo Rechenberg and Hans-Paul Schwefel at the Technical University of Berlin introduced mutation-based variation and selection for continuous parameter optimization, laying foundational principles for real-valued representations in evolutionary algorithms.[8] The formal inception of genetic operators occurred with John Holland's seminal 1975 book Adaptation in Natural and Artificial Systems, which introduced genetic algorithms (GAs) as computational methods inspired by natural evolution, incorporating operators such as selection, crossover, and mutation to mimic biological adaptation and solve optimization problems.[9] Holland, recognized as the founder of GAs, emphasized these operators as key to schema processing and building block hypotheses for adaptive search. 
Concurrently, Kenneth De Jong's 1975 dissertation analyzed GA behavior and parameters, including operator rates, providing empirical validation for their role in function optimization.[10] During the 1980s, GAs and their operators gained popularity for tackling complex optimization challenges, such as the traveling salesman problem, owing to advances in computing power and broader accessibility of Holland's ideas.[11] David Goldberg's 1989 book Genetic Algorithms in Search, Optimization, and Machine Learning further standardized operator usage, offering theoretical and practical frameworks that propelled their adoption in engineering and AI applications.[12] By the 1990s, the field had moved from binary representations to real-coded GAs, enabling more precise handling of continuous parameters, as demonstrated in early works like Eshelman and Schaffer's interval-schemata analysis. This shift coincided with John Koza's 1992 integration of operators into genetic programming, extending them to evolve tree-based structures for automatic program synthesis.[13]

Core Operators
Selection
Selection is a genetic operator in evolutionary algorithms that probabilistically chooses individuals from the current population to serve as parents for the next generation, with selection probabilities typically proportional to their fitness values, thereby mimicking natural selection by favoring the survival and reproduction of fitter solutions. This process forms a mating pool that drives evolutionary progress toward higher-quality solutions while balancing exploration and exploitation in the search space.

Common types of selection include fitness proportionate selection, also known as roulette wheel selection, where the probability of selecting individual $i$ is given by $p_i = f_i / \sum_{j=1}^{N} f_j$, with $f_i$ denoting the fitness of individual $i$. Rank-based selection addresses issues in fitness proportionate methods by assigning selection probabilities based on an individual's rank in the sorted population rather than raw fitness values, helping to prevent premature convergence caused by highly fit individuals dominating the pool. Tournament selection, another prevalent method, involves randomly sampling $k$ individuals (where $k$ is typically small, such as 2 or 3) and selecting the one with the highest fitness as a parent, offering tunable selection pressure through the choice of $k$.

Key parameters in selection include selection intensity, which controls the bias toward fitter individuals, and elitism, a strategy that guarantees the preservation of the top-ranked individuals into the next generation to maintain progress. However, high selection pressure can lead to loss of population diversity, resulting in stagnation or premature convergence to suboptimal solutions, as overly aggressive favoritism reduces the exploration of novel genotypes. The mathematical foundation of selection often revolves around the expected number of offspring for an individual, calculated as $f_i / \bar{f}$, where $\bar{f}$ is the mean population fitness, reflecting the reproductive advantage of superior fitness.
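The fitness-proportionate rule can be realized with a cumulative-sum "roulette wheel"; the implementation below is one common sketch and assumes non-negative fitness values.

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Fitness-proportionate selection: p_i = f_i / sum_j f_j (sketch).

    Assumes all fitness values are non-negative and not all zero.
    """
    total = sum(fitnesses)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if acc >= r:
            return individual
    return population[-1]  # guard against floating-point round-off

picked = roulette_wheel_select(["a", "b", "c"], [1.0, 3.0, 6.0])
```

Here "c", with 60% of the total fitness, is expected to be chosen about six times as often as "a".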
For linear ranking selection, a specific form of rank-based method, the selection probability is $p_i = \frac{1}{N}\left(c + \frac{2(1 - c)(N - r_i)}{N - 1}\right)$, where $N$ is the population size, $r_i$ is the rank of individual $i$ (with 1 for the best), and $c$ ($0 \le c \le 1$) is the selection intensity parameter that adjusts the pressure. In the 0/1 knapsack problem, where the goal is to maximize value without exceeding weight capacity, tournament selection with $k = 2$ effectively identifies and propagates individuals representing high-value, low-weight item combinations by repeatedly choosing the fitter solution from random pairs, leading to efficient convergence in benchmark instances.[14] Selected parents subsequently undergo crossover and mutation to produce offspring for the next generation.

Crossover
In genetic algorithms, crossover is a recombination operator that exchanges genetic material between two selected parent solutions to produce offspring, thereby simulating the inheritance of beneficial traits observed in sexual reproduction and promoting diversity in the population. This process typically involves aligning the chromosomes of the parents and swapping segments at designated points, which allows the algorithm to combine advantageous building blocks from different individuals. The effectiveness of crossover relies on the prior selection of high-fitness parents, ensuring that the resulting progeny inherit promising genetic structures.[9]

Common types of crossover operators vary by representation and problem domain. For binary-encoded chromosomes, single-point crossover selects a single random locus and swaps the substrings beyond that point between parents, preserving large schemata while introducing moderate disruption. Multi-point crossover extends this by using multiple random loci for swaps, increasing variability but risking greater disruption of building blocks. Uniform crossover, in contrast, exchanges each allele independently with a fixed probability, often 0.5, which provides fine-grained recombination suitable for exploring broad solution spaces. In permutation-based problems, such as the traveling salesman problem (TSP), order-preserving operators like partially mapped crossover (PMX) maintain the relative order of elements by mapping segments between two cut points and repairing the remaining positions to avoid duplicates. The crossover rate, typically set between 0.6 and 0.9, determines the probability of applying the operator to a pair of parents, balancing exploration and exploitation in the evolutionary search.[9][15][16][17]

For real-coded genetic algorithms addressing continuous optimization, crossover operators must handle numerical values without discretization artifacts; poorly designed operators can be highly disruptive.
Arithmetic crossover computes offspring as a convex combination of the parents, given by

$x' = \alpha x_1 + (1 - \alpha) x_2,$

where $\alpha$ is a fixed parameter (e.g., 0.5) or adaptively chosen, enabling interpolation within the search space. Blend crossover (BLX-$\alpha$) addresses disruption by sampling offspring from an expanded interval around the parents: for each dimension, if $x_1 \le x_2$, the offspring value is drawn uniformly from $[x_1 - \alpha(x_2 - x_1),\ x_2 + \alpha(x_2 - x_1)]$, with $\alpha$ typically 0.1 to promote neighborhood exploration. In binary representations, the schema theorem provides a theoretical foundation, stating that the expected number of copies of a schema $H$ after crossover satisfies

$E[m(H, t+1)] \ge m(H, t)\,\frac{f(H)}{\bar{f}}\left[1 - p_c\,\frac{\delta(H)}{\ell - 1}\right],$

where $m(H, t)$ is the number of instances of $H$ at time $t$, $f(H)$ is the average fitness of those instances, $\bar{f}$ is the population mean fitness, $p_c$ is the crossover rate, $\delta(H)$ is the schema's defining length, and $\ell$ is the chromosome length; this implies that short, high-fitness schemata propagate with high probability under low-disruption crossover.[18][9] An illustrative example occurs in function optimization, where crossover blends high-fitness parent vectors to produce offspring that inherit effective trait combinations, accelerating convergence toward global optima. In the TSP, PMX recombines route segments from two parent tours, mapping partial paths to preserve feasibility while exploring new permutations that leverage strong subsequences from each parent. These operators enhance the algorithm's ability to exploit selected genetic material, though their performance depends on tuning to the problem's structure.[17]
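The real-coded operators just described, and PMX for permutations, can be sketched together; the cut points and parameter values below are illustrative assumptions, not values from the sources.

```python
import random

def arithmetic_crossover(x1, x2, alpha=0.5):
    """Offspring as a convex combination: alpha*x1 + (1 - alpha)*x2."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(x1, x2)]

def blx_alpha(x1, x2, alpha=0.1):
    """BLX-alpha: per-gene uniform sample from the parents' interval,
    expanded on both sides by alpha times its width."""
    child = []
    for a, b in zip(x1, x2):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        child.append(random.uniform(lo - alpha * d, hi + alpha * d))
    return child

def pmx(parent1, parent2, a, b):
    """Partially mapped crossover: copy parent2[a:b], fill the rest from
    parent1, and repair duplicates by following the segment mapping."""
    n = len(parent1)
    child = [None] * n
    child[a:b] = parent2[a:b]
    in_segment = set(parent2[a:b])
    for i in list(range(a)) + list(range(b, n)):
        v = parent1[i]
        while v in in_segment:                 # duplicate: chase the mapping
            v = parent1[parent2.index(v)]
        child[i] = v
    return child

mid = arithmetic_crossover([0.0, 2.0], [1.0, 4.0])  # -> [0.5, 3.0]
tour = pmx([1, 2, 3, 4, 5, 6, 7, 8],
           [3, 7, 5, 1, 6, 8, 2, 4], a=3, b=6)
```

Note how `pmx` always yields a valid permutation: every city outside the copied segment is either taken directly from the first parent or remapped through the segment, so no tour visits a city twice.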

