Genetic algorithm
from Wikipedia

The 2006 NASA ST5 spacecraft antenna. This complicated shape was found by an evolutionary computer design program to create the best radiation pattern. It is known as an evolved antenna.

In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA).[1] Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation.[2] Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles,[3] hyperparameter optimization, and causal inference.[4]

Methodology

Optimization problems

In a genetic algorithm, a population of candidate solutions (called individuals, creatures, organisms, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.[5]

The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.

A typical genetic algorithm requires:

  1. a genetic representation of the solution domain,
  2. a fitness function to evaluate the solution domain.

A standard representation of each candidate solution is as an array of bits (also called bit set or bit string).[5] Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.

Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
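
The loop just described can be made concrete in a few lines. The following is a minimal, illustrative sketch in Python, assuming a bit-string representation, tournament selection, single-point crossover, and bit-flip mutation; inversion is omitted for brevity, and the toy "one-max" fitness function and all parameter values are arbitrary choices, not part of the algorithm's definition.

```python
import random

GENOME_LEN = 20
POP_SIZE = 100
GENERATIONS = 200
CROSSOVER_RATE = 0.9
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    # Toy objective ("one-max"): count of 1 bits; replace per problem.
    return sum(genome)

def tournament(pop, k=3):
    # Pick the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover with probability CROSSOVER_RATE.
    if random.random() < CROSSOVER_RATE:
        point = random.randrange(1, GENOME_LEN)
        return a[:point] + b[point:]
    return a[:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Initialization: a random population over the whole search space.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population),
                                   tournament(population)))
                  for _ in range(POP_SIZE)]
best = max(population, key=fitness)
print(fitness(best), best)
```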

Initialization

The population size depends on the nature of the problem, but typically contains hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found or the distribution of the sampling probability tuned to focus in those areas of greater interest.[6]
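
As a rough sketch of how random and seeded initialization can be combined, the helper below fills a population with any supplied hand-crafted seed individuals before topping it up with uniformly random ones; the function name and the 0/1 encoding are illustrative assumptions.

```python
import random

def init_population(pop_size, genome_len, seeds=None):
    # Optional "seeds" are genomes placed where good solutions are
    # suspected; the remainder is sampled uniformly at random.
    population = [list(s) for s in (seeds or [])]
    while len(population) < pop_size:
        population.append([random.randint(0, 1) for _ in range(genome_len)])
    return population

# e.g. seed one known-reasonable solution among 99 random ones:
pop = init_population(100, 20, seeds=[[1] * 20])
```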

Selection

During each successive generation, a portion of the existing population is selected to reproduce for a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.

The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem-dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
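
The knapsack fitness just described translates directly into code. A minimal sketch, assuming the object values and weights are given as parallel lists:

```python
def knapsack_fitness(bits, values, weights, capacity):
    # bits[i] == 1 means object i is packed. Invalid (overweight)
    # representations score 0, exactly as described above.
    total_weight = sum(w for b, w in zip(bits, weights) if b)
    if total_weight > capacity:
        return 0
    return sum(v for b, v in zip(bits, values) if b)

# Example: three objects, capacity 10; objects 0 and 2 fit (weight 9).
print(knapsack_fitness([1, 0, 1], values=[6, 4, 5],
                       weights=[5, 7, 4], capacity=10))  # -> 11
```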

In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.

Genetic operators

The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination), and mutation.

For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods that are based on the use of two parents are more "biology inspired", some research[7][8] suggests that more than two "parents" generate higher quality chromosomes.
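
As an illustration of the multi-parent idea mentioned above, the sketch below draws each gene from a uniformly chosen parent. This resembles what the literature sometimes calls scanning crossover, though published multi-parent variants differ in detail, so treat this as an assumption-laden example rather than the cited method.

```python
import random

def multi_parent_crossover(parents):
    # Each gene position is inherited from a uniformly chosen parent,
    # generalizing uniform crossover to more than two parents.
    length = len(parents[0])
    return [random.choice(parents)[i] for i in range(length)]

child = multi_parent_crossover([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
```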

These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.

Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search.

Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.[citation needed]

It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required.

Heuristics

In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.[9][10]
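
A sketch of one way such a speciation penalty can be realized, assuming bit-string individuals and using Hamming distance as the similarity measure; the threshold and reduction factor are arbitrary illustrative choices.

```python
def hamming(a, b):
    # Number of differing genes between two equal-length bit strings.
    return sum(x != y for x, y in zip(a, b))

def crossover_probability(p1, p2, base_rate=0.9, niche_radius=2):
    # Parents that are nearly identical get a sharply reduced chance of
    # crossing over, discouraging inbreeding-like pairings and helping
    # preserve population diversity.
    if hamming(p1, p2) <= niche_radius:
        return base_rate * 0.1
    return base_rate
```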

Termination

This generational process is repeated until a termination condition has been reached. Common terminating conditions are:

  • A solution is found that satisfies minimum criteria
  • Fixed number of generations reached
  • Allocated budget (computation time/money) reached
  • The highest-ranking solution's fitness has reached a plateau such that successive iterations no longer produce better results
  • Manual inspection
  • Combinations of the above

The building block hypothesis

Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:

  1. A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness.
  2. A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.

Goldberg describes the heuristic as follows:

"Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.
"Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."[11]

Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold.[12][13] Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.[14][15][16]

Limitations

The practical use of a genetic algorithm has limitations, especially as compared to alternative optimization algorithms:

  • Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real-world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. Amalgamating approximate models may therefore be one of the most promising approaches for convincingly using GAs to solve complex real-life problems.[citation needed]
  • Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large, there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house, or a plane.[citation needed] In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.[citation needed]
  • The "better" solution is only in comparison to other solutions. As a result, the stopping criterion is not clear in every problem.[citation needed]
  • In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that a GA does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, while others may make it easier for the search to find only local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions,[17] although the No Free Lunch theorem[18] proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (within a niche radius) has a penalty added, which reduces the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique is to simply replace part of the population with randomly generated individuals when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation.[citation needed]
  • Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.[citation needed]
  • GAs cannot effectively solve problems in which the only fitness measure is a binary pass/fail outcome (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.[citation needed]
  • For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g.: ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches.[citation needed]

Variants

Chromosome representation

The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered, but is really a misnomer because it does not really reflect the building block theory proposed by John Henry Holland in the 1970s. This theory is not without support, though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.

When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
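
Binary-reflected Gray code, the usual choice, is computed as g = b XOR (b >> 1). A small sketch with its inverse; the assert lines check the one-bit-difference property between adjacent integers that avoids Hamming walls.

```python
def to_gray(n):
    # Binary-reflected Gray code: adjacent integers differ in one bit,
    # so a single mutation can always step the decoded value by +/-1.
    return n ^ (n >> 1)

def from_gray(g):
    # Invert by cumulatively XOR-ing successive right shifts.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

assert all(bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1
           for i in range(15))
assert all(from_gray(to_gray(i)) == i for i in range(16))
```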

Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained as the set of real values in a finite population of chromosomes forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.[19][20]

An expansion of the Genetic Algorithm accessible problem domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome.[21] This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.

Another important expansion of the Genetic Algorithm (GA) accessible solution space was driven by the need to make representations amenable to variable levels of knowledge about the solution states. Variable-length representations were inspired by the observation that, in nature, evolution tends to progress from simpler organisms to more complex ones—suggesting an underlying rationale for embracing flexible structures.[22] A second, more pragmatic motivation was that most real-world engineering and knowledge-based problems do not naturally conform to rigid knowledge structures.[23]

These early innovations in variable-length representations laid essential groundwork for the development of Genetic programming, which further extended the classical GA paradigm. Such representations required enhancements to the simplistic genetic operators used for fixed-length chromosomes, enabling the emergence of more sophisticated and adaptive GA models.

Elitism

A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.[24]
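
A sketch of elitist replacement, assuming a hypothetical breed callable (not defined here) that produces one offspring from the current population:

```python
def next_generation(population, fitness, breed, elite_count=1):
    # Carry the best `elite_count` individuals over unchanged, then fill
    # the rest of the new population with bred offspring. This guarantees
    # the best fitness never decreases between generations.
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:elite_count]
    offspring = [breed(population)
                 for _ in range(len(population) - elite_count)]
    return elites + offspring
```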

Parallel implementations

Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction. Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function.

Adaptive GAs

Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have analyzed GA convergence analytically.[25][26]

Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust pc and pm in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm),[27] the adjustment of pc and pm depends on the fitness values of the solutions. There are further AGA variants: the successive zooming method is an early example of improving convergence.[28] In CAGA (clustering-based adaptive genetic algorithm),[29] clustering analysis is used to judge the optimization state of the population, and the adjustment of pc and pm depends on this state. Recent approaches use more abstract variables for deciding pc and pm, such as dominance and co-dominance principles[30] and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search space anisotropy.[31]
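
As one concrete example of fitness-based adaptation, the sketch below follows the spirit of the classic AGA family of schemes: pc and pm shrink for above-average individuals (protecting good solutions) and stay high for below-average ones (promoting disruption). The constants k1..k4 and the exact formulas are assumptions drawn from that family, not a universal definition.

```python
def adaptive_rates(f_max, f_avg, f_parent, f_individual,
                   k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    # High-fitness solutions get low pc/pm; low-fitness ones get high
    # pc/pm. k1..k4 are tunable constants (illustrative defaults).
    span = max(f_max - f_avg, 1e-12)  # guard against division by zero
    if f_parent >= f_avg:
        pc = k1 * (f_max - f_parent) / span
    else:
        pc = k3
    if f_individual >= f_avg:
        pm = k2 * (f_max - f_individual) / span
    else:
        pm = k4
    return pc, pm
```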

It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at making the last few refinements needed to reach the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding the absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA [citation needed] while overcoming the lack of robustness of hill climbing.

This means that the rules of genetic variation may have a different meaning in the natural case. For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA adding a number of steps from paternal DNA and so on. This is like adding vectors that more probably may follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency.[32]

A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.

A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA,[33] GEMGA[34] and LLGA.[35]

Problem domains

Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs[citation needed]. GAs have also been applied to engineering.[36] Genetic algorithms are often applied as an approach to solve global optimization problems.

As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex fitness landscape as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).

Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector,[37] antennae designed to pick up radio signals in space,[38] walking methods for computer figures,[39] and optimal design of aerodynamic bodies in complex flowfields.[40]

In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task:

[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate.

[...]

I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs.

— Steven Skiena[41]: 267 

History

In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution.[42] Computer simulation of evolution started as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey.[43][44] His 1954 publication was not widely noticed. Starting in 1957,[45] the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[46] and Crosby (1973).[47] Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms.[48] Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).[49]

Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game,[50] artificial evolution only became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies.[51][52][53][54] Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.

Commercial products

In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes.[55][circular reference] In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote[56] about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995.[57] Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version.[58] Since the 1990s, MATLAB has built in three derivative-free optimization heuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search).[59]

Related fields

Parent fields

Genetic algorithms are a sub-field of evolutionary algorithms, evolutionary computing, metaheuristics, and stochastic optimization.

Evolutionary algorithms

Evolutionary algorithms are a sub-field of evolutionary computing.

  • Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain.[60] They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
  • Evolutionary programming (EP) involves populations of solutions, evolved primarily through mutation and selection, with arbitrary representations. EP methods use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
  • Estimation of distribution algorithms (EDA) substitute traditional reproduction operators with model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as probabilistic graphical models, from which new solutions can be sampled[61][62] or generated from guided crossover.[63]
  • Genetic programming (GP) is a related technique popularized by John Koza in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list structures typical of genetic algorithms. There are many variants of Genetic Programming, including Cartesian genetic programming, Gene expression programming,[64] grammatical evolution, Linear genetic programming, Multi expression programming etc.
  • Grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, as in classical GAs, to groups or subsets of items.[65] The idea behind this GA evolution, proposed by Emanuel Falkenauer, is that solving some complex problems, a.k.a. clustering or partitioning problems, where a set of items must be split into disjoint groups of items in an optimal way, is better achieved by making characteristics of the groups of items equivalent to genes. These kinds of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth is arguably the best technique to date.
  • Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.

Swarm intelligence

Swarm intelligence is a sub-field of evolutionary computing.

  • Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas.
  • Although considered an estimation of distribution algorithm,[66] particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and the swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems the PSO is often more computationally efficient than GAs, especially in unconstrained problems with continuous variables.[67]

Other evolutionary computing algorithms

Evolutionary computation is a sub-field of metaheuristic methods.

  • Memetic algorithm (MA), often called hybrid genetic algorithm among others, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
  • Bacteriologic algorithms (BA) are inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, there is not one individual that fits the whole environment. So, one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.[68]
  • Cultural algorithm (CA) consists of the population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
  • Differential evolution (DE) is inspired by migration of superorganisms.[69]
  • Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information.[70] Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore, it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder (average information) of the Gaussian while simultaneously keeping the mean fitness constant.

Other metaheuristic methods

Metaheuristic methods broadly fall within stochastic optimisation methods.

  • Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA algorithm by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
  • Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
  • Extremal optimization (EO): unlike GAs, which work with a population of candidate solutions, EO evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.

Other stochastic optimisation methods

  • The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
  • Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics.

from Grokipedia
A genetic algorithm (GA) is a stochastic optimization technique that simulates the processes of natural selection and genetic inheritance to search for approximate solutions to complex problems, maintaining a population of solutions that evolve over generations through mechanisms like selection, crossover, and mutation. Developed by John H. Holland in the early 1970s at the University of Michigan, GAs draw from Charles Darwin's theory of evolution by natural selection and were formalized in Holland's 1975 book Adaptation in Natural and Artificial Systems, establishing them as a foundational subset of evolutionary computation.

In a GA, solutions are typically represented as strings or chromosomes over a finite alphabet, analogous to genetic material, with each individual in the population evaluated by a fitness function that measures its quality relative to the problem's objective. The algorithm proceeds iteratively: fitter individuals are selected probabilistically for reproduction, where crossover recombines segments from parent solutions to create offspring, and mutation introduces small random alterations to prevent premature convergence and promote exploration of the search space. This evolutionary process continues across multiple generations until a termination criterion, such as a maximum number of iterations or sufficient fitness improvement, is met, often yielding robust solutions to problems where traditional methods like gradient descent fail due to nonlinearity or discontinuity.

GAs excel in handling noisy, dynamic environments and large, complex search spaces without requiring derivative information. Notable applications span engineering design, such as aerodynamic optimization and antenna design; scheduling and routing problems in operations research; machine learning tasks such as feature selection and hyperparameter tuning; and bioinformatics tasks such as protein structure prediction. As of the 2020s, developments integrate GAs with other techniques, like hybrid models combining them with neural networks, reinforcement learning, or large language models, enhancing performance in emerging areas such as system optimization and code generation.

Fundamentals

Definition and principles

A genetic algorithm (GA) is a metaheuristic inspired by natural selection that evolves a population of candidate solutions toward improved performance on an objective function. Developed as a computational analogue to biological evolution, GAs operate on a set of potential solutions represented as individuals in a population, iteratively refining them to approximate optimal or near-optimal outcomes for complex search problems.

The core principles of GAs emphasize survival of the fittest, where individuals with higher fitness—measured relative to the objective function—are preferentially selected for reproduction, promoting the propagation of advantageous traits across generations. This process relies on probabilistic transition rules rather than deterministic ones, introducing stochastic elements to explore diverse regions of the solution space and avoid local optima. Iterative improvement occurs over successive generations, balancing exploitation of promising solutions with exploration of new possibilities through mechanisms that mimic genetic variation.

At a high level, a GA begins with an initial random population of solutions, evaluates their fitness against the problem's criteria, and applies selection, recombination, and variation operators to generate offspring for the next generation. This cycle repeats until a predefined termination criterion, such as convergence or a maximum number of generations, is reached. Unlike exact methods that guarantee global optimality through exhaustive search, GAs are approximate and stochastic, making them particularly suited for tackling non-linear, multimodal, or high-dimensional search spaces where traditional optimization techniques falter.

Biological inspiration

Genetic algorithms (GAs) are fundamentally inspired by the mechanisms of biological evolution, particularly as abstracted by John H. Holland in his seminal 1975 work, Adaptation in Natural and Artificial Systems, where he framed GAs as computational models of adaptive processes observed in natural systems. Holland's approach sought to emulate how biological entities adapt to their environments through iterative improvement, drawing parallels between organic evolution and algorithmic search for optimal solutions in complex problem spaces. This biological motivation positions GAs within the broader field of evolutionary computation, emphasizing adaptation as a robust strategy for navigating rugged fitness landscapes.

At the heart of this inspiration lies the analogy to Charles Darwin's theory of natural selection, where "survival of the fittest" translates to candidate solutions (individuals) in a population competing based on their evaluated fitness relative to the problem at hand. Fitter individuals are preferentially selected for reproduction, mimicking how advantageous traits in nature confer greater reproductive success and propagate through successive generations. This process ensures that the population evolves toward higher-quality solutions over time, much as natural populations adapt to selective pressures in their habitats.

Key genetic mechanisms further bridge the biological and computational realms. Crossover, or recombination, emulates sexual reproduction by exchanging segments of genetic material between two parent individuals to produce offspring with blended traits, thereby generating novel combinations that may enhance fitness. Mutation introduces small, random alterations to an individual's representation, analogous to spontaneous genetic changes in DNA that introduce variability and prevent stagnation. Through these operators, traits are inherited and refined across generations, fostering incremental improvements akin to the gradual adaptation of species.

Population dynamics in GAs also reflect ecological principles, where maintaining diversity within the group is essential to explore diverse regions of the solution space and evade entrapment in suboptimal local optima. This mirrors biodiversity in natural ecosystems, which promotes resilience and adaptability by ensuring a broad range of genetic variations that can respond to changing environmental conditions. Holland emphasized this aspect in modeling adaptive systems, highlighting how diversity sustains long-term evolutionary progress.

Core Components

Population representation

In genetic algorithms, the population consists of multiple individuals, each represented as a chromosome—a structured string of genes that encodes a potential solution to the optimization problem. This artificial chromosome mimics biological genetic material, where genes correspond to decision variables or parameters, allowing the algorithm to manipulate and evolve solutions through genetic operators. The choice of representation influences the algorithm's ability to explore the search space effectively, as introduced in foundational work on adaptive systems.

Common encoding types vary by problem domain to ensure suitable granularity and operator compatibility. Binary encoding, the most prevalent method, represents chromosomes as strings of bits (0s and 1s), suitable for combinatorial problems; for instance, in the 0/1 knapsack problem, each bit indicates whether an item is included (1) or excluded (0) in the solution set, enabling compact representation of subsets. Real-valued encoding uses floating-point numbers directly in the chromosome, ideal for continuous optimization like function minimization, where genes denote real parameters without conversion loss. Permutation encoding employs sequences of unique integers to represent orderings, as in the traveling salesman problem, where the chromosome lists cities in a tour order without duplicates to avoid invalid solutions. Tree-based encoding structures chromosomes as hierarchical trees, primarily in genetic programming, where nodes and branches symbolize program components like functions and terminals for evolving executable code.

Population size, the number of individuals maintained per generation, varies depending on the problem and available computational resources, but commonly ranges from 20 to several hundred in practice, striking a balance between maintaining diversity and computational efficiency. Smaller populations risk premature convergence to suboptimal solutions due to limited genetic variation, while larger ones enhance diversity by sampling more of the search space but prolong convergence and increase costs; the optimal size is typically determined empirically for the specific problem.

The decoding process maps the genotype (encoded chromosome) to the phenotype (problem-specific solution) for fitness evaluation, ensuring the representation aligns with the objective function's requirements. For binary encoding, this involves converting bit strings to numeric values or sets via standard binary-to-decimal translation; real-valued decoding applies direct numerical interpretation; permutation decoding rearranges elements into feasible schedules or paths; and tree-based decoding interprets the tree as an executable program through traversal and substitution. This mapping must be invertible and problem-appropriate to preserve solution validity, as emphasized in early theoretical frameworks.
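
For binary encodings of real parameters, the genotype-to-phenotype decoding described above is a simple rescaling. A minimal sketch, where the bounds lo and hi and the 8-bit resolution are illustrative assumptions:

```python
def decode_binary(bits, lo, hi):
    # Map a bit string (genotype) to a real value in [lo, hi] (phenotype).
    n = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * n / (2 ** len(bits) - 1)

# 10110010 = 178, so this decodes to roughly 1.98 in [-5, 5].
x = decode_binary([1, 0, 1, 1, 0, 0, 1, 0], lo=-5.0, hi=5.0)
```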

Fitness evaluation

In genetic algorithms, the fitness function serves as the primary mechanism for assessing the quality of each individual in the population, assigning a scalar value that reflects how well the individual's encoded solution aligns with the problem's objectives. This function typically evaluates solutions for maximization, where higher values indicate better performance, or minimization, where lower values are preferred, depending on the optimization goal. The fitness value is computed based on the problem-specific criteria, enabling selection to differentiate between promising and suboptimal candidates without requiring derivative information, which distinguishes genetic algorithms from traditional optimization methods.

Designing an effective fitness function requires careful consideration of several principles to ensure the algorithm's practicality and performance. Scalability is essential, as the function must handle increasing problem size without disproportionate growth in evaluation time, often achieved by leveraging efficient data structures or approximations for large-scale instances. Computational efficiency is critical, given that genetic algorithms perform thousands to millions of evaluations per run; thus, the function should minimize resource demands while providing accurate quality measures. For constrained problems, where solutions must satisfy inequalities or equalities, penalty methods are commonly employed to handle infeasibility by subtracting a penalty term proportional to constraint violations from the raw objective value, thereby degrading the fitness of invalid solutions and guiding the search toward feasible regions. A seminal approach to penalty-free constraint handling instead compares solutions pairwise during selection, prioritizing feasibility and objective superiority without explicit penalty parameters.

Illustrative examples highlight the fitness function's versatility across problem types. In the traveling salesman problem, the fitness is often defined as the inverse of the total distance of a proposed tour, rewarding shorter paths that visit all cities exactly once and return to the start, which promotes convergence to near-optimal routes through repeated evaluations. For numerical function optimization, a simple minimization task might use f(x) = x^2 over a bounded domain, where the fitness directly corresponds to this objective, penalizing deviations from the global minimum at x = 0 and driving the population toward it via iterative improvement. These evaluations play a pivotal role in selection by assigning higher probabilities to superior individuals, thereby biasing the population distribution toward the problem's optima over generations.
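
Both worked examples above are easy to express in code. A brief sketch, assuming city coordinates are given as (x, y) tuples and that minimization of f(x) = x^2 is recast as maximization by negation:

```python
import math

def tsp_fitness(tour, coords):
    # Inverse of total closed-tour length (higher is better);
    # `coords` maps city index -> (x, y).
    total = sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
                for i in range(len(tour)))
    return 1.0 / total

def sphere_fitness(x):
    # Minimizing f(x) = x^2 as a maximization problem: negate the objective.
    return -(x ** 2)
```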

Algorithm Execution

Initialization and selection

The initialization phase of a genetic algorithm generates the starting population of candidate solutions, which serves as the foundation for the evolutionary search. Random initialization is the standard approach, where individuals are created by uniformly sampling values from the problem's search space, ensuring an unbiased coverage of possible solutions. This method, emphasized in foundational works, promotes initial diversity by avoiding bias toward any particular region of the search space. However, excessive uniformity in random sampling can limit early progress, prompting the need for techniques to measure and enhance diversity, such as calculating genotypic or phenotypic variance across the population.

Seeded initialization complements random methods by incorporating domain-specific heuristics or prior knowledge to generate promising initial individuals, such as using greedy algorithms or problem-specific rules to bias the population toward feasible or high-quality solutions. This hybrid strategy accelerates convergence compared to pure random starts while preserving diversity through a mix of seeded and random elements. Empirical studies show that seeded populations improve performance on complex optimization problems by providing a more effective starting point, though care must be taken to avoid reducing overall exploration.

Selection mechanisms determine which individuals from the current population are chosen as parents for reproduction, with probabilities typically biased toward higher fitness to mimic natural selection. Roulette wheel selection, a proportional method introduced by John Holland, allocates selection probability linearly to an individual's fitness relative to the average, visualized as spinning a wheel where fitter solutions occupy larger segments. This approach efficiently favors elites but can lead to premature convergence if a few high-fitness individuals dominate early generations, as negative fitness values or scaling issues exacerbate inequality. Tournament selection randomly draws a subset of k individuals (typically k=2 or 3) from the population and selects the fittest as a parent, offering a simple, parallelizable alternative to roulette wheel. It provides tunable selection pressure via k—smaller values promote diversity through more random choices, while larger k intensifies exploitation of fit solutions—and avoids fitness scaling problems, making it computationally faster and less prone to stagnation. However, very large k can approach truncation selection, overly favoring the absolute best and reducing exploration. Rank-based selection assigns reproduction probabilities based on an individual's ordinal rank in the sorted fitness list rather than raw fitness values, often using linear or exponential scaling (e.g., probability decreasing from the top-ranked individual downward). This method, developed to address roulette wheel's sensitivity to fitness variance, prevents super-individuals from monopolizing selection and maintains steady pressure across generations, though it may underutilize absolute fitness information in highly varied landscapes. Comparative analyses indicate rank-based approaches outperform proportional methods in multimodal problems by sustaining diversity longer.

Selection pressure, quantified by the variance in reproduction probabilities, governs the balance between exploration (broad search via diverse parents) and exploitation (intensifying good solutions). Low pressure (e.g., small tournament size or flat ranking) encourages population diversity but slows convergence, while high pressure risks genetic drift and loss of viable schemata. Optimal pressure depends on problem dimensionality and population size, with empirical tuning often required to avoid suboptimal local optima.
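
The three selection schemes discussed above can be sketched as follows; the roulette version assumes non-negative fitness values (the text notes negative values are problematic), and all parameter defaults are illustrative.

```python
import random

def roulette_select(population, fitnesses):
    # Fitness-proportionate selection; assumes non-negative fitnesses.
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    acc = 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]

def tournament_select(population, fitnesses, k=3):
    # Larger k raises selection pressure; k=2 keeps it gentle.
    contenders = random.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitnesses[i])]

def rank_select(population, fitnesses):
    # Linear rank-based probabilities: worst gets rank 1, best rank n.
    order = sorted(range(len(population)), key=lambda i: fitnesses[i])
    ranks = [0] * len(population)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    pick = random.uniform(0, sum(ranks))
    acc = 0.0
    for individual, r in zip(population, ranks):
        acc += r
        if acc >= pick:
            return individual
    return population[-1]
```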

Genetic operators

Genetic operators in genetic algorithms primarily consist of crossover and , which work together to generate new candidate solutions () from selected parents, thereby driving the evolutionary process toward improved fitness. Crossover combines genetic material from two parents to produce that inherit advantageous traits, while introduces small random changes to maintain diversity and avoid premature convergence. These operators are applied probabilistically during each generation, with their rates tuned to balance exploration of the search space and exploitation of promising regions. Crossover, also known as recombination, is the dominant operator in most genetic algorithms, typically applied with a probability between 0.6 and 0.9 to a pair of selected . In single-point crossover, a random position is chosen along the (often represented as a binary string), and the segments before and after this point are swapped between the parents to form two ; for example, parents "101000" and "001110" with a crossover point after the third bit yield "101110" and "001000". This method preserves larger building blocks of potentially fit solutions. Multi-point crossover extends this by selecting multiple points (e.g., two or more) for swapping, allowing finer-grained recombination, though it risks disrupting beneficial linkages if points are too numerous. Uniform crossover, in contrast, swaps individual based on a random binary , where each has an independent probability (often 0.5) of being taken from one parent or the other; this promotes greater diversity but can lead to less coherent . These techniques, originally formalized in binary encodings, emulate biological recombination and are most effective when the representation aligns with the problem's structure. Mutation serves to inject novelty into the population by altering one or more in an offspring, typically at a low probability of 0.001 to 0.1 per gene to prevent excessive disruption while countering stagnation from repeated selection of similar individuals. For binary encodings, the standard bit-flip inverts a randomly selected bit (0 to 1 or vice versa), such as changing "101000" to "101010" at the fifth position, which introduces local variations without overhauling the solution. In real-valued encodings, Gaussian mutation adds noise drawn from a (mean 0, small standard deviation) to a gene value, ensuring changes remain bounded and proportional to the variable's scale; for instance, a gene value of 5.2 might become 5.7 after adding a Gaussian perturbation of 0.5. This operator is crucial for problems, where bit-flip would be inappropriate, and helps escape local optima by enabling small, probabilistic explorations. The interplay between crossover and mutation is essential for effective search: crossover exploits existing good solutions by recombining fit parents to amplify beneficial traits, while explores untried regions to introduce diversity and prevent the algorithm from converging too narrowly. Consider binary string parents "11100" (high fitness) and "00111" (moderate fitness) undergoing single-point crossover at position 3 to produce "11111" (enhanced fitness) and "00100" (mixed); applying bit-flip to the second offspring at position 4 yields "00110", potentially uncovering a novel high-fitness variant. Without , repeated crossovers might perpetuate similar structures, leading to premature stagnation; conversely, excessive could undermine exploitation. This balance ensures the population evolves robustly across generations. 
In cases where operators produce invalid offspring, such as duplicates in permutation encodings for problems like the traveling salesman, repair mechanisms are applied to restore feasibility without altering the core operation. For permutation crossovers (e.g., partially mapped crossover), a simple repair might involve swapping duplicate elements with missing ones to ensure a unique ordering, preserving as much parental information as possible while enforcing constraints. These post-operation fixes, such as gene repair algorithms, maintain solution validity and are particularly vital for combinatorial domains where infeasible individuals cannot be evaluated.
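As a concrete illustration, a minimal duplicate-swap repair for permutation chromosomes might look like the sketch below; the helper names are illustrative, not taken from any specific library.

```python
def repair_permutation(child, universe):
    """Restore a valid permutation of `universe` by replacing duplicate
    genes (e.g., introduced by naive crossover of two tours) with the
    missing elements, keeping each first occurrence in place."""
    missing = iter(g for g in universe if g not in set(child))
    seen, repaired = set(), []
    for gene in child:
        if gene in seen:
            repaired.append(next(missing))  # swap duplicate for a missing city
        else:
            repaired.append(gene)
            seen.add(gene)
    return repaired

# Naive one-point crossover of two tours can duplicate cities:
broken = [0, 1, 2, 1, 4]                # city 1 appears twice, city 3 is lost
print(repair_permutation(broken, universe=[0, 1, 2, 3, 4]))  # [0, 1, 2, 3, 4]
```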

Termination criteria

In genetic algorithms, termination criteria define the conditions under which the iterative evolutionary process halts, ensuring a balance between achieving sufficient convergence and avoiding excessive computational expenditure. These criteria are essential because genetic algorithms do not inherently know when an optimal or near-optimal solution has been reached, and without them, the algorithm could run indefinitely. Among the most common termination criteria is a fixed number of generations or a predetermined limit on function evaluations, which imposes a straightforward upper bound on runtime regardless of solution quality. This approach is particularly useful in resource-constrained environments where predictability is prioritized over adaptive stopping. Convergence criteria, such as detecting no improvement in the best individual's fitness over a specified number of consecutive generations (often denoted as N iterations without progress), signal when the population appears to have stabilized around a local or global optimum. Resource-based limits, including maximum execution time or computational budget (e.g., CPU cycles or memory usage), further complement these by accounting for practical constraints in real-world deployments.

Stagnation detection enhances these basic methods by monitoring indicators of evolutionary halt, such as reduced variance in the population's fitness values or a plateau in the best individual's fitness over multiple generations. For instance, if the standard deviation of fitness scores falls below a threshold, it suggests minimal diversity and little potential for further improvement through genetic operators like selection, crossover, and mutation. Early-stopping heuristics build on this by incorporating delta fitness thresholds—halting when the change in best fitness is smaller than a predefined value—or predefined solution quality targets, where the algorithm stops upon reaching a fitness score that meets domain-specific requirements (e.g., 95% of the known global optimum). These heuristics are especially effective in optimization problems where approximate solutions suffice, allowing early termination without significant loss in quality.

The choice of termination criteria involves inherent trade-offs: overly strict conditions, such as a low N for no-improvement or tight delta thresholds, risk prematurely concluding with suboptimal solutions before the population fully explores the search space. Conversely, lenient criteria, like high generation limits or loose variance thresholds, may lead to unnecessary resource consumption on stagnant populations, increasing costs without proportional gains in solution quality. Selecting appropriate criteria often requires empirical tuning based on the problem's characteristics and available resources, with hybrid approaches combining multiple indicators for robustness.
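A hedged sketch of how several of these criteria can be combined into a single stopping test; the parameter names and default values are illustrative, not standard.

```python
def should_terminate(history, generation,
                     max_generations=500,   # hard cap on generations
                     patience=50,           # N generations without improvement
                     delta=1e-6,            # minimum meaningful fitness gain
                     target=None):          # optional solution-quality target
    """Return True once any common stopping criterion is met.

    `history` holds the best fitness observed at each generation so far
    (assuming a maximization problem).
    """
    if generation >= max_generations:                      # resource limit
        return True
    if target is not None and history and history[-1] >= target:
        return True                                        # quality target met
    if len(history) > patience and history[-1] - history[-patience - 1] < delta:
        return True                                        # stagnation detected
    return False
```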

Theoretical basis

Building block hypothesis

The building block hypothesis posits that genetic algorithms achieve effective search by implicitly processing and combining "building blocks," which are short, low-order schemata representing above-average partial solutions in the search space. Schemata serve as similarity templates, defined over the chromosome's loci by combining fixed positions (specific alleles, such as 0 or 1 in binary representations) with unspecified positions (wildcards, often denoted by *). These building blocks capture modular components of promising solutions, allowing the algorithm to evaluate and propagate advantageous substrings without explicit decomposition.

Under this hypothesis, short schemata—those spanning few positions and thus low in order—with above-average fitness increase exponentially within the population across generations, assuming minimal disruption from mutation or crossover. Selection preferentially retains individuals embedding these high-fitness schemata, amplifying their representation and enabling their recombination into longer, more complete solutions. This process mimics natural evolution's assembly of beneficial traits, where local optima in subspaces contribute to global solutions.

The hypothesis implies that genetic algorithms excel in large, complex search spaces by leveraging implicit parallelism: numerous schemata are sampled and tested simultaneously across the population, with recombination juxtaposing high-fitness blocks to form superior individuals efficiently. This modular assembly avoids the curse of dimensionality inherent in brute-force methods, focusing computational effort on promising combinations rather than isolated evaluations. For example, in a binary-encoded problem, the schema *1* (matching strings with a 1 in the second position) might exhibit high average fitness if that position correlates with better performance; over generations, it would propagate as parent selection favors matching individuals, and crossover combines it with other fit schemata like 1** or **1 to build fuller solutions.
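A small sketch of schema matching makes the similarity-template idea concrete; the population here is arbitrary.

```python
def matches(schema, individual):
    """True if `individual` fits the similarity template (* = wildcard)."""
    return all(s in ("*", bit) for s, bit in zip(schema, individual))

population = ["110", "000", "011", "101"]
schema = "*1*"   # one fixed position (order 1, defining length 0)
count = sum(matches(schema, ind) for ind in population)
print(count)     # 2 of the 4 individuals instantiate the schema
```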

Schema theorem

The schema theorem, also known as the fundamental theorem of genetic algorithms, provides a mathematical framework for understanding how genetic algorithms process and propagate schemata across generations. Formulated by John H. Holland, it quantifies the expected change in the number of instances of a schema H from one generation to the next, demonstrating the selective pressure favoring schemata with above-average fitness while accounting for disruptive effects of genetic operators. The theorem states that the expected number of instances of schema H at generation t+1, denoted E[m(H,t+1)], satisfies:

E[m(H,t+1)] \geq m(H,t) \cdot \frac{f(H)}{\bar{f}(t)} \cdot \left(1 - p_c \cdot \frac{\delta(H)}{l-1}\right) \cdot \left(1 - p_m \cdot o(H)\right)

where m(H,t) is the number of individuals in the population at generation t that match schema H, f(H) is the average fitness of individuals matching H, \bar{f}(t) is the average fitness of the population at generation t, p_c is the crossover probability, p_m is the mutation probability per bit, o(H) is the order of schema H (number of fixed positions), \delta(H) is the defining length of schema H (distance between the outermost fixed positions), and l is the length of the chromosome.

This inequality arises from a derivation that decomposes the effects of the three primary steps in a genetic algorithm: selection, crossover, and mutation. Under proportional selection, the expected number of copies of H produced is at least m(H,t) \cdot \frac{f(H)}{\bar{f}(t)}, as higher-fitness schemata are reproduced more frequently. Crossover may disrupt schema H with probability at most p_c \cdot \frac{\delta(H)}{l-1}, assuming one-point crossover where the defining length influences disruption; this term bounds the survival probability after crossover. Mutation disrupts each fixed bit independently with probability p_m, yielding a survival factor of (1 - p_m)^{o(H)}, approximated as 1 - p_m \cdot o(H) for small p_m. Combining these yields the lower bound on E[m(H,t+1)].

The theorem relies on several key assumptions, including fixed-length binary string representations for chromosomes, one-point crossover that does not disrupt schemata within their defining regions (no intrachromosomal disruption), and low mutation rates where p_m \ll 1. These conditions align with the canonical genetic algorithm model, ensuring the bounding terms accurately reflect operator behaviors without higher-order interactions.

Qualitatively, the schema theorem implies that genetic algorithms converge by exponentially amplifying short, low-order schemata with fitness above the population average, provided disruption rates are controlled; the growth condition is \frac{f(H)}{\bar{f}(t)} > 1 for such schemata, leading to implicit parallelism in processing multiple promising combinations simultaneously. While not providing convergence proofs, it offers insights into why genetic algorithms effectively navigate search spaces by building solutions from favored subsolutions over generations.
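The bound is straightforward to evaluate numerically; the sketch below simply transcribes the inequality's right-hand side, with illustrative inputs.

```python
def schema_growth_bound(m, f_H, f_bar, p_c, p_m, delta_H, order_H, l):
    """Lower bound on E[m(H, t+1)] from the schema theorem.

    m        -- instances of schema H at generation t
    f_H      -- mean fitness of individuals matching H
    f_bar    -- mean population fitness
    p_c, p_m -- crossover and per-bit mutation probabilities
    delta_H  -- defining length of H; order_H -- number of fixed positions
    l        -- chromosome length
    """
    selection = f_H / f_bar
    crossover_survival = 1 - p_c * delta_H / (l - 1)
    mutation_survival = 1 - p_m * order_H      # first-order approximation
    return m * selection * crossover_survival * mutation_survival

# A short, low-order, above-average schema grows despite disruption:
print(schema_growth_bound(m=10, f_H=1.2, f_bar=1.0,
                          p_c=0.8, p_m=0.01, delta_H=2, order_H=3, l=20))
# ~10.66 expected instances, up from 10
```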

Variants

Encoding schemes

Integer encoding represents solutions as sequences of integers, which is particularly suited for optimization problems involving discrete variables, such as combinatorial tasks where variables take on whole-number values within specified ranges. This approach facilitates direct manipulation through arithmetic crossover and mutation operators, enhancing search efficiency in domains like resource allocation. Unlike binary strings, integer encodings avoid the need for bit-level decoding, reducing computational overhead while preserving the granularity needed for integer-constrained problems.

Symbolic encoding employs strings of symbols or permutations to represent ordered structures, commonly applied in scheduling problems where the sequence of operations or jobs must be optimized without repetition. For instance, in job-shop scheduling, a chromosome might encode a permutation of job operations, allowing genetic operators to generate feasible schedules by swapping or inverting segments, thereby maintaining problem-specific constraints like precedence relations. This encoding promotes locality, where small changes in the genotype lead to meaningful variations in the phenotype, improving the algorithm's ability to explore valid solution spaces.

Graph-based encodings model solutions as graphs or networks, ideal for problems involving connectivity, such as circuit design or routing in communication networks, where nodes and edges represent components and their interactions. In genetic programming variants, graphs enable representations of modular structures with shared subcomponents, supporting reuse and multi-output programs through edge recombination during crossover. This approach excels in capturing relational dependencies but requires specialized operators to preserve graph validity, such as ensuring acyclicity in directed graphs.

Grammar-based encodings, as in grammatical evolution, use variable-length integer strings to index productions in a formal grammar, generating executable programs or expressions in an arbitrary language. The genotype specifies a derivation path through the grammar's rules, allowing the construction of syntactically correct phenotypes without explicit tree structures, which is advantageous for program induction tasks. This method decouples the search space from the output language, enabling adaptation to diverse domains like symbolic regression or controller design.

Problem-specific adaptations like Gray coding modify binary representations so that adjacent values differ in only one bit, mitigating "Hamming cliffs" where small phenotypic changes require flipping multiple bits, thus smoothing the fitness landscape. In binary-encoded genetic algorithms, Gray codes enhance crossover efficacy by promoting gradual exploration, as seen in parameter optimization where they reduce the risk of disruptive mutations. For real-valued problems, such encodings often pair with uniform crossover to sample diverse points in continuous spaces, preserving diversity without positional bias.

Hybrid representations integrate multiple encoding types within a single chromosome, such as binary for discrete variables and real-valued for continuous ones, to address mixed-integer optimization challenges like gear transmission design. This allows tailored operators—bit-flip for binary segments and Gaussian perturbation for real segments—facilitating comprehensive searches in hybrid search spaces. Such schemes are effective in engineering applications but demand careful operator design to avoid invalid hybrids.
The choice of encoding significantly influences genetic operator efficacy, as mismatched representations can lead to invalid offspring or inefficient searches; for example, permutation encodings in symbolic schemes ensure feasible crossovers in ordering problems, while graph-based ones may require repair mechanisms to maintain structural validity. Poor encodings exacerbate premature convergence by creating rugged landscapes that trap populations in local optima, whereas adaptive schemes like Gray coding or grammar-based mappings promote sustained diversity and smoother convergence. Empirical studies show that encoding choice impacts convergence speed, with hybrid and specialized forms often outperforming uniform binary encodings in complex, multimodal domains by better aligning genotypic changes with phenotypic improvements.
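The standard binary-reflected Gray code conversion illustrates how the encoding removes Hamming cliffs; this is a sketch, and the helper names are arbitrary.

```python
def binary_to_gray(n):
    """Adjacent integers differ by exactly one bit in Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray mapping by cascading XORs."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# 7 -> 8 flips four bits in plain binary (0111 -> 1000, a "Hamming cliff"),
# but only one bit in Gray code:
print(f"{binary_to_gray(7):04b}", f"{binary_to_gray(8):04b}")  # 0100 1100
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(256))
```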

Elitism and adaptation

Elitism is a technique in genetic algorithms that preserves the best-performing individuals from one generation directly into the next, ensuring that the best fitness in the population does not decrease. This approach guarantees monotonic improvement in the best solution over generations by copying a fixed number or proportion of the top individuals without subjecting them to genetic operators. Introduced in early experimental studies of genetic algorithms, elitism was shown to enhance optimization performance on benchmark functions by preventing the loss of superior solutions during selection.

Variants of elitism appear in related evolutionary strategies, such as the (μ+λ) scheme, where selection for the next generation occurs from the combined pool of μ parents and λ offspring, inherently favoring the fittest individuals and promoting steady progress toward optima. In contrast to the non-elitist (μ,λ) scheme, which selects only from offspring, the (μ+λ) variant ensures that proven solutions persist, making it particularly effective for static optimization problems. This mechanism, developed in the context of evolution strategies, has been analyzed to demonstrate faster convergence rates compared to purely offspring-based selection.

Adaptive mechanisms in genetic algorithms dynamically adjust parameters like crossover and mutation rates during execution, often based on population diversity, fitness variance, or convergence indicators to balance exploration and exploitation. For instance, mutation rates can be varied probabilistically across generations, decreasing as the population converges to refine solutions while increasing earlier to maintain diversity; empirical tests on standard test functions revealed that such adaptation outperforms fixed-rate strategies by achieving higher success rates in locating global optima. Self-adaptive approaches encode parameter values directly into individual genotypes, allowing them to evolve alongside the solutions, which fosters problem-specific tuning without external intervention.

Feedback-based adaptation, including fuzzy logic systems, uses performance metrics such as fitness stagnation or diversity loss to tune selection pressure or operator probabilities in real time. These methods monitor variance in fitness values to reduce mutation rates when convergence is evident, thereby accelerating refinement while injecting variability to escape local optima. Comprehensive reviews confirm that adaptive techniques, including fuzzy controllers, improve convergence speed and solution quality across diverse optimization tasks by mitigating premature convergence risks inherent in static parameters.

The integration of elitism and adaptation yields synergistic benefits, such as enhanced global search capability and reduced susceptibility to local traps, leading to superior performance in multimodal landscapes. For example, combining elitist preservation with adaptive mutation has been shown to increase the probability of finding optimal solutions by up to 20-30% on deceptive functions compared to non-adaptive, non-elitist baselines. These enhancements make such variants widely adopted in practical optimization domains requiring robust, efficient search.
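A minimal sketch of one generational step combining elitism with a diversity-driven mutation rate. The specific adaptation rule here (rate proportional to fitness spread) is one heuristic among many, and crossover is omitted to keep the example short.

```python
import random

def next_generation(population, fitness, elite_frac=0.05,
                    base_rate=0.05, min_rate=0.005):
    """One generational step with elitism and adaptive bit-flip mutation."""
    ranked = sorted(population, key=fitness, reverse=True)
    n_elite = max(1, int(elite_frac * len(population)))
    elites = ranked[:n_elite]                   # copied unchanged (elitism)

    scores = [fitness(ind) for ind in population]
    spread = max(scores) - min(scores)          # crude diversity proxy
    rate = max(min_rate, base_rate * spread / (abs(max(scores)) + 1e-12))

    def mutate(ind):
        return "".join(("1" if b == "0" else "0") if random.random() < rate else b
                       for b in ind)

    # Fill remaining slots with mutated copies of parents from the top half.
    parents = ranked[: max(1, len(ranked) // 2)]
    rest = [mutate(random.choice(parents)) for _ in range(len(population) - n_elite)]
    return elites + rest

# Example: maximize the number of 1s in an 8-bit string.
random.seed(1)
pop = ["".join(random.choice("01") for _ in range(8)) for _ in range(20)]
for _ in range(30):
    pop = next_generation(pop, fitness=lambda s: s.count("1"))
print(max(pop, key=lambda s: s.count("1")))
```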

Parallel implementations

Parallel implementations of genetic algorithms (GAs) distribute computational tasks across multiple processors or nodes to enhance scalability, particularly for large populations and computationally intensive fitness evaluations. These approaches leverage parallel architectures such as multi-core CPUs, GPUs, or distributed clusters, partitioning the population or its operations to reduce execution time while preserving the evolutionary search dynamics. Three primary models dominate: the master-slave, island, and cellular paradigms, each balancing parallelism with algorithmic integrity in distinct ways.

In the master-slave model, a central master processor manages the entire population, performing selection, crossover, and mutation operations, while slave processors handle the parallel evaluation of fitness functions for distributed individuals. This coarse-grained approach minimizes communication overhead, as slaves return only fitness values to the master, making it suitable for problems where fitness evaluation dominates runtime. Seminal analysis by Cantú-Paz demonstrates that this model achieves near-linear speedup proportional to the number of slaves when fitness evaluations are the bottleneck, with efficiency maintained above 90% for up to 100 processors in benchmark optimizations.

The island model divides the population into multiple subpopulations, each evolving independently on separate processors or "islands" using standard GA operators within their local group. Periodically, individuals migrate between islands to exchange genetic material, preventing premature convergence and promoting diversity across the global search space. This coarse-to-medium-grained strategy excels in maintaining solution variety through isolated evolution, yielding speedups of 10-50 times on distributed systems for large-scale optimizations like function minimization.

Migration policies in the island model critically influence performance and diversity. Frequency determines how often exchanges occur, typically every 10-100 generations to balance exploration and exploitation; overly frequent migrations can homogenize subpopulations, while infrequent ones risk stagnation. Topologies define connectivity, such as ring (linear, for gradual diffusion), stepping-stone (grid-based, for localized spread), or fully connected (for rapid global mixing), with ring topologies often providing robust diversity in setups of 20-50 islands. Exchange rules specify selection criteria, including sending the best individuals (to propagate elites), random samples (to introduce variability), or replacing the worst locals (to cull weak solutions), as analyzed by Alba and Troya, where hybrid rules improved convergence by 20-30% over uniform policies in parallel distributed GAs.

The cellular model organizes the population on a two-dimensional grid, where each individual interacts only with neighbors in a local neighborhood for selection and reproduction, fostering fine-grained parallelism with synchronous or asynchronous updates across processors. This structure enforces spatial locality, enhancing diversity by limiting mating to proximate individuals and simulating diffusion-like propagation of genetic material. Advantages include inherent load distribution on grid-based hardware and sustained exploration, with empirical studies showing 5-15 times speedups over sequential GAs on multicore systems for problems like neural network training, due to reduced communication needs.

Load balancing in parallel GAs addresses variations in fitness computation times, especially in heterogeneous environments where nodes differ in processing power or task complexity.
Strategies include dynamic task assignment, where the master or coordinator reallocates unevaluated individuals based on estimated computation costs, or meta-optimization via another GA to schedule workloads preemptively. For instance, in distributed systems with varying fitness kernels, adaptive partitioning ensures processor utilization exceeds 80%, mitigating bottlenecks from expensive evaluations like simulations. Cantú-Paz's models highlight that without balancing, efficiency drops below 50% on heterogeneous clusters, underscoring the need for runtime monitoring.

Overall, these parallel implementations provide substantial speedups for large populations—often scaling to thousands of individuals across hundreds of nodes—while isolated evolution in the island and cellular models preserves diversity, outperforming sequential GAs in convergence speed and solution quality for complex, high-dimensional problems.
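A sequential simulation of the island model's control flow; a real deployment would run `evolve` on separate processors, and all names here are illustrative.

```python
import random

def island_model_ga(islands, fitness, evolve, generations=200,
                    migration_interval=25, migrants=2):
    """Coarse-grained island GA with a ring migration topology.

    `islands`: list of subpopulations (lists of individuals).
    `evolve`:  function advancing one subpopulation by one generation
               (its own selection, crossover, and mutation).
    """
    for gen in range(1, generations + 1):
        islands = [evolve(pop) for pop in islands]       # isolated evolution
        if gen % migration_interval == 0:                # periodic exchange
            for i in range(len(islands)):
                source = islands[i]
                target = islands[(i + 1) % len(islands)]  # ring neighbor
                best = sorted(source, key=fitness, reverse=True)[:migrants]
                target.sort(key=fitness)                  # worst first
                target[:migrants] = best                  # replace worst with elites
    return max((ind for pop in islands for ind in pop), key=fitness)

# Tiny demo: four islands solving OneMax with mutation-only evolution.
def evolve(pop):
    def mutate(s):
        return "".join(b if random.random() > 0.05 else random.choice("01") for b in s)
    ranked = sorted(pop, key=lambda s: s.count("1"), reverse=True)
    return ranked[:2] + [mutate(random.choice(ranked[:5])) for _ in range(len(pop) - 2)]

random.seed(0)
islands = [["".join(random.choice("01") for _ in range(16)) for _ in range(10)]
           for _ in range(4)]
print(island_model_ga(islands, fitness=lambda s: s.count("1"), evolve=evolve))
```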

Applications

Optimization domains

Genetic algorithms (GAs) are particularly suited to combinatorial optimization problems, which require selecting or arranging discrete elements from a finite set to achieve an optimal configuration, often under NP-hard complexity that renders exact methods computationally prohibitive for large instances. The population-based, stochastic nature of GAs enables parallel exploration of vast discrete search spaces, leveraging recombination and mutation to combine promising partial solutions without relying on problem-specific heuristics or gradients. This makes GAs effective for problems where local optima abound and exhaustive enumeration is impractical.

A prominent example is the traveling salesman problem (TSP), which involves finding the shortest Hamiltonian cycle through a set of cities. In GAs for TSP, solutions are encoded as permutations of city orders, with fitness evaluated as total tour distance; specialized crossover operators, such as partially mapped crossover, preserve valid paths while introducing diversity, allowing GAs to approximate optimal tours for instances up to thousands of cities more efficiently than branch-and-bound methods in practice.

The 0/1 knapsack problem, seeking to maximize the total value of selected items without exceeding a weight capacity, represents another key combinatorial domain. GAs encode selections as binary strings, where each bit indicates inclusion, and fitness balances value against weight violations; through generational evolution, GAs often achieve near-optimal fillings for multidimensional variants with hundreds of items, surpassing dynamic programming on high-dimensional cases due to their global search capability.

Scheduling problems, such as job-shop scheduling, further illustrate GA efficacy in combinatorial settings by assigning operations to machines to minimize makespan or tardiness. Schedules are typically represented as permutation or random-key chromosomes, with fitness computed from completion times; GAs handle the factorial growth of feasible sequences by evolving diverse timetables, yielding high-quality solutions for flexible job-shop variants involving dozens of jobs and machines.

In continuous domains, GAs address real-valued parameter tuning, such as minimizing nonlinear functions over unbounded or bounded spaces. The Rastrigin function, a benchmark multimodal landscape given by

f(\mathbf{x}) = 10n + \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2\pi x_i) \right),

tests GA robustness against numerous local minima surrounding the global optimum at \mathbf{x} = \mathbf{0}. Real-valued encodings with arithmetic crossover and Gaussian mutation enable GAs to converge to near-zero values in 10-30 dimensions, demonstrating their utility in engineering parameter optimization like aerodynamic design.

For multi-objective optimization, GAs extend to approximating Pareto fronts, sets of non-dominated solutions balancing conflicting goals such as cost and performance. Adaptations like NSGA-II employ non-dominated sorting to rank solutions by dominance and crowding distance for diversity preservation, efficiently generating diverse trade-off solutions without weighting objectives, as validated on test suites with two to three objectives.

Handling constraints is integral to GA applications in optimization domains with equality or inequality restrictions. Penalty methods incorporate violations into the fitness function, such as quadratic penalties that degrade scores proportionally to constraint breach severity, steering evolution toward feasible regions without altering the search operators.
Repair techniques, conversely, post-process infeasible solutions—e.g., by greedily adjusting variables to satisfy bounds—ensuring population feasibility at the cost of potential search bias, with hybrid approaches combining both for robust performance on heavily constrained problems.
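The two constraint-handling styles can be contrasted on a small 0/1 knapsack instance; the penalty coefficient and the greedy drop rule are illustrative choices.

```python
values   = [60, 100, 120, 80, 30]
weights  = [10, 20, 30, 15, 5]
CAPACITY = 50

def fitness_with_penalty(bits):
    """Quadratic penalty: infeasible selections lose fitness in
    proportion to the squared capacity overshoot."""
    value = sum(v for v, b in zip(values, bits) if b)
    weight = sum(w for w, b in zip(weights, bits) if b)
    overshoot = max(0, weight - CAPACITY)
    return value - 10 * overshoot ** 2

def repair(bits):
    """Greedy repair: drop the least value-dense items until feasible."""
    bits = list(bits)
    order = sorted(range(len(bits)), key=lambda i: values[i] / weights[i])
    for i in order:                       # worst value/weight ratio first
        if sum(w for w, b in zip(weights, bits) if b) <= CAPACITY:
            break
        bits[i] = 0
    return bits

candidate = [1, 1, 1, 1, 1]               # weight 80 exceeds capacity 50
print(fitness_with_penalty(candidate))    # heavily penalized: 390 - 9000
print(repair(candidate))                  # [1, 1, 0, 1, 1]: feasible, weight 50
```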

Real-world examples

In aerospace applications, genetic algorithms have been employed to optimize antenna designs for space missions. For NASA's Space Technology 5 (ST5) mission, a genetic algorithm evolved complex wire antenna geometries to meet stringent performance requirements at X-band frequencies (7.2 GHz and 8.47 GHz), resulting in a design that achieved maximum gains up to 10 dB and minimum gains exceeding -40 dB at 7.2 GHz, satisfying mission specifications where traditional human designs struggled with constraints. This evolved antenna was successfully fabricated, tested, and deployed on the ST5 spacecraft launched in 2006, demonstrating practical viability in aerospace hardware.

Genetic algorithms have also advanced rocket engine component optimization since the 1990s. In a 1996 NASA study, genetic algorithms were applied to design unconventional 3D rocket nozzles, such as bell-annular tripropellant and linear aerospike configurations for vehicles like the X-33, by evolving populations of design parameters to maximize thrust while navigating integration constraints. The approach explored diverse design families, outperforming gradient-based methods in avoiding local optima and enabling rapid contour optimization via streamline tracing, which contributed to more efficient nozzle shapes for reusable launch systems.

Structural engineering benefits from genetic algorithms in truss optimization, where they minimize weight under load constraints. A seminal 1992 application used a genetic algorithm for discrete optimization of plane and space trusses, encoding cross-sectional areas as binary strings and evolving solutions to achieve near-minimum weights, such as reducing the 10-bar truss weight to 5,060 lb under stress and displacement limits. This method has influenced real-world designs, like satellite booms and bridge frameworks, by handling discrete variables effectively without requiring derivative information.

In machine learning, genetic algorithms facilitate feature selection to enhance model performance on high-dimensional data. A 1998 framework applied genetic algorithms to select subsets of features for classifiers, using wrapper methods with accuracy as fitness on real-world datasets, yielding subsets that improved accuracy from 80% to 91% over full feature sets while reducing dimensionality. This approach has been adopted in bioinformatics and image analysis, where it identifies relevant genes or pixels amid noise.

Genetic algorithms also evolve neural network topologies, optimizing both structure and weights simultaneously. The NeuroEvolution of Augmenting Topologies (NEAT) method, introduced in 2002, uses a genetic algorithm to incrementally add nodes and connections, starting from minimal networks, and has been applied to control tasks in robotics and games, achieving solutions that adapt to dynamic environments like evolving controllers for simulated pole-balancing or vehicle navigation. NEAT's protection of structural innovations has enabled high-performance networks in real-world scenarios, such as autonomous drone flight paths.

Financial applications leverage genetic algorithms for portfolio optimization and trading rule discovery. In portfolio management, genetic algorithms encode asset weights as chromosomes and evolve allocations to maximize returns under risk constraints (e.g., conditional value-at-risk), as demonstrated in a 2024 study on a major equity index achieving approximately 5.6% higher cumulative returns (122.74% vs. 116.26%) than equal-weight portfolios.
For trading, a 1999 study used genetic algorithms to evolve technical rules for the S&P Composite Index over 1963-1989, generating buy/sell signals based on moving averages that occasionally outperformed buy-and-hold strategies before transaction costs, highlighting their utility in rule induction for trading systems. As of 2025, genetic algorithms have been increasingly integrated with deep learning techniques for hyperparameter optimization, enhancing performance in complex tasks such as image recognition and classification. For instance, hybrid GA-deep learning frameworks have improved accuracy by 5-9% over traditional grid search methods in ensemble models. These examples illustrate genetic algorithms' impact, with outcomes like the ST5 antenna's successful mission deployment and significant material savings in structural designs, underscoring their role in achieving practical efficiencies across domains.

Limitations

Performance challenges

One significant performance challenge in genetic algorithms (GAs) is premature convergence, where the population loses diversity early in the evolutionary process, causing the algorithm to settle into a local optimum rather than exploring the global search space effectively. This phenomenon arises primarily from high selection pressure, which favors fitter individuals excessively, leading to rapid dominance of similar solutions and a reduction in the population's variability. For instance, tournament selection with large tournament sizes amplifies this issue by increasing the likelihood of replicating near-identical chromosomes, thereby stifling the recombination of diverse schemata as predicted by the schema theorem.

Scalability poses another critical limitation for GAs, particularly in high-dimensional optimization problems, where the computational demands escalate dramatically due to the exponential growth in the number of potential solutions and the time required for fitness evaluations. In such scenarios, the "curse of dimensionality" manifests as a need for the population size to grow linearly or superlinearly with the problem's dimensionality to maintain adequate sampling of the search space, resulting in evaluation times that can become prohibitive for problems beyond moderate scales, such as those with hundreds of variables. Representative studies on trap functions and hierarchical problems demonstrate that simple GAs require population sizes scaling as O(l^k), where l is the block length and k the number of blocks, highlighting the inherent non-polynomial resource needs. This bottleneck is exacerbated when fitness functions involve expensive simulations or real-world assessments, limiting GAs' practicality for large-scale applications without approximations.

GAs are highly sensitive to parameter settings, such as crossover and mutation rates, which often require manual tuning tailored to specific problem instances, with no universal configuration yielding optimal performance across diverse landscapes. This sensitivity relates to the no free lunch theorem, which mathematically proves that any optimization algorithm, including GAs, performs equally on average over all possible problems when prior knowledge is absent, implying that effective parameter choices are domain-dependent and cannot be generalized without problem-specific knowledge. For example, high mutation rates may enhance exploration in rugged landscapes but disrupt convergence in smoother ones, necessitating iterative experimentation that increases setup costs.

Due to their stochastic nature, GAs provide no guarantees of optimality, as outcomes vary across independent runs even with identical initial conditions and parameters, potentially yielding suboptimal solutions influenced by random initialization and operator applications. This variability arises from the probabilistic selection, crossover, and mutation processes, which introduce inherent randomness that can trap the algorithm in local optima despite multiple generations. In practice, this means that while GAs often deliver high-quality approximations, such as in the traveling salesman problem where solutions within 1-5% of optimality are common, repeated executions with statistical analysis are required to assess reliability, adding to computational overhead.

Comparison to alternatives

Genetic algorithms (GAs) differ from gradient-based optimization methods in their ability to handle non-differentiable, discontinuous, or black-box objective functions without requiring derivative information, making them suitable for complex search spaces where gradients are unavailable or unreliable. In contrast, gradient-based techniques, such as steepest descent or conjugate gradient methods, excel in smooth, differentiable landscapes but are prone to converging to local optima, especially in multimodal problems, whereas GAs maintain population diversity to explore globally and avoid such trapping. For instance, in electromagnetic design optimization, GAs have shown robustness for problems with many quantized parameters, outperforming gradient methods that struggle with quantization effects.

Compared to exact optimization methods like branch-and-bound or dynamic programming, GAs serve as approximation heuristics for computationally intractable problems, such as large-scale combinatorial optimization, where exact solutions are infeasible due to exponential time complexity. While exact methods provide provable optimality for problems within tractable bounds, GAs trade guarantees for scalability, delivering near-optimal solutions efficiently but without formal proofs of solution quality. This makes GAs preferable for real-world applications in optimization domains like scheduling or engineering design, where approximate results suffice and exact computation is prohibitive.

To enhance performance, GAs are often hybridized with local search in memetic algorithms, combining the broad exploration of evolutionary operators with the exploitation of neighborhood-based refinement, leading to faster convergence and better solution quality on diverse landscapes. These hybrids, inspired by cultural evolution, have demonstrated superior results over pure GAs in multimodal optimization tasks by mitigating premature convergence.

Empirical benchmarks indicate that GAs are particularly competitive on multimodal functions with multiple local optima, where their stochastic population-based search effectively navigates rugged terrains, but they are generally slower than gradient-based or exact methods on convex, unimodal problems due to unnecessary exploration overhead. For example, in standard test suites like the De Jong functions, GAs achieve high success rates on multimodal instances but require more evaluations on unimodal ones compared to deterministic optimizers.

History

Origins and development

The concept of genetic algorithms (GAs) draws from early ideas in population genetics and evolutionary biology, particularly Sewall Wright's 1932 introduction of the fitness landscape metaphor, which visualized genotypes as points on a multidimensional surface where adaptive peaks represent higher fitness, influencing later computational models of evolution. In the 1950s, pioneers like Nils Barricelli and Alex Fraser conducted some of the first computer simulations of evolutionary processes; Fraser's work used digital computers to model genetic systems with random variation and selection, laying groundwork for algorithmic evolution in optimization.

John Holland, a professor at the University of Michigan, advanced these ideas in the early 1960s by developing foundational theories for adaptive systems inspired by biological evolution, including mechanisms of mutation, crossover, and selection. His 1962 outline for a logical theory of adaptive systems proposed using computational processes to mimic Darwinian adaptation, marking a shift toward practical algorithms. During this period, Holland integrated GAs into early simulations of classifier systems—rule-based models where strings representing if-then rules evolved via genetic operators to solve classification and control problems, demonstrating GAs' potential for learning in dynamic environments.

Holland's seminal 1975 book, Adaptation in Natural and Artificial Systems, formalized GAs as a class of search algorithms, providing theoretical foundations including the schema theorem, which explains how short, high-fitness schemata propagate under selection and genetic operators. This work established GAs as abstractions of natural evolution for solving complex optimization tasks, emphasizing their biological inspiration from selection and recombination.

In the 1980s, GAs gained academic traction beyond Holland's circle through dedicated conferences and publications; the First International Conference on Genetic Algorithms in 1985 fostered collaboration among researchers, sharing results on GA applications and refinements. Concurrently, academic journals began featuring GA studies, solidifying the field and encouraging interdisciplinary adoption in computer science and engineering.

Key milestones and products

The 1990s marked a significant boom in the adoption and application of genetic algorithms, largely propelled by David E. Goldberg's influential book Genetic Algorithms in Search, Optimization, and Machine Learning, published in 1989, which provided a comprehensive theoretical and practical framework that spurred widespread research and implementation in the subsequent decade. This period saw the emergence of key open-source libraries that facilitated experimentation and standardization, including the Evolutionary Computation in Java (ECJ) toolkit, initiated in 1999 by Sean Luke at George Mason University, which supported genetic algorithms alongside other evolutionary techniques for large-scale simulations. Similarly, the Distributed Evolutionary Algorithms in Python (DEAP) library, released in 2012 but building on 1990s foundational concepts, enabled rapid prototyping of genetic algorithms in Python environments.

Major milestones in the field included the inaugural IEEE Conference on Evolutionary Computation in 1994, held as part of the World Congress on Computational Intelligence in Orlando, Florida, which formalized the growing community and showcased advancements in genetic algorithms for optimization problems. A pivotal contribution was the development of the Non-dominated Sorting Genetic Algorithm (NSGA) in 1994 by N. Srinivas and Kalyanmoy Deb, which introduced efficient non-dominated sorting for multi-objective optimization, significantly improving upon earlier Pareto-based approaches and becoming a benchmark for subsequent algorithms.

Commercial products emerged in the 1990s to bring genetic algorithms to practical optimization tasks, exemplified by Axcelis Inc.'s Evolver software, released in 1990 as a spreadsheet add-in, which applied genetic algorithms to nonlinear and combinatorial problems in business and finance. A notable real-world application occurred in 2006 with NASA's Space Technology 5 (ST5) mission, where genetic algorithms automatically evolved an X-band antenna design that met stringent performance requirements for the spacecraft's communication system, demonstrating the technology's efficacy in aerospace engineering.

In recent years up to 2025, genetic algorithms have integrated deeply with machine learning and artificial intelligence, particularly in automated machine learning (AutoML) tools such as TPOT (Tree-based Pipeline Optimization Tool), which employs genetic programming—a variant of genetic algorithms—to automate the design of machine learning pipelines, achieving competitive performance on benchmark datasets since its introduction in 2016. Additionally, quantum-inspired genetic algorithms have gained traction for enhanced optimization, as seen in 2024 developments where they optimize multilayer photonic structures by combining quantum computing principles with classical genetic operators to explore vast design spaces more efficiently. In 2025, hybrid genetic algorithms combined with machine learning have been applied to hyperparameter optimization in deep learning models, enhancing efficiency in complex search spaces.

Evolutionary computation

Evolutionary computation (EC) is a subfield of artificial intelligence and soft computing that employs computational models inspired by biological evolution to solve optimization and search problems. These methods simulate processes such as selection, mutation, and recombination to evolve populations of candidate solutions toward improved performance, often in complex search spaces where traditional optimization techniques may fail. EC encompasses several paradigms, including genetic algorithms (GAs), evolution strategies (ES), evolutionary programming (EP), and genetic programming (GP), each tailored to specific representations and operator emphases.

Evolution strategies, developed by Ingo Rechenberg and Hans-Paul Schwefel in the 1960s, focus on optimizing real-valued parameters for continuous problems. ES employ self-adaptive mutation rates, where strategy parameters evolve alongside object variables to dynamically adjust search step sizes, and typically prioritize mutation over recombination. In contrast to GAs, which often use discrete encodings like binary strings and emphasize population-level crossover, ES operate directly on continuous phenotypes with individual-focused variation, making them suitable for numerical optimization without genotype-phenotype distinctions.

Evolutionary programming, originated by Lawrence J. Fogel in 1960, evolves finite state machines or behavioral models through probabilistic mutations without initial reliance on crossover. EP stresses incremental behavioral variation to explore solution neighborhoods, differing from GAs by avoiding explicit genetic operators and focusing on individual adaptation for tasks like prediction or control systems.

Genetic programming, introduced by John R. Koza in 1992, extends EC to evolve hierarchical computer programs represented as tree structures. GP applies selection and variation to breed programs that solve problems through automatic induction, contrasting with standard GAs by targeting functional compositions rather than fixed-length strings, often for symbolic regression or design automation.

Despite these differences, all EC paradigms, including GAs, share core elements: maintaining a population of solutions evaluated by a fitness function, applying selection to favor superior individuals, and generating variation through mutation and/or recombination to explore the search space. This common framework enables GAs to integrate techniques from ES, EP, and GP, such as self-adaptation or tree-based representations, for hybrid approaches in broader optimization contexts.

Other metaheuristics

Metaheuristics encompass a broad class of optimization techniques designed to find approximate solutions to complex problems where exact methods are impractical. They are broadly classified into trajectory-based methods, which iteratively improve a single candidate solution by exploring a neighborhood of related solutions, and population-based methods, which maintain and evolve a set of multiple solutions simultaneously to enhance diversity and global search capabilities. Trajectory-based approaches emphasize intensification around promising areas, while population-based ones prioritize diversification across the search space.

Prominent trajectory-based metaheuristics include tabu search, which uses memory structures called tabu lists to avoid revisiting recent solutions and prevent cycling in local search, thereby promoting exploration of unvisited regions. Guided local search augments standard local search by dynamically penalizing features of infeasible or suboptimal solutions to escape local optima, using utility-based adjustments to guide the trajectory toward better global outcomes. Simulated annealing, inspired by the metallurgical annealing process, allows probabilistic acceptance of worse solutions based on a decreasing "temperature" parameter, enabling the algorithm to occasionally escape local minima early in the search and converge more deterministically later.

In contrast, non-evolutionary population-based metaheuristics like ant colony optimization and particle swarm optimization coordinate multiple agents differently from genetic algorithms. Ant colony optimization simulates pheromone deposition and evaporation on solution paths, where artificial ants construct solutions probabilistically based on accumulated trail strengths, fostering collaborative path reinforcement over iterations. Particle swarm optimization updates particle positions and velocities toward personal and global best-known solutions, leveraging social and cognitive influences to converge the swarm on promising regions without explicit genetic operators. Differential evolution, another population-based method, generates new candidate solutions through vector differences and scaling between randomly selected population members, followed by crossover and selection, emphasizing continuous parameter optimization via arithmetic rather than discrete recombination.

Genetic algorithms stand out among population-based metaheuristics due to their holistic evolution of an entire population through biologically inspired mechanisms—selection favoring fitter individuals, crossover exchanging genetic material between pairs, and mutation introducing random variations—which collectively mimic natural selection to maintain diversity and explore multimodal landscapes more robustly than single-trajectory or coordinated-swarm approaches. This genetic paradigm enables GAs to handle discrete, combinatorial, and mixed search spaces effectively, differing from the path-building or velocity-driven coordination in alternatives like ant colony or particle swarm methods.
