Multi-objective optimization
Multi-objective optimization or Pareto optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, or multiattribute optimization) is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance while minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
For a multi-objective optimization problem, no single solution is guaranteed to optimize every objective simultaneously; the objective functions are then said to be conflicting. A solution is called nondominated, Pareto optimal, Pareto efficient or noninferior if none of the objective functions can be improved in value without degrading some of the other objective values. Without additional subjective preference information, there may exist a (possibly infinite) number of Pareto optimal solutions, all of which are considered equally good. Researchers study multi-objective optimization problems from different viewpoints and, thus, there exist different solution philosophies and goals when setting and solving them. The goal may be to find a representative set of Pareto optimal solutions, to quantify the trade-offs in satisfying the different objectives, or to find a single solution that satisfies the subjective preferences of a human decision maker (DM).
Bicriteria optimization denotes the special case in which there are two objective functions.
There is a direct relationship between multitask optimization and multi-objective optimization.[1]
Introduction
A multi-objective optimization problem is an optimization problem that involves multiple objective functions.[2][3][4] In mathematical terms, a multi-objective optimization problem can be formulated as

$$\min_{x \in X} \left(f_1(x), f_2(x), \ldots, f_k(x)\right),$$

where the integer $k \geq 2$ is the number of objectives and the set $X$ is the feasible set of decision vectors, which is typically $X \subseteq \mathbb{R}^n$ but depends on the $n$-dimensional application domain. The feasible set is typically defined by some constraint functions. In addition, the vector-valued objective function is often defined as

$$f : X \to \mathbb{R}^k, \quad f(x) = \left(f_1(x), \ldots, f_k(x)\right)^{\mathsf{T}}.$$

If some objective function is to be maximized, it is equivalent to minimize its negative or its inverse. We denote the image of $X$ by $Y \subseteq \mathbb{R}^k$, call an element $x^* \in X$ a feasible solution or a feasible decision, and call a vector $z^* := f(x^*) \in \mathbb{R}^k$ an objective vector or an outcome.
In multi-objective optimization, there does not typically exist a feasible solution that minimizes all objective functions simultaneously. Therefore, attention is paid to Pareto optimal solutions; that is, solutions that cannot be improved in any of the objectives without degrading at least one of the other objectives. In mathematical terms, a feasible solution $x^1 \in X$ is said to (Pareto) dominate another solution $x^2 \in X$, if
- $f_i(x^1) \leq f_i(x^2)$ for all indices $i \in \{1, \ldots, k\}$, and
- $f_j(x^1) < f_j(x^2)$ for at least one index $j \in \{1, \ldots, k\}$.
A solution $x^* \in X$ (and the corresponding outcome $f(x^*)$) is called Pareto optimal if there does not exist another solution that dominates it. The set of Pareto optimal outcomes, denoted $f(X^*)$ for the Pareto optimal set $X^*$, is often called the Pareto front, Pareto frontier, or Pareto boundary.
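The dominance relation can be checked directly from the definition. The following is a minimal sketch (a brute-force filter over a small, made-up set of outcome vectors, not an efficient algorithm), assuming both objectives are minimized:

```python
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if outcome `a` Pareto-dominates outcome `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Return the nondominated subset of `points`."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

outcomes = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
front = pareto_front(outcomes)
# (3.0, 3.0) is dominated by (2.0, 2.0), and (4.0, 4.0) by every other point,
# so the front keeps only the first three outcomes.
```

Real solvers work over continuous feasible sets, but the same pairwise dominance test underlies most a posteriori methods.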
The Pareto front of a multi-objective optimization problem is bounded by a so-called nadir objective vector $z^{nad}$ and an ideal objective vector $z^{ideal}$, if these are finite. The nadir objective vector is defined as

$$z^{nad}_i = \sup_{x \in X^*} f_i(x) \quad \text{for all } i = 1, \ldots, k,$$

and the ideal objective vector as

$$z^{ideal}_i = \inf_{x \in X^*} f_i(x) \quad \text{for all } i = 1, \ldots, k,$$

where $X^*$ denotes the set of Pareto optimal solutions. In other words, the components of the nadir and ideal objective vectors define upper and lower bounds on the objective function values of Pareto optimal solutions. In practice, the nadir objective vector can only be approximated because, typically, the whole Pareto optimal set is unknown. In addition, a utopian objective vector $z^{utopian}$, with components $z^{utopian}_i = z^{ideal}_i - \epsilon$ for all $i = 1, \ldots, k$, where $\epsilon > 0$ is a small constant, is often defined for numerical reasons.
Examples of applications
Economics
In economics, many problems involve multiple objectives along with constraints on what combinations of those objectives are attainable. For example, a consumer's demand for various goods is determined by the process of maximization of the utilities derived from those goods, subject to a constraint based on how much income is available to spend on those goods and on the prices of those goods. This constraint allows more of one good to be purchased only at the sacrifice of consuming less of another good; therefore, the various objectives (more consumption of each good is preferred) are in conflict with each other. A common method for analyzing such a problem is to use a graph of indifference curves, representing preferences, and a budget constraint, representing the trade-offs that the consumer faces.
Another example involves the production possibilities frontier, which specifies what combinations of various types of goods can be produced by a society with certain amounts of various resources. The frontier specifies the trade-offs that the society is faced with — if the society is fully utilizing its resources, more of one good can be produced only at the expense of producing less of another good. A society must then use some process to choose among the possibilities on the frontier.
Macroeconomic policy-making is a context requiring multi-objective optimization. Typically a central bank must choose a stance for monetary policy that balances competing objectives — low inflation, low unemployment, low balance of trade deficit, etc. To do this, the central bank uses a model of the economy that quantitatively describes the various causal linkages in the economy; it simulates the model repeatedly under various possible stances of monetary policy, in order to obtain a menu of possible predicted outcomes for the various variables of interest. Then in principle it can use an aggregate objective function to rate the alternative sets of predicted outcomes, although in practice central banks use a non-quantitative, judgement-based process for ranking the alternatives and making the policy choice.
Finance
In finance, a common problem is to choose a portfolio when there are two conflicting objectives — the desire to have the expected value of portfolio returns be as high as possible, and the desire to have risk, often measured by the standard deviation of portfolio returns, be as low as possible. This problem is often represented by a graph in which the efficient frontier shows the best combinations of risk and expected return that are available, and in which indifference curves show the investor's preferences for various risk-expected return combinations. The problem of optimizing a function of the expected value (first moment) and the standard deviation (square root of the second central moment) of portfolio return is called a two-moment decision model.
Optimal control
In engineering and economics, many problems involve multiple objectives which are not describable as the-more-the-better or the-less-the-better; instead, there is an ideal target value for each objective, and the desire is to get as close as possible to the desired value of each objective. For example, energy systems typically have a trade-off between performance and cost,[5][6] or one might want to adjust a rocket's fuel usage and orientation so that it arrives both at a specified place and at a specified time; or one might want to conduct open market operations so that both the inflation rate and the unemployment rate are as close as possible to their desired values.
Often such problems are subject to linear equality constraints that prevent all objectives from being simultaneously perfectly met, especially when the number of controllable variables is less than the number of objectives and when the presence of random shocks generates uncertainty. Commonly a multi-objective quadratic objective function is used, with the cost associated with an objective rising quadratically with the distance of the objective from its ideal value. Since these problems typically involve adjusting the controlled variables at various points in time and/or evaluating the objectives at various points in time, intertemporal optimization techniques are employed.[7]
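A quadratic objective of this kind can be written as a weighted sum of squared deviations from the targets. The sketch below is purely illustrative: the variable names and the current values, targets, and weights are hypothetical, not drawn from any particular policy model:

```python
def quadratic_loss(values, targets, weights):
    """Aggregate cost in which each objective's penalty rises quadratically
    with its distance from the ideal target value."""
    return sum(w * (v - t) ** 2 for v, t, w in zip(values, targets, weights))

# Hypothetical example: two objectives (say, inflation and unemployment rates)
# with made-up current values, targets, and weights.
cost = quadratic_loss(values=[3.1, 5.5], targets=[2.0, 4.0], weights=[1.0, 0.5])
# cost = 1.0 * 1.1**2 + 0.5 * 1.5**2, roughly 2.335
```

The quadratic form penalizes large deviations disproportionately, which is why it is a common choice when all objectives have target values.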
Optimal design
Product and process design can be greatly improved using modern modeling, simulation, and optimization techniques.[citation needed] The key question in optimal design is measuring what is good or desirable about a design. Before looking for optimal designs, it is important to identify characteristics that contribute the most to the overall value of the design. A good design typically involves multiple criteria/objectives such as capital cost/investment, operating cost, profit, quality and/or product recovery, efficiency, process safety, operation time, etc. Therefore, in practical applications, the performance of process and product design is often measured with respect to multiple objectives. These objectives are typically conflicting, i.e., achieving the optimal value for one objective requires some compromise on one or more of the other objectives.
For example, when designing a paper mill, one can seek to decrease the amount of capital invested in a paper mill and enhance the quality of paper simultaneously. If the design of a paper mill is defined by large storage volumes and paper quality is defined by quality parameters, then the problem of optimal design of a paper mill can include objectives such as i) minimization of expected variation of those quality parameters from their nominal values, ii) minimization of the expected time of breaks and iii) minimization of the investment cost of storage volumes. Here, the maximum volume of towers is a design variable. This example of optimal design of a paper mill is a simplification of the model used in [8]. Multi-objective design optimization has also been implemented in engineering systems in circumstances such as control cabinet layout optimization,[9] airfoil shape optimization using scientific workflows,[10] design of nano-CMOS,[11] system on chip design, design of solar-powered irrigation systems,[12] optimization of sand mould systems,[13][14] engine design,[15][16] optimal sensor deployment[17] and optimal controller design.[18][19]
Process optimization
Multi-objective optimization has been increasingly employed in chemical engineering and manufacturing. In 2009, Fiandaca and Fraga used the multi-objective genetic algorithm (MOGA) to optimize the pressure swing adsorption process (cyclic separation process). The design problem involved the dual maximization of nitrogen recovery and nitrogen purity. The results approximated the Pareto frontier well with acceptable trade-offs between the objectives.[20]
In 2010, Sendín et al. solved a multi-objective problem for the thermal processing of food. They tackled two case studies (bi-objective and triple-objective problems) with nonlinear dynamic models. They used a hybrid approach consisting of the weighted Tchebycheff and the Normal Boundary Intersection approach. The novel hybrid approach was able to construct a Pareto optimal set for the thermal processing of foods.[21]
In 2013, Ganesan et al. carried out the multi-objective optimization of the combined carbon dioxide reforming and partial oxidation of methane. The objective functions were methane conversion, carbon monoxide selectivity, and hydrogen to carbon monoxide ratio. Ganesan used the Normal Boundary Intersection (NBI) method in conjunction with two swarm-based techniques (Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO)) to tackle the problem.[22] Applications involving chemical extraction[23] and bioethanol production processes[24] have posed similar multi-objective problems.
In 2013, Abakarov et al. proposed an alternative technique to solve multi-objective optimization problems arising in food engineering.[25] The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used to compute the initial set of the non-dominated or Pareto-optimal solutions. The Analytic Hierarchy Process and Tabular Method were used simultaneously for choosing the best alternative among the computed subset of non-dominated solutions for osmotic dehydration processes.[26]
In 2018, Pearce et al. formulated task allocation to human and robotic workers as a multi-objective optimization problem, with production time and the ergonomic impact on the human worker as the two objectives. Their approach used a Mixed-Integer Linear Program to solve the optimization problem for a weighted sum of the two objectives, calculating a set of Pareto optimal solutions. Applying the approach to several manufacturing tasks showed improvements in at least one objective in most tasks and in both objectives in some of the processes.[27]
Radio resource management
The purpose of radio resource management is to satisfy the data rates that are requested by the users of a cellular network.[28] The main resources are time intervals, frequency blocks, and transmit powers. Each user has its own objective function that, for example, can represent some combination of the data rate, latency, and energy efficiency. These objectives are conflicting since the frequency resources are very scarce; thus, there is a need for tight spatial frequency reuse, which causes immense inter-user interference if not properly controlled. Multi-user MIMO techniques are nowadays used to reduce the interference by adaptive precoding. The network operator would like to provide both great coverage and high data rates; thus, the operator would like to find a Pareto optimal solution that balances the total network data throughput and the user fairness in an appropriate subjective manner.
Radio resource management is often solved by scalarization; that is, selection of a network utility function that tries to balance throughput and user fairness. The choice of utility function has a large impact on the computational complexity of the resulting single-objective optimization problem.[28] For example, the common weighted sum-rate utility gives an NP-hard problem with a complexity that scales exponentially with the number of users, while the weighted max-min fairness utility results in a quasi-convex optimization problem with only polynomial scaling in the number of users.[29]
Electric power systems
Reconfiguration, by exchanging the functional links between the elements of the system, represents one of the most important measures that can improve the operational performance of a distribution system. The problem of optimization through the reconfiguration of a power distribution system is, by its definition, historically a single-objective problem with constraints. Since 1975, when Merlin and Back[30] introduced the idea of distribution system reconfiguration for active power loss reduction, many researchers have proposed diverse methods and algorithms to solve the reconfiguration problem as a single-objective problem. Some authors have proposed Pareto optimality based approaches (including active power losses and reliability indices as objectives). For this purpose, different artificial intelligence based methods have been used: microgenetic,[31] branch exchange,[32] particle swarm optimization[33] and non-dominated sorting genetic algorithm.[34]
Inspection of infrastructure
Autonomous inspection of infrastructure has the potential to reduce costs, risks and environmental impacts, as well as to ensure better periodic maintenance of inspected assets. Typically, planning such missions has been viewed as a single-objective optimization problem, where one aims to minimize the energy or time spent in inspecting an entire target structure.[35] For complex, real-world structures, however, covering 100% of an inspection target is not feasible, and generating an inspection plan may be better viewed as a multiobjective optimization problem, where one aims to both maximize inspection coverage and minimize time and costs. A recent study has indicated that multiobjective inspection planning indeed has the potential to outperform traditional methods on complex structures.[36]
Solution
As multiple Pareto optimal solutions for multi-objective optimization problems usually exist, what it means to solve such a problem is not as straightforward as it is for a conventional single-objective optimization problem. Therefore, different researchers have defined the term "solving a multi-objective optimization problem" in various ways. This section summarizes some of them and the contexts in which they are used. Many methods convert the original problem with multiple objectives into a single-objective optimization problem. This is called a scalarized problem. If the Pareto optimality of the obtained single-objective solutions can be guaranteed, the scalarization is said to be done neatly.
Solving a multi-objective optimization problem is sometimes understood as approximating or computing all or a representative set of Pareto optimal solutions.[37][38]
When decision making is emphasized, the objective of solving a multi-objective optimization problem is referred to as supporting a decision maker in finding the most preferred Pareto optimal solution according to their subjective preferences.[2][39] The underlying assumption is that one solution to the problem must be identified to be implemented in practice. Here, a human decision maker (DM) plays an important role. The DM is expected to be an expert in the problem domain.
The most preferred results can be found using different philosophies. Multi-objective optimization methods can be divided into four classes.[3]
- In so-called no-preference methods, no DM is expected to be available, but a neutral compromise solution is identified without preference information.[2] The other classes are so-called a priori, a posteriori, and interactive methods, and they all involve preference information from the DM in different ways.
- In a priori methods, preference information is first asked from the DM, and then a solution best satisfying these preferences is found.
- In a posteriori methods, a representative set of Pareto optimal solutions is first found, and then the DM must choose one of them.
- In interactive methods, the decision maker is allowed to search for the most preferred solution iteratively. In each iteration of the interactive method, the DM is shown Pareto optimal solution(s) and describes how the solution(s) could be improved. The information given by the DM is then taken into account while generating new Pareto optimal solution(s) for the DM to study in the next iteration. In this way, the DM learns about the feasibility of their wishes and can concentrate on solutions that are interesting to them. The DM may stop the search whenever they want to.
More information and examples of different methods in the four classes are given in the following sections.
No-preference methods
When a decision maker does not explicitly articulate any preference information, the multi-objective optimization method can be classified as a no-preference method.[3] A well-known example is the method of global criterion,[40] in which a scalarized problem of the form

$$\min_{x \in X} \left\| f(x) - z^{ideal} \right\|$$

is solved. In the above problem, $\| \cdot \|$ can be any $L_p$ norm, with common choices including $L_1$, $L_2$, and $L_\infty$.[2] The method of global criterion is sensitive to the scaling of the objective functions. Thus, it is recommended that the objectives be normalized into a uniform, dimensionless scale.[2][39]
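The method of global criterion can be illustrated on a toy bicriteria problem. In the sketch below, the grid discretization, the two objective functions, and the ideal point $(0, 0)$ are illustrative assumptions; a real implementation would use a continuous solver on normalized objectives:

```python
def global_criterion(objectives, candidates, z_ideal, p=2):
    """Method of global criterion on a finite candidate set: return the
    point whose objective vector has the smallest L_p distance to the
    ideal objective vector."""
    def distance(x):
        return sum(abs(f(x) - z) ** p for f, z in zip(objectives, z_ideal)) ** (1.0 / p)
    return min(candidates, key=distance)

# Toy bicriteria problem: f1(x) = x^2, f2(x) = (x - 2)^2 on a grid over [-1, 3].
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2.0) ** 2
grid = [i / 100.0 for i in range(-100, 301)]
best = global_criterion([f1, f2], grid, z_ideal=(0.0, 0.0), p=2)
# The individual minimizers are x = 0 and x = 2; the L_2 compromise is x = 1.
```

Changing `p` changes the compromise: $L_1$ weights all deviations equally, while $L_\infty$ minimizes the single worst deviation.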
A priori methods
A priori methods require that sufficient preference information is expressed before the solution process.[3] Well-known examples of a priori methods include the utility function method, lexicographic method, and goal programming.
Utility function method
The utility function method assumes that the decision maker's utility function is available. A mapping $u : Y \to \mathbb{R}$ is a utility function if for all $y^1, y^2 \in Y$ it holds that $u(y^1) > u(y^2)$ whenever the decision maker prefers $y^1$ to $y^2$, and $u(y^1) = u(y^2)$ whenever the decision maker is indifferent between $y^1$ and $y^2$. The utility function specifies an ordering of the decision vectors (recall that vectors can be ordered in many different ways). Once $u$ is obtained, it suffices to solve

$$\max_{x \in X} u(f(x)),$$

but in practice, it is very difficult to construct a utility function that accurately represents the decision maker's preferences,[2] particularly since the Pareto front is unknown before the optimization begins.
Lexicographic method
The lexicographic method assumes that the objectives can be ranked in the order of importance. We assume that the objective functions are in the order of importance so that $f_1$ is the most important and $f_k$ the least important to the decision maker. Subject to this assumption, various methods can be used to attain the lexicographically optimal solution. Note that a goal or target value is not specified for any objective here, which makes it different from the Lexicographic Goal Programming method.
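For a finite candidate set, the lexicographic method reduces to sequential filtering: optimize the most important objective first, then optimize the next objective over the remaining ties. A minimal sketch (the tolerance `tol` and the toy data are illustrative assumptions):

```python
def lexicographic_min(objectives, candidates, tol=1e-9):
    """Lexicographic method on a finite candidate set: minimize the
    objectives in decreasing order of importance, keeping only candidates
    within `tol` of the best value of each more important objective."""
    pool = list(candidates)
    for f in objectives:
        best = min(f(x) for x in pool)
        pool = [x for x in pool if f(x) <= best + tol]
    return pool[0]

# Toy data: f1 (most important) is minimized first; f2 only breaks ties.
points = [(0, 5), (0, 2), (1, 0)]
winner = lexicographic_min([lambda x: x[0], lambda x: x[1]], points)
# (1, 0) has the best f2 but loses on f1, so (0, 2) wins.
```

The tolerance matters in continuous problems, where "equal to the optimum" must be relaxed to "within a small band of the optimum" for the later stages to have any freedom.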
Scalarizing
Scalarizing a multi-objective optimization problem is an a priori method, which means formulating a single-objective optimization problem such that optimal solutions to the single-objective optimization problem are Pareto optimal solutions to the multi-objective optimization problem.[3] In addition, it is often required that every Pareto optimal solution can be reached with some parameters of the scalarization.[3] With different parameters for the scalarization, different Pareto optimal solutions are produced. A general formulation for a scalarization of a multi-objective optimization problem is

$$\min_{x \in X_\theta} g\left(f_1(x), \ldots, f_k(x); \theta\right),$$

where $\theta$ is a vector parameter, the set $X_\theta \subseteq X$ is a set depending on the parameter $\theta$, and $g$ is a function.
Very well-known examples are:
- linear scalarization
- $$\min_{x \in X} \sum_{i=1}^{k} w_i f_i(x),$$
- where the weights of the objectives $w_i > 0$ are the parameters of the scalarization.
- $\epsilon$-constraint method (see, e.g.,[2])
- $$\min_{x \in X} f_j(x) \quad \text{subject to} \quad f_i(x) \leq \epsilon_i \ \text{ for all } i \in \{1, \ldots, k\} \setminus \{j\},$$
- where the upper bounds $\epsilon_i$ are parameters as above and $f_j$ is the objective to be minimized.
Somewhat more advanced examples are the following:
- achievement scalarizing problems of Wierzbicki[41]
- One example of the achievement scalarizing problems can be formulated as
- $$\min_{x \in X} \max_{i=1,\ldots,k} \left[ \frac{f_i(x) - \bar{z}_i}{z^{nad}_i - z^{utopian}_i} \right] + \rho \sum_{i=1}^{k} \frac{f_i(x)}{z^{nad}_i - z^{utopian}_i},$$
- where the term $\rho \sum_{i=1}^{k} \frac{f_i(x)}{z^{nad}_i - z^{utopian}_i}$ is called the augmentation term, $\rho > 0$ is a small constant, and $z^{nad}$ and $z^{utopian}$ are the nadir and utopian vectors, respectively. In the above problem, the parameter $\bar{z}$ is the so-called reference point representing objective function values preferred by the decision maker.
- Sen's multi-objective programming[42]
- $$\max_{x \in X} \ \sum_{j=1}^{r} \frac{Z_j}{Z^*_j} - \sum_{j=r+1}^{s} \frac{Z_j}{Z^*_j},$$
- where $Z^*_j$ is the individual optimum (absolute) for objectives of maximization $j = 1, \ldots, r$ and minimization $j = r + 1, \ldots, s$.
- hypervolume/Chebyshev scalarization[43]
- $$\min_{x \in X} \max_{i=1,\ldots,k} \frac{f_i(x)}{w_i},$$
- where the weights of the objectives $w_i > 0$ are the parameters of the scalarization. If the parameters/weights are drawn uniformly in the positive orthant, it is shown that this scalarization provably converges to the Pareto front,[43] even when the front is non-convex.
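The practical difference between linear and Chebyshev-type scalarizations can be seen on a toy discrete front. In the sketch below (the three outcome vectors are hypothetical), point C lies in a non-convex region: its weighted sum $0.6(w_1 + w_2)$ always exceeds $\min(w_1, w_2)$, so no weight vector makes the linear scalarization select it, while the weighted Chebyshev scalarization relative to the ideal point $(0, 0)$ recovers it:

```python
def weighted_sum(z, w):
    """Linear scalarization of an outcome vector z with weights w."""
    return sum(wi * zi for wi, zi in zip(w, z))

def weighted_chebyshev(z, w, z_ideal=(0.0, 0.0)):
    """Weighted Chebyshev scalarization relative to an ideal point."""
    return max(wi * (zi - zi0) for wi, zi, zi0 in zip(w, z, z_ideal))

# Three hypothetical nondominated outcomes; C lies in a non-convex region.
A, B, C = (0.0, 1.0), (1.0, 0.0), (0.6, 0.6)
w = (0.5, 0.5)
best_linear = min([A, B, C], key=lambda z: weighted_sum(z, w))
best_cheby = min([A, B, C], key=lambda z: weighted_chebyshev(z, w))
# The linear scalarization picks A or B for every weight vector and can
# never return C; the Chebyshev scalarization with equal weights picks C.
```

This is the standard argument for preferring Chebyshev-type scalarizations when the Pareto front may be non-convex.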
Smooth Chebyshev (Tchebycheff) scalarization
The smooth Chebyshev scalarization,[44] also called smooth Tchebycheff scalarization (STCH), replaces the non-differentiable max-operator of the classical Chebyshev scalarization with a smooth logarithmic soft-max, making standard gradient-based optimization applicable. Unlike typical scalarization methods, it guarantees exploration of the entire Pareto front, convex or concave.
- Definition
For a minimization problem with objective functions $f_1, \ldots, f_k$ and the ideal objective vector $z^{ideal}$, the smooth Chebyshev scalarizing function is

$$g_\mu(x; w) = \mu \log \sum_{i=1}^{k} \exp\!\left( \frac{w_i \left( f_i(x) - z^{ideal}_i \right)}{\mu} \right),$$

where $\mu > 0$ is the smoothing parameter and $w$ is a weight vector on the probability simplex $\left\{ w : w_i \geq 0, \ \sum_{i=1}^{k} w_i = 1 \right\}$.
As $\mu \to 0^+$, this converges to the classical (non-smooth) Chebyshev form

$$\max_{i=1,\ldots,k} w_i \left( f_i(x) - z^{ideal}_i \right).$$

The parameter $\mu$ controls the trade-off between differentiability and approximation accuracy: smaller values yield a closer match to the classical Chebyshev scalarization but increase the Lipschitz constant of the gradient, while larger values give a smoother surface at the cost of a looser approximation.

- Properties
- Smoothness and complexity: $g_\mu$ is continuously differentiable with an $O(1/\mu)$-Lipschitz gradient. When every $f_i$ is convex the function is convex, and an $\epsilon$-optimal point is reachable in $O(1/\epsilon)$ first-order iterations; sub-gradient descent on the non-smooth Chebyshev scalarization needs $O(1/\epsilon^2)$ iterations.[44]
- Pareto optimality: For any $\mu > 0$, every minimizer of $g_\mu$ is weakly Pareto-optimal; if all $w_i > 0$ (or the minimizer is unique) it is Pareto-optimal.[44]
- Exhaustiveness: There exists a threshold $\mu_0 > 0$ such that, for $0 < \mu \leq \mu_0$, every Pareto-optimal point can be obtained as a minimizer of $g_\mu$ for some weight vector $w$; when the Pareto front is convex, this holds for all $\mu > 0$.[44]
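The smooth Chebyshev function is straightforward to implement with a numerically stable log-sum-exp. The sketch below evaluates it at a fixed, hypothetical outcome vector and shows the value approaching the hard weighted maximum from above as $\mu$ shrinks:

```python
import math

def smooth_chebyshev(z, w, z_ideal, mu):
    """Smooth Tchebycheff scalarization: mu-scaled log-sum-exp of the
    weighted deviations from the ideal point (shifted for stability)."""
    terms = [wi * (zi - zi0) / mu for wi, zi, zi0 in zip(w, z, z_ideal)]
    m = max(terms)
    return mu * (m + math.log(sum(math.exp(t - m) for t in terms)))

# Hypothetical outcome vector and weights; the hard weighted max is 1.5.
z, w, z_ideal = (1.0, 3.0), (0.5, 0.5), (0.0, 0.0)
vals = [smooth_chebyshev(z, w, z_ideal, mu) for mu in (1.0, 0.1, 0.01)]
# vals decreases toward 1.5 as mu shrinks, since log-sum-exp upper-bounds max.
```

Shifting by the maximum term before exponentiating is essential in practice: for small $\mu$ the raw exponents overflow a double.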
For example, portfolio optimization is often conducted in terms of mean-variance analysis. In this context, the efficient set is a subset of the portfolios parametrized by the portfolio mean return $\mu_P$ in the problem of choosing portfolio shares to minimize the portfolio's variance of return $\sigma_P^2$ subject to a given value of $\mu_P$; see Mutual fund separation theorem for details. Alternatively, the efficient set can be specified by choosing the portfolio shares to maximize the function $\mu_P - b \sigma_P$, where $b > 0$ reflects risk aversion; the set of efficient portfolios consists of the solutions as $b$ ranges from zero to infinity.
Some of the above scalarizations invoke the minimax principle, where the worst of the different objectives is always the one optimized.[45]
A posteriori methods
A posteriori methods aim at producing all the Pareto optimal solutions or a representative subset of the Pareto optimal solutions. Most a posteriori methods fall into one of the following three classes:
- Mathematical programming-based a posteriori methods where an algorithm is run repeatedly, each run producing one Pareto optimal solution;
- Evolutionary algorithms where one run of the algorithm produces a set of Pareto optimal solutions;
- Deep learning methods where a model is first trained on a subset of solutions and then queried to provide other solutions on the Pareto front.
Mathematical programming
Well-known examples of mathematical programming-based a posteriori methods are the Normal Boundary Intersection (NBI),[46] Modified Normal Boundary Intersection (NBIm),[47] Normal Constraint (NC),[48][49] Successive Pareto Optimization (SPO),[50] and Directed Search Domain (DSD)[51] methods, which solve the multi-objective optimization problem by constructing several scalarizations. The solution to each scalarization yields a Pareto optimal solution, whether locally or globally. The scalarizations of the NBI, NBIm, NC, and DSD methods are constructed to obtain evenly distributed Pareto points that give a good approximation of the real set of Pareto points.
Evolutionary algorithms
Evolutionary algorithms are popular approaches to generating Pareto optimal solutions to a multi-objective optimization problem. Most evolutionary multi-objective optimization (EMO) algorithms apply Pareto-based ranking schemes. Evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm-II (NSGA-II),[52] its extended version NSGA-III,[53][54] Strength Pareto Evolutionary Algorithm 2 (SPEA-2)[55] and multiobjective differential evolution variants have become standard approaches, although some schemes based on particle swarm optimization and simulated annealing[56] are also significant. The main advantage of evolutionary algorithms, when applied to solve multi-objective optimization problems, is that they typically generate sets of solutions, allowing computation of an approximation of the entire Pareto front. Their main disadvantages are their lower speed and that the Pareto optimality of the solutions cannot be guaranteed; it is only known that none of the generated solutions is dominated by another.
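The Pareto-based ranking used by NSGA-II-style algorithms can be sketched as repeated extraction of nondominated fronts. The version below is a simple quadratic-time filter per front over made-up outcome vectors, not the fast non-dominated sort of the actual algorithm:

```python
def non_dominated_sort(points):
    """Rank outcome vectors into successive Pareto fronts (minimization):
    front 0 is nondominated; each later front becomes nondominated once
    the earlier fronts are removed."""
    def dominates(a, b):
        return a != b and all(x <= y for x, y in zip(a, b))
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

ranked = non_dominated_sort([(1, 4), (2, 2), (3, 1), (3, 3), (4, 4)])
# First front: [(1, 4), (2, 2), (3, 1)]; (3, 3) and (4, 4) land in later fronts.
```

In NSGA-II this rank is the primary selection key; a diversity measure (crowding distance) breaks ties within a front.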
Another paradigm for multi-objective optimization based on novelty using evolutionary algorithms was recently improved upon.[57] This paradigm searches for novel solutions in objective space (i.e., novelty search[58] on objective space) in addition to the search for non-dominated solutions. Novelty search is like stepping stones guiding the search to previously unexplored places. It is especially useful in overcoming bias and plateaus as well as guiding the search in many-objective optimization problems.
Deep learning methods
Deep learning conditional methods are new approaches to generating several Pareto optimal solutions. The idea is to use the generalization capacity of deep neural networks to learn a model of the entire Pareto front from a limited number of example trade-offs along that front, a task called Pareto Front Learning.[59] Several approaches address this setup, including using hypernetworks[59] and using Stein variational gradient descent.[60]
List of methods
Commonly known a posteriori methods are listed below:
- Approximation-Guided Evolution (first algorithm to directly implement and optimize the formal concept of approximation from theoretical computer science)[61]
- Benson's algorithm for multi-objective linear programs and for multi-objective convex programs
- Directed Search Domain (DSD)[62]
- ε-constraint method[63][64]
- IOSO (Indirect Optimization on the basis of Self-Organization)
- MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition)[65]
- Modified Normal Boundary Intersection (NBIm)[47]
- Multi-objective Branch-and-Bound[66][67][68]
- Multi-objective particle swarm optimization
- Normal Boundary Intersection (NBI)[46]
- Normal Constraint (NC)[48][49]
- NSGA-II (Non-dominated Sorting Genetic Algorithm II), uses non-dominated sorting and crowding distance for selection and diversity maintenance[52]
- NSGA-III (Non-dominated Sorting Genetic Algorithm III), uses reference points and is designed for many-objective optimization[69]
- Pareto-Hypernetworks [59]
- PGEN (Pareto surface generation for convex multi-objective instances)[70]
- Reactive Search Optimization (using machine learning for adapting strategies and objectives),[71][72] implemented in LIONsolver
- SMS-EMOA (S-metric selection evolutionary multi-objective algorithm). Uses selection based on the hypervolume indicator (S-Metric) of a Pareto front approximation.[73]
- SPEA2 (Strength Pareto Evolutionary Algorithm 2), a population-based evolutionary algorithm using Pareto dominance counts for convergence and density estimation for diversity maintenance[74]
- Subpopulation Algorithm based on Novelty[57]
- Successive Pareto Optimization (SPO)[50]
Interactive methods
In interactive methods of optimizing multiple objective problems, the solution process is iterative and the decision maker continuously interacts with the method when searching for the most preferred solution (see e.g., Miettinen 1999,[2] Miettinen 2008[75]). In other words, the decision maker is expected to express preferences at each iteration to get Pareto optimal solutions that are of interest to the decision maker and learn what kind of solutions are attainable.
The following steps are commonly present in interactive methods of optimization:[75]
- initialize (e.g., calculate ideal and approximated nadir objective vectors and show them to the decision maker)
- generate a Pareto optimal starting point (by using e.g., some no-preference method or solution given by the decision maker)
- ask for preference information from the decision maker (e.g., aspiration levels or number of new solutions to be generated)
- generate new Pareto optimal solution(s) according to the preferences and show it/them and possibly some other information about the problem to the decision maker
- if several solutions were generated, ask the decision maker to select the best solution so far
- stop (if the decision maker wants to; otherwise, go to step 3).
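The steps above can be sketched as a loop over a precomputed finite Pareto set. The `prefers` predicate below is a stand-in for the decision maker's answers, and the mock preferences are purely illustrative:

```python
def interactive_search(pareto_set, prefers, max_iters=100):
    """Skeleton of an interactive method: repeatedly offer alternatives
    and move to one the decision maker prefers, stopping when no
    alternative is preferred to the current solution (a crude model of
    psychological convergence)."""
    current = pareto_set[0]                # Pareto optimal starting point
    for _ in range(max_iters):
        better = [c for c in pareto_set if c != current and prefers(c, current)]
        if not better:                     # the DM is satisfied: stop
            return current
        current = better[0]                # move to a preferred solution
    return current

# Mock decision maker: wants the first objective below 2.5, then smallest f2.
def prefers(a, b):
    return (a[0] < 2.5, -a[1]) > (b[0] < 2.5, -b[1])

chosen = interactive_search([(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)], prefers)
# The search moves from (1.0, 4.0) to (2.0, 2.0) and stops there.
```

Real interactive methods generate new Pareto optimal solutions on demand from preference information (aspiration levels, classifications) rather than choosing from a fixed list, but the stop-when-satisfied control flow is the same.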
The above aspiration levels refer to desirable objective function values forming a reference point. Instead of mathematical convergence, often used as a stopping criterion in mathematical optimization methods, psychological convergence is often emphasized in interactive methods. Generally speaking, a method is terminated when the decision maker is confident that he/she has found the most preferred solution available.
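One standard way to turn such a reference point into a computable subproblem is an achievement scalarizing function in the spirit of Wierzbicki. The sketch below evaluates it over a small, made-up Pareto front approximation; all numbers are illustrative.

```python
def achievement(f, ref, weights, rho=1e-6):
    """Achievement scalarizing function (minimization convention).

    f       : objective vector of a candidate solution
    ref     : reference point of aspiration levels set by the decision maker
    weights : positive scaling weights, e.g. 1 / (nadir_i - ideal_i)
    The small augmentation term rho * sum(...) discourages merely weakly
    Pareto optimal solutions.
    """
    terms = [w * (fi - ri) for fi, ri, w in zip(f, ref, weights)]
    return max(terms) + rho * sum(terms)

# Picking, from a finite front approximation, the point that best matches
# the decision maker's aspiration levels:
front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
ref = (2.0, 2.5)          # aspiration levels for the two objectives
weights = (1.0, 1.0)
best = min(front, key=lambda f: achievement(f, ref, weights))
```

Here the point (2.0, 2.0) scores lowest because it meets or beats both aspiration levels, which is exactly the behavior reference point methods aim for.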
Types of preference information
There are different interactive methods involving different types of preference information. Three types can be identified based on
- trade-off information,
- reference points, and
- classification of objective functions.[75]
In addition, a fourth type, based on generating a small sample of solutions, has been described.[76][77]

An example of an interactive method utilizing trade-off information is the Zionts–Wallenius method,[78] in which the decision maker is shown several objective trade-offs at each iteration and is expected to say whether they like, dislike, or are indifferent to each trade-off. In reference-point-based methods (see e.g.,[79][80]), the decision maker is expected at each iteration to specify a reference point consisting of desired values for each objective; a corresponding Pareto optimal solution (or solutions) is then computed and shown to them for analysis. In classification-based interactive methods, the decision maker is assumed to give preferences by classifying the objectives at the current Pareto optimal solution into different classes indicating how their values should be changed to obtain a more preferred solution; this classification information is then used when new (more preferred) Pareto optimal solutions are computed. In the satisficing trade-off method (STOM),[81] three classes are used: objectives whose values 1) should be improved, 2) can be relaxed, and 3) are acceptable as such. In the NIMBUS method,[82][83] two additional classes are used: objectives whose values 4) should be improved until a given bound and 5) can be relaxed until a given bound.
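As an illustration, a NIMBUS-style classification could be encoded as follows. The objective names, values, and the translation into bounds are hypothetical, meant only to show how classification information constrains the search for the next solution.

```python
# Current Pareto optimal point (all objectives minimized; values made up).
current = {"cost": 120.0, "emissions": 8.5, "discomfort": 0.7}

# One class per objective, with a bound where the class requires one:
#   "improve"        - 1) should be improved
#   "relax"          - 2) can be relaxed freely
#   "acceptable"     - 3) acceptable as such
#   "improve_until"  - 4) should be improved until the given bound
#   "relax_until"    - 5) can be relaxed until the given bound
classification = {
    "cost":       ("improve", None),
    "emissions":  ("improve_until", 7.0),
    "discomfort": ("relax_until", 0.9),
}

def to_bounds(current, classification):
    """Translate a classification into per-objective upper bounds that a
    solver could impose when computing the next solution (a sketch)."""
    bounds = {}
    for name, (cls, bound) in classification.items():
        if cls in ("improve", "acceptable"):
            bounds[name] = current[name]   # must not get worse / stay put
        elif cls in ("improve_until", "relax_until"):
            bounds[name] = bound           # aim at / allow up to the bound
        elif cls == "relax":
            bounds[name] = float("inf")    # unconstrained worsening allowed
    return bounds
```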
Hybrid methods
Different hybrid methods exist; here we consider hybridizing MCDM (multi-criteria decision making) and EMO (evolutionary multi-objective optimization). A hybrid algorithm in multi-objective optimization combines algorithms/approaches from these two fields (see e.g.,[75]). Hybrid algorithms of EMO and MCDM are mainly used to overcome the shortcomings of one field by utilizing the strengths of the other. Several types of hybrid algorithms have been proposed in the literature, e.g., incorporating MCDM approaches into EMO algorithms as a local search operator, or leading a DM to the most preferred solution(s). A local search operator is mainly used to enhance the rate of convergence of EMO algorithms.
The roots of hybrid multi-objective optimization can be traced to the first Dagstuhl seminar organized in November 2004. There, leading researchers in EMO (Professor Kalyanmoy Deb, Professor Jürgen Branke, etc.) and MCDM (Professor Kaisa Miettinen, Professor Ralph E. Steuer, etc.) realized the potential of combining ideas and approaches of the MCDM and EMO fields to prepare hybrids of them. Subsequently, many more Dagstuhl seminars have been arranged to foster collaboration. Recently, hybrid multi-objective optimization has become an important theme in several international conferences in the area of EMO and MCDM (see e.g.,[84][85]).
Visualization of the Pareto front
Visualization of the Pareto front is one of the a posteriori preference techniques of multi-objective optimization, which form an important class of multi-objective optimization techniques.[2] Usually, the a posteriori preference techniques include four steps: (1) the computer approximates the Pareto front, i.e., the Pareto optimal set in the objective space; (2) the decision maker studies the Pareto front approximation; (3) the decision maker identifies the preferred point on the Pareto front; (4) the computer provides the Pareto optimal decision whose output coincides with the objective point identified by the decision maker. From the point of view of the decision maker, the second step is the most complicated. There are two main approaches to informing the decision maker. First, a number of points of the Pareto front can be provided in the form of a list (an interesting discussion and references are given in[86]) or using heatmaps.[87] Second, the Pareto front can be visualized, as discussed in the following subsections.
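The four steps can be illustrated on a toy convex bi-objective problem, using weighted-sum scalarizations for the approximation step (adequate here only because the toy problem is convex) and simulating the decision maker's choice; all names and numbers are illustrative.

```python
def objectives(x):                       # toy bi-objective problem on [0, 1]
    return (x * x, (x - 1.0) ** 2)

xs = [i / 200.0 for i in range(201)]     # crude grid over the decision space

def approximate_front(n_weights=5):
    """Step 1: one Pareto optimal point per weight vector (w, 1 - w)."""
    front = {}
    for j in range(n_weights):
        w = j / (n_weights - 1)
        x = min(xs, key=lambda x: w * objectives(x)[0]
                                  + (1 - w) * objectives(x)[1])
        front[objectives(x)] = x         # remember the decision behind each point
    return front

front = approximate_front()              # step 2: the DM studies this list
chosen = min(front)                      # step 3: stand-in for the DM's pick
decision = front[chosen]                 # step 4: the corresponding decision
```

Keeping the map from objective points back to decisions is what makes step (4) trivial here; real implementations store the same correspondence.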
Visualization in bi-objective problems: tradeoff curve
In the case of bi-objective problems, informing the decision maker about the Pareto front is usually carried out by visualization: the Pareto front, often called the tradeoff curve in this case, can be drawn in the objective plane. The tradeoff curve gives full information on objective values and on objective tradeoffs, i.e., on how improving one objective is related to deteriorating the other while moving along the curve. The decision maker takes this information into account while specifying the preferred Pareto optimal objective point. The idea of approximating and visualizing the Pareto front was introduced for linear bi-objective decision problems by S. Gass and T. Saaty.[88] It was developed further and applied to environmental problems by J.L. Cohon.[89] A review of methods for approximating the Pareto front for various decision problems with a small number of objectives (mainly two) is provided in.[90]
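As a sketch of how such a tradeoff curve can be traced numerically, the ε-constraint method on the classic toy problem min (x², (x − 2)²) over [0, 2] works as follows; the crude grid search is for illustration only.

```python
def f1(x): return x * x
def f2(x): return (x - 2.0) ** 2

xs = [i / 1000.0 * 2.0 for i in range(1001)]   # grid over [0, 2]

def eps_constraint(eps):
    """Minimize f1 subject to f2(x) <= eps; return the objective pair."""
    feasible = [x for x in xs if f2(x) <= eps]
    x_best = min(feasible, key=f1)
    return (f1(x_best), f2(x_best))

# Tightening eps sweeps along the tradeoff curve: improving f2 (smaller
# eps) forces f1 to deteriorate, which is exactly the tradeoff the curve
# communicates to the decision maker.
curve = [eps_constraint(eps) for eps in (4.0, 2.0, 1.0, 0.5)]
```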
Visualization in high-order multi-objective optimization problems
There are two generic ideas for visualizing the Pareto front in high-order multi-objective decision problems (problems with more than two objectives). One of them, applicable in the case of a relatively small number of objective points representing the Pareto front, is based on the visualization techniques developed in statistics (various diagrams, etc.; see the corresponding subsection below). The second idea proposes the display of bi-objective cross-sections (slices) of the Pareto front. It was introduced by W.S. Meisel in 1973,[91] who argued that such slices inform the decision maker about objective tradeoffs. The figures that display a series of bi-objective slices of the Pareto front for three-objective problems are known as decision maps. They give a clear picture of tradeoffs between the three criteria. The disadvantages of such an approach are related to two facts. First, the computational procedures for constructing the bi-objective slices of the Pareto front are unstable, since the Pareto front is usually not stable. Second, it is applicable only in the case of three objectives. In the 1980s, the idea of W.S. Meisel was implemented in a different form, namely the Interactive Decision Maps (IDM) technique.[92] More recently, N. Wesner[93] proposed using a combination of a Venn diagram and multiple scatterplots of the objective space to explore the Pareto frontier and select optimal solutions.
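For a finite set of three-objective points, the slicing idea can be sketched as follows: each slice keeps only the (f1, f2)-nondominated points whose third objective stays below a given level. The points are made up for illustration.

```python
# Candidate objective vectors (f1, f2, f3), all to be minimized.
points = [
    (1.0, 4.0, 3.0),
    (2.0, 2.0, 2.0),
    (4.0, 1.0, 1.0),
    (3.0, 3.0, 0.5),
]

def dominated(p, q):
    """True if q dominates p in the (f1, f2) minimization sense."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def slice_front(points, f3_level):
    """Non-dominated (f1, f2) pairs among points with f3 <= f3_level."""
    pairs = [(p[0], p[1]) for p in points if p[2] <= f3_level]
    return sorted(p for p in pairs
                  if not any(dominated(p, q) for q in pairs if q != p))

# One bi-objective slice per f3 level: the raw material of a decision map.
decision_map = {level: slice_front(points, level) for level in (1.0, 2.0, 3.0)}
```

Plotting each slice as a curve in the (f1, f2) plane, one curve per f3 level, reproduces the decision-map picture described above.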
References
- ^ J.-Y. Li, Z.-H. Zhan, Y. Li and J. Zhang, "Multiple Tasks for Multiple Objectives: A New Multiobjective Optimization Method via Multitask Optimization," IEEE Transactions on Evolutionary Computation, doi:10.1109/TEVC.2023.3294307
- ^ a b c d e f g h i Kaisa Miettinen (1999). Nonlinear Multiobjective Optimization. Springer. ISBN 978-0-7923-8278-2. Retrieved 29 May 2012.
- ^ a b c d e f Ching-Lai Hwang; Abu Syed Md Masud (1979). Multiple objective decision making, methods and applications: a state-of-the-art survey. Springer-Verlag. ISBN 978-0-387-09111-2. Retrieved 29 May 2012.
- ^ Hassanzadeh, Hamidreza; Rouhani, Modjtaba (2010). "A multi-objective gravitational search algorithm". In Computational Intelligence, Communication Systems and Networks (CICSyN): 7–12.
- ^ Shirazi, Ali; Najafi, Behzad; Aminyavari, Mehdi; Rinaldi, Fabio; Taylor, Robert A. (2014-05-01). "Thermal–economic–environmental analysis and multi-objective optimization of an ice thermal energy storage system for gas turbine cycle inlet air cooling". Energy. 69: 212–226. Bibcode:2014Ene....69..212S. doi:10.1016/j.energy.2014.02.071. hdl:11311/845828.
- ^ Najafi, Behzad; Shirazi, Ali; Aminyavari, Mehdi; Rinaldi, Fabio; Taylor, Robert A. (2014-02-03). "Exergetic, economic and environmental analyses and multi-objective optimization of an SOFC-gas turbine hybrid cycle coupled with an MSF desalination system". Desalination. 334 (1): 46–59. Bibcode:2014Desal.334...46N. doi:10.1016/j.desal.2013.11.039. hdl:11311/764704.
- ^ Rafiei, S. M. R.; Amirahmadi, A.; Griva, G. (2009). "Chaos rejection and optimal dynamic response for boost converter using SPEA multi-objective optimization approach". 2009 35th Annual Conference of IEEE Industrial Electronics. pp. 3315–3322. doi:10.1109/IECON.2009.5415056. ISBN 978-1-4244-4648-3. S2CID 2539380.
- ^ Ropponen, A.; Ritala, R.; Pistikopoulos, E. N. (2011). "Optimization issues of the broke management system in papermaking". Computers & Chemical Engineering. 35 (11): 2510. doi:10.1016/j.compchemeng.2010.12.012.
- ^ Pllana, Sabri; Memeti, Suejb; Kolodziej, Joanna (2019). "Customizing Pareto Simulated Annealing for Multi-objective Optimization of Control Cabinet Layout". arXiv:1906.04825 [cs.OH].
- ^ Nguyen, Hoang Anh; van Iperen, Zane; Raghunath, Sreekanth; Abramson, David; Kipouros, Timoleon; Somasekharan, Sandeep (2017). "Multi-objective optimisation in scientific workflow". Procedia Computer Science. 108: 1443–1452. doi:10.1016/j.procs.2017.05.213. hdl:1826/12173.
- ^ Ganesan, T.; Elamvazuthi, I.; Vasant, P. (2015-07-01). "Multiobjective design optimization of a nano-CMOS voltage-controlled oscillator using game theoretic-differential evolution". Applied Soft Computing. 32: 293–299. doi:10.1016/j.asoc.2015.03.016.
- ^ Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P. (2013-01-01). "Hypervolume-Driven Analytical Programming for Solar-Powered Irrigation System Optimization". In Zelinka, Ivan; Chen, Guanrong; Rössler, Otto E.; Snasel, Vaclav; Abraham, Ajith (eds.). Nostradamus 2013: Prediction, Modeling and Analysis of Complex Systems. Advances in Intelligent Systems and Computing. Vol. 210. Springer International Publishing. pp. 147–154. doi:10.1007/978-3-319-00542-3_15. ISBN 978-3-319-00541-6.
- ^ Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P. (2013-01-01). "Multiobjective Optimization of Green Sand Mould System Using Chaotic Differential Evolution". In Gavrilova, Marina L.; Tan, C. J. Kenneth; Abraham, Ajith (eds.). Transactions on Computational Science XXI. Lecture Notes in Computer Science. Vol. 8160. Springer Berlin Heidelberg. pp. 145–163. doi:10.1007/978-3-642-45318-2_6. ISBN 978-3-642-45317-5.
- ^ Surekha, B.; Kaushik, Lalith K.; Panduy, Abhishek K.; Vundavilli, Pandu R.; Parappagoudar, Mahesh B. (2011-05-07). "Multi-objective optimization of green sand mould system using evolutionary algorithms". The International Journal of Advanced Manufacturing Technology. 58 (1–4): 9–17. doi:10.1007/s00170-011-3365-8. ISSN 0268-3768. S2CID 110315544.
- ^ "MultiObjective Optimization in Engine Design Using Genetic Algorithms to Improve Engine Performance | ESTECO". www.esteco.com. Retrieved 2015-12-01.
- ^ Courteille, E.; Mortier, F.; Leotoing, L.; Ragneau, E. (2005-05-16). Multi-Objective Robust Design Optimization of an Engine Mounting System (PDF). SAE 2005 Noise and Vibration Conference and Exhibition, May 2005, Traverse City, United States. doi:10.4271/2005-01-2412. S2CID 20170456.
- ^ Domingo-Perez, Francisco; Lazaro-Galilea, Jose Luis; Wieser, Andreas; Martin-Gorostiza, Ernesto; Salido-Monzu, David; Llana, Alvaro de la (April 2016). "Sensor placement determination for range-difference positioning using evolutionary multi-objective optimization". Expert Systems with Applications. 47: 95–105. doi:10.1016/j.eswa.2015.11.008.
- ^ Bemporad, Alberto; Muñoz de la Peña, David (2009-12-01). "Multiobjective model predictive control". Automatica. 45 (12): 2823–2830. doi:10.1016/j.automatica.2009.09.032.
- ^ Panda, Sidhartha (2009-06-01). "Multi-objective evolutionary algorithm for SSSC-based controller design". Electric Power Systems Research. 79 (6): 937–944. Bibcode:2009EPSR...79..937P. doi:10.1016/j.epsr.2008.12.004.
- ^ Fiandaca, Giovanna; Fraga, Eric S.; Brandani, Stefano (2009). "A multi-objective genetic algorithm for the design of pressure swing adsorption". Engineering Optimization. 41 (9): 833–854. doi:10.1080/03052150903074189. S2CID 120201436. Retrieved 2015-12-01.
- ^ Sendín, José Oscar H.; Alonso, Antonio A.; Banga, Julio R. (2010-06-01). "Efficient and robust multi-objective optimization of food processing: A novel approach with application to thermal sterilization". Journal of Food Engineering. 98 (3): 317–324. doi:10.1016/j.jfoodeng.2010.01.007. hdl:10261/48082.
- ^ Ganesan, T.; Elamvazuthi, I.; Ku Shaari, Ku Zilati; Vasant, P. (2013-03-01). "Swarm intelligence and gravitational search algorithm for multi-objective optimization of synthesis gas production". Applied Energy. 103: 368–374. Bibcode:2013ApEn..103..368G. doi:10.1016/j.apenergy.2012.09.059.
- ^ Ganesan, Timothy; Elamvazuthi, Irraivan; Vasant, Pandian; Shaari, Ku Zilati Ku (2015-03-23). "Multiobjective Optimization of Bioactive Compound Extraction Process via Evolutionary Strategies". In Nguyen, Ngoc Thanh; Trawiński, Bogdan; Kosala, Raymond (eds.). Intelligent Information and Database Systems. Lecture Notes in Computer Science. Vol. 9012. Springer International Publishing. pp. 13–21. doi:10.1007/978-3-319-15705-4_2. ISBN 978-3-319-15704-7.
- ^ Mehdi, Khosrow-Pour (2014-06-30). Contemporary Advancements in Information Technology Development in Dynamic Environments. IGI Global. ISBN 9781466662537.
- ^ Abakarov. A.; Sushkov. Yu.; Mascheroni. R.H. (2012). "Multi-criteria optimization and decision-making approach for improving of food engineering processes" (PDF). International Journal of Food Studies. 2: 1–21. doi:10.7455/ijfs/2.1.2013.a1. S2CID 3708256.
- ^ Abakarov, A.; Sushkov, Y.; Almonacid, S.; Simpson, R. (2009). "Multiobjective Optimisation Approach: Thermal Food Processing". Journal of Food Science. 74 (9): E471 – E487. doi:10.1111/j.1750-3841.2009.01348.x. hdl:10533/134983. PMID 20492109.
- ^ Pearce, Margaret; Mutlu, Bilge; Shah, Julie; Radwin, Robert (2018). "Optimizing Makespan and Ergonomics in Integrating Collaborative Robots Into Manufacturing Processes". IEEE Transactions on Automation Science and Engineering. 15 (4): 1772–1784. Bibcode:2018ITASE..15.1772P. doi:10.1109/tase.2018.2789820. ISSN 1545-5955. S2CID 52927442.
- ^ a b E. Björnson and E. Jorswieck, Optimal Resource Allocation in Coordinated Multi-Cell Systems, Foundations and Trends in Communications and Information Theory, vol. 9, no. 2-3, pp. 113-381, 2013.
- ^ Z.-Q. Luo and S. Zhang, Dynamic spectrum management: Complexity and duality, IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 1, pp. 57–73, 2008.
- ^ Merlin, A.; Back, H. Search for a Minimal-Loss Operating Spanning Tree Configuration in an Urban Power Distribution System. In Proceedings of the 1975 Fifth Power Systems Computer Conference (PSCC), Cambridge, UK, 1–5 September 1975; pp. 1–18.
- ^ Mendoza, J.E.; Lopez, M.E.; Coello, C.A.; Lopez, E.A. Microgenetic multiobjective reconfiguration algorithm considering power losses and reliability indices for medium voltage distribution network. IET Gener. Transm. Distrib. 2009, 3, 825–840.
- ^ Bernardon, D.P.; Garcia, V.J.; Ferreira, A.S.Q.; Canha, L.N. Multicriteria distribution network reconfiguration considering subtransmission analysis. IEEE Trans. Power Deliv. 2010, 25, 2684–2691.
- ^ Amanulla, B.; Chakrabarti, S.; Singh, S.N. Reconfiguration of power distribution systems considering reliability and power loss. IEEE Trans. Power Deliv. 2012, 27, 918–926.
- ^ Tomoiagă, B.; Chindriş, M.; Sumper, A.; Sudria-Andreu, A.; Villafafila-Robles, R. Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II. Energies 2013, 6, 1439-1455.
- ^ Galceran, Enric; Carreras, Marc (2013). "A survey on coverage path planning for robotics". Robotics and Autonomous Systems. 61 (12): 1258–1276. CiteSeerX 10.1.1.716.2556. doi:10.1016/j.robot.2013.09.004. ISSN 0921-8890. S2CID 1177069.
- ^ Ellefsen, K.O.; Lepikson, H.A.; Albiez, J.C. (2019). "Multiobjective coverage path planning: Enabling automated inspection of complex, real-world structures". Applied Soft Computing. 61: 264–282. arXiv:1901.07272. doi:10.1016/j.asoc.2017.07.051. hdl:10852/58883. ISSN 1568-4946.
- ^ Matthias Ehrgott (1 June 2005). Multicriteria Optimization. Birkhäuser. ISBN 978-3-540-21398-7. Retrieved 29 May 2012.
- ^ Carlos A. Coello; Gary B. Lamont; David A. Van Veldhuisen (2007). Evolutionary Algorithms for Solving Multi-Objective Problems. Springer. ISBN 978-0-387-36797-2. Retrieved 1 November 2012.
- ^ a b Jürgen Branke; Kalyanmoy Deb; Kaisa Miettinen; Roman Slowinski (21 November 2008). Multiobjective Optimization: Interactive and Evolutionary Approaches. Springer. ISBN 978-3-540-88907-6. Retrieved 1 November 2012.
- ^ Zeleny, M. (1973), "Compromise Programming", in Cochrane, J.L.; Zeleny, M. (eds.), Multiple Criteria Decision Making, University of South Carolina Press, Columbia, pp. 262–301
- ^ Wierzbicki, A. P. (1982). "A mathematical basis for satisficing decision making". Mathematical Modelling. 3 (5): 391–405. doi:10.1016/0270-0255(82)90038-0.
- ^ Sen, Chandra (1983). "A new approach for multi-objective rural development planning". The Indian Economic Journal. 30 (4): 91–96.
- ^ a b Golovin, Daniel; Zhang, Qiuyi (2020). "Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization". arXiv:2006.04655 [cs.LG].
- ^ a b c d Lin, Xi; Zhang, Xiaoyuan; Yang, Zhiyuan; Liu, Fei; Wang, Zhenkun; Zhang, Qingfu (2024). "Smooth Tchebycheff Scalarization for Multi-Objective Optimization". arXiv:2402.19078 [cs.LG].
- ^ Xu, J.; Tao, Z. (2011). Rough Multiple Objective Decision Making. United Kingdom: CRC Press. p. 67. https://books.google.com/books?id=zwDSBQAAQBAJ&dq=the%20minimax%20multi%20objective%20-game&pg=PA67
- ^ a b Das, I.; Dennis, J. E. (1998). "Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems". SIAM Journal on Optimization. 8 (3): 631. doi:10.1137/S1052623496307510. hdl:1911/101880. S2CID 207081991.
- ^ a b Motta, Renato S.; Afonso, Silvana M. B.; Lyra, Paulo R. M. (8 January 2012). "A modified NBI and NC method for the solution of N-multiobjective optimization problems". Structural and Multidisciplinary Optimization. 46 (2): 239–259. doi:10.1007/s00158-011-0729-5. S2CID 121122414.
- ^ a b Messac, A.; Ismail-Yahaya, A.; Mattson, C.A. (2003). "The normalized normal constraint method for generating the Pareto frontier". Structural and Multidisciplinary Optimization. 25 (2): 86–98. doi:10.1007/s00158-002-0276-1. S2CID 58945431.
- ^ a b Messac, A.; Mattson, C. A. (2004). "Normal constraint method with guarantee of even representation of complete Pareto frontier". AIAA Journal. 42 (10): 2101–2111. Bibcode:2004AIAAJ..42.2101M. doi:10.2514/1.8977.
- ^ a b Mueller-Gritschneder, Daniel; Graeb, Helmut; Schlichtmann, Ulf (2009). "A Successive Approach to Compute the Bounded Pareto Front of Practical Multiobjective Optimization Problems". SIAM Journal on Optimization. 20 (2): 915–934. doi:10.1137/080729013.
- ^ Erfani, Tohid; Utyuzhnikov, Sergei V. (2010). "Directed search domain: a method for even generation of the Pareto frontier in multiobjective optimization". Engineering Optimization. 43 (5): 467–484. doi:10.1080/0305215X.2010.497185. ISSN 0305-215X.
- ^ a b Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. (2002). "A fast and elitist multiobjective genetic algorithm: NSGA-II". IEEE Transactions on Evolutionary Computation. 6 (2): 182. Bibcode:2002ITEC....6..182D. CiteSeerX 10.1.1.17.7771. doi:10.1109/4235.996017. S2CID 9914171.
- ^ Deb, Kalyanmoy; Jain, Himanshu (2014). "An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints". IEEE Transactions on Evolutionary Computation. 18 (4): 577–601. Bibcode:2014ITEC...18..577D. doi:10.1109/TEVC.2013.2281535. ISSN 1089-778X. S2CID 206682597.
- ^ Jain, Himanshu; Deb, Kalyanmoy (2014). "An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach". IEEE Transactions on Evolutionary Computation. 18 (4): 602–622. Bibcode:2014ITEC...18..602J. doi:10.1109/TEVC.2013.2281534. ISSN 1089-778X. S2CID 16426862.
- ^ Zitzler, E.; Laumanns, M.; Thiele, L. (2001). SPEA2: Improving the Performance of the Strength Pareto Evolutionary Algorithm. Technical Report 103, Computer Engineering and Communication Networks Lab (TIK), Swiss Federal Institute of Technology (ETH) Zurich.
- ^ Suman, B.; Kumar, P. (2006). "A survey of simulated annealing as a tool for single and multiobjective optimization". Journal of the Operational Research Society. 57 (10): 1143–1160. doi:10.1057/palgrave.jors.2602068. S2CID 18916703.
- ^ a b Vargas, Danilo Vasconcellos; Murata, Junichi; Takano, Hirotaka; Delbem, Alexandre Cláudio Botazzo (2015). "General Subpopulation Framework and Taming the Conflict Inside Populations". Evolutionary Computation. 23 (1): 1–36. arXiv:1901.00266. doi:10.1162/EVCO_a_00118. PMID 24437665.
- ^ Lehman, Joel; Stanley, Kenneth O. (2011). "Abandoning Objectives: Evolution Through the Search for Novelty Alone". Evolutionary Computation. 19 (2): 189–223. doi:10.1162/EVCO_a_00025. PMID 20868264.
- ^ a b c Navon, Aviv; Shamsian, Aviv; Chechik, Gal; Fetaya, Ethan (2021-04-26). "Learning the Pareto Front with Hypernetworks". Proceedings of International Conference on Learning Representations (ICLR). arXiv:2010.04104.
- ^ Xingchao, Liu; Xin, Tong; Qiang, Liu (2021-12-06). "Profiling Pareto Front With Multi-Objective Stein Variational Gradient Descent". Advances in Neural Information Processing Systems. 34.
- ^ Bringmann, Karl; Friedrich, Tobias; Neumann, Frank; Wagner, Markus (2011). "Approximation-Guided Evolutionary Multi-Objective Optimization". IJCAI. doi:10.5591/978-1-57735-516-8/IJCAI11-204.
- ^ Erfani, Tohid; Utyuzhnikov, Sergei V. (2011). "Directed search domain: a method for even generation of the Pareto frontier in multiobjective optimization". Engineering Optimization. 43 (5): 467–484. doi:10.1080/0305215X.2010.499190.
- ^ Mavrotas, George (2009). "Effective implementation of the ε-constraint method in Multi-Objective Mathematical Programming problems". Applied Mathematics and Computation. 213 (2): 455–465. doi:10.1016/j.amc.2009.03.037. ISSN 0096-3003.
- ^ Carvalho, Iago A.; Ribeiro, Marco A. (2020). "An exact approach for the Minimum-Cost Bounded-Error Calibration Tree problem". Annals of Operations Research. 287 (1): 109–126. doi:10.1007/s10479-019-03443-4. ISSN 0254-5330. S2CID 209959109.
- ^ Zhang, Qingfu; Li, Hui (2007). "MOEA/D: A multiobjective evolutionary algorithm based on decomposition". IEEE Transactions on Evolutionary Computation. 11 (6): 712–731. doi:10.1109/TEVC.2007.892759.
- ^ Mavrotas, G.; Diakoulaki, D. (2005). "Multi-criteria branch and bound: A vector maximization algorithm for Mixed 0-1 Multiple Objective Linear Programming". Applied Mathematics and Computation. 171 (1): 53–71. doi:10.1016/j.amc.2005.01.038. ISSN 0096-3003.
- ^ Vincent, Thomas; Seipp, Florian; Ruzika, Stefan; Przybylski, Anthony; Gandibleux, Xavier (2013). "Multiple objective branch and bound for mixed 0-1 linear programming: Corrections and improvements for the biobjective case". Computers & Operations Research. 40 (1): 498–509. doi:10.1016/j.cor.2012.08.003. ISSN 0305-0548.
- ^ Przybylski, Anthony; Gandibleux, Xavier (2017). "Multi-objective branch and bound". European Journal of Operational Research. 260 (3): 856–872. doi:10.1016/j.ejor.2017.01.032. ISSN 0377-2217.
- ^ Deb, Kalyanmoy; Jain, Himanshu (2014). "An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints". IEEE Transactions on Evolutionary Computation. 18 (4): 577–601. doi:10.1109/TEVC.2013.2281535.
- ^ Craft, D.; Halabi, T.; Shih, H.; Bortfeld, T. (2006). "Approximating convex Pareto surfaces in multiobjective radiotherapy planning". Medical Physics. 33 (9): 3399–3407. Bibcode:2006MedPh..33.3399C. doi:10.1118/1.2335486. PMID 17022236.
- ^ Battiti, Roberto; Mauro Brunato; Franco Mascia (2008). Reactive Search and Intelligent Optimization. Springer Verlag. ISBN 978-0-387-09623-0.
- ^ Battiti, Roberto; Mauro Brunato (2011). Reactive Business Intelligence. From Data to Models to Insight. Trento, Italy: Reactive Search Srl. ISBN 978-88-905795-0-9.
- ^ Beume, N.; Naujoks, B.; Emmerich, M. (2007). "SMS-EMOA: Multiobjective selection based on dominated hypervolume". European Journal of Operational Research. 181 (3): 1653. doi:10.1016/j.ejor.2006.08.008.
- ^ Zitzler, Eckart; Laumanns, Marco; Thiele, Lothar (2001). SPEA2: Improving the Strength Pareto Evolutionary Algorithm (Report). Computer Engineering and Networks Laboratory (TIK), ETH Zurich.
- ^ a b c d Miettinen, K.; Ruiz, F.; Wierzbicki, A. P. (2008). "Introduction to Multiobjective Optimization: Interactive Approaches". Multiobjective Optimization. Lecture Notes in Computer Science. Vol. 5252. pp. 27–57. CiteSeerX 10.1.1.475.465. doi:10.1007/978-3-540-88908-3_2. ISBN 978-3-540-88907-6.
- ^ Luque, M.; Ruiz, F.; Miettinen, K. (2008). "Global formulation for interactive multiobjective optimization". OR Spectrum. 33: 27–48. doi:10.1007/s00291-008-0154-3. S2CID 15050545.
- ^ Ruiz, F.; Luque, M.; Miettinen, K. (2011). "Improving the computational efficiency in a global formulation (GLIDE) for interactive multiobjective optimization". Annals of Operations Research. 197: 47–70. doi:10.1007/s10479-010-0831-x. S2CID 14947919.
- ^ Zionts, S.; Wallenius, J. (1976). "An Interactive Programming Method for Solving the Multiple Criteria Problem". Management Science. 22 (6): 652. doi:10.1287/mnsc.22.6.652.
- ^ Wierzbicki, A. P. (1986). "On the completeness and constructiveness of parametric characterizations to vector optimization problems". OR Spektrum. 8 (2): 73–78. doi:10.1007/BF01719738. S2CID 121771992.
- ^ Andrzej P. Wierzbicki; Marek Makowski; Jaap Wessels (31 May 2000). Model-Based Decision Support Methodology with Environmental Applications. Springer. ISBN 978-0-7923-6327-9. Retrieved 17 September 2012.
- ^ Nakayama, H.; Sawaragi, Y. (1984), "Satisficing Trade-Off Method for Multiobjective Programming", in Grauer, M.; Wierzbicki, A. P. (eds.), Interactive Decision Analysis, Springer-Verlag Berlin, Heidelberg, pp. 113–122
- ^ Miettinen, K.; Mäkelä, M. M. (1995). "Interactive bundle-based method for nondifferentiable multiobjective optimization: NIMBUS". Optimization. 34 (3): 231. doi:10.1080/02331939508844109.
- ^ Miettinen, K.; Mäkelä, M. M. (2006). "Synchronous approach in interactive multiobjective optimization". European Journal of Operational Research. 170 (3): 909. doi:10.1016/j.ejor.2004.07.052.
- ^ Sindhya, K.; Ruiz, A. B.; Miettinen, K. (2011). "A Preference Based Interactive Evolutionary Algorithm for Multi-objective Optimization: PIE". Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science. Vol. 6576. pp. 212–225. doi:10.1007/978-3-642-19893-9_15. ISBN 978-3-642-19892-2.
- ^ Sindhya, K.; Deb, K.; Miettinen, K. (2008). "A Local Search Based Evolutionary Multi-objective Optimization Approach for Fast and Accurate Convergence". Parallel Problem Solving from Nature – PPSN X. Lecture Notes in Computer Science. Vol. 5199. pp. 815–824. doi:10.1007/978-3-540-87700-4_81. ISBN 978-3-540-87699-1.
- ^ Benson, Harold P.; Sayin, Serpil (1997). "Towards finding global representations of the efficient set in multiple objective mathematical programming" (PDF). Naval Research Logistics. 44 (1): 47–67. doi:10.1002/(SICI)1520-6750(199702)44:1<47::AID-NAV3>3.0.CO;2-M. hdl:11693/25666. ISSN 0894-069X.
- ^ Pryke, Andy; Sanaz Mostaghim; Alireza Nazemi (2007). "Heatmap Visualization of Population Based Multi Objective Algorithms". Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science. Vol. 4403. pp. 361–375. doi:10.1007/978-3-540-70928-2_29. ISBN 978-3-540-70927-5. S2CID 2502459.
- ^ Gass, Saul; Saaty, Thomas (1955). "The computational algorithm for the parametric objective function". Naval Research Logistics Quarterly. 2 (1–2): 39–45. doi:10.1002/nav.3800020106. ISSN 0028-1441.
- ^ Jared L. Cohon (13 January 2004). Multiobjective Programming and Planning. Courier Dover Publications. ISBN 978-0-486-43263-2. Retrieved 29 May 2012.
- ^ Ruzika, S.; Wiecek, M. M. (2005). "Approximation Methods in Multiobjective Programming". Journal of Optimization Theory and Applications. 126 (3): 473–501. doi:10.1007/s10957-005-5494-4. ISSN 0022-3239. S2CID 122221156.
- ^ Meisel, W. L. (1973), J. L. Cochrane; M. Zeleny (eds.), "Tradeoff decision in multiple criteria decision making", Multiple Criteria Decision Making: 461–476
- ^ A. V. Lotov; V. A. Bushenkov; G. K. Kamenev (29 February 2004). Interactive Decision Maps: Approximation and Visualization of Pareto Frontier. Springer. ISBN 978-1-4020-7631-2. Retrieved 29 May 2012.
- ^ Wesner, N. (2017), "Multiobjective Optimization via Visualization", Economics Bulletin, 37 (2): 1226–1233
External links
- Emmerich, M.T.M., Deutz, A.H. A tutorial on multiobjective optimization: fundamentals and evolutionary methods. Nat Comput 17, 585–609 (2018). https://doi.org/10.1007/s11047-018-9685-y
- International Society on Multiple Criteria Decision Making
- Evolutionary Multiobjective Optimization, The Wolfram Demonstrations Project
- A Tutorial on Multiobjective Optimization and Genetic Algorithms, Scilab Professional Partner
- Tomoiagă, Bogdan; Chindriş, Mircea; Sumper, Andreas; Sudria-Andreu, Antoni; Villafafila-Robles, Roberto. 2013. "Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II." Energies 6, no. 3: 1439-1455.
- List of References on Evolutionary Multiobjective Optimization
Fundamentals
Definition and Problem Statement
Multi-objective optimization is a branch of mathematical optimization concerned with simultaneously optimizing multiple conflicting objective functions, rather than seeking a single global optimum as in traditional optimization. This approach recognizes that real-world decision-making often involves trade-offs between competing goals, such as minimizing cost while maximizing performance or efficiency. The core challenge arises from the inherent conflicts among objectives, leading to a set of compromise solutions instead of a unique best choice.[4] The standard mathematical formulation of a multi-objective optimization problem seeks to determine a decision variable vector from the feasible set that optimizes the vector-valued objective function , where : subject to inequality constraints (with ) and equality constraints (with ). The feasible set is defined as , assuming is nonempty and compact to ensure boundedness. Objectives may involve minimization for some functions and maximization for others, which can be unified by negating maximization terms (e.g., ). This vector optimization framework distinguishes MOO from single-objective problems, where objectives are aggregated into a scalar via weighting or utility functions, potentially obscuring trade-offs and yielding a single solution that may not reflect decision-maker preferences. In contrast, MOO preserves conflicts, requiring the identification of a set of nondominated solutions based on Pareto optimality criteria.[5] The origins of multi-objective optimization trace back to welfare economics, with early conceptual foundations laid by Francis Y. Edgeworth in 1881 through his work on multi-utility optima in Mathematical Psychics, and further developed by Vilfredo Pareto in 1906, who formalized the idea of no further improvement without harm to others in Manuale di Economia Politica. The field gained momentum in the 1950s through economists like Tjalling C. 
Koopmans, who in 1951 introduced efficient (nondominated) vectors in production and allocation problems in Activity Analysis of Production and Allocation, and Kenneth J. Arrow, whose 1951 book Social Choice and Individual Values and 1953 collaboration on convex sets advanced partial orderings for multiple criteria. Research intensified in the late 1960s, marking the formal emergence of multi-objective optimization as a distinct discipline in operations research and decision theory.[6] Common assumptions in multi-objective optimization include continuity and differentiability of the objective functions to enable analytical methods and gradient-based algorithms, with convexity often imposed on objectives and constraints to guarantee the existence and structure of optimal solution sets, such as convexity of the feasible region and Pareto front. However, general formulations accommodate non-convex, nondifferentiable, or even discontinuous cases, particularly in engineering and evolutionary computation applications where real-world complexities preclude strict convexity.[7]
Pareto Optimality and Dominance
In multi-objective optimization, the notion of optimality differs fundamentally from single-objective problems due to conflicting objectives. A solution is Pareto optimal if it represents an efficient trade-off, meaning no improvement in one objective is possible without degrading at least one other. This concept, originating from economic theory, was formalized by Vilfredo Pareto in his 1906 work Manuale di Economia Politica, where he described an optimal resource allocation as one where no individual can be made better off without making someone else worse off.[4] Earlier foundations were laid by Francis Y. Edgeworth in 1881, who explored multi-utility optima in economic decision-making.[4] In the context of optimization over a feasible set X, Pareto optimality identifies solutions that cannot be dominated by any other feasible point. Central to this framework is the Pareto dominance relation, which provides a partial order for comparing solutions. For a minimization problem with objective functions f_1, …, f_k, a solution x¹ dominates x² (denoted x¹ ≺ x²) if f_i(x¹) ≤ f_i(x²) for all i = 1, …, k and f_j(x¹) < f_j(x²) for at least one index j. This definition ensures that x¹ is at least as good as x² in all objectives and strictly better in at least one.[1] A solution x* ∈ X is Pareto optimal, or non-dominated, if no other solution dominates it, i.e., there exists no x ∈ X such that x ≺ x*. The set of all such non-dominated solutions forms the Pareto optimal set, highlighting the trade-offs inherent in multi-objective problems.[1] A related but weaker condition is weak Pareto optimality: a solution x* is weakly non-dominated if no other solution strictly dominates it in all objectives simultaneously, i.e., there exists no x ∈ X such that f_i(x) < f_i(x*) for all i.[1] Unlike strong Pareto optimality, this allows for solutions where equal performance across all objectives is possible without strict improvement in one. In practice, weak Pareto optima may include points that are not strongly efficient, particularly in problems with non-convex feasible sets.
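The dominance relations above translate directly into code. Below is a minimal Python sketch (minimization assumed for all objectives; the helper names are illustrative, not taken from any particular library):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def strictly_dominates(a, b):
    """True if a is strictly better than b in all objectives
    (the relation used to define weak Pareto optimality)."""
    return all(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Non-dominated subset of a finite list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (4, 1), (3, 3), (5, 5)]
print(pareto_set(pts))  # → [(1, 5), (2, 3), (4, 1)]
```

Here (3, 3) is dominated by (2, 3), and (5, 5) by every other point, so only the three trade-off vectors survive; the quadratic-cost filter is fine for illustration, though practical algorithms use faster non-dominated sorting.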
The absence of a global optimum in multi-objective optimization stems from objective conflicts: unlike single-objective cases where a unique minimizer may exist, trade-offs prevent one solution from being best in all criteria. To see this, suppose two objectives f_1 and f_2 conflict such that minimizing f_1 increases f_2; no single point minimizes both simultaneously, leading instead to a set of Pareto optimal solutions where dominance identifies the boundary of attainable improvements. This partial order ensures the Pareto set captures all efficient compromises, as any dominated point can be improved upon without loss elsewhere.[1]
Comparison to Single-Objective Optimization
In single-objective optimization, the goal is to minimize or maximize a scalar objective function f(x) subject to constraints, often yielding a unique global or local optimum that can be found using methods such as gradient descent, where iterative updates follow the negative gradient direction to converge to a point solution. This approach assumes a single criterion, allowing for straightforward convergence guarantees in convex cases, where local optima coincide with global ones. Multi-objective optimization, by contrast, involves simultaneously optimizing multiple conflicting objective functions f_1(x), …, f_k(x), which prevents the existence of a single scalar optimum and instead produces a set of trade-off solutions defined by Pareto dominance, where no solution dominates another in all objectives.[8] This shift from point-based to set-based optimization introduces unique challenges, including the need for decision-making to select from the Pareto-optimal set, as conflicting goals like minimizing cost while maximizing performance cannot all be achieved simultaneously.[8] Naive aggregation techniques, such as weighted sums that combine objectives into the scalar w_1 f_1(x) + … + w_k f_k(x) with weights summing to 1, can bias results toward supported solutions and fail to capture non-convex portions of the Pareto front, potentially missing diverse trade-offs even when weights are varied.[9] For instance, in engineering design, single-objective profit maximization might prioritize revenue alone, leading to a unique solution, whereas a multi-objective formulation balancing cost, quality, and delivery time generates a Pareto set of compromises, requiring post-optimization selection to align with stakeholder preferences.[8] This set-based paradigm emphasizes approximating the entire Pareto front rather than isolated points, motivating population-based search strategies over traditional scalar methods.[8]
Applications
Engineering and Design
Multi-objective optimization plays a pivotal role in engineering design, where conflicting objectives such as performance, cost, and sustainability must be balanced to achieve robust solutions. In structural design, particularly for aerospace components, engineers often seek to minimize weight while maximizing strength and durability, addressing the trade-offs inherent in lightweight materials under extreme loads. For instance, NASA's multidisciplinary design optimization efforts for aircraft wings and fuselages have utilized multi-objective approaches to integrate aerodynamic efficiency with structural integrity, resulting in designs that reduce fuel consumption compared to single-objective baselines. These optimizations typically generate a Pareto front of non-dominated solutions, allowing designers to select trade-offs based on mission-specific requirements. In process engineering, multi-objective optimization is essential for chemical plants, where objectives include maximizing product yield, minimizing energy consumption, and reducing environmental emissions. A seminal application involves optimizing reactor configurations and operating conditions in petrochemical processes, where genetic algorithms have been employed to simultaneously improve yield and cut energy use, while keeping CO2 emissions below regulatory thresholds. Such methods enable the identification of Pareto-optimal operating points that balance economic viability with ecological impact, as demonstrated in studies on distillation column design. A prominent case study in automotive design illustrates the long-term adoption of these techniques since the 1990s, focusing on balancing fuel efficiency, crash safety, and manufacturing cost. Early implementations, such as those by Ford and General Motors, integrated multi-objective evolutionary algorithms to optimize vehicle body structures, achieving improvements in fuel economy alongside enhanced safety ratings under the same cost constraints. 
This approach has evolved with the rise of electric vehicles, where battery placement and thermal management are optimized concurrently. The integration of finite element analysis (FEA) with multi-objective solvers has become standard practice for evaluating design performance in engineering workflows, providing quantitative assessments of stress distribution and material deformation across Pareto solutions. In structural applications, FEA-driven optimizations have reduced computational times through surrogate modeling, enabling rapid iteration on complex geometries like turbine blades. This coupling ensures that designs not only meet multiple criteria but also withstand real-world validations.
Economics and Finance
In welfare economics, multi-objective optimization addresses the trade-off between equity and efficiency in resource allocation, extending classical frameworks to handle multiple criteria such as distributional fairness and aggregate output maximization. For instance, extensions of the Arrow-Debreu general equilibrium model incorporate multi-objective formulations to derive necessary optimality conditions for the Second Welfare Theorem in stochastic economies with production and stock markets, allowing for constrained Pareto optima that balance competitive equilibria with broader social objectives.[10] This approach highlights how multi-objective methods can refine welfare theorems by accommodating stock market dynamics, though they may fail in multi-period settings due to intertemporal inconsistencies.[10] In finance, multi-objective portfolio optimization generalizes the seminal Markowitz mean-variance framework, which originally focused on maximizing expected return while minimizing variance as a proxy for risk, by incorporating additional objectives like liquidity, skewness, or kurtosis to better capture investor preferences under uncertainty. These models use Pareto-efficient solutions to generate a set of non-dominated portfolios, enabling investors to select based on a risk aversion parameter that trades off multiple dimensions simultaneously, often yielding superior risk-adjusted performance compared to single-objective variants.[11] Algorithms such as the Non-dominated Sorting Genetic Algorithm III (NSGA-III) have been applied to optimize portfolios across global assets, demonstrating improved Sharpe ratios and reduced tail risks.[11] A key application in modern finance is sustainable investing, where multi-objective optimization balances financial returns with environmental, social, and governance (ESG) factors, a practice that has surged in prominence since the 2010s amid growing regulatory and investor demand for responsible capital allocation. 
Minimax-based models, for example, maximize risk-adjusted returns while minimizing ESG-related controversies across indices like the EURO STOXX 50 and DJIA, outperforming traditional benchmarks by integrating systematic risk and sustainability metrics into the objective space.[12] Multi-objective optimization also integrates with game theory in economic modeling, particularly through Nash equilibria in multi-objective normal-form games, where agents pursue vector-valued payoffs representing diverse criteria like profit and social welfare. Under scalarized expected returns with quasiconcave utility functions, pure strategy Nash equilibria exist and can be computed via trade-off transformations, providing a foundation for analyzing cooperative and competitive behaviors in economic interactions.[13]
Control Systems and Resource Management
In control systems, multi-objective optimization addresses the inherent trade-offs in dynamic environments where multiple performance criteria must be balanced simultaneously, such as accuracy versus energy efficiency in real-time operations. This approach is particularly vital in optimal control problems, where controllers like proportional-integral-derivative (PID) systems are tuned to minimize tracking errors while constraining control effort in robotic applications. For instance, in robotic manipulators, multi-objective genetic algorithms have been employed to optimize PID parameters, achieving reduced mean squared error in trajectory tracking and lowering torque variance compared to single-objective tuning. Similarly, for collaborative robots (cobots), non-dominated sorting genetic algorithm II (NSGA-II) tunes proportional-derivative (PD) controllers across objectives of end-effector precision and torque minimization, with trajectory-specific tuning improving hypervolume indicators of Pareto fronts over generic controllers.[14] Resource management in time-varying systems leverages multi-objective optimization to allocate limited assets like bandwidth or power, optimizing for throughput, fairness, and reliability in wireless networks. In unmanned aerial vehicle (UAV)-enabled communications, joint bandwidth and power allocation problems are solved using iterative Lagrange dual methods, balancing sum-rate maximization with Jain's fairness index; simulations show that adjusting the fairness weight parameter increases equity among users by 30% at a 10-15% throughput cost, while ensuring quality-of-service constraints for reliability.[15] In smart grids, post-2020 advancements integrate artificial intelligence with multi-objective models to optimize energy dispatch incorporating renewables and battery storage, minimizing operational costs, emissions, and loss-of-load expectation (LOLE). 
One such strategy achieves significant reductions in costs, emissions, and LOLE through hybrid metaheuristic algorithms that handle stochastic renewable inputs for enhanced stability.[16] Wireless sensor networks exemplify resource management challenges by applying multi-objective optimization to deployment and clustering, aiming to minimize energy consumption while maximizing area coverage and network lifetime. Surveys highlight evolutionary algorithms like particle swarm optimization for node placement, extending lifetime via balanced energy distribution and achieving high coverage in monitored regions.[17] For example, NSGA-II-based models in real-time environments optimize sensor activation schedules, yielding energy savings and lifetime prolongation without coverage gaps below high thresholds. These solutions evaluate policies using Pareto dominance to identify non-dominated sets of configurations that trade off battery drain against sensing reliability.[18]
Solution Concepts
Pareto Front and Trade-offs
In multi-objective optimization, the Pareto front represents the set of all Pareto optimal solutions mapped into the objective space, forming the boundary of achievable trade-offs between conflicting objectives. This front is typically a hypersurface in higher-dimensional objective spaces, illustrating the non-dominated outcomes where no further improvement in one objective is possible without degrading at least one other. The concept builds on the dominance relation, where a solution dominates another if it is better in at least one objective and no worse in all others.[19] In the bi-objective case, the Pareto front manifests as a trade-off curve, a continuous or discrete boundary that demonstrates how gains in one objective, such as maximizing profit, inevitably lead to losses in another, like minimizing risk. For instance, in portfolio optimization, the curve might plot expected return against variance, showing that higher returns correlate with increased volatility, with each point on the curve representing an efficient balance. In multi-objective scenarios with more than two objectives, the front extends to a surface or higher-dimensional hypersurface, complicating visualization but preserving the principle of inherent compromises among objectives.[19] To facilitate comparison across objectives with differing scales and units, normalization techniques are essential for accurately representing the Pareto front. Common methods include min-max normalization, which scales each objective to the interval [0, 1] based on its ideal (minimum or maximum values) and nadir (worst feasible values) points. 
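Assuming a finite sample of non-dominated objective vectors is available, the ideal/nadir bounding and min-max normalization just described can be sketched in a few lines of Python (helper names are illustrative):

```python
def ideal_and_nadir(front):
    """Componentwise best (ideal) and worst (nadir) values over a sampled
    Pareto front (minimization assumed), giving the bounding box of the front."""
    ideal = tuple(min(col) for col in zip(*front))
    nadir = tuple(max(col) for col in zip(*front))
    return ideal, nadir

def normalize(point, ideal, nadir):
    """Min-max scale each objective to [0, 1] using the ideal/nadir bounds."""
    return tuple((f - lo) / (hi - lo) for f, lo, hi in zip(point, ideal, nadir))

front = [(0.0, 10.0), (2.0, 6.0), (5.0, 2.0), (8.0, 0.0)]
ideal, nadir = ideal_and_nadir(front)
print(ideal, nadir)                         # → (0.0, 0.0) (8.0, 10.0)
print(normalize((2.0, 6.0), ideal, nadir))  # → (0.25, 0.6)
```

Note that estimating these bounds from a sample only approximates the true ideal and nadir points; as discussed below for the nadir point, exact values can be expensive to obtain.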
These approaches ensure equitable treatment of objectives during analysis, preventing dominance by those with larger numerical ranges.[20] Approximating the Pareto front is computationally challenging, as the problem is NP-hard in general, particularly for combinatorial multi-objective problems where the number of non-dominated solutions can grow exponentially with problem size. This complexity underscores the need for heuristic and evolutionary algorithms to generate practical approximations of the front.[21]
Ideal and Nadir Points
In multi-objective optimization, the ideal point, also referred to as the utopia point, represents the vector of the best possible values for each objective function, obtained by solving each single-objective minimization problem independently. For a problem with k objectives f_1, …, f_k, the ideal point z* is defined componentwise by z*_i = min_{x ∈ X} f_i(x), where X is the feasible decision space. This point provides a lower bound for the objective values but is typically unattainable as a single feasible solution due to conflicting objectives.[22] The nadir point, known as the anti-utopia point, complements the ideal point by capturing the worst feasible values for each objective among the set of Pareto-optimal solutions. It is defined componentwise by z^nad_i = max_{x ∈ P} f_i(x), where P denotes the set of Pareto optimal solutions. Together, the ideal and nadir points delineate the bounding box of the objective space containing all Pareto-optimal outcomes, facilitating normalization by scaling objectives to a unit range, such as (f_i(x) − z*_i) / (z^nad_i − z*_i).[22][23] These points play a key role in scalarization techniques, where metrics like Euclidean distance from the ideal point or Tchebycheff distance incorporating both bounds approximate preferred Pareto solutions by transforming the multi-objective problem into a single-objective one. For instance, in reference point methods, solutions minimizing distance to a user-specified point relative to the ideal and nadir emphasize trade-offs aligned with decision-maker preferences. However, estimating the nadir point is computationally intensive, particularly in high-dimensional problems with more than two or three objectives, as it demands exhaustive exploration of the Pareto front, which is generally NP-hard and prone to approximation errors in evolutionary algorithms.[23][24]
Efficiency and Weak Pareto Optimality
In multi-objective optimization, an efficient solution is synonymous with a strongly Pareto optimal solution: a feasible point x such that there is no other feasible point x' with f_i(x') ≤ f_i(x) for all i and f_j(x') < f_j(x) for at least one objective j. A weakly efficient solution, by contrast, is a feasible point x such that no feasible x' exists with f_i(x') < f_i(x) for all objectives i simultaneously; this allows flat portions in the Pareto front, where one objective can be improved while the others remain equal. The set of weakly efficient solutions contains the set of efficient solutions, with equality holding under strict quasi-convexity of the objective functions. In multi-objective linear programming, weakly optimal solutions are those optimal for scalarized problems with non-negative weights w_i ≥ 0 (not all zero), while efficient solutions require strictly positive weights w_i > 0; in non-degenerate cases, weakly optimal points coincide with supported efficient vertices. In strictly convex multi-objective problems, the sets of weak and strong Pareto optimal solutions coincide.[25] This distinction influences trade-off analysis, as flat regions in weak Pareto fronts may obscure precise dominance relations between objectives.
Optimization Methods
No-Preference Methods
No-preference methods in multi-objective optimization seek to identify a single compromise solution without incorporating any decision maker preferences, focusing instead on achieving a balanced outcome across all objectives through neutral aggregation techniques. These approaches are particularly useful when no prior information on trade-off priorities is available, generating a solution that aims for uniform coverage or central positioning on the Pareto front. Developed primarily in the 1970s, these methods convert the vector-valued optimization into a scalar problem by minimizing deviations from an unattainable ideal state.[26] The global criterion method, introduced by Hwang and Masud, exemplifies this category by minimizing the L_p norm distance between the objective vector and the ideal point z*, where z* represents the vector of individual objective optima. This is formulated as min_{x ∈ X} ( Σ_{i=1}^{k} |f_i(x) − z*_i|^p )^{1/p} for 1 ≤ p ≤ ∞, often using p = 1, 2, or ∞ to emphasize different aspects of deviation. The resulting solution provides a Pareto optimal point that is globally closest to the ideal in the chosen metric, assuming equal weighting across objectives.[27] Compromise programming, proposed by Zeleny, builds on similar distance minimization but incorporates normalization using both the ideal and nadir points to ensure equitable treatment of objectives with differing scales, formulated as the weighted L_p distance min_{x ∈ X} ( Σ_{i=1}^{k} w_i^p |(f_i(x) − z*_i) / (z^nad_i − z*_i)|^p )^{1/p}, where z^nad is the nadir point (worst feasible values) and weights w_i are typically set equally in no-preference contexts to promote uniform spread.
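A minimal sketch of both distance-based scalarizations over a finite candidate set follows (p = 2 and equal weights, as in the no-preference setting; function names and the toy data are illustrative):

```python
def global_criterion(point, ideal, p=2):
    """L_p distance from an objective vector to the ideal point z* (minimization)."""
    return sum(abs(f - z) ** p for f, z in zip(point, ideal)) ** (1 / p)

def compromise_distance(point, ideal, nadir, weights=None, p=2):
    """Weighted L_p distance on objectives normalized by the ideal and nadir points,
    as in compromise programming; equal weights model the no-preference case."""
    k = len(point)
    w = weights or [1.0 / k] * k
    terms = (wi * abs((f - zi) / (ni - zi))
             for wi, f, zi, ni in zip(w, point, ideal, nadir))
    return sum(t ** p for t in terms) ** (1 / p)

candidates = [(1.0, 9.0), (3.0, 4.0), (6.0, 2.0), (9.0, 1.0)]
ideal, nadir = (1.0, 1.0), (9.0, 9.0)
best = min(candidates, key=lambda c: compromise_distance(c, ideal, nadir))
print(best)  # → (3.0, 4.0), the candidate closest to the ideal in this metric
```

Selecting the minimizer over a continuous feasible set would require an actual single-objective solver; the enumeration above only illustrates the metric.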
This method seeks a solution that compromises proportionally across objectives, yielding a balanced Pareto point.[28] Despite their simplicity, no-preference methods carry limitations, such as the assumption of equal objective importance, which overlooks potential asymmetries in real-world applications, and reduced effectiveness on non-convex Pareto fronts, where the metric-based solution may cluster toward convex regions and fail to capture diverse trade-offs. Normalization is essential to mitigate scaling sensitivities, but the methods still produce only a single solution, limiting exploration of the full front.[9] In practice, these techniques are applied to benchmark bi-objective problems like the ZDT test suite, where the global criterion method (with p = 2) identifies a compromise solution approximating the knee of the front for convex instances such as ZDT1, but shows bias toward the ideal point in non-convex cases like ZDT3.
A Priori Methods
A priori methods in multi-objective optimization involve the decision maker articulating preferences prior to the optimization process, thereby transforming the vector-valued problem into a scalar one or a sequence of scalar problems to yield a single preferred solution.[29] These approaches aim to incorporate user-specific trade-offs upfront, reducing the computational burden of exploring the entire Pareto front by focusing the search on promising regions of the objective space.[29] Unlike methods that generate multiple solutions for post-optimization selection, a priori techniques rely on the accuracy of the initial preference specification to ensure the resulting solution is efficient with respect to Pareto dominance. One prominent a priori method is the utility function approach, where the decision maker defines a scalar utility function U that aggregates the multiple objectives into a single criterion to maximize. Common forms include additive utilities, such as U(f) = Σ_i w_i u_i(f_i), where w_i are weights reflecting relative importance and u_i are individual utility functions often assumed to be monotonic, or multiplicative forms like U(f) = Π_i u_i(f_i)^{w_i} to capture interactions between objectives. The optimization then proceeds as a standard single-objective problem, max_{x ∈ X} U(f(x)), subject to the original constraints, assuming the utility function is known or elicitable from the decision maker. This method draws from decision theory and is particularly suited to problems where the decision maker has a clear, concave utility representing risk-averse preferences. Lexicographic ordering represents another key a priori technique, treating objectives as hierarchically prioritized rather than equally weighted, akin to dictionary ordering.
The decision maker ranks objectives from most to least important, say f_1 first, then f_2, up to f_k, and solves sequentially: first optimize f_1, fix its optimal value f_1*, then optimize f_2 subject to f_1(x) ≤ f_1* (for minimization), and continue downward.[30] This yields a solution that is optimal for the highest-priority objective without degradation, while improving lower ones as much as possible given the hierarchy. The approach assumes strict ordinal preferences and does not require cardinal weights, making it useful when objectives have natural priorities, though it may overlook subtle trade-offs among lower-ranked goals.[29] Goal programming, originally formulated for linear problems, minimizes deviations from predefined aspiration levels or goals for each objective, providing a flexible a priori framework.[30] The decision maker sets a target value t_i for each f_i, and the problem becomes min Σ_i |f_i(x) − t_i|, or, more generally, uses positive and negative deviation variables d_i⁺ and d_i⁻ such that f_i(x) + d_i⁻ − d_i⁺ = t_i and d_i⁺, d_i⁻ ≥ 0, minimizing a weighted sum of the unwanted deviations (e.g., under-achievement for maximization objectives).[30] Weights and priorities can be incorporated to reflect preferences, allowing prioritization similar to lexicographic methods but with quantitative deviation measures.[30] Introduced by Charnes and Cooper in the context of linear programming applications, it excels in scenarios with realistic targets derived from stakeholder requirements.[30] These methods offer advantages such as computational efficiency by avoiding the generation of numerous Pareto solutions and direct incorporation of decision maker knowledge to focus on a single outcome.[31] However, they are sensitive to the accuracy of prior preferences; misspecification of utilities, hierarchies, or goals can lead to suboptimal or non-Pareto-efficient results, and eliciting precise preferences upfront may be challenging in complex problems.[32] Additionally, they typically yield only one solution, limiting exploration of trade-offs unless preferences are adjusted and re-optimized.[31] A
practical example is hierarchical optimization in supply chain management, where cost minimization is prioritized first (lexicographic ordering), followed by emissions reduction without increasing costs beyond the optimal level.[33] In one such application, order allocation to suppliers under uncertainty first optimizes total procurement cost, then minimizes environmental impact subject to that cost constraint, achieving a balanced solution that meets regulatory priorities while controlling expenses.[33] This approach demonstrates how a priori methods streamline decision-making in real-world logistics by enforcing sequential trade-offs.[34]
A Posteriori Methods
A posteriori methods generate an approximation of the entire Pareto front—a set of non-dominated solutions representing trade-offs among conflicting objectives—allowing the decision maker to select a preferred solution afterward without prior articulation of preferences.[1] These approaches are particularly useful when the decision maker lacks precise preference information upfront or wishes to explore the full range of compromises before committing to a choice.[1] By producing a representative set of Pareto-optimal solutions, a posteriori methods facilitate informed decision-making in applications like engineering design and resource allocation, where multiple viable trade-offs exist.[29] One classical mathematical programming technique is the ε-constraint method, originally proposed by Haimes, Lasdon, and Wismer in 1971.[35] This method converts the multi-objective problem into a sequence of single-objective optimizations by selecting one objective for minimization while imposing upper bounds on the others. Formally, for a problem with objectives f_1, …, f_k, it solves min_{x ∈ X} f_1(x) subject to f_i(x) ≤ ε_i for i = 2, …, k, where the bounds ε_i are systematically varied to generate points along the Pareto front.[35] Each optimal solution to these constrained problems yields a weakly Pareto-efficient point, and by adjusting the ε_i values—often guided by the payoff table of individual minima—this method can also recover unsupported efficient solutions in non-convex regions of the front that weighted-sum scalarization misses.[36] The approach guarantees exact solutions for convex problems when using precise solvers but can be computationally intensive for non-convex cases or high dimensions, as it requires solving multiple optimization problems.[37] Evolutionary algorithms dominate a posteriori methods for approximating non-convex and disconnected Pareto fronts in complex search spaces. The Non-dominated Sorting Genetic Algorithm II (NSGA-II), developed by Deb et al.
in 2002, is a seminal population-based method that uses non-dominated sorting to stratify solutions into fronts based on dominance and crowding distance to promote diversity by preserving solutions in less populated regions of the objective space.[38] This elitist strategy ensures convergence toward the true front while maintaining a uniform spread, making NSGA-II effective for problems with two to three objectives, as demonstrated in benchmark tests on test functions like ZDT and DTLZ suites.[38] Other influential algorithms include MOEA/D by Zhang and Li (2007), which decomposes the multi-objective problem into single-objective subproblems via weighted aggregation (e.g., Tchebycheff or boundary intersection methods) and optimizes them in a neighborhood-based collaborative manner to achieve scalability for higher-dimensional objectives.[39] Additionally, the S-metric Selection Evolutionary Multi-objective Algorithm (SMS-EMOA) by Beume, Naujoks, and Emmerich (2007) operates in a steady-state mode, selecting offspring that maximize the hypervolume contribution to enhance both convergence and diversity without explicit diversity mechanisms like crowding.[40] In scenarios with computationally expensive evaluations, such as simulations in engineering or materials design, deep learning-based neural surrogates approximate the Pareto front by modeling the objective landscape from limited data. Post-2020 advancements leverage generative adversarial networks (GANs) as surrogates to generate high-fidelity Pareto set approximations, particularly for black-box problems where direct evaluations are prohibitive. For example, GAN-driven methods enrich sparse datasets by synthesizing diverse non-dominated solutions, enabling efficient exploration of trade-offs in high-dimensional spaces while reducing the need for costly function calls by up to orders of magnitude in surrogate-assisted frameworks. 
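For the classical ε-constraint method introduced earlier in this section, a toy enumeration over a discrete bi-objective candidate set shows how sweeping the bound ε traces out front points (a sketch under the assumption of a finite feasible set, not a production solver):

```python
def eps_constraint_front(candidates, eps_grid):
    """ε-constraint sketch for two objectives (minimize f1 subject to f2 <= ε),
    enumerated over a finite set of (f1, f2) objective vectors."""
    front = []
    for eps in eps_grid:
        feasible = [c for c in candidates if c[1] <= eps]  # impose the bound on f2
        if feasible:
            best = min(feasible, key=lambda c: c[0])       # minimize f1
            if best not in front:
                front.append(best)
    return front

cands = [(1.0, 9.0), (2.0, 7.0), (4.0, 4.0), (7.0, 2.0), (9.0, 1.0)]
print(eps_constraint_front(cands, eps_grid=[1, 2, 4, 7, 9]))
# → [(9.0, 1.0), (7.0, 2.0), (4.0, 4.0), (2.0, 7.0), (1.0, 9.0)]
```

Each ε value yields one constrained single-objective problem; in continuous settings these subproblems are handed to an exact or heuristic solver, and the grid resolution controls how densely the front is sampled.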
These neural approaches integrate with evolutionary methods to refine approximations iteratively, offering scalability for real-world applications like aerodynamic optimization.[41] The effectiveness of Pareto front approximations from a posteriori methods is evaluated using quality indicators that assess convergence, diversity, and uniformity. The hypervolume indicator, introduced by Zitzler and Thiele in 1999, quantifies the volume in the objective space dominated by the approximation set relative to a reference point, providing a Pareto-compliant measure that rewards both proximity to the true front and coverage. Complementary spread metrics, such as the one defined in NSGA-II, measure the uniformity of solution distribution along the approximated front by calculating the ratio of the distance between extreme points and the average distance to nearest neighbors, ensuring well-spaced representations without gaps or clustering.[38] These metrics guide algorithm tuning and comparison, with hypervolume often preferred for its ability to balance multiple quality aspects in empirical studies.
Interactive Methods
Interactive methods in multi-objective optimization involve iterative interactions between a decision maker (DM) and the optimization process to guide the search toward the most preferred solution on the Pareto front. These approaches allow the DM to articulate preferences dynamically, refining the solution set progressively without requiring a complete a priori specification of objectives or generating an exhaustive approximation of the Pareto front upfront. By incorporating human judgment at each step, interactive methods bridge the gap between computational efficiency and subjective decision-making needs, particularly in complex problems where preferences may evolve.[42] Preference information in interactive methods typically takes forms such as trade-off weights, reference points, or classifications of solutions. Trade-off weights enable the DM to indicate the relative importance of objectives, often through marginal rates of substitution. Reference points involve specifying desirable aspiration levels for each objective, which the algorithm uses to navigate the feasible region. Classification of solutions allows the DM to categorize current proposals as acceptable, improvable, or worsenable across objectives, facilitating targeted refinements. These input types ensure that the method adapts to the DM's cognitive capabilities and problem-specific insights.[43][42] The NIMBUS method exemplifies classification-based interactive optimization, where the DM partitions objectives into three sets: those to be improved, those whose current values are acceptable, and those allowed to worsen. Developed for nondifferentiable and nonconvex problems, NIMBUS solves a sequence of scalarized subproblems based on this classification to generate a new candidate solution, repeating until satisfaction is achieved. 
NIMBUS has been applied in engineering design tasks, such as optimizing chemical processes, demonstrating its robustness in real-world applications.[44][45]

Reference point approaches, another prominent category, ask the DM to provide an aspiration (or utopian) point, which the method then projects onto the Pareto front to identify nearby efficient solutions. Pioneered by Wierzbicki, these methods minimize an achievement scalarizing function that measures the distance from the reference point while guaranteeing Pareto optimality of the result. For instance, synchronous nested coordination variants decompose the problem hierarchically, coordinating subsystem optimizations synchronously to align with the global reference projection. This projection mechanism handles high-dimensional objective spaces efficiently by focusing computational effort on the preferred regions.[43]

The general process in interactive methods alternates between optimization steps and DM queries: an initial solution or approximation (possibly obtained with an a posteriori method) is presented; the DM provides preference information; the algorithm scalarizes and solves a subproblem to yield a revised solution; and the iterations continue until an acceptable compromise is reached. This feedback loop typically requires few interactions, often five to ten, depending on problem complexity and DM expertise. Convergence is declared when the DM accepts the current solution as preferred or desires no further improvement.[42][46]

Advantages of interactive methods include their adaptability to evolving DM preferences, which is crucial in decision support systems where initial priorities may shift as new insights emerge. By learning preferences gradually, these methods reduce the cognitive burden compared to a priori weighting and avoid overwhelming the DM with the large Pareto sets produced by a posteriori approaches.
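The achievement scalarizing function at the heart of reference point methods can be written down compactly. This minimal sketch evaluates it over a sampled front rather than invoking a continuous solver; the weights, reference point, and data are illustrative:

```python
# Wierzbicki-style achievement scalarizing function: project a DM-supplied
# reference point onto the Pareto front by minimizing
#     max_i w_i * (f_i - z_i)  +  rho * sum_i (f_i - z_i)
# The small augmentation term rho steers ties toward Pareto-optimal points.

def asf(f, ref, weights, rho=1e-6):
    return (max(w * (fi - zi) for fi, zi, w in zip(f, ref, weights))
            + rho * sum(fi - zi for fi, zi in zip(f, ref)))

# Toy bi-objective front sampled as discrete points (both objectives minimized).
front = [(0.0, 1.0), (0.2, 0.7), (0.5, 0.5), (0.7, 0.2), (1.0, 0.0)]
ref = (0.4, 0.3)                       # the DM's aspiration levels
best = min(front, key=lambda f: asf(f, ref, (1.0, 1.0)))
```

Here the aspiration point (0.4, 0.3) is unattainable, and the scalarizing function selects the nearby efficient point (0.5, 0.5); moving the reference point moves the projection along the front.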
They have been integral to decision support since the 1980s, influencing fields like engineering and resource allocation through tools that enhance transparency and user involvement.[43][42]

Advanced Techniques
Hybrid Methods
Hybrid methods in multi-objective optimization integrate multiple algorithmic paradigms to exploit their complementary strengths, for example combining the global search capabilities of evolutionary algorithms with the precision of classical optimization techniques. This addresses limitations such as the slow convergence of evolutionary methods or the entrapment of classical solvers in local optima, particularly for complex, non-convex Pareto fronts. By fusing these strategies, hybrid methods enhance solution quality, diversity, and computational efficiency in real-world applications ranging from engineering design to resource allocation.

A prominent category couples evolutionary algorithms with classical local search, exemplified by memetic algorithms. In these frameworks, evolutionary operators generate a diverse population of candidate solutions, which are then refined through local optimization procedures like gradient-based descent or neighborhood search to improve exploitation. For instance, the memetic Pareto optimization algorithm applies local search selectively to non-dominated solutions, balancing exploration and intensification to yield better approximations of the Pareto front. This hybridization has outperformed pure evolutionary methods on benchmark problems, achieving higher hypervolume values with fewer function evaluations.

Scalarization-based hybrids combine different transformation techniques to overcome the shortcomings of individual scalarizing functions. One effective strategy alternates between weighted sum methods, which are efficient on convex fronts but fail on non-convex regions, and epsilon-constraint approaches, which ensure comprehensive coverage by optimizing one objective while constraining the others.
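The epsilon-constraint half of such a hybrid can be sketched on a discrete toy front (the data and threshold sweep are illustrative):

```python
# Epsilon-constraint scalarization: minimize f1 subject to f2 <= eps.
# Sweeping eps traces the Pareto front, including non-convex regions
# that every weighted sum necessarily misses.

def eps_constraint(points, eps):
    feasible = [p for p in points if p[1] <= eps]
    return min(feasible, key=lambda p: p[0]) if feasible else None

points = [(1.0, 9.0), (2.0, 5.0), (4.0, 4.0), (5.0, 2.0), (9.0, 1.0)]
front = {eps_constraint(points, e) for e in (9.0, 5.0, 4.0, 2.0, 1.0)}
```

The point (4.0, 4.0) lies above the line joining (2.0, 5.0) and (5.0, 2.0), so it is unsupported: no choice of weights in a weighted sum returns it, yet the epsilon-constraint sweep recovers it.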
By iteratively applying these scalarizations within a single framework, such hybrids generate a more uniform distribution of Pareto-optimal solutions, as shown in applications to portfolio optimization where they outperform standalone scalarization on diversity metrics.[47]

Two-stage hybrid approaches provide a structured way to approximate and then refine the Pareto front. In the first stage, an evolutionary algorithm roughly outlines the front through population-based search, producing a set of promising non-dominated points. The second stage employs exact classical solvers, such as mixed-integer linear programming, to precisely optimize subsets of these points under tightened constraints. This method is particularly advantageous for problems with both combinatorial and continuous variables, as evidenced by its application to supply chain design, where it reduces computational time while maintaining solution accuracy.[48]

Decomposition-based hybrids, such as extensions of the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), incorporate surrogate models to approximate expensive objective functions. MOEA/D decomposes the multi-objective problem into scalar subproblems solved collaboratively across a population; integrating surrogates like radial basis functions accelerates evaluations without sacrificing convergence. These frameworks have gained prominence since the 2010s for tackling real-world problems with high-fidelity simulations, offering better diversity and faster adaptation to dynamic environments than non-hybrid decompositions. Recent advances include hybridized multi-objective whale optimization algorithms combined with other metaheuristics for improved performance on complex benchmarks.[49] Overall, hybrid methods have become essential for practical multi-objective optimization, consistently delivering robust trade-off solutions across diverse domains.

Machine Learning Integration
Machine learning techniques have significantly enhanced multi-objective optimization by addressing computational bottlenecks, particularly through surrogate models that approximate expensive objective functions. Surrogate models, including probabilistic approaches like Gaussian processes (often implemented as Kriging), enable efficient exploration of the Pareto front by providing uncertainty estimates that guide the search process in high-dimensional spaces. Neural networks serve as deterministic surrogates, offering scalable approximations for complex, non-linear objectives in scenarios like engineering design, where they reduce evaluation costs significantly while maintaining accuracy.[50] These surrogates are often integrated into a posteriori methods to accelerate convergence toward diverse Pareto solutions without exhaustive simulations.[51]

Deep reinforcement learning extends multi-objective optimization to dynamic environments, with the Multi-Objective Deep Q-Network (MO-DQN) framework, proposed in a 2017 preprint, allowing agents to learn policies that balance multiple conflicting rewards through modular neural architectures.[52] This approach excels in sequential decision-making tasks, such as resource allocation, by maintaining separate value functions for each objective and scalarizing them adaptively during training.[53] Subsequent variants have improved scalability for real-time applications, demonstrating superior performance in non-deterministic settings compared to traditional scalarized reinforcement learning.
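The surrogate-assisted pattern described above can be sketched with a deliberately simple stand-in: an inverse-distance-weighting model plays the role of the Gaussian-process or neural surrogate, and the "expensive" objective is a toy function (all names are illustrative):

```python
# Minimal surrogate-assisted loop: an inverse-distance-weighting model stands
# in for the Gaussian-process / neural surrogates used in practice. The pattern
# is the same: train on a few expensive evaluations, rank many candidates
# cheaply, then spend real evaluations only on the most promising ones.

def expensive_objective(x):            # stand-in for a costly simulation
    return (x - 0.3) ** 2

def idw_predict(x, samples, power=2):
    num = den = 0.0
    for xi, yi in samples:
        if x == xi:
            return yi                  # exact hit: return the stored value
        w = 1.0 / abs(x - xi) ** power
        num += w * yi
        den += w
    return num / den

samples = [(x, expensive_objective(x)) for x in (0.0, 0.5, 1.0)]
candidates = [i / 100 for i in range(101)]
# Rank candidates by the cheap surrogate; evaluate only the best one exactly.
best = min(candidates, key=lambda x: idw_predict(x, samples))
true_value = expensive_objective(best)
```

With only three training points the surrogate's minimizer (0.5) misses the true optimum at 0.3, which is exactly why practical frameworks re-fit the surrogate as new exact evaluations arrive.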
For high-dimensional Pareto fronts, autoencoders support prediction and dimensionality reduction by learning latent representations that preserve trade-off structures, enabling visualization and approximation of fronts involving many objectives.[54]

Recent advances from 2020 to 2025 include transformer-based multi-task learning models that generate entire Pareto fronts via hypernetworks, capturing long-range dependencies in objective spaces for faster inference in generative tasks.[55] These methods also address non-stationarity, where objectives evolve over time, by incorporating adaptive priors in reinforcement learning frameworks, ensuring robust policy updates amid environmental change.[56]

In drug design, neural surrogates optimize efficacy and toxicity simultaneously; for instance, convolutional neural networks approximate molecular properties to explore vast chemical spaces, identifying candidates with balanced therapeutic indices.[57] Recent integrations of machine learning with multi-objective optimization, as of 2025, include applications in materials design and biofuel systems, leveraging techniques like artificial neural networks to enhance discovery processes.[58]

Visualization of Results
Visualization of results in multi-objective optimization involves techniques for representing the Pareto front, the set of non-dominated solutions that captures the trade-offs among conflicting objectives. These methods help decision-makers understand solution diversity and select preferred outcomes from the approximated Pareto set.[59]

For bi-objective problems, scatter plots are commonly used to depict trade-off curves: each axis represents one objective and the points trace the shape of the Pareto front. Parallel coordinates provide an alternative, plotting each solution as a line connecting its values on per-objective axes, revealing correlations and gaps in the front.[60][61]

In high-dimensional cases with more than three objectives, visualization challenges arise from the "curse of dimensionality," necessitating projection or mapping techniques. Radial visualization methods such as RadViz place the objectives as anchors on a circle and lay out each solution by spring forces, highlighting clusters. Self-organizing maps (SOMs) reduce the Pareto front to a 2D grid while preserving topological relationships, aiding pattern identification. Dimensionality reduction via principal component analysis (PCA) or t-distributed stochastic neighbor embedding (t-SNE) projects high-dimensional data into 2D or 3D spaces, emphasizing local structure in the front.[62][60]

Interactive tools enhance decision support by allowing exploration of the Pareto front. Level diagrams represent solutions through nested contours or value paths, facilitating comparison of fronts by highlighting dominance levels and uniformity.
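The PCA projection mentioned above reduces to a centered SVD; a minimal sketch, assuming NumPy and synthetic front data:

```python
import numpy as np

# PCA projection of a many-objective Pareto front approximation down to 2-D
# via a plain SVD of the centered data; the front here is synthetic.

rng = np.random.default_rng(0)
front = rng.random((50, 6))            # 50 solutions, 6 objectives

centered = front - front.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ vt[:2].T       # coordinates along the top 2 components
```

The rows of `projection` can be fed straight into a 2-D scatter plot; t-SNE would replace the SVD with a nonlinear embedding.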
Heatmaps encode objective interactions via color intensity, enabling quick assessment of solution density and trade-offs across dimensions.[63][60]

Assessing the visual quality of these representations often relies on metrics that evaluate the coverage and uniformity of the Pareto front approximation, such as spacing or entropy-based indicators, which measure even distribution and completeness without requiring knowledge of the true front.[64][65]

Software tools support Pareto front rendering, including MATLAB's paretoplot function for generating 2D and 3D plots of multi-objective solutions, as well as custom implementations for interactive exploration.[66] Recent advances as of 2025 include methods for distilling Pareto fronts into actionable insights, improving accessibility for decision-making in complex optimizations.[67]
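The spacing indicator mentioned above, in Schott's formulation the standard deviation of nearest-neighbor distances along the front, is straightforward to compute; a pure-Python sketch on illustrative data:

```python
from math import sqrt

# Schott's spacing metric: standard deviation of each solution's distance to
# its nearest neighbor on the approximated front; 0 means perfectly even
# spacing. Distances are Manhattan, per the usual definition.

def spacing(front):
    d = []
    for i, a in enumerate(front):
        d.append(min(sum(abs(x - y) for x, y in zip(a, b))
                     for j, b in enumerate(front) if j != i))
    mean = sum(d) / len(d)
    return sqrt(sum((mean - di) ** 2 for di in d) / (len(d) - 1))

even = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
uneven = [(0.0, 1.0), (0.05, 0.95), (0.5, 0.5), (0.95, 0.05), (1.0, 0.0)]
```

A value of zero indicates perfectly even spacing; the clustered approximation scores markedly worse despite covering the same extremes.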
