Pareto-Optimal Front Determination
Page: 1-53 (53)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010003
Abstract
Multi-objective optimization (MOO) problems belong to programming approaches in which the decision-maker faces a multiplicity of conflicting objectives. This situation occurs in real-world problems involving engineering design, chemical processes, financial management, etc. In such problems, we obtain a set of equally good solutions rather than a single best solution. Classical methods have shown their limits in finding Pareto-optimal fronts. The difficulties were the necessity of repetitive applications, the requirement of some knowledge about the problem, the insufficient spread of non-dominated solutions, and the sensitivity of results to the shape of the Pareto-optimal front. Evolutionary approaches underwent accelerated development in the 1980s and 1990s, and a variety of nature-inspired algorithms have been proposed in recent years, with extensive applications. The concept of dominance is the basis of Pareto optimality. The dominance binary relation is a strict partial order. This approach allows a comparison between feasible solutions in the fitness space and the decision space. The non-dominated solution sets yield Pareto fronts. Different techniques have been proposed to find good approximations of the Pareto sets when a Pareto front cannot be determined analytically. Our method uses graph-theoretic analysis: it provides a non-dominated set by using the Hasse diagram of an acyclic graph. Numerous examples from the literature show connected and disconnected Pareto-optimal fronts. In particular, Pareto-optimal fronts change if we decide to maximize instead of minimize the objectives. Algorithms use different selection procedures. The selection criteria can be an elitist Pareto ordering, non-Pareto criteria such as indicators, a bi-criteria evolution, or extended concepts of dominance such as ε-dominance and grid dominance.
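The dominance relation described in this abstract can be sketched in a few lines of Python. This is a minimal illustration under a minimization convention, not code from the book:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Applying `non_dominated` to a set of sampled objective vectors yields an approximation of the Pareto front; note that the relation is a strict partial order, so two vectors may be mutually non-dominated.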
Metaheuristic Optimization Algorithms
Page: 54-83 (30)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010004
Abstract
Heuristic and metaheuristic algorithms are used iteratively to approximate the solutions of challenging optimization problems. A metaheuristic algorithm refers to a higher-level master strategy that guides and controls the operations of lower-level subordinate heuristics. Different concepts and operators are combined to explore the search space of an optimization problem. The capacity of such techniques to solve NP-hard combinatorial problems and continuous optimization problems is well known. An illustrative reference metaheuristic is the simulated annealing (SA) algorithm. SA is not an evolutionary algorithm, since new solutions are mainly generated by a sequence of random walks. We introduce SA metaheuristic techniques for both single-objective (SA) and multi-objective (MOSA) optimization problems. This study solves numerical test problems, such as Ursem's test function, the six-hump camel-back test function, and the ZDT1 to ZDT3 test problems. Routines from different software packages are used, such as Mathematica® and other free open-source packages. The applications show the capacity to approximate various Pareto-optimal fronts, whose shapes can be convex or non-convex.
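The single-objective SA scheme outlined above can be sketched as follows: random-walk proposals with Metropolis acceptance and a geometric cooling schedule. All parameter values are illustrative, not the book's:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=2000, seed=1):
    """Minimize a univariate f by SA: random-walk proposals, Metropolis acceptance."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = x + rng.uniform(-step, step)      # random-walk proposal
        fy = f(y)
        # accept improvements always; worse moves with Boltzmann probability
        if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                          # geometric cooling schedule
    return best, fbest
```

At high temperature the walk explores freely; as the temperature decays, the acceptance rule degenerates into greedy descent, which is what distinguishes SA from evolutionary, population-based methods.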
Evolutionary Strategy Algorithms
Page: 84-113 (30)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010005
Abstract
Evolutionary computation (EC) is commonly divided into evolution strategies (ES), evolutionary programming (EP), genetic algorithms (GA), and genetic programming (GP). This study focuses on GAs for solving MOO problems. GAs belong to the set of nature-inspired meta-algorithms; their stochastic search techniques are based on the genetic processes of biological systems. A GA uses encodings and reproduction mechanisms. A population of feasible solutions evolves over successive generations with improved performance, and the process stops when global optima are reached. Reproduction uses three types of operators: a selection operator determines the most qualified solutions (i.e., individuals), a crossover operator creates new solutions, and a mutation operator makes one or more random changes in a single string. In this study, we show how different modules can be combined with an early version of the Mathematica® software; each module specifies input and output variables. Other free packages, SciLab 5.5.2 (an alternative to Matlab®) and GENOCOP III, provide genetic search solvers for SOO and MOO problems. Ackley's test function and other examples illustrate such computations.
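The three reproduction operators named above can be sketched in a minimal real-coded GA. The operator choices (tournament selection, uniform crossover, Gaussian mutation) and all parameters are illustrative assumptions, not the book's Mathematica® modules:

```python
import random

def genetic_minimize(f, dim=2, pop_size=30, gens=150, lo=-5.0, hi=5.0, seed=7):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation, with one elite kept per generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if f(a) < f(b) else b        # selection: fitter of two survives

    for _ in range(gens):
        new_pop = [min(pop, key=f)]           # elitism: carry over current best
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            # uniform crossover: each gene copied from either parent
            child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]
            if rng.random() < 0.5:            # mutation: one random gene change
                i = rng.randrange(dim)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.3)))
            new_pop.append(child)
        pop = new_pop
    best = min(pop, key=f)
    return best, f(best)
```

A sphere function makes a quick smoke test; multimodal functions such as Ackley's would exercise the same loop with a different `f`.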
Genetic Search Algorithms
Page: 114-143 (30)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010006
Abstract
The extension of genetic search algorithms to MOO problems requires new concepts, such as Pareto-domination ranking and fitness sharing. The goal is to maintain diversity in the population of solutions through successive generations. Niching methods were introduced to reduce the effect of "random genetic drift" and to preserve the genetic diversity of optimal solutions. The NPGA algorithm illustrates this approach, with an application to the Binh-Korn test function. The NSGA-II software can handle real-world complex constrained MOO problems. This package is a fast and elitist algorithm that sorts the population of parents and offspring into ranked fronts under a non-domination order relation. An elitist search procedure guarantees the diversity and spread of solutions. The constraint-handling method differs from conventional methods by using a modified definition of dominance rather than penalty parameters. NSGA-II can solve several test functions: unconstrained functions such as ZDT1, UF1, Kursawe, and Viennet, and constrained functions such as SRN, Deb, and Tanaka. For these problems, we illustrate the progression of the computation at the different steps of the iteration path.
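NSGA-II's ranking of parents and offspring into non-dominated fronts can be sketched with a minimal version of the fast non-dominated sort (minimization assumed; rank 0 is the non-dominated front):

```python
def fast_non_dominated_sort(objs):
    """Partition objective vectors into ranked Pareto fronts, as in the
    sorting step of NSGA-II (minimization). Returns lists of indices."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # dominated_by[i]: solutions i dominates
    counts = [0] * n                       # counts[i]: how many dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)            # rank 0: non-dominated
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:                # peel off front k, expose front k+1
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]                     # drop trailing empty front
```

In the full algorithm, the next generation is filled front by front, with a crowding-distance measure breaking ties inside the last front that fits.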
Evolution Strategy Algorithms
Page: 144-156 (13)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010007
Abstract
Evolution strategy algorithms are nature-inspired methods. The differential evolution (DE) algorithm is a stochastic, population-based method whose strategy consists of successive recombinations of solutions. Operators such as crossover, mutation, and selection change the composition and performance of the population. This method for solving SOO problems was extended to MOO problems; in this case, further properties are required, such as promoting the diversity of solutions and performing elitism. Unlike other genetic algorithms, the DE algorithm relies on the mutation operation rather than the crossover operation. The DE algorithm is implemented in the Mathematica® software among other numerical methods for single-objective optimization. Two examples illustrate the computations with Mathematica®: the bivariate Rosenbrock test function and a highly multimodal test function drawn from Mathematica®. The ZDT6 test function shows the application of the DE algorithm to a multi-objective optimization problem with three decision variables and two objectives.
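The mutation-driven DE scheme (DE/rand/1/bin in the usual notation) can be sketched as below; the values of F, CR, and the bounds are illustrative assumptions:

```python
import random

def differential_evolution(f, dim=2, np_=20, F=0.6, CR=0.9, gens=150,
                           lo=-5.0, hi=5.0, seed=3):
    """Minimal DE/rand/1/bin: difference-vector mutation v = a + F*(b - c),
    binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([x for k, x in enumerate(pop) if k != i], 3)
            jrand = rng.randrange(dim)        # force at least one mutated gene
            trial = [min(hi, max(lo, a[j] + F * (b[j] - c[j])))
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    i = min(range(np_), key=lambda k: fit[k])
    return pop[i], fit[i]
```

The mutation step is the signature of DE: new candidates are built from scaled differences between current population members, so the search adapts its step sizes to the population's spread.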
Swarm Intelligence and Co-Evolutionary Algorithms
Page: 157-172 (16)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010008
Abstract
Collective strategies are possible within a population; such situations occur in nature with flocks of birds or schools of fish. These swarm-intelligence mechanisms are well suited to optimization problem-solving. Other co-evolutionary models involve two different populations or species that compete. In the predator-prey model, the predator eliminates the weak prey. We show that this situation can be transposed to optimization problems, where the predator is one of the objectives and the prey are feasible solutions. Solving an MOO problem may proceed in different ways; one way consists of decomposing the initial problem into a sequence of subproblems.
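A minimal sketch of the swarm-intelligence idea is a global-best particle swarm optimizer for a single objective. All coefficients are illustrative; the chapter's co-evolutionary variants are not reproduced here:

```python
import random

def pso_minimize(f, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5,
                 lo=-5.0, hi=5.0, seed=11):
    """Minimal global-best PSO: each particle is pulled toward its own best
    position (cognitive term) and the swarm's best position (social term)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(swarm):
            for j in range(dim):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (pbest[i][j] - X[i][j])
                           + c2 * rng.random() * (gbest[j] - X[i][j]))
                X[i][j] = min(hi, max(lo, X[i][j] + V[i][j]))
            fx = f(X[i])
            if fx < pval[i]:                  # update personal best
                pbest[i], pval[i] = X[i][:], fx
                if fx < gval:                 # update swarm best
                    gbest, gval = X[i][:], fx
    return gbest, gval
```

The shared `gbest` is what makes the behavior collective: information about good regions propagates through the whole swarm rather than along parent-child lines.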
Decomposition-Based and Hybrid Evolutionary Algorithms
Page: 173-196 (24)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010009
Abstract
Solving an MOO problem may proceed in two different ways. One way consists of decomposing the initial problem into a sequence of subproblems, each solved using information from its neighboring subproblems, with all subproblems solved simultaneously. A multi-objective evolutionary algorithm based on decomposition (MOEA/D) applies this approach to Kursawe's test function; weight vectors drawn from a simplex lattice are used to approximate convex and nonconvex Pareto-optimal fronts. The other way is to combine metaheuristics or heuristics; the incorporation of local-search heuristics into an MOEA illustrates this aspect. The literature shows how such hybridization can be designed. There are three ways to hybridize metaheuristics and heuristics: a first method is to use one algorithm and improve it with other techniques; a second is to use multiple operators in an EA; a third is to develop MOGA solutions by implementing an efficient local search. Numerous examples of hybridization exist in the literature. The M-PAES algorithm combines the local-search strategy of the Pareto archived evolution strategy (PAES) with the use of a GA. The main PSO algorithm can be combined with a local and a global search algorithm.
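The decomposition idea can be sketched with the Tchebycheff scalarization commonly used in MOEA/D; the weight-vector generator below assumes two objectives, and the symbol z* for the ideal point follows common usage rather than necessarily the book's notation:

```python
def tchebycheff(objs, weights, ideal):
    """Tchebycheff scalarization for decomposition-based MOEAs:
    g(x | w, z*) = max_i  w_i * |f_i(x) - z*_i|.
    Minimizing g for one weight vector targets one point of the front."""
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

def simplex_lattice(h):
    """Evenly spread weight vectors with w1 + w2 = 1 for a bi-objective
    problem: one subproblem per vector."""
    return [(i / h, (h - i) / h) for i in range(h + 1)]
```

Each weight vector defines one scalar subproblem; neighboring vectors define neighboring subproblems, which is why solutions can be exchanged locally during the search. Unlike a plain weighted sum, the Tchebycheff form can also reach points on nonconvex parts of the front.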
Many-Objective Optimization and Parallel Computation
Page: 197-220 (24)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010010
Abstract
Large-scale multi-objective problems in industry and engineering may involve much higher-dimensional problems and may require decomposing the numerous operations of the resolution process. In practice, one can be confronted with the optimization of a large number of objectives as well as with a great amount of computation. This chapter is devoted to possible approaches to overcome these difficulties, namely so-called 'many-objective optimization' and 'parallel computation'. Two major inconveniences in handling more than three (many) objectives are a decreasing selection pressure toward the Pareto front and a decreasing computational efficiency as the number of objectives increases. Addressing these difficulties may consist of adapting or changing the concept of Pareto dominance: a stronger dominance relation allows a better comparison of the quality of solutions. New concepts are ε-dominance, L-optimality, fuzzy dominance, and preference-order ranking. Some essential algorithms for many-objective optimization are presented, such as the fast hypervolume-based algorithm, the vector-angle-based algorithm, and the reference-point-based algorithm, as in NSGA-III. Test problem suites for many-objective optimization are proposed. Parallel search techniques are adapted to new computer architectures with parallel computers and distributed multiprocessor computers. Two motivations for adopting such configurations are saving computation time on complex real-world problems and the possibility of solving large-size problems. The hierarchical master-worker paradigm is a standard way to implement parallel applications: a master process dispatches specific tasks to multiple worker processes and receives the computation results back from them. Parallel computation of metaheuristic algorithms includes parallelization strategies, parallel designs, and parallel metaheuristic algorithms. Applications show how much computation time can be saved with parallel computation.
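The master-worker paradigm for fitness evaluation can be sketched as follows. Threads are used here only for a portable illustration; a real parallel metaheuristic would typically use processes or MPI, since fitness evaluations are CPU-bound:

```python
from concurrent.futures import ThreadPoolExecutor

def sphere(x):
    """Example objective evaluated by the workers."""
    return sum(v * v for v in x)

def master_evaluate(population, objective, n_workers=4):
    """Master-worker fitness evaluation: the master dispatches one candidate
    per task to a pool of workers and collects the results in order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(objective, population))
```

Since population members are evaluated independently, this step parallelizes with essentially no coordination, which is why it is the usual starting point for parallelizing a metaheuristic.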
Design of Test Problems
Page: 221-234 (14)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010011
Abstract
Test functions help to evaluate multi-objective optimization algorithms, and toolkits and suites support the construction of tunable multi-objective test problems. We may be faced with two types of difficulties, concerning the convergence process and the diversity of Pareto-optimal solutions. First, convergence of the iteration process to a Pareto-optimal front may not be achieved; the reasons can be the multi-modality of the objective functions, the presence of a deceptive attractor, or flat areas surrounding the global optimum. Second, a lack of diversity in the Pareto-optimal solutions can be attributed to geometric anomalies of the Pareto-optimal front, such as convexities or non-convexities, discontinuities, and non-uniformly distributed solutions. Tunable generating methods are helpful for controlling these different kinds of complexity. K. Deb (1999) suggested the construction of tunable two-objective test problems whose objective functions are composed of particular functions with known impacts: a nonlinear multivariate 'distribution function' affects the diversity along the Pareto-optimal front, a multi-modal 'distance function' disturbs the convergence to the true Pareto-optimal front, and the convexity or discontinuity of the Pareto-optimal front is controlled by the choice of a 'shape function'. An interactive and controlled document demonstrates the resolution process for Kursawe's test function.
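Deb's construction can be illustrated with ZDT1, which follows exactly this pattern: f1 positions a solution along the front, a 'distance function' g measures how far it sits from the true front, and a 'shape function' h fixes the front's geometry (here convex; 30 decision variables in the usual formulation):

```python
import math

def zdt1(x):
    """ZDT1 in Deb's (distance, shape) construction, minimization:
    f1 = x1;  g = 1 + 9 * mean(x2..xn)  (distance function);
    h = 1 - sqrt(f1 / g)                (shape function, convex front);
    f2 = g * h.  The true front has g = 1, i.e. x2 = ... = xn = 0."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)   # distance to the true front
    h = 1.0 - math.sqrt(f1 / g)                 # shape of the front
    return f1, g * h
```

Swapping in a multimodal g or a non-monotone h yields, by design, the convergence and diversity difficulties described above, which is what makes the construction tunable.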
Fifty Collected Test Functions
Page: 235-267 (33)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010012
Abstract
This study collects fifty test functions. The collection includes test problems from Deb's test-problem toolkit, the ZDT and DTLZ test suites, Van Veldhuizen's test suite, and other examples from the literature. For each test function, the Pareto-optimal set in the parameter space and the Pareto-optimal front in the fitness space are determined by using NSGA-II. We specify the main features of the Pareto-optimal sets for these test functions: they can be connected or disconnected, separable, unimodal or multimodal, symmetric, and scalable. The Pareto-optimal fronts may take particular shapes, such as a curve, a single point, or a surface, and can be connected or disconnected, and entirely or partially convex or nonconvex.
List of Journal Abbreviations in the References
Page: 275-277 (3)
Author: Andre A. Keller
DOI: 10.2174/9781681087054119010014
Introduction
Multi-Objective Optimization in Theory and Practice is a simplified two-part approach to multi-objective optimization (MOO) problems. This second part focuses on the use of metaheuristic algorithms in more challenging practical cases. The book includes ten chapters that cover several advanced MOO techniques: the determination of Pareto-optimal sets of solutions, metaheuristic algorithms, genetic search algorithms and evolution strategies, decomposition algorithms, hybridization of different metaheuristics, and many-objective (more than three objectives) optimization with parallel computation. The final section presents the design and types of fifty test problems, for each of which the NSGA-II package is used to approximate the Pareto-optimal front. It is an essential handbook for students and teachers involved in advanced optimization courses in engineering, information science, and mathematics degree programs.