**3. Optimization Approaches**

This section describes the algorithmic approaches selected for the experimental study and justifies the selection and design decisions made.

#### *3.1. Multi-Objective Evolutionary Algorithms*

Evolutionary Multi-Objective Optimization (EMO) [34] encompasses the research, applications and algorithms that address Multi-Objective Optimization (MO) problems by means of Evolutionary Algorithms (EAs). In the related literature, several Multi-Objective Evolutionary Algorithms (MOEAs) have been proposed for solving MOPs, and these can be classified according to different features. A widely accepted classification for MOEAs considers the following families:

• **Pareto-dominance-based algorithms**, which rank candidate solutions by means of Pareto dominance relations, such as the Non-dominated Sorting Genetic Algorithm II (NSGA-II).

• **Decomposition-based algorithms**, which decompose the MOP into a set of scalar subproblems that are optimized simultaneously, such as the Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D).

• **Indicator-based algorithms**, which guide the search by means of a quality indicator, such as the S-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA), the Fast hypervolume-based Multi-Objective Evolutionary Algorithm (FV-MOEA) [43] and the Many-Objective Metaheuristic Based on R2 Indicator (MOMBI-II) [44].

The No Free Lunch Theorem in optimization [45] states that no algorithm searching for an optimal cost or fitness value is universally superior to all other algorithms across every possible problem. Therefore, in experimental studies, at least one algorithm of each type is usually selected to represent the state of the art. In this work, we apply a Pareto-dominance-based algorithm (NSGA-II), a decomposition-based approach (MOEA/D) and an indicator-based algorithm (SMS-EMOA), which guides the search by means of the hypervolume metric. A brief description of these multi-objective approaches is provided below:

• **Non-dominated Sorting Genetic Algorithm II (NSGA-II)**: ranks the population through non-dominated sorting and applies the crowding distance as a secondary criterion to promote diversity along the approximated front.

• **Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D)**: decomposes the MOP into a set of scalar subproblems that are optimized simultaneously, with each subproblem exploiting information from its neighboring subproblems.

• **S-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA)**: a steady-state algorithm that, at each iteration, discards the individual contributing the least to the hypervolume (S-metric) of the population.

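Since SMS-EMOA guides the search by means of the hypervolume, it may help to recall how this indicator is computed in the bi-objective case. The sketch below, with illustrative names and assuming minimization, sweeps a non-dominated set sorted by the first objective:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D minimization front w.r.t. a reference point.

    Assumes every point in `front` is mutually non-dominated and dominates
    `ref`. Sorting by the first objective makes the second one decrease,
    so the dominated region decomposes into vertical strips.
    """
    pts = sorted(front)  # ascending in f1; f2 is then descending
    hv = 0.0
    for (f1, f2), nxt in zip(pts, pts[1:] + [(ref[0], None)]):
        hv += (nxt[0] - f1) * (ref[1] - f2)  # strip between f1 and next f1
    return hv

hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4))  # → 6.0
```

SMS-EMOA uses exactly this kind of quantity as its selection criterion: at each step it removes the individual whose deletion reduces the hypervolume the least.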
#### *3.2. Single-Objective Evolutionary Algorithms*

In the context of SOEAs, some of the most frequently used approaches are Evolution Strategies (ESs) and Genetic Algorithms (GAs). The main differences between these types of EAs lie in the computation of the fitness and in the application of the operators (mutation, recombination and selection). In contrast to GAs, where the main role of the mutation operator is simply to avoid premature convergence, mutation is the primary operator in ESs. Furthermore, in contrast to GAs, selection in ESs is completely deterministic. For the experimental study conducted in this work, we considered the following approaches:

• **Generational Genetic Algorithm (gGA)** [46]: two parents are selected from the population and recombined, yielding two offspring, which are then mutated and evaluated. These newly generated individuals are placed in an auxiliary population that replaces the current population once it is full.
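A minimal sketch of the generational loop described above, assuming a binary encoding, binary tournament selection and one-point crossover (all names and parameter values are illustrative, not those of the actual experimental setup):

```python
import random

def generational_ga(fitness, n_bits=20, pop_size=30, gens=50,
                    p_mut=0.05, seed=1):
    """Sketch of a gGA: pairs of parents are selected and recombined,
    the offspring are mutated and placed in an auxiliary population,
    which replaces the current population once it is full."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        aux = []
        while len(aux) < pop_size:
            # Binary tournament selection of two parents.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            # One-point crossover yields two offspring.
            cut = rng.randrange(1, n_bits)
            for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
                # Bit-flip mutation, then place in the auxiliary population.
                aux.append([b ^ (rng.random() < p_mut) for b in child])
        pop = aux[:pop_size]  # generational replacement
    return max(pop, key=fitness)

# Toy maximization problem (OneMax): maximize the number of ones.
best = generational_ga(sum)
```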

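The contrast drawn earlier between ESs and GAs can be illustrated with a minimal (1+1)-ES, in which mutation is the only variation operator and replacement is fully deterministic. This is a sketch under our own naming; the step-size adaptation constants (a rough 1/5th success rule) are illustrative:

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=200, seed=0):
    """Minimal (1+1)-ES for minimization: Gaussian mutation is the
    primary operator, and the (deterministic, elitist) selection keeps
    the better of parent and offspring."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        # Mutation: perturb every coordinate with Gaussian noise.
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:        # deterministic selection: accept improvements
            x, fx = y, fy
            sigma *= 1.22   # expand step size after a success
        else:
            sigma *= 0.82   # shrink it after a failure
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)
best, fbest = one_plus_one_es(sphere, [5.0, -3.0])
```

Because replacement is elitist, the returned fitness can never be worse than that of the starting point, which reflects the deterministic selection of ESs mentioned above.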

#### *3.3. Comparison of Single and Multi-Objective Approaches*

For the comparison, we use the same set of bi-objective instances for both the single-objective and the multi-objective algorithms considered here. Our aim is to analyze the objectives independently, i.e., first comparing the values attained by the MOEAs and SOEAs for objective 1 over the whole set of instances, and then doing the same for objective 2. The multi-objective approaches directly address the bi-objective instances selected for the KNP and the TSP. This means that, in every execution, they obtain an extreme (best) value for objective 1 and another extreme (best) value for objective 2, and that both values are obtained at the same time, i.e., in a single execution of the algorithm. The single-objective approaches, however, cannot deal with several objectives simultaneously, and therefore need to be executed twice: once to optimize objective 1 and once to optimize objective 2.
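Extracting the per-objective extreme values from the front returned by a single multi-objective execution, as described above, amounts to taking the best value of each objective over the front (a sketch assuming minimization; names are illustrative):

```python
def extreme_values(front):
    """Best (minimum) value reached for each objective across the
    approximated Pareto front returned by one MOEA execution."""
    return tuple(min(point[i] for point in front)
                 for i in range(len(front[0])))

# One MOEA run yields both extremes at once ...
front = [(1.0, 9.0), (3.0, 4.0), (7.0, 2.0)]
extreme_values(front)  # → (1.0, 2.0)
# ... whereas an SOEA needs one separate run per objective.
```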

During the experimental evaluation we focus on solution quality when comparing the different approaches, i.e., we do not analyze the execution time of each approach. Due to their stochastic nature, the time complexity analysis of EAs is not an easy task [49]. Many experimental results have been reported for all types of EAs, but only a few results have been proven in a theoretical context [50]. Moreover, when the complexity analysis concerns MOEAs, the development of a theoretical study is even more complicated [51].

Since MOEAs implicitly deal with objectives that are in conflict with one another, they need to manage a set of trade-off solutions instead of a single (optimal) solution. When tackling MOPs, it is necessary to distinguish the quality of solutions consisting of multiple objective values. Many MOEAs rely on the concept of Pareto dominance to sort a set of solutions: non-dominated sorting. This sorting procedure divides a solution set into a number of disjoint subsets or ranks by means of pairwise dominance comparisons. After the sorting process, solutions in the same rank are considered equally good, while solutions in a smaller rank are better than those in a larger rank. This sorting strategy, which a wide range of existing MOEAs have adopted, involves a high computational cost [52]. Some studies have shown that, for NSGA-II applied to a bi-objective DTLZ1 benchmark problem with a population of 1000 individuals and a maximum of 500 generations, non-dominated sorting consumes more than 70% of the run-time [52]. In our study, the execution times of the multi-objective approaches are from 5 to 10 times greater than the time required by the two executions (one per objective) of the corresponding single-objective alternatives, depending on the problem (KNP or TSP) and on the instance type and size.
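For reference, the ranking procedure just described can be sketched as follows. This is a naive version built on repeated pairwise dominance comparisons; the fast book-keeping variant used by NSGA-II is more involved, but the resulting ranks are the same:

```python
def non_dominated_sort(points):
    """Divide a set of objective vectors (minimization) into ranks:
    rank 0 holds the non-dominated solutions, rank 1 those dominated
    only by rank 0, and so on."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one.
        return all(x <= y for x, y in zip(a, b)) and a != b

    remaining = list(points)
    ranks = []
    while remaining:
        # Current front: points not dominated by any remaining point.
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        ranks.append(front)
        remaining = [p for p in remaining if p not in front]
    return ranks
```

The nested dominance checks make this quadratic in the population size per rank, which is the cost the run-time figures above refer to.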
When dealing with many-objective optimization problems (those with three or more objectives), this efficiency aspect becomes even more critical. Bearing this in mind, some authors have actively worked on reducing the number of objective functions by eliminating those that are not essential to describe the Pareto-optimal front [53].
