**1. Introduction**

Evolutionary Algorithms (EAs) [1] were initially developed for unconstrained Single-Objective Optimization Problems (SOPs). However, extensive research has been conducted to adapt them to other types of problems. In recent years, many Multi-Objective Evolutionary Algorithms (MOEAs) have been proposed in the literature [2,3] to adapt EAs to deal with Multi-Objective Optimization Problems (MOPs). One of the main components of most modern MOEAs is the ability to maintain genetic diversity within a population of individuals [4]. Maintaining proper diversity is decisive for the behavior of EAs, since a loss of diversity could lead to premature convergence, which is a frequent drawback, especially in single-objective optimization. Most MOEAs implicitly manage diversity by considering the objective function space [5] and, in some cases, the decision variable space. Several mechanisms have been proposed in the literature to address this, such as fitness sharing [6], clustering [7], and entropy [8], among others [4]. Promoting diversity is a key feature of an efficient and reliable MOEA. In fact, it is an intrinsic component of many MOEAs. Because of this, some authors have claimed that the application of MOEAs might be useful when dealing with single-objective problems. Furthermore, several theoretical and empirical studies have shown that multi-objective optimizers can even provide better solutions than single-objective optimizers [4,9–11].

MOEAs have been applied to SOPs using various guidelines. Usually, the mechanisms proposed in the literature for solving SOPs by means of MOEAs consist of transforming the original SOP into a MOP so that MOEAs can be applied to the transformed problem. This transformation can be done either by replacing the original objective with a set of new objectives, or by adding new, additional objectives to the original one [4,12]. Among these approaches, the best known and most widespread in the literature are: transforming constraints into objectives [13], considering diversity as an explicit objective function [14], and multiobjectivization schemes, which transform a SOP into a MOP by modifying its fitness landscape [12]. In any case, these new objectives are included in order to promote the exploration of different regions, since multi-objective approaches try to simultaneously optimize several objectives. This might make it possible to escape from sub-optimal regions, thus providing a suitable balance between exploration and exploitation. The analysis presented in [15] lists the benefits of using additional objectives, known as helper-objectives. The main ones are [12]: avoiding stagnation in local optima and maintaining diversity within a population.
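To make the helper-objective idea concrete, the sketch below pairs an original single-objective fitness with a diversity helper objective: the mean Hamming distance from a bitstring individual to the population members. Both the function names and the choice of diversity measure are illustrative assumptions of ours, not the specific mechanisms proposed in [14] or [15].

```python
def hamming(a, b):
    """Number of positions at which two bitstrings differ."""
    return sum(x != y for x, y in zip(a, b))

def with_helper_objective(fitness, individual, population):
    """Pair the original fitness with a diversity helper objective:
    the mean Hamming distance from the individual to all population
    members (the individual itself contributes zero). A MOEA would
    then maximize both values simultaneously."""
    diversity = sum(hamming(individual, other) for other in population) / len(population)
    return fitness(individual), diversity

# Illustrative use with OneMax (count of ones) as the original objective:
population = [[1, 0, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]]
objectives = with_helper_objective(sum, population[0], population)
```

Under such a scheme, individuals far from the rest of the population score well on the second objective even when their original fitness is poor, which is what allows these approaches to escape sub-optimal regions.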

In this paper, we present a comparative study of MOEAs and SOEAs when both types of schemes are separately applied to optimize each objective function of a bi-objective optimization problem. The study is not intended to provide a novel algorithm or to compare a new proposal with state-of-the-art algorithms. The main goal of this work is to investigate the effectiveness—or at least the opportunities—of applying multi-objective approaches to single-objective optimization. This study relies on comparisons of standard MOEAs and some general SOEAs when they seek to optimize—independently—every objective in a bi-objective problem. In this study, we consider the Knapsack Problem [16] and the Travelling Salesman Problem [17]. Both problems have been considered in numerous theoretical and experimental studies in the literature, so many effective solvers are known to perform successfully for a wide range of benchmarks.

Although many contributions have been made in the field of mathematical optimization, in this work we are interested in the analysis of a particular set of approximate algorithms, namely evolutionary algorithms, for both single- and multi-objective formulations of the problems. For this reason, our literature review delves into the field of evolutionary computation and not into other research areas that could also have great impact and interest nowadays. As an alternative, some experts have advocated pushing further the integration of machine learning and combinatorial optimization [18]. Some operations research communities are introducing machine learning as a modeling tool for discrete optimization [19] or to extract intuition and knowledge in order to dynamically adapt the optimization process [20]. Despite the existence of such a wide range of alternatives for tackling these optimization problems, it is important to note that we are interested in the comparative analysis of single- and multi-objective evolutionary algorithms. Thus, the selected optimization problems can be understood as simple use cases for our experimental study.

For the experiments, we have selected an extensive and diverse set of problem instances with different features, sizes and complexities. All the instances have two optimization objectives, so multi-objective approaches can be applied to them. For the optimization process, three MOEAs and two SOEAs have been analyzed. Each SOEA is applied twice to each problem instance (once for each objective), so that the optimized values for each of the two objectives can be compared to the multi-objective solutions offered by the MOEAs in question. The rest of this paper is organized as follows: Section 2 describes the formulation of the two problems selected for this study, as well as the set of instances solved during the experimental process. Then, Section 3 provides an overview of the approaches, both MOEAs and SOEAs, applied during this study. A detailed description of the experimental analysis and the underlying results, as well as their discussion, are presented in Section 4. Finally, the conclusions and some lines of future work are presented in Section 5.

#### **2. Problems: Formulation and Instances**

This section presents two well-known problems, the Knapsack Problem (KNP) [16] and the Travelling Salesman Problem (TSP) [17], which we have selected to conduct our experimental study. For each problem, a formulation involving two objectives is described, as well as the corresponding set of instances. Note that all instances presented below are bi-objective instances, i.e., every instance provides the information required to compute two different objective functions. For the single-objective approaches, we simply use one of the objectives and discard the other, depending on the objective being analyzed at a particular moment.

#### *2.1. The Bi-Objective Knapsack Problem (BOKNP)*

We consider the one-dimensional 0/1 knapsack problem with two objectives, where fractional items are not allowed and each item is available only once. This multi-objective one-dimensional binary knapsack problem can be defined as follows. Given a set of items $J = \{1, \dots, n\}$, each with an associated weight $w\_j \in \mathbb{N}^{*}$ and a profit $c\_j^k \in \mathbb{N}\_0$ for each objective $k \in K = \{1, \dots, p\}$, the problem seeks to select the subset of $J$ whose total weight does not exceed a fixed capacity $W \in \mathbb{N}^{*}$, while simultaneously maximizing the accumulated profit according to each objective in $K$. Mathematically, the problem can be formulated as follows [21]:

$$\begin{array}{rl}\max f\_1(\mathbf{x}) = & \sum\_{j=1}^{n} c\_j^1 x\_j\\ \vdots & \\ \max f\_p(\mathbf{x}) = & \sum\_{j=1}^{n} c\_j^p x\_j\\ \text{subject to} & \sum\_{j=1}^{n} w\_j x\_j \le W, \quad x\_j \in \{0, 1\},\; j = 1, \dots, n \end{array} \tag{1}$$

Moreover, and without loss of generality, we assume that $c\_j^k \geq 0$ and $w\_j \leq W$ for all $j \in \{1, \dots, n\}$ and all $k \in \{1, \dots, p\}$. Since, in this work, we are interested in a bi-objective formulation of this problem, we let $p = 2$, so that $K = \{1, 2\}$ and there are two functions ($f\_1$ and $f\_2$) to be optimized.
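The formulation above can be checked with a small evaluation routine. The sketch below, with illustrative names and a plain Python list encoding of our own choosing, computes both profit sums of Equation (1) for a candidate selection and rejects solutions that exceed the capacity:

```python
def evaluate_boknp(x, profits, weights, W):
    """Evaluate a 0/1 knapsack solution for the bi-objective
    formulation of Equation (1): returns (f1, f2) if the capacity
    constraint holds, or None for an infeasible selection.

    x        -- list of 0/1 decisions, one per item
    profits  -- (c1, c2): two profit lists, one per objective
    weights  -- list of item weights w_j
    W        -- knapsack capacity
    """
    if sum(w * xj for w, xj in zip(weights, x)) > W:
        return None  # violates the constraint sum(w_j * x_j) <= W
    c1, c2 = profits
    f1 = sum(c * xj for c, xj in zip(c1, x))
    f2 = sum(c * xj for c, xj in zip(c2, x))
    return f1, f2
```

For instance, with profits ([3, 5, 7], [2, 4, 6]), weights [2, 3, 4] and capacity 7, the selection [1, 0, 1] weighs 6 and yields the objective pair (10, 8), while selecting all three items is infeasible.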

For this bi-objective knapsack problem, we use the "MOKP" benchmark data sets available in the MOCOlib project [22]. MOCOlib is a collection of data sets and links for a variety of multi-objective combinatorial optimization problems. In this collection, we found three different sets of instances that are suitable for the bi-objective 0/1 unidimensional knapsack problem defined herein.

The data files themselves contain a description of the instances. Table 1 summarizes the main features of the different sets of instances, as well as their original references. Some of the key points for each data set are briefly described here:

1. **Data set 1A**: consists of five data files (instances) for the bi-objective 0/1 unidimensional knapsack problem. The values for the profits and weights have been uniformly generated. The number of items in the instances ranges from 50 to 500. The tightness ratio (Equation (2)) is in the range [0.11, 0.92].

$$r = \frac{W}{\sum\_{j=1}^{n} w\_j} \tag{2}$$

2. **Data set 1B**: consists of three subsets of instances:
	- **1B/A**: the weights and profits are uniformly distributed within the range [1, 100].
	- **1B/B**: these instances are created starting from data set 1B/A by defining the objectives in reverse order.
	- **1B/C**: the profits are generated with plateaus of values of length ≤ 0.1 × *n*.
3. The third set of instances comprises three families with different degrees of correlation:
	- **UNCOR**: 20 uncorrelated instances of 50 items. The profit vectors $c\_j^1$, $c\_j^2$ and the weight vector $w\_j$ are uniformly generated at random in the range [1, 300] for ten items, while for the remaining ones the range [1, 1000] is considered.
	- **WEAK**: 15 weakly correlated instances ranging in size from 50 to 1000 items, where $c\_j^1$ is correlated with $c\_j^2$, i.e., $c\_j^2 \in [111, 1000]$ and $c\_j^1 \in [c\_j^2 - 100, c\_j^2 + 100]$. The weight values $w\_j$ are uniformly generated at random in the range [1, 1000].
	- **STRONG**: 15 strongly correlated instances with the number of items ranging between 50 and 1000. The weights $w\_j$ are uniformly generated at random and are correlated with $c\_j^1$, i.e., $w\_j \in [1, 1000]$ and $c\_j^1 = w\_j + 100$. The value of $c\_j^2$ is uniformly generated at random in the range [1, 1000].
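To make the WEAK and STRONG correlation structures concrete, the sketch below generates instances following the stated distributions, together with the tightness ratio of Equation (2). The function names, the seeding scheme, and the use of Python's `random` module are our own assumptions, not the generators used by the original benchmark authors.

```python
import random

def weak_instance(n, seed=0):
    """Weakly correlated instance, following the WEAK description:
    c2_j in [111, 1000], c1_j in [c2_j - 100, c2_j + 100],
    w_j in [1, 1000]."""
    rng = random.Random(seed)
    c2 = [rng.randint(111, 1000) for _ in range(n)]
    c1 = [rng.randint(c - 100, c + 100) for c in c2]
    w = [rng.randint(1, 1000) for _ in range(n)]
    return c1, c2, w

def strong_instance(n, seed=0):
    """Strongly correlated instance, following the STRONG description:
    w_j in [1, 1000], c1_j = w_j + 100, c2_j in [1, 1000]."""
    rng = random.Random(seed)
    w = [rng.randint(1, 1000) for _ in range(n)]
    c1 = [wj + 100 for wj in w]
    c2 = [rng.randint(1, 1000) for _ in range(n)]
    return c1, c2, w

def tightness_ratio(W, weights):
    """Tightness ratio r = W / sum(w_j) of Equation (2)."""
    return W / sum(weights)
```

A small tightness ratio indicates a tight capacity constraint where only a few items fit, while a ratio close to 1 means almost all items can be packed.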


**Table 1.** Bi-objective KNP instances. The instance number (*s*), the number of items (*n*) and tightness ratio (*r*) refer to the parameters of the instances.

#### *2.2. The Bi-Objective Travelling Salesman Problem (BOTSP)*

In this work we consider a generalization of the classical Travelling Salesman Problem (TSP), which is defined as follows. Given a complete graph, or fully connected network, $G = (V, E)$ with vertex set $V$ (cities), edge set $E$ (paths between any two cities $i, j \in \{1, \dots, n\}$), and edge values $c\_{ij}^k$ with $k \in K = \{1, \dots, p\}$ (the cost between city $i$ and city $j$ under objective $k$; it could be distance, time, energy, etc.), the problem is to find the Hamiltonian cycle [26] (tour), i.e., a single cyclic circuit along the edges of $G$ in which each vertex (city) is visited exactly once, such that the total tour cost for each objective $k$, defined as the sum of the edge costs $c\_{ij}^k$ along the tour, is minimized. A more detailed description of this multi-objective formulation of the TSP can be found in [27].

Given a graph $G = (V, E)$, where $V = \{1, 2, \dots, n\}$ and $E = \{(i, \pi(i)), i \in V\}$, let $\Pi\_n$ denote the set of all possible permutations of $n$ cities. For a permutation $\pi \in \Pi\_n$, $\pi(i)$ represents the city that follows city $i$ on the tour represented by $\pi$. A permutation whose graph is a Hamiltonian cycle is called a cyclic permutation. We denote by $\Pi^c$ the set of all cyclic permutations of $n$ cities. Therefore, a TSP tour can be represented by a permutation $\pi = (\pi(1), \dots, \pi(n)) \in \Pi^c$. Thus, the formulation of the multi-objective TSP is given by:

$$\min\_{\pi \in \Pi^c} \sum\_{i=1}^{n-1} c\_{\pi(i), \pi(i+1)}^k + c\_{\pi(n), \pi(1)}^k \qquad k = 1, \dots, p \tag{3}$$

Since in this work we are interested in multi-objective problems with two optimization objectives, we have considered the bi-objective TSP formulation. Thus, the general Equation (3) is considered with $k \in \{1, 2\}$. Figure 1 clarifies the differences between a single-objective and a bi-objective formulation of the TSP. Figure 1a illustrates a single-objective formulation of the TSP, where there is only one cost per edge, thus defining a single optimization function. As a result, the single-objective formulation of the TSP consists of a list of $n$ cities and a set of costs, a single cost for each pair of cities, which are all stored in a cost matrix $D$ with elements $c\_{ij}$, with $i, j \in \{1, \dots, n\}$, and diagonal elements $c\_{ii} = 0$. In contrast, Figure 1b shows a bi-objective instance of the TSP. As shown in the example, a bi-objective formulation considers instances with two different costs for each edge: one cost for objective 1 and another for objective 2. Instead of a single cost matrix, in a multi-objective formulation we need to manage one cost matrix per objective function.
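Under this representation, evaluating a tour against Equation (3) reduces to summing consecutive edge costs once per cost matrix. The sketch below assumes 0-indexed cities and one n × n cost matrix per objective; the function name is an illustrative choice of ours.

```python
def tour_costs(tour, cost_matrices):
    """Total cost of a cyclic tour under each objective, as in
    Equation (3): the sum of consecutive edge costs plus the edge
    that closes the cycle.

    tour          -- a permutation of the city indices 0..n-1
    cost_matrices -- one n x n cost matrix per objective
    """
    n = len(tour)
    totals = []
    for c in cost_matrices:
        total = sum(c[tour[i]][tour[i + 1]] for i in range(n - 1))
        total += c[tour[-1]][tour[0]]  # close the cycle
        totals.append(total)
    return tuple(totals)
```

For a bi-objective instance the function returns a pair (f1, f2), one total per cost matrix, which is the objective vector a MOEA would work with.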

**Figure 1.** Illustration of single and bi-objective TSP graphs. (**a**) A graph with weights (distances) on its edges as a single-objective optimization problem. (**b**) A graph with weights (distances and times) on its edges as a bi-objective optimization problem.

For this bi-objective formulation of the problem, we need a suitable set of problem instances covering different types, sizes and cost structures. Two types of instances are selected for the experimental study presented in this work. First, in the Euclidean instances, the edge costs correspond to the Euclidean distance between two points on a plane, randomly sampled from a uniform distribution. Meanwhile, in the clustered instances, the points are randomly clustered on a plane, and the edge costs again correspond to the Euclidean distance. Then, the bi-objective instances are obtained by combining a pair of single-objective instances. Table 2 shows the information for the 19 problem instances of symmetric bi-objective TSPs with 100, 300 and 500 cities (these instances are available at http://www-desir.lip6.fr/~lustt/). These instances have been used in several related works [28–30] and have been successfully solved in the literature. In fact, their exact fronts were already published by K. Florios (optimal fronts are available at https://sites.google.com/site/kflorios/motsp). More details on the selected instances are given below:

• The **TSPLIB Euclidean Instances** [31] (files with prefix kro, from the authors Krolak/Felts/Nelson) consist of 13 instances with two objectives, which are generated on the basis of the single-objective TSP instances from TSPLIB [32] (Library of Traveling Salesman Problems). The TSPLIB is a library of sample instances for the TSP (and related problems) from various sources and with different features.
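A bi-objective Euclidean instance of the kind described above can be sketched by pairing two independently generated cost matrices, one per objective. The coordinate range, the seeding, and the function names below are illustrative assumptions of ours, not the exact construction used for the benchmark instances.

```python
import math
import random

def euclidean_cost_matrix(n, seed):
    """Cost matrix of a random Euclidean TSP instance: n points drawn
    uniformly on a plane, with pairwise Euclidean distances as costs."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, 1000), rng.uniform(0, 1000)) for _ in range(n)]
    return [[math.dist(p, q) for q in pts] for p in pts]

def bi_objective_instance(n, seeds=(1, 2)):
    """Bi-objective instance obtained by combining a pair of
    single-objective instances: one cost matrix per objective."""
    return [euclidean_cost_matrix(n, s) for s in seeds]
```

By construction each matrix is symmetric with a zero diagonal, matching the symmetric bi-objective TSP instances listed in Table 2.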



**Table 2.** Bi-objective TSP instances.
