Article

Weight Vector Definition for MOEA/D-Based Algorithms Using Augmented Covering Arrays for Many-Objective Optimization

1 Information Technology Research Group (GTI), Universidad del Cauca, Popayán 190001, Colombia
2 Intelligent Management Systems, Fundación Universitaria de Popayán, Popayán 190001, Colombia
3 CINVESTAV Tamaulipas, Ciudad Victoria 87130, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(11), 1680; https://doi.org/10.3390/math12111680
Submission received: 28 March 2024 / Revised: 22 April 2024 / Accepted: 25 April 2024 / Published: 28 May 2024
(This article belongs to the Special Issue Optimization Algorithms: Theory and Applications)

Abstract
Many-objective optimization problems are increasingly common. Among the evolutionary algorithms used to solve them, the decomposition-based approach stands out, with MOEA/D and its variations playing significant roles. MOEA/D variations seek, among other goals, to improve the definition of the weight vectors, the dynamic adjustment of weight vectors during the evolution process, and the evolutionary operators, to use alternative decomposition methods, and to hybridize with other metaheuristics. Although the success of MOEA/D depends largely on how well the weight vectors are defined when decomposing the problem, this topic has received less research attention than the others. This paper proposes using a new mathematical object called augmented covering arrays (ACAs) that enables a better sampling of the interactions of M objectives using the smallest number of weight vectors, based on an interaction level (strength) defined a priori by the user. The proposed method obtains better results, measured in inverted generational distance, using small to medium populations (up to 850 solutions) of 30 to 100 objectives over DTLZ and WFG problems, compared against the traditional weight vector definition used by MOEA/D-DE and against results obtained by NSGA-III. Other MOEA/D variations can incorporate the proposed approach and thus improve their results.

Graphical Abstract

1. Introduction

The aim of evolutionary algorithms for multi-objective optimization, better known in the state of the art as multi-objective evolutionary algorithms (MOEAs) [1], is to find a set of solutions (rather than a single solution) to problems with two or three objectives, called multi-objective optimization problems (MOPs), where in many cases these objectives conflict. In the last two decades, different algorithms have been proposed to address these problems, with the most successful proposals classified into three main approaches: dominance-based, indicator-based, and decomposition-based [2]. Prominent among these are NSGA-II [3], SPEA2 [4], IBEA [5], SMS-EMOA [6], MSOPS [7], and MOEA/D [8]. These algorithms are commonly used to optimize systems with a low number of objectives (up to three), such as the planning of air routes [9], the design of aqueducts and sewers [10], and the optimization of routes and frequencies for bus rapid transit systems [11]. However, when dealing with many-objective optimization problems (MaOPs), i.e., those with four (4) or more objectives, traditional MOEAs are prone to fail or converge to local optima because, among other complications, many objectives make it difficult to decide when one solution outperforms another as the space for representing the objectives becomes too large [12]. In recent years, several evolutionary algorithms (many-objective evolutionary algorithms, MaOEAs) have been proposed to optimize many objectives and seek to overcome the deficiencies of traditional MOEAs. These algorithms also follow different approaches; among the most important are [12]:
Scalar-function-based (decomposition/aggregation): The first group seeks to solve the problem by decomposing it using multiple weighted objective functions, in which each objective has different weight values in each function. The second group, based on aggregation, uses functions to combine groups of objectives, working with a much smaller number of them and solving them with traditional MOEAs. Within this approach, the MOEA/D algorithm [8] and its variations stand out, including those that improve the evolution operators based on differential evolution, such as MOEA/D-DE [13], MOEA/D-HSE [14], and MOEA/D-oDE [15].
Reference-set-based: These algorithms guide the search process based on a list of solutions in a reference set. Notable here is the improved two-archive algorithm (TAA) [16] and, in particular, version 3 of the non-dominated sorting genetic algorithm NSGA-III [17].
Quality-indicator-based: These algorithms transform the problem of many objectives to a problem of optimizing a single objective that represents how good the solutions are compared to the rest of the population (indicator). The most widely recognized approaches are IBEA [18], I-SIBEA [19], artificial bee colony algorithm (E-MOABC) [20], and hypervolume adaptive grid algorithm (HAGA) [21].
Dimensional-reduction-based: These algorithms take the objectives of the original problem and reduce them in a low-dimension representation using, for example, principal component analysis (PCA), unsupervised feature selection, and greedy techniques. Prominent in this approach is PCA-NSGA-II [22].
Space-partitioning-based: These algorithms optimize subsets of the problem objectives in each iteration of the evolutionary process. The εR-EMO [23] algorithm is a good example.
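To make the decomposition idea above concrete, the sketch below shows two scalarizations commonly used in MOEA/D-family algorithms, the weighted sum and the Tchebycheff function; the objective values, weight vector, and ideal point are invented for illustration and are not taken from this paper.

```python
# Two common MOEA/D scalarizations (illustrative sketch; values are invented).

def weighted_sum(f, w):
    """Weighted-sum scalarization: sum_i w_i * f_i."""
    return sum(wi * fi for wi, fi in zip(w, f))

def tchebycheff(f, w, z_star):
    """Tchebycheff scalarization: max_i w_i * |f_i - z*_i|."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

f = [0.4, 0.7, 0.2]   # objective values of one candidate solution
w = [0.2, 0.5, 0.3]   # one weight vector from the decomposition
z = [0.0, 0.0, 0.0]   # ideal point

assert abs(weighted_sum(f, w) - 0.49) < 1e-9
assert abs(tchebycheff(f, w, z) - 0.35) < 1e-9
```

Each weight vector defines one such scalar subproblem, so a population of N weight vectors yields N subproblems optimized concurrently; this is why how the weight vectors are chosen matters so much.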
The decomposition-based approach has attracted much attention from researchers in the area. In particular, MOEA/D [8] has benefited from a number of improvements with the following principal aims: (1) to develop new methods for the defining and dynamic adjustment of weight vectors that decompose the problem into multiple single-objective problems; (2) to use new decomposition approaches; (3) to ensure the efficient allocation of computational resources; (4) to improve the search process by modifying the selection, crossover, mutation, and replacement operations of the algorithm; and (5) to hybridize with dominance-based approaches.
One of the least-researched limitations of MOEA/D concerns the definition of the weight vectors that decompose the problem. In its original version, the algorithm seeks to sample the weightings of the different objectives uniformly. However, this is not guaranteed to be the most appropriate approach, especially when the number of objectives grows [24]. This method falls short because the interrelationships between objectives are not adequately sampled. In addition, an exponential increase in the number of weight vectors is required to obtain adequate sampling as the number of objectives grows [25].
Another option is based on the simplex-lattice design [21], in which the size of the population (the number of weight vectors) increases non-linearly with the number of objectives, and the user cannot freely define the size of the population. In addition, a uniform distribution of weight vectors does not ensure that the solutions sample the interaction between the objectives [22].
In this research [26], the use of augmented covering arrays (ACAs) is proposed for defining the weight vectors, considering that this new mathematical object guarantees the most significant coverage (a sampling with the highest coverage of interactions between several factors, in this case optimization objectives) with the least possible effort. ACAs are a new type of covering array (CA), and they are formally presented for the first time in Section 3.3. CAs, in general, have been used to support experimental design in fields such as agriculture, medicine, biology, and material design. More recently, they have become one of the most widely used tools for testing software and hardware. In all these fields, it is necessary to test combinations of different factors without conducting an exhaustive search, due to restrictions of cost, time, and effort [27,28].
Experimental results show that the proposed method obtains better results on DTLZ and WFG problems using small and medium populations than MOEA/D-DE and NSGA-III. MOEA/D-DE-ACA (a new MOEA/D-DE version that uses ACAs) obtains better inverted generational distance results for 30 to 100 objectives supported in the Friedman non-parametric and Holm post hoc tests. Execution time was also significantly reduced, using only 40.7% or 8.9% of the time used by MOEA/D-DE and 4.8% or 7% of that of NSGA-III. The results showed no significant differences between MOEA/D-ACA and MOEA/D using large populations, except in 90 objectives in which MOEA/D-ACA performs better. In addition, MOEA/D-ACA further reduces execution times by using only 2.7% of the execution time of MOEA/D-DE and 6.2% of that used by NSGA-III. Given such results, the different variations of MOEA/D and decomposition-based algorithms might be expected to incorporate the proposed approach for defining the weight vectors, thereby improving the literature results.
The rest of this document is organized as follows: Section 2 presents previous work on defining weight vectors in MOEA/D. Section 3 presents orthogonal arrays (OA), covering arrays (CA), and augmented covering arrays (ACA) and a comparison between them for the definition of weight vectors. Section 4 details the process of defining weight vectors based on augmented covering arrays within the multi-objective evolutionary algorithm based on decomposition with differential evolution (MOEA/D-DE-ACA). Section 5 describes the experiments, starting with the characteristics of the problems used (DTLZ and WFG), the quality measure used for the comparison, and the results of the three defined experiments, which include the comparison with the multi-objective evolutionary algorithm based on decomposition with a differential evolution approach (MOEA/D-DE) and the non-dominated sorting genetic algorithm version 3 (NSGA-III) from 10 to 100 objectives, and the comparison against other proposals of the state of the art in constrained problems. Finally, Section 6 presents conclusions and recommends directions for future research.

2. Related Studies

MOEA/D is an algorithm that decomposes a multi-objective optimization problem into several single-objective optimization subproblems. MOEA/D employs a method based on populations to optimize these subproblems concurrently and to find the Pareto front (PF) of the problem. The literature reports much theoretical and practical work using MOEA/D and variants [29]. Considering that the definition of weight vectors in MOEA/D significantly impacts the algorithm’s results, previous works that have sought to improve this definition are presented below. Most recent work is focused more on the dynamic adjustment of weights during the evolutionary process than on the initialization process.
Many previous studies have looked at how to generate uniform weight vectors, and they can be organized into three classical methods: (1) simplex-lattice design, first used by Scheffe in 1958 to obtain uniformly distributed weight vectors [30]; (2) simplex-centroid design, presented by Scheffe in 1963 [31]; and (3) axial design, put forward by Cornell in 1975 [32]. Using these concepts, the transformation method (uniform design) tries to find a uniformly distributed set of aggregation weight vectors of any desired size in the objective space [33,34]. The original MOEA/D version uses the simplex-lattice design method to generate the weight vectors, but this method has three main weaknesses. The first is that the resulting weight vector distribution is not very uniform for three or more objectives. The second is that the population size, or the number of weight vectors, increases non-linearly with the number of objectives and cannot be defined at will. The third is that a uniform distribution of weight vectors does not guarantee that uniformly distributed Pareto optimal solutions are obtained [35].
In 2012 [36], a new version of MOEA/D with a uniform design, called UMOEA/D, was proposed and compared with MOEA/D and NSGA-II on some scalable test problems with three to five objectives, obtaining the best results. The authors note that the number of weight vectors is restricted for the three classical methods, whereas the “practical” number of weight vectors was very flexible in most of their experiments. This paper has a significant number of citations in Scopus related to new proposals for the dynamic adjustment of weights during the evolutionary process, studies of the effect of weight vectors on the performance of decomposition-based algorithms, and several reviews of decomposition-based methods, among others. It is worth highlighting the section “The Study of Generation Strategy of Weight Vector” in the “Survey of Decomposition Based Evolutionary Algorithms for Many-Objective Optimization Problems” published in 2022 by Xiaofang Guo [37], which mentions four systematic design methods for weight vector generation: the simplex-lattice design used in MOEA/D, uniform design for experiments with mixtures (UDEM) used in UMOEA/D, a combination of the previous two used in MOEA/D-UMD, and a two-layer reference vector generation approach.
In 2014 [38], MOEA/D-UDM was proposed, with uniform decomposition measurement to obtain uniform weight vectors in any amount and a modified Tchebycheff decomposition. This proposal tackles two difficulties in applying MOEA/D to solve MaOPs, namely: (1) the quantity of generated weight vectors is predetermined, and these vectors are primarily concentrated along the boundary of the objective space for MaOPs, and (2) in the Tchebycheff decomposition method employed by MOEA/D, the association between a subproblem’s optimal solution and its weight vectors exhibits non-linearity.
Also, in 2014 [39], based on the geometric relationship between the weight vectors and the corresponding Tchebycheff-based optimal solutions, an initialization method for the weight vectors called WS transformation, together with an adaptive adjustment of the weight vectors, was proposed in a new algorithm called MOEA/D-AWA. WS transformation is redundant in two objectives. However, experimental studies on ten ZDT and DTLZ reference problems with three objectives demonstrated that MOEA/D-AWA obtains much better uniformly distributed Pareto optimal solutions. This work is also highly cited in Scopus for comparisons against it, as a reference for an algorithm that initializes the weight vectors differently from the simplex-lattice design method, and for incorporating dynamic weight adjustment based on the same technique used for weight vector generation. A survey presented in 2020 cited this work and shows, in Sections III.B.1, III.B.4, and III.D, a list of weight vector generation methods for multi- and many-objective problems using the MOEA/D framework.
In 2015 [40], MOEA/D-UD was proposed. This work modified the initial definition of weights using a new method based on an experimental design called UD. It also proposed a dynamic adjustment of the weight vectors to remove them from crowding regions and add new ones into the sparse regions, previously distinguishing truly sparse regions from pseudo-sparse regions of the PF. MOEA/D-UD was compared with MOEA/D-DE, MOEA/D-AWA, and NSGA-II on nineteen test instances. The results show that MOEA/D-UD can obtain a well-converged and well-diversified set of solutions within an acceptable run time.
In 2017 [41], non-uniform weight vector distribution strategies were used to modify MOEA/D-DE to solve the unit commitment (UC) problem (a mixed-integer optimization problem) in an uncertain environment. The authors evaluated two methods: the first initially generates weight vectors using the simplex-lattice design method and then randomly removes weight vectors from the outer layers of the distribution to help the algorithm focus its search more toward the center of the Pareto front; the second uses a sinusoidal function to generate the weight vector distribution. The second proposal significantly outperforms the other variants and the traditional MOEA/D-DE method on the UC problem, providing a much better distribution of solutions.
Also, in 2017 [42], an evolutionary method for weight vector generation was presented. The algorithm initially creates a population of n weight vectors using a Latin hypercube design [43], then normalizes the population and evaluates the distance between all pairs of weight vectors to calculate the fitness value of each vector. The evolutionary process is then executed until a stopping criterion is reached; in each iteration, a new weight vector is created by applying a Gaussian perturbation to a weight vector randomly selected from the population; next, the Euclidean distances between the new vector and the rest of the weight vectors in the population are calculated, the fitness of the new vector is defined, and the new vector replaces the worst vector in the population if it is better. The fitness function corresponds to the sum of the Euclidean distances to the closest neighbors in the population. Unlike the simplex-lattice design method, this method can create weight vectors without restricting their number. This paper has been cited by nine documents in Scopus, most of them related to multi- and many-objective applications.
In 2018 [44], an alternative proposal was presented with two types of weight vector adjustment for many-objective optimization, called MaOEA/D-2ADV. After performing the first evolution iteration, this proposal searches for the weight vectors with better solutions close to the optimal PF; if it finds a vector that does not satisfy certain qualifying conditions, it is eliminated and a new one is created as a replacement. It then uses a Pareto-dominance-based mechanism to detect the effectiveness of each vector. Finally, where a vector points in the wrong direction, the vector is adjusted to adapt better to the PF. This algorithm was compared with MOEA/D-AWA and RVEA using IGD on DTLZ problems of up to 10 objectives, concluding that MaOEA/D-2ADV is suitable for problems with a disconnected PF with 4 to 10 objectives.
The initialization of weight vectors using a self-organizing map (SOM) was put forward in a proposal called MOEA/D-SOM (2018) [45]. The normalized weight vectors are sent to SOM to create neighborhoods or groups of vectors. Those closest to the PF based on Euclidean distance are then selected. MOEA/D-SOM was evaluated in many-objective problems using 16 problems with and without constraints, including DTLZ, TOY, and MAOP. Their results were compared with those of MOEA/D-AWA, MOEA/DD, and M2M, among others, using IGD. Compared to the other algorithms, this proposal was observed to be superior, solving MaOPs with a degenerate PF [46].
Also, in 2018 [47], considering the fundamental role of weight vectors in ensuring good diversity and convergence of solutions in different problems, especially problems with a complex PF (discontinuous or with sharp peaks), it was identified that the uniform distribution of the weight vectors in MOEA/D does not allow a set of solutions with good diversity to be obtained. The authors thus proposed the improved multi-objective evolutionary algorithm based on decomposition with adaptive weight adjustment, IMOEA/DA. This proposal first uses the uniform design method and crowding distance to generate a set of evenly distributed weight vectors. Then, according to the distances of the dominated solutions, it adapts the weight vectors to redistribute them in the subobjective spaces. The algorithm also uses a selection strategy to help each subobjective space have at least one solution. This proposal was compared with state-of-the-art algorithms such as NSGA-II, MOEA/D, MOEA/D-AWA, EMOSA, RVEA, and KnEA on different test functions (DTLZ, WFG, UF, and ZDT) using three performance metrics: IGD, hypervolume (HV), and generational distance (GD). The Wilcoxon non-parametric test was used to analyze the results. With 95% significance, it was determined that the proposal could find a set of solutions with greater diversity and convergence than the other compared algorithms.
The penalty-based boundary intersection (PBI) approach to defining weight vectors obtains better results in concave and convex problems than the uniform random definition of weights and the Tchebycheff method. However, its performance is degraded in problems with a complex PF because it defines fixed penalty values. As a result, in 2019 [48], an adaptive penalty scheme (AAP) was proposed to dynamically adjust each weight vector’s penalty value during the algorithm’s evolutionary process. This proposal, called MOEA/D-AAP, was evaluated using six reference problems (F1 to F6) and compared with MOEA/D-DE and MOEA/D-STM, concluding that the proposed approach significantly improved the results measured in IGD.
Also, in 2019 [49], MOEA/HD was proposed, a method that uses a hierarchical decomposition strategy. The scalar subproblems are in different weight hierarchies, and the search direction of the solutions in the lower hierarchy subproblems is adjusted adaptively based on the results of the upper hierarchy. This proposal was evaluated and compared with four state-of-the-art proposals: MOEA/D-AWA, NSGA-III, MOEA/D-DRA, and NSGA-II in the problems DTLZ, WFG, and JY using IGD and HV and obtained the best results in all cases evaluated.
To date, there is no experimental comparison between the different proposals for the generation of weight vectors in MOEA/D; all published proposals were evaluated using different problems and algorithms and with different numbers of objectives and generated solutions. Therefore, it cannot be established that one algorithm dominates another; it can only be stated that all proposals are better than the simplex-lattice design method originally used in MOEA/D. Carrying out a fair experimental comparison of all these proposals therefore remains a valuable direction for future work.

3. Weight Vector Definition Using Combinatorial Designs

As previous studies reported, the definition of weight vectors in MOEA/D significantly impacts the algorithm’s results, and this depends on how the weight vectors are sampled in the objective space. Therefore, in this work, we sought to explore alternative combinatorial designs to define weight vectors for MOEA/D-based algorithms to obtain better results over different kinds of many-objective problems. As a result, the alternative selected was augmented covering arrays (ACAs), but orthogonal arrays (OAs) and covering arrays (CAs) were also analyzed.
These three mathematical objects are represented as matrices and are characterized by four parameters: N, the number of rows, which in MOEA/D-based algorithms corresponds to the size of the population (each row defines the weight vector of one solution in the population); k, the number of columns, which corresponds to the number of objectives of the problem being solved; v, the alphabet from which the values of each cell of the matrix are taken (values from 0 to v − 1); and t, the degree of interaction between the columns: this parameter requires that, in every N × t submatrix, each of the v^t possible t-tuples occurs exactly once in OAs or at least once in CAs and ACAs [50]. These objects are usually denoted as OA (N; t, k, v), CA (N; t, k, v), and ACA (N; t, k, v). Table 1 summarizes the variables used in this section of the document.
To understand how each row of these three combinatorial designs can be used to define the weight vectors, we first designate M as a matrix of size N × k associated with any of the combinatorial designs (OA, CA, or ACA) and let M_{i,j}, where 0 ≤ i ≤ N − 1 and 0 ≤ j ≤ k − 1, represent the value in the i-th row of the j-th column. To derive the weight of the j-th column using the i-th row, M_{i,j} / α_i is calculated, where α_i = Σ_{c=0}^{k−1} M_{i,c} (the case of a row with all zeros is excluded). Considering k objectives (k ≥ 1) and a value α that defines the level of granularity of the weights (α ≥ 1), the linear Diophantine equation with unit coefficients (LDEU) a_0 + ⋯ + a_{k−1} = α allows sampling the weights of a row with a granularity of 1/α [36,45,51]. For example, Table 2 shows on its left side the first four rows of the ACA (42; 2, 10, 5) presented below in Table 5, and on the right side, the weight vectors defined row by row. It can be seen that, in order to transform the ACA (OA or CA) from its integer domain to the real domain of the weight vectors, only a normalization process is required (the sum of the components of each weight vector is 1, as shown in the last column), dividing each row by its α value (α_i). The value of α_i defines the granularity, i.e., the coarseness, of the weights. For instance, if α_i = 10, the granularity is in tenths, and if α_i = 100, the granularity is in hundredths.
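The row-to-weight normalization just described can be sketched in a few lines of Python; the example row below is hypothetical (not one of the rows of Table 2), and the function name is ours.

```python
def row_to_weight_vector(row):
    """Normalize an integer row of an OA/CA/ACA into a weight vector by
    dividing each cell by alpha_i = sum of the row (all-zero rows excluded)."""
    alpha = sum(row)
    if alpha == 0:
        raise ValueError("all-zero rows are excluded")
    return [cell / alpha for cell in row]

row = [1, 0, 3, 2, 0, 1, 4, 0, 2, 2]   # hypothetical row, k = 10, alphabet v = 5
w = row_to_weight_vector(row)          # granularity 1/15, since alpha_i = 15
assert abs(sum(w) - 1.0) < 1e-9        # components sum to 1
```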
The number of solutions of the LDEU a_0 + ⋯ + a_{k−1} = α is equal to C(α + k − 1, k − 1), which is of exponential order; but since exploration is also required over the possible values of α, an exhaustive search over granularities from 1 to α with k objectives would involve exploring Σ_{i=1}^{α} C(i + k − 1, k − 1) = C(α + k, k) − 1 weight vectors. For example, with α = 40 and k = 10, the space to be explored contains 10,272,278,169 possible weight vectors.
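The counting argument above is easy to verify with `math.comb`; the function names below are ours.

```python
from math import comb

def ldeu_solutions(alpha, k):
    """Non-negative integer solutions of a_0 + ... + a_{k-1} = alpha:
    C(alpha + k - 1, k - 1)."""
    return comb(alpha + k - 1, k - 1)

def search_space(alpha, k):
    """Solutions over all granularities 1..alpha: C(alpha + k, k) - 1."""
    return comb(alpha + k, k) - 1

# The example from the text: alpha = 40 and k = 10 objectives.
assert search_space(40, 10) == 10_272_278_169
# The closed form agrees with summing the per-granularity counts.
assert search_space(40, 10) == sum(ldeu_solutions(a, 10) for a in range(1, 41))
```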

3.1. Orthogonal Arrays

A first way of sampling weight vectors is to use orthogonal arrays of index unity (OAs). OAs are described by OA (N = v^t; t, k = v + 1, v). In an OA, each submatrix of size N × t contains as a row each t-tuple over the v symbols exactly once [52]. This constraint limits the existence of a solution for all combinations of k, v, and t [53]. The construction of OAs is an open topic for values of v that are not prime powers, but a general solution exists for OAs whose values of v are prime powers. Not having OAs for values of v that are not prime powers represents a significant disadvantage when the number of columns (objectives of the problem) is large, since the number of rows will always be N = v^t, where v is the smallest prime power that satisfies v + 1 ≥ k. For example, for k = 100 objectives and t = 2, the OA that should be used is OA (N = 10,201; t = 2, k = 102, v = 101). This OA implies a huge population size that may be unfeasible to use in practice.
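The growth of N = v^t can be checked with a small helper that finds the smallest prime power v satisfying v + 1 ≥ k; the function names are ours, and the assertions reproduce the two OA configurations mentioned in the text.

```python
def is_prime_power(n):
    """True if n = p^m for a prime p and m >= 1."""
    if n < 2:
        return False
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1  # prime power iff p was the only prime factor
    return True  # n itself is prime

def oa_population_size(k, t):
    """Rows N = v^t of the index-unity OA, with v the smallest
    prime power satisfying v + 1 >= k."""
    v = k - 1
    while not is_prime_power(v):
        v += 1
    return v, v ** t

assert oa_population_size(10, 2) == (9, 81)        # OA (81; 2, 10, 9)
assert oa_population_size(100, 2) == (101, 10201)  # OA (10,201; 2, 102, 101)
```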
For a problem with 10 objectives (k = 10) and an interaction level of 2 (t = 2), the value of v that satisfies the constraint v + 1 ≥ k is v = 9 (the OA alphabet). Therefore, to define the weight vectors in the algorithm, it is necessary to have the OA (81; 2, 10, 9). This OA is constructed using the Bush algorithm [54] and is presented in Table 3. The columns are designated a_0, …, a_9, and their sum is the value of α. As each cell can take values from 0 to v − 1 = 9 − 1 = 8 and there are 10 columns (k), the possible values of α vary from 0 to k × (v − 1) = 10 × 8 = 80. The total sampling space is given (as described above) by C(kv, k) − 1.
The weight vectors are obtained by dividing the columns a_0, …, a_9 by the value α of each row. The sampling provided by this OA is concentrated in central values: 9 occurrences for each α ∈ {37, 38, 39, 40, 41, 42, 43, 44}, 1 occurrence for each α ∈ {0, 9, 18, 27, 36, 45, 54, 63, 72}, and the remaining 64 values of α are not sampled (79% (64/81) of α values unsampled).
From the above, given the poor sampling obtained, it can be inferred that the use of OAs as a sampling mechanism for possible objective weight vectors is not a good alternative, due to their sampling properties and because the size of the OA for more than 40 objectives and granularities of the order of 40 or more would demand a huge population size. A final problem that can occur with OAs is best understood by reviewing, for example, rows 80 and 81 of the OA presented in Table 3. These rows are {7, 7, 7, 7, 7, 7, 7, 7, 7, 0} and {8, 8, 8, 8, 8, 8, 8, 8, 8, 0}, which, when divided by their corresponding α values, generate the same weight vector {0.11, 0.11, 0.11, 0.11, 0.11, 0.11, 0.11, 0.11, 0.11, 0}. The same situation occurs with other rows, for example, rows 2, 3, 4, and 5. These examples show that several rows sample the same weight vector, which is not desirable in algorithms based on MOEA/D.

3.2. Covering Arrays

A covering array CA (N; t, k, v) is a matrix with N rows and k columns, where each cell has one of v possible symbols such that every N × t submatrix contains as a row each t-tuple over the v symbols at least once [50]. The covering array number CAN (t, k, v) defines the minimum value of N such that a CA (N; t, k, v) exists. CAs have been used successfully in different areas, including experiment design and hardware and software testing [55].
CAs generally can be seen as a sampling mechanism in several contexts [56,57,58,59,60]. In this paper, we use CAs to sample solutions of multiple linear Diophantine equations with unit coefficients, where each row of the CA is used to construct a solution of an LDEU. As with OAs, each row in a CA is a possible solution of a corresponding LDEU. The LDEU associated with a row of a CA with k columns and alphabet v is a_0 + ⋯ + a_{k−1} = α, where the value of α is obtained as the sum of the elements in the row. Its values are thus described by α ∈ {0, …, k(v − 1)} (the zero value corresponds to a row with all cells equal to zero, and k(v − 1) corresponds to a row with all cells equal to v − 1).
Given this, the total number of possible solutions of all the LDEUs potentially sampled in a CA grows exponentially with the value of k (the number of columns in a CA, which corresponds to the number of objectives in many-objective optimization) and the value of v (the alphabet, which in the end determines the set of possible α values and transitively defines the granularity of the weight assignment). Note that the cardinality of the space of weights is C(kv, k) − 1. Nevertheless, the number of solutions sampled by a CA is its number of rows. This number is bounded asymptotically by the expression that defines the covering array number (CAN). In [61,62,63], the CAN value corresponds to the Stein–Lovász–Johnson (SLJ) bound: let t, k, and v be integers with k ≥ t ≥ 2 and v ≥ 2; then CAN(t, k, v) ≤ (log C(k, t) + t log v) / log(v^t / (v^t − 1)). In this sense, the number of solutions sampled by the rows of a CA is much smaller than the number of possible solutions corresponding to all the LDEUs sampled. For instance, for k = 10, v = 5, and t = 2, a possible CA will have 36 rows, while the total number of LDEU solutions is 2,118,759.
Table 4 shows the contents of CA (36; 2, 10, 5) and, in the 11th column (the last one), the value of α (the sum of all the elements in each row of the CA). It can be seen that 13 ≤ α ≤ 30, with 17 different values of α. The number of α values that are not sampled is 24, so the proportion of non-sampled values is 58% (24/41), because the total number of classes of α is 1 + k(v − 1) = 41. This proportion of non-sampled values is lower than for OAs, and CAs impose fewer constraints or problems when defining weight vectors in MOEA/D-based algorithms.
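The defining CA property (every N × t column submatrix contains each t-tuple at least once) and the SLJ bound can both be checked mechanically. The sketch below is ours: the 4-row binary array is a textbook strength-2 example, not one of the paper's arrays, and the bound is evaluated for the CA (36; 2, 10, 5) configuration from the text.

```python
from itertools import combinations, product
from math import comb, log

def is_covering_array(matrix, t, v):
    """Check the CA property: every N x t column submatrix contains
    each of the v^t t-tuples over {0, ..., v-1} at least once."""
    k = len(matrix[0])
    needed = set(product(range(v), repeat=t))
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in matrix}
        if not needed <= seen:
            return False
    return True

def slj_bound(t, k, v):
    """Stein-Lovasz-Johnson upper bound on CAN(t, k, v):
    (log C(k, t) + t log v) / log(v^t / (v^t - 1))."""
    return (log(comb(k, t)) + t * log(v)) / log(v ** t / (v ** t - 1))

# A 4-row strength-2 binary array on 3 columns (also an OA of index unity).
rows = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
assert is_covering_array(rows, t=2, v=2)

# Non-constructive upper bound on N for the CA (36; 2, 10, 5) configuration.
bound = slj_bound(2, 10, 5)
```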

3.3. Augmented Covering Arrays

An augmented covering array (ACA) is denoted by ACA(N; t, k, v), where the parameters N, t, k, and v have the same meaning as in OAs or CAs. An ACA is constructed progressively, starting from an ACA with an alphabet lower than v and adding the rows necessary to satisfy the covering property. For instance, an ACA(M; t, k, 3) can be constructed by adding the necessary rows to an ACA(M′; t, k, 2). Empirically, we have found it desirable that the alphabets of a sequence of ACAs (each denoted $ACA_i(N_i; t, k, v_i)$, $i = 0, 1, \ldots$) follow the expression $v_i = 2^i + 1$. The ACAs used are then $ACA_0(N_0; t, k, 2)$, $ACA_1(N_1; t, k, 3)$, $ACA_2(N_2; t, k, 5)$, $ACA_3(N_3; t, k, 9)$, $ACA_4(N_4; t, k, 17)$, $ACA_5(N_5; t, k, 33)$, …, $ACA_i(N_i; t, k, 2^i + 1)$. This construction process generates an ACA with more rows than a CA with the same configuration (values of t, k, and v). However, it adds an interweaving in the sampling that is important for defining weight vectors, and it extends the range of sampled $\alpha$ values.
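A minimal sketch of the alphabet schedule $v_i = 2^i + 1$ that chains the ACAs (illustrative only; the actual ACA construction algorithm is not published here):

```python
def aca_alphabets(levels: int) -> list[int]:
    """Alphabet v_i = 2**i + 1 for each ACA in the augmentation chain."""
    return [2**i + 1 for i in range(levels)]

print(aca_alphabets(6))  # [2, 3, 5, 9, 17, 33]
```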
Table 5 shows the ACA(42; 2, 10, 5) constructed using ACA(6; 2, 10, 2) and ACA(16; 2, 10, 3). Note that the $\alpha$ values are distributed in the range $3 \leq \alpha \leq 28$, with 15 different values of $\alpha$. In this case, the number of $\alpha$ values not sampled is 26, so the proportion of non-sampled values is 63% (26/41). This ACA covers $\alpha$ values between 3 and 28, while the CA in the previous section had a narrower $\alpha$ range, between 13 and 30, even though it samples two additional $\alpha$ values.

3.4. Comparison between Orthogonal Arrays, Covering Arrays, and Augmented Covering Arrays for Weight Vector Definition

According to the information presented in the three previous subsections, OAs are not a good alternative for sampling, for the following reasons: (a) they cannot be constructed for all the desired combinations of v, k, and t; (b) they concentrate the sampling in the middle part of the possible values of 0 ≤ α ≤ k(v − 1); (c) the proportion of unsampled values is very high; (d) the size of an OA depends on v^t, where v is the smallest prime power satisfying v + 1 ≥ k, and this value of N quickly exceeds a reasonable population size for an algorithm based on MOEA/D; and (e) in the process of transforming the domain from integer to real, several rows of the same OA can yield the same weight vector, which is undesirable when defining the weight vectors.
CAs represent a better alternative to OAs for the following reasons: (a) they can be constructed for any combination of values of the parameters v, k, and t; (b) they perform a reasonably distributed sampling in the range of possible α values; (c) the proportions of unsampled α values are lower; and (d) the size of the CA grows logarithmically with the number of columns/objectives.
ACAs are an even better alternative since (a) like CAs, they can be built for any combination of values of k, v, and t; (b) the ranges of sampled α values are broader than in OAs and CAs; (c) the proportions of unsampled α values are lower than in OAs and similar to those of CAs; and (d) the size of an ACA grows similarly to that of a CA, that is, logarithmically with the number of columns/objectives.
In summary, using OAs for weight vector definition is not recommended, while CAs and ACAs possess attractive characteristics. In light of this, we conducted an exploratory experiment using MOEA/D to compare CAs (MOEA/D-CAS) and ACAs (MOEA/D-ACAS) for defining weight vectors over the same problems (DTLZ and WFG test suites) and the experimental configuration presented later in Section 5, with 10 to 100 objectives in intervals of 10 (i.e., 10, 20, …, 100) and CAs and ACAs of strength 2 and alphabet 9. Inverted generational distance (IGD) results were tabulated, and the non-parametric Friedman test was applied to them, yielding the results presented in Table 6. In this table, the ACAs obtain the first position (rank) for all numbers of objectives compared, and the test result is statistically significant (95%). With these preliminary results, it was decided to focus the experimentation on ACAs as the best option among the three combinatorial designs reviewed.
Although this is expected (as discussed in the previous section), it should be noted that in all the experiments (10 to 100 objectives), the population size with CAs is smaller than that with ACAs (a difference of between 42 and 128 solutions). This puts the CAs at a disadvantage because they have fewer weight vectors to guide the approach to the Pareto front, even though the two algorithms carry out the same number of objective evaluations and therefore have a similar average run time.
Figure 1 shows a visual comparison of the weight vectors created by an orthogonal array, a covering array, and an augmented covering array with 3 objectives, strength 2, and an alphabet of 7. Although analyzing these graphs is challenging, the CA can be seen to sample the center of the objective space in more detail than the OA. The OA leaves some regions unsampled (near the vertices of the triangle), while the ACA uses more weight vectors (13 additional vectors) and more effectively samples a greater number of regions (borders, vertices, and center zone) of the objective space.

4. Weight Vector Definition Based on Augmented Covering Arrays

4.1. Main Hypothesis

The main hypothesis that we wish to test is that using an ACA with N rows, low strength values (t = {2, 3}), and low alphabet values (v = {9, 17}), the MOEA/D-DE algorithm obtains a better Pareto front than that obtained using the classical method of MOEA/D-DE (N uniformly distributed weight vectors).

4.2. The Proposed Algorithm

Algorithm 1 presents the proposed multi-objective evolutionary algorithm based on decomposition with differential evolution and augmented covering arrays (MOEA/D-DE-ACA). The proposal focuses on modifying the initialization of the weight vectors in MOEA/D-DE (Step 1.1), replacing random sampling with an ACA to obtain a fixed-size, better-distributed weight vector sample. The rest of the algorithm is identical to the original, and the proposal maintains the genetic operators of the MOEA/D-DE algorithm [13]. The following explains in detail each component added in MOEA/D-DE-ACA and the problems solved by incorporating this method.

4.3. Fundamentals of the Proposal

In their initialization step, the MOEA/D and MOEA/D-DE algorithms generate a random sampling, as uniformly distributed as possible in the objective space, to define the weight vectors of the solutions in the population. This implies generating the boundary weights (1, 0, 0, …, 0), (0, 1, 0, …, 0), …, (0, 0, 0, …, 1) and then generating a large number E (E >> N, where N corresponds to the population size) of candidate weight vectors. An iterative process is then conducted that repeatedly selects the candidate weight vector furthest from the previously selected list and includes it, until the N required weight vectors are completed, a task that is computationally costly as E grows, with a complexity of O(N × E). This initialization method allows the user of these algorithms to control, through the parameter N, the growth in the number of weight vectors used for sampling the objective space.
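This classical initialization can be sketched as follows (a simplified illustration under our own assumptions: random simplex candidates and greedy farthest-point selection; the function names are ours, not those of the MOEA/D code):

```python
import random

def classic_weight_init(n_weights, m, n_candidates, seed=0):
    """Greedy farthest-point selection of N weight vectors from E candidates: O(N * E)."""
    rng = random.Random(seed)
    # Boundary weights (1,0,...,0), (0,1,0,...,0), ..., (0,...,0,1).
    selected = [[1.0 if j == i else 0.0 for j in range(m)] for i in range(m)]
    # E >> N random candidate weight vectors on the unit simplex.
    candidates = []
    for _ in range(n_candidates):
        raw = [rng.random() for _ in range(m)]
        s = sum(raw)
        candidates.append([x / s for x in raw])

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    while len(selected) < n_weights:
        # Pick the candidate farthest from the already selected set.
        best = max(candidates, key=lambda c: min(dist2(c, s) for s in selected))
        candidates.remove(best)
        selected.append(best)
    return selected

weights = classic_weight_init(n_weights=20, m=3, n_candidates=500)
```

Each pass over the E candidates per selected vector is what makes the cost O(N × E); the ACA-based initialization avoids this loop entirely by reading precomputed rows.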
The quality of that sampling is affected by the exponential growth of weight vectors that must be considered in the objective search space, as explained in the previous section.
With the use of an ACA, the distribution of the weight vectors allows a better evaluation of the interaction between the objectives. In addition, the ACA's N value (number of rows) controls the size of the population; it is determined by the number of objectives, m, and its growth is governed by the alphabet v and the strength t. Furthermore, the ACAs are constructed beforehand and can be reused as often as required, thereby reducing the execution time of the optimization algorithm.
In Step 1.1 of Algorithm 1, each row of the ACA(N; t, k = m, v) is taken and converted from its integer domain to the real domain of the weight vectors. This conversion is based on the alpha value, which automatically yields a normalized weight vector: all values of the row are added, and each value (cell) of the row is divided by that sum, a process repeated for the N rows of the ACA. In this way, N normalized weight vectors are obtained.
Algorithm 1. MOEA/D-DE-ACA.
MOEA/D-DE-ACA
Input:
  • Problem: Minimize $F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T$, subject to $x \in \Omega$, where $\Omega$ is the decision (variable or search) space, $F: \Omega \rightarrow R^m$ consists of m real-valued objective functions, and $R^m$ is called the objective space. The attainable objective set is defined as $\{F(x) \mid x \in \Omega\}$. If $x \in R^d$, all the objectives are continuous, and $\Omega$ is described by $\Omega = \{x \in R^d \mid h_j(x) \leq 0, j = 1, \ldots, m\}$, where the $h_j$ are continuous functions, the problem is called a continuous MOP.
  • Nb: Number of weight vectors in the neighborhood of each weight vector.
  • δ: Probability that parents’ solutions are selected from the neighborhood.
  • η r : Maximum number of solutions replaced in each generation.
  • ACA: Augmented Covering Array with N rows, strength t, m objectives (or k columns) and v alphabet. ACA (N; t, k = m, v).
  • A stopping criterion.
Output:
  • Approach to the Pareto set (PS): { x 1 , , x N }.
  • Approach to the PF: { F x 1 , , F ( x N ) }.

Step 1 Initialization
 Step 1.1 
Define the weight vectors using the ACA object: define $\lambda^1, \ldots, \lambda^N$ based on the normalization of the rows in the ACA, and update the value of N if it was necessary to eliminate duplicate rows.
This means that a weight vector $\lambda^i = ACA_i / \sum_{j=1}^{m} ACA_{i,j}$, where $ACA_i$ is the i-th row of the ACA.
The set of weight vectors is $\lambda = \bigcup_{i=1}^{N} \{\lambda^i\}$ (not including duplicates).
Finally, $N = |\lambda|$.
 Step 1.2 
Compute the Euclidean distances between any two weight vectors and then work out the $Nb$ closest weight vectors to each weight vector. In algorithmic terms:
For each $i = 1, \ldots, N$
  Set $B(i) = \{i_1, \ldots, i_{Nb}\}$ as a list of weight vector indexes, where $\lambda^{i_1}, \ldots, \lambda^{i_{Nb}}$ are the $Nb$ closest weight vectors to $\lambda^i$.
This means that $D = \begin{pmatrix} d(\lambda^1, \lambda^1) & d(\lambda^1, \lambda^2) & \cdots & d(\lambda^1, \lambda^N) \\ d(\lambda^2, \lambda^1) & d(\lambda^2, \lambda^2) & \cdots & d(\lambda^2, \lambda^N) \\ \vdots & \vdots & \ddots & \vdots \\ d(\lambda^N, \lambda^1) & d(\lambda^N, \lambda^2) & \cdots & d(\lambda^N, \lambda^N) \end{pmatrix}$,
where $d(\lambda^i, \lambda^j) = \sqrt{\sum_{k=1}^{m} (\lambda^i_k - \lambda^j_k)^2}$.
$D_i = (d_1, \ldots, d_j, \ldots, d_N) = (d(\lambda^i, \lambda^1), \ldots, d(\lambda^i, \lambda^j), \ldots, d(\lambda^i, \lambda^N))$.
$S\_D_i = sort(D_i)$, where $sort(D_i)$ sorts the elements of $D_i$ in ascending order.
$B(i) = \{j \mid j \text{ is the index of } d_j \text{ and } d_j \text{ is one of the smallest } Nb \text{ values in } S\_D_i\}$.
 Step 1.3 
Generate an initial population $P = (x^1, \ldots, x^N)$ randomly on $\Omega$ and, for each $i = 1, \ldots, N$, calculate $F(x^i)$. Also calculate and store for each $x^i$ the individual objectives of $F(x^i)$, i.e., $f_1(x^i), f_2(x^i), \ldots, f_m(x^i)$.
This means that $P = \begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N,1} & x_{N,2} & \cdots & x_{N,d} \end{pmatrix}$,
where $x^i = (x_{i,1}, \ldots, x_{i,j}, \ldots, x_{i,d})$, each $x_{i,j}$ value is randomly generated in the range of the problem, and $d$ is the dimension of the decision (search) space.
The objective matrix $\begin{pmatrix} f_1(x^1) & f_2(x^1) & \cdots & f_m(x^1) \\ \vdots & \vdots & \ddots & \vdots \\ f_1(x^N) & f_2(x^N) & \cdots & f_m(x^N) \end{pmatrix}$ is obtained by computing $f_j(x^i)$ for each $j = 1, \ldots, m$ and $i = 1, \ldots, N$.
Then $(F(x^1), F(x^2), \ldots, F(x^N))$ is obtained by computing, for each $i = 1, \ldots, N$, $F(x^i) = \sum_{j=1}^{m} \lambda^i_j \, f_j(x^i)$.
 Step 1.4 
Initialize $z = (z_1, \ldots, z_m)^T$, where $z_j = \min_{1 \leq i \leq N} f_j(x^i)$.
Step 2 Evolution
For i = 1 , , N do, i.e., i 1 i N
 Step 2.1 
Selection of Mating/Update Range: Uniformly generate a random number $rand$ from $[0, 1]$. Then set $Q = \begin{cases} B(i) & \text{if } rand < \delta \\ \{1, \ldots, N\} & \text{otherwise.} \end{cases}$
 Step 2.2 
Reproduction: Set $r_1 = i$ and randomly select two indexes $r_2$ and $r_3$ from $P$, generate a solution $\bar{y}$ from $x^{r_1}$, $x^{r_2}$, and $x^{r_3}$ by a DE operator, and then perform a mutation operator on $\bar{y}$ with probability $p_m$ to produce a new solution $y$. These operations can be summarized as:
  $r_1 = i$.
  $r_2 = g_1()$, where $g_1$ returns a random index from $[1, N] \setminus \{r_1\}$.
  $r_3 = g_2()$, where $g_2$ returns a random index from $[1, N] \setminus \{r_1, r_2\}$.
$\bar{y} = DE\_operator(x^{r_1}, x^{r_2}, x^{r_3})$, based on the problem.
$y = mutation(\bar{y}, p_m)$, based on the problem.
 Step 2.3 
Repair: If an element of $y$ is out of the boundary of $\Omega$, its value is reset to a randomly selected value inside the boundary. Calculate and store for $y$ its individual objectives, i.e., $f_1(y), f_2(y), \ldots, f_m(y)$. These two operations can be represented as:
$y = repair(y, \Omega)$.
$(f_1(y), f_2(y), \ldots, f_m(y))$ is obtained by computing $f_j(y)$ for each $j = 1, \ldots, m$.
 Step 2.4 
Update of $z$: For each $j = 1, \ldots, m$, if $f_j(y) < z_j$, then set $z_j = f_j(y)$. This step can be represented as:
$z = (z_1, z_2, \ldots, z_m)$, where $z_j = \begin{cases} z_j & \text{if } z_j < f_j(y) \\ f_j(y) & \text{otherwise.} \end{cases}$
 Step 2.5 
Update of Solutions.
Set c = 0 .
While $Q \neq \emptyset$ do
 a.
If c = η r go to Step 3.
 b.
Randomly pick an index $j$ from $Q$; i.e., $j = g_3()$, where $g_3$ returns a randomly selected element of $Q$.
 c.
Compute $F(y)$, where $F(y) = \sum_{k=1}^{m} \lambda^j_k \, |f_k(y) - z_k|$ (the aggregation for the $j$-th subproblem).
 d.
If F y F x j then
  set x j = y ,
   F x j = F y .
   c = c + 1 .
 e.
Remove $j$ from $Q$; i.e., $Q = Q \setminus \{j\}$.
End while
Step 3 Stopping Criterion:
   If the stopping criterion is satisfied, then
       Return { x 1 , , x N } and { F ( x 1 ) , , F ( x N ) } .
   Otherwise go to Step 2.
Below is an example of this conversion based on row 26 highlighted in Table 5, corresponding to the following values: (3, 4, 2, 1, 1, 3, 3, 0, 4, 1). In this case, the sum of the values in the row is 22, corresponding to the value of α . The individual values are divided by the sum, and the following weight vector (0.14, 0.18, 0.09, 0.05, 0.05, 0.14, 0.14, 0.00, 0.18, 0.05) is obtained, which is also normalized as required by the rest of the algorithm.
When starting from an ACA with more columns than the objectives of the problem, the remaining columns are eliminated, and the result is still a valid ACA to which the normalization process can be applied. Meanwhile, when any two weight vectors are identical, the duplicates are eliminated, and the value of N is reduced to the number of unique weight vectors.
Below is an example of this situation. Consider two rows of an ACA with 3 columns (objectives) and an alphabet of 9 that are different in the discrete domain: (0, 2, 3) and (0, 4, 6). Calculating the α value of each gives 5 and 10, respectively. When converting from the integer domain to the real domain, both rows yield (0, 2/5, 3/5), which is why one must be removed from the list of weight vectors.
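The conversion and duplicate-removal steps of Step 1.1, including both examples above, can be sketched together as follows (an illustration of the described procedure, not the authors' implementation; the function name is ours):

```python
def aca_rows_to_weights(rows):
    """Normalize each ACA row by its alpha (row sum) and drop duplicate weight vectors."""
    weights, seen = [], set()
    for row in rows:
        alpha = sum(row)
        if alpha == 0:
            continue  # an all-zero row has alpha = 0 and yields no weight vector
        w = tuple(round(x / alpha, 10) for x in row)  # normalized weight vector
        if w not in seen:                             # e.g., (0,2,3) and (0,4,6) collide
            seen.add(w)
            weights.append(w)
    return weights

# Row 26 of Table 5: alpha = 22, so the resulting weights sum to 1.
w = aca_rows_to_weights([(3, 4, 2, 1, 1, 3, 3, 0, 4, 1)])[0]
# (0,2,3) and (0,4,6) both normalize to (0, 0.4, 0.6); only one copy is kept.
unique = aca_rows_to_weights([(0, 2, 3), (0, 4, 6)])
```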
The source code and resources (ACAs) required to run the proposed method and experiments are available online at https://github.com/coboscarlos/MOEA_D_DE_ACA (accessed on 1 March 2024). The source code is based on MOEA Framework 2.12 or higher. The algorithm for constructing ACAs will be released after publication in an international journal; its explanation is beyond the scope of this paper.

5. Experimentation

In this section, we first give the main characteristics of the many-objective problems used in evaluating and comparing the proposed algorithm (MOEA/D-DE-ACA) versus MOEA/D-DE and NSGA-III. The metric used to make the comparison in this case, inverted generational distance (IGD), is then described. Next, the experimental results are presented along with an analysis of three scenarios: (1) MOEA/D-DE-ACA with alphabet v = 9 and strength t = 2 (small population, from 136 to 288 solutions); (2) with alphabet v = 17 and strength t = 2 (medium population, from 416 to 838 solutions); and (3) alphabet v = 9 and strength t = 3 (large population, from 729 to 1457 solutions). These scenarios allow us to analyze the impact of variations in alphabet and strength. In the analysis of each scenario, the Friedman non-parametric and Holm post hoc tests are used to determine if the results have an appropriate level of statistical significance. All experiments were repeated 31 times to ensure that mean (or average) IGD values comply with the central limit theorem and genuinely represent the mean behavior of each algorithm. A comparison with another eight state-of-the-art algorithms is then made using the same problems (DTLZ and WFG) with 10–15 objectives. Finally, a comparison with another six state-of-the-art algorithms is made over constrained problems with 10–15 objectives.

5.1. Experimental Environment

5.1.1. The Many-Objective Problems Used for Evaluation and Comparison

Two test suites available in MOEA Framework 2.12 were used in this study, namely: Deb–Thiele–Laumanns–Zitzler (DTLZ) and Walking Fish Group (WFG) [64]. For each test problem, the number of objectives (m) varies from 10 to 100 in increments of 10, i.e., m ∈ {10, 20, 30, …, 100}. All problems can be scaled to any number of objectives and decision variables. A summary of the main characteristics of these problems is presented in Table 7.
In the DTLZ test suite, DTLZ1 presents a linear and regular Pareto front (PF), making it relatively straightforward to solve. DTLZ3 and DTLZ7 exhibit numerous local PFs, adding complexity to the optimization task. DTLZ2 features a spherical PF, making it ideal for assessing the convergence of MOEAs to the global PF. DTLZ4 showcases a non-uniform distribution along the PF, providing a means to evaluate MOEAs’ capacity to maintain a well-balanced distribution of solutions. A degenerate hypersurface characterizes DTLZ5’s PF. DTLZ6 contains disjointed Pareto-optimal regions, making it suitable for evaluating MOEAs’ ability to sustain subpopulations across disconnected segments of the objective space. The k1 parameter for these problems was set to 5 for the DTLZ1, DTLZ5, and DTLZ6 problems, 10 for the DTLZ2, DTLZ3, and DTLZ4 problems, and 20 for the DTLZ7 problem, in which the number of variables is D = m + k1 − 1.
In the WFG test suite, the WFG1 problem is separable and uni-modal, like WFG7, but they have different PF shapes. The PF shapes of the WFG1, WFG2, and WFG3 problems are complicated, discontinuous, and partially degenerate. Five problems (WFG2, WFG3, WFG6, WFG8, and WFG9) are not separable. WFG7, WFG8, and WFG9 are connected and biased; of these, only WFG7 is separable, and WFG9 presents a challenge due to its high modality. WFG4, like WFG9, also involves multi-modality but is not biased. The deception in WFG5 is more difficult than that in WFG9. The k1 parameter for these problems was set to k1 = m − 1, and the distance parameter l was set to l = 10, where D = k1 + l.

5.1.2. Comparison Metrics

Inverted generational distance is a metric designed to evaluate the quality of a set of solutions in terms of both convergence and diversity [70,71]. IGD is defined as follows [72]: $IGD(AP) = \frac{1}{|P^*|} \sum_{z^* \in P^*} dist(z^*, AP)$, where $AP$ is an approximation set to the Pareto front of the problem (the solutions found by the algorithm); $P^*$ is a set of reference points (non-dominated and evenly distributed) along the Pareto front; $dist(z^*, AP)$ is the Euclidean distance between $z^*$ and its nearest neighbor in $AP$; and $|P^*|$ is the cardinality of $P^*$. With this definition, a lower IGD value indicates better algorithm performance, i.e., its solutions are closer to the PF.
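A direct implementation of this definition can be sketched as follows (illustrative only; the experiments use the IGD implementation provided by the MOEA Framework):

```python
import math

def igd(approx_front, reference_points):
    """Mean Euclidean distance from each reference point to its nearest solution in the approximation set."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(z, a) for a in approx_front)
               for z in reference_points) / len(reference_points)

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(igd(ref, ref))           # a perfect approximation gives 0.0
print(igd([(0.0, 0.0)], ref))  # mean of the gaps 1, sqrt(0.5), and 1
```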
IGD has two main advantages. The first is its computational efficiency. The second is that it measures convergence and diversity simultaneously, provided that $|P^*|$ is large enough to cover the Pareto front well. The number of reference points $|P^*|$ used in each experiment is shown in the final column of Table 11; values range from 3356 to 39,190.
In addition to the effectiveness measure (IGD) used to evaluate and compare the algorithms, their efficiency was also evaluated based on the computation time (execution time in seconds) required by each algorithm to run each experiment in a controlled environment (using the same hardware and software resources) [71].

5.1.3. Parameter Setting

Parameter settings for MOEA/D-DE and NSGA-III were adopted as recommended in the literature and are summarized in Table 8, where pm and pc are the mutation and crossover probabilities, ηm and ηc are the distribution indexes of the mutation and crossover operators, respectively, CR is the DE crossover probability, F is the differential weight, δ is the probability of selecting parent solutions from the neighborhood, Nb is the neighborhood size of weight vectors, and nr is the maximum number of solutions replaced by each new solution.
The population size N was defined for all algorithms (MOEA/D-DE-ACA, MOEA/D-DE, and NSGA-III) based on the size of the selected ACA for the specific problem. The maximum number of function evaluations is the stopping criterion for all algorithms and results from multiplying the number of generations parameter (last column in Table 7) by the population size parameter.

5.2. Experiments with Strength t = 2 and Alphabet v = 9

Below, Table 9 and Table 10 present the mean IGD results of the 31 repetitions for the three algorithms (MOEA/D-DE-ACA, MOEA/D-DE, and NSGA-III) using 10, 20, …, up to 100 objectives in the seven DTLZ problems and nine WFG problems, respectively. Cells with bold text correspond to the best result in each experiment, and cells are highlighted with a gray background whenever the winner was MOEA/D-DE-ACA. Each cell shows the position (ranking) of the algorithm in parentheses and the mean IGD value achieved by the algorithm with the number of objectives established in the row on the problem defined in the column.
In Table 9, the following can be observed: (1) regardless of the number of objectives (10 to 100), MOEA/D-DE-ACA obtains better mean IGD values in problems DTLZ1, DTLZ2, and DTLZ3, which are linear (the first) and concave (the other two), have no bias or discontinuities, and are separable (the exception being DTLZ1 with 40 objectives, where it is surpassed by MOEA/D-DE by 0.0039); (2) in the DTLZ7 problem, MOEA/D-DE-ACA obtains the best IGD results from 10 to 70 objectives; this problem has a mixed, multi-modal Pareto front and is disconnected, non-separable, and deceptive; (3) MOEA/D-DE-ACA obtains the best IGD for the DTLZ6 problem with 10 and from 50 to 100 objectives and for the DTLZ5 problem from 70 to 100 objectives; these problems have concave, degenerate, and irregular Pareto fronts and are neither multi-modal nor deceptive. For the remaining numbers of objectives (10 to 60 in DTLZ5 and 20 to 40 in DTLZ6), it occupies second place, surpassed by MOEA/D-DE (by between 0.00021 and 0.00099); (4) in problem DTLZ4, MOEA/D-DE-ACA wins only with 10 and 30 objectives; for the other numbers of objectives, it is surpassed only by MOEA/D-DE, and the differences are less than thirteen tenths. The DTLZ4 problem has characteristics similar to DTLZ2 but has bias; (5) the NSGA-III algorithm is, in general, last in all rankings, and the distance of its mean IGD values from those obtained by the other two algorithms grows as the number of objectives increases, revealing a critical weakness of this algorithm on DTLZ problems with a high number of objectives and a small population size; and (6) MOEA/D-DE obtains first position in DTLZ6 between 20 and 40 objectives, in DTLZ5 between 10 and 60 objectives, and in DTLZ4 between 40 and 100 objectives, winning in DTLZ5 and DTLZ6 by narrow margins (thousandths) but in DTLZ4 by more.
Table 10 shows the following: (1) in problems WFG4, WFG5, WFG6, and WFG8, from 30 to 100 objectives, MOEA/D-DE-ACA obtains the best mean IGD values. These problems are characterized by having a concave, regular Pareto front, being scalable, and not being disconnected; some are multi-modal and others not, and the same holds for bias, separability, and deceptiveness; (2) in problem WFG2, from 10 to 60 objectives, MOEA/D-DE-ACA obtains the best mean IGD values, followed by MOEA/D-DE with differences that increase gradually as the number of objectives grows. This problem is convex, multi-modal, disconnected, and scaled; (3) in problem WFG7, from 40 to 70 objectives, MOEA/D-DE-ACA obtains the best IGD values, followed by MOEA/D-DE with differences that decrease gradually as the number of objectives grows. This problem is similar to WFG8, where the algorithm is dominant from 30 to 100 objectives, but the fact that WFG7 is separable leads to better results on it; (4) the NSGA-III algorithm has the best values for a low number of objectives (10 and 20 in problems WFG4 to WFG9), but as the number of objectives grows, its values tend to rise, that is, to deteriorate. Despite this, for problems WFG1 and WFG3, NSGA-III is dominant from 20 to 100 objectives with significant differences in mean IGD values; these problems are not multi-modal, and their Pareto fronts are neither disconnected nor deceptive; and (5) MOEA/D-DE obtains, in general, intermediate IGD values on these problems and achieves dominance in problem WFG9, with its concave and regular Pareto front, from 60 objectives. In this sense, the initialization of weights with an ACA allows MOEA/D-DE to improve its performance in WFG problems with irregular, discontinuous, scaled Pareto fronts.
Since the previous analysis makes it difficult to determine precisely which algorithm is best and on what kind of problems, the mean behavior of the three algorithms on these problems was evaluated using the Friedman non-parametric test and the Holm post hoc test. The Friedman test was performed with 2 degrees of freedom, and the Holm results were evaluated at significance levels of 90% and 95%.
Table 11 shows in the first column the number of objectives (10 to 100) evaluated on the DTLZ and WFG problems, then the three algorithms with their ranking (1, 2, or 3) and the Friedman ranking, and, finally, the p-value obtained in the test and whether that value is significant (True or False). The result of the Holm post hoc test is then shown as a 3 × 3 matrix, where the first row and column refer to Algorithm A (MOEA/D-DE-ACA), the second row and column to Algorithm B (MOEA/D-DE), and the third row and column to Algorithm C (NSGA-III). The symbol ● indicates that the results obtained with the algorithm in the row are better than those obtained with the algorithm in the column, while the symbol ○ indicates that the algorithm in the column outperforms the algorithm in the row. The values above the diagonal correspond to a significance level of 90%, while those below the diagonal correspond to 95%. The table then shows the population size with which the three algorithms were executed, a value defined by the ACA used in MOEA/D-DE-ACA, which grows logarithmically with the number of objectives (population size = $6.5686 \times 10^1 \times \ln(objectives) - 2.1801 \times 10^1$, with $R^2 = 0.992$). The last column shows the number of reference points used by the MOEA Framework for calculating IGD on the problems according to the number of objectives, in this case, 3356 for 10 objectives and 7416 for 100 objectives. In Table 11, it can be seen that the MOEA/D-DE-ACA algorithm obtains the number 1 ranking in all cases, but with 10 and 20 objectives, this ranking is not statistically significant (the p-value obtained is not less than 0.05), which is why the Holm post hoc test is not performed for these two experiments.
Based on the Holm test, between 30 and 60 objectives, MOEA/D-DE-ACA outperforms MOEA/D-DE and NSGA-III with a significance level of 95%. Between 70 and 100 objectives, a dominant relationship between MOEA/D-DE-ACA and MOEA/D-DE cannot be established, but these two algorithms outperform NSGA-III with 90% significance. This relationship can be defined with 95% significance with 70 and 90 objectives. With 90 objectives, MOEA/D-DE-ACA also outperforms MOEA/D-DE with 95% significance.
Figure 2 shows the average execution time (AET) of the three algorithms over all DTLZ and WFG datasets for each number of objectives, from 10 to 100. The execution time of NSGA-III is much greater than that of the other two algorithms, and its growth is quadratic ($AET = 2.131 \times 10^2 \times objectives^2 - 1.0533 \times 10^4 \times objectives + 1.21769 \times 10^5$, with $R^2 = 0.9966$). This shows that the processing executed by NSGA-III beyond the objective function evaluations is much greater than that performed by the other two algorithms, since all three execute the same number of fitness function evaluations. The execution times of MOEA/D-DE ($AET = 8.3186 \times 10^0 \times objectives^2 + 2.459 \times 10^2 \times objectives - 2.8955 \times 10^3$, with $R^2 = 0.9953$) and MOEA/D-DE-ACA ($AET = 3.6936 \times 10^0 \times objectives^2 + 6.3126 \times 10^1 \times objectives - 3.9744 \times 10^2$, with $R^2 = 0.998$) also have a quadratic tendency, with MOEA/D-DE-ACA having the shortest execution time. The time difference between the two versions of MOEA/D is explained by the time saved in Step 1.1 (Initialization): using a previously constructed ACA gives the proposed approach an advantage in time.
Figure 3 shows a box plot graphic that visually summarizes the median and quartiles of the IGD values obtained by the three algorithms in the 16 problems (DTLZ and WFG) in the experiments with t = 2 and v = 9 from 10 to 100 objectives. It can be observed that in 12 problems (DTLZ1 to DTLZ3, DTLZ5 to DTLZ7, WFG2, and WFG4 to WFG8), i.e., in 75% of the 16 problems, MOEA/D-DE-ACA obtains better (lower) mean IGD values than the other two algorithms, followed by MOEA/D-DE.
In DTLZ4 and WFG9, MOEA/D-DE-ACA is outperformed by MOEA/D-DE, and in WFG1 and WFG3, the best results are obtained by NSGA-III, leaving MOEA/D-DE-ACA in second place in the former problem and third place in the latter.

5.3. Experiments with Strength t = 2 and Alphabet v = 17

Compared with the previous experiment, in this one the value of the alphabet increases from 9 to 17, which implies that the ACAs used have a larger number of rows and a larger population is established. With 10 objectives, there is an increase from 136 rows with v = 9 to 416 with v = 17, a ratio of 1 to 3.06, while with 100 objectives, the increase is from 288 with v = 9 to 838 with v = 17, a ratio of 1 to 2.91. For the intermediate numbers of objectives, a maximum ratio of 1 to 3.45 was obtained; this shows that (approximately) doubling the alphabet (approximately) triples the number of rows in the ACA and, therefore, the number of weight vectors in the population of MOEA/D-DE-ACA.
As in the previous section, a table with the results for the DTLZ problems was built, from which the following was observed: (1) MOEA/D-DE-ACA obtains better mean IGD values from 40 to 100 objectives in all DTLZ problems, with five exceptions in which it comes second (with a maximum difference of 0.0316) and in DTLZ4, where it is outperformed by the other two algorithms (maximum difference of 0.149); (2) in problems DTLZ3 and DTLZ7, MOEA/D-DE-ACA obtains the best IGD results from 20 objectives upward; the first of these problems is concave and the second mixed, both are multi-modal, one is disconnected and the other not, one is separable and the other not, and both are deceptive. MOEA/D-DE-ACA improves its performance on these two problems as the number of objectives grows to 60–70, and this is then maintained up to 100 objectives; (3) with 10 to 30 objectives in problems DTLZ5 and DTLZ6, MOEA/D-DE-ACA occupies second place, outperformed by MOEA/D-DE (differences between 0.0005 and 0.0053); (4) the NSGA-III algorithm generally leads the rankings of the mean IGD values with 10 and 20 objectives in problems DTLZ1 to DTLZ4, but its values are later overtaken by those obtained by MOEA/D-DE as the number of objectives increases; this reveals a critical weakness of this algorithm on DTLZ problems with a high number of objectives and a population size lower than 900; and (5) MOEA/D-DE in general, as in the previous experiment, obtains second place except in some cases, most of which occur from 10 to 30 objectives.
In the results for the WFG problems, the following could be observed: (1) in problem WFG2, from 20 to 100 objectives, MOEA/D-DE-ACA obtains better mean IGD values, followed by MOEA/D-DE with differences that grow gradually as the number of objectives increases; (2) in problems WFG4 and WFG6, from 50 to 100 objectives, MOEA/D-DE-ACA obtains better mean IGD values; these problems are characterized by having a concave Pareto front, being regular, having no bias, not being disconnected, not being deceptive, and being scaled, but the first is highly multi-modal and the second is not; (3) in problems WFG5 and WFG8, from 60 to 100 objectives, MOEA/D-DE-ACA obtains better mean IGD values; these problems are characterized by having a concave Pareto front, being regular, not being multi-modal, and not being disconnected; the first is separable and the second is not, the first is deceptive and the second is not, and both are scaled; (4) in general, the NSGA-III algorithm occupies first place in the WFG problems from 10 to 40 objectives, except in WFG2, but with more objectives its mean IGD value becomes larger (it moves away from the ideal PF). Despite this, NSGA-III gives the best results in problems WFG3 and WFG7 with up to 100 objectives and in WFG1 with 20, 30, 40, 50, 90, and 100 objectives.
After evaluating the average behavior of the three algorithms on these problems, Friedman's non-parametric statistical test and the Holm post hoc test were again executed. In these tests, the NSGA-III algorithm ranks first in 10, 20, and 30 objectives, outperforming MOEA/D-DE and MOEA/D-DE-ACA in 10 and 20 objectives with a 95% significance level. In addition, with 10 objectives, MOEA/D-DE outperforms MOEA/D-DE-ACA at the same significance level. In 30 objectives, the Friedman ranking is not significant (the p-value obtained is not less than 0.05), so the Holm post hoc test is not performed. MOEA/D-DE-ACA ranks first from 40 to 100 objectives, but in 50 objectives this ranking is not statistically significant. Based on the Holm test for 40 objectives, MOEA/D-DE-ACA outperforms MOEA/D-DE with a 95% significance level. In 60, 70, 80, 90, and 100 objectives, MOEA/D-DE-ACA and MOEA/D-DE outperform NSGA-III with 95% significance. In 70 objectives, MOEA/D-DE-ACA outperforms MOEA/D-DE with 90% significance. The situation is similar in 80 objectives, except that the dominance of MOEA/D-DE-ACA over MOEA/D-DE is stronger, with a significance of 95%. Here, the population size can be modeled as 1.8644 × 10² × ln(objectives) − 0.3125 × 10⁰, with R² = 0.9596.
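The statistical procedure used above (a Friedman test over the algorithms, followed by Holm step-down correction of the pairwise comparisons) can be sketched with SciPy. The IGD samples below are synthetic stand-ins, not the paper's data, and the pairwise test chosen here (Wilcoxon signed-rank) is one common option for the post hoc step:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Synthetic IGD values for 31 runs of three algorithms (assumed data,
# only to illustrate the procedure).
rng = np.random.default_rng(42)
igd = {
    "MOEA/D-DE-ACA": rng.normal(0.10, 0.01, 31),
    "MOEA/D-DE":     rng.normal(0.12, 0.01, 31),
    "NSGA-III":      rng.normal(0.15, 0.01, 31),
}

# Friedman test: do the three algorithms perform equivalently across runs?
stat, p_friedman = friedmanchisquare(*igd.values())

def holm(pvals):
    """Holm step-down adjustment of a list of raw p-values."""
    m = len(pvals)
    order = np.argsort(pvals)
    adjusted, running = np.empty(m), 0.0
    for rank, idx in enumerate(order):
        # Smallest p-value is multiplied by m, next by m - 1, and so on;
        # the running max enforces monotonicity of the adjusted values.
        running = max(running, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running)
    return adjusted

# Pairwise post hoc comparisons, Holm-corrected.
names = list(igd)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
raw = [wilcoxon(igd[a], igd[b]).pvalue for a, b in pairs]
adj = holm(raw)
```

A pair is declared significantly different at the 95% level when its Holm-adjusted p-value stays below 0.05.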
On the other hand, the execution time of NSGA-III is much greater than that of the other two algorithms; in this case, AET = 1.0635 × 10⁴ × objectives² − 5.30145 × 10⁵ × objectives + 9 × 10⁶, with R² = 0.9333. This shows that the processing executed by NSGA-III beyond the evaluations of the objective function is much greater than that of the other two algorithms. The execution time of MOEA/D-DE (AET = 9.2652 × 10² × objectives² + 2.51005 × 10⁵ × objectives − 1 × 10⁶, with R² = 0.9099) shows that as the population increases, the execution time increases, especially after 50 objectives, where it even becomes greater than that of the other two algorithms. This also shows that using a previously constructed ACA when executing MOEA/D-DE, thus avoiding this task in the initialization step, significantly reduces processing time. For MOEA/D-DE-ACA, the AET = 7.5386 × 10² × objectives² − 5.6509 × 10⁴ × objectives + 2 × 10⁶ (with R² = 0.9093) also has a quadratic tendency, and MOEA/D-DE-ACA presents the shortest execution time in all cases.
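As a quick numeric check, the three fitted quadratics reported above for the t = 2, v = 17 experiment can be evaluated directly; the coefficient signs follow our reading of the reported fits:

```python
# Fitted average-execution-time (AET) models; the sign conventions are our
# reading of the fits reported in the text, not independently re-estimated.
def aet_nsga3(m):
    return 1.0635e4 * m**2 - 5.30145e5 * m + 9e6

def aet_moead_de(m):
    return 9.2652e2 * m**2 + 2.51005e5 * m - 1e6

def aet_moead_de_aca(m):
    return 7.5386e2 * m**2 - 5.6509e4 * m + 2e6

# Around 50 objectives the models predict MOEA/D-DE overtaking the others,
# while MOEA/D-DE-ACA stays the cheapest.
m = 50
times = {
    "NSGA-III": aet_nsga3(m),
    "MOEA/D-DE": aet_moead_de(m),
    "MOEA/D-DE-ACA": aet_moead_de_aca(m),
}
```

Evaluating the models at m = 50 reproduces the ordering described in the text: MOEA/D-DE-ACA is fastest and MOEA/D-DE is costlier than NSGA-III in that region.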
A box plot graphic (like Figure 3) was also used to visually summarize the median and quartiles of the IGD values obtained by the three algorithms on the 16 problems (7 DTLZ and 9 WFG) during the experiment with t = 2 and v = 17, from 10 to 100 objectives. The following was observed: (1) in 11 of the problems (DTLZ1 to DTLZ3, DTLZ5 to DTLZ7, WFG2, WFG4, WFG6, WFG8, and WFG9), corresponding to 68.75% of the 16 problems, MOEA/D-DE-ACA obtains better median values and its quartiles are closer to the median (less dispersed) than those of the other algorithms; (2) in the WFG3, WFG5, and WFG7 problems, NSGA-III obtains better results, followed by MOEA/D-DE-ACA and then by MOEA/D-DE, and in DTLZ4, NSGA-III obtains the best results, this time followed by MOEA/D-DE-ACA; (3) in the WFG1 problem, MOEA/D-DE obtains the best results, followed by NSGA-III and MOEA/D-DE-ACA. Although the results are not identical to those of the previous experiment, they are generally similar. Furthermore, if the figures included only the values from 40 to 100 objectives, which is where the MOEA/D-DE-ACA algorithm obtains the best results, the proposed algorithm would be the best for all problems.

5.4. Experiments with Strength t = 3 and Alphabet v = 9

This experiment was performed only up to 90 objectives because the execution times of MOEA/D-DE and NSGA-III were too great to complete the 31 repetitions. The results for the DTLZ problems show the following: (1) MOEA/D-DE-ACA obtains better mean IGD values in problems DTLZ1, DTLZ2, and DTLZ3 from 20 objectives onward, with two exceptions in DTLZ1 and DTLZ2 and one exception in DTLZ3; (2) in the DTLZ7 problem, MOEA/D-DE-ACA obtains the best IGD results from 30 to 90 objectives, improving its results as the number of objectives increases on this problem, which has a complex Pareto front; (3) MOEA/D-DE obtains the best IGD for problem DTLZ6 from 10 objectives and, for problem DTLZ5 from 20 to 90 objectives, this algorithm produces the best solutions, and its performance improves as the number of objectives grows; (4) in problem DTLZ4, the two versions of MOEA/D are exceeded by NSGA-III across the different numbers of objectives, although the differences are minor; this problem has characteristics similar to DTLZ2 but has bias; and (5) the MOEA/D-DE-ACA and MOEA/D-DE algorithms generally lead the rankings of mean IGD values from 20 to 90 objectives. The results on the WFG problems show the following: (1) in problem WFG2, from 30 to 90 objectives, MOEA/D-DE-ACA obtains better mean IGD values; (2) in problems WFG4, WFG6, and WFG8, from 80 to 90 objectives, MOEA/D-DE-ACA obtains better mean IGD values, followed by MOEA/D-DE with increasing differences; (3) in general, the MOEA/D-DE-ACA algorithm does not perform well with 50 objectives or fewer, since NSGA-III obtains the best results, and it only competes for second place with MOEA/D-DE; NSGA-III stands out in problems WFG3 to WFG9; and (4) MOEA/D-DE-ACA leads the experiments on WFG problems with many objectives, precisely 80 and 90 objectives, and the difference from NSGA-III is significant.
The Friedman non-parametric test and the Holm post hoc test show that the MOEA/D-DE-ACA algorithm obtains the number one ranking in 70 and 90 objectives. In 20, 30, 40, 60, and 70 objectives, Friedman's ranking is not statistically significant (the p-value obtained is not less than 0.05), which again is why the Holm post hoc test is not performed for these experiments. Based on the Holm test for 10 objectives, the NSGA-III algorithm outperforms MOEA/D-DE-ACA and MOEA/D-DE with 95% significance. For 80 objectives, MOEA/D-DE and MOEA/D-DE-ACA outperform NSGA-III with a significance level of 90%. Finally, MOEA/D-DE-ACA and MOEA/D-DE outperform NSGA-III in 90 objectives with a significance level of 95%.
On the other hand, the average execution time (AET) of NSGA-III decreases compared to the previous experiments; its growth is still quadratic (AET = 1.342 × 10³ × objectives² − 4.69 × 10³ × objectives + 2.06500 × 10⁵, with R² = 0.9795). The processing executed by MOEA/D-DE is much greater than that of the other two algorithms (AET = 9.012 × 10² × objectives² + 1.53215 × 10⁵ × objectives − 6.39727 × 10⁵, with R² = 0.9881). The average execution time of MOEA/D-DE-ACA (AET = 8.7379 × 10¹ × objectives² − 1.2631 × 10³ × objectives + 4.8970 × 10⁴, with R² = 0.8843) continues to be quadratic and is the shortest in this experiment, as in the previous ones. Considering that MOEA/D-DE and MOEA/D-DE-ACA differ only in the initialization step, it is evident that the increase in the number of weight vectors enormously increases the weight vector definition time.
After performing the three experiments, it was observed that incorporating ACAs into MOEA/D-DE, independently of the strength and alphabet, decreases the execution time: with t = 2 and v = 9, the proposal uses 40.7% of the time used by MOEA/D-DE and only 4.8% of the time used by NSGA-III; with t = 2 and v = 17, MOEA/D-DE-ACA uses only 8.9% of the time used by MOEA/D-DE and 7% of the time used by NSGA-III; and with t = 3 and v = 9, MOEA/D-DE-ACA uses only 2.7% of the execution time of MOEA/D-DE and only 6.2% of that of NSGA-III.
Moreover, the random generation of weight vectors in MOEA/D-DE, when the population is large, makes the execution time so high that it exceeds the cost of NSGA-III in the experiments with small and medium populations, highlighting the importance of incorporating ACAs into this algorithm without diminishing the quality of the IGD results.
A box plot graphic of the IGD values obtained by the three algorithms on the 16 evaluated problems showed that MOEA/D-DE-ACA obtains better (lower) IGD values, with the lowest median in 6 (37.5%) of the 16 evaluated problems (DTLZ1, DTLZ2, DTLZ6, DTLZ7, WFG1, and WFG2). Meanwhile, the performance of NSGA-III improves on all the problems compared to the previous experiments: in 9 (56.3%) of the 16 evaluated problems, it slightly surpasses the results of MOEA/D-DE-ACA and MOEA/D-DE.

5.5. Data Analysis and Discussion

Seeking to identify the impact that the strength (t) and alphabet (v) of the ACA have on the performance of the MOEA/D-DE-ACA algorithm in relation to the characteristics of the problems, a table was constructed as a minable view with the data generated in the three experiments presented previously. The data were extracted from Table 7, where the characteristics of the problems are described, and from Table 9, Table 10 and Table 11, which present the results of the experiments with small (t = 2 and v = 9), medium (t = 2 and v = 17), and large (t = 3 and v = 9) populations. As a result, a minable view with 464 instances, 13 attributes, and 1 class variable (the ranking obtained by MOEA/D-DE-ACA in each experiment with each problem relative to MOEA/D-DE and NSGA-III) was obtained. The class variable takes the values Rank = 1 (207 instances where MOEA/D-DE-ACA ranked first) and Rank = 2 (257 instances where MOEA/D-DE-ACA occupied second or third place).
Using RapidMiner Studio Educational 10.3.001, a mining process implemented ten-fold cross-validation with a decision tree classifier, and the Optimize Parameter operator defined the following hyperparameters: accuracy as criterion, seven as maximal depth, false for pruning, false for prepruning, and fifty-five as minimal leaf size. As a result, the operator generated the decision rules of Table 12, with 80% accuracy, 86% precision, and 77% recall. From this table, the following general rules in favor of MOEA/D-DE-ACA can be summarized:
  • If the problem is DTLZ1, DTLZ2, DTLZ3, DTLZ7, or WFG2, then MOEA/D-DE-ACA has a probability between 76% and 86% of being the best option.
  • If the problem is DTLZ6, WFG4, WFG6, or WFG8 and strength = 2, then MOEA/D-DE-ACA has a probability between 65% and 70% of being the best option.
  • If the problem is WFG5 and strength = 2 and alphabet = 9, then MOEA/D-DE-ACA has a probability of 80% of being the best option.
Also, from Table 12, the following conditions when MOEA/D-DE-ACA did not win (lost against MOEA/D or NSGA-III) can be summarized:
  • If the problem is DTLZ4, DTLZ5, WFG1, WFG3, WFG7, or WFG9, with a probability between 72% and 97%.
  • If the problem is DTLZ6, WFG4, WFG5, WFG6, or WFG8 and strength = 3, with a probability between 67% and 100%.
  • If the problem is WFG5 and strength = 2 and alphabet = 17, with a probability of 60%.
These rules apply regardless of the number of objectives and population size, which is remarkably interesting; they only require the values of strength, alphabet, and problem. Unfortunately, the classifier could not generalize over the characteristics of the problems (shape, multi-modality, and deceptiveness, among others), so more experimental data must be used in future work.
Table 12. Decision rules to define the rank of MOEA/D-DE-ACA.
Condition | Rank | Confidence | Rank = 1 | Rank = 2
Problem = DTLZ3 | 1 | 86% | 25 | 4
Problem = DTLZ7 | 1 | 79% | 23 | 6
Problem = DTLZ1 | 1 | 76% | 22 | 7
Problem = DTLZ2 | 1 | 76% | 22 | 7
Problem = WFG2 | 1 | 76% | 22 | 7
Problem = WFG4 and Strength ≤ 2.5 | 1 | 70% | 14 | 6
Problem = WFG6 and Strength ≤ 2.5 | 1 | 70% | 14 | 6
Problem = DTLZ6 and Strength ≤ 2.5 | 1 | 65% | 13 | 7
Problem = WFG8 and Strength ≤ 2.5 | 1 | 65% | 13 | 7
Problem = WFG5 and Strength ≤ 2.5 and Alphabet ≤ 13 | 1 | 80% | 8 | 2
Problem = WFG3 | 2 | 97% | 1 | 28
Problem = DTLZ4 | 2 | 93% | 2 | 27
Problem = WFG1 | 2 | 93% | 2 | 27
Problem = WFG9 | 2 | 90% | 3 | 26
Problem = WFG7 | 2 | 86% | 4 | 25
Problem = DTLZ5 | 2 | 72% | 8 | 21
Problem = DTLZ6 and Strength > 2.5 | 2 | 100% | 0 | 9
Problem = WFG5 and Strength > 2.5 | 2 | 100% | 0 | 9
Problem = WFG6 and Strength > 2.5 | 2 | 78% | 2 | 7
Problem = WFG8 and Strength > 2.5 | 2 | 78% | 2 | 7
Problem = WFG4 and Strength > 2.5 | 2 | 67% | 3 | 6
Problem = WFG5 and Strength ≤ 2.5 and Alphabet > 13 | 2 | 60% | 4 | 6
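The classification workflow behind these rules can be approximated with an open-source stack; below is a minimal sketch using scikit-learn's DecisionTreeClassifier as a stand-in for RapidMiner's operator, with synthetic data and a toy label rule that are assumptions made only for illustration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 464-instance minable view (the real attributes
# and labels come from Tables 7 and 9-11; this toy label rule only mimics
# the flavor of the extracted rules and is an assumption).
rng = np.random.default_rng(7)
n = 464
problem = rng.integers(0, 16, n)            # 16 problem ids (DTLZ/WFG)
strength = rng.choice([2, 3], n)            # ACA strength t
alphabet = rng.choice([9, 17], n)           # ACA alphabet v
X = np.column_stack([problem, strength, alphabet])
y = ((problem < 5) | ((problem < 10) & (strength == 2))).astype(int)

# Hyperparameters mirror the reported RapidMiner settings (maximal depth 7,
# minimal leaf size 55, no pruning); RapidMiner's "accuracy" split criterion
# has no direct scikit-learn equivalent, so the default Gini criterion is used.
clf = DecisionTreeClassifier(max_depth=7, min_samples_leaf=55, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # ten-fold cross-validation
```

The large minimal leaf size keeps the tree coarse, which is what yields broad, human-readable rules of the kind listed in Table 12.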
To understand why MOEA/D-DE-ACA exceeds the results of MOEA/D-DE in small and medium populations, the angular distance between the different weight vectors sampled by each algorithm was calculated (a value between 0 and 90 degrees, since the components of the weight vectors are positive and the vectors therefore always lie in the first orthant of the multi-dimensional space). Figure 4 shows these angular distances organized in frequency histograms with 2° bins for 30 and 60 objectives with the three population sizes (small, medium, and large).
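The angular-distance computation just described can be sketched with NumPy; the random simplex weights below are an assumed stand-in for the MOEA/D-DE vectors (the ACA-derived vectors would be read from a file):

```python
import numpy as np

def pairwise_angles_deg(W):
    """Angles (degrees) between all pairs of weight vectors. Because every
    component is non-negative, all vectors lie in the first orthant and the
    angles fall in [0, 90]."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = np.clip(Wn @ Wn.T, -1.0, 1.0)
    iu = np.triu_indices(len(W), k=1)       # upper triangle: each pair once
    return np.degrees(np.arccos(cos[iu]))

# Illustration: 136 random weight vectors on the simplex for 30 objectives
# (136 matches the small-population size reported for 10 objectives).
rng = np.random.default_rng(3)
W = rng.random((136, 30))
W /= W.sum(axis=1, keepdims=True)

angles = pairwise_angles_deg(W)
hist, edges = np.histogram(angles, bins=np.arange(0, 92, 2))  # 2-degree bins
```

Comparing such histograms for the two sampling schemes is exactly what Figure 4 visualizes: a narrower, more symmetric histogram indicates a more uniform spread of weight vectors.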
On analyzing the first column of this figure (30 objectives), the histogram of the separation angles between the weight vectors obtained by MOEA/D-DE-ACA (light green) has a symmetrical shape, a range from 20° to 70°, and an average that varies between 42° and 43°; this means that, in the worst case, a weight vector has a neighboring vector at 20° and that two weight vectors that are as far apart as possible are at 70°. In contrast, the histogram of the angles obtained by MOEA/D-DE (yellow-orange) is skewed to the right, with a broader range from 30° to 90° and an average between 52° and 59°. At first glance, the distribution of the weight vectors, in terms of the angles of separation between them, is more uniform for MOEA/D-DE-ACA. In addition, the experimental results show that with 30 objectives, whether in small, medium, or large populations, MOEA/D-DE-ACA wins (in most cases) in the Friedman ranking over MOEA/D-DE, which in Figure 4 is expressed as Rank (A) = 1 and Rank (B) = 2. With a small population (t = 2, v = 9), MOEA/D-DE-ACA dominates with a 95% level of significance (A > B), but as the population grows, the domination relationship begins to fade (A ≥ B). The histograms show the reason: as more weight vectors can be generated, MOEA/D-DE begins to generate weight vectors with a distribution increasingly similar to the one MOEA/D-DE-ACA produces from the beginning, but at a much higher computational cost. The right tail of the MOEA/D-DE histogram shrinks from the smaller to the larger population but does not disappear; this tail corresponds to weight vectors at the borders of the objective space; for example, with five objectives, it would include (1, 0, 0, 0, 0) and (0, 1, 0, 0, 0), among many others.
The results of the second column (60 objectives) in Figure 4 follow the same pattern. MOEA/D-DE-ACA obtains a more symmetrical distribution that gives better results than MOEA/D-DE in small and medium populations, the domination going from significant in the small population to not significant in the medium population, with the ranking inverting in the large population, where MOEA/D-DE comes first. These graphs show how MOEA/D-DE mimics MOEA/D-DE-ACA insofar as it can create more weight vectors (a larger population), although the right-hand tail never disappears entirely; this tail may even be beneficial with a large number of objectives, as occurs with 60 objectives. The latter motivates future work in which rows that sample these extremes (border weight vectors) are included in the construction of ACAs.
Figure 5 shows, as an example, four graphs of parallel coordinates of the non-dominated solutions obtained after executing MOEA/D-DE and MOEA/D-DE-ACA (strength 2 and alphabet 9) for problems DTLZ6, DTLZ7, WFG1, and WFG2. The initialization of the two algorithms establishes the same initial (random) position in the search space for the solutions in the population. The only difference between the two populations is the weight vectors.
Although it is a challenge to analyze these graphs visually, Figure 5 shows that the solutions obtained by the two algorithms differ substantially in the values of their 30 objectives. In the case of DTLZ6, the results obtained by MOEA/D-DE (goldenrod) measured in IGD are slightly better than those obtained by MOEA/D-DE-ACA (green), which is visually represented by a higher density of goldenrod lines at the bottom of each objective. The case of DTLZ7 is more difficult to interpret since several objectives show a concentration of better results for MOEA/D-DE while others show better results for MOEA/D-DE-ACA; this is reflected in remarkably similar IGD values for the two algorithms. In WFG1 and WFG2, the IGD value for MOEA/D-DE-ACA is better than that of MOEA/D-DE. In the graph of WFG1, this can be seen in the greater presence of green lines in the lower part of the graph, despite the peaks in several objectives. In the graph of WFG2, the green lines in the lower part cannot be seen because they are hidden by the goldenrod lines (the tool used to make these graphs has no option to manage transparency).
Regarding execution time, even though all the algorithms perform the same number of objective evaluations and are executed on the same machine under the same conditions, in all experiments MOEA/D-DE-ACA runs faster than MOEA/D-DE and NSGA-III. This is because initializing the weight vectors in MOEA/D-DE or the reference sets in NSGA-III is a computationally expensive step, while in MOEA/D-DE-ACA it amounts to reading a file with the ACA that the algorithm will use. This file must be created with an ACA construction algorithm, usually a metaheuristic, which can be computationally expensive; still, this work is performed only once, and MOEA/D-DE-ACA can then reuse the ACA as many times as needed. The file can also be acquired from third parties, such as the repository of Professor José Torres-Jiménez at CINVESTAV Tamaulipas (Mexico).
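The initialization step described above reduces, in essence, to loading a precomputed array and mapping its rows onto the weight simplex. A minimal sketch follows; the file layout and the symbol-to-weight normalization are assumptions made for illustration, and the actual mapping used by MOEA/D-DE-ACA may differ:

```python
import numpy as np
from io import StringIO

# Hypothetical ACA file: one row per line, k space-separated symbols in
# {0, ..., v-1}. Here, a tiny toy array with k = 3 and v = 3.
aca_file = StringIO("0 1 2\n2 0 1\n1 2 0\n0 0 2\n")

def weights_from_aca(source, eps=1e-6):
    """Load an ACA and normalize each row into a weight vector on the simplex.
    eps keeps every component strictly positive (an assumption, mirroring the
    common MOEA/D convention of avoiding zero weights)."""
    A = np.loadtxt(source, dtype=float)
    W = A + eps
    return W / W.sum(axis=1, keepdims=True)

W = weights_from_aca(aca_file)   # one weight vector per ACA row
```

Because the expensive combinatorial construction happens offline, this per-run cost is just file I/O plus a normalization, which is consistent with the execution-time behavior reported above.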
As mentioned in the Introduction, in multi- or many-objective optimization algorithms based on the decomposition approach, such as MOEA/D, proper initialization of the weight vectors is not the only way to improve the quality of the results. If a better initialization method is combined with other strategies, such as the dynamic adjustment of the weight vectors during the evolutionary process, the results obtained may be even better; this is future work that the research group hopes to conduct.

5.6. Comparison with Other State-of-the-Art Algorithms Using DTLZ and WFG Problems

The performance of the proposed algorithm was compared with the results of eight state-of-the-art MOEAs for solving MaOPs with 10 and 15 objectives [73]. The algorithms compared were NSGA-III [17], ANSGA-III [74], MOEA/D-PBI [8], EAG-MOEA/D [75], MOEA/DD [76], IBEA [5], SMS-EMOA [6], and IDMOPSO [73].
To make a fair comparison, the population size (popsize or N parameter) of all algorithms, including the present proposal, was configured with similar values [73]. Table 13 summarizes the population size used for each algorithm. MOEA/D-DE-ACA was executed with an ACA (N = 277; t = 2, k = 10, v = 9) for 10 objectives and an ACA (N = 132; t = 2, k = 15, v = 9) for 15 objectives. Each algorithm terminates after a prespecified number of fitness function evaluations (these values and other specific parameters for each algorithm are the same as those used in [73]). For the WFG problems, each algorithm stops after 200,000 and 300,000 fitness evaluations for 10 and 15 objectives, respectively. For DTLZ1, each algorithm stops after 2500 generations for both 10 and 15 objectives. For DTLZ2, each algorithm stops after 750 and 1000 generations for 10 and 15 objectives; for DTLZ4, after 2000 and 3000 generations; and for DTLZ5 to DTLZ7, after 1000 and 1500 generations.
Table 14 shows the IGD results for all algorithms on the DTLZ and WFG problems with 10 objectives. The results obtained for MOEA/D-DE-ACA were the best for most of the problems, regardless of the shape of the Pareto front or other characteristics of the problem. The last line of this table also shows the Friedman ranking, obtained using a Friedman statistic (distributed according to chi-square with 8 degrees of freedom) of 55.73 and a p-value of 2.243 × 10⁻¹² computed by the Iman and Davenport test. The Friedman test results show MOEA/D-DE-ACA as the best option, MOEA/DD in second place, and IDMOPSO in third place. The Holm post hoc test shows that the MOEA/D-DE-ACA results dominate those of all algorithms except MOEA/DD and IDMOPSO, with a 95% significance level. MOEA/DD dominates MOEA/D-PBI and SMS-EMOA at the same significance level, and IDMOPSO only dominates MOEA/D-PBI.
Table 15 shows the IGD results with 15 objectives. The results obtained for MOEA/D-DE-ACA were the best for all problems except DTLZ5 and DTLZ6, where it is outdone only by IDMOPSO, as with 10 objectives. The Friedman ranking (obtained using a Friedman statistic of 57.99, distributed according to chi-square with 8 degrees of freedom, and a p-value of 3.407 × 10⁻¹³ computed by the Iman and Davenport test) shows MOEA/D-DE-ACA as the best option, followed by MOEA/DD and IDMOPSO. The Holm post hoc test shows that the MOEA/D-DE-ACA results dominate the ANSGA-III, NSGA-III, MOEA/D-PBI, EAG-MOEA/D, IBEA, and SMS-EMOA algorithms with a 95% significance level. MOEA/DD and IDMOPSO dominate MOEA/D-PBI at the same significance level.
The results for 10 and 15 objectives on the DTLZ and WFG problems show MOEA/D-DE-ACA as the best option using population sizes of 277 and 132, respectively. The results on the WFG problems are remarkable. The good results of MOEA/DD could be further improved by using ACAs for weight initialization instead of the Das and Dennis method [77]; this represents future work for our research group.

5.7. Comparison with Other State-of-the-Art Algorithms over Constrained Problems

The performance of the proposed algorithm was compared with the results of six state-of-the-art MOEAs for solving constrained many-objective optimization problems with 10 and 15 objectives [78]. The algorithms compared were: C-MOEA/DD [76], C-NSGA-III [17], C-RVEA [79], I-DBEA [80], C-TAEA [81], and C-AnEA [78]. To manage constraints, C-NSGA-III and I-DBEA adopted the feasibility-driven strategy. Conversely, C-MOEA/DD, C-TAEA, C-RVEA, and C-AnEA utilized infeasibility information. MOEA/D-DE-ACA also adopted the feasibility-driven approach, as Jain and Deb describe in [17].
The problems used for evaluation and comparison were the six available problems in the C-DTLZ test suite [17]. C-DTLZ is based on DTLZ but includes three types of constraints in the objective space: (1) the original PF is still optimal, but there is an infeasible barrier when approaching the PF; (2) only the region located inside each of the M + 1 hyperspheres with radius r is feasible; and (3) the PF is composed of several constraint surfaces. The number of reference points for these problems was 275 and 135 for 10 and 15 objectives, respectively. The population size for the six compared algorithms was 276 and 136 for 10 and 15 objectives. As in the previous experiment, the population size for MOEA/D-DE-ACA was 277 and 132 for 10 and 15 objectives (see Table 13).
All algorithms were terminated after a prespecified number of fitness function evaluations (these values and other specific parameters for each algorithm are the same as those used in [78]). For C1-DTLZ1, each algorithm stops after 276,000 and 204,000 fitness evaluations (FEs) for 10 and 15 objectives, respectively. For C1-DTLZ3, each algorithm stops after 966,000 and 680,000 FEs for 10 and 15 objectives. For C2-DTLZ2, each algorithm stops after 207,000 and 136,000 FEs for 10 and 15 objectives. For C2-DTLZ2*, C3-DTLZ1, and C3-DTLZ4, each algorithm stops after 828,000 and 544,000 FEs for 10 and 15 objectives.
Table 16 shows the IGD results for all algorithms on the C-DTLZ problems with 10 and 15 objectives. The results for 10 objectives show that MOEA/D-DE-ACA was the best for most (five out of six) problems. The eighth line of this table also indicates the Friedman ranking, obtained using a Friedman statistic (distributed according to chi-square with 6 degrees of freedom) of 26.78 and a p-value of 1.042 × 10⁻⁷ computed by the Iman and Davenport test. The Friedman test results show MOEA/D-DE-ACA as the best option, followed by C-AnEA and C-RVEA. The Holm post hoc test shows only that the MOEA/D-DE-ACA and C-AnEA results dominate I-DBEA with a 95% significance level. However, at a 90% significance level, the Holm post hoc test shows that MOEA/D-DE-ACA also dominates C-NSGA-III and C-TAEA.
Table 16 also shows the results for 15 objectives, which are similar to those for 10. MOEA/D-DE-ACA was the best for most (five out of six) problems. This table also shows, on its last line, the Friedman ranking for 15 objectives, obtained using a Friedman statistic (distributed according to chi-square with 6 degrees of freedom) of 24.14 and a p-value of 3.787 × 10⁻⁶ computed by the Iman and Davenport test. The Friedman test results reveal MOEA/D-DE-ACA as the best option, C-AnEA in second place, and C-RVEA in third place, the same as for 10 objectives. The Holm post hoc test shows only that MOEA/D-DE-ACA and C-AnEA dominate I-DBEA and that MOEA/D-DE-ACA dominates C-NSGA-III with 95% significance. However, at a 90% significance level, the Holm post hoc test shows that the C-RVEA results dominate those of I-DBEA.
Of the three best algorithms (MOEA/D-DE-ACA, C-AnEA, and C-RVEA) for solving the constrained problems in the C-DTLZ test suite with 10 and 15 objectives, only MOEA/D-DE-ACA adopts the feasibility-driven strategy; the others utilize infeasibility information.

6. Conclusions and Future Work

This work proposes the definition of weight vectors in MOEA/D-DE based on augmented covering arrays (ACAs) in a new version of the algorithm called MOEA/D-DE-ACA. This new version was compared with the original version of MOEA/D-DE and NSGA-III in seven DTLZ problems and nine WFG problems of 10 to 100 objectives using small (t = 2, v = 9), medium (t = 2, v = 17), and large (t = 3, v = 9) populations.
Regarding the hypothesis initially raised in this research, it can be concluded that with a low value of strength (t = 2) and a low-to-medium value of alphabet (v = 9 or v = 17), meaning small populations with 136 to 288 solutions and medium ones with 416 to 838 solutions, the MOEA/D-DE-ACA algorithm obtains better IGD results than MOEA/D-DE between 30 and 100 objectives regardless of the characteristics of the problems and the shapes of their Pareto fronts; this implies that initialization of the weight vectors based on ACAs is more appropriate and that the execution time is significantly reduced, to 40.7% and 8.9% of the time used by MOEA/D-DE, respectively. When using a strength t = 3 (large populations with 729 to 1457 solutions), the results of MOEA/D-DE-ACA and MOEA/D-DE are similar, i.e., there is no statistically significant difference between them, except in 90 objectives, where MOEA/D-DE-ACA performs best; however, the execution time with these populations is greatly reduced, to 2.7% of the execution time of MOEA/D-DE.
In the experiments with small and medium populations, the two algorithms MOEA/D-DE-ACA and MOEA/D-DE outperform NSGA-III, but with 10 or 20 objectives and medium populations, the results are better with NSGA-III. With a large population, NSGA-III obtains better results than the other two algorithms from 10 to 70 objectives, but those results are not statistically superior to those obtained by MOEA/D-DE-ACA and MOEA/D-DE. In all cases, MOEA/D-DE-ACA executes faster than NSGA-III, using only 4.8% of the time used by NSGA-III with small populations, 7% with medium populations, and 6.2% with large populations.
In future work, the research group expects to directly compare the proposed initialization scheme with other initialization schemes in other decomposition-based algorithms such as MOEA/DD, MaOEA/D-2ADV, or MOEA/D-SOM. In addition, it is hoped to use the proposed initialization scheme in conjunction with a recent version of MOEA/D that adapts the direction of the weight vectors and improves the selection operators, to define the key characteristics of the ACAs that improve the results on specific kinds of MaOPs, and, finally, to design ACAs that include more border weight vectors and evaluate their impact on many-objective optimization problems.

Author Contributions

Methodology, C.C., C.O., J.T.-J., H.O. and M.M.; Software, C.C., C.O. and J.T.-J.; Supervision, C.C.; Writing—original draft, C.C. and C.O.; Writing—review and editing, C.C., C.O., J.T.-J., H.O. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

The Universidad del Cauca (Colombia) and the Fundación Universitaria de Popayán (Colombia) supported the work in this paper. The third author is grateful to CONAHCYT for the grant CF-2023-I-1014 “Construcción de un repositorio de Cluster Covering Arrays usando una metodología de tres etapas” that partially funded the research reported in this paper.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors sincerely thank the Universidad del Cauca, the Fundación Universitaria de Popayán, and the CINVESTAV Tamaulipas (Mexico) for their invaluable support throughout this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chand, S.; Wagner, M. Evolutionary Many-Objective Optimization: A Quick-Start Guide. Surv. Oper. Res. Manag. Sci. 2015, 20, 35–42. [Google Scholar] [CrossRef]
  2. Jiang, S.; Yang, S. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts. IEEE Trans. Cybern. 2016, 46, 421–437. [Google Scholar] [CrossRef] [PubMed]
  3. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T.; Pratab, S.; Agarwal, S.; Meyarivan, T.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NGSA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  4. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; Technical Report; ETH Zurich: Zurich, Switzerland, 2001. [Google Scholar]
  5. Zitzler, E.; Künzli, S. Indicator-Based Selection in Multiobjective Search. Lect. Notes Comput. Sci. 2004, 3242, 832–842. [Google Scholar] [CrossRef] [PubMed]
  6. Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective Selection Based on Dominated Hypervolume. Eur. J. Oper. Res. 2007, 181, 1653–1669. [Google Scholar] [CrossRef]
  7. Hughes, E.J. Multiple Single Objective Pareto Sampling. In Proceedings of the 2003 Congress on Evolutionary Computation (CEC 2003), Canberra, Australia, 8–12 December 2003; Volume 4, pp. 2678–2684. [Google Scholar]
  8. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  9. Hui, Y.; Xin, Y.; Min, S. Particle Swarm Optimization Route Planner Algorithm for Air Vehicle. In Proceedings of the ICCIA 2010–2010 International Conference on Computer and Information Application, Tianjin, China, 3–5 December 2010; pp. 319–322. [Google Scholar]
  10. Li, Y.; Bai, X.; Wu, Z. The Determination of Optimal Design Plan of the Sha-He Aqueduct. In Proceedings of the ICIME 2010–2010 2nd IEEE International Conference on Information Management and Engineering, Chengdu, China, 16–18 April 2010; Volume 6, pp. 319–323. [Google Scholar]
  11. Ruano, E.; Cobos, C.; Torres-Jimenez, J. Transit Network Frequencies-Setting Problem Solved Using a New Multi-Objective Global-Best Harmony Search Algorithm and Discrete Event Simulation. Lect. Notes Comput. Sci. 2017, 10062, 341–352. [Google Scholar] [CrossRef] [PubMed]
  12. Li, B.; Li, J.; Tang, K.; Yao, X. Many-Objective Evolutionary Algorithms: A Survey. ACM Comput. Surv. 2015, 48, 1–35. [Google Scholar] [CrossRef]
  13. Li, H.; Zhang, Q. Multiobjective Optimization Problems With Complicated Pareto Sets, MOEA/D and NSGA-II. IEEE Trans. Evol. Comput. 2009, 13, 284–302. [Google Scholar] [CrossRef]
  14. Chen, Z.; Zhou, Y.; Zhao, X.; Xiang, Y.; Wang, J. A Historical Solutions Based Evolution Operator for Decomposition-Based Many-Objective Optimization. Swarm Evol. Comput. 2018, 41, 167–189. [Google Scholar] [CrossRef]
  15. Zheng, W.; Tan, Y.; Fang, X.; Li, S. An Improved MOEA/D with Optimal DE Schemes for Many-Objective Optimization Problems. Algorithms 2017, 10, 86. [Google Scholar] [CrossRef]
  16. Li, B.; Li, J.; Tang, K.; Yao, X. An Improved Two Archive Algorithm for Many-Objective Optimization. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC’2014), Beijing, China, 6–11 July 2014; pp. 2869–2876. [Google Scholar]
  17. Jain, H.; Deb, K. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach. IEEE Trans. Evol. Comput. 2014, 18, 602–622. [Google Scholar] [CrossRef]
  18. Bhagavatula, S.S.; Sanjeevi, S.G.; Kumar, D.; Yadav, C.K. Multi-Objective Indicator Based Evolutionary Algorithm for Portfolio Optimization. In Proceedings of the Souvenir of the 2014 IEEE International Advance Computing Conference, IACC 2014, Gurgaon, India, 21–22 February 2014; pp. 1206–1210. [Google Scholar]
  19. Chugh, T.; Sindhya, K.; Hakanen, J.; Miettinen, K. An Interactive Simple Indicator-Based Evolutionary Algorithm (I-SIBEA) for Multiobjective Optimization Problems. Lect. Notes Comput. Sci. 2015, 9078, 277–291. [Google Scholar] [CrossRef] [PubMed]
  20. Luo, J.; Liu, Q.; Yang, Y.; Li, X.; Chen, M.; Cao, W. An Artificial Bee Colony Algorithm for Multi-Objective Optimisation. Appl. Soft Comput. 2017, 50, 235–251. [Google Scholar] [CrossRef]
  21. Rostami, S.; Neri, F. A Fast Hypervolume Driven Selection Mechanism for Many-Objective Optimisation Problems. Swarm Evol. Comput. 2017, 34, 50–67. [Google Scholar] [CrossRef]
  22. Xie, H.; Li, J.; Xue, H. A Survey of Dimensionality Reduction Techniques Based on Random Projection. arXiv 2017, arXiv:1706.04371. [Google Scholar]
  23. Aguirre, H.; Tanaka, K. Adaptive ε-Ranking on Mnk-Landscapes. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, MCDM, Nashville, TN, USA, 30 March–2 April 2009; pp. 104–111. [Google Scholar]
  24. Zou, J.; Ji, C.; Yang, S.; Zhang, Y.; Zheng, J.; Li, K. A Knee-Point-Based Evolutionary Algorithm Using Weighted Subpopulation for Many-Objective Optimization. Swarm Evol. Comput. 2019, 47, 33–43. [Google Scholar] [CrossRef]
  25. Ishibuchi, H.; Akedo, N.; Nojima, Y. Relation between Neighborhood Size and MOEA/D Performance on Many-Objective Problems Content of This Presentation. Evol. Multi-Criterion Optim. 2013, 7811, 459–474. [Google Scholar]
  26. Ordóñez-Quintero, C.-C. Definición de Pesos En MOEA/D Usando Arreglos de Cubrimiento Para Resolver Problemas de Optimización de Muchos Objetivos; Universidad del Cauca: Popayán, Colombia, 2019. [Google Scholar]
  27. Torres-Jimenez, J.; Ramirez-Acuna, D.O.; Acevedo-Juárez, B.; Avila-George, H. New Upper Bounds for Sequence Covering Arrays Using a 3-Stage Approach. Expert Syst. Appl. 2022, 207, 118022. [Google Scholar] [CrossRef]
  28. Torres-Jimenez, J.; Rodriguez-Cristerna, A. Metaheuristic Post-Optimization of the NIST Repository of Covering Arrays. CAAI Trans. Intell. Technol. 2017, 2, 6–13. [Google Scholar] [CrossRef]
  29. Xu, Q.; Xu, Z.; Ma, T. A Survey of Multiobjective Evolutionary Algorithms Based on Decomposition: Variants, Challenges and Future Directions. IEEE Access 2020, 8, 41588–41614. [Google Scholar] [CrossRef]
  30. Scheffé, H. Experiments with Mixtures. J. R. Stat. Soc. Ser. B 1958, 20, 344–360. [Google Scholar] [CrossRef]
  31. Scheffe, H. The Simplex-Centroid Design for Experiments with Mixtures. J. R. Stat. Soc. Ser. B 1963, 25, 235–263. [Google Scholar] [CrossRef]
  32. Cornell, J.A. Some Comments on Designs for Cox’s Mixture Polynomial. Technometrics 1975, 17, 25–35. [Google Scholar] [CrossRef]
  33. Prescott, P. Nearly Uniform Designs for Mixture Experiments. Commun. Stat. Theory Methods 2008, 37, 2095–2115. [Google Scholar] [CrossRef]
  34. Borkowski, J.J.; Piepel, G.F. Uniform Designs for Highly Constrained Mixture Experiments. J. Qual. Technol. 2009, 41, 35–47. [Google Scholar] [CrossRef]
  35. Trivedi, A.; Srinivasan, D.; Sanyal, K.; Ghosh, A. A Survey of Multiobjective Evolutionary Algorithms Based on Decomposition. IEEE Trans. Evol. Comput. 2017, 21, 440–462. [Google Scholar] [CrossRef]
  36. Tan, Y.Y.; Jiao, Y.C.; Li, H.; Wang, X.K. MOEA/D + Uniform Design: A New Version of MOEA/D for Optimization Problems with Many Objectives. Comput. Oper. Res. 2013, 40, 1648–1660. [Google Scholar] [CrossRef]
  37. Guo, X. A Survey of Decomposition Based Evolutionary Algorithms for Many-Objective Optimization Problems. IEEE Access 2022, 10, 72825–72838. [Google Scholar] [CrossRef]
  38. Ma, X.; Qi, Y.; Li, L.; Liu, F.; Jiao, L.; Wu, J. MOEA/D with Uniform Decomposition Measurement for Many-Objective Problems. Soft Comput. 2014, 18, 2541–2564. [Google Scholar] [CrossRef]
  39. Qi, Y.; Ma, X.; Liu, F.; Jiao, L.; Sun, J.; Wu, J. MOEA/D with Adaptive Weight Adjustment. Evol. Comput. 2014, 22, 231–264. [Google Scholar] [CrossRef] [PubMed]
  40. Zhang, Y.; Yang, R.; Zuo, J.; Jing, X. Enhancing MOEA/D with Uniform Population Initialization, Weight Vector Design and Adjustment Using Uniform Design. J. Syst. Eng. Electron. 2015, 26, 1010–1022. [Google Scholar] [CrossRef]
  41. Trivedi, A.; Srinivasan, D.; Pal, K.; Reindl, T. A MOEA/D with Non-Uniform Weight Vector Distribution Strategy for Solving the Unit Commitment Problem in Uncertain Environment. In Artificial Life and Computational Intelligence, Proceedings of the Australasian Conference on Artificial Life and Computational Intelligence, Geelong, Australia, 31 January–2 February 2017; Wagner, M., Li, X., Hendtlass, T., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 10142, pp. 378–390. [Google Scholar]
  42. Meneghini, I.R.; Guimarães, F.G. Evolutionary Method for Weight Vector Generation in Multi-Objective Evolutionary Algorithms Based on Decomposition and Aggregation. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; pp. 1900–1907. [Google Scholar]
  43. Tang, B. Orthogonal Array-Based Latin Hypercubes. J. Am. Stat. Assoc. 1993, 88, 1392–1397. [Google Scholar] [CrossRef]
  44. Cai, X.; Mei, Z.; Fan, Z. A Decomposition-Based Many-Objective Evolutionary Algorithm with Two Types of Adjustments for Direction Vectors. IEEE Trans. Cybern. 2018, 48, 2335–2348. [Google Scholar] [CrossRef] [PubMed]
  45. Gu, F.; Cheung, Y.M. Self-Organizing Map-Based Weight Design for Decomposition-Based Many-Objective Evolutionary Algorithm. IEEE Trans. Evol. Comput. 2018, 22, 211–225. [Google Scholar] [CrossRef]
  46. Liu, H.L.; Gu, F.; Zhang, Q. Decomposition of a Multiobjective Optimization Problem into a Number of Simple Multiobjective Subproblems. IEEE Trans. Evol. Comput. 2014, 18, 450–455. [Google Scholar] [CrossRef]
  47. Dai, C.; Lei, X. A Decomposition-Based Multiobjective Evolutionary Algorithm with Adaptive Weight Adjustment. Complexity 2018, 2018, 1753071. [Google Scholar] [CrossRef]
  48. Qiao, J.; Zhou, H.; Yang, C.; Yang, S. A Decomposition-Based Multiobjective Evolutionary Algorithm with Angle-Based Adaptive Penalty. Appl. Soft Comput. 2019, 74, 190–205. [Google Scholar] [CrossRef]
  49. Xu, H.; Zeng, W.; Zhang, D.; Zeng, X. MOEA/HD: A Multiobjective Evolutionary Algorithm Based on Hierarchical Decomposition. IEEE Trans. Cybern. 2019, 49, 517–526. [Google Scholar] [CrossRef] [PubMed]
  50. Torres-Jimenez, J.; Izquierdo-Marquez, I.; Avila-George, H. Methods to Construct Uniform Covering Arrays. IEEE Access 2019, 7, 42774–42797. [Google Scholar] [CrossRef]
  51. Sato, H. Inverted PBI in MOEA/D and Its Impact on the Search Performance on Multi and Many-Objective Optimization. In Proceedings of the GECCO 2014 Genetic and Evolutionary Computation Conference, ACM, New York, NY, USA, 12–16 July 2014; pp. 645–652. [Google Scholar]
  52. Xu, M.; Tian, Z. A Flexible Image Cipher Based on Orthogonal Arrays. Inf. Sci. 2021, 551, 39–53. [Google Scholar] [CrossRef]
  53. Hedayat, A.S.; Sloane, N.J.A.; Stufken, J. Orthogonal Arrays: Theory and Applications, 1st ed.; Springer: New York, NY, USA, 1999; ISBN 978-0-387-98766-8. [Google Scholar]
  54. Bush, K.A. Orthogonal Arrays of Index Unity. Ann. Math. Stat. 1952, 23, 426–434. [Google Scholar] [CrossRef]
  55. Muazu, A.A.; Hashim, A.S.; Sarlan, A.; Abdullahi, M. SCIPOG: Seeding and Constraint Support in IPOG Strategy for Combinatorial t-Way Testing to Generate Optimum Test Cases. J. King Saud Univ. Comput. Inf. Sci. 2023, 35, 185–201. [Google Scholar] [CrossRef]
  56. Ordoñez, H.; Torres-jimenez, J.; Ordoñez, A.; Cobos, C. Clustering Business Process Models Based on Multimodal Search and Covering Arrays. Lect. Notes Comput. Sci. 2017, 10062, 317–328. [Google Scholar] [CrossRef] [PubMed]
  57. Ruano-Daza, E.; Cobos, C.; Torres-Jimenez, J.; Mendoza, M.; Paz, A. A Multiobjective Bilevel Approach Based on Global-Best Harmony Search for Defining Optimal Routes and Frequencies for Bus Rapid Transit Systems. Appl. Soft Comput. 2018, 67, 567–583. [Google Scholar] [CrossRef]
  58. Ordoñez, H.; Torres-Jimenez, J.; Cobos, C.; Ordoñez, A.; Herrera-Viedma, E.; Maldonado-Martinez, G. A Business Process Clustering Algorithm Using Incremental Covering Arrays to Explore Search Space and Balanced Bayesian Information Criterion to Evaluate Quality of Solutions. PLoS ONE 2019, 14, e0217686. [Google Scholar] [CrossRef]
  59. Vivas, S.; Cobos, C.; Mendoza, M. Covering Arrays to Support the Process of Feature Selection in the Random Forest Classifier. Lect. Notes Comput. Sci. 2019, 11331, 64–76. [Google Scholar] [CrossRef] [PubMed]
  60. Dorado, H.; Cobos, C.; Torres-Jimenez, J.; Burra, D.D.; Mendoza, M.; Jimenez, D. Wrapper for Building Classification Models Using Covering Arrays. IEEE Access 2019, 7, 148297–148312. [Google Scholar] [CrossRef]
  61. Johnson, D.S. Approximation Algorithms for Combinatorial Problems. J. Comput. Syst. Sci. 1974, 9, 256–278. [Google Scholar] [CrossRef]
  62. Lovász, L. On the Ratio of Optimal Integral and Fractional Covers. Discret. Math. 1975, 13, 383–390. [Google Scholar] [CrossRef]
  63. Stein, S.K. Two Combinatorial Covering Theorems. J. Comb. Theory Ser. A 1974, 16, 391–397. [Google Scholar] [CrossRef]
  64. Huband, S.; Hingston, P.; Barone, L.; While, L. A Review of Multiobjective Test Problems and a Scalable Test Problem Toolkit. IEEE Trans. Evol. Comput. 2006, 10, 477–506. [Google Scholar] [CrossRef]
  65. Zhou, C.; Dai, G.; Zhang, C.; Li, X.; Ma, K. Entropy Based Evolutionary Algorithm with Adaptive Reference Points for Many-Objective Optimization Problems. Inf. Sci. 2018, 465, 232–247. [Google Scholar] [CrossRef]
  66. Khan, B.; Hanoun, S.; Johnstone, M.; Lim, C.P.; Creighton, D.; Nahavandi, S. A Scalarization-Based Dominance Evolutionary Algorithm for Many-Objective Optimization. Inf. Sci. 2019, 474, 236–252. [Google Scholar] [CrossRef]
  67. Zou, J.; Zhang, Y.; Yang, S.; Liu, Y.; Zheng, J. Adaptive Neighborhood Selection for Many-Objective Optimization Problems. Appl. Soft Comput. 2018, 64, 186–198. [Google Scholar] [CrossRef]
  68. Lin, Q.; Zhu, Q.; Huang, P.; Chen, J.; Ming, Z.; Yu, J. A Novel Hybrid Multi-Objective Immune Algorithm with Adaptive Differential Evolution. Comput. Oper. Res. 2015, 62, 95–111. [Google Scholar] [CrossRef]
  69. Sengupta, R.; Saha, S. Reference Point Based Archived Many Objective Simulated Annealing. Inf. Sci. 2018, 467, 725–749. [Google Scholar] [CrossRef]
  70. Zitzler, E.; Thiele, L.; Laumanns, M.; Fonseca, C.M.; Da Fonseca, V.G. Performance Assessment of Multiobjective Optimizers: An Analysis and Review. IEEE Trans. Evol. Comput. 2003, 7, 117–132. [Google Scholar] [CrossRef]
  71. Halim, A.H.; Ismail, I.; Das, S. Performance Assessment of the Metaheuristic Optimization Algorithms: An Exhaustive Review. Artif. Intell. Rev. 2020, 54, 2323–2409. [Google Scholar] [CrossRef]
  72. Coello, C.A.C.; Lamont, G.B.; Van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic and Evolutionary Computation); Springer: Berlin/Heidelberg, Germany, 2006; ISBN 0387332545. [Google Scholar]
  73. Luo, J.; Huang, X.; Yang, Y.; Li, X.; Wang, Z.; Feng, J. A Many-Objective Particle Swarm Optimizer Based on Indicator and Direction Vectors for Many-Objective Optimization. Inf. Sci. 2020, 514, 166–202. [Google Scholar] [CrossRef]
  74. Cheng, Q.; Du, B.; Zhang, L.; Liu, R. ANSGA-III: A Multiobjective Endmember Extraction Algorithm for Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 700–721. [Google Scholar] [CrossRef]
  75. Cai, X.; Li, Y.; Fan, Z.; Zhang, Q. An External Archive Guided Multiobjective Evolutionary Algorithm Based on Decomposition for Combinatorial Optimization. IEEE Trans. Evol. Comput. 2015, 19, 508–523. [Google Scholar] [CrossRef]
  76. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition. IEEE Trans. Evol. Comput. 2015, 19, 694–716. [Google Scholar] [CrossRef]
  77. Das, I.; Dennis, J.E. Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems. SIAM J. Optim. 1998, 8, 631–657. [Google Scholar] [CrossRef]
  78. Wang, C.; Xu, R. An Angle Based Evolutionary Algorithm with Infeasibility Information for Constrained Many-Objective Optimization. Appl. Soft Comput. 2020, 86, 105911. [Google Scholar] [CrossRef]
  79. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A Reference Vector Guided Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef]
  80. Asafuddoula, M.; Ray, T.; Sarker, R. A Decomposition-Based Evolutionary Algorithm for Many Objective Optimization. IEEE Trans. Evol. Comput. 2015, 19, 445–460. [Google Scholar] [CrossRef]
  81. Li, K.; Chen, R.; Fu, G.; Yao, X. Two-Archive Evolutionary Algorithm for Constrained Multiobjective Optimization. IEEE Trans. Evol. Comput. 2019, 23, 303–315. [Google Scholar] [CrossRef]
Figure 1. Visual comparison of an OA, CA, and ACA in 3D objectives with v = 7 and t = 2.
Figure 2. Average execution time in seconds for all algorithms in the experiments with strength 2 and alphabet 9.
Figure 3. IGD values obtained for all algorithms in the evaluation of strength 2 and alphabet 9 from 10 to 100 objectives. A: MOEA/D-DE-ACA, B: MOEA/D-DE, and C: NSGA-III.
Figure 4. Histograms for comparison of separation angles with different numbers of objectives and population sizes.
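Figure 4 histograms the separation angles between weight vectors for different numbers of objectives and population sizes. A minimal sketch of how such nearest-neighbour angles can be computed (the function names are illustrative, not from the paper):

```python
import math

def angle_deg(u, v):
    """Angle, in degrees, between two weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def min_separation_angles(vectors):
    """For each vector, the angle to its nearest neighbour; histogramming
    this quantity over a whole weight set yields plots like Figure 4."""
    return [min(angle_deg(u, v) for j, v in enumerate(vectors) if j != i)
            for i, u in enumerate(vectors)]
```

A more uniform weight set shows a tighter histogram of these minimum angles, which is the visual comparison the figure makes.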
Figure 5. Non-dominated solutions for MOEA/D-DE (goldenrod color) and MOEA/D-DE-ACA (green color) for problems DTLZ6, DTLZ7, WFG1, and WFG2 with 30 objectives.
Table 1. Summary of variables in the combinatorial objects previously mentioned.
Variable | Description in OAs, CAs, and ACAs | Description in MOEA/D-ACA
N | Number of rows | Population size
k | Number of columns | Number of objectives of the problem
t | Degree of interaction between the columns | Degree of interaction between the problem objectives
v | Alphabet for each cell of the matrix | w = weight values in the weight vectors
- | - | α = defines the level of granularity of the weights
Table 2. First four rows of ACA (42; 2, 10, 5), on the left, and defined weight vectors on the right.
a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 | α | w0 w1 w2 w3 w4 w5 w6 w7 w8 w9 | Σ
0 1 0 0 0 0 0 0 1 1 | 3 | 0 1/3 0 0 0 0 0 0 1/3 1/3 | 1
1 0 0 1 0 0 0 1 0 0 | 3 | 1/3 0 0 1/3 0 0 0 1/3 0 0 | 1
0 0 1 0 0 1 1 1 1 0 | 5 | 0 0 1/5 0 0 1/5 1/5 1/5 1/5 0 | 1
1 1 1 0 1 0 1 0 0 0 | 5 | 1/5 1/5 1/5 0 1/5 0 1/5 0 0 0 | 1
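The mapping illustrated in Table 2 divides each ACA entry by its row sum α, so every resulting weight vector lies on the unit simplex required by MOEA/D. A minimal sketch of that per-row conversion (the function name is illustrative, not from the paper; exact rational arithmetic keeps the weights summing to exactly 1):

```python
from fractions import Fraction

def aca_row_to_weights(row):
    """Map one ACA row (a_0, ..., a_{M-1}) to a weight vector via
    w_i = a_i / alpha, where alpha is the row sum, so sum(w) == 1."""
    alpha = sum(row)
    return [Fraction(a, alpha) for a in row]

# First ACA row shown in Table 2: alpha = 3, weights 1/3 at positions 1, 8, 9.
weights = aca_row_to_weights([0, 1, 0, 0, 0, 0, 0, 0, 1, 1])
```

Rows with a larger α (up to v − 1 per cell) produce finer-grained weights, which is how the alphabet v controls granularity.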
Table 3. OA (81; 2, 10, 9) with a0 + a1 + ⋯ + a9 = α.
Row | a0…a9 | α     Row | a0…a9 | α
1 | 0000000000 | 0     42 | 0571382465 | 41
2 | 1111111110 | 9     43 | 1382460575 | 41
3 | 2222222220 | 18     44 | 2460571385 | 41
4 | 3333333330 | 27     45 | 3814625705 | 41
5 | 4444444440 | 36     46 | 4625703815 | 41
6 | 0123456781 | 37     47 | 5703814625 | 41
7 | 1204537861 | 37     48 | 6247058135 | 41
8 | 2015348671 | 37     49 | 7058136245 | 41
9 | 3456780121 | 37     50 | 8136247055 | 41
10 | 4537861201 | 37     51 | 0638524176 | 42
11 | 5348672011 | 37     52 | 1746305286 | 42
12 | 6780123451 | 37     53 | 2857413066 | 42
13 | 7861204531 | 37     54 | 3062857416 | 42
14 | 8672015341 | 37     55 | 4170638526 | 42
15 | 0216873542 | 38     56 | 5281746306 | 42
16 | 1027684352 | 38     57 | 6305281746 | 42
17 | 2108765432 | 38     58 | 7413062856 | 42
18 | 3540216872 | 38     59 | 8524170636 | 42
19 | 4351027682 | 38     60 | 0752641837 | 43
20 | 5432108762 | 38     61 | 1830752647 | 43
21 | 6873540212 | 38     62 | 2641830757 | 43
22 | 7684351022 | 38     63 | 3185074267 | 43
23 | 8765432102 | 38     64 | 4263185077 | 43
24 | 0364718253 | 39     65 | 5074263187 | 43
25 | 1475826033 | 39     66 | 6428317507 | 43
26 | 2583607143 | 39     67 | 7506428317 | 43
27 | 3607142583 | 39     68 | 8317506427 | 43
28 | 4718250363 | 39     69 | 0845167328 | 44
29 | 5826031473 | 39     70 | 1653278408 | 44
30 | 6031475823 | 39     71 | 2734086518 | 44
31 | 7142583603 | 39     72 | 3278401658 | 44
32 | 8250364713 | 39     73 | 4086512738 | 44
33 | 0487235614 | 40     74 | 5167320848 | 44
34 | 1568043724 | 40     75 | 6512734088 | 44
35 | 2376154804 | 40     76 | 7320845168 | 44
36 | 3721568044 | 40     77 | 8401653278 | 44
37 | 4802376154 | 40     78 | 5555555550 | 45
38 | 5610487234 | 40     79 | 6666666660 | 54
39 | 6154802374 | 40     80 | 7777777770 | 63
40 | 7235610484 | 40     81 | 8888888880 | 72
41 | 8043721564 | 40
Table 4. CA (36; 2, 10, 5).
Row | a0…a9 | α     Row | a0…a9 | α
1 | 4002041002 | 13     19 | 3021232134 | 21
2 | 1013220302 | 14     20 | 1022423421 | 21
3 | 2110023123 | 15     21 | 3441340101 | 21
4 | 0042104113 | 16     22 | 2323042203 | 21
5 | 3340221010 | 16     23 | 1142030244 | 21
6 | 1421001430 | 16     24 | 1234122043 | 22
7 | 0233031211 | 16     25 | 3134403202 | 22
8 | 2203300024 | 16     26 | 2034341140 | 22
9 | 0221213042 | 17     27 | 1400343314 | 23
10 | 2104114031 | 17     28 | 4403121224 | 23
11 | 0104242220 | 17     29 | 2442212412 | 24
12 | 2220134300 | 17     30 | 0332322333 | 24
13 | 1301133122 | 17     31 | 4124330413 | 25
14 | 1010314232 | 17     32 | 4243413130 | 25
15 | 2111411313 | 18     33 | 0414432044 | 26
16 | 0330410404 | 19     34 | 4431224223 | 27
17 | 4310202341 | 20     35 | 3344014324 | 28
18 | 3212140430 | 20     36 | 3103444443 | 30
Table 5. ACA (42; 2, 10, 5).
Row | a0…a9 | α     Row | a0…a9 | α
1 | 0100000011 | 3     22 | 4014403411 | 22
2 | 1001000100 | 3     23 | 1044120343 | 22
3 | 0010011110 | 5     24 | 1342431004 | 22
4 | 1110101000 | 5     25 | 2341304311 | 22
5 | 0101111101 | 7     26 | 3421133041 | 22
6 | 1011110011 | 7     27 | 4224341003 | 23
7 | 0212112201 | 12     28 | 2431440113 | 23
8 | 0020220222 | 12     29 | 2233001444 | 23
9 | 2102021202 | 12     30 | 4430233320 | 24
10 | 1221012022 | 13     31 | 4303213233 | 24
11 | 2021122111 | 13     32 | 1432323402 | 24
12 | 2200212121 | 13     33 | 2244033132 | 24
13 | 1121202220 | 13     34 | 4321120434 | 24
14 | 1012121122 | 13     35 | 0334342230 | 24
15 | 2222221000 | 13     36 | 3223424302 | 25
16 | 2212200212 | 14     37 | 4312044142 | 25
17 | 3013330114 | 19     38 | 3340244420 | 26
18 | 3412001333 | 20     39 | 0443432240 | 26
19 | 1033144031 | 20     40 | 3404314224 | 27
20 | 0100434433 | 22     41 | 4143412323 | 27
21 | 0100343344 | 22     42 | 3134212444 | 28
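What the arrays in Tables 3–5 share is the strength-t coverage property: for every choice of t columns, all v^t symbol combinations appear in at least one row. A small, self-contained checker for that property (illustrative code, not the construction method used in the paper), demonstrated on a tiny strength-2 orthogonal array over a binary alphabet:

```python
from itertools import combinations

def covers_all_t_way(array, t, v):
    """True if every set of t columns exhibits all v**t symbol
    combinations in at least one row (the CA/OA coverage condition)."""
    k = len(array[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        if len(seen) < v ** t:
            return False
    return True

# Strength-2 orthogonal array over {0, 1}: each pair of columns shows
# all four combinations 00, 01, 10, 11 exactly once.
oa = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

The same check applies to the paper's arrays with t = 2 and v = 5 or v = 9; an ACA additionally fixes the row sums α used for normalization.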
Table 6. Friedman rank for IGD results with strength (t = 2) and alphabet (v = 9).
Objectives (k) | MOEA/D-ACAS | MOEA/D-CAS | p-Value | Significant | ACAs Population Size (N) | CAs Population Size (N) | Population Size Δ
10 | (1) 1.06 | (2) 1.94 | 0.000465 | True | 136 | 81 | 55
20 | (1) 1.25 | (2) 1.75 | 0.045500 | True | 174 | 132 | 42
30 | (1) 1.00 | (2) 2.00 | 0.000063 | True | 197 | 148 | 49
40 | (1) 1.13 | (2) 1.88 | 0.002700 | True | 215 | 153 | 62
50 | (1) 1.00 | (2) 2.00 | 0.000063 | True | 232 | 153 | 79
60 | (1) 1.06 | (2) 1.94 | 0.000465 | True | 245 | 153 | 92
70 | (1) 1.06 | (2) 1.94 | 0.000465 | True | 256 | 153 | 103
80 | (1) 1.06 | (2) 1.94 | 0.000465 | True | 266 | 153 | 113
90 | (1) 1.06 | (2) 1.94 | 0.000465 | True | 277 | 153 | 124
100 | (1) 1.06 | (2) 1.94 | 0.000465 | True | 288 | 160 | 128
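The values in Table 6 (and later in Tables 11, 14, and 15) are mean Friedman ranks: on each problem instance the algorithms are ranked by IGD (lower is better), and the ranks are then averaged across instances. A minimal sketch of that computation (the function name is illustrative; ties receive the mean of the tied ranks):

```python
def mean_friedman_ranks(scores):
    """Average Friedman rank per algorithm.

    `scores[p][a]` is the IGD of algorithm `a` on problem instance `p`
    (lower is better)."""
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda a: row[a])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:
            j = i
            # Extend over a block of tied values.
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_algs):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]
```

The Holm post hoc procedure reported later then compares algorithms pairwise using these rank statistics.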
Table 7. Properties of test problems using M objectives where M = {10, 20, 30, …, 100}.
Problem | Shape of PF [65] | Multi-Modal [64] | Bias [64] | Disconnected [64,65] | Separable [64] | Deceptive [64,65] | Scaled [65] | No. of Variables (D = m + k1 − 1) | Generations
DTLZ1 | Linear, Regular (Easy) | Yes [65] | No | No | Yes [66] | No | - | m + 4 (k1 = 5) | 600
DTLZ2 | Concave | No | No | No | Yes [66] | No | - | m + 9 (k1 = 10) | 500
DTLZ3 | Concave [67] | Yes [65] | No | No | Yes [66] | Yes | - | m + 9 (k1 = 10) | 800
DTLZ4 | Concave [67] | No | Yes | No | Yes [66] | No | - | m + 9 (k1 = 10) | 500
DTLZ5 | Concave, Degenerate, Irregular | No | No | Unknown | Unknown | No | - | m + 4 (k1 = 5) | 500
DTLZ6 | Concave, Degenerate, Irregular | No | Yes | Unknown | Unknown | No | - | m + 4 (k1 = 5) | 500
DTLZ7 | Mixed | Yes | Yes [67] | Yes [65] | No | Yes | Yes | m + 19 (k1 = 20) | 500
WFG1 | Sharp Tails, Irregular, Convex, Mixed | No | Yes (polynomial, flat) [68] | No | Yes | No | Yes [69] | m + 9 (l = 10) | 600
WFG2 | Convex [68,69] | Yes (F1:M-1 no) [68] | No | Yes | No | No | Yes [69] | m + 9 (l = 10) | 500
WFG3 | Linear, Degenerate | No | No | No | No | No | Yes | m + 9 (l = 10) | 500
WFG4 | Concave, Regular | Yes (highly) | No | No | Yes [67] | No | Yes | m + 9 (l = 10) | 500
WFG5 | Concave, Regular | No | No | No | Yes | Yes | Yes | m + 9 (l = 10) | 500
WFG6 | Concave, Regular | No | No | No | No | No | Yes | m + 9 (l = 10) | 500
WFG7 | Concave, Regular | No | Yes (parameter dependent) [68] | No | Yes | No | Yes | m + 9 (l = 10) | 500
WFG8 | Concave, Regular | No | Yes (parameter dependent) [68] | No | No | No | Yes | m + 9 (l = 10) | 500
WFG9 | Concave, Regular | Yes (highly difficult) | Yes (parameter dependent) [68] | No | No | Yes | Yes | m + 9 (l = 10) | 500
Table 8. Parameter settings for the compared algorithms.
Algorithm | Parameter Settings
NSGA-III | pm = 1/n, pc = 1.0, ηm = 20, ηc = 30
MOEA/D-DE | pm = 1/n, ηm = 20, CR = 1, F = 0.5, δ = 0.9, Nb = 20, nr = 2
MOEA/D-DE-ACA | pm = 1/n, ηm = 20, CR = 1, F = 0.5, δ = 0.9, Nb = 20, nr = 2
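Tables 9 and 10 report the inverted generational distance (IGD): the average, over a reference set sampled from the true Pareto front, of the Euclidean distance to the closest obtained solution, so lower values are better. A minimal sketch of the standard definition (the function name is illustrative):

```python
import math

def igd(reference, obtained):
    """Inverted generational distance: mean distance from each
    reference point to its nearest obtained solution."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return sum(min(dist(r, s) for s in obtained) for r in reference) / len(reference)
```

Because IGD averages over the reference front, it penalizes both poor convergence and poor coverage of the front.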
Table 9. Mean IGD results in DTLZ problems using ACAs with strength 2 and alphabet 9.
Obj | Algorithm | DTLZ1 | DTLZ2 | DTLZ3 | DTLZ4 | DTLZ5 | DTLZ6 | DTLZ7
10 | MOEA/D-DE-ACA | (1) 0.1457 | (1) 0.3194 | (1) 0.1888 | (1) 0.2919 | (2) 0.1505 | (1) 0.1044 | (1) 0.6671
10 | MOEA/D-DE | (2) 0.1513 | (2) 0.3582 | (2) 0.2033 | (3) 0.3163 | (1) 0.1481 | (2) 0.1046 | (2) 0.7505
10 | NSGA-III | (3) 0.1696 | (3) 0.3746 | (3) 0.2989 | (2) 0.3024 | (3) 0.2172 | (3) 0.5621 | (3) 1.1480
20 | MOEA/D-DE-ACA | (1) 0.1232 | (1) 0.4372 | (1) 0.1411 | (2) 0.3697 | (2) 0.2045 | (2) 0.1279 | (1) 1.1480
20 | MOEA/D-DE | (2) 0.1364 | (3) 0.4820 | (2) 0.1471 | (3) 0.4221 | (1) 0.2024 | (1) 0.1179 | (2) 1.5285
20 | NSGA-III | (3) 0.1612 | (2) 0.4458 | (3) 0.4070 | (1) 0.3295 | (3) 0.2341 | (3) 0.4297 | (3) 1.7975
30 | MOEA/D-DE-ACA | (1) 0.1245 | (1) 0.4905 | (1) 0.1647 | (1) 0.4614 | (2) 0.2010 | (2) 0.1049 | (1) 1.7280
30 | MOEA/D-DE | (2) 0.1423 | (2) 0.5359 | (2) 0.1673 | (2) 0.4825 | (1) 0.1939 | (1) 0.0992 | (2) 2.0447
30 | NSGA-III | (3) 0.2119 | (3) 0.6693 | (3) 0.3213 | (3) 0.4966 | (3) 0.2951 | (3) 0.4388 | (3) 2.2949
40 | MOEA/D-DE-ACA | (2) 0.1039 | (1) 0.5235 | (1) 0.1571 | (2) 0.5359 | (2) 0.1618 | (2) 0.0912 | (1) 2.2291
40 | MOEA/D-DE | (1) 0.1000 | (2) 0.5718 | (2) 0.1673 | (1) 0.5092 | (1) 0.1590 | (1) 0.0906 | (2) 2.5549
40 | NSGA-III | (3) 0.1758 | (3) 0.7721 | (3) 0.3086 | (3) 0.5871 | (3) 0.2965 | (3) 0.4688 | (3) 2.6981
50 | MOEA/D-DE-ACA | (1) 0.0663 | (1) 0.5547 | (1) 0.1250 | (2) 0.6354 | (2) 0.1869 | (1) 0.0887 | (1) 2.6861
50 | MOEA/D-DE | (2) 0.0698 | (2) 0.6165 | (2) 0.1337 | (1) 0.5506 | (1) 0.1846 | (2) 0.0903 | (2) 2.9569
50 | NSGA-III | (3) 0.1169 | (3) 0.8753 | (3) 0.3301 | (3) 0.6740 | (3) 0.2929 | (3) 0.4995 | (3) 3.0068
60 | MOEA/D-DE-ACA | (1) 0.0615 | (1) 0.5812 | (1) 0.1226 | (2) 0.6891 | (2) 0.1816 | (1) 0.0722 | (1) 3.0971
60 | MOEA/D-DE | (2) 0.0746 | (2) 0.6349 | (2) 0.1298 | (1) 0.5763 | (1) 0.1800 | (2) 0.0759 | (2) 3.2730
60 | NSGA-III | (3) 0.1325 | (3) 0.9522 | (3) 0.4146 | (3) 0.7245 | (3) 0.2877 | (3) 0.5283 | (3) 3.3230
70 | MOEA/D-DE-ACA | (1) 0.0835 | (1) 0.6065 | (1) 0.1230 | (2) 0.7266 | (1) 0.1807 | (1) 0.0670 | (1) 3.4231
70 | MOEA/D-DE | (2) 0.0926 | (2) 0.6516 | (2) 0.1285 | (1) 0.6016 | (2) 0.1809 | (2) 0.0716 | (3) 3.6111
70 | NSGA-III | (3) 0.1425 | (3) 0.9849 | (3) 0.5539 | (3) 0.7859 | (3) 0.2934 | (3) 0.6775 | (2) 3.5931
80 | MOEA/D-DE-ACA | (1) 0.0906 | (1) 0.6148 | (1) 0.1329 | (2) 0.7491 | (1) 0.1861 | (1) 0.0626 | (2) 3.7727
80 | MOEA/D-DE | (2) 0.0947 | (2) 0.6498 | (2) 0.1458 | (1) 0.6209 | (2) 0.1891 | (2) 0.0725 | (3) 3.9323
80 | NSGA-III | (3) 0.1614 | (3) 0.9839 | (3) 0.6074 | (3) 0.8184 | (3) 0.2952 | (3) 1.1399 | (1) 3.7696
90 | MOEA/D-DE-ACA | (1) 0.0914 | (1) 0.6224 | (1) 0.1444 | (2) 0.7710 | (1) 0.1981 | (1) 0.0824 | (2) 4.1787
90 | MOEA/D-DE | (2) 0.1018 | (2) 0.6635 | (2) 0.1596 | (1) 0.6381 | (2) 0.2027 | (2) 0.0977 | (3) 4.2406
90 | NSGA-III | (3) 0.1750 | (3) 1.0005 | (3) 0.7430 | (3) 0.8398 | (3) 0.3099 | (3) 4.2159 | (1) 4.0912
100 | MOEA/D-DE-ACA | (1) 0.1063 | (1) 0.6304 | (1) 0.1342 | (2) 0.7967 | (1) 0.1719 | (1) 0.0824 | (3) 4.5781
100 | MOEA/D-DE | (2) 0.1160 | (2) 0.6758 | (2) 0.1454 | (1) 0.6692 | (2) 0.1757 | (2) 0.0977 | (2) 4.5343
100 | NSGA-III | (3) 0.1994 | (3) 1.0274 | (3) 0.6672 | (3) 0.8838 | (3) 0.3037 | (3) 4.2159 | (1) 4.2818
Table 10. Mean IGD results in WFG problems using ACAs with strength 2 and alphabet 9.
Obj | Algorithm | WFG1 | WFG2 | WFG3 | WFG4 | WFG5 | WFG6 | WFG7 | WFG8 | WFG9
10 | MOEA/D-DE-ACA | (2) 0.1089 | (1) 0.1059 | (1) 0.1792 | (3) 0.4017 | (3) 0.3018 | (3) 0.4681 | (3) 0.3453 | (3) 0.4983 | (3) 0.3782
10 | MOEA/D-DE | (1) 0.0861 | (2) 0.1127 | (3) 0.2074 | (2) 0.3719 | (2) 0.2834 | (2) 0.4176 | (2) 0.3440 | (2) 0.4603 | (2) 0.3460
10 | NSGA-III | (3) 0.1779 | (3) 0.1676 | (2) 0.1836 | (1) 0.3441 | (1) 0.2755 | (1) 0.3651 | (1) 0.3213 | (1) 0.3808 | (1) 0.2964
20 | MOEA/D-DE-ACA | (2) 3.3927 | (1) 0.1268 | (3) 0.2264 | (3) 0.4800 | (2) 0.3958 | (2) 0.5574 | (2) 0.4782 | (2) 0.5703 | (2) 0.4503
20 | MOEA/D-DE | (3) 4.2079 | (2) 0.1507 | (2) 0.2218 | (2) 0.4764 | (3) 0.4662 | (3) 0.5858 | (3) 0.5280 | (3) 0.5747 | (3) 0.4892
20 | NSGA-III | (1) 0.9033 | (3) 0.2007 | (1) 0.2103 | (1) 0.3963 | (1) 0.3272 | (1) 0.3942 | (1) 0.3664 | (1) 0.4135 | (1) 0.3443
30 | MOEA/D-DE-ACA | (2) 4.1126 | (1) 0.1564 | (3) 0.2377 | (1) 0.5587 | (1) 0.4487 | (1) 0.6216 | (2) 0.5629 | (1) 0.6175 | (2) 0.5107
30 | MOEA/D-DE | (3) 6.4187 | (2) 0.1921 | (2) 0.2302 | (2) 0.6130 | (2) 0.4825 | (3) 0.6427 | (3) 0.5859 | (3) 0.6342 | (3) 0.5352
30 | NSGA-III | (1) 1.0403 | (3) 0.2431 | (1) 0.2135 | (3) 0.6403 | (3) 0.5015 | (2) 0.6354 | (1) 0.5171 | (2) 0.6301 | (1) 0.5049
40 | MOEA/D-DE-ACA | (2) 9.8444 | (1) 0.1768 | (3) 0.2669 | (1) 0.5783 | (1) 0.4802 | (1) 0.6320 | (1) 0.5886 | (1) 0.6111 | (1) 0.6022
40 | MOEA/D-DE | (3) 11.751 | (2) 0.2150 | (2) 0.2587 | (2) 0.6321 | (2) 0.5115 | (3) 0.6711 | (2) 0.6386 | (2) 0.6433 | (2) 0.6168
40 | NSGA-III | (1) 2.0611 | (3) 0.2657 | (1) 0.2259 | (3) 0.7516 | (3) 0.5776 | (2) 0.7555 | (3) 0.6133 | (3) 0.7156 | (3) 0.7088
50 | MOEA/D-DE-ACA | (3) 15.400 | (1) 0.1896 | (2) 0.2748 | (1) 0.6079 | (1) 0.4979 | (1) 0.6621 | (1) 0.6270 | (1) 0.6438 | (1) 0.6390
50 | MOEA/D-DE | (2) 14.295 | (2) 0.2214 | (3) 0.2755 | (2) 0.6790 | (2) 0.5326 | (2) 0.7115 | (2) 0.6554 | (2) 0.6972 | (2) 0.6410
50 | NSGA-III | (1) 5.3195 | (3) 0.2760 | (1) 0.2237 | (3) 0.8358 | (3) 0.6319 | (3) 0.8348 | (3) 0.6837 | (3) 0.8072 | (3) 0.8069
60 | MOEA/D-DE-ACA | (3) 9.6503 | (1) 0.2472 | (3) 0.3087 | (1) 0.6349 | (1) 0.5237 | (1) 0.6884 | (1) 0.6406 | (1) 0.6806 | (2) 0.6610
60 | MOEA/D-DE | (2) 7.1777 | (2) 0.2564 | (2) 0.2868 | (2) 0.6899 | (2) 0.5406 | (2) 0.7334 | (3) 0.6524 | (2) 0.7288 | (1) 0.6473
60 | NSGA-III | (1) 4.1623 | (3) 0.3080 | (1) 0.2416 | (3) 0.8932 | (3) 0.6802 | (3) 0.9006 | (2) 0.6412 | (3) 0.9166 | (3) 0.8777
70 | MOEA/D-DE-ACA | (3) 43.702 | (2) 0.2905 | (3) 0.3317 | (1) 0.6496 | (1) 0.5407 | (1) 0.7016 | (1) 0.6643 | (1) 0.7013 | (2) 0.6953
70 | MOEA/D-DE | (2) 26.651 | (1) 0.2819 | (2) 0.3141 | (2) 0.7093 | (2) 0.5579 | (2) 0.7560 | (3) 0.6888 | (2) 0.7459 | (1) 0.6577
70 | NSGA-III | (1) 22.747 | (3) 0.3266 | (1) 0.2290 | (3) 0.9360 | (3) 0.6958 | (3) 0.9388 | (2) 0.6692 | (3) 0.9770 | (3) 0.9486
80 | MOEA/D-DE-ACA | (3) 4.9379 | (3) 0.3527 | (3) 0.3534 | (1) 0.6500 | (1) 0.5574 | (1) 0.7168 | (3) 0.6902 | (1) 0.7118 | (2) 0.7222
80 | MOEA/D-DE | (2) 2.9548 | (1) 0.2980 | (2) 0.3347 | (2) 0.7014 | (2) 0.5698 | (2) 0.7562 | (1) 0.6756 | (2) 0.7452 | (1) 0.6742
80 | NSGA-III | (1) 2.5396 | (2) 0.3512 | (1) 0.2236 | (3) 0.9627 | (3) 0.7295 | (3) 0.9944 | (2) 0.6833 | (3) 1.0088 | (3) 1.0464
90 | MOEA/D-DE-ACA | (2) 3.0815 | (2) 0.3612 | (3) 0.3428 | (1) 0.6653 | (1) 0.5692 | (1) 0.7177 | (2) 0.7226 | (1) 0.7056 | (2) 0.7461
90 | MOEA/D-DE | (3) 3.2389 | (1) 0.3103 | (2) 0.3205 | (2) 0.7182 | (2) 0.5837 | (2) 0.7641 | (1) 0.7002 | (2) 0.7415 | (1) 0.6669
90 | NSGA-III | (1) 2.0892 | (3) 0.3766 | (1) 0.2394 | (3) 0.9852 | (3) 0.7565 | (3) 1.0150 | (3) 0.7403 | (3) 1.0465 | (3) 1.0802
100 | MOEA/D-DE-ACA | (2) 3.5191 | (3) 0.3960 | (3) 0.3538 | (1) 0.6602 | (1) 0.5731 | (1) 0.7161 | (3) 0.7167 | (1) 0.6930 | (2) 0.7771
100 | MOEA/D-DE | (3) 3.6717 | (1) 0.3455 | (2) 0.3502 | (2) 0.7234 | (2) 0.5907 | (2) 0.7739 | (1) 0.7032 | (2) 0.7473 | (1) 0.6738
100 | NSGA-III | (1) 1.2328 | (2) 0.3807 | (1) 0.2465 | (3) 1.0142 | (3) 0.7612 | (3) 1.0453 | (2) 0.7032 | (3) 1.0746 | (3) 1.1486
Table 11. Friedman rank and Holm post hoc for IGD results with strength 2 and alphabet 9.
Obj | MOEA/D-DE-ACA (A) | MOEA/D-DE (B) | NSGA-III (C) | p-Value | Sig | Holm | Population Size | Reference Points
10 | (1) 1.88 | (2) 2.00 | (3) 2.13 | 0.77880 | False | - | 136 | 3356
20 | (1) 1.81 | (3) 2.38 | (1) 1.81 | 0.18498 | False | - | 174 | 4830
30 | (1) 1.44 | (2) 2.19 | (3) 2.38 | 0.01950 | True | A B C | 197 | 5414
40 | (1) 1.44 | (2) 1.88 | (3) 2.69 | 0.00160 | True | A B C | 215 | -
50 | (1) 1.31 | (2) 1.94 | (3) 2.75 | 0.00025 | True | A B C | 232 | 6303
60 | (1) 1.44 | (2) 1.88 | (3) 2.69 | 0.00160 | True | A B C | 245 | 6576
70 | (1) 1.44 | (2) 1.94 | (3) 2.63 | 0.00339 | True | A B C | 256 | 6390
80 | (1) 1.69 | (2) 1.81 | (3) 2.50 | 0.04677 | True | A B C | 266 | 6753
90 | (1) 1.50 | (2) 1.88 | (3) 2.63 | 0.00525 | True | A B C | 277 | 7416
100 | (1) 1.69 | (2) 1.81 | (3) 2.50 | 0.04677 | True | A B C | 288 | 7416
Table 13. Number of reference points/directions and corresponding population sizes used in algorithms.
Obj (M) | Ref. Points/Ref. Directions | NSGA-III and ANSGA-III Popsize (N) | Other Algorithms and IDMOPSO Popsize (N) | MOEA/D-DE-ACA Popsize (N)
10 | 275 | 276 | 275 | 277
15 | 135 | 136 | 135 | 132
Table 14. IGD results for all algorithms on DTLZ and WFG problems with 10 objectives.

| Problem | MOEA/D-DE-ACA | MOEA/DD | IDMOPSO | NSGA-III | ANSGA-III | EAG-MOEA/D | IBEA | SMS-EMOA | MOEA/D-PBI |
|---------|---------------|---------|---------|----------|-----------|------------|------|----------|------------|
| DTLZ1 | (1) 0.048 | 0.109 | 0.108 | 0.113 | 0.125 | 0.204 | 0.236 | 15.617 | 0.110 |
| DTLZ2 | (1) 0.134 | 0.422 | 0.430 | 0.474 | 0.540 | 0.674 | 0.428 | 0.498 | 0.423 |
| DTLZ4 | (1) 0.135 | 0.422 | 0.456 | 0.432 | 0.441 | 0.714 | 0.429 | 0.583 | 0.512 |
| DTLZ5 | (3) 0.094 | 0.133 | (1) 0.016 | 0.615 | 0.692 | 0.094 | 0.098 | 0.117 | (2) 0.020 |
| DTLZ6 | (3) 0.055 | 0.121 | (1) 0.018 | 2.775 | 4.225 | 0.117 | 0.331 | 1.210 | (2) 0.019 |
| DTLZ7 | (1) 0.730 | 2.336 | 1.224 | 1.116 | 1.122 | 1.112 | 0.959 | 4.763 | 3.037 |
| WFG1 | (1) 0.041 | 1.075 | 1.035 | 1.242 | 1.222 | 2.120 | 1.506 | 2.633 | 2.554 |
| WFG2 | (1) 0.042 | 5.730 | 5.926 | 5.898 | 5.726 | 1.987 | 14.887 | 5.776 | 16.681 |
| WFG3 | (1) 0.075 | 0.391 | 2.532 | 0.815 | 1.106 | 1.047 | 2.759 | 1.873 | 5.342 |
| WFG4 | (1) 0.152 | 4.469 | 5.361 | 4.719 | 4.545 | 4.776 | 6.289 | 5.794 | 9.084 |
| WFG5 | (1) 0.064 | 4.474 | 4.234 | 4.476 | 4.529 | 4.607 | 6.339 | 4.432 | 8.238 |
| WFG6 | (1) 0.193 | 4.621 | 4.674 | 5.691 | 4.726 | 6.014 | 6.439 | 6.883 | 9.294 |
| WFG7 | (1) 0.098 | 4.541 | 4.341 | 4.606 | 4.544 | 5.114 | 6.005 | 4.543 | 9.304 |
| WFG8 | (1) 0.212 | 4.575 | 5.084 | 5.369 | 4.993 | 6.526 | 5.618 | 5.895 | 8.444 |
| WFG9 | (1) 0.105 | 4.122 | 4.012 | 4.443 | 4.447 | 5.454 | 6.227 | 5.357 | 8.838 |
| Fried. Rank | (1) 1.30 | (2) 3.40 | (3) 3.73 | (4) 5.20 | (5) 5.33 | (6) 5.90 | (7) 6.33 | (8) 6.73 | (9) 7.07 |
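The figures in Tables 14-16 use the inverted generational distance (IGD): the mean Euclidean distance from each point of the reference (true) Pareto front to its nearest obtained solution, where lower is better. A minimal sketch follows; the function name and the toy two-point fronts are illustrative only.

```python
# Sketch: inverted generational distance (IGD). Assumes both fronts are
# given as arrays of objective vectors of equal dimensionality.
import numpy as np

def igd(reference_front, obtained_front):
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_front, dtype=float)
    # pairwise Euclidean distance matrix, shape (len(ref), len(obt))
    dists = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    # nearest obtained solution per reference point, averaged
    return float(dists.min(axis=1).mean())

# An approximation that exactly covers the reference front scores zero:
print(igd([[0.0, 1.0], [1.0, 0.0]], [[0.0, 1.0], [1.0, 0.0]]))  # 0.0
```

Because the average runs over the reference set, IGD penalizes both poor convergence and poor coverage of the front, which is why it is the metric of choice in these many-objective comparisons.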
Table 15. IGD results for all algorithms on DTLZ and WFG problems with 15 objectives.

| Problem | MOEA/D-DE-ACA | MOEA/DD | IDMOPSO | NSGA-III | ANSGA-III | EAG-MOEA/D | IBEA | SMS-EMOA | MOEA/D-PBI |
|---------|---------------|---------|---------|----------|-----------|------------|------|----------|------------|
| DTLZ1 | (1) 0.105 | 0.177 | 0.138 | 0.214 | 0.214 | 0.275 | 0.349 | 2.516 | 0.176 |
| DTLZ2 | (1) 0.178 | 0.621 | 0.604 | 0.759 | 0.750 | 0.940 | 0.615 | 0.930 | 0.622 |
| DTLZ4 | (1) 0.174 | 0.620 | 0.641 | 0.671 | 0.713 | 0.943 | 0.622 | 0.609 | 0.678 |
| DTLZ5 | (2) 0.099 | 0.161 | (1) 0.027 | 0.483 | 0.449 | 0.113 | 0.276 | 0.193 | 0.096 |
| DTLZ6 | (2) 0.096 | 0.163 | (1) 0.022 | 6.120 | 6.103 | 0.115 | 0.356 | 1.150 | 0.096 |
| DTLZ7 | (1) 1.139 | 3.385 | 2.063 | 6.167 | 5.980 | 2.344 | 5.016 | 11.140 | 2.720 |
| WFG1 | (1) 0.052 | 1.851 | 1.452 | 2.378 | 2.575 | 2.645 | 2.528 | 3.174 | 3.283 |
| WFG2 | (1) 0.101 | 16.348 | 13.507 | 13.550 | 13.407 | 3.349 | 26.045 | 19.176 | 27.325 |
| WFG3 | (1) 0.154 | 0.986 | 6.973 | 6.461 | 6.485 | 2.368 | 7.156 | 1.590 | 9.038 |
| WFG4 | (1) 0.206 | 9.269 | 10.412 | 9.418 | 9.443 | 9.313 | 13.759 | 18.744 | 15.158 |
| WFG5 | (1) 0.112 | 9.177 | 14.837 | 9.280 | 9.361 | 9.523 | 12.697 | 18.564 | 14.881 |
| WFG6 | (1) 0.272 | 9.138 | 10.421 | 11.299 | 11.116 | 14.374 | 13.034 | 15.941 | 15.849 |
| WFG7 | (1) 0.213 | 8.980 | 8.813 | 9.383 | 9.374 | 14.382 | 13.605 | 16.921 | 16.078 |
| WFG8 | (1) 0.390 | 8.796 | 8.715 | 10.581 | 10.709 | 14.721 | 11.341 | 16.174 | 14.421 |
| WFG9 | (1) 0.246 | 8.490 | 16.623 | 9.312 | 9.360 | 13.346 | 11.574 | 18.474 | 15.222 |
| Fried. Rank | (1) 1.27 | (2) 3.40 | (3) 3.60 | (4) 5.47 | (5) 5.60 | (6) 5.67 | (7) 6.07 | (9) 7.53 | (8) 6.40 |
Table 16. IGD results for all algorithms on C-DTLZ problems with 10 and 15 objectives. C2-DTLZ2 * denotes C2-convex-DTLZ2.

| Problem | Obj (M) | MOEA/D-DE-ACA | C-AnEA | C-RVEA | C-MOEA/DD | C-NSGA-III | C-TAEA | I-DBEA |
|---------|---------|---------------|--------|--------|-----------|------------|--------|--------|
| C1-DTLZ1 | 10 | (1) 0.075 | 0.114 | 0.116 | 0.117 | 0.118 | 0.130 | 0.498 |
| C1-DTLZ3 | 10 | (1) 0.211 | 0.420 | 14.139 | 13.277 | 14.221 | 0.581 | 14.848 |
| C2-DTLZ2 | 10 | (2) 0.203 | 0.265 | 0.268 | 0.266 | 0.299 | (1) 0.181 | 1.272 |
| C2-DTLZ2 * | 10 | (1) 0.033 | 0.132 | 0.135 | 0.145 | 0.107 | 0.304 | 0.520 |
| C3-DTLZ1 | 10 | (1) 0.071 | 0.230 | 0.235 | 0.235 | 0.235 | 0.272 | 0.659 |
| C3-DTLZ4 | 10 | (1) 0.107 | 0.562 | 0.568 | 0.569 | 0.578 | 0.590 | 0.746 |
| Friedman Rank | 10 | (1) 1.17 | (2) 2.33 | (3) 3.83 | (4) 4.33 | (5) 4.67 | (6) 4.67 | (7) 7.00 |
| C1-DTLZ1 | 15 | (1) 0.125 | 0.181 | 0.188 | 0.187 | 0.200 | 0.198 | 0.570 |
| C1-DTLZ3 | 15 | (1) 0.165 | 0.594 | 14.206 | 14.211 | 18.786 | 0.862 | 14.914 |
| C2-DTLZ2 | 15 | (4) 0.430 | (2) 0.250 | (3) 0.355 | 0.576 | 0.651 | (1) 0.192 | 1.415 |
| C2-DTLZ2 * | 15 | (1) 0.087 | 0.291 | 0.162 | 0.174 | 0.187 | 0.336 | 0.790 |
| C3-DTLZ1 | 15 | (1) 0.148 | 0.366 | 0.381 | 0.383 | 0.461 | 0.503 | 1.237 |
| C3-DTLZ4 | 15 | (1) 0.150 | 0.808 | 0.771 | 0.773 | 1.302 | 0.768 | 1.354 |
| Friedman Rank | 15 | (1) 1.50 | (2) 3.00 | (3) 3.17 | (5) 4.00 | (6) 5.67 | (4) 3.83 | (7) 6.83 |
Cobos, C.; Ordoñez, C.; Torres-Jimenez, J.; Ordoñez, H.; Mendoza, M. Weight Vector Definition for MOEA/D-Based Algorithms Using Augmented Covering Arrays for Many-Objective Optimization. Mathematics 2024, 12, 1680. https://doi.org/10.3390/math12111680
