#### *5.2. Performance Metrics*

Three popular metrics for evaluating multi-objective optimization problems (MOPs) [34,42,43], namely the number of non-dominated solutions, set coverage, and inverted generational distance, were adapted to evaluate the performance of SDABC. The mean and standard deviation of each metric at each problem scale were obtained from 400 instances of the FHFGSP over 30 independent runs.


(1) Number of non-dominated solutions (*N*-metric). This metric counts the non-dominated solutions obtained by the algorithm; a larger value gives the decision maker more options.

(2) Inverted generational distance (*IGD*-metric). This metric measures the average distance from the reference front PF\* to the obtained solution set **A**:

$$IGD(\mathbf{A}, \text{PF}^{*}) = \frac{\sum_{v \in \text{PF}^{*}} d(v, \mathbf{A})}{\left| \text{PF}^{*} \right|} \tag{24}$$

where *d*(*v*, **A**) is the minimum Euclidean distance between *v* and the points in **A**. The smaller the value, the better the comprehensive performance of the algorithm in terms of both convergence and distribution. Since the true PF\* is unknown, all non-dominated solutions obtained jointly by the compared algorithms are used as PF\* in this paper.
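As a minimal sketch (not from the paper; function and variable names are ours), Eq. (24) can be computed directly, with `PF` standing for the joint non-dominated reference set described above:

```python
from math import dist

def igd(A, PF):
    """Inverted generational distance (Eq. 24): the average, over all
    reference points v in PF*, of the minimum Euclidean distance d(v, A)
    from v to the solution set A. Smaller values are better."""
    return sum(min(dist(v, a) for a in A) for v in PF) / len(PF)
```

Each solution is simply a tuple of objective values, here (MS, TEC) after normalization.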

(3) Set coverage (*C*-metric). This metric measures the dominance relationship between the two solution sets **A** and **B**.

$$C(\mathbf{A}, \mathbf{B}) = \frac{|\{\mu \in \mathbf{B} \mid \exists v \in \mathbf{A} : v \prec \mu\}|}{|\mathbf{B}|} \tag{25}$$

where *C*(**A**, **B**) represents the fraction of solutions in **B** that are dominated by at least one solution in **A**. The higher the value of *C*(**A**, **B**), the better **A** performs relative to **B**.
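The *C*-metric follows directly from the Pareto dominance relation. A minimal sketch (names are ours; both objectives are minimized, as for MS and TEC in the FHFGSP):

```python
def dominates(v, u):
    """Pareto dominance (minimization): v dominates u if v is no worse
    in every objective and strictly better in at least one."""
    return (all(vi <= ui for vi, ui in zip(v, u))
            and any(vi < ui for vi, ui in zip(v, u)))

def c_metric(A, B):
    """Set coverage C(A, B) (Eq. 25): fraction of solutions in B that are
    dominated by at least one solution in A. Higher is better for A."""
    return sum(any(dominates(v, u) for v in A) for u in B) / len(B)
```

Note that *C*(**A**, **B**) and *C*(**B**, **A**) are not complementary in general, which is why both directions are reported in the comparison tables.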

To eliminate the effect of the different scales of the two objectives, a simple max-min method [44] is used in this paper to normalize the obtained MS and TEC as follows.

$$f_i = \frac{f_i - \min(f_i)}{\max(f_i) - \min(f_i)} \tag{26}$$
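A sketch of the max-min normalization of Eq. (26), assuming the values of one objective (e.g. all MS or all TEC values under comparison) are collected in a list (the function name is ours):

```python
def max_min_normalize(f):
    """Max-min normalization (Eq. 26): rescale one objective's values
    to the range [0, 1] so that MS and TEC become comparable."""
    lo, hi = min(f), max(f)
    if hi == lo:                      # all values equal: avoid division by zero
        return [0.0 for _ in f]
    return [(x - lo) / (hi - lo) for x in f]
```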

#### *5.3. Effect of Search Strategy*

To evaluate the performance of the search strategy, SDABC (with the search strategy) was compared with DABC (without it). All components of the two algorithms were identical except for the search strategy in the employed bee phase. The evaluation results of the three metrics for the two algorithms are shown in Tables A1–A3 (Appendix A), where the better values are shown in bold and the last row is the average over the 20 problem sizes.

For the *N*-metric, it can be seen from Table A1 that SDABC has a higher average (AVG) on 85% of the problems and a lower standard deviation (SD) on 75% of the problems. Moreover, on no problem did DABC achieve both a better AVG and a better SD than SDABC, so SDABC led to better results overall. For the FHFGSP, for which exact solutions are difficult to find, a higher *N*-metric allows the PF to be plotted more accurately and also gives managers more options. Therefore, SDABC is more advantageous in this respect.

This is because, in the employed bee phase, each employed bee obtains more non-dominated solutions by searching around an individual to different degrees depending on the number of solutions it dominates and its similarity to the ideal solution.

For the *C*-metric, it can be seen from Table A2 that, except for the 20 × 5 and 60 × 3 problems, SDABC achieved a better AVG on 90% of the problems and a lower SD on 79% of the problems. Overall, SDABC achieved a better AVG and a lower SD than DABC. To better show the difference between the *C*-metric values obtained by SDABC and DABC, a boxplot of the two is plotted in Figure 2; it can be seen that SDABC obtains more concentrated values, and its median is significantly higher than that of DABC. The value of *C* (SDABC, DABC) that lies away from the rest corresponds to the 100 × 5 problem; a comparison with Table A2 shows that DABC also obtained a lower value on the same problem.

**Figure 2.** *C*-metric boxplot for DABC and SDABC.

This indicates that the quality of the solutions obtained by SDABC is higher than that of DABC. This is because, in the employed bee phase, the employed bee searches around an individual to different degrees based on the similarity between the current solution and the ideal solution; guided by the ideal solution, it is able to obtain high-quality solutions.

For the *IGD*-metric, it can be seen from Table A3 that SDABC obtains a lower mean value than DABC on all problems except 20 × 5, 60 × 8, and 80 × 10, and a lower SD on 75% of the problems. Taken together, SDABC obtained a lower AVG and SD. To show the difference more intuitively, a boxplot of the two is plotted in Figure 3; it can be seen that SDABC obtains more concentrated, lower *IGD* values and a much lower median than DABC.

**Figure 3.** *IGD*-metric boxplot for DABC and SDABC.

This indicates that the solutions obtained by SDABC are better than those of DABC in terms of diversity and convergence. This is due to the five different search directions designed in the employed bee phase, which improve the diversity of the obtained solutions, while searching around individuals based on the number of dominated solutions and the similarity to the ideal solution improves convergence.

#### *5.4. Effect of Selection Strategy*

To evaluate the performance of the newly proposed selection strategy, three different selection strategies were compared: the selection strategy proposed in this paper (denoted ABC\_snm), a strategy that selects individuals in the population according to similarity with mutation (denoted ABC\_sm), and a strategy that selects individuals according to similarity without mutation (denoted ABC\_s). The evaluation results of the three metrics for the three strategies are shown in Tables A4–A6, where the better values are shown in bold and the last row is the average over the 20 problem sizes.

For the *N*-metric, it can be seen from Table A4 that ABC\_snm obtained significantly better mean values than ABC\_s, and achieved better results than ABC\_sm on 80% of the problems. For the SD, ABC\_s obtained lower values; this is because the SD is positively related to the AVG, and since ABC\_s has a significantly smaller AVG, its SD is also smaller. On balance, however, ABC\_snm was able to obtain more non-dominated solutions, giving the manager more options to choose from.

This indicates that selecting individuals in the onlooker bee phase based on both the number of solutions they dominate and their similarity to the ideal solution is more advantageous than selecting on the basis of similarity alone. A comparison of the three strategies shows that ABC\_snm obtained the greatest number of non-dominated solutions.

For the *C*-metric, it can be seen from Table A5 that ABC\_snm obtains a significantly better AVG and SD than ABC\_s and ABC\_sm, achieving better results on every problem; overall, ABC\_snm also obtains a better AVG and SD than the other two strategies. Figure 4 plots the boxplots of the *C*-metric obtained by the three strategies. It can be seen that ABC\_snm obtains more concentrated values and a much higher median than the other two strategies, and its minimum value is higher than the maximum values of ABC\_s and ABC\_sm.

**Figure 4.** *C*-metric boxplot for ABC\_sm, ABC\_snm, and ABC\_s.

This indicates that the strategy proposed in this paper is able to obtain high-quality non-dominated solutions. This is because the adopted selection strategy speeds up the convergence of the algorithm, while the adopted mutation strategy prevents the algorithm from falling into a local optimum.

For the *IGD*-metric, it can be seen from Table A6 that on every problem ABC\_snm obtained a significantly lower AVG than ABC\_s and ABC\_sm, and a smaller SD than the other two strategies on 85% of the problems. As a whole, both the AVG and SD of ABC\_snm are smaller than those of the other two strategies. Figure 5 plots the boxplot of the *IGD*-metric obtained by the three strategies; it can be seen that ABC\_snm obtains more concentrated values, and its median is much lower than those of the other two strategies.

**Figure 5.** *IGD*-metric boxplot for ABC\_sm, ABC\_snm, and ABC\_s.

This indicates that the strategy proposed in this paper has better diversity and convergence. This is because some low-quality individuals are also selected with a certain probability, which improves the diversity of the algorithm to some extent, and the proposed mutation strategy also contributes to diversity. In addition, the adopted selection strategy speeds up the convergence of the algorithm.

#### *5.5. Evaluation of SDABC*

In this subsection, SDABC is compared with IMDABC, MDABC, and NSGAII. All algorithms use the same CPU time as the stopping criterion and the same parameter settings; the results are shown in Tables 8–13 and A7. The last row of each table represents the average over the 20 problems, and the best values are marked in bold.

Table A7 shows the *N*-metric obtained by the four algorithms; it can be seen that the average values obtained by SDABC are higher than those of the other three algorithms. The values obtained by SDABC are significantly higher than those of IMDABC and NSGAII on every problem. In addition, although some values of MDABC are higher than those of SDABC, the differences are not significant, and SDABC achieves higher values on 70% of the problems. In summary, SDABC is able to obtain more non-dominated solutions than the other algorithms.

This is because the search strategy used by SDABC in the employed bee phase is able to search in both depth and breadth, enhancing the diversity of individuals and helping to obtain a greater number of solutions.

Tables 8–10 show the *C*-metric obtained by the four algorithms; the AVG and SD obtained by SDABC are significantly better than those of IMDABC, MDABC, and NSGAII on every problem except the 100 × 10 problem in Table 10, where the SD is slightly higher. Figure 6 shows boxplots of the *C*-metric for SDABC versus the other three algorithms. The outliers in (a) are the *C*-metric values for the 100 × 5 problem: in Table 8, both *C*-metric values for 100 × 5 are lower than the overall level, but SDABC's is better than IMDABC's. In (b), SDABC is significantly higher than NSGAII overall. The two isolated values of *C* (SDABC, MDABC) in (c) correspond to the 100 × 3 and 100 × 5 problems; although these two values deviate from the rest, SDABC still has a better AVG and SD for the same problem sizes. Additionally, the median of SDABC is significantly higher than those of the other three algorithms. Therefore, SDABC obtains solutions of significantly higher quality than IMDABC, MDABC, and NSGAII.

This is because SDABC follows promising individuals in the population in both the employed and onlooker bee phases, and its evolution proceeds according to the dominance of individuals in the population and their similarity to the ideal solution. As a result, high-quality solutions can be found faster.

Tables 11–13 show the *IGD*-metric obtained by the four algorithms; it can be seen that on every problem SDABC obtains a significantly lower AVG and SD than IMDABC, MDABC, and NSGAII. Figure 7 plots boxplots of the *IGD*-metric obtained by the four algorithms. Figure 7a shows that both the overall values and the median of SDABC are much lower than those of IMDABC. Figure 7b shows that SDABC's values are more concentrated than MDABC's. Figure 7c indicates that SDABC has better overall values and a better median than NSGAII. From Figure 7, it can be seen that the SDABC distribution is more concentrated while also achieving a lower *IGD*.

**Figure 6.** *C*-metric boxplot for NSGAII, IMDABC, MDABC, and SDABC.

**Figure 7.** *IGD*-metric boxplot for NSGAII, IMDABC, MDABC, and SDABC.

This indicates that SDABC performs better than IMDABC, MDABC, and NSGAII in terms of convergence and diversity. This is because SDABC takes five different search directions in the employed bee phase and explores them to different degrees depending on the ranking, both of which enable a more thorough search of the solution space around individuals and increase the diversity of the population. In addition, in the onlooker bee phase, every individual in the population has the potential to be followed, and the introduction of the mutation strategy also contributes to the diversity of the algorithm. In terms of convergence, both the employed bee and onlooker bee phases operate based on ranking by similarity to and dominance with respect to the ideal solution, which speeds up the convergence of the population.
