**5. Simulation Results**

The 100-Digit Challenge comes from CEC 2019 and can be found through the website http://www.ntu.edu.sg/home/EPNSugan/index\_files/CEC2019. The challenge consists of 10 problems, each defined by a test function, and the goal is to compute the minimum of each function to 10-digit accuracy with no time limit. The proposed algorithm was implemented in MATLAB R2016a and run on an Intel(R) Core(TM) i5-8100 CPU at 3.60 GHz with 8 GB of RAM under Windows 10. Table 1 shows the basic parameters of the 100-Digit Challenge. All 10 test problems are defined as minimization problems:

$$\text{Min} \quad f(\mathbf{x}), \quad \mathbf{x} = \left[ x_1, x_2, \dots, x_D \right]^T \tag{11}$$

where *D* is the dimension, the shifted global optimum is randomly distributed in (−80, 80), and all test problems are scalable.
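The challenge scores each run by how many digits of the best function value found are correct. As a hedged illustration only (not the official CEC 2019 scoring code), the following MATLAB sketch counts how many of the first ten digits of a result agree with the known optimum, which for these problems is the value 1.0000000000 referenced later in this section; the function name and the error-based digit count are our own simplification.

```matlab
function d = correct_digits(fbest, fopt)
    % Illustrative sketch, not the official CEC 2019 scorer: count how many
    % of the first 10 digits of fbest agree with the known optimum fopt.
    if nargin < 2
        fopt = 1.0;                              % assumed optimum of each problem
    end
    err = abs(fbest - fopt);                     % absolute error of the best value
    if err == 0
        d = 10;                                  % exact to the required precision
    else
        d = min(10, max(0, floor(-log10(err)))); % digits before the first error
    end
end
```

For example, `correct_digits(1.0000000001)` returns 10, while `correct_digits(1.5)` returns 0.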


**Table 1.** The basic parameters of the 100-Digit Challenge.

#### *5.1. The Comparison of AKQPSO and Other Algorithms*

In this paper, we compared the AKQPSO algorithm with the following eight state-of-the-art algorithms.


Table 2 lists the common parameter settings of these algorithms. The population size (*N*) is set to 50, the maximum number of iterations (*max\_gen*) to 7500, and the number of independent runs (*max\_run*) to 100; all experiments are carried out with these basic parameters kept consistent.


**Table 2.** Parameter settings.
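As a small sketch of how the common settings in Table 2 can be collected in MATLAB (the struct and field names here are illustrative, not taken from the authors' code):

```matlab
% Common experimental settings from Table 2 (struct and field names are
% illustrative, not taken from the authors' code).
params.N       = 50;    % population size
params.max_gen = 7500;  % maximum number of iterations per run
params.max_run = 100;   % number of independent runs per problem
params.lambda  = 0.3;   % AKH/QPSO subpopulation ratio (see Section 5.2)
```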

Table 3 shows the optimization results of AKQPSO and the eight other algorithms on the ten problems of the 100-Digit Challenge, **with the best value in bold**. The experimental results show that the optimal value of every one of the ten problems is obtained by the AKQPSO algorithm. AKQPSO is therefore the best of these algorithms, ranking first on every problem, whereas QKH is not second on every problem, which shows that our improvement of the algorithm is very effective. From Figure 2, we can clearly conclude that the overall performance of AKQPSO is the best, followed by QKH, while the performance of GA is the worst. In general, we rank the algorithms as follows: AKQPSO > QKH > CCS > BBKH > DEKH > SKH > VNBA > GA.

**Figure 2.** Accuracy of AKQPSO and other algorithms on all problems.



Figures 3–12 show the results of the nine algorithms on each problem, and the experimental results show that the AKQPSO algorithm has an obvious advantage over the other algorithms. From these column charts, we can see that the fluctuation on problem 1 is the largest. On problem 1, the result of the worst-performing algorithm, GA, differs from that of the best-performing algorithm, AKQPSO, by a factor of 735, which is a very large gap. Because the 100-Digit Challenge is designed to test computational accuracy, this shows that the computational accuracy of the AKQPSO algorithm is the highest among the nine algorithms. In Figure 3, because the results of AKQPSO, QKH, and BBO differ so much from those of GA, the columns of these algorithms are almost invisible in the same chart. In fact, however, the improvement of AKQPSO over QKH and BBO is significant.

**Figure 3.** Accuracy of AKQPSO and other algorithms on problem 1: Storn's Chebyshev polynomial fitting problem. BBKH, biogeography-based krill herd; DEKH, differential evolution KH; QKH, quantum-behaved KH; SKH, stud KH; CCS, chaotic cuckoo search; VNBA, variable neighborhood bat algorithm; BBO, biogeography-based optimization; GA, genetic algorithm.

**Figure 4.** Accuracy of AKQPSO and other algorithms on problem 2: inverse Hilbert matrix problem.

**Figure 5.** Accuracy of AKQPSO and other algorithms on problem 3: Lennard–Jones minimum energy cluster.

**Figure 6.** Accuracy of AKQPSO and other algorithms on problem 4: Rastrigin's function.

**Figure 7.** Accuracy of AKQPSO and other algorithms on problem 5: Griewangk's function.

**Figure 8.** Accuracy of AKQPSO and other algorithms on problem 6: Weierstrass function.

**Figure 9.** Accuracy of AKQPSO and other algorithms on problem 7: modified Schwefel's function.

**Figure 10.** Accuracy of AKQPSO and other algorithms on problem 8: expanded Schaffer's F6 function.

**Figure 11.** Accuracy of AKQPSO and other algorithms on problem 9: happy cat function.

**Figure 12.** Accuracy of AKQPSO and other algorithms on problem 10: Ackley function.

We divide these ten problems into four groups according to how strongly they test the computational accuracy of AKQPSO; in the groups with more demanding accuracy requirements, it is more obvious which algorithm is better. The first group is problem 1: here, the results differ greatly depending on computational accuracy. The second group is problems 2, 3, and 7: this group has higher accuracy requirements, so it is difficult to reach the best value of the function. The third group is problems 4, 6, 8, and 10: this group has a slightly lower requirement on computational accuracy, so the results of the various algorithms are very close, but it is still difficult to achieve the best result. The fourth group is problems 5 and 9: this group has the lowest accuracy requirements, so many algorithms come very close to the optimal result 1.0000000000.

Figures 13–16 show the advantage of AKQPSO over the other algorithms in each group of the 100-Digit Challenge. Although GA has the worst accuracy in the overall ranking and is also the worst in the first group, in the other three groups the algorithm with the worst accuracy is CCS; therefore, in terms of the group-wise comparison, CCS is the worst. In addition, the accuracy of AKQPSO remains the best in all four groups. From the above analysis, AKQPSO is the best in every respect, and its computational accuracy is the highest.
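Figures 13–16 plot a ratio of AKQPSO against the other algorithms per group. The exact definition of this ratio is not restated here, so the following MATLAB sketch is only one plausible reconstruction, assuming a `results` matrix of mean best values with AKQPSO in the first column; the function and variable names are ours, not the authors' plotting code.

```matlab
function ratio = group_ratio(results)
    % Illustrative reconstruction, not the authors' plotting code.
    % results: matrix of mean best values, rows = problems 1..10,
    %          columns = algorithms, column 1 assumed to hold AKQPSO.
    groups = {1, [2 3 7], [4 6 8 10], [5 9]};    % the four problem groups
    ratio  = zeros(numel(groups), size(results, 2));
    for g = 1:numel(groups)
        rows = groups{g};
        % each algorithm's mean result on the group, relative to AKQPSO's mean
        ratio(g, :) = mean(results(rows, :), 1) / mean(results(rows, 1));
    end
end
```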

**Figure 13.** Ratio of AKQPSO over other algorithms in the first group.

**Figure 14.** Ratio of AKQPSO over other algorithms in the second group.

**Figure 15.** Ratio of AKQPSO over other algorithms in the third group.

**Figure 16.** Ratio of AKQPSO over other algorithms in the fourth group.

#### *5.2. Evaluation of the Parameter* λ

In Section 4, after the initialization of the population, the AKQPSO algorithm divides the population into two subpopulations, AKH and QPSO. The parameter λ represents the proportion between the two subpopulations. In the experiments, λ = 0.1, 0.2, ... , 0.9; for example, λ = 0.1 means that the ratio of subpopulation-AKH to subpopulation-QPSO is 1:9, and the other values are interpreted in the same way. After extensive experiments, the optimal parameter value is λ = 0.3. All of the experiments above were obtained with λ = 0.3, and the following part introduces the experiments used to determine λ. The parameters of these experiments are still the same as in Table 2, to ensure that all experiments are tested under the same conditions.
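As a minimal sketch of the λ-controlled split (variable names are ours, not the authors' code): with *N* = 50 and λ = 0.3, the AKH subpopulation receives 15 individuals and the QPSO subpopulation the remaining 35.

```matlab
% Minimal sketch of the lambda-controlled split (variable names are ours,
% not from the authors' code).
N      = 50;                      % population size (Table 2)
lambda = 0.3;                     % best-performing ratio found in this section
n_akh  = round(lambda * N);       % 15 individuals assigned to the AKH subpopulation
idx    = randperm(N);             % random shuffle so the split is unbiased
akh_idx  = idx(1:n_akh);          % indices optimized by AKH
qpso_idx = idx(n_akh+1:end);      % remaining 35 indices optimized by QPSO
```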

Table 4 shows the results for the 10 problems of the 100-Digit Challenge computed by the AKQPSO algorithm with different values of λ. **Bold font indicates the best value**. From Table 4, we can see that, with λ = 0.3, eight of the ten problems achieve the best value. For the remaining two problems, even though λ = 0.3 does not reach the best value, its result is second among the nine values and very close to the best. Therefore, the performance of AKQPSO is best when λ = 0.3.


**Table 4.** Results of AKQPSO with different subpopulation ratios (λ).

As can be seen from Figure 17a, the fluctuation range of the result of problem 1 is the largest. Because problems 1–10 are plotted in the same chart and the fluctuation of problem 1 is much larger than that of the other problems, the fluctuation range of problems 2–10 is not obvious in Figure 17a. In order to compare problems 2–10 more clearly, Figure 17b shows the fluctuation of problems 2–10 alone. From Figure 17b, we can see that the fluctuation of problems 2 and 7 is also obvious. Therefore, among these ten problems, problems 1, 2, and 7 are the three with the largest fluctuation. Problem 7 also behaves differently from the other nine problems: as the proportion of AKH to QPSO increases, the result of problem 7 tends to improve, whereas the results of the other problems tend to worsen, which is the opposite trend.

**Figure 17.** The different subpopulations in different problems. (**a**) The different subpopulations in problems 1–10 and (**b**) the different subpopulations in problems 2–10.

From the results of Sections 5.1 and 5.2, problem 1 belongs to the first group and problems 2 and 7 belong to the second group of the four groups; these are also the two groups with the highest requirements on the computational accuracy of AKQPSO. Therefore, adjusting the subpopulation ratio has a very direct effect on the accuracy of AKQPSO.

#### *5.3. Complexity Analysis of AKQPSO*

The main computational overhead of AKQPSO is associated with **Step "Partition"** of the AKQPSO algorithm in Section 4. The following is a detailed analysis of the computational complexity of **Step "Partition"** and of the other steps in AKQPSO for a single generation, where *N* is the number of individuals in the population.

	- • **Step "AKH process":** The computational complexity of this step mainly includes "Subpopulation-AKH individuals were optimized by AKH", "Update through these three actions of the influence of other krill individuals, behavior of getting food and random diffusion", "The simulated annealing strategy is used to deal with the above behaviors", and "Update the individual position according to the above behavior", and their complexities are O(*N*), O(*N*2), O(*N*), and O(*N*), respectively.
	- • **Step "QPSO process":** The computational complexity of this step mainly includes "Subpopulation-QPSO individuals were optimized by QPSO", "Update the particle's local best point *Pi* and global best point *Pbest*", and "Update the position *xt*+<sup>1</sup> *ij* by Equation (10)", and their complexities are all O(*N*).
	- • The computational complexity of **Step "Initialization"**, **Step "Evaluation"**, **Step "Combination"**, and **Step "Finding the best solution"** are all O(*N*).

Therefore, in one generation of AKQPSO, the worst-case complexity is O(*N*<sup>2</sup>); a schematic sketch of the step that dominates this cost is given below.
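To make the O(*N*<sup>2</sup>) term concrete, the following MATLAB sketch shows the pairwise structure that typically underlies the "influence of other krill individuals" update in krill herd algorithms: every individual scans every other individual, giving on the order of *N*<sup>2</sup> distance evaluations per generation. The function name and layout are our own schematic, not the authors' AKH implementation.

```matlab
function dist = pairwise_distances(X)
    % Schematic of the O(N^2) neighbor-influence cost (our own sketch,
    % not the authors' AKH implementation). X: N-by-D matrix of krill positions.
    N    = size(X, 1);
    dist = zeros(N, N);
    for i = 1:N                                   % every krill ...
        for j = 1:N                               % ... considers every other krill,
            dist(i, j) = norm(X(i, :) - X(j, :)); % so N^2 distance evaluations
        end
    end
end
```

Every other step touches each individual only a constant number of times, so those costs remain O(*N*), and the pairwise scan above dominates, giving the O(*N*<sup>2</sup>) per-generation bound stated above.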
