
Adaptive Aquila Optimizer Combining Niche Thought with Dispersed Chaotic Swarm

School of Opto-Electronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(2), 755; https://doi.org/10.3390/s23020755
Submission received: 28 November 2022 / Revised: 23 December 2022 / Accepted: 7 January 2023 / Published: 9 January 2023

Abstract:
The Aquila Optimizer (AO) is a recent bio-inspired meta-heuristic algorithm modeled on the Aquila's hunting behavior. Although AO has a strong global exploration capability, its local exploitation capability is insufficient and its convergence is slow; to address this, the Adaptive Aquila Optimizer Combining Niche Thought with Dispersed Chaotic Swarm (NCAAO) is proposed. First, to improve the diversity of the population and the uniformity of its distribution in the search space, DLCS chaotic mapping is used to generate the initial population, placing the algorithm in a better exploration state. Then, to improve search accuracy, an adaptive adjustment strategy for de-searching preferences is proposed: the exploration and exploitation phases of NCAAO are effectively balanced by changing the search threshold and introducing a position weight parameter that adaptively adjusts the search process. Finally, the idea of niches is used to promote the exchange of information between groups and accelerate convergence of the groups to the optimal solution. To verify the optimization performance of NCAAO, the improved algorithm was tested on 15 standard benchmark functions and on engineering optimization problems, and the Wilcoxon rank-sum test was applied to the results. The experimental results show that the NCAAO algorithm achieves better search performance and faster convergence than other intelligent algorithms.

1. Introduction

With the continuous development of science and technology, the effective integration of the life sciences and engineering sciences has become a main feature of modern science and technology [1], and meta-heuristic algorithms have flourished under this trend. Optimization objectives in practical engineering applications now range from single-objective to multi-objective, from continuous to discrete and from constrained to unconstrained, with increasingly complex problems. The drawbacks of traditional optimization algorithms are obvious: they rely heavily on initial values and tend to fall into local optima. Meta-heuristic optimization algorithms, by contrast, are easy to implement, can bypass local optima, rely on non-derivative mechanisms and do not require gradient information [2,3,4,5]. Comparing the two classes of algorithms, the performance of meta-heuristic algorithms is outstanding, and they have received widespread attention from scholars; they have not only played an important role in the field of computing, but have also shown powerful problem-solving capabilities in the military [6], agriculture [7] and hydraulic engineering [8] fields.
Since the series of algorithms such as Particle Swarm Optimization (PSO) [9], the Genetic Algorithm (GA) [10] and Artificial Bee Colony (ABC) [11] were proposed, they have been applied continuously in many engineering examples and provide better solutions to the problems [12]. In this process of rapid development, scholars have mainly focused on population initialization, population exploration and how to converge quickly to the global optimum when studying meta-heuristic algorithms in depth. Abualigah L et al. proposed a new meta-heuristic algorithm in 2021: the Aquila Optimizer (AO) [13]. The algorithm, inspired by the prey-catching behavior of the Aquila, demonstrated in generalized benchmark tests high performance in the exploration and exploitation stages and in handling composite functions, compared with 11 algorithms such as GOA, EO, DA, PSO, ALO, GWO and SSA. However, its local exploitation capability is insufficient, it easily falls into local optima and it converges slowly, and many scholars have improved it. Wang S et al. [14] combined the exploitation phase of Harris Hawks Optimization (HHO) with the exploration phase of AO to propose an improved hybrid Aquila Optimizer and Harris Hawks Optimization (IHAOHHO), combining a nonlinear escape energy parameter and a stochastic opposition-based learning strategy to enhance exploration and exploitation; on standard test functions and industrial engineering design problems it demonstrates strongly superior performance and good promise. Verma M et al. [15] generated a population by standard AO and a new population by a single-stage genetic algorithm based on the concept of evolution, in which binary tournament selection, roulette wheel selection, shuffle crossover and replacement mutations occur. The chaotic mapping criterion is then applied to obtain various variants of the standard AO technique.
The standard AO is thereby further improved, yields better results and is applied to engineering design problems. However, the effect of the homogeneity of chaotic systems on population initialization is not considered. Akyol S [16] used the tangent search algorithm with an intensive phase (TSA) to optimize AO and proposed the Aquila Optimizer tangent search algorithm (AO-TSA). The enhancement phase of the TSA is used instead of the limited exploration phase to improve the exploitation capability of the Aquila Optimizer (AO), while the local-minimum escape phase of the TSA is applied in AO-TSA to avoid the local-minimum stagnation problem. Mahajan S et al. [17] proposed a new hybrid approach combining the Aquila Optimizer (AO) with the Arithmetic Optimization Algorithm (AOA). Efficient search is achieved in both high- and low-dimensional problems, further showing that the population-based approach achieves efficient search results in high-dimensional problems. Zhang Y J et al. [18] similarly combined AO with the Arithmetic Optimization Algorithm (AOA) and proposed the hybrid algorithm of the Arithmetic Optimization Algorithm with the Aquila Optimizer (AOAAO). An energy parameter E is introduced to balance the individual exploration and exploitation process of the population, and piecewise linear mappings are introduced to reduce the randomness of the energy parameter. The proposed AOAAO is experimentally validated to have a faster convergence speed and higher convergence accuracy.
In addition, AlRassas A M et al. developed an improved Adaptive Neuro-Fuzzy Inference System (ANFIS) via AO, AO-ANFIS [19], for predicting oil production, and experimentally demonstrated that AO significantly improves the prediction accuracy of ANFIS. Abd Elaziz M et al. [20] proposed a new COVID-19 image classification framework using DL and a hybrid swarm-based algorithm that uses AO as a feature selector to reduce the dimensionality of the images. The experimental results showed that the feature extraction and selection stages are more accurate than in other methods. Jnr E O N et al. [21] proposed a new DWT-PSR-AOA-BPNN model for wind speed prediction and efficient grid operation by combining the Aquila Optimization Algorithm (AOA) with the Discrete Wavelet Transform (DWT), Phase Space Reconstruction (PSR) with chaos theory, and a Back Propagation Neural Network (BPNN). Meanwhile, AO has been applied to population prediction [22], PID parameter optimization [23], power generation allocation [24], path planning [25] and other fields. Because AO was proposed only recently, its slow convergence speed and tendency to fall into local optima still need further optimization and improvement. It is important to further improve its performance and make it applicable to more practical problems.
The literature shows that these algorithms can effectively approach the true expectation of a multi-objective problem. The zero-point search preference problem, however, has not attracted much attention from scholars. Effectively solving this problem is the basis for improving the robustness of the AO algorithm and would allow the algorithm to be applied to new classes of problems or lead to new optimization algorithms. This is the purpose and significance of this work, in which we propose a new algorithm called the Adaptive Aquila Optimizer Combining Niche Thought with Dispersed Chaotic Swarm (NCAAO), based on the recently proposed Aquila Optimizer (AO). The research gaps and contributions of the paper can be summarized as follows:
  • The NCAAO has great potential in solving various engineering problems such as feature selection, image segmentation, image enhancement and engineering design.
  • The AO and NCAAO have rarely been combined with other available meta-heuristics; therefore, the NCAAO has a great potential to be combined with other algorithms available in the literature.
  • Hence, a new chaotic mapping is introduced which has better uniformity, better initializes the population, and effectively enhances the efficiency of the algorithm.
  • This new algorithm is further investigated in terms of search thresholds, which allows the population to adaptively adjust the search space by changing the search threshold. The introduced adaptive weight parameter perturbs the population of individual positions and improves the algorithm local exploitation ability.
  • Moreover, for the algorithm to obtain the optimal solution with high accuracy, a communication exchange strategy based on the idea of Niche thought is proposed to ensure the optimization-seeking accuracy and convergence speed of the algorithm by screening elite individuals.
In summary, in order to address AO's slow convergence and tendency to fall into local optima, first, inspired by the literature [26], population initialization is used to improve the robustness of the algorithm. The Aquila population is initialized by DLCS chaotic mapping; this aims to enhance the homogeneity of the Aquila population in the search space and improve the randomness and ergodicity of the population's individuals. Second, an adaptive adjustment search strategy for de-searching preferences is proposed to balance global exploration and local exploitation through adaptive threshold adjustment. On this basis, the adaptive position weighting method is introduced to perturb the update of individuals' positions and improve the local exploitation ability. The de-searching-preference adaptive adjustment strategy can largely remove the search preference problem and helps the algorithm escape local optima quickly. Finally, inspired by the literature [27] and combined with an elite solution mechanism, a communication exchange strategy based on Niche thought is used to ensure that the quality of the population is maintained during the iterative process. Through information exchange among individuals, the global optimum is better selected and rapid convergence of the individuals to the global optimum is promoted, enhancing the robustness of the algorithm.
The rest of the paper is structured as follows: Section 2 describes the principle of AO and studies its module performance. Section 3 describes the proposed algorithm and analyzes the time complexity of the proposed algorithm. Section 4 verifies the robustness and applicability of the algorithm through numerical experiments and engineering experiments. Section 5 summarizes the full text and examines the future research direction.

2. Aquila Optimizer

2.1. Mathematical Model of AO

The AO algorithm simulates the different hunting styles of Aquila for different prey. For fast-moving prey, the Aquila needs to acquire the prey in a fast and precise way, by which the global exploration ability of the algorithm is reflected. Similarly, the local exploitation ability of the algorithm is reflected by the hunting way for slow-moving prey. The optimization process is represented by simulating four types of Aquila hunting behaviors.
First, the population is generated randomly between the upper bound (UB) and lower bound (LB) specified for the problem, as shown in Equation (1). In each iteration, the best solution obtained so far is taken as the approximate optimal solution. The current set of candidate solutions X is randomly generated by Equation (2).
X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,D} \\ \vdots & \ddots & \vdots \\ x_{n,1} & \cdots & x_{n,D} \end{bmatrix}
X_{i,j} = rand \times (UB_j - LB_j) + LB_j, \quad i = 1, 2, \dots, N; \; j = 1, 2, \dots, D
where n denotes the total number of candidate solutions, D denotes the dimensionality of the problem and xn,D denotes the position of the nth solution in the Dth dimensional space. Rand is a random number, UBj denotes the jth dimensional upper bound of the given problem and LBj denotes the jth dimensional lower bound of the given problem.
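As a quick illustration, the random initialization of Equation (2) can be sketched in NumPy as follows; the function name and the vectorized form are ours, not from the paper:

```python
import numpy as np

def init_population(n, dim, lb, ub, rng=None):
    """Randomly generate n candidate solutions in [LB, UB]^D (Equation (2)).

    lb and ub may be scalars or per-dimension arrays (LB_j, UB_j)."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    # X[i, j] = rand * (UB_j - LB_j) + LB_j
    return rng.random((n, dim)) * (ub - lb) + lb
```

Each row of the returned matrix is one candidate solution, matching the layout of Equation (1).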

2.1.1. Expanded Exploration (X1)

The first way is to select the search space by flying high in a vertical bend. Aquila flies high to recognize prey areas and quickly select the best prey area. This behavior is shown by Equation (3).
X_1(t+1) = X_{best}(t) \times \left(1 - \frac{t}{T}\right) + \left(X_M(t) - X_{best}(t)\right) \times rand
X_M(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t), \quad j = 1, 2, \dots, D
where X1(t + 1) denotes the position of an individual at iteration t + 1, Xbest(t) denotes the current global best position at the tth iteration, T and t denote the maximum number of iterations and the current number of iterations, respectively, XM(t) denotes the average position of the individuals at the current iteration, and rand is a random number between 0 and 1 drawn from a Gaussian distribution.

2.1.2. Narrowing the Scope of Exploration (X2)

The second approach is short gliding attacks in isometric flight. Aquilas hover above the target prey in preparation for an attack when they spot the prey area from a high altitude. This behavior is represented by Equation (5).
X_2(t+1) = X_{best}(t) \times levy(D) + X_R(t) + (y - x) \times rand
levy(D) = s \times \frac{u \times \sigma}{|v|^{1/\beta}}
\sigma = \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}}
where X2(t + 1) is the solution generated for the next iteration t, D denotes the spatial dimension, levy(D) is the Lévy flight distribution function, XR(t) is the position of an Aquila chosen at random in [1, N], u and v are random numbers, s is a constant scale factor, β takes the value of 1.5, and y and x describe a spiral in the search space, as shown in the following equations:
y = r \times \cos(\theta)
x = r \times \sin(\theta)
r = r_1 + 0.00565 \times D_1
\theta = -0.005 \times D_1 + \frac{3\pi}{2}
where r1 takes a fixed value between 1 and 20, and D1 consists of the integers from 1 to the length of the search space (D).
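The Lévy flight and spiral terms above can be sketched as follows. This assumes Mantegna's algorithm for the Lévy step and the scale s = 0.01 used in the original AO paper, since the text does not spell out u, v or s:

```python
import math
import numpy as np

def levy(dim, beta=1.5, s=0.01, rng=None):
    """Lévy flight step of Equations (6)-(7), in Mantegna's form.

    beta = 1.5 as in the text; s = 0.01 follows the original AO paper
    (an assumption here)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # numerator sample, scaled by sigma
    v = rng.normal(0.0, 1.0, dim)
    return s * u / np.abs(v) ** (1 / beta)

def spiral(dim, r1=10, omega=0.005):
    """Spiral coordinates x, y of Equations (8)-(11); r1 is fixed in [1, 20]."""
    d1 = np.arange(1, dim + 1)            # D1 = 1 .. D
    r = r1 + 0.00565 * d1
    theta = -omega * d1 + 1.5 * math.pi
    return r * np.sin(theta), r * np.cos(theta)   # x, y
```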

2.1.3. Expanded Development (X3)

The third way is a low-flying and slow-descent attack. The Aquila locks onto a hunting target in the hunting area, and with the attack ready, makes an initial attack in a vertical descent, thus further testing the prey’s response. This behavior is represented by Equation (12).
X_3(t+1) = \left(X_{best}(t) - X_M(t)\right) \times \alpha - rand + \left((UB - LB) \times rand + LB\right) \times \delta
where X3(t + 1) is the solution generated for the next iteration t, δ and α are both exploitation adjustment parameters in the range (0, 1), and UB and LB represent the upper and lower bounds of the given problem.

2.1.4. Narrowing the Development Area (X4)

The fourth way is walking and grabbing prey. When the Aquila approaches the prey, it attacks the prey according to the random movement of the prey. This behavior is represented by Equation (13).
X_4(t+1) = QF \times X_{best}(t) - \left(G_1 \times X(t) \times rand\right) - G_2 \times levy(D)
QF(t) = t^{\frac{2 \times rand - 1}{(1 - T)^2}}
G_1 = 2 \times rand - 1
G_2 = 2 \times \left(1 - \frac{t}{T}\right)
where X4(t + 1) is the solution generated for the next iteration t; QF denotes the quality function used to balance the search strategy, with QF(t) ∈ [0, 1]; G1 denotes the different motions the Aquila uses to track escaping prey; G2 represents the slope value from the first to the last position during the Aquila's chase, decreasing from 2 to 0; rand is a random number between 0 and 1 drawn from a Gaussian distribution; and T and t denote the maximum number of iterations and the current number of iterations, respectively.
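A minimal sketch of the three parameters above, with our own (hypothetical) function names:

```python
import numpy as np

def quality_function(t, T, rng=None):
    """QF(t) = t^((2*rand - 1) / (1 - T)^2) (Equation (14)); assumes t >= 1."""
    rng = np.random.default_rng() if rng is None else rng
    return t ** ((2 * rng.random() - 1) / (1 - T) ** 2)

def g1(rng=None):
    """G1 in [-1, 1]: the Aquila's varied tracking of escaping prey (Equation (15))."""
    rng = np.random.default_rng() if rng is None else rng
    return 2 * rng.random() - 1

def g2(t, T):
    """G2 decreases linearly from 2 to 0 over the run (Equation (16))."""
    return 2 * (1 - t / T)
```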

2.2. AO Module Performance Split Study

In order to extract the modules with the most prominent optimization performance, the standard AO modules were separated and tested. With the other modules masked in the source code, one module is selected to update the individual positions; the typical benchmark cases F1, F10 and F13 from the literature [5] were selected, and the separated AO was run ten times for each. The data are shown in Table 1, where F_fit is the average fitness over the ten runs.
Two phases in the AO algorithm introduce a greedy mechanism to perturb the individual positions, ensuring that an individual's new position is better than its original one. The four position-update formulas were observed in the module separation test described above. Equation (3) converges to 0 when the optimal solution is 0 and to a constant when the optimal solution is not 0; Equation (5) barely converges to the optimal solution; and Equations (12) and (13) mostly converge to a constant. Analysis of Equation (3) shows that an optimal value of 0 creates an incentive for the algorithm to converge to the optimal solution quickly during the iterative process. The algorithm therefore easily falls into a local optimum, and the probability that the local optimum is 0 is comparatively large; we call this the zero-point search preference. The zero-point preference interference and incentive schematic are shown in Figure 1. The zero-point preference corollary is as follows.
Suppose there exist N feasible solution vectors in each dimension of the search domain $[LB, UB]^D$ defined by the function F(x); then there are $N^D$ feasible solution vectors in total in the search domain. If there exist K (K a constant) solutions of F(x) with the same global optimal fitness and N tends to infinity, the probability that any feasible solution vector in the search domain corresponds to a globally optimal fitness solution is given by Equation (17).
P\left\{ fitness(X) = \min(fitness) \right\} = \lim_{N \to \infty} \frac{K}{N^D}
If the algorithm has no preference, the search probability for each feasible solution should be of the same order as P{fitness(X) = min(fitness)}. Preference appears when a feasible solution vector Xp in the search domain is attractive to the population; then Equation (18) holds.
P\left\{ fitness(X_p) = \min(fitness) \right\} > \lim_{N \to \infty} \frac{K}{N^D}
The fitness probability of the global optimum is then used to determine whether there is interference in the search process; if there is no interference, Equation (19) holds.
P\left\{ fitness(X_{best}) \right\} < \lim_{N \to \infty} \frac{K}{N^D}
Therefore, it can be judged that the probability of the population converging to Xbest tends to a higher-order infinitesimal.

3. Improved Algorithm

3.1. Generation of Group Based on Chaotic System

The goodness of the initial population directly affects the solution accuracy and convergence speed of the algorithm, and a well-diversified initial population can largely improve the performance of the algorithm. The problem with using a stochastic system to generate the initial population in optimization problems is that there is no correlation between previous and upcoming states, so the upcoming states cannot be predicted, resulting in an uneven distribution of the initialized population.
Compared with random systems, chaotic systems are more sensitive to initial values and possess mixing properties, randomness, ergodicity and regularity [28,29,30], which can produce uniform and stable orbits. Given enough time, a chaotic system can traverse all states of the space without repeating within a certain range, so it can effectively circumvent local optima. Lin Z B and Liu Y H [31] proposed the DLCS chaotic mapping by adding a piecewise model to the Lorenz chaotic mapping and introducing a greedy strategy. The DLCS chaotic mapping has extremely complex kinematic properties, outstanding initial-value sensitivity, and high distribution uniformity and stability, enhancing the algorithm's search uniformity over neighborhood solutions. The expressions are shown in Equations (20)–(23), where Zs_{n+1} is the main chaotic sequence generated.
X_{n+1} = X_n + a\left(Y_n - X_n\right)\Delta s
Y_{n+1} = Y_n + \left(bX_n - X_n Z_n - Y_n\right)\Delta s
Z_{n+1} = Z_n + \left(X_n^2 + \sin(X_n Y_n) - cZ_n\right)\Delta s
Zs_{n+1} = \left(Z_{n+1} \times 16\right) \bmod Gear
where a = 10, b = 28, c = 6.2 and Δs is the step size, taken as 0.01; 16 is the signal amplifier control parameter, and Gear = 10^5 is the sequence slicing mode parameter.
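A sketch of the DLCS iteration under our reading of Equations (20)–(23); the Lorenz-style sign in the X update, the parameter names and the final min-max normalization are assumptions, not taken from the paper:

```python
import math
import numpy as np

def dlcs_sequence(length, x0=0.1, y0=0.2, z0=0.3,
                  a=10.0, b=28.0, c=6.2, ds=0.01, amp=16.0, gear=1e5):
    """Generate a DLCS chaotic sequence Zs, normalized to [0, 1].

    The iterate is a discretized Lorenz-like system with an added sin
    coupling term; each Z iterate is amplified by `amp` and sliced modulo
    `gear` (Equation (23))."""
    x, y, z = x0, y0, z0
    zs = np.empty(length)
    for n in range(length):
        x, y, z = (x + a * (y - x) * ds,
                   y + (b * x - x * z - y) * ds,
                   z + (x * x + math.sin(x * y) - c * z) * ds)
        zs[n] = (z * amp) % gear
    # min-max normalize so the sequence can seed a population in [LB, UB]
    return (zs - zs.min()) / (zs.max() - zs.min() + 1e-12)
```

A chaotic initial population can then be formed as `LB + zs.reshape(n, dim) * (UB - LB)` for a sequence of length `n * dim`.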
The logistic mapping [32], cubic mapping [33], tent mapping [34] and DLCS mapping were each used to generate sequences of length 1000 from initial values with small differences, normalized to form a two-dimensional planar distribution in the interval [0, 1], for 30 uniformity comparison experiments; the metrics of the chaotic models are shown in Table 2. The closer the ratio of adjacent statistical values of a chaotic distribution graph is to 1, the higher the uniformity of the chaotic system. To observe the uniformity of the generated chaotic orbits, two perturbed orbits were generated with each of the four chaotic systems mentioned above to form the chaotic distribution maps shown in Figure 2. The logistic and cubic mappings show an obvious edge-density phenomenon, which weakens the ergodicity of the chaotic system. Although most segments of the tent map converge normally, the distribution is still uneven in places. Combined with Table 2, it can be seen that DLCS has the best relative performance, with a stable and uniform distribution, and is suitable for generating the initial dispersed chaotic swarm of individuals.

3.2. Adaptive Adjustment Strategy for De-Searching Preferences

The fitness in the AO algorithm reflects the difference between the individual and the global optimum; the question is how to quickly choose a search strategy that lets the Aquila reach the area where the prey is located in the fastest way and thus obtain the prey (the optimal solution). The adaptive adjustment strategy for de-searching preferences uses two methods: the adaptive probability threshold method and the adaptive position weight method. First, to further improve the convergence speed of AO while balancing global exploration and local exploitation, an adaptive probability threshold method is proposed. In this method, the adaptive probability threshold changes adaptively between 0 and 1 over the iterations of the algorithm, guiding individuals to quickly select the hunting strategy best suited to the current population at different times. The mathematical expression of the adaptive probability threshold adaptive_p is shown in Equation (24).
adaptive\_p = 1 - \frac{1}{\lambda + \mu}\left(\lambda \times \frac{t^{\lambda}}{T^{\lambda}} + \mu \times \frac{t^{\mu}}{T^{\mu}}\right)
where λ and μ are control parameters and take the values λ = 3, μ = 2, and T and t denote the maximum number of iterations and the current number of iterations, respectively.
Therefore, the adaptive probability threshold is introduced into the Gaussian distributed random number Rand to further and precisely select the appropriate strategy for hunting. The mathematical expression for the hunting mode of AO can therefore be re-expressed as Equation (25).
X(t+1) = \begin{cases} X_{best}(t) \times \left(1 - \frac{t}{T}\right) + \left(X_M(t) - X_{best}(t)\right) \times rand, \\ X_{best}(t) \times levy(D) + X_R(t) + (y - x) \times rand, & if\ rand < adaptive\_p \\ \left(X_{best}(t) - X_M(t)\right) \times \alpha - rand + \left((UB - LB) \times rand + LB\right) \times \delta, \\ QF \times X_{best}(t) - \left(G_1 \times X(t) \times rand\right) - G_2 \times levy(D), & if\ rand \geq adaptive\_p \end{cases}
At the beginning of the algorithm iteration, the adaptive probability threshold is larger, guiding the Aquila population for global exploration. At the later stage of algorithm iteration, the adaptive probability threshold is smaller to guide the populations for local exploration. This enables the algorithm to adaptively adjust the search strategy and reduce the search preference while effectively improving the convergence speed of the algorithm.
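Under our reading of Equation (24), the threshold can be sketched as a function that decays from 1 at t = 0 to 0 at t = T, so early iterations favor the exploration branch and late iterations favor exploitation:

```python
def adaptive_p(t, T, lam=3.0, mu=2.0):
    """Adaptive probability threshold (our reading of Equation (24)).

    lam and mu are the control parameters lambda = 3, mu = 2 from the text;
    the exact functional form is an assumption."""
    return 1.0 - (lam * (t / T) ** lam + mu * (t / T) ** mu) / (lam + mu)
```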
Second, the adaptive weight method is introduced to adjust the position update of AO. The adaptive weight coefficients are shown in Equation (26).
\xi = \frac{1}{\tau + \mu}\left(\tau \times \frac{t^{\tau}}{T^{\tau}} + \mu \times \frac{t^{\mu}}{T^{\mu}}\right)
where τ = 2, μ = 2 and ξ ∈ (0, 1).
As ξ increases with the number of iterations, the prey selected after each iteration, being the optimal solution for the current population, exerts a stronger attraction on the Aquilas in the population. At the same time, the credibility of the information conveyed by the randomly selected individuals in the population increases over the iterations, so that the Aquila can locate the prey more accurately according to the adaptive weight change, improving the convergence speed and optimization ability of the algorithm. However, during local exploitation in the late iterations, the Aquila approaches the prey; a relatively small weight coefficient should then be used, so that individuals update their positions while finely searching around the prey for a better solution, improving the local exploitation capability of the algorithm. The position updates with the adaptive weight coefficients introduced are shown in Equations (27)–(30).
X_1(t+1) = \xi \times X_{best}(t) \times \left(1 - \frac{t}{T}\right) + \left(X_M(t) - X_{best}(t)\right) \times rand
X_2(t+1) = \xi \times X_{best}(t) \times levy(D) + X_R(t) + (y - x) \times rand
X_3(t+1) = \left((1 - \xi) \times X_{best}(t) - X_M(t)\right) \times \alpha - rand + \left((UB - LB) \times rand + LB\right) \times \delta
X_4(t+1) = QF \times (1 - \xi) \times X_{best}(t) - \left(G_1 \times X(t) \times rand\right) - G_2 \times levy(D)
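A sketch of the weight of Equation (26) and the weighted expanded-exploration update of Equation (27); the function names are hypothetical and the functional form of ξ follows our reading of the text:

```python
import numpy as np

def xi_weight(t, T, tau=2.0, mu=2.0):
    """Adaptive position weight (our reading of Equation (26)).

    With tau = mu = 2 this grows smoothly from 0 at t = 0 to 1 at t = T,
    i.e. it reduces to (t/T)^2."""
    return (tau * (t / T) ** tau + mu * (t / T) ** mu) / (tau + mu)

def x1_weighted(x_best, x_mean, t, T, rng=None):
    """Weighted expanded-exploration update (Equation (27))."""
    rng = np.random.default_rng() if rng is None else rng
    return (xi_weight(t, T) * x_best * (1 - t / T)
            + (x_mean - x_best) * rng.random())
```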

3.3. Communication Strategy Based on Niche Thought

Niche thought comes from biology, where a niche refers to the function or role of an organism in a particular environment, and organisms with common characteristics are called a species. Niches help maintain diversity in the biological community and help form new species in nature. A schematic diagram of Niche thought is shown in Figure 3.
Niche thought is introduced into the AO algorithm [35], using a sharing mechanism to compare the distances between individuals within a niche. A specific threshold is set: individuals with high fitness have their fitness increased, ensuring that they remain in an optimal state, while individuals with low fitness are penalized so that they update and search for optimal values in other regions. This ensures the diversity of the population during the iterative process and helps locate the optimal solution. In this process, the distance between individuals of the niche population is calculated by Equation (31).
d_{ij} = \left\| X_i - X_j \right\|
The information exchange function between individuals Xi and Xj is shown in Equation (32).
sh(d_{ij}) = \begin{cases} 1 - \frac{d_{ij}}{\rho}, & d_{ij} < \rho \\ 0, & d_{ij} \geq \rho \end{cases}
where ρ is the radius of information sharing in the niche, and d_{ij} < ρ ensures that individuals survive in the niche environment.
After sharing information, the optimal fitness is adjusted in time, as shown in Equation (33).
F_{i\_best} = \frac{F_i}{sh(d_{ij})}, \quad i = 1, 2, \dots, N
where F_{i_best} is the optimal fitness after sharing and F_i is the original fitness.
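One plausible reading of the sharing scheme in Equations (31)–(33) follows; dividing each individual's fitness by its accumulated sharing value is the classic fitness-sharing scheme and is our assumption, since Equation (33) is stated per pair:

```python
import numpy as np

def niche_shared_fitness(positions, fitness, rho):
    """Niche fitness sharing: sh(d) = 1 - d/rho inside the sharing radius,
    0 outside; crowded individuals are penalized by their niche count."""
    # pairwise Euclidean distances d_ij (Equation (31))
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    # sharing function sh(d_ij) (Equation (32))
    sh = np.where(d < rho, 1.0 - d / rho, 0.0)
    niche_count = sh.sum(axis=1)   # >= 1, since sh(d_ii) = 1
    return fitness / niche_count
```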

3.4. Algorithm General Framework

In summary, the improved modules are recombined to constitute the new algorithm, which is called the Adaptive Aquila Optimizer Combining Niche Thought with Dispersed Chaotic Swarm (NCAAO). The overall flow chart of the algorithm is shown in Figure 4. In NCAAO, a set of candidate solutions is generated by chaotic mapping to initialize the improved algorithm. Through repeated iterations, the search strategy of NCAAO gradually converges toward the optimal solution or its location. The adaptive adjustment strategy and the communication exchange strategy help the algorithm obtain the optimal position during the optimization process, balancing global exploration with local exploitation. The search process terminates when the algorithm reaches its end criterion. The Algorithm 1 pseudo-code is as follows.
Algorithm 1. Pseudo-code of NCAAO
Initialization phase:
Initialization of the parameters of the AO (i.e., D, UB, LB, t, T)
Generate the initial population Xi (i = 1, 2, ..., N) using DLCS chaotic mapping, with a chaotic track length ChaoticTrackLength_no and a range of [LB, UB]^D
while (t < T + 1)
  Calculate the fitness function values.
  Xbest(t) = Determine the best obtained solution according to the fitness values.
  Update x, y, G1, G2, Levy(D), etc.
  Update adaptive_p and ξ using Equations (24) and (26)
  For i = 1:N
    if rand < adaptive_p
      if rand ≤ 0.5
        Update the position of X(t + 1) using Equation (27)
      Else
        Update the position of X(t + 1) using Equation (28)
      End if
    Else
      if rand ≤ 0.5
        Update the position of X(t + 1) using Equation (29)
      Else
        Update the position of X(t + 1) using Equation (30)
      End if
    End if
    Calculate distance from Xi to Xj using Equation (31)
    Update the best fitness using Equation (33)
  End for
  t = t + 1
End while
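The loop above can be sketched structurally in Python. This is a deliberately simplified skeleton under stated assumptions: random rather than DLCS chaotic initialization, only two of the four position updates, and no niche sharing or Lévy terms; it is not the full NCAAO:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def ncaao_skeleton(f=sphere, n=20, dim=5, lb=-10.0, ub=10.0, T=200, seed=0):
    """Structural skeleton of the NCAAO main loop (Algorithm 1)."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, dim)) * (ub - lb) + lb         # stand-in for DLCS init
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    lam, mu, tau = 3.0, 2.0, 2.0
    for t in range(1, T + 1):
        p = 1 - (lam * (t / T) ** lam + mu * (t / T) ** mu) / (lam + mu)
        xi = (tau * (t / T) ** tau + mu * (t / T) ** mu) / (tau + mu)
        XM = X.mean(axis=0)
        for i in range(n):
            if rng.random() < p:    # exploration branch (Eq. (27) analogue)
                cand = xi * best * (1 - t / T) + (XM - best) * rng.random()
            else:                   # exploitation branch (Eq. (29) analogue)
                cand = ((1 - xi) * best - XM) * 0.1 - rng.random() \
                       + ((ub - lb) * rng.random(dim) + lb) * 0.1
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:         # greedy replacement
                X[i], fit[i] = cand, fc
        best = X[fit.argmin()].copy()
    return best, float(fit.min())
```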

3.5. Time Complexity Analysis

The computational complexity of the improved algorithm adds, over the original algorithm, the chaotic mapping used to generate the initial population and the fitness updates performed through Niche thought. The computational complexity still depends on the initialization of the solutions, the evaluation of the function and the update of the solutions. Suppose the number of individuals in the initialized population is N, the dimension of the objective function is D and the maximum number of iterations is T. Expressing time complexity in "O" notation, the time complexity of generating the initial Aquila population is given by Equation (34).
O(N \times D)
The process of updating the solutions includes updating the positions of all solutions as well as exploring the best positions with a time complexity of Equation (35).
O(N \times T) + O(N \times T \times D)
The niche idea ensures the quality of the population during the iterative process: over repeated iterations, the mutual distances between population individuals are compared and the optimal fitness is adjusted in time, with a time complexity as in Equation (36).
O((N - 1) \times T \times D)
Therefore, the total algorithm time complexity is shown in Equation (37). Since the chaotic orbit length and the number of iterations need to be determined based on the solution problem, the actual complexity of the algorithm is influenced by the actual problem.
O(N \times T \times D)

4. Algorithm Performance Experiments

In this section, the performance of the proposed NCAAO in solving optimization problems is studied. For this purpose, fifteen benchmark functions covering unimodal, high-dimensional multimodal and fixed-dimensional multimodal types were selected from the literature [36,37]. The benchmark functions used are detailed in Table 3. Similarly, three engineering design problems were selected to test the applicability of NCAAO.
In addition, the optimization results obtained by the proposed NCAAO are compared with those of five well-known optimization algorithms: Grey Wolf Optimizer (GWO) [38], Whale Optimization Algorithm (WOA) [39], Harris Hawks Optimization (HHO) [40], Ant Lion Optimizer (ALO) [41] and Aquila Optimizer (AO) [13]. Table 4 shows the values of the control parameters of these algorithms.
To evaluate performance, each competing algorithm, as well as the proposed NCAAO, was run on the objective functions in 30 independent runs, each of 500 iterations. The experimental environment was Windows 10 (64-bit) with an Intel Core i7-10750H CPU at 2.60 GHz and 16 GB of memory; the algorithms were implemented in MATLAB R2019a. The simulation results are reported using two criteria: the average of the best solutions obtained (AVG) and the standard deviation of the best solutions obtained (STD).
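For reference, the two reporting criteria can be computed as in the following sketch. The values in the usage line are placeholders for illustration only, not results from the paper.

```python
import statistics

def summarize(best_values):
    """AVG and sample STD of the best solutions from independent runs."""
    avg = statistics.mean(best_values)
    std = statistics.stdev(best_values) if len(best_values) > 1 else 0.0
    return avg, std

# With 30 independent runs of 500 iterations each, best_values would hold
# 30 entries; the values below are made-up placeholders.
avg, std = summarize([0.12, 0.10, 0.11, 0.13, 0.09])
```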

4.1. Benchmark Set and Compared Algorithms

Table 3 includes five unimodal functions (F1–F5) to test exploitation capability, four high-dimensional multimodal functions (F6–F9) to test exploration capability and local-optimum avoidance, and six fixed-dimensional multimodal functions (F10–F15) that can be regarded as combinations of the first two sets under random rotations, shifts and biases. Such composite test functions are closer to real search spaces and allow exploration and exploitation to be benchmarked simultaneously [42]. The ‘w/t/l’ entries indicate the number of wins, ties and losses of each algorithm. The two-dimensional fitness landscapes of the functions in Table 3 are shown in Figure 5.
The NCAAO algorithm was first tested on the unimodal functions, with results shown in Table 5. As the table shows, NCAAO provides very competitive results on the unimodal test functions, with a significant improvement on the F2 function in all dimensions. The proposed algorithm converges to the global optimum of this function, i.e., zero. This high exploitation capability is aided by initializing the population with the DLCS chaotic map. Therefore, the NCAAO algorithm has a strong exploitation capability.
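The chaos-based initialization described above can be sketched as follows. The DLCS map itself is defined earlier in the paper; here the classic logistic map stands in for it, so this is purely an illustrative sketch of how one chaotic orbit is mapped onto the search bounds.

```python
def chaotic_population(n, dim, lb, ub, mu=4.0, seed=0.7):
    """Generate n individuals in [lb, ub]^dim from one chaotic orbit."""
    z = seed
    pop = []
    for _ in range(n):
        individual = []
        for _ in range(dim):
            z = mu * z * (1.0 - z)                 # logistic map; chaotic at mu = 4
            individual.append(lb + z * (ub - lb))  # map the orbit onto [lb, ub]
        pop.append(individual)
    return pop
```

Because the orbit of a chaotic map is ergodic over (0, 1), the resulting individuals cover the search space more evenly than clustered random draws, which is the property the DLCS initialization exploits.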
Having established the exploitation ability of NCAAO, we turn to its exploration ability. The NCAAO algorithm was tested on the high-dimensional multimodal functions, with results shown in Table 6. The table shows that NCAAO is competitive with the classical algorithms in finding the optima of the F6 and F7 functions across dimensions. The results for the F8 and F9 functions, however, show that NCAAO has stronger exploration and local-optimum avoidance than the other classical algorithms. The excellent exploration of the proposed algorithm stems from the adaptive search threshold and the adaptive position weight parameter. Directing the population from one search region into another promotes exploration, much as the crossover operator in GA strongly emphasizes search-space exploration.
F10–F15 are six objective functions that evaluate the ability of an optimization algorithm to handle fixed-dimensional multimodal problems. The results of optimizing these functions with the proposed NCAAO and the competing algorithms are shown in Table 7. NCAAO converges to the global optimum of F10, F12, F14 and F15, and is the best optimizer for F14 and F15. In optimizing F10, F11, F12 and F13, although the AVG criterion of NCAAO is similar to that of some rival algorithms, its STD criterion is more outstanding. Therefore, the proposed NCAAO solves these objective functions more effectively. The analysis of the simulation results shows that NCAAO handles the fixed-dimensional multimodal optimization problems F10 to F15 better than the five competing algorithms.
Although the AVG and STD indices show that NCAAO outperforms the comparison algorithms, they are not by themselves sufficient to demonstrate its superiority. Because of the random nature of metaheuristics, and for a fair comparison, the Wilcoxon rank-sum test was used to draw statistically significant conclusions [43]. The results of testing the proposed NCAAO against the five competing algorithms are presented in Table 8. In this table, a p-value below 0.05 means that the proposed NCAAO has a significant advantage over the competing algorithm on that group of objective functions, and the p-values show that the advantage is significant in most cases. This supports the high exploitation capability of the proposed algorithm, which is due to the proposed adaptive adjustment strategy of de-searching preferences that balances exploitation and exploration effectively, while the niche technique speeds up the search for the optimum. In addition, the p-values show that NCAAO effectively avoids the zero-search-preference problem, confirming that its results are very competitive.
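The rank-sum comparison behind Table 8 can be reproduced with a short, self-contained sketch. This version uses the normal approximation without tie correction, applied to the two sets of per-run best values; it is an illustration of the test, not the paper’s exact statistical code.

```python
from statistics import NormalDist

def rank_sum_p(sample_a, sample_b):
    """Two-sided p-value of the Wilcoxon rank-sum test (normal approx., no ties)."""
    n, m = len(sample_a), len(sample_b)
    pooled = sorted((value, idx) for idx, value in enumerate(sample_a + sample_b))
    ranks = [0.0] * (n + m)
    for rank, (_, idx) in enumerate(pooled, start=1):
        ranks[idx] = rank                   # indices 0..n-1 belong to sample_a
    w = sum(ranks[:n])                      # rank sum of the first sample
    mean = n * (n + m + 1) / 2              # mean of W under H0
    var = n * m * (n + m + 1) / 12          # variance of W under H0
    z = (w - mean) / var ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

A p-value below 0.05 for a pair of algorithms would, as in Table 8, indicate a statistically significant difference between their 30-run result distributions.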
To evaluate the convergence performance of the NCAAO algorithm, convergence curves were plotted using the best average value over the 30 runs, as shown in Figure 6. The figure shows the convergence curves for the unimodal, high-dimensional multimodal and fixed-dimensional multimodal functions. For the unimodal and multimodal functions, NCAAO finds the optimum more reliably and converges faster, further confirming its exploitation and exploration abilities. For the multimodal functions, NCAAO fluctuates more during the initial iterations and relatively less in the later stages. The significant decreases of the curves during iteration show that the proposed niche-based communication exchange strategy improves the ability to update the best fitness of the population. Overall, the iteration curves indicate that NCAAO has stronger convergence behavior than the comparison algorithms and a better balance between exploitation and exploration.
In summary, the NCAAO algorithm performs significantly better than the classical optimization algorithms compared here (GWO, WOA, HHO, ALO, AO). The data in Table 5 and Table 6 show that its search performance on the unimodal and the high-dimensional multimodal functions is better, with no significant degradation as the dimensionality increases. The convergence analysis in Figure 6 shows that NCAAO converges well overall; although convergence on function F8 is slower than on the other functions, the optimal solution is still found. The fixed-dimensional multimodal test data in Table 7 show that the optimal solution is still found accurately when it is not zero, which reduces the zero-search preference. Compared with the other algorithms, NCAAO achieves a strong balance between exploitation and exploration.

4.2. Engineering Benchmark Sets

To further demonstrate the optimization performance of the NCAAO algorithm and its value for solving engineering problems, the tension/compression spring design problem [44], the pressure vessel design problem [45] and the three-bar truss design problem [46] were selected for testing. The Salp Swarm Algorithm (SSA) [47], Whale Optimization Algorithm (WOA) [39], Grey Wolf Optimizer (GWO) [38], Moth-Flame Optimization (MFO) [48], Gravitational Search Algorithm (GSA) [49], Particle Swarm Optimization (PSO) [9], Genetic Algorithm (GA) [10] and Tunicate Swarm Algorithm (TSA) [50] were selected for comparison with NCAAO on these engineering examples. The control parameter values of these algorithms are likewise listed in Table 4.

4.2.1. Tension/Compression Spring Design Problem

The purpose of the tension/compression spring design problem is to minimize the weight of the spring, shown schematically in Figure 7. The main design variables are the wire diameter of the spring (d), the mean coil diameter (D) and the number of active coils (Q).
The mathematical model of this problem is as follows:
Consider
x = [x1, x2, x3] = [d, D, Q]
Minimize
f(x) = (x3 + 2) x2 x1^2
Subject to
g1(x) = 1 − (x2^3 x3)/(71785 x1^4) ≤ 0
g2(x) = (4x2^2 − x1 x2)/(12566 (x2 x1^3 − x1^4)) + 1/(5108 x1^2) − 1 ≤ 0
g3(x) = 1 − (140.45 x1)/(x2^2 x3) ≤ 0
g4(x) = (x1 + x2)/1.5 − 1 ≤ 0
Variable range
0.05 ≤ x1 ≤ 2.00, 0.25 ≤ x2 ≤ 1.30, 2.00 ≤ x3 ≤ 15.00
NCAAO was applied to this case over 30 independent runs, with 500 individuals and 500 iterations in each run. Since this benchmark case has constraints, NCAAO must be integrated with a constraint-handling technique. The results of NCAAO are compared with those reported for eight algorithms in the previous literature; Table 9 shows the detailed comparison. Based on the results in Table 9, NCAAO gives very competitive results relative to the MFO algorithm and significantly outperforms the other optimizers. These results show that NCAAO is capable of dealing with a constrained search space.
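The paper does not specify which constraint-handling technique was integrated with NCAAO; a static penalty is one common choice and can be sketched as follows, using the standard form of the spring constraints given above. The penalty coefficient is an illustrative assumption, not a value from the paper.

```python
def spring_penalized_cost(x, penalty=1e6):
    """Penalized spring weight; x = (d, D, Q). Penalty coefficient is illustrative."""
    d, D, Q = x
    f = (Q + 2) * D * d ** 2                                          # weight to minimize
    g = [
        1 - (D ** 3 * Q) / (71785 * d ** 4),                          # g1(x) <= 0
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                                    # g2(x) <= 0
        1 - 140.45 * d / (D ** 2 * Q),                                # g3(x) <= 0
        (d + D) / 1.5 - 1,                                            # g4(x) <= 0
    ]
    # Static penalty: add a large cost for each violated constraint.
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

Any of the compared optimizers can then minimize this penalized objective directly over the box 0.05 ≤ d ≤ 2.00, 0.25 ≤ D ≤ 1.30, 2.00 ≤ Q ≤ 15.00.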

4.2.2. Pressure Vessel Design Problem

The pressure vessel design problem is a minimization problem, shown schematically in Figure 8. Its variables are the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R) and the length of the cylindrical section without the head (L).
The formulation of this test case is as follows:
Consider
x = [x1, x2, x3, x4] = [Ts, Th, R, L]
Minimize
f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3
Subject to
g1(x) = −x1 + 0.0193 x3 ≤ 0
g2(x) = −x2 + 0.00954 x3 ≤ 0
g3(x) = −π x3^2 x4 − (4/3)π x3^3 + 1296000 ≤ 0
g4(x) = x4 − 240 ≤ 0
Variable range
0 ≤ x1, x2 ≤ 100, 10 ≤ x3, x4 ≤ 200
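The vessel model above can be evaluated directly, as in this sketch. It assumes the standard form of the objective (with the 0.6224·x1·x3·x4 leading term) and checks the four inequality constraints.

```python
import math

def vessel_cost(x1, x2, x3, x4):
    """Material and forming cost of the vessel (standard objective form)."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_feasible(x1, x2, x3, x4):
    """Check the four inequality constraints g1..g4 <= 0."""
    g = [
        -x1 + 0.0193 * x3,                                                    # shell thickness
        -x2 + 0.00954 * x3,                                                   # head thickness
        -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000,  # volume
        x4 - 240,                                                             # length limit
    ]
    return all(gi <= 0 for gi in g)
```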
Table 10 gives the optimal design results for the pressure vessel. The data in the table show that NCAAO gives the best results compared with the other algorithms; NCAAO is therefore the best optimizer for this problem in this test.

4.2.3. Three-Bar Truss Design Problem

The three-bar truss design problem is a classical problem used to test the performance of numerous algorithms. A schematic of its structure and the force relationships among its parts is shown in Figure 9. The mathematical model of the problem is as follows:
Consider
x = [x1, x2] = [A1, A2]
Minimize
f(x) = (2√2 x1 + x2) × l
Subject to
g1(x) = ((√2 x1 + x2)/(√2 x1^2 + 2 x1 x2)) P − σ ≤ 0,
g2(x) = (x2/(√2 x1^2 + 2 x1 x2)) P − σ ≤ 0,
g3(x) = (1/(x1 + √2 x2)) P − σ ≤ 0,
Variable range
0 ≤ x1, x2 ≤ 1
where l = 100 cm, P = 2 kN/cm^2 and σ = 2 kN/cm^2.
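The truss model can be evaluated as in the following sketch, which uses the constants stated above and the standard form of the three stress constraints:

```python
import math

L_BAR, P, SIGMA = 100.0, 2.0, 2.0   # l = 100 cm, P = sigma = 2 kN/cm^2

def truss_volume(x1, x2):
    """Objective: (2*sqrt(2)*x1 + x2) * l."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_BAR

def truss_feasible(x1, x2):
    """Check the three stress constraints g1..g3 <= 0 and the variable bounds."""
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = P / (x1 + math.sqrt(2.0) * x2) - SIGMA
    return g1 <= 0 and g2 <= 0 and g3 <= 0 and 0 <= x1 <= 1 and 0 <= x2 <= 1
```

An optimizer then minimizes truss_volume subject to truss_feasible, e.g., via the same penalty approach used for the spring problem.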
Table 11 reports the optimum designs attained by NCAAO and the listed optimizers. Inspecting the results in Table 11, NCAAO is the best optimizer for this problem and attains superior results compared with the other techniques.

5. Conclusions and Prospect

In this work, a new meta-heuristic algorithm called NCAAO is proposed to address the AO algorithm’s tendency to fall into local optima, its zero-point search preference and its slow convergence. Three new modules are integrated into the AO algorithm. The first is the DLCS chaotic map used as the generator of the initial Aquila population; the resulting population has a more uniform distribution of individuals, which effectively clarifies the convergence direction of the algorithm. The second is the adaptive adjustment strategy of the search preferences, which balances global search and local exploitation by changing the threshold of the Aquila work-selection mechanism; the adaptive position weight parameter is then introduced to perturb individuals, updating their positions and improving the algorithm’s local exploitation capability. Finally, the third module is a communication exchange strategy based on the niche idea, which better selects the global optimum through information exchange between individuals, promotes rapid convergence of the population toward the global optimum and reduces the search error. Although chaotic maps have been used in other algorithms, this work took into consideration the homogeneity of the populations they produce. Similarly, the zero-search preference of the AO algorithm was investigated and the new algorithm further improved accordingly.
The following theoretical improvements demonstrate that the NCAAO algorithm combines theory and practice effectively:
  • Chaotic maps exhibit randomness, ergodicity and regularity; using the DLCS map produces a more uniform population distribution, which supports an efficient search of the whole search space and further clarifies the convergence direction of the algorithm.
  • Changing the threshold adaptive_p of the Aquila work-selection mechanism and introducing a dynamic adjustment strategy for exploitation and exploration help the algorithm choose a reasonable search strategy for each region and enhance the exploitation behavior of NCAAO during iteration.
  • The introduced adaptive position weight parameter ζ diversifies the positions of individuals, further promoting the exploration behavior of NCAAO during iteration and helping to balance the exploitation and exploration tendencies.
  • The proposed niche-based communication exchange strategy better promotes the improvement of population fitness. In particular, a penalty mechanism is applied to individuals with lower fitness, helping NCAAO cope with difficult high-dimensional search, local-optimum traps and unclear convergence directions.
The experiments further investigated the exploitation, exploration and local-optimum avoidance of NCAAO on fifteen objective functions. The results show that NCAAO has good optimization performance, and three engineering application examples further verify that it provides good solutions for practical engineering applications. Statistical analysis with the Wilcoxon rank-sum test shows that the advantages of NCAAO are statistically significant. Based on this comprehensive study, the proposed algorithm has clear advantages in solving practical problems and can serve as a reference for researchers in other fields. The results demonstrate that NCAAO balances exploitation and exploration more intelligently, largely avoiding local optima and converging quickly toward the optimal solution across many types of problems. At the same time, the search preference is suppressed, and the ability to escape local optima is significantly improved.
In future research, we plan to investigate the application of the proposed algorithm to problems such as image enhancement, feature extraction and camera calibration. In addition, we will further investigate in depth the effect of introducing chaotic mappings to evolve the initial population on the convergence performance, which is essential to enhance the performance of the algorithm.

Author Contributions

Conceptualization, Y.Z. and X.X.; methodology, Y.Z.; validation, N.Z., Y.Z. and X.L.; formal analysis, Y.Z.; investigation, W.D.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z.; visualization, K.Z.; supervision, X.X.; project administration, X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the 111 Project of China (Grant No. D21009) and the State Key Laboratory Fund Project of China: 4D Millimeter Wave Radar Key Technology Research (Grant No. 20210102).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to project confidentiality.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Teng, Z.-J.; Lv, J.-L.; Guo, L.-W. An improved hybrid grey wolf optimization algorithm. Soft Comput. 2019, 23, 6617–6631. [Google Scholar] [CrossRef]
  2. Neumann, F.; Witt, C. Bioinspired computation in combinatorial optimization: Algorithms and their computational complexity. In Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, Amsterdam, The Netherlands, 6–10 July 2013; pp. 567–590. [Google Scholar]
  3. Liu, E.; Yao, X.; Liu, M.; Jin, H. AGV path planning based on improved grey wolf optimization algorithm and its implementation prototype platform. Comput. Integr. Manuf. Syst. 2018, 24, 2779–2791. [Google Scholar]
  4. Shi, C.; Zeng, Y.; Hou, S. Summary of the application of swarm intelligence algorithms in image segmentation. Comput. Eng. Appl. 2021, 57, 36–47. [Google Scholar]
  5. Li, S.-Y.; He, Q.; Chen, J. Application of improved equilibrium optimizer algorithm to constrained optimization problems. J. Front. Comput. Sci. Technol. 2021, 9, 1–14. [Google Scholar]
  6. Zheng, Y.J.; Wang, Y.; Ling, H.F.; Xue, Y.; Chen, S.Y. Integrated civilian–military pre-positioning of emergency supplies: A multi-objective optimization approach. Appl. Soft Comput. 2017, 58, 732–741. [Google Scholar] [CrossRef]
  7. Zou, S.; Yang, F.; Tang, Y.; Xiao, L.; Zhao, X. Optimized algorithm of sensor node deployment for intelligent agricultural monitoring. Comput. Electron. Agric. 2016, 127, 76–86. [Google Scholar]
  8. Le Quiniou, M.; Mandel, P.; Monier, L. Optimization of drinking water and sewer hydraulic management: Coupling of a genetic algorithm and two network hydraulic tools. Procedia Eng. 2014, 89, 710–718. [Google Scholar] [CrossRef] [Green Version]
  9. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 7 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  10. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  11. Karaboga, D. Artificial bee colony algorithm. Scholarpedia 2010, 5, 6915. [Google Scholar] [CrossRef]
  12. Forestiero, A. Heuristic recommendation technique in Internet of Things featuring swarm intelligence approach. Expert Syst. Appl. 2022, 187, 115904. [Google Scholar] [CrossRef]
  13. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  14. Wang, S.; Jia, H.; Abualigah, L.; Liu, Q.; Zheng, R. An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021, 9, 1551. [Google Scholar] [CrossRef]
  15. Verma, M.; Sreejeth, M.; Singh, M.; Babu, T.S.; Alhelou, H.H. Chaotic Mapping Based Advanced Aquila Optimizer with Single Stage Evolutionary Algorithm. IEEE Access 2022, 10, 89153–89169. [Google Scholar] [CrossRef]
  16. Akyol, S. A new hybrid method based on Aquila optimizer and tangent search algorithm for global optimization. J. Ambient. Intell. Humaniz. Comput. 2022, 1–21. [Google Scholar] [CrossRef]
  17. Mahajan, S.; Abualigah, L.; Pandit, A.K.; Altalhi, M. Hybrid Aquila optimizer with arithmetic optimization algorithm for global optimization tasks. Soft Comput. 2022, 26, 4863–4881. [Google Scholar] [CrossRef]
  18. Zhang, Y.-J.; Yan, Y.-X.; Zhao, J.; Gao, Z.-M. AOAAO: The hybrid algorithm of arithmetic optimization algorithm with aquila optimizer. IEEE Access 2022, 10, 10907–10933. [Google Scholar] [CrossRef]
  19. AlRassas, A.M.; Al-qaness, M.A.; Ewees, A.A.; Ren, S.; Abd Elaziz, M.; Damaševičius, R.; Krilavičius, T. Optimized ANFIS model using Aquila Optimizer for oil production forecasting. Processes 2021, 9, 1194. [Google Scholar] [CrossRef]
  20. Abd Elaziz, M.; Dahou, A.; Alsaleh, N.A.; Elsheikh, A.H.; Saba, A.I.; Ahmadein, M. Boosting COVID-19 image classification using MobileNetV3 and aquila optimizer algorithm. Entropy 2021, 23, 1383. [Google Scholar] [CrossRef]
  21. Jnr, E.O.N.; Ziggah, Y.Y.; Rodrigues, M.J.; Relvas, S. A hybrid chaotic-based discrete wavelet transform and Aquila optimisation tuned-artificial neural network approach for wind speed prediction. Results Eng. 2022, 14, 100399. [Google Scholar]
  22. Ma, L.; Li, J.; Zhao, Y. Population Forecast of China’s Rural Community Based on CFANGBM and Improved Aquila Optimizer Algorithm. Fractal Fract. 2021, 5, 190. [Google Scholar] [CrossRef]
  23. Aribowo, W.; Supari, B.S.; Suprianto, B. Optimization of PID parameters for controlling DC motor based on the aquila optimizer algorithm. Int. J. Power Electron. Drive Syst. (IJPEDS) 2022, 13, 808–2814. [Google Scholar] [CrossRef]
  24. Ali, M.H.; Salawudeen, A.T.; Kamel, S.; Salau, H.B.; Habil, M.; Shouran, M. Single-and multi-objective modified aquila optimizer for optimal multiple renewable energy resources in distribution network. Mathematics 2022, 10, 2129. [Google Scholar] [CrossRef]
  25. Yao, J.; Sha, Y.; Chen, Y.; Zhang, G.; Hu, X.; Bai, G.; Liu, J. IHSSAO: An Improved Hybrid Salp Swarm Algorithm and Aquila Optimizer for UAV Path Planning in Complex Terrain. Appl. Sci. 2022, 12, 5634. [Google Scholar] [CrossRef]
  26. Alkayem, N.F.; Shen, L.; Al-hababi, T.; Qian, X.; Cao, M. Inverse Analysis of Structural Damage Based on the Modal Kinetic and Strain Energies with the Novel Oppositional Unified Particle Swarm Gradient-Based Optimizer. Appl. Sci. 2022, 12, 11689. [Google Scholar] [CrossRef]
  27. Alkayem, N.F.; Cao, M.; Shen, L.; Fu, R.; Šumarac, D. The combined social engineering particle swarm optimization for real-world engineering problems: A case study of model-based structural health monitoring. Appl. Soft Comput. 2022, 123, 108919. [Google Scholar] [CrossRef]
  28. Li, J.; Zang, H.; Wei, X. On the construction of one-dimensional discrete chaos theory based on the improved version of Marotto’s theorem. J. Comput. Appl. Math. 2020, 380, 112952. [Google Scholar] [CrossRef]
  29. Zandi-Mehran, N.; Jafari, S.; Golpayegani, S.M.R.H. Signal separation in an aggregation of chaotic signals. Chaos Solitons Fractals 2020, 138, 109851. [Google Scholar] [CrossRef]
  30. Anand, P.; Arora, S. A novel chaotic selfish herd optimizer for global optimization and feature selection. Artif. Intell. Rev. 2020, 53, 1441–1486. [Google Scholar] [CrossRef]
  31. Lin, Z.-B.; Liu, Y.-H. Divided chaotic oscillatory annealing TSP optimization algorithm based on greedy strategy. Appl. Res. Comput. 2021, 38, 2359–2364. [Google Scholar]
  32. Fan, J.-L.; Zhang, X.-F. Piecewise Logistic Chaotic Map and Its Performance Analysis. Acta Electron. Sin. 2009, 37, 720–725. [Google Scholar]
  33. Wang, C.; Di, Y.; Tang, J.; Shuai, J.; Zhang, Y.; Lu, Q. The Dynamic Analysis of a Novel Reconfigurable Cubic Chaotic Map and Its Application in Finite Field. Symmetry 2021, 13, 1420. [Google Scholar] [CrossRef]
  34. Zhang, H.; Zhang, T.N.; Shen, J.H.; Li, Y. Research on decision-makings of structure optimization based on improved Tent PSO. Control Decis. 2008, 8, 857–862. [Google Scholar]
  35. Wang, K.; Mao, W. Simulation of Vertical Temperature Distribution in Green Building Space Based on the Niche Genetic Algorithm. In Proceedings of the 2021 IEEE International Conference on Industrial Application of Artificial Intelligence (IAAI), Harbin, China, 24–26 December 2021; pp. 123–130. [Google Scholar]
  36. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  37. Digalakis, J.G.; Margaritis, K.G. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 2001, 77, 481–506. [Google Scholar] [CrossRef]
  38. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  39. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  40. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  41. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  42. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  43. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  44. Zhang, M.-J.; Zhang, H.; Chen, X.; Yang, J. A grey wolf optimization algorithm based on Cubic mapping and its application. Comput. Eng. Sci. 2021, 43, 2035–2042. [Google Scholar]
  45. Kannan, B.; Kramer, S.N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Albuquerque, NM, USA, 19–22 September 1993; pp. 103–112. [Google Scholar]
  46. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimization algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  47. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  48. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  49. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  50. Kaur, S.; Awasthi, L.K.; Sangal, A.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
Figure 1. Zero preference interference and incentive.
Figure 2. Chaotic System Planar Distribution.
Figure 3. Diagram of Niche Thought.
Figure 4. NCAAO algorithm flow chart.
Figure 5. Fitness charts of 15 cases in two dimensions.
Figure 6. Convergence comparison curves of experiment.
Figure 7. Schematic diagram of a tension/compression spring. (a) Schematic of the spring, (b) stress distribution evaluated at the optimum design.
Figure 8. Pressure vessel design problem.
Figure 9. Three-bar truss design problem.
Table 1. Standard AO optimization process performance splitting.
| Test Module | F1_fit | F10_fit | F13_fit |
|---|---|---|---|
| Equation (3) | 2.21 × 10−160 | 8.4767 | 3.3478 |
| Equation (5) | 86.4249 | 90.996 | 3.2516 |
| Equation (12) | 1.53 × 10−41 | 2.6705 | 5.3837 |
| Equation (13) | 0.0018084 | 1.998 | 3.0893 |
Table 2. Planar uniformity comparison.
| U | Logistic | Cubic | Tent | DLCS |
|---|---|---|---|---|
| Umax | 1.4215 | 1.3726 | 1.0259 | 1.0238 |
| Umin | 1.3201 | 1.1836 | 0.9931 | 0.9825 |
| Uavg | 1.3698 | 1.2145 | 1.0241 | 1.0026 |
Table 3. Benchmark functions.
F1(x) = Σ_{i=1}^{n} x_i^2; Dim: 10, 30, 100, 500; Range: [−100, 100]; fmin: 0
F2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|; Dim: 10, 30, 100, 500; Range: [−10, 10]; fmin: 0
F3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2; Dim: 10, 30, 100, 500; Range: [−100, 100]; fmin: 0
F4(x) = max_i {|x_i|, 1 ≤ i ≤ n}; Dim: 10, 30, 100, 500; Range: [−100, 100]; fmin: 0
F5(x) = Σ_{i=1}^{n} i·x_i^4 + random[0, 1); Dim: 10, 30, 100, 500; Range: [−1.28, 1.28]; fmin: 0
F6(x) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2πx_i) + 10]; Dim: 10, 30, 100, 500; Range: [−5.12, 5.12]; fmin: 0
F7(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1; Dim: 10, 30, 100, 500; Range: [−600, 600]; fmin: 0
F8(x) = (π/n){10 sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a; Dim: 10, 30, 100, 500; Range: [−50, 50]; fmin: 0
F9(x) = 0.1{sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})] + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4); Dim: 10, 30, 100, 500; Range: [−50, 50]; fmin: 0
F10(x) = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_ij)^6)]^{−1}; Dim: 2; Range: [−65, 65]; fmin: 1
F11(x) = Σ_{i=1}^{11} [a_i − x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2; Dim: 4; Range: [−5, 5]; fmin: 0.00030
F12(x) = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1 x_2 − 4x_2^2 + 4x_2^4; Dim: 2; Range: [−5, 5]; fmin: −1.0316
F13(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)]; Dim: 2; Range: [−2, 2]; fmin: 3
F14(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{3} a_ij (x_j − p_ij)^2); Dim: 3; Range: [−1, 2]; fmin: −3.86
F15(x) = −Σ_{i=1}^{10} [(X − a_i)(X − a_i)^T + c_i]^{−1}; Dim: 4; Range: [0, 1]; fmin: −10.5363
Table 4. Parameter settings.
| Algorithm | Setting |
|---|---|
| GWO | a linearly decreased from 2 to 0, rate = 3 |
| WOA | α decreased from 2 to 0, b = 1 |
| HHO | t = 0 |
| ALO | I ratio = 10^ω, ω = [2, 6] |
| AO | S = 1.5, r1 takes a fixed index between 1 and 20, G2 decreased from 2 to 0 |
| MFO | convergence constant α from 2 to 0, spiral factor b = 1 |
| PSO | C1 = −0.2, C2 = 1.49445, w = 0.8 |
| GA | Pc = 0.95, Pm = 0.001, Er = 0.2 |
| TSA | t = 0 |
| SSA | P_percent = 0.2 |
Table 5. The comparison of obtained solutions for unimodal functions.
| F | D | Index | GWO | WOA | HHO | ALO | AO | NCAAO |
|---|---|---|---|---|---|---|---|---|
| F1 | 10 | AVG | 5.6575 × 10^−131 | 4.8691 × 10^−122 | 9.5173 × 10^−122 | 3.2913 × 10^−10 | 5.6144 × 10^−174 | 0 |
| | | STD | 1.4541 × 10^−130 | 1.0843 × 10^−121 | 2.6643 × 10^−121 | 1.1628 × 10^−100 | 0 | 0 |
| | 30 | AVG | 5.4833 × 10^−57 | 1.3682 × 10^−115 | 5.8876 × 10^−118 | 4.8172 × 10^−8 | 4.2005 × 10^−176 | 0 |
| | | STD | 1.3376 × 10^−56 | 2.0943 × 10^−115 | 1.2566 × 10^−117 | 2.0373 × 10^−8 | 0 | 0 |
| | 100 | AVG | 1.1543 × 10^−25 | 2.6843 × 10^−114 | 3.2317 × 10^−115 | 3.8028 × 10^−2 | 1.381 × 10^−177 | 0 |
| | | STD | 2.5685 × 10^−25 | 3.8666 × 10^−113 | 5.6943 × 10^−114 | 2.6591 × 10^−2 | 0 | 0 |
| F2 | 10 | AVG | 7.7136 × 10^−72 | 1.0135 × 10^−67 | 3.6286 × 10^−67 | 1.9342 × 10^−2 | 5.2349 × 10^−87 | 5.8837 × 10^−169 |
| | | STD | 1.4544 × 10^−71 | 1.60484 × 10^−67 | 7.06325 × 10^−67 | 4.3151 × 10^−2 | 1.1556 × 10^−86 | 0 |
| | 30 | AVG | 3.3720 × 10^−33 | 4.99 × 10^−67 | 4.3415 × 10^−69 | 2.2323 | 1.0200 × 10^−88 | 1.9000 × 10^−168 |
| | | STD | 3.94207 × 10^−33 | 9.9288 × 10^−67 | 4.30699 × 10^−67 | 1.5552 | 1.86559 × 10^−88 | 0 |
| | 100 | AVG | 2.7785 × 10^−14 | 1.2822 × 10^−63 | 4.1000 × 10^−68 | 8.0225 × 10^2 | 1.4752 × 10^−88 | 5.8922 × 10^−167 |
| | | STD | 3.5943 × 10^−14 | 2.4739 × 10^−63 | 3.5430 × 10^−67 | 1.1591 × 10^2 | 1.7100 × 10^−88 | 0 |
| F3 | 10 | AVG | 6.7187 × 10^−76 | 4.3830 × 10^−6 | 1.4170 × 10^−106 | 1.6350 × 10^−6 | 1.4380 × 10^−178 | 0 |
| | | STD | 2.3663 × 10^−75 | 1.36690 × 10^−6 | 0 | 7.3425 × 10^−6 | 0 | 0 |
| | 30 | AVG | 4.1523 × 10^−22 | 4.2241 × 10^−3 | 4.4823 × 10^−111 | 2.1156 × 10^1 | 2.3572 × 10^−173 | 0 |
| | | STD | 1.2303 × 10^−22 | 5.0500 × 10^−3 | 0 | 7.6463 × 10^1 | 0 | 0 |
| | 100 | AVG | 1.4000 × 10^−3 | 4.1766 × 10^5 | 1.8609 × 10^−107 | 1.2330 × 10^4 | 1.3141 × 10^−177 | 0 |
| | | STD | 5.2473 × 10^−2 | 2.2046 × 10^5 | 5.5540 × 10^−107 | 8.1439 × 10^4 | 0 | 0 |
| F4 | 10 | AVG | 2.5343 × 10^−44 | 1.1055 × 10^−8 | 6.3666 × 10^−60 | 1.5402 × 10^−5 | 1.9843 × 10^−89 | 7.9443 × 10^−167 |
| | | STD | 3.4655 × 10^−44 | 2.4773 × 10^−8 | 7.6522 × 10^−60 | 3.0500 × 10^−6 | 2.5100 × 10^−89 | 0 |
| | 30 | AVG | 8.1471 × 10^−15 | 4.4538 × 10^−14 | 4.4883 × 10^−59 | 3.4741 | 1.1948 × 10^−86 | 2.0741 × 10^−167 |
| | | STD | 7.7544 × 10^−15 | 8.5664 × 10^−14 | 9.9273 × 10^−59 | 3.7564 | 2.6700 × 10^−86 | 0 |
| | 100 | AVG | 4.4000 × 10^−5 | 1.7645 × 10^−1 | 7.6788 × 10^−60 | 2.0600 × 10^1 | 1.1767 × 10^−86 | 5.0800 × 10^−168 |
| | | STD | 1.5946 × 10^−5 | 3.4955 × 10^−1 | 8.6634 × 10^−60 | 2.5973 | 2.3443 × 10^−86 | 0 |
| F5 | 10 | AVG | 2.7156 × 10^−4 | 2.0000 × 10^−2 | 1.6343 × 10^−5 | 3.0100 × 10^−3 | 9.2146 × 10^−5 | 4.3222 × 10^−12 |
| | | STD | 3.0725 × 10^−4 | 3.5740 × 10^−2 | 2.0440 × 10^−5 | 3.3140 × 10^−3 | 1.3600 × 10^−4 | 0 |
| | 30 | AVG | 6.8043 × 10^−5 | 2.6225 × 10^−3 | 1.7400 × 10^−4 | 3.8846 × 10^−3 | 1.5622 × 10^−4 | 2.0510 × 10^−6 |
| | | STD | 2.0579 × 10^−5 | 2.8950 × 10^−3 | 3.2803 × 10^−4 | 1.9460 × 10^−3 | 2.0864 × 10^−4 | 0 |
| | 100 | AVG | 2.2536 × 10^−4 | 1.9441 × 10^−4 | 1.8113 × 10^−5 | 3.7800 × 10^−1 | 1.8843 × 10^−4 | 5.2014 × 10^−6 |
| | | STD | 1.8141 × 10^−4 | 3.3940 × 10^−4 | 1.8235 × 10^−5 | 3.1397 × 10^−1 | 3.8878 × 10^−4 | 1.5420 × 10^−6 |
| Rank | 10 | w/t/l | 0/0/5 | 0/0/5 | 0/0/5 | 0/0/5 | 0/0/5 | 5/0/0 |
| | 30 | w/t/l | 0/0/5 | 0/0/5 | 0/0/5 | 0/0/5 | 0/0/5 | 5/0/0 |
| | 100 | w/t/l | 0/0/5 | 0/0/5 | 0/0/5 | 0/0/5 | 0/0/5 | 5/0/0 |
Table 6. The comparison of obtained solutions for multimodal functions.
| F | D | Index | GWO | WOA | HHO | ALO | AO | NCAAO |
|---|---|---|---|---|---|---|---|---|
| F6 | 10 | AVG | 4.2526 | 2.9536 × 10^−15 | 0 | 1.7724 × 10^1 | 0 | 0 |
| | | STD | 5.9828 | 6.8724 × 10^−15 | 0 | 1.1325 × 10^1 | 0 | 0 |
| | 30 | AVG | 2.1300 × 10^1 | 0 | 0 | 3.7500 × 10^2 | 0 | 0 |
| | | STD | 1.3500 × 10^1 | 0 | 0 | 6.8300 × 10^1 | 0 | 0 |
| | 100 | AVG | 1.4028 × 10^1 | 0 | 0 | 3.3645 × 10^2 | 0 | 0 |
| | | STD | 4.1562 | 0 | 0 | 4.0162 × 10^1 | 0 | 0 |
| F7 | 10 | AVG | 4.9425 × 10^−3 | 4.4737 × 10^−2 | 0 | 1.7114 × 10^−1 | 0 | 0 |
| | | STD | 6.7643 × 10^−3 | 6.2962 × 10^−2 | 0 | 1.0108 × 10^−1 | 0 | 0 |
| | 30 | AVG | 4.1982 × 10^−2 | 0 | 0 | 2.2600 × 10^−2 | 0 | 0 |
| | | STD | 3.6423 × 10^−2 | 0 | 0 | 2.4700 × 10^−1 | 0 | 0 |
| | 100 | AVG | 2.7342 × 10^−2 | 1.3320 × 10^−1 | 0 | 2.3934 × 10^−1 | 0 | 0 |
| | | STD | 2.2163 × 10^−2 | 1.5432 × 10^−1 | 0 | 1.5643 × 10^−1 | 0 | 0 |
| F8 | 10 | AVG | 2.0436 × 10^−2 | 1.1674 × 10^−2 | 1.4803 × 10^−4 | 2.8749 × 10^−2 | 3.1236 × 10^−4 | 1.8510 × 10^−5 |
| | | STD | 8.8876 × 10^−3 | 8.9764 × 10^−2 | 5.2234 × 10^−4 | 1.2356 × 10^−3 | 4.2360 × 10^−5 | 3.2664 × 10^−5 |
| | 30 | AVG | 6.1270 × 10^−1 | 4.7881 × 10^−1 | 2.0856 × 10^−6 | 1.5716 × 10^1 | 1.0767 × 10^−6 | 1.2786 × 10^−7 |
| | | STD | 1.5289 × 10^−1 | 2.3845 × 10^−1 | 1.1722 × 10^−5 | 2.1337 × 10^1 | 1.4585 × 10^−6 | 3.2361 × 10^−7 |
| | 100 | AVG | 5.2230 × 10^−2 | 1.3180 × 10^−1 | 4.2344 × 10^−5 | 2.4432 × 10^1 | 7.3494 × 10^−6 | 6.7628 × 10^−6 |
| | | STD | 1.3462 × 10^−2 | 1.5785 × 10^−1 | 5.4672 × 10^−5 | 1.3258 × 10^1 | 8.7032 × 10^−6 | 1.3640 × 10^−6 |
| F9 | 10 | AVG | 4.0664 × 10^−2 | 7.3654 × 10^−2 | 1.5774 × 10^−4 | 5.0367 × 10^−3 | 9.4682 × 10^−5 | 5.4800 × 10^−7 |
| | | STD | 9.0056 × 10^−2 | 6.3332 × 10^−2 | 2.5761 × 10^−4 | 5.9886 × 10^−3 | 0.3021 × 10^−2 | 2.6473 × 10^−6 |
| | 30 | AVG | 4.0500 | 3.7500 | 1.5736 × 10^−4 | 9.6700 × 10^1 | 6.4589 × 10^−5 | 1.3468 × 10^−6 |
| | | STD | 4.8600 | 5.4500 × 10^−1 | 1.8932 × 10^−4 | 7.5600 × 10^1 | 6.6508 × 10^−5 | 5.6430 × 10^−6 |
| | 100 | AVG | 3.6753 × 10^−1 | 4.3929 × 10^−1 | 9.1331 × 10^−4 | 1.3463 × 10^1 | 1.8243 × 10^−5 | 2.8756 × 10^−4 |
| | | STD | 1.0467 × 10^−1 | 3.1817 × 10^−1 | 1.2332 × 10^−3 | 1.2789 × 10^1 | 1.6473 × 10^−5 | 7.6312 × 10^−4 |
| Rank | 10 | w/t/l | 0/0/4 | 0/0/4 | 0/2/2 | 0/0/4 | 0/2/2 | 2/2/0 |
| | 30 | w/t/l | 0/0/4 | 0/1/3 | 0/2/2 | 0/0/4 | 0/2/2 | 2/2/0 |
| | 100 | w/t/l | 0/0/4 | 0/0/4 | 0/2/0 | 0/0/4 | 0/2/2 | 2/2/0 |
Table 7. The comparison of obtained solutions for fixed-dimension multimodal functions.
| F | D | Index | GWO | WOA | HHO | ALO | AO | NCAAO |
|---|---|---|---|---|---|---|---|---|
| F10 | 2 | AVG | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1 |
| | | STD | 0 | 0 | 0 | 0 | 0 | 0 |
| F11 | 4 | AVG | 3.1666 × 10^−4 | 3.2614 × 10^−4 | 3.1002 × 10^−4 | 5.6676 × 10^−4 | 5.1303 × 10^−4 | 5.4825 × 10^−4 |
| | | STD | 1.9293 × 10^−5 | 3.5372 × 10^−5 | 3.3390 × 10^−6 | 2.2070 × 10^−4 | 4.0083 × 10^−4 | 2.0710 × 10^−6 |
| F12 | 2 | AVG | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
| | | STD | 0 | 0 | 0 | 0 | 0 | 0 |
| F13 | 2 | AVG | 3 | 3 | 3 | 3 | 3 | 3.0053 |
| | | STD | 0 | 0 | 0 | 0 | 0 | 1.3000 × 10^−4 |
| F14 | 3 | AVG | −3.8623 | −3.8596 | −3.8347 | −3.8213 | −3.8425 | −3.8604 |
| | | STD | 1.6897 × 10^−3 | 2.1255 × 10^−2 | 4.0300 × 10^−2 | 0.4283 × 10^−2 | 0.1978 × 10^−2 | 7.7618 × 10^−4 |
| F15 | 4 | AVG | −1.0543 × 10^1 | −2.1105 × 10^4 | −7.2953 | −7.2943 | −2.1134 × 10^4 | −1.0500 × 10^1 |
| | | STD | 1.0832 × 10^−4 | 4.8980 × 10^−5 | 2.9620 | 2.9620 | 1.0496 × 10^−4 | 7.4736 × 10^−4 |
| Rank | D | w/t/l | 0/3/2 | 0/3/2 | 1/3/2 | 0/3/3 | 0/3/3 | 2/2/1 |
Table 8. The p-values obtained from the Wilcoxon rank sum test (values with p ≥ 0.05 have been underlined).
| Functions Type | GWO and NCAAO | WOA and NCAAO | HHO and NCAAO | ALO and NCAAO | AO and NCAAO |
|---|---|---|---|---|---|
| Unimodal | 0.000639 | 0.000248 | 0.001553 | 0.000010 | 0.009578 |
| High-dimensional multimodal | 0.028571 | _0.065714_ | _0.065714_ | 0.028571 | _0.082857_ |
| Fixed-dimensional multimodal | 0.011893 | 0.007800 | 0.003965 | 0.007800 | _0.068436_ |
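The p-values above are produced by the two-sided Wilcoxon rank sum test at the 5% significance level. A stdlib-only sketch of that test using the large-sample normal approximation is shown below; the two samples are illustrative stand-ins for final fitness values of two optimizers, not data from the experiments.

```python
import math
from statistics import NormalDist

def rank_sum_p(a, b):
    # Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    # Assumes all values are distinct (no tie correction), which is typical
    # for floating-point fitness values.
    n1, n2 = len(a), len(b)
    rank = {v: r for r, v in enumerate(sorted(list(a) + list(b)), start=1)}
    w = sum(rank[v] for v in a)                        # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                      # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # std of W under H0
    z = (w - mu) / sigma
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Hypothetical final-fitness samples for two optimizers (illustrative only)
run_a = [0.121, 0.104, 0.150, 0.113, 0.132, 0.145, 0.098, 0.160, 0.127, 0.101]
run_b = [0.221, 0.254, 0.210, 0.246, 0.233, 0.262, 0.205, 0.271, 0.258, 0.229]
print(rank_sum_p(run_a, run_b))  # well below 0.05: a significant difference
```

For small samples with ties, an exact test (e.g., `scipy.stats.ranksums` or `mannwhitneyu`) is preferable to this approximation.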
Table 9. Comparison of results for tension/compression spring problem.
| Algorithm | d | D | N | Optimum Weight |
|---|---|---|---|---|
| SSA [47] | 0.051197 | 0.345219 | 12.00402 | 0.0126821 |
| WOA [39] | 0.05119 | 0.357236 | 12.00309 | 0.0126828 |
| GWO [38] | 0.05171 | 0.354382 | 11.28698 | 0.0125433 |
| MFO [48] | 0.051843 | 0.364107 | 11.24036 | 0.0126734 |
| GSA [49] | 0.05019 | 0.353697 | 14.25410 | 0.126963 |
| PSO [9] | 0.05031 | 0.310225 | 14.00324 | 0.013108 |
| GA [10] | 0.05034 | 0.315979 | 15.24316 | 0.012833 |
| TSA [50] | 0.050109 | 0.341599 | 12.07350 | 0.012655 |
| NCAAO | 0.051836 | 0.360026 | 11.13659 | 0.0126649 |
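In the standard formulation of this problem (not restated in this excerpt), the objective is the spring weight f(d, D, N) = (N + 2)·D·d², minimized subject to shear-stress, surge-frequency, and deflection constraints. A minimal sketch evaluating that objective at the tabulated NCAAO solution; the result agrees with the reported optimum weight to roughly three significant figures, which is expected once the decision variables are rounded for publication.

```python
def spring_weight(d, D, N):
    # Standard tension/compression spring objective: f = (N + 2) * D * d^2
    # d: wire diameter, D: mean coil diameter, N: number of active coils
    return (N + 2.0) * D * d * d

# NCAAO solution from Table 9 (constraint feasibility not checked here)
w = spring_weight(d=0.051836, D=0.360026, N=11.13659)
print(round(w, 7))  # ~0.0127, close to the tabulated 0.0126649
```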
Table 10. Comparison of results for pressure vessel design problem.
| Algorithm | Ts | Th | R | L | Optimum Cost |
|---|---|---|---|---|---|
| SSA [47] | 0.812500 | 0.437516 | 42.09517 | 176.9635 | 6059.7294 |
| WOA [39] | 0.783694 | 0.391106 | 40.60043 | 200 | 5923.497 |
| GWO [38] | 0.85257 | 0.421394 | 44.23575 | 175.5324 | 6142.5731 |
| MFO [48] | 0.81150 | 0.441350 | 42.10036 | 176.3492 | 6058.3024 |
| GSA [49] | 1.08996 | 0.956638 | 49.73246 | 183.6672 | 8766.9234 |
| PSO [9] | 0.75536 | 0.424861 | 41.65367 | 179.1068 | 5919.763 |
| GA [10] | 1.095612 | 0.920064 | 44.67365 | 182.5634 | 6587.639 |
| TSA [50] | 0.781681 | 0.386526 | 40.31256 | 200 | 5919.263 |
| NCAAO | 0.817624 | 0.417563 | 42.98364 | 177.8329 | 6000.3776 |
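The standard pressure vessel objective (again from the literature, not restated in this excerpt) is f(Ts, Th, R, L) = 0.6224·Ts·R·L + 1.7781·Th·R² + 3.1661·Ts²·L + 19.84·Ts²·R. The sketch below evaluates it at the tabulated NCAAO variables; note that re-evaluating rounded published variables typically lands near, but not exactly at, the reported cost.

```python
def vessel_cost(Ts, Th, R, L):
    # Standard pressure vessel design objective:
    # f = 0.6224*Ts*R*L + 1.7781*Th*R^2 + 3.1661*Ts^2*L + 19.84*Ts^2*R
    # Ts: shell thickness, Th: head thickness, R: inner radius, L: shell length
    return (0.6224 * Ts * R * L + 1.7781 * Th * R * R
            + 3.1661 * Ts * Ts * L + 19.84 * Ts * Ts * R)

# NCAAO variables from Table 10 (constraint feasibility not checked here)
c = vessel_cost(Ts=0.817624, Th=0.417563, R=42.98364, L=177.8329)
print(round(c, 1))  # same order as the tabulated cost of ~6000
```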
Table 11. Comparison of results for the three-bar truss design problem.
| Algorithm | x1 | x2 | Optimum Weight |
|---|---|---|---|
| SSA [47] | 0.791462 | 0.407936 | 264.10593 |
| WOA [39] | 0.786629 | 0.406775 | 263.89534 |
| GWO [38] | 0.786513 | 0.408248 | 264.01956 |
| MFO [48] | 0.787236 | 0.406624 | 263.89511 |
| GSA [49] | 0.790254 | 0.407993 | 263.97544 |
| PSO [9] | 0.788522 | 0.408346 | 263.89602 |
| GA [10] | 0.794366 | 0.395462 | 264.00395 |
| TSA [50] | 0.789560 | 0.408011 | 262.97637 |
| NCAAO | 0.788653 | 0.408294 | 263.89582 |
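For the three-bar truss, the standard objective is the structure weight f(x1, x2) = (2√2·x1 + x2)·l with l = 100 cm (the formulation is the usual one from the engineering-design literature, not restated in this excerpt). Evaluating it at the tabulated NCAAO solution reproduces the reported weight almost exactly:

```python
import math

def truss_weight(x1, x2, l=100.0):
    # Standard three-bar truss objective: f = (2*sqrt(2)*x1 + x2) * l
    # x1, x2: cross-sectional areas; l: member length (100 cm)
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

# NCAAO solution from Table 11 (stress constraints not checked here)
w = truss_weight(0.788653, 0.408294)
print(round(w, 4))  # ~263.894, close to the tabulated 263.89582
```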
Zhang, Y.; Xu, X.; Zhang, N.; Zhang, K.; Dong, W.; Li, X. Adaptive Aquila Optimizer Combining Niche Thought with Dispersed Chaotic Swarm. Sensors 2023, 23, 755. https://doi.org/10.3390/s23020755