Article

An Improved Nonlinear Tuna Swarm Optimization Algorithm Based on Circle Chaos Map and Levy Flight Operator

College of Software, Nankai University, Tianjin 300071, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(22), 3678; https://doi.org/10.3390/electronics11223678
Submission received: 23 October 2022 / Revised: 31 October 2022 / Accepted: 7 November 2022 / Published: 10 November 2022
(This article belongs to the Special Issue Applications of Computational Intelligence)

Abstract

The tuna swarm optimization algorithm (TSO) is a new heuristic algorithm proposed by observing the foraging behavior of tuna populations. The advantages of TSO are a simple structure and fewer parameters. Although TSO converges faster than some classical meta-heuristic algorithms, it can still be further accelerated. When TSO solves complex and challenging problems, it often falls into local optima. To overcome these issues, this article proposes an improved nonlinear tuna swarm optimization algorithm based on the Circle chaos map and the Levy flight operator (CLTSO). In order to compare it with some advanced heuristic algorithms, the performance of CLTSO is tested with unimodal functions, multimodal functions, and some CEC2014 benchmark functions. The test results of these benchmark functions are statistically analyzed using the Wilcoxon test, the Friedman test, and MAE analysis. The experimental results and statistical analysis indicate that CLTSO is more competitive than other advanced algorithms. Finally, this paper uses CLTSO to optimize a BP neural network in the field of artificial intelligence. A CLTSO-BP neural network model is proposed. Three popular datasets from the UCI Machine Learning and Intelligent System Center are selected to test the classification performance of the new model. The comparison results indicate that the new model has higher classification accuracy than the original BP model.

1. Introduction

Nowadays, many engineering problems in real life have become more and more complex and challenging. High-quality solutions can help people effectively reduce resource investment. Because most production practice problems are multivariate, nonlinear, and have many complex constraints, the traditional branch and bound algorithm [1], conjugate gradient method [2], and dynamic programming method [3] cannot achieve remarkable results regarding these problems. The meta-heuristic algorithm has the characteristics of strong global search ability, no dependence on gradient information, and wide adaptability. It can effectively overcome the shortcomings of traditional optimization algorithms. Much of the research on meta-heuristic algorithms has shown that these algorithms are able to solve nonlinear optimization problems [4,5]. Many researchers tend to use meta-heuristic algorithms to solve complex engineering problems. Now, meta-heuristic algorithms are applied in various fields, such as workshop scheduling [6], task optimization [7], engineering management [8,9,10], and others.
The meta-heuristic algorithm is a mathematical method inspired by biological behavior and some physical phenomena in nature. These methods are used to solve complex problems in real life [11]. The meta-heuristic algorithm has the advantages of a simple structure, fewer hyperparameters, and being easy to understand. Based on these advantages, it has become an important method for solving optimization problems today. Meta-heuristic algorithms can be divided into four categories: swarm intelligence algorithms [12], evolutionary algorithms [13], human-based algorithms [14], and physical and chemical-based algorithms [15]. The swarm intelligence algorithms simulate the behavior of animal populations. Each individual in the population is a candidate solution. They are randomly explored in the search space, which effectively avoids the possibility of entering the local optimum. Some classic and newly proposed swarm intelligence algorithms include Golden Jackal Optimization (GJO) [16], the Gray Wolf Optimization Algorithm (GWO) [17], and the Poplar Optimization Algorithm (POA) [18]. Some classical evolutionary algorithms include Genetic Algorithms [19] and the Biogeographic-Based Optimization Algorithm (BBO) [20], etc. Meta-heuristic algorithms can effectively enhance the efficiency of engineering practice. This has attracted more and more scholars’ attention.
In many industrial problems, specific solution functions can be established with mathematical models. How to solve complex function optimization problems has become a focus of current research. For the optimization problems with fewer constraints and dimensions, the traditional mathematical methods can achieve outstanding results. Although meta-heuristic algorithms have very good performance in dealing with complex and high dimensional optimization problems, the convergence speed of simple meta-heuristic algorithms still needs to be improved. Sometimes with a single meta-heuristic algorithm, it is difficult to get rid of the attraction of local extremum. To further enhance the optimization capability of meta-heuristic algorithms, many experts try to use different strategies to improve them. Zhongzhou Du introduced Levy flight in the iterative process of PSO, which accelerated the optimization speed of PSO [21]. Hang Yu used a chaotic mapping strategy to improve the GWO initialization method, which improved the accuracy of the GWO solution [22]. Xiaoling Yuan introduced adaptive weight into the PSO algorithm, which greatly strengthened its global search capability [23]. So-Youn Park combined CS with oppositional learning, making the CS converge faster [24]. W. Xie used the golden sine operator to improve the Black Hole algorithm (BH) [25], giving it better exploration performance [26].
Xie et al. proposed a new meta-heuristic algorithm called the tuna swarm optimization algorithm (TSO) [27] in 2021 after observing the foraging behavior of tuna swarms. There are two common foraging strategies for tuna swarms: the spiral foraging strategy and the parabolic foraging strategy. TSO searches for the global optimal value by simulating how the common individuals in the tuna swarm follow the optimal individual to attack prey. Comparing TSO with the Whale Optimization Algorithm (WOA) [28], the Salp Swarm Algorithm (SSA) [29], and some other advanced algorithms, the comparison results indicate that TSO outperforms the competitors. The tuna swarm optimization algorithm has the advantages of fewer parameters and easy implementation. Therefore, after it was proposed, TSO was widely studied and applied to engineering practice. Although TSO performs very well in many engineering practices, it still has some shortcomings. Firstly, TSO cannot efficiently search for the global optimal value; it is easily attracted by local extrema. Secondly, TSO does not converge fast enough. Finally, the followers of the optimal individual blindly follow the latter; there is a lack of local exploitation. At present, Hu et al. have used Gaussian mutation to improve the TSO algorithm and have applied the improved algorithm to photovoltaic power prediction [30]. Kumar et al. improved the TSO algorithm by using chaotic maps to increase the diversity of the algorithm population [31]. This paper proposes an improved tuna swarm optimization algorithm (CLTSO) based on the Circle chaotic map [32], the Levy flight operator, and a nonlinear adaptive operator. The innovations made in this article are summarized as follows:
(1)
At the CLTSO initialization stage, this paper introduces the Circle chaotic map to uniformly generate individual positions. Because the initial positions of tuna individuals are randomly generated, the initial tuna individuals are likely to cluster together. In this paper, the emergence of the initial individual aggregation problem can be effectively solved by introducing the Circle chaotic map.
(2)
In CLTSO, the optimal individual and its follower positions are updated by using Levy flight strategy. Because Levy flight uses a combination of long and short steps, it can significantly enlarge the search scope of CLTSO.
(3)
In the iterative process of CLTSO, a nonlinear convergence factor is introduced to balance the exploration and the exploitation. In CLTSO, a large convergence factor in the initial iteration can bring the common individuals closer to the optimal individuals. A smaller convergence factor at the end of iteration increases the capability of followers to explore local scope.
This article covers the following aspects: Section 1 introduces some related content of the meta-heuristic algorithm and the tuna swarm optimization algorithm. Section 2 reviews the two foraging strategies of the original tuna swarm optimization algorithm. Section 3 introduces the improved Circle chaotic map strategy, the Levy flight operator, and the nonlinear adaptive weight operator, and the usage of these operators to improve TSO. Section 4 compares CLTSO with some classical and advanced meta-heuristics and makes some experimental analysis. Section 5 modifies the BP neural network based on CLTSO, and then tests the new model by using three popular datasets. Finally, Section 6 summarizes the content of the article.
The main mathematical symbols mentioned in this paper are shown in Table 1.

2. An Overview of Tuna Optimization Algorithms

Tuna is the top predator in the ocean. Although tuna swim very fast, some small prey are more flexible than tuna. Therefore, in the process of predation, tuna often choose group cooperation to capture prey. The tuna swarm has two efficient predatory strategies, namely, the spiral foraging strategy and the parabolic foraging strategy. When the tuna swarm uses the parabolic foraging strategy, each tuna will follow the previous individual closely. The tuna swarm forms a parabola to surround the prey. When the tuna swarm adopts the spiral foraging strategy, the tuna swarm will aggregate into spiral shapes and drive prey to shallow water areas. Prey is more likely to be captured. By observing these two foraging behaviors of tuna swarm, researchers proposed a new swarm intelligence optimization called TSO.

2.1. Population Initialization

There are $NP$ tunas in a tuna swarm. At the swarm initialization phase, the tuna swarm optimization algorithm randomly generates the initial swarm in the search space. The mathematical formula for initializing tuna individuals is as follows:
$$X_i^{\mathrm{int}} = \mathrm{rand} \cdot (ub - lb) + lb = [x_{i1}, x_{i2}, \ldots, x_{ij}], \quad i = 1, 2, \ldots, NP, \; j = 1, 2, \ldots, Dim \qquad (1)$$
where $X_i^{\mathrm{int}}$ is the $i$-th tuna, $ub$ and $lb$ are the upper and lower boundaries of the range of tuna exploration, and $\mathrm{rand}$ is a random variable with uniform distribution from 0 to 1. In particular, each individual $X_i^{\mathrm{int}}$ in the tuna swarm represents a candidate solution for TSO. Each individual tuna consists of a set of $Dim$-dimensional numbers.
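As a concrete illustration, the initialization of Equation (1) can be sketched in a few lines of NumPy; the function and variable names (init_population, n_pop, lb, ub) are ours, not from the original paper:
```python
import numpy as np

def init_population(n_pop, dim, lb, ub, rng=None):
    """Randomly initialize n_pop tuna positions inside [lb, ub]^dim (Eq. (1))."""
    rng = np.random.default_rng() if rng is None else rng
    # rand is uniform in [0, 1); each row is one candidate solution
    return rng.random((n_pop, dim)) * (ub - lb) + lb

# Example: 30 tunas in a 10-dimensional search space bounded by [-100, 100]
pop = init_population(30, 10, -100.0, 100.0)
```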

2.2. Parabolic Foraging Strategy

Herring and eel are the main food sources of tuna. When they encounter predators, they will use their speed advantage to constantly change their direction of swimming. It is very difficult for predators to catch them. Because tuna is less agile than their prey, the tuna swarm will take a cooperative approach to attack the prey. The tuna swarm will use the prey as a reference point to keep chasing prey. During predation, each tuna follows the previous individual, and the whole tuna swarm forms a parabola to surround the prey. In addition, the tuna swarm also uses a spiral foraging strategy. Assuming that the probability of the tuna swarm choosing either strategy is 50%, the mathematical model of parabolic foraging of the tuna swarm is as follows:
$$X_i^{t+1} = \begin{cases} X_{best}^t + \mathrm{rand} \cdot (X_{best}^t - X_i^t) + TF \cdot p^2 \cdot (X_{best}^t - X_i^t), & \text{if } \mathrm{rand} < 0.5 \\ TF \cdot p^2 \cdot X_i^t, & \text{if } \mathrm{rand} \ge 0.5 \end{cases} \qquad (2)$$
$$p = \left(1 - \frac{t}{t_{\max}}\right)^{t/t_{\max}} \qquad (3)$$
where $t$ indicates the current iteration, $t_{\max}$ is the preset maximum number of iterations, and $TF$ is a random value of 1 or −1.
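The parabolic update of Equations (2) and (3) for a single tuna might look like the following sketch (a simplified NumPy version under our own naming; x_i and x_best are assumed to be NumPy arrays):
```python
import numpy as np

def parabolic_update(x_i, x_best, t, t_max, rng=None):
    """Parabolic foraging step for one tuna (Eqs. (2)-(3))."""
    rng = np.random.default_rng() if rng is None else rng
    p = (1.0 - t / t_max) ** (t / t_max)          # Eq. (3)
    tf = rng.choice([-1.0, 1.0])                   # TF is randomly 1 or -1
    if rng.random() < 0.5:
        return x_best + rng.random(x_i.shape) * (x_best - x_i) + tf * p**2 * (x_best - x_i)
    return tf * p**2 * x_i
```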

2.3. Spiral Foraging Strategy

Besides the parabolic foraging strategy, there is another efficient cooperative foraging strategy called the spiral foraging strategy. While chasing the prey, most tuna cannot choose the right direction, but a small number of tuna can guide the swarm to swim in the right direction. When a small group of tuna starts chasing the prey, the nearby tuna will follow this small group of individuals. Eventually, the entire tuna swarm forms a spiral formation to catch the prey. When the tuna swarm adopts the spiral foraging strategy, each individual exchanges information with the best individual or with adjacent individuals in the swarm. Sometimes the best individual is not able to lead the swarm to capture prey effectively. The tuna will then select a random individual in the swarm to follow. The mathematical formula of the spiral foraging strategy is as follows:
$$X_i^{t+1} = \begin{cases} \alpha_1 \left( X_{rand}^t + \tau \left| X_{rand}^t - X_i^t \right| \right) + \alpha_2 X_i^t, & i = 1 \\ \alpha_1 \left( X_{rand}^t + \tau \left| X_{rand}^t - X_i^t \right| \right) + \alpha_2 X_{i-1}^t, & i = 2, 3, \ldots, NP \end{cases} \quad \text{if } \mathrm{rand} < \frac{t}{t_{\max}}$$
$$X_i^{t+1} = \begin{cases} \alpha_1 \left( X_{best}^t + \tau \left| X_{best}^t - X_i^t \right| \right) + \alpha_2 X_i^t, & i = 1 \\ \alpha_1 \left( X_{best}^t + \tau \left| X_{best}^t - X_i^t \right| \right) + \alpha_2 X_{i-1}^t, & i = 2, 3, \ldots, NP \end{cases} \quad \text{if } \mathrm{rand} \ge \frac{t}{t_{\max}} \qquad (4)$$
where $X_i^{t+1}$ denotes the $i$-th tuna in the $(t+1)$-th iteration, $X_{best}^t$ is the current best individual, and $X_{rand}^t$ is a reference point randomly selected from the tuna swarm. $\alpha_1$ is the trend weight coefficient that controls the tuna individual swimming toward the optimal individual or a randomly selected adjacent individual. $\alpha_2$ is the trend weight coefficient that controls the tuna individual swimming toward the individual in front of it. $\tau$ is the distance parameter that controls the distance between the tuna individual and the optimal individual or the randomly selected reference individual. Their mathematical models are as follows:
$$\alpha_1 = a + (1 - a)\frac{t}{t_{\max}} \qquad (5)$$
$$\alpha_2 = (1 - a) - (1 - a)\frac{t}{t_{\max}} \qquad (6)$$
$$\tau = e^{bl}\cos(2\pi b) \qquad (7)$$
$$l = e^{3\cos\left(\left(\left(t_{\max} + 1/t\right) - 1\right)\pi\right)} \qquad (8)$$
where a is a constant to measure the degree of tuna following and b is a random number uniformly distributed in the range of [0, 1].
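Putting Equations (4)–(8) together, one possible per-individual sketch of the spiral update is shown below; the default value a = 0.7, the function and variable names, and the assumption that t starts at 1 are ours:
```python
import numpy as np

def spiral_update(pop, i, x_best, t, t_max, a=0.7, rng=None):
    """Spiral foraging step for tuna i (Eqs. (4)-(8)); pop is the (NP, Dim) swarm."""
    rng = np.random.default_rng() if rng is None else rng
    alpha1 = a + (1 - a) * t / t_max                              # Eq. (5)
    alpha2 = (1 - a) - (1 - a) * t / t_max                        # Eq. (6)
    b = rng.random()
    l = np.exp(3 * np.cos(((t_max + 1 / t) - 1) * np.pi))         # Eq. (8), assumes t >= 1
    tau = np.exp(b * l) * np.cos(2 * np.pi * b)                   # Eq. (7)
    # choose the reference point: a random individual or the best individual (Eq. (4))
    ref = pop[rng.integers(len(pop))] if rng.random() < t / t_max else x_best
    prev = pop[i] if i == 0 else pop[i - 1]                       # follow the previous tuna
    return alpha1 * (ref + tau * np.abs(ref - pop[i])) + alpha2 * prev
```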

2.4. Pseudocode of TSO

The pseudocode of the original TSO is displayed in Algorithm 1. The flow chart of TSO is displayed in Figure 1.
Algorithm 1 Pseudocode of the TSO Algorithm
Initialization: Set parameters NP, Dim, a, z, and T_Max
 Initialize the positions of the tuna X_i (i = 1, 2, …, NP) by (1)
 Counter t = 0
while t < T_Max do
  Calculate the fitness values of all tuna
  Update the position and value of the best tuna X_best^t
  for (each tuna) do
   Update α_1, α_2, p by (5), (6), (3)
   if (rand < z) then
    Update X_i^{t+1} by (1)
   else if (rand ≥ z) then
    if (rand < 0.5) then
     Update X_i^{t+1} by (4)
    else if (rand ≥ 0.5) then
     Update X_i^{t+1} by (2)
  t = t + 1
return the best fitness value f(X_best) and the best tuna X_best
In the iterative process of the TSO algorithm, each tuna will randomly choose to perform either the spiral foraging strategy or the parabolic foraging strategy. Tuna will also generate new individuals in the search range according to probability Z . Therefore, TSO will choose different strategies according to Z when generating new individual positions. During the execution of the TSO algorithm, all tuna individuals in the population are constantly updated until the number of iterations reaches a predetermined value. Finally, the TSO algorithm returns the optimal individual in the population and its optimal value.
The following advantages of TSO can be seen from Algorithm 1: (1) The TSO algorithm has fewer adjustable parameters, which is beneficial to the implementation of the algorithm. (2) This algorithm will save the position of the best tuna individual in each iteration; even if the quality of the candidate solution decreases, it will not affect the location of the optimal value. (3) The TSO algorithm can keep the balance between exploitation and exploration by selecting two foraging strategies.

3. The Improved Tuna Swarm Optimization Algorithm

This section introduces an improved nonlinear tuna swarm optimization algorithm, CLTSO, based on Circle chaotic map and Levy flight operator. Firstly, the population initialization using Circle chaotic map can increase the diversity of the swarm. The combination of TSO and Levy flight gives the algorithm an outstanding global exploration capability. Furthermore, a nonlinear adaptive weight operator is introduced to modify the weight coefficient of tuna following behavior in CLTSO. In CLTSO, the relationship between global exploration and local exploitation in the iterative process are well balanced.

3.1. Circle Chaotic Map

Many changes in nature are not random. They seem to conform to some special laws. Such a phenomenon is called chaos. Many movements in nature are chaotic [33]. Chaos is a random behavior, but it conforms to certain laws, which enables this operator to display more states in the search space of TSO [34].
Because the position of the tuna is randomly generated in the initialization phase of the tuna algorithm, it is easy to make the initial tuna gather at the same place. The initial tuna swarm does not fully cover the search space, resulting in a small difference between tuna individuals. This greatly reduces the global searching capability of the algorithm. The current popular chaotic mapping strategies are as follows: Tent [35], Logistic [36], Circle [37], Chebyshev [38], Sinusoidal [39], and Iterative chaotic map [40]. Studying the related literature on the above chaotic mapping strategies, we found that Circle chaotic map has a more stable chaotic value and has a higher coverage rate in the search space [41]. However, our experiments indicate that the distribution of Circle chaotic value is still not uniform. The chaotic values of the original Circle operator are clustered in the scope of [0.2, 0.5]. To make the chaotic value distribution more uniform, we improved the mathematical model of the Circle chaotic mapping strategy.
The mathematical modeling of the original Circle chaotic map is as follows:
$$x_{i+1} = \operatorname{mod}\left(x_i + 0.2 - \frac{0.5}{2\pi}\sin(2\pi x_i),\, 1\right) \qquad (9)$$
where $x_i$ is the $i$-th chaotic particle and $x_{i+1}$ is the $(i+1)$-th chaotic particle. The scatter plot and frequency histogram of the initial candidate solutions of the original Circle chaotic mapping operator are displayed in subgraphs (a) and (c) of Figure 2. In the Circle chaotic map experiment, the total number of particles is 2000. Chaotic particles denote the initial candidate solutions of TSO.
As can be seen from subgraphs (a) and (c) of Figure 2, the chaotic particles are concentrated in the range of [0.2, 0.5] in the chaotic sequence initialized by Circle chaotic map. However, the initial candidate solutions are too concentrated, which will greatly reduce the population diversity of TSO. Therefore, the original Circle chaotic map is improved in this paper [42]. The mathematical modeling of the improved Circle chaotic map is as follows:
$$x_{i+1} = \operatorname{mod}\left(3.85\,x_i + 0.4 - \frac{0.7}{3.85\pi}\sin(3.85\pi x_i),\, 1\right) \qquad (10)$$
where $x_i$ is the $i$-th chaotic particle and $x_{i+1}$ is the $(i+1)$-th chaotic particle.
The scatter plot and frequency histogram of the initial candidate solution of the improved Circle chaotic map operator are displayed in subgraphs (b) and (d) of Figure 2.
From (b) and (d), we can clearly see that, compared to the original Circle chaotic map, the particle distribution of the improved Circle chaotic map is more uniform. Each candidate solution particle of the algorithm is explored in the search space. Therefore, using the improved Circle chaotic map operator to modify TSO can obtain more uniform candidate solutions. The initial tuna individuals uniformly distributed in the search space of the algorithm can significantly increase the population diversity of TSO.
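A minimal sketch of swarm initialization driven by the improved Circle map of Equation (10); the seed value x0 = 0.3 and the function names are illustrative assumptions:
```python
import numpy as np

def improved_circle_sequence(n, x0=0.3):
    """Generate n chaotic values in [0, 1) with the improved Circle map (Eq. (10))."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = np.mod(3.85 * x + 0.4 - (0.7 / (3.85 * np.pi)) * np.sin(3.85 * np.pi * x), 1.0)
        xs[i] = x
    return xs

def chaotic_init_population(n_pop, dim, lb, ub):
    """Map chaotic values from [0, 1) into the search space instead of uniform rand."""
    chaos = improved_circle_sequence(n_pop * dim).reshape(n_pop, dim)
    return chaos * (ub - lb) + lb
```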

3.2. Levy Flight

The movement and trajectory of many small animals and insects in life have the characteristics of Levy flight. These animals and insects include ants and flies. Many animals in nature use Levy flight strategy as an ideal way of foraging. By studying this phenomenon, French mathematician Paul Pierre Levy proposed the mathematical model of Levy flight [43]. Levy flight is an operator conforming to Levy distribution. The step size of Levy flight is random and mixed with long and short distances, which makes it easier to search over a large scale and with unknown scope compared to Brownian motion [44]. In the searching process, the Levy operator often uses short steps to walk and occasionally uses long steps to jump, which allows it to efficiently get rid of the effects of local attraction points. Therefore, in the random searching problem, many heuristic algorithms adopt this strategy to modify the iterative process, which efficiently helps the algorithm to get rid of the influence of local attraction points [45,46,47].
The Levy distribution can be expressed by the following mathematical model:
$$L(s) \sim |s|^{-1-\beta} \qquad (11)$$
where $\beta$ is in the range of (0, 2), $s$ is the step size, and $L(s)$ is the probability density of a step of size $s$ according to the Levy model. The mathematical model of the Levy distribution is as follows:
$$L(s, \gamma, \mu) = \begin{cases} \sqrt{\dfrac{\gamma}{2\pi}}\exp\left[-\dfrac{\gamma}{2(s-\mu)}\right]\dfrac{1}{(s-\mu)^{3/2}}, & 0 < \mu < s < \infty \\ 0, & \text{otherwise} \end{cases} \qquad (12)$$
where $\mu$ represents the minimum step size with $\mu > 0$, and $\gamma$ is the scale parameter. When $s \to \infty$, Equation (12) can be written in the following form:
$$L(s, \gamma, \mu) \approx \sqrt{\frac{\gamma}{2\pi}}\,\frac{1}{s^{3/2}} \qquad (13)$$
Usually, scholars approximate $L(s)$ with the following formula:
$$L(s) \approx \frac{\beta\,\Gamma(\beta)\sin(\pi\beta/2)}{\pi |s|^{1+\beta}}, \quad s \to \infty \qquad (14)$$
where $\Gamma$ is the gamma function. Its mathematical model is as follows:
$$\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\,dt \qquad (15)$$
Due to the high complexity of the Levy distribution, researchers often use the Mantegna [48] algorithm to simulate the Levy flight step size $s$, which is defined as follows:
$$s = \frac{\mu}{|\nu|^{1/\beta}} \qquad (16)$$
where $\mu$ and $\nu$ are defined as follows:
$$\mu \sim N(0, \sigma_\mu^2) \qquad (17)$$
$$\nu \sim N(0, \sigma_\nu^2) \qquad (18)$$
$$\sigma_\mu = \left\{ \frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\left[(1+\beta)/2\right]\,\beta\,2^{(\beta-1)/2}} \right\}^{1/\beta}, \quad \sigma_\nu = 1 \qquad (19)$$
where the value of $\beta$ is usually 1.5.
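The Mantegna sampler of Equations (16)–(19) can be sketched directly in NumPy; levy_step and its defaults are our naming, not code from the paper:
```python
import numpy as np
from math import gamma, sin, pi

def levy_step(size, beta=1.5, rng=None):
    """Draw Levy-flight step sizes with the Mantegna algorithm (Eqs. (16)-(19))."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_mu = (gamma(1 + beta) * sin(pi * beta / 2) /
                (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, sigma_mu, size)   # mu ~ N(0, sigma_mu^2), Eq. (17)
    nu = rng.normal(0.0, 1.0, size)        # nu ~ N(0, 1), Eqs. (18)-(19)
    return mu / np.abs(nu) ** (1 / beta)   # Eq. (16)
```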
To show the global exploration capability of Levy flight more intuitively, this paper compares Levy flight with random walk strategy. The simulation steps of Levy flight and random walk are set to 300. The comparison results are presented in Figure 3.
Figure 3 shows that Levy flight has a larger search range than the random walk. The jump points of the random walk strategy are concentrated, while the jump points of the Levy flight strategy are widely distributed. Figure 3 fully demonstrates the characteristic of Levy flight that allows it to explore the whole search space more effectively.

3.3. Nonlinear Adaptive Weight

How to balance the exploration capability and the exploitation capability of a swarm intelligence optimization algorithm is very important. Weight parameters play an important role in the TSO algorithm. When the tuna choose the spiral foraging strategy, the weight parameters $\alpha_1$ and $\alpha_2$ in Equations (5) and (6) determine the degree to which tuna individuals follow the optimal individual to forage. This reflects the optimization process of the algorithm. Similarly, in the parabolic foraging strategy, the weight parameter $p$ in Equation (2) determines the degree to which ordinary individuals follow the optimal individual. When the weight parameter is large, the degree of tuna following the optimal individual is higher, which makes the whole tuna population better explore the whole space. When the weight parameter is small, ordinary tuna individuals do not follow the optimal individuals. They swim around a small part of the space, which facilitates the ordinary tuna individual exploiting the field around itself. To sum up, the exploration and the exploitation capabilities of TSO depend on the changes of the weight parameters $\alpha_1$, $\alpha_2$, and $p$.
From Equations (5) and (6), it can be seen that the weight parameters $\alpha_1$ and $\alpha_2$ change linearly. However, the optimization process of TSO is very complex, and linear changes of the weight parameters $\alpha_1$ and $\alpha_2$ cannot reflect the actual optimization process of the algorithm. Nowadays, in order to overcome the drawbacks caused by linear control weights, many scholars use nonlinear adaptive weights to improve swarm intelligence optimization algorithms [49,50,51]. Repeated experiments indicate that the optimization effect of the nonlinear adaptive weight strategy is better than that of the linear weight strategy. Therefore, two improved nonlinear weight parameters $\alpha_{1i}$ and $\alpha_{2i}$ are introduced in this paper. Their mathematical models are as follows:
$$\alpha_{1i}(t) = \alpha_{1ini} - (\alpha_{1ini} - \alpha_{1fin})\sin\left(\frac{t}{\mu T_{Max}}\pi\right) \qquad (20)$$
$$\alpha_{2i}(t) = \alpha_{2ini} - (\alpha_{2ini} - \alpha_{2fin})\sin\left(\frac{t}{\mu T_{Max}}\pi\right) \qquad (21)$$
where $\mu = 2$, $\alpha_{1ini}$ denotes the initial value of $\alpha_1$, $\alpha_{1fin}$ denotes the final value of $\alpha_1$, $\alpha_{2ini}$ denotes the initial value of $\alpha_2$, and $\alpha_{2fin}$ denotes the final value of $\alpha_2$. We compared the improved weight parameters $\alpha_{1i}$ and $\alpha_{2i}$ with the original weight parameters $\alpha_1$ and $\alpha_2$. The results are displayed in Figure 4. In the experiment, $T_{Max} = 500$.
It can be clearly seen from Figure 4 that the improved weight parameters α 1 i and α 2 i change rapidly in the early stage, which makes ordinary tuna individuals more closely follow the optimal individual. It increases the global exploration capability of TSO. The weight parameters α 1 i and α 2 i change slowly in the late stage, which enables tuna individuals to explore their surrounding areas. It increases the local search capability of TSO.
In the parabolic foraging strategy, a new nonlinear weight parameter $p_i$ is proposed. Its mathematical model is as follows:
$$p_i(t) = p_{ini} - (p_{ini} - p_{fin})\sin\left(\frac{t}{\mu T_{Max}}\pi\right) \qquad (22)$$
where $p_{ini}$ represents the initial value of $p$ and $p_{fin}$ represents the final value of $p$. We compare the improved weight parameter $p_i^2$ with the original weight parameter $p^2$. The comparison curves are displayed in Figure 5. In the comparison curves, $T_{Max} = 500$.
As can be seen from Figure 5, the improved weight parameter $p_i^2$ decreases rapidly in the early stage, so a tuna individual can follow the individual in front of it more closely. This increases the global exploration capability of TSO. The improved weight parameter $p_i^2$ decreases slowly in the late iterations, so tuna individuals can swim and explore in the surrounding space. This increases the local exploitation capability of TSO.
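Since Equations (20)–(22) share the same form, a single helper can generate all three schedules; the initial and final values in the usage example are placeholders, not the paper's settings:
```python
import numpy as np

def nonlinear_weight(t, t_max, w_ini, w_fin, mu=2.0):
    """Nonlinear schedule shared by alpha1_i, alpha2_i and p_i (Eqs. (20)-(22))."""
    return w_ini - (w_ini - w_fin) * np.sin(t / (mu * t_max) * np.pi)

# Example: a weight decaying nonlinearly from 0.7 to 0.3 over 500 iterations
alpha1_i = [nonlinear_weight(t, 500, 0.7, 0.3) for t in range(500)]
```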

3.4. Improved Nonlinear Tuna Swarm Optimization Algorithm Based on Circular Chaotic Map and Levy Flight Operator

The TSO algorithm usually uses random data to initialize the population when solving function optimization problems, which may lead to the phenomenon that candidate solutions are clustered together. This phenomenon leads to poor population diversity, which eventually leads to poor optimization results. The Circle chaotic map has the advantages of randomness and ergodicity. In the optimization process of TSO, these advantages make it easier for the algorithm to escape the attraction of local extrema and help the algorithm maintain the diversity of the swarm. Therefore, an improved Circle chaotic map strategy is introduced to initialize the tuna swarm. The swarm initialization mechanism is upgraded from Equation (1) to Equation (10).
For the swarm intelligence optimization algorithm, how to get rid of the influence of local attraction points is a very important issue. The Levy flight strategy is an operator that can strengthen the global capability of TSO. This mechanism often uses short steps to walk and occasionally uses long steps to jump. The low-frequency use of long step length can ensure that TSO can extensively search the entire search area. The high-frequency use of short step length can ensure that TSO can locally search its nearest scope. Therefore, this paper introduces the Levy operator to modify the swarm update strategy of TSO. Considering that the jump of the Levy operator is too intense, and it may jump out of the main range in the process of operation, this paper adds step control parameters on the basis of the original Levy operator. The small step size control parameters can control the search of TSO in a small scope, which can enhance the local exploration ability of TSO without weakening the global exploration ability. The step size control parameters with large values can control the exploration of TSO in a large scope, which is conducive to solving the complex optimization problem.
The original TSO designed the parabolic foraging strategy and the spiral foraging strategy to balance the global exploration and the local exploitation capabilities of TSO. However, in the spiral foraging strategy, the linear changes of the weight parameters $\alpha_1$ and $\alpha_2$ cannot solve actual complex problems well. In the parabolic foraging strategy, the change of the weight parameter $p$ cannot effectively balance the global and local exploration abilities of TSO. This paper uses nonlinear adaptive weights to modify the spiral foraging strategy and the parabolic foraging strategy in TSO. The mathematical model of the weight parameter $p_i$ is upgraded from Equation (3) to Equation (22), and the mathematical models of $\alpha_{1i}$ and $\alpha_{2i}$ are upgraded from Equations (5) and (6) to Equations (20) and (21), respectively.
The mathematical model of the improved spiral foraging strategy based on the Levy operator and nonlinear adaptive weight strategy is as follows:
$$X_i^{t+1} = \begin{cases} \alpha_{1i} \left( X_{rand}^t + L_\tau \left| X_{rand}^t - X_i^t \right| \right) + \alpha_{2i} X_i^t, & i = 1 \\ \alpha_{1i} \left( X_{rand}^t + L_\tau \left| X_{rand}^t - X_i^t \right| \right) + \alpha_{2i} X_{i-1}^t, & i = 2, 3, \ldots, NP \end{cases} \quad \text{if } \mathrm{rand} < \frac{t}{t_{\max}}$$
$$X_i^{t+1} = \begin{cases} \alpha_{1i} \left( X_{best}^t + L_\tau \left| X_{best}^t - X_i^t \right| \right) + \alpha_{2i} X_i^t, & i = 1 \\ \alpha_{1i} \left( X_{best}^t + L_\tau \left| X_{best}^t - X_i^t \right| \right) + \alpha_{2i} X_{i-1}^t, & i = 2, 3, \ldots, NP \end{cases} \quad \text{if } \mathrm{rand} \ge \frac{t}{t_{\max}} \qquad (23)$$
where $L_\tau$ is an improved distance control parameter combined with the Levy operator. Its mathematical model is as follows:
$$L_\tau = e^{\alpha\, Levy(s)\, l}\cos\left(2\pi\,\alpha\, Levy(s)\right) \qquad (24)$$
where $Levy(s)$ is the step size of the Levy operator and $\alpha$ is the step size control coefficient. In this article, $\alpha = 0.01$. The mathematical model of the improved parabolic foraging strategy based on the Levy operator and the nonlinear adaptive weight strategy is as follows:
$$X_i^{t+1} = \begin{cases} X_{best}^t + \alpha\,Levy(s)\cdot(X_{best}^t - X_i^t) + TF \cdot p_i^2 \cdot (X_{best}^t - X_i^t), & \text{if } \mathrm{rand} < 0.5 \\ TF \cdot p_i^2 \cdot X_i^t, & \text{if } \mathrm{rand} \ge 0.5 \end{cases} \qquad (25)$$
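A sketch of the improved update rules of Equations (23)–(25), reusing the levy_step helper from the Mantegna sketch in Section 3.2; all names and defaults are ours:
```python
import numpy as np
# levy_step(...) is the Mantegna sampler sketched in Section 3.2

def improved_parabolic_update(x_i, x_best, p_i, alpha=0.01, rng=None):
    """Improved parabolic foraging step with a Levy perturbation (Eq. (25))."""
    rng = np.random.default_rng() if rng is None else rng
    tf = rng.choice([-1.0, 1.0])
    if rng.random() < 0.5:
        step = alpha * levy_step(x_i.shape)            # Levy(s) scaled by alpha
        return x_best + step * (x_best - x_i) + tf * p_i**2 * (x_best - x_i)
    return tf * p_i**2 * x_i

def improved_distance_factor(l, alpha=0.01):
    """L_tau of Eq. (24): the spiral distance term driven by a Levy step."""
    step = alpha * float(levy_step(1)[0])
    return np.exp(step * l) * np.cos(2 * np.pi * step)
```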
Based on the above improvement strategies, an improved TSO is proposed, called CLTSO. The pseudocode of CLTSO is shown in Algorithm 2, and the process diagram of CLTSO is shown in Figure 6.
Algorithm 2 Pseudocode of the CLTSO Algorithm
Initialization: Set parameters NP, Dim, a, z, and T_Max. Initialize the positions of the tuna X_i (i = 1, 2, …, NP) by (10)
 Counter t = 0
while t < T_Max do
  Calculate the fitness values of all tuna
  Update the position and value of the best tuna X_best^t
  for (each tuna) do
   Update α_1i, α_2i, p_i by (20), (21), (22)
   if (rand < z) then
    Update X_i^{t+1} by (10)
   else if (rand ≥ z) then
    if (rand < 0.5) then
     Update X_i^{t+1} by (23)
    else if (rand ≥ 0.5) then
     Update X_i^{t+1} by (25)
  t = t + 1
return the best fitness value f(X_best) and the best tuna X_best
Comparing Algorithms 1 and 2, it is clear that the overall structures are similar; only the initialization and position update strategies have been changed. Therefore, the improved operators proposed in this paper do not destroy the structural simplicity of the original TSO algorithm.

3.5. Time Complexity Analysis

Time complexity is an important measurement tool for evaluating the efficiency of an algorithm. In much of the research literature, it is represented by the symbol O. The time complexity is closely related to the number of instruction operations of the algorithm. The time complexity of TSO is closely related to iteration times, location update mechanism, and the evaluation times of fitness value function. The time complexity of CLTSO is closely related to the number of iterations, the number of fitness function evaluations, and the improvement operator. To compare the time cost differences between TSO and CLTSO, the time complexity of TSO and CLTSO is evaluated as follows. The time complexity of each operation instruction in the TSO is discussed below.
  • Initialize $N$ individuals in the TSO, each with a dimension of $D$, so $N \cdot D$ calculations are required.
  • Calculate the fitness value of each individual in the tuna population and select the optimal individual in the current population. Therefore, it needs to be calculated $[N(N-1)]/2$ times.
  • Update the values of parameters $\alpha_1$, $\alpha_2$, and $p$, which are computed 3 times.
  • Update all tuna individuals in the search space, which are computed $N \cdot D$ times.
  • Return the best individual $X_{best}$ in the tuna population, which requires this code to be executed 1 time.
The instructions in steps 2 to 4 need to be run iteratively $T_{Max}$ times. Combining the above analysis, the time complexity of TSO can be expressed as $O(\mathrm{TSO}) = T_{Max}\left[(N^2 - N)/2 + N D + 3\right]$.
The time complexity of each operation instruction in CLTSO is analyzed as follows.
1. Initialize $N$ individuals in the CLTSO, each with a dimension of $D$, so $N \cdot D$ calculations are required.
2. Calculate the fitness value of each individual in the tuna population and select the optimal individual in the current population. Therefore, it needs to be calculated $[N(N-1)]/2$ times.
3. Update the values of parameters $\alpha_{1i}$, $\alpha_{2i}$, and $p_i$, which needs to be calculated 3 times.
4. Update all tuna individuals in the search space, which needs to be calculated $N \cdot D$ times.
5. When each individual in the tuna population is updated, the Levy operator needs to be calculated 1 time. Therefore, it needs to be run $N$ times in total.
6. Return the best individual $X_{best}$ in the tuna population, which requires this code to be executed 1 time.
Steps 2 to 5 require a total of $T_{Max}$ iterations. Therefore, the time complexity of CLTSO can be expressed as $O(\mathrm{CLTSO}) = T_{Max}\left[(N^2 - N)/2 + N D + 3 + N\right]$.
Compared with the tuna swarm optimization algorithm, the three operators proposed in this paper slightly increase the time cost. CLTSO and TSO have very close time complexity.

4. Simulation Experiments and Results Analysis

To verify the effectiveness of the proposed CLTSO in solving different optimization problems, in this section, 22 benchmark functions are applied to design a series of experiments to compare CLTSO with other famous meta-heuristic algorithms. In addition, to illustrate the outstanding performance of CLTSO, we also compared it against the original tuna swarm optimization algorithm (TSO), the improved TSO based on the Levy flight operator (LTSO), and the improved TSO based on the Circle chaotic map and nonlinear adaptive weights (CTSO). Finally, this section provides a detailed analysis of the experimental results.

4.1. Benchmark Function

Twenty-two different types of benchmark functions are selected to evaluate the capability of CLTSO, which cover unimodal, multimodal, fixed-dimension multimodal, and combined functions from CEC2014 [52]. Through a survey of the relevant literature, we found that CEC2014 is a classic test suite, so it can be used as a benchmark to evaluate the performance of the proposed algorithm. The mathematical models are given in Table 2. F1~F7 are unimodal functions, which are used to evaluate the convergence rate of the algorithms. F8~F14 are multimodal functions, which are applied to verify whether the algorithms have good global exploration capability. F15~F22 are CEC2014 functions, which are applied to test the comprehensive capability of these algorithms.

4.2. Comparison Algorithm and Parameter Setting

Based on these 22 benchmark functions, a series of comparative experiments are designed to test the selected algorithms, which include Accelerated Particle Swarm Optimization (APSO) [53], WOA, the Fitness-Distance Balance based adaptive guided differential evolution (FDB-AGDE) algorithm [54], the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [55], TSO, and CLTSO. The parameter values of the algorithms involved in these experiments are shown in Table 3. The symbol '~' indicates that the algorithm does not set parameter values. Functions F1~F13 are tested in 30 and 100 dimensions, and F14 is tested in its suitable dimension. The eight CEC2014 benchmark functions are tested in 50 dimensions. The maximum number of evaluations for F1~F14 is 1000. Because the CEC benchmark functions are complex, the number of evaluations for the eight CEC2014 functions is set to 5000 without losing representativeness. The swarm size of each algorithm is 30. To avoid accidental interference, we run each algorithm 30 times independently in each experiment.

4.3. Results and Analysis

Table 4 shows the experimental results of CLTSO and other algorithms in low dimensional benchmark functions (dimension = 30), where Std is standard deviation and Mean is mean value. Mean represents the solution accuracy of these algorithms. Std reflects the stability of these algorithms in the solution process. F 14 is tested in its own dimension. Table 5 displays the experimental results of CLTSO and other algorithms in high dimensional benchmark functions (dimension = 100). The experimental results of eight composite functions in CEC2014 are displayed in Table 6.
As can be seen from Table 4, in the low-dimensional functions, the optimization accuracy of CLTSO is only slightly weaker than its competitors in F6, F8, F12, and F13. In the remaining 10 benchmark functions, CLTSO not only has significantly better solution accuracy than its competitors, but also has better robustness. This shows that the Circle chaotic map operator can help CLTSO obtain more diverse candidate solutions, and each candidate solution is continuously updated during the iterations before the optimal solution is finally selected.
When the dimension of the benchmark function is 100, CLTSO has better optimization performance in dealing with higher dimensional and more complex problems. Only in the F 8 test function is the optimization accuracy of CLTSO slightly worse than that of CMA-ES. In the remaining 12 functions, CLTSO has the best optimization accuracy, and CLTSO can find the theoretical optimal value in F 1 , F 2 , F 3 , F 4 , F 9 , F 10 , and F 11 . From the robustness of the algorithm, CLTSO obtains the minimum Std value in all benchmark functions, which indicates that CLTSO has more stable exploration ability than other competitors. This is due to the fact that the Circle chaotic map strategy helps CLTSO to obtain a richer population diversity, which allows the initial tuna to be evenly distributed in the search space. In addition, during the execution of CLTSO, the Levy flight operator strengthens the exploration capability of the algorithm, and the nonlinear adaptive weight operator can well balance the exploration and exploitation capability of CLTSO.
The experimental results of the CEC2014 function indicate that all algorithms do not obtain the theoretical optimal value, but CLTSO can still achieve more excellent optimization accuracy than other competitors in F 16 ~ F 22 . This effectively proves that the improved nonlinear tuna swarm optimization algorithm based on the Circle chaotic map strategy and the Levy flight operator can adapt to more complex and challenging optimization problems.
To more intuitively observe the convergence ability of CLTSO and the competitors, Figure 7 draws their convergence curves. The curves of F1~F13 are drawn in 100 dimensions, the curve of F14 is drawn in its suitable dimension, and the curves of F15~F22 are drawn in 50 dimensions.
The convergence curves of these algorithms indicate that CLTSO has a better convergence performance than the competitors. For simple optimization problems, CLTSO can obtain theoretical optimal values within 500~600 iterations. For complex and challenging problems, CLTSO can also maintain a faster convergence rate and get rid of the influence of local attraction points, and ultimately achieve higher optimization accuracy.
In order to further show whether CLTSO has an obvious advantage over the other algorithms, this paper uses the Wilcoxon [56] statistical method and the Friedman method to analyze the experimental results of these algorithms in the 100-dimensional benchmark functions. The results of F14 are based on its suitable dimension. The experimental data of the eight CEC2014 benchmark functions are measured in 50 dimensions. The results of the Friedman test and the p-values of the Wilcoxon test are listed in Table 7 and Table 8, respectively.
The Friedman test is a nonparametric statistical analysis method, which uses rank means to test whether there are significant differences between multiple population distributions. Because the problem in this paper is to find the minimum value, a smaller rank mean in the Friedman test results indicates better performance of the algorithm. As can be seen from Table 7, CLTSO has the smallest rank mean and TSO ranks second, followed by CMA-ES, WOA, FDB-AGDE, and APSO.
In the Wilcoxon statistical test results, if the p-value is less than 0.05 and close to 0, this indicates that the experimental results of the two algorithms are significantly different. If the p-value exceeds 0.05, this indicates that the experimental results of the two algorithms are not significantly different. If the p-value is equal to NaN, this means that the experimental results of the two algorithms are identical. As can be seen from Table 8, except for the last column, the p-values of CLTSO are basically less than 0.05 and close to 0, which indicates that CLTSO has significant advantages compared with the other algorithms. It is not difficult to find that half of the p-values for the Wilcoxon analysis of CLTSO vs. TSO are greater than 0.05. This is because both CLTSO and TSO can find the theoretical optimal value in these functions, or the optimal value found by TSO is not much different from that found by CLTSO. From the optimization curves of TSO and CLTSO, we can see that although the calculation results of these two algorithms are not very different in those functions with p-values greater than 0.05, CLTSO is generally much faster than TSO.
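For readers who want to reproduce this kind of analysis, both tests are available in SciPy; the arrays below are random placeholders, not the experimental data of this paper:
```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# Placeholder result matrices: 30 independent runs of each algorithm on one function
cltso_runs = np.random.rand(30)
tso_runs = np.random.rand(30)
cmaes_runs = np.random.rand(30)

# Pairwise Wilcoxon signed-rank test: p < 0.05 suggests a significant difference
stat, p_value = wilcoxon(cltso_runs, tso_runs)

# Friedman test over several algorithms: lower mean ranks indicate better performance
chi2, p_friedman = friedmanchisquare(cltso_runs, tso_runs, cmaes_runs)
```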
Finally, this paper quantitatively analyzes all the algorithms in the experiment. The quantitative analysis of these algorithms is based on the mean absolute error (MAE) of 22 benchmark functions. In mathematics, MAE is a measure of the error between paired observations expressing the same phenomenon. The mathematical model of MAE is as follows:
$$MAE = \frac{\sum_{i=1}^{N} |m_i - o_i|}{N} \qquad (26)$$
where $N$ is the total number of benchmark functions used for testing, $m_i$ is the average of the optimal results calculated by the algorithm, and $o_i$ is the theoretical optimal value of the $i$-th benchmark function.
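A direct translation of Equation (26) into NumPy, with made-up values in the usage example:
```python
import numpy as np

def mean_absolute_error(means, optima):
    """MAE of Eq. (26): average gap between the mean results and theoretical optima."""
    means, optima = np.asarray(means), np.asarray(optima)
    return np.abs(means - optima).sum() / len(means)

# Example with made-up values for three benchmark functions
mae = mean_absolute_error([1.2e-3, 4.5, 0.0], [0.0, 0.0, 0.0])
```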
Table 9 shows the MAE ranking results of these algorithms. The MAE value of CLTSO ranks the first among all competitors, and FDB-AGDE ranks the second. The above data intuitively illustrate the advantage of CLTSO.
The time consumed by these algorithms on functions F1~F22 is shown in Table 10. The numerical unit is seconds. The analysis of the consumed time indicates that the time complexity of CLTSO is slightly higher than that of TSO, but the increase is trivial. The improved operators proposed in this paper increase the time complexity only slightly but greatly enhance the optimization performance of the CLTSO algorithm.

4.4. Effectiveness Analysis of Improved Operators

This paper makes three improvements to the original tuna swarm optimization algorithm. Firstly, the improved Circle chaotic mapping strategy is introduced in the initialization phase, which expands the swarm diversity. Secondly, the Levy operator is introduced in the position update phase, which strengthens the global swimming ability of the tuna. Finally, the nonlinear adaptive weight strategy is introduced in the TSO iteration stage, which can effectively balance the exploration and the exploitation capabilities of the tuna swarm. Section 4.3 showed that the proposed operators significantly improve the optimization performance of TSO. In addition, to verify the effectiveness of each improvement proposed in this paper, we selected the tuna swarm optimization algorithm (TSO), the improved TSO based on the Levy flight operator (LTSO), the improved TSO based on the Circle chaotic map and nonlinear adaptive weights (CTSO), and CLTSO to conduct a set of comparative experiments. Functions F1~F22 are used to test these algorithms in this section, and each algorithm runs 30 times independently. F1~F13 are tested in 100 dimensions, and F15~F22 are tested in 50 dimensions.
The experimental results of various versions of the improved tuna swarm optimization algorithm are displayed in Table 11. Their convergence curves are displayed in Figure 8.
As can be seen from Table 11 and Figure 8, CLTSO has higher optimization accuracy than its competitors. In benchmark functions F1, F2, F3, F4, F9, F10, F11, and F14, CLTSO, CTSO, and LTSO can all reach the theoretical optimal values, but CLTSO converges much faster than CTSO and LTSO. The above data indicate that the optimization performance of CTSO and LTSO is better than that of the original tuna swarm optimization algorithm, which further confirms the validity of the three modified operators in CLTSO. To demonstrate that the optimization capability of CLTSO is greatly enhanced compared to CTSO and LTSO, a Friedman statistical analysis and MAE ranking are conducted based on the data in Table 11. The analysis and ranking results are listed in Table 12 and Table 13.
According to the above two tables, it is clear that CLTSO has the smallest rank mean in the Friedman analysis test, LTSO ranks the second, and CTSO ranks the third, followed by TSO. According to the MAE value of each algorithm, CLTSO ranks the first. The above ranking shows that CLTSO can better approximate the theoretical optimal value when dealing with optimization problems. CLTSO has shown much better performance than the competitors. Therefore, the above data and analysis results confirm that the three improved operators proposed in this paper are effective.

5. Optimization Engineering Example Using CLTSO

The original intention of meta-heuristic algorithms is to optimize the engineering problems encountered in practice. How to improve the precision of engineering practice is the concern of researchers. To verify the effectiveness of CLTSO for real engineering problems, CLTSO is applied to the modification design of a BP neural network. The BP neural network, proposed by Rumelhart and McClelland, is a model trained by error back propagation. It is one of the most mature and widely used artificial neural network modules. The BP neural network is widely used in pattern recognition, classification and prediction, nonlinear modeling, etc. Figure 9 shows a BP neural network topology with d input neurons, l output neurons, and q hidden layer neurons.
$v_{ih}$ is the weight between the $i$-th node in the input layer and the $h$-th node in the hidden layer. $w_{hj}$ is the weight between the $h$-th node in the hidden layer and the $j$-th node in the output layer. The threshold of the $j$-th node in the output layer is expressed by $\theta_j$. Therefore, the input value received by the $h$-th neuron of the hidden layer in the network model is as follows:
$$\alpha_h = \sum_{i=1}^{d} v_{ih} x_i \qquad (27)$$
The value received by the jth node in the output layer is as follows:
$$\beta_j = \sum_{h=1}^{q} w_{hj} b_h \qquad (28)$$
where $b_h$ is the output value of the $h$-th neuron in the hidden layer. Taking training sample $(x^k, y^k)$ as an example, we assume that the output of the network model is as follows:
$$\hat{y}^k = (\hat{y}_1^k, \hat{y}_2^k, \ldots, \hat{y}_l^k) \qquad (29)$$
$$\hat{y}_j^k = f(\beta_j - \theta_j) \qquad (30)$$
Therefore, the mean square error of the network on example $(x^k, y^k)$ is as follows:
$$E_k = \frac{1}{2}\sum_{j=1}^{l}\left(\hat{y}_j^k - y_j^k\right)^2 \qquad (31)$$
where $l$ is the number of output nodes, $\hat{y}_j^k$ is the output value of the network model for the $k$-th training sample, and $y_j^k$ is the corresponding real value.
In the training process of the model, the error will be transmitted back to the hidden nodes. The model will adjust the weights and thresholds between each layer of nodes based on the error, and finally make the error achieve satisfactory accuracy. At present, the training methods of the BP neural network are mostly gradient descent. The training accuracy of the network model is extremely sensitive to the initial weight value and the learning rate. Therefore, when the objective function has multiple extreme values, the neural network is easily attracted by local extreme values. This will lead to a serious degradation in the performance of the algorithm. In order to optimize the performance of the BP network model and verify the optimization ability of CLTSO, a CLTSO-BP neural network model is proposed. The basic idea of the model is to use the weights and thresholds of each node in the BP model as the tuna individual in the CLTSO algorithm and use MSE as the fitness function in the CLTSO algorithm. CLTSO optimizes the MSE of the model to obtain the optimized initial value weight and the threshold.
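A minimal sketch of this encoding idea for a single-hidden-layer network, assuming a generic optimizer interface cltso_minimize(fitness, dim, lb, ub) that is not part of the authors' code; the hidden-layer thresholds gamma are also an assumption of this sketch:
```python
import numpy as np

def unpack(vec, d, q, l):
    """Split one tuna individual into BP weights and thresholds (v, w, gamma, theta)."""
    i = 0
    v = vec[i:i + d * q].reshape(d, q); i += d * q          # input -> hidden weights
    w = vec[i:i + q * l].reshape(q, l); i += q * l          # hidden -> output weights
    gamma = vec[i:i + q]; i += q                            # hidden thresholds (assumed)
    theta = vec[i:i + l]                                    # output thresholds
    return v, w, gamma, theta

def mse_fitness(vec, X, Y, d, q, l):
    """Fitness of one individual: mean squared error of the network it encodes."""
    v, w, gamma, theta = unpack(vec, d, q, l)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    b = sigmoid(X @ v - gamma)          # hidden layer output
    y_hat = sigmoid(b @ w - theta)      # network output
    return np.mean((y_hat - Y) ** 2)

# dim = total number of weights and thresholds; CLTSO searches this space:
# best_vec = cltso_minimize(lambda vec: mse_fitness(vec, X_train, Y_train, d, q, l),
#                           dim=d*q + q*l + q + l, lb=-1.0, ub=1.0)
```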
To compare the capability of the CLTSO-BP neural network with the original BP model, three popular datasets from the UCI machine learning and intelligent system center, Iris, Wine, and Wine Quality, are selected to design a comparative experiment. This experiment compares the classification accuracy of the CLTSO-BP neural network model and the BP model on the above three datasets.
In the experiment, the total amount of tuna is 30, the CLTSO algorithm is executed 30 times in total, and the neural network is executed 500 times in total. Table 14 shows the comparison results of the CLTSO-BP neural network model and the original BP model.
By comparing the results of the CLTSO-BP neural network and the original BP model on the three datasets, it is found that the new model can obtain more ideal classification results. It also indicates that CLTSO performs excellently in overcoming the difficulties of multi-layer perceptron training.

6. Conclusions

The tuna swarm optimization algorithm is widely recognized by scholars because of its simple structure and small number of parameters. The tuna swarm optimization algorithm has excellent optimization performance, but it can still be further improved. When dealing with simple problems, the solving speed of TSO can still be increased. When facing complex problems, it is difficult for TSO to escape the attraction of local optima. Therefore, this article proposes a modified nonlinear tuna swarm optimization algorithm based on the Circle chaotic map and the Levy flight operator. The optimization performance of CLTSO has been fully verified on 22 benchmark functions. The results show that CLTSO outperforms the comparable algorithms. Comparison data based on the 22 benchmark functions were analyzed using the Wilcoxon test, the Friedman test, and MAE. The analysis indicates that the rank mean and MAE value of CLTSO are superior to other advanced algorithms such as CMA-ES. Finally, this paper optimizes the BP neural network based on CLTSO. The CLTSO-BP neural network model is tested using three popular datasets from the UCI Machine Learning and Intelligent System Center. Compared with the original BP model, the new model improves the classification accuracy. However, for more complex datasets, the classification ability of the CLTSO-BP neural network still needs to be improved. Possible directions include increasing the swarm size of the algorithm and the total number of CLTSO runs to obtain higher quality solutions, which is also a target of our continuing research. In addition, the performance of CLTSO on some complex multimodal functions can still be further improved, which is also one of the key research directions in the future. CLTSO has the advantages of fast convergence and high convergence accuracy, and can be applied in practical projects such as workshop scheduling and distribution network reconstruction.

Author Contributions

Conceptualization, W.W. and J.T.; methodology, W.W.; software, W.W.; validation, W.W.; formal analysis, W.W.; investigation, W.W.; resources, W.W.; data curation, W.W.; writing—original draft preparation, W.W.; writing—review and editing, W.W. and J.T.; visualization, W.W.; supervision, J.T.; project administration, J.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Narendra, P.M.; Fukunaga, K. A branch and bound algorithm for feature subset selection. IEEE Trans. Comput. 1977, 26, 917–922. [Google Scholar] [CrossRef]
  2. Wu, G.; Pedrycz, W.; Suganthan, P.N.; Mallipeddi, R. A variable reduction strategy for evolutionary algorithms handling equality constraints. Appl. Soft Comput. J. 2015, 37, 774–786. [Google Scholar] [CrossRef]
  3. Zhang, H.; Cui, L.; Zhang, X.; Luo, Y. Data-driven robust approximate optimal tracking control for unknown general non-linear systems using adaptive dynamic programming method. IEEE Trans. Neural Netw. 2011, 22, 2226–2236. [Google Scholar] [CrossRef]
  4. Slowik, A.; Halina, K. Nature inspired methods and their industry applications—Swarm intelligence algorithms. IEEE Trans. Ind. Inform. 2017, 14, 1004–1015. [Google Scholar] [CrossRef]
  5. Chakraborty, A.; Kar, A.K. Swarm intelligence: A review of algorithms. Nat.-Inspir. Comput. Optim. 2017, 10, 475–494. [Google Scholar]
  6. Liu, W.; Dridi, M.; Fei, H.; el Hassani, A.H. Hybrid metaheuristics for solving a home health care routing and scheduling problem with time windows, synchronized visits and lunch breaks. Expert Syst. Appl. 2021, 183, 115307. [Google Scholar] [CrossRef]
  7. Wang, X.; Zhao, H.; Han, T.; Zhou, H.; Li, C. A grey wolf optimizer using Gaussian estimation of distribution and its application in the multi-UAV multi-target urban tracking problem. Appl. Soft Comput. 2019, 78, 240–260. [Google Scholar] [CrossRef]
  8. Wang, Y.-G. A maximum-likelihood method for estimating natural mortality and catchability coefficient from catch-and-effort data. Mar. Freshw. Res. 1999, 50, 307–311. [Google Scholar] [CrossRef]
  9. Wu, J.; Ding, Z. Improved grey model by dragonfly algorithm for Chinese tourism demand forecasting. In Proceedings of the 33rd International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, Kitakyushu, Japan, 22–25 September 2020. [Google Scholar]
  10. Wu, J.; Cui, Z.; Chen, Y.; Kong, D.; Wang, Y.-G. A new hybrid model to predict the electrical load in five states of Australia. Energy 2019, 166, 598–609. [Google Scholar] [CrossRef]
  11. Webb, B. Swarm Intelligence: From Natural to Artificial Systems. Connect. Sci. 2002, 14, 163–164. [Google Scholar] [CrossRef]
  12. Kennedy, J. Swarm Intelligence. In Handbook of Nature-Inspired and Innovative Computing; Springer: Berlin/Heidelberg, Germany, 2006; pp. 187–219. [Google Scholar]
  13. Ashlock, D. Evolutionary Computation for Modeling and Optimization; Springer: New York, NY, USA, 2006; Volume 51, p. 743. [Google Scholar]
  14. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  15. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  16. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  18. Chen, D.; Ge, Y.; Wan, Y.; Deng, Y.; Chen, Y.; Zou, F. Poplar optimization algorithm: A new meta-heuristic optimization technique for numerical optimization and image segmentation. Expert Syst. Appl. 2022, 200, 1–17. [Google Scholar] [CrossRef]
  19. Man, K.F.; Tang, K.S.; Kwong, S. Genetic Algorithms. Perspect. Neural Comput. 1989, 83, 55–80. [Google Scholar]
  20. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  21. Du, Z.; Li, S.; Sun, Y.; Li, N. Adaptive particle swarm optimization algorithm based on levy flights mechanism. In Proceedings of the 2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017. [Google Scholar]
  22. Yu, H.; Yu, Y.; Liu, Y.; Wang, Y.; Gao, S. Chaotic grey wolf optimization. In Proceedings of the 2016 International Conference on Progress in Informatics and Computing (PIC), Beijing, China, 23–26 December 2016; pp. 103–113. [Google Scholar]
  23. Yuan, X.; Yang, D.; Liu, H. MPPT of PV system under partial shading condition based on adaptive inertia weight particle swarm optimization algorithm. In Proceedings of the 2015 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), Shenyang, China, 8–12 June 2015; pp. 729–733. [Google Scholar]
  24. Park, S.; Kim, Y.; Kim, J.; Lee, J. Speeded-up cuckoo search using opposition-based learning. In Proceedings of the 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014), Seoul, Korea, 22–25 October 2014; pp. 535–539. [Google Scholar]
  25. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  26. Xie, W.; Wang, J.S.; Tao, Y. Improved Black Hole Algorithm Based on Golden Sine Operator and Levy Flight Operator. IEEE Access 2019, 7, 161459–161486. [Google Scholar] [CrossRef]
  27. Xie, L.; Han, T.; Zhou, H.; Zhang, Z.-R.; Han, B.; Tang, A. Tuna Swarm Optimization: A Novel Swarm-Based Metaheuristic Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 9210050. [Google Scholar] [CrossRef]
  28. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  29. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  30. Hu, D.; Yang, S. Improved Tuna Algorithm to Optimize ELM Model for PV Power Prediction. J. Wuhan Univ. Technol. 2022, 44, 97–104. [Google Scholar]
  31. Kumar, C.; Mary, D.M. A novel chaotic-driven Tuna Swarm Optimizer with Newton-Raphson method for parameter identification of three-diode equivalent circuit model of solar photovoltaic cells/modules. Optik 2022, 264, 169379. [Google Scholar] [CrossRef]
  32. Arora, S.; Anand, P. Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 2019, 31, 4385–4405. [Google Scholar] [CrossRef]
  33. Guangyuan, P.; Junfei, Q.; Honggui, H. A new strategy of chaotic PSO and its application in optimization design for pipe network. In Proceedings of the 32nd Chinese Control Conference, Xi’an, China, 26–28 July 2013; pp. 8022–8027. [Google Scholar]
  34. Pluhacek, M.; Senkerik, R.; Zelinka, I.; Davendra, D. Designing PID Controllers by Means of PSO Algorithm Enhanced by Various Chaotic Maps. In Proceedings of the 2013 8th EUROSIM Congress on Modelling and Simulation, Cardiff, UK, 10–13 September 2013; pp. 19–23. [Google Scholar]
  35. Zhao, J. Chaotic particle swarm optimization algorithm based on tent mapping for dynamic origin-destination matrix estimation. In Proceedings of the 2011 International Conference on Electric Information and Control Engineering, Wuhan, China, 15–17 April 2011; pp. 221–224. [Google Scholar]
  36. Zhang, J.; Zhu, Y.; Zhu, H.; Cheng, J. Some improvements to logistic map for chaotic signal generator. In Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 13–16 December 2017; pp. 1090–1093. [Google Scholar]
  37. Li, M.; Sun, X.; Li, W.; Wang, Y. Improved Chaotic Particle Swarm Optimization using circle map for training SVM. In Proceedings of the 2009 Fourth International Conference on Bio-Inspired Computing, Beijing, China, 16–19 October 2009; pp. 1–7. [Google Scholar]
  38. Vasuyta, K.; Zakharchenko, I. Modified discrete chaotic map based on Chebyshev polynomial. In Proceedings of the 2016 Third International Scientific-Practical Conference Problems of Infocommunications Science and Technology (PIC S&T), Kharkov, Ukraine, 4–6 October 2016; pp. 217–219. [Google Scholar]
  39. Jiteurtragool, N.; Ketthong, P.; Wannaboon, C.; San-Um, W. A topologically simple keyed hash function based on circular chaotic sinusoidal map network. In Proceedings of the 2013 15th International Conference on Advanced Communications Technology (ICACT), Seoul, Korea, 27–30 January 2013; pp. 1089–1094. [Google Scholar]
  40. Petavratzis, E.; Moysis, L.; Volos, C.; Nistazakis, H.; Muñoz-Pacheco, J.M.; Stouboulos, I. Motion Control of a Mobile Robot Based on a Chaotic Iterative Map. In Proceedings of the 2020 9th International Conference on Modern Circuits and Systems Technologies (MOCAST), Pradesh, India, 7–9 September 2020; pp. 1–4. [Google Scholar]
  41. Zhang, D.M.; Xu, H.; Wang, Y.R.; Song, T. Whale optimization algorithm for embedded circle mapping and one-dimensional oppositional learning based small hole imaging. Control. Decis. 2021, 36, 1173–1180. [Google Scholar]
  42. Song, L.; Chen, W.; Chen, W.; Lin, Y.; Sun, X. Improvement and application of sparrow search algorithm based on hybrid strategy. J. Beijing Univ. Aeronaut. Astronaut. 2021, 1, 1–16. [Google Scholar]
  43. Viswanathan, G.M.; Afanasyev, V.; Buldyrev, S.V.; Havlin, S.; Da Luz, M.G.E.; Raposo, E.P.; Stanley, H.E. Levy flights search patterns of biological organisms. Phys. A Stat. Mech. Its Appl. 2001, 295, 85–88. [Google Scholar] [CrossRef]
  44. Biagini, F.; Hu, Y.; Øksendal, B.; Zhang, T. Stochastic Optimal Control and Applications. In Stochastic Calculus for Fractional Brownian Motion and Applications; Springer: London, UK, 2008. [Google Scholar]
  45. Yan, X.F.; Ye, D.Y. An improved flora foraging algorithm based on Levy flight. Comput. Syst. Appl. 2015, 24, 124–132. [Google Scholar]
  46. Haklı, H.; Uğuz, H. A novel particle swarm optimization algorithm with Levy flight. Appl. Soft Comput. 2014, 23, 333–345. [Google Scholar] [CrossRef]
  47. Liu, C.; Ye, C. Bat algorithm with Levy flight characteristics. Chin. J. Intell. Syst. 2013, 8, 240–246. [Google Scholar]
  48. Mantegna, R.N. Fast accurate algorithm for numerical simulation of Lévy stable stochastic processes. Phys. Rev. E 1994, 49, 4677–4689. [Google Scholar] [CrossRef] [PubMed]
  49. Zhang, J.; Wang, J.S. Improved Whale Optimization Algorithm Based on Nonlinear Adaptive Weight and Golden Sine Operator. IEEE Access. 2020, 8, 77013–77048. [Google Scholar] [CrossRef]
  50. Zhang, J.; Wang, J.S. Improved Salp Swarm Algorithm Based on Levy Flight and Sine Cosine Operator. IEEE Access. 2020, 8, 99740–99771. [Google Scholar] [CrossRef]
  51. Aloui, M.; Hamidi, F. A Chaotic Krill Herd Optimization Algorithm for Global Numerical Estimation of the Attraction Domain for Nonlinear Systems. Mathematics 2021, 9, 1743. [Google Scholar]
  52. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China, 2013; p. 201311. [Google Scholar]
  53. Reddy, R.B.; Uttara, K.M. Performance Analysis of Mimo Radar Waveform Using Accelerated Particle Swarm Optimization Algorithm. Signal Image Process. 2012, 3, 4. [Google Scholar] [CrossRef]
  54. Guvenc, U.; Duman, S.; Kahraman, H.T.; Aras, S.; Katı, M. Fitness-Distance Balance based adaptive guided differential evolution algorithm for security-constrained optimal power flow problem incorporating renewable energy sources. Appl. Soft Comput. 2021, 108, 107421. [Google Scholar] [CrossRef]
  55. Hansen, N.; Müller, S.D.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18. [Google Scholar] [CrossRef]
  56. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 special session on real parameter optimization. J. Heuristics 2008, 15, 617. [Google Scholar] [CrossRef]
Figure 1. Flow chart of TSO.
Figure 2. Frequency distribution histogram of improved Circle chaotic map.
Figure 3. Simulation comparison experiment diagram of Levy flight and random walk.
Figure 4. Comparison of weight coefficients α1 and α2 before and after improvement.
Figure 5. Comparison of the weight coefficient p before and after the improvement.
Figure 6. Flow chart of CLTSO.
Figure 7. Convergence curve of each algorithm (F1~F22).
Figure 8. Convergence curves of each version of improved TSO.
Figure 9. BP neural network topology.
Table 1. Explanation of symbols.
Symbol | Meaning
X_i^int | Tuna individual in TSO
ub | The upper boundary of the search space of TSO
lb | The lower boundary of the search space of TSO
NP | Population size of TSO
τ | Distance parameter
α_1 | Weight parameter of tuna following the best individual
α_2 | Weight parameter of tuna following the front individual
p | Weight parameter in the parabolic foraging strategy
s | The step length of Levy flight
t | Current number of iterations of the algorithm
T_Max | Maximum number of iterations of the algorithm
α_1i | Improved version of α_1
α_2i | Improved version of α_2
p_i | Improved version of p
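Table 1 lists s, the Levy flight step length. A common way to draw such steps is Mantegna's algorithm [48]; the NumPy sketch below is illustrative only. The stability index β = 1.5 and the way the step would scale a position update are assumptions here, since the exact CLTSO update rule is given in the main text.

    import numpy as np
    from math import gamma, sin, pi

    def levy_step(dim, beta=1.5, rng=None):
        """Levy-flight step vector drawn with Mantegna's algorithm [48]."""
        if rng is None:
            rng = np.random.default_rng()
        sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
                   / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma_u, size=dim)  # numerator ~ N(0, sigma_u^2)
        v = rng.normal(0.0, 1.0, size=dim)      # denominator ~ N(0, 1)
        return u / np.abs(v) ** (1 / beta)      # heavy-tailed step length s

    # Example: one 30-dimensional Levy step
    s = levy_step(30)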
Table 2. Benchmark functions.
Function | Dim | Range | f_min
$F_1(x)=\sum_{i=1}^{D} x_i^2$ | 30, 100 | [−100, 100] | 0
$F_2(x)=\sum_{i=1}^{D}|x_i|+\prod_{i=1}^{D}|x_i|$ | 30, 100 | [−10, 10] | 0
$F_3(x)=\sum_{j=1}^{D}\left(\sum_{i=1}^{j}x_i\right)^2$ | 30, 100 | [−100, 100] | 0
$F_4(x)=\max_i\{|x_i|,\ 1\le i\le D\}$ | 30, 100 | [−100, 100] | 0
$F_5(x)=\sum_{i=1}^{D-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30, 100 | [−30, 30] | 0
$F_6(x)=\sum_{i=1}^{D}(\lfloor x_i+0.5\rfloor)^2$ | 30, 100 | [−100, 100] | 0
$F_7(x)=\sum_{i=1}^{D} i x_i^4+\mathrm{random}[0,1)$ | 30, 100 | [−1.28, 1.28] | 0
$F_8(x)=\sum_{i=1}^{D}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 30, 100 | [−500, 500] | −418.9829 × D
$F_9(x)=\sum_{i=1}^{D}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30, 100 | [−5.12, 5.12] | 0
$F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right)+20+e$ | 30, 100 | [−32, 32] | 8.8818 × 10^−16
$F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{D}x_i^2-\prod_{i=1}^{D}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30, 100 | [−600, 600] | 0
$F_{12}(x)=\frac{\pi}{D}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{D-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_D-1)^2\right\}+\sum_{i=1}^{D}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a\le x_i\le a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | 30, 100 | [−50, 50] | 0
$F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{D-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_D-1)^2\left[1+\sin^2(2\pi x_D)\right]\right\}+\sum_{i=1}^{D}u(x_i,5,100,4)$ | 30, 100 | [−50, 50] | 0
$F_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65.53, 65.53] | 0.998004
$F_{15}(x)$ (CEC2014 1: Rotated High Conditioned Elliptic Function) | 50 | [−100, 100] | 100
$F_{16}(x)$ (CEC2014 2: Rotated Bent Cigar Function) | 50 | [−100, 100] | 200
$F_{17}(x)$ (CEC2014 3: Rotated Discus Function) | 50 | [−100, 100] | 300
$F_{18}(x)$ (CEC2014 5: Shifted and Rotated Rosenbrock) | 50 | [−100, 100] | 500
$F_{19}(x)$ (CEC2014 18: Shifted and Rotated Expanded Scaffer's F6 Function) | 50 | [−100, 100] | 1800
$F_{20}(x)$ (CEC2014 20: Hybrid Function 4 (N = 4)) | 50 | [−100, 100] | 2000
$F_{21}(x)$ (CEC2014 21: Hybrid Function 5 (N = 5)) | 50 | [−100, 100] | 2100
$F_{22}(x)$ (CEC2014 30: Composition Function 8 (N = 3)) | 50 | [−100, 100] | 3000
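To make the definitions in Table 2 concrete, the following minimal NumPy sketch implements three of the classical benchmarks (F1 sphere, F9 Rastrigin, F10 Ackley) as listed above; the remaining classical functions follow the same pattern, and the CEC2014 functions (F15–F22) additionally require the official shift and rotation data [52].

    import numpy as np

    def f1_sphere(x):
        # F1: sum_i x_i^2, global minimum 0 at x = 0
        return np.sum(x ** 2)

    def f9_rastrigin(x):
        # F9: sum_i [x_i^2 - 10 cos(2 pi x_i) + 10], global minimum 0 at x = 0
        return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

    def f10_ackley(x):
        # F10: -20 exp(-0.2 sqrt(mean(x^2))) - exp(mean(cos(2 pi x))) + 20 + e
        d = x.size
        return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

    x = np.zeros(30)
    print(f1_sphere(x), f9_rastrigin(x), f10_ackley(x))  # all approximately 0 at the optimum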
Table 3. Parameter values of the algorithms.
Algorithm | Parameter Value
APSO | α = 1, β = 0.5, γ = 0.95
WOA | l ∈ [−1, 1]
FDB-AGDE | μ_CR = 0.5
CMA-ES | μ = 2
TSO | a = 0.7, z = 0.05
CLTSO | a = 0.7, z = 0.05
CTSO | a = 0.7, z = 0.05
LTSO | a = 0.7, z = 0.05
Table 4. Experimental results in 30 dimensions.
Function | Performance | APSO | WOA | FDB-AGDE | CMA-ES | TSO | CLTSO
F 1 Mean5.09 × 10−394.82 × 10−1501.23 × 10−85.99 × 10−1500
Std1.05 × 10−392.59 × 10−1491.34 × 1013.94 × 10−1500
F 2 Mean4.38 × 10−18.02 × 10−1034.49 × 10−61.83 × 10−71.71 × 10−2520
Std5.79 × 10−13.63 × 10−1025.61 × 1014.53 × 10−800
F 3 Mean1.32 × 1012.01 × 1041.08 × 10−966.14 × 10−600
Std4.84 × 1009.41 × 1036.35 × 1041.16 × 10−500
F 4 Mean4.96 × 10−13.61 × 1011.64 × 1008.37 × 10−66.42 × 10−2490
Std1.75 × 10−12.69 × 1011.45 × 1012.29 × 10−600
F 5 Mean5.02 × 1012.72 × 1011.50 × 1026.64 × 1012.94 × 10−42.12 × 10−4
Std4.56 × 1014.96 × 10−11.14 × 1021.52 × 1027.65 × 10−13.82 × 10−5
F 6 Mean4.01 × 10−328.72 × 10−21.09 × 10−86.59 × 10−151.37 × 10−92.04 × 10−10
Std1.60 × 10−329.95 × 10−21.33 × 1013.44 × 10−158.89 × 10−62.40 × 10−10
F 7 Mean1.54 × 10−11.53 × 10−32.36 × 10−22.44 × 10−22.16 × 10−51.81 × 10−5
Std2.03 × 10−22.05 × 10−39.40 × 1006.71 × 10−32.19 × 10−46.32 × 10−5
F 8 Mean–1.09 × 102–1.16 × 104–1.26 × 104–4.41 × 1011–8.38 × 102–1.26 × 104
Std3.25 × 1001.50 × 1033.32 × 10−12.34 × 10121.17 × 1046.00 × 10−8
F 9 Mean7.42 × 10103.11 × 1015.60 × 10100
Std5.96 × 10001.62 × 1016.32 × 10100
F 10 Mean5.36 × 10−14.56 × 10−156.78 × 10−77.01 × 10−18.88 × 10−168.88 × 10−16
Std5.28 × 10−12.15 × 10−153.73 × 10−13.78 × 1008.29 × 10−160
F 11 Mean8.49 × 10−31.61 × 10−32.85 × 10−73.29 × 10−400
Std1.67 × 10−28.69 × 10−31.64 × 1011.77 × 10−300
F 12 Mean1.08 × 10−16.17 × 10−31.74 × 10−252.01 × 10−152.65 × 10−106.75 × 10−14
Std1.21 × 10−16.67 × 10−32.01 × 1011.14 × 10−151.14 × 10−73.87 × 10−11
F 13 Mean2.38 × 10−33.11 × 10−11.87 × 10−193.77 × 10−145.12 × 10−81.24 × 10−9
Std4.42 × 10−32.72 × 10−11.54 × 1013.15 × 10−142.84 × 10−33.77 × 10−9
F 14 Mean1.27 × 1012.27 × 1009.98 × 10−17.65 × 1009.98 × 10−19.98 × 10−1
Std1.12 × 10−132.91 × 1002.71 × 1003.59 × 1009.31 × 10−12.69 × 10−16
Table 5. Experimental results in 100 dimensions.
Function | Performance | APSO | WOA | FDB-AGDE | CMA-ES | TSO | CLTSO
F 1 Mean1.84 × 1011.03 × 10−1497.27 × 1011.83 × 10−300
Std1.40 × 1004.01 × 10−1491.98 × 1014.15 × 10−400
F 2 Mean4.13 × 1016.59 × 10−1027.95 × 1002.99 × 10−18.66 × 10−2350
Std2.74 × 1002.42 × 10−1017.89 × 1001.09 × 10−100
F 3 Mean2.23 × 1028.92 × 1057.05 × 10−864.80 × 10500
Std1.40 × 1012.09 × 1051.05 × 1011.31 × 10500
F 4 Mean2.29 × 1007.06 × 1015.91 × 1011.73 × 1005.52 × 10−2290
Std8.35 × 10−22.77 × 1016.11 × 1003.03 × 10−100
F 5 Mean6.22 × 1039.77 × 1011.30 × 1053.59 × 1021.08 × 10−11.89 × 10−3
Std2.28 × 1034.05 × 10−19.66 × 1051.49 × 1031.99 × 10−14.41 × 10−3
F 6 Mean2.66 × 1011.76 × 1005.27 × 10−51.71 × 10−35.10 × 10−54.65 × 10−5
Std7.14 × 1006.30 × 10−11.37 × 1013.09 × 10−42.37 × 10−27.68 × 10−5
F 7 Mean1.83 × 1031.67 × 10−32.81 × 10−11.39 × 10−12.76 × 10−41.04 × 10−4
Std6.20 × 1021.19 × 10−31.37 × 1011.83 × 10−23.08 × 10−41.13 × 10−4
F 8 Mean−2.35 × 102−3.73 × 104−3.45 × 104−1.81 × 105−2.79 × 103−4.19 × 104
Std9.35 × 1005.59 × 1034.00 × 1033.12 × 1043.91 × 1042.95 × 10−3
F 9 Mean4.33 × 10202.25 × 1026.69 × 10200
Std2.60 × 10101.18 × 1021.64 × 10200
F 10 Mean3.58 × 1004.20 × 10−155.86 × 1009.41 × 10−38.88 × 10−168.88 × 10−16
Std3.04 × 10−12.23 × 10−154.11 × 1001.04 × 1018.29 × 10−160
F 11 Mean4.39 × 10−101.71 × 1003.49 × 10−200
Std6.34 × 10−201.12 × 1016.78 × 10−300
F 12 Mean5.46 × 10−11.80 × 10−26.02 × 10−32.12 × 10−42.49 × 10−82.78 × 10−9
Std1.09 × 10−17.22 × 10−39.60 × 1005.70 × 10−55.86 × 10−52.19 × 10−7
F 13 Mean8.73 × 1001.65 × 1005.93 × 10−13.76 × 10−32.02 × 10−46.80 × 10−6
Std2.13 × 1007.21 × 10−11.53 × 1012.83 × 10−34.38 × 10−37.14 × 10−6
Table 6. Simulation results of CEC2014 functions.
Function | Performance | APSO | WOA | FDB-AGDE | CMA-ES | TSO | CLTSO
F 15 Mean1.19 × 10108.85 × 1083.36 × 1051.84 × 1072.22 × 1064.35 × 105
Std3.12 × 1073.34 × 1087.01 × 1073.94 × 1061.21 × 1064.35 × 105
F 16 Mean1.65 × 10117.71 × 10103.36 × 1022.02 × 1041.00 × 1042.75 × 102
Std3.25 × 1087.86 × 1091.20 × 1015.08 × 1044.94 × 1091.15 × 104
F 17 Mean1.99 × 1089.77 × 1047.36 × 1028.33 × 1058.38 × 1033.59 × 102
Std3.10 × 1039.92 × 1031.09 × 1011.18 × 1057.60 × 1033.29 × 101
F 18 Mean5.20 × 1025.21 × 1025.21 × 1025.21 × 1025.21 × 1025.20 × 102
Std9.69 × 1028.62 × 10−21.14 × 1014.43 × 10−23.13 × 1021.06 × 10−1
F 19 Mean1.39 × 1094.83 × 1052.96 × 1035.07 × 1042.27 × 1031.98 × 103
Std1.98 × 1044.22 × 1052.00 × 1032.89 × 1042.91 × 1031.56 × 103
F 20 Mean3.17 × 1033.04 × 1053.13 × 1038.77 × 1054.66 × 1032.67 × 103
Std5.98 × 1022.31 × 1051.56 × 1013.70 × 1054.86 × 1032.25 × 102
F 21 Mean9.39 × 1081.12 × 1073.56 × 1045.20 × 1064.10 × 1046.27 × 103
Std1.14 × 1065.43 × 1061.05 × 1052.42 × 1062.33 × 1056.06 × 104
F 22 Mean3.20 × 1033.79 × 1051.26 × 1044.00 × 1033.20 × 1033.20 × 103
Std7.83 × 10−42.41 × 1059.53 × 1002.93 × 1021.92 × 1030
Table 7. Results of Friedman test.
Algorithm | Rank Mean
CLTSO | 1.39
TSO | 2.48
FDB-AGDE | 3.68
CMA-ES | 4.09
WOA | 4.14
APSO | 5.23
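The mean ranks in Table 7 (and later in Table 12) can be reproduced by ranking the algorithms on each benchmark function and averaging the ranks, with the Friedman test applied to the same data. The sketch below uses SciPy and NumPy; the numerical values in the array are placeholders rather than the paper's data, so the mean errors reported in Tables 4–6 would need to be substituted to reproduce the ranking.

    import numpy as np
    from scipy.stats import rankdata, friedmanchisquare

    # Columns follow Table 7's algorithms; rows are benchmark functions.
    # Placeholder values; substitute the mean errors from Tables 4-6.
    algorithms = ["CLTSO", "TSO", "FDB-AGDE", "CMA-ES", "WOA", "APSO"]
    mean_errors = np.array([
        [0.0,  0.0,   1e-8, 1e-15, 1e-150, 1e-39],
        [0.0,  1e-12, 1e-6, 1e-7,  1e-10,  1e-1],
        [1e-5, 1e-4,  1e-3, 1e-2,  1e-1,   1e0],
    ])

    # Rank the algorithms on each function (1 = smallest error), then average.
    ranks = np.apply_along_axis(rankdata, 1, mean_errors)
    print(dict(zip(algorithms, ranks.mean(axis=0))))  # mean rank per algorithm

    # Friedman test over the same data (one sample per algorithm).
    stat, p = friedmanchisquare(*mean_errors.T)
    print(stat, p)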
Table 8. Results of Wilcoxon test.
Function | CLTSO vs. WOA | CLTSO vs. APSO | CLTSO vs. FDB-AGDE | CLTSO vs. CMA-ES | CLTSO vs. TSO
F1 | 1.21 × 10^−12 | 1.21 × 10^−12 | 7.94 × 10^−3 | 1.21 × 10^−12 | NaN
F2 | 1.21 × 10^−12 | 1.21 × 10^−12 | 7.94 × 10^−3 | 1.21 × 10^−12 | 1.21 × 10^−12
F3 | 1.21 × 10^−12 | 1.21 × 10^−12 | 7.94 × 10^−3 | 1.21 × 10^−12 | NaN
F4 | 1.21 × 10^−12 | 1.21 × 10^−12 | 7.94 × 10^−3 | 1.21 × 10^−12 | 1.21 × 10^−12
F5 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.94 × 10^−3 | 3.02 × 10^−11 | 4.18 × 10^−9
F6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.94 × 10^−3 | 3.02 × 10^−11 | 2.78 × 10^−7
F7 | 3.82 × 10^−10 | 3.02 × 10^−11 | 7.94 × 10^−3 | 3.02 × 10^−11 | 6.20 × 10^−4
F8 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.94 × 10^−3 | 3.02 × 10^−11 | 3.65 × 10^−8
F9 | NaN | 1.21 × 10^−12 | 7.94 × 10^−3 | 1.21 × 10^−12 | NaN
F10 | 3.06 × 10^−9 | 1.21 × 10^−12 | 7.94 × 10^−3 | 1.21 × 10^−12 | NaN
F11 | NaN | 1.21 × 10^−12 | 7.94 × 10^−3 | 1.21 × 10^−12 | NaN
F12 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.94 × 10^−3 | 3.02 × 10^−11 | 6.53 × 10^−8
F13 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.94 × 10^−3 | 3.02 × 10^−11 | 1.69 × 10^−9
F14 | 1.57 × 10^−11 | 1.39 × 10^−4 | NaN | 1.57 × 10^−11 | 1.22 × 10^−1
F15 | 7.94 × 10^−3 | 7.94 × 10^−3 | 1.59 × 10^−2 | 7.94 × 10^−3 | 1.51 × 10^−1
F16 | 7.94 × 10^−3 | 7.94 × 10^−3 | 8.41 × 10^−1 | 4.21 × 10^−1 | 6.90 × 10^−1
F17 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3
F18 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3
F19 | 7.94 × 10^−3 | 7.94 × 10^−3 | 8.41 × 10^−1 | 7.94 × 10^−3 | 8.41 × 10^−1
F20 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3
F21 | 7.94 × 10^−3 | 7.94 × 10^−3 | 4.21 × 10^−1 | 7.94 × 10^−3 | 1.51 × 10^−1
F22 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3 | 7.94 × 10^−3 | 1.00 × 10^0
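The entries in Table 8 are p-values of the Wilcoxon test computed over independent runs of each pair of algorithms. The sketch below uses SciPy's two-sample rank-sum test as a stand-in (whether the rank-sum or signed-rank variant was used is not stated in this excerpt), and returns NaN when the two samples are identical, which is how NaN entries arise when both algorithms reach the same optimum in every run. The run data shown are illustrative only.

    import numpy as np
    from scipy.stats import ranksums

    def wilcoxon_rank_sum_p(runs_a, runs_b):
        # When both algorithms return identical results on every run
        # (e.g., both reach exactly 0), the test is uninformative; return
        # NaN to mirror the NaN entries in Table 8.
        if np.array_equal(runs_a, runs_b):
            return float("nan")
        stat, p = ranksums(runs_a, runs_b)
        return p

    # Illustrative only: 30 best fitness values per algorithm on one benchmark.
    rng = np.random.default_rng(0)
    cltso_runs = np.zeros(30)
    tso_runs = np.abs(rng.normal(0.0, 1e-10, size=30))
    print(wilcoxon_rank_sum_p(cltso_runs, tso_runs))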
Table 9. MAE ranking results of each algorithm.
Algorithm | MAE
CLTSO | 2.06 × 10^4
FDB-AGDE | 2.70 × 10^4
CMA-ES | 1.21 × 10^6
TSO | 3.82 × 10^6
WOA | 3.55 × 10^9
APSO | 9.60 × 10^9
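A minimal sketch of the MAE metric used in Tables 9 and 13, under the assumption that it is the mean absolute difference between the average best value an algorithm attains on each benchmark and that benchmark's theoretical optimum (the f_min column of Table 2). The input values below are placeholders.

    import numpy as np

    def mean_absolute_error(mean_best_values, theoretical_optima):
        # MAE = (1/N) * sum_i |m_i - o_i|, where m_i is the mean best value an
        # algorithm reached on benchmark i and o_i is that benchmark's optimum.
        m = np.asarray(mean_best_values, dtype=float)
        o = np.asarray(theoretical_optima, dtype=float)
        return float(np.mean(np.abs(m - o)))

    # Illustrative only: optima from Table 2, algorithm results are placeholders.
    optima = [0.0, 0.0, -418.9829 * 30, 0.998004]
    algo_means = [1e-10, 2e-4, -418.9829 * 30, 0.998004]
    print(mean_absolute_error(algo_means, optima))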
Table 10. The execution time of each algorithm.
Function | APSO | WOA | FDB-AGDE | CMA-ES | TSO | CLTSO
F 1 0.50140.22068.24662.63470.22770.2603
F 2 0.44650.33338.79632.85520.23890.27260
F 3 0.57821.38268.44244.07511.27431.2819
F 4 4.32840.20838.84722.48570.19600.1942
F 5 0.71930.24583.5862.71630.24400.3442
F 6 0.85720.19218.89672.48670.18790.1946
F 7 0.75570.422613.63292.70730.39060.3860
F 8 1.00160.259923.66962.67090.26890.2698
F 9 0.51130.207426.19732.56780.21260.2246
F 10 0.56190.237021.38633.95140.34250.3350
F 11 0.53210.421320.68732.97840.34910.4298
F 12 1.95100.988919.54293.55280.88230.8381
F 13 1.88740.846811.61966.15772.21851.5804
F 14 2.07753.03644.97524.74622.25982.2449
F 15 1.93272.316715.585821.61522.44252.6542
F 16 1.42271.936214.822421.40821.94442.0605
F 17 1.49582.039514.657922.14842.00201.9974
F 18 1.27752.231115.685521.52552.13732.2258
F 19 1.78492.186715.327622.27182.08472.2293
F 20 2.52582.206329.384920.97462.18062.2839
F 21 3.00102.482430.127323.30552.43372.5681
F 22 6.11306.242649.855927.52035.98506.0176
Table 11. Experimental results of various versions of the improved TSO.
Function | Performance | TSO | LTSO | CTSO | CLTSO
F 1 Mean0000
Std0000
F 2 Mean9.66 × 10−237000
Std0000
F 3 Mean0000
Std0000
F 4 Mean1.26 × 10−233000
Std0000
F 5 Mean2.52 × 10−34.84 × 10−42.35 × 10−31.27 × 10−5
Std3.37 × 10−42.47 × 10−26.09 × 10−35.71 × 10−5
F 6 Mean9.33 × 10−46.07 × 10−52.92 × 10−43.07 × 10−5
Std2.93 × 10−38.82 × 10−51.76 × 10−22.78 × 10−5
F 7 Mean1.34 × 10−44.20 × 10−58.46 × 10−54.07 × 10−5
Std1.62 × 10−46.46 × 10−51.41 × 10−43.07 × 10−5
F 8 Mean−4.19 × 104−4.19 × 104−4.19 × 104−4.19 × 104
Std2.51 × 1042.51 × 1047.54 × 1035.30 × 10−8
F 9 Mean0000
Std0000
F 10 Mean8.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−16
Std0000
F 11 Mean0000
Std0000
F 12 Mean5.26 × 10−51.41 × 10−67.65 × 10−75.14 × 10−10
Std2.72 × 10−52.04 × 10−78.98 × 10−51.52 × 10−7
F 13 Mean7.86 × 10−47.47 × 10−69.84 × 10−63.60 × 10−7
Std7.73 × 10−45.16 × 10−31.58 × 10−32.08 × 10−5
F 14 Mean9.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−1
Std5.99 × 10−15.99 × 10−15.99 × 10−11.72 × 10−16
F 15 Mean1.15 × 1061.69 × 1062.32 × 1069.51 × 105
Std1.45 × 1066.65 × 1052.06 × 1063.35 × 105
F 16 Mean1.38 × 1041.12 × 1041.61 × 1033.40 × 102
Std7.08 × 1031.93 × 1038.20 × 1032.05 × 103
F 17 Mean3.16 × 1034.72 × 1024.57 × 1033.71 × 102
Std6.55 × 1032.28 × 1021.62 × 1044.36 × 101
F 18 Mean5.21 × 1025.20 × 1025.21 × 1025.20 × 102
Std3.13 × 1023.12 × 1023.13 × 1024.47 × 10−2
F 19 Mean5.54 × 1038.74 × 1034.22 × 1033.78 × 103
Std2.93 × 1033.42 × 1032.35 × 1031.34 × 103
F 20 Mean6.52 × 1033.09 × 1035.49 × 1032.62 × 103
Std3.39 × 1031.59 × 1034.34 × 1031.85 × 102
F 21 Mean2.15 × 1047.56 × 1041.95 × 1041.90 × 104
Std1.30 × 1053.01 × 1046.81 × 1043.08 × 104
F 22 Mean3.20 × 1033.20 × 1033.20 × 1033.20 × 103
Std0000
Table 12. Results of Friedman statistical analysis.
Algorithm | Rank Mean
CLTSO | 1.80
LTSO | 2.25
CTSO | 2.84
TSO | 3.11
Table 13. MAE ranking results.
Algorithm | MAE
CLTSO | 4.42 × 10^4
LTSO | 8.06 × 10^4
CTSO | 1.08 × 10^5
TSO | 1.18 × 10^5
Table 14. The comparison results of the two models.
Dataset | Model | Classification Accuracy
Iris | CLTSO-BP neural network | 100%
Iris | BP neural network | 95.2%
Wine | CLTSO-BP neural network | 100%
Wine | BP neural network | 94.4%
Wine Quality | CLTSO-BP neural network | 65.6%
Wine Quality | BP neural network | 45.2%
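Table 14 reports classification accuracy on three UCI datasets. As a point of reference, the sketch below trains a plain BP-style network, with scikit-learn's MLPClassifier used here as a stand-in, on the Iris and Wine datasets; the CLTSO step, which would first optimize the network's initial weights and thresholds, is not reproduced, so the accuracies will not match the table. The Wine Quality dataset is not bundled with scikit-learn and must be downloaded from the UCI repository.

    from sklearn.datasets import load_iris, load_wine
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import accuracy_score

    for name, loader in [("Iris", load_iris), ("Wine", load_wine)]:
        X, y = loader(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=0, stratify=y)
        # Plain BP baseline: one hidden layer, gradient training only.
        # A CLTSO-BP model would first search for good initial weights and
        # thresholds with CLTSO before this gradient step (not shown).
        bp = make_pipeline(StandardScaler(),
                           MLPClassifier(hidden_layer_sizes=(10,),
                                         max_iter=2000, random_state=0))
        bp.fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, bp.predict(X_te)))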
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
