*Article* **An Improved Nonlinear Tuna Swarm Optimization Algorithm Based on Circle Chaos Map and Levy Flight Operator**

**Wentao Wang and Jun Tian \***

College of Software, Nankai University, Tianjin 300071, China

**\*** Correspondence: jtian@nankai.edu.cn

**Abstract:** The tuna swarm optimization algorithm (TSO) is a new heuristic algorithm proposed by observing the foraging behavior of tuna populations. The advantages of TSO are a simple structure and fewer parameters. Although TSO converges faster than some classical meta-heuristic algorithms, it can still be further accelerated. When TSO solves complex and challenging problems, it often falls into local optima. To overcome this issue, this article proposes an improved nonlinear tuna swarm optimization algorithm based on the Circle chaos map and the Levy flight operator (CLTSO). To compare it with some advanced heuristic algorithms, the performance of CLTSO is tested with unimodal functions, multimodal functions, and some CEC2014 benchmark functions. The test results on these benchmark functions are statistically analyzed using the Wilcoxon test, the Friedman test, and MAE analysis. The experimental results and statistical analyses indicate that CLTSO is more competitive than other advanced algorithms. Finally, this paper uses CLTSO to optimize a BP neural network in the field of artificial intelligence, and a CLTSO-BP neural network model is proposed. Three popular datasets from the UCI Machine Learning and Intelligent System Center are selected to test the classification performance of the new model. The comparison indicates that the new model has higher classification accuracy than the original BP model.

**Keywords:** artificial intelligence; circle chaotic map; Levy flight; nonlinear adaptive weight; tuna swarm optimization

#### **1. Introduction**

Nowadays, many engineering problems in real life have become increasingly complex and challenging. High-quality solutions can help people effectively reduce resource investment. Because most production practice problems are multivariate, nonlinear, and subject to many complex constraints, the traditional branch and bound algorithm [1], conjugate gradient method [2], and dynamic programming method [3] cannot achieve remarkable results on these problems. The meta-heuristic algorithm has the characteristics of strong global search ability, no dependence on gradient information, and wide adaptability. It can effectively overcome the shortcomings of traditional optimization algorithms. Much of the research on meta-heuristic algorithms has shown that these algorithms are able to solve nonlinear optimization problems [4,5], so many researchers tend to use meta-heuristic algorithms to solve complex engineering problems. Meta-heuristic algorithms are now applied in various fields, such as workshop scheduling [6], task optimization [7], engineering management [8–10], and others.

**Citation:** Wang, W.; Tian, J. An Improved Nonlinear Tuna Swarm Optimization Algorithm Based on Circle Chaos Map and Levy Flight Operator. *Electronics* **2022**, *11*, 3678. https://doi.org/10.3390/electronics11223678

Academic Editor: Javid Taheri

Received: 23 October 2022; Accepted: 7 November 2022; Published: 10 November 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The meta-heuristic algorithm is a mathematical method inspired by biological behavior and physical phenomena in nature. These methods are used to solve complex problems in real life [11]. The meta-heuristic algorithm has the advantages of a simple structure, few hyperparameters, and ease of understanding. Based on these advantages, it has become an important method for solving optimization problems today. Meta-heuristic algorithms can be divided into four categories: swarm intelligence algorithms [12], evolutionary algorithms [13], human-based algorithms [14], and physical and chemical-based algorithms [15]. Swarm intelligence algorithms simulate the behavior of animal populations. Each individual in the population is a candidate solution, and the individuals explore the search space randomly, which effectively reduces the chance of entering a local optimum. Some classic and newly proposed swarm intelligence algorithms include Golden Jackal Optimization (GJO) [16], the Gray Wolf Optimization Algorithm (GWO) [17], and the Poplar Optimization Algorithm (POA) [18]. Some classical evolutionary algorithms include Genetic Algorithms [19] and the Biogeography-Based Optimization Algorithm (BBO) [20]. Meta-heuristic algorithms can effectively enhance the efficiency of engineering practice, which has attracted more and more scholars' attention.

In many industrial problems, specific solution functions can be established with mathematical models, and how to solve complex function optimization problems has become a focus of current research. For optimization problems with few constraints and low dimensions, traditional mathematical methods can achieve outstanding results. Although meta-heuristic algorithms perform very well on complex, high-dimensional optimization problems, the convergence speed of a simple meta-heuristic algorithm still needs to be improved, and a single meta-heuristic algorithm sometimes finds it difficult to escape the attraction of local extrema. To further enhance the optimization capability of meta-heuristic algorithms, many experts have tried different improvement strategies. Zhongzhou Du introduced Levy flight into the iterative process of PSO, which accelerated the optimization speed of PSO [21]. Hang Yu used a chaotic mapping strategy to improve the GWO initialization method, which improved the accuracy of the GWO solution [22]. Xiaoling Yuan introduced adaptive weight into the PSO algorithm, which greatly strengthened its global search capability [23]. So-Youn Park combined CS with oppositional learning, making CS converge faster [24]. W. Xie used the golden sine operator to improve the Black Hole algorithm (BH) [25], giving it better exploration performance [26].

Xie et al. proposed a new meta-heuristic algorithm called the tuna swarm optimization algorithm (TSO) [27] in 2021 after observing the foraging behavior of tuna swarms. Tuna swarms have two common foraging strategies: the spiral foraging strategy and the parabolic foraging strategy. TSO searches for the global optimal value by simulating how the common individuals in the tuna swarm follow the optimal individual to attack the prey. Comparisons of TSO with the Whale Optimization Algorithm (WOA) [28], the Salp Swarm Algorithm (SSA) [29], and some other advanced algorithms indicate that TSO outperforms these competitors. The tuna swarm optimization algorithm has the advantages of fewer parameters and easy realization. Therefore, after it was proposed, TSO was widely studied and applied in engineering practice. Although TSO performs very well in many engineering practices, it still has some shortcomings. Firstly, TSO cannot efficiently search for the global optimal value; it is easily attracted by local extrema. Secondly, TSO does not converge fast enough. Finally, the followers of the optimal individual blindly follow the latter, so local exploitation is lacking. At present, Hu et al. have used Gaussian mutation to improve the TSO algorithm and have applied the improved algorithm to photovoltaic power prediction [30]. Kumara et al. improved the TSO algorithm by using chaotic maps to increase the diversity of the algorithm population [31]. This paper proposes an improved tuna swarm optimization algorithm (CLTSO) based on the Circle chaotic map [32], the Levy flight operator, and a nonlinear adaptive operator. The innovations made in this article are summarized as follows:


(1) The improved Circle chaotic map is used to initialize the tuna swarm, which makes the initial candidate solutions distribute more uniformly in the search space and increases the population diversity of the algorithm.

(2) The Levy flight operator is introduced into the swarm update strategy of CLTSO, which strengthens the global exploration capability of the algorithm and helps it escape the attraction of local extrema.

(3) In the iterative process of CLTSO, a nonlinear convergence factor is introduced to balance exploration and exploitation. A large convergence factor in the initial iterations brings the common individuals closer to the optimal individuals, while a smaller convergence factor at the end of the iterations increases the capability of the followers to explore their local scope.

This article covers the following aspects: Section 1 introduces the meta-heuristic algorithm and the tuna swarm optimization algorithm. Section 2 reviews the two foraging strategies of the original tuna swarm optimization algorithm. Section 3 introduces the improved Circle chaotic map strategy, the Levy flight operator, and the nonlinear adaptive weight operator, and how these operators are used to improve TSO. Section 4 compares CLTSO with some classical and advanced meta-heuristics and presents experimental analyses. Section 5 modifies the BP neural network based on CLTSO and then tests the new model using three popular datasets. Finally, Section 6 summarizes the article.

The main mathematical symbols mentioned in this paper are shown in Table 1.

**Table 1.** Explanation of symbols.


#### **2. An Overview of the Tuna Swarm Optimization Algorithm**

Tuna are top predators in the ocean. Although tuna swim very fast, some small prey are more agile than tuna. Therefore, during predation, tuna often choose group cooperation to capture prey. The tuna swarm has two efficient predatory strategies, namely, the spiral foraging strategy and the parabolic foraging strategy. When the tuna swarm uses the parabolic foraging strategy, each tuna closely follows the previous individual, and the swarm forms a parabola to surround the prey. When the tuna swarm adopts the spiral foraging strategy, it aggregates into a spiral shape and drives the prey to shallow water areas, where the prey are more likely to be captured. By observing these two foraging behaviors of the tuna swarm, researchers proposed a new swarm intelligence optimization algorithm called TSO.

#### *2.1. Population Initialization*

There are *NP* tunas in a tuna swarm. At the swarm initialization phase, the tuna swarm optimization algorithm randomly generates the initial swarm in the search space. The mathematical formulas for initializing tuna individuals are as follows:

$$X\_i^{\text{init}} = rand \cdot (ub - lb) + lb = \begin{bmatrix} x\_i^1 & x\_i^2 & \cdots & x\_i^{Dim} \end{bmatrix}, \quad i = 1, 2, \dots, NP \tag{1}$$

where $X_i^{\text{init}}$ is the *i*-th tuna, *ub* and *lb* are the upper and lower boundaries of the tuna exploration range, and *rand* is a random variable uniformly distributed between 0 and 1. In particular, each individual $X_i^{\text{init}}$ in the tuna swarm represents a candidate solution of TSO and consists of a set of *Dim*-dimensional numbers.
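As a minimal sketch of Equation (1), assuming NumPy and illustrative swarm size and bounds (the function and variable names here are ours, not the paper's):

```python
import numpy as np

def init_swarm(np_pop, dim, lb, ub, seed=0):
    """Random swarm initialization, Equation (1): X = rand * (ub - lb) + lb."""
    rng = np.random.default_rng(seed)
    # each of the np_pop rows is one Dim-dimensional candidate solution
    return rng.random((np_pop, dim)) * (ub - lb) + lb

swarm = init_swarm(np_pop=30, dim=10, lb=-100.0, ub=100.0)
```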

#### *2.2. Parabolic Foraging Strategy*

Herring and eel are the main food sources of tuna. When they encounter predators, they use their speed advantage to constantly change their swimming direction, making it very difficult for predators to catch them. Because tuna are less agile than their prey, the tuna swarm takes a cooperative approach to attack the prey. Using the prey as a reference point, the swarm keeps chasing it. During predation, each tuna follows the previous individual, and the whole tuna swarm forms a parabola to surround the prey. In addition, the tuna swarm also uses a spiral foraging strategy. Assuming that the probability of the tuna swarm choosing either strategy is 50%, the mathematical model of the parabolic foraging of the tuna swarm is as follows:

$$X\_{i}^{t+1} = \begin{cases} X\_{\text{best}}^{t} + rand \cdot (X\_{\text{best}}^{t} - X\_{i}^{t}) + TF \cdot p^{2} \cdot (X\_{\text{best}}^{t} - X\_{i}^{t}), \text{if } \text{rand} < 0.5\\ TF \cdot p^{2} \cdot X\_{i}^{t}, & \text{if } \text{rand} \ge 0.5 \end{cases} \tag{2}$$

$$p = \left(1 - \frac{t}{t\_{\text{max}}}\right)^{(t/t\_{\text{max}})} \tag{3}$$

where *t* denotes the current iteration, $t_{\text{max}}$ is the preset maximum number of iterations, and *TF* is a random value of 1 or −1.
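A hedged one-step sketch of Equations (2) and (3); the names and the NumPy framing are our own, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def parabolic_update(x_i, x_best, t, t_max):
    """One parabolic-foraging update of tuna x_i, Equations (2) and (3)."""
    p = (1 - t / t_max) ** (t / t_max)   # Equation (3): decays to 0 over the run
    TF = rng.choice([-1.0, 1.0])         # random value of 1 or -1
    if rng.random() < 0.5:
        # chase the best tuna
        return x_best + rng.random() * (x_best - x_i) + TF * p**2 * (x_best - x_i)
    # otherwise rescale the current position
    return TF * p**2 * x_i
```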

#### *2.3. Spiral Foraging Strategy*

Besides the parabolic foraging strategy, there is another efficient cooperative foraging strategy, the spiral foraging strategy. While chasing the prey, most tuna cannot choose the right direction, but a small number of tuna can guide the swarm to swim in the right direction. When a small group of tuna starts chasing the prey, the nearby tuna follow this small group of individuals, and eventually the entire tuna swarm forms a spiral formation to catch the prey. When the tuna swarm adopts the spiral foraging strategy, each individual exchanges information with the individual it follows, either the best individual in the swarm or an adjacent individual. Sometimes the best individual cannot lead the swarm to capture the prey effectively; the tuna then select a random individual in the swarm to follow. The mathematical formula of the spiral foraging strategy is as follows:

$$X\_{i}^{t+1} = \begin{cases} \alpha\_1 \cdot (X\_{rand}^t + \tau \cdot |X\_{rand}^t - X\_i^t| + \alpha\_2 \cdot X\_i^t), & i = 1,\ rand \ge \frac{t}{t\_{\text{max}}} \\ \alpha\_1 \cdot (X\_{rand}^t + \tau \cdot |X\_{rand}^t - X\_i^t| + \alpha\_2 \cdot X\_{i-1}^t), & i = 2, 3, \dots, NP,\ rand \ge \frac{t}{t\_{\text{max}}} \\ \alpha\_1 \cdot (X\_{best}^t + \tau \cdot |X\_{best}^t - X\_i^t| + \alpha\_2 \cdot X\_i^t), & i = 1,\ rand < \frac{t}{t\_{\text{max}}} \\ \alpha\_1 \cdot (X\_{best}^t + \tau \cdot |X\_{best}^t - X\_i^t| + \alpha\_2 \cdot X\_{i-1}^t), & i = 2, 3, \dots, NP,\ rand < \frac{t}{t\_{\text{max}}} \end{cases} \tag{4}$$

where $X_i^{t+1}$ denotes the *i*-th tuna in the (*t* + 1)-th iteration. The current best individual is $X_{best}^t$. $X_{rand}^t$ is a reference point randomly selected from the tuna swarm. $\alpha_1$ is the trend weight coefficient that controls how far a tuna swims toward the optimal individual or a randomly selected adjacent individual. $\alpha_2$ is the trend weight coefficient that controls how far a tuna swims toward the individual in front of it. $\tau$ is the distance parameter that controls the distance between a tuna and the optimal individual or the randomly selected reference individual. Their mathematical models are as follows:

$$\alpha\_1 = a + (1 - a) \cdot \frac{t}{t\_{\text{max}}} \tag{5}$$

$$\alpha\_2 = (1 - a) - (1 - a) \cdot \frac{t}{t\_{\text{max}}} \tag{6}$$

$$\tau = e^{bl} \cdot \cos(2\pi b) \tag{7}$$

$$l = e^{3\cos\left(\left(\left(t\_{\text{max}} + 1/t\right) - 1\right)\pi\right)} \tag{8}$$

where *a* is a constant that measures the degree of tuna following, *b* is a random number uniformly distributed in the range [0, 1], and *l* is an intermediate variable that varies with the iteration number.
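Equations (5)–(8) can be computed together; the following sketch assumes *a* = 0.7 as an illustrative value and *t* ≥ 1, since Equation (8) divides by *t*:

```python
import math, random

def spiral_coefficients(t, t_max, a=0.7):
    """Coefficients of the spiral foraging strategy, Equations (5)-(8).

    a = 0.7 is an illustrative assumption; t is assumed >= 1 because
    Equation (8) divides by t.
    """
    alpha1 = a + (1 - a) * t / t_max                              # Equation (5)
    alpha2 = (1 - a) - (1 - a) * t / t_max                        # Equation (6)
    l = math.exp(3 * math.cos(((t_max + 1 / t) - 1) * math.pi))   # Equation (8)
    b = random.random()
    tau = math.exp(b * l) * math.cos(2 * math.pi * b)             # Equation (7)
    return alpha1, alpha2, tau
```

Note that Equations (5) and (6) imply $\alpha_1 + \alpha_2 = 1$ at every iteration, so the two following tendencies always share a fixed total weight.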

#### *2.4. Pseudocode of TSO*

The pseudocode of the original TSO is displayed in Algorithm 1. The flow chart of TSO is displayed in Figure 1.

#### **Algorithm 1** Pseudocode of TSO Algorithm

```
Initialization: set parameters NP, Dim, a, z and t_max
Initialize the positions of the tuna X_i (i = 1, 2, ..., NP) by Equation (1)
t = 0
while t < t_max do
    Calculate the fitness value of each tuna
    Update the position and fitness value of the best tuna X^t_best
    for each tuna X_i do
        Update α1, α2, p by Equations (5), (6) and (3)
        if rand < z then
            Update X^{t+1}_i by Equation (1)
        else if rand ≥ z then
            if rand < 0.5 then
                Update X^{t+1}_i by Equation (4)
            else if rand ≥ 0.5 then
                Update X^{t+1}_i by Equation (2)
    t = t + 1
return the best fitness value f(X_best) and the best tuna X_best
```
In the iterative process of the TSO algorithm, each tuna will randomly choose to perform either the spiral foraging strategy or the parabolic foraging strategy. Tuna will also generate new individuals in the search range according to probability *Z*. Therefore, TSO will choose different strategies according to *Z* when generating new individual positions. During the execution of the TSO algorithm, all tuna individuals in the population are constantly updated until the number of iterations reaches a predetermined value. Finally, the TSO algorithm returns the optimal individual in the population and its optimal value.

The following advantages of TSO can be seen from Algorithm 1: (1) The TSO algorithm has fewer adjustable parameters, which is beneficial to the implementation of the algorithm. (2) This algorithm will save the position of the best tuna individual in each iteration; even if the quality of the candidate solution decreases, it will not affect the location of the optimal value. (3) The TSO algorithm can keep the balance between exploitation and exploration by selecting two foraging strategies.
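The loop of Algorithm 1 can be sketched in Python roughly as follows. This is a hedged re-implementation rather than the authors' code: the default parameter values, the *z*-regeneration step, and the branch condition that switches the spiral reference from a random tuna to the best tuna are our assumptions:

```python
import numpy as np

def tso(fitness, dim, lb, ub, np_pop=30, t_max=200, a=0.7, z=0.05, seed=0):
    """Minimal sketch of the TSO loop in Algorithm 1 (not the authors' code).

    fitness maps a 1-D position vector to a scalar to be minimized.
    """
    rng = np.random.default_rng(seed)
    X = rng.random((np_pop, dim)) * (ub - lb) + lb        # Equation (1)
    fit = np.apply_along_axis(fitness, 1, X)
    best, best_fit = X[fit.argmin()].copy(), float(fit.min())

    for t in range(t_max):
        alpha1 = a + (1 - a) * t / t_max                  # Equation (5)
        alpha2 = (1 - a) - (1 - a) * t / t_max            # Equation (6)
        p = (1 - t / t_max) ** (t / t_max)                # Equation (3)
        for i in range(np_pop):
            if rng.random() < z:                          # regenerate by Equation (1)
                X[i] = rng.random(dim) * (ub - lb) + lb
            elif rng.random() < 0.5:                      # spiral foraging, Equation (4)
                # Equation (8), shifted by one iteration to avoid 1/0 at t = 0
                l = np.exp(3 * np.cos(((t_max + 1 / (t + 1)) - 1) * np.pi))
                b = rng.random()
                tau = np.exp(b * l) * np.cos(2 * np.pi * b)   # Equation (7)
                # early iterations follow a random tuna, later ones the best tuna
                ref = X[rng.integers(np_pop)] if rng.random() >= t / t_max else best
                prev = X[i] if i == 0 else X[i - 1]
                X[i] = alpha1 * (ref + tau * np.abs(ref - X[i]) + alpha2 * prev)
            else:                                         # parabolic foraging, Equation (2)
                TF = rng.choice([-1.0, 1.0])
                if rng.random() < 0.5:
                    X[i] = best + rng.random() * (best - X[i]) + TF * p**2 * (best - X[i])
                else:
                    X[i] = TF * p**2 * X[i]
            X[i] = np.clip(X[i], lb, ub)
            f = fitness(X[i])
            if f < best_fit:                              # keep the best tuna found so far
                best, best_fit = X[i].copy(), f
    return best, best_fit

# Example: minimize the 10-dimensional sphere function.
best, best_fit = tso(lambda x: float(np.sum(x**2)), dim=10, lb=-100.0, ub=100.0)
```

Even this unadorned version drives the sphere function close to its optimum, which reflects advantage (2) above: the best position is preserved across iterations regardless of how the candidates move.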

**Figure 1.** Flow chart of TSO.

#### **3. The Improved Tuna Swarm Optimization Algorithm**

This section introduces an improved nonlinear tuna swarm optimization algorithm, CLTSO, based on the Circle chaotic map and the Levy flight operator. Firstly, population initialization using the Circle chaotic map increases the diversity of the swarm. The combination of TSO and Levy flight gives the algorithm an outstanding global exploration capability. Furthermore, a nonlinear adaptive weight operator is introduced to modify the weight coefficients of the tuna following behavior in CLTSO. In CLTSO, the relationship between global exploration and local exploitation in the iterative process is well balanced.

#### *3.1. Circle Chaotic Map*

Many changes in nature are not random; they seem to conform to special laws. Such a phenomenon is called chaos, and many movements in nature are chaotic [33]. Chaos appears random but follows deterministic laws, which enables a chaotic operator to cover more states in the search space of TSO [34].

Because the position of the tuna is randomly generated in the initialization phase of TSO, the initial tuna easily gather in the same place. The initial tuna swarm then does not fully cover the search space, resulting in small differences between tuna individuals, which greatly reduces the global searching capability of the algorithm. The currently popular chaotic mapping strategies are as follows: Tent [35], Logistic [36], Circle [37], Chebyshev [38], Sinusoidal [39], and Iterative chaotic map [40]. Studying the literature on these chaotic mapping strategies, we found that the Circle chaotic map has more stable chaotic values and a higher coverage rate of the search space [41]. However, our experiments indicate that the distribution of Circle chaotic values is still not uniform: the chaotic values of the original Circle operator cluster in the range [0.2, 0.5]. To make the chaotic value distribution more uniform, we improved the mathematical model of the Circle chaotic mapping strategy.

The mathematical modeling of the original Circle chaotic map is as follows:

$$\mathbf{x}\_{i+1} = \text{mod}(\mathbf{x}\_i + 0.2 - (0.5/2\pi)\sin(2\pi\mathbf{x}\_i), 1) \tag{9}$$

where $x_i$ is the *i*-th chaotic particle and $x_{i+1}$ is the (*i* + 1)-th chaotic particle. The scatter plot and frequency histogram of the initial candidate solutions of the original Circle chaotic mapping operator are displayed in subgraphs (a) and (c) of Figure 2. In the Circle chaotic map experiment, the total number of particles is 2000. Chaotic particles denote the initial candidate solutions of TSO.

**Figure 2.** Frequency distribution histogram of improved Circle chaotic map.

As can be seen from subgraphs (a) and (c) of Figure 2, the chaotic particles are concentrated in the range of [0.2, 0.5] in the chaotic sequence initialized by Circle chaotic map. However, the initial candidate solutions are too concentrated, which will greatly reduce the population diversity of TSO. Therefore, the original Circle chaotic map is improved in this paper [42]. The mathematical modeling of the improved Circle chaotic map is as follows:

$$\mathbf{x}\_{i+1} = \text{mod}(3.85\mathbf{x}\_i + 0.4 - (0.7/3.85\pi)\sin(3.85\pi\mathbf{x}\_i), 1) \tag{10}$$

where $x_i$ is the *i*-th chaotic particle and $x_{i+1}$ is the (*i* + 1)-th chaotic particle.

The scatter plot and frequency histogram of the initial candidate solution of the improved Circle chaotic map operator are displayed in subgraphs (b) and (d) of Figure 2.

From (b) and (d), we can clearly see that, compared to the original Circle chaotic map, the particle distribution of the improved Circle chaotic map is more uniform. Each candidate solution particle of the algorithm is explored in the search space. Therefore, using the improved Circle chaotic map operator to modify TSO can obtain more uniform candidate solutions. The initial tuna individuals uniformly distributed in the search space of the algorithm can significantly increase the population diversity of TSO.
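Under the same NumPy assumption as before (with an arbitrary seed value `x0`, our choice), Equation (10) iterates directly, and the resulting sequence can replace the random initialization of Equation (1):

```python
import numpy as np

def improved_circle_map(n, x0=0.3):
    """Chaotic sequence from the improved Circle map, Equation (10)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = np.mod(3.85 * x[i] + 0.4
                          - (0.7 / (3.85 * np.pi)) * np.sin(3.85 * np.pi * x[i]), 1.0)
    return x

def chaotic_init(np_pop, dim, lb, ub):
    """Initialize the tuna swarm by mapping chaotic values into [lb, ub]."""
    seq = improved_circle_map(np_pop * dim).reshape(np_pop, dim)
    return lb + seq * (ub - lb)

swarm = chaotic_init(30, 10, -100.0, 100.0)
```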

#### *3.2. Levy Flight*

The movements and trajectories of many small animals and insects, such as ants and flies, have the characteristics of Levy flight, and many animals in nature use the Levy flight strategy as an ideal way of foraging. By studying this phenomenon, the French mathematician Paul Pierre Levy proposed the mathematical model of Levy flight [43]. Levy flight is an operator conforming to the Levy distribution. The step size of Levy flight is random and mixes long and short distances, which makes it easier to search over a large and unknown scope compared to Brownian motion [44]. In the searching process, the Levy operator often uses short steps to walk and occasionally uses long steps to jump, which allows it to efficiently escape the effects of local attraction points. Therefore, in random searching problems, many heuristic algorithms adopt this strategy to modify the iterative process, which efficiently helps the algorithm escape the influence of local attraction points [45–47].

The Levy distribution can be expressed by the following mathematical model:

$$L(\mathbf{s}) \sim |\mathbf{s}|^{-1-\beta} \tag{11}$$

where *β* is in the range of (0, 2), *s* is the step size, and *L*(*s*) is the probability density of a step size, *s*, according to Levy modeling. The mathematical modeling of Levy distribution is as follows:

$$L(s, \gamma, \mu) = \begin{cases} \sqrt{\frac{\gamma}{2\pi}} \exp[-\frac{\gamma}{2(s-\mu)}] \frac{1}{(s-\mu)^{3/2}}, 0 < \mu < s < \infty\\ 0 & , \text{otherwise} \end{cases} \tag{12}$$

where *μ* represents the minimum step size with *μ* > 0, and *γ* is a scale parameter. When *s* → ∞, Equation (12) can be written in the following form:

$$L(\mathbf{s}, \gamma, \mu) \approx \sqrt{\frac{\gamma}{2\pi}} \frac{1}{\mathbf{s}^{3/2}} \tag{13}$$

Usually, scholars approximate *L*(*s*) by the following mathematical formula:

$$L(s) \rightarrow \frac{a\beta \cdot \Gamma(\beta)\sin(\pi\beta/2)}{\pi|s|^{1+\beta}}, s \rightarrow \infty \tag{14}$$

where *Γ* represents the gamma function. Its mathematical model is as follows:

$$
\Gamma(z) = \int\_0^\infty t^{z-1} e^{-t} dt\tag{15}
$$

Due to the high complexity of Levy distribution, researchers often use the Mantegna [48] algorithm to simulate Levy flight step size, *s*, which is defined as follows:

$$s = \frac{\mu}{|\nu|^{1/\beta}}\tag{16}$$

where *μ* and *v* are defined as follows:

$$
\mu \sim \mathcal{N}\left(0, \sigma\_{\mu}^{2}\right) \tag{17}
$$

$$\nu \sim \mathcal{N}\left(0, \sigma\_\nu^2\right) \tag{18}$$

$$\sigma\_{\mu} = \left\{ \frac{\Gamma(1+\beta)\sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left[\frac{(1+\beta)}{2}\right] \cdot \beta \cdot 2^{\frac{(\beta-1)}{2}}} \right\}^{1/\beta}, \quad \sigma\_{\nu} = 1\tag{19}$$

where the value of *β* is usually 1.5.
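Equations (16)–(19) translate directly into a step generator; in this sketch the standard-library gamma function stands in for *Γ*, and the function name is our own:

```python
import math, random

def levy_step(beta=1.5):
    """One Levy-distributed step via Mantegna's algorithm, Equations (16)-(19)."""
    sigma_mu = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = random.gauss(0.0, sigma_mu)   # Equation (17)
    nu = random.gauss(0.0, 1.0)        # Equation (18)
    return mu / abs(nu) ** (1 / beta)  # Equation (16)
```

Most generated steps are short, but the heavy tail occasionally produces very long jumps, exactly the mix of local walking and global jumping described above.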

To show the global exploration capability of Levy flight more intuitively, this paper compares Levy flight with random walk strategy. The simulation steps of Levy flight and random walk are set to 300. The comparison results are presented in Figure 3.

**Figure 3.** Simulation comparison experiment diagram of Levy flight and random walk.

Figure 3 shows that Levy flight has a larger search range than the random walk. The jump points of the random walk strategy are concentrated, while the jump points of the Levy flight strategy are widely distributed. Figure 3 fully demonstrates the characteristic of Levy flight that allows it to explore the whole search space more effectively.

#### *3.3. Nonlinear Adaptive Weight*

How to balance the exploration capability and the exploitation capability of a swarm intelligence optimization algorithm is very important, and the weight parameters play an important role in TSO. When the tuna chooses the spiral foraging strategy, the weight parameters $\alpha_1$ and $\alpha_2$ in Equations (5) and (6) determine the degree to which tuna individuals follow the optimal individual to forage, which reflects the optimization process of the algorithm. Similarly, in the parabolic foraging strategy, the weight parameter *p* in Equation (2) determines the degree to which ordinary individuals follow the optimal individual. When the weight parameter is large, the degree of tuna following the optimal individual is higher, which makes the whole tuna population better explore the whole space. When the weight parameter is small, ordinary tuna individuals do not follow the optimal individuals; they swim around a small part of the space, which facilitates exploitation of the field around each individual. To sum up, the exploration and exploitation capabilities of TSO depend on the changes of the weight parameters $\alpha_1$, $\alpha_2$, and *p*.

From Equations (5) and (6), it can be seen that the weight parameters $\alpha_1$ and $\alpha_2$ change linearly. However, the optimization process of TSO is very complex, and the linear changes of $\alpha_1$ and $\alpha_2$ cannot reflect the actual optimization process of the algorithm. Nowadays, in order to overcome the drawbacks caused by linear control weights, many scholars use nonlinear adaptive weights to improve swarm intelligence optimization algorithms [49–51]. Repeated experiments indicate that the optimization effect of the nonlinear adaptive weight strategy is better than that of the linear weight strategy. Therefore, two improved nonlinear weight parameters, $\alpha_{1i}$ and $\alpha_{2i}$, are introduced in this paper. Their mathematical models are as follows:

$$\alpha\_{1i}(t) = \alpha\_{1ini} - (\alpha\_{1ini} - \alpha\_{1fin}) \cdot \sin(\frac{t}{\mu \cdot T\_{\text{Max}}} \cdot \pi) \tag{20}$$

$$\alpha\_{2i}(t) = \alpha\_{2ini} - (\alpha\_{2ini} - \alpha\_{2fin}) \cdot \sin(\frac{t}{\mu \cdot T\_{\text{Max}}} \cdot \pi) \tag{21}$$

where *μ* = 2, $\alpha_{1ini}$ denotes the initial value of $\alpha_1$, $\alpha_{1fin}$ denotes the final value of $\alpha_1$, $\alpha_{2ini}$ denotes the initial value of $\alpha_2$, and $\alpha_{2fin}$ denotes the final value of $\alpha_2$. We compared the improved weight parameters $\alpha_{1i}$ and $\alpha_{2i}$ with the original weight parameters $\alpha_1$ and $\alpha_2$. The results are displayed in Figure 4. In the experiment, $T_{\text{Max}} = 500$.

**Figure 4.** Comparison of weight coefficients *α*<sup>1</sup> and *α*<sup>2</sup> before and after improvement.

It can be clearly seen from Figure 4 that the improved weight parameters $\alpha_{1i}$ and $\alpha_{2i}$ change rapidly in the early stage, which makes ordinary tuna individuals follow the optimal individual more closely and increases the global exploration capability of TSO. The weight parameters $\alpha_{1i}$ and $\alpha_{2i}$ change slowly in the late stage, which enables tuna individuals to explore their surrounding areas and increases the local search capability of TSO.

In the parabolic foraging strategy, a new nonlinear weight parameter $p_i$ is proposed. Its mathematical model is as follows:

$$p\_i(t) = p\_{ini} - (p\_{ini} - p\_{fin}) \cdot \sin(\frac{t}{\mu \cdot T\_{\text{Max}}} \cdot \pi) \tag{22}$$

where $p_{ini}$ represents the initial value of *p*, and $p_{fin}$ represents the final value of *p*. We compare the improved weight parameter $p_i^2$ with the original weight parameter $p^2$. The comparison curves are displayed in Figure 5. In the comparison curve, $T_{\text{Max}} = 500$.

**Figure 5.** Comparison of the weight coefficient *p* before and after the improvement.

As can be seen from Figure 5, the improved weight parameter $p_i^2$ decreases rapidly in the early stage, so a tuna individual can follow its previous individual more closely, which increases the global exploration capability of TSO. The improved weight parameter $p_i^2$ decreases slowly in the late iterations, so tuna individuals can swim and explore the surrounding space, which increases the local exploitation capability of TSO.
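The three nonlinear schedules of Equations (20)–(22) can be computed together. In the sketch below, the endpoint values are assumptions read off the ranges of the original linear schedules (5), (6) and (3), and *a* = 0.7 is illustrative; only *μ* = 2 is taken from the paper:

```python
import math

def nonlinear_weights(t, t_max, a=0.7, mu=2):
    """Nonlinear adaptive weights of Equations (20)-(22)."""
    s = math.sin(t / (mu * t_max) * math.pi)
    a1_ini, a1_fin = a, 1.0        # assumed endpoints of alpha1
    a2_ini, a2_fin = 1.0 - a, 0.0  # assumed endpoints of alpha2
    p_ini, p_fin = 1.0, 0.0        # assumed endpoints of p
    alpha1 = a1_ini - (a1_ini - a1_fin) * s    # Equation (20)
    alpha2 = a2_ini - (a2_ini - a2_fin) * s    # Equation (21)
    p = p_ini - (p_ini - p_fin) * s            # Equation (22)
    return alpha1, alpha2, p
```

Because the sine grows quickly near *t* = 0 and flattens near *t* = $T_{\text{Max}}$, all three weights change fast early and slowly late, matching the curves in Figures 4 and 5.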

#### *3.4. Improved Nonlinear Tuna Swarm Optimization Algorithm Based on Circle Chaotic Map and Levy Flight Operator*

In solving function optimization problems, the TSO algorithm usually uses random data to initialize the population, which may cause candidate solutions to cluster together. This phenomenon leads to poor population diversity and eventually to poor optimization results. The Circle chaotic map has the advantages of randomness and ergodicity. In the optimization process of TSO, these advantages make it easier for the algorithm to escape the attraction of local extrema and help the algorithm maintain the diversity of the swarm. Therefore, the improved Circle chaotic map strategy is introduced to initialize the tuna swarm, and the swarm initialization mechanism is upgraded from Equation (1) to Equation (10).

For a swarm intelligence optimization algorithm, how to escape the influence of local attraction points is a very important issue. The Levy flight strategy is an operator that can strengthen the global exploration capability of TSO. This mechanism often uses short steps to walk and occasionally uses long steps to jump. The low-frequency use of long steps ensures that TSO can extensively search the entire search area, while the high-frequency use of short steps ensures that TSO can locally search its nearest scope. Therefore, this paper introduces the Levy operator to modify the swarm update strategy of TSO. Considering that the jumps of the Levy operator are too intense and may leave the search range during operation, this paper adds step size control parameters on the basis of the original Levy operator. A small step size control parameter confines the search of TSO to a small scope, which enhances the local exploitation ability of TSO without weakening the global exploration ability. A large step size control parameter allows TSO to explore a large scope, which is conducive to solving complex optimization problems.

The original TSO designed the parabolic foraging strategy and the spiral foraging strategy to balance the global exploration and local exploitation capabilities of TSO. However, in the spiral foraging strategy, the linear changes of the weight parameters $\alpha_1$ and $\alpha_2$ cannot handle actual complex problems well. In the parabolic foraging strategy, the change of the weight parameter *p* cannot effectively balance the global and local exploration abilities of TSO. This paper uses nonlinear adaptive weights to modify the spiral foraging strategy and the parabolic foraging strategy in TSO. The mathematical model of the weight parameter $p_i$ is upgraded from Equation (3) to Equation (22), and the mathematical models of $\alpha_{1i}$ and $\alpha_{2i}$ are upgraded from Equations (5) and (6) to Equations (20) and (21), respectively.

The mathematical model of the improved spiral foraging strategy based on the Levy operator and nonlinear adaptive weight strategy is as follows:

$$X_{i}^{t+1} = \begin{cases} \alpha_{1i} \cdot \left(X_{rand}^{t} + L\tau \cdot \left|X_{rand}^{t} - X_{i}^{t}\right|\right) + \alpha_{2i} \cdot X_{i}^{t}, & i = 1 \\ \alpha_{1i} \cdot \left(X_{rand}^{t} + L\tau \cdot \left|X_{rand}^{t} - X_{i}^{t}\right|\right) + \alpha_{2i} \cdot X_{i-1}^{t}, & i = 2, 3, \ldots, NP \\ \alpha_{1i} \cdot \left(X_{best}^{t} + L\tau \cdot \left|X_{best}^{t} - X_{i}^{t}\right|\right) + \alpha_{2i} \cdot X_{i}^{t}, & i = 1 \\ \alpha_{1i} \cdot \left(X_{best}^{t} + L\tau \cdot \left|X_{best}^{t} - X_{i}^{t}\right|\right) + \alpha_{2i} \cdot X_{i-1}^{t}, & i = 2, 3, \ldots, NP \end{cases} \tag{23}$$

where *Lτ* is an improved distance control parameter combined with the Levy operator. Its mathematical model is as follows:

$$L\tau = e^{\alpha \cdot Levy(s) \cdot l} \cdot \cos(2\pi \cdot \alpha \cdot Levy(s)) \tag{24}$$

where *Levy*(*s*) is the step size of the Lévy operator, and *α* is the step size control coefficient. In this article, *α* = 0.01. The mathematical model of improved parabolic foraging strategy based on the Levy operator and nonlinear adaptive weight strategy is as follows:

$$X_{i}^{t+1} = \begin{cases} X_{best}^{t} + \alpha \cdot Levy(s) \cdot \left(X_{best}^{t} - X_{i}^{t}\right) + TF \cdot p_{i}^{2} \cdot \left(X_{best}^{t} - X_{i}^{t}\right), & \text{if rand} < 0.5 \\ TF \cdot p_{i}^{2} \cdot X_{i}^{t}, & \text{if rand} \ge 0.5 \end{cases} \tag{25}$$
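The two improved updates can be sketched as follows. The nonlinear weights *α*<sub>1*i*</sub>, *α*<sub>2*i*</sub>, and *p<sub>i</sub>* of Equations (20)–(22) are taken as given inputs, since their formulas lie outside this excerpt; the range of *l* follows the original TSO spiral, and all function names are ours:

```python
import math
import random

random.seed(0)
BETA = 1.5
SIGMA = (math.gamma(1 + BETA) * math.sin(math.pi * BETA / 2)
         / (math.gamma((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2))) ** (1 / BETA)

def levy():
    """Levy(s) step drawn with Mantegna's algorithm (an assumed sampler)."""
    return random.gauss(0, SIGMA) / abs(random.gauss(0, 1)) ** (1 / BETA)

def l_tau(alpha=0.01):
    """Distance control parameter of Eq. (24); l ~ U[-1, 1] as in the TSO spiral."""
    s = alpha * levy()
    l = random.uniform(-1.0, 1.0)
    return math.exp(s * l) * math.cos(2 * math.pi * s)

def spiral_update(x_i, x_prev, target, a1, a2):
    """Coordinate-wise spiral move of Eq. (23); `target` is X_rand or X_best,
    `x_prev` is X_i (for i = 1) or X_{i-1} (for i >= 2)."""
    lt = l_tau()
    return [a1 * (tg + lt * abs(tg - xi)) + a2 * xp
            for xi, xp, tg in zip(x_i, x_prev, target)]

def parabolic_update(x_i, best, p_i, tf=1.0, alpha=0.01):
    """Improved parabolic move of Eq. (25); TF is the +/-1 direction flag."""
    if random.random() < 0.5:
        step = alpha * levy()
        return [b + step * (b - v) + tf * p_i ** 2 * (b - v)
                for v, b in zip(x_i, best)]
    return [tf * p_i ** 2 * v for v in x_i]
```

Each call moves one tuna; in the full algorithm, the choice between the spiral and parabolic branches and the values of the weights follow the schedules defined in Section 3.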

Based on the above improvement strategies, an improved TSO is proposed, called CLTSO. The pseudocode of CLTSO is shown in Algorithm 2, and the process diagram of CLTSO is shown in Figure 6.

**Algorithm 2** Pseudocode of CLTSO Algorithm


**Figure 6.** Flow chart of CLTSO.

Comparing Algorithms 1 and 2, it is clear that their overall structures are similar; only the update strategies have been changed. Therefore, the improved operators proposed in this paper do not destroy the structural simplicity of the original TSO algorithm.
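For orientation, a compact, runnable sketch of the overall loop on a toy sphere function follows. The simplified moves, the greedy replacement, the uniform initialization (the paper uses the Circle chaotic map here), and the linear stand-in for the nonlinear weight are all our simplifications, not the paper's exact Algorithm 2:

```python
import math
import random

random.seed(0)

def sphere(x):
    """Toy objective (F1): global minimum 0 at the origin."""
    return sum(v * v for v in x)

def levy(beta=1.5):
    """Mantegna-style Levy(s) sample (assumed sampler)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def cltso_sketch(dim=10, n_pop=20, t_max=200, lb=-5.0, ub=5.0, alpha=0.01):
    # Initialization: uniform here for brevity; CLTSO uses the Circle chaotic map.
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_pop)]
    best = min(pop, key=sphere)
    history = [sphere(best)]
    for t in range(t_max):
        p = 1.0 - t / t_max          # stand-in for the nonlinear weight of Eq. (22)
        for i, ind in enumerate(pop):
            if random.random() < 0.5:
                # Levy-perturbed spiral-style move toward the best tuna (Eq. (23) flavour)
                cand = [b + alpha * levy() * abs(b - v) for b, v in zip(best, ind)]
            else:
                # parabolic-style move (Eq. (25) flavour); TF is the +/-1 direction flag
                tf = random.choice((-1.0, 1.0))
                cand = [b + random.random() * (b - v) + tf * p * p * (b - v)
                        for b, v in zip(best, ind)]
            cand = [min(ub, max(lb, v)) for v in cand]   # clamp to the search box
            if sphere(cand) < sphere(ind):               # greedy replacement (our choice)
                pop[i] = cand
        best = min(pop + [best], key=sphere)
        history.append(sphere(best))
    return best, history
```

Because the best individual is carried over each iteration, the best-so-far fitness is non-increasing by construction, matching the monotone convergence curves reported later in Figure 7.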

#### *3.5. Time Complexity Analysis*

Time complexity is an important measure of the efficiency of an algorithm and is commonly denoted by the symbol *O*. It is closely related to the number of instruction operations an algorithm performs. For TSO, the time complexity depends on the number of iterations, the position update mechanism, and the number of fitness function evaluations; for CLTSO, the improved operators also contribute. To compare the time costs of TSO and CLTSO, their time complexities are evaluated as follows, beginning with the operation instructions of TSO.


The instructions in steps 2 to 4 are iterated *T*<sub>Max</sub> times. Combining the above analysis, the time complexity of TSO can be expressed as *O*(*TSO*) = *T*<sub>Max</sub> · [(*N*<sup>2</sup> − *N*)/2 + *N* · *D* + 3].

The time complexity of each operation instruction in CLTSO is analyzed as follows.


Steps 2 to 5 require a total of *T*<sub>Max</sub> iterations. Therefore, the time complexity of CLTSO can be expressed as *O*(*CLTSO*) = *T*<sub>Max</sub> · [(*N*<sup>2</sup> − *N*)/2 + *N* · *D* + 3 + *N*].
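Plugging in representative experiment sizes makes the comparison concrete; the specific values of *N*, *D*, and *T*<sub>Max</sub> below are illustrative choices, not figures from the paper:

```python
def tso_ops(n, d, t_max):
    """Instruction count per the analysis above: T_Max * [(N^2 - N)/2 + N*D + 3]."""
    return t_max * ((n * n - n) // 2 + n * d + 3)

def cltso_ops(n, d, t_max):
    """CLTSO adds one extra N term per iteration for the improved operators."""
    return t_max * ((n * n - n) // 2 + n * d + 3 + n)

# Illustrative sizes: swarm of 30, dimension 100, 1000 iterations.
n, d, t = 30, 100, 1000
overhead = cltso_ops(n, d, t) / tso_ops(n, d, t)
```

With these sizes the extra term amounts to well under one percent of the total operation count, which supports the claim that the two algorithms have very close time complexity.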

Compared with the original tuna swarm optimization algorithm, the three operators proposed in this paper increase the time cost only slightly; CLTSO and TSO have very close time complexity.

#### **4. Simulation Experiments and Results Analysis**

To verify the effectiveness of the proposed CLTSO in solving different optimization problems, 22 benchmark functions are applied in this section to design a series of experiments comparing CLTSO with other well-known meta-heuristic algorithms. In addition, to illustrate the outstanding performance of CLTSO, we compare it against the tuna swarm optimization algorithm (TSO), the improved TSO based on the Levy flight operator (LTSO), and the improved TSO based on the Circle chaotic map and nonlinear adaptive weights (CTSO). Finally, this section provides a detailed analysis of the experimental results.

#### *4.1. Benchmark Function*

Twenty-two benchmark functions of different types are selected to evaluate the capability of CLTSO, covering unimodal, multimodal, fixed-dimension multimodal, and composite functions from CEC2014 [52]. A survey of the relevant literature shows that CEC2014 is a classic test suite, so it can serve as a benchmark for evaluating the performance of the proposed algorithm. The mathematical models are given in Table 2. *F*1~*F*<sup>7</sup> are unimodal functions, used to evaluate the convergence rate of an algorithm. *F*8~*F*<sup>14</sup> are multimodal functions, applied to verify whether an algorithm has good global exploration capability. *F*15~*F*<sup>22</sup> are CEC2014 functions, applied to test the comprehensive capability of these algorithms.

#### *4.2. Comparison Algorithm and Parameter Setting*

Based on these 22 benchmark functions, a series of comparative experiments is designed to test the selected algorithms, which include Accelerated Particle Swarm Optimization (APSO) [53], WOA, the Fitness-Distance Balance based adaptive guided differential evolution (FDB-AGDE) algorithm [54], the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [55], TSO, and CLTSO. The parameter values of the algorithms involved in these experiments are shown in Table 3; the symbol '~' indicates that an algorithm has no parameters to set. Functions *F*1~*F*<sup>13</sup> are tested in 30 and 100 dimensions, respectively, and *F*<sup>14</sup> is tested in its suitable dimension. The eight CEC2014 benchmark functions are tested in 50 dimensions. The maximum number of evaluations for *F*1~*F*<sup>14</sup> is 1000. Because the CEC benchmark functions are complex, the number of evaluations for the 8 CEC2014 functions is set to 5000 without losing representativeness. The swarm size of each algorithm is 30. To avoid accidental interference, we run each algorithm 30 times independently in each experiment.
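The 30-run protocol can be sketched as a small harness; the toy objective below is a stand-in of ours, not one of the benchmark functions:

```python
import random
import statistics

def run_trials(optimizer, n_runs=30, seed0=0):
    """Run an optimizer independently n_runs times and report the Mean and
    Std columns used in the result tables (population Std for simplicity)."""
    results = []
    for r in range(n_runs):
        random.seed(seed0 + r)      # independent, reproducible runs
        results.append(optimizer())
    return statistics.mean(results), statistics.pstdev(results)

def one_run():
    """Toy stand-in for one optimization run: best of 1000 random samples
    of f(x) = x on [0, 1] (true optimum 0)."""
    return min(random.uniform(0.0, 1.0) for _ in range(1000))

mean, std = run_trials(one_run)
```

Seeding each run separately keeps the 30 runs independent yet reproducible, so the reported Mean and Std are stable across re-executions.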



**Table 3.** Parameter values of the algorithms.


#### *4.3. Results and Analysis*

Table 4 shows the experimental results of CLTSO and the other algorithms on the low-dimensional benchmark functions (dimension = 30), where Std is the standard deviation and Mean is the mean value. Mean represents the solution accuracy of the algorithms, while Std reflects their stability during the solution process. *F*<sup>14</sup> is tested in its own dimension. Table 5 displays the experimental results of CLTSO and the other algorithms on the high-dimensional benchmark functions (dimension = 100). The experimental results of the eight composite functions from CEC2014 are displayed in Table 6.


**Table 4.** Experimental results in 30 dimensions.


**Table 5.** Experimental results in 100 dimensions.

**Table 6.** Simulation results of CEC2014 functions.


As can be seen from Table 4, on the low-dimensional functions, the optimization accuracy of CLTSO is only slightly weaker than that of its competitors on *F*6, *F*8, *F*12, and *F*13. On the remaining 10 benchmark functions, CLTSO not only achieves significantly better solution accuracy than its competitors but also better robustness. This shows that the Circle chaotic map operator helps CLTSO obtain more diverse candidate solutions, each of which is continuously updated during the iterations until the optimal solution is finally selected.

When the dimension of the benchmark functions is 100, CLTSO shows better optimization performance in dealing with higher-dimensional and more complex problems. Only on the *F*<sup>8</sup> test function is the optimization accuracy of CLTSO slightly worse than that of CMA-ES. On the remaining 12 functions, CLTSO has the best optimization accuracy, and it finds the theoretical optimal value on *F*1, *F*2, *F*3, *F*4, *F*9, *F*10, and *F*11. In terms of robustness, CLTSO obtains the minimum Std value on all benchmark functions, which indicates that CLTSO explores more stably than its competitors. This is because the Circle chaotic map strategy gives CLTSO richer population diversity, allowing the initial tuna to be distributed evenly in the search space. In addition, during the execution of CLTSO, the Levy flight operator strengthens the exploration capability of the algorithm, and the nonlinear adaptive weight operator balances the exploration and exploitation capabilities of CLTSO well.

The experimental results of the CEC2014 function indicate that all algorithms do not obtain the theoretical optimal value, but CLTSO can still achieve more excellent optimization accuracy than other competitors in *F*16~*F*22. This effectively proves that the improved nonlinear tuna swarm optimization algorithm based on the Circle chaotic map strategy and the Levy flight operator can adapt to more complex and challenging optimization problems.

To observe the convergence ability of CLTSO and the competitors more intuitively, Figure 7 draws their convergence curves. The curves of *F*1~*F*<sup>13</sup> are drawn in 100 dimensions, the curve of *F*<sup>14</sup> in its suitable dimension, and the curves of *F*15~*F*<sup>22</sup> in 50 dimensions.

The convergence curves of these algorithms indicate that CLTSO has a better convergence performance than the competitors. For simple optimization problems, CLTSO can obtain theoretical optimal values within 500~600 iterations. For complex and challenging problems, CLTSO can also maintain a faster convergence rate and get rid of the influence of local attraction points, and ultimately achieve higher optimization accuracy.

In order to further show whether CLTSO has an obvious advantage over the other algorithms, this paper uses the Wilcoxon [56] and Friedman statistical methods to analyze the experimental results of these algorithms on the 100-dimensional benchmark functions. The results for *F*<sup>14</sup> are based on its suitable dimension. The experimental data of the eight CEC2014 benchmark functions are measured in 50 dimensions. The results of the Friedman test and the *p*-values of the Wilcoxon test are listed in Tables 7 and 8, respectively.

**Table 7.** Results of Friedman test.


**Figure 7.** *Cont*.

**Figure 7.** Convergence curve of each algorithm (F1~F22).

The Friedman test is a nonparametric statistical analysis method that uses rank means to test whether multiple population distributions differ significantly. Because the problem in this paper is minimization, a smaller rank mean in the Friedman test indicates better algorithm performance. As can be seen from Table 7, CLTSO has the smallest rank mean and TSO ranks second, followed by CMA-ES, WOA, FDB-AGDE, and APSO.
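The rank means behind such a Friedman comparison can be reproduced in a few lines (minimization, so rank 1 is best; ties receive averaged ranks). The significance values themselves would come from standard implementations such as `scipy.stats.friedmanchisquare` and `scipy.stats.wilcoxon`:

```python
def rank_means(scores):
    """Friedman-style rank means.

    scores[f][a] is the mean result of algorithm a on benchmark f.
    Returns one rank mean per algorithm; smaller is better.
    """
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            # group exact ties and give them the average rank of the group
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]
```

Averaging tied ranks matters here because several algorithms reach the same theoretical optimum on some functions, as noted for CLTSO and TSO below.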

In the Wilcoxon statistical test results, a *p*-value less than 0.05 and close to 0 indicates that the experimental results of the two algorithms are significantly different; a *p*-value above 0.05 indicates no significant difference; and a *p*-value reported as NaN means the results of the two algorithms are identical. As can be seen from Table 8, except for the last column, the *p*-values of CLTSO are mostly less than 0.05 and close to 0, which indicates that CLTSO has significant advantages over the other algorithms. Half of the *p*-values for CLTSO vs. TSO exceed 0.05; this is because both CLTSO and TSO find the theoretical optimal value on those functions, or the optimum found by TSO differs little from that found by CLTSO. Nevertheless, the optimization curves of TSO and CLTSO show that even on the functions with *p*-values above 0.05, CLTSO generally converges much faster than TSO.


**Table 8.** Results of Wilcoxon test.

Finally, this paper quantitatively analyzes all the algorithms in the experiment based on the mean absolute error (MAE) over the 22 benchmark functions. In mathematics, MAE is a measure of the error between paired observations expressing the same phenomenon. Its mathematical model is as follows:

$$MAE = \frac{\sum_{i=1}^{N} |m_i - o_i|}{N} \tag{26}$$

where *N* is the total number of benchmark functions used for testing, *m<sub>i</sub>* is the average of the optimal results calculated by the algorithm on the *i*th benchmark function, and *o<sub>i</sub>* is the theoretical optimal value of the *i*th benchmark function.
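Equation (26) translates directly into code:

```python
def mae(means, optima):
    """Mean absolute error of Eq. (26): the average of |m_i - o_i|
    over the N benchmark functions."""
    assert len(means) == len(optima)
    return sum(abs(m - o) for m, o in zip(means, optima)) / len(means)
```

For example, an algorithm averaging 1.0 and 2.0 on two functions whose true optima are both 0 has an MAE of 1.5; ranking algorithms by this single number is what Table 9 reports.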

Table 9 shows the MAE ranking of these algorithms. The MAE value of CLTSO ranks first among all competitors, and FDB-AGDE ranks second. These data intuitively illustrate the advantage of CLTSO.



The time consumed by these algorithms on functions *F*1~*F*<sup>22</sup> is shown in Table 10, in seconds. The analysis of the consumed time indicates that the time complexity of CLTSO is slightly higher than that of TSO, but the increase is trivial: the improved operators proposed in this paper add little time cost while greatly enhancing the optimization performance of the CLTSO algorithm.


**Table 10.** The execution time of each algorithm.

#### *4.4. Effectiveness Analysis of Improved Operators*

This paper makes three improvements to the original tuna swarm optimization algorithm. First, the improved Circle chaotic map strategy is introduced in the initialization phase, which expands the swarm diversity. Second, the Levy operator is introduced in the position update phase, which strengthens the global swimming ability of the tuna. Third, the nonlinear adaptive weight strategy is introduced in the iteration stage of TSO, which effectively balances the exploration and exploitation capabilities of the tuna swarm. Section 3 of this paper shows that the proposed operators significantly improve the optimization performance of TSO. In addition, to verify the effectiveness of these improvements, we select the original tuna swarm optimization algorithm (TSO), the improved TSO based on the Levy flight operator (LTSO), the improved TSO based on the Circle chaotic map and nonlinear adaptive weights (CTSO), and CLTSO for a set of comparative experiments. Functions *F*1~*F*<sup>22</sup> are used to test these algorithms in this section, and each algorithm runs 30 times independently. *F*1~*F*<sup>13</sup> are tested in 100 dimensions, *F*<sup>14</sup> in its suitable dimension, and *F*15~*F*<sup>22</sup> in 50 dimensions.
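The Circle-chaos initialization named above can be sketched as follows. The map constants `a = 0.5` and `b = 0.2` and the seed `x0` are common choices in the chaotic-initialization literature and are assumptions here, since the paper's exact constants appear in Section 3, outside this excerpt:

```python
import math

def circle_map_sequence(n, x0=0.3, a=0.5, b=0.2):
    """Circle chaotic map: x_{k+1} = (x_k + b - (a / (2*pi)) * sin(2*pi*x_k)) mod 1.
    Parameters a, b, and x0 are assumed values, not taken from the paper."""
    xs, x = [], x0
    for _ in range(n):
        x = (x + b - (a / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0
        xs.append(x)
    return xs

def init_population(n_pop, dim, lb, ub):
    """Map the chaotic sequence in [0, 1) onto the search box [lb, ub]^dim."""
    seq = circle_map_sequence(n_pop * dim)
    return [[lb + seq[i * dim + j] * (ub - lb) for j in range(dim)]
            for i in range(n_pop)]
```

Because the map has no fixed point for these constants, the sequence keeps moving around the unit interval, which is what spreads the initial tuna more evenly than independent uniform draws.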

The experimental results of various versions of the improved tuna swarm optimization algorithm are displayed in Table 11. Their convergence curves are displayed in Figure 8.


**Table 11.** Experimental results of various versions of the improved TSO.


**Table 11.** *Cont.*

As can be seen from Table 11 and Figure 8, CLTSO has higher optimization accuracy than its competitors. On benchmark functions *F*1, *F*2, *F*3, *F*4, *F*9, *F*10, *F*11, and *F*14, CLTSO, CTSO, and LTSO can all reach the theoretical optimal values, but CLTSO converges much faster than CTSO and LTSO. These data indicate that the optimization performance of CTSO and LTSO is improved over the original tuna swarm optimization algorithm, which further confirms the validity of the three modified operators in CLTSO. To demonstrate that the optimization capability of CLTSO is greatly enhanced compared with CTSO and LTSO, Friedman statistical analysis and MAE ranking are conducted on the data in Table 11. The analysis and ranking results are listed in Tables 12 and 13.


**Table 12.** Results of Friedman statistical analysis.

**Table 13.** MAE ranking results.


According to the above two tables, CLTSO has the smallest rank mean in the Friedman test; LTSO ranks second and CTSO third, followed by TSO. According to the MAE value of each algorithm, CLTSO ranks first. These rankings show that CLTSO approximates the theoretical optimal value more closely when dealing with optimization problems and performs much better than its competitors. Therefore, the above data and analysis results confirm that the three improved operators proposed in this paper are effective.

**Figure 8.** *Cont*.

**Figure 8.** Convergence curves of each version of improved TSO.

#### **5. Optimization Engineering Example Using CLTSO**

The original intention of meta-heuristic algorithms is to optimize real engineering problems, and improving the precision of engineering practice is a central concern of researchers. To verify the effectiveness of CLTSO on real engineering problems, CLTSO is applied to the improved design of a BP neural network. The BP neural network is a classical multi-layer feed-forward network trained by error back propagation and is one of the most mature and widely used artificial neural network models. It is widely applied in pattern recognition, classification and prediction, nonlinear modeling, and other areas. Figure 9 shows a BP neural network topology with *d* input neurons, *l* output neurons, and *q* hidden-layer neurons.

**Figure 9.** BP neural network topology.

*v<sub>ih</sub>* is the weight between the *i*th node in the input layer and the *h*th node in the hidden layer. *w<sub>hj</sub>* is the weight between the *h*th node in the hidden layer and the *j*th node in the output layer. The threshold of the *j*th node in the output layer is denoted by *θ<sub>j</sub>*. Therefore, the input value received by the *h*th neuron in the hidden layer of the network model is as follows:

$$\alpha_h = \sum_{i=1}^{d} v_{ih} x_i \tag{27}$$

The value received by the *j*th node in the output layer is as follows:

$$\beta_j = \sum_{h=1}^{q} w_{hj} b_h \tag{28}$$

where *bh* is the output value of the *h*th neuron in the hidden layer. Taking training case (*xk*, *yk*) as an example, we assume that the output of the network model is as follows:

$$\hat{y}_k = \left(\hat{y}_1^k, \hat{y}_2^k, \ldots, \hat{y}_l^k\right) \tag{29}$$

$$\hat{y}_j^k = f\left(\beta_j - \theta_j\right) \tag{30}$$

Therefore, the mean square error of the network on example (*xk*, *yk*) is as follows:

$$E_k = \frac{1}{2} \sum_{j=1}^{l} \left(\hat{y}_j^k - y_j^k\right)^2 \tag{31}$$

where *ŷ<sub>j</sub><sup>k</sup>* is the output value of the network model at the *j*th output node, and *y<sub>j</sub><sup>k</sup>* is the corresponding real value of the training sample (*x<sub>k</sub>*, *y<sub>k</sub>*).
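Equations (27)–(31) amount to one forward pass and one error evaluation. In the sketch below, the hidden-layer threshold vector `gamma` and the sigmoid activation are assumptions of ours, since the excerpt only introduces *θ<sub>j</sub>* explicitly and does not name *f*:

```python
import math

def sigmoid(z):
    """Assumed activation f; the excerpt does not specify it."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, V, gamma, W, theta):
    """Forward pass of the BP network in Figure 9.

    V[i][h] is v_ih (input -> hidden), W[h][j] is w_hj (hidden -> output),
    theta[j] is the output threshold, gamma[h] the hidden threshold.
    """
    # Eq. (27): alpha_h = sum_i v_ih * x_i, then the hidden output b_h
    b = [sigmoid(sum(V[i][h] * x[i] for i in range(len(x))) - gamma[h])
         for h in range(len(gamma))]
    # Eqs. (28) and (30): beta_j = sum_h w_hj * b_h, yhat_j = f(beta_j - theta_j)
    return [sigmoid(sum(W[h][j] * b[h] for h in range(len(b))) - theta[j])
            for j in range(len(theta))]

def error_k(yhat, y):
    """Eq. (31): E_k = 1/2 * sum_j (yhat_j - y_j)^2."""
    return 0.5 * sum((a - t) ** 2 for a, t in zip(yhat, y))
```

A network with *d* = 2, *q* = 3, *l* = 1 then maps a two-element input to a single sigmoid output, and `error_k` scores it against the sample's true value.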

In the training process of the model, the error is propagated back to the hidden nodes. The model adjusts the weights and thresholds between the nodes of each layer based on the error until the error reaches a satisfactory accuracy. At present, the training methods of the BP neural network are mostly based on gradient descent, and the training accuracy of the network model is extremely sensitive to the initial weights and the learning rate. Therefore, when the objective function has multiple extrema, the neural network is easily attracted by local extrema, which seriously degrades its performance. In order to optimize the performance of the BP network model and verify the optimization ability of CLTSO, a CLTSO-BP neural network model is proposed. The basic idea of the model is to treat the weights and thresholds of all nodes in the BP model as one tuna individual in the CLTSO algorithm and to use the MSE as the fitness function. CLTSO minimizes the MSE of the model to obtain optimized initial weights and thresholds.
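The encoding idea can be sketched as follows: flatten every weight and threshold into one position vector, decode it back into network parameters, and score it by the average per-sample error. Variable names and the hidden-threshold vector `gamma` are our assumptions:

```python
import math

def n_params(d, q, l):
    """Length of one tuna individual: every weight and threshold of a BP
    network with d inputs, q hidden nodes, and l outputs."""
    return d * q + q + q * l + l

def decode(vec, d, q, l):
    """Unpack a flat CLTSO position vector into (V, gamma, W, theta)."""
    it = iter(vec)
    V = [[next(it) for _ in range(q)] for _ in range(d)]    # v_ih
    gamma = [next(it) for _ in range(q)]                    # hidden thresholds
    W = [[next(it) for _ in range(l)] for _ in range(q)]    # w_hj
    theta = [next(it) for _ in range(l)]                    # theta_j
    return V, gamma, W, theta

def fitness(vec, data, d, q, l):
    """CLTSO fitness of one tuna: mean of the per-sample errors E_k over
    the training set, i.e. the MSE the paper minimizes."""
    V, gamma, W, theta = decode(vec, d, q, l)
    f = lambda z: 1.0 / (1.0 + math.exp(-z))   # assumed sigmoid activation
    total = 0.0
    for x, y in data:
        b = [f(sum(V[i][h] * x[i] for i in range(d)) - gamma[h]) for h in range(q)]
        yhat = [f(sum(W[h][j] * b[h] for h in range(q)) - theta[j]) for j in range(l)]
        total += 0.5 * sum((a - t) ** 2 for a, t in zip(yhat, y))
    return total / len(data)
```

CLTSO then searches over vectors of length `n_params(d, q, l)`, and the best vector found is decoded into the initial weights and thresholds of the BP network before ordinary gradient training.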

To compare the capability of the CLTSO-BP neural network with the original BP model, three popular datasets from the UCI machine learning and intelligent system center, Iris, Wine, and Wine Quality, are selected to design a comparative experiment. This experiment compares the classification accuracy of the CLTSO-BP neural network model and the BP model on the above three datasets.

In the experiment, the total amount of tuna is 30, the CLTSO algorithm is executed 30 times in total, and the neural network is executed 500 times in total. Table 14 shows the comparison results of the CLTSO-BP neural network model and the original BP model.


**Table 14.** The comparison results of the two models.

Comparing the results of the CLTSO-BP neural network and the original BP model on the three datasets shows that the new model obtains better classification results. It also indicates that CLTSO performs excellently on the difficult problem of training multi-layer perceptrons.

#### **6. Conclusions**

The tuna swarm optimization algorithm is widely recognized by scholars because of its simple structure and small number of parameters. Although TSO has excellent optimization performance, it can still be improved: its solving speed on simple problems can be further increased, and on complex problems it struggles to escape the attraction of local optima. Therefore, this article proposes a modified nonlinear tuna swarm optimization algorithm based on the Circle chaotic map and the Levy flight operator. The optimization performance of CLTSO is fully verified on 22 benchmark functions, and the results show that CLTSO outperforms the compared algorithms. The comparison data from the 22 benchmark functions were analyzed using the Wilcoxon test, the Friedman test, and MAE. The analysis indicates that the rank mean and MAE value of CLTSO are superior to those of other advanced algorithms such as CMA-ES. Finally, this paper optimizes the BP neural network with CLTSO. The resulting CLTSO-BP neural network model is tested on three popular datasets from the UCI Machine Learning and Intelligent System Center; compared with the original BP model, the new model improves the classification accuracy. However, for more complex datasets, the classification ability of the CLTSO-BP neural network still needs to be improved. Possible directions include increasing the swarm size and the total number of CLTSO iterations to obtain higher-quality solutions, which is a target of future research. In addition, the performance of CLTSO on some complex multimodal functions can still be improved, which is another key research direction for the future. CLTSO has the advantages of fast convergence and high convergence accuracy and can be applied in practical projects such as workshop scheduling and distribution network reconstruction.

**Author Contributions:** Conceptualization, W.W. and J.T.; methodology, W.W.; software, W.W.; validation, W.W.; formal analysis, W.W.; investigation, W.W.; resources, W.W.; data curation, W.W.; writing—original draft preparation, W.W.; writing—review and editing, W.W. and J.T.; visualization, W.W.; supervision, J.T.; project administration, J.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

