*Article* **CPPE: An Improved Phasmatodea Population Evolution Algorithm with Chaotic Maps**

**Tsu-Yang Wu, Haonan Li and Shu-Chuan Chu \***

College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China; wutsuyang@sdust.edu.cn (T.-Y.W.); lihaonan@sdust.edu.cn (H.L.) **\*** Correspondence: scchu0803@sdust.edu.cn

**Abstract:** The Phasmatodea Population Evolution (PPE) algorithm, inspired by the evolution of the phasmatodea population, is a recently proposed meta-heuristic algorithm that has been applied to solve problems in engineering. Chaos theory has been increasingly applied to enhance the performance and convergence of meta-heuristic algorithms. In this paper, we introduce chaotic mapping into the PPE algorithm to propose a new algorithm, the Chaotic-based Phasmatodea Population Evolution (CPPE) algorithm. The chaotic map replaces the initialization population of the original PPE algorithm to enhance performance and convergence. We evaluate the effectiveness of the CPPE algorithm by testing it on 28 benchmark functions, using 12 different chaotic maps. The results demonstrate that CPPE outperforms PPE in terms of both performance and convergence speed. In the performance analysis, we found that the CPPE algorithm with the Tent map showed improvements of 8.9647%, 10.4633%, and 14.6716%, respectively, in the Final, Mean, and Standard metrics, compared to the original PPE algorithm. In terms of convergence, the CPPE algorithm with the Singer map showed an improvement of 65.1776% in the average change rate of fitness value, compared to the original PPE algorithm. Finally, we applied our CPPE to stock prediction. The results showed that the predicted curve was relatively consistent with the real curve.

**Keywords:** chaotic-based PPE algorithm; meta-heuristic algorithm; chaotic maps

**MSC:** 90C26

**1. Introduction**

The advancement of science and technology has led to the emergence of a multitude of meta-heuristic algorithms that address engineering problems across various fields [1,2]. These algorithms employ randomness and fall into the following two categories: trajectory-based meta-heuristics, which include well-known algorithms such as the Genetic Algorithm [3–6] and Differential Evolution [7]; and population-based meta-heuristics, such as Particle Swarm Optimization [8–10], the Whale Optimization Algorithm (WOA) [11–13], and the Butterfly Optimization Algorithm [14]. Meta-heuristic algorithms are particularly effective in avoiding local optima due to their random nature, which is also the most challenging aspect of their development.

In recent years, chaos theory has been increasingly applied to enhance the performance and convergence of meta-heuristic algorithms. Chaos theory deals with the randomness arising from deterministic systems and is extensively utilized in various fields, including meta-heuristics [15,16]. Several studies have combined chaos theory with meta-heuristics to enhance their performance, including the Chaotic Particle Swarm Optimization [17], Chaotic Imperialist Competitive Algorithm [18], Chaotic Firefly Algorithm [19,20], Chaotic Bat Algorithm [21], Chaotic Genetic Algorithm [22], Chaotic Whale Optimization (CWO) Algorithm [23], Chaotic Dragonfly Algorithm [24], Chaotic Grasshopper Optimization (CGO) Algorithm [25], Chaotic Bird Swarm Algorithm [26], Chaotic Cloud Quantum Bat Hybrid Optimization Algorithm [27], Chaotic Sparrow Search Algorithm [28], and Chaotic Gray Wolf Optimization Algorithm [29,30].

**Citation:** Wu, T.-Y.; Li, H.; Chu, S.-C. CPPE: An Improved Phasmatodea Population Evolution Algorithm with Chaotic Maps. *Mathematics* **2023**, *11*, 1977. https://doi.org/10.3390/math11091977

Academic Editor: Jian Dong

Received: 18 March 2023; Revised: 20 April 2023; Accepted: 21 April 2023; Published: 22 April 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

One of the more recent meta-heuristic algorithms is the Phasmatodea Population Evolution (PPE) algorithm [31,32], inspired by the evolution of phasmatodea populations. This population-based algorithm has strong convergence capabilities and a degree of local optima avoidance. In this study, we aimed to enhance the PPE algorithm's performance and convergence speed by combining it with chaos theory. We propose a new algorithm, the Chaotic-based Phasmatodea Population Evolution (CPPE) algorithm, which replaces the probabilistically-initialized population part of the original PPE with a chaotic map. Our main contributions are listed as follows:


The rest of the paper is organized as follows. We review related research on chaotic-based meta-heuristic algorithms and PPE algorithms in Section 2. Section 3 provides a detailed description of the CPPE algorithm. Experimental results are discussed in Section 4. Finally, Section 5 provides the conclusion.

#### **2. Related Work**

Previous studies have examined meta-heuristic algorithms that incorporate chaotic maps. In 2018, Kaur and Arora [23] proposed the CWO algorithm, which combines chaotic maps and the WOA. They utilized chaotic mapping to adjust the parameter *p* in WOA, comparing the effectiveness of 10 different chaotic mappings. Tent mapping was found to significantly improve the performance of WOA. In a similar vein, Arora and Anand [25] proposed the CGO algorithm, adjusting the parameters *c*<sub>1</sub> and *c*<sub>2</sub> of the Grasshopper Optimization Algorithm (GOA) using chaotic maps and comparing the effectiveness of 10 different chaotic maps. They found that the Circle map significantly improved the performance of GOA.

Altay and Alatas [26] proposed the Bird Swarm Algorithm with Chaotic Mapping (CMBSA) in 2019, using chaotic mapping to initialize the population in the Bird Swarm Algorithm (BSA). The experimental results showed that CMBSA outperformed BSA. In 2021, Zhang and Ding [28] proposed the Chaotic Sparrow Search Algorithm, utilizing Logistic mapping to initialize the population in the Sparrow Search Algorithm (SSA). Xu et al. [29] proposed the Chaotic Gray Wolf Optimization algorithm, which incorporated Chaotic Local Search (CLS) to adjust the radius of the local search in the Gray Wolf Optimization (GWO) algorithm. Similarly, Hao and Sobhani [30] proposed the Adaptive Chaotic Gray Wolf Optimization algorithm, initializing the population using Logistic mapping in the population initialization phase.

In 2022, Gezici and Livatyalı [33] proposed the Chaotic Harris Hawks Optimization (CHHO) algorithm, utilizing 10 different chaotic maps to adjust various variables in the Harris Hawks Optimization (HHO) algorithm. They found that CHHO outperformed HHO. Gharehchopogh et al. [34] proposed the Chaotic Quasi-oppositional Farmland Fertility Algorithm (CQFFA), utilizing the CLS mechanism to adjust the radius of the local search using a chaotic map. Experimental results showed that CQFFA performed better than the Farmland Fertility Algorithm. In 2023, Chen et al. [35] proposed the Chaotic Satin Bowerbird Optimization Algorithm (CSBOA), utilizing the Bernoulli shift map initialization algorithm to initialize the population. Naik [36] proposed the Chaotic Social Group Optimization (CSGO) algorithm, replacing the parameter *c* in the Social Group Optimization (SGO) algorithm using a chaotic map. Experimental results showed that CSGO outperformed SGO.

In 2022, scholars primarily focused on improving and applying the PPE algorithm. Zhu et al. [37] proposed the Multigroup-based PPE algorithm with Multistrategy (MPPE), which divided the population into multiple groups in the initialization stage and incorporated the step size factor of the flower pollen algorithm into the population growth model of some groups. Similarly, Zhuang et al. [38] proposed the Advanced PPE (APPE) algorithm in 2022, which removed population competition and partial evolutionary trend updates and added jumping mechanisms, history-based searches, and population closure moves.

#### **3. Chaotic-Based Phasmatodea Population Evolution (CPPE) Algorithm**

#### *3.1. Phasmatodea Population Evolution (PPE) Algorithm*

The PPE algorithm simulates the way the phasmatodea population evolves. The algorithm consists primarily of three stages: the initialization stage, the population update stage, and the selection of the population evolution trend. Figure 1 shows the flowchart of the PPE algorithm. To explain the PPE and CPPE algorithms, we define some symbols in Table 1.

**Figure 1.** Flowchart of PPE algorithm.


**Table 1.** The symbols used in PPE and CPPE.

In the initialization phase, we randomly initialize an $Np \times d$ matrix $X = [x_1, \dots, x_i, \dots, x_{Np}]$, where each element $x_i$ represents a population of dimension $d$; there are $Np$ populations in total. Each population $x_i$ has two attributes: (1) the population size $p_i$ and (2) the growth rate $a_i$. The initial population size is $p_i = \frac{1}{Np}$, and the initial growth rate is $a_i = 1.1$. At initialization, the population evolution trend $e_i$ is set to 0. After calculating the fitness values, $gbest$ denotes the current global optimal solution. In addition, a $k \times d$ matrix $H = [x_{h1}, \dots, x_{hi}, \dots, x_{hk}]$ stores the historical global optimal solutions, where $x_{hi}$ represents the $i$-th global optimal solution and the number of stored solutions is $k = \log(Np) + 1$. Then, $H$ is sorted from largest to smallest. The role of $H$ is to guide the update of the surrounding populations.
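The initialization phase described above can be sketched in Python as follows. This is a minimal sketch for a minimization problem; the function name, the choice of base 10 for the logarithm in $k = \log(Np) + 1$, and the random generator are our assumptions, not part of the original implementation.

```python
import numpy as np

def ppe_init(Np, d, L, U, fitness, seed=0):
    """Sketch of the PPE initialization phase (minimization assumed)."""
    rng = np.random.default_rng(seed)
    X = L + (U - L) * rng.random((Np, d))   # Np populations in [L, U]^d
    p = np.full(Np, 1.0 / Np)               # initial population size p_i = 1/Np
    a = np.full(Np, 1.1)                    # initial growth rate a_i = 1.1
    e = np.zeros((Np, d))                   # evolution trend e_i = 0
    f = np.array([fitness(x) for x in X])
    gbest = X[np.argmin(f)].copy()          # current global optimal solution
    k = int(np.log10(Np)) + 1               # history size k = log(Np) + 1 (base 10 assumed)
    order = np.argsort(f)                   # sort the history by fitness, best first
    H = X[order[:k]].copy()
    return X, p, a, e, gbest, H

# example on the sphere function
X, p, a, e, gbest, H = ppe_init(Np=100, d=10, L=-100, U=100,
                                fitness=lambda x: float(np.sum(x**2)))
```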

In the population update phase, the $t$-th updated population is represented by $x_i^t$; the $(t+1)$-th updated population is then calculated by Equation (1).

$$\mathbf{x}\_{i}^{t+1} = \mathbf{x}\_{i}^{t} + \mathbf{e}\_{i} \tag{1}$$

After the population is updated, the fitness value needs to be recalculated, and *gbest* and *H* need to be updated.

Finally, the third stage is the selection of the population evolution trend, which involves three cases. We use $f(x_i^t)$ to represent the fitness value of the $t$-th update, and $f(x_i^{t+1})$ that of the $(t+1)$-th update. The first case is $f(x_i^{t+1}) \le f(x_i^t)$; here, Equations (2) and (3) update $p_i$ and $e_i$ of the population.

$$p\_i^{t+1} = a\_i^{t+1} p\_i^t (1 - p\_i^t) \tag{2}$$

$$e\_{i}^{t+1} = (1 - p\_{i}^{t+1})[(x\_{i,H}^{t} - x\_{i}^{t}) \cdot c] + p\_{i}^{t+1}(e\_{i}^{t} + m) \tag{3}$$

For Equation (3), $m$ is a mutation factor; $x_{i,H}^t$ is the historical optimal solution in $H$ whose fitness value is closest to that of $x_i^t$, i.e., $f(x_{i,H}^t) - f(x_i^t)$ is the smallest; $c$ is the impact factor. The second case is $f(x_i^{t+1}) > f(x_i^t)$. However, there is a probability that the population accepts this worse update. We use the *rand* method and $p_i$ to make a probability judgment: if the number randomly generated by *rand* is less than $p_i$, we accept the worse situation and use Equation (2) to update $p_i$. In this case, $e_i$ is updated by Equation (4).

$$\mathbf{e}\_{i}^{t+1} = rand \cdot (\mathbf{x}\_{i,H}^{t} - \mathbf{x}\_{i}^{t}) + st \cdot B \tag{4}$$

Among them, $st$ is the impact factor, and $B$ is a randomly generated $1 \times d$ matrix drawn from the standard normal distribution. The third case concerns competition between populations. First, calculate the distance between $x_i$ and $x_j$; if this distance is less than the defined threshold $G$, computed by Equation (5), there is competition among the populations. In that case, Equations (6) and (7) update $p_i$ and $e_i$ of the population.

$$G = 0.1 \times (\mathcal{U} - L) \frac{Max\\_gen + 1 - t}{Max\\_gen} \tag{5}$$

$$p\_i^{t+1} = p\_i^t + a\_i^t p\_i^t \left(1 - p\_i^t - \frac{f(\mathbf{x}\_j^t)}{f(\mathbf{x}\_i^t)} p\_j^t \right) \tag{6}$$

$$e\_i^{t+1} = e\_i^t + \frac{f(\mathbf{x}\_j^t) - f(\mathbf{x}\_i^t)}{f(\mathbf{x}\_j^t)} (\mathbf{x}\_j^t - \mathbf{x}\_i^t) \tag{7}$$
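The three cases of the evolution-trend selection, Equations (2)–(7), can be sketched for a single population as follows. This is a hypothetical Python rendering for a minimization problem; the parameter values `m`, `c`, and `st` are illustrative placeholders, not those used in the original paper.

```python
import numpy as np

def update_trend(x_t, x_t1, e, p, a, H, f, d,
                 m=0.1, c=0.2, st=0.1, rng=np.random.default_rng(0)):
    """Evolution-trend selection for one population (cases 1 and 2).

    x_t, x_t1 : the t-th and (t+1)-th updated population (1 x d vectors)
    e, p, a   : current trend, population size, and growth rate
    H         : matrix of historical global optima
    f         : fitness function (minimization)
    """
    # x_{i,H}: historical optimum whose fitness is closest to f(x_t)
    xH = H[np.argmin([abs(f(h) - f(x_t)) for h in H])]
    if f(x_t1) <= f(x_t):                  # case 1: the update improved fitness
        p_new = a * p * (1 - p)                                    # Eq. (2)
        e_new = (1 - p_new) * (xH - x_t) * c + p_new * (e + m)     # Eq. (3)
    elif rng.random() < p:                 # case 2: accept a worse update with probability p
        p_new = a * p * (1 - p)                                    # Eq. (2)
        B = rng.standard_normal(d)         # 1 x d standard-normal matrix
        e_new = rng.random() * (xH - x_t) + st * B                 # Eq. (4)
    else:                                  # worse update rejected: keep p and e
        p_new, e_new = p, e
    return p_new, e_new

def competition(x_i, x_j, p_i, p_j, a_i, e_i, f, t, L, U, Max_gen):
    """Case 3: competition between populations closer than the threshold G."""
    G = 0.1 * (U - L) * (Max_gen + 1 - t) / Max_gen                # Eq. (5)
    if np.linalg.norm(x_i - x_j) < G:
        p_i = p_i + a_i * p_i * (1 - p_i - f(x_j) / f(x_i) * p_j)  # Eq. (6)
        e_i = e_i + (f(x_j) - f(x_i)) / f(x_j) * (x_j - x_i)       # Eq. (7)
    return p_i, e_i

# tiny usage example on the sphere function
f = lambda x: float(np.sum(x**2))
H = np.array([[0.0, 0.0]])
p_new, e_new = update_trend(np.array([1.0, 1.0]), np.array([0.5, 0.5]),
                            np.zeros(2), 0.01, 1.1, H, f, d=2)
p_c, e_c = competition(np.array([1.0, 1.0]), np.array([2.0, 2.0]),
                       0.01, 0.01, 1.1, np.zeros(2), f,
                       t=1, L=-100, U=100, Max_gen=100)
```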

#### *3.2. The Proposed CPPE Algorithm*

Chaos is unpredictable, random-like motion arising in a deterministic system. Given an initial value, a chaotic map generates a chaotic sequence that behaves randomly. This property can be used as an initialization method in the PPE algorithm to improve the convergence speed and the ability to find the global optimal solution. The standard PPE algorithm uses a random generator in the initialization phase; compared with a random generator, using chaotic maps to generate the initial population makes it more random and more uniformly distributed.

The initialization of CPPE algorithm is described as follows.

• Initialize a matrix *Z* with dimension $Np \times d$, where all elements are zero, that is,

$$\mathbf{Z} = \begin{bmatrix} z\_{11} & \dots & z\_{1d} \\ \vdots & \ddots & \vdots \\ z\_{Np1} & \dots & z\_{Npd} \end{bmatrix}, \quad z\_{11} = \dots = z\_{Npd} = 0;$$

• Iterate the selected chaotic map to generate a chaotic sequence of values in [0, 1] and store them in *Z*; then map each element $z_{mn}$ into the search range $[L, U]$ using Equation (8).

$$z\_{mn} = L + (U - L) \times z\_{mn} \tag{8}$$

Other initialization content is the same as that in the PPE algorithm. After the initialization phase is completed, the algorithm enters the iterative phase, which includes the population update phase and the population evolution trend update phase.
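A minimal sketch of this chaotic initialization is shown below, assuming the Logistic map with $\mu = 4$ and an arbitrary starting value $z_0 = 0.7$ (both our choices for illustration, not prescribed by the paper):

```python
import numpy as np

def cppe_init(Np, d, L, U, chaotic_map, z0=0.7):
    """CPPE initialization: fill Z by iterating a chaotic map on [0, 1],
    then map each value into the search range via Eq. (8)."""
    Z = np.zeros((Np, d))
    z = z0                          # initial value of the chaotic variable
    for i in range(Np):
        for j in range(d):
            z = chaotic_map(z)      # next value of the chaotic sequence
            Z[i, j] = z
    return L + (U - L) * Z          # Eq. (8): z_mn -> L + (U - L) * z_mn

logistic = lambda z: 4.0 * z * (1.0 - z)   # Logistic map, mu = 4
X = cppe_init(Np=100, d=10, L=-100, U=100, chaotic_map=logistic)
```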

The flowchart of the CPPE algorithm is shown in Figure 2.


5. Determine whether the maximum number of iterations has been achieved. If the maximum number of iterations is not reached, proceed to step 2 and repeat the process until the maximum number is attained.

**Figure 2.** Flowchart of CPPE algorithm.

According to the above description, the pseudo-code of the CPPE algorithm is shown in Algorithm 1. The code for the algorithm has been uploaded to the website (https://github.com/Leon-paq/CPPE.git).


#### **4. Experimental Results and Discussions**

Three experiments were designed to verify the performance and convergence of the proposed CPPE algorithm. Specifically, these experiments aimed to compare the CPPE algorithm, which incorporates 12 different chaotic maps, with the unimproved PPE algorithm, in terms of performance and convergence. The 12 selected chaotic maps included the Logistic, Piecewise, Singer, Sine, Gauss, Tent, Bernoulli, Chebyshev, Circle, Cubic, Sinusoidal, and ICMIC maps. To facilitate comparisons between the CPPE algorithm and the unimproved PPE algorithm, the 12 different CPPE variants were labeled CPPE1 to CPPE12, as shown in Table 2.
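For illustration, three of the listed maps can be written as one-dimensional iterations on [0, 1]. The parameter values ($\mu = 4$ for the Logistic map, $\mu = 1.07$ for the Singer map) and the canonical Tent form used here are common choices in the chaotic-optimization literature and may differ from the exact variants used in the paper.

```python
# Illustrative one-dimensional chaotic maps on [0, 1]; parameter values are
# common literature choices, not necessarily those of the paper.
def logistic(z, mu=4.0):
    return mu * z * (1.0 - z)

def tent(z):
    # canonical Tent map
    return 2.0 * z if z < 0.5 else 2.0 * (1.0 - z)

def singer(z, mu=1.07):
    return mu * (7.86 * z - 23.31 * z**2 + 28.75 * z**3 - 13.302875 * z**4)

def sequence(chaotic_map, z0=0.7, n=5):
    """Iterate a map from z0 and return the first n values."""
    out = []
    for _ in range(n):
        z0 = chaotic_map(z0)
        out.append(z0)
    return out
```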


#### *4.1. Benchmark Functions and Experimental Environments*

For our experiment, we chose to utilize 28 benchmark functions from the widely-used CEC13 dataset [49]. These functions are commonly used to evaluate the efficacy of various algorithms. The CEC13 dataset comprises three types of benchmark functions: unimodal functions, basic multimodal functions, and composition functions. The mathematical expressions and attributes of these functions are presented in Table 3. Unimodal functions are represented by *f*<sub>1</sub> to *f*<sub>5</sub>, basic multimodal functions by *f*<sub>6</sub> to *f*<sub>20</sub>, and composition functions by *f*<sub>21</sub> to *f*<sub>28</sub>. The dimension of each function calculation is provided under the "Dimension" column, and the optimal value of each function is provided under the "Optimal" column.




The experiment was conducted on a Windows 11 laptop, which had an AMD Ryzen 7 5800H CPU with a clock speed of 3.20 GHz and 16 GB of running memory. The experiment was implemented using MATLAB R2022b.

#### *4.2. Performance Comparison between PPE and CPPEs*

Before commencing the experiment, several parameters needed to be configured, as presented in Table 4. The "Population\_Number" denotes the population counts for the CPPE algorithm and was set to 100 for this experiment. The maximum iteration count for the CPPE algorithm is denoted by the "Max\_Gen" variable and was set to 100 for this experiment. In this experiment, the CPPE algorithm was run 50 times, as indicated by the "Run\_Nums" variable.

**Table 4.** Parameters setting for performance experiments.


We ran PPE and CPPE1 to CPPE12 on the 28 benchmark functions 50 times each and recorded the respective results. We used three criteria, Final, Mean, and Standard, to compare algorithmic performance. Final represents the final optimal value of the algorithm, that is, the minimum result over the 50 runs. Mean is the average outcome of the 50 runs and represents the method's average optimal value. Standard is the standard deviation of the 50 results and reflects the algorithm's degree of dispersion.
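The three criteria can be computed from the 50 recorded results as follows (a straightforward sketch; the function name and sample values are ours, not the paper's data):

```python
import numpy as np

def summarize(run_results):
    """Final / Mean / Standard criteria over repeated runs of one algorithm
    on one benchmark function (run_results: best value found in each run)."""
    r = np.asarray(run_results, dtype=float)
    return {"Final": r.min(),     # best (minimum) value over all runs
            "Mean": r.mean(),     # average best value
            "Standard": r.std()}  # dispersion of the runs

# illustrative values for four runs
stats = summarize([3.2, 1.5, 2.8, 1.9])
```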

We display the results of the 50 runs in Table 5, where $f_{1-28}$ represents the 28 benchmark functions, the first column lists the different algorithms, and the second to fourth columns give the three comparison criteria. Results better than those of the PPE algorithm are shown in bold to improve readability. In addition, we counted, over the 28 benchmark functions, the number of times CPPE was superior to PPE; the results are shown in Figure 3 and Table 6. In Figure 3, the horizontal axis indicates the different CPPE algorithms and the vertical axis the number of times CPPE outperformed PPE on the Final, Mean, and Std metrics. The detailed benchmark functions are listed in Table 7.


**Table 5.** The experimental results of the 50 PPE and CPPE on 28 benchmark functions.


**Table 6.** The number of times CPPE is better than PPE.


**Table 7.** The benchmark function of CPPE is better than PPE.


**Figure 3.** Number of times different CPPEs were superior to PPE.

Tables 5–7 and Figure 3 show that the 12 CPPEs performed well in finding the final optimal value. In terms of the average optimal value, CPPE3, CPPE8, CPPE10, and CPPE12 did not perform well, while CPPE1, CPPE6, and CPPE9 performed well. In terms of standard deviation, CPPE3 and CPPE8 performed slightly worse, while CPPE1, CPPE2, CPPE6, and CPPE9 performed very well. To sum up, the performances of CPPE3, CPPE8, CPPE10, and CPPE12 were not significantly better than that of PPE; that is, CPPE with the Singer, Chebyshev, Cubic, and ICMIC maps did not significantly improve the performance of the algorithm. However, CPPE1, CPPE2, CPPE4, CPPE5, CPPE6, CPPE7, CPPE9, and CPPE11 were clearly superior to PPE; that is, CPPE with the Logistic, Piecewise, Sine, Gauss, Tent, Bernoulli, Circle, and Sinusoidal maps significantly improved the algorithm's performance.

In addition to the initial evaluation, we also tallied the occurrences where the PPE algorithm and CPPE1 to CPPE12 attained optimal results on three metrics out of 28 benchmark functions. The statistical results are shown in Figure 4. In this chart, the horizontal axis represents 13 different algorithms, and the vertical axis represents the number of times that algorithm achieved the best results compared to the other algorithms across 3 metrics among 28 functions. The results show that CPPE9 achieved the best results 25 times, which was remarkable compared to other algorithms. CPPE9 is the CPPE algorithm with the Circle map.

Furthermore, to accurately calculate the improvement of CPPE over PPE, statistical analysis and calculations were performed on the experimental data. During this process, we found that benchmark functions $f_4$, $f_{21}$, and $f_{23}$ contained outliers, so we calculated results only for the remaining 25 benchmark functions. Our approach was as follows: (1) for each benchmark function, the value of each CPPE was subtracted from the value of PPE on each indicator; the resulting value was divided by the PPE value and converted into a percentage. This gives the improvement of each CPPE over PPE on each indicator of each benchmark function. For example, for benchmark function $f_1$, $f_1(PPE\\_Final)$ and $f_1(CPPE1\\_Final)$ denote the values of PPE and CPPE1 on the Final indicator, respectively. The improvement of CPPE1 over PPE on the Final indicator is then given by Equation (9).

$$\frac{f\_1(PPE\\_Final) - f\_1(CPPE1\\_Final)}{f\_1(PPE\\_Final)} \times 100\% \tag{9}$$

(2) After the obtained values were averaged, the average percentages of 12 CPPE in three indicators compared with PPE were obtained. The results are shown in Table 8. The first column indicates different CPPE algorithms, and the last three columns indicate the improved percentage of the CPPE algorithm compared with the PPE algorithm in the three indicators. In Table 8, it can be observed that CPPE1 (CPPE with Logistic map), CPPE6 (CPPE with Tent map), and CPPE8 (CPPE with Chebyshev map) showed improvements over PPE on the Final indicator. CPPE2 (CPPE with Piecewise map), CPPE6 (CPPE with Tent map), and CPPE9 (CPPE with Circle map) exhibited improvements over PPE on the Mean indicator. CPPE2 (CPPE with Piecewise map), CPPE5 (CPPE with Gauss map), CPPE6 (CPPE with Tent map), and CPPE9 (CPPE with Circle map) showed improvements over PPE on the Standard indicator. Therefore, the CPPE algorithm with Tent map performed the best compared to the PPE algorithm, with an increase of 8.9647%, 10.4633%, and 14.6716% in Final, Mean, and Standard indicators, respectively.
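The two-step calculation of Equation (9) followed by averaging over benchmark functions can be sketched as follows (the sample values are illustrative, not the paper's data):

```python
import numpy as np

def improvement_percent(ppe_values, cppe_values):
    """Average improvement of one CPPE variant over PPE on one indicator,
    across benchmark functions, per Eq. (9): (PPE - CPPE) / PPE * 100."""
    ppe = np.asarray(ppe_values, dtype=float)
    cppe = np.asarray(cppe_values, dtype=float)
    return float(((ppe - cppe) / ppe * 100.0).mean())

# e.g. CPPE lowers the Final value on two benchmark functions by 10% each
gain = improvement_percent([10.0, 20.0], [9.0, 18.0])
```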

**Figure 4.** Optimal number of times for PPE and CPPEs on performance.


**Table 8.** The percentage of improved performance of CPPE compared to PPE.

#### *4.3. Convergence Comparison between PPE and CPPEs*

An experiment was designed to compare the convergence of the different algorithms. All parameters are shown in Table 9: the population size was set to 100, the number of iterations to 50, and the number of runs to 50.

**Table 9.** Parameters setting for convergence experiments.


In this experiment, an evaluation criterion was designed to compare the convergence of the different algorithms, which we call the average change rate of the fitness value. Our approach was as follows: (1) We ran PPE and the 12 CPPE algorithms on each benchmark function once, subtracted the fitness value of the 50th generation from that of the initial generation, and divided the result by 50 to obtain the change rate per generation. (2) We repeated this process 50 times and took the average, yielding the average change rate of the fitness value for each of the 28 benchmark functions. Table 10 shows the results for PPE and the 12 CPPE algorithms.
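Assuming each run records a fitness history `h` with one value per generation (`h[0]` for the initial generation, `h[50]` for the 50th), the criterion can be sketched as follows; the sign convention (initial minus 50th generation) is our reading of the description, under which a decreasing fitness curve yields a positive rate.

```python
import numpy as np

def avg_change_rate(histories):
    """Average change rate of the fitness value over repeated runs.

    Each history h holds one fitness value per generation; the change rate
    of a run is (h[0] - h[50]) / 50, averaged over all runs."""
    rates = [(h[0] - h[50]) / 50.0 for h in histories]
    return float(np.mean(rates))

# e.g. two identical runs whose fitness falls from 100 to 0 in 50 generations
h = [100.0 - 2.0 * g for g in range(51)]
rate = avg_change_rate([h, h])
```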

**Table 10.** The experimental results of PPE and CPPE regarding iteration for 50 times on 28 benchmark functions.




Furthermore, to accurately calculate the improved percentage of CPPE compared to PPE, statistical analysis and calculations were performed on the experimental data. The same methods mentioned in Section 4.2 were used and the results are shown in Table 11. The first column indicates different CPPE algorithms and the last column indicates the improved percentages in the convergence of the CPPE algorithm compared with the PPE algorithm. In Table 11, it can be observed that CPPE1 (CPPE with Logistic map), CPPE3 (CPPE with Singer map), CPPE4 (CPPE with Sine map), CPPE6 (CPPE with Tent map), CPPE8 (CPPE with Chebyshev map), CPPE10 (CPPE with Cubic map), CPPE11 (CPPE with Sinusoidal map), and CPPE12 (CPPE with ICMIC map) increased the convergence. In addition, CPPE3 (CPPE with Singer map) had a significant effect, of about 65.1776%.


**Table 11.** The percentage of improved convergence of CPPE compared to PPE.

#### *4.4. Discussions*

In Section 4.2, the performance of the different CPPE algorithms and that of the PPE algorithm are compared. We performed three different analyses of the experimental data. Firstly, we counted the number of times that CPPE was better than PPE on the three indicators over the 28 benchmark functions. The statistical results showed that the CPPE1, CPPE2, CPPE4, CPPE5, CPPE6, CPPE7, CPPE9, and CPPE11 algorithms outperformed the PPE algorithm. Secondly, we counted the number of times each of the CPPEs and PPE achieved the optimal result on the 3 indicators over the 28 benchmark functions. The statistical results showed that CPPE9 was the most prominent among the 13 algorithms. Finally, the improvement percentages of all CPPEs compared with PPE on the three indicators were counted. The statistical results showed that CPPE6 performed the best, with increases of 8.9647%, 10.4633%, and 14.6716% over PPE on the Final, Mean, and Standard indicators, respectively. Based on the above analysis, we believe that, in terms of performance, CPPE6 is the best performing algorithm among all CPPEs, so the Tent map is the best choice to improve the performance of CPPE algorithms.

In Section 4.3, the convergence of different CPPE algorithms and PPE algorithm were compared. The experimental results showed that, compared with PPE, CPPE1, CPPE3, CPPE4, CPPE6, CPPE8, CPPE10, CPPE11, and CPPE12 increased the percentages of the average change rate of the fitness value. Among them, the improvement offered by CPPE3 was the most obvious, with an increase of 65.1776%. Based on the above analysis, we believe that, in terms of convergence, CPPE3 is the best performing algorithm among all CPPEs, so the Singer map is the best choice to improve the convergence of CPPE algorithms.

Though CPPE6 (CPPE with Tent map) had the best performance and CPPE3 had the best convergence, we found CPPE6 to be the best choice among all CPPEs in regard to both performance and convergence. It offered improvements of 8.9647%, 10.4633%, and 14.6716% in the three indicators and 1.1324% in convergence.

#### *4.5. Real-Life Problem: Stock Prediction*

We applied our CPPE to stock prediction. Here, Amazon stock and a commonly used prediction model, the LSTM neural network [50], were selected for our experiments. In our experiments, we used CPPE to optimize the three hyperparameters, "hidden\_size", "batch\_size" and "epochs", of the LSTM neural network to improve the effectiveness of LSTM, where "hidden\_size" represents the dimension of hidden layers in LSTM, "batch\_size" represents the number of inputs per batch in LSTM, and "epochs" represents the number of training sessions for LSTM. Note that, considering the best choice mentioned in Sections 4.2 and 4.3, we chose the CPPE algorithm with Tent map (CPPE6) to optimize LSTM.

Firstly, data processing was performed on the selected experimental data, which included the highest price, opening price, lowest price, closing price, and daily trading volume of Amazon's stock from 23 October 2009 to 31 March 2020. The data were divided into a training set and a test set at a ratio of 9:1 and standardized.

The parameter settings for CPPE6 and LSTM model are shown in Table 12. In the CPPE6 algorithm, we set the population size to 10, the number of iterations to 10, and all dimensions to 3, because we needed to optimize the three hyperparameters of LSTM. The range of the solution was set to [1, 300]. In the LSTM model, the time step was set to 5, which meant using 5 days of data to predict the next day's data. The solver was set to "adam", and the initial learning rate was set to 0.005. After 100 rounds of training, we reduced the learning rate to 0.2 times the initial learning rate. Furthermore, in this experiment, the root mean squared error (RMSE) of the LSTM model was used as the fitness value of the CPPE6 algorithm.
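The coupling between CPPE6 and the LSTM can be sketched as follows. Here `train_and_eval` is a hypothetical stand-in for the actual LSTM training routine, which is not reproduced, and the rounding scheme for mapping continuous CPPE solutions to integer hyperparameters is our assumption.

```python
import numpy as np

def decode(solution, low=1, high=300):
    """Round a continuous 3-dimensional CPPE6 solution into the integer LSTM
    hyperparameters (hidden_size, batch_size, epochs), clipped to [1, 300]."""
    vals = np.clip(np.rint(solution), low, high).astype(int)
    return {"hidden_size": int(vals[0]),
            "batch_size": int(vals[1]),
            "epochs": int(vals[2])}

def fitness(solution, train_and_eval):
    """CPPE6 fitness: the RMSE returned by training and evaluating the LSTM
    with the decoded hyperparameters."""
    return train_and_eval(**decode(solution))

# hypothetical stand-in for the real LSTM training routine
dummy = lambda hidden_size, batch_size, epochs: 0.05
rmse = fitness(np.array([179.4, 109.7, 181.2]), dummy)
```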

After the experiment, we obtained three optimized hyperparameters: hidden\_size = 179, batch\_size = 110, and epochs = 181 with RMSE = 0.05762. Then, we input the three solutions into the LSTM model and obtained predicted results, as shown in Figure 5. The horizontal axis represents days sorted by time and the vertical axis represents stock value, where the red curve represents the predicted value, and the blue curve represents the real value. Thus, it can be seen that the predicted curve was relatively consistent with the real curve.


**Table 12.** Parameter settings for real application experiments.

**Figure 5.** Prediction results on Amazon stock using CPPE6-LSTM.

#### **5. Conclusions**

This study proposes a Chaotic-based Phasmatodea Population Evolution (CPPE) algorithm by integrating chaotic mapping into the Phasmatodea Population Evolution (PPE) algorithm. To investigate the impact of various chaotic maps on the algorithm, 12 different chaotic maps were combined with CPPE, resulting in 12 CPPE variants. The objective of this study was to determine whether CPPE outperforms PPE in terms of performance and convergence. To validate this claim, 28 benchmark functions were employed in the testing phase. Experimental results demonstrated that CPPE significantly improved both the performance and the convergence speed of the algorithm. Among all chaotic maps, the Tent map is considered the best choice for improving the performance of the CPPE algorithm; compared with PPE, CPPE with the Tent map improved the Final, Mean, and Standard indicators by 8.9647%, 10.4633%, and 14.6716%, respectively. Moreover, the Singer map is considered the best choice for improving the convergence speed of the CPPE algorithm; CPPE with the Singer map improved the average change rate of the fitness value by 65.1776% compared with PPE. Furthermore, we applied CPPE6 to stock prediction. Overall, this study contributes to the advancement of population-based optimization algorithms and provides insights into the impact of chaotic mapping on algorithmic performance.

**Author Contributions:** Conceptualization, T.-Y.W.; methodology, H.L.; software, S.-C.C.; validation, T.-Y.W.; investigation, H.L.; writing—original draft preparation, T.-Y.W., H.L. and S.-C.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data are included in the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
