Article

A Multi-Strategy Parrot Optimization Algorithm and Its Application

College of Electronic and Information Engineering, West Anhui University, Lu’an 237012, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(3), 153; https://doi.org/10.3390/biomimetics10030153
Submission received: 20 January 2025 / Revised: 25 February 2025 / Accepted: 26 February 2025 / Published: 2 March 2025

Abstract
Intelligent optimization algorithms are crucial for solving complex engineering problems. The Parrot Optimization (PO) algorithm shows potential but has issues like local-optimum trapping and slow convergence. This study presents the Chaotic–Gaussian–Barycenter Parrot Optimization (CGBPO), a modified PO algorithm. CGBPO addresses these problems in three ways: using chaotic logistic mapping for random initialization to boost population diversity, applying Gaussian mutation to updated individual positions to avoid premature local-optimum convergence, and integrating a barycenter opposition-based learning strategy during iterations to expand the search space. Evaluated on the CEC2017 and CEC2022 benchmark suites against seven other algorithms, CGBPO outperforms them in convergence speed, solution accuracy, and stability. When applied to two practical engineering problems, CGBPO demonstrates superior adaptability and robustness. In an indoor visible light positioning simulation, CGBPO’s estimated positions are closer to the actual ones compared to PO, with the best coverage and smallest average error.

1. Introduction

With the increasing complexity of real-world problems, traditional methods have encountered numerous limitations. Against such challenging circumstances, intelligent optimization algorithms have progressively emerged. They stem from the observation and emulation of phenomena in nature and the intelligent behaviors of organisms. Initially, such algorithms were rudimentary and merely simulated biological evolution; examples include the Simulated Annealing (SA) algorithm [1], the Genetic Algorithm (GA) [2], the Particle Swarm Optimization (PSO) algorithm [3], and the Ant Colony Optimization (ACO) algorithm [4].
With the rapid development of science and technology, intelligent optimization algorithms have evolved and diversified into numerous types. These include the Bacterial Foraging Optimization (BFO) [5], Artificial Bee Colony (ABC) [6], Cuckoo Search (CS) [7], Bat Algorithm (BA) [8], Moth Flame Optimization (MFO) [9], Pigeon Swarm Optimization Algorithm (PSOA) [10], Spider Monkey Optimization (SMO) [11], Seagull Optimization Algorithm (SOA) [12], Remora Optimization Algorithm (ROA) [13], Black Smoke Swallow Optimization Algorithm (STOA) [14], and Gray Wolf Optimization (GWO) [15] algorithms. In recent years, new ones like the Harris Hawks Optimization (HHO) [16], Catch Fish Optimization Algorithm (CFOA) [17], Pelican Optimization Algorithm (POA) [18], and Crayfish Optimization Algorithm (COA) [19] have also emerged.
Most intelligent optimization algorithms simulate the behaviors and habits of natural organisms to efficiently solve complex problems. For instance, authors in [20] proposed a Hybrid Whale Particle Swarm Optimization (HWPSO) algorithm, leveraging the “forced” whale and “capping” phenomenon. Evaluated in tasks like determining operational amplifier circuit size, minimizing micro-electro-mechanical system switch pull-in voltage, and reducing differential amplifier random offset, the HWPSO operated efficiently and improved optimal values significantly. In [21], an Improved Whale Optimization Algorithm (IWOA) was proposed, combining crossover and mutation from the Differential Evolution (DE) algorithm and introducing a cloud-adaptive inertia weight. This algorithm was applied to truss structure optimization for 52-bar planar and 25-bar spatial trusses. Authors of Ref. [22] enhanced the Seagull Optimization Algorithm with Levy flight (LSOA), achieving good results in path-planning problems. Moreover, Ref. [23] introduced a Multi-strategy Golden Jackal Optimization (MGJO) algorithm. It initialized the population via chaotic mapping, adopted a non-linear dynamic inertia weight and used Cauchy mutation to boost population diversity, effectively estimating parameters of the non-linear Muskingum model.
The Parrot Optimization (PO) [24], proposed by J. Lian et al. in 2024, is inspired by parrots’ learning behaviors. It emulates parrots’ foraging, lingering, communication, and wariness of strangers to extensively search for the optimal solution in the search space. Yet, when handling complex problems with high-dimensional, multi-modal, and non-linear objective functions, the PO algorithm’s search efficiency and solution accuracy may suffer. It often converges prematurely to local optima or fails to fully explore the search space, hindering the discovery of the global optimum. Furthermore, lacking an adaptive parameter adjustment mechanism, the PO algorithm’s parameters cannot be automatically tuned. Manual parameter adjustment for various optimization problems is thus required, decreasing the algorithm’s convergence rate.
To address PO’s drawbacks, such as becoming trapped in local optima and slow convergence, we developed a multi-strategy PO algorithm named CGBPO. Firstly, chaotic logistic mapping replaces traditional random initialization. This distributes the initial population more evenly and widely, preventing individuals from clustering in specific areas of the search space [25], boosting population diversity, and enhancing global search ability. Secondly, Gaussian mutation is applied to updated individual positions [26]. Each individual has a chance to generate a new, relatively random position nearby, breaking local convergence and increasing diversity. Thirdly, after each iteration, we calculate the centroid of the current population and generate the corresponding opposite solutions [27]. We then explore the search space from the direction opposite to the current individuals based on the centroid, guiding the population toward more promising areas. This makes the search more directional, more comprehensive, and better at handling complex high-dimensional and multimodal optimization problems.
To validate the CGBPO algorithm, 30 independent experiments were carried out in CEC2017 [28], CEC2022 [29], and two engineering problems, followed by a comparison with seven swarm intelligence algorithms. The results show that the three strategies in CGBPO effectively enhance its performance, notably addressing premature local optimization and slow convergence issues.
The paper is organized as follows: Section 2 overviews the PO algorithm. Section 3 elaborates on the proposed CGBPO with multiple strategies. Section 4 and Section 5 detail the experimental tests of CGBPO on CEC2017 and CEC2022 benchmarks and analyze the results. Section 6 presents the comparative tests of CGBPO for engineering problems. Section 7 describes the application of CGBPO in the indoor visible-light-positioning system. Finally, Section 8 summarizes the study and outlines future research directions.

2. Parrot Optimization Algorithm

The PO algorithm primarily encompasses the four behaviors detailed below.

2.1. Foraging Behavior

Through observing the location of food or their owner’s position, parrots estimate the approximate location of food and then fly toward it. Thus, the movement of parrots is modeled using the following formula:
X_i^{t+1} = (X_i^t - X_{best}) \cdot Levy(dim) + rand(0,1) \cdot \left(1 - \frac{t}{Max\_iter}\right)^{2t/Max\_iter} \cdot X_{mean}^t \qquad (1)
X_{mean}^t = \frac{1}{N} \sum_{k=1}^{N} X_k^t \qquad (2)
Levy(dim) = \frac{\mu \cdot \sigma}{|\nu|^{1/\gamma}}, \quad \mu \sim N(0, dim), \quad \nu \sim N(0, dim), \quad \sigma = \left( \frac{\Gamma(1+\gamma) \cdot \sin(\pi\gamma/2)}{\Gamma\!\left(\frac{1+\gamma}{2}\right) \cdot \gamma \cdot 2^{(\gamma-1)/2}} \right)^{1/\gamma} \qquad (3)
where X_i^t stands for the current position; X_i^{t+1} is the updated position; Max_iter is the maximum number of iterations; X_mean^t is the average position of the current population, as defined in Equation (2); Levy(·) denotes the Levy distribution, as defined in Equation (3), which models the flight of the parrots, with γ set to 1.5; X_best is the current optimal position; and t is the current iteration. The term (X_i^t − X_best)·Levy(dim) represents movement based on the position relative to the owner, while rand(0,1)·(1 − t/Max_iter)^{2t/Max_iter}·X_mean^t represents determining the location of food more precisely by observing the positions of the whole population.
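As a concrete illustration, the foraging update of Equations (1)–(3) can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation: the function names, the random-generator handling, and the per-component standard-normal reading of μ, ν ~ N(0, dim) are our assumptions.

```python
import numpy as np
from math import gamma as Gamma, sin, pi

def levy(dim, g=1.5, rng=None):
    """Levy step vector per Eq. (3); g is the gamma parameter (1.5 in PO)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (Gamma(1 + g) * sin(pi * g / 2)
             / (Gamma((1 + g) / 2) * g * 2 ** ((g - 1) / 2))) ** (1 / g)
    mu = rng.standard_normal(dim)   # dim-dimensional standard-normal vector
    nu = rng.standard_normal(dim)
    return mu * sigma / np.abs(nu) ** (1 / g)

def foraging_update(X_i, X_best, X_mean, t, max_iter, rng=None):
    """Eq. (1): move relative to the owner plus a population-mean food cue."""
    if rng is None:
        rng = np.random.default_rng()
    return ((X_i - X_best) * levy(len(X_i), rng=rng)
            + rng.uniform() * (1 - t / max_iter) ** (2 * t / max_iter) * X_mean)
```

Note that the population-mean term shrinks as t approaches Max_iter, which shifts the update from exploration toward exploitation over the run.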

2.2. Staying Behavior

Modeling the behavior of parrots staying randomly on different parts of their owner’s body allows for the incorporation of randomness into the search process:
X_i^{t+1} = X_i^t + X_{best} \cdot Levy(dim) + rand(0,1) \cdot ones(1, dim) \qquad (4)
where X_best·Levy(dim) indicates the process of flying toward the owner, and rand(0,1)·ones(1, dim) symbolizes randomly stopping on a certain part of the owner’s body.

2.3. Communicating Behavior

Parrots are inherently gregarious and often communicate within their groups, either by flying towards the flock or communicating while staying away from it. In the PO algorithm, it is assumed that these two behaviors have an equal probability of occurrence, and the average position of the current population is taken as the representation of the flock’s center. This behavior is modeled as follows:
X_i^{t+1} = \begin{cases} 0.2 \cdot rand(0,1) \cdot \left(1 - \frac{t}{Max\_iter}\right) \cdot (X_i^t - X_{mean}^t), & P \le 0.5 \\ 0.2 \cdot rand(0,1) \cdot \exp\!\left(-\frac{t}{rand(0,1) \cdot Max\_iter}\right), & P > 0.5 \end{cases} \qquad (5)
where 0.2·rand(0,1)·(1 − t/Max_iter)·(X_i^t − X_mean^t) represents an individual joining a parrot flock for communication, while 0.2·rand(0,1)·exp(−t/(rand(0,1)·Max_iter)) represents an individual taking off right after communicating. The choice between the two behaviors is made by generating a random number P within the interval [0, 1].
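A minimal NumPy sketch of the communicating update in Eq. (5). The function name is ours, and broadcasting the scalar second branch over all dimensions is our assumption; this illustrates the branching, not the authors' exact code.

```python
import numpy as np

def communicating_update(X_i, X_mean, t, max_iter, rng=None):
    """Eq. (5): with equal probability, move toward the flock centre (P <= 0.5)
    or fly off right after communicating (P > 0.5)."""
    if rng is None:
        rng = np.random.default_rng()
    P = rng.uniform()
    if P <= 0.5:
        # join the flock: contract toward the population mean
        return 0.2 * rng.uniform() * (1 - t / max_iter) * (X_i - X_mean)
    # leave after communicating: scalar in the formula, broadcast over dimensions here
    return np.full_like(X_i, 0.2 * rng.uniform() * np.exp(-t / (rng.uniform() * max_iter)))
```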

2.4. Fear of Strangers Behavior

Parrots have a natural fear of unfamiliar individuals and will therefore stay away from strangers and seek protection from their owners. This behavior is modeled as follows:
X_i^{t+1} = X_i^t + rand(0,1) \cdot \cos\!\left(0.5\pi \cdot \frac{t}{Max\_iter}\right) \cdot (X_{best} - X_i^t) - \cos(rand(0,1) \cdot \pi) \cdot \left(\frac{t}{Max\_iter}\right)^{2/Max\_iter} \cdot (X_i^t - X_{best}) \qquad (6)
where rand(0,1)·cos(0.5π·t/Max_iter)·(X_best − X_i^t) indicates the process of reorienting to fly toward the owner, and cos(rand(0,1)·π)·(t/Max_iter)^{2/Max_iter}·(X_i^t − X_best) symbolizes the procedure of distancing oneself from strangers.
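The fear-of-strangers update of Eq. (6) can be sketched as below; the function and variable names are ours, and the sketch assumes a minimization setting with vector positions.

```python
import numpy as np

def fear_update(X_i, X_best, t, max_iter, rng=None):
    """Eq. (6): reorient toward the owner while moving away from strangers."""
    if rng is None:
        rng = np.random.default_rng()
    # first term: turn back toward the owner (current best position)
    toward_owner = rng.uniform() * np.cos(0.5 * np.pi * t / max_iter) * (X_best - X_i)
    # second term: step away from the stranger, i.e. away from (X_i - X_best)
    away_from_strangers = (np.cos(rng.uniform() * np.pi)
                           * (t / max_iter) ** (2 / max_iter) * (X_i - X_best))
    return X_i + toward_owner - away_from_strangers
```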

3. Multi-Strategy Parrot Optimization Algorithm (CGBPO)

3.1. Chaotic Logistic Map Strategy

The chaotic logistic map is defined using a simple recurrence relation, as depicted in Equation (7):
x_{i+1} = a \cdot x_i \cdot (1 - x_i) \qquad (7)
In this context, x_i stands for the state value in the i-th iteration, while x_{i+1} represents the state value in the subsequent iteration. The parameter a serves as a control factor, and its value generally falls within the range of 0 to 4. When a is confined to the interval between 0 and 1, the sequence simply converges to zero as the iterations proceed, whereas when a lies within the approximate interval from 3.57 to 4, the system displays chaotic characteristics. In this work, a = 4 is selected. For different initial values, the system exhibits chaotic behavior after multiple iterations: the sequence of state values appears disorderly and fluctuates in a seemingly random manner, and even a minute difference in the initial value results in completely different trajectories for the subsequent state values as the iterations progress.
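A minimal sketch of chaotic initialization from Eq. (7): iterate the logistic map with a = 4 and map each chaotic state in (0, 1) onto the search bounds. The function name and the single shared chaotic sequence are our assumptions.

```python
import numpy as np

def logistic_init(n, dim, lb, ub, a=4.0, x0=0.7):
    """Seed an (n, dim) population by iterating Eq. (7) and mapping the
    chaotic sequence in (0, 1) onto [lb, ub]. x0 must avoid the map's
    fixed points (e.g. 0 and 0.75 for a = 4)."""
    x = x0
    pop = np.empty((n, dim))
    for i in range(n):
        for j in range(dim):
            x = a * x * (1 - x)            # one step of the logistic map
            pop[i, j] = lb + x * (ub - lb)  # map state onto the search space
    return pop
```

Because consecutive states of the map are ergodic over (0, 1), the seeded positions cover the whole box rather than clustering, which is the diversity benefit the text describes.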
There are other common chaotic strategies like the Tent map [30] and the Sine map [31]. The Tent map shows chaotic features in a narrow range around the control parameter value of 2, so its chaotic domain is relatively narrow. The Sine map turns chaotic when the control parameter approaches 1, but the boundaries of its chaotic regime are ambiguous. Compared with them, the logistic map has outstanding randomness and ergodicity. It can evenly cover the defined interval without obvious data aggregation. It is highly sensitive to the initial value; a tiny change can lead to greatly different results after multiple iterations, a key chaotic feature. The Tent and Sine maps are also sensitive to initial conditions, but less so than the logistic map. In the early stages, differences from different initial values are not obvious.
For these reasons, the logistic map is used in the proposed algorithm’s initialization and perturbation. It boosts the algorithm’s global search ability, helps it avoid local optima, and enhances overall optimization performance. In CGBPO’s initialization, the logistic map sets the initial positions of individual parrots, introducing chaotic perturbations. As the iterations proceed, individual search trajectories become more random, breaking away from local optima in which the traditional update mode may become stuck. This improves the algorithm’s global search ability and balances exploration and exploitation.
To assess the performance of the selected map strategies, the CEC2022 standard dataset is used with MATLAB (R2023a) simulation software. Three algorithms (CPO, TPO, and SPO), which improve the PO algorithm with the chaotic logistic map, Tent map, and Sine map, respectively, are tested, and the results are compared and analyzed. The population size is set to 30, the maximum number of iterations is 300, and each of the three algorithms runs independently 30 times for every test function.
As shown in Figure 1a, the CPO’s radar chart has the least area fluctuation. CPO ranked first for six functions and second for three functions, indicating highly stable results across multiple runs, unaffected by random factors. In Figure 1b, the ranking chart, sorted by average fitness values, shows that CPO has the lowest average fitness value. In the comprehensive test, the CPO algorithm obtained better solutions and outperformed the other two algorithms in convergence performance.

3.2. Gaussian Mutation Strategy

Gaussian mutation randomly modifies the genetic information of individuals with random perturbation values that adhere to a Gaussian distribution (normal distribution), thereby giving rise to new individuals after the mutation process. The Gaussian mutation function is presented in Equation (8):
x_i = N(\mu, \sigma) \qquad (8)
The function N(·) is employed to generate random numbers that comply with the normal distribution. In this context, μ and σ serve as the mean and standard deviation of the Gaussian function, respectively, as illustrated in Equations (9) and (10):
\mu = \frac{lb + ub}{2} \qquad (9)
\sigma = \frac{ub - lb}{6} \qquad (10)
Typically, lb symbolizes the lower bound of the variable, while ub represents the upper bound. The average of the upper and lower bounds is taken as the central position of the normal distribution, which ensures that the generated random mutation values are, in theory, distributed relatively evenly around this value range. Because the distribution is normal, approximately 99.7% of the mutation values fall within three standard deviations of the mean, that is, within [μ − 3σ, μ + 3σ]. Since 3σ = (ub − lb)/2, extending by this amount on both sides of the mean exactly covers an interval of width ub − lb, so the mutation values are very likely to lie within the given bounds. The degree of dispersion of the mutation is therefore moderate: the new solutions generated by mutation are neither overly scattered and distant from the original value range, nor so concentrated that the ability to explore new solution spaces is lost. Instead, the mutation represents a form of exploration that has a certain breadth yet remains controllable within the given upper- and lower-bound interval. Consequently, when the range ub − lb is large, σ is also large, signifying an increase in the dispersion of the mutation; the mutation operation can then explore a larger value range more comprehensively, making it more likely that individuals break free from local optima and search new areas of the solution space far from the current solution.
Conversely, if the range of the upper and lower bounds is small, σ will become smaller, the degree of dispersion of the mutation will decrease, and the new solutions generated by the mutation operation will be closer to the current mean position, focusing more on fine-tuning and optimizing the local area within a relatively small interval.
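The mutation of Eqs. (8)–(10) can be sketched as follows. Clipping the rare out-of-bound samples back to [lb, ub] is our addition, not part of the equations; the function name is ours.

```python
import numpy as np

def gaussian_mutation(dim, lb, ub, rng=None):
    """Eqs. (8)-(10): draw a mutated position from N(mu, sigma) with
    mu = (lb + ub) / 2 and sigma = (ub - lb) / 6, so about 99.7% of
    samples land inside [lb, ub]; the remainder are clipped."""
    if rng is None:
        rng = np.random.default_rng()
    mu = (lb + ub) / 2
    sigma = (ub - lb) / 6
    return np.clip(rng.normal(mu, sigma, dim), lb, ub)
```

Choosing σ = (ub − lb)/6 is what keeps the mutation moderate: a wider box automatically yields a more dispersed mutation, and a narrower box a more local one, exactly as the text argues.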
Cauchy mutation [32] and non-uniform mutation [33] are two common mutation strategies in optimization algorithms. Cauchy mutation modifies individual genes using the Cauchy distribution: it randomly samples a value from this distribution and adds it to the original gene value to obtain the mutated one. The Cauchy distribution’s heavy tail means there is a high chance of obtaining a value far from the central value. Cauchy mutation is therefore mainly suited to global search, but it has low local search accuracy, and its large mutation values lead to poor stability and fluctuating results.
The probability of non-uniform mutation declines over the iterations, aiding global exploration initially and focusing on local development later. The mutation amplitude varies dynamically, large at first and then small, while its direction is random and its parameters are adjustable. However, parameter setting is difficult and demands considerable tuning; its adaptability to different problems is limited and its effects vary; it has high computational complexity; and it may still become trapped in local optima, hindering global optimization.
In contrast, Gaussian mutation is highly stable. Based on the normal distribution, it gives the mutation process direction and concentration. Most mutation values fall within a certain range, ensuring the algorithm’s stable convergence. For local search, Gaussian mutation converges quickly and accurately, allowing precise adjustments to the current solution, which makes it ideal for scenarios requiring high local optimization precision.
To check the performance of the chosen mutation strategies, three algorithms (GPO, CAPO, and NPO), which enhance the PO algorithm with Gaussian mutation, Cauchy mutation, and non-uniform mutation, respectively, are used for testing. As shown in Figure 2a, the GPO’s radar chart has the least area fluctuation. GPO ranked first for seven functions and second for three functions, proving its results across multiple runs were highly stable and unaffected by random factors. Notably, GPO had the lowest average fitness value of 1.58 in Figure 2b. In the comprehensive test, GPO performed the best.

3.3. Barycenter Opposition-Based Learning Strategy

In this study, barycenter opposition-based learning is adopted. In the initial iterations, as the differences between individuals in the population are relatively large, the generation of mutant parrots allows more areas to be explored. Meanwhile, in later iterations, although the diversity of the population decreases, the mutant parrots can still maintain diversity. The barycenter is defined as follows:
Suppose that x_{1j}, x_{2j}, …, x_{nj} denote the values of the n parrots in the j-th dimension, where the population consists of n individuals. Then, the barycenter of the parrot population in the j-th dimension is given by Equation (11), and the population barycenter is Z = (z_1, z_2, …, z_D).
z_j = \frac{x_{1j} + x_{2j} + \cdots + x_{nj}}{n} \qquad (11)
Barycenter opposition-based mutation: suppose that x_i = (x_{i1}, x_{i2}, …, x_{iD}) is the i-th parrot with D dimensions. If the selected mutation dimension is the j-th dimension, then the barycenter opposition-based solution corresponding to the i-th parrot is x_{op_i} = (x_{op_i1}, x_{op_i2}, …, x_{op_iD}), whose j-th component is determined using Equation (12):
x_{op\_ij} = 2 \cdot k \cdot z_j - x_{ij} \qquad (12)
where k is a contraction factor whose value is a random number within the interval (0, 1). During the iteration process, for each parrot in the population, a certain dimension j is selected for mutation. The mutation result is then compared with the position of the previous generation, and the better of the two is retained.
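One barycenter opposition-based learning step, per Eqs. (11) and (12), can be sketched as follows for a minimization problem. The function name is ours, and k ~ U(0, 1) per individual is an assumption consistent with the description above.

```python
import numpy as np

def barycenter_obl_step(pop, fitness, rng=None):
    """Eqs. (11)-(12): reflect each parrot through the population barycenter
    in one randomly chosen dimension j, with contraction factor k ~ U(0, 1),
    and keep whichever of the old/new positions is fitter (minimization)."""
    if rng is None:
        rng = np.random.default_rng()
    n, D = pop.shape
    Z = pop.mean(axis=0)                      # per-dimension barycenter, Eq. (11)
    new_pop = pop.copy()
    for i in range(n):
        j = rng.integers(D)                   # mutated dimension
        k = rng.uniform()
        cand = pop[i].copy()
        cand[j] = 2 * k * Z[j] - pop[i, j]    # Eq. (12)
        if fitness(cand) < fitness(pop[i]):   # greedy selection: keep the better
            new_pop[i] = cand
    return new_pop
```

The greedy comparison guarantees the step never worsens an individual, while the random contraction spreads the opposite solutions around the barycenter instead of at a single mirror point.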
Opposition-based learning [34] and elite opposition-based learning [35] are common strategies in optimization algorithms. Opposition-based learning is simple and effective at the start for boosting population diversity, providing more search trajectories and enhancing exploration. But it has a limited way of generating opposite individuals, relying only on individual features and search space boundaries, ignoring other individuals’ distribution and relationships, which may limit its performance in complex scenarios.
Elite opposition-based learning selects elite individuals with high fitness as the core to generate opposite ones, leveraging their high-quality information to find better solutions and speed up convergence. However, focusing on elites reduces population diversity, increasing the risk of premature local optima and missing the global optimum.
In contrast, the barycenter opposition-based learning strategy uses the population’s overall barycenter information and a random contraction adjustment to explore a new solution space opposite the original individuals. It aims to find better solutions different from the current ones, enhancing population diversity, helping the algorithm escape local optima, and searching the entire solution space more efficiently.
To verify the performance of the chosen opposition-based learning strategies, three algorithms—BPO, OPO, and EPO, which enhance the PO algorithm with barycenter opposition-based learning, traditional opposition-based learning, and elite opposition-based learning, respectively—are employed for testing. As shown in Figure 3a, BPO’s radar chart has the least area fluctuation. BPO ranked first for nine functions and second for two functions, demonstrating that its results across multiple runs were highly stable and unaffected by random factors. Notably, BPO had the lowest average fitness value of 1.33 in Figure 3b. In the comprehensive test, BPO performed the best.

3.4. Ablation Study of CGBPO

An ablation study was carried out to clarify the contributions of each newly added strategy. The CGBPO algorithm without chaotic mapping (CGBPO1), without Gaussian mutation (CGBPO2), and without barycenter opposition-based learning (CGBPO3), and the original CGBPO algorithm were tested using the CEC2022 standard dataset with MATLAB simulation software. The population size was 30, the maximum iterations were 300, and each algorithm ran independently 30 times for every test function.
In Figure 4a, CGBPO’s radar chart had the least area fluctuation, ranking first for six functions and second for four. The radar chart areas of the other three algorithms were much larger, showing the positive effects of the three new strategies on algorithm performance. Notably, in Figure 4b, CGBPO had the lowest average fitness value of 1.33 and the best performance in the comprehensive test. In the ranking chart, CGBPO1 ranked second, CGBPO2 fourth, and CGBPO3 third, indicating that Gaussian mutation contributed most to algorithm improvement.

3.5. Pseudo-Code of CGBPO

The overall structure of CGBPO is presented in Figure 5 and Algorithm 1, which provide a detailed roadmap of the whole improvement process, encompassing its iterative steps as well as the search strategies employed.
Algorithm 1: Pseudo-Code of CGBPO
1: Initialize the CGBPO parameters
2: Initialize the solutions’ positions using the chaos strategy by Equation (7)
3: For i = 1: N do
4:        Calculate the fitness value of all search agents
5: End
6: For i = 1: Max_iter do
7:     Find the best position and worst position:
8:     For j = 1: N do
9:             St = randi([1,4])
10:            Behavior 1: Foraging behavior
11:            If St == 1 Then
12:                 Update position by Equation (1)
13:            Behavior 2: Staying behavior
14:            Elseif St == 2 Then
15:                 Update position by Equation (4)
16:            Behavior 3: Communicating behavior
17:            Elseif St == 3 Then
18:                 Update position by Equation (5)
19:            Behavior 4: Fear of strangers behavior
20:              Elseif St == 4 Then
21:                 Update position by Equation (6)
22:            End
23:            Update position using Gaussian mutation by Equation (8)
24:     End
25:     Generate new solutions using the barycenter opposition-based learning:
26: For i = 1: N do
27:        Calculate the values of the original function
28:        Update position using the barycenter opposition-based learning strategy by Equation (12)
29: End
30:      Return the best solution
31: End
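To make the control flow of Algorithm 1 concrete, the following is a compact, runnable sketch of one possible realization. The four behavior updates are deliberately simplified stand-ins (the exact formulas are Equations (1)–(6)), and all names, the mutation probability, and the per-individual greedy selection are our assumptions rather than the authors' code.

```python
import numpy as np

def cgbpo_sketch(obj, lb, ub, n=30, dim=10, max_iter=300, seed=0):
    """Control-flow sketch of Algorithm 1 (minimization)."""
    rng = np.random.default_rng(seed)

    # Step 2: chaotic logistic initialization, Eq. (7)
    x = 0.7
    pop = np.empty((n, dim))
    for i in range(n):
        for j in range(dim):
            x = 4.0 * x * (1 - x)
            pop[i, j] = lb + x * (ub - lb)
    fit = np.array([obj(p) for p in pop])
    best = pop[fit.argmin()].copy()

    mu, sigma = (lb + ub) / 2, (ub - lb) / 6
    for t in range(1, max_iter + 1):
        for i in range(n):
            st = rng.integers(1, 5)        # St = randi([1,4]): pick a behavior
            # simplified stand-in for the four behavior updates of Eqs. (1)-(6)
            cand = pop[i] + rng.uniform() * (1 - t / max_iter) * (best - pop[i]) / st
            # Gaussian mutation, Eqs. (8)-(10), applied with an assumed probability
            if rng.uniform() < 0.1:
                cand = rng.normal(mu, sigma, dim)
            cand = np.clip(cand, lb, ub)
            f = obj(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
        # barycenter opposition-based learning, Eqs. (11)-(12)
        Z = pop.mean(axis=0)
        for i in range(n):
            opp = np.clip(2 * rng.uniform() * Z - pop[i], lb, ub)
            f = obj(opp)
            if f < fit[i]:
                pop[i], fit[i] = opp, f
        best = pop[fit.argmin()].copy()
    return best, float(fit.min())
```

For instance, `cgbpo_sketch(lambda v: float(np.sum(v ** 2)), -5.0, 5.0)` minimizes the sphere function over [-5, 5]^10 under these assumptions.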

3.6. Comparative Analysis of the Time Complexity Between CGBPO and PO

3.6.1. Time Complexity Analysis of the PO Algorithm

The population initialization of PO adopts a simple random initialization method. It generates N individuals of dimension dim, with a time complexity of O(N × dim). When calculating the initial fitness values, the objective function is evaluated once for each of the N individuals. Assuming the time complexity of a single objective-function evaluation is O(k) (where k depends on the complexity of the objective function), the time complexity of computing the initial fitness values is O(N × k). The sorting operation uses the sort function; the time complexity of common sorting algorithms is O(N log N). Therefore, the total time complexity of the initialization stage is O(N × dim + N × k + N log N).
The PO algorithm has two nested loops in each iteration. The outer loop runs Max_iter times, and the inner loop operates on the N individuals in the population. Updating an individual involves calculations such as the Levy flight, whose time complexity depends mainly on the dimension dim. Assuming each individual update operation has time complexity O(m × dim) (where m is a constant related to the specific calculation), the time complexity of the individual updates in each iteration is O(N × m × dim). The boundary-control operation also checks and adjusts the dim dimensions of the N individuals, with time complexity O(N × dim). Updating the global optimal solution and sorting the population have time complexities of O(N) and O(N log N), respectively. Therefore, the time complexity of each iteration is O(N × m × dim + N × dim + N + N log N), and the time complexity of the entire iteration stage is O(Max_iter × (N × m × dim + N × dim + N + N log N)).
Combining the initialization and iteration stages, the time complexity of PO is O(N × dim + N × k + N log N + Max_iter × (N × m × dim + N × dim + N + N log N)). When N, dim, and Max_iter are large, ignoring the lower-order terms, the dominant time complexity can be approximated as O(Max_iter × N × dim).

3.6.2. Time Complexity Analysis of the CGBPO Algorithm

The CGBPO algorithm uses chaotic initialization. This function generates chaotic values and maps them into the search space to produce N individuals of dimension dim, with time complexity O(N × dim). The subsequent operations, such as calculating fitness values and sorting, are the same as those of PO. Therefore, the total time complexity of the initialization stage is O(N × dim + N × k + N log N).
During the iteration process of CGBPO, Gaussian mutation and barycenter opposition-based learning operations are added. Each mutation operation involves calculations and boundary checks over the dim dimensions of an individual. Assuming each mutation operation has time complexity O(p × dim) (where p is a constant related to the mutation strategy), the time complexity of mutating N individuals is O(N × p × dim). The opposition-based learning operation also performs calculations and boundary control for N individuals, with time complexity O(N × dim). Adding the original individual update, boundary control, global-optimum update, and population-sorting operations of PO, the time complexity of each iteration is O(N × m × dim + N × dim + N + N log N + N × p × dim + N × dim), and the time complexity of the entire iteration stage is O(Max_iter × (N × m × dim + N × dim + N + N log N + N × p × dim + N × dim)).
Combining the initialization and iteration stages, the time complexity of CGBPO is O(N × dim + N × k + N log N + Max_iter × (N × m × dim + N × dim + N + N log N + N × p × dim + N × dim)). When N, dim, and Max_iter are large, ignoring the lower-order terms, the dominant time complexity can again be approximated as O(Max_iter × N × dim).

3.6.3. Comparison of the Time Complexities of the Two Algorithms

As can be seen from the above analysis, after ignoring the lower-order terms, the dominant time complexities of both PO and CGBPO are approximately O(Max_iter × N × dim). This means that in large-scale problems, as N, Max_iter, and dim increase, the growth trends in the computation times of the two algorithms are basically the same.
The CGBPO algorithm adds mutation and opposition-based learning operations during the iteration process, which introduce the additional terms O(Max_iter × N × p × dim) and O(Max_iter × N × dim) into its time-complexity expression. In practice, if the mutation and opposition-based learning operations are computationally expensive, each iteration of CGBPO will take longer than that of PO. However, these operations may also yield better optimization results, helping the algorithm converge to a better solution more quickly.

3.6.4. Experimental and Comparative Analyses of the Running Time

Table 1 presents the running times of the CGBPO and PO algorithms, along with the ratio of CGBPO’s running time to that of PO. Using the CEC2022 standard dataset and MATLAB simulation software, the test sets the population size to 30 and the maximum number of iterations to 300; each algorithm runs independently 30 times for each test function. Overall, at the low dimension of 30, CGBPO generally takes longer to run than PO, showing lower computational efficiency. However, as the dimension rises to 50 and 100, the running-time gap between the two narrows. Different test functions affect the running-time ratio differently: for complex functions, the ratio drops more notably with increasing dimension, indicating CGBPO’s potential for handling high-dimensional complex functions.

4. Experimental and Comparative Analyses of the CEC2017 Benchmark Suite

To evaluate CGBPO’s performance, the CEC2017 and CEC2022 benchmarks were selected. Using MATLAB simulation software, nine algorithms were tested and compared: Parrot Optimization (PO), Harris Hawks Optimization (HHO) [16], Antlion Optimizer (AO) [36], FOX optimization [37], Beluga Whale Optimization (BWO) [38], GOOSE optimization [39], Whale Optimization (WOA) [40], CMA-ES [41], and CGBPO. The population size was set to 30, the maximum iterations to 300, and the dimension to 10, with other parameters following the original literature. Each algorithm ran independently 30 times.
The CEC2017 benchmark has 29 functions. F1, F3, and F4 are unimodal functions, used to assess global convergence; F5–F11 are simple multimodal functions for testing the ability to escape local optima; F12–F21 are hybrid functions and F22–F30 are composition functions, both for testing how algorithms handle complex problems. The 3D landscapes of some CEC2017 test functions are shown in Figure 6.
F1 has an obvious global minimum near the bottom center of its landscape, useful for assessing whether an algorithm can find the global optimum. F7 is multimodal, with multiple local extrema and one global extremum, and is often used to evaluate global search and the ability to escape local optima. F17 has a complex stepped structure with multiple local extrema, testing an algorithm’s ability to locate the global optimum of multi-extremum functions. F27 is an extremely complex multimodal function with many local extrema, used to evaluate global-optimum finding and local-optimum avoidance in complex multimodal scenarios. F30 has a complex surface with multiple locally extreme regions, evaluating overall performance in complex function environments, including global search, convergence speed, and local-optimum avoidance.
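Every table entry in the comparisons that follow is produced by the same protocol: 30 independent runs per algorithm per function, reduced to the minimum, mean, and standard deviation of the best fitness values. A sketch of that protocol, using a simple random search as a hypothetical stand-in optimizer:

```python
import random
import statistics

def random_search(seed, evals=300):
    # Hypothetical stand-in optimizer: random search on the 1-D sphere f(x) = x^2.
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(evals):
        x = rng.uniform(-5.0, 5.0)
        best = min(best, x * x)
    return best

def summarize_runs(optimize, runs=30):
    # Best fitness of each independent run, reduced to (min, avg, std)
    # as reported in the comparison tables.
    results = [optimize(seed) for seed in range(runs)]
    return min(results), statistics.mean(results), statistics.stdev(results)

best, avg, std = summarize_runs(random_search)
```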

4.1. Statistical Results of Comparative Tests

Comparing CGBPO with the eight other algorithms on CEC2017, Table 2, Table 3 and Table 4 reveal a significant performance edge. On the unimodal functions F1, F3, and F4, CGBPO has notable advantages, although CMA-ES performs best on them. For the simple multimodal functions, CGBPO performs optimally on F5, F6, and F8–F10. Among the hybrid functions, CGBPO outperforms the others on F16, F17, F20, and F21; CMA-ES tops F12–F15, F18, and F19, with CGBPO ranking second. For the composition functions, CGBPO leads on F22, F27, and F28 and is competitive on the others, on which CMA-ES performs best overall.
The data show that CGBPO performs well on test functions both simple and complex. Across the functions analyzed, CGBPO generally surpasses PO and several other algorithms (such as HHO, AO, and BWO) in convergence speed, solution accuracy, and stability. This demonstrates CGBPO’s comprehensive performance advantage and its potential to offer better solutions for practical engineering problems.

4.2. Algorithm Convergence Curve on CEC2017

Figure 7 displays the comparative convergence curves of CGBPO and eight other algorithms for functions F1–F30. In most functions’ convergence profiles, the CGBPO curve trends sharply downward. For unimodal functions, CGBPO’s downward trend, convergence speed, and final fitness value rank second only to CMA-ES.
On simple multimodal functions like F6, F8, F9, and F10, CGBPO’s curve has a clear downward trend with a fast decline rate, reaching a low fitness value early in iteration and having a lower final convergence fitness than most other algorithms. For F7, while other algorithms’ curves fluctuate greatly, CGBPO remains stable and achieves better convergence.
On hybrid functions, such as F16, F17, F20, and F21, CGBPO’s curve drops rapidly and reaches a low fitness value early, outperforming others. On F12–F15, F18, and F19, CMA-ES performs best and CGBPO ranks second, still ahead of other algorithms.
For composition functions like F22, F24, F25, F27, and F28, CGBPO’s curve drops quickly, and on F22, F27, and F28 it reaches a low final fitness value, showing the best performance. On the other functions, CGBPO is also competitive. Although CMA-ES performs better on some functions, overall CGBPO shows excellent or strong performance across the various functions and holds a comprehensive performance advantage.

4.3. Algorithm Box Plot on CEC2017

Figure 8 shows box plots of CGBPO and the other algorithms. Across the functions, the lengths of CGBPO’s boxes and whiskers indicate the dispersion of its results. The elements of each plot are as follows: the rectangular box spans the middle 50% of the data; the horizontal line inside the box marks the median, which splits the data into two equal halves and reflects its central tendency; the whiskers extending from the ends of the box show the spread of the remaining data; and points beyond the whiskers are outliers that deviate markedly from the rest of the data.
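The quantities a box plot encodes can also be computed directly. The sketch below derives the median, quartiles, and whisker bounds, assuming the common 1.5 × IQR outlier convention (a plotting tool’s convention may differ):

```python
import statistics

def box_stats(data):
    # Median, quartiles, and outliers beyond the 1.5 * IQR whiskers.
    xs = sorted(data)
    q1, median, q3 = statistics.quantiles(xs, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [x for x in xs if x < low or x > high]
    return median, q1, q3, outliers
```

For example, thirty fitness values with one divergent run would show that run as a point beyond the whisker.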
For the unimodal function F1, CGBPO’s box plot is the lowest, with the smallest median fitness value, converging accurately to a better solution. For F3, CMA-ES’s box plot is the lowest, with CGBPO second. For F4, CGBPO performs well, with a low median fitness and high accuracy. CGBPO’s box plots for F1 and F4 are short, showing strong stability; on F3, CMA-ES is relatively stable.
For simple multimodal functions like F6, F8, F9, and F10, CGBPO’s box plots are low, with small median fitness values, converging accurately to better solutions. It also excels on F7, converging more accurately than most algorithms. These box plots are mostly short, indicating good stability.
For hybrid functions such as F16, F17, F20, and F21, CGBPO’s box plots are low, with high accuracy in the converged solutions. For F12–F15, F18, and F19, CMA-ES performs best and CGBPO second, still holding an accuracy advantage over the rest. On the functions where CGBPO performs well, its box plots are short, indicating good stability; on the CMA-ES-dominated functions, CGBPO’s stability is also good.
For composition functions like F22, F27, and F28, CGBPO’s box plots are the lowest, with the highest convergence accuracy. On the other functions, CGBPO is competitive and converges to good solutions. On its advantageous functions, its box plots are short, indicating good stability; on the others, its overall stability is acceptable and comparable to the other algorithms.
In general, CGBPO shows excellent comprehensive performance for different functions, having advantages in both convergence accuracy and stability for many functions.

4.4. Wilcoxon’s Rank-Sum Test on CEC2017

Table 5 compares, via Wilcoxon’s rank-sum test [42], the p-values between CGBPO and the other swarm intelligence algorithms on the CEC2017 benchmark. For functions such as F3, F4, F9, F16, F21, and F24, CGBPO’s p-values against the other eight algorithms are all far below 0.05, showing that its performance significantly surpasses theirs.
On some functions, the p-values between CGBPO and specific algorithms exceed 0.05, meaning no significant performance difference. For example, on F13, CGBPO’s p-values relative to PO, HHO, AO, and WOA are 8.53 × 10^−1, 4.20 × 10^−1, 1.41 × 10^−1, and 2.84 × 10^−1, respectively, indicating similar performance.
Overall, CGBPO performs excellently in most function tests, showing significant differences from, and often outperforming, the other algorithms. Even where its performance is similar to that of some algorithms on certain functions, its advantages remain evident, proving its strong competitiveness on the CEC2017 benchmark problems.
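For reference, a rank-sum p-value like those in Table 5 is computed from the two sets of 30 run results. The following is a minimal self-contained sketch using the normal approximation with tie-averaged ranks, not the exact implementation used in the paper:

```python
import math

def rank_sum_p(sample_a, sample_b):
    # Two-sided Wilcoxon rank-sum p-value via the normal approximation
    # (adequate for 30 runs per algorithm); tied values get average ranks.
    pooled = sample_a + sample_b
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1            # average 1-based rank for ties
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    n1, n2 = len(sample_a), len(sample_b)
    w = sum(ranks[:n1])                       # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
```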

4.5. Radar Chart and Average Ranking Chart

Figure 9 shows the radar chart and average ranking chart of CGBPO and the eight other intelligent algorithms on CEC2017. Notably, CGBPO’s radar chart has the least area fluctuation. It ranked first on two functions and second on twenty, indicating highly stable results across multiple runs that are largely unaffected by random factors.
Significantly, CGBPO had the lowest average fitness value and topped the ranking. This shows that in the comprehensive test, CGBPO obtained better solutions and outperformed the other eight algorithms in convergence performance.
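The average ranking in Figure 9b follows a simple recipe: rank the algorithms on each function by mean fitness (rank 1 = best), then average each algorithm’s rank over all functions. A sketch with a small made-up score matrix:

```python
def average_ranking(scores):
    # scores[f][a]: mean fitness of algorithm a on function f (lower is better).
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda a: row[a])
        for rank, alg in enumerate(order, start=1):
            totals[alg] += rank
    return [t / len(scores) for t in totals]

# Made-up example: three algorithms on three functions.
ranks = average_ranking([[1.0, 2.0, 3.0], [1.0, 3.0, 2.0], [2.0, 1.0, 3.0]])
```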

4.6. Analysis of High-Dimensional Function Tests

To confirm CGBPO’s superiority in handling high-dimensional complex problems, the dimension was set to 100 with other parameters unchanged. High-dimensional function test results in Table 6, Table 7 and Table 8 show that on all high-dimensional unimodal functions, CGBPO has notable advantages and stability in minimum and mean values. On multimodal functions, it significantly outperforms other algorithms. On hybrid and composition functions, it reaches the theoretical optimal value. In some functions’ standard deviation data, it ranks second only to CMA-ES. Overall, CGBPO still excels in high-dimensional complex problems and offers better solutions.

5. Experimental and Comparative Analyses on the CEC2022 Benchmark Suite

The CEC2022 benchmark suite has 12 single-objective test functions with boundary constraints, categorized as follows: F1 is a unimodal function for evaluating convergence speed and accuracy; F2–F5 are multimodal functions with multiple local optima, testing global search capabilities; F6-F8 are hybrid functions for comprehensively assessing algorithm performance under complex conditions; F9–F12 are composite functions for testing the ability to handle complex tasks. Some CEC2022 test functions’ 3D graphs are shown in Figure 10.
F4 has a complex multimodal shape with many local extreme points used to evaluate optimization algorithms’ performance in multimodal environments, like global search and avoiding local optima. F7 has a stepped distribution with multiple levels and potential multiple local extreme values used to evaluate algorithms’ ability to find the global optimum for complex multi-extreme-value functions. F10 has a highly complex multimodal shape and numerous local extreme points, making its optimization difficult, and can evaluate algorithms’ performance in complex multimodal scenarios, such as finding the global optimum and escaping local optima.

5.1. Statistical Results of Algorithm Tests on CEC2022

Table 9 shows the min, std, and avg data of CGBPO and eight other algorithms for 12 test functions. On unimodal function F1, CMA-ES performs best. CGBPO’s minimum and average values rank second only to CMA-ES and are much better than others, indicating high convergence accuracy on unimodal functions.
On multimodal function F3, CGBPO has the best minimum value and small standard deviation, showing strong stability. On F4, algorithms’ minimum values are close, and CGBPO’s has an edge with a reasonable standard deviation. On F5, CGBPO’s minimum value is better than most, except CMA-ES, proving good comprehensive performance on multimodal functions.
On hybrid function F6, CMA-ES has the best minimum value, and CGBPO ranks second but has a large standard deviation, resulting in poor stability. On F7, CGBPO has the best minimum value and small standard deviation, showing good stability. On F8, CMA-ES has the best minimum value, CGBPO ranks third with a small standard deviation, indicating strong stability. CGBPO is competitive overall despite performance fluctuations on hybrid functions.
On composition function F9, CMA-ES has the best minimum value, CGBPO ranks second, close to CMA-ES, and has a small standard deviation. On F10, CGBPO has the best minimum value and extremely small standard deviation, showing excellent stability. On F11, CMA-ES has the best minimum value, CGBPO ranks third with a small standard deviation. On F12, CMA-ES has the best minimum value, and CGBPO ranks second with a small standard deviation, indicating good comprehensive performance on composition functions.
In general, CGBPO shows high convergence accuracy, strong stability, and good global convergence on various functions. Its comprehensive performance is remarkable among many algorithms. Despite some flaws in individual functions, its overall performance is good, with obvious advantages on unimodal, multimodal, and composition functions.

5.2. Algorithm Convergence Curve on CEC2022

Figure 11 displays the convergence graphs of CGBPO and other algorithms for functions F1–F12. For unimodal function F1, CMA-ES’s curve drops fastest with the lowest final average fitness, performing best. CGBPO’s curve also drops quickly, with its final average fitness ranking second only to CMA-ES and better than others, showing good convergence speed and accuracy.
For multimodal functions, in F3, CGBPO’s curve starts low with a clear downward trend and the lowest final average fitness, performing best. In F4, algorithms’ curves are close initially, but CGBPO’s has a better downward trend later with the optimal average fitness. In F5, CGBPO’s curve drops rapidly and has the lowest final average fitness, indicating good convergence speed and accuracy.
For hybrid functions, in F6, CMA-ES’s curve has an obvious downward trend later, with the lowest final average fitness, while CGBPO’s performs well early but is overtaken later. In F7, CGBPO’s curve has the lowest final average fitness. In F8, CMA-ES’s curve has the lowest final average fitness, and CGBPO’s drops smoothly. CGBPO is competitive on hybrid functions, though less so than on unimodal and multimodal ones.
For composition functions, in F9, CMA-ES’s and CGBPO’s curves have similar downward trends with close, low final average fitness values, CGBPO being slightly worse. In F10, CGBPO’s curve drops rapidly and has the lowest final average fitness, performing best. In F11, the algorithms’ curves vary greatly, and CGBPO’s drops fast early on. In F12, CMA-ES’s curve has the lowest final average fitness and CGBPO’s ranks second, showing good overall performance on composition functions.

5.3. Algorithm Box Plot on CEC2022

Figure 12 shows box plots of test data distribution. For unimodal function F1, CMA-ES’s box plot is at the lowest, with the smallest fitness median and the best converged solution. CGBPO’s box plot is low with a short box, meaning it converges with a good solution and has small data dispersion, showing good stability.
For multimodal function F3, CGBPO’s box plot is at the lowest, with the smallest fitness median, indicating the best performance and converging to a better solution. Its box is short, showing good stability. For F4, CGBPO’s box plot is at the lowest. Despite outliers, the overall fitness median is small, with a good convergence effect. For F5, CGBPO’s box plot is at the lowest, with a fitness value much lower than most algorithms, showing high convergence accuracy.
For hybrid function F7, CGBPO’s box plot is low, with a small fitness median, good convergence, and small data dispersion, indicating good stability. For F8, CMA-ES’s box plot is at the lowest, performing best, while CGBPO’s is at an intermediate level, competitive but slightly worse.
For composition function F10, CGBPO’s box plot is at the lowest, with the smallest fitness median, the best performance, and a short box for good stability. For F12, CMA-ES’s box plot is at the lowest, and CGBPO’s is the second lowest, with good performance.
In general, CGBPO has obvious advantages on multimodal and composition functions, strong competitiveness on unimodal and hybrid functions, and outstanding comprehensive performance across different functions.

5.4. Wilcoxon’s Rank-Sum Test on CEC2022

Table 10 shows the rank-sum test results on the CEC2022 benchmark. When comparing CGBPO with BWO, FOX, GOOSE, WOA, and CMA-ES, the p-values on all 12 functions are below 0.05, indicating that CGBPO’s performance differs significantly from, and generally surpasses, theirs.
On four functions, the p-values of CGBPO relative to AO exceed 0.05, meaning CGBPO and AO perform similarly on those functions.
Overall, CGBPO performs remarkably in most function tests, showing significant differences from and often outperforming other algorithms. Clearly, CGBPO maintains strong competitiveness in solving the CEC2022 benchmark problems.

5.5. Radar Chart and Average Ranking Chart

Figure 13 shows a radar chart and an average ranking chart comparing CGBPO with the eight other algorithms on CEC2022. Notably, CGBPO’s radar chart has the least fluctuation. As Figure 13a shows, it ranked first on three functions (including F10) and second on four (such as F5), demonstrating remarkable performance and distinct advantages on these functions.
As Figure 13b shows, CGBPO had an average ranking of 2.25, tying for first place with CMA-ES. It demonstrated outstanding comprehensive performance across the test functions, outperforming the other algorithms overall.

6. Application in Engineering Problems

To verify CGBPO’s performance on complex engineering problems, tests were conducted on the design optimization of an industrial refrigeration system [43] and the optimization of Himmelblau’s function [44]. The optimization results were then compared and analyzed against those of the eight other algorithms mentioned above.

6.1. Optimization Results for Industrial Refrigeration Systems

With the continuous depletion of basic energy resources, energy conservation and emissions reduction have recently become a key concern across industries. Industrial refrigeration systems consume large amounts of energy in enterprises, and their design optimization seeks to strike a fine balance among performance, cost, and efficiency. This design problem involves 14 design variables and 15 constraint conditions in total. Its mathematical model is as follows:
Design variables:
x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}, x_{11}, x_{12}, x_{13}, x_{14}]
Objective function:
f(x) = 63098.88 x_2 x_4 x_{12} + 5441.5 x_2^2 x_{12} + 115055.5 x_2^{1.664} x_6 + 6172.27 x_2^2 x_6 + 63098.88 x_1 x_3 x_{11} + 5441.5 x_1^2 x_{11} + 115055.5 x_1^{1.664} x_5 + 6171.27 x_1^2 x_5 + 140.53 x_1 x_{11} + 281.29 x_3 x_{11} + 70.26 x_1^2 + 281.29 x_1 x_3 + 281.29 x_3^2 + 14437 x_8^{1.8812} x_{12}^{0.3424} x_{10} x_{14}^{-1} x_7 x_9^{-1} + 20470 x_7^{2.893} x_{11}^{0.316} x_1^2
Constraint conditions:
g_1(x) = 1.524 x_7^{-1} - 1 \le 0
g_2(x) = 1.524 x_8^{-1} - 1 \le 0
g_3(x) = 0.07789 x_1^2 x_7^{-3} x_9 - 1 \le 0
g_4(x) = 7.05305 x_9^{-1} x_1^2 x_{10} x_8^{-1} x_2^{-1} x_{14}^{-1} - 1 \le 0
g_5(x) = 0.0833 x_{13}^{-1} x_{14} - 1 \le 0
g_6(x) = 47.136 x_2^{0.333} x_{10}^{-1} x_{12} - 1.333 x_8 x_{13}^{2.1195} + 62.08 x_{13}^{2.1195} x_{12}^{-1} x_8^{0.2} x_{10}^{-1} - 1 \le 0
g_7(x) = 0.04771 x_{10} x_8^{1.8812} x_{12}^{0.3424} - 1 \le 0
g_8(x) = 0.0488 x_9 x_7^{1.893} x_{11}^{0.316} - 1 \le 0
g_9(x) = 0.0099 x_1 x_3^{-1} - 1 \le 0
g_{10}(x) = 0.0193 x_2 x_4^{-1} - 1 \le 0
g_{11}(x) = 0.0298 x_1 x_5^{-1} - 1 \le 0
g_{12}(x) = 0.056 x_2 x_6^{-1} - 1 \le 0
g_{13}(x) = 2 x_9^{-1} - 1 \le 0
g_{14}(x) = 2 x_{10}^{-1} - 1 \le 0
g_{15}(x) = x_{12} x_{11}^{-1} - 1 \le 0
Range of values:
0.001 \le x_i \le 5, \quad i = 1, \ldots, 14
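How a metaheuristic handles the 15 inequality constraints is not spelled out above; one common choice, shown here as an illustrative sketch rather than the paper's exact scheme, is a static penalty that adds a large multiple of the squared constraint violations to the objective:

```python
def penalized(objective, constraints, x, rho=1e6):
    # Static penalty: each violated g_i(x) <= 0 contributes rho * g_i(x)^2,
    # steering the search back toward the feasible region.
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + rho * violation

# Toy illustration: minimize x1 subject to g(x) = 2 - x1 <= 0 (i.e., x1 >= 2).
obj = lambda x: x[0]
cons = [lambda x: 2.0 - x[0]]
```

A feasible point is scored by the raw objective, while an infeasible one is heavily penalized, so ordinary fitness comparisons push the population toward feasibility.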
Table 11 compares the optimal results of the CGBPO algorithm with those of the other methods. Clearly, CGBPO achieved the minimum optimized value of 5.32 × 10^−1, showing its ability to meet stricter precision requirements. CGBPO’s average value and standard deviation were also lower than those of the other eight algorithms.
By comparing the convergence curves in Figure 14 and the box plots in Figure 15, it is evident that CGBPO was the most stable and converged fastest in the later iteration stage. This means CGBPO can optimize the refrigeration system design more quickly, reducing the system adjustment time while ensuring system performance.
In conclusion, CGBPO showed superiority and suitability in solving the design optimization problem of industrial refrigeration systems.

6.2. Optimization of Himmelblau’s Function

Himmelblau’s function is a commonly used multimodal function for evaluating optimization algorithm performance, mainly applied to nonlinear constrained optimization problems. The function has six nonlinear constraints and involves five variables; its mathematical expression is as follows:
Minimize
f(x) = 5.3578547 x_3^2 + 0.8356891 x_1 x_5 + 37.293239 x_1 - 40792.141
subject to
g_1(x) = -G_1 \le 0, \quad g_2(x) = G_1 - 92 \le 0,
g_3(x) = 90 - G_2 \le 0, \quad g_4(x) = G_2 - 110 \le 0,
g_5(x) = 20 - G_3 \le 0, \quad g_6(x) = G_3 - 25 \le 0,
where
G_1 = 85.334407 + 0.0056858 x_2 x_5 + 0.0006262 x_1 x_4 - 0.0022053 x_3 x_5,
G_2 = 80.51249 + 0.0071317 x_2 x_5 + 0.0029955 x_1 x_2 + 0.0021813 x_3^2,
G_3 = 9.300961 + 0.0047026 x_3 x_5 + 0.00125447 x_1 x_3 + 0.0019085 x_3 x_4,
with the bounds
78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_i \le 45 \ (i = 3, 4, 5).
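The problem above is small enough to encode directly. The sketch below evaluates the objective and the six constraints in g(x) ≤ 0 form, and checks them at the best solution commonly reported in the literature, approximately x = (78, 33, 29.995256, 45, 36.775813) with f ≈ −30665.539, which matches the −30,665.4 in Table 12 to the reported precision. A small tolerance covers the rounding of the literature solution.

```python
def himmelblau_constrained(x):
    # Objective and the six constraints (g_i(x) <= 0) of Himmelblau's problem.
    x1, x2, x3, x4, x5 = x
    G1 = 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5
    G2 = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 + 0.0021813 * x3 ** 2
    G3 = 9.300961 + 0.0047026 * x3 * x5 + 0.00125447 * x1 * x3 + 0.0019085 * x3 * x4
    f = 5.3578547 * x3 ** 2 + 0.8356891 * x1 * x5 + 37.293239 * x1 - 40792.141
    g = [-G1, G1 - 92.0, 90.0 - G2, G2 - 110.0, 20.0 - G3, G3 - 25.0]
    return f, g

# Best known solution (literature value, rounded).
best_x = (78.0, 33.0, 29.995256, 45.0, 36.775813)
f_best, g_best = himmelblau_constrained(best_x)
```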
As Table 12 shows, CGBPO achieves a minimum value of −30,665.4, the lowest among the compared algorithms. Regarding the std and avg values, CGBPO also outperforms its counterparts. From Figure 16 and Figure 17, CGBPO has the fastest convergence speed on this problem, obtaining the optimal solution earliest, with the best stability and the highest solution accuracy.
These results clearly show that CGBPO excelled at Himmelblau’s function optimization problem. It not only had an edge in finding the optimal solution but also surpassed the other algorithms in stability and overall performance, making it highly competitive.

7. Application of the CGBPO Algorithm in Indoor Visible Light Positioning

The CGBPO algorithm is applied to indoor visible light positioning [45]. In the indoor wireless visible-light transmission model, Light-Emitting Diodes (LEDs) are signal sources for data transmission, and Photo Diodes (PDs) are receivers for data reception to achieve high-precision positioning. A 5 m × 5 m × 6 m positioning model is established. Four LEDs on the ceiling, with coordinates (5, 0, 6), (0, 0, 6), (0, 5, 6), and (5, 5, 6), respectively, are set as signal sources.
To test the positioning error of CGBPO, at a height of 2 m, signal receivers are placed every 0.5 m in the 5 m length and 5 m width directions, creating 121 test points. A positioning-simulation experiment is conducted using MATLAB, and the experiment’s relevant parameters are set as shown in Table 13.
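The error metrics reported below reduce to the Euclidean distance between each actual receiver position and its estimate, averaged over the test grid. A sketch of the grid construction and the error computation (the estimates themselves would come from the positioning algorithm):

```python
import math

def receiver_grid(size=5.0, step=0.5, height=2.0):
    # Receivers every 0.5 m over the 5 m x 5 m plane at 2 m height:
    # an 11 x 11 grid, i.e., 121 test points.
    n = int(round(size / step)) + 1
    return [(i * step, j * step, height) for i in range(n) for j in range(n)]

def mean_position_error(actual, estimated):
    # Average Euclidean distance between actual and estimated positions.
    errors = [math.dist(a, e) for a, e in zip(actual, estimated)]
    return sum(errors) / len(errors)

points = receiver_grid()
```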
Figure 18 displays the distribution of the actual PD positions and the positions estimated by PO and CGBPO. The CGBPO algorithm’s estimated positions are nearer to the actual PD positions. Moreover, CGBPO has the best coverage, reaching 100%.
Figure 19 presents the error curves of the estimated positions for the two algorithms. Clearly, among the 121 test points, CGBPO has the smallest error in estimated positions, while PO shows relatively larger errors at each test point.
Figure 20 is a bar chart of the average errors of the estimated positions for the two algorithms. By comparison, the average error of PO’s estimated positions is about 0.0070 cm, while that of CGBPO is about 0.00034807 cm. The CGBPO algorithm shows the most stable positioning performance.

8. Conclusions

To overcome the drawbacks of the PO algorithm, such as becoming trapped in local optima and converging slowly on complex problems, this study proposed the CGBPO algorithm, which offers improved performance and applicability. CGBPO uses chaotic logistic mapping for initialization to increase population diversity, applies Gaussian mutation to updated individual positions to avoid premature convergence to local optima, and incorporates barycenter opposition-based learning to generate opposite solutions and boost global search ability. Simulation experiments on the CEC2017 and CEC2022 benchmarks against eight intelligent optimization algorithms, together with two complex engineering problems, showed that CGBPO improves solution accuracy and convergence speed while balancing global and local search. In industrial refrigeration system design optimization and Himmelblau’s function optimization, CGBPO achieved the highest accuracy, shortest optimization time, and best stability. In indoor visible light positioning, CGBPO’s estimated positions were closer to the actual PD positions than PO’s, with the best coverage and the smallest average error (about 0.00034807 cm).
Future research will explore integrating CGBPO with other advanced methods like the MAMGD optimization method, aiming to further improve convergence speed and training-result accuracy [46]. Combining CGBPO with the differential evolution (DE [47]) algorithm and implementing multi-objective optimization (e.g., extending to NSGA-II [48]) will also be considered to enhance its performance, expand applicability, and meet the increasing needs for complex optimization.

Author Contributions

Conceptualization, Y.Y., M.F., C.J., P.W. and X.Z.; methodology, Y.Y.; software, Y.Y.; validation, Y.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Natural Science Key Project of West Anhui University (WXZR202307), the University Key Research Project of Department of Education Anhui Province (2022AH051683 and 2024AH051994), and the University Innovation Team Project of Department of Education Anhui Province (2023AH010078).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All the data presented in this study are available within the main text.

Conflicts of Interest

The authors declare no conflicts of interest.


References

  1. Metropolis, N. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  2. Holland, J.H. Adaptation in Natural and Artificial Systems; The MIT Press: Cambridge, MA, USA, 1975. [Google Scholar]
  3. Kennedy, J.; Eberhart, R.C. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar] [CrossRef]
  4. Blum, C. Ant Colony Optimization; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2009. [Google Scholar]
  5. Passino, K.M. Bacterial Foraging Optimization. Int. J. Swarm Intell. Res. 2010, 1, 16. [Google Scholar] [CrossRef]
  6. Basturk, B.; Karaboga, D. An artificial bee colony (ABC) algorithm for numeric function optimization. In Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 12–14 May 2006. [Google Scholar]
  7. Yang, X.S.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
  8. Yang, X.S.; Gandomi, A.H. Bat Algorithm: A Novel Approach for Global Engineering Optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  9. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  10. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37. [Google Scholar] [CrossRef]
  11. Bansal, J.C.; Sharma, H.; Jadon, S.S.; Clerc, M. Spider Monkey Optimization algorithm for numerical optimization. Memetic Comput. 2014, 6, 31–47. [Google Scholar] [CrossRef]
  12. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  13. Jia, H.M.; Peng, X.X.; Lang, C.B. Remora Optimization Algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  14. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  15. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  16. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  17. Jia, H.; Wen, Q.; Wang, Y.; Mirjalili, S. Catch fish optimization algorithm: A new human behavior algorithm for solving clustering problems. Clust. Comput. 2024, 27, 13295–13332. [Google Scholar] [CrossRef]
  18. Trojovsk, P.; Dehghani, M. Pelican Optimization Algorithm: A Novel Nature-Inspired Algorithm for Engineering Applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef] [PubMed]
  19. Jia, H.M.; Rao, H.H.; Wen, C.S.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979. [Google Scholar] [CrossRef]
  20. Laskar, N.M.; Guha, K.; Chanda, S.; Baishnab, K.L.; Paul, P.K. HWPSO: A new Hybrid Whale-Particle Swarm Optimization Algorithm and its application in Electronic Design Optimization Problems. Appl. Intell. 2019, 49, 265–291. [Google Scholar] [CrossRef]
  21. Jiang, F.; Wang, L.; Bai, L. An Improved Whale Algorithm and Its Application in Truss Optimization. J. Bionic Eng. (Engl. Ed.) 2021, 18, 12. [Google Scholar] [CrossRef]
  22. Chen, J.; Chen, X.; Fu, Z. Improvement of the Seagull Optimization Algorithm and Its Application in Path Planning. J. Phys. Conf. Ser. 2022, 2216, 012076. [Google Scholar] [CrossRef]
  23. Jun, W.; Wen-Chuan, W.; Lin, Q.; Hu, X.X. Multi-strategy Fusion Improved Golden Jackal Optimization Algorithm and its Application in Parameter Estimation of the Muskingum Model. China Rural Water Hydropower 2024, 2, 1–7. [Google Scholar] [CrossRef]
  24. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef]
  25. Zhang, M.; Wang, D.; Yang, J. Hybrid-Flash Butterfly Optimization Algorithm with Logistic Mapping for Solving the Engineering Constrained Optimization Problems. Entropy 2022, 24, 525. [Google Scholar] [CrossRef]
  26. Shaik AL, H.P.; Manoharan, M.K.; Pani, A.K.; Avala, R.R.; Chen, C.-M. Gaussian mutation–spider monkey optimization (GM-SMO) model for remote sensing scene classification. Remote Sens. 2022, 14, 6279. [Google Scholar] [CrossRef]
  27. Song, T.; Zhang, L. A Moth-Flame Optimization Algorithm Combining Centroid-Based Opposite Mutation. Intell. Comput. Appl. 2020, 12, 104–115. [Google Scholar] [CrossRef]
  28. Liang, X.M.; Shi, L.Y.; Long, W. An improved snake optimization algorithm based on hybrid strategies and its application. Comput. Eng. Sci. 2024, 46, 693–706. [Google Scholar]
  29. Ahrari, A.; Elsayed, S.; Sarker, R.; Essam, D.; Coello, C.A.C. Problem definition and evaluation criteria for the cec’2022 competition on dynamic multimodal optimization. In Proceedings of the IEEE World Congress on Computational Intelligence (IEEE WCCI 2022), Padua, Italy, 18–23 July 2022; pp. 18–23. [Google Scholar]
  30. Bae, J.; Hwang, C.; Jun, D. The uniform laws of large numbers for the tent map. Stat. Probab. Lett. 2010, 80, 1437–1441. [Google Scholar] [CrossRef]
  31. Liu, Z.Y.; Liang, S.B.; Yuan, H.; Sun, H.K.; Liang, J. An Adaptive Equalization Optimization Algorithm Based on the Simplex Method. Chin. J. Sens. Actuators 2022, 35, 8. [Google Scholar]
  32. Gao, W.X.; Liu, S.; Xiao, Z.Y.; Yu, J. Butterfly Algorithm Optimized by Cauchy Mutation and Adaptive Weight. Comput. Eng. Appl. 2020, 56, 8. [Google Scholar] [CrossRef]
  33. Wang, C.; Ou, Y.; Shan, Z. Improved Particle Swarm Optimization Algorithm Based on Dynamic Change Speed Attenuation Factor and Inertia Weight Factor. J. Phys. Conf. Ser. 2021, 1732, 012072. [Google Scholar] [CrossRef]
  34. Zhan, H.X.; Wang, T.H.; Zhang, X. A Snake Optimization Algorithm Combining Opposite-Learning Mechanism and Differential Evolution Strategy. J. Zhengzhou Univ. (Nat. Sci. Ed.) 2024, 56, 25–31. [Google Scholar]
  35. Ban, Y.F.; Zhang, D.M.; Zuo, F.Q.; Shen, Q.W. Gazelle Optimization Algorithm Guided by Elite Opposite-Learning and Cauchy Perturbation. Foreign Electron. Meas. Technol. 2024, 43, 1–13. [Google Scholar]
  36. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Matlab Code of Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  37. Mohammed, H.; Rashid, T. FOX: A FOX-inspired optimization algorithm. Appl. Intell. 2023, 53, 1030–1050. [Google Scholar] [CrossRef]
  38. Zhong, C.T.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  39. Hamad, R.K.; Rashid, T.A. GOOSE algorithm: A powerful optimization tool for real-world engineering challenges and beyond. Evol. Syst. 2024, 15, 1249–1274. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  41. Uchida, K.; Hamano, R.; Nomura, M.; Saito, S.; Shirakawa, S. CMA-ES for Safe Optimization. arXiv 2024, arXiv:2405.10534. [Google Scholar]
  42. Lin, Y.D. Data Analysis of Energy Use Right Verification Based on Wilcoxon Signed–Rank Test. Chem. Eng. Equip. 2021, 187–189. [Google Scholar] [CrossRef]
  43. Li, Y.; Liang, X.; Liu, J.S.; Zhou, H. Solving Engineering Optimization Problems Based on Improved Balance Optimizer Algorithm. Comput. Integr. Manuf. Syst. 2023, 1–34. [Google Scholar] [CrossRef]
  44. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
  45. Jia, C.C.; Yang, T.; Wang, C.J.; Mengli, S. High accuracy 3D indoor visible light positioning method based on the improved adaptive cuckoo search algorithm. Arab. J. Sci. Eng. 2022, 47, 2479–2498. [Google Scholar]
  46. Sakovich, N.; Aksenov, D.; Pleshakova, E.; Gataullin, S. MAMGD: Gradient-Based Optimization Method Using Exponential Decay. Technologies 2024, 12, 154. [Google Scholar] [CrossRef]
  47. Stron, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1995, 11, 341–359. [Google Scholar] [CrossRef]
  48. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
Figure 1. Radar chart (a) and ranking chart (b) of three algorithms using map strategies.
Figure 2. Radar chart (a) and ranking chart (b) of three algorithms using mutation strategies.
Figure 3. Radar chart (a) and ranking chart (b) of three algorithms using opposition-based-learning strategies.
Figure 4. Radar chart (a) and ranking chart (b) of ablation study.
Figure 5. Flowchart of CGBPO.
Figure 6. Three-dimensional graphs of some test functions in the CEC2017 benchmark suite.
Figure 7. Convergence curves of the proposed and compared algorithms on the CEC2017 functions.
Figure 8. Box plots of functions on CEC2017.
Figure 9. Radar chart (a) and ranking chart (b) for functions in CEC2017.
Figure 10. Three-dimensional graphs of some test functions in CEC2022.
Figure 11. Convergence curves of functions on CEC2022.
Figure 12. Box plot of functions on CEC2022.
Figure 13. Radar chart (a) and ranking chart (b) for functions in CEC2022.
Figure 14. Convergence curves regarding the design optimization problem for industrial refrigeration systems.
Figure 15. Box plots regarding the design optimization problem for industrial refrigeration systems.
Figure 16. Convergence curves for Himmelblau's function optimization problem.
Figure 17. Box plots for Himmelblau's function optimization problem.
Figure 18. Distribution diagram of actual location.
Figure 19. Curve of estimated position error.
Figure 20. Comparison of average errors of estimated positions.
Table 1. Test results of the Avg_Time.
Function | CGBPO (Dim 30) | PO (Dim 30) | Ratio (Dim 30) | CGBPO (Dim 50) | PO (Dim 50) | Ratio (Dim 50) | CGBPO (Dim 100) | PO (Dim 100) | Ratio (Dim 100)
F1 | 3.15 | 0.94 | 3.35 | 6.08 | 2.35 | 2.59 | 6.42 | 3.08 | 2.08
F3 | 3.20 | 0.93 | 3.44 | 2.73 | 1.13 | 2.42 | 4.96 | 3.05 | 1.63
F4 | 2.87 | 0.89 | 3.22 | 2.75 | 1.15 | 2.39 | 5.00 | 3.05 | 1.64
F5 | 3.63 | 1.43 | 2.54 | 3.48 | 1.70 | 2.05 | 6.16 | 4.00 | 1.54
F6 | 4.71 | 2.04 | 2.31 | 5.57 | 2.99 | 1.86 | 9.37 | 6.72 | 1.39
F7 | 5.13 | 2.36 | 2.17 | 3.35 | 1.67 | 2.00 | 6.32 | 4.13 | 1.53
F8 | 4.41 | 1.40 | 3.15 | 3.29 | 1.61 | 2.04 | 6.36 | 4.16 | 1.53
F9 | 4.12 | 1.52 | 2.71 | 3.35 | 1.70 | 1.97 | 6.27 | 4.10 | 1.53
F10 | 5.31 | 1.65 | 3.22 | 3.73 | 1.97 | 1.89 | 7.09 | 4.69 | 1.51
F11 | 3.48 | 1.14 | 3.05 | 2.93 | 1.31 | 2.24 | 5.50 | 3.50 | 1.57
F12 | 3.62 | 1.43 | 2.53 | 3.29 | 1.65 | 1.99 | 6.24 | 4.11 | 1.52
F13 | 4.35 | 1.28 | 3.40 | 2.97 | 1.37 | 2.17 | 5.61 | 3.58 | 1.57
F14 | 3.96 | 1.68 | 2.36 | 3.57 | 1.94 | 1.84 | 7.04 | 4.73 | 1.49
F15 | 3.85 | 1.14 | 3.38 | 2.84 | 1.30 | 2.18 | 5.45 | 3.42 | 1.59
F16 | 3.39 | 1.51 | 2.25 | 3.17 | 1.49 | 2.13 | 5.94 | 3.83 | 1.55
F17 | 5.66 | 2.25 | 2.52 | 4.49 | 2.51 | 1.79 | 8.33 | 5.70 | 1.46
F18 | 3.63 | 1.39 | 2.61 | 3.12 | 1.50 | 2.08 | 6.05 | 3.87 | 1.56
F19 | 13.45 | 8.79 | 1.53 | 12.73 | 9.09 | 1.40 | 23.38 | 18.31 | 1.28
F20 | 6.17 | 3.96 | 1.56 | 4.92 | 2.75 | 1.79 | 8.68 | 6.06 | 1.43
F21 | 6.74 | 3.80 | 1.77 | 6.44 | 3.68 | 1.75 | 17.12 | 13.37 | 1.28
F22 | 8.19 | 4.47 | 1.83 | 7.49 | 4.24 | 1.77 | 19.64 | 14.90 | 1.32
F23 | 9.22 | 5.12 | 1.80 | 7.95 | 4.83 | 1.65 | 17.94 | 13.09 | 1.37
F24 | 9.23 | 5.05 | 1.83 | 9.21 | 6.25 | 1.47 | 19.36 | 14.26 | 1.36
F25 | 10.37 | 5.60 | 1.85 | 8.07 | 5.11 | 1.58 | 18.70 | 13.61 | 1.37
F26 | 15.90 | 8.96 | 1.77 | 10.00 | 6.88 | 1.45 | 25.67 | 21.37 | 1.20
F27 | 16.42 | 9.76 | 1.68 | 10.13 | 6.87 | 1.47 | 35.19 | 26.10 | 1.35
F28 | 13.24 | 8.01 | 1.65 | 9.47 | 6.58 | 1.44 | 31.88 | 23.14 | 1.38
F29 | 12.76 | 6.99 | 1.83 | 7.94 | 5.05 | 1.57 | 21.53 | 15.00 | 1.44
F30 | 24.97 | 7.94 | 3.15 | 16.12 | 12.40 | 1.30 | 41.24 | 31.38 | 1.31
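The Ratio columns in Table 1 are the average CGBPO runtime divided by the average PO runtime at each dimension. A minimal wall-clock harness of that form can be sketched as follows; `po_run` and `cgbpo_run` are hypothetical stand-in workloads, not the actual optimizers:

```python
import time

def avg_time(optimizer, runs=5):
    """Average wall-clock time of `optimizer` over several independent runs."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        optimizer()
        total += time.perf_counter() - start
    return total / runs

# Hypothetical stand-ins: CGBPO does extra work per iteration (Gaussian
# mutation, barycenter opposition-based learning), so it runs longer.
def po_run():
    sum(i * i for i in range(20_000))

def cgbpo_run():
    sum(i * i for i in range(60_000))

ratio = avg_time(cgbpo_run) / avg_time(po_run)  # analogous to a Ratio cell
print(f"Ratio = {ratio:.2f}")
```

A ratio modestly above 1, as in most rows of Table 1, means the added strategies cost extra time per run but not an order of magnitude more.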
Table 2. Test results on CEC2017 (unimodal and simple multi-modal functions).
Function | Criteria | CGBPO | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F1 | min | 293,165.4 | 5,337,319.1 | 1,135,557.7 | 2,275,979.9 | 763,179.6 | 5,513,946,763.3 | 3,257,262.2 | 23,872,113.1 | 100.0
F1 | std | 22,384,116.0 | 371,500,492.3 | 63,239,782.4 | 228,328,804.7 | 1,559,076,402.5 | 4,484,586,416.4 | 1,376,810,745.0 | 166,681,599.3 | 0.0
F1 | avg | 16,201,211.4 | 287,168,876.8 | 27,271,828.4 | 210,850,323.9 | 1,461,836,693.3 | 14,539,467,508.3 | 1,344,318,195.5 | 210,187,164.4 | 100.0
F3 | min | 398.4 | 405.7 | 587.9 | 1628.6 | 1031.9 | 4773.8 | 1340.2 | 1014.8 | 300.0
F3 | std | 740.6 | 1466.2 | 1267.1 | 1475.3 | 3369.6 | 2671.3 | 12,634.1 | 9712.1 | 0.0
F3 | avg | 1049.8 | 2090.4 | 2588.0 | 4694.3 | 3173.4 | 13,287.7 | 15,291.4 | 12,091.7 | 300.0
F4 | min | 400.0 | 407.6 | 405.5 | 407.9 | 401.9 | 737.8 | 404.0 | 404.6 | 400.0
F4 | std | 25.3 | 43.6 | 43.6 | 42.1 | 111.5 | 634.9 | 77.2 | 60.7 | 0.0
F4 | avg | 428.0 | 449.8 | 450.4 | 442.3 | 522.9 | 1831.9 | 493.8 | 483.2 | 400.0
F5 | min | 501.1 | 509.6 | 518.6 | 517.6 | 542.8 | 587.0 | 521.9 | 534.4 | 502.0
F5 | std | 13.2 | 15.2 | 21.5 | 13.1 | 24.0 | 18.0 | 37.5 | 18.1 | 56.7
F5 | avg | 535.5 | 544.2 | 561.4 | 540.8 | 587.6 | 623.7 | 585.2 | 567.8 | 560.3
F6 | min | 600.0 | 608.0 | 623.4 | 611.3 | 633.1 | 644.8 | 642.8 | 614.8 | 600.0
F6 | std | 7.3 | 10.6 | 8.7 | 8.2 | 7.7 | 7.5 | 9.0 | 16.1 | 11.8
F6 | avg | 621.9 | 625.2 | 641.8 | 625.8 | 655.2 | 663.3 | 659.3 | 641.6 | 654.0
F7 | min | 719.4 | 739.9 | 737.2 | 732.7 | 809.4 | 813.2 | 799.2 | 747.6 | 717.3
F7 | std | 14.4 | 20.0 | 22.8 | 12.8 | 5.7 | 11.1 | 169.1 | 21.1 | 33.8
F7 | avg | 766.0 | 779.2 | 790.2 | 761.7 | 815.1 | 840.6 | 934.3 | 781.7 | 801.7
F8 | min | 812.6 | 819.2 | 809.2 | 809.4 | 820.9 | 836.6 | 833.8 | 824.9 | 828.9
F8 | std | 1.7 | 8.7 | 9.2 | 7.4 | 14.1 | 9.2 | 23.8 | 19.1 | 1.8
F8 | avg | 827.9 | 832.6 | 830.9 | 826.7 | 840.4 | 860.8 | 868.7 | 851.8 | 833.8
F9 | min | 916.1 | 943.6 | 1000.9 | 951.1 | 1683.1 | 1383.2 | 1619.7 | 1069.4 | 1694.6
F9 | std | 23.4 | 159.3 | 243.2 | 147.7 | 73.9 | 186.4 | 593.6 | 506.4 | 24.5
F9 | avg | 1013.2 | 1160.2 | 1586.0 | 1135.5 | 1781.3 | 1952.1 | 2201.1 | 1677.2 | 1752.9
F10 | min | 1254.1 | 1546.8 | 1401.5 | 1551.6 | 1696.4 | 2394.9 | 1771.5 | 1774.3 | 1635.6
F10 | std | 164.7 | 326.8 | 324.9 | 314.0 | 486.7 | 222.3 | 319.2 | 285.0 | 420.6
F10 | avg | 2043.6 | 2188.0 | 2166.7 | 2043.7 | 2453.2 | 2657.4 | 2430.0 | 2290.4 | 2735.5
F11 | min | 1115.3 | 1109.9 | 1114.2 | 1142.6 | 1150.0 | 2394.7 | 1142.6 | 1161.6 | 1102.0
F11 | std | 39.3 | 48.9 | 87.9 | 105.5 | 1526.1 | 8928.6 | 2261.3 | 91.7 | 13.9
F11 | avg | 1176.0 | 1204.4 | 1207.0 | 1289.3 | 1736.8 | 14,496.2 | 2607.8 | 1311.5 | 1118.6
Table 3. Test results on CEC2017 (hybrid functions).
Function | Criteria | CGBPO | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F12 | min | 23,166.3 | 19,048.1 | 15,396.3 | 58,156.2 | 44,296.1 | 114,964,067.9 | 48,982.3 | 118,189.1 | 1318.6
F12 | std | 3,563,572.6 | 3,293,097.5 | 4,722,385.7 | 5,495,619.8 | 2,113,319.1 | 401,805,763.1 | 2,541,101.5 | 6,087,458.8 | 211.3
F12 | avg | 2,671,403.5 | 2,825,986.4 | 3,446,835.3 | 5,087,353.4 | 1,773,596.2 | 740,020,574.9 | 2,492,890.0 | 7,943,418.8 | 1729.3
F13 | min | 2224.5 | 1834.7 | 2030.8 | 2654.2 | 2951.6 | 38,485.7 | 1792.7 | 2069.7 | 1302.0
F13 | std | 12,101.8 | 18,023.8 | 14,042.9 | 15,445.9 | 13,066.4 | 24,838,681.3 | 17,417.9 | 13,474.7 | 63.0
F13 | avg | 15,770.3 | 18,663.8 | 15,065.5 | 20,517.2 | 15,470.1 | 24,204,221.3 | 17,116.0 | 19,114.9 | 1330.9
F14 | min | 1479.7 | 1480.8 | 1514.1 | 1756.8 | 1526.0 | 1499.3 | 1546.2 | 1510.8 | 1429.0
F14 | std | 1473.5 | 1808.1 | 1185.1 | 2173.7 | 7015.7 | 454.6 | 5071.0 | 2259.5 | 21.0
F14 | avg | 2425.8 | 2912.4 | 2531.4 | 3790.6 | 7838.9 | 2020.3 | 5854.3 | 3953.5 | 1452.5
F15 | min | 1680.5 | 1651.0 | 2668.4 | 1809.3 | 1686.9 | 4393.6 | 3325.8 | 1790.6 | 1501.5
F15 | std | 1491.4 | 1789.2 | 4001.2 | 8331.7 | 49,345.3 | 2262.1 | 33,132.5 | 10,028.8 | 28.1
F15 | avg | 3205.1 | 3764.8 | 9849.0 | 9654.0 | 50,508.8 | 9098.2 | 39,617.9 | 12,864.2 | 1531.9
F16 | min | 1631.8 | 1676.6 | 1604.9 | 1655.0 | 1820.7 | 2139.3 | 1658.6 | 1668.9 | 1840.7
F16 | std | 92.8 | 129.0 | 144.9 | 139.0 | 214.1 | 98.5 | 212.8 | 182.1 | 136.4
F16 | avg | 1809.5 | 1873.1 | 1994.9 | 1910.1 | 2205.4 | 2279.3 | 2240.5 | 1975.6 | 2069.9
F17 | min | 1744.3 | 1754.5 | 1735.3 | 1754.1 | 1745.8 | 1797.7 | 1731.0 | 1760.0 | 1750.7
F17 | std | 15.6 | 22.7 | 49.6 | 34.4 | 200.2 | 39.5 | 199.3 | 54.0 | 106.4
F17 | avg | 1775.2 | 1789.1 | 1790.2 | 1793.4 | 2016.3 | 1852.9 | 2063.6 | 1822.7 | 1824.6
F18 | min | 3436.7 | 2370.0 | 2126.9 | 3944.3 | 2343.7 | 2,411,503.5 | 2651.3 | 2410.2 | 1803.2
F18 | std | 13,653.2 | 16,726.2 | 12,569.2 | 26,208.3 | 14,078.8 | 973,037,204.6 | 13,162.7 | 14,359.1 | 32.3
F18 | avg | 25,915.7 | 22,058.7 | 16,882.7 | 36,689.5 | 18,464.6 | 704,515,737.9 | 19,156.0 | 20,070.8 | 1849.5
F19 | min | 1952.9 | 1998.4 | 2371.3 | 2061.4 | 2332.0 | 30,596.4 | 2151.8 | 2628.1 | 1903.0
F19 | std | 6986.4 | 11,645.0 | 120,958.9 | 179,321.3 | 22,156.1 | 12,905,067.3 | 15,347.9 | 353,208.3 | 9.5
F19 | avg | 7357.3 | 12,260.5 | 52,703.1 | 61,901.3 | 22,937.7 | 6,131,260.1 | 17,077.6 | 193,418.1 | 1914.8
F20 | min | 2053.2 | 2055.1 | 2090.5 | 2057.6 | 2046.8 | 2144.2 | 2066.1 | 2075.4 | 2431.2
F20 | std | 2.6 | 57.4 | 80.4 | 57.7 | 135.9 | 51.9 | 139.8 | 95.2 | 3.0
F20 | avg | 2134.0 | 2153.9 | 2205.2 | 2136.9 | 2285.1 | 2269.4 | 2301.2 | 2204.9 | 2442.1
F21 | min | 2203.0 | 2205.6 | 2210.6 | 2204.9 | 2231.4 | 2222.8 | 2205.0 | 2217.7 | 2302.6
F21 | std | 1.2 | 52.5 | 59.7 | 50.8 | 36.5 | 70.2 | 67.1 | 51.3 | 2.0
F21 | avg | 2214.2 | 2259.8 | 2322.4 | 2307.5 | 2373.3 | 2344.1 | 2348.8 | 2332.8 | 2306.2
Table 4. Test results on CEC2017 (composition functions and extended unimodal functions).
Function | Criteria | CGBPO | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F22 | min | 2249.5 | 2281.3 | 2247.5 | 2311.8 | 2312.1 | 2446.2 | 2311.3 | 2306.9 | 2300.0
F22 | std | 10.5 | 27.7 | 17.5 | 26.1 | 514.8 | 304.2 | 579.1 | 397.4 | 359.6
F22 | avg | 2314.9 | 2329.9 | 2318.8 | 2325.7 | 2650.6 | 3000.3 | 2692.4 | 2432.8 | 2365.7
F23 | min | 2626.0 | 2618.7 | 2648.4 | 2627.5 | 2663.3 | 2688.7 | 2669.4 | 2619.3 | 2603.0
F23 | std | 15.8 | 15.8 | 31.5 | 17.3 | 90.7 | 35.7 | 59.3 | 32.6 | 311.3
F23 | avg | 2649.1 | 2650.9 | 2696.0 | 2653.2 | 2767.9 | 2748.3 | 2774.9 | 2669.1 | 2853.9
F24 | min | 2505.5 | 2530.8 | 2529.7 | 2525.7 | 2687.5 | 2638.7 | 2769.5 | 2758.1 | 2500.0
F24 | std | 76.8 | 98.4 | 76.5 | 48.7 | 84.5 | 137.9 | 76.0 | 22.6 | 68.2
F24 | avg | 2545.7 | 2728.3 | 2836.9 | 2771.2 | 2894.1 | 2913.2 | 2926.0 | 2788.3 | 2522.3
F25 | min | 2900.0 | 2699.3 | 2625.6 | 2900.0 | 2831.1 | 3363.3 | 2600.4 | 2921.4 | 2898.0
F25 | std | 22.3 | 60.2 | 62.3 | 29.4 | 67.0 | 259.0 | 200.9 | 35.3 | 14.1
F25 | avg | 2941.4 | 2954.2 | 2931.6 | 2946.7 | 3001.5 | 3826.5 | 3031.0 | 2975.6 | 2939.5
F26 | min | 2727.3 | 2930.6 | 2836.8 | 2693.9 | 2859.2 | 3139.9 | 3127.4 | 2968.1 | 2800.0
F26 | std | 136.4 | 207.4 | 600.0 | 208.0 | 625.5 | 413.2 | 591.5 | 640.2 | 50.7
F26 | avg | 3077.5 | 3191.2 | 3650.9 | 3136.9 | 4164.6 | 3923.0 | 4234.4 | 3748.0 | 2846.7
F27 | min | 3080.8 | 3095.5 | 3107.2 | 3096.8 | 3126.9 | 3126.6 | 3148.7 | 3100.4 | 3089.5
F27 | std | 2.3 | 10.0 | 35.6 | 9.7 | 92.7 | 58.5 | 85.1 | 50.7 | 2.5
F27 | avg | 3090.0 | 3104.5 | 3177.1 | 3108.1 | 3232.1 | 3213.4 | 3258.6 | 3175.6 | 3096.1
F28 | min | 3014.6 | 3192.6 | 3164.2 | 3223.0 | 3170.5 | 3710.4 | 3189.5 | 3177.1 | 3100.0
F28 | std | 84.7 | 92.7 | 156.6 | 90.3 | 184.1 | 64.6 | 191.7 | 186.4 | 125.4
F28 | avg | 3297.5 | 3352.8 | 3446.6 | 3469.1 | 3549.4 | 3874.4 | 3557.2 | 3472.2 | 3322.9
F29 | min | 3198.9 | 3149.5 | 3184.1 | 3183.5 | 3181.9 | 3292.2 | 3268.4 | 3221.6 | 3131.7
F29 | std | 45.1 | 68.6 | 100.3 | 55.8 | 210.0 | 97.9 | 197.4 | 122.6 | 37.8
F29 | avg | 3249.3 | 3251.3 | 3337.4 | 3272.5 | 3538.3 | 3530.6 | 3663.0 | 3428.1 | 3183.8
F30 | min | 10,510.4 | 4960.0 | 137,198.0 | 12,779.8 | 6915.9 | 1,292,883.4 | 31,211.5 | 19,250.5 | 3394.9
F30 | std | 841,581.1 | 879,430.7 | 5,458,572.3 | 3,630,522.4 | 9,404,026.4 | 12,310,152.4 | 10,232,108.2 | 3,397,987.8 | 62.9
F30 | avg | 923,876.5 | 1,088,041.0 | 2,857,102.2 | 2,427,257.0 | 5,504,069.7 | 10,697,044.4 | 7,402,947.6 | 2,416,955.6 | 3451.1
Table 5. p-values obtained from Wilcoxon’s rank-sum test on CEC2017.
Function | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F1 | 1.87 × 10^−7 | 9.12 × 10^−1 | 9.83 × 10^−8 | 1.25 × 10^−7 | 3.02 × 10^−11 | 8.20 × 10^−7 | 3.47 × 10^−10 | 3.02 × 10^−11
F3 | 1.60 × 10^−3 | 1.03 × 10^−6 | 1.21 × 10^−10 | 2.88 × 10^−6 | 3.02 × 10^−11 | 1.78 × 10^−10 | 1.78 × 10^−10 | 1.21 × 10^−12
F4 | 6.15 × 10^−2 | 4.06 × 10^−2 | 7.24 × 10^−2 | 3.59 × 10^−5 | 3.02 × 10^−11 | 8.12 × 10^−4 | 8.66 × 10^−5 | 3.02 × 10^−11
F5 | 1.17 × 10^−2 | 8.29 × 10^−6 | 7.73 × 10^−2 | 3.47 × 10^−10 | 3.02 × 10^−11 | 1.01 × 10^−8 | 2.39 × 10^−8 | 7.84 × 10^−1
F6 | 2.58 × 10^−1 | 2.44 × 10^−9 | 7.98 × 10^−2 | 4.50 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.53 × 10^−7 | 5.33 × 10^−10
F7 | 1.17 × 10^−2 | 5.97 × 10^−5 | 2.77 × 10^−1 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.75 × 10^−3 | 1.11 × 10^−6
F8 | 5.75 × 10^−2 | 1.45 × 10^−1 | 7.39 × 10^−1 | 1.32 × 10^−4 | 6.07 × 10^−11 | 4.20 × 10^−10 | 1.73 × 10^−7 | 3.27 × 10^−3
F9 | 1.17 × 10^−4 | 1.33 × 10^−10 | 2.39 × 10^−4 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.70 × 10^−11 | 3.01 × 10^−11
F10 | 6.35 × 10^−2 | 9.05 × 10^−2 | 8.77 × 10^−1 | 8.12 × 10^−4 | 9.92 × 10^−11 | 1.75 × 10^−5 | 4.03 × 10^−3 | 1.36 × 10^−7
F11 | 2.92 × 10^−2 | 4.64 × 10^−1 | 7.60 × 10^−7 | 7.22 × 10^−6 | 3.02 × 10^−11 | 2.20 × 10^−7 | 1.56 × 10^−8 | 4.57 × 10^−9
F12 | 4.73 × 10^−1 | 4.46 × 10^−1 | 4.51 × 10^−2 | 9.59 × 10^−1 | 3.02 × 10^−11 | 4.29 × 10^−1 | 5.61 × 10^−5 | 3.02 × 10^−11
F13 | 8.53 × 10^−1 | 4.20 × 10^−1 | 1.41 × 10^−1 | 7.84 × 10^−1 | 3.34 × 10^−11 | 8.65 × 10^−1 | 2.84 × 10^−1 | 3.02 × 10^−11
F14 | 5.49 × 10^−1 | 1.67 × 10^−1 | 1.49 × 10^−4 | 7.66 × 10^−5 | 3.18 × 10^−1 | 1.32 × 10^−4 | 1.06 × 10^−3 | 6.70 × 10^−11
F15 | 1.76 × 10^−1 | 2.92 × 10^−9 | 3.52 × 10^−7 | 2.78 × 10^−7 | 1.78 × 10^−10 | 1.21 × 10^−10 | 8.20 × 10^−7 | 3.02 × 10^−11
F16 | 8.77 × 10^−2 | 1.11 × 10^−6 | 2.62 × 10^−3 | 1.29 × 10^−9 | 3.02 × 10^−11 | 8.89 × 10^−10 | 5.26 × 10^−4 | 1.07 × 10^−9
F17 | 2.81 × 10^−2 | 6.00 × 10^−1 | 3.64 × 10^−2 | 1.73 × 10^−7 | 4.50 × 10^−11 | 1.56 × 10^−8 | 3.83 × 10^−5 | 1.15 × 10^−1
F18 | 3.26 × 10^−1 | 1.12 × 10^−2 | 1.02 × 10^−1 | 3.27 × 10^−2 | 3.02 × 10^−11 | 5.01 × 10^−2 | 1.30 × 10^−1 | 3.02 × 10^−11
F19 | 1.22 × 10^−1 | 1.04 × 10^−4 | 5.56 × 10^−4 | 1.75 × 10^−5 | 3.02 × 10^−11 | 3.18 × 10^−4 | 2.78 × 10^−7 | 3.02 × 10^−11
F20 | 2.23 × 10^−1 | 4.46 × 10^−4 | 7.73 × 10^−1 | 1.09 × 10^−5 | 5.57 × 10^−10 | 1.09 × 10^−5 | 2.16 × 10^−3 | 3.02 × 10^−11
F21 | 3.59 × 10^−5 | 1.85 × 10^−8 | 1.31 × 10^−8 | 3.02 × 10^−11 | 4.98 × 10^−11 | 9.06 × 10^−8 | 6.70 × 10^−11 | 3.02 × 10^−11
F22 | 9.05 × 10^−2 | 7.73 × 10^−1 | 1.54 × 10^−1 | 1.17 × 10^−9 | 3.02 × 10^−11 | 1.07 × 10^−9 | 5.56 × 10^−4 | 3.78 × 10^−10
F23 | 7.17 × 10^−1 | 3.65 × 10^−8 | 7.06 × 10^−1 | 9.92 × 10^−11 | 3.02 × 10^−11 | 4.50 × 10^−11 | 1.27 × 10^−2 | 1.86 × 10^−1
F24 | 4.18 × 10^−9 | 6.70 × 10^−11 | 1.07 × 10^−9 | 5.49 × 10^−11 | 7.39 × 10^−11 | 3.69 × 10^−11 | 1.96 × 10^−10 | 4.83 × 10^−8
F25 | 8.24 × 10^−2 | 6.63 × 10^−1 | 3.87 × 10^−1 | 1.60 × 10^−7 | 3.02 × 10^−11 | 1.34 × 10^−5 | 1.17 × 10^−5 | 1.95 × 10^−3
F26 | 1.38 × 10^−2 | 1.49 × 10^−4 | 1.33 × 10^−1 | 5.00 × 10^−9 | 8.15 × 10^−11 | 3.82 × 10^−10 | 9.79 × 10^−5 | 2.38 × 10^−8
F27 | 1.37 × 10^−1 | 3.69 × 10^−11 | 2.60 × 10^−5 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.61 × 10^−10 | 1.62 × 10^−5
F28 | 2.51 × 10^−2 | 1.17 × 10^−4 | 1.01 × 10^−8 | 2.57 × 10^−7 | 3.02 × 10^−11 | 8.20 × 10^−7 | 2.96 × 10^−5 | 5.30 × 10^−1
F29 | 6.10 × 10^−1 | 1.58 × 10^−4 | 7.24 × 10^−2 | 3.96 × 10^−8 | 5.49 × 10^−11 | 7.39 × 10^−11 | 1.56 × 10^−8 | 7.09 × 10^−8
F30 | 5.20 × 10^−1 | 3.03 × 10^−2 | 2.23 × 10^−1 | 4.36 × 10^−2 | 1.78 × 10^−10 | 3.18 × 10^−4 | 2.92 × 10^−2 | 3.02 × 10^−11
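The p-values in Table 5 (and Table 10 below) come from Wilcoxon's rank-sum test applied to the independent run results of CGBPO versus each competitor; values below 0.05 indicate a statistically significant difference. A minimal pure-Python sketch of the two-sided test under the normal approximation (in practice one would use a library routine such as `scipy.stats.ranksums`):

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (average ranks for ties, no tie correction)."""
    pooled = sorted((v, 0 if i < len(x) else 1)
                    for i, v in enumerate(list(x) + list(y)))
    # Assign average ranks (1-based) to tied runs of equal values.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2                       # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # std of W under H0
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))           # two-sided p-value

# Two clearly separated samples give a small p; identical samples do not.
print(rank_sum_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14]))
```

With 30 runs per algorithm, a p-value on the order of 3.02 × 10^−11 (the value that recurs throughout Table 5) corresponds to the two samples being almost completely separated in rank.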
Table 6. Test results on CEC2017 for high-dimensional functions with dimension 100 (unimodal and simple multi-modal functions).
Function | Criteria | CGBPO | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F1 | min | 7.57 × 10^10 | 1.44 × 10^11 | 9.65 × 10^10 | 1.08 × 10^11 | 7.64 × 10^10 | 2.73 × 10^11 | 8.03 × 10^10 | 1.10 × 10^11 | 2.94 × 10^11
F1 | std | 1.52 × 10^10 | 1.18 × 10^10 | 1.10 × 10^10 | 1.22 × 10^10 | 4.05 × 10^9 | 4.74 × 10^9 | 9.07 × 10^10 | 1.48 × 10^10 | 1.50 × 10^9
F1 | avg | 8.78 × 10^10 | 1.67 × 10^11 | 1.21 × 10^11 | 1.30 × 10^11 | 9.27 × 10^10 | 2.85 × 10^11 | 1.43 × 10^11 | 1.42 × 10^11 | 2.97 × 10^11
F3 | min | 2.92 × 10^5 | 3.03 × 10^5 | 3.38 × 10^5 | 3.46 × 10^5 | 6.43 × 10^5 | 3.71 × 10^5 | 6.82 × 10^5 | 5.83 × 10^5 | 1.89 × 10^13
F3 | std | 2.48 × 10^4 | 1.79 × 10^4 | 9.13 × 10^4 | 5.80 × 10^3 | 1.29 × 10^5 | 6.97 × 10^8 | 1.31 × 10^5 | 1.01 × 10^5 | 5.74 × 10^13
F3 | avg | 3.31 × 10^5 | 3.43 × 10^5 | 3.75 × 10^5 | 3.63 × 10^5 | 8.60 × 10^5 | 1.92 × 10^8 | 8.70 × 10^5 | 9.19 × 10^5 | 1.05 × 10^14
F4 | min | 1.29 × 10^4 | 1.76 × 10^4 | 1.37 × 10^4 | 2.53 × 10^4 | 1.49 × 10^4 | 1.16 × 10^5 | 1.53 × 10^4 | 2.31 × 10^4 | 1.55 × 10^5
F4 | std | 3.18 × 10^3 | 5.68 × 10^3 | 3.43 × 10^3 | 4.41 × 10^3 | 1.26 × 10^3 | 1.07 × 10^4 | 2.82 × 10^4 | 5.28 × 10^3 | 1.75 × 10^3
F4 | avg | 1.84 × 10^4 | 2.94 × 10^4 | 2.08 × 10^4 | 3.25 × 10^4 | 1.71 × 10^4 | 1.34 × 10^5 | 4.23 × 10^4 | 3.23 × 10^4 | 1.58 × 10^5
F5 | min | 1.33 × 10^3 | 1.79 × 10^3 | 1.63 × 10^3 | 1.68 × 10^3 | 1.33 × 10^3 | 2.10 × 10^3 | 1.71 × 10^3 | 1.85 × 10^3 | 2.28 × 10^3
F5 | std | 6.86 × 10^1 | 6.20 × 10^1 | 5.65 × 10^1 | 6.69 × 10^1 | 2.29 × 10^1 | 3.29 × 10^1 | 4.18 × 10^2 | 7.06 × 10^1 | 3.06 × 10^1
F5 | avg | 1.84 × 10^3 | 1.90 × 10^3 | 1.74 × 10^3 | 1.81 × 10^3 | 1.38 × 10^3 | 2.16 × 10^3 | 1.80 × 10^3 | 1.99 × 10^3 | 2.34 × 10^3
F6 | min | 6.84 × 10^2 | 6.81 × 10^2 | 6.86 × 10^2 | 6.87 × 10^2 | 6.66 × 10^2 | 7.10 × 10^2 | 6.65 × 10^2 | 6.95 × 10^2 | 7.26 × 10^2
F6 | std | 5.67 × 10^0 | 6.35 × 10^0 | 3.94 × 10^0 | 4.96 × 10^0 | 1.79 × 10^0 | 1.96 × 10^0 | 1.31 × 10^1 | 1.03 × 10^1 | 3.70 × 10^0
F6 | avg | 6.70 × 10^2 | 6.99 × 10^2 | 6.93 × 10^2 | 6.95 × 10^2 | 6.92 × 10^2 | 7.15 × 10^2 | 6.80 × 10^2 | 7.11 × 10^2 | 7.35 × 10^2
F7 | min | 3.49 × 10^3 | 3.52 × 10^3 | 3.55 × 10^3 | 3.25 × 10^3 | 3.20 × 10^3 | 3.99 × 10^3 | 3.31 × 10^3 | 3.55 × 10^3 | 4.17 × 10^3
F7 | std | 1.04 × 10^2 | 1.02 × 10^2 | 1.03 × 10^2 | 1.54 × 10^2 | 6.37 × 10^1 | 4.84 × 10^1 | 4.11 × 10^3 | 1.36 × 10^2 | 5.36 × 10^1
F7 | avg | 3.34 × 10^3 | 3.74 × 10^3 | 3.81 × 10^3 | 3.60 × 10^3 | 3.67 × 10^3 | 4.12 × 10^3 | 7.00 × 10^3 | 3.86 × 10^3 | 4.27 × 10^3
F8 | min | 2.15 × 10^3 | 2.13 × 10^3 | 1.96 × 10^3 | 2.04 × 10^3 | 1.83 × 10^3 | 2.60 × 10^3 | 1.85 × 10^3 | 2.22 × 10^3 | 2.75 × 10^3
F8 | std | 7.75 × 10^1 | 7.84 × 10^1 | 7.18 × 10^1 | 6.35 × 10^1 | 2.43 × 10^1 | 2.79 × 10^1 | 3.79 × 10^2 | 1.40 × 10^2 | 2.88 × 10^1
F8 | avg | 1.89 × 10^3 | 2.34 × 10^3 | 2.20 × 10^3 | 2.22 × 10^3 | 2.29 × 10^3 | 2.66 × 10^3 | 2.29 × 10^3 | 2.48 × 10^3 | 2.81 × 10^3
F9 | min | 5.19 × 10^4 | 4.99 × 10^4 | 5.59 × 10^4 | 5.48 × 10^4 | 3.15 × 10^4 | 7.28 × 10^4 | 2.98 × 10^4 | 5.99 × 10^4 | 9.00 × 10^4
F9 | std | 1.48 × 10^3 | 7.00 × 10^3 | 6.03 × 10^3 | 6.68 × 10^3 | 5.62 × 10^3 | 3.10 × 10^3 | 1.73 × 10^4 | 1.84 × 10^4 | 8.79 × 10^3
F9 | avg | 6.28 × 10^4 | 6.38 × 10^4 | 7.32 × 10^4 | 6.85 × 10^4 | 3.46 × 10^4 | 8.06 × 10^4 | 5.81 × 10^4 | 8.44 × 10^4 | 1.07 × 10^5
F10 | min | 2.53 × 10^4 | 2.70 × 10^4 | 2.20 × 10^4 | 2.36 × 10^4 | 1.54 × 10^4 | 3.11 × 10^4 | 1.54 × 10^4 | 2.82 × 10^4 | 3.47 × 10^4
F10 | std | 1.37 × 10^3 | 1.37 × 10^3 | 1.53 × 10^3 | 1.74 × 10^3 | 1.07 × 10^3 | 5.68 × 10^2 | 1.29 × 10^3 | 1.06 × 10^3 | 6.76 × 10^2
F10 | avg | 2.82 × 10^4 | 2.91 × 10^4 | 2.52 × 10^4 | 2.67 × 10^4 | 1.74 × 10^4 | 3.29 × 10^4 | 1.77 × 10^4 | 3.03 × 10^4 | 3.63 × 10^4
F11 | min | 8.15 × 10^4 | 9.87 × 10^4 | 8.40 × 10^4 | 2.44 × 10^5 | 1.38 × 10^5 | 2.06 × 10^5 | 1.65 × 10^5 | 1.22 × 10^5 | 2.17 × 10^13
F11 | std | 2.03 × 10^4 | 2.09 × 10^4 | 3.65 × 10^4 | 1.17 × 10^5 | 6.69 × 10^4 | 5.77 × 10^4 | 8.43 × 10^4 | 1.36 × 10^5 | 4.53 × 10^11
F11 | avg | 1.23 × 10^5 | 1.36 × 10^5 | 1.95 × 10^5 | 4.41 × 10^5 | 2.74 × 10^5 | 3.53 × 10^5 | 3.02 × 10^5 | 3.24 × 10^5 | 2.34 × 10^13
Table 7. Test results on CEC2017 for high-dimensional functions with dimension 100 (hybrid functions).
Function | Criteria | CGBPO | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F12 | min | 1.56 × 10^10 | 4.07 × 10^10 | 1.68 × 10^10 | 4.12 × 10^10 | 3.00 × 10^10 | 2.02 × 10^11 | 3.23 × 10^10 | 3.54 × 10^10 | 2.55 × 10^11
F12 | std | 8.64 × 10^9 | 1.34 × 10^10 | 1.22 × 10^10 | 1.16 × 10^10 | 3.06 × 10^9 | 1.43 × 10^10 | 2.81 × 10^10 | 1.22 × 10^10 | 1.91 × 10^9
F12 | avg | 3.38 × 10^10 | 6.22 × 10^10 | 3.75 × 10^10 | 6.39 × 10^10 | 3.56 × 10^10 | 2.31 × 10^11 | 6.46 × 10^10 | 5.22 × 10^10 | 2.59 × 10^11
F13 | min | 1.11 × 10^9 | 6.40 × 10^9 | 8.62 × 10^8 | 4.61 × 10^9 | 1.50 × 10^9 | 4.69 × 10^10 | 1.52 × 10^9 | 3.00 × 10^9 | 6.42 × 10^10
F13 | std | 1.53 × 10^9 | 3.58 × 10^9 | 1.68 × 10^9 | 2.08 × 10^9 | 7.43 × 10^8 | 4.23 × 10^9 | 4.94 × 10^9 | 2.07 × 10^9 | 5.14 × 10^8
F13 | avg | 4.09 × 10^9 | 1.21 × 10^10 | 3.57 × 10^9 | 8.83 × 10^9 | 2.70 × 10^9 | 5.56 × 10^10 | 8.30 × 10^9 | 6.26 × 10^9 | 6.53 × 10^10
F14 | min | 2.67 × 10^6 | 1.07 × 10^7 | 7.47 × 10^6 | 9.89 × 10^6 | 3.12 × 10^6 | 8.63 × 10^7 | 1.61 × 10^6 | 9.00 × 10^6 | 1.32 × 10^9
F14 | std | 5.79 × 10^6 | 7.05 × 10^6 | 6.41 × 10^6 | 1.42 × 10^7 | 9.17 × 10^6 | 1.28 × 10^8 | 1.17 × 10^7 | 1.37 × 10^7 | 6.36 × 10^7
F14 | avg | 1.06 × 10^7 | 1.99 × 10^7 | 1.45 × 10^7 | 2.79 × 10^7 | 1.56 × 10^7 | 2.94 × 10^8 | 1.46 × 10^7 | 2.91 × 10^7 | 1.43 × 10^9
F15 | min | 1.58 × 10^4 | 2.06 × 10^8 | 6.73 × 10^7 | 9.10 × 10^8 | 2.58 × 10^7 | 2.14 × 10^10 | 3.08 × 10^8 | 4.99 × 10^8 | 4.04 × 10^10
F15 | std | 5.34 × 10^8 | 2.32 × 10^9 | 5.46 × 10^8 | 1.04 × 10^9 | 2.02 × 10^7 | 4.61 × 10^9 | 1.04 × 10^9 | 5.88 × 10^8 | 4.02 × 10^8
F15 | avg | 7.45 × 10^7 | 3.23 × 10^9 | 5.87 × 10^8 | 2.64 × 10^9 | 1.10 × 10^8 | 3.25 × 10^10 | 9.11 × 10^8 | 1.29 × 10^9 | 4.11 × 10^10
F16 | min | 1.58 × 10^7 | 2.06 × 10^8 | 6.73 × 10^7 | 9.10 × 10^8 | 2.58 × 10^7 | 2.14 × 10^10 | 3.08 × 10^8 | 4.99 × 10^8 | 4.04 × 10^10
F16 | std | 5.34 × 10^8 | 2.32 × 10^9 | 5.46 × 10^8 | 1.04 × 10^9 | 2.02 × 10^7 | 4.61 × 10^9 | 1.04 × 10^9 | 5.88 × 10^8 | 4.02 × 10^8
F16 | avg | 7.45 × 10^7 | 3.23 × 10^9 | 5.87 × 10^8 | 2.64 × 10^9 | 1.10 × 10^8 | 3.25 × 10^10 | 9.11 × 10^8 | 1.29 × 10^9 | 4.11 × 10^10
F17 | min | 5.30 × 10^3 | 7.47 × 10^3 | 6.06 × 10^3 | 1.23 × 10^4 | 7.19 × 10^3 | 3.67 × 10^6 | 5.87 × 10^3 | 8.83 × 10^3 | 1.52 × 10^8
F17 | std | 1.84 × 10^3 | 5.53 × 10^4 | 7.58 × 10^4 | 6.78 × 10^4 | 8.55 × 10^3 | 2.26 × 10^7 | 6.17 × 10^3 | 6.67 × 10^4 | 9.97 × 10^6
F17 | avg | 1.10 × 10^4 | 5.60 × 10^4 | 1.41 × 10^4 | 7.93 × 10^4 | 1.01 × 10^4 | 3.55 × 10^7 | 1.93 × 10^4 | 4.92 × 10^4 | 1.72 × 10^8
F18 | min | 1.36 × 10^6 | 6.07 × 10^6 | 2.24 × 10^6 | 4.92 × 10^7 | 1.53 × 10^6 | 1.34 × 10^8 | 1.98 × 10^6 | 5.07 × 10^6 | 1.33 × 10^9
F18 | std | 6.74 × 10^6 | 1.49 × 10^7 | 7.52 × 10^6 | 1.47 × 10^7 | 9.57 × 10^6 | 2.71 × 10^8 | 9.29 × 10^6 | 1.22 × 10^7 | 6.09 × 10^7
F18 | avg | 1.64 × 10^6 | 2.80 × 10^7 | 1.21 × 10^7 | 3.08 × 10^7 | 6.52 × 10^7 | 7.23 × 10^8 | 9.59 × 10^6 | 2.26 × 10^7 | 1.44 × 10^9
F19 | min | 2.57 × 10^8 | 1.21 × 10^9 | 1.11 × 10^8 | 7.17 × 10^8 | 9.75 × 10^5 | 2.03 × 10^10 | 1.11 × 10^6 | 4.96 × 10^8 | 4.06 × 10^10
F19 | std | 3.96 × 10^7 | 2.17 × 10^9 | 3.38 × 10^8 | 9.36 × 10^8 | 5.89 × 10^8 | 3.51 × 10^9 | 1.20 × 10^9 | 5.74 × 10^8 | 4.31 × 10^8
F19 | avg | 6.88 × 10^8 | 3.62 × 10^9 | 4.54 × 10^8 | 2.35 × 10^9 | 3.22 × 10^7 | 3.38 × 10^10 | 8.59 × 10^8 | 1.31 × 10^9 | 4.15 × 10^10
F20 | min | 5.33 × 10^3 | 6.03 × 10^3 | 5.35 × 10^3 | 5.47 × 10^3 | 4.79 × 10^3 | 7.20 × 10^3 | 5.54 × 10^3 | 6.45 × 10^3 | 9.91 × 10^3
F20 | std | 5.04 × 10^2 | 5.68 × 10^2 | 4.34 × 10^2 | 4.84 × 10^2 | 5.81 × 10^2 | 3.24 × 10^2 | 6.06 × 10^2 | 6.16 × 10^2 | 1.38 × 10^2
F20 | avg | 6.40 × 10^3 | 6.95 × 10^3 | 6.37 × 10^3 | 6.35 × 10^3 | 6.46 × 10^3 | 7.99 × 10^3 | 6.51 × 10^3 | 7.39 × 10^3 | 1.04 × 10^4
F21 | min | 3.69 × 10^3 | 3.86 × 10^3 | 3.98 × 10^3 | 3.89 × 10^3 | 3.91 × 10^3 | 4.97 × 10^3 | 4.09 × 10^3 | 3.94 × 10^3 | 7.35 × 10^3
F21 | std | 1.41 × 10^2 | 1.46 × 10^2 | 1.97 × 10^2 | 3.10 × 10^2 | 2.67 × 10^2 | 1.36 × 10^2 | 2.38 × 10^2 | 2.49 × 10^2 | 5.55 × 10^−12
F21 | avg | 4.05 × 10^3 | 4.12 × 10^3 | 4.50 × 10^3 | 4.42 × 10^3 | 4.50 × 10^3 | 5.22 × 10^3 | 4.52 × 10^3 | 4.53 × 10^3 | 7.35 × 10^3
Table 8. Test results on CEC2017 for high-dimensional functions with dimension 100 (composition functions and extended unimodal functions).
Function | Criteria | CGBPO | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F22 | min | 2.54 × 10^4 | 2.84 × 10^4 | 2.73 × 10^4 | 2.85 × 10^4 | 1.68 × 10^4 | 3.43 × 10^4 | 1.82 × 10^4 | 2.89 × 10^4 | 3.86 × 10^4
F22 | std | 2.02 × 10^3 | 1.33 × 10^3 | 1.17 × 10^3 | 1.11 × 10^3 | 1.63 × 10^3 | 4.53 × 10^2 | 1.79 × 10^3 | 1.36 × 10^3 | 7.37 × 10^2
F22 | avg | 3.06 × 10^4 | 3.18 × 10^4 | 2.90 × 10^4 | 2.99 × 10^4 | 2.05 × 10^4 | 3.51 × 10^4 | 2.08 × 10^4 | 3.26 × 10^4 | 4.04 × 10^4
F23 | min | 4.53 × 10^3 | 4.61 × 10^3 | 5.38 × 10^3 | 4.53 × 10^3 | 5.13 × 10^3 | 7.01 × 10^3 | 5.23 × 10^3 | 4.94 × 10^3 | 8.42 × 10^3
F23 | std | 2.19 × 10^2 | 2.18 × 10^2 | 5.22 × 10^2 | 2.79 × 10^2 | 4.30 × 10^2 | 4.49 × 10^2 | 3.53 × 10^2 | 2.43 × 10^2 | 5.55 × 10^−12
F23 | avg | 4.92 × 10^3 | 4.93 × 10^3 | 6.05 × 10^3 | 5.15 × 10^3 | 6.07 × 10^3 | 7.95 × 10^3 | 5.95 × 10^3 | 5.44 × 10^3 | 8.42 × 10^3
F24 | min | 5.76 × 10^3 | 5.82 × 10^3 | 7.31 × 10^3 | 6.20 × 10^3 | 8.17 × 10^3 | 1.18 × 10^4 | 8.09 × 10^3 | 6.46 × 10^3 | 1.42 × 10^4
F24 | std | 3.32 × 10^2 | 4.14 × 10^2 | 6.95 × 10^2 | 4.62 × 10^2 | 4.39 × 10^2 | 1.19 × 10^3 | 6.12 × 10^2 | 3.73 × 10^2 | 7.40 × 10^−12
F24 | avg | 6.31 × 10^3 | 6.60 × 10^3 | 8.76 × 10^3 | 7.15 × 10^3 | 8.94 × 10^3 | 1.43 × 10^4 | 9.03 × 10^3 | 6.95 × 10^3 | 1.42 × 10^4
F25 | min | 8.61 × 10^3 | 1.15 × 10^4 | 7.97 × 10^3 | 9.97 × 10^3 | 7.28 × 10^3 | 2.84 × 10^4 | 7.70 × 10^3 | 1.13 × 10^4 | 3.47 × 10^4
F25 | std | 1.35 × 10^3 | 1.77 × 10^3 | 1.20 × 10^3 | 1.30 × 10^3 | 3.20 × 10^2 | 1.44 × 10^3 | 1.38 × 10^4 | 1.35 × 10^3 | 5.51 × 10^2
F25 | avg | 8.13 × 10^3 | 1.55 × 10^4 | 9.92 × 10^3 | 1.28 × 10^4 | 1.04 × 10^4 | 3.27 × 10^4 | 2.33 × 10^4 | 1.41 × 10^4 | 3.55 × 10^4
F26 | min | 2.65 × 10^4 | 3.06 × 10^4 | 2.90 × 10^4 | 3.24 × 10^4 | 2.82 × 10^4 | 5.31 × 10^4 | 2.88 × 10^4 | 3.50 × 10^4 | 6.52 × 10^4
F26 | std | 3.00 × 10^3 | 2.68 × 10^3 | 2.29 × 10^3 | 1.86 × 10^3 | 8.80 × 10^2 | 1.83 × 10^3 | 1.03 × 10^4 | 2.93 × 10^3 | 3.91 × 10^2
F26 | avg | 3.00 × 10^4 | 3.89 × 10^4 | 3.41 × 10^4 | 3.67 × 10^4 | 3.60 × 10^4 | 5.87 × 10^4 | 3.90 × 10^4 | 3.99 × 10^4 | 6.59 × 10^4
F27 | min | 4.65 × 10^3 | 4.85 × 10^3 | 5.38 × 10^3 | 6.97 × 10^3 | 6.52 × 10^3 | 1.36 × 10^4 | 6.42 × 10^3 | 5.26 × 10^3 | 2.48 × 10^4
F27 | std | 7.11 × 10^2 | 7.70 × 10^2 | 1.37 × 10^3 | 6.68 × 10^2 | 1.78 × 10^3 | 1.34 × 10^3 | 1.76 × 10^3 | 1.17 × 10^3 | 2.51 × 10^2
F27 | avg | 5.74 × 10^3 | 6.00 × 10^3 | 7.53 × 10^3 | 8.38 × 10^3 | 9.65 × 10^3 | 1.63 × 10^4 | 9.58 × 10^3 | 6.94 × 10^3 | 2.54 × 10^4
F28 | min | 1.16 × 10^4 | 1.60 × 10^4 | 1.11 × 10^4 | 1.47 × 10^4 | 1.25 × 10^4 | 3.11 × 10^4 | 1.46 × 10^4 | 1.51 × 10^4 | 4.27 × 10^4
F28 | std | 1.65 × 10^3 | 1.57 × 10^3 | 9.16 × 10^2 | 1.72 × 10^3 | 5.94 × 10^2 | 1.07 × 10^3 | 7.19 × 10^3 | 1.14 × 10^3 | 3.62 × 10^2
F28 | avg | 1.60 × 10^4 | 1.89 × 10^4 | 1.31 × 10^4 | 1.79 × 10^4 | 1.35 × 10^4 | 3.29 × 10^4 | 2.64 × 10^4 | 1.79 × 10^4 | 4.33 × 10^4
F29 | min | 1.16 × 10^4 | 1.49 × 10^4 | 1.35 × 10^4 | 1.65 × 10^4 | 1.20 × 10^4 | 4.74 × 10^5 | 1.20 × 10^4 | 1.70 × 10^4 | 7.34 × 10^6
F29 | std | 1.20 × 10^3 | 1.31 × 10^4 | 1.94 × 10^3 | 9.56 × 10^3 | 4.37 × 10^3 | 8.06 × 10^5 | 1.43 × 10^4 | 7.27 × 10^3 | 5.06 × 10^5
F29 | avg | 1.31 × 10^4 | 3.10 × 10^4 | 1.86 × 10^4 | 2.89 × 10^4 | 1.59 × 10^4 | 1.90 × 10^6 | 2.24 × 10^4 | 2.58 × 10^4 | 8.38 × 10^6
F30 | min | 1.22 × 10^9 | 3.20 × 10^9 | 1.28 × 10^9 | 4.62 × 10^9 | 1.43 × 10^9 | 4.06 × 10^0 | 1.80 × 10^9 | 2.42 × 10^9 | 5.95 × 10^0
F30 | std | 1.33 × 10^9 | 3.28 × 10^9 | 1.61 × 10^9 | 2.23 × 10^9 | 1.02 × 10^9 | 4.80 × 10^9 | 3.93 × 10^9 | 2.10 × 10^9 | 5.85 × 10^8
F30 | avg | 3.13 × 10^9 | 9.77 × 10^9 | 3.38 × 10^9 | 8.44 × 10^9 | 4.79 × 10^9 | 5.06 × 10^0 | 6.97 × 10^9 | 5.24 × 10^9 | 6.07 × 10^0
Table 9. Test results on CEC2022.
Function | Criteria | CGBPO | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F1 | min | 325.52 | 355.75 | 1069.21 | 2349.95 | 385.19 | 11,193.72 | 1783.09 | 11,849.54 | 300.00
F1 | std | 280.32 | 1049.90 | 1304.51 | 2719.52 | 8462.17 | 16,636.39 | 10,576.64 | 13,601.11 | 0.00
F1 | avg | 620.23 | 1058.95 | 3038.21 | 6366.09 | 14,786.70 | 32,693.97 | 15,310.28 | 28,148.44 | 300.00
F2 | min | 406.11 | 404.48 | 400.25 | 401.69 | 400.02 | 885.63 | 400.03 | 411.90 | 400.00
F2 | std | 26.26 | 32.19 | 72.18 | 56.08 | 165.15 | 1010.27 | 106.39 | 41.97 | 2.37
F2 | avg | 441.81 | 452.71 | 474.15 | 467.09 | 558.96 | 2377.61 | 513.78 | 474.28 | 407.80
F3 | min | 607.97 | 609.34 | 619.88 | 609.13 | 635.91 | 632.74 | 628.56 | 612.25 | 608.81
F3 | std | 7.18 | 11.38 | 12.05 | 8.37 | 9.49 | 10.97 | 14.96 | 16.70 | 9.87
F3 | avg | 624.54 | 626.79 | 641.84 | 623.54 | 654.60 | 660.93 | 656.53 | 645.33 | 655.49
F4 | min | 812.96 | 816.17 | 811.24 | 810.26 | 821.89 | 843.08 | 816.91 | 817.79 | 828.85
F4 | std | 7.15 | 6.45 | 6.41 | 7.01 | 4.77 | 6.28 | 17.95 | 13.79 | 0.81
F4 | avg | 823.54 | 831.40 | 825.92 | 825.61 | 832.77 | 855.14 | 852.68 | 843.79 | 832.44
F5 | min | 916.66 | 959.82 | 1025.37 | 931.88 | 1435.96 | 1377.33 | 1381.03 | 1098.63 | 1425.20
F5 | std | 123.80 | 196.77 | 197.33 | 133.64 | 85.64 | 185.84 | 557.80 | 561.01 | 13.44
F5 | avg | 1019.74 | 1208.81 | 1467.60 | 1089.63 | 1484.54 | 1745.06 | 2032.22 | 1674.67 | 1459.28
F6 | min | 1956.99 | 1883.56 | 2035.15 | 3559.25 | 1889.86 | 17,422,511.16 | 1997.97 | 2280.29 | 1800.36
F6 | std | 2213.76 | 2351.93 | 6096.25 | 85,201.85 | 2023.99 | 672,783,088.06 | 2368.52 | 24,520.47 | 6.06
F6 | avg | 4354.42 | 4572.60 | 9321.28 | 67,141.02 | 4379.85 | 1,015,177,368.97 | 4464.96 | 16,555.45 | 1807.81
F7 | min | 2003.98 | 2027.32 | 2048.69 | 2031.87 | 2077.88 | 2029.84 | 2059.45 | 2040.18 | 2094.92
F7 | std | 14.19 | 17.17 | 30.29 | 17.47 | 56.33 | 18.49 | 87.86 | 28.80 | 92.59
F7 | avg | 2055.09 | 2066.57 | 2092.73 | 2055.46 | 2151.88 | 2135.21 | 2185.77 | 2088.42 | 2288.13
F8 | min | 2216.14 | 2223.61 | 2209.77 | 2227.42 | 2224.13 | 2237.67 | 2224.85 | 2221.67 | 2201.71
F8 | std | 5.36 | 5.89 | 12.90 | 4.41 | 137.79 | 26.95 | 140.15 | 25.02 | 26.97
F8 | avg | 2229.93 | 2232.01 | 2234.72 | 2232.85 | 2358.08 | 2276.56 | 2438.53 | 2243.10 | 2226.92
F9 | min | 2529.86 | 2546.97 | 2542.16 | 2568.95 | 2553.72 | 2701.75 | 2569.87 | 2539.11 | 2529.28
F9 | std | 38.37 | 44.83 | 45.63 | 37.40 | 63.61 | 41.77 | 52.96 | 51.03 | 0.00
F9 | avg | 2581.28 | 2609.89 | 2641.10 | 2631.16 | 2676.27 | 2778.04 | 2657.18 | 2631.48 | 2529.28
F10 | min | 2500.44 | 2500.72 | 2500.62 | 2500.96 | 2500.99 | 2582.11 | 2500.68 | 2500.65 | 2604.91
F10 | std | 0.86 | 57.94 | 210.94 | 57.11 | 651.13 | 283.78 | 481.74 | 267.94 | 2.42
F10 | avg | 2501.38 | 2536.22 | 2637.54 | 2587.76 | 3209.99 | 2881.40 | 2955.98 | 2646.60 | 2608.25
F11 | min | 2674.68 | 2706.49 | 2612.50 | 2661.30 | 2715.42 | 3034.73 | 2725.87 | 2739.26 | 2600.00
F11 | std | 25.55 | 79.65 | 198.51 | 109.34 | 387.75 | 462.46 | 30,599.07 | 167.34 | 134.93
F11 | avg | 2747.09 | 2801.13 | 2889.72 | 2784.05 | 3333.11 | 3673.11 | 31,826.27 | 3091.88 | 2820.00
F12 | min | 2863.50 | 2863.60 | 2864.61 | 2866.04 | 2901.59 | 2895.73 | 2881.81 | 2865.46 | 2862.70
F12 | std | 5.25 | 12.17 | 64.60 | 10.39 | 119.98 | 73.62 | 100.61 | 57.31 | 1.06
F12 | avg | 2867.53 | 2871.59 | 2926.29 | 2873.24 | 3040.99 | 3028.49 | 3011.19 | 2914.78 | 2865.22
Table 10. p-values from Wilcoxon’s rank-sum test on CEC2022.
Function | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
F1 | 8.24 × 10^−2 | 5.49 × 10^−11 | 3.02 × 10^−11 | 4.62 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.21 × 10^−12
F2 | 1.91 × 10^−1 | 2.32 × 10^−2 | 2.24 × 10^−2 | 7.20 × 10^−5 | 3.02 × 10^−11 | 9.52 × 10^−4 | 8.56 × 10^−4 | 2.90 × 10^−9
F3 | 7.28 × 10^−1 | 3.26 × 10^−7 | 6.31 × 10^−1 | 6.07 × 10^−11 | 5.49 × 10^−11 | 3.82 × 10^−10 | 6.05 × 10^−7 | 4.81 × 10^−10
F4 | 4.35 × 10^−5 | 1.30 × 10^−1 | 1.76 × 10^−1 | 1.19 × 10^−6 | 3.02 × 10^−11 | 2.44 × 10^−9 | 2.20 × 10^−7 | 1.59 × 10^−7
F5 | 5.46 × 10^−6 | 4.20 × 10^−10 | 6.55 × 10^−4 | 4.62 × 10^−10 | 4.50 × 10^−11 | 8.15 × 10^−11 | 4.20 × 10^−10 | 5.55 × 10^−10
F6 | 9.35 × 10^−1 | 4.22 × 10^−4 | 1.29 × 10^−9 | 8.88 × 10^−1 | 3.02 × 10^−11 | 8.88 × 10^−1 | 3.56 × 10^−4 | 3.02 × 10^−11
F7 | 7.62 × 10^−3 | 2.78 × 10^−7 | 7.28 × 10^−1 | 3.34 × 10^−11 | 3.02 × 10^−11 | 2.37 × 10^−10 | 4.12 × 10^−6 | 5.57 × 10^−10
F8 | 2.64 × 10^−1 | 1.96 × 10^−1 | 1.84 × 10^−2 | 8.15 × 10^−5 | 4.98 × 10^−11 | 6.72 × 10^−10 | 1.58 × 10^−4 | 1.61 × 10^−6
F9 | 1.17 × 10^−2 | 5.46 × 10^−6 | 3.16 × 10^−5 | 1.07 × 10^−7 | 3.02 × 10^−11 | 8.20 × 10^−7 | 1.41 × 10^−4 | 3.02 × 10^−11
F10 | 3.85 × 10^−3 | 1.17 × 10^−5 | 1.55 × 10^−9 | 6.72 × 10^−10 | 3.02 × 10^−11 | 9.06 × 10^−8 | 2.24 × 10^−2 | 3.02 × 10^−11
F11 | 2.00 × 10^−5 | 2.75 × 10^−3 | 4.20 × 10^−1 | 4.62 × 10^−10 | 3.02 × 10^−11 | 5.07 × 10^−10 | 5.46 × 10^−9 | 1.95 × 10^−3
F12 | 9.93 × 10^−2 | 1.07 × 10^−9 | 2.49 × 10^−6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.34 × 10^−11 | 1.31 × 10^−8 | 5.31 × 10^−3
Table 11. Test results regarding the design optimization problem for industrial refrigeration systems.
Criteria | CGBPO | PO | HHO | AO | FOX | BWO | GOOSE | WOA | CMA-ES
min | 5.32 × 10^−1 | 1.11 × 10^0 | 2.32 × 10^2 | 5.01 × 10^3 | 4.93 × 10^0 | 3.04 × 10^16 | 8.62 × 10^0 | 2.23 × 10^1 | 1.09 × 10^0
std | 2.96 × 10^14 | 3.01 × 10^14 | 1.95 × 10^15 | 6.52 × 10^15 | 1.80 × 10^15 | 1.38 × 10^16 | 2.64 × 10^15 | 1.77 × 10^16 | 4.83 × 10^14
avg | 9.36 × 10^13 | 9.52 × 10^13 | 1.14 × 10^15 | 3.74 × 10^15 | 1.51 × 10^15 | 4.93 × 10^16 | 1.96 × 10^15 | 1.00 × 10^16 | 5.62 × 10^14
Table 12. Test results for Himmel Blau’s function optimization problem.
Criteria | CGBPO | WOA | PO | HHO | AO | FOX | BWO | GOOSE | CMA-ES
min | −30,664.6 | −30,665.4 | −30,651.6 | −30,556 | −30,665.3 | −30,238.9 | −30,664.4 | −30,045.7 | −29,782.9
std | 113.3692 | 201.8359 | 263.5462 | 237.3315 | 249.8378 | 154.6196 | 225.8986 | 451.5172 | 196.1186
avg | −30,509.6 | −30,505.3 | −30,442.6 | −30,299.2 | −30,394.7 | −30,068 | −30,471.4 | −29,477.6 | −29,433
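The values near −30,665 in Table 12 are consistent with the classic five-variable Himmelblau constrained benchmark. The sketch below evaluates the commonly published objective and inequality constraints; the coefficients and the quoted best-known solution are taken from the standard formulation in the literature, not from this paper, and should be checked against its exact problem definition:

```python
def himmelblau_objective(x):
    """Objective of the classic 5-variable constrained Himmelblau benchmark
    (coefficients as commonly published; x2 and x4 appear only in the
    constraints, not in the objective)."""
    x1, x2, x3, x4, x5 = x
    return (5.3578547 * x3 ** 2 + 0.8356891 * x1 * x5
            + 37.293239 * x1 - 40792.141)

def constraints_satisfied(x):
    """Check the three range constraints of the standard formulation."""
    x1, x2, x3, x4, x5 = x
    g1 = 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5
    g2 = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 + 0.0021813 * x3 ** 2
    g3 = 9.300961 + 0.0047026 * x3 * x5 + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4
    return (0 <= g1 <= 92) and (90 <= g2 <= 110) and (20 <= g3 <= 25)

# Approximate best-known solution reported in the literature.
x_best = (78.0, 33.0, 29.995, 45.0, 36.776)
print(himmelblau_objective(x_best))  # roughly -30665, matching Table 12
```

Under this formulation, a minimum around −30,665 sits on the boundary of the first and third constraints, which explains why small constraint-handling differences between algorithms produce the spread of averages seen in Table 12.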
Table 13. Related-parameter settings.
Parameter | Value
Emitted power of the LED, P_t | 2.2 W
Photoelectric conversion efficiency of the PD, R_p | 0.5 A/W
Equivalent noise bandwidth, B | 100 Mb/s
Background photocurrent, I_bg | 5100 μA
Noise bandwidth factor, I_2 | 0.562
Noise bandwidth factor, I_3 | 0.0868
Absolute temperature, T_k | 295 K
Open-loop voltage gain, G | 10
Transconductance of the field-effect transistor, g_m | 30 mS
Channel noise coefficient of the field-effect transistor, Γ | 1.5
Unit-area capacitance of the PD, η | 112 pF/cm^2
Population size, N_p | 100
Maximum iteration number, G_max | 60
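For context, parameters such as I_bg, I_2, I_3, T_k, G, g_m, Γ, and η in Table 13 typically enter the receiver SNR through the standard shot-plus-thermal noise model for photodiode front ends (e.g., the Kahn and Barry formulation widely used in visible light positioning work). The sketch below is an illustrative computation under that assumed model; the detector area and received optical power are invented for illustration, and the paper's exact expressions may differ:

```python
import math

# Physical constants
Q = 1.602e-19      # electron charge (C)
K_B = 1.381e-23    # Boltzmann constant (J/K)

# Parameters from Table 13
RP = 0.5           # PD responsivity (A/W)
B = 100e6          # equivalent noise bandwidth (Hz)
I_BG = 5100e-6     # background photocurrent (A)
I2, I3 = 0.562, 0.0868
T_K = 295.0        # absolute temperature (K)
G_OL = 10.0        # open-loop voltage gain
G_M = 30e-3        # FET transconductance (S)
GAMMA = 1.5        # FET channel noise coefficient
ETA = 112e-12 / 1e-4  # fixed capacitance per unit area: 112 pF/cm^2 in F/m^2

# Assumed values for illustration only (not from the paper)
AREA = 1e-4        # detector area: 1 cm^2
P_R = 1e-3         # received optical power: 1 mW

def snr_db(p_r, area=AREA):
    """Electrical SNR under the common shot + thermal noise model."""
    shot = 2 * Q * RP * p_r * B + 2 * Q * I_BG * I2 * B
    thermal = ((8 * math.pi * K_B * T_K / G_OL) * ETA * area * I2 * B ** 2
               + (16 * math.pi ** 2 * K_B * T_K * GAMMA / G_M)
               * ETA ** 2 * area ** 2 * I3 * B ** 3)
    return 10 * math.log10((RP * p_r) ** 2 / (shot + thermal))

print(f"SNR at 1 mW received power: {snr_db(P_R):.1f} dB")
```

The received power itself depends on the LED power P_t and the Lambertian channel gain at each grid point, which is where the position estimate enters the positioning objective.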
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Yang, Y.; Fu, M.; Zhou, X.; Jia, C.; Wei, P. A Multi-Strategy Parrot Optimization Algorithm and Its Application. Biomimetics 2025, 10, 153. https://doi.org/10.3390/biomimetics10030153