1. Introduction
With the increasing complexity of real-world problems, traditional methods have encountered numerous limitations. In response to these challenges, intelligent optimization algorithms have progressively emerged. They stem from the observation and emulation of natural phenomena and the intelligent behaviors of organisms. Early algorithms were comparatively simple and mainly simulated natural processes or biological evolution; examples include the Simulated Annealing (SA) algorithm [1], the Genetic Algorithm (GA) [2], the Particle Swarm Optimization (PSO) algorithm [3], and the Ant Colony Optimization (ACO) algorithm [4].
With the rapid development of science and technology, intelligent optimization algorithms have evolved and diversified into numerous types. These include Bacterial Foraging Optimization (BFO) [5], the Artificial Bee Colony (ABC) [6], Cuckoo Search (CS) [7], the Bat Algorithm (BA) [8], Moth Flame Optimization (MFO) [9], the Pigeon Swarm Optimization Algorithm (PSOA) [10], Spider Monkey Optimization (SMO) [11], the Seagull Optimization Algorithm (SOA) [12], the Remora Optimization Algorithm (ROA) [13], the Sooty Tern Optimization Algorithm (STOA) [14], and Gray Wolf Optimization (GWO) [15]. In recent years, new algorithms such as Harris Hawks Optimization (HHO) [16], the Catch Fish Optimization Algorithm (CFOA) [17], the Pelican Optimization Algorithm (POA) [18], and the Crayfish Optimization Algorithm (COA) [19] have also emerged.
Most intelligent optimization algorithms simulate the behaviors and habits of natural organisms to efficiently solve complex problems. For instance, the authors of [20] proposed a Hybrid Whale Particle Swarm Optimization (HWPSO) algorithm, leveraging the “forced” whale and “capping” phenomena. Evaluated in tasks such as operational amplifier circuit sizing, minimizing the pull-in voltage of a micro-electro-mechanical system switch, and reducing the random offset of a differential amplifier, HWPSO operated efficiently and improved the optimal values significantly. In [21], an Improved Whale Optimization Algorithm (IWOA) was proposed, combining the crossover and mutation operators of the Differential Evolution (DE) algorithm and introducing a cloud-adaptive inertia weight. This algorithm was applied to truss structure optimization for 52-bar planar and 25-bar spatial trusses. The authors of Ref. [22] enhanced the Seagull Optimization Algorithm with Levy flight (LSOA), achieving good results on path-planning problems. Moreover, Ref. [23] introduced a Multi-strategy Golden Jackal Optimization (MGJO) algorithm, which initialized the population via chaotic mapping, adopted a non-linear dynamic inertia weight, and used Cauchy mutation to boost population diversity, effectively estimating the parameters of the non-linear Muskingum model.
Parrot Optimization (PO) [24], proposed by J. Lian et al. in 2024, is inspired by parrots' learning behaviors. It emulates parrots' foraging, staying, communication, and wariness of strangers to extensively search the search space for the optimal solution. Yet, when handling complex problems with high-dimensional, multi-modal, and non-linear objective functions, the PO algorithm's search efficiency and solution accuracy may suffer. It often converges prematurely to local optima or fails to fully explore the search space, hindering the discovery of the global optimum. Furthermore, lacking an adaptive parameter adjustment mechanism, the PO algorithm's parameters cannot be tuned automatically. Manual parameter adjustment is therefore required for each optimization problem, which decreases the algorithm's convergence rate.
To address PO's drawbacks, such as becoming trapped in local optima and slow convergence, we developed a multi-strategy PO algorithm named CGBPO. Firstly, chaotic logistic mapping replaces the traditional random initialization. This distributes the initial population more evenly and widely, preventing individuals from clustering in specific areas of the search space [25]; it boosts population diversity and enhances global search ability. Secondly, Gaussian mutation is applied to the updated individual positions [26]. Each individual has a chance to generate a new, relatively random position nearby, breaking local convergence and increasing diversity. Thirdly, after each iteration, we calculate the barycenter of the current population and generate the corresponding opposite solutions [27]. The search space is then explored from the direction opposite to the current individuals with respect to the barycenter, guiding the population toward more promising areas. This makes the search more directional and comprehensive and better suited to complex high-dimensional and multimodal optimization problems.
To validate the CGBPO algorithm, 30 independent experiments were carried out on the CEC2017 [28] and CEC2022 [29] benchmark suites and on two engineering problems, followed by a comparison with eight swarm intelligence algorithms. The results show that the three strategies in CGBPO effectively enhance its performance, notably addressing the issues of premature convergence to local optima and slow convergence speed.
The paper is organized as follows: Section 2 overviews the PO algorithm. Section 3 elaborates on the proposed multi-strategy CGBPO. Sections 4 and 5 detail the experimental tests of CGBPO on the CEC2017 and CEC2022 benchmarks and analyze the results. Section 6 presents the comparative tests of CGBPO on engineering problems. Section 7 describes the application of CGBPO in an indoor visible-light-positioning system. Finally, Section 8 summarizes the study and outlines future research directions.
3. Multi-Strategy Parrot Optimization Algorithm (CGBPO)
3.1. Chaotic Logistic Map Strategy
The chaotic logistic map is defined using a simple recurrence relation, as depicted in Equation (7):

$$x_{k+1} = \mu x_k (1 - x_k) \quad (7)$$

In this context, $x_k$ stands for the state value at the $k$th iteration, while $x_{k+1}$ represents the state value at the subsequent iteration. The parameter $\mu$ serves as a control factor, and its value generally falls within the range of 0 to 4. When the initial value is confined to the interval between 0 and 1, sequences generated from different starting points display entirely distinct trajectories as the number of iterations increases. Furthermore, when $\mu$ lies within the approximate interval from 3.57 to 4, the system displays chaotic characteristics; in this case, $\mu = 4$ is selected. For different initial values, the system exhibits chaotic behavior after multiple iterations: the sequence of state values appears disorderly and fluctuates in a seemingly random manner, and even a minute difference in the initial value results in completely different trajectories for the subsequent state values as the iterations progress.
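For concreteness, the following minimal Python sketch shows how a logistic-map sequence can seed a population inside the search bounds; the function name, the number of warm-up iterations, and the scaling step are illustrative choices rather than the paper's exact implementation.

```python
import numpy as np

def logistic_chaotic_init(pop_size, dim, lb, ub, mu=4.0, seed=None):
    """Chaotic initialization sketch: every coordinate is driven by the
    logistic map x_{k+1} = mu * x_k * (1 - x_k) and then scaled to [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    # Start from random values strictly inside (0, 1).
    x = rng.uniform(0.01, 0.99, size=(pop_size, dim))
    # Iterate the map a few times so the sequences settle into the chaotic regime.
    for _ in range(20):
        x = mu * x * (1.0 - x)
    # Map the chaotic values onto the actual search interval.
    return lb + x * (ub - lb)

# Example: 30 parrots in 10 dimensions over [-100, 100]^10.
population = logistic_chaotic_init(30, 10, -100.0, 100.0, seed=1)
```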
There are other common chaotic strategies, such as the Tent map [30] and the Sine map [31]. The Tent map shows chaotic features only in a narrow range around a control-parameter value of 2, so its chaotic domain is relatively narrow. The Sine map turns chaotic when its control parameter approaches 1, but the boundaries of its chaotic regime are ambiguous. Compared with them, the logistic map has outstanding randomness and ergodicity: it can cover the defined interval evenly without obvious data aggregation. It is also highly sensitive to the initial value; a tiny change can lead to greatly different results after multiple iterations, a key chaotic feature. The Tent and Sine maps are likewise sensitive to initial conditions, but less so than the logistic map, and in the early iterations the differences arising from different initial values are not obvious.
For this reason, the logistic map is used in the proposed algorithm's initialization and perturbation. It can boost the algorithm's global search ability, avoid local optima, and enhance overall optimization performance. In the initialization of the proposed CGBPO method, the logistic map sets the initial positions of the individual parrots, introducing chaotic perturbations. Over the iterations, individual search trajectories become more random, breaking away from the local optima in which the traditional update mode may become stuck. This improves the algorithm's global search ability, balances exploration and exploitation, and enhances its optimization ability.
To assess the performance of the selected map strategies, the CEC2022 standard dataset is used with MATLAB (R2023a) simulation software. Three improved variants of PO are tested: CPO, TPO, and SPO, which incorporate the chaotic logistic map, the Tent map, and the Sine map, respectively. Their results are then compared and analyzed. The population size is set to 30, the maximum number of iterations is 300, and each of the three algorithms runs independently 30 times for every test function.
As shown in Figure 1a, the CPO radar chart has the least area fluctuation. CPO ranked first for six functions and second for three functions, indicating highly stable results across multiple runs that are unaffected by random factors. The ranking chart in Figure 1b, sorted by average fitness values, shows that CPO has the lowest average fitness value. In the comprehensive test, the CPO algorithm obtained better solutions and outperformed the other two algorithms in convergence performance.
3.2. Gaussian Mutation Strategy
Gaussian mutation randomly modifies the genetic information of individuals with random perturbation values that follow a Gaussian (normal) distribution, thereby producing new individuals after the mutation process. The Gaussian mutation function is presented in Equation (8). The $\mathrm{Gaussian}(\mu, \sigma)$ function is employed to generate random numbers that comply with the normal distribution, where $\mu$ and $\sigma$ serve as the mean and standard deviation of the Gaussian function, respectively, as illustrated in Equations (9) and (10):

$$\mu = \frac{ub + lb}{2} \quad (9)$$

$$\sigma = \frac{ub - lb}{6} \quad (10)$$

Here, $lb$ symbolizes the lower bound of the variable, while $ub$ represents the upper bound. The average of the upper and lower bounds is taken as the central position of the normal distribution. This configuration ensures that the generated random mutation values are theoretically distributed in a relatively uniform manner around this value range. Since the distribution is normal, approximately 99.7% of the mutation values fall within the range of the mean plus or minus three times the standard deviation, that is, $[\mu - 3\sigma, \mu + 3\sigma]$. Consequently, the mutation values are highly likely to lie within this interval centered on the mean, whose width is equivalent to $ub - lb$ (as $3\sigma = (ub - lb)/2$, extending by this amount on both sides precisely covers the width of the interval). This implies that the degree of dispersion of the mutation is relatively moderate: it neither causes the new solutions generated by mutation to be overly scattered and far from the original value range nor makes the mutation overly concentrated, which would lose the ability to explore new solution spaces. Instead, it represents a form of exploration with a certain breadth that remains controllable within the given upper- and lower-bound interval. Consequently, when the range of the upper and lower bounds (i.e., the value of $ub - lb$) is large, $\sigma$ is also large, signifying an increase in the degree of dispersion of the mutation. The mutation operation can then explore a larger value range more comprehensively, and individuals are more likely to break free from local optima and search new areas of the solution space far from the current solution. Conversely, if the range of the upper and lower bounds is small, $\sigma$ becomes smaller, the degree of dispersion of the mutation decreases, and the new solutions generated by the mutation operation stay closer to the mean position, focusing on fine-tuning and local optimization within a relatively small interval.
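As a concrete illustration, here is a minimal Python sketch of a Gaussian mutation operator built from Equations (9) and (10); whether the sampled value replaces the coordinate outright (as written here) or perturbs it is governed by the paper's Equation (8), so treat that choice and the clipping step as assumptions of the sketch.

```python
import numpy as np

def gaussian_mutation(position, lb, ub, rng=None):
    """Gaussian mutation sketch following Eqs. (9)-(10): sample each coordinate
    from N(mu, sigma) with mu = (ub + lb) / 2 and sigma = (ub - lb) / 6, so that
    roughly 99.7% of samples already lie inside [lb, ub]."""
    rng = rng if rng is not None else np.random.default_rng()
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    mu = (ub + lb) / 2.0       # Eq. (9): centre of the variable range
    sigma = (ub - lb) / 6.0    # Eq. (10): 3*sigma covers half the range width
    mutant = rng.normal(mu, sigma, size=np.shape(position))
    # Clip the rare samples that fall outside the bounds.
    return np.clip(mutant, lb, ub)
```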
Cauchy mutation [32] and non-uniform mutation [33] are two other common mutation strategies in optimization algorithms. Cauchy mutation modifies individual genes using the Cauchy distribution: it randomly samples a value from this distribution and adds it to the original gene value to obtain the mutated one. The Cauchy distribution's heavy tail means there is a high probability of obtaining a value far from the central value. Cauchy mutation is therefore mainly suited to global search but has low local search accuracy; its large mutation values also lead to poor stability and large fluctuations in the results.
The probability of non-uniform mutation declines over the iterations, aiding global exploration initially and focusing on local exploitation later. The mutation amplitude varies dynamically, large at first and then small, its direction is random, and its parameters are adjustable. However, parameter setting is difficult and demands extensive tuning; its adaptability to different problems is limited, its effects vary, it has high computational complexity, and it may still become trapped in local optima, hindering global optimization.
In contrast, Gaussian mutation is highly stable. Based on the normal distribution, it gives the mutation process direction and concentration: most mutation values lie within a certain range, ensuring the algorithm's stable convergence. For local search, Gaussian mutation converges quickly and accurately, allowing precise adjustments to the current solution, which makes it well suited to scenarios that require high local optimization precision.
To check the performance of the chosen mutation strategies, three algorithms are used for testing: GPO, CAPO, and NPO, which enhance the PO algorithm with Gaussian mutation, Cauchy mutation, and non-uniform mutation, respectively. As shown in Figure 2a, the GPO radar chart has the least area fluctuation. GPO ranked first for seven functions and second for three functions, proving that its results across multiple runs were highly stable and unaffected by random factors. Notably, GPO had the lowest average fitness value of 1.58 in Figure 2b. In the comprehensive test, GPO performed the best.
3.3. Barycenter Opposition-Based Learning Strategy
In this study, barycenter opposition-based learning is adopted. In the initial iterations, since the differences between individuals in the population are relatively large, generating mutant parrots allows more areas to be explored; in later iterations, although the diversity within the population decreases, the mutant parrots can still maintain diversity. The barycenter is defined as follows:
Suppose that $x_{1,j}, x_{2,j}, \ldots, x_{N,j}$ denote the values of the $N$ parrots in the $j$th dimension, where the population consists of $N$ individuals. Then, the barycenter of the parrot population in the $j$th dimension, $M_j$, is given by Equation (11), and the population barycenter is $M = (M_1, M_2, \ldots, M_D)$:

$$M_j = \frac{1}{N}\sum_{i=1}^{N} x_{i,j} \quad (11)$$
Barycenter opposition-based mutation: suppose that $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$ is the $i$th parrot with $D$ dimensions. If the selected mutation dimension is the $j$th dimension, then the barycenter opposition-based solution corresponding to the $i$th parrot is determined using Equation (12), where the contraction factor $k$ is a random value within the range $(0, 1)$. During the iteration process, one dimension $j$ is selected for mutation for each parrot in the population; the mutated position is then compared with the position of the previous generation, and the better of the two is retained.
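The sketch below implements this step in Python under the commonly used centroid-opposition form $\check{x}_{i,j} = 2k M_j - x_{i,j}$ with $k \in (0, 1)$; the exact form of Equation (12) and the bound-handling rule are assumptions here, while the greedy comparison follows the description above.

```python
import numpy as np

def barycenter_opposition(population, fitness, func, lb, ub, rng=None):
    """Barycenter opposition-based learning sketch. For each parrot, one random
    dimension j is reflected about the population barycenter M_j using
    x_opp = 2 * k * M_j - x (assumed form of Eq. (12), k random in (0, 1)),
    and the better of the original and opposite solutions is kept."""
    rng = rng if rng is not None else np.random.default_rng()
    pop, fit = population.copy(), fitness.copy()
    n, dim = pop.shape
    barycenter = pop.mean(axis=0)          # Eq. (11): per-dimension barycenter
    for i in range(n):
        j = rng.integers(dim)              # dimension selected for mutation
        k = rng.random()                   # contraction factor in (0, 1)
        candidate = pop[i].copy()
        candidate[j] = 2.0 * k * barycenter[j] - pop[i, j]
        candidate = np.clip(candidate, lb, ub)
        f_new = func(candidate)
        if f_new < fit[i]:                 # greedy selection: keep the better one
            pop[i], fit[i] = candidate, f_new
    return pop, fit
```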
Opposition-based learning [34] and elite opposition-based learning [35] are common strategies in optimization algorithms. Opposition-based learning is simple and effective at the start for boosting population diversity, providing more search trajectories and enhancing exploration. However, its way of generating opposite individuals is limited: it relies only on the individual's own features and the search-space boundaries, ignoring the distribution of and relationships among the other individuals, which may limit its performance in complex scenarios.
Elite opposition-based learning selects elite individuals with high fitness as the core to generate opposite ones, leveraging their high-quality information to find better solutions and speed up convergence. However, focusing on elites reduces population diversity, increasing the risk of premature local optima and missing the global optimum.
In contrast, the barycenter opposition-based learning strategy uses the population's overall barycenter information together with a random contraction adjustment to explore a new region of the solution space opposite the original individuals. It aims to find better solutions that differ from the current ones, enhancing population diversity, helping the algorithm escape local optima, and searching the entire solution space more efficiently.
To verify the performance of the chosen opposition-based learning strategies, three algorithms are employed for testing: BPO, OPO, and EPO, which enhance the PO algorithm with barycenter opposition-based learning, traditional opposition-based learning, and elite opposition-based learning, respectively. As shown in Figure 3a, the BPO radar chart has the least area fluctuation. BPO ranked first for nine functions and second for two functions, demonstrating that its results across multiple runs were highly stable and unaffected by random factors. Notably, BPO had the lowest average fitness value of 1.33 in Figure 3b. In the comprehensive test, BPO performed the best.
3.4. Ablation Study of CGBPO
An ablation study was carried out to clarify the contribution of each added strategy. The CGBPO variants without chaotic mapping (CGBPO1), without Gaussian mutation (CGBPO2), and without barycenter opposition-based learning (CGBPO3), together with the full CGBPO algorithm, were tested on the CEC2022 standard dataset using MATLAB simulation software. The population size was 30, the maximum number of iterations was 300, and each algorithm ran independently 30 times for every test function.
In Figure 4a, CGBPO's radar chart had the least area fluctuation, ranking first for six functions and second for four. The radar-chart areas of the other three variants were much larger, showing the positive effect of the three added strategies on algorithm performance. Notably, in Figure 4b, CGBPO had the lowest average fitness value of 1.33 and the best performance in the comprehensive test. In the ranking chart, CGBPO1 ranked second, CGBPO2 fourth, and CGBPO3 third, indicating that Gaussian mutation contributed the most to the improvement.
3.5. Pseudo-Code of CGBPO
The comprehensive structure of CGBPO is presented in Figure 5 and Algorithm 1, which offer a detailed roadmap of the whole improvement process, including its iterative steps and the search strategies employed.
Algorithm 1: Pseudo-Code of CGBPO
1: Initialize the CGBPO parameters
2: Initialize the solutions' positions using the chaos strategy by Equation (7)
3: For i = 1: N do
4:   Calculate the fitness value of all search agents
5: End
6: For i = 1: Max_iter do
7:   Find the best position and worst position
8:   For j = 1: N do
9:     St = randi([1,4])
10:    Behavior 1: Foraging behavior
11:    If St == 1 Then
12:      Update position by Equation (1)
13:    Behavior 2: Staying behavior
14:    Elseif St == 2 Then
15:      Update position by Equation (4)
16:    Behavior 3: Communicating behavior
17:    Elseif St == 3 Then
18:      Update position by Equation (5)
19:    Behavior 4: Fear-of-strangers behavior
20:    Elseif St == 4 Then
21:      Update position by Equation (6)
22:    End
23:    Update position using Gaussian mutation by Equation (8)
24:  End
25:  Generate new solutions using barycenter opposition-based learning:
26:  For i = 1: N do
27:    Calculate the values of the original function
28:    Update position using the barycenter opposition-based learning strategy by Equation (12)
29:  End
30:  Return the best solution
31: End
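To complement the pseudo-code, the following Python skeleton shows how the three strategies slot into the main loop; it reuses the helper sketches from Sections 3.1-3.3, collapses the four PO behavior updates (Equations (1) and (4)-(6)) into a placeholder, and adopts greedy acceptance of the Gaussian mutant as a simplifying assumption, so it is a structural sketch rather than the authors' implementation.

```python
import numpy as np

def po_behavior_update(x, best, St, rng):
    """Placeholder for the four PO behaviors selected by St (Eqs. (1), (4)-(6)).
    This simple move toward the current best is NOT the original update rule."""
    return x + rng.random(x.shape) * (best - x)

def cgbpo(func, lb, ub, dim, pop_size=30, max_iter=300, seed=None):
    """Structural sketch of Algorithm 1, reusing logistic_chaotic_init,
    gaussian_mutation, and barycenter_opposition defined above."""
    rng = np.random.default_rng(seed)
    X = logistic_chaotic_init(pop_size, dim, lb, ub, seed=seed)   # chaos strategy, Eq. (7)
    F = np.apply_along_axis(func, 1, X)
    for _ in range(max_iter):
        best = X[np.argmin(F)].copy()
        for i in range(pop_size):
            St = rng.integers(1, 5)                               # St = randi([1, 4])
            X[i] = np.clip(po_behavior_update(X[i], best, St, rng), lb, ub)
            F[i] = func(X[i])
            mutant = gaussian_mutation(X[i], lb, ub, rng)         # Eq. (8)
            f_mut = func(mutant)
            if f_mut < F[i]:                                      # greedy acceptance (assumed)
                X[i], F[i] = mutant, f_mut
        X, F = barycenter_opposition(X, F, func, lb, ub, rng)     # Eqs. (11)-(12)
    i_best = int(np.argmin(F))
    return X[i_best], F[i_best]

# Example: minimize the sphere function in 10 dimensions.
best_x, best_f = cgbpo(lambda v: float(np.sum(v ** 2)), -100.0, 100.0, dim=10)
```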
3.6. Comparative Analysis of the Time Complexity Between CGBPO and PO
3.6.1. Time Complexity Analysis of the PO Algorithm
The population initialization operation of PO adopts a simple random initialization method. It generates $N$ individuals with a dimension of dim, so the time complexity is $O(N \times \mathrm{dim})$. When calculating the initial fitness values, the objective function is evaluated for each of the $N$ individuals. Assuming that the time complexity of one objective-function evaluation is $O(f)$ (where $f$ depends on the complexity of the objective function), the time complexity of calculating the initial fitness values is $O(N \times f)$. The sorting operation uses the sort function, and the time complexity of common sorting algorithms is $O(N \log N)$. Therefore, the total time complexity of the initialization stage is $O(N \times \mathrm{dim} + N \times f + N \log N)$.
The PO algorithm has two nested loops. The outer loop runs Max_iter times, and the inner loop operates on the $N$ individuals in the population. When updating individuals, calculations such as the Levy flight are involved, and their time complexity mainly depends on the dimension dim. Assuming that the time complexity of each individual update operation is $O(m \times \mathrm{dim})$ (where $m$ is a constant related to the specific calculation), the time complexity of the individual updates in each iteration is $O(N \times m \times \mathrm{dim})$. The boundary-control operation also checks and adjusts the dim dimensions of the $N$ individuals, and its time complexity is $O(N \times \mathrm{dim})$. The time complexities of updating the global optimal solution and sorting the population are $O(N)$ and $O(N \log N)$, respectively. Therefore, the time complexity of each iteration is $O(N \times m \times \mathrm{dim} + N \times \mathrm{dim} + N + N \log N)$, and the time complexity of the entire iteration stage is $O(\mathrm{Max\_iter} \times (N \times m \times \mathrm{dim} + N \log N))$.
Combining the initialization and iteration stages, the time complexity of PO is $O(N \times \mathrm{dim} + N \times f + N \log N + \mathrm{Max\_iter} \times (N \times m \times \mathrm{dim} + N \log N))$. When $N$, dim, and Max_iter are large, by ignoring the lower-order terms, the main time complexity can be approximated as $O(\mathrm{Max\_iter} \times N \times \mathrm{dim})$.
3.6.2. Time Complexity Analysis of the CGBPO Algorithm
The CGBPO algorithm uses chaotic initialization. This function generates chaotic values and maps them to the search space to produce $N$ individuals with a dimension of dim, so the time complexity is $O(N \times \mathrm{dim})$. The subsequent operations, such as calculating fitness values and sorting, are the same as those of PO. Therefore, the total time complexity of the initialization stage is $O(N \times \mathrm{dim} + N \times f + N \log N)$.
During the iteration process of CGBPO, Gaussian mutation and barycenter opposition-based learning operations are added. Each mutation operation involves calculations and boundary checks for the dim dimensions of an individual. Assuming that the time complexity of each mutation operation is $O(p \times \mathrm{dim})$ (where $p$ is a constant related to the mutation strategy), the time complexity of mutating $N$ individuals is $O(N \times p \times \mathrm{dim})$. The opposition-based learning operation also performs calculations and boundary control for the $N$ individuals, and its time complexity is $O(N \times \mathrm{dim})$. Adding the original individual update, boundary control, global-optimum update, and population-sorting operations of PO, the time complexity of each iteration is $O(N \times m \times \mathrm{dim} + N \times p \times \mathrm{dim} + N \times \mathrm{dim} + N + N \log N)$, and the time complexity of the entire iteration stage is $O(\mathrm{Max\_iter} \times (N \times (m + p) \times \mathrm{dim} + N \log N))$.
Combining the initialization and iteration stages, the time complexity of CGBPO is $O(N \times \mathrm{dim} + N \times f + N \log N + \mathrm{Max\_iter} \times (N \times (m + p) \times \mathrm{dim} + N \log N))$. When $N$, dim, and Max_iter are large, by ignoring the lower-order terms, the main time complexity can be approximated as $O(\mathrm{Max\_iter} \times N \times \mathrm{dim})$.
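For reference, the two expressions above can be summarized side by side (a compact restatement of the derivation, using the per-step costs $f$, $m$, and $p$ defined in the text):

\begin{align*}
T_{\mathrm{PO}} &\approx O\big(N\,\mathrm{dim} + N f + N\log N\big) + O\big(\mathrm{Max\_iter}\,(N\,m\,\mathrm{dim} + N\log N)\big) \approx O(\mathrm{Max\_iter}\cdot N\cdot \mathrm{dim}),\\
T_{\mathrm{CGBPO}} &\approx T_{\mathrm{PO}} + O\big(\mathrm{Max\_iter}\,(N\,p\,\mathrm{dim} + N\,\mathrm{dim})\big) \approx O(\mathrm{Max\_iter}\cdot N\cdot \mathrm{dim}).
\end{align*}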
3.6.3. Comparison of the Time Complexities of the Two Algorithms
As can be seen from the above analysis, after ignoring the lower-order terms, the main time complexities of PO and CGBPO are both approximately $O(\mathrm{Max\_iter} \times N \times \mathrm{dim})$. This means that in large-scale problems, as $N$, Max_iter, and dim increase, the growth trends of the computation times of the two algorithms are basically the same.
The CGBPO algorithm adds mutation and opposition-based learning operations during the iteration process, so its time-complexity expression contains the additional terms $O(N \times p \times \mathrm{dim})$ and $O(N \times \mathrm{dim})$ for mutation and opposition-based learning, respectively. In actual operation, if the mutation and opposition-based learning operations are computationally expensive, each iteration of CGBPO will take longer than that of PO. However, this may also bring better optimization effects, helping the algorithm converge to a better solution more quickly.
3.6.4. Experimental and Comparative Analyses of Running Time
Table 1 presents the running times of the CGBPO and PO algorithms, along with the ratio of CGBPO's running time to that of PO. Using the CEC2022 standard dataset and MATLAB simulation software, the test sets the population size to 30 and the maximum number of iterations to 300; each algorithm runs independently 30 times for each test function. Overall, at the lowest tested dimension of 30, CGBPO generally takes longer to run than PO, showing lower computational efficiency. As the dimension rises to 50 and 100, however, the running-time gap between the two narrows. Different test functions affect the running-time ratio differently: for complex functions, the ratio drops more notably with increasing dimension, indicating CGBPO's potential for handling high-dimensional complex functions.
4. Experimental and Comparative Analyses of the CEC2017 Benchmark Suite
To evaluate CGBPO's performance, the CEC2017 and CEC2022 benchmarks were selected. Using MATLAB simulation software, nine algorithms were tested and compared: Parrot Optimization (PO), Harris Hawks Optimization (HHO) [16], the Antlion Optimizer (AO) [36], FOX optimization [37], Beluga Whale Optimization (BWO) [38], GOOSE optimization [39], the Whale Optimization Algorithm (WOA) [40], CMA-ES [41], and CGBPO. The population size was set to 30, the maximum number of iterations to 300, and the dimension to 10, with the other parameters following the original literature. Each algorithm ran independently 30 times.
The CEC2017 benchmark has 29 functions. F1, F3, and F4 are unimodal functions used to assess global convergence; F5–F11 are simple multimodal functions for testing the ability to escape local optima; F12–F21 are hybrid functions, F22–F27 are composition functions, and F28–F30 are extended unimodal functions, all of which test how algorithms handle complex problems. The 3D landscapes of some CEC2017 test functions are shown in Figure 6.
F1 has an obvious global minimum near the bottom center of its landscape, useful for evaluating whether an algorithm can find the global optimum. F7 is multimodal with multiple local extreme points and one global extreme point, and is often used to evaluate global search and the ability to escape local optima. F17 has a complex stepped structure with multiple local extreme values, used to evaluate an algorithm's ability to find the global optimum of multi-extreme-value functions. F27 is an extremely complex multimodal function with many local extreme points, used to evaluate an algorithm's ability to find the global optimum and avoid local optima in complex multimodal scenarios. F30 has a complex surface with multiple local extreme regions, used to evaluate an algorithm's performance in complex function environments, including global search, convergence speed, and the avoidance of local optima.
4.1. Statistical Results of Comparative Tests
Comparing CGBPO with the eight other algorithms on CEC2017, Table 2, Table 3 and Table 4 reveal a significant performance edge. On the unimodal functions, CGBPO has notable advantages on F1, F3, and F4, although CMA-ES performs best on them. For the simple multimodal functions, CGBPO performs optimally on F5, F6, and F8–F10. Among the hybrid functions, CGBPO outperforms the others on F16, F17, F20, and F21; CMA-ES tops F12–F15, F18, and F19, with CGBPO ranking second. For the composition functions, CGBPO leads on F22, F27, and F28 and is competitive on the others, while CMA-ES performs best overall.
The data show that CGBPO performs well on test functions of various kinds, whether simple or complex. Across multiple functions, CGBPO generally surpasses PO and several other algorithms (such as HHO, AO, and BWO) in convergence speed, solution accuracy, and stability. This demonstrates CGBPO's comprehensive performance advantage and its ability to offer better solutions for practical engineering problems.
4.2. Algorithm Convergence Curve on CEC2017
Figure 7 displays the comparative convergence curves of CGBPO and eight other algorithms for functions F1–F30. In most functions’ convergence profiles, the CGBPO curve trends sharply downward. For unimodal functions, CGBPO’s downward trend, convergence speed, and final fitness value rank second only to CMA-ES.
On simple multimodal functions like F6, F8, F9, and F10, CGBPO’s curve has a clear downward trend with a fast decline rate, reaching a low fitness value early in iteration and having a lower final convergence fitness than most other algorithms. For F7, while other algorithms’ curves fluctuate greatly, CGBPO remains stable and achieves better convergence.
On hybrid functions, such as F16, F17, F20, and F21, CGBPO’s curve drops rapidly and reaches a low fitness value early, outperforming others. On F12–F15, F18, and F19, CMA-ES performs best and CGBPO ranks second, still ahead of other algorithms.
For composition functions such as F22, F24, F25, F27, and F28, CGBPO's curve drops quickly, and its final fitness values on F22, F27, and F28 are the lowest, showing the best performance. On the other functions, CGBPO is also competitive. Although CMA-ES performs better on some functions, overall CGBPO shows excellent or strong performance across the various functions and has a comprehensive performance advantage.
4.3. Algorithm Box Plot on CEC2017
Figure 8 shows box plots of CGBPO and the other algorithms. In the box plots of the various functions, the lengths of CGBPO's boxes and whiskers indicate the dispersion of its results. The elements of each plot are read as follows: the rectangular box in the middle shows the distribution range of the middle 50% of the data; the horizontal line inside the box is the median, which divides the data into two equal halves and reflects its central tendency; the whiskers extending from the upper and lower ends of the box show the overall spread of the data; and the points beyond the whiskers are outliers that deviate significantly from the rest of the data.
For unimodal function F1, CGBPO’s box plot is at the lowest, with the smallest median fitness value, converging to a better solution accurately. For F3, CMA-ES’s box plot is lowest, with CGBPO second. For F4, CGBPO performs well with a low median fitness and high accuracy. CGBPO’s box plots for F1 and F4 are short, showing strong stability; for F3, CMA-ES is relatively stable.
For simple multimodal functions like F6, F8, F9, and F10, CGBPO’s box plots are low, with small median fitness values, converging to better solutions accurately. It also excels in F7, more accurate than most algorithms. These box plots are mostly short, indicating good stability.
For hybrid functions such as F16, F17, F20, and F21, CGBPO’s box plots are low, with high accuracy in converged solutions. For F12–F15, F18, and F19, CMA-ES performs best, CGBPO second, still having an accuracy advantage. For good-performing functions of CGBPO, box plots are short with good stability; for CMA-ES-dominated functions, CGBPO also has good stability.
For composition functions like F22, F27, and F28, CGBPO’s box plots are at the lowest, with the highest convergence accuracy. For other functions, CGBPO is competitive, converging to good solutions. For advantageous functions, its box plots are short, indicating good stability; for others, its overall stability is acceptable and comparable to other algorithms.
In general, CGBPO shows excellent comprehensive performance for different functions, having advantages in both convergence accuracy and stability for many functions.
4.4. Wilcoxon’s Rank-Sum Test on CEC2017
Table 5 compares the p-values of CGBPO and the other swarm intelligence algorithms obtained via Wilcoxon's rank-sum test [42] on the CEC2017 benchmark. For functions such as F3, F4, F9, F16, F21, and F24, CGBPO's p-values against the other eight algorithms are all far below 0.05, showing that its performance significantly surpasses theirs.
In some functions, the p-values between CGBPO and specific algorithms exceed 0.05, meaning no significant performance difference. For example, on function F13, CGBPO’s p-values relative to PO, HHO, AO, and WOA are 8.53 × 10−1, 4.20 × 10−1, 1.41 × 10−1, and 2.84 × 10−1, respectively, indicating similar performance.
Overall, CGBPO performs excellently in most function tests, showing significant differences from, and often outperforming, the other algorithms. Even when its performance is similar to that of some algorithms on certain functions, its advantages remain evident, proving its strong competitiveness in solving the CEC2017 benchmark problems.
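As an illustration of how such pairwise p-values are typically computed, the short Python snippet below applies a two-sided Wilcoxon rank-sum test to two sets of 30 final fitness values; the data shown are placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(fitness_a, fitness_b, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test on the 30 independent final fitness
    values of two algorithms for one benchmark function."""
    stat, p_value = ranksums(fitness_a, fitness_b)
    return p_value, p_value < alpha        # p < 0.05: significant difference

# Placeholder data for two algorithms (not results from the paper).
rng = np.random.default_rng(0)
cgbpo_runs = rng.normal(100.5, 0.2, size=30)
po_runs = rng.normal(102.0, 1.0, size=30)
print(compare_runs(cgbpo_runs, po_runs))
```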
4.5. Radar Chart and Average Ranking Chart
Figure 9 shows the radar chart and average ranking chart of CGBPO and the eight other intelligent algorithms on CEC2017. Notably, CGBPO's radar chart has the least area fluctuation. It ranked first for two functions and second for twenty functions, indicating highly stable results across multiple runs that are unaffected by random factors.
Significantly, CGBPO had the lowest average fitness value and topped the ranking. This shows that in the comprehensive test, CGBPO obtained better solutions and outperformed the other eight algorithms in convergence performance.
4.6. Analysis of High-Dimensional Function Tests
To confirm CGBPO's superiority in handling high-dimensional complex problems, the dimension was set to 100 with the other parameters unchanged. The high-dimensional test results in Table 6, Table 7 and Table 8 show that on all high-dimensional unimodal functions, CGBPO has notable advantages and stability in the minimum and mean values. On the multimodal functions, it significantly outperforms the other algorithms. On the hybrid and composition functions, it reaches the theoretical optimal value. In the standard-deviation data of some functions, it ranks second only to CMA-ES. Overall, CGBPO still excels on high-dimensional complex problems and offers better solutions.
5. Experimental and Comparative Analyses on the CEC2022 Benchmark Suite
The CEC2022 benchmark suite has 12 single-objective test functions with boundary constraints, categorized as follows: F1 is a unimodal function for evaluating convergence speed and accuracy; F2–F5 are multimodal functions with multiple local optima, testing global search capability; F6–F8 are hybrid functions for comprehensively assessing algorithm performance under complex conditions; and F9–F12 are composite functions for testing the ability to handle complex tasks. The 3D landscapes of some CEC2022 test functions are shown in Figure 10.
F4 has a complex multimodal shape with many local extreme points and is used to evaluate optimization algorithms' performance in multimodal environments, such as global search and the avoidance of local optima. F7 has a stepped distribution with multiple levels and potentially several local extreme values, and is used to evaluate algorithms' ability to find the global optimum of complex multi-extreme-value functions. F10 has a highly complex multimodal shape with numerous local extreme points, making it difficult to optimize; it can evaluate algorithms' performance in complex multimodal scenarios, such as finding the global optimum and escaping local optima.
5.1. Statistical Results of Algorithm Tests on CEC2022
Table 9 shows the min, std, and avg data of CGBPO and eight other algorithms for 12 test functions. On unimodal function F1, CMA-ES performs best. CGBPO’s minimum and average values rank second only to CMA-ES and are much better than others, indicating high convergence accuracy on unimodal functions.
On multimodal function F3, CGBPO has the best minimum value and small standard deviation, showing strong stability. On F4, algorithms’ minimum values are close, and CGBPO’s has an edge with a reasonable standard deviation. On F5, CGBPO’s minimum value is better than most, except CMA-ES, proving good comprehensive performance on multimodal functions.
On hybrid function F6, CMA-ES has the best minimum value, and CGBPO ranks second but has a large standard deviation, resulting in poor stability. On F7, CGBPO has the best minimum value and small standard deviation, showing good stability. On F8, CMA-ES has the best minimum value, CGBPO ranks third with a small standard deviation, indicating strong stability. CGBPO is competitive overall despite performance fluctuations on hybrid functions.
On composition function F9, CMA-ES has the best minimum value, CGBPO ranks second, close to CMA-ES, and has a small standard deviation. On F10, CGBPO has the best minimum value and extremely small standard deviation, showing excellent stability. On F11, CMA-ES has the best minimum value, CGBPO ranks third with a small standard deviation. On F12, CMA-ES has the best minimum value, and CGBPO ranks second with a small standard deviation, indicating good comprehensive performance on composition functions.
In general, CGBPO shows high convergence accuracy, strong stability, and good global convergence on various functions. Its comprehensive performance is remarkable among many algorithms. Despite some flaws in individual functions, its overall performance is good, with obvious advantages on unimodal, multimodal, and composition functions.
5.2. Algorithm Convergence Curve on CEC2022
Figure 11 displays the convergence graphs of CGBPO and other algorithms for functions F1–F12. For unimodal function F1, CMA-ES’s curve drops fastest with the lowest final average fitness, performing best. CGBPO’s curve also drops quickly, with its final average fitness ranking second only to CMA-ES and better than others, showing good convergence speed and accuracy.
For multimodal functions, in F3, CGBPO’s curve starts low with a clear downward trend and the lowest final average fitness, performing best. In F4, algorithms’ curves are close initially, but CGBPO’s has a better downward trend later with the optimal average fitness. In F5, CGBPO’s curve drops rapidly and has the lowest final average fitness, indicating good convergence speed and accuracy.
For the hybrid functions, in F6, CMA-ES's curve shows an obvious downward trend later with the lowest final average fitness, while CGBPO's performs well early but is overtaken later. In F7, CGBPO's curve has the lowest final average fitness. In F8, CMA-ES's curve has the lowest final average fitness, and CGBPO's drops smoothly. CGBPO is competitive on the hybrid functions, though less so than on the unimodal and multimodal ones.
For the composition functions, in F9, CMA-ES's and CGBPO's curves show similar downward trends with close, low final average fitness values, CGBPO being slightly worse. In F10, CGBPO's curve drops rapidly and has the lowest final average fitness, performing best. In F11, the algorithms' curves vary greatly, and CGBPO's drops fast early on. In F12, CMA-ES's curve has the lowest final average fitness, and CGBPO's ranks second, showing good overall performance on composition functions.
5.3. Algorithm Box Plot on CEC2022
Figure 12 shows box plots of test data distribution. For unimodal function F1, CMA-ES’s box plot is at the lowest, with the smallest fitness median and the best converged solution. CGBPO’s box plot is low with a short box, meaning it converges with a good solution and has small data dispersion, showing good stability.
For multimodal function F3, CGBPO’s box plot is at the lowest, with the smallest fitness median, indicating the best performance and converging to a better solution. Its box is short, showing good stability. For F4, CGBPO’s box plot is at the lowest. Despite outliers, the overall fitness median is small, with a good convergence effect. For F5, CGBPO’s box plot is at the lowest, with a fitness value much lower than most algorithms, showing high convergence accuracy.
For hybrid function F7, CGBPO’s box plot is low, with a small fitness median, good convergence, and small data dispersion, indicating good stability. For F8, CMA-ES’s box plot is at the lowest, performing best, while CGBPO’s is at an intermediate level, competitive but slightly worse.
For composition function F10, CGBPO’s box plot is at the lowest, with the smallest fitness median, the best performance, and a short box for good stability. For F12, CMA-ES’s box plot is at the lowest, and CGBPO’s is the second lowest, with good performance.
In general, CGBPO has obvious advantages on multimodal and composition functions, strong competitiveness on unimodal and hybrid functions, and outstanding comprehensive performance across different functions.
5.4. Wilcoxon’s Rank-Sum Test on CEC2022
Table 10 shows the rank-sum test results on the CEC2022 benchmark. When comparing CGBPO with BWO, FOX, GOOSE, WOA, and CMA-ES, the p-values for all 12 functions are below 0.05, indicating that CGBPO significantly outperforms these algorithms.
For four functions, the p-values of CGBPO relative to AO exceed 0.05, meaning CGBPO and AO have similar performance on these functions.
Overall, CGBPO performs remarkably in most function tests, showing significant differences from and often outperforming other algorithms. Clearly, CGBPO maintains strong competitiveness in solving the CEC2022 benchmark problems.
5.5. Radar Chart and Average Ranking Chart
Figure 13 shows a radar chart and an average ranking chart comparing CGBPO with the eight other algorithms on CEC2022. Notably, CGBPO's radar chart has the least fluctuation. In Figure 13a, it ranked first for three functions (including F10) and second for four (such as F5), showing remarkable performance and distinct advantages on these functions.
In Figure 13b, the CGBPO algorithm had an average ranking of 2.25, tying for first place with CMA-ES. It demonstrated outstanding comprehensive performance across multiple test functions, outperforming the other algorithms overall.
7. Application of the CGBPO Algorithm in Indoor Visible Light Positioning
The CGBPO algorithm is applied to indoor visible light positioning [45]. In the indoor wireless visible-light transmission model, Light-Emitting Diodes (LEDs) serve as signal sources for data transmission, and Photo Diodes (PDs) serve as receivers for data reception to achieve high-precision positioning. A 5 m × 5 m × 6 m positioning model is established, with four LEDs on the ceiling at coordinates (5, 0, 6), (0, 0, 6), (0, 5, 6), and (5, 5, 6) set as signal sources.
To test the positioning error of CGBPO, signal receivers are placed at a height of 2 m, every 0.5 m along the 5 m length and 5 m width directions, creating 121 test points. A positioning-simulation experiment is conducted using MATLAB, with the relevant parameters set as shown in Table 13.
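To make the role of the optimizer concrete, the sketch below shows one common way such a positioning task can be cast as a fitness function to be minimized: the mismatch between a candidate position's geometry and LED-to-PD distances assumed to have been estimated from the received optical power. The paper's actual channel model and parameters are those of Table 13, so this helper is illustrative only.

```python
import numpy as np

# Ceiling LED coordinates given in the text (metres).
LEDS = np.array([[5, 0, 6], [0, 0, 6], [0, 5, 6], [5, 5, 6]], dtype=float)

def positioning_fitness(candidate_xy, measured_distances, receiver_height=2.0):
    """Hypothetical VLP fitness: assuming each LED-to-PD distance has already
    been estimated from the received optical power, score a candidate (x, y)
    at the known receiver height by the summed squared mismatch between the
    geometric distances and the measured ones."""
    pos = np.array([candidate_xy[0], candidate_xy[1], receiver_height])
    geometric = np.linalg.norm(LEDS - pos, axis=1)
    return float(np.sum((geometric - np.asarray(measured_distances)) ** 2))

# An optimizer such as the CGBPO sketch above would minimize this fitness
# over (x, y) in [0, 5] x [0, 5] for each of the 121 test points.
```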
Figure 18 displays the distribution of the actual PD positions and the positions estimated by PO and CGBPO. The CGBPO algorithm's estimated positions are closer to the actual PD positions. Moreover, CGBPO achieves the best coverage, reaching 100%.
Figure 19 presents the error curves of the estimated positions for the two algorithms. Clearly, across the 121 test points, CGBPO has smaller estimated-position errors, while PO shows relatively larger errors at each test point.
Figure 20 is a bar chart of the average errors of the estimated positions for the two algorithms. By comparison, the average error of PO's estimated positions is about 0.0070 cm, while that of CGBPO is about 0.00034807 cm. The CGBPO algorithm thus shows the more stable positioning performance of the two.
8. Conclusions
To overcome the drawbacks of the PO algorithm, such as becoming stuck in local optima and converging slowly on complex problems, this study proposed the CGBPO algorithm, which has improved performance and applicability. CGBPO uses chaotic logistic mapping for initialization to increase population diversity, applies Gaussian mutation to the updated individual positions to avoid premature local convergence, and incorporates barycenter opposition-based learning to generate opposite solutions and boost global search ability. Simulation experiments on the CEC2017 and CEC2022 benchmarks, compared with eight intelligent optimization algorithms, and on two complex engineering problems showed that CGBPO improves solution accuracy and convergence speed while balancing global and local search. In industrial refrigeration system design optimization and the optimization of Himmelblau's function, CGBPO achieved the highest accuracy, shortest optimization time, and best stability. In indoor visible light positioning, CGBPO's estimated positions were closer to the actual PD positions than PO's, with the best coverage and the smallest average error (about 0.00034807 cm).
Future research will explore integrating CGBPO with other advanced methods, such as the MAMGD optimization method, aiming to further improve convergence speed and the accuracy of training results [46]. Combining CGBPO with the Differential Evolution (DE) algorithm [47] and implementing multi-objective optimization (e.g., extending it to NSGA-II [48]) will also be considered to enhance its performance, expand its applicability, and meet the increasing needs of complex optimization.