Article

Crown Growth Optimizer: An Efficient Bionic Meta-Heuristic Optimizer and Engineering Applications

College of Automation Engineering, Shanghai University of Electric Power, Shanghai 200090, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2343; https://doi.org/10.3390/math12152343
Submission received: 1 July 2024 / Revised: 19 July 2024 / Accepted: 24 July 2024 / Published: 26 July 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

This paper proposes a new meta-heuristic optimization algorithm, the crown growth optimizer (CGO), inspired by the tree crown growth process. CGO innovatively combines global search and local optimization strategies by simulating the growing, sprouting, and pruning mechanisms of tree crown growth. The pruning mechanism, inspired by Ludvig’s law and the Fibonacci series, balances the exploration and exploitation of the growing and sprouting stages. We performed a comprehensive performance evaluation of CGO on the standard testbed CEC2017 and the real-world problem set CEC2020-RW and compared it with a variety of mainstream algorithms, including SMA, BKA, DBO, GWO, MVO, HHO, WOA, EWOA, and AVOA. CGO achieved the best Friedman mean rank of 1.6333 among the 10 algorithms, and the significance levels of all pairwise Wilcoxon tests were below 0.05. The experimental results show that the mean and standard deviation over repeated CGO runs are better than those of the comparison algorithms. In addition, CGO achieved excellent results in the specific applications of robot path planning and photovoltaic parameter extraction, further verifying its effectiveness and broad application potential for practical engineering problems.

1. Introduction

As an essential tool in modern computational intelligence, meta-heuristic optimization algorithms have achieved remarkable results in solving complex optimization problems. By simulating biological, physical, and social phenomena in nature, these algorithms provide a flexible and efficient way to solve challenges in various engineering and scientific fields.
Meta-heuristic optimization algorithms have been widely used in various optimization problems. However, despite the success of existing meta-heuristic algorithms in many applications, several shortcomings remain. First, many algorithms tend to fall into local optima when dealing with complex, challenging problems and struggle to find global optimal solutions [1]. Second, some algorithms still have room for improvement in convergence speed and computational efficiency.
To address these problems, we propose CGO. CGO combines the principles of tree branch growth, biological evolution, and natural selection into an innovative search mechanism. Its design is inspired by the natural process of tree crown growth, specifically the growth strategies trees adopt as they compete for light and living space. Trees grow and branch constantly to maximize their ability to capture sunlight and absorb nutrients. For example, maple trees (Acer spp.) and oak trees (Quercus palustris Münchh.) demonstrate a high degree of flexibility and adaptability during growth.
As shown in Figure 1, maple trees exhibit a distinctive growth pattern. They have a complex crown structure and, by constantly sprouting and growing new branches, can quickly adapt to changes in the surrounding environment. Oak trees usually grow rapidly during the seedling stage to compete for more sunlight in the forest. Once an oak tree’s crown breaks through the forest canopy, its growth rate may slow, but branching continues to increase to extend its coverage. In addition, trees naturally “prune” as they grow, shedding branches that consume resources but contribute little to overall growth. This mechanism ensures that trees focus their resources on the most promising branches, optimizing their growth efficiency.
CGO retains the global search ability of traditional meta-heuristic algorithms and improves search efficiency in complex multidimensional spaces through a unique local search strategy. CGO seeks the optimal solution in the solution space by constructing a “virtual tree” that simulates the expansion, germination, and competition during crown growth. We refer to these three processes as growing, sprouting, and pruning.
The major contributions are summarized as follows:
  • A novel biology-based algorithm, the crown growth optimizer (CGO), is proposed, simulating the process of branches growing and sprouting of the tree crown.
  • The CGO algorithm was tested on thirty CEC2017 benchmark functions in 10, 30, and 50 dimensions and compared with nine other advanced optimizers; CGO achieved outstanding performance.
  • CGO was tested on 20 CEC2020 real-world constrained problems.
  • CGO was tested on two typical engineering problems: robot path planning and photovoltaic parameter extraction.
  • Statistical analyses such as Wilcoxon’s and Friedman’s tests were used to verify the strength of the CGO results.
The main contents of this paper are as follows: Section 2 discusses the related work; Section 3 details the CGO algorithm. In Section 4, experiments on CEC2017, CEC2020-RW, robot path planning, and photovoltaic parameter extraction are carried out. Section 5 summarizes our work.

2. Related Works

Typical meta-heuristic algorithms can be divided into the following categories: The first category is evolution-based algorithms, which typically include the Genetic Algorithm (GA) [2] and Differential Evolution (DE) [3], which make use of the crossing and mutation mechanism of chromosomes to update agent search location. The second category is the algorithms based on physical rules, like the Gravity Search Algorithm (GSA) [4], Simulated Annealing algorithm (SA) [5], and Multiverse Optimization (MVO) [6]; these algorithms make use of the physical laws of nature. The third type of algorithm is based on mathematics, and is derived from mathematical functions, formulas, and theories, such as the Sine and Cosine Algorithm (SCA) [7], and Arithmetic Optimization Algorithm (AOA) [8]. The fourth type of algorithm is a population-based algorithm, which is derived from the behavior of foraging, breeding, and hunting in organisms, such as Particle Swarm Optimization (PSO) [9], Artificial Bee Colony algorithm (ABC) [10], Gray Wolf Optimization (GWO) [11], Whale Optimization Algorithm (WOA) [12], and Harris Hawk Optimization (HHO) [13]. The above classification is not absolute, and the same algorithm may contain multiple mechanisms. They have been used to solve various industrial problems with great success, including in the field of UAV path planning. However, faced with complex environments, the performance of most algorithms can be further improved. In the in-depth development of meta-heuristic algorithms, many researchers focus on introducing more parameters, mechanisms, and multi-level search.
In recent years, many advanced optimization algorithms have been proposed. Nadimi [14] proposed an Improved Gray Wolf Optimizer (I-GWO) to solve global optimization and engineering design problems; its Dimension Learning-based Hunting (DLH) search strategy, inherited from the individual hunting behavior of wolves in nature, achieved excellent results on the CEC 2018 benchmark functions. Luo [15] proposed a 3D path-planning algorithm based on Improved Holographic Particle Swarm Optimization (IHPSO), which uses the system clustering method and an information-entropy grouping strategy instead of the random grouping of structure–particle swarm optimization. Fouad [16] introduced PMST-CHIO, a novel variant of the Coronavirus Herd Immunity Optimizer (CHIO) that integrates a parallel multi-swarm treatment mechanism, significantly enhancing the standard CHIO’s exploration and exploitation capabilities. Wang [17] proposed SGGTSO, an Improved Tuna Swarm Optimization algorithm based on a sigmoid nonlinear weighting strategy, a multi-subgroup Gaussian mutation operator, and an elite individual genetic strategy.
The above algorithms perform well in their respective fields. However, they still have limitations, such as easily falling into local optima and slow convergence. This paper proposes the crown growth optimizer (CGO), which simulates the tree crown growth process and combines global and local search strategies to improve search efficiency and stability.

3. Crown Growth Optimizer

3.1. Inspiration

The crown growth optimizer (CGO) was inspired by the way trees optimize their growth by competing for light, nutrients, and space. In this algorithm, the crown of a tree is regarded as a population of branches, and each branch is a search agent. As trees grow, they adjust the shape of the crown according to the distribution of the branches. The growth of branches is affected by resources such as light, temperature, water, and soil nutrients. These resources are abstracted as objective functions in the optimization process, so that the crown growth optimizer can be widely applied to various optimization problems.
The physiological mechanisms of the tree regulate the crown as a whole. We use three mechanisms of crown development: growing, sprouting, and pruning. Growing is the process by which a branch explores the parameter space and may find more beneficial resources for the tree. Sprouting is the process of growing new branches near the current branches, which fully exploits the parameter space; the more abundant a branch’s resources, the more likely it is to sprout new branches. The tree crown adaptively adjusts the development of branches according to factors such as season, life span, and climate, for example, emphasizing the growth of existing branches, emphasizing the development of new branches, or actively shedding leaves and aging branches in seasons unfavorable to survival. These mechanisms are abstracted as pruning, which balances the growing and sprouting processes and periodically eliminates disadvantaged individuals.
Figure 2 shows a schematic diagram of these processes. Using these physiological processes, we simulated the developmental behavior of several branches of a tree crown and developed the crown growth optimizer (CGO).

3.2. Crown Branch Initialization

The iterative process of CGO starts with an initial population of randomly generated crown branches. Each branch represents a search agent. D is the dimension of the problem, and the lower and upper bounds of the search space are L_b and U_b, respectively. The number of search branches is N, and the initial locations of all branches can be represented by a matrix of N rows and D columns:
$$X = L_b + \mathrm{rand}(\cdot) \times (U_b - L_b) = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,D-1} & x_{1,D} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,D-1} & x_{2,D} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{N-1,1} & x_{N-1,2} & \cdots & x_{N-1,D-1} & x_{N-1,D} \\ x_{N,1} & x_{N,2} & \cdots & x_{N,D-1} & x_{N,D} \end{bmatrix}_{N \times D} = \{x_{i,j}\}_{N \times D}, \quad i \in [1,N],\; j \in [1,D] \tag{1}$$
where $\mathrm{rand}(\cdot)$ is a random number in $(0,1)$. The i-th search branch’s position can be described as $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D-1}, x_{i,D})$.
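As a concrete illustration, the initialization of Equation (1) can be sketched in a few lines of Python/NumPy (the paper’s experiments used MATLAB; this sketch and all names in it are ours):

```python
import numpy as np

def init_branches(n, d, lb, ub, seed=None):
    """Equation (1): N random crown branches, uniform in [lb, ub]^D."""
    rng = np.random.default_rng(seed)
    return lb + rng.random((n, d)) * (ub - lb)
```

Each row of the returned matrix is one search branch X_i drawn uniformly from the search box.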

3.3. Growing Stage

The growing stage simulates the growth of a branch towards a more resourceful location, which allows the search agent to find a better solution near its current position. Branches are encouraged to focus on obtaining better solutions rather than spreading out in highly dispersed environments. In the life of a tree, branches do not grow at a constant rate; as time progresses, growth is first fast and then slow. The branch growth velocity V(t) can be modeled as follows:
$$V(t) = v_{\max} + \frac{v_{\min} - v_{\max}}{1 + \exp\left(-10b \cdot \left(\dfrac{2t}{T} - 1\right)\right)} \tag{2}$$
where t is the iteration step, T is the maximum number of iterations, v_max and v_min are the maximum and minimum growth velocities, respectively, and b is a scaling factor used to adjust the shape of the curve. The variation trend of Equation (2) is shown in Figure 3.
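A minimal Python sketch of Equation (2) follows; note that the sign inside the exponential follows our reconstruction of the formula, and the parameter values are illustrative:

```python
import math

def growth_velocity(t, T, v_min=0.1, v_max=1.0, b=1.0):
    """Equation (2): growth velocity decaying from v_max to v_min along a
    sigmoid curve, i.e., fast early growth and slow late growth."""
    return v_max + (v_min - v_max) / (1.0 + math.exp(-10.0 * b * (2.0 * t / T - 1.0)))
```

At t = 0 the velocity is close to v_max, and at t = T it approaches v_min, matching the “first fast, then slow” behavior described above.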
In the growing stage, a Gaussian random number GR(·) is used to simulate the uncertainty of the growth direction. The Gaussian number provides a dynamic step size, which ensures that the agent can explore the search space as widely as possible and spread to more feasible areas. The direction of growth depends on two reference directions. First, a branch generally grows towards a better resource position; since X_best(t) represents the best known location, g_{1,i}(t) is the current branch’s reference direction towards better resources. Second, branches are distributed radially around the trunk, so g_{2,i}(t) is the current branch’s reference direction away from the crown’s centroid:
$$g_{1,i}(t) = X_{best}(t) - X_i(t), \qquad g_{2,i}(t) = X_i(t) - X_{cent}(t) \tag{3}$$
The growth direction of branches is highly random, so a Gaussian random number with density $f_{BM}(x; \mu, \sigma^2)$ is used to describe this feature:
$$f_{BM}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \tag{4}$$
$GR_g(\cdot) = (f_{BM,1}, f_{BM,2}, \ldots, f_{BM,D})$ is a $1 \times D$ random sequence drawn from the standard Gaussian distribution, generating growth randomness for each dimension, with mean $\mu_g = 0$ and variance $\sigma_g^2 = 1$.
Then, the position renewal equation of the growing stage is as follows, and Figure 4 shows the schematic diagram of the growing stage:
$$X_i(t+1) = X_i(t) + V(t) * GR_g(\cdot) * \left[ r_1\, g_{1,i}(t) + (1 - r_1)\, g_{2,i}(t) \right] \tag{5}$$
$r_1$ is a random number between 0 and 1, and ∗ represents element-wise multiplication. $X_{cent}$ represents the centroid position of the entire population, calculated by
$$X_{cent,d} = \frac{1}{N} \sum_{i=1}^{N} X_{i,d} \tag{6}$$
where $X_{i,d}$ represents the d-th dimensional coordinate of the i-th search branch. $X_i$ generally grows in the direction of the global optimum, and its motion is diversified by combining the two random numbers. The effect of $V(t)$ makes the growth of $X_i$ gradually slow down as the iterations proceed.
In the growing stage, branches should shield each other as little as possible; that is, search agents should not be positioned too close together. We designed a repulsion mechanism to adjust the direction of branch growth when the target branch $X_i^*$ is too close to other branches. The repulsion mechanism is given in Equation (7):
$$F_{rep,i}(X_i^*) = \alpha \sum_{k=1}^{|N_{rep}|} rep_k, \qquad rep_k = X_i^* - X_k, \qquad N_{rep} = \left\{ X_k \,\middle|\, X_k \neq X_i^*,\; \| X_i^* - X_k \| < Dis \right\} \tag{7}$$
A high-dimensional spherical region is constructed with the location of $X_i^*$ as the center and $Dis$ as the radius. The search agents $X_k$ inside this region form a neighborhood set $N_{rep}$, and $|N_{rep}|$ denotes the number of elements in $N_{rep}$. In the above formula, $rep_k$ is the vector pointing from each agent $X_k$ in $N_{rep}$ to the current agent $X_i^*$, and $F_{rep,i}(X_i^*)$ is the sum of all $rep_k$ vectors, representing the repulsive force of the population, where $\alpha \in (0,1)$ is a repulsion modulator. The agent update scheme after repulsive-force modification can be summarized as follows:
$$\begin{aligned} X_i^* &= X_i(t) + V(t) * GR_g(\cdot) * \left[ r_1\, g_{1,i}(t) + (1 - r_1)\, g_{2,i}(t) \right] \\ X_i(t+1) &= X_i^* + F_{rep,i}(X_i^*) \end{aligned} \tag{8}$$
The above formula covers both the case in which a branch needs repulsive-force modification and the case in which it does not. When $X_i^*$ is not within the shading range of any other branch, $|N_{rep}| = 0$, so $F_{rep,i}(X_i^*) = 0$ and $X_i(t+1) = X_i^*$, which has the same meaning as Equation (5).
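The growing-stage update of Equations (5)–(8) can be sketched as follows in Python/NumPy (a sketch under our reading of the text; function names and default parameter values are ours):

```python
import numpy as np

def grow_step(X, i, X_best, V, Dis=1.0, alpha=0.5, seed=None):
    """One growing-stage update of agent i, Equations (5)-(8)."""
    rng = np.random.default_rng(seed)
    X_cent = X.mean(axis=0)                       # population centroid, Eq. (6)
    g1 = X_best - X[i]                            # toward the best-known resources
    g2 = X[i] - X_cent                            # radially away from the centroid
    r1 = rng.random()
    GR = rng.standard_normal(X.shape[1])          # Gaussian growth randomness
    X_star = X[i] + V * GR * (r1 * g1 + (1.0 - r1) * g2)   # Eq. (5)
    # Repulsion from all branches closer than Dis, Eq. (7)
    mask = np.linalg.norm(X - X_star, axis=1) < Dis
    mask[i] = False                               # exclude the branch itself
    F_rep = alpha * (X_star - X[mask]).sum(axis=0) if mask.any() else 0.0
    return X_star + F_rep                         # Eq. (8)
```

When no other branch lies within Dis of the candidate position, the repulsion term vanishes and the update reduces to Equation (5), as described above.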

3.4. Sprouting Stage

The sprouting stage simulates the growth process of new shoots on the crown branches. Well-grown branches have more nutrients and are more likely to sprout, so an elite pool X e l i t e was constructed to select the baseline location for sprouting:
$$X_{elite} = \{ X_{1st}, X_{2nd}, X_{3rd}, X_{half}, X_{cent} \} \tag{9}$$
where $X_{1st}$, $X_{2nd}$, and $X_{3rd}$ represent the three individuals with the best fitness values, and $X_{half}$ represents the centroid of the individuals whose fitness values rank in the top 50%, as shown in Equation (10). $X_{cent}$ is the centroid position of the population, as shown in Equation (6). The elite pool contains the best-developed individuals in the tree crown, representing the most resource-rich locations in the known space with the highest probability of sprouting; it is also where the optimizer’s primary exploitation takes place. After the locations of all search branches are updated, the algorithm updates the elite pool once for the next iteration:
$$X_{half,d} = \frac{1}{N_h} \sum_{i=1}^{N_h} X_{i,d}, \qquad N_h = [N/2] \tag{10}$$
Unlike growing, which is a continuous historical process, sprouting is a discrete, mutation-like update of the search agents, and the resulting branch locations are highly dispersed. A Gaussian random number $GR_s(\cdot)$ is used to simulate the uncertainty of the sprouting position. To build diversity into sprouting, we use two reference directions $s_{1,i}(t)$ and $s_{2,i}(t)$:
$$s_{1,i}(t) = X_{best}(t) - X_i(t), \qquad s_{2,i}(t) = X_i(t) - X_{cent}(t) \tag{11}$$
Each search agent can be updated using the following formula, and Figure 5 shows the schematic diagram of the sprouting stage:
$$X_i(t+1) = X_{elite}^*(t) + GR_s(\cdot) * \left[ r_2\, s_{1,i}(t) + (1 - r_2)\, s_{2,i}(t) \right] \tag{12}$$
$GR_s(\cdot)$ is a $1 \times D$ standard Gaussian random sequence, $X_{elite}^*$ is a random individual from the elite pool, and $r_2$ is a random number between 0 and 1. The sprouting stage provides stronger exploitation ability while accepting some inferior solutions to a certain extent.
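The sprouting update of Equations (9)–(12) can be sketched as follows (a Python sketch assuming minimization, i.e., lower fitness is better; names are ours):

```python
import numpy as np

def sprout_step(X, i, fitness, seed=None):
    """One sprouting-stage update of agent i around a random elite-pool
    member, Equations (9)-(12)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(fitness)                              # best first
    X_cent = X.mean(axis=0)                                  # Eq. (6)
    X_half = X[order[: max(1, len(X) // 2)]].mean(axis=0)    # Eq. (10)
    elite = np.stack([X[order[0]], X[order[1]], X[order[2]], X_half, X_cent])
    X_e = elite[rng.integers(len(elite))]                    # random elite member
    s1 = X[order[0]] - X[i]                                  # toward the best branch
    s2 = X[i] - X_cent                                       # away from the centroid
    r2 = rng.random()
    GR = rng.standard_normal(X.shape[1])
    return X_e + GR * (r2 * s1 + (1.0 - r2) * s2)            # Eq. (12)
```

Because the baseline $X_e$ is always drawn from the elite pool, new shoots concentrate around the most resource-rich known locations while the Gaussian term keeps some dispersion.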

3.5. Pruning Mechanism

We designed the pruning process to balance the two stages of sprouting and growing. Researchers have observed that new branches of trees often need to lie dormant for some time during their growth before they can sprout new branches. Ludvig’s law states that a sapling grows a new branch after an interval of, say, a year. The new branches are dormant in the second year, while the old ones still sprout. After that, the old shoots and those that have been dormant for a year sprout simultaneously, and the new shoots that sprouted in the same year become dormant the following year. In this way, the number of branches and sprouts of a tree in each year constitutes a Fibonacci sequence:
$$F_n = \{1, 1, 2, 3, 5, 8, 13, 21, 34, \ldots\}, \qquad F_{n+2} = F_{n+1} + F_n, \quad F_1 = 1,\; F_2 = 1 \tag{13}$$
If the number of terms n is large enough, the ratio of the numbers of shoots of two adjacent generations, $F_n / F_{n+1}$, approaches the golden ratio 0.618, which means that, when the number of shoots in the current generation is N, the number in the next generation is approximately $N / 0.618$.
Within each iteration, the population X is randomly divided into two sub-populations, $P_g$ and $P_s$. At the beginning of the run, the sizes of the two sub-populations are $N_s = [N \times 0.618]$ and $N_g = N - N_s$, where $[\cdot]$ denotes rounding. Branches in each sub-population enter the growing or sprouting stage, respectively. Since the growth of a tree is limited by its carrying capacity, the number of branches cannot increase forever, so the iterative process gradually reduces the number of growing branches and increases the number of newly sprouted branches. A pruning time interval $t_{cut} = T/N$ is defined; when the iteration enters a new pruning interval, i.e., $\mathrm{mod}(t, t_{cut}) = 0$, the algorithm performs two additional actions. First, the sizes of the two sub-populations are adjusted: $N_g = N_g - 1$ and $N_s = N_s + 1$. Second, some branches with low fitness ranks are pruned off and reinitialized in the search space by Equation (1). Following Ludvig’s law and the Fibonacci series, the number of branches pruned each time is set to $[0.382 \times N]$. These two additional actions are performed only in iterations where $\mathrm{mod}(t, t_{cut}) = 0$; in the remaining iterations, only the distribution of agents between the sub-populations is updated (Algorithm 1).
$$X_i(t+1) = \begin{cases} X_i(t) + V(t) * GR_g(\cdot) * \left[ r_1\, g_{1,i}(t) + (1 - r_1)\, g_{2,i}(t) \right] + F_{rep,i}(X_i^*), & X_i \in P_g \\[4pt] X_{elite}^*(t) + GR_s(\cdot) * \left[ r_2\, s_{1,i}(t) + (1 - r_2)\, s_{2,i}(t) \right], & X_i \in P_s \end{cases} \tag{14}$$
Algorithm 1: Pruning mechanism
  • Data: Population size N, current iteration t, pruning time interval $t_{cut}$, sub-population sizes $N_g$, $N_s$
  • Result: Updated sub-population sizes $N_g$, $N_s$, agent positions $X(t+1)$
  • When the iteration enters a pruning period, i.e., $\mathrm{mod}(t, t_{cut}) = 0$:
  • $N_s = N_s + 1$, $N_g = N_g - 1$;
  • Rank the fitness values $Fit[X]$;
  • Select the $[0.382 \times N]$ individuals with the worst fitness and reinitialize them with Equation (1);
In this process, branches with poor fitness are eliminated and replaced with new shoots, further improving the exploitation effect in the vicinity of elite branch individuals.
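The pruning bookkeeping described above can be sketched in Python (an illustrative sketch; the function name is ours, and the actual reinitialization of the pruned branches via Equation (1) is omitted):

```python
def prune_schedule(N, T):
    """Pruning mechanism bookkeeping: every t_cut = T/N iterations, move one
    branch from the growing to the sprouting sub-population and mark the worst
    [0.382*N] branches for reinitialization via Equation (1).
    Returns the list of (iteration, N_g, N_s, n_pruned) pruning events."""
    t_cut = max(1, T // N)
    Ns = round(0.618 * N)          # initial sprouting sub-population size
    Ng = N - Ns                    # initial growing sub-population size
    events = []
    for t in range(1, T + 1):
        if t % t_cut == 0 and Ng > 0:
            Ng -= 1
            Ns += 1
            events.append((t, Ng, Ns, round(0.382 * N)))
    return events
```

For example, with N = 10 and T = 100, a pruning event occurs every 10 iterations, shifting one branch per event until the growing sub-population is empty.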

3.6. CGO Process and Computational Complexity

The process of the CGO algorithm is shown in Figure 6. In the main iteration loop, all branches are divided into two parts to balance exploration and exploitation. Within each iteration, each branch is assigned to one of the two sub-populations. After the branch positions for the next iteration are updated, the branches’ fitness values are recalculated and the elite pool is updated.
The algorithm flow of CGO is shown in the pseudo-code of Algorithm 2.
Algorithm 2: Crown growth optimizer (CGO)
[Pseudo-code of CGO; rendered as an image in the published article.]
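Since the published pseudo-code of Algorithm 2 is rendered as an image, a compact Python sketch of the main loop may be helpful. It follows our reading of Sections 3.2–3.5; the repulsion mechanism is omitted for brevity, and all names and default values are ours:

```python
import numpy as np

def cgo(f, d, lb, ub, N=30, T=200, v_min=0.1, v_max=1.0, b=1.0, seed=0):
    """Compact sketch of the CGO main loop (minimization)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((N, d)) * (ub - lb)                  # Eq. (1)
    fit = np.array([f(x) for x in X])
    Ns = round(0.618 * N)                                    # Fibonacci split
    Ng = N - Ns
    t_cut = max(1, T // N)
    g_best, g_fit = X[np.argmin(fit)].copy(), float(fit.min())
    for t in range(1, T + 1):
        order = np.argsort(fit)
        best, cent = X[order[0]], X.mean(axis=0)
        half = X[order[: N // 2]].mean(axis=0)
        elite = np.stack([X[order[0]], X[order[1]], X[order[2]], half, cent])
        V = v_max + (v_min - v_max) / (1 + np.exp(-10 * b * (2 * t / T - 1)))
        growers = set(rng.choice(N, size=Ng, replace=False).tolist())
        for i in range(N):
            r = rng.random()
            GR = rng.standard_normal(d)
            drift = r * (best - X[i]) + (1 - r) * (X[i] - cent)
            if i in growers:                                  # growing, Eq. (5)
                cand = X[i] + V * GR * drift
            else:                                             # sprouting, Eq. (12)
                cand = elite[rng.integers(len(elite))] + GR * drift
            cand = np.clip(cand, lb, ub)
            X[i], fit[i] = cand, f(cand)
        if t % t_cut == 0:                                    # pruning mechanism
            if Ng > 0:
                Ng, Ns = Ng - 1, Ns + 1
            worst = np.argsort(fit)[-round(0.382 * N):]
            X[worst] = lb + rng.random((len(worst), d)) * (ub - lb)
            fit[worst] = np.array([f(x) for x in X[worst]])
        if float(fit.min()) < g_fit:
            g_best, g_fit = X[np.argmin(fit)].copy(), float(fit.min())
    return g_best, g_fit
```

On a simple sphere function this sketch converges toward the origin, illustrating the interplay of the growing, sprouting, and pruning stages.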
The computational complexity of CGO mainly depends on four elements: population initialization, position updating, fitness calculation, and fitness ranking. The computational complexity of generating the initial positions is O(N × D), where N is the number of search agents and D is the dimension of the solution space. The fitness values of all individuals are calculated and sorted once per iteration, with complexity O(N × T) + O(N log N × T), where T is the maximum number of iterations. Updating the positions of all agents in the growing and sprouting stages costs O(N × D × T). In addition, an extra loop is required to compute the repulsive force, with complexity O(N). Each pruning step replaces 0.382 × N branches, consuming O(T/2 × 0.382 × N × D) in total. Overall, the time complexity of CGO is O(ND + T · N(log N + 1.191D + 2)).

4. Experiment and Simulation

This work used the benchmark function to test the proposed CGO algorithm’s performance. CGO was further used to solve typical engineering problems, including parameter extraction for photovoltaic systems and robot path planning.
The experiment and simulation studies in this section used MATLAB. The code ran on a computer equipped with a 12th Gen Intel(R) Core(TM) i7–12700H @ 2.30 GHz CPU, 16.0 GB RAM, and the Windows 11 operating system. The version of MATLAB used was R2024a.

4.1. Testing Results on CEC2017 Benchmark Functions

4.1.1. Experiment Settings

Experimental studies were performed on thirty benchmark functions from the CEC2017 test suite. More details on these typical test problems can be found in [18]. The benchmark functions fall into four types: unimodal problems (F1–F3), simple multimodal problems (F4–F10), hybrid problems (F11–F20), and composition problems (F21–F30). These benchmarks reflect an algorithm’s performance on real-world optimization problems. We compared the proposed algorithm with nine other advanced optimization algorithms: SMA [19], BKA [20], DBO [21], GWO [11], WOA [12], EWOA [22], HHO [13], MVO [6], and AVOA [23]. The main parameters of the algorithms involved are shown in Table 1.
In addition, the fundamental parameters remained consistent: the number of search agents $N = 100$, the search dimension $D \in \{2, 10, 30, 50\}$, the search upper bound $U_b = 100$, and the search lower bound $L_b = -100$. Each algorithm was independently run 20 times, and two evaluation metrics were used to compare the optimization performance of each method intuitively: the average value (mean) and the standard deviation (std):
$$mean = \frac{1}{n} \sum_{i=1}^{n} f_i^*, \qquad std = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( f_i^* - mean \right)^2} \tag{15}$$
where the mean reflects the convergence accuracy of the algorithm, std quantifies the dispersion of the optimization results, i is the index of a repeated run, n is the total number of runs, and $f_i^*$ is the best solution found in the i-th run.
Statistics such as standard deviation or variance can be used to measure the diversity of swarms in the search space. In this study, Positional Diversity was used to describe changes in the diversity of the branch swarm, defined as Equation (16):
$$Div = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\sum_{j=1}^{D} \left( x_{ij} - \bar{x}_j \right)^2} \tag{16}$$
where $x_{ij}$ is the coordinate of the i-th particle in the j-th dimension, and $\bar{x}_j$ is the average coordinate of all individuals in the j-th dimension.
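Under our reading, Equation (16) is the mean Euclidean distance of the agents from the swarm’s mean position, which can be sketched as (the function name is ours):

```python
import numpy as np

def positional_diversity(X):
    """Equation (16): average Euclidean distance of the N agents
    from the swarm's mean position x_bar."""
    x_bar = X.mean(axis=0)
    return float(np.sqrt(((X - x_bar) ** 2).sum(axis=1)).mean())
```

For instance, four agents placed at unit distance from their common centroid give a diversity of exactly 1.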
At the same time, the Friedman test was used to rank the average fitness of CGO and the other algorithms [24]. In Equation (17), k is the number of algorithms, $R_j$ is the average rank of the j-th algorithm, and n is the number of test cases. The test statistic approximately follows a $\chi^2$ distribution with $k - 1$ degrees of freedom. The test first ranks the algorithms on each problem individually and then averages the ranks to obtain each algorithm’s final rank for the considered problems.
$$F_f = \frac{12n}{k(k+1)} \left[ \sum_j R_j^2 - \frac{k(k+1)^2}{4} \right] \tag{17}$$
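Equation (17) can be sketched directly in Python (a simple sketch that ignores tied scores; the function name is ours):

```python
import numpy as np

def friedman_statistic(scores):
    """Equation (17): Friedman chi-square statistic for an
    (n problems) x (k algorithms) score matrix, lower scores being better.
    Ties are ignored in this simple sketch."""
    n, k = scores.shape
    ranks = np.empty_like(scores, dtype=float)
    for row, r in zip(np.argsort(scores, axis=1), ranks):
        r[row] = np.arange(1, k + 1)              # rank algorithms per problem
    R = ranks.mean(axis=0)                        # average rank per algorithm
    return 12.0 * n / (k * (k + 1)) * (np.square(R).sum() - k * (k + 1) ** 2 / 4.0)
```

For example, with k = 3 algorithms over n = 4 problems where the same algorithm always wins, the average ranks are (1, 2, 3) and the statistic evaluates to 8.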

4.1.2. Convergence Behavior of CGO

In this part, the convergence behavior of CGO was studied utilizing several CEC2017 benchmarks in the 2-dimensional parametric space. Specifically, in this experiment, the convergence behavior of CGO was reflected by the search history, convergence graph, swarm’s diversity, and diagram of trajectory in the first dimension. The benchmark functions No. 1, 3, 5, 6, 10, 22, 24, and 26 of CEC2017 were selected for testing. In this part, we set t c u t = T / N , sub-population sizes started with N s = [ 0.382 × N ] , and N g = N N s to more clearly show the search history of the group exploration process.
As depicted in Figure 7, the first column describes the search spaces: CEC2017-1 and CEC2017-3 are unimodal problems with smooth structure; CEC2017-5, CEC2017-6, and CEC2017-10 are simple multimodal problems; and CEC2017-22, CEC2017-24, and CEC2017-26 are complex composition and hybrid problems containing a large number of locally optimal solutions. The selected functions simulate real solution spaces well.
The second column shows the historical positions of the swarm at different iteration steps in different colors. It can be clearly seen that, at the beginning of the iteration ($t \le 100$, represented by black, cyan, and green), the swarm is highly dispersed and tends to discover potential, promising areas. In the middle and late iterations ($100 < t \le 200$, denoted by yellow, orange, and red), the swarm tends to cluster around the globally optimal solution. This shows that CGO achieves a significant trade-off between exploration and exploitation.
The convergence graph in the third column is the most widely used metric for validating the performance of the meta-heuristic optimizer. As shown in Figure 7, the convergence graph obtained by CGO shows that the algorithm has a fast convergence rate on all eight benchmarks. It can be seen from CEC2017–24 that, when dealing with hybrid problems with multiple local optima, the CGO algorithm sometimes falls into a local optimal state temporarily. However, under the sprouting mechanism based on elite pool guidance, the algorithm achieves good fitness. In practice, this suggests that CGO has good exploratory capabilities to preserve the diversity of the population while avoiding local optimality.
The fourth and fifth columns show the positional diversity of the population and the trajectory of the swarm’s average position in the first dimension, respectively, which reflect how CGO balances exploration and exploitation. It can be seen that, at around 130 iterations, as the growing sub-population shrinks to 0, all agents rapidly gather around the elite pool and deeply exploit its vicinity. Because the diversity of the population is well preserved in the initial stage of the iteration, the trajectories of individuals show mutations and significant changes, suggesting that CGO is more likely to explore potential, high-quality solutions.

4.1.3. The Swarm Behavior of the Growing Stage and Sprouting Stage in CGO

This part shows in detail how the growing and sprouting stages evolve over several iterations, to better reveal how CGO works. Figure 8a shows the growth of a clustered population (green) over three subsequent time steps (yellow, orange, red). It can be seen that, during this stage, individuals grow away from the centroid, which gives the group the possibility of exploring potential optimal solutions. The swarm diversity (calculated using Equation (16)) at these four time steps is {1.0318, 2.2091, 6.4355, 15.6924}, respectively, indicating that the dispersion of the population is increasing. Figure 8b shows the change in population diversity when only growing occurs over 100 iteration steps. In the early iterations, branches rapidly spread over the entire search space. As the iteration proceeds, the Gaussian distribution of the random number GR(·) in Equation (8) also allows inward growth, so the diversity of the population reaches a balance.
Figure 9a shows the position change of a random initial population (green) in the sprouting stage. Each particle can be seen to track an elite-pool individual, which gives the group the ability to converge on the best individuals and exploit them deeply. Figure 9b shows the change in group diversity during this process: the individuals can converge to an optimal location within dozens of iterations. This illustrates the exploitative power of CGO; the exploitation process can be very rapid thanks to the guiding role of the elite pool.

4.1.4. Optimized Performance of CGO

This part compares CGO and the nine other algorithms on the thirty CEC2017 problems in 10, 30, and 50 dimensions. The solution results are shown in Table 2, Table 3 and Table 4, where the bold entries are the best results for the benchmark function in that row. We discuss the experimental results according to the class of benchmark function.
1. Unimodal functions (F1–F3): These three functions have only one global best solution. The gradient near the optimum of these functions is very small relative to the rest of the space, which makes them suitable for testing an algorithm’s exploitation ability. CGO performs better than the other algorithms on the 10-dimensional F1–F3 problems, showing that CGO has sufficient exploitation ability near the optimum. Because the sprouting stage of CGO is guided by the elite pool, CGO can concentrate on exploitation in such unimodal problems. In contrast, swarm algorithms such as BKA, DBO, and EWOA still allocate part of the swarm to exploration in the middle and later iterations, so the CGO results are better.
2. Simple multimodal functions (F4–F10): These functions have many locally optimal solutions and are suitable for testing an algorithm's exploration ability. CGO exhibits the best global exploration capability and achieves the best results on all benchmark functions except the 30-dimensional F4 function. The highly dispersed, repulsive mechanism of the growing stage of CGO improves population diversity and is therefore conducive to finding more potential optimal solutions.
3. Hybrid functions (F11–F20): These functions combine unimodal and multimodal components and are more challenging to optimize. CGO achieved the best results in 20 of the 30 benchmark instances across 10, 30, and 50 dimensions. EWOA and BKA are also prominent on this class: the pooling mechanism and priority selection strategy of EWOA improve the local and global search capability of WOA, and BKA integrates a Cauchy mutation strategy and a leader strategy to enhance global search capability and convergence speed. This also leaves room for improvement in the CGO algorithm.
4. Composition functions (F21–F30): The composition benchmark functions combine all the above function types. CGO achieved the best results in 17 of the 30 instances across 10, 30, and 50 dimensions, showing that its optimization performance is better than that of existing advanced algorithms.
The main reason CGO outperforms other algorithms on unimodal, multimodal, and composition functions is its elite pool. Unlike algorithms such as DBO, WOA, and BKA, which record only the position of the single best solution, CGO records several optimal locations, the median location, and the centroid location of the whole population during exploration, preserving the statistical characteristics of the swarm and giving CGO the chance to discover potential local optima. The sprouting stage is guided entirely by the elite pool, and the synthesis of the two vectors s1 and s2 combines directionality with randomness. These mechanisms make CGO's ability to exploit the optimal regions of the solution space significantly better than that of other algorithms.
The ranks of CGO and the nine other algorithms on all CEC2017 benchmark functions are shown in Figure 10. The green line represents CGO; it lies in the central area of the radar map, indicating that CGO is significantly better than the other algorithms on most problems.
Table 5 shows the Friedman test rankings of all the algorithms above. The proposed CGO algorithm performs very well, with an overall ranking of 2.5000 on the 10-dimensional functions, 2.4333 on the 30-dimensional functions, and 2.4000 on the 50-dimensional functions. Notably, as the dimension of the solution space increases, CGO shows increasingly better optimization performance.
Additionally, the Wilcoxon test [24] was conducted between CGO and the nine other algorithms on the 10-, 30-, and 50-dimensional CEC2017 functions. As the outcomes in Table 6 show, the attained p-values are below 5% in most cases. Only on the 10-dimensional functions do the p-values of CGO vs. BKA and CGO vs. EWOA exceed 5%, indicating that CGO's performance is close to that of BKA and EWOA in those cases. In most cases, the optimization performance of CGO is significantly better than that of the other algorithms.
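Both tests are available in SciPy. The sketch below uses synthetic per-function results (the two competitor arrays are hypothetical stand-ins) to show how mean ranks like those in Table 5 and p-values like those in Table 6 are obtained:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic final errors on 30 benchmark functions for three optimizers;
# algo_b and algo_c are hypothetical competitors that trail cgo
cgo = rng.normal(1.0, 0.1, 30)
algo_b = cgo + np.abs(rng.normal(0.5, 0.1, 30))
algo_c = cgo + np.abs(rng.normal(1.0, 0.1, 30))

# Friedman test: ranks the algorithms per function, then tests the rank sums
chi2, p_friedman = stats.friedmanchisquare(cgo, algo_b, algo_c)
mean_ranks = stats.rankdata(np.c_[cgo, algo_b, algo_c], axis=1).mean(axis=0)

# pairwise Wilcoxon signed-rank test of cgo against one competitor
_, p_wilcoxon = stats.wilcoxon(cgo, algo_b)

print(mean_ranks)  # cgo holds the lowest mean rank here
print(p_friedman < 0.05, p_wilcoxon < 0.05)
```

A lower mean rank is better, and a p-value below 0.05 indicates a statistically significant difference between the paired samples.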
The evolution curves (Figure 11 and Figure 12) show that CGO's (green line) convergence speed is much faster than that of the other algorithms and that it quickly approaches the optimal value. CGO evolves rapidly in the early iterations, demonstrating an excellent ability to search the global space. In the middle and late iterations, CGO maintains persistent local exploitation on many functions; its evolution curve continues to decline slowly, avoiding premature convergence.

4.1.5. Analysis of Pruning Mechanism in CGO

In this part, we analyze the influence of the pruning time interval t_cut of the pruning mechanism on the exploration and exploitation behavior of CGO. In [25], Hussain et al. proposed an approach for measuring and analyzing the exploitation and exploration capability of meta-heuristic algorithms. We used this method to measure the extent of population exploration and exploitation:
$$ Epl\% = \frac{Div}{Div_{\max}} \times 100, \qquad Ept\% = \frac{\left| Div - Div_{\max} \right|}{Div_{\max}} \times 100 $$
where Div_max denotes the maximum diversity reached during the run, and Epl% and Ept% are the exploration and exploitation percentages, respectively. Figure 13 shows the proportions of exploration (blue) and exploitation (green) capability in the crown population when adjusting t_cut ∈ {T/2N, T/N, 3T/2N, 2T/N} on the same benchmark function, CEC2017-10.
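The two percentages above can be computed directly from a recorded diversity trace. A minimal sketch (the diversity values here are made up for illustration):

```python
import numpy as np

def exploration_exploitation(div_trace):
    """Epl% and Ept% from a per-iteration diversity trace (Hussain et al.)."""
    div = np.asarray(div_trace, dtype=float)
    div_max = div.max()
    epl = div / div_max * 100.0                    # exploration percentage
    ept = np.abs(div - div_max) / div_max * 100.0  # exploitation percentage
    return epl, ept

# made-up diversity trace: the swarm spreads out, then contracts
epl, ept = exploration_exploitation([2.0, 8.0, 4.0, 1.0])
print(epl.tolist())  # [25.0, 100.0, 50.0, 12.5]
print(ept.tolist())  # [75.0, 0.0, 50.0, 87.5]
```

Note that the two percentages always sum to 100 at every iteration, which is why the exploration and exploitation curves in Figure 13 are complementary.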
Figure 13 shows the exploitation and exploration capability of individuals for different pruning time intervals t_cut. Two distinct stages of particle behavior are visible. In the first stage, the primary behavior of the group is exploration. As t_cut increases, the interval at which CGO performs pruning grows, so the exploration phase lasts longer; in this phase, particle swarms focus on spreading out into the search space and finding more potential locations. In each pruning period, the relative sizes of N_g and N_s are updated once, and the sprouting effect gradually overtakes the growing effect. Consequently, the exploration capability declines over the iterations while the exploitation capability gradually increases. Once N_g is reduced to 0, all individuals enter the sprouting stage; the exploration effect drops rapidly, and the exploitation effect rises rapidly. Due to the elite guidance and randomness of the sprouting stage (Equation (12)) and the final selection process of the pruning mechanism (Section 3.5), the optimization does not converge immediately but keeps deepening the exploitation within the known optimal region to find a better solution.
We examined the diversity change of CGO at t_cut = T/N on CEC2017 and compared the results for different t_cut settings. Table 7 and Figure 14 clearly illustrate the ability of t_cut to regulate exploration and exploitation. CEC2017-1 and CEC2017-3 are unimodal problems; branches do not need to explore for other, better solutions and should focus on exploitation, so a shorter t_cut is more appropriate. For multimodal and hybrid problems, such as CEC2017-5, CEC2017-6, CEC2017-10, CEC2017-22, CEC2017-24, and CEC2017-26, the large number of locally optimal solutions makes a thorough exploration process essential; therefore, a larger t_cut gives better search performance. The more complex the problem, the more thorough the exploration phase needs to be. At the same time, when t_cut = 2T/N, CGO's results are poor because the necessary exploitation process is lacking.
This example illustrates how the pruning mechanism balances and exploits the two phases of exploration and exploitation and provides an adaptive approach to different problem types. We usually set t_cut = T/N, as this gives the most balanced result.

4.1.6. Influence of Population Size N and Max Iteration T on Optimization

Figure 15 and Figure 16 show the optimization results for different population sizes N = {10, 20, 50, 100, 150} and maximum iteration counts T = {20, 50, 80, 100, 200, 500, 1000, 2000}, respectively. For each set of blue bars, the vertical coordinate is the optimization solution error and the horizontal coordinate is the value of N or T; for the orange bars, the vertical coordinate is the improvement efficiency and the horizontal coordinate is the index of the increase of N or T. The solution error and improvement efficiency are defined in Equation (19).
$$ Error = Fit^{*}\left[ X(T) \right] - f^{*}, \qquad Efficiency_{N} = \frac{Error^{N}_{k+1} - Error^{N}_{k}}{N_{k+1} - N_{k}}, \qquad Efficiency_{T} = \frac{Error^{T}_{k+1} - Error^{T}_{k}}{T_{k+1} - T_{k}} $$
where f* is the theoretical optimal solution, Fit*[X(T)] is the optimal solution obtained by CGO, and k is the index of the tested settings of N and T. When N and T are increased, CGO obtains better solutions. However, as N and T grow, the efficiency of the improvement diminishes, which wastes computation time. In the selected 10-dimensional problem, with N = 100 and T = 200, CGO has almost reached its maximum optimization ability; further increases in N and T do not significantly improve it.
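As a concrete reading of Equation (19), the marginal-gain computation can be sketched as follows (the error values below are hypothetical):

```python
def improvement_efficiency(errors, settings):
    """Change in solution error per unit increase of N (or T), as in Eq. (19)."""
    return [(errors[k + 1] - errors[k]) / (settings[k + 1] - settings[k])
            for k in range(len(errors) - 1)]

# hypothetical errors as the population size N grows: returns diminish
N = [10, 20, 50, 100, 150]
err = [120.0, 40.0, 10.0, 4.0, 3.5]
print(improvement_efficiency(err, N))  # [-8.0, -1.0, -0.12, -0.01]
```

An efficiency approaching zero signals the saturation point observed around N = 100 and T = 200: extra agents or iterations buy almost no further error reduction.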
Table 8 shows the solving time of CGO and the nine other algorithms on the thirty 10-dimensional CEC2017 functions, with N = 100 and T = 200. Each optimizer solves each benchmark function 20 times independently, and the 30 × 20 run times are averaged to obtain the mean time per problem. CGO is ahead of most advanced optimization algorithms in average processing time, including MVO, WOA, DBO, GWO, AVOA, BKA, and SMA. Among the compared algorithms, only HHO has a shorter computation time than CGO. This demonstrates the high efficiency of the algorithm.

4.2. Testing Results on CEC2020 Real-World Constrained Problems

Unlike CEC2017, CEC2020 contains a set of engineering problems with real environmental constraints, called CEC2020-RW. The CEC2020-RW problems are highly non-convex and complex and contain several equality and inequality constraints. The penalty function method is used to handle the constraints.
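The penalty approach can be sketched as a static-penalty wrapper that turns a constrained problem into an unconstrained one any of the compared optimizers can minimize. This is a common variant; the paper does not specify its penalty coefficients, so `rho` and the equality tolerance `eps` below are illustrative:

```python
def penalized(f, ineq, eq, rho=1e6, eps=1e-4):
    """Wrap objective f with a static penalty for g(x) <= 0 and h(x) = 0
    constraints; equalities are relaxed to |h(x)| <= eps."""
    def wrapped(x):
        viol = sum(max(0.0, g(x)) for g in ineq)
        viol += sum(max(0.0, abs(h(x)) - eps) for h in eq)
        return f(x) + rho * viol
    return wrapped

# toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
obj = penalized(lambda x: x * x, ineq=[lambda x: 1.0 - x], eq=[])
print(obj(2.0))        # 4.0  (feasible: no penalty added)
print(obj(0.5) > 1e5)  # True (infeasible: heavily penalized)
```

Because infeasible points receive a large additive penalty, the population is steered toward the feasible region without any change to the optimizer itself.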
CEC2020-RW covers several problem families, including industrial chemical processes, process synthesis and design, mechanical engineering, power systems, and livestock feed ration optimization; mathematical formulations of these problems can be found in [26]. This section used CGO and nine other algorithms to solve 20 typical CEC2020-RW problems. The selected problem names, main parameters, and theoretical optimal values are shown in Table 9, where D is the problem dimension, g is the number of equality constraints, h is the number of inequality constraints, and f* is the theoretical optimal value. Each algorithm was executed independently 20 times, with 1000 iterations as the termination criterion and 100 search agents.
The results of CGO and the nine other algorithms on CEC2020-RW are shown in Table 10, with the best results in bold. In addition, the Friedman ranking of each algorithm and the Wilcoxon test of each algorithm against CGO are shown in Table 10 and Table 11: CGO ranks 1.4000 among the ten algorithms and passes the Wilcoxon test at the 5% significance level against the other nine algorithms. CGO achieved the best results on all problems except RW07, RW08, and RW18 (where it ranked 2nd, 3rd, and 6th, respectively). Furthermore, Figure 17 shows the performance ranking of the algorithms on CEC2020-RW. These results show that CGO can meet the needs of real engineering problems and has made promising advancements.

4.3. Solving Robot Path Planning by CGO

4.3.1. Problem Definition

This section used the CGO algorithm to plan a robot's path from a starting point to a goal point while avoiding obstacles. We constructed a 2D configuration space X with a range of 10 × 10. The starting point was p_s = [0, 0]^T and the goal point was p_g = [10, 10]^T. Nine obstacles were distributed in the space, as shown in Figure 18; Table 12 lists the center position and radius of each circular obstacle. The number of path control points p_i = [x_i, y_i]^T is 5 (not including the starting and goal points), so the dimension of the decision variable is 10. The maximum number of iterations was T = 200, the number of search agents was 100, and each algorithm was run 10 times independently. Other parameters are as defined above.
The 2D path-planning problem in this study can be expressed as the following optimization model:
$$ \min_{p_i} \; J = L(p)\left( 1 + \lambda \, Viol \right) $$
$$ L(p) = \sum_{i=1}^{M-1} \sqrt{ (X_i - X_{i+1})^2 + (Y_i - Y_{i+1})^2 } $$
$$ Viol = \frac{1}{M} \sum_{j=1}^{O} \sum_{i=1}^{M} \max\left[ 1 - d_i / r_j ,\; 0 \right], \qquad \lambda = 10^{5} $$
In the optimization, the obstacle threat is added to the objective function as the penalty term Viol; a collision-free path has Viol = 0. The calculation of Viol is illustrated in Figure 19: when a path point p_i lies inside obstacle O_j, its distance d to the center is less than the radius r and the point is threatened; if d > r, it is not threatened. The robot path is obtained from the control points solved by the optimizer through cubic spline interpolation, with a total of M = 101 interpolation points.
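The objective above can be sketched in code. The helper below is an illustrative reimplementation, not the authors' code: SciPy's `CubicSpline` stands in for the cubic spline step, the obstacle format `(cx, cy, r)` and the 1/M normalization of Viol are assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def path_cost(ctrl, obstacles, lam=1e5, m=101):
    """J = L(p) * (1 + lam * Viol) for a flat decision vector
    [x1..x5, y1..y5]; start (0, 0) and goal (10, 10) are fixed."""
    n = len(ctrl) // 2
    xs = np.concatenate(([0.0], ctrl[:n], [10.0]))
    ys = np.concatenate(([0.0], ctrl[n:], [10.0]))
    s = np.linspace(0.0, 1.0, len(xs))  # spline parameter at the knots
    t = np.linspace(0.0, 1.0, m)        # M = 101 samples along the path
    px, py = CubicSpline(s, xs)(t), CubicSpline(s, ys)(t)
    length = np.hypot(np.diff(px), np.diff(py)).sum()
    viol = sum(np.maximum(1.0 - np.hypot(px - cx, py - cy) / r, 0.0).sum()
               for cx, cy, r in obstacles) / m
    return length * (1.0 + lam * viol)

# control points on the diagonal, no obstacles: cost is close to 10*sqrt(2)
ctrl = np.array([2.0, 4.0, 5.0, 6.0, 8.0, 2.0, 4.0, 5.0, 6.0, 8.0])
print(path_cost(ctrl, obstacles=[]))
```

An optimizer such as CGO then searches the 10-dimensional control-point vector that minimizes this scalar cost.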

4.3.2. Optimized Performance of CGO

Figure 20b shows the spatial trend of each planned path over 10 repeated CGO runs, in which the red line represents the longest path, the green line represents the shortest path, and the path control points are marked with circular markers. All trajectories satisfy the obstacle-avoidance constraints.
Among these ten planned paths, the shortest length is 14.6166 and the longest is 16.2720. As the figure shows, the gap between the two paths is small, which also illustrates how challenging path planning in a cluttered environment is. The standard deviation of the 10 path lengths is 0.6868, indicating that CGO solves this problem with good stability.
Figure 20a shows the optimal path found by each algorithm, Table 13 lists the resulting path lengths, and Figure 21 shows the average evolution curves of the optimization algorithms. As Figure 20a shows, most algorithms tend to pass between the obstacles toward the target point, while the paths planned by GWO and DBO pass around the sides of the obstacles, and MVO finds a different path from most algorithms. From the best and mean results in Table 13, the CGO algorithm outperforms the others, and the DBO algorithm performs the worst. In terms of the standard deviation of the 10 generated path lengths, CGO has the smallest value, which further confirms that CGO is effective and guarantees both the optimality and the stability of the generated paths. According to the average evolution curves (Figure 21), CGO evolves relatively quickly in the initial iterations (Iteration < 40) compared with DBO, MVO, HHO, and the other algorithms. The results show that the search ability of CGO, especially its early exploration, is better than that of the existing advanced algorithms.
In summary, the CGO proposed in this paper performs well in robot path planning. This shows that this algorithm has advantages in solving complex real problems and can be further researched and applied.

4.4. Extracting the Parameter of Photovoltaic Systems by CGO

4.4.1. Problem Definition

In the new energy field, photovoltaic (PV) systems are powerful tools for harnessing solar energy and converting it directly into electricity. Efficient and accurate PV system models must therefore be built from parameters extracted from measured current-voltage data.
This section used CGO and nine other algorithms to extract the core parameters of photovoltaic systems. Three classical PV models were adopted: the single-diode model (SDM), the double-diode model (DDM), and the PV module-diode model (PVMM). The equivalent circuits of the three models are shown in Figure 22. Five core parameters need to be extracted in SDM: the photogenerated current (I_ph), the reverse saturation current (I_sd), the series resistance (R_s), the shunt resistance (R_sh), and the diode ideality factor (n), as depicted in Figure 22a. Based on Kirchhoff's current law, the output current (I_L) in SDM can be calculated as in Equation (21):
$$ \mathrm{SDM}: \quad I_L = I_{ph} - I_{sh} - I_d = I_{ph} - \frac{V_L + I_L R_s}{R_{sh}} - I_{sd} \left[ \exp\left( \frac{q \, (V_L + R_s I_L)}{n k T} \right) - 1 \right] $$
where I_sh represents the shunt resistor current, I_d denotes the diode current, V_L denotes the output voltage, q is the electron charge (1.60217646 × 10⁻¹⁹ C), k is Boltzmann's constant (1.3806503 × 10⁻²³ J/K), and T is the cell temperature measured in Kelvin. The main goal of this problem is to minimize the difference between the data estimated by the algorithm and the real measured data, so the root mean square error (RMSE) is adopted as the objective function, as in Equation (22):
$$ \min \; \mathrm{RMSE}(x) = \sqrt{ \frac{1}{M} \sum_{i=1}^{M} f_i (x, I_L, V_L)^2 } $$
where M is the number of experimental data points and x represents the solution vector of the five parameters (I_ph, I_sd, R_s, R_sh, n). Similarly, the output current (I_L) in DDM (Figure 22b) and PVMM (Figure 22c) can be calculated as in Equations (23) and (24):
$$ \mathrm{DDM}: \quad I_L = I_{ph} - I_{sh} - I_{d1} - I_{d2} = I_{ph} - \frac{V_L + I_L R_s}{R_{sh}} - I_{sd1} \left[ \exp\left( \frac{q \, (V_L + R_s I_L)}{n_1 k T} \right) - 1 \right] - I_{sd2} \left[ \exp\left( \frac{q \, (V_L + R_s I_L)}{n_2 k T} \right) - 1 \right] $$
$$ \mathrm{PVMM}: \quad I_L = I_{ph} N_p - I_{sd} N_p \left[ \exp\left( \frac{q \, (V_L + R_s I_L N_s / N_p)}{n k T N_s} \right) - 1 \right] - \frac{V_L + R_s I_L N_s / N_p}{R_{sh} N_s / N_p} $$
In the DDM, I_sd1 denotes the diffusion current and I_sd2 the saturation current, n_1 and n_2 are the diode ideality factors, and x represents the solution vector of the seven parameters (I_ph, I_sd1, I_sd2, R_s, R_sh, n_1, n_2). In the PVMM, N_p denotes the number of cells connected in parallel and N_s the number of cells connected in series, and x represents the solution vector of the five parameters (I_ph, I_sd, R_s, R_sh, n). The bounds of the parameters in the three PV models are reported in Table 14.
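Because I_L appears on both sides of Equation (21), evaluating one candidate parameter vector requires solving the implicit equation at every measured voltage. The sketch below is an illustrative evaluation routine, not the authors' code; the root-finding bracket and the sample parameter values are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

Q = 1.60217646e-19    # electron charge (C)
K = 1.3806503e-23     # Boltzmann constant (J/K)
TEMP = 33.0 + 273.15  # cell temperature (K), as in the RTC France data

def sdm_current(VL, Iph, Isd, Rs, Rsh, n):
    """Solve the implicit SDM equation of Eq. (21) for the output current I_L."""
    def resid(IL):
        return (Iph - (VL + Rs * IL) / Rsh
                - Isd * (np.exp(Q * (VL + Rs * IL) / (n * K * TEMP)) - 1.0)
                - IL)
    return brentq(resid, -2.0, 2.0)  # assumed bracket for a small cell

def rmse(params, V_meas, I_meas):
    """Objective of Eq. (22): RMSE between modeled and measured currents."""
    I_model = np.array([sdm_current(v, *params) for v in V_meas])
    return np.sqrt(np.mean((I_model - I_meas) ** 2))

# illustrative (not fitted) parameter vector: Iph, Isd, Rs, Rsh, n
params = (0.76, 3.2e-7, 0.036, 53.7, 1.48)
print(sdm_current(0.5, *params))  # a physically plausible current in amperes
```

An optimizer such as CGO then searches the parameter vector that minimizes `rmse` over the full measured I-V dataset.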

4.4.2. Optimized Performance of CGO

In this experiment, the benchmark measured current-voltage data were obtained from Easwarakhanthan et al. [27], who used a commercial RTC France silicon solar cell of 57 mm diameter (under 1000 W/m² at 33 °C). We set T = 1000 and N = 50 and independently ran each optimizer 20 times on SDM, DDM, and PVMM. Table 15 shows the parameter extraction results of each algorithm on the three models. CGO achieved the best result on SDM, with RMSE = 0.001875, and the second-best result on DDM and PVMM, behind EWOA. This shows that CGO can solve PV parameter extraction and is ahead of most optimization algorithms. Figure 23 compares the CGO test results on SDM, DDM, and PVMM with the actual measured values; the experimental data obtained by CGO fit the measured data closely.

5. Conclusions and Discussion

This paper proposed a novel meta-heuristic optimization algorithm, the crown growth optimizer (CGO), which provides an innovative optimization method by simulating the growing, sprouting, and pruning mechanisms of tree crown growth. CGO introduces the two core stages of growing and sprouting and combines them with a pruning mechanism that balances the two stages, thus improving search efficiency and solution quality.
Experimental results on CEC2017 and CEC2020-RW show that CGO achieves remarkable results on multidimensional optimization problems, demonstrating global search capability and stability. Notably, when solving complex engineering problems, CGO showed high convergence speed and accuracy. The compared algorithms included SMA, BKA, DBO, GWO, MVO, HHO, WOA, EWOA, and AVOA. The experimental results show that CGO performs excellently on most test functions and practical applications.
In addition, this paper verified the practical application value of CGO through experiments in two specific applications: robot path planning and photovoltaic parameter extraction. In the robot path-planning problem, CGO could effectively plan the optimal path, avoiding obstacles and minimizing the path length. In the photovoltaic parameter extraction, CGO successfully optimized the model parameters and improved the modeling accuracy of the photovoltaic system. These application examples further prove the vast application potential of CGO in practical engineering problems.
Although CGO has performed well in several benchmarks and real-world applications, there are still some shortcomings and room for improvement. Future research can consider the following directions:
  • Adaptive parameter adjustment: introducing an adaptive parameter adjustment mechanism so that the algorithm can automatically tune its parameters for different types of problems.
  • Hybrid algorithms: combining the strengths of other optimization algorithms to further enhance CGO's global and local search capabilities and improve its overall performance.
  • Parallelization and distributed computing: using parallel and distributed computing techniques to improve the computational efficiency of CGO, especially for large-scale, high-dimensional problems.
In conclusion, the crown growth optimizer, as a new meta-heuristic optimization algorithm, not only puts forward a new optimization mechanism in theory but also shows its effectiveness and broad application prospects in practice. Future research and improvements will further enhance the performance of CGO, allowing it to play a more significant role in more complex optimization problems.

Author Contributions

Conceptualization, C.L.; formal analysis, W.L.; investigation, D.Z.; methodology, C.L., D.Z. and W.L.; software, C.L. and W.L.; validation, C.L. and D.Z.; writing—original draft, C.L.; writing—review and editing, D.Z. and W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are available in a publicly accessible repository. The CGO code and CEC2017 and CEC2020-RW’s parameters are included at https://github.com/RivenSartre/Crown-Growth-Optimizer, accessed on 5 June 2024.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

  • The main parameters of CGO.
D: Problem dimension
N: The number of agents
N_g: The number of growing agents
N_s: The number of sprouting agents
P_g: Growing sub-population
P_s: Sprouting sub-population
t: Current iteration
T: Total iterations
L_b: Search lower bound
U_b: Search upper bound
X: Searching agent
X_cent: The centroid agent
X_helf: The median agent
X_elite: The elite pool
V: Growth velocity
GR: Gaussian random number
r_1, r_2: Random numbers from 0 to 1
g_1, g_2: Growth reference directions
s_1, s_2: Sprouting reference directions

References

  1. Sayouti, A.B.Y. Hybrid Meta-Heuristic Algorithms for Optimal Sizing of Hybrid Renewable Energy System: A Review of the State-of-the-Art. Arch. Comput. Methods Eng. 2022, 29, 4049–4083. [Google Scholar]
  2. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA; London, UK, 1992; pp. 89–109. [Google Scholar]
  3. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  4. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  5. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  6. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  7. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  8. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  9. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  10. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  12. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  13. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  14. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  15. Luo, J.; Liang, Q.; Li, H. UAV penetration mission path planning based on improved holonic particle swarm optimization. J. Syst. Eng. Electron. 2023, 34, 197–213. [Google Scholar] [CrossRef]
  16. Allouani, F.; Abdelaziz, A.; Chris, H.; Xiao-Zhi, G.; Sofiane, B.; Ilyes, B.; Nadhira, K.; Hanen, S. Enhancing Individual UAV Path Planning with Parallel Multi-Swarm Treatment Coronavirus Herd Immunity Optimizer (PMST-CHIO) Algorithm. IEEE Access 2024, 12, 28395–28416. [Google Scholar]
  17. Wentao, W.; Chen, Y.; Jun, T. SGGTSO: A Spherical Vector-Based Optimization Algorithm for 3D UAV Path Planning. Drones 2023, 7, 452. [Google Scholar] [CrossRef]
  18. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization. In Proceedings of the IEEE Congress on Evolutionary Computation 2017, San Sebastian, Spain, 5–8 June 2017; p. 1. [Google Scholar]
  19. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  20. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 7, 98. [Google Scholar] [CrossRef]
  21. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2022, 79, 7305–7336. [Google Scholar] [CrossRef]
  22. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858. [Google Scholar] [CrossRef]
  23. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  24. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  25. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683. [Google Scholar] [CrossRef]
  26. Ali, A.K.W.Z. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar]
  27. Easwarakhanthan, T.; Bottin, J.; Bouhouch, I.; Boutrit, C. Nonlinear Minimization Algorithm for Determining the Solar Cell Parameters with Microcomputers. Int. J. Sol. Energy 1986, 4, 1–12. [Google Scholar] [CrossRef]
Figure 1. Schematic illustration of the source of inspiration for CGO.
Figure 2. Schematic of three mechanisms of CGO.
Figure 3. The growth velocity V(t) when T = 1000, v_max = 1, v_min = 0.15.
Figure 4. Schematic diagram of the growing stage.
Figure 5. Schematic diagram of the sprouting stage.
Figure 6. The main flow chart of CGO.
Figure 7. The performed CGO population search process on several 2D CEC2017 benchmarks.
Figure 8. The swarm behavior of growing stage.
Figure 9. The swarm behavior of sprouting stage.
Figure 10. Ranking of CGO and other algorithms on CEC2017 benchmark functions.
Figure 11. Evolution curve of CGO and other algorithms on CEC2017 30-dimension reference function (F1–F15).
Figure 12. Evolution curve of CGO and other algorithms on CEC2017 30-dimension reference function (F16–F30).
Figure 13. The influence of t_cut on optimization.
Figure 14. Exploitation percentage and exploration percentage on CEC2017 benchmarks.
Figure 15. The influence of population size N on optimization.
Figure 16. The influence of max iteration T on optimization.
Figure 17. Ranking of CGO and other algorithms on CEC2020-RW problems.
Figure 18. Configuration space for robot path planning.
Figure 19. Obstacle threat penalty function.
Figure 20. Results on robot path planning.
Figure 21. Evolution curve of CGO and various algorithms in robot path planning.
Figure 22. Equivalent circuit diagrams for photovoltaic cells.
Figure 23. Comparisons between measured data and experimental data attained by CGO for SDM, DDM, and PVMM.
Table 1. The main parameters of algorithms involved.
| Algorithm | Abbr. | Parameter Settings | Reference | Year |
| --- | --- | --- | --- | --- |
| Crown growth optimizer | CGO | z = 0.8, r = 0.5, α = 0.2, Dis = 16 | - | - |
| Slime Mould Algorithm | SMA | z = 0.03 | [19] | 2020 |
| Black-winged kite Algorithm | BKA | p = 0.9 | [20] | 2024 |
| Dung beetle optimizer | DBO | P_percent = 0.2 | [21] | 2022 |
| Gray Wolf Optimization | GWO | a linearly decreased from 2 to 0 | [11] | 2014 |
| Whale Optimization Algorithm | WOA | a = 20, b = 1 | [12] | 2016 |
| Enhanced Whale Optimization Algorithm | EWOA | a = 20, b = 1, P_rate = 20, κ = 1.5N | [22] | 2022 |
| Harris Hawk Optimization | HHO | E0 ∈ (−1, 1) | [13] | 2019 |
| Multi-verse Optimization | MVO | WEP_max = 1, WEP_min = 0.2 | [6] | 2016 |
| African Vultures Optimization Algorithm | AVOA | P1 = 0.6, P2 = 0.4, P3 = 0.6 | [23] | 2021 |
Table 2. Results of CGO and each algorithm on CEC2017 10-dimension benchmark function.
Dim = 10 | Metric | SMA | BKA | DBO | MVO | HHO | GWO | WOA | EWOA | AVOA | CGO
F1mean8266.684739.0532,730,4805526.672260,099.912,115,651159,8861878.2783648.82909.6397
std4994.144575.79712,100,9714449.135130,855.149,130,866257,989.92256.0883555.9491312.657
F2mean200.0004203.42924169.617239.4137418.63182083,0776526.694200.0001200.0002200
std0.00024414.615249347.37250.4729612.91628,870,4398233.8866.25  ×   10 5 7.74  ×   10 5 0.000556
F3mean300.0005300.0394300.0066300.0105300.8321032.75510.9382300300300
std0.0003560.0407470.0297370.0069430.3604341211.673216.02535.05  ×   10 14 2.97  ×   10 11 0.05145
F4mean418.1226400.2291417.4178402.7115404.5922410.82429.8399404.5777410.8357402.8124
std27.027170.36812837.601321.0656772.93950316.1020639.5412614.6534719.875032.031058
F5mean514.587523.1837525.8744512.3619536.3027509.9124543.79520.1479532.3157505.8702
std5.53587710.8291411.948354.33212510.895224.14747124.103356.91495611.438433.670722
F6mean600.0569613.2113602.0653600.3401625.0736600.2764627.1772600.191607.7833600
std0.0264736.2074282.8521610.5450888.6232330.50563414.270010.4137896.8749160.376712
F7mean721.1106737.097727.8944725.8924782.9368724.9991781.0224736.7901758.9543717.5504
std3.902438.06102410.256399.79468420.420179.10437318.5113610.7314518.017173.091354
F8mean814.0083816.1218823.0227814.8358829.6568811.411840.6548818.4565828.8537808.9546
std6.9100577.1347129.8046455.875535.6779784.19130618.434976.280019.62484.041537
F9mean900.0006996.9719900.715900.07371291.341901.05551386.16901.7861044.526900
std0.00024876.976731.1780.222166252.91471.836854377.14813.667589152.28270.028311
F10mean1634.181646.3931731.5041558.0412084.861528.8982026.7461635.0071774.9731338.091
std217.0402171.708278.0634265.2521355.2173236.4776288.5847214.847274.8445196.0357
F11mean1109.5891119.7241177.4221112.0281139.3481115.9491191.3631112.1441125.331110.99
std5.58354114.0067293.590597.12344322.777910.9882577.749979.05785212.703869.731681
F12mean40,815.294677.2561,198,192184,990.1666,948.9345,046.94118,67714,256.83113,8918223.305
std22103.32661.3892,411,781157,677.4588,860.3602,592.44,532,16214,667.9159,196.44117.475
F13mean9944.6271708.3868170.5389950.98415,744.899069.67514,967.9711,504.9110,669.886962.14
std12,150.08219.67478328.70410,701.3210,984.254342.8111,770.137273.42610,316.412133.539
F14mean1432.8421461.3271469.3641429.2061506.332804.2831719.291450.3491547.1352542.738
std12.8323528.5292645.934356.38377624.135531788.29810.558637.8395153.66751008.174
F15mean1535.6431568.8473777.7961524.9631874.5663007.1932981.1741518.9392655.7812120.204
std40.268945.880326889.9816.95419285.09921745.551675.84711.871161221.481547.3905
F16mean1666.1141668.6521707.1591734.6281886.3681731.0931803.731683.1511735.7291605.374
std74.8146364.842392.6201124.6254154.7654126.6304103.909185.00248100.18185.03682
F17mean1743.3351742.3011739.0921776.2981770.1741745.3331780.8431741.611762.1411719.197
std33.4769515.2216324.491659.9460932.9226628.4431534.9623137.9076936.6152537.18651
F18mean31,255.521940.02116,686.2215,806.8215,938.0327,407.5522,899.764475.13718,418.815,017.9
std15,015.9991.4573413,836.7510,797.910,894.612,827.7510,053.853535.9813,621.8510,590.26
F19mean2540.2521946.8785288.0851911.5945556.475072.62522,607.982016.2136279.0392412.285
std2390.42535.705059458.3323.2767675577.4894856.77818,445.95155.50324728.916693.1796
F20mean2019.5612044.2322036.462057.3762122.3542054.6322151.1842027.0612063.0232023.319
std8.70557817.7378231.0775654.7110660.2383840.9933365.7236917.8331841.529728.311478
F21mean2303.5412207.1662231.6292275.522310.1862305.6462287.4612200.6992229.0672266.624
std43.8428924.5901550.4609856.7448156.4260725.0346165.642231.30523855.5801157.47365
F22mean2302.1042296.032303.8412300.0792310.8782305.8142310.6542296.3422303.0412300.611
std0.79977221.806773.08008417.6715515.965316.79746921.8266918.5681417.343430.386774
F23mean2619.0182618.1652630.7152615.2022659.1442614.5492648.682625.4792632.9962611.601
std6.4362869.35439210.435819.50934922.876968.73588422.0241510.5547614.502734.589338
F24mean2755.212736.462707.9232695.7482774.4382719.562765.7952591.5612747.1282731.876
std8.75114956.80963103.1558100.483298.0225470.4474365.20339121.709886.0422713.06646
F25mean2924.3532926.2552934.0622907.5752932.3972937.1322946.2532926.5722929.3082921.637
std27.1024132.5216826.5604219.1433443.2985614.7214719.2546125.8830725.2981923.62066
F26mean2954.3212948.1823018.5232900.0813378.3722981.2663235.9842898.3253068.9382860
std32.03815125.041273.57010.016775452.4665208.603472.579436.49603275.987596.60918
F27mean3090.323094.7473102.1753097.0073150.9623093.933131.4223102.243098.9983092.865
std0.9388668.3305915.5547920.0703242.946665.15388135.2024217.036818.8608382.232284
F28mean3184.2143278.263249.1473242.5723310.6693355.783272.3933215.8683284.4083322.626
std122.8897180.961148.987180.4213156.602793.39403107.6636134.6152130.4064126.0767
F29mean3174.8963182.0793207.1213185.5653311.763180.4593323.4523206.8783235.5473159.756
std41.202832.9019759.1262262.16986108.283252.1857693.8079745.7787852.8768622.40437
F30mean48,147.09293,301.7631,677.1408,604.1737,280590,766377,696.8274,603.598,507.87193,485.8
std181,844.3464,942.7506,220.3561,209.6850,133.8810,102470,485.7418,395.6251,096.9379,787.2
Friedmanranking3.94.2333336.34.1666678.56.0666678.9333333.76.52.5333
Bold values mark the best result for each benchmark function in the row.
Table 3. Results of CGO and each algorithm on CEC2017 30-dimension benchmark function.
Dim = 30 | Metric | SMA | BKA | DBO | MVO | HHO | GWO | WOA | EWOA | AVOA | CGO
F1mean5377.254.36  ×   10 8 3,853,111148,587.712,613,2391.01E+0973,686,8234803.6455200.8811,300
std6570.6276.82  ×   10 8 7,561,11936,216.713,034,1379.72  ×   10 8 50,454,4123485.7194824.25249,898
F2mean3.47  ×   10 9 1.3  ×   10 22 2.41  ×   10 29 6.27  ×   10 10 1.87  ×   10 17 1.5  ×   10 28 1.15  ×   10 28 6.3  ×   10 10 5.91  ×   10 11 368,316
std1.43  ×   10 10 5.81  ×   10 22 1.08  ×   10 30 1.74  ×   10 11 4.41 × 10175.41  ×   10 28 5.12  ×   10 28 1.44  ×   10 11 1.36  ×   10 12 8.97  ×   10 13
F3mean319.44334898.24959,378.75309.52812,424.3234,175.9231,964.322,734.76124.8446238.848
std18.605162764.83121,400.94.3550092451.6819293.06860,932.776121.4093018.5015286.609
F4mean492.5348508.5619526.8319490.7313526.6213548.6142591.6376490.0949497.5229478.4643
std7.80833333.5786229.634097.44050442.7923536.2284447.6459117.8298421.8751329.7184
F5mean604.449700.1549677.9418589.9032737.0991590.5285792.6595650.4369705.993581.8796
std27.3369236.9695244.5732217.2807127.7285719.6856570.128639.4925345.1595623.11887
F6mean601.7082652.4253625.5884613.5183660.4253605.0337667.7606616.4232641.1062601.9366
std0.683677.0424878.2016688.5909656.7926041.7385168.8631378.8052729.2248865.243016
F7mean826.35491136.444924.626847.58871241.857839.59951215.028911.19091123.163772.7977
std19.6701454.9043387.4736323.2040664.5411734.5335170.3062545.9909187.0211710.72138
F8mean890.7417954.7519983.5396897.947958.2976870.96671009.694932.3289958.1829872.1521
std28.4762122.1978945.3345524.0280520.9876919.3873739.3807732.418929.6578719.30533
F9mean2998.8664394.9044815.6752751.8747043.8771510.8937922.9273057.8894989.181901.1183
std1769.735529.16012046.5052234.605626.1384408.62333329.587999.9277735.80328.957735
F10mean4301.4914707.4365226.5174302.915529.8924345.0956397.4584689.7255483.4553394.958
std720.7308490.3054862.3125628.635595.72651146.752606.9268697.3967649.8394451.398
F11mean1237.7951294.0341462.8951323.8251257.0581590.6722463.9611213.4761241.4681173.995
std53.7256464.36796125.347764.8833434.03296562.5162946.976331.5063144.1068124.58917
F12mean2,778,6635,766,59425,790,2128,607,55410,984,07736,057,5531.18  ×   10 8 176,4332,509,225614,565
std1,234,3573,553,90137,002,7316,609,9237,757,40635,120,20355,559,81780,536.632,278,251799,146.2
F13mean38,098.18105,459.54,309,217149,825.8333,449.819,852,758222,865.715,230.6657,366.9915,203.03
std25,643.785,4149,169,298114,524166,746.977,341,414141,262.615,925.6527,808.2912,944.46
F14mean62,781.133426.03652,106.3311,873.698,680.75169,391.2800,536.329,645.9949,452.258200.661
std44,319.467756.40564,522.267631.825102,060.8271,020.1878,213.723,803.5454,028.8921,559.78
F15mean29,344.4213,734.6691,986.9963,872.1658,002.99255,257.6101,402.89565.82928,754.014382.162
std13,411.526775.95491,069.9437,859.1235,084.53557,425.758,070.038020.2615,532.171422.259
F16mean2452.3542678.5422829.2282402.2653025.7882381.1023746.3782732.4422984.3772343.871
std328.0747276.737362.5582272.9544351.328281.557460.0131247.8367299.5409298.8409
F17mean2130.3742155.6972440.6732017.6282558.0581948.5422620.2252212.6322346.5371962.032
std213.7428216.4221191.0118136.448237.0936122.3204284.0307169.861276.5934175.0651
F18mean617,76647,971.071,338,814207,7221,210,834864,938.15,645,910226,536.3696,127.5101,352.9
std503,903.627,499.952,621,422118,851.81,280,763943,209.74,247,208211,816.6476,743.984,960.42
F19mean28,885.1223,485.62403,727.1777,344.4332,815.2207,75616,557,6468966.78917,006.415016.343
std20,718.8218,747.19766,456.2624,438.3200,943.16,559,1315,780,1438030.88915,618.751513.306
F20mean2470.8182444.7162635.9592471.4662670.4392315.1972836.1652494.2822645.8212218.891
std215.0865109.7244194.7875166.0301227.8806126.5749209.7384182.0839203.3636109.6985
F21mean2410.6062495.2582499.8372388.5662537.9442372.2152560.4712417.5442492.8492353.343
std28.9640728.6816652.7428318.3017943.8700515.7539353.46830.4635854.430047.248308
F22mean5317.7145197.9494758.3523573.8635804.974419.6966243.8033210.5445493.7352306.643
std1467.2111786.6551913.3241655.9952285.1881870.1042333.7461643.1072453.881183.754
F23mean2750.1562986.6812879.682751.4443082.0552758.0813032.2012784.7942939.1982693.781
std28.6382282.9363944.8991321.55668100.877956.9573893.9826635.37427102.125617.79106
F24mean2926.4883111.6643036.0582898.5943364.7762918.9963204.1512946.5323098.2052861.949
std26.4090291.4019365.3619227.61287113.528454.5466887.7849832.8387479.76499.676014
F25mean2887.4252937.8972923.92888.4982916.5462959.3292986.0762896.9322907.9362920.704
std1.2746933.6978924.450556.2472917.8814644.5067626.1960819.1320620.1123623.39826
F26mean4793.4896782.3876062.8394411.466910.8144456.2247639.5785262.0136236.183750.938
std385.79111384.81819.6153413.45541334.103294.79871054.782904.59741459.199881.4943
F27mean3217.7023315.13254.2193220.2623337.3563231.9873379.5913237.4753270.7773219.856
std11.6392572.5175918.3782712.5377662.7237410.8127267.1787129.8611534.9211611.68746
F28mean3237.1843275.8763330.8113242.1013280.4723364.4043359.5673225.6693220.663206.159
std22.4463642.0244767.5971534.8058812.6447959.9067142.475738.6719116.4259417.06787
F29mean3811.9434246.3464169.4913738.0224395.1243748.1754996.023842.7634213.63579.192
std147.6799254.7624220.7401128.9396345.528190.1221422.5032255.025350.5144136.1354
F30mean19,731.11673,527.91,128,0292,617,7551,793,3507,804,49121,177,5619601.59294,281.389421.892
std6175.026552,366.11,516,1031,370,9421,038,3256,531,36421,178,5953529.08869,595.695107.745
Friedmanranking3.55.8666677.1666673.8666677.9666675.79.63.9666675.7333331.633333
Bold values mark the best result for each benchmark function in the row.
Table 4. Results of CGO and each algorithm on CEC2017 50-dimension benchmark function.
Dim = 50 | Metric | SMA | BKA | DBO | MVO | HHO | GWO | WOA | EWOA | AVOA | CGO
F1mean60,715.586.19  ×   10 9 1.55  ×   10 8 1,423,87576,313,3544.71  ×   10 9 6.06  ×   10 8 4201.7596876.674440,031.7
std21,273.64.5  ×   10 9 1.17  ×   10 8 246,778.211,912,6302.27  ×   10 9 2.56  ×   10 8 4783.1036592.1333342.474
F2mean7.83  ×   10 24 4.48  ×   10 43 2.66  ×   10 59 6.83  ×   10 23 2.54  ×   10 44 2.64  ×   10 50 5.14  ×   10 65 1.04  ×   10 30 2.26  ×   10 27 2.48  ×   10 18
std3.5  ×   10 25 9.68  ×   10 43 1.16  ×   10 60 1.55  ×   10 24 1.1  ×   10 25 8.31  ×   10 50 2.27  ×   10 66 4.4  ×   10 30 6.83  ×   10 27 1.28  ×   10 32
F3mean4521.64721,979.67228,063.71409.19655289.2188,639.31168,728.8104,801.758,462.2260,065.37
std1794.4055372.98749,887.45297.055112,257.1111,963.7147,580.6423,139.9713,891.3623,000
F4mean576.9389949.9207805.75563.2629712.4008891.85741030.664515.0034567.9258519.6674
std32.05013307.121247.045142.0651570.7315136.898128.459249.4353752.325144.94962
F5mean709.1999863.8638953.4783707.6505884.7758698.5931955.3329800.2986847.3773715.3925
std37.2821438.8845993.9145734.1192433.7568870.2915271.5280170.5650437.3142419.22539
F6mean613.947661.9436648.0411629.4978673.3391612.955683.6265634.0399647.9081609.8787
std7.6497364.12670810.1279711.304145.1767034.5416379.1605811.426087.8235940.457
F7mean995.10851583.8751159.8881002.981785.5071024.6051740.5821703.7781514.404839.422
std49.3850469.66849136.895353.2272781.7983358.98407117.9619120.573791.0508775.66296
F8mean1019.6891146.71223.757997.94371181.7991006.0821268.2721101.4151151.1181017.22
std47.5397433.5878688.2305648.8768725.9453131.5060374.8427645.7408462.8261125.88947
F9mean11,285.9813,715.3716,576.9513,981.2923,487.357488.5626,733.219861.73112,909.871230.147
std2753.4151164.7465415.4229078.1043516.2784480.5327394.6892935.5111597.67971
F10mean7147.1458160.7229364.4176914.0819003.1137452.65910,826.27608.958090.3785389.16
std896.7854831.22711147.531704.9839945.0152031.7461260.341999.3778811.2726794.8772
F11mean1343.2041662.6052009.7431470.061521.0013361.8282297.4951332.3741327.3861262.033
std72.79716265.8163380.253882.5848694.791131549.037208.912248.8787461.43538105.9204
F12mean12,613,59877,774,0452.24  ×   10 8 54,019,52386,223,8222.89  ×   10 8 5.62  ×   10 8 2,457,19913,802,8425,927,494
std6,324,0841.15  ×   10 8 2.63  ×   10 8 29,881,48447,319,1802.98  ×   10 8 2.87  ×   10 8 1,502,6306,891,5421,791,801
F13mean42,480.471,388,3229,526,966185,928.92,480,8121.33  ×   10 8 7,959,2019581.04886,155.398253.086
std9436.6862,989,11611,614,064100,051.81,650,9541.2  ×   10 8 8,311,5628363.52855,472.566486.681
F14mean299,607.919,124.7982,315.2104,037.7763,382947,260.21,808,266106,417287,420.349,935.36
std154,164.626,630.75109,150793,808.35499,233.7998,844.41,504,56981,496.26156,134.756,338.87
F15mean24,460.2860,828.63720,617.298,359.37458,727.84,521,998825,8007615.57945,620.4613,639.9
std10,860.646,892.592,585,49859,608.07183,02115,978,0671,648,2186523.72817,928.285248.987
F16mean3317.8543624.9334259.1473105.1534779.9483143.25326.3313245.9163887.7552775.846
std399.8752454.4592709.0213389.9807712.4875546.5186876.6649401.7988369.6291400.5497
F17mean3166.3593110.2814067.173050.2493689.6032873.1244066.1773038.6263528.0142874.947
std338.6763252.3009413.9516295.5947321.8392497.1003545.2768405.8827386.374320.2662
F18mean2,176,417319,607.76,555,1711,263,2695,844,8593,935,31813,116,738676,520.51,814,154493,333.4
std1,116,703221,443.89,386,293877,055.74,162,4573,931,3638,841,144563,721.41,594,3251,106,656
F19mean16,295.18536,067.7254,77854,209,524993,633.63,150,6474,860,97012,104.6727,508.8616,543.29
std16,280.52676,035.15,107,5642,601,440507,672.33,679,5014,553,91611,980.417,573.317524.625
F20mean3105.93019.6683443.123068.483537.8312901.13,7273100.0093493.7482631.8
std241.7975216.8104337.3048335.3027294.5325299.7464340.0154351.5468369.5631265.2259
F21mean2525.9592721.1312744.4632497.8332850.0962495.9732961.0232568.342722.4532465.665
std46.6229765.6328783.3005749.8878190.4666925.4167486.7169253.7239570.0015419.36694
F22mean9029.4919842.28610,611.638505.02111,258.479089.93912,279.669225.20210,036.176994.803
std1039.094695.92781343.249862.7647863.9258964.67621551.255991.3282936.2804968.9891
F23mean2965.5713512.1633283.9812921.3413721.4732936.6853684.5483061.8593333.7142931.938
std47.53908143.57110.917938.99199153.893442.68835164.287682.13251134.963430.98419
F24mean3128.7383648.6233373.1643062.2554002.2643082.2333699.7733248.5053553.7213090.03
std51.5166132.946486.9967146.96662167.572940.90279163.973973.9347591.9876721.85485
F25mean3050.2383458.0813204.673039.6763188.1523442.4783382.0833052.5383100.4153077.409
std37.41275161.9894131.542729.0365938.73183167.2651103.692734.17424.6711322.7515
F26mean5366.7310,471.819842.8685951.70610,884.816079.83513,043.286946.2129670.9338098.113
std1317.3692811.7861274.646392.35471074.937823.48941482.289726.31591934.586222
F27mean3394.0573975.3463621.033351.0964140.8143524.8274351.4263508.1793737.2233432.68
std71.6175252.2562127.59459.3161219.583272.02516433.7488136.1074158.408363.80357
F28mean3318.2633904.9225162.0913286.9583489.3873935.5724026.6313306.9933350.133355.777
std27.92524278.722418.65125.403373.30007236.1407293.298721.6134938.4343822.29936
F29mean4315.9255867.8015327.1544,5655853.1514405.6157572.4464612.1835019.8324103.622
std428.7486569.1849415.6254431.6857465.5058235.6915949.6659363.3273373.5984219.1384
F30mean2,049,84019,628,60023,184,65257,211,50630,463,80199,569,8061.74  ×   10 8 1,087,2814,168,4141,080,447
std635,292.46,331,93519,630,61218,410,2637,700,49028,094,96253,107,108322,302.31,272,446169,457
Friedmanranking3.4666676.3333337.83.3666677.7333335.4333339.5333333.75.42.233333
Bold values mark the best result for each benchmark function in the row.
Table 5. The Friedman test ranking of CGO and other algorithms on CEC2017.
| Algorithm | 10-Dim | 30-Dim | 50-Dim |
| --- | --- | --- | --- |
| SMA | 3.9000 | 3.5000 | 3.4667 |
| BKA | 4.2333 | 5.8667 | 6.3333 |
| DBO | 6.3000 | 7.1667 | 7.8000 |
| MVO | 4.1667 | 3.8667 | 3.3667 |
| HHO | 8.5667 | 7.9667 | 7.7333 |
| GWO | 6.1333 | 5.7000 | 5.4333 |
| WOA | 8.9667 | 9.6000 | 9.5333 |
| EWOA | 3.7000 | 3.9667 | 3.7000 |
| AVOA | 6.5000 | 5.7333 | 5.4000 |
| CGO | 2.5333 | 1.6333 | 2.2333 |
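The average ranks above follow the usual Friedman procedure: rank the algorithms on each benchmark function by mean error (lower is better, ties averaged), then average the ranks over all functions. A minimal sketch with an assumed data layout (rows = benchmark functions, columns = algorithms):

```python
import numpy as np

def average_ranks(scores):
    """Friedman average ranks: scores is (n_functions, n_algorithms),
    lower is better; tied values receive averaged ranks."""
    scores = np.asarray(scores, float)
    ranks = np.empty_like(scores)
    for i, row in enumerate(scores):
        r = np.empty(len(row))
        r[row.argsort(kind="stable")] = np.arange(1, len(row) + 1)
        for v in np.unique(row):          # average the ranks of tied entries
            r[row == v] = r[row == v].mean()
        ranks[i] = r
    return ranks.mean(axis=0)
```

An algorithm that is best on every function would score an average rank of 1.0; CGO's 1.6333 on the 30-dimension suite therefore means it placed first or second on nearly every function.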
Table 6. The Wilcoxon test result of CGO and other algorithms on CEC2017.
Table 6. The Wilcoxon test result of CGO and other algorithms on CEC2017.
Comparison10-Dimensional Functions30-Dimensional Functions50-Dimensional FunctionsAll Functions
CGO vs. SMA1.1748  ×   10 2 3.1123  ×   10 5 1.7988  ×   10 5 2.43  ×   10 11
CGO vs. BKA1.5886  ×   10 1 8.9443  ×   10 4 6.1564  ×   10 4 2.63  ×   10 7
CGO vs. DBO3.7243  ×   10 5 1.7344  ×   10 6 1.7344  ×   10 6 5.98  ×   10 16
CGO vs. MVO2.9575  ×   10 3 1.7988  ×   10 5 2.3704  ×   10 5 8.48  ×   10 12
CGO vs. HHO2.1266  ×   10 6 1.3601  ×   10 5 1.2381  ×   10 5 1.46  ×   10 14
CGO vs. GWO5.7517  ×   10 6 1.7344  ×   10 6 1.7344  ×   10 6 2.88  ×   10 16
CGO vs. WOA1.7344  ×   10 6 1.7344  ×   10 6 1.7344  ×   10 6 1.74  ×   10 16
CGO vs. EWOA7.4438  ×   10 2 1.1138  ×   10 3 1.4936  ×   10 5 3.18  ×   10 9
CGO vs. AVOA1.9148  ×   10 4 1.6394  ×   10 5 1.1265  ×   10 5 5.73  ×   10 13
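The entries in Table 6 are two-sided p-values from paired Wilcoxon signed-rank tests over per-function results. A self-contained sketch using the large-sample normal approximation (SciPy's `scipy.stats.wilcoxon` is the usual tool; tie handling is simplified here):

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_p(x, y):
    """Two-sided p-value of the paired Wilcoxon signed-rank test,
    normal approximation; zero differences dropped, ties not averaged."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0.0]                                  # drop zero differences
    n = d.size
    ranks = np.empty(n)
    ranks[np.argsort(np.abs(d))] = np.arange(1, n + 1)   # ranks of |d|
    w = ranks[d > 0].sum()                           # sum of positive ranks
    mu = n * (n + 1) / 4.0
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
```

With 30 functions per dimension the normal approximation is adequate; for very small samples an exact enumeration (as SciPy provides) is preferable.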
Table 7. Effects of different pruning time intervals on optimization results.
| t_cut | T/2N | T/N | 3T/2N | 2T/N |
| --- | --- | --- | --- | --- |
| F1 | 4172.4349 | 5183.418 | 23,143 | 3.64 × 10^8 |
| F3 | 300.09126 | 305.3465 | 761.5945 | 4837.87 |
| F5 | 508.914549 | 506.915 | 513.1768 | 548.3812 |
| F6 | 600.000045 | 600.002 | 600.1659 | 614.1876 |
| F10 | 1510.25156 | 1404.88 | 1722.787 | 2485.58 |
| F22 | 2297.31009 | 2291.28 | 2305.585 | 2340.852 |
| F24 | 2729.52829 | 2727.559 | 2673.21 | 2751.753 |
| F26 | 2929.77015 | 2937.075 | 2926.37 | 3035.957 |
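Table 7 sweeps the pruning interval t_cut from T/2N to 2T/N. How such an interval gates a prune step inside a generic population loop can be sketched as follows (a hypothetical illustration: the loop body, prune rule, and bounds are our own stand-ins, not CGO's actual update equations):

```python
import random

def optimize(f, dim, n=30, t_max=1000, t_cut=None, lb=-100.0, ub=100.0):
    """Generic population loop; pruning fires every t_cut iterations and
    here simply re-seeds the worst half of the population around the best."""
    t_cut = t_cut or max(1, t_max // (2 * n))     # Table 7's first setting, T/2N
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    for t in range(1, t_max + 1):
        # ... growing / sprouting position updates would go here ...
        if t % t_cut == 0:                        # pruning trigger
            pop.sort(key=f)                       # best first
            best = pop[0]
            for i in range(n // 2, n):            # replace the worst half
                pop[i] = [min(ub, max(lb, x + random.gauss(0, 1))) for x in best]
    return min(pop, key=f)
```

A small t_cut prunes often (stronger exploitation), while a large one lets weak branches survive longer (more exploration), which is consistent with the degradation seen at 2T/N in Table 7.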
Table 8. The average solving time of CGO and nine other algorithms on thirty 10-D CEC2017 functions.
| Algorithm | CGO | SMA | BKA | DBO | HHO | MVO | GWO | WOA | EWOA | AVOA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Time (s) | 0.11943 | 0.37954 | 0.322923 | 0.273771 | 0.025686 | 0.268479 | 0.27726 | 0.268479 | 0.436167 | 0.300184 |
| Rank | 2 | 9 | 8 | 5 | 1 | 3 | 6 | 4 | 10 | 7 |
Table 9. Names and parameters of 20 typical problems in CEC2020-RW.
| No. | Name | D | g | h | f* |
| --- | --- | --- | --- | --- | --- |
| *Industrial Chemical Processes* | | | | | |
| RW01 | Heat exchanger network design (case 2) | 11 | 0 | 9 | 7.05 × 10^3 |
| RW02 | Blending–pooling–separation problem | 28 | 0 | 32 | 1.86 × 10^0 |
| RW03 | Propane, isobutane, n-butane nonsharp separation | 48 | 0 | 38 | 2.12 × 10^0 |
| *Process Synthesis and Design Problems* | | | | | |
| RW04 | Process synthesis and design problem | 3 | 1 | 1 | 2.56 × 10^0 |
| RW05 | Multi-product batch plant | 10 | 10 | 0 | 5.36 × 10^4 |
| *Mechanical Engineering Problems* | | | | | |
| RW06 | Optimal design of industrial refrigeration system | 14 | 15 | 0 | 3.22 × 10^−2 |
| RW07 | Pressure vessel design | 4 | 4 | 0 | 5.89 × 10^3 |
| RW08 | Welded beam design | 4 | 5 | 0 | 1.67 × 10^0 |
| RW09 | Three-bar truss design problem | 2 | 3 | 0 | 2.64 × 10^2 |
| RW10 | Multiple-disk clutch brake design problem | 5 | 7 | 0 | 2.35 × 10^−1 |
| RW11 | Step–cone pulley problem | 5 | 8 | 3 | 1.61 × 10^1 |
| RW12 | 10-bar truss design | 10 | 3 | 0 | 5.24 × 10^2 |
| RW13 | Rolling element bearing | 10 | 9 | 0 | 1.46 × 10^4 |
| RW14 | Topology optimization | 30 | 30 | 0 | 2.64 × 10^0 |
| *Power System Problems* | | | | | |
| RW15 | Optimal sizing of single-phase distributed generation with reactive power support for phase balancing at main transformer/grid | 118 | 0 | 108 | 0.00 |
| RW16 | Wind farm layout problem | 30 | 91 | 0 | −6.26 × 10^3 |
| *Power Electronic Problems* | | | | | |
| RW17 | SOPWM for 3-level inverters | 25 | 24 | 1 | 3.80 × 10^−2 |
| RW18 | SOPWM for 5-level inverters | 25 | 24 | 1 | 2.12 × 10^−2 |
| *Livestock Feed Ration Optimization* | | | | | |
| RW19 | Beef cattle (case 1) | 59 | 14 | 1 | 4.55 × 10^3 |
| RW20 | Dairy cattle (case 1) | 64 | 0 | 6 | 6.70 × 10^3 |
Table 10. Results of CGO and each algorithm on CEC2020-RW problems.
Problem | SMA | BKA | DBO | MVO | HHO | GWO | WOA | EWOA | AVOA | CGO
RW011.14  ×   10 17 907,543.91.14  ×   10 15 2.73  ×   10 11 9.42  ×   10 15 8.76  ×   10 11 1.86  ×   10 13 7049.0371.03  ×   10 9 7049.037
RW021.6085743.0357913.311112.6301067.3697632.31884918.061651.4424493.1530621.404981
RW033.1622785.1843992.5254681.44333412.987282.8308723.7883461.0729771.4342250.97251
RW042.557812.557812.557812.557892.5587272.5582022.6647562.557812.557812.55781
RW0553,949.0458,607.1465,795.3853,656.0661,597.4853676.8995,555.3359,143.3558,477.0353,639.02
RW060.03230.035375187,240.2187,253.94.0157110.0388020.118460.0471390.0322210.032213
RW075743.0285743.0195743.0286358.4066499.7695743.0486160.4456015.725743.3635743.019
RW081.6702591.671151.6702151.6751811.705541.6710551.8510131.6702151.6856851.670215
RW09266.8652263.8523263.8523263.8523263.8524263.8524264.1553263.8523263.8523263.8523
RW100.2352420.2352420.2352420.2352430.2352420.2352570.2352420.2352420.2352420.235242
RW1116.0439316.0450316.0439216.0452816.0995516.045616.834816.0439216.2850816.04392
RW12522.4317522.4281528.4657522.4911542.3062522.4848584.5383528.5917528.8238522.4029
RW1314,607.8114,607.8114,607.8114,618.6514,607.8114,622.8614,625.2714,607.8114,607.8114,607.81
RW142.6426612.6393462.6393462.6861482.6393462.70682.6393462.6393462.6393462.639346
RW151.7665173.4219993.546031.8259369.1421732.8820129.1312132.2820772.9118810.419845
RW16−6052.85−5748.61−5929.02−6081.28−5285.89−5942.94−5477.81−5656.02−5605.64−6165.65
RW170.8868510.8867151.282310.3375350.88671.5260310.8868050.4172910.8867430.084488
RW180.2776220.2770990.277099445.4460.28458412.4362125,920.525,920.50.2770980.378623
RW1912,312.03762,643.117,399.383764776204,431.540,314.27827,917.54545.974802,8384504.446
RW205973.7887,172,1944354.316293,383.31,678,9616170.1085163.7995033.4176394.7373981.429
Ranking | 4.6667 | 4.5500 | 5.0000 | 6.2500 | 7.6667 | 6.3500 | 8.3500 | 4.6667 | 6.0000 | 1.4000
Table 11. The Wilcoxon test result of CGO and other algorithms on CEC2020-RW.
| Comparison | Wilcoxon Test |
| --- | --- |
| CGO vs. SMA | 0.000738 |
| CGO vs. BKA | 0.001160 |
| CGO vs. DBO | 0.000854 |
| CGO vs. MVO | 8.86 × 10^−5 |
| CGO vs. HHO | 0.000713 |
| CGO vs. GWO | 8.86 × 10^−5 |
| CGO vs. WOA | 0.000196 |
| CGO vs. EWOA | 0.000488 |
| CGO vs. AVOA | 0.000935 |
Table 12. Center coordinates and classes of obstacles.
| Center | Radius | Center | Radius |
| --- | --- | --- | --- |
| (1.5, 4.5) | 1.5 | (8.0, 6.0) | 0.8 |
| (4.0, 3.0) | 1.0 | (5.0, 6.0) | 0.8 |
| (1.2, 1.5) | 0.8 | (7.0, 1.0) | 0.8 |
| (7.0, 4.0) | 0.8 | (6.0, 2.5) | 0.5 |
| (7.0, 8.0) | 0.8 | | |
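Given circular obstacles like those in Table 12, a candidate path is commonly scored as its polyline length plus a threat penalty for waypoints that intrude into any circle. A sketch of that idea (the penalty weight and hinge shape here are our assumptions; the paper's actual threat function appears in Figure 19):

```python
import numpy as np

# Circular obstacles from Table 12: (center_x, center_y, radius)
OBSTACLES = [(1.5, 4.5, 1.5), (4.0, 3.0, 1.0), (1.2, 1.5, 0.8),
             (7.0, 4.0, 0.8), (7.0, 8.0, 0.8), (8.0, 6.0, 0.8),
             (5.0, 6.0, 0.8), (7.0, 1.0, 0.8), (6.0, 2.5, 0.5)]

def path_cost(waypoints, obstacles=OBSTACLES, penalty=100.0):
    """Polyline length plus a hinge penalty on the intrusion depth of each
    waypoint into each obstacle (deeper intrusion -> larger penalty)."""
    pts = np.asarray(waypoints, float)
    length = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    violation = 0.0
    for cx, cy, r in obstacles:
        d = np.linalg.norm(pts - np.array([cx, cy]), axis=1)
        violation += float(np.sum(np.maximum(0.0, r - d)))
    return length + penalty * violation
```

Checking only waypoints keeps the sketch short; a practical planner would also sample along each segment so that a segment cannot cut through an obstacle between two collision-free waypoints.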
Table 13. The results of CGO and other algorithms in robot path planning.
| Algorithm | Best | Mean | Worst | Std |
| --- | --- | --- | --- | --- |
| SMA | 14.6309 | 15.4750 | 16.2901 | 0.8502 |
| BKA | 14.6170 | 15.3656 | 16.9324 | 0.9494 |
| DBO | 16.6608 | 19.1732 | 25.2823 | 3.0581 |
| MVO | 15.5437 | 15.8915 | 16.3173 | 0.5545 |
| HHO | 15.0447 | 17.1192 | 19.5962 | 1.2120 |
| GWO | 16.2044 | 16.8192 | 17.3244 | 0.7889 |
| WOA | 14.7383 | 16.2085 | 16.9324 | 0.7567 |
| EWOA | 14.6228 | 15.7958 | 16.6786 | 0.8767 |
| AVOA | 14.6733 | 15.9624 | 17.9536 | 1.1483 |
| CGO | 14.6166 | 15.3732 | 16.2720 | 0.6868 |
Table 14. Bounds of different parameters in three PV models.
| Parameter | SDM & DDM Lb | SDM & DDM Ub | PVMM Lb | PVMM Ub |
| --- | --- | --- | --- | --- |
| I_ph (A) | 0 | 1 | 0 | 2 |
| I_sd1, I_sd2, I_sd (μA) | 0 | 1 | 0 | 50 |
| n_1, n_2, n | 1 | 2 | 1 | 50 |
| R_sh (Ω) | 0 | 100 | 0 | 2000 |
| R_s (Ω) | 0 | 0.5 | 0 | 2 |
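The parameters bounded in Table 14 enter the standard single-diode model of Figure 22, I = I_ph − I_sd·(exp((V + I·R_s)/(n·V_t)) − 1) − (V + I·R_s)/R_sh, and the optimizer minimizes the RMSE between simulated and measured currents. A simple sketch (fixed-point solve of the implicit I–V equation; the default cell temperature and the iteration scheme are our assumptions — extraction codes often use Lambert-W or Newton's method instead):

```python
import numpy as np

K, Q = 1.380649e-23, 1.602176634e-19          # Boltzmann constant, electron charge

def sdm_current(V, Iph, Isd, n, Rs, Rsh, T=306.18):
    """Single-diode model: I = Iph - Isd*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh.
    Implicit in I, so solved by plain fixed-point iteration.
    Note: Isd is in amperes here, while Table 14 bounds it in microamperes."""
    Vt = K * T / Q                            # thermal voltage
    V = np.asarray(V, float)
    I = np.full_like(V, Iph)                  # start from the photocurrent
    for _ in range(200):
        I = Iph - Isd * np.expm1((V + I * Rs) / (n * Vt)) - (V + I * Rs) / Rsh
    return I

def rmse(params, V_meas, I_meas):
    """Objective minimized during parameter extraction."""
    I_sim = sdm_current(np.asarray(V_meas, float), *params)
    return float(np.sqrt(np.mean((I_sim - np.asarray(I_meas, float)) ** 2)))
```

For the DDM, a second diode term with its own I_sd2 and n_2 is added; for the PVMM, the same equation is scaled by the number of series cells.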
Table 15. The results of CGO and each algorithm on PV parameter extraction.
| Problem | SMA | BKA | DBO | MVO | HHO | WOA | GWO | EWOA | AVOA | CGO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SDM | 0.223515 | 0.003210 | 0.011701 | 0.008477 | 0.093667 | 0.031947 | 0.017048 | 0.002217 | 0.012311 | 0.001875 |
| DDM | 0.203317 | 0.003045 | 0.008655 | 0.007241 | 0.077363 | 0.033073 | 0.023785 | 0.002273 | 0.00882 | 0.002718 |
| PVMM | 0.401356 | 0.019336 | 0.064803 | 0.056010 | 0.081265 | 0.080192 | 0.030524 | 0.015645 | 0.022904 | 0.016497 |
| Ranking | 10 | 3 | 5.666667 | 4.666667 | 9 | 8 | 6.333333 | 1.333333 | 5.333333 | 1.666667 |
Liu, C.; Zhang, D.; Li, W. Crown Growth Optimizer: An Efficient Bionic Meta-Heuristic Optimizer and Engineering Applications. Mathematics 2024, 12, 2343. https://doi.org/10.3390/math12152343
