Article

A Flower Pollination Optimization Algorithm Based on Cosine Cross-Generation Differential Evolution

School of Electronics and Communication Engineering, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(2), 606; https://doi.org/10.3390/s23020606
Submission received: 9 November 2022 / Revised: 31 December 2022 / Accepted: 3 January 2023 / Published: 5 January 2023
(This article belongs to the Section Intelligent Sensors)

Abstract:
The flower pollination algorithm (FPA) is a novel heuristic optimization algorithm inspired by the pollination behavior of flowers in nature. However, the global and local search processes of the FPA are sensitive to the search direction and parameters. To solve this issue, an improved flower pollination algorithm based on cosine cross-generation differential evolution (FPA-CCDE) is proposed. The algorithm uses cross-generation differential evolution to guide the local search process toward the optimal solution and sets cosine inertia weights to increase the convergence speed of the search. At the same time, an external archiving mechanism and adaptive parameter adjustment realize the dynamic update of the scaling factor and crossover probability, which enhances population richness and reduces the number of local solutions. A cross-generation roulette wheel selection mechanism is then combined to reduce the probability of falling into a local optimum. Comparing the FPA-CCDE with five state-of-the-art optimization algorithms on benchmark functions, we observe the superiority of the FPA-CCDE in terms of stability and optimization accuracy. Additionally, we further apply the FPA-CCDE to the robot path planning problem. The simulation results demonstrate that the proposed algorithm achieves low cost, high efficiency, and resistance to threat attacks in path planning, and it can be applied to a variety of intelligent scenarios.

1. Introduction

With the advancement and innovation of science and technology, including the Internet of Things and artificial intelligence, a large number of complex high-dimensional variable optimization problems in engineering and industrial applications need to be resolved, including engineering optimization problems [1], energy-saving development and utilization [2,3], and public facility construction [4]. Continuous, linear, and simple problems are typically amenable to traditional optimization algorithms [5,6,7]; yet, for larger combinatorial optimization problems, these algorithms often face great limitations, namely, low efficiency, high cost, and high energy consumption, which are not conducive to engineering applications, and the solution accuracy cannot reach the desired value [8]. As a result, numerous academics began to seek out alternative methods, and heuristic algorithms emerged. Heuristic optimization algorithms are developed on the basis of the characteristics of biological systems and physical theory, such as Darwinian evolution, the swarm behavior of insects or animals such as ants [9] and birds [10], and annealing in physical metallurgy [11]. Heuristic optimization algorithms have high search accuracy, high efficiency, and strong robustness [12], and they can be classified into three categories. The first category has a strong balance between local search and global search, such as artificial bee colony (ABC) [13], social spider optimization (SSO) [14], and so on. The second category has superior global convergence and computational robustness, namely, the wolf pack algorithm (WPA) [15], the dragonfly algorithm (DA) [16], the whale algorithm (WA) [17], etc. The third category has greater population density and searching efficiency, such as particle swarm optimization (PSO) [18] and the genetic algorithm (GA) [19]. Although the aforementioned traditional algorithms perform well in some situations, they may lack the exploration ability to find a better solution space [20,21,22,23].
In view of this, Yang et al. [24,25] proposed the flower pollination algorithm (FPA) on the basis of flowering plants, which has been widely applied in multi-objective optimization. To improve the search capability of the FPA, numerous researchers have addressed the shortcomings of its search strategies. In 2016, Zhou et al. [26] proposed the elite opposition-based flower pollination algorithm (EOFPA), in which the elite opposition learning strategy utilizes the optimal individual information to extend the search range of the algorithm, which helps to enhance the global optimum-seeking ability of the algorithm and increases the probability of searching for excellent solutions. In 2018, Bian et al. [27] proposed the self-adaptive flower pollination algorithm (SFPA) to reduce the probability of falling into a local optimum; the parameter control mechanism designed in the SFPA fluctuates within a certain range, thus adjusting the algorithm parameters. Nevertheless, the transition probability P only fluctuates around 0.8, which is a small range and may not significantly improve the algorithm's performance. In 2019, Supriya et al. [28] proposed an enhanced global-best-driven flower pollination algorithm (GFPA) to improve the convergence speed of the flower pollination algorithm. The GFPA introduces a search strategy with the best individual as the base individual, which gives the algorithm a chance to mine in the neighborhood of the best individual and increases the algorithm's convergence speed. In 2020, Yang et al. [29] proposed an improved flower pollination algorithm with three strategies (IFPA) to enhance the convergence speed and optimality search accuracy of the algorithm. In the IFPA, the information of the optimal individuals of two adjacent generations is employed to guide the evolutionary direction, which is equivalent to providing two clear and promising directions for the algorithm search; yet the similarity of the optimal individuals of two neighboring generations results in a single overall trend of the search direction and an increased likelihood of falling into a local optimum. By analyzing relevant studies, the FPA can find the optimum of the objective function by global and local search, but some flaws remain unsolved: (1) The setting of parameters such as inertia weights affects the accuracy of the global search for the optimal solution. (2) The local search process lacks a search guide and falls into local optima, resulting in lower search efficiency and convergence. (3) The complexity of the optimization problem increases the number of locally optimal solutions and hinders the FPA from obtaining optimal solutions.
To address these issues, we propose an improved flower pollination algorithm based on cosine cross-generation differential evolution (FPA-CCDE) to overcome the mentioned shortcomings of the FPA. The contributions of our work can be summarized as follows: (1) Setting the cosine inertia weight makes the global search initially strengthen the search ability at a faster rate and enhances the convergence speed of the algorithm. (2) The algorithm uses cross-generation differential evolution to guide individuals toward the optimal solution, so that the local search process of the algorithm is guided. (3) The external archiving mechanism and the adaptive adjustment of parameters realize the dynamic update of the scaling factor and crossover probability to enhance the population richness and reduce the number of local solutions; the cross-generation roulette wheel selection mechanism is then combined to reduce the probability of falling into the local optimal solution.
For the purpose of evaluating the superiority and robustness of the proposed FPA-CCDE, we compare it with five other state-of-the-art heuristic optimization algorithms on benchmark functions [30,31,32,33,34,35,36,37,38]. The experimental results indicate that our algorithm has the best performance in terms of searching accuracy, convergence speed, and operation efficiency. To evaluate the performance of the proposed algorithm in practical applications, we consider the robot inspection path planning issue as an illustration and design the inspection path in accordance with the optimization variables to verify the effectiveness of the algorithm. The experimental results prove that the proposed algorithm can meet the requirements of low cost, high efficiency, and obstacle avoidance in inspection path planning.
The remainder of this paper is organized as follows. Section 2 introduces the main idea of the FPA. Section 3 describes the details of the proposed FPA-CCDE. Section 4 verifies the effectiveness of the FPA-CCDE by benchmark functions. Section 5 demonstrates the performance of the FPA-CCDE on robot path planning. The last part of this paper is Section 6, which provides conclusions.

2. Preliminary Review

Cross-pollination and self-pollination are the two components of flower pollination. Cross-pollination signifies that pollination occurs among different flowers due to natural wind transmission and animal carrying. Self-pollination relates to the pollination of the same flower by spontaneous diffusion without the involvement of biological factors. In the FPA, Yang [25] regards these two pollination methods as the global and local searching processes, respectively. Moreover, Yang employs a switch probability to realize the conversion between the two methods, which can substantially balance the ratio of global and local searches.
Cross-pollination corresponds to the global searching process of the FPA. Let the population size of pollen be N, then the position of pollen i at iteration t + 1 is:
$x_i^{t+1} = x_i^t + L \cdot (x_i^t - g),$  (1)
where $x_i^t$ is the position of pollen $i$ at iteration $t$, and $g$ is the current fittest pollen. $L$ denotes the step size of the pollination, which follows a Levy distribution [39].
Self-pollination is in line with the local searching process of the FPA, and the pollen i at iteration t + 1 is:
$x_i^{t+1} = x_i^t + \varepsilon \cdot (x_j^t - x_k^t),$  (2)
where $x_j^t$ and $x_k^t$ represent two pollen individual positions (different flowers belonging to the same flowering plant) in the population that are distinct from pollen individual $i$, which essentially mimics the constancy of flowers in a finite region, and $\varepsilon$ follows a uniform distribution on $[0,1]$.
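As a rough illustration of the two pollination rules (not the authors' implementation), the following Python sketch performs one FPA iteration over a population stored as a NumPy array; the Mantegna-style Levy step, the default switch probability p = 0.8, and all function names are assumptions made for the example.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna-style Levy-distributed step, a common choice for the FPA."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def fpa_step(pop, g_best, p=0.8, rng=None):
    """One FPA iteration over the whole population, following Eqs. (1) and (2)."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        if rng.random() < p:                       # cross-pollination: global search, Eq. (1)
            new_pop[i] = pop[i] + levy_step(dim, rng=rng) * (pop[i] - g_best)
        else:                                      # self-pollination: local search, Eq. (2)
            j, k = rng.choice(n, size=2, replace=False)
            new_pop[i] = pop[i] + rng.random() * (pop[j] - pop[k])
    return new_pop
```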

3. The Flower Pollination Algorithm Based on Cosine Cross-Generation Differential Evolution (FPA-CCDE)

As a novel heuristic algorithm for optimization, the FPA has fewer parameters and can achieve high efficiency. Nonetheless, it is prone to falling into local optima, and it has insufficient population richness and search accuracy. In light of this, we propose the FPA-CCDE algorithm. The framework of the FPA-CCDE is shown in Figure 1. The cosine inertia weights are introduced in the global search process to enhance the global search accuracy and balance the searching process. Moreover, the introduction of cross-generation differential evolution in the local search process can increase the population's diversity and the efficiency of iteration. The parameter adaptive adjustment mechanism can increase the effectiveness of searching, and the roulette wheel selection mechanism helps the algorithm jump out of local optima during the search. More details are given in the following subsections.

3.1. Cosine Inertia Weight

The inertia weight in the traditional FPA is a random value. A larger inertia weight contributes to better global searching, whereas a smaller inertia weight helps strengthen the local searching process. Notwithstanding, this kind of weight cannot guide the global search direction and may even lead to local optima [40]. Consequently, to improve global search precision, we design a cosine inertia weight on the basis of the simulated annealing method as
$\omega = \lambda \left[ 1 - \cos\!\left( \dfrac{\pi}{2} \cdot \dfrac{t}{T_{\max} \cdot d_{num}} \right) \right] + \sigma \cdot \mathrm{betarnd}(a, b),$  (3)
where $\lambda$ is the weight adjustment factor, $T_{\max}$ is the maximum number of iterations, $t$ is the current iteration, $d_{num}$ is the dimension of a pollen individual, and $\sigma$ is the inertia adjustment factor. The second term on the right-hand side utilizes the beta distribution to adjust the total inertia weight value. The deviation degree of $\omega$ controls the inertia weight value and thereby strengthens both the global and the local searching ability.
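A minimal sketch of how the weight in (3) could be computed is given below; the grouping of the cosine argument and the illustrative values of lam, sigma, a, and b are assumptions, not values taken from the paper.

```python
import numpy as np

def cosine_inertia_weight(t, t_max, d_num, lam=0.5, sigma=0.1, a=2.0, b=2.0, rng=None):
    """Cosine inertia weight following the reading of Eq. (3) above:
    a deterministic cosine ramp plus a beta-distributed perturbation."""
    rng = rng or np.random.default_rng()
    ramp = lam * (1.0 - np.cos(0.5 * np.pi * t / (t_max * d_num)))
    return ramp + sigma * rng.beta(a, b)
```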
On the basis of the cosine inertia weight, the position of pollen $i$ at iteration $t+1$ in global searching can be rewritten as:
$x_i^{t+1} = x_i^t - k \cdot (x_i^t - x_{cent}) + rand_{1,d_{num}} \cdot U_{\min}^i + (U_{\max}^i - U_{\min}^i),$  (4)
where $U_{\max}^i$ and $U_{\min}^i$ are the upper and lower limits of pollen individual values in each dimension, respectively, and $d_{num}$ is the dimension of a pollen individual. Here, $rand_{1,d_{num}}$ is a $1 \times d_{num}$ matrix whose entries are drawn from a uniform distribution on $[0,1]$. Finally, $x_{cent}$ refers to the average of the upper and lower limits of each dimension for each individual.

3.2. Cross-Generation Differential Evolution

In the local searching process, we employ cross-generation differential evolution [41] to achieve a balance between convergence speed and diversity, while maintaining a balance between exploration and exploitation.
The cross-generation differential evolution includes two mutation strategies, namely the neighborhood-based cross-generation strategy (NCG) and the population-based cross-generation strategy (PCG). The NCG estimates the promising search direction by analyzing the differences between consecutive generations, which can direct the algorithm toward the optimal individual. In the mutation of the PCG, the populations from two generations participate in the search for the optimal individual. Combining the information from two generations can reduce the numerical oscillation of the search results and improve the search stability of the FPA in the local searching process. In this paper, to balance convergence and search ability, we adopt the NCG and the PCG with the same probability during the mutation process. The generation of new individuals by the NCG and the PCG is described in detail below.
In the NCG, each individual called parent individual will be mutated, and two neighborhood pools of the parent individuals are used to generate the mutant vector. One neighborhood pool is formed by T individuals from a population of the current generation, and the other neighborhood pool consists of T individuals from the population of the previous generation.
The selection criteria are listed below. By calculating the Euclidean distance between the parent and other individuals in the population, the nearest T individuals are selected to form one neighborhood pool. In accordance with the Euclidean distance, T individuals are also picked from the previous generation as the members of the other neighborhood pool.
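The following sketch illustrates one way the neighborhood pools could be built from the Euclidean-distance criterion above; the function and variable names are hypothetical and not taken from the authors' code.

```python
import numpy as np

def neighborhood_pool(parent, population, T):
    """Return the indices of the T individuals in `population` closest to
    `parent` in Euclidean distance (a sketch of the pool construction)."""
    dists = np.linalg.norm(population - parent, axis=1)
    return np.argsort(dists)[:T]

# Illustrative usage: one pool from the current generation and one from the
# previous generation; for the current generation the parent itself may be
# excluded before selecting the T nearest members.
# pool_cur  = neighborhood_pool(pop_cur[i], pop_cur, T)
# pool_prev = neighborhood_pool(pop_cur[i], pop_prev, T)
```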
Utilizing two neighborhood pools for mutation, the new individual generated after mutation can be expressed as:
$V_{i,g} = x_{rn_1,g} + F \cdot (x_{rn_1,g} - x_{rn_2,g-1}),$  (5)
where $V_{i,g}$ is the mutant vector, $i$ is the index of the parent individual, $g$ is the index of the mutation generation, $x_{rn_1,g}$ is a randomly selected individual in the neighborhood pool of the current generation, and $x_{rn_2,g-1}$ is a randomly selected individual in the parent individual's neighborhood pool of the previous generation. The subscript $rn_1$ is an integer randomly selected from $\{I_{i,g,1}, I_{i,g,2}, \ldots, I_{i,g,T}\}$, which records the indices of the $T$ members in the neighborhood pool. Similarly, $rn_2$ is randomly selected from $\{I_{i,g-1,1}, I_{i,g-1,2}, \ldots, I_{i,g-1,T}\}$, and the pool size is 5% of the total population size.
In PCG, the new individual generated after mutation is:
$V_{i,g} = x_{i,g} + F \cdot (x_{rp_1,g} - x_{rp_2,g-1}),$  (6)
where $x_{i,g}$ is the parent individual, $x_{rp_1,g}$ is a randomly selected individual from the population of the current generation, $x_{rp_2,g-1}$ is a randomly selected individual from the population of the previous generation, $rp_1$ and $rp_2$ are random integers selected from $\{1, 2, \ldots, N\}$, and $N$ is the population size.
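A compact sketch of the two mutation rules (5) and (6) is given below, assuming the current and previous populations are NumPy arrays; it is illustrative only.

```python
import numpy as np

def ncg_mutation(pool_cur, pool_prev, pop_cur, pop_prev, F, rng=None):
    """Neighborhood-based cross-generation mutation, Eq. (5)."""
    rng = rng or np.random.default_rng()
    rn1 = rng.choice(pool_cur)           # index drawn from the current-generation pool
    rn2 = rng.choice(pool_prev)          # index drawn from the previous-generation pool
    return pop_cur[rn1] + F * (pop_cur[rn1] - pop_prev[rn2])

def pcg_mutation(i, pop_cur, pop_prev, F, rng=None):
    """Population-based cross-generation mutation, Eq. (6)."""
    rng = rng or np.random.default_rng()
    n = len(pop_cur)
    rp1, rp2 = rng.integers(0, n, size=2)
    return pop_cur[i] + F * (pop_cur[rp1] - pop_prev[rp2])
```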

3.3. External Archiving Mechanism

After the searching process, to select high-quality solutions in the neighborhood pool, an external archiving mechanism is proposed.
Population diversity can be increased by establishing an external archive and letting individuals within the archive direct the search process. After completing differential evolution, the fitness value of the resulting offspring $u_i$ is compared with the fitness value of the parent $x_i$. If the offspring's fitness value is smaller, then the offspring $u_i$ replaces $x_i$. Otherwise, $T_1$ individuals whose fitness values are greater than that of $x_i$ are selected and stored in the archive. External archiving can increase the richness of the population and prevent the search process from falling into a locally optimal solution. In addition, it can direct individuals in their search toward potential subregions with high-quality solutions. After completing a search iteration, if the number of solutions contained in the external archive exceeds the threshold value $N_P$, individuals are randomly removed from the archive to keep the number within $N_P$.
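The archiving rule described above can be sketched as follows; the list-based archive and the names n_p and candidates are assumptions for the example.

```python
import numpy as np

def update_archive(archive, candidates, n_p, rng=None):
    """Append inferior-but-informative candidates to the external archive and,
    if the archive exceeds the threshold n_p, randomly trim it back to n_p."""
    rng = rng or np.random.default_rng()
    archive = archive + list(candidates)
    if len(archive) > n_p:
        keep = rng.choice(len(archive), size=n_p, replace=False)
        archive = [archive[k] for k in keep]
    return archive
```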

3.4. Parameter Adaptive Adjustment Mechanism

Since the NCG and the PCG are highly sensitive to the scaling factor $F$ and the crossover probability $CR$, the values of these two parameters will affect the search process [41,42]. If the scaling factor is small, the difference vector will generate only a small perturbation, and the richness of the population decreases. If $F$ is large, the blindness of the searching process increases, and the convergence velocity slows down. $CR$ determines the probability of crossover between the parent individual and the mutation vector, balancing the local and global search processes. If $CR$ is large, the population diversity and convergence speed will be increased. For distinct stages of the search process, it is essential to formulate distinct parameters. The parameter settings should meet the following requirements: (1) select appropriate parameters for a specific region in the target scope; (2) eliminate inappropriate parameters; (3) reduce the probability of complete convergence of the parameters.
With every iteration, the $F_{i,g}$ of each individual $x_i$ is determined according to the Cauchy distribution, which can be expressed as:
$F_{i,g} = \mathrm{randc}_i(\mu_F, 0.1).$  (7)
If $F_{i,g} \geq 1$, $F_{i,g}$ is set to 1. If $F_{i,g} \leq 0$, $F_{i,g}$ is generated again according to (7). The initial value of $\mu_F$ is 0.5, and it is updated as follows:
$\mu_F = \dfrac{1}{2} \cdot \mu_F + \dfrac{1}{2} \cdot \mathrm{mean}_L(S_F),$  (8)
$\mathrm{mean}_L(S_F) = \dfrac{\sum_{F \in S_F} F^2}{\sum_{F \in S_F} F}.$  (9)
$S_F$ is the set, stored in the archive, of the scaling factors $F_i$ of the successful individuals at iteration $t$, and $\mathrm{mean}_L(S_F)$ denotes the Lehmer mean over all successful individuals in the population.
The generation of the scaling factor mainly involves two operations. The Cauchy distribution is advantageous for the diversification of mutation factors and avoids the drawback of premature convergence in the differential-evolution search process. Likewise, the Lehmer mean gives more weight to the scaling factors of better solutions: the mutation factor information of the better solutions is propagated to improve the optimization efficiency, and larger scaling factors occupy a greater weight in the calculation of $\mu_F$. This takes both optimization precision and algorithm efficiency into account.
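The Cauchy sampling of (7) and the Lehmer-mean update of (8) and (9) could look as follows in a minimal sketch; the resampling loop for non-positive draws follows the description above.

```python
import numpy as np

def sample_scaling_factor(mu_f, scale=0.1, rng=None):
    """Draw F from a Cauchy distribution centred at mu_f (Eq. (7)); clip values
    above 1 to 1 and resample non-positive draws."""
    rng = rng or np.random.default_rng()
    while True:
        f = mu_f + scale * rng.standard_cauchy()
        if f > 0:
            return min(f, 1.0)

def update_mu_f(mu_f, successful_f):
    """Update mu_f with the Lehmer mean of successful scaling factors, Eqs. (8)-(9)."""
    s = np.asarray(successful_f, dtype=float)
    if s.size == 0:
        return mu_f
    lehmer = (s ** 2).sum() / s.sum()
    return 0.5 * mu_f + 0.5 * lehmer
```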
The crossover probability is generated individually for each solution and is updated as:
$CR_{i,g} = \begin{cases} CR_{\min} + (CR_{\max} - CR_{\min}) \cdot \dfrac{f_i - f_{\min}}{f_{\max} - f_{\min}} \cdot \dfrac{f_4 - f_2}{f_3 - f_1}, & f_i \geq \bar{f}, \\ CR_{\min}, & f_i < \bar{f}, \end{cases}$  (10)
where $\bar{f}$ is the average fitness of all individuals within the current population, $f_i$ is the fitness of individual $i$, and $f_{\min}$ and $f_{\max}$ are the minimum and maximum fitness values in the population. Four individuals, $x_{p_1}, x_{p_2}, x_{p_3}, x_{p_4}$, are randomly selected from the externally archived population; their fitness values are $f_1, f_2, f_3, f_4$, respectively, with $f_4 > f_3 > f_2 > f_1$.
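Under the reading of (10) given above (including the grouping of the archive-based ratio, which is an assumption), an illustrative implementation of the CR update might be:

```python
import numpy as np

def adapt_crossover_rate(f_i, fitness, archive_fitness, cr_min=0.1, cr_max=0.9, rng=None):
    """Illustrative sketch of the CR update in Eq. (10)."""
    rng = rng or np.random.default_rng()
    fitness = np.asarray(fitness, dtype=float)
    f_bar = fitness.mean()
    if f_i < f_bar:
        return cr_min
    f_min, f_max = fitness.min(), fitness.max()
    f1, f2, f3, f4 = np.sort(rng.choice(archive_fitness, size=4, replace=False))
    base = (f_i - f_min) / (f_max - f_min + 1e-12)
    ratio = (f4 - f2) / (f3 - f1 + 1e-12)
    return cr_min + (cr_max - cr_min) * base * ratio
```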
After completing the local search, the subsequent cross-selection procedure is also carried out as follows:
$U_{i,g} = \begin{cases} V_{i,g}, & \text{if } rand < CR_{i,g}, \\ x_{i,g}, & \text{if } rand \geq CR_{i,g}, \end{cases}$  (11)
where $U_{i,g}$ is the trial vector and $rand$ is drawn from a uniform distribution on $[0,1]$. After completing the pollen variation and cross-selection of the population's individuals, the selection is carried out:
$X_{i,g+1} = \begin{cases} U_{i,g}, & \text{if } f(U_{i,g}) \leq f(X_{i,g}), \\ X_{i,g}, & \text{if } f(U_{i,g}) > f(X_{i,g}). \end{cases}$  (12)
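A small sketch of the crossover rule (11) followed by the greedy selection (12), assuming a fitness function that is being minimized:

```python
import numpy as np

def crossover_and_select(x, v, cr, objective, rng=None):
    """Crossover per Eq. (11) followed by greedy selection per Eq. (12)."""
    rng = rng or np.random.default_rng()
    u = v if rng.random() < cr else x              # trial vector U_{i,g}
    return u if objective(u) <= objective(x) else x
```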

3.5. Cross-Generation Roulette Wheel Selection

Heuristic optimization algorithms predominantly apply roulette wheel selection to jump out of local optima during the search [13,43]. After completing a search iteration for all individuals, we select the $T$ pollen individuals with the smallest fitness values and then randomly select $T$ pollen individuals from the remaining individuals in the population. For the individuals $x_i^{t+1}$ $(i = 1, 2, \ldots, 2T)$, we design a cross-generation roulette wheel selection mechanism to reduce the likelihood of the algorithm falling into a local solution.
We select individuals that include parent individuals of the current population and individuals of the previous generation (we recommend $\alpha = 0.7T$ and $\beta = 0.3T$) to form a roulette wheel pool that takes part in the roulette wheel selection process. The probability of each pollen individual in the roulette pool being selected is:
$p_i = \dfrac{w_i g_i}{\sum_{i=1}^{2T} w_i g_i}.$  (13)
For pollen individual $i$, $w_i g_i$ is the mapping weight of its fitness value. The cross-generation roulette wheel selection strategy selects appropriate parameters in the target scope. Consequently, it can eliminate inappropriate parameters and reduce the probability of complete convergence of the parameters. The specific actions are broken down into the following steps; a short code sketch follows them.
Step 1: Sort the pollen individuals in the parent population in accordance with the fitness values.
Step 2: Select the top T pollen individuals to form a subpopulation P t .
Step 3: Randomly select β remaining individuals from the parent population.
Step 4: Randomly select α individuals from the current population.
Step 5: Combine subpopulations P t with selected individuals from the current population to form a roulette wheel selection pool.
Step 6: The individuals in the roulette pool perform the roulette process according to (13).
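A sketch of the pool construction in Steps 1 to 5 and the roulette draw of (13) is given below; treating alpha and beta as integer counts and the fitness-to-weight mapping as a user-supplied array are assumptions for the example.

```python
import numpy as np

def roulette_select(pool, weights, rng=None):
    """Pick one individual from the roulette pool with probability proportional
    to its mapped fitness weight, Eq. (13)."""
    rng = rng or np.random.default_rng()
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    return pool[rng.choice(len(pool), p=p)]

def build_roulette_pool(pop_prev, pop_cur, fitness_prev, T, alpha, beta, rng=None):
    """Form the cross-generation roulette pool from Steps 1-5: the T best
    parents, beta random remaining parents, and alpha random current individuals."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fitness_prev)
    best = [pop_prev[i] for i in order[:T]]
    rest_idx = rng.choice(order[T:], size=beta, replace=False)
    cur_idx = rng.choice(len(pop_cur), size=alpha, replace=False)
    return best + [pop_prev[i] for i in rest_idx] + [pop_cur[i] for i in cur_idx]
```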
On the basis of the above analysis in this section, the pseudo-code of the FPA-CCDE can be summarized in Algorithm 1.
Algorithm 1 Flower Pollination Algorithm Based on Cosine Cross-Generation Differential Evolution (FPA-CCDE)
1: Initialize the pollen population size $N$ and the optimal individual $g$.
2: Set the maximum criterion $T_{\max}$, and select the initial crossover probability $CR_0$ and the initial scaling factor $F_0$ from $[CR_{\min}, CR_{\max}]$ and $[F_{\min}, F_{\max}]$, respectively.
3: Randomly generate switch probabilities $rand_1$ and $rand_2$.
4: for each individual in the current population do
5:   Calculate $F_{i,g}$ according to (7).
6:   if $rand_1 < P$ then
7:     if $rand_2 < 0.5$ then
8:       Update the current pollen position using PCG differential evolution.
9:     else
10:      Update the current pollen position using NCG differential evolution.
11:      Perform the crossover and selection operations for $x_{i+1}$ according to formulas (10) and (11).
12:    end if
13:  else
14:    For cross-pollination, formula (4) is used to complete the global search process and update the current pollen position.
15:    Select the updated pollen individuals to perform the mutation and crossover operations according to formulas (11) and (12).
16:  end if
17:  Generate $x_i^p$ using the cross-generation roulette wheel selection mechanism.
18:  Compare $x_i^p$ with $x_{i+1}$ and replace $x_{i+1}$ if $x_i^p$ has a better fitness value.
19:  Compare $x_{i+1}$ with $x_i$ and replace $x_i$ if $x_{i+1}$ has a better fitness value.
20:  Compare $x_{i+1}$ with $g$ and replace $g$ if $x_{i+1}$ has a better fitness value.
21: end for
22: Repeat line 3 to line 19 until $T_{\max}$ is satisfied.

4. Evaluation of the FPA-CCDE

To verify the optimization performance and operational efficiency of the proposed FPA-CCDE, in this section, 32 sets of standard test functions [41,42] were chosen to test the FPA-CCDE, and the particle swarm optimization algorithm (PSO), the flower pollination algorithm (FPA), the artificial bee colony algorithm (ABC), the genetic algorithm (GA), and the social spider optimization algorithm (SSO) were used for comparison in an identical testing setting. In accordance with Yang's article [33], each algorithm is run independently 30 times, and the algorithm is suitable for most applications on the condition that the transition probability is $P = 0.8$. The pollen crossover probability is randomly initialized as $CR_0 = 0.15$, the population size is set to $N_p = 300$ for all comparison algorithms, and the maximum number of iterations is set to $T_{\max} = 2500$. The parameter settings for Algorithm 1 are given in Table 1. The selected test functions mainly include three types, namely, the low-dimensional test functions (dimension less than 10) in Table 2, the high-dimensional test functions (dimension more than 10) in Table 3, and the extensible test functions [43]. Among them, F10 is a low-dimensional multipeak function, which makes it easy for the algorithm to fall into a local optimal solution during the search. Most low-dimensional functions tend to oscillate violently throughout the search process.

4.1. Performance Comparison on Low-Dimensional Benchmark Functions

The low-dimensional benchmark functions are frequently utilized in numerous performance evaluations of heuristic optimization algorithms and can generate a mass of local optimum solutions. These functions are continuous or discontinuous, convex or non-convex, and unimodal or multimodal.
The fitness value and standard deviation on benchmark functions of divergent algorithms are displayed in Table 2, with EF fixed at 1000. The smaller the fitness value and standard deviation are, the higher the optimization accuracy and stability will be. For instance, the fitness value and standard deviation of the FPA-CCDE on function F2 are smaller than that of other algorithms, so the optimization accuracy and stability of the FPA-CCDE are the best. It can be observed from Table 2 that compared with the FPA, the FPA-CCDE appears to be better in 15 functions (F2–F4, F7, F8, F14, F17, F18, F20, F22, F28, F30–F32, F35, F36). The standard deviation of the FPA-CCDE has undergone significant enhancements in 18 functions. The results indicate that the FPA-CCDE greatly enhances the optimization accuracy in the searching process. Moreover, the smaller standard deviation values demonstrate that the FPA-CCDE has smaller numerical oscillation and higher stability.
In comparison to ABC, the FPA-CCDE achieves considerably better fitness value in 8 functions and obtains similar fitness value in 10 functions. For standard deviation, the FPA-CCDE outperforms ABC in 12 functions. In general, the optimization accuracy and stability of the FPA-CCDE are fairly superior to ABC. In the listed functions, the FPA-CCDE surpasses PSO in F14, F17, F30, and F35-F36 on fitness value, and it is similar to PSO in 12 functions. For standard deviation, the FPA-CCDE wins in 7 (F16–F18, F22, F35–F36) functions, and ties in 4 (F3–F4, F7, F28) functions. For functions F2, F5, F8, and F31–F32, although PSO achieves a smaller standard deviation, it is not significantly different from the FPA-CCDE at the level α = 0.05 . In 16 functions, the FPA-CCDE outperforms the GA in optimization accuracy. Regarding standard deviation, the FPA-CCDE only loses in F5, and it can find smaller fitness value than the GA in most functions. Compared with SSO, the FPA-CCDE can provide better or similar fitness value in 18 functions.
Table 2. Experimental results of benchmark functions with low dimension (fitness value with standard deviation in parentheses).

| Functions | FPA-CCDE | FPA [14] | ABC [15] | PSO [16] | GA [17] | SSO [18] |
|---|---|---|---|---|---|---|
| F2 | 3.1021 × 10 15 (4.3388 × 10 15) | 4.1451 × 10 3 (7.0754 × 10 3) | 6.8256 × 10 10 (1.3239 × 10 9) | 0.0000 × 10 0 (0.0000 × 10 0) | 7.8197 × 10 2 (1.6845 × 10 1) | 1.5299 × 10 7 (1.7006 × 10 7) |
| F3 | 0.0000 × 10 0 (0.0000 × 10 0) | 2.2574 × 10 1 (2.2744 × 10 1) | 0.0000 × 10 0 (0.0000 × 10 0) | 0.0000 × 10 0 (0.0000 × 10 0) | 6.2051 × 10 1 (2.9506 × 10 1) | 1.9375 × 10 6 (1.4281 × 10 6) |
| F4 | 0.0000 × 10 0 (0.0000 × 10 0) | 1.7376 × 10 0 (1.3740 × 10 0) | 2.8247 × 10 1 (1.7454 × 10 1) | 0.0000 × 10 0 (0.0000 × 10 0) | 4.7987 × 10 1 (3.1709 × 10 1) | 1.2790 × 10 3 (5.9245 × 10 4) |
| F5 | −2.0626 × 10 0 (3.1346 × 10 12) | −2.0626 × 10 0 (1.8779 × 10 5) | −2.0626 × 10 0 (9.0649 × 10 16) | −2.0626 × 10 0 (9.0649 × 10 16) | −2.0625 × 10 0 (1.3597 × 10 15) | −2.0626 × 10 0 (1.2666 × 10 8) |
| F7 | −1.0000 × 10 0 (0.0000 × 10 0) | −9.5125 × 10 1 (2.7284 × 10 2) | −1.0000 × 10 0 (0.0000 × 10 0) | −1.0000 × 10 0 (0.0000 × 10 0) | −9.5125 × 10 1 (1.1331 × 10 16) | −9.9999 × 10 1 (4.6850 × 10 6) |
| F8 | −1.0000 × 10 0 (0.0000 × 10 0) | −7.9984 × 10 1 (3.5024 × 10 1) | −9.9999 × 10 1 (8.4234 × 10 6) | −1.0000 × 10 0 (0.0000 × 10 0) | −5.4542 × 10 1 (4.9345 × 10 1) | −1.0000 × 10 0 (2.1452 × 10 7) |
| F14 | 3.0000 × 10 0 (2.0121 × 10 15) | 3.0024 × 10 0 (3.0070 × 10 3) | 3.0000 × 10 0 (1.8198 × 10 7) | 3.0000 × 10 0 (1.2128 × 10 15) | 3.9060 × 10 0 (1.0226 × 10 0) | 3.00004 × 10 0 (3.6979 × 10 5) |
| F16 | −3.8627 × 10 0 (1.8995 × 10 15) | −3.8627 × 10 0 (7.5327 × 10 5) | −3.8627 × 10 0 (2.7194 × 10 15) | −3.8627 × 10 0 (2.6691 × 10 15) | −3.8627 × 10 0 (1.8206 × 10 5) | −3.8627 × 10 0 (4.2692 × 10 5) |
| F17 | −3.0679 × 10 0 (3.1312 × 10 18) | −3.0155 × 10 0 (2.9497 × 10 2) | −3.0424 × 10 0 (9.0649 × 10 16) | −3.0031 × 10 0 (3.0102 × 10 2) | −3.0113 × 10 0 (2.8284 × 10 2) | −3.0305 × 10 0 (2.2595 × 10 2) |
| F18 | −1.9208 × 10 1 (5.4266 × 10 15) | −1.9207 × 10 1 (1.4184 × 10 3) | −1.9208 × 10 1 (7.7768 × 10 15) | −1.9208 × 10 1 (6.1960 × 10 15) | −1.9207 × 10 1 (5.2294 × 10 15) | −1.9208 × 10 1 (8.5032 × 10 7) |
| F20 | −1.8013 × 10 0 (4.2850 × 10 16) | −1.8011 × 10 0 (2.7689 × 10 4) | −1.8013 × 10 0 (6.7987 × 10 16) | −1.8013 × 10 0 (6.7987 × 10 16) | −1.8011 × 10 0 (8.7772 × 10 8) | −1.8013 × 10 0 (2.0978 × 10 6) |
| F22 | 2.9197 × 10 6 (2.5958 × 10 6) | 1.2187 × 10 1 (6.1220 × 10 0) | 1.8688 × 10 2 (1.7140 × 10 2) | 1.4393 × 10 1 (2.6987 × 10 1) | 4.4791 × 10 6 (9.9631 × 10 6) | 5.7462 × 10 2 (1.6986 × 10 1) |
| F28 | 0.0000 × 10 0 (0.0000 × 10 0) | 5.0174 × 10 3 (5.2430 × 10 3) | 1.5375 × 10 9 (3.4115 × 10 9) | 0.0000 × 10 0 (0.0000 × 10 0) | 1.0890 × 10 3 (1.7702 × 10 3) | 3.8345 × 10 10 (1.0001 × 10 10) |
| F30 | −1.0536 × 10 1 (1.1934 × 10 14) | −7.2096 × 10 0 (3.4956 × 10 0) | −1.0536 × 10 1 (6.2565 × 10 5) | 5.8020 × 10 0 (3.3779 × 10 0) | −6.5184 × 10 0 (3.1127 × 10 0) | −1.0533 × 10 1 (1.5542 × 10 3) |
| F31 | −1.0316 × 10 0 (2.1531 × 10 11) | −1.0319 × 10 0 (3.8595 × 10 5) | −1.0316 × 10 0 (2.2662 × 10 16) | −1.0316 × 10 0 (2.2662 × 10 16) | −1.0302 × 10 0 (1.2697 × 10 3) | −1.0316 × 10 0 (1.1275 × 10 6) |
| F32 | −1.9410 × 10 0 (3.2685 × 10 9) | −1.9410 × 10 0 (1.2658 × 10 5) | −1.9410 × 10 0 (4.5324 × 10 16) | −1.9410 × 10 0 (4.9650 × 10 16) | −1.9410 × 10 0 (1.0748 × 10 6) | −1.9410 × 10 0 (8.4117 × 10 6) |
| F35 | 4.5477 × 10 87 (2.8866 × 10 96) | 7.0076 × 10 5 (1.3636 × 10 4) | 1.8304 × 10 18 (1.5938 × 10 18) | 4.4158 × 10 72 (2.2079 × 10 71) | 1.3759 × 10 4 (1.3743 × 10 4) | 1.40547 × 10 7 (1.24796 × 10 7) |
| F36 | −1.5198 × 10 3 (2.1675 × 10 1) | 3.5801 × 10 4 (1.3014 × 10 4) | −1.3389 × 10 3 (1.1533 × 10 2) | −9.2442 × 10 2 (5.3658 × 10 2) | 1.4270 × 10 5 (4.1396 × 10 4) | −1.4922 × 10 3 (5.4167 × 10 1) |
| +/=/− | — | 15/3/0 | 8/10/0 | 5/12/1 | 16/2/0 | 14/4/0 |
The outcomes demonstrate that for the low-dimensional test function, the FPA-CCDE greatly improves the reliability of the optimization accuracy in the searching process with smaller numerical oscillation and higher stability.

4.2. Performance Comparison on High-Dimensional Benchmark Functions

To verify the stability of the FPA-CCDE, we test it on high-dimensional benchmark functions with 30 dimensions. Such functions have a huge number of locally optimal solutions, which may hamper the whole optimization process. The corresponding experimental results are shown in Table 3.
We can see that the optimization precision of the FPA-CCDE is superior to that of the FPA for all functions. The FPA-CCDE achieves better solutions and mean error values than ABC in all functions except for F6. Moreover, the FPA-CCDE can obtain better solutions and error values than PSO, the GA, and SSO in all functions.
Table 3. Experimental results of benchmark functions with high dimension (fitness value with standard deviation in parentheses).

| Functions | FPA-CCDE | FPA | ABC | PSO | GA | SSO |
|---|---|---|---|---|---|---|
| F1 | 9.4147 × 10 15 (1.3831 × 10 14) | 1.5080 × 10 1 (9.7566 × 10 1) | 5.7545 × 10 6 (2.5028 × 10 6) | 4.1650 × 10 0 (6.5479 × 10 1) | 1.6527 × 10 1 (5.8660 × 10 1) | 2.9000 × 10 1 (3.3065 × 10 2) |
| F6 | 1.5782 × 10 2 (1.2026 × 10 2) | 3.2860 × 10 4 (1.8733 × 10 4) | 1.0434 × 10 2 (5.7614 × 10 3) | 6.9923 × 10 0 (3.3423 × 10 0) | 3.5530 × 10 5 (1.0824 × 10 5) | 2.5406 × 10 0 (9.0102 × 10 1) |
| F15 | 0.0000 × 10 0 (0.0000 × 10 0) | 7.8836 × 10 1 (2.0933 × 10 1) | 5.0017 × 10 4 (2.5007 × 10 3) | 8.5954 × 10 1 (4.1484 × 10 1) | 1.8969 × 10 2 (2.5099 × 10 1) | 1.8620 × 10 2 (1.2621 × 10 2) |
| F19 | 1.4998 × 10 32 (2.5761 × 10 29) | 3.5660 × 10 1 (8.0118 × 10 0) | 4.8711 × 10 14 (3.3297 × 10 14) | 1.5369 × 10 0 (9.8456 × 10 1) | 7.0606 × 10 1 (9.2977 × 10 0) | 1.4748 × 10 1 (2.3450 × 10 1) |
| F21 | 7.8109 × 10 14 (1.9111 × 10 13) | 1.1668 × 10 1 (2.9367 × 10 1) | 1.1511 × 10 0 (2.0025 × 10 1) | 6.2434 × 10 0 (8.4791 × 10 1) | 1.1301 × 10 1 (4.5514 × 10 1) | 3.5535 × 10 0 (8.1652 × 10 1) |
| F23 | 1.7355 × 10 73 (3.0057 × 10 73) | 4.9245 × 10 2 (1.8600 × 10 2) | 4.3686 × 10 2 (1.3277 × 10 2) | 4.9038 × 10 0 (3.2106 × 10 0) | 4.0988 × 10 3 (1.0626 × 10 3) | 1.6126 × 10 0 (5.5078 × 10 1) |
| F24 | −1.4344 × 10 4 (3.6431 × 10 3) | −4.9410 × 10 3 (3.8186 × 10 2) | −1.2208 × 10 4 (2.2864 × 10 2) | −3.2450 × 10 3 (3.5118 × 10 2) | −5.6765 × 10 3 (1.8977 × 10 3) | −7.9893 × 10 3 (8.4372 × 10 2) |
| F25 | 1.1137 × 10 10 (3.0381 × 10 10) | 2.3523 × 10 2 (1.6087 × 10 1) | 1.8038 × 10 2 (7.4305 × 10 2) | 3.9470 × 10 1 (8.9127 × 10 0) | 2.4371 × 10 2 (1.7642 × 10 1) | 4.7235 × 10 1 (1.0067 × 10 1) |
| F26 | 0.0000 × 10 0 (0.00000 × 10 0) | 2.3426 × 10 4 (1.2743 × 10 4) | 2.7476 × 10 1 (2.0968 × 10 1) | 1.8593 × 10 2 (6.6331 × 10 1) | 2.5259 × 10 5 (9.1306 × 10 4) | 6.7862 × 10 1 (3.6053 × 10 1) |
| F27 | 9.5293 × 10 78 (3.1604 × 10 77) | 4.7212 × 10 4 (1.3112 × 10 4) | 1.6450 × 10 11 (1.4457 × 10 11) | 3.3784 × 10 0 (2.8782 × 10 0) | 1.2034 × 10 5 (2.3265 × 10 4) | 1.1852 × 10 0 (2.7957 × 10 1) |
| F29 | 1.2451 × 10 4 (3.7184 × 10 6) | 6.9380 × 10 3 (4.9156 × 10 2) | 1.6629 × 10 2 (1.1654 × 10 2) | 9.2941 × 10 3 (3.6015 × 10 2) | 5.9258 × 10 3 (4.1609 × 10 2) | 6.7862 × 10 1 (3.6053 × 10 1) |
| F33 | 0.0000 × 10 0 (0.0000 × 10 0) | 8.9432 × 10 3 (2.3352 × 10 3) | 0.0000 × 10 0 (0.0000 × 10 0) | 6.5480 × 10 1 (4.5312 × 10 1) | 2.2026 × 10 4 (2.8696 × 10 3) | 2.4000 × 10 1 (4.3589 × 10 1) |
| F34 | 6.2388 × 10 28 (1.1777 × 10 27) | 1.3207 × 10 1 (1.5938 × 10 0) | 3.0464 × 10 1 (8.7926 × 10 2) | 2.0137 × 10 0 (1.0508 × 10 0) | 1.4142 × 10 1 (1.2205 × 10 0) | 2.9222 × 10 0 (5.1325 × 10 1) |
| F37 | 1.6338 × 10 2 (2.1111 × 10 2) | 1.2515 × 10 2 (1.2515 × 10 2) | 2.4360 × 10 2 (2.6737 × 10 1) | 9.5543 × 10 1 (4.8483 × 10 1) | 2.5296 × 10 8 (4.6531 × 10 8) | 5.0225 × 10 1 (1.1329 × 10 1) |
| +/=/− | — | 14/0/0 | 12/1/1 | 14/0/0 | 14/0/0 | 14/0/0 |

4.3. Performance Comparison on Scalable Benchmark Functions

To further verify the optimization effect of the FPA-CCDE under scalable dimensions of the benchmark functions, we select F9 as the test function. F9 has a global minimum solution at $(0, 0, \ldots, 0)_n$, which is surrounded by a massive number of local minima with identical function values. Meanwhile, local maxima exist between the global minimum and the local minima. Hence, its global convergence speed is poor, and it is easy to fall into a local optimum.
The fitness values of the optimum solutions and the standard deviations for function F9 are summarized in Table 4. It can be observed that, in the case of low-dimensional test functions, the optimum solution fitness values of algorithms such as ABC and PSO are close to those of the FPA-CCDE. This is because, in this case, the number of locally optimal solutions is small, and it is easy to escape from them. Nevertheless, increasing the dimension of the variables generates more locally optimal solutions, and most heuristic optimization algorithms then lack a search direction, contributing to an unbalanced local–global searching process. In this situation, the superiority of the FPA-CCDE becomes apparent: it improves search accuracy and guides the search process. In Table 4, we can see that the optimum solution fitness values and standard deviations of the FPA-CCDE are substantially superior to those of the other algorithms.
The iterations needed to obtain the fitness value of the optimal solution can quantitatively demonstrate convergence and searching efficiency. The experimental results are depicted in Figure 2, Figure 3 and Figure 4. We continue to select the three types of benchmark functions listed above for comparison. In Figure 2, it is obvious that, for F2, the iterations required by the GA and the FPA are much higher than those of the other algorithms, and the iterations of the FPA-CCDE are close to those of ABC, PSO, and SSO, which have better searching efficiency. For the other two functions, the iterations required by the FPA-CCDE are slightly fewer than those of the other algorithms. In Figure 3, although ABC is close to the FPA-CCDE in terms of the optimal solution fitness value, the FPA-CCDE outperforms ABC and the other algorithms regarding iterations.
Figure 4 demonstrates the fitness values of all algorithms on F9. On the condition of dealing with low-dimensional variables, the optimum fitness value and iteration number of some algorithms are close to that of the FPA-CCDE, respectively. With the increase of variable dimension, the amount of locally optimal solutions increases sharply, and the FPA-CCDE performs better than other algorithms on the fitness value of the optimal solution.
With the majority of low-dimensional functions, the algorithm will oscillate violently. Consequently, the stability of the algorithm can be characterized by oscillation. On the condition that the number of optimization iterations of each algorithm is 50, 100, 150, 200, and 250, the fluctuation of accuracy between the optimal solutions obtained through every algorithm is adopted as the performance index of the optimization stability of the algorithm. Each algorithm’s volatility varies concerning specific test functions (F36, F24, F25) and is displayed in Figure 5.
Under the low-dimensional test function (F36), each algorithm's fluctuation range is relatively stable. As the dimension of the decision variables of the test function increases (F24, F25), the complexity of the solution space and the oscillation amplitude of the algorithms increase. In comparison with the other algorithms, the overall oscillation amplitude of the FPA-CCDE is relatively small, and it has excellent stability.

5. Application of the FPA-CCDE in Inspection Robot Path Planning

To evaluate the performance of the FPA-CCDE in a practical scenario, we use it to solve the path planning problem for the inspection robot. The goal of the path planning is to design a motion track in the workplace in accordance with optimization constraints (e.g., minimum energy cost, shortest motion path, minimum duration cost).
In this work, when planning routes, we take into account the following five constraints. The first one is the moving distance limitation. There is a straight-line distance before each turn of the robot for error correction, and the minimum straight-line distance will affect the robot track. If the path is divided into segments, each segment should be longer than the minimum distance limitation. The total distance of the segments obeys the limitation:
$\sum_{i=1}^{D+1} \| P_{i-1} P_i \| \leq L_{\max},$  (14)
where $\| P_{i-1} P_i \|$ is the length of a segment, $P_0$ is the starting point, $P_{D+1}$ is the endpoint, and $L_{\max}$ is the longest path limit.
The second constraint is the turning angle limitation. The robot has a turning radius between two adjacent segments:
$\dfrac{(P_{i-1} - P_i) \cdot (P_i - P_{i+1})}{\|P_{i-1} - P_i\| \cdot \|P_i - P_{i+1}\|} \geq \cos\theta_{\max},$  (15)
where θ max is the maximum turning angle.
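The two geometric constraints (14) and (15) can be checked for a candidate way-point list as in the following sketch; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def path_geometry_feasible(points, l_max, theta_max):
    """Check the total-length constraint (Eq. (14)) and the turning-angle
    constraint (Eq. (15)) for a way-point list [P0, P1, ..., P_{D+1}]."""
    pts = np.asarray(points, dtype=float)
    segs = np.diff(pts, axis=0)
    lengths = np.linalg.norm(segs, axis=1)
    if lengths.sum() > l_max:
        return False
    for a, b in zip(segs[:-1], segs[1:]):
        cos_turn = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if cos_turn < np.cos(theta_max):      # turn sharper than theta_max
            return False
    return True
```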
The third constraint is the minimum obstacle distance limitation that guarantees obstacle avoidance. Let the current position and direction of the robot be $s_i = [x_i, y_i, \theta_j]^T$, where $(x_i, y_i)$ is the position of the robot, $\theta_j$ is the steering angle, and the minimum Euclidean distance from obstacle $\vartheta$ to robot $i$ is $d(s_i, \vartheta)$. Let $d_{\min}$ be the shortest distance that must be kept between the robot and an obstruction for there to be no risk of collision between the two. The constraint is expressed as:
$\vartheta_i(s_i) = [d(s_i,\vartheta_1),\, d(s_i,\vartheta_2),\, \ldots,\, d(s_i,\vartheta_R)]^T - [d_{\min},\, d_{\min},\, \ldots,\, d_{\min}]^T \geq 0.$  (16)
The last two constraints are the velocity and acceleration of the robot. The linear velocity $v_{vi}$, angular velocity $v_{\omega i}$, linear acceleration $a_{vi}$, and angular acceleration $a_{\omega i}$ are defined as:
$v_{vi} = \Delta T_i^{-1} [x_{i+1} - x_i,\; y_{i+1} - y_i]^T, \qquad v_{\omega i} = \Delta T_i^{-1} (\theta_{i+1} - \theta_i),$  (17)
$a_{vi} = \dfrac{2(v_{v,i+1} - v_{vi})}{\Delta T_i + \Delta T_{i+1}}, \qquad a_{\omega i} = \dfrac{2(v_{\omega,i+1} - v_{\omega i})}{\Delta T_i + \Delta T_{i+1}}.$  (18)
Let the maximum linear velocity be $v_{\max}$, the maximum angular velocity be $\omega_{\max}$, the maximum linear acceleration be $a_{\max}$, and the maximum angular acceleration be $\varphi_{\max}$. The limitations can be stated in the following manner:
$v_i(s_{i+1}, s_i, \Delta T_i) = [v_{\max} - v_{vi},\; \omega_{\max} - v_{\omega i}]^T \geq 0,$  (19)
$a_i(s_{i+2}, s_{i+2}, \Delta T_{i+1}, \Delta T_i) = [a_{\max} - a_{vi},\; \varphi_{\max} - a_{\omega i}]^T \geq 0.$  (20)
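An illustrative feasibility check for the kinematic limits (17) to (20) is sketched below; using absolute values for the angular quantities is a simplification made for the example.

```python
import numpy as np

def kinematics_feasible(states, dts, v_max, w_max, a_max, phi_max):
    """Check velocity/acceleration limits for a sequence of robot states
    s_i = (x_i, y_i, theta_i) and segment durations dt_i."""
    s = np.asarray(states, dtype=float)
    dts = np.asarray(dts, dtype=float)
    v_lin = np.linalg.norm(np.diff(s[:, :2], axis=0), axis=1) / dts       # Eq. (17)
    v_ang = np.abs(np.diff(s[:, 2])) / dts
    if np.any(v_lin > v_max) or np.any(v_ang > w_max):                    # Eq. (19)
        return False
    a_lin = 2 * np.abs(np.diff(v_lin)) / (dts[:-1] + dts[1:])             # Eq. (18)
    a_ang = 2 * np.abs(np.diff(v_ang)) / (dts[:-1] + dts[1:])
    return bool(np.all(a_lin <= a_max) and np.all(a_ang <= phi_max))      # Eq. (20)
```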
On the basis of the above constraints, we consider four costs, including the energy cost, steering cost, threat attacking cost, and time cost.
The energy cost is proportional to the robot’s speed as well as the distance it travels, and this relationship can be expressed as:
$\mathrm{cost}_e = \alpha \cdot v^3 \cdot l_i,$  (21)
where $\alpha$ is the consumption factor, $v$ is the speed of the robot, and $l_i$ is the length of the segment. The steering cost is related to the steering angle and can be expressed as:
$\mathrm{cost}_s = \begin{cases} \theta_j, & \theta_j \leq \frac{1}{3}\theta_{\max}, \\ k\,\theta_j, & \frac{1}{3}\theta_{\max} < \theta_j \leq \frac{2}{3}\theta_{\max}, \\ k^2\theta_j, & \frac{2}{3}\theta_{\max} < \theta_j \leq \theta_{\max}, \end{cases}$  (22)
where k is the steering coefficient, and θ max is the maximum steering angle.
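Following the piecewise reading of (22) above (the progression theta_j, k*theta_j, k^2*theta_j over thirds of theta_max is an assumption about the garbled middle branch), the steering cost could be evaluated as:

```python
def steering_cost(theta, theta_max, k):
    """Piecewise steering cost under the reading of Eq. (22) given above."""
    if theta <= theta_max / 3:
        return theta
    if theta <= 2 * theta_max / 3:
        return k * theta
    return k ** 2 * theta
```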
The movement track is likely to be affected by threat targets such as communication radars, base stations, and buildings. To avoid threat targets, the threat attacking cost $\mathrm{cost}_t$ should be considered. First, we calculate the threat degree of each track point as:
$f_j(x) = \begin{cases} \dfrac{a^2 k_j^2}{d_{xj}^2}, & d_{xj} \leq R_j, \\ \dfrac{k_j}{d_{xj}^2}, & R_j < d_{xj} \leq b R_j, \\ 0, & d_{xj} > b R_j, \end{cases}$  (23)
where $k_j$ is the threat level of the threat target, $a$ and $b$ are the weights of the threat factors, $d_{xj}$ is the distance from the robot to the center of the $j$th threat target, and $R_j$ represents the radius of the threat target. To account for each threat target in each segment of the total moving track, we evenly divide each segment into 15 parts. The threat attacking cost of the mobile robot in each segment is the average of the threat degrees at the positions $x = \frac{2}{15}, \frac{4}{15}, \frac{6}{15}, \frac{8}{15}, \frac{10}{15}, \frac{12}{15}, \frac{14}{15}$ along the segment. Hence, $\mathrm{cost}_t$ can be calculated as:
$\mathrm{cost}_t = \dfrac{l_i}{7} \sum_{m=1}^{N} \sum_{j=1}^{7} f_j\!\left(\dfrac{2j}{15} l_i\right),$  (24)
where N is the number of threat targets.
The moving time cost $\mathrm{cost}_h$ is connected with the movement time of each segment and can be expressed as:
$\mathrm{cost}_h = \mu \cdot \Delta T_i,$  (25)
where $\mu$ is the movement time factor and $\Delta T_i$ is the movement time of the $i$th segment. The effective cost of the moving track is defined as:
$f(x) = \sum_{i=1}^{D+1} \left( \lambda_1 \mathrm{cost}_t + \lambda_2 \mathrm{cost}_e + \lambda_3 \mathrm{cost}_h \right) + \lambda_4 \sum_{i=1}^{D} \mathrm{cost}_s,$  (26)
where $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ represent the weights of $\mathrm{cost}_t$, $\mathrm{cost}_e$, $\mathrm{cost}_h$, and $\mathrm{cost}_s$, respectively.
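A minimal sketch of the weighted total cost (26), with each cost passed as a list of per-segment values (argument names are illustrative):

```python
def path_cost(cost_t, cost_e, cost_h, cost_s, lambdas):
    """Weighted total moving-track cost of Eq. (26); lambdas = (l1, l2, l3, l4)."""
    l1, l2, l3, l4 = lambdas
    weighted = sum(l1 * t + l2 * e + l3 * h
                   for t, e, h in zip(cost_t, cost_e, cost_h))
    return weighted + l4 * sum(cost_s)
```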
In order to verify the operational efficiency of the algorithm and the effectiveness of the path planning, a simulation was conducted in MATLAB R2020a with the following experimental parameters: each algorithm runs independently 30 times, the pollen population size is $N_p = 300$, and the maximum number of iterations is $T_{\max} = 1000$. We use PSO, the GA, the FPA, SSO, ABC, and the FPA-CCDE to perform mobile robot path planning. The parameter settings, which are also used in [25,26], are listed in Table 5.
Table 6 indicates that the time spent in the path planning process by the FPA-CCDE is better than that of the FPA, ABC, PSO, and SSO and slightly worse than that of the GA. The cost of path planning is 65.7% lower than the FPA, 2.8% lower than ABC, 1.7% lower than PSO, 1.7% lower than SSO, and 0.9% lower than the GA. Regarding average variance, the FPA-CCDE is 99.8% lower than the FPA, 3.2% lower than ABC, 63.1% lower than PSO, 80.7% lower than SSO, and 89.8% lower than the GA. It can be seen that, in comparison to the other algorithms, the FPA-CCDE can maintain better efficiency in path planning while consuming the least movement cost; at the same time, the average error of the algorithm is the smallest, and the algorithm is more robust.
Figure 6 demonstrates the convergence curve of the objective function of each algorithm as the number of iterations increases. We can see that the ABC algorithm finds the optimal solution on the condition that the number of iterations is 900, and ABC, PSO, and the GA all have the phenomenon of early convergence, which cannot reduce movement expenses. The number of algorithm iterations required for SSO to find the optimal solution is considerably larger than other algorithms. Nonetheless, when the number of iterations of the FPA-CCDE is 700, the value of the objective function does not change. Moreover, the cost of this algorithm is significantly lower than that of the other algorithms.
Figure 7 is a top view of the robot path planning. It shows that a route from the lower left corner to the upper right corner of the area must be designed, in which the buildings are obstacles encountered during the movement and the colored icons represent the threat targets. Moreover, it can be observed that the SSO and PSO algorithms are highly volatile during the optimization process, and large fluctuations occur at the end of the optimization procedure. The FPA-CCDE has better stability in the optimization process, with smooth curves and low volatility. The effect of the FPA-CCDE is superior to that of the other algorithms.
Figure 8a,b display the number of iterations and running time required for each algorithm to perform path planning under different numbers of segments. Figure 8a demonstrates that as the number of segments D increases, the number of iterations of each algorithm also increases proportionally. When D is set to 35, the number of iterations required by each algorithm to accomplish the path planning tends to be stable. It can be seen from Figure 8b that with the increase in the number of segments, the running time of each algorithm to complete the path planning also increases. When the number of segments D is set to 35, the running time of ABC, SSO, the GA, and the FPA-CCDE to complete the path planning tends to be stable, whereas the execution time of the FPA and PSO increases more rapidly than that of the other algorithms at this time.
To verify the stability of these algorithms applied in robot path planning, each algorithm runs independently, and the obtained value of the path planning cost function is demonstrated in Figure 9. The cost value of the FPA-CCDE mostly maintains a stable state, and its value is considerably less than that of the other algorithms.

6. Conclusions

In order to solve the optimization problem of high-dimensional variables, this paper designs a flower pollination optimization algorithm based on cosine cross-generation differential evolution. Specifically, individuals are directed to approach the optimal solution by means of cross-generation differential evolution, so that the local searching process of the algorithm is oriented. Setting the cosine inertia weight makes the global search initially strengthen the search ability at a faster rate and enhances the convergence speed of the algorithm. At the same time, the scaling factor and crossover probability are dynamically updated through the parameter adaptive adjustment mechanism, thereby improving the population richness, and the cross-generation roulette method is adopted to reduce the probability of falling into the local optimal solution. Simulation results indicate that the FPA-CCDE displays significant performance advantages in terms of accuracy, average error, and stability. In addition, we apply the FPA-CCDE to solve the robot path planning issue. The simulation test demonstrates that our algorithm is capable of low-cost, high-efficiency path planning. In the future, it is expected to be employed in industrial scenarios, such as unmanned submarine path design, automobile cargo distribution route planning, and UAV smart grid fault monitoring.

Author Contributions

Methodology, Y.W. (Yaxing Wei); Data curation, Y.W. (Yanfei Wu); Writing—original draft, S.W.; Writing—review & editing, Y.J. and L.L.; Visualization, Y.W. (Yanfei Wu); Supervision, Y.J.; Project administration, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by National Natural Science Foundation of China under Grant 62071075 and 61971077, in part by the Natural Science Foundation of Chongqing under Grant cstc2020jcyj-msxmX0704, in part by the Fundamental Research Funds for the Central Universities under Grant 2020CDJ-LHZZ-022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liang, Y.C.; Smith, A.E. An ant colony optimization algorithm for the redundancy allocation problem (RAP). IEEE Trans. Reliab. 2004, 53, 417–423. [Google Scholar] [CrossRef]
  2. Ma, Z.; Ai, B.; He, R.; Wang, G.; Zhong, Z. Impact of UAV Rotation on MIMO Channel Characterization for Air-to-Ground Communication Systems. IEEE Trans. Veh. Technol. 2020, 69, 12418–12431. [Google Scholar] [CrossRef]
  3. Pandey, P.; Shukla, A.; Tiwari, R. Aerial path planning using meta-heuristics: A survey. In Proceedings of the 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT), Coimbatore, India, 22–24 February 2017; pp. 1–7. [Google Scholar]
  4. Song, B.; Qi, G.; Xu, L. A Survey of Three-Dimensional Flight Path Planning for Unmanned Aerial Vehicle. In Proceedings of the 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 5010–5015. [Google Scholar]
  5. Song, Y.; Zhang, K.; Hong, X.; Li, X. A novel multi-objective mutation flower pollination algorithm for the optimization of industrial enterprise R&D investment allocation. Appl. Soft Comput. 2021, 109, 107530. [Google Scholar]
  6. Wei, S.; Zhang, S.J.; Cheng, Y.F. An Improved Multi-Objective Genetic Algorithm for Large Planar Array Thinning. IEEE Trans. Magn. 2016, 52, 1–4. [Google Scholar]
  7. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Mhs95 Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 2002. [Google Scholar]
  8. Cao, H.; Hu, H.; Qu, Z.; Yang, L. Heuristic solutions of virtual network embedding: A survey. China Commun. 2018, 15, 186–219. [Google Scholar] [CrossRef]
  9. Pan, L.; Feng, X.; Sang, F.; Li, L.; Leng, M. An improved back propagation neural network based on complexity decomposition technology and modified flower pollination optimization for short-term load forecasting. Neural Comput. Appl. 2019, 31, 2679–2697. [Google Scholar] [CrossRef]
  10. Gan, C.; Cao, W.H.; Liu, K.Z.; Wu, M.; Zhang, S.B. A New Hybrid Bat Algorithm and its Application to the ROP Optimization in Drilling Processes. IEEE Trans. Ind. Inform. 2020, 16, 7338–7348. [Google Scholar] [CrossRef]
  11. San-José-Revuelta, L.M.; Casaseca-de-la-Higuera, P. A new flower pollination algorithm for equalization in synchronous DS/CDMA multiuser communication systems. Soft Comput. 2020, 24, 13069–13083. [Google Scholar] [CrossRef]
  12. Wu, T.; Feng, Z.; Wu, C.; Lei, G.; Wang, X. Multiobjective Optimization of a Tubular Coreless LPMSM Based on Adaptive Multiobjective Black Hole Algorithm. IEEE Trans. Ind. Electron. 2020, 67, 3901–3910. [Google Scholar] [CrossRef]
  13. Li, J.Q.; Pan, Q.K.; Duan, P.Y. An Improved Artificial Bee Colony Algorithm for Solving Hybrid Flexible Flowshop With Dynamic Operation Skipping. IEEE Trans. Cybern. 2016, 46, 1311–1324. [Google Scholar] [CrossRef]
  14. Chandran, T.R.; Reddy, A.V.; Janet, B. An effective implementation of Social Spider Optimization for text document clustering using single cluster approach. In Proceedings of the 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore, India, 20–21 April 2018; pp. 508–511. [Google Scholar]
  15. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 687–697. [Google Scholar] [CrossRef]
  16. Klein, C.E.; Segundo, E.H.V.; Mariani, V.C.; Leandro, D.S.C. Modified Social-Spider Optimization Algorithm Applied to Electromagnetic Optimization. IEEE Trans. Magn. 2016, 52, 1–4. [Google Scholar] [CrossRef]
  17. Chen, X.; Tang, C.; Jian, W.; Lei, Z. A novel hybrid wolf pack algorithm with harmony search for global numerical optimization. In Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 13–16 December 2017; pp. 2164–2169. [Google Scholar]
  18. Huang, M.; Zhan, X.; Liang, X. Improvement of Whale Algorithm and Application. In Proceedings of the 2019 IEEE 7th International Conference on Computer Science and Network Technology (ICCSNT), Dalian, China, 19–20 October 2019; pp. 6–8. [Google Scholar]
  19. Sudabattula, S.K.; Kowsalya, M.; Velamuri, S.; Melimi, R.K. Optimal Allocation of Renewable Distributed Generators and Capacitors in Distribution System Using Dragonfly Algorithm. In Proceedings of the 2018 International Conference on Intelligent Circuits and Systems (ICICS), Phagwara, India, 20–21 April 2018; pp. 393–396. [Google Scholar]
  20. Wei, G.U. An improved whale optimization algorithm with cultural mechanism for high-dimensional global optimization problems. In Proceedings of the 2020 IEEE International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 6–8 November 2020; pp. 1282–1286. [Google Scholar]
  21. Peng, J.; Ye, Y.; Chen, S.; Dong, C. A novel chaotic dragonfly algorithm based on sine-cosine mechanism for optimization design. In Proceedings of the 2019 2nd International Conference on Information Systems and Computer Aided Education (ICISCAE), Dalian, China, 28–30 September 2019; pp. 185–188. [Google Scholar]
  22. Yang, X.S. Nature-Inspired Metaheuristic Algorithms, 2nd ed.; Xinshe, Y., Ed.; Luniver Press: Frome, UK, 2019. [Google Scholar]
  23. Wang, H.; Sun, H.; Li, C. Diversity enhanced particle swarm optimization with neighborhood search. Inf. Sci. 2013, 223, 119–135. [Google Scholar] [CrossRef]
  24. Tian, M.; Gao, X.; Dai, C. Differential evolution with improved individual-based parameter setting and selection strategy. Appl. Soft Comput. 2017, 56, 286–297. [Google Scholar] [CrossRef]
  25. Yang, X.S. Flower Pollination Algorithm for Global Optimization. In Proceedings of the Unconventional Computation and Natural Computation: 11th International Conference (UCNC 2012), Orleans, France, 3–7 September 2012; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  26. Zhou, Y.; Wang, R.; Luo, Q. Elite opposition-based flower pollination algorithm. Neurocomputing 2016, 188, 294–310. [Google Scholar] [CrossRef]
  27. Bian, J.H.; He, X.T.; Fan, Q.W. Structural optimization of BP neural network based on adaptive flower pollination algorithm. Comput. Eng. Appl. 2018, 54, 50–56. [Google Scholar]
28. Supriya, D.; Palaniandavar, V. An improved global-best-driven flower pollination algorithm for optimal design of two-dimensional FIR filter. Soft Comput. 2019, 23, 8855–8872. [Google Scholar]
  29. Yang, X.; Shen, Y.J. An Improved Flower Pollination Algorithm with three Strategies and its Applications. Neural Process. Lett. 2020, 51, 675–695. [Google Scholar] [CrossRef]
  30. Hui, S.; Suganthan, P.N. Ensemble and Arithmetic Recombination-Based Speciation Differential Evolution for Multimodal Optimization. IEEE Trans. Cybern. 2016, 46, 64–74. [Google Scholar] [CrossRef]
  31. Qu, B.Y.; Liang, J.J.; Wang, Z.Y.; Liu, D.M. Solving CEC 2015 multi-modal competition problems using neighborhood based speciation differential evolution. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 3214–3219. [Google Scholar]
  32. Li, Y.L.; Zhan, Z.H.; Gong, Y.J.; Chen, W.N.; Zhang, J.; Li, Y. Differential Evolution with an Evolution Path: A DEEP Evolutionary Algorithm. IEEE Trans. Cybern. 2015, 45, 1798–1810. [Google Scholar] [CrossRef] [Green Version]
  33. Liang, J.J.; Runarsson, T.P.; Mezura-Montes, E.; Clerc, M.; Suganthan, P.N.; Coello, C.C.; Deb, K. Problem definitions and evaluation criteria for the cec 2006 special session on constrained real-parameter optimization. J. Appl. Mech. 2006, 41, 8–31. [Google Scholar]
34. Tang, K.; Yao, X.; Suganthan, P.N.; Chen, Y.P.; Chen, C.M.; Yang, Z. Benchmark Functions for the CEC’2008 Special Session and Competition on Large Scale Global Optimization; USTC: Hefei, China, 2007. [Google Scholar]
35. Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2010 Competition on Constrained Real-Parameter Optimization. 2010. Available online: https://al-roomi.org/multimedia/CEC_Database/CEC2010/RealParameterOptimization/CEC2010_RealParameterOptimization_TechnicalReport.pdf (accessed on 8 November 2022).
36. Liang, J.J.; Qu, B.Y.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization. 2013. Available online: https://al-roomi.org/multimedia/CEC_Database/CEC2013/RealParameterOptimization/CEC2013_RealParameterOptimization_TechnicalReport.pdf (accessed on 8 November 2022).
  37. Lin, X.; Wang, Z.Q.; Chen, X.Y. Path Planning with Improved Artificial Potential Field Method Based on Decision Tree. In Proceedings of the 2020 27th Saint Petersburg International Conference on Integrated Navigation Systems (ICINS), Saint Petersburg, Russia, 25–27 May 2020. [Google Scholar]
  38. Li, B.; Jiang, W.S. Optimizing Complex Functions by Chaos Search. Cybernet. Syst. 1998, 29, 409–419. [Google Scholar]
  39. Agafonov, A.; Myasnikov, V. Stochastic On-time Arrival Problem with Levy Stable Distributions. In Proceedings of the 2019 4th International Conference on Intelligent Transportation Engineering (ICITE), Singapore, 5–7 September 2019; pp. 227–231. [Google Scholar]
  40. Tarczewski, T.; Grzesiak, L.M. An Application of Novel Nature-Inspired Optimization Algorithms to Auto-Tuning State Feedback Speed Controller for PMSM. IEEE Trans. Ind. Appl. 2018, 54, 2913–2925. [Google Scholar] [CrossRef]
  41. Qiu, X.; Xu, J.X.; Tan, K.C.; Abbass, H.A. Adaptive Cross-Generation Differential Evolution Operators for Multiobjective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 232–244. [Google Scholar] [CrossRef]
  42. Chen, Y.; Pi, D. An innovative flower pollination algorithm for continuous optimization problem. Appl. Math. Model. 2020, 83, 237–265. [Google Scholar] [CrossRef]
  43. Zhang, X.; Zhang, X.; Wang, L. Antenna Design by an Adaptive Variable Differential Artificial Bee Colony Algorithm. IEEE Trans. Magn. 2017, 54, 1–4. [Google Scholar] [CrossRef]
Figure 1. The framework of the FPA-CCDE.
Figure 2. Fitness value graphs of benchmark functions: (a) the fitness value of F2; (b) the fitness value of F19; (c) the fitness value of F30.
Figure 3. Fitness value graphs of benchmark functions: (a) the fitness value of F21; (b) the fitness value of F24; (c) the fitness value of F29.
Figure 4. Fitness value graphs on F9: (a) the fitness value of F9 with 10 dimensions; (b) the fitness value of F9 with 20 dimensions; (c) the fitness value of F9 with 30 dimensions.
Figure 5. Stability of different algorithms: (a) stability on F36; (b) stability on F24; (c) stability on F25.
Figure 6. Partial enlargement of the objective function convergence curve.
Figure 7. Two-dimensional path planning route of the mobile robot.
Figure 8. Iterations and running time of different algorithms with various segment numbers: (a) iterations under different segment numbers; (b) running time under different segment numbers.
Figure 9. Partial enlargement of the fluctuation.
Table 1. Parameter settings of the compared algorithms.

Algorithm    Parameter Settings
FPA          P = 0.8, λ = 1.5
ABC          SN = 50
PSO          c1 = 2, c2 = 2, ω = 1
SSO          Ra = 1, Pc = 0.7, Pm = 0.7
GA           Pc = 0.7, Pm = 0.04
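For context, the two FPA parameters in Table 1 are the switch probability P between global and local pollination and the Lévy exponent λ of the flight step. Below is a minimal Python sketch of one generation of the baseline FPA update (not the proposed FPA-CCDE), assuming Mantegna's algorithm for the Lévy step; the population shape, bounds handling, and objective function are placeholders.

```python
import math
import numpy as np

def levy_step(dim, lam=1.5):
    # Mantegna's algorithm for a Levy-distributed step with exponent lam
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def fpa_generation(pop, objective, best, p=0.8, lam=1.5):
    """One generation of the baseline FPA (minimisation), using Table 1's P and lambda."""
    n, dim = pop.shape
    for i in range(n):
        if np.random.rand() < p:
            # global pollination: Levy flight towards the current best flower
            cand = pop[i] + levy_step(dim, lam) * (best - pop[i])
        else:
            # local pollination: random walk between two other flowers
            j, k = np.random.choice(n, size=2, replace=False)
            cand = pop[i] + np.random.rand() * (pop[j] - pop[k])
        if objective(cand) < objective(pop[i]):   # greedy replacement
            pop[i] = cand
            if objective(cand) < objective(best):
                best = cand.copy()
    return pop, best

# example: one generation on a sphere objective
# pop = np.random.uniform(-5.0, 5.0, size=(20, 10)); best = pop[0].copy()
# pop, best = fpa_generation(pop, lambda x: float(np.sum(x ** 2)), best)
```

The greedy replacement mirrors the usual FPA selection rule; in practice a bound-clipping step would follow each candidate update.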
Table 4. Results on F9 as the variable dimension increases (mean value, with standard deviation in parentheses).

FPA-CCDE
  D = 10:  6.2388 × 10^-28 (1.1777 × 10^-27)
  D = 30:  1.19848 × 10^-2 (3.31242 × 10^-2)
  D = 50:  7.98987 × 10^-3 (2.76537 × 10^-2)
  D = 70:  1.19851 × 10^-2 (3.31241 × 10^-2)
  D = 100: 1.19853 × 10^-2 (3.31241 × 10^-2)
FPA
  D = 10:  2.89070 × 10^-1 (5.25541 × 10^-2)
  D = 30:  1.18909 × 10^0 (1.76364 × 10^-1)
  D = 50:  1.91691 × 10^0 (2.09563 × 10^-1)
  D = 70:  2.51704 × 10^0 (2.33520 × 10^-1)
  D = 100: 3.12524 × 10^0 (3.38376 × 10^-1)
ABC
  D = 10:  2.55878 × 10^-1 (6.50620 × 10^-2)
  D = 30:  1.50392 × 10^0 (1.78987 × 10^-1)
  D = 50:  2.70389 × 10^0 (1.48553 × 10^-1)
  D = 70:  3.64166 × 10^0 (1.22192 × 10^-1)
  D = 100: 4.89987 × 10^0 (1.35401 × 10^-1)
PSO
  D = 10:  1.59873 × 10^-1 (5.00000 × 10^-2)
  D = 30:  4.35873 × 10^-1 (6.37704 × 10^-2)
  D = 50:  6.67873 × 10^-1 (7.48331 × 10^-2)
  D = 70:  8.39873 × 10^-1 (8.66025 × 10^-2)
  D = 100: 1.06387 × 10^0 (7.57188 × 10^-2)
GA
  D = 10:  6.45494 × 10^-1 (1.10375 × 10^-1)
  D = 30:  1.78149 × 10^0 (1.36208 × 10^-1)
  D = 50:  2.60254 × 10^0 (1.28445 × 10^-1)
  D = 70:  3.38987 × 10^0 (1.60031 × 10^-1)
  D = 100: 4.27658 × 10^0 (1.17499 × 10^-1)
SSO
  D = 10:  9.98733 × 10^-2 (2.54849 × 10^-10)
  D = 30:  3.03873 × 10^-1 (4.54606 × 10^-2)
  D = 50:  5.11873 × 10^-1 (3.31662 × 10^-2)
  D = 70:  6.35873 × 10^-1 (4.89898 × 10^-2)
  D = 100: 7.59873 × 10^-1 (5.00000 × 10^-2)
Table 5. Parameter settings for path planning.

Parameter Name                                   Value
Movement time factor (μ)                         1.5
Threat factor (a)                                1.2
Threat factor (b)                                1.5
Steering coefficient (k)                         2.5
Maximum turning angle (θmax)                     60
Maximum linear velocity (vmax)                   1.0
Maximum angular velocity (ωmax)                  0.8
Maximum linear acceleration (amax)               1.0
Maximum angular acceleration (φmax, degree)      1.0
Maximum movement distance (Lmax, metre)          500
Threat attacking cost weight (λ1)                0.5
Energy cost weight (λ2)                          0.3
Movement time cost weight (λ3)                   0.5
Steering cost weight (λ4)                        0.3
Threat target number (N)                         18
Segment number (D)                               15
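As an illustration of how the four cost weights in Table 5 can enter a single path objective, the following Python sketch combines placeholder threat, energy, movement-time, and steering terms into a weighted sum using λ1–λ4. The individual cost models (threat circles, length-based energy, constant-speed travel time, heading-change steering) are assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np

# Weights taken from Table 5 (lambda_1..lambda_4); the cost terms below are
# illustrative placeholders, not the paper's exact threat/energy/time/steering models.
WEIGHTS = {"threat": 0.5, "energy": 0.3, "time": 0.5, "steering": 0.3}

def path_cost(waypoints, threat_centers, threat_radius=1.0, speed=1.0):
    """Weighted cost of a 2-D path given as an (n_points, 2) array of waypoints."""
    segments = np.diff(waypoints, axis=0)
    lengths = np.linalg.norm(segments, axis=1)

    energy_cost = lengths.sum()                 # proxy: total path length
    time_cost = energy_cost / speed             # proxy: travel time at constant speed

    # proxy threat cost: penalise waypoints that fall inside any threat circle
    dists = np.linalg.norm(waypoints[:, None, :] - threat_centers[None, :, :], axis=2)
    threat_cost = np.maximum(0.0, threat_radius - dists).sum()

    # proxy steering cost: accumulated absolute heading change between segments
    headings = np.arctan2(segments[:, 1], segments[:, 0])
    steering_cost = np.abs(np.diff(headings)).sum()

    return (WEIGHTS["threat"] * threat_cost + WEIGHTS["energy"] * energy_cost
            + WEIGHTS["time"] * time_cost + WEIGHTS["steering"] * steering_cost)

# example: straight-line path with 15 segments and 18 random threat points
# path = np.linspace([0.0, 0.0], [100.0, 100.0], 16)
# threats = np.random.uniform(0.0, 100.0, size=(18, 2))
# cost = path_cost(path, threats)
```

A cost of this form would then serve as the fitness that the optimizer minimises over the D = 15 path segments.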
Table 6. Average objective function of each algorithm.

Algorithm   Convergence Iteration   Time       Running Time   Function Value     Average Error
FPA         1224                    20.24543   25.32147       4.76025 × 10^6     1.69787 × 10^6
ABC         913                     18.27653   20.33333       1.67925 × 10^6     3.07283 × 10^3
PSO         621                     18.35461   20.00214       1.65191 × 10^6     8.06825 × 10^3
SSO         1127                    19.26845   21.52612       1.66159 × 10^6     1.54797 × 10^4
GA          411                     17.25684   20.45647       1.64760 × 10^6     2.92722 × 10^4
FPA-CCDE    703                     18.26453   20.35951       1.63272 × 10^6     2.97392 × 10^3