Article

Agricultural UAV Path Planning Based on a Differentiated Creative Search Algorithm with Multi-Strategy Improvement

1
Yunnan Key Laboratory of Computer Technology Applications, Kunming University of Science and Technology, Kunming 650500, China
2
Library and Archives, Yunnan Communications Vocational and Technical College, Kunming 650500, China
3
Logistic Service Group, Yunnan University, Kunming 650500, China
*
Author to whom correspondence should be addressed.
Machines 2024, 12(9), 591; https://doi.org/10.3390/machines12090591
Submission received: 25 June 2024 / Revised: 21 August 2024 / Accepted: 23 August 2024 / Published: 26 August 2024
(This article belongs to the Special Issue Design and Control of Agricultural Robots)

Abstract

A differentiated creative search algorithm with multi-strategy improvement (MSDCS) is proposed for the path planning problem of agricultural UAVs under different complicated situations. First, the good point set and oppositional learning strategies are used to effectively improve population diversity and solution quality; an adaptive fitness–distance balance reset strategy is proposed to motivate the low performers to move closer to the region near the optimal solution and find potential optimal solutions; and a vertical and horizontal crossover strategy with random dimensions is proposed to improve the computational accuracy of the algorithm and its ability to jump out of local optima. Second, the MSDCS is compared with different algorithms on the IEEE CEC2017 test set, which consists of 29 test functions. The results demonstrate that the MSDCS achieves the optimal value on 23 test functions, surpasses the comparison algorithms in convergence accuracy by at least one order of magnitude while also showing better convergence speed and stability, and ranks first in comprehensive performance. Finally, the enhanced algorithm was employed to address the path planning problem of agricultural UAVs. The experimental results demonstrate that the MSDCS outperforms the comparison algorithms in path planning across various contexts. Consequently, the MSDCS can generate optimal paths that are both rational and safe for agricultural UAV operations.

1. Introduction

An unmanned aerial vehicle (UAV) is an aircraft operated by a remote control or an automatic control system, with no pilot on board. In recent years, UAVs have been increasingly employed in agriculture owing to their great flexibility, mobility, low safety risk, and low cost. Agricultural UAVs are unmanned aerial vehicles designed specifically for agricultural use, and they can significantly boost agricultural output, decrease expenses, and provide environmental benefits. For instance, UAVs may precisely spray crops using path planning, ensuring uniform coverage [1]. UAVs can fly along prearranged routes and gather data on crop growth to assist farmers in making more informed management choices [2]. UAVs can use path planning to conduct regular inspections of fields, monitor soil moisture and crop water storage with on-board sensors, and direct the irrigation system for precision irrigation [3]. UAVs can create 3D maps of farms to help farmers understand the topography of their farmland and plan irrigation systems and crop planting [4]. UAVs can also be used to assess crop damage and provide data support to agricultural insurers, helping farmers obtain compensation more quickly [5].
Path planning is a key function for UAV mission execution and a major challenge for autonomous UAVs in engineering applications. Current UAV path planning algorithms fall into two broad categories: classical algorithms and meta-heuristic algorithms. Classical algorithms include the sampling-based Rapidly-exploring Random Tree (RRT) algorithm [6], the A* algorithm [7], and so on; these algorithms search less efficiently and converge slowly in complex situations. Path planning problems are classic NP-hard problems with intricate constraints that make it difficult for general solvers to find exact solutions. Meta-heuristic algorithms can effectively solve complex combinatorial optimization problems. Common meta-heuristic algorithms include genetic algorithms (GAs) [8], ant colony optimization algorithms (ACOs) [9], gray wolf optimization algorithms (GWOs) [10], Harris hawks optimization algorithms (HHOs) [11], whale optimization algorithms (WOAs) [12], sine cosine algorithms (SCAs) [13], and slime mould optimization algorithms (SMAs) [14].
Meta-heuristic algorithms are widely used in the study of UAV path planning. For example, Chen et al. [15] proposed an enhanced chimp optimization algorithm to address the UAV path planning problem in a 3D environment. Zhang et al. [16] proposed an improved adaptive gray wolf optimization algorithm for the three-dimensional path planning of UAVs in the complicated environment of material conveying in earthquake disaster zones. Wang et al. [17] proposed an improved bio-inspired tuna swarm optimization algorithm for planning UAV flight paths safely and efficiently in complicated obstacle situations. Tan et al. [18] proposed an improved particle swarm optimization algorithm for balancing the exploitation and exploration capabilities of UAV path planning. He and Wang [19] proposed an improved chaos sparrow search algorithm to address the slow convergence and entrapment in local optima of UAV path planning in three-dimensional complicated environments. Yu et al. [20] used a hybrid gray wolf optimization and differential evolution algorithm to tackle the UAV path planning problem. These algorithms can swiftly identify effective paths by employing various search techniques across multiple iterations.
However, one of the most important difficulties remains: how to keep the algorithm from falling into a local optimum [21]. To enhance the path smoothness of agricultural UAVs in complex environments and achieve a reasonable and safe flight path, this study treats the agricultural UAV path planning problem as a complex optimization problem that requires an effective solution and proposes a differentiated creative search algorithm with multi-strategy improvement (MSDCS) for agricultural UAV path planning. The differentiated creative search algorithm (DCS) is a novel meta-heuristic optimization algorithm proposed by Duankhan et al. [22] in 2024. The DCS models the iterative optimization of team performance and uses differentiated knowledge acquisition and creative realism strategies to solve complex optimization problems. While it is effective at solving complex optimization problems, it suffers from limited search accuracy and does not adequately explore unknown regions of the search space. This may prevent agricultural UAVs from obtaining sufficiently fast and smooth paths in complex environments. Therefore, in this paper, the good point set is used to resolve the uneven distribution of the initial population in the original DCS, and the oppositional learning strategy is used to improve the quality of the population and strengthen the algorithm's search ability. To bring low performers closer to the region near the optimal solution, an adaptive fitness–distance balance reset strategy is proposed, which helps the algorithm search meticulously near the optimal solution to find potential optimal solutions. To address the issue that the DCS aggregates population individuals around the optimal individual in later iterations, which can easily lead to search stagnation and entrapment in local optima, a vertical and horizontal crossover strategy with random dimensions is proposed to improve the algorithm's computational accuracy and its ability to jump out of local optima. A comparison with different algorithms on 29 test functions from the CEC2017 test set shows that the MSDCS possesses better convergence, and its stronger comprehensive performance is further verified using the Wilcoxon rank-sum test and the Friedman test. Finally, different agricultural UAV environment models were established to carry out simulation experiments on agricultural UAV trajectories, and the applicability of the MSDCS to the agricultural UAV path planning problem was verified [23]. The main contributions of the novel algorithm proposed in this study are as follows:
(1)
The performance of the algorithm is enhanced, and a new optimization method for path planning of agricultural UAVs is developed by utilizing a good point set and oppositional learning strategy, an adaptive fitness–distance balance reset strategy, and a vertical and horizontal crossover strategy with random dimensions based on the original DCS algorithm.
(2)
The proposed algorithm is compared with nine other algorithms in simulation experiments based on the CEC2017 test set, which validate the effectiveness of the MSDCS; its comprehensive performance is further validated using the Wilcoxon rank-sum test and the Friedman test.
(3)
The MSDCS is applied to the agricultural UAV path planning problem to minimize the objective function of the problem, which has complicated constraints. The results show that the MSDCS can generate high-quality solutions for agricultural UAV path planning, even in complex environmental conditions. Thus, a reasonable and safe flight path was obtained.
The remainder of this paper is organized as follows. Section 2 introduces the basic differentiated creative search algorithm; Section 3 describes, in detail, the three improvement strategies of the MSDCS; Section 4 calculates the time complexity of the MSDCS; Section 5 describes the environment, parameters, and result analysis of the simulation experiments; Section 6 models and experiments the agricultural UAV path planning problem; Section 7 discusses the results of the MSDCS in agricultural UAV path planning; and Section 8 summarizes the research of this paper.

2. Differentiated Creative Search Algorithm

2.1. Population Initialization

The process of the DCS algorithm is as follows. A set of candidate solutions $X$ (team members) is randomly generated within given upper and lower bounds, as shown in Equation (1):
$$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,D-1} & x_{1,D} \\ \vdots & & \vdots & & \vdots & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,D-1} & x_{i,D} \\ \vdots & & \vdots & & \vdots & \vdots \\ x_{NP-1,1} & \cdots & x_{NP-1,d} & \cdots & x_{NP-1,D-1} & x_{NP-1,D} \\ x_{NP,1} & \cdots & x_{NP,d} & \cdots & x_{NP,D-1} & x_{NP,D} \end{bmatrix} \tag{1}$$
where $X_i$ denotes the $i$th candidate solution, $X_{i,d}$ denotes the $d$th position of that candidate solution, $NP$ represents the number of candidate solutions, and $D$ represents the dimension of the optimization problem. Each element of $X$ is randomly generated by Equation (2) as follows:
$$X_{i,d} = lb_d + U_i(0,1)\cdot\left(ub_d - lb_d\right) \tag{2}$$
where $lb_d$ and $ub_d$ are the lower and upper bounds of the optimization problem, respectively, and $U_i(0,1)$ represents a uniform distribution on the interval $[0,1]$. After initialization, each individual is evaluated to determine its fitness value. Then, the individuals in $X$ are ordered by fitness value, with a smaller fitness value indicating higher performance.
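For readers who want to reproduce the initialization step, the following Python sketch implements Equation (2); the function name, array shapes, and the NumPy random generator are illustrative assumptions rather than part of the original DCS description.

```python
import numpy as np

def initialize_population(NP, D, lb, ub, rng=np.random.default_rng()):
    """Random initialization of Equation (2): X[i, d] = lb[d] + U(0, 1) * (ub[d] - lb[d])."""
    lb = np.asarray(lb, dtype=float)   # lower bounds, shape (D,)
    ub = np.asarray(ub, dtype=float)   # upper bounds, shape (D,)
    return lb + rng.uniform(0.0, 1.0, size=(NP, D)) * (ub - lb)
```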

2.2. Differentiated Knowledge Acquisition

The stages of differentiated knowledge acquisition are rooted in the different potentials for knowledge acquisition and can be categorized into three types, depending on the individual's ability and level of performance. The first type is the high performer; these members serve as the team's idea generators and have a mindset that produces a diverse range of ideas. The second type is the average performer, who serves as a concept refiner, examining and comparing several ideas to determine the optimal answer. The third type is the low performer; these members contribute to the team's diversity and exploration by offering new ideas and approaches to problem solving. The position of each individual is updated using the following Equations (3)–(6), in which the poorer performers need to assimilate new knowledge or experience more than the better performers:
$$X_{i,d} = \begin{cases} X_{next_{i,d}}, & \text{if } U_{i,d}(0,1) \le \delta_i \ \text{or}\ d = j_{rand} \\ X_{i,d}, & \text{otherwise} \end{cases} \tag{3}$$
$$\delta_i = \frac{\operatorname{round}\!\left(U_i(0,1)\,\varphi_i\right) + \left[\,U_i(0,1) < \varphi_i\,\right]}{2} \tag{4}$$
$$\varphi_i = 0.25 + 0.55\left(\frac{R_i}{NP}\right)^{0.5} \tag{5}$$
$$j_{rand} = \operatorname{randint}(1, D) \tag{6}$$
As stated in Equation (3), $U_{i,d}(0,1)$ denotes a random number between 0 and 1 generated independently for each dimension of each individual. If the condition is met, the position is updated by Equation (7); otherwise, the position remains unchanged. The parameter $\delta_i$ is the quantitative knowledge acquisition rate of the $i$th individual at the $t$th iteration. The two uniformly distributed random numbers $U_i(0,1)$ in Equation (4) are generated independently: the first term is rounded to the nearest whole number, while the second term compares the random number with $\varphi_i$ and takes the value 1 if the comparison holds and 0 otherwise. $\varphi_i$ is the value of the variable $\varphi$ for an individual at the $t$th iteration, and $R_i$ is the $i$th individual's ranking at the start of the $t$th iteration. The $\varphi$ coefficient measures the degree of imperfection in an individual's knowledge: higher values indicate larger gaps in knowledge and the need for additional learning and experience to compensate, whereas smaller values indicate a more comprehensive and solid foundation in the relevant knowledge area. $j_{rand}$ is an integer chosen randomly from 1 to $D$ and is generated once per individual.
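As a minimal sketch of Equations (3)–(6), the fragment below computes $\varphi_i$, $\delta_i$, and the per-dimension update mask for one ranked individual; the rounding convention, the 0-based indexing, and the helper name are assumptions based on the description above, not the authors' code.

```python
import numpy as np

def knowledge_acquisition_mask(rank_i, NP, D, rng=np.random.default_rng()):
    """Equations (4)-(6): knowledge acquisition rate and the dimensions to update."""
    phi_i = 0.25 + 0.55 * (rank_i / NP) ** 0.5            # Equation (5), rank_i in 1..NP
    delta_i = (np.round(rng.uniform() * phi_i)            # rounded term of Equation (4)
               + (rng.uniform() < phi_i)) / 2.0           # indicator term of Equation (4)
    j_rand = rng.integers(0, D)                           # Equation (6), 0-based index here
    mask = rng.uniform(size=D) <= delta_i                 # condition of Equation (3)
    mask[j_rand] = True                                   # dimension j_rand is always updated
    return mask
```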

2.3. Creative Realism

The differential knowledge acquisition stated above divides teams into low, high, and average performers, and creative realism is based on the divergent thinking of high performers and the convergent thinking of average performers. Creative ideas and problem solving are both key characteristics of high performers. Not only do high performers serve as a knowledge basis, but each member of the team is required to constantly draw on fresh knowledge and broaden their divergent thinking. For the average performer, convergent thinking relies on the knowledge of the best individual, combined with random contributions from two different team members. The position update of an individual can be performed using the following Equations (7)–(9):
$$X_{next_{i,d}} = \begin{cases} X_{r1,d} + LK_{i,d}(\alpha, \sigma), & \text{if } i \le n_{gs} \\ X_{best,d} + \gamma\left(X_{r2,d} - X_{i,d}\right) + \omega\left(X_{r1,d} - X_{i,d}\right), & \text{otherwise} \end{cases} \tag{7}$$
$$n_{gs} = \max\!\left(6,\ \left\lceil \frac{NP}{g^{3}} \right\rceil\right) \tag{8}$$
$$\gamma = 0.1 + 0.518\left(1 - \frac{t}{\max\_it}\right)^{0.5} \tag{9}$$
where $X_{r1,d}$ represents the $d$th position of an individual randomly picked from $\{1, 2, \ldots, NP\}$, and $LK_{i,d}(\alpha, \sigma)$ is a Lévy flight random number generator with control parameters $\alpha$ and $\sigma$. $n_{gs}$ stands for the number of high performers according to the population's golden ratio $g$ and takes an integer value greater than or equal to six. $X_{best,d}$ is the $d$th position of the best individual, and $X_{r2,d}$ is the $d$th position of an individual randomly selected from $\{n_{gs}+1, \ldots, NP\}$. $\omega$ denotes the learning intensity state of an individual, with a default value of 1. $\gamma$ is the coefficient of the individual at the $t$th iteration, $t$ is the current iteration number, and $\max\_it$ is the maximum number of iterations. The $\gamma$ coefficient measures how much a team's social dynamics influence an individual's views: higher values indicate a greater reliance on team members, while lower values indicate a greater degree of independence. The DCS algorithm replaces underperforming members with new ones to increase the diversity of ideas generated by the team. The formula for producing new members is given by Equation (10) as follows:
$$X_{i,d} = lb_d + U_i(0,1)\cdot\left(ub_d - lb_d\right), \quad \text{if } i = NP \ \text{and}\ U_i(0,1) < p_c \tag{10}$$
Here, $p_c$ is set to 0.5 by default; the last team member ($i = NP$) is regarded as inefficient, and when $U_i(0,1) < p_c$ this individual is reset by random initialization.
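The creative realism update of Equations (7)–(9) can be sketched as follows. Since the exact Lévy generator $LK_{i,d}(\alpha,\sigma)$ is not specified here, the sketch substitutes a Mantegna-style Lévy step, which is a common but assumed choice; indices are 0-based and the function name is illustrative.

```python
import math
import numpy as np

def creative_realism_step(X, i, ngs, best_idx, t, max_it, omega=1.0,
                          alpha=1.5, rng=np.random.default_rng()):
    """One candidate position from Equation (7); Lévy steps via Mantegna's method
    stand in for LK(alpha, sigma)."""
    NP, D = X.shape
    gamma = 0.1 + 0.518 * (1.0 - t / max_it) ** 0.5            # Equation (9)
    r1 = rng.integers(0, NP)                                   # random team member r1
    if i < ngs:                                                # high performer: divergent thinking
        sigma = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
                 / (math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
        levy = rng.normal(0, sigma, D) / np.abs(rng.normal(0, 1, D)) ** (1 / alpha)
        return X[r1] + levy
    r2 = rng.integers(ngs, NP)                                 # random member among the rest
    return X[best_idx] + gamma * (X[r2] - X[i]) + omega * (X[r1] - X[i])
```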

3. Improvement Strategies

3.1. Population Initialization Based on Good Point Set and Oppositional Learning Strategies

The distribution location of the initial population and the quality of the solution have a significant impact on the effectiveness of the metaheuristic algorithm [24]; if the initial population is too centralized or too dispersed, the algorithm may underutilize the search space, and if the quality of the solution in the population is low, the algorithm’s convergence accuracy will suffer. The original DCS algorithm’s random initialization is too unpredictable to ensure the algorithm’s population diversity and convergence accuracy. This study provides enhancements to its initialization procedure.

3.1.1. Good Point Set Strategy

In the optimization process of meta-heuristic algorithms, the diversity of the algorithm can be enhanced if the initialized population is made to be uniformly distributed over the spatial range of the objective function. The good point set [25] is an effective uniform point selection method with good distribution, so we use it to initialize the population in the MSDCS. Let $V_D$ be the unit cube of the $D$-dimensional Euclidean space, with $r \in V_D$. Then, the set of good points is formed by Equations (11) and (12) as follows:
$$P_{NP}(k) = \left\{\left(\{r_1 k\}, \ldots, \{r_D k\}\right),\ 1 \le k \le NP\right\} \tag{11}$$
$$r_i = 2\cos\frac{2\pi i}{p}, \quad 1 \le i \le D \tag{12}$$
where $NP$ is the population size, $P_{NP}(k)$ is the set of good points, $\{\cdot\}$ denotes the fractional part, $r$ is the good point value, and $p$ is the smallest prime number satisfying $(p-3)/2 \ge D$. Therefore, the initialization based on the good point set is given by Equation (13) as follows:
$$X_{i,d} = lb_d + P_{NP}(k)\cdot\left(ub_d - lb_d\right) \tag{13}$$
Theoretically, weighted sums formed with $n$ good points have smaller errors than those obtained with any other $n$ points, making them appropriate for approximate computations in higher-dimensional spaces. To illustrate, 500 points were generated in a two-dimensional unit search space by both random initialization and the good point set approach; the resulting distributions are compared in Figure 1.
Figure 1 shows that random points are not uniformly distributed, resulting in lower utilization of the algorithm’s search space, whereas points generated using the good point set method are more uniformly distributed in the search space, resulting in broader coverage of the population and improving the efficiency of the global search.
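A possible implementation of the good point set initialization of Equations (11)–(13) is sketched below; the prime-selection rule $(p-3)/2 \ge D$ and the use of the fractional part are taken from the description above, and the function name is illustrative. For example, `good_point_set(30, 10, [-100]*10, [100]*10)` would return a 30 × 10 initial population spread evenly over the search box.

```python
import numpy as np

def good_point_set(NP, D, lb, ub):
    """Good point set initialization of Equations (11)-(13)."""
    def is_prime(n):
        return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

    p = 2 * D + 3                        # smallest prime with (p - 3) / 2 >= D
    while not is_prime(p):
        p += 1
    i = np.arange(1, D + 1)
    r = 2.0 * np.cos(2.0 * np.pi * i / p)            # Equation (12)
    k = np.arange(1, NP + 1).reshape(-1, 1)
    points = np.mod(r * k, 1.0)                      # Equation (11): fractional part of r_d * k
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return lb + points * (ub - lb)                   # Equation (13)
```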

3.1.2. Oppositional Learning Strategy

Meta-heuristic algorithms typically carry high-quality individuals from the previous generation over to the next generation. The number of high-quality individuals in the initial population has a direct impact on the algorithm's convergence rate: if there are fewer high-quality individuals in the initial population, the algorithm's convergence speed and accuracy will suffer. Therefore, in this paper, we use an oppositional learning strategy to increase the number of high-quality individuals in the generated initial population. More precisely, for every individual in the good point set, the algorithm simultaneously generates its opposite individual. Finally, the individuals with the best fitness values are chosen as the initial population, increasing the algorithm's search efficiency. The opposite position $\bar{X}_{i,d}$ can be expressed by Equation (14) as follows:
$$\bar{X}_{i,d} = lb_d + ub_d - X_{i,d} \tag{14}$$
Here, $lb_d$ and $ub_d$ are the lower and upper bounds of the individual positions, respectively. The implementation of population initialization based on the good point set and oppositional learning strategy is as follows: generate an initial population based on the good point set, compute the oppositional population using Equation (14), rank the fitness values, and select the top $NP$ individuals with better fitness values as the new initial population. As shown in Table 1, the best fitness values of four representative test functions were selected from the CEC2017 test set, with 500 iterations of the algorithm and 30 independent runs. DCS1 uses the good point set strategy, and DCS2 uses both the good point set and oppositional learning strategies. The results show that population initialization based on the good point set and the oppositional learning strategy can effectively improve the algorithm's ability to find the optimal value.
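The combined good point set plus oppositional learning initialization can be sketched as below; `fitness` is an assumed callable that evaluates a whole population at once (smaller is better), and `good_point_set` refers to the sketch in Section 3.1.1.

```python
import numpy as np

def opposition_based_init(X, lb, ub, fitness):
    """Combine a population X with its opposite (Equation (14)) and keep the NP fittest."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X_opp = lb + ub - X                                 # Equation (14)
    merged = np.vstack([X, X_opp])
    order = np.argsort(fitness(merged))                 # ascending: best individuals first
    return merged[order[:X.shape[0]]]                   # top NP individuals
```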

3.2. Adaptive Fitness–Distance Balance Reset Strategy

The DCS algorithm categorizes team members into three types: low, high, and average performers. For the high performer, combining the existing knowledge and creative elements can provide better solutions to the innovation of the problem; for the average performer, refinement of the knowledge from the best team members promotes the diversity of solutions. However, for low performers, acquiring new knowledge only affects one dimension of an individual’s knowledge structure, as in Equation (10). This local variation contributes to diversity through random initialization and cannot ensure the quality of the solution. To encourage low performers to migrate closer to the optimal solution zone, this research presents an adaptive fitness–distance balance reset strategy. By dynamically adjusting the balance between randomized and targeted exploration, the strategy efficiently identifies the candidates with the highest potential from the population while maintaining diversity, enabling the algorithm to perform a detailed search on the area of good fitness to find a potentially better solution, as described in the following steps.
The first step is to compute the distance $D_{P_i}$ between the present solution $x_{P_i}$ and the best solution $P_{best}$ using the Euclidean distance metric, which is mathematically modeled by Equation (15) as follows:
$$D_{P_i} = \sqrt{\left(x_{P_i,1} - x_{P_{best},1}\right)^2 + \cdots + \left(x_{P_i,D} - x_{P_{best},D}\right)^2} \tag{15}$$
The second step is to compute the candidate solution's score $S_{FDB_{P_i}}$, which is obtained by the weighted summation of the normalized fitness value $normF_{P_i}$ and the normalized distance value $normD_{P_i}$ using Equations (16)–(18) as follows:
$$S_{FDB_{P_i}} = w \cdot normF_{P_i} + (1 - w) \cdot normD_{P_i} \tag{16}$$
$$normf_{P_i} = \frac{f_{P_i} - \min f}{\max f - \min f} \tag{17}$$
$$normF_{P_i} = 1 - normf_{P_i}, \quad \text{if the goal is minimization} \tag{18}$$
where $\max f$ and $\min f$ are the maximum and minimum fitness values, respectively, and the fitness value for the minimization problem is defined by Equation (18). The weighting coefficient $w$ is used to adjust the influence of the fitness and distance values. In this study, $w$ is an adaptive operator given by Equation (19):
$$w = \max\!\left(0.1,\ 0.9 - \frac{it}{\max\_it}\right) \tag{19}$$
The value of $w$ decreases as the number of iterations increases, allowing a fast exploration of the space based on the fitness value in the early stages, followed by a meticulous search as the algorithm approaches the optimum in the later stages. Then, using Equation (20), the candidate with the highest score, $S_{maxFDB}$, is chosen to update the inefficient individual $X_{i,d}$.
$$X_{i,d} = X_{i,d} + U_i(0,1)\left(S_{maxFDB} - X_{i,d}\right), \quad \text{if } i > NP - 10 \ \text{and}\ U_i(0,1) < p_c \tag{20}$$
In the original DCS algorithm, only the last individual is defined as inefficient and is updated by random initialization, resulting in low-quality solutions. In this study, the algorithm designates the last ten individuals in the fitness ranking as inefficient and then employs the adaptive fitness–distance balance reset strategy to enhance their quality. Table 2 shows the best fitness values obtained by the DCS algorithm with the adaptive fitness–distance balance reset strategy (DCS3) and by the original algorithm (DCS) (the experimental setup is described in Section 3.1.2). It can be seen that the DCS algorithm based on the adaptive fitness–distance balance reset strategy obtains better best-fitness values on these test functions.
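The adaptive fitness–distance balance reset of Equations (15)–(20) can be sketched as follows; the small epsilon added to the normalization denominators and the in-place update are implementation assumptions of this sketch, not part of the original formulation.

```python
import numpy as np

def afdb_reset(X, fit, best, it, max_it, NP_reset=10, pc=0.5, rng=np.random.default_rng()):
    """Adaptive fitness-distance balance reset (Equations (15)-(20)).

    `fit` holds the current fitness values (minimization) and `best` is the best
    solution found so far; the NP_reset worst-ranked individuals are treated as inefficient."""
    NP, D = X.shape
    dist = np.linalg.norm(X - best, axis=1)                        # Equation (15)
    norm_f = (fit - fit.min()) / (fit.max() - fit.min() + 1e-12)   # Equation (17)
    norm_F = 1.0 - norm_f                                          # Equation (18), minimization
    norm_D = (dist - dist.min()) / (dist.max() - dist.min() + 1e-12)
    w = max(0.1, 0.9 - it / max_it)                                # Equation (19)
    score = w * norm_F + (1.0 - w) * norm_D                        # Equation (16)
    x_smax = X[np.argmax(score)]                                   # candidate with the highest score
    order = np.argsort(fit)                                        # ascending fitness ranking
    for i in order[-NP_reset:]:                                    # the NP_reset worst individuals
        if rng.uniform() < pc:
            X[i] = X[i] + rng.uniform(size=D) * (x_smax - X[i])    # Equation (20)
    return X
```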

3.3. Vertical and Horizontal Crossover Strategy with Random Dimensions

In later iterations of the DCS, the individuals gradually converge to the optimal individual, and the fitness values of the individuals tend to stabilize. As a result, new and better solutions are difficult to emerge, which can easily lead to the search becoming stagnant and the population falling into the local optimum. To improve the convergence accuracy of the DCS and enhance the algorithm’s ability to jump out of the local optimum, this paper uses the vertical and horizontal crossover strategy based on random dimensions to improve the current individuals. Specifically, the horizontal crossover is used for searching to reduce the search blind spots and improve the algorithm’s global search ability; the vertical crossover is used to enhance population variety while decreasing the risk of the algorithm falling into the local optimum. The dimensions involved are selected by c , i.e., c determines the number of dimensions involved in the vertical and horizontal crossings. Initially, a comprehensive random perturbation is performed, which is conducive to quickly jumping out of the local optimum, and with the increase in the number of iterations, the search is gradually centralized, which is conducive to fine optimization. The calculation process is given by Equation (21) as follows:
$$c = \max\!\left(\left\lceil D - \frac{it}{\max\_it}\,(D - 1)\right\rceil,\ 1\right) \tag{21}$$
The horizontal crossover refers to the exchange of individual information between two different members in randomly chosen dimensions so that different individuals can learn from each other. This operation can also be viewed as a mutation process that generates a new position based on two random individuals. The horizontal crossover of two non-duplicated individuals of the team on $c$ randomly chosen dimensions $rd_1$ generates offspring by Equations (22) and (23) as follows:
$$X'_{j,rd_1} = r_1 X_{j,rd_1} + (1 - r_1) X_{jj,rd_1} + v_1\left(X_{j,rd_1} - X_{jj,rd_1}\right) \tag{22}$$
$$X'_{jj,rd_1} = r_2 X_{jj,rd_1} + (1 - r_2) X_{j,rd_1} + v_2\left(X_{jj,rd_1} - X_{j,rd_1}\right) \tag{23}$$
In these equations, $X'_{j,rd_1}$ and $X'_{jj,rd_1}$ are the offspring generated by $X_j$ and $X_{jj}$ after the horizontal crossover; in this way, each mutation operation affects only the values on the selected dimensions, while the other dimensions remain unchanged. $r_1$ and $r_2$ are random numbers in $[0,1]$ used to balance the individual's current position in each mutation operation, and $v_1$ and $v_2$ are random numbers in $[-1,1]$ used to regulate the distance between the individual's current position and the crossover point in each mutation operation. The resulting offspring compete with the parent generation, and the best individuals are retained.
The vertical crossover of this strategy implies that the best individuals in the population exchange information across randomly selected dimensions, resulting in a new generation of the best individuals to compete with the parent generation, allowing different dimensions to learn from one another and avoiding premature convergence of one dimension. The individual formulae for the offspring obtained from the vertical crossover are given by Equation (24) as follows:
$$X'_{best,rd_2} = q X_{best,rd_2} + (1 - q) X_{best,rd_3} \tag{24}$$
The offspring individual $X'_{best,rd_2}$ is created via the vertical crossover of the current optimal individual $X_{best}$ on $c$ randomly chosen dimensions $rd_2$ and $rd_3$ ($rd_2 \ne rd_3$). $q \in [0,1]$ is used to control the magnitude of the shift of the new position during the vertical crossover. Table 3 presents the results of the vertical and horizontal crossover strategy with random dimensions (DCS4) and the original algorithm (DCS) on the test functions (the experimental setup is the same as above). The test results of DCS4 are significantly better than those of the DCS.
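The vertical and horizontal crossover with random dimensions (Equations (21)–(24)) can be sketched as below; the greedy offspring-versus-parent replacement follows the description above, while `fit_fn` (a single-solution fitness callable, smaller is better) and the 0-based indexing are assumptions of this sketch.

```python
import numpy as np

def crossover_with_random_dims(X, fit_fn, best_idx, it, max_it, rng=np.random.default_rng()):
    """Vertical and horizontal crossover on c randomly chosen dimensions (Equations (21)-(24))."""
    NP, D = X.shape
    c = max(int(np.ceil(D - it / max_it * (D - 1))), 1)            # Equation (21)

    # Horizontal crossover between two distinct individuals j and jj.
    j, jj = rng.choice(NP, size=2, replace=False)
    rd1 = rng.choice(D, size=c, replace=False)
    r1, r2 = rng.uniform(size=c), rng.uniform(size=c)
    v1, v2 = rng.uniform(-1, 1, size=c), rng.uniform(-1, 1, size=c)
    child_j, child_jj = X[j].copy(), X[jj].copy()
    child_j[rd1] = r1 * X[j, rd1] + (1 - r1) * X[jj, rd1] + v1 * (X[j, rd1] - X[jj, rd1])    # Eq. (22)
    child_jj[rd1] = r2 * X[jj, rd1] + (1 - r2) * X[j, rd1] + v2 * (X[jj, rd1] - X[j, rd1])   # Eq. (23)
    for idx, child in ((j, child_j), (jj, child_jj)):
        if fit_fn(child) < fit_fn(X[idx]):                         # keep offspring only if better
            X[idx] = child

    # Vertical crossover on the current best individual across two dimension sets.
    rd2 = rng.choice(D, size=c, replace=False)
    rd3 = rng.choice(D, size=c, replace=False)
    q = rng.uniform(size=c)
    child_best = X[best_idx].copy()
    child_best[rd2] = q * X[best_idx, rd2] + (1 - q) * X[best_idx, rd3]                      # Eq. (24)
    if fit_fn(child_best) < fit_fn(X[best_idx]):
        X[best_idx] = child_best
    return X
```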

4. Time Complexity

The basic DCS algorithm has a total time complexity of $O(N \cdot D \cdot T)$, where $N$ is the population size, $D$ is the dimension, and $T$ is the maximum number of iterations. The time complexity of the MSDCS algorithm is evaluated as follows.
The process consists mostly of population initialization, fitness evaluation, and updating the candidate solutions. The time complexity of population initialization is $O(N \cdot D)$, and that of fitness evaluation is $O(N)$. The candidate solutions are updated in four steps. First, when a team member is inefficient, the time complexity is $O(T \cdot p_c)$, where $p_c$ is a probability constant. Second, the time complexity of updating the candidate solutions of the high performers is $O(T \cdot n_{gs} \cdot D \cdot \delta)$, where $n_{gs}$ denotes the number of high-performing candidates and $\delta$ denotes the knowledge acquisition rate. Then, the time complexity of updating the candidate solutions of the average performers is $O(T \cdot (N - n_{gs} - 1) \cdot D \cdot \delta)$. Finally, the time complexity of updating the candidate solutions using the vertical and horizontal crossover strategy with random dimensions is $O(N \cdot D)$. In summary, excluding the lower-order terms, the time complexity of the MSDCS is $O(N \cdot D \cdot T)$, which is the same as that of the original algorithm.
The pseudo-code for the improved Algorithm 1 is as follows.
Algorithm 1: MSDCS
(The pseudo-code of Algorithm 1 is provided as a figure in the original article.)

5. Simulation Experiment and Result Analysis

5.1. Test Functions and Experimental Environments

To test the optimization ability of the proposed MSDCS algorithm, 29 test functions from the CEC2017 test set are chosen, and the specific function information is shown in Table 4. The CEC2017 test set [26] is challenging and widely used in academic studies. All functions in the test set were rotated and shifted to increase the difficulty of optimization and avoid bias caused by the solution space's symmetry. The functions can be divided into four categories: single-peak, multi-peak, hybrid, and combined functions. The single-peak functions have only one optimal solution and are used to measure the algorithm's convergence speed and local search ability, whereas the other three types of functions have multiple solutions and are used to test the algorithm's ability to escape from local optima and explore globally. F1 and F3 are single-peak functions with no local optimum and just a global optimum (F2 was officially deleted due to its instability); F4–F10 are multi-peak functions with local extreme points; F11–F20 are hybrid functions; and F21–F30 are combination functions. All experiments were performed in the same environment: a Win10 64-bit operating system, an AMD Ryzen 7 4800H processor with Radeon Graphics at 2.90 GHz, and 16 GB of RAM, on a machine produced by Lenovo in Beijing, China.

5.2. Overview of Convergence Analysis and Comparative Algorithms

The comparison algorithms contain two classical algorithms, including the particle swarm optimization algorithm (PSO) [27] and the differential evolution algorithm (DE) [28]; two newly proposed algorithms, including the black-winged kite optimization algorithm (BKA) [29] and the snow ablation optimization algorithm (SAO) [30]; two recent improved algorithms, including the improved sand cat swarm optimization algorithm (MSCSO) [31] and the dispersed foraging slime mold algorithm (DFSMA) [32]; and two representative competition algorithms, including the linear population reduction SHADE algorithm (LSHADE) [33] and the improved multi-objective differential evolutionary algorithm (IMODE) [34]. Two typical intelligent optimization algorithms, the PSO and DE, are selected as the baseline algorithms for their stability and extensive application; the BKA and SAO are both new algorithms proposed recently, which are new breakthroughs in related research fields; the MSCSO and DFSMA add new strategies into their original algorithms and have better performance, similar to the present MSDCS; and the LSHADE and IMODE have proved their excellent performance in practical applications in open competitions. Since there is no improved algorithm based on the DCS yet, the comparison with these comprehensive algorithms can verify the proposed MSDCS algorithm’s superiority in optimization performance. As a result, with the additional DCS algorithm, there are nine algorithms to be compared with the MSDCS algorithm. In this research, the population size N of all algorithms is set to 30, the maximum number of iterations T is set to 500, the dimension D is 100, and each method is run independently 30 times. Additional parameters are presented in Table 5. The performance of each algorithm’s optimization is measured using three performance indexes: optimal value, average value, and standard deviation, with the experimental results presented in Table 6.
Table 6 shows the optimal value, mean, and standard deviation of the MSDCS obtained on 100 dimensions, as well as other comparison algorithms, for the 29 test functions of the CEC2017 test set. The results show that the MSDCS obtained three best results in single-peak functions (F1 and F3); fourteen best results in multi-peak functions (F4–F10); twenty-eight best results in hybrid functions (F11–F20); and fourteen best results in combined functions (F21–F30), which are significantly better than the original algorithm and the other compared algorithms. The test results indicate that the MSDCS exhibits a rapid convergence rate, strong optimization potential, and improved local escape capacity. The iterative convergence behavior of a representative set of 16 functions selected from the test set is shown in Figure 2. It demonstrates that the MSDCS converges faster than other algorithms in the early iteration phase and has much higher convergence accuracy in the later period. In conclusion, the convergence curves in Figure 2 and the experimental findings in Table 6 both confirm the efficiency of the MSDCS, which outperforms other algorithms in the optimization-seeking process.
Figure 2 shows the iterative convergence behavior of the 16 typical functions chosen from the test set. The MSDCS exhibits superior convergence speed and accuracy compared to other methods. Although F3’s convergence accuracy for the MSDCS is not as excellent as that of IMODE, its convergence speed surpasses that of the other algorithms. Functions F4–F7 have several peaks. The figure demonstrates that the MSDCS performs better than the other algorithms in terms of both convergence speed and accuracy. This suggests that other comparison algorithms often become stuck in local optima. Functions F11–F15 are hybrid functions, and MSDCS’s superior performance demonstrates its adaptability in complex environments. Although the convergence accuracy of the MSDCS in F22 is not as high as that of DFSMA and IMODE, it is still higher than that of the other algorithms in F22, with the MSDCS achieving the highest accuracy in other functions. When optimizing high-dimensional functions, the other algorithms usually fall into local optima, which slows the convergence speed. In contrast, the MSDCS usually obtains a better initial population by adopting a good point set and oppositional learning strategy and then finds the potential optimal solutions through the adaptive fitness–distance balanced reset strategy and the vertical and horizontal crossover, thus jumping out of the local optimum and finally converging to the global optimum to improve the optimization accuracy. It is worth mentioning that the MSDCS outperformed the original DCS, which also had favorable results, indicating the success of the three improvement methods introduced in the present study. In summary, the experimental results in Table 6 and the convergence curves in Figure 2 verify the effectiveness of the MSDCS algorithm, which has a better capability for finding optimization than the other comparison algorithms.

5.3. Wilcoxon Rank-Sum Test and Friedman Test

To further confirm the performance of the MSDCS, the Wilcoxon rank-sum test is utilized to determine whether the algorithm is significantly different from the other comparable methods at a 5% significance level. The symbols “+”, “−”, and “=” indicate that the MSDCS outperforms, underperforms, or is equivalent to the compared algorithm, respectively. The results are shown in Table 7, where the MSDCS performs better than the other compared algorithms in most cases on the 29 tested functions, thus indicating that the algorithm has a better ability to find an optimal solution. The Friedman test, an analysis of variance for comparing multiple methods over multiple problems, is used to compare the strengths and weaknesses of the algorithms by calculating their average rankings. As shown in Table 7, the MSDCS is ranked first, suggesting that it has the best overall performance among the 29 functions evaluated.

6. MSDCS Algorithm for Optimizing the Agricultural UAV Path Planning Problem

6.1. Problem Modeling

UAVs are frequently used in agricultural production for tasks like irrigation, fertilizer and pesticide spraying, field monitoring, and other tasks. To ensure that the agricultural UAV can accomplish the mission successfully, reasonable flight route planning must be performed. In complex terrain, agricultural UAVs need to fly from the starting point to the target point following the result of path planning. During flight, agricultural UAVs may confront topographical barriers, weather hazards, radar scanning zones, and other obstacles, as well as limits such as fuel consumption, maximum climb capacity, and maximum turning capacity. To enable the safe and coordinated flight of agricultural UAVs, path planning algorithms need to connect the best route between the start and target points.
The agricultural UAV is assumed to maintain a predetermined flight speed, which reduces the path planning problem to a static polyline planning problem. The cost function $F(X_i)$ for agricultural UAV path planning is defined by combining the voyage length cost, flight altitude cost, threat cost, and smoothing cost using Equation (25) to satisfy the operational needs as follows:
$$F(X_i) = \sum_{k=1}^{4} b_k F_k(X_i) \tag{25}$$
where the decision variable $X_i$ represents the list of $n$ waypoints that the agricultural UAV is supposed to fly through, $F_k$ is the $k$th cost function, and $b_k$ is the weight of each cost function. The individual cost functions are described below.

6.2. Voyage Length Cost

The flight path length of the agricultural UAV should be as short as possible to minimize fuel consumption. The agricultural UAV is controlled from a ground control station, and its flight route $X_i$ is represented as a list of $n$ waypoints that the agricultural UAV must travel through, with each waypoint corresponding to the coordinates $P_{ij} = (x_{ij}, y_{ij}, z_{ij})$. The path length is the sum of the segment lengths, with each segment's length calculated as the Euclidean distance between two nodes. The voyage length cost $F_1$ is computed by Equation (26) as follows:
$$F_1(X_i) = \sum_{j=1}^{n-1} \left\| \overrightarrow{P_{ij} P_{i,j+1}} \right\| \tag{26}$$
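A direct implementation of the voyage length cost of Equation (26) is straightforward; `path` is assumed to be an $n \times 3$ array of waypoint coordinates.

```python
import numpy as np

def voyage_length_cost(path):
    """Equation (26): sum of Euclidean segment lengths along the waypoint list."""
    path = np.asarray(path, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1)))
```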

6.3. Threat Cost

Path planning for agricultural UAVs must not only optimize path length but also assure safe operation by avoiding no-fly zones known as hazard zones (e.g., radar detection, air defense equipment attacks, weather threats, etc.). The threat environment is abstracted, and the threat region is a cylinder with a fixed radius. The agricultural UAV is intended to successfully avoid the threat region and complete the flight. Figure 3 illustrates the abstracted threat region.
The obstacle in Figure 3 is represented by the green area; $K$ is the number of obstacles posing a threat, and the radius of the $k$th obstacle is denoted by $R_k$. The collision area (white area) is widened by $D$, the agricultural UAV's diameter, and $d_k$ indicates the perpendicular distance from the given path segment (from $P_{ij}$ to $P_{i,j+1}$) to the obstacle's center. The extent of the hazardous area (red area), $S$, is determined by the agricultural UAV's localization accuracy and the flight environment: if the agricultural UAV operates in a static area with a strong signal, $S$ might be a few tens of meters; if there are moving objects nearby or the signal is weak, it might be more than a hundred meters. $T_k$ is the threat cost of the path segment from $P_{ij}$ to $P_{i,j+1}$, and the threat cost $F_2$ of a path is calculated by Equations (27) and (28) as follows:
$$F_2(X_i) = \sum_{j=1}^{n-1} \sum_{k=1}^{K} T_k\!\left(\overrightarrow{P_{ij} P_{i,j+1}}\right) \tag{27}$$
$$T_k\!\left(\overrightarrow{P_{ij} P_{i,j+1}}\right) = \begin{cases} 0, & \text{if } d_k > S + D + R_k \\ (S + D + R_k) - d_k, & \text{if } D + R_k < d_k \le S + D + R_k \\ \infty, & \text{if } d_k \le D + R_k \end{cases} \tag{28}$$
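A sketch of the threat cost of Equations (27) and (28) is given below. The segment-to-obstacle distance is computed in the horizontal plane and a collision returns an infinite cost; the argument names and the cylindrical obstacle representation (center and radius) are assumptions of this sketch.

```python
import numpy as np

def threat_cost(path, obstacles, drone_diameter, danger_zone):
    """Equations (27)-(28): summed threat cost over all segments and cylindrical obstacles.

    `obstacles` is an iterable of (center_x, center_y, radius)."""
    path = np.asarray(path, dtype=float)
    total = 0.0
    for p, q in zip(path[:-1, :2], path[1:, :2]):          # project segments onto the Oxy plane
        for cx, cy, R in obstacles:
            seg = q - p
            t = np.clip(np.dot([cx - p[0], cy - p[1]], seg) / (np.dot(seg, seg) + 1e-12), 0.0, 1.0)
            d = np.linalg.norm(np.array([cx, cy]) - (p + t * seg))   # segment-to-center distance
            outer = danger_zone + drone_diameter + R
            if d > outer:
                continue                                     # outside the threat region
            elif d > drone_diameter + R:
                total += outer - d                           # inside the danger zone
            else:
                return float("inf")                          # collision region
    return total
```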

6.4. Altitude Cost

An agricultural UAV is often limited to a flight altitude between a minimum altitude $h_{min}$ and a maximum altitude $h_{max}$. The UAV's altitude relative to the ground is represented by $h_{ij}$. The altitude costs $H_{ij}$ of all waypoints are added up to obtain the altitude cost $F_3$. The altitude cost penalizes values that are outside the allowed range while encouraging flight near the average altitude. The altitude cost is given by Equations (29) and (30).
$$F_3(X_i) = \sum_{j=1}^{n} H_{ij} \tag{29}$$
$$H_{ij} = \begin{cases} \left| h_{ij} - \dfrac{h_{min} + h_{max}}{2} \right|, & \text{if } h_{min} \le h_{ij} \le h_{max} \\ \infty, & \text{otherwise} \end{cases} \tag{30}$$
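The altitude cost of Equations (29) and (30) can be computed as follows; an out-of-range altitude is treated as infeasible (infinite cost), matching the penalty described above, and the waypoint layout `(x, y, h)` is an assumption of this sketch.

```python
def altitude_cost(path, h_min, h_max):
    """Equations (29)-(30): penalize deviation from the mid-altitude."""
    mid = (h_min + h_max) / 2.0
    total = 0.0
    for _, _, h in path:                  # h is the altitude relative to the ground
        if h_min <= h <= h_max:
            total += abs(h - mid)
        else:
            return float("inf")           # out-of-range altitudes are infeasible
    return total
```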

6.5. Smoothing Cost

The flight angle control parameters for agricultural UAVs primarily involve the horizontal steering angle and the vertical pitch angle. The trajectory planning model needs to ensure that these two parameters satisfy the agricultural UAV's actual angle limits in order to produce feasible flight paths. The horizontal steering angle $\theta_{ij}$ is the angle between two successive path segments projected onto the horizontal plane $Oxy$, where the projections of $P_{ij}$ and $P_{i,j+1}$ are denoted $P'_{ij}$ and $P'_{i,j+1}$. The vertical pitch angle $\varphi_{ij}$ is the angle of a path segment relative to its horizontal projection, determined by the height difference $z_{i,j+1} - z_{ij}$ between two consecutive waypoints. The penalty coefficients for the horizontal steering angle and the vertical pitch angle are $a_1$ and $a_2$, respectively. Based on these values, the smoothing cost can be computed by Equations (31)–(33) as follows:
$$F_4(X_i) = a_1 \sum_{j=1}^{n-2} \theta_{ij} + a_2 \sum_{j=1}^{n-1} \left| \varphi_{ij} - \varphi_{i,j-1} \right| \tag{31}$$
$$\theta_{ij} = \arctan\!\left(\frac{\left\| \overrightarrow{P'_{ij} P'_{i,j+1}} \times \overrightarrow{P'_{i,j+1} P'_{i,j+2}} \right\|}{\overrightarrow{P'_{ij} P'_{i,j+1}} \cdot \overrightarrow{P'_{i,j+1} P'_{i,j+2}}}\right) \tag{32}$$
$$\varphi_{ij} = \arctan\!\left(\frac{z_{i,j+1} - z_{ij}}{\left\| \overrightarrow{P'_{ij} P'_{i,j+1}} \right\|}\right) \tag{33}$$
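Finally, the smoothing cost of Equations (31)–(33) can be sketched as below; the angle computations via `arctan2` are numerically safer equivalents of the arctan ratios above, and the $n \times 3$ waypoint array layout is an assumption of this sketch.

```python
import numpy as np

def smoothing_cost(path, a1, a2):
    """Equations (31)-(33): turning-angle and climb-angle penalties."""
    path = np.asarray(path, dtype=float)
    seg = np.diff(path, axis=0)                          # consecutive segment vectors
    seg_xy = seg.copy()
    seg_xy[:, 2] = 0.0                                   # projections onto the Oxy plane

    turn = 0.0
    for u, v in zip(seg_xy[:-1], seg_xy[1:]):            # horizontal steering angles, Eq. (32)
        turn += np.arctan2(np.linalg.norm(np.cross(u, v)), np.dot(u, v))

    climb = [np.arctan2(s[2], np.linalg.norm(s[:2]) + 1e-12) for s in seg]   # Eq. (33)
    pitch_change = sum(abs(climb[j] - climb[j - 1]) for j in range(1, len(climb)))

    return a1 * turn + a2 * pitch_change                 # Equation (31)
```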

6.6. Simulation Experiment

6.6.1. Environmental Modeling and Parameterization

To further test the MSDCS’s performance in the agricultural UAV route planning problem, different threats are included to mimic various complicated terrains, and two terrains are constructed for path planning, as shown in Figure 4. In these scenarios, the number and location of threats (indicated by the brown columns) were chosen based on different levels of complexity, with Figure 4a showing a terrain map with a lower level of complexity and Figure 4b showing a terrain map with a higher level of complexity. For comparison, all algorithms were implemented with the same parameters: a population size of 30, 300 iterations, waypoints set to 10, the beginning position of the agricultural UAV (200, 100, 150), and the target position (800, 800, 150). For each comparison, all algorithms are performed ten times to determine the optimal, mean, and standard deviation.

6.6.2. Scenario One

Figure 5 depicts the agricultural UAV flight trajectory maps of the MSDCS and other comparative algorithms. Figure 5a depicts the flight paths of all the algorithms in a 3D map, and to better observe the courses of the algorithms, the 3D map is converted into a 2D top view, as shown in Figure 5b. Most algorithms can discover a flight path from the origin to the destination; however, the pathways planned by the PSO, SAO, BKA, and IMODE have excessively wide steering angles, which can easily cause collisions. The MSDCS’s path can successfully avoid dangers and identify the flight path from the starting point to the target, while the agricultural UAV’s flight curve is reasonably smooth and approximates a straight flight.
The convergence curves of the optimal costs of several path planning algorithms are provided in Figure 6 to help visualize the effects of each algorithm more clearly. The MSDCS converges more quickly and with greater convergence accuracy than the other compared algorithms. The MSDCS outperforms the other algorithms in terms of optimal cost, average fitness value, and standard deviation (see Table 8).

6.6.3. Scenario Two

Figure 7 demonstrates that the MSDCS can still generate substantially realistic smoothed pathways in difficult settings. In contrast, the SAO and BKA opt to bypass the threat objects on the left, while the PSO and LSHADE choose to bypass the threat objects on the right, resulting in high costs for the agricultural UAV. Additionally, the flight routes of the remaining algorithms are not as sensible and practicable as those of the MSDCS. Figure 8 shows a cost function curve. Although most algorithms are comparable, Table 9 shows that the MSDCS has a lower cost price, a lower average fitness value, and is more stable.

7. Discussion

The simulation experiments based on the CEC2017 test set in Section 5 lead to the conclusion that the MSDCS has a fast convergence speed and high accuracy compared to the other nine comparison algorithms, and the comprehensive performance of the MSDCS ranks first in Friedman’s test, which proves the effectiveness of the improved algorithm. Section 6 applies the improved algorithms to the agricultural UAV path planning problem. It can be seen in Figure 5, Figure 6, Figure 7 and Figure 8 that the PSO, SAO, BKA, IMODE, and LSHADE algorithms have poorer performance in solving the agricultural UAV path planning, and the flight paths obtained are prone to incur high flight costs and even higher risks during the flight process, whereas the MSDCS has a faster convergence speed and higher convergence accuracy. Compared to other algorithms, the MSDCS can obtain better flight paths for agricultural UAVs with a safer flight process, thus reducing the flight cost. Table 8 and Table 9 show that the MSDCS produces high-quality solutions in agricultural UAV path planning in both simple and complex scenarios with good stability.
This study not only confirms the efficacy of the MSDCS but also its practical worth in agricultural UAV path planning. The comparative study demonstrates that the MSDCS exhibits notable superiority in effectively navigating various intricate situations. The classical PSO method, for example, converges quickly but demonstrates limited stability and adaptability in the presence of obstacles. The MSDCS, in contrast, demonstrates a superior capability for guaranteeing that agricultural UAVs locate the most advantageous route while evading obstacles, hence enhancing the quality and efficiency of UAV flight paths.
The cost function of agricultural UAV path planning is flexible, encompassing factors like fuel consumption and collision risk. In the future, the algorithm can be modified to achieve more comprehensive optimization of agricultural UAV path planning. This will cater to a variety of agricultural application scenarios, ensuring the efficient and safe operation of agricultural UAVs. Furthermore, it will further advance the development of agricultural UAVs in the field of agriculture.

8. Summary

For the agricultural UAV path planning problem, a multi-strategy improved differentiated creative search algorithm is proposed. The MSDCS employs the good point set and the oppositional learning strategy to increase the diversity of the population and improve the quality of the solution; the adaptive fitness–distance balanced reset strategy prompts the low performers to converge to the optimal solution region. Additionally, the vertical and horizontal crossover strategy with random dimensions is proposed to help the algorithm jump out of the local optimum and improve the convergence accuracy.
To demonstrate the MSDCS’s effectiveness, 29 test functions from the CEC2017 test set are used for comparison experiments with the original algorithms, the classical algorithms, the most recent modified algorithms, and the competition algorithms. The experimental results reveal that the MSDCS outperforms the other comparative algorithms in terms of convergence speed, accuracy, and stability. Further validation is performed using the Wilcoxon rank-sum test and the Friedman test, and the findings confirmed the MSDCS’s excellent performance.
Two sets of experiments are conducted using relatively simple terrain maps and relatively complex terrain maps to test the performance of the MSDCS in agricultural UAV path planning problems. The simulation experiments show that the MSDCS algorithm can converge quickly and generate efficient and collision-free agricultural UAV flight paths in both simple and complex scenarios, which verifies the applicability and superiority of the MSDCS.

Author Contributions

Conceptualization, J.L. and Q.Q.; methodology, J.L.; software, J.L. and Q.Q.; validation, Y.L., X.Z. (Xiang Zhang), J.Y., X.Z. (Xiaoli Zhang) and Y.F.; formal analysis, J.L.; investigation, Q.Q.; writing—original draft preparation, J.L.; writing—review and editing, Q.Q.; supervision, Q.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation of China (32060193, 62062047), Yunnan Fundamental Research Projects (202101AT070082), the major scientific and technological projects in Yunnan Province under Grant 202202AD080006, and the Foundation of Yunnan Key Laboratory of Computer Technology Applications.

Data Availability Statement

The data involved in this study are all public data, which can be downloaded through public channels. All other data used in this study are available from the corresponding authors upon reasonable request.

Acknowledgments

I would like to express my sincere gratitude to Qian Qian for his selfless guidance and support at every stage of this study.

Conflicts of Interest

The authors declare no conflicts of interest. This article does not contain any studies with human participants or animals performed by any of the authors.

References

  1. Plessen, M. Path Planning for Spot Spraying with UAVs Combining TSP and Area Coverages. arXiv 2024, arXiv:2408.08001. [Google Scholar]
  2. Hafeez, A.; Husain, M.A.; Singh, S.P.; Chauhan, A.; Khan, M.T.; Kumar, N.; Chauhan, A.; Soni, S.K. Implementation of drone technology for farm monitoring & pesticide spraying: A review. Inf. Process. Agric. 2023, 10, 192–203. [Google Scholar] [CrossRef]
  3. Li, W.; Liu, C.; Yang, Y.; Awais, M.; Li, W.; Ying, P.; Ru, W.; Cheema, M.J.M. A UAV-aided prediction system of soil moisture content relying on thermal infrared remote sensing. Int. J. Environ. Sci. Technol. 2022, 19, 9587–9600. [Google Scholar] [CrossRef]
  4. Budiharto, W.; Irwansyah, E.; Suroso, J.S.; Chowanda, A.; Ngarianto, H.; Gunawan, A.A.S. Mapping and 3D modelling using quadrotor drone and GIS software. J. Big Data 2021, 8, 48. [Google Scholar] [CrossRef]
  5. Gugan, G.; Haque, A. Path Planning for Autonomous Drones: Challenges and Future Directions. Drones 2023, 7, 169. [Google Scholar] [CrossRef]
  6. Xu, D.; Qian, H.; Zhang, S. An Improved RRT*-Based Real-Time Path Planning Algorithm for UAV. In Proceedings of the 2021 IEEE 23rd Int Conf on High Performance Computing & Communications; 7th Int Conf on Data Science & Systems; 19th Int Conf on Smart City; 7th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), Haikou, China, 20–22 December 2021; pp. 883–888. [Google Scholar] [CrossRef]
  7. Moon, S.; Oh, E.; Shim, D.H. An Integral Framework of Task Assignment and Path Planning for Multiple Unmanned Aerial Vehicles in Dynamic Environments. J. Intell. Robot. Syst. 2013, 70, 303–313. [Google Scholar] [CrossRef]
  8. McCall, J. Genetic algorithms for modelling and optimisation. J. Comput. Appl. Math. 2005, 184, 205–222. [Google Scholar] [CrossRef]
  9. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; pp. 1470–1477. [Google Scholar] [CrossRef]
  10. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  11. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  12. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  13. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  14. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  15. Chen, Q.; He, Q.; Zhang, D. UAV Path Planning Based on an Improved Chimp Optimization Algorithm. Axioms 2023, 12, 702. [Google Scholar] [CrossRef]
  16. Zhang, W.; Zhang, S.; Wu, F.; Wang, Y. Path Planning of UAV Based on Improved Adaptive Grey Wolf Optimization Algorithm. IEEE Access 2021, 9, 89400–89411. [Google Scholar] [CrossRef]
  17. Wang, Q.; Xu, M.; Hu, Z. Path Planning of Unmanned Aerial Vehicles Based on an Improved Bio-Inspired Tuna Swarm Optimization Algorithm. Biomimetics 2024, 9, 388. [Google Scholar] [CrossRef]
  18. Tan, L.; Zhang, H.; Shi, J.; Liu, Y.; Yuan, T. A robust multiple Unmanned Aerial Vehicles 3D path planning strategy via improved particle swarm optimization. Comput. Electr. Eng. 2023, 111, 108947. [Google Scholar] [CrossRef]
  19. He, Y.; Wang, M. An improved chaos sparrow search algorithm for UAV path planning. Sci. Rep. 2024, 14, 366. [Google Scholar] [CrossRef]
  20. Yu, X.; Jiang, N.; Wang, X.; Li, M. A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst. Appl. 2023, 215, 119327. [Google Scholar] [CrossRef]
  21. Zhu, D. Human memory optimization algorithm: A memory-inspired optimizer for global optimization problems. Expert Syst. Appl. 2024, 237, 121597. [Google Scholar] [CrossRef]
  22. Duankhan, P.; Sunat, K.; Chiewchanwattana, S.; Nasa-ngium, P. The Differentiated Creative search (DCS): Leveraging Differentiated knowledge-acquisition and Creative realism to address complex optimization problems. Expert Syst. Appl. 2024, 252, 123734. [Google Scholar] [CrossRef]
  23. Geoscience Australia. Digital Elevation Model (DEM) of Australia Derived from LiDAR 5 Metre Grid; Geoscience Australia: Canberra, Australia, 2015. [CrossRef]
  24. Yang, W.; Xia, K.; Li, T.; Xie, M.; Song, F. A Multi-Strategy Marine Predator Algorithm and Its Application in Joint Regularization Semi-Supervised ELM. Mathematics 2021, 9, 291. [Google Scholar] [CrossRef]
  25. Kiefer, J.C. On large deviations of the empiric D. F. of vector chance variables and a law of the iterated logarithm. Pacific J. Math. 1961, 11, 649–660. [Google Scholar] [CrossRef]
  26. Deng, L.; Liu, S. An enhanced slime mould algorithm based on adaptive grouping technique for global optimization. Expert Syst. Appl. 2023, 222, 119877. [Google Scholar] [CrossRef]
  27. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  28. Storn, R. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  29. Wang, J.; Wang, W.; Hu, X.; Qiu, L.; Zang, H. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  30. Deng, L.; Liu, S. Snow ablation optimizer: A novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl. 2023, 225, 120069. [Google Scholar] [CrossRef]
  31. Wu, D.; Rao, H.; Wen, C.; Jia, H.; Liu, Q.; Abualigah, L. Modified Sand Cat Swarm Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 4350. [Google Scholar] [CrossRef]
  32. Hu, J.; Gui, W.; Heidari, A.A.; Cai, Z.; Liang, G.; Chen, H.; Pan, Z. Dispersed foraging slime mould algorithm: Continuous and binary variants for global optimization and wrapper-based feature selection. Knowl.-Based Syst. 2022, 237, 107761. [Google Scholar] [CrossRef]
  33. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar] [CrossRef]
  34. Fan, J.; Xiong, S.; Wang, J.; Gong, C. IMODE: Improving Multi-Objective Differential Evolution Algorithm. In Proceedings of the 2008 Fourth International Conference on Natural Computation, Jinan, China, 18–20 October 2008; pp. 212–216. [Google Scholar] [CrossRef]
Figure 1. A comparison diagram of random initialization and good point set initialization. (a) Random initialized distribution plot. (b) Distribution of the initialization based on the set of good points.
Figure 2. Convergence curves for 16 functions selected from the CEC2017 test set.
Figure 3. Images of the threat area.
Figure 4. Topographic maps of different complexities. (a) A topographic map with a low number of threats. (b) A topographic map with a high number of threats.
Figure 5. The flight path diagram of the UAV in scenario one. (a) The flight path in a simple scenario. (b) A top view of the flight path in a simple scenario.
Figure 6. Cost function curves for all algorithms in scenario one.
Figure 7. The flight path diagram of the UAV in scenario two. (a) The flight path in a complex scenario. (b) A top view of the flight path in a complex scenario.
Figure 8. Cost function curves for all algorithms in scenario two.
Table 1. Test results of population initialization strategies. The DCS randomly generates initial points; DCS1 uses the good point set strategy only, while DCS2 uses both the good point set and oppositional learning strategies.

| Function | DCS | DCS1 | DCS2 |
| F1 | 1.77 × 10^8 | 9.00 × 10^7 | 8.45 × 10^7 |
| F4 | 9.39 × 10^2 | 8.94 × 10^2 | 8.62 × 10^2 |
| F11 | 1.08 × 10^4 | 1.09 × 10^4 | 7.69 × 10^3 |
| F29 | 8.34 × 10^3 | 8.12 × 10^3 | 7.31 × 10^3 |
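For readers who want to experiment with the initialization strategies compared in Table 1, the minimal Python sketch below shows one common construction of a good point set population followed by an opposition-based refinement step. It is an illustration, not the authors' implementation: the prime-based good point formula, the bounds handling, and the sphere objective are assumptions made for the example.

```python
import numpy as np

def good_point_set(n_pop, dim, lb, ub):
    """Good point set initialization (one common construction, assumed here):
    r_j = frac(2*cos(2*pi*j/p)) with p the smallest prime >= 2*dim + 3."""
    p = 2 * dim + 3
    while any(p % k == 0 for k in range(2, int(p ** 0.5) + 1)):
        p += 1
    j = np.arange(1, dim + 1)
    r = np.mod(2.0 * np.cos(2.0 * np.pi * j / p), 1.0)   # the "good point"
    i = np.arange(1, n_pop + 1).reshape(-1, 1)
    points = np.mod(i * r, 1.0)                          # evenly spread points in [0, 1]^dim
    return lb + points * (ub - lb)

def opposition_refine(pop, lb, ub, fitness):
    """Opposition-based learning: evaluate each individual and its opposite
    (lb + ub - x) and keep the better half of the combined set (minimization)."""
    opposite = lb + ub - pop
    combined = np.vstack([pop, opposite])
    f = np.array([fitness(x) for x in combined])
    keep = np.argsort(f)[: pop.shape[0]]
    return combined[keep]

# toy usage on a sphere objective (illustrative only)
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    pop = good_point_set(n_pop=30, dim=10, lb=-100.0, ub=100.0)
    pop = opposition_refine(pop, -100.0, 100.0, sphere)
    print(pop.shape)  # (30, 10)
```

Combining the two steps spreads the initial population more evenly while biasing it toward better regions, which is the effect Table 1 quantifies.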
Table 2. The DCS and DCS3 test results. DCS3 shows the best fitness values obtained by the DCS algorithm plus the adaptive fitness–distance balance reset strategy.

| Function | DCS | DCS3 |
| F1 | 1.77 × 10^8 | 1.20 × 10^8 |
| F4 | 9.39 × 10^2 | 9.07 × 10^2 |
| F11 | 1.08 × 10^4 | 7.69 × 10^3 |
| F29 | 8.34 × 10^3 | 6.93 × 10^3 |
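The adaptive fitness–distance balance (FDB) reset evaluated in Table 2 can be sketched as follows, assuming minimization: FDB scores combine normalized fitness with normalized distance to the current best, and the worst individuals are regenerated near the FDB-selected guide with a step that shrinks over iterations. The worst_frac ratio, the 0.1 scaling, and the linear decay are illustrative assumptions, not the paper's exact adaptive weighting.

```python
import numpy as np

def fdb_scores(pop, fit):
    """Fitness-distance balance: normalized fitness plus normalized Euclidean
    distance to the current best individual (minimization assumed)."""
    best = pop[np.argmin(fit)]
    dist = np.linalg.norm(pop - best, axis=1)
    norm_fit = (np.max(fit) - fit) / (np.ptp(fit) + 1e-12)   # better fitness -> larger score
    norm_dist = dist / (np.max(dist) + 1e-12)
    return norm_fit + norm_dist

def adaptive_fdb_reset(pop, fit, lb, ub, t, t_max, worst_frac=0.2, rng=None):
    """Hypothetical reset step: regenerate the worst individuals around the
    FDB-selected guide with a step size that shrinks as iteration t advances."""
    rng = np.random.default_rng() if rng is None else rng
    guide = pop[np.argmax(fdb_scores(pop, fit))]
    k = max(1, int(worst_frac * len(pop)))
    worst = np.argsort(fit)[-k:]                              # largest objective values
    step = (1.0 - t / t_max) * 0.1 * (ub - lb)                # adaptive, illustrative choice
    new_pop = pop.copy()
    new_pop[worst] = guide + step * rng.uniform(-1.0, 1.0, size=(k, pop.shape[1]))
    return np.clip(new_pop, lb, ub)
```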
Table 3. The DCS and DCS4 test results. DCS4 depicts the results of the DCS with the random dimension-based vertical and horizontal crossover strategy.

| Function | DCS | DCS4 |
| F1 | 1.77 × 10^8 | 7.63 × 10^3 |
| F4 | 9.39 × 10^2 | 6.74 × 10^2 |
| F11 | 1.08 × 10^4 | 2.01 × 10^3 |
| F29 | 8.34 × 10^3 | 6.05 × 10^3 |
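The random-dimension vertical and horizontal crossover behind DCS4 in Table 3 follows the crisscross idea: horizontal crossover mixes pairs of individuals, while vertical crossover mixes two dimensions within one individual. The sketch below is a hedged illustration; the 0.5 dimension-selection probability is an assumption, and greedy replacement (keeping an offspring only if it improves on its parent) is left to the caller.

```python
import numpy as np

def horizontal_crossover(pop, rng=None):
    """Horizontal crossover: random pairs of individuals exchange information
    on a randomly chosen subset of dimensions."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    out = pop.copy()
    order = rng.permutation(n)
    for a, b in zip(order[0::2], order[1::2]):
        dims = rng.random(d) < 0.5                 # random dimensions to cross
        r = rng.random(d)
        c = rng.uniform(-1.0, 1.0, d)
        child = r * pop[a] + (1.0 - r) * pop[b] + c * (pop[a] - pop[b])
        out[a, dims] = child[dims]
    return out

def vertical_crossover(pop, rng=None):
    """Vertical crossover: each individual blends two of its own randomly
    chosen dimensions, which can pull stagnant dimensions out of local optima."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    out = pop.copy()
    for i in range(n):
        d1, d2 = rng.choice(d, size=2, replace=False)
        r = rng.random()
        out[i, d1] = r * pop[i, d1] + (1.0 - r) * pop[i, d2]
    return out
```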
Table 4. Descriptions of the CEC2017 test functions.

| Class | ID | Function | f_min |
| Unimodal Functions | F1 | Shifted and Rotated Bent Cigar Function | 100 |
| | F3 | Shifted and Rotated Zakharov Function | 300 |
| Simple Multimodal Functions | F4 | Shifted and Rotated Rosenbrock's Function | 400 |
| | F5 | Shifted and Rotated Rastrigin's Function | 500 |
| | F6 | Shifted and Rotated Expanded Scaffer's F6 Function | 600 |
| | F7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 |
| | F8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800 |
| | F9 | Shifted and Rotated Levy Function | 900 |
| | F10 | Shifted and Rotated Schwefel's Function | 1000 |
| Hybrid Functions | F11 | Hybrid Functions 1 (N = 3) | 1100 |
| | F12 | Hybrid Functions 2 (N = 3) | 1200 |
| | F13 | Hybrid Functions 3 (N = 3) | 1300 |
| | F14 | Hybrid Functions 4 (N = 4) | 1400 |
| | F15 | Hybrid Functions 5 (N = 4) | 1500 |
| | F16 | Hybrid Functions 6 (N = 4) | 1600 |
| | F17 | Hybrid Functions 7 (N = 5) | 1700 |
| | F18 | Hybrid Functions 8 (N = 5) | 1800 |
| | F19 | Hybrid Functions 9 (N = 5) | 1900 |
| | F20 | Hybrid Functions 10 (N = 6) | 2000 |
| Composition Functions | F21 | Composition Functions 1 (N = 3) | 2100 |
| | F22 | Composition Functions 2 (N = 3) | 2200 |
| | F23 | Composition Functions 3 (N = 4) | 2300 |
| | F24 | Composition Functions 4 (N = 4) | 2400 |
| | F25 | Composition Functions 5 (N = 5) | 2500 |
| | F26 | Composition Functions 6 (N = 5) | 2600 |
| | F27 | Composition Functions 7 (N = 6) | 2700 |
| | F28 | Composition Functions 8 (N = 6) | 2800 |
| | F29 | Composition Functions 9 (N = 3) | 2900 |
| | F30 | Composition Functions 10 (N = 3) | 3000 |
Table 5. Parameter settings for comparative algorithms.

| Algorithm | Parameter Settings |
| PSO | c1 = 2; c2 = 2; w = 0.9 |
| DE | scaling factor = 0.5; crossover probability = 0.5 |
| BKA | p = 0.5; r ∈ [0, 1] |
| SAO | θ ∈ [0, 1] |
| MSCSO | SM = 2; β ∈ [0, 2π]; roulette wheel selection ∈ [0, 360] |
| DFSMA | finding food operation probability z = 0.03; DR ∈ [0, 0.4] |
| LSHADE | N_init = 18D; p = 0.11; H = 6 |
| IMODE | F = 0.35; CR = 0.2 |
| DCS | pc = 0.5; ω = 1 |
Table 6. Comparison of the test function results in CEC2017.

| Function | Index | MSDCS | DCS | PSO | DE | BKA | SAO | MSCSO | DFSMA | LSHADE | IMODE |
| F1 | Fbest | 1.07 × 10^2 | 3.43 × 10^7 | 2.28 × 10^8 | 3.99 × 10^9 | 9.76 × 10^10 | 4.48 × 10^9 | 4.00 × 10^10 | 1.19 × 10^8 | 3.18 × 10^9 | 1.60 × 10^2 |
| F1 | mean | 7.86 × 10^3 | 1.24 × 10^8 | 3.25 × 10^8 | 4.99 × 10^9 | 1.39 × 10^11 | 7.94 × 10^9 | 6.96 × 10^10 | 2.43 × 10^8 | 6.43 × 10^9 | 2.20 × 10^10 |
| F1 | std | 8.16 × 10^3 | 5.76 × 10^7 | 5.81 × 10^7 | 8.66 × 10^8 | 1.70 × 10^10 | 2.15 × 10^9 | 1.24 × 10^10 | 7.91 × 10^7 | 2.06 × 10^9 | 8.23 × 10^10 |
| F3 | Fbest | 3.13 × 10^4 | 1.94 × 10^5 | 3.89 × 10^5 | 6.51 × 10^5 | 2.33 × 10^5 | 5.55 × 10^5 | 2.38 × 10^5 | 2.62 × 10^5 | 2.31 × 10^5 | 3.00 × 10^2 |
| F3 | mean | 6.15 × 10^4 | 2.79 × 10^5 | 5.09 × 10^5 | 8.31 × 10^5 | 2.66 × 10^5 | 9.13 × 10^5 | 2.83 × 10^5 | 4.14 × 10^5 | 4.14 × 10^5 | 3.01 × 10^2 |
| F3 | std | 1.27 × 10^4 | 4.20 × 10^4 | 7.98 × 10^4 | 8.59 × 10^4 | 1.94 × 10^4 | 2.11 × 10^5 | 2.61 × 10^4 | 1.34 × 10^5 | 1.61 × 10^5 | 1.25 |
| F4 | Fbest | 5.68 × 10^2 | 7.65 × 10^2 | 7.09 × 10^2 | 1.71 × 10^3 | 1.48 × 10^4 | 1.10 × 10^3 | 3.64 × 10^3 | 7.65 × 10^2 | 9.81 × 10^2 | 6.75 × 10^2 |
| F4 | mean | 7.00 × 10^2 | 8.80 × 10^2 | 8.53 × 10^2 | 2.15 × 10^3 | 2.32 × 10^4 | 1.48 × 10^3 | 6.87 × 10^3 | 9.24 × 10^2 | 1.50 × 10^3 | 8.55 × 10^3 |
| F4 | std | 5.79 × 10^1 | 5.55 × 10^1 | 6.24 × 10^1 | 2.85 × 10^2 | 6.71 × 10^3 | 2.01 × 10^2 | 2.39 × 10^3 | 8.39 × 10^1 | 3.33 × 10^2 | 2.92 × 10^4 |
| F5 | Fbest | 1.01 × 10^3 | 1.23 × 10^3 | 1.26 × 10^3 | 1.59 × 10^3 | 1.36 × 10^3 | 1.45 × 10^3 | 1.34 × 10^3 | 1.11 × 10^3 | 9.53 × 10^2 | 1.58 × 10^3 |
| F5 | mean | 1.09 × 10^3 | 1.34 × 10^3 | 1.39 × 10^3 | 1.66 × 10^3 | 1.52 × 10^3 | 1.64 × 10^3 | 1.49 × 10^3 | 1.29 × 10^3 | 1.20 × 10^3 | 1.80 × 10^3 |
| F5 | std | 4.15 × 10^1 | 5.26 × 10^1 | 5.46 × 10^1 | 2.97 × 10^1 | 6.84 × 10^1 | 7.57 × 10^1 | 5.53 × 10^1 | 8.88 × 10^1 | 1.01 × 10^2 | 1.24 × 10^2 |
| F6 | Fbest | 6.00 × 10^2 | 6.09 × 10^2 | 6.64 × 10^2 | 6.20 × 10^2 | 6.71 × 10^2 | 6.30 × 10^2 | 6.66 × 10^2 | 6.45 × 10^2 | 6.18 × 10^2 | 6.66 × 10^2 |
| F6 | mean | 6.00 × 10^2 | 6.13 × 10^2 | 6.71 × 10^2 | 6.23 × 10^2 | 6.78 × 10^2 | 6.39 × 10^2 | 6.73 × 10^2 | 6.60 × 10^2 | 6.31 × 10^2 | 6.77 × 10^2 |
| F6 | std | 1.57 × 10^−1 | 2.30 | 4.31 | 2.09 | 3.39 | 5.71 | 3.54 | 6.37 | 5.75 | 1.16 × 10^1 |
| F7 | Fbest | 1.40 × 10^3 | 1.68 × 10^3 | 2.03 × 10^3 | 2.03 × 10^3 | 3.21 × 10^3 | 2.11 × 10^3 | 2.88 × 10^3 | 1.90 × 10^3 | 2.04 × 10^3 | 3.59 × 10^3 |
| F7 | mean | 1.54 × 10^3 | 1.84 × 10^3 | 2.33 × 10^3 | 2.14 × 10^3 | 3.38 × 10^3 | 2.36 × 10^3 | 3.26 × 10^3 | 2.26 × 10^3 | 2.42 × 10^3 | 5.37 × 10^3 |
| F7 | std | 8.31 × 10^1 | 7.18 × 10^1 | 1.57 × 10^2 | 5.22 × 10^1 | 8.93 × 10^1 | 1.14 × 10^2 | 1.38 × 10^2 | 2.55 × 10^2 | 2.47 × 10^2 | 1.06 × 10^3 |
| F8 | Fbest | 1.25 × 10^3 | 1.51 × 10^3 | 1.56 × 10^3 | 1.86 × 10^3 | 1.83 × 10^3 | 1.80 × 10^3 | 1.70 × 10^3 | 1.38 × 10^3 | 1.26 × 10^3 | 1.94 × 10^3 |
| F8 | mean | 1.37 × 10^3 | 1.62 × 10^3 | 1.79 × 10^3 | 1.94 × 10^3 | 1.94 × 10^3 | 1.94 × 10^3 | 1.91 × 10^3 | 1.65 × 10^3 | 1.53 × 10^3 | 2.15 × 10^3 |
| F8 | std | 4.63 × 10^1 | 5.75 × 10^1 | 9.63 × 10^1 | 3.23 × 10^1 | 6.90 × 10^1 | 5.59 × 10^1 | 7.88 × 10^1 | 1.27 × 10^2 | 9.48 × 10^1 | 1.01 × 10^2 |
| F9 | Fbest | 1.02 × 10^4 | 8.29 × 10^3 | 4.91 × 10^4 | 3.37 × 10^4 | 3.03 × 10^4 | 1.55 × 10^4 | 2.80 × 10^4 | 2.62 × 10^4 | 1.34 × 10^4 | 2.99 × 10^4 |
| F9 | mean | 1.76 × 10^4 | 2.37 × 10^4 | 6.52 × 10^4 | 6.03 × 10^4 | 3.52 × 10^4 | 3.14 × 10^4 | 3.41 × 10^4 | 3.50 × 10^4 | 2.62 × 10^4 | 4.14 × 10^4 |
| F9 | std | 4.88 × 10^3 | 7.37 × 10^3 | 7.02 × 10^3 | 1.01 × 10^4 | 3.05 × 10^3 | 8.18 × 10^3 | 3.98 × 10^3 | 4.33 × 10^3 | 7.07 × 10^3 | 1.59 × 10^4 |
| F10 | Fbest | 1.84 × 10^4 | 2.30 × 10^4 | 1.48 × 10^4 | 3.08 × 10^4 | 1.64 × 10^4 | 1.85 × 10^4 | 1.60 × 10^4 | 1.52 × 10^4 | 1.98 × 10^4 | 1.27 × 10^4 |
| F10 | mean | 1.94 × 10^4 | 2.47 × 10^4 | 1.85 × 10^4 | 3.20 × 10^4 | 2.02 × 10^4 | 2.99 × 10^4 | 1.86 × 10^4 | 1.84 × 10^4 | 2.18 × 10^4 | 1.56 × 10^4 |
| F10 | std | 5.13 × 10^2 | 6.64 × 10^2 | 1.86 × 10^3 | 5.92 × 10^2 | 1.50 × 10^3 | 3.48 × 10^3 | 1.53 × 10^3 | 1.41 × 10^3 | 1.04 × 10^3 | 2.55 × 10^3 |
| F11 | Fbest | 1.83 × 10^3 | 5.20 × 10^3 | 2.25 × 10^4 | 1.94 × 10^5 | 3.44 × 10^4 | 1.32 × 10^5 | 2.22 × 10^4 | 3.72 × 10^3 | 9.27 × 10^3 | 2.16 × 10^3 |
| F11 | mean | 2.02 × 10^3 | 1.03 × 10^4 | 4.03 × 10^4 | 2.50 × 10^5 | 6.36 × 10^4 | 2.20 × 10^5 | 4.49 × 10^4 | 6.05 × 10^3 | 4.04 × 10^4 | 1.95 × 10^4 |
| F11 | std | 1.19 × 10^2 | 2.68 × 10^3 | 9.73 × 10^3 | 3.42 × 10^4 | 1.61 × 10^4 | 4.78 × 10^4 | 1.29 × 10^4 | 2.06 × 10^3 | 3.98 × 10^4 | 6.33 × 10^4 |
| F12 | Fbest | 1.27 × 10^6 | 2.09 × 10^7 | 2.28 × 10^8 | 2.37 × 10^9 | 2.01 × 10^10 | 2.49 × 10^8 | 3.01 × 10^9 | 1.19 × 10^8 | 1.64 × 10^8 | 4.85 × 10^7 |
| F12 | mean | 3.10 × 10^6 | 5.38 × 10^7 | 5.12 × 10^8 | 4.27 × 10^9 | 4.78 × 10^10 | 4.97 × 10^8 | 1.93 × 10^10 | 3.72 × 10^8 | 2.83 × 10^8 | 1.55 × 10^8 |
| F12 | std | 1.78 × 10^6 | 2.44 × 10^7 | 1.87 × 10^8 | 6.14 × 10^8 | 1.35 × 10^10 | 1.24 × 10^8 | 9.92 × 10^9 | 1.66 × 10^8 | 9.50 × 10^7 | 8.56 × 10^7 |
| F13 | Fbest | 1.75 × 10^3 | 6.24 × 10^3 | 7.04 × 10^5 | 3.06 × 10^4 | 1.88 × 10^9 | 2.48 × 10^4 | 7.56 × 10^6 | 5.67 × 10^4 | 2.25 × 10^4 | 7.63 × 10^3 |
| F13 | mean | 4.80 × 10^3 | 1.65 × 10^4 | 2.51 × 10^6 | 1.32 × 10^6 | 7.51 × 10^9 | 6.56 × 10^4 | 2.39 × 10^9 | 5.34 × 10^5 | 4.92 × 10^4 | 1.38 × 10^9 |
| F13 | std | 3.82 × 10^3 | 6.65 × 10^3 | 1.19 × 10^6 | 2.67 × 10^6 | 3.14 × 10^9 | 3.35 × 10^4 | 2.50 × 10^9 | 1.00 × 10^6 | 1.86 × 10^4 | 7.44 × 10^9 |
| F14 | Fbest | 4.22 × 10^4 | 1.65 × 10^5 | 1.16 × 10^6 | 2.29 × 10^7 | 3.08 × 10^5 | 6.02 × 10^5 | 1.46 × 10^6 | 1.57 × 10^6 | 9.83 × 10^4 | 2.26 × 10^3 |
| F14 | mean | 2.07 × 10^5 | 9.94 × 10^5 | 3.01 × 10^6 | 4.62 × 10^7 | 1.96 × 10^6 | 2.68 × 10^6 | 6.66 × 10^6 | 4.74 × 10^6 | 6.75 × 10^5 | 5.07 × 10^5 |
| F14 | std | 1.71 × 10^5 | 6.10 × 10^5 | 1.18 × 10^6 | 1.29 × 10^7 | 1.44 × 10^6 | 1.87 × 10^6 | 3.35 × 10^6 | 2.10 × 10^6 | 4.87 × 10^5 | 8.60 × 10^5 |
| F15 | Fbest | 1.86 × 10^3 | 2.79 × 10^3 | 7.61 × 10^4 | 4.06 × 10^4 | 1.52 × 10^7 | 3.67 × 10^3 | 2.14 × 10^5 | 2.86 × 10^4 | 4.12 × 10^3 | 3.05 × 10^3 |
| F15 | mean | 3.75 × 10^3 | 5.81 × 10^3 | 2.03 × 10^5 | 3.61 × 10^6 | 1.06 × 10^9 | 6.93 × 10^3 | 6.17 × 10^8 | 3.91 × 10^5 | 1.17 × 10^4 | 1.24 × 10^4 |
| F15 | std | 2.82 × 10^3 | 4.16 × 10^3 | 1.04 × 10^5 | 7.10 × 10^6 | 1.61 × 10^9 | 3.45 × 10^3 | 1.28 × 10^9 | 8.15 × 10^5 | 5.25 × 10^3 | 1.41 × 10^4 |
| F16 | Fbest | 4.06 × 10^3 | 5.64 × 10^3 | 4.31 × 10^3 | 1.03 × 10^4 | 7.03 × 10^3 | 4.80 × 10^3 | 5.99 × 10^3 | 4.19 × 10^3 | 4.76 × 10^3 | 5.08 × 10^3 |
| F16 | mean | 5.03 × 10^3 | 8.07 × 10^3 | 6.42 × 10^3 | 1.14 × 10^4 | 9.55 × 10^3 | 6.69 × 10^3 | 8.29 × 10^3 | 6.12 × 10^3 | 7.12 × 10^3 | 7.01 × 10^3 |
| F16 | std | 3.52 × 10^2 | 8.13 × 10^2 | 8.10 × 10^2 | 5.13 × 10^2 | 1.13 × 10^3 | 1.85 × 10^3 | 1.11 × 10^3 | 7.69 × 10^2 | 8.05 × 10^2 | 8.38 × 10^2 |
| F17 | Fbest | 3.16 × 10^3 | 5.47 × 10^3 | 4.42 × 10^3 | 7.41 × 10^3 | 5.32 × 10^3 | 4.22 × 10^3 | 4.98 × 10^3 | 3.95 × 10^3 | 4.35 × 10^3 | 5.84 × 10^3 |
| F17 | mean | 4.23 × 10^3 | 6.41 × 10^3 | 5.33 × 10^3 | 8.07 × 10^3 | 1.26 × 10^4 | 6.71 × 10^3 | 8.50 × 10^3 | 5.60 × 10^3 | 5.72 × 10^3 | 1.06 × 10^5 |
| F17 | std | 3.74 × 10^2 | 5.05 × 10^2 | 5.00 × 10^2 | 2.84 × 10^2 | 7.66 × 10^3 | 1.43 × 10^3 | 3.06 × 10^3 | 7.29 × 10^2 | 5.95 × 10^2 | 2.87 × 10^5 |
| F18 | Fbest | 1.15 × 10^5 | 2.39 × 10^5 | 8.89 × 10^5 | 3.64 × 10^7 | 7.10 × 10^5 | 1.74 × 10^6 | 2.42 × 10^6 | 3.24 × 10^6 | 2.84 × 10^5 | 2.61 × 10^3 |
| F18 | mean | 4.44 × 10^5 | 1.51 × 10^6 | 3.93 × 10^6 | 8.12 × 10^7 | 2.61 × 10^6 | 8.96 × 10^6 | 6.92 × 10^6 | 8.06 × 10^6 | 1.08 × 10^6 | 7.17 × 10^6 |
| F18 | std | 2.43 × 10^5 | 1.02 × 10^6 | 1.70 × 10^6 | 2.29 × 10^7 | 1.38 × 10^6 | 5.49 × 10^6 | 3.68 × 10^6 | 3.85 × 10^6 | 7.65 × 10^5 | 3.00 × 10^7 |
| F19 | Fbest | 2.03 × 10^3 | 2.35 × 10^3 | 8.01 × 10^5 | 2.27 × 10^4 | 1.59 × 10^7 | 2.73 × 10^3 | 1.71 × 10^6 | 1.06 × 10^5 | 3.10 × 10^3 | 7.48 × 10^4 |
| F19 | mean | 5.53 × 10^3 | 6.76 × 10^3 | 2.99 × 10^6 | 4.84 × 10^6 | 5.21 × 10^8 | 6.09 × 10^3 | 5.06 × 10^8 | 5.28 × 10^5 | 9.86 × 10^3 | 1.53 × 10^6 |
| F19 | std | 3.97 × 10^3 | 5.88 × 10^3 | 2.40 × 10^6 | 1.08 × 10^7 | 6.80 × 10^8 | 4.80 × 10^3 | 1.08 × 10^9 | 2.46 × 10^5 | 6.75 × 10^3 | 2.14 × 10^6 |
| F20 | Fbest | 3.86 × 10^3 | 5.99 × 10^3 | 4.54 × 10^3 | 6.86 × 10^3 | 4.75 × 10^3 | 3.79 × 10^3 | 4.77 × 10^3 | 4.05 × 10^3 | 5.05 × 10^3 | 4.45 × 10^3 |
| F20 | mean | 4.54 × 10^3 | 6.84 × 10^3 | 5.44 × 10^3 | 7.75 × 10^3 | 5.36 × 10^3 | 6.63 × 10^3 | 5.83 × 10^3 | 5.49 × 10^3 | 6.36 × 10^3 | 5.77 × 10^3 |
| F20 | std | 2.85 × 10^2 | 3.86 × 10^2 | 4.88 × 10^2 | 3.92 × 10^2 | 3.88 × 10^2 | 1.56 × 10^3 | 5.35 × 10^2 | 7.54 × 10^2 | 4.77 × 10^2 | 9.69 × 10^2 |
| F21 | Fbest | 2.82 × 10^3 | 3.03 × 10^3 | 3.42 × 10^3 | 3.41 × 10^3 | 3.79 × 10^3 | 3.33 × 10^3 | 3.31 × 10^3 | 2.89 × 10^3 | 2.87 × 10^3 | 3.50 × 10^3 |
| F21 | mean | 2.89 × 10^3 | 3.15 × 10^3 | 3.72 × 10^3 | 3.48 × 10^3 | 4.15 × 10^3 | 3.48 × 10^3 | 3.54 × 10^3 | 3.09 × 10^3 | 3.07 × 10^3 | 3.82 × 10^3 |
| F21 | std | 4.37 × 10^1 | 4.98 × 10^1 | 1.74 × 10^2 | 3.24 × 10^1 | 1.78 × 10^2 | 6.82 × 10^1 | 1.54 × 10^2 | 8.83 × 10^1 | 1.04 × 10^2 | 1.55 × 10^2 |
| F22 | Fbest | 2.08 × 10^4 | 2.54 × 10^4 | 1.94 × 10^4 | 3.26 × 10^4 | 2.00 × 10^4 | 2.06 × 10^4 | 1.85 × 10^4 | 1.77 × 10^4 | 2.22 × 10^4 | 1.58 × 10^4 |
| F22 | mean | 2.21 × 10^4 | 2.72 × 10^4 | 2.22 × 10^4 | 3.41 × 10^4 | 2.33 × 10^4 | 3.08 × 10^4 | 2.27 × 10^4 | 2.06 × 10^4 | 2.43 × 10^4 | 1.94 × 10^4 |
| F22 | std | 5.79 × 10^2 | 7.95 × 10^2 | 1.35 × 10^3 | 5.58 × 10^2 | 1.72 × 10^3 | 4.16 × 10^3 | 1.74 × 10^3 | 1.73 × 10^3 | 1.08 × 10^3 | 4.54 × 10^3 |
| F23 | Fbest | 3.10 × 10^3 | 3.44 × 10^3 | 4.28 × 10^3 | 3.75 × 10^3 | 4.60 × 10^3 | 3.37 × 10^3 | 3.81 × 10^3 | 3.36 × 10^3 | 3.41 × 10^3 | 4.34 × 10^3 |
| F23 | mean | 3.19 × 10^3 | 3.53 × 10^3 | 5.29 × 10^3 | 3.81 × 10^3 | 5.09 × 10^3 | 3.61 × 10^3 | 4.09 × 10^3 | 3.51 × 10^3 | 3.57 × 10^3 | 4.94 × 10^3 |
| F23 | std | 4.07 × 10^1 | 6.20 × 10^1 | 3.45 × 10^2 | 2.90 × 10^1 | 2.84 × 10^2 | 1.05 × 10^2 | 1.64 × 10^2 | 7.00 × 10^1 | 7.61 × 10^1 | 3.60 × 10^2 |
| F24 | Fbest | 3.73 × 10^3 | 3.82 × 10^3 | 5.03 × 10^3 | 4.27 × 10^3 | 5.68 × 10^3 | 4.08 × 10^3 | 4.81 × 10^3 | 3.86 × 10^3 | 3.99 × 10^3 | 5.01 × 10^3 |
| F24 | mean | 3.85 × 10^3 | 4.05 × 10^3 | 5.66 × 10^3 | 4.35 × 10^3 | 6.67 × 10^3 | 4.22 × 10^3 | 5.12 × 10^3 | 4.15 × 10^3 | 4.20 × 10^3 | 5.89 × 10^3 |
| F24 | std | 7.09 × 10^1 | 1.09 × 10^2 | 3.79 × 10^2 | 3.95 × 10^1 | 4.91 × 10^2 | 1.13 × 10^2 | 1.66 × 10^2 | 1.37 × 10^2 | 1.45 × 10^2 | 7.71 × 10^2 |
| F25 | Fbest | 3.21 × 10^3 | 3.52 × 10^3 | 3.36 × 10^3 | 5.18 × 10^3 | 9.38 × 10^3 | 4.16 × 10^3 | 5.57 × 10^3 | 3.47 × 10^3 | 3.74 × 10^3 | 3.15 × 10^3 |
| F25 | mean | 3.34 × 10^3 | 3.62 × 10^3 | 3.45 × 10^3 | 5.92 × 10^3 | 1.29 × 10^4 | 4.64 × 10^3 | 7.32 × 10^3 | 3.63 × 10^3 | 4.18 × 10^3 | 6.51 × 10^3 |
| F25 | std | 5.71 × 10^1 | 7.05 × 10^1 | 6.22 × 10^1 | 4.37 × 10^2 | 1.87 × 10^3 | 3.05 × 10^2 | 1.03 × 10^3 | 5.66 × 10^1 | 2.14 × 10^2 | 1.18 × 10^4 |
| F26 | Fbest | 1.01 × 10^4 | 1.31 × 10^4 | 3.46 × 10^3 | 1.62 × 10^4 | 2.74 × 10^4 | 1.48 × 10^4 | 2.17 × 10^4 | 4.56 × 10^3 | 1.26 × 10^4 | 2.07 × 10^4 |
| F26 | mean | 1.20 × 10^4 | 1.42 × 10^4 | 1.98 × 10^4 | 1.72 × 10^4 | 3.48 × 10^4 | 1.65 × 10^4 | 2.90 × 10^4 | 1.47 × 10^4 | 1.49 × 10^4 | 2.70 × 10^4 |
| F26 | std | 9.37 × 10^2 | 6.18 × 10^2 | 8.36 × 10^3 | 4.46 × 10^2 | 2.88 × 10^3 | 8.57 × 10^2 | 2.72 × 10^3 | 2.27 × 10^3 | 1.38 × 10^3 | 7.19 × 10^3 |
| F27 | Fbest | 3.42 × 10^3 | 3.47 × 10^3 | 3.35 × 10^3 | 4.13 × 10^3 | 4.53 × 10^3 | 3.40 × 10^3 | 4.09 × 10^3 | 3.53 × 10^3 | 3.59 × 10^3 | 4.24 × 10^3 |
| F27 | mean | 3.55 × 10^3 | 3.63 × 10^3 | 3.78 × 10^3 | 4.39 × 10^3 | 5.59 × 10^3 | 3.53 × 10^3 | 4.52 × 10^3 | 3.72 × 10^3 | 3.83 × 10^3 | 5.21 × 10^3 |
| F27 | std | 7.76 × 10^1 | 9.47 × 10^1 | 5.76 × 10^2 | 1.40 × 10^2 | 7.18 × 10^2 | 7.68 × 10^1 | 2.85 × 10^2 | 9.12 × 10^1 | 1.48 × 10^2 | 1.34 × 10^3 |
| F28 | Fbest | 3.40 × 10^3 | 3.57 × 10^3 | 3.39 × 10^3 | 8.77 × 10^3 | 1.02 × 10^4 | 3.87 × 10^3 | 6.69 × 10^3 | 3.58 × 10^3 | 4.14 × 10^3 | 3.45 × 10^3 |
| F28 | mean | 3.45 × 10^3 | 3.74 × 10^3 | 3.54 × 10^3 | 1.11 × 10^4 | 1.66 × 10^4 | 4.57 × 10^3 | 9.61 × 10^3 | 3.67 × 10^3 | 5.25 × 10^3 | 7.04 × 10^3 |
| F28 | std | 3.36 × 10^1 | 8.80 × 10^1 | 6.35 × 10^1 | 1.38 × 10^3 | 2.33 × 10^3 | 6.29 × 10^2 | 1.93 × 10^3 | 5.79 × 10^1 | 8.31 × 10^2 | 1.05 × 10^4 |
| F29 | Fbest | 5.10 × 10^3 | 6.36 × 10^3 | 7.59 × 10^3 | 9.57 × 10^3 | 9.89 × 10^3 | 5.75 × 10^3 | 7.97 × 10^3 | 6.93 × 10^3 | 6.53 × 10^3 | 7.77 × 10^3 |
| F29 | mean | 6.10 × 10^3 | 8.08 × 10^3 | 9.32 × 10^3 | 1.05 × 10^4 | 1.51 × 10^4 | 7.43 × 10^3 | 1.02 × 10^4 | 8.12 × 10^3 | 8.03 × 10^3 | 2.11 × 10^4 |
| F29 | std | 3.91 × 10^2 | 7.68 × 10^2 | 8.62 × 10^2 | 4.55 × 10^2 | 4.15 × 10^3 | 1.03 × 10^3 | 1.05 × 10^3 | 6.19 × 10^2 | 7.47 × 10^2 | 4.80 × 10^4 |
| F30 | Fbest | 7.51 × 10^3 | 5.12 × 10^4 | 1.11 × 10^7 | 3.57 × 10^6 | 3.14 × 10^8 | 1.34 × 10^5 | 6.15 × 10^7 | 3.39 × 10^6 | 1.16 × 10^5 | 1.01 × 10^7 |
| F30 | mean | 1.30 × 10^4 | 2.18 × 10^5 | 3.33 × 10^7 | 1.21 × 10^7 | 5.40 × 10^9 | 5.24 × 10^5 | 1.46 × 10^9 | 1.30 × 10^7 | 1.14 × 10^6 | 1.22 × 10^9 |
| F30 | std | 7.45 × 10^3 | 1.11 × 10^5 | 1.38 × 10^7 | 5.10 × 10^6 | 3.58 × 10^9 | 2.65 × 10^5 | 1.70 × 10^9 | 6.55 × 10^6 | 7.54 × 10^5 | 6.17 × 10^9 |
Table 7. The results of the Wilcoxon rank-sum test and the Friedman test.

| | MSDCS | DCS | PSO | DE | BKA | SAO | MSCSO | DFSMA | LSHADE | IMODE |
| +/−/= | 0/0/0 | 28/1/0 | 26/1/2 | 29/0/0 | 28/0/1 | 27/2/0 | 27/1/1 | 29/0/0 | 28/0/1 | 24/4/1 |
| Ranking mean | 1.79 | 4.09 | 5.57 | 8.05 | 8.52 | 5.48 | 7.36 | 4.36 | 4.59 | 5.19 |
| Ranking | 1 | 2 | 7 | 9 | 10 | 6 | 8 | 3 | 4 | 5 |
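The statistics in Table 7 are standard nonparametric tests. Assuming the per-function results of each algorithm's independent runs are available as arrays, the '+/−/=' symbols and the Friedman mean ranks could be computed roughly as in the SciPy-based sketch below; the 0.05 significance level and the data layout are assumptions.

```python
import numpy as np
from scipy import stats

def wilcoxon_symbol(msdcs_runs, other_runs, alpha=0.05):
    """Wilcoxon rank-sum comparison on one function: '+' if MSDCS is
    significantly better (smaller error), '-' if worse, '=' otherwise."""
    _, p = stats.ranksums(msdcs_runs, other_runs)
    if p >= alpha:
        return "="
    return "+" if np.mean(msdcs_runs) < np.mean(other_runs) else "-"

def friedman_mean_ranks(results):
    """Friedman test on a (functions x algorithms) matrix of mean errors;
    returns the test statistic, p-value, and each algorithm's mean rank."""
    chi2, p = stats.friedmanchisquare(*results.T)
    ranks = np.apply_along_axis(stats.rankdata, 1, results)   # rank per function row
    return chi2, p, ranks.mean(axis=0)

# synthetic illustration: 30 runs of two algorithms on one function
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(1.0, 0.1, 30)   # stand-in for MSDCS errors
    b = rng.normal(1.3, 0.1, 30)   # stand-in for a competitor
    print(wilcoxon_symbol(a, b))   # '+' expected
```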
Table 8. The index data of simple terrain.

| Index | MSDCS | DCS | PSO | DE | BKA | SAO | MSCSO | DFSMA | LSHADE | IMODE |
| cost | 9.26 × 10^3 | 9.28 × 10^3 | 1.31 × 10^4 | 9.65 × 10^3 | 1.03 × 10^4 | 1.15 × 10^4 | 9.34 × 10^3 | 1.10 × 10^4 | 9.31 × 10^3 | 2.54 × 10^4 |
| av | 9.27 × 10^3 | 9.33 × 10^3 | 1.15 × 10^4 | 9.67 × 10^3 | 9.67 × 10^3 | 1.07 × 10^4 | 9.32 × 10^3 | 1.09 × 10^4 | 9.75 × 10^3 | 2.35 × 10^4 |
| std | 5.07 | 5.23 × 10^1 | 8.23 × 10^2 | 1.59 × 10^2 | 4.41 × 10^2 | 5.30 × 10^2 | 4.13 × 10^1 | 7.53 × 10^1 | 6.46 × 10^2 | 2.14 × 10^3 |
Table 9. The index data of complex terrain.

| Index | MSDCS | DCS | PSO | DE | BKA | SAO | MSCSO | DFSMA | LSHADE | IMODE |
| cost | 9.31 × 10^3 | 9.59 × 10^3 | 1.23 × 10^4 | 9.77 × 10^3 | 1.14 × 10^4 | 1.23 × 10^4 | 9.74 × 10^3 | 1.20 × 10^4 | 1.10 × 10^4 | 2.88 × 10^4 |
| av | 9.30 × 10^3 | 9.59 × 10^3 | 1.23 × 10^4 | 1.07 × 10^4 | 1.06 × 10^4 | 1.12 × 10^4 | 9.87 × 10^3 | 1.20 × 10^4 | 9.99 × 10^3 | 2.87 × 10^4 |
| std | 1.17 × 10^1 | 9.94 × 10^1 | 2.07 × 10^3 | 5.49 × 10^2 | 9.71 × 10^2 | 7.09 × 10^2 | 6.10 × 10^2 | 3.66 × 10^1 | 6.47 × 10^2 | 2.94 × 10^3 |
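Tables 8 and 9 report three indices per algorithm: "cost", "av", and "std". Assuming these are, respectively, the best path cost, the average cost, and the standard deviation over repeated planning runs, they could be computed from a vector of per-run costs as in the brief sketch below (the number of runs and the interpretation of "cost" are assumptions).

```python
import numpy as np

def summarize_runs(costs):
    """Summary indices per algorithm and terrain: best path cost, average
    cost, and standard deviation over the independent runs (assumed layout)."""
    costs = np.asarray(costs, dtype=float)
    return {"cost": costs.min(), "av": costs.mean(), "std": costs.std()}

# synthetic example: 30 hypothetical runs of one planner on one terrain
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    print(summarize_runs(rng.normal(9.3e3, 50.0, size=30)))
```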
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
