Article

Somersault Foraging and Elite Opposition-Based Learning Dung Beetle Optimization Algorithm

Daming Zhang, Zijian Wang and Fangjin Sun

1 College of Computer Science and Engineering, Guilin University of Technology, Guilin 541004, China
2 Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin 541004, China
3 College of Civil Engineering, Guilin University of Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2024, 14(19), 8624; https://doi.org/10.3390/app14198624
Submission received: 8 July 2024 / Revised: 15 August 2024 / Accepted: 20 August 2024 / Published: 25 September 2024

Abstract

To tackle the shortcomings of the Dung Beetle Optimization (DBO) Algorithm, which include slow convergence speed, an imbalance between exploration and exploitation, and susceptibility to local optima, a Somersault Foraging and Elite Opposition-Based Learning Dung Beetle Optimization (SFEDBO) Algorithm is proposed. This algorithm utilizes an elite opposition-based learning strategy as the method for generating the initial population, resulting in a more diverse initial population. To address the imbalance between exploration and exploitation in the algorithm, an adaptive strategy is employed to dynamically adjust the number of dung beetles and eggs with each iteration of the population. Inspired by the Manta Ray Foraging Optimization (MRFO) algorithm, we utilize its somersault foraging strategy to perturb the position of the optimal individual, thereby enhancing the algorithm’s ability to escape from local optima. To verify the effectiveness of the proposed improvements, the SFEDBO algorithm is utilized to optimize 23 benchmark test functions. The results show that the SFEDBO algorithm achieves better solution accuracy and stability, outperforming the DBO algorithm in terms of optimization results on the test functions. Finally, the SFEDBO algorithm was applied to the practical application problems of pressure vessel design, tension/compression spring design, and 3D unmanned aerial vehicle (UAV) path planning, and better optimization results were obtained. The research shows that the SFEDBO algorithm proposed in this paper is applicable to actual optimization problems and has better performance.

1. Introduction

Optimization problems [1], which frequently arise in scientific research and practical production, primarily involve the quest for the extreme value of an objective function subject to a specified set of constraints. These problems are widespread in numerous diverse fields, such as flow shop scheduling [2,3], image processing [4,5], path planning [6,7], engineering design [8], and fault diagnosis [9]. As optimization problems become increasingly complex, optimization methods have emerged. These are methods for searching for optimal solutions to problems and have long been essential tools in economics, production, engineering, and other fields. Optimization methods can typically be categorized into two main groups. One is the gradient-based exact search algorithm, which generally requires continuous derivatives of the objective function and strongly depends on the selection of initial values. It is often powerless in the face of more complex multi-extremum problems or problems lacking clear structural information. The other type of optimization method is the heuristic algorithm based on experience. This algorithm type is grounded in empirical rules for discovery and is not reliant on the mathematical properties of the problem. It has no definite optimization steps and is suitable for solving various complex optimization problems.
Heuristic algorithms can be categorized into three distinct types. The first is evolutionary algorithms, which primarily simulate the principle of natural selection in nature to facilitate population evolution, with examples including the Genetic Algorithm (GA) [10] and the Differential Evolution (DE) [11] Algorithm. The second is inspired by social behavior or physical rules, such as the Gravitational Search Algorithm (GSA) [12], the Imperialist Competitive Algorithm (ICA) [13], etc. The third is inspired by swarm intelligence, which mainly completes the population position update by simulating the reproduction or hunting behavior of populations in nature, such as the Particle Swarm Optimization (PSO) [14] Algorithm, the Whale Optimization Algorithm (WOA) [15], the Spotted Hyena Optimizer (SHO) [16], the Rat Swarm Optimizer (RSO) [17], the Grey Wolf Optimizer (GWO) [18], the Black-winged Kite Algorithm (BKA) [19], and the Dung Beetle Optimizer (DBO) [20]. The DBO algorithm was first proposed in December 2022. Inspired by the reproduction, foraging, and other behaviors of the dung beetle population, it executes corresponding search strategies based on the different roles performed by individual dung beetles, such as dancing, ball rolling, reproduction, foraging, and theft. Compared to other algorithms, the DBO algorithm exhibits the following unique characteristics:
  • The DBO incorporates a novel rolling search mechanism, where various search modes allow the population to comprehensively search through the space by leveraging information from various time intervals, thereby demonstrating a certain capability to avoid local optima.
  • The DBO algorithm employs diverse boundary selection strategies that dynamically choose search areas according to the present best solution and the inherent boundaries of the optimization problem. This feature allows the DBO algorithm to be versatile and applicable across a wide range of fields.
  • The R parameter in the DBO algorithm demonstrates dynamic variation characteristics, which can enhance the algorithm’s exploration and exploitation states.
Since its introduction, DBO has attracted widespread attention due to its strong optimization capabilities. Reference [20] showed that DBO significantly outperforms classical optimization algorithms such as HHO [21], GWO [18], WOA [15], SSA [22], and PSO [14]. Since it was proposed, the DBO algorithm and its variants have found applications in diverse domains, including air quality prediction [23], engineering design problems [24], and fault diagnosis [25].
When contrasted with other algorithms, the DBO algorithm similarly reveals shortcomings, which encompass a sluggish convergence rate, a proclivity for being trapped in local optima, as well as an imbalance between exploration and exploitation capabilities. Many scholars have made improvements to DBO to enhance its performance by addressing these issues. To address the issue of inadequate diversity within the DBO population, reference [26] integrated the idea of quantum state update into quasi-oppositional learning to generate the initial population. Experimental results demonstrated that the initial population generated by this method significantly enhanced both the algorithm’s capacity for global exploration and its rate of convergence. To boost the algorithm’s capacity to avoid local optima, reference [27] introduced a Cauchy–Gaussian mutation strategy during algorithm stagnation, substantially mitigating the likelihood of the population being trapped in local optima, and further augmenting the algorithm’s solution precision. In order to tackle the disparity between the exploratory and exploitative capabilities within the DBO algorithm, reference [28] incorporated the search methodology of the Improved Sine Algorithm (MSA) [29] into the DBO population, ultimately fostering equilibrium between exploration and exploitation.
This paper addresses the slow convergence rate, the vulnerability to entrapment in local optima, and the imbalance between exploration and exploitation in DBO by proposing a Somersault Foraging and Elite Opposition-Based Learning Dung Beetle Optimization (SFEDBO) Algorithm. An elite opposition-based learning strategy is employed as the method for population initialization, generating an initial population with greater diversity. An adaptive strategy is employed to dynamically adjust the number of rolling dung beetles and hatching balls, ensuring a balance between exploration and exploitation. The somersault foraging strategy is incorporated to boost the algorithm’s capability of evading local optima. In addition, the SFEDBO algorithm has been applied to the design of pressure vessels [30], tension/compression spring design problems [31], and 3D unmanned aerial vehicle (UAV) path planning problems [32], all of which have achieved good results, further demonstrating the great potential of this algorithm for practical applications.
The key contributions presented in this paper are as follows:
  • We are the first to apply the Elite Opposition-Based Learning (EOBL) strategy to the DBO algorithm, generating an initial population with higher diversity through this strategy. This innovation accelerates the early convergence speed of the algorithm and also enhances its global search capability.
  • We design a novel adaptive strategy that dynamically adjusts the ratio of ball-rolling dung beetles to brood balls according to the algorithm’s running phase. This strategy enhances exploration capability in the early stages and strengthens exploitation efficiency in the later stages, ultimately achieving a balance between exploration and exploitation.
  • We innovatively introduce the somersault foraging strategy. This strategy simulates the somersault movements of manta rays during foraging, enabling the algorithm to effectively escape from local optima when trapped and explore a broader solution space.
  • We evaluated the SFEDBO algorithm on benchmark test functions and compared the results with those of other algorithms, demonstrating that the SFEDBO algorithm delivers markedly better performance than the compared algorithms.
  • We applied the SFEDBO algorithm to solve the pressure vessel design problem, tension/compression spring design problem, and three-dimensional unmanned aerial vehicle (UAV) path-planning problem. We compared and analyzed the results obtained by this algorithm with those obtained by four other algorithms. The comparison demonstrates that the SFEDBO algorithm has better performance in dealing with such issues, fully proving its great potential in practical applications.
The rest of this paper is structured as follows: Section 2 offers an outline of the search mechanisms utilized by four distinct species of dung beetles. Section 3 introduces three improvement strategies designed to rectify the shortcomings of the DBO, along with the operational steps of the SFEDBO. Section 4 analyzes the test outcomes of the SFEDBO algorithm on test functions. Section 5 investigates the practical application performance of the SFEDBO algorithm. Section 6 summarizes the conclusions and proposes avenues for future research.

2. Dung Beetle Optimization (DBO) Algorithm

2.1. The Ball-Rolling Dung Beetle

To guarantee that the ball rolls in a straight path, the dung beetle must navigate under the guidance of the sun. Additionally, the strength of the sunlight plays a role in determining the beetle’s rolling trajectory. The definition of rolling ball behavior is as follows:
$$x_i(t+1) = x_i(t) + \alpha \times k \times x_i(t-1) + b \times \Delta x, \qquad \Delta x = \left| x_i(t) - X^w \right| \tag{1}$$
In Equation (1), $t$ denotes the current iteration number, $x_i(t)$ represents the position of the $i$-th dung beetle at the $t$-th iteration, and $k \in (0, 0.2]$ is a fixed parameter. Additionally, $b \in (0, 1)$ is a fixed value indicating the degree to which changes in sunlight intensity influence the dung beetle's moving path. Furthermore, $\alpha$ is a coefficient simulating deviations of the dung beetle's path due to natural factors: a value of $-1$ signifies significant deviation from the original path, whereas a value of $1$ indicates no such deviation. $X^w$ stands for the global worst position, and $\Delta x$ is used to simulate the strength of the sunlight.
Upon encountering an obstacle during its movement, the dung beetle adjusts its trajectory by performing a dance-like maneuver. This dance-like action, simulated by a tangent function operating within the domain of [0, π], helps the beetle navigate around obstacles. The rolling dung beetle’s unique dance behavior, which facilitates this directional change, is described below:
$$x_i(t+1) = x_i(t) + \tan(\theta) \left| x_i(t) - x_i(t-1) \right| \tag{2}$$
where $\theta$ represents the inclination angle, $\theta \in [0, \pi]$.
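As a concrete illustration of the two rules above, the following minimal NumPy sketch implements Equations (1) and (2). The default values k = 0.1 and b = 0.3, the probability used to pick the deviation coefficient α, and the handling of the dance angle are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

def roll_ball(x_t, x_prev, worst, k=0.1, b=0.3, rng=np.random.default_rng()):
    """Ball-rolling update, Eq. (1): alpha = 1 means no deviation, -1 a deviation."""
    alpha = 1.0 if rng.random() > 0.1 else -1.0    # deviation probability is an assumption
    delta_x = np.abs(x_t - worst)                   # Delta x simulates the sunlight intensity
    return x_t + alpha * k * x_prev + b * delta_x

def dance(x_t, x_prev, rng=np.random.default_rng()):
    """Dance behaviour, Eq. (2): re-orient with tan(theta), theta drawn from [0, pi].
    (The original DBO reportedly leaves the position unchanged at theta = 0, pi/2, pi.)"""
    theta = rng.uniform(0.0, np.pi)
    return x_t + np.tan(theta) * np.abs(x_t - x_prev)
```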

2.2. The Brood Ball

The dung beetle will find a safe area to hide the dung ball and use it as a hatching ball to lay eggs in this secure region, which is known as the optimal spawning area. This optimal reproductive habitat is determined through a boundary selection strategy that takes into account not only the inherent constraints of the optimization problem but also the location of the optimal solution in the present iteration. The specific definition of the optimal spawning area is shown in Equation (3):
$$Lb^* = \max\left(X^* \times (1 - R),\, Lb\right), \qquad Ub^* = \min\left(X^* \times (1 + R),\, Ub\right), \qquad R = 1 - t/T_{\max} \tag{3}$$
where $X^*$ represents the position of the individual dung beetle with the current best fitness, $Ub^*$ and $Lb^*$ are the upper and lower bounds of the optimal oviposition area determined by the boundary selection strategy, $R$ controls the dynamic change of the optimal oviposition area as the population iterates, and $Ub$ and $Lb$ stand for the inherent upper and lower limits of the optimization problem. The optimal oviposition area of the dung beetle population therefore changes dynamically with the iteration of the population.
After determining the optimal oviposition area, the dung beetle deposits its eggs on incubation balls within this area, laying only one egg at a time. Since the optimal spawning area changes dynamically with the iteration of the population, the locations of the incubation balls also change continuously. Their positions are defined as follows:
$$B_i(t+1) = X^* + b_1 \times \left(B_i(t) - Lb^*\right) + b_2 \times \left(B_i(t) - Ub^*\right) \tag{4}$$
In Equation (4), $B_i(t)$ is the position of the $i$-th incubation ball, and $b_1$ and $b_2$ are random vectors of dimension $D$, where $D$ is the dimension of the optimization problem.
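A small sketch of the spawning-area and brood-ball updates in Equations (3) and (4) is given below; the uniform draw for b1 and b2 and the final clipping into the spawning area are assumptions made for illustration.

```python
import numpy as np

def spawning_bounds(x_star, lb, ub, t, t_max):
    """Optimal spawning area, Eq. (3), around the current local best x_star."""
    R = 1.0 - t / t_max
    return np.maximum(x_star * (1 - R), lb), np.minimum(x_star * (1 + R), ub)

def update_brood_ball(b_i, x_star, lb_star, ub_star, rng=np.random.default_rng()):
    """Brood-ball position update, Eq. (4)."""
    d = b_i.shape[0]
    b1, b2 = rng.random(d), rng.random(d)           # random vectors (uniform draw assumed)
    new_pos = x_star + b1 * (b_i - lb_star) + b2 * (b_i - ub_star)
    return np.clip(new_pos, lb_star, ub_star)       # keep the ball inside the spawning area (assumption)
```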

2.3. The Small Dung Beetle

After hatching from the brood ball, the small dung beetles will surface from underground and commence foraging once they become adults. Their foraging behavior takes place in an optimal feeding area, which is also defined using a boundary selection strategy. The specific definition is as follows:
$$Lb^b = \max\left(X^b \times (1 - R),\, Lb\right), \qquad Ub^b = \min\left(X^b \times (1 + R),\, Ub\right), \qquad R = 1 - t/T_{\max} \tag{5}$$
$$x_i(t+1) = x_i(t) + C_1 \times \left(x_i(t) - Lb^b\right) + C_2 \times \left(x_i(t) - Ub^b\right) \tag{6}$$
In Equation (5), $X^b$ represents the global best position, and $Ub^b$ and $Lb^b$ are the boundaries of the optimal feeding area determined by the boundary selection strategy. In Equation (6), $C_1$ is a random variable following a normal distribution, $C_2$ is a random vector within $(0, 1)$, and the remaining parameters have the same meanings as in Equation (3).
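Equations (5) and (6) translate directly into the following sketch; the function and parameter names are placeholders.

```python
import numpy as np

def forage_small_beetle(x_i, x_best, lb, ub, t, t_max, rng=np.random.default_rng()):
    """Optimal foraging area (Eq. 5) and small-beetle position update (Eq. 6)."""
    R = 1.0 - t / t_max
    lb_b = np.maximum(x_best * (1 - R), lb)         # X^b is the global best position
    ub_b = np.minimum(x_best * (1 + R), ub)
    c1 = rng.normal()                               # C1: scalar drawn from a normal distribution
    c2 = rng.random(x_i.shape[0])                   # C2: random vector in (0, 1)
    return x_i + c1 * (x_i - lb_b) + c2 * (x_i - ub_b)
```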

2.4. The Thief Dung Beetle

Within a dung beetle community, certain individuals are known as “thief dung beetles” due to their habit of stealing feces balls from their peers. The prime area for stealing aligns with the most efficient foraging zone. The formula to update the position of these thieving beetles can be articulated as follows:
$$x_i(t+1) = X^b + S \times g \times \left(\left|x_i(t) - X^*\right| + \left|x_i(t) - X^b\right|\right) \tag{7}$$
In Equation (7), $g$ is a random vector that conforms to a normal distribution, while $S$ represents a constant value.
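The thief update of Equation (7) can be sketched as follows; the value S = 0.5 is a commonly quoted DBO setting and is an assumption here.

```python
import numpy as np

def update_thief(x_i, x_best_global, x_best_local, S=0.5, rng=np.random.default_rng()):
    """Thief-beetle update, Eq. (7)."""
    g = rng.normal(size=x_i.shape)                  # normally distributed random vector
    return x_best_global + S * g * (np.abs(x_i - x_best_local) + np.abs(x_i - x_best_global))
```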

3. Somersault Foraging and Elite Opposition-Based Learning Dung Beetle Optimization (SFEDBO) Algorithm

3.1. Elite Opposition-Based Learning

The initial population’s quality exerts a substantial influence on the search performance of the population. The DBO algorithm randomly initializes its population, which could potentially result in an unbalanced initial distribution, thereby decreasing the diversity of the population. This can render the algorithm susceptible to converging towards local optima, potentially impacting its convergence rate and overall global search capability. Hence, this paper proposes an Elite Opposition-Based Learning (EOBL) approach as a technique for initializing the population, with the objective of improving the convergence rate and reinforcing the algorithm’s global search proficiency.
Opposition-Based Learning (OBL) [33] was initially presented in 2005, with its core concept revolving around generating opposing solutions based on the current solutions and selecting the superior solution through fitness evaluation as an individual within the population. Elite Opposition-Based Learning (EOBL) [34] represents an enhanced technique introduced to refine the OBL method. Its primary concept involves generating opposing solutions based on elite individuals within the population, evaluating the fitness of both the elite individual and its corresponding opposite solution, and ultimately retaining the superior solution among the two. In this paper, the top 50% of individuals with the highest fitness in the current population are designated as elite individuals. The opposite solutions of each elite individual are calculated using Equation (9), and these opposite solutions of elite individuals are referred to as elite opposite solutions. The specific definitions of elite individuals and elite opposite solutions are given in Equation (8):
$$X_i^e = \left(x_{i,1}^e, x_{i,2}^e, \ldots, x_{i,d}^e\right), \qquad \overline{X_i^e} = \left(\overline{X_{i,1}^e}, \overline{X_{i,2}^e}, \ldots, \overline{X_{i,d}^e}\right), \qquad j = 1, 2, \ldots, d, \quad i = 1, 2, \ldots, n/2 \tag{8}$$
$$\overline{X_{i,j}^e} = \alpha \times \left(lb_j + ub_j\right) - X_{i,j}^e \tag{9}$$
In Equation (8), $X_{i,j}^e$ represents an elite solution, drawn from the top 50% of solutions ranked by fitness; $d$ signifies the dimension of the optimization problem; $n$ denotes the total number of solutions; and $\overline{X_{i,j}^e}$ is the corresponding elite opposition-based solution. In Equation (9), $\alpha$ is a random number within the interval $(0, 1)$, and $lb_j$ and $ub_j$ are the lower and upper bounds of the search space spanned by the elite individuals in dimension $j$.
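The EOBL initialization can be sketched as below. The greedy merge of elites and their opposites and the clipping to the problem bounds follow the description above; the helper name eobl_initialize and the uniform sampling of the initial candidates are assumptions.

```python
import numpy as np

def eobl_initialize(fitness_fn, n, d, lb, ub, rng=np.random.default_rng()):
    """Elite opposition-based population initialization, Eqs. (8) and (9)."""
    pop = lb + rng.random((n, d)) * (ub - lb)            # random initial candidates
    fit = np.apply_along_axis(fitness_fn, 1, pop)
    elite_idx = np.argsort(fit)[: n // 2]                # top 50% by fitness (minimization)
    elites = pop[elite_idx]
    lb_e, ub_e = elites.min(axis=0), elites.max(axis=0)  # bounds spanned by the elites
    alpha = rng.random((len(elites), 1))                 # random number in (0, 1)
    opposites = np.clip(alpha * (lb_e + ub_e) - elites, lb, ub)   # Eq. (9)
    # Keep whichever of elite / opposite solution has the better fitness.
    opp_fit = np.apply_along_axis(fitness_fn, 1, opposites)
    better = opp_fit < fit[elite_idx]
    pop[elite_idx[better]] = opposites[better]
    return pop
```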

3.2. Adaptive Strategy

In the DBO algorithm, the positions of the incubation balls are limited to the optimal oviposition area, while the search area of the ball-rolling dung beetles covers the entire search space. Therefore, the main roles of the incubation balls and the ball-rolling dung beetles in the population are local exploration and global search, respectively. During the initial iterations, the population's capacity for global exploration is paramount for discovering the optimal solution, making it imperative to maximize this global search capability in the early stages. However, as the population iterates, the focus of exploration shifts from global search to local exploration, requiring a stronger local exploration ability in the later stages.
We can increase the proportion of ball-rolling dung beetles in the population and reduce the number of incubation balls in the early iterations and then, as the population iterates, gradually decrease the proportion of ball-rolling dung beetles and increase the number of incubation balls. This approach augments the global search capability in the initial stages and boosts local exploration in the subsequent stages, thereby achieving an effective equilibrium between global search and local exploration.
In this paper, the initial proportion of ball-rolling dung beetles is set to 0.3, and the initial proportion of incubation balls is set to 0.1. We dynamically adjust the number of both as the population iterates, using Equation (10):
$$W_1 = 0.3 - 0.2 \times t/T_{\max}, \qquad W_2 = 0.4 - W_1 \tag{10}$$
where $W_1$ represents the proportion of ball-rolling dung beetles in the population, and $W_2$ denotes the proportion of incubation balls within the population.
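For example, Equation (10) can be turned into the integer role counts used at each iteration as follows (the rounding scheme is an assumption):

```python
def adaptive_counts(pop_size, t, t_max):
    """Eq. (10): number of ball-rolling beetles (W1) and brood balls (W2) at iteration t."""
    w1 = 0.3 - 0.2 * t / t_max          # shrinks from 0.3 towards 0.1
    w2 = 0.4 - w1                       # grows from 0.1 towards 0.3
    return int(round(w1 * pop_size)), int(round(w2 * pop_size))

# With a population of 30, the rollers shrink from 9 to 3 over the run
# while the brood balls grow from 3 to 9.
```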

3.3. Somersault Foraging Strategy

As the population iterates further, the focus of the search gradually shifts from exploring the global solution space to refining local solutions. At this juncture, the iteration of the population is directed by the present optimal position. If the present optimal position constitutes merely a local optimum, it will steer the population towards conducting local exploration at this local optimal location, making it difficult to break free from the constraints of the local optimum. To tackle this problem, this paper presents a somersault foraging strategy to perturb the location of the current optimal individual dung beetle, boosting the algorithm’s ability to transcend local optima.
The MRFO [35] algorithm, which was proposed in 2020, is motivated by the feeding behavior of manta rays. The rolling-over foraging strategy of the MRFO algorithm can be understood as using the current optimal position as a fulcrum, and each hunting attempt will shift the position to a certain opposite location centered on this fulcrum. The definition for the somersault foraging strategy is as follows:
$$x_{MRFO} = x^b + S \times \left(r_1 \times x^b - r_2 \times x^*\right) \tag{11}$$
where $x^b$ represents the global best individual dung beetle, $x^*$ represents the current best individual dung beetle, $S$ is the somersault factor with a value of 2, and $r_1$ and $r_2$ denote random values in the interval $[0, 1]$.
Although the somersault foraging strategy can efficiently augment the population’s capacity to transcend local optima, it may also lead to population positions that are not as good as those before the transformation. Therefore, this paper adopts a greedy approach, retaining only the solutions with better fitness after the position change. The specific definition is as follows:
$$x^b = \begin{cases} x_{MRFO}, & f(x_{MRFO}) < f(x^b) \\ x^b, & f(x_{MRFO}) \ge f(x^b) \end{cases} \tag{12}$$
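Equations (11) and (12) together amount to a perturb-and-keep-if-better step, sketched here (function names are placeholders):

```python
import numpy as np

def somersault_perturb(x_global_best, x_current_best, fitness_fn, S=2.0,
                       rng=np.random.default_rng()):
    """Somersault foraging (Eq. 11) with greedy retention (Eq. 12)."""
    r1, r2 = rng.random(), rng.random()
    candidate = x_global_best + S * (r1 * x_global_best - r2 * x_current_best)
    if fitness_fn(candidate) < fitness_fn(x_global_best):   # keep only an improvement
        return candidate
    return x_global_best
```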

3.4. The Algorithm Steps

The operational flow of the SFEDBO algorithm is illustrated in Figure 1. The iterative process encompasses a series of steps, which are detailed as follows:
A. Set the basic parameters of the SFEDBO algorithm;
B. Create the initial population using the EOBL strategy, as described in Equations (8) and (9);
C. Calculate and update the fitness value of each dung beetle and identify the globally optimal dung beetle based on these fitness values;
D. Update the positions of the ball-rolling dung beetles according to Equation (1); if a dung beetle deviates from its direction, perform the dancing behavior to re-determine the direction according to Equation (2);
E. Update the local optimal individual dung beetle;
F. Determine the optimal egg-laying area according to Equation (3) and update the positions of the hatching balls based on Equation (4);
G. Determine the optimal foraging area according to Equation (5) and update the positions of the small dung beetles in accordance with Equation (6);
H. Update the positions of the dung-stealing beetles in accordance with Equation (7);
I. Perturb the globally optimal individual according to Equation (11) and determine the finally retained optimal individual based on Equation (12);
J. Adjust the ratio of ball-rolling dung beetles to hatching balls according to Equation (10);
K. If the current iteration count $t$ exceeds the maximum iteration count $T_{\max}$, proceed to step L; otherwise return to step C;
L. The SFEDBO algorithm ends, returning the optimal fitness $F_{best}$ and the optimal position $X^b$.
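Pulling the earlier sketches together, the steps above can be outlined as the following heavily simplified loop. The split of the remaining population between small and thief beetles, the boundary handling by clipping, and the omission of the dance behavior are all assumptions; this is not the authors' reference implementation.

```python
import numpy as np

def sfedbo(fitness_fn, d, lb, ub, pop_size=30, t_max=500, rng=np.random.default_rng()):
    """Simplified SFEDBO loop following steps A-L (uses the sketches from Sections 2 and 3)."""
    pop = eobl_initialize(fitness_fn, pop_size, d, lb, ub, rng)          # step B
    fit = np.apply_along_axis(fitness_fn, 1, pop)
    x_best, f_best = pop[np.argmin(fit)].copy(), fit.min()               # step C
    prev_pop = pop.copy()
    n_thief = int(0.3 * pop_size)                                        # fixed share of thieves (assumption)
    for t in range(1, t_max + 1):
        n_roll, n_brood = adaptive_counts(pop_size, t, t_max)            # step J
        x_local = pop[np.argmin(fit)]                                    # best individual of this iteration
        worst = pop[np.argmax(fit)]
        lb_star, ub_star = spawning_bounds(x_local, lb, ub, t, t_max)
        new_pop = pop.copy()
        for i in range(pop_size):                                        # steps D-H
            if i < n_roll:
                new_pop[i] = roll_ball(pop[i], prev_pop[i], worst, rng=rng)
            elif i < n_roll + n_brood:
                new_pop[i] = update_brood_ball(pop[i], x_local, lb_star, ub_star, rng)
            elif i < pop_size - n_thief:
                new_pop[i] = forage_small_beetle(pop[i], x_best, lb, ub, t, t_max, rng)
            else:
                new_pop[i] = update_thief(pop[i], x_best, x_local, rng=rng)
        prev_pop, pop = pop, np.clip(new_pop, lb, ub)
        fit = np.apply_along_axis(fitness_fn, 1, pop)
        if fit.min() < f_best:
            x_best, f_best = pop[np.argmin(fit)].copy(), fit.min()
        x_best = somersault_perturb(x_best, x_local, fitness_fn, rng=rng)  # step I, Eqs. (11)-(12)
        f_best = fitness_fn(x_best)
    return x_best, f_best                                                # step L
```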

4. Experiments and Result Analysis

To validate the capability of the SFEDBO, this section will conduct simulation experiments in three parts: (1) Computing time analysis: The SFEDBO algorithm and the DBO algorithm were used to conduct optimization tests on benchmark test functions, and the computational time of both was recorded. The analysis aimed to determine whether the algorithm with added improvement strategies incurred excessive time overhead. (2) Comparison with classical algorithms: The results of the SFEDBO will be contrasted with other classical algorithms to analyze whether the improved algorithm has superior performance. (3) Comparison with modified algorithms: The results of the SFEDBO will be compared with those of four recently proposed modified algorithms to analyze the superiority of the SFEDBO algorithm’s performance.

4.1. Experiment Preparation

The original DBO algorithm paper utilized 23 benchmark test functions for simulation experiments and compared the DBO algorithm with other classical algorithms. To more vividly demonstrate the efficacy of the improvement strategies of the SFEDBO algorithm, this paper also employs the same 23 benchmark functions for simulation experiments. These 23 benchmark test functions have been extensively utilized in recent years to appraise the capacity of algorithms, and they are illustrated in Table 1.
To guarantee impartiality in the evaluation process, each algorithm was run independently 30 times. The mean (Mean) and standard deviation (Std) were then utilized as evaluation metrics to assess the optimization accuracy and stability of the algorithms. The experimental environment was an AMD Ryzen 5 2500U CPU @ 2.00 GHz, with 8.00 GB of RAM, running Windows 10, and Matlab R2020a.

4.2. Computing Time Analysis

The computing time is a vital criterion for assessing the capacity of algorithms. In this section, both the SFEDBO algorithm and the DBO algorithm are employed to tackle 23 benchmark test functions. The average computational time across these 30 runs is recorded, with units measured in seconds. The execution times of the two algorithms are presented in Table 2, where the first column represents the function number, while the second column displays the average computation time of the DBO algorithm; the third column is the average computational time of the SFEDBO algorithm, the fourth column is the difference in average running time between the SFEDBO algorithm and the DBO algorithm, and the fifth column displays the growth rate of the computation time for the SFEDBO algorithm relative to the DBO algorithm.
By examining the overall growth rate, it can be observed that the overall time cost of the SFEDBO algorithm is higher than that of the DBO algorithm. This is because the improvement strategies require additional computations, and the objective function and boundary-control function must be invoked again after executing these strategies. Therefore, from an overall perspective, the computational efficiency of the SFEDBO algorithm is inferior to that of the DBO algorithm. However, the difference between them is not significant, and sacrificing some computational efficiency to improve the algorithm's accuracy and stability is inevitable.

4.3. Comparison with Classical Intelligent Optimization Algorithms

To thoroughly assess the capability of the SFEDBO algorithm, this paper compares it with several classical optimization algorithms, including DBO [20], GWO [18], PSO [14], WOA [15], SCA [36], and RSO [17]. The parameters of these algorithms are all sourced from references [17,20]. The outcomes are presented in Table 3.
High-dimensional unimodal functions (F1~F7), characterized by having a single global optimal solution, are primarily used to evaluate the local search capabilities of algorithms. On these functions, the DBO algorithm performs better than the other classical algorithms, ranking second on F1~F7. However, the SFEDBO algorithm further improves on the DBO algorithm and significantly outperforms the other algorithms on F1~F7. This fully demonstrates the necessity of the adaptive strategy and the somersault foraging strategy: as the adaptive strategy dynamically shifts the focus of the population towards local exploration during later iterations, the algorithm's local search ability is significantly strengthened, and the incorporation of the somersault foraging strategy further augments the algorithm's local exploration capability. An analysis of the standard deviations of each algorithm on F1~F7 indicates that the SFEDBO algorithm attains the lowest standard deviation across all seven functions, illustrating that the superiority of the SFEDBO algorithm is not attributable to the randomness of the optimization process.
High-dimensional multimodal functions (F8~F13) possess numerous local optima and a single global optimum, posing a challenge to algorithms' global search capability and their ability to escape local optima. The SFEDBO algorithm still holds a significant advantage on F8~F13, ranking first in optimization results on such functions. This is credited to the introduction of the EOBL strategy and the somersault foraging strategy. The EOBL strategy generates a more diverse population, effectively enhancing the global exploration capability of the algorithm. When the population reaches a local optimum, the somersault foraging strategy is applied to the optimal individual, enabling the population to escape the local optimum and greatly improving the algorithm's ability to escape local optima. An analysis of the standard deviations on F8~F13 shows that, except for F8, where the SFEDBO algorithm's standard deviation is inferior to those of GWO and PSO, the SFEDBO algorithm surpasses the other algorithms by more than one order of magnitude on the remaining functions. Moreover, its standard deviation on F8 is still better than that of the DBO algorithm, further proving the stability of the improved algorithm and the effectiveness of the improvement strategies.
Fixed-dimensional multimodal functions (F14~F23) have dimensions between 2 and 6. Due to their low dimensionality and simple structure, the outcomes of the compared algorithms do not exhibit significant disparities. The EOBL strategy and the adaptive strategy are not particularly effective when solving such functions, and only the somersault foraging strategy improves the algorithm's proficiency in evading local optima and boosts optimization accuracy. An analysis of the specific optimization results shows that the SFEDBO algorithm slightly lags behind the GWO algorithm and the PSO algorithm on F15 and F20 but marginally surpasses the other algorithms on the remaining functions. In conclusion, the SFEDBO algorithm achieves significant improvements in solving high-dimensional functions; while it remains superior overall on low-dimensional functions, the advantage is less pronounced than on high-dimensional ones. Overall, the SFEDBO algorithm offers better optimization accuracy and stability than the classical optimization algorithms.

4.4. Comparison with Improved Algorithms

To further validate the capability of the SFEDBO algorithm, this paper selects five improved algorithms for comparison: QOLDBO [26], MSADBO [28], SGMDBO [27], EtHGS [37], and MSWOA [38]. The experimental data for QOLDBO, MSADBO, SGMDBO, EtHGS, and MSWOA are reproduced from references [26], [28], [27], [37], and [38], respectively.
The optimization comparison results of the SFEDBO algorithm and the other improved algorithms are shown in Table 4. From the data in Table 4, it can be observed that the SFEDBO algorithm exhibits overall better performance on the high-dimensional unimodal functions (F1~F7). Except for being slightly weaker than the QOLDBO algorithm on F7 and F8, the SFEDBO algorithm achieves optimal results on the remaining functions, indicating that its local search capability is also highly competitive among the improved algorithms. On the high-dimensional multimodal functions (F8~F13), the SFEDBO algorithm ranks first alongside the QOLDBO algorithm on most functions, and its overall standard deviation is also in a superior position. This fully demonstrates that the SFEDBO algorithm not only handles complex problems effectively but also possesses a strong ability to escape local optima. Lastly, for functions F14~F23, the performance of the SFEDBO algorithm and the QOLDBO algorithm varies, with the SFEDBO algorithm being inferior to QOLDBO on F15 and F20, although the two are of the same order of magnitude. Overall, the SFEDBO algorithm has proven its high capability in solving high-dimensional functions, although there is still potential for enhancement in handling low-dimensional functions and high-dimensional multimodal functions.

5. Application of SFEDBO Algorithm

To assess the practical applicability of the SFEDBO algorithm, this paper employs it to tackle the pressure vessel design problem [30], the tension/compression spring design problem [31], and the three-dimensional unmanned aerial vehicle path-planning problem [32]. For comparative analysis with the SFEDBO algorithm, this paper selects two classic standard optimization algorithms and two improved algorithms, which are the DBO [20], GWO [18], SGMDBO [27], and MSADBO [28].

5.1. The Pressure Vessel Design (PVD) Problem

The PVD problem refers to finding optimal variables of the pressure vessel that satisfy the constraint conditions and utilizing these design variables to calculate the manufacturing cost of the pressure vessel. The specific definition of the PVD problem is as follows:
$$\min f(s) = 0.6224 s_1 s_3 s_4 + 1.7781 s_2 s_3^2 + 3.1661 s_1^2 s_4 + 19.84 s_1^2 s_3$$
$$g_1 = -s_1 + 0.0193 s_3 \le 0, \quad g_2 = -s_2 + 0.00954 s_3 \le 0, \quad g_3 = -\pi s_3^2 s_4 - \frac{4}{3}\pi s_3^3 + 1296000 \le 0, \quad g_4 = s_4 - 240 \le 0, \qquad s_1, s_2 \in [0, 99], \; s_3, s_4 \in [0, 200]$$
In the formula, $s_1$ represents the wall thickness, $s_2$ represents the head thickness, $s_3$ represents the inner radius, $s_4$ represents the vessel length excluding heads, and $g_i\,(i = 1, 2, 3, 4)$ represents the four constraint conditions. $f(s)$ is the fitness function, which indicates the manufacturing cost of the pressure vessel.
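As a sketch of how such a constrained problem is handed to the optimizer, the objective below folds the four constraints into a static penalty; the penalty weight is an assumption, since the paper does not state its constraint-handling scheme.

```python
import numpy as np

def pvd_cost(s, penalty=1e6):
    """Pressure-vessel design cost with a static penalty for violated constraints."""
    s1, s2, s3, s4 = s
    cost = (0.6224 * s1 * s3 * s4 + 1.7781 * s2 * s3**2
            + 3.1661 * s1**2 * s4 + 19.84 * s1**2 * s3)
    g = [
        -s1 + 0.0193 * s3,
        -s2 + 0.00954 * s3,
        -np.pi * s3**2 * s4 - (4.0 / 3.0) * np.pi * s3**3 + 1296000.0,
        s4 - 240.0,
    ]
    return cost + penalty * sum(max(0.0, gi) for gi in g)   # feasible points pay no penalty

# The SFEDBO parameters in Table 5 give a cost of about 5885.33; note that g1-g3 are
# nearly active at this point, so rounded inputs may incur a tiny penalty.
cost = pvd_cost([0.778169, 0.384649, 40.31962, 200.0])
```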
The test results of PVD problems are presented in Table 5. The optimization outcomes obtained through the SFEDBO algorithm surpass those of the other four algorithms. The four parameters calculated by the SFEDBO algorithm result in a lower manufacturing cost compared with the other algorithms.

5.2. The Tension/Compression Spring Design (TCSD) Problem

The primary objective of the TCSD problem is to achieve optimal weight for the spring. The TCSD problem entails identifying the design parameters of the spring that satisfy the constraint conditions and subsequently calculating the weight of the spring based on these parameters. The constraint conditions include minimum deflection, vibration frequency, shear stress, and outer diameter limitations. The specific definition of TCSD is presented as follows:
$$\min f(s) = (s_3 + 2)\, s_2\, s_1^2$$
$$g_1(s) = 1 - \frac{s_2^3 s_3}{71785 s_1^4} \le 0, \qquad g_2(s) = \frac{4 s_2^2 - s_1 s_2}{12566\left(s_2 s_1^3 - s_1^4\right)} + \frac{1}{5108 s_1^2} - 1 \le 0, \qquad g_3(s) = 1 - \frac{140.45 s_1}{s_2^2 s_3} \le 0, \qquad g_4(s) = \frac{s_1 + s_2}{1.5} - 1 \le 0$$
$$s_1 \in [0.05, 2], \qquad s_2 \in [0.25, 1.3], \qquad s_3 \in [2, 15]$$
In the formula, $s_1$ represents the diameter of the spring wire, $s_2$ represents the mean diameter of the spring coils, $s_3$ represents the number of active coils in the spring, and $g_i\,(i = 1, 2, 3, 4)$ represents the constraint conditions.
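A corresponding penalized objective for the spring problem is sketched below under the same assumption about constraint handling.

```python
def tcsd_weight(s, penalty=1e6):
    """Tension/compression spring weight with a static penalty for violated constraints."""
    s1, s2, s3 = s          # wire diameter, mean coil diameter, number of active coils
    weight = (s3 + 2.0) * s2 * s1**2
    g = [
        1.0 - (s2**3 * s3) / (71785.0 * s1**4),
        (4.0 * s2**2 - s1 * s2) / (12566.0 * (s2 * s1**3 - s1**4)) + 1.0 / (5108.0 * s1**2) - 1.0,
        1.0 - 140.45 * s1 / (s2**2 * s3),
        (s1 + s2) / 1.5 - 1.0,
    ]
    return weight + penalty * sum(max(0.0, gi) for gi in g)

# The SFEDBO solution in Table 6, s = (0.051652, 0.355816, 11.3422), weighs about 0.012665.
```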
The results of TCSD problems are presented in Table 6. The SFEDBO algorithm also demonstrates significant advantages over the other algorithms in solving this problem. The three design parameters calculated by the SFEDBO algorithm result in a lower weight compared to the other algorithms.

5.3. Three-Dimensional Unmanned Aerial Vehicle Path Planning

The three-dimensional unmanned aerial vehicle (UAV) path-planning problem refers to planning a safe and collision-free navigation path for the UAV from the starting point to the endpoint with the shortest distance in an environment with complex terrain and obstacles. In recent years, researchers have designed numerous algorithms for the three-dimensional UAV trajectory-planning problem, including swarm-based intelligent optimization algorithms.

5.3.1. Construction of the Objective Function

In the problem of three-dimensional unmanned aerial vehicle (UAV) path planning, it is necessary to comprehensively consider three factors: flight path length, changes in flight altitude, and deflection angle. These three factors have different weights of influence on trajectory planning.
Flight path length is an important indicator for the UAV’s flight trajectory. Specifically, it refers to the distance from the starting point to the endpoint, assuming that the UAV does not consider changes in speed. A shorter distance means lower flight costs and indicates higher quality of the planned path. The specific expression function for flight path length is as follows:
$$f_L = \sum_{i=1}^{n-1} \sqrt{\left(x_{i+1} - x_i\right)^2 + \left(y_{i+1} - y_i\right)^2 + \left(z_{i+1} - z_i\right)^2}$$
where $f_L$ represents the flight path length, $n$ is the total number of trajectory points, and $(x_i, y_i, z_i)$ are the three-dimensional coordinates of the $i$-th trajectory point.
A stable flight height can make the path smoother and reduce the burden on the UAV. To ensure the safety of the UAV during flight, a flight height variation function is set, which represents the standard deviation of height changes during the flight process. The specific definition of the flight height variation function is as follows:
$$f_H = \sqrt{\frac{1}{n-2} \sum_{i=2}^{n-1} \left(z_i - \operatorname{mean}\left(z_2, \ldots, z_{n-1}\right)\right)^2}$$
where $f_H$ represents the flight height variation function, $z_i$ is the height of the $i$-th trajectory point, and $n$ is the number of trajectory points.
The change in the flight angle of the UAV determines its flexibility. During flight, the deflection angle of the UAV should be less than or equal to its preset maximum deflection angle. The expression for the change in deflection angle is as follows:
$$f_S = \sum_{i} \left|\theta_{i+1} - \theta_i\right|, \qquad \theta_i = \arccos\left(\frac{v_1 \cdot v_2}{\left|v_1\right|\left|v_2\right|}\right), \qquad v_1 = Q_i - Q_{i-1}, \; v_2 = Q_{i+1} - Q_i, \qquad 2 \le i \le n-1$$
where $f_S$ is the deflection angle function, $Q_i$ is the position of the $i$-th trajectory point, $v_1$ and $v_2$ are the vectors between adjacent trajectory points, $\theta_i$ is the deflection angle, and $n$ is the number of trajectory points.
The objective function designed by combining these three influencing factors is as follows:
$$f_{\min} = w_L \times f_L + w_H \times f_H + w_S \times f_S$$
where $w_L$ is the weight coefficient for the flight path length, $w_H$ is the weight coefficient for the flight altitude variation, and $w_S$ is the weight coefficient for the deflection angle.
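The combined objective can be evaluated on a list of waypoints as sketched below; the weight values are placeholders, since the paper does not report the values of w_L, w_H, w_S it used, and the handling of the first and last turning angles is an assumption.

```python
import numpy as np

def uav_path_cost(points, w_l=1.0, w_h=1.0, w_s=1.0):
    """Weighted UAV path cost combining length f_L, altitude variation f_H and turning f_S."""
    pts = np.asarray(points, dtype=float)              # shape (n, 3): waypoints (x, y, z)
    segs = np.diff(pts, axis=0)
    f_l = np.linalg.norm(segs, axis=1).sum()           # total flight path length
    f_h = pts[1:-1, 2].std()                           # std of the interior altitudes
    v1, v2 = segs[:-1], segs[1:]                       # consecutive segment vectors
    cosang = np.einsum("ij,ij->i", v1, v2) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    theta = np.arccos(np.clip(cosang, -1.0, 1.0))      # deflection angle at each interior point
    f_s = np.abs(np.diff(theta)).sum()
    return w_l * f_l + w_h * f_h + w_s * f_s
```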

5.3.2. Preparation for Simulation Experiments

The virtual terrain environment for the experiment is a complex mountainous area with dimensions of 100 km by 150 km. The highest and lowest positions differ by less than 2 km. Pink cylinders are added to the virtual environment as impassable areas to increase the difficulty of path planning. The space for path planning is illustrated in Figure 2. In the figure, the blue dot and green triangle represent the starting and ending points of the UAV, with positions (10, 90, 1.1146) and (140, 10, 1.3735), respectively.

5.3.3. Simulation Experiments and Analysis

The simulation experiment data of the five algorithms are shown in Table 7, and the experimental results are shown in Figure 3, Figure 4 and Figure 5. In Table 7, the first column gives the algorithm name, the second column the flight path length, the third column the flight altitude variation, the fourth column the deflection angle variation, and the fifth column the optimal fitness value. The flight path length and flight altitude variation of the path planned by the SFEDBO algorithm are the best among the five algorithms, indicating that the planned path not only has the shortest distance but also maintains a stable flight altitude. However, its flight deflection angle is slightly inferior to that of the DBO algorithm. Nevertheless, considering these three aspects together, the path planned by the SFEDBO algorithm is still the best among the five algorithms.
As evident from Figure 3, the paths planned by the three enhanced algorithms exhibit significant improvement over those of the DBO algorithm and the GWO algorithm. From the top view in Figure 4, the SFEDBO algorithm and the MSADBO algorithm have found a flight route closer to the destination through the gaps between the four impassable areas, while the DBO algorithm and the GWO algorithm have only found a path around the sides. Therefore, it can be concluded that, with the enhancement of the somersault foraging strategy and the adaptive strategy, the SFEDBO algorithm can accurately find an optimal path. Figure 5 displays the convergence curves of the five algorithms. As evident from the figure, the incorporation of the EOBL strategy to enhance the diversity of the initial population results in the fitness of the path planned by the SFEDBO algorithm surpassing the other three algorithms in the early stages of iteration; only the SGMDBO algorithm manages to keep pace with the SFEDBO algorithm during this initial phase. However, upon examining the final iteration curve, it becomes evident that the fitness of the SFEDBO algorithm surpasses that of the SGMDBO algorithm. Conversely, both the GWO and DBO algorithms exhibit not only a sluggish convergence speed during the planning process but also a tendency to converge towards local optima in the later stages of iteration.
In summary, considering indicators such as flight path length, flight altitude variation, and flight deflection angle, the path planned by the SFEDBO is better. Therefore, the SFEDBO algorithm has the ability to handle such problems and has advantages over other algorithms.

6. Conclusions and Prospects

This paper introduces the SFEDBO algorithm, which tackles the constraints and deficiencies inherent in the initial DBO algorithm, including a sluggish speed of convergence, unequal emphasis on exploration and exploitation, and vulnerability to local optima. The main contributions of this paper are threefold: (1) An EOBL strategy is employed to enhance the initialization of the population in the DBO algorithm. This integration effectively enhances the quality of the starting population, notably improving the algorithm’s overall search capability and accelerating convergence speed; (2) an adaptive strategy is proposed for dynamically adjusting the ratio of rolling dung beetles to incubating balls, mitigating the imbalance between exploration and exploitation in the DBO algorithm; (3) this paper pioneers the introduction of the somersault foraging strategy from the MRFO algorithm into the DBO algorithm. By continuously perturbing the optimal dung beetle individuals, the algorithm’s ability to escape local optima is significantly bolstered, greatly enhancing its optimization accuracy.
To verify the effectiveness of the suggested enhancements, this paper conducts optimization experiments on 23 benchmark test functions using the SFEDBO algorithm and compares the results with those of other algorithms. The experimental outcomes demonstrate the high solution accuracy and stability of the proposed algorithm. To assess the practical applicability of the SFEDBO algorithm, this paper applies it to pressure vessel design problems, tension/compression spring design problems, and three-dimensional unmanned aerial vehicle path-planning problems. The experimental results indicate the algorithm's suitability for practical optimization problems and its superiority over other algorithms.
While the SFEDBO algorithm exhibits stronger performance compared to the original algorithm, there is still potential for further enhancement when compared with other recently proposed advanced algorithms. Additionally, the application domains of the improved algorithm require further delineation. Consequently, future research will concentrate on specific areas such as image processing, flow shop scheduling, and wireless sensor network coverage, proposing corresponding improvement methods that harness the unique characteristics of the DBO algorithm.

Author Contributions

Conceptualization, D.Z.; methodology, Z.W.; software, Z.W.; validation, Z.W.; formal analysis, Z.W.; data curation, D.Z.; writing—original draft preparation, Z.W.; writing—review and editing, D.Z.; supervision, F.S.; project administration, D.Z.; funding acquisition, F.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by National Natural Science Foundation of China (52178468, 52268023), and Natural Science Foundation of Guangxi (No. 2023GXNSFAA026418).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wen, G.K.; Dasril, Y.; Othman, A. A class of ABCED conjugate gradient method in solving general global optimization problems. J. Theor. Appl. Inf. Technol. 2018, 96, 7984–7995. [Google Scholar]
  2. Mzili, T.; Mzili, I.; Riffi, M.E.; Dhiman, G. Hybrid genetic and spotted hyena optimizer for flow shop scheduling problem. Algorithms 2023, 16, 265. [Google Scholar] [CrossRef]
  3. Mzili, T.; Mzili, I.; Riffi, M.E.; Pamucar, D.; Simic, V.; Abualigah, L.; Almohsen, B. Hybrid genetic and penguin search optimization algorithm (GA-PSEOA) for efficient flow shop scheduling solutions. Facta Univ. Ser. Mech. Eng. 2024, 22, 077–100. [Google Scholar] [CrossRef]
  4. Farshi, T.R.; Drake, J.H.; Özcan, E. A multimodal particle swarm optimization-based approach for image segmentation. Expert Syst. Appl. 2020, 149, 113233. [Google Scholar] [CrossRef]
  5. Li, H.; He, H.; Wen, Y. Dynamic particle swarm optimization and K-means clustering algorithm for image segmentation. Optik 2015, 126, 4817–4822. [Google Scholar] [CrossRef]
  6. Deng, X.; He, D.; Qu, L. A Multi-strategy Enhanced Arithmetic Optimization Algorithm and Its Application in Path Planning of Mobile Robots. Neural Process. Lett. 2024, 56, 18. [Google Scholar] [CrossRef]
  7. Cui, J.; Wu, L.; Huang, X.; Xu, D.; Liu, C.; Xiao, W. Multi-strategy adaptable ant colony optimization algorithm and its application in robot path planning. Knowl.-Based Syst. 2024, 288, 111459. [Google Scholar] [CrossRef]
  8. Wang, X.; Wei, Y.; Guo, Z.; Wang, J.; Yu, H.; Hu, B. A Sinh–Cosh-Enhanced DBO Algorithm Applied to Global Optimization Problems. Biomimetics 2024, 9, 271. [Google Scholar] [CrossRef]
  9. Sun, W.; Ma, H.; Wang, S. A Novel Fault Diagnosis of GIS Partial Discharge Based on Improved Whale Optimization Algorithm. IEEE Access 2024, 12, 3315–3327. [Google Scholar] [CrossRef]
  10. Alhijawi, B.; Awajan, A. Genetic algorithms: Theory, genetic operators, solutions, and applications. Evol. Intell. 2024, 17, 1245–1256. [Google Scholar] [CrossRef]
  11. Price, K.V.; Storn, R.M.; Lampinen, J.A. The differential evolution algorithm. Differ. Evol. A Pract. Approach Glob. Optim. 2005, 37–134. [Google Scholar]
  12. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  13. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar]
  14. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  15. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  16. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  17. Dhiman, G.; Garg, M.; Nagar, A.; Kumar, V.; Dehghani, M. A novel algorithm for global optimization: Rat swarm optimizer. J. Ambient Intell. Humaniz. Comput. 2021, 12, 8457–8482. [Google Scholar] [CrossRef]
  18. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  19. Wang, J.; Wang, W.-C.; Hu, X.-X.; Qiu, L.; Zang, H.-F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  20. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  21. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  22. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  23. Duan, J.; Gong, Y.; Luo, J.; Zhao, Z. Air-quality prediction based on the ARIMA-CNN-LSTM combination model optimized by dung beetle optimizer. Sci. Rep. 2023, 13, 12127. [Google Scholar] [CrossRef]
  24. Zhang, D.; Wang, Z.; Zhao, Y.; Sun, F. Multi-Strategy Fusion Improved Dung Beetle Optimization Algorithm and Engineering Design Application. IEEE Access 2024, 9, 291. [Google Scholar] [CrossRef]
  25. Sun, W.; Wang, Y.; You, X.; Zhang, D.; Zhang, J.; Zhao, X. Optimization of Variational Mode Decomposition-Convolutional Neural Network-Bidirectional Long Short Term Memory Rolling Bearing Fault Diagnosis Model Based on Improved Dung Beetle Optimizer Algorithm. Lubricants 2024, 12, 239. [Google Scholar] [CrossRef]
  26. Wang, Z.; Huang, L.; Yang, S.; Li, D.; He, D.; Chan, S. A quasi-oppositional learning of updating quantum state and Q-learning based on the dung beetle algorithm for global optimization. Alex. Eng. J. 2023, 81, 469–488. [Google Scholar] [CrossRef]
  27. Qin, X.; Leng, C.; Dong, X. Research on Dung Beetle Optimization Algorithm Based on Hybrid Strategy. J. Jilin Univ. (Inf. Sci. Ed.), 2024; 1–11. [Google Scholar] [CrossRef]
  28. Pan, J.; Li, S.; Zhou, P.; Yang, G.; Lv, D. Dung beetle optimization algorithm guided by improved sine algorithm. Comput. Eng. Appl. 2023, 59, 92–110. [Google Scholar]
  29. Luo, Y.; Dai, W.; Ti, Y.-W. Improved sine algorithm for global optimization. Expert Syst. Appl. 2023, 213, 118831. [Google Scholar] [CrossRef]
  30. Mali, M.A.; Bhosale, M.H.; Bedi, M.D.S.; Modasara, M.A. A review paper on study of pressure vessel, design and analysis. Int. Res. J. Eng. Technol 2017, 4, 1369–1374. [Google Scholar]
  31. Belegundu, A.D.; Arora, J.S. A study of mathematical programming methods for structural optimization. Part I: Theory. Int. J. Numer. Methods Eng. 1985, 21, 1583–1599. [Google Scholar] [CrossRef]
  32. De Filippis, L.; Guglieri, G.; Quagliotti, F. Path planning strategies for UAVS in 3D environments. J. Intell. Robot. Syst. 2012, 65, 247–264. [Google Scholar] [CrossRef]
  33. Mahdavi, S.; Rahnamayan, S.; Deb, K. Opposition based learning: A literature review. Swarm Evol. Comput. 2018, 39, 1–23. [Google Scholar] [CrossRef]
  34. Chen, L.; Ma, L.; Li, L. Enhancing sine cosine algorithm based on social learning and elite opposition-based learning. Computing 2024, 106, 1475–1517. [Google Scholar] [CrossRef]
  35. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  36. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  37. Xu, Y.; Liu, S.; Zhang, W.; Liu, Y. Elite Opposition-Based Learning and t-Distribution Hunger Games Search Algorithm. Comput. Simul. 2023, 40, 425–434. [Google Scholar]
  38. Yang, W.; Xia, K.; Fan, S.; Wang, L.; Li, T.; Zhang, J.; Feng, Y. A multi-strategy whale optimization algorithm and its application. Eng. Appl. Artif. Intell. 2022, 108, 104558. [Google Scholar] [CrossRef]
Figure 1. The flowchart of the SFEDBO algorithm.
Figure 2. Three-dimensional path-planning space.
Figure 3. Three-dimensional path-planning diagram.
Figure 4. Top view of path planning.
Figure 5. Convergence diagram.
Table 1. The 23 benchmark test functions.

Function Number | Optimal Value | Dimension | Search Range
F1 | 0 | 30 | [−100, 100]
F2 | 0 | 30 | [−10, 10]
F3 | 0 | 30 | [−100, 100]
F4 | 0 | 30 | [−100, 100]
F5 | 0 | 30 | [−30, 30]
F6 | 0 | 30 | [−100, 100]
F7 | 0 | 30 | [−1.28, 1.28]
F8 | −12,569.5 | 30 | [−500, 500]
F9 | 0 | 30 | [−5.12, 5.12]
F10 | 0 | 30 | [−32, 32]
F11 | 0 | 30 | [−600, 600]
F12 | 0 | 30 | [−50, 50]
F13 | 0 | 30 | [−50, 50]
F14 | 1 | 2 | [−65, 65]
F15 | 0.0003 | 4 | [−5, 5]
F16 | −1.0316 | 2 | [−5, 5]
F17 | 0.398 | 2 | [−5, 5]
F18 | 3 | 2 | [−2, 2]
F19 | −3.8628 | 3 | [0, 1]
F20 | −3.32 | 6 | [0, 1]
F21 | −10.1532 | 4 | [0, 10]
F22 | −10.4028 | 4 | [0, 10]
F23 | −10.5364 | 4 | [0, 10]
Table 2. The computational time results of the SFEDBO algorithm and the DBO algorithm on the 23 benchmark test functions.

Function Number | DBO (s) | SFEDBO (s) | Difference (s) | Rate of Increase
F1 | 0.16524 | 0.17246 | 0.00721 | 4.37%
F2 | 0.16784 | 0.18427 | 0.01642 | 9.79%
F3 | 0.42334 | 0.43019 | 0.00686 | 1.62%
F4 | 0.16138 | 0.16375 | 0.00237 | 1.47%
F5 | 0.20266 | 0.20652 | 0.00386 | 1.91%
F6 | 0.16431 | 0.16687 | 0.00257 | 1.54%
F7 | 0.26792 | 0.27602 | 0.00811 | 2.94%
F8 | 0.20861 | 0.21059 | 0.00198 | 0.95%
F9 | 0.15905 | 0.17751 | 0.01846 | 11.61%
F10 | 0.15768 | 0.18041 | 0.02273 | 14.41%
F11 | 0.18596 | 0.21256 | 0.02659 | 14.30%
F12 | 0.48543 | 0.48229 | −0.00314 | −0.65%
F13 | 0.49889 | 0.50524 | 0.00634 | 1.27%
F14 | 0.81044 | 0.84861 | 0.03817 | 4.71%
F15 | 0.15712 | 0.15637 | −0.00074 | −0.47%
F16 | 0.13202 | 0.13920 | 0.00718 | 5.44%
F17 | 0.13973 | 0.14397 | 0.00424 | 3.04%
F18 | 0.12605 | 0.14802 | 0.02197 | 17.43%
F19 | 0.14560 | 0.16668 | 0.02108 | 14.48%
F20 | 0.14759 | 0.15653 | 0.00894 | 6.05%
F21 | 0.15494 | 0.17838 | 0.02344 | 15.13%
F22 | 0.18023 | 0.18286 | 0.00263 | 1.46%
F23 | 0.19007 | 0.20345 | 0.01338 | 7.04%
Table 3. Comparison of the SFEDBO algorithm and classical optimization algorithms on the 23 benchmark test functions.

Function | GWO Mean | GWO Std | PSO Mean | PSO Std | WOA Mean | WOA Std | RSO Mean | RSO Std | DBO Mean | DBO Std | SFEDBO Mean | SFEDBO Std
F1 | 2.13 × 10^−27 | 4.07 × 10^−27 | 0.4307 | 0.1979 | 1.32 × 10^−70 | 5.93 × 10^−70 | 4.64 × 10^−247 | 0 | 1.69 × 10^−105 | 9.24 × 10^−105 | 0 | 0
F2 | 9.64 × 10^−17 | 6.98 × 10^−17 | 6.6905 | 3.3474 | 3.66 × 10^−51 | 1.02 × 10^−50 | 8.25 × 10^−140 | 2.20 × 10^−139 | 2.64 × 10^−58 | 1.34 × 10^−57 | 0 | 0
F3 | 3.59 × 10^−5 | 1.33 × 10^−4 | 52.2484 | 22.5621 | 4.07 × 10^4 | 1.27 × 10^4 | 2.17 × 10^−248 | 0 | 3.84 × 10^−59 | 2.10 × 10^−58 | 0 | 0
F4 | 8.09 × 10^−7 | 5.20 × 10^−7 | 4.7429 | 1.8521 | 47.8993 | 27.6412 | 6.74 × 10^−114 | 3.64 × 10^−113 | 9.08 × 10^−49 | 4.97 × 10^−48 | 0 | 0
F5 | 27.0807 | 0.7835 | 198.9406 | 107.2154 | 27.7994 | 0.4997 | 28.7899 | 0.2491 | 25.8120 | 0.2780 | 24.3161 | 0.2397
F6 | 0.8325 | 0.3788 | 0.4710 | 0.2109 | 0.4474 | 0.2515 | 3.3140 | 0.4706 | 0.0013 | 0.0030 | 3.85 × 10^−6 | 9.55 × 10^−6
F7 | 0.0019 | 0.0010 | 0.0638 | 0.0463 | 0.0041 | 0.0045 | 0.0004 | 0.0005 | 0.0015 | 0.0013 | 0.0001 | 1.57 × 10^−5
F8 | −5.83 × 10^3 | 749.8648 | −3.15 × 10^3 | 576.6559 | −9.11 × 10^3 | 1.72 × 10^3 | −5.76 × 10^3 | 1.15 × 10^3 | −8.39 × 10^3 | 1.52 × 10^3 | −10,144.8 | 1142.5
F9 | 4.4139 | 6.7613 | 80.5594 | 21.9738 | 0 | 0 | 0 | 0 | 0.7983 | 4.3723 | 0 | 0
F10 | 9.79 × 10^−14 | 1.90 × 10^−14 | 4.9990 | 1.0900 | 4.20 × 10^−15 | 2.27 × 10^−15 | 1.13 × 10^−15 | 9.01 × 10^−16 | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 0
F11 | 0.0030 | 0.0063 | 0.6190 | 0.1728 | 0.0067 | 0.0261 | 0 | 0 | 0 | 0 | 0 | 0
F12 | 0.0411 | 0.0232 | 3.3873 | 1.5306 | 0.0246 | 0.0314 | 0.3439 | 0.1100 | 3.36 × 10^−4 | 0.0010 | 2.07 × 10^−7 | 3.94 × 10^−7
F13 | 0.6642 | 0.3161 | 11.9734 | 15.7191 | 0.5535 | 0.2876 | 2.8593 | 0.0448 | 0.6239 | 0.5135 | 0.0316 | 0.0512
F14 | 4.2633 | 4.0791 | 1.9224 | 1.3457 | 2.6351 | 2.9744 | 2.9717 | 2.2938 | 1.5551 | 1.8462 | 1.0973 | 0.3995
F15 | 0.00036 | 0.00815 | 0.00045 | 0.00826 | 0.00073 | 0.00036 | 0.0011 | 0.0005 | 0.00079 | 0.00033 | 0.00038 | 0.00026
F16 | −1.0316 | 1.81 × 10^−8 | −1.0316 | 5.53 × 10^−16 | −1.0316 | 1.31 × 10^−9 | −1.0316 | 0.0004 | −1.0316 | 5.68 × 10^−16 | −1.0316 | 6.05 × 10^−16
F17 | 0.398 | 9.28 × 10^−7 | 0.398 | 0 | 0.398 | 7.21 × 10^−6 | 0.4139 | 0.0154 | 0.398 | 0 | 0.398 | 0
F18 | 3 | 4.75 × 10^−5 | 3 | 2.60 × 10^−15 | 3 | 6.84 × 10^−5 | 3 | 0.0002 | 3 | 3.11 × 10^−15 | 3 | 1.90 × 10^−15
F19 | −3.8615 | 0.0023 | −3.8623 | 0.0020 | −3.8584 | 0.0074 | −3.4260 | 0.4211 | −3.8609 | 0.0034 | −3.8625 | 0.0014
F20 | −3.2591 | 0.0724 | −3.1255 | 0.1973 | −3.2062 | 0.1717 | −1.7343 | 0.6192 | −3.2419 | 0.0930 | −3.2559 | 0.0712
F21 | −8.5477 | 2.5261 | −6.3968 | 3.6438 | −8.7779 | 2.2836 | −0.6248 | 0.3264 | −6.9394 | 2.4862 | −10.1530 | 0.0012
F22 | −10.2237 | 0.9700 | −7.3945 | 3.7763 | −7.6696 | 3.0123 | −1.2642 | 0.8198 | −8.1160 | 3.1501 | −10.4027 | 0.0011
F23 | −10.5345 | 0.0010 | −7.3206 | 3.5851 | −6.2995 | 3.2531 | −1.0041 | 0.8088 | −8.8287 | 2.9615 | −10.5364 | 6.90 × 10^−5
Table 4. Comparison of the SFEDBO algorithm and improved algorithms on the 23 benchmark test functions.

Function | QOLDBO Mean | QOLDBO Std | MSADBO Mean | MSADBO Std | SGMDBO Mean | SGMDBO Std | EtHGS Mean | EtHGS Std | MSWOA Mean | MSWOA Std | SFEDBO Mean | SFEDBO Std
F1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
F2 | 0 | 0 | 0 | 0 | 7.80 × 10^−178 | 0 | 0 | 0 | 0 | 0 | 0 | 0
F3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
F4 | 2.53 × 10^−287 | 0 | 0 | 0 | 1.50 × 10^−166 | 0 | 0 | 0 | 0 | 0 | 0 | 0
F5 | 25.9346 | 0.3094 | 25.0772 | 0.1900 | 25.0565 | 0.6561 | 25.0772 | 0.1900 | 0.0122 | 0.0253 | 24.3161 | 0.2397
F6 | 1.33 × 10^−5 | 2.84 × 10^−5 | 1.43 × 10^−5 | 1.71 × 10^−5 | 0.7032 | 0.3508 | 1.43 × 10^−5 | 1.71 × 10^−5 | 3.47 × 10^−5 | 5.57 × 10^−5 | 3.85 × 10^−6 | 9.55 × 10^−6
F7 | 3.24 × 10^−5 | 3.88 × 10^−5 | 8.74 × 10^−5 | 7.77 × 10^−5 | 0.0001 | 0.0001 | 8.74 × 10^−5 | 7.77 × 10^−5 | 9.68 × 10^−5 | 0.0001 | 0.0001 | 1.57 × 10^−5
F8 | −1.21 × 10^4 | 8.76 × 10^−4 | −8000.9 | 1533.4 | −12,100.7 | 472.8 | −8000.9 | 1533.4 | −12,549.587 | 33.9619 | −10,144.8 | 1142.5
F9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
F10 | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 0
F11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
F12 | 1.21 × 10^−8 | 5.93 × 10^−6 | 1.03 × 10^−6 | 1.69 × 10^−6 | 0.0104 | 0.0073 | 1.03 × 10^−6 | 1.69 × 10^−6 | 6.57 × 10^−6 | 1.64 × 10^−5 | 2.07 × 10^−7 | 3.94 × 10^−7
F13 | 2.86 × 10^−5 | 0.1675 | 0.8138 | 0.4649 | 0.3586 | 0.2818 | 0.8138 | 0.4649 | 3.40 × 10^−5 | 8.31 × 10^−5 | 0.0316 | 0.0512
F14 | 1.1032 | 0 | 1.5219 | 1.8470 | 2.0407 | 2.9792 | 1.5219 | 1.8470 | 1.7251 | 1.0682 | 1.0973 | 0.3995
F15 | 0.0003 | 7.14 × 10^−5 | 0.0007 | 0.0004 | 0.0006 | 0.0002 | 0.0007 | 0.0004 | 0.0005 | 0.0003 | 0.00038 | 0.00026
F16 | −1.0316 | 2.08 × 10^−11 | −1.0316 | 2.76 × 10^−11 | −1.0316 | 6.45 × 10^−16 | −1.0316 | 2.76 × 10^−11 | −1.0316 | 6.94 × 10^−11 | −1.0316 | 6.05 × 10^−16
F17 | 0.398 | 0 | 0.398 | 0 | 0.398 | 0 | 0.398 | 0 | 0.3993 | 0.0031 | 0.398 | 0
F18 | 3 | 2.96 × 10^−15 | 3 | 2.59 × 10^−15 | 3 | 2.47 × 10^−15 | 3 | 2.59 × 10^−15 | 7.5394 | 10.3244 | 3 | 1.90 × 10^−15
F19 | −3.8625 | 2.68 × 10^−16 | −3.8622 | 0.0021 | −3.8628 | 2.65 × 10^−15 | −3.8622 | 0.0021 | −3.6761 | 0.2305 | −3.8625 | 0.0014
F20 | −3.28 | 2.25 × 10^−1 | −3.26 | 0.0735 | −3.27 | 0.0599 | −3.26 | 0.0735 | −2.8589 | 0.2485 | −3.26 | 0.0712
F21 | −10.1526 | 2.98 × 10^−3 | −10.0286 | 0.5741 | −10.1396 | 5.75 × 10^−15 | −10.0286 | 0.5741 | −10.0920 | 0.1687 | −10.1530 | 0.0012
F22 | −10.4027 | 5.09 × 10^−3 | −10.2312 | 0.6805 | −10.4029 | 8.08 × 10^−16 | −10.2312 | 0.6805 | −10.3191 | 0.2225 | −10.4027 | 0.0011
F23 | −10.5364 | 3.98 × 10^−3 | −10.5159 | 0.0894 | −10.5364 | 2.14 × 10^−15 | −10.5159 | 0.0894 | −10.4682 | 0.1545 | −10.5364 | 6.90 × 10^−5
Table 5. The results of the pressure vessel design problem.

Algorithm | s1 | s2 | s3 | s4 | Cost | Rank
SFEDBO | 0.778169 | 0.384649 | 40.31962 | 200 | 5885.332773 | 1
DBO | 0.786808 | 0.387446 | 40.36973 | 199.3149 | 5949.580262 | 5
GWO | 0.779234 | 0.390725 | 40.37375 | 199.4158 | 5906.994621 | 4
MSADBO | 0.778976 | 0.385048 | 40.36146 | 199.4184 | 5886.715 | 2
SGMDBO | 0.780421 | 0.385763 | 40.43633 | 198.3816 | 5889.194 | 3
Table 6. The results of the tension/compression spring design problem.

Algorithm | s1 | s2 | s3 | Weight | Rank
SFEDBO | 0.051652 | 0.355816 | 11.3422 | 0.012665 | 1
DBO | 0.05 | 0.317425 | 14.02777 | 0.012719 | 4
GWO | 0.050528 | 0.329171 | 13.11875 | 0.012706 | 3
MSADBO | 0.05193 | 0.362538 | 10.98252 | 0.012693 | 2
SGMDBO | 0.05 | 0.317425 | 14.02777 | 0.012719 | 4
Table 7. The data of the 3D UAV path-planning experiment.

Algorithm | f_L | f_H | f_S | f_min
SFEDBO | 153.4033 | 0.2415 | 0.5676 | 76.8876
DBO | 165.8583 | 0.5403 | 0.3517 | 83.1616
GWO | 165.7541 | 0.4746 | 0.5842 | 83.1363
MSADBO | 155.0389 | 0.2696 | 1.4557 | 77.8915
SGMDBO | 155.9718 | 0.2752 | 0.7826 | 78.2250
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
