Enhancing Swarm Intelligence for Obstacle Avoidance with Multi-Strategy and Improved Dung Beetle Optimization Algorithm in Mobile Robot Navigation

1 School of Mechanical and Electrical Engineering, Xuzhou University of Technology, Xuzhou 221018, China
2 Institute of Bio-Inspired Structure and Surface Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
3 College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(21), 4462; https://doi.org/10.3390/electronics12214462
Submission received: 14 October 2023 / Revised: 27 October 2023 / Accepted: 27 October 2023 / Published: 30 October 2023

Abstract

The Dung Beetle Optimization (DBO) algorithm is a powerful metaheuristic algorithm that is widely used for optimization problems. However, the DBO algorithm has limitations in balancing global exploration and local exploitation capabilities, often leading to getting stuck in local optima. To overcome these limitations and address global optimization problems, this study introduces the Multi-Strategy and Improved DBO (MSIDBO) Algorithm. The MSIDBO algorithm incorporates several advanced computational techniques to enhance its performance. Firstly, it introduces a random reverse learning strategy to improve population diversity and mitigate early convergence or local stagnation issues present in the DBO algorithm. Additionally, a fitness-distance balancing strategy is employed to better manage the trade-off between diversity and convergence within the population. Furthermore, the algorithm utilizes a spiral foraging strategy to enhance precision, promote strong exploratory capabilities, and prevent being trapped in local optima. To further enhance the global search ability and particle utilization of the MSIDBO algorithm, it combines the Optimal Dimension-Wise Gaussian Mutation strategy. By minimizing premature convergence, population diversity is increased, and the convergence of the algorithm is accelerated. This expansion of the search space reduces the likelihood of being trapped in local optima during the evolutionary process. To demonstrate the effectiveness of the MSIDBO algorithm, extensive experiments are conducted using benchmark test functions, comparing its performance against other well-known metaheuristic algorithms. The results highlight the feasibility and superiority of MSIDBO in solving optimization problems. Moreover, the MSIDBO algorithm is applied to path planning simulation experiments to showcase its practical application potential. 
A comparison with the DBO algorithm shows that MSIDBO generates shorter and faster paths, effectively addressing real-world application problems.

1. Introduction

Mobile robots have demonstrated their success in various fields such as industries, military operations, and search and rescue missions, effectively carrying out critical unmanned tasks. In this highly intelligent and modern society, path planning plays a vital role in enabling these mobile robots to navigate and explore complex environments [1,2]. The objective of path planning is to find an optimal or suboptimal path from the initial state to the target state, considering specific performance criteria and the characteristics of the robot and its working environment [3,4].
Traditional path planning approaches have been investigated extensively. For instance, Yao et al. [5] proposed an improved artificial potential field method to address the problem of this technique falling into trap areas and local minima during the planning process. Experimental results demonstrated that the enhanced algorithm effectively overcomes these issues. Kandathil et al. [6] implemented the BUG-1 algorithm across a group of robots to move them from a start location to a target location, significantly reducing the time required for robot movement. Huang et al. [7] introduced a novel approach called the navigation strategy with path priority (NSPP) for multiple robots moving within a large flat space. NSPP utilizes Voronoi diagrams to divide the map based on each robot’s path-priority order and performs path planning for the robots in the space. This method can be applied to any number of robots and exhibits superior performance in terms of average trajectory length. Alshammrei et al. [8] designed and practically implemented an optimal collision-free algorithm based on an improved Dijkstra algorithm. The results indicate that the enhanced Dijkstra algorithm efficiently solves the optimal path planning problem for mobile robots navigating through environments containing obstacles. Ma [9] proposed a bidirectional search Probabilistic Roadmaps global path planning algorithm to address the limitations of low search efficiency and excessive path turning points in Probabilistic Roadmaps. Simulation results demonstrate that the improved algorithm effectively improves convergence speed and path smoothness when applied in actual mobile robot navigation experiments. Kang [10] presented an enhanced rapidly-exploring random tree (RRT) algorithm to address the issues of low planning efficiency and unsmooth paths in complex environments. The simulation results indicate significantly improved planning efficiency and path smoothness for mobile robots. Wang et al. 
[11] proposed a road stiffness planning algorithm called NRRT* based on convolutional neural networks (CNN). This algorithm is capable of quickly obtaining high-quality paths and accelerating convergence speed. On the other hand, Chi et al. [12] proposed a heuristic path planning algorithm based on the Generalized Voronoi Graph (GVD). This algorithm has the ability to automatically recognize environmental features and provide reasonable heuristic paths.
It is evident that although traditional path planning algorithms possess numerous advantages, they often fail to provide globally optimal paths. Additionally, when confronted with complex environments, these methods exhibit high computational costs and low success rates, making it challenging to meet practical application requirements. As a result, researchers have turned their attention towards meta-heuristic path planning methods due to their distributed computing capabilities, independence from prior knowledge, and robustness. In recent years, significant research advancements have been made in the field of group intelligent optimization algorithms for path planning. For example, Pehlivanoglu et al. [13] proposed an initial population enhancement method in genetic algorithms, thereby accelerating convergence speed. Patle et al. [14] conducted research on classical and reactive methods and found that reactive methods are more robust; when used in a hybrid algorithm, reactive methods improve the performance of classical methods. Mohanty Prases et al. [15] proposed a smart adaptive particle swarm optimization algorithm for robot path planning. This algorithm enables robots to reach target points by using the shortest possible paths while safely avoiding obstacles in uncertain environments. Wu et al. [16] introduced an improved ant colony optimization algorithm to enhance convergence speed and global optimization effects. Simulation results demonstrate the effectiveness and efficiency of this algorithm in solving path planning problems for mobile robots within various constraint environments. Ge et al. [17] proposed an improved A* algorithm that considers energy consumption for path planning of spherical robots. The results indicate that compared to traditional path planning algorithms, the proposed method minimizes energy consumption and path length for spherical robots as much as possible.
Guo et al. [19] developed a multi-strategy improved whale optimization algorithm for trajectory planning of upper extremity exoskeleton rehabilitation robots, providing a feasible solution for isokinetic rehabilitation trajectory planning. Gao et al. [20] presented a novel robot path planning algorithm based on a Quantum-inspired Evolutionary Algorithm. Simulation results indicate that this algorithm is suitable for both complex static and dynamic environments and outperforms conventional genetic algorithms in solving robot path planning problems by effectively optimizing the paths. Zhang et al. [18] enhanced the sparrow search algorithm by incorporating fitness-distance balance and the Harris Hawks algorithm to address issues such as premature convergence and declining population diversity. Simulation experiments conducted in unknown environments demonstrate the effectiveness and robustness of the proposed algorithm in solving local path planning problems for mobile robots. Hong Kwon Ryong et al. [21] proposed an algorithm called Rao-combined artificial bee colony for designing minimum paths that reduce radiation exposure to nuclear power plant workers. The experimental results reveal that the proposed method surpasses traditional algorithms in terms of exploration, exploitation, convergence, and robustness. Li et al. [22] enhanced the sand cat swarm optimization algorithm for robot path planning through stochastic variation with elite collaboration. The results demonstrated the effectiveness and engineering practicability of the improved algorithm in addressing the path planning problem for mobile robots. The work proposed by Jiang et al. [23] is significant as it addresses the challenge of navigating complex environments effectively.
By incorporating SLAM, tracking, and detection methods, their framework offers a comprehensive solution for robust and reliable localization in multi-story buildings. Dong et al. [24] introduced an improved grey wolf optimization algorithm that overcomes issues related to premature convergence and falling into local optima. The experimental results indicate that the optimized algorithm outperforms its competitors, and its applicability is verified through a robot global path planning problem, demonstrating its ability to plan shorter and safer paths. Li et al. [25] proposed an improved multi-objective genetic algorithm for solving static global path planning. Simulation experiments conducted in grid environments demonstrate that the algorithm addresses issues present in traditional genetic algorithms, such as slow convergence speed and susceptibility to local optima.
The DBO algorithm is a novel swarm intelligence algorithm inspired by the rolling, dancing, foraging, stealing, and reproduction behaviors of dung beetles. It exhibits strong optimization capabilities and fast convergence. However, it also suffers from limitations such as an imbalance between global exploration and local exploitation, a tendency to get trapped in local optima, and relatively weak global exploration abilities. Many scholars have endeavored to optimize and improve this algorithm. Zhu et al. [26] proposed a dung beetle search algorithm called QHDBO that combines quantum computing and multi-strategy techniques. They utilized a good-point-set strategy to initialize the dung beetle population, ensuring a more uniform distribution and reducing the likelihood of falling into local optima. QHDBO was compared with six other intelligent algorithms using 37 test functions and practical engineering application problems, and the experimental results demonstrated that the optimized dung beetle optimization algorithm significantly improved convergence speed, optimization accuracy, and robustness. In a related study, Zhang et al. [27] introduced an improved dung beetle optimization algorithm (IDBO) to optimize a BP neural network for improving the prediction accuracy of wood mechanical behavior. They employed segmented linear chaos mapping to increase diversity and incorporated an adaptive parameter adjustment strategy to enhance early best-search capabilities and improve search efficiency. Experimental results showed that optimizing the neural network model with IDBO greatly enhanced the prediction accuracy of wood mechanical properties. Building upon the improved sinusoid algorithm, Wang et al. [28] proposed an enhanced DBO algorithm called QQLDBO.
This approach integrated quasi-oppositional learning and Q-learning, expanded the search scope, and improved global exploration capabilities, and a variable-spiral local-neighborhood method was proposed to compensate for the shortcoming of exploiting only the area around the neighborhood optimum. The QQLDBO algorithm was tested using 23 benchmark test functions and compared with other well-known meta-heuristics, demonstrating its superior performance. Furthermore, the practical potential of the QQLDBO algorithm was validated by successfully applying it to engineering design problems. Shen et al. [29] presented a multi-strategy enhanced dung beetle optimizer (MDBO) for UAV 3D path planning. They utilized the Beta distribution to dynamically generate reflection solutions, allowing exploration of a broader search space and enabling particles to escape local optima. Additionally, the Levy distribution was introduced to handle out-of-bound particles, and two different crossover operators were utilized to improve the thief beetle update stage. This strategy accelerated convergence, balanced exploration and exploitation capabilities, and enhanced optimization accuracy and stability. MDBO's effectiveness was confirmed through comparisons with other algorithms on benchmark functions. Moreover, the performance of MDBO in UAV 3D path planning was verified, successfully finding feasible paths even in challenging scenarios. The proposed MDBO algorithm demonstrated its capability to find safe and optimal paths in most cases, although it has limitations in avoiding local obstacles and may require multiple UAVs for complex situations.
Through previous research, researchers and practitioners can gain insight into the origins and development of the original algorithm, broaden their optimization approaches, integrate valuable strategies into their own work to enhance existing technologies, and benefit from these discoveries. Building upon prior research, the MSIDBO algorithm addresses the issues of slow convergence speed, susceptibility to local optima, poor accuracy, and limited stability by incorporating random reverse learning, fitness-distance balance, spiral foraging, and optimal dimension-wise Gaussian mutation strategies. These improvements result in enhanced convergence speed and accuracy. The random reverse learning strategy enhances population diversity and alleviates the inherent challenges of premature convergence and local stagnation faced by the DBO algorithm. The fitness-distance balance strategy effectively balances diversity and convergence within the population, mitigating the imbalance between global exploration and local exploitation that otherwise leads to inferior global search capability and vulnerability to local optima. Furthermore, the integration of the spiral foraging strategy improves search accuracy, demonstrating superior exploration capabilities and avoiding local optima. By combining the MSIDBO algorithm with the optimal dimension-wise Gaussian mutation strategy, premature convergence is reduced, global search capabilities are strengthened, particle utilization is improved, population diversity is increased, and convergence is accelerated. This enables the algorithm to escape more readily from local optima during the evolutionary process. It is evident that the proposed MSIDBO algorithm addresses the limitations and deficiencies of the original DBO algorithm, and researchers and practitioners can incorporate these strategies into their own work or leverage specific aspects of the MSIDBO algorithm to enhance existing techniques.
It is worth noting that the aforementioned improvements are conducive to further advancements in the field and have the potential to significantly impact and improve existing algorithms. Additionally, 19 benchmark test functions are selected for simulation experiments, comparing the improved MSIDBO algorithm with the original DBO algorithm. The experimental results demonstrate that the improved algorithm not only generates relatively optimal paths but also significantly improves the speed of path planning. Moreover, as the size of the map increases, the effectiveness becomes more pronounced. The application of the improved algorithm in path planning experiments further confirms that MSIDBO can meet practical requirements, generating shorter and smoother paths with faster convergence. It exhibits high adaptability in complex environments and avoids various shaped obstacles, showcasing superior performance. By effectively utilizing the advantages of both global and local planning methods, it successfully resolves the problem of local obstacle avoidance when a global route has already been planned, thereby improving obstacle avoidance performance and ensuring globally optimal planned paths.

2. Methods

2.1. Dung Beetle Optimization (DBO) Algorithm

The DBO algorithm is a novel swarm intelligence optimization algorithm inspired by the social behavior of dung beetle populations. These beetles compress dung into balls before rolling them to a safe location. They can roll dung balls significantly larger than themselves and utilize celestial cues to roll them in a straight line when a light source is available. However, in the absence of a light source, their paths become curved and susceptible to natural disturbances [30]. The survival of dung beetles is intricately linked to acquiring dung balls, where some are used for reproduction and nurturing offspring while the rest serve as food.
Motivated by this behavior, the DBO algorithm simulates five key behaviors exhibited by dung beetles: ball rolling, dancing, foraging, stealing, and reproduction. The dung beetle population is divided into four subgroups: rollers, reproducers, minors, and stealers, with different search strategies employed for each subgroup [31].

2.1.1. Roller Beetle

During the rolling process, dung beetles must navigate using celestial cues, particularly the sun, the moon, and polarized light, to maintain the straight-line rolling trajectory of the dung ball. Figure 1 shows the dung beetle trajectory model: dung beetles utilize the sun for navigation, with the arrow indicating the rolling direction [32].
Assuming that the intensity of the light source also influences the rolling path of dung beetles, the position of the roller beetle is updated and can be represented as follows [33]:
Xi(t + 1) = Xi(t) + α × k × Xi(t − 1) + b × Δx
Δx = |Xi(t) − Xw|
where t represents the current iteration count; Xi(t) represents the position information of the i-th dung beetle in the t-th iteration; k ∈ (0, 0.2] denotes a constant value indicating the deviation coefficient; b is a constant value belonging to the interval (0, 1); α is a natural coefficient assigned as either 1 or −1; Xw represents the global worst position; and Δx simulates the variation of light intensity. The parameter α simulates natural factors (such as wind and uneven terrain) that can cause the dung beetle to deviate from its original direction. Specifically, α = 1 signifies no deviation, while α = −1 indicates deviation from the original direction. A larger Δx implies a weaker light source, which brings two benefits: (1) thoroughly exploring the entire problem space during the optimization process; and (2) enhancing search capabilities and reducing the likelihood of getting trapped in local optima.
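The rolling update above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name roll_ball and the parameter values k = 0.1 and b = 0.3 are assumptions chosen within the ranges stated in the text.

```python
import numpy as np

def roll_ball(X, X_prev, X_worst, k=0.1, b=0.3, rng=None):
    """Roller-beetle update: Xi(t+1) = Xi(t) + alpha*k*Xi(t-1) + b*dx.

    X and X_prev are (n, d) arrays of current and previous positions;
    X_worst is the global worst position. k in (0, 0.2] and b in (0, 1)
    are illustrative choices here.
    """
    rng = np.random.default_rng() if rng is None else rng
    # alpha = 1: no deviation; alpha = -1: deviation caused by natural factors
    alpha = rng.choice([-1.0, 1.0], size=(X.shape[0], 1))
    delta_x = np.abs(X - X_worst)  # simulates the variation of light intensity
    return X + alpha * k * X_prev + b * delta_x
```

Because Δx grows with the distance to the worst position, beetles far from Xw take larger steps, which matches the "weaker light source, wider exploration" reading in the text.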

2.1.2. Dancing Behavior

Various natural factors, such as wind and uneven terrain, can have a significant impact on the trajectory of dung beetles. In such situations, dung beetles typically climb to the top of the dung ball and engage in a dancing behavior, which involves a series of rotations and pauses. Through this dancing behavior, they determine their movement direction by changing their orientation, thus obtaining a new path. To mimic this dancing behavior, a tangent function is employed to obtain the new rolling direction. Figure 2 shows the tangent function model and the dancing model of dung beetles. It is important to note that only the values of the tangent function defined on the interval [0, π] need to be considered, as shown in the diagram [34].
Once the correct direction is successfully determined, the roller beetle should continue rolling the ball forward. At this point, the position update of the roller beetle is as follows:
Xi(t + 1) = Xi(t) + tan(β) × |Xi(t) − Xi(t − 1)|
where the deviation angle β ∈ [0, π]. In the equation, t represents the current iteration count; Xi(t) denotes the position information of the i-th roller beetle in the t-th iteration; |Xi(t) − Xi(t − 1)| is the absolute difference between the position of the i-th beetle in the t-th iteration and its position in the previous (t − 1)th iteration. It can be observed that the position update of the roller beetle is closely related to both its current and historical position information. It is important to note that if the deviation angle equals 0, π/2, or π, the position of the beetle will not be updated.
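A hedged sketch of the dancing update follows; the function name dance_update and the per-beetle uniform draw of β are illustrative assumptions, while the rule of leaving positions unchanged at the degenerate angles comes from the text.

```python
import numpy as np

def dance_update(X, X_prev, rng=None):
    """Dancing update: Xi(t+1) = Xi(t) + tan(beta) * |Xi(t) - Xi(t-1)|.

    A deviation angle beta in [0, pi] is drawn per beetle; at the
    degenerate angles 0, pi/2 and pi the position is left unchanged,
    as the text requires.
    """
    rng = np.random.default_rng() if rng is None else rng
    beta = rng.uniform(0.0, np.pi, size=(X.shape[0], 1))
    degenerate = (np.isclose(beta, 0.0) | np.isclose(beta, np.pi / 2)
                  | np.isclose(beta, np.pi))
    step = np.tan(beta) * np.abs(X - X_prev)
    return np.where(degenerate, X, X + step)
```

Note the coupling to history: a beetle that did not move in the previous iteration (X == X_prev) produces a zero step, so dancing only redirects beetles that are actually in motion.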

2.1.3. Dung Beetle Reproduction

To provide a safe environment for their offspring, selecting the right oviposition site is crucial for dung beetles. Inspired by the aforementioned discussion, a boundary selection strategy is proposed to simulate the region where female dung beetles lay their eggs [35]. It is defined as follows:
Lh* = max(X* × (1 − R), Lh)
Uh* = min(X* × (1 + R), Uh)
where X* represents the current local best position; Lh* and Uh* denote the lower and upper bounds of the oviposition area, respectively; R = 1 − t/Tmax, where Tmax represents the maximum iteration count; and Lh and Uh are the lower and upper bounds of the search space, respectively.
As shown in Figure 3, it is worth noting that each dung ball contains an egg of a dung beetle. Additionally, the red dots represent the boundaries’ upper and lower limits. In the DBO algorithm, it is assumed that each female dung beetle lays only one egg in each iteration. Due to the dynamic changes in the boundary range during the iteration, this helps prevent the algorithm from getting trapped in local optima, primarily determined by the inertia weight value R. Thus, the position of the egg balls is also dynamic during the iteration process and is defined by the following equation [36]:
Yi(t + 1) = X* + b1 × (Yi(t) − Lh*) + b2 × (Yi(t) − Uh*)
where Yi(t) represents the position information of the i-th egg ball in the t-th iteration; X* denotes the current local best position; Lh* and Uh* represent the lower and upper bounds of the oviposition area, respectively; (Yi(t) − Lh*) and (Yi(t) − Uh*) denote the differences between the current position of the egg ball and the lower and upper limits of the oviposition area; b1 is a random number following a normal distribution; b2 is a random vector within the range of (0, 1); and D represents the dimension of the optimization problem.
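The boundary selection and egg-ball update can be combined into one sketch. This is an illustration under the definitions above; the name spawn_eggs and the final clipping step (keeping eggs inside the oviposition area) are assumptions made here for concreteness.

```python
import numpy as np

def spawn_eggs(Y, X_best, t, T_max, L, U, rng=None):
    """Oviposition-area selection and egg-ball update (a sketch).

    R = 1 - t/T_max shrinks the area around the current local best
    X_best as iterations proceed, which helps avoid local optima early
    on while tightening exploitation later.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = Y.shape
    R = 1.0 - t / T_max
    Lh_star = np.maximum(X_best * (1.0 - R), L)   # lower bound of the area
    Uh_star = np.minimum(X_best * (1.0 + R), U)   # upper bound of the area
    b1 = rng.standard_normal((n, d))              # normally distributed
    b2 = rng.random((n, d))                       # uniform in (0, 1)
    Y_new = X_best + b1 * (Y - Lh_star) + b2 * (Y - Uh_star)
    return np.clip(Y_new, Lh_star, Uh_star)       # eggs stay inside the area
```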

2.1.4. Minor Dung Beetle

Some adult dung beetles burrow into the ground in search of food, and these types of beetles are referred to as minor dung beetles. To simulate the foraging process of minor dung beetles, it is necessary to determine the optimal foraging area [37]. The simulation equation for this area is defined as follows:
Lm = max(Xh × (1 − R), Lh)
Um = min(Xh × (1 + R), Uh)
where Xh represents the global best position; Lm and Um are the lower and upper bounds of the optimal foraging area, respectively; Lh and Uh are the lower and upper bounds of the search space. Therefore, the position update for the minor dung beetles is as follows [38]:
Xi(t + 1) = Xi(t) + C1 × (Xi(t) − Lm) + C2 × (Xi(t) − Um)
where Xi(t) represents the position of the i-th dung beetle in the t-th iteration; (Xi(t) − Lm) and (Xi(t) − Um) denote the differences between the current dung beetle position and the lower and upper bounds of the optimal foraging area, respectively; C1 is a randomly generated number following a normal distribution; and C2 is a random vector within the range of (0, 1).
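The minor-beetle foraging step mirrors the oviposition logic but is centred on the global best. A minimal sketch, with forage and its argument names chosen here for illustration:

```python
import numpy as np

def forage(X, X_gbest, t, T_max, L, U, rng=None):
    """Minor-beetle foraging update (a sketch of Equations (7)-(9)).

    The optimal foraging area is centred on the global best X_gbest and
    shrinks with R = 1 - t/T_max.
    """
    rng = np.random.default_rng() if rng is None else rng
    R = 1.0 - t / T_max
    Lm = np.maximum(X_gbest * (1.0 - R), L)   # lower bound of foraging area
    Um = np.minimum(X_gbest * (1.0 + R), U)   # upper bound of foraging area
    C1 = rng.standard_normal(X.shape)         # normally distributed
    C2 = rng.random(X.shape)                  # uniform in (0, 1)
    return X + C1 * (X - Lm) + C2 * (X - Um)
```

At the final iteration R = 0, so the foraging area collapses onto the global best and a beetle already sitting there no longer moves.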

2.1.5. Thieving Dung Beetle

Some dung beetles, referred to as thieves, steal dung balls from other beetles, and the position information of a thief is updated during the iteration process as follows. Based on Equation (9), it can be observed that Xh represents the optimal food source. Assuming that the area surrounding Xh represents the prime location for competing for food, the position update for the thief is given by [39]:
Xi(t + 1) = Xh + W × g × (|Xi(t) − X*| + |Xi(t) − Xh|)
where Xi(t) represents the position information of the i-th thief in the t-th iteration, Xh represents the global best position, X* represents the current local best position, |Xi(t) − X*| and |Xi(t) − Xh| represent the absolute differences between the current position and the local best and global best positions, respectively. g is a randomly generated vector of size 1 × D following a normal distribution, and W is a constant.
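As a hedged sketch of the thief update, the following helper draws the Gaussian vector g per thief; the constant W = 0.5 and the function name steal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def steal(X, X_gbest, X_lbest, W=0.5, rng=None):
    """Thief update: Xi(t+1) = Xh + W * g * (|Xi - X*| + |Xi - Xh|).

    X_gbest plays the role of Xh (global best) and X_lbest the role of
    X* (current local best).
    """
    rng = np.random.default_rng() if rng is None else rng
    g = rng.standard_normal(X.shape)  # Gaussian vector, one row per thief
    return X_gbest + W * g * (np.abs(X - X_lbest) + np.abs(X - X_gbest))
```

A thief located exactly at both bests has zero bracketed distance and simply stays at Xh, which is the behaviour that later motivates the FDB modification in Section 2.2.2.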
Although the DBO algorithm possesses strong optimization capabilities and fast convergence speed, it suffers from the imbalance between global exploration and local exploitation abilities, making it prone to getting stuck in local optima and exhibiting weak global exploration capabilities. To enhance the search performance of DBO, this study proposes four strategies to strengthen DBO.

2.2. Multi-Strategy and Improved Dung Beetle Optimization (MSIDBO) Algorithm

2.2.1. Random Backward Learning Strategy

In swarm intelligence algorithms, the choice of initialization strategy determines the distribution of the initial population in the solution space. In the standard dung beetle algorithm, the population is initialized using a random generation method. Although this approach is simple and easy to implement, randomly generated solutions have certain limitations, as they may fail to cover some potentially good points, thereby slowing down the algorithm’s performance.
To address this issue, this paper adopts a strategy called random reversal learning for population initialization. The random reversal learning strategy [40] generates the reverse position for each individual and evaluates both the original and reverse individuals. The superior individuals are retained for the next generation, enhancing population diversity and avoiding getting trapped in local optima. The specific formula is as follows:
Xi^o = Lh + Uh − Xi^t
where Xi^t represents the i-th individual in the population, Xi^o is the corresponding reverse solution to Xi^t, and Lh and Uh are the lower and upper limits of individuals in the search space.
To further enhance population diversity and overcome the limitation that the reverse solutions generated by the basic reversal learning strategy may not be better than the current solutions, this paper proposes an improved random reversal learning strategy. By using random reversal learning to generate a reversed population, the search space of the population is expanded, providing more opportunities to discover potential optimal individuals and avoiding the problem of easily getting trapped in local optima during the algorithm’s evolution. The random reversal learning strategy enriches population diversity and effectively improves the exploration capability of the DBO algorithm, which is weaker than its exploitation capability and prone to premature convergence or local stagnation [41]. The specific formula is as follows:
Xi^ro = Lb + Ub − rand × Xi
where Xi^ro represents the reverse individual generated by the random reversal learning strategy, Lb and Ub represent the lower and upper limits of the individual's position, respectively, and rand is a random number between 0 and 1.
For each individual in the DBO algorithm, the random reversal learning strategy is first applied. This involves simultaneously searching the current solution and its dynamically reversed solution. The better solution is then selected as the initial solution, followed by using other methods for position updates. It is important to note that, to reduce computational cost, each individual is evaluated once per iteration [42]. If rand < t/Tmax (where t represents the iteration count and Tmax represents the maximum iteration count), the random reversal learning strategy is executed. Otherwise, the subsequent position update method is directly executed to guide the evolution towards the optimal individual's position, enhancing the algorithm's exploration capability, breaking free from the local optima dilemma, and improving convergence speed.
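The greedy keep-the-fitter step of random reversal learning can be sketched as follows. This is a minimal illustration under the formulas above; the helper name random_reverse_init and the row-wise fitness interface are assumptions, and minimisation is assumed.

```python
import numpy as np

def random_reverse_init(X, L, U, fitness, rng=None):
    """Random reversal learning: keep the fitter of each individual and
    its randomly scaled reverse solution Lb + Ub - rand * Xi.

    `fitness` maps an (n, d) population array to n objective values.
    """
    rng = np.random.default_rng() if rng is None else rng
    X_rev = L + U - rng.random(X.shape) * X     # random reverse solutions
    keep_orig = fitness(X) <= fitness(X_rev)    # greedy pairwise selection
    return np.where(keep_orig[:, None], X, X_rev)
```

Because each row of the result is the better of the pair, the returned population is never worse than the original one under the given fitness function.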

2.2.2. Fitness-Distance Balancing (FDB) Strategy

To address the trade-off between diversity and convergence in the population of the DBO algorithm, this paper proposes a fitness-distance balance (FDB) strategy [43]. The FDB strategy is based on the work by Kahraman et al., which relies on both fitness and distance for selection. The objective of this method is to identify one or more candidate solutions that contribute the most to the algorithm's search process. The key difference of FDB from other selection methods is that selection is performed based on the candidates' scores [44], not just their fitness values. The score calculation takes into account both the candidates' fitness function values and their distances to the optimal solution. This ensures that the candidate solution with the highest score is selected to effectively guide the population search. In the stealing process of the DBO algorithm, the thief's position is influenced by the global best solution, which enables the thief to produce optimal solutions. However, once the optimal individual gets trapped in local optima, the efficiency of the algorithm's solution process decreases significantly. The FDB algorithm possesses the capability of sensitive searching [45], provides effective diversity, and establishes a strong balance between exploitation and exploration. It can be applied to constrained and unconstrained problems, to unimodal, multimodal, hybrid, and composite problem types, and to various dimensions. Therefore, this paper incorporates the FDB strategy into the stealing position update equation, where the FDB selection method alleviates the local optima problem in the dung beetle algorithm. The specific formula is as follows:
Xi(t + 1) = XFDB + β × g × (|Xi(t) − X*| + |Xi(t) − XFDB|)
where Xi(t + 1) represents the position information of the i-th thief in the (t + 1)-th iteration; XFDB is the candidate solution of the i-th thief selected by the FDB strategy in the t-th iteration; |Xi(t) − X*| represents the absolute difference between the position of the i-th dung beetle in the t-th iteration and its current local best position; |Xi(t) − XFDB| represents the absolute difference between the position of the i-th dung beetle in the t-th iteration and its candidate position selected through FDB; β is a weighting coefficient; and g is a Gaussian random vector of size 1 × D.
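The FDB scoring idea can be sketched as below. This is an illustration of the general fitness-distance balance principle, not the authors' exact scoring rule: the equal 0.5/0.5 weighting of normalised fitness and normalised distance is an assumption made here, and minimisation is assumed.

```python
import numpy as np

def fdb_select(X, f):
    """Fitness-distance balance selection (a sketch of the FDB idea).

    Each candidate is scored from its normalised fitness and its
    normalised Euclidean distance to the current best individual;
    the candidate with the highest combined score is returned.
    """
    best = X[np.argmin(f)]
    dist = np.linalg.norm(X - best, axis=1)
    norm_f = (f.max() - f) / (f.max() - f.min() + 1e-12)  # lower f -> higher
    norm_d = dist / (dist.max() + 1e-12)
    score = 0.5 * norm_f + 0.5 * norm_d   # balance convergence and diversity
    return X[np.argmax(score)]
```

Note that the best individual itself has zero distance, so it rarely wins the score; FDB deliberately steers the thief towards a fit *and* distant candidate, which is what breaks the local-optimum lock-in described above.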

2.2.3. Spiral Foraging Strategy

Figure 4 shows the search models of the DBO and MSIDBO algorithms. The original DBO algorithm is prone to getting trapped in local optima, resulting in premature convergence. During the optimization process of DBO, the population gradually converges toward the leader. This search tends to densely explore the vicinity of the current optimal position by following a straight line in each iteration, aiming to improve accuracy and exhibiting strong exploitative capability, as shown in Figure 4a. However, during the approach toward the optimal individual, exploration of the nearby search space is lost, reducing population diversity and increasing the likelihood of getting stuck in local optima. If the current optimal position happens to be a local optimum, the algorithm can stagnate. To address this issue, this paper introduces a logarithmic spiral search model [46], as depicted in Figure 4b.
From the figure, it can be observed that in each generation, individuals gradually approach their updated positions in a spiral pattern [47], which increases the exploration of the surrounding space and maintains population diversity. This enhances the algorithm’s exploratory capability [48]. Based on this analysis, the dung beetle foraging update formula is adjusted as follows:
β = e^(b·l) × cos(2πl)
l = 2(1 − t/t_max) − 1
X_i(t + 1) = X* + β × (X_i(t) − L_H) + β × (X_i(t) − U_H)
where b is a constant that determines the shape of the spiral, set to 1; β is the logarithmic (base-e) spiral factor; and l is a parameter that decreases linearly from 1 to −1. X_i(t + 1) represents the position of the i-th dung beetle in the (t + 1)-th iteration, X* represents the local best position, and (X_i(t) − L_H) and (X_i(t) − U_H) represent the differences between the position of the i-th dung beetle in the t-th iteration and the lower and upper limits of the oviposition region, respectively.
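A minimal sketch of one spiral foraging step, following the three equations above. The reconstruction of β as e^(b·l)·cos(2πl) and the variable names (`lb_h`, `ub_h` for the oviposition bounds) are assumptions for illustration:

```python
import numpy as np

def spiral_forage(x_i, x_best, lb_h, ub_h, t, t_max, b=1.0):
    """One spiral foraging update for the i-th dung beetle.

    x_i:        current position of the i-th dung beetle
    x_best:     local best position X*
    lb_h, ub_h: lower/upper limits of the oviposition region
    t, t_max:   current and maximum iteration counts
    """
    l = 2.0 * (1.0 - t / t_max) - 1.0               # decreases linearly 1 -> -1
    beta = np.exp(b * l) * np.cos(2.0 * np.pi * l)  # logarithmic spiral factor
    return x_best + beta * (x_i - lb_h) + beta * (x_i - ub_h)

# Usage: early iterations (l near 1) take large spiral steps around X*,
# late iterations (l near -1) contract toward the best position.
x_new = spiral_forage(np.array([0.5, -0.5]), np.zeros(2),
                      lb_h=-1.0, ub_h=1.0, t=300, t_max=300)
```

Because l shrinks with the iteration counter, the same formula smoothly shifts the population from wide spiral exploration to tight exploitation around the best position.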

2.2.4. Optimal Dimension-Wise Gaussian Mutation Strategy

To avoid premature convergence, enhance global search capability, and further improve performance on complex optimization problems, this paper adopts the Optimal Dimension-Wise Gaussian Mutation (GM) strategy [49].
Gaussian mutation (GM) is an optimization strategy that perturbs the original position vector with random numbers drawn from a normal distribution to generate new positions. It initializes particles adjacent to the target particle with a certain probability; because most mutated points fall near the original position, this effectively performs a local neighborhood search. This type of mutation not only improves the optimization accuracy of the algorithm but also helps it escape from local optima. Additionally, Gaussian mutation randomly re-initializes particles that exceed the search range, and a few mutated points fall far from the current position. This increases particle utilization and population diversity, facilitating better exploration of promising regions and improving search speed, thereby accelerating the convergence of the optimization algorithm [50]. The Gaussian probability density function is defined as follows:
f(x) = (1 / (√(2π)·σ)) · exp( −(x − μ)² / (2σ²) )
where μ represents the mean or expected value of the distribution, and σ represents the standard deviation. Specifically, in the standard Gaussian probability density function, μ and σ are set to 0 and 1, respectively [51].
To enhance the algorithm’s ability to escape from local optima, at the end of each iteration the best individual obtained is subjected to the Optimal Dimension-Wise Gaussian Mutation strategy, and a greedy strategy preserves the better individual. In this phase, a Gaussian (or Cauchy) mutation is applied to each dimension of the optimal individual individually. The specific formula is as follows:
X_newb(j) = X_b(j) + ( randn × (X_rand1(j) − X_b(j)) + randn × (X_rand2(j) − X_b(j)) ) / 2
X_b(j) = X_newb(j), if f(X_newb) < f(X_b); X_b(j), if f(X_newb) ≥ f(X_b)
where X_b(j) represents the j-th dimension of the current optimal individual, X_newb(j) represents the j-th dimension of the mutated candidate generated in each iteration, and X_rand1 and X_rand2 are individuals randomly selected from the dung beetle population. By combining the current optimal individual (X_b) with two random individuals, a new individual (X_newb) is formed and compared with the original under the greedy rule, overcoming the DBO algorithm’s susceptibility to local optima and low accuracy.
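The per-dimension mutation and greedy retention above can be sketched as follows. The sphere objective and all identifier names are illustrative, not from the paper's implementation:

```python
import numpy as np

def dimwise_gaussian_mutation(x_best, population, f, rng=np.random.default_rng(0)):
    """Mutate each dimension of the current best individual using Gaussian
    perturbations built from two random individuals; keep improvements greedily.
    """
    n, d = population.shape
    x_b = x_best.copy()
    for j in range(d):
        r1, r2 = rng.choice(n, size=2, replace=False)
        x_new = x_b.copy()
        # Perturb only dimension j, as in the update formula above.
        x_new[j] = x_b[j] + (rng.standard_normal() * (population[r1, j] - x_b[j])
                             + rng.standard_normal() * (population[r2, j] - x_b[j])) / 2.0
        if f(x_new) < f(x_b):      # greedy selection keeps the better individual
            x_b = x_new
    return x_b

# Usage on a toy sphere objective.
sphere = lambda x: float(np.sum(x ** 2))
pop = np.random.default_rng(1).uniform(-1, 1, size=(10, 5))
best = pop[np.argmin([sphere(p) for p in pop])]
mutated = dimwise_gaussian_mutation(best, pop, sphere)
```

Because the greedy rule only ever accepts strictly better candidates, the mutated best individual can never be worse than the original, which is what makes the strategy safe to apply every iteration.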

2.3. The Proposed Algorithm Flowchart

This study proposes the MSIDBO algorithm to overcome the limitations of the traditional DBO algorithm and effectively solve global optimization problems. The MSIDBO algorithm addresses the imbalance between global exploration and local exploitation capabilities by introducing a random reverse learning strategy, enhancing population diversity, and mitigating early convergence or local stagnation issues. Additionally, a fitness-distance balancing strategy is incorporated to optimize the trade-off between diversity and convergence within the population. To further improve precision and exploratory capability while avoiding local optima, the algorithm utilizes a spiral foraging strategy. Moreover, the MSIDBO algorithm combines the Optimal Dimension-Wise Gaussian Mutation strategy to minimize premature convergence, enhance global search ability, and increase population diversity, thereby accelerating convergence. Figure 5 shows the flow chart of the proposed algorithm. The program flow and steps can be summarized as follows:
Step 1:
Set the percentage of producers (P_percent) in the population.
Step 2:
Calculate the population size of producers (pNum) based on P_percent and the total population size (pop).
Step 3:
Initialize lower bounds (Lb) and upper bounds (Ub) for each dimension.
Step 4:
Initialize the population (x) by randomly selecting grid positions within the bounds.
Step 5:
Evaluate fitness (fit) for each solution in the population.
Step 6:
Set personal best fitness values (pFit) and positions (pX) as the initial fitness values and positions.
Step 7:
Set the current global best fitness value (fMin) and position (bestX) as the minimum fitness value and corresponding position from pFit and pX, respectively.
Step 8:
Start updating the solutions for M iterations: (a) Update a subset of solutions (the ball-rolling dung beetles) based on random values and conditions. (b) Apply bounds to the updated solutions. (c) Evaluate fitness for the updated solutions. (d) Determine the current best fitness value (fMMin) and position (bestXX). (e) Update another subset of solutions using the Spiral Foraging and FDB strategies. (f) Apply bounds to the updated solutions. (g) Evaluate fitness for the updated solutions. (h) Update the individual best and global best fitness values if necessary. (i) Apply random reverse learning and Gaussian mutation to the best solution. (j) Store the best fitness value in the Convergence_curve array.
Step 9:
Termination condition: If the maximum number of iterations (M) is reached, go to step 10. Otherwise, go back to step 8.
Step 10:
Return the global best fitness value (fMin), corresponding position (bestX), and Convergence_curve array.
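Steps 1-10 can be condensed into the following heavily simplified sketch on a toy objective. It is not the paper's implementation: the role split among ball-rolling, breeding, foraging, and stealing beetles and the random reverse learning step are omitted, keeping only the spiral-style update toward the best individual, bound handling, greedy acceptance, and the best-individual Gaussian mutation. Names such as `pop`, `M`, `Lb`, and `Ub` follow the step descriptions above.

```python
import numpy as np

def msidbo_sketch(f, dim=5, pop=30, M=100, Lb=-5.0, Ub=5.0, seed=0):
    """Minimal sketch of the MSIDBO flow (Steps 1-10), heavily simplified."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(Lb, Ub, size=(pop, dim))         # Steps 3-4: init in bounds
    fit = np.array([f(p) for p in x])                # Step 5: evaluate fitness
    bestX, fMin = x[fit.argmin()].copy(), fit.min()  # Step 7: global best
    curve = []
    for t in range(1, M + 1):                        # Step 8: main loop
        l = 2.0 * (1.0 - t / M) - 1.0
        beta = np.exp(l) * np.cos(2.0 * np.pi * l)   # spiral factor (b = 1)
        for i in range(pop):
            cand = bestX + beta * (x[i] - Lb) + beta * (x[i] - Ub)
            cand = np.clip(cand, Lb, Ub)             # Step 8(b): apply bounds
            fc = f(cand)
            if fc < fit[i]:                          # greedy acceptance
                x[i], fit[i] = cand, fc
        # Step 8(i): Gaussian mutation of the best, kept only if better.
        mut = np.clip(bestX + 0.1 * rng.standard_normal(dim), Lb, Ub)
        if f(mut) < fMin:
            bestX, fMin = mut, f(mut)
        if fit.min() < fMin:                         # Step 8(h): update best
            bestX, fMin = x[fit.argmin()].copy(), fit.min()
        curve.append(fMin)                           # Step 8(j): record curve
    return fMin, bestX, curve                        # Step 10: return results

fMin, bestX, curve = msidbo_sketch(lambda v: float(np.sum(v ** 2)))
```

Even in this stripped-down form, the recorded convergence curve is monotonically non-increasing because both acceptance rules are greedy, mirroring the convergence curves discussed in Section 3.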
Figure 5. The proposed algorithm flowchart.

3. Simulation Experiment Analysis

3.1. Experiment Environment Simulation

To ensure the rigor and fairness of the experiments, the simulation experiments were conducted using the following hardware and software configurations:
Hardware Environment:
Processor: AMD Ryzen 5 4600U with Radeon Graphics @ 2.10 GHz
Memory: 16 GB DDR4 RAM
Graphics Card: NVIDIA GeForce
Software Environment:
Simulation: MATLAB R2022a
Operating System: Windows 11 (64-bit)
(Note: The configurations of both hardware and software were used for all simulations in this article).

3.2. Experimental Design

To ensure the fairness of the experiments, the parameters were uniformly set as follows: number of runs = 100, maximum iterations = 300, and search agents = 30. A series of numerical experiments were conducted on MSIDBO to validate its effectiveness, with statistical and convergence analyses performed on 19 test functions. The proposed MSIDBO was compared against six algorithms commonly used in recent years: the Genetic Algorithm (GA), Grey Wolf Optimization (GWO), Moth-Flame Optimization (MFO), Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), and DBO.
GA is a search method, also referred to as an evolutionary algorithm, that draws inspiration from Darwinian evolution; it simulates natural and biological evolution to achieve optimization and is commonly used for function optimization, data mining, production scheduling, and combinatorial optimization, among other applications. GWO is a swarm intelligence optimization algorithm that emulates the cooperative hunting behavior of grey wolves; it offers simplicity, easy implementation, and fast convergence, making it suitable for solving complex optimization problems. MFO is inspired by the movement behavior of moths and incorporates a flame-extinguishing mechanism; it searches for the optimal solution by simulating moth movement and adjusting its parameters as flames are extinguished. PSO is a swarm intelligence algorithm that mimics the social behavior of bird flocks; it optimizes problems by iterating a population of particles, each representing a potential solution, and adjusts particle velocities and positions to find the overall best solution. WOA is a nature-inspired algorithm modeled on the search behavior of whales; it leverages social interactions and search patterns to discover global optima for complex optimization problems and finds application in engineering, data science, operations research, and other fields. DBO is a novel swarm intelligence algorithm inspired by the ball-rolling, dancing, foraging, stealing, and reproduction behaviors of dung beetles; it incorporates diversified position update strategies to effectively solve complex search and optimization problems.
The superiority of MSIDBO was evaluated by comparing the worst value, best value, average value, and standard deviation (STD) across the 19 benchmark functions. These methods were chosen because they have diverse performance characteristics in exploration and exploitation. Table 1 summarizes the parameter settings for each corresponding algorithm. It is important to note that the values for these parameters were set based on recommendations from their respective reference papers.
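The four comparison metrics can be computed straightforwardly from the final fitness values of the independent runs. The values below are illustrative placeholders, not results from this paper:

```python
import numpy as np

# Final fitness values of repeated independent runs (illustrative only).
runs = np.array([1.2e-8, 3.4e-9, 0.0, 7.7e-9, 2.1e-8])

# The four metrics reported in Tables 3-5.
metrics = {
    "worst": runs.max(),
    "best": runs.min(),
    "mean": runs.mean(),
    "std": runs.std(ddof=0),   # population standard deviation
}
```

A small standard deviation alongside a small mean is what distinguishes a stably superior algorithm from one that merely gets lucky on some runs, which is why all four metrics are reported together.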

3.3. Benchmark Test Functions

To evaluate the performance of the MSIDBO algorithm, a set of 19 benchmark test functions (shown in Table 2) was selected for testing. Among these functions, f1 to f7 are high-dimensional unimodal functions with only one global optimum, which can be used to assess the algorithm’s local exploitation capability. Functions f8 to f13 are high-dimensional multimodal functions, while f14 to f19 are fixed-dimension multimodal benchmark functions with multiple local optima; these are suitable for testing the algorithm’s ability to escape local optima.
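Two standard representatives of these classes are shown below for concreteness; the exact f1-f19 set used in the experiments is the one given in Table 2.

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark: a single global optimum of 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal benchmark: many local optima, global optimum 0 at the origin."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

# Both functions reach their theoretical optimum of 0 at the zero vector,
# which is why "mean and standard deviation equal to 0" signals that an
# algorithm reliably finds the exact global optimum.
z = np.zeros(30)
values = (sphere(z), rastrigin(z))
```

The unimodal sphere rewards pure exploitation, while Rastrigin's cosine ripples create a grid of local optima that traps purely exploitative searches, matching the roles described for f1-f7 and f8-f13 above.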

3.4. Comparative Analysis with Other Swarm Intelligence Algorithms

The proposed MSIDBO algorithm was independently run 100 times on 19 benchmark functions, along with other population-based intelligent algorithms such as GA, GWO, MFO, PSO, WOA, and DBO. The objective was to evaluate the performance and stability of the proposed algorithm compared to the others in terms of worst value, best value, average value, and standard deviation for dimensions of 30, 60, and 100. The experimental results for different algorithms and dimensions are presented in Table 3, Table 4 and Table 5. These results provide insights into the superior search performance and stability of the proposed algorithm compared to other population-based intelligent algorithms.
For unimodal functions f1 and f3 (Table 3), the theoretical optimum is found only by the proposed MSIDBO algorithm, with both mean and standard deviation equal to 0; DBO ranks second, with mean values of the smallest magnitude (closest to 0) among the remaining algorithms. In f2 and f4, only the standard deviation of the MSIDBO algorithm is 0. Among f5, f6, and f7, MSIDBO finds all optimal solutions, with DBO performing second best, while GWO, PSO, and MFO exhibit inferior performance. Regarding multimodal functions, in f9 and f10 only the MSIDBO algorithm is capable of finding the theoretical optimum, with both mean and standard deviation equal to 0. In the fixed-dimension functions f14 to f19, MSIDBO achieves the best values, demonstrating a significant and distinctive superiority: its standard deviation is consistently one or more orders of magnitude better than that of the other algorithms.
According to Table 4, for the unimodal functions it can be observed that only the MSIDBO algorithm finds the theoretical optimum, with both mean and standard deviation equal to 0 in f1 and f3. In f2 and f4, only the standard deviation of the MSIDBO algorithm is 0, followed by DBO. MSIDBO outperforms the other algorithms across the various metrics in f5 to f7, particularly in f6, where its mean and standard deviation are several orders of magnitude smaller than those of the other algorithms, which remain on the order of 10⁻² or larger. For the multimodal functions f8 to f13, the differences among the algorithms’ values are not large; however, MSIDBO’s advantage persists, as it consistently achieves the best search results. It is worth noting that in f8 to f13 the values obtained by GA are noticeably larger than those of the other algorithms, indicating weaker algorithmic performance.
In Table 5, for unimodal functions, it can be observed that only the MSIDBO algorithm can find the theoretical optimum, with both mean and standard deviation equal to 0 in f1 and f3. In f2 and f4, only the standard deviation of the MSIDBO algorithm is 0. Furthermore, in f5 to f7, MSIDBO exhibits the smallest standard deviation, indicating greater stability. For fixed-dimensional functions, specifically in f17 to f19, the standard deviation of GWO is significantly larger than that of other algorithms, indicating its poor stability. Among the three dimensions, data for dim = 60 and dim = 100 are similar in nature. In these cases, MSIDBO shows relatively minor fluctuations across different dimensions, demonstrating strong stability. Conversely, the other algorithms exhibit more noticeable fluctuations. GA displays the largest standard deviation and least stability, followed by PSO, GWO, and MFO, all of which experience varying degrees of reduced convergence speed and susceptibility to dimension changes.
Based on the above analysis, it can be observed that MSIDBO exhibits a more significant competitive advantage in solving unimodal, multimodal, and fixed-dimensional functions. To provide a more intuitive comparison of the algorithms’ convergence accuracy and speed, this study plotted the convergence curves of the test functions based on the number of iterations and fitness values (refer to Figure 6, Figure 7 and Figure 8). Figure 9 illustrates the two-dimensional shapes of some test functions used to evaluate the MSIDBO algorithm.
Upon observing Figure 6, Figure 7 and Figure 8, it can be noted that for functions f1, f3, f10, f11, f15, and f16, MSIDBO consistently converges to the global optimal solution with a success rate of 100%. The convergence accuracy of MSIDBO is significantly better than that of the other six algorithms, which fail to find the global optimum. In f1, f2, f3, f4, and f8, MSIDBO distributes the population more evenly across the solution space, increasing the number of individuals near the optimal solution. Consequently, the algorithm finds the optimal solution within a few iterations, resulting in nearly linear convergence curves in the figures. Regarding local exploitation capability, MSIDBO outperforms the selected well-known algorithms on the tested functions. In terms of computational cost, MSIDBO requires significantly fewer iterations to find the optimal solution than the other algorithms: in f1 to f8, f10, and f13, MSIDBO identifies the optimal value within 250–300 iterations, performing thorough exploration. In f9 and f10, the convergence iterations range from 50 to 100, while in f16 fewer than 50 are needed, indicating faster convergence and smoother stability. The average convergence curves of the high-dimensional multimodal functions (f8 to f13) make it evident that MSIDBO achieves higher optimization accuracy and superior exploration capability in finding the optimal solutions of these multimodal functions. For the fixed-dimension functions (f15 to f19), due to the complexity of these problems, the curves of all methods appear similar with minimal differences; however, MSIDBO still performs slightly better, as it converges quickly in the early stages of iteration. The results indicate that, in terms of exploration capability, the MSIDBO algorithm consistently outperforms the other methods in both convergence speed and accuracy, demonstrating superior global search ability and the capability to escape local optima.
From a different perspective, the figures reveal that, compared with the other algorithms, MSIDBO achieves satisfactory convergence speed on functions f1–f13. This can be attributed to the algorithm’s ability to thoroughly explore promising regions in the early iterations. MSIDBO exhibits the fastest optimization process among the 19 benchmark functions, effectively saving optimization time and converging to the global optimal position as quickly as possible in the later stages. As a result, the chance of MSIDBO getting trapped in local optima is significantly lower than that of the other methods, which tend to become trapped in local optima in the later stages and struggle to escape. Overall, the MSIDBO algorithm consistently demonstrates superior convergence speed and accuracy compared with the other methods.

3.5. Effectiveness Analysis of Improvement Strategies

Compared to other well-known algorithms, MSIDBO has shown a certain level of improvement in solution accuracy and more stable average optimization performance. This indicates that the introduced random reversed learning strategy enhances the algorithm’s global exploration capability and generates highly diverse initial dung beetle populations. Additionally, the convergence curve gradually deepens its exploration when the convergence speed slows down, indicating that MSIDBO, with the embedded fitness distance balancing strategy, effectively balances the diversity and convergence within the population, as well as the global exploration and local exploitation abilities. The introduced spiral foraging strategy improves optimization stability, enhances initial population diversity, and strengthens the ability for rapid iterative optimization while avoiding local optima and premature convergence. The incorporation of optimal individual-wise Gaussian mutation strategy significantly enhances the algorithm’s exploitation capability, and its fast convergence curve suggests that the Gaussian mutation strategy effectively avoids getting trapped in local optima and improves the algorithm’s optimization ability.
As illustrated by algorithms such as MFO, PSO, and GA discussed in this article, swarm intelligence algorithms still face several issues, including slow convergence, a tendency to fall into local optima, low solution accuracy, and limited stability. To address these problems, the MSIDBO algorithm is proposed in this study, incorporating a random reverse learning strategy to enhance population diversity. Additionally, the algorithm adopts a fitness-distance balance strategy to effectively address the trade-off between diversity and convergence within the population. Moreover, the integration of a spiral foraging strategy enhances search accuracy and mitigates local optima. By combining the MSIDBO algorithm with the optimal dimension-wise Gaussian mutation strategy, premature convergence is reduced, global search capabilities are strengthened, and convergence speed is accelerated. The research outcomes hold significant importance in the areas of algorithm development and path planning. The proposal and verification of the MSIDBO algorithm contribute substantially to the existing body of knowledge on swarm intelligence-based optimization techniques and their practical applications. Notably, these findings hold great potential in practical applications such as robotics, autonomous navigation, and search and rescue missions, where efficient path planning plays a critical role in ensuring effective mobility in complex environments. The MSIDBO algorithm’s ability to generate globally optimal paths while considering obstacle avoidance can substantially enhance the performance and reliability of such systems. Furthermore, the research outcomes contribute to the academic understanding of algorithm design and optimization strategies: the integration of various strategies within the MSIDBO algorithm, along with the introduction of the novel spiral foraging technique, offers a fresh perspective on path algorithm enhancements.
These findings not only address current challenges with innovative solutions but also represent advancements in the field. Moreover, the research results have the potential to impact and improve existing algorithms. As a response to the limitations and deficiencies discovered in the original DBO algorithm, the MSIDBO provides an improved alternative for path planning. Researchers and practitioners can benefit from these findings by integrating the proposed strategies into their own or leveraging specific aspects of the MSIDBO algorithm to enhance their existing techniques.
In conclusion, regardless of whether the problems are unimodal, multimodal, fixed-dimension, or of different dimensionalities, MSIDBO accurately finds the optimal values. It transitions adaptively between global exploration and local exploitation, demonstrating excellent performance. By comparison, the MSIDBO algorithm not only achieves higher final convergence accuracy than the other algorithms but also converges faster; its convergence speed is significantly advantageous while maintaining high accuracy, enabling rapid search and convergence and leaving room for further improvement.

4. Simulation and Validation of Path Planning Experiments

To further validate the capabilities and effectiveness of the MSIDBO algorithm in path planning, this study adopts grid-based modeling to represent a known environment. MATLAB simulation software is utilized to conduct comparative experiments between the MSIDBO and DBO algorithms, exploring their performance on grid maps of varying sizes (10 × 10, 15 × 15, 20 × 20, 30 × 30, and 40 × 40) with a grid distance of 1 m. Different obstacle rates are employed to construct grids with various complexities. In these grid maps, black cells denote obstacles, while white cells indicate areas where the algorithms can navigate. The robot is represented as a point entity equipped with perception capabilities to detect and comprehend obstacles and target points within the grid map.
The evaluation of the generated paths encompasses assessing their completeness in reaching the designated targets, analyzing path lengths for efficiency evaluation, ensuring collision avoidance with obstacles, and quantifying the number of turns and iterations involved in the paths to evaluate the results. The number of turns reflects the complexity of the path planning process, while the number of iterations signifies the level of iterative optimization achieved by the algorithm.
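Two of these evaluation metrics, path length and number of turns, can be computed directly from the planned waypoints. This is an illustrative sketch assuming consecutive waypoints are distinct grid cells; the function name is hypothetical:

```python
import numpy as np

def path_metrics(path):
    """Euclidean length and turn count of a grid path given as (x, y) waypoints.

    Assumes consecutive waypoints are distinct (no zero-length segments).
    """
    p = np.asarray(path, dtype=float)
    seg = np.diff(p, axis=0)                              # segment vectors
    length = float(np.sum(np.linalg.norm(seg, axis=1)))   # total path length
    # A turn occurs whenever the heading (unit direction) changes
    # between consecutive segments.
    unit = seg / np.linalg.norm(seg, axis=1, keepdims=True)
    turns = sum(1 for a, b in zip(unit[:-1], unit[1:]) if not np.allclose(a, b))
    return length, turns

# Example: bottom-left to top-right with one 90-degree turn.
length, turns = path_metrics([(0, 0), (0, 3), (4, 3)])
```

Comparing headings rather than raw segment vectors means that two collinear segments of different lengths are correctly counted as a straight run, not a turn.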
By independently running the standard MSIDBO and DBO algorithms, both initialized with identical population sizes and a maximum iteration count of 100, we compare their performances. The starting and ending points of the paths are located at the bottom left and top right corners of the grid map, respectively. Each algorithm undergoes 50 independent runs, and the average values are computed as the final outcomes. The path trajectories are shown in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24, while the metric test results are presented in Table 6.
In Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24, it can be observed that, because MSIDBO is an optimization of the base DBO algorithm, there are similarities in the planned paths. In Figure 11b, the path planned by MSIDBO is a straight line. In Figure 13a,b, Figure 14a,b and Figure 18a,b, it can be seen that the paths generated by MSIDBO are simpler, smoother, and more stable than those of DBO. When the convergence speed slows down, MSIDBO delves deeper into exploration by increasing population diversity, enhancing its ability for rapid iterative optimization while avoiding local optima and premature convergence. Therefore, the paths generated by MSIDBO are smoother with fewer turning points, shorter in length, require less search time, effectively avoid obstacles, and achieve ordered visits to all waypoints, finding the optimal solution in fewer iterations. In contrast, the paths generated by DBO have more turning points, are longer, lack smoothness, and exhibit insufficient local exploration capability during later iterations, causing the algorithm to become trapped in local optima.
By observing the convergence curves, it can be seen that under the same maximum iteration count, the convergence curve of the MSIDBO algorithm is smoother and tends to stabilize earlier and faster. Additionally, from the graphs, it is apparent that the DBO path planning contains numerous turning points and transitions, indicating that DBO is prone to getting trapped in local optima and struggles to escape from them. Conversely, the MSIDBO curve in many cases approximates a steep line, with minimal instances of being stuck in local optima. Even if it does get trapped, it quickly escapes and finds the optimal solution. Considering the overall observations in Figure 17, Figure 19, Figure 20, Figure 21 and Figure 22 in complex maps, MSIDBO tends to generate optimal paths near the diagonal of the map. This ensures the generation of relatively optimal paths while significantly improving the speed of path finding. As the map size increases, this effect becomes more pronounced, and MSIDBO performs better on larger-sized maps. On the other hand, the DBO algorithm generates paths on the outer parts of the map where there are fewer obstacles. In such cases, although the number of turns decreases, the path length becomes longer. DBO performs relatively well in small-sized maps with slightly fewer turning points and approaches the optimal solution. However, as the map size increases, the number of turning points increases, leading to multiple instances of getting trapped in local optima and deviating further from the optimal solution.
To compare the two algorithms more clearly and precisely, this study summarizes the path length, number of turns, and iteration count in Table 6 and Figure 25. The table also includes the corresponding reduction rates achieved by the MSIDBO algorithm for each metric, and Figure 25 shows the performance comparison between DBO and MSIDBO. The data in Table 6 show that MSIDBO performs better than DBO in terms of path length, number of turns, and iteration count. It is worth noting that in the maps shown in Figure 17, Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23, the number of turns for MSIDBO is greater than for DBO; however, MSIDBO still outperforms DBO in path length and iteration count, so its overall performance remains superior. Notably, in Figure 10 the number of turns for MSIDBO is reduced by 100%, in Figure 23 the path length decreases by 58.85%, and in Figure 22 the iteration count decreases by 54.84%. As the map size increases, the advantages of MSIDBO become more pronounced, while it still delivers clear gains on smaller maps, indicating broad applicability. These observations highlight that MSIDBO provides significant optimization over DBO in all aspects: it successfully avoids central regions with many obstacles, yielding shorter path lengths, lower variance in path length, stability unaffected by map size, and fewer turning points. The algorithm runs smoothly and is well suited to finding optimal paths in complex obstacle environments.

5. Conclusions

The paper begins by introducing the principles and mathematical model of the DBO algorithm. Addressing the limitations of the original DBO algorithm, such as the imbalance between global exploration and local exploitation that leads to a tendency toward local optima and weaker global exploration, the study proposes improvements, and the revised strategies are elaborated in detail. Subsequently, 19 benchmark test functions are selected for simulation, and a thorough comparison between the improved MSIDBO and DBO algorithms is carried out. The experimental results demonstrate that the improved algorithm not only ensures the generation of relatively optimal paths but also significantly improves pathfinding speed, an improvement that becomes more pronounced with increasing map size. Applying the enhanced algorithm in path planning experiments further confirms that MSIDBO meets practical requirements by generating shorter and smoother paths while exhibiting faster convergence, and it shows adaptability in complex environments, making it highly suitable for practical deployment. Additionally, by effectively utilizing the advantages of both global and local planning methods, the proposed algorithm successfully addresses the problem of local obstacle avoidance when a global route has already been planned. This enhances obstacle avoidance performance while guaranteeing globally optimal planned paths, resulting in significant optimization effects. Therefore, in scenarios involving complex maps or emphasizing real-time computation, the MSIDBO algorithm exhibits strong application advantages.
The research findings are highly significant in the fields of algorithm development and path planning. By proposing and demonstrating the effectiveness of the MSIDBO algorithm, the study contributes to the existing knowledge on swarm intelligence-based optimization techniques and their applications. One major aspect of the research findings is their potential value in practical applications such as robotics, autonomous navigation, and search and rescue missions. Effective path planning plays a crucial role in ensuring efficient movement in the complex and dynamic environments of these applications. The MSIDBO algorithm’s ability to generate globally optimal paths while considering obstacle avoidance can greatly enhance the performance and reliability of such systems. Moreover, the research findings contribute to the academic community by expanding the understanding of algorithm design and optimization strategies. The integration of multiple strategies in the MSIDBO algorithm, along with the introduction of the novel spiral foraging technique, presents new perspectives on how to improve path planning algorithms. This not only provides innovative solutions for current challenges but also represents progress in the field. Furthermore, the research findings have the potential to influence and improve existing algorithms. By addressing the limitations and drawbacks of the original DBO algorithm, the MSIDBO algorithm offers an alternative and improved approach to path planning. Other researchers and practitioners can benefit from these findings by incorporating the proposed strategies into their own algorithms or leveraging specific aspects of the MSIDBO algorithm to enhance their existing techniques.

Author Contributions

Conceptualization, C.G.; methodology, L.L. (Longhai Li); software, L.L. (Lili Liu), Y.S. and X.Z.; validation, Y.C., Y.S. and X.Z.; formal analysis, L.L. (Longhai Li) and L.L. (Lili Liu); investigation, L.L. (Longhai Li) and L.L. (Lili Liu); data curation, Y.C., Y.S. and X.Z.; writing—original draft preparation, L.L. (Longhai Li); writing—review and editing, C.G. and H.N.; project administration, H.N.; supervision, C.G.; funding acquisition, C.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant No. 51875282); the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (grant No. 21KJB460033); the Science Research Project of Xuzhou University of Technology (grant No. KC21002); the Jiangsu Industry University Research Cooperation Projects (grant No. BY2022774); and the National Defense Basic Scientific Research Project (grant No. JCKY2018605C010).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express their gratitude to the National Natural Science Foundation of China and the technical staff of the College of Electrical Engineering at Zhejiang University for their valuable technical support.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this article.

Figure 1. Dung beetle trajectory model.
Figure 2. Tangent function and dancing model of dung beetles. (a) Tangent function in the rolling direction; (b) dancing model of dung beetles.
Figure 3. Concept model of boundary selection strategy.
Figure 4. Search models. (a) Search model of the DBO algorithm; (b) logarithmic spiral search model.
Figure 6. Convergence processes of different algorithms on the unimodal functions. Subfigures (a–g) correspond to functions f1–f7, respectively.
Figure 7. Convergence processes of different algorithms on the multimodal functions. Subfigures (a–f) correspond to functions f8–f13, respectively.
Figure 8. Convergence processes of different algorithms on the fixed-dimension multimodal functions. Subfigures (a–f) correspond to functions f14–f19, respectively.
Figure 9. Plots of the 19 standard benchmark functions. Subfigures (a–s) represent functions f1–f19, respectively. Colors encode function values, with lighter colors corresponding to larger values.
Figure 10. Path planning comparison graph (10 × 10, obstacle rate 10%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves. Figures 11–24 follow the same layout as Figure 10, differing in map size, complexity, and obstacle rate.
Figure 11. Path planning comparison graph (10 × 10, obstacle rate 15%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 12. Path planning comparison graph (10 × 10, obstacle rate 20%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 13. Path planning comparison graph (15 × 15, obstacle rate 10%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 14. Path planning comparison graph (15 × 15, obstacle rate 15%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 15. Path planning comparison graph (15 × 15, obstacle rate 20%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 16. Path planning comparison graph (20 × 20, obstacle rate 10%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 17. Path planning comparison graph (20 × 20, obstacle rate 15%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 18. Path planning comparison graph (20 × 20, obstacle rate 20%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 19. Path planning comparison graph (30 × 30, obstacle rate 10%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 20. Path planning comparison graph (30 × 30, obstacle rate 15%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 21. Path planning comparison graph (30 × 30, obstacle rate 20%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 22. Path planning comparison graph (40 × 40, obstacle rate 10%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 23. Path planning comparison graph (40 × 40, obstacle rate 15%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 24. Path planning comparison graph (40 × 40, obstacle rate 20%). Panels (a) and (b) present the path planning results of the DBO and MSIDBO algorithms, respectively; panels (c) and (d) depict the corresponding fitness-iteration curves.
Figure 25. Performance comparisons between DBO and MSIDBO. (a) Metric values; (b) MSIDBO overperformance ratio.
Table 1. Parameter variable settings.
Algorithm | Parameter Variables
MSIDBO | Deviation coefficient K = 0.1; random numbers b = 0.3, c = 0.5
DBO | Deviation coefficient K = 0.1; random numbers b = 0.3, c = 0.5
GA | Maximum mutation probability Pmax = 0.9; minimum mutation probability Pmin = 0.01
GWO | Convergence factor a decreases linearly from 2 to 0 over the iterations
MFO | Path coefficient t ∈ [r, 1]; variable r decreases linearly from −1 to −2
PSO | Learning factors C1 = C2 = 2; initial inertia weight Wmax = 0.9; inertia weight at the maximum iteration Wmin = 0.6
WOA | Random number for the position update a ∈ [−1, 1]
Table 2. Benchmark test functions.
Functions | Dim | Domain | Global Opt
$f_1(x) = \sum_{i=1}^{D} x_i^2$ | 50 | [−100, 100] | 0
$f_2(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$ | 50 | [−100, 100] | 0
$f_3(x) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} x_j \right)^2$ | 50 | [−100, 100] | 0
$f_4(x) = \max_i \{ |x_i|, 1 \le i \le D \}$ | 50 | [−100, 100] | 0
$f_5(x) = \sum_{i=1}^{D-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 50 | [−30, 30] | 0
$f_6(x) = \sum_{i=1}^{D} (\lfloor x_i + 0.5 \rfloor)^2$ | 50 | [−100, 100] | 0
$f_7(x) = \sum_{i=1}^{D} i x_i^4 + \mathrm{random}[0, 1)$ | 50 | [−1.28, 1.28] | 0
$f_8(x) = \sum_{i=1}^{D} -x_i \sin(\sqrt{|x_i|})$ | 50 | [−500, 500] | −418.9829 × Dim
$f_9(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | 50 | [−5.12, 5.12] | 0
$f_{10}(x) = -20 \exp(-u(x)) - \exp(v(x)) + 20 + e$, with $u(x) = 0.2 \sqrt{\frac{1}{D} \sum_{i=1}^{D} x_i^2}$, $v(x) = \frac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i)$ | 50 | [−32, 32] | 0
$f_{11}(x) = \frac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\!\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 50 | [−600, 600] | 0
$f_{12}(x) = \frac{\pi}{D} \left[ v(x) + w(x) \right] + g(x)$, with $v(x) = \sum_{i=1}^{D-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right]$, $w(x) = 10 \sin^2(\pi y_1) + (y_D - 1)^2$, $g(x) = \sum_{i=1}^{D} u(x_i, 10, 100, 4)$, $y_i = 1 + \frac{x_i + 1}{4}$ | 50 | [−50, 50] | 0
$f_{13}(x) = 0.1 \left[ v(x) + w(x) \right] + g(x)$, with $v(x) = \sum_{i=1}^{D-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right]$, $w(x) = \sin^2(3\pi x_1) + (x_D - 1)^2 \left[ 1 + \sin^2(2\pi x_D) \right]$, $g(x) = \sum_{i=1}^{D} u(x_i, 5, 100, 4)$ | 50 | [−50, 50] | 0
$f_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.0003
$f_{16}(x) = -\sum_{i=1}^{4} c_i \exp\!\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0, 1] | −3.32
$f_{17}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532
$f_{18}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028
$f_{19}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363
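These benchmark functions follow standard definitions, so a few representatives can be written directly in code. The sketch below implements the sphere function f1, Rastrigin f9, and Ackley f10 in Python with NumPy; it is a reference reimplementation from the standard formulas for readers reproducing the benchmark, not the authors' experimental code.

```python
import numpy as np

def f1_sphere(x):
    # f1: sum of squares; unimodal, global optimum 0 at x = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f9_rastrigin(x):
    # f9: highly multimodal; global optimum 0 at x = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def f10_ackley(x):
    # f10: multimodal with a nearly flat outer region; optimum 0 at x = 0
    x = np.asarray(x, dtype=float)
    d = x.size
    u = 0.2 * np.sqrt(np.sum(x ** 2) / d)
    v = np.sum(np.cos(2.0 * np.pi * x)) / d
    return float(-20.0 * np.exp(-u) - np.exp(v) + 20.0 + np.e)

if __name__ == "__main__":
    zeros = np.zeros(50)  # 50-dimensional optimum, as in Table 2
    print(f1_sphere(zeros), f9_rastrigin(zeros), f10_ackley(zeros))
```

Each function takes a D-dimensional vector and returns a scalar fitness, which is the interface a metaheuristic such as DBO or MSIDBO minimizes over the listed domains.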
Table 3. Experimental results (dim = 30).
Function Name | Metric | GA | GWO | MFO | PSO | WOA | DBO | MSIDBO
f1Worst4689.192.97 × 10−1413,302.4413.17405341.6254.1 × 10−530
Best1.32 × 1042.70 × 10−165.04 × 1012.86 × 103.2 × 10−509.8 × 10−910
Average2.84 × 1044.70 × 10−152.02 × 1037.00 × 108.6 × 10−411.37 × 10−540
STD8.7 × 1036.37 × 10−153.98 × 1032.56 × 101.8 × 10−39548 × 10−540
f2Worst83.80834.244 × 10−978.5176613.826993.6 × 10−282.2 × 10−263.06 × 10−18
Best45.975834.34 × 10−105.046195.649962.4 × 10−345.2 × 10−474.6 × 10−22
Average63.220561.50 × 10−935.057029.052611.8 × 10−297.2 × 10−281.1 × 10−184
STD9.132779.03 × 10−1019.025261.969556.7 × 10−293.9 × 10−270
f3Worst96,369.820.5176750,566.01694.711003.131.4 × 10−70
Best2.77 × 1047.08 × 10−48.55 × 1031.82 × 1023.1 × 1047.1 × 10−760
Average5.48 × 1046.09 × 10−22.55 × 1043.79 × 1026.4 × 1044.5 × 10−90
STD1.65 × 1041.12 × 10−11.06 × 1041.20 × 1021.7 × 1042.5 × 10−80
f4Worst86.7050.003683.661763.206789.269.7 × 10−261.6 × 10−176
Best55.5940.0001751.501261.75382.756543.4 × 10−457.1 × 10−218
Average73.4310.0009669.804232.421253.323.2 × 10−275.8 × 10−178
STD7.7020.000777.9910.337527.271.8 × 10−260
f5Worst1.33 × 10828.80314800,7697165.92528.8128.06626.0072
Best885,82926.105877674.538869.118827.6725.91624.8950
Average453,69827.407644,007,1692655.26428.4226.44425.4896
STD295,6230.79504317,0961453.0690.340.44120.2755
f6Worst46,613.691.868415,289.4214.18031.75970.363672.09 × 10−5
Best1.19 × 1042.05 × 10−15.50 × 1012.99 × 102.8 × 10−11.8 × 10−32.46 × 10−7
Average2.76 × 1041.01 × 102.29 × 1036.95 × 108.7 × 10−15.8 × 10−22.70 × 10−6
STD8.63 × 1034.06 × 10−14.45 × 1032.63 × 103.6 × 10−19.9 × 10−24.14 × 10−6
f7Worst54.343060.00933424.6571194.562210.02710.00790.00059
Best5.20 × 101.07 × 10−31.71 × 10−18.86 × 102.4 × 10−41.8 × 10−48.89 × 10−6
Average21.600950.0038183.04919737.298640.00600.00210.00016
STD12.372770.0019285.73357721.476010.00670.00190.00014
f8Worst−1290.93−3474.61−6702.6−2963.57−7243.31−6060.32−2 × 1010
Best−3387.99−7725.31−10,229.3−7818.69−12,565.2−11,415.9−3 × 1012
Average−2160.87−5955.99−8484.38−5438.12−9912.5−7973.58−3 × 1011
STD523.1186922.7797890.57121403.1011721.521310.627.1 × 1011
f9Worst386.624924.65843247.4207274.363716.617934.58620
Best185.29675.32 × 10−987.51504146.6758000
Average2.91 × 1027.61 × 101.67 × 1022.12 × 1025.9 × 10−11.3 × 100
STD4.50 × 1015.54 × 103.78 × 1012.98 × 1014.3 × 105.1 × 100
f10Worst426.81720.041462126.0510.6115430.19690.02270
Best120.67722.88 × 10−151.4367570.188632000
Average254.6980.00669219.06220.3674740.00890.00080
STD77.169240.01173536.49660.1040.04010.00430
f11Worst1.61 × 1080.241454419,1910.9099690.46260.03620.00347
Best2.76 × 1061.71 × 10−21.18 × 1014.75 × 10−21.1 × 10−25.2 × 10−55.66 × 10−9
Average4.82 × 1076.39 × 10−21.71 × 1062.41 × 10−15.9 × 10−23.1 × 10−31.16 × 10−4
STD4.09 × 1074.87 × 10−28.66 × 1061.78 × 10−18.4 × 10−26.9 × 10−36.33 × 10−4
f12Worst1.64 × 1080.249422440,2320.8526340.4420.0330.0035
Best2.75 × 1061.68 × 10−21.21 × 1014.92 × 10−21.1 × 10−25.3 × 10−55.69 × 10−9
Average4.83 × 1076.45 × 10−21.78 × 1062.35 × 10−15.8 × 10−22.9 × 10−31.16 × 10−4
STD4.11 × 1074.88 × 10−28.27 × 1061.89 × 10−18.5 × 10−26.9 × 10−36.33 × 10−4
f13Worst449,5111.407713,8662.657171.4982.09228.37 × 10−6
Best1.65 × 1072.86 × 10−19.67 × 1016.42 × 10−12.8 × 10−16.2 × 10−28.58 × 10−8
Average1.47 × 1088.10 × 10−15.20 × 1061.45 × 107.9 × 10−11.0 × 101.06 × 10−6
STD1.06 × 1082.75 × 10−12.73 × 1074.95 × 10−13.1 × 10−15.6 × 10−11.72 × 10−6
f14Worst5.19217112.668729.504939.109910.9758.09952.967
Best0.9980.9980.9980.9980.9980.9980.998
Average1.4527974.88152.417073.1773.33451.6391.0791
STD0.9927564.16252.146842.46043.2781.6040.3832
f15Worst0.0597810.0216650.0121760.002420.00410.001870.00159
Best0.001040.0003120.0006040.0005980.000320.000310.00031
Average0.0146540.0047610.0017080.0009430.000830.000820.0007
STD0.0139430.0082610.0026760.0003530.000740.000420.00032
f16Worst−0.5501−2.9809−3.0249−3.2028−2.4351−2.78316−3.2031
Best−2.8874−3.322−3.322−3.322−3.3215−3.322−3.322
Average−1.5422−3.2555−3.2295−3.2680−3.2046−3.2249−3.2703
STD0.53630.08400.06540.05910.14240.10850.0589
f17Worst−0.318−2.543−2.631−2.6305−1.786−2.631−9.706
Best−3.126−10.153−10.15−10.1532−10.2−10.15−10.15
Average−0.808−9.040−6.64−6.876−7.68−7.09−10.14
STD0.4852.3513.3213.232.782.6330.067
f18Worst−0.411−4.153−1.989−2.3467−1.708−2.023−10.15
Best−3.312−10.402−10.40−10.403−10.40−10.40−10.40
Average−0.953−10.217−7.509−8.519−7.045−8.074−10.39
STD0.4920.9453.4262.9783.0812.8670.03
f19Worst−0.594−2.4372−2.023−2.371−1.62−2.078−10.47
Best−3.84−10.54−10.54−10.54−10.53−10.54−10.54
Average−1.183−10.191−7.76−9.116−6.597−8.39−10.53
STD0.49491.4763.542.75423.2763.00280.01
Table 4. Experimental results (dim = 60).
Function Name | Metric | GA | GWO | MFO | PSO | WOA | DBO | MSIDBO
f1Worst47,060.833.22 × 10−1415,141.2613.24628170.81272.78 × 10−530
Best1.3 × 1042.52 × 10−165.03 × 1012.87 × 106.10 × 10−504.91 × 10−910
Average2.83 × 1044.89 × 10−152.06 × 1036.91 × 101.67 × 10−408.65 × 10−550
STD8.64 × 1036.85 × 10−154.33 × 1032.53 × 109.06 × 10−405.08 × 10−540
f2Worst83.831874.25 × 10−978.746813.94195.98 × 10−281.094 × 10−261.5 × 10−183
Best45.733914.14 × 10−103.8496115.47713.67 × 10−342.65 × 10−471.46 × 10−217
Average63.181247105 × 10−935.115589.02362.36 × 10−293.65 × 10−285.11 × 10−185
STD9.2647229.11 × 10−1019.56892.02651.11 × 10−282.00 × 10−270
f3Worst97,172.370.752149,847.9695.748399,905.815.71180
Best2.78 × 1047.60 × 10−48.79 × 1031.81 × 1023.14 × 1043.51 × 10−760
Average5.51 × 1046.88 × 10−22.52 × 1043.80 × 1026.63 × 1041.90 × 10−10
STD1.68 × 1041.44 × 10−41.05 × 1041.20 × 1021.67 × 1041.04 × 100
f4Worst86.8840.0036683.620593.19860589.008631.917 × 10−248.75 × 10−177
Best56.15210.00016851.082741.79754173.1457234.07 × 10−458.69 × 10−26
Average73.5422020.00096969.64890.4262253.914846.40 × 10−262.92 × 10−178
STD7.6961020.0007867.979950.33272727.164033.50 × 10−250
f5Worst1.23 × 10828.7924559,238,6216627.83928.8058527.7707125.9643
Best8,811,37426.097197164.07848.792127.6634825.8996224.84403
Average45,135,45027.426612,806,3092627.18328.4070126.4259425.47919
STD27,999,1970.7930112,767,9341374.0010.3413370.4001490.272682
f6Worst46,879.2331.88805714,017.87713.54771.625810.35686931.95 × 10−5
Best1.22 × 1042.10 × 10−15.28 × 1013.04 × 102.78 × 10−11.70 × 10−32.32 × 10−7
Average2.77 × 1049.97 × 10−12.31 × 1036.90 × 108.54 × 10−15.82 × 10−22.70 × 10−6
STD8.71 × 1034.11 × 10−14.22 × 1032.58 × 103.41 × 10−19.07 × 10−24.11 × 10−6
f7Worst55.524980.00932126.9301290.473120.0028680.0072820.000599
Best4.88 × 101.12 × 10−31.72 × 10−18177 × 101.92 × 10−41.93 × 10−49.01 × 10−6
Average21.792460.0038753.31612336.302170.006050.002060.000157
STD12.667690.0019626.28779720.095010.0067840.0017230.000141
f8Worst−1267.73−3594.26−6697.17−2895.65−7041.94−6147.06−2.5 × 1010
Best−3409.11−7652.27−1010.9−7839.41−12,546.6−11,252.6−4 × 1012
Average−2173.52−5955.47−8504.9−5384.74−9829.7−7927.88−3.6 × 1011
STD521.8694909.7507878.05971418.2731703.079287.0798.1 × 1011
f9Worst384.257224.70109246.482327.964725.6753630.330650
Best191.5284.79 × 10−990.29257146.4949000
Average2.92 × 1027.78 × 101.66 × 1022.11 × 1029.34 × 10−11.27 × 100
STD4.38 × 1015.92 × 103.67 × 1013.04 × 1014.76 × 105.77 × 100
f10Worst431.19770.039783137.02230.6226140.2289260.263250
Best117.97652.95 × 10−151.456280.181875000
Average253.7670.00653620.41440.3664360.010580.0011480
STD77.743290.01132138.430990.1060270.0433940.0046560
f11Worst1.72 × 1080.234841436267940.8911380.3343940.0495330.003463
Best2.95 × 1061.80 × 10−21.21 × 1014.83 × 10−21024 × 10−24.92 × 10−56.04 × 10−9
Average4.90 × 1076.39 × 10−21.90 × 1062.38 × 10−15.50 × 10−23.62 × 10−31.15 × 10−4
STD4.24 × 1074.60 × 10−28088 × 1061.85 × 10−16.43 × 10−21.02 × 10−26.32 × 10−4
f12Worst1.68 × 1080.248472477743280.9078320.3370220.0475680.003463
Best2.93 × 1061.80 × 10−21.21 × 1014.81 × 10−21.24 × 10−25.01 × 10−56.00 × 10−9
Average4.92 × 1076.41 × 10−21.90 × 1062.38 × 10−15.52 × 10−23.62 × 10−31.15 × 10−4
STD4.27 × 1074.59 × 10−28.88 × 1061.86 × 10−16.42 × 10−21.02 × 10−26.32 × 10−4
f13Worst452,965.61.39155158,787.82.673531.5066652.0685677.067 × 10−6
Best1.92 × 1073.06 × 10−11.08 × 1026068 × 10−12.75 × 10−18.56 × 10−28.21 × 10−8
Average1.48 × 1088.18 × 10−15.86 × 1061.44 × 107.82 × 10−11.03 × 101.12 × 10−6
STD1.08 × 1082.66 × 10−13.00 × 1074.93 × 10−13.12 × 10−15.41 × 10−11.61 × 10−6
f14Worst5.60018312.685019.3213129.36096711.028947.9626172.618797
Best0.99830.99810.99820.9980.99810.99840.9982
Average1.4703074.8815332.4204373.1509833.4307221.6628251.07296
STD1.05 × 104.1625332.13 × 102.51 × 103.36 × 101.61 × 103.30 × 10−1
f15Worst0.0598990.0216550.0141850.0019070.0041830.0019090.001563
Best0.0010080.0003140.0006050.0006180.0003170.003080.000309
Average0.0143150.0047190.0017880.0009340.0008180.0008270.000704
STD0.0141040.0080980.0029430.0002560.0008060.0004310.000324
f16Worst−0.54294−2.94784−3.0336−3.2028−2.43004−2.753082−3.2031
Best−2.8889−3.322−3.322−3.322−3.3215−3.322−3.322
Average−1.54377−3.25589−3.2285−3.26701−3.20363−3.2237−3.26929
STD0.5330950.0848350.065680.05920.143950.1107050.05909
f17Worst−0.319−2.541−2.6305−2.6305−1849−2.631−9.855
Best−3.211−10.15−10.1532−10.1532−10,152−10.15−10.15
Average−0.815−8.98−6.617−6.8967−7.7006−7.013−10.14
STD0.49492.3983.319863.21512.676752.61870.35
f18Worst−0.414−3.703−2.0203−2.3476−1.7156−2.0833−10,158
Best−3.31293−10.4025−10.4029−10.4029−10.401−10.403−104,029
Average−0.96489−10199−7.548−8.48659−7.065−81,011−1039
STD0.49671.03183.421733.00543.08772.86080.02935
f19Worst−0.5987−2.531−1.9923−2.3513−1.6329−2.094−10.474
Best−4.265−10,536−10.5364−10.5364−10.53−10.536−105,364
Average−1.17597−10,179−7.76749−9.1143−6.5757−8.3602−105,326
STD0.47381.5423.541162.72243.267422.995160.00961
Table 5. Experimental results (dim = 100).

| Function Name | Metric | GA | GWO | MFO | PSO | WOA | DBO | MSIDBO |
|---|---|---|---|---|---|---|---|---|
| f1 | Worst | 47,399.25 | 3.291 × 10^−14 | 14,797.58 | 13.157 | 102.488 | 2.9 × 10^−53 | 0 |
| | Best | 1.30 × 10^4 | 2.47 × 10^−16 | 5.14 × 10^1 | 2.87 × 10 | 5.36 × 10^−50 | 3.09 × 10^−91 | 0 |
| | Average | 2.82 × 10^4 | 5.03 × 10^−15 | 2.10 × 10^3 | 6.89 × 10 | 1.04 × 10^−40 | 9.77 × 10^−55 | 0 |
| | STD | 8.65 × 10^3 | 7.05 × 10^−15 | 4.30 × 10^3 | 2.53 × 10 | 5.62 × 10^−40 | 5.27 × 10^−54 | 0 |
| f2 | Worst | 83.9902 | 4.291 × 10^−9 | 79.884068 | 13.81985 | 5.167 × 10^−28 | 7.2 × 10^−27 | 2.07 × 10^−183 |
| | Best | 45.6242 | 4.23 × 10^−10 | 3.7710064 | 5.510094 | 3.28 × 10^−34 | 1.60 × 10^−47 | 8.74 × 10^−218 |
| | Average | 63.19297 | 1.52 × 10^−9 | 35.409994 | 9.007968 | 2.05 × 10^−29 | 2.41 × 10^−28 | 6.90 × 10^−185 |
| | STD | 9.257723 | 9.13 × 10^−10 | 19.680174 | 1.99159 | 9.57 × 10^−29 | 1.31 × 10^−27 | 0 |
| f3 | Worst | 97,429.1 | 0.812516 | 50,226.56 | 687.1484 | 99,321.49 | 3.427076 | 0 |
| | Best | 2.80 × 10^4 | 7.42 × 10^−4 | 8.80 × 10^3 | 1.82 × 10^2 | 3.09 × 10^4 | 2.10 × 10^−76 | 0 |
| | Average | 5.53 × 10^4 | 7.18 × 10^−2 | 2.50 × 10^4 | 3.77 × 10^2 | 6.32 × 10^4 | 1.14 × 10^−1 | 0 |
| | STD | 1.65 × 10^4 | 1.65 × 10^−1 | 1.05 × 10^4 | 1.20 × 10^2 | 1.66 × 10^4 | 6.26 × 10^−1 | 0 |
| f4 | Worst | 86.86557 | 0.003877968 | 83.975839 | 3.210732 | 89.156423 | 3.42 × 10^−24 | 5.25 × 10^−177 |
| | Best | 56.00025 | 0.000169495 | 50.586567 | 1.811183 | 3.54558 | 3.36 × 10^−45 | 9.52 × 10^−214 |
| | Average | 73.53378 | 0.000979805 | 69.730078 | 2.430323 | 54.242383 | 1.14 × 10^−25 | 1.75 × 10^−178 |
| | STD | 7.746593 | 0.00083008 | 18.088455 | 0.33052 | 7.034469 | 6.24 × 10^−25 | 0 |
| f5 | Worst | 1.16 × 10^8 | 28.77079 | 43,416,151 | 6205.322 | 28.79904 | 27.55294 | 25.93408 |
| | Best | 8,545,640 | 26.07047 | 6957.318 | 809.3929 | 27.66108 | 25.90102 | 24.87498 |
| | Average | 44,424,260 | 27.38908 | 2,055,556 | 2572.316 | 28.39524 | 26.40696 | 25.48113 |
| | STD | 26,219,238 | 0.772878 | 8,911,164 | 1299.916 | 0.341806 | 0.356248 | 0.260821 |
| f6 | Worst | 47,171.09 | 1.901686 | 14,914.64 | 13.559 | 1.670371 | 0.3597 | 1.988 × 10^−5 |
| | Best | 1.22 × 10^4 | 1.97 × 10^−1 | 5.20 × 10^1 | 3.00 × 10 | 2.81 × 10^−1 | 1.68 × 10^−3 | 2.28 × 10^−7 |
| | Average | 2.78 × 10^4 | 9.84 × 10^−1 | 2.24 × 10^3 | 6.90 × 10 | 8.52 × 10^−1 | 5.81 × 10^−2 | 2.75 × 10^−6 |
| | STD | 8.66 × 10^3 | 4.18 × 10^−1 | 4.35 × 10^3 | 2.54 × 10 | 3.43 × 10^−1 | 9.65 × 10^−2 | 4.09 × 10^−6 |
| f7 | Worst | 54.89628 | 0.009572 | 27.59129 | 91.97557 | 0.027872 | 0.007225 | 0.000602 |
| | Best | 4.82 × 10 | 1.08 × 10^−3 | 1.71 × 10^−1 | 8.31 × 10 | 1.80 × 10^−4 | 1.90 × 10^−4 | 8.67 × 10^−6 |
| | Average | 21.77833 | 0.003853 | 3.337068 | 36.4818 | 0.006005 | 0.002084 | 0.000159 |
| | STD | 12.55115 | 0.001977 | 6.293029 | 20.54151 | 0.006716 | 0.001697 | 0.000142 |
| f8 | Worst | −1249.28 | −3471.2 | −6735.39 | −2904.71 | −7103.28 | −6111.66 | −2.5 × 10^10 |
| | Best | −3440.29 | −7639.39 | −10,154.8 | −7969.89 | −12,554.5 | −11,228.6 | −3.6 × 10^12 |
| | Average | −2169.19 | −5951.5 | −8515.51 | −5400.25 | −9857.02 | −7914.14 | −3.4 × 10^11 |
| | STD | 528.4644 | 921.9968 | 865.4096 | 1430.685 | 1713.684 | 1297.772 | 7.19 × 10^11 |
| f9 | Worst | 384.278 | 28.04614 | 247.2384 | 272.0869 | 26.83625 | 28.64649 | 0 |
| | Best | 198.9717 | 2.9 × 10^−9 | 91.00348 | 148.2142 | 0 | 0 | 0 |
| | Average | 2.91 × 10^2 | 7.88 × 10 | 1.67 × 10^2 | 2.12 × 10^2 | 9.43 × 10^−1 | 1.22 × 10^0 | 0 |
| | STD | 4.41 × 10^1 | 6.54 × 10 | 3.69 × 10^1 | 3.02 × 10^1 | 4.94 × 10 | 5.46 × 10^0 | 0 |
| f10 | Worst | 429.6659 | 0.038694 | 135.8756 | 0.616756 | 0.225699 | 0.027315 | 0 |
| | Best | 118.4294 | 2.97 × 10^−15 | 1.452113 | 0.183202 | 0 | 0 | 0 |
| | Average | 2.53 × 10^2 | 6.66 × 10^−3 | 2.09 × 10^1 | 3.69 × 10^−1 | 1.10 × 10^−2 | 1.05 × 10^−3 | 0 |
| | STD | 7.80 × 10^1 | 1.12 × 10^−2 | 3.96 × 10^1 | 1.05 × 10^−1 | 4.65 × 10^−2 | 5.17 × 10^−3 | 0 |
| f11 | Worst | 1.74 × 10^8 | 0.242085 | 54,754,976 | 0.888256 | 0.347258 | 0.043088 | 0.002078 |
| | Best | 2.88 × 10^6 | 1.77 × 10^−2 | 1.21 × 10^1 | 4.86 × 10^−2 | 1.23 × 10^−2 | 4.79 × 10^−5 | 6.05 × 10^−9 |
| | Average | 4.89 × 10^7 | 6.35 × 10^−2 | 2.27 × 10^6 | 2.35 × 10^−1 | 5.59 × 10^−2 | 3.23 × 10^−3 | 6.93 × 10^−5 |
| | STD | 4.27 × 10^7 | 4.71 × 10^−2 | 1.09 × 10^7 | 1.85 × 10^−1 | 6.61 × 10^−2 | 8.83 × 10^−3 | 3.79 × 10^−4 |
| f12 | Worst | 174,221.5 | 0.2420846 | 354,755.74 | 0.8883 | 0.3472581 | 0.043088 | 0.0020781 |
| | Best | 2.88 × 10^6 | 1.77 × 10^−2 | 1.21 × 10^1 | 4.86 × 10^−2 | 1.23 × 10^−2 | 4.79 × 10^−5 | 6.05 × 10^−9 |
| | Average | 4.89 × 10^7 | 6.35 × 10^−2 | 2.27 × 10^6 | 2.35 × 10^−1 | 5.59 × 10^−2 | 3.23 × 10^−3 | 6.93 × 10^−5 |
| | STD | 4.27 × 10^7 | 4.71 × 10^−2 | 1.09 × 10^7 | 1.85 × 10^−1 | 6.61 × 10^−2 | 8.83 × 10^−3 | 3.79 × 10^−4 |
| f13 | Worst | 44,047.2 | 1.398164 | 171,203.5 | 2.708498 | 1.509419 | 2.060259 | 7.515 × 10^−6 |
| | Best | 1.85 × 10^7 | 3.16 × 10^−1 | 1.10 × 10^2 | 6.57 × 10^−1 | 2.74 × 10^−1 | 8.01 × 10^−2 | 8.26 × 10^−8 |
| | Average | 1.47 × 10^8 | 8.28 × 10^−1 | 6.62 × 10^6 | 1.44 × 10 | 7.80 × 10^−1 | 1.03 × 10 | 1.12 × 10^−6 |
| | STD | 1.05 × 10^8 | 2.68 × 10^−1 | 3.30 × 10^7 | 4.99 × 10^−1 | 3.11 × 10^−1 | 5.44 × 10^−1 | 1.61 × 10^−6 |
| f14 | Worst | 5.52789 | 12.71625 | 9.31413 | 9.255525 | 11.32096 | 7.918539 | 2.403875 |
| | Best | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 |
| | Average | 1.465371 | 4.681182 | 2.450801 | 3.130818 | 3.402065 | 1.66973 | 1.063373 |
| | STD | 1.044614 | 4.101196 | 2.141818 | 2.452627 | 3.30675 | 1.602825 | 0.28916 |
| f15 | Worst | 0.062858 | 0.021507 | 0.013096 | 0.001702 | 0.004747 | 0.001903 | 0.001509 |
| | Best | 0.001024 | 0.000315 | 0.000614 | 0.000618 | 0.000316 | 0.000308 | 0.000308 |
| | Average | 0.014875 | 0.004617 | 0.001705 | 0.000926 | 0.000859 | 0.000828 | 0.0007 |
| | STD | 0.014875 | 0.00801 | 0.002691 | 0.000216 | 0.000922 | 0.000431 | 0.000324 |
| f16 | Worst | −0.5533 | −2.962267 | −3.00948 | −3.202858 | −2.470033 | −2.748 | −3.2031 |
| | Best | −2.896 | −3.322 | −3.322 | −3.322 | −3.321454 | −3.322 | −3.322 |
| | Average | −1.544 | −3.256654 | −3.228337 | −3.267211 | −3.20243 | −3.223 | −3.268901 |
| | STD | 0.5293 | 0.08392682 | 0.06586 | 0.05924 | 0.14353 | 0.1103 | 0.059099 |
| f17 | Worst | −0.317 | −2.557448 | −2.6305 | −2.6305 | −1.904 | −2.631 | −9.8866 |
| | Best | −3.389 | −10.1527 | −10.153 | −10.153 | −10.152 | −10.15 | −10.153 |
| | Average | −0.817 | −8.97082 | −6.5838 | −6.9079 | −7.6909 | −6.989 | −10.142 |
| | STD | 0.505 | 2.39862 | 3.3139 | 3.2124 | 2.7723 | 2.6069 | 0.03235 |
| f18 | Worst | −0.419 | −3.7898 | −1.9655 | −2.336 | −1.724 | −2.129 | −10.226 |
| | Best | −3.326 | −10.4024 | −10.403 | −10.41 | −10.401 | −10.41 | −10.403 |
| | Average | −0.96 | −10.194 | −7.5417 | −8.449 | −7.0764 | −8.11 | −10.396 |
| | STD | 0.4899 | 1.04452 | 3.422 | 3.017 | 3.088 | 2.8644 | 0.0224 |
| f19 | Worst | −0.591 | −2.5181 | −1.9941 | −2.34 | −1.6243 | −2.12 | −10.477 |
| | Best | −4.24 | −10.5359 | −10.536 | −10.536 | −10.53 | −10.54 | −10.536 |
| | Average | −1.173 | −10.17441 | −7.7544 | −9.104 | −6.559 | −8.363 | −10.533 |
| | STD | 0.4729 | 1.561447 | 3.5422 | 2.7467 | 3.2594 | 2.9881 | 0.00928 |
Table 6. Metrics comparison of the runs.

| Map Size | Obstacle Rate | Metric | DBO | MSIDBO | Decrease Percentage (%) |
|---|---|---|---|---|---|
| 10 × 10 | 10% | Path Length (m) | 14.4853 | 14.1532 | 2.29% |
| | | Number of Turns | 6 | 4 | 33.33% |
| | | Iteration | 90 | 80 | 11.11% |
| | 15% | Path Length (m) | 15.05 | 12.73 | 15.41% |
| | | Number of Turns | 5 | 0 | 100.00% |
| | | Iteration | 90 | 88 | 2.22% |
| | 20% | Path Length (m) | 14.46 | 13.28 | 13.54% |
| | | Number of Turns | 5 | 3 | 40% |
| | | Iteration | 92 | 65 | 29.35% |
| 15 × 15 | 10% | Path Length (m) | 25.65685 | 20.38478 | 20.55% |
| | | Number of Turns | 6 | 4 | 33.33% |
| | | Iteration | 91 | 78 | 14.29% |
| | 15% | Path Length (m) | 25.31371 | 20.97056 | 17.16% |
| | | Number of Turns | 15 | 5 | 66.67% |
| | | Iteration | 63 | 50 | 20.63% |
| | 20% | Path Length (m) | 22.14214 | 20.38478 | 7.94% |
| | | Number of Turns | 10 | 4 | 60% |
| | | Iteration | 90 | 83 | 7.78% |
| 20 × 20 | 10% | Path Length (m) | 34.4853 | 33.3137 | 3.40% |
| | | Number of Turns | 10 | 7 | 30.00% |
| | | Iteration | 80 | 70 | 12.50% |
| | 15% | Path Length (m) | 35.64 | 30.92 | 13.24% |
| | | Number of Turns | 8 | 13 | −6.25% |
| | | Iteration | 72 | 67 | 6.94% |
| | 20% | Path Length (m) | 43.56 | 30.63 | 29.68% |
| | | Number of Turns | 19 | 11 | 42.11% |
| | | Iteration | 96 | 76 | 20.83% |
| 30 × 30 | 10% | Path Length (m) | 55.64 | 51.47 | 7.49% |
| | | Number of Turns | 5 | 17 | −70.58% |
| | | Iteration | 56 | 50 | 10.71% |
| | 15% | Path Length (m) | 60.93 | 52.57 | 13.72% |
| | | Number of Turns | 11 | 7 | 36.37% |
| | | Iteration | 97 | 80 | 17.53% |
| | 20% | Path Length (m) | 57.72 | 54.08 | 6.31% |
| | | Number of Turns | 6 | 16 | −166.67% |
| | | Iteration | 110 | 100 | 9.09% |
| 40 × 40 | 10% | Path Length (m) | 75.24 | 68.53 | 8.92% |
| | | Number of Turns | 5 | 23 | −360.00% |
| | | Iteration | 93 | 42 | 54.84% |
| | 15% | Path Length (m) | 188.94 | 77.74 | 58.85% |
| | | Number of Turns | 46 | 29 | 36.96% |
| | | Iteration | 90 | 80 | 11.11% |
| | 20% | Path Length (m) | 76.83 | 72.73 | 5.34% |
| | | Number of Turns | 4 | 9 | −175.00% |
| | | Iteration | 95 | 91 | 4.21% |
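Most "Decrease Percentage" entries in Table 6 are consistent with the usual relative-improvement formula, (DBO − MSIDBO) / DBO × 100, where a positive value means MSIDBO achieved a smaller (better) metric and a negative value means it needed more (e.g. more turns) on that map. A small sketch, using values from the table's first rows:

```python
def decrease_percentage(dbo_value, msidbo_value):
    """Relative improvement of MSIDBO over DBO, in percent.

    Positive: MSIDBO's metric is smaller (better);
    negative: MSIDBO's metric is larger (worse).
    """
    return (dbo_value - msidbo_value) / dbo_value * 100

# 10 x 10 map, 10% obstacle rate, path-length row of Table 6:
round(decrease_percentage(14.4853, 14.1532), 2)  # -> 2.29
# Same map, iteration row:
round(decrease_percentage(90, 80), 2)  # -> 11.11
```

The same formula reproduces the negative entries where MSIDBO traded extra turns for a shorter path, e.g. (6 − 16) / 6 × 100 = −166.67% for the 30 × 30, 20% turn count.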
Li, L.; Liu, L.; Shao, Y.; Zhang, X.; Chen, Y.; Guo, C.; Nian, H. Enhancing Swarm Intelligence for Obstacle Avoidance with Multi-Strategy and Improved Dung Beetle Optimization Algorithm in Mobile Robot Navigation. Electronics 2023, 12, 4462. https://doi.org/10.3390/electronics12214462