Article

An Improved Spider-Wasp Optimizer for Obstacle Avoidance Path Planning in Mobile Robots

1 College of Automation and Electrical Engineering, Nanjing Tech University, Nanjing 210000, China
2 Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
3 College of Computer and Information Engineering, Nanjing Tech University, Nanjing 210000, China
4 NUIST Reading Academy, Nanjing University of Information Science and Technology, Nanjing 210000, China
5 College of Material and Chemical Engineering, Zhengzhou University of Light Industry, Zhengzhou 450000, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2604; https://doi.org/10.3390/math12172604
Submission received: 5 July 2024 / Revised: 17 August 2024 / Accepted: 21 August 2024 / Published: 23 August 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract:
The widespread application of mobile robots holds significant importance for advancing social intelligence. However, as environments grow more complex, existing Obstacle Avoidance Path Planning (OAPP) methods tend to fall into local optimal paths, compromising reliability and practicality. Therefore, based on the Spider-Wasp Optimizer (SWO), this paper proposes an improved OAPP method, called the LMBSWO, to address these challenges. Firstly, a learning strategy is introduced to enhance the diversity of the algorithm population, thereby improving its global optimization performance. Secondly, a dual-median-point guidance strategy is incorporated to strengthen the algorithm's exploitation capability and increase its path searchability. Lastly, a better guidance strategy is introduced to enhance the algorithm's ability to escape local optimal paths. The LMBSWO is then applied to OAPP in five map environments of different complexity. The experimental results show that, compared with nine existing novel OAPP methods with efficient performance, the LMBSWO achieves an advantage in collision-free path length with 100% probability across the five maps, while obtaining 80% fault tolerance across the different maps. The LMBSWO also ranks first in the trade-off between planning time and path length. These results indicate that the LMBSWO is a robust OAPP method with efficient solving performance and high robustness.

1. Introduction

With the development of social technology, mobile robots have gradually become a key technology in intelligent development. They are widely used in industrial production, medical assistance, environmental protection, and other fields to enhance production efficiency [1], which has profound significance for promoting social intelligence and reducing resource waste. In the field of mobile robotics, Obstacle Avoidance Path Planning (OAPP) is an essential task aimed at generating the shortest collision-free path from the starting point to the destination [2]. In the past few decades, scholars have tackled OAPP by employing traditional optimization methods such as cell decomposition [3], potential field method [4], road map [5], and others. However, due to the nonlinear and non-deterministic characteristics of OAPP, traditional optimization methods struggle to effectively solve it. As a result, metaheuristic algorithms have gained popularity for solving OAPP due to their simplicity in structure and efficiency in finding solutions [6].
The commonly used metaheuristic algorithms can be categorized into four types: population-based algorithms, evolution-based algorithms, human-inspired algorithms, and physical and chemical-based algorithms. Population-based algorithms mainly include the whale optimization algorithm (WOA) [7], the Aquila Optimizer (AO) [8], the salp swarm algorithm (SSA) [9], the Grey Wolf Optimizer (GWO) [10], and Harris Hawks Optimization (HHO) [11]. Evolution-based algorithms primarily include the genetic algorithm (GA) [12], evolution strategies (ESs) [13], and differential evolution (DE) [14]. Human-inspired algorithms mainly include teaching–learning-based optimization (TLBO) [15], search and rescue optimization (SAR) [16], and socio evolution and learning optimization (SELO) [17]. Physical and chemical-based algorithms primarily include the big bang–big crunch algorithm (BB-BC) [18], water evaporation optimization (WEO) [19], and the multi-verse optimizer (MVO) [20].
In recent years, scholars have proposed numerous OAPP solutions based on metaheuristic algorithms. For instance, Divya Agarwal et al. proposed the slime mould optimization algorithm (SMOA) for OAPP, combining oscillatory features and adaptive weighting to dynamically adjust the optimal path by exploiting the positive and negative feedback mechanisms of bio-inspired systems. This OAPP method offers a great advantage in the time required to generate optimal collision-free paths, but the length of the generated paths still leaves room for improvement, because the SMOA tends to fall into locally optimal paths when solving OAPP [2]. Wang et al. proposed an adaptive strategy-based salp swarm algorithm (ABSSA) for the OAPP task of autonomous mobile robots by integrating inertia weights and a global optimal guidance mechanism into the salp swarm algorithm; the adaptive nature of the ABSSA gives it excellent search capability and stability, enabling the robot to follow the shortest path from the starting point to the goal. However, the algorithm's lack of global search capability exposes the ABSSA-based OAPP method to the risk of path collisions [21]. Md. Rafiqul Islam et al. demonstrated that Chemical Reaction Optimization (CRO), using two repair operators combined with the algorithm's repair feature during evolutionary iterations, enhances the smoothness of the optimal collision-free paths and reduces execution time. However, the over-exploitation in the algorithm's OAPP process increases the collision-free path length despite the advantage of reduced execution time [22]. Liu et al. proposed an improved sparrow search algorithm for OAPP, combining Cauchy inverse learning and the Lévy flight strategy; it obtains better convergence results due to the strategy's strong exploitability, but its solution stability is deficient, which leaves the OAPP method lacking robustness [23]. Dai et al. proposed a novel whale optimization algorithm (NWOA) combining adaptive techniques and virtual obstacle techniques. The NWOA-based OAPP method has been shown to minimize the average path length metric, but its insufficient trade-off between global search and local exploitation prevents it from further exploring optimal paths [24]. Zou et al. proposed a mayfly optimization algorithm based on reinforcement learning; the dynamic selectivity of reinforcement learning improves the stability and accuracy of the algorithm on OAPP tasks, but the imbalance between its global exploration and local exploitation leaves room for further improvement [1]. Liu et al. enhanced the ability of the ant colony optimization algorithm to escape local optimal paths by combining a pseudo-random transition strategy and a dynamic adjustment strategy, but it still suffers from insufficient convergence performance in complex map environments [25].
The above examples show that metaheuristic-based OAPP methods are an effective means of solving mobile robot path planning. However, existing metaheuristic-based OAPP methods do not achieve a good trade-off between the various metrics, which makes it difficult to ensure their practicality. Meanwhile, as mobile robot applications spread, real environments become increasingly cluttered, making the search space more complex. As a result, existing OAPP methods are prone to becoming stuck in local optimal paths, resulting in insufficient solution accuracy, among other issues. Therefore, we should explore an innovative, suitable, and highly efficient metaheuristic-based OAPP algorithm to alleviate the challenges posed by complex environments. Fortunately, the Spider-Wasp Optimizer (SWO) has been proven to be a robust tool with high search performance [26]; therefore, this paper uses the SWO to solve OAPP. Considering that the SWO may become trapped in local optimal paths and exhibit low global search performance when solving OAPP in complex environments, this paper proposes an enhanced version of the SWO, called the LMBSWO, which combines the learning strategy, the dual-median-point guidance strategy, and a better guidance strategy. Firstly, the learning strategy is introduced in the searching stage of the original SWO to enrich population diversity and enhance the algorithm's global optimal path search capability by learning from the information gaps between multiple groups of individuals. Secondly, the dual-median-point guidance strategy is introduced in the following and escaping stages of the original SWO to enhance the algorithm's exploitation performance and improve path search capability through the guidance of dual medians.
Lastly, the better guidance strategy is introduced in the nesting behavior, enhancing the algorithm’s ability to escape from local optimal paths and improving path-solving performance through the guidance of a superior individual set. The main contributions of this paper are as follows:
  • Eight novel algorithms are applied to the OAPP, introducing new perspectives and comparison criteria for solving the problem.
  • The introduction of the learning strategy in SWO has enhanced the population diversity of the algorithm, thereby improving its capability to search for global optimal paths.
  • The introduction of the dual-median-point guidance strategy in SWO has enhanced the algorithm’s path search capability.
  • The introduction of the better guidance strategy in SWO has enhanced the algorithm’s ability to escape from local optima in path search.
  • The LMBSWO-based OAPP method is proposed in combination with the above strategies, achieving outstanding results in five different map environments.
The subsequent sections of this paper are arranged as follows. Section 2 primarily introduces the relevant knowledge of the SWO. Section 3 combines the learning strategy, dual-median-point guidance strategy, and the better guidance strategy to propose LMBSWO. Section 4 evaluates the performance of LMBSWO’s OAPP in five different map environments. Section 5 provides the conclusion and outlines future work in this paper.

2. Spider-Wasp Optimizer

In this section, our primary focus is on analyzing the theoretical concept and mathematical model of the SWO.

2.1. Theoretical Concept

The SWO is an innovative optimization algorithm grounded in swarm intelligence, drawing inspiration from hunting and nesting behaviors, as well as the obligatory brood parasitism observed in certain species of wasps, where an egg is laid in the abdomen of a spider. Firstly, the female spider wasp meticulously searches for a suitable spider within its environment, subsequently immobilizing it and transporting it to a pre-prepared, optimal nesting site. Ultimately, the female spider wasp lays an egg within the spider's abdomen and seals the nest. In the implementation of the SWO, an initial population of female wasps is randomly dispersed within the search space. Subsequently, each female wasp engages in an iterative exploration process within this space, seeking out spiders that are conducive to the gender determination of their offspring. Upon encountering a suitable spider, the female spider wasp initiates a predatory sequence, involving foraging near the spider's web, scanning the ground for fallen prey, attacking the spider to incapacitate it, and then dragging the paralyzed spider to a previously prepared nest. Finally, the female wasp deposits an egg within the spider's abdomen and seals the nest. We abstract the aforementioned processes into four distinct behaviors: the searching behavior, the following and escaping behaviors, the nesting behavior, and the mating behavior. These four behaviors are mathematically modeled in the next subsection.

2.2. Mathematical Model

Similar to other evolutionary algorithms, the SWO starts the optimization process by initializing the population. The positions of individual wasps are then updated through the searching behavior, the following and escaping behaviors, the nesting behavior, and the mating behavior, and the population individuals are retained until the termination condition is satisfied and the optimization process ends. The mathematical model of the behaviors in the SWO optimization process is as follows:
(1)
Initializing population
In SWO, each (female) spider wasp represents a solution to the problem to be solved, expressed using Equation (1):

$SW = [x_1, x_2, x_3, \ldots, x_D]$  (1)

where $D$ denotes the dimension of the problem to be solved, and $x_i$ denotes the value of the individual in the $i$-th dimension. In solving optimization problems, it is usually necessary to generate initialized individuals between the problem boundaries, denoted as Equation (2):

$SW_i = LB + r \cdot (UB - LB)$  (2)

where $SW_i$ denotes the information of the $i$-th female individual, $r$ denotes a $D$-dimensional random vector in the interval [0, 1], and $LB$ and $UB$ denote the lower and upper bounds of the problem, both $D$-dimensional vectors. The $N$ individuals are subsequently formed into an initialized population for the subsequent iterative optimization, denoted as Equation (3):

$SW_{Pop} = \begin{bmatrix} sw_{1,1} & sw_{1,2} & \cdots & sw_{1,D} \\ sw_{2,1} & sw_{2,2} & \cdots & sw_{2,D} \\ \vdots & \vdots & \ddots & \vdots \\ sw_{N,1} & sw_{N,2} & \cdots & sw_{N,D} \end{bmatrix}$  (3)
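As a concrete illustration, the uniform initialization of Equations (2) and (3) can be sketched in Python (a minimal sketch; the function and variable names are our own, not from the original implementation):

```python
import random

def init_population(n, dim, lb, ub):
    """Equations (2)-(3): scatter N D-dimensional wasps uniformly in [LB, UB]."""
    return [[lb[d] + random.random() * (ub[d] - lb[d]) for d in range(dim)]
            for _ in range(n)]
```

Each row of the returned list corresponds to one female spider wasp $SW_i$ of the population matrix in Equation (3).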
(2)
Searching behavior
In the searching stage, the female spider wasp randomly explores the population space at a constant step size to find relevant spiders to feed the larvae, denoted as Equation (4).
$SW_i^{t+1} = SW_i^t + \mu_1 \cdot (SW_a^t - SW_b^t)$  (4)

where $t$ denotes the number of iterations, and $SW_i^t$ denotes the information of the $i$-th individual at the $t$-th iteration. $SW_a^t$ and $SW_b^t$ denote two distinct random individuals in the population. $SW_i^{t+1}$ denotes the new state generated by the $i$-th individual after the search phase. $\mu_1$ determines the direction of the search and is expressed as Equation (5):

$\mu_1 = rn \cdot r_1$  (5)

where $rn$ denotes a random number obeying a normal distribution, and $r_1$ is a random number in the interval [0, 1]. In addition, considering that the female spider wasp sometimes loses track of a spider after it falls, she searches the area around the location where the spider fell, denoted as Equation (6):

$SW_i^{t+1} = SW_c^t + \mu_2 \cdot (LB + r \cdot (UB - LB))$  (6)

where $SW_c^t$ denotes a random individual in the population, and $\mu_2$ denotes its search direction, expressed as Equation (7):

$\mu_2 = B \cdot \cos(2 \pi l)$  (7)

where $B$ is expressed using Equation (8):

$B = \frac{1}{1 + e^{l}}$  (8)

where $l$ denotes a random number generated in the interval [−2, −1]. The search stage is modeled using Equation (9), with Equations (4) and (6) assisting each other to locate the more promising areas:

$SW_i^{t+1} = \begin{cases} \text{Equation (4)} & r_2 < r_3 \\ \text{Equation (6)} & \text{otherwise} \end{cases}$  (9)

where both $r_2$ and $r_3$ denote random numbers in the interval [0, 1].
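The searching stage of Equations (4)-(9) can be rendered as a short Python sketch (an illustrative reading, not the authors' implementation; names are our own):

```python
import math
import random

def search_step(sw_i, sw_a, sw_b, sw_c, lb, ub):
    """Equations (4)-(9): one searching-stage update for wasp i."""
    r1, r2, r3 = random.random(), random.random(), random.random()
    if r2 < r3:
        # Equation (4): explore along the gap between two random wasps,
        # with direction mu1 = rn * r1 from Equation (5)
        mu1 = random.gauss(0.0, 1.0) * r1
        return [x + mu1 * (a - b) for x, a, b in zip(sw_i, sw_a, sw_b)]
    # Equation (6): search around the spot where the spider fell,
    # with mu2 = B * cos(2*pi*l), B = 1 / (1 + e^l), l in [-2, -1]
    l = random.uniform(-2.0, -1.0)
    mu2 = (1.0 / (1.0 + math.exp(l))) * math.cos(2.0 * math.pi * l)
    return [c + mu2 * (lo + random.random() * (hi - lo))
            for c, lo, hi in zip(sw_c, lb, ub)]
```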
(3)
Following and escaping behaviors
After confirming the location of the prey, the female spider wasp hunts it. This behavior is modeled using Equation (10):

$SW_i^{t+1} = SW_i^t + C \cdot |2 \cdot r \cdot SW_a^t - SW_i^t|$  (10)

where $C$ denotes the distance control factor, which decreases linearly from 2 to 0, expressed as Equation (11):

$C = \left(2 - 2 \cdot \frac{t}{t_{max}}\right) \cdot r_1$  (11)

where $t$ denotes the current number of iterations and $t_{max}$ denotes the maximum number of iterations. Meanwhile, the prey tries to escape when attacked, and this behavior is expressed using Equation (12):

$SW_i^{t+1} = SW_i^t \cdot vc$  (12)

where $vc$ is a $D$-dimensional vector generated according to a normal distribution in the interval $[-k, k]$, and $k$ is expressed using Equation (13):

$k = 1 - \frac{t}{t_{max}}$  (13)

The subsequent trade-off between the hunting and fleeing behaviors is randomized as Equation (14):

$SW_i^{t+1} = \begin{cases} \text{Equation (10)} & r_2 < r_3 \\ \text{Equation (12)} & \text{otherwise} \end{cases}$  (14)

In the optimization process, the searching behavior is first used to identify the region that may contain the globally optimal solution, and the following and escaping behaviors are then used to keep the algorithm from falling into local optimal traps. Equation (15) weighs these two behaviors:

$SW_i^{t+1} = \begin{cases} \text{Equation (9)} & r_1 < k \\ \text{Equation (14)} & \text{otherwise} \end{cases}$  (15)
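The following and escaping behaviors of Equations (10)-(14) can be sketched as follows (an illustrative Python rendering under our own reading; in particular, the vector $vc$ "generated according to a normal distribution in the interval $[-k, k]$" is interpreted here as a standard normal draw clipped to $[-k, k]$):

```python
import random

def follow_or_escape(sw_i, sw_a, t, t_max):
    """Equations (10)-(14): hunt the prey, or model its escape attempt."""
    k = 1.0 - t / t_max                               # Equation (13)
    C = (2.0 - 2.0 * t / t_max) * random.random()     # Equation (11)
    if random.random() < random.random():             # r2 < r3, Equation (14)
        # Equation (10): chase toward a randomly scaled random wasp
        return [x + C * abs(2.0 * random.random() * a - x)
                for x, a in zip(sw_i, sw_a)]
    # Equation (12): the prey flees; vc ~ N(0, 1) clipped to [-k, k]
    return [x * max(-k, min(k, random.gauss(0.0, 1.0))) for x in sw_i]
```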
(4)
Nesting behavior
In this stage, the female spider wasp pulls the spider into a pre-prepared nest. Since the female spider wasp has multiple nesting methods, two are considered in SWO. The first is to pull the spider towards the area most suitable for the female spider wasp, expressed as Equation (16):

$SW_i^{t+1} = SW^* + \cos(2 \pi l) \cdot (SW^* - SW_i^t)$  (16)

where $SW^*$ denotes the optimal individual in the population. The second is to randomly select a female spider wasp from the population and construct the nest based on that location, expressed using Equation (17):

$SW_i^{t+1} = SW_a^t + r_1 \cdot |\gamma| \cdot (SW_a^t - SW_i^t) + (1 - r_2) \cdot U \cdot (SW_b^t - SW_c^t)$  (17)

where $\gamma$ is a random number generated by a Lévy flight, and $U$ is a binary vector, denoted as Equation (18):

$U = \begin{cases} 1 & r_4 > r_5 \\ 0 & \text{otherwise} \end{cases}$  (18)

where $r_4$ and $r_5$ are two $D$-dimensional random vectors in the interval [0, 1]. The above two nesting methods are weighed according to Equation (19):

$SW_i^{t+1} = \begin{cases} \text{Equation (16)} & r_2 < r_3 \\ \text{Equation (17)} & \text{otherwise} \end{cases}$  (19)

Finally, the searching behavior, the following and escaping behaviors, and the nesting behavior are weighed through Equation (20):

$SW_i^{t+1} = \begin{cases} \text{Equation (15)} & i < N \cdot k \\ \text{Equation (19)} & \text{otherwise} \end{cases}$  (20)
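The two nesting methods of Equations (16)-(19) can be sketched in Python. This is a hedged illustration: the Lévy-flight number $\gamma$ is passed in as a plain argument rather than sampled, and the binary vector $U$ of Equation (18) is drawn per dimension; all names are our own:

```python
import math
import random

def nest_step(sw_i, sw_best, sw_a, sw_b, sw_c, levy_gamma):
    """Equations (16)-(19): nest near the best wasp, or near a random one."""
    l = random.uniform(-2.0, -1.0)
    if random.random() < random.random():             # r2 < r3, Equation (19)
        # Equation (16): drag the spider toward the optimal individual
        return [s + math.cos(2.0 * math.pi * l) * (s - x)
                for s, x in zip(sw_best, sw_i)]
    # Equation (17): nest at a random wasp, perturbed by a Levy step and
    # a binary gate U (Equation (18), drawn per dimension here)
    r1, r2 = random.random(), random.random()
    return [a + r1 * abs(levy_gamma) * (a - x)
              + (1.0 - r2) * (1.0 if random.random() > random.random() else 0.0) * (b - c)
            for a, x, b, c in zip(sw_a, sw_i, sw_b, sw_c)]
```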
(5)
Mating behavior
This stage mainly considers the mating behavior of the female spider wasp, generating a new spider wasp by Equation (21):

$SW_i^{t+1} = \mathrm{Crossover}(SW_i^t, SW_m^t, CR)$  (21)

where $\mathrm{Crossover}$ denotes a crossover operation between the solutions $SW_i^t$ and $SW_m^t$ with crossover probability $CR$; as confirmed in the literature [26], $CR$ takes the value 0.2. $SW_i^t$ and $SW_m^t$ denote the $i$-th female spider wasp and male spider wasp, respectively, where the male spider wasp is generated by Equation (22):

$SW_m^{t+1} = SW_i^t + e^{l} \cdot |\beta| \cdot \nu_1 + (1 - e^{l}) \cdot |\beta_1| \cdot \nu_2$  (22)

where $\beta$ and $\beta_1$ are random numbers generated according to a normal distribution, $e$ is the exponential constant, and $\nu_1$ and $\nu_2$ are generated using Equations (23) and (24), respectively:

$\nu_1 = \begin{cases} SW_a^t - SW_i^t & f(SW_a^t) < f(SW_i^t) \\ SW_i^t - SW_a^t & \text{otherwise} \end{cases}$  (23)

$\nu_2 = \begin{cases} SW_b^t - SW_c^t & f(SW_b^t) < f(SW_c^t) \\ SW_c^t - SW_b^t & \text{otherwise} \end{cases}$  (24)

where $f(X)$ is the value of the objective function corresponding to individual $X$. Finally, the hunting and nesting behaviors and the mating behavior are weighed by the trade-off rate ($TR$); as confirmed in the literature [26], $TR$ takes the value 0.3.
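The crossover of Equation (21) can be sketched as a uniform crossover with rate $CR$ (one common reading of the operator; the source does not spell out the exact crossover form, so this is an assumption):

```python
import random

def mate(sw_i, sw_m, cr=0.2):
    """Equation (21): uniform crossover between female i and male m.

    Each dimension is inherited from the male with probability CR
    (CR = 0.2 per the literature [26] cited in the text).
    """
    return [m if random.random() < cr else f for f, m in zip(sw_i, sw_m)]
```

With `cr=0.0` the female is returned unchanged; with `cr=1.0` the offspring is a copy of the male.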
(6)
Population reduction and memory optimization
At the end of the mating behavior, the number of female spider wasps in the population is reduced, allowing for better results and faster convergence in the optimization process. This process is expressed as Equation (25):

$N = N_{min} + (N - N_{min}) \cdot k$  (25)

where $N_{min}$ denotes the minimum population size used during the optimization process to avoid the algorithm falling into local optimal traps. Finally, the updated individuals are compared with those of the previous generation, retained using an elite strategy, and the globally optimal individual is stored. The pseudo-code of the SWO is represented in Algorithm 1.
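As an illustration, the linear population-reduction schedule of Equation (25), with $k = 1 - t/t_{max}$ from Equation (13), can be written as a one-line helper (the function name is our own):

```python
def reduce_population(n0, n_min, t, t_max):
    """Equation (25): shrink the population linearly toward N_min.

    k = 1 - t/t_max decays from 1 to 0 over the run (Equation (13)),
    so N moves from n0 at t = 0 down to n_min at t = t_max.
    """
    k = 1.0 - t / t_max
    return int(n_min + (n0 - n_min) * k)
```

For example, with `n0=150`, `n_min=20`, the population holds 150 wasps at the first iteration and 20 at the last.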
Algorithm 1: pseudo-code of SWO
Input: $N$, $N_{min}$, $D$, $CR$, $TR$, $t_{max}$
Output: $SW^*$
1.  Initialize the population containing $N$ female spider wasps using Equation (3), $SW_i^t$ $(i = 1, 2, \ldots, N)$
2.  Calculate the value of the objective function of $SW_i^t$ while storing $SW^*$
3.  $t = 1$
4.  while ($t < t_{max}$)
5.   if rand $< TR$
6.    for $i$ = 1: $N$
7.     Update the position of $SW_i^t$ using Equation (20) to $SW_i^{t+1}$
8.     Calculate the value of the objective function of $SW_i^{t+1}$
9.    end for
10.   $t = t + 1$
11.  else
12.   for $i$ = 1: $N$
13.    Update the position of $SW_i^t$ using Equation (21) to $SW_i^{t+1}$
14.    Calculate the value of the objective function of $SW_i^{t+1}$
15.   end for
16.   $t = t + 1$
17.  end if
18.  Retain individuals using the elite strategy
19.  Update $N$ using Equation (25)
20. end while

3. Improved Spider-Wasp Optimizer (LMBSWO)

Due to shortcomings of the SWO such as low population diversity and weak exploitation capability, its global optimization performance is insufficient for solving the OAPP problem for mobile robots: it easily falls into local optimal paths and cannot generate a collision-free shortest path from the starting point to the end point in limited time. To address these issues, first, a learning strategy is proposed to remedy the weak global search ability of the SWO; it combines dynamic learning factors with the diversity of information gaps between different individuals to enhance population diversity during the solving process, which in turn strengthens the global search performance of the SWO on the OAPP. Second, the dual-median-point guidance strategy is introduced, in which individuals learn from a collection of individuals better than themselves; this enhances exploitation performance while preserving group diversity, yielding a better balance between the global exploration and exploitation phases. Finally, because guidance by the single best individual in the original SWO greatly increases the risk of falling into a locally optimal path, a better guidance strategy is introduced, in which individuals learn from members of a better set to improve the effective exploitation of the algorithm. By introducing the learning strategy, the dual-median-point guidance strategy, and the better guidance strategy into SWO, an improved SWO (LMBSWO) is proposed, which exhibits stronger mobile robot OAPP performance than the SWO. The LMBSWO is described in detail in the following sections.

3.1. Learning Strategy

The original SWO easily falls into local optimal paths when solving the OAPP for mobile robots, mainly because its population diversity is not high, making it difficult to escape local optimal paths. Therefore, it is necessary to enhance the population diversity of the SWO. The literature [27] points out that learning from the gaps between different individuals helps to improve population diversity and enhance exploration capability. Based on this inspiration, the learning strategy is introduced into the searching stage of the SWO to enhance its exploration capability in this stage. In this section, we consider four sets of gaps between individuals: the gap between the best individual and a better individual ($DE_1$), the gap between the best individual and a worse individual ($DE_2$), the gap between a better individual and a worse individual ($DE_3$), and the gap between two random individuals ($DE_4$). They are expressed as Equation (26):

$DE_1 = SW^* - SW_{better}^t$, $DE_2 = SW^* - SW_{worse}^t$, $DE_3 = SW_{better}^t - SW_{worse}^t$, $DE_4 = SW_d^t - SW_e^t$  (26)

where, referring to the literature [1], the better individuals are defined as the first 10 individuals in the ascending-ordered population that differ from the optimal one, and the worse individuals as the last 10 individuals in the ascending-ordered population. Subsequently, the degree of learnability of each set of gaps is calculated using Equation (27):

$EA_k = \frac{DE_k}{\sum_{k=1}^{4} DE_k}, \quad (k = 1, 2, 3, 4)$  (27)

where $EA_k$ denotes the degree of learnability of each gap. Meanwhile, the learning ability of each individual is calculated through Equation (28):

$LA_i = \frac{f(SW_i^t)}{f_{max}}, \quad (i = 1, 2, \ldots, N)$  (28)

where $f_{max}$ denotes the value of the objective function corresponding to the worst individual in the population, and $LA_i$ denotes the learning ability of the $i$-th individual. Subsequently, the amount of information acquired by the $i$-th individual from these four sets of gaps is calculated by Equation (29):

$NI_{i,k} = LA_i \cdot EA_k \cdot DE_k, \quad (k = 1, 2, 3, 4; \ i = 1, 2, \ldots, N)$  (29)

Finally, the $i$-th individual learns the acquired information through Equation (30):

$SW_i^{t+1} = SW_i^t + NI_{i,1} + NI_{i,2} + NI_{i,3} + NI_{i,4}$  (30)
The learning strategy proposed in this section can effectively enhance the algorithm population diversity and thus improve the performance of the algorithm to solve the OAPP problem for mobile robots.
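The learning strategy of Equations (26)-(30) can be sketched in Python. Two interpretive choices are ours and should be read as assumptions: one representative is drawn from each of the "better"/"worse" groups (the text uses groups of 10), and the normalization in Equation (27) is taken per dimension over the absolute gaps:

```python
import random

def learning_step(pop, fitness):
    """Equations (26)-(30): each wasp learns from four inter-individual gaps."""
    order = sorted(range(len(pop)), key=lambda i: fitness[i])  # ascending = better first
    best = pop[order[0]]
    better = pop[order[1]]          # a better individual distinct from the best
    worse = pop[order[-1]]
    d, e = random.sample(range(len(pop)), 2)
    f_max = max(fitness)
    new_pop = []
    for i, sw in enumerate(pop):
        la = fitness[i] / f_max                       # Equation (28)
        new = []
        for j in range(len(sw)):
            gaps = [best[j] - better[j],              # DE1
                    best[j] - worse[j],               # DE2
                    better[j] - worse[j],             # DE3
                    pop[d][j] - pop[e][j]]            # DE4  (Equation (26))
            total = sum(abs(g) for g in gaps) or 1.0  # Equation (27) normalizer
            # Equations (29)-(30): add the learnability-weighted information
            new.append(sw[j] + sum(la * (abs(g) / total) * g for g in gaps))
        new_pop.append(new)
    return new_pop
```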

3.2. Dual-Median-Point Guidance Strategy

The mobile robot OAPP problem requires the algorithm to produce a collision-free travel route with the shortest path, which demands a strong exploitation capability. However, strong exploitation paired with insufficient exploration can trap the search in a local optimal path. A good algorithm should therefore improve exploitation as much as possible while maintaining high exploration performance. The weak exploitation performance of the original SWO must be improved for it to solve the OAPP problem for mobile robots effectively, while its exploration performance is preserved. The literature [28] points out that median-point learning helps to enhance exploitation while still guaranteeing some exploration. Inspired by this, a dual-median-point guidance strategy is proposed in this section to minimize the travel path of the mobile robot, taking the characteristics of the problem into account. The strategy is expressed as Equation (31):

$SW_i^{t+1} = SW_i^t + C \cdot r \cdot \varepsilon$  (31)

where the coefficient $C$ decreases from 2 to 0 as the number of iterations increases, so that the guidance is strong in the early iterations and gradually weakens in the late iterations to ensure that the algorithm converges accurately. $\varepsilon$ denotes the guidance distance and is expressed as Equation (32):

$\varepsilon = SW_{dmed}^t - SW_i^t$  (32)

where $SW_{dmed}^t$ denotes the dual-median point corresponding to the $i$-th individual at the $t$-th iteration. The selection of the dual-median point is shown in Figure 1, where the darker color represents the larger value of the objective function. Each individual corresponds to its own dual-median point and is always guided by individuals better than itself, which helps to enhance the exploitation of the algorithm. At the same time, since different individuals have different dual-median points, the exploration ability of the algorithm is also preserved during the guidance process. Therefore, introducing the dual-median-point guidance strategy into the following and escaping stages of the original SWO guarantees both convergence speed and accuracy in solving the OAPP problem for mobile robots.
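Equations (31)-(32) can be sketched as follows. The construction of the dual-median point is not fully specified in the text, so this sketch adopts one plausible reading: each wasp is guided toward the component-wise median of the wasps ranked ahead of it in the fitness-sorted population (the best wasp is guided by itself):

```python
import random

def dual_median_step(pop, fitness, t, t_max):
    """Equations (31)-(32): guide each wasp toward its dual-median point."""
    order = sorted(range(len(pop)), key=lambda i: fitness[i])  # best first
    rank = {idx: r for r, idx in enumerate(order)}
    C = 2.0 * (1.0 - t / t_max)            # decreases from 2 to 0 over the run
    new_pop = []
    for i, sw in enumerate(pop):
        # wasps ranked ahead of i; the best wasp falls back to itself
        ahead = [pop[j] for j in order[: max(1, rank[i])]]
        med = [sorted(col)[len(col) // 2] for col in zip(*ahead)]
        # Equation (31): SW_i + C * r * (SW_dmed - SW_i)
        new_pop.append([x + C * random.random() * (m - x) for x, m in zip(sw, med)])
    return new_pop
```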

3.3. Better Guidance Strategy

In the nesting behavior of the original SWO, Equation (16) guides the solution through the global optimal individual, which helps to improve the speed of convergence toward the global optimum. However, there is a drawback: if the current global optimal solution lies in a local optimal trap, a large number of individuals will be guided into that trap, which increases the risk of falling into a local optimal path when the SWO solves the OAPP problem for mobile robots. Therefore, to improve this behavior, we propose the better guidance strategy, expressed as Equation (33):

$SW_i^{t+1} = SW_{better}^t + \cos(2 \pi l) \cdot (SW_{better}^t - SW_i^t)$  (33)

where $SW_{better}^t$ denotes a better individual at the $t$-th iteration, taken here from the top 10 individuals in the population after sorting by the value of the objective function. Guidance through the better guidance strategy maintains the convergence speed of the algorithm while improving its ability to jump out of local traps. The simulation process of the better guidance strategy is shown in Figure 2. The figure shows that when an individual is guided by the current global optimal individual, it may fall into a local optimal trap, causing the algorithm to stagnate. If, however, guidance is performed through the better individuals, diversity is introduced into the guidance mechanism; although some of the better individuals may also lie in local optimal traps, the probability is greatly reduced. This analysis shows that the better guidance strategy improves the algorithm's ability to jump out of local optimal traps when solving the path planning problem and enhances its global optimum-seeking ability.
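Equation (33) differs from Equation (16) only in replacing the single best individual with a member of the better set; it can be sketched as follows (an illustrative rendering; drawing the guide uniformly at random from the top-10 set is our assumption):

```python
import math
import random

def better_guidance_step(sw_i, better_set):
    """Equation (33): guide wasp i by a member of the top-10 'better' set
    instead of the single global best individual."""
    sw_better = random.choice(better_set)        # assumed: uniform draw from the set
    l = random.uniform(-2.0, -1.0)
    return [b + math.cos(2.0 * math.pi * l) * (b - x)
            for b, x in zip(sw_better, sw_i)]
```

Because each call may pick a different guide, the guidance mechanism keeps some diversity even when one of the better individuals sits in a local optimal trap.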

3.4. Framework of the LMBSWO

In this section, firstly, the population diversity of the algorithm is enhanced by introducing the learning strategy in the searching stage of the SWO, which strengthens its global optimization-seeking ability. Secondly, the dual-median-point guidance strategy is introduced in the following and escaping stages of the SWO, which maintains the exploration ability of the algorithm while enhancing its exploitation ability and improving its convergence accuracy. Finally, the better guidance strategy is added to the nesting behavior of the SWO to enhance the algorithm's ability to escape local optimal traps, thereby improving its performance in solving the OAPP problem for mobile robots. The pseudo-code of the LMBSWO is shown in Algorithm 2, and the flowchart is shown in Figure 3.
Algorithm 2: Pseudo-code of the LMBSWO
Input: N, N_min, D, CR, TR, t_max
Output: SW*
1.  Initialize a population of N female spider wasps SW_i^t (i = 1, 2, …, N) using Equation (3)
2.  Calculate the objective function value of each SW_i^t and store the best solution SW*
3.  t = 1
4.  while (t < t_max)
5.    if rand < TR
6.      for i = 1 : N
7.        if i < N · k
8.          if p < k
9.            if rand < rand
10.             Update the position of SW_i^t using Equation (4) to obtain SW_i^{t+1}
11.           else
12.             Update the position of SW_i^t using Equation (30) to obtain SW_i^{t+1}
13.           end if
14.         else
15.           Update the position of SW_i^t using Equation (31) to obtain SW_i^{t+1}
16.         end if
17.       else
18.         if rand < rand
19.           Update the position of SW_i^t using Equation (33) to obtain SW_i^{t+1}
20.         else
21.           Update the position of SW_i^t using Equation (17) to obtain SW_i^{t+1}
22.         end if
23.       end if
24.       Calculate the objective function value of SW_i^{t+1}
25.     end for
26.     t = t + 1
27.   else
28.     for i = 1 : N
29.       Update the position of SW_i^t using Equation (21) to obtain SW_i^{t+1}
30.       Calculate the objective function value of SW_i^{t+1}
31.     end for
32.     t = t + 1
33.   end if
34.   Retain individuals using the elite strategy
35.   Update N using Equation (25)
36. end while
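To make the control flow of Algorithm 2 concrete, the following Python sketch mirrors its structure: the TR branch, per-individual updates, greedy elite retention, and the linear population reduction toward N_min. The position-update operators are placeholders standing in for Equations (4), (17), (21), (25), (30), (31), and (33), which are defined earlier in the paper, so this is a structural sketch rather than a faithful implementation.

```python
import numpy as np

def lmbswo_sketch(obj, lb, ub, n=30, n_min=10, tr=0.3, t_max=100, seed=0):
    """Structural sketch of the LMBSWO main loop in Algorithm 2.

    The branch layout (TR split, hunting vs. mating/nesting behaviours,
    elite retention, population shrinkage toward n_min) follows the
    pseudo-code; simple guided random moves replace the paper's update
    equations, which are not restated here.
    """
    rng = np.random.default_rng(seed)
    dim = len(lb)
    pop = rng.uniform(lb, ub, (n, dim))
    fit = np.array([obj(x) for x in pop])
    best_i = int(fit.argmin())
    best, best_f = pop[best_i].copy(), float(fit[best_i])
    for t in range(1, t_max + 1):
        k = 1.0 - t / t_max                      # decreasing control parameter
        if rng.random() < tr:                    # hunting/following/escaping branch
            for i in range(len(pop)):
                guide = best if rng.random() < k else pop[rng.integers(len(pop))]
                cand = pop[i] + k * rng.standard_normal(dim) * (guide - pop[i])
                cand = np.clip(cand, lb, ub)
                cf = obj(cand)
                if cf < fit[i]:                  # elite (greedy) retention
                    pop[i], fit[i] = cand, cf
        else:                                    # mating/nesting branch
            for i in range(len(pop)):
                cand = best + 0.5 * rng.standard_normal(dim) * (pop[i] - best)
                cand = np.clip(cand, lb, ub)
                cf = obj(cand)
                if cf < fit[i]:
                    pop[i], fit[i] = cand, cf
        if fit.min() < best_f:                   # update the global best SW*
            best_i = int(fit.argmin())
            best, best_f = pop[best_i].copy(), float(fit[best_i])
        # shrink the population linearly toward n_min (Equation (25) analogue)
        new_n = max(n_min, int(round(n - (n - n_min) * t / t_max)))
        keep = np.argsort(fit)[:new_n]
        pop, fit = pop[keep], fit[keep]
    return best, best_f
```

On a convex test function such as the sphere, this skeleton converges steadily, which is enough to verify that the loop logic is wired the way the pseudo-code describes.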

4. Results and Discussion

In this section, we evaluate the performance of the proposed LMBSWO in solving the OAPP for mobile robots. To analyze the LMBSWO objectively and comprehensively, we used five map environments for testing, as shown in Table 1. Meanwhile, the solution performance of the LMBSWO was compared with that of eight recent novel algorithms and one classical algorithm; specific information on the algorithms is given in Table 2. By analyzing population diversity, the exploration/exploitation ratio, path length, stability, convergence, nonparametric tests, runtime, and comprehensive indexes, the performance of the LMBSWO in solving the OAPP for mobile robots was evaluated comprehensively and objectively.
To ensure the fairness of the experiments, the maximum number of iterations was set to 1000 and the population size to 150, and all code was implemented in MATLAB R2021b. The operating system was Windows 11, running on an AMD Ryzen 5 3600 6-core processor at 3.60 GHz. Meanwhile, to reduce the effect of chance on the experimental results, each experiment was run independently 30 times, and the results of the 30 runs were statistically analyzed.

4.1. Objective Function

In this section, we model the objective function of the OAPP for mobile robots. The main requirement is to generate a collision-free path from the start point to the end point. The objective function is therefore modeled as Equation (34):
f(SW) = PL + PL · γ · D
where SW denotes the solution, PL denotes the path length, γ denotes the scale factor, and D denotes the average distance from the interpolated points on the path to all obstacles. Following the literature [2], γ is set to 100 and the number of interpolation points to five.
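Under the assumption that each obstacle can be represented by a point centre, Equation (34) can be evaluated as below; the interpolation scheme and the centre-distance convention are illustrative choices, since the text only fixes γ and the number of interpolation points.

```python
import numpy as np

def path_objective(path_pts, obstacles, gamma=100.0, n_interp=5):
    """Evaluate f(SW) = PL + PL * gamma * D from Equation (34).

    path_pts:  (m, 2) array of path waypoints.
    obstacles: (o, 2) array of obstacle centres (an assumption; the paper
               does not specify the obstacle representation here).
    PL is the polyline length; D is the average distance from n_interp
    points interpolated along the path to all obstacles.
    """
    seg = np.diff(path_pts, axis=0)
    pl = float(np.sum(np.linalg.norm(seg, axis=1)))        # path length PL
    # sample n_interp points uniformly along the waypoint index range
    ts = np.linspace(0, len(path_pts) - 1, n_interp)
    idx = np.arange(len(path_pts))
    pts = np.stack([np.interp(ts, idx, path_pts[:, 0]),
                    np.interp(ts, idx, path_pts[:, 1])], axis=1)
    # average point-to-obstacle distance D over all pairs
    d = float(np.mean(np.linalg.norm(pts[:, None, :] - obstacles[None, :, :],
                                     axis=2)))
    return pl + pl * gamma * d
```

Setting `gamma=0` recovers the pure path length, which is a convenient sanity check when wiring the objective into an optimizer.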

4.2. Population Diversity Analysis

In this section, we mainly analyze the population diversity of the LMBSWO when solving the OAPP for mobile robots. Higher population diversity can effectively prevent the algorithm from falling into local stagnation, enhancing its global optimization ability and reducing the probability of falling into locally optimal paths. The results of the population diversity experiment are shown in Figure 4, where the X-axis represents the number of iterations and the Y-axis represents the population diversity of the algorithm.
From the figure, it can be seen that the population diversity of the LMBSWO is always higher than that of the SWO when executing map 1, which means that the introduction of the learning strategy and the better guidance strategy improves the population diversity of the algorithm and, in turn, its global exploration ability. When executing map 2, the population diversity of the LMBSWO is slightly lower than that of the SWO in the first 200 iterations but is always higher thereafter, which indicates that, as the map environment becomes more complex, the two strategies proposed in this paper are still able to effectively enhance population diversity and ensure global search performance. During the execution of map 3, the population diversity of the LMBSWO is lower than that of the SWO in the first 180 iterations. This is because the early randomization in the original SWO is too strong, leaving exploration and exploitation poorly balanced, whereas the better guidance strategy in the LMBSWO achieves a better balance between the two phases and helps the algorithm carry out the global path search. When executing map 4, the population diversity of the LMBSWO is similar to that of the SWO early in the iterations, indicating similar global search ability in that period. However, after 700 iterations the population diversity of the LMBSWO is higher than that of the SWO, indicating a stronger ability to escape locally optimal paths late in the search.
According to the results for map 5, as the map environment becomes more complex, the population diversity of the LMBSWO is weaker than that of the SWO early in the iterations, but after 300 iterations it is consistently higher, suggesting that the LMBSWO possesses a stronger ability to escape locally optimal paths later in the search. From the above analysis, we can conclude that the introduction of the learning strategy and the better guidance strategy enhances the population diversity of the algorithm, which in turn strengthens its global optimization ability, reduces the probability of falling into locally optimal paths, and improves its performance in solving the OAPP for mobile robots.
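A common way to measure the population diversity plotted in Figure 4 is the mean distance of the individuals to the population centroid. Whether the paper uses exactly this definition is an assumption, but the qualitative reading is the same: a higher value means a more spread-out population.

```python
import numpy as np

def population_diversity(pop):
    """Average Euclidean distance of individuals to the population centroid.

    pop: (N, dim) array of candidate solutions. Returns 0.0 when all
    individuals coincide (fully collapsed population).
    """
    centroid = pop.mean(axis=0)
    return float(np.mean(np.linalg.norm(pop - centroid, axis=1)))
```

Logging this value once per iteration produces exactly the kind of diversity-versus-iteration curve discussed above.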

4.3. Exploration/Exploitation Analysis

In this section, we focus on analyzing the exploration/exploitation ratio of the LMBSWO when solving the OAPP for mobile robots. The exploration ratio characterizes the algorithm's ability to explore potentially optimal regions of the search space: the higher the ratio, the better the global exploration ability. The exploitation ratio characterizes the algorithm's ability to dig deeper within a potentially optimal region: the higher the ratio, the better the local search ability. The two phases are complementary. Because of the high complexity of the OAPP for mobile robots, we prioritize the exploration ability of the algorithm while improving its exploitation ability as much as possible to achieve the best solution performance. Figure 5 shows the exploration and exploitation ratios of the LMBSWO when solving the five maps, with the X-axis indicating the number of iterations and the Y-axis indicating the ratio.
When executing map 1, the exploration rate during the run is concentrated at about 60%, mainly because the introduction of the learning strategy and the better guidance strategy gives the algorithm strong global exploration capability; the exploitation rate is concentrated at about 40%, mainly because the dual-median-point guidance strategy guarantees the algorithm's exploitation ability and improves its convergence speed and accuracy. When executing map 2, as the map complexity increases, the risk of the algorithm falling into a locally optimal path rises, so a higher exploration rate is needed; as can be seen in the figure, the exploration rate is concentrated at about 70%, which effectively reduces the risk of falling into a local optimum in the more complex environment. When executing map 3, the exploration rate is concentrated at about 75%: owing to the learning strategy and the better guidance strategy, the LMBSWO maintains a high exploration rate when solving the OAPP for complex maps, which reduces the risk of locally optimal paths and improves its solution performance. When executing map 4, the exploration rate is concentrated at about 80%; as the complexity of the map environment increases, this high exploration rate locates potentially optimal regions of the solution space more effectively, improving the global optimization performance and the ability to escape locally optimal paths. When executing map 5, the exploration rate is concentrated at about 70%, which indicates that, even as the map environment becomes increasingly complex, the LMBSWO proposed in this paper can still solve the OAPP reliably and effectively.
By analyzing the above results, we can conclude that the three strategies introduced in this paper enable the LMBSWO to solve the mobile robot OAPP efficiently and greatly reduce the probability of falling into locally optimal paths.
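The exploration/exploitation percentages plotted in Figure 5 are commonly computed from the population diversity history; the ratio formulas below are the widely used ones (XPL% = Div(t)/Div_max · 100, XPT% = |Div(t) − Div_max|/Div_max · 100), assumed here since the paper does not restate them.

```python
import numpy as np

def xpl_xpt(div_history):
    """Exploration (XPL%) and exploitation (XPT%) percentages.

    div_history: sequence of population-diversity values, one per
    iteration. The two percentages sum to 100 at every iteration.
    """
    div = np.asarray(div_history, dtype=float)
    dmax = div.max()
    xpl = div / dmax * 100.0                  # share of maximal spread
    xpt = np.abs(div - dmax) / dmax * 100.0   # complement: concentration
    return xpl, xpt
```

Under this metric, the "exploration rate concentrated at about 60-80%" statements above correspond to diversity staying at a large fraction of its peak value for most of the run.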

4.4. Numerical Analysis

In this section, the numerical results of the LMBSWO for the different maps are statistically analyzed to evaluate its performance; the evaluation metrics are the best value, the mean value, the worst value, and the standard deviation. To evaluate the LMBSWO comprehensively in solving the OAPP for mobile robots, its results are compared with those of eight recent novel algorithms (EDO, AHA, ARO, DO, INFO, KOA, NGO, SWO) and one classical algorithm (PSO). The experimental results are shown in Table 3, where Mean, Best, Worst, and Std denote the average path length, the shortest path length, the longest path length, and the standard deviation, respectively, over the 30 runs; bold indicates the best value for each indicator, as in subsequent tables.
From the table, it can be seen that all the algorithms can solve the OAPP for mobile robots, but the LMBSWO performs better. In map 1, the LMBSWO ranks first in the mean, with a performance lead of about 4% over AHA, ARO, DO, and NGO, about 3% over EDO, INFO, and PSO, and about 0.4% over the stronger KOA and SWO. This shows that the global optimization performance of the LMBSWO is better than that of the state-of-the-art algorithms, mainly owing to the introduction of the learning strategy and the better guidance strategy. In terms of the best value, every algorithm obtains an approximation of the optimal solution, but the LMBSWO performs best, which shows that the dual-median-point guidance strategy helps to enhance the algorithm's exploitation capability and optimization accuracy. Meanwhile, Figure 6 shows the optimal paths obtained by the algorithms; the optimal paths are tangent to the obstacles while remaining collision-free, demonstrating that the LMBSWO can effectively generate collision-free paths in map 1. The LMBSWO also ranks first in the worst value, indicating better fault tolerance and greater reliability and practicality in real-world environments. Beyond optimization accuracy, solution stability is also important: compared with the existing state-of-the-art algorithms, the LMBSWO ranks first in standard deviation, achieving better solution stability. To visualize the stability of each algorithm, Figure 7 shows box plots of the solution runs. From the figure, it is evident that the LMBSWO is concentrated around the optimal solution over the 30 runs, demonstrating stronger solution stability than the other algorithms. This also confirms that the LMBSWO has higher reliability in real-world environments.
In map 2, as the environment becomes progressively more complex, the LMBSWO still ranks first in the mean value, with path lengths shrinking by more than 0.05 compared with DO, INFO, PSO, and SWO and by more than 0.01 compared with the stronger EDO, AHA, ARO, KOA, and NGO. This encouraging result indicates that the global optimization performance of the LMBSWO remains strong as the environment grows more complex, which is attributed to the learning strategy and the better guidance strategy enhancing the algorithm's exploration capability. In terms of the best value, each algorithm obtains an approximation of the optimal path, but the LMBSWO obtains the shortest one thanks to its good balance between the exploration and exploitation phases. As can be seen in Figure 6, the LMBSWO obtains a collision-free shortest path for map 2, demonstrating strong solving capability. The LMBSWO also ranks first in the worst value, showing that it remains highly fault-tolerant as the environment becomes more complex and thus possesses high practical utility in real-world environments. Meanwhile, the LMBSWO achieves first place in standard deviation, at 0.0013, and Figure 7 likewise shows stronger solution stability, demonstrating that the LMBSWO retains strong solution reliability as the complexity of the environment increases. These advantages are mainly attributable to the three strategies introduced in this paper, which improve the solution performance in different respects and make the LMBSWO a reliable and practical path-planning method.
In map 3, the LMBSWO exhibits better global optimization performance owing to the introduced search strategies: its path is shortened by more than 1.2 compared with AHA, DO, NGO, and PSO, and by more than 0.1 compared with EDO, ARO, INFO, KOA, and SWO. This shows that the LMBSWO maintains a high level of solving performance in complex map environments. Almost all the algorithms approximate the optimal path in terms of the best value, but the LMBSWO still ranks first by a clear margin, demonstrating stronger exploitation capability. From Figure 6, it can be seen that in map 3 the LMBSWO generates an optimal path that avoids the obstacles cleanly, demonstrating its strong path-planning ability. The LMBSWO ranks second, after the KOA, in the worst value, because its exploration/exploitation balance in the late iterations is not ideal, giving it lower fault tolerance than the KOA in map 3. This also shows that there is still room for improvement in the LMBSWO. Meanwhile, the LMBSWO ranks third in standard deviation and, combined with the box plot for map 3 in Figure 7, we find that it produces more outliers, again because of the imperfect exploration/exploitation balance in the later iterations. This reinforces the point that the LMBSWO needs further improvement for path planning in certain specific environments.
From the table, we can see that the numbers of obstacles in map 4 and map 5 are 30 and 45, respectively, reproducing the high complexity of real environments. On average, the path length of the LMBSWO leads the other algorithms by more than 0.1 in most cases, which shows that the learning strategy and the better guidance strategy proposed in this paper remain effective in enhancing the algorithm's global optimization ability in complex map environments. In terms of the best value, the LMBSWO ranks first in both map 4 and map 5, showing stronger exploitation capability than the other algorithms. Figure 6 likewise shows that the LMBSWO generates collision-free, obstacle-tangent shortest paths for both map 4 and map 5, confirming the stronger exploitation capability conferred by the dual-median-point guidance strategy. Meanwhile, the LMBSWO still ranks first in the worst value in both of these highly complex environments, demonstrating stronger fault tolerance. These experimental results confirm that the three strategies introduced in this paper enhance both the exploration and the exploitation performance of the algorithm, thereby improving its performance in solving the OAPP for mobile robots.

4.5. Convergence Analysis

In this section, the convergence behavior of the algorithms when solving the OAPP for mobile robots in different map environments is analyzed to reflect the real convergence characteristics of the LMBSWO. Figure 8 shows the experimental results, where the X-axis represents the number of iterations and the Y-axis represents the fitness value (path length). From the figure, it can be seen that the fitness values of all the algorithms decrease as the number of iterations increases. The other algorithms converge faster than the LMBSWO, mainly because the LMBSWO adopts a stronger exploration mechanism, which costs some convergence speed. However, after about 400 iterations, the other algorithms fall into locally optimal paths and their fitness curves flatten into a horizontal line, because they cannot effectively explore beyond the local optimal region; this phenomenon becomes more serious as the complexity of the map environment increases. In contrast, the LMBSWO keeps optimizing in most of the maps, i.e., its fitness value continues to improve as the iterations proceed, which indicates that, as the maps become progressively more complex, the LMBSWO can effectively cope with the challenge of locally optimal paths. This convergence behavior is mainly due to the learning strategy and the better guidance strategy, which greatly improve the global search performance and the ability to avoid locally optimal paths. Thus, despite its slower convergence, the LMBSWO continues to shorten the path when solving the OAPP for mobile robots, showing stronger global optimization ability and optimization stability than the other state-of-the-art algorithms.

4.6. Friedman Test Analysis

Although the mean results were analyzed in Section 4.4, the mean alone ignores the variance and distribution of performance. Therefore, this section considers the variation across the 30 independent experiments and applies the Friedman test with a significance level of 0.05 to determine whether there are significant differences between the algorithms and to further evaluate the LMBSWO's performance in solving the OAPP for mobile robots. The experimental results are shown in Table 4. From Table 4, it can be seen that in map 1, map 2, map 3, and map 5, the LMBSWO ranks first, with average rankings of 1.65, 1.10, 3.05, and 1.80, respectively, showing better global optimization performance than the other algorithms. These results are attributed to the introduction of the learning strategy, the dual-median-point guidance strategy, and the better guidance strategy, which improve the algorithm's ability to escape locally optimal paths and, in turn, its solution performance. In map 4, the LMBSWO ranks third, behind the EDO and the KOA, which shows that in some specific environments the LMBSWO may not outperform other algorithms and that there is still room for improvement. Nevertheless, taken as a whole, the LMBSWO shows the strongest comprehensive global search performance, with an average ranking of 2.29 over the five maps, which confirms that the three strategies introduced in this paper help the algorithm escape locally optimal paths when solving the OAPP for mobile robots.
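The per-map average rankings reported in Table 4 come from ranking the algorithms within each of the 30 runs and averaging the ranks over runs; this can be reproduced with a short helper (ties are resolved by mean ranks, as is standard for the Friedman test).

```python
import numpy as np

def friedman_average_ranks(results):
    """Average Friedman ranks from a (runs x algorithms) result matrix.

    Each row (one independent run) is ranked in ascending order (rank 1 =
    shortest path); tied values receive the mean of their ranks, and the
    ranks are then averaged over all runs.
    """
    results = np.asarray(results, dtype=float)
    ranks = np.empty_like(results)
    for r, row in enumerate(results):
        order = np.argsort(row)
        rank = np.empty(len(row))
        rank[order] = np.arange(1, len(row) + 1)
        for v in np.unique(row):        # mean ranks over tied values
            mask = row == v
            rank[mask] = rank[mask].mean()
        ranks[r] = rank
    return ranks.mean(axis=0)
```

Feeding the 30x10 path-length matrix of one map into this function yields the per-map average ranking column of Table 4; `scipy.stats.friedmanchisquare` can then supply the accompanying p-value.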

4.7. Runtime Analysis

When an algorithm solves the OAPP for mobile robots, the runtime is especially important in addition to the solution accuracy. Table 5 shows the average runtime of each algorithm for the different maps. From the table, it can be seen that the runtime increases as the map environment becomes more complex. From the last row of the table, the average runtime ranking of the LMBSWO is 4.60, placing it fifth overall, behind the AHA, ARO, KOA, and SWO. Since the structures of the AHA, ARO, and KOA are simpler than that of the LMBSWO proposed in this paper, their runtimes are shorter. Likewise, because the LMBSWO adds three improvement strategies to the SWO, its computation time is longer than that of the SWO. From a comprehensive point of view, however, the average runtime ranking of the LMBSWO is moderate, and the gap to the AHA, ARO, KOA, and SWO is not large, so we consider its runtime acceptable. This preserves the practicality of the algorithm when solving the OAPP for mobile robots.

4.8. Synthesized Analysis

In this section, we jointly analyze the two most important metrics for the OAPP, path length and runtime, to weigh the reliability and practicality of each algorithm. Figure 9 shows a stacked plot of each algorithm's average path-length ranking and average runtime ranking over the five map environments, where larger values on the Y-axis indicate poorer comprehensive performance. As can be seen from the figure, the LMBSWO has the lowest column height, which indicates that, compared with the other algorithms, the LMBSWO-based OAPP method strikes the best balance between path length and runtime. The KOA and ARO rank second and third, respectively: although they beat the LMBSWO on the runtime metric, they are much weaker on path length, so the overall performance of the LMBSWO is better. Meanwhile, owing to the introduction of the learning strategy, the runtime of the LMBSWO is higher than that of the SWO, but the LMBSWO obtains better robot paths, and sacrificing some runtime for better paths is a reasonable trade. Therefore, we believe that the LMBSWO proposed in this paper has the strongest comprehensive solution performance and consider it a promising OAPP method for mobile robots.

5. Conclusions and Prospects

In this paper, to better solve the OAPP for mobile robots, we proposed the LMBSWO, built on the SWO by combining the learning strategy, the dual-median-point guidance strategy, and the better guidance strategy. From experiments in five map environments, we conclude that, compared with recent novel algorithms, the LMBSWO has stronger global optimization performance when solving the OAPP for mobile robots and can effectively avoid falling into locally optimal paths. We therefore believe that the LMBSWO is a promising path-planning method. However, in some specific map environments the solution performance of the LMBSWO is weaker than that of other algorithms, indicating that there is still room for further optimization. Meanwhile, the improvement strategies introduced into the SWO raise the convergence accuracy but also increase the runtime, a problem we must consider moving forward.
In summary, our future work will focus on the following three areas: 1. Further improve the LMBSWO for specific map environments to enhance OAPP performance for mobile robots. 2. Develop a lighter exploration strategy to reduce the algorithm's runtime and enhance real-time responsiveness. 3. Since only five map environments were considered in this paper, add more map environments to the experiments in future work to evaluate the performance of the algorithm more comprehensively.

Author Contributions

Conceptualization, Y.G. and Z.L.; methodology, Y.G.; software, Y.G. and Z.L.; validation, H.W. and Y.H.; formal analysis, Y.G. and H.W.; investigation, H.J.; resources, X.J.; data curation, Y.G. and D.C.; writing—original draft preparation, Y.G.; writing—review and editing, Y.G. and D.C.; visualization, Z.L. and Y.G.; supervision, D.C.; project administration, D.C. and Z.L.; funding acquisition, D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62202221.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Acknowledgments

The authors thank the reviewers for their valuable suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zou, A.; Wang, L.; Li, W.; Cai, J.; Wang, H.; Tan, T. Mobile Robot Path Planning Using Improved Mayfly Optimization Algorithm and Dynamic Window Approach. J. Supercomput. 2023, 79, 8340–8367. [Google Scholar] [CrossRef]
  2. Agarwal, D.; Bharti, P.S. Implementing Modified Swarm Intelligence Algorithm Based on Slime Moulds for Path Planning and Obstacle Avoidance Problem in Mobile Robots. Appl. Soft Comput. 2021, 107, 107372. [Google Scholar] [CrossRef]
  3. Ghita, N.; Kloetzer, M. Trajectory Planning for a Car-like Robot by Environment Abstraction. Rob. Auton. Syst. 2012, 60, 609–619. [Google Scholar] [CrossRef]
  4. Chiang, H.T.; Malone, N.; Lesser, K.; Oishi, M.; Tapia, L. Path-Guided Artificial Potential Fields with Stochastic Reachable Sets for Motion Planning in Highly Dynamic Environments. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 25–30 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 2347–2354. [Google Scholar]
  5. Nazif, A.N.; Davoodi, A.; Pasquier, P. Multi-agent area coverage using a single query roadmap: A swarm intelligence approach. In Advances in Practical Multi-Agent Systems; Springer: Berlin/Heidelberg, Germany, 2010; pp. 95–112. [Google Scholar]
  6. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  8. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  9. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  10. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  11. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  12. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  13. Beyer, H.-G.; Schwefel, H.-P. Evolution Strategies: A Comprehensive Introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar]
  14. Rocca, P.; Oliveri, G.; Massa, A. Differential evolution as applied to electromagnetics. IEEE Antennas Propag. Mag. 2011, 53, 38–49. [Google Scholar] [CrossRef]
  15. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. CAD Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  16. Shabani, A.; Asgarian, B.; Salido, M.; Asil Gharebaghi, S. Search and Rescue Optimization Algorithm: A New Optimization Method for Solving Constrained Engineering Optimization Problems. Expert Syst. Appl. 2020, 161, 113698. [Google Scholar] [CrossRef]
  17. Kumar, M.; Kulkarni, A.J.; Satapathy, S.C. Socio Evolution & Learning Optimization Algorithm: A Socio-Inspired Optimization Methodology. Future Gener. Comput. Syst. 2018, 81, 252–272. [Google Scholar]
  18. Erol, O.K.; Eksin, I. A New Optimization Method: Big Bang-Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  19. Kaveh, A.; Bakhshpoori, T. Water Evaporation Optimization: A Novel Physically Inspired Optimization Algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A Nature-Inspired Algorithm for Global Optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  21. Wang, Z.; Ding, H.; Wang, J.; Hou, P.; Li, A.; Yang, Z.; Hu, X. Adaptive Guided Salp Swarm Algorithm with Velocity Clamping Mechanism for Solving Optimization Problems. J. Comput. Des. Eng. 2022, 9, 2196–2234. [Google Scholar] [CrossRef]
  22. Islam, M.R.; Protik, P.; Das, S.; Boni, P.K. Mobile Robot Path Planning with Obstacle Avoidance Using Chemical Reaction Optimization. Soft Comput. 2021, 25, 6283–6310. [Google Scholar] [CrossRef]
  23. Liu, L.; Liang, J.; Guo, K.; Ke, C.; He, D.; Chen, J. Dynamic Path Planning of Mobile Robot Based on Improved Sparrow Search Algorithm. Biomimetics 2023, 8, 8020182. [Google Scholar] [CrossRef]
  24. Dai, Y.; Yu, J.; Zhang, C.; Zhan, B.; Zheng, X. A novel whale optimization algorithm of path planning strategy for mobile robots. Appl. Intell. 2023, 53, 10843–10857. [Google Scholar] [CrossRef]
  25. Liu, C.; Wu, L.; Xiao, W.; Li, G.; Xu, D.; Guo, J.; Li, W. An Improved Heuristic Mechanism Ant Colony Optimization Algorithm for Solving Path Planning. Knowl. Based Syst. 2023, 271, 110540. [Google Scholar] [CrossRef]
  26. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Spider Wasp Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Artif. Intell. Rev. 2023, 56, 11675–11738. [Google Scholar] [CrossRef]
  27. Zhang, Q.; Gao, H.; Zhan, Z.H.; Li, J.; Zhang, H. Growth Optimizer: A Powerful Metaheuristic Algorithm for Solving Continuous and Discrete Global Optimization Problems. Knowl. Based Syst. 2023, 261, 110206. [Google Scholar] [CrossRef]
  28. Zhang, X.; Lin, Q. Three-Learning Strategy Particle Swarm Algorithm for Global Optimization Problems. Inf. Sci. (NY) 2022, 593, 289–313. [Google Scholar] [CrossRef]
  29. Zhao, W.; Wang, L.; Mirjalili, S. Artificial Hummingbird Algorithm: A New Bio-Inspired Optimizer with Its Engineering Applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  30. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial Rabbits Optimization: A New Bio-Inspired Meta-Heuristic Algorithm for Solving Engineering Optimization Problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  31. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A Nature-Inspired Metaheuristic Algorithm for Engineering Applications. Eng. Appl. Artif. Intell. 2022, 114, 105075. [Google Scholar] [CrossRef]
  32. Abdel-Basset, M.; El-Shahat, D.; Jameel, M.; Abouhawwash, M. Exponential Distribution Optimizer (EDO): A Novel Math-Inspired Algorithm for Global Optimization and Engineering Problems. Artif. Intell. Rev. 2023, 56, 9329–9400. [Google Scholar] [CrossRef]
33. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm Based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  34. Abdel-Basset, M.; Mohamed, R.; Azeem, S.A.A.; Jameel, M.; Abouhawwash, M. Kepler Optimization Algorithm: A New Metaheuristic Algorithm Inspired by Kepler’s Laws of Planetary Motion. Knowl. Based Syst. 2023, 268, 110454. [Google Scholar] [CrossRef]
  35. Dehghani, M.; Hubalovsky, S.; Trojovsky, P. Northern Goshawk Optimization: A New Swarm-Based Algorithm for Solving Optimization Problems. IEEE Access 2021, 9, 162059–162080. [Google Scholar] [CrossRef]
  36. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
Figure 1. The dual-median-point guidance strategy simulation chart.
Figure 2. The better guidance strategy simulation chart.
Figure 3. The LMBSWO flow chart.
Figure 4. Population diversity across different maps.
Figure 5. Percentage of exploration/exploitation across different maps.
Figure 6. The optimal path of LMBSWO across different maps.
Figure 7. Box plots across different maps.
Figure 8. Convergence plots across different maps.
Figure 9. Composite indicator stacking chart.
Table 1. The map environmental information.
| Maps | No. of Obstacles | Begin | End | X Axis | Y Axis | Obstacle Size |
|---|---|---|---|---|---|---|
| Map1 | 3 | (0,0) | (4,6) | [1 1.8 4.5] | [1 5.0 0.9] | [0.8 1.5 1] |
| Map2 | 6 | (0,0) | (10,10) | [1.5 8.5 3.2 6.0 1.2 7.0] | [4.5 6.5 2.5 3.5 1.5 8.0] | [1.5 0.9 0.4 0.6 0.8 0.6] |
| Map3 | 13 | (3,3) | (14,14) | [1.5 4.0 1.2 5.2 9.5 6.5 10.8 5.9 3.4 8.6 11.6 3.3 11.8] | [4.5 3.0 1.5 3.7 10.3 7.3 6.3 9.9 5.6 8.2 8.6 11.5 11.5] | [0.5 0.4 0.4 0.8 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7] |
| Map4 | 30 | (3,3) | (14,14) | [10.1 10.6 11.1 11.6 12.1 11.2 11.7 12.2 12.7 13.2 11.4 11.9 12.4 12.9 13.4 8 8.5 9 9.5 10 9.3 9.8 10.3 10.8 11.3 5.9 6.4 6.9 7.4 7.9] | [8.8 8.8 8.8 8.8 8.8 11.7 11.7 11.7 11.7 11.7 9.3 9.3 9.3 9.3 9.3 5.3 5.3 5.3 5.3 5.3 6.7 6.7 6.7 6.7 6.7 8.4 8.4 8.4 8.4 8.4] | [0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4] |
| Map5 | 45 | (0,0) | (15,15) | [4 4 4 4 4 4 4 4 4 6 6 6 6 6 6 6 6 6 8 8 8 8 8 8 8 8 8 10 10 10 10 10 10 10 10 10 12 12 12 12 12 14 14 14 14] | [3 3.5 4 4.5 5 5.5 6 6.5 7 8 8.5 9 9.5 10 10.5 11 11.5 12 1 1.5 2 2.5 3 3.4 4 4.5 5 6 6.5 7 7.5 8 8.5 9 9.5 10 10 10.5 11 11.5 12 10 10.5 11 11.5] | [0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4] |
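The map data in Table 1 lends itself to a simple geometric representation. The sketch below is an illustration, not the authors' implementation: it assumes each obstacle is a circle whose centre comes from the X Axis/Y Axis columns and whose radius is the "Obstacle Size" entry, and the helper names (`make_map`, `in_collision`, `robot_radius`) are introduced here for clarity.

```python
import math

def make_map(xs, ys, sizes):
    """Bundle Table 1's per-obstacle columns into (x, y, radius) tuples."""
    return list(zip(xs, ys, sizes))

def in_collision(px, py, obstacles, robot_radius=0.0):
    """True if point (px, py) lies inside any obstacle circle,
    optionally inflated by the robot's own radius."""
    return any(math.hypot(px - ox, py - oy) <= r + robot_radius
               for ox, oy, r in obstacles)

# Map1 from Table 1: 3 obstacles, start (0,0), goal (4,6)
map1 = make_map([1, 1.8, 4.5], [1, 5.0, 0.9], [0.8, 1.5, 1])
```

A candidate waypoint can then be validated with `in_collision(x, y, map1)` before it is accepted into a path.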
Table 2. Comparison algorithm parameter information.
| Algorithms | Times | Parameters |
|---|---|---|
| Artificial Hummingbird Algorithm (AHA) [29] | 2022 | Migration coefficient = 2·n |
| Artificial Rabbits Optimization (ARO) [30] | 2022 | L = (e − e^(((t−1)/T)²))·sin(2πr), H = ((T − t + 1)/T)·r |
| Dandelion Optimizer (DO) [31] | 2022 | α ∈ [0, 1], k ∈ [0, 1] |
| Exponential Distribution Optimizer (EDO) [32] | 2023 | switch parameter = 0.5 |
| Weighted Mean of Vectors (INFO) [33] | 2022 | c = 2, d = 4 |
| Kepler Optimization Algorithm (KOA) [34] | 2023 | μ0 = 0.1, γ = 15, T = 3 |
| Northern Goshawk Optimization (NGO) [35] | 2021 | R = 0.02·(1 − t/T) |
| Particle Swarm Optimization (PSO) [36] | 1995 | w = 1, wp = 0.99, c1 = 1.5, c2 = 0.01 |
| Spider-Wasp Optimizer (SWO) [26] | 2023 | CR = 0.2, TR = 0.3 |
| LMBSWO | NA | CR = 0.2, TR = 0.3 |
Table 3. Numeric results under different maps.
| Maps | Metrics | EDO | AHA | ARO | DO | INFO | KOA | NGO | PSO | SWO | LMBSWO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Map1 | Mean | 7.5718 | 7.7200 | 7.7024 | 7.7496 | 7.6380 | 7.4258 | 7.7334 | 7.5724 | 7.4295 | 7.4092 |
| | Best | 7.4682 | 7.4230 | 7.4110 | 7.4296 | 7.4045 | 7.4041 | 7.7132 | 7.4226 | 7.4054 | 7.4038 |
| | Worst | 7.6640 | 7.7410 | 7.7389 | 7.8291 | 7.7338 | 7.5014 | 7.7364 | 7.7380 | 7.7321 | 7.4194 |
| | Std | 0.0559 | 0.0699 | 0.0980 | 0.0786 | 0.1484 | 0.0230 | 0.0062 | 0.1509 | 0.0717 | 0.0041 |
| Map2 | Mean | 14.3170 | 14.3063 | 14.3052 | 14.3574 | 14.3558 | 14.3038 | 14.3094 | 14.9094 | 14.3376 | 14.2996 |
| | Best | 14.3095 | 14.3050 | 14.2996 | 14.3004 | 14.3015 | 14.3016 | 14.3057 | 14.3032 | 14.2987 | 14.2978 |
| | Worst | 14.3268 | 14.3107 | 14.3103 | 14.4839 | 14.4738 | 14.3073 | 14.3151 | 16.2792 | 14.4803 | 14.3025 |
| | Std | 0.0047 | 0.0021 | 0.0022 | 0.0569 | 0.0781 | 0.0016 | 0.0026 | 0.9057 | 0.0733 | 0.0013 |
| Map3 | Mean | 16.1573 | 17.0159 | 16.4111 | 17.2076 | 16.1919 | 16.0029 | 17.4492 | 17.3922 | 16.3383 | 15.9782 |
| | Best | 15.9029 | 15.8486 | 15.8517 | 15.7859 | 15.7963 | 15.8422 | 16.6974 | 15.8549 | 15.7840 | 15.7738 |
| | Worst | 16.7901 | 18.7618 | 16.7627 | 18.8037 | 16.7625 | 16.1156 | 19.1846 | 19.3164 | 16.8101 | 16.7113 |
| | Std | 0.2041 | 1.4626 | 0.4186 | 1.0285 | 0.4368 | 0.1048 | 1.1552 | 1.1668 | 0.4067 | 0.2683 |
| Map4 | Mean | 16.0046 | 16.1916 | 16.1912 | 16.3592 | 16.2986 | 16.0909 | 16.7656 | 16.2213 | 16.0663 | 15.9883 |
| | Best | 15.8092 | 16.1893 | 16.1880 | 15.5743 | 15.8748 | 15.8451 | 15.8663 | 16.1885 | 15.5716 | 15.5658 |
| | Worst | 16.2232 | 16.1941 | 16.1965 | 21.8697 | 18.5884 | 16.2176 | 18.5955 | 16.3142 | 16.1929 | 16.1918 |
| | Std | 0.1399 | 0.0011 | 0.0028 | 1.3208 | 0.5443 | 0.1556 | 1.2346 | 0.0360 | 0.2538 | 0.2508 |
| Map5 | Mean | 21.6662 | 21.7805 | 21.9729 | 21.9704 | 21.7998 | 21.4939 | 21.7797 | 22.4779 | 21.4868 | 21.4822 |
| | Best | 21.5500 | 21.5396 | 21.4697 | 21.5694 | 21.5025 | 21.4768 | 21.6431 | 21.4988 | 21.4763 | 21.4479 |
| | Worst | 21.8354 | 23.0687 | 23.0703 | 23.1810 | 23.1374 | 21.6437 | 22.2488 | 24.5527 | 21.4948 | 21.4946 |
| | Std | 0.0875 | 0.4412 | 0.5555 | 0.4267 | 0.5036 | 0.0356 | 0.2350 | 0.7910 | 0.0068 | 0.0099 |
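The Mean/Best/Worst/Std entries in Table 3 are standard summaries of the collision-free path lengths obtained over repeated independent runs. A minimal sketch of how such a row is produced, where `summarize` is a hypothetical helper and the five-run sample is invented for illustration:

```python
import statistics

def summarize(path_lengths):
    """Summary statistics over the path lengths of repeated runs."""
    return {
        "Mean": statistics.fmean(path_lengths),
        "Best": min(path_lengths),              # shortest path found
        "Worst": max(path_lengths),             # longest path found
        "Std": statistics.stdev(path_lengths),  # sample standard deviation
    }

runs = [7.41, 7.40, 7.42, 7.41, 7.43]  # invented demo values
stats = summarize(runs)
```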
Table 4. Friedman test results across different maps.
| Maps | EDO | AHA | ARO | DO | INFO | KOA | NGO | PSO | SWO | LMBSWO |
|---|---|---|---|---|---|---|---|---|---|---|
| Map1 | 5.00 | 7.85 | 6.90 | 9.60 | 5.20 | 3.00 | 8.10 | 5.40 | 2.30 | 1.65 |
| Map2 | 8.05 | 5.25 | 4.10 | 8.35 | 6.70 | 3.05 | 6.80 | 7.55 | 4.05 | 1.10 |
| Map3 | 5.05 | 4.90 | 5.50 | 7.55 | 3.80 | 4.00 | 7.90 | 8.15 | 5.10 | 3.05 |
| Map4 | 3.43 | 7.75 | 6.65 | 5.10 | 4.75 | 3.48 | 5.75 | 8.95 | 5.30 | 3.85 |
| Map5 | 6.25 | 5.85 | 6.70 | 8.05 | 5.75 | 2.35 | 7.05 | 8.80 | 2.40 | 1.80 |
| Mean Rank | 5.56 | 6.32 | 5.97 | 7.73 | 5.24 | 3.18 | 7.12 | 7.77 | 3.83 | 2.29 |
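The Friedman values in Table 4 are average ranks: on each independent run the competing algorithms are ranked by path length (1 = shortest), and the ranks are averaged over all runs. A self-contained sketch of that computation, with invented toy results and hypothetical helper names (`ranks`, `friedman_mean_ranks`):

```python
def ranks(values):
    """1-based average ranks (1 = smallest); ties share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a block of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def friedman_mean_ranks(runs):
    """runs: one list per run, each holding one result per algorithm."""
    per_run = [ranks(run) for run in runs]
    return [sum(col) / len(runs) for col in zip(*per_run)]

# three invented runs, three algorithms
mean_ranks = friedman_mean_ranks([[7.5, 7.4, 7.6],
                                  [7.4, 7.5, 7.6],
                                  [7.6, 7.4, 7.5]])
```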
Table 5. Runtime across different maps.
| Maps | Metrics | EDO | AHA | ARO | DO | INFO | KOA | NGO | PSO | SWO | LMBSWO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Map1 | Time | 32.67 | 24.17 | 24.09 | 27.99 | 26.82 | 25.65 | 48.41 | 29.16 | 25.77 | 26.07 |
| | Rank | 9 | 2 | 1 | 7 | 6 | 3 | 10 | 8 | 4 | 5 |
| Map2 | Time | 32.74 | 24.89 | 24.56 | 28.08 | 27.65 | 26.60 | 49.04 | 29.35 | 26.47 | 26.64 |
| | Rank | 9 | 2 | 1 | 7 | 6 | 4 | 10 | 8 | 3 | 5 |
| Map3 | Time | 34.68 | 26.19 | 25.64 | 29.79 | 28.57 | 27.44 | 52.24 | 30.81 | 27.84 | 27.98 |
| | Rank | 9 | 2 | 1 | 7 | 6 | 3 | 10 | 8 | 4 | 5 |
| Map4 | Time | 37.79 | 29.97 | 29.46 | 33.98 | 32.06 | 30.56 | 57.77 | 34.01 | 31.63 | 31.32 |
| | Rank | 9 | 2 | 1 | 7 | 6 | 3 | 10 | 8 | 5 | 4 |
| Map5 | Time | 41.05 | 33.03 | 32.72 | 37.21 | 35.43 | 33.97 | 64.14 | 36.64 | 33.49 | 33.89 |
| | Rank | 9 | 2 | 1 | 8 | 6 | 5 | 10 | 7 | 3 | 4 |
| Mean Rank | | 9.00 | 2.00 | 1.00 | 7.20 | 6.00 | 3.60 | 10.00 | 7.80 | 3.80 | 4.60 |
| Final Rank | | 9 | 2 | 1 | 7 | 6 | 3 | 10 | 8 | 4 | 5 |
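Each "Rank" row in Table 5 simply orders the algorithms by runtime (1 = fastest). As a check, the sketch below reproduces the Map1 rank row from the Map1 times given in the table; `dense_ranks` is a hypothetical helper name introduced here.

```python
def dense_ranks(times):
    """1-based rank of each entry, 1 = smallest (no ties expected)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    r = [0] * len(times)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

# Map1 runtimes from Table 5, algorithm order EDO ... LMBSWO
map1_times = [32.67, 24.17, 24.09, 27.99, 26.82, 25.65, 48.41, 29.16, 25.77, 26.07]
map1_ranks = dense_ranks(map1_times)  # → [9, 2, 1, 7, 6, 3, 10, 8, 4, 5]
```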
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Gao, Y.; Li, Z.; Wang, H.; Hu, Y.; Jiang, H.; Jiang, X.; Chen, D. An Improved Spider-Wasp Optimizer for Obstacle Avoidance Path Planning in Mobile Robots. Mathematics 2024, 12, 2604. https://doi.org/10.3390/math12172604
