Article

Research on Multi-Strategy Fusion of the Chimpanzee Optimization Algorithm and Its Application in Path Planning

College of Information and Control Engineering, Xi’an University of Architecture and Technology (XUAT), No. 13 Yanta Road, Xi’an 710055, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(2), 608; https://doi.org/10.3390/app15020608
Submission received: 6 November 2024 / Revised: 30 December 2024 / Accepted: 31 December 2024 / Published: 10 January 2025

Abstract

In this paper, a multi-strategy enhanced chimpanzee optimization algorithm (MSEChOA) applied to the path planning of delivery vehicles is proposed, with the goals of shortening the global path length of unmanned delivery vehicles and obtaining safer paths. In the initialization phase, the algorithm introduces a hybrid good point set and chaos initialization strategy, combining the advantages of both to enhance the randomness and homogeneity of the initial population. It then incorporates a benchmark weight strategy and a Gaussian-modulated cosine factor to adaptively adjust algorithm parameters, balancing global and local search capabilities and improving search efficiency. Finally, the algorithm incorporates a Global Exploration Enhancer (GEE) to further strengthen the global search capability in the later phases, thereby avoiding local optima. Experiments on several benchmark test functions show that MSEChOA outperforms the traditional ChOA and other optimization algorithms in optimization accuracy and convergence speed. In simulation experiments, MSEChOA shows stronger path planning ability and good computational efficiency in both simple and complex environments, proving its feasibility and superiority in the field of path planning.

1. Introduction

In recent years, the demand for unmanned logistics has increased significantly. As key logistics equipment, unmanned delivery vehicles play a vital role in warehousing, sorting, and transportation processes [1]. Path planning, a core issue [2,3], directly impacts operational efficiency and task completion quality. Researchers worldwide have extensively studied path planning, achieving notable progress with algorithms ranging from traditional methods such as the A* algorithm [4,5], the artificial potential field method [6], and Dijkstra’s algorithm [7] and their improved variants, to intelligent algorithms such as simulated annealing [8], ant colony optimization [9], genetic algorithms, and grey wolf optimization.
However, traditional algorithms struggle in complex environments with multiple obstacles, resulting in poor timeliness and lengthy paths, which hinder unmanned vehicle operations. To address this, metaheuristic algorithms inspired by natural biological systems have gained attention for their efficiency [10,11].
These algorithms, mimicking collective behavior or ecological mechanisms, excel in optimization tasks. Among them, genetic algorithms were among the first applied to combinatorial optimization problems and are widely used in robotic path planning [12]. Recently, the Grey Wolf Optimizer (GWO) [13] has become a prominent metaheuristic algorithm for various optimization problems. Alejo-Reyes et al. [14] improved GWO by introducing weight factors and displacement vectors to enhance exploration strategies and avoid infeasible solutions. Alkhraisat et al. [15] tackled local optima by incorporating mutation methods to control mutation rates effectively.
The Chimp Optimization Algorithm (ChOA) is a novel metaheuristic inspired by chimpanzee population behavior [16], combining the strengths of the Grey Wolf Optimization Algorithm, Dragonfly Algorithm, and Whale Optimization Algorithm. It features strong global search capability, adaptability, and fewer control parameters. However, traditional ChOA suffers from slow convergence and low accuracy. To address these issues, Khishe et al. (2021) proposed a weighted chimpanzee algorithm, improving convergence speed and mitigating local optima capture in high-dimensional optimization problems, validated through 13 high-dimensional and 10 real-world optimization cases [17]. The weighted Chimp Optimization Algorithm has shown improvements in high-dimensional optimization through a weighting mechanism. However, it still encounters common issues found in classic heuristic optimization algorithms, such as slow convergence, susceptibility to local optima, and high computational resource demands. In 2022, Bo Q et al. combined Particle Swarm Optimization (PSO) local search techniques with a new constraint handling method, ensuring robustness in photovoltaic model parameter estimation optimization [18]. Despite its versatility, PSO often falls into local optima, particularly in complex or high-dimensional problems, limiting its global search capability.
To address these issues, Qian L et al. (2022) introduced six spiral functions and proposed two new hybrid spiral functions into the ChOA algorithm, aiming to mitigate slow search speeds and premature convergence [19]. However, spiral functions require precise parameter tuning to remain effective across different problem domains. Wang J et al. also identified limitations of ChOA in handling binary problems and developed a binary version, BChOA, using four S- and V-shaped transfer functions alongside a new binary method [20]. Comparisons of the four new binary optimization algorithms with 18 state-of-the-art methods revealed that BChOA, supported by V-shaped transfer functions, achieved significant statistical improvements. However, BChOA is tailored for binary problems rather than continuous ones, limiting its applicability. While it performs well in specific tasks, its generalization ability may be insufficient.
In 2024, Lan Z et al. proposed a chimpanzee optimization algorithm integrating good point sets, Cauchy operators, and simplex strategies to solve two real engineering problems [21]. The good point strategy and Cauchy operator enhance global exploration but may weaken local exploitation, while the simplex method’s reliance on local search makes balancing global and local exploration challenging. Zhou et al. [22] proposed a Nonlinear Improved Grey Wolf Optimization Algorithm (NI-GWO) in 2024, introducing nonlinear dynamic adjustment into the convergence factor mechanism and combining it with the Dynamic Window Approach (DWA) to handle dynamic obstacles. This approach improved UAV path planning by dynamically balancing global exploration and local exploitation while enhancing adaptability to dynamic environments. Li J et al. developed a Multi-Strategy Improved Seagull Optimization Algorithm (MSOA), which incorporates chaotic initialization, multi-directional somersault migration, and spiral leap predation strategies [23]. These strategies significantly improved global search and local exploitation capabilities. Experimental results showed that MSOA performs exceptionally well on standard test functions and robot path planning tasks, quickly generating obstacle-avoiding paths and demonstrating strong potential for path planning applications. However, it lacks comparisons with more advanced algorithms and sufficient testing in practical scenarios.
In summary, the traditional Chimp Optimization Algorithm (ChOA) and its improved versions exhibit strong global search capabilities and adaptability. Strategies such as weighting mechanisms, sine–cosine functions, and neighborhood search have improved convergence speed and optimization precision to some extent. However, these methods still face challenges, including reliance on initial solutions, high computational complexity, inadequate adaptability to dynamic environments, and suboptimal performance in certain complex scenarios.
To overcome these limitations, this paper proposes a multi-strategy improved method that integrates hybrid good point set and chaos initialization, a Gaussian-modulated cosine factor, a benchmark weight strategy, and a Global Exploration Enhancer (GEE). These enhancements aim to improve search efficiency and path planning accuracy. Experimental results confirm the effectiveness and superiority of the proposed method in solving path planning problems for unmanned delivery vehicles.

2. Chimpanzee Optimization Algorithm

The social behavior of chimpanzee populations inspires the Chimpanzee Optimization Algorithm (ChOA). In this algorithm, chimpanzees are categorized into four types of social roles based on their abilities: attackers, encirclers, drivers, and chasers. The main search process is abstracted as chimpanzee hunting, which consists of two main processes: an exploration phase and an exploitation phase [24].
In the exploration phase of the ChOA, each chimpanzee adjusts its position according to the assigned task (attacker, encircler, driver, chaser). The formula for updating the current individual position is:
$$X_{chimp}(t+1) = X_{prey}(t) - a \cdot d$$
$$d = \left| c \cdot X_{prey}(t) - m \cdot X_{chimp}(t) \right|$$
Here, $X_{chimp}$ represents the current position vector of the chimpanzee, and $X_{prey}$ represents the position vector of the prey. The vectors $a$ and $c$, respectively, influence the distance between the chimpanzee and the prey and represent the impact factor of obstacles on the chimpanzee’s hunting. $d$ is the distance between the chimpanzee and the prey, and $m$ is the chaos vector. When $|a| > 1$, the individual moves away from the prey for global search; when $|a| \le 1$, the individual gradually approaches the prey for local exploitation.
The expressions for a and c are:
$$a = 2f \cdot r_1 - f$$
$$c = 2 \cdot r_2$$
where $f$ is the convergence factor that decreases from 2.5 to 0 as the number of iterations increases, and $r_1$ and $r_2$ are two random numbers in [0, 1].
The formula for $f$ is as follows:
$$f = 2.5\left(1 - \frac{t}{T}\right)$$
In the exploitation phase, the chimpanzee begins to encircle and capture the prey. During this process, the algorithm gradually approaches the optimal solution by updating the positions of the attacker, encircler, driver, and chaser. The position update model is as follows:
$$X(t+1) = \frac{X_1 + X_2 + X_3 + X_4}{4}$$
where $X(t+1)$ is the updated position vector of the current chimpanzee individual, and $X_1$, $X_2$, $X_3$, and $X_4$ are the updated position vectors of the attacker, encircler, driver, and chaser, respectively. The mathematical model for their prey attack is as follows:
$$X_1 = X_{Attacker} - a_1 \cdot d_{Attacker}, \quad X_2 = X_{Barrier} - a_2 \cdot d_{Barrier}, \quad X_3 = X_{Chaser} - a_3 \cdot d_{Chaser}, \quad X_4 = X_{Driver} - a_4 \cdot d_{Driver}$$
$$d_{Attacker} = \left| c_1 \cdot X_{Attacker} - m_1 \cdot X \right|, \quad d_{Barrier} = \left| c_2 \cdot X_{Barrier} - m_2 \cdot X \right|, \quad d_{Chaser} = \left| c_3 \cdot X_{Chaser} - m_3 \cdot X \right|, \quad d_{Driver} = \left| c_4 \cdot X_{Driver} - m_4 \cdot X \right|$$
where d represents the distance between the chimpanzee individual and the prey, and c represents the impact factor of obstacles blocking the chimpanzee during hunting. After finally converging on the target, the chimpanzee will attack the prey, and once the prey stops moving, the chimpanzee completes the hunt.
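To make the position update above concrete, the following minimal Python sketch (our own illustration, not the authors’ MATLAB implementation) applies one ChOA iteration using the four leaders; the uniform-random chaotic vector m and all function and variable names are assumptions made for illustration.

```python
import numpy as np

def choa_step(positions, leaders, f):
    """One illustrative ChOA iteration.

    positions: (n, dim) array of chimpanzee positions.
    leaders:   sequence of the four best positions (attacker, encircler/barrier, chaser, driver).
    f:         current value of the convergence factor.
    """
    n, dim = positions.shape
    new_positions = np.empty_like(positions)
    for i in range(n):
        candidates = []
        for leader in leaders:
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            a = 2 * f * r1 - f                          # coefficient vector a
            c = 2 * r2                                  # obstacle impact factor c
            m = np.random.rand(dim)                     # chaotic vector m, simplified to uniform noise here
            d = np.abs(c * leader - m * positions[i])   # distance to this leader
            candidates.append(leader - a * d)           # candidate positions X_1 .. X_4
        new_positions[i] = np.mean(candidates, axis=0)  # plain average of the four candidates
    return new_positions
```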

3. Multi-Strategy Improved Chimpanzee Optimization Algorithm

3.1. Hybrid Good Point Set and Chaos Initialization Strategy

Population diversity is crucial for avoiding local optima and enhancing global exploration. The original ChOA relies on simple random initialization, which often yields an unevenly distributed initial population. To address this, this paper proposes a hybrid strategy combining good point sets and chaotic initialization. The good point set ensures uniform distribution, while chaotic mapping introduces randomness and diversity, enhancing the algorithm’s search capability.

3.1.1. Good Point Set Initialization

The good point set is an effective method for generating uniformly distributed points, particularly in high-dimensional search spaces, where it ensures better distribution than random initialization.
Originally proposed by Hua Luogeng et al. [25], its basic definition and construction are as follows:
Let $P_n(k) = \{(\{r_1^{(n)} \cdot k\}, \{r_2^{(n)} \cdot k\}, \cdots, \{r_s^{(n)} \cdot k\}) \mid 1 \le k \le n\}$, whose deviation satisfies $\varphi(n) = C(r, \varepsilon)\, n^{-1+\varepsilon}$, where $C(r, \varepsilon)$ is a constant related only to $r$ and $\varepsilon$, and $n$ denotes the number of points. The set $P_n(k)$ is called a good point set, with $r$ as the good point. $\{r_s^{(n)} \cdot k\}$ denotes the fractional part, and $r = \{2\cos(2\pi k / p) \mid 1 \le k \le s\}$, where $p$ is the smallest prime number satisfying $(p - 3)/2 \ge s$.
Good point set initialization generates low-discrepancy points within a given dimension, ensuring uniform spatial distribution. This improves the quality of the initial population and mitigates early convergence to local optima. However, the strong regularity of good point sets limits randomness and dynamism, potentially reducing the algorithm’s exploration capability in complex or high-dimensional problems.
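As an illustration of this construction, the sketch below generates an $n \times s$ good point set using $r_j = 2\cos(2\pi j/p)$ with $p$ the smallest prime satisfying $(p-3)/2 \ge s$, and scales the points into the search bounds; the helper and variable names are our own.

```python
import numpy as np

def good_point_set(n, s, lb, ub):
    """Generate n points in s dimensions from a good point set, scaled to [lb, ub]."""
    def smallest_prime_geq(m):
        p = max(2, m)
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
        return p

    p = smallest_prime_geq(2 * s + 3)          # smallest prime with (p - 3) / 2 >= s
    j = np.arange(1, s + 1)
    r = 2 * np.cos(2 * np.pi * j / p)          # the good point r
    k = np.arange(1, n + 1).reshape(-1, 1)
    points = np.mod(k * r, 1.0)                # fractional parts {r_j * k} in [0, 1)
    return lb + points * (ub - lb)             # map into the search range
```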

3.1.2. Chaos Mapping

Chaotic mapping, as a nonlinear dynamic system, possesses pseudo-randomness, ergodicity, and sensitivity to initial conditions, enabling the generation of random and diverse points during the initialization phase and thereby enhancing the algorithm’s exploration capabilities.
The unpredictability of chaotic point distributions helps the algorithm escape local optima in later stages and further improves population diversity. Common chaos mappings include the Logistic map, the Tent map, and others.
In this study, we combined three classic chaotic mappings (Circle mapping, ICMIC mapping, and Sinusoidal mapping) to generate population points with different distribution characteristics, ensuring both uniformity and diversity during the population initialization phase.
The formula for the Circle mapping is as follows:
$$x_{i+1} = \operatorname{mod}\left(x_i + a - \frac{b}{2\pi}\sin(2\pi x_i),\; 1\right)$$
The formula for the ICMIC mapping (Infinite Collapsing Iterative Chaotic Mapping) is as follows:
$$x_{i+1} = \sin\left(\frac{p}{x_i}\right)$$
The formula for the Sinusoidal mapping is as follows:
$$x_{i+1} = a \cdot x_i^2 \cdot \sin(\pi x_i)$$
However, the point distributions generated by chaotic mapping may lack uniformity, resulting in insufficient coverage of the search space.
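For reference, the three maps can be iterated as in the sketch below; the parameter values (a = 0.5 and b = 2.2 for the Circle map, p = 2 for ICMIC, a = 2.3 for the Sinusoidal map) are common choices from the chaos literature and are not values fixed by this paper.

```python
import numpy as np

def circle_map(x, a=0.5, b=2.2):
    """Circle map; keeps values in [0, 1)."""
    return np.mod(x + a - (b / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x), 1.0)

def icmic_map(x, p=2.0):
    """Infinite collapsing iterative chaotic map; x must be nonzero, output lies in [-1, 1]."""
    return np.sin(p / x)

def sinusoidal_map(x, a=2.3):
    """Sinusoidal map; for x in (0, 1) the iterates remain in (0, 1)."""
    return a * x ** 2 * np.sin(np.pi * x)
```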

3.1.3. Hybrid Good Point Set and Chaos Initialization

The initialization strategy proposed in this paper begins by generating a good point set, followed by applying chaotic mapping to introduce perturbations, thereby enhancing its randomness and diversity. This hybrid approach ensures that the initialized point set retains uniform distribution while incorporating sufficient randomness. The goal is to improve the coverage of the search space, enhancing the algorithm’s exploration capability and reducing the likelihood of convergence to local optima.
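One plausible realization of this hybrid scheme, reusing the good_point_set and circle_map helpers sketched above, draws a fixed share of the population from the good point set and the remainder from a chaotic sequence; the exact mixing mechanism and the default values used here are our assumptions, not the authors’ specification.

```python
import numpy as np

def hybrid_init(n, dim, lb, ub, ratio=0.6, x0=0.7):
    """Illustrative hybrid initialization: a `ratio` share of individuals from the
    good point set, the remainder from a Circle-map chaotic sequence."""
    n_gp = int(round(ratio * n))
    pop_gp = good_point_set(n_gp, dim, lb, ub)      # uniform, low-discrepancy part
    chaotic = np.empty((n - n_gp, dim))
    x = np.full(dim, x0)                            # nonzero chaotic seed
    for i in range(n - n_gp):
        x = circle_map(x)                           # chaotic, diversity-enhancing part
        chaotic[i] = lb + x * (ub - lb)
    return np.vstack([pop_gp, chaotic])
```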
The steps of the hybrid good point set and chaotic initialization are illustrated in the Figure 1 below:
To evaluate the effectiveness of our hybrid strategy that combines good point sets with chaotic initialization, we conducted experiments using various proportions of good point sets alongside chaotic initialization. The experimental results for both unimodal and multimodal functions are illustrated in Figure 2:
As shown in Figure 2, for unimodal functions, the average optimal fitness values are lowest and most stable at 0.6 and 0.8 proportions of the good point set. For multimodal functions, the 0.6 proportion achieves the lowest average optimal fitness value, indicating superior performance. Therefore, subsequent experiments and applications adopt a 0.6 proportion of the good point set for hybrid initialization.

3.1.4. Comparison of Different Initialization Methods

Assuming a two-dimensional search space with a range of [−2, 2] and a population size of 100, the population distributions generated by random initialization, good point set initialization, chaos mapping initialization, and hybrid good point set chaos initialization are compared, as shown in Figure 3:
As shown in the figure above, by combining the good point set and chaos mapping, the advantages of both can be leveraged to ensure uniform distribution of the initial population while increasing diversity and randomness. As a result, this method helps to avoid issues related to local convergence and improves overall global search efficiency.

3.2. Benchmark Weight Strategy and Gaussian Modulated Cosine Factor

3.2.1. Benchmark Weight Strategy

In complex path planning problems, a simple average position update often fails to achieve rapid convergence to the global optimal solution. Since the attacker has the highest fitness in the current population and exerts the greatest influence, this paper introduces a benchmark weight strategy. In each iteration, the attacker’s position (X1) is fixed as a reference, and the positions of other roles are combined to calculate corresponding weights for position updates. This approach enhances the algorithm’s search capability. The improved position update formula is shown below:
$$W_1 = \frac{|X_1|}{|X_1| + |X_2| + |X_3| + |X_4|}, \quad W_2 = \frac{|X_1|}{|X_2| + |X_2| + |X_3| + |X_4|}, \quad W_3 = \frac{|X_1|}{|X_3| + |X_2| + |X_3| + |X_4|}, \quad W_4 = \frac{|X_1|}{|X_4| + |X_2| + |X_3| + |X_4|}$$
$$X(t+1) = \frac{1}{W_1 + W_2 + W_3 + W_4} \times \frac{W_1 \cdot X_1 + W_2 \cdot X_2 + W_3 \cdot X_3 + W_4 \cdot X_4}{4}$$
Each weight $W_1$, $W_2$, $W_3$, and $W_4$ has its numerator fixed as $|X_1|$, while the denominator consists of a different combination of the absolute values of the positions of the four roles.
This benchmark weight strategy enhances the search ability for the optimal solution, particularly in path planning problems, ensuring that the search agents better adapt to complex environments and improving the algorithm’s performance and stability.
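A minimal NumPy sketch of the weighted update above is given below (our own illustration; the small eps term guarding against division by zero is an added assumption):

```python
import numpy as np

def benchmark_weight_update(X1, X2, X3, X4, eps=1e-12):
    """Position update using the benchmark weight strategy."""
    a1, a2, a3, a4 = np.abs(X1), np.abs(X2), np.abs(X3), np.abs(X4)
    W1 = a1 / (a1 + a2 + a3 + a4 + eps)
    W2 = a1 / (a2 + a2 + a3 + a4 + eps)
    W3 = a1 / (a3 + a2 + a3 + a4 + eps)
    W4 = a1 / (a4 + a2 + a3 + a4 + eps)
    w_sum = W1 + W2 + W3 + W4
    return (W1 * X1 + W2 * X2 + W3 * X3 + W4 * X4) / (4.0 * w_sum)
```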

3.2.2. Gaussian-Modulated Cosine Factor

In the Chimp Optimization Algorithm (ChOA), the convergence factor f is primarily used to balance global exploration and local exploitation during the search process, playing a crucial role in the algorithm’s performance.
However, in Equation (5), the linear decay of f does not align with the actual convergence behavior during the algorithm’s execution. To enhance the global search capability and local exploitation ability of the Chimp Optimization Algorithm, this paper proposes a Gaussian-modulated cosine factor, as shown in Equation (14):
$$f = f_{start} + (f_{end} - f_{start}) \cdot \exp\left(-\frac{(t - T/2)^2}{2\sigma^2}\right) \cdot \left(1 - \cos\left(\frac{\pi t}{2T}\right)\right)$$
In the above equation, $f_{start}$ and $f_{end}$ are the initial and final values of the convergence factor, respectively, $t$ is the current iteration number, $\sigma$ is the standard deviation of the Gaussian function, which controls its width, and $T$ is the maximum number of iterations. The Gaussian distribution function enables a smooth adjustment of the factor throughout the entire search process. In the later stages of iteration, the exponential decay of the Gaussian function suppresses the oscillation of the cosine term, allowing the population to focus on the vicinity of the optimal solution and enhancing local exploitation. The cosine modulation term introduces periodic oscillations, which improve the algorithm’s randomness and diversity during the global search phase. $f_{start}$ and $f_{end}$ control the dynamic range of the factor, enabling the algorithm to flexibly adapt to the requirements of different search stages. The Gaussian-modulated cosine factor combines the advantages of the Gaussian function and the cosine function, enabling dynamic and smooth adjustments to the convergence factor while providing a balanced effect that other factors cannot achieve simultaneously. Additionally, it does not require frequent parameter adjustments for specific problems, making it suitable for various scenarios, including high-dimensional, multimodal, and dynamic problems. In practical experiments, the Gaussian-modulated cosine factor demonstrated excellent convergence speed, precision, and robustness, particularly outperforming other factors in complex optimization problems.
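The factor can be computed as in the following sketch; the default sigma = T/6 and the end points f_start = 2.5, f_end = 0 are assumptions consistent with the f range reported in Table 3, not values stated explicitly for Equation (14).

```python
import numpy as np

def gaussian_cosine_factor(t, T, f_start=2.5, f_end=0.0, sigma=None):
    """Gaussian-modulated cosine convergence factor (sketch of Equation (14))."""
    if sigma is None:
        sigma = T / 6.0                              # assumed default width
    gauss = np.exp(-((t - T / 2.0) ** 2) / (2.0 * sigma ** 2))
    cosine = 1.0 - np.cos(np.pi * t / (2.0 * T))
    return f_start + (f_end - f_start) * gauss * cosine
```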

3.3. Global Exploration Enhancer

In optimization algorithms, search agents must thoroughly explore the solution space to identify the global optimal solution. However, algorithms often become trapped in local optima, reducing search efficiency. To address this, this paper introduces a Global Exploration Enhancer (GEE), which integrates Dynamic Opposition-based Learning (DOBL) [26] and the Sine–Cosine Algorithm (SCA) [27,28]. GEE dynamically adjusts the application probabilities of these strategies across different iteration stages to enhance search performance. During early iterations, GEE prioritizes DOBL to increase population diversity and prevent premature convergence. In mid-iterations, it balances DOBL and SCA to steadily approach the optimal solution. In later stages, GEE predominantly applies SCA to refine solution accuracy.
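The stage-dependent blending of the two strategies can be sketched as follows; the linear probability schedule, the shrinking DOBL range parameter, the SCA coefficient schedule, and the greedy acceptance rule are our assumptions for illustration rather than the paper’s exact settings.

```python
import numpy as np

def gee_step(pop, best, fitness_fn, t, T, lb, ub):
    """Illustrative Global Exploration Enhancer: DOBL early, SCA late."""
    p_dobl = 1.0 - t / T                             # probability of applying DOBL this iteration
    new_pop = pop.copy()
    for i in range(len(pop)):
        if np.random.rand() < p_dobl:
            k = 1.0 - t / T                          # DOBL range parameter, shrinking over time
            opposite = lb + ub - pop[i]              # basic opposite point
            candidate = pop[i] + k * np.random.rand(*pop[i].shape) * (opposite - pop[i])
        else:
            r1 = 2.0 * (1.0 - t / T)                 # SCA amplitude, decreasing over time
            r2 = 2.0 * np.pi * np.random.rand(*pop[i].shape)
            r3, r4 = 2.0 * np.random.rand(), np.random.rand()
            wave = np.sin(r2) if r4 < 0.5 else np.cos(r2)
            candidate = pop[i] + r1 * wave * np.abs(r3 * best - pop[i])
        candidate = np.clip(candidate, lb, ub)
        if fitness_fn(candidate) < fitness_fn(pop[i]):   # greedy acceptance (assumed)
            new_pop[i] = candidate
    return new_pop
```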

3.4. Algorithm Flow

Step 1: Input parameters such as the population size $N$, search dimension $dim$, search boundaries $ub$ and $lb$, and maximum number of iterations $Max_{iter}$.
Step 2: Initialize the population using a hybrid good point set and chaos mapping to ensure diversity and uniform distribution within the population.
Step 3: Calculate the fitness values and select the positions of the four best individuals, denoted as the attacker $X_{Attacker}$, encircler $X_{Barrier}$, chaser $X_{Chaser}$, and driver $X_{Driver}$.
Step 4: Calculate the convergence factor f and dynamically adjust its value based on the current iteration count t using the Gaussian-modulated cosine factor formula.
Step 5: In each iteration, selectively apply dynamic opposition-based learning and sine–cosine strategies according to the current stage ratio, and dynamically adjust their application probabilities. Update each chimpanzee’s position based on the benchmark weight strategy, calculate new fitness values, and adjust the population structure to ensure convergence towards the optimal solution.
Step 6: Repeat Steps 3 through 5 until the algorithm reaches the maximum number of iterations or the convergence criteria are met, and then output the optimal solution.
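Putting the steps together, a compressed sketch of the main loop is shown below (reusing the helper functions sketched earlier; the sorting-based leader selection and the order in which the strategies are applied are our assumptions):

```python
import numpy as np

def msechoa(fitness_fn, dim, lb, ub, n=30, max_iter=1000):
    pop = hybrid_init(n, dim, lb, ub)                       # Step 2: hybrid initialization
    for t in range(max_iter):                               # Step 6: iterate until the budget is used
        fit = np.array([fitness_fn(x) for x in pop])
        leaders = pop[np.argsort(fit)[:4]]                  # Step 3: attacker, encircler, chaser, driver
        f = gaussian_cosine_factor(t, max_iter)             # Step 4: adaptive convergence factor
        pop = np.clip(choa_step(pop, leaders, f), lb, ub)   # plain ChOA update (the benchmark-weighted
                                                            # form would replace the simple average)
        pop = gee_step(pop, leaders[0], fitness_fn, t, max_iter, lb, ub)  # Step 5: GEE refinement
    fit = np.array([fitness_fn(x) for x in pop])
    return pop[np.argmin(fit)], fit.min()
```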
The flowchart of the MSEChOA algorithm is shown in Figure 4:

4. Experimental Simulation and Results Analysis

4.1. Experimental Environment and Parameter Settings

The proposed MSEChOA algorithm was tested in a simulation environment based on an Intel (R) Core (TM) i7-8300H CPU @ 2.30 GHz processor, 16 GB RAM, NVIDIA GeForce RTX 3050 Ti, and Windows 10 Professional (64-bit) operating system. The programming software used was Matlab R2021a. The experimental settings were as follows: search space dimension of 30, population size of 30, maximum iterations of 1000, and 30 independent runs.
Table 1 lists the basic information for 11 benchmark test functions, including five unimodal functions (F1 to F5), four nonlinear multimodal functions (F6 to F9), and two fixed-dimension multimodal functions (F10 and F11). The functions F1 to F11 cover a range of optimization problems, from simple to complex, from unimodal to multimodal, and from variable dimensions to fixed dimensions. Detailed information and the search ranges for each function are as follows:
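For reference, two of the benchmark functions in Table 1 can be written as follows (standard definitions, shown only to make the evaluation setup concrete):

```python
import numpy as np

def sphere(x):                     # F1: unimodal, global optimum 0 at x = 0
    return np.sum(x ** 2)

def rastrigin(x):                  # F9: multimodal, global optimum 0 at x = 0
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

# Example: evaluate a random 30-dimensional point in the Sphere search range
x = np.random.uniform(-100, 100, size=30)
print(sphere(x))
```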

4.2. Analysis of the Impact of Different Improvement Strategies on Algorithm Performance

To analyze the impact of different improvement strategies on the performance of the Chimp Optimization Algorithm (ChOA), this paper designs a series of ablation experiments and conducts comparative studies on the following strategies: ChOA refers to the original Chimp Optimization Algorithm; CChOA is the improved algorithm that only applies the hybrid good point set and chaos initialization strategy; GChOA is the improved algorithm that only applies the benchmark weight strategy and Gaussian-modulated cosine factor strategy; EChOA is the improved algorithm that only applies the Global Exploration Enhancer (dynamic opposition-based learning + sine cosine strategy); MSEChOA is the multi-strategy enhanced Chimp Optimization Algorithm that integrates all the above strategies. The results are shown in Table 2.
The results in Table 2 provide qualitative insights into the impact of each improvement strategy on ChOA’s performance. The hybrid initialization strategy (CChOA) combines good point sets and chaotic mapping, enhancing population diversity, and uniformity for comprehensive exploration of the search space. This reduces the risk of premature convergence in the early stages but lacks refinement mechanisms for precise local exploitation in later stages. The Gaussian-modulated cosine factor and benchmark weight strategy (GChOA) dynamically adjust the convergence factor, balancing global exploration and local exploitation. This approach maintains population diversity in early iterations, facilitating the discovery of high-quality solutions, and strengthens local exploitation in later stages, significantly improving solution precision. Consequently, GChOA excels in unimodal and fixed-dimension functions where high convergence accuracy is crucial.
The Global Exploration Enhancer (EChOA), integrating dynamic opposition-based learning and sine–cosine strategies, enhances global search capabilities. Opposition-based learning generates high-quality opposite solutions, expanding the search range, while the sine–cosine strategy introduces nonlinear updates to escape local optima, particularly excelling in multimodal functions. However, EChOA’s lack of refinement for local exploitation limits its performance on unimodal functions compared to GChOA. By integrating these strategies, MSEChOA achieves a comprehensive balance between exploration and exploitation, excelling across all test functions.
The MSEChOA algorithm surpasses both the original algorithm and those with individual strategy improvements in optimization accuracy, convergence speed, global search capability, and algorithm stability. This demonstrates the significant advantages of the multi-strategy fusion approach in addressing complex optimization problems, validating its effectiveness and superiority.

4.3. Performance Comparison Between MSEChOA and Other Algorithms

To validate the practical effectiveness of MSEChOA, this paper conducts a comprehensive performance comparison with several state-of-the-art metaheuristic optimization algorithms. These include the Grey Wolf Optimizer (GWO), Equilibrium Optimizer (EO) [29], Ant Lion Optimizer (ALO) [30], Multi-Strategy Grey Wolf Optimizer (MSGWO) [31], Chimp Optimization Algorithm (ChOA), Improved Chimp Optimization Algorithm (IChOA) [32], Weighted Chimp Optimization Algorithm (WChOA), and Adaptive Weighted Chimp Optimization Algorithm (AWChOA), which applies adaptive convergence factors and dynamic weight strategies. The specific experimental settings and parameters are shown in Table 3.
In MSEChOA, the parameter $k$ is a dynamic opposition-based learning (DOBL) parameter with a value range of [0, 1]. Its design aims to control the generation range of opposition solutions to adapt to the search requirements of different optimization stages. In the early stages of optimization, $k$ has a larger value (close to 1), resulting in a broader distribution of opposition solutions, which helps to cover the search space and enhance global search capabilities. In the later stages, $k$ gradually decreases, leading to a more concentrated distribution of opposition solutions, facilitating the fine-tuning of the local search space. Parameters $\gamma_1$, $\gamma_2$, $\gamma_3$, and $\gamma_4$, respectively, represent the step size adjustment parameter, which controls the magnitude of the search step size; the variation range parameter, which limits the range of solution updates; the direction switching parameter, which enables switching between forward and reverse searches; and the random perturbation parameter, which introduces randomness during the solution update process.

4.3.1. Solution Accuracy Analysis

As shown in Table 4, in the tests on unimodal functions (F1 to F5), the MSEChOA algorithm achieved the theoretical optimal value of 0 on F1 to F4. Although MSEChOA did not surpass MSGWO and AWChOA on the F5 function, its performance was still significantly better than the original ChOA and most other improved algorithms, indicating its robustness in handling complex functions.
In the multimodal functions (F6 to F9), the MSEChOA algorithm also demonstrated superior performance. On the F6 function, while the fitness value of MSEChOA was slightly lower than that of AWChOA, it had a lower standard deviation, indicating better stability. In the F7 and F8 functions, MSEChOA showed good optimization results, especially in the F8 function, where MSEChOA achieved the optimal result among all tested algorithms, with a standard deviation close to zero, indicating high stability and accuracy. On the F9 function, MSEChOA, like several other improved algorithms, achieved the optimal value of 0, and its performance was comparable to other top algorithms such as AWChOA.
In the fixed-dimension multimodal functions (F10, F11), MSEChOA continued to exhibit excellent performance, particularly in complex high-dimensional search spaces. In both of these functions, MSEChOA achieved the theoretical optimal value and displayed a low standard deviation value, demonstrating its advantage in high-dimensional multimodal optimization problems.
In summary, the MSEChOA algorithm has demonstrated excellent optimization capabilities across various benchmark test functions, especially in handling complex optimization problems.

4.3.2. Convergence Curve Analysis

Convergence curves are commonly used to illustrate the overall performance of optimization algorithms during the iterative process. They depict the changes in the objective function value over successive iterations. By analyzing the convergence curves, one can intuitively assess the convergence speed and stability of the algorithm. As shown in the convergence curves in Figure 5, the MSEChOA algorithm demonstrates outstanding performance across multiple test functions, showing significant advantages compared to other algorithms.
In the tests on unimodal functions (F1 to F5), the convergence curve of the MSEChOA algorithm exhibits an extremely rapid convergence trend. Particularly on the F1, F2, F3, and F4 functions, MSEChOA quickly approaches the theoretical optimal value of 0 almost in the initial iteration stages.
In the tests on multimodal functions (F6 to F9), although the EO algorithm slightly outperforms in some iterations on the F6 function, the convergence curve of MSEChOA shows a more stable downward trend, particularly as it approaches the optimal solution, where the curve’s volatility significantly decreases, indicating good convergence stability of the algorithm. The F7, F8, and F9 functions, as typical representatives of complex multimodal functions, show MSEChOA’s outstanding performance, with the convergence curve not only quickly approaching the optimal value but also displaying high stability in later iterations, with a standard deviation close to zero.
In the tests on fixed-dimension multimodal functions (F10, F11), MSEChOA also performs excellently. Its convergence curve shows that when dealing with high-dimensional complex search spaces, although other algorithms such as MSGWO and AWChOA perform well, they still exhibit some fluctuations in later iterations, whereas MSEChOA’s curve tends to stabilize, with a low standard deviation.

4.3.3. Time Complexity Analysis

Unmanned delivery vehicles need to plan paths quickly in dynamic and complex environments to ensure efficient logistics operations. To ensure that unmanned vehicles can plan paths in real time in complex dynamic environments, the computation time and stability of each algorithm must be rigorously tested and compared. Table 5 shows the average runtime and standard deviation of each algorithm on unimodal, multimodal, and fixed-dimension multimodal functions, thereby evaluating their time complexity in path planning.
As shown in Table 5 and Figure 6, the runtime of the ALO algorithm is significantly higher than that of other algorithms, indicating its higher computational cost when handling path planning problems in complex environments. In contrast, the EO and GWO algorithms exhibit higher computational efficiency. The average runtime of the MSEChOA algorithm is moderate; although it is slightly higher than EO and GWO, it is lower than the original ChOA and other improved algorithms. Combined with the algorithm performance results discussed earlier, MSEChOA demonstrates a good balance between computational efficiency and path planning performance.
Further analysis of the standard deviation shows that the MSEChOA algorithm has a lower standard deviation when dealing with multimodal functions and fixed-dimension multimodal functions, indicating better operational stability. This stability is especially important for the path planning of delivery unmanned vehicles in dynamic environments, as it means the algorithm can maintain consistent and efficient performance across different scenarios.

5. Simulation of Unmanned Delivery Vehicle Path Planning Based on the MSEChOA Algorithm

To validate the effectiveness of the MSEChOA algorithm in actual unmanned delivery vehicle path planning, a simulation test was conducted using the grid method in MATLAB. The experiments were performed on 30 × 30 scale maps with obstacle coverage rates of approximately 15% and 42%, representing simple and complex environmental maps, respectively, to simulate different real-world delivery scenarios. The experimental parameters were set as follows: a population size of 100 and a maximum iteration value of 1000. To ensure the reliability of the results, each scenario was repeated 10 times.
In grid maps, it is usually necessary to convert between row–column indices and linear indices. Assuming the map size is rows × cols, the conversion formula for rows and columns is as shown in Equation (15):
$$index = (r - 1) \times cols + c, \qquad r = \left\lfloor \frac{index - 1}{cols} \right\rfloor + 1, \qquad c = index - (r - 1) \times cols$$
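A direct transcription of this conversion (1-based indices, matching the MATLAB grid-map convention; the helper names are our own):

```python
def rc_to_index(r, c, cols):
    """Convert a 1-based (row, column) pair to a 1-based linear grid index."""
    return (r - 1) * cols + c

def index_to_rc(index, cols):
    """Convert a 1-based linear grid index back to a 1-based (row, column) pair."""
    r = (index - 1) // cols + 1
    c = index - (r - 1) * cols
    return r, c

# Example on a 30 x 30 grid: cell (2, 5) <-> linear index 35
assert rc_to_index(2, 5, 30) == 35
assert index_to_rc(35, 30) == (2, 5)
```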
The starting and ending points of the delivery unmanned vehicle are marked with green and red dots, respectively, and obstacles are represented by black squares. Different colored paths represent the path planning results of each algorithm. By comparing and analyzing these results, the differences in obstacle avoidance capability, path length, and operational efficiency of the algorithms can be observed. The MSEChOA algorithm was compared with other metaheuristic algorithms and various improved Chimp Optimization Algorithms. A randomly selected experimental result is shown in Figure 7:
Figure 7A,B shows the path planning results of the MSEChOA algorithm compared to other metaheuristic algorithms and various improved Chimp Optimization Algorithms in a simple environment. The paths generated by MSEChOA are shorter and smoother, significantly outperforming the other algorithms. Compared to other improved chimp algorithms, MSEChOA still shows advantages, excelling in both path length and planning efficiency.
Figure 7C,D present the comparison of path planning in a complex environment. MSEChOA still demonstrates excellent obstacle avoidance capabilities when faced with complex obstacles, generating paths that are superior in both length and smoothness compared to other algorithms. Even though algorithms like IChOA and WChOA approach optimality in some cases, MSEChOA’s overall performance is more stable, validating its superiority in complex scenarios. Table 6 presents the data on average path length, average standard deviation, and average runtimes for each algorithm in both simple and complex environments. The following is a detailed analysis of the data in the table:
In a simple environment, the MSEChOA algorithm demonstrates the best path planning capability among all algorithms, with an average path length of 45.6985, indicating that this algorithm can effectively find the shortest path in simple environments. In contrast, traditional algorithms like ALO and MSGWO have relatively longer average path lengths, revealing their performance disadvantages in such environments. In a complex environment, MSEChOA also performs excellently, with an average path length of 45.6985, significantly better than other algorithms, especially ChOA (55.0711) and AWChOA (54.4853), demonstrating MSEChOA’s significant advantage in handling complex path planning tasks.
Regarding path planning stability, the MSEChOA algorithm has an average standard deviation of 1.1419 in a simple environment, slightly higher than WChOA’s 1.0479, but still at a low level. In a complex environment, MSEChOA’s average standard deviation further decreases to 0.52394, the lowest among all algorithms, indicating that the algorithm can not only find shorter paths in complex environments, but also has high path stability and robustness. Additionally, in terms of runtime, MSEChOA also demonstrates high computational efficiency in path planning tasks.
An analysis of Table 6 shows that the MSEChOA algorithm performs excellently in both simple and complex environments, with its performance in complex environments further validating its potential as an efficient and reliable solution for unmanned delivery vehicle path planning.
Meanwhile, in order to simulate a more realistic delivery scenario, this study also simulates a warehouse environment with two randomized tasks, in which the start and end points are randomly generated (triangles represent the start point, circles represent the end point, and the blue lines are the routes planned and traveled by the unmanned vehicle), together with dynamic obstacles. The yellow squares are dynamic obstacles, and the dotted lines behind them are their trajectories. The gray squares are unknown obstacles. Dynamic obstacles change their positions through random movements, further increasing the complexity of the scenario and the adaptability requirements placed on the algorithm, as shown in Figure 8:
The experimental results show that in tasks simulating real delivery scenarios, the MSEChOA algorithm can quickly generate optimal paths that avoid dynamic obstacles. Moreover, its stability is evident in complex scenarios involving random tasks and dynamic obstacles, demonstrating superior global search and local exploitation capabilities compared to other algorithms. This further validates the potential of the MSEChOA algorithm in practical applications, particularly in dynamic logistics and unmanned vehicle delivery domains.

6. Conclusions

To address the inefficiency and limited capability of traditional path planning algorithms in complex environments, this paper proposes a multi-strategy enhanced Chimpanzee Optimization Algorithm (MSEChOA). By incorporating multiple strategies, MSEChOA significantly improves optimization efficiency and adaptability. Comparative experiments with other classic optimization algorithms demonstrate that MSEChOA provides an efficient and reliable solution for unmanned delivery vehicle path planning. The main conclusions are as follows:
(1)
A multi-strategy enhanced chimpanzee optimization algorithm (MSEChOA) combining hybrid good point set and chaos initialization, a benchmark weight strategy, a Gaussian-modulated cosine factor, and Global Exploration Enhancer (GEE) strategies was proposed. Compared to the original chimpanzee algorithm, which has limited global search capability and is prone to becoming trapped in local optima when dealing with complex path planning, MSEChOA effectively addresses these limitations, enhancing the algorithm’s global search capability and convergence accuracy.
(2)
The experimental results show that MSEChOA performs excellently across eleven benchmark test functions, particularly in optimization accuracy, convergence speed, and global search capability, surpassing traditional and other improved algorithms. In the simulation experiments for delivery unmanned vehicle path planning, MSEChOA not only generated superior path planning schemes in both simple and complex environments, but also exhibited outstanding performance in terms of computational efficiency and algorithm stability. Compared to other algorithms, MSEChOA effectively balances global search and local exploitation capabilities, avoiding the pitfalls of local optima, demonstrating strong adaptability and robustness, making it suitable for handling complex and dynamic real-world scenarios.
The MSEChOA algorithm provides an efficient and reliable solution for delivery unmanned vehicle path planning, with broad application prospects. In addition to its proven effectiveness in simple and complex map environments, the algorithm has also been validated in experiments simulating more realistic logistics scenarios involving random tasks and dynamic obstacles. Future research can focus on further optimizing the algorithm’s computational efficiency, improving its adaptability in dynamic and real-world logistics environments, and enhancing its performance in simulations of more realistic logistics scenarios. These efforts aim to refine the algorithm’s practical utility and accelerate the development and widespread adoption of unmanned delivery technology.
Moreover, the superior optimization performance of the MSEChOA algorithm also demonstrates extensive application potential in other practical scenarios. For instance, in logistics distribution center scheduling, the algorithm can optimize vehicle and cargo allocation to improve logistical efficiency. In intelligent warehouse path planning, it can assist robots in generating optimal paths within dynamic storage environments, thereby reducing operation time. In industrial production scheduling, it provides efficient scheduling solutions for multi-process manufacturing tasks, thereby enhancing production efficiency.
Despite its outstanding performance in various optimization problems, MSEChOA still has certain limitations. For example, the algorithm’s capability in handling real-time path planning in dynamic environments requires further validation and improvement. Additionally, its adaptability to high-dimensional and complex environments also has room for enhancement. Future research can focus on the following directions: optimizing the real-time performance of MSEChOA in dynamic environments to ensure stability and efficiency in rapidly changing obstacle and target scenarios; integrating deep learning or reinforcement learning methods to further enhance the algorithm’s adaptability to high-dimensional and complex scenarios; and conducting deeper investigations into logistics and distribution applications, such as logistics scheduling and traffic information. Further research can explore and improve these issues, enabling MSEChOA to realize its potential in a wider range of practical scenarios, providing more efficient solutions for unmanned delivery, robotic navigation, intelligent logistics, and industrial optimization.

Author Contributions

Research conceptualization, X.H. Advice and guidance, X.H. Overall direction control, X.H. Investigation, X.H. Resources, X.H. Methodology, X.H. Study design, C.G. Software, C.G. Validation, C.G. Experimental execution, C.G. Data analysis, C.G. Visualization, C.G. Preparation of first draft, C.G. Project management, X.H. Review and revision of the manuscript, X.H. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank the National Natural Science Foundation of China for partial support under Grant No. 72071153. The authors also express their gratitude to the Key Laboratory Fund under Grant no. 6142003190204, and the Natural Science Special Program of Xi’an University of Architecture and Technology under Grant no. ZR20048.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank their colleagues at Xi’an University of Architecture and Technology for their constructive discussions and feedback during the research. The authors also acknowledge the anonymous reviewers for their valuable suggestions that greatly improved the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Srinivas, S.; Ramachandiran, S.; Rajendran, S. Autonomous robot-driven deliveries: A review of recent developments and future directions. Transp. Res. Part E Logist. Transp. Rev. 2022, 165, 102834. [Google Scholar] [CrossRef]
  2. Abed, M.S.; Lutfy, O.F.; Al-Doori, Q.F. A Review on Path Planning Algorithms for Mobile Robots. Eng. Technol. 2021, 39, 804–820. [Google Scholar] [CrossRef]
  3. Das, P.K.; Jena, P.K. Multi-robot path planning using improved particle swarm optimization algorithm through novel evolutionary operators. Appl. Soft Comput. 2020, 92, 106312. [Google Scholar] [CrossRef]
  4. Guruji, A.K.; Agarwal, H.; Parsediya, D.K. Time-efficient A* Algorithm for Robot Path Planning. In Proceedings of the 3rd International Conference on Innovations in Automation and Mechatronics Engineering, Vallabh Vidhyanagar, India, 5–6 February 2016; pp. 144–149. [Google Scholar]
  5. Wei, S.; Yunfeng, L.V.; Hongwei, T.; Min, X. Mobile Robot Path Planning Based on an Improved A Algorithm. J. Hunan Univ. (Nat. Sci.) 2017, 44, 94–101. (In Chinese) [Google Scholar] [CrossRef]
  6. Costa, A.N.; Medeiros, F.L.; Dantas, J.P.; Geraldo, D.; Soma, N.Y. Formation control method based on artificial potential fields for aircraft flight simulation. Simulation 2022, 98, 575–595. [Google Scholar] [CrossRef]
  7. Fadzli, S.A.; Abdulkadir, S.I.; Makhtar, M. Robotic Indoor Path Planning Using Dijkstra’s Algorithm with Multi-Layer Dictionaries. In Proceedings of the 2015 2nd International Conference on Information Science and Security (ICISS), Seoul, Republic of Korea, 14–16 December 2015; pp. 1–4. [Google Scholar]
  8. Kirkpatrick, S. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  9. Eda, Y.; Salman, F.S.; Erdoğan, G. Optimizing two-dimensional vehicle loading and dispatching decisions in freight logistics. Eur. J. Oper. Res. 2022, 302, 954–969. [Google Scholar]
  10. Abedinia, O.; Amjady, N.; Ghasemi, A. A new metaheuristic algorithm based on shark smell optimization. Complexity 2016, 21, 97–116. [Google Scholar] [CrossRef]
  11. Agrawal, P.; Abutarboush, H.F.; Ganesh, T.; Mohamed, A.W. Metaheuristic Algorithms on Feature Selection: A Survey of One Decade of Research (2009–2019). IEEE Access 2021, 9, 26766–26791. [Google Scholar] [CrossRef]
  12. Yao, M.; Wang, N.; Zhao, L. Improved simulated annealing algorithm and genetic algorithm for TSP. Comput. Eng. Appl. 2013, 49, 60–65. [Google Scholar]
  13. Mirjalili, S.; Mirjalili, S.M.; Lewis, A.D. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  14. Alejo-Reyes, A.; Cuevas, E.; Rodríguez, A.; Mendoza, A.; Olivares-Benitez, E. An Improved Grey Wolf Optimizer for a Supplier Selection and Order Quantity Allocation Problem. Mathematics 2020, 8, 1457. [Google Scholar] [CrossRef]
  15. Alkhraisat, H.; Dalbah, L.M.; Al-Betar, M.A.; Awadallah, M.A.; Assaleh, K.; Deriche, M. Size optimization of truss structures using improved grey wolf optimizer. IEEE Access 2023, 11, 13383–13397. [Google Scholar] [CrossRef]
  16. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  17. Khishe, M.; Nezhadshahbodaghi, M.; Mosavi, M.R.; Martín, D. A Weighted Chimp Optimization Algorithm. IEEE Access 2021, 9, 158508–158539. [Google Scholar] [CrossRef]
  18. Bo, Q.; Cheng, W.; Khishe, M.; Mohammadi, M.; Mohammed, A.H. Solar photovoltaic model parameter identification using robust niching chimp optimization. Sol. Energy 2022, 239, 179–197. [Google Scholar] [CrossRef]
  19. Qian, L.; Khishe, M.; Huang, Y.; Mirjalili, S. SEB-ChOA: An improved chimp optimization algorithm using spiral exploitation behavior. Neural Comput. Appl. 2024, 36, 4763–4786. [Google Scholar] [CrossRef]
  20. Wang, J.; Khishe, M.; Kaveh, M.; Mohammadi, H. Binary Chimp Optimization Algorithm (BChOA): A New Binary Meta-heuristic for Solving Optimization Problems. Cogn. Comput. 2021, 13, 1297–1316. [Google Scholar] [CrossRef]
  21. Lan, Z.; He, Q. A novel chimp optimization algorithm with cauchy perturbation. J. Chin. Comput. Syst. 2023, 44, 715–723. [Google Scholar]
  22. Zhou, X.; Shi, G.; Zhang, J. Improved Grey Wolf Algorithm: A Method for UAV Path Planning. Drones 2024, 8, 675. [Google Scholar] [CrossRef]
  23. Li, J.; Shang, W.; Hu, Y. Multi-Strategy Improved Seagull Algorithm for Solving Robot Path Planning Problems. Mach. Tool Autom. Technol. 2024, 4, 19–25, 30. (In Chinese) [Google Scholar] [CrossRef]
  24. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709. [Google Scholar] [CrossRef]
  25. Hua, L.; Wang, Y. Applications of Number Theory to Numerical Analysis; Springer: Berlin, Germany, 1981; pp. 83–87. [Google Scholar]
  26. Xie, Y.; Lin, L.; Wu, Z. Dynamic opposition-based learning in nature-inspired optimization: A review of concepts and applications. Swarm Evol. Comput. 2023, 85, 101057. [Google Scholar]
  27. Chen, C.; Wang, X.; Yu, H.; Wang, M.; Chen, H. Dealing with multi-modality using synthesis of Moth-flame optimizer with sine cosine mechanisms. Math. Comput. Simul. 2021, 188, 291–318. [Google Scholar] [CrossRef]
  28. Gupta, R.; Singh, P.K.; Ghosh, A. Hybrid sine cosine algorithm for solving engineering optimization problems. Expert Syst. Appl. 2023, 223, 119725. [Google Scholar]
  29. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  30. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  31. Qin, H.; Wang, L.; Fu, Y.; Sui, M.; He, B.J. Grey Wolf Optimization Algorithm Based on Multi-Strategy Combination and Its Application. Shandong Univ. 2024, 59, 51–60. (In Chinese) [Google Scholar]
  32. Liu, H.; Lei, B.; Wang, W.; Yun, Y.; Chai, H. Path Planning for Warehouse Mobile Robots Based on an Improved Chimpanzee Optimization Algorithm. Inf. Control 2023, 52, 689–700. (In Chinese) [Google Scholar]
Figure 1. Flowchart of Hybrid Good Point Set and Chaotic Initialization.
Figure 2. Average fitness for different proportions of good point set chaos initialization. (A) is the unimodal function test result; (B) is the multimodal function test result.
Figure 3. Initial population distribution generated by different methods. (A) Generate 100 points using random method; (B) generate 100 points using good point set; (C) generate 100 points using chaos mapping; (D) generate 100 points using hybrid good point set and chaos initialization.
Figure 4. MSEChOA Algorithm Flowchart.
Figure 5. Convergence curves of various algorithms on benchmark functions. (A–E) Convergence results on unimodal functions (F1–F5); (F–I) convergence results on multimodal functions (F6–F9); (J,K) convergence results on fixed-dimension multimodal functions (F10, F11).
Figure 6. Comparison of execution times among different algorithms.
Figure 7. Path planning results of various algorithms in simple and complex environments. (A,B) Path comparison of MSEChOA with other intelligent optimization algorithms in a simple map; (C,D) path comparison of MSEChOA with other intelligent optimization algorithms in a complex map.
Figure 8. Simulated real delivery scenarios: (A,B) represent two random tasks.
Table 1. List of Benchmark Test Functions.

Function Number | Function Name | Dimension | Domain | Remarks
F1 | Sphere | 30 | [−100, 100] | Unimodal
F2 | Schwefel 2.22 | 30 | [−10, 10] | Unimodal
F3 | Schwefel 1.26 | 30 | [−100, 100] | Unimodal
F4 | Schwefel max | 30 | [−100, 100] | Unimodal
F5 | Rosenbrock | 30 | [−30, 30] | Unimodal
F6 | Step | 30 | [−100, 100] | Unimodal
F7 | Quartic | 30 | [−1.28, 1.28] | Unimodal
F8 | Schwefel 2.26 | 30 | [−500, 500] | Multimodal
F9 | Rastrigin | 30 | [−5.12, 5.12] | Multimodal
F10 | Ackley | 30 | [−32, 32] | Multimodal (Fixed Dimension)
F11 | Griewank | 30 | [−600, 600] | Multimodal (Fixed Dimension)
Table 2. Test Results for Different Improved Strategy Algorithms.

Algorithm | Standard Function | Average Fitness Value | Standard Deviation of Fitness Value
ChOAF15.3205 × 10201.4612 × 1020
CChOA1.0185 × 10494.4854 × 1049
GChOA9.0227 × 102590
EChOA5.3708 × 101420
MSEChOA00
ChOAF21.1672 × 10141.4103 × 1014
CChOA7.9241 × 10372.3578 × 1036
GChOA1.1178 × 102060
EChOA3.3878 × 101160
MSEChOA2.7981 × 103180
ChOAF32.3789 × 1035.9865 × 103
CChOA5.6502 × 10582.6851 × 1058
GChOA00
EChOA3.57 × 101180
MSEChOA00
ChOAF42.5449 × 1024.0083 × 102
CChOA1.2281 × 10103.8544 × 105
GChOA1.9831 × 101980
EChOA3.2545 × 10600
MSEChOA00
ChOAF52.6484 × 1016.6254 × 101
CChOA2.7376 × 1011.1346
GChOA2.6859 × 1011.8091 × 102
EChOA2.5143 × 1011.8091 × 102
MSEChOA2.7710 × 1011.7452 × 101
ChOAF63.31813.8517 × 101
CChOA2.08186.9924 × 101
GChOA2.88064.1811 × 101
EChOA2.63974.1811 × 101
MSEChOA3.1859 × 10173.0727 × 101
ChOAF79.1266 × 1020.00087611
CChOA3.0555 × 1032.2088 × 103
GChOA3.8075 × 1053.1309 × 105
EChOA3.8677 × 1053.1309 × 105
MSEChOA1.3864 × 1071.2646 × 1007
ChOAF8−5.7409 × 1036.6939 × 101
CChOA−6.0430 × 1031.6773 × 102
GChOA−8.3671 × 1038.5857 × 101
EChOA−6.3068 × 1038.5857 × 101
MSEChOA−9.8562 × 1047.1199
ChOAF91.7892 × 1029.7979 × 102
CChOA6.2543 × 10124.2156 × 1012
GChOA00
EChOA00
MSEChOA00
ChOAF101.9962 × 1011.2464 × 13
CChOA1.2967 × 10145.327 × 1015
GChOA8.8818 × 10160
EChOA0.0976190
MSEChOA8.8818 × 10160
ChOAF117.1037 × 1031.3919 × 103
CChOA1.0514 × 1063.1708 × 106
GChOA00
EChOA00
MSEChOA00
Table 3. Parameter Settings of Each Algorithm in the Simulation Experiments.

Algorithm | Parameters
GWO | r1 = [0, 1], r2 = [0, 1], a = [0, 2]
EO | r1 = [0, 1], r2 = [0, 1], a1 = 2, a2 = 1, GP = 0.3
ALO | r1 = [0, 1], r2 = [0, 1], c > 1
MSGWO | r1 = [0, 1], r2 = [0, 1], a = [0, 2], h = 0.3
ChOA | r1 = [0, 1], r2 = [0, 1], f = [0, 2.5]
IChOA | r1 = [0, 1], r2 = [0, 1], f = [0, 2.5]
WChOA | r1 = [0, 1], r2 = [0, 1], f = [0, 2.5]
AWChOA | r1 = [0, 1], r2 = [0, 1], f = [0, 2.5]
MSEChOA | r1 = [0, 1], r2 = [0, 1], f = [0, 2.5], k = [0, 1], γ1 = [0, 2], γ2 = [0, 1], γ3 = [−1, 1], γ4 = [0, 1]
Table 4. The experimental results of each algorithm on benchmark test functions.
| Algorithm | Standard Function | Best Fitness Value | Average Fitness Value | Worst Fitness Value | Standard Deviation of Fitness Value |
|---|---|---|---|---|---|
| GWO | F1 | 2.7451 × 10^−61 | 2.0058 × 10^−59 | 8.0392 × 10^−59 | 3.4286 × 10^−59 |
| EO | F1 | 8.1172 × 10^−89 | 7.6076 × 10^−87 | 2.7059 × 10^−86 | 1.1166 × 10^−86 |
| ALO | F1 | 4.7580 × 10^−7 | 5.0654 × 10^−6 | 9.8827 × 10^−6 | 3.7870 × 10^−6 |
| MSGWO | F1 | 0 | 0 | 0 | 0 |
| ChOA | F1 | 4.0154 × 10^−22 | 2.6677 × 10^−21 | 5.6054 × 10^−21 | 2.3480 × 10^−21 |
| IChOA | F1 | 5.7817 × 10^−289 | 5.2753 × 10^−280 | 2.6309 × 10^−279 | 0 |
| WChOA | F1 | 0 | 0 | 0 | 0 |
| AWChOA | F1 | 0 | 0 | 0 | 0 |
| MSEChOA | F1 | 0 | 0 | 0 | 0 |
| GWO | F2 | 4.4984 × 10^−36 | 1.1163 × 10^−34 | 6.5505 × 10^−34 | 1.4335 × 10^−34 |
| EO | F2 | 1.2043 × 10^−50 | 2.927 × 10^−49 | 1.3489 × 10^−48 | 4.0016 × 10^−49 |
| ALO | F2 | 2.8345 × 10^−1 | 2.6784 × 10^1 | 1.2578 × 10^2 | 4.3768 × 10^1 |
| MSGWO | F2 | 0 | 0 | 0 | 0 |
| ChOA | F2 | 3.8479 × 10^−17 | 4.7866 × 10^−14 | 1.9832 × 10^−13 | 6.4244 × 10^−14 |
| IChOA | F2 | 3.3371 × 10^−171 | 8.3392 × 10^−161 | 1.4128 × 10^−159 | 3.1574 × 10^−160 |
| WChOA | F2 | 1.5529 × 10^−270 | 1.4878 × 10^−267 | 1.4111 × 10^−266 | 0 |
| AWChOA | F2 | 0 | 0 | 0 | 0 |
| MSEChOA | F2 | 0 | 0 | 0 | 0 |
| GWO | F3 | 5.9076 × 10^−19 | 2.0702 × 10^−13 | 4.1257 × 10^−12 | 9.2235 × 10^−13 |
| EO | F3 | 4.1664 × 10^−27 | 1.0605 × 10^−20 | 2.0582 × 10^−19 | 4.5960 × 10^−20 |
| ALO | F3 | 4.3682 × 10^2 | 1.2297 × 10^3 | 1.9752 × 10^3 | 4.7699 × 10^2 |
| MSGWO | F3 | 0 | 0 | 0 | 0 |
| ChOA | F3 | 1.4114 × 10^−7 | 2.3789 × 10^3 | 2.5539 × 10^2 | 5.9865 × 10^3 |
| IChOA | F3 | 6.2521 | 6.3564 × 10^3 | 2.0569 × 10^4 | 5.9676 × 10^3 |
| WChOA | F3 | 0 | 0 | 0 | 0 |
| AWChOA | F3 | 0 | 0 | 0 | 0 |
| MSEChOA | F3 | 0 | 0 | 0 | 0 |
| GWO | F4 | 6.7639 × 10^−16 | 2.2771 × 10^−14 | 8.1989 × 10^−14 | 2.5581 × 10^−14 |
| EO | F4 | 6.721 × 10^−23 | 2.0782 × 10^−21 | 1.215 × 10^−20 | 3.8011 × 10^−21 |
| ALO | F4 | 9.3198 × 10^−23 | 4.9083 × 10^−23 | 4.8676 × 10^−22 | 1.5379 × 10^−22 |
| MSGWO | F4 | 0 | 0 | 0 | 1.5379 × 10^−22 |
| ChOA | F4 | 8.3812 × 10^−4 | 2.5449 × 10^−2 | 1.1852 × 10^−1 | 4.0083 × 10^−2 |
| IChOA | F4 | 1.1553 × 10^−90 | 1.9693 × 10^−44 | 1.3232 × 10^−43 | 4.4478 × 10^−44 |
| WChOA | F4 | 5.4692 × 10^−253 | 1.1032 × 10^−249 | 1.0184 × 10^−248 | 0 |
| AWChOA | F4 | 0 | 0 | 0 | 0 |
| MSEChOA | F4 | 0 | 0 | 0 | 0 |
| GWO | F5 | 2.6014 × 10^1 | 2.6887 × 10^1 | 2.7976 × 10^1 | 6.6254 × 10^−1 |
| EO | F5 | 2.4195 × 10^1 | 2.4447 × 10^1 | 2.4845 × 10^1 | 1.6682 × 10^−1 |
| ALO | F5 | 2.6732 × 10^1 | 2.6905 × 10^2 | 1.7752 × 10^3 | 4.3496 × 10^2 |
| MSGWO | F5 | 2.8824 × 10^1 | 2.8955 × 10^1 | 2.8997 × 10^1 | 3.5780 × 10^−2 |
| ChOA | F5 | 2.8062 × 10^1 | 2.8836 × 10^1 | 2.8948 × 10^1 | 1.7312 × 10^−1 |
| IChOA | F5 | 2.8743 × 10^1 | 2.8794 × 10^1 | 2.8868 × 10^1 | 2.7903 × 10^−2 |
| WChOA | F5 | 29 | 29 | 29 | 5.7969 × 10^−10 |
| AWChOA | F5 | 29 | 29 | 29 | 6.5972 × 10^−16 |
| MSEChOA | F5 | 2.7710 × 10^1 | 2.8460 × 10^1 | 2.8821 × 10^1 | 4.2720 × 10^−18 |
| GWO | F6 | 2.9949 × 10^−1 | 6.6693 × 10^−1 | 1.5180 × 10^1 | 3.4501 × 10^−1 |
| EO | F6 | 4.8801 × 10^−11 | 2.0216 × 10^−8 | 1.8594 × 10^−8 | 1.3300 × 10^−9 |
| ALO | F6 | 8.4663 × 10^−7 | 9.1442 × 10^−6 | 8.3931 × 10^−5 | 8.3129 × 10^−6 |
| MSGWO | F6 | 5.7383 | 6.0207 | 6.5039 | 9.4713 × 10^−1 |
| ChOA | F6 | 3.0442 | 3.3381 | 3.6401 | 2.3828 × 10^−1 |
| IChOA | F6 | 1.9045 | 1.6633 | 2.5593 | 9.3163 × 10^−1 |
| WChOA | F6 | 1.3409 | 1.4316 | 1.5224 | 1.283 × 10^−1 |
| AWChOA | F6 | 1.1671 | 1.3172 | 1.5180 | 1.8088 |
| MSEChOA | F6 | 2.1209 × 10^−17 | 4.1698 × 10^−7 | 5.4862 × 10^−6 | 1.2778 × 10^−6 |
| GWO | F7 | 1.9937 × 10^−4 | 8.3273 × 10^−4 | 1.8214 × 10^−3 | 3.7865 × 10^−4 |
| EO | F7 | 1.8614 × 10^−4 | 6.3278 × 10^−4 | 1.6592 × 10^−3 | 3.9671 × 10^−4 |
| ALO | F7 | 4.6633 × 10^−2 | 9.9301 × 10^−2 | 1.8258 × 10^−1 | 3.0897 × 10^−2 |
| MSGWO | F7 | 8.3223 × 10^−7 | 2.9184 × 10^−5 | 1.6351 × 10^−4 | 3.4050 × 10^−5 |
| ChOA | F7 | 2.1619 × 10^−5 | 7.8746 × 10^−4 | 4.5600 × 10^−3 | 9.6417 × 10^−4 |
| IChOA | F7 | 2.1131 × 10^−5 | 6.4404 × 10^−4 | 2.1390 × 10^−3 | 5.3923 × 10^−4 |
| WChOA | F7 | 3.6663 × 10^−6 | 8.5284 × 10^−5 | 5.1775 × 10^−4 | 1.0510 × 10^−4 |
| AWChOA | F7 | 9.8998 × 10^−7 | 2.8929 × 10^−5 | 9.1816 × 10^−5 | 2.4063 × 10^−5 |
| MSEChOA | F7 | 2.4499 × 10^−7 | 1.5532 × 10^−5 | 7.1409 × 10^−5 | 1.5829 × 10^−5 |
| GWO | F8 | −7.0131 × 10^3 | −6.0293 × 10^3 | −3.9242 × 10^3 | 7.0841 × 10^2 |
| EO | F8 | −9.9502 × 10^3 | −8.8828 × 10^3 | −7.7454 × 10^3 | 6.9652 × 10^2 |
| ALO | F8 | −8.8387 × 10^3 | −5.5972 × 10^3 | −5.4177 × 10^3 | 6.1683 × 10^2 |
| MSGWO | F8 | −3.3303 × 10^3 | −2.3413 × 10^3 | −1.2683 × 10^3 | 4.6097 × 10^2 |
| ChOA | F8 | −5.8594 × 10^3 | −5.7289 × 10^3 | −5.6299 × 10^3 | 5.5124 × 10^1 |
| IChOA | F8 | −1.2569 × 10^4 | −8.0906 × 10^3 | −3.8683 × 10^3 | 2.1861 × 10^3 |
| WChOA | F8 | −4.1874 × 10^3 | −3.3371 × 10^3 | −2.5980 × 10^3 | 3.1731 × 10^2 |
| AWChOA | F8 | −3.0328 × 10^3 | −2.2503 × 10^3 | −1.5103 × 10^3 | 4.0934 × 10^2 |
| MSEChOA | F8 | −1.2569 × 10^4 | −1.2569 × 10^4 | −1.2565 × 10^4 | 9.9146 × 10^−1 |
| GWO | F9 | 0 | 2.6289 × 10^−1 | 7.8868 | 1.4707 |
| EO | F9 | 0 | 0 | 0 | 0 |
| ALO | F9 | 6.2682 × 10^1 | 8.5168 × 10^1 | 1.3432 × 10^2 | 1.7589 × 10^1 |
| MSGWO | F9 | 0 | 0 | 0 | 0 |
| ChOA | F9 | 0 | 5.5223 × 10^−1 | 1.0257 × 10^1 | 2.0008 |
| IChOA | F9 | 0 | 1.3263 × 10^−14 | 1.1369 × 10^−13 | 2.8649 × 10^−14 |
| WChOA | F9 | 0 | 0 | 0 | 0 |
| AWChOA | F9 | 0 | 0 | 0 | 0 |
| MSEChOA | F9 | 0 | 0 | 0 | 0 |
| GWO | F10 | 1.1546 × 10^−14 | 1.8296 × 10^−14 | 2.2204 × 10^−14 | 2.2534 × 10^−15 |
| EO | F10 | 4.4409 × 10^−15 | 4.7962 × 10^−15 | 7.9936 × 10^−15 | 1.1235 × 10^−15 |
| ALO | F10 | 1.6562 × 10^−3 | 2.0799 | 3.4621 | 7.2054 × 10^−1 |
| MSGWO | F10 | 8.8818 × 10^−16 | 1.5987 × 10^−15 | 4.4409 × 10^−15 | 1.4580 × 10^−15 |
| ChOA | F10 | 1.9956 × 10^1 | 1.9962 × 10^1 | 1.9964 × 10^1 | 1.2985 × 10^−3 |
| IChOA | F10 | 4.4409 × 10^−15 | 7.2831 × 10^−15 | 1.5099 × 10^−14 | 3.6692 × 10^−15 |
| WChOA | F10 | 4.4409 × 10^−15 | 4.4409 × 10^−15 | 4.4409 × 10^−15 | 0 |
| AWChOA | F10 | 8.8818 × 10^−16 | 1.0658 × 10^−15 | 4.4409 × 10^−15 | 7.9441 × 10^−16 |
| MSEChOA | F10 | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 0 |
| GWO | F11 | 0 | 1.3528 × 10^−3 | 9.8275 × 10^−3 | 3.3126 × 10^−3 |
| EO | F11 | 0 | 6.1605 × 10^−4 | 1.2321 × 10^−2 | 2.7551 × 10^−3 |
| ALO | F11 | 6.6037 × 10^−4 | 1.8867 × 10^−2 | 7.8795 × 10^−2 | 2.0600 × 10^−2 |
| MSGWO | F11 | 0 | 0 | 0 | 0 |
| ChOA | F11 | 0 | 5.6528 × 10^−3 | 6.3903 × 10^−2 | 1.5818 × 10^−2 |
| IChOA | F11 | 0 | 2.2204 × 10^−17 | 1.1102 × 10^−16 | 4.5563 × 10^−17 |
| WChOA | F11 | 0 | 0 | 0 | 0 |
| AWChOA | F11 | 0 | 0 | 0 | 0 |
| MSEChOA | F11 | 0 | 0 | 0 | 0 |
Table 5. Average Execution Time and Standard Deviation of Different Algorithms on Various Benchmark Functions.
| Algorithm Name | GWO | EO | ALO | MSGWO | ChOA | IChOA | WChOA | AWChOA | MSEChOA |
|---|---|---|---|---|---|---|---|---|---|
| Unimodal Function | 0.1840 | 0.1658 | 26.8618 | 0.6503 | 0.6324 | 0.0391 | 0.6649 | 4.9864 | 0.2147 |
| Multimodal Function | 0.1873 | 0.1538 | 25.9900 | 0.6485 | 0.6322 | 0.0375 | 0.6190 | 4.8141 | 0.2034 |
| Fixed-Dimension Multimodal Function | 0.1949 | 0.1610 | 25.8121 | 0.6499 | 0.6467 | 0.0509 | 0.6355 | 4.7628 | 0.2155 |
| Average Runtime (s) | 0.1887 | 0.1602 | 26.2213 | 0.6495 | 0.6371 | 0.0425 | 0.6398 | 4.8544 | 0.2112 |
| Average Standard Deviation (s) | 0.0068 | 0.0148 | 0.5746 | 0.0282 | 0.0231 | 0.004 | 0.0411 | 0.1186 | 0.0115 |
Table 6. Path Planning Results of Various Algorithms on Two Different Maps.
| Map | Metric | GWO | EO | ALO | MSGWO | ChOA | IChOA | WChOA | AWChOA | MSEChOA |
|---|---|---|---|---|---|---|---|---|---|---|
| Simple Map | Average Path Length | 49.799 | 48.0416 | 48.0416 | 48.0416 | 55.0711 | 47.3137 | 48.2132 | 54.4853 | 45.6985 |
| Simple Map | Average Standard Deviation | 0.6679 | 1.3099 | 0.96255 | 4.5299 | 2.6 | 1.2083 | 1.0479 | 1.5778 | 1.1419 |
| Simple Map | Average Runtime (s) | 7.9166 | 7.8611 | 11.8017 | 9.9254 | 7.7053 | 7.7646 | 5.5806 | 8.1135 | 5.2658 |
| Complex Map | Average Path Length | 49.799 | 48.0416 | 48.0416 | 48.0416 | 55.0711 | 47.3137 | 48.2132 | 54.4853 | 45.6985 |
| Complex Map | Average Standard Deviation | 0.26197 | 1.0664 | 1.4163 | 2.8211 | 1.6867 | 2.1569 | 0.95768 | 1.8779 | 0.52394 |
| Complex Map | Average Runtime (s) | 9.9677 | 9.8957 | 11.8057 | 8.9797 | 8.8052 | 10.5313 | 7.9110 | 8.1135 | 7.6658 |