Article

MSAPO: A Multi-Strategy Fusion Artificial Protozoa Optimizer for Solving Real-World Problems

1
Department of Applied Mathematics, Xi’an University of Technology, Xi’an 710054, China
2
School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(17), 2888; https://doi.org/10.3390/math13172888
Submission received: 19 August 2025 / Revised: 30 August 2025 / Accepted: 3 September 2025 / Published: 6 September 2025
(This article belongs to the Special Issue Advances in Metaheuristic Optimization Algorithms)

Abstract

Artificial protozoa optimizer (APO), a recently proposed meta-heuristic algorithm, is inspired by the foraging, dormancy, and reproduction behaviors of protozoa in nature. Compared with traditional optimization algorithms, APO is strongly competitive; nevertheless, it has inherent limitations, such as slow convergence and a proclivity towards local optima. To enhance the efficacy of the algorithm, this paper puts forth a multi-strategy fusion artificial protozoa optimizer, referred to as MSAPO. In the initialization stage, MSAPO employs the piecewise chaotic opposition-based learning strategy, which results in a uniform population distribution, circumvents initialization bias, and enhances the global exploration capability of the algorithm. Subsequently, the cyclone foraging strategy is implemented during the heterotrophic foraging phase, enabling the algorithm to identify the optimal search direction with greater precision, guided by the globally optimal individual. This reduces random wandering, significantly accelerating the optimization search and enhancing the ability to jump out of local optima. Furthermore, the incorporation of the hybrid mutation strategy in the reproduction stage enables the algorithm to adaptively switch mutation patterns during the iteration process, striking a balance between rapid escape from local optima in the early stages and precise convergence in the later stages. Finally, the crisscross strategy is incorporated at the end of each iteration. This not only enhances the algorithm's global search capacity but also augments its ability to circumvent local optima through the combined application of horizontal and vertical crossover.
This paper presents a comparative analysis of MSAPO against other prominent optimization algorithms on the CEC2017 test set in three dimensions and on the CEC2022 test set in its highest dimension. The numerical experiments show that MSAPO outperforms the compared algorithms and ranks first in the overall performance evaluation. In addition, in eight real-world engineering design problems, MSAPO almost always achieves the theoretical optimum, which confirms its efficiency and applicability and verifies the great potential of MSAPO for solving complex optimization problems.

1. Introduction

Numerous real-life problems can often be reduced to the model building process of finding optimal solutions, a process that aims to achieve the best results through model solving [1]. The accelerated advancement of science and technology has led to the pervasion of real-world optimization challenges across a multitude of domains. These include image processing [2], feature selection [3], mechanical design [4], engineering problems [5,6], production scheduling and automation [7], and so on. These optimization problems are increasingly complex, and they are often characterized by high dimensionality [8], nonlinearity [9], discontinuities, and multiple constraints [10], thus putting traditional numerical optimization methods to a severe test and making it difficult to satisfy the growing demand for optimization [11].
Metaheuristic algorithms, by virtue of their intuitive and easy-to-understand principles, high randomness, high generality, and no need to rely on a specific problem background [12], have demonstrated extraordinary effectiveness in overcoming high-dimensional and large-scale optimization problems that are difficult to be mastered by traditional methods. This innovative methodology has not only broadened the horizon of optimization problems, but also attracted great attention and in-depth research in domestic and international academic circles [13]. Meta-heuristic algorithms can be classified into one of five categories: biological evolution-based, population intelligence-based, thinking cognition-based, physics and mathematics-based, and strategy enhancement-based [1].
The algorithms based on biological evolution are methods for conducting random search by emulating the genetic, mutation, and selection processes observed in biological evolution. The most classic algorithms are genetic algorithm (GA), proposed in 1975 [14], and differential evolution (DE), proposed in 1995 [15]. In 2013, Pinar proposed the backtracking search algorithm (BSA) [16]. In addition to this, other evolution-based algorithms proposed in recent years include spherical evolutionary algorithm (SEA) [17], geometric probabilistic evolutionary algorithm (GPEA) [18], etc.
Population intelligence-based algorithms implement stochastic search by simulating the instinctive behaviors of biological populations in nature (e.g., activities such as foraging, predation, migration, and courtship of different species). In 1995, Kennedy and Eberhart simulated the foraging behavior of a flock of birds and proposed the classical particle swarm optimizer (PSO) [19]. In 2006, Dorigo et al. proposed the ant colony optimizer (ACO) [20]. In 2014, Mirjalili proposed the gray wolf optimizer (GWO) [21], which draws inspiration from the gray wolf population hierarchy and the collaborative hunting strategy of the pack. Subsequently, in 2016, he put forth the whale optimization algorithm (WOA) [22]. Other notable algorithms are Harris hawks optimization (HHO) [23], the marine predator algorithm (MPA) [24], the snake optimizer (SO) [25], the nutcracker optimization algorithm (NOA) [26], the Genghis Khan shark optimizer (GKSO) [27], and the recently proposed crested porcupine optimizer (CPO) [28], secretary bird optimization algorithm (SBOA) [29], and so on.
Thinking cognition-based algorithms represent a methodology of random search whereby the thought-cognitive process of complex human behavioral patterns in nature and daily life is simulated. Rao proposed teaching-learning-based optimization (TLBO) [30] in 2011. Das proposed student psychology-based optimization algorithm (SPBO) [31] in 2020, inspired by the behavior of students who study hard to become the best in their class. There is also poor–rich optimization (PRO) [32], gaining–sharing knowledge-based algorithm (GSK) [33], human memory optimization algorithm (HMO) [34], and others.
Physics- and mathematics-based algorithms model physical phenomena and mathematical theorems. An illustrative example is the sine cosine algorithm (SCA) [35]. Other algorithms include the electrostatic discharge algorithm (ESDA) [36], the Archimedes optimization algorithm (AOA) [37], the snow ablation optimizer (SAO) [38], Fick's law algorithm (FLA) [39], and others.
Strategy enhancement-based algorithms represent a novel category derived from the original heuristic algorithms. They effectively circumvent the inherent limitations of the original heuristics by incorporating innovative strategies or by drawing upon the superior attributes of other algorithms, thereby achieving enhanced performance in the optimization process. Examples include GSRPSO [40] and the collaboration-based hybrid GWO-SCA optimizer (cHGWOSCA) [41].
The “no free lunch” theorem [42] demonstrates that no universal optimization algorithm can achieve optimal performance on all problems. In view of this, many researchers will introduce improved strategies, algorithmic fusion, or innovative algorithms to solve complex optimization problems. Artificial protozoa optimizer (APO) is a novel metaheuristic algorithm proposed in this context.
Wang et al. [43] developed the artificial protozoa optimizer (APO) by simulating the foraging, dormancy, and reproduction behaviors of protozoa. APO employs two adaptively varying probability parameters to maintain equilibrium between the exploration and exploitation phases of the optimization process, and introduces mapping vectors to vary the mutated dimensions. The superiority and effectiveness of APO were demonstrated in [43], which presents experiments with 32 intelligent optimization algorithms on the CEC2022 test set and five engineering applications, including statistical analyses and Wilcoxon rank sum tests. Nevertheless, the position updates guided by randomly selected individuals during the autotrophic phase may slow the algorithm's convergence. Furthermore, the heterotrophic phase, which is influenced only by neighboring individuals, may make the algorithm prone to falling into a local optimum.
To further enhance APO’s ability to find excellence, this paper puts forth a multi-strategy fusion artificial protozoa optimizer, or MSAPO, as a potential solution. Comparative experiments between MSAPO and established optimization algorithms are conducted on the CEC2017 and CEC2022 test sets, respectively. The results demonstrate the efficacy and competitive advantage of MSAPO. This paper makes the following contributions to the field:
(1)
First, we adopt the piecewise chaotic opposition-based learning strategy in the stochastic initialization process, which augments the dispersion and exploration space of the initial solution set and improves the algorithm's global search ability. Next, the cyclone foraging strategy is implemented during the heterotrophic foraging phase, enabling the algorithm to identify the optimal search direction with greater precision, guided by the globally optimal individual. In addition, the hybrid mutation strategy is added in the reproduction phase to enhance the algorithm's ability to move quickly away from local optima in the early phase and to converge accurately in the later phase. The incorporation of the crisscross strategy prior to the conclusion of each iteration not only enhances the algorithm's capacity for efficient global search but also augments its ability to circumvent local optima.
(2)
The effectiveness of MSAPO is substantiated through its validation in three distinct dimensions of the CEC2017 test set and in the highest dimension of the CEC2022 test set. Furthermore, its performance is benchmarked against other state-of-the-art swarm intelligence optimization algorithms, and the experimental outcomes substantiate MSAPO’s superiority.
(3)
The efficacy of MSAPO in addressing practical engineering challenges has been substantiated through the examination of eight illustrative case studies.
The remainder of the paper is structured as illustrated in the following outline: a detailed description of the basic APO is given in Section 2. The four strategies introduced in this work and the proposed MSAPO algorithm are demonstrated in Section 3. Section 4 presents the findings of the experiments conducted on distinct test sets, offering a detailed analysis of the results. Section 5 examines the optimization outcomes of MSAPO in the context of eight engineering case studies. The final section, Section 6, presents the conclusion.

2. Artificial Protozoa Optimizer

Euglena is a representative unicellular protozoan that has some characteristics of both plants and animals, and can therefore obtain nutrients for survival by both autotrophic and heterotrophic means. Euglena reproduces asexually through a binary fission process [43]. Wang et al. developed an artificial protozoa optimizer (APO) by simulating the behaviors exhibited by Euglena, including foraging, dormant and reproductive behaviors. The mathematical model of the APO is described below, and the pseudo code is shown in Algorithm 1.
Algorithm 1: APO
Input: The population size N, the individual dimension D, Controlling parameters np, pfmax, and the maximum number of iterations T.
Output: The optimal individual ybest and its corresponding fitness fbest.
Mathematics 13 02888 i001

2.1. Population Initialization

The APO is founded upon traditional random sampling, which is employed to generate a uniformly distributed random population. $y_i = (y_{i,1}, y_{i,2}, \dots, y_{i,D})$, $i = 1, 2, \dots, N$, represents the $i$th individual in a protozoan population of size $N$, each with dimension $D$. The initial population is produced through Equation (1),
$$y_i = Lb + Rand \times (Ub - Lb),$$
where $Lb$ and $Ub$ are the lower and upper bounds of the search space, $Lb = [lb_1, lb_2, \dots, lb_D]$, $Ub = [ub_1, ub_2, \dots, ub_D]$, and $lb_d$ and $ub_d$ are the $d$th components of $Lb$ and $Ub$, respectively. The vector $Rand = [rand_1, rand_2, \dots, rand_D]$ has elements uniformly distributed between 0 and 1. In this section, the operator $\times$ denotes the Hadamard product.
At the beginning of each iteration, the current population of protozoa is sorted on the basis of the fitness value, as defined by the Equation (2). In the sorted population, a few individuals are randomly selected to enter the dormant or breeding state and the rest to enter the foraging state, based on the proportion fraction p f in Equation (3).
$$y_i = \mathrm{sort}(y_i),$$
$$pf = pf_{max} \cdot rand,$$
where the maximum proportion fraction $pf_{max}$ is set to 0.1 and $rand$ is a randomly generated number within the interval $[0, 1]$.
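As an illustration, the initialization and sorting steps of Equations (1)–(3) can be sketched in NumPy as follows. This is an illustrative sketch under our own conventions, not the authors' implementation; the sphere function is used here only as a stand-in objective:

```python
import numpy as np

def init_population(N, D, lb, ub, rng):
    """Random uniform initialization, Eq. (1): y_i = Lb + Rand * (Ub - Lb)."""
    return lb + rng.random((N, D)) * (ub - lb)

def sort_by_fitness(pop, fit):
    """Eq. (2): sort individuals by ascending fitness value."""
    order = np.argsort(fit)
    return pop[order], fit[order]

def proportion_fraction(pf_max, rng):
    """Eq. (3): pf = pf_max * rand, with pf_max = 0.1."""
    return pf_max * rng.random()

rng = np.random.default_rng(0)
pop = init_population(6, 3, np.zeros(3), np.ones(3), rng)
fit = np.sum(pop ** 2, axis=1)      # sphere function as a stand-in objective
pop, fit = sort_by_fitness(pop, fit)
pf = proportion_fraction(0.1, rng)
n_dr = int(np.ceil(pf * 6))         # individuals sent to dormancy/reproduction
```

The remaining `6 - n_dr` individuals of the sorted population would then enter the foraging state.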

2.2. Foraging

Protozoa can take up the nutrients they need to survive in both autotrophic and heterotrophic ways.

2.2.1. Autotrophic Mode

In the right light, protozoa carry out plant-like photosynthesis, using chloroplasts to produce carbohydrates for energy, and protozoa are able to respond to light cues by moving closer or further away from the light to find a suitable habitat for survival. Protozoan populations will move towards individuals whose ambient light intensity is suitable for photosynthesis. The autotrophic model is mathematically represented by the following Equation (4),
$$y_i^{new}(t+1) = y_i(t) + f \cdot \left( y_j(t) - y_i(t) + \frac{1}{np} \sum_{k=1}^{np} w_a \cdot \left( y_{k-}(t) - y_{k+}(t) \right) \right) \times M_f,$$
$$f = rand \cdot \left( 1 + \cos\left( \frac{t}{T} \pi \right) \right),$$
$$np \le np_{max} = \left\lceil \frac{N-1}{2} \right\rceil,$$
$$w_a = e^{- \left| \frac{f(y_{k-}(t))}{f(y_{k+}(t)) + eps} \right|},$$
$$M_f[d_i] = \begin{cases} 1, & \text{if } d_i \in \mathrm{randperm}\left( D, \left\lceil D \cdot \frac{i}{N} \right\rceil \right) \\ 0, & \text{otherwise,} \end{cases}$$
where $y_i^{new}(t+1)$ is the updated position of the $t$th-generation protozoan $y_i(t)$, and the current and maximum numbers of iterations are represented by $t$ and $T$, respectively. $y_j(t)$ is a randomly selected protozoan in generation $t$. The current population is sorted by fitness value from smallest to largest. $y_{k-}(t)$ denotes the $k$th left neighbor of the current individual $y_i(t)$, a protozoan with a ranking index lower than $i$, randomly selected from the population; $y_{k+}(t)$ denotes the $k$th right neighbor of $y_i(t)$, a randomly selected protozoan with a ranking index greater than $i$. $np$ denotes the number of selected neighbor pairs, $f$ denotes the foraging factor, $w_a$ is the weight factor in the autotrophic mode, $f(\cdot)$ calculates the fitness value of an individual, and $M_f$ is a random mapping vector of dimension $D$ that determines which dimensions mutate in the foraging state.
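A minimal NumPy sketch of the autotrophic update of Equations (4)–(8) might look as follows. It uses 0-based indexing, `n_pairs` stands in for the paper's $np$, and the uniform random-neighbor selection is our own simplification of the authors' scheme (the population is assumed sorted by ascending fitness):

```python
import numpy as np

def autotrophic_update(pop, fit, i, t, T, rng, eps=np.finfo(float).eps):
    """Illustrative autotrophic move, Eqs. (4)-(8); pop is sorted by fitness."""
    N, D = pop.shape
    f = rng.random() * (1 + np.cos(t / T * np.pi))   # foraging factor, Eq. (5)
    j = int(rng.integers(N))                         # random individual y_j
    np_max = int(np.ceil((N - 1) / 2))               # bound on neighbor pairs, Eq. (6)
    n_pairs = int(rng.integers(1, np_max + 1))
    drift = np.zeros(D)
    for _ in range(n_pairs):
        k_minus = int(rng.integers(0, i)) if i > 0 else i       # better-ranked neighbor
        k_plus = int(rng.integers(i + 1, N)) if i < N - 1 else i  # worse-ranked neighbor
        w_a = np.exp(-abs(fit[k_minus] / (fit[k_plus] + eps)))  # weight factor, Eq. (7)
        drift += w_a * (pop[k_minus] - pop[k_plus])
    Mf = np.zeros(D)                                 # mapping vector, Eq. (8)
    Mf[rng.permutation(D)[: int(np.ceil(D * (i + 1) / N))]] = 1
    return pop[i] + f * (pop[j] - pop[i] + drift / n_pairs) * Mf
```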

2.2.2. Heterotrophic Mode

In conditions of low light, protozoa display animal-like behaviors, including the absorption of organic matter from their surrounding environment. Assuming that food is abundant in the vicinity of $y_{near}(t)$, the protozoa will move towards it. The mathematical model of the heterotrophic mode is as follows,
$$y_i^{new}(t+1) = y_i(t) + f \cdot \left( y_{near}(t) - y_i(t) + \frac{1}{np} \sum_{k=1}^{np} w_h \cdot \left( y_{i-k}(t) - y_{i+k}(t) \right) \right) \times M_f,$$
$$y_{near}(t) = \left( 1 \pm Rand \cdot \left( 1 - \frac{t}{T} \right) \right) \times y_i(t),$$
$$w_h = e^{- \left| \frac{f(y_{i-k}(t))}{f(y_{i+k}(t)) + eps} \right|},$$
where $y_{near}(t)$ is a location near $y_i(t)$, and "$\pm$" indicates that $y_{near}(t)$ may lie on either side of $y_i(t)$. $y_{i-k}(t)$ and $y_{i+k}(t)$ denote a pair of neighbors of $y_i(t)$, and $w_h$ is the weight factor in the heterotrophic mode.
In the foraging state, the autotrophic mode is concerned with a comprehensive search of the surrounding area, thereby enhancing the algorithm’s exploration capacity. Conversely, the heterotrophic mode is geared towards identifying regions with greater potential, thus bolstering the exploitation capability. The transition between autotrophic and heterotrophic modes is governed by Equation (12),
$$y_i^{new}(t+1) = \begin{cases} \text{Equation (4)}, & \text{if } rand < p_{ah} \\ \text{Equation (9)}, & \text{otherwise,} \end{cases}$$
$$p_{ah} = \frac{1}{2} \left( 1 + \cos\left( \frac{t}{T} \pi \right) \right).$$
The parameter p a h , which represents the probability of a given protozoan exhibiting autotrophic or heterotrophic modes, is observed to decrease as the iteration proceeds. This shift in foraging tendency from autotrophic to heterotrophic modes is a consequence of the aforementioned decrease in p a h .
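The decay of $p_{ah}$ in Equation (13) can be verified with a few lines of NumPy (an illustrative sketch; the function name is our own):

```python
import numpy as np

def p_ah(t, T):
    """Eq. (13): probability of choosing the autotrophic mode in generation t."""
    return 0.5 * (1 + np.cos(t / T * np.pi))

# p_ah decays monotonically from 1 (all-autotrophic, exploratory) at t = 0
# to 0 (all-heterotrophic, exploitative) at t = T.
schedule = [p_ah(t, 100) for t in (0, 25, 50, 75, 100)]
```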

2.3. Dormancy or Reproduction

2.3.1. Dormancy

Protozoa may enter a dormant state in unfavorable environments. In such cases, the population is maintained by the production of new individuals that replace those in the dormant state. The model of the dormant state, as it is represented mathematically, is as follows:
$$y_i^{new}(t+1) = Lb + Rand \times (Ub - Lb).$$

2.3.2. Reproduction

Protozoa may reproduce asexually by binary fission at appropriate ages and health conditions. The process of reproduction is modelled by the generation of replicate protozoa and the introduction of perturbations, expressed by Equation (15):
$$y_i^{new}(t+1) = y_i(t) \pm rand \cdot \left( Lb + Rand \times (Ub - Lb) \right) \times M_r,$$
$$M_r[d_i] = \begin{cases} 1, & \text{if } d_i \in \mathrm{randperm}\left( D, \left\lceil D \cdot rand \right\rceil \right) \\ 0, & \text{otherwise,} \end{cases}$$
where the symbol "$\pm$" indicates that the mutation can be either forward or reverse, $\lceil \cdot \rceil$ represents the round-up (ceiling) operation, and $M_r$ denotes a random mapping vector that determines which dimensions mutate during reproduction.
The dormant state of the protozoa represents the exploratory phase of the algorithm, while the reproductive state corresponds to the developmental phase. The transition between these two states is governed by Equation (17).
$$y_i^{new}(t+1) = \begin{cases} \text{Equation (14)}, & \text{if } rand < p_{dr} \\ \text{Equation (15)}, & \text{otherwise,} \end{cases}$$
$$p_{dr} = \frac{1}{2} \left( 1 + \cos\left( \left( 1 - \frac{i}{N} \right) \pi \right) \right),$$
where p d r is the probability parameter between dormancy and reproduction.
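The dormancy-or-reproduction branch of Equations (14)–(18) can be sketched as follows (an illustrative NumPy sketch with 0-based index `i`, not the authors' code):

```python
import numpy as np

def dormancy_or_reproduction(y_i, i, N, lb, ub, rng):
    """Sketch of Eqs. (14)-(18): a dormant individual is re-initialized in the
    search space; a reproducing one is duplicated and perturbed on randomly
    chosen dimensions."""
    D = y_i.size
    p_dr = 0.5 * (1 + np.cos((1 - (i + 1) / N) * np.pi))   # Eq. (18), 0-based i
    if rng.random() < p_dr:
        return lb + rng.random(D) * (ub - lb)              # dormancy, Eq. (14)
    Mr = np.zeros(D)                                       # mapping vector, Eq. (16)
    Mr[rng.permutation(D)[: int(np.ceil(D * rng.random()))]] = 1
    sign = rng.choice([-1.0, 1.0])                         # the "+/-" in Eq. (15)
    return y_i + sign * rng.random() * (lb + rng.random(D) * (ub - lb)) * Mr

rng = np.random.default_rng(2)
child = dormancy_or_reproduction(np.full(5, 0.5), 2, 10, np.zeros(5), np.ones(5), rng)
```

Note that better-ranked individuals (small $i$) have a higher $p_{dr}$ and thus a higher chance of dormancy under this formula.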
At the conclusion of each iteration, APO determines the final population through a greedy selection process, with the fitness value, as calculated by the formula presented in Equation (19), serving as the determining factor.
$$y_i(t+1) = \begin{cases} y_i^{new}(t+1), & \text{if } f(y_i^{new}(t+1)) < f(y_i(t)) \\ y_i(t), & \text{otherwise.} \end{cases}$$

3. The Proposed MSAPO

As evidenced in the literature [43], the basic APO has demonstrated effectiveness in the domain of optimization problems. However, the algorithm exhibits limitations when confronted with highly complex, multi-dimensional optimization challenges, including relatively slow convergence and a proclivity towards locally optimal solutions, which limit its effectiveness and applicability in solving complex optimization problems. In view of these constraints, we propose an enhanced APO, termed MSAPO, to address the shortcomings of the basic APO. The MSAPO flowchart is presented in Figure 1. MSAPO integrates four additional strategies into the basic APO: as shown in Figure 1, it employs the piecewise chaotic opposition-based learning strategy to enhance the diversity of the initial population, integrates the cyclone foraging and hybrid mutation strategies to prevent premature convergence, and utilizes the crisscross strategy to balance the population's exploration and exploitation capabilities.

3.1. Piecewise Chaotic Opposition-Based Learning Strategy

The fundamental APO population initialization employs a random distribution, which is a straightforward approach but may result in an uneven spatial distribution of protozoan individuals. This, in turn, may prompt the algorithm to converge on a local optimum. To enhance the initial population quality of APO and circumvent the potential for local optimum convergence, piecewise chaotic opposition-based learning strategy is introduced in this section.
The fundamental premise of traditional OBL (opposition-based learning) is to derive the inverse solution to a feasible solution and select a superior candidate solution by evaluating the feasible solution and the inverse solution. The OBL formulation is as follows:
$$y_{i\_OBL} = Lb + Ub - y_i,$$
where y i _ O B L represents the inverse solution corresponding to the ith individual y i in the random initial population. The application of OBL serves to facilitate the investigation of hitherto unconsidered solution domains within the search field, thereby enhancing the diversity of the population.
In an effort to more fully address the issue of unequal initial population distribution, this paper introduces the use of Piecewise chaotic mapping in OBL. Piecewise chaotic mapping is a segmentally defined chaotic system that divides the entire definition domain into multiple subintervals and applies distinct nonlinear transformations on each subinterval. This type of mapping is typically characterised by more intricate mathematical expressions and a greater degree of chaotic behavior, making it well suited to scenarios that demand heightened complexity and flexibility. The aforementioned mathematical model of piecewise chaotic mapping can be expressed as the following Equation (21),
$$PWLCM(k+1) = \begin{cases} \dfrac{PWLCM(k)}{P}, & 0 \le PWLCM(k) < P \\ \dfrac{PWLCM(k) - P}{0.5 - P}, & P \le PWLCM(k) < 0.5 \\ \dfrac{1 - P - PWLCM(k)}{0.5 - P}, & 0.5 \le PWLCM(k) < 1 - P \\ \dfrac{1 - PWLCM(k)}{P}, & 1 - P \le PWLCM(k) < 1, \end{cases}$$
where $PWLCM(k)$ denotes the $k$th chaotic value of the Piecewise chaotic sequence. The initial value of the sequence is a random number drawn from the interval $[0, 1]$, and the chaos parameter is $P = 0.1$.
The generation of chaotic sequences with a more balanced distribution is achieved through the application of Equation (21), which in turn is utilized to generate novel solutions, as illustrated in Equation (22),
$$y_{i\_COBL} = Lb + Ub - PWLCM \times y_i,$$
where $y_{i\_COBL}$ represents the chaotic opposite solution corresponding to the individual $y_i$ of the initial population, and $PWLCM$ signifies the corresponding sequence of chaotic mapping values. The operator $\times$ again denotes the Hadamard product of two matrices.
Piecewise chaotic opposition-based learning strategy is utilized for the generation of a population of protozoa, which is then ranked and compared with the randomly generated population. The top N superior individuals are subsequently selected as the initial population for subsequent operations. This strategy generates diverse populations through mirror flipping, retaining superior individuals as the initial population for iterative cycles. This approach enhances the diversity of the initial population and effectively prevents excessive clustering of the population near suboptimal positions. As this strategy operates solely during the initial population phase, it imposes minimal impact on the algorithm’s computational complexity.
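The whole strategy (Equations (21) and (22) plus the keep-the-best-$N$ selection) can be sketched in NumPy as follows; this is an illustrative sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def pwlcm_sequence(n, p=0.1, seed=0):
    """Piecewise chaotic (PWLCM) sequence, Eq. (21), chaos parameter P = 0.1."""
    rng = np.random.default_rng(seed)
    x = rng.random()            # random initial value in [0, 1)
    seq = np.empty(n)
    for k in range(n):
        if x < p:
            x = x / p
        elif x < 0.5:
            x = (x - p) / (0.5 - p)
        elif x < 1 - p:
            x = (1 - p - x) / (0.5 - p)
        else:
            x = (1 - x) / p
        seq[k] = x
    return seq

def piecewise_chaotic_obl(pop, lb, ub, fitness, seed=0):
    """Eq. (22): build chaotic opposite solutions and keep the N best
    individuals out of the combined 2N candidates."""
    N, D = pop.shape
    pw = pwlcm_sequence(N * D, seed=seed).reshape(N, D)
    opp = lb + ub - pw * pop                 # chaotic opposition, Eq. (22)
    both = np.vstack([pop, opp])
    fit = np.apply_along_axis(fitness, 1, both)
    return both[np.argsort(fit)[:N]]         # top-N selection
```

Because the original individuals are kept in the candidate pool, the selected initial population is never worse than the purely random one.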

3.2. Cyclone Foraging Strategy

In the autotrophic foraging phase, APO relies heavily on the guidance of random individuals to update positions. This process is exploratory but often accompanied by a high degree of blindness and randomness, resulting in a relatively slow search for the optimum. The heterotrophic foraging phase exhibits similar characteristics: the algorithm is influenced by two types of individuals, those in nearby positions and the current individual's front and rear neighbors ranked by fitness value. This can cause the algorithm to reach a local optimum from which it is difficult to escape. To address this limitation, the cyclone foraging strategy [44] is integrated into the heterotrophic foraging phase of APO with the following mathematical model:
$$y_i^{new}(t+1) = \begin{cases} y_{best}(t) + r \cdot \left( y_{best}(t) - y_i(t) \right) + \beta \cdot \left( y_{best}(t) - y_i(t) \right), & i = 1 \\ y_{best}(t) + r \cdot \left( y_{i-1}(t) - y_i(t) \right) + \beta \cdot \left( y_{best}(t) - y_i(t) \right), & \text{otherwise,} \end{cases}$$
$$\beta = 2 e^{r_1 \frac{T - t + 1}{T}} \cdot \sin(2 \pi r_1),$$
where $y_i^{new}(t+1)$ is the updated position of the $i$th protozoan $y_i(t)$ in generation $t$, and $y_{best}(t)$ is the optimal individual in the $t$th generation. The random numbers $r$ and $r_1$ are drawn from a uniform distribution on the interval $[0, 1]$, and $\beta$ represents the weight coefficient.
Figure 2 illustrates the behavioral pattern of APO after the introduction of the cyclone foraging strategy. The red blob in Figure 2 represents the current optimal protozoan individual, and the black blobs represent candidate individuals. As shown in Figure 2, during the update process, $y_i^{new}$ is simultaneously influenced by $y_{i-1}$ and $y_{best}$, generating a combined force along the spiral trajectory that propels the population toward the optimal point at an accelerated pace. In this way, APO updates the candidate individuals of the population through Equations (23) and (24), approaching the optimal individual along a spiral trajectory. The guidance of the global optimal individual enables APO to locate the search direction with greater accuracy, reducing the need for random wandering and significantly improving the speed of locating the optimal solution. Through the population renewal mechanism of cyclone foraging, individuals can approach the global optimum via richer convergence pathways. This enhances diversity in population renewal while maintaining incremental improvement, thereby avoiding local optima traps.
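A compact NumPy sketch of the cyclone foraging update of Equations (23) and (24) follows (illustrative only, with 0-based indexing, so the paper's $i = 1$ case becomes `i == 0` here):

```python
import numpy as np

def cyclone_foraging(pop, y_best, t, T, rng):
    """Cyclone foraging, Eqs. (23)-(24): each individual spirals toward the
    current best, chained to its predecessor in the sorted population."""
    N, D = pop.shape
    new_pop = np.empty_like(pop)
    for i in range(N):
        r, r1 = rng.random(), rng.random()
        beta = 2 * np.exp(r1 * (T - t + 1) / T) * np.sin(2 * np.pi * r1)  # Eq. (24)
        prev = y_best if i == 0 else pop[i - 1]     # first individual follows y_best
        new_pop[i] = y_best + r * (prev - pop[i]) + beta * (y_best - pop[i])
    return new_pop

rng = np.random.default_rng(4)
pop = rng.random((5, 3))
updated = cyclone_foraging(pop, pop[0], t=10, T=100, rng=rng)
```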

3.3. Hybrid Mutation Strategy

Figure 3 illustrates the density function curves of the standard Cauchy distribution (Cauchy(1,0)) and the standard Gaussian distribution (Gaussian(0,1)). Both are continuous probability distributions that represent disturbance terms in nature. Their key difference lies in tail behavior: the Gaussian density exhibits exponential decay, so extreme outcomes rapidly become unlikely, characterizing it as a light-tailed distribution. In contrast, the Cauchy density has power-law decay, which increases the likelihood of extreme outliers. In Cauchy(1,0), 1 denotes the scale parameter, the half-width at half-maximum, while 0 denotes the location parameter, the peak of the distribution. In Gaussian(0,1), 0 is the mean and 1 is the standard deviation. Owing to these properties, the Cauchy distribution places greater probability mass in the tails and tends to generate mutation values far from the origin, making it better suited for extensive searches in the early stages of an iteration, whereas the Gaussian distribution concentrates near the origin, making it more appropriate for fine-tuning in the later stages.
The hybrid mutation strategy employs the mixed mutation mechanism of Gaussian and Cauchy distributions to adjust the position of individuals with the objective of achieving diversification of mutation methods. The specific mathematical model is as follows:
$$y_i^{new}(t+1) = y_i(t) + y_i(t) \times \left( w_1(t) \cdot Cauchy(1,0) + w_2(t) \cdot Gaussian(0,1) \right),$$
$$w_1(t) = 1 - \left( \frac{t}{T} \right)^3, \qquad w_2(t) = \left( \frac{t}{T} \right)^3,$$
where $y_i^{new}(t+1)$ is the position of the individual $y_i(t)$ after mutation, $Cauchy(1,0)$ denotes a sequence of variates obeying the Cauchy distribution, and $Gaussian(0,1)$ denotes a sequence of variates obeying the Gaussian distribution. $w_1(t)$ and $w_2(t)$ denote the Cauchy and Gaussian mutation factors, respectively, and their variation is shown in Figure 4. As shown in Figure 4, in the early stages of optimization the algorithm leverages large outlying mutations for exploration and later transitions to smaller perturbations for precise searches. Thus, the parameters $w_1$ and $w_2$, which sum to 1, balance these two aspects of the algorithm.
This paper introduces hybrid mutation strategy in the reproduction phase of the algorithm. By dynamically adjusting the weights of the Cauchy mutation factor and the Gaussian mutation factor, the algorithm is able to flexibly switch mutation modes during the iteration process, thereby achieving rapid escape from the local optimum in the early stage and accurate convergence in the later stage.
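The hybrid mutation of Equations (25) and (26) can be sketched in a few NumPy lines (an illustrative sketch; function and variable names are our own):

```python
import numpy as np

def hybrid_mutation(y, t, T, rng):
    """Hybrid Cauchy/Gaussian mutation, Eqs. (25)-(26): Cauchy-dominated early
    (large escape jumps), Gaussian-dominated late (fine tuning)."""
    w1 = 1 - (t / T) ** 3                   # Cauchy weight, Eq. (26)
    w2 = (t / T) ** 3                       # Gaussian weight; w1 + w2 = 1
    cauchy = rng.standard_cauchy(y.shape)   # Cauchy(1, 0) samples
    gauss = rng.standard_normal(y.shape)    # Gaussian(0, 1) samples
    return y + y * (w1 * cauchy + w2 * gauss)   # Eq. (25)

rng = np.random.default_rng(5)
mutant = hybrid_mutation(np.ones(6), t=90, T=100, rng=rng)
```

At `t = 90` of `T = 100`, the Gaussian term already carries weight $w_2 = 0.729$, so perturbations are mostly small.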

3.4. Crisscross Strategy

To prevent premature convergence and to circumvent the potential pitfall of a local optimum, MSAPO introduces the crisscross strategy before the conclusion of each iteration [45]. Horizontal crossover reduces the blind spots encountered by the algorithm during the search process, thereby enhancing its global search capability. Vertical crossover, on the other hand, facilitates the renewal of dimensions that have already stagnated at a point of convergence.

3.4.1. Horizontal Crossover

The horizontal crossover operation is analogous to the crossover operation in genetic algorithms, facilitating the exchange of information between different individuals in the same dimension. First, parent individuals are randomly paired, and the crossover operation is performed in the $d$th dimension as follows:
$$\begin{cases} Cy_{i,d}(t) = r \cdot y_{i,d}(t) + (1 - r) \cdot y_{j,d}(t) + c \cdot \left( y_{i,d}(t) - y_{j,d}(t) \right) \\ Cy_{j,d}(t) = r \cdot y_{j,d}(t) + (1 - r) \cdot y_{i,d}(t) + c \cdot \left( y_{j,d}(t) - y_{i,d}(t) \right), \end{cases}$$
where $Cy_{i,d}(t)$ and $Cy_{j,d}(t)$ denote the $d$th dimensional components of the two offspring $Cy_i(t)$ and $Cy_j(t)$ produced by the $i$th protozoan $y_i(t)$ and the $j$th protozoan $y_j(t)$ in generation $t$ after horizontal crossover. $r$ is a random number in the interval $[0, 1]$ and $c$ is a random number in the interval $[-1, 1]$.
The solutions generated by the horizontal crossover must be evaluated in comparison with their parental counterparts, and the individuals exhibiting the highest degree of fitness are selected for retention. This approach guarantees the efficacy and precision of the algorithmic convergence and optimization.
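The pairing, crossover of Equation (27), and greedy retention can be sketched as follows (an illustrative NumPy sketch with per-dimension random coefficients; not the authors' code):

```python
import numpy as np

def horizontal_crossover(pop, fitness, rng):
    """Horizontal crossover, Eq. (27), with greedy retention: randomly paired
    parents exchange information in every dimension; an offspring replaces its
    parent only when it is fitter."""
    N, D = pop.shape
    out = pop.copy()
    pairs = rng.permutation(N)
    for a in range(0, N - 1, 2):
        i, j = pairs[a], pairs[a + 1]
        r = rng.random(D)                      # r in [0, 1] per dimension
        c = rng.uniform(-1, 1, D)              # c in [-1, 1] per dimension
        ci = r * pop[i] + (1 - r) * pop[j] + c * (pop[i] - pop[j])
        cj = r * pop[j] + (1 - r) * pop[i] + c * (pop[j] - pop[i])
        if fitness(ci) < fitness(pop[i]):      # greedy retention
            out[i] = ci
        if fitness(cj) < fitness(pop[j]):
            out[j] = cj
    return out
```

By construction, no individual's fitness can worsen after this step.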

3.4.2. Vertical Crossover

APO is prone to local optimality at a later stage, largely due to the fact that some individuals in the population exhibit local optimality in a specific dimension, which results in premature convergence of the entire system. To address this issue, MSAPO employs a strategy of horizontal and vertical crossover operations on newborn individuals, improving the algorithm’s capacity to circumvent local optima.
If we assume that newborn protozoan C y i ( t + 1 ) crosses vertically in the d1th and d2th dimensions, the calculation is as follows:
S y i , j ( t + 1 ) = r C y i , d 1 ( t + 1 ) + ( 1 r ) C y i , d 2 ( t + 1 ) ,
$$y_i^{new}(t+1) = \begin{cases} Sy_i(t+1), & \text{if } f\big(Sy_i(t+1)\big) < f\big(y_i^{new}(t+1)\big), \\ y_i^{new}(t+1), & \text{otherwise}, \end{cases}$$
where $Sy_i(t+1)$ is the individual obtained by applying vertical crossover to $Cy_i(t+1)$, and $r$ is a random number in $[0, 1]$.
The vertical crossover process generates offspring with better fitness by integrating information from different dimensions, and the greedy selection mechanism of Equation (29) guarantees that the crossover operation never degrades a solution. The combination of horizontal and vertical crossover yields stronger global search ability and a greater capacity to escape local optima: once an individual escapes a local optimum through vertical crossover, its updated dimension information is rapidly disseminated through the population via horizontal crossover, driving the population as a whole towards a superior solution.
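The vertical step plus greedy selection can be sketched as below; the function name is illustrative and `fitness` stands for the objective $f$ being minimized.

```python
import numpy as np

def vertical_crossover(cy_i, fitness, rng=None):
    """Vertical crossover on one newborn individual: blend two randomly
    chosen dimensions d1 and d2, then keep the result only if its fitness
    improves (greedy selection, as in Equation (29))."""
    rng = rng or np.random.default_rng()
    d1, d2 = rng.choice(cy_i.size, size=2, replace=False)
    r = rng.random()
    sy_i = cy_i.copy()
    # only dimension d1 is rewritten; all other dimensions are preserved
    sy_i[d1] = r * cy_i[d1] + (1 - r) * cy_i[d2]
    return sy_i if fitness(sy_i) < fitness(cy_i) else cy_i
```

Because of the greedy selection, the returned individual is never worse than the input, regardless of which dimensions were drawn.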
The pseudo code for MSAPO is given in Algorithm 2.
Algorithm 2: MSAPO
Input: The population size N, the individual dimension D, the controlling parameters np and pfmax, and the maximum number of iterations T.
Output: The optimal individual ybest and its corresponding fitness fbest.
[Pseudocode figure: Mathematics 13 02888 i002]

3.5. Computational Complexity

The basic APO performs sorting, dormancy and reproduction, autotrophic and heterotrophic foraging, and fitness evaluations with a time complexity of $O(TN(\log N + D + f))$, where $f$ denotes the cost of one fitness evaluation. The piecewise chaotic opposition-based learning strategy adds the generation of chaotic opposition solutions and their sorted comparison with the original initial population, increasing the complexity by $O(ND + 2N\log(2N))$. Introducing the cyclone foraging strategy in the heterotrophic phase adds $O(0.5T(N - Np_f)D)$, the hybrid mutation strategy adds $O(TNp_fD)$, and the crisscross strategy adds $O(2TND)$. To reduce the computational cost, the evaluation values are cached once after the horizontal crossover computation, so the additional fitness-evaluation complexity introduced by the strategies, with optimal solutions retained, is $O(TNf)$. Therefore, the final time complexity of MSAPO is $O(N(D + 2\log(2N)) + TN(3.5D + \log N + 2f))$.

4. Numerical Experiments and Analysis

4.1. Baseline Algorithms and Benchmark Function Sets

In order to corroborate the enhanced performance of the MSAPO put forth in this paper, the CEC2017 and CEC2022 test sets are chosen in this section for simulation experiments and analysis of the resulting data. All functions of CEC2017 are rotated and shifted, increasing the difficulty of the optimization search. CEC2017 contains 29 test functions: unimodal functions (F1, F3), simple multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30). The CEC2022 test set includes 12 test functions. A unimodal function has only one global optimum and is employed to assess the exploitation capability of an optimization algorithm; a simple multimodal function contains multiple local optima and is used to evaluate the capacity of an algorithm to escape local optima; hybrid functions further increase the complexity of the solution. Composition functions present more complex optimization problems and require algorithms with strong global exploration and local exploitation capabilities. The efficacy of an optimization algorithm can therefore be assessed objectively through test functions that cover a diverse array of optimization problems.
The MSAPO algorithm is extensively compared with four classes of existing optimization algorithms: (1) well-known algorithms published in recent years, including the slime mould algorithm (SMA) [46], African vultures optimization algorithm (AVOA) [47], snake optimizer (SO) [25], artificial rabbits optimization (ARO) [48], nutcracker optimizer algorithm (NOA) [26], and PID-based search algorithm (PSA) [49]; (2) widely cited and studied classical algorithms, including the grey wolf optimizer (GWO) [21], whale optimization algorithm (WOA) [22], and salp swarm algorithm (SSA) [50]; (3) variants of the classical PSO algorithm, namely XPSO [51], FVICLPSO [52], and SRPSO [53]; (4) the high-performance optimizers LSHADE_cnEpSin [54] and LSHADE_SPACMA [55], which won the CEC2017 competition.
Initially, MSAPO is compared with the first three classes of algorithms on the highest dimension of the CEC2022 test set. To further validate the efficiency of MSAPO, experiments including the fourth class of comparison algorithms are conducted on the 30-, 50-, and 100-dimensional CEC2017 test set. To guarantee the comparability of the experimental results, the population size N of all algorithms was set to 100 and the maximum number of iterations $T_{max}$ was set to 1000, corresponding to a maximum of 100,000 fitness evaluations. To eliminate the influence of chance factors, each algorithm was executed independently 30 times.
We observed that certain algorithms employ population-reduction strategies (such as LSHADE_cnEpSin and LSHADE_SPACMA), while others perform multiple evaluations per individual per iteration (such as SO and NOA). For these algorithms, we recorded the fitness value of the global optimum after each evaluation and then grouped the records into 1000 sets of experimental data, each comprising 100 evaluations, thereby ensuring the fairness of the comparative trials.

4.2. Sensitivity Analysis of Parameter

In MSAPO, we introduce an additional chaotic parameter P . To investigate whether the value of parameter P significantly impacts algorithm performance and to determine its approximate optimal value, we conducted sensitivity analysis experiments on this parameter.
According to Equation (21), the parameter $P$ takes values within the range $0 < P < 0.5$. Therefore, in this experiment, the candidate values of $P$ were set to 0.05, 0.1, …, 0.5. Figure 5 presents the convergence curves of MSAPO under the different parameter settings. As shown in Figure 5, the performance of MSAPO varies little across parameter values, indicating that the algorithm is not sensitive to the value of $P$. However, with $P = 0.1$ the algorithm consistently achieves slightly better performance. Therefore, the default value of $P$ in this paper is set to 0.1.
Table S1 shows the average values and rankings obtained using different values of the chaotic parameter ( P ) on the 30-dimensional CEC2017 test set. As indicated by the ranking in the last row of Table S1, MSAPO achieves the best overall ranking on the test set when P = 0.1 . Therefore, the chaotic parameter P = 0.1 is adopted for subsequent numerical experiments.
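Equation (21) is not reproduced in this excerpt; the sketch below assumes the standard piecewise chaotic map with control parameter $P$, which matches the stated range $0 < P < 0.5$. The function name is illustrative.

```python
def piecewise_map(x, P=0.1):
    """One iteration of the standard piecewise chaotic map on [0, 1].

    Assumed form of Equation (21); 0 < P < 0.5 is the control parameter.
    The map is chaotic and keeps iterates inside [0, 1]."""
    if x < P:
        return x / P
    if x < 0.5:
        return (x - P) / (0.5 - P)
    if x < 1 - P:
        return (1 - P - x) / (0.5 - P)
    return (1 - x) / P

# a chaotic sequence of this kind can be used to spread initial
# candidate positions over [0, 1] before scaling to the search bounds
seq = [0.37]
for _ in range(100):
    seq.append(piecewise_map(seq[-1], P=0.1))
```

Since every branch maps its subinterval onto $[0, 1]$, the generated sequence stays in the unit interval while visiting it far more uniformly than a fixed-point iteration would.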

4.3. Ablation Experiment

MSAPO employs four strategies that collectively enhance its performance on complex engineering optimization problems. To validate their effectiveness, we constructed four variants, APO1, APO2, APO3, and APO4, each obtained by removing one of the four strategies: the piecewise chaotic opposition-based learning strategy, the cyclone foraging strategy, the hybrid mutation strategy, and the crisscross strategy, respectively. We performed ablation tests on the 30-dimensional CEC2017 test set, comparing these variants with MSAPO and the original APO.
Table 1 shows the average values of the optimal solutions found by the six algorithms, with selected convergence curves depicted in Figure 6. The results in Table 1 give the performance ranking MSAPO > APO1 > APO2 > APO3 > APO4 > APO, indicating that removing any strategy reduces effectiveness. The crisscross strategy has the greatest impact on algorithm performance, the piecewise chaotic opposition-based learning strategy the least, and the other two strategies also contribute measurably.

4.4. Algorithm Parameter Settings and Performance Indicators

The parameter settings of the algorithms employed in the experiments are presented in Table 2. Five performance metrics are used to analyze the results of MSAPO against the other algorithms: Mean, Best, standard deviation (Std), the mean rank of the Friedman test (MR), and the Wilcoxon rank-sum test, each computed over 30 runs on every test function.
Mean and Best provide an intuitive reflection of solution quality, whereas Std reflects the concentration of the results and hence the stability of the algorithm. The ranking of each algorithm on a given test function is derived from the mean rank (MR) of the Friedman test. The Wilcoxon rank-sum test ascertains whether the fitness values of two algorithms originate from distributions with an identical median, i.e., whether they are significantly different. In this study, a one-sided Wilcoxon rank-sum test with a significance level of 0.05 was conducted between MSAPO and each comparison algorithm. In the reported results, “+” signifies that MSAPO is significantly better than the comparison algorithm, “−” signifies that it is significantly worse, and “=” signifies that there is no significant difference between the two. Together, these indicators offer a comprehensive and detailed reference for analyzing the experimental results.
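As an illustration of how the “+/−/=” markers can be derived, the following self-contained sketch implements the one-sided rank-sum test via the usual normal approximation with tie-averaged ranks (the paper’s exact implementation is not specified; names and the approximation are assumptions):

```python
import math

def _ranks(values):
    """Average ranks (1-based) of a list, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def ranksum_marker(msapo, other, alpha=0.05):
    """Return '+', '-', or '=' for "MSAPO significantly smaller (better)",
    "significantly larger (worse)", or "no significant difference"."""
    n1, n2 = len(msapo), len(other)
    ranks = _ranks(list(msapo) + list(other))
    W = sum(ranks[:n1])                         # rank sum of MSAPO's runs
    mu = n1 * (n1 + n2 + 1) / 2                 # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (W - mu) / sigma
    p_less = 0.5 * math.erfc(-z / math.sqrt(2))  # P(Z <= z): MSAPO smaller
    if p_less < alpha:
        return '+'
    if p_less > 1 - alpha:
        return '-'
    return '='
```

With 30 runs per algorithm, the normal approximation is standard; a library routine such as SciPy’s rank-sum test would serve equally well.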

4.5. Analysis of Results Under CEC2022

Table 3 and Table S2 present the statistical metrics of the 14 algorithms on the 12 test functions of the 20-dimensional CEC2022 test set, including Mean, Best, Std, MR, Rank, and the Wilcoxon rank-sum tests, together with the average MR (Average MR) over the 12 functions and the combined rank (Total Rank). As evidenced in Table 3 and Table S2, MSAPO exhibits the best Mean performance on eight test functions: F1, F3, F5–F9, and F12. It converges stably to the theoretical optimum on F1, F3, and F5, indicating a robust ability to escape local optima and making MSAPO a versatile choice for diverse types of optimization problems. MSAPO achieves the smallest Std of all algorithms on F1, F3, F5, F6, F9, F10, and F12, demonstrating the stability of its results. The Wilcoxon rank-sum test and the Rank metrics on the 12 tested functions indicate that MSAPO significantly outperforms the other compared algorithms on 10 functions and ranks first. Additionally, MSAPO achieves second place on F4 and F10: on F4, APO exhibits a slight advantage over MSAPO, whereas FVICLPSO demonstrates superior performance on F10. Collectively, MSAPO attains an average MR of 1.7500 across the 12 functions, ultimately securing the top ranking.
Figure 7 and Figure 8 illustrate the mean convergence curves and box plots of the 14 algorithms under the 20-dimensional CEC2022 test set. As illustrated in Figure 7, for the unimodal function F1, APO is unable to achieve a high convergence accuracy for the algorithm due to its slow convergence speed. In contrast, MSAPO expedites the convergence speed and optimality-finding ability of the algorithm through the utilization of a high-quality initial population and a hybrid mutation strategy, thereby enabling it to reach the optimal solution in a more rapid manner. In the context of the simple multimodal function F4, MSAPO’s principal competitor is APO. MSAPO demonstrates a superior convergence rate in the initial phase of the iteration, while APO exhibits a greater capacity to identify the optimal solution in the subsequent phase. Despite MSAPO’s inability to retain its leading position in this domain, the performance gap between it and APO remains relatively narrow. In addressing the intricate optimization issues pertaining to other combinatorial and composite functions within the test set, MSAPO has demonstrated its capacity to identify superior fitness values with a reduced number of iterations, thereby attaining enhanced convergence precision. This outcome suggests that MSAPO enhances the algorithmic convergence precision and its capacity to circumvent local optima through the deployment of cyclone foraging strategy and crisscross strategy. As illustrated in Figure 8, MSAPO exhibits the narrowest and lowest box across the majority of test functions, thereby substantiating its superior stability and optimality-finding capabilities. Figure 9 presents radar charts of the comprehensive ranking of MSAPO in comparison with other algorithms. The size of the enclosed area in the chart is indicative of the algorithm’s performance, allowing for an intuitive understanding of MSAPO’s notable superiority.

4.6. Result Analysis on CEC2017

In order to circumvent the limitations of a single test set, this section adopts CEC2017, which offers analogous function types to CEC2022 together with a more expansive array of optimization functions. It also introduces the two winning algorithms of the CEC2017 competition, LSHADE_cnEpSin and LSHADE_SPACMA, as comparison algorithms to substantiate the efficacy of the proposed MSAPO. Table 4, Table 5, Table 6 and Tables S3–S5 present the Mean, Best, Std, MR, Rank, Average MR, and Total Rank of the 16 algorithms on the 29 functions of the CEC2017 test set in three dimensions. Table 7, Table 8 and Table 9 show the statistical results of the Wilcoxon rank-sum test at a significance level of 0.05.
The combined experimental results presented in Table 4, Table 5, Table 6 and Tables S3–S5 clearly illustrate that MSAPO exceeds the original APO by orders of magnitude on the 30-, 50-, and 100-dimensional unimodal functions F1 and F3. On the 30- and 50-dimensional F1, MSAPO is ranked third, yet its Mean and Best metrics indicate that it identifies the theoretical optimum of the objective function, and its overall performance is surpassed only by the two competition-winning algorithms. At dimension 100, MSAPO is the top-performing algorithm on F1. For function F3, MSAPO ranks first in all three dimensions, indicating that the improvement strategies effectively enhance both the exploration and the exploitation of the algorithm.
In the assessment of the simple multimodal functions F4–F10, MSAPO achieved 3, 4 and 7 first places in 30, 50 and 100 dimensions, respectively. In 30 dimensions, MSAPO’s metrics on two functions are slightly inferior to those of the original APO. However, in 50 and 100 dimensions, its overall performance is superior to the original APO, indicating that the cyclone foraging strategy and the crisscross strategy effectively enhance the algorithm’s capacity to converge accurately and to escape local optima. As the dimension increases, MSAPO demonstrates a notable enhancement in its solving ability, indicating its efficacy in addressing complex, high-dimensional problems.
In the context of hybrid functions F11–F20, MSAPO achieved a total of 7, 6, and 6 first-place rankings in 30, 50, and 100 dimensions, respectively. In the 30-dimensional experiment, MSAPO ranked third on function F12, with two competition winning algorithms as the main competitors. In regard to function F14, while MSAPO’s Mean and Std metrics are not as good as those of the original APO, its best metrics demonstrate superior performance, indicating an aptitude for identifying optimal outcomes. However, further enhancements are necessary to ensure stability. In regard to function F18, MSAPO’s performance is surpassed only by the two competition-winning algorithms. In the 50-dimensional experiments, MSAPO’s overall performance on function F12 is second only to LSHADE_SPACMA, but it excels in its Best metric. In terms of functions F14 and F18, MSAPO is ranked third, which is below the two competition-winning algorithms. In the ranking of function F15, APO, NOA, XPSO, LSHADE_cnEpSin and LSHADE_SPACMA demonstrated superior performance. In the 100-dimensional experiments, MSAPO ranked in the top three of all 10 hybrid functions, with individual functions not as good as the statistical metrics of the two competition-winning algorithms.
In the composition functions F21–F30, MSAPO obtained 2, 8 and 9 first places in 30, 50 and 100 dimensions, respectively. In 30 dimensions, APO demonstrates superior performance to MSAPO on five functions, indicating that APO is a more advantageous approach for addressing low-dimensional complex problems. In 50 dimensions, MSAPO demonstrates superior performance to the original APO across all metrics on 10 functions. However, it is ranked third behind the two competing algorithms on function F18 and exhibits suboptimal performance on function F25, where it is ranked eighth. In the 100-dimensional function F26, MSAPO is outperformed by XPSO and SRPSO. However, its Mean, Best, and Std exceed those of the other algorithms, indicating that MSAPO’s overall performance remains superior.
Table 7, Table 8 and Table 9 illustrate the Wilcoxon rank sum test statistics between MSAPO and the other 15 algorithms, evaluated in three dimensions. Table 7 illustrates that out of the 29 test functions of the 30-dimensional CEC2017, MSAPO performed better on 20 functions, comparable on 5 functions, and worse on 4 functions compared to the original APO. MSAPO’s principal competitors are LSHADE_cnEpSin and LSHADE_SPACMA, with MSAPO exhibiting inferior performance relative to LSHADE_cnEpSin on eight functions and to LSHADE_SPACMA on 11. In comparison to the other algorithms that were subjected to evaluation, MSAPO demonstrated superior performance across all 29 functions that were tested.
The 50-dimensional experimental results presented in Table 8 show that MSAPO’s performance is significantly improved compared to the 30-dimensional case: 27 functions are superior to the original APO, one is comparable, and one is inferior. However, MSAPO is significantly inferior to SMA and SRPSO on F25, to SO on F5 and F8, and to NOA and XPSO on F15. Against the competition winners, MSAPO is inferior to LSHADE_cnEpSin on four functions and to LSHADE_SPACMA on several functions. Overall, there is still scope for enhancing MSAPO’s optimized performance on F5, F7, F8, F15, F18 and F25.
The 100-dimensional test results presented in Table 9 demonstrate that MSAPO exhibits enhanced performance, significantly outperforming the original APO on 28 test functions. There is no discernible difference between MSAPO and APO on F19, and MSAPO identifies a superior Best. With the exception of F14 and F18, where the performance is not as optimal as that of LSHADE_cnEpSin and LSHADE_SPACMA, MSAPO demonstrates superior performance to the comparative algorithms across the remaining functions, thereby substantiating its robust capability in addressing high-dimensional optimization problems.
The Friedman average ranking data for the 29 functions in Table 4, Table 5, Table 6 and Tables S3–S5 were combined to produce the final rankings on the 30-, 50-, and 100-dimensions for MSAPO. These were found to be 2.6460, 2.5103, and 1.9046, respectively, which all rank first. In general, MSAPO demonstrates superior performance compared to the other 15 algorithms on all three dimensions of the CEC2017 test set. This illustrates its robust capability for optimization, particularly in complex high-dimensional optimization problems where it exhibits remarkable proficiency in optimization finding.
Furthermore, the convergence and stability of MSAPO are illustrated by the convergence curves and box plots of the 16 algorithms; the convergence curves on the 29 test functions in 30, 50 and 100 dimensions are presented in Figure 10, Figure 11 and Figure 12, respectively. The convergence curves demonstrate that MSAPO converges faster than APO in the early iterations, identifies superior fitness values, and attains higher convergence accuracy with fewer iterations. In particular, the convergence advantage of MSAPO is more pronounced in 100 dimensions than in 30 and 50 dimensions. The box plots for the three dimensions are provided in Figure 13, Figure 14, and Figure 15, respectively. MSAPO exhibits the narrowest and lowest box for the majority of the test functions, indicating superior stability and optimization-seeking ability.

4.7. Summary of Numerical Experiments

In this section, we conducted three independent numerical experiments using MSAPO. In the first experiment, we performed a sensitivity analysis on the newly added parameter $P$ in MSAPO. The results indicate that changes in $P$ have little impact on the algorithm’s stability; when $P$ is set to 0.1, the algorithm exhibits a slight advantage in convergence accuracy compared to other parameter values. In the second experiment, we sequentially removed each strategy component from MSAPO to validate the rationality of their combination. The results confirmed that every strategy component contributes positively, and the combined strategy set significantly outperforms the original algorithm. In the third experiment, we compared the performance of MSAPO with the tuned parameter $P$ against fifteen benchmark algorithms. These benchmarks encompassed a comprehensive range of types, with standardized environments and parameter settings to ensure fairness and consistency across all evaluations. The experimental results demonstrate that MSAPO exhibits high convergence speed and accuracy, maintains exceptional stability across diverse test functions, and holds significant potential for tackling complex optimization problems in engineering.

5. Real-World Optimization Problems

This section employs MSAPO and 13 comparative algorithms (described in Section 4) to address a set of 8 real-world constrained optimization problems [56]. The objective is to evaluate the efficacy of MSAPO on engineering design optimization problems. The parameters of the given engineering problems are presented in Table 10. To evaluate the results yielded by each algorithm, four statistical indicators are employed: Best, Mean, Worst, and Std. The eight engineering constraint problems have different mathematical models and constraints, allowing an effective test of the algorithms’ ability and applicability. To facilitate a fair and consistent comparison, the population size, the number of independent runs, and the maximum number of iterations of the 14 algorithms are kept identical throughout, set at 30, 30, and 500, respectively.
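The paper does not state its constraint-handling mechanism in this excerpt. A common way to evaluate candidates on such constrained problems is a static penalty, sketched below; the function names and the penalty weight `rho` are illustrative assumptions, not the authors’ scheme.

```python
def penalized_fitness(f, gs, x, rho=1e6):
    """Static-penalty evaluation of a constrained minimization problem.

    f  : objective function f(x) to minimize
    gs : list of inequality constraints, each feasible when g(x) <= 0
    rho: penalty weight (hypothetical value)"""
    violation = sum(max(0.0, g(x)) ** 2 for g in gs)
    return f(x) + rho * violation
```

With this wrapper, any unconstrained optimizer (MSAPO included) can be applied directly: feasible points are scored by the raw objective, while infeasible points are pushed back by the squared violation term.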

5.1. Process Synthesis Problem (PSP)

PSP can be defined as a mixed-integer nonlinear constrained optimization problem [56]. The problem comprises 7 decision variables ($x_1$–$x_7$) and 9 nonlinear constraints ($g_1$–$g_9$). The mathematical model is as follows:
Minimize $f(x) = (1 - x_1)^2 + (2 - x_2)^2 + (3 - x_3)^2 + (1 - x_4)^2 + (1 - x_5)^2 + (1 - x_6)^2 - \ln(1 + x_7)$
subject to:
$g_1(x) = x_1 + x_2 + x_3 + x_4 + x_5 + x_6 - 5 \le 0,$
$g_2(x) = x_1^2 + x_2^2 + x_3^2 + x_6^2 - 5.5 \le 0,$
$g_3(x) = x_1 + x_4 - 1.2 \le 0,$
$g_4(x) = x_2 + x_5 - 1.8 \le 0,$
$g_5(x) = x_3 + x_6 - 2.5 \le 0,$
$g_6(x) = x_1 + x_7 - 1.2 \le 0,$
$g_7(x) = x_2^2 + x_5^2 - 1.64 \le 0,$
$g_8(x) = x_3^2 + x_6^2 - 4.25 \le 0,$
$g_9(x) = x_3^2 + x_5^2 - 4.64 \le 0,$
where the ranges of the variables are $0 \le x_1, x_2, x_3 \le 100$ and $x_4, x_5, x_6, x_7 \in \{0, 1\}$.
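The model above can be transcribed directly; the sketch below (function names are illustrative) evaluates the objective and returns the nine constraint values, each feasible when $g_i(x) \le 0$:

```python
import math

def psp_objective(x):
    """PSP objective: quadratic deviation terms minus ln(1 + x7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return ((1 - x1) ** 2 + (2 - x2) ** 2 + (3 - x3) ** 2
            + (1 - x4) ** 2 + (1 - x5) ** 2 + (1 - x6) ** 2
            - math.log(1 + x7))

def psp_constraints(x):
    """The nine inequality constraints g1..g9, feasible when <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        x1 + x2 + x3 + x4 + x5 + x6 - 5,
        x1**2 + x2**2 + x3**2 + x6**2 - 5.5,
        x1 + x4 - 1.2,
        x2 + x5 - 1.8,
        x3 + x6 - 2.5,
        x1 + x7 - 1.2,
        x2**2 + x5**2 - 1.64,
        x3**2 + x6**2 - 4.25,
        x3**2 + x5**2 - 4.64,
    ]
```

At the origin, for example, the objective evaluates to 17 and every constraint is satisfied, which makes the origin a convenient sanity check for any transcription of the model.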
The numerical results obtained by the 14 algorithms on PSP are presented in Table 11 and Table S6. The optimal values are indicated in bold, and the engineering problems that follow are labelled in the same way. Table 11 shows that MSAPO’s Best and Worst are the smallest among all algorithms; notably, its Best reaches the theoretical optimum of 2.9248305537 for PSP, which suggests that MSAPO solves this specific problem very well. Its inferior Mean and Std relative to NOA indicate that MSAPO’s reliability and robustness on PSP still leave room for improvement. The statistical indicators of MSAPO are nonetheless superior to those of the original APO, confirming that the incorporation of the four strategies yields a notable performance enhancement. Table S6 presents the optimal design solution for PSP found by MSAPO, which is (0.1983, 1.2806, 1.9547, 0.7564, −0.4323, 0.0864, 1.2050).

5.2. Weight Minimization of a Speed Reducer (WMSR)

The objective of WMSR is the design of a speed reducer for a small aircraft engine. To minimize the weight of the reducer, WMSR must satisfy 11 constraints involving 7 variables. Figure 16 shows the schematic structure of the reducer, and the mathematical model of the problem is as follows:
Minimize $f(x) = 0.7854 x_1 x_2^2 \big( 3.3333 x_3^2 + 14.9334 x_3 - 43.0934 \big) - 1.508 x_1 \big( x_6^2 + x_7^2 \big) + 7.4777 \big( x_6^3 + x_7^3 \big) + 0.7854 \big( x_4 x_6^2 + x_5 x_7^2 \big)$
subject to:
$g_1(x) = -x_1 x_2^2 x_3 + 27 \le 0,$
$g_2(x) = -x_1 x_2^2 x_3^2 + 397.5 \le 0,$
$g_3(x) = -x_2 x_3 x_4^{-3} x_6^4 + 1.93 \le 0,$
$g_4(x) = -x_2 x_3 x_5^{-3} x_7^4 + 1.93 \le 0,$
$g_5(x) = 10 x_6^{-3} \sqrt{16.91 \times 10^6 + \big( 745 x_2^{-1} x_3^{-1} x_4 \big)^2} - 1100 \le 0,$
$g_6(x) = 10 x_7^{-3} \sqrt{157.5 \times 10^6 + \big( 745 x_2^{-1} x_3^{-1} x_5 \big)^2} - 850 \le 0,$
$g_7(x) = x_2 x_3 - 40 \le 0,$
$g_8(x) = -x_1 x_2^{-1} + 5 \le 0,$
$g_9(x) = x_1 x_2^{-1} - 12 \le 0,$
$g_{10}(x) = 1.5 x_6 - x_4 + 1.9 \le 0,$
$g_{11}(x) = 1.1 x_7 - x_5 + 1.9 \le 0,$
where the ranges of the variables are $2.6 \le x_1 \le 3.6$, $0.7 \le x_2 \le 0.8$, $17 \le x_3 \le 28$, $7.3 \le x_4, x_5 \le 8.3$, $2.9 \le x_6 \le 3.9$, and $5 \le x_7 \le 5.5$.
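The weight function can be checked against the reported optimum. The sketch below uses the standard Golinski speed-reducer formulation (the 7.4777 coefficient is the literature value, assumed here), evaluated at the design reported in Table S7:

```python
def wmsr_weight(x):
    """Golinski speed-reducer weight (standard formulation; illustrative)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# design solution reported by MSAPO (Table S7)
best = wmsr_weight([3.5, 0.7, 17, 7.3, 7.7153, 3.3505, 5.2867])
```

Evaluating the rounded design recovers a weight close to the stated optimum of 2994.4244658, which provides a quick consistency check on the model transcription.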
The numerical results obtained by the 14 algorithms on WMSR are presented in Table 12 and Table S7. Table 12 demonstrates that the Best, Mean and Worst values of MSAPO all reach the optimal value of 2994.4244658 for WMSR. Furthermore, its Std outperforms the other comparative algorithms, indicating that MSAPO is an effective and robust method for solving WMSR. It is also worth noting that PSA and FVICLPSO can likewise reach the theoretical optimum, suggesting that they converge with sufficient accuracy, but neither is as robust as MSAPO. Table S7 gives the optimal design solution determined by MSAPO: (3.5, 0.7, 17, 7.3, 7.7153, 3.3505, 5.2867).

5.3. Tension/Compression Spring Design (T/CSD)

In T/CSD, the objective is to minimize the weight of a tension/compression spring. The schematic structure is illustrated in Figure 17. The problem consists of 3 continuous decision variables and 4 constraints $g_1$–$g_4$, and is mathematically modelled as follows:
Minimize $f(x) = x_1^2 x_2 (2 + x_3)$
subject to:
$g_1(x) = 1 - \dfrac{x_2^3 x_3}{71785 x_1^4} \le 0,$
$g_2(x) = \dfrac{4 x_2^2 - x_1 x_2}{12566 \big( x_1^3 x_2 - x_1^4 \big)} + \dfrac{1}{5108 x_1^2} - 1 \le 0,$
$g_3(x) = 1 - \dfrac{140.45 x_1}{x_2^2 x_3} \le 0,$
$g_4(x) = \dfrac{x_1 + x_2}{1.5} - 1 \le 0,$
where the ranges of the variables are $0.05 \le x_1 \le 2.00$, $0.25 \le x_2 \le 1.30$, and $2.00 \le x_3 \le 15.0$.
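Using the objective $f(x) = x_1^2 x_2 (2 + x_3)$ and the solution reported in Table S8, a quick check (illustrative; variable names are the conventional wire diameter, coil diameter, and number of active coils) recovers the stated optimum:

```python
def tcsd_weight(x):
    """Spring weight: (wire diameter)^2 * coil diameter * (2 + active coils)."""
    d, D, N = x
    return d**2 * D * (2 + N)

# design solution reported by MSAPO (Table S8)
best = tcsd_weight([0.0516875570, 0.3566815558, 11.2910874220])
```

The evaluated weight agrees with the reported optimum 0.0126652328 to within rounding of the printed design variables.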
The numerical results obtained by the 14 algorithms on T/CSD are shown in Table 13 and Table S8. Table 13 illustrates that the Best of MSAPO is the smallest among all the algorithms, attaining the theoretical optimum of 0.0126652328 for T/CSD. However, the performance of MSAPO on Mean, Worst and Std is not as good as that of NOA, indicating that the reliability and robustness of MSAPO on T/CSD are still insufficient. Table S8 indicates that the optimal design solution for T/CSD found by MSAPO is (0.0516875570, 0.3566815558, 11.2910874220).

5.4. Welded Beam Design (WBD)

The objective of WBD [57] is to find the design with the lowest manufacturing cost by adjusting the decision variables. The configuration of WBD is depicted in Figure 18; the problem contains 4 design variables and 5 constraints and is mathematically modelled as follows:
Minimize $f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (x_2 + 14)$
subject to:
$g_1(x) = x_1 - x_4 \le 0,$
$g_2(x) = \delta(x) - \delta_{max} \le 0,$
$g_3(x) = P - P_c(x) \le 0,$
$g_4(x) = \tau(x) - \tau_{max} \le 0,$
$g_5(x) = \sigma(x) - \sigma_{max} \le 0,$
where:
$\tau(x) = \sqrt{\tau'^2 + \tau''^2 + 2 \tau' \tau'' \dfrac{x_2}{2R}}, \quad \tau'' = \dfrac{MR}{J}, \quad \tau' = \dfrac{P}{\sqrt{2} x_1 x_2}, \quad M = P \left( L + \dfrac{x_2}{2} \right),$
$R = \sqrt{\dfrac{x_2^2}{4} + \left( \dfrac{x_1 + x_3}{2} \right)^2}, \quad J = 2 \left\{ \sqrt{2} x_1 x_2 \left[ \dfrac{x_2^2}{4} + \left( \dfrac{x_1 + x_3}{2} \right)^2 \right] \right\},$
$\sigma(x) = \dfrac{6PL}{x_3^2 x_4}, \quad \delta(x) = \dfrac{6PL^3}{E x_3^2 x_4}, \quad P_c(x) = \dfrac{4.013 E x_3 x_4^3}{6 L^2} \left( 1 - \dfrac{x_3}{2L} \sqrt{\dfrac{E}{4G}} \right),$
$L = 14\,\text{in}, \quad P = 6000\,\text{lb}, \quad E = 30 \times 10^6\,\text{psi}, \quad \sigma_{max} = 30{,}000\,\text{psi},$
$\tau_{max} = 13{,}600\,\text{psi}, \quad G = 12 \times 10^6\,\text{psi}, \quad \delta_{max} = 0.25\,\text{in},$
with bounds: $0.125 \le x_1 \le 2$, $0.1 \le x_2, x_3 \le 10$, $0.1 \le x_4 \le 2$.
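The cost function alone suffices for a consistency check against the reported optimum (the stress, deflection, and buckling constraints are omitted from this sketch; the function name is illustrative):

```python
def wbd_cost(x):
    """Welded-beam manufacturing cost: weld material plus bar material."""
    x1, x2, x3, x4 = x   # weld thickness, weld length, beam height, beam width
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (x2 + 14)

# design solution reported by MSAPO (Table S9)
best = wbd_cost([0.1988323072, 3.3373652986, 9.1920243225, 0.1988323072])
```

Evaluating the reported design reproduces the stated optimum of 1.6702177263 to within rounding of the printed digits.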
The numerical results obtained by the 14 algorithms on WBD are presented in Table 14 and Table S9. Table 14 shows that the Best, Mean and Worst values of MSAPO reach the theoretical optimum of 1.6702177263 for WBD, and that its Std is also significantly better than that of the other algorithms under comparison. Although XPSO’s Best can also reach the theoretical optimum, it is less stable. Table S9 indicates that the best design solution for WBD found by MSAPO is (0.1988323072, 3.3373652986, 9.1920243225, 0.1988323072).

5.5. Three-Bar Truss Design Problem (TBTD)

In TBTD, the primary objective is to minimize the volume of the three-bar truss, illustrated schematically in Figure 19. Since the cross-sectional areas of bars $x_1$ and $x_3$ are identical, only $x_1$ and $x_2$ are selected as optimization variables, and $\sigma$ represents the allowable stress on each truss member. The mathematical model is as follows:
Minimize $f(x) = \big( 2\sqrt{2} x_1 + x_2 \big) \times l$
subject to:
$g_1(x) = \dfrac{x_2}{\sqrt{2} x_1^2 + 2 x_1 x_2} P - \sigma \le 0,$
$g_2(x) = \dfrac{\sqrt{2} x_1 + x_2}{\sqrt{2} x_1^2 + 2 x_1 x_2} P - \sigma \le 0,$
$g_3(x) = \dfrac{1}{x_1 + \sqrt{2} x_2} P - \sigma \le 0,$
where
$l = 100$, $P = 2$, and $\sigma = 2$,
with bounds: $0 \le x_1, x_2 \le 1$.
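The volume objective is a one-liner and can be verified against the design reported in Table S10 (function name illustrative):

```python
import math

def tbtd_volume(x, l=100.0):
    """Three-bar truss volume: (2*sqrt(2)*x1 + x2) * l."""
    x1, x2 = x
    return (2 * math.sqrt(2) * x1 + x2) * l

# design solution reported by MSAPO (Table S10)
best = tbtd_volume([0.7886751377, 0.4082482817])
```

The evaluated volume matches the stated optimum 263.89584338 to within rounding of the printed variables.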
The numerical results obtained by the 14 algorithms on TBTD are presented in Table 15 and Table S10. Table 15 illustrates that three algorithms, MSAPO, APO and NOA, all achieved the theoretical optimum value of 263.89584338 for TBTD. Nevertheless, MSAPO displays superior Mean, Worst and Std metrics in comparison to the other algorithms. Table S10 indicates that the optimal design solution for TBTD found by MSAPO is (0.7886751377, 0.4082482817).

5.6. Step-Cone Pulley Problem (SCP)

The main objective of SCP [58] is to minimize the weight of a four-step cone pulley using 5 variables. The problem contains 3 equality and 8 inequality nonlinear constraints. The structure of the step-cone pulley is shown schematically in Figure 20. The formulation is as follows:
Minimize $f(x) = \rho \omega \left[ d_1^2 \left\{ 1 + \left( \dfrac{N_1}{N} \right)^2 \right\} + d_2^2 \left\{ 1 + \left( \dfrac{N_2}{N} \right)^2 \right\} + d_3^2 \left\{ 1 + \left( \dfrac{N_3}{N} \right)^2 \right\} + d_4^2 \left\{ 1 + \left( \dfrac{N_4}{N} \right)^2 \right\} \right]$
subject to:
$h_1(x) = C_1 - C_2 = 0,$
$h_2(x) = C_1 - C_3 = 0,$
$h_3(x) = C_1 - C_4 = 0,$
$g_{1,2,3,4}(x) = -R_i + 2 \le 0,$
$g_{5,6,7,8}(x) = -P_i + (0.75 \times 745.6998) \le 0,$
where $\rho = 7200\ \text{kg/m}^3$, $a = 3\ \text{m}$, $\mu = 0.35$, $s = 1.75\ \text{MPa}$, and $t = 8\ \text{mm}$; $C_i$ is the belt length required to obtain speed $N_i$, and $R_i$ and $P_i$ respectively represent the tension ratio and the power transmitted at each step, given by
$C_i = \dfrac{\pi d_i}{2} \left( 1 + \dfrac{N_i}{N} \right) + \dfrac{\left( \frac{N_i}{N} - 1 \right)^2 d_i^2}{4a} + 2a,$
$P_i = s t \omega \left[ 1 - \exp \left( -\mu \left\{ \pi - 2 \sin^{-1} \left[ \left( \dfrac{N_i}{N} - 1 \right) \dfrac{d_i}{2a} \right] \right\} \right) \right] \dfrac{\pi d_i N_i}{60},$
$R_i = \exp \left( \mu \left\{ \pi - 2 \sin^{-1} \left[ \left( \dfrac{N_i}{N} - 1 \right) \dfrac{d_i}{2a} \right] \right\} \right), \quad i = 1, 2, 3, 4.$
The numerical results obtained by the 14 algorithms for SCP are given in Table 16 and Table S11. As illustrated in Table 16, MSAPO exhibits superior performance on the Best and Mean metrics compared with the other algorithms, although its Std metric indicates slightly lower stability. The optimal design solution obtained by MSAPO in Table S11 is (38.41396, 52.85864, 70.47270, 84.49572, 90).
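The tension-ratio constraints g1–g4 can be checked numerically at MSAPO's solution. The sketch below assumes the rotational speeds commonly used for this benchmark in the literature (input speed N = 350 rpm and output speeds Ni = 750, 450, 250, 150 rpm), which are not stated in this excerpt, and converts a = 3 m to millimetres to match the step diameters:

```python
import math

# Assumed benchmark constants (standard in the SCP literature; not given above):
N_IN = 350.0                           # input speed (rpm)
N_OUT = (750.0, 450.0, 250.0, 150.0)   # output speed of each step (rpm)
MU, A_MM = 0.35, 3000.0                # friction coefficient; centre distance a = 3 m in mm

def tension_ratio(d_i, n_i, n=N_IN, mu=MU, a=A_MM):
    """R_i = exp(mu * (pi - 2*asin((n_i/n - 1) * d_i / (2a))))."""
    wrap_angle = math.pi - 2.0 * math.asin((n_i / n - 1.0) * d_i / (2.0 * a))
    return math.exp(mu * wrap_angle)

# The first four entries of MSAPO's solution are the step diameters d1..d4 (mm)
diameters = (38.41396, 52.85864, 70.47270, 84.49572)
ratios = [tension_ratio(d, n) for d, n in zip(diameters, N_OUT)]
ok = all(r >= 2.0 for r in ratios)     # constraints g1..g4 require R_i >= 2
```

Under these assumed speeds, every Ri comes out slightly above 2 (roughly 2.99–3.02), i.e., the tension-ratio constraints are satisfied but nearly tight.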

5.7. Gas Transmission Compressor Design (GTCD)

GTCD [59] has 4 decision variables and 1 constraint. The specific mathematical model is as follows:
Minimize f(x) = 7.72×10⁸·x1⁻¹·x2^0.219 − 765.43×10⁶·x1⁻¹ + 8.61×10⁵·x1^(1/2)·x2·x3^(−2/3)·x4^(−1/2) + 3.69×10⁴·x3
subject to:
g1(x) = x2⁻² + x2⁻²·x4 − 1 ≤ 0,
with bounds: 20 ≤ x1, x3 ≤ 50, 1 ≤ x2 ≤ 10, 0.1 ≤ x4 ≤ 60.
The numerical results obtained by the 14 algorithms for GTCD are presented in Table 17 and Table S12. Table 17 illustrates that the Best metric of three algorithms, MSAPO, APO, and NOA, is the smallest among the compared algorithms, while the Mean, Worst, and Std metrics of MSAPO are notably superior to those of the other algorithms. Table S12 shows that the optimal design solution for GTCD obtained by MSAPO is (50, 1.178283951, 24.592590288, 0.388353071).
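Because GTCD has a single constraint, the reported solution is easy to verify. The sketch below evaluates the objective and constraint given above at the design vector from Table S12:

```python
def gtcd_objective(x1, x2, x3, x4):
    """GTCD objective as stated in the model above."""
    return (7.72e8 * x2**0.219 / x1
            - 765.43e6 / x1
            + 8.61e5 * x1**0.5 * x2 * x3**(-2.0 / 3.0) * x4**(-0.5)
            + 3.69e4 * x3)

def gtcd_constraint(x1, x2, x3, x4):
    """g1(x) = x2^-2 + x2^-2 * x4 - 1 (feasible when <= 0)."""
    return 1.0 / x2**2 + x4 / x2**2 - 1.0

# MSAPO's reported solution (Table S12)
x = (50.0, 1.178283951, 24.592590288, 0.388353071)
f = gtcd_objective(*x)       # ~2.965e6
g = gtcd_constraint(*x)      # ~0: the single constraint is active at the optimum
```

Note that g evaluates to essentially zero here, so the optimum lies on the constraint boundary, consistent with x1 also sitting at its upper bound of 50.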

5.8. Himmelblau’s Function (HF)

Himmelblau’s function is a common benchmark for analyzing nonlinear constrained optimization algorithms [56], comprising 6 nonlinear constraints and 5 variables. The specific model of the problem is as follows:
Minimize f(x) = 37.293239·x1 + 5.3578547·x3² + 0.8356891·x1·x5 − 40792.141
subject to:
g1(x) = −G1 ≤ 0,
g2(x) = G1 − 92 ≤ 0,
g3(x) = 90 − G2 ≤ 0,
g4(x) = G2 − 110 ≤ 0,
g5(x) = 20 − G3 ≤ 0,
g6(x) = G3 − 25 ≤ 0,
where
G1 = 0.0006262·x1·x4 + 0.0056858·x2·x5 − 0.0022053·x3·x5 + 85.334407,
G2 = 0.0029955·x1·x2 + 0.0021813·x3² + 0.0071317·x2·x5 + 80.51249,
G3 = 0.0012547·x1·x3 + 0.0019085·x3·x4 + 0.0047026·x3·x5 + 9.300961,
with bounds: 78 ≤ x1 ≤ 102, 33 ≤ x2 ≤ 45, 27 ≤ x3, x4, x5 ≤ 45.
Table 18 and Table S13 present the results of the 14 algorithms on Himmelblau’s function. Table 18 demonstrates that MSAPO’s Best, Mean, and Worst values all match the function’s theoretical optimum of −30665.538672, and MSAPO’s Std value is notably superior to those of the other algorithms, indicating that MSAPO possesses exceptional capability on this problem. From Table S13, the optimal design solution obtained by MSAPO is (78, 33, 29.99525603, 45, 36.77581291).
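The sketch below evaluates Himmelblau's function at the solution from Table S13; it uses the standard coefficients of the benchmark [56] (in particular 0.0012547 for the x1·x3 term of G3). At this point G1 and G3 sit exactly on their bounds of 92 and 20, confirming that two of the six constraints are active at the optimum:

```python
def himmelblau_f(x1, x2, x3, x4, x5):
    """Objective of Himmelblau's constrained benchmark."""
    return 37.293239 * x1 + 5.3578547 * x3**2 + 0.8356891 * x1 * x5 - 40792.141

def himmelblau_G(x1, x2, x3, x4, x5):
    """Auxiliary quantities bounded by the six constraints g1..g6."""
    G1 = 0.0006262 * x1 * x4 + 0.0056858 * x2 * x5 - 0.0022053 * x3 * x5 + 85.334407
    G2 = 0.0029955 * x1 * x2 + 0.0021813 * x3**2 + 0.0071317 * x2 * x5 + 80.51249
    G3 = 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4 + 0.0047026 * x3 * x5 + 9.300961
    return G1, G2, G3

# MSAPO's reported solution (Table S13)
x = (78.0, 33.0, 29.99525603, 45.0, 36.77581291)
f = himmelblau_f(*x)            # ~ -30665.5387, the theoretical optimum
G1, G2, G3 = himmelblau_G(*x)   # G1 ~ 92 and G3 ~ 20: active constraints
```

Note also that x1, x2, and x4 all sit on their box bounds (78, 33, 45), so the optimum is fully determined by the active bound and stress-like constraints.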

5.9. Conclusion on Engineering Optimization Problems

In this section on practical engineering optimization, we conducted eight independent experiments, each corresponding to a distinct engineering optimization problem. Across these experiments, MSAPO demonstrated superior overall performance, ranking highly on the majority of evaluation metrics. Benefiting from the crisscross strategy, MSAPO is able to avoid local optima, and its cyclone foraging pattern accelerates convergence on high-dimensional problems. However, although the algorithm achieved competitive optima on most problems, it exhibited weaker stability than algorithms such as NOA on certain low-dimensional problems with complex constraints (e.g., process synthesis and the step-cone pulley). Consequently, there remains room for improvement.

6. Conclusions and Future Work

This paper proposes an enhanced APO algorithm, designated MSAPO, which incorporates the piecewise chaotic opposition-based learning, cyclone foraging, hybrid mutation, and crisscross strategies into APO. The combination of these four strategies enhances initial population diversity, augments the local search capability of the algorithm, and diminishes the probability of it becoming trapped in a local optimum. Comparisons with other high-performing optimization algorithms on the CEC2017 and CEC2022 test functions reveal that the proposed MSAPO significantly improves computational accuracy and convergence speed. Notably, MSAPO demonstrates a remarkable capacity to identify optimal solutions, particularly on high-dimensional problems, which substantiates its competitiveness. The experimental results on the engineering examples demonstrate that MSAPO can identify superior solutions to real-world problems. Nevertheless, MSAPO has limitations of its own. Incorporating the horizontal and vertical crossover (crisscross) strategy at the end of each iteration increases the algorithm’s overall complexity, making it less suitable when rapid convergence is required on certain functions. In the future, the algorithm’s problem-solving ability will be enhanced by incorporating further strategies tailored to diverse problem types. Furthermore, applying the proposed MSAPO to intricate optimization problems in diverse fields represents a promising avenue for future research.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math13172888/s1, Table S1: Experimental results for different P values under 30-dimensional CEC2017; Table S2: Comparison results of various algorithms under 20-dimensional CEC2022; Table S3: Comparison results of 16 algorithms under 30-dimensional CEC2017; Table S4: Comparison results of 16 algorithms under 50-dimensional CEC2017; Table S5: Comparison results of 16 algorithms under 100-dimensional CEC2017; Table S6: Optimal design solutions for process synthesis problem; Table S7: Optimal design solutions for weight minimization of a speed reducer; Table S8: Optimal design solutions for tension/compression spring design; Table S9: Optimal design solution for welded beam design; Table S10: Optimal design solution for three-bar truss design problem; Table S11: Optimal design solution for step-cone pulley problem; Table S12: Optimal design solutions for gas transmission compressor design; Table S13: Optimal design solutions for Himmelblau’s function.

Author Contributions

Conceptualization, G.H.; Methodology, H.B., J.W. and G.H.; Software, H.B. and J.W.; Validation, J.W. and G.H.; Formal analysis, H.B.; Investigation, H.B., J.W. and G.H.; Resources, G.H.; Data curation, H.B. and J.W.; Writing—original draft, H.B., J.W. and G.H.; Writing—review and editing, H.B., J.W. and G.H.; Visualization, J.W.; Supervision, G.H.; Project administration, G.H.; Funding acquisition, G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Materials. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yao, L.; Yuan, P.; Tsai, C.Y.; Zhang, T.; Lu, Y.; Ding, S. ESO: An enhanced snake optimizer for real-world engineering problems. Expert Syst. Appl. 2023, 230, 120594. [Google Scholar] [CrossRef]
  2. Elnokrashy, A.F.; Abdelaziz, L.N.; Shawky, A.; Tawfeek, R.M. Advanced framework for enhancing ultrasound images through an optimized hybrid search algorithm and a novel motion compounding processing chain. Biomed. Signal Process. Control 2023, 86, 105237. [Google Scholar] [CrossRef]
  3. Abdel-Salam, M.; Alzahrani, A.I.; Alblehai, F.; Zitar, R.A.; Abualigah, L. An improved Genghis Khan optimizer based on enhanced solution quality strategy for global optimization and feature selection problems. Knowl.-Based Syst. 2024, 302, 112347. [Google Scholar] [CrossRef]
  4. Ming, F.; Gong, W.; Zhen, H.; Wang, L.; Gao, L. Constrained multi-objective optimization evolutionary algorithm for real-world continuous mechanical design problems. Eng. Appl. Artif. Intell. 2024, 135, 108673. [Google Scholar] [CrossRef]
  5. Hu, G.; Zhong, J.; Du, B.; Guo, W. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 394, 114901. [Google Scholar] [CrossRef]
  6. Houssein, E.H.; Çelik, E.; Mahdy, M.A.; Ghoniem, R.M. Self-adaptive Equilibrium Optimizer for solving global, combinatorial, engineering, and Multi-Objective problems. Expert Syst. Appl. 2022, 195, 116552. [Google Scholar] [CrossRef]
  7. Luo, W.; Yu, X. Reinforcement learning-based modified cuckoo search algorithm for economic dispatch problems. Knowl.-Based Syst. 2022, 257, 109844. [Google Scholar] [CrossRef]
  8. Liang, S.; Yin, M.; Sun, G.; Li, J.; Li, H.; Lang, Q. An enhanced sparrow search swarm optimizer via multi-strategies for high-dimensional optimization problems. Swarm Evol. Comput. 2024, 88, 101603. [Google Scholar] [CrossRef]
  9. Lu, H.C.; Tseng, H.Y.; Lin, S.W. Double-track particle swarm optimizer for nonlinear constrained optimization problems. Inf. Sci. 2023, 622, 587–628. [Google Scholar] [CrossRef]
  10. Meng, X.; Li, H. An adaptive co-evolutionary competitive particle swarm optimizer for constrained multi-objective optimization problems. Swarm Evol. Comput. 2024, 91, 101746. [Google Scholar] [CrossRef]
  11. Zhao, S.; Zhang, T.; Ma, S.; Wang, M. Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization problems. Appl. Intell. 2023, 53, 11833–11860. [Google Scholar] [CrossRef]
  12. Hu, G.; Cheng, M.; Houssein, E.H.; Hussien, A.G.; Abualigah, L. SDO: A novel sled dog-inspired optimizer for solving engineering problems. Adv. Eng. Inform. 2024, 62, 102783. [Google Scholar] [CrossRef]
  13. Hamarashid, H.K.; Hassan, B.A.; Rashid, T.A. Modified-improved fitness dependent optimizer for complex and engineering problems. Knowl.-Based Syst. 2024, 300, 112098. [Google Scholar] [CrossRef]
  14. Bohrer, J.D.S.; Dorn, M. Enhancing classification with hybrid feature selection: A multi-objective genetic algorithm for high-dimensional data. Expert Syst. Appl. 2024, 255, 124518. [Google Scholar] [CrossRef]
  15. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  16. Civicioglu, P. Backtracking Search Optimization Algorithm for numerical optimization problems. Appl. Math. Comput. 2013, 219, 8121–8144. [Google Scholar] [CrossRef]
  17. Serrano-Rubio, J.P.; Hernández-Aguirre, A.; Herrera-Guzmán, R. An evolutionary algorithm using spherical inversions. Soft Comput. 2018, 22, 1993–2014. [Google Scholar] [CrossRef]
  18. Segovia-Domínguez, I.; Herrera-Guzmán, R.; Serrano-Rubio, J.P.; Hernández-Aguirre, A. Geometric probabilistic evolutionary algorithm. Expert Syst. Appl. 2020, 144, 113080. [Google Scholar] [CrossRef]
  19. Al-Bahrani, L.T.; Patra, J.C. A novel orthogonal PSO algorithm based on orthogonal diagonalization. Swarm Evol. Comput. 2018, 40, 1–23. [Google Scholar] [CrossRef]
  20. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  21. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  22. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  23. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  24. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  25. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  26. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl.-Based Syst. 2023, 262, 110248. [Google Scholar] [CrossRef]
  27. Hu, G.; Guo, Y.; Wei, G.; Abualigah, L. Genghis Khan shark optimizer: A novel nature-inspired algorithm for engineering optimization. Adv. Eng. Inf. 2023, 58, 102210. [Google Scholar] [CrossRef]
  28. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  29. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  30. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  31. Das, B.; Mukherjee, V.; Das, D. Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Adv. Eng. Softw. 2020, 146, 102804. [Google Scholar] [CrossRef]
  32. Moosavi, S.H.S.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181. [Google Scholar] [CrossRef]
  33. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar] [CrossRef]
  34. Zhu, D.; Wang, S.; Zhou, C.; Yan, S.; Xue, J. Human memory optimization algorithm: A memory-inspired optimizer for global optimization problems. Expert Syst. Appl. 2024, 237, 121597. [Google Scholar] [CrossRef]
  35. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  36. Bouchekara, H.R. Electrostatic discharge algorithm: A novel nature-inspired optimisation algorithm and its application to worst-case tolerance analysis of an EMC filter. IET Sci. Meas. Technol. 2019, 13, 491–499. [Google Scholar] [CrossRef]
  37. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  38. Deng, L.; Liu, S. Snow ablation optimizer: A novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl. 2023, 225, 120069. [Google Scholar] [CrossRef]
  39. Hashim, F.A.; Mostafa, R.R.; Hussien, A.G.; Mirjalili, S.; Sallam, K.M. Fick’s Law Algorithm: A physical law-based algorithm for numerical optimization. Knowl.-Based Syst. 2023, 260, 110146. [Google Scholar] [CrossRef]
  40. Hu, G.; Gong, C.; Li, X.; Xu, Z. CGKOA: An enhanced Kepler optimization algorithm for multi-domain optimization problems. Comput. Methods Appl. Mech. Eng. 2024, 425, 116964. [Google Scholar] [CrossRef]
  41. Duan, Y.; Yu, X. A collaboration-based hybrid GWO-SCA optimizer for engineering optimization problems. Expert Syst. Appl. 2023, 213, 119017. [Google Scholar] [CrossRef]
  42. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  43. Wang, X.; Snášel, V.; Mirjalili, S.; Pan, J.S.; Kong, L.; Shehadeh, H.A. Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization. Knowl.-Based Syst. 2024, 295, 111737. [Google Scholar] [CrossRef]
  44. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  45. Chakraborty, S.; Saha, A.K.; Ezugwu, A.E.; Chakraborty, R.; Saha, A. Horizontal crossover and co-operative hunting-based Whale Optimization Algorithm for feature selection. Knowl.-Based Syst. 2023, 282, 111108. [Google Scholar] [CrossRef]
  46. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  47. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  48. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  49. Gao, Y. PID-based search algorithm: A novel metaheuristic algorithm based on PID algorithm. Expert Syst. Appl. 2023, 232, 120886. [Google Scholar] [CrossRef]
  50. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  51. Xia, X.; Gui, L.; He, G.; Wei, B.; Zhang, Y.; Yu, F.; Wu, H.; Zhan, Z.H. An expanded particle swarm optimization based on multi-exemplar and forgetting ability. Inf. Sci. Int. J. 2020, 508, 105–120. [Google Scholar] [CrossRef]
  52. Xu, S.; Xiong, G.; Mohamed, A.W.; Bouchekara, H.R. Forgetting velocity based improved comprehensive learning particle swarm optimization for non-convex economic dispatch problems with valve-point effects and multi-fuel options. Energy 2022, 256, 124511. [Google Scholar] [CrossRef]
  53. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Self regulating particle swarm optimization algorithm. Inf. Sci. 2015, 294, 182–202. [Google Scholar] [CrossRef]
  54. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation, CEC, IEEE, San Sebastián, Spain, 5–8 June 2017. [Google Scholar] [CrossRef]
  55. Mohamed, A.W.; Hadi, A.A.; Fattouh, A.M.; Jambi, K.M. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation, CEC, IEEE, San Sebastián, Spain, 5–8 June 2017. [Google Scholar] [CrossRef]
  56. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
  57. Hu, G.; Song, K.; Abdel-salam, M. Sub-population evolutionary particle swarm optimization with dynamic fitness-distance balance and elite reverse learning for engineering design problems. Adv. Eng. Softw. 2025, 202, 103866. [Google Scholar] [CrossRef]
  58. Wang, W.C.; Tian, W.C.; Xu, D.M.; Zang, H.F. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw. 2024, 195, 103694. [Google Scholar] [CrossRef]
  59. Kumar, N.; Mahato, S.K.; Bhunia, A.K. Design of an efficient hybridized CS-PSO algorithm and its applications for solving constrained and bound constrained structural engineering design problems. Results Control Optim. 2021, 5, 100064. [Google Scholar] [CrossRef]
Figure 1. Flowchart of MSAPO.
Figure 2. Behavioral patterns of APOs after incorporating cyclone foraging strategy.
Figure 3. Density profiles of the Cauchy and Gaussian distributions.
Figure 4. Variation curves of weight factors w1 and w2 over 1000 iterations.
Figure 5. Partial convergence curves in the sensitivity analysis experiment.
Figure 6. Partial convergence curves in the ablation experiment.
Figure 7. Convergence curves of various algorithms under 20-dimensional CEC2022.
Figure 8. Box plots of various algorithms under 20-dimensional CEC2022.
Figure 9. Radar plots of the ranking of various algorithms under 20-dimensional CEC2022.
Figure 10. Convergence curves of various algorithms under 30-dimensional CEC2017.
Figure 11. Convergence curves of various algorithms under 50-dimensional CEC2017.
Figure 12. Convergence curves of various algorithms under 100-dimensional CEC2017.
Figure 13. Box plots of various algorithms under 30-dimensional CEC2017.
Figure 14. Box plots of various algorithms under 50-dimensional CEC2017.
Figure 15. Box plots of various algorithms under 100-dimensional CEC2017.
Figure 16. The speed reducer design model.
Figure 17. Schematic diagram of T/CSD.
Figure 18. Schematic design of welded beam [57].
Figure 19. Schematic diagram of TBTD.
Figure 20. Schematic diagram of SCP.
Table 1. Statistical results of mean values in the ablation experiment.
F | MSAPO | APO | APO1 | APO2 | APO3 | APO4
F1 | 1.0010E+02 | 3.6670E+03 | 1.0020E+02 | 1.0010E+02 | 2.0805E+03 | 1.0122E+02
F3 | 3.0000E+02 | 3.0315E+04 | 3.0000E+02 | 3.0000E+02 | 4.5763E+03 | 3.2184E+02
F4 | 4.2279E+02 | 5.0889E+02 | 4.3407E+02 | 4.2423E+02 | 5.0893E+02 | 4.6988E+02
F5 | 5.3671E+02 | 5.3046E+02 | 5.3234E+02 | 5.3064E+02 | 5.2552E+02 | 5.4438E+02
F6 | 6.0000E+02 | 6.0000E+02 | 6.0000E+02 | 6.0000E+02 | 6.0000E+02 | 6.0000E+02
F7 | 7.6139E+02 | 7.5792E+02 | 7.5876E+02 | 7.6566E+02 | 7.5809E+02 | 7.6975E+02
F8 | 8.3273E+02 | 8.2728E+02 | 8.3413E+02 | 8.3562E+02 | 8.2829E+02 | 8.4597E+02
F9 | 9.0001E+02 | 9.0060E+02 | 9.0004E+02 | 9.0005E+02 | 9.0002E+02 | 9.0086E+02
F10 | 3.4372E+03 | 3.8150E+03 | 3.3170E+03 | 3.2895E+03 | 3.7963E+03 | 3.3472E+03
F11 | 1.1439E+03 | 1.1829E+03 | 1.1351E+03 | 1.1372E+03 | 1.1559E+03 | 1.1363E+03
F12 | 1.5912E+04 | 4.8696E+05 | 1.1148E+04 | 1.2251E+04 | 3.1425E+05 | 2.7184E+04
F13 | 4.1810E+03 | 1.0161E+04 | 1.5974E+03 | 1.5640E+03 | 1.2586E+04 | 1.8285E+03
F14 | 1.4485E+03 | 1.4410E+03 | 1.4459E+03 | 1.4415E+03 | 1.4383E+03 | 1.4470E+03
F15 | 1.5498E+03 | 1.5726E+03 | 1.5453E+03 | 1.5388E+03 | 1.5843E+03 | 1.5498E+03
F16 | 1.6966E+03 | 1.9763E+03 | 1.7532E+03 | 1.7935E+03 | 1.8927E+03 | 1.8121E+03
F17 | 1.7475E+03 | 1.7624E+03 | 1.7480E+03 | 1.7502E+03 | 1.7536E+03 | 1.7521E+03
F18 | 2.3738E+03 | 2.9820E+04 | 2.0872E+03 | 2.1971E+03 | 1.7743E+04 | 2.2266E+03
F19 | 1.9281E+03 | 1.9763E+03 | 1.9281E+03 | 1.9314E+03 | 2.7031E+03 | 1.9329E+03
F20 | 2.0782E+03 | 2.0780E+03 | 2.0622E+03 | 2.0786E+03 | 2.1098E+03 | 2.0933E+03
F21 | 2.3342E+03 | 2.3277E+03 | 2.3293E+03 | 2.3268E+03 | 2.3220E+03 | 2.3287E+03
F22 | 2.3000E+03 | 2.3005E+03 | 2.3000E+03 | 2.3000E+03 | 2.3000E+03 | 2.3002E+03
F23 | 2.6861E+03 | 2.6750E+03 | 2.6845E+03 | 2.6806E+03 | 2.6715E+03 | 2.6811E+03
F24 | 2.8520E+03 | 2.8461E+03 | 2.8546E+03 | 2.8550E+03 | 2.8404E+03 | 2.8545E+03
F25 | 2.8868E+03 | 2.8878E+03 | 2.8865E+03 | 2.8864E+03 | 2.8868E+03 | 2.8887E+03
F26 | 3.9985E+03 | 3.8484E+03 | 3.9364E+03 | 3.9543E+03 | 3.7921E+03 | 3.9921E+03
F27 | 3.2032E+03 | 3.2086E+03 | 3.2071E+03 | 3.2052E+03 | 3.2048E+03 | 3.2091E+03
F28 | 3.1103E+03 | 3.2287E+03 | 3.1517E+03 | 3.1103E+03 | 3.2041E+03 | 3.1527E+03
F29 | 3.3649E+03 | 3.4187E+03 | 3.3781E+03 | 3.3833E+03 | 3.3949E+03 | 3.3776E+03
F30 | 5.2553E+03 | 8.7990E+03 | 5.7758E+03 | 5.4629E+03 | 7.7271E+03 | 5.7277E+03
Rank | 1 | 6 | 2 | 3 | 4 | 5
Table 2. Parameter settings of the experimental algorithms.
Algorithm | Parameter | Value
MSAPO (Proposed) | neighbor pairs (np) | 1
MSAPO (Proposed) | proportion fraction maximum | 0.1
MSAPO (Proposed) | chaotic parameter (P) | 0.1
APO | neighbor pairs (np) | 1
APO | proportion fraction maximum | 0.1
SMA | controlling parameter (z) | 0.03
AVOA | controlling parameters (p1, p2, p3, α, β, γ) | 0.6, 0.4, 0.6, 0.8, 0.2, 2.5
SO | threshold value | 0.25
ARO | / | /
NOA | controlling parameters (Prb, Pa2, δ) | 0.2, 0.2, 0.05
PSA | controlling parameters (t, Kp, Ki, Kd) | 1, 1, 0.5, 1.2
GWO | convergence constant a | decreases linearly from 2 to 0
WOA | convergence constant a | decreases linearly from 2 to 0
WOA | spiral factor b | 1
SSA | c1 | decreases from 2 to 0
XPSO | acceleration constants (cc) | [1.0, 0.5, 0.5]
XPSO | weight of the search space | 0.1
FVICLPSO | controlling parameters (λ, w) | 0.3, 0
SRPSO | controlling parameters (wmin, wmax, c1, c2) | 0.5, 1.05, 1.49445, 1.49445
LSHADE_cnEpSin | change freq | 0.5
LSHADE_cnEpSin | controlling parameters (pb, ps) | 0.4, 0.5
LSHADE_SPACMA | LRate | 0.8
LSHADE_SPACMA | controlling parameter | 10^−8
Table 3. Comparison results of various algorithms under 20D CEC2022.
F | Index | MSAPO | APO | SMA | AVOA | SO | ARO | NOA | PSA | GWO | WOA | SSA | XPSO | FVICLPSO | SRPSO
F1 | MR | 1.0000 | 9.3667 | 5.2333 | 6.4333 | 10.0000 | 13.5333 | 7.2333 | 3.4667 | 11.4000 | 11.2667 | 2.0000 | 3.5667 | 13.4333 | 7.0667
F1 | Rank | 1 | 9 | 5 | 6 | 10 | 14 | 8 | 3 | 12 | 11 | 2 | 4 | 13 | 7
F2 | MR | 3.2333 | 7.8333 | 5.9333 | 8.2667 | 5.7000 | 10.4667 | 5.6333 | 6.0667 | 10.7667 | 12.3333 | 7.2667 | 10.8000 | 5.4667 | 5.2333
F2 | Rank | 1 | 9 | 6 | 10 | 5 | 11 | 4 | 7 | 12 | 14 | 8 | 13 | 3 | 2
F3 | MR | 1.0000 | 2.0667 | 8.5000 | 12.6000 | 7.0333 | 10.2000 | 5.2000 | 9.3667 | 9.8667 | 13.9333 | 12.4333 | 5.8000 | 2.9333 | 4.0667
F3 | Rank | 1 | 2 | 8 | 13 | 7 | 11 | 5 | 9 | 10 | 14 | 12 | 6 | 3 | 4
F4 | MR | 2.7667 | 1.1000 | 9.5667 | 10.4333 | 3.8667 | 12.4667 | 6.6000 | 9.0667 | 6.4667 | 12.5333 | 9.5667 | 3.7667 | 12.0667 | 4.7333
F4 | Rank | 2 | 1 | 9 | 11 | 4 | 13 | 7 | 8 | 6 | 14 | 10 | 3 | 12 | 5
F5 | MR | 1.0000 | 3.4000 | 8.6000 | 11.5333 | 3.9333 | 13.3333 | 2.5333 | 9.3333 | 7.7000 | 12.9333 | 8.9000 | 4.6667 | 11.6667 | 5.4667
F5 | Rank | 1 | 3 | 8 | 11 | 4 | 14 | 2 | 10 | 7 | 13 | 9 | 5 | 12 | 6
F6 | MR | 1.3667 | 4.1333 | 11.7667 | 7.1000 | 7.1667 | 8.9667 | 2.9333 | 6.5333 | 10.5667 | 10.8000 | 9.3000 | 6.4667 | 11.8333 | 6.0667
F6 | Rank | 1 | 3 | 13 | 7 | 8 | 9 | 2 | 6 | 11 | 12 | 10 | 5 | 14 | 4
F7 | MR | 1.7000 | 2.4000 | 6.9000 | 12.2000 | 6.0333 | 11.7333 | 6.1000 | 9.3000 | 8.2333 | 13.4000 | 11.4333 | 4.6000 | 5.0333 | 5.9333
F7 | Rank | 1 | 2 | 8 | 13 | 6 | 12 | 7 | 10 | 9 | 14 | 11 | 3 | 4 | 5
F8 | MR | 1.5333 | 3.2333 | 6.4333 | 10.8000 | 5.9000 | 10.6000 | 7.3667 | 5.6000 | 9.8667 | 12.4000 | 11.3333 | 5.0000 | 6.5333 | 8.4000
F8 | Rank | 1 | 2 | 6 | 12 | 5 | 11 | 8 | 4 | 10 | 14 | 13 | 3 | 7 | 9
F9 | MR | 1.0000 | 3.3667 | 8.2333 | 8.7667 | 5.3667 | 12.2333 | 5.4000 | 2.1000 | 12.5000 | 13.3000 | 10.4333 | 11.3667 | 6.6667 | 4.2667
F9 | Rank | 1 | 3 | 8 | 9 | 5 | 12 | 6 | 2 | 13 | 14 | 10 | 11 | 7 | 4
F10 | MR | 2.9667 | 4.7333 | 10.1000 | 10.1000 | 4.1667 | 10.5333 | 5.6667 | 9.4333 | 9.8667 | 12.3000 | 9.1667 | 6.9667 | 1.5667 | 7.4333
F10 | Rank | 2 | 4 | 11 | 12 | 3 | 13 | 5 | 9 | 10 | 14 | 8 | 6 | 1 | 7
F11 | MR | 1.6333 | 4.6000 | 10.2000 | 6.5000 | 7.2000 | 10.6333 | 7.3667 | 4.4333 | 13.6667 | 11.4000 | 8.5667 | 5.0667 | 9.0000 | 4.7333
F11 | Rank | 1 | 3 | 11 | 6 | 7 | 12 | 8 | 2 | 14 | 13 | 9 | 5 | 10 | 4
F12 | MR | 1.8000 | 2.3667 | 5.1000 | 9.4000 | 8.8667 | 12.7000 | 2.8667 | 9.9667 | 7.9333 | 13.0000 | 8.3000 | 11.5000 | 4.9000 | 6.3000
F12 | Rank | 1 | 2 | 5 | 10 | 9 | 13 | 3 | 11 | 7 | 14 | 8 | 12 | 4 | 6
+/=/− | compared | / | 10/1/1 | 11/1/0 | 12/0/0 | 10/2/0 | 12/0/0 | 12/0/0 | 12/0/0 | 12/0/0 | 12/0/0 | 12/0/0 | 12/0/0 | 10/1/1 | 12/0/0
Average MR | | 1.7500 | 4.0500 | 8.0472 | 9.5111 | 6.2694 | 11.4500 | 5.4083 | 7.0556 | 9.9028 | 12.4667 | 9.0583 | 6.6306 | 7.5917 | 5.8083
Total Rank | | 1 | 2 | 9 | 11 | 5 | 13 | 3 | 7 | 12 | 14 | 10 | 6 | 8 | 4
Table 4. Comparison results of 16 algorithms under 30-dimensional CEC2017.
Table 4. Comparison results of 16 algorithms under 30-dimensional CEC2017.
FIndexMSAPOAPOSMAAVOASOARONOAPSAGWOWOASSAXPSOFVICLPSOSRPSOLSHADE_
cnEpSin
LSHADE_
SPACMA
F1MR2.83336.900010.26677.833310.200014.43337.96677.766715.900014.66677.73337.63339.63339.06672.16671.0000
Rank34138121497161565111021
F3MR1.800011.03333.83338.500012.600014.56676.60005.533311.766715.50006.46677.133314.60009.60002.80003.6667
Rank11149131475121668151023
F4MR1.43339.30007.16679.36676.800013.86676.30006.900013.200015.40009.666712.40009.70008.50003.73332.2667
Rank19710515461416111312832
F5MR3.90003.300010.033314.36674.500013.533310.166710.46679.133315.833312.16675.666712.00006.30002.60002.0333
Rank43915514101181613612721
F6MR1.00002.366710.233314.56678.633312.50006.600012.100011.266715.966714.36677.86675.43335.83334.10003.1667
Rank12101591371211161485643
F7MR3.36673.40008.566715.26676.700013.66679.833312.100010.200015.666710.36675.466711.43336.13332.73331.1000
Rank34815714913101611512621
F8MR5.20002.86679.866713.80004.233314.13339.800010.76678.666715.500012.20006.000013.03335.56672.56671.8000
Rank53101441591181612713621
F9MR1.60003.766711.033313.90005.700013.63333.366711.33339.233315.700012.93336.933312.23336.83334.43333.3667
Rank14101561421191613812753
F10MR3.96675.56677.466712.80009.86679.000013.90009.20007.866715.13339.46676.066713.10006.63333.10002.8667
Rank34713129151081611514621
F11MR2.00004.86678.70009.00005.900015.80006.00009.300013.333315.066711.33337.400013.46677.06673.30003.4667
Rank14910516611131512814723
F12MR2.56677.800010.400011.03336.366712.33334.40007.066714.400015.866712.93336.066713.53337.70002.06671.4667
Rank39101161247151613514821
F13MR2.16676.66679.366712.10007.733311.23335.03337.066713.866715.033313.96676.600012.53337.56672.66672.4000
Rank16101291147141615513832
F14MR2.23331.433311.600011.43338.900015.13333.93338.900011.266715.36679.43338.066712.56678.33334.00003.4000
Rank21131281549111610614753
F15MR1.16672.566711.166711.60009.033311.20004.63338.200014.333315.266714.36678.06678.13338.70003.96673.6000
Rank12111310125814161567943
F16MR1.76672.76678.666713.10006.366714.16677.900012.20009.566715.566712.10007.40008.83336.23333.70005.6667
Rank12914615813111612710534
F17MR1.33332.466712.200013.16678.233314.76675.800012.166710.000014.933311.60007.00007.20007.40003.56674.1667
Rank12131491551210161167834
F18MR2.66675.066712.033312.00009.500014.63333.73337.866711.466714.233310.26677.633311.100010.20001.73331.8667
Rank35141381647121510611912
F19MR1.56672.733311.766710.13338.700011.20003.80009.166713.933315.733314.70009.03337.10009.06673.50003.8667
Rank12131171241014161586935
F20MR2.10002.600011.500014.26676.700014.26676.166711.300010.500014.600012.16677.06677.10006.53333.80005.3333
Rank12121471551110161389634
F21MR3.06672.566710.700014.30004.766713.56679.366710.73339.200015.866712.00006.100011.06676.46672.80003.4333
Rank31101551491181613612724
F22MR2.66677.466712.966712.900011.233312.86676.600010.133311.566713.76678.90004.683310.23336.43332.56671.0167
Rank37151411136912168410521
F23MR4.10002.266710.100014.56675.666714.03338.933311.56679.866715.666711.16676.100011.46675.76672.26672.4667
Rank41101551481391611712623
F24MR4.10002.433310.266714.56675.700015.53339.000011.26679.366714.80009.53336.166712.80005.30002.76672.4000
Rank42111461681291510713531
F25MR4.00006.13336.86678.26677.133314.40003.53339.500014.366715.466710.400011.033311.56675.60004.00003.7333
Rank36798151101416111213542
F26MR5.63335.433311.000013.63338.533314.46675.700010.70009.866715.30008.63334.16678.50006.26673.93334.2333
Rank54131491561211161028713
F27MR2.86672.43336.100012.966710.633314.16677.600011.43339.966715.466710.200012.03337.00005.03334.06674.0333
Rank21614111581291610137543
F28 | MR | 2.1333 | 8.8667 | 9.5000 | 8.6667 | 9.9000 | 15.0000 | 4.4667 | 6.8667 | 14.7667 | 14.9000 | 9.7667 | 4.3667 | 12.9000 | 5.9667 | 5.4667 | 2.4667
Rank19108121647141511313652
F29 | MR | 1.4667 | 3.1667 | 11.2667 | 13.4333 | 8.1000 | 13.3000 | 9.0667 | 10.7667 | 9.6667 | 15.9000 | 13.2000 | 6.7667 | 8.2667 | 4.7667 | 3.3000 | 3.5667
Rank12121571491110161368534
F30 | MR | 2.0333 | 5.2000 | 8.6667 | 12.1333 | 5.8333 | 10.9000 | 9.6333 | 4.9000 | 15.0000 | 15.7333 | 14.2667 | 8.7333 | 11.9667 | 6.8667 | 2.2000 | 1.9333
Rank25813611104151614912731
Average MR | 2.6460 | 4.6011 | 9.7690 | 12.1954 | 7.7299 | 13.5276 | 6.8908 | 9.5609 | 11.5011 | 15.3069 | 11.2517 | 7.2293 | 10.6379 | 6.9563 | 3.2379 | 2.9580
Total Rank | 1 | 4 | 10 | 14 | 8 | 15 | 5 | 9 | 13 | 16 | 12 | 7 | 11 | 6 | 3 | 2
Table 5. Comparison results of 16 algorithms under 50-dimensional CEC2017.
F | Index | MSAPO | APO | SMA | AVOA | SO | ARO | NOA | PSA | GWO | WOA | SSA | XPSO | FVICLPSO | SRPSO | LSHADE_cnEpSin | LSHADE_SPACMA
F1 | MR | 2.6000 | 10.0667 | 10.8000 | 5.6333 | 11.9333 | 15.8000 | 8.5667 | 5.3000 | 15.1667 | 14.0333 | 4.9000 | 5.1000 | 13.0000 | 6.0667 | 4.7333 | 2.3000
Rank21011712169615144513831
F3 | MR | 1.2000 | 12.0333 | 3.4333 | 7.6333 | 12.4333 | 15.5000 | 5.5333 | 6.3333 | 10.2000 | 13.9000 | 7.1333 | 8.3333 | 15.2667 | 9.7333 | 2.8000 | 4.5333
Rank11238131656111479151024
F4 | MR | 2.3333 | 8.5667 | 7.6667 | 7.3333 | 6.9000 | 15.6000 | 5.7333 | 4.6333 | 14.0667 | 15.0000 | 8.7333 | 11.5667 | 13.1667 | 6.8667 | 4.4667 | 3.3667
Rank11098716541415111213632
F5 | MR | 3.9333 | 5.7000 | 9.5333 | 12.6000 | 2.4667 | 14.3333 | 11.4333 | 9.9333 | 8.7667 | 15.8000 | 11.1333 | 4.8333 | 14.4667 | 5.6333 | 3.6667 | 1.7667
Rank47913214121081611515631
F6 | MR | 1.0000 | 2.0667 | 11.3000 | 14.4333 | 6.3667 | 12.1667 | 6.5000 | 11.9667 | 10.6000 | 16.0000 | 14.5333 | 7.9333 | 6.8333 | 5.3333 | 5.5667 | 3.4000
Rank12111461371210161598453
F7 | MR | 2.7667 | 4.6667 | 8.5000 | 14.8333 | 5.7333 | 14.0333 | 9.7667 | 12.1000 | 9.0333 | 15.8333 | 10.3667 | 4.1333 | 12.1333 | 7.6667 | 3.2333 | 1.2000
Rank25815614101291611413731
F8 | MR | 4.4000 | 5.3000 | 9.3000 | 13.0667 | 2.6333 | 13.8333 | 11.2667 | 10.1000 | 8.5000 | 15.4000 | 11.8000 | 5.0333 | 14.5667 | 5.7333 | 3.1667 | 1.9000
Rank46913214111081612515731
F9 | MR | 1.6667 | 2.7333 | 11.4667 | 12.3333 | 6.4000 | 13.6000 | 7.1333 | 10.9000 | 9.1667 | 15.7000 | 11.7333 | 5.6000 | 15.0667 | 4.3333 | 4.6667 | 3.5000
Rank12111371481091612615453
F10 | MR | 3.0667 | 8.3000 | 5.6667 | 10.3667 | 14.2333 | 6.9333 | 13.8000 | 5.5667 | 7.3333 | 13.7667 | 9.7333 | 4.7333 | 14.9333 | 5.9667 | 5.7667 | 5.8333
Rank11041215814391311216756
F11 | MR | 1.1667 | 4.0667 | 8.6333 | 7.2333 | 9.7000 | 15.5667 | 7.0667 | 7.9333 | 14.4667 | 13.4000 | 11.2667 | 7.2000 | 14.5667 | 3.8667 | 5.3667 | 4.5000
Rank13108111669141312715254
F12 | MR | 2.1333 | 7.2667 | 9.7333 | 10.3667 | 7.8667 | 14.3667 | 5.2000 | 6.7333 | 14.2667 | 15.1667 | 12.1667 | 5.7667 | 14.0000 | 6.9333 | 2.3000 | 1.7333
Rank28101191546141612513731
F13 | MR | 2.2333 | 3.7000 | 9.8667 | 11.3000 | 7.9000 | 13.6000 | 8.2333 | 5.0667 | 15.6000 | 14.4000 | 11.7000 | 5.3000 | 14.3667 | 4.5667 | 4.5000 | 3.6667
Rank13101181396161512714542
F14 | MR | 2.5333 | 5.2000 | 10.7333 | 11.0333 | 8.4667 | 15.7667 | 3.6000 | 8.1667 | 12.4000 | 14.4667 | 10.0000 | 7.0000 | 14.0667 | 8.6000 | 1.7667 | 2.2000
Rank35111281647131510614912
F15 | MR | 6.2667 | 5.3000 | 10.9333 | 12.2333 | 7.7667 | 12.6333 | 4.8000 | 7.0333 | 15.1000 | 15.0667 | 13.4333 | 4.9000 | 10.7667 | 5.6333 | 2.1000 | 2.0333
Rank75111291338161514410621
F16 | MR | 2.5000 | 4.0333 | 10.3333 | 13.3333 | 9.0000 | 13.5333 | 7.5000 | 11.1667 | 7.7000 | 15.8667 | 11.1000 | 6.0000 | 11.5000 | 3.9667 | 3.0667 | 5.4000
Rank14101491571281611613325
F17 | MR | 1.3333 | 3.7000 | 10.4333 | 13.4667 | 7.0000 | 13.6333 | 6.4333 | 11.0667 | 8.3000 | 15.2667 | 11.9000 | 7.5000 | 10.2000 | 6.8000 | 2.9667 | 6.0000
Rank13111471551291613810624
F18 | MR | 2.6000 | 5.4333 | 10.7333 | 9.8333 | 10.0333 | 14.6333 | 3.9333 | 7.6667 | 12.2333 | 15.2667 | 9.9000 | 7.6333 | 13.2333 | 9.4000 | 1.7333 | 1.7333
Rank35129111547131610614812
F19 | MR | 1.6667 | 8.1000 | 7.1333 | 9.9333 | 8.8667 | 11.6667 | 4.3333 | 8.4333 | 14.2333 | 15.3000 | 15.3000 | 7.5333 | 11.2333 | 7.7333 | 2.6000 | 1.9333
Rank18511101349141516612732
F20 | MR | 2.1000 | 3.2000 | 10.3667 | 13.5667 | 10.2333 | 12.9667 | 6.3667 | 11.2333 | 9.3000 | 14.9000 | 12.0667 | 6.5000 | 8.0333 | 4.9667 | 3.7000 | 6.5000
Rank12111510145129161368437
F21 | MR | 2.0667 | 4.7000 | 9.4000 | 13.7667 | 2.9333 | 13.9667 | 10.5000 | 10.9000 | 8.9000 | 15.8000 | 10.7000 | 5.1333 | 13.9667 | 5.4000 | 3.7000 | 4.1667
Rank15913214101281611615734
F22 | MR | 1.1667 | 8.8333 | 6.9333 | 10.5667 | 13.1000 | 8.4333 | 12.2667 | 8.3667 | 6.7667 | 14.4667 | 9.1333 | 5.2667 | 14.0333 | 5.5333 | 6.2000 | 4.9333
Rank11071214913861611315452
F23 | MR | 1.9333 | 3.5000 | 9.4667 | 14.6333 | 4.9000 | 14.3667 | 10.5333 | 11.0333 | 9.2667 | 15.9667 | 9.9000 | 5.4000 | 12.7667 | 5.0000 | 3.3667 | 3.9667
Rank13915514111281610713624
F24 | MR | 1.5333 | 3.6000 | 9.4333 | 14.2333 | 7.1667 | 15.7667 | 10.3333 | 10.9333 | 8.7667 | 15.0000 | 9.0667 | 5.0000 | 12.9667 | 5.1333 | 3.3667 | 3.7000
Rank13101471611128159513624
F25 | MR | 6.7667 | 10.7000 | 4.8667 | 8.8000 | 6.0667 | 15.5667 | 7.5000 | 6.2333 | 14.4333 | 14.7667 | 5.5333 | 11.5667 | 13.1333 | 4.2667 | 2.0000 | 3.8000
Rank81141061697141551213312
F26 | MR | 3.1000 | 6.0333 | 8.0000 | 13.8000 | 6.9333 | 14.3667 | 9.5000 | 12.2000 | 9.7000 | 15.9667 | 5.6000 | 4.0333 | 12.8333 | 5.4000 | 4.4333 | 4.1000
Rank17914815101211166213543
F27 | MR | 2.1667 | 4.0000 | 7.1667 | 13.4000 | 9.3333 | 14.9333 | 9.9000 | 10.2333 | 10.1000 | 15.7667 | 9.6667 | 10.2000 | 8.7000 | 4.0000 | 4.1000 | 2.3333
Rank13614815101311169127452
F28 | MR | 3.2000 | 11.2333 | 5.7667 | 7.5333 | 9.3000 | 15.3667 | 8.4333 | 6.8333 | 14.3667 | 14.5000 | 5.5000 | 10.7333 | 13.7667 | 4.0333 | 2.6000 | 2.8333
Rank31268101697141551113412
F29 | MR | 1.6333 | 3.9000 | 9.8333 | 13.0333 | 4.9000 | 13.4667 | 9.2000 | 10.3667 | 10.1667 | 16.0000 | 13.9333 | 7.2667 | 10.3333 | 5.7333 | 3.5333 | 2.7000
Rank14913514812101615711632
F30 | MR | 1.7333 | 4.8333 | 8.0333 | 9.8333 | 5.8000 | 12.5000 | 11.9667 | 4.8000 | 14.6000 | 15.9333 | 14.4667 | 10.5333 | 10.0000 | 3.2667 | 3.8000 | 3.9000
Rank16897131251516141110234
Average MR | 2.5103 | 5.8885 | 8.8092 | 11.3839 | 7.8057 | 13.7552 | 8.1839 | 8.7322 | 11.1552 | 15.1276 | 10.4276 | 6.8184 | 12.5483 | 5.7782 | 3.6299 | 3.4460
Total Rank | 1 | 5 | 10 | 13 | 7 | 15 | 8 | 9 | 12 | 16 | 11 | 6 | 14 | 4 | 3 | 2
Table 6. Comparison results of 16 algorithms under 100-dimensional CEC2017.
F | Index | MSAPO | APO | SMA | AVOA | SO | ARO | NOA | PSA | GWO | WOA | SSA | XPSO | FVICLPSO | SRPSO | LSHADE_cnEpSin | LSHADE_SPACMA
F1 | MR | 1.6667 | 12.0000 | 8.2333 | 5.8333 | 9.7667 | 16.0000 | 10.9667 | 6.8667 | 14.9667 | 14.0333 | 2.3667 | 2.3000 | 13.0000 | 8.8333 | 4.9333 | 4.2333
Rank11286101611715143213954
F3 | MR | 1.0000 | 11.6667 | 3.8667 | 6.3000 | 9.4000 | 15.0667 | 4.2667 | 11.0667 | 7.1667 | 15.3000 | 7.6667 | 9.8667 | 14.4667 | 9.7000 | 3.4333 | 5.7667
Rank11336915412716811141025
F4 | MR | 1.9000 | 10.7333 | 3.6667 | 9.2333 | 6.6000 | 16.0000 | 8.6000 | 7.1333 | 14.2667 | 14.7000 | 7.5333 | 11.4333 | 13.0333 | 4.7667 | 3.1000 | 3.3000
Rank11141061697141581213523
F5 | MR | 2.0667 | 6.9333 | 9.2667 | 11.5667 | 3.4333 | 14.6000 | 12.5000 | 9.8000 | 8.5667 | 14.8333 | 11.2333 | 2.7000 | 15.4333 | 4.4667 | 5.1667 | 3.4333
Rank17912314131081511216564
F6 | MR | 1.0000 | 2.7333 | 11.3000 | 14.3000 | 2.9333 | 12.2333 | 7.6333 | 12.0000 | 10.4667 | 16.0000 | 14.7000 | 7.0333 | 8.1000 | 6.1333 | 5.8000 | 3.6333
Rank12111431381210161579654
F7 | MR | 1.5000 | 6.0333 | 8.1667 | 14.6000 | 4.1333 | 14.4333 | 9.0667 | 12.4000 | 10.0667 | 15.9667 | 10.7667 | 2.7000 | 12.0333 | 6.9000 | 4.9333 | 2.3000
Rank16815414913101611312752
F8 | MR | 2.3333 | 6.9667 | 8.9333 | 12.5667 | 3.3333 | 14.6333 | 11.9333 | 10.2333 | 8.4000 | 15.5333 | 10.7667 | 2.5000 | 14.7000 | 4.7333 | 4.9000 | 3.5333
Rank17913314121081611215564
F9 | MR | 1.2000 | 4.5667 | 11.5000 | 9.9333 | 3.7667 | 14.3333 | 8.4000 | 11.0333 | 10.4333 | 14.5667 | 11.0667 | 3.7333 | 16.0000 | 6.3333 | 5.6000 | 3.5333
Rank15139414811101512316762
F10 | MR | 2.6000 | 10.0000 | 4.0667 | 6.8333 | 15.5000 | 6.2000 | 13.8000 | 4.8333 | 4.5333 | 12.9667 | 5.3667 | 4.1667 | 15.3000 | 9.2000 | 10.2000 | 10.4333
Rank11028167145413631591112
F11 | MR | 1.0333 | 10.9333 | 2.6000 | 8.0667 | 12.6333 | 15.5667 | 9.1000 | 5.4333 | 12.5000 | 14.2000 | 7.4333 | 5.9333 | 15.0333 | 8.5333 | 3.4333 | 3.5667
Rank11128131610512147615934
F12 | MR | 1.2000 | 8.1000 | 9.2333 | 9.0667 | 7.9333 | 15.9000 | 5.6000 | 4.6333 | 14.5333 | 13.2000 | 11.9333 | 9.4333 | 14.3667 | 5.8667 | 3.0000 | 2.0000
Rank18109716541513121114632
F13 | MR | 1.5667 | 4.9667 | 11.5000 | 10.6000 | 9.4000 | 15.8667 | 3.0000 | 5.1667 | 15.0333 | 13.9667 | 10.5333 | 6.4000 | 13.0667 | 3.5667 | 6.5667 | 4.8000
Rank15121191626151410713384
F14 | MR | 3.1667 | 9.7000 | 10.0333 | 8.3000 | 9.2333 | 15.6333 | 3.2667 | 7.6333 | 12.2667 | 13.9667 | 9.7000 | 5.7333 | 15.2667 | 8.5333 | 1.6333 | 1.9333
Rank31012791646131411515812
F15 | MR | 1.7000 | 2.9333 | 10.7667 | 10.9333 | 8.3333 | 15.5000 | 5.7667 | 4.5000 | 15.3333 | 13.8000 | 11.5000 | 6.4000 | 13.1667 | 5.2667 | 6.3667 | 3.7333
Rank12101191664151412813573
F16 | MR | 2.8667 | 5.4333 | 7.3333 | 9.8000 | 11.4667 | 12.9333 | 12.1667 | 7.7000 | 7.6333 | 15.9000 | 9.9667 | 5.2000 | 15.0333 | 2.1667 | 3.6333 | 6.7667
Rank25710121413981611415136
F17 | MR | 1.6333 | 3.5667 | 9.2000 | 12.0000 | 10.8333 | 13.8667 | 7.9667 | 9.6667 | 5.9667 | 15.2000 | 10.2000 | 5.8000 | 14.8667 | 4.6000 | 3.3667 | 7.2667
Rank13913121481061611515427
F18 | MR | 3.0000 | 7.1667 | 12.1333 | 8.2667 | 11.2000 | 14.7667 | 3.6333 | 7.2000 | 11.0667 | 12.5000 | 9.6333 | 5.8333 | 15.8667 | 10.3667 | 1.4333 | 1.9333
Rank36138121547111495161012
F19 | MR | 3.7333 | 2.9000 | 9.7000 | 10.9333 | 8.1000 | 15.0000 | 4.6667 | 5.6333 | 14.7667 | 14.6000 | 13.4333 | 5.4333 | 12.1667 | 5.2000 | 5.0667 | 4.6667
Rank21101191638151413712654
F20 | MR | 2.0333 | 4.3667 | 9.2333 | 11.6333 | 14.1000 | 11.0000 | 9.7667 | 8.4000 | 5.9667 | 14.1333 | 8.0667 | 4.2333 | 13.4333 | 6.7000 | 4.1667 | 8.7667
Rank14101315121185167314629
F21 | MR | 1.1000 | 6.3333 | 9.0000 | 13.6333 | 3.3667 | 14.1000 | 11.0333 | 10.2333 | 8.9333 | 16.0000 | 10.7667 | 2.9333 | 14.2333 | 3.9333 | 5.3667 | 5.0333
Rank17913314121081611215465
F22 | MR | 1.7333 | 10.5333 | 4.5000 | 8.0667 | 14.9667 | 6.7333 | 13.9000 | 4.2333 | 5.6667 | 13.4333 | 6.1667 | 4.3333 | 15.5333 | 6.7333 | 9.9000 | 9.5667
Rank11249157142513631681110
F23 | MR | 1.0333 | 4.9333 | 7.5333 | 14.5000 | 2.0667 | 14.3667 | 9.0667 | 10.2333 | 9.9333 | 16.0000 | 11.2667 | 7.6667 | 13.1333 | 4.8333 | 5.3333 | 4.1000
Rank15715214911101612813463
F24 | MR | 1.1000 | 4.4667 | 8.0333 | 14.3000 | 3.7000 | 14.7333 | 10.2000 | 11.1667 | 9.9333 | 15.9667 | 9.5333 | 7.2000 | 12.8667 | 2.4333 | 5.1333 | 5.2333
Rank14814315111210169713256
F25 | MR | 2.2333 | 10.9333 | 4.0667 | 7.4333 | 7.9667 | 15.9667 | 8.1667 | 6.8667 | 14.4000 | 13.4000 | 8.4667 | 11.9000 | 14.2333 | 3.7000 | 3.7000 | 2.5667
Rank11157816961513101214342
F26 | MR | 3.6000 | 5.0667 | 8.1000 | 14.3333 | 4.0667 | 14.6667 | 10.7000 | 11.7000 | 9.5000 | 16.0000 | 8.6000 | 3.5000 | 12.7000 | 3.1667 | 5.0000 | 5.3000
Rank36814415111210169213157
F27 | MR | 2.2000 | 5.8333 | 5.0000 | 12.7667 | 6.0333 | 14.7667 | 9.6667 | 9.3333 | 11.6000 | 15.9000 | 9.3667 | 10.3000 | 13.3333 | 2.4333 | 4.3000 | 3.1667
Rank16513715108121691114243
F28 | MR | 2.1333 | 11.5667 | 2.9000 | 7.1667 | 10.3333 | 15.2000 | 8.8333 | 6.7333 | 13.6333 | 13.3667 | 7.0333 | 11.0333 | 15.8000 | 3.7000 | 3.6667 | 2.9000
Rank11228101596141371116543
F29 | MR | 1.8000 | 5.0000 | 7.2000 | 11.4333 | 3.4000 | 14.1667 | 11.1333 | 7.9000 | 10.6333 | 15.9667 | 12.9667 | 7.7000 | 14.0000 | 4.7667 | 3.7667 | 4.1667
Rank16712215119101613814534
F30 | MR | 1.1000 | 6.9667 | 8.6667 | 11.3667 | 7.0333 | 14.5000 | 8.0667 | 3.0000 | 14.8000 | 15.3000 | 13.4000 | 9.5333 | 11.1333 | 4.9000 | 4.0667 | 2.1667
Rank16912714831516131011542
Average MR | 1.9046 | 7.0115 | 7.7839 | 10.5437 | 7.7598 | 14.0943 | 8.7195 | 8.0253 | 10.7920 | 14.7138 | 9.7046 | 6.3080 | 13.8379 | 5.6023 | 4.7230 | 4.4759
Total Rank | 1 | 6 | 8 | 12 | 7 | 15 | 10 | 9 | 13 | 16 | 11 | 5 | 14 | 4 | 3 | 2
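The mean rank (MR) of an algorithm on a function is its rank (1 = best, lower error = better) among the 16 competitors in each independent run, averaged over the runs, and the Total Rank orders the algorithms by their average MR. The paper does not reproduce its ranking code; the following is a minimal sketch with hypothetical toy data (the array shape and values are illustrative only):

```python
import numpy as np
from scipy.stats import rankdata

def mean_ranks(errors):
    """Mean rank (MR) of each algorithm on one benchmark function.

    errors: (n_runs, n_algorithms) array of final objective errors,
    lower is better. Each run is ranked 1..n_algorithms (average
    ranks for ties), then ranks are averaged over the runs.
    """
    per_run = np.apply_along_axis(rankdata, 1, errors)
    return per_run.mean(axis=0)

# Hypothetical toy data: 3 runs of 4 algorithms on one function.
errors = np.array([
    [0.1, 0.5, 0.3, 0.9],
    [0.2, 0.4, 0.6, 0.8],
    [0.1, 0.7, 0.2, 0.5],
])
mr = mean_ranks(errors)
total_rank = rankdata(mr)  # overall ordering, as in the "Total Rank" rows
```

Because ties receive average ranks, fractional mean ranks such as 4.6833 can appear even with 30 runs.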
Table 7. Wilcoxon rank sum test results of MSAPO and other algorithms under 30-dimensional CEC2017.
F | APO | SMA | AVOA | SO | ARO | NOA | PSA | GWO | WOA | SSA | XPSO | FVICLPSO | SRPSO | LSHADE_cnEpSin | LSHADE_SPACMA
F1 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.9998E−01
1.0000E+00
F3 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.9997E−01
F4 | 1.8449E−11
+
8.0661E−11
+
2.7470E−11
+
9.7839E−11
+
1.5099E−11
+
7.3215E−11
+
1.2193E−09
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.7863E−10
+
1.5099E−11
+
1.8449E−11
+
6.5555E−09
+
5.2033E−05
+
F5 | 1.4128E−01
=
1.6692E−11
+
1.5099E−11
+
7.7312E−01
=
1.5099E−11
+
1.5099E−11
+
1.6692E−11
+
1.6692E−11
+
1.5099E−11
+
1.5099E−11
+
9.2874E−04
+
1.5099E−11
+
7.4590E−07
+
9.9912E−01
9.9996E−01
F6 | 1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
1.4986E−11
+
F7 | 6.7350E−01
=
5.4683E−11
+
1.5099E−11
+
6.5555E−09
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.8449E−11
+
1.5099E−11
+
1.5099E−11
+
1.7962E−05
+
1.5099E−11
+
2.4005E−07
+
9.0490E−02
=
1.0000E+00
F8 | 1.0000E+00
2.2508E−11
+
1.5090E−11
+
9.9308E−01
1.5090E−11
+
1.5090E−11
+
1.5090E−11
+
4.8752E−10
+
1.5090E−11
+
1.5090E−11
+
2.1789E−02
+
1.5090E−11
+
1.5797E−01
=
1.0000E+00
1.0000E+00
F9 | 2.1536E−08
+
7.1337E−12
+
7.1337E−12
+
1.0808E−10
+
7.1337E−12
+
4.6311E−07
+
7.1337E−12
+
7.1337E−12
+
7.1337E−12
+
7.1337E−12
+
1.6221E−11
+
7.1337E−12
+
7.1337E−12
+
1.9564E−09
+
1.6715E−04
+
F10 | 1.2551E−02
+
2.6320E−04
+
2.2522E−11
+
2.6325E−05
+
5.1387E−07
+
1.5099E−11
+
1.0076E−08
+
9.3042E−07
+
1.5099E−11
+
4.1760E−08
+
4.9417E−03
+
1.5099E−11
+
7.5071E−03
+
4.0354E−01
=
9.9258E−02
=
F11 | 8.0311E−07
+
2.0998E−10
+
1.0772E−10
+
8.6470E−08
+
1.5099E−11
+
4.5316E−08
+
1.0772E−10
+
1.5099E−11
+
1.5099E−11
+
2.4876E−11
+
9.2837E−10
+
1.5099E−11
+
7.1471E−09
+
8.4066E−05
+
4.9417E−03
+
F12 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.7839E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
6.0116E−09
+
1.5099E−11
+
1.5099E−11
+
9.8966E−01
9.9998E−01
F13 | 2.3080E−10
+
1.6692E−11
+
1.5099E−11
+
2.7470E−11
+
1.5099E−11
+
2.4990E−09
+
1.0974E−08
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
6.2704E−08
+
1.5099E−11
+
2.0564E−07
+
2.7850E−03
+
5.1877E−02
=
F14 | 9.9916E−01
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.3610E−10
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.2457E−06
+
6.2385E−05
+
F15 | 3.7996E−07
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.2522E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.8449E−11
+
3.0605E−10
+
1.7486E−09
+
F16 | 1.5915E−03
+
8.8845E−11
+
1.5099E−11
+
6.5555E−09
+
1.5099E−11
+
6.6443E−11
+
1.6692E−11
+
6.6443E−11
+
1.5099E−11
+
1.5099E−11
+
1.2193E−09
+
1.6692E−11
+
2.4990E−09
+
1.3863E−05
+
2.7664E−08
+
F17 | 4.8959E−05
+
1.5099E−11
+
1.5099E−11
+
7.3215E−11
+
1.5099E−11
+
1.5099E−11
+
2.4876E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.0329E−11
+
1.5099E−11
+
1.5099E−11
+
2.7664E−08
+
1.6760E−08
+
F18 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
7.0334E−05
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.9971E−01
9.9957E−01
F19 | 5.1573E−03
+
1.5099E−11
+
1.5099E−11
+
2.4876E−11
+
1.5099E−11
+
2.4876E−11
+
2.0386E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.3655E−08
+
2.4990E−09
+
F20 | 1.5733E−02
+
7.3215E−11
+
1.5099E−11
+
8.6452E−07
+
1.5099E−11
+
7.3666E−08
+
3.3478E−11
+
6.0283E−11
+
1.5099E−11
+
1.5099E−11
+
7.7326E−10
+
1.7486E−09
+
7.7326E−10
+
9.4581E−05
+
2.2102E−06
+
F21 | 3.2553E−01
=
1.5099E−11
+
1.5099E−11
+
1.5915E−03
+
1.5099E−11
+
2.7863E−10
+
1.8449E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
5.9684E−07
+
1.5405E−08
+
2.4990E−09
+
5.2014E−01
=
7.8446E−01
=
F22 | 1.4299E−10
+
1.2145E−11
+
1.8184E−11
+
2.9970E−11
+
1.2145E−11
+
2.3037E−10
+
2.7131E−11
+
1.2145E−11
+
1.2145E−11
+
5.4197E−11
+
3.0356E−10
+
1.2145E−11
+
1.2989E−10
+
6.9649E−01
=
1.0000E+00
F23 | 9.9996E−01
1.5099E−11
+
1.5099E−11
+
1.3863E−05
+
1.6692E−11
+
4.4967E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.8294E−05
+
1.5099E−11
+
2.4909E−04
+
9.9996E−01
9.9972E−01
F24 | 9.9967E−01
1.5099E−11
+
1.5099E−11
+
2.0165E−03
+
1.5099E−11
+
1.6692E−11
+
1.5099E−11
+
1.9101E−10
+
1.5099E−11
+
1.5099E−11
+
3.4563E−04
+
1.5099E−11
+
1.5733E−02
+
9.9667E−01
9.9938E−01
F25 | 4.8959E−05
+
3.3681E−06
+
5.0177E−04
+
5.1387E−07
+
1.5099E−11
+
8.5000E−02
=
7.9820E−08
+
1.5099E−11
+
1.5099E−11
+
4.5316E−08
+
1.5099E−11
+
1.5099E−11
+
4.7341E−03
+
7.7312E−01
=
6.2040E−01
=
F26 | 9.9239E−02
=
1.8403E−11
+
2.5418E−08
+
2.4941E−09
+
1.5061E−11
+
8.9999E−01
=
1.6600E−06
+
3.0539E−10
+
1.5061E−11
+
3.9778E−03
+
3.7104E−01
=
8.8214E−03
+
2.3982E−01
=
9.9999E−01
9.9986E−01
F27 | 2.9727E−01
=
8.1753E−06
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
4.4967E−11
+
2.0998E−10
+
1.1857E−10
+
1.5099E−11
+
1.0772E−10
+
2.5362E−10
+
1.6692E−11
+
7.2061E−03
+
1.8354E−03
+
5.5546E−02
=
F28 | 1.5099E−11
+
1.6692E−11
+
2.4876E−11
+
1.5099E−11
+
1.5099E−11
+
1.3049E−10
+
2.7470E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.6937E−02
+
1.5099E−11
+
3.6945E−11
+
4.5316E−08
+
6.5671E−02
=
F29 | 1.9026E−07
+
1.5099E−11
+
1.5099E−11
+
2.2522E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
4.9593E−11
+
1.5099E−11
+
1.7486E−09
+
2.7310E−06
+
6.4302E−07
+
F30 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.3080E−10
+
1.5099E−11
+
1.5099E−11
+
9.7839E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.9258E−02
=
7.8446E−01
=
+/=/− | 20/5/4 | 29/0/0 | 29/0/0 | 27/1/1 | 29/0/0 | 27/2/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 28/1/0 | 29/0/0 | 27/2/0 | 15/6/8 | 11/7/11
Table 8. Wilcoxon rank sum test results of MSAPO and other algorithms under 50-dimensional CEC2017.
F | APO | SMA | AVOA | SO | ARO | NOA | PSA | GWO | WOA | SSA | XPSO | FVICLPSO | SRPSO | LSHADE_cnEpSin | LSHADE_SPACMA
F1 | 1.5099E−11
+
1.5099E−11
+
1.7600E−07
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
7.4658E−05
+
1.5099E−11
+
1.5099E−11
+
4.7604E−04
+
2.9853E−05
+
1.5099E−11
+
1.6760E−08
+
1.2497E−03
+
7.2827E−01
=
F3 | 1.5099E−11
+
2.4876E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
8.0661E−11
+
2.4990E−09
+
F4 | 4.9593E−11
+
2.5362E−10
+
2.2863E−09
+
4.2424E−09
+
1.5099E−11
+
2.1553E−08
+
1.5786E−05
+
1.5099E−11
+
1.5099E−11
+
4.8778E−10
+
2.0386E−11
+
1.5099E−11
+
1.2193E−09
+
6.6248E−05
+
9.3341E−02
=
F5 | 2.9853E−05
+
1.5099E−11
+
1.5099E−11
+
9.9993E−01
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.6692E−11
+
1.5099E−11
+
1.5099E−11
+
1.2078E−02
+
1.5099E−11
+
1.2653E−04
+
3.4783E−01
=
1.0000E+00
F6 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
F7 | 3.0052E−08
+
1.5099E−11
+
1.5099E−11
+
6.0116E−09
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.4563E−04
+
1.5099E−11
+
3.0605E−10
+
3.6322E−01
=
1.0000E+00
F8 | 7.2446E−02
=
1.5099E−11
+
1.5099E−11
+
9.9999E−01
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.8090E−01
=
1.5099E−11
+
1.6693E−03
+
9.9816E−01
1.0000E+00
F9 | 5.8736E−05
+
1.5099E−11
+
1.5099E−11
+
8.0661E−11
+
1.5099E−11
+
1.8449E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.0329E−11
+
1.5099E−11
+
7.7904E−09
+
2.7863E−10
+
1.1884E−07
+
F10 | 1.5984E−09
+
7.5890E−04
+
1.0772E−10
+
2.2522E−11
+
2.2220E−07
+
1.5099E−11
+
1.8219E−02
+
2.6325E−05
+
1.6692E−11
+
1.0974E−08
+
1.4603E−02
+
1.5099E−11
+
6.7869E−02
=
1.2164E−05
+
2.2102E−06
+
F11 | 1.8449E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
4.0764E−11
+
1.5099E−11
+
3.0605E−10
+
7.3215E−11
+
1.0169E−09
+
F12 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.5593E−09
+
1.5099E−11
+
1.8449E−11
+
8.1875E−01
=
9.7897E−01
F13 | 5.8688E−04
+
1.5099E−11
+
1.5099E−11
+
8.8845E−11
+
1.5099E−11
+
2.4876E−11
+
2.0420E−05
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
6.9263E−07
+
1.5099E−11
+
7.2116E−04
+
1.4395E−06
+
1.8852E−04
+
F14 | 2.5362E−10
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.1886E−03
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
5.0120E−02
=
3.7108E−01
=
F15 | 9.9336E−01
1.8229E−08
+
1.7371E−10
+
2.7071E−01
=
1.1857E−10
+
9.9833E−01
4.6427E−01
=
1.5099E−11
+
1.5099E−11
+
3.0329E−11
+
9.8594E−01
1.5099E−11
+
1.5367E−01
=
9.9992E−01
9.9992E−01
F16 | 6.5083E−04
+
2.0386E−11
+
1.5099E−11
+
5.3328E−08
+
1.5099E−11
+
8.8845E−11
+
6.6443E−11
+
4.8778E−10
+
1.5099E−11
+
7.3215E−11
+
3.3825E−05
+
1.5099E−11
+
1.0753E−02
+
3.0418E−01
=
1.1390E−05
+
F17 | 1.5984E−09
+
1.5099E−11
+
1.5099E−11
+
2.4990E−09
+
1.5099E−11
+
2.2522E−11
+
1.5099E−11
+
7.3215E−11
+
1.5099E−11
+
1.5099E−11
+
2.4876E−11
+
1.5099E−11
+
1.5794E−10
+
8.6452E−07
+
1.9101E−10
+
F18 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
5.0523E−09
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.9986E−01
9.9995E−01
F19 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.3080E−10
+
1.5099E−11
+
1.9101E−10
+
2.9369E−04
+
2.9727E−01
=
F20 | 2.6320E−04
+
2.7470E−11
+
1.5099E−11
+
9.2837E−10
+
1.6692E−11
+
3.8863E−09
+
2.2522E−11
+
8.0661E−11
+
1.5099E−11
+
1.5099E−11
+
1.3914E−07
+
1.6692E−11
+
5.0177E−04
+
9.4581E−05
+
3.6901E−10
+
F21 | 5.9684E−07
+
1.5099E−11
+
1.5099E−11
+
6.1809E−04
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.6692E−11
+
5.9684E−07
+
1.5099E−11
+
1.0979E−07
+
1.2164E−05
+
1.5786E−05
+
F22 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.6692E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.6692E−11
+
4.7570E−06
+
1.5099E−11
+
2.4876E−11
+
2.0386E−11
+
2.7863E−10
+
F23 | 1.1390E−05
+
1.5099E−11
+
1.5099E−11
+
1.1949E−08
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
8.6452E−07
+
1.5099E−11
+
3.2631E−07
+
6.6248E−05
+
4.4414E−06
+
F24 | 5.5386E−07
+
1.5099E−11
+
1.5099E−11
+
8.8845E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
4.9164E−08
+
1.5099E−11
+
7.7326E−10
+
6.2466E−06
+
1.3392E−06
+
F25 | 3.2639E−08
+
9.9667E−01
2.0165E−03
+
4.3764E−01
=
1.5099E−11
+
2.1156E−01
=
5.3951E−01
=
1.5099E−11
+
1.5099E−11
+
2.5805E−01
=
6.4352E−10
+
1.5099E−11
+
9.9902E−01
1.0000E+00
9.9997E−01
F26 | 6.6834E−06
+
1.7600E−07
+
6.6443E−11
+
2.4005E−07
+
2.0386E−11
+
4.2424E−09
+
1.5099E−11
+
2.4876E−11
+
1.5099E−11
+
3.9795E−03
+
9.3341E−02
=
1.5099E−11
+
7.2116E−04
+
5.3813E−03
+
6.7869E−02
=
F27 | 2.3195E−05
+
3.5593E−09
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.4876E−11
+
1.3049E−10
+
2.0386E−11
+
1.5099E−11
+
4.4967E−11
+
1.5099E−11
+
1.5099E−11
+
4.0600E−04
+
6.5083E−04
+
5.4933E−01
=
F28 | 1.5099E−11
+
1.1942E−04
+
1.0169E−09
+
2.2522E−11
+
1.5099E−11
+
4.0507E−10
+
8.4736E−10
+
1.5099E−11
+
1.5099E−11
+
7.4658E−05
+
1.7371E−10
+
1.5099E−11
+
2.7071E−01
=
1.2967E−01
=
1.3345E−01
=
F29 | 5.5386E−07
+
4.9593E−11
+
1.5099E−11
+
1.8229E−08
+
1.5099E−11
+
1.5099E−11
+
2.0386E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.3049E−10
+
1.5099E−11
+
1.4608E−09
+
2.7310E−06
+
7.9844E−04
+
F30 | 4.0507E−10
+
1.5099E−11
+
1.5099E−11
+
3.6945E−11
+
1.5099E−11
+
1.5099E−11
+
4.6301E−09
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
6.6834E−06
+
1.0783E−03
+
2.4713E−05
+
+/=/− | 27/1/1 | 28/0/1 | 29/0/0 | 25/2/2 | 29/0/0 | 27/1/1 | 27/2/0 | 29/0/0 | 29/0/0 | 28/1/0 | 26/2/1 | 29/0/0 | 25/3/1 | 19/6/4 | 15/7/7
Table 9. Wilcoxon rank sum test results of MSAPO and other algorithms under 100-dimensional CEC2017.
F | APO | SMA | AVOA | SO | ARO | NOA | PSA | GWO | WOA | SSA | XPSO | FVICLPSO | SRPSO | LSHADE_cnEpSin | LSHADE_SPACMA
F1 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
5.3813E−03
+
2.1130E−03
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.6692E−11
+
F3 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.6945E−11
+
F4 | 1.5099E−11
+
2.2296E−04
+
1.5099E−11
+
3.3478E−11
+
1.5099E−11
+
1.6692E−11
+
1.0169E−09
+
1.5099E−11
+
1.5099E−11
+
1.0772E−10
+
4.4967E−11
+
1.5099E−11
+
1.2861E−07
+
2.7806E−04
+
1.5170E−03
+
F5 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.8486E−04
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
6.1001E−01
=
1.5099E−11
+
8.6470E−08
+
6.0283E−11
+
1.5159E−02
+
F6 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
F7 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.3478E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.5456E−06
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.5562E−03
+
F8 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.9415E−03
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
8.7663E−01
=
1.5099E−11
+
2.7664E−08
+
2.4876E−11
+
1.5159E−02
+
F9 | 2.7863E−10
+
1.5099E−11
+
1.5099E−11
+
1.5794E−10
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
6.5555E−09
+
1.5099E−11
+
1.0974E−08
+
3.0329E−11
+
3.8863E−09
+
F10 | 1.5099E−11
+
4.5344E−03
+
4.1760E−08
+
1.5099E−11
+
3.2591E−09
+
1.5099E−11
+
6.6248E−05
+
2.9853E−05
+
1.5099E−11
+
1.9153E−05
+
9.1840E−03
+
1.5099E−11
+
2.7664E−08
+
1.5099E−11
+
1.5099E−11
+
F11 | 1.5099E−11
+
2.9837E−09
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.1857E−10
+
1.9101E−10
+
F12 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
9.7839E−11
+
1.5099E−11
+
1.5099E−11
+
8.8845E−11
+
1.5029E−04
+
F13 | 4.9593E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
5.2033E−05
+
6.4352E−10
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.7371E−10
+
1.5099E−11
+
3.1013E−04
+
1.5099E−11
+
3.0605E−10
+
F14 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
6.1001E−01
=
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.0000E+00
9.9999E−01
F15 | 1.5910E−04
+
1.5099E−11
+
1.5099E−11
+
1.6692E−11
+
1.5099E−11
+
5.0523E−09
+
4.4205E−07
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.2193E−09
+
1.5099E−11
+
7.4590E−07
+
1.4358E−10
+
2.8036E−05
+
F16 | 2.0564E−07
+
1.0974E−08
+
3.6945E−11
+
5.8687E−10
+
1.8449E−11
+
1.8449E−11
+
4.6301E−09
+
2.9837E−09
+
1.5099E−11
+
4.0764E−11
+
1.6840E−04
+
1.5099E−11
+
5.0120E−02
=
7.4827E−02
=
2.6320E−04
+
F17 | 5.0523E−09
+
1.6692E−11
+
1.5099E−11
+
4.0764E−11
+
1.5099E−11
+
1.8449E−11
+
4.0764E−11
+
3.0605E−10
+
1.5099E−11
+
8.0661E−11
+
3.5593E−09
+
1.5099E−11
+
1.6278E−07
+
1.3392E−06
+
2.0913E−09
+
F18 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.4563E−04
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.0386E−11
+
1.5099E−11
+
1.5099E−11
+
1.0000E+00
9.9992E−01
F19 | 3.6322E−01
=
3.0329E−11
+
1.5099E−11
+
3.2591E−09
+
1.5099E−11
+
1.4603E−02
+
1.5170E−03
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
4.5344E−03
+
1.5099E−11
+
1.1622E−02
+
1.8219E−02
+
2.1033E−02
+
F20 | 1.1884E−07
+
4.8778E−10
+
1.5099E−11
+
1.5099E−11
+
9.7839E−11
+
1.5099E−11
+
2.7863E−10
+
4.7570E−06
+
1.5099E−11
+
9.7839E−11
+
3.1013E−04
+
1.5099E−11
+
2.7310E−06
+
1.7962E−05
+
1.2861E−07
+
F21 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
4.4455E−10
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.3008E−08
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
8.0661E−11
+
F22 | 1.5099E−11
+
2.9293E−06
+
9.7839E−11
+
1.5099E−11
+
4.4455E−10
+
1.5099E−11
+
2.1765E−05
+
2.2220E−07
+
1.5099E−11
+
3.2639E−08
+
2.1088E−04
+
1.5099E−11
+
2.2220E−07
+
1.5099E−11
+
1.1857E−10
+
F23 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
7.0549E−10
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.7470E−11
+
1.5099E−11
+
1.5099E−11
+
F24 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.7470E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
4.5316E−08
+
3.0329E−11
+
1.5794E−10
+
F25 | 1.5099E−11
+
4.6056E−05
+
3.6945E−11
+
3.6945E−11
+
1.5099E−11
+
2.0386E−11
+
1.7371E−10
+
1.5099E−11
+
1.5099E−11
+
1.8449E−11
+
1.5099E−11
+
1.5099E−11
+
1.4457E−03
+
1.4603E−02
+
7.0617E−01
=
F26 | 1.0262E−03
+
4.5316E−08
+
1.5099E−11
+
9.9415E−03
+
1.5099E−11
+
2.0998E−10
+
6.0283E−11
+
5.3509E−10
+
1.5099E−11
+
4.1460E−06
+
3.6322E−01
=
1.5099E−11
+
6.6273E−01
=
7.2061E−03
+
1.5170E−03
+
F27 | 8.4736E−10
+
2.8000E−07
+
1.5099E−11
+
4.4455E−10
+
1.5099E−11
+
1.5099E−11
+
4.0764E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.4783E−01
=
1.2164E−05
+
2.1033E−02
+
F28 | 1.5099E−11
+
7.5071E−03
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.0386E−11
+
1.5099E−11
+
1.5099E−11
+
1.6692E−11
+
1.5099E−11
+
1.5099E−11
+
2.1088E−04
+
1.3863E−05
+
9.5562E−03
+
F29 | 2.0913E−09
+
3.6945E−11
+
1.5099E−11
+
5.5289E−05
+
1.5099E−11
+
1.5099E−11
+
3.3478E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
2.0386E−11
+
1.5099E−11
+
2.4005E−07
+
9.3042E−07
+
5.4534E−06
+
F30 | 1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
3.6901E−10
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
1.5099E−11
+
6.7972E−08
+
+/=/− | 28/1/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 28/1/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 26/3/0 | 29/0/0 | 26/3/0 | 26/1/2 | 26/1/2
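Each cell in Tables 7–9 is the p-value of a pairwise Wilcoxon rank-sum test over the 30 runs, with "+", "=", and "−" meaning MSAPO is significantly better than, statistically equivalent to, or significantly worse than the rival at the 5% level. The authors' exact test configuration is not reproduced here; a minimal sketch using a two-sided test plus a median comparison for direction (an assumption that may differ in detail from their setup):

```python
from statistics import median
from scipy.stats import ranksums

def mark(msapo_runs, rival_runs, alpha=0.05):
    """Classify one pairwise comparison; lower objective values are better.

    '+' : rival significantly worse than MSAPO
    '-' : rival significantly better than MSAPO
    '=' : no significant difference at level alpha
    """
    _, p = ranksums(msapo_runs, rival_runs)  # two-sided rank-sum test
    if p >= alpha:
        return '='
    return '+' if median(msapo_runs) < median(rival_runs) else '-'
```

The per-table "+/=/−" summary rows are then just the counts of each symbol over the 29 functions.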
Table 10. Parameters of engineering problems.
Problem | Name | D | g | h | Theoretical Optimum
1 | Process synthesis problem | 7 | 9 | 0 | 2.9248305537
2 | Weight minimization of a speed reducer | 7 | 11 | 0 | 2994.4244658
3 | Tension/compression spring design | 3 | 3 | 0 | 0.0126652328
4 | Welded beam design | 4 | 5 | 0 | 1.6702177263
5 | Three-bar truss design problem | 2 | 3 | 0 | 263.89584338
6 | Step-cone pulley problem | 5 | 8 | 3 | 16.069868725
7 | Gas transmission compressor design | 4 | 1 | 0 | 2964895.4173
8 | Himmelblau’s function | 5 | 6 | 0 | −30665.538672
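Every problem in Table 10 carries g inequality and h equality constraints, while the optimizers themselves search an unconstrained space, so a constraint-handling scheme is needed. The paper's exact handling is not restated here; a minimal static-penalty sketch with a hypothetical toy problem, where `rho` and `eps` are illustrative values:

```python
def penalized(f, gs=(), hs=(), rho=1e6, eps=1e-4):
    """Wrap objective f with a static penalty.

    gs: inequality constraints, feasible when g(x) <= 0
    hs: equality constraints, h(x) = 0 relaxed to |h(x)| <= eps
    Violations are squared and scaled by rho, so feasible points
    keep their original objective value.
    """
    def F(x):
        pen = sum(max(0.0, g(x)) ** 2 for g in gs)
        pen += sum(max(0.0, abs(h(x)) - eps) ** 2 for h in hs)
        return f(x) + rho * pen
    return F

# Toy example (not one of the eight benchmarks):
# minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
F = penalized(lambda x: x * x, gs=[lambda x: 1.0 - x])
```

At the feasible point x = 2 the penalty term vanishes and F returns the plain objective; infeasible points are pushed up by rho times the squared violation.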
Table 11. Statistical results of PSP.
Algorithm | Best | Mean | Worst | Std
MSAPO | 2.9248305537 | 2.9567965365 | 3.0817321271 | 0.0576094552
APO | 2.9248399811 | 4.0124755032 | 10.7468528193 | 1.5855135136
SMA | 6.9401199033 | 8.2048307039 | 11.4765727690 | 1.0678409572
AVOA | 2.9248661295 | 3.2633425379 | 4.6900000052 | 0.6078710167
SO | 2.9249114461 | 3.1079971435 | 4.2067341051 | 0.3686948968
ARO | 2.9478917107 | 5.2080351840 | 12.5176062100 | 2.1988190414
NOA | 2.9248305887 | 2.9382490990 | 3.0817321272 | 0.0397377470
PSA | 2.9248428546 | 3.2819797321 | 4.7675006101 | 0.6159630085
GWO | 2.9253957231 | 3.9055577651 | 7.1927710471 | 1.0964234765
WOA | 2.9537565214 | 7.1137945439 | 13.3068528194 | 2.9114996820
SSA | 2.9249923546 | 3.7996127327 | 4.9004992902 | 0.5862288981
XPSO | 2.9249112624 | 3.7502193662 | 4.3239583112 | 0.5614549006
FVICLPSO | 2.9248749385 | 3.3074364193 | 9.7542315171 | 1.2546785460
SRPSO | 2.9248469916 | 3.4994773978 | 4.7884444002 | 0.6988961273
Table 12. Statistical results of WMSR.
Algorithm | Best | Mean | Worst | Std
MSAPO | 2994.4244658 | 2994.4244658 | 2994.4244658 | 8.40212E−13
APO | 2994.4244689 | 2994.4244903 | 2994.4245585 | 1.98659E−05
SMA | 2994.4263711 | 2994.4386356 | 2994.4941633 | 0.017359420
AVOA | 2995.2005851 | 3003.6911932 | 3015.3627013 | 5.491776785
SO | 2994.4244791 | 2994.8558129 | 3004.1586289 | 1.779878646
ARO | 2996.0258442 | 3007.8257333 | 3062.6094724 | 1.45742E+01
NOA | 2994.4244759 | 2994.4245298 | 2994.4246962 | 4.89070E−05
PSA | 2994.4244658 | 2994.4269891 | 2994.4779000 | 0.010037723
GWO | 3003.1078646 | 3010.5175094 | 3020.6678488 | 4.524933383
WOA | 3009.4481980 | 3569.7259609 | 5590.4352755 | 7.76495E+02
SSA | 3005.7441059 | 3036.9393368 | 3077.0139370 | 2.00322E+01
XPSO | 3016.3628953 | 3021.7943767 | 3032.8365703 | 4.399323516
FVICLPSO | 2994.4244658 | 2994.4244658 | 2994.4244658 | 1.33940E−10
SRPSO | 2994.4244893 | 2994.4245727 | 2994.4247738 | 6.86375E−05
Table 13. Statistical results of T/CSD.
Algorithm | Best | Mean | Worst | Std
MSAPO | 0.0126652328 | 0.0126704094 | 0.0126948831 | 7.32811E−06
APO | 0.0126652415 | 0.0126871934 | 0.0128011915 | 3.61245E−05
SMA | 0.0126679717 | 0.0138679784 | 0.0167808421 | 0.001419395
AVOA | 0.0126706339 | 0.0130264391 | 0.0146153522 | 0.000524042
SO | 0.0126656203 | 0.0139378884 | 0.0177731593 | 0.001590096
ARO | 0.0126797608 | 0.0139075607 | 0.0178426895 | 0.001733137
NOA | 0.0126652731 | 0.0126676503 | 0.0126796726 | 3.20593E−06
PSA | 0.0126661331 | 0.0135899226 | 0.0177565113 | 0.001410830
GWO | 0.0126870740 | 0.0127901204 | 0.0134796661 | 0.000173615
WOA | 0.0126654174 | 0.0136971554 | 0.0156213048 | 0.000951365
SSA | 0.0126877585 | 0.0143562308 | 0.0253877782 | 0.002781414
XPSO | 0.0127082064 | 0.0133515199 | 0.0152318504 | 0.000653020
FVICLPSO | 0.0126705572 | 0.0130786819 | 0.0141212303 | 0.000435546
SRPSO | 0.0127084467 | 0.0131377135 | 0.0142782804 | 0.000472955
Table 14. Statistical results of WBD.
Algorithm | Best | Mean | Worst | Std
MSAPO | 1.6702177263 | 1.6702177263 | 1.6702177263 | 1.29848E−13
APO | 1.6702183328 | 1.6778701701 | 1.7842171850 | 0.023168762
SMA | 1.6702487613 | 1.6751435731 | 1.6933727705 | 0.005433919
AVOA | 1.6713845802 | 1.7633507602 | 1.8167103299 | 0.055203758
SO | 1.6702280567 | 1.7231724113 | 2.1226856454 | 0.118830458
ARO | 1.6975457985 | 2.5877041799 | 3.8347423812 | 0.640174267
NOA | 1.6702235402 | 1.6702423297 | 1.6702993293 | 1.73743E−05
PSA | 1.6703761548 | 1.9560517976 | 3.3272873272 | 0.411996244
GWO | 1.6724873061 | 1.6754632067 | 1.6795218097 | 0.002139746
WOA | 1.7399909610 | 2.6743731903 | 4.8117067328 | 0.798276810
SSA | 1.6931472325 | 1.8481224290 | 2.1500098911 | 0.117680402
XPSO | 1.6702177263 | 1.6702277075 | 1.6704642030 | 4.48547E−05
FVICLPSO | 1.7020056012 | 2.1013504928 | 2.8391906018 | 0.306116774
SRPSO | 1.6702196010 | 1.6705078375 | 1.6743392358 | 0.000944901
Table 15. Statistical results of TBTD.
Algorithm | Best | Mean | Worst | Std
MSAPO | 263.89584338 | 263.89584337646 | 263.89584337646 | 1.73446E−13
APO | 263.89584338 | 263.89584337658 | 263.89584337805 | 3.22794E−10
SMA | 265.95665827 | 270.29189062052 | 275.61967619448 | 2.355049747
AVOA | 263.89600718 | 263.92932771908 | 264.13246725461 | 0.049148698
SO | 263.89584735 | 263.89696737254 | 263.90656067239 | 0.002099651
ARO | 263.89584349 | 264.10592832336 | 265.44290385159 | 0.392013217
NOA | 263.89584338 | 263.89584337649 | 263.89584337672 | 5.44970E−11
PSA | 263.89593168 | 263.91828913156 | 264.02871756670 | 0.036145621
GWO | 263.89687204 | 263.90453317790 | 263.93006692201 | 0.007237521
WOA | 263.90872632 | 265.47952244582 | 276.93666249896 | 2.597531606
SSA | 263.89584427 | 263.89866520994 | 263.91408335124 | 0.004029022
XPSO | 263.89584473 | 263.89604262610 | 263.89732158037 | 0.000360897
FVICLPSO | 263.89584516 | 265.19163306555 | 276.52973206872 | 2.696934891
SRPSO | 263.89584824 | 263.89631291786 | 263.89841919628 | 0.000590811
Table 16. Statistical results of SCP.

| Algorithm | Best | Mean | Worst | Std |
|---|---|---|---|---|
| MSAPO | 16.090274300 | 16.426945548 | 17.093777443 | 0.375730967 |
| APO | 16.839771316 | 17.061646619 | 17.168089491 | 0.079676469 |
| SMA | 16.331295978 | 17.617562900 | 18.372309194 | 0.597217204 |
| AVOA | 16.728701723 | 17.540694286 | 18.238688014 | 0.417091341 |
| SO | 16.095500756 | 17.033309112 | 18.255985273 | 0.610190959 |
| ARO | 2.71328E+03 | 9.67806E+06 | 8.48363E+07 | 1.83876E+07 |
| NOA | 16.574080056 | 16.949591089 | 18.272980440 | 0.340123629 |
| PSA | 16.090380768 | 16.434240825 | 17.039689943 | 0.278482514 |
| GWO | 1.42761E+06 | 2.71227E+07 | 6.07531E+07 | 1.75319E+07 |
| WOA | 20.455741626 | 3.82896E+10 | 3.51545E+11 | 9.98215E+10 |
| SSA | 16.245060480 | 17.338372133 | 18.243798196 | 0.448903199 |
| XPSO | 16.754399927 | 7.57199E+06 | 5.39672E+07 | 1.48767E+07 |
| FVICLPSO | 16.232505496 | 1.33555E+02 | 1.63368E+03 | 3.36975E+02 |
| SRPSO | 16.280795257 | 16.757981083 | 17.167246649 | 0.305472989 |
Table 17. Statistical results of GTCD.

| Algorithm | Best | Mean | Worst | Std |
|---|---|---|---|---|
| MSAPO | 2964895.4159 | 2964895.4159 | 2964895.4159 | 1.27673E−09 |
| APO | 2964895.4159 | 2964895.4180 | 2964895.4685 | 9.59826E−03 |
| SMA | 2964895.7984 | 2970576.9272 | 2986613.5619 | 6.07937E+03 |
| AVOA | 2965125.9575 | 2988025.3479 | 3050307.0127 | 2.42679E+04 |
| SO | 2964895.9219 | 2965144.1092 | 2967033.7792 | 4.48052E+02 |
| ARO | 2965405.3971 | 4846531.7268 | 9269316.8937 | 1.86248E+06 |
| NOA | 2964895.4159 | 2964895.4161 | 2964895.4190 | 5.88091E−04 |
| PSA | 2964909.4536 | 2967324.6361 | 2973467.1197 | 2.52014E+03 |
| GWO | 2965013.9619 | 2965785.7088 | 2967150.4799 | 6.17094E+02 |
| WOA | 2965320.6740 | 3026321.8463 | 3198106.6990 | 5.59910E+04 |
| SSA | 2969856.0018 | 3153184.2561 | 3491860.7554 | 1.65074E+05 |
| XPSO | 2964909.5186 | 2965984.0353 | 2969341.8160 | 1.23956E+03 |
| FVICLPSO | 2965548.9840 | 3284138.6178 | 6057429.3534 | 6.61504E+05 |
| SRPSO | 2964913.0328 | 2967587.7629 | 2982479.5468 | 4.01481E+03 |
Table 18. Statistical results of Himmelblau’s function.

| Algorithm | Best | Mean | Worst | Std |
|---|---|---|---|---|
| MSAPO | −30665.538672 | −30665.538672 | −30665.538672 | 1.02453E−11 |
| APO | −30665.538333 | −30665.510505 | −30665.087633 | 0.082171306 |
| SMA | −30665.538492 | −30665.516100 | −30665.389277 | 0.032493939 |
| AVOA | −30665.538466 | −30586.627024 | −30211.002626 | 1.50529E+02 |
| SO | −30665.538671 | −30665.070138 | −30652.087564 | 2.454381948 |
| ARO | −30574.908116 | −29982.671102 | −29177.285241 | 3.42334E+02 |
| NOA | −30665.538294 | −30665.533662 | −30665.524086 | 0.003413955 |
| PSA | −30665.538670 | −30662.245165 | −30581.025254 | 1.54469E+01 |
| GWO | −30663.897917 | −30657.336829 | −30643.939903 | 4.045396571 |
| WOA | −30585.962162 | −29725.582260 | −28958.010459 | 4.10427E+02 |
| SSA | −30637.811103 | −30495.116447 | −30150.498669 | 1.27269E+02 |
| XPSO | −30644.802248 | −30610.381679 | −30476.647281 | 3.21344E+01 |
| FVICLPSO | −30659.626134 | −30501.241730 | −29955.150983 | 1.74227E+02 |
| SRPSO | −30665.537638 | −30664.535049 | −30636.737930 | 5.250368207 |
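For reference, the Best, Mean, Worst, and Std columns in Tables 14–18 are the standard summary statistics taken over the objective values of a set of independent runs of each algorithm (for these minimization problems, Best is the smallest value and Worst the largest). A minimal sketch of that computation follows; the run values used in the example are hypothetical, not taken from the tables.

```python
import statistics

def summarize_runs(results):
    """Summarize independent-run objective values (minimization)
    into the Best/Mean/Worst/Std columns used in the result tables."""
    return {
        "Best": min(results),                # smallest objective value found
        "Mean": statistics.mean(results),    # average over all runs
        "Worst": max(results),               # largest objective value found
        "Std": statistics.stdev(results),    # sample standard deviation
    }

# Hypothetical objective values from five independent runs of one algorithm
runs = [1.670218, 1.670230, 1.671900, 1.670219, 1.684217]
stats = summarize_runs(runs)
print(stats["Best"], stats["Worst"])
```

A small Std together with a Best close to the Mean, as MSAPO shows in most tables, indicates that the algorithm reaches nearly the same solution in every run, i.e., it is both accurate and stable.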
Bo, H.; Wu, J.; Hu, G. MSAPO: A Multi-Strategy Fusion Artificial Protozoa Optimizer for Solving Real-World Problems. Mathematics 2025, 13, 2888. https://doi.org/10.3390/math13172888

