Article

A Multi-Disturbance Marine Predator Algorithm Based on Oppositional Learning and Compound Mutation

1 School of Information Engineering, Tianjin University of Commerce, Tianjin 300134, China
2 School of Science, Tianjin University of Commerce, Tianjin 300134, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(24), 4087; https://doi.org/10.3390/electronics11244087
Submission received: 4 November 2022 / Revised: 26 November 2022 / Accepted: 5 December 2022 / Published: 8 December 2022

Abstract
The Marine Predator Algorithm (MPA) is a meta-heuristic algorithm based on the foraging behavior of marine animals. It has the advantages of few parameters, simple setup, easy implementation, accurate calculation, and easy application. However, compared with other meta-heuristic algorithms, it suffers from a lack of transition between exploitation and exploration and unsatisfactory global optimization performance. To address these shortcomings, this paper proposes a multi-disturbance Marine Predator Algorithm based on oppositional learning and compound mutation (mMPA-OC). First, the optimal value selection process is improved with an Opposition-Based Learning (OBL) mechanism, which enhances MPA's exploration ability. Second, a compound mutation strategy improves the predator position update mechanism, strengthening MPA's global search ability. Finally, the single disturbance factor is extended to multiple disturbance factors, so that MPA maintains population diversity. To verify the performance of mMPA-OC, experiments compare it with seven meta-heuristic algorithms, including MPA, on the CEC-2017 benchmark functions in different dimensions, the more complex CEC-2019 benchmark functions, and engineering optimization problems. The experiments show that mMPA-OC is more efficient than the other meta-heuristic algorithms.

1. Introduction

In recent years, as real-world optimization problems have become more difficult and complex, they have attracted considerable research attention. Classical optimization methods include the gradient descent method, Newton's method, the conjugate gradient method, and the Lagrange multiplier method. Because of their flexibility, robustness, and self-organization, meta-heuristic algorithms are more attractive than classical methods for solving complex optimization problems, providing a powerful tool for doing so.
Meta-heuristics solve optimization problems by simulating natural processes, biological behavior, or human thinking. In general, meta-heuristic algorithms can be divided into four categories: algorithms based on biological evolution, algorithms based on swarm intelligence, algorithms based on physical phenomena, and algorithms based on human social behavior [1,2]. Evolution-based algorithms mimic the evolutionary processes of organisms in nature; examples include the Genetic Algorithm (GA) [3], Evolutionary Programming (EP) [4], Differential Evolution (DE) [5], and the Biogeography-Based Optimizer (BBO) [6]. Swarm intelligence-based algorithms are inspired by the foraging behavior of social creatures, such as Particle Swarm Optimization (PSO) [7], the Artificial Bee Colony (ABC) [8], Cuckoo Search (CS) [9], the Bat Algorithm (BA) [10], the Grey Wolf Optimizer (GWO) [11], and Ant Colony Optimization (ACO) [12]. Algorithms based on physical phenomena are inspired by objective physical laws, such as Black Hole (BH) [13], the Gravitational Search Algorithm (GSA) [14], the Small World Optimization Algorithm (SWOA) [15], Ray Optimization (RO) [16], Central Force Optimization (CFO) [17], Charged System Search (CSS) [18], and Simulated Annealing (SA) [19]. Algorithms based on human social behavior are inspired by the structure of human social networks, such as Group Counseling Optimization (GCO) [20] and Teaching-Learning-Based Optimization (TLBO) [21].
In recent years, many novel meta-heuristic algorithms have been proposed by scholars, such as the Whale Optimization Algorithm (WOA) [22], Moth Flame Optimization (MFO) [23], Butterfly Optimization Algorithm (BOA) [24], Salp Swarm Algorithm (SSA) [25], Galactic Swarm Optimization (GSO) [26], Sine Cosine Algorithm (SCA) [27], Harmony Search (HS) [28], Group Search Optimizer (GSO) [29], Political Optimizer (PO) [30], Majority–Minority Cellular Automata Algorithm (MMCAA) [31]. They perform well on a variety of complex optimization problems, such as multi-objective scheduling [32], image segmentation [33], fault diagnosis [34], and feature selection [35].
The Marine Predators Algorithm (MPA) [36] is a novel meta-heuristic optimization algorithm proposed by Afshin Faramarzi et al. in 2020, inspired by the predation behavior of marine animals: marine predators choose the better foraging strategy between Lévy flight and Brownian motion to catch prey. The algorithm is easy to implement and has high convergence accuracy, so it attracted wide attention from scholars as soon as it was proposed. As an excellent meta-heuristic algorithm, MPA has been applied to many optimization fields and has achieved good results. For example, Ho et al. [37] combined MPA with a Feedforward Neural Network (FNN) to train a neural network for structural damage detection. Dinh [38] used MPA for parameter tuning in image fusion, so that the output image has good quality. Chen et al. [39] used MPA to accurately determine the parameters of a PV model. Ridha [40] applied MPA to single-diode and dual-diode photovoltaic models and accurately determined their parameters. Xia [41] applied MPA to the optimization of wind turbine blade shape, saving raw materials. Houssein [42] used MPA to automatically search for and optimize CNN parameters, minimizing user variability in CNN training. Oszust [43] proposed an MPA-optimized extreme learning machine (MPA-ELM) to predict the thermal displacement of a motorized spindle model with higher prediction accuracy.
Despite this, the algorithm still has problems, such as a lack of transition between exploitation and exploration and a tendency to become stuck in local optima [44,45,46,47]. Many scholars have studied these problems and proposed improvements. For example, Elaziz et al. [44] showed that Differential Evolution (DE) can enhance the exploration phase of MPA and help the algorithm escape local optima. Zhong et al. [45] combined a teaching-based optimization algorithm with MPA and added effective mutation and crossover strategies to avoid falling into local optima. Wang et al. [46] added the position update strategy of the PSO algorithm to MPA to enhance its global search ability. Oszust [43] used a local escape operator (LEO) to improve the balance between exploitation and exploration in MPA and to overcome premature convergence. These improvements reduce the possibility of MPA falling into local optima and improve its performance. They have promoted the rapid development of MPA, but the algorithm still has shortcomings when dealing with complex optimization problems, such as weak global search ability, slow convergence, and an imbalance between exploration and exploitation. Therefore, improving MPA and applying it to different complex optimization problems remains a research focus.
The innovative contributions of this paper are as follows:
  • A variant of MPA, the multi-disturbance Marine Predator Algorithm based on opposition-based learning and compound mutation (mMPA-OC), is proposed.
  • An Opposition-Based Learning mechanism is used to improve the optimal value selection process and MPA's exploration ability. A compound mutation strategy improves the predator position update mechanism, strengthening MPA's global search ability. The single disturbance factor is extended to multiple disturbance factors, so that MPA maintains population diversity during the iterative process.
  • To verify the effectiveness of mMPA-OC, different CEC benchmark functions and engineering problems are used to evaluate its performance.
The paper is arranged as follows: Section 2 introduces the basic MPA. Section 3 presents the proposed mMPA-OC. Section 4 presents and analyzes the numerical simulation results. Section 5 presents and analyzes the results of mMPA-OC on engineering optimization problems. Section 6 presents the conclusion and outlook.

2. Marine Predators Algorithm

The Marine Predator Algorithm is a meta-heuristic optimization algorithm proposed by Faramarzi et al. [36] in 2020. It simulates how marine animals such as swordfish, sharks, and sunfish search for food, abstracting this as marine predators choosing different foraging strategies under different ratios of moving speed [48]. MPA incorporates several effective strategies from other algorithms, such as iterative phase segmentation, adaptive operators, elite selection, and Lévy flight, which give it strong optimization ability. MPA can be divided into five phases: the initialization phase, exploration phase, transition phase, exploitation phase, and disturbance phase. Among them, the exploration, transition, and exploitation phases are the main parts of MPA's optimization process.

2.1. Initialization Phase

As a swarm-intelligence-based optimization algorithm, MPA, like similar methods, distributes its initial solutions randomly in the search space. MPA initializes the population by Equation (1):
$$X_i = LB + rand_1 \times (UB - LB), \quad i = 1, 2, \dots, n \tag{1}$$
where $UB$ and $LB$ are the upper and lower boundaries of the search space, respectively, $rand_1$ is a random number between 0 and 1, and $n$ is the population size. In MPA, predators are divided into elite predators and common predators, and the two kinds of predators perform predation activities at the same time. Therefore, the elite matrix $Elite$ and the prey matrix $Prey$ are defined to store the positions of elite predators and other predators, respectively, as shown in Equations (2) and (3).
$$Elite = \begin{bmatrix} x^{I}_{1,1} & x^{I}_{1,2} & \cdots & x^{I}_{1,d} \\ x^{I}_{2,1} & x^{I}_{2,2} & \cdots & x^{I}_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x^{I}_{n,1} & x^{I}_{n,2} & \cdots & x^{I}_{n,d} \end{bmatrix}_{n \times d} \tag{2}$$
$$Prey = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix}_{n \times d} \tag{3}$$
In Equation (2), the elite matrix is composed of the elite predator vector $X^I$ repeated $n$ times, where $X^I$ is the position of the best predator, $n$ is the population size, and $d$ is the dimension of the search space. In Equation (3), the prey matrix has the same dimensions as the elite matrix, and $x_{i,j}$ denotes the $j$th dimension of the $i$th predator.
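As a rough illustration, the initialization of Equation (1) and the elite-matrix construction of Equation (2) can be sketched in NumPy; the function names and the assumption of a minimization problem are ours, not the authors':

```python
import numpy as np

def initialize_prey(n, d, lb, ub, seed=None):
    """Equation (1): scatter n predators uniformly in the box [lb, ub]^d."""
    rng = np.random.default_rng(seed)
    return lb + rng.random((n, d)) * (ub - lb)

def build_elite(prey, fitness):
    """Equation (2): repeat the best predator's position n times to form Elite."""
    best = prey[np.argmin(fitness)]          # minimization convention
    return np.tile(best, (prey.shape[0], 1))
```

Here `build_elite` picks the row with the smallest fitness, matching the convention that the objective is minimized.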

2.2. Exploration Phase

At the beginning of the iterations ($iteration \le \frac{Max\_iterations}{3}$), the prey moves faster than the predator ($v \ge 10$), and the algorithm enters the exploration phase. The prey searches the entire solution space with Brownian motion while the predators remain motionless. The exploration phase is defined as follows:
$$step_i = R_B \otimes (Elite_i - R_B \otimes Prey_i), \qquad Prey_i = Prey_i + P \times rand_2 \otimes step_i \tag{4}$$
where $R_B$ is the Brownian motion vector, $P = 0.5$, $rand_2$ is a random number between zero and one, and $\otimes$ denotes element-wise multiplication. $Elite_i$ represents the predator and $Prey_i$ the prey ($i = 1, 2, \dots, n$), where $n$ is the population size.
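A minimal sketch of the Brownian-motion update in Equation (4), under the common assumption that the entries of the Brownian vector are drawn from a standard normal distribution (function names are ours):

```python
import numpy as np

def exploration_step(prey, elite, P=0.5, rng=None):
    """Equation (4): Brownian prey update used while iteration <= Max_iterations/3.
    R_B is sampled from a standard normal distribution."""
    rng = np.random.default_rng(rng)
    RB = rng.standard_normal(prey.shape)
    step = RB * (elite - RB * prey)               # step_i = R_B (x) (Elite_i - R_B (x) Prey_i)
    return prey + P * rng.random(prey.shape) * step
```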

2.3. Transition Phase

When $\frac{Max\_iterations}{3} < iteration \le \frac{2 \times Max\_iterations}{3}$, MPA is in the transition phase from exploration to exploitation. In this phase, the predator and prey move at the same speed ($v = 1$), and the population is divided into two parts: the predators perform exploration and the prey perform exploitation. The mathematical model for this phase is as follows:
While $\frac{Max\_iterations}{3} < iteration \le \frac{2 \times Max\_iterations}{3}$:
$$step_i = R_L \otimes (Elite_i - R_L \otimes Prey_i), \qquad Prey_i = Prey_i + P \times rand_3 \otimes step_i, \qquad i \le n/2 \tag{5}$$
$$step_i = R_B \otimes (R_B \otimes Elite_i - Prey_i), \qquad Prey_i = Elite_i + P \times CF \otimes step_i, \qquad i > n/2 \tag{6}$$
In Equation (5), $R_L$ denotes the Lévy flight vector and $rand_3$ is a random vector between zero and one. In Equation (6), $CF$ is a decay factor that causes the predator's step size to decrease; its expression is $CF = \left(1 - \frac{iteration}{Max\_iterations}\right)^{2\,\frac{iteration}{Max\_iterations}}$, where $Max\_iterations$ is the maximum number of iterations.
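The decay factor $CF$ can be computed directly from its definition; a small sketch (the function name is ours):

```python
def decay_factor(iteration, max_iterations):
    """CF = (1 - iter/max_iter)^(2*iter/max_iter): shrinks predator steps over the run."""
    r = iteration / max_iterations
    return (1.0 - r) ** (2.0 * r)
```

$CF$ starts at 1 at the first iteration, passes through 0.5 at the halfway point, and decays to 0 at the final iteration, which is what progressively shortens the predator's exploitation steps.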

2.4. Exploitation Phase

When $\frac{2 \times Max\_iterations}{3} < iteration \le Max\_iterations$, MPA is in the exploitation phase: the predator moves faster than the prey ($v = 0.1$), and the predator's best predation mode is Lévy flight. Predators therefore mainly perform local exploitation in this phase. The mathematical model is defined as follows:
While $\frac{2 \times Max\_iterations}{3} < iteration \le Max\_iterations$:
$$step_i = R_L \otimes (R_L \otimes Elite_i - Prey_i), \qquad Prey_i = Elite_i + P \times CF \otimes step_i, \qquad i = 1, 2, \dots, n \tag{7}$$
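A sketch of the exploitation update in Equation (7). The Lévy sampler uses Mantegna's algorithm with $\beta = 1.5$, a common choice for generating $R_L$; the original MPA implementation may differ in details, and the function names are ours:

```python
import math
import numpy as np

def levy_steps(shape, beta=1.5, rng=None):
    """Mantegna's algorithm: heavy-tailed step lengths approximating a Levy-stable law."""
    rng = np.random.default_rng(rng)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

def exploitation_step(prey, elite, CF, P=0.5, rng=None):
    """Equation (7): Levy-driven local search around the elite in the last third of iterations."""
    RL = levy_steps(prey.shape, rng=rng)
    step = RL * (RL * elite - prey)     # step_i = R_L (x) (R_L (x) Elite_i - Prey_i)
    return elite + P * CF * step
```

Note that as $CF \to 0$ near the end of the run, the update collapses onto the elite position, which is exactly the local-exploitation behavior described above.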

2.5. Disturbance Mechanism

An artificial disturbance mechanism is added to MPA to account for the influence of eddy formation and Fish Aggregation Devices (FADs) on predators in the ocean: predators spend about 80% of their time foraging near their current position and take long jumps the remaining 20% of the time, owing to vortices and FADs. The mathematical expression is as follows:
$$Prey_i = \begin{cases} Prey_i + CF \left[ LB + r_1 \otimes (UB - LB) \right] \otimes U & \text{if } r_1 \le FADs \\ Prey_i + \left[ FADs (1 - r_1) + r_1 \right] (Prey_{s_1} - Prey_{s_2}) & \text{if } r_1 > FADs \end{cases} \tag{8}$$
where $FADs = 0.2$, $U$ is a binary vector of zeros and ones, $r_1$ is a random number between 0 and 1, and $s_1$ and $s_2$ denote two randomly chosen predators in the matrix $Prey$. Marine predators have good memories of where they foraged: after the FADs effect is applied, the fitness of $Prey_i$ is evaluated, and if it is smaller than its previous value, $Elite_i$ is updated, so the quality of the solution keeps improving. The pseudocode of the Marine Predator Algorithm is shown in Algorithm 1.
Algorithm 1: Marine Predator Algorithm.
Initialize populations (Prey) based on Equation (1)
While termination criteria are not met
Construct the matrix E l i t e and the matrix P r e y based on Equations (2) and (3), calculate the fitness, and accomplish memory saving
  If iteration ≤ Max_iterations/3
    Update matrix P r e y based on Equation (4)
  Else if Max_iterations/3 < iteration ≤ 2 × Max_iterations/3
    For the first half of the population (i ≤ n/2)
    Update matrix P r e y based on Equation (5)
    For the other half of the population (n/2 < i ≤ n)
    Update matrix P r e y based on Equation (6)
  Else if 2 × Max_iterations/3 < iteration ≤ Max_iterations
    Update matrix P r e y based on Equation (7)
  End if
  Accomplish memory saving and E l i t e update
  Apply the F A D s effect and update based on Equation (8)
End while
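The FADs disturbance of Equation (8) can be sketched as follows; the vectorization choices (one $r_1$ draw per call, a Bernoulli mask for $U$) are our reading of the equation, not the authors' code:

```python
import numpy as np

def fads_disturbance(prey, lb, ub, CF, FADs=0.2, rng=None):
    """Equation (8): with probability FADs, take a long jump toward a random box point
    (masked by the binary vector U); otherwise move along the line between two
    randomly chosen predators."""
    rng = np.random.default_rng(rng)
    n, d = prey.shape
    r1 = rng.random()
    if r1 <= FADs:
        U = (rng.random((n, d)) < FADs).astype(float)            # binary vector U
        return prey + CF * (lb + rng.random((n, d)) * (ub - lb)) * U
    s1, s2 = rng.integers(0, n, size=2)
    return prey + (FADs * (1 - r1) + r1) * (prey[s1] - prey[s2])
```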

3. Proposed Algorithm

Given the lack of transition between exploitation and exploration in MPA, this paper adds an OBL mechanism to the optimal value selection process to generate reverse solutions, which improves the exploration ability of MPA. Aiming at the problem that MPA is prone to becoming stuck in local optima, a compound mutation strategy is used to improve the predator position update mechanism and strengthen MPA's global search ability. Finally, because a single artificial disturbance is clearly inconsistent with the actual predation process of predators and is less likely to help the algorithm jump out of local optima, the single artificial disturbance factor is extended to multiple artificial disturbance factors, so that MPA maintains population diversity during the iterative process.

3.1. Opposition-Based Learning

The OBL mechanism was proposed by Tizhoosh [49] in 2005. Its principle is as follows: first, the reverse solution of the current solution is calculated, and then the better of the current solution and its reverse solution is selected for the update. Previous studies have found that the reverse solution has an approximately 50% chance of being better than the current solution [50], and it can effectively increase the diversity of the population. Therefore, the OBL mechanism has been applied to improve a variety of algorithms with good results. Yu [50] used the OBL mechanism to reduce the probability of the Grey Wolf Optimizer falling into local optima and to increase population diversity. Gupta [51] used OBL to optimize HHO, which improved its search efficiency and effectively alleviated suboptimal-solution stagnation and premature convergence.
The opposition-based learning mechanism is defined as follows: let $P = (y_1, y_2, \dots, y_d)$ be a point in $d$-dimensional space, where $y_1, y_2, \dots, y_d$ are real numbers and $y_i \in [a_i, b_i]$. The opposite point $\bar{P} = (\bar{y}_1, \bar{y}_2, \dots, \bar{y}_d)$ is defined as:
$$\bar{y}_i = a_i + b_i - y_i \tag{9}$$
In this paper, the OBL mechanism is used to improve MPA, increase the population diversity of the algorithm, and enhance its exploration ability, as shown in Equation (10):
$$Prey1_i = UB + LB - Prey_i \tag{10}$$
where $Prey1_i$ is the reverse position of $Prey_i$. The fitness values of $Prey1_i$ and $Prey_i$ are compared; if the fitness of $Prey1_i$ is smaller, it replaces $Prey_i$, so that solution quality continually improves. In this way, the population diversity of MPA is increased, the probability of finding a better location rises, and the exploration capability of MPA is enhanced.
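A minimal sketch of the OBL update of Equation (10) with greedy selection (function names are ours; fitness is assumed to be minimized):

```python
import numpy as np

def obl_update(prey, fitness_fn, lb, ub):
    """Equation (10): mirror each predator to Prey1_i = UB + LB - Prey_i and keep
    whichever of the pair has the lower (better) fitness."""
    opposite = ub + lb - prey
    f_prey = np.array([fitness_fn(x) for x in prey])
    f_opp = np.array([fitness_fn(x) for x in opposite])
    keep_opposite = f_opp < f_prey
    out = prey.copy()
    out[keep_opposite] = opposite[keep_opposite]
    return out
```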

3.2. Compound Mutation Strategy

The compound mutation strategy can enhance the exploration and exploitation abilities of swarm intelligence algorithms. For this reason, many researchers have proposed DE variants that apply multiple mutation schemes to enhance performance. Gao [52] proposed a PSO with a composite mutation strategy, giving PSO stronger exploration ability and a faster convergence rate. Cui [53] designed a new adaptive multi-subgroup DE algorithm to improve optimization performance. Zhang [54] used a compound mutation strategy to balance the exploitation and exploration tendencies of SSA and increase its global convergence ability.
The composite mutation strategy has been used to improve a variety of algorithms with remarkable results. Therefore, this paper adopts the local mutation scheme of Composite DE (CoDE) [55], which performs well in the CEC-2017 competition, to improve the global exploration ability of MPA. The generation equations of the three mutation positions $Prey2_{i,j}$, $Prey3_{i,j}$, $Prey4_{i,j}$ under the local mutation scheme are shown in Equations (11)–(13):
$$Prey2_{i,j} = \begin{cases} Prey_{r_1,j} + F_1 \times (Prey_{r_2,j} - Prey_{r_3,j}) & \text{if } rand() < Cr_1 \\ Prey_{i,j} & \text{otherwise} \end{cases} \tag{11}$$
where $r_1$, $r_2$, $r_3$ are distinct random integers between 1 and $n$, $F_1$ is the scale factor, set to 1.0, and $Cr_1$ is the mutation rate, set to 0.1.
$$Prey3_{i,j} = \begin{cases} Prey_{r_4,j} + F_2 \times (Prey_{r_5,j} - Prey_{r_6,j}) + F_2 \times (Prey_{r_7,j} - Prey_{r_8,j}) & \text{if } rand() < Cr_2 \\ Prey_{i,j} & \text{otherwise} \end{cases} \tag{12}$$
where $r_4, \dots, r_8$ are distinct random integers between 1 and $n$, $F_2$ is the scale factor, set to 0.8, and $Cr_2$ is the mutation rate, set to 0.6.
$$Prey4_{i,j} = \begin{cases} Prey_{i,j} + rand() \times (Prey_{r_9,j} - Prey_{i,j}) + F_3 \times (Prey_{r_{10},j} - Prey_{r_{11},j}) & \text{if } rand() < Cr_3 \\ Prey_{i,j} & \text{otherwise} \end{cases} \tag{13}$$
where $r_9$, $r_{10}$, $r_{11}$ are distinct random integers between 1 and $n$, $F_3$ is the scale factor, set to 0.8, and $Cr_3$ is the mutation rate, set to 0.9.
For each predator $i$, the three mutation positions $Prey2_i$, $Prey3_i$, $Prey4_i$ are first corrected to the search boundaries, and then a greedy selection mechanism chooses, among $Prey2_i$, $Prey3_i$, $Prey4_i$, and $Prey_i$, the position with the lowest fitness value as the predator's new position.
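The three mutation schemes and the greedy selection step can be sketched as follows; this is a CoDE-style illustration using the $(F, Cr)$ pairs stated above, not the authors' exact implementation, and the function name is ours:

```python
import numpy as np

def compound_mutation(prey, fitness_fn, lb, ub, rng=None):
    """Equations (11)-(13) with (F, Cr) = (1.0, 0.1), (0.8, 0.6), (0.8, 0.9),
    followed by boundary correction and greedy selection among the three
    mutants and the original position (fitness is minimized)."""
    rng = np.random.default_rng(rng)
    n, d = prey.shape
    out = prey.copy()
    for i in range(n):
        r = rng.choice(n, size=5, replace=False)
        mutants = [
            # Eq (11): rand/1 with F1 = 1.0, Cr1 = 0.1
            (prey[r[0]] + 1.0 * (prey[r[1]] - prey[r[2]]), 0.1),
            # Eq (12): rand/2 with F2 = 0.8, Cr2 = 0.6
            (prey[r[0]] + 0.8 * (prey[r[1]] - prey[r[2]])
                        + 0.8 * (prey[r[3]] - prey[r[4]]), 0.6),
            # Eq (13): current-to-rand/1 with F3 = 0.8, Cr3 = 0.9
            (prey[i] + rng.random() * (prey[r[0]] - prey[i])
                     + 0.8 * (prey[r[1]] - prey[r[2]]), 0.9),
        ]
        candidates = [prey[i]]
        for mutant, cr in mutants:
            trial = np.where(rng.random(d) < cr, mutant, prey[i])
            candidates.append(np.clip(trial, lb, ub))   # boundary correction
        fits = [fitness_fn(c) for c in candidates]
        out[i] = candidates[int(np.argmin(fits))]
    return out
```

Because the original position is always among the candidates, the greedy step can never make an individual worse.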

3.3. Multiple Disturbances Strategy

The disturbance mechanism of the Marine Predator Algorithm, which helps MPA overcome premature convergence during optimization, simulates the influence of eddy formation and Fish Aggregation Devices (FADs) during predation. However, in the actual predation process, marine predators are affected by such devices more than once. Therefore, the disturbance mechanism is run several times in order to simulate the actual predation process of marine predators more accurately. It operates as follows:
For disturbances = 1:2:
$$Prey_i = \begin{cases} Prey_i + CF \left[ LB + r_1 \otimes (UB - LB) \right] \otimes U & \text{if } r_1 \le FADs \\ Prey_i + \left[ FADs (1 - r_1) + r_1 \right] (Prey_{s_1} - Prey_{s_2}) & \text{if } r_1 > FADs \end{cases}$$
The multiple disturbances strategy not only maintains the population's diversity, but also greatly increases the probability that the algorithm jumps out of a local optimum.

3.4. The Proposed mMPA-OC Algorithm

Firstly, an OBL mechanism is added to the optimal value selection process to improve the exploration ability of MPA. Secondly, the composite mutation strategy is used to improve the predator position update strategy and the global search ability of MPA. Finally, the single disturbance factor is extended to multiple disturbance factors, so that MPA maintains population diversity in the iterative process. The pseudocode of mMPA-OC is shown in Algorithm 2.
Algorithm 2: mMPA-OC Algorithm.
Initialize populations (Prey) based on Equation (1)
While termination criteria are not met
Construct the matrix E l i t e and the matrix P r e y based on Equations (2) and (3), calculate the fitness, and accomplish memory saving
  If iteration ≤ Max_iterations/3
    Update matrix P r e y based on Equation (4)
  Else if Max_iterations/3 < iteration ≤ 2 × Max_iterations/3
    For the first half of the population (i ≤ n/2)
     Update matrix P r e y based on Equation (5)
    For the other half of the population (n/2 < i ≤ n)
     Update matrix P r e y based on Equation (6)
  Else if 2 × Max_iterations/3 < iteration ≤ Max_iterations
    Update matrix P r e y based on Equation (7)
  End if
  Accomplish memory saving and matrix E l i t e update
  Compute the reverse solution based on Equation (10) and update P r e y according to fitness
  For disturbances = 1:2
   Apply the F A D s effect and update based on Equation (8)
  End for
  Generate three mutation positions based on Equations (11)–(13)
  Update the prey location based on the greedy selection mechanism
End while

3.5. Computational Complexity

In order to better understand mMPA-OC, the space complexity and time complexity of the proposed algorithm are briefly analyzed in this section.
First of all, because the size of the matrix Elite and the matrix Prey required by the algorithm is $n \times d$, the maximum space required by the algorithm is $O(n \times d)$.
Second, for time complexity: initializing the population and setting parameters requires $O(n \times d)$, where $n$ is the population size and $d$ is the problem dimension. Constructing the predator matrix $Prey$ and the elite matrix $Elite$ requires $O(n)$. The optimization process, i.e., the three corresponding phases, requires $O(n \times d)$; the multiple disturbances require $O(n)$; the opposition-based learning phase requires $O(n)$; and the compound mutation stage requires $O(n \times d)$. The outermost loop runs $t$ times, where $t$ is the number of iterations. The overall time complexity of mMPA-OC is therefore $O(t \times n \times d)$.

4. Numerical Experiments and Analysis

The performance of meta-heuristics is measured by experimental results on test functions. Due to the randomness of meta-heuristic algorithms, mMPA-OC needs sufficient testing to ensure that its performance is not due to chance. Therefore, we set up different experiments to verify the effectiveness of mMPA-OC.
The first set of experiments verifies the performance of mMPA-OC on different dimensions of the CEC-2017 benchmark functions, which have rotation and translation characteristics [56]; the details are shown in Table 1. These experiments assess the convergence rate and accuracy of mMPA-OC and its effectiveness on high-dimensional problems.
The second set of experiments was conducted to verify the performance of the improved algorithm in 10 test functions of CEC-2019, which are characterized by dynamic, niche composition, and many other types of problems with higher difficulty coefficients. The details of CEC-2019 are shown in Table 2 to verify the effectiveness of the mMPA-OC on different test problems.
At the same time, for fairness, each group of experiments selects representative meta-heuristic algorithms for comparison and evaluates the improved algorithm from three aspects. The first is a comparison of the mean ($Mean$), minimum ($Best$), and standard deviation ($Std$) of the test function results. The second is an analysis of the convergence rate during the optimization process. The third is non-parametric statistical testing, namely the Friedman test, the Wilcoxon rank-sum test, and the Bonferroni–Holm test.

4.1. The Experimental Setup

To compare the performance of the various algorithms on complex numerical optimization problems, this paper selects representative meta-heuristic optimization algorithms with good solving ability from recent years for comparison: DE [5], PSO [7], SSA [25], GWO [11], SCA [27], MPA [36], and AOA [57]. For fairness and rationality, the parameter settings of the selected comparison algorithms are consistent with those in the original literature. For all algorithms, the population size is set to 25, the number of iterations per function is 500, and each algorithm is run 30 times independently. The $Mean$ over 30 independent runs greatly reduces the impact of accidental errors, so it is used as an index of convergence accuracy; the closer its value to the theoretical optimum, the better the optimization effect. The $Std$ is used to evaluate the stability and robustness of the algorithm; a smaller value indicates a more stable algorithm. The $Best$ represents the best solution the algorithm can find; the smaller the $Best$, the stronger the optimization ability.

4.2. Experiments on CEC-2017

In this section, we evaluate the algorithms on the CEC-2017 functions in different dimensions to verify the convergence rate and accuracy of mMPA-OC.

4.2.1. Experiments on CEC-2017 (10 Dimensions)

Analysis of Convergence Accuracy

This section compares the accuracy of mMPA-OC with the other algorithms on CEC-2017 (10 dimensions). Table 3 shows the results for the Unimodal and Simple Multimodal Functions, Table 4 the results for the Hybrid Functions, and Table 5 the results for the Composition Functions. Numbers in bold represent the best value among the compared algorithms.
Table 3 shows that mMPA-OC achieves the best indicators on the Unimodal Functions. On the Simple Multimodal Functions, mMPA-OC performs best on F4, F5, and F9 in all indicators, and its Best is very close to the theoretical optimum. On F6, although mMPA-OC is not the best, it is close to the theoretical optimum. On F7 and F8, the Best and Mean found by mMPA-OC are the best, but DE has the best Std. mMPA-OC performs well on the Mean of F10, but the Best found by GWO is the best.
As for the Hybrid Functions, mMPA-OC performs best in all indicators on F11–F16 and F18–F19. On F17, DE performs well in Mean and Std and MPA performs well in Best, but mMPA-OC outperforms MPA in Mean and Std. On F20, DE is the best and mMPA-OC second, ahead of MPA.
For the Composition Functions, mMPA-OC is best in all indicators on F21 and F28–F30. The Best and Mean values of mMPA-OC on F22, F23, and F25 are the best of all algorithms, but its Std is not. On F24, mMPA-OC performs best in Best but slightly worse in Mean and Std. On F26, mMPA-OC performs well in Mean but is not the best in Best and Std. On F27, the Mean and Std obtained by mMPA-OC are good, but its Best is not the best.
From the above results, we can infer that mMPA-OC can achieve results close to the theoretical optimal value to a large extent and is relatively stable compared to other algorithms.

Analysis of Convergence Rate

This section uses convergence curves to assess the convergence rate of mMPA-OC and further verify its performance. Figure 1 and Figure 2 show the convergence characteristics of the compared algorithms on F1, F3, F8, F10, F12, F14, F16, F18, F20, F21, F23, F25, F26, and F30. The curves show that by 500 iterations the convergence accuracy of mMPA-OC is very close to that of MPA on F3, F14, F18, and F21, but the convergence rate of mMPA-OC is faster. Across the convergence curves, mMPA-OC has the fastest convergence rate and the highest convergence accuracy.
In general, mMPA-OC has the best convergence performance, MPA, SSA and DE have weak convergence performance, and GWO, SCA, AOA and PSO have the weakest convergence performance. These results show that mMPA-OC can quickly find a better location and achieve efficient search.

Analysis of Statistical Results

In this section, the most commonly used statistical tests, namely the Wilcoxon rank sum test, the Friedman test, and the Bonferroni–Holm test, are used to verify the statistical differences between the other algorithms and mMPA-OC on CEC-2017 (10 dimensions).
Compare Wilcoxon Rank Sum Test
A significance level of 0.05 is used: the p-value of the Wilcoxon rank sum test is compared against 0.05 [58] to evaluate the performance of mMPA-OC. The p-values for mMPA-OC versus the other compared algorithms are shown in Table 6.
From the p-values in Table 6, there is no significant difference between mMPA-OC and the other seven algorithms on only a few functions. For example, on F5, F11, F17, F22, F23, and F29 there is no significant difference between mMPA-OC and MPA, and on F26 there is no significant difference between mMPA-OC and SSA; these are the functions where mMPA-OC's performance is less distinguished. In short, mMPA-OC differs significantly from the other algorithms on the vast majority of functions. Therefore, the Wilcoxon rank sum test results show that mMPA-OC has superior performance.
Compare Friedman Test
To further illustrate the comprehensive performance of mMPA-OC and the other algorithms, the Friedman test was applied to the data at a significance level of 0.05 to obtain the average ranking of each algorithm. The results, shown in Table 7, indicate that mMPA-OC has the best comprehensive performance on the CEC-2017 test set (10 dimensions), followed by MPA, DE, GWO, SSA, PSO, SCA, and AOA.
Bonferroni–Holm Test
In order to further verify the performance differences between the proposed mMPA-OC and its peers, Bonferroni–Holm tests were applied to the p-values obtained from the Wilcoxon rank-sum test. At a significance level of 0.025, the logical values of the Bonferroni–Holm tests comparing mMPA-OC with the other algorithms on CEC-2017 (10 dimensions) are shown in Table 8. The results show that the differences between mMPA-OC and its peers are statistically significant.
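As an illustration of how the three tests above can be computed, the following sketch (not the authors' code; the result samples are hypothetical stand-ins) applies SciPy's rank-sum and Friedman tests and a hand-written Bonferroni–Holm step-down correction:

```python
# Illustrative sketch of the three statistical tests used above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mmpa_oc = rng.normal(100.0, 0.5, 30)   # final fitness over 30 runs (fake data)
mpa = rng.normal(106.8, 6.7, 30)
third = rng.normal(120.0, 10.0, 30)

# Wilcoxon rank-sum test: p < 0.05 indicates a significant difference.
_, p = stats.ranksums(mmpa_oc, mpa)

# Friedman test: ranks k algorithms over repeated measurements.
_, p_friedman = stats.friedmanchisquare(mmpa_oc, mpa, third)

# Bonferroni-Holm step-down correction of a family of p-values.
def holm(pvals, alpha=0.05):
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (len(pvals) - rank):
            reject[idx] = True
        else:
            break  # once one hypothesis survives, all later ones do too
    return reject

print(p, p_friedman, holm(np.array([0.001, 0.04, 0.2])))
```

The Holm procedure tests the smallest p-value at α/m, the next at α/(m − 1), and so on, stopping at the first non-rejection, which is why it is less conservative than a plain Bonferroni correction.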

Compared with Other Improved Algorithms

In this section, we compare mMPA-OC with two recently modified MPAs on CEC-2017 (10 dimensions): TLMPA [45], proposed in 2020, and MSMPA [47], proposed in 2022. Comparing the test results, the proposed algorithm achieves 16 best and 9 second-best results on the 29 test functions. It can be concluded that the proposed algorithm is competitive with other improved variants of MPA. The comparison of mMPA-OC and the other improved MPAs is shown in Table 9.

4.2.2. Experiments on CEC-2017 (30 Dimensions)

We conducted experiments on CEC-2017 (30 dimensions) to verify the effectiveness of mMPA-OC in solving higher-dimensional problems.

Analysis of Convergence Accuracy

In this section, the convergence accuracy of mMPA-OC is compared with the other algorithms on CEC-2017 (30 dimensions). The results for the unimodal and simple multimodal functions are presented in Table 10, those for the hybrid functions in Table 11, and those for the composition functions in Table 12. Again, the numbers in bold mark the best value among the compared algorithms. Table 10 shows that mMPA-OC performs best on both unimodal functions, F1 and F3.
For the simple multimodal functions in Table 10, mMPA-OC is the strongest: on F9 it is best on all three metrics, and on F4, F5, F7, F8, and F10 it achieves the best Best and Mean, although its Std is not the best among the compared algorithms. DE performs best on F6, but the Best found by mMPA-OC is also close to the theoretical optimum and better than that of MPA.
For the hybrid functions in Table 11, mMPA-OC is best on all three metrics on F12–F15 and F17–F19, and achieves the best Best and Mean on F11, F16, and F20, although its Std is not the best among the compared algorithms.
For the composition functions in Table 12, mMPA-OC is best on all three metrics on F29 and F30. On F22 and F27, the Best found by mMPA-OC is the best, but its Std and Mean are not; MPA performs well on F22, while PSO performs well on F27. On F23–F25 and F28, mMPA-OC achieves the best Best and Mean, but not the best Std. On F21, mMPA-OC achieves the best Mean, while DE has the best Std and MPA the best Best. On F26, mMPA-OC is not the best performer.
From the above results, we can infer that the convergence advantage of mMPA-OC is preserved across different dimensions of the same test functions.

Analysis of Convergence Rate

In order to further evaluate the performance of mMPA-OC, convergence curves are used in this section to evaluate its convergence rate in higher dimensions. Figure 3 and Figure 4 show the convergence performance of mMPA-OC and the other algorithms on F1, F3, F5, F7, F9, F12, F14, F16, F18, F20, F21, F23, F25, F27, and F30 of CEC-2017 (30 dimensions). Although SSA converges quickly on F1 in the early phase, mMPA-OC overtakes it in the later phase of the search, and PSO converges faster than mMPA-OC only in the early phase on F27. Nevertheless, the mMPA-OC curve ultimately lies at the bottom, indicating that mMPA-OC combines a fast convergence rate with high convergence accuracy. These results indicate that, on CEC-2017 (30 dimensions), mMPA-OC approaches the optimal solution faster than the compared algorithms.

Analysis of Statistical Results

In order to verify the statistical differences between mMPA-OC and the other algorithms on CEC-2017 (30 dimensions), and to further illustrate their comprehensive performance, the Wilcoxon rank-sum test, the Friedman test, and the Bonferroni–Holm test were again used to evaluate the algorithms.
Wilcoxon Rank-Sum Test
The p-values of the Wilcoxon rank-sum test in Table 13 show that mMPA-OC is not significantly different from the other seven algorithms on only a few functions. For example, on F4, F7, and F26 there is no significant difference between mMPA-OC and MPA, and most of these cases occur where the convergence accuracy of mMPA-OC is not as good as that of MPA. The Wilcoxon rank-sum test therefore shows that mMPA-OC retains superior performance on CEC-2017 (30 dimensions).
Friedman Test
The Friedman test was applied to illustrate the performance of mMPA-OC and the compared algorithms on the CEC-2017 test set (30 dimensions). The tests were performed at a significance level of 0.05 to obtain the average ranking of each algorithm, and the results are shown in Table 14. mMPA-OC has the best comprehensive performance on CEC-2017 (30 dimensions), followed by MPA, DE, GWO, SSA, PSO, SCA, and AOA.
Bonferroni–Holm Test
In order to further investigate the performance differences between the proposed mMPA-OC and its peers, Bonferroni–Holm tests were applied to the p-values obtained from the Wilcoxon rank-sum test. At a significance level of 0.025, the logical values of the Bonferroni–Holm tests comparing mMPA-OC with the other algorithms on CEC-2017 (30 dimensions) are shown in Table 15. The results show that the differences between mMPA-OC and its peers are statistically significant.

4.3. Experiments on CEC-2019

In order to verify the performance of mMPA-OC on a different set of test functions, we conducted experiments on CEC-2019, which contains dynamic, niche-composition, computationally expensive, and several other types of problems with a higher degree of difficulty.

4.3.1. Analysis of Convergence Accuracy

In this section, the convergence accuracy of mMPA-OC and the compared algorithms on CEC-2019 is presented. The summary results are shown in Table 16, with the optimal values in bold. Table 16 shows that mMPA-OC performs best on F1–F3 and F6, and comes close to the theoretical global optimum on F1 and F6. On F4, F8, and F10, it achieves the best Best and Mean, but its Std on F4 and F10 is slightly worse than that of DE, and on F8 slightly worse than that of SSA. On F5, F7, and F9, the Mean and Std obtained by mMPA-OC are the best, while MPA finds the best Best value. In conclusion, compared with the seven meta-heuristic optimization algorithms, the global optimum found by mMPA-OC is closest to the theoretical global optimum, which indicates that mMPA-OC has the stronger search ability.

4.3.2. Analysis of Convergence Rate

In this section, convergence curves on CEC-2019 are used to evaluate the convergence rate of mMPA-OC and verify its performance. Figure 5 shows the convergence characteristics of the algorithms on F1 and F4–F10 of CEC-2019. The curve of mMPA-OC lies almost always below those of the other algorithms, indicating that mMPA-OC converges quickly. On F5, although its early convergence is not as fast as that of DE, mMPA-OC does not fall into a local optimum. Therefore, compared with the other algorithms, mMPA-OC approaches and locates the optimal solution on CEC-2019 more quickly.

4.3.3. Analysis of Statistical Results

In order to verify the statistical differences between mMPA-OC and the other algorithms on CEC-2019, the Wilcoxon rank-sum test, the Friedman test, and the Bonferroni–Holm test are used in this section.

Wilcoxon Rank-Sum Test

Table 17 shows the p-values of the Wilcoxon rank-sum test. mMPA-OC is not significantly different from the other algorithms on only a few functions; for example, on F7 there is no significant difference between mMPA-OC and MPA. The Wilcoxon rank-sum test therefore shows that mMPA-OC retains superior performance on CEC-2019.

Friedman Test

In order to further illustrate the performance of mMPA-OC, the Friedman test was applied to the data at a significance level of 0.05 to obtain the average ranking of each algorithm. The results are listed in Table 18: mMPA-OC has the best comprehensive performance on CEC-2019, followed by MPA, GWO, DE, SSA, PSO, SCA, and AOA.

Bonferroni–Holm Test

In order to further investigate the performance differences between the proposed mMPA-OC and the other algorithms, Bonferroni–Holm tests were applied to the p-values obtained from the Wilcoxon rank-sum test. At a significance level of 0.025, the logical values of the Bonferroni–Holm tests comparing mMPA-OC with the other algorithms on CEC-2019 are shown in Table 19. The results show that the differences between mMPA-OC and its peers are statistically significant.

5. Engineering Optimization Problem

In order to verify the effectiveness of mMPA-OC in practical engineering applications, the welded beam, pressure vessel, and gear train design problems are used to evaluate the performance of the algorithm. The proposed mMPA-OC is compared with PSO, DE, SSA, SCA, GWO, AOA, and MPA. The experiments set the population size to 50 and the maximum number of iterations to 100, and the results of each algorithm are recorded over 30 independent runs.

5.1. Welded Beam Design

The welded beam design problem, proposed by Coello [59], aims to minimize the manufacturing cost. Its four design variables are the weld thickness $t = q_1$, height $h = q_2$, length $l = q_3$, and strip thickness $s = q_4$. The objective function of this problem can be formulated as follows:
minimize:
$$ f(X) = 1.10471\,q_1^2 q_2 + 0.04811\,q_3 q_4 \left( 14.0 + q_2 \right) $$
subject to:
$$
\begin{aligned}
G_1(X) &= \tau(X) - \tau_{\max} \le 0, & G_2(X) &= \sigma(X) - \sigma_{\max} \le 0, \\
G_3(X) &= \delta(X) - \delta_{\max} \le 0, & G_4(X) &= q_1 - q_4 \le 0, \\
G_5(X) &= P - P_c(X) \le 0, & G_6(X) &= 0.125 - q_1 \le 0, \\
G_7(X) &= 1.10471\,q_1^2 + 0.04811\,q_3 q_4 \left( 14.0 + q_2 \right) - 5.0 \le 0,
\end{aligned}
$$
where
$$
\begin{aligned}
\tau(X) &= \sqrt{(\tau')^2 + 2\tau'\tau''\frac{q_2}{2R} + (\tau'')^2}, \quad
\tau' = \frac{P}{\sqrt{2}\,q_1 q_2}, \quad \tau'' = \frac{MR}{J}, \\
M &= P\left( L + \frac{q_2}{2} \right), \quad
R = \sqrt{\frac{q_2^2}{4} + \left( \frac{q_1 + q_3}{2} \right)^2}, \quad
J = 2\sqrt{2}\,q_1 q_2 \left[ \frac{q_2^2}{4} + \left( \frac{q_1 + q_3}{2} \right)^2 \right], \\
\sigma(X) &= \frac{6PL}{q_4 q_3^2}, \quad
\delta(X) = \frac{6PL^3}{E q_3^2 q_4}, \quad
P_c(X) = \frac{4.013 E \sqrt{q_3^2 q_4^6 / 36}}{L^2} \left( 1 - \frac{q_3}{2L} \sqrt{\frac{E}{4G}} \right), \\
P &= 6000\ \mathrm{lb}, \quad L = 14\ \mathrm{in}, \quad \delta_{\max} = 0.25\ \mathrm{in}, \quad
E = 3 \times 10^7\ \mathrm{psi}, \\
G &= 1.2 \times 10^7\ \mathrm{psi}, \quad
\tau_{\max} = 1.36 \times 10^4\ \mathrm{psi}, \quad \sigma_{\max} = 3 \times 10^4\ \mathrm{psi},
\end{aligned}
$$
variable range: $0.1 \le q_1, q_4 \le 2$, $0.1 \le q_2, q_3 \le 10$. The experimental results of the various meta-heuristic algorithms on the welded beam design problem are shown in Table 20, and they show that mMPA-OC performs best. Therefore, mMPA-OC can effectively solve the welded beam design problem.
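The objective and constraints above can be transcribed directly into an evaluation routine. The sketch below is an illustration rather than the authors' implementation, and the test design vector is hypothetical:

```python
# Evaluate the welded beam cost f(X) and constraints G1..G7 for one design.
# Constants follow the formulation above.
import math

P, L, E, G = 6000.0, 14.0, 3e7, 1.2e7
tau_max, sigma_max, delta_max = 1.36e4, 3e4, 0.25

def welded_beam(q1, q2, q3, q4):
    cost = 1.10471 * q1**2 * q2 + 0.04811 * q3 * q4 * (14.0 + q2)
    tau_p = P / (math.sqrt(2.0) * q1 * q2)                 # tau'
    M = P * (L + q2 / 2.0)
    R = math.sqrt(q2**2 / 4.0 + ((q1 + q3) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * q1 * q2 * (q2**2 / 4.0 + ((q1 + q3) / 2.0) ** 2)
    tau_pp = M * R / J                                     # tau''
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * q2 / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (q4 * q3**2)
    delta = 6.0 * P * L**3 / (E * q3**2 * q4)
    Pc = (4.013 * E * math.sqrt(q3**2 * q4**6 / 36.0) / L**2
          * (1.0 - q3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))
    g = (tau - tau_max, sigma - sigma_max, delta - delta_max, q1 - q4,
         P - Pc, 0.125 - q1,
         1.10471 * q1**2 + 0.04811 * q3 * q4 * (14.0 + q2) - 5.0)
    return cost, g

cost, g = welded_beam(0.25, 4.0, 9.5, 0.25)    # hypothetical feasible design
feasible = all(gi <= 0 for gi in g)            # feasible when every Gi <= 0
```

A penalized version of `cost`, adding a large multiple of the constraint violations, is what a meta-heuristic such as MPA would typically minimize on this problem.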

5.2. Pressure Vessel Design

The pressure vessel design problem [60] is to minimize the production cost $f(q)$ while meeting the production requirements. Its four design variables are the thickness of the shell $t_S = q_1$, the thickness of the head $t_H = q_2$, the inner radius $t_R = q_3$, and the length of the cylindrical part of the container $t_L = q_4$. Moreover, $q_1$ and $q_2$ are integer multiples of 0.0625, while $q_3$ and $q_4$ are continuous variables. The objective function of this problem is as follows:
$$ \min f(q) = 0.6224\,q_1 q_3 q_4 + 1.7781\,q_2 q_3^2 + 3.1661\,q_1^2 q_4 + 19.84\,q_1^2 q_3 $$
The constraints are as follows:
$$
\begin{aligned}
G_1(q) &= -q_1 + 0.0193\,q_3 \le 0, \\
G_2(q) &= -q_2 + 0.00954\,q_3 \le 0, \\
G_3(q) &= -\pi q_3^2 q_4 - \tfrac{4}{3}\pi q_3^3 + 1{,}296{,}000 \le 0, \\
G_4(q) &= q_4 - 240 \le 0,
\end{aligned}
$$
variable range:
$$ q_1, q_2 \in \{ 1 \times 0.0625,\ 2 \times 0.0625,\ 3 \times 0.0625,\ \ldots,\ 1600 \times 0.0625 \}, \qquad 10 \le q_3, q_4 \le 200. $$
Table 21 records the comparison between mMPA-OC and the seven meta-heuristics on the pressure vessel design problem, and the results show that mMPA-OC performs best. Therefore, mMPA-OC can effectively solve the pressure vessel design problem.
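The formulation above can likewise be written as a small evaluation routine. The sketch below is illustrative rather than the authors' code, and the test design is hypothetical; $q_1$ and $q_2$ are snapped to the nearest multiple of 0.0625 as the formulation requires:

```python
import math

def pressure_vessel(q1, q2, q3, q4):
    # Shell and head thicknesses are discrete multiples of 0.0625 in.
    q1 = round(q1 / 0.0625) * 0.0625
    q2 = round(q2 / 0.0625) * 0.0625
    cost = (0.6224 * q1 * q3 * q4 + 1.7781 * q2 * q3**2
            + 3.1661 * q1**2 * q4 + 19.84 * q1**2 * q3)
    g = (-q1 + 0.0193 * q3,                    # minimum shell thickness
         -q2 + 0.00954 * q3,                   # minimum head thickness
         -math.pi * q3**2 * q4 - 4.0 / 3.0 * math.pi * q3**3 + 1296000.0,
         q4 - 240.0)                           # maximum length
    return cost, all(gi <= 0 for gi in g)

cost, feasible = pressure_vessel(1.0, 0.5, 50.0, 150.0)  # hypothetical design
```

Handling the discrete thicknesses inside the evaluation, as here, lets a continuous optimizer such as MPA search the problem without any special discrete operators.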

5.3. Gear Train Design Problem

The gear train design problem is an unconstrained discrete design problem in mechanical engineering [61]. The objective is to minimize the deviation of the gear ratio, i.e., the ratio of the angular velocity of the output shaft to that of the input shaft, from the required value of 1/6.931. Its four design variables are the tooth numbers $N_A = q_1$, $N_B = q_2$, $N_C = q_3$, and $N_D = q_4$. The objective function of this problem can be formulated as follows:
minimize:
$$ f(X) = \left( \frac{1}{6.931} - \frac{q_3 q_2}{q_1 q_4} \right)^2 $$
variable range:
$$ q_1, q_2, q_3, q_4 \in \{ 12, 13, 14, \ldots, 60 \} $$
Table 22 records the comparison between mMPA-OC and the seven meta-heuristics on the gear train design problem, and the results show that mMPA-OC performs best. Therefore, mMPA-OC can effectively solve the gear train design problem.
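Because the objective is so cheap to evaluate, it is easy to sanity-check. The sketch below (illustrative, not the authors' code) evaluates the well-known tooth counts 16, 19, 43, and 49, whose error is about 2.7 × 10⁻¹², and compares them with a naive random search:

```python
import random

def gear_ratio_error(q1, q2, q3, q4):
    # Squared deviation of the realized gear ratio from the target 1/6.931.
    return (1.0 / 6.931 - (q3 * q2) / (q1 * q4)) ** 2

best_known = gear_ratio_error(49, 16, 19, 43)   # classic near-optimal gear set

random.seed(1)
best_random = min(
    gear_ratio_error(*(random.randint(12, 60) for _ in range(4)))
    for _ in range(20000)
)
```

Even 20,000 random samples rarely match the best-known error, which is one reason guided meta-heuristics such as mMPA-OC are applied to this small but deceptive discrete landscape.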

6. Conclusions

In this paper, we propose a multi-disturbance Marine Predator Algorithm based on opposition-based learning and compound mutation. MPA lacks a smooth transition between exploitation and exploration, and it easily becomes stuck in local optima. We therefore propose three improvements to overcome these shortcomings. Firstly, an opposition-based learning mechanism is added to the optimal value selection process to improve the exploration ability of MPA. Secondly, a composite mutation strategy is used to improve the predator position update mechanism and strengthen the global search ability of MPA. Finally, the disturbance factor is extended to multiple disturbance factors, so that MPA can maintain population diversity during the iterative process. In order to evaluate the performance of mMPA-OC, experiments were designed to compare it with seven algorithms (MPA, PSO, DE, GWO, SSA, SCA, and AOA) on different dimensions of CEC-2017 and on the more complex CEC-2019 test set. The algorithms were evaluated in terms of convergence accuracy, stability, convergence rate, and the Friedman, Wilcoxon, and Bonferroni–Holm tests. The experimental results show that mMPA-OC not only maintains population diversity and greatly increases the ability to escape local optima, but also balances the exploration and exploitation phases well. To evaluate mMPA-OC on real engineering optimization problems, it was again compared with the seven meta-heuristic algorithms, and the results show that mMPA-OC achieves a better optimization effect. In conclusion, mMPA-OC is an effective improvement of MPA and a feasible method for solving complex optimization problems, with promising prospects for further development.
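The opposition-based learning step summarized above admits a compact sketch (an assumed form for illustration, not the authors' implementation): the opposite of a point $x$ in $[LB, UB]$ is $LB + UB - x$, and the better of each pair is kept:

```python
import numpy as np

def obl_select(pop, fitness, lb, ub):
    # Opposite population: LB + UB - x, applied elementwise per dimension.
    opposite = lb + ub - pop
    f_pop = np.array([fitness(x) for x in pop])
    f_opp = np.array([fitness(x) for x in opposite])
    keep = f_pop <= f_opp                      # minimization
    return np.where(keep[:, None], pop, opposite)

rng = np.random.default_rng(0)
lb, ub = -100.0, 50.0                          # asymmetric bounds (toy example)
pop = rng.uniform(lb, ub, size=(5, 3))
sphere = lambda x: float(np.sum(x**2))         # toy fitness function
better = obl_select(pop, sphere, lb, ub)
```

With symmetric bounds the opposite of $x$ is simply $-x$, which is why asymmetric bounds are used in this toy example to make the selection non-trivial.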
In future work, it is worth exploring whether other strategies can be introduced on the basis of mMPA-OC to further improve its performance, whether it can be extended into an optimizer for multi-objective optimization problems or for the 0-1 knapsack problem, and whether it can be applied more broadly to practical problems.

Author Contributions

Methodology, L.C.; software, C.H.; validation, L.C., C.H. and Y.M.; formal analysis, C.H.; investigation, L.C.; resources, L.C. and Y.M.; data curation, C.H.; writing—original draft preparation, C.H.; writing—review and editing, C.H. and L.C.; visualization, C.H.; supervision, L.C.; project administration, Y.M.; funding acquisition, L.C. and Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 62203332 and No. 61535008), the Science and Technology Research Team in Higher Education Institutions of Hebei Province (Grant No. ZD2018045), and the Natural Science Foundation of Tianjin (Grant No. 20JCQNJC00430).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UB: upper boundary of the search space
LB: lower boundary of the search space
rand: random numbers between zero and one
d: dimension of the search space
n: population size
Elite_i: predator position
Prey_i: prey position
RL: coefficient of Lévy motion
RB: coefficient of Brownian motion
CF: decay factor used to control the step size of the predator
Max_iteration: maximum number of iterations
U: binary vector of zeros and ones
F1, F2, F3: scale coefficients
Cr1, Cr2, Cr3: mutation rates

References

1. Kallioras, N.A.; Lagaros, N.D.; Avtzis, D.N. Pity beetle algorithm—A new metaheuristic inspired by the behavior of bark beetles. Adv. Eng. Softw. 2018, 121, 147–166.
2. Abualigah, L.; Shehab, M.; Alshinwan, M.; Alabool, H. Salp swarm algorithm: A comprehensive survey. Neural Comput. Appl. 2020, 32, 11195–11215.
3. Holland, J. Genetic Algorithms. Sci. Am. 1992, 267, 66–72.
4. Fogel, D.B. Artificial Intelligence through Simulated Evolution. In Evolutionary Computation: The Fossil Record; IEEE: Piscataway, NJ, USA, 1998; pp. 227–296.
5. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
6. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
7. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), Nagoya, Japan, 4–6 October 1995; pp. 39–43.
8. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
9. Yang, X.S.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009; pp. 210–214.
10. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Studies in Computational Intelligence; Volume 284, pp. 65–74.
11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
12. Dorigo, M.; Caro, G.D. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), Washington, DC, USA, 6–9 July 1999; Volume 2, pp. 1470–1477.
13. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184.
14. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
15. Du, H.; Wu, X.; Zhuang, J. Small-World Optimization Algorithm for Function Optimization. Adv. Nat. Comput. 2006, 264–273.
16. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray Optimization. Comput. Struct. 2012, 112–113, 283–294.
17. Formato, R. Central force optimization: A new metaheuristic with applications in applied electromagnetics. Prog. Electromagn. Res. 2007, 77, 425–491.
18. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289.
19. Hwang, C.R. Simulated annealing: Theory and applications. Acta Appl. Math. 1988, 12, 108–111.
20. Eita, M.A.; Fahmy, M.M. Group Counseling Optimization: A Novel Approach. In Research and Development in Intelligent Systems XXVI; Springer: Berlin/Heidelberg, Germany, 2010; pp. 195–208.
21. Rao, R.V.; Savsani, V.; Vakharia, D. Teaching-Learning-Based Optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
22. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
23. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
24. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734.
25. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
26. Muthiah-Nakarajan, V.; Noel, M.M. Galactic Swarm Optimization: A new global optimization metaheuristic inspired by galactic motion. Appl. Soft Comput. 2015, 38, 771–787.
27. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2015, 96, 120–133.
28. Sarkhel, R.; Das, N.; Saha, A.K.; Nasipuri, M. An improved Harmony Search Algorithm embedded with a novel piecewise opposition based learning algorithm. Eng. Appl. Artif. Intell. 2018, 67, 317–330.
29. Abualigah, L. Group search optimizer: A nature-inspired meta-heuristic optimization algorithm with its results, variants, and applications. Neural Comput. Appl. 2021, 33, 2949–2972.
30. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709.
31. Seck-Tuoh-Mora, J.C.; Hernandez-Romero, R.; Santander-Baños, F.; Volpi-Leon, V.; Medina-Marin, J.; Lagos-Eulogio, P. A majority–minority cellular automata algorithm for global optimization. Expert Syst. Appl. 2022, 203, 117379.
32. Kumar, B.S. Application of chicken swarm optimization algorithm for multi objective scheduling problems in FMS. Mater. Today Proc. 2022, in press.
33. Houssein, E.H.; Abdelkareem, D.A.; Emam, M.M.; Hameed, M.A.; Younan, M. An efficient image segmentation method for skin cancer imaging using improved golden jackal optimization algorithm. Comput. Biol. Med. 2022, 149, 106075.
34. Liu, X.J.; Sun, B.; Xu, Z.D.; Liu, X.Y. An adaptive Particle Swarm Optimization algorithm for fire source identification of the utility tunnel fire. Fire Saf. J. 2021, 126, 103486.
35. Ullah, Z.; Khan, M.; Naqvi, S.R.; Farooq, W.; Yang, H.P.; Wang, S.; Vo, D.V. A comparative study of machine learning methods for bio-oil yield prediction—A genetic algorithm-based features selection. Bioresour. Technol. 2021, 335, 125292.
36. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
37. Ho, L.V.; Nguyen, D.H.; Mousavi, M.; Roeck, G.D.; Bui-Tien, T.; Gandomi, A.H.; Wahab, M.A. A hybrid computational intelligence approach for structural damage detection using marine predator algorithm and feedforward neural networks. Expert Syst. Appl. 2021, 252, 106568.
38. Dinh, P.H. A novel approach based on Three-scale image decomposition and Marine predators algorithm for multi-modal medical image fusion. Biomed. Signal Process. Control 2021, 67, 102536.
39. Chen, X.; Qi, X.; Wang, Z.; Cui, C.; Wu, B.; Yang, Y. Fault diagnosis of rolling bearing using marine predators algorithm-based support vector machine and topology learning and out-of-sample embedding. Measurement 2021, 176, 109116.
40. Ridha, H.M. Parameters extraction of single and double diodes photovoltaic models using Marine Predators Algorithm and Lambert W function. Sol. Energy 2020, 209, 674–693.
41. Xia, H.J.; Zhang, S.; Jia, R.Y.; Qiu, H.D.; Xu, S.H. Blade shape optimization of Savonius wind turbine using radial based function model and marine predator algorithm. Energy Rep. 2022, 8, 12366–12378.
42. Houssein, E.H.; Hassaballah, M.; Ibrahim, I.E.; AbdElminaam, D.S.; Wazery, Y.M. An automatic arrhythmia classification model based on improved Marine Predators Algorithm and Convolutions Neural Networks. Expert Syst. Appl. 2022, 187, 115936.
43. Oszust, M. Enhanced Marine Predators Algorithm with Local Escaping Operator for Global Optimization. Knowl.-Based Syst. 2021, 232, 107467.
44. Elaziz, M.A.; Thanikanti, S.B.; Ibrahim, I.A.; Lu, S.F.; Nastasi, B.; Majed, A.A.; Hossain, M.A.; Yousri, D. Enhanced Marine Predators Algorithm for identifying static and dynamic Photovoltaic models parameters. Energy Convers. Manag. 2021, 236, 113971.
45. Zhong, K.Y.; Luo, Q.F.; Zhou, Y.Q.; Jiang, M. TLMPA: Teaching-learning-based Marine Predators algorithm. AIMS Math. 2021, 6, 113971.
46. Wang, N.; Wang, J.S.; Zhu, L.F.; Wang, H.Y.; Wang, G. A Novel Dynamic Clustering Method by Integrating Marine Predators Algorithm and Particle Swarm Optimization Algorithm. IEEE Access 2021, 9, 3557–3569.
47. Han, M.X.; Du, Z.F.; Zhu, H.T.; Li, Y.C.; Zhu, H.M. Golden-Sine dynamic marine predator algorithm for addressing engineering design optimization. Knowl.-Based Syst. 2022, 210, 118460.
48. Viswanathan, G.M.; Afanasyev, V.; Buldyrev, S.V.; Murphy, E.J.; Prince, P.A.; Stanley, H.E. Lévy flight search patterns of wandering albatrosses. Nature 1996, 381, 413–415.
49. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701.
50. Yu, X.B.; Xu, W.Y.; Li, C.L. Opposition-based learning grey wolf optimizer for global optimization. Knowl.-Based Syst. 2021, 226, 107139.
51. Gupta, S.; Deep, K.; Heidari, A.A.; Moayedi, H.; Wang, M. Opposition-based learning Harris hawks optimization with advanced transition rules: Principles and analysis. Expert Syst. Appl. 2020, 158, 113510.
52. Gao, H.; Xu, W. Particle swarm algorithm with hybrid mutation strategy. Appl. Soft Comput. 2011, 11, 5129–5142.
53. Cui, L.; Li, G.; Lin, Q.; Chen, J.; Lu, N. Adaptive differential evolution algorithm with novel mutation strategies in multiple sub-populations. Comput. Oper. Res. 2016, 67, 155–173.
54. Zhang, H.; Wang, Z.; Chen, W.; Heidari, A.; Wang, M.; Zhao, X.; Liang, G.; Chen, H.; Zhang, X. Ensemble mutation-driven salp swarm algorithm with restart mechanism: Framework and fundamental analysis. Expert Syst. Appl. 2021, 165, 113897.
55. Wang, Y.; Cai, Z.; Zhang, Q. Differential Evolution with Composite Trial Vector Generation Strategies and Control Parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66.
56. Price, K.V.; Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the 100-Digit Challenge Special Session and Competition on Single Objective Numerical Optimization. 2017. Available online: https://pdfslide.net/documents/the-100-digit-challenge-problem-definitions-and-web-documentscec-2019.html?page=21 (accessed on 4 December 2022).
57. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
58. Oyeka, I.C.A.; Ebuh, G.U. Modified Wilcoxon Signed-Rank Test. Open J. Stat. 2012, 2, 172–176.
59. Coello, C.A.C. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127.
60. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social Network Search for Solving Engineering Optimization Problems. Comput. Intell. Neurosci. 2021, 2021, 8548639.
61. Sandgren, E. Nonlinear integer and discrete programming in mechanical design optimization. J. Mech. Des. 1990, 112, 223–229.
Figure 1. Convergence curve of CEC-2017 test function (10 dimensions).
Figure 2. Convergence curve of CEC-2017 test function (10 dimensions).
Figure 3. Convergence curve of CEC-2017 test function (30 dimensions).
Figure 4. Convergence curve of CEC-2017 test function (30 dimensions).
Figure 5. Convergence curve of CEC-2019 test function.
Table 1. Summary of the CEC-2017 function test set.
Class | No | Functions | Dim | Boundary | Optima
Unimodal Functions | 1 | Shifted and Rotated Bent Cigar | 10, 30 | [−100, 100] | 100
Unimodal Functions | 3 | Shifted and Rotated Zakharov | 10, 30 | [−100, 100] | 300
Simple Multimodal Functions | 4 | Shifted and Rotated Rosenbrock | 10, 30 | [−100, 100] | 400
Simple Multimodal Functions | 5 | Shifted and Rotated Rastrigin | 10, 30 | [−100, 100] | 500
Simple Multimodal Functions | 6 | Shifted and Rotated Expanded Scaffer’s F6 | 10, 30 | [−100, 100] | 600
Simple Multimodal Functions | 7 | Shifted and Rotated Lunacek Bi_Rastrigin | 10, 30 | [−100, 100] | 700
Simple Multimodal Functions | 8 | Shifted and Rotated Non-Continuous Rastrigin | 10, 30 | [−100, 100] | 800
Simple Multimodal Functions | 9 | Shifted and Rotated Levy | 10, 30 | [−100, 100] | 900
Simple Multimodal Functions | 10 | Shifted and Rotated Schwefel | 10, 30 | [−100, 100] | 1000
Hybrid Functions | 11 | Hybrid 1 (N = 3) | 10, 30 | [−100, 100] | 1100
Hybrid Functions | 12 | Hybrid 2 (N = 3) | 10, 30 | [−100, 100] | 1200
Hybrid Functions | 13 | Hybrid 3 (N = 3) | 10, 30 | [−100, 100] | 1300
Hybrid Functions | 14 | Hybrid 4 (N = 4) | 10, 30 | [−100, 100] | 1400
Hybrid Functions | 15 | Hybrid 5 (N = 4) | 10, 30 | [−100, 100] | 1500
Hybrid Functions | 16 | Hybrid 6 (N = 4) | 10, 30 | [−100, 100] | 1600
Hybrid Functions | 17 | Hybrid 7 (N = 5) | 10, 30 | [−100, 100] | 1700
Hybrid Functions | 18 | Hybrid 8 (N = 5) | 10, 30 | [−100, 100] | 1800
Hybrid Functions | 19 | Hybrid 9 (N = 5) | 10, 30 | [−100, 100] | 1900
Hybrid Functions | 20 | Hybrid 10 (N = 6) | 10, 30 | [−100, 100] | 2000
Composition Functions | 21 | Composition 1 (N = 3) | 10, 30 | [−100, 100] | 2100
Composition Functions | 22 | Composition 2 (N = 3) | 10, 30 | [−100, 100] | 2200
Composition Functions | 23 | Composition 3 (N = 4) | 10, 30 | [−100, 100] | 2300
Composition Functions | 24 | Composition 4 (N = 4) | 10, 30 | [−100, 100] | 2400
Composition Functions | 25 | Composition 5 (N = 5) | 10, 30 | [−100, 100] | 2500
Composition Functions | 26 | Composition 6 (N = 5) | 10, 30 | [−100, 100] | 2600
Composition Functions | 27 | Composition 7 (N = 6) | 10, 30 | [−100, 100] | 2700
Composition Functions | 28 | Composition 8 (N = 6) | 10, 30 | [−100, 100] | 2800
Composition Functions | 29 | Composition 9 (N = 3) | 10, 30 | [−100, 100] | 2900
Composition Functions | 30 | Composition 10 (N = 3) | 10, 30 | [−100, 100] | 3000
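Every entry in Table 1 is a base function composed with a shift and a rotation, i.e., F(x) = f(M(x − o)) + bias, which is why each optimum equals the listed bias. The sketch below illustrates this construction with a toy 2-D shift and rotation; the official CEC-2017 suite supplies the shift vectors o and rotation matrices M as data files, so the values here are purely illustrative.

```python
def sphere(z):
    # Base function: sphere, minimum 0 at the origin.
    return sum(v * v for v in z)

def shifted_rotated(f, shift, rot, bias):
    # Wrap a base function f as F(x) = f(M(x - o)) + bias.
    def F(x):
        d = [xi - oi for xi, oi in zip(x, shift)]
        z = [sum(rot[i][j] * d[j] for j in range(len(d))) for i in range(len(rot))]
        return f(z) + bias
    return F

# Toy example: shift o = (1, 2), 90-degree rotation, bias 100 (as for F1).
o = [1.0, 2.0]
M = [[0.0, -1.0], [1.0, 0.0]]
F1_like = shifted_rotated(sphere, o, M, 100.0)
print(F1_like(o))           # at the shifted optimum the value equals the bias: 100.0
print(F1_like([2.0, 2.0]))  # one unit away from the optimum: 101.0
```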
Table 2. Summary of the CEC-2019 function test set.
No | Function | Dim | Boundary | Optima
1 | Storn’s Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192] | 1
2 | Inverse Hilbert Matrix Problem | 16 | [−16,384, 16,384] | 1
3 | Lennard–Jones Minimum Energy Cluster | 18 | [−4, 4] | 1
4 | Rastrigin | 10 | [−100, 100] | 1
5 | Griewangk | 10 | [−100, 100] | 1
6 | Weierstrass | 10 | [−100, 100] | 1
7 | Modified Schwefel | 10 | [−100, 100] | 1
8 | Expanded Schaffer’s F6 | 10 | [−100, 100] | 1
9 | Happy Cat | 10 | [−100, 100] | 1
10 | Ackley | 10 | [−100, 100] | 1
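Several CEC-2019 entries are built on classic multimodal bases such as Rastrigin and Ackley. A minimal sketch of the plain (unshifted, unrotated) forms is shown below; the actual CEC-2019 versions apply shifts, rotations, and scalings and normalize the optimum to 1, so these definitions are only the underlying landscapes.

```python
import math

def rastrigin(x):
    # Classic Rastrigin: highly multimodal, global minimum 0 at the origin.
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x):
    # Classic Ackley: nearly flat outer region, global minimum 0 at the origin.
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

print(rastrigin([0.0] * 10), ackley([0.0] * 10))  # both evaluate to ~0 at the optimum
```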
Table 3. Comparison of experimental results on the unimodal and multimodal functions of CEC-2017 (10 dimensions).
mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
F1Mean100106.82768,108,315.8441,061,382,1001843.803810,664,648,726163,470,763.910,692.9039
Best100100.18041,208,907.685569,859,889.3100.25894,729,457,81416,763.8129926.8083
Std06.7298,657,916.616336,289,717.61752.39644,069,722,672636,396,645.212,658.4727
F3Mean3003005957.273430.088437.096513,083.90994276.94678330.1159
Best300300582.6085892.233008066.2468320.5512902.241
Std004460.71071557.6146209.75942622.35083768.27043138.024
F4Mean400.0726401.5475416.593462.3447412.09751242.2072419.4951406.5477
Best400400.0007402.5074432.1187400.0935633.5613402.5088404.8564
Std0.10151.316324.190323.766120.8928340.939217.49840.792
F5Mean510.0037512.438533.3588557.7172553.9596568.0954522.4643517.2904
Best503.9798503.9798510.5889537.6987512.9344533.9107507.1906510.7732
Std3.82634.81611.30827.961820.604517.83210.84243.9419
F6Mean600.0019600.0584620.5732621.9675629.6076642.5492601.6539600
Best600.0003600.0003603.9594616.4607605.3087633.8042600.0523600
Std0.00330.215210.51864.258412.94635.15982.12510
F7Mean720.2035725.794755.5331785.0931739.9699802.4523736.7592730.2992
Best713.265716.205732.4352758.5851721.4695779.06714.7309723.4814
Std4.54536.352613.502812.35912.560114.09213.33383.352
F8Mean808.3908812.2049828.6228847.7033828.4226841.4078817.6611817.7312
Best803.9798804.9748810.8841834.2953813.9294826.695805.8513812.0345
Std2.89413.591510.01287.1828.76659.15856.56632.826
F9Mean900900.19241023.23621089.70861135.94621426.7447918.2368900
Best900900910.6348935.6915905.30431097.6163900.2621900
Std00.3114170.4269109.6609257.0835211.641823.6020
F10Mean1401.94911491.54832075.77872512.84282159.86232278.18241710.13021825.1921
Best1187.09881122.36811532.07461991.13341125.97051780.61661075.7611492.0982
Std110.5593168.8878264.8628228.571405.4517276.9056368.5721116.0331
Table 4. Comparison of experimental results on the hybrid functions of CEC-2017 (10 dimensions).
mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
F11Mean1103.48911104.13641281.97241259.52731191.35043354.63291139.6641106.0263
Best1101.01121101.59781154.16891171.83841114.73851293.93051105.51021103.9669
Std1.29251.3009108.107379.612349.61611728.178535.41071.3954
F12Mean1240.55621308.72653,230,858.99621,377,525.082,789,474.332432,667,659.81,051,695.4921,095,867.871
Best1200.3431203.559233,493.04742,841,849.04622,150.2276138,437.523218,893.007250,517.6037
Std53.065574.45265,644,195.59712,709,367.483,387,559.385432,830,989.41,772,161.396969,349.5173
F13Mean1304.44261308.955440,434.039782,836.545818,372.94328755.346812,717.51245458.9172
Best1300.10771301.45374576.60373679.24762962.00213641.97452357.50971839.8396
Std2.41443.28847,557.375758,029.104113,541.02535331.62448828.81294421.4108
F14Mean1402.69261408.60922672.79992821.5716039.553311,204.21913285.54481883.5571
Best1400.00231401.0181492.71231543.57381483.80571687.8881474.43041421.6967
Std1.25588.09572509.89551301.55234822.51898911.92332000.6228968.4925
F15Mean1500.54911501.54457998.02074053.896229,219.814418,537.53476658.67371872.0453
Best1500.04441500.17322891.54481867.65621725.74073339.33391711.29781516.6168
Std0.43810.79214822.02052086.559840343.46056398.67884547.513401.708
F16Mean1601.71531610.18821828.59721842.70751971.00832109.11431769.02361636.9271
Best1600.38141601.61861635.81821689.03731675.25341758.97761605.23721604.0572
Std1.048122.4468124.273397.2775145.2623149.8626134.851346.3899
F17Mean1717.20591721.42311822.61151797.61551794.85461881.5781762.76521714.5795
Best1701.90631700.51261756.45021748.44531737.90881750.3361727.81751701.8373
Std9.069610.099953.033920.754462.2821105.75520.46128.8204
F18Mean1800.89381809.293290,944.4631362,483.334519,698.24447,995,222.95225,226.89886237.7304
Best1800.14531803.7613142.595239,744.10043881.24933318.08533300.11882393.893
Std0.71684.6172164,478.7197282,886.67612,313.254430,473,369.4914,511.96514248.5487
F19Mean1900.87671901.74367380.795511,372.838810,661.52149,380.324719,619.9683074.5644
Best1900.09131900.47992065.73552484.4981967.29022368.57561928.37341905.9442
Std0.4420.74197630.22088020.57099014.073196,392.491949,536.82521975.285
F20Mean2014.20192023.4122125.18322131.74492212.79912159.35522104.64322000.2411
Best2000.99622000.00392059.13842070.75852095.29632047.34422027.48572000
Std9.79629.800947.5439.529578.049291.008363.7410.3387
Table 5. Comparison of experimental results on the composition functions of CEC-2017 (10 dimensions).
mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
F21Mean22002200.15562320.032310.58432334.49612339.34122306.44622303.7184
Best22002200.00022205.97212212.080822002226.53852201.11022225.1284
Std00.481136.697463.023952.646937.16136.406936.2225
F22Mean2279.52312287.31442311.80832413.89762304.03333048.21272380.10522303.7112
Best2200.00012200.00262259.00622348.37762239.17492539.93072301.82932301.1204
Std39.51731.441813.888937.634312.9538363.6323266.91152.5557
F23Mean2610.54612612.70362635.88532663.9462666.72912760.55732625.59512619.4209
Best2605.11772605.12942617.6442644.27752610.2052677.4462607.51452611.4611
Std4.04514.442414.155810.011342.12449.674613.17163.7865
F24Mean2565.88582508.05942756.19262793.93592724.37762844.0622751.4982744.1484
Best25002500.00072519.93152773.128225002644.07422732.41542623.0139
Std105.013643.825560.82667.1696128.7548103.045214.036831.2292
F25Mean2881.0052897.86732943.67532975.14822929.09713404.39292937.11382927.91
Best2600.04582897.74292899.35182932.94282897.99143017.23012903.55712898.8279
Std77.23810.466936.796119.385730.1638253.293219.020415.6694
F26Mean2816.66862844.02553196.10723251.46462943.4354055.55693261.31992989.3457
Best2600.00072600.0082854.5193015.14662600.00013409.70052807.13312913.7001
Std117.687893.253325.1367398.2489240.3498308.8662440.572943.2016
F27Mean3089.39493089.98173174.09253106.4073120.51453270.14023112.99113091.7492
Best3089.00553088.9783074.16953098.86313091.77343155.18163090.99873090.2317
Std0.40561.196149.13134.492333.356167.376328.48971.4882
F28Mean3100.00023101.85673289.50013357.28573297.5553819.28453390.03913345.4925
Best31003100.00063272.50113244.553131003401.08333179.3013208.123
Std0.000210.162113.348492.0799124.7313138.3026116.522178.2294
F29Mean3145.39093155.78523299.70753280.51863312.82693459.64323214.97343200.8089
Best3130.41013134.2853165.7493174.59073157.07943181.56143154.82753164.4616
Std9.75620.251471.08158.376785.9996201.906852.609619.9764
F30Mean3471.06283936.744992,479.37291,749,161.064954,889.199939,227,575.781,144,625.586152,386.1769
Best3394.84343405.83593720.360544,102.539738,412.1961322,791.36017034.746416,203.7498
Std230.7961032.6363289,931.51581,129,451.2481,480,086.71431,423,163.171,467,710.156178,563.0271
Table 6. p-values of the Wilcoxon rank-sum test of mMPA-OC and the comparison algorithms on CEC-2017 (10 dimensions).
mMPA-OC & MPA | mMPA-OC & PSO | mMPA-OC & SCA | mMPA-OC & SSA | mMPA-OC & AOA | mMPA-OC & GWO | mMPA-OC & DE
F1 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F3 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F4 | 4.69 × 10^-8 | 3.02 × 10^-11 | 3.02 × 10^-11 | 8.15 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F5 | 2.51 × 10^-2 | 1.46 × 10^-10 | 3.02 × 10^-11 | 6.07 × 10^-11 | 3.02 × 10^-11 | 1.87 × 10^-7 | 9.06 × 10^-8
F6 | 4.42 × 10^-6 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F7 | 5.87 × 10^-4 | 3.34 × 10^-11 | 3.02 × 10^-11 | 3.50 × 10^-9 | 3.02 × 10^-11 | 1.29 × 10^-6 | 3.82 × 10^-9
F8 | 8.66 × 10^-5 | 9.92 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.31 × 10^-8 | 7.39 × 10^-11
F9 | 3.69 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.08 × 10^-5
F10 | 4.51 × 10^-2 | 6.07 × 10^-11 | 3.02 × 10^-11 | 4.57 × 10^-9 | 3.02 × 10^-11 | 4.71 × 10^-4 | 6.70 × 10^-11
F11 | 1.19 × 10^-1 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.34 × 10^-11 | 1.20 × 10^-8
F12 | 2.96 × 10^-5 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F13 | 1.73 × 10^-6 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F14 | 3.16 × 10^-5 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F15 | 1.86 × 10^-6 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F16 | 1.87 × 10^-7 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.08 × 10^-11
F17 | 5.19 × 10^-2 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 2.01 × 10^-1
F18 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F19 | 1.60 × 10^-7 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F20 | 1.78 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.69 × 10^-11 | 3.69 × 10^-11
F21 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 5.57 × 10^-10 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F22 | 5.75 × 10^-2 | 2.44 × 10^-9 | 3.02 × 10^-11 | 3.16 × 10^-10 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.98 × 10^-11
F23 | 7.01 × 10^-2 | 3.69 × 10^-11 | 3.02 × 10^-11 | 1.33 × 10^-10 | 3.02 × 10^-11 | 1.49 × 10^-6 | 6.52 × 10^-9
F24 | 6.10 × 10^-3 | 2.15 × 10^-10 | 3.02 × 10^-11 | 3.99 × 10^-4 | 1.41 × 10^-9 | 1.29 × 10^-9 | 4.20 × 10^-10
F25 | 5.56 × 10^-4 | 2.87 × 10^-10 | 3.69 × 10^-11 | 7.38 × 10^-10 | 3.02 × 10^-11 | 3.16 × 10^-10 | 2.67 × 10^-9
F26 | 3.99 × 10^-4 | 8.10 × 10^-10 | 3.02 × 10^-11 | 2.28 × 10^-1 | 3.02 × 10^-11 | 8.10 × 10^-10 | 3.02 × 10^-11
F27 | 9.03 × 10^-4 | 9.51 × 10^-6 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 6.70 × 10^-11
F28 | 4.98 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 9.51 × 10^-6 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F29 | 5.55 × 10^-2 | 3.34 × 10^-11 | 3.02 × 10^-11 | 6.07 × 10^-11 | 3.02 × 10^-11 | 2.37 × 10^-10 | 3.34 × 10^-11
F30 | 2.49 × 10^-6 | 5.49 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
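The p-values above come from pairwise Wilcoxon rank-sum tests over the 30 independent runs of each algorithm. A pure-Python sketch of the large-sample form of the test is shown below (no tie handling, which library implementations add); it also shows why 3.02 × 10^-11 recurs: that is roughly the smallest p-value the test can report when two 30-run samples are completely separated.

```python
import math

def ranksum_p(x, y):
    # Two-sided Wilcoxon rank-sum test via the normal approximation
    # with a continuity correction (sketch: assumes no tied values).
    n1, n2 = len(x), len(y)
    combined = sorted(x + y)
    w = sum(combined.index(v) + 1 for v in x)  # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = max(abs(w - mean) - 0.5, 0.0) / math.sqrt(var)
    return math.erfc(z / math.sqrt(2.0))  # = 2 * (1 - Phi(z)), computed stably

# Two fully separated 30-run samples hit the floor of roughly 3.0e-11,
# the value that recurs throughout the table.
print(ranksum_p(list(range(30)), list(range(100, 130))))
```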
Table 7. Average ranking of the Friedman test on CEC-2017 (10 dimensions).
Algorithm | mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
Mean rank | 1.1379 | 2.1034 | 5.4138 | 6.2414 | 5.3793 | 7.7241 | 4.7931 | 3.2069
Final rank | 1 | 2 | 6 | 7 | 5 | 8 | 4 | 3
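The Friedman mean ranks are obtained by ranking the eight algorithms on each benchmark function (1 = best) and averaging those ranks over all functions. A small sketch of this computation (with averaged ranks for ties) is shown below; the input matrix here is synthetic, not the paper's data.

```python
def friedman_mean_ranks(results):
    # results[i][j]: error of algorithm j on benchmark function i (lower is
    # better). Returns each algorithm's average rank; ties share the
    # average of the positions they occupy.
    n_alg = len(results[0])
    totals = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda j: row[j])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            k = i
            while k + 1 < n_alg and row[order[k + 1]] == row[order[i]]:
                k += 1  # extend the block of tied values
            avg = (i + k) / 2.0 + 1.0  # average of 1-based positions i..k
            for m in range(i, k + 1):
                ranks[order[m]] = avg
            i = k + 1
        for j in range(n_alg):
            totals[j] += ranks[j]
    return [t / len(results) for t in totals]

print(friedman_mean_ranks([[1, 2, 3], [1, 3, 2]]))  # [1.0, 2.5, 2.5]
```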
Table 8. Logical values of the Bonferroni–Holm tests for mMPA-OC compared with the other algorithms on CEC-2017 (10 dimensions).
mMPA-OC & MPA | mMPA-OC & PSO | mMPA-OC & SCA | mMPA-OC & SSA | mMPA-OC & AOA | mMPA-OC & GWO | mMPA-OC & DE
F1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F3 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F4 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F5 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F6 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F7 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F8 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F9 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F10 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F11 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F12 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F13 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F14 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F15 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F16 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F17 | 0 | 1 | 1 | 1 | 1 | 1 | 0
F18 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F19 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F20 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F21 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F22 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F23 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F24 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F25 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F26 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F27 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F28 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F29 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F30 | 1 | 1 | 1 | 1 | 1 | 1 | 1
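The 0/1 entries above indicate whether each pairwise difference remains significant after the Bonferroni–Holm step-down correction for multiple comparisons. A minimal sketch of the procedure (standard Holm method; the inputs here are illustrative, not the paper's p-values):

```python
def holm_reject(pvalues, alpha=0.05):
    # Bonferroni-Holm step-down: sort the p-values ascending, compare the
    # k-th smallest (0-based) against alpha / (m - k), and stop at the
    # first failure. Returns 1/0 flags in the original order.
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    flags = [0] * m
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):
            flags[i] = 1
        else:
            break  # every larger p-value is also retained
    return flags

print(holm_reject([0.01, 0.001, 0.30]))  # [1, 1, 0]
```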
Table 9. Comparison of mMPA-OC with other improved algorithms in CEC-2017 (10 dimensions).
mMPA-OC | TLMPA [45] | MSMPA [47]
Mean | Std | Mean | Std | Mean | Std
F1100010001077.31
F3300030003000
F4400.07260.1015400.030.064010.912
F5510.00373.8263510.5820585090.039
F6600.00190.003360006000.0155
F7720.20354.5453723.633.087244.31
F8808.39082.8941811.563.728103.05
F9900090009000.175
F101401.9491110.55931613.14113.1914609.41
F111103.48911.29251102.511.0811001.88
F121240.556253.06551310.6495.32129060
F131304.44262.41441309.722.3613103.22
F141402.69261.25581402.711.8814104.66
F151500.54910.43811500.760.3615000.789
F161601.71531.04811604.054.6116001.22
F171717.20599.06961706.294.8217209.07
F181800.89380.71681801.471.6518103.53
F191900.87670.4421900.450.4719000.682
F202014.20199.79622000.723.61202010.5
F21220002214.9937.45221038.6
F222279.523139.5172288.3429.04229026.1
F232610.54614.04512611.554.0426103.45
F242565.8858105.01362587.59112.7255097.9
F252881.00577.23812905.5617.2929000.0093
F262816.6686117.68782879.9976.11276010.6
F273089.39490.40563089.360.5830900.419
F283100.00020.00023108.9295.29311051.6
F293145.39099.7563165.0711.13315012.1
F303471.0628230.7964133.443178.85347069.5
Table 10. Comparison of experimental results on the unimodal and multimodal functions of CEC-2017 (30 dimensions).
mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
F1Mean42,182.500631,294,122.70112,922,443,48820,666,583,679122,494.24755,220,766,4983,886,271,590990,394.399
Best5560.8869966,870.196045,429,136,51913,038,925,5618616.06088832,234,584,745828,838,627.5120,694.7251
Std35,535.616681,349,814.7176,301,414,6744,261,179,855172,517.28249,973,432,2692,532,897,207790,074.4228
F3Mean1996.3267899141.842858145,908.314692,702.3792875,813.3514583,952.7458270,405.81736169,970.3748
Best364.90410712067.0423559,466.2111246,576.1709325,686.0708259,268.5497544,922.37915118,659.4953
Std1298.0357524136.31830649,126.7605520,057.5047126,863.064938984.21867615,513.6425432,593.307
F4Mean499.7804649506.00884732536.3644762922.009032545.461168414,441.64737686.3956453522.0442862
Best467.8360878476.5938901827.08563181477.230752475.99922767706.914711567.819881495.1244623
Std22.897330717.141275491144.518336957.52305846.640386665627.89178182.6529259320.88511283
F5Mean579.3948743617.9093447789.872256830.268896726.9839312896.8573171643.2966936694.0952688
Best552.8156458558.8030149696.8631336753.4065639626.3734703819.2773451590.8492131660.0310588
Std14.8672415623.7408538154.1813128839.3816443359.6902792838.405292141.6773393213.29234634
F6Mean603.2557297610.6667441669.3466865665.7943635661.2356692683.8376316613.9177854600.1691854
Best600.9790419604.0560878638.4494488654.1626578638.8042274660.5653423606.0947963600.0927924
Std1.4030052344.44896213212.297939566.76373046810.408666957.5938904924.9704310060.045275737
F7Mean845.7248071864.81999041300.2233731259.05305969.88102641409.510269932.944045931.499091
Best782.8268592810.31969621153.1927581135.5894861.38132411316.214984823.0886023893.1720415
Std31.5058551129.8321925987.859284670.2239069866.0317667756.5116103155.2690154313.87776347
F8Mean874.2971663907.84677721046.903851104.863143958.01476961121.452392913.1056072997.8145444
Best849.7488228870.2565545984.84162771029.22054885.59730311030.011903862.9636365979.6881102
Std16.5512442517.5238817832.7154035227.2607627728.6064087736.1247221928.8966573811.09386178
F9Mean1100.9284781617.2446459858.6461838627.2017575899.2327998074.5546422420.9375862118.072252
Best940.84482951018.8902115351.8062585152.0052633715.7689646243.8096791221.0315631404.080004
Std217.1184266378.65301342418.4245461890.5315621037.790861233.2645031105.724071467.7150445
F10Mean4099.6670464508.3914137585.1000578870.7582775196.3920398026.5751985159.1329867700.291644
Best2740.4743533087.4697845854.1725787736.2419373744.5123526624.0897113352.3528776976.181046
Std574.7750946528.1053996767.3836531452.065567633.5282205617.87043181407.922492308.9169598
Table 11. Comparison of experimental results on the hybrid functions of CEC-2017 (30 dimensions).
mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
F11Mean1198.1113281213.5119284625.598754139.2295991460.5541979516.8133982522.6825242128.050893
Best1127.1436471148.4501322102.4485692342.55481272.6623294312.219181372.8133691444.757575
Std34.3427730829.565512352205.8705631131.405696159.09134653281.1994191321.685389813.0570137
F12Mean630,554.27231,618,609.87326,229,87402,662,909,79539,488,993.6514,685,258,660138,514,181.630,813,285.77
Best18,155.62017114,474.700780,085,784.45888,221,541.34,352,912.4178,528,127,9045,415,646.899,509,025.965
Std682,378.3951,361,850.423149,814,158.9956,093,52641,786,086.753,638,880,699148,843,762.515,592,738.45
F13Mean1621.576587335.8475786,254,323.8771,246,434,42889,376.6384912,720,791,54319,810,084.234,111,696.121
Best1448.980893499.37311930,936.1215425,636,626.820,257.7415,181,599,00440,816.37699423,533.289
Std170.097892333.5714374,609,517.375513,557,603.949,459.281515,343,770,11163,340,877.023,302,270.622
F14Mean1441.8935421486.11475190,070.8079819,455.9544441,998.87114,204,442.154682,355.4525460,770.9919
Best1431.0446941448.2290969313.379368141,111.86685963.904711101,715.1127440.45694628,903.51463
Std6.43552374817.52021901224,746.7845451,333.8154386,426.03483,141,820.846909,216.3139432,112.5578
F15Mean1589.1696591779.8836982,554,385.83569,626,410.9234,485.30008409,626,559.6621,614.2374944,879.4466
Best1530.7068861631.20429570,477.691533,837,870.6166888.6385217,884.9797914,612.2727365,236.70263
Std28.122635379.5438212111,596,949.6269,156,949.8226,283.73154529,660,184.51,001,420.41868,593.7792
F16Mean2234.173812448.0003283459.3147814109.0931283235.3615135623.8804222773.7390873072.995614
Best1657.8145572012.9404082690.5955233577.3175932571.7671583787.6741412083.9046762727.350064
Std218.1261359215.4089878388.7831725311.8323531406.16258481225.195907387.3942008187.4076652
F17Mean1857.6910561915.223872590.7957672833.3321642436.9076456051.0095492141.283472211.508781
Best1761.6895841787.4104851890.1514112256.4768592042.3319022654.5738561785.1084151938.868372
Std74.65630401105.2666646263.6206959217.2106479271.69812573018.525755212.5609811121.3702865
F18Mean2019.5193023079.1676943,564,313.30116,475,978.492,486,312.37132,361,179.983,691,766.0783,191,591.88
Best1842.5577752128.692528211,460.25773,943,823.946294,731.73352,435,144.75970,417.058491,257,845.528
Std171.90840591075.0260093,782,269.338,896,736.9872,962,883.88424,063,290.295,615,533.3331,605,533.552
F19Mean1927.4407691989.48272610,933,812.04111,670,459.18,162,084.747753,180,7932,532,275.903843,861.0162
Best1918.0806051942.2194126,6189.184321,916,322.2870,016.911472,076,596.5396158.737264122,729.6263
Std6.04682347529.3100157513,675,902.9383,264,644.914,918,593.502585,886,052.17,808,509.054822,262.9829
F20Mean2225.3330152327.8023622813.630392908.6719882769.2884742879.2946612508.5099972512.480112
Best2055.8299592124.2049982480.6899112564.1783032316.7412492392.0345842180.8373852240.343975
Std114.8740254101.3253257173.2153674163.0862191301.9609215227.7024178225.6131741138.4528088
Table 12. Comparison of experimental results on the composition functions of CEC-2017 (30 dimensions).
mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
F21Mean2366.162722379.3966042585.6191532604.4455942514.2216492702.3434492415.6106942492.451478
Best2332.2832772229.1074622499.0439232560.5160162446.6856522616.3057192364.1968482463.934067
Std15.5200031535.2953539944.9238971420.3275101450.6621947452.3723857727.9401569111.89111618
F22Mean2775.713172314.2224637871.5941729984.738254676.0838238914.7770195216.193687128.466697
Best2300.7337462304.958433174.3720245455.4950342300.8165975950.6784442399.3847813022.438351
Std1237.4752336.3395100271805.632724992.35809352613.231471118.2677861727.6347091955.515728
F23Mean2707.2646032734.2856143105.7552183083.0319123025.5436823628.6600052809.7516152841.193252
Best2669.0991012671.3229962936.563253009.5535092684.3457463309.1375172740.8383082817.92824
Std15.2751735429.764022286.5324872442.62796051123.3648787155.788101949.9141517715.06073136
F24Mean2882.3568922897.8276573295.12313264.7986713101.7672473901.5284532956.5051153038.737564
Best2848.6136012854.904653068.1546653163.7695192994.2138273523.0549122873.8234043013.088665
Std17.6330122523.21260688117.961668842.3787036969.0709779207.21642449.1651678814.48515637
F25Mean2897.5367192914.2292623350.2225753662.7010952975.9333945755.8053673057.7942922898.606938
Best2883.763332884.8012083042.0104823341.74162912.6136184388.1456042957.6978082890.451576
Std16.6440879320.33170492214.187784219.267753248.54949565774.62449476.139931486.663182868
F26Mean3488.1344723387.1280798072.9582228049.3096685218.49865410771.385025114.731755565.532713
Best2900.3360772911.2993474538.2068957261.4548672801.8650579146.9248524195.0660385248.104381
Std714.4476505612.35475191305.878392447.75366112115.923606861.4001182492.8098306132.3567493
F27Mean3214.7654373227.4583083200.0071433574.7724513354.1380914727.055993287.227793230.328489
Best3191.479463205.9081443200.0067753398.56373238.662753872.4897463232.6184213216.658404
Std11.2040652217.120034340.00014089180.9405789295.30269168394.839258333.280011866.354347982
F28Mean3090.0000023100.0013243291.7526023308.7182353294.3429663828.8163773368.392533324.526818
Best2800.0000573100.0006163272.5010813238.5655073100.0000133449.3602143167.7188783176.822487
Std54.772245380.00038633912.8168293379.17289652126.5297553141.051555387.2301835889.86123431
F29Mean3567.4048243762.7476644931.1347525333.5742024603.5254968284.2307443975.0108594184.722548
Best3402.4582413445.518664028.0170244954.9671394008.6192525138.1575653581.6116463861.138053
Std106.4221859156.3873518519.3277082270.2138512310.48023536414.773921232.6293185153.6286832
F30Mean7386.44536327,380.5271227,479,160.45200,765,178.610,251,055.412,420,933,11513,899,516.33672,438.0501
Best5323.04873310,343.759863,949,404.87134,199,558.31474,611.0326354,733,9902,225,792.497169,746.2971
Std1252.38634920,194.7645822,842,729.593,741,282.029,391,200.271,147,208,08411,553,559.77552,842.0405
Table 13. p-values of the Wilcoxon rank-sum test of mMPA-OC and the comparison algorithms on CEC-2017 (30 dimensions).
mMPA-OC & MPA | mMPA-OC & PSO | mMPA-OC & SCA | mMPA-OC & SSA | mMPA-OC & AOA | mMPA-OC & GWO | mMPA-OC & DE
F1 | 8.15 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.51 × 10^-2 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.50 × 10^-11
F3 | 2.03 × 10^-9 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F4 | 1.45 × 10^-1 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.02 × 10^-5 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.56 × 10^-4
F5 | 2.39 × 10^-8 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.46 × 10^-10 | 3.02 × 10^-11
F6 | 8.15 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F7 | 2.24 × 10^-2 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.20 × 10^-10 | 3.02 × 10^-11 | 1.07 × 10^-7 | 1.21 × 10^-10
F8 | 5.53 × 10^-8 | 3.02 × 10^-11 | 3.02 × 10^-11 | 7.39 × 10^-11 | 3.02 × 10^-11 | 2.38 × 10^-7 | 3.02 × 10^-11
F9 | 1.01 × 10^-8 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.46 × 10^-10 | 1.33 × 10^-10
F10 | 3.50 × 10^-3 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.36 × 10^-7 | 3.02 × 10^-11 | 8.66 × 10^-5 | 3.02 × 10^-11
F11 | 1.09 × 10^-1 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F12 | 7.30 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F13 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F14 | 6.07 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F15 | 4.08 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F16 | 9.03 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.34 × 10^-11 | 3.02 × 10^-11 | 2.20 × 10^-7 | 3.02 × 10^-11
F17 | 2.61 × 10^-2 | 8.99 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.25 × 10^-7 | 6.70 × 10^-11
F18 | 2.67 × 10^-9 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F19 | 3.34 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F20 | 7.70 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | 9.92 × 10^-11 | 4.08 × 10^-11 | 5.19 × 10^-7 | 2.92 × 10^-9
F21 | 2.05 × 10^-3 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.82 × 10^-10 | 3.02 × 10^-11
F22 | 5.86 × 10^-6 | 1.78 × 10^-10 | 3.69 × 10^-11 | 1.86 × 10^-3 | 4.08 × 10^-11 | 1.01 × 10^-8 | 8.89 × 10^-10
F23 | 1.68 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.20 × 10^-10 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F24 | 2.24 × 10^-2 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.18 × 10^-9 | 3.02 × 10^-11
F25 | 2.53 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.46 × 10^-10 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.99 × 10^-2
F26 | 9.93 × 10^-2 | 3.34 × 10^-11 | 3.02 × 10^-11 | 7.96 × 10^-3 | 3.02 × 10^-11 | 1.21 × 10^-10 | 3.02 × 10^-11
F27 | 1.68 × 10^-3 | 8.48 × 10^-9 | 3.02 × 10^-11 | 3.34 × 10^-11 | 3.02 × 10^-11 | 4.98 × 10^-11 | 9.06 × 10^-8
F28 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 2.95 × 10^-11
F29 | 7.22 × 10^-6 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.82 × 10^-10 | 3.02 × 10^-11
F30 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
Table 14. Average ranking of the Friedman test on CEC-2017 (30 dimensions).
Algorithm | mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
Mean rank | 1.1379 | 2.1034 | 5.8258 | 6.4793 | 4.2414 | 7.7241 | 4.1379 | 4.0345
Final rank | 1 | 2 | 6 | 7 | 5 | 8 | 4 | 3
Table 15. Logical values of the Bonferroni–Holm tests for mMPA-OC compared with the other algorithms on CEC-2017 (30 dimensions).
mMPA-OC & MPA | mMPA-OC & PSO | mMPA-OC & SCA | mMPA-OC & SSA | mMPA-OC & AOA | mMPA-OC & GWO | mMPA-OC & DE
F1 | 1 | 1 | 1 | 0 | 1 | 1 | 1
F3 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F4 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F5 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F6 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F7 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F8 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F9 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F10 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F11 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F12 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F13 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F14 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F15 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F16 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F17 | 0 | 1 | 1 | 1 | 1 | 1 | 0
F18 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F19 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F20 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F21 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F22 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F23 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F24 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F25 | 1 | 1 | 1 | 1 | 1 | 1 | 0
F26 | 0 | 1 | 1 | 0 | 1 | 1 | 1
F27 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F28 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F29 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F30 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Table 16. Comparison of experimental results in CEC-2019.
mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
F1mean11.07968,044,536.86,771,320.3431,511,907.7727,077,275.73674,023.99467132100.557
Best11610,350.777635.575981,411.93281.055611,317,465.473
Std00.18819,818,726.75510,233,145.431,182,947.65412,124,639.33141,898.66323,705,531.336
F2mean3.034610.77193745.42344632.69771142.694412,917.2973471.81434382.4856
Best2.10373.40581057.36241308.569261.55187091.80314.98592698.3515
Std0.29226.29572017.67462137.3641842.60563155.681312.5551796.7693
F3mean1.45121.49578.893210.20064.02110.4492.88157.9373
Best114.41897.46011.53787.92911.00316.1993
Std0.31780.35541.9060.97061.54991.07071.9540.5986
F4mean11.183913.868434.730553.253455.888363.2721.179217.0837
Best5.97486.969816.292836.329123.88429.08319.09712.854
Std2.76114.188616.65098.349621.20311.83837.7772.1346
F5mean1.05241.08682.129711.71141.241288.53761.94511.1955
Best1.01521.01231.78454.00341.061520.68591.30751.0946
Std0.02530.04860.20974.98630.124631.0620.80140.0633
F6mean1.16331.48026.74848.31226.4611.09133.64321.9132
Best1.00351.00844.07185.02311.99067.3531.41741.056
Std0.20910.35931.67781.2971.9731.23871.20520.7157
F7mean523.6464584.28821177.6061627.64511287.9661424.4459761.5338824.5373
Best249.7686241.8689536.86331264.7746638.5018930.3358169.6064535.2754
Std143.0002191.1648318.3026202.6899353.3968249.2289321.0158155.7938
F8mean3.44093.68564.54874.57054.634.72044.05144.0483
Best2.62032.68244.01783.89773.28563.86772.62863.1103
Std0.36710.35390.29930.22390.51710.3460.53230.3036
F9mean1.09071.10041.4631.74171.36783.11361.21281.2744
Best1.03661.01651.23111.41921.10261.27371.10071.1769
Std0.02650.03910.16850.33980.19090.78940.0760.0486
F10mean19.080620.342921.457721.502720.449921.142321.22721.1906
Best1.00051.024921.199321.32834.574220.966914.014221.118
Std5.87843.64860.09210.07492.99840.07041.36640.0456
Table 17. p-values of the Wilcoxon rank-sum test of mMPA-OC and the comparison algorithms on CEC-2019.
mMPA-OC & MPA | mMPA-OC & PSO | mMPA-OC & SCA | mMPA-OC & SSA | mMPA-OC & AOA | mMPA-OC & GWO | mMPA-OC & DE
F1 | 4.06 × 10^-6 | 2.98 × 10^-11 | 2.98 × 10^-11 | 2.98 × 10^-11 | 2.98 × 10^-11 | 3.44 × 10^-10 | 2.98 × 10^-11
F2 | 3.34 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11
F3 | 2.42 × 10^-2 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.33 × 10^-10 | 3.02 × 10^-11 | 1.86 × 10^-6 | 3.02 × 10^-11
F4 | 3.34 × 10^-3 | 3.69 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 2.83 × 10^-8 | 1.41 × 10^-9
F5 | 4.71 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | 2.61 × 10^-10 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.50 × 10^-11
F6 | 1.25 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 6.07 × 10^-11 | 3.26 × 10^-7
F7 | 1.86 × 10^-1 | 2.61 × 10^-10 | 3.02 × 10^-11 | 1.61 × 10^-10 | 3.02 × 10^-11 | 1.44 × 10^-3 | 2.39 × 10^-8
F8 | 1.50 × 10^-2 | 4.08 × 10^-11 | 4.08 × 10^-11 | 1.41 × 10^-9 | 4.08 × 10^-11 | 3.09 × 10^-6 | 2.83 × 10^-8
F9 | 1.58 × 10^-1 | 3.02 × 10^-11 | 3.02 × 10^-11 | 2.87 × 10^-10 | 3.02 × 10^-11 | 3.16 × 10^-10 | 3.02 × 10^-11
F10 | 1.71 × 10^-1 | 3.02 × 10^-11 | 3.02 × 10^-11 | 2.00 × 10^-5 | 6.72 × 10^-10 | 4.20 × 10^-10 | 3.02 × 10^-11
Table 18. Average ranking of the Friedman test on CEC-2019.
Algorithm | mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
Mean rank | 1 | 2 | 5.9 | 6.8 | 4.9 | 7.3 | 3.8 | 4.3
Final rank | 1 | 2 | 6 | 7 | 5 | 8 | 3 | 4
Table 19. Logical values of the Bonferroni–Holm tests for mMPA-OC compared with the other algorithms on CEC-2019.
mMPA-OC & MPA | mMPA-OC & PSO | mMPA-OC & SCA | mMPA-OC & SSA | mMPA-OC & AOA | mMPA-OC & GWO | mMPA-OC & DE
F1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F2 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F3 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F4 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F5 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F6 | 1 | 1 | 1 | 1 | 1 | 1 | 1
F7 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F8 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F9 | 0 | 1 | 1 | 1 | 1 | 1 | 1
F10 | 0 | 1 | 1 | 1 | 1 | 1 | 1
Table 20. Comparison results between mMPA-OC and the other seven meta-heuristics algorithms on the welded beam design problem.
Algorithm | mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
Mean | 1.69 × 10^1 | 1.70 × 10^1 | 2.59 × 10^1 | 1.71 × 10^1 | 1.95 × 10^1 | 1.95 × 10^1 | 1.09 × 10^14 | 1.09 × 10^14
Min | 1.70 × 10^1 | 1.70 × 10^1 | 1.94 × 10^1 | 1.70 × 10^1 | 1.83 × 10^1 | 1.72 × 10^1 | 1.09 × 10^14 | 1.09 × 10^14
SD | 3.25 × 10^-4 | 1.10 × 10^-2 | 3.76 × 10^-1 | 7.14 × 10^-3 | 9.49 × 10^-2 | 2.69 × 10^-1 | 0 | 4.90 × 10^-1
Table 21. Comparison results between mMPA-OC and the other seven meta-heuristics algorithms on the pressure vessel design problem.
Algorithm | mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
Mean | 2302.55 | 2302.59 | 5778.64 | 5227.90 | 3507.21 | 204,449.50 | 204,449.50 | 204,584.87
Min | 2302.55 | 2302.55 | 3797.11 | 2322.54 | 2335.69 | 204,345.84 | 204,345.84 | 204,323.08
SD | 0.00 | 0.17 | 1148.71 | 1582.19 | 345.47 | 97.97 | 97.97 | 367.49
Table 22. Comparison results between mMPA-OC and the other seven meta-heuristics on the gear train design problem.
Algorithm | mMPA-OC | MPA | PSO | SCA | SSA | AOA | GWO | DE
Mean | 2.00 × 10^-9 | 2.16 × 10^-9 | 9.45 × 10^-9 | 1.01 × 10^-8 | 1.14 × 10^-9 | 8.53 × 10^-8 | 6.83 × 10^-10 | 3.13 × 10^-10
Min | 2.70 × 10^-12 | 2.70 × 10^-12 | 9.94 × 10^-11 | 2.31 × 10^-11 | 2.31 × 10^-11 | 3.30 × 10^-9 | 2.70 × 10^-12 | 2.70 × 10^-12
SD | 1.32 × 10^-9 | 2.34 × 10^-9 | 1.10 × 10^-8 | 1.63 × 10^-8 | 8.28 × 10^-10 | 1.13 × 10^-7 | 5.68 × 10^-10 | 3.98 × 10^-10
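The gear train design problem asks for four integer tooth counts whose gear ratio best approximates 1/6.931, with the squared ratio error as the objective. A sketch of this standard formulation is shown below (the argument order of the tooth counts is a convention chosen here, not taken from the paper); the best known solution uses the tooth counts 49, 16, 19, 43 (up to permutation), whose cost of about 2.70 × 10^-12 matches the Min entries above.

```python
def gear_train_cost(ta, tb, td, tf):
    # Gear train design: minimize the squared error between the realized
    # gear ratio tb*td / (ta*tf) and the target ratio 1/6.931.
    return (1.0 / 6.931 - (tb * td) / (ta * tf)) ** 2

print(gear_train_cost(49, 16, 19, 43))  # cost on the order of 1e-12
```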
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Chen, L.; Hao, C.; Ma, Y. A Multi-Disturbance Marine Predator Algorithm Based on Oppositional Learning and Compound Mutation. Electronics 2022, 11, 4087. https://doi.org/10.3390/electronics11244087