Article

Evaluation of Marine Predator Algorithm by Using Engineering Optimisation Problems

Department of Informatics and Computers, University of Ostrava, 30. dubna 22, 70103 Ostrava, Czech Republic
Mathematics 2023, 11(23), 4716; https://doi.org/10.3390/math11234716
Submission received: 30 October 2023 / Revised: 18 November 2023 / Accepted: 20 November 2023 / Published: 21 November 2023
(This article belongs to the Special Issue Computational Optimization with Differential-Algebraic Constraints)

Abstract
This paper provides a real application of a popular swarm-intelligence optimisation method. The aim is to analyse the efficiency of various settings of the marine predator algorithm (MPA). Four crucial numerical parameters of the MPA are statistically analysed to propose the most efficient setting for solving engineering problems. Besides the population size, the particle velocity parameter P, the Lévy flight parameter β, and the fish aggregating device (FADs) probability are studied. In total, 193 various settings, including fixed values and dynamic changes of the MPA parameters, are experimentally compared when solving 13 engineering problems. Standard statistical approaches are employed to highlight significant differences between the various MPA settings. The settings of two MPA parameters (P, FADs) significantly influence MPA performance. Three newly proposed MPA settings significantly outperform the original variant. The best results are provided by the MPA variant with a dynamic linear change of P from 0.5 to 0. This parameter influences the velocity of prey and predator individuals in all three stages of the MPA search process. Decreasing the value of P showed that reducing the velocity of individuals during the search provides good performance. Further, lower efficiency of the MPA with higher FADs values was detected, which means that less frequent use of fish aggregating devices (FADs) can increase the solvability of engineering problems. Regarding the population size, lower values (N = 10) provided significantly better results than higher values (N = 500).

1. Introduction

In many areas of human life, an optimal system setting is required to significantly decrease time and cost demands. A popular area of real-world optimisation is represented by engineering tasks such as product design, optimal shape, resource scheduling, space mapping, etc. [1]. A better solution to engineering problems impacts the economic and ecological aspects of human activities. There are many approaches to solving engineering optimisation problems, and the group of efficient optimisers inspired by natural systems and particles (hunting, collecting food, reproducing, building nests, etc.) is called swarm intelligence (SI) algorithms. These methods model the behaviour of particles independently and also the relationships between the particles [2].
The history of SI algorithms began in 1995, when the particle swarm optimisation (PSO) algorithm was proposed [3]. During almost three decades of development, many variants of SI algorithms have been designed. Besides the PSO, popular variants of SI algorithms include the grey wolf optimiser [4], artificial bee colony [5], cuckoo search [6], firefly algorithm [7], self-organising migration algorithm [8], and many others. The efficiency of selected SI optimisation methods was analysed and compared in many experimental studies [9,10,11,12]. This paper analyses the performance of a recently proposed SI metaheuristic inspired by the widespread foraging strategy of marine predators, called the marine predator algorithm (MPA).
Before studying the efficiency of the MPA, a definition of the global optimisation (GO) problem is provided. The GO problem is represented by the objective (goal) function f(x), x = (x_1, x_2, …, x_D) ∈ ℝ^D, defined on the D-dimensional search space Ω bounded by Ω = ∏_{j=1}^{D} [a_j, b_j], a_j < b_j, j = 1, 2, …, D. Then, the point x* is called the global minimum of the problem if f(x*) ≤ f(x) for all x ∈ Ω.
Regarding real-world engineering problems, the search space Ω is constrained by equality (h) or inequality (g) limits given by:
g_i(x) ≤ 0, i = 1, 2, …, n_g,
h_i(x) = 0, i = 1, 2, …, n_h.  (1)
These constraints define a feasible area where the solutions should be located. A solution is feasible only if it satisfies all the constraints. Notice that a positive tolerance band is taken as an acceptable region in the case of equality constraints. Finally, the solutions are evaluated by the objective function f and penalised for leaving the feasible area.
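The penalised evaluation described above can be sketched as follows. The static penalty weight and the equality tolerance `tol` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def penalised_value(f, g_list, h_list, x, tol=1e-4, weight=1e6):
    """Evaluate f(x) plus a static penalty for leaving the feasible area.

    g_list: inequality constraints, feasible when g_i(x) <= 0
    h_list: equality constraints, accepted within a tolerance band +/- tol
    (tol and weight are illustrative choices, not taken from the paper)
    """
    violation = sum(max(0.0, g(x)) for g in g_list)
    violation += sum(max(0.0, abs(h(x)) - tol) for h in h_list)
    return f(x) + weight * violation

# toy problem: minimise x^2 + y^2 subject to x + y >= 1
f = lambda v: v[0] ** 2 + v[1] ** 2
g = lambda v: 1.0 - (v[0] + v[1])   # rewritten into the g(x) <= 0 form
feasible = penalised_value(f, [g], [], np.array([0.5, 0.5]))
infeasible = penalised_value(f, [g], [], np.array([0.0, 0.0]))
```

The feasible point is scored by its objective value alone, while the infeasible one receives a large penalty, so any minimiser is pushed back toward the feasible region.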
This paper aims to study the control parameters of the MPA algorithm to achieve good performance on real-world engineering problems. The MPA algorithm was selected for this experiment based on preliminary experiments in which it achieved very promising results compared with well-known swarm-intelligence algorithms. The original MPA algorithm uses three numerical control parameters, and the authors of this method recommended some combinations of their values. Here, 193 various settings of four MPA parameters (including the population size N) are implemented, evaluated, and compared statistically. The achieved results provide researchers insight into the efficiency of the MPA settings when solving engineering problems. Moreover, the newly proposed dynamic change of the MPA parameters outperforms the original MPA settings.
The rest of the paper is organised as follows. A brief description of the marine predator algorithm and its control parameters is given in Section 2 and Section 3. Several enhanced MPA variants and their applications are reviewed in Section 2.2. The settings of the experiment, the engineering problems, and the results of the comparison are discussed in Section 4 and Section 5. The conclusions and some recommendations are presented in Section 6.

2. MPA: Marine Predator Algorithm

In 2020, Faramarzi et al. introduced a new nature-inspired swarm-intelligence metaheuristic called the marine predator algorithm (MPA), which uses Lévy flight and Brownian motion to model the movements of predators in the ocean [13]. The main idea of the MPA is to simulate a typical foraging pattern of predators (e.g., tuna, sunfish, sharks) under the water. Many researchers have found that the most accurate model of the moving and food-searching strategy of marine predators is Lévy flight, represented by many small steps with occasional long jumps. Moreover, Brownian motion enables 'smooth' walking in some stages of the particles' search process; it samples random values from the Gaussian distribution. The Lévy flight provides random-walk steps defined by the Lévy distribution with flight length x_j and power-law exponent α ∈ (1, 2):
L(x_j) ∝ |x_j|^(−1−α).  (2)
Brownian motion and Lévy flight produce a random movement of elements in the predefined search space. The main difference between these two techniques is that whereas Brownian motion is defined by steps of a similar length, Lévy flight enables occasional bigger steps (see Figure 1).
Both methods start from the same position and take steps in different directions. The step sizes of Brownian motion are more consistent (rather 'smooth'), whereas Lévy flight produces steps of various sizes.
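The two step generators can be sketched in Python. The Lévy steps below use the Mantegna algorithm, a standard way of sampling Lévy-stable step lengths; the MPA description here does not prescribe this exact sampler, so treat it as one common choice.

```python
import numpy as np
from math import gamma, sin, pi

def brownian_steps(n, rng):
    """Brownian motion: steps drawn from a standard normal distribution."""
    return rng.standard_normal(n)

def levy_steps(n, rng, beta=1.5):
    """Levy-stable steps via the Mantegna algorithm (beta = power-law exponent)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.standard_normal(n) * sigma
    v = rng.standard_normal(n)
    return u / np.abs(v) ** (1 / beta)

rng = np.random.default_rng(1)
# Levy flight occasionally produces much larger steps than Brownian motion
print(np.max(np.abs(brownian_steps(1000, rng))), np.max(np.abs(levy_steps(1000, rng))))
```

With enough samples, the heavy tail of the Lévy distribution makes its largest step dominate the largest Brownian step, which is exactly the occasional-long-jump behaviour described above.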

2.1. Three Phases of MPA

The MPA method belongs to the set of nature-inspired, swarm-intelligence, population-based optimisation algorithms. As with most population-based algorithms, the MPA starts with a randomly initialised population P in the search space Ω. Besides the population (swarm) of predators, the elite predator position (Elite) is stored to be used in the moving process. After the initialisation, the update process of the predators' positions starts. This process is represented by three phases of moving the predator and prey, as depicted in Figure 2.

2.1.1. First Phase

In the first phase, the predator's moving process is initialised, and the highest foraging velocity (speed) is achieved. The idea is to simulate the exploration phase, where areas of potentially good food sources are detected. This phase of the MPA occurs in the first third of the optimisation process, when the number of function evaluations (FES) is lower than one third of the total number of function evaluations allocated for the run (maxFES). Then, the new position of the particle is updated using the current position (x_i) and a step:
x_i = x_i + P · rand(0, 1) · R_B · (Elite − R_B · x_i)  (3)
where P ∈ (0, 1) is the input control parameter of the first stage, R_B is a vector of random numbers from the Brownian movement, and Elite represents the best particle position in the history of the run.
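A minimal NumPy sketch of this phase-1 update, assuming element-wise products and an (N, D) population matrix (the shapes are illustrative assumptions):

```python
import numpy as np

def phase1_update(x, elite, P, rng):
    """Phase-1 prey update: Brownian-driven exploration around the Elite.

    x: (N, D) population, elite: (D,) best position found so far,
    P: velocity control parameter; all products are element-wise.
    """
    RB = rng.standard_normal(x.shape)            # Brownian random numbers
    step = RB * (elite - RB * x)                 # step size toward the elite
    return x + P * rng.random(x.shape) * step

rng = np.random.default_rng(0)
pop = rng.random((5, 2))
new_pop = phase1_update(pop, pop[0].copy(), P=0.5, rng=rng)
```

Each particle keeps its current position and adds a scaled, randomly weighted step pointing toward the elite, which matches the exploration role of this phase.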

2.1.2. Second Phase

The second phase occurs when the current number of function evaluations is in the second third of the run, maxFES/3 ≤ FES ≤ 2·maxFES/3. This phase simulates the situation where prey and predator move at similar velocities because both are foraging for food. In this phase, exploration and exploitation occur simultaneously: half of the population explores the search space, and the second half is used for exploitation. Thus, particles (x_i) in the first half of the population are updated using Lévy flight:
x_i = x_i + P · rand(0, 1) · R_L · (Elite − R_L · x_i)  (4)
and the second half of the population is updated, employing the Brownian movement:
x_i = Elite + P · CF · R_B · (R_B · Elite − x_i)  (5)
where R_L and R_B are vectors of random numbers from the Lévy and Brownian movements, Elite is the position of the best solution, P ∈ (0, 1) is the MPA control parameter, and the control parameter of the predator's step is CF = (1 − FES/maxFES)^(2·FES/maxFES).
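The split-population phase-2 update can be sketched under the same assumptions (element-wise products, an (N, D) population, and Mantegna-sampled Lévy numbers; the sampler is an assumption, since the text only says the numbers follow the Lévy distribution):

```python
import numpy as np
from math import gamma, sin, pi

def cf(fes, max_fes):
    """Adaptive step control CF = (1 - FES/maxFES)^(2*FES/maxFES)."""
    t = fes / max_fes
    return (1 - t) ** (2 * t)

def levy(shape, rng, beta=1.5):
    """Levy random numbers via the Mantegna algorithm (an assumed sampler)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.standard_normal(shape) * sigma / np.abs(rng.standard_normal(shape)) ** (1 / beta)

def phase2_update(x, elite, P, fes, max_fes, rng):
    """Phase 2: first half explores with Levy steps, second half exploits with Brownian steps."""
    half = x.shape[0] // 2
    new = x.copy()
    RL = levy(x[:half].shape, rng)
    new[:half] = x[:half] + P * rng.random(x[:half].shape) * RL * (elite - RL * x[:half])
    RB = rng.standard_normal(x[half:].shape)
    new[half:] = elite + P * cf(fes, max_fes) * RB * (RB * elite - x[half:])
    return new
```

Note how CF decays from 1 at the start of the run to 0 at its end, so the Brownian exploitation steps around the elite shrink as the budget is consumed.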

2.1.3. Third Phase

In the third phase (FES > 2·maxFES/3), the lowest velocity of particles is achieved. In this scenario, the predator is faster than the prey, i.e., the particles x_i follow the exploitation update:
x_i = Elite + P · CF · R_L · (R_L · Elite − x_i).  (6)
Predators (e.g., sharks) spend much of their lives near eddy formations or fish aggregating devices (FADs). This effect inspired the authors of the MPA to add a further mechanism to the optimiser to avoid stagnation in the areas of local minima:
x_i = x_i + CF · [x_min + R · (x_max − x_min)] · U,  if r ≤ FADs,
x_i = x_i + [FADs · (1 − r) + r] · (x_{r_1} − x_{r_2}),  otherwise,  (7)
where FADs = 0.2 is the probability of the FADs effect, U is a binary vector obtained by drawing random values from (0, 1) and setting entries below 0.2 to 0 and the remaining entries to 1, x_max and x_min are vectors with the maximal and minimal values of the prey locations, and r_1, r_2 are randomly selected indices of prey. For better illustration, the flowchart of the MPA algorithm is depicted in Figure 3.
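A sketch of this FADs perturbation follows; applying one random draw r to the whole population is a simplification (the mechanism can equally be applied per particle), and the bounds are passed in explicitly:

```python
import numpy as np

def fads_effect(x, x_min, x_max, fads, fes, max_fes, rng):
    """FADs perturbation of Equation (7), applied to the whole population at once."""
    n, d = x.shape
    CF = (1 - fes / max_fes) ** (2 * fes / max_fes)
    r = rng.random()
    if r <= fads:
        # long jump inside the bounds; binary mask U keeps only some dimensions
        U = (rng.random((n, d)) >= fads).astype(float)
        return x + CF * (x_min + rng.random((n, d)) * (x_max - x_min)) * U
    # otherwise, a move along the difference of two randomly chosen prey
    r1, r2 = rng.integers(0, n, size=2)
    return x + (fads * (1 - r) + r) * (x[r1] - x[r2])

rng = np.random.default_rng(4)
pop = rng.random((8, 2))
out = fads_effect(pop, np.zeros(2), np.ones(2), fads=0.2, fes=0, max_fes=9000, rng=rng)
```

The first branch scatters particles across the search box (the stagnation escape), while the second makes a cheaper correlated move, so most of the time the population drifts rather than teleports.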
Briefly, the phases of the MPA balance exploration and exploitation of the search space: the prey explores in the first phase (see Figure 1); then both prey and predators search for food sources (half of the population works on exploration and half on exploitation); finally, the predators exploit the allocated food areas (switching from Brownian to Lévy steps).
The MPA algorithm has been successfully enhanced and applied to various optimisation problems. Some of the most popular MPA applications are presented below.

2.2. Applications of MPA

In 2020, Abdel-Basset et al. proposed a new hybrid model of the improved marine predator algorithm (MPA) to detect COVID-19 [14]. The new variant of MPA introduced a ranking-based mechanism for faster convergence of the population towards the best solution. A new IMPA variant was used to detect COVID-19 in the X-ray images of the human lungs. The proposed IMPA outperformed five nature-inspired optimisers and also the original MPA variant.
In 2020, Al-qaness et al. proposed a model combining the MPA with an Adaptive Neuro-Fuzzy Inference System (ANFIS) to forecast the number of COVID-19 cases in four countries: Italy, the USA, Iran, and Korea [15]. Standard time-series quantifiers measured the quality of the estimated numbers. The proposed MPA-ANFIS model was compared with ABC, ANFIS, FPASSA, GA, PSO, and SCA. The results illustrate that MPA-ANFIS is not the fastest method, but it provides the most accurate forecast of the COVID-19 case numbers in these countries.
In 2020, Sahlol et al. introduced an enhanced MPA variant to classify the COVID-19 images [16]. The authors combined convolutional neural networks (CNN) with MPA to develop a new FO-MPA optimiser. The proposed method is applied to X-ray COVID-19 images from two Kaggle datasets. The results are compared with nine nature-inspired algorithms, including the original MPA. The proposed FO-MPA provided the best mean results. Only the Harris Hawk optimiser and the original MPA found better solutions.
In 2020, Elaziz et al. proposed a model of MPA cooperating with the moth–flame optimiser (MFO) [17]. A newly designed MPAMFO was applied to the segmentation of CT images of COVID-19 cases. The results are compared with eight nature-inspired methods, including the original MPA and MFO. The proposed MPAMFO provided the most robust results in several levels of thresholding of human head CT images.
In 2020, Yousri et al. applied the MPA optimiser in simulation to maximise the energy of photovoltaic plants [18]. The performance of the MPA was measured in simulations of 9 × 9, 16 × 16, and 25 × 25 arrays and compared with three optimisers (MRFO, HHO, and PSO) and the regular TCT connection. The results illustrated that the MPA is the best-performing method, reducing time complexity by 5–20%.
In 2020, Soliman et al. employed the MPA to estimate the parameters of two different marketable triple-diode photovoltaic systems [19]. The experiment’s results are compared with those of four nature-inspired methods. The authors concluded that the MPA can provide accurate results for any marketable photovoltaic system.
In 2020, Abdel-Basst et al. proposed an energy-aware model of MPA to solve the task scheduling of fog computing [20]. The authors introduced two MPA versions—the first improved exploitation capability using the last updated positions instead of the best one. The second used a ranking-based strategy and mutation with the random regeneration of part of the population after a predefined number of iterations. Three MPA variants are compared with five nature-inspired algorithms. The results illustrated the high performance of the proposed MPA variants.
In 2021, Qinsong et al. proposed a modified variant of MPA with an opposition-based learning (OBL) approach based on chaotic maps [21]. Besides OBL, a new updated process of individuals using an inertia weight coefficient was performed. Further, a nonlinear step size for balance between exploration and exploitation was introduced. The efficiency of the new MMPA was measured by artificial problems (CEC 2020) and four engineering problems. The results illustrated the MMPA superiority on most optimisation tasks.
In 2021, Abd Elminaam et al. proposed a new variant of MPA using a k-nearest neighbor strategy [22]. This approach enables the classification of records of the training datasets using Euclidean distances. A new MPA-kNN variant was applied to 18 datasets to classify features and instances. The results were compared with seven other metaheuristics, and the proposed MPA achieved the best performance.
In 2023, Aydemir et al. introduced an elite evolution strategy approach for MPA when solving engineering problems [23]. The idea was to introduce a random mutation for elite individuals, controlling the convergence of MPA. The proposed MPA variant was applied to CEC 2017 problems and several engineering problems. Achieved results were compared with several counterparts and illustrated the performance of the new MPA.
In 2023, Kumar et al. introduced a new variant of the MPA with chaotic maps for engineering problems [24]. The idea was to employ a chaos approach in the MPA to avoid repeated locations (positions) during the search. The CMPA has only one control parameter, P_c, which controls the amount of chaos used in the search process. Twelve various chaos settings were used to achieve the best performance on engineering problems. The results illustrate that different CMPA chaos settings achieved the best results on different engineering problems.
The authors of these studies focused on applying the MPA algorithms to various optimisation problems. The studies did not analyse the setting of the MPA parameters for practical (real-world) use. In this paper, a comprehensive analysis of the MPA control parameters is performed to illustrate the proper setting of this optimiser for future applications. Thirteen real-world engineering optimisation problems were used to illustrate the performance of various MPA settings.

3. MPA Control Parameters

The MPA algorithm is controlled by several parameters (see Equations (3)–(7)). Besides the population size N, three crucial numerical parameters play a significant role in the performance of this optimiser when searching for the solution of a problem. The aim of this paper is to study various MPA settings to gain insight into the efficiency of this optimiser when solving engineering problems. Therefore, several roughly equidistant values were proposed for each MPA numerical parameter and combined with the remaining parameters. Small, middle, and large values of the MPA numerical parameters were employed.

3.1. Fish Aggregating Devices Strategy

The first parameter studied here is FADs ∈ (0, 1), employed in Equation (7). This value controls how often the predators employ the fish aggregating device (FAD) strategy. The authors recommended the value FADs = 0.2. Three fixed values are studied in the presented experiment, FADs ∈ {0.2, 0.5, 0.8}. Moreover, six linear-change settings are used, where the value of FADs is linearly updated (increased or decreased) during the search process: (0-1), (1-0), (0-0.5), (0.5-1), (1-0.5), (0.5-0).

3.2. Velocity of Particles

The velocity of predator and prey individuals in the MPA is controlled by P ∈ (0, 1), and the authors recommended the setting P = 0.5. This parameter is used in Equations (3)–(6). In this study, two further values of this parameter are added; therefore, three fixed values are used in the experiment, P ∈ {0.1, 0.5, 0.9}. Further, six linear-change settings are investigated, where the value is linearly updated (increased or decreased) during the search process: (0-1), (1-0), (0-0.5), (0.5-1), (1-0.5), (0.5-0).
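A linear-change setting such as (0.5-0) can be implemented as a simple interpolation over the evaluation budget. Whether the parameter is updated per iteration or per evaluation is not stated here, so interpolating on FES is one plausible reading:

```python
def linear_schedule(start, end, fes, max_fes):
    """Linearly interpolate a control parameter from `start` at FES = 0
    to `end` at FES = maxFES."""
    return start + (end - start) * fes / max_fes

# parameter P under the (0.5-0) setting at the start, middle, and end of a 9000-FES run
schedule = [linear_schedule(0.5, 0.0, t, 9000) for t in (0, 4500, 9000)]
```

The same helper covers all the increasing and decreasing settings of P, FADs, and β by swapping the start and end values.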

3.3. Lévy Flight

The Lévy flight strategy is used to move particles in the second and third phases (see Equations (4) and (6)). This movement strategy is based on the vector R_L, controlled by the parameter β. The authors of the MPA recommended β = 1.5; three fixed values are used in this study, β ∈ {1, 1.25, 1.5}. Moreover, six linear-change settings of the β parameter are used, where the value is linearly updated (increased or decreased) during the search process: (0-1.5), (1.5-0), (0-0.75), (0.75-1.5), (1.5-0.75), (0.75-0).

4. Experiment Settings

This study aims to analyse the performance of three crucial MPA control parameters. As described above, several settings of P, FADs, and β were included in the experiment. In total, 188 combinations of the MPA control parameters were employed for this study. Moreover, five variants of the MPA with the recommended settings and a linearly decreased population size are also included in the comparison. To make the experiment results more transparent, the MPA variants are distinguished by four numbers and labelled '(MPA_)N_FADs_P_β'. For example, 'MPA_10_0.2_0.5_1.5' (or '10_0.2_0.5_1.5') denotes the original settings of the MPA with population size N = 10.
Thirteen real-world engineering problems were selected for this study to show the performance of the MPA variants. All the problems are minimisation problems, i.e., the minimal function value is searched for. For each problem and MPA setting, 30 independent runs are performed to achieve robustness of the statistical comparison. Each run is divided into 11 equidistant stages to illustrate the performance of the methods during the search process. For each run of an algorithm on a problem, the search terminated when FES = 9000 function evaluations were reached. These real-world engineering problems were also collected for another study, where more detail is provided [25]. All the problems are constrained problems, where the search space Ω is bounded by linear and nonlinear limits that define the feasible areas. The dimensionality of these problems (D) and the function value of the true solution (f(x*)) are provided in Table 1.
The dimensionality of the selected engineering problems is rather low (2 ≤ D ≤ 11). This implies lower complexity, and the movement of particles in a low-dimensional space is faster. For better illustration, the design of six engineering problems is provided in Figure 4, Figure 5, Figure 6 and Figure 7.

5. Results

This study compares 193 MPA variants with various settings of the control parameters when solving 13 real-world engineering optimisation problems. The experiment produces a huge amount of result data to analyse. For better illustration, an advanced statistical comparison is performed instead of standard descriptive values and plots. At first, the absolute mean ranks from the Friedman tests of all 193 MPA settings are provided for each stage separately in Table 2, Table 3, Table 4 and Table 5.
Each table row represents one MPA setting, defined by four control parameters in the first four columns. The remaining columns of the table present the absolute mean ranks, where a lower rank means better results of the MPA setting over all 13 engineering problems. The rows are sorted by the ranks in the final stage of the run (last column). As such, the best-performing settings are at the top of the table, and the efficiency of the MPA decreases towards the end of the table. The best results are provided by three MPA settings with the recommended values of FADs and β and a linear change of P: (0.5-0), (0.5-1), (1-0). The recommended MPA setting takes the fourth position.
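The mean ranks behind these tables come from the Friedman procedure. A NumPy-only sketch on a small invented result matrix shows how the per-problem ranks, their means, and the Friedman statistic are computed (the numbers are purely illustrative):

```python
import numpy as np

# toy data: rows = problems, columns = algorithm settings (smaller value = better)
results = np.array([
    [1.2, 1.5, 1.1],
    [0.8, 0.9, 0.7],
    [2.0, 2.1, 1.9],
    [1.0, 1.4, 1.2],
])

n, k = results.shape
# rank the settings within each problem (1 = best); ties are ignored in this sketch
ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
mean_ranks = ranks.mean(axis=0)
# Friedman chi-square statistic from the mean ranks
chi2 = 12 * n / (k * (k + 1)) * np.sum((mean_ranks - (k + 1) / 2) ** 2)
```

On this toy matrix the third setting wins most problems, so it receives the lowest mean rank, mirroring how the settings at the top of Tables 2–5 are identified.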
It is possible to provide several remarks regarding MPA parameters:
  • The efficiency of the MPA decreases with increasing population size.
  • The best linear population size reduction is in the 20th position.
  • Twenty of the best MPA variants use recommended FADs = 0.2 .
  • Six of the worst performing MPA settings employ FADs = 0.8 .
  • Seven out of seventeen of the best-performing MPA variants use a linear change of β .
  • The variant MPA 10_0-0.5_0.5_1.5 (original settings and linear change of FADs) achieves good results in the first five search stages. This information is interesting for cases where only low time budgets are available.
Decreasing the population size of MPA did not provide better results compared to a fixed small population size ( N = 10 ). Therefore, these variants were removed from the illustration of the mean ranks from the Friedman tests (Figure 8). The settings of all MPA variants are labelled on the axes, where values of β and N are on the vertical axis, and P and FADs are on the horizontal axis. The rectangle is divided into nine smaller rectangles based on β and P values. Absolute mean ranks denoting the ten best-performing and ten worst-performing MPA variants are included in this figure.
The mean ranks are encoded by colour, from the lowest mean rank (best-performing setting, dark blue) to the highest mean rank (worst-performing setting, dark red), according to the colour bar on the right side. Dark blue horizontal rows illustrate the good performance of the small population size. A decreasing saturation of blue from the left to the right side of the smaller rectangles means that the efficiency of the MPA decreases with an increasing value of FADs. Conversely, the colour patterns of the smaller rectangles are similar to each other, which points to the smaller influence of the P and β parameters. Analysing Table 2, Table 3, Table 4 and Table 5 and Figure 8 results in the selection of the best-performing MPA variants for further analysis.
The absolute ranks of the top nine MPA variants during the 11 stages are depicted in Figure 9. The positions of the lines on the right side illustrate the final efficiency of the optimisers, where dots mark the recommended original setting. The best setting (10_0.2_0.5-0_1.5) is very efficient during the whole search process. Surprisingly, the combination 10_0-0.5_0.5_1.5 (increasing FADs) is very efficient in the early stages and then becomes substantially worse. Conversely, the efficiency of 10_0.2_1-0_1.5 (decreasing P) increases from the worst position to third place.
The next step of the MPA variant analysis is to compare the median values of the best-performing settings for each problem separately (Table 6). The median values were computed from the independent runs of the algorithms in the last stage. The results of the original MPA settings are in the first column. Although the variability of the achieved results is not substantial, the best-achieved result for each problem is highlighted.
The most frequently best-performing MPA variant was 10_0.2_0.5-0_1.5 (five times out of 13). The original setting never achieved the best result on any individual problem. Kruskal–Wallis tests were also performed to show significant differences between the nine algorithms for each problem separately. The tests were performed in each stage of the run, and the results are significant if the achieved significance level is lower than 0.05. In the final stage, significant results occurred only for the 11th problem. There were also significant differences in some stages of problems 3, 6, 8, and 10.
Finally, the numbers of first, second, third, and last positions of the nine best-performing MPA variants over all problems and stages are illustrated in Table 7. The results of the best-performing methods for each problem and stage are highlighted. The algorithms are ordered such that the most frequently best-performing one is on the left side of the table. In most cases, the best results were achieved by the setting 10_0.2_0.5-0_1.5, which outperformed the other MPA variants from the 6th to the 11th stage. This setting provides the best results in 28 cases out of 143 (11 stages, 13 problems). On the other hand, it achieves the worst results in five cases. Two other combinations of the MPA parameters achieved very promising results in the early stages of the optimisation process: 10_0.2_0.5_1.5-0.75 and 10_0-0.5_0.5_1.5.
More details on the performance of the nine best-performing variants come from the two-sample non-parametric Wilcoxon test. The recommended (original) MPA setting is compared with the remaining variants separately on each problem (Table 8). The numbers of better results (bet.), significantly better results from the Wilcoxon tests (s.bet.), worse results (wor.), and significantly worse results (s.wor.) of the eight newly proposed MPA variants are provided. The methods are ordered from better performing on the left side to worse performing on the right side. The variant 10_0.2_0.5-0_1.5 achieved similar results to the second-best variant 10_0.2_1-0_1.5, which was outperformed three times by the original MPA. The variant 10_0.2_0.5_1.5-0.75 achieved the same number of wins (4) and losses (4) compared with the original MPA.
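This kind of pairwise comparison can be reproduced with a two-sample Wilcoxon rank-sum test. Below is a NumPy-only sketch using the normal approximation without tie correction (real studies should use a library routine such as scipy.stats.ranksums); the two samples of 30 runs are invented for illustration:

```python
import numpy as np

def rank_sum_z(a, b):
    """Two-sample Wilcoxon rank-sum statistic (normal approximation, no tie
    correction): a strongly negative z means sample `a` ranks lower, i.e.
    better for minimisation, than sample `b`."""
    n1, n2 = len(a), len(b)
    combined = np.concatenate([a, b])
    ranks = np.argsort(np.argsort(combined)) + 1.0
    r1 = ranks[:n1].sum()
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (r1 - mu) / sigma

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 30)   # e.g. 30 runs of a tuned MPA variant
b = rng.normal(2.0, 1.0, 30)   # 30 runs of the original setting
z = rank_sum_z(a, b)
```

A |z| above 1.96 corresponds to significance at the 0.05 level, matching the "significantly better/worse" counts reported in Table 8.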

6. Conclusions

This paper studied the performance of a successful swarm-intelligence optimisation heuristic inspired by marine predators (MPA). The study aims to analyse the MPA parameters when solving real-world engineering problems. Besides the population size N, the MPA optimiser uses three numerical parameters: P, FADs, and β. The authors of the MPA recommended P = 0.5, FADs = 0.2, and β = 1.5. Although the MPA was originally introduced alongside applications to real-world problems, this paper performed a more comprehensive experimental comparison of its various settings.
Four fixed values and the linear reduction of N were used. Further, three fixed values for each of the three MPA parameters were employed, along with six various linear (increase, decrease) settings. Therefore, 193 various settings of the MPA parameters were compared when solving 13 real-world constrained engineering problems. Achieved results were statistically analysed to illustrate significant differences.
The original MPA setting was outperformed by three variants with a linear change of the particle velocity parameter, P ∈ {(0.5-0), (0.5-1), (1-0)}. Decreasing values of P (from 1 or 0.5 to 0) represent the situation where the velocity of the MPA individuals decreases during the search process. Interestingly, an increasing velocity (0.5-1) also outperforms the original MPA setting. This finding is very relevant to real applications of this optimiser.
Further, it was found that the MPA performance decreases when a bigger population size is applied. This fact is caused mostly by the low dimensionality of the real-world problems, even though the solutions of the engineering problems are restricted to the feasible regions.
Finally, the efficiency of the MPA is low when a higher value of FADs is employed. This parameter directly influences the frequency of using eddy formations and fish aggregating devices during the search process. From Equation (7), it is clear that applying the FADs effect randomises the position of the current solution, which slows down the convergence of the MPA population.
A different setting of the β parameter has no substantial influence on the MPA results. This parameter is called the power-law index, and it controls the Lévy flight process, which is used to generate new positions in the second and third phases. Notice that β ∈ (1, 2).
The most efficient MPA variant in the comparison (10_0.2_0.5-0_1.5) has robust performance during all 11 stages of the search process (based on the absolute rank). This MPA variant achieved the best solution in 5 problems out of 13. Better results were achieved by variant 10_0.2_0.5_1 on the first problem.
The achieved results illustrate the influence of the MPA parameters when solving engineering problems. The linear change of the P value has the potential to be analysed further in future research.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data were measured in MATLAB 2020b during the experiments.

Conflicts of Interest

The funders had no role in the design of the study, in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Abualigah, L.; Abd Elaziz, M.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-qaness, M.A.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110.
2. Chu, S.C.; Huang, H.C.; Roddick, J.F.; Pan, J.S. Overview of Algorithms for Swarm Intelligence. In Computational Collective Intelligence, Technologies and Applications, Proceedings of the Third International Conference, ICCCI 2011, Gdynia, Poland, 21–23 September 2011; Jędrzejowicz, P., Nguyen, N.T., Hoang, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 28–41.
3. Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
4. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
5. Karaboga, D.; Basturk, B. Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems. In Foundations of Fuzzy Logic and Soft Computing, Proceedings of the 12th International Fuzzy Systems Association World Congress, IFSA 2007, Cancun, Mexico, 18–21 June 2007; Melin, P., Castillo, O., Aguilar, L.T., Kacprzyk, J., Pedrycz, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 789–798.
6. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; IEEE: New York, NY, USA, 2009; pp. 210–214.
7. Yang, X.S. Firefly algorithm, Lévy flights and global optimization. In Research and Development in Intelligent Systems XXVI; Springer: Berlin/Heidelberg, Germany, 2010; pp. 209–218.
8. Zelinka, I. SOMA—Self-Organizing Migrating Algorithm. In New Optimization Techniques in Engineering; Springer: Berlin/Heidelberg, Germany, 2004; pp. 167–217.
9. Bujok, P.; Tvrdik, J.; Polakova, R. Comparison of nature-inspired population-based algorithms on continuous optimisation problems. Swarm Evol. Comput. 2019, 50, 100490.
10. Bujok, P. Enhanced Tree-Seed Algorithm Solving Real-World Problems. In Proceedings of the 2020 7th International Conference on Soft Computing & Machine Intelligence (ISCMI 2020), Stockholm, Sweden, 14–15 November 2020; IEEE: New York, NY, USA, 2020; pp. 12–16.
11. Bujok, P. Harris Hawks Optimisation: Using of an Archive. In Artificial Intelligence and Soft Computing, Proceedings of the 20th International Conference on Artificial Intelligence and Soft Computing (ICAISC 2021), Virtual Event, 21–23 June 2021; Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J., Eds.; Lecture Notes in Artificial Intelligence; Springer: Cham, Switzerland, 2021; Volume 12854, pp. 415–423.
12. Bujok, P.; Lacko, M. Slime Mould Algorithm: An Experimental Study of Nature-Inspired Optimiser. In Bioinspired Optimization Methods and Their Applications, Proceedings of the 10th International Conference on Bioinspired Optimization Methods and Their Applications (BIOMA), Maribor, Slovenia, 17–18 November 2022; Mernik, M., Eftimov, T., Crepinsek, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; Volume 13627, pp. 201–215.
13. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
14. Abdel-Basset, M.; Mohamed, R.; Elhoseny, M.; Chakrabortty, R.K.; Ryan, M. A Hybrid COVID-19 Detection Model Using an Improved Marine Predators Algorithm and a Ranking-Based Diversity Reduction Strategy. IEEE Access 2020, 8, 79521–79540.
15. Al-qaness, M.A.A.; Ewees, A.A.; Fan, H.; Abualigah, L.; Abd Elaziz, M. Marine Predators Algorithm for Forecasting Confirmed Cases of COVID-19 in Italy, USA, Iran and Korea. Int. J. Environ. Res. Public Health 2020, 17, 3520.
16. Sahlol, A.T.; Yousri, D.; Ewees, A.A.; Al-qaness, M.A.A.; Damasevicius, R.; Abd Elaziz, M. COVID-19 image classification using deep features and fractional-order marine predators algorithm. Sci. Rep. 2020, 10, 15364.
17. Abd Elaziz, M.; Ewees, A.A.; Yousri, D.; Alwerfali, H.S.N.; Awad, Q.A.; Lu, S.; Al-Qaness, M.A.A. An Improved Marine Predators Algorithm With Fuzzy Entropy for Multi-Level Thresholding: Real World Example of COVID-19 CT Image Segmentation. IEEE Access 2020, 8, 125306–125330.
18. Yousri, D.; Babu, T.S.; Beshr, E.; Eteiba, M.B.; Allam, D. A Robust Strategy Based on Marine Predators Algorithm for Large Scale Photovoltaic Array Reconfiguration to Mitigate the Partial Shading Effect on the Performance of PV System. IEEE Access 2020, 8, 112407–112426.
19. Soliman, M.A.; Hasanien, H.M.; Alkuhayli, A. Marine Predators Algorithm for Parameters Identification of Triple-Diode Photovoltaic Models. IEEE Access 2020, 8, 155832–155842.
20. Abdel-Basset, M.; Mohamed, R.; Elhoseny, M.; Bashir, A.K.; Jolfaei, A.; Kumar, N. Energy-Aware Marine Predators Algorithm for Task Scheduling in IoT-Based Fog Computing Applications. IEEE Trans. Ind. Inform. 2021, 17, 5068–5076.
21. Fan, Q.; Huang, H.; Chen, Q.; Yao, L.; Yang, K.; Huang, D. A modified self-adaptive marine predators algorithm: Framework and engineering applications. Eng. Comput. 2022, 38, 3269–3294.
22. Abd Elminaam, D.S.; Nabil, A.; Ibraheem, S.A.; Houssein, E.H. An Efficient Marine Predators Algorithm for Feature Selection. IEEE Access 2021, 9, 60136–60153.
23. Aydemir, S.B.; Onay, F.K. Marine predator algorithm with elite strategies for engineering design problems. Concurr. Comput. Pract. Exp. 2023, 35, e7612.
24. Kumar, S.; Yildiz, B.S.; Mehta, P.; Panagant, N.; Sait, S.M.; Mirjalili, S.; Yildiz, A.R. Chaotic marine predators algorithm for global optimization of real-world engineering problems. Knowl.-Based Syst. 2023, 261, 110192.
25. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social Network Search for Solving Engineering Optimization Problems. Comput. Intell. Neurosci. 2021, 2021, 8548639.
Figure 1. Trajectory of steps for Brownian motion and Lévy flight.
Figure 2. Three phases of the marine predator algorithm (MPA).
Figure 3. Flowchart of the marine predator algorithm.
Figure 4. Speed reducer design problem and three-bar truss problem.
Figure 5. Gear train design problem and cantilever beam design problem.
Figure 6. Tubular columns design problem and piston lever problem.
Figure 7. Tension/compression spring design problem and concrete beam design problem.
Figure 8. Absolute ranks from the Friedman test computed for 108 combinations of marine predator algorithm control parameter settings.
Figure 9. Absolute ranks of the best nine marine predator algorithm settings from the Friedman test.
Table 1. Detail of real-world engineering problems—dimensionality D and true solution f ( x * ) [25].
Problem | D | f(x*)
Speed reducer design (Figure 4) | 7 | 2994.4244658
Tension/compression spring design | 3 | 0.012665232788
Pressure vessel design | 4 | 6059.714335048436
Three-bar truss design (Figure 4) | 2 | 263.89584338
Gear train design (Figure 5) | 4 | 2.70085714 × 10^−12
Cantilever beam (Figure 5) | 5 | 1.3399576
Minimise I-beam vertical deflection | 4 | 0.0130741
Tubular column design (Figure 6) | 2 | 26.486361473
Piston lever (Figure 6) | 4 | 8.41269832311
Corrugated bulkhead design | 4 | 6.8429580100808
Car side impact design (Figure 7) | 11 | 22.84296954
Welded beam design | 4 | 1.724852308597366
Reinforced concrete beam design (Figure 7) | 3 | 359.2080
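As an illustration of the problems in Table 1, the three-bar truss design has a compact closed form: minimise the structure weight (2√2·x1 + x2)·l over the cross-sections x1, x2, subject to three stress constraints. A sketch of the evaluation (the constants l = 100 cm and P = σ = 2 kN/cm² follow the common formulation of this benchmark [25]; variable names are illustrative):

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0  # common constants for this benchmark

def three_bar_truss(x1: float, x2: float):
    """Return (weight, constraint values); a point is feasible when all g <= 0."""
    weight = (2 * math.sqrt(2) * x1 + x2) * L
    d = math.sqrt(2) * x1 ** 2 + 2 * x1 * x2
    g = [
        (math.sqrt(2) * x1 + x2) / d * P - SIGMA,  # stress in the loaded bar
        x2 / d * P - SIGMA,                        # stress in the middle bar
        1 / (x1 + math.sqrt(2) * x2) * P - SIGMA,  # stress in the third bar
    ]
    return weight, g

w, g = three_bar_truss(0.78868, 0.40825)  # near the reported optimum
print(round(w, 3))  # close to the 263.896 value from Table 1
```

At the optimum, the first stress constraint is active (g1 ≈ 0), which is what makes this small 2-dimensional problem non-trivial for unconstrained search operators.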
Table 2. Absolute mean ranks from the Friedman tests of 11 stages.
N | FADs | P | β | AR1 | AR2 | AR3 | AR4 | AR5 | AR6 | AR7 | AR8 | AR9 | AR10 | AR11
100.2 0.5 _ 0 1.512231111111
100.2 0.5 _ 1 1.513141779782322
100.2 1 _ 0 1.5192223141091014833
100.20.51.510513111112119444
100.20.5 1.5 _ 0.75 46465554255
100.20.5163812443566
100.20.51.2534327335677
100.20.11.5201891014131613988
100.20.51.51475586681099
100.20.5 0 _ 1.5 12988722681010
100.20.91181519181514128111111
100.2 0.5 _ 0 1.5 _ 0.75 910613128911121212
100.2 1 _ 0.5 1.52323242421201818141313
100.20.5 0 _ 0.75 15131015410711131414
100.20.5 0.75 _ 0 511181318171720201515
100.20.5 0.75 _ 1.5 812141616151313171616
100.20.5 1.5 _ 0 111612913161516161717
100.20.91.252217221720181919191818
100.20.91.52120201919192123221919
19 _ 5 0.20.51.52427272525232216152020
10 0.5 _ 0 0.51.52826262726242422212121
10 0 _ 0.5 0.51.521143111417182222
100.20.111719152323212324232323
100.20.11.251621162118222021242424
100.2 0 _ 1 1.52525212222252525252525
100.50.51.253229292929292929272626
100.50.91.53736323234343232302727
100.50.11.52928282828282827262828
100.50.513333303131313030282929
100.50.913637363432333433343030
100.50.91.253534333333323334333131
100.50.51.53130313030303131323232
10 1 _ 0 0.51.54746423836363535353333
100.2 0 _ 0.5 1.53024252627272728323434
10 0 _ 1 0.51.578112025262626293535
100.50.112631373637373737363636
100.50.11.252732353738383836373737
10 1 _ 0.5 0.51.54851574847444139393838
97 _ 5 0.20.51.57672747165655048403939
10 0.5 _ 1 0.51.53435343535353638384040
97 _ 10 0.20.51.57569737067696352494141
500.2 0.5 _ 0 1.55850514949494947474242
100.80.513838384140404041414343
100.80.51.53940413939393940424444
100.80.51.254139394041414243434545
100.80.91.54041414243434544444646
500.2 1 _ 0 1.55768585353575151524747
100.80.91.254442434444464342454848
100.80.11.54243444642474646464949
100.80.11.254345464748484849485050
100.80.114647474546454750505151
100.80.914544454345424445525252
50 0.5 _ 0 0.51.57875727473737068575353
500.20.51.254955525151505355585454
500.20.5 1.5 _ 0.75 6357615550525862535555
Table 3. Absolute mean ranks from the Friedman tests of 11 stages.
N | FADs | P | β | AR1 | AR2 | AR3 | AR4 | AR5 | AR6 | AR7 | AR8 | AR9 | AR10 | AR11
500.20.515559656459605560595656
500.20.5 0.75 _ 0 6561545754515458545757
500.2 0.5 _ 0 1.5 _ 0.75 5152565452555254555858
500.20.51.55249505060565653565959
500.20.51.55454606061636956606161
500.20.5 0 _ 0.75 6162665657616464626161
500.20.5 0.75 _ 1.5 7163646664546366616262
500.20.5 1.5 _ 0 6960495956626065676363
500.20.5 0 _ 1.5 6764555958535960636464
500.20.11.58674706569686161656565
500.20.91.255665687367677171696666
500.20.118171696762665757666767
500.2 1 _ 0.5 1.56267637272716869646868
500.20.11.258173716470646563706969
500.2 0.5 _ 1 1.56666536864586767687070
500.20.915058676271707272717171
500.20.91.55356626168727373727272
195 _ 5 0.20.51.5107103122111114908985817373
50 0 _ 0.5 0.51.57248485255596670737474
500.2 0 _ 0.5 1.512886787676747474757575
500.2 0 _ 1 1.59679757575767575747676
195 _ 10 0.20.51.5110114118118106898884817777
50 1 _ 0 0.51.5951011079087868380767878
500.50.11.58784818080827776777979
500.50.119485808279807981798080
1000.2 0.5 _ 0 1.513310410510698878789868181
500.50.11.259382838182797679828282
500.50.51.57370767978778077788383
500.50.515977797781818178838484
1000.2 1 _ 0 1.511510010110289919290908585
500.50.51.256876777877788282848686
500.50.91.57083848484858587888787
500.50.91.257481858585848688898888
500.50.916080828483838486888989
50 0 _ 1 0.51.56453596974757883859090
1000.20.5 1.5 _ 0.75 12511210910290939091939191
1000.20.111561301201131021039896959292
1000.20.51109113899297989394929393
1000.20.51.589959189941029493999494
1000.20.11.512712611311610195100100969595
100 0.5 _ 0 0.51.5136123123115117116114101979696
1000.20.5 0.75 _ 0 12111010411291100100921049797
1000.2 0.5 _ 0 1.5 _ 0.75 1081159898951069695919898
1000.20.51.59710810297991151041041059999
1000.2 1 _ 0.5 1.510610611699116113102109109100100
1000.20.5 0.75 _ 1.5 1189310891100114108103102101101
1000.20.11.25142128119120104929710298102102
1000.20.51.251009290878894919794103103
1000.20.5 0 _ 1 .51221181111101119610399100104104
1000.20.5 0 _ 0.75 137109115119108109109105103105105
50 1 _ 0.5 0.51.5101107121121119122117116106106106
1000.20.5 1.5 _ 0 119116961141149710698101107107
1000.20.91.2592111106109115107110107107108108
500.80.11124105111107112110113111108109109
1000.20.91.5859997108105108105108110110110
Table 4. Absolute mean ranks from the Friedman tests of 11 stages.
N | FADs | P | β | AR1 | AR2 | AR3 | AR4 | AR5 | AR6 | AR7 | AR8 | AR9 | AR10 | AR11
1000.20.9191949388103104107113112111111
1000.2 0.5 _ 1 1.5113120112117107111115115113112112
500.80.11.51239799949399111112115113113
500.80.11.2512098949592101101110111114114
100 0 _ 0.5 0.51.5117117959396105112114117115115
1000.2 0 _ 0.5 1.5191142131126122119123118116116116
1000.2 0 _ 1 1.5175136130129124126126122119117117
1000.50.11.25153139135131128121121124123118118
500.80.51.2582898796110112116117118119119
100 1 _ 0 0.51.5162141143136135135134133126120120
500.80.51798992104118118118123120122122
50 0.5 _ 1 0.51.577788686868895106114122122
1000.50.11.5144132127124125124122120122123123
500.80.51.5839088100109117120121124124124
1000.50.11161140132128127123119119121125125
1000.50.51.25105129125133133131130132128126126
1000.50.51.598124126125129130127126125127127
1000.50.51104125129127130129131130130128128
500.80.918891117122123127128128129129129
1000.50.91.5103119128130131132132131131130130
500.80.91.59087115123126128129129132131131
500.80.91.258496100104120125124125127132132
1000.50.91102122134135134133133134134133133
100 0 _ 1 0.51.5130102103105121120125127133134134
1000.50.91.2599121124132132134135135135135135
1000.80.11.25158147141144141144141140139136136
100 1 _ 0.5 0.51.5167143144143145145139137137137137
1000.80.11.5166153146146146139145144140138138
1000.80.51132137140137137140140136136139139
1000.80.51.5114133137139140138142143138140140
1000.80.11165149145145143137138141141142142
1000.80.51.25126138143140138143136139142142142
100 0.5 _ 1 0.51.5129127133134136136137138144143143
1000.80.91.5116135137138144141143142143144144
1000.80.91.25111134138141142146146146145145145
1000.80.91112131139142139142144145146146146
5000.2 1 _ 0 1.5160159156156161163157155151147147
5000.20.91131144147147147147148150147148148
5000.2 1 _ 0.5 1.5154161155148154149156156154149149
5000.20.91.25138151148150148148147148150150150
5000.20.51.5159159158160160153151149148151151
5000.20.91.5143146150151149152149147149152152
5000.2 0.5 _ 0 1.5178181174177183182170166163153153
5000.2 0.5 _ 0 1.5 _ 0.75 148161151157159155150155152154154
5000.20.5 1.5 _ 0.75 188168175162164160161160158155155
5000.20.51.25155157157159157158153153155156156
5000.20.51151163159158153152155158154157157
5000.20.5 1.5 _ 0 187172179161162165154151161158158
5000.20.51.5134150153149155150158157156159159
5000.20.5 0.75 _ 0 186179167164173175172164165160160
Table 5. Absolute mean ranks from the Friedman tests of 11 stages.
N | FADs | P | β | AR1 | AR2 | AR3 | AR4 | AR5 | AR6 | AR7 | AR8 | AR9 | AR10 | AR11
500 0.5 _ 0 0.51.5174173172175179170175169159161161
5000.20.5 0.75 _ 1.5 178169163163166172165161162162162
5000.50.91.5141152154153151156152152157163163
5000.20.5 0 _ 0.75 184180178173176180167162166164164
5000.20.5 0 _ 1 .5190176177170174168180171173165165
5000.50.51.25150165170171165162162170160166166
5000.50.91135148152154158154159163164167167
5000.20.11.5183185182180178186169165167168168
5000.2 0.5 _ 1 1.5180174168168172169177173173169169
500 1 _ 0 0.51.5182177185182187181185185178170170
500 0 _ 0.5 0.51.5185175171169171173164167170171171
5000.50.51.5145162160155152159160159168172172
5000.50.51146156162166156161166168169173173
5000.20.11.25170187186186184178179177171174174
5000.20.11171182183184177168178175176175175
5000.50.11.25168183181183182176168172175176176
5000.50.91.25139145149152151157163174174177177
500 0 _ 1 0.51.5181171165165180174176179179178178
5000.2 0 _ 1 1.5192193192189190193193192183179179
5000.80.91.25149154161172168164174176177180180
5000.2 0 _ 0.5 1.5193192193191193189188188187181181
5000.50.11.5173189188188181177184181180182182
5000.80.91147155166167163179173183182183183
5000.80.11.25164188191193191192187187186184184
5000.50.11189184184187185184181182188185185
500 0.5 _ 1 0.51.5173179173178188188189190193186186
500 1 _ 0.5 0.51.5176186187185189191191189191187187
5000.80.11.5179191190192192190190186185188188
5000.80.11169190189190186187192193190189189
5000.80.51.5157166180181175183171180184190190
5000.80.91.5140164164176169185183178181191191
5000.80.51163167169174167166183184189192192
5000.80.51.25152170177179170171186191192193193
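The absolute ranks reported in Tables 2–5 are mean ranks from Friedman tests: for each stage, the settings are ranked per problem (lower objective value means better rank, ties averaged) and the ranks are averaged over the problems. A minimal, pure-Python illustration of that mean-rank step on toy data (not the paper's code, which used standard statistical tooling):

```python
def mean_ranks(results):
    """results[p][a]: objective value of algorithm a on problem p (lower is better).
    Returns the Friedman mean rank of each algorithm over all problems."""
    n_alg = len(results[0])
    totals = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1  # extend the tie group
            avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(results) for t in totals]

print(mean_ranks([[1.0, 2.0, 2.0], [3.0, 1.0, 2.0]]))  # prints [2.0, 1.75, 2.25]
```

Sorting the settings by these mean ranks, stage by stage, yields exactly the AR1–AR11 columns of Tables 2–5.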
Table 6. Median values for all problems by nine best-performing marine predator algorithms (MPAs) with given fish aggregating devices, P and β (all N = 10 ).
Prob. | 0.2_0.5_1.5 * | 0.2_0.5-0_1.5 | 0.2_0.5_1 | 0.2_0.5_1.25 | 0.2_1-0_1.5 | 0.2_0.5-1_1.5 | 0.2_0.5_0-1.5 | 0.2_0.5_1.5-0.75 | 0-0.5_0.5_1.5
1 | 2995.2 | 2995.335 | 2994.895 | 2995.51 | 2995.09 | 2995.125 | 2995.25 | 2995.115 | 2995.525
2 | 0.012668 | 0.012666 | 0.0126676 | 0.012666 | 0.012667 | 0.012667 | 0.012667 | 0.012666 | 0.012673
3 | 6059.72 | 6059.71 | 6059.715 | 6059.71 | 6059.71 | 6059.71 | 6059.71 | 6059.715 | 6065.095
4 | 263.896 | 263.896 | 263.896 | 263.896 | 263.896 | 263.896 | 263.896 | 263.896 | 263.896
5 | 2.7 × 10^−12 | 2.7 × 10^−12 | 2.701 × 10^−12 | 2.7 × 10^−12 | 2.7 × 10^−12 | 2.7 × 10^−12 | 2.7 × 10^−12 | 2.7 × 10^−12 | 2.7 × 10^−12
6 | 1.33996 | 1.33996 | 1.33998 | 1.34001 | 1.339965 | 1.33997 | 1.33999 | 1.339985 | 1.33997
7 | 0.013074 | 0.013074 | 0.0130743 | 0.013074 | 0.013074 | 0.013074 | 0.013075 | 0.013074 | 0.013075
8 | 26.4864 | 26.4864 | 26.4864 | 26.4864 | 26.4864 | 26.4864 | 26.4864 | 26.4864 | 26.4864
9 | 8.413115 | 8.41292 | 8.41297 | 8.413095 | 8.413095 | 8.413485 | 8.41324 | 8.413225 | 8.4132
10 | 6.84297 | 6.84297 | 6.842975 | 6.84297 | 6.84298 | 6.84297 | 6.84297 | 6.842975 | 6.84299
11 | 23.0689 | 22.8922 | 23.1711 | 23.19115 | 23.0341 | 23.17205 | 23.08005 | 23.0662 | 23.0587
12 | 1.724885 | 1.72487 | 1.72488 | 1.72488 | 1.724895 | 1.72487 | 1.724895 | 1.724885 | 1.724915
13 | 359.208 | 359.208 | 359.208 | 359.208 | 359.208 | 359.208 | 359.208 | 359.208 | 359.208
* The original MPA setting. The best performing MPA variant for each problem is bolded.
Table 7. Number of the best, second best, third, and last positions of the Kruskal–Wallis tests for nine best-performing marine predator algorithms (MPAs) with given fish aggregating devices (FADs), P, and β (all N = 10 ).
Stage | 0.2_0.5-0_1.5 | 0.2_0.5_1.5-0.75 | 0-0.5_0.5_1.5 | 0.2_1-0_1.5 | 0.2_0.5_1 | 0.2_0.5_1.25 | 0.2_0.5_1.5 * | 0.2_0.5-1_1.5 | 0.2_0.5_0-1.5
1 | 2/1/4/0 | 1/4/0/0 | 4/2/1/0 | 1/0/3/6 | 1/1/2/1 | 2/1/2/2 | 2/1/0/1 | 0/0/0/0 | 0/3/1/3
2 | 3/1/0/0 | 1/2/1/1 | 3/2/1/1 | 1/0/0/6 | 1/1/2/0 | 1/0/5/0 | 0/2/1/2 | 1/2/1/2 | 0/1/0/1
3 | 1/2/0/1 | 3/0/1/1 | 4/1/3/0 | 1/0/0/3 | 0/1/2/1 | 1/2/1/1 | 0/1/0/2 | 0/2/1/1 | 0/1/2/1
4 | 2/0/3/0 | 3/1/0/0 | 2/2/2/1 | 2/0/0/3 | 1/2/2/0 | 0/5/0/1 | 0/0/1/2 | 0/0/1/1 | 0/0/1/3
5 | 1/3/1/0 | 3/1/0/1 | 2/1/2/1 | 1/1/1/3 | 2/1/1/0 | 0/2/2/1 | 0/0/0/1 | 0/0/2/0 | 0/0/0/3
6 | 2/3/1/1 | 2/1/1/2 | 1/1/2/3 | 2/0/0/3 | 1/2/0/0 | 1/1/1/0 | 0/0/1/0 | 0/0/2/0 | 0/1/1/1
7 | 3/1/1/1 | 2/1/1/1 | 0/1/1/3 | 2/0/1/2 | 1/2/0/1 | 1/2/1/0 | 0/0/0/1 | 0/1/1/0 | 0/1/3/1
8 | 3/1/1/1 | 2/1/2/1 | 0/0/1/3 | 1/1/0/2 | 2/1/0/1 | 1/3/0/1 | 0/1/1/0 | 0/1/2/0 | 0/0/2/1
9 | 3/2/0/1 | 2/1/0/0 | 0/0/1/5 | 1/1/0/1 | 2/1/0/0 | 1/2/2/2 | 0/1/3/0 | 0/1/1/0 | 0/0/2/1
10 | 4/0/1/0 | 1/0/2/0 | 0/0/0/6 | 1/2/2/0 | 1/1/0/0 | 2/2/0/3 | 0/2/0/0 | 0/2/1/0 | 0/0/3/1
11 | 4/0/1/0 | 1/0/2/0 | 0/0/0/6 | 1/2/2/0 | 1/1/0/0 | 2/2/0/3 | 0/2/0/0 | 0/2/1/0 | 0/0/3/1
Σ | 28/14/13/5 | 21/12/10/7 | 16/10/14/29 | 14/7/9/29 | 13/14/9/4 | 12/22/14/14 | 2/10/7/9 | 1/11/13/4 | 0/7/18/17
* The original MPA setting. The best performing MPA variant for each stage is bolded.
Table 8. Number of better, significantly better, worse, and significantly worse cases of nine best performing marine predator algorithms (MPAs) compared with the original setting from the Wilcoxon tests (all N = 10 ).
Result | 0.2_0.5-0_1.5 | 0.2_1-0_1.5 | 0.2_0.5-1_1.5 | 0.2_0.5_1 | 0.2_0.5_1.25 | 0.2_0.5_1.5-0.75 | 0.2_0.5_0-1.5 | 0-0.5_0.5_1.5
bet. | 5 | 5 | 5 | 5 | 5 | 4 | 2 | 1
s.bet. | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
wor. | 1 | 3 | 3 | 4 | 3 | 4 | 6 | 8
s.wor. | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0
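Table 8 condenses per-problem pairwise comparisons with the original setting into counts of better and worse cases, with significance judged by Wilcoxon tests. The counting step alone can be sketched as follows (comparison by medians of repeated runs; names are illustrative, and a full replication would add a Wilcoxon rank-sum test for the significance rows):

```python
from statistics import median

def compare_to_baseline(variant_runs, baseline_runs):
    """variant_runs[p], baseline_runs[p]: repeated-run results on problem p
    (lower is better). Returns (#better, #worse) by per-problem medians."""
    better = worse = 0
    for v, b in zip(variant_runs, baseline_runs):
        mv, mb = median(v), median(b)
        if mv < mb:
            better += 1
        elif mv > mb:
            worse += 1  # equal medians count as neither better nor worse
    return better, worse

print(compare_to_baseline([[1.0, 1.2], [2.0, 2.0]],
                          [[1.5, 1.4], [2.0, 2.0]]))  # prints (1, 0)
```

Problems where both settings reach the same median (e.g. problems 4, 8, and 13 in Table 6) fall into neither count, which is why the better/worse columns in Table 8 do not sum to 13.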
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Bujok, P. Evaluation of Marine Predator Algorithm by Using Engineering Optimisation Problems. Mathematics 2023, 11, 4716. https://doi.org/10.3390/math11234716