Article

Multi-Strategy Enhanced Parrot Optimizer: Global Optimization and Feature Selection

College of Geophysics and Petroleum Resources, Yangtze University, Wuhan 430100, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(11), 662; https://doi.org/10.3390/biomimetics9110662
Submission received: 9 September 2024 / Revised: 24 October 2024 / Accepted: 30 October 2024 / Published: 31 October 2024

Abstract

Optimization algorithms are pivotal in addressing complex problems across diverse domains, including global optimization and feature selection (FS). In this paper, we introduce the Enhanced Crisscross Parrot Optimizer (ECPO), an improved version of the Parrot Optimizer (PO), designed to address these challenges effectively. The ECPO incorporates a sophisticated strategy selection mechanism that allows individuals to retain successful behaviors from prior iterations and shift to alternative strategies in case of update failures. Additionally, the integration of a crisscross (CC) mechanism promotes more effective information exchange among individuals, enhancing the algorithm’s exploration capabilities. The proposed algorithm’s performance is evaluated through extensive experiments on the CEC2017 benchmark functions, where it is compared with eight other conventional optimization algorithms. Results demonstrate that the ECPO consistently outperforms these algorithms across various fitness landscapes. Furthermore, a binary version of the ECPO is developed and applied to FS problems on ten real-world datasets, demonstrating its ability to achieve competitive error rates with reduced feature subsets. These findings suggest that the ECPO holds promise as an effective approach for both global optimization and feature selection.

1. Introduction

In the era of data-driven decision-making, where vast amounts of data are generated daily, the need to extract meaningful information efficiently becomes paramount. One crucial aspect of this extraction process is feature selection (FS), which involves identifying and choosing the most relevant and informative features from a potentially overwhelming set of data [1]. FS is vital for several reasons. Firstly, in the realm of machine learning, models trained on irrelevant or redundant features can suffer from decreased accuracy and efficiency [2]. These extraneous features can introduce noise, complicating the learning process and potentially leading to overfitting. By carefully selecting the most pertinent features, we can streamline the model, enhancing its predictive power and generalizability. Secondly, feature selection bolsters the interpretability of machine learning models [3]. In many applications, especially those in healthcare, finance, or policy-making, understanding which features contribute most significantly to the model’s output is essential [4]. By reducing the feature set, we can more clearly discern the impact of each variable, thereby facilitating transparent and accountable decision-making [5].
FS techniques can be broadly categorized into three main approaches: wrapper, filter, and embedded methods [6]. Wrapper methods involve evaluating various feature subsets based on their impact on a specific learning algorithm’s performance [7]. Common examples include recursive feature elimination (RFE) [8] and evolutionary algorithms (EAs) [9], which iteratively assess the contribution of features to the model’s accuracy. Filter methods [10], on the other hand, rely on statistical properties to assess the relevance of individual features, independent of any learning algorithm. Techniques such as the chi-square test [11], information gain [12], and correlation coefficient [13] are widely used in this category. Embedded methods integrate the feature selection process into the model training phase, allowing for simultaneous learning and feature selection. Methods like L1 regularization (e.g., Lasso regression) and tree-based models that provide feature importance rankings (e.g., random forest) fall under this category [14].
While these traditional methods have proven effective in various scenarios, they often face challenges when dealing with complex, high-dimensional datasets. Metaheuristics offer a promising alternative for feature selection. These algorithms explore vast search spaces efficiently and can identify near-optimal feature subsets [15]. Their inherent ability to maintain diversity, avoid local minima, and adapt to changing landscapes makes them particularly suitable for complex optimization tasks, including feature selection in high-dimensional data.
Metaheuristic algorithms are generally categorized into two types: evolutionary algorithms and swarm intelligence optimization algorithms. Evolutionary algorithms, exemplified by the genetic algorithm (GA) [16], mimic the natural evolutionary process through selection, crossover, and mutation operations to evolve better solutions over generations. On the other hand, swarm intelligence optimization algorithms, such as particle swarm optimization (PSO) [17], draw inspiration from the collective behavior of social insects or animals. These algorithms utilize the concept of swarm intelligence to search for the global optimum through collaboration and information sharing among individuals.
Over the past few years, many scholars have dedicated their efforts to solving the FS problem using metaheuristic algorithms [18]. Shan et al. [19] proposed a multi-strategy enhanced crow search algorithm. The proposed method effectively improved the optimization performance of the algorithm through a crossover mechanism and a combined mutation mechanism. FS experiments were conducted on 10 datasets using the proposed method, and the results indicated that the proposed approach could obtain feature subsets with higher accuracy. Tubishat et al. [20] proposed an enhanced butterfly optimization algorithm for feature selection, termed DBOA, which incorporates a local search algorithm based on mutation to overcome local optima and improve solution diversity. Tested on 20 UCI datasets, the DBOA outperformed several benchmark algorithms in classification accuracy and feature subset selection efficiency. Kwakye et al. (2024) [21] proposed a particle swarm-guided bald eagle search (PS-BES) algorithm for global optimization and feature selection. The algorithm introduced the Attack–Retreat–Surrender technique to enhance balance between diversification and intensification. The study demonstrated the superior performance of the PS-BES through comprehensive evaluations using 26 benchmark functions and 27 classification datasets from the UCI repository, comparing it favorably against ten state-of-the-art algorithms. Abdelrazek et al. [22] proposed a modified version of the dwarf mongoose optimization algorithm named CDMO for feature selection. This new approach integrated ten chaotic maps to enhance the DMO’s convergence speed and effectiveness. Numerous studies have been conducted on metaheuristic algorithms for FS problems, and similar research is thoroughly reviewed in the literature [23].
Despite the development of such metaheuristic algorithms for feature selection, the “no free lunch” theorem indicates that no single algorithm can perform well on all datasets, which motivates the development of new algorithms for different problems.
The parrot optimizer (PO) is a novel metaheuristic algorithm proposed by Lian et al. in 2024 [24], inspired by the foraging, dwelling, communication, and fear-of-strangers behaviors exhibited by the domesticated green-cheeked parakeet (Pyrrhura molinae). Unlike the traditional two-stage exploration–exploitation structure, the PO escapes local optima by randomly assigning one of four different behaviors to each individual. However, the PO still suffers from slow convergence caused by this randomly chosen search strategy, which often generates inferior solutions and makes the search stagnate.
In this paper, we present an enhanced version of the parrot optimizer (PO), named ECPO. By improving its strategy selection mechanism, the ECPO enables each individual to repeat the behavior that successfully updates its position in the previous iteration, and to randomly switch to another behavior upon failure, thereby enhancing the algorithm’s search capabilities. The introduction of a crisscross (CC) mechanism strengthens the exchange of information among individuals, further augmenting the algorithm’s exploration abilities. Additionally, we propose a binary version of the ECPO, which is applied to feature selection problems on real-world datasets. The main contributions of this paper are as follows:
  • An enhanced PO algorithm is proposed by introducing the CC mechanism and an enhanced strategy selection mechanism.
  • The performance of the ECPO algorithm is verified in detail through comparison experiments with eight other conventional optimization algorithms on the CEC2017 benchmark functions.
  • A binary version of the ECPO is developed for solving FS problems and validated on ten real-world datasets, demonstrating that the ECPO can effectively solve FS problems.
The organization of this paper is as follows: Section 1 provides an introduction to the research background and motivation, includes a brief literature review, and concludes with a summary of the paper’s main contributions. Section 2 outlines the original PO algorithm. In Section 3, the CC mechanism and enhanced strategy selection mechanism are described in detail, along with the proposed ECPO algorithm. Section 4 covers the methodology, results, and analysis of the global optimization experiments. Section 5 details the experiments related to feature selection on ten real-world datasets. Finally, Section 6 wraps up the paper with a summary.

2. The Original PO

The PO is a new metaheuristic optimization algorithm inspired by the behavior of Pyrrhura molinae parrots, proposed by Lian et al. in 2024 [24]. The PO aims to solve complex optimization problems by mimicking the foraging, staying, communicating, and fear-of-strangers behaviors observed in these parrots. By modeling these four behaviors, the optimization process of the algorithm is divided into the following four stages:
1. Foraging behavior: In the foraging behavior of the PO, a parrot estimates the approximate location of food by observing the food’s position or by considering the owner’s location, and then flies towards that position. The positional movement is governed by Equation (1) (a runnable sketch of this and the other behaviors is given at the end of this section).

$$X_i^{t+1} = \left(X_i^t - X_{best}\right) \cdot \mathrm{Levy}(dim) + rand(0,1) \cdot \left(1 - \frac{t}{Max_{iter}}\right)^{\frac{2t}{Max_{iter}}} \cdot X_{mean}^t \tag{1}$$

where $X_{best}$ denotes the best solution found up to the current iteration, $X_i^t$ denotes the position of the $i$th individual at the $t$th iteration, $X_i^{t+1}$ denotes its position in the next iteration, and $X_{mean}^t$ denotes the average position of the population at the $t$th iteration. $\mathrm{Levy}(dim)$ represents a Lévy-distributed step vector of the problem’s dimensionality, and $t$ and $Max_{iter}$ represent the current iteration and the maximum number of iterations, respectively.
2. Staying behavior: In the staying behavior of the PO, the movement simulates a parrot flying to its owner and randomly landing on any part of the owner’s body, then remaining stationary. The process is governed by Equation (2).

$$X_i^{t+1} = X_i^t + X_{best} \cdot \mathrm{Levy}(dim) + rand(0,1) \cdot ones(1, dim) \tag{2}$$

where $ones(1, dim)$ is a $1 \times dim$ all-ones vector; this term signifies the parrot flying towards its owner and stopping at a random position.
3. Communicating behavior: In the communicating behavior of the PO, the movement of individuals simulates information exchange within a parrot flock, manifested in two behaviors: flying towards the flock and moving away from it. The PO assumes that the two behaviors occur with equal probability and uses the average position of the current population to represent the center of the flock. This process is modeled in Equation (3).

$$X_i^{t+1} = \begin{cases} 0.2 \cdot rand(0,1) \cdot \left(1 - \frac{t}{Max_{iter}}\right) \cdot \left(X_i^t - X_{mean}^t\right), & P \le 0.5 \\ 0.2 \cdot rand(0,1) \cdot \exp\left(-\dfrac{t}{rand(0,1) \cdot Max_{iter}}\right), & P > 0.5 \end{cases} \tag{3}$$

In the formula, the branch for $P \le 0.5$ moves an individual towards the center of the population, while the branch for $P > 0.5$ generates a random position, enabling the individual to move away from the flock.
4. Fear-of-strangers behavior: In the fear-of-strangers behavior of the PO, individuals simulate keeping their distance from unfamiliar individuals and seeking a safe environment together with the owner, as shown in Equation (4).

$$X_i^{t+1} = X_i^t + rand(0,1) \cdot \cos\left(0.5\pi \cdot \frac{t}{Max_{iter}}\right) \cdot \left(X_{best} - X_i^t\right) - \cos\left(rand(0,1) \cdot \pi\right) \cdot \left(\frac{t}{Max_{iter}}\right)^{\frac{2}{Max_{iter}}} \cdot \left(X_i^t - X_{best}\right) \tag{4}$$

In the formula, the first movement term models the parrot reorienting itself to fly towards its owner, while the subtracted term models it moving away from strangers.
During the initialization phase of the algorithm, a set of random solutions is generated as the initial population through Equation (5).

$$X_i^0 = lb + rand(0,1) \cdot (ub - lb) \tag{5}$$

where $ub$ and $lb$ represent the upper and lower bounds of the search space, respectively, and $X_i^0$ is the initial position of the $i$th individual.
Afterwards, iterative updates begin. In each iteration, the algorithm randomly selects one of the four stages to update the individuals in the population, and each solution is dynamically adjusted with respect to the best solution found by the PO so far. The updating process iterates until the maximum number of iterations is reached. A flowchart of the PO is shown in Figure 1.
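To make the four behaviors concrete, a minimal NumPy sketch of one PO iteration is given below. The Lévy step is drawn with Mantegna's algorithm, a common implementation choice; the stability parameter beta = 1.5, the function names, and the optional behavior argument (which anticipates the ECPO's managed strategy in Section 3) are our assumptions rather than details of the reference implementation.

```python
import numpy as np
from math import gamma

def levy(dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed steps; beta = 1.5 is a
    # typical choice, not necessarily the constant in the original PO code.
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def po_step(X, X_best, t, max_iter, behavior=None):
    # One PO iteration following Equations (1)-(4). X is the (n, dim)
    # population and X_best the best solution found so far. If behavior is
    # None, each individual draws one of the four behaviors at random.
    n, dim = X.shape
    X_mean = X.mean(axis=0)
    X_new = np.empty_like(X)
    for i in range(n):
        b = behavior if behavior is not None else np.random.randint(1, 5)
        r = np.random.rand()
        if b == 1:    # foraging, Eq. (1)
            X_new[i] = (X[i] - X_best) * levy(dim) \
                + r * (1 - t / max_iter) ** (2 * t / max_iter) * X_mean
        elif b == 2:  # staying, Eq. (2)
            X_new[i] = X[i] + X_best * levy(dim) + r * np.ones(dim)
        elif b == 3:  # communicating, Eq. (3)
            if np.random.rand() <= 0.5:
                X_new[i] = 0.2 * r * (1 - t / max_iter) * (X[i] - X_mean)
            else:
                X_new[i] = 0.2 * r * np.exp(-t / (np.random.rand() * max_iter))
        else:         # fear of strangers, Eq. (4)
            X_new[i] = X[i] + r * np.cos(0.5 * np.pi * t / max_iter) * (X_best - X[i]) \
                - np.cos(np.random.rand() * np.pi) * (t / max_iter) ** (2 / max_iter) \
                * (X[i] - X_best)
    return X_new
```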

3. Proposed ECPO

3.1. Crisscross Strategy

The design of the crisscross strategy draws inspiration from Meng’s crisscross optimization algorithm (CSO) [25]. The approach consists of two main parts, horizontal crossover search (HCS) and vertical crossover search (VCS), which enable the exchange of information between particles and between dimensions, respectively. The crisscross strategy generates new particles by exchanging information among randomly selected particles or dimensions; the fitter of parent and offspring is preserved in the population, thereby enhancing the global search capability. Shan et al. [19] improved the crow search algorithm (CSA) by adding a crossover strategy along with a combined mutation approach, which helps the population avoid local optima. Similarly, Hu et al. [26] incorporated the crossover strategy into the SCA, showing that it boosted global convergence, enhanced population diversity, and supported the escape from local optima.
In this study, we introduce the CC strategy in the original PO and improve its strategy management mechanism to propose the ECPO. The CC strategy and the improved strategy management mechanism are described in detail below.

3.1.1. Horizontal Crossover Search

The HCS randomly selects particles in the population and pairs them two by two to perform the crossover operation. The HCS thus makes fuller use of the population’s information and improves the exploration ability of the algorithm. The HCS operation is defined by Equations (6) and (7).
$$HCS_i^j = r_1 \times x_i^j + (1 - r_1) \times x_k^j + c_1 \times \left(x_i^j - x_k^j\right) \tag{6}$$

$$HCS_k^j = r_2 \times x_k^j + (1 - r_2) \times x_i^j + c_2 \times \left(x_k^j - x_i^j\right) \tag{7}$$

where $r_1$ and $r_2$ are random numbers in the interval [0, 1], $c_1$ and $c_2$ are random numbers in the interval [−1, 1], $x_i^j$ is the value of the $j$th dimension of the $i$th particle, and $x_k^j$ is the value of the $j$th dimension of the $k$th particle. $HCS_i^j$ and $HCS_k^j$ are the new offspring of the two particles generated by the HCS. The HCS ends with a greedy selection that retains, between offspring and parent, the individual with the better fitness value. Algorithm 1 presents the pseudo-code for the HCS operation.
Algorithm 1 HCS
RandIndex = randperm(n)
For p = 1 : n/2
  i = RandIndex(2p − 1); k = RandIndex(2p)
  For j = 1 : dim
    Generate four random numbers r1, r2 ∈ (0, 1) and c1, c2 ∈ (−1, 1)
    Generate HCS_i^j and HCS_k^j by Equations (6) and (7)
  End For
End For
For i = 1 : n
  If F(HCS_i) < F(X_i)
    X_i = HCS_i
  End If
End For
End
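For reference, a minimal NumPy sketch of the HCS with greedy selection follows; fitness is assumed to be a callable returning the objective value of a single particle, and the in-place update of X is our implementation choice:

```python
import numpy as np

def horizontal_crossover(X, fitness):
    # Horizontal crossover search (Eqs. (6)-(7)) followed by greedy selection.
    n, dim = X.shape
    idx = np.random.permutation(n)
    offspring = X.copy()
    for p in range(n // 2):
        i, k = idx[2 * p], idx[2 * p + 1]
        r1, r2 = np.random.rand(dim), np.random.rand(dim)
        c1, c2 = np.random.uniform(-1, 1, dim), np.random.uniform(-1, 1, dim)
        offspring[i] = r1 * X[i] + (1 - r1) * X[k] + c1 * (X[i] - X[k])
        offspring[k] = r2 * X[k] + (1 - r2) * X[i] + c2 * (X[k] - X[i])
    for i in range(n):  # greedy selection between parent and offspring
        if fitness(offspring[i]) < fitness(X[i]):
            X[i] = offspring[i]
    return X
```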

3.1.2. Vertical Crossover Search

The VCS performs a crossover operation on two random dimensions of each particle. As in the HCS, the VCS ends with a greedy selection that retains, between offspring and parent, the individual with the better fitness value. The VCS operation is defined by Equation (8).

$$VCS_i^j = r_3 \times x_i^{j_1} + (1 - r_3) \times x_i^{j_2} \tag{8}$$

where $r_3$ is a random number in the interval [0, 1], $x_i^{j_1}$ and $x_i^{j_2}$ indicate the values of the two dimensions chosen at random for the $i$th individual, and $VCS_i^j$ represents the offspring value generated from these two random dimensions of the $i$th particle. Algorithm 2 presents the pseudo-code for the VCS.
Algorithm 2 VCS
RandIndex = randperm(dim)
For q = 1 : dim/2
  Generate a random number p ∈ (0, 1)
  If p < 0.6
    j1 = RandIndex(2q − 1)
    j2 = RandIndex(2q)
    For i = 1 : n
      Generate a random number r3 ∈ (0, 1)
      Generate VCS_i^j by Equation (8)
    End For
  End If
End For
For i = 1 : n
  If F(VCS_i) < F(X_i)
    X_i = VCS_i
  End If
End For
End
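A matching sketch of the VCS is given below. Writing the crossover result into dimension j1 follows the usual CSO convention; this detail, like the function name, is our assumption, since Algorithm 2 leaves it implicit:

```python
import numpy as np

def vertical_crossover(X, fitness, pv=0.6):
    # Vertical crossover search (Eq. (8)) followed by greedy selection.
    # pv is the per-pair participation probability (0.6, as in Algorithm 2).
    n, dim = X.shape
    idx = np.random.permutation(dim)
    offspring = X.copy()
    for q in range(dim // 2):
        if np.random.rand() < pv:
            j1, j2 = idx[2 * q], idx[2 * q + 1]
            r3 = np.random.rand(n)
            offspring[:, j1] = r3 * X[:, j1] + (1 - r3) * X[:, j2]
    for i in range(n):  # greedy selection between parent and offspring
        if fitness(offspring[i]) < fitness(X[i]):
            X[i] = offspring[i]
    return X
```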

3.2. Enhanced Strategy Management

The original PO randomly chooses one of the four update behaviors in each iteration; in this paper, a greedy, selection-like approach is adopted for the strategy management of the algorithm. Specifically, the algorithm records whether the strategy chosen each time succeeds in improving the solution: if it succeeds, the same strategy continues to be used; if it fails, one of the remaining three strategies is chosen at random.
Compared with a completely random selection policy, this greedy strategy management offers faster convergence and higher optimization efficiency. The improved mechanism tends to reuse strategies that successfully improved individuals in past iterations, meaning that the algorithm is more likely to keep moving in a promising direction in the search space. By recording and prioritizing successful strategies, the algorithm makes more efficient use of limited computational resources.
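The rule is compact enough to state in code; a minimal sketch is given below (the function name next_strategy is ours, and the behaviors are numbered 1 to 4 as in Section 2):

```python
import numpy as np

def next_strategy(current, improved):
    # Greedy strategy management: keep reusing a behavior (1-4) that just
    # improved the solution; otherwise switch uniformly at random to one
    # of the other three behaviors.
    if improved:
        return current
    return int(np.random.choice([s for s in (1, 2, 3, 4) if s != current]))
```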

3.3. The Proposed ECPO

In this subsection, we describe the workflow of the ECPO. First, the ECPO initializes the parameters required by the PO and generates the initial population; the algorithm then randomly selects a strategy as the initial strategy for updating the individuals. Thereafter, the improved greedy strategy management decides the strategy to be executed by each individual. At the end of each iteration’s population update, the algorithm executes the CC strategy, producing the new population generated by the HCS and VCS operations. This process continues iterating until the algorithm’s termination criterion is met, at which point the optimal solution is returned. The flowchart illustrating the algorithm is presented in Figure 2.
Algorithm 3 provides the pseudo-code for the ECPO.
Algorithm 3 Pseudo-code of the ECPO
Set parameters: the maximum iteration number T, the problem dimension dim, and the population size N
Initialize population X
t = 1
St = randi([1, 4])
For i = 1 : N
  Evaluate the fitness value of x_i
  Find the global best x_best
End For
While (t ≤ T)
  If St = 1 /* Behavior 1 */
    Update X by Equation (1)
  Else If St = 2 /* Behavior 2 */
    Update X by Equation (2)
  Else If St = 3 /* Behavior 3 */
    Update X by Equation (3)
  Else If St = 4 /* Behavior 4 */
    Update X by Equation (4)
  End If
  x_old_best = x_best /* enhanced strategy management */
  Update x_best
  If x_old_best = x_best
    St = randi([1, 4])
  End If
  For i = 1 : N /* CC */
    Perform horizontal crossover search to update x_i
    Perform vertical crossover search to update x_i
    Update x_best
  End For
  t = t + 1
End While
Return x_best
End
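Putting the pieces together, a condensed Python driver mirroring Algorithm 3 is sketched below. It reuses the sketches defined earlier (po_step, horizontal_crossover, vertical_crossover, next_strategy); the clipping-based bound handling and the default parameter values are our assumptions, not details stated in the paper.

```python
import numpy as np

def ecpo(fitness, dim, n=30, max_iter=1000, lb=-100.0, ub=100.0):
    # Condensed ECPO main loop: PO behavior update under greedy strategy
    # management, followed by the CC (HCS + VCS) step, as in Algorithm 3.
    X = lb + np.random.rand(n, dim) * (ub - lb)      # initialization, Eq. (5)
    best = min(X, key=fitness).copy()
    strategy = np.random.randint(1, 5)               # St = randi([1, 4])
    for t in range(1, max_iter + 1):
        X = np.clip(po_step(X, best, t, max_iter, behavior=strategy), lb, ub)
        candidate = min(X, key=fitness)
        improved = fitness(candidate) < fitness(best)
        if improved:
            best = candidate.copy()
        strategy = next_strategy(strategy, improved)  # keep or re-draw strategy
        X = horizontal_crossover(X, fitness)          # CC step
        X = vertical_crossover(X, fitness)
        cc_best = min(X, key=fitness)
        if fitness(cc_best) < fitness(best):
            best = cc_best.copy()
    return best
```

For example, ecpo(lambda x: float(np.sum(x ** 2)), dim=30) minimizes the 30-dimensional sphere function.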
The computational complexity of the ECPO comprises four components: population initialization, fitness evaluation, the position updates of the particles, and the CC strategy, which cost O(N × D), O(T × N), O(T × N × D), and O(T × N × D), respectively. Therefore, O(ECPO) ≈ O(N × D) + O(T × N) + O(T × N × D) + O(T × N × D) ≈ O(T × N × D).

4. Results and Analysis of Global Optimization Experiments

This section uses the 29 benchmark functions of CEC2017 to conduct a complete evaluation of the proposed ECPO, with all algorithms run under identical settings to ensure a fair comparison. The experiments are performed on an Intel i5-13600KF CPU with 32 GB of RAM under Windows 11, with all code implemented in MATLAB 2024a. In the comparative experiments, the population size of all algorithms is set to 30, the problem dimension to 30, and the maximum number of function evaluations to 300,000; each algorithm is run 30 times, and the average and standard deviation on each function are recorded as the experimental results.

4.1. Benchmark Functions

This subsection introduces the 29 benchmark functions used for testing, which originate from the 2017 IEEE Congress on Evolutionary Computation (CEC2017) [27]. These functions comprise four types: unimodal, multimodal, hybrid, and composition functions, and together they allow a detailed evaluation of algorithm performance across different kinds of fitness landscapes. Table 1 introduces the CEC2017 function set.

4.2. Comparison of Performance with Other Algorithms

In this subsection, we compare the proposed ECPO with eight other classical algorithms on the CEC2017 benchmark functions: the original PO, the slime mould algorithm (SMA), the whale optimization algorithm (WOA), moth-flame optimization (MFO), the bat algorithm (BA), the sine cosine algorithm (SCA), particle swarm optimization (PSO), and differential evolution (DE). The hyperparameters of these algorithms are given in Table 2.
Table 3 summarizes the average fitness value (Avg) and standard deviation (Std) of the ECPO and the other algorithms on each CEC2017 benchmark function. The ‘Rank’ column gives the Friedman test ranking of each algorithm, ‘AVG’ is the average ranking attained over all functions, and the ‘+/=/−’ column reports on how many functions the ECPO is better than, equal to, and worse than each competitor.
Table 3 indicates that the ECPO achieves an average ranking of 2.0345 across the benchmark functions, placing it first among all algorithms and highlighting its considerable advantage over the competitors. The ECPO reached the global optimum of F5 in all 30 trials and came close to the optima of F2, F20, F21, F22, and F24, demonstrating consistent optimization reliability. Among the compared algorithms, the DE performs closest to the ECPO but still falls short on 14 functions.
Table 4 reinforces the findings presented in Table 3. In the Wilcoxon signed-rank test, a p-value below 0.05 indicates that the null hypothesis can be rejected, i.e., that there is a significant difference between the ECPO and the compared algorithm. The data in Table 4 reveal that the majority of functions have p-values below 0.05, providing compelling evidence that the ECPO significantly outperforms the other algorithms across the benchmarks.
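Such per-function p-values can be obtained from the 30 paired run results with SciPy's signed-rank test; the sketch below uses synthetic stand-in arrays, since in practice the recorded per-run best fitness values would be passed in:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Stand-ins for the best fitness values of 30 independent runs on one function.
ecpo_runs = rng.normal(600.0, 1.0, 30)
rival_runs = rng.normal(650.0, 9.0, 30)

stat, p = wilcoxon(ecpo_runs, rival_runs)  # paired, two-sided by default
print(f"p = {p:.2e}; significant at the 0.05 level: {p < 0.05}")
```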
Figure 3 presents the convergence curves of all algorithms on selected functions. The horizontal axis denotes the number of function evaluations carried out, while the vertical axis indicates the best fitness value found so far. The legend at the bottom of Figure 3 identifies the algorithms. Importantly, the red curves consistently fall below the other curves across all function types, indicating that the ECPO effectively escapes local optima and finds better solutions than the other algorithms. In conclusion, the CC mechanism markedly enhances the search capability of the PO, which shows a significant edge over the competing algorithms on the benchmarks.

5. Application to Feature Selection

In this section, we apply the ECPO to feature selection and compare it with five binary metaheuristics on ten real-world datasets. FS is a classical combinatorial optimization problem in which selecting or discarding a feature is encoded by 1 or 0; each individual of the initial ECPO population is therefore generated by randomly populating 0s and 1s. We set the lower and upper bounds of the problem to 0 and 1 and use Equation (9) to decide whether each feature is selected, allowing the ECPO, which operates in the continuous domain, to act on the binary search space of feature selection.

$$X_{i,j} = \begin{cases} 0, & X_{i,j} < 0.5 \\ 1, & X_{i,j} \ge 0.5 \end{cases} \tag{9}$$

where $X_{i,j}$ denotes the $j$th feature of the $i$th feature subset in the population. Equation (9) implies that all features whose position values are 0.5 or greater are selected, while those below 0.5 are not.
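In code, Equation (9) is a simple threshold on the continuous position vector; a minimal sketch:

```python
import numpy as np

def binarize(position, threshold=0.5):
    # Eq. (9): a feature is selected (1) when its position value >= 0.5.
    return (np.asarray(position) >= threshold).astype(int)
```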
The KNN classifier is used as the evaluation classifier, and Equation (10) serves as the fitness function for feature selection:

$$Fitness = \mu \times E + (1 - \mu) \times \frac{l}{L} \tag{10}$$

where $E$ is the classification error rate, $l$ is the length of the selected feature subset, $L$ is the total number of features, and $\mu$ is a constant between 0 and 1 that controls the relative weights of the feature subset length and the error rate. In this experiment, we are more concerned with the accuracy of the feature subset, so we set $\mu$ to 0.05.
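A sketch of the corresponding fitness evaluation is given below, reusing the binarize() helper above. The neighbor count k = 5 and the function name fs_fitness are our assumptions; the paper states only that a KNN classifier with ten-fold cross-validation is used.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(position, X_data, y, mu=0.05, k=5):
    # Eq. (10): Fitness = mu * E + (1 - mu) * l / L, with E the KNN error
    # rate, l the number of selected features, and L the total feature count.
    mask = binarize(position).astype(bool)
    if not mask.any():
        return 1.0  # guard: an empty feature subset is treated as worst-case
    knn = KNeighborsClassifier(n_neighbors=k)
    accuracy = cross_val_score(knn, X_data[:, mask], y, cv=10).mean()
    return mu * (1.0 - accuracy) + (1.0 - mu) * mask.sum() / X_data.shape[1]
```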

5.1. Datasets Used in Experiments

In order to further evaluate the performance of the proposed method, feature selection experiments were conducted on 10 datasets selected from the UCI repository; the number of samples ranges from 72 to 1473 and the feature dimension from 9 to 7130. Table 5 gives a detailed description of the datasets used, including the number of samples, features, and classes.

5.2. Analysis and Discussion of Experimental Results

This subsection presents the experimental results of the ECPO and the comparison algorithms on the various datasets. The comparison algorithms are the binary grey wolf optimizer (BGWO) [28], the binary gravitational search algorithm (BGSA) [29], the binary particle swarm optimization (BPSO) [30], the BBA [31], and the binary salp swarm algorithm (BSSA) [32]. The experiments were conducted on ten real-world datasets, with a population size of 30 and a maximum of 1000 iterations for each algorithm. Ten-fold cross-validation was used to reduce the randomness of the results. Detailed experimental results are provided in Table 6 and Table 7.
The error rates and the average numbers of selected features are shown in Table 6 and Table 7, where each cell reports the mean with the variance in parentheses. The tables show that the proposed ECPO achieves the best performance on almost all datasets. Notably, it excels on datasets with fewer samples, such as heartandlung and hepatitisfulldata, where its error rates are below one percent, outperforming the other algorithms by an order of magnitude. On the cmc dataset, all algorithms perform poorly, with error rates above 45%, which is attributable to a performance bottleneck of the classifier; even so, the ECPO has a lower error rate than the other binary metaheuristics. On the Leukemia and Leukemia1 datasets, the ECPO, BGWO, BPSO, and BSSA all achieve an error rate of 0, while the BGSA and BBA perform worst: the error rates of the BGSA are 10.2% and 6.31%, and those of the BBA are 1.67% and 5.19%.
In terms of the length of the selected feature subsets, the ECPO does not hold an advantage over the other algorithms. The BGSA tends to discover shorter feature subsets, especially on high-dimensional datasets such as Leukemia and Leukemia1, where its subsets are an order of magnitude smaller than those of the other algorithms; however, for the same reason, its classification error rate is relatively higher.
In summary, the ECPO achieves the best overall performance among the BGWO, BGSA, BPSO, BBA, and BSSA. This is attributed to the effective enhancement of population diversity by the CC strategy and to the more efficient population updates provided by the greedy strategy selection mechanism.

6. Conclusions

In this study, we improve the strategy selection mechanism of the original PO and combine it with the CC strategy to propose an enhanced PO. The CC strategy improves the diversity of generated offspring by enhancing the exchange of information within the population, thus accelerating the algorithm’s progress towards the best solution, while the enhanced strategy selection approach makes the most of each update by greedily retaining strategies that have just succeeded. The proposed algorithm is compared with eight optimization algorithms on the CEC2017 benchmark functions, and the experimental results show that the ECPO achieves better performance than the other algorithms on functions with different types of fitness landscapes.
In addition, the ECPO is used to solve the feature selection problem on 10 datasets with KNN as the evaluator, and the experimental results show that the ECPO can achieve lower error rates with reduced feature subsets, making it a competitive method for feature selection.
In future research, we plan to develop and explore more advanced optimization methods. We will also explore combining deep learning and reinforcement learning techniques with optimization algorithms to solve real-world optimization problems more efficiently and intelligently.

Author Contributions

T.C.: Conceptualization, Software, Data Curation, Investigation, Writing—Original Draft, Project Administration; Y.Y.: Methodology, Writing—Original Draft, Writing—Review and Editing, Validation, Formal Analysis, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the third round of the school-enterprise cooperation project of North China Petroleum, grant number HBYT-YJY-2018-JS-507.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature Selection: A Data Perspective. ACM Comput. Surv. 2018, 50, 94. [Google Scholar] [CrossRef]
  2. Ahmad, A.; Khan, M.; Paul, A.; Din, S.; Rathore, M.M.; Jeon, G.; Choi, G.S. Toward Modeling and Optimization of Features Selection in Big Data Based Social Internet of Things. Future Gener. Comput. Syst. 2018, 82, 715–726. [Google Scholar] [CrossRef]
  3. Mahapatra, D.; Poellinger, A.; Shao, L.; Reyes, M. Interpretability-Driven Sample Selection Using Self Supervised Learning for Disease Classification and Segmentation. IEEE Trans. Med. Imag. 2021, 40, 2548–2562. [Google Scholar] [CrossRef] [PubMed]
  4. Htun, H.H.; Biehl, M.; Petkov, N. Survey of Feature Selection and Extraction Techniques for Stock Market Prediction. Financ. Innov. 2023, 9, 26. [Google Scholar] [CrossRef]
  5. Li, J.P.; Haq, A.U.; Din, S.U.; Khan, J.; Khan, A.; Saboor, A. Heart Disease Identification Method Using Machine Learning Classification in E-Healthcare. IEEE Access 2020, 8, 107562–107582. [Google Scholar] [CrossRef]
  6. Dhal, P.; Azad, C. A Comprehensive Survey on Feature Selection in the Various Fields of Machine Learning. Appl. Intell. 2022, 52, 4543–4581. [Google Scholar] [CrossRef]
  7. Maldonado, J.; Riff, M.C.; Neveu, B. A Review of Recent Approaches on Wrapper Feature Selection for Intrusion Detection. Expert Syst. Appl. 2022, 198, 116822. [Google Scholar] [CrossRef]
  8. Jeon, H.; Oh, S. Hybrid-Recursive Feature Elimination for Efficient Feature Selection. Appl. Sci. 2020, 10, 3211. [Google Scholar] [CrossRef]
  9. Agrawal, P.; Abutarboush, H.F.; Ganesh, T.; Mohamed, A.W. Metaheuristic Algorithms on Feature Selection: A Survey of One Decade of Research (2009–2019). IEEE Access 2021, 9, 26766–26791. [Google Scholar] [CrossRef]
  10. Bommert, A.; Sun, X.; Bischl, B.; Rahnenführer, J.; Lang, M. Benchmark for Filter Methods for Feature Selection in High-Dimensional Classification Data. Comput. Stat. Data Anal. 2020, 143, 106839. [Google Scholar] [CrossRef]
  11. Alshaer, H.N.; Otair, M.A.; Abualigah, L.; Alshinwan, M.; Khasawneh, A.M. Feature Selection Method Using Improved CHI Square on Arabic Text Classifiers: Analysis and Application. Multimed. Tools Appl. 2021, 80, 10373–10390. [Google Scholar] [CrossRef]
  12. Omuya, E.O.; Okeyo, G.O.; Kimwele, M.W. Feature Selection for Classification Using Principal Component Analysis and Information Gain. Expert Syst. Appl. 2021, 174, 114765. [Google Scholar] [CrossRef]
  13. Hsu, H.-H.; Hsieh, C.-W. Feature Selection via Correlation Coefficient Clustering. J. Softw. 2010, 5, 1371–1377. [Google Scholar] [CrossRef]
  14. Pudjihartono, N.; Fadason, T.; Kempa-Liehr, A.W.; O’Sullivan, J.M. A Review of Feature Selection Methods for Machine Learning-Based Disease Risk Prediction. Front. Bioinform. 2022, 2, 927312. [Google Scholar] [CrossRef]
  15. Dokeroglu, T.; Deniz, A.; Kiziloz, H.E. A Comprehensive Survey on Recent Metaheuristics for Feature Selection. Neurocomputing 2022, 494, 269–296. [Google Scholar] [CrossRef]
  16. Katoch, S.; Chauhan, S.S.; Kumar, V. A Review on Genetic Algorithm: Past, Present, and Future. Multimed. Tools Appl. 2021, 80, 8091–8126. [Google Scholar] [CrossRef]
  17. Poli, R.; Kennedy, J.; Blackwell, T. Particle Swarm Optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  18. Rezaei Miandoab, A.; Bagherzadeh, S.A.; Meghdadi Isfahani, A.H. Numerical Study of the Effects of Twisted-Tape Inserts on Heat Transfer Parameters and Pressure Drop Across a Tube Carrying Graphene Oxide Nanofluid: An Optimization by Implementation of Artificial Neural Network and Genetic Algorithm. Eng. Anal. Bound. Elem. 2022, 140, 1–11. [Google Scholar] [CrossRef]
  19. Shan, W.; Hu, H.; Cai, Z.; Chen, H.; Liu, H.; Wang, M.; Teng, Y. Multi-Strategies Boosted Mutative Crow Search Algorithm for Global Tasks: Cases of Continuous and Discrete Optimization. J. Bionic Eng. 2022, 19, 1830–1849. [Google Scholar] [CrossRef]
  20. Tubishat, M.; Alswaitti, M.; Mirjalili, S.; Al-Garadi, M.A.; Rana, T.A. Dynamic Butterfly Optimization Algorithm for Feature Selection. IEEE Access 2020, 8, 194303–194314. [Google Scholar] [CrossRef]
  21. Kwakye, B.D.; Li, Y.; Mohamed, H.H.; Baidoo, E.; Asenso, T.Q. Particle Guided Metaheuristic Algorithm for Global Optimization and Feature Selection Problems. Expert Syst. Appl. 2024, 248, 123362. [Google Scholar] [CrossRef]
  22. Abdelrazek, M.; Abd Elaziz, M.; El-Baz, A.H. CDMO: Chaotic Dwarf Mongoose Optimization Algorithm for Feature Selection. Sci. Rep. 2024, 14, 701. [Google Scholar] [CrossRef]
  23. Ali, M.Z.; Abdullah, A.; Zaki, A.M.; Rizk, F.H.; Eid, M.M.; El-Kenway, E.M. Advances and Challenges in Feature Selection Methods: A Comprehensive Review. J. Artif. Intell. Metaheuristics 2024, 7, 67–77. [Google Scholar] [CrossRef]
  24. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot Optimizer: Algorithm and Applications to Medical Problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef]
  25. Meng, A.; Chen, Y.; Yin, H.; Chen, S. Crisscross Optimization Algorithm and Its Application. Knowl. Based Syst. 2014, 67, 218–229. [Google Scholar] [CrossRef]
  26. Hu, H.; Shan, W.; Tang, Y.; Heidari, A.A.; Chen, H.; Liu, H.; Wang, M.; Escorcia-Gutierrez, J.; Mansour, R.F.; Chen, J. Horizontal and Vertical Crossover of Sine Cosine Algorithm with Quick Moves for Optimization and Feature Selection. J. Comput. Des. Eng. 2022, 9, 2524–2555. [Google Scholar] [CrossRef]
  27. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  28. Chantar, H.; Mafarja, M.; Alsawalqah, H.; Heidari, A.A.; Aljarah, I.; Faris, H. Feature Selection Using Binary Grey Wolf Optimizer with Elite-Based Crossover for Arabic Text Classification. Neural Comput. Appl. 2020, 32, 12201–12220. [Google Scholar] [CrossRef]
  29. Taradeh, M.; Mafarja, M.; Heidari, A.A.; Faris, H.; Aljarah, I.; Mirjalili, S.; Fujita, H. An Evolutionary Gravitational Search-Based Feature Selection. Inform. Sci. 2019, 497, 219–239. [Google Scholar] [CrossRef]
  30. Dara, S.; Banka, H. A Binary PSO Feature Selection Algorithm for Gene Expression Data. In Proceedings of the 2014 International Conference on Advances in Communication and Computing Technologies (ICACACT 2014), Mumbai, India, 10–11 August 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–6. [Google Scholar]
  31. Mafarja, M.; Heidari, A.A.; Habib, M.; Faris, H.; Thaher, T.; Aljarah, I. Augmented Whale Feature Selection for IoT Attacks: Structure, Analysis and Applications. Future Gener. Comp. Syst. 2020, 112, 18–40. [Google Scholar] [CrossRef]
  32. Shekhawat, S.S.; Sharma, H.; Kumar, S.; Nayyar, A.; Qureshi, B. bSSA: Binary Salp Swarm Algorithm with Hybrid Data Transformation for Feature Selection. IEEE Access 2021, 9, 14867–14882. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the PO.
Figure 2. Flowchart of the ECPO.
Figure 3. Convergence curves of the ECPO on benchmarks with other algorithms.
Table 1. CEC2017 benchmark functions.

Function | Function Name | Class | Optimum
F1 | Shifted and Rotated Bent Cigar Function | Unimodal | 100
F2 | Shifted and Rotated Zakharov Function | Unimodal | 300
F3 | Shifted and Rotated Rosenbrock’s Function | Multimodal | 400
F4 | Shifted and Rotated Rastrigin’s Function | Multimodal | 500
F5 | Shifted and Rotated Expanded Schaffer’s F6 Function | Multimodal | 600
F6 | Shifted and Rotated Lunacek Bi-Rastrigin Function | Multimodal | 700
F7 | Shifted and Rotated Non-Continuous Rastrigin’s Function | Multimodal | 800
F8 | Shifted and Rotated Lévy Function | Multimodal | 900
F9 | Shifted and Rotated Schwefel’s Function | Multimodal | 1000
F10 | Hybrid Function 1 (N = 3) | Hybrid | 1100
F11 | Hybrid Function 2 (N = 3) | Hybrid | 1200
F12 | Hybrid Function 3 (N = 3) | Hybrid | 1300
F13 | Hybrid Function 4 (N = 4) | Hybrid | 1400
F14 | Hybrid Function 5 (N = 4) | Hybrid | 1500
F15 | Hybrid Function 6 (N = 4) | Hybrid | 1600
F16 | Hybrid Function 7 (N = 5) | Hybrid | 1700
F17 | Hybrid Function 8 (N = 5) | Hybrid | 1800
F18 | Hybrid Function 9 (N = 5) | Hybrid | 1900
F19 | Hybrid Function 10 (N = 6) | Hybrid | 2000
F20 | Composition Function 1 (N = 3) | Composition | 2100
F21 | Composition Function 2 (N = 3) | Composition | 2200
F22 | Composition Function 3 (N = 4) | Composition | 2300
F23 | Composition Function 4 (N = 4) | Composition | 2400
F24 | Composition Function 5 (N = 5) | Composition | 2500
F25 | Composition Function 6 (N = 5) | Composition | 2600
F26 | Composition Function 7 (N = 6) | Composition | 2700
F27 | Composition Function 8 (N = 6) | Composition | 2800
F28 | Composition Function 9 (N = 3) | Composition | 2900
F29 | Composition Function 10 (N = 3) | Composition | 3000
Table 2. Hyperparameters of the compared algorithms.

Name | Parameters
ECPO | p_l = 0.3; p_e = 0.7
PO | p_l = 0.3; p_e = 0.7
SMA | /
WOA | a1 = [2, 0]; a2 = [−1, −2]; b = 1
MFO | b = 1; t = [−1, 1]; a = [−1, −2]
BA | /
SCA | /
PSO | Vmax = 6; Wmax = 0.9; Wmin = 0.2; C1 = 2; C2 = 2
DE | a = [2, 0]
Table 3. Results of the ECPO and other algorithms on CEC2017.

Algorithm | F1 Avg | F1 Std | F2 Avg | F2 Std | F3 Avg | F3 Std
ECPO | 2.5533 × 10^3 | 3.3868 × 10^3 | 6.0183 × 10^3 | 1.7925 × 10^3 | 5.9406 × 10^2 | 2.1013 × 10^1
PO | 5.7678 × 10^7 | 8.8266 × 10^7 | 5.1121 × 10^3 | 2.6765 × 10^3 | 7.1852 × 10^2 | 4.8687 × 10^1
SMA | 2.8682 × 10^9 | 1.2223 × 10^9 | 3.4641 × 10^4 | 6.1441 × 10^3 | 7.1538 × 10^2 | 2.7779 × 10^1
WOA | 3.8858 × 10^6 | 3.0510 × 10^6 | 1.6617 × 10^5 | 5.8485 × 10^4 | 7.9399 × 10^2 | 5.6422 × 10^1
MFO | 1.1190 × 10^10 | 8.5527 × 10^9 | 9.8579 × 10^4 | 7.2925 × 10^4 | 7.0977 × 10^2 | 4.3436 × 10^1
BA | 5.3477 × 10^5 | 3.4745 × 10^5 | 3.0010 × 10^2 | 9.9425 × 10^−2 | 8.3536 × 10^2 | 7.2939 × 10^1
SCA | 1.2294 × 10^10 | 3.1232 × 10^9 | 3.8025 × 10^4 | 7.6942 × 10^3 | 7.7882 × 10^2 | 1.9855 × 10^1
PSO | 3.3674 × 10^3 | 4.4518 × 10^3 | 3.0001 × 10^2 | 9.7402 × 10^−3 | 6.9581 × 10^2 | 3.2590 × 10^1
DE | 1.6720 × 10^3 | 2.6872 × 10^3 | 2.0113 × 10^4 | 4.4064 × 10^3 | 6.0597 × 10^2 | 9.7270 × 10^0

Algorithm | F4 Avg | F4 Std | F5 Avg | F5 Std | F6 Avg | F6 Std
ECPO | 5.9108 × 10^2 | 1.9276 × 10^1 | 6.0000 × 10^2 | 9.0390 × 10^−7 | 8.4685 × 10^2 | 2.9649 × 10^1
PO | 7.1669 × 10^2 | 3.4384 × 10^1 | 6.5577 × 10^2 | 9.1330 × 10^0 | 1.1843 × 10^3 | 7.6114 × 10^1
SMA | 7.0915 × 10^2 | 3.6763 × 10^1 | 6.4378 × 10^2 | 1.0139 × 10^1 | 1.0658 × 10^3 | 6.3361 × 10^1
WOA | 7.7040 × 10^2 | 4.4565 × 10^1 | 6.6792 × 10^2 | 8.4931 × 10^0 | 1.2419 × 10^3 | 9.0990 × 10^1
MFO | 7.1352 × 10^2 | 3.8297 × 10^1 | 6.4130 × 10^2 | 1.0526 × 10^1 | 1.1420 × 10^3 | 1.8908 × 10^2
BA | 8.3843 × 10^2 | 6.8433 × 10^1 | 6.7011 × 10^2 | 9.2768 × 10^0 | 1.5929 × 10^3 | 2.0371 × 10^2
SCA | 7.7418 × 10^2 | 1.5843 × 10^1 | 6.5148 × 10^2 | 6.4379 × 10^0 | 1.1344 × 10^3 | 4.4865 × 10^1
PSO | 6.9478 × 10^2 | 3.5034 × 10^1 | 6.4689 × 10^2 | 8.2484 × 10^0 | 1.0325 × 10^3 | 8.5500 × 10^1
DE | 6.1027 × 10^2 | 1.0950 × 10^1 | 6.0000 × 10^2 | 2.1111 × 10^−14 | 8.4277 × 10^2 | 1.0209 × 10^1

Algorithm | F7 Avg | F7 Std | F8 Avg | F8 Std | F9 Avg | F9 Std
ECPO | 8.8811 × 10^2 | 2.1385 × 10^1 | 1.0710 × 10^3 | 2.8770 × 10^2 | 3.6511 × 10^3 | 3.7742 × 10^2
PO | 9.6243 × 10^2 | 2.6318 × 10^1 | 5.2394 × 10^3 | 7.8276 × 10^2 | 5.9115 × 10^3 | 8.8067 × 10^2
SMA | 9.6665 × 10^2 | 1.8766 × 10^1 | 5.5914 × 10^3 | 1.0315 × 10^3 | 5.7496 × 10^3 | 6.1702 × 10^2
WOA | 1.0067 × 10^3 | 3.6604 × 10^1 | 7.9806 × 10^3 | 1.9843 × 10^3 | 6.1472 × 10^3 | 7.7219 × 10^2
MFO | 1.0103 × 10^3 | 4.8478 × 10^1 | 7.1955 × 10^3 | 2.0629 × 10^3 | 5.4253 × 10^3 | 8.1241 × 10^2
BA | 1.0634 × 10^3 | 5.3963 × 10^1 | 1.3610 × 10^4 | 4.9676 × 10^3 | 5.6359 × 10^3 | 6.5778 × 10^2
SCA | 1.0517 × 10^3 | 1.8991 × 10^1 | 5.2171 × 10^3 | 1.3222 × 10^3 | 8.1202 × 10^3 | 2.1751 × 10^2
PSO | 9.5196 × 10^2 | 3.4527 × 10^1 | 4.3162 × 10^3 | 9.8906 × 10^2 | 5.2490 × 10^3 | 4.9744 × 10^2
DE | 9.0921 × 10^2 | 9.2515 × 10^0 | 9.0000 × 10^2 | 1.0125 × 10^−13 | 5.8029 × 10^3 | 2.6995 × 10^2

Algorithm | F10 Avg | F10 Std | F11 Avg | F11 Std | F12 Avg | F12 Std
ECPO | 1.1725 × 10^3 | 3.3533 × 10^1 | 3.6206 × 10^5 | 2.1631 × 10^5 | 1.7551 × 10^4 | 1.9244 × 10^4
PO | 1.3148 × 10^3 | 7.4800 × 10^1 | 2.3973 × 10^7 | 1.8828 × 10^7 | 1.1801 × 10^5 | 8.5025 × 10^4
SMA | 1.5223 × 10^3 | 1.0297 × 10^2 | 1.0611 × 10^8 | 5.8804 × 10^7 | 2.7739 × 10^6 | 4.2840 × 10^6
WOA | 1.5213 × 10^3 | 1.3839 × 10^2 | 3.2133 × 10^7 | 2.5845 × 10^7 | 1.4900 × 10^5 | 8.0362 × 10^4
MFO | 6.4286 × 10^3 | 6.0657 × 10^3 | 3.9567 × 10^8 | 5.6595 × 10^8 | 2.6205 × 10^8 | 7.2166 × 10^8
BA | 1.3098 × 10^3 | 5.8084 × 10^1 | 1.6301 × 10^6 | 1.1887 × 10^6 | 2.9733 × 10^5 | 1.3313 × 10^5
SCA | 2.1802 × 10^3 | 6.1972 × 10^2 | 1.1914 × 10^9 | 2.9546 × 10^8 | 4.5240 × 10^8 | 3.4561 × 10^8
PSO | 1.2127 × 10^3 | 3.7415 × 10^1 | 4.6061 × 10^4 | 2.6228 × 10^4 | 1.5307 × 10^4 | 1.3263 × 10^4
DE | 1.1605 × 10^3 | 2.3085 × 10^1 | 1.6235 × 10^6 | 1.0534 × 10^6 | 3.4063 × 10^4 | 1.8594 × 10^4

Algorithm | F13 Avg | F13 Std | F14 Avg | F14 Std | F15 Avg | F15 Std
ECPO | 6.5336 × 10^4 | 7.4352 × 10^4 | 8.7271 × 10^3 | 9.4149 × 10^3 | 2.2824 × 10^3 | 2.4337 × 10^2
PO | 3.9427 × 10^4 | 2.5394 × 10^4 | 6.3004 × 10^4 | 7.0528 × 10^4 | 3.0432 × 10^3 | 4.0381 × 10^2
SMA | 1.7908 × 10^5 | 8.5321 × 10^4 | 1.7915 × 10^4 | 9.2843 × 10^3 | 2.8746 × 10^3 | 3.6782 × 10^2
WOA | 7.3860 × 10^5 | 7.4934 × 10^5 | 7.8654 × 10^4 | 5.8614 × 10^4 | 3.6492 × 10^3 | 5.8224 × 10^2
MFO | 1.5150 × 10^5 | 3.2789 × 10^5 | 3.0150 × 10^7 | 1.6484 × 10^8 | 3.1766 × 10^3 | 4.0829 × 10^2
BA | 7.4300 × 10^3 | 5.0607 × 10^3 | 1.0372 × 10^5 | 5.6221 × 10^4 | 3.3258 × 10^3 | 4.0253 × 10^2
SCA | 1.3310 × 10^5 | 8.2141 × 10^4 | 1.2124 × 10^7 | 1.0113 × 10^7 | 3.6278 × 10^3 | 2.1015 × 10^2
PSO | 5.8906 × 10^3 | 2.7806 × 10^3 | 6.2839 × 10^3 | 5.5652 × 10^3 | 2.8517 × 10^3 | 2.5253 × 10^2
DE | 4.4570 × 10^4 | 2.5716 × 10^4 | 7.5459 × 10^3 | 4.6732 × 10^3 | 2.0702 × 10^3 | 1.9291 × 10^2

Algorithm | F16 Avg | F16 Std | F17 Avg | F17 Std | F18 Avg | F18 Std
ECPO | 1.9136 × 10^3 | 1.2822 × 10^2 | 3.7244 × 10^5 | 3.4390 × 10^5 | 7.0782 × 10^3 | 8.3821 × 10^3
PO | 2.3801 × 10^3 | 1.9388 × 10^2 | 5.8504 × 10^5 | 4.4892 × 10^5 | 5.8349 × 10^5 | 4.2013 × 10^5
SMA | 2.2758 × 10^3 | 1.6430 × 10^2 | 5.3866 × 10^5 | 7.5932 × 10^5 | 2.8899 × 10^5 | 4.1376 × 10^5
WOA | 2.5498 × 10^3 | 2.6048 × 10^2 | 1.8399 × 10^6 | 2.1360 × 10^6 | 2.5475 × 10^6 | 2.1432 × 10^6
MFO | 2.5970 × 10^3 | 2.3042 × 10^2 | 5.5814 × 10^6 | 1.5538 × 10^7 | 8.3761 × 10^6 | 3.2804 × 10^7
BA | 2.7586 × 10^3 | 2.6623 × 10^2 | 1.5614 × 10^5 | 1.2508 × 10^5 | 6.8822 × 10^5 | 2.2271 × 10^5
SCA | 2.4064 × 10^3 | 1.3907 × 10^2 | 3.0596 × 10^6 | 1.5234 × 10^6 | 2.2375 × 10^7 | 1.0658 × 10^7
PSO | 2.4592 × 10^3 | 2.9620 × 10^2 | 1.8900 × 10^5 | 1.5659 × 10^5 | 6.6243 × 10^3 | 5.6651 × 10^3
DE | 1.8402 × 10^3 | 5.3057 × 10^1 | 2.9290 × 10^5 | 1.5946 × 10^5 | 7.9477 × 10^3 | 4.8951 × 10^3

Algorithm | F19 Avg | F19 Std | F20 Avg | F20 Std | F21 Avg | F21 Std
ECPO | 2.2751 × 10^3 | 1.2262 × 10^2 | 2.3819 × 10^3 | 1.5295 × 10^1 | 2.3992 × 10^3 | 5.4327 × 10^2
PO | 2.5437 × 10^3 | 1.2347 × 10^2 | 2.4876 × 10^3 | 6.8086 × 10^1 | 3.5938 × 10^3 | 2.0824 × 10^3
SMA | 2.4542 × 10^3 | 1.2723 × 10^2 | 2.4753 × 10^3 | 2.6051 × 10^1 | 4.0557 × 10^3 | 2.1240 × 10^3
WOA | 2.6637 × 10^3 | 2.2014 × 10^2 | 2.5756 × 10^3 | 6.2939 × 10^1 | 7.4764 × 10^3 | 1.6306 × 10^3
MFO | 2.6983 × 10^3 | 2.4920 × 10^2 | 2.5084 × 10^3 | 3.9069 × 10^1 | 6.5151 × 10^3 | 1.4708 × 10^3
BA | 2.9531 × 10^3 | 2.2637 × 10^2 | 2.5920 × 10^3 | 6.6446 × 10^1 | 7.1956 × 10^3 | 1.3313 × 10^3
SCA | 2.5736 × 10^3 | 1.0855 × 10^2 | 2.5513 × 10^3 | 1.7852 × 10^1 | 7.8573 × 10^3 | 2.5756 × 10^3
PSO | 2.6252 × 10^3 | 2.2683 × 10^2 | 2.4644 × 10^3 | 4.4789 × 10^1 | 4.9218 × 10^3 | 2.1152 × 10^3
DE | 2.1201 × 10^3 | 6.5962 × 10^1 | 2.4092 × 10^3 | 8.7803 × 10^0 | 4.3067 × 10^3 | 2.0968 × 10^3

Algorithm | F22 Avg | F22 Std | F23 Avg | F23 Std | F24 Avg | F24 Std
ECPO | 2.7283 × 10^3 | 1.8692 × 10^1 | 2.9069 × 10^3 | 2.4259 × 10^1 | 2.8953 × 10^3 | 1.8401 × 10^1
PO | 2.9626 × 10^3 | 6.5892 × 10^1 | 3.0979 × 10^3 | 6.7596 × 10^1 | 2.9305 × 10^3 | 2.4406 × 10^1
SMA | 2.8616 × 10^3 | 2.8845 × 10^1 | 3.0110 × 10^3 | 2.6314 × 10^1 | 3.0129 × 10^3 | 4.4877 × 10^1
WOA | 3.0630 × 10^3 | 1.0752 × 10^2 | 3.1687 × 10^3 | 7.7939 × 10^1 | 2.9545 × 10^3 | 3.7077 × 10^1
MFO | 2.8320 × 10^3 | 3.2325 × 10^1 | 2.9960 × 10^3 | 3.1654 × 10^1 | 3.4894 × 10^3 | 6.4121 × 10^2
BA | 3.3552 × 10^3 | 1.2481 × 10^2 | 3.3999 × 10^3 | 1.3564 × 10^2 | 2.9048 × 10^3 | 2.3391 × 10^1
SCA | 2.9882 × 10^3 | 2.4771 × 10^1 | 3.1570 × 10^3 | 2.2646 × 10^1 | 3.1938 × 10^3 | 7.2395 × 10^1
PSO | 3.2553 × 10^3 | 1.4516 × 10^2 | 3.3690 × 10^3 | 1.0292 × 10^2 | 2.8797 × 10^3 | 4.1309 × 10^0
DE | 2.7578 × 10^3 | 9.6283 × 10^0 | 2.9551 × 10^3 | 1.5861 × 10^1 | 2.8874 × 10^3 | 3.6306 × 10^−1

Algorithm | F25 Avg | F25 Std | F26 Avg | F26 Std | F27 Avg | F27 Std
ECPO | 4.1785 × 10^3 | 7.8488 × 10^2 | 3.2169 × 10^3 | 1.1041 × 10^1 | 3.1625 × 10^3 | 4.9575 × 10^1
PO | 6.4763 × 10^3 | 1.6166 × 10^3 | 3.3401 × 10^3 | 6.0505 × 10^1 | 3.3120 × 10^3 | 4.1035 × 10^1
SMA | 5.1692 × 10^3 | 7.5012 × 10^2 | 3.2677 × 10^3 | 4.4365 × 10^1 | 3.4222 × 10^3 | 5.4394 × 10^1
WOA | 7.7623 × 10^3 | 1.0328 × 10^3 | 3.3540 × 10^3 | 6.1127 × 10^1 | 3.3084 × 10^3 | 5.2989 × 10^1
MFO | 5.9229 × 10^3 | 3.8663 × 10^2 | 3.2597 × 10^3 | 2.9353 × 10^1 | 4.3155 × 10^3 | 9.3149 × 10^2
BA | 9.0215 × 10^3 | 2.3002 × 10^3 | 3.4622 × 10^3 | 1.3974 × 10^2 | 3.1250 × 10^3 | 5.2099 × 10^1
SCA | 6.9036 × 10^3 | 3.2348 × 10^2 | 3.3982 × 10^3 | 3.9296 × 10^1 | 3.8181 × 10^3 | 1.4307 × 10^2
PSO | 6.8379 × 10^3 | 2.3430 × 10^3 | 3.3153 × 10^3 | 3.2316 × 10^2 | 3.1413 × 10^3 | 5.2797 × 10^1
DE | 4.6352 × 10^3 | 7.6760 × 10^1 | 3.2051 × 10^3 | 3.4090 × 10^0 | 3.1925 × 10^3 | 5.2602 × 10^1

Algorithm | F28 Avg | F28 Std | F29 Avg | F29 Std
ECPO | 3.6216 × 10^3 | 1.6586 × 10^2 | 9.5921 × 10^3 | 4.2939 × 10^3
PO | 4.5766 × 10^3 | 3.6111 × 10^2 | 5.6228 × 10^6 | 3.4315 × 10^6
SMA | 4.0411 × 10^3 | 1.9995 × 10^2 | 4.7570 × 10^6 | 2.3763 × 10^6
WOA | 4.8224 × 10^3 | 4.7130 × 10^2 | 1.0089 × 10^7 | 6.4263 × 10^6
MFO | 4.1861 × 10^3 | 3.5153 × 10^2 | 1.2444 × 10^6 | 2.8259 × 10^6
BA | 5.0852 × 10^3 | 4.7170 × 10^2 | 1.2750 × 10^6 | 7.5539 × 10^5
SCA | 4.6824 × 10^3 | 2.5762 × 10^2 | 7.7851 × 10^7 | 3.2081 × 10^7
PSO | 3.9629 × 10^3 | 3.1779 × 10^2 | 6.5335 × 10^3 | 3.3407 × 10^3
DE | 3.5197 × 10^3 | 6.4547 × 10^1 | 1.2726 × 10^4 | 4.0079 × 10^3

Overall Rank

Algorithm | Rank | +/=/− | AVG | Computational Time (s)
ECPO | 1 | ~ | 2.0345 | 183.16
PO | 5 | 26/3/0 | 5.1379 | 118.33
SMA | 4 | 28/1/0 | 4.9655 | 176.48
WOA | 8 | 29/0/0 | 7.1724 | 98.34
MFO | 6 | 28/1/0 | 6.3103 | 114.83
BA | 7 | 25/0/4 | 6.5517 | 116.22
SCA | 9 | 29/0/0 | 7.3448 | 106.56
PSO | 3 | 17/6/6 | 3.2069 | 91.75
DE | 2 | 14/8/7 | 2.2759 | 143.73
Table 4. The p-values of the ECPO versus other algorithms on CEC2017.

Function | PO | SMA | WOA | MFO
F1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F2 | 1.53 × 10^−1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.98 × 10^−6
F3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F4 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F7 | 2.35 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6
F8 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F9 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.13 × 10^−6
F10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F11 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6
F12 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F13 | 1.71 × 10^−1 | 2.22 × 10^−4 | 3.52 × 10^−6 | 5.04 × 10^−1
F14 | 6.34 × 10^−6 | 7.71 × 10^−4 | 2.13 × 10^−6 | 2.35 × 10^−6
F15 | 3.52 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.88 × 10^−6
F16 | 1.73 × 10^−6 | 2.60 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F17 | 8.22 × 10^−2 | 5.44 × 10^−1 | 9.71 × 10^−5 | 6.84 × 10^−3
F18 | 1.73 × 10^−6 | 3.52 × 10^−6 | 1.73 × 10^−6 | 2.35 × 10^−6
F19 | 7.69 × 10^−6 | 3.72 × 10^−5 | 3.52 × 10^−6 | 6.98 × 10^−6
F20 | 1.36 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F21 | 1.64 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F22 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6
F23 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F24 | 5.31 × 10^−5 | 1.92 × 10^−6 | 3.52 × 10^−6 | 3.88 × 10^−6
F25 | 1.49 × 10^−5 | 3.06 × 10^−4 | 1.92 × 10^−6 | 1.73 × 10^−6
F26 | 1.73 × 10^−6 | 3.18 × 10^−6 | 1.73 × 10^−6 | 4.73 × 10^−6
F27 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F28 | 1.73 × 10^−6 | 3.52 × 10^−6 | 1.73 × 10^−6 | 5.22 × 10^−6
F29 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6

Function | BA | SCA | PSO | DE
F1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.04 × 10^−1 | 1.99 × 10^−1
F2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.25 × 10^−2
F4 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.29 × 10^−4
F5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 7.81 × 10^−3
F6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 7.66 × 10^−1
F7 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.52 × 10^−6 | 2.22 × 10^−4
F8 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 8.21 × 10^−6
F9 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
F10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.20 × 10^−4 | 1.36 × 10^−1
F11 | 2.35 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.92 × 10^−6
F12 | 1.73 × 10^−6 | 1.73 × 10^−6 | 7.81 × 10^−1 | 6.64 × 10^−4
F13 | 9.32 × 10^−6 | 1.59 × 10^−3 | 3.18 × 10^−6 | 5.72 × 10^−1
F14 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.99 × 10^−1 | 7.66 × 10^−1
F15 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.18 × 10^−6 | 4.53 × 10^−4
F16 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.70 × 10^−2
F17 | 3.38 × 10^−3 | 1.73 × 10^−6 | 1.32 × 10^−2 | 7.19 × 10^−1
F18 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.00 × 10^−1 | 5.19 × 10^−2
F19 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.60 × 10^−6 | 1.49 × 10^−5
F20 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 3.18 × 10^−6
F21 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.22 × 10^−5 | 1.36 × 10^−5
F22 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 8.47 × 10^−6
F23 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.60 × 10^−6
F24 | 4.07 × 10^−2 | 1.73 × 10^−6 | 1.49 × 10^−5 | 1.25 × 10^−1
F25 | 2.88 × 10^−6 | 1.73 × 10^−6 | 4.58 × 10^−5 | 8.73 × 10^−3
F26 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.71 × 10^−2 | 5.79 × 10^−5
F27 | 1.32 × 10^−2 | 1.73 × 10^−6 | 4.17 × 10^−1 | 1.25 × 10^−2
F28 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.31 × 10^−5 | 4.11 × 10^−3
F29 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.32 × 10^−3 | 8.73 × 10^−3
Table 5. Description of the experimental datasets.

Dataset | Samples | Features | Classes
Australian | 690 | 14 | 2
cmc | 1473 | 9 | 3
heartandlung | 139 | 23 | 2
hepatitisfulldata | 155 | 19 | 2
glass | 214 | 9 | 6
heart | 303 | 13 | 5
thyroid_2class | 187 | 8 | 2
Leukemia | 72 | 7130 | 2
Leukemia1 | 72 | 5327 | 3
M-of-n | 1000 | 13 | 2
Table 6. Comparison results of the ECPO with other binary metaheuristic algorithms on KNN error rate. Each cell reports the mean error rate with the variance in parentheses.

Dataset | ECPO | BGWO | BGSA | BPSO | BBA | BSSA
Australian | 7.32 × 10^−2 (2.46 × 10^−2) | 8.00 × 10^−2 (2.38 × 10^−2) | 8.37 × 10^−2 (2.16 × 10^−2) | 7.97 × 10^−2 (1.97 × 10^−2) | 2.36 × 10^−1 (8.46 × 10^−2) | 7.82 × 10^−2 (3.72 × 10^−2)
cmc | 4.54 × 10^−1 (3.16 × 10^−2) | 4.82 × 10^−1 (1.82 × 10^−2) | 4.82 × 10^−1 (1.83 × 10^−2) | 4.75 × 10^−1 (2.98 × 10^−2) | 5.65 × 10^−1 (5.23 × 10^−2) | 4.78 × 10^−1 (2.31 × 10^−2)
heartandlung | 7.14 × 10^−3 (2.29 × 10^−2) | 1.44 × 10^−2 (3.13 × 10^−2) | 1.49 × 10^−2 (3.02 × 10^−2) | 1.43 × 10^−2 (2.36 × 10^−2) | 1.59 × 10^−1 (1.21 × 10^−1) | 1.67 × 10^−2 (3.02 × 10^−2)
hepatitisfulldata | 5.36 × 10^−3 (1.65 × 10^−3) | 1.95 × 10^−2 (3.15 × 10^−2) | 2.59 × 10^−2 (6.86 × 10^−3) | 8.32 × 10^−3 (1.97 × 10^−3) | 2.06 × 10^−1 (8.32 × 10^−2) | 1.65 × 10^−2 (2.76 × 10^−3)
glass | 9.86 × 10^−2 (6.49 × 10^−2) | 1.17 × 10^−1 (4.25 × 10^−2) | 1.07 × 10^−1 (4.99 × 10^−2) | 1.21 × 10^−1 (5.44 × 10^−2) | 2.92 × 10^−1 (1.08 × 10^−1) | 1.06 × 10^−1 (4.93 × 10^−2)
heart | 4.72 × 10^−2 (1.38 × 10^−2) | 7.03 × 10^−2 (4.43 × 10^−2) | 5.55 × 10^−2 (5.31 × 10^−2) | 7.07 × 10^−2 (3.23 × 10^−2) | 2.59 × 10^−1 (9.71 × 10^−2) | 6.26 × 10^−2 (4.25 × 10^−2)
thyroid_2class | 2.02 × 10^−1 (6.72 × 10^−2) | 2.01 × 10^−1 (6.49 × 10^−2) | 2.14 × 10^−1 (6.85 × 10^−2) | 2.08 × 10^−1 (7.31 × 10^−2) | 3.21 × 10^−1 (8.81 × 10^−2) | 2.17 × 10^−1 (7.59 × 10^−2)
Leukemia | 0.00 × 10^0 (0.00 × 10^0) | 0.00 × 10^0 (0.00 × 10^0) | 1.02 × 10^−1 (4.17 × 10^−2) | 0.00 × 10^0 (0.00 × 10^0) | 1.67 × 10^−2 (5.21 × 10^−2) | 0.00 × 10^0 (0.00 × 10^0)
Leukemia1 | 0.00 × 10^0 (0.00 × 10^0) | 0.00 × 10^0 (0.00 × 10^0) | 6.31 × 10^−2 (6.07 × 10^−2) | 0.00 × 10^0 (0.00 × 10^0) | 5.19 × 10^−2 (6.05 × 10^−2) | 0.00 × 10^0 (0.00 × 10^0)
M-of-n | 0.00 × 10^0 (0.00 × 10^0) | 0.00 × 10^0 (0.00 × 10^0) | 0.00 × 10^0 (0.00 × 10^0) | 0.00 × 10^0 (0.00 × 10^0) | 1.70 × 10^−1 (9.05 × 10^−2) | 0.00 × 10^0 (0.00 × 10^0)
Table 7. Comparison results of the ECPO with other binary metaheuristic algorithms on the number of features selected. Each cell reports the mean number of selected features with the variance in parentheses.

Dataset | ECPO | BGWO | BGSA | BPSO | BBA | BSSA
Australian | 6.3 (1.24) | 5.3 (1.94) | 6.1 (2.23) | 5.1 (1.37) | 5.8 (1.54) | 6.9 (0.99)
cmc | 5.3 (0.823) | 4.6 (1.08) | 6.3 (0.99) | 5.1 (0.87) | 5.7 (1.31) | 5.3 (0.94)
heartandlung | 13.6 (2.95) | 4.6 (0.96) | 2.8 (1.17) | 2.6 (1.13) | 7.3 (3.41) | 4.3 (2.05)
hepatitisfulldata | 10.2 (2.52) | 4.3 (1.15) | 3.8 (1.70) | 6.1 (2.02) | 7 (2.82) | 5.4 (2.59)
glass | 5.4 (0.96) | 3.8 (0.63) | 4.6 (1.42) | 3.9 (0.73) | 3.7 (1.56) | 4.2 (0.78)
heart | 8.1 (1.10) | 5.9 (1.19) | 6.5 (1.26) | 5.6 (1.64) | 5.5 (1.50) | 6.4 (1.71)
thyroid_2class | 6.9 (0.73) | 6.5 (1.17) | 4.3 (0.67) | 4 (1.24) | 3.7 (1.33) | 3.8 (1.39)
Leukemia | 2657.6 (24.51) | 5356.5 (56.65) | 531.7 (24.45) | 2875 (42.43) | 3240.3 (34.15) | 2795.8 (394.05)
Leukemia1 | 3425.4 (50.57) | 4177.6 (43.03) | 377.1 (20.82) | 2092.7 (45.44) | 2389.2 (25.96) | 2215 (165.48)
M-of-n | 8.2 (0.84) | 6 (0) | 6 (0) | 6 (0) | 7.9 (1.72) | 6.1 (0.31)
