Article

BHHO-TVS: A Binary Harris Hawks Optimizer with Time-Varying Scheme for Solving Data Classification Problems

1 Faculty of Information Technology, Sebha University, Sebha 18758, Libya
2 Department of Engineering and Technology Sciences, Arab American University, P.O. Box 240 Jenin, Zababdeh 13, Palestine
3 Information Technology Engineering, Al-Quds University, Abu Deis, P.O. Box 20002, Jerusalem 51000, Palestine
4 Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
5 Department of Computer Science, Birzeit University, P.O. Box 14, Birzeit, Palestine
6 Computer Science Department, Southern Connecticut State University, New Haven, CT 06514, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(14), 6516; https://doi.org/10.3390/app11146516
Submission received: 15 June 2021 / Revised: 10 July 2021 / Accepted: 12 July 2021 / Published: 15 July 2021

Abstract
Data classification is a challenging problem, being highly sensitive to noise and to the high dimensionality of the data. Reducing model complexity can therefore improve the performance of a classification model. In this research, we propose a novel feature selection technique based on a Binary Harris Hawks Optimizer with a Time-Varying Scheme (BHHO-TVS). The proposed BHHO-TVS adopts a time-varying transfer function that leverages the location vector to balance the exploration and exploitation power of the HHO. Eighteen well-known datasets from the UCI repository were used to demonstrate the significance of the proposed approach. The reported results show that BHHO-TVS outperforms BHHO with traditional binarization schemes as well as other binary feature selection methods, including the binary gravitational search algorithm (BGSA), binary particle swarm optimization (BPSO), the binary bat algorithm (BBA), the binary whale optimization algorithm (BWOA), and the binary salp swarm algorithm (BSSA). Compared with similar feature selection approaches introduced in previous studies, the proposed method achieves the best accuracy rates on 67% of the datasets.

1. Introduction

Data mining is recognized as an important step in the knowledge discovery process. It has become an active research domain due to the presence of huge collections of digital data that need to be explored and transformed into useful patterns. The main role of data mining is to develop methods that assist in finding potentially useful hidden patterns in large data collections [1]. In data mining techniques such as classification, the preprocessing of data has a great influence on the quality of the discovered patterns and the efficiency of machine learning classifiers [1,2]. Feature selection (FS) is one of the main preprocessing techniques; it discovers and retains informative features and eliminates noisy and irrelevant ones. Selecting the optimal or a near-optimal subset of the given features enhances the performance of classification models and reduces the computational cost [2,3,4].
Based on the criteria used to evaluate the selected subset of features, FS approaches are classified into two classes: filter and wrapper approaches [3]. Filter techniques depend on scoring metrics such as chi-square and information gain to estimate the quality of the picked subset of features. More precisely, in filter approaches, a scoring metric (e.g., chi-square) is used to rank the features, and then only those whose weights are greater than or equal to a predefined threshold are retained. In contrast, wrapper approaches employ a machine learning classifier such as K-Nearest Neighbors (KNN) or Support Vector Machines (SVM) to evaluate the feature subset.
Another way to categorize FS methods is by the selection mechanism used to explore the feature space in search of the most informative features. The search algorithm's task is to generate subsets of features; a machine learning algorithm is then applied to assess the generated subsets and find the optimal one [4,5,6]. Compared to filter approaches, wrappers achieve superior performance, especially in terms of accuracy, since they consider the dependencies between features in the dataset, which filter FS may ignore [7]. However, filter FS is better than wrapper FS in terms of computational cost [4].
Commonly, for a wide range of data mining applications, reaching the optimal subset of features is a challenging task. The size of the search space grows exponentially with the number of features (i.e., 2^k − 1 possible non-empty subsets can be generated for a dataset with k features). Accordingly, FS is an intractable NP-hard optimization problem for which exhaustive search and even conventional exact optimization methods are impractical. For that reason, the FS domain has been extensively investigated by many researchers [5,8]. For example, in [9], an improved version of the binary Particle Swarm Optimization (PSO) algorithm was introduced for the FS problem. An unsupervised FS approach based on Ant Colony Optimization (ACO) was proposed in [10]. Moreover, an FS technique that hybridizes the Genetic Algorithm (GA) and PSO was introduced in [11]. Finally, a binary variant of the hybrid Grey Wolf Optimization (GWO) and PSO is presented in [12] to tackle the FS problem.
Meta-heuristic algorithms have been very successful in tackling many optimization problems in areas such as data mining, machine learning, engineering design, production tasks, and FS [13]. Meta-heuristic algorithms are general-purpose stochastic methods that can find a near-optimal solution within a reasonable time. Lately, various Swarm Intelligence (SI) based meta-heuristics have been developed and have shown good performance in handling FS tasks in different fields [14,15]. Examples include the Whale Optimization Algorithm (WOA) [16], the Slime Mould Algorithm (SMA) [17], the Marine Predators Algorithm (MPA) [18], and the Grey Wolf Optimizer (GWO) [19].
Recently, Heidari and his co-authors proposed a new nature-inspired meta-heuristic optimizer named Harris Hawks Optimization (HHO) [20]. HHO simulates the behavior of hawks when they launch surprise attacks on their prey from different directions. HHO has several merits: it is simple, flexible, and free of internal parameters. Furthermore, it has a variety of exploitation and exploration strategies that ensure good results and a favorable convergence speed [21]. The original real-valued version of the HHO algorithm has been applied in conjunction with various techniques to solve many optimization problems belonging to different domains [22,23,24,25,26]. HHO has also been applied to solving FS problems [27,28,29].
Broadly, several binarization schemes have been introduced to adapt real-valued meta-heuristics to discrete search spaces. These approaches follow two major branches. The first branch, named the continuous-binary operator approach, adapts the meta-heuristic to work in a binary search space by redefining the basic real-valued operators of its equations as binary operators [30]. In the second branch, named two-step binarization, the real-valued operators of the meta-heuristic are kept unchanged. To conduct the binarization, the first step employs a transfer function (TF) to convert the real-valued solution in R^n into an intermediate probability vector in [0, 1]^n; each element of the probability vector determines the probability of transforming its equivalent in R^n into 0 or 1. In the second step, a binarization rule is applied to transform the output of the TF into a binary solution [30]. In general, the second binarization scheme is the one commonly used for adapting meta-heuristics to binary search spaces. In this regard, Transfer Functions (TFs) are classified by their shapes into two types: S-shaped and V-shaped [31,32,33]. Traditional (time-independent) TFs are not able to deliver a satisfactory balance between exploration and exploitation of the search space. To overcome this shortcoming, several time-varying TFs have been proposed and applied with many meta-heuristic algorithms to provide a good balance between exploration and exploitation over the iterations [34,35,36].
In this work, the authors integrate time-varying versions of V-shaped TFs into the HHO algorithm to convert the continuous HHO into a binary version, called BHHO, to be utilized for FS tasks. The benefit of using time-varying functions with the BHHO algorithm is to enhance its search ability by achieving a better balance between the exploration and exploitation phases. Time-varying functions also help prevent BHHO from getting stuck in local minima. The proposed approach is verified on eighteen benchmark datasets and reveals excellent performance compared to other state-of-the-art methods.
The rest of this article is organized as follows: Section 2 introduces the related works, whereas Section 3 presents the HHO algorithm. Section 4 presents the proposed BHHO variants. Section 5 outlines FS using the BHHO algorithm. Results and discussions are presented in Section 6, while the conclusion in Section 7 sums up the main findings of this work.

2. Related Works

The literature reveals that meta-heuristic algorithms have been very successful in tackling FS problems. GA and PSO algorithms have been utilized to develop effective FS methods for many problems. Several GA-based approaches have been proposed; examples are [37,38,39,40,41]. Moreover, many binary variants of PSO have been applied in FS methods; examples can be found in Chuang et al. [42], Chantar et al. [4], Mafarja et al. [43], and Moradi et al. [44]. For instance, in Chuang et al. [42], an improved version of binary PSO named Chaotic BPSO was used for FS, in which two chaotic maps, called the logistic and tent maps, were embedded in BPSO to estimate the value of the inertia weight in the velocity equation of the PSO algorithm. Another example is the recent work of Mafarja et al. [43], where five strategies were used to update the value of the inertia weight parameter during the search process; the proposed approaches showed better performance when compared with other similar FS approaches. The ACO algorithm, introduced by Dorigo et al. [45], has also been applied to FS; as examples, one can refer to the work of Deriche [46], Chen et al. [47], and Kashef et al. [48]. The Artificial Bee Colony (ABC) optimizer [49] has been used as well; an example of using the ABC algorithm for FS is presented in [50]. In addition, as shown in [51], a binary version of the well-known meta-heuristic Bat Algorithm (BA) was used as an FS method; experimental results demonstrated the superiority of the BA-based FS method over GA- and PSO-based methods. Besides the algorithms mentioned above, many recently introduced meta-heuristic algorithms such as the Salp Swarm Algorithm (SSA) [6], Moth-Flame Optimization (MFO) [52], the Dragonfly Algorithm (DA) [53], and Ant Lion Optimization (ALO) [54] have been successfully utilized in FS for many classification problems.
The Harris Hawks algorithm has been utilized to solve many optimization problems. For instance, as stated in [23], HHO was used in the civil engineering domain to improve the performance of an artificial neural network classifier in predicting soil slope stability. In addition, a hybrid model based on the HHO and Differential Evolution (DE) algorithms has been applied to the task of color image segmentation; using different evaluation measures, the results prove that the HHO-DE based approach is superior to several state-of-the-art image segmentation techniques [24]. A novel automatic approach combining deep learning and optimization algorithms for the recognition of nine control chart patterns (CCPs) was proposed in [25], where an HHO algorithm was applied for the optimal tuning of ConvNet parameters. In addition, an improved version of the HHO algorithm that incorporates three strategies, namely chaos, topological multi-population, and differential evolution (DE), was proposed in [26]; the DE-driven multi-population HHO (CMDHHO) algorithm has shown its effectiveness in solving real-world optimization problems.
The investigated literature reveals that several binary versions of HHO have been proposed for FS problems since the appearance of the HHO algorithm in 2019 [27,28,29,55]. As presented in [27], a set of binary variants of the HHO algorithm was proposed as wrapper FS methods: eight V-shaped and S-shaped TFs and four quadratic functions were used to transform the search space from continuous to binary. The performance of the proposed BHHO variants was compared with binary forms of different optimization algorithms, including the DE algorithm, the binary Flower Pollination Algorithm (FPA), the binary Multi-Verse Optimizer (MVO), binary SSA, and GA; the experimental results show that the QBHHO approach mostly performs best in terms of classification accuracy, fitness value, and number of selected features. As stated in [28], two binary variants of the HHO algorithm were proposed as wrapper FS approaches in which two transfer functions (S-shaped and V-shaped) were used to transform the continuous search space into a binary one. Validated on several challenging high-dimensional, low-sample-size datasets against different optimization algorithms (e.g., GA, BPSO, and BBA), the S-shaped transfer-function-based BHHO showed promising results in dealing with challenging datasets. Recently, Ref. [55] proposed a wrapper-based FS for text classification in the Arabic context utilizing four binary variants of the HHO algorithm; the proposed BHHO variants achieved excellent performance compared to seven wrapper-based methods.
Traditional time-independent TFs are the most commonly used means of adapting meta-heuristic algorithms to binary search spaces. For example, Kennedy and Eberhart [31] used an S-shaped TF to convert the PSO optimizer to deal with binary optimization problems, and a V-shaped transfer function was adopted in [33] to introduce a binary version of the Gravitational Search Algorithm (GSA). In 2013, Mirjalili and Lewis [32] introduced six new V-shaped and S-shaped TFs for mapping a continuous search space into a binary one in order to convert the continuous version of the PSO algorithm into a binary one. Experimental results showed that the newly proposed V-shaped group of TFs can remarkably improve the performance of the classic version of PSO, especially in terms of convergence speed and the avoidance of local minima. In addition, the same set of TFs introduced in [32] was applied by Mafarja et al. [56] to propose six versions of a binary ALO; the results show that equipping ALO with V-shaped TFs significantly improves its performance in terms of accuracy and avoiding local minima.
Time-varying TFs were proposed by Islam et al. [34] to boost the performance of BPSO: a modified form of BPSO called TVT-BPSO, adopting a time-varying transfer function, was introduced to overcome the drawbacks of traditional TFs by providing a better balance between exploration and exploitation throughout the optimization process. In addition, Mafarja et al. [35] applied several time-varying S-shaped and V-shaped TFs to improve the exploitation and exploration power of the binary DA (BDA); the experimental results confirmed the superiority of the time-varying S-shaped BDA approaches over the other tested approaches. Recently, Kahya et al. [36] investigated the use of a time-varying transfer function with a binary WOA for FS. The results confirmed that BWOA-TV2 performs FS consistently; it also provides high classification accuracy with better convergence than conventional algorithms such as the Binary Firefly Algorithm (BFA) and BPSO.

3. Harris Hawks Optimization (HHO)

HHO is a recent meta-heuristic optimization algorithm introduced by Heidari et al. in 2019 [20]. HHO mimics the hunting mechanism of Harris hawks in nature. The study of Harris hawks' behavior revealed that these birds use various sophisticated strategies in surprise attacks to hunt fleeing prey (mostly a rabbit). As shown in the original publication of HHO, the mathematical modeling of this algorithm confirms its effectiveness in tackling diverse optimization problems. Like any other population-based meta-heuristic optimizer, HHO generates a population of search agents and updates them through exploration and exploitation phases. The exploration of this algorithm has two stages, while the exploitation consists of four stages [20]. Figure 1 depicts the stages of the HHO optimizer. The following subsections describe the phases and mathematical models of HHO.

3.1. Exploration Phase

In this phase, the search agents (hawks) are updated through two strategies, both of which have an equal chance of being selected. In HHO, agents either perch with respect to the positions of other close individuals and the prey, or perch at random positions (tall trees). These strategies can be mathematically formulated as in Equation (1):
$$X(t+1) = \begin{cases} X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X(t) \right| & p \geq 0.5 \\ \left( X_{prey}(t) - X_m(t) \right) - r_3 \left( LB + r_4 (UB - LB) \right) & p < 0.5 \end{cases} \qquad (1)$$
where X(t+1) denotes the hawk's position vector at the next iteration, X(t) is the hawk's current position, X_prey(t) refers to the position of the prey, r1, r2, r3, r4, and p are random numbers generated within the range (0, 1) in each iteration, LB and UB are the lower and upper boundaries of the variables, respectively, X_rand(t) denotes a randomly picked individual (hawk) from the current generation, and X_m(t) refers to the mean position of the current generation of individuals, which is calculated using Equation (2):
$$X_m(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \qquad (2)$$
where N indicates the size of the population of hawks, and X_i(t) denotes the location of individual i at iteration t.
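To make the update concrete, the following NumPy sketch implements the exploration rule of Equations (1) and (2); the function name and interface are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def exploration_step(X, i, X_prey, lb, ub, rng):
    """One exploration update for hawk i (Equations (1) and (2)); a sketch.

    X      : (N, D) array of current hawk positions
    X_prey : (D,) best solution found so far (the prey)
    lb, ub : lower/upper bounds (scalars or (D,) arrays)
    rng    : a numpy random Generator
    """
    N, D = X.shape
    r1, r2, r3, r4, p = rng.random(5)
    if p >= 0.5:
        # Perch based on a randomly selected hawk from the current population
        X_rand = X[rng.integers(N)]
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # Perch based on the prey and the mean position of the flock (Equation (2))
    X_m = X.mean(axis=0)
    return (X_prey - X_m) - r3 * (lb + r4 * (ub - lb))
```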

3.2. Moving from Exploration to Exploitation

In general, to achieve a suitable balance between the core searching behaviors, an algorithm requires an appropriate way to transition from exploration to exploitation. In HHO, the decreasing energy of the fleeing prey is used to control this part of the search process, since this energy decreases during the escaping behavior. The energy of the escaping prey is formulated as in Equation (3):
$$E = 2 E_0 \left( 1 - \frac{t}{T} \right) \qquad (3)$$
where E denotes the escaping energy of the prey (rabbit), E0 is the initial value of the rabbit's energy, and T indicates the maximum number of generations. At each iteration t, E0 changes at random within the range (−1, 1). The prey is physically strengthening when the value of E0 increases from 0 to 1, while it is weakening if E0 decreases from 0 to −1. The escaping energy is reduced over the generations. When |E| ≥ 1, the algorithm performs exploration by searching different regions to locate a rabbit, whereas it performs exploitation when |E| < 1.
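A minimal sketch of the energy schedule in Equation (3) and the resulting phase switch, with an assumed NumPy-style interface:

```python
def escaping_energy(t, T, rng):
    """Escaping energy E at iteration t (Equation (3)); a sketch.

    E0 is redrawn uniformly in (-1, 1) every iteration; the magnitude of the
    returned E decides the phase: |E| >= 1 -> exploration, |E| < 1 -> exploitation.
    """
    E0 = 2.0 * rng.random() - 1.0       # initial energy in (-1, 1)
    return 2.0 * E0 * (1.0 - t / T)     # decays linearly toward 0 as t approaches T
```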

3.3. Exploitation Phase

This phase comes after HHO completes the exploration of promising regions of the search space. At this stage, HHO puts more emphasis on intensifying better solutions to reach the optimal one. To achieve that, Harris hawks perform the so-called surprise pounce in order to attack the prey. Since the prey always attempts to flee from a dangerous place, various chasing strategies occur in reality. Depending on the escaping mechanisms of the prey and the chasing behavior of the hawks, four possible attacking behaviors are formulated in the HHO optimizer. Let r be the chance that the prey succeeds in escaping, where r < 0.5 indicates that the prey succeeds in escaping and r ≥ 0.5 means it does not. The hawks perform one of two actions, named soft and hard besiege, to catch the prey; the prey is thus surrounded from various directions, softly or hard, according to its remaining energy. This process is modeled using the parameter |E|: a soft besiege takes place when |E| ≥ 0.5, and a hard besiege happens when |E| < 0.5.

3.3.1. Soft Besiege

If r ≥ 0.5 and |E| ≥ 0.5, the prey still has sufficient energy to run; thus, the hawks surround it softly to make it tired and then perform the surprise pounce. This behavior is mathematically modeled by the following two rules:
$$X(t+1) = \Delta X(t) - E \left| J X_{prey}(t) - X(t) \right| \qquad (4)$$
$$\Delta X(t) = X_{prey}(t) - X(t) \qquad (5)$$
where ΔX(t) denotes the difference between the position vector of the prey and the hawk's current position, E denotes the escaping energy, r5 is a randomly generated number in the range [0, 1], and J = 2(1 − r5) denotes the random jump strength of the prey during the escaping operation.

3.3.2. Hard Besiege

If r ≥ 0.5 and |E| < 0.5, the prey is extremely tired and its escaping energy is low. Consequently, the hawks encircle the targeted prey in a hard besiege and perform the surprise pounce. In this case, the following formula is used to update the current positions:
$$X(t+1) = X_{prey}(t) - E \left| \Delta X(t) \right| \qquad (6)$$

3.3.3. Soft Besiege with Progressive Rapid Dives

In the soft besiege stage, if r < 0.5 while still |E| ≥ 0.5, the prey has enough energy to potentially escape, so a more sophisticated soft besiege is performed prior to the surprise pounce. To model the escaping styles of the prey in this case, the HHO algorithm uses the Lévy flight strategy to simulate the actual movements of the prey as well as the abrupt, rapid, and irregular movements of the search agents (hawks) toward the escaping prey (rabbit). Based on the actual behavior of Harris hawks, it is assumed that they decide their next move according to the rule in Equation (7):
$$Y = X_{prey}(t) - E \left| J X_{prey}(t) - X(t) \right| \qquad (7)$$
The hawks then compare this movement with the previous dive to see which one is better. If the previous dive is still better, they perform a rapid dive based on the Lévy flight (LF) pattern using Equation (8):
$$Z = Y + S \times LF(D) \qquad (8)$$
where D indicates the dimension of the given search space, S denotes a random vector of size 1 × D, and LF represents the Lévy flight function, whose value is obtained using Equation (9):
$$LF(x) = 0.01 \times \frac{u \times \sigma}{|v|^{\frac{1}{\beta}}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi \beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta-1}{2}\right)}} \right)^{\frac{1}{\beta}} \qquad (9)$$
where u and v are random numbers inside (0, 1), β equals 1.5, and Γ(·) is the standard gamma function.
Finally, in the soft besiege stage, the positions of the hawks are updated using Equation (10):
$$X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases} \qquad (10)$$
where F(·) denotes the fitness function for a given solution, and Y and Z are calculated using Equations (7) and (8).
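The following sketch implements the Lévy flight of Equation (9) and the dive candidates of Equations (7), (8), and (10). Here u and v are drawn inside (0, 1) as stated above (Mantegna's classic scheme draws them from Gaussians instead), and keeping the current position when neither candidate improves is our reading of Equation (10); all names are illustrative.

```python
import math
import numpy as np

def levy_flight(D, beta=1.5, rng=None):
    """Levy flight vector of dimension D (Equation (9)); a sketch."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(D)                                 # random numbers inside (0, 1), as in the text
    v = rng.random(D)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)

def soft_besiege_with_dives(X_i, X_prey, E, fitness, rng):
    """Soft besiege with progressive rapid dives (Equations (7), (8), (10)); a sketch.

    `fitness` is any callable mapping a position vector to a scalar (lower is better).
    """
    D = X_i.size
    J = 2.0 * (1.0 - rng.random())                    # random jump strength of the prey
    Y = X_prey - E * np.abs(J * X_prey - X_i)         # Equation (7)
    Z = Y + rng.random(D) * levy_flight(D, rng=rng)   # Equation (8); S is a random 1 x D vector
    if fitness(Y) < fitness(X_i):
        return Y
    if fitness(Z) < fitness(X_i):
        return Z
    return X_i                                        # keep the current position otherwise
```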

3.3.4. Hard Besiege with Progressive Rapid Dives

If r < 0.5 and also |E| < 0.5, the prey does not have sufficient energy to flee. In this case, prior to the surprise pounce that captures the prey, the hawks perform a hard besiege in which they attempt to decrease the distance between their average location and the intended prey. Therefore, the rule presented in Equation (11) is used in the hard besiege case:
$$X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases} \qquad (11)$$
where Y and Z can be calculated using Equations (12) and (13).
$$Y = X_{prey}(t) - E \left| J X_{prey}(t) - X_m(t) \right| \qquad (12)$$
where X_m(t) is calculated using Equation (2), E denotes the escaping energy, and J refers to the jump strength.
$$Z = Y + S \times LF(D) \qquad (13)$$
where D indicates the dimension of the given search space, S denotes a random vector of size 1 × D, and LF represents the Lévy flight function. For more details about the HHO algorithm, please refer to the original paper [20].

4. Proposed Binary HHO

In general, optimization algorithms are initially developed for solving problems in continuous search spaces, and their basic forms cannot be directly applied to binary and discrete optimization problems. In the binary optimization field, the search space can be viewed as a hypercube in which a search agent adjusts its position by changing bits of its position vector from 1 to 0 or vice versa [34,35]. In the literature, depending on the shape of the function, two basic forms of TFs, known as S-shaped and V-shaped, have been proposed for adapting a continuous search to a binary one. The first S-shaped TF was proposed by Kennedy and Eberhart [31] to transform the continuous original version of the PSO algorithm into a discrete one, while the initial V-shaped transfer function was proposed by Rashedi et al. [33] for developing a binary variant of GSA (BGSA). Although the sigmoid TF is simple, effective, cheap in terms of computational cost, and widely utilized in binary variants of optimization algorithms, it has some shortcomings: it is unable to provide a sufficient balance between the two essential stages of the optimization process (exploration and exploitation), and it also has difficulty preventing the algorithm from getting stuck in local minima and controlling the convergence speed [32]. The V-shaped TF, in turn, is defined based on certain principles for mapping the continuous values of velocity vectors into probabilities. The main concept is that search agents with large absolute velocity values are potentially far from the optimal solution; hence the TF should give a high probability of changing the positions of such agents. When the velocity vector has small absolute values, the TF should give small probabilities of changing the positions of the search agents [33].
To overcome the limitations of the basic TFs in mapping velocity values to probabilities, Mirjalili and Lewis [32] extensively studied the influence of the available TFs on the performance of BPSO. Accordingly, six new transfer functions, divided into two groups according to their forms (S-shaped and V-shaped), were introduced for mapping a continuous search space to a discrete one. It was found that the V-shaped family of TFs, in particular the V4 TF, significantly improves the performance of binary algorithms compared to the sigmoid TF. Furthermore, the same families of TFs were employed by Mafarja et al. in [56] to develop six discrete forms of ALO for FS. It was observed that the V-shaped TFs, especially ALO-V3, significantly enhance the performance of the binary ALO optimizer in FS tasks.
Following the appearance of various forms of TFs for adapting optimization algorithms to discrete search spaces, in 2017, Islam et al. [34] studied and analyzed the behavior and performance of existing TFs with the PSO algorithm on low- and high-dimensional discrete optimization problems. They demonstrated that existing TFs still have difficulty controlling the balance between exploration and exploitation in the optimization process. As presented in [34], to overcome these limitations, the authors defined concepts by which the search for an optimal solution should concentrate on exploration in the early generations, by letting the TF produce a high probability of changing the elements of a search agent's position vector according to the value of the velocity vector (step); in later phases, the search focus should shift from exploration to exploitation by letting the TF provide a low probability of changing the position elements of a search agent. Following these concepts, a control parameter (τ) was introduced into the TF; this parameter starts with a large value and decreases gradually over the iterations to obtain a smooth shift from exploration to exploitation. In this way, the shape of the TF changes over time according to the value of the control parameter. The purpose of the time-varying scheme is to obtain a better balance between exploration and exploitation throughout the optimization process of BPSO. Time-varying TFs demonstrated their superiority over BPSO approaches based on existing static TFs on both low-dimensional and high-dimensional discrete optimization problems.
Inspired by the work of [32,34], Mafarja et al. [35] proposed eight time-varying TFs belonging to two families (S-shaped and V-shaped) for developing binary versions of DA (BDA) to be used for FS. The authors demonstrated the efficiency of these time-varying TFs by comparing their performance with static TFs as well as various wrapper-based FS approaches. In addition, three types of time-varying transfer functions were introduced in [36] to improve the performance of the binary WOA in the FS domain; WOA with time-varying TFs showed higher effectiveness and efficiency than other popular approaches in the FS domain. In this work, considering the previous studies of the impact of TFs on the performance of binary optimization algorithms, we select the time-varying V-shaped TFs proposed by [35], shown in Table 1, to convert HHO to a binary form and apply the binary variants of HHO to the FS problem. In the time-varying form of the TFs, τ represents a time-varying variable that begins with an initial value and progressively decreases over the iterations, as shown in Equation (14):
$$\tau = \tau_{max} - (\tau_{max} - \tau_{min}) \times \frac{t}{T} \qquad (14)$$
where τ_min and τ_max represent the bounds of the τ parameter, t denotes the current iteration, and T represents the maximum number of iterations. In this study, τ_min and τ_max were set to 0.01 and 4, respectively [35]. The original time-independent V-shaped TFs are shown in Figure 2, while the time-varying variants are shown in Figure 3.
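As a sketch of the scheme, the τ schedule of Equation (14) and one time-varying V-shaped TF can be written as follows, assuming (as in the construction of [35]) that the time-varying variant scales the argument of the basic V4 function by τ; the exact TV TFs used in this paper are those listed in Table 1.

```python
import numpy as np

def tau(t, T, tau_min=0.01, tau_max=4.0):
    """Time-varying control parameter (Equation (14)): decreases from tau_max to tau_min."""
    return tau_max - (tau_max - tau_min) * t / T

def tv_v4(x, tau_t):
    """Time-varying V4 transfer function: the basic V4 TF with its argument
    scaled by tau (an assumption following [35]; see Table 1 for the exact forms)."""
    return np.abs((2.0 / np.pi) * np.arctan((np.pi / 2.0) * (x / tau_t)))
```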
After applying the original or time-varying TF as the first step of the binarization scheme, the real-valued solution in R^n is converted into an intermediate probability vector in [0, 1]^n such that each of its elements determines the probability of transforming its equivalent in R^n into 0 or 1. In the second step, a binarization rule is applied to transform the output of the TF into a binary solution [30]. In this work, the complement binarization introduced by Rashedi et al. [33] is applied, as given in Equation (15):
$$X_j(t+1) = \begin{cases} \sim b_j & r < T(X_j(t)) \\ b_j & \text{otherwise} \end{cases} \qquad (15)$$
where ∼ denotes the complement, b_j is the current binary value of the jth element, r is a random number in [0, 1], and X_j(t+1) is the new binary value. Note that the updated binary value is set with reference to the current binary solution; that is, based on the probability value T(X_j(t)), the jth element is either kept or flipped.
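Putting the two steps together, the following sketch applies the binarization to one hawk, reusing tau() and tv_v4() from the previous sketch; the interface is an illustrative assumption.

```python
def binarize(X_real, X_bin, t, T, rng):
    """Two-step binarization: time-varying V-shaped TF followed by the
    complement rule of Equation (15); a sketch building on tau() and tv_v4().

    X_real : (D,) real-valued position produced by the continuous HHO update
    X_bin  : (D,) current binary solution (values 0/1)
    """
    probs = tv_v4(X_real, tau(t, T))           # step 1: per-bit flip probabilities
    flip = rng.random(X_bin.size) < probs      # step 2: complement rule (Equation (15))
    return np.where(flip, 1 - X_bin, X_bin)
```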
Algorithm 1 explains the pseudo-code of the Binary HHO algorithm.
Algorithm 1 Pseudo-code of the BHHO algorithm.
  • Inputs: Number of hawks (N) and maximum iterations (T)
  • Outputs: X p r e y
  • Generate the initial binary population Xi (i = 1, 2, …, N)
  • while (t < T) do
  •     Evaluate the fitness values of hawks
  •     Find out the best search agent X p r e y
  •     for (each hawk ( X i )) do
  •         Update E0 and jump strength J                   ▹ E0 = 2 rand() − 1, J = 2(1 − rand())
  •         Update E by Equation (3)
  •         if ( |E| ≥ 1 ) then                             ▹ Exploration phase
  •            Update the position vector by Equation (1)
  •            Calculate the probability vector using time-varying V-shaped TFs
  •            Calculate the binary solution using Equation (15)
  •         if ( | E | < 1 ) then                             ▹ Exploitation phase
  •            if ( r ≥ 0.5 ) then
  •                if ( |E| ≥ 0.5 ) then                             ▹ Soft besiege
  •                    Update the position vector by Equation (4)
  •                else if ( | E | < 0.5 ) then                           ▹ Hard besiege
  •                    Update the position vector by Equation (6)
  •                Calculate the probability vector using time-varying V-shaped TFs
  •                Calculate the binary solution using Equation (15)
  •            if ( r < 0.5) then
  •                if ( |E| ≥ 0.5 ) then                 ▹ Soft besiege with progressive rapid dives
  •                    Calculate Y and Z using Equations (7) and (8)
  •                    Convert Y and Z into binary using time-varying TF and binarization rule in Equation (15)
  •                    Update the position vector by Equation (10)
  •                else if ( | E | < 0.5 ) then               ▹ Hard besiege with progressive rapid dives
  •                    Calculate Y’ and Z’ using Equations (12) and (13)
  •                    Convert Y’ and Z’ into binary using time-varying TF and binarization rule in Equation (15)
  •                    Update the position vector by Equation (11)
  • Return X p r e y

5. BHHO-Based FS

FS is recognized as a binary optimization task, where potential solutions (subsets of features) are encoded using binary values. Therefore, FS can be solved by employing a binary optimizer (e.g., BHHO). In this work, we introduce a wrapper FS approach that utilizes the binary version of HHO as a search algorithm and a KNN classifier to evaluate the goodness of the feature subsets generated by BHHO. In the FS problem, a solution is encoded as a binary vector whose length equals the number of features in the dataset: a value of zero means that the corresponding feature is omitted, while a value of one indicates that the feature is selected. In this paper, four FS methods using different binary versions of HHO are developed, where each method uses a different time-varying V-shaped TF to transform continuous values to binary. FS is considered a multi-objective optimization task in which the highest classification accuracy and the least number of features are the two criteria to be fulfilled. As shown in Equation (16), both the classification accuracy and the number of selected features are included in the applied fitness function [35,36]:
$$Fitness = \alpha \times err + \beta \times \frac{R}{N} \qquad (16)$$
where err stands for the error rate of the KNN classifier over the subset of features selected by the BHHO optimizer, α and β are two parameters that balance classification accuracy against the size of the feature subset, α is a number within [0, 1], β equals (1 − α), N is the total number of features in the dataset, and R indicates the cardinality of the feature subset selected by a search agent.
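A sketch of this wrapper fitness with a KNN classifier follows; α = 0.99 and k = 5 are illustrative values common in the cited wrapper FS literature [35,36], while the settings actually used in this paper are those in Table 3.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X_train, y_train, X_test, y_test, alpha=0.99, k=5):
    """Wrapper fitness of a binary feature mask (Equation (16)); a sketch.

    alpha and k are assumptions, not the paper's exact settings (see Table 3);
    beta = 1 - alpha, as in the text.
    """
    if mask.sum() == 0:                 # an empty feature subset cannot be classified
        return 1.0                      # worst possible fitness
    cols = mask.astype(bool)
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train[:, cols], y_train)
    err = 1.0 - knn.score(X_test[:, cols], y_test)       # classification error rate
    beta = 1.0 - alpha
    return alpha * err + beta * (mask.sum() / mask.size)  # Equation (16)
```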

6. Results and Discussion

In this section, various experiments and tests are conducted to assess the performance of the V-shaped time-varying-based HHO algorithms in solving the FS problem. The proposed BHHO algorithms were also compared to different optimizers. To achieve a fair comparison, the common settings of all optimizers, such as population size, number of iterations, and number of independent runs, were unified by setting them to identical values.
Eighteen popular benchmark datasets obtained from the UCI data repository were used to evaluate the performance of the proposed FS approaches. Table 2 shows the details of the datasets, including the number of features, classes, and instances in each dataset. Following the hold-out method, each dataset was randomly split into two portions (training/testing), where 80% of the data was reserved for training while the rest was used for testing. Furthermore, each FS approach was run for 30 trials with randomly set seeds on a machine with an Intel Core i5 2.2 GHz CPU and 4 GB of RAM.
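The following sketch mirrors this protocol; `optimizer` is a hypothetical stand-in for a BHHO-TV driver (any callable returning a binary feature mask), and fitness() is the sketch from Section 5 (with α = 1.0 it reduces to the error rate).

```python
import numpy as np
from sklearn.model_selection import train_test_split

def evaluate_protocol(X, y, optimizer, n_runs=30):
    """Hold-out evaluation sketch: one 80/20 split and 30 independent runs."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    accs = []
    for run in range(n_runs):
        mask = optimizer(X_tr, y_tr, seed=run)   # hypothetical optimizer interface
        accs.append(1.0 - fitness(mask, X_tr, y_tr, X_te, y_te, alpha=1.0))
    return float(np.mean(accs))                  # average accuracy over the 30 trials
```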
In this work, the internal parameters of the algorithms were set according to the settings recommended in the original papers as well as in related works on FS problems, while common parameters were set based on the results of several trials. Table 3 lists the detailed parameter settings of each algorithm.
To study the impact of the four types of time-varying V-shaped TFs on the efficiency of the BHHO optimizer, we compare the results of HHO with the four basic V-shaped TFs against those recorded by HHO with the four time-varying V-shaped TFs. Furthermore, the best FS approach among the tested basic and time-varying V-shaped-based approaches was then compared to several state-of-the-art FS approaches, comprising BGSA, BPSO, BBA, BSSA, and BWOA. The following criteria were used for the comparisons:
  • The average of the accuracy rates obtained over 30 trials.
  • The average of the selected feature rates recorded over 30 trials.
  • The mean of the best fitness values obtained over 30 trials.
  • The F-test method, used to rank the different FS methods and determine the best results.
Please note that in all reported tables, the best-obtained results are highlighted using a boldface format.

6.1. Comparison between Various Versions of BHHO with Basic and Time Varying V-Shaped TFs

In general, the experimental results show that HHO with V-shaped time-varying transfer functions (TV-TFs) performs better than HHO with classic V-shaped TFs. Inspecting the results in Table 4, in the case of BHHO-V1 and BHHO-TV1, BHHO-V1 recorded higher accuracy rates on seven datasets while BHHO-TV1 found higher accuracy rates in eight cases; both approaches achieved the same accuracy rates in three cases. In addition, BHHO-TV2 has better accuracy measures than BHHO-V2 on eleven datasets, whereas BHHO-V2 outperforms BHHO-TV2 in five cases; BHHO-TV2 and BHHO-V2 both attain the maximum accuracy rates in two cases (M-of-N and Zoo). In the case of BHHO-V3 and BHHO-TV3, BHHO-TV3 outperforms BHHO-V3 on nine datasets while BHHO-V3 obtained higher accuracy rates on five datasets; both approaches obtained similar accuracy rates on the Exactly dataset and the maximum accuracy measures on three datasets (M-of-N, WineEW, and Zoo). As per the results, BHHO-TV4 outperforms BHHO-V4 on eleven datasets in terms of accuracy rates, whereas BHHO-V4 is superior in only three cases; both methods obtained the maximum accuracy rates on four datasets. In terms of classification accuracy, the F-test results show that BHHO-TV4 is ranked the best, followed by the BHHO-TV3 method. Based on the observed results, we can say that HHO with the TV4 transfer function obtains the best classification accuracy compared to its peers, including both basic and time-varying TF-based FS approaches.
In terms of selected features, as presented in Table 5, the basic V1- and V2-based approaches outperform the time-varying-based ones. In the case of BHHO-V3 and BHHO-TV3, BHHO-TV3 is dominant in 61.11% of the cases, while BHHO-TV4 outperformed BHHO-V4 in 50% of the cases. According to the recorded FS rates, the F-test results show that BHHO-V4 is ranked the best method in terms of the least number of selected features. However, excessive feature reduction may not be the preferred option, since it may exclude some relevant features and thereby degrade the classification performance. Although the basic TF-based approaches outperform the time-varying-based ones in terms of feature reduction, the latter can find the most relevant subsets of features, which provide better classification accuracy, as shown in Table 4.
To confirm the effectiveness of the competing algorithms, the fitness value, which combines the two measures (i.e., accuracy and reduction rate), is adopted. In terms of fitness rates, as provided in Table 6, all time-varying V-shaped TF-based methods outperform their peers (the basic V-shaped-based techniques). Considering the F-test results, BHHO-TV4 is ranked first compared to all other competitors. In this work, we consider classification accuracy to be of higher importance than the number of selected features. Based on the results, we find that HHO with the time-varying V-shaped TV4 achieves the best performance.

6.2. Comparison with Other Optimization Algorithms

This section provides a comparison between the best approach, BHHO-TV4, and other well-known metaheuristic methods (BGSA, BPSO, BBA, BSSA, and BWOA). The comparison is made based on different criteria, including average classification accuracy, number of selected features, and fitness values.
As per the results in Table 7, BHHO-TV4 outperforms the other algorithms on 11 out of 18 datasets in terms of accuracy rates, and it reached the maximum accuracy averages on five datasets. BHHO-TV4, BPSO, and BSSA all reached the maximum accuracy on the Zoo dataset. In addition, compared to BHHO-TV4, BPSO obtained better results on the Exactly2, Vote, and WaveformEW datasets. As per the F-test results, BHHO-TV4 is ranked first, followed by the BPSO, BSSA, BWOA, BGSA, and BBA methods. To determine whether the differences between the results obtained by BHHO-TV4 and the other algorithms are statistically significant, a two-tailed Wilcoxon statistical test at the 5% significance level was used. Table 8 presents the p-values of the Wilcoxon test in terms of classification accuracy; there are meaningful differences in accuracy averages between BHHO-TV4 and its competitors in most of the cases.
In terms of the least number of selected features, as stated in Table 9, BHHO-TV4 obtained the best averages on 13 out of 18 datasets, while BPSO outperformed all other algorithms on three datasets. As per the F-test results, BHHO-TV4 is ranked the best, followed by the BPSO and BBA methods, respectively. Inspecting the p-values in Table 10, the cases with insignificant differences between BHHO-TV4 and its peers in terms of the lowest number of selected features are limited.
Fitness rates are shown in Table 11; BHHO-TV4 reached the lowest fitness values compared with the other algorithms on 11 out of 18 datasets, while BPSO is the best in four cases. Again, according to the F-test results in Table 11, BHHO-TV4 is ranked the best, followed by the BPSO method. In addition, Table 12 shows the p-values of the Wilcoxon test in terms of best fitness rates; the differences between BHHO-TV4 and the others are statistically insignificant in only four cases.
The convergence behaviors of BHHO-TV4 and the other algorithms were also investigated to assess their ability to strike an adequate balance between exploration and exploitation by avoiding local optima and early convergence. The convergence behaviors of BHHO-TV4 on 12 datasets compared to the other optimizers are demonstrated in Figures 4 and 5. In all tested cases, BHHO-TV4 converges faster than its competitors towards the optimal solution.

6.3. Comparison with Results of Previous Works

This section compares the accuracy rates of the best approach in this research, BHHO-TV4, with similar FS approaches introduced in previous studies. The results of BHHO-TV4 are compared with the results of SSA in [58], WOA in [59], the Grasshopper Optimization Algorithm (GOA) in [60], GSA boosted with evolutionary crossover and mutation operators in [61], GOA with Evolutionary Population Dynamics (EPD) stochastic search strategies in [62], BDA [35], a hybrid approach based on Grey Wolf Optimization (GWO) and PSO in [12], and the Binary Butterfly Optimization Algorithm (BOA) [63]. As shown in Table 13, the proposed approach BHHO-TV4 achieved the best accuracy rates on twelve datasets compared to the results presented in previous studies on the same datasets. We can also observe that BHHO-TV4 reached the highest accuracy rates on six datasets. In addition, the F-test results indicate that BHHO-TV4 is ranked the best in comparison with the results of the other algorithms used in preceding works.
In general, the results reflect the impact of the adopted binarization scheme on the performance of HHO in scanning the binary search space for the optimal solution (i.e., the ideal or near-ideal subset of features). It is evident that the utilized time-varying TFs, in particular TV4, can remarkably enhance the exploration and exploitation of the HHO algorithm. A potential key factor behind the superiority of BHHO-TV4 is that changing the shape of the TV4 transfer function over the generations enables the HHO algorithm to obtain an appropriate balance between the exploration and exploitation phases and pushes the algorithm to reach areas of the search space containing highly valuable features. Furthermore, like many metaheuristic algorithms, HHO suffers from the problem of sliding into local optima. The accuracy rates of BHHO-TV4 compared to the other algorithms prove its superior capability in preserving population diversity during the search procedure, hence preventing the occurrence of the premature convergence problem.

7. Conclusions and Future Directions

In this paper, various FS approaches were developed using a recently introduced swarm-based optimizer named HHO. The proposed methods integrate the HHO algorithm with V-shaped time-varying binarization schemes to enable HHO to work in a binary search space. Various well-known datasets from the UCI data repository were utilized to evaluate the introduced approaches, and the results of the best approach, BHHO-TV4, were compared with those obtained by several meta-heuristic-based FS approaches, namely BGSA, BPSO, BBA, BSSA, and BWOA. It is clear from the obtained results that the efficiency of HHO in the FS domain is highly influenced by the binarization scheme used. The proposed BHHO-TV4 often outperforms the FS approaches presented in previous studies. In future work, we will study the effect of S-shaped time-varying binarization schemes on the performance of HHO in the FS problem.

Author Contributions

Conceptualization, T.T. and M.M.; methodology, H.C., T.T., H.T. and M.M.; implementation and experimental work, H.C., T.T., H.T. and M.M.; validation, H.C., T.T., H.T., M.M. and A.S.; writing—original draft preparation, H.C., T.T. and H.T.; writing—review and editing, M.M. and A.S.; proofreading, A.S.; supervision, M.M.; funding acquisition, H.T. All authors have read and agreed to the published version of the manuscript.

Funding

Taif University Researchers Supporting Project number (TURSP-2020/125), Taif University, Taif, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to acknowledge Taif University Researchers Supporting Project Number (TURSP-2020/125), Taif University, Taif, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques; Morgan Kaufmann Publishers: San Francisco, CA, USA, 2012.
  2. Mafarja, M.; Mirjalili, S. Hybrid Whale Optimization Algorithm with Simulated Annealing for Feature Selection. Neurocomputing 2017, 260, 302–312.
  3. Liu, H.; Motoda, H. Feature Selection for Knowledge Discovery and Data Mining; Springer Science & Business Media: New York, NY, USA, 2012; Volume 454.
  4. Chantar, H.K.; Corne, D.W. Feature subset selection for Arabic document categorization using BPSO-KNN. In Proceedings of the 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011; pp. 546–551.
  5. Guyon, I.; Elisseeff, A. An Introduction to Variable and Feature Selection. J. Mach. Learn. Res. 2003, 3, 1157–1182.
  6. Ahmed, S.; Mafarja, M.; Faris, H.; Aljarah, I. Feature Selection Using Salp Swarm Algorithm with Chaos. In Proceedings of the 2nd International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence; ACM: New York, NY, USA, 2018; pp. 65–69.
  7. Thaher, T.; Mafarja, M.; Turabieh, H.; Castillo, P.A.; Faris, H.; Aljarah, I. Teaching Learning-Based Optimization with Evolutionary Binarization Schemes for Tackling Feature Selection Problems. IEEE Access 2021, 9, 41082–41103.
  8. Dash, M.; Liu, H. Feature Selection for Classification. Intell. Data Anal. 1997, 1, 131–156.
  9. Yuanning, L.; Wang, G.; Chen, H.; Dong, H.; Zhu, X.; Wang, S. An Improved Particle Swarm Optimization for Feature Selection. J. Bionic Eng. 2011, 8, 191–200.
  10. Tabakhi, S.; Moradi, P.; Akhlaghian, F. An unsupervised feature selection algorithm based on ant colony optimization. Eng. Appl. Artif. Intell. 2014, 32, 112–123.
  11. Ghamisi, P.; Benediktsson, J. Feature Selection Based on Hybridization of Genetic Algorithm and Particle Swarm Optimization. IEEE Geosci. Remote Sens. Lett. 2015, 12, 309–313.
  12. Al-Tashi, Q.; Jadid Abdulkadir, S.; Rais, H.; Mirjalili, S.; Alhussian, H. Binary Optimization Using Hybrid Grey Wolf Optimization for Feature Selection. IEEE Access 2019, 7, 39496–39508.
  13. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009.
  14. Rostami, M.; Berahmand, K.; Nasiri, E.; Forouzandeh, S. Review of swarm intelligence-based feature selection methods. Eng. Appl. Artif. Intell. 2021, 100, 104210.
  15. Nguyen, B.H.; Xue, B.; Zhang, M. A survey on swarm intelligence approaches to feature selection in data mining. Swarm Evol. Comput. 2020, 54, 100663.
  16. Hassouneh, Y.; Turabieh, H.; Thaher, T.; Tumar, I.; Chantar, H.; Too, J. Boosted Whale Optimization Algorithm with Natural Selection Operators for Software Fault Prediction. IEEE Access 2021, 9, 14239–14258.
  17. Abdel-Basset, M.; Mohamed, R.; Chakrabortty, R.K.; Ryan, M.J.; Mirjalili, S. An efficient binary slime mould algorithm integrated with a novel attacking-feeding strategy for feature selection. Comput. Ind. Eng. 2021, 153, 107078.
  18. Elminaam, D.S.A.; Nabil, A.; Ibraheem, S.A.; Houssein, E.H. An Efficient Marine Predators Algorithm for Feature Selection. IEEE Access 2021, 9, 60136–60153.
  19. Abdel-Basset, M.; El-Shahat, D.; El-henawy, I.; de Albuquerque, V.; Mirjalili, S. A new fusion of grey wolf optimizer algorithm with a two-phase mutation for feature selection. Expert Syst. Appl. 2020, 139, 112824.
  20. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
  21. Al-Betar, M.A.; Awadallah, M.A.; Heidari, A.A.; Chen, H.; Al-khraisat, H.; Li, C. Survival exploration strategies for Harris Hawks Optimizer. Expert Syst. Appl. 2020, 168, 114243.
  22. Alabool, H.; Al-Arabiat, D.; Abualigah, L.; Heidari, A.A. Harris hawks optimization: A comprehensive review of recent variants and applications. Neural Comput. Appl. 2021, 33, 8939–8980.
  23. Moayedi, H.; Osouli, A.; Nguyen, H.; Rashid, A.S.A. A novel Harris hawks' optimization and k-fold cross-validation predicting slope stability. Eng. Comput. 2019, 35, 1–11.
  24. Bao, X.; Jia, H.; Lang, C. A Novel Hybrid Harris Hawks Optimization for Color Image Multilevel Thresholding Segmentation. IEEE Access 2019, 7, 76529–76546.
  25. Golilarz, N.; Addeh, A.; Gao, H.; Ali, L.; Roshandeh, A.; Mudassir Munir, H.; Khan, R. A New Automatic Method for Control Chart Patterns Recognition Based on ConvNet and Harris Hawks Meta Heuristic Optimization Algorithm. IEEE Access 2019, 7, 149398–149405.
  26. Chen, H.; Heidar, A.; Chen, H.; Wang, M.; Pan, Z.; Gandomi, A. Multi-population differential evolution-assisted Harris hawks optimization: Framework and case studies. Future Gener. Comput. Syst. 2020, 111, 175–198.
  27. Too, J.; Abdullah, A.R.; Mohd Saad, N. A New Quadratic Binary Harris Hawk Optimization for Feature Selection. Electronics 2019, 8, 1130.
  28. Thaher, T.; Heidari, A.A.; Mafarja, M.; Dong, J.S.; Mirjalili, S. Binary Harris Hawks Optimizer for High-Dimensional, Low Sample Size Feature Selection. In Evolutionary Machine Learning Techniques; Springer: Singapore, 2020; pp. 251–272.
  29. Zhang, Y.; Liu, R.; Wang, X.; Chen, H.; Li, C. Boosted binary Harris hawks optimizer and feature selection. Eng. Comput. 2020, 1–30.
  30. Crawford, B.; Soto, R.; Astorga, G.; Garcia Conejeros, J.; Castro, C.; Paredes, F. Putting Continuous Metaheuristics to Work in Binary Search Spaces. Complexity 2017, 2017, 1–19.
  31. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108.
  32. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 2013, 9, 1–14.
  33. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. BGSA: Binary gravitational search algorithm. Nat. Comput. 2010, 9, 727–745.
  34. Islam, M.; Li, X.; Mei, Y. A Time-Varying Transfer Function for Balancing the Exploration and Exploitation ability of a Binary PSO. Appl. Soft Comput. 2017, 59, 182–196.
  35. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary Dragonfly Optimization for Feature Selection using Time-Varying Transfer Functions. Knowl.-Based Syst. 2018, 161, 185–204.
  36. Kahya, M.A.; Altamir, S.A.; Algamal, Z.Y. Improving whale optimization algorithm for feature selection with a time-varying transfer function. Numer. Algebr. Control Optim. 2021, 11, 87–98.
  37. Yang, J.; Honavar, V. Feature Subset Selection Using a Genetic Algorithm. IEEE Intell. Syst. Their Appl. 1998, 13, 44–49.
  38. Huang, C.L.; Dun, J.F. A distributed PSO–SVM hybrid system with feature selection and parameter optimization. Appl. Soft Comput. 2008, 8, 1381–1391.
  39. Ferri, F.J.; Kadirkamanathan, V.; Kittler, J. Feature Subset Search using Genetic Algorithms. In IEE/IEEE Workshop on Natural Algorithms in Signal Processing, 1993. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24.3338&rep=rep1&type=pdf (accessed on 15 June 2012).
  40. Chaikla, N.; Qi, Y. Genetic algorithms in feature selection. In Proceedings of the IEEE SMC'99 Conference, 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 99CH37028), Tokyo, Japan, 12–15 October 1999; Volume 5, pp. 538–540.
  41. Mafarja, M.; Abdullah, S. Investigating memetic algorithm in solving rough set attribute reduction. Int. J. Comput. Appl. Technol. 2013, 48, 195–202.
  42. Chuang, L.Y.; Yang, C.H.; Li, J.C. Chaotic maps based on binary particle swarm optimization for feature selection. Appl. Soft Comput. 2011, 11, 239–248.
  43. Mafarja, M.; Jarrar, R.; Ahmed, S.; Abusnaina, A. Feature Selection Using Binary Particle Swarm Optimization with Time Varying Inertia Weight Strategies. In Proceedings of the 2nd International Conference on Future Networks and Distributed Systems, Amman, Jordan, 26–27 June 2018; pp. 1–9.
  44. Moradi, P.; Gholampour, M. A Hybrid Particle Swarm Optimization for Feature Subset Selection by Integrating a Novel Local Search Strategy. Appl. Soft Comput. 2016, 43, 117–130.
  45. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1996, 26, 29–41.
  46. Deriche, M. Feature Selection using Ant Colony Optimization. In Proceedings of the 2009 6th International Multi-Conference on Systems, Signals and Devices, Djerba, Tunisia, 23–26 March 2009; pp. 1–4.
  47. Chen, Y.; Miao, D.; Wang, R. A rough set approach to feature selection based on ant colony optimization. Pattern Recognit. Lett. 2010, 31, 226–233.
  48. Kashef, S.; Nezamabadi-pour, H. An advanced ACO algorithm for feature subset selection. Neurocomputing 2015, 147, 271–279.
  49. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report TR06; Computer Engineering Department, Engineering Faculty, Erciyes University: Kayseri, Turkey, 2005; pp. 1–10.
  50. Agrawal, V.; Chandra, S. Feature selection using Artificial Bee Colony algorithm for medical image classification. In Proceedings of the 2015 Eighth International Conference on Contemporary Computing, Noida, India, 20–22 August 2015; pp. 171–176.
  51. Nakamura, R.Y.M.; Pereira, L.A.M.; Costa, K.A.; Rodrigues, D.; Papa, J.P.; Yang, X. BBA: A Binary Bat Algorithm for Feature Selection. In Proceedings of the 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images, Ouro Preto, Brazil, 22–25 August 2012; pp. 291–297.
  52. Zawbaa, H.M.; Emary, E.; Parv, B.; Sharawi, M. Feature selection approach based on moth-flame optimization algorithm. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4612–4617.
  53. Mafarja, M.M.; Eleyan, D.; Jaber, I.; Hammouri, A.; Mirjalili, S. Binary Dragonfly Algorithm for Feature Selection. In Proceedings of the 2017 International Conference on New Trends in Computing Sciences (ICTCS), Amman, Jordan, 11–13 October 2017; pp. 12–17.
  54. Zawbaa, H.M.; Emary, E.; Parv, B. Feature selection based on antlion optimization algorithm. In Proceedings of the 2015 Third World Conference on Complex Systems (WCCS), Marrakech, Morocco, 23–25 November 2015; pp. 1–7.
  55. Thaher, T.; Saheb, M.; Turabieh, H.; Chantar, H. Intelligent Detection of False Information in Arabic Tweets Utilizing Hybrid Harris Hawks Based Feature Selection and Machine Learning Models. Symmetry 2021, 13, 556.
  56. Mafarja, M.; Eleyan, D.; Abdullah, S.; Mirjalili, S. S-Shaped vs. V-Shaped Transfer Functions for Ant Lion Optimization Algorithm in Feature Selection Problem. In Proceedings of the International Conference on Future Networks and Distributed Systems, Cambridge, UK, 19–20 July 2017; pp. 1–7.
  57. Wei, W.; Li, X.; Liu, J.; Zhou, Y.; Li, L.; Zhou, J. Performance Evaluation of Hybrid WOA-SVR and HHO-SVR Models with Various Kernels to Predict Factor of Safety for Circular Failure Slope. Appl. Sci. 2021, 11, 1922.
  58. Faris, H.; Mafarja, M.; Heidari, A.A.; Aljarah, I.; Al-Zoubi, A.; Mirjalili, S.; Fujita, H. An Efficient Binary Salp Swarm Algorithm with Crossover Scheme for Feature Selection Problems. Knowl.-Based Syst. 2018, 154, 43–67.
  59. Mafarja, M.; Mirjalili, S. Whale optimization approaches for wrapper feature selection. Appl. Soft Comput. 2018, 62, 441–453.
  60. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Hammouri, A.I.; Faris, H.; Ala’M, A.Z.; Mirjalili, S. Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems. Knowl.-Based Syst. 2018, 145, 25–45. [Google Scholar] [CrossRef] [Green Version]
  61. Taradeh, M.; Mafarja, M.; Heidari, A.A.; Faris, H.; Aljarah, I.; Mirjalili, S.; Fujita, H. An Evolutionary Gravitational Search-based Feature Selection. Inf. Sci. 2019, 497, 219–239. [Google Scholar] [CrossRef]
  62. Mafarja, M.; Aljarah, I.; Faris, H.; Hammouri, A.; Al-Zoubi, A.; Mirjalili, S. Binary Grasshopper Optimisation Algorithm Approaches for Feature Selection Problems. Expert Syst. Appl. 2018, 117, 267–286. [Google Scholar] [CrossRef]
  63. Arora, S.; Anand, P. Binary butterfly optimization approaches for feature selection. Expert Syst. Appl. 2019, 116, 147–160. [Google Scholar] [CrossRef]
Figure 1. Overall stages of HHO [57].
Figure 2. V-shaped transfer functions.
Figure 3. Behaviors of the V-shaped TFs with the time-varying approach over 10 iterations (τ decreased linearly from τ_max = 4 to τ_min = 0.01).
Figure 4. Convergence curves of BHHO-TV4 versus other competitors on the Breastcancer, BreastEW, CongressEW, Exactly, Exactly2, HeartEW, IonosphereEW, KrvskpEW, and Lymphography datasets.
Figure 5. Convergence curves of BHHO-TV4 versus other competitors on the M-of-n, penglungEW, SonarEW, SpectEW, Tic-tac-toe, Vote, WaveformEW, WineEW, and Zoo datasets.
Table 1. Original and time-varying V-shaped transfer functions.

| Name (Original Family) | Transfer Function | Name (Time-Varying Family) | Transfer Function |
|---|---|---|---|
| V1 | $T(x) = \lvert \mathrm{erf}(\frac{\sqrt{\pi}}{2} x) \rvert$ | TV1 | $T(x,\tau) = \lvert \mathrm{erf}(\frac{\sqrt{\pi}}{2} \frac{x}{\tau}) \rvert$ |
| V2 | $T(x) = \lvert \tanh(x) \rvert$ | TV2 | $T(x,\tau) = \lvert \tanh(\frac{x}{\tau}) \rvert$ |
| V3 | $T(x) = \lvert \frac{x}{\sqrt{1 + x^2}} \rvert$ | TV3 | $T(x,\tau) = \lvert \frac{x/\tau}{\sqrt{1 + (x/\tau)^2}} \rvert$ |
| V4 | $T(x) = \lvert \frac{2}{\pi} \arctan(\frac{\pi}{2} x) \rvert$ | TV4 | $T(x,\tau) = \lvert \frac{2}{\pi} \arctan(\frac{\pi}{2} \frac{x}{\tau}) \rvert$ |
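For reference, a minimal Python sketch of this scheme is given below. It is illustrative only (function names are ours, and NumPy/SciPy are assumed), not the exact implementation used in the experiments; the linear τ schedule follows the setting of Figure 3 (τ_max = 4, τ_min = 0.01).

```python
import numpy as np
from scipy.special import erf

def tau_schedule(t, t_max, tau_max=4.0, tau_min=0.01):
    """Linearly decreasing tau, matching the setting of Figure 3."""
    return tau_max - (tau_max - tau_min) * t / t_max

def tv_transfer(x, tau, name="TV4"):
    """Evaluate one of the TV1-TV4 functions of Table 1 at step value x."""
    z = np.asarray(x, dtype=float) / tau
    if name == "TV1":
        return np.abs(erf(np.sqrt(np.pi) / 2.0 * z))
    if name == "TV2":
        return np.abs(np.tanh(z))
    if name == "TV3":
        return np.abs(z / np.sqrt(1.0 + z**2))
    if name == "TV4":
        return np.abs((2.0 / np.pi) * np.arctan((np.pi / 2.0) * z))
    raise ValueError(f"unknown transfer function: {name}")

def binarize(x, bits, tau, rng):
    """V-shaped update rule: flip each current bit with probability T(x, tau)."""
    flip = rng.random(np.shape(bits)) < tv_transfer(x, tau)
    return np.where(flip, 1 - bits, bits)

# For x = 1.0, the flip probability grows as tau decays over a 100-iteration run:
t_max = 100
print(tv_transfer(1.0, tau_schedule(0, t_max)))      # early: ~0.24 (exploration)
print(tv_transfer(1.0, tau_schedule(t_max, t_max)))  # late:  ~1.00 (exploitation)
```

Because τ divides the argument of each TF, the same continuous step produces a small flip probability early in the search and a near-certain flip late in the search, which is how the scheme shifts from exploration to exploitation.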
Table 2. List of employed datasets.

| Dataset | No. of Features | No. of Instances | No. of Classes |
|---|---|---|---|
| Breastcancer | 9 | 699 | 2 |
| BreastEW | 30 | 569 | 2 |
| Exactly | 13 | 1000 | 2 |
| Exactly2 | 13 | 1000 | 2 |
| HeartEW | 13 | 270 | 2 |
| Lymphography | 18 | 148 | 4 |
| M-of-n | 13 | 1000 | 2 |
| PenglungEW | 325 | 73 | 7 |
| SonarEW | 60 | 208 | 2 |
| SpectEW | 22 | 267 | 2 |
| CongressEW | 16 | 435 | 2 |
| IonosphereEW | 34 | 351 | 2 |
| KrvskpEW | 36 | 3196 | 2 |
| Tic-tac-toe | 9 | 958 | 2 |
| Vote | 16 | 300 | 2 |
| WaveformEW | 40 | 5000 | 3 |
| WineEW | 13 | 178 | 3 |
| Zoo | 16 | 101 | 7 |
Table 3. Common and internal parameters used in the experiments.

| Common Parameter | Value |
|---|---|
| Number of runs | 30 |
| Population size | 10 |
| Number of iterations | 100 |
| Dimension | #features |
| Fitness function weights | α = 0.99, β = 0.01 |
| K for KNN classifier | 5 |

| Algorithm | Internal Parameters |
|---|---|
| GSA | G_0 = 10 |
| PSO | c_1 = c_2 = 2; ω: from 0.9 to 0.2 |
| BA | Q_min = 0, Q_max = 2; A (loudness) = 0.5, r (pulse rate) = 0.5 |
| WOA | a: from 2 to 0; a_2: from −1 to −2 |
| HHO | E: from 2 to 0 |
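The fitness weights in Table 3 (α = 0.99, β = 0.01, with K = 5 for KNN) correspond to the usual wrapper objective that trades classification error against subset size. A hedged sketch follows, assuming scikit-learn; the 5-fold cross-validation call is chosen here only for illustration and is not necessarily the paper's exact evaluation protocol.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

ALPHA, BETA = 0.99, 0.01  # weights from Table 3

def fitness(mask, X, y):
    """Lower is better: ALPHA * error + BETA * (|selected| / |all features|)."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return 1.0  # penalize empty feature subsets
    knn = KNeighborsClassifier(n_neighbors=5)  # K = 5 as in Table 3
    accuracy = cross_val_score(knn, X[:, selected], y, cv=5).mean()
    return ALPHA * (1.0 - accuracy) + BETA * selected.size / X.shape[1]
```

With α = 0.99, accuracy dominates the objective; the small β term breaks ties in favor of the smaller feature subset.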
Table 4. Comparison of BHHO with the basic and time-varying V-shaped variants in terms of accuracy rates.

| Dataset | BHHO-V1 | BHHO-TV1 | BHHO-V2 | BHHO-TV2 | BHHO-V3 | BHHO-TV3 | BHHO-V4 | BHHO-TV4 |
|---|---|---|---|---|---|---|---|---|
| Breastcancer | 0.9693 | 0.9783 | 0.9998 | 0.9924 | 0.9779 | 0.9848 | 0.9929 | 0.9781 |
| BreastEW | 0.9702 | 0.9819 | 0.9909 | 0.9883 | 0.9813 | 0.9918 | 0.9737 | 0.9792 |
| CongressEW | 0.9939 | 0.9801 | 0.9889 | 0.9992 | 0.9655 | 0.9816 | 0.9774 | 1.0000 |
| Exactly | 1.0000 | 0.9828 | 0.9135 | 0.9965 | 0.9997 | 0.9997 | 0.9993 | 0.9998 |
| Exactly2 | 0.7918 | 0.8137 | 0.8148 | 0.7263 | 0.7565 | 0.7975 | 0.7712 | 0.7885 |
| HeartEW | 0.9370 | 0.8877 | 0.8988 | 0.9037 | 0.8704 | 0.8957 | 0.9074 | 0.9105 |
| IonosphereEW | 0.9620 | 0.9695 | 0.9418 | 0.9507 | 0.9596 | 0.9615 | 0.9531 | 0.9728 |
| KrvskpEW | 0.9735 | 0.9791 | 0.9724 | 0.9728 | 0.9735 | 0.9701 | 0.9735 | 0.9789 |
| Lymphography | 0.9822 | 0.9133 | 0.8878 | 0.9489 | 0.9511 | 0.9267 | 0.9656 | 0.9811 |
| M-of-n | 0.9998 | 0.9998 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| penglungEW | 1.0000 | 1.0000 | 1.0000 | 0.9444 | 0.9933 | 1.0000 | 1.0000 | 1.0000 |
| SonarEW | 0.9421 | 0.9556 | 0.9492 | 0.9833 | 0.9595 | 0.9341 | 0.9508 | 0.9754 |
| SpectEW | 0.9056 | 0.8883 | 0.8549 | 0.8778 | 0.8605 | 0.9296 | 0.9111 | 0.9093 |
| Tic-tac-toe | 0.8267 | 0.8542 | 0.8410 | 0.8594 | 0.8316 | 0.8418 | 0.8163 | 0.8333 |
| Vote | 0.9639 | 0.9994 | 0.9833 | 0.9872 | 0.9883 | 0.9861 | 0.9867 | 0.9872 |
| WaveformEW | 0.8023 | 0.7971 | 0.8036 | 0.8083 | 0.8056 | 0.7916 | 0.8003 | 0.7973 |
| WineEW | 1.0000 | 1.0000 | 1.0000 | 0.9926 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Zoo | 1.0000 | 0.9444 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| W/T/L | 7/3/8 | 8/3/7 | 5/2/11 | 11/2/5 | 5/4/9 | 9/4/5 | 3/4/11 | 11/4/3 |
| Rank (F-test) | 4.56 | 4.61 | 4.83 | 4.44 | 5.00 | 4.53 | 4.61 | 3.42 |
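The Rank (F-test) rows in Tables 4 through 13 can be read as Friedman-style mean ranks: for every dataset, the algorithms are ranked by the reported metric (rank 1 = best, ties receiving average ranks), and the ranks are averaged over all datasets. A minimal SciPy sketch with a toy matrix standing in for the full 18-dataset results:

```python
import numpy as np
from scipy.stats import rankdata

# Toy (datasets x algorithms) accuracy matrix; placeholder values only.
acc = np.array([[0.9693, 0.9783, 0.9998],    # dataset 1, 3 algorithms
                [0.9702, 0.9819, 0.9909]])   # dataset 2

ranks = rankdata(-acc, axis=1)  # negate so higher accuracy receives rank 1
print(ranks.mean(axis=0))       # mean rank per algorithm (lower is better)
```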
Table 5. Comparison of BHHO with the basic and time-varying V-shaped variants in terms of the number of selected features.

| Dataset | BHHO-V1 | BHHO-TV1 | BHHO-V2 | BHHO-TV2 | BHHO-V3 | BHHO-TV3 | BHHO-V4 | BHHO-TV4 |
|---|---|---|---|---|---|---|---|---|
| Breastcancer | 5.10 | 3.93 | 3.97 | 4.90 | 5.07 | 3.13 | 3.03 | 5.13 |
| BreastEW | 6.70 | 7.30 | 7.33 | 7.50 | 8.17 | 8.53 | 4.83 | 8.83 |
| CongressEW | 2.87 | 4.27 | 3.60 | 3.93 | 3.17 | 2.93 | 4.47 | 3.03 |
| Exactly | 6.00 | 6.23 | 5.30 | 6.07 | 6.03 | 6.07 | 6.03 | 6.07 |
| Exactly2 | 4.67 | 5.37 | 3.83 | 6.27 | 6.37 | 5.33 | 5.93 | 4.43 |
| HeartEW | 4.87 | 3.20 | 5.80 | 5.20 | 5.60 | 5.67 | 5.27 | 6.13 |
| IonosphereEW | 4.17 | 5.30 | 5.07 | 4.87 | 4.87 | 4.23 | 4.07 | 3.63 |
| KrvskpEW | 13.80 | 14.87 | 18.90 | 13.37 | 16.10 | 17.07 | 13.43 | 13.73 |
| Lymphography | 4.13 | 5.90 | 5.43 | 5.77 | 4.33 | 4.13 | 5.63 | 4.97 |
| M-of-n | 6.07 | 6.03 | 6.00 | 6.03 | 6.00 | 6.07 | 6.07 | 6.00 |
| penglungEW | 11.30 | 8.17 | 11.83 | 20.67 | 9.60 | 8.23 | 12.67 | 11.07 |
| SonarEW | 13.67 | 14.37 | 14.20 | 11.13 | 16.43 | 11.50 | 13.10 | 14.57 |
| SpectEW | 6.97 | 4.77 | 6.30 | 5.20 | 4.77 | 7.70 | 5.40 | 5.17 |
| Tic-tac-toe | 5.00 | 8.20 | 7.83 | 6.17 | 6.47 | 7.93 | 5.90 | 5.13 |
| Vote | 4.57 | 3.20 | 4.30 | 4.63 | 5.50 | 5.37 | 4.20 | 1.70 |
| WaveformEW | 19.00 | 17.43 | 16.17 | 19.70 | 17.47 | 16.73 | 15.37 | 16.00 |
| WineEW | 4.03 | 4.33 | 4.10 | 6.77 | 3.53 | 3.27 | 3.00 | 4.27 |
| Zoo | 3.10 | 4.87 | 3.07 | 3.10 | 3.00 | 2.03 | 4.07 | 4.70 |
| W/T/L | 11/0/7 | 7/0/11 | 12/0/6 | 6/0/12 | 7/0/11 | 11/0/7 | 9/0/9 | 9/0/9 |
| Rank (F-test) | 3.89 | 5.06 | 4.56 | 5.19 | 4.75 | 4.42 | 3.86 | 4.28 |
Table 6. Comparison of BHHO with the basic and time-varying V-shaped variants in terms of fitness rates.

| Dataset | BHHO-V1 | BHHO-TV1 | BHHO-V2 | BHHO-TV2 | BHHO-V3 | BHHO-TV3 | BHHO-V4 | BHHO-TV4 |
|---|---|---|---|---|---|---|---|---|
| Breastcancer | 0.0361 | 0.0258 | 0.0046 | 0.0130 | 0.0276 | 0.0186 | 0.0104 | 0.0274 |
| BreastEW | 0.0318 | 0.0204 | 0.0114 | 0.0141 | 0.0212 | 0.0110 | 0.0277 | 0.0235 |
| CongressEW | 0.0079 | 0.0224 | 0.0133 | 0.0032 | 0.0361 | 0.0200 | 0.0252 | 0.0019 |
| Exactly | 0.0046 | 0.0218 | 0.0897 | 0.0081 | 0.0050 | 0.0050 | 0.0053 | 0.0048 |
| Exactly2 | 0.2097 | 0.1886 | 0.1863 | 0.2758 | 0.2460 | 0.2046 | 0.2311 | 0.2128 |
| HeartEW | 0.0661 | 0.1137 | 0.1047 | 0.0993 | 0.1326 | 0.1076 | 0.0957 | 0.0933 |
| IonosphereEW | 0.0389 | 0.0318 | 0.0591 | 0.0502 | 0.0414 | 0.0394 | 0.0477 | 0.0280 |
| KrvskpEW | 0.0300 | 0.0249 | 0.0326 | 0.0306 | 0.0307 | 0.0344 | 0.0300 | 0.0247 |
| Lymphography | 0.0199 | 0.0891 | 0.1141 | 0.0538 | 0.0508 | 0.0749 | 0.0372 | 0.0215 |
| M-of-n | 0.0048 | 0.0048 | 0.0046 | 0.0046 | 0.0046 | 0.0047 | 0.0047 | 0.0046 |
| penglungEW | 0.0003 | 0.0003 | 0.0004 | 0.0556 | 0.0069 | 0.0003 | 0.0004 | 0.0003 |
| SonarEW | 0.0596 | 0.0464 | 0.0527 | 0.0184 | 0.0428 | 0.0671 | 0.0509 | 0.0268 |
| SpectEW | 0.0967 | 0.1128 | 0.1465 | 0.1234 | 0.1403 | 0.0732 | 0.0905 | 0.0922 |
| Tic-tac-toe | 0.1771 | 0.1535 | 0.1661 | 0.1461 | 0.1739 | 0.1654 | 0.1884 | 0.1707 |
| Vote | 0.0386 | 0.0026 | 0.0192 | 0.0155 | 0.0150 | 0.0171 | 0.0158 | 0.0137 |
| WaveformEW | 0.2005 | 0.2053 | 0.1985 | 0.1947 | 0.1968 | 0.2105 | 0.2016 | 0.2047 |
| WineEW | 0.0031 | 0.0033 | 0.0032 | 0.0125 | 0.0027 | 0.0025 | 0.0023 | 0.0033 |
| Zoo | 0.0019 | 0.0580 | 0.0019 | 0.0019 | 0.0019 | 0.0013 | 0.0025 | 0.0029 |
| W/T/L | 8/0/10 | 10/0/8 | 7/0/11 | 11/0/7 | 7/0/11 | 11/0/7 | 5/0/13 | 13/0/5 |
| Rank (F-test) | 4.44 | 4.75 | 4.92 | 4.33 | 5.03 | 4.31 | 4.75 | 3.47 |
Table 7. Comparison of BHHO-TV4 versus other optimizers in terms of average classification accuracy.

| Dataset | BHHO-TV4 | BGSA | BPSO | BBA | BSSA | BWOA |
|---|---|---|---|---|---|---|
| Breastcancer | 0.9781 | 0.9855 | 0.9783 | 0.9698 | 0.9700 | 0.9783 |
| BreastEW | 0.9792 | 0.9643 | 0.9734 | 0.9380 | 0.9661 | 0.9763 |
| CongressEW | 1.0000 | 0.9663 | 0.9877 | 0.9280 | 0.9816 | 0.9774 |
| Exactly | 0.9998 | 0.7227 | 0.9892 | 0.6815 | 0.9827 | 0.9952 |
| Exactly2 | 0.7885 | 0.7908 | 0.8027 | 0.7313 | 0.7733 | 0.7465 |
| HeartEW | 0.9105 | 0.8488 | 0.9000 | 0.7864 | 0.9272 | 0.9037 |
| IonosphereEW | 0.9728 | 0.8507 | 0.9362 | 0.8681 | 0.9718 | 0.8681 |
| KrvskpEW | 0.9789 | 0.9182 | 0.9759 | 0.8267 | 0.9759 | 0.9749 |
| Lymphography | 0.9811 | 0.8220 | 0.8944 | 0.6867 | 0.9156 | 0.8933 |
| M-of-n | 1.0000 | 0.8815 | 0.9975 | 0.7665 | 0.9930 | 0.9993 |
| penglungEW | 1.0000 | 0.8832 | 0.9978 | 0.8867 | 0.9422 | 0.8044 |
| SonarEW | 0.9754 | 0.9397 | 0.9413 | 0.8468 | 0.9167 | 0.8714 |
| SpectEW | 0.9093 | 0.8265 | 0.8648 | 0.8222 | 0.9043 | 0.8482 |
| Tic-tac-toe | 0.8333 | 0.7941 | 0.8174 | 0.7024 | 0.9004 | 0.8594 |
| Vote | 0.9872 | 0.9294 | 1.0000 | 0.8800 | 0.9500 | 0.9683 |
| WaveformEW | 0.7973 | 0.7753 | 0.8167 | 0.7196 | 0.8000 | 0.8102 |
| WineEW | 1.0000 | 0.9843 | 0.9963 | 0.9111 | 0.9926 | 0.9815 |
| Zoo | 1.0000 | 0.9683 | 1.0000 | 0.8334 | 1.0000 | 0.9889 |
| Rank (F-test) | 1.72 | 4.50 | 2.44 | 5.81 | 2.97 | 3.56 |
Table 8. The two-tailed p-values of the Wilcoxon signed-rank test for the accuracy results reported in Table 7 (p-values ≤ 0.05 are significant).

| Dataset | BGSA | BPSO | BBA | BSSA | BWOA | BHHO-TV4 |
|---|---|---|---|---|---|---|
| Breastcancer | 3.82E-09 | 5.70E-01 | 4.26E-01 | 3.33E-12 | 5.70E-01 | 1 |
| BreastEW | 1.15E-08 | 8.90E-04 | 2.45E-11 | 9.81E-10 | 1.78E-02 | 1 |
| CongressEW | 7.40E-13 | 2.50E-12 | 1.04E-12 | 4.17E-13 | 2.05E-13 | 1 |
| Exactly | 1.68E-12 | 3.98E-02 | 1.69E-12 | 1.02E-03 | 1.98E-02 | 1 |
| Exactly2 | 8.30E-02 | 6.83E-11 | 1.79E-11 | 6.39E-11 | 1.53E-11 | 1 |
| HeartEW | 7.29E-11 | 2.67E-03 | 2.22E-10 | 3.43E-03 | 7.95E-03 | 1 |
| IonosphereEW | 1.30E-11 | 5.60E-09 | 1.69E-11 | 2.22E-01 | 1.21E-11 | 1 |
| KrvskpEW | 5.80E-11 | 1.24E-01 | 2.88E-11 | 1.05E-03 | 3.31E-04 | 1 |
| Lymphography | 1.12E-11 | 2.27E-11 | 1.57E-11 | 2.87E-11 | 5.19E-12 | 1 |
| M-of-n | 1.19E-12 | 8.15E-02 | 1.20E-12 | 1.37E-03 | 8.15E-02 | 1 |
| penglungEW | 2.54E-13 | 3.34E-01 | 6.09E-13 | 1.97E-11 | 4.16E-14 | 1 |
| SonarEW | 1.83E-08 | 3.52E-07 | 1.24E-11 | 6.77E-12 | 6.77E-12 | 1 |
| SpectEW | 1.07E-12 | 1.56E-12 | 2.87E-12 | 3.15E-02 | 4.70E-13 | 1 |
| Tic-tac-toe | 1.17E-12 | 8.26E-13 | 1.17E-12 | 4.16E-14 | 1.69E-14 | 1 |
| Vote | 2.73E-12 | 1.47E-09 | 7.07E-12 | 4.23E-13 | 2.60E-10 | 1 |
| WaveformEW | 2.06E-08 | 7.16E-10 | 5.72E-11 | 2.60E-01 | 4.18E-08 | 1 |
| WineEW | 1.06E-05 | 4.18E-02 | 3.70E-12 | 2.70E-03 | 5.88E-08 | 1 |
| Zoo | 5.88E-08 | NaN | 4.48E-12 | NaN | 5.47E-03 | 1 |
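A minimal sketch of the significance test behind Tables 8, 10, and 12: a two-tailed Wilcoxon signed-rank test over the 30 paired per-run results of BHHO-TV4 and one competitor on one dataset. The synthetic arrays below are placeholders for the recorded runs, and SciPy is assumed.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
acc_bhho_tv4 = 0.97 + 0.01 * rng.random(30)  # placeholder run accuracies
acc_bpso     = 0.95 + 0.01 * rng.random(30)  # placeholder run accuracies

stat, p = wilcoxon(acc_bhho_tv4, acc_bpso, alternative="two-sided")
print(f"p = {p:.2e} -> {'significant' if p <= 0.05 else 'not significant'} at 0.05")
```

Note that when two methods produce identical results on every run (zero paired differences), the test is undefined; this is the source of the NaN entries for Zoo in Table 8.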
Table 9. Comparison of BHHO-TV4 versus other optimizers in terms of average selected features.

| Dataset | BHHO-TV4 | BGSA | BPSO | BBA | BSSA | BWOA |
|---|---|---|---|---|---|---|
| Breastcancer | 5.13 | 5.10 | 3.10 | 3.70 | 4.77 | 4.40 |
| BreastEW | 8.83 | 14.80 | 11.43 | 12.30 | 18.23 | 16.17 |
| CongressEW | 3.03 | 6.97 | 4.90 | 6.20 | 6.20 | 6.27 |
| Exactly | 6.07 | 7.87 | 6.17 | 6.30 | 6.87 | 6.57 |
| Exactly2 | 4.43 | 4.47 | 2.43 | 4.93 | 8.83 | 7.67 |
| HeartEW | 6.13 | 6.50 | 5.17 | 4.77 | 7.53 | 6.37 |
| IonosphereEW | 3.63 | 13.57 | 9.47 | 12.50 | 17.40 | 12.83 |
| KrvskpEW | 13.73 | 19.93 | 19.00 | 15.57 | 25.30 | 25.50 |
| Lymphography | 4.97 | 8.53 | 5.97 | 6.73 | 9.03 | 9.77 |
| M-of-n | 6.00 | 7.90 | 6.20 | 6.20 | 7.10 | 6.80 |
| penglungEW | 11.07 | 149.87 | 126.50 | 123.07 | 174.27 | 120.83 |
| SonarEW | 14.57 | 28.27 | 24.37 | 25.63 | 36.30 | 31.27 |
| SpectEW | 5.17 | 10.90 | 8.37 | 9.37 | 11.63 | 13.33 |
| Tic-tac-toe | 5.13 | 6.13 | 6.20 | 4.33 | 7.13 | 9.00 |
| Vote | 1.70 | 7.57 | 2.63 | 6.57 | 7.07 | 6.00 |
| WaveformEW | 16.00 | 21.80 | 23.50 | 16.63 | 25.87 | 28.83 |
| WineEW | 4.27 | 6.47 | 5.97 | 5.90 | 5.93 | 6.17 |
| Zoo | 4.70 | 6.33 | 3.73 | 6.17 | 4.17 | 5.97 |
| Rank (F-test) | 1.61 | 4.72 | 2.36 | 2.78 | 4.92 | 4.61 |
Table 10. The two-tailed p-values of the Wilcoxon signed-rank test for the number of features reported in Table 9 (p-values ≤ 0.05 are significant).

| Dataset | BGSA | BPSO | BBA | BSSA | BWOA | BHHO-TV4 |
|---|---|---|---|---|---|---|
| Breastcancer | 6.18E-01 | 3.47E-11 | 4.06E-06 | 9.99E-03 | 2.63E-05 | 1 |
| BreastEW | 3.24E-08 | 4.08E-04 | 1.72E-05 | 3.16E-10 | 4.36E-09 | 1 |
| CongressEW | 6.51E-12 | 2.25E-10 | 2.96E-11 | 1.55E-12 | 1.47E-12 | 1 |
| Exactly | 9.70E-08 | 2.37E-01 | 2.10E-01 | 6.30E-07 | 9.34E-05 | 1 |
| Exactly2 | 3.80E-01 | 8.67E-04 | 5.48E-01 | 1.98E-11 | 1.20E-08 | 1 |
| HeartEW | 4.58E-01 | 5.03E-02 | 1.72E-02 | 5.39E-03 | 3.01E-01 | 1 |
| IonosphereEW | 1.91E-11 | 2.43E-11 | 1.81E-11 | 1.71E-11 | 1.81E-11 | 1 |
| KrvskpEW | 4.74E-08 | 5.98E-07 | 2.64E-02 | 4.09E-11 | 5.47E-11 | 1 |
| Lymphography | 9.21E-10 | 9.16E-03 | 5.78E-04 | 1.93E-10 | 6.91E-11 | 1 |
| M-of-n | 2.03E-09 | 2.15E-02 | 8.10E-01 | 3.32E-10 | 2.64E-08 | 1 |
| penglungEW | 2.84E-11 | 2.86E-11 | 2.87E-11 | 2.86E-11 | 2.88E-11 | 1 |
| SonarEW | 9.02E-11 | 5.47E-10 | 8.76E-10 | 2.86E-11 | 4.53E-11 | 1 |
| SpectEW | 6.26E-11 | 7.97E-09 | 2.71E-09 | 7.11E-11 | 3.83E-11 | 1 |
| Tic-tac-toe | 9.22E-06 | 6.21E-03 | 1.53E-02 | 2.31E-12 | 1.17E-13 | 1 |
| Vote | 1.44E-11 | 4.31E-05 | 1.57E-10 | 4.00E-11 | 9.17E-11 | 1 |
| WaveformEW | 2.22E-06 | 8.70E-08 | 3.31E-01 | 1.62E-09 | 1.91E-10 | 1 |
| WineEW | 1.56E-08 | 7.13E-09 | 6.52E-06 | 1.98E-09 | 3.63E-11 | 1 |
| Zoo | 1.98E-07 | 3.82E-05 | 1.71E-03 | 3.27E-03 | 3.98E-06 | 1 |
Table 11. Comparison of BHHO-TV4 versus other optimizers in terms of average fitness values.

| Dataset | BHHO-TV4 | BGSA | BPSO | BBA | BSSA | BWOA |
|---|---|---|---|---|---|---|
| Breastcancer | 0.0274 | 0.0200 | 0.0249 | 0.0199 | 0.0350 | 0.0263 |
| BreastEW | 0.0235 | 0.0402 | 0.0302 | 0.0418 | 0.0397 | 0.0288 |
| CongressEW | 0.0019 | 0.0377 | 0.0152 | 0.0304 | 0.0221 | 0.0263 |
| Exactly | 0.0048 | 0.2806 | 0.0155 | 0.2846 | 0.0224 | 0.0098 |
| Exactly2 | 0.2128 | 0.2105 | 0.1972 | 0.2299 | 0.2312 | 0.2569 |
| HeartEW | 0.0933 | 0.1547 | 0.1030 | 0.1235 | 0.0779 | 0.1002 |
| IonosphereEW | 0.0280 | 0.1518 | 0.0660 | 0.1102 | 0.0330 | 0.1344 |
| KrvskpEW | 0.0247 | 0.0865 | 0.0291 | 0.0828 | 0.0309 | 0.0319 |
| Lymphography | 0.0215 | 0.1809 | 0.1078 | 0.2088 | 0.0886 | 0.1110 |
| M-of-n | 0.0046 | 0.1234 | 0.0072 | 0.1353 | 0.0124 | 0.0059 |
| penglungEW | 0.0003 | 0.1203 | 0.0061 | 0.0739 | 0.0626 | 0.1973 |
| SonarEW | 0.0268 | 0.0644 | 0.0622 | 0.0996 | 0.0886 | 0.1325 |
| SpectEW | 0.0922 | 0.1767 | 0.1376 | 0.1296 | 0.1000 | 0.1564 |
| Tic-tac-toe | 0.1707 | 0.2107 | 0.1877 | 0.2296 | 0.1066 | 0.1492 |
| Vote | 0.0137 | 0.0746 | 0.0016 | 0.0715 | 0.0539 | 0.0351 |
| WaveformEW | 0.2047 | 0.2279 | 0.1873 | 0.2292 | 0.2045 | 0.1951 |
| WineEW | 0.0033 | 0.0206 | 0.0083 | 0.0167 | 0.0119 | 0.0231 |
| Zoo | 0.0029 | 0.0354 | 0.0023 | 0.0621 | 0.0026 | 0.0147 |
| Rank (F-test) | 1.83 | 4.89 | 2.44 | 4.83 | 3.11 | 3.89 |
Table 12. The two-tailed p-values of the Wilcoxon signed-rank test for the fitness results reported in Table 11 (p-values ≤ 0.05 are significant).

| Dataset | BGSA | BPSO | BBA | BSSA | BWOA | BHHO-TV4 |
|---|---|---|---|---|---|---|
| Breastcancer | 6.84E-10 | 5.19E-11 | 6.86E-10 | 1.98E-12 | 2.60E-06 | 1 |
| BreastEW | 6.74E-10 | 9.17E-06 | 4.32E-10 | 5.38E-11 | 3.31E-06 | 1 |
| CongressEW | 1.67E-12 | 1.44E-12 | 1.68E-12 | 1.57E-12 | 1.52E-12 | 1 |
| Exactly | 2.35E-12 | 1.01E-01 | 2.35E-12 | 8.15E-07 | 1.03E-04 | 1 |
| Exactly2 | 2.47E-03 | 8.79E-11 | 2.48E-11 | 2.56E-11 | 2.54E-11 | 1 |
| HeartEW | 3.76E-11 | 7.68E-02 | 1.64E-06 | 1.06E-02 | 1.85E-02 | 1 |
| IonosphereEW | 2.42E-11 | 4.80E-10 | 2.43E-11 | 7.52E-02 | 2.43E-11 | 1 |
| KrvskpEW | 4.97E-11 | 2.46E-02 | 3.33E-11 | 8.85E-06 | 5.84E-06 | 1 |
| Lymphography | 2.10E-11 | 2.64E-11 | 2.14E-11 | 2.09E-11 | 2.05E-11 | 1 |
| M-of-n | 1.21E-12 | 2.16E-02 | 4.57E-12 | 3.88E-10 | 2.65E-08 | 1 |
| penglungEW | 2.84E-11 | 2.86E-11 | 2.88E-11 | 2.86E-11 | 2.88E-11 | 1 |
| SonarEW | 1.76E-10 | 6.44E-09 | 3.80E-10 | 2.96E-11 | 2.98E-11 | 1 |
| SpectEW | 2.09E-11 | 2.09E-11 | 3.17E-10 | 4.79E-08 | 2.12E-11 | 1 |
| Tic-tac-toe | 1.67E-12 | 1.19E-12 | 1.66E-12 | 6.50E-14 | 2.71E-14 | 1 |
| Vote | 7.36E-12 | 4.57E-11 | 7.19E-12 | 6.91E-12 | 6.92E-12 | 1 |
| WaveformEW | 6.52E-09 | 2.44E-09 | 1.56E-08 | 9.76E-01 | 1.09E-05 | 1 |
| WineEW | 2.76E-11 | 6.44E-10 | 2.30E-10 | 7.63E-11 | 7.47E-12 | 1 |
| Zoo | 1.37E-11 | 3.82E-05 | 1.35E-11 | 3.27E-03 | 1.36E-07 | 1 |
Table 13. Comparison of the proposed BHHO-TV4 and other approaches from previous works in terms of accuracy rates.

| Dataset | BHHO-TV4 | BSSA_S3_CP [58] | WOA-CM [59] | BGOA_EPD_Tour [60] | HGSA [61] | BGOA-M [62] | BDA-TVv4 [35] | BGWOPSO [12] | S-bBOA [63] |
|---|---|---|---|---|---|---|---|---|---|
| Breastcancer | 0.978 | 0.977 | 0.968 | 0.980 | 0.974 | 0.974 | 0.977 | 0.980 | 0.9686 |
| BreastEW | 0.979 | 0.948 | 0.971 | 0.947 | 0.971 | 0.970 | 0.974 | 0.970 | 0.9709 |
| CongressEW | 1.000 | 0.963 | 0.792 | 0.964 | 0.966 | 0.976 | 0.995 | 0.980 | 0.9593 |
| Exactly | 1.000 | 0.980 | 0.956 | 0.999 | 1.000 | 1.000 | 0.929 | 1.000 | 0.9724 |
| Exactly2 | 0.789 | 0.758 | 1.000 | 0.780 | 0.770 | 0.735 | 0.726 | 0.760 | 0.7596 |
| HeartEW | 0.910 | 0.861 | 0.742 | 0.833 | 0.856 | 0.836 | 0.886 | 0.850 | 0.8237 |
| IonosphereEW | 0.973 | 0.918 | 0.919 | 0.899 | 0.934 | 0.946 | 0.925 | 0.950 | 0.907 |
| KrvskpEW | 0.979 | 0.964 | 0.866 | 0.968 | 0.978 | 0.974 | 0.971 | 0.980 | 0.966 |
| Lymphography | 0.981 | 0.890 | 0.807 | 0.868 | 0.892 | 0.912 | 0.895 | 0.920 | 0.8676 |
| M-of-n | 1.000 | 0.992 | 0.926 | 1.000 | 1.000 | 1.000 | 0.973 | 1.000 | 0.972 |
| penglungEW | 1.000 | 0.878 | 0.972 | 0.927 | 0.956 | 0.934 | 0.807 | 0.960 | 0.8775 |
| SonarEW | 0.975 | 0.937 | 0.852 | 0.912 | 0.958 | 0.915 | 0.995 | 0.960 | 0.9362 |
| SpectEW | 0.909 | 0.836 | 0.991 | 0.826 | 0.919 | 0.826 | 0.877 | 0.880 | 0.8463 |
| Tic-tac-toe | 0.833 | 0.821 | 0.785 | 0.808 | 0.788 | 0.791 | 0.822 | 0.810 | 0.7983 |
| Vote | 0.987 | 0.951 | 0.939 | 0.966 | 0.973 | 0.963 | 0.962 | 0.970 | 0.9653 |
| WaveformEW | 0.797 | 0.734 | 0.753 | 0.737 | 0.815 | 0.751 | 0.749 | 0.800 | 0.7429 |
| WineEW | 1.000 | 0.993 | 0.959 | 0.989 | 0.989 | 0.989 | 0.999 | 1.000 | 0.9843 |
| Zoo | 1.000 | 1.000 | 0.980 | 0.993 | 0.932 | 0.958 | 0.983 | 1.000 | 0.9775 |
| Rank (F-test) | 1.78 | 6.00 | 6.78 | 5.92 | 4.28 | 5.50 | 4.86 | 3.03 | 6.86 |
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.