Article

A New Algorithm for Cancer Biomarker Gene Detection Using Harris Hawks Optimization

Information Technology Department, College of Computer and Information Sciences, King Saud University (KSU), Riyadh 11451, Saudi Arabia
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(19), 7273; https://doi.org/10.3390/s22197273
Submission received: 28 July 2022 / Revised: 1 September 2022 / Accepted: 9 September 2022 / Published: 26 September 2022
(This article belongs to the Section Biosensors)

Abstract

This paper presents two novel swarm intelligence algorithms for gene selection, HHO-SVM and HHO-KNN. Both of these algorithms are based on Harris Hawks Optimization (HHO), one in conjunction with support vector machines (SVM) and the other in conjunction with k-nearest neighbors (k-NN). In both algorithms, the goal is to determine a small gene subset that can be used to classify samples with a high degree of accuracy. The proposed algorithms are divided into two phases. To obtain an accurate gene set and to deal with the challenge of high-dimensional data, the redundancy analysis and relevance calculation are conducted in the first phase. To solve the gene selection problem, the second phase applies SVM and k-NN with leave-one-out cross-validation. A performance evaluation was performed on six microarray data sets using the two proposed algorithms. A comparison of the two proposed algorithms with several known algorithms indicates that both of them perform quite well in terms of classification accuracy and the number of selected genes.

1. Introduction

Approximately 10 million people worldwide die from cancer every year, or one in every six deaths, according to the WHO [1]. Early diagnosis and treatment can reduce the cancer mortality rate, while wrong classifications and predictions cause serious harm to patients and their families [2]. Microarray data are commonly employed in cancer research, where early detection of cancer can greatly influence treatment and survival rates [3]. Nevertheless, microarray data suffer from high dimensionality, since the number of genes far outnumbers the number of samples, resulting in the so-called “curse of dimensionality”. When the dimensionality of a data set rises significantly, it becomes difficult to demonstrate the statistical significance of the results [4].
Four families of methods have been applied to the “curse of dimensionality”: filter, wrapper, embedded, and hybrid methods [5]. Filter methods score the relevance of each feature based only on intrinsic properties of the data; features can then be ranked by score, and low-scoring features removed. In wrapper methods, the analysis model is embedded within the search for appropriate features. Embedded methods search for an optimal subset of features as part of the analysis algorithm itself. Hybrid methods combine two selection methods to take advantage of both [6].
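As a concrete illustration of the filter approach described above, the sketch below scores each gene by its absolute correlation with the class label and keeps the top-ranked genes. This is a generic example under our own assumptions; the scoring function and names are illustrative, not the paper's method:

```python
import numpy as np

def filter_select(X, y, k):
    """Rank genes by a simple relevance score (absolute Pearson correlation
    with the class label) and keep the top k. Illustrative filter method only."""
    y_centered = y - y.mean()
    X_centered = X - X.mean(axis=0)
    # Correlation numerator/denominator for each gene (column) vs. the labels
    num = X_centered.T @ y_centered
    denom = np.linalg.norm(X_centered, axis=0) * np.linalg.norm(y_centered)
    scores = np.abs(num / (denom + 1e-12))
    return np.argsort(scores)[::-1][:k]  # indices of the k highest-scoring genes

# Toy data: 8 samples, 5 genes; gene 0 is made strongly class-relevant
rng = np.random.default_rng(0)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
X = rng.normal(size=(8, 5))
X[:, 0] += 10 * y
print(filter_select(X, y, 2))
```

Because gene 0 tracks the label, it should be ranked first by this score.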
Two wrapper-based feature-selection methods are presented in this paper, both of which employ the Harris Hawks Optimizer (HHO) to select the most informative genes for classification and achieve high accuracy: HHO-SVM works in conjunction with support vector machines (SVM), and HHO-KNN works in conjunction with the k-nearest neighbors (k-NN) algorithm. To evaluate the effectiveness of both HHO-SVM and HHO-KNN, we compared the results on six microarray cancer data sets with several recently published techniques. In both binary and multiclass classification, HHO-KNN and HHO-SVM achieve high classification accuracy with a small number of selected genes. This paper aims to test HHO as a feature selection method and assess its effectiveness. It addresses two questions: Can HHO be used as a feature selection method on well-known cancer gene microarray data sets? In addition, which classifier works best with HHO?
The paper is structured as follows: Section 2 describes how HHO was inspired and the mathematical modeling that went into it. In Section 3, we introduce our proposed HHO-SVM and HHO-KNN approaches to gene selection. Discussions and experimental results are presented in Section 4. Finally, the conclusion is given in Section 5.

2. The Harris Hawks Optimizer

2.1. Inspiration

The Harris Hawks Optimizer (HHO) is a swarm computation method that was developed by Heidari et al. in 2019 [7]. This algorithm was inspired by the cooperative hunting and chasing behavior exhibited by Harris’s hawks, particularly “surprise pounces” or “the seven kills.” In a cooperative attack, numerous hawks coordinate their efforts and simultaneously attack a rabbit that has shown itself.
The attack could well be accomplished quickly by catching the surprised prey in a matter of seconds; however, depending on the prey’s actions and ability to flee, the attack may include repeated, short, fast dives near the prey over the course of many minutes. According to the changing circumstances and the prey’s escape patterns, Harris’s hawks can demonstrate a variety of chasing styles. Generally, tactics are changed when the party’s strongest hawk (leader) goes after the prey but loses it, at which point another party member continues the chase. These switches are observed in a variety of settings because they confuse the escaping rabbit. Moreover, the rabbit cannot regain its defensive abilities when a new hawk begins the chase, and it cannot escape the attacking team: eventually one hawk, usually the most experienced and powerful, captures the exhausted rabbit and shares it with the rest of the team.

2.2. Mathematical Modeling

Hawks are known to chase their prey by tracing, encircling, and eventually striking and killing it. The mathematical model, based on the hawks’ hunting behavior, comprises three stages: exploration, the transition between exploration and exploitation, and exploitation. At each stage of the hunt, the Harris’s hawks are the candidate solutions, and the targeted prey is the best candidate solution (nearly the optimum).
As they search for prey, Harris’s hawks use two different exploration techniques. Each hawk is a candidate solution, and the best candidate at each step is treated as the intended prey. In the first technique, a hawk chooses a spot by considering the locations of the other hawks and the prey; in the second, the hawks perch on random tall trees. Using Equation (1), the two techniques are simulated with equal probability, governed by the random number $q$:
$$x(t+1) = \begin{cases} x_{rand}(t) - r_1 \left| x_{rand}(t) - 2 r_2 x(t) \right|, & q \geq 0.5 \\ \left( x_{rabbit}(t) - x_{mean}(t) \right) - r_3 \left( LB + r_4 (UB - LB) \right), & q < 0.5 \end{cases} \quad (1)$$
  • $x(t)$ is the current hawk position, and $x(t+1)$ is the hawk’s position at the next iteration.
  • $x_{rand}(t)$ is a hawk selected at random from the population.
  • $x_{rabbit}(t)$ is the rabbit position.
  • $q$, $r_1$, $r_2$, $r_3$, and $r_4$ are random numbers inside (0, 1).
  • $LB$ and $UB$ are the lower and upper bounds of the variables.
  • $x_{mean}(t)$ is the average position of the current population of hawks, calculated as shown in Equation (2).
$$x_{mean}(t) = \frac{1}{N} \sum_{i=1}^{N} x_i(t) \quad (2)$$
  • $t$ is the current iteration.
  • $x_i(t)$ is the position of hawk $i$ at iteration $t$.
  • $N$ is the total number of hawks.
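The exploration phase above can be sketched as follows. This is a minimal NumPy illustration of Equations (1) and (2); the function name and the final clipping to the bounds are our own assumptions:

```python
import numpy as np

def exploration_step(X, x_rabbit, lb, ub, rng):
    """One HHO exploration update (Eq. (1)): each hawk either perches relative
    to a random hawk or relative to the rabbit and the swarm mean (Eq. (2)).
    X: (N, dim) hawk positions; x_rabbit: (dim,) best position found so far."""
    N, dim = X.shape
    x_mean = X.mean(axis=0)                      # Eq. (2)
    X_new = np.empty_like(X)
    for i in range(N):
        q, r1, r2, r3, r4 = rng.random(5)
        if q >= 0.5:
            x_rand = X[rng.integers(N)]          # a randomly chosen hawk
            X_new[i] = x_rand - r1 * np.abs(x_rand - 2 * r2 * X[i])
        else:
            X_new[i] = (x_rabbit - x_mean) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)                # keep hawks inside [lb, ub]

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(10, 4))          # 10 hawks, 4-dimensional problem
X = exploration_step(X, X[0], 0.0, 1.0, rng)
print(X.shape)
```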
The algorithm switches from exploration to exploitation depending on the rabbit’s escaping energy, as shown in Equation (3).
$$E = 2 E_0 \left( 1 - \frac{t}{Max\_iter} \right) \quad (3)$$
  • $E$ represents the prey’s escaping energy, and $Max\_iter$ is the maximum number of iterations.
  • $E_0$ is the initial state of the energy, which changes randomly inside (−1, 1) at each iteration.
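Equation (3) is simple enough to implement directly; the sketch below shows the linear decay of the escaping energy (the function name is illustrative):

```python
def escaping_energy(e0, t, max_iter):
    """Eq. (3): the prey's escaping energy decays linearly over iterations.
    e0 is the initial energy, redrawn uniformly from (-1, 1) each iteration."""
    return 2.0 * e0 * (1.0 - t / max_iter)

# |E| shrinks toward 0 as t approaches max_iter, moving the hawks
# from exploration (|E| >= 1) toward exploitation (|E| < 1)
print(escaping_energy(0.8, 50, 100))
```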
When $|E| \geq 1$, the hawks continue searching new areas to locate the rabbit; otherwise, the exploitation stage begins. The algorithm models the rabbit’s escape as a failure ($p \geq 0.5$) or a success ($p < 0.5$), where the random number $p$ gives each outcome an equal chance. The hawks will also carry out a soft ($|E| \geq 0.5$) or hard ($|E| < 0.5$) siege, based on the rabbit’s remaining energy. The soft siege is defined as in Equations (4)–(6).
$$x(t+1) = \Delta x(t) - E \left| J \cdot x_{rabbit}(t) - x(t) \right| \quad (4)$$
$$\Delta x(t) = x_{rabbit}(t) - x(t) \quad (5)$$
$$J = 2 \left( 1 - r_5 \right) \quad (6)$$
  • $\Delta x(t)$ is the difference between the rabbit and hawk positions.
  • $J$ is the rabbit’s random jump strength, where $r_5$ is a random number inside (0, 1).
A hard siege, on the other hand, can be calculated as follows in Equation (7):
$$x(t+1) = x_{rabbit}(t) - E \left| \Delta x(t) \right| \quad (7)$$
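The two siege modes can be sketched together as follows. This is a NumPy illustration of Equations (4)–(7) under our own assumptions; the function name and the branching on $|E|$ follow the text above:

```python
import numpy as np

def siege_step(x, x_rabbit, E, rng):
    """Soft siege (Eqs. (4)-(6)) when |E| >= 0.5, hard siege (Eq. (7)) when
    |E| < 0.5; both branches assume the rabbit fails to escape (p >= 0.5)."""
    delta = x_rabbit - x                               # Eq. (5)
    if abs(E) >= 0.5:                                  # soft siege
        J = 2.0 * (1.0 - rng.random())                 # Eq. (6): random jump strength
        return delta - E * np.abs(J * x_rabbit - x)    # Eq. (4)
    return x_rabbit - E * np.abs(delta)                # Eq. (7): hard siege

rng = np.random.default_rng(2)
x = np.array([0.2, 0.8])
x_rabbit = np.array([0.5, 0.5])
print(siege_step(x, x_rabbit, 0.1, rng))               # |E| < 0.5: hard siege branch
```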
A soft siege with repeated fast dives is attempted when $|E| \geq 0.5$ and $p < 0.5$, as the rabbit could still successfully escape. The hawks then have the option of selecting the best dive. A Lévy flight is employed to imitate the prey’s hopping. The hawks’ next action, calculated as shown in Equation (8), determines whether the dive is successful or not.
$$k = x_{rabbit}(t) - E \left| J \cdot x_{rabbit}(t) - x(t) \right| \quad (8)$$
If the previous dive turns out to be ineffective, the hawks will dive following the Lévy flight pattern $L$, as in Equation (9).
$$z = k + \mathit{RandomVector} \cdot L(\mathit{dim}) \quad (9)$$
  • $\mathit{RandomVector}$ is a random vector of size $\mathit{dim}$, where $\mathit{dim}$ is the dimension of the problem, and $L$ is the Lévy flight function.
Equation (10) is used to update the positions during the final soft-siege rapid dives:
$$x(t+1) = \begin{cases} k, & \text{if } f(k) < f(x(t)) \\ z, & \text{if } f(z) < f(x(t)) \end{cases} \quad (10)$$
Equations (8) and (9) are used to calculate $k$ and $z$, respectively. A hard siege with progressive rapid dives occurs when $|E| < 0.5$ and $p < 0.5$: the rabbit no longer possesses enough energy to flee. Here, $z$ is still calculated via Equation (9), while $k$ is updated using Equation (11).
$$k = x_{rabbit}(t) - E \left| J \cdot x_{rabbit}(t) - x_{mean}(t) \right| \quad (11)$$
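The rapid-dive logic of Equations (8)–(11) can be sketched as below. The Lévy flight uses Mantegna's algorithm with β = 1.5, as in the original HHO paper; the function names and the fallback of keeping the old position when neither dive improves the fitness are our own assumptions:

```python
import math
import numpy as np

def levy(dim, rng, beta=1.5):
    """Levy flight step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def rapid_dive_step(x, x_rabbit, x_mean, E, f, rng, hard=False):
    """Candidate dive k (Eq. (8) for soft siege, Eq. (11) for hard siege),
    then a Levy-flight dive z (Eq. (9)); keep whichever dive improves the
    fitness f (Eq. (10)), otherwise keep the current position (an assumption)."""
    dim = x.shape[0]
    J = 2.0 * (1.0 - rng.random())            # random jump strength
    ref = x_mean if hard else x               # hard siege dives relative to the swarm mean
    k = x_rabbit - E * np.abs(J * x_rabbit - ref)
    z = k + rng.random(dim) * levy(dim, rng)  # Eq. (9)
    if f(k) < f(x):
        return k
    if f(z) < f(x):
        return z
    return x

rng = np.random.default_rng(3)
f = lambda v: float(np.sum(v ** 2))           # toy fitness: sphere function
x = np.array([0.9, -0.7])
x_new = rapid_dive_step(x, np.array([0.1, 0.1]), np.array([0.4, 0.2]), 0.6, f, rng)
print(f(x_new) <= f(x))
```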

3. Proposed Algorithm

In this section, the two proposed algorithms are described in detail. We combined HHO with SVM and k-NN to develop two approaches, HHO-SVM and HHO-KNN, to address the high dimensionality of microarray data, find the most meaningful genes, and compare the SVM and k-NN classifiers to determine which yields the best accuracy while selecting the fewest genes. The fitness function used is the classification error rate.
The steps of both the HHO-SVM and HHO-KNN algorithms are shown in Figure 1. In addition, in Algorithm 1, we present pseudo code for the HHO algorithm.
To evaluate the performance of the two proposed approaches and avoid overfitting, leave-one-out cross-validation (LOOCV) was used to calculate the accuracy of both classifiers. In LOOCV, all samples except one are used as training data, and the remaining sample is used for testing; this is repeated until every sample has been tested. The LOOCV accuracy is the average over the N classification rounds.
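The LOOCV-based fitness evaluation can be sketched as follows, using a plain NumPy k-NN for illustration (the study's actual implementation and distance metric are not specified, so these are assumptions):

```python
import numpy as np

def loocv_knn_error(X, y, k=7):
    """Leave-one-out cross-validation error rate with a plain k-NN classifier;
    this error rate serves as the fitness of a candidate gene subset.
    X: (n, g) samples restricted to the candidate genes; y: (n,) integer labels."""
    n = X.shape[0]
    wrong = 0
    for i in range(n):
        # Train on all samples except i, test on sample i
        train = np.delete(np.arange(n), i)
        dists = np.linalg.norm(X[train] - X[i], axis=1)
        neighbors = train[np.argsort(dists)[:k]]
        votes = np.bincount(y[neighbors])
        if votes.argmax() != y[i]:
            wrong += 1
    return wrong / n

# Two well-separated classes: the LOOCV error should be zero
X = np.vstack([np.zeros((8, 3)), np.ones((8, 3)) * 5])
y = np.array([0] * 8 + [1] * 8)
print(loocv_knn_error(X, y, k=7))
```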
Algorithm 1: Pseudo-Code of HHO Algorithm.

4. Experimental Results and Discussions

This section describes the experimental approach, the gene expression data sets used in the study, and the results of applying the proposed algorithms to the microarray cancer data sets.

4.1. Data Sets

In our study, we used six publicly available benchmark microarray cancer data sets, comprising both binary and multiclass sets, to evaluate the performance and effectiveness of the two algorithms. The three binary data sets were colon tumor [8], lung cancer [9], and leukemia3 [10]. The three multiclass data sets were leukemia2 [8], lymphoma [11], and SRBCT [11]. A detailed breakdown of the experimental data sets by samples and classes can be found in Table 1.

4.2. Parameter Settings

To determine the most suitable solution, SVM and k-NN classifiers were used. Since k = 7 performed well across all test sets, it was used in the experiments. Two significant factors influence the practicality of the method: the number of iterations ($Max\_iter$) and the problem dimension. These, along with k, $UB$, $LB$, and the remaining parameters, can be found in Table 2.
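The settings in Table 2 can be collected into a single configuration object, sketched below (the dictionary keys are illustrative names, not taken from the authors' code):

```python
# Parameter settings from Table 2, gathered into one place.
hho_params = {
    "dim": None,       # dimension: set to the number of genes in each data set
    "max_iter": 100,   # number of iterations (Max_iter)
    "lb": 0.0,         # lower bound (LB)
    "ub": 1.0,         # upper bound (UB)
    "n_hawks": 10,     # number of Harris's hawks (SearchAgents_no)
    "n_runs": 30,      # number of independent runs (m)
    "k": 7,            # neighbors for the k-NN classifier
}
print(sorted(hho_params))
```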

4.3. Results and Analysis

Features are selected to improve classification accuracy while lowering the number of features used. Each data set was processed with the two algorithms over a range of subset sizes, applying the proposed techniques with 1 to 30 genes per cancer data set. For the experimental evaluation, both HHO-KNN and HHO-SVM were applied to binary and multiclass high-dimensional microarray cancer data sets. Two metrics were used in our comparison: classification accuracy and the number of genes selected for cancer classification. The experimental results for all of the cancer data sets are reported below.
Table 3 shows the best, worst, and average classification accuracy of the HHO-KNN and HHO-SVM algorithms on the colon data set. Interestingly, the highest classification accuracy, 90.32%, was the same for both the k-NN and SVM classifiers. However, HHO-SVM reached this accuracy with only 10 selected genes, compared with 16 genes for HHO-KNN.
Looking at Table 4 for the Leukemia2 data set, both HHO-KNN and HHO-SVM achieve their best results with the same number of genes (11); at this subset size, the SVM classifier is more accurate, at 97.22%.
The results of applying the HHO-SVM and HHO-KNN algorithms to the leukemia3 data set are shown in Table 5. The best classification accuracy for both classifiers was achieved with 25 selected genes: 90.28% for k-NN and 84.72% for SVM.
The best, average, and worst accuracy of the HHO-SVM and HHO-KNN algorithms on the Lung data set is presented in Table 6. Both the k-NN and SVM classifiers reached the highest accuracy of 100% when 2 or 10 genes were selected.
Table 7 shows the best, worst, and average classification accuracy of the HHO-KNN and HHO-SVM algorithms on the Lymphoma data set. Both classifiers achieved 100% accuracy in most cases, but k-NN needed fewer selected genes to reach 100%: 2 genes versus 3 for SVM.
Table 8 compares the best, average, and worst accuracy of the HHO-SVM and HHO-KNN algorithms on the SRBCT data set. The highest accuracy was obtained when 29 genes were selected: 92.77% for SVM and 91.57% for k-NN.

4.4. Comparative Evaluations

Comparing and evaluating the performance of HHO-SVM and HHO-KNN against other bio-inspired metaheuristic gene selection algorithms was an important part of our evaluation. Table 9 shows how our findings compare in terms of accuracy and the number of genes selected.
As can be seen in Table 9, for the lung and lymphoma data sets, HHO-KNN outperformed the other bio-inspired gene selection algorithms, reaching 100% classification accuracy with fewer selected genes than the other methods. For the lung data set, HHO-SVM likewise outperformed the other bio-inspired gene selection algorithms. In addition, as can be seen from the table, HHO-KNN and HHO-SVM performed better than their competitor BQPSO on the colon data set.

5. Conclusions

Our study proposes two new feature selection techniques using Harris Hawks Optimization (HHO) combined with support vector machines (SVM) and the k-nearest neighbors (k-NN) algorithm for high-dimensional cancer gene selection and classification. The objective of this study was to devise a new algorithm for solving gene selection problems based on bio-inspired principles. Using HHO-SVM and HHO-KNN on six binary and multiclass high-dimensional cancer microarray data sets, we have shown that our two algorithms outperform several existing algorithms in finding useful informative genes, in terms of both classification accuracy and the number of chosen genes.
The experimental findings across the gene expression data sets show that 100% accuracy was achieved only on the lung and lymphoma data sets. Additionally, HHO-KNN achieved greater than 90% best-case accuracy on every data set. Finally, HHO-KNN outperformed HHO-SVM on the Leukemia3 data set, while HHO-SVM matched or exceeded HHO-KNN in accuracy on the remaining data sets. Having demonstrated the considerable promise of HHO used on its own, for future work we recommend combining HHO with another wrapper bio-inspired feature selection methodology to produce a hybrid method that enhances accuracy while selecting fewer genes.

Author Contributions

Conceptualization, H.A. (Halah AlMazrua); data curation, H.A. (Halah AlMazrua); formal analysis, H.A. (Halah AlMazrua); investigation, H.A. (Halah AlMazrua); methodology, H.A. (Halah AlMazrua); supervision, H.A. (Hala Alshamlan); validation, H.A. (Halah AlMazrua); writing—original draft, H.A. (Halah AlMazrua); writing—review and editing, H.A. (Halah AlMazrua) and H.A. (Hala Alshamlan). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization. Cancer. Available online: https://www.who.int/news-room/fact-sheets/detail/cancer (accessed on 3 February 2022).
  2. Doreswamy, H.; Salma, U. A binary bat inspired algorithm for the classification of breast cancer data. Int. J. Soft Comput. Artif. Intell. Appl. IJSCAI 2016, 5, 1–21.
  3. Selvaraj, S.; Natarajan, J. Microarray data analysis and mining tools. Bioinformation 2011, 6, 95.
  4. Han, J.; Pei, J.; Kamber, M. Data Mining: Concepts and Techniques; Elsevier: Amsterdam, The Netherlands, 2011.
  5. Saeys, Y.; Inza, I.; Larranaga, P. A review of feature selection techniques in bioinformatics. Bioinformatics 2007, 23, 2507–2517.
  6. Lee, I.H.; Lushington, G.H.; Visvanathan, M. A filter-based feature selection approach for identifying potential biomarkers for lung cancer. J. Clin. Bioinform. 2011, 1, 11.
  7. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
  8. Golub, T.; Slonim, D.; Tamayo, P.; Huard, C.; Gaasenbeek, M.; Mesirov, J.; Coller, H.; Loh, M.; Downing, J.; Caligiuri, M.; et al. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 1999, 286, 531–537.
  9. Beer, D.; Kardia, S.; Huang, C.; Giordano, T.; Levin, A.; Misek, D.; Lin, L.; Chen, G.; Gharib, T.; Thomas, D.; et al. Gene-expression profiles predict survival of patients with lung adenocarcinoma. Nat. Med. 2002, 8, 816–824.
  10. Armstrong, S.; Staunton, J.; Silverman, L.; Pieters, R.; den Boer, M.; Minden, M.; Sallan, S.; Lander, E.; Golub, T.; Korsmeyer, S. MLL translocations specify a distinct gene expression profile that distinguishes a unique leukemia. Nat. Genet. 2002, 30, 41–47.
  11. Khan, J.; Wei, J.; Ringnér, M.; Saal, L.; Ladanyi, M.; Westermann, F.; Berthold, F.; Schwab, M.; Antonescu, C.; Peterson, C.; et al. Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat. Med. 2001, 7, 673–679.
  12. Vijay, S.; Ganeshkumar, P. Fuzzy expert system based on a novel hybrid stem cell (HSC) algorithm for classification of micro array data. J. Med. Syst. 2018, 42, 61.
  13. Almugren, N.; Alshamlan, H. FF-SVM: New firefly-based gene selection algorithm for microarray cancer classification. In Proceedings of the 2019 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Siena, Italy, 9–11 July 2019; pp. 1–6.
  14. Alshamlan, H.; Badr, G.; Alohali, Y. Genetic bee colony (GBC) algorithm: A new gene selection method for microarray cancer classification. Comput. Biol. Chem. 2015, 56, 49–60.
  15. Dabba, A.; Tari, A.; Meftali, S.; Mokhtari, R. Gene selection and classification of microarray data method based on mutual information and moth flame algorithm. Expert Syst. Appl. 2021, 166, 114012.
  16. Dabba, A.; Tari, A.; Meftali, S. Hybridization of moth flame optimization algorithm and quantum computing for gene selection in microarray data. J. Ambient Intell. Humaniz. Comput. 2021, 12, 2731–2750.
  17. Xi, M.; Sun, J.; Liu, L.; Fan, F.; Wu, X. Cancer feature selection and classification using a binary quantum-behaved particle swarm optimization and support vector machine. Comput. Math. Methods Med. 2016, 2016, 3572705.
  18. Hameed, S.; Muhammad, F.; Hassan, R.; Saeed, F. Gene selection and classification in microarray datasets using a hybrid approach of PCC-BPSO/GA with multi classifiers. J. Comput. Sci. 2018, 14, 868–880.
Figure 1. HHO-SVM and HHO-KNN flowchart.
Table 1. Description of microarray data sets.
| Data Set | No. Total Genes | No. Samples | No. Classes |
|---|---|---|---|
| Colon Tumor [8] | 2000 | 62 | 2 |
| Lung Cancer [9] | 7129 | 96 | 2 |
| Leukemia2 [8] | 7129 | 72 | 3 |
| Leukemia3 [10] | 7129 | 72 | 2 |
| SRBCT [11] | 2308 | 83 | 4 |
| Lymphoma [11] | 4026 | 66 | 3 |
Table 2. Parameter settings for HHO-SVM and HHO-KNN.
| Parameter | Value |
|---|---|
| Dimension | No. genes in data set |
| No. iterations (Max_iter) | 100 |
| Lower bound (LB) | 0 |
| Upper bound (UB) | 1 |
| No. Harris’s hawks (SearchAgents_no) | 10 |
| No. runs (m) | 30 |
| k | 7 |
Table 3. Colon data set results.
| Algorithm | No. Genes | Best | Average | Worst |
|---|---|---|---|---|
| HHO-KNN | 20 | 82.26% | 75.92% | 64.52% |
| HHO-KNN | 16 | 90.32% | 74.42% | 53.23% |
| HHO-KNN | 10 | 88.71% | 74.65% | 61.29% |
| HHO-KNN | 5 | 83.87% | 68.12% | 53.23% |
| HHO-KNN | 2 | 79.03% | 64.01% | 48.39% |
| HHO-SVM | 20 | 85.48% | 76.21% | 62.90% |
| HHO-SVM | 16 | 87.10% | 74.48% | 56.45% |
| HHO-SVM | 10 | 90.32% | 73.94% | 51.61% |
| HHO-SVM | 5 | 83.87% | 69.02% | 56.45% |
| HHO-SVM | 2 | 74.19% | 64.16% | 51.61% |
Table 4. Leukemia2 data set results.
| Algorithm | No. Genes | Best | Average | Worst |
|---|---|---|---|---|
| HHO-KNN | 16 | 66.67% | 61.25% | 54.17% |
| HHO-KNN | 11 | 94.44% | 64.29% | 52.78% |
| HHO-KNN | 6 | 73.61% | 64.12% | 55.56% |
| HHO-KNN | 2 | 72.22% | 61.62% | 48.61% |
| HHO-SVM | 16 | 68.06% | 64.95% | 62.50% |
| HHO-SVM | 11 | 97.22% | 66.17% | 58.33% |
| HHO-SVM | 6 | 72.22% | 65.09% | 58.33% |
| HHO-SVM | 2 | 72.22% | 65.32% | 59.72% |
Table 5. Leukemia3 data set results.
| Algorithm | No. Genes | Best | Average | Worst |
|---|---|---|---|---|
| HHO-KNN | 30 | 69.44% | 59.07% | 50.00% |
| HHO-KNN | 25 | 90.28% | 58.52% | 45.83% |
| HHO-KNN | 20 | 63.89% | 55.42% | 44.44% |
| HHO-KNN | 15 | 59.72% | 52.69% | 43.06% |
| HHO-KNN | 5 | 61.11% | 52.82% | 43.06% |
| HHO-SVM | 30 | 69.44% | 55.97% | 37.50% |
| HHO-SVM | 25 | 84.72% | 58.19% | 50.00% |
| HHO-SVM | 20 | 65.28% | 53.43% | 38.89% |
| HHO-SVM | 15 | 63.89% | 53.33% | 38.89% |
| HHO-SVM | 5 | 66.67% | 55.00% | 40.28% |
Table 6. Lung data set results.
| Algorithm | No. Genes | Best | Average | Worst |
|---|---|---|---|---|
| HHO-KNN | 19 | 98.96% | 92.38% | 83.33% |
| HHO-KNN | 10 | 100.00% | 93.70% | 86.46% |
| HHO-KNN | 2 | 100.00% | 91.57% | 84.38% |
| HHO-KNN | 1 | 97.92% | 89.65% | 85.42% |
| HHO-SVM | 19 | 95.83% | 90.25% | 89.58% |
| HHO-SVM | 10 | 100.00% | 90.90% | 87.50% |
| HHO-SVM | 2 | 100.00% | 92.09% | 86.46% |
| HHO-SVM | 1 | 97.92% | 89.76% | 86.46% |
Table 7. Lymphoma data set results.
| Algorithm | No. Genes | Best | Average | Worst |
|---|---|---|---|---|
| HHO-KNN | 12 | 98.48% | 92.31% | 80.30% |
| HHO-KNN | 10 | 100.00% | 92.22% | 77.27% |
| HHO-KNN | 3 | 100.00% | 77.23% | 60.61% |
| HHO-KNN | 2 | 100.00% | 73.63% | 60.61% |
| HHO-KNN | 1 | 75.76% | 66.36% | 56.06% |
| HHO-SVM | 12 | 100.00% | 93.43% | 80.30% |
| HHO-SVM | 10 | 100.00% | 92.66% | 72.73% |
| HHO-SVM | 3 | 100.00% | 78.35% | 65.15% |
| HHO-SVM | 2 | 96.97% | 74.01% | 65.15% |
| HHO-SVM | 1 | 81.82% | 70.61% | 66.67% |
Table 8. SRBCT data set results.
| Algorithm | No. Genes | Best | Average | Worst |
|---|---|---|---|---|
| HHO-KNN | 30 | 83.13% | 60.64% | 39.76% |
| HHO-KNN | 29 | 91.57% | 56.43% | 37.35% |
| HHO-KNN | 20 | 83.13% | 53.41% | 39.76% |
| HHO-KNN | 10 | 75.90% | 47.83% | 28.92% |
| HHO-KNN | 5 | 59.04% | 42.41% | 30.12% |
| HHO-SVM | 30 | 90.36% | 59.92% | 33.73% |
| HHO-SVM | 29 | 92.77% | 57.35% | 40.96% |
| HHO-SVM | 20 | 83.13% | 53.21% | 34.94% |
| HHO-SVM | 10 | 78.31% | 45.26% | 21.69% |
| HHO-SVM | 5 | 69.88% | 39.92% | 24.10% |
Table 9. Comparison between the proposed selection methods and previous methods in terms of the number of selected genes and accuracy.
| Algorithm | Colon | Lung | Leukemia2 | Leukemia3 | Lymphoma | SRBCT |
|---|---|---|---|---|---|---|
| HHO-KNN | 90.32% (16) | 100% (2) | 94.44% (11) | 90.28% (25) | 100% (2) | 91.57% (29) |
| HHO-SVM | 90.32% (10) | 100% (2) | 97.22% (11) | 84.72% (25) | 100% (3) | 92.77% (29) |
| HS-GA [12] | 95.9% (20) | - | 97.5% (20) | - | - | - |
| FF-SVM [13] | 92.7% (22) | 100% (2) | 99.5% (11) | - | 92.6% (19) | 97.5% (14) |
| GBC [14] | 98.38% (20) | - | 100% (5) | - | - | - |
| MIM-mMFA [15] | 100% (20) | 100% (20) | 100% (6) | 100% (15) | 100% (4) | 100% (23) |
| QMFOA [16] | 100% (27) | 100% (20) | 100% (32) | 100% (30) | - | 100% (23) |
| BQPSO [17] | 83.59% (46) | 100% (46) | 93.1% (48) | - | 100% (49) | - |
| PCC-GA [18] | 91.94% (29) | 97.54% (42) | 100% (35) | - | 100% (39) | 100% (20) |
