Transforming Cancer Classification: The Role of Advanced Gene Selection
Abstract
1. Introduction
1.1. Paper Organization
1.2. Motivation for This Study
2. Related Work
3. Materials and Methods
3.1. Proposed Cancer Classification Approach
3.2. Selecting Informative Genes Using Mutual Information
- MI Calculation: For each gene in the dataset, calculate the MI between the gene and the target class (cancerous or non-cancerous).
- Gene Ranking: Rank genes in descending order based on their MI values. Higher MI values indicate stronger relevance to the target class.
- Feature Selection: Select the top genes based on their MI ranking. The number of selected genes is determined by the desired level of dimensionality reduction.
- Redundancy Removal: Remove genes that are highly correlated with others to avoid redundancy in the selected gene subset.
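The four steps above can be sketched as a short filter routine. The following is a minimal illustration using scikit-learn's `mutual_info_classif`; the function name, the correlation threshold, and the toy data are illustrative choices, not taken from the paper's implementation.

```python
# Sketch of the MI-based filter stage: rank genes by MI, keep the top k,
# and drop genes highly correlated with an already-kept gene.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mi_select_genes(X, y, k=100, corr_threshold=0.9, seed=42):
    mi = mutual_info_classif(X, y, random_state=seed)
    ranked = np.argsort(mi)[::-1]  # descending MI ranking
    kept = []
    for g in ranked:
        # redundancy removal: skip genes correlated with any kept gene
        if all(abs(np.corrcoef(X[:, g], X[:, h])[0, 1]) < corr_threshold
               for h in kept):
            kept.append(g)
        if len(kept) == k:
            break
    return kept

# Toy usage on synthetic data (60 samples, 200 genes)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)
genes = mi_select_genes(X, y, k=10)
print(len(genes))  # 10
```

In practice, `k` corresponds to the desired level of dimensionality reduction (the paper's filter stage keeps 100 genes), and the correlation threshold controls how aggressively redundant genes are discarded.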
3.3. Particle Swarm Optimization (PSO)-Based Feature Selection
3.4. Classification Based on Support Vector Machine
Algorithm 1: Pseudocode for the proposed method

function MI_PSA_Gene_Selection(Data, Labels):
    // Stage 1: Mutual Information (MI) based gene selection
    selected_genes = MI_Selection(Data, Labels)
    // Stage 2: Particle Swarm Optimization (PSO) refinement
    best_gene_set = PSO_Refinement(Data[selected_genes], Labels)
    return best_gene_set

function MI_Selection(Data, Labels):
    // Calculate mutual information between each gene and the class labels
    mutual_information_scores = calculate_mutual_information(Data, Labels)
    // Select top genes based on mutual information scores
    selected_genes = select_top_genes(mutual_information_scores)
    return selected_genes

function PSO_Refinement(Data, Labels):
    // Initialize particle swarm and global best trackers
    particles = initialize_particles()
    global_best_position = null
    global_best_fitness = -infinity
    // PSO optimization loop
    while not convergence_criteria_met():
        for particle in particles:
            // Evaluate fitness of the particle's gene selection
            fitness = evaluate_fitness(particle.position, Data, Labels)
            // Update the particle's best position and the global best position
            if fitness > particle.best_fitness:
                particle.best_position = particle.position
                particle.best_fitness = fitness
            if fitness > global_best_fitness:
                global_best_position = particle.position
                global_best_fitness = fitness
        // Update particle positions using velocity and the global best position
        update_particle_positions(particles, global_best_position)
    return global_best_position
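The PSO refinement stage can be made concrete with a small runnable sketch. Below is a minimal binary PSO (sigmoid-transformed velocities, bit masks as positions) that scores each particle by SVM cross-validation accuracy. The swarm size, inertia weight, and acceleration coefficients are illustrative defaults, not the paper's settings.

```python
# Minimal binary-PSO gene refinement: particles are 0/1 masks over genes,
# fitness is mean 3-fold SVM accuracy on the masked feature subset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_refine(X, y, n_particles=10, n_iters=15, w=0.7, c1=1.5, c2=1.5, seed=42):
    rng = np.random.default_rng(seed)
    n_genes = X.shape[1]
    pos = rng.integers(0, 2, size=(n_particles, n_genes))   # binary masks
    vel = rng.normal(scale=0.1, size=(n_particles, n_genes))

    def fitness(mask):
        if mask.sum() == 0:          # empty subset gets zero fitness
            return 0.0
        return cross_val_score(SVC(kernel="linear"),
                               X[:, mask.astype(bool)], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    g = pbest_fit.argmax()
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        vel = np.clip(vel, -6.0, 6.0)           # keep sigmoid well-behaved
        # sigmoid of velocity gives the probability a bit is set
        pos = (rng.random(pos.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        if fit.max() > gbest_fit:
            gbest, gbest_fit = pos[fit.argmax()].copy(), fit.max()
    return gbest, gbest_fit

# Toy usage: 40 samples, 20 genes, class determined by gene 0
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 20))
y = (X[:, 0] > 0).astype(int)
mask, acc = pso_refine(X, y, n_particles=6, n_iters=5)
```

A convergence criterion in practice might be a fixed iteration budget (as here) or stagnation of the global best fitness over several iterations.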
3.5. Experimental Setup and Data Structure
4. Results and Discussion
4.1. Confusion Matrix
4.2. Precision Recall Curve
4.3. Area Under the Curve
4.4. Box Plot
4.5. Discussion of Alternative Methods to MI, PSO, and SVM
4.6. Alternative Optimization Algorithms
- Genetic Algorithms (GAs): Genetic Algorithms (GAs) are popular evolutionary algorithms inspired by natural selection. GAs use crossover, mutation, and selection operations to evolve a population of candidate solutions over successive generations. The GA approach has been widely used for feature selection due to its ability to search large and complex spaces for optimal solutions. The potential advantage of GAs over PSO lies in their robustness to local optima, as they explore the solution space by combining different solutions, which can sometimes lead to better performance. However, GAs can be computationally expensive, especially when dealing with high-dimensional data, and require careful tuning of parameters such as population size and mutation rates.
- Ant Colony Optimization (ACO): Ant Colony Optimization (ACO) is a swarm-based optimization algorithm inspired by the foraging behavior of ants. ACO is effective for solving combinatorial optimization problems, including feature selection. Like PSO, ACO is capable of exploring a large solution space, but uses pheromone trails to guide the search for optimal solutions. ACO could potentially offer advantages in terms of discovering new solutions, but it is also computationally intensive. Furthermore, ACO’s performance can be sensitive to the choice of parameters, such as the pheromone decay rate and the number of ants.
- Other Optimization Algorithms: Beyond GAs and ACO, other optimization algorithms such as Simulated Annealing (SA) and Differential Evolution (DE) can be considered for feature selection. Each has its strengths, but PSO has been preferred in our approach due to its balance between convergence speed and solution quality in high-dimensional search spaces.
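For comparison with the PSO stage, the GA alternative described above can be sketched with generic textbook operators (tournament selection, one-point crossover, bit-flip mutation, elitism). The operators and parameters below are illustrative defaults, not tied to any cited implementation.

```python
# Hedged sketch of GA-based feature selection over binary gene masks.
import numpy as np

def ga_select(fitness_fn, n_genes, pop_size=20, n_gens=30,
              cx_rate=0.8, mut_rate=0.02, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_genes))
    for _ in range(n_gens):
        fit = np.array([fitness_fn(ind) for ind in pop])
        # tournament selection: the fitter of two random individuals survives
        parents = np.array([pop[max(rng.choice(pop_size, 2),
                                    key=lambda i: fit[i])]
                            for _ in range(pop_size)])
        # one-point crossover between consecutive parent pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < cx_rate:
                cut = rng.integers(1, n_genes)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # bit-flip mutation
        flips = rng.random(children.shape) < mut_rate
        children = np.where(flips, 1 - children, children)
        # elitism: carry the best individual of this generation forward
        children[0] = pop[fit.argmax()]
        pop = children
    fit = np.array([fitness_fn(ind) for ind in pop])
    return pop[fit.argmax()]

# Toy fitness: reward selecting the first 5 features, penalize subset size
best = ga_select(lambda m: m[:5].sum() - 0.1 * m.sum(), n_genes=30)
```

In a real pipeline the fitness function would be a classifier's cross-validated accuracy on the masked gene subset, exactly as in the PSO stage, which is what makes the two search strategies directly comparable.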
4.7. Alternative Classifiers
- Random Forest (RF): Random Forest is an ensemble learning method that constructs multiple decision trees and outputs the mode of their predictions. RF is widely regarded for its ability to handle high-dimensional data and deal with overfitting. It is computationally less expensive compared to SVM, especially when dealing with large datasets. The ability of RF to handle non-linear relationships and interactions between features makes it a suitable alternative to SVM. However, it might be less effective than SVM in cases where the decision boundaries are highly complex or when the dataset is very sparse.
- Deep Learning (DL): Deep learning models, particularly neural networks, have gained considerable attention in cancer classification due to their ability to learn hierarchical representations of data. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are powerful models that can learn complex patterns in large-scale datasets. Deep learning models could potentially offer superior performance in classification tasks, especially in cases where the relationships between features are highly non-linear. However, deep learning models tend to require much larger datasets for training and are computationally more intensive compared to traditional methods like SVM and RF. They also demand significant computational resources, making them less feasible for real-time applications or scenarios with limited data.
- Logistic Regression (LR): Logistic Regression is a simpler and less computationally demanding model compared to SVM and RF. It performs well for binary classification problems and can be a good baseline for evaluating more complex models. However, its performance tends to degrade when there are high levels of feature interaction or non-linear relationships between features, which is common in gene expression data.
- k-Nearest Neighbors (k-NN): The k-Nearest Neighbors algorithm is a non-parametric method that classifies new instances based on the majority vote of the k nearest data points. While it is simple to implement and computationally efficient for smaller datasets, k-NN can struggle with high-dimensional data due to the “curse of dimensionality,” where the distance between data points becomes less meaningful as the number of features increases.
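The classifiers discussed above can be benchmarked side by side with a few lines of scikit-learn. The synthetic data and scores below are only a template; results on real gene expression data will differ.

```python
# Cross-validated comparison of the alternative classifiers on a small
# synthetic two-class problem (classifiers get a default 5-fold stratified CV).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=50, n_informative=8,
                           random_state=42)
models = {
    "SVM (linear)": SVC(kernel="linear"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
```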
4.8. Impact of Alternatives on Performance
- Accuracy: Deep learning models and Random Forest may achieve higher accuracy in large-scale datasets with complex patterns, especially when a large volume of training data is available. However, for smaller datasets, SVM and PSO-based approaches tend to be more reliable.
- Computational Efficiency: The proposed MI-PSO-SVM method is computationally efficient compared to deep learning models, which require extensive computational resources for training. Random Forest also tends to be more efficient than SVM in handling larger datasets, but it might not always provide the same level of accuracy, particularly when dealing with sparse or imbalanced data.
- Interpretability: Methods like Logistic Regression, Random Forest, and SVM provide a level of interpretability that is crucial in clinical settings, where understanding the importance of specific features (e.g., genes) is essential. Deep learning models, on the other hand, are often regarded as “black-box” models, making them less interpretable.
4.9. Discussion
5. Conclusions
6. Future Directions
1. Integration with Deep Learning Models: Investigate the potential of integrating the MI-PSA gene selection method with deep learning architectures, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This integration could help capture complex relationships and patterns in gene expression data, leading to even higher classification accuracy and robustness.
2. Application to Other Cancer Types and Multi-Class Scenarios: Extend the application of the MI-PSA algorithm to other cancer types beyond breast cancer, including multi-class classification scenarios. This would demonstrate the generalizability and effectiveness of the method across different cancer datasets with varying complexities.
3. Incorporation of Clinical and Multi-Omics Data: Explore the integration of clinical data (e.g., patient demographics, clinical history) and multi-omics data (e.g., proteomics, metabolomics) with the MI-PSA approach. This holistic view could offer a more comprehensive understanding of cancer mechanisms and improve personalized treatment strategies.
4. Dynamic Adaptation of PSO Parameters: Develop adaptive mechanisms for dynamically adjusting PSO parameters, such as inertia weight and acceleration coefficients, based on dataset characteristics. This could optimize the performance of the PSO algorithm in diverse gene expression datasets, enhancing gene selection efficiency and classification outcomes.
5. Real-Time Gene Selection for Personalized Medicine: Investigate the feasibility of using the MI-PSA algorithm in a real-time clinical setting for personalized medicine. This could involve developing a user-friendly software tool that clinicians can use to quickly identify key genetic markers and recommend targeted therapies based on individual patient profiles.
6. Combination with Other Feature Selection Techniques: Explore the combination of MI-PSA with other feature selection methods, such as Recursive Feature Elimination (RFE) or Genetic Algorithms (GAs), to create hybrid models. This could further refine gene selection and lead to even better classification performance.
7. Handling Imbalanced Datasets: Develop strategies within the MI-PSA framework to effectively handle class imbalance in cancer datasets, such as incorporating synthetic data generation techniques like SMOTE (Synthetic Minority Over-sampling Technique) to improve the classification of minority classes.
8. Exploring Gene–Gene Interaction Networks: Extend the MI-PSA approach to account for gene–gene interaction networks by incorporating network-based feature selection techniques. This would help in understanding the synergistic effects of gene sets and their impact on cancer progression and classification.
9. Longitudinal and Prognostic Studies: Apply the MI-PSA algorithm to longitudinal cancer datasets to identify genes associated with disease progression and prognosis. This could contribute to the development of predictive models for patient outcomes and inform long-term treatment planning.
10. Benchmarking Against State-of-the-Art Methods: Conduct extensive benchmarking of the MI-PSA algorithm against other state-of-the-art gene selection and classification methods using a variety of cancer datasets. This would provide a comprehensive evaluation of its strengths and potential areas for improvement.
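Direction 7 mentions SMOTE; its core idea (interpolating between a minority sample and one of its minority-class nearest neighbours) can be sketched in a few lines. This is a self-contained illustration only; production work should use a maintained implementation such as imbalanced-learn's `SMOTE`.

```python
# Minimal sketch of the SMOTE interpolation step for minority oversampling.
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic samples from the minority-class matrix X_min."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # k nearest minority neighbours of sample i (excluding itself)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy usage: grow a minority class of 10 samples by 20 synthetic points
rng = np.random.default_rng(1)
X_min = rng.normal(size=(10, 4))
X_new = smote_oversample(X_min, n_new=20)
print(X_new.shape)  # (20, 4)
```

Because each synthetic point lies on the segment between two real minority samples, the oversampled cloud stays within the convex hull of the original minority class rather than duplicating points exactly.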
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Yaqoob, A.; Verma, N.K.; Aziz, R.M.; Shah, M.A. RNA-Seq analysis for breast cancer detection: A study on paired tissue samples using hybrid optimization and deep learning techniques. J. Cancer Res. Clin. Oncol. 2024, 150, 455. [Google Scholar] [CrossRef]
- Dashtban, M.; Balafar, M. Gene selection for microarray cancer classification using a new evolutionary method employing artificial intelligence concepts. Genomics 2017, 109, 91–107. [Google Scholar] [CrossRef]
- Ayesha, S.; Hanif, M.K.; Talib, R. Overview and comparative study of dimensionality reduction techniques for high dimensional data. Inf. Fusion 2020, 59, 44–58. [Google Scholar] [CrossRef]
- Yaqoob, A.; Verma, N.K.; Aziz, R.M.; Shah, M.A. Optimizing cancer classification: A hybrid RDO-XGBoost approach for feature selection and predictive insights. Cancer Immunol. Immunother. 2024, 73, 261. [Google Scholar] [CrossRef]
- Masud, M.; Sikder, N.; Al Nahid, A.; Bairagi, A.K.; Alzain, M.A. A machine learning approach to diagnosing lung and colon cancer using a deep learning-based classification framework. Sensors 2021, 21, 748. [Google Scholar] [CrossRef]
- Motieghader, H.; Najafi, A.; Sadeghi, B.; Masoudi-Nejad, A. A hybrid gene selection algorithm for microarray cancer classification using genetic algorithm and learning automata. Inform. Med. Unlocked 2017, 9, 246–254. [Google Scholar] [CrossRef]
- Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28. [Google Scholar] [CrossRef]
- Kong, W.; Vanderburg, C.R.; Gunshin, H.; Rogers, J.T.; Huang, X. A review of independent component analysis application to microarray gene expression data. Biotechniques 2008, 45, 501–520. [Google Scholar] [CrossRef] [PubMed]
- Sowan, B.; Eshtay, M.; Dahal, K.; Qattous, H.; Zhang, L. Hybrid PSO feature selection-based association classification approach for breast cancer detection. Neural Comput. Appl. 2023, 35, 5291–5317. [Google Scholar] [CrossRef]
- Nanglia, P.; Kumar, S.; Mahajan, A.N.; Singh, P.; Rathee, D. A hybrid algorithm for lung cancer classification using SVM and Neural Networks. ICT Express 2021, 7, 335–341. [Google Scholar] [CrossRef]
- Rani, M.J.; Devaraj, D. Two-Stage Hybrid Gene Selection Using Mutual Information and Genetic Algorithm for Cancer Data Classification. J. Med. Syst. 2019, 43, 235. [Google Scholar] [CrossRef]
- Yaqoob, A.; Verma, N.K.; Aziz, R.M. Improving breast cancer classification with mRMR + SS0 + WSVM: A hybrid approach. Multimed. Tools Appl. 2024, 16, 1–12. [Google Scholar] [CrossRef]
- Yaqoob, A. Combining the mRMR technique with the Northern Goshawk Algorithm (NGHA) to choose genes for cancer classification. Int. J. Inf. Technol. 2024. [Google Scholar] [CrossRef]
- Aljuaid, H.; Alturki, N.; Alsubaie, N.; Cavallaro, L.; Liotta, A. Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning. Comput. Methods Programs Biomed. 2022, 223, 106951. [Google Scholar] [CrossRef]
- Abdar, M.; Samami, M.; Mahmoodabad, S.D.; Doan, T.; Mazoure, B.; Hashemifesharaki, R.; Liu, L.; Khosravi, A.; Acharya, U.R.; Makarenkov, V.; et al. Uncertainty quantification in skin cancer classification using three-way decision-based Bayesian deep learning. Comput. Biol. Med. 2021, 135, 104418. [Google Scholar] [CrossRef] [PubMed]
- Ragab, M.; Albukhari, A.; Alyami, J.; Mansour, R.F. Ensemble Deep-Learning-Enabled Clinical Decision Support Ultrasound Images. Biology 2022, 11, 439. [Google Scholar] [CrossRef]
- Stoean, R. Analysis on the potential of an EA–surrogate modelling tandem for deep learning parametrization: An example for cancer classification from medical images. Neural Comput. Appl. 2020, 32, 313–322. [Google Scholar] [CrossRef]
- Sharma, S.; Mehra, R. Conventional Machine Learning and Deep Learning Approach for Multi-Classification of Breast Cancer Histopathology Images—A Comparative Insight. J. Digit. Imaging 2020, 33, 632–654. [Google Scholar] [CrossRef] [PubMed]
- Adla, D.; Reddy, G.V.R.; Nayak, P.; Karuna, G. Deep learning-based computer aided diagnosis model for skin cancer detection and classification. Distrib. Parallel Databases 2022, 40, 717–736. [Google Scholar] [CrossRef]
- Mijwil, M.M. Skin cancer disease images classification using deep learning solutions. Multimed. Tools Appl. 2021, 80, 26255–26271. [Google Scholar] [CrossRef]
- Sharma, N.; Sharma, K.P.; Mangla, M.; Rani, R. Breast cancer classification using snapshot ensemble deep learning model and t-distributed stochastic neighbor embedding. Multimed. Tools Appl. 2023, 82, 4011–4029. [Google Scholar] [CrossRef]
- Cui, L.; Li, H.; Hui, W.; Chen, S.; Yang, L.; Kang, Y.; Bo, Q.; Feng, J. A deep learning-based framework for lung cancer survival analysis with biomarker interpretation. BMC Bioinform. 2020, 21, 112. [Google Scholar] [CrossRef]
- De Angeli, K.; Gao, S.; Danciu, I.; Durbin, E.B.; Wu, X.-C.; Stroup, A.; Doherty, J.; Schwartz, S.; Wiggins, C.; Damesyn, M.; et al. Class imbalance in out-of-distribution datasets: Improving the robustness of the TextCNN for the classification of rare cancer types. J. Biomed. Inform. 2022, 125, 103957. [Google Scholar] [CrossRef] [PubMed]
- Kourou, K.; Exarchos, T.P.; Exarchos, K.P.; Karamouzis, M.V.; Fotiadis, D.I. Machine learning applications in cancer prognosis and prediction. Comput. Struct. Biotechnol. J. 2015, 13, 8–17. [Google Scholar] [CrossRef] [PubMed]
- Albashish, D.; Hammouri, A.I.; Braik, M.; Atwan, J.; Sahran, S. Binary biogeography-based optimization based SVM-RFE for feature selection. Appl. Soft Comput. 2021, 101, 107026. [Google Scholar] [CrossRef]
- Lu, H.; Chen, J.; Yan, K.; Jin, Q.; Xue, Y.; Gao, Z. A hybrid feature selection algorithm for gene expression data classification. Neurocomputing 2017, 256, 56–62. [Google Scholar] [CrossRef]
- Shahbeig, S.; Helfroush, M.S.; Rahideh, A. A fuzzy multi-objective hybrid TLBO–PSO approach to select the associated genes with breast cancer. Signal Process. 2017, 131, 58–65. [Google Scholar] [CrossRef]
- Sheikh-Zadeh, A.; Scott, M.A.; Enayaty-Ahangar, F. The role of prescriptive data and non-linear dimension-reduction methods in spare part classification. Comput. Ind. Eng. 2022, 175, 108912. [Google Scholar] [CrossRef]
- Sen, P.C.; Hajra, M.; Ghosh, M. Emerging Technology in Modelling and Graphics; Springer: Berlin/Heidelberg, Germany, 2020; Volume 937, Available online: http://link.springer.com/10.1007/978-981-13-7403-6 (accessed on 19 September 2024).
- Singh, N.; Singh, S.B.; Houssein, E.H. Hybridizing Salp Swarm Algorithm with Particle Swarm Optimization Algorithm for Recent Optimization Functions; Springer: Berlin/Heidelberg, Germany, 2022; Volume 15. [Google Scholar] [CrossRef]
- Gu, S.; Cheng, R.; Jin, Y. Feature selection for high-dimensional classification using a competitive swarm optimizer. Soft Comput. 2018, 22, 811–822. [Google Scholar] [CrossRef]
- Masoudi-Sobhanzadeh, Y.; Motieghader, H.; Omidi, Y.; Masoudi-Nejad, A. A machine learning method based on the genetic and world competitive contests algorithms for selecting genes or features in biological applications. Sci. Rep. 2021, 11, 3349. [Google Scholar] [CrossRef]
- Aziz, R.; Verma, C.K.; Srivastava, N. A fuzzy based feature selection from independent component subspace for machine learning classification of microarray data. Genom. Data 2016, 8, 4–15. [Google Scholar] [CrossRef]
- Basavaraju, A.; Du, J.; Zhou, F.; Ji, J. A Machine Learning Approach to Road Surface Anomaly Assessment Using Smartphone Sensors. IEEE Sens. J. 2020, 20, 2635–2647. [Google Scholar] [CrossRef]
- Gammermann, A. Support vector machine learning algorithm and transduction. Comput. Stat. 2000, 15, 31–39. [Google Scholar] [CrossRef]
- Van ’t Veer, L.J.; Dai, H.; van de Vijver, M.J.; He, Y.D.; Hart, A.A.M.; Mao, M.; Peterse, H.L.; van der Kooy, K.; Marton, M.J.; Witteveen, A.T.; et al. Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415, 530–536. [Google Scholar] [CrossRef]
- Alshamlan, H.M.; Badr, G.H.; Alohali, Y.A. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification. Comput. Biol. Chem. 2015, 56, 49–60. [Google Scholar] [CrossRef] [PubMed]
- Yildiz, A.R. Cuckoo search algorithm for the selection of optimal machining parameters in milling operations. Int. J. Adv. Manuf. Technol. 2013, 64, 55–61. [Google Scholar] [CrossRef]
- Mohamed, N.S.; Zainudin, S.; Othman, Z.A. Metaheuristic approach for an enhanced mRMR filter method for classification using drug response microarray data. Expert Syst. Appl. 2017, 90, 224–231. [Google Scholar] [CrossRef]
- Aziz, R.; Verma, C.K.; Srivastava, N. Artificial Neural Network Classification of High Dimensional Data with Novel Optimization Approach of Dimension Reduction. Ann. Data Sci. 2018, 5, 615–635. [Google Scholar] [CrossRef]
- Alshamlan, H.; Badr, G.; Alohali, Y. MRMR-ABC: A hybrid gene selection algorithm for cancer classification using microarray gene expression profiling. BioMed Res. Int. 2015, 2015, 604910. [Google Scholar] [CrossRef]
- Towfek, S.K.; Khodadadi, N.; Abualigah, L.; Rizk, F. AI in Higher Education: Insights from Student Surveys and Predictive Analytics using PSO-Guided WOA and Linear Regression. J. Artif. Intell. Eng. Pract. 2024, 1, 1–17. [Google Scholar] [CrossRef]
- Aziz, R.M.; Baluch, M.F.; Patel, S.; Ganie, A.H. LGBM: A machine learning approach for Ethereum fraud detection. Int. J. Inf. Technol. 2022, 14, 3321–3331. [Google Scholar] [CrossRef]
- Algamal, Z.Y.; Lee, M.H. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification. Comput. Biol. Med. 2015, 67, 136–145. [Google Scholar] [CrossRef] [PubMed]
Stage | Parameter | Value
---|---|---
Filter | Approach to selecting genes | Mutual information
Filter | Chosen genes (k) | 100
Wrapper | SVM kernel | Linear
Wrapper | Test set size | 20%
Wrapper | Random state | 42 (seed for reproducibility)
Wrapper | SVM regularization parameter C | Default (1.0)
Wrapper | Standardization method | StandardScaler
Wrapper | Training set size | 80%
Data | Description | Samples | Classes | Genes
---|---|---|---|---
Breast Cancer [36] | Breast cancer originates in the cells of breast tissue and is one of the most prevalent cancer types affecting women. | 97 | 2 | 24,481
Selected Genes | Proposed Method (Best) | Proposed Method (Average) | Proposed Method (Worst) | Mutual Information (Best) | Mutual Information (Average) | Mutual Information (Worst) | SVM (Best) | SVM (Average) | SVM (Worst)
---|---|---|---|---|---|---|---|---|---
13 | 89.29 | 81.93 | 75.91 | 83.55 | 75.22 | 69.37 | 81.91 | 75.96 | 65.67
17 | 96.63 | 87.86 | 77.89 | 88.35 | 78.65 | 68.76 | 87.86 | 77.89 | 67.76
19 | 99.01 | 91.26 | 82.93 | 93.44 | 82.54 | 71.44 | 91.26 | 82.93 | 70.44
20 | 96.87 | 89.66 | 81.53 | 92.30 | 91.04 | 81.44 | 89.66 | 81.53 | 80.44
22 | 96.54 | 88.08 | 80.98 | 97.34 | 88.65 | 79.77 | 88.08 | 80.98 | 78.77
23 | 94.68 | 84.94 | 77.97 | 95.45 | 85.23 | 77.81 | 84.94 | 77.97 | 73.81
27 | 93.16 | 83.78 | 76.23 | 93.57 | 83.08 | 75.39 | 83.78 | 76.23 | 71.39
31 | 92.84 | 82.58 | 74.35 | 91.29 | 81.35 | 71.21 | 82.58 | 74.35 | 70.21
35 | 90.77 | 80.38 | 73.36 | 88.76 | 79.22 | 69.48 | 80.38 | 73.36 | 68.48
39 | 88.96 | 78.97 | 71.17 | 87.06 | 77.02 | 67.78 | 78.97 | 71.17 | 65.78
43 | 87.15 | 76.99 | 69.31 | 84.35 | 74.37 | 64.12 | 76.99 | 69.31 | 63.2
47 | 86.29 | 75.12 | 67.76 | 83.65 | 73.58 | 65.31 | 75.12 | 67.76 | 62.31
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yaqoob, A.; Mir, M.A.; Jagannadha Rao, G.V.V.; Tejani, G.G. Transforming Cancer Classification: The Role of Advanced Gene Selection. Diagnostics 2024, 14, 2632. https://doi.org/10.3390/diagnostics14232632