Graph Based Feature Selection for Reduction of Dimensionality in Next-Generation RNA Sequencing Datasets
Abstract
1. Introduction
2. Related Work
2.1. Feature Selection and Classification
2.2. Discretization and Association Rule Mining
2.3. Theoretical Description of the Classifiers
2.3.1. Naïve Bayes Algorithm
2.3.2. Sequential Minimal Optimization (SMO)
2.3.3. Multilayer Perceptron
2.4. Measures for Performance Evaluation
Classification Accuracy, Time, Kappa Statistic, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE)
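This extract names the measures without reproducing the paper's equations; for reference, their standard definitions (assumed to match the forms used in Section 2.4) are:

$$
\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}
$$

where $p_o$ is the observed agreement between predicted and true labels, $p_e$ is the agreement expected by chance, and $\hat{y}_i$ is the prediction for instance $i$.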
2.5. Generating a Graph
3. Materials and Methods
3.1. Data Source and Data Type
3.2. Data Preprocessing
3.3. Feature Selection
3.3.1. Principal Component Analysis (PCA)
3.3.2. Recursive Feature Elimination
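The paper's exact RFE configuration is not reproduced in this extract. As a minimal sketch of the technique, the scikit-learn `RFE` API with a linear SVM ranking the features; the estimator choice, the toy data, and the target feature count are illustrative assumptions:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(86, 500))     # toy stand-in: 86 samples, 500 genes
y = rng.integers(0, 2, size=86)    # binary tumour/normal labels

# Rank features by the linear SVM's coefficients and recursively drop the
# lowest-ranked 10% per iteration until 50 features remain.
selector = RFE(SVC(kernel="linear"), n_features_to_select=50, step=0.1)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)             # (86, 50)
```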
3.3.3. Proposed Graph-Based Approach
- i. Normalization: calculate the scaling factor (Equation (9)).
- ii. Network construction: calculate the Pearson correlation coefficient (PCC) of the normalized features (Equation (10)).
- iii. Determine the threshold (Equation (11)).
- iv. Construct a topological overlap matrix (TOM) based on the adjacency matrix (Equation (12)).
- v. Filter the resulting network using maximal cliques (see the sketch after this list):
  - Apply the Bron–Kerbosch algorithm to find all cliques within the filtered graph, where a clique is a complete subgraph.
  - Determine the set of maximal cliques, where a maximal clique is a complete subgraph that is not a subset of another complete subgraph.
  - Whenever there is a tie, a rating function is applied and the maximal clique with the highest score is selected.
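A minimal sketch of steps ii–v in Python, assuming a samples-by-features matrix that has already been normalized (step i). The two thresholds, the WGCNA-style TOM formula, and the mean-|PCC| rating function are stand-in assumptions, since Equations (11)–(12) and the paper's rating function are not reproduced in this extract; networkx's `find_cliques` does implement the Bron–Kerbosch enumeration used in step v.

```python
import numpy as np
import networkx as nx

def graph_based_selection(X, pcc_threshold=0.8, tom_threshold=0.5):
    """Sketch of steps ii-v; X is a normalized samples-by-features matrix."""
    # Step ii: Pearson correlation between features (Equation (10)).
    pcc = np.corrcoef(X, rowvar=False)
    # Step iii: adjacency from thresholded |PCC|; the cut-off is a
    # stand-in for Equation (11).
    adj = (np.abs(pcc) >= pcc_threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    # Step iv: topological overlap matrix; the standard WGCNA form is an
    # assumption, since Equation (12) is not reproduced here.
    shared = adj @ adj                       # counts of shared neighbours
    degree = adj.sum(axis=1)
    tom = (shared + adj) / (np.minimum.outer(degree, degree) + 1.0 - adj)
    np.fill_diagonal(tom, 0.0)               # no self-loops in the graph
    # Step v: filtered graph; find_cliques runs Bron-Kerbosch.
    G = nx.from_numpy_array((tom >= tom_threshold).astype(int))
    cliques = [c for c in nx.find_cliques(G) if len(c) > 1]
    if not cliques:
        return []
    # Tie-breaking rating function (assumed): mean pairwise |PCC|.
    def rating(clique):
        sub = np.abs(pcc[np.ix_(clique, clique)])
        n = len(clique)
        return (sub.sum() - n) / (n * (n - 1))
    return sorted(max(cliques, key=rating))  # indices of selected features
```

How the chosen cliques are aggregated into the final feature sets (the 80 and 134 genes reported in Section 4.2) is not shown in this extract; the sketch returns only the single best-scoring clique.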
3.4. Classification
3.5. Discretization
Algorithm 1: Equal-Width Interval Discretization Steps
Input: continuous values of attribute A, with k being the number of bins
Output: discretized values of A
Step 1: Sort the values of A in ascending order
Step 2: Calculate the interval width using Equation (19)
Step 3: Divide the data into k bins (k = 3 in this study)
Step 4: Place each value into the bin whose boundaries enclose it
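A minimal Python sketch of Algorithm 1, assuming the usual equal-width interval w = (max(A) − min(A)) / k as the stand-in for Equation (19):

```python
import numpy as np

def equal_width_discretize(values, k=3):
    """Algorithm 1 sketch: equal-width discretization into k bins."""
    a = np.sort(np.asarray(values, dtype=float))  # Step 1: sort A ascending
    width = (a[-1] - a[0]) / k                    # Step 2: interval width
    edges = a[0] + width * np.arange(1, k)        # Step 3: bin boundaries
    # Step 4: each value goes to the bin whose boundaries enclose it.
    return np.digitize(values, edges)             # labels 0 .. k-1
```

For example, `equal_width_discretize([1.0, 2.5, 7.0, 9.0])` yields the bin labels `[0, 0, 2, 2]` (width 8/3, boundaries at roughly 3.67 and 6.33).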
3.6. Association Rule Mining (ARM)
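The body of this section is not reproduced in the extract. As a hedged sketch of the ARM step, the Apriori implementation from mlxtend applied to one-hot encoded discretized expression levels; the toy input frame is an assumption, while the support, confidence, and lift thresholds match those reported for the graph-based features on GSE60052 in Section 4.4:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy stand-in for the discretized expression matrix: one row per sample,
# one column per gene, values are the equal-width bin labels (0/1/2).
discrete = pd.DataFrame(
    {"SLC34A2": [2, 0, 0, 2], "SFTPA1": [2, 2, 0, 2], "LRRK2": [2, 1, 0, 2]}
)

# One-hot encode (gene, level) pairs so Apriori sees boolean items.
onehot = pd.get_dummies(discrete.astype(str))

# Frequent itemsets and rules at the GSE60052 thresholds from Section 4.4:
# support 0.5, confidence 0.9, then filter by lift >= 2.
itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.9)
rules = rules[rules["lift"] >= 2.0]
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```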
4. Results and Discussion
4.1. Data Preprocessing
4.2. Feature Selection
4.3. Classification
4.4. Association Rule Mining
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Jindal, P.; Kumar, D. A review on dimensionality reduction techniques. Int. J. Comput. Appl. 2017, 173, 42–46.
- Nguyen, L.H.; Holmes, S. Ten quick tips for effective dimensionality reduction. PLoS Comput. Biol. 2019, 15, e1006907.
- Zebari, R.; Abdulazeez, A.; Zeebaree, D.; Zebari, D.; Saeed, J. A comprehensive review of dimensionality reduction techniques for feature selection and feature extraction. J. Appl. Sci. Technol. Trends 2020, 1, 56–70.
- Abdulrazzaq, M.B.; Saeed, J.N. A Comparison of Three Classification Algorithms for Handwritten Digit Recognition. In Proceedings of the 2019 International Conference on Advanced Science and Engineering (ICOASE), Zakho-Duhok, Iraq, 2–4 April 2019; pp. 58–63.
- Mafarja, M.; Mirjalili, S. Whale optimization approaches for wrapper feature selection. Appl. Soft Comput. 2018, 62, 441–453.
- Yu, L.; Liu, H. Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), Washington, DC, USA, 21–24 August 2003; pp. 856–863.
- Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28.
- Jović, A.; Brkić, K.; Bogunović, N. A review of feature selection methods with applications. In Proceedings of the 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 25–29 May 2015; pp. 1200–1205.
- Mlambo, N.; Cheruiyot, W.K.; Kimwele, M.W. A survey and comparative study of filter and wrapper feature selection techniques. Int. J. Eng. Sci. 2016, 5, 57–67.
- Urbanowicz, R.J.; Meeker, M.; La Cava, W.; Olson, R.S.; Moore, J.H. Relief-based feature selection: Introduction and review. J. Biomed. Inform. 2018, 85, 189–203.
- Abiodun, E.O.; Alabdulatif, A.; Abiodun, O.I.; Alawida, M.; Alabdulatif, A.; Alkhawaldeh, R.S. A systematic review of emerging feature selection optimization methods for optimal text classification: The present state and prospective opportunities. Neural Comput. Appl. 2021, 33, 15091–15118.
- Piles, M.; Bergsma, R.; Gianola, D.; Gilbert, H.; Tusell, L. Feature selection stability and accuracy of prediction models for genomic prediction of residual feed intake in pigs using machine learning. Front. Genet. 2021, 12, 137.
- Yang, P.; Huang, H.; Liu, C. Feature selection revisited in the single-cell era. Genome Biol. 2021, 22, 321.
- Arowolo, M.O.; Adebiyi, M.O.; Adebiyi, A.A.; Olugbara, O. Optimized hybrid investigative based dimensionality reduction methods for malaria vector using KNN classifier. J. Big Data 2021, 8, 1–14.
- Cateni, S.; Vannucci, M.; Vannocci, M.; Colla, V. Variable Selection and Feature Extraction through Artificial Intelligence Techniques. Available online: https://www.intechopen.com/chapters/41752 (accessed on 7 December 2021).
- Kim, K. An improved semi-supervised dimensionality reduction using feature weighting: Application to sentiment analysis. Expert Syst. Appl. 2018, 109, 49–65.
- Samuel, A.L. Some studies in machine learning using the game of checkers. IBM J. Res. Dev. 1959, 3, 210–229.
- Das, H.; Naik, B.; Behera, H. Classification of diabetes mellitus disease (DMD): A data mining (DM) approach. In Progress in Computing, Analytics and Networking; Springer: Singapore, 2018; pp. 539–549.
- Mazumder, D.H.; Veilumuthu, R. An enhanced feature selection filter for classification of microarray cancer data. ETRI J. 2019, 41, 358–370.
- Sun, S.; Zhu, J.; Ma, Y.; Zhou, X. Accuracy, robustness and scalability of dimensionality reduction methods for single-cell RNA-seq analysis. Genome Biol. 2019, 20, 1–21.
- Ai, D.; Pan, H.; Li, X.; Gao, Y.; He, D. Association rule mining algorithms on high-dimensional datasets. Artif. Life Robot. 2018, 23, 420–427.
- Agrawal, R.; Imieliński, T.; Swami, A. Mining Association Rules between Sets of Items in Large Databases. In Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data, Washington, DC, USA, 25–28 May 1993; pp. 207–216.
- Liu, X.; Sang, X.; Chang, J.; Zheng, Y.; Han, Y. The water supply association analysis method in Shenzhen based on kmeans clustering discretization and apriori algorithm. PLoS ONE 2021, 16, e0255684.
- Saeys, Y.; Inza, I.; Larrañaga, P. A review of feature selection techniques in bioinformatics. Bioinformatics 2007, 23, 2507–2517.
- Ang, J.C.; Mirzal, A.; Haron, H.; Hamed, H.N.A. Supervised, unsupervised, and semi-supervised feature selection: A review on gene selection. IEEE/ACM Trans. Comput. Biol. Bioinform. 2016, 13, 971–989.
- Ray, R.B.; Kumar, M.; Rath, S.K. Fast In-Memory Cluster Computing of Sizeable Microarray Using Spark. In Proceedings of the 2016 International Conference on Recent Trends in Information Technology (ICRTIT), Chennai, India, 8–9 April 2016; pp. 1–6.
- Lokeswari, Y.; Jacob, S.G. Prediction of child tumours from microarray gene expression data through parallel gene selection and classification on spark. In Computational Intelligence in Data Mining; Springer: Singapore, 2017; pp. 651–661.
- Peralta, D.; Del Río, S.; Ramírez-Gallego, S.; Triguero, I.; Benitez, J.M.; Herrera, F. Evolutionary feature selection for big data classification: A MapReduce approach. Math. Probl. Eng. 2015, 2015.
- Sonnenburg, S.; Franc, V.; Yom-Tov, E.; Sebag, M. Pascal Large Scale Learning Challenge. In Proceedings of the 25th International Conference on Machine Learning (ICML 2008) Workshop, Helsinki, Finland, 5–9 July 2008.
- Alghunaim, S.; Al-Baity, H.H. On the scalability of machine-learning algorithms for breast cancer prediction in big data context. IEEE Access 2019, 7, 91535–91546.
- Turgut, S.; Dağtekin, M.; Ensari, T. Microarray Breast Cancer Data Classification Using Machine Learning Methods. In Proceedings of the 2018 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT), Istanbul, Turkey, 18–19 April 2018.
- Matamala, N.; Vargas, M.T.; Gonzalez-Campora, R.; Minambres, R.; Arias, J.I.; Menendez, P.; Andres-Leon, E.; Gomez-Lopez, G.; Yanowsky, K.; Calvete-Candenas, J. Tumor microRNA expression profiling identifies circulating microRNAs for early breast cancer detection. Clin. Chem. 2015, 61, 1098–1106.
- Morovvat, M.; Osareh, A. An ensemble of filters and wrappers for microarray data classification. Mach. Learn. Appl. An. Int. J. 2016, 3, 1–17.
- Goswami, S.; Das, A.K.; Guha, P.; Tarafdar, A.; Chakraborty, S.; Chakrabarti, A.; Chakraborty, B. An approach of feature selection using graph-theoretic heuristic and hill climbing. Pattern Anal. Appl. 2019, 22, 615–631.
- Zhang, Z.; Hancock, E.R. A Graph-Based Approach to Feature Selection. In International Workshop on Graph-Based Representations in Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2011; pp. 205–214.
- Schroeder, D.T.; Styp-Rekowski, K.; Schmidt, F.; Acker, A.; Kao, O. Graph-Based Feature Selection Filter Utilizing Maximal Cliques. In Proceedings of the 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS), Granada, Spain, 22–25 October 2019; pp. 297–302.
- Roffo, G.; Castellani, U.; Vinciarelli, A.; Cristani, M. Infinite feature selection: A graph-based feature filtering approach. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 12.
- Rana, P.; Thai, P.; Dinh, T.; Ghosh, P. Relevant and non-redundant feature selection for cancer classification and subtype detection. Cancers 2021, 13, 4297.
- Nguyen, H.; Thai, P.; Thai, M.; Vu, T.; Dinh, T. Approximate k-Cover in hypergraphs: Efficient algorithms, and applications. arXiv 2019, arXiv:1901.07928.
- Lu, S.J.; Xie, J.; Li, Y.; Yu, B.; Ma, Q.; Liu, B.Q. Identification of lncRNAs-gene interactions in transcription regulation based on co-expression analysis of RNA-seq data. Math. Biosci. Eng. 2019, 16, 7112–7125.
- Chiclana, F.; Kumar, R.; Mittal, M.; Khari, M.; Chatterjee, J.M.; Baik, S.W. ARM–AMO: An efficient association rule mining algorithm based on animal migration optimization. Knowl. Based Syst. 2018, 154, 68–80.
- Wen, F.; Zhang, G.; Sun, L.; Wang, X.; Xu, X. A hybrid temporal association rules mining method for traffic congestion prediction. Comput. Ind. Eng. 2019, 130, 779–787.
- Shui, Y.; Cho, Y.-R. Filtering Association Rules in Gene Ontology Based on Term Specificity. In Proceedings of the 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, China, 15–18 December 2016; pp. 1314–1321.
- Agapito, G.; Cannataro, M.; Guzzi, P.H.; Milano, M. Using GO-WAR for mining cross-ontology weighted association rules. Comput. Methods Programs Biomed. 2015, 120, 113–122.
- Bhavsar, H.; Ganatra, A. A comparative study of training algorithms for supervised machine learning. Int. J. Soft Comput. Eng. (IJSCE) 2012, 2, 2231–2307.
- Han, J.; Pei, J.; Kamber, M. Data Mining: Concepts and Techniques; The Morgan Kaufmann Series in Data Management Systems; Morgan Kaufmann Publishers: Waltham, MA, USA, 2011; pp. 83–124.
- Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18.
- Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999.
- Tanwani, A.K.; Afridi, J.; Shafiq, M.Z.; Farooq, M. Guidelines to Select Machine Learning Scheme for Classification of Biomedical Datasets. In Proceedings of the European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics; Springer: Berlin/Heidelberg, Germany, 2009; pp. 128–139.
- Carletta, J. Assessing agreement on classification tasks: The kappa statistic. arXiv 1996, arXiv:9602004.
- Chai, T.; Draxler, R.R. Root mean square error (RMSE) or mean absolute error (MAE)?—Arguments against avoiding RMSE in the literature. Geosci. Model Dev. 2014, 7, 1247–1250.
- Dunham, M.H.; Sridhar, S. Data Mining: Introductory and Advanced Topics; Dorling Kindersley/Pearson Education: New Delhi, India, 2006.
- Jiang, L.; Huang, J.; Higgs, B.W.; Hu, Z.; Xiao, Z.; Yao, X.; Conley, S.; Zhong, H.; Liu, Z.; Brohawn, P. Genomic landscape survey identifies SRSF1 as a key oncodriver in small cell lung cancer. PLoS Genet. 2016, 12, e1005895.
- Djureinovic, D.; Hallström, B.M.; Horie, M.; Mattsson, J.S.M.; La Fleur, L.; Fagerberg, L.; Brunnström, H.; Lindskog, C.; Madjar, K.; Rahnenführer, J. Profiling cancer testis antigens in non–small-cell lung cancer. JCI Insight 2016, 1, e86837.
- Bullard, J.; Purdom, E.; Hansen, K.D.; Dudoit, S. Evaluation of statistical methods for normalization and differential expression in mRNA-Seq experiments. BMC Bioinform. 2010, 11, 94.
- Ustebay, S.; Turgut, Z.; Aydin, M.A. Intrusion Detection System with Recursive Feature Elimination by Using Random Forest and Deep Learning Classifier. In Proceedings of the International Congress on Big Data, Deep Learning and Fighting Cyber Terrorism (IBIGDELFT), Ankara, Turkey, 3–4 December 2018; pp. 71–76.
- Gunduz, H. An efficient stock market prediction model using hybrid feature reduction method based on variational autoencoders and recursive feature elimination. Financ. Innov. 2021, 7, 1–24.
- Artur, M. Review the performance of the Bernoulli Naïve Bayes classifier in intrusion detection systems using recursive feature elimination with cross-validated selection of the best number of features. Procedia Comput. Sci. 2021, 190, 564–570.
- Furat, F.G.; İbrikçi, T. Tumor type detection using Naïve Bayes algorithm on gene expression cancer RNA-Seq data set. Lung Cancer 2019, 10, 13.
| Dataset Name | Instances | Attributes | Classes | Source |
|---|---|---|---|---|
| SCLC (GSE60052) | 86 | 28,089 | 2 (79 small-cell lung cancer and 7 normal) | [53] |
| NSCLC (GSE81089) | 218 | 28,089 | 2 (199 non-small-cell lung cancer and 19 normal) | [54] |
Number of features retained after preprocessing and by each feature selection method (Graph-based, PCA, RFE):

| Dataset | Number of Features | Preprocessed | Graph | PCA | RFE |
|---|---|---|---|---|---|
| GSE60052 | 28,089 | 3423 (12.2%) | 80 | 86 | 198 |
| GSE81089 | 28,089 | 12,145 (43.2%) | 134 | 208 | 270 |
NAÏVE BAYES

| Dataset | Feature Selection Method | Accuracy (%) | MAE | Kappa | RMSE | F-Measure | Time (s) |
|---|---|---|---|---|---|---|---|
| GSE60052 | Graph-based | 96.4286 | 0.0357 | 0.8679 | 0.189 | 0.963 | 0.01 |
| | RFE | 100 | 0 | 1 | 0 | 1 | 0.01 |
| | PCA | 96.4286 | 0.0357 | 0.8679 | 0.189 | 0.963 | 0.02 |
| GSE81089 | Graph-based | 100 | 0 | 1 | 0 | 1 | 0.06 |
| | RFE | 100 | 0 | 1 | 0 | 1 | 0.01 |
| | PCA | 100 | 0 | 1 | 0 | 1 | 0.02 |

MULTILAYER PERCEPTRON

| Dataset | Feature Selection Method | Accuracy (%) | MAE | Kappa | RMSE | F-Measure | Time (s) |
|---|---|---|---|---|---|---|---|
| GSE60052 | Graph-based | 96.4286 | 0.0366 | 0.8679 | 0.1814 | 0.979 | 18.66 |
| | RFE | 96.4286 | 0.0389 | 0.8679 | 0.1851 | 0.963 | 9.7 |
| | PCA | 96.4286 | 0.0224 | 0.8679 | 0.0993 | 0.963 | 9.62 |
| GSE81089 | Graph-based | 96.4286 | 0.0389 | 0.8679 | 0.1851 | 0.963 | 124.15 |
| | RFE | 100 | 0 | 1 | 0 | 1 | 131.53 |
| | PCA | 100 | 0 | 1 | 0 | 1 | 0.77 |

SEQUENTIAL MINIMAL OPTIMIZATION

| Dataset | Feature Selection Method | Accuracy (%) | MAE | Kappa | RMSE | F-Measure | Time (s) |
|---|---|---|---|---|---|---|---|
| GSE60052 | Graph-based | 96.4286 | 0.0357 | 0.8679 | 0.189 | 0.889 | 0.01 |
| | RFE | 96.4286 | 0.0357 | 0.8679 | 0.189 | 0.963 | 0.01 |
| | PCA | 100 | 0 | 1 | 0 | 1 | 0.02 |
| GSE81089 | Graph-based | 98.5915 | 0.0141 | 0.9567 | 0.1187 | 0.986 | 0.14 |
| | RFE | 100 | 0 | 1 | 0 | 1 | 0.01 |
| | PCA | 100 | 0 | 1 | 0 | 1 | 0.01 |
| Dataset | Selection Method | Support | Confidence | Lift | No. of Rules | Non-Redundant Rules |
|---|---|---|---|---|---|---|
| GSE60052 | Graph-based | 0.5 | 0.9 | 2 | 19 | 15 |
| | PCA | 0.4 | 0.9 | 2 | 38 | 38 |
| | RFE | 0.3 | 0.9 | 1.98 | 357,986 | 112,357 |
| GSE81089 | Graph-based | 0.5 | 0.9 | 2 | 36 | 36 |
| | PCA | 0.4 | 0.9 | 1 | 121 | 121 |
| | RFE | 0.4 | 0.9 | 1 | 899 | 884 |
GSE60052

| Rule (X => Y) | Support | Confidence | Lift |
|---|---|---|---|
| {SFTPA1, SDC4, LRRK2} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {ACVRL1, COL4A3, AQP1} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {EDNRB, SFTPC, AGER} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {PTPRB, SFTPC, CLDN5} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {EPAS1, EDNRB, LRRK2, AQP1} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {CLDN18, EPAS1, SFTPA1, AGER} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {EPAS1, NAPSA, LRRK2, AGER} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {TIMP3, CTSH, SFTPA1, LRRK2} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {CTSH, NAPSA, TGFBR2, SFTPC} => {SLC34A2} | 0.5 | 0.9 | 2 |
| {RRAS, PTPRB, YAP1, SMAD6} => {SLC34A2} | 0.5 | 0.9 | 2 |

GSE81089

| Rule (X => Y) | Support | Confidence | Lift |
|---|---|---|---|
| {ASPM, KIF4A, NUF2} => {CENPF} | 0.5 | 1 | 2 |
| {ASPM, KIF4A, CDC6, NUF2} => {TOP2A} | 0.5 | 1 | 2 |
| {ASPM, CDC6, CDC20, NUF2} => {TOP2A} | 0.5 | 1 | 2 |
| {ASPM, CDC6, CDCA8, NUF2} => {TOP2A} | 0.5 | 1 | 2 |
| {TPX2, FOXM1, NUF2, IQGAP3} => {BIRC5} | 0.5 | 1 | 2 |
| {CDC6, FOXM1, CDC20, UBE2C} => {TPX2} | 0.5 | 1 | 2 |
| {ASPM, CDC6, DLGAP5, NUF2} => {TOP2A} | 0.5 | 1 | 2 |
| {TPX2, CDCA8, UBE2C, IQGAP3} => {BIRC5} | 0.5 | 1 | 2 |
| {ASPM, KIF4A, CDC6, UBE2C} => {CENPF} | 0.5 | 1 | 2 |
| {TPX2, CDC6, FOXM1, IQGAP3} => {BIRC5} | 0.5 | 1 | 2 |