
Evolutionary Computation for Feature Selection and Dimensionality Reduction

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 March 2026) | Viewed by 17518

Special Issue Editors


Guest Editor
School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, China
Interests: deep learning; evolutionary computation; lightweight deep learning; lightweight large models; lightweight machine learning; computer vision
Special Issues, Collections and Topics in MDPI journals

Guest Editor
School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
Interests: intelligence optimization; data mining

Guest Editor
School of Engineering and Computer Science (SECS), Victoria University of Wellington (VUW), Wellington 6012, New Zealand
Interests: evolutionary computation; feature selection; computer vision; image analysis; neuroevolution
Special Issues, Collections and Topics in MDPI journals

Guest Editor
NICE Research Group, Department of Computer Science, University of Surrey, Guildford GU2 7XH, UK
Interests: heuristic optimisation; neural architecture search; feature selection; machine learning systems
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

This Special Issue, "Evolutionary Computation for Feature Selection and Dimensionality Reduction", explores the application of evolutionary computation to feature selection in machine learning and feature map selection in deep learning. Feature selection and feature map selection are important data-processing techniques for both shallow and deep learning methods. They can significantly improve the accuracy and learning speed of learning algorithms while also reducing model size. However, they are challenging tasks due to the large search space involved.

This Special Issue aims to investigate new theories and methods across different evolutionary computation and machine learning paradigms, focusing on feature selection in shallow learning and feature map selection in deep learning. Evolutionary computation paradigms include, but are not limited to, particle swarm optimization, artificial bee colony optimization, genetic algorithms, and differential evolution, while machine learning paradigms include, but are not limited to, MLPs, CNNs, and genetic programming. This Special Issue also welcomes novel applications of EC-based and learning-based feature selection methods in related fields. We kindly invite the scientific community to contribute novel and original research related, but not limited, to the following topics:

  • Feature selection;
  • Feature map selection;
  • Learning-based optimization;
  • Dimensionality reduction;
  • Swarm intelligence optimization;
  • Evolutionary computation;
  • Learning-based feature selection;
  • Evolutionary feature selection;
  • Feature extraction;
  • Feature dimensionality reduction on high-dimensional and large-scale data;
  • Evolutionary feature selection and construction;
  • Multi-objective feature selection;
  • Feature selection for clustering;
  • Feature selection for multi-task optimization and multi-task learning;
  • Hybridization of feature selection and cost-sensitive classification/clustering;
  • Hybridization of feature selection and class-imbalance classification/clustering;
  • Applications of feature selection;
  • Genetic algorithm/genetic programming/particle swarm optimization/ant colony optimization/artificial bee colony/differential evolution/fireworks algorithm/brain storm optimization for feature selection;
  • Machine learning/data mining/neural network/deep learning/decision tree/deep neural network/convolutional neural network/reinforcement learning/ensemble learning/K-means for feature selection/feature map selection;
  • Real-world applications of feature selection, e.g., images and video sequences/analysis, face recognition, gene analysis, biomarker detection, medical data analysis, text mining, intrusion detection systems, vehicle routing, computer vision, natural language processing, speech recognition, etc.
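Many of the topics above pair an evolutionary search with a wrapper evaluation of candidate feature subsets. A minimal sketch of that pattern is shown below, using a genetic algorithm over boolean inclusion masks; the operator choices, population sizes, and the `evaluate` interface are illustrative assumptions, not a prescribed method.

```python
import random

def ga_feature_selection(evaluate, n_features, pop_size=20, generations=30,
                         mutation_rate=0.05, seed=0):
    """Minimal GA wrapper for feature selection.

    `evaluate(mask)` scores a boolean inclusion mask (higher is better),
    e.g. a cross-validated accuracy minus a sparsity penalty.
    """
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_features)] for _ in range(pop_size)]

    def tournament():
        # binary tournament: the fitter of two random individuals wins
        a, b = rng.sample(pop, 2)
        return a if evaluate(a) >= evaluate(b) else b

    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_features)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [not g if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        elite = max(pop, key=evaluate)                  # elitism: keep best so far
        pop = children[:-1] + [elite]
    return max(pop, key=evaluate)
```

In practice `evaluate` would wrap a learner; here any scoring function over masks works, which is what makes the wrapper approach paradigm-agnostic.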

Prof. Dr. Yu Xue
Prof. Dr. Yong Zhang
Prof. Dr. Bing Xue
Prof. Dr. Ferrante Neri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • feature selection
  • feature map selection
  • learning-based optimization
  • dimensionality reduction
  • swarm intelligence optimization
  • evolutionary computation
  • learning-based feature selection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

24 pages, 2457 KB  
Article
An Enhanced ABC Algorithm with Hybrid Initialization and Stagnation-Guided Search for Parameter-Efficient Text Summarization
by Yun Liu, Yingjing Yao, Wenyu Pei, Mengqi Liu and Hao Gao
Mathematics 2026, 14(7), 1120; https://doi.org/10.3390/math14071120 - 27 Mar 2026
Viewed by 279
Abstract
The digital transformation of oil and gas pipeline networks has generated substantial volumes of unstructured maintenance documentation from communication systems, creating an urgent need for automated summarization to improve operational efficiency. However, domain-specific text summarization for pipeline communication maintenance remains challenging due to scarce labeled data and the high computational cost of fine-tuning large pretrained models. Parameter-efficient fine-tuning alleviates this issue, but its effectiveness strongly depends on appropriate hyperparameter selection. This paper proposes a unified framework that combines weight-decomposed low-rank adaptation with an enhanced Artificial Bee Colony algorithm for automated hyperparameter optimization. The enhanced algorithm addresses two specific limitations of the standard Artificial Bee Colony algorithm: uninformed random initialization that ignores promising regions, and premature abandonment of stagnated solutions that discards partially useful search directions. These two components represent principled design choices, each targeting a distinct bottleneck in applying swarm intelligence search to high-dimensional mixed-type hyperparameter spaces. The method introduces a hybrid initialization strategy to exploit prior knowledge and a stagnation-guided local search mechanism to refine stagnated solutions instead of discarding them, achieving a better balance between exploration and exploitation. Experimental results on a public Chinese summarization benchmark and an industrial oil and gas pipeline communication maintenance corpus show that the proposed approach consistently outperforms full fine-tuning, manually tuned parameter-efficient methods, and several evolutionary optimization baselines in terms of ROUGE metrics. 
The automated search introduces modest additional computational overhead compared to manual tuning while eliminating expert-dependent hyperparameter configuration and achieving consistent performance gains across both datasets. Overall, the proposed framework provides an efficient and robust solution for adapting large language models to specialized summarization tasks in the context of pipeline communication system maintenance. Full article
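The two enhancements described in the abstract can be pictured with a toy continuous minimizer: standard ABC-style employed/onlooker moves, with the usual scout-phase random restart replaced by a small refining step on stagnated solutions. Everything below (the merged bee phases, step sizes, and function names) is a simplified assumption, not the authors' implementation.

```python
import random

def abc_minimize(f, bounds, n_food=10, limit=5, iters=200, seed=1):
    """Toy ABC minimizer where stagnated food sources are refined by a
    small local move instead of being abandoned (stagnation-guided idea,
    heavily simplified)."""
    rng = random.Random(seed)
    dim = len(bounds)
    foods = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def clamp(v, j):
        lo, hi = bounds[j]
        return min(max(v, lo), hi)

    for _ in range(iters):
        # employed/onlooker phase (merged for brevity): perturb one
        # dimension relative to a random other food, keep improvements
        for i in range(n_food):
            k = rng.randrange(n_food)
            j = rng.randrange(dim)
            cand = foods[i][:]
            cand[j] = clamp(cand[j] + rng.uniform(-1, 1) * (cand[j] - foods[k][j]), j)
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # stagnation-guided phase: refine a stalled source with a small
        # absolute perturbation rather than discarding it outright
        for i in range(n_food):
            if trials[i] > limit:
                j = rng.randrange(dim)
                lo, hi = bounds[j]
                cand = foods[i][:]
                cand[j] = clamp(cand[j] + rng.uniform(-0.05, 0.05) * (hi - lo), j)
                fc = f(cand)
                if fc < fits[i]:
                    foods[i], fits[i] = cand, fc
                trials[i] = 0
    b = min(range(n_food), key=lambda i: fits[i])
    return foods[b], fits[b]
```

The paper applies this kind of search to a mixed-type hyperparameter space for parameter-efficient fine-tuning; the sphere-function usage here is purely illustrative.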

26 pages, 1065 KB  
Article
Feature Selection Using Nearest Neighbor Gaussian Processes
by Konstantin Posch, Maximilian Arbeiter, Christian Truden, Martin Pleschberger and Jürgen Pilz
Mathematics 2026, 14(3), 476; https://doi.org/10.3390/math14030476 - 29 Jan 2026
Viewed by 557
Abstract
We introduce a novel Bayesian approach for feature (variable) selection using Gaussian process regression, which is crucial for enhancing interpretability and model regularization. Our method employs nearest neighbor Gaussian processes as scalable approximations to classical Gaussian processes. Feature selection is performed by conditioning the process mean and covariance function on a random set representing the indices of relevant variables. A priori beliefs regarding this set control the feature selection, while reference priors are assigned to the remaining model parameters, ensuring numerical robustness in the process covariance matrix. For model inference, we propose a Metropolis-within-Gibbs algorithm. The effectiveness of the proposed feature selection approach is demonstrated through evaluation on simulated data, a computer experiment approximation, and two real-world data sets. Full article
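The Metropolis sampling over a random set of relevant variable indices can be sketched with a toy sampler. The `log_post` score below is a stand-in for the paper's nearest-neighbor-GP posterior, which is far more involved; the single-bit-flip proposal and everything else here are illustrative assumptions.

```python
import math
import random

def metropolis_feature_select(log_post, n_features, steps=2000, seed=0):
    """Toy Metropolis sampler over a binary feature-inclusion vector.

    `log_post(mask)` returns the unnormalized log posterior of a subset;
    the sampler returns estimated posterior inclusion probabilities.
    """
    rng = random.Random(seed)
    mask = [False] * n_features
    lp = log_post(mask)
    counts = [0] * n_features
    for _ in range(steps):
        j = rng.randrange(n_features)                # propose one bit flip
        cand = mask[:]
        cand[j] = not cand[j]
        lp_c = log_post(cand)
        if math.log(rng.random()) < lp_c - lp:       # Metropolis acceptance
            mask, lp = cand, lp_c
        for i, on in enumerate(mask):                # accumulate inclusion counts
            counts[i] += on
    return [c / steps for c in counts]
```

Features whose inclusion raises the posterior end up with high estimated inclusion probability, which is the selection signal the paper reads off.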

27 pages, 3116 KB  
Article
Anomaly Deviation-Based Window Size Selection of Sensor Data for Enhanced Fault Diagnosis Efficiency in Autonomous Manufacturing Systems
by Minjae Kim, Sangyoon Lee, Dongkeun Oh, Byungho Park, Jeongdai Jo and Changwoo Lee
Mathematics 2026, 14(3), 471; https://doi.org/10.3390/math14030471 - 29 Jan 2026
Viewed by 537
Abstract
In autonomous manufacturing systems, the performance of time-series-based anomaly detection and fault diagnosis is highly sensitive to window size selection. Conventional approaches rely on empirical rules or fixed window settings, which often fail to capture the diverse temporal characteristics of anomalies and lead to performance degradation. This study systematically addresses the window size selection problem by categorizing anomaly patterns into three representative types: variability, cycle, and local spike. Each pattern is associated with a distinct temporal scale and underlying physical mechanism. Based on this insight, an Anomaly Deviation-Based Window Size Selection (ADW) method is proposed, which quantitatively evaluates anomaly deviation as a function of window size. Unlike traditional preprocessing-oriented approaches, the proposed method redefines window size as a core design variable that directly governs anomaly representation and diagnostic sensitivity. The effectiveness of the ADW method is validated using tension data from a roll-to-roll continuous manufacturing process and vibration data from a rotating bearing fault dataset. Experimental results demonstrate that the proposed approach consistently identifies optimized window sizes tailored to different anomaly types, leading to improved fault classification accuracy and diagnostic robustness. The proposed framework provides a physically interpretable and data-driven guideline for adaptive window size selection in long-term autonomous manufacturing systems. Full article
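The core selection idea, choosing the window size at which the anomaly deviates most from normal behavior, can be sketched as follows. The moving-average statistic and the max-deviation measure are simplifying assumptions, not the paper's exact ADW formulation.

```python
def moving_mean(x, w):
    """Moving average of x with window size w (length len(x)-w+1)."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def select_window(normal, faulty, candidates):
    """Pick the window size whose windowed representation best separates
    faulty from normal data (a toy anomaly-deviation criterion)."""
    def deviation(w):
        mn = moving_mean(normal, w)
        mf = moving_mean(faulty, w)
        base = sum(mn) / len(mn)                 # baseline from normal data
        return max(abs(v - base) for v in mf)    # largest windowed deviation
    return max(candidates, key=deviation)
```

Consistent with the abstract's pattern taxonomy, a local-spike anomaly favors a short window here (long windows average the spike away), while slower patterns would favor longer ones.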

27 pages, 5895 KB  
Article
A Density-Based Feature Space Optimization Approach for Intelligent Fault Diagnosis in Smart Manufacturing Systems
by Junyoung Yun, Kyung-Chul Cho, Wonmo Kang, Changwan Kim, Heung Soo Kim and Changwoo Lee
Mathematics 2025, 13(24), 3984; https://doi.org/10.3390/math13243984 - 14 Dec 2025
Viewed by 693
Abstract
In light of ongoing advancements in smart manufacturing, there is a growing need for intelligent fault diagnosis methods that maintain reliability under noisy, high-variability operating conditions. Conventional feature selection strategies often struggle when data contain outliers or suboptimal feature subsets, limiting their diagnostic utility. This study introduces a density-based feature space optimization (DBFSO) framework that integrates feature selection with localized density estimation to enhance feature space separability and classifier efficiency. Using k-nearest neighbor density estimation, the method identifies and removes low-density feature vectors associated with noise or outlier behavior, thereby sharpening the feature space and improving class discriminability. Experiments using roll-to-roll (R2R) manufacturing data under mechanical disturbances demonstrate that DBFSO improves classification accuracy by up to 36–40% when suboptimal feature subsets are used and reduces training time by 60–71% due to reduced feature space volume. Even with already-optimized feature sets, DBFSO provides consistent performance gains and increased robustness against operational variability. Additional validation using a bearing fault dataset confirms that the framework generalizes across domains, yielding improved accuracy and significantly more compact, noise-resistant feature representations. These findings highlight DBFSO as an effective preprocessing strategy for intelligent fault diagnosis in intelligent manufacturing systems. Full article
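The density-based pruning step can be illustrated with a small k-nearest-neighbor filter. The scoring rule (distance to the k-th neighbor) and the fixed keep ratio are assumptions for illustration; the paper's DBFSO framework couples this with feature selection.

```python
import math

def knn_density_filter(points, k=3, keep_ratio=0.9):
    """Drop the lowest-density feature vectors, scoring local density by
    the (negated) distance to the k-th nearest neighbor."""
    scores = []
    for i, p in enumerate(points):
        ds = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(-ds[k - 1])        # higher score = denser neighborhood
    order = sorted(range(len(points)), key=lambda i: scores[i], reverse=True)
    kept = sorted(order[:int(len(points) * keep_ratio)])   # preserve input order
    return [points[i] for i in kept]
```

Isolated vectors (noise or outliers) have a large k-th-neighbor distance and are removed first, which is what sharpens class separability in the abstract's account.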

33 pages, 2022 KB  
Article
Evolutionary Computation for Feature Optimization and Image-Based Dimensionality Reduction in IoT Intrusion Detection
by Hessah A. Alsalamah and Walaa N. Ismail
Mathematics 2025, 13(23), 3869; https://doi.org/10.3390/math13233869 - 2 Dec 2025
Viewed by 646
Abstract
The exponential growth of the Internet of Things (IoT) has made it increasingly vulnerable to cyberattacks, where malicious manipulation of network and sensor data can lead to incorrect data classification. IoT data are inherently heterogeneous, comprising sensor readings, network flow records, and device metadata that differ significantly in scale and structure. This diversity motivates transforming tabular IoT data into image-based representations to facilitate the recognition of intrusion patterns and the analysis of spatial correlations. Many deep learning models offer robust detection performance, including CNNs, LSTMs, CNN–LSTM hybrids, and Transformer-based networks, but many of these architectures are computationally intensive and require significant training resources. To address this challenge, this study introduces an evolutionary-driven framework that mathematically formalizes the transformation of tabular IoT data into image-encoded matrices and optimizes feature selection through metaheuristic algorithms. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Variable Neighborhood Search (VNS) are employed to identify optimal feature subsets for Random Forest (RF) and Extreme Gradient Boosting (XGBoost) classifiers. The approach enhances discrimination by optimizing multi-objective criteria, including accuracy and sparsity, while maintaining low computational complexity suitable for edge deployment. Experimental results on benchmark IoT intrusion datasets demonstrate that VNS-XGBoost configurations performed better on the IDS2017 and IDS2018 benchmarks, achieving accuracies up to 0.99997 and a significant reduction in Type II errors (212 and 6 in tabular form, reduced to 4 and 1 using image-encoded representations). These results confirm that integrating evolutionary optimization with image-based feature modeling enables accurate, efficient, and robust intrusion detection across large-scale IoT systems. Full article
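The tabular-to-image transformation at the heart of the framework can be sketched as a normalize-and-reshape step. The square, zero-padded layout below is an assumption; the paper formalizes its own mapping, which may order and scale features differently.

```python
import math

def encode_as_image(row, side=None, lo=0.0, hi=1.0):
    """Reshape one normalized tabular record into a square matrix so that
    image-style models can look for spatial patterns."""
    if side is None:
        side = math.ceil(math.sqrt(len(row)))
    # min-max scale into [0, 1], clamping values outside [lo, hi]
    scaled = [min(max((v - lo) / (hi - lo), 0.0), 1.0) for v in row]
    scaled += [0.0] * (side * side - len(scaled))   # zero-pad to a full grid
    return [scaled[r * side:(r + 1) * side] for r in range(side)]
```

A metaheuristic feature-selection pass (GA, PSO, or VNS, as in the paper) would run before this step, so only the selected columns are encoded.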

33 pages, 2439 KB  
Article
A Novel Deep Hybrid Learning Framework for Structural Reliability Under Civil and Mechanical Constraints
by Qasim Aljamal, Mahmoud AlJamal, Mohammad Q. Al-Jamal, Zaid Jawasreh, Ayoub Alsarhan, Sami Aziz Alshammari, Nayef H. Alshammari and Rahaf R. Alshammari
Mathematics 2025, 13(23), 3834; https://doi.org/10.3390/math13233834 - 29 Nov 2025
Cited by 2 | Viewed by 826
Abstract
This study presents an AI-based framework that unifies civil and mechanical engineering principles to optimize the structural performance of steel frameworks. Unlike traditional methods that analyze material behavior, load-bearing capacity, and dynamic response separately, the proposed model integrates these factors into a single hybrid feature space combining material properties, geometric descriptors, and load-response characteristics. A deep learning model enhanced with physics-informed reliability constraints is developed to predict both safety states and optimal design configurations. Using AISC steel datasets and experimental records, the framework achieves 99.91% accuracy in distinguishing safe from unsafe designs, with mean absolute errors below 0.05 and percentage errors under 2% for reliability and load-bearing predictions. The system also demonstrates high computational efficiency, achieving inference latency below 3 ms, which supports real-time deployment in design and monitoring environments. The proposed framework provides a scalable, interpretable, and code-compliant approach for optimizing steel structures, advancing data-driven reliability assessment in both civil and mechanical engineering. Full article

22 pages, 7824 KB  
Article
SFPFMformer: Short-Term Power Load Forecasting for Proxy Electricity Purchase Based on Feature Optimization and Multiscale Decomposition
by Chengfei Qi, Yanli Feng, Junling Wan, Xinying Mao and Peisen Yuan
Mathematics 2025, 13(10), 1584; https://doi.org/10.3390/math13101584 - 12 May 2025
Cited by 1 | Viewed by 1044
Abstract
Short-term load forecasting is important for proxy electricity purchasing in the electricity spot trading market. In this paper, a model, SFPFMformer, for short-term power load forecasting is proposed to address the issue of balancing accuracy and timeliness. In SFPFMformer, the random forest algorithm is applied to select the most important attributes, which reduces redundant attributes and improves performance and efficiency; then, multiple timescale segmentation is used to extract load data features from multiple time dimensions to learn feature representations at different levels. In addition, fusion time location encoding is adopted in the Transformer to ensure that the model can accurately capture time-position information. Finally, we utilize a depthwise separable convolution block to extract features from power load data, which efficiently captures the pattern of change in load. We conducted extensive experiments on real datasets, and the experimental results show that for the 4 h prediction, the RMSE, MAE, and MAPE of our model are 1128.69, 803.91, and 2.63%, respectively. For the 24 h forecast, the RMSE, MAE, and MAPE of our model are 1190.51, 897.26, and 2.97%, respectively. Compared with existing methods, such as Informer, Autoformer, ETSformer, LSTM, and Seq2seq, our model has better precision and time performance for short-term power load forecasting for proxy spot trading. Full article
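The multiple-timescale segmentation step can be sketched as slicing the load series into non-overlapping windows at several scales. The non-overlapping scheme (and dropping any trailing partial window) is a simplifying assumption, not necessarily the paper's exact segmentation.

```python
def multiscale_segments(series, scales):
    """Cut a series into non-overlapping windows at several time scales,
    returning {scale: [segment, ...]}; a trailing partial window is dropped."""
    out = {}
    for s in scales:
        out[s] = [series[i:i + s] for i in range(0, len(series) - s + 1, s)]
    return out
```

Each scale exposes a different level of structure (e.g. intra-hour variation vs. daily shape), which downstream layers can embed separately.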

15 pages, 2289 KB  
Article
Temporal Graph Attention Network for Spatio-Temporal Feature Extraction in Research Topic Trend Prediction
by Zhan Guo, Mingxin Lu and Jin Han
Mathematics 2025, 13(5), 686; https://doi.org/10.3390/math13050686 - 20 Feb 2025
Cited by 4 | Viewed by 4701
Abstract
Comprehensively extracting spatio-temporal features is essential to research topic trend prediction. This necessity arises from the fact that research topics exhibit both temporal trend features and spatial correlation features. This study proposes a Temporal Graph Attention Network (T-GAT) to extract the spatio-temporal features of research topics and predict their trends. In this model, a temporal convolutional layer is employed to extract temporal trend features from multivariate topic time series. Additionally, a multi-head graph attention layer is introduced to capture spatial correlation features among research topics. This layer learns attention scores from the data by using scaled dot product operations and updates edge weights between topics accordingly, thereby mitigating the issue of over-smoothing. Furthermore, we introduce WFtopic-econ and WFtopic-polit, two domain-specific datasets for Chinese research topics constructed from the Wanfang Academic Database. Extensive experiments demonstrate that T-GAT outperforms baseline models in prediction accuracy, with RMSE and MAE being reduced by 4.8% to 7.1% and 14.5% to 18.4%, respectively, while R2 improved by 4.8% to 7.9% across varying observation time steps on the WFtopic-econ dataset. Moreover, on the WFtopic-polit dataset, RMSE and MAE were reduced by 4.0% to 5.3% and 10.0% to 10.7%, respectively, and R2 improved by 7.6% to 14.4%. These results validate the effectiveness of integrating graph attention with temporal convolution to model the spatio-temporal evolution of research topics, providing a robust tool for scholarly trend analysis and decision making. Full article
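The scaled dot-product attention restricted to graph neighborhoods can be sketched single-headed and without learned projections, both simplifications relative to the paper's multi-head layer. It assumes each node has at least one neighbor (e.g. a self-loop in `adj`).

```python
import math

def graph_attention_scores(h, adj):
    """Scaled dot-product attention over graph edges: node i attends only
    to neighbors j with adj[i][j] == 1. Returns an n x n weight matrix
    whose rows are softmax-normalized over each node's neighborhood."""
    n, d = len(h), len(h[0])
    scores = []
    for i in range(n):
        logits = {j: sum(a * b for a, b in zip(h[i], h[j])) / math.sqrt(d)
                  for j in range(n) if adj[i][j]}
        m = max(logits.values())                        # stabilize the softmax
        exps = {j: math.exp(v - m) for j, v in logits.items()}
        z = sum(exps.values())
        scores.append([exps.get(j, 0.0) / z for j in range(n)])
    return scores
```

In the paper these learned attention scores replace fixed edge weights between topics, which is how the layer mitigates over-smoothing.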

20 pages, 1728 KB  
Article
Sentence Embedding Generation Framework Based on Kullback–Leibler Divergence Optimization and RoBERTa Knowledge Distillation
by Jin Han and Liang Yang
Mathematics 2024, 12(24), 3990; https://doi.org/10.3390/math12243990 - 18 Dec 2024
Cited by 2 | Viewed by 3621
Abstract
In natural language processing (NLP) tasks, computing semantic textual similarity (STS) is crucial for capturing nuanced semantic differences in text. Traditional word vector methods, such as Word2Vec and GloVe, as well as deep learning models like BERT, face limitations in handling context dependency and polysemy and present challenges in computational resources and real-time processing. To address these issues, this paper introduces two novel methods. First, a sentence embedding generation method based on Kullback–Leibler Divergence (KLD) optimization is proposed, which enhances semantic differentiation between sentence vectors, thereby improving the accuracy of textual similarity computation. Second, this study proposes a framework incorporating RoBERTa knowledge distillation, which integrates the deep semantic insights of the RoBERTa model with prior methodologies to enhance sentence embeddings while preserving computational efficiency. Additionally, the study extends its contributions to sentiment analysis tasks by leveraging the enhanced embeddings for classification. The sentiment analysis experiments, conducted using a Stochastic Gradient Descent (SGD) classifier on the ACL IMDB dataset, demonstrate the effectiveness of the proposed methods, achieving high precision, recall, and F1 score metrics. To further augment model accuracy and efficacy, a feature selection approach is introduced, specifically through the Dynamic Principal Component Selection (DPCS) algorithm. The DPCS method autonomously identifies and prioritizes critical features, thus enriching the expressive capacity of sentence vectors and significantly advancing the accuracy of similarity computations. Experimental results demonstrate that our method outperforms existing methods in semantic similarity computation on the SemEval-2016 dataset. 
When evaluated using cosine similarity of average vectors, our model achieved a Pearson correlation coefficient (τ) of 0.470, a Spearman correlation coefficient (ρ) of 0.481, and a mean absolute error (MAE) of 2.100. Compared to traditional methods such as Word2Vec, GloVe, and FastText, our method significantly enhances similarity computation accuracy. Using TF-IDF-weighted cosine similarity evaluation, our model achieved a τ of 0.528, ρ of 0.518, and an MAE of 1.343. Additionally, in the cosine similarity assessment leveraging the Dynamic Principal Component Smoothing (DPCS) algorithm, our model achieved a τ of 0.530, ρ of 0.518, and an MAE of 1.320, further demonstrating the method’s effectiveness and precision in handling semantic similarity. These results indicate that our proposed method has high relevance and low error in semantic textual similarity tasks, thereby better capturing subtle semantic differences between texts. Full article
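The KLD objective between sentence representations can be illustrated by treating softmax-normalized embedding vectors as probability distributions. How the divergence enters the paper's training loss is not described here, so this shows only the divergence computation itself.

```python
import math

def softmax(v):
    """Normalize a vector into a probability distribution (stable)."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two softmax-normalized sentence vectors;
    eps guards against zero probabilities."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

KL divergence is zero only when the two distributions coincide and grows with their separation, which is the property exploited to push unrelated sentence vectors apart.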

17 pages, 823 KB  
Article
Feature Optimization and Dropout in Genetic Programming for Data-Limited Image Classification
by Chan Min Lee, Chang Wook Ahn and Man-Je Kim
Mathematics 2024, 12(23), 3661; https://doi.org/10.3390/math12233661 - 22 Nov 2024
Viewed by 1832
Abstract
Image classification in data-limited environments presents a significant challenge, as collecting and labeling large image datasets in real-world applications is often costly and time-consuming. This has led to increasing interest in developing models under data-constrained conditions. This paper introduces the Feature Optimization and Dropout in Genetic Programming (FOD-GP) framework, which addresses this issue by leveraging Genetic Programming (GP) to evolve models automatically. FOD-GP incorporates feature optimization and adaptive dropout techniques to improve overall performance. Experimental evaluations on benchmark datasets, including CIFAR10, FMNIST, and SVHN, demonstrate that FOD-GP improves training efficiency. In particular, FOD-GP achieves up to a 12% increase in classification accuracy over traditional methods. The effectiveness of the proposed framework is validated through statistical analysis, confirming its practicality for image classification. These findings establish a foundation for future advancements in data-limited and interpretable machine learning, offering a scalable solution for complex classification tasks. Full article
