Algorithms, Volume 18, Issue 10 (October 2025) – 69 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
31 pages, 6617 KB  
Article
A Modular and Explainable Machine Learning Pipeline for Student Dropout Prediction in Higher Education
by Abdelkarim Bettahi, Fatima-Zahra Belouadha and Hamid Harroud
Algorithms 2025, 18(10), 662; https://doi.org/10.3390/a18100662 - 18 Oct 2025
Abstract
Student dropout remains a persistent challenge in higher education, with substantial personal, institutional, and societal costs. We developed a modular dropout prediction pipeline that couples data preprocessing with multi-model benchmarking and a governance-ready explainability layer. Using 17,883 undergraduate records from a Moroccan higher education institution, we evaluated nine algorithms (logistic regression (LR), decision tree (DT), random forest (RF), k-nearest neighbors (k-NN), support vector machine (SVM), gradient boosting, Extreme Gradient Boosting (XGBoost), Naïve Bayes (NB), and multilayer perceptron (MLP)). On our test set, XGBoost attained an area under the receiver operating characteristic curve (AUC–ROC) of 0.993, F1-score of 0.911, and recall of 0.944. Subgroup reporting supported governance and fairness: across credit–load bins, recall remained high and stable (e.g., <9 credits: precision 0.85, recall 0.932; 9–12: 0.886/0.969; >12: 0.915/0.936), with full TP/FP/FN/TN provided. A Shapley additive explanations (SHAP)-based layer identified risk and protective factors (e.g., administrative deadlines, cumulative GPA, and passed-course counts), surfaced ambiguous and anomalous cases for human review, and offered case-level diagnostics. To assess generalization, we replicated our findings on a public dataset (UCI–Portugal; tables only): XGBoost remained the top-ranked (F1-score 0.792, AUC–ROC 0.922). Overall, boosted ensembles combined with SHAP delivered high accuracy, transparent attribution, and governance-ready outputs, enabling responsible early-warning implementation for student retention. Full article
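The coupling of a boosted classifier with SHAP attributions described above can be illustrated with a minimal sketch; the feature matrix, labels, and hyperparameters below are hypothetical stand-ins rather than the authors' pipeline.

```python
import numpy as np
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split

# Synthetic stand-in for student records (feature names and sizes are hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)              # per-student, per-feature attributions
shap_values = explainer.shap_values(X_te)
print(np.abs(shap_values).mean(axis=0))            # global ranking of risk/protective factors
```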
41 pages, 762 KB  
Article
MCMC Methods: From Theory to Distributed Hamiltonian Monte Carlo over PySpark
by Christos Karras, Leonidas Theodorakopoulos, Aristeidis Karras, George A. Krimpas, Charalampos-Panagiotis Bakalis and Alexandra Theodoropoulou
Algorithms 2025, 18(10), 661; https://doi.org/10.3390/a18100661 - 17 Oct 2025
Abstract
The Hamiltonian Monte Carlo (HMC) method is effective for Bayesian inference but suffers from synchronization overhead in distributed settings. We propose two variants: a distributed HMC (DHMC) baseline with synchronized, globally exact gradient evaluations and a communication-avoiding leapfrog HMC (CALF-HMC) method that interleaves local surrogate micro-steps with a single global Metropolis–Hastings correction per trajectory. Implemented on Apache Spark/PySpark and evaluated on a large synthetic logistic regression (N = 10⁷, d = 100, workers J ∈ {4, 8, 16, 32}), DHMC attained an average acceptance of 0.986, mean ESS of 1200, and wall-clock of 64.1 s per evaluation run, yielding 18.7 ESS/s; CALF-HMC achieved an acceptance of 0.942, mean ESS of 5.1, and 14.8 s, i.e., ≈0.34 ESS/s under the tested surrogate configuration. While DHMC delivered higher ESS/s due to robust mixing under conservative integration, CALF-HMC reduced the per-trajectory runtime and exhibited more favorable scaling as inter-worker latency increased. The study contributes (i) a systems-oriented communication cost model for distributed HMC, (ii) an exact, communication-avoiding leapfrog variant, and (iii) practical guidance for ESS/s-optimized tuning on clusters. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 4th Edition)
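For readers unfamiliar with the leapfrog step that both variants build on, here is a minimal single-machine sketch on a toy Gaussian target; it is not the paper's distributed or surrogate-based implementation.

```python
import numpy as np

def leapfrog(q, p, grad_log_post, step_size, n_steps):
    # Standard leapfrog integrator used inside HMC proposals.
    q, p = q.copy(), p.copy()
    p = p + 0.5 * step_size * grad_log_post(q)        # initial half step for momentum
    for _ in range(n_steps - 1):
        q = q + step_size * p                          # full step for position
        p = p + step_size * grad_log_post(q)           # full step for momentum
    q = q + step_size * p
    p = p + 0.5 * step_size * grad_log_post(q)         # final half step for momentum
    return q, -p                                       # negate momentum for reversibility

# Toy target: standard normal, log pi(q) = -0.5 * q^2, so grad log pi(q) = -q.
q0, p0 = np.array([1.0]), np.array([0.5])
q1, p1 = leapfrog(q0, p0, lambda q: -q, step_size=0.1, n_steps=20)
print(q1, p1)
```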
25 pages, 4588 KB  
Article
Bibliometric Mapping of Soil Chemicalization and Fertilizer Research: Environmental and Computational Insights
by Gabriela S. Bungau, Andrei-Flavius Radu, Ada Radu, Delia Mirela Tit and Paul Andrei Negru
Algorithms 2025, 18(10), 660; https://doi.org/10.3390/a18100660 - 17 Oct 2025
Abstract
Soil chemicalization, involving the use of synthetic chemicals like fertilizers, pesticides, and herbicides, has been crucial in modern agriculture but has raised concerns about soil degradation, environmental pollution, and long-term sustainability. Over the past few decades, research has evolved from studying the effects of heavy metals and pesticides to exploring emerging contaminants such as microplastics, biochar, and oxidative stress in soils. Despite this growing body of research, gaps remain in understanding long-term trends, shifts in research priorities, and dynamics of scientific contributions. Notably, bibliometric analyses specifically focused on soil fertilizer research and associated agricultural practices remain scarce and poorly represented in the scientific literature. This bibliometric study examines the development of soil chemicalization research from 1975 to 2025, using data from the Web of Science to analyze scientific output, international cooperation, and thematic patterns. Citation impact peaked in 2018, although recent declines reflect citation lag. China led in total output (1977 documents) but lagged in population-adjusted productivity compared to the U.S. and Australia. Thematic shifts moved from studies on heavy metals and pesticides to research on microplastics, biochar, and oxidative stress, with sustainable soil management becoming a critical focus. Keyword clusters emphasized agricultural sustainability, pollutant toxicity, and bioremediation. Leading institutions included Nanjing Agricultural University, while journals like Science of the Total Environment and Chemosphere led in publications. Challenges remain in evaluating the long-term ecological effects, optimizing sustainable alternatives, and addressing regional disparities. Future research should focus on integrated soil health assessments, emerging contaminants, and policy-driven approaches to minimize environmental risks while sustaining agricultural productivity. Full article
27 pages, 5792 KB  
Article
Optimized Hybrid Deep Learning Framework for Short-Term Power Load Interval Forecasting via Improved Crowned Crested Porcupine Optimization and Feature Mode Decomposition
by Shucheng Luo, Xiangbin Meng, Xinfu Pang, Haibo Li and Zedong Zheng
Algorithms 2025, 18(10), 659; https://doi.org/10.3390/a18100659 - 17 Oct 2025
Abstract
This paper presents an optimized hybrid deep learning model for power load forecasting—QR-FMD-CNN-BiGRU-Attention—that integrates similar day selection, load decomposition, and deep learning to address the nonlinearity and volatility of power load data. Firstly, the original data are classified using Gaussian Mixture Clustering optimized by ICPO (ICPO-GMM), and similar day samples consistent with the predicted day category are selected. Secondly, the load data are decomposed into multi-scale components (IMFs) using feature mode decomposition optimized by ICPO (ICPO-FMD). Then, with the IMFs as targets, the quantile interval forecasting is trained using the CNN-BiGRU-Attention model optimized by ICPO. Subsequently, the forecasting model is applied to the features of the predicted day to generate interval forecasting results. Finally, the model’s performance is validated through comparative evaluation metrics, sensitivity analysis, and interpretability analysis. The experimental results show that, compared with the baseline algorithms considered in this paper, the proposed model improves RMSE by at least 39.84%, MAE by 26.12%, and MAPE by 45.28%, and improves the PICP and MPIW indicators by at least 3.80% and 2.27%, respectively, indicating that the model not only outperforms the comparative models in accuracy but also exhibits stronger adaptability and robustness in complex load fluctuation scenarios. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
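Quantile (interval) forecasts such as those produced by the QR head are typically trained and scored with the pinball loss; the sketch below shows that loss on toy load values (the numbers are illustrative, not from the paper).

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    # Quantile (pinball) loss for quantile level tau in (0, 1).
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

y = np.array([100.0, 120.0, 110.0])                               # observed load
print(pinball_loss(y, np.array([95.0, 118.0, 108.0]), tau=0.1))   # lower-quantile forecast
print(pinball_loss(y, np.array([108.0, 130.0, 115.0]), tau=0.9))  # upper-quantile forecast
```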
16 pages, 995 KB  
Article
An Information Granulation-Enhanced Kernel Principal Component Analysis Method for Detecting Anomalies in Time Series
by Xu Feng, Hongzhou Chai, Jinkai Feng and Yunlong Wu
Algorithms 2025, 18(10), 658; https://doi.org/10.3390/a18100658 - 17 Oct 2025
Abstract
In complex process systems, accurate real-time anomaly detection is essential to ensure operational safety and reliability. This study proposes a novel detection method that combines information granulation with kernel principal component analysis (KPCA). Here, information granulation is introduced as a general framework, with the principle of justifiable granularity (PJG) adopted as the specific implementation. Time series data are first granulated using PJG to extract compact features that preserve local dynamics. The KPCA model, equipped with a radial basis function kernel, is then applied to capture nonlinear correlations and construct monitoring statistics including T² and SPE. Thresholds are derived from training data and used for online anomaly detection. The method is evaluated on the Tennessee Eastman process and Continuous Stirred Tank Reactor datasets, covering various types of faults. Experimental results demonstrate that the proposed method achieves a near-zero false alarm rate below 1% and maintains a missed detection rate under 6%, highlighting its effectiveness and robustness across different fault scenarios and industrial datasets. Full article
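A minimal sketch of RBF-kernel KPCA monitoring with T² and SPE statistics is shown below; the granulated features, retained component count, and 99% control limits are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 8))              # stand-in for granulated training features
X_test = rng.normal(size=(50, 8)) + 0.5          # slightly shifted "faulty" data

kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.1)
T_train = kpca.fit_transform(X_train)
T_test = kpca.transform(X_test)

k = 3                                            # retained principal components
var_k = T_train[:, :k].var(axis=0)               # per-component variance from training scores
t2 = ((T_test[:, :k] ** 2) / var_k).sum(axis=1)  # Hotelling-style T² statistic
spe = (T_test[:, k:] ** 2).sum(axis=1)           # residual-space SPE approximation

t2_lim = np.percentile(((T_train[:, :k] ** 2) / var_k).sum(axis=1), 99)
spe_lim = np.percentile((T_train[:, k:] ** 2).sum(axis=1), 99)
print(((t2 > t2_lim) | (spe > spe_lim)).mean())  # fraction of test samples flagged
```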
18 pages, 2350 KB  
Article
Deep Ensembles and Multisensor Data for Global LCZ Mapping: Insights from So2Sat LCZ42
by Loris Nanni and Sheryl Brahnam
Algorithms 2025, 18(10), 657; https://doi.org/10.3390/a18100657 - 17 Oct 2025
Abstract
Classifying multiband images acquired by advanced sensors, including those mounted on satellites, is a central task in remote sensing and environmental monitoring. These sensors generate high-dimensional outputs rich in spectral and spatial information, enabling detailed analyses of Earth’s surface. However, the complexity of such data presents substantial challenges to achieving both accuracy and efficiency. To address these challenges, we tested an ensemble learning framework based on ResNet50, MobileNetV2, and DenseNet201, with each network trained on distinct three-channel representations of the input to capture complementary features. Training is conducted on the LCZ42 dataset of 400,673 paired Sentinel-1 SAR and Sentinel-2 multispectral image patches annotated with Local Climate Zone (LCZ) labels. Experiments show that our best ensemble surpasses several recent state-of-the-art methods on the LCZ42 benchmark. Full article
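Score-level fusion of the three backbones can be sketched as below; the untrained weights, 17-class head, and random inputs are placeholders for the trained models and the per-network three-channel representations.

```python
import torch
import torchvision.models as models

nets = [models.resnet50(weights=None, num_classes=17),
        models.mobilenet_v2(weights=None, num_classes=17),
        models.densenet201(weights=None, num_classes=17)]   # 17 Local Climate Zone classes

views = [torch.randn(4, 3, 224, 224) for _ in nets]          # one 3-channel representation per backbone

with torch.no_grad():
    probs = [torch.softmax(net(x), dim=1) for net, x in zip(nets, views)]
pred = torch.stack(probs).mean(dim=0).argmax(dim=1)          # sum-rule fusion of softmax scores
print(pred)
```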
19 pages, 892 KB  
Article
Optimizing Renewable Microgrid Performance Through Hydrogen Storage Integration
by Bruno Ribeiro, José Baptista and Adelaide Cerveira
Algorithms 2025, 18(10), 656; https://doi.org/10.3390/a18100656 - 17 Oct 2025
Abstract
The global transition to a low-carbon energy system requires innovative solutions that integrate renewable energy production with storage and utilization technologies. The growth in energy demand, combined with the intermittency of these sources, highlights the need for advanced management models capable of ensuring system stability and efficiency. This paper presents the development of an optimized energy management system integrating renewable sources, with a focus on green hydrogen production via electrolysis, storage, and use through a fuel cell. The system aims to promote energy autonomy and support the transition to a low-carbon economy by reducing dependence on the conventional electricity grid. The proposed model enables flexible hourly energy flow optimization, considering solar availability, local consumption, hydrogen storage capacity, and grid interactions. Formulated as a Mixed-Integer Linear Programming (MILP) model, it supports strategic decision-making regarding hydrogen production, storage, and utilization, as well as energy trading with the grid. Simulations using production and consumption profiles assessed the effects of hydrogen storage capacity and electricity price variations. Results confirm the effectiveness of the model in optimizing system performance under different operational scenarios. Full article
(This article belongs to the Special Issue Optimization in Renewable Energy Systems (2nd Edition))
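A toy MILP in the same spirit (hourly energy balance, electrolyser/fuel-cell conversion, grid purchases) can be written with PuLP as below; all parameters, efficiencies, and bounds are hypothetical and far simpler than the paper's formulation.

```python
import pulp

T = range(4)                                     # four hourly periods (toy horizon)
pv = [3.0, 5.0, 2.0, 0.0]                        # PV production (kWh)
load = [2.0, 2.5, 3.0, 4.0]                      # local demand (kWh)
price = [0.10, 0.12, 0.20, 0.25]                 # grid price per kWh

m = pulp.LpProblem("microgrid_h2", pulp.LpMinimize)
buy = pulp.LpVariable.dicts("buy", T, lowBound=0)
charge = pulp.LpVariable.dicts("to_electrolyser", T, lowBound=0)
discharge = pulp.LpVariable.dicts("from_fuel_cell", T, lowBound=0)
h2 = pulp.LpVariable.dicts("h2_level", T, lowBound=0, upBound=6)
on = pulp.LpVariable.dicts("electrolyser_on", T, cat="Binary")

m += pulp.lpSum(price[t] * buy[t] for t in T)    # minimise grid purchase cost
for t in T:
    m += pv[t] + buy[t] + 0.5 * discharge[t] == load[t] + charge[t]                 # hourly energy balance
    m += charge[t] <= 10 * on[t]                                                    # electrolyser on/off logic
    m += h2[t] == (h2[t - 1] if t > 0 else 0) + 0.7 * charge[t] - discharge[t]      # storage dynamics

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```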
22 pages, 972 KB  
Article
Strategies for Parallelization of Algorithms for Integer Partition
by Iliya Bouyukliev, Dushan Bikov and Maria Pashinska-Gadzheva
Algorithms 2025, 18(10), 655; https://doi.org/10.3390/a18100655 - 16 Oct 2025
Abstract
In this work, we present strategies for parallelizing algorithms that represent integers as sums of positive integers using OpenMP (Open Multi-Processing). We consider different types of algorithms: a non-recursive algorithm, a recursive algorithm, and modifications of the latter that introduce restrictions on the number and size of the summands. Different strategies for their parallelization are presented. In order to evaluate the efficiency of the strategies, we consider both the execution times and the distribution of work among threads in the presented implementations. Full article
(This article belongs to the Special Issue Advances in Parallel and Distributed AI Computing)
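The restricted-partition recurrence that such algorithms parallelize can be sketched as follows; the paper's implementations use OpenMP in C/C++, whereas this is a sequential Python illustration of the counting recursion.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, k, m):
    # Count partitions of n into at most k parts, each part at most m.
    if n == 0:
        return 1
    if k == 0 or m == 0:
        return 0
    use_m = partitions(n - m, k - 1, m) if n >= m else 0   # use one part equal to m ...
    return use_m + partitions(n, k, m - 1)                 # ... or cap the largest part at m - 1

print(partitions(10, 10, 10))   # 42: the unrestricted partitions of 10
print(partitions(10, 3, 10))    # partitions of 10 into at most 3 parts
```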
19 pages, 4105 KB  
Essay
HIPACO: An RSSI Indoor Positioning Algorithm Based on Improved Ant Colony Optimization Algorithm
by Yiying Zhao and Baohua Jin
Algorithms 2025, 18(10), 654; https://doi.org/10.3390/a18100654 - 16 Oct 2025
Abstract
Aiming at the shortcomings of traditional ACO algorithms in indoor localization applications, a high-performance improved ant colony algorithm (HIPACO) based on a dynamic hybrid pheromone strategy is proposed. The algorithm divides the ant colony into worker ants (local exploitation) and soldier ants (global exploration) through a division-of-labor mechanism, in which the worker ants use a pheromone-weighted learning strategy for refined search and the soldier ants perform Gaussian perturbation-guided global exploration. At the same time, an adaptive pheromone attenuation model (elite particle enhancement, ordinary particle attenuation) and a dimensional balance strategy (sinusoidal modulation function) are designed to dynamically optimize the search process, and a hybrid guidance mechanism enhances the robustness of the algorithm by applying adaptive Gaussian perturbation to repeatedly failed particles. The experimental results show that in the 3D localization scenario with four beacon nodes, the average localization error of HIPACO is 0.82 ± 0.35 m, which is 42.3% lower than that of the traditional ACO algorithm; the convergence speed is improved by 2.1 times; and optimal performance is maintained under different numbers of anchor nodes and spatial scales. This study provides an efficient solution to the indoor localization problem in the presence of multipath effects and non-line-of-sight propagation. Full article
21 pages, 677 KB  
Systematic Review
Quantifying Statistical Heterogeneity and Reproducibility in Cooperative Multi-Agent Reinforcement Learning: A Meta-Analysis of the SMAC Benchmark
by Rex Li and Chunyu Liu
Algorithms 2025, 18(10), 653; https://doi.org/10.3390/a18100653 - 16 Oct 2025
Abstract
This study presents the first quantitative meta-analysis in cooperative multi-agent reinforcement learning (MARL). Undertaken on the StarCraft Multi-Agent Challenge (SMAC) benchmark, we quantify reproducibility and statistical heterogeneity across studies using the five algorithms introduced in the original SMAC paper (IQL, VDN, QMIX, COMA, QTRAN) on five widely used maps at a fixed 2M-step budget. The analysis pools win rates via multilevel mixed-effects meta-regression with cluster-robust variance and reports Algorithm × Map cell-specific heterogeneity and 95% prediction intervals. Results show that heterogeneity is pervasive: 17/25 cells exhibit high heterogeneity (I² ≥ 80%), indicating between-study variance dominates sampling error. Moderator analyses find publication year significantly explains part of residual variance, consistent with secular drift in tooling and defaults. Prediction intervals are broad across most cells, implying a new study can legitimately exhibit substantially lower or higher performance than pooled means. The study underscores the need for standardized reporting (SC2 versioning, evaluation episode counts, hyperparameters), preregistered map panels, open code/configurations, and machine-readable curves to enable robust, heterogeneity-aware synthesis and more reproducible SMAC benchmarking. Full article
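The I² statistic quoted above follows from Cochran's Q; a minimal sketch for one Algorithm × Map cell with made-up win rates and variances:

```python
import numpy as np

y = np.array([0.62, 0.75, 0.48, 0.70])        # study-level win rates (toy values)
v = np.array([0.004, 0.006, 0.005, 0.003])    # within-study sampling variances

w = 1.0 / v                                   # inverse-variance weights
y_bar = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
Q = np.sum(w * (y - y_bar) ** 2)              # Cochran's Q
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100             # share of variance beyond sampling error
print(round(I2, 1))                           # values >= 80 would count as high heterogeneity
```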
18 pages, 1611 KB  
Article
A Graph-Based Algorithm for Detecting Long Non-Coding RNAs Through RNA Secondary Structure Analysis
by Hugo Cabrera-Ibarra, David Hernández-Granados and Lina Riego-Ruiz
Algorithms 2025, 18(10), 652; https://doi.org/10.3390/a18100652 - 16 Oct 2025
Abstract
Non-coding RNAs (ncRNAs) are involved in many biological processes, making their identification and functional characterization a priority. Among them, long non-coding RNAs (lncRNAs) have been shown to regulate diverse cellular processes, such as cell development, stress response, and transcriptional regulation. The continued identification of new lncRNAs highlights the demand for reliable methods for their detection, with structural analysis offering valuable insight. Currently, lncRNAs are identified using tools such as LncFinder, whose database has a large collection of lncRNAs from humans, mice, and chickens, among others. In this work, we present a graph-based algorithm to represent and compare RNA secondary structures. Rooted tree graphs were used to compare two groups of Saccharomyces cerevisiae RNA sequences, lncRNAs and non-lncRNAs, by searching for structural similarities within each group. When applied to a novel candidate sequence dataset, the algorithm evaluated whether characteristic structures identified in known lncRNAs recurred. If so, the sequences were classified as likely lncRNAs. These results indicate that graph-based structural analysis offers a complementary methodology for identifying lncRNAs and may complement existing sequence-based tools such as LncFinder or PreLnc. Recent studies have shown that tumor cells can secrete lncRNAs into human biological fluids, forming circulating lncRNAs that can be used as biomarkers for cancer. Our algorithm could be applied to identify novel lncRNAs with structural similarities to those associated with tumor malignancy. Full article
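As a rough illustration of the rooted-tree idea, the sketch below converts a dot-bracket secondary structure into a tree of nested stems and reports its depth; the paper's actual graph construction and similarity search are more elaborate.

```python
def dot_bracket_to_tree(structure):
    # Each "(" opens a child node, each ")" closes it; unpaired "." bases are ignored here.
    root = {"children": []}
    stack = [root]
    for ch in structure:
        if ch == "(":
            node = {"children": []}
            stack[-1]["children"].append(node)
            stack.append(node)
        elif ch == ")":
            stack.pop()
    return root

def depth(node):
    return 1 + max((depth(c) for c in node["children"]), default=0)

tree = dot_bracket_to_tree("((..((...))..((...))))")
print(depth(tree))   # one crude descriptor that could feed a structural comparison
```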
19 pages, 2117 KB  
Article
Point-Wise Full-Field Physics Neural Mapping Framework via Boundary Geometry Constrained for Large Thermoplastic Deformation
by Jue Wang, Xinyi Xu, Changxin Ye and Wei Huangfu
Algorithms 2025, 18(10), 651; https://doi.org/10.3390/a18100651 - 16 Oct 2025
Abstract
Computational modeling for large thermoplastic deformation of plastic solids is critical for industrial applications like non-invasive assessment of engineering components. While deep learning-based methods have emerged as promising alternatives to traditional numerical simulations, they often suffer from systematic errors caused by geometric mismatches between predicted and ground truth meshes. To overcome this limitation, we propose a novel boundary geometry-constrained neural framework that establishes direct point-wise mappings between spatial coordinates and full-field physical quantities within the deformed domain. The key contributions of this work are as follows: (1) a two-stage strategy that separates geometric prediction from physics-field resolution by constructing direct, point-wise mappings between coordinates and physical quantities, inherently avoiding errors from mesh misalignment; (2) a boundary-condition-aware encoding mechanism that ensures physical consistency under complex loading conditions; and (3) a fully mesh-free approach that operates on point clouds without structured discretization. Experimental results demonstrate that our method achieves a 36–98% improvement in prediction accuracy over deep learning baselines, offering an efficient alternative for high-fidelity simulation of large thermoplastic deformations. Full article
(This article belongs to the Special Issue AI Applications and Modern Industry)
19 pages, 895 KB  
Review
Machine Learning in Reverse Logistics: A Systematic Literature Review
by Abner Fernandes Souza da Silva, Virginia Aparecida da Silva Moris, João Eduardo Azevedo Ramos da Silva, Murilo Aparecido Voltarelli and Tiago F. A. C. Sigahi
Algorithms 2025, 18(10), 650; https://doi.org/10.3390/a18100650 - 16 Oct 2025
Abstract
Reverse logistics (RL) plays a crucial role in promoting circularity and sustainability in supply chains, particularly in the face of increasing waste generation and growing environmental demands. In recent years, machine learning (ML) has emerged as a strategic tool to enhance processes, decision-making, and outcomes in RL. This article presents a systematic review of ML applications in reverse logistics, highlighting trends, challenges, and research opportunities. The analysis covers 52 articles retrieved from the Scopus and Web of Science databases, following the PRISMA protocol. The results show that the most frequently employed techniques are supervised models, followed by unsupervised methods and, to a lesser extent, reinforcement learning. The main ML applications in RL focus on return and waste generation forecasting, process optimization, classification, pricing, reliability assessments, and consumer behavior analysis. The studies examined predominantly use traditional evaluation metrics, such as MAPE and F1-score, while few consider multidimensional indicators encompassing long-term social or environmental impacts. Key challenges identified include data scarcity and quality, inherent uncertainties in reverse supply chains, and the high computational cost of models. This article also points to research gaps concerning metadata standardization, the absence of public benchmarks, model explainability, and the integration of ML with simulations and digital twins, indicating pathways toward more robust, transparent, and sustainable RL. Full article
23 pages, 5261 KB  
Article
FocusNet: A Lightweight Insulator Defect Detection Network via First-Order Taylor Importance Assessment and Knowledge Distillation
by Yurong Jing, Zhiyong Tao and Sen Lin
Algorithms 2025, 18(10), 649; https://doi.org/10.3390/a18100649 - 16 Oct 2025
Abstract
In the detection of small targets such as insulator defects and flashovers, the existing YOLOv11 struggles with insufficient feature extraction and with balancing model compactness against detection accuracy. We propose a lightweight architecture called FocusNet based on YOLOv11n. To improve the feature expression ability of small targets, an Aggregation Diffusion Neck is designed to achieve deep integration and optimization of features at different levels through multiple rounds of multi-scale feature fusion and scale adaptation, and a Focus module is introduced to focus on and strengthen the key features of small targets. On this basis, to achieve efficient deployment, the Group-Level First-Order Taylor Expansion Importance Assessment Method is proposed to eliminate channels that have little impact on detection accuracy and streamline the model structure. Then, Channel Distribution Distillation compensates for the slight accuracy loss caused by pruning, finally achieving the dual optimization of high accuracy and high efficiency. Furthermore, we analyze the interpretability of FocusNet via heatmaps generated by KPCA-CAM. Experiments show that FocusNet achieves 98.50% precision and 99.20% mAP@0.5 on a proprietary insulator defect detection database created for this project using only 3.80 GFLOPs. This research provides reliable technical support for insulator monitoring in power systems. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (3rd Edition))
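A first-order Taylor importance score of the kind used for pruning can be sketched per convolutional channel as |activation × gradient|; the tiny network, data, and median threshold below are illustrative, not FocusNet's group-level procedure.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
head = nn.Linear(16, 2)
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 2, (8,))

feat = conv(x)
feat.retain_grad()                                   # keep the feature-map gradient
logits = head(feat.mean(dim=(2, 3)))                 # global average pooling + linear head
nn.functional.cross_entropy(logits, y).backward()

# |activation * gradient|, averaged over batch and spatial dims: one score per channel.
importance = (feat.detach() * feat.grad).abs().mean(dim=(0, 2, 3))
prune_candidates = (importance < importance.median()).nonzero().flatten()
print(importance)
print(prune_candidates)                              # channels with the least first-order impact
```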
16 pages, 3378 KB  
Article
Cosine Prompt-Based Class Incremental Semantic Segmentation for Point Clouds
by Lei Guo, Hongye Li, Min Pang, Kaowei Liu, Xie Han and Fengguang Xiong
Algorithms 2025, 18(10), 648; https://doi.org/10.3390/a18100648 - 16 Oct 2025
Abstract
Although current 3D semantic segmentation methods have achieved significant success, they suffer from catastrophic forgetting when confronted with dynamic, open environments. To address this issue, class incremental learning is introduced to update models while maintaining a balance between plasticity and stability. In this work, we propose CosPrompt, a rehearsal-free approach for class incremental semantic segmentation. Specifically, we freeze the prompts for existing classes and incrementally expand and fine-tune the prompts for new classes, thereby generating discriminative and customized features. We employ clamping operations to regulate backward propagation, ensuring smooth training. Furthermore, we utilize the learning without forgetting loss and pseudo-label generation to further mitigate catastrophic forgetting. We conduct comparative and ablation experiments on the S3DIS dataset and ScanNet v2 dataset, demonstrating the effectiveness and feasibility of our method. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
24 pages, 985 KB  
Article
Artificial Intelligence-Driven Diagnostics in Eye Care: A Random Forest Approach for Data Classification and Predictive Modeling
by Luís F. F. M. Santos, Miguel Ángel Sánchez-Tena, Cristina Alvarez-Peregrina and Clara Martinez-Perez
Algorithms 2025, 18(10), 647; https://doi.org/10.3390/a18100647 - 15 Oct 2025
Abstract
Artificial intelligence and machine learning have increasingly transformed optometry, enabling automated classification and predictive modeling of eye conditions. In this study, we introduce Optometry Random Forest, an artificial intelligence-based system for automated classification and forecasting of optometric data. The proposed methodology leverages Random Forest models, trained on academic optometric datasets, to classify key diagnostic categories, including Contactology, Dry Eye, Low Vision, Myopia, Pediatrics, and Refractive Surgery. Additionally, an autoregressive integrated moving average (ARIMA)-based forecasting model is incorporated to predict future research trends in optometry until 2030. A comparison of the one-shot and epoch-trained Optometry Random Forest models indicates that the epoch-trained model consistently outperforms the one-shot model, achieving superior classification accuracy (97.17%), precision (97.28%), and specificity (100%). Moreover, the comparative analysis with Optometry Bidirectional Encoder Representations from Transformers demonstrates that the Optometry Random Forest excels in classification reliability and predictive analytics, positioning it as a robust artificial intelligence tool for clinical decision-making and resource allocation. This research highlights the potential of Random Forest models in medical artificial intelligence, offering a scalable and interpretable solution for automated diagnosis, predictive analytics, and artificial intelligence-enhanced decision support in optometry. Future work should focus on integrating real-world clinical datasets to further refine classification performance and enhance the potential for artificial intelligence-driven patient care. Full article
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)
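The two modelling stages, Random Forest classification plus an ARIMA trend forecast, can be sketched on synthetic data as follows; the feature encoding, labels, ARIMA order, and counts are placeholders rather than the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                    # stand-in for encoded optometric records
y = rng.integers(0, 3, size=300)                 # e.g. Dry Eye / Myopia / Low Vision labels
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X[:5]))                        # predicted diagnostic categories

counts = np.array([12, 15, 14, 18, 21, 25, 24, 30, 33, 35], dtype=float)  # yearly publication counts
fit = ARIMA(counts, order=(1, 1, 1)).fit()
print(fit.forecast(steps=5))                     # crude projection of the research trend
```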
16 pages, 5544 KB  
Article
Visual Feature Domain Audio Coding for Anomaly Sound Detection Application
by Subin Byun and Jeongil Seo
Algorithms 2025, 18(10), 646; https://doi.org/10.3390/a18100646 - 15 Oct 2025
Abstract
Conventional audio and video codecs are designed for human perception, often discarding subtle spectral cues that are essential for machine-based analysis. To overcome this limitation, we propose a machine-oriented compression framework that reinterprets spectrograms as visual objects and applies Feature Coding for Machines (FCM) to anomalous sound detection (ASD). In our approach, audio signals are transformed into log-mel spectrograms, from which intermediate feature maps are extracted, compressed, and reconstructed through the FCM pipeline. For comparison, we implement AAC-LC (Advanced Audio Coding Low Complexity) as a representative perceptual audio codec and VVC (Versatile Video Coding) as a spectrogram-based video codec. Experiments were conducted on the DCASE (Detection and Classification of Acoustic Scenes and Events) 2023 Task 2 dataset, covering four machine types (fan, valve, toycar, slider), with anomaly detection performed using the official Autoencoder baseline model released in DCASE 2024. Detection scores were computed from reconstruction error and Mahalanobis distance. The results show that the proposed FCM-based ACoM (Audio Coding for Machines) achieves comparable or superior performance to AAC at less than half the bitrate, reliably preserving critical features even under ultra-low bitrate conditions (1.3–6.3 kbps). While VVC retains competitive performance only at high bitrates, it degrades sharply at low bitrates. These findings demonstrate that feature-based compression offers a promising direction for next-generation ACoM standardization, enabling efficient and robust ASD in bandwidth-constrained industrial environments. Full article
(This article belongs to the Special Issue Visual Attributes in Computer Vision Applications)
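The scoring principle, reconstructing log-mel frames and flagging high reconstruction error, can be sketched as below, with PCA standing in for the DCASE autoencoder baseline and a synthetic tone standing in for machine audio.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA

sr = 16000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
signal = np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(t.size)   # toy "machine" recording

mel = librosa.feature.melspectrogram(y=signal.astype(np.float32), sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)               # shape: (n_mels, n_frames)
frames = log_mel.T                               # one feature vector per frame

pca = PCA(n_components=8).fit(frames)            # stand-in for the trained autoencoder
recon = pca.inverse_transform(pca.transform(frames))
frame_score = ((frames - recon) ** 2).mean(axis=1)
print(frame_score.mean())                        # clip-level anomaly score: higher = more anomalous
```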
25 pages, 902 KB  
Article
Deconstructing a Minimalist Transformer Architecture for Univariate Time Series Forecasting
by Filippo Garagnani and Vittorio Maniezzo
Algorithms 2025, 18(10), 645; https://doi.org/10.3390/a18100645 - 14 Oct 2025
Abstract
This paper provides a detailed breakdown of a minimalist, fundamental Transformer-based architecture for forecasting univariate time series. It describes each processing step in detail, from input embedding and positional encoding to self-attention mechanisms and output projection. All of these steps are specifically tailored to sequential temporal data. By isolating and analyzing the role of each component, this paper demonstrates how Transformers capture long-term dependencies in time series. A simplified, interpretable Transformer model named ’minimalist Transformer’ is implemented and showcased using a simple example. It is then validated using the M3 forecasting competition benchmark, which is based on real-world data, and a number of data series generated by IoT sensors. The aim of this work is to serve as a practical guide and foundation for future Transformer-based forecasting innovations, providing a solid baseline that is simple to achieve but exhibits a stable forecasting ability not far behind that of state-of-the-art specialized designs. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
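The ingredients listed in the abstract (input embedding, sinusoidal positional encoding, encoder self-attention, output projection) fit in a few dozen lines; the model below is a generic sketch with arbitrary sizes, not the paper's exact minimalist Transformer.

```python
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    def __init__(self, d_model=32, n_heads=4, horizon=12, max_len=512):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                     # scalar observations -> d_model
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-torch.log(torch.tensor(10000.0)) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)                     # sinusoidal positional encoding
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon)                # project to the forecast horizon

    def forward(self, x):                                      # x: (batch, window, 1)
        h = self.embed(x) + self.pe[: x.size(1)]
        h = self.encoder(h)                                    # self-attention over the window
        return self.head(h[:, -1])                             # forecast from the last position

print(TinyForecaster()(torch.randn(8, 48, 1)).shape)           # 48-step window -> 12-step forecast
```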
22 pages, 5357 KB  
Article
An Effective Approach to Rotatory Fault Diagnosis Combining CEEMDAN and Feature-Level Integration
by Sumika Chauhan, Govind Vashishtha and Prabhkiran Kaur
Algorithms 2025, 18(10), 644; https://doi.org/10.3390/a18100644 - 12 Oct 2025
Abstract
This paper introduces an effective approach for rotatory fault diagnosis, specifically focusing on centrifugal pumps, by combining complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and feature-level integration. Centrifugal pumps are critical in various industries, and their condition monitoring is essential for reliability. The proposed methodology addresses the limitations of traditional single-sensor fault diagnosis by fusing information from acoustic and vibration signals. CEEMDAN was employed to decompose raw signals into intrinsic mode functions (IMFs), mitigating noise and non-stationary characteristics. Weighted kurtosis was used to select significant IMFs, and a comprehensive set of time, frequency, and time–frequency domain features was extracted. Feature-level fusion integrated these features, and a support vector machine (SVM) classifier, optimized using the crayfish optimization algorithm (COA), identified different health conditions. The methodology was validated on a centrifugal pump with various impeller defects, achieving a classification accuracy of 95.0%. The results demonstrate the efficacy of the proposed approach in accurately diagnosing the state of centrifugal pumps. Full article
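The IMF-selection idea can be illustrated with a plain kurtosis ranking of candidate components (a simplified stand-in for the paper's weighted-kurtosis criterion and for CEEMDAN itself, which is omitted here).

```python
import numpy as np
from scipy.stats import kurtosis
from scipy.signal import hilbert

fs = 5000
t = np.arange(0, 1, 1 / fs)
impulses = (np.random.rand(t.size) > 0.999).astype(float) * 5.0     # sparse, fault-like impacts

components = [                                                      # stand-ins for CEEMDAN IMFs
    np.sin(2 * np.pi * 50 * t),                                     # smooth shaft-frequency component
    np.sin(2 * np.pi * 500 * t) * impulses + 0.1 * np.random.randn(t.size),  # impulsive component
    0.2 * np.random.randn(t.size),                                  # broadband noise
]

scores = [kurtosis(np.abs(hilbert(c))) for c in components]         # envelope kurtosis per component
keep = np.argsort(scores)[::-1][:2]                                 # retain the most impulsive IMFs
print(np.round(scores, 2), keep)
```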
20 pages, 444 KB  
Article
A Gumbel-Based Selection Data-Driven Evolutionary Algorithm and Its Application to Chinese Text-Based Cheating Official Accounts Mining
by Jiheng Yuan and Jian-Yu Li
Algorithms 2025, 18(10), 643; https://doi.org/10.3390/a18100643 - 12 Oct 2025
Abstract
Data-driven evolutionary algorithms (DDEAs) are essential computational intelligence methods for solving expensive optimization problems (EOPs). The management of surrogate models for fitness predictions, particularly the selection and integration of multiple models, is key to their success. However, how to select and integrate models to obtain accurate predictions remains a challenging issue. This paper proposes a novel Gumbel-based selection DDEA named GBS-DDEA, which innovates in both model selection and model integration. First, a Gumbel-based selection (GBS) strategy is proposed to probabilistically choose surrogate models. GBS employs a Gumbel-based distribution to strike a balance between exploiting high-accuracy models and exploring others, providing a more principled and robust selection strategy than conventional probability sampling. Second, a ranking-based weighting ensemble (RBWE) strategy is developed. Instead of relying on absolute error metrics that can be sensitive to outliers, RBWE assigns integration weights based on the models’ relative performance rankings, leading to a more stable and reliable ensemble prediction. Comprehensive experiments on various benchmark problems and a Chinese text-based cheating official accounts mining problem demonstrate that GBS-DDEA consistently outperforms several state-of-the-art DDEAs, confirming the effectiveness and superiority of the proposed dual-strategy approach. Full article
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
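The Gumbel-max trick behind probabilistic model selection can be sketched as follows; the accuracy-to-score mapping and temperature are illustrative, not the paper's exact GBS rule.

```python
import numpy as np

def gumbel_select(scores, temperature=0.1, rng=None):
    # Adding i.i.d. Gumbel(0, 1) noise to scaled scores and taking the argmax samples a model
    # with probability proportional to exp(score / temperature).
    rng = rng or np.random.default_rng()
    g = rng.gumbel(size=len(scores))
    return int(np.argmax(np.asarray(scores) / temperature + g))

surrogate_accuracy = [0.81, 0.77, 0.85, 0.60]            # validation accuracy of each surrogate
rng = np.random.default_rng(0)
picks = [gumbel_select(surrogate_accuracy, rng=rng) for _ in range(1000)]
print(np.bincount(picks, minlength=4) / 1000)            # better models are picked more, never exclusively
```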
23 pages, 4988 KB  
Article
Contextual Object Grouping (COG): A Specialized Framework for Dynamic Symbol Interpretation in Technical Security Diagrams
by Jan Kapusta, Waldemar Bauer and Jerzy Baranowski
Algorithms 2025, 18(10), 642; https://doi.org/10.3390/a18100642 - 10 Oct 2025
Abstract
This paper introduces Contextual Object Grouping (COG), a specific computer vision framework that enables automatic interpretation of technical security diagrams through dynamic legend learning for intelligent sensing applications. Unlike traditional object detection approaches that rely on post-processing heuristics to establish relationships between the detected elements, COG embeds contextual understanding directly into the detection process by treating spatially and functionally related objects as unified semantic entities. We demonstrate this approach in the context of Cyber-Physical Security Systems (CPPS) assessment, where the same symbol may represent different security devices across different designers and projects. Our proof-of-concept implementation using YOLOv8 achieves robust detection of legend components (mAP50 ≈ 0.99, mAP50–95 ≈ 0.81) and successfully establishes symbol–label relationships for automated security asset identification. The framework introduces a new ontological class—the contextual COG class that bridges atomic object detection and semantic interpretation, enabling intelligent sensing systems to perceive context rather than infer it through post-processing reasoning. This proof-of-concept appears to validate the COG hypothesis and suggests new research directions for structured visual understanding in smart sensing environments, with applications potentially extending to building automation and cyber-physical security assessment. Full article
15 pages, 1428 KB  
Article
A Decision Tree Regression Algorithm for Real-Time Trust Evaluation of Battlefield IoT Devices
by Ioana Matei and Victor-Valeriu Patriciu
Algorithms 2025, 18(10), 641; https://doi.org/10.3390/a18100641 - 10 Oct 2025
Abstract
This paper presents a novel gateway-centric architecture for context-aware trust evaluation in Internet of Battle Things (IoBT) environments. The system is structured across multiple layers, from embedded sensing devices equipped with internal modules for signal filtering, anomaly detection, and encryption, to high-level data processing in a secure cloud infrastructure. At its core, the gateway evaluates the trustworthiness of sensor nodes by computing reputation scores based on behavioral and contextual metrics. This design offers operational advantages, including reduced latency, autonomous decision-making in the absence of central command, and real-time responses in mission-critical scenarios. Our system integrates supervised learning, specifically Decision Tree Regression (DTR), to estimate reputation scores using features such as transmission success rate, packet loss, latency, battery level, and peer feedback. The results demonstrate that the proposed approach ensures secure, resilient, and scalable trust management in distributed battlefield networks, enabling informed and reliable decision-making under harsh and dynamic conditions. Full article
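Estimating a reputation score with Decision Tree Regression can be sketched as below; the five behavioural features and the synthetic ground-truth trust formula are hypothetical stand-ins for the gateway's real metrics.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
# Columns: transmission success rate, packet loss, latency, battery level, peer feedback (scaled to [0, 1]).
X = rng.uniform(size=(500, 5))
trust = 0.4 * X[:, 0] - 0.2 * X[:, 1] - 0.2 * X[:, 2] + 0.1 * X[:, 3] + 0.3 * X[:, 4]
trust = np.clip(trust + rng.normal(scale=0.02, size=500), 0, 1)      # noisy synthetic reputation labels

model = DecisionTreeRegressor(max_depth=5).fit(X, trust)
new_node = np.array([[0.95, 0.05, 0.10, 0.80, 0.90]])               # a well-behaved sensor node
print(model.predict(new_node))                                       # estimated reputation score
```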
22 pages, 6298 KB  
Article
TMP-M2Align: A Topology-Aware Multiobjective Approach to the Multiple Sequence Alignment of Transmembrane Proteins
by Joel Cedeño-Muñoz, Cristian Zambrano-Vega and Antonio J. Nebro
Algorithms 2025, 18(10), 640; https://doi.org/10.3390/a18100640 - 10 Oct 2025
Abstract
Transmembrane proteins (TMPs) constitute approximately 30% of the mammalian proteome and are critical targets in biomedical research due to their involvement in signaling, transport, and drug interactions. However, their unique structural characteristics pose significant challenges for conventional multiple sequence alignment (MSA) methods, which are typically optimized for soluble proteins. In this paper, we propose TMP-M2Align, a novel topology-aware multiobjective algorithm specifically designed for the multiple alignment of TMPs. The method simultaneously optimizes two complementary objectives: (i) a topology-aware Sum-of-Pairs (SPs) score that integrates region-specific substitution matrices and gap penalties, and (ii) an Aligned Regions (ARs) score that rewards consistent alignment of functional and topological domains. By combining these objectives, TMP-M2Align generates Pareto front approximations of alignment solutions, enabling researchers to select trade-offs that best suit their biological questions. We evaluated TMP-M2Align on BAliBASE Reference Set 7 and on complete datasets of human G protein-coupled receptors (GPCRs) from classes A, B1, and C. Experimental results demonstrate that TMP-M2Align consistently outperforms both traditional alignment tools and specialized TM-specific methods in terms of SPs and Total Column metrics. Moreover, qualitative topological analyses confirm that TMP-M2Align preserves the integrity of transmembrane helices and loop boundaries more effectively than competing approaches. These findings highlight the effectiveness of integrating topology-aware scoring with multiobjective optimization for achieving accurate and biologically meaningful alignments of TMPs. Full article
(This article belongs to the Special Issue Advanced Research on Machine Learning Algorithms in Bioinformatics)
29 pages, 1205 KB  
Article
OIKAN: A Hybrid AI Framework Combining Symbolic Inference and Deep Learning for Interpretable Information Retrieval Models
by Didar Yedilkhan, Arman Zhalgasbayev, Sabina Saleshova and Nursultan Khaimuldin
Algorithms 2025, 18(10), 639; https://doi.org/10.3390/a18100639 - 10 Oct 2025
Abstract
The rapid expansion of AI applications in various domains demands models that balance predictive power with human interpretability, a requirement that has catalyzed the development of hybrid algorithms combining high accuracy with human-readable outputs. This study introduces a novel neuro-symbolic framework, OIKAN (Optimized Interpretable Kolmogorov–Arnold Network), designed to integrate the representational power of feedforward neural networks with the transparency of symbolic regression. The framework employs Gaussian noise-based data augmentation and a two-phase sparse symbolic regression pipeline using ElasticNet, producing analytical expressions suitable for both classification and regression problems. Evaluated on 60 classification and 58 regression datasets from the Penn Machine Learning Benchmarks (PMLB), OIKAN Classifier achieved a median accuracy of 0.886, with perfect performance on linearly separable datasets, while OIKAN Regressor reached a median R² score of 0.705, peaking at 0.992. In comparative experiments with ElasticNet, DecisionTree, and XGBoost baselines, OIKAN showed competitive accuracy while maintaining substantially higher interpretability, highlighting its distinct contribution to the field of explainable AI. OIKAN demonstrated computational efficiency, with fast training and low inference time and memory usage, highlighting its suitability for real-time and embedded applications. However, the results revealed that performance declined more noticeably on high-dimensional or noisy datasets, particularly those lacking compact symbolic structures, emphasizing the need for adaptive regularization, expanded function libraries, and refined augmentation strategies to enhance robustness and scalability. These results underscore OIKAN’s ability to deliver transparent, mathematically tractable models without sacrificing performance, paving the way for explainable AI in scientific discovery and industrial engineering. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
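The symbolic stage, sparse ElasticNet regression over a library of candidate terms read back as an analytical expression, can be sketched as below; the basis, penalties, and target function are illustrative, and OIKAN's neural and augmentation stages are omitted.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 2))
y = 1.5 * X[:, 0] ** 2 - 0.8 * X[:, 0] * X[:, 1] + 0.3 + rng.normal(scale=0.05, size=400)

basis = PolynomialFeatures(degree=2, include_bias=False)     # candidate symbolic terms
Phi = basis.fit_transform(X)
model = ElasticNet(alpha=0.01, l1_ratio=0.9).fit(Phi, y)     # sparse coefficient selection

names = basis.get_feature_names_out(["x0", "x1"])
terms = [f"{c:+.2f}*{n}" for c, n in zip(model.coef_, names) if abs(c) > 1e-2]
print(f"y ≈ {model.intercept_:+.2f} " + " ".join(terms))     # human-readable fitted expression
```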
21 pages, 6258 KB  
Article
Texture-Adaptive Fabric Defect Detection via Dynamic Subspace Feature Extraction and Luminance Reconstruction
by Weitao Wu, Zengwen Zhang, Zhong Xiang and Miao Qian
Algorithms 2025, 18(10), 638; https://doi.org/10.3390/a18100638 - 9 Oct 2025
Abstract
Defect detection in textile manufacturing is critically hampered by the inefficiency of manual inspection and the dual constraints of deep learning (DL) approaches. Specifically, DL models suffer from poor generalization, as the rapid iteration of fabric types makes acquiring sufficient training data impractical. Furthermore, their high computational costs impede real-time industrial deployment. To address these challenges, this paper proposes a texture-adaptive fabric defect detection method. Our approach begins with a Dynamic Subspace Feature Extraction (DSFE) technique to extract spatial luminance features of the fabric. Subsequently, a Light Field Offset-Aware Reconstruction Model (LFOA) is introduced to reconstruct the luminance distribution, effectively compensating for environmental lighting variations. Finally, we develop a texture-adaptive defect detection system to identify potential defective regions, alongside a probabilistic ‘OutlierIndex’ to quantify their likelihood of being true defects. This system is engineered to rapidly adapt to new fabric types with a small number of labeled samples, demonstrating strong generalization and suitability for dynamic industrial conditions. Experimental validation confirms that our method achieves 70.74% accuracy, decisively outperforming existing models by over 30%. Full article
(This article belongs to the Topic Soft Computing and Machine Learning)
33 pages, 2980 KB  
Article
Phymastichus–Hypothenemus Algorithm for Minimizing and Determining the Number of Pinned Nodes in Pinning Control of Complex Networks
by Jorge A. Lizarraga, Alberto J. Pita, Javier Ruiz-Leon, Alma Y. Alanis, Luis F. Luque-Vega, Rocío Carrasco-Navarro, Carlos Lara-Álvarez, Yehoshua Aguilar-Molina and Héctor A. Guerrero-Osuna
Algorithms 2025, 18(10), 637; https://doi.org/10.3390/a18100637 - 9 Oct 2025
Abstract
Pinning control is a key strategy for stabilizing complex networks through a limited set of nodes. However, determining the optimal number and location of pinned nodes under dynamic and structural constraints remains a computational challenge. This work proposes an improved version of the Phymastichus–Hypothenemus Algorithm—Minimized and Determinated (PHA-MD) to solve multi-constraint, hybrid optimization problems in pinning control without requiring a predefined number of control nodes. Inspired by the parasitic behavior of Phymastichus coffea on Hypothenemus hampei, the algorithm models each agent as a parasitoid capable of propagating influence across a network, inheriting node importance and dynamically expanding search dimensions through its “offspring.” Unlike its original formulation, PHA-MD integrates variable-length encoding and V-stability assessment to autonomously identify a minimal yet effective pinning set. The method was evaluated on benchmark network topologies and compared against state-of-the-art heuristic algorithms. The results show that PHA-MD consistently achieves asymptotic stability using fewer pinned nodes while maintaining energy efficiency and convergence robustness. These findings highlight the potential of biologically inspired, dimension-adaptive algorithms in solving high-dimensional, combinatorial control problems in complex dynamical systems. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms: 2nd Edition)
27 pages, 957 KB  
Review
Deep Learning for Brain MRI Tissue and Structure Segmentation: A Comprehensive Review
by Nedim Šišić and Peter Rogelj
Algorithms 2025, 18(10), 636; https://doi.org/10.3390/a18100636 - 9 Oct 2025
Abstract
Brain MRI segmentation plays a crucial role in neuroimaging studies and clinical trials by enabling the precise localization and quantification of brain tissues and structures. The advent of deep learning has transformed the field, offering accurate and fast tools for MRI segmentation. Nevertheless, several challenges limit the widespread applicability of these methods in practice. In this systematic review, we provide a comprehensive analysis of developments in deep learning-based segmentation of brain MRI in adults, segmenting the brain into tissues, structures, and regions of interest. We explore the key model factors influencing segmentation performance, including architectural design, choice of input size and model dimensionality, and generalization strategies. Furthermore, we address validation practices, which are particularly important given the scarcity of manual annotations, and identify the limitations of current methodologies. We present an extensive compilation of existing segmentation works and highlight the emerging trends and key results. Finally, we discuss the challenges and potential future directions in the field. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (4th Edition))
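Since the review emphasizes validation practices under scarce manual annotations, a small illustration of the most common quantitative check may help: per-label Dice overlap between a predicted and a reference label map. The arrays and label values below are toy placeholders, not data from any of the reviewed works.

```python
# Minimal sketch of per-label Dice overlap for integer label maps of equal shape.
import numpy as np

def dice_per_label(pred, ref, labels):
    """Return {label: Dice score}; empty labels score 1.0 by convention."""
    scores = {}
    for lab in labels:
        p = (pred == lab)
        r = (ref == lab)
        denom = p.sum() + r.sum()
        scores[lab] = 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0
    return scores

# Toy example: a 3-class label map (0 = background, 1 and 2 = two tissue classes).
rng = np.random.default_rng(0)
ref = rng.integers(0, 3, size=(64, 64, 64))
pred = ref.copy()
pred[:8] = 0  # simulate a small segmentation error
print(dice_per_label(pred, ref, labels=[1, 2]))
```

In practice the reference map would come from manual annotation or an atlas-based silver standard, which is exactly where the annotation scarcity discussed in the review becomes a limitation.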
23 pages, 2817 KB  
Article
Characterizing and Optimizing Spatial Selectivity of Peripheral Nerve Stimulation Montages and Electrode Configurations In Silico
by Jonathan Brand, Ryan Kochis, Vasav Shah and Wentai Liu
Algorithms 2025, 18(10), 635; https://doi.org/10.3390/a18100635 - 9 Oct 2025
Abstract
Spatially selective nerve stimulation is an active area of research, with the potential to reduce side effects and increase the clinical efficacy of nerve stimulation technologies. Several research groups have demonstrated proof-of-concept devices capable of performing spatially selective stimulation with multi-contact cuff electrodes in vivo; however, optimizing the technique is difficult due to the large space of possible montages offered by a multi-electrode cuff. Our work attempts to elucidate the most valuable stimulation montages (current ratios between stimulating electrodes) provided by a multi-contact cuff. We characterized the performance of five different montage types when stimulating fibers in different “electrode configurations”, with configurations including up to three rings of electrode contacts, 13 different counts of electrodes per ring, and five electrode arc lengths per electrode count (for 195 unique configurations). Selected montages included several methods from prior art, as well as our own. Among montage types, the most spatially selective was one we refer to as “X-Adjacent” stimulation, in which three adjacent electrodes are active per ring. Optimized X-Adjacent montages achieved an average fiber specificity of 71.9% for single-ring electrode configurations when stimulating fibers located at a depth of two-thirds of the nerve radius, and an average fiber specificity of 77.2% for two-ring configurations. These values were the highest among the montages tested and, in combination with our other metrics, led these montages to perform best in the majority of cost functions investigated. This success leads us to recommend X-Adjacent montages to researchers exploring spatially selective stimulation. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (4th Edition))
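To make the montage terminology concrete, the sketch below enumerates “three adjacent contacts active” montages on a single ring of cuff contacts, as described in the abstract. The 0.25/0.5/0.25 current split across the three contacts is an assumption for illustration; the paper's optimized ratios and multi-ring configurations are not reproduced here.

```python
# Illustrative sketch: one X-Adjacent-style montage per possible center contact.
import numpy as np

def x_adjacent_montages(n_contacts, split=(0.25, 0.5, 0.25)):
    """Return an array of shape (n_contacts, n_contacts); each row is one montage,
    i.e., per-contact current fractions summing to 1, centered on a different contact."""
    montages = np.zeros((n_contacts, n_contacts))
    for center in range(n_contacts):
        for offset, weight in zip((-1, 0, 1), split):
            montages[center, (center + offset) % n_contacts] = weight
    return montages

print(x_adjacent_montages(8))
```

Each row could then be scaled by the total stimulation current and passed to a field-and-fiber model to estimate the kind of fiber-specificity values reported in the abstract.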
21 pages, 1254 KB  
Article
AI-Enhanced PBL and Experiential Learning for Communication and Career Readiness: An Engineering Pilot Course
by Estefanía Avilés Mariño and Antonio Sarasa Cabezuelo
Algorithms 2025, 18(10), 634; https://doi.org/10.3390/a18100634 - 9 Oct 2025
Abstract
This study investigates the utilisation of AI tools, including Grammarly Free, QuillBot Free, Canva Free Individual, and others, to enhance learning outcomes for 180 second-year telecommunications engineering students at Universidad Politécnica de Madrid. This research incorporates teaching methods like problem-based learning, experiential learning, task-based learning, and content–language integrated learning, with English as the medium of instruction. These tools were strategically used to enhance language skills, foster computational thinking, and promote critical problem-solving. A control group comprising 120 students who did not receive AI support was included for comparative analysis, providing a baseline for evaluating the impact of the AI tools on learning outcomes. The results indicated that the pilot group, utilising AI tools, demonstrated superior performance compared to the control group in listening comprehension (98.79% vs. 90.22%) and conceptual understanding (95.82% vs. 84.23%). These findings underscore the significance of these skills in enhancing communication and problem-solving abilities within the field of engineering. The assessment of the pilot course’s forum revealed a progression in participants from initially error-prone and brief responses to refined, evidence-based reflections. This evolution contributed to the 87% success rate of pilot-course participants in conducting complex contextual analyses. Following these results, an educational innovation project aims to implement the AI-PBL-CLIL model at Universidad Politécnica de Madrid from 2025 to 2026. Future research should look into adaptive AI systems for personalised learning and study the long-term effects of AI integration in higher education. Furthermore, collaborating with industry partners can significantly enhance the practical application of AI-based methods in engineering education. These strategies facilitate benchmarking against international standards, provide structured support for skill development, and ensure the sustained retention of professional competencies, ultimately elevating the international recognition of Spain’s engineering education. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
20 pages, 3456 KB  
Article
TWISS: A Hybrid Multi-Criteria and Wrapper-Based Feature Selection Method for EMG Pattern Recognition in Prosthetic Applications
by Aura Polo, Nelson Cárdenas-Bolaño, Lácides Antonio Ripoll Solano, Lely A. Luengas-Contreras and Carlos Robles-Algarín
Algorithms 2025, 18(10), 633; https://doi.org/10.3390/a18100633 - 8 Oct 2025
Abstract
This paper proposes TWISS (TOPSIS + Wrapper Incremental Subset Selection), a novel hybrid feature selection framework designed for electromyographic (EMG) pattern recognition in upper-limb prosthetic control. TWISS integrates the multi-criteria decision-making method TOPSIS with a forward wrapper search strategy, enabling subject-specific feature optimization based on a ranking that combines filter metrics, including Chi-squared, ANOVA, and Mutual Information. Unlike conventional static feature sets, such as the Hudgins configuration (48 features: four per channel, 12 channels) or All Features (192 features: 16 per channel, 12 channels), TWISS dynamically adapts feature subsets to each subject, addressing inter-subject variability and classification robustness challenges in EMG systems. The proposed algorithm was evaluated on the publicly available Ninapro DB7 dataset, comprising both intact and transradial amputee participants, and implemented in an open-source, fully reproducible environment. Two Google Colab tools were developed to support diverse workflows: one for end-to-end feature extraction and selection, and another for selection on precomputed feature sets. Experimental results demonstrated that TWISS achieved a median F1-macro score of 0.6614 with Logistic Regression, outperforming the All Features set (0.6536) and significantly surpassing the Hudgins set (0.5626) while reducing feature dimensionality. TWISS offers a scalable and computationally efficient solution for feature selection in biomedical signal processing and beyond, promoting the development of personalized, low-cost prosthetic control systems and other resource-constrained applications. Full article
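The general pattern the abstract describes (filter scores fused into one ranking, then a forward wrapper scored by macro-F1) can be sketched with scikit-learn. The code below is not the authors' released Colab tooling: the synthetic dataset, the equal TOPSIS weights, the classifier settings, and the acceptance rule are assumptions for illustration only.

```python
# Hedged sketch of a TWISS-style pipeline: filter metrics -> TOPSIS-like ranking
# -> forward incremental wrapper with logistic regression and macro-F1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=600, n_features=40, n_informative=10, random_state=0)
X01 = MinMaxScaler().fit_transform(X)            # chi2 requires non-negative inputs

# --- filter metrics per feature ---
criteria = np.column_stack([
    chi2(X01, y)[0],
    f_classif(X01, y)[0],
    mutual_info_classif(X01, y, random_state=0),
])

# --- TOPSIS-style fusion (all criteria treated as "benefit", equal weights) ---
norm = criteria / np.linalg.norm(criteria, axis=0)
ideal, anti = norm.max(axis=0), norm.min(axis=0)
d_pos = np.linalg.norm(norm - ideal, axis=1)
d_neg = np.linalg.norm(norm - anti, axis=1)
ranking = np.argsort(-(d_neg / (d_pos + d_neg)))  # best-ranked features first

# --- forward wrapper: keep a candidate feature only if CV macro-F1 improves ---
clf = LogisticRegression(max_iter=1000)
selected, best_f1 = [], 0.0
for feat in ranking:
    trial = selected + [feat]
    f1 = cross_val_score(clf, X01[:, trial], y, scoring="f1_macro", cv=5).mean()
    if f1 > best_f1:
        selected, best_f1 = trial, f1

print(f"selected {len(selected)} features, CV macro-F1 = {best_f1:.3f}")
```

In an EMG setting, the synthetic matrix would be replaced by per-channel time- and frequency-domain features (for example, a Hudgins-style set) extracted per subject from a dataset such as Ninapro DB7, which is where the subject-specific ranking becomes useful.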