Algorithms, Volume 18, Issue 2 (February 2025) – 65 articles

Cover Story: Tensor networks are powerful data structures, developed for quantum system simulations, that have found use in machine learning due to their high performance in the HPC setting. It is known that when they have highly regular geometries, dimensionality has a large impact on representation power. For heterogeneous structures, however, these effects are not well characterized. In this article, we train tensor networks with different geometries to encode a random quantum state and find that densely connected structures achieve lower infidelities than sparser structures, with higher success rates and shorter training times. We also give some insight into how to reduce the memory requirements of these sparse structures and how doing so affects training, and we use a latest-generation supercomputer to showcase performance improvements with GPU acceleration.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 3819 KiB  
Article
Robust Client Selection Strategy Using an Improved Federated Random High Local Performance Algorithm to Address High Non-IID Challenges
by Pramote Sittijuk, Narin Petrot and Kreangsak Tamee
Algorithms 2025, 18(2), 118; https://doi.org/10.3390/a18020118 - 19 Feb 2025
Viewed by 394
Abstract
This paper introduces an improved version of the Federated Random High Local Performance (Fed-RHLP) algorithm, specifically aimed at addressing the difficulties posed by Non-IID (Non-Independent and Identically Distributed) data within the context of federated learning. The refined Fed-RHLP algorithm implements a more targeted client selection approach, emphasizing clients based on the size of their datasets, the diversity of labels, and the performance of their local models. It employs a biased roulette wheel mechanism for selecting clients, which improves the aggregation of the global model. This approach ensures that the global model is primarily influenced by high-performing clients while still permitting contributions from those with lower performance during the model training process. Experimental findings indicate that the improved Fed-RHLP algorithm significantly surpasses existing methodologies, including FederatedAveraging (FedAvg), Power of Choice (PoC), and FedChoice, by achieving superior global model accuracy, accelerated convergence rates, and decreased execution times, especially under conditions of high Non-IID data. Furthermore, the improved Fed-RHLP algorithm exhibits resilience even when the number of clients participating in local model updates and aggregation is diminished in each communication round. This characteristic positively influences the conservation of limited communication and computational resources. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
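The biased roulette-wheel client selection the abstract describes can be sketched as follows. The per-client score is simplified here to a single precomputed positive weight (the paper combines dataset size, label diversity, and local model performance; the exact scoring formula is not reproduced here):

```python
import random

def roulette_select(clients, scores, k, rng=random):
    """Select k distinct clients, each draw proportional to its score."""
    pool = list(clients)
    weights = list(scores)
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(weights)
        r = rng.uniform(0.0, total)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        # Remove the winner so later draws pick among the remaining clients.
        chosen.append(pool.pop(i))
        weights.pop(i)
    return chosen
```

High-scoring clients dominate the draw, yet low-scoring clients retain a nonzero chance of contributing, which matches the behavior the abstract attributes to Fed-RHLP.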

25 pages, 1212 KiB  
Article
TOCA-IoT: Threshold Optimization and Causal Analysis for IoT Network Anomaly Detection Based on Explainable Random Forest
by Ibrahim Gad
Algorithms 2025, 18(2), 117; https://doi.org/10.3390/a18020117 - 19 Feb 2025
Viewed by 566
Abstract
The Internet of Things (IoT) is developing quickly, which has opened up new opportunities in many different fields. As the number of IoT devices continues to expand, particularly in transportation and healthcare, the need for efficient and secure operations has become critical, and IoT connections will continue to spread across different fields in the next few years. At the same time, several problems, such as security, interoperability, and standards, require further attention to ensure safe and effective operations. This research investigates the efficacy of integrating explainable artificial intelligence (XAI) techniques and causal inference methods to enhance network anomaly detection. This study proposes a robust TOCA-IoT framework that utilizes the linear non-Gaussian acyclic model (LiNGAM) to find causal relationships in network traffic data, thereby improving the accuracy and interpretability of anomaly detection. A refined threshold optimization strategy is employed to address the challenge of selecting optimal thresholds for anomaly classification. The performance of the TOCA-IoT model was evaluated on an IoT benchmark dataset known as CICIoT2023; the framework achieved the highest accuracy of 100% and an F-score of 100% in classifying the IoT attacks. These results highlight the potential of combining causal discovery with XAI for building more robust and transparent anomaly detection systems. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
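The threshold-optimization step can be illustrated by a plain grid search over candidate thresholds that maximizes F1 on labeled data; this is a generic stand-in, not the paper's refined strategy:

```python
def best_threshold(scores, labels, candidates=None):
    """Pick the anomaly-score threshold that maximizes F1 on labeled data."""
    if candidates is None:
        candidates = sorted(set(scores))  # every observed score is a candidate
    best_t, best_f1 = None, -1.0
    for t in candidates:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```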

11 pages, 2233 KiB  
Article
Knowledge Discovery in Predicting Martensite Start Temperature of Medium-Carbon Steels by Artificial Neural Networks
by Xiao-Song Wang, Anoop Kumar Maurya, Muhammad Ishtiaq, Sung-Gyu Kang and Nagireddy Gari Subba Reddy
Algorithms 2025, 18(2), 116; https://doi.org/10.3390/a18020116 - 19 Feb 2025
Cited by 1 | Viewed by 385
Abstract
Martensite start (Ms) temperature is a critical parameter in the production of parts and structural steels and plays a vital role in heat treatment processes to achieve desired properties. However, it is often challenging to estimate accurately through experience alone. This study introduces a model that predicts the Ms temperature of medium-carbon steels based on their chemical compositions using the artificial neural network (ANN) method and compares the results with those from previous empirical formulae. The results indicate that the ANN model surpasses conventional methods in predicting the Ms temperature of medium-carbon steel, achieving an average absolute error of −0.93 degrees and −0.097% in mean percentage error. Furthermore, this research provides an accurate method or tool with which to present the quantitative effect of alloying elements on the Ms temperature of medium-carbon steels. This approach is straightforward, visually interpretable, and highly accurate, making it valuable for materials design and prediction of material properties. Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science)
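The two reported error measures (an average error of −0.93 degrees and a mean percentage error of −0.097%, both signed, so negative values indicate under-prediction on average) can be computed as follows; the function names are ours:

```python
def mean_error(predicted, measured):
    """Signed average error: negative means under-prediction on average."""
    return sum(p - m for p, m in zip(predicted, measured)) / len(predicted)

def mean_percentage_error(predicted, measured):
    """Signed mean percentage error, relative to the measured values."""
    return 100.0 * sum((p - m) / m for p, m in zip(predicted, measured)) / len(predicted)
```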

31 pages, 429 KiB  
Article
Solution of Bin Packing Instances in Falkenauer T Class: Not So Hard
by György Dósa, András Éles, Angshuman Robin Goswami, István Szalkai and Zsolt Tuza
Algorithms 2025, 18(2), 115; https://doi.org/10.3390/a18020115 - 19 Feb 2025
Viewed by 477
Abstract
In this work, the Bin Packing combinatorial optimization problem is studied from the practical side. The focus is on the Falkenauer T benchmark class, a collection of 80 problem instances that are considered hard to handle algorithmically. Contrary to this widely accepted view, we show that the instances of this benchmark class can be solved relatively easily, without applying any sophisticated methods such as metaheuristics. A new algorithm is proposed that can operate in two modes, using either backtracking or local search to find an optimal packing. In theory, both operating modes are guaranteed to find a solution. Computational results show that all instances of the Falkenauer T benchmark class can be solved in a total of 1.18 s and 2.39 s with the two operating modes alone, or 0.2 s when running in parallel. Full article
(This article belongs to the Special Issue Hybrid Intelligent Algorithms (2nd Edition))
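The backtracking operating mode can be sketched as a feasibility test: can the items be packed into k bins of a given capacity? Sorting items in decreasing order and skipping bins with identical loads are standard prunings; this is a generic sketch, not the authors' algorithm:

```python
def packs_into(items, k, capacity):
    """Backtracking feasibility test for bin packing with k bins."""
    items = sorted(items, reverse=True)  # place big items first
    bins = [0] * k

    def place(i):
        if i == len(items):
            return True
        seen = set()  # symmetry pruning: skip bins with an already-tried load
        for b in range(k):
            if bins[b] + items[i] <= capacity and bins[b] not in seen:
                seen.add(bins[b])
                bins[b] += items[i]
                if place(i + 1):
                    return True
                bins[b] -= items[i]  # undo and try the next bin
        return False

    return place(0)
```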

19 pages, 4995 KiB  
Article
Energy Management and Hosting Capacity Evaluation of a Hybrid AC-DC Micro Grid Including Photovoltaic Units and Battery Energy Storage Systems
by Mohammed Ajel Awdaa, Elaheh Mashhour, Hossein Farzin and Mahmood Joorabian
Algorithms 2025, 18(2), 114; https://doi.org/10.3390/a18020114 - 18 Feb 2025
Viewed by 457
Abstract
Renewable energy sources must be scheduled to manage power flow and load demand. Photovoltaic power generation is usually connected to power distribution networks and is not designed to add significant amounts of production when electricity demand increases. Therefore, it is necessary to increase the generated capacity (i.e., hosting capacity) to meet the expansion in demand. This paper discusses two topics. The first is how to create an energy management strategy (EMS) for a hybrid micro-grid containing photovoltaic (PV) units and a battery energy storage system (BESS). A model was created in MATLAB to manage the charging and discharging of the BESS, with PV as the energy source. The model is connected to the main grid, and an m-file is linked to the model to control variable settings, implemented using a logical–numerical modeling method. The second topic is how to evaluate the hosting capacity (HC) without causing the network to collapse, achieved by choosing the best location and size for the PV units. This study used two algorithms, particle swarm optimization (PSO) and Harris hawks optimization (HHO); the fast decoupled power flow (FDPF) method was adopted for network analysis, and finally the results of the two algorithms were compared. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
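Of the two metaheuristics compared, particle swarm optimization is the easier to sketch. The inertia and cognitive/social coefficients below are common textbook defaults, not the paper's settings, and the objective is a toy function rather than the hosting-capacity model:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO minimizing f over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity: inertia + pull toward personal and global bests.
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, xs[i][:]
                if val < gval:
                    gval, gbest = val, xs[i][:]
    return gbest, gval
```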

28 pages, 2303 KiB  
Article
DLMinTC+: A Deep Learning Based Algorithm for Minimum Timeline Cover on Temporal Graphs
by Giorgio Lazzarinetti, Riccardo Dondi, Sara Manzoni and Italo Zoppis
Algorithms 2025, 18(2), 113; https://doi.org/10.3390/a18020113 - 17 Feb 2025
Viewed by 400
Abstract
Combinatorial optimization on temporal graphs is critical for summarizing dynamic networks in various fields, including transportation, social networks, and biology. Among these problems, the Minimum Timeline Cover (MinTCover) problem, aimed at identifying minimal activity intervals for representing temporal interactions, remains underexplored in the context of advanced machine learning techniques. Existing heuristic and approximate methods, while effective in certain scenarios, struggle with capturing complex temporal dependencies and scalability in dense, large-scale networks. Addressing this gap, this paper introduces DLMinTC+, a novel deep learning-based algorithm for solving the MinTCover problem. The proposed method integrates Graph Neural Networks for structural embedding, Transformer-based temporal encoding, and Pointer Networks for activity interval selection, coupled with an iterative adjustment algorithm to ensure valid solutions. Key contributions include (i) demonstrating the efficacy of deep learning for temporal combinatorial optimization, achieving superior accuracy and efficiency over state-of-the-art heuristics, and (ii) advancing the analysis of temporal knowledge graphs by incorporating robust, time-sensitive embeddings. Extensive evaluations on synthetic and real-world datasets highlight DLMinTC+’s ability to achieve significant coverage size reduction while maintaining generalization, offering a scalable and precise solution for complex temporal networks. Full article

19 pages, 566 KiB  
Article
Enumerating Minimal Vertex Covers and Dominating Sets with Capacity and/or Connectivity Constraints
by Yasuaki Kobayashi, Kazuhiro Kurita, Kevin Mann, Yasuko Matsui and Hirotaka Ono
Algorithms 2025, 18(2), 112; https://doi.org/10.3390/a18020112 - 17 Feb 2025
Viewed by 395
Abstract
In this paper, we consider the problems of enumerating minimal vertex covers and minimal dominating sets under capacity and/or connectivity constraints. We develop polynomial-delay enumeration algorithms for these problems on bounded-degree graphs. For the case of minimal connected vertex covers, our algorithms run with polynomial delay even on the class of d-claw-free graphs; this result extends to bounded-degree graphs, and the enumeration runs in quasi-polynomial time on general graphs. To complement these algorithmic results, we show that the minimal connected vertex cover, minimal connected dominating set, and minimal capacitated vertex cover enumeration problems on 2-degenerate bipartite graphs are at least as hard as enumerating minimal transversals in hypergraphs. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers from IWOCA 2024)

32 pages, 3219 KiB  
Article
Enhancing Energy Microgrid Sizing: A Multiyear Optimization Approach with Uncertainty Considerations for Optimal Design
by Sebastián F. Castellanos-Buitrago, Pablo Maya-Duque, Walter M. Villa-Acevedo, Nicolás Muñoz-Galeano and Jesús M. López-Lezama
Algorithms 2025, 18(2), 111; https://doi.org/10.3390/a18020111 - 17 Feb 2025
Viewed by 422
Abstract
This paper addresses the challenge of optimizing microgrid sizing to enhance reliability and efficiency in electrical energy supply. A comprehensive framework that integrates multiyear optimization with uncertainty considerations is presented to facilitate optimal microgrid design. The aim is to economically, safely, and reliably supply electrical energy to communities with limited or no access to the main power grid, primarily utilizing renewable sources such as solar and wind technologies. The proposed framework incorporates environmental stochasticity, electrical demand uncertainty, and various electrical generation technologies. Electric power generation models are developed, and a metaheuristic optimization method is employed to minimize total costs while improving power supply reliability. The practical utility of the developed computational tool is emphasized, highlighting its significance in decision-making for microgrid installations. Utilizing real-world data, the approach involves a two-stage process: the first stage focuses on installation decisions, and the second evaluates operational performance using an iterated local search (ILS) optimization algorithm. Additionally, dispatch strategies are implemented to optimize computational time and enable real-time network modeling. The proposed microgrid sizing approach is a valuable asset for optimizing decision-making processes, significantly contributing to extending electricity coverage in non-interconnected zones while minimizing costs and ensuring steadfast reliability. Full article
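The second-stage iterated local search can be sketched generically: run local search, perturb the incumbent, re-run local search, and keep the result if it improves. The toy one-dimensional problem below is only for illustration; the paper's operational model is far richer:

```python
import random

def iterated_local_search(start, local_search, perturb, cost, iters=50, seed=0):
    """Generic ILS skeleton: descend, perturb, re-descend, keep if better."""
    rng = random.Random(seed)
    best = local_search(start)
    for _ in range(iters):
        candidate = local_search(perturb(best, rng))
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy illustration: minimize (x - 3)^2 over the integers.
def cost(x):
    return (x - 3) ** 2

def local_search(x):
    while True:
        nxt = min((x - 1, x, x + 1), key=cost)  # steepest-descent step
        if nxt == x:
            return x
        x = nxt

def perturb(x, rng):
    return x + rng.randint(-10, 10)  # random restart-like kick
```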

20 pages, 3364 KiB  
Article
Optimized Travel Itineraries: Combining Mandatory Visits and Personalized Activities
by Parida Jewpanya, Pinit Nuangpirom, Siwasit Pitjamit and Warisa Nakkiew
Algorithms 2025, 18(2), 110; https://doi.org/10.3390/a18020110 - 17 Feb 2025
Viewed by 754
Abstract
Tourism refers to the activity of traveling for pleasure, recreation, or leisure purposes. It encompasses a wide range of activities and experiences, from sightseeing to cultural exploration. In today’s digital age, tourists often organize their excursions independently by utilizing information available on websites. However, due to constraints such as travel time and budget, many still require assistance with vacation planning to optimize their experiences. Therefore, this paper proposes an algorithm for personalized tourism planning that considers tourists’ preferences. For instance, the algorithm can recommend places to visit and suggest activities based on tourist requirements. The proposed algorithm utilizes an extended model of the team orienteering problem with time windows (TOPTW) to account for mandatory locations and activities at each site. It offers trip planning that includes a set of locations and activities designed to maximize the overall score accumulated from visiting these locations. To solve the proposed model, the Adaptive Neighborhood Simulated Annealing (ANSA) algorithm is applied. ANSA is an enhanced version of the well-known Simulated Annealing algorithm (SA), providing an adaptive mechanism to manage the probability of selecting neighborhood moves during the SA search process. The computational results demonstrate that ANSA performs well in solving benchmark problems. Furthermore, a real-world tourist destination in Tak Province, Thailand, is used as a case study to illustrate the effectiveness of the proposed model. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
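The adaptive-neighborhood idea behind ANSA can be illustrated with a simulated-annealing loop whose move-selection weights grow when a neighborhood keeps producing improvements. This is a simplified stand-in for the paper's mechanism, with assumed reward and cooling parameters:

```python
import math
import random

def adaptive_sa(start, moves, cost, t0=100.0, cooling=0.95, steps=500, seed=0):
    """SA whose neighborhood-selection weights adapt to recent success."""
    rng = random.Random(seed)
    weights = [1.0] * len(moves)
    current, current_cost = start, cost(start)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        i = rng.choices(range(len(moves)), weights=weights)[0]
        candidate = moves[i](current, rng)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current, current_cost = candidate, candidate_cost
            if delta < 0:
                weights[i] += 0.5  # reward neighborhoods that keep improving
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost
```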

15 pages, 1752 KiB  
Article
Optimizing Investment Portfolios with Bacterial Foraging and Robust Risk Management
by Hubert Zarzycki
Algorithms 2025, 18(2), 109; https://doi.org/10.3390/a18020109 - 17 Feb 2025
Viewed by 350
Abstract
This study introduces a novel portfolio optimization approach that combines Bacterial Foraging Optimization (BFO) with risk management techniques and Sharpe ratio analysis. BFO, a nature-inspired algorithm, is employed to construct diversified portfolios, while risk management strategies, including stop-loss limits and transaction cost considerations, enhance risk control. The Sharpe ratio is used to evaluate the efficiency of the investment strategy by accounting for risk-adjusted returns. The experiments demonstrate that this approach effectively balances risk and return, making it a valuable tool for portfolio management in dynamic financial markets. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
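The Sharpe ratio used to evaluate risk-adjusted returns is straightforward to compute; the sample standard deviation and a zero default risk-free rate below are our assumptions (annualization is omitted):

```python
def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe ratio: mean excess return divided by sample std of returns."""
    n = len(returns)
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / n
    var = sum((e - mean) ** 2 for e in excess) / (n - 1)  # sample variance
    return mean / var ** 0.5
```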

11 pages, 877 KiB  
Article
Beyond Spectrograms: Rethinking Audio Classification from EnCodec’s Latent Space
by Jorge Perianez-Pascual, Juan D. Gutiérrez, Laura Escobar-Encinas, Álvaro Rubio-Largo and Roberto Rodriguez-Echeverria
Algorithms 2025, 18(2), 108; https://doi.org/10.3390/a18020108 - 16 Feb 2025
Viewed by 499
Abstract
This paper presents a novel approach to audio classification leveraging the latent representation generated by Meta’s EnCodec neural audio codec. We hypothesize that the compressed latent space captures essential audio features better suited to classification tasks than traditional spectrogram-based approaches. To validate this, we train a vanilla convolutional neural network for music genre, speech/music, and environmental sound classification using EnCodec’s encoder output as input, and compare its performance against the same network trained on a spectrogram-based representation. Our experiments demonstrate that this approach achieves accuracy comparable to state-of-the-art methods while exhibiting significantly faster convergence and reduced computational load during training. We also analyze the characteristics of EnCodec’s output, providing insights into the advantages of this novel approach. These findings suggest the potential of EnCodec’s latent representation for efficient, faster, and less expensive audio classification applications. Full article

26 pages, 1259 KiB  
Article
Multi-Strategy Improved Artificial Rabbit Algorithm for QoS-Aware Service Composition in Cloud Manufacturing
by Le Deng, Ting Shu and Jinsong Xia
Algorithms 2025, 18(2), 107; https://doi.org/10.3390/a18020107 - 15 Feb 2025
Viewed by 432
Abstract
Cloud manufacturing represents a pioneering service paradigm that provides flexible, personalized manufacturing services to customers via the Internet. Service composition plays a crucial role in cloud manufacturing: it integrates dispersed manufacturing services on the cloud platform into a complete composite service, forming an efficient and collaborative manufacturing solution that fulfills the customer’s requirements with the highest service quality. This research presents the multi-strategy improved artificial rabbit optimization (MIARO) technique, designed to overcome the limitations of the original method, which often risks converging to local optima and producing poor-quality solutions. MIARO helps the algorithm escape local optima with Lévy flights, extends local search with the golden sine mechanism, and increases variability with Archimedean spiral mutations. MIARO is evaluated on 23 benchmark functions, 3 engineering design problems, and QoS-aware cloud service composition (QoS-CSC) instances of various sizes, and the experimental findings indicate that MIARO delivers outstanding performance and offers a viable solution to the QoS-CSC problem. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
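The Lévy-flight steps such improved metaheuristics use to escape local optima are commonly generated with Mantegna's algorithm; β = 1.5 is a typical choice, not necessarily the paper's setting:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-flight step via Mantegna's algorithm (heavy-tailed)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

Most steps are small, but the heavy tail occasionally produces very long jumps, which is what lets the search leave a local optimum's basin.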

21 pages, 3748 KiB  
Article
Machine Learning for Decision Support and Automation in Games: A Study on Vehicle Optimal Path
by Gonçalo Penelas, Luís Barbosa, Arsénio Reis, João Barroso and Tiago Pinto
Algorithms 2025, 18(2), 106; https://doi.org/10.3390/a18020106 - 15 Feb 2025
Viewed by 560
Abstract
In the field of gaming artificial intelligence, selecting the appropriate machine learning approach is essential for improving decision-making and automation. This paper examines the effectiveness of deep reinforcement learning (DRL) within interactive gaming environments, focusing on complex decision-making tasks. Utilizing the Unity engine, we conducted experiments to evaluate DRL methodologies in simulating realistic and adaptive agent behavior. A vehicle driving game is implemented, in which the goal is to reach a certain target within a small number of steps, while respecting the boundaries of the roads. Our study compares Proximal Policy Optimization (PPO) and Soft Actor–Critic (SAC) in terms of learning efficiency, decision-making accuracy, and adaptability. The results demonstrate that PPO successfully learns to reach the target, achieving higher and more stable cumulative rewards. Conversely, SAC struggles to reach the target, displaying significant variability and lower performance. These findings highlight the effectiveness of PPO in this context and indicate the need for further development, adaptation, and tuning of SAC. This research contributes innovative approaches to using ML to improve how player agents adapt and react to their environments, thereby enhancing realism and dynamics in gaming experiences. Additionally, this work emphasizes the utility of games for evolving such models, preparing them for real-world applications, namely autonomous vehicle driving and optimal route calculation. Full article
(This article belongs to the Special Issue Algorithms for Games AI)

21 pages, 2447 KiB  
Article
Advancing Taxonomy with Machine Learning: A Hybrid Ensemble for Species and Genus Classification
by Loris Nanni, Matteo De Gobbi, Roger De Almeida Matos Junior and Daniel Fusaro
Algorithms 2025, 18(2), 105; https://doi.org/10.3390/a18020105 - 14 Feb 2025
Viewed by 625
Abstract
Traditionally, classifying species has required taxonomic experts to carefully examine unique physical characteristics, a time-intensive and complex process. Machine learning offers a promising alternative by utilizing computational power to detect subtle distinctions more quickly and accurately. This technology can classify both known (described) and unknown (undescribed) species, assigning known samples to specific species and grouping unknown ones at the genus level—an improvement over the common practice of labeling unknown species as outliers. In this paper, we propose a novel ensemble approach that integrates neural networks with support vector machines (SVM). Each animal is represented by an image and its DNA barcode. Our research investigates the transformation of one-dimensional vector data into two-dimensional three-channel matrices using discrete wavelet transform (DWT), enabling the application of convolutional neural networks (CNNs) that have been pre-trained on large image datasets. Our method significantly outperforms existing approaches, as demonstrated on several datasets containing animal images and DNA barcodes. By enabling the classification of both described and undescribed species, this research represents a major step forward in global biodiversity monitoring. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))
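The 1D-to-2D transformation of barcode vectors can be illustrated with a one-level Haar DWT. The particular channel layout below (raw signal, approximation coefficients, detail coefficients, each zero-padded into a square grid) is our assumption for illustration, not the paper's exact mapping:

```python
def haar_dwt(signal):
    """One-level Haar DWT: approximation and detail coefficients."""
    root2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / root2
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / root2
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def vector_to_channels(vector, side):
    """Map a 1D vector to a 3-channel side x side 'image' (zero-padded)."""
    approx, detail = haar_dwt(vector)

    def grid(values):
        padded = (list(values) + [0.0] * (side * side))[: side * side]
        return [padded[r * side:(r + 1) * side] for r in range(side)]

    return [grid(vector), grid(approx), grid(detail)]
```

A matrix in this shape can then be fed to a CNN pre-trained on three-channel images, which is the reuse the abstract describes.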

15 pages, 3372 KiB  
Article
A Training Algorithm for Locally Recurrent Neural Networks Based on the Explicit Gradient of the Loss Function
by Sara Carcangiu and Augusto Montisci
Algorithms 2025, 18(2), 104; https://doi.org/10.3390/a18020104 - 14 Feb 2025
Viewed by 582
Abstract
In this paper, a new algorithm for the training of Locally Recurrent Neural Networks (LRNNs) is presented, which aims to reduce computational complexity while guaranteeing the stability of the network during training. The main feature of the proposed algorithm is its ability to represent the gradient of the error in explicit form. The algorithm builds on the interpretation of the Fibonacci sequence as the output of a second-order IIR filter, which makes it possible to use Binet’s formula, allowing the generic term of the sequence to be calculated directly. Thanks to this approach, the gradient of the loss function can be calculated explicitly during training and expressed in terms of the parameters that control the stability of the neural network. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))
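Binet's closed form, which the algorithm exploits to compute sequence terms directly rather than recursively, can be checked against the iterative definition (the floating-point evaluation is exact only up to moderate n):

```python
def fib_binet(n):
    """Closed-form Fibonacci via Binet's formula."""
    phi = (1 + 5 ** 0.5) / 2  # golden ratio
    psi = (1 - 5 ** 0.5) / 2  # conjugate root
    return round((phi ** n - psi ** n) / 5 ** 0.5)

def fib_iter(n):
    """Reference iterative Fibonacci."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```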

22 pages, 4006 KiB  
Article
Building a Custom Crime Detection Dataset and Implementing a 3D Convolutional Neural Network for Video Analysis
by Juan Camilo Londoño Lopera, Freddy Bolaños Martinez and Luis Alejandro Fletscher Bocanegra
Algorithms 2025, 18(2), 103; https://doi.org/10.3390/a18020103 - 14 Feb 2025
Viewed by 673
Abstract
This study addresses the challenge of detecting crimes against individuals in public security applications, particularly where the availability of quality data is limited, and existing models exhibit a lack of generalization to real-world scenarios. To mitigate the challenges associated with collecting extensive and labeled datasets, this study proposes the development of a novel dataset focused specifically on crimes against individuals, including incidents such as robberies, assaults, and physical altercations. The dataset is constructed using data from publicly available sources and undergoes a rigorous labeling process to ensure both quality and representativeness of criminal activities. Furthermore, a 3D convolutional neural network (Conv 3D) is implemented for real-time video analysis to detect these crimes effectively. The proposed approach includes a comprehensive validation of both the dataset and the model through performance comparisons with existing datasets, utilizing key evaluation metrics such as the Area Under the Curve of the Receiver Operating Characteristic (AUC-ROC). Experimental results demonstrate that the proposed dataset and model achieve an accuracy rate between 94% and 95%, highlighting their effectiveness in accurately identifying criminal activities. This study contributes to the advancement of crime detection technologies, offering a practical solution for implementation in surveillance and public safety systems in urban environments. Full article
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)
16 pages, 12755 KiB  
Article
Improved Algorithm to Detect Clandestine Airstrips in Amazon Rainforest
by Gabriel R. Pardini, Paulo M. Tasinaffo, Elcio H. Shiguemori, Tahisa N. Kuck, Marcos R. O. A. Maximo and William R. Gyotoku
Algorithms 2025, 18(2), 102; https://doi.org/10.3390/a18020102 - 13 Feb 2025
Viewed by 606
Abstract
The Amazon biome is frequently targeted by illegal activities, with clandestine mining being one of the most prominent. Due to the dense forest cover, criminals often rely on covert aviation as a logistical tool to supply remote locations and sustain these activities. This [...] Read more.
The Amazon biome is frequently targeted by illegal activities, with clandestine mining being one of the most prominent. Due to the dense forest cover, criminals often rely on covert aviation as a logistical tool to supply remote locations and sustain these activities. This work presents an enhancement to a previously developed landing strip detection algorithm tailored for the Amazon biome. The initial algorithm utilized satellite images combined with the use of Convolutional Neural Networks (CNNs) to find the targets’ spatial locations (latitude and longitude). By addressing the limitations identified in the initial approach, this refined algorithm aims to improve detection accuracy and operational efficiency in complex rainforest environments. Tests in a selected area of the Amazon showed that the modified algorithm resulted in a recall drop of approximately 1% while reducing false positives by 26.6%. The recall drop means there was a decrease in the detection of true positives, which is balanced by the reduction in false positives. When applied across the entire biome, the recall decreased by 1.7%, but the total predictions dropped by 17.88%. These results suggest that, despite a slight reduction in recall, the modifications significantly improved the original algorithm by minimizing its limitations. Additionally, the improved solution demonstrates a 25.55% faster inference time, contributing to more rapid target identification. This advancement represents a meaningful step toward more effective detection of clandestine airstrips, supporting ongoing efforts to combat illegal activities in the region. Full article
(This article belongs to the Special Issue Visual Attributes in Computer Vision Applications)
23 pages, 25753 KiB  
Article
A Lightweight Deep Learning Approach for Detecting External Intrusion Signals from Optical Fiber Sensing System Based on Temporal Efficient Residual Network
by Yizhao Wang, Ziye Guo, Haitao Luo, Jing Liu and Ruohua Zhou
Algorithms 2025, 18(2), 101; https://doi.org/10.3390/a18020101 - 11 Feb 2025
Viewed by 588
Abstract
Deep neural networks have been widely applied to fiber optic sensor systems, where the detection of external intrusion in metro tunnels is a major challenge; thus, how to achieve the optimal balance between resource consumption and accuracy is a critical issue. To address [...] Read more.
Deep neural networks have been widely applied to fiber optic sensor systems, where the detection of external intrusion in metro tunnels is a major challenge; thus, how to achieve the optimal balance between resource consumption and accuracy is a critical issue. To address this issue, we propose a lightweight deep learning model, the Temporal Efficient Residual Network (TEResNet), for the detection of anomalous intrusion. In contrast to the majority of two-dimensional convolutional approaches, which require a deep architecture to encompass both low- and high-frequency domains, our methodology employs temporal convolutions and a compact residual network architecture. This allows the model to incorporate lower-level features into the higher-level feature formation in subsequent layers, leveraging informative features from the lower layers, and thus reducing the number of stacked layers for generating high-level features. As a result, the model achieves a superior performance with a relatively small number of layers. Moreover, the two-dimensional feature map is reduced in size to reduce the computational burden without adding parameters. This is crucial for enabling rapid intrusion detection. Experiments were conducted in the construction environment of the Guangzhou Metro, resulting in the creation of a dataset containing 6948 signal segments, which is publicly accessible. The results demonstrate that TEResNet outperforms the existing intrusion detection methods and advanced deep learning networks, achieving an accuracy of 97.12% and an F1 score of 96.15%. With only 48,009 learnable parameters, it provides an efficient and reliable solution for intrusion detection in metro tunnels, aligning with the growing demand for lightweight and robust information processing systems. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))
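The core idea behind TEResNet, a temporal convolution with an identity skip connection so that lower-level features feed directly into higher layers, can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's model: the kernel values, block size, and single-block structure are assumptions for demonstration only.

```python
import numpy as np

# Minimal sketch of a temporal (1D) residual block: 'same'-padded temporal
# convolution, ReLU, then the identity skip addition. Kernel and input are
# illustrative, not taken from the paper.

def temporal_residual_block(x, kernel):
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    # Sliding-window correlation (equivalent to convolution here, since
    # the example kernel is symmetric).
    conv = np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])
    return np.maximum(conv, 0.0) + x  # residual (skip) connection

x = np.array([1.0, -1.0, 2.0, 0.0])
y = temporal_residual_block(x, np.array([0.25, 0.5, 0.25]))
```

Because the skip path carries the input forward unchanged, stacking only a few such blocks already mixes low- and high-level temporal features, which is what lets the network stay shallow and lightweight.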
31 pages, 5042 KiB  
Article
A Levelized Multiple Workflow Heterogeneous Earliest Finish Time Allocation Model for Infrastructure as a Service (IaaS) Cloud Environment
by Farheen Bano, Faisal Ahmad, Mohammad Shahid, Mahfooz Alam, Faraz Hasan and Mohammad Sajid
Algorithms 2025, 18(2), 99; https://doi.org/10.3390/a18020099 - 10 Feb 2025
Cited by 1 | Viewed by 800
Abstract
Cloud computing, a superset of heterogeneous distributed computing, allows sharing of geographically dispersed resources across multiple organizations on a rental basis using virtualization as per demand. In cloud computing, workflow allocation to achieve the optimum schedule has been reported to be NP-hard. This [...] Read more.
Cloud computing, a superset of heterogeneous distributed computing, allows sharing of geographically dispersed resources across multiple organizations on a rental basis using virtualization as per demand. In cloud computing, workflow allocation to achieve the optimum schedule has been reported to be NP-hard. This paper proposes a Levelized Multiple Workflow Heterogeneous Earliest Finish Time (LMHEFT) model to optimize makespan in the cloud computing environment. The model has two phases: task prioritization and task allocation. The task prioritization phase begins by dividing workflows into the number of partitions as per the level attribute; after that, upward rank is employed to determine the partition-wise task allocation order. In the allocation phase, the best-suited virtual machine is determined to offer the lowest finish time for each task in partition-wise mapping to minimize the workflow task’s completion time. The model considers the inter-task communication between the cooperative workflow tasks. A comparative performance evaluation of LMHEFT has been conducted with the competitive models from the literature implemented in MATLAB, i.e., heterogeneous earliest finish time (HEFT) and dynamic level scheduling (DLS), on makespan, flowtime, and utilization. The experimental findings indicate that LMHEFT surpasses HEFT and DLS in terms of makespan by 15.51% and 85.12% when varying the number of workflows, 41.19% and 86.73% when varying depth levels, and 13.74% and 80.24% when varying virtual machines, respectively. Further statistical analysis has been carried out to confirm the hypothesis developed in the simulation study by using normality tests, homogeneity tests, and the Kruskal–Wallis test. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
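The upward-rank prioritization that LMHEFT inherits from HEFT has a standard recursive form: a task's rank is its mean computation cost plus the maximum, over its successors, of communication cost plus successor rank. The sketch below is a generic textbook version under an assumed toy task graph; the function names and costs are illustrative, not the paper's implementation.

```python
# Illustrative HEFT-style upward-rank computation over a task DAG.
# rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s)).

def upward_rank(tasks, succ, w, c):
    """tasks: task ids; succ[t]: successor list; w[t]: mean computation
    cost; c[(t, s)]: mean communication cost on edge t -> s."""
    rank = {}

    def r(t):
        if t not in rank:
            rank[t] = w[t] + max((c[(t, s)] + r(s) for s in succ.get(t, [])),
                                 default=0.0)  # exit tasks: rank = w(t)
        return rank[t]

    for t in tasks:
        r(t)
    # Higher rank => scheduled earlier.
    return sorted(tasks, key=lambda t: -rank[t]), rank

order, rank = upward_rank(
    ["A", "B", "C"],
    {"A": ["B", "C"]},
    {"A": 2.0, "B": 3.0, "C": 1.0},
    {("A", "B"): 1.0, ("A", "C"): 4.0},
)
```

In the allocation phase this order would then be walked partition by partition, placing each task on whichever virtual machine gives it the earliest finish time.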
17 pages, 1944 KiB  
Article
Pediatric Pneumonia Recognition Using an Improved DenseNet201 Model with Multi-Scale Convolutions and Mish Activation Function
by Petra Radočaj, Dorijan Radočaj and Goran Martinović
Algorithms 2025, 18(2), 98; https://doi.org/10.3390/a18020098 - 10 Feb 2025
Viewed by 742
Abstract
Pediatric pneumonia remains a significant global health issue, particularly in low- and middle-income countries, where it contributes substantially to mortality in children under five. This study introduces a deep learning model for pediatric pneumonia diagnosis from chest X-rays that surpasses the performance of [...] Read more.
Pediatric pneumonia remains a significant global health issue, particularly in low- and middle-income countries, where it contributes substantially to mortality in children under five. This study introduces a deep learning model for pediatric pneumonia diagnosis from chest X-rays that surpasses the performance of state-of-the-art methods reported in the recent literature. Using a DenseNet201 architecture with a Mish activation function and multi-scale convolutions, the model was trained on a dataset of 5856 chest X-ray images, achieving high performance: 0.9642 accuracy, 0.9580 precision, 0.9506 sensitivity, 0.9542 F1 score, and 0.9507 specificity. These results demonstrate a significant advancement in diagnostic precision and efficiency within this domain. By achieving the highest accuracy and F1 score compared to other recent work using the same dataset, our approach offers a tangible improvement for resource-constrained environments where access to specialists and sophisticated equipment is limited. While the need for high-quality datasets and adequate computational resources remains a general consideration for deep learning applications, our model’s demonstrably superior performance establishes a new benchmark and enables more timely and precise diagnoses, with the potential to significantly enhance patient outcomes. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
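The Mish activation used in the modified DenseNet201 has a simple closed form, mish(x) = x · tanh(softplus(x)) with softplus(x) = ln(1 + eˣ); a minimal numpy version:

```python
import numpy as np

# Mish activation: smooth, non-monotonic, and unbounded above,
# mish(x) = x * tanh(ln(1 + exp(x))).

def mish(x):
    return x * np.tanh(np.log1p(np.exp(x)))

mish(0.0)  # zero-preserving, like ReLU
```

Unlike ReLU, Mish is differentiable everywhere and allows small negative outputs, which is often cited as helping gradient flow in deep networks.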
19 pages, 7491 KiB  
Article
Performance Investigation of Active, Semi-Active and Passive Suspension Using Quarter Car Model
by Kyle Samaroo, Abdul Waheed Awan, Siva Marimuthu, Muhammad Naveed Iqbal, Kamran Daniel and Noman Shabbir
Algorithms 2025, 18(2), 100; https://doi.org/10.3390/a18020100 - 10 Feb 2025
Viewed by 705
Abstract
In this paper, a semi-active and fully active suspension system using a PID controller were designed and tuned in MATLAB/Simulink to achieve simultaneous optimisation of comfort and road holding ability. This was performed in order to quantify and observe the trends of both [...] Read more.
In this paper, semi-active and fully active suspension systems using PID controllers were designed and tuned in MATLAB/Simulink to achieve simultaneous optimisation of comfort and road holding ability. This was performed in order to quantify and observe the trends of both the semi-active and active suspension, which can then influence the choice of controlled suspension systems used for different applications. The response of the controlled suspensions was compared to a traditional passive setup in terms of the sprung mass displacement and acceleration, tyre deflection, and suspension working space for three different road profile inputs. It was found that across all road profiles, the usage of a semi-active or fully active suspension system offered notable improvements over a passive suspension in terms of comfort and road-holding ability. Specifically, the rms sprung mass displacement was reduced by a maximum of 44% and 56% over the passive suspension when using the semi-active and fully active suspension, respectively. Notably, in terms of sprung mass acceleration, the semi-active suspension offered better performance with a 65% reduction in the passive rms sprung mass acceleration compared to a 40% reduction for the fully active suspension. The tyre deflection of the passive suspension was also reduced by a maximum of 6% when using either the semi-active or fully active suspension. Furthermore, both the semi-active and fully active suspensions increased the suspension working space by 17% and 9%, respectively, over the passive suspension system, which represents a decreased level of performance. In summary, the choice between a semi-active or fully active suspension should be carefully considered based on the level of ride comfort and handling performance that is needed and the suspension working space that is available in the particular application.
However, the results of this paper show that the performance gap between the semi-active and fully active suspension is quite small, and the semi-active suspension is mostly able to match and sometimes outperform the fully active suspension in certain metrics. When considering other factors, such as weight, power requirements, and complexity, the semi-active suspension represents a better choice over the fully active suspension, in the author’s opinion. As such, future work will look at utilising more robust control methods and tuning procedures that may further improve the performance of the semi-active suspension. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
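The control law behind both actively controlled setups is the textbook discrete PID, u = Kp·e + Ki·∫e dt + Kd·de/dt. The sketch below is a generic discrete-time version; the gains, timestep, and class name are illustrative assumptions, not the controller tuned in the paper.

```python
# Minimal discrete PID controller: the force command is the weighted sum
# of the error, its running integral, and its finite-difference derivative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.step(1.0)  # actuator command for a unit sprung-mass error
```

In a quarter-car simulation this command would be fed back as the actuator force (fully active) or as a damping-coefficient adjustment (semi-active) at each timestep.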
26 pages, 1166 KiB  
Article
Preamble-Based Signal-to-Noise Ratio Estimation for Adaptive Modulation in Space–Time Block Coding-Assisted Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing System
by Shahid Manzoor, Noor Shamsiah Othman and Mohammed W. Muhieldeen
Algorithms 2025, 18(2), 97; https://doi.org/10.3390/a18020097 - 9 Feb 2025
Viewed by 553
Abstract
This paper presents algorithms to estimate the signal-to-noise ratio (SNR) in the time domain and frequency domain that employ a modified Constant Amplitude Zero Autocorrelation (CAZAC) synchronization preamble, denoted as CAZAC-TD and CAZAC-FD SNR estimators, respectively. These SNR estimators are invoked in a [...] Read more.
This paper presents algorithms to estimate the signal-to-noise ratio (SNR) in the time domain and frequency domain that employ a modified Constant Amplitude Zero Autocorrelation (CAZAC) synchronization preamble, denoted as CAZAC-TD and CAZAC-FD SNR estimators, respectively. These SNR estimators are invoked in a space–time block coding (STBC)-assisted multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system. These SNR estimators are compared to the benchmark frequency domain preamble-based SNR estimator referred to as the Milan-FD SNR estimator when used in a non-adaptive 2×2 STBC-assisted MIMO-OFDM system. The performance of the CAZAC-TD and CAZAC-FD SNR estimators is further investigated in the non-adaptive 4×4 STBC-assisted MIMO-OFDM system, which shows improved bit error rate (BER) and normalized mean square error (NMSE) performance. It is evident that the non-adaptive 2×2 and 4×4 STBC-assisted MIMO-OFDM systems that invoke the CAZAC-TD SNR estimator exhibit superior performance and approach closer to the normalized Cramer–Rao bound (NCRB). Subsequently, the CAZAC-TD SNR estimator is invoked in an adaptive modulation scheme for a 2×2 STBC-assisted MIMO-OFDM system employing M-PSK, denoted as the AM-CAZAC-TD-MIMO system. The AM-CAZAC-TD-MIMO system outperformed the non-adaptive STBC-assisted MIMO-OFDM system using 8-PSK by about 2 dB at BER = 10⁻⁴. Moreover, the AM-CAZAC-TD-MIMO system demonstrated an SNR gain of about 4 dB when compared with an adaptive single-input single-output (SISO)-OFDM system with M-PSK. Therefore, it was shown that the spatial diversity of the MIMO-OFDM system is key for the AM-CAZAC-TD-MIMO system’s improved performance. Full article
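A generic data-aided SNR estimator over a known CAZAC (Zadoff–Chu) preamble illustrates the idea of preamble-based estimation: project the received preamble onto the known one to recover the signal power, and take the residual as noise. This is a textbook sketch under assumed parameters (root u = 1, length 139), not the paper's CAZAC-TD or CAZAC-FD algorithms.

```python
import numpy as np

# Zadoff–Chu sequence: constant amplitude, zero autocorrelation (CAZAC).
def zadoff_chu(u, n):
    k = np.arange(n)
    return np.exp(-1j * np.pi * u * k * (k + 1) / n)

# Data-aided SNR estimate: least-squares complex gain from the matched
# filter, signal power from the gain, noise power from the residual.
def snr_estimate_db(received, preamble):
    h = np.vdot(preamble, received) / np.vdot(preamble, preamble)
    signal = np.abs(h) ** 2 * np.mean(np.abs(preamble) ** 2)
    noise = np.mean(np.abs(received - h * preamble) ** 2)
    return 10 * np.log10(signal / noise)

rng = np.random.default_rng(0)
p = zadoff_chu(1, 139)
noise = (rng.normal(size=139) + 1j * rng.normal(size=139)) * np.sqrt(0.05 / 2)
est = snr_estimate_db(p + noise, p)  # true SNR = 10*log10(1/0.05) ≈ 13 dB
```

The constant-amplitude property of the preamble is what makes the matched-filter projection well conditioned at every subcarrier, which is why CAZAC sequences are a natural choice for both synchronization and SNR estimation.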
33 pages, 437 KiB  
Review
The Diagnostic Classification of the Pathological Image Using Computer Vision
by Yasunari Matsuzaka and Ryu Yashiro
Algorithms 2025, 18(2), 96; https://doi.org/10.3390/a18020096 - 8 Feb 2025
Viewed by 1126
Abstract
Computer vision and artificial intelligence have revolutionized the field of pathological image analysis, enabling faster and more accurate diagnostic classification. Deep learning architectures like convolutional neural networks (CNNs), have shown superior performance in tasks such as image classification, segmentation, and object detection in [...] Read more.
Computer vision and artificial intelligence have revolutionized the field of pathological image analysis, enabling faster and more accurate diagnostic classification. Deep learning architectures like convolutional neural networks (CNNs) have shown superior performance in tasks such as image classification, segmentation, and object detection in pathology. Computer vision has significantly improved the accuracy of disease diagnosis in healthcare. By leveraging advanced algorithms and machine learning techniques, computer vision systems can analyze medical images with high precision, often matching or even surpassing human expert performance. In pathology, deep learning models have been trained on large datasets of annotated pathology images to perform tasks such as cancer diagnosis, grading, and prognostication. While deep learning approaches show great promise in diagnostic classification, challenges remain, including issues related to model interpretability, reliability, and generalization across diverse patient populations and imaging settings. Full article
29 pages, 5818 KiB  
Article
Enhancing Non-Invasive Blood Glucose Prediction from Photoplethysmography Signals via Heart Rate Variability-Based Features Selection Using Metaheuristic Algorithms
by Saifeddin Alghlayini, Mohammed Azmi Al-Betar and Mohamed Atef
Algorithms 2025, 18(2), 95; https://doi.org/10.3390/a18020095 - 8 Feb 2025
Cited by 1 | Viewed by 848
Abstract
Diabetes requires effective monitoring of the blood glucose level (BGL), traditionally achieved through invasive methods. This study addresses the non-invasive estimation of BGL by utilizing heart rate variability (HRV) features extracted from photoplethysmography (PPG) signals. A systematic feature selection methodology was developed employing [...] Read more.
Diabetes requires effective monitoring of the blood glucose level (BGL), traditionally achieved through invasive methods. This study addresses the non-invasive estimation of BGL by utilizing heart rate variability (HRV) features extracted from photoplethysmography (PPG) signals. A systematic feature selection methodology was developed employing advanced metaheuristic algorithms, specifically the Improved Dragonfly Algorithm (IDA), Binary Grey Wolf Optimizer (bGWO), Binary Harris Hawks Optimizer (BHHO), and Genetic Algorithm (GA). These algorithms were integrated with machine learning (ML) models, including Random Forest (RF), Extra Trees Regressor (ETR), and Light Gradient Boosting Machine (LightGBM), to enhance predictive accuracy and optimize feature selection. The IDA-LightGBM combination exhibited superior performance, achieving a mean absolute error (MAE) of 13.17 mg/dL, a root mean square error (RMSE) of 15.36 mg/dL, and 94.74% of predictions falling within the clinically acceptable Clarke error grid (CEG) zone A, with none in dangerous zones. This research underscores the efficiency of utilizing HRV and PPG for non-invasive glucose monitoring, demonstrating the effectiveness of integrating metaheuristic and ML approaches for enhanced diabetes monitoring. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
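The Clarke error grid zone A criterion behind the 94.74% figure is commonly stated as: the prediction lies within 20% of the reference value, or both values are in the hypoglycemic range (below 70 mg/dL). The following is a minimal sketch of that standard rule; the paper's exact implementation may differ.

```python
# Clarke error grid, zone A membership (clinically accurate predictions):
# within ±20% of the reference glucose, or both readings < 70 mg/dL.

def in_zone_a(reference, predicted):
    return (reference < 70 and predicted < 70) or \
           abs(predicted - reference) <= 0.2 * reference

# Fraction of predictions in zone A over a toy set of (reference, predicted)
# pairs in mg/dL (values illustrative).
pairs = [(100, 115), (100, 130), (60, 55)]
zone_a_rate = sum(in_zone_a(r, p) for r, p in pairs) / len(pairs)
```

Reporting the zone A fraction, rather than MAE or RMSE alone, captures whether an estimation error would actually change a clinical decision.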
24 pages, 3330 KiB  
Article
Encoding-Based Machine Learning Approach for Health Status Classification and Remote Monitoring of Cardiac Patients
by Sohaib R. Awad and Faris S. Alghareb
Algorithms 2025, 18(2), 94; https://doi.org/10.3390/a18020094 - 7 Feb 2025
Cited by 1 | Viewed by 710
Abstract
Remote monitoring of a patient’s vital activities has become increasingly important in dealing with various medical applications. In particular, machine learning (ML) techniques have been extensively utilized to analyze electrocardiogram (ECG) signals in cardiac patients to classify heart health status. This trend is [...] Read more.
Remote monitoring of a patient’s vital activities has become increasingly important in dealing with various medical applications. In particular, machine learning (ML) techniques have been extensively utilized to analyze electrocardiogram (ECG) signals in cardiac patients to classify heart health status. This trend is largely driven by the growing interest in computer-aided diagnosis based on ML algorithms. However, there has been inadequate investigation into the impact of risk factors on heart health, which hinders the ability to identify heart-related issues and predict the conditions of cardiac patients. In this context, developing a GUI-based classification approach can significantly facilitate online monitoring and provide real-time warnings by predicting potential complications. In this paper, a general framework structure for medical real-time monitoring systems is proposed for modeling the vital signs of cardiac patients in order to predict the patient’s status. The proposed approach analyzes AI-driven interventions to provide a more accurate cardiac diagnosis and real-time monitoring system. To further demonstrate the validity of the presented approach, we employ it in a LabVIEW-based remote tracking system to predict three healthcare statuses (stable, unstable non-critical, and unstable critical). The developed monitoring system receives various information about patients’ vital signs, and then it leverages a novel encoding-based machine learning algorithm to pre-process, analyze, and classify patient status. The developed ANN classifier and proposed encoding-based ML model are compared to other conventional ML-based models, such as Naive Bayes, SVM, and KNN for model accuracy evaluation. 
The obtained outcomes demonstrate the efficacy of the presented ANN and encoding-based ML approaches by achieving an accuracy of 98.4% and 98.8% for the developed ANN classifier and the proposed encoding-based technique, respectively, whereas Naive Bayes and quadratic SVM algorithms realize 94.8% and 96%, respectively. In short, this study aims to explore how ML algorithms can enhance diagnostic accuracy, improve real-time monitoring, and optimize treatment outcomes. Meanwhile, the proposed tracking system outperforms most existing monitoring systems by offering high classification accuracy of the heart health status and a user-friendly interactive interface. Therefore, it can potentially be utilized to improve the performance of remote healthcare monitoring for cardiac patients. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
18 pages, 1778 KiB  
Article
Advancing Real-Estate Forecasting: A Novel Approach Using Kolmogorov–Arnold Networks
by Iosif Viktoratos and Athanasios Tsadiras
Algorithms 2025, 18(2), 93; https://doi.org/10.3390/a18020093 - 7 Feb 2025
Viewed by 905
Abstract
Accurately estimating house values is a critical challenge for real-estate stakeholders, including homeowners, buyers, sellers, agents, and policymakers. This study introduces a novel approach to this problem using Kolmogorov–Arnold networks (KANs), a type of neural network based on the Kolmogorov–Arnold theorem. The proposed [...] Read more.
Accurately estimating house values is a critical challenge for real-estate stakeholders, including homeowners, buyers, sellers, agents, and policymakers. This study introduces a novel approach to this problem using Kolmogorov–Arnold networks (KANs), a type of neural network based on the Kolmogorov–Arnold theorem. The proposed KAN model was tested on two datasets and demonstrated superior performance compared to existing state-of-the-art methods for predicting house prices. By delivering more precise price forecasts, the model supports improved decision-making for real-estate stakeholders. Additionally, the results highlight the broader potential of KANs for addressing complex prediction tasks in data science. This study aims to provide an innovative and effective solution for accurate house price estimation, offering significant benefits for the real-estate industry and beyond. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
19 pages, 3110 KiB  
Essay
Optimization of Multimodal Transport Paths Considering a Low-Carbon Economy Under Uncertain Demand
by Zhiwei Liu, Sihui Zhou and Song Liu
Algorithms 2025, 18(2), 92; https://doi.org/10.3390/a18020092 - 6 Feb 2025
Cited by 2 | Viewed by 718
Abstract
Aiming at the uncertainty in cargo demand in the transportation process, the multimodal transportation path optimization problem is studied from the perspective of a low-carbon economy, and the robust optimization modeling method is introduced. Firstly, a robust optimization model for multimodal transportation is [...] Read more.
Aiming at the uncertainty in cargo demand during transportation, the multimodal transportation path optimization problem is studied from the perspective of a low-carbon economy, and a robust optimization modeling method is introduced. Firstly, a robust optimization model for multimodal transportation is built from the multimodal transportation path optimization model under demand certainty; the total transportation cost accounts not only for transportation and trans-shipment costs but also for the waiting cost imposed by schedule restrictions on trains and airplanes. Secondly, carbon emissions are added to the model as a constraint or cost by converting four different low-carbon policies. Then, a simulated annealing mechanism is introduced to improve the ACO algorithm. Finally, Solomon calculus is used for the solution. The results demonstrate that the improved simulated-annealing ant colony hybrid algorithm can substantially improve multimodal transportation path optimization under uncertain demand and promote emission reduction in multimodal transportation. Among the four carbon emission policies, the mandatory carbon emission policy is the most stringent and has the greatest effect on reducing emissions and energy use; energy conservation and emission reduction have the second-best effect, while the three policy instruments of carbon taxes, carbon trading, and carbon payment are more moderate. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
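The simulated-annealing mechanism grafted onto the ACO search reduces to the Metropolis acceptance rule: an improving route is always accepted, and a worsening one is accepted with probability exp(−Δcost/T), which lets the search escape local optima while the temperature T is high. A minimal sketch with illustrative parameter values:

```python
import math
import random

# Metropolis acceptance rule used by simulated annealing: always accept
# an improving candidate; accept a worsening one with probability
# exp(-delta_cost / temperature).

def accept(delta_cost, temperature, rng=random):
    if delta_cost <= 0:  # improving (or equal-cost) move
        return True
    return rng.random() < math.exp(-delta_cost / temperature)

accept(-5.0, 1.0)  # cost decreased, so the move is taken
```

In the hybrid algorithm, this rule would filter the ant-constructed routes before pheromone update, with the temperature lowered over iterations so the search gradually becomes greedy.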
15 pages, 6304 KiB  
Technical Note
Advanced Dynamic Vibration of Terfenol-D Control Law on Functionally Graded Material Plates/Cylindrical Shells in Unsteady Supersonic Flow
by Chih-Chiang Hong
Algorithms 2025, 18(2), 91; https://doi.org/10.3390/a18020091 - 6 Feb 2025
Viewed by 510
Abstract
The thermal vibration of thick Terfenol-D control law on functionally graded material (FGM) plates/cylindrical shells in nonlinear unsteady supersonic flow with third-order shear deformation theory (TSDT) is investigated by using the generalized differential quadrature (GDQ) method. The effects of the coefficient term of [...] Read more.
The thermal vibration of thick Terfenol-D control law on functionally graded material (FGM) plates/cylindrical shells in nonlinear unsteady supersonic flow with third-order shear deformation theory (TSDT) is investigated by using the generalized differential quadrature (GDQ) method. The effects of the coefficient term of TSDT displacement models on the thermal stress and center displacement of Terfenol-D control law on FGM plates/cylindrical shells in nonlinear unsteady supersonic flow are investigated. The coefficient term of TSDT models of thick Terfenol-D control law on FGM plates/cylindrical shells provide an additional effect on the values of displacements and stresses. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
21 pages, 5315 KiB  
Article
ECG Signal Classification Using Interpretable KAN: Towards Predictive Diagnosis of Arrhythmias
by Hongzhen Cui, Shenhui Ning, Shichao Wang, Wei Zhang and Yunfeng Peng
Algorithms 2025, 18(2), 90; https://doi.org/10.3390/a18020090 - 6 Feb 2025
Viewed by 975
Abstract
To address the need for accurate classification of electrocardiogram (ECG) signals, we employ an interpretable KAN to classify arrhythmia diseases. Experimental evaluation of the MIT-BIH and PTB datasets demonstrates the significant superiority of the KAN in classifying arrhythmia diseases. Specifically, preprocessing steps such [...] Read more.
To address the need for accurate classification of electrocardiogram (ECG) signals, we employ an interpretable KAN to classify arrhythmia diseases. Experimental evaluation on the MIT-BIH and PTB datasets demonstrates the significant superiority of the KAN in classifying arrhythmia diseases. Specifically, preprocessing steps such as sample balancing and variance sorting effectively optimized the feature distribution and significantly enhanced the model’s classification performance. On the MIT-BIH dataset, the KAN achieved classification accuracy and precision rates of 99.08% and 99.07%, respectively. Similarly, on the PTB dataset, both metrics reached 99.11%. In addition, experimental results indicate that compared to the traditional multi-layer perceptron (MLP), the KAN demonstrates higher classification accuracy and better fitting stability and adaptability to complex data scenarios. Applying three clustering methods demonstrates that the features extracted by the KAN exhibit clearer cluster boundaries, thereby verifying its effectiveness in ECG signal classification. Additionally, convergence analysis reveals that the KAN’s training process exhibits a smooth and stable loss decline curve, confirming its robustness under complex data conditions. The findings of this study validate the applicability and superiority of the KAN in classifying ECG signals for arrhythmia and other diseases, offering a novel technical approach to the classification and diagnosis of arrhythmias. Finally, potential future research directions are discussed, including the use of the KAN for early warning and rapid diagnosis of arrhythmias. This study establishes a theoretical foundation and practical basis for advancing interpretable networks in clinical applications. Full article
22 pages, 20326 KiB  
Article
GATransformer: A Graph Attention Network-Based Transformer Model to Generate Explainable Attentions for Brain Tumor Detection
by Sara Tehsin, Inzamam Mashood Nasir and Robertas Damaševičius
Algorithms 2025, 18(2), 89; https://doi.org/10.3390/a18020089 - 6 Feb 2025
Cited by 2 | Viewed by 846
Abstract
Brain tumors profoundly affect human health owing to their intricacy and the difficulties associated with early identification and treatment. Precise diagnosis is essential for effective intervention; nevertheless, the resemblance among tumor forms often complicates the identification of brain tumor types, particularly in the [...] Read more.
Brain tumors profoundly affect human health owing to their intricacy and the difficulties associated with early identification and treatment. Precise diagnosis is essential for effective intervention; nevertheless, the resemblance among tumor forms often complicates the identification of brain tumor types, particularly in the early stages. The latest deep learning systems offer very high classification accuracy but lack explainability to help patients understand the prediction process. GATransformer, a graph attention network (GAT)-based Transformer, uses the attention mechanism, GAT, and Transformer to identify and preserve key neural network channels. The channel attention module extracts deeper properties from weight-channel connections to improve model representation. Integrating these elements results in a reduction in model size and enhancement in computing efficiency, while preserving adequate model performance. The proposed model is assessed using two publicly accessible datasets, FigShare and Kaggle, and is cross-validated using the BraTS2019 and BraTS2020 datasets, demonstrating high accuracy and explainability. Notably, GATransformer generates interpretable attention maps, visually highlighting tumor regions to aid clinical understanding in medical imaging. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))