Algorithms, Volume 18, Issue 6 (June 2025) – 75 articles

Cover Story: When disaster strikes, every second counts. UAV teams must collect critical data while dodging obstacles and battling winds, a challenge that stumps conventional algorithms. This innovative research fuses A* pathfinding with deep learning to create an adaptive routing system that thinks ahead. The neural network learns to predict travel times based on wind conditions, payload weight, and trajectory complexity, while A* ensures collision-free paths through cluttered environments. In extensive tests on 30×30 km grids with strategic obstacles, the hybrid approach delivered up to 15% more value than baseline methods without sacrificing speed.
24 pages, 4961 KiB  
Article
A Small-Sample Scenario Optimization Scheduling Method Based on Multidimensional Data Expansion
by Yaoxian Liu, Kaixin Zhang, Yue Sun, Jingwen Chen and Junshuo Chen
Algorithms 2025, 18(6), 373; https://doi.org/10.3390/a18060373 - 19 Jun 2025
Abstract
Deep reinforcement learning (DRL) is widely applied to energy system optimization and scheduling, but it depends heavily on historical data. Newly commissioned integrated energy systems lack historical operation data, so DRL training samples are insufficient, which easily causes underfitting and inadequate exploration of the decision space and thus reduces the accuracy of the scheduling plan. Conventional data-driven methods likewise struggle to predict renewable energy output accurately when training data are scarce, further degrading scheduling performance. This paper therefore proposes a small-sample scenario optimization scheduling method based on multidimensional data expansion. First, daily power curves of PV plants with measured output are screened based on spatial correlation, meteorological similarity is computed with the multi-kernel maximum mean discrepancy (MK-MMD), and historical output data for the target distributed PV system are generated via capacity conversion. Second, historical load data are generated from existing daily load profiles of different types using stochastic and simultaneous sampling to construct the full historical dataset. Next, to address sample imbalance in the small-sample scenario, an oversampling method augments the scarce samples and an XGBoost PV output prediction model is trained. Finally, the optimal scheduling model is cast as a Markov decision process and solved with the Deep Deterministic Policy Gradient (DDPG) algorithm. Numerical examples verify the effectiveness of the proposed method. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
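The meteorological-similarity step rests on the multi-kernel maximum mean discrepancy. As a rough illustration of that building block only (not the authors' code; the Gaussian kernels, bandwidth set, and uniform averaging are assumptions), a minimal NumPy sketch:

```python
import numpy as np

def mk_mmd(X, Y, bandwidths=(0.5, 1.0, 2.0, 4.0)):
    """Biased estimate of the squared multi-kernel MMD between samples
    X (n, d) and Y (m, d), averaging Gaussian kernels over assumed bandwidths."""
    def gram(A, B, s):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * s ** 2))
    mmd2 = 0.0
    for s in bandwidths:
        mmd2 += gram(X, X, s).mean() + gram(Y, Y, s).mean() - 2.0 * gram(X, Y, s).mean()
    return mmd2 / len(bandwidths)

# Compare a candidate plant's daily weather features with the target site's:
rng = np.random.default_rng(0)
site_a = rng.normal(0.0, 1.0, size=(64, 3))   # hypothetical (temperature, irradiance, wind)
site_b = rng.normal(0.3, 1.0, size=(64, 3))
print(mk_mmd(site_a, site_b))                  # smaller value = more similar weather
```

Plants whose MK-MMD to the target site is small would then be the ones whose capacity-scaled output curves seed the synthetic history.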

18 pages, 984 KiB  
Article
A Linear Regression Prediction-Based Dynamic Multi-Objective Evolutionary Algorithm with Correlations of Pareto Front Points
by Junxia Ma, Yongxuan Sang, Yaoli Xu and Bo Wang
Algorithms 2025, 18(6), 372; https://doi.org/10.3390/a18060372 - 19 Jun 2025
Abstract
The Dynamic Multi-objective Optimization Problem (DMOP) is a common problem type in academia and industry, and the Dynamic Multi-Objective Evolutionary Algorithm (DMOEA) is an effective way to solve DMOPs. Although many studies have proposed a variety of DMOEAs, the demand for efficient solutions to DMOPs in drastically changing scenarios is still not well met. To this end, this paper proposes, to the best of our knowledge for the first time, to exploit the correlations between different points of the Pareto front (PF) to improve the accuracy of predicting the new PF in a new environment. Specifically, when the DMOP environment changes, the method first constructs a spatio-temporal correlation model between key points of the PF based on linear regression; it then uses this model to predict a new location for each key point in the new environment; next, it builds a sub-population by adding Gaussian noise to the predicted locations to improve generalization; it constructs another sub-population using the idea of NSGA-II-B to further improve population diversity; and finally, it combines the two sub-populations and re-initializes the population through a random replacement strategy to adapt to the new environment. The proposed method was evaluated on the CEC 2018 test suite, and the results show that, compared with six recent methods, it obtains the best MIGD value on six DMOPs and the best MHVD value on five DMOPs. Full article
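A minimal sketch of the prediction-plus-noise step under assumed shapes (one key point tracked over T past environments; the paper's correlation model between different key points is not reproduced):

```python
import numpy as np

def predict_key_point(history, sigma=0.05, pop_size=20, rng=None):
    """history: (T, d) locations of one PF key point over T past environments.
    Fit a linear trend per dimension, extrapolate to the next environment,
    and spawn a Gaussian-perturbed sub-population around the prediction."""
    rng = rng or np.random.default_rng()
    T, d = history.shape
    slope, intercept = np.polyfit(np.arange(T), history, deg=1)  # per-dimension fit
    prediction = slope * T + intercept                           # next environment
    subpop = prediction + rng.normal(0.0, sigma, size=(pop_size, d))
    return prediction, subpop

hist = np.array([[0.0, 1.0], [0.1, 0.9], [0.2, 0.8]])  # a steadily drifting key point
pred, subpop = predict_key_point(hist)
print(pred)  # ~[0.3, 0.7]
```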

24 pages, 705 KiB  
Article
Substring Counting with Insertions
by Janez Brank and Tomaž Hočevar
Algorithms 2025, 18(6), 371; https://doi.org/10.3390/a18060371 - 19 Jun 2025
Abstract
Substring counting is a classical algorithmic problem with numerous solutions that achieve linear time complexity. In this paper, we address a variation of the problem where, given three strings p, t, and s, we are interested in the number of occurrences of p in all strings that would result from inserting t into s at every possible position. Essentially, we are solving several substring counting problems of the same substring p in related strings. We give a detailed description of several conceptually different approaches to solving this problem and conclude with an algorithm that has a linear time complexity. The solution is based on a recent result from the field of substring search in compressed sequences and exploits the periodicity of strings. We also provide a self-contained implementation of the algorithm in C++ and experimentally verify its behavior, chiefly to demonstrate that its running time is linear in the lengths of all three input strings. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
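The problem statement is easy to mis-parse, so a quadratic brute-force reference may help pin it down (our baseline, not the paper's linear-time algorithm, which must agree with it on every input):

```python
def count_occurrences(p, s):
    """Number of (possibly overlapping) occurrences of p in s."""
    return sum(1 for i in range(len(s) - len(p) + 1) if s[i:i + len(p)] == p)

def insertions_total(p, t, s):
    """Sum, over every insertion position i, of the number of occurrences
    of p in s[:i] + t + s[i:].  Quadratic; the paper achieves linear time."""
    return sum(count_occurrences(p, s[:i] + t + s[i:]) for i in range(len(s) + 1))

print(insertions_total("ab", "b", "aab"))  # 5, over "baab", "abab", "aabb", "aabb"
```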

18 pages, 3202 KiB  
Article
DScanNet: Packaging Defect Detection Algorithm Based on Selective State Space Models
by Yirong Luo, Yanping Du, Zhaohua Wang, Jingtian Mo, Wenxuan Yu and Shuihai Dou
Algorithms 2025, 18(6), 370; https://doi.org/10.3390/a18060370 - 19 Jun 2025
Abstract
With the rapid development of e-commerce and the logistics industry, packaging defect detection has become an increasingly important link in product quality control. However, existing object detection models often struggle to improve detection accuracy for the small-scale targets found on logistics packaging while keeping model complexity manageable. To this end, this paper proposes an improved detection model, DScanNet. To address insufficient extraction of fine-grained features of small target defects, which leads to low detection accuracy, the MEFE module, a local feature extraction module (LFEM Block), and the PCR module are proposed; together they implement a multi-scale convolution and feature enhancement strategy that strengthens the model's ability to capture and focus on defect features. To address excessive model complexity, a Mamba module incorporating a channel attention mechanism is proposed, exploiting Mamba's linear complexity. In experiments on our own dataset, BIGC-LP, DScanNet achieves an accuracy of 96.8% on the defect detection task, outperforming current mainstream detection algorithms while keeping the number of parameters and the computational cost under control. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (3rd Edition))

19 pages, 2399 KiB  
Article
The Fine Feature Extraction and Attention Re-Embedding Model Based on the Swin Transformer for Pavement Damage Classification
by Shizheng Zhang, Kunpeng Wang, Zhihao Liu, Min Huang and Sheng Huang
Algorithms 2025, 18(6), 369; https://doi.org/10.3390/a18060369 - 18 Jun 2025
Abstract
The accurate detection and classification of pavement damage are critical for ensuring timely maintenance and extending the service life of road infrastructure. In this study, we propose a novel pavement damage recognition model based on the Swin Transformer architecture, specifically designed to address the challenges inherent in pavement imagery, such as low damage visibility, varying illumination conditions, and highly similar surface textures. Unlike the original Swin Transformer, the proposed model incorporates two key components: a fine feature extraction module and a multi-head self-attention re-embedding module. These additions enhance the model’s ability to capture subtle and complex damage patterns. Experimental evaluations demonstrate that the proposed model achieves a 2.07% improvement in classification accuracy and a 0.97% increase in F1 score compared to the baseline while maintaining comparable computational complexity. Overall, the model significantly outperforms the baseline Swin Transformer in pavement damage detection and classification, highlighting its practical applicability. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)

13 pages, 337 KiB  
Article
Synthesizing Explainability Across Multiple ML Models for Structured Data
by Emir Veledar, Lili Zhou, Omar Veledar, Hannah Gardener, Carolina M. Gutierrez, Jose G. Romano and Tatjana Rundek
Algorithms 2025, 18(6), 368; https://doi.org/10.3390/a18060368 - 18 Jun 2025
Abstract
Explainable Machine Learning (XML) in high-stakes domains demands reproducible methods to aggregate feature importance across multiple models applied to the same structured dataset. We propose the Weighted Importance Score and Frequency Count (WISFC) framework, which combines importance magnitude and consistency by aggregating ranked outputs from diverse explainers. WISFC assigns a weighted score to each feature based on its rank and frequency across model-explainer pairs, providing a robust ensemble feature-importance ranking. Unlike simple consensus voting or ranking heuristics, which are insufficient for capturing complex relationships among different explainer outputs, WISFC offers a more principled approach to reconciling and aggregating this information. By aggregating many "weak signals" from brute-force modeling runs, WISFC can surface a stronger consensus on which variables matter most. The framework is designed to be reproducible and generalizable, capable of taking importance outputs from any set of machine learning models and producing an aggregated ranking that highlights consistently important features. This approach acknowledges that any single model is a simplification of a complex, multidimensional phenomenon; using multiple diverse models, each optimized from a different perspective, WISFC systematically captures different facets of the problem space to create a more structured and comprehensive view. As a consequence, this study offers a useful strategy for researchers and practitioners who seek innovative ways of exploring complex systems, not by discovering entirely new variables but by introducing a novel mindset for systematically combining multiple modeling perspectives. Full article
(This article belongs to the Section Databases and Data Structures)
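The abstract names the ingredients of the WISFC score, rank and frequency across model-explainer pairs, without the exact formula; the sketch below is one plausible reading, with the linear rank weights and the frequency scaling as our assumptions:

```python
from collections import defaultdict

def wisfc(rankings, top_k=10):
    """rankings: per-(model, explainer) feature lists, most important first.
    Assumed scoring: a feature earns (top_k - rank) points within each list's
    top_k, summed over lists and scaled by the fraction of lists it appears in."""
    scores, counts = defaultdict(float), defaultdict(int)
    for ranking in rankings:
        for rank, feat in enumerate(ranking[:top_k]):
            scores[feat] += top_k - rank
            counts[feat] += 1
    n = len(rankings)
    agg = {f: scores[f] * (counts[f] / n) for f in scores}
    return sorted(agg.items(), key=lambda kv: kv[1], reverse=True)

runs = [["age", "bmi", "bp"], ["bmi", "age", "chol"], ["age", "chol", "bp"]]
print(wisfc(runs, top_k=3))  # "age" ranks first: high score and full frequency
```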

24 pages, 1201 KiB  
Article
A Two-Stage Bin Packing Algorithm for Minimizing Machines and Operators in Cyclic Production Systems
by Yossi Hadad and Baruch Keren
Algorithms 2025, 18(6), 367; https://doi.org/10.3390/a18060367 - 17 Jun 2025
Abstract
This study presents a novel, two-stage algorithm that minimizes the number of machines and operators required to produce multiple product types repeatedly in cyclic scheduling. Our algorithm treats the problem of minimum machines as a bin packing problem (BPP), and the problem of determining the number of operators required is also modeled as the BPP, but with constraints. The BPP is NP-hard, but with suitable heuristic algorithms, the proposed model allocates multiple product types to machines and multiple machines to operators without overlapping setup times (machine interference). The production schedule on each machine is represented as a circle (donut). By using lower bounds, it is possible to assess whether the number of machines required by our model is optimal; if not, the optimality gap can be quantified. The algorithm has been validated using real-world data from an industrial facility producing 17 types of products. The results of our algorithm led to significant cost savings and improved scheduling performance. The outcomes demonstrate the effectiveness of the proposed algorithm in optimizing resource utilization by reducing the number of machines and operators required. Although this study focuses on a manufacturing system, the model can also be applied to other contexts. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
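For a feel of the first stage, a first-fit-decreasing sketch (a standard BPP heuristic; the paper's specific heuristics, cyclic setup-overlap constraints, and donut representation are not reproduced) together with the classical lower bound used to gauge optimality:

```python
import math

def first_fit_decreasing(items, capacity):
    """Pack items (e.g., per-cycle machine occupation times) into the fewest
    bins (machines) of the given capacity, using the FFD heuristic."""
    bins = []      # remaining capacity per open bin
    packing = []   # items assigned to each bin
    for item in sorted(items, reverse=True):
        for i, free in enumerate(bins):
            if item <= free:
                bins[i] -= item
                packing[i].append(item)
                break
        else:
            bins.append(capacity - item)
            packing.append([item])
    return packing

items = [0.42, 0.35, 0.31, 0.25, 0.2, 0.18, 0.12]
cycle = 1.0
packing = first_fit_decreasing(items, cycle)
lower_bound = math.ceil(sum(items) / cycle)   # no packing can use fewer bins
print(len(packing), lower_bound, packing)     # here FFD attains the bound: 2 bins
```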

29 pages, 6462 KiB  
Article
A Clustering-Based Dimensionality Reduction Method Guided by POD Structures and Its Application to Convective Flow Problems
by Qingyang Yuan and Bo Zhang
Algorithms 2025, 18(6), 366; https://doi.org/10.3390/a18060366 - 17 Jun 2025
Abstract
Proper orthogonal decomposition (POD) is a widely used linear dimensionality reduction technique, but it often fails to capture critical features in complex nonlinear flows. In contrast, clustering methods are effective for nonlinear feature extraction, yet their application in dimensionality reduction methods is hindered by unstable cluster initialization and inefficient mode sorting. To address these issues, we propose a clustering-based dimensionality reduction method guided by POD structures (C-POD), which uses POD preprocessing to stabilize the selection of cluster centers. Additionally, we introduce an entropy-controlled Euclidean-to-probability mapping (ECEPM) method to improve modal sorting and assess mode importance. The C-POD approach is evaluated using the one-dimensional Burgers’ equation and a two-dimensional cylinder wake flow. Results show that C-POD achieves higher accuracy in dimensionality reduction than POD. Its dominant modes capture more temporal dynamics, while higher-order modes offer better physical interpretability. When solving an inverse problem using sparse sensor data, the Gappy C-POD method improves reconstruction accuracy by 19.75% and enhances the lower bound of reconstruction capability by 13.4% compared to Gappy POD. Overall, C-POD demonstrates strong potential for modeling and reconstructing complex nonlinear flow fields, providing a valuable tool for dimensionality reduction methods in fluid dynamics. Full article
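A minimal sketch of the stabilization idea as we read it, POD preprocessing supplying deterministic seeds for the cluster centers (the ECEPM sorting step and all numerical details are omitted, and the seeding rule here is an assumption), for snapshots stored column-wise:

```python
import numpy as np
from sklearn.cluster import KMeans

def pod_seeded_clusters(snapshots, n_modes):
    """snapshots: (n_dof, n_time) flow snapshots, one column per time step.
    POD modes come from the thin SVD; snapshots are then clustered in POD
    coefficient space, with cluster centers seeded deterministically."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :n_modes]                     # POD basis
    coeffs = modes.T @ (snapshots - mean)      # (n_modes, n_time) temporal coefficients
    # Assumed seeding: spread initial centers along the leading-coefficient ordering,
    # so k-means starts from the same POD-informed points on every run.
    order = np.argsort(coeffs[0])
    picks = order[np.linspace(0, len(order) - 1, n_modes, dtype=int)]
    km = KMeans(n_clusters=n_modes, init=coeffs[:, picks].T, n_init=1).fit(coeffs.T)
    return modes, coeffs, km
```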

13 pages, 1776 KiB  
Article
An Efficient Computational Algorithm for the Nonlocal Cahn–Hilliard Equation with a Space-Dependent Parameter
by Zhengang Li, Xinpei Wu and Junseok Kim
Algorithms 2025, 18(6), 365; https://doi.org/10.3390/a18060365 - 15 Jun 2025
Abstract
In this article, we present a nonlocal Cahn–Hilliard (nCH) equation incorporating a space-dependent parameter to model microphase separation phenomena in diblock copolymers. The proposed model introduces a modified formulation that accounts for spatially varying average volume fractions and thus captures nonlocal interactions between distinct subdomains. Such spatial heterogeneity plays a critical role in determining the morphology of the resulting phase-separated structures. To efficiently solve the resulting partial differential equation, a Fourier spectral method is used in conjunction with a linearly stabilized splitting scheme. This numerical approach not only guarantees stability and efficiency but also enables accurate resolution of spatially complex patterns without excessive computational overhead. The spectral representation effectively handles the nonlocal terms, while the stabilization scheme allows for large time steps. Therefore, this method is suitable for long-time simulations of pattern formation processes. Numerical experiments conducted under various initial conditions demonstrate the ability of the proposed method to resolve intricate phase separation behaviors, including coarsening dynamics and interface evolution. The results show that the space-dependent parameters significantly influence the orientation, size, and regularity of the emergent patterns. This suggests that spatial control of average composition could be used to engineer desirable microstructures in polymeric materials. This study provides a robust computational framework for investigating nonlocal pattern formation in heterogeneous systems, enables simulations in complex spatial domains, and contributes to the theoretical understanding of morphology control in polymer science. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
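For the flavor of the numerics, the sketch below advances a plain local 1D Cahn–Hilliard equation with a Fourier spectral method and a linearly stabilized splitting step; the nonlocal term and the space-dependent parameter of the paper are deliberately left out, and the stabilization constant S is an assumption:

```python
import numpy as np

N, L, eps, dt, S = 256, 2 * np.pi, 0.05, 1e-3, 2.0
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers
k2, k4 = k**2, k**4

u = 0.1 * np.cos(3 * x) + 0.01 * np.random.default_rng(1).standard_normal(N)
for _ in range(2000):
    # u_t = lap(u^3 - u) - eps^2 lap^2 u, semi-implicit stabilized step:
    # (1 + dt*eps^2*k^4 + dt*S*k^2) u_hat^{n+1}
    #     = (1 + dt*S*k^2) u_hat^n - dt*k^2 * fft(u^3 - u)
    u_hat = np.fft.fft(u)
    rhs = (1 + dt * S * k2) * u_hat - dt * k2 * np.fft.fft(u**3 - u)
    u = np.real(np.fft.ifft(rhs / (1 + dt * eps**2 * k4 + dt * S * k2)))
print(u.min(), u.max())  # phases separate toward u = -1 and u = +1
```

The implicit treatment of the fourth-order term plus the S-stabilization is what permits the large time steps the abstract mentions.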

14 pages, 1467 KiB  
Article
A Two-Step High-Order Compact Corrected WENO Scheme
by Yong Yang, Caixia Chen, Shiming Yuan and Yonghua Yan
Algorithms 2025, 18(6), 364; https://doi.org/10.3390/a18060364 - 15 Jun 2025
Abstract
In this study, we introduce a novel 2-step compact scheme-based high-order correction method for computational fluid dynamics (CFD). Unlike traditional single-formula-based schemes, our proposed approach refines flux function values by leveraging results from high-order compact schemes on the same stencils, provided a certain smoothness condition is met. By applying this method, we achieve a more stable and efficient compact corrected Weighted Essentially Non-Oscillatory (WENO) scheme. The results demonstrate significant improvements across all enhanced schemes, particularly in capturing shock waves sharply and maintaining stability in complex scenarios, such as two interacting blast waves, as validated through 1D benchmark tests. In addition, error analysis is also provided for the two different correction configurations based on WENO. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

8 pages, 191 KiB  
Editorial
Algorithms for Game AI
by Wenxin Li and Haifeng Zhang
Algorithms 2025, 18(6), 363; https://doi.org/10.3390/a18060363 - 13 Jun 2025
Abstract
Games have long been benchmarks for AI algorithms and, with the boost of computational power and the application of new algorithms, AI systems have achieved superhuman performance in games for which it was once thought that they could only be mastered by humans due to their high complexity [...] Full article
(This article belongs to the Special Issue Algorithms for Games AI)
21 pages, 1045 KiB  
Article
WIRE: A Weighted Item Removal Method for Unsupervised Rank Aggregation
by Leonidas Akritidis and Panayiotis Bozanis
Algorithms 2025, 18(6), 362; https://doi.org/10.3390/a18060362 - 12 Jun 2025
Abstract
Rank aggregation deals with the problem of fusing multiple ranked lists of elements into a single aggregate list with improved element ordering. Such cases are frequently encountered in numerous applications across a variety of areas, including bioinformatics, machine learning, statistics, information retrieval, and so on. The weighted rank aggregation methods consider a more advanced version of the problem by assuming that the input lists are not of equal importance. In this context, they first apply ad hoc techniques to assign weights to the input lists, and then, they study how to integrate these weights into the scores of the individual list elements. In this paper, we adopt the idea of exploiting the list weights not only during the computation of the element scores, but also to determine which elements will be included in the consensus aggregate list. More specifically, we introduce and analyze a novel refinement mechanism, called WIRE, that effectively removes the weakest elements from the less important input lists, thus improving the quality of the output ranking. We experimentally demonstrate the effectiveness of our method in multiple datasets by comparing it with a collection of state-of-the-art weighted and non-weighted techniques. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
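A toy rendering of the refinement idea from the abstract, dropping the weakest elements of low-weight lists before fusion (the weighted Borda scoring, the 0.5 weight threshold, and the removal fraction are our assumptions, not the paper's exact mechanism):

```python
from collections import defaultdict

def wire_aggregate(lists, weights, cut=0.5):
    """lists: ranked lists (best first); weights: per-list importance in [0, 1].
    Refinement: a low-weight list contributes only its top `cut` fraction of
    items, removing its weakest elements.  Scoring: weighted Borda count."""
    scores = defaultdict(float)
    for ranking, w in zip(lists, weights):
        keep = len(ranking) if w >= 0.5 else max(1, int(cut * len(ranking)))
        for rank, item in enumerate(ranking[:keep]):
            scores[item] += w * (len(ranking) - rank)   # Borda points, weighted
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

lists = [["a", "b", "c", "d"], ["d", "c", "b", "a"], ["a", "c", "b", "d"]]
weights = [0.9, 0.2, 0.7]              # the middle list is least trusted
print(wire_aggregate(lists, weights))  # its tail ("b", "a") is dropped before fusion
```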

29 pages, 351 KiB  
Article
The Computability of the Channel Reliability Function and Related Bounds
by Holger Boche and Christian Deppe
Algorithms 2025, 18(6), 361; https://doi.org/10.3390/a18060361 - 11 Jun 2025
Abstract
The channel reliability function is a crucial tool for characterizing the dependable transmission of messages across communication channels. In many cases, only upper and lower bounds of this function are known. We investigate the computability of the reliability function and its associated functions, demonstrating that the reliability function is not Turing computable. This also holds true for functions related to the sphere packing bound and the expurgation bound. Additionally, we examine the R function and the zero-error feedback capacity, as they are vital in the context of the reliability function. Both the R function and the zero-error feedback capacity are not Banach–Mazur computable. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
17 pages, 371 KiB  
Article
A Box-Bounded Non-Linear Least Square Minimization Algorithm with Application to the JWL Parameter Determination in the Isentropic Expansion for Highly Energetic Material Simulation
by Yuri Caridi, Andrea Cucuzzella, Fabio Vicini and Stefano Berrone
Algorithms 2025, 18(6), 360; https://doi.org/10.3390/a18060360 - 11 Jun 2025
Abstract
This work presents a robust box-constrained nonlinear least-squares algorithm for accurately fitting the Jones–Wilkins–Lee (JWL) equation of state parameters, which describe the isentropic expansion of detonation products from high-energy materials. Many methods in the energetic materials literature address this problem, and in some cases it is not fully clear which method is employed. We provide a fully detailed numerical framework that explicitly enforces Chapman–Jouguet (CJ) constraints and systematically separates the contributions of the different terms in the JWL expression. The algorithm leverages a trust-region Gauss–Newton method combined with singular value decomposition to ensure numerical stability and rapid convergence, even in highly overdetermined systems. The methodology is validated through comprehensive comparisons with leading thermochemical codes such as CHEETAH 2.0, ZMWNI, and EXPLO5. The results demonstrate that the proposed approach yields lower residual fitting errors and improved consistency with CJ thermodynamic conditions compared to standard fitting routines. By providing a reproducible and theoretically grounded methodology, this study advances the state of the art in JWL parameter determination and improves the reliability of energetic material simulations. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
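The JWL isentrope has the standard form p(V) = A·e^(-R1·V) + B·e^(-R2·V) + C·V^(-(1+ω)). A box-bounded fit of its six parameters can be sketched with SciPy's trust-region least-squares solver (the synthetic data and bounds are ours; the paper's CJ constraints and SVD-stabilized Gauss–Newton steps are not reproduced):

```python
import numpy as np
from scipy.optimize import least_squares

def jwl_pressure(v, A, B, C, R1, R2, omega):
    return A * np.exp(-R1 * v) + B * np.exp(-R2 * v) + C * v ** (-(1.0 + omega))

# Synthetic "expansion isentrope" (pressure vs. relative volume), illustration only.
true = (600.0, 12.0, 1.0, 4.5, 1.2, 0.3)
v = np.linspace(0.6, 7.0, 80)
p = jwl_pressure(v, *true) * (1 + 0.01 * np.random.default_rng(2).standard_normal(v.size))

def residuals(theta):
    return jwl_pressure(v, *theta) - p

lb = [0.0, 0.0, 0.0, 1.0, 0.1, 0.0]      # box constraints on (A, B, C, R1, R2, omega)
ub = [2000.0, 100.0, 10.0, 10.0, 3.0, 1.0]
fit = least_squares(residuals, x0=[400, 8, 0.5, 4, 1, 0.25], bounds=(lb, ub), method="trf")
print(fit.x)  # should land near the generating parameters
```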

23 pages, 1093 KiB  
Article
ADDAEIL: Anomaly Detection with Drift-Aware Ensemble-Based Incremental Learning
by Danlei Li, Nirmal-Kumar C. Nair and Kevin I-Kai Wang
Algorithms 2025, 18(6), 359; https://doi.org/10.3390/a18060359 - 11 Jun 2025
Abstract
Time series anomaly detection in streaming environments faces persistent challenges due to concept drift, which gradually degrades model reliability. In this paper, we propose Anomaly Detection with Drift-Aware Ensemble-based Incremental Learning (ADDAEIL), an unsupervised anomaly detection framework that incrementally adapts to concept drift in non-stationary streaming time series data. ADDAEIL integrates a hybrid drift detection mechanism that combines statistical distribution tests with structural-based performance evaluation of base detectors in Isolation Forest. This design enables unsupervised detection and continuous adaptation to evolving data patterns. Based on the estimated drift intensity, an adaptive update strategy selectively replaces degraded base detectors. This allows the anomaly detection model to incorporate new information while preserving useful historical behavior. Experiments on both real-world and synthetic datasets show that ADDAEIL consistently outperforms existing state-of-the-art methods and maintains robust long-term performance in non-stationary data streams. Full article
(This article belongs to the Special Issue Machine Learning for Edge Computing)
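A compressed sketch of the adaptation loop as the abstract describes it (the KS drift test, the intensity-scaled replacement count, and the "weakest detector" criterion are simplified assumptions; ADDAEIL's structural performance evaluation is not reproduced):

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import IsolationForest

def adapt(ensemble, reference, window, alpha=0.01):
    """ensemble: list of small IsolationForest detectors trained on past windows.
    On distributional drift (two-sample KS test), retrain the weakest detectors
    on the new window; otherwise leave the ensemble untouched."""
    stat, pvalue = ks_2samp(reference.ravel(), window.ravel())
    if pvalue < alpha:                                   # drift detected
        n_replace = max(1, int(stat * len(ensemble)))    # intensity-scaled update
        # "Weakest" here (our proxy): lowest mean normality score on the new window.
        margins = [m.score_samples(window).mean() for m in ensemble]
        for i in np.argsort(margins)[:n_replace]:
            ensemble[i] = IsolationForest(n_estimators=25).fit(window)
    return ensemble

def score(ensemble, X):
    """Ensemble anomaly score: mean of member scores (lower = more anomalous)."""
    return np.mean([m.score_samples(X) for m in ensemble], axis=0)
```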

20 pages, 25324 KiB  
Article
DGSS-YOLOv8s: A Real-Time Model for Small and Complex Object Detection in Autonomous Vehicles
by Siqiang Cheng, Lingshan Chen and Kun Yang
Algorithms 2025, 18(6), 358; https://doi.org/10.3390/a18060358 - 11 Jun 2025
Abstract
Object detection in complex road scenes is vital for autonomous driving, facing challenges such as object occlusion, small target sizes, and irregularly shaped targets. To address these issues, this paper introduces DGSS-YOLOv8s, a model designed to enhance detection accuracy and high-FPS performance within the You Only Look Once version 8 small (YOLOv8s) framework. The key innovation lies in the synergistic integration of several architectural enhancements: the DCNv3_LKA_C2f module, leveraging Deformable Convolution v3 (DCNv3) and Large Kernel Attention (LKA) for better the capture of complex object shapes; an Optimized Feature Pyramid Network structure (Optimized-GFPN) for improved multi-scale feature fusion; the Detect_SA module, incorporating spatial Self-Attention (SA) at the detection head for broader context awareness; and an Inner-Shape Intersection over Union (IoU) loss function to improve bounding box regression accuracy. These components collectively target the aforementioned challenges in road environments. Evaluations on the Berkeley DeepDrive 100K (BDD100K) and Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) datasets demonstrate the model’s effectiveness. Compared to baseline YOLOv8s, DGSS-YOLOv8s achieves mean Average Precision (mAP)@50 improvements of 2.4% (BDD100K) and 4.6% (KITTI). Significant gains were observed for challenging categories, notably 87.3% mAP@50 for cyclists on KITTI, and small object detection (AP-small) improved by up to 9.7% on KITTI. Crucially, DGSS-YOLOv8s achieved high processing speeds suitable for autonomous driving, operating at 103.1 FPS (BDD100K) and 102.5 FPS (KITTI) on an NVIDIA GeForce RTX 4090 GPU. These results highlight that DGSS-YOLOv8s effectively balances enhanced detection accuracy for complex scenarios with high processing speed, demonstrating its potential for demanding autonomous driving applications. Full article
(This article belongs to the Special Issue Advances in Computer Vision: Emerging Trends and Applications)

25 pages, 1991 KiB  
Article
Crude Oil and Hot-Rolled Coil Futures Price Prediction Based on Multi-Dimensional Fusion Feature Enhancement
by Yongli Tang, Zhenlun Gao, Ya Li, Zhongqi Cai, Jinxia Yu and Panke Qin
Algorithms 2025, 18(6), 357; https://doi.org/10.3390/a18060357 - 11 Jun 2025
Abstract
Forecasting crude oil and hot-rolled coil futures prices requires moving beyond the constraints of conventional approaches to effectively predict short-term price fluctuations, develop quantitative trading strategies, and model time series data, with the goal of enhancing prediction accuracy and stability to support decision-making and risk management in financial markets. To this end, a novel multi-dimensional fusion feature-enhanced (MDFFE) prediction method has been devised, together with a data augmentation framework based on multi-dimensional feature engineering. Technical indicators, volatility indicators, time features, and cross-variety linkage features are integrated into a prediction system, and lag feature design is used to prevent data leakage. In addition, a deep fusion model is constructed that combines the temporal feature extraction ability of a convolutional neural network with the nonlinear mapping strength of an extreme gradient boosting tree; a three-layer convolutional structure and an adaptive weight fusion strategy yield an end-to-end prediction framework. Experimental results demonstrate that the MDFFE model excels on various metrics, including mean absolute error, root mean square error, mean absolute percentage error, coefficient of determination, and sum of squared errors. The mean absolute error reaches as low as 0.0068, while the coefficient of determination reaches 0.9970. The significance and stability of the model's performance were verified with statistical methods such as a paired t-test and an analysis of variance (ANOVA). The MDFFE algorithm offers a robust and practical approach to predicting commodity futures prices, holding significant theoretical and practical value in financial market forecasting by enhancing prediction accuracy and mitigating forecast volatility. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

25 pages, 5824 KiB  
Article
Identifying Hubs Through Influential Nodes in Transportation Network by Using a Gravity Centrality Approach
by Worawit Tepsan, Aniwat Phaphuangwittayakul, Saronsad Sokantika and Napat Harnpornchai
Algorithms 2025, 18(6), 356; https://doi.org/10.3390/a18060356 - 10 Jun 2025
Abstract
Hubs are strategic locations that function as central nodes within clusters of cities, playing a pivotal role in the distribution of goods, services, and connectivity. Identifying these vital hubs—through analyzing influential locations within transportation networks—is essential for effective urban planning, logistics optimization, and enhancing infrastructure resilience. This task becomes even more crucial in developing and less-developed countries, where such hubs can significantly accelerate urban growth and drive economic development. However, existing hub identification approaches face notable limitations. Traditional centrality measures often yield low variance in node scores, making it difficult to distinguish truly influential nodes. Moreover, these methods typically rely solely on either local metrics or global network structures, limiting their effectiveness. To address these challenges, we propose a novel method called Hybrid Community-based Gravity Centrality (HCGC), which integrates local influence measures, community detection, and gravity-based modeling to more effectively identify influential nodes in complex networks. Through extensive experiments, we demonstrate that HCGC consistently outperforms existing methods in terms of spreading ability across varying truncation radii. To further validate our approach, we introduce ThaiNet, a newly constructed real-world transportation network dataset. The results show that HCGC not only preserves the strengths of traditional local approaches but also captures broader structural patterns, making it a powerful and practical tool for real-world network analysis. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
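Gravity-style centralities score node i by an analogue of Newton's law, C(i) = Σ k_i·k_j / d(i,j)² over nodes j within a truncation radius, with k the node "mass" (e.g., degree) and d the shortest-path distance. The plain version is easy to sketch with networkx; the community weighting and local-influence terms that distinguish HCGC are left out:

```python
import networkx as nx

def gravity_centrality(G, radius=3):
    """Classical gravity centrality with degree as mass and shortest-path
    distance, truncated at the given radius (HCGC's community and hybrid
    local-influence terms are intentionally omitted)."""
    deg = dict(G.degree())
    scores = {}
    for i in G:
        lengths = nx.single_source_shortest_path_length(G, i, cutoff=radius)
        scores[i] = sum(deg[i] * deg[j] / d**2 for j, d in lengths.items() if d > 0)
    return scores

G = nx.karate_club_graph()
top = sorted(gravity_centrality(G).items(), key=lambda kv: -kv[1])[:5]
print(top)  # hub candidates in a classic benchmark graph
```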

27 pages, 2140 KiB  
Article
Effective Detection of Malicious Uniform Resource Locator (URLs) Using Deep-Learning Techniques
by Yirga Yayeh Munaye, Aneas Bekele Workneh, Yenework Belayneh Chekol and Atinkut Molla Mekonen
Algorithms 2025, 18(6), 355; https://doi.org/10.3390/a18060355 - 7 Jun 2025
Abstract
The rapid growth of internet usage in daily life has led to a significant increase in cyber threats, with malicious URLs serving as a common vehicle for cybercrime. Traditional detection methods often suffer from high false alarm rates and struggle to keep pace with evolving threats due to outdated feature extraction techniques and datasets. To address these limitations, we propose a deep learning-based approach for detecting malicious URLs. Our proposed method, the Char2B model, fuses BERT and CharBiGRU embeddings, further enhanced by a Conv1D layer with a kernel size of three and unit-sized stride and padding. After combining the embeddings, we used the BERT model as a baseline for comparison. The study involved collecting a dataset of 87,216 URLs, comprising both benign and malicious samples sourced from the Open Directory Project (DMOZ), PhishTank, and Any.Run. Models were trained on the training set and evaluated on the test set using standard metrics, including accuracy, precision, recall, and F1-score. Through iterative refinement, we optimized the model's performance to maximize its effectiveness. As a result, our proposed model achieved 98.50% accuracy, 98.27% precision, 98.69% recall, and a 98.48% F1-score, outperforming the baseline BERT model. Additionally, our model's false positive rate of 0.017 improved on the baseline model's 0.018. By effectively extracting and utilizing informative features, the model accurately classified URLs as benign or malicious, thereby improving detection capabilities. This study highlights the significance of our deep learning approach in strengthening cybersecurity by integrating advanced algorithms that enhance detection accuracy, bolster defense mechanisms, and contribute to a safer digital environment. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)

40 pages, 3827 KiB  
Review
A Review of Hybrid Vehicles Classification and Their Energy Management Strategies: An Exploration of the Advantages of Genetic Algorithms
by Yuede Pan, Kaifeng Zhong, Yubao Xie, Mingzhang Pan, Wei Guan, Li Li, Changye Liu, Xingjia Man, Zhiqing Zhang and Mantian Li
Algorithms 2025, 18(6), 354; https://doi.org/10.3390/a18060354 - 6 Jun 2025
Abstract
This paper presents a comprehensive analysis of hybrid electric vehicle (HEV) classification and energy management strategies (EMS), with a particular emphasis on the application and potential of genetic algorithms (GAs) in optimizing energy management strategies for hybrid electric vehicles. Initially, the paper categorizes hybrid electric vehicles based on mixing rates and power source configurations, elucidating the operational principles and the range of applicability for different hybrid electric vehicle types. Following this, the two primary categories of energy management strategies—rule-based and optimization-based—are introduced, emphasizing their significance in enhancing energy efficiency and performance, while also acknowledging their inherent limitations. Furthermore, the advantages of utilizing genetic algorithms in optimizing energy management systems for hybrid vehicles are underscored. As a global optimization technique, genetic algorithms are capable of effectively addressing complex multi-objective problems by circumventing local optima and identifying the global optimal solution. The adaptability and versatility of genetic algorithms allow them to conduct real-time optimization across diverse driving conditions. Genetic algorithms play a pivotal role in hybrid vehicle energy management and exhibit a promising future. When combined with other optimization techniques, genetic algorithms can augment the optimization potential for tackling complex tasks. Nonetheless, the advancement of this technique is confronted with challenges such as cost, battery longevity, and charging infrastructure, which significantly influence its widespread adoption and application. Full article
(This article belongs to the Section Parallel and Distributed Algorithms)

15 pages, 349 KiB  
Article
Evolutionary Optimization for the Classification of Small Molecules Regulating the Circadian Rhythm Period: A Reliable Assessment
by Antonio Arauzo-Azofra, Jose Molina-Baena and Maria Luque-Rodriguez
Algorithms 2025, 18(6), 353; https://doi.org/10.3390/a18060353 - 6 Jun 2025
Abstract
The circadian rhythm plays a crucial role in regulating biological processes, and its disruption is linked to various health issues. Identifying small molecules that influence the circadian period is essential for developing targeted therapies. This study explores the use of evolutionary optimization techniques to enhance the classification of these molecules. We applied a genetic algorithm to optimize feature selection and classification performance. Several tree-based learning classification algorithms (Decision Trees, Extra Trees, Random Forest, XGBoost) and a distance-based classifier (kNN) were employed. Their performance was evaluated using accuracy and F1-score, while considering their generalization ability with a validation set. The findings demonstrate that the proposed genetic algorithm improves classification accuracy and reduces overfitting compared to baseline models. Additionally, the use of variance in accuracy as a penalty factor may enhance the model’s reliability for real-world applications. Our study confirms that evolutionary optimization is an effective strategy for classifying small molecules regulating the circadian rhythm. The proposed approach not only improves predictive performance but also ensures a more robust model. Full article
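A bare-bones sketch of GA-driven feature selection of the kind the study builds on, bitmask individuals with a cross-validated fitness (the variance penalty appears here only as the mean-minus-std term; population sizes, operators, and the validation protocol are assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    scores = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                             X[:, mask], y, cv=3)
    return scores.mean() - scores.std()   # penalize accuracy variance for reliability

pop = rng.random((12, X.shape[1])) < 0.5  # boolean feature masks
for _ in range(8):                                     # generations
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-6:]]                # truncation selection
    cut = rng.integers(1, X.shape[1], size=6)
    children = np.array([np.concatenate([parents[i % 6][:c], parents[(i + 1) % 6][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    children ^= rng.random(children.shape) < 0.02      # bit-flip mutation
    pop = np.vstack([parents, children])
print(max(fitness(m) for m in pop))
```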

28 pages, 1589 KiB  
Systematic Review
ChatGPT in Education: A Systematic Review on Opportunities, Challenges, and Future Directions
by Yirga Yayeh Munaye, Wasyihun Admass, Yenework Belayneh, Atinkut Molla and Mekete Asmare
Algorithms 2025, 18(6), 352; https://doi.org/10.3390/a18060352 - 6 Jun 2025
Abstract
This study presents a systematic review on the integration of ChatGPT in education, examining its opportunities, challenges and future directions. Utilizing the PRISMA framework, the review analyzes 40 peer-reviewed studies published from 2020 to 2024. Opportunities identified include the potential for ChatGPT to foster individualized educational experiences, tailoring learning to meet the needs of individual students. Its capacity to automate grading and assessments is noted as a time-saving measure for educators, allowing them to focus on more interactive and engaging teaching methods. However, the study also addresses significant challenges associated with utilizing ChatGPT in educational contexts. Concerns regarding academic integrity are paramount, as students might misuse ChatGPT for cheating or plagiarism. Additionally, issues such as ChatGPT bias are highlighted, raising questions about the fairness and inclusivity of ChatGPT-generated content in educational materials. The necessity for ethical governance is emphasized, underscoring the importance of establishing clear policies to guide the responsible use of AI in education. The findings highlight several key trends regarding ChatGPT’s role in enhancing personalized learning, automating assessments, and providing support to educators. The review concludes by stressing the importance of identifying best practices to optimize ChatGPT’s effectiveness in teaching and learning environments. There is a clear need for future research focusing on adaptive ChatGPT regulation, which will be essential as educational stakeholders seek to understand and manage the long-term impacts of ChatGPT integration on pedagogy. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)

16 pages, 1400 KiB  
Article
An RMSprop-Incorporated Latent Factorization of Tensor Model for Random Missing Data Imputation in Structural Health Monitoring
by Jingjing Yang
Algorithms 2025, 18(6), 351; https://doi.org/10.3390/a18060351 - 6 Jun 2025
Abstract
In structural health monitoring (SHM), ensuring data completeness is critical for enhancing the accuracy and reliability of structural condition assessments. SHM data are prone to random missing values due to signal interference or connectivity issues, making precise data imputation essential. A latent factorization of tensor (LFT)-based method has proven effective for such problems, with optimization typically achieved via stochastic gradient descent (SGD). However, SGD-based LFT models and other imputation methods exhibit significant sensitivity to learning rates and slow tail-end convergence. To address these limitations, this study proposes an RMSprop-incorporated latent factorization of tensor (RLFT) model, which integrates an adaptive learning rate mechanism to dynamically adjust step sizes based on gradient magnitudes. Experimental validation on a scaled bridge accelerometer dataset demonstrates that RLFT achieves faster convergence and higher imputation accuracy compared to state-of-the-art models including SGD-based LFT and the long short-term memory (LSTM) network, with improvements of at least 10% in both imputation accuracy and convergence rate, offering a more efficient and reliable solution for missing data handling in SHM. Full article
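The core change is replacing SGD's fixed step with RMSprop's gradient-magnitude-scaled step inside the latent-factor updates. A skeletal CP-style third-order factorization with that update (the tensor layout, rank, and hyperparameters are assumptions):

```python
import numpy as np

def rmsprop_lft(idx, vals, shape, rank=8, lr=0.01, beta=0.9, eps=1e-8, epochs=50):
    """Impute missing entries of a sparse 3rd-order tensor (e.g., sensor x day x
    time-of-day SHM readings) from observed entries only.  idx: (n_obs, 3)."""
    rng = np.random.default_rng(0)
    U = [rng.normal(0, 0.1, (d, rank)) for d in shape]  # latent factor matrices
    G = [np.zeros((d, rank)) for d in shape]            # RMSprop running squared grads
    for _ in range(epochs):
        for (i, j, k), v in zip(idx, vals):
            err = np.sum(U[0][i] * U[1][j] * U[2][k]) - v
            grads = (err * U[1][j] * U[2][k],
                     err * U[0][i] * U[2][k],
                     err * U[0][i] * U[1][j])
            for m, (row, g) in enumerate(zip((i, j, k), grads)):
                G[m][row] = beta * G[m][row] + (1 - beta) * g * g
                U[m][row] -= lr * g / (np.sqrt(G[m][row]) + eps)  # adaptive step size
    return U

def predict(U, i, j, k):
    return float(np.sum(U[0][i] * U[1][j] * U[2][k]))
```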

24 pages, 2877 KiB  
Article
Memory-Efficient Batching for Time Series Transformer Training: A Systematic Evaluation
by Phanwadee Sinthong, Nam Nguyen, Vijay Ekambaram, Arindam Jati, Jayant Kalagnanam and Peeravit Koad
Algorithms 2025, 18(6), 350; https://doi.org/10.3390/a18060350 - 5 Jun 2025
Abstract
Transformer-based time series models are increasingly employed for time series data analysis. However, their training remains memory intensive, especially with high-dimensional data and extended look-back windows. While model-level memory optimizations are well studied, the batch formation process remains an underexplored source of inefficiency. This paper introduces a memory-efficient batching framework based on view-based sliding windows that operate directly on GPU-resident tensors. The approach eliminates the redundant data materialization caused by tensor stacking and reduces data transfer volumes without modifying model architectures. We present two variants of our solution: (1) per-batch optimization for datasets exceeding GPU memory, and (2) dataset-wise optimization for in-memory workloads. We evaluate the proposed batching framework systematically, using peak GPU memory consumption and epoch runtime as efficiency metrics across varying batch sizes, sequence lengths, feature dimensions, and model architectures. Results show consistent memory savings, averaging 90%, and runtime improvements of up to 33% across multiple transformer-based models (Informer, Autoformer, Transformer, and PatchTST) and a linear baseline (DLinear) without compromising model accuracy. We extensively validate our method on synthetic and standard real-world benchmarks, demonstrating accuracy preservation and practical scalability in distributed GPU environments. The results highlight batch formation as a critical component of training efficiency. Full article
(This article belongs to the Section Parallel and Distributed Algorithms)
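The central trick, as we read it, is to expose each look-back window as a view of one resident tensor instead of stacking copies. PyTorch's Tensor.unfold returns exactly such views; a minimal sketch with the batching policy and model omitted:

```python
import torch

series = torch.randn(100_000, 7, device="cuda" if torch.cuda.is_available() else "cpu")

def window_batches(data, lookback=96, batch_size=256):
    """Yield (batch, lookback, features) batches backed by views of `data`.
    unfold creates no copy: every window shares data's underlying storage."""
    windows = data.unfold(0, lookback, 1)        # (n_windows, features, lookback)
    windows = windows.permute(0, 2, 1)           # (n_windows, lookback, features)
    for start in range(0, windows.shape[0], batch_size):
        yield windows[start:start + batch_size]  # still a view, nothing materialized

for batch in window_batches(series):
    pass  # feed `batch` to the model; contiguity is only forced if a layer needs it
```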

22 pages, 9553 KiB  
Article
Testing the Effectiveness of Voxels for Structural Analysis
by Sara Gonizzi Barsanti and Ernesto Nappi
Algorithms 2025, 18(6), 349; https://doi.org/10.3390/a18060349 - 5 Jun 2025
Abstract
To assess the condition of cultural heritage assets for conservation, reality-based 3D models can be analyzed using finite element analysis (FEA) software, yielding valuable insights into their structural integrity. Three-dimensional point clouds obtained through photogrammetric and laser scanning techniques can be transformed into volumetric data suitable for FEA by utilizing voxels. When using the point cloud data directly in this process, it is crucial to employ the highest level of accuracy. The fidelity of raw point clouds can be compromised by various factors, including uncooperative materials or surfaces, poor lighting conditions, reflections, intricate geometries, and limitations in the precision of the instruments. Such noise not only skews the inherent structure of the point cloud but also introduces extraneous information. Hence, the geometric accuracy of the resulting model may be diminished, ultimately impacting the reliability of any analyses conducted upon it. The removal of noise from point clouds, a crucial step in 3D data processing known as point cloud denoising, is gaining significant attention due to its ability to reveal the true underlying point cloud structure. This paper focuses on evaluating the geometric precision of the voxelization process, which transforms denoised 3D point clouds into volumetric models suitable for structural analyses. Full article
(This article belongs to the Collection Feature Papers in Algorithms)
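Voxelization itself reduces to integer-dividing coordinates by the voxel pitch and keeping the occupied cells, and the geometric error under evaluation is governed by that quantization. A minimal NumPy sketch with an assumed pitch:

```python
import numpy as np

def voxelize(points, pitch=0.02):
    """points: (n, 3) denoised point cloud in meters; pitch: voxel edge length.
    Returns occupied voxel indices and the quantization residual per point."""
    origin = points.min(axis=0)
    ijk = np.floor((points - origin) / pitch).astype(np.int64)
    occupied = np.unique(ijk, axis=0)                     # one entry per filled voxel
    centers = origin + (ijk + 0.5) * pitch                # center of each point's voxel
    residual = np.linalg.norm(points - centers, axis=1)   # geometric error proxy
    return occupied, residual

cloud = np.random.default_rng(3).uniform(0, 1, size=(10_000, 3))
vox, err = voxelize(cloud, pitch=0.05)
print(len(vox), err.max() <= np.sqrt(3) * 0.05 / 2 + 1e-12)  # bounded by half-diagonal
```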

23 pages, 676 KiB  
Article
Numerical and Theoretical Treatments of the Optimal Control Model for the Interaction Between Diabetes and Tuberculosis
by Saburi Rasheed, Olaniyi S. Iyiola, Segun I. Oke and Bruce A. Wade
Algorithms 2025, 18(6), 348; https://doi.org/10.3390/a18060348 - 5 Jun 2025
Abstract
We primarily focus on the formulation, theoretical, and numerical analyses of a non-autonomous model for tuberculosis (TB) prevention and control programs in a population where individuals suffering from the double trouble of tuberculosis and diabetes are present. The model incorporates four time-dependent control functions, saturated treatment of non-infectious individuals harboring tuberculosis, and saturated incidence rate. Furthermore, the basic reproduction number of the autonomous form of the proposed optimal control mathematical model is calculated. Sensitivity indexes regarding the constant control parameters reveal that the proposed control and preventive measures will reduce the tuberculosis burden in the population. This study establishes that the combination of campaigns that teach people how the development of tuberculosis and diabetes can be prevented, a treatment strategy that provides saturated treatment to non-infectious individuals exposed to tuberculosis infections, and prompt effective treatment of individuals infected with tuberculosis disease is the optimal strategy to achieve zero TB by 2035. Full article

16 pages, 2603 KiB  
Article
A Novel Model for Accurate Daily Urban Gas Load Prediction Using Genetic Algorithms
by Xi Chen, Feng Wang, Li Xu, Taiwu Xia, Minhao Wang, Gangping Chen, Longyu Chen and Jun Zhou
Algorithms 2025, 18(6), 347; https://doi.org/10.3390/a18060347 - 5 Jun 2025
Abstract
As natural gas consumption increases year by year, the shortage of urban natural gas reserves makes the gas supply–demand imbalance increasingly serious. Establishing a sound daily gas load forecasting model is therefore essential to ensure accurate and reliable predictions. Most current prediction models combine the characteristics of the gas data with the forecasting method itself, while external influencing factors are often under-considered. To address this problem, the basic concept of the multiple weather parameter (MWP) was introduced, and the influence of factors such as average temperature, solar radiation, cumulative temperature, wind power, and the temperature variation of building foundations on the daily urban gas load was analyzed. A multiple weather parameter–daily load prediction (MWP-DLP) model based on System Thermal Days (STD) was established and solved using a genetic algorithm. The daily gas load of a city was predicted, and the results were analyzed. They show that the trend of the daily load predicted by the MWP-DLP model was basically consistent with the actual values: the maximum relative error was 8.2%, and the mean absolute percentage error (MAPE) was 2.68%. These results verify the feasibility of the MWP-DLP model, which has practical significance for gas companies in reasonably formulating peak-shaving schemes and natural gas reserves. Full article
(This article belongs to the Special Issue Artificial Intelligence for More Efficient Renewable Energy Systems)
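To make the genetic-algorithm solution step concrete, the sketch below fits the weights of a toy linear weather-parameter model by minimizing MAPE, the same error metric the paper reports. The feature set, the linear form, and all GA settings (population size, truncation selection, Gaussian mutation) are illustrative assumptions, not the MWP-DLP formulation.

```python
# A minimal sketch of fitting weather-parameter weights with a genetic
# algorithm, in the spirit of the MWP-DLP model; everything here is a
# stand-in, including the synthetic "weather" features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(365, 4))            # avg temp, radiation, wind, cum. temp (stand-ins)
true_w = np.array([3.0, -1.0, 0.5, 2.0])
y = X @ true_w + 50 + rng.normal(scale=0.5, size=365)  # daily load, arbitrary units

def mape(w):
    pred = X @ w[:4] + w[4]              # last gene is the intercept
    return np.mean(np.abs((y - pred) / y))

pop = rng.normal(scale=5, size=(60, 5))  # initial population of candidate weights
for gen in range(200):
    fit = np.array([mape(w) for w in pop])
    parents = pop[np.argsort(fit)[:20]]                  # truncation selection
    children = parents[rng.integers(0, 20, 40)] \
        + rng.normal(scale=0.3, size=(40, 5))            # Gaussian mutation
    pop = np.vstack([parents, children])                 # elitism + offspring

best = pop[np.argmin([mape(w) for w in pop])]
print("MAPE:", round(mape(best) * 100, 2), "%")
```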
14 pages, 698 KiB  
Article
Inferring the Timing of Antiretroviral Therapy by Zero-Inflated Random Change Point Models Using Longitudinal Data Subject to Left-Censoring
by Hongbin Zhang, McKaylee Robertson, Sarah L. Braunstein, David B. Hanna, Uriel R. Felsen, Levi Waldron and Denis Nash
Algorithms 2025, 18(6), 346; https://doi.org/10.3390/a18060346 - 5 Jun 2025
Viewed by 608
Abstract
We propose a new random change point model that utilizes routinely recorded individual-level HIV viral load data to estimate the timing of antiretroviral therapy (ART) initiation in people living with HIV. The change point distribution is assumed to follow a zero-inflated exponential distribution, the longitudinal data are subject to left-censoring, and the underlying data-generating mechanism is a nonlinear mixed-effects model. We extend the Stochastic EM (StEM) algorithm by combining a Gibbs sampler with Metropolis–Hastings sampling. We apply the method to real HIV data to infer the timing of ART initiation since diagnosis. Additionally, we conduct simulation studies to assess the performance of our proposed method. Full article
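For readers unfamiliar with the sampling machinery, the sketch below shows the generic random-walk Metropolis–Hastings update that a Stochastic EM iteration can embed inside a Gibbs sweep. The log-target is a stand-in normal kernel; the paper's sampler targets the conditional distribution of the random change points under the nonlinear mixed-effects model with left-censoring.

```python
# A minimal sketch of one random-walk Metropolis-Hastings step, the
# building block a StEM algorithm can use in place of the intractable
# E-step. The target below is a generic placeholder, not the paper's model.
import math, random

def log_target(theta: float) -> float:
    """Hypothetical unnormalized log-posterior of one change point."""
    return -0.5 * (theta - 2.0) ** 2        # stand-in: N(2, 1) kernel

def mh_step(theta: float, step: float = 0.5) -> float:
    """One random-walk MH update; returns the (possibly unchanged) state."""
    proposal = theta + random.gauss(0.0, step)
    log_alpha = log_target(proposal) - log_target(theta)
    return proposal if math.log(random.random()) < log_alpha else theta

# In a StEM iteration, draws like this impute the latent change points;
# the M-step then maximizes the complete-data likelihood given the draws.
random.seed(1)
theta, samples = 0.0, []
for _ in range(5000):
    theta = mh_step(theta)
    samples.append(theta)
print("posterior mean ≈", round(sum(samples[1000:]) / 4000, 2))  # burn-in discarded
```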
27 pages, 552 KiB  
Article
Automatic Generation of Synthesisable Hardware Description Language Code of Multi-Sequence Detector Using Grammatical Evolution
by Bilal Majeed, Rajkumar Sarma, Ayman Youssef, Douglas Mota Dias and Conor Ryan
Algorithms 2025, 18(6), 345; https://doi.org/10.3390/a18060345 - 5 Jun 2025
Viewed by 639
Abstract
Quickly designing digital circuits that are both correct and efficient poses significant challenges. Electronics, especially those incorporating sequential logic circuits, are complex to design and test. While Electronic Design Automation (EDA) tools aid designers, they do not fully automate the creation of synthesisable circuits that can be directly translated into hardware. This paper introduces a system that employs Grammatical Evolution (GE) to automatically generate synthesisable Hardware Description Language (HDL) code for the Finite State Machine (FSM) of a Multi-Sequence Detector (MSD). This MSD differs significantly from prior work in that it detects multiple sequences, in contrast to the single-sequence detectors discussed in the existing literature. Sequence Detectors (SDs) are essential in circuits that detect sequences of specific events to produce timely alerts. The proposed MSD is applied to a real-time vending machine scenario, enabling customer selections upon successful payment; moreover, the technique can evolve any MSD, such as a traffic light control system or a robot navigation system. We examine two parent selection techniques, Tournament Selection (TS) and Lexicase Selection (LS), and find that LS performs better than TS, although both techniques successfully produce synthesisable hardware solutions. Both hand-crafted "Gold" and evolved circuits are synthesised using Generic Process Design Kit (GPDK) technologies at 45 nm, 90 nm, and 180 nm scales, demonstrating their efficacy. Full article
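The core of Grammatical Evolution is its genotype-to-phenotype mapping: integer codons select grammar productions modulo the number of available rules. The sketch below runs that mapping on a toy Verilog-flavoured expression grammar; the grammar, genome, and wrapping budget are illustrative, not the paper's FSM grammar.

```python
# A minimal sketch of Grammatical Evolution's genotype-to-phenotype
# mapping: each codon picks a production modulo the rule count for the
# current non-terminal. Toy grammar for a next-state expression.
GRAMMAR = {
    "<expr>": [["<expr>", " & ", "<expr>"], ["<expr>", " | ", "<expr>"], ["<var>"]],
    "<var>":  [["state"], ["in_bit"], ["!state"], ["!in_bit"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Expand the start symbol by consuming codons left to right."""
    symbols, out, i = [start], [], 0
    budget = len(genome) * (max_wraps + 1)   # allow genome wrapping
    while symbols and i < budget:
        sym = symbols.pop(0)
        if sym in GRAMMAR:
            rules = GRAMMAR[sym]
            choice = rules[genome[i % len(genome)] % len(rules)]  # codon mod rules
            symbols = list(choice) + symbols
            i += 1
        else:
            out.append(sym)                  # terminal symbol: emit it
    return "".join(out) if not symbols else None  # None = incomplete mapping

print(ge_map([7, 2, 11, 3, 5, 0, 2, 9]))   # -> "!in_bit | state & in_bit"
```

Fitness in such a system is typically scored by simulating the mapped HDL against the target sequences, which is where selection schemes like TS and LS come into play.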
15 pages, 920 KiB  
Article
A Novel Connected-Components Algorithm for 2D Binarized Images
by Costin-Anton Boiangiu, Giorgiana-Violeta Vlăsceanu, Constantin-Eduard Stăniloiu, Nicolae Tarbă and Mihai-Lucian Voncilă
Algorithms 2025, 18(6), 344; https://doi.org/10.3390/a18060344 - 5 Jun 2025
Viewed by 511
Abstract
This paper introduces a new memory-efficient algorithm for connected-components labeling in binary images based on run-length encoding. Unlike conventional pixel-based methods that scan and label individual pixels using global buffers or disjoint-set structures, our approach encodes rows as linked segments and merges them using a union-by-size strategy. We accelerate run detection with a precomputed 16-bit cache of binary patterns, allowing fast decoding without relying on bitwise CPU instructions. Compared with other run-length encoded algorithms, such as the Scan-Based Labeling Algorithm or Run-Based Two-Scan, our method is up to 35% faster on most real-world datasets. While other binary-optimized algorithms, such as Bit-Run Two-Scan and Bit-Merge Run Scan, are up to 45% faster than ours, they require considerably more memory; on some large document datasets, our method reduces memory consumption by up to 80%. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
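As a sketch of the run-length approach, the code below encodes each row as runs, links vertically overlapping runs with a union-by-size disjoint set, and counts components under assumed 4-connectivity. The authors' 16-bit pattern cache for fast run detection is omitted here.

```python
# A minimal sketch of run-length-based connected-components labeling
# with union-by-size; illustrative only, not the paper's implementation.
import numpy as np

def runs_of(row):
    """Return (start, end) pairs of consecutive 1-pixels in a binary row."""
    d = np.diff(np.concatenate(([0], row, [0])))
    return list(zip(np.where(d == 1)[0], np.where(d == -1)[0]))  # end exclusive

def count_components(img):
    parent, size = [], []
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:
            ra, rb = rb, ra                  # union by size
        parent[rb] = ra
        size[ra] += size[rb]
    prev = []                                # runs of the row above: (s, e, id)
    for row in img:
        cur = []
        for s, e in runs_of(row):
            parent.append(len(parent)); size.append(1)
            rid = len(parent) - 1
            for ps, pe, pid in prev:
                if ps < e and s < pe:        # vertical overlap => same component
                    union(rid, pid)
            cur.append((s, e, rid))
        prev = cur
    return len({find(i) for i in range(len(parent))})

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 0, 0, 1]], dtype=np.uint8)
print(count_components(img), "connected components")  # expect 2
```

Because whole runs rather than pixels are merged, the number of union operations scales with the number of segments, which is the source of the memory savings the abstract reports.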