
Algorithms, Volume 18, Issue 8 (August 2025) – 83 articles

Cover Story: Inspired by biological development and evolutionary theory, this work presents a generative design algorithm driven by graph-based controllers. Truss structures (used as the benchmark) grow by coupling nature-inspired rules with engineering analysis. Local graph-inspired controllers, Graph Neural Networks (GNNs), and Cartesian Genetic Programming (CGP) coordinate the adaptation. The introduced methods lead to optimized structures while providing transparent engineering rules that can be reused across different design loads and constraints.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
14 pages, 838 KiB  
Article
Research on Commuting Mode Split Model Based on Dominant Transportation Distance
by Jinhui Tan, Shuai Teng, Zongchao Liu, Wei Mao and Minghui Chen
Algorithms 2025, 18(8), 534; https://doi.org/10.3390/a18080534 - 21 Aug 2025
Viewed by 92
Abstract
Conventional commuting mode split models are characterized by inherent limitations in dynamic adaptability, primarily due to persistent dependence on periodic survey data with significant temporal gaps. A dominant transportation distance-based modeling framework for commuting mode choice is proposed, formalizing a generalized cost function. Through the application of random utility theory, probability density curves are generated to quantify mode-specific dominant distance ranges across three demographic groups: car-owning households, non-car households, and collective households. Empirical validation was conducted using Dongguan as a case study, with model parameters calibrated against 2015 resident travel survey data. Parameter updates are dynamically executed through the integration of big data sources (e.g., mobile signaling and LBS). Successful implementation has been achieved in maintaining Dongguan’s transportation models during the 2021 and 2023 iterations. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))
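The random-utility step described above can be sketched with a standard multinomial logit: each mode's generalized cost maps to a utility, and mode shares follow from the softmax of utilities. The modes, costs, and scale parameter below are illustrative placeholders, not the paper's calibrated values.

```python
import math

def mode_split(costs, theta=1.0):
    """Multinomial-logit mode shares from generalized costs (lower cost -> higher share)."""
    utilities = {mode: -theta * c for mode, c in costs.items()}
    m = max(utilities.values())                       # stabilize the exponentials
    exp_u = {mode: math.exp(u - m) for mode, u in utilities.items()}
    total = sum(exp_u.values())
    return {mode: v / total for mode, v in exp_u.items()}

# Hypothetical generalized costs (time + fare, in equivalent minutes) for one commute
costs = {"walk": 75.0, "bike": 30.0, "bus": 35.0, "car": 25.0}
shares = mode_split(costs, theta=0.1)
```

With these numbers the car, having the lowest generalized cost, takes the largest share; varying the distance-dependent cost terms is what produces the mode-specific dominant distance ranges.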

16 pages, 707 KiB  
Article
High-Resolution Human Keypoint Detection: A Unified Framework for Single and Multi-Person Settings
by Yuhuai Lin, Kelei Li and Haihua Wang
Algorithms 2025, 18(8), 533; https://doi.org/10.3390/a18080533 - 21 Aug 2025
Viewed by 148
Abstract
Human keypoint detection has become a fundamental task in computer vision, underpinning a wide range of downstream applications such as action recognition, intelligent surveillance, and human–computer interaction. Accurate localization of keypoints is crucial for understanding human posture, behavior, and interactions in various environments. In this paper, we propose a deep-learning-based human skeletal keypoint detection framework that leverages a High-Resolution Network (HRNet) to achieve robust and precise keypoint localization. Our method maintains high-resolution representations throughout the entire network, enabling effective multi-scale feature fusion, without sacrificing spatial details. This approach preserves the fine-grained spatial information that is often lost in conventional downsampling-based methods. To evaluate its performance, we conducted extensive experiments on the COCO dataset, where our approach achieved competitive performance in terms of Average Precision (AP) and Average Recall (AR), outperforming several state-of-the-art methods. Furthermore, we extended our pipeline to support multi-person keypoint detection in real-time scenarios, ensuring scalability for complex environments. Experimental results demonstrated the effectiveness of our method in both single-person and multi-person settings, providing a comprehensive and flexible solution for various pose estimation tasks in dynamic real-world applications. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

36 pages, 5771 KiB  
Article
Improving K-Means Clustering: A Comparative Study of Parallelized Version of Modified K-Means Algorithm for Clustering of Satellite Images
by Yuv Raj Pant, Larry Leigh and Juliana Fajardo Rueda
Algorithms 2025, 18(8), 532; https://doi.org/10.3390/a18080532 - 21 Aug 2025
Viewed by 188
Abstract
Efficient clustering of high-spatial-dimensional satellite image datasets remains a critical challenge, particularly due to the computational demands of spectral distance calculations, random centroid initialization, and sensitivity to outliers in conventional K-Means algorithms. This study presents a comprehensive comparative analysis of eight parallelized variants of the K-Means algorithm, designed to enhance clustering efficiency and reduce computational burden for large-scale satellite image analysis. The proposed parallelized implementations incorporate optimized centroid initialization for better starting-point selection, a dynamic K-Means sharp method to detect outliers and improve cluster robustness, and a Nearest-Neighbor Iteration Calculation Reduction method to minimize redundant computations. These enhancements were applied to a test set of 114 global land cover data cubes, each comprising high-dimensional satellite images of size 3712 × 3712 × 16, and executed on a multi-core CPU architecture to leverage extensive parallel processing capabilities. Performance was evaluated across three criteria: convergence speed (iterations), computational efficiency (execution time), and clustering accuracy (RMSE). The Parallelized Enhanced K-Means (PEKM) method achieved the fastest convergence at 234 iterations and the lowest execution time of 4230 h, while maintaining consistent RMSE values (0.0136) across all algorithm variants. These results demonstrate that targeted algorithmic optimizations, combined with effective parallelization strategies, can improve the practicality of K-Means clustering for high-dimensional satellite image analysis. This work underscores the potential of improving K-Means clustering frameworks beyond hardware acceleration alone, offering scalable solutions for large-scale unsupervised image classification tasks. Full article
(This article belongs to the Special Issue Algorithms in Multi-Sensor Imaging and Fusion)
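The idea of skipping distance computations that provably cannot change an assignment can be illustrated with a triangle-inequality test of the kind used in Elkan-style K-Means: if a point is within half the distance from its current centroid to the nearest other centroid, no other centroid can be closer. This is a minimal serial sketch with made-up 2D points, not the authors' parallel implementation.

```python
import random

def kmeans_pruned(points, k, iters=50, seed=0):
    """K-Means with a triangle-inequality check that skips provably redundant
    distance computations: if d(x, c) <= d(c, c')/2 for the nearest other
    centroid c', then c is guaranteed to remain x's closest centroid."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    assign = [0] * len(points)
    for _ in range(iters):
        # half the distance from each centroid to its nearest other centroid
        half_sep = [min(dist(c, c2) for j2, c2 in enumerate(centroids) if j2 != j) / 2
                    for j, c in enumerate(centroids)]
        changed = False
        for i, x in enumerate(points):
            if dist(x, centroids[assign[i]]) <= half_sep[assign[i]]:
                continue                  # current centroid provably still closest
            j_best = min(range(k), key=lambda j: dist(x, centroids[j]))
            if j_best != assign[i]:
                assign[i], changed = j_best, True
        # recompute centroids as cluster means (keep old centroid if cluster empties)
        for j in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members) for c in zip(*members))
        if not changed:
            break
    return assign, centroids

labels, centers = kmeans_pruned(
    [(0.0, 0.0), (0.1, 0.2), (0.2, 0.0), (5.0, 5.0), (5.1, 4.9), (4.8, 5.2)], k=2)
```

The pruning test is exact, so the result matches plain Lloyd iterations; the saving grows with well-separated clusters, which is what makes the approach attractive for large data cubes.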

40 pages, 17003 KiB  
Article
Marine Predators Algorithm-Based Robust Composite Controller for Enhanced Power Sharing and Real-Time Voltage Stability in DC–AC Microgrids
by Md Saiful Islam, Tushar Kanti Roy and Israt Jahan Bushra
Algorithms 2025, 18(8), 531; https://doi.org/10.3390/a18080531 - 20 Aug 2025
Viewed by 170
Abstract
Hybrid AC/DC microgrids (HADCMGs), which integrate renewable energy sources and battery storage systems, often face significant stability challenges due to their inherently low inertia and highly variable power inputs. To address these issues, this paper proposes a novel, robust composite controller based on backstepping fast terminal sliding mode control (BFTSMC). This controller is further enhanced with a virtual capacitor to emulate synthetic inertia and with a fractional power-based reaching law, which ensures smooth and finite-time convergence. Moreover, the proposed control strategy ensures the effective coordination of power sharing between AC and DC sub-grids through bidirectional converters, thereby maintaining system stability during rapid fluctuations in load or generation. To achieve optimal control performance under diverse and dynamic operating conditions, the controller gains are adaptively tuned using the marine predators algorithm (MPA), a nature-inspired metaheuristic optimization technique. Furthermore, the stability of the closed-loop system is rigorously established through control Lyapunov function analysis. Extensive simulation results conducted in the MATLAB/Simulink environment demonstrate that the proposed controller significantly outperforms conventional methods by eliminating steady-state error, reducing the settling time by up to 93.9%, and minimizing overshoot and undershoot. In addition, real-time performance is validated via processor-in-the-loop (PIL) testing, thereby confirming the controller’s practical feasibility and effectiveness in enhancing the resilience and efficiency of HADCMG operations. Full article

25 pages, 2133 KiB  
Article
Blockchain-Enabled Self-Autonomous Intelligent Transport System for Drone Task Workflow in Edge Cloud Networks
by Pattaraporn Khuwuthyakorn, Abdullah Lakhan, Arnab Majumdar and Orawit Thinnukool
Algorithms 2025, 18(8), 530; https://doi.org/10.3390/a18080530 - 20 Aug 2025
Viewed by 138
Abstract
In recent years, self-autonomous intelligent transportation applications such as drones and autonomous vehicles have seen rapid development and deployment across various countries. Within the domain of artificial intelligence, self-autonomous agents are defined as software entities capable of independently operating drones in an intelligent transport system (ITS) without human intervention. The integration of these agents into autonomous vehicles and their deployment across distributed cloud networks have increased significantly. These systems, which include drones, ground vehicles, and aircraft, are used to perform a wide range of tasks such as delivering passengers and packages within defined operational boundaries. Despite their growing utility, practical implementations face significant challenges stemming from the heterogeneity of network resources, as well as persistent issues related to security, privacy, and processing costs. To overcome these challenges, this study proposes a novel blockchain-enabled self-autonomous intelligent transport system designed for drone workflow applications. The proposed system architecture is based on a remote method invocation (RMI) client–server model and incorporates a serverless computing framework to manage processing costs. Termed the self-autonomous blockchain-enabled cost-efficient system (SBECES), the framework integrates a client and system agent mechanism governed by Q-learning and deep-learning-based policies. Furthermore, it incorporates a blockchain-based hash validation and fault-tolerant (HVFT) mechanism to ensure data integrity and operational reliability. A deep reinforcement learning (DRL)-enabled adaptive scheduler is utilized to manage drone workflow execution while meeting quality of service (QoS) constraints, including deadlines, cost-efficiency, and security. The overarching objective of this research is to minimize the total processing costs that comprise execution, communication, and security overheads, while maximizing operational rewards and ensuring the timely execution of drone-based tasks. Experimental results demonstrate that the proposed system achieves a 30% reduction in processing costs and a 29% improvement in security and privacy compared to existing state-of-the-art solutions. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
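A Q-learning-governed agent of the kind mentioned above can be illustrated with a toy tabular learner that decides where to run a task so as to minimize expected processing cost. The states, actions, and cost table below are hypothetical stand-ins, not the SBECES design.

```python
import random

def train_scheduler(episodes=2000, alpha=0.1, eps=0.2, seed=1):
    """Tabular Q-learning for a toy offloading decision: run a task at the
    'edge' (cheap, but tight deadlines are risky) or in the 'cloud' (costlier,
    but fast). States are deadline classes; rewards are negative costs."""
    rng = random.Random(seed)
    states, actions = ("tight", "loose"), ("edge", "cloud")
    # Hypothetical expected costs (execution + communication + security overhead)
    cost = {("tight", "edge"): 9.0,   # deadline misses make the edge expensive
            ("tight", "cloud"): 4.0,
            ("loose", "edge"): 2.0,
            ("loose", "cloud"): 5.0}
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        a = rng.choice(actions) if rng.random() < eps \
            else max(actions, key=lambda x: q[(s, x)])
        r = -cost[(s, a)] + rng.gauss(0, 0.1)      # noisy negative cost as reward
        q[(s, a)] += alpha * (r - q[(s, a)])       # one-step value update
    return {s: max(actions, key=lambda a: q[(s, a)]) for s in states}

policy = train_scheduler()
```

After training, the learned policy routes deadline-critical tasks to the faster resource and relaxed tasks to the cheaper one, which is the qualitative behavior a cost-minimizing adaptive scheduler aims for.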

19 pages, 2306 KiB  
Article
Optimized Adaptive Multi-Scale Architecture for Surface Defect Recognition
by Xueli Chang, Yue Wang, Heping Zhang, Bogdan Adamyk and Lingyu Yan
Algorithms 2025, 18(8), 529; https://doi.org/10.3390/a18080529 - 20 Aug 2025
Viewed by 254
Abstract
Detection of defects on steel surfaces is crucial for industrial quality control. To address the issues of structural complexity, high parameter volume, and poor real-time performance in current detection models, this study proposes a lightweight model based on an improved YOLOv11. The model first reconstructs the backbone network by introducing a Reversible Connected Multi-Column Network (RevCol) to effectively preserve multi-level feature information. Second, the lightweight FasterNet is embedded into the C3k2 module, utilizing Partial Convolution (PConv) to reduce computational overhead. Additionally, a Group Convolution-driven EfficientDetect head is designed to maintain high-performance feature extraction while minimizing consumption of computational resources. Finally, a novel WISEPIoU loss function is developed by integrating WISE-IoU and POWERFUL-IoU to accelerate model convergence and optimize the accuracy of bounding box regression. Experiments on the NEU-DET dataset demonstrate that the improved model reduces parameters by 39.1% and computational complexity by 49.2% compared with the baseline, with an mAP@0.5 of 0.758 and real-time performance of 91 FPS. On the DeepPCB dataset, the model likewise reduces parameters and computations by 39.1% and 49.2%, respectively, with mAP@0.5 = 0.985 and real-time performance of 64 FPS. The study validates that the proposed lightweight framework effectively balances accuracy and efficiency, providing a practical solution for real-time defect detection in resource-constrained environments. Full article
(This article belongs to the Special Issue Visual Attributes in Computer Vision Applications)
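The bounding-box regression term above is built on IoU, so a plain IoU loss is sketched below as a reference point. Variants such as WISE-IoU and POWERFUL-IoU modify this base quantity with additional per-box weighting and penalty terms, which are not reproduced here.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target):
    """Plain IoU loss: 0 for a perfect match, 1 for disjoint boxes."""
    return 1.0 - iou(pred, target)
```

Because the loss is bounded and scale-invariant, reweighting it per box (as the IoU variants do) changes how hard easy versus hard examples are pushed during training without altering the underlying overlap measure.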

17 pages, 8132 KiB  
Article
DGNCA-Net: A Lightweight and Efficient Insulator Defect Detection Model
by Qiang Chen, Yuanfeng Luo, Wu Yuan, Ruiliang Zhang and Yunshou Mao
Algorithms 2025, 18(8), 528; https://doi.org/10.3390/a18080528 - 20 Aug 2025
Viewed by 207
Abstract
This paper proposes a lightweight DGNCA-Net insulator defect detection algorithm based on improvements to the YOLOv11 framework, addressing the issues of high computational complexity and low detection accuracy for small targets in machine vision-based insulator defect detection methods. Firstly, to enhance the model’s ability to perceive multi-scale targets while reducing computational overhead, a lightweight Ghost-backbone network is designed. This network integrates the improved Ghost modules with the original YOLOv11 backbone layers to improve feature extraction efficiency. Meanwhile, the original C2PSA module is replaced with a CSPCA module incorporating Coordinate Attention, thereby strengthening the model’s spatial awareness and target localization capabilities. Secondly, to improve the detection accuracy of small insulator defects in complex scenes and reduce redundant feature information, a DC-PUFPN neck network is constructed. This network combines deformable convolutions with a progressive upsampling feature pyramid structure to optimize the Neck part of YOLOv11, enabling efficient feature fusion and information transfer, while retaining the original C3K2 module. Additionally, a composite loss function combining Wise-IoUv3 and Focal Loss is adopted to further accelerate model convergence and improve detection accuracy. Finally, the effectiveness and advancement of the proposed DGNCA-Net algorithm in insulator defect detection tasks are comprehensively validated through ablation studies, comparative experiments, and visualization results. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

18 pages, 2505 KiB  
Article
A New Geometric Algebra-Based Classification of Hand Bradykinesia in Parkinson’s Disease Measured Using a Sensory Glove
by Giovanni Saggio, Paolo Roselli, Luca Pietrosanti, Alessandro Romano, Nicola Arangino, Martina Patera and Antonio Suppa
Algorithms 2025, 18(8), 527; https://doi.org/10.3390/a18080527 - 19 Aug 2025
Viewed by 310
Abstract
Parkinson’s disease (PD) is a chronic neurodegenerative disorder that progressively impairs motor functions. Clinical assessments have traditionally relied on rating scales such as the Movement Disorder Society Unified Parkinson Disease Rating Scale (MDS-UPDRS); however, these evaluations are susceptible to rater-dependent variability and may miss subtle motor changes. This study explored objective and quantitative methods for assessing motor function in PD patients using the Quantum Metaglove, a sensory glove produced by MANUS®, which was used to record finger movements during three tasks: finger tapping, hand gripping, and pronation–supination. Classic and geometric motor features (the latter based on Clifford algebra, an advanced approach for trajectory shape analysis) were extracted. The resulting data were used to train various machine learning algorithms (k-NN, SVM, and Naive Bayes) to distinguish healthy subjects from PD patients. The integration of traditional kinematic and geometric approaches improves objective hand movement analysis, providing new diagnostic opportunities. In particular, geometric trajectory analysis provides more interpretable information than conventional signal processing methods. This study highlights the value of wearable technologies and Clifford algebra-based algorithms as tools that can complement clinical assessment. They are capable of reducing inter-rater variability and enabling more continuous and precise monitoring of hand motor movements in patients with PD. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

18 pages, 384 KiB  
Article
On Solving the Minimum Spanning Tree Problem with Conflicting Edge Pairs
by Roberto Montemanni and Derek H. Smith
Algorithms 2025, 18(8), 526; https://doi.org/10.3390/a18080526 - 18 Aug 2025
Cited by 1 | Viewed by 174
Abstract
The Minimum Spanning Tree with Conflicting Edge Pairs is a generalization that adds conflict constraints to a classical optimization problem on graphs used to model several real-world applications. In recent years, several heuristic and exact approaches have been proposed to tackle this problem. In this paper, we present a mixed-integer linear program not previously applied to this problem, and we solve it with an open-source solver. Computational results for the benchmark instances commonly adopted in the literature of the problem are reported. The results indicate that the approach we propose obtains results aligned with those of the much more sophisticated approaches available, despite being much simpler to implement. During the experimental campaign, six instances were closed for the first time, with nine improved best-known lower bounds and sixteen improved best-known upper bounds over a total of two hundred thirty instances considered. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
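The conflict constraints can be appreciated with a small greedy baseline: a Kruskal-style pass that simply refuses any edge conflicting with one already chosen. This is a heuristic for intuition only; it can fail on instances the paper's mixed-integer program would solve, and the instance below is invented.

```python
def mst_with_conflicts(n, edges, conflicts):
    """Greedy Kruskal-style heuristic for the MST problem with conflicting
    edge pairs. edges: list of (weight, u, v); conflicts: set of frozensets
    of edge-index pairs that may not appear together in the tree.
    Returns (total_weight, chosen edge indices) or None if no tree is built."""
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    order = sorted(range(len(edges)), key=lambda i: edges[i][0])
    chosen, total = [], 0.0
    for i in order:
        w, u, v = edges[i]
        if find(u) == find(v):
            continue                   # would create a cycle
        if any(frozenset((i, j)) in conflicts for j in chosen):
            continue                   # would violate a conflict constraint
        parent[find(u)] = find(v)
        chosen.append(i)
        total += w
    return (total, chosen) if len(chosen) == n - 1 else None

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 2), (5, 1, 3)]
result = mst_with_conflicts(4, edges, conflicts={frozenset((0, 1))})
```

On this instance the unconstrained MST (edges 0, 1, 2; weight 6) is infeasible because edges 0 and 1 conflict, so the greedy pass settles for edges 0, 2, 3 with weight 8; an exact formulation certifies whether such a value is in fact optimal.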

12 pages, 858 KiB  
Article
Examining the Neurophysiology of Attentional Habituation to Repeated Presentations of Food and Non-Food Visual Stimuli
by Aruna Duraisingam, Daniele Soria and Ramaswamy Palaniappan
Algorithms 2025, 18(8), 525; https://doi.org/10.3390/a18080525 - 18 Aug 2025
Viewed by 147
Abstract
Existing research shows that the human salivary response habituates to repeated presentation of visual, olfactory, or gustatory food cues in adults and children. The aim of this research is to examine the neurophysiological effects of attentional habituation within sessions toward repetition of the same high- and low-calorie food and non-food images. Participants’ event-related potential (ERP) responses were measured as they passively viewed the same food and non-food images repeatedly. The ERP analysis results from trial groups within a session over time indicated that repeated exposure to the same image has a distinct effect on the brain’s attentional responses to food and non-food images. The brain response modulated by motivation and attention decreases over time, and the decrease is significant in the 170–300 ms onset time window for low-calorie images and the 180–330 ms onset time window for non-food images in the parietal region of the brain. However, the modulation to high-calorie images remains sustained over time within the session. Furthermore, the ERP results show that high-calorie images have a slower rate of decline than low-calorie images, followed by non-food images. In conclusion, our ERP study showed that a habituation-like mechanism modulates attention to repeated low-calorie and non-food images, whereas high-calorie images show negligible habituation. High-energy foods have a larger reward value, which prolongs attention and slows habituation; this negligible and slow attentional habituation to high-calorie foods could be one reason why high-calorie diets have negative health consequences. Full article
(This article belongs to the Special Issue Advancements in Signal Processing and Machine Learning for Healthcare)

26 pages, 36602 KiB  
Article
FE-MCFN: Fuzzy-Enhanced Multi-Scale Cross-Modal Fusion Network for Hyperspectral and LiDAR Joint Data Classification
by Shuting Wei, Mian Jia and Junyi Duan
Algorithms 2025, 18(8), 524; https://doi.org/10.3390/a18080524 - 18 Aug 2025
Viewed by 336
Abstract
With the rapid advancement of remote sensing technologies, the joint classification of hyperspectral image (HSI) and LiDAR data has become a key research focus in the field. Classification is hampered by inherent uncertainties in hyperspectral images, such as the “same spectrum, different materials” and “same material, different spectra” phenomena, as well as by the complexity of spectral features; furthermore, existing multimodal fusion approaches often fail to fully leverage the complementary advantages of hyperspectral and LiDAR data. To address these issues, we propose a fuzzy-enhanced multi-scale cross-modal fusion network (FE-MCFN) designed to achieve joint classification of hyperspectral and LiDAR data. The FE-MCFN enhances convolutional neural networks through the application of fuzzy theory and effectively integrates global contextual information via a cross-modal attention mechanism. The fuzzy learning module utilizes a Gaussian membership function to assign weights to features, thereby adeptly capturing uncertainties and subtle distinctions within the data. To maximize the complementary advantages of multimodal data, a fuzzy fusion module is designed, which is grounded in fuzzy rules and integrates multimodal features across various scales while taking into account both local features and global information, ultimately enhancing the model’s classification performance. Experimental results obtained from the Houston2013, Trento, and MUUFL datasets demonstrate that the proposed method outperforms current state-of-the-art classification techniques, thereby validating its effectiveness and applicability across diverse scenarios. Full article
(This article belongs to the Section Databases and Data Structures)
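The Gaussian membership weighting can be sketched in scalar form: each feature value receives a membership degree relative to a prototype, and the memberships act as fusion weights. This is a one-number stand-in for the per-channel weighting a fuzzy learning module would apply inside a network; the prototypes and feature values below are invented.

```python
import math

def gaussian_membership(x, center, sigma):
    """Fuzzy membership degree of feature value x relative to a prototype."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def fuzzy_weighted_fusion(hsi_feat, lidar_feat, centers, sigma=1.0):
    """Weight each modality's feature by its membership to that modality's
    prototype, then fuse by normalized weighted average."""
    w_h = gaussian_membership(hsi_feat, centers["hsi"], sigma)
    w_l = gaussian_membership(lidar_feat, centers["lidar"], sigma)
    return (w_h * hsi_feat + w_l * lidar_feat) / (w_h + w_l)

# Hypothetical normalized features: HSI evidence close to its prototype,
# LiDAR evidence farther from its own, so HSI dominates the fused value.
fused = fuzzy_weighted_fusion(0.9, 0.4, centers={"hsi": 1.0, "lidar": 0.0})
```

The soft weights keep both modalities in play rather than making a hard choice, which is how fuzzy fusion tolerates the spectral ambiguity described above.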

38 pages, 6706 KiB  
Article
Intelligent Method for Generating Criminal Community Influence Risk Parameters Using Neural Networks and Regional Economic Analysis
by Serhii Vladov, Lyubomyr Chyrun, Eduard Muzychuk, Victoria Vysotska, Vasyl Lytvyn, Tetiana Rekunenko and Andriy Basko
Algorithms 2025, 18(8), 523; https://doi.org/10.3390/a18080523 - 18 Aug 2025
Viewed by 172
Abstract
This article develops an innovative and intelligent method for analysing the criminal community’s influence on risk-forming parameters based on an analysis of regional economic processes. The research motivation was the need to create an intelligent method for quantitative assessment and risk control arising from the interaction between regional economic processes and criminal activity. The method includes a three-level mathematical model in which the economic activity dynamics are described by a modified logistic equation, taking into account the criminal activity’s negative impact and feedback through the integral risk. The criminal activity itself is modelled by a similar logistic equation, taking into account the economic base. The risk parameter accumulates the direct impact and delayed effects through the memory core. To numerically solve the spatio-temporal optimal control problem, a neural network based on the convolutional architecture was developed: two successive convolutional layers (N1 with 3 × 3 filters and N2 with 3 × 3 filters) extract local features, after which two 1 × 1 convolutional layers (FC1 and FC2) form a three-channel output corresponding to the control actions UE, UC, and UI. The loss function combines the supervised component and the residual terms of the differential equations, which ensures the satisfaction of physical constraints. The computational experiment showed the high accuracy of the model: accuracy is 0.9907, precision is 0.9842, recall is 0.9983, and F1-score is 0.9912, with a minimum residual loss of 0.0093 and superiority over alternative architectures in key metrics (MSE is 0.0124, IoU is 0.74, and Dice is 0.83). Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
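The two coupled logistic equations can be integrated numerically with forward Euler. The rates below are illustrative, not the paper's calibrated parameters, and the memory-kernel risk term is omitted for brevity.

```python
def simulate(steps=200, dt=0.05, rE=0.5, KE=1.0, rC=0.3, KC=1.0, a=0.4, b=0.3):
    """Forward-Euler integration of a toy version of the coupled dynamics:
        dE/dt = rE*E*(1 - E/KE) - a*E*C   (economy damped by crime)
        dC/dt = rC*C*(1 - C/KC)*E - b*C   (crime grows on the economic base)
    All rates and initial conditions are illustrative placeholders."""
    E, C = 0.1, 0.05
    for _ in range(steps):
        dE = rE * E * (1 - E / KE) - a * E * C
        dC = rC * C * (1 - C / KC) * E - b * C
        E, C = E + dt * dE, C + dt * dC
    return E, C
```

With these toy rates the economy recovers toward its carrying capacity while criminal activity decays, since its decay rate `b` exceeds what the economic base can sustain; the paper's risk parameter would accumulate the damped interaction over time.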

15 pages, 2850 KiB  
Brief Report
Exploring the Frequency Domain Point Cloud Processing for Localisation Purposes in Arboreal Environments
by Rosa Pia Devanna, Miguel Torres-Torriti, Kamil Sacilik, Necati Cetin and Fernando Auat Cheein
Algorithms 2025, 18(8), 522; https://doi.org/10.3390/a18080522 - 18 Aug 2025
Viewed by 225
Abstract
Point clouds from 3D sensors such as LiDAR are increasingly used in agriculture for tasks like crop characterisation, pest detection, and leaf area estimation. While traditional point cloud processing typically occurs in Cartesian space using methods such as principal component analysis (PCA), this paper introduces a novel frequency-domain approach for point cloud registration. The central idea is that point clouds can be transformed and analysed in the spectral domain, where key frequency components capture the most informative spatial structures. By selecting and registering only the dominant frequencies, our method achieves significant reductions in localisation error and computational complexity. We validate this approach using public datasets and compare it with standard Iterative Closest Point (ICP) techniques. Our method, which applies ICP only to points in selected frequency bands, reduces localisation error from 4.37 m to 1.22 m (MSE), an improvement of approximately 72%. These findings highlight the potential of frequency-domain analysis as a powerful and efficient tool for point cloud registration in agricultural and other GNSS-challenged environments. Full article
(This article belongs to the Special Issue Advances in Computer Vision: Emerging Trends and Applications)

21 pages, 1334 KiB  
Article
Analysis of the State and Fault Detection of a Plastic Injection Machine—A Machine Learning-Based Approach
by João Costa, Rui Silva, Gonçalo Martins, Jorge Barreiros and Mateus Mendes
Algorithms 2025, 18(8), 521; https://doi.org/10.3390/a18080521 - 18 Aug 2025
Viewed by 244
Abstract
Predictive maintenance is essential for minimizing unplanned downtime and optimizing industrial processes. In the case of plastic injection molding machines, failures that lead to downtime, slowing production, or manufacturing defects can cause large financial losses or even endanger people and property. As industrialization advances, proactive equipment management enhances cost efficiency, reliability, and operational continuity. This study aims to detect machine anomalies as early as possible, using sensors, statistical analysis and classification models. A case study was carried out, including machine characterization and data collection. Clustering methods identified operational patterns and anomalies, classifying the machine’s behavior into distinct states, validated by company experts. Dimensionality reduction with PCA contributed to highlighting salient features and reducing noise. State classification was carried out using the resulting cluster data. Classification using XGBoost achieved the best performance among the machine learning models tested, reaching an accuracy of 83%. This approach can contribute to maximizing plastic injection machines’ availability and reducing losses due to malfunctions and downtime. Full article
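A minimal sketch of the pipeline's shape, with synthetic data standing in for the injection-machine sensors and a nearest-centroid rule standing in for the XGBoost classifier (both are assumptions for illustration): PCA via SVD reduces noise, k-means discovers operating states, and a classifier is then trained on the cluster-derived labels.

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical sensor readings for three machine states (e.g. normal / warm-up / fault)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 6)) for c in (0.0, 2.0, 4.0)])

# PCA via SVD: project onto the top-3 principal components to reduce noise
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Xp = Xc @ Vt[:3].T

# plain k-means discovers operational states without labels
k = 3
cent = Xp[rng.choice(len(Xp), k, replace=False)]
for _ in range(50):
    lab = np.linalg.norm(Xp[:, None] - cent[None], axis=2).argmin(axis=1)
    cent = np.array([Xp[lab == j].mean(axis=0) if np.any(lab == j) else cent[j]
                     for j in range(k)])
lab = np.linalg.norm(Xp[:, None] - cent[None], axis=2).argmin(axis=1)  # final states

def classify(x):
    """Nearest-centroid stand-in for the supervised state classifier."""
    return int(np.linalg.norm(x[None] - cent, axis=1).argmin())

acc = np.mean([classify(p) == l for p, l in zip(Xp, lab)])
```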
34 pages, 1664 KiB  
Article
Fitness Landscape Analysis for the Differential Evolution Algorithm
by Amani Saad, Andries P. Engelbrecht and Salman A. Khan
Algorithms 2025, 18(8), 520; https://doi.org/10.3390/a18080520 - 17 Aug 2025
Viewed by 136
Abstract
It is crucial to understand how fitness landscape characteristics (FLCs) are associated with the performance and behavior of the differential evolution (DE) algorithm to optimize its application across various optimization problems. Although previous studies have explored DE performance in relation to FLCs, these studies have limitations. Specifically, the narrow range of FLC metrics considered for problem characterization and the lack of research exploring the relationship between the search behavior of the DE algorithm and FLCs represent two major concerns. This study investigates the impact of five FLCs, namely ruggedness, gradients, funnels, deception, and searchability, on DE performance and behavior across various problems and dimensions. Two experiments were conducted: the first assesses DE performance using three performance metrics, i.e., solution quality, success rate, and success speed. The first experiment reveals that DE exhibits stronger associations with FLCs for higher-dimensional problems. Moreover, the presence of multiple funnels and high deception levels are linked to performance degradation, while high searchability is significantly associated with improved performance. The second experiment analyzes the DE search behavior using the diversity rate-of-change (DRoC) behavioral measure. The second experiment shows that the speed at which the DE algorithm transitions from exploration to exploitation varies with different FLCs and the problem dimensionality. The analysis reveals that DE reduces its diversity more slowly in landscapes with multiple funnels and resists deception, but faces excessively slow convergence for high-dimensional problems. Overall, the results show that multiple funnels and high deception levels are the FLCs most strongly associated with the performance and search behavior of the DE algorithm. These findings contribute to a deeper understanding of how FLCs interact with both the performance and search behavior of the DE algorithm and suggest avenues to optimize DE for real-world applications. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
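For readers unfamiliar with the algorithm under study, a minimal DE/rand/1/bin loop looks like this; the sphere function, bounds, and control parameters are a textbook sketch, not the paper's experimental setup.

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin: mutation with a scaled difference vector,
    binomial crossover, and greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # at least one gene from the mutant
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                     # greedy selection
                X[i], fit[i] = trial, ft
    best = fit.argmin()
    return X[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = differential_evolution(sphere, (np.array([-5.0] * 5), np.array([5.0] * 5)))
```

On a smooth unimodal landscape like this one DE converges quickly; the FLCs studied in the paper (funnels, deception, ruggedness) are precisely what makes harder landscapes behave differently.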
24 pages, 2115 KiB  
Article
MHD-Protonet: Margin-Aware Hard Example Mining for SAR Few-Shot Learning via Dual-Loss Optimization
by Marii Zayani, Abdelmalek Toumi and Ali Khalfallah
Algorithms 2025, 18(8), 519; https://doi.org/10.3390/a18080519 - 16 Aug 2025
Viewed by 388
Abstract
Synthetic aperture radar (SAR) image classification under limited data conditions faces two major challenges: inter-class similarity, where distinct radar targets (e.g., tanks and armored trucks) have nearly identical scattering characteristics, and intra-class variability, caused by speckle noise, pose changes, and differences in depression angle. To address these challenges, we propose MHD-ProtoNet, a meta-learning framework that extends prototypical networks with two key innovations: margin-aware hard example mining to better separate confusable classes by enforcing prototype distance margins, and dual-loss optimization to refine embeddings and improve robustness to noise-induced variations. Evaluated on the MSTAR dataset in a five-way one-shot task, MHD-ProtoNet achieves 76.80% accuracy, outperforming the Hybrid Inference Network (HIN) (74.70%), as well as standard few-shot methods such as prototypical networks (69.38%), ST-PN (72.54%), and graph-based models like ADMM-GCN (61.79%) and DGP-NET (68.60%). By explicitly mitigating inter-class ambiguity and intra-class noise, the proposed model enables robust SAR target recognition with minimal labeled data. Full article
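The two ingredients can be sketched in a few lines: class prototypes as support-set means, classification by (negative squared) distance to the prototypes, and a hinge-style margin penalty that pushes confusable prototypes apart. The toy embeddings and the margin value are illustrative assumptions, not the paper's learned features.

```python
import numpy as np

def prototypes(support, labels, n_way):
    """Class prototype = mean of that class's support embeddings."""
    return np.stack([support[labels == c].mean(0) for c in range(n_way)])

def proto_logits(query, protos):
    """Logits = negative squared Euclidean distance to each prototype."""
    return -((query[:, None] - protos[None]) ** 2).sum(-1)

def margin_loss(protos, margin=1.0):
    """Hinge penalty whenever two prototypes are closer than `margin`:
    the margin-aware idea that separates confusable classes."""
    loss, n = 0.0, len(protos)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(protos[i] - protos[j])
            loss += max(0.0, margin - d)
    return loss

rng = np.random.default_rng(1)
n_way, shot, dim = 5, 1, 8
support = rng.normal(size=(n_way * shot, dim)) + np.repeat(np.eye(n_way, dim) * 4, shot, axis=0)
s_labels = np.repeat(np.arange(n_way), shot)
protos = prototypes(support, s_labels, n_way)
query = protos + rng.normal(scale=0.1, size=protos.shape)   # one noisy query per class
pred = proto_logits(query, protos).argmax(1)
```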
19 pages, 650 KiB  
Article
Algorithmic Efficiency Analysis in Innovation-Driven Labor Markets: A Super-SBM and Malmquist Productivity Index Approach
by Chia-Nan Wang and Giovanni Cahilig
Algorithms 2025, 18(8), 518; https://doi.org/10.3390/a18080518 - 15 Aug 2025
Viewed by 300
Abstract
Innovation-driven labor markets play a pivotal role in economic development, yet significant disparities exist in how efficiently countries transform innovation inputs into labor market outcomes. This study addresses the critical gap in benchmarking multi-stage innovation efficiency by developing an integrated framework combining Data Envelopment Analysis (DEA) Super Slack-Based Measure (Super-SBM) for static efficiency evaluation and the Malmquist Productivity Index (MPI) for dynamic productivity decomposition, enhanced with cooperative game theory for robustness testing. Focusing on the top 20 innovative economies over a 5-year period, we analyze key inputs (Innovation Index, GDP, trade openness) and outputs (labor force, unemployment rates), revealing stark efficiency contrasts: China, Luxembourg, and the U.S. demonstrate optimal performance (mean scores > 1.9), while Singapore and the Netherlands show significant underutilization (scores < 0.4). Our results identify a critical productivity shift period (average MPI = 1.325) driven primarily by technological advancements. This study contributes a replicable, data-driven model for cross-domain efficiency assessment and provides empirical evidence for policymakers to optimize innovation-labor market conversion. The methodological framework offers scalable applications for future research in computational economics and productivity analysis. Full article
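The Malmquist decomposition can be illustrated with a deliberately simplified single-input, single-output CRS efficiency measure; the paper's Super-SBM over multiple inputs and outputs requires solving linear programs, and the units below are hypothetical.

```python
def eff(y, x, frontier):
    """CRS efficiency with one input and one output: the unit's
    productivity y/x relative to the best ratio on the given frontier."""
    best = max(yy / xx for yy, xx in frontier)
    return (y / x) / best

def malmquist(unit_t, unit_t1, frontier_t, frontier_t1):
    """Geometric-mean Malmquist index, decomposed as MPI = EC * TC."""
    d_tt   = eff(*unit_t,  frontier_t)     # period-t unit vs period-t frontier
    d_t1t  = eff(*unit_t1, frontier_t)
    d_tt1  = eff(*unit_t,  frontier_t1)
    d_t1t1 = eff(*unit_t1, frontier_t1)
    ec = d_t1t1 / d_tt                                  # efficiency change
    tc = ((d_t1t / d_t1t1) * (d_tt / d_tt1)) ** 0.5     # frontier (technical) shift
    return ec * tc, ec, tc

# hypothetical (output, input) pairs for a unit and its frontiers in two periods
mpi, ec, tc = malmquist((8, 10), (9, 10),
                        frontier_t=[(10, 10), (8, 10)],
                        frontier_t1=[(12, 10), (9, 10)])
```

Here `eff` scores a unit against a chosen period's frontier, and an MPI above 1 (as in the paper's average of 1.325) signals productivity growth, split into catch-up (EC) and frontier shift (TC).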
22 pages, 2209 KiB  
Article
Hybrid BiLSTM-ARIMA Architecture with Whale-Driven Optimization for Financial Time Series Forecasting
by Panke Qin, Bo Ye, Ya Li, Zhongqi Cai, Zhenlun Gao, Haoran Qi and Yongjie Ding
Algorithms 2025, 18(8), 517; https://doi.org/10.3390/a18080517 - 15 Aug 2025
Viewed by 217
Abstract
Financial time series display inherent nonlinearity and high volatility, creating substantial challenges for accurate forecasting. Advancements in artificial intelligence have positioned deep learning as a critical tool for financial time series forecasting. However, conventional deep learning models often fail to accurately predict future trends in complex financial data due to inherent limitations. To address these challenges, this study introduces a WOA-BiLSTM-ARIMA hybrid forecasting model leveraging parameter optimization. Specifically, the whale optimization algorithm (WOA) optimizes hyperparameters for the Bidirectional Long Short-Term Memory (BiLSTM) network, overcoming parameter tuning challenges in conventional approaches. Due to its strong capacity for nonlinear feature extraction, BiLSTM excels at modeling nonlinear patterns in financial time series. To mitigate the shortcomings of BiLSTM in capturing linear patterns, the Autoregressive Integrated Moving Average (ARIMA) methodology is integrated. By exploiting ARIMA’s strengths in modeling linear features, the model refines BiLSTM’s prediction residuals, achieving more accurate and comprehensive financial time series forecasting. To validate the model’s effectiveness, this paper applies it to the prediction experiment of future spread data. Compared to classical models, WOA-BiLSTM-ARIMA achieves significant improvements across multiple evaluation metrics. The mean squared error (MSE) is reduced by an average of 30.5%, the mean absolute error (MAE) by 20.8%, and the mean absolute percentage error (MAPE) by 29.7%. Full article
(This article belongs to the Special Issue Hybrid Intelligent Algorithms (2nd Edition))
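A bare-bones version of the whale optimization algorithm used for the hyperparameter search might look like the following; here it tunes a 2-D vector on a sphere function rather than BiLSTM hyperparameters, and the coefficients follow the standard WOA update rules.

```python
import numpy as np

def woa(f, dim=2, n=30, iters=300, seed=0, span=10.0):
    """Minimal whale optimization algorithm: shrinking encirclement of the
    current best whale, occasional random exploration, and a
    logarithmic-spiral update."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-span, span, (n, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters               # linearly decreasing coefficient
        for i in range(n):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                ref = best if np.all(np.abs(A) < 1) else X[rng.integers(n)]
                X[i] = ref - A * np.abs(C * ref - X[i])     # encircle / explore
            else:
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)

sphere = lambda x: float(np.sum(x ** 2))
b, fb = woa(sphere)
```

In the hybrid model, `f` would be a validation loss and the search vector would encode BiLSTM hyperparameters; ARIMA is then fitted to the residuals of the tuned network.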
17 pages, 267 KiB  
Article
Student Surpasses the Teacher: Apprenticeship Learning for Quadratic Unconstrained Binary Optimisation
by Jack Cakebread, Warren G. Jackson, Daniel Karapetyan, Andrew J. Parkes and Ender Özcan
Algorithms 2025, 18(8), 516; https://doi.org/10.3390/a18080516 - 15 Aug 2025
Viewed by 224
Abstract
This study introduces a novel train-and-test approach referred to as apprenticeship learning (AL) for generating selection hyper-heuristics to solve the Quadratic Unconstrained Binary Optimisation (QUBO) problem. The primary goal is to automate the design of hyper-heuristics by learning from a state-of-the-art expert and to evaluate whether the apprentice can outperform that expert. The proposed method collects detailed search trace data from the expert and trains the apprentice based on the machine learning models to predict heuristic selection and parameter settings. Multiple data filtering and class balancing techniques are explored to enhance model performance. The empirical results on unseen QUBO instances show that indeed, “student surpasses the teacher”; the hyper-heuristic with the generated heuristic selection not only outperforms the expert but also generalises quite well by solving unseen QUBO instances larger than the ones on which the apprentice was trained. These findings highlight the potential of AL to generalise expert behaviour and improve heuristic search performance. Full article
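To make the setting concrete, here is a QUBO objective together with one simple low-level heuristic (greedy single-bit flips) of the kind a selection hyper-heuristic chooses among; the random instance is illustrative, not one of the paper's benchmarks.

```python
import numpy as np

def qubo_energy(Q, x):
    """QUBO objective x^T Q x over a binary vector x."""
    return float(x @ Q @ x)

def greedy_bit_flip(Q, x):
    """Flip the single bit giving the largest energy decrease; repeat until
    no improving flip exists. The per-bit energy change is computed in
    closed form rather than by re-evaluating the objective."""
    x = x.copy()
    diag = np.diag(Q)
    Qs = Q + Q.T
    while True:
        delta = (1 - 2 * x) * (diag + Qs @ x - 2 * diag * x)  # ΔE of flipping each bit
        i = delta.argmin()
        if delta[i] >= 0:
            return x
        x[i] = 1 - x[i]

rng = np.random.default_rng(7)
n = 20
Q = rng.normal(size=(n, n))
x0 = rng.integers(0, 2, n)
x1 = greedy_bit_flip(Q, x0)
```

The apprentice in the paper learns, from an expert's search traces, *when* to apply which such heuristic and with what parameters.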
18 pages, 1417 KiB  
Article
A Fusion-Based Approach with Bayes and DeBERTa for Efficient and Robust Spam Detection
by Ao Zhang, Kelei Li and Haihua Wang
Algorithms 2025, 18(8), 515; https://doi.org/10.3390/a18080515 - 15 Aug 2025
Viewed by 269
Abstract
Spam emails pose ongoing risks to digital security, including data breaches, privacy violations, and financial losses. Addressing the limitations of traditional detection systems in terms of accuracy, adaptability, and resilience remains a significant challenge. In this paper, we propose a hybrid spam detection framework that integrates a classical multinomial naive Bayes classifier with a pre-trained large language model, DeBERTa. The framework employs a weighted probability fusion strategy to combine the strengths of both models—lexical pattern recognition and deep semantic understanding—into a unified decision process. We evaluate the proposed method on a widely used spam dataset. Experimental results demonstrate that the hybrid model achieves superior performance in terms of accuracy and robustness when compared with other classifiers. The findings support the effectiveness of hybrid modeling in advancing spam detection techniques. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
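The weighted probability fusion step is straightforward to sketch; the weight w = 0.4 and the 0.5 decision threshold below are arbitrary illustrative choices, not values from the paper.

```python
def fuse(p_bayes, p_llm, w=0.4):
    """Weighted fusion of two spam probabilities: the lexical model
    (naive Bayes) and the semantic model (DeBERTa) each vote, and w
    controls the balance between them."""
    assert 0.0 <= w <= 1.0
    return w * p_bayes + (1.0 - w) * p_llm

def classify(p_bayes, p_llm, w=0.4, threshold=0.5):
    """Final decision on the fused probability."""
    return "spam" if fuse(p_bayes, p_llm, w) >= threshold else "ham"
```

In practice w would be tuned on a validation set so that the fused score outperforms either model alone.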
28 pages, 10631 KiB  
Article
A Novel ECC-Based Method for Secure Image Encryption
by Younes Lahraoui, Saiida Lazaar, Youssef Amal and Abderrahmane Nitaj
Algorithms 2025, 18(8), 514; https://doi.org/10.3390/a18080514 - 14 Aug 2025
Viewed by 176
Abstract
As the Internet of Things (IoT) expands, ensuring secure and efficient image transmission in resource-limited environments has become crucial. In this paper, we propose a lightweight image encryption scheme based on Elliptic Curve Cryptography (ECC), tailored for embedded and IoT applications. In this scheme, the image data blocks are mapped into elliptic curve points using a decimal embedding algorithm and shuffled to improve resistance to tampering and noise. Moreover, an OTP-like operation is applied to enhance the security while avoiding expensive point multiplications. The proposed scheme meets privacy and cybersecurity requirements with low computational costs. Classical security metrics such as entropy, correlation, NPCR, UACI, and key sensitivity confirm its strong robustness. Rather than relying solely on direct comparisons with existing benchmarks, we employ rigorous statistical analyses to objectively validate the encryption scheme’s robustness and security. Furthermore, we propose a formal security analysis that demonstrates the resistance of the new scheme to chosen-plaintext, noise, and cropping attacks, while the GLCM analysis confirms the visual encryption quality. Our scheme performs the encryption of a 512×512 image in only 0.23 s on a 1 GB RAM virtual machine, showing its efficiency and suitability for real-time IoT systems. Our method can be easily applied to guarantee the security and the protection of lightweight data in future smart environments. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
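The elliptic-curve arithmetic underlying such schemes can be sketched on a toy curve; a real deployment would use a standardized curve, and the paper's decimal embedding and OTP-like masking are omitted here.

```python
def ec_add(P, Q, a, p):
    """Point addition on y^2 = x^3 + a*x + b over GF(p);
    None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# toy curve y^2 = x^3 + 2x + 3 over GF(97); (3, 6) lies on it since 36 = 27 + 6 + 3
a, p = 2, 97
G = (3, 6)
```

The group law is what the scheme relies on: data embedded as curve points can be masked and unmasked with point additions, which are far cheaper than scalar multiplications.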
25 pages, 1734 KiB  
Article
A Multimodal Affective Interaction Architecture Integrating BERT-Based Semantic Understanding and VITS-Based Emotional Speech Synthesis
by Yanhong Yuan, Shuangsheng Duo, Xuming Tong and Yapeng Wang
Algorithms 2025, 18(8), 513; https://doi.org/10.3390/a18080513 - 14 Aug 2025
Viewed by 447
Abstract
Addressing the issues of coarse emotional representation, low cross-modal alignment efficiency, and insufficient real-time response capabilities in current human–computer emotional language interaction, this paper proposes an affective interaction framework integrating BERT-based semantic understanding with VITS-based speech synthesis. The framework aims to enhance the naturalness, expressiveness, and response efficiency of human–computer emotional interaction. By introducing a modular layered design, a six-dimensional emotional space, a gated attention mechanism, and a dynamic model scheduling strategy, the system overcomes challenges such as limited emotional representation, modality misalignment, and high-latency responses. Experimental results demonstrate that the framework achieves superior performance in speech synthesis quality (MOS: 4.35), emotion recognition accuracy (91.6%), and response latency (<1.2 s), outperforming baseline models like Tacotron2 and FastSpeech2. Through model lightweighting, GPU parallel inference, and load balancing optimization, the system validates its robustness and generalizability across English and Chinese corpora in cross-linguistic tests. The modular architecture and dynamic scheduling ensure scalability and efficiency, enabling a more humanized and immersive interaction experience in typical application scenarios such as psychological companionship, intelligent education, and high-concurrency customer service. This study provides an effective technical pathway for developing the next generation of personalized and immersive affective intelligent interaction systems. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
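A gated attention step of the kind the framework describes can be sketched as follows; the dimensions and random weights are placeholders for learned parameters, and the per-dimension sigmoid gate deciding how much of each modality passes through is the essential idea.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(text_vec, audio_vec, W, b):
    """Gated fusion of a text embedding and an acoustic embedding: a learned
    sigmoid gate selects, per dimension, a convex combination of the two."""
    g = sigmoid(W @ np.concatenate([text_vec, audio_vec]) + b)
    return g * text_vec + (1.0 - g) * audio_vec

rng = np.random.default_rng(0)
d = 6
t, s = rng.normal(size=d), rng.normal(size=d)         # text / speech features
W, b = rng.normal(scale=0.1, size=(d, 2 * d)), np.zeros(d)
fused = gated_fusion(t, s, W, b)
```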
14 pages, 460 KiB  
Article
Modeling Local Search Metaheuristics Using Markov Decision Processes
by Rubén Ruiz-Torrubiano, Deepak Dhungana, Sarita Paudel and Himanshu Buckchash
Algorithms 2025, 18(8), 512; https://doi.org/10.3390/a18080512 - 14 Aug 2025
Viewed by 141
Abstract
Local search metaheuristics like tabu search or simulated annealing are popular heuristic optimization algorithms for finding near-optimal solutions for combinatorial optimization problems. However, it is still challenging for researchers and practitioners to analyze their behavior and systematically choose one over a vast set of possible metaheuristics for the particular problem at hand. In this paper, we introduce a theoretical framework based on Markov Decision Processes (MDPs) for analyzing local search metaheuristics. This framework not only helps in providing convergence results for individual algorithms but also provides an explicit characterization of the exploration–exploitation tradeoff and a theory-grounded guidance for practitioners for choosing an appropriate metaheuristic for the problem at hand. We present this framework in detail and show how to apply it in the case of hill climbing and the simulated annealing algorithm, including computational experiments. Full article
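The MDP view treats the acceptance rule as part of a policy. A Metropolis-style acceptance function makes the exploration-exploitation knob explicit: at temperature T → 0 it collapses to plain hill climbing, which is how both algorithms fit the same action model. A minimal sketch:

```python
import math
import random

def accept(delta, T, rng):
    """Metropolis acceptance rule: always take an improving move (delta <= 0);
    take a worsening move with probability exp(-delta / T). With T = 0 this
    degenerates into hill climbing."""
    if delta <= 0:
        return True
    if T <= 0:
        return False
    return rng.random() < math.exp(-delta / T)
```

Framed as an MDP, the "state" is the current solution, the "actions" are candidate moves, and the temperature schedule parameterizes a family of policies interpolating between exploration and exploitation.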
23 pages, 2744 KiB  
Article
CASF: Correlation-Alignment and Significance-Aware Fusion for Multimodal Named Entity Recognition
by Hui Li, Yunshi Tao, Huan Wang, Zhe Wang and Qingzheng Liu
Algorithms 2025, 18(8), 511; https://doi.org/10.3390/a18080511 - 14 Aug 2025
Viewed by 239
Abstract
With the increasing content richness of social media platforms, Multimodal Named Entity Recognition (MNER) faces the dual challenges of heterogeneous feature fusion and accurate entity recognition. To address the key problems of inconsistent distribution of textual and visual information, insufficient feature alignment, and noise-contaminated fusion, this paper proposes CASF-MNER, a multimodal named entity recognition model based on a dual-stream Transformer. The model designs cross-modal cross-attention over visual and textual features and builds a bidirectional interaction mechanism between single-layer features, forming higher-order semantic correlation modeling and achieving cross-modal relevance alignment of features. It constructs a saliency-aware dynamic perception mechanism for multimodal features based on multiscale pooling, using an entropy-weighting strategy over the global feature distribution to adaptively suppress noise redundancy and enhance key feature expression. Finally, it establishes a deep semantic fusion method based on a hybrid isomorphic model, designing a progressive cross-modal interaction structure combined with contrastive learning to achieve global fusion of the deep semantic space and consistency of representations. The experimental results show that CASF-MNER achieves excellent performance on both the Twitter-2015 and Twitter-2017 public datasets, which verifies the effectiveness and advancement of the proposed method. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
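The cross-modal cross-attention at the core of such models is standard scaled dot-product attention with text tokens as queries and image regions as keys and values; a numpy sketch (the dimensions are illustrative):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text, image, d_k):
    """Scaled dot-product cross-attention: each text token queries the image
    regions and gathers visually relevant context."""
    scores = text @ image.T / np.sqrt(d_k)   # (n_text, n_regions)
    w = softmax(scores, axis=-1)             # attention weights per text token
    return w @ image, w

rng = np.random.default_rng(3)
n_text, n_reg, d = 4, 9, 16
T, V = rng.normal(size=(n_text, d)), rng.normal(size=(n_reg, d))
ctx, w = cross_attention(T, V, d)
```

In a full model, separate learned query/key/value projections precede this step, and the bidirectional interaction runs the same mechanism in the image-to-text direction as well.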
23 pages, 3875 KiB  
Article
Edge AI for Industrial Visual Inspection: YOLOv8-Based Visual Conformity Detection Using Raspberry Pi
by Marcelo T. Okano, William Aparecido Celestino Lopes, Sergio Miele Ruggero, Oduvaldo Vendrametto and João Carlos Lopes Fernandes
Algorithms 2025, 18(8), 510; https://doi.org/10.3390/a18080510 - 14 Aug 2025
Viewed by 431
Abstract
This paper presents a lightweight and cost-effective computer vision solution for automated industrial inspection using You Only Look Once (YOLO) v8 models deployed on embedded systems. The YOLOv8 Nano model, trained for 200 epochs, achieved a precision of 0.932, an mAP@0.5 of 0.938, and an F1-score of 0.914, with an average inference time of ~470 ms on a Raspberry Pi 500, confirming its feasibility for real-time edge applications. The proposed system aims to replace physical jigs used for the dimensional verification of extruded polyamide tubes in the automotive sector. The YOLOv8 Nano and YOLOv8 Small models were trained on a Graphics Processing Unit (GPU) workstation and subsequently tested on a Central Processing Unit (CPU)-only Raspberry Pi 500 to evaluate their performance in constrained environments. The experimental results show that the Small model achieved higher accuracy (a precision of 0.951 and an mAP@0.5 of 0.941) but required a significantly longer inference time (~1315 ms), while the Nano model achieved faster execution (~470 ms) with stable metrics (precision of 0.932 and mAP@0.5 of 0.938), therefore making it more suitable for real-time applications. The system was validated using authentic images in an industrial setting, confirming its feasibility for edge artificial intelligence (AI) scenarios. These findings reinforce the feasibility of embedded AI in smart manufacturing, demonstrating that compact models can deliver reliable performance without requiring high-end computing infrastructure. Full article
(This article belongs to the Special Issue Advances in Computer Vision: Emerging Trends and Applications)
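Whichever model size is chosen, YOLO-style detectors share the same post-processing; a self-contained sketch of IoU and greedy non-maximum suppression (the boxes and scores below are made up):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box, drop
    everything that overlaps it too much, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        order = order[1:][[iou(boxes[i], boxes[j]) < thresh for j in order[1:]]]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)
```

On a CPU-only device like the Raspberry Pi in the study, this post-processing is cheap; the inference-time difference between the Nano and Small models comes almost entirely from the backbone.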
18 pages, 2124 KiB  
Article
Automated Subregional Hippocampus Segmentation Using 3D CNNs: A Computational Framework for Brain Aging Biomarker Analysis
by Eshaa Gogia, Arash Dehzangi and Iman Dehzangi
Algorithms 2025, 18(8), 509; https://doi.org/10.3390/a18080509 - 13 Aug 2025
Viewed by 348
Abstract
The hippocampus is a critical brain structure involved in episodic memory, spatial orientation, and stress regulation. Its volumetric shrinkage is among the earliest and most reliable indicators of both physiological brain aging and pathological neurodegeneration. Accurate segmentation and measurement of the hippocampal subregions from magnetic resonance imaging (MRI) is therefore essential for neurobiological age estimation and the early identification of at-risk individuals. In this study, we present a fully automated pipeline that leverages nnU-Net, a self-configuring deep learning framework, to segment the hippocampus from high-resolution 3D T1-weighted brain MRI scans. The primary objective of this work is to enable accurate estimation of brain age through quantitative analysis of hippocampal volume. By fusing domain knowledge in neuroanatomy with data-driven learning through a highly expressive and self-optimizing model, this work advances the methodological frontier for neuroimaging-based brain-age estimation. The proposed approach demonstrates that deep learning can serve as a reliable segmentation tool as well as a foundational layer in predictive neuroscience, supporting early detection of accelerated aging and subclinical neurodegenerative processes. Full article
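Once a segmentation mask is available, the volumetry itself is simple: count the voxels carrying a given label and multiply by the voxel volume. A sketch with a hypothetical mini label map at 1 mm isotropic resolution:

```python
import numpy as np

def region_volume_ml(label_map, label, voxel_mm):
    """Volume of one segmented label in millilitres: voxel count times
    voxel volume (mm^3 to mL is a factor of 1000)."""
    n_vox = int((label_map == label).sum())
    return n_vox * float(np.prod(voxel_mm)) / 1000.0

# hypothetical mini label map: 0 = background, 1 = left hippocampus
seg = np.zeros((10, 10, 10), dtype=np.uint8)
seg[2:6, 2:6, 2:6] = 1                                    # 4*4*4 = 64 voxels
vol = region_volume_ml(seg, 1, voxel_mm=(1.0, 1.0, 1.0))  # -> 0.064 mL
```

Per-subregion volumes computed this way (from the nnU-Net output and the scan's true voxel spacing) are the features that feed the brain-age estimate.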
19 pages, 7309 KiB  
Article
Hierarchical Coordination Control of Distributed Drive Intelligent Vehicle Based on TSMPC and Tire Force Optimization Allocation
by Junmin Li, Fei Wang, Wenguang Guo, Zhengyong Zhou, Shuaike Miao and Te Chen
Algorithms 2025, 18(8), 508; https://doi.org/10.3390/a18080508 - 13 Aug 2025
Viewed by 331
Abstract
An intelligent vehicle hierarchical coordinated control strategy based on time delay state feedback model predictive control (TSMPC) and tire force optimization allocation is presented. Aiming at the problems of insufficient trajectory tracking accuracy and the limited time delay compensation capability of distributed drive intelligent vehicles in complex working conditions, an innovative hierarchical control architecture was designed by establishing vehicle dynamics models and path tracking models. The upper-level controller adopts the TSMPC algorithm, which significantly improves the coordinated control ability of path tracking and vehicle stability through an incremental prediction model and a time-delay state feedback mechanism. The lower-level controller adopts an improved artificial bee colony (IABC) algorithm to optimize tire force allocation, effectively solving the dynamic performance optimization problem of redundant drive systems. Simulation verification shows that, compared with traditional model predictive control (MPC) algorithms, the TSMPC algorithm exhibits significant advantages in trajectory tracking accuracy, error suppression, and stability control. In addition, the IABC algorithm further improves the trajectory tracking accuracy and stability control performance of vehicles in tire force optimization allocation. Full article
(This article belongs to the Section Parallel and Distributed Algorithms)
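For intuition about the lower-level allocation problem, here is its unconstrained, minimum-norm version: distribute a demanded total longitudinal force and yaw moment over four wheel forces via the pseudo-inverse. The effectiveness matrix, half-track width, and demands below are illustrative; the paper instead optimizes the allocation with an improved artificial bee colony under tire constraints.

```python
import numpy as np

# effectiveness matrix B: rows = [total longitudinal force; yaw moment],
# columns = the four wheel forces [fl, fr, rl, rr]; half-track d = 0.8 m (assumed)
d = 0.8
B = np.array([[1.0, 1.0, 1.0, 1.0],
              [-d,   d,  -d,   d]])

def allocate(v):
    """Minimum-norm tire force allocation: the smallest force vector u
    satisfying B @ u = v (lstsq returns the minimum-norm solution for
    this underdetermined system)."""
    u, *_ = np.linalg.lstsq(B, v, rcond=None)
    return u

v = np.array([2000.0, 400.0])   # demand: 2 kN drive force, 400 N*m yaw moment
u = allocate(v)
```

With this sign convention the right-side wheels carry more force to generate the positive yaw moment; a metaheuristic allocator like IABC replaces the pseudo-inverse when friction-circle and actuator limits make the problem non-quadratic.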
41 pages, 7109 KiB  
Article
Cross-Cultural Safety Judgments in Child Environments: A Semantic Comparison of Vision-Language Models and Humans
by Don Divin Anemeta and Rafal Rzepka
Algorithms 2025, 18(8), 507; https://doi.org/10.3390/a18080507 - 13 Aug 2025
Viewed by 313
Abstract
Despite advances in complex reasoning, Vision-Language Models (VLMs) remain inadequately benchmarked for safety-critical applications like childcare. To address this gap, we conduct a multilingual (English, French, Polish, Japanese) comparison of VLMs and human safety assessments using a dataset of original images from child environments in Japan and Poland. Our proposed methodology utilizes semantic clustering to normalize and compare hazard identification and mitigation strategies. While both models and humans identify overt dangers with high semantic agreement (e.g., 0.997 similarity for ‘scissors’), their proposed actions diverge significantly. Humans strongly favor direct physical intervention (‘remove object’: 64% for Polish vs. 55.0% for VLMs) and context-specific actions (‘move object elsewhere’: 17.8% for Japanese), strategies that models under-represent. Conversely, VLMs consistently over-recommend supervisory actions (such as ‘Supervise children closely’ or ‘Supervise use of scissors’). These quantified discrepancies highlight the critical need to integrate nuanced, human-like contextual judgment for the safe deployment of AI systems. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
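The semantic normalization step can be approximated very crudely with bag-of-words cosine similarity, a stand-in for the embedding-based clustering the study uses; the phrases below echo examples from the abstract.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words phrase vectors (a crude
    stand-in for embedding similarity)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_cluster(phrase, centroids):
    """Assign a free-text mitigation action to the most similar canonical action."""
    return max(centroids, key=lambda c: cosine(phrase, c))

actions = ["remove object", "supervise children closely", "move object elsewhere"]
label = nearest_cluster("remove the object", actions)
```

With normalized clusters like these, human and VLM responses become directly comparable as distributions over canonical actions, which is what the percentages in the abstract quantify.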
24 pages, 2794 KiB  
Article
Algorithmic Modeling of Generation Z’s Therapeutic Toys Consumption Behavior in an Emotional Economy Context
by Xinyi Ma, Xu Qin and Li Lv
Algorithms 2025, 18(8), 506; https://doi.org/10.3390/a18080506 - 13 Aug 2025
Viewed by 303
Abstract
The quantification of emotional value and accurate prediction of purchase intention has emerged as a critical interdisciplinary challenge in the evolving emotional economy. Focusing on Generation Z (born 1995–2009), this study proposes a hybrid algorithmic framework integrating text-based sentiment computation, feature selection, and random forest modeling to forecast purchase intention for therapeutic toys and interpret its underlying drivers. First, 856 customer reviews were scraped from Jellycat’s official website and subjected to polarity classification using a fine-tuned RoBERTa-wwm-ext model (F1 = 0.92), with generated sentiment scores and high-frequency keywords mapped as interpretable features. Next, Boruta–SHAP feature selection was applied to 35 structured variables from 336 survey records, retaining 17 significant predictors. The core module employed a RF (random forest) model to estimate continuous “purchase intention” scores, achieving R2 = 0.83 and MSE = 0.14 under 10-fold cross-validation. To enhance interpretability, RF model was also utilized to evaluate feature importance, quantifying each feature’s contribution to the model outputs, revealing Social Ostracism (β = 0.307) and Task Overload (β = 0.207) as dominant predictors. Finally, k-means clustering with gap statistics segmented consumers based on emotional relevance, value rationality, and interest level, with model performance compared across clusters. Experimental results demonstrate that our integrated predictive model achieves a balance between forecasting accuracy and decision interpretability in emotional value computation, offering actionable insights for targeted product development and precision marketing in the therapeutic goods sector. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
17 pages, 1234 KiB  
Article
Avalanche Hazard Prediction in East Kazakhstan Using Ensemble Machine Learning Algorithms
by Yevgeniy Fedkin, Natalya Denissova, Gulzhan Daumova, Ruslan Chettykbayev and Saule Rakhmetullina
Algorithms 2025, 18(8), 505; https://doi.org/10.3390/a18080505 - 13 Aug 2025
Viewed by 225
Abstract
The study is devoted to the construction of an avalanche susceptibility map based on ensemble machine learning algorithms (random forest, XGBoost, LightGBM, gradient boosting machines, AdaBoost, NGBoost) for the conditions of the East Kazakhstan region. To train these models, data were collected on avalanche path profiles, meteorological conditions, and historical avalanche events. The quality of the trained models was assessed using metrics such as accuracy, precision, true positive rate (recall), and F1-score. These metrics indicated that the trained models achieved reasonably accurate forecasting performance (accuracy from 67% to 73.8%). ROC curves were also constructed for each model; the resulting AUC values were acceptable (0.57 to 0.73), further indicating that the models can be used to predict avalanche danger. In addition, for each machine learning model, we determined the importance of the indicators used to predict avalanche danger. This analysis showed that the most significant indicators were meteorological data, namely temperature and snow cover level in avalanche paths. Among the indicators characterizing the avalanche paths’ profiles, the most important were the minimum and maximum slope elevations. Thus, within the framework of this study, a model was built using geospatial and meteorological data that identifies potentially dangerous slope areas. These results can support territorial planning, the design of protective infrastructure, and the development of early warning systems to mitigate avalanche risks. Full article
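The evaluation protocol the abstract describes, training several ensemble classifiers and comparing them on accuracy, precision, recall, F1, and ROC AUC, can be sketched as below. This is a minimal illustration, not the authors' implementation: the synthetic classification data stands in for the slope-profile and meteorological features, and only three of the six listed ensembles are shown.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for slope and meteorological features with
# binary avalanche / no-avalanche labels.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
}

report = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]  # scores for the ROC curve
    report[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred),
        "recall": recall_score(y_te, pred),
        "f1": f1_score(y_te, pred),
        "auc": roc_auc_score(y_te, proba),
    }
```

The per-model `report` dictionary mirrors the paper's comparison table; on real avalanche data, class imbalance would make the precision/recall trade-off far more informative than raw accuracy.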
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)