Computers, Volume 14, Issue 8 (August 2025) – 48 articles

Cover Story: This paper provides a comprehensive survey of the electric network frequency (ENF) as an environmental fingerprint for enhancing Metaverse security, reviewing its characteristics, sensing methods, limitations, and applications in threat modeling and the CIA triad (confidentiality, integrity, and availability). By capturing the ENF as a unique, timestamped signature, this method strengthens security by directly correlating physical grid behavior with virtual interactions, effectively combating threats such as deepfake manipulations. Building upon recent developments in signal processing, this strategy reinforces the integrity of digital environments, delivering robust protection against evolving cyber–physical risks and facilitating secure, scalable virtual infrastructures.
54 pages, 6926 KiB  
Review
A Comprehensive Review of Sensor Technologies in IoT: Technical Aspects, Challenges, and Future Directions
by Sadiq H. Abdulhussain, Basheera M. Mahmmod, Almuntadher Alwhelat, Dina Shehada, Zainab I. Shihab, Hala J. Mohammed, Tuqa H. Abdulameer, Muntadher Alsabah, Maryam H. Fadel, Susan K. Ali, Ghadeer H. Abbood, Zianab A. Asker and Abir Hussain
Computers 2025, 14(8), 342; https://doi.org/10.3390/computers14080342 - 21 Aug 2025
Viewed by 256
Abstract
The rapid advancements in wireless technology and digital electronics have led to the widespread adoption of compact, intelligent devices in various aspects of daily life. These advanced systems possess the capability to sense environmental changes, process data, and communicate seamlessly within interconnected networks. Typically, such devices integrate low-power radio transmitters and multiple smart sensors, thereby enabling efficient functionality across a wide range of applications. Alongside these technological developments, the concept of the IoT has emerged as a transformative paradigm, facilitating the interconnection of uniquely identifiable devices through internet-based networks. This paper aims to provide a comprehensive exploration of sensor technologies, detailing their integral role within IoT frameworks and examining their impact on optimizing efficiency and service delivery in modern wireless communications systems. It also presents a thorough review of sensor technologies, current research trends, and the associated challenges in this evolving field, providing a detailed explanation of recent advancements and IoT-integrated sensor systems, with a particular emphasis on the fundamental architecture of sensors and their pivotal role in modern technological applications. It explores the core benefits of sensor technologies and delivers an in-depth classification of their fundamental types. Beyond reviewing existing developments, this study identifies key open research challenges and outlines prospective directions for future exploration, offering valuable insights for both academic researchers and industry professionals. Ultimately, this paper serves as an essential reference for understanding sensor technologies and their potential contributions to IoT-driven solutions. This study offers meaningful contributions to academic and industrial sectors, facilitating advancements in sensor innovation. Full article

16 pages, 3704 KiB  
Article
Optimization of Scene and Material Parameters for the Generation of Synthetic Training Datasets for Machine Learning-Based Object Segmentation
by Malte Nagel, Kolja Hedrich, Nils Melchert, Lennart Hinz and Eduard Reithmeier
Computers 2025, 14(8), 341; https://doi.org/10.3390/computers14080341 - 21 Aug 2025
Viewed by 160
Abstract
Synthetic training data is often essential for neural-network-based segmentation when real datasets are difficult or impossible to obtain. Conventional synthetic data generation relies on manually selecting scene and material parameters. This can lead to poor performance because the optimal parameters are often non-intuitive and depend heavily on the specific use case and on the objects to be segmented. This study proposes a novel, automated optimization pipeline to improve the quality of synthetic datasets for specific object segmentation tasks. Synthetic datasets are generated by varying material and scene parameters with the BlenderProc framework. These parameters are optimized with the Optuna framework to maximize the average precision achieved by models trained on this data and validated using a small real dataset. After initial single-parameter studies and subsequent multidimensional optimization, optimal scene and material parameters are identified for each object. The results demonstrate the potential of this optimization pipeline to produce synthetic training datasets that enhance neural network performance for specific segmentation tasks, offering insights into the critical role of scene design and material selection in synthetic data generation. Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
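
As an illustration of the kind of optimization loop described in this abstract, the following is a minimal sketch using the Optuna API; the parameter names and the `render_dataset` and `train_and_validate` helpers are hypothetical stand-ins for the BlenderProc rendering step and the segmentation training/validation step, not the authors' pipeline.

```python
import random
import optuna

def render_dataset(params, n_images=500):
    """Hypothetical stand-in for the BlenderProc rendering step."""
    return f"/tmp/synthetic_{hash(frozenset(params.items())) & 0xffff}"

def train_and_validate(dataset_dir):
    """Hypothetical stand-in for training a segmentation model on the synthetic
    data and returning average precision on a small real validation set."""
    return random.random()

def objective(trial: optuna.Trial) -> float:
    # Sample scene and material parameters to be optimized (illustrative names).
    params = {
        "light_intensity": trial.suggest_float("light_intensity", 100.0, 2000.0, log=True),
        "roughness": trial.suggest_float("roughness", 0.0, 1.0),
        "metallic": trial.suggest_float("metallic", 0.0, 1.0),
        "camera_distance": trial.suggest_float("camera_distance", 0.2, 2.0),
        "num_distractors": trial.suggest_int("num_distractors", 0, 20),
    }
    dataset_dir = render_dataset(params)
    return train_and_validate(dataset_dir)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print("Best parameters:", study.best_params)
```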

16 pages, 2576 KiB  
Article
Enhancement in Three-Dimensional Depth with Bionic Image Processing
by Yuhe Chen, Chaoping Chen, Baoen Han and Yunfan Yang
Computers 2025, 14(8), 340; https://doi.org/10.3390/computers14080340 - 20 Aug 2025
Viewed by 150
Abstract
This study proposes an image processing framework based on bionic principles to optimize 3D visual perception in virtual reality (VR) systems. By simulating the physiological mechanisms of the human visual system, the framework significantly enhances depth perception and visual fidelity in VR content. The research focuses on three core algorithms: a Gabor texture feature extraction algorithm based on the directional selectivity of neurons in the V1 region of the visual cortex, which enhances edge detection capability through a fourth-order Gaussian kernel; an improved Retinex model based on the adaptive illumination mechanism of the retina, which achieves brightness balance under complex illumination through horizontal–vertical dual-channel decomposition; and an RGB adaptive adjustment algorithm based on the three-color response characteristics of cone cells, which integrates color temperature compensation with depth cue optimization to enhance color naturalness and stereoscopic depth. A modular processing system is built on the Unity platform, integrating the above algorithms into a collaborative optimization pipeline and ensuring that per-frame processing time meets VR real-time constraints. The experiments use RMSE, AbsRel, and SSIM metrics, combined with subjective evaluation, to verify the effectiveness of the algorithms. The results show that, compared with traditional methods (SSAO, SSR, SH), our algorithms demonstrate significant advantages in simple scenes and marginal superiority in composite metrics for complex scenes. Collaborative processing by the three algorithms significantly reduces depth map noise and enhances the user's subjective experience. The results provide a solution that combines biological rationality and engineering practicality for visual optimization in fields such as the implantable metaverse, VR healthcare, and education. Full article
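
For readers unfamiliar with the Gabor-based texture extraction referenced above, a minimal sketch using OpenCV is shown below; the kernel parameters and the placeholder input are illustrative and not taken from the paper, which combines Gabor filtering with Retinex-style illumination balancing and RGB adjustment inside a Unity pipeline.

```python
import cv2
import numpy as np

def gabor_bank_response(gray: np.ndarray, n_orientations: int = 4) -> np.ndarray:
    """Apply a small bank of Gabor filters at several orientations and
    return the per-pixel maximum response (a simple edge/texture map)."""
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        # Illustrative parameters: kernel size, sigma, wavelength, aspect ratio, phase.
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.max(np.stack(responses), axis=0)

if __name__ == "__main__":
    image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # placeholder input
    texture_map = gabor_bank_response(image.astype(np.float32) / 255.0)
    print(texture_map.shape, texture_map.dtype)
```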

23 pages, 2044 KiB  
Article
Topic Modeling of Positive and Negative Reviews of Soulslike Video Games
by Tibor Guzsvinecz
Computers 2025, 14(8), 339; https://doi.org/10.3390/computers14080339 - 19 Aug 2025
Viewed by 219
Abstract
Soulslike games are renowned for their challenging gameplay and distinctive design. To examine player reception of this genre, 993,932 user reviews of 21 Soulslike video games were collected from the Steam platform, of which 418,483 were tagged as English and analyzed. Latent Dirichlet Allocation (LDA) was applied to identify and compare thematic patterns across positive and negative reviews. The resulting topics were grouped into five categories: aesthetics, gameplay mechanics, feelings, bugs/issues, and miscellaneous. Positive reviews emphasized aesthetics and atmosphere, whereas negative reviews focused on gameplay mechanics and technical issues. Notably, emotional tone differed significantly between review types. Overall, these results may benefit game developers refining design elements, researchers investigating player experience, and critics analyzing the reception of Soulslike games. Furthermore, the study provides a basis for understanding player perspectives in Soulslike games and establishes a foundation for comparative research with newer titles such as Elden Ring. Full article
(This article belongs to the Section Human–Computer Interactions)
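
A minimal sketch of LDA topic extraction in the spirit of this analysis, using scikit-learn on a handful of toy review strings; the number of topics and the preprocessing are illustrative, not the settings used in the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for English Steam reviews; the study analyzed 418,483 of them.
reviews = [
    "beautiful world and atmosphere, the art design is stunning",
    "the boss fights are brutal but fair, great combat mechanics",
    "constant crashes and frame drops, full of bugs on launch",
    "frustrating difficulty spikes, unfair hitboxes ruin the fun",
]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top_terms)}")
```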

19 pages, 832 KiB  
Article
Leveraging Contrastive Semantics and Language Adaptation for Robust Financial Text Classification Across Languages
by Liman Zhang, Qianye Lin, Fanyu Meng, Siyu Liang, Jingxuan Lu, Shen Liu, Kehan Chen and Yan Zhan
Computers 2025, 14(8), 338; https://doi.org/10.3390/computers14080338 - 19 Aug 2025
Viewed by 249
Abstract
With the growing demand for multilingual financial information, cross-lingual financial sentiment recognition faces significant challenges, including semantic misalignment, ambiguous sentiment expression, and insufficient transferability. To address these issues, a unified multilingual recognition framework is proposed, integrating semantic contrastive learning with a language-adaptive modulation mechanism. This approach is built upon the XLM-R multilingual model and employs a semantic contrastive module to enhance cross-lingual semantic consistency. In addition, a language modulation module based on low-rank parameter injection is introduced to improve the model’s sensitivity to fine-grained emotional features in low-resource languages such as Chinese and French. Experiments were conducted on a constructed trilingual financial sentiment dataset encompassing English, Chinese, and French. The results demonstrate that the proposed model significantly outperforms existing methods in cross-lingual sentiment recognition tasks. Specifically, in the English-to-French transfer setting, the model achieved 73.6% in accuracy, 69.8% in F1-Macro, 72.4% in F1-Weighted, and a cross-lingual generalization score of 0.654. Further improvements were observed under multilingual joint training, reaching 77.3%, 73.6%, 76.1%, and 0.696, respectively. In overall comparisons, the proposed model attained the highest performance across cross-lingual scenarios, with 75.8% in accuracy, 72.3% in F1-Macro, and 74.7% in F1-Weighted, surpassing strong baselines such as XLM-R+SimCSE and LaBSE. These results highlight the model’s superior capability in semantic alignment and generalization across languages. The proposed framework demonstrates strong applicability and promising potential in multilingual financial sentiment analysis, public opinion monitoring, and multilingual risk modeling. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)

23 pages, 6167 KiB  
Article
Assessing Burned Area Detection in Indonesia Using the Stacking Ensemble Neural Network (SENN): A Comparative Analysis of C- and L-Band Performance
by Dodi Sudiana, Anugrah Indah Lestari, Mia Rizkinia, Indra Riyanto, Yenni Vetrita, Athar Abdurrahman Bayanuddin, Fanny Aditya Putri, Tatik Kartika, Argo Galih Suhadha, Atriyon Julzarika, Shinichi Sobue, Anton Satria Prabuwono and Josaphat Tetuko Sri Sumantyo
Computers 2025, 14(8), 337; https://doi.org/10.3390/computers14080337 - 18 Aug 2025
Viewed by 184
Abstract
Burned area detection plays a critical role in assessing the impact of forest and land fires, particularly in Indonesia, where both peatland and non-peatland areas are increasingly affected. Optical remote sensing has been widely used for this task, but its effectiveness is limited by persistent cloud cover in tropical regions. Synthetic Aperture Radar (SAR) offers a cloud-independent alternative for burned area mapping. This study investigates the performance of a Stacking Ensemble Neural Network (SENN) model using polarimetric features derived from both C-band (Sentinel-1) and L-band (Advanced Land Observing Satellite—Phased Array L-band Synthetic Aperture Radar (ALOS-2/PALSAR-2)) data. The analysis covers three representative sites in Indonesia: peatland areas in (1) Rokan Hilir and (2) Merauke, and non-peatland areas in (3) Bima and Dompu. Validation is conducted using high-resolution PlanetScope imagery (Planet Labs PBC, San Francisco, CA, USA). The results show that the SENN model consistently outperforms conventional artificial neural network (ANN) approaches across most evaluation metrics. L-band SAR data yield superior performance to C-band data, particularly in peatland areas, with overall accuracy reaching 93–96% and precision between 92 and 100%. The method achieves 76% accuracy and 89% recall in non-peatland regions. Performance is lower in dry, hilly savanna landscapes. These findings demonstrate the effectiveness of the SENN, especially with L-band SAR, in improving burned area detection across diverse land types, supporting more reliable fire monitoring efforts in Indonesia. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
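
The SENN in this paper is a stacking ensemble of neural networks. As a rough illustration of the stacking idea only (not the authors' architecture), the sketch below stacks a few small MLP base learners with a logistic-regression meta-learner in scikit-learn, using synthetic stand-in features for the SAR polarimetric inputs.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for per-pixel polarimetric features (burned vs. unburned).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

base_learners = [
    (f"mlp{i}", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=i))
    for i in range(3)
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```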

19 pages, 2569 KiB  
Article
CNN-Random Forest Hybrid Method for Phenology-Based Paddy Rice Mapping Using Sentinel-2 and Landsat-8 Satellite Images
by Dodi Sudiana, Sayyidah Hanifah Putri, Dony Kushardono, Anton Satria Prabuwono, Josaphat Tetuko Sri Sumantyo and Mia Rizkinia
Computers 2025, 14(8), 336; https://doi.org/10.3390/computers14080336 - 18 Aug 2025
Viewed by 243
Abstract
The agricultural sector plays a vital role in achieving the second Sustainable Development Goal: “Zero Hunger”. To ensure food security, agriculture must remain resilient and productive. In Indonesia, a major rice-producing country, the conversion of agricultural land for non-agricultural uses poses a serious threat to food availability. Accurate and timely mapping of paddy rice is therefore crucial. This study proposes a phenology-based mapping approach using a Convolutional Neural Network-Random Forest (CNN-RF) Hybrid model with multi-temporal Sentinel-2 and Landsat-8 imagery. Image processing and analysis were conducted using the Google Earth Engine platform. Raw spectral bands and four vegetation indices—NDVI, EVI, LSWI, and RGVI—were extracted as input features for classification. The CNN-RF Hybrid classifier demonstrated strong performance, achieving an overall accuracy of 0.950 and a Cohen’s Kappa coefficient of 0.893. These results confirm the effectiveness of the proposed method for mapping paddy rice in Indramayu Regency, West Java, using medium-resolution optical remote sensing data. The integration of phenological characteristics and deep learning significantly enhances classification accuracy. This research supports efforts to monitor and preserve paddy rice cultivation areas amid increasing land use pressures, contributing to national food security and sustainable agricultural practices. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
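
As a small illustration of the feature side of such a pipeline, the sketch below computes NDVI, EVI, and LSWI from reflectance arrays with NumPy and feeds them, together with the raw bands, into a random forest; the toy data, labels, and the plain random forest stand in for the paper's Google Earth Engine implementation and CNN-RF hybrid (RGVI is omitted).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def vegetation_indices(blue, red, nir, swir):
    """NDVI, EVI, and LSWI from reflectance arrays (values roughly in [0, 1])."""
    ndvi = (nir - red) / (nir + red + 1e-6)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    lswi = (nir - swir) / (nir + swir + 1e-6)
    return np.stack([ndvi, evi, lswi], axis=-1)

# Toy reflectance "pixels" standing in for multi-temporal Sentinel-2/Landsat-8 samples.
rng = np.random.default_rng(0)
blue, red, nir, swir = (rng.uniform(0.01, 0.6, 1000) for _ in range(4))
labels = rng.integers(0, 2, 1000)  # 1 = paddy rice, 0 = other (placeholder labels)

features = np.column_stack([blue, red, nir, swir,
                            vegetation_indices(blue, red, nir, swir)])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)
print("training accuracy (toy data):", clf.score(features, labels))
```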

20 pages, 2233 KiB  
Article
HPC Cluster Task Prediction Based on Multimodal Temporal Networks with Hierarchical Attention Mechanism
by Xuemei Bai, Jingbo Zhou and Zhijun Wang
Computers 2025, 14(8), 335; https://doi.org/10.3390/computers14080335 - 18 Aug 2025
Viewed by 262
Abstract
In recent years, the increasing adoption of High-Performance Computing (HPC) clusters in scientific research and engineering has exposed challenges such as resource imbalance, node idleness, and overload, which hinder scheduling efficiency. Accurate multidimensional task prediction remains a key bottleneck. To address this, we propose a hybrid prediction model that integrates Informer, Long Short-Term Memory (LSTM), and Graph Neural Networks (GNN), enhanced by a hierarchical attention mechanism combining multi-head self-attention and cross-attention. The model captures both long- and short-term temporal dependencies and deep semantic relationships across features. Built on a multitask learning framework, it predicts task execution time, CPU usage, memory, and storage demands with high accuracy. Experiments show prediction accuracies of 89.9%, 87.9%, 86.3%, and 84.3% on these metrics, surpassing baselines like Transformer-XL. The results demonstrate that our approach effectively models complex HPC workload dynamics, offering robust support for intelligent cluster scheduling and holding strong theoretical and practical significance. Full article

20 pages, 3376 KiB  
Article
Time–Frequency Feature Fusion Approach for Hemiplegic Gait Recognition
by Linglong Mao and Zhanyong Mei
Computers 2025, 14(8), 334; https://doi.org/10.3390/computers14080334 - 18 Aug 2025
Viewed by 203
Abstract
Accurately distinguishing hemiplegic gait from healthy gait is significant for alleviating clinicians’ diagnostic workloads and enhancing rehabilitation efficiency. The center of pressure (CoP) trajectory extracted from pressure sensor arrays can be utilized for hemiplegic gait recognition. Existing research studies on hemiplegic gait recognition based on plantar pressure have paid limited attention to the differences in recognition performance offered by CoP trajectories along different directions. To address this, this paper proposes a neural network model based on time–frequency domain feature interaction—the temporal–frequency domain interaction network (TFDI-Net)—to achieve efficient hemiplegic gait recognition. The work encompasses: (1) collecting CoP trajectory data using a pressure sensor array from 19 hemiplegic patients and 29 healthy subjects; (2) designing and implementing the TFDI-Net architecture, which extracts frequency domain features of the CoP trajectory via fast Fourier transform (FFT) and interacts or fuses them with time domain features to construct a discriminative joint representation; (3) conducting five-fold cross-validation comparisons with traditional machine learning methods and deep learning methods. Intra-fold data augmentation was performed by adding Gaussian noise to each training fold during partitioning. Box plots were employed to visualize and analyze the performance metrics of different models across test folds, revealing their stability and advantages. The results demonstrate that the proposed TFDI-Net outperforms traditional machine learning models, achieving improvements of 2.89% in recognition rate, 4.6% in F1-score, and 8.25% in recall. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
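
The core time–frequency fusion idea can be sketched as follows in PyTorch: FFT magnitudes of the CoP trajectory are concatenated with features from a temporal branch before classification. This is a minimal stand-in rather than the TFDI-Net architecture; the layer sizes and the simple concatenation-based interaction are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TimeFreqFusionNet(nn.Module):
    """Toy two-branch model: a temporal branch over the raw CoP sequence and a
    spectral branch over its FFT magnitude, fused for binary gait classification."""
    def __init__(self, seq_len: int = 256, hidden: int = 64):
        super().__init__()
        self.time_branch = nn.Sequential(nn.Linear(seq_len, hidden), nn.ReLU())
        freq_len = seq_len // 2 + 1  # length of the one-sided rFFT spectrum
        self.freq_branch = nn.Sequential(nn.Linear(freq_len, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, 2)

    def forward(self, cop: torch.Tensor) -> torch.Tensor:
        # cop: (batch, seq_len) CoP displacement along one direction.
        spectrum = torch.fft.rfft(cop, dim=-1).abs()
        fused = torch.cat([self.time_branch(cop), self.freq_branch(spectrum)], dim=-1)
        return self.classifier(fused)

model = TimeFreqFusionNet()
logits = model(torch.randn(8, 256))  # batch of 8 synthetic trajectories
print(logits.shape)  # torch.Size([8, 2])
```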

28 pages, 6117 KiB  
Article
Comparison of Modern Convolution and Transformer Architectures: YOLO and RT-DETR in Meniscus Diagnosis
by Aizhan Tlebaldinova, Zbigniew Omiotek, Markhaba Karmenova, Saule Kumargazhanova, Saule Smailova, Akerke Tankibayeva, Akbota Kumarkanova and Ivan Glinskiy
Computers 2025, 14(8), 333; https://doi.org/10.3390/computers14080333 - 17 Aug 2025
Viewed by 187
Abstract
The aim of this study is a comparative evaluation of the effectiveness of YOLO and RT-DETR family models for the automatic recognition and localization of meniscus tears in knee joint MRI images. The experiments were conducted on a proprietary annotated dataset consisting of 2000 images from 2242 patients from various clinics. Based on key performance metrics, the most effective representatives from each family, YOLOv8-x and RT-DETR-l, were selected. Comparative analysis based on training, validation, and testing results showed that YOLOv8-x delivered more stable and accurate outcomes than RT-DETR-l. The YOLOv8-x model achieved high values across key metrics: accuracy of 0.958, recall of 0.961, F1-score of 0.960, mAP@50 of 0.975, and mAP@50–95 of 0.616. These results demonstrate the potential of modern object detection models for clinical application, providing accurate, interpretable, and reproducible diagnosis of meniscal injuries. Full article

21 pages, 806 KiB  
Tutorial
Multi-Layered Framework for LLM Hallucination Mitigation in High-Stakes Applications: A Tutorial
by Sachin Hiriyanna and Wenbing Zhao
Computers 2025, 14(8), 332; https://doi.org/10.3390/computers14080332 - 16 Aug 2025
Viewed by 627
Abstract
Large language models (LLMs) now match or exceed human performance on many open-ended language tasks, yet they continue to produce fluent but incorrect statements, which is a failure mode widely referred to as hallucination. In low-stakes settings this may be tolerable; in regulated or safety-critical domains such as financial services, compliance review, and client decision support, it is not. Motivated by these realities, we develop an integrated mitigation framework that layers complementary controls rather than relying on any single technique. The framework combines structured prompt design, retrieval-augmented generation (RAG) with verifiable evidence sources, and targeted fine-tuning aligned with domain truth constraints. Our interest in this problem is practical. Individual mitigation techniques have matured quickly, yet teams deploying LLMs in production routinely report difficulty stitching them together in a coherent, maintainable pipeline. Decisions about when to ground a response in retrieved data, when to escalate uncertainty, how to capture provenance, and how to evaluate fidelity are often made ad hoc. Drawing on experience from financial technology implementations, where even rare hallucinations can carry material cost, regulatory exposure, or loss of customer trust, we aim to provide clearer guidance in the form of an easy-to-follow tutorial. This paper makes four contributions. First, we introduce a three-layer reference architecture that organizes mitigation activities across input governance, evidence-grounded generation, and post-response verification. Second, we describe a lightweight supervisory agent that manages uncertainty signals and triggers escalation (to humans, alternate models, or constrained workflows) when confidence falls below policy thresholds. Third, we analyze common but under-addressed security surfaces relevant to hallucination mitigation, including prompt injection, retrieval poisoning, and policy evasion attacks. Finally, we outline an implementation playbook for production deployment, including evaluation metrics, operational trade-offs, and lessons learned from early financial-services pilots. Full article
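
To make the supervisory-agent layer concrete, here is a minimal, library-free sketch of the escalation logic described above: a drafted answer below a policy confidence threshold, or lacking retrieved evidence, is routed to a constrained workflow or a human reviewer instead of being returned. The thresholds, field names, and routing targets are illustrative, not the tutorial's reference implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DraftResponse:
    text: str
    confidence: float                 # model- or verifier-estimated confidence in [0, 1]
    citations: list = field(default_factory=list)  # provenance of retrieved evidence

def supervise(draft: DraftResponse, threshold: float = 0.85) -> dict:
    """Route a drafted answer according to policy: return it, or escalate."""
    if draft.confidence >= threshold and draft.citations:
        return {"action": "return_to_user", "text": draft.text,
                "provenance": draft.citations}
    if draft.citations:
        # Low confidence but grounded: fall back to a constrained workflow.
        return {"action": "constrained_workflow", "reason": "low_confidence"}
    # Ungrounded claim in a high-stakes setting: hand off to a human reviewer.
    return {"action": "escalate_to_human", "reason": "no_verifiable_evidence"}

print(supervise(DraftResponse("The fee cap is 1.5%.", confidence=0.62)))
```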

37 pages, 2286 KiB  
Article
Parameterised Quantum SVM with Data-Driven Entanglement for Zero-Day Exploit Detection
by Steven Jabulani Nhlapo, Elodie Ngoie Mutombo and Mike Nkongolo Wa Nkongolo
Computers 2025, 14(8), 331; https://doi.org/10.3390/computers14080331 - 15 Aug 2025
Viewed by 387
Abstract
Zero-day attacks pose a persistent threat to computing infrastructure by exploiting previously unknown software vulnerabilities that evade traditional signature-based network intrusion detection systems (NIDSs). To address this limitation, machine learning (ML) techniques offer a promising approach for enhancing anomaly detection in network traffic. This study evaluates several ML models on a labeled network traffic dataset, with a focus on zero-day attack detection. Ensemble learning methods, particularly eXtreme gradient boosting (XGBoost), achieved perfect classification, identifying all 6231 zero-day instances without false positives and maintaining efficient training and prediction times. While classical support vector machines (SVMs) performed modestly at 64% accuracy, their performance improved to 98% with the use of the borderline synthetic minority oversampling technique (SMOTE) and SMOTE + edited nearest neighbours (SMOTEENN). To explore quantum-enhanced alternatives, a quantum SVM (QSVM) is implemented using three-qubit and four-qubit quantum circuits simulated on the aer_simulator_statevector. The QSVM achieved high accuracy (99.89%) and strong F1-scores (98.95%), indicating that nonlinear quantum feature maps (QFMs) can increase sensitivity to zero-day exploit patterns. Unlike prior work that applies standard quantum kernels, this study introduces a parameterised quantum feature encoding scheme, where each classical feature is mapped using a nonlinear function tuned by a set of learnable parameters. Additionally, a sparse entanglement topology is derived from mutual information between features, ensuring a compact and data-adaptive quantum circuit that aligns with the resource constraints of noisy intermediate-scale quantum (NISQ) devices. Our contribution lies in formalising a quantum circuit design that enables scalable, expressive, and generalisable quantum architectures tailored for zero-day attack detection. This extends beyond conventional usage of QSVMs by offering a principled approach to quantum circuit construction for cybersecurity. While these findings are obtained via noiseless simulation, they provide a theoretical proof of concept for the viability of quantum ML (QML) in network security. Future work should target real quantum hardware execution and adaptive sampling techniques to assess robustness under decoherence, gate errors, and dynamic threat environments. Full article
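
As a sketch of the classical baseline portion of this pipeline (borderline SMOTE resampling followed by gradient boosting), the code below uses imbalanced-learn and XGBoost on synthetic stand-in data; the QSVM and its parameterised feature map are not reproduced here, and all hyperparameters and the toy data are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import BorderlineSMOTE
from xgboost import XGBClassifier

# Imbalanced synthetic stand-in for labeled network-traffic features.
X, y = make_classification(n_samples=10000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority (attack) class near the decision border, on training data only.
X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
model.fit(X_res, y_res)
print(classification_report(y_te, model.predict(X_te), digits=3))
```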

20 pages, 4041 KiB  
Article
Enhancing Cardiovascular Disease Detection Through Exploratory Predictive Modeling Using DenseNet-Based Deep Learning
by Wael Hadi, Tushar Jaware, Tarek Khalifa, Faisal Aburub, Nawaf Ali and Rashmi Saini
Computers 2025, 14(8), 330; https://doi.org/10.3390/computers14080330 - 15 Aug 2025
Viewed by 349
Abstract
Cardiovascular Disease (CVD) remains the number one cause of morbidity and mortality, accounting for 17.9 million deaths every year. Precise and early diagnosis is therefore critical to improving patient outcomes and reducing the burden on healthcare systems. This work presents, for the first time, an innovative approach using the DenseNet architecture for the automatic recognition of CVD from clinical data. A heterogeneous dataset of cardiovascular images, including angiograms, echocardiograms, and magnetic resonance images, is preprocessed and augmented. A custom DenseNet architecture is fine-tuned, with rigorous hyperparameter tuning and dedicated strategies for handling class imbalance, to optimize the deep features for robust model performance. After training, the DenseNet model shows high accuracy, sensitivity, and specificity in identifying CVD compared to baseline approaches. Beyond the quantitative measures, detailed visualizations show that the model can localize and classify pathological areas within an image. The model achieved an accuracy of 0.92, a precision of 0.91 and recall of 0.95 for class 1, and an overall weighted-average F1-score of 0.93, which establishes its efficacy. The research has clear clinical applicability: accurate detection of CVD enables timely, personalized interventions. This DenseNet-based approach advances CVD diagnosis with state-of-the-art technology for use by radiologists and clinicians. Future work will focus on improving the model's interpretability and its generalization to broader patient populations, further advancing the diagnosis and management of CVD. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
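
A minimal sketch of fine-tuning a DenseNet for binary CVD classification with torchvision is shown below; the backbone choice (DenseNet-121), the frozen feature extractor, and the training details are illustrative assumptions, not the custom architecture or hyperparameters reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained DenseNet-121 backbone (illustrative choice).
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)

# Freeze the feature extractor and replace the classifier head for 2 classes.
for param in model.features.parameters():
    param.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 images.
images, labels = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```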

15 pages, 1844 KiB  
Article
Artificial Intelligence Agent-Enabled Predictive Maintenance: Conceptual Proposal and Basic Framework
by Wenyu Jiang and Fuwen Hu
Computers 2025, 14(8), 329; https://doi.org/10.3390/computers14080329 - 15 Aug 2025
Viewed by 592
Abstract
Predictive maintenance (PdM) represents a significant evolution in maintenance strategies. However, challenges such as system integration complexity, data quality, and data availability are intricately intertwined, collectively impacting the successful deployment of PdM systems. Recently, large model-based agents, or agentic artificial intelligence (AI), have evolved from simple task automation to active problem-solving and strategic decision-making. As such, we propose an AI agent-enabled PdM method that leverages an agentic AI development platform to streamline the development of a multimodal data-based fault detection agent, a RAG (retrieval-augmented generation)-based fault classification agent, a large model-based fault diagnosis agent, and a digital twin-based fault handling simulation agent. This approach breaks through the limitations of traditional PdM, which relies heavily on single models. This combination of “AI workflow + large reasoning models + operational knowledge base + digital twin” integrates the concepts of BaaS (backend as a service) and LLMOps (large language model operations), constructing an end-to-end intelligent closed loop from data perception to decision execution. Furthermore, a tentative prototype is demonstrated to show the technology stack and the system integration methods of the agentic AI-based PdM. Full article

22 pages, 5233 KiB  
Article
Drone Frame Optimization via Simulation and 3D Printing
by Faris Kateb, Abdul Haseeb, Syed Misbah-Un-Noor, Bandar M. Alghamdi, Fazal Qudus Khan, Bilal Khan, Abdul Baseer, Masood Iqbal Marwat and Sadeeq Jan
Computers 2025, 14(8), 328; https://doi.org/10.3390/computers14080328 - 13 Aug 2025
Viewed by 426
Abstract
This study presents a simulation-driven methodology for the design and optimization of a lightweight drone frame. Starting with a CAD model developed in SolidWorks, finite element analysis (FEA) and computational fluid dynamics (CFD) are used to evaluate stress, deformation, fatigue behavior, and aerodynamic performance. Topology optimization is then applied to reduce non-critical material and enhance the stiffness-to-weight ratio. CFD-informed refinements further help to minimize drag and improve airflow uniformity. The final design is fabricated using fused deposition modeling (FDM) with PLA, enabling rapid prototyping and experimental validation. Future work will explore advanced materials to improve fatigue resistance and structural durability. Full article

25 pages, 28917 KiB  
Article
Synthetic Data-Driven Methods to Accelerate the Deployment of Deep Learning Models: A Case Study on Pest and Disease Detection in Precision Viticulture
by Telmo Adão, Agnieszka Chojka, David Pascoal, Nuno Silva, Raul Morais and Emanuel Peres
Computers 2025, 14(8), 327; https://doi.org/10.3390/computers14080327 - 13 Aug 2025
Viewed by 288
Abstract
The development of reliable visual inference models is often constrained by the burdensome and time-consuming processes involved in collecting and annotating high-quality datasets. This challenge becomes more acute in domains where key phenomena are time-dependent or event-driven, narrowing the opportunity window to capture representative observations. Yet, accelerating the deployment of deep learning (DL) models is crucial to support timely, data-driven decision-making in operational settings. To tackle such an issue, this paper explores the use of 2D synthetic data grounded in real-world patterns to train initial DL models in contexts where annotated datasets are scarce or can only be acquired within restrictive time windows. Two complementary approaches to synthetic data generation are investigated: rule-based digital image processing and advanced text-to-image generative diffusion models. These methods can operate independently or be combined to enhance flexibility and coverage. A proof-of-concept is presented through two case studies in precision viticulture, a domain often constrained by seasonal dependencies and environmental variability. Specifically, the detection of Lobesia botrana in sticky traps and the classification of grapevine foliar symptoms associated with black rot, ESCA, and leaf blight are addressed. The results suggest that the proposed approach potentially accelerates the deployment of preliminary DL models by comprehensively automating the production of context-aware datasets roughly inspired by specific challenge-driven operational settings, thereby mitigating the need for time-consuming and labor-intensive processes, from image acquisition to annotation. Although models trained on such synthetic datasets require further refinement—for example, through active learning—the approach offers a scalable and functional solution that reduces human involvement, even in scenarios of data scarcity, and supports the effective transition of laboratory-developed AI to real-world deployment environments. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)

52 pages, 3006 KiB  
Article
Empirical Performance Analysis of WireGuard vs. OpenVPN in Cloud and Virtualised Environments Under Simulated Network Conditions
by Joel Anyam, Rajiv Ranjan Singh, Hadi Larijani and Anand Philip
Computers 2025, 14(8), 326; https://doi.org/10.3390/computers14080326 - 13 Aug 2025
Viewed by 713
Abstract
With the rise in cloud computing and virtualisation, secure and efficient VPN solutions are essential for network connectivity. We present a systematic performance comparison of OpenVPN (v2.6.12) and WireGuard (v1.0.20210914) across Azure and VMware environments, evaluating throughput, latency, jitter, packet loss, and resource utilisation. Testing revealed that the protocol performance is highly context dependent. In VMware environments, WireGuard demonstrated a superior TCP throughput (210.64 Mbps vs. 110.34 Mbps) and lower packet loss (12.35% vs. 47.01%). In Azure environments, both protocols achieved a similar baseline throughput (~280–290 Mbps), though OpenVPN performed better under high-latency conditions (120 Mbps vs. 60 Mbps). Resource utilisation showed minimal differences, with WireGuard maintaining slightly better memory efficiency. Security Efficiency Index calculations revealed environment-specific trade-offs: WireGuard showed marginal advantages in Azure, while OpenVPN demonstrated better throughput efficiency in VMware, though WireGuard remained superior for latency-sensitive applications. Our findings indicate protocol selection should be guided by deployment environment and application requirements rather than general superiority claims. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
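
As an illustration of how such a throughput comparison might be scripted, here is a minimal sketch that runs iperf3 against a server reachable over each tunnel and parses its JSON output; the endpoint addresses are placeholders, and the study's latency, jitter, packet-loss, and resource measurements are not reproduced here.

```python
import json
import subprocess

def measure_throughput_mbps(server_ip: str, duration_s: int = 30) -> float:
    """Run an iperf3 TCP test against `server_ip` and return mean throughput in Mbps."""
    result = subprocess.run(
        ["iperf3", "-c", server_ip, "-t", str(duration_s), "--json"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

# Placeholder tunnel endpoints: one reachable via WireGuard, one via OpenVPN.
for name, endpoint in [("wireguard", "10.8.0.1"), ("openvpn", "10.9.0.1")]:
    print(name, f"{measure_throughput_mbps(endpoint):.1f} Mbps")
```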

14 pages, 452 KiB  
Article
An Integrated Intuitionistic Fuzzy-Clustering Approach for Missing Data Imputation
by Charlène Béatrice Bridge-Nduwimana, Aziza El Ouaazizi and Majid Benyakhlef
Computers 2025, 14(8), 325; https://doi.org/10.3390/computers14080325 - 12 Aug 2025
Viewed by 277
Abstract
Missing data imputation is a critical preprocessing task that directly impacts the quality and reliability of data-driven analyses, yet many existing methods treat numerical and categorical data separately and lack the integration of advanced techniques. To overcome these restrictions, we propose a novel imputation technique that synergistically combines regression imputation using HistGradientBoostingRegressor with fuzzy rule-based systems, enhanced by a tailored clustering process. This integrated approach effectively handles mixed data types and complex data structures, using regression models to predict missing numerical values, fuzzy logic to incorporate expert knowledge and interpretability, and clustering to capture latent data patterns. Categorical variables are managed by mode imputation and label encoding. We evaluated the method on twelve tabular datasets with artificially introduced missingness, employing a comprehensive set of metrics focused on originally missing entries. The results demonstrate that our iterative imputer performs competitively with other established imputation techniques, achieving comparable or better error rates and accuracy. By combining statistical learning with fuzzy and clustering frameworks, the method achieves 15% lower Root Mean Square Error (RMSE), 10% lower Mean Absolute Error (MAE), and 80% higher precision on UCI datasets, thus offering a promising advance in data preprocessing for practical applications. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
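
For the regression-imputation component named above, a minimal scikit-learn sketch is shown below, using IterativeImputer with HistGradientBoostingRegressor as the estimator; the fuzzy rule-based and clustering stages of the proposed method are not reproduced, and the toy data and missingness rate are illustrative.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the API)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)  # correlated column

# Introduce ~20% missingness at random.
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

imputer = IterativeImputer(estimator=HistGradientBoostingRegressor(),
                           max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X_missing)

rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
print(f"RMSE on originally missing entries: {rmse:.3f}")
```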

31 pages, 2730 KiB  
Article
Cybersecurity Threats in Saudi Healthcare: Exploring Email Communication Practices to Enhance Cybersecurity Among Healthcare Employees in Saudi Arabia
by Ebtesam Shadadi, Rasha Ibrahim and Essam Ghadafi
Computers 2025, 14(8), 324; https://doi.org/10.3390/computers14080324 - 12 Aug 2025
Viewed by 422
Abstract
As cyber threats such as phishing and ransomware continue to escalate, healthcare systems are facing significant challenges in protecting sensitive data and ensuring operational continuity. This study explores how email communication practices influence cybersecurity in Saudi Arabia’s healthcare sector, particularly within the framework of rapid digitalisation under Vision 2030. The research employs a qualitative approach, with semi-structured interviews conducted with 40 healthcare professionals across various hospitals. A phenomenological analysis of the data revealed several key vulnerabilities, including inconsistent cybersecurity training, a reliance on informal messaging apps, and limited awareness of phishing tactics. The inconsistent cybersecurity training across regions emerged as a major weakness affecting overall resilience. These findings, grounded in rich qualitative data, offer a significant standalone contribution to understanding cybersecurity in healthcare settings. The findings highlight the need for mandatory training and awareness programmes and policy reforms to enhance cyber resilience within healthcare settings. Full article
(This article belongs to the Section Human–Computer Interactions)

20 pages, 1735 KiB  
Article
Multilingual Named Entity Recognition in Arabic and Urdu Tweets Using Pretrained Transfer Learning Models
by Fida Ullah, Muhammad Ahmad, Grigori Sidorov, Ildar Batyrshin, Edgardo Manuel Felipe Riverón and Alexander Gelbukh
Computers 2025, 14(8), 323; https://doi.org/10.3390/computers14080323 - 11 Aug 2025
Viewed by 321
Abstract
The increasing use of Arabic and Urdu on social media platforms, particularly Twitter, has created a growing need for robust Named Entity Recognition (NER) systems capable of handling noisy, informal, and code-mixed content. However, both languages remain significantly underrepresented in NER research, especially in social media contexts. To address this gap, this study makes four key contributions: (1) We introduced a manual entity consolidation step to enhance the consistency and accuracy of named entity annotations. In the original datasets, entities such as person names and organization names were often split into multiple tokens (e.g., first name and last name labeled separately). We manually refined the annotations to merge these segments into unified entities, ensuring improved coherence for both training and evaluation. (2) We selected two publicly available datasets from GitHub—one in Arabic and one in Urdu—and applied two novel strategies to tackle low-resource challenges: a joint multilingual approach and a translation-based approach. The joint approach involved merging both datasets to create a unified multilingual corpus, while the translation-based approach utilized automatic translation to generate cross-lingual datasets, enhancing linguistic diversity and model generalizability. (3) We presented a comprehensive and reproducible pseudocode-driven framework that integrates translation, manual refinement, dataset merging, preprocessing, and multilingual model fine-tuning. (4) We designed, implemented, and evaluated a customized XLM-RoBERTa model integrated with a novel attention mechanism, specifically optimized for the morphological and syntactic complexities of Arabic and Urdu. Based on the experiments, our proposed model (XLM-RoBERTa) achieves 0.98 accuracy across Arabic, Urdu, and multilingual datasets. While it shows a 7–8% improvement over traditional baselines (RF), it also achieves a 2.08% improvement over a deep learning baseline (BiLSTM, 0.96), highlighting the effectiveness of our cross-lingual, resource-efficient approach for NER in low-resource, code-mixed social media text. Full article
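
A minimal sketch of loading XLM-RoBERTa for token classification with Hugging Face Transformers is shown below; the tag set and example sentence are illustrative placeholders, the head is untrained until fine-tuned on annotated tweets, and the paper's custom attention mechanism and annotation-consolidation steps are not reproduced.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]  # illustrative tag set
model_name = "xlm-roberta-base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# Example code-mixed, tweet-style input; predictions are meaningless before fine-tuning.
encoding = tokenizer("سافر أحمد إلى Dubai مع Ali Khan", return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0])
for token, tag_id in zip(tokens, predictions.tolist()):
    print(token, labels[tag_id])
```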

26 pages, 1484 KiB  
Article
Digital Twin-Enhanced Programming Education: An Empirical Study on Learning Engagement and Skill Acquisition
by Ming Lu and Zhongyi Hu
Computers 2025, 14(8), 322; https://doi.org/10.3390/computers14080322 - 8 Aug 2025
Viewed by 475
Abstract
As an introductory core course in computer science and related fields, “Fundamentals of Programming” has always faced many challenges in stimulating students’ interest in learning and cultivating their practical coding abilities. The traditional teaching model often fails to effectively connect theoretical knowledge with practical applications, resulting in a low retention rate of students’ learning and a weak ability to solve practical problems. Digital twin (DT) technology offers a novel approach to addressing these challenges by creating dynamic, virtual replicas of physical systems with real-time, interactive capabilities. This study explores DT integration in programming teaching and its impact on learning engagement (behavioral, cognitive, emotional) and skill acquisition (syntax, algorithm design, debugging). A quasi-experimental design was employed to study 135 first-year undergraduate students, divided into an experimental group (n = 90) using a DT-based learning environment and a control group (n = 45) receiving traditional instruction. Quantitative data analysis was conducted on participation surveys, planning evaluations, and qualitative feedback. The results showed that, compared with the control group, the DT group exhibited a higher level of sustained participation (p < 0.01) and achieved better results in actual coding tasks (p < 0.05). Students with limited coding experience showed the most significant progress in algorithmic thinking. The findings highlight that digital twin technology significantly enhances engagement and skill acquisition in introductory programming, particularly benefiting novice learners through immersive, theory-aligned experiences. This study establishes a new paradigm for introductory programming education by addressing two critical gaps in digital twin applications: (1) differential effects on students with varying prior knowledge (engagement/skill acquisition) and (2) pedagogical mechanisms in conceptual visualization and authentic context creation. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)

45 pages, 3405 KiB  
Article
Electric Network Frequency as Environmental Fingerprint for Metaverse Security: A Comprehensive Survey
by Mohsen Hatami, Lhamo Dorje, Xiaohua Li and Yu Chen
Computers 2025, 14(8), 321; https://doi.org/10.3390/computers14080321 - 8 Aug 2025
Viewed by 457
Abstract
The rapid expansion of the Metaverse presents complex security challenges, particularly in verifying virtual objects and avatars within immersive environments. Conventional authentication methods, such as passwords and biometrics, often prove inadequate in these dynamic environments, especially as essential infrastructures, such as smart grids, integrate with virtual platforms. Cybersecurity threats intensify as advanced attacks introduce fraudulent data, compromising system reliability and safety. Using the Electric Network Frequency (ENF), a naturally varying signal emitted from power grids, provides an innovative environmental fingerprint to authenticate digital twins and Metaverse entities in the smart grid. This paper provides a comprehensive survey of the ENF as an environmental fingerprint for enhancing Metaverse security, reviewing its characteristics, sensing methods, limitations, and applications in threat modeling and the CIA triad (Confidentiality, Integrity, and Availability), and presents a real-world case study to demonstrate its effectiveness in practical settings. By capturing the ENF as a unique, timestamped signature, this method strengthens security by directly correlating physical grid behavior with virtual interactions, effectively combating threats such as deepfake manipulations. Building upon recent developments in signal processing, this strategy reinforces the integrity of digital environments, delivering robust protection against evolving cyber–physical risks and facilitating secure, scalable virtual infrastructures. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
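
As a small illustration of ENF sensing from a power or audio recording, the sketch below estimates the instantaneous frequency around a 50 Hz nominal grid frequency using a short-time Fourier transform with SciPy; the nominal frequency, window length, and synthetic test signal are illustrative rather than any method surveyed in the paper.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                      # sampling rate in Hz (illustrative)
nominal = 50.0                   # nominal grid frequency (60.0 in other regions)
t = np.arange(0, 60, 1 / fs)     # one minute of synthetic mains hum
drift = 0.02 * np.sin(2 * np.pi * t / 30)            # slow +/- 0.02 Hz wander
signal = np.sin(2 * np.pi * np.cumsum(nominal + drift) / fs)

# STFT with ~2 s windows; track the spectral peak in a narrow band around `nominal`.
# (Finer ENF tracking would use longer windows or interpolate around the peak.)
f, times, Z = stft(signal, fs=fs, nperseg=2048)
band = (f > nominal - 1.0) & (f < nominal + 1.0)
enf_track = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]

print("estimated ENF (Hz), first 5 frames:", np.round(enf_track[:5], 3))
```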

23 pages, 8610 KiB  
Article
Healthcare AI for Physician-Centered Decision-Making: Case Study of Applying Deep Learning to Aid Medical Professionals
by Aleksandar Milenkovic, Andjelija Djordjevic, Dragan Jankovic, Petar Rajkovic, Kofi Edee and Tatjana Gric
Computers 2025, 14(8), 320; https://doi.org/10.3390/computers14080320 - 7 Aug 2025
Viewed by 412
Abstract
This paper aims to leverage artificial intelligence (AI) to assist physicians in utilizing advanced deep learning techniques integrated into developed models within electronic health records (EHRs) in medical information systems (MISes), which have been in use for over 15 years in health centers across the Republic of Serbia. This paper presents a human-centered AI approach that emphasizes physician decision-making supported by AI models. This study presents two developed and implemented deep neural network (DNN) models in the EHR. Both models were based on data that were collected during the COVID-19 outbreak. The models were evaluated using five-fold cross-validation. The convolutional neural network (CNN), based on the pre-trained VGG19 architecture for classifying chest X-ray images, was trained on a publicly available smaller dataset containing 196 entries, and achieved an average classification accuracy of 91.83 ± 2.82%. The DNN model for optimizing patient appointment scheduling was trained on a large dataset (341,569 entries) and a rich feature design extracted from the MIS, which is daily used in Serbia, achieving an average classification accuracy of 77.51 ± 0.70%. Both models have consistent performance and good generalization. The architecture of a realized MIS, incorporating the positioning of developed AI tools that encompass both developed models, is also presented in this study. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)

24 pages, 23907 KiB  
Article
Optimizing Data Pipelines for Green AI: A Comparative Analysis of Pandas, Polars, and PySpark for CO2 Emission Prediction
by Youssef Mekouar, Mohammed Lahmer and Mohammed Karim
Computers 2025, 14(8), 319; https://doi.org/10.3390/computers14080319 - 7 Aug 2025
Viewed by 412
Abstract
This study evaluates the performance and energy trade-offs of three popular data processing libraries—Pandas, PySpark, and Polars—applied to GreenNav, a CO2 emission prediction pipeline for urban traffic. GreenNav is an eco-friendly navigation app designed to predict CO2 emissions and determine low-carbon routes using a hybrid CNN-LSTM model integrated into a complete pipeline for the ingestion and processing of large, heterogeneous geospatial and road data. Our study quantifies the end-to-end execution time, cumulative CPU load, and maximum RAM consumption for each library when applied to the GreenNav pipeline; it then converts these metrics into energy consumption and CO2 equivalents. Experiments conducted on datasets ranging from 100 MB to 8 GB demonstrate that Polars in lazy mode offers substantial gains, reducing the processing time by a factor of more than twenty, memory consumption by about two-thirds, and energy consumption by about 60%, while maintaining the predictive accuracy of the model (R2 ≈ 0.91). These results clearly show that the careful selection of data processing libraries can reconcile high computing performance and environmental sustainability in large-scale machine learning applications. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
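As a rough illustration of the eager-versus-lazy comparison described above, the short Python sketch below times an equivalent filter-and-aggregate step in Pandas and in Polars lazy mode. The file, column names, and tiny synthetic data are placeholders rather than the GreenNav pipeline's, and the conversion to energy and CO2 equivalents is omitted.

# Hypothetical micro-benchmark contrasting Pandas (eager) with Polars (lazy).
import time
import pandas as pd
import polars as pl

CSV = "traffic_segments.csv"  # placeholder file; a tiny synthetic version is written below
pd.DataFrame({
    "road_id": [1, 1, 2, 2, 3],
    "speed_kmh": [30, 0, 50, 45, 60],
    "co2_g_per_km": [180.0, 0.0, 140.0, 150.0, 120.0],
}).to_csv(CSV, index=False)

def timed(label, fn):
    t0 = time.perf_counter()
    out = fn()
    print(f"{label}: {time.perf_counter() - t0:.4f} s")
    return out

# Pandas: the whole file is materialized in memory before the aggregation runs.
pandas_result = timed("pandas", lambda: (
    pd.read_csv(CSV)
      .query("speed_kmh > 0")
      .groupby("road_id")["co2_g_per_km"].mean()
))

# Polars lazy: scan_csv builds a query plan; the filter and aggregation are
# pushed down and nothing is materialized until collect() is called.
polars_result = timed("polars-lazy", lambda: (
    pl.scan_csv(CSV)
      .filter(pl.col("speed_kmh") > 0)
      .group_by("road_id")
      .agg(pl.col("co2_g_per_km").mean())
      .collect()
))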
29 pages, 2673 KiB  
Review
Integrating Large Language Models into Digital Manufacturing: A Systematic Review and Research Agenda
by Chourouk Ouerghemmi and Myriam Ertz
Computers 2025, 14(8), 318; https://doi.org/10.3390/computers14080318 - 7 Aug 2025
Viewed by 738
Abstract
Industry 4.0 and 5.0 are based on technological advances, notably large language models (LLMs), which are making a significant contribution to the transition to smart factories. Although considerable research has explored this phenomenon, the literature remains fragmented and lacks an integrative framework that [...] Read more.
Industry 4.0 and 5.0 are based on technological advances, notably large language models (LLMs), which are making a significant contribution to the transition to smart factories. Although considerable research has explored this phenomenon, the literature remains fragmented and lacks an integrative framework that highlights the multifaceted implications of using LLMs in the context of digital manufacturing. To address this limitation, we conducted a systematic literature review, analyzing 53 papers selected according to predefined inclusion and exclusion criteria. Our descriptive and thematic analyses, respectively, mapped new trends and identified emerging themes, classified into three axes: (1) manufacturing process optimization, (2) data structuring and innovation, and (3) human–machine interaction and ethical challenges. Our results revealed that LLMs can enhance operational performance and foster innovation while redistributing human roles. Our research offers an in-depth understanding of the implications of LLMs. Finally, we propose a research agenda to guide future studies. Full article
(This article belongs to the Special Issue AI in Complex Engineering Systems)
31 pages, 1583 KiB  
Article
Ensuring Zero Trust in GDPR-Compliant Deep Federated Learning Architecture
by Zahra Abbas, Sunila Fatima Ahmad, Adeel Anjum, Madiha Haider Syed, Saif Ur Rehman Malik and Semeen Rehman
Computers 2025, 14(8), 317; https://doi.org/10.3390/computers14080317 - 4 Aug 2025
Viewed by 753
Abstract
Deep Federated Learning (DFL) revolutionizes machine learning (ML) by enabling collaborative model training across diverse, decentralized data sources without direct data sharing, emphasizing user privacy and data sovereignty. Despite its potential, DFL’s application in sensitive sectors is hindered by challenges in meeting rigorous [...] Read more.
Deep Federated Learning (DFL) revolutionizes machine learning (ML) by enabling collaborative model training across diverse, decentralized data sources without direct data sharing, emphasizing user privacy and data sovereignty. Despite its potential, DFL’s application in sensitive sectors is hindered by challenges in meeting rigorous standards like the GDPR, with traditional setups struggling to ensure compliance and maintain trust. Addressing these issues, our research introduces an innovative Zero Trust-based DFL architecture designed for GDPR-compliant systems, integrating advanced security and privacy mechanisms to ensure safe and transparent cross-node data processing. Our earlier work proposed the basic GDPR-compliant DFL architecture; here, we validate that architecture by formally verifying it using High-Level Petri Nets (HLPNs). This Zero Trust-based framework facilitates secure, decentralized model training without direct data sharing. Furthermore, we implemented a case study on the MNIST and CIFAR-10 datasets to evaluate the existing approach against the proposed Zero Trust-based DFL methodology. Our experiments confirmed its effectiveness in enhancing trust, complying with GDPR, and promoting DFL adoption in privacy-sensitive areas, achieving secure, ethical Artificial Intelligence (AI) with transparent and efficient data processing. Full article
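A toy Python sketch can make the idea tangible: one federated-averaging round in which every client update must pass a verification check before aggregation, in the zero-trust spirit of never assuming prior trust. The token registry, the local update rule, and the synthetic clients are illustrative assumptions, not the formally verified architecture from the paper.

# Toy federated-averaging round with a zero-trust-style check: every client
# update must be verified before it is aggregated. The token check stands in
# for real attestation or signature verification.
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    # Placeholder "training": nudge the weights toward the client data mean.
    return global_weights + lr * (client_data.mean(axis=0) - global_weights)

def verify_client(client_id, token, registry):
    # Zero trust: verify on every round, never assume prior trust.
    return registry.get(client_id) == token

registry = {"clinic_a": "tok-a", "clinic_b": "tok-b"}  # assumed credential store
clients = {
    "clinic_a": (np.random.rand(50, 4), "tok-a"),
    "clinic_b": (np.random.rand(80, 4), "tok-b"),
    "intruder": (np.random.rand(10, 4), "bad-token"),
}

global_weights = np.zeros(4)
for round_ in range(3):
    updates, sizes = [], []
    for cid, (data, token) in clients.items():
        if not verify_client(cid, token, registry):
            continue  # reject unverified updates
        updates.append(local_update(global_weights, data))
        sizes.append(len(data))
    # FedAvg: weight each accepted update by its local dataset size.
    global_weights = np.average(updates, axis=0, weights=sizes)

print("aggregated weights:", global_weights)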
22 pages, 1566 KiB  
Review
Multi-Objective Evolutionary Algorithms in Waste Disposal Systems: A Comprehensive Review of Applications, Case Studies, and Future Directions
by Saad Talal Alharbi
Computers 2025, 14(8), 316; https://doi.org/10.3390/computers14080316 - 4 Aug 2025
Viewed by 426
Abstract
Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful optimization tools for addressing the complex, often conflicting goals present in modern waste disposal systems. This review explores recent advances and practical applications of MOEAs in key areas, including waste collection routing, waste-to-energy (WTE) systems, [...] Read more.
Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful optimization tools for addressing the complex, often conflicting goals present in modern waste disposal systems. This review explores recent advances and practical applications of MOEAs in key areas, including waste collection routing, waste-to-energy (WTE) systems, and facility location and allocation. Real-world case studies from locations such as Braga, Lisbon, Uppsala, and Cyprus demonstrate how MOEAs can enhance operational efficiency, boost energy recovery, and reduce environmental impacts. While these algorithms offer significant advantages, challenges remain in computational complexity, adapting to dynamic environments, and integrating with emerging technologies. Future research directions highlight the potential of combining MOEAs with machine learning and real-time data to create more flexible and responsive waste management strategies. By leveraging these advancements, MOEAs can play a pivotal role in developing sustainable, efficient, and adaptive waste disposal systems capable of meeting the growing demands of urbanization and stricter environmental regulations. Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
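The mechanism underlying MOEAs can be illustrated with a short Python sketch that extracts the Pareto front from candidate waste-collection plans scored on two conflicting objectives (operating cost and emissions, both minimized). The plans and their scores are invented for illustration.

# Toy Pareto-front extraction over candidate waste-collection plans.
plans = {
    "route_A": (120.0, 45.0),  # (operating cost, CO2 emissions)
    "route_B": (100.0, 60.0),
    "route_C": (150.0, 30.0),
    "route_D": (130.0, 50.0),  # dominated by route_A on both objectives
}

def dominates(a, b):
    # a dominates b if it is no worse on every objective and better on at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto_front = [
    name for name, score in plans.items()
    if not any(dominates(other, score) for other_name, other in plans.items()
               if other_name != name)
]
print("non-dominated plans:", pareto_front)  # expected: route_A, route_B, route_C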
28 pages, 1874 KiB  
Article
Lexicon-Based Random Substitute and Word-Variant Voting Models for Detecting Textual Adversarial Attacks
by Tarik El Lel, Mominul Ahsan and Majid Latifi
Computers 2025, 14(8), 315; https://doi.org/10.3390/computers14080315 - 2 Aug 2025
Viewed by 428
Abstract
Adversarial attacks in Natural Language Processing (NLP) present a critical challenge, particularly in sentiment analysis, where subtle input modifications can significantly alter model predictions. In search of more robust defenses against adversarial attacks on sentiment analysis, this research work introduces two novel defense [...] Read more.
Adversarial attacks in Natural Language Processing (NLP) present a critical challenge, particularly in sentiment analysis, where subtle input modifications can significantly alter model predictions. In search of more robust defenses against adversarial attacks on sentiment analysis, this research work introduces two novel defense mechanisms: the Lexicon-Based Random Substitute Model (LRSM) and the Word-Variant Voting Model (WVVM). LRSM employs randomized substitutions from a dataset-specific lexicon to generate diverse input variations, disrupting adversarial strategies by introducing unpredictability. Unlike traditional defenses requiring synonym dictionaries or precomputed semantic relationships, LRSM directly substitutes words with random lexicon alternatives, reducing overhead while maintaining robustness. Notably, LRSM not only neutralizes adversarial perturbations but occasionally surpasses the original accuracy by correcting inherent model misclassifications. Building on LRSM, WVVM integrates LRSM, Frequency-Guided Word Substitution (FGWS), and Synonym Random Substitution and Voting (RS&V) in an ensemble framework that adaptively combines their outputs. Logistic Regression (LR) emerged as the optimal ensemble configuration, leveraging its regularization parameters to balance the contributions of individual defenses. WVVM consistently outperformed standalone defenses, demonstrating superior restored accuracy and F1 scores across adversarial scenarios. The proposed defenses were evaluated on two well-known sentiment analysis benchmarks: the IMDB Sentiment Dataset and the Yelp Polarity Dataset. The IMDB dataset, comprising 50,000 labeled movie reviews, and the Yelp Polarity dataset, containing labeled business reviews, provided diverse linguistic challenges for assessing adversarial robustness. Both datasets were tested using 4000 adversarial examples generated by established attacks, including Probability Weighted Word Saliency, TextFooler, and BERT-based Adversarial Examples. WVVM and LRSM demonstrated superior performance in restoring accuracy and F1 scores across both datasets, with WVVM excelling through its ensemble learning framework. LRSM improved restored accuracy from 75.66% to 83.7% when compared to the second-best individual model, RS&V, while the Support Vector Classifier WVVM variation further improved restored accuracy to 93.17%. Logistic Regression WVVM achieved an F1 score of 86.26% compared to 76.80% for RS&V. These findings establish LRSM and WVVM as robust frameworks for defending against adversarial text attacks in sentiment analysis. Full article
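The random-substitute-and-vote idea can be sketched in a few lines of Python: generate several lexicon-randomized variants of a suspect input, classify each, and return the majority label. The lexicon, the classifier stub, and the substitution rate below are placeholders, not the paper's LRSM or WVVM implementation.

# Simplified random-substitute-and-vote defense: classify several randomly
# perturbed copies of the input and take the majority label.
import random
from collections import Counter

LEXICON = ["movie", "plot", "acting", "scene", "story"]  # assumed dataset-specific lexicon

def classify(text):
    # Stand-in for the sentiment model under attack (1 = positive, 0 = negative).
    return 1 if text.count("good") >= text.count("bad") else 0

def random_substitute(text, rate=0.2, rng=random):
    # Replace a fraction of the words with random lexicon alternatives.
    words = text.split()
    return " ".join(rng.choice(LEXICON) if rng.random() < rate else w for w in words)

def voted_prediction(text, n_variants=11):
    votes = [classify(random_substitute(text)) for _ in range(n_variants)]
    return Counter(votes).most_common(1)[0][0]

adversarial_review = "the film was good good but the ending felt bad bad bad"
print("single pass :", classify(adversarial_review))
print("voted label :", voted_prediction(adversarial_review))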
20 pages, 1253 KiB  
Article
Multimodal Detection of Emotional and Cognitive States in E-Learning Through Deep Fusion of Visual and Textual Data with NLP
by Qamar El Maazouzi and Asmaa Retbi
Computers 2025, 14(8), 314; https://doi.org/10.3390/computers14080314 - 2 Aug 2025
Viewed by 554
Abstract
In distance learning environments, learner engagement directly impacts attention, motivation, and academic performance. Signs of fatigue, negative affect, or critical remarks can warn of growing disengagement and potential dropout. However, most existing approaches rely on a single modality, visual or text-based, without providing [...] Read more.
In distance learning environments, learner engagement directly impacts attention, motivation, and academic performance. Signs of fatigue, negative affect, or critical remarks can warn of growing disengagement and potential dropout. However, most existing approaches rely on a single modality, visual or text-based, without providing a general view of learners’ cognitive and affective states. We propose a multimodal system that integrates three complementary analyses: (1) a CNN-LSTM model augmented with fatigue indicators such as PERCLOS and yawning frequency for fatigue detection, (2) facial emotion recognition by EmoNet and an LSTM to handle temporal dynamics, and (3) sentiment analysis of feedback by a fine-tuned BERT model. The system was evaluated on three public benchmarks: DAiSEE for fatigue, AffectNet for emotion, and MOOC Review (Coursera) for sentiment analysis. The results show a precision of 88.5% for fatigue detection, 70% for emotion detection, and 91.5% for sentiment analysis. Aggregating these cues enables an accurate identification of disengagement periods and triggers individualized pedagogical interventions. These results, although based on independently sourced datasets, demonstrate the feasibility of an integrated approach to detecting disengagement and open the door to emotionally intelligent learning systems with potential for future work in real-time content personalization and adaptive learning assistance. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
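PERCLOS, one of the fatigue indicators mentioned above, is commonly computed as the fraction of frames in a sliding window whose eye-openness score falls below a closure threshold. The Python sketch below shows this computation with invented per-frame scores and thresholds; it is not the paper's detector.

# Illustrative PERCLOS computation over a sliding window of eye-openness scores.
from collections import deque

CLOSURE_THRESHOLD = 0.2  # eye considered closed below this openness score (assumed)
FATIGUE_PERCLOS = 0.4    # flag fatigue when 40% of the window is "closed" (assumed)
WINDOW = 30              # number of recent frames considered

def perclos(openness_scores):
    closed = sum(1 for s in openness_scores if s < CLOSURE_THRESHOLD)
    return closed / len(openness_scores)

window = deque(maxlen=WINDOW)
stream = [0.8, 0.7, 0.1, 0.05, 0.1, 0.6, 0.1, 0.08] * 5  # fake per-frame openness scores

for score in stream:
    window.append(score)
    if len(window) == WINDOW and perclos(window) > FATIGUE_PERCLOS:
        print("fatigue warning: PERCLOS =", round(perclos(window), 2))
        break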
49 pages, 5495 KiB  
Review
A Map of the Research About Lighting Systems in the 1995–2024 Time Frame
by Gaetanino Paolone, Andrea Piazza, Francesco Pilotti, Romolo Paesani, Jacopo Camplone and Paolino Di Felice
Computers 2025, 14(8), 313; https://doi.org/10.3390/computers14080313 - 1 Aug 2025
Viewed by 319
Abstract
Lighting Systems (LSs) are a key component of modern cities. Across the years, thousands of articles have been published on this topic; nevertheless, a map of the state of the art of the extant literature is lacking. The present review reports on an [...] Read more.
Lighting Systems (LSs) are a key component of modern cities. Across the years, thousands of articles have been published on this topic; nevertheless, a map of the state of the art of the extant literature is lacking. The present review reports on an analysis of the network of the co-occurrences of the authors’ keywords from 12,148 Scopus-indexed articles on LSs published between 1995 and 2024. This review addresses the following research questions: (RQ1) What are the major topics explored by scholars in connection with LSs within the 1995–2024 time frame? (RQ2) How do they group together? The investigation leveraged VOSviewer, open-source software widely used for bibliometric analyses. The number of thematic clusters returned by VOSviewer was determined by the value of the minimum number of occurrences needed for the authors’ keywords to be admitted into the analysis. If such a number is not properly chosen, the consequence is a set of clusters that do not represent meaningful patterns of the input dataset. In the present study, to overcome this issue, the threshold value was chosen by balancing the scores of four independent clustering validity indices against the authors’ judgment of a meaningful partition of the input dataset. In addition, our review delved into the impact that the use/non-use of a thesaurus of the authors’ keywords had on the number and composition of the thematic clusters returned by VOSviewer and, ultimately, on how this choice affected the correctness of the interpretation of the clusters. The study adhered to a well-known protocol, whose implementation is reported in detail. Thus, the workflow is transparent and replicable. Full article
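The thresholding step the review analyzes can be mimicked with a short Python sketch: count author-keyword occurrences, keep only keywords that meet a minimum-occurrence threshold, and build co-occurrence links among the survivors. The records and the threshold below are invented examples, not the Scopus dataset used in the study.

# Minimal sketch of the keyword-thresholding step behind co-occurrence maps.
from collections import Counter
from itertools import combinations

records = [  # author keywords of hypothetical articles
    ["street lighting", "LED", "energy efficiency"],
    ["LED", "smart city", "energy efficiency"],
    ["daylighting", "LED", "smart city"],
    ["street lighting", "energy efficiency"],
]

MIN_OCCURRENCES = 2  # the threshold whose choice drives the cluster structure

keyword_counts = Counter(k for rec in records for k in rec)
kept = {k for k, c in keyword_counts.items() if c >= MIN_OCCURRENCES}

cooccurrence = Counter()
for rec in records:
    for a, b in combinations(sorted(set(rec) & kept), 2):
        cooccurrence[(a, b)] += 1

print("keywords kept:", sorted(kept))
print("strongest links:", cooccurrence.most_common(3))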