Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; acceptance to publication is undertaken in 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Virtual Reality for Hydrodynamics: Evaluating an Original Physics-Based Submarine Simulator Through User Engagement
Computers 2025, 14(9), 348; https://doi.org/10.3390/computers14090348 - 24 Aug 2025
Abstract
STEM education is constantly seeking innovative methods to enhance student learning. Virtual Reality technology can represent a critical tool for effectively teaching complex engineering subjects. This study evaluates an original Virtual Reality software application, entitled Submarine Simulator, developed specifically to support competencies in hydrodynamics within an Underwater Engineering course at MINES Paris—PSL. Our application uniquely integrates a customized physics engine explicitly designed for realistic underwater simulation, significantly improving user comprehension through accurate real-time representation of hydrodynamic forces. The study involved a homogeneous group of 26 fourth-year engineering students sharing similar academic backgrounds in robotics, electronics, programming, and computer vision. This uniform cohort, primarily aged 22–28 and enrolled in the same 3-month course, was intentionally chosen to minimize variations in skills, prior knowledge, and learning pace. Through a combination of quantitative assessments and Confirmatory Factor Analysis, we find that Virtual Reality affordances significantly predict user flow state (path coefficient: 0.811), which in turn predicts user engagement and satisfaction (path coefficient: 0.765). These findings demonstrate the substantial educational potential of tailored Virtual Reality experiences in STEM, particularly in engineering, and highlight directions for further methodological refinement.
Full article
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
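To make the hydrodynamic modelling concrete, the following is a minimal sketch of the per-frame force computation such a physics engine might perform, assuming a simple quadratic drag model; the constants, function names, and integration scheme are illustrative and are not the simulator's actual code.

```python
import numpy as np

RHO_WATER = 1025.0  # seawater density (kg/m^3); illustrative constant
G = 9.81            # gravitational acceleration (m/s^2)

def hydro_forces(vel, volume, cd, area, mass):
    """Net force on a submerged body: quadratic drag + buoyancy + gravity."""
    speed = np.linalg.norm(vel)
    drag = -0.5 * RHO_WATER * cd * area * speed * vel
    buoyancy = np.array([0.0, RHO_WATER * volume * G, 0.0])
    gravity = np.array([0.0, -mass * G, 0.0])
    return drag + buoyancy + gravity

def step(pos, vel, dt, volume, cd, area, mass):
    """Semi-implicit Euler update, as a real-time engine typically does."""
    acc = hydro_forces(vel, volume, cd, area, mass) / mass
    vel = vel + acc * dt
    return pos + vel * dt, vel

# One 60 Hz frame for a slightly buoyant hull (all values hypothetical)
pos, vel = np.zeros(3), np.array([1.0, 0.0, 0.0])
pos, vel = step(pos, vel, 1 / 60, volume=1.2, cd=0.8, area=2.0, mass=1100.0)
```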
Open Access Article
MITM- and DoS-Resistant PUF Authentication for Industrial WSNs via Sensor-Initiated Registration
by Ashraf Alyanbaawi
Computers 2025, 14(9), 347; https://doi.org/10.3390/computers14090347 - 23 Aug 2025
Abstract
Industrial Wireless Sensor Networks (IWSNs) play a critical role in Industry 4.0 environments, enabling real-time monitoring and control of industrial processes. However, existing lightweight authentication protocols for IWSNs remain vulnerable to sophisticated security attacks because of inadequate initial authentication phases. This study presents a security analysis of Gope et al.’s PUF-based authentication protocol for IWSNs and identifies critical vulnerabilities that enable man-in-the-middle (MITM) and denial-of-service (DoS) attacks. We demonstrate that Gope et al.’s protocol is susceptible to MITM attacks during both authentication and Secure Periodical Data Collection (SPDC), allowing adversaries to derive session keys and compromise communication confidentiality. Our analysis reveals that the sensor registration phase of the protocol lacks proper authentication mechanisms, enabling attackers to perform unauthorized PUF queries and subsequently mount successful attacks. To address these vulnerabilities, we propose an enhanced authentication scheme that introduces a sensor-initiated registration process. In our improved protocol, sensor nodes generate and control PUF challenges rather than passively responding to gateway requests. This modification prevents unauthorized PUF queries while preserving the lightweight characteristics essential for resource-constrained IWSN deployments. Security analysis demonstrates that our enhanced scheme effectively mitigates the identified MITM and DoS attacks without introducing significant computational or communication overhead. The proposed modifications maintain compatibility with the existing IWSN infrastructure while strengthening the overall security posture. Comparative analysis shows that our solution addresses the security weaknesses of the original protocol while preserving its practical advantages for industrial use. The enhanced protocol provides a practical and secure solution for real-time data access in IWSNs, making it suitable for deployment in mission-critical industrial environments where both security and efficiency are paramount.
Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
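For illustration, the sketch below models the sensor-initiated registration idea, with a keyed hash standing in for the physical PUF; the class and function names are hypothetical, and the real protocol adds message protection omitted here.

```python
import hashlib
import hmac
import os

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # A real PUF derives responses from physical silicon variation; a keyed
    # hash is used here purely to simulate challenge-response behaviour.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

class Sensor:
    def __init__(self):
        self._secret = os.urandom(32)  # stand-in for device-level entropy

    def initiate_registration(self):
        # Sensor-initiated: the node generates its own challenge, so a
        # gateway (or an attacker) cannot harvest arbitrary CRPs.
        challenge = os.urandom(16)
        return challenge, puf_response(self._secret, challenge)

class Gateway:
    def __init__(self):
        self.crp_store = {}

    def register(self, sensor_id, challenge, response):
        self.crp_store[sensor_id] = (challenge, response)

sensor, gateway = Sensor(), Gateway()
challenge, response = sensor.initiate_registration()
gateway.register("node-1", challenge, response)
```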
Open Access Article
A Method for Few-Shot Modulation Recognition Based on Reinforcement Metric Meta-Learning
by Fan Zhou, Xiao Han, Jinyang Ren, Wei Wang, Yang Wang, Peiying Zhang and Shaolin Liao
Computers 2025, 14(9), 346; https://doi.org/10.3390/computers14090346 - 22 Aug 2025
Abstract
In response to the problem where neural network models fail to fully learn signal sample features due to an insufficient number of signal samples, leading to a decrease in the model's ability to recognize signal modulation methods, a few-shot signal modulation recognition method based on reinforcement metric meta-learning (RMML) is proposed. This approach, grounded in meta-learning techniques, employs transfer learning to build a feature extraction network that effectively extracts data features under few-shot conditions. Building on this, the metric network's target loss function is optimized by jointly measuring the similarity of features within a class and the differences between features of different classes, thereby improving the network's ability to distinguish between features of different modulation methods. The experimental results demonstrate that this method performs well on new classes of signals that were not seen during training. Under the 5-way 5-shot condition, when the signal-to-noise ratio (SNR) is 0 dB, this method achieves an average recognition accuracy of 91.8%, which is 2.8% higher than that of the best-performing baseline method, whereas when the SNR is 18 dB, the average recognition accuracy improves to 98.5%.
Full article
(This article belongs to the Special Issue Wireless Sensor Networks in IoT)
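As a rough illustration of the metric-based few-shot family RMML belongs to, the sketch below computes class prototypes from a support set and classifies queries by distance; this is a generic prototypical-style episode, not the paper's RMML objective.

```python
import torch
import torch.nn.functional as F

def episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    """Metric-based few-shot episode: class prototypes + distance softmax."""
    z_s, z_q = encoder(support_x), encoder(query_x)
    protos = torch.stack([z_s[support_y == k].mean(0) for k in range(n_way)])
    logits = -torch.cdist(z_q, protos) ** 2  # negative squared distance
    return F.cross_entropy(logits, query_y)

# 5-way 5-shot toy episode with a linear encoder on 32-dim "signal" features
encoder = torch.nn.Linear(32, 16)
support_x, support_y = torch.randn(25, 32), torch.arange(5).repeat_interleave(5)
query_x, query_y = torch.randn(10, 32), torch.randint(0, 5, (10,))
loss = episode_loss(encoder, support_x, support_y, query_x, query_y, n_way=5)
```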
Open Access Article
Swallow Search Algorithm (SWSO): A Swarm Intelligence Optimization Approach Inspired by Swallow Bird Behavior
by Farah Sami Khoshaba, Shahab Wahhab Kareem and Roojwan Sc Hawezi
Computers 2025, 14(9), 345; https://doi.org/10.3390/computers14090345 - 22 Aug 2025
Abstract
Swarm Intelligence (SI) algorithms have been widely applied to complex optimization problems because they are simple, flexible, and efficient. This paper proposes a new SI algorithm, the Swallow Search Algorithm (SWSO), inspired by swallows, whose foraging and migration behaviors are highly synchronized. SWSO exploits these behaviors to boost exploration and exploitation in the optimization process. Unlike many other birds, swallows perform fast directional changes and intricate aerial acrobatics with great precision while foraging. Moreover, their flight patterns are very efficient: they transition between flapping and gliding with ease to save energy over long migration distances, and instantaneous wing-shape variations optimize performance across flying conditions. SWSO combines these biologically inspired flight dynamics into a new computational model aimed at enhancing search performance on rugged fitness landscapes. The design of the algorithm simulates the swallow's social and energy-saving behavior, translating it into exploration, exploitation, and convergence control mechanisms. To verify its effectiveness, SWSO is applied to many benchmark problems, including unimodal, multimodal, and fixed-dimension functions, as well as the CEC2019 benchmark, which consists of some of the most widely used benchmark functions. Comparative tests are conducted against more than 30 state-of-the-art metaheuristic algorithms, including PSO, MFO, WOA, GWO, and GA. Performance measures include best fitness, rate of convergence, robustness, and statistical significance. Moreover, SWSO is applied to real-life engineering design problems to demonstrate its practicality and generality. The results confirm that the proposed algorithm offers a competitive and reliable solution methodology, making it a valuable addition to the field of swarm-based optimization.
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
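The sketch below shows only the generic structure such swarm optimizers share: a population alternating between exploratory "flapping" moves and exploitative "gliding" toward the current leader. The update rules are invented for illustration and are not the published SWSO equations.

```python
import numpy as np

def swso_like(objective, dim, n_birds=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, (n_birds, dim))
    fit = np.apply_along_axis(objective, 1, pos)
    best = pos[fit.argmin()].copy()
    for t in range(iters):
        w = 1.0 - t / iters  # shrinking "energy budget": flap -> glide
        for i in range(n_birds):
            if rng.random() < w:   # "flapping": random local exploration
                cand = pos[i] + rng.normal(0.0, w * (hi - lo) / 10, dim)
            else:                  # "gliding": move toward the leader
                cand = pos[i] + rng.random() * (best - pos[i])
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            if f < fit[i]:
                pos[i], fit[i] = cand, f
        best = pos[fit.argmin()].copy()
    return best, fit.min()

# Usage on the 5-dimensional sphere function
print(swso_like(lambda x: float(np.sum(x**2)), dim=5))
```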
Open Access Article
Hybrid FEM-AI Approach for Thermographic Monitoring of Biomedical Electronic Devices
by Danilo Pratticò, Domenico De Carlo, Gaetano Silipo and Filippo Laganà
Computers 2025, 14(9), 344; https://doi.org/10.3390/computers14090344 - 22 Aug 2025
Abstract
Prolonged operation of biomedical devices may compromise electronic component integrity due to cyclic thermal stress, thereby impacting both functionality and safety. Regulatory standards require regular inspections, particularly for surgical applications, highlighting the need for efficient and non-invasive diagnostic tools. This study introduces an integrated system that combines finite element models, infrared thermographic analysis, and artificial intelligence to monitor thermal stress in printed circuit boards (PCBs) within biomedical devices. A dynamic thermal model, implemented in COMSOL Multiphysics® (version 6.2), identifies regions at high risk of thermal overload. Infrared measurements acquired through a FLIR P660 thermal camera provided experimental validation and a dataset for training a hybrid artificial intelligence system. This model integrates a deep learning-based U-Net architecture for thermal anomaly segmentation with machine learning classification of heat diffusion patterns. By combining simulation with experimental measurements, the proposed system achieved an F1-score of 0.970 for hotspot segmentation using the U-Net architecture and an F1-score of 0.933 for the classification of heat propagation modes via a Multi-Layer Perceptron. This study contributes to the development of intelligent diagnostic tools for biomedical electronics by integrating physics-based simulation and AI-driven thermographic analysis, supporting automatic classification and localisation of thermal anomalies, real-time fault detection, and predictive maintenance strategies.
Full article
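As a small illustration of the classification half of such a pipeline, the sketch below trains a Multi-Layer Perceptron on synthetic stand-in features for heat diffusion patterns; the feature set, class count, and data are all hypothetical.

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for features summarizing heat diffusion around a
# detected hotspot (e.g., radial temperature gradients); not real data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = rng.integers(0, 3, size=500)  # three illustrative propagation modes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f1_score(y_te, clf.predict(X_te), average="macro"))
```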
Open Access Article
Application of Partial Discrete Logarithms for Discrete Logarithm Computation
by Dina Shaltykova, Yelizaveta Vitulyova, Kaisarali Kadyrzhan and Ibragim Suleimenov
Computers 2025, 14(9), 343; https://doi.org/10.3390/computers14090343 - 22 Aug 2025
Abstract
A novel approach to constructing an algorithm for computing discrete logarithms, which holds significant interest for advancing cryptographic methods and the applied use of multivalued logic, is proposed. The method is based on the algebraic delta function, which allows the computation of a discrete logarithm to be reduced to the decomposition of known periodic functions into Fourier–Galois series. The concept of the “partial discrete logarithm”, grounded in the existence of a relationship between Galois fields and their complementary finite algebraic rings, is introduced. It is demonstrated that the use of partial discrete logarithms significantly reduces the number of operations required to compute the discrete logarithm of a given element in a Galois field. Illustrative examples are provided to demonstrate the advantages of the proposed approach. Potential practical applications are discussed, particularly for enhancing methods for low-altitude diagnostics of agricultural objects, utilizing groups of unmanned aerial vehicles, and radio geolocation techniques.
Full article
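To make the underlying problem concrete, here is the classical baby-step giant-step algorithm for a discrete logarithm in a small prime field; it is a standard reference method shown only for context, as the paper's partial-discrete-logarithm technique works differently.

```python
from math import isqrt

def dlog_bsgs(g, h, p):
    """Baby-step giant-step in GF(p)*: find x with g**x == h (mod p)."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    g_inv_m = pow(g, -m, p)                      # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):                           # giant steps h * g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * g_inv_m) % p
    return None

# Example in GF(19): 2^5 = 32 ≡ 13 (mod 19), so log_2(13) = 5
print(dlog_bsgs(2, 13, 19))
```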
Open Access Review
A Comprehensive Review of Sensor Technologies in IoT: Technical Aspects, Challenges, and Future Directions
by Sadiq H. Abdulhussain, Basheera M. Mahmmod, Almuntadher Alwhelat, Dina Shehada, Zainab I. Shihab, Hala J. Mohammed, Tuqa H. Abdulameer, Muntadher Alsabah, Maryam H. Fadel, Susan K. Ali, Ghadeer H. Abbood, Zianab A. Asker and Abir Hussain
Computers 2025, 14(8), 342; https://doi.org/10.3390/computers14080342 - 21 Aug 2025
Abstract
The rapid advancements in wireless technology and digital electronics have led to the widespread adoption of compact, intelligent devices in various aspects of daily life. These advanced systems can sense environmental changes, process data, and communicate seamlessly within interconnected networks. Typically, such devices integrate low-power radio transmitters and multiple smart sensors, enabling efficient operation across a wide range of applications. Alongside these technological developments, the concept of the IoT has emerged as a transformative paradigm, facilitating the interconnection of uniquely identifiable devices through internet-based networks. This paper provides a comprehensive exploration of sensor technologies, detailing their integral role within IoT frameworks and examining their impact on optimizing efficiency and service delivery in modern wireless communication systems. It also presents a thorough review of current research trends and the associated challenges in this evolving field, explaining recent advancements in IoT-integrated sensor systems, with particular emphasis on the fundamental architecture of sensors and their pivotal role in modern technological applications. It explores the core benefits of sensor technologies and delivers an in-depth classification of their fundamental types. Beyond reviewing existing developments, this study identifies key open research challenges and outlines prospective directions for future exploration, offering valuable insights for both academic researchers and industry professionals. Ultimately, this paper serves as an essential reference for understanding sensor technologies and their potential contributions to IoT-driven solutions, facilitating advancements in sensor innovation across academic and industrial sectors.
Full article
(This article belongs to the Special Issue The Internet of Things—Current Trends, Applications, and Future Challenges (2nd Edition))
Open Access Article
Optimization of Scene and Material Parameters for the Generation of Synthetic Training Datasets for Machine Learning-Based Object Segmentation
by Malte Nagel, Kolja Hedrich, Nils Melchert, Lennart Hinz and Eduard Reithmeier
Computers 2025, 14(8), 341; https://doi.org/10.3390/computers14080341 - 21 Aug 2025
Abstract
Synthetic training data is often essential for neural-network-based segmentation when real datasets are difficult or impossible to obtain. Conventional synthetic data generation relies on manually selecting scene and material parameters. This can lead to poor performance because the optimal parameters are often non-intuitive and depend heavily on the specific use case and on the objects to be segmented. This study proposes a novel, automated optimization pipeline to improve the quality of synthetic datasets for specific object segmentation tasks. Synthetic datasets are generated by varying material and scene parameters with the BlenderProc framework. These parameters are optimized with the Optuna framework to maximize the average precision achieved by models trained on this data and validated using a small real dataset. After initial single-parameter studies and subsequent multidimensional optimization, optimal scene and material parameters are identified for each object. The results demonstrate the potential of this optimization pipeline to produce synthetic training datasets that enhance neural network performance for specific segmentation tasks, offering insights into the critical role of scene design and material selection in synthetic data generation.
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
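A minimal sketch of that optimization loop using Optuna's study/trial API; the parameter names are illustrative, and the evaluation function is a placeholder for the real BlenderProc rendering, model training, and average-precision validation steps.

```python
import optuna

def render_train_and_evaluate(params: dict) -> float:
    # Placeholder for the real pipeline: render a synthetic dataset with
    # BlenderProc, train a segmentation model, and return validation AP.
    # Here a toy function stands in so the sketch runs end to end.
    return (-(params["roughness"] - 0.5) ** 2
            - 0.01 * (params["light_intensity"] - 2.0) ** 2)

def objective(trial: optuna.Trial) -> float:
    params = {
        "light_intensity": trial.suggest_float("light_intensity", 0.1, 10.0, log=True),
        "roughness": trial.suggest_float("roughness", 0.0, 1.0),
        "n_distractors": trial.suggest_int("n_distractors", 0, 20),
    }
    return render_train_and_evaluate(params)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```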
Open Access Article
Enhancement in Three-Dimensional Depth with Bionic Image Processing
by Yuhe Chen, Chaoping Chen, Baoen Han and Yunfan Yang
Computers 2025, 14(8), 340; https://doi.org/10.3390/computers14080340 - 20 Aug 2025
Abstract
This study proposes an image processing framework based on bionic principles to optimize 3D visual perception in virtual reality (VR) systems. By simulating the physiological mechanisms of the human visual system, the framework significantly enhances depth perception and visual fidelity in VR content. The research focuses on three core algorithms: a Gabor texture feature extraction algorithm based on the directional selectivity of neurons in the V1 region of the visual cortex, which enhances edge detection capability through a fourth-order Gaussian kernel; an improved Retinex model based on the adaptive mechanism of retinal illumination, achieving brightness balance under complex illumination through horizontal–vertical dual-channel decomposition; and an RGB adaptive adjustment algorithm based on the trichromatic response characteristics of cone cells, which integrates color temperature compensation with depth cue optimization to enhance color naturalness and stereoscopic depth. A modular processing system is built on the Unity platform, integrating the above algorithms into a collaborative optimization pipeline and ensuring that per-frame processing time meets VR real-time constraints. The experiments use RMSE, AbsRel, and SSIM metrics, combined with subjective evaluation, to verify the effectiveness of the algorithms. The results show that, compared with traditional methods (SSAO, SSR, SH), our algorithm demonstrates significant advantages in simple scenes and marginal superiority in composite metrics for complex scenes. Collaborative processing by the three algorithms significantly reduces depth-map noise and enhances the user's subjective experience. The research results provide a solution that combines biological plausibility and engineering practicality for visual optimization in fields such as the implantable metaverse, VR healthcare, and education.
Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications (2nd Edition))
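As one concrete ingredient, a bank of orientation-selective Gabor filters (loosely mimicking V1 simple cells) can be built with OpenCV as sketched below; the kernel parameters and input are illustrative, and the paper's fourth-order Gaussian variant is not reproduced.

```python
import cv2
import numpy as np

def gabor_bank(img_gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Max response over a bank of oriented Gabor filters."""
    responses = []
    for theta in thetas:
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(img_gray, cv2.CV_32F, kern))
    return np.max(np.stack(responses), axis=0)  # edge/texture energy map

# Random stand-in for a grayscale VR frame (replace with a real image)
img = np.random.default_rng(0).random((240, 320), dtype=np.float32)
edges = gabor_bank(img)
```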
Open Access Article
Topic Modeling of Positive and Negative Reviews of Soulslike Video Games
by Tibor Guzsvinecz
Computers 2025, 14(8), 339; https://doi.org/10.3390/computers14080339 - 19 Aug 2025
Abstract
Soulslike games are renowned for their challenging gameplay and distinctive design. To examine player reception of this genre, 993,932 user reviews of 21 Soulslike video games were collected from the Steam platform, of which 418,483 were tagged as English and analyzed. Latent Dirichlet Allocation (LDA) was applied to identify and compare thematic patterns across positive and negative reviews. The resulting topics were grouped into five categories: aesthetics, gameplay mechanics, feelings, bugs/issues, and miscellaneous. Positive reviews emphasized aesthetics and atmosphere, whereas negative reviews focused on gameplay mechanics and technical issues. Notably, emotional tone differed significantly between review types. Overall, these results may benefit game developers refining design elements, researchers investigating player experience, and critics analyzing the reception of Soulslike games. Furthermore, the study provides a basis for understanding player perspectives in Soulslike games and establishes a foundation for comparative research with newer titles such as Elden Ring.
Full article
(This article belongs to the Section Human–Computer Interactions)
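A minimal sketch of LDA topic extraction with scikit-learn, using toy reviews in place of the 418,483-review Steam corpus; preprocessing choices and the topic count are illustrative only.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [  # toy stand-ins for Steam reviews
    "beautiful level design and haunting atmosphere",
    "bosses feel unfair and the camera is buggy",
    "combat mechanics are tight and rewarding",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```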
Open Access Article
Leveraging Contrastive Semantics and Language Adaptation for Robust Financial Text Classification Across Languages
by Liman Zhang, Qianye Lin, Fanyu Meng, Siyu Liang, Jingxuan Lu, Shen Liu, Kehan Chen and Yan Zhan
Computers 2025, 14(8), 338; https://doi.org/10.3390/computers14080338 - 19 Aug 2025
Abstract
With the growing demand for multilingual financial information, cross-lingual financial sentiment recognition faces significant challenges, including semantic misalignment, ambiguous sentiment expression, and insufficient transferability. To address these issues, a unified multilingual recognition framework is proposed, integrating semantic contrastive learning with a language-adaptive modulation mechanism. This approach is built upon the XLM-R multilingual model and employs a semantic contrastive module to enhance cross-lingual semantic consistency. In addition, a language modulation module based on low-rank parameter injection is introduced to improve the model’s sensitivity to fine-grained emotional features in low-resource languages such as Chinese and French. Experiments were conducted on a constructed trilingual financial sentiment dataset encompassing English, Chinese, and French. The results demonstrate that the proposed model significantly outperforms existing methods in cross-lingual sentiment recognition tasks. Specifically, in the English-to-French transfer setting, the model achieved 73.6% in accuracy, 69.8% in F1-Macro, 72.4% in F1-Weighted, and a cross-lingual generalization score of 0.654. Further improvements were observed under multilingual joint training, reaching 77.3%, 73.6%, 76.1%, and 0.696, respectively. In overall comparisons, the proposed model attained the highest performance across cross-lingual scenarios, with 75.8% in accuracy, 72.3% in F1-Macro, and 74.7% in F1-Weighted, surpassing strong baselines such as XLM-R+SimCSE and LaBSE. These results highlight the model’s superior capability in semantic alignment and generalization across languages. The proposed framework demonstrates strong applicability and promising potential in multilingual financial sentiment analysis, public opinion monitoring, and multilingual risk modeling.
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
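The sketch below shows a generic InfoNCE-style contrastive loss that pulls together embeddings of parallel sentences across languages, which is the broad idea behind semantic contrastive alignment; the paper's exact module and its low-rank language modulation are not reproduced here.

```python
import torch
import torch.nn.functional as F

def cross_lingual_contrastive_loss(z_src, z_tgt, temperature=0.07):
    """InfoNCE over a batch of parallel sentence embeddings: the i-th
    source sentence should match the i-th target-language sentence."""
    z_src = F.normalize(z_src, dim=-1)
    z_tgt = F.normalize(z_tgt, dim=-1)
    logits = z_src @ z_tgt.T / temperature      # (N, N) similarities
    labels = torch.arange(z_src.size(0))
    return F.cross_entropy(logits, labels)

# Usage with, e.g., XLM-R sentence embeddings of an EN/FR parallel batch
loss = cross_lingual_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
```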
Open Access Article
Assessing Burned Area Detection in Indonesia Using the Stacking Ensemble Neural Network (SENN): A Comparative Analysis of C- and L-Band Performance
by Dodi Sudiana, Anugrah Indah Lestari, Mia Rizkinia, Indra Riyanto, Yenni Vetrita, Athar Abdurrahman Bayanuddin, Fanny Aditya Putri, Tatik Kartika, Argo Galih Suhadha, Atriyon Julzarika, Shinichi Sobue, Anton Satria Prabuwono and Josaphat Tetuko Sri Sumantyo
Computers 2025, 14(8), 337; https://doi.org/10.3390/computers14080337 - 18 Aug 2025
Abstract
Burned area detection plays a critical role in assessing the impact of forest and land fires, particularly in Indonesia, where both peatland and non-peatland areas are increasingly affected. Optical remote sensing has been widely used for this task, but its effectiveness is limited by persistent cloud cover in tropical regions. Synthetic Aperture Radar (SAR) offers a cloud-independent alternative for burned area mapping. This study investigates the performance of a Stacking Ensemble Neural Network (SENN) model using polarimetric features derived from both C-band (Sentinel-1) and L-band (Advanced Land Observing Satellite—Phased Array L-band Synthetic Aperture Radar, ALOS-2/PALSAR-2) data. The analysis covers three representative sites in Indonesia: peatland areas in (1) Rokan Hilir and (2) Merauke, and non-peatland areas in (3) Bima and Dompu. Validation is conducted using high-resolution PlanetScope imagery (Planet Labs PBC, San Francisco, CA, USA). The results show that the SENN model consistently outperforms conventional artificial neural network (ANN) approaches across most evaluation metrics. L-band SAR data yields superior performance to the C-band, particularly in peatland areas, with overall accuracy reaching 93–96% and precision between 92% and 100%. The method achieves 76% accuracy and 89% recall in non-peatland regions, with lower performance in dry, hilly savanna landscapes. These findings demonstrate the effectiveness of the SENN, especially with L-band SAR, in improving burned area detection across diverse land types, supporting more reliable fire monitoring efforts in Indonesia.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
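As a rough sketch of the stacking idea, scikit-learn's StackingClassifier can combine several small neural networks under a meta-learner; the data below is a synthetic stand-in for per-pixel polarimetric features, and the paper's exact SENN architecture may differ.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Toy stand-in for per-pixel SAR features (e.g., VV/VH backscatter stats)
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

base_learners = [
    ("mlp1", MLPClassifier(hidden_layer_sizes=(32,), max_iter=800, random_state=1)),
    ("mlp2", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=800, random_state=2)),
]
senn = StackingClassifier(estimators=base_learners,
                          final_estimator=LogisticRegression(), cv=5)
senn.fit(X, y)
print(senn.score(X, y))
```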
Open Access Article
CNN-Random Forest Hybrid Method for Phenology-Based Paddy Rice Mapping Using Sentinel-2 and Landsat-8 Satellite Images
by Dodi Sudiana, Sayyidah Hanifah Putri, Dony Kushardono, Anton Satria Prabuwono, Josaphat Tetuko Sri Sumantyo and Mia Rizkinia
Computers 2025, 14(8), 336; https://doi.org/10.3390/computers14080336 - 18 Aug 2025
Abstract
The agricultural sector plays a vital role in achieving the second Sustainable Development Goal: “Zero Hunger”. To ensure food security, agriculture must remain resilient and productive. In Indonesia, a major rice-producing country, the conversion of agricultural land for non-agricultural uses poses a serious threat to food availability. Accurate and timely mapping of paddy rice is therefore crucial. This study proposes a phenology-based mapping approach using a Convolutional Neural Network-Random Forest (CNN-RF) Hybrid model with multi-temporal Sentinel-2 and Landsat-8 imagery. Image processing and analysis were conducted using the Google Earth Engine platform. Raw spectral bands and four vegetation indices—NDVI, EVI, LSWI, and RGVI—were extracted as input features for classification. The CNN-RF Hybrid classifier demonstrated strong performance, achieving an overall accuracy of 0.950 and a Cohen’s Kappa coefficient of 0.893. These results confirm the effectiveness of the proposed method for mapping paddy rice in Indramayu Regency, West Java, using medium-resolution optical remote sensing data. The integration of phenological characteristics and deep learning significantly enhances classification accuracy. This research supports efforts to monitor and preserve paddy rice cultivation areas amid increasing land use pressures, contributing to national food security and sustainable agricultural practices.
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
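For reference, the standardized vegetation indices named above can be computed from band reflectances as sketched below; RGVI definitions vary in the literature, so it is omitted, and the small epsilon guards against division by zero.

```python
import numpy as np

def vegetation_indices(nir, red, blue, swir, eps=1e-9):
    """NDVI, EVI, and LSWI from reflectance arrays (e.g., Sentinel-2 bands)."""
    ndvi = (nir - red) / (nir + red + eps)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    lswi = (nir - swir) / (nir + swir + eps)
    return ndvi, evi, lswi

# Toy per-pixel reflectances
nir, red, blue, swir = (np.array([0.4]), np.array([0.1]),
                        np.array([0.05]), np.array([0.2]))
print(vegetation_indices(nir, red, blue, swir))
```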
Open Access Article
HPC Cluster Task Prediction Based on Multimodal Temporal Networks with Hierarchical Attention Mechanism
by Xuemei Bai, Jingbo Zhou and Zhijun Wang
Computers 2025, 14(8), 335; https://doi.org/10.3390/computers14080335 - 18 Aug 2025
Abstract
In recent years, the increasing adoption of High-Performance Computing (HPC) clusters in scientific research and engineering has exposed challenges such as resource imbalance, node idleness, and overload, which hinder scheduling efficiency. Accurate multidimensional task prediction remains a key bottleneck. To address this, we propose a hybrid prediction model that integrates Informer, Long Short-Term Memory (LSTM), and Graph Neural Networks (GNN), enhanced by a hierarchical attention mechanism combining multi-head self-attention and cross-attention. The model captures both long- and short-term temporal dependencies and deep semantic relationships across features. Built on a multitask learning framework, it predicts task execution time, CPU usage, memory, and storage demands with high accuracy. Experiments show prediction accuracies of 89.9%, 87.9%, 86.3%, and 84.3% on these metrics, surpassing baselines like Transformer-XL. The results demonstrate that our approach effectively models complex HPC workload dynamics, offering robust support for intelligent cluster scheduling and holding strong theoretical and practical significance.
Full article
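A generic sketch of the multitask setup: one shared temporal encoder with a separate regression head per target (execution time, CPU, memory, storage). The real model combines Informer, LSTM, and GNN branches under hierarchical attention; only the multitask skeleton is shown here, with a plain LSTM as a stand-in encoder.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, d_in=64, d_hidden=128, n_tasks=4):
        super().__init__()
        self.encoder = nn.LSTM(d_in, d_hidden, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(d_hidden, 1) for _ in range(n_tasks))

    def forward(self, x):                 # x: (batch, seq_len, d_in)
        h, _ = self.encoder(x)
        last = h[:, -1]                   # representation of the last step
        return torch.cat([head(last) for head in self.heads], dim=-1)

preds = MultiTaskHead()(torch.randn(2, 10, 64))  # -> (2, 4) task predictions
```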
Open Access Article
Time–Frequency Feature Fusion Approach for Hemiplegic Gait Recognition
by Linglong Mao and Zhanyong Mei
Computers 2025, 14(8), 334; https://doi.org/10.3390/computers14080334 - 18 Aug 2025
Abstract
Accurately distinguishing hemiplegic gait from healthy gait is significant for alleviating clinicians' diagnostic workloads and enhancing rehabilitation efficiency. The center of pressure (CoP) trajectory extracted from pressure sensor arrays can be utilized for hemiplegic gait recognition. Existing research on plantar pressure-based hemiplegic gait recognition has paid limited attention to the differences in recognition performance offered by CoP trajectories along different directions. To address this, this paper proposes a neural network model based on time–frequency domain feature interaction—the temporal–frequency domain interaction network (TFDI-Net)—to achieve efficient hemiplegic gait recognition. The work encompasses: (1) collecting CoP trajectory data using a pressure sensor array from 19 hemiplegic patients and 29 healthy subjects; (2) designing and implementing the TFDI-Net architecture, which extracts frequency domain features of the CoP trajectory via the fast Fourier transform (FFT) and fuses them with time domain features to construct a discriminative joint representation; and (3) conducting five-fold cross-validation comparisons with traditional machine learning and deep learning methods. Intra-fold data augmentation was performed by adding Gaussian noise to each training fold during partitioning. Box plots were employed to visualize and analyze the performance metrics of the different models across test folds, revealing their stability and advantages. The results demonstrate that the proposed TFDI-Net outperforms traditional machine learning models, achieving improvements of 2.89% in recognition rate, 4.6% in F1-score, and 8.25% in recall.
Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
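The sketch below illustrates the basic time–frequency fusion pattern, concatenating time-domain features with FFT magnitude features of a CoP trajectory, as a minimal PyTorch module; the dimensions and layers are illustrative and do not reproduce the published TFDI-Net.

```python
import torch
import torch.nn as nn

class TimeFreqFusion(nn.Module):
    def __init__(self, seq_len=128, d_out=64):
        super().__init__()
        n_freq = seq_len // 2 + 1  # rfft bin count
        self.time_net = nn.Sequential(nn.Linear(seq_len, d_out), nn.ReLU())
        self.freq_net = nn.Sequential(nn.Linear(n_freq, d_out), nn.ReLU())
        self.classifier = nn.Linear(2 * d_out, 2)  # hemiplegic vs. healthy

    def forward(self, x):                          # x: (batch, seq_len)
        f = torch.fft.rfft(x, dim=-1).abs()        # frequency magnitudes
        z = torch.cat([self.time_net(x), self.freq_net(f)], dim=-1)
        return self.classifier(z)

logits = TimeFreqFusion()(torch.randn(4, 128))     # a batch of CoP sequences
```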
Open Access Article
Comparison of Modern Convolution and Transformer Architectures: YOLO and RT-DETR in Meniscus Diagnosis
by Aizhan Tlebaldinova, Zbigniew Omiotek, Markhaba Karmenova, Saule Kumargazhanova, Saule Smailova, Akerke Tankibayeva, Akbota Kumarkanova and Ivan Glinskiy
Computers 2025, 14(8), 333; https://doi.org/10.3390/computers14080333 - 17 Aug 2025
Abstract
The aim of this study is a comparative evaluation of the effectiveness of YOLO and RT-DETR family models for the automatic recognition and localization of meniscus tears in knee joint MRI images. The experiments were conducted on a proprietary annotated dataset consisting of 2000 images from 2242 patients from various clinics. Based on key performance metrics, the most effective representatives of each family, YOLOv8-x and RT-DETR-l, were selected. Comparative analysis based on training, validation, and testing results showed that YOLOv8-x delivered more stable and accurate outcomes than RT-DETR-l, achieving high values across key metrics: accuracy of 0.958, recall of 0.961, F1-score of 0.960, mAP@50 of 0.975, and mAP@50–95 of 0.616. These results demonstrate the potential of modern object detection models for clinical application, providing accurate, interpretable, and reproducible diagnosis of meniscal injuries.
Full article
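For readers wanting to reproduce the detection setup, fine-tuning a pretrained YOLOv8-x with the ultralytics package follows the pattern below; the dataset path and inference image are hypothetical, and the annotated MRI dataset itself is proprietary.

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8-x on a YOLO-format dataset described by
# a data.yaml file (hypothetical path).
model = YOLO("yolov8x.pt")
model.train(data="meniscus/data.yaml", epochs=100, imgsz=640)

metrics = model.val()                                   # precision, recall, mAP
results = model.predict("patient_042_slice_12.png", conf=0.25)
```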
Open Access Tutorial
Multi-Layered Framework for LLM Hallucination Mitigation in High-Stakes Applications: A Tutorial
by Sachin Hiriyanna and Wenbing Zhao
Computers 2025, 14(8), 332; https://doi.org/10.3390/computers14080332 - 16 Aug 2025
Abstract
Large language models (LLMs) now match or exceed human performance on many open-ended language tasks, yet they continue to produce fluent but incorrect statements, which is a failure mode widely referred to as hallucination. In low-stakes settings this may be tolerable; in regulated or safety-critical domains such as financial services, compliance review, and client decision support, it is not. Motivated by these realities, we develop an integrated mitigation framework that layers complementary controls rather than relying on any single technique. The framework combines structured prompt design, retrieval-augmented generation (RAG) with verifiable evidence sources, and targeted fine-tuning aligned with domain truth constraints. Our interest in this problem is practical. Individual mitigation techniques have matured quickly, yet teams deploying LLMs in production routinely report difficulty stitching them together in a coherent, maintainable pipeline. Decisions about when to ground a response in retrieved data, when to escalate uncertainty, how to capture provenance, and how to evaluate fidelity are often made ad hoc. Drawing on experience from financial technology implementations, where even rare hallucinations can carry material cost, regulatory exposure, or loss of customer trust, we aim to provide clearer guidance in the form of an easy-to-follow tutorial. This paper makes four contributions. First, we introduce a three-layer reference architecture that organizes mitigation activities across input governance, evidence-grounded generation, and post-response verification. Second, we describe a lightweight supervisory agent that manages uncertainty signals and triggers escalation (to humans, alternate models, or constrained workflows) when confidence falls below policy thresholds. Third, we analyze common but under-addressed security surfaces relevant to hallucination mitigation, including prompt injection, retrieval poisoning, and policy evasion attacks. Finally, we outline an implementation playbook for production deployment, including evaluation metrics, operational trade-offs, and lessons learned from early financial-services pilots.
Full article
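A minimal sketch of the supervisory escalation logic from the second contribution: check the generator's confidence signal and evidence provenance, then deliver, reroute, or escalate. The threshold, field names, and actions are illustrative policy choices, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    min_confidence: float = 0.75  # illustrative threshold set by policy

def supervise(answer: str, confidence: float, citations: list, policy: Policy):
    """Post-response check: escalate on low confidence, reroute on missing
    grounding evidence, otherwise deliver the answer."""
    if confidence < policy.min_confidence:
        return {"action": "escalate_to_human", "reason": "low confidence"}
    if not citations:
        return {"action": "reroute_to_rag", "reason": "no supporting evidence"}
    return {"action": "deliver", "answer": answer}

print(supervise("The fund's expense ratio is 0.12%.", 0.62, [], Policy()))
```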
Open Access Article
Parameterised Quantum SVM with Data-Driven Entanglement for Zero-Day Exploit Detection
by Steven Jabulani Nhlapo, Elodie Ngoie Mutombo and Mike Nkongolo Wa Nkongolo
Computers 2025, 14(8), 331; https://doi.org/10.3390/computers14080331 - 15 Aug 2025
Abstract
Zero-day attacks pose a persistent threat to computing infrastructure by exploiting previously unknown software vulnerabilities that evade traditional signature-based network intrusion detection systems (NIDSs). To address this limitation, machine learning (ML) techniques offer a promising approach for enhancing anomaly detection in network traffic. This study evaluates several ML models on a labeled network traffic dataset, with a focus on zero-day attack detection. Ensemble learning methods, particularly eXtreme gradient boosting (XGBoost), achieved perfect classification, identifying all 6231 zero-day instances without false positives and maintaining efficient training and prediction times. While classical support vector machines (SVMs) performed modestly at 64% accuracy, their performance improved to 98% with the use of the borderline synthetic minority oversampling technique (SMOTE) and SMOTE + edited nearest neighbours (SMOTEENN). To explore quantum-enhanced alternatives, a quantum SVM (QSVM) is implemented using three-qubit and four-qubit quantum circuits simulated on the aer_simulator_statevector. The QSVM achieved high accuracy (99.89%) and strong F1-scores (98.95%), indicating that nonlinear quantum feature maps (QFMs) can increase sensitivity to zero-day exploit patterns. Unlike prior work that applies standard quantum kernels, this study introduces a parameterised quantum feature encoding scheme, where each classical feature is mapped using a nonlinear function tuned by a set of learnable parameters. Additionally, a sparse entanglement topology is derived from mutual information between features, ensuring a compact and data-adaptive quantum circuit that aligns with the resource constraints of noisy intermediate-scale quantum (NISQ) devices. Our contribution lies in formalising a quantum circuit design that enables scalable, expressive, and generalisable quantum architectures tailored for zero-day attack detection. This extends beyond conventional usage of QSVMs by offering a principled approach to quantum circuit construction for cybersecurity. While these findings are obtained via noiseless simulation, they provide a theoretical proof of concept for the viability of quantum ML (QML) in network security. Future work should target real quantum hardware execution and adaptive sampling techniques to assess robustness under decoherence, gate errors, and dynamic threat environments.
Full article
(This article belongs to the Special Issue Intrusion Detection and Trust Provisioning in Edge-of-Things Environment)
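The classical side of the pipeline is straightforward to sketch with imbalanced-learn: resample the training split with Borderline-SMOTE followed by SMOTEENN cleaning, then fit an RBF SVM. The data here is a synthetic stand-in for the labeled traffic dataset, and the quantum (QSVM) branch is not shown.

```python
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy imbalanced data standing in for benign vs. zero-day traffic
X, y = make_classification(n_samples=3000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Resample only the training split, then fit a classical SVM
sampler = SMOTEENN(smote=BorderlineSMOTE(random_state=0), random_state=0)
X_rs, y_rs = sampler.fit_resample(X_tr, y_tr)
clf = SVC(kernel="rbf").fit(X_rs, y_rs)
print(classification_report(y_te, clf.predict(X_te)))
```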
Open Access Article
Enhancing Cardiovascular Disease Detection Through Exploratory Predictive Modeling Using DenseNet-Based Deep Learning
by Wael Hadi, Tushar Jaware, Tarek Khalifa, Faisal Aburub, Nawaf Ali and Rashmi Saini
Computers 2025, 14(8), 330; https://doi.org/10.3390/computers14080330 - 15 Aug 2025
Abstract
Cardiovascular Disease (CVD) remains the leading cause of morbidity and mortality, accounting for 17.9 million deaths every year. Precise and early diagnosis is therefore critical to improving patient outcomes and reducing the burden on healthcare systems. This work presents, for the first time, an innovative approach using the DenseNet architecture for the automatic recognition of CVD from clinical data. A heterogeneous dataset of cardiovascular images, including angiograms, echocardiograms, and magnetic resonance images, is preprocessed and augmented. Deep features are optimized for robust model performance by fine-tuning a custom DenseNet architecture with rigorous hyperparameter tuning and sophisticated strategies to handle class imbalance. After training, the DenseNet model shows high accuracy, sensitivity, and specificity in identifying CVD compared to baseline approaches. Beyond the quantitative measures, detailed visualizations show that the model is able to localize and classify pathological areas within an image. The model achieved an accuracy of 0.92, a precision of 0.91, and a recall of 0.95 for class 1, with an overall weighted average F1-score of 0.93, establishing its efficacy. Accurate detection of CVD has strong clinical applicability, enabling timely, personalized interventions. This DenseNet-based approach advances CVD diagnosis with state-of-the-art technology for use by radiologists and clinicians. Future work will focus on improving the model's interpretability and its generalization to broader patient populations, with the potential to transform the diagnosis and management of CVD.
Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
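A generic fine-tuning sketch with torchvision's DenseNet-121: replace the classifier head, freeze the early dense blocks, and train the remainder. The paper uses a custom DenseNet variant on medical imaging data, so everything here is illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # CVD vs. normal

# Train only the last dense block and the new classification head
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("features.denseblock4", "classifier"))

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

logits = model(torch.randn(2, 3, 224, 224))  # dummy batch to check shapes
```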
Open Access Article
Artificial Intelligence Agent-Enabled Predictive Maintenance: Conceptual Proposal and Basic Framework
by Wenyu Jiang and Fuwen Hu
Computers 2025, 14(8), 329; https://doi.org/10.3390/computers14080329 - 15 Aug 2025
Abstract
Predictive maintenance (PdM) represents a significant evolution in maintenance strategies. However, challenges such as system integration complexity, data quality, and data availability are intricately intertwined, collectively impacting the successful deployment of PdM systems. Recently, large model-based agents, or agentic artificial intelligence (AI), have evolved from simple task automation to active problem-solving and strategic decision-making. As such, we propose an AI agent-enabled PdM method that leverages an agentic AI development platform to streamline the development of a multimodal data-based fault detection agent, a RAG (retrieval-augmented generation)-based fault classification agent, a large model-based fault diagnosis agent, and a digital twin-based fault handling simulation agent. This approach breaks through the limitations of traditional PdM, which relies heavily on single models. This combination of “AI workflow + large reasoning models + operational knowledge base + digital twin” integrates the concepts of BaaS (backend as a service) and LLMOps (large language model operations), constructing an end-to-end intelligent closed loop from data perception to decision execution. Furthermore, a tentative prototype is demonstrated to show the technology stack and the system integration methods of the agentic AI-based PdM.
Full article
(This article belongs to the Special Issue Adaptive Decision Making Across Industries with AI and Machine Learning: Frameworks, Challenges, and Innovations)
Topics
Topic in
Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2025
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026

Special Issues
Special Issue in
Computers
Emerging Trends in Machine Learning and Artificial Intelligence
Guest Editor: Thuseethan Selvarajah
Deadline: 31 August 2025
Special Issue in
Computers
When Blockchain Meets IoT: Challenges and Potentials
Guest Editors: Andres Marin Lopez, David Arroyo
Deadline: 31 August 2025
Special Issue in
Computers
Present and Future of E-Learning Technologies (2nd Edition)
Guest Editor: Antonio Sarasa Cabezuelo
Deadline: 30 September 2025
Special Issue in
Computers
Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields
Guest Editor: Rafiqul Chowdhury
Deadline: 30 September 2025