Computers, Volume 14, Issue 10 (October 2025) – 46 articles

Cover Story: Modern energy systems are evolving toward decentralized, intelligent, and resilient architectures driven by interconnected multi-microgrids (MMGs). Understanding their performance requires more than power analysis—it demands an integrated view of control, communication, and cybersecurity. Through a cyber–physical systems perspective, this work explores how quality of service (QoS), co-simulation platforms, and digital interdependence shape MMG reliability and adaptability. By linking computing technologies with real-time control and communication analysis, the study outlines a forward-looking framework for building scalable, secure, and adaptive MMG infrastructures capable of meeting the demands of future smart grids.
26 pages, 1728 KB  
Article
Optimizing Federated Scheduling for Real-Time DAG Tasks via Node-Level Parallelization
by Jiaqing Qiao, Sirui Chen, Tianwen Chen and Lei Feng
Computers 2025, 14(10), 449; https://doi.org/10.3390/computers14100449 - 21 Oct 2025
Viewed by 131
Abstract
Real-time task scheduling in multi-core systems is a crucial research area, especially for parallel task scheduling, where the Directed Acyclic Graph (DAG) model is commonly used to represent task dependencies. However, existing research shows that resource utilization and schedulability rates for DAG task set scheduling remain relatively low. Meanwhile, some studies have identified that certain parallel task nodes exhibit “parallelization freedom,” allowing them to be decomposed into sub-threads that can execute concurrently. This presents a promising opportunity for improving task schedulability. Building on this, we propose an approach that optimizes both node parallelization and processor core allocation under federated scheduling. Simulation experiments demonstrate that by parallelizing nodes, we can significantly reduce the number of cores required for each task and increase the percentage of task sets being schedulable. Full article
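
The core-allocation rule that federated scheduling builds on is compact enough to sketch. Below is a minimal Python illustration of the classic bound n_i = ⌈(C_i − L_i)/(D_i − L_i)⌉ for a heavy DAG task; the parameter values are invented, and the paper's joint node-parallelization and allocation optimizer is more involved than this.

```python
import math

def cores_for_dag_task(total_work: float, critical_path: float, deadline: float) -> int:
    """Classic federated-scheduling core count for a heavy DAG task:
    n_i = ceil((C_i - L_i) / (D_i - L_i)), where
    total_work    C_i: sum of all node WCETs,
    critical_path L_i: longest chain of dependent nodes,
    deadline      D_i: implicit deadline (= period)."""
    if critical_path >= deadline:
        raise ValueError("Infeasible task: critical path exceeds deadline")
    return math.ceil((total_work - critical_path) / (deadline - critical_path))

# Splitting a bottleneck node into parallel sub-threads shortens the
# critical path, which can lower the core demand (illustrative values):
print(cores_for_dag_task(total_work=20, critical_path=8, deadline=10))  # 6 cores
print(cores_for_dag_task(total_work=20, critical_path=5, deadline=10))  # 3 cores
```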

25 pages, 8305 KB  
Article
SAHI-Tuned YOLOv5 for UAV Detection of TM-62 Anti-Tank Landmines: Small-Object, Occlusion-Robust, Real-Time Pipeline
by Dejan Dodić, Vuk Vujović, Srđan Jovković, Nikola Milutinović and Mitko Trpkoski
Computers 2025, 14(10), 448; https://doi.org/10.3390/computers14100448 - 21 Oct 2025
Viewed by 209
Abstract
Anti-tank landmines endanger post-conflict recovery. Detecting camouflaged TM-62 landmines in low-altitude unmanned aerial vehicle (UAV) imagery is challenging because targets occupy few pixels and are low-contrast and often occluded. We introduce a single-class anti-tank dataset and a YOLOv5 pipeline augmented with a SAHI-based small-object stage and Weighted Boxes Fusion. The evaluation combines COCO metrics with an operational operating point (score = 0.25; IoU = 0.50) and stratifies by object size and occlusion. On a held-out test partition representative of UAV acquisition, the baseline YOLOv5 attains mAP@0.50:0.95 = 0.553 and AP@0.50 = 0.851. With tuned SAHI (768 px tiles, 40% overlap) plus fusion, performance rises to mAP@0.50:0.95 = 0.685 and AP@0.50 = 0.935—ΔmAP = +0.132 (+23.9% rel.) and ΔAP@0.50 = +0.084 (+9.9% rel.). At the operating point, precision = 0.94 and recall = 0.89 (F1 = 0.914), implying a 58.4% reduction in missed detections versus a non-optimized SAHI baseline and a +14.3 AP@0.50 gain on the small/occluded subset. Ablations attribute gains to tile size, overlap, and fusion, which boost recall on low-pixel, occluded landmines without inflating false positives. The pipeline sustains real-time UAV throughput and supports actionable triage for humanitarian demining, as well as motivating RGB–thermal fusion and cross-season/-domain adaptation. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
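
The tuned tiling stage maps naturally onto the open-source sahi package. The sketch below uses the tile size, overlap, and score threshold quoted in the abstract, while the checkpoint and image paths are placeholders; the paper's Weighted Boxes Fusion step would replace sahi's default merge.

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Hypothetical weights path; any YOLOv5 checkpoint trained on the
# single-class TM-62 dataset would be loaded the same way.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="tm62_yolov5.pt",     # placeholder checkpoint
    confidence_threshold=0.25,       # operating point from the abstract
    device="cuda:0",
)

# 768 px tiles with 40% overlap, as in the tuned configuration;
# detections from overlapping tiles are merged in the postprocess step.
result = get_sliced_prediction(
    "uav_frame.jpg",                 # placeholder UAV image
    detection_model,
    slice_height=768,
    slice_width=768,
    overlap_height_ratio=0.4,
    overlap_width_ratio=0.4,
)
result.export_visuals(export_dir="out/")
```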

19 pages, 2599 KB  
Article
Blockchain-Based Cooperative Medical Records Management System
by Sultan Alyahya and Zahraa Almaghrabi
Computers 2025, 14(10), 447; https://doi.org/10.3390/computers14100447 - 21 Oct 2025
Viewed by 237
Abstract
The effective management of electronic medical records is critical to delivering high-quality healthcare services. However, existing systems often suffer from issues such as fragmented data, lack of interoperability, and weak privacy protections, which hinder collaboration among healthcare stakeholders. This paper proposes a blockchain-based system to securely manage and share medical records in a decentralized and transparent manner. By leveraging smart contracts and access control policies, the system empowers patients with control over their data, ensures auditability of all interactions, and facilitates secure data sharing among patients, healthcare providers, insurance companies, and regulatory authorities. The proposed architecture is implemented using a private Ethereum blockchain and evaluated through a scenario-based comparison with the Prince Sultan Military Medical City system, as well as quantitative performance measurements of the blockchain prototype. Results demonstrate significant improvements in data security, access transparency, and system interoperability, with patients gaining the ability to track and control access to their records across multiple healthcare providers, while system performance remained practical for healthcare workflows. Full article
(This article belongs to the Special Issue Revolutionizing Industries: The Impact of Blockchain Technology)

41 pages, 2159 KB  
Systematic Review
Predicting Website Performance: A Systematic Review of Metrics, Methods, and Research Gaps (2010–2024)
by Mohammad Ghattas, Suhail Odeh and Antonio M. Mora
Computers 2025, 14(10), 446; https://doi.org/10.3390/computers14100446 - 20 Oct 2025
Viewed by 349
Abstract
Website performance directly impacts user experience, trust, and competitiveness. While numerous studies have proposed evaluation methods, there is still no comprehensive synthesis that integrates performance metrics with predictive models. This study conducts a systematic literature review (SLR) following the PRISMA framework across seven academic databases (2010–2024). From 6657 initial records, 30 high-quality studies were included after rigorous screening and quality assessment. In addition, 59 website performance metrics were identified and validated through an expert survey, resulting in 16 core indicators. The review highlights a dominant reliance on traditional evaluation metrics (e.g., Load Time, Page Size, Response Time) and reveals limited adoption of machine learning and deep learning approaches. Most existing studies focus on e-government and educational websites, with little attention to e-commerce, healthcare, and industry domains. Furthermore, the geographic distribution of research remains uneven, with a concentration in Asia and limited contributions from North America and Africa. This study contributes by (i) consolidating and validating a set of 16 critical performance metrics, (ii) critically analyzing current methodologies, and (iii) identifying gaps in domain coverage and intelligent prediction models. Future research should prioritize cross-domain benchmarks, integrate machine learning for scalable predictions, and address the lack of standardized evaluation protocols. Full article
(This article belongs to the Section Human–Computer Interactions)

21 pages, 949 KB  
Article
Exploring the Moderating Role of Personality Traits in Technology Acceptance: A Study on SAP S/4 HANA Learning Among University Students
by Sandra Barjaktarovic, Ivana Kovacevic and Ognjen Pantelic
Computers 2025, 14(10), 445; https://doi.org/10.3390/computers14100445 - 19 Oct 2025
Viewed by 259
Abstract
The aim of this study is to examine the impact of personality traits on students’ intention to accept the SAP S/4HANA business software. Grounded in the Big Five Factor (BFF) model of personality and the Technology Acceptance Model (TAM), the research analyzed the role of individual differences in students’ learning performance using this ERP system. The study was conducted on a sample of N = 418 first-year students who underwent a quasi-experimental treatment based on realistic business scenarios. The results indicate that conscientiousness emerged as a positive predictor, while agreeableness demonstrated negative predictive value in learning SAP S/4HANA, whereas neuroticism did not exhibit a significant effect. Moderation analysis revealed that both Perceived Usefulness and Actual Usage of technology moderated the relationship between conscientiousness and SAP learning performance, enhancing its predictive strength. These findings underscore the importance of individual differences in the process of SAP S/4HANA acceptance within an educational context and suggest that instructional strategies should be tailored to students’ personality traits in order to optimize learning outcomes. Full article
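
Moderation of this kind is usually tested as an interaction term in a regression. Below is a minimal statsmodels sketch, with hypothetical column names standing in for the study's measures; a significant interaction coefficient indicates moderation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: sap_score (learning performance), consc
# (conscientiousness), pu (Perceived Usefulness). The consc:pu term
# tests whether PU moderates the conscientiousness -> score link.
df = pd.read_csv("survey.csv")  # placeholder data file
model = smf.ols("sap_score ~ consc * pu", data=df).fit()
print(model.summary())
```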

39 pages, 7020 KB  
Article
Improved Multi-Faceted Sine Cosine Algorithm for Optimization and Electricity Load Forecasting
by Stephen O. Oladipo, Udochukwu B. Akuru and Abraham O. Amole
Computers 2025, 14(10), 444; https://doi.org/10.3390/computers14100444 - 17 Oct 2025
Viewed by 198
Abstract
The sine cosine algorithm (SCA) is a population-based stochastic optimization method that updates the position of each search agent using the oscillating properties of the sine and cosine functions to balance exploration and exploitation. While flexible and widely applied, the SCA often suffers from premature convergence and getting trapped in local optima due to weak exploration–exploitation balance. To overcome these issues, this study proposes a multi-faceted SCA (MFSCA) incorporating several improvements. The initial population is generated using dynamic opposition (DO) to increase diversity and global search capability. Chaotic logistic maps generate random coefficients to enhance exploration, while an elite-learning strategy allows agents to learn from multiple top-performing solutions. Adaptive parameters, including inertia weight, jumping rate, and local search strength, are applied to guide the search more effectively. In addition, Lévy flights and adaptive Gaussian local search with elitist selection strengthen exploration and exploitation, while reinitialization of stagnating agents maintains diversity. The developed MFSCA was tested against 23 benchmark optimization functions and assessed using the Wilcoxon rank-sum and Friedman rank tests. Results showed that MFSCA outperformed the original SCA and other variants. To further validate its applicability, this study developed a fuzzy c-means MFSCA-based adaptive neuro-fuzzy inference system to forecast energy consumption in student residences, using student apartments at a university in South Africa as a case study. The MFSCA-ANFIS achieved superior performance with respect to RMSE (1.9374), MAD (1.5483), MAE (1.5457), CVRMSE (42.8463), and SD (1.9373). These results highlight MFSCA’s effectiveness as a robust optimizer for both general optimization tasks and energy management applications. Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
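
For reference, the standard SCA position update that MFSCA builds on can be sketched in a few lines. The uniform draws below are exactly what the proposed variant replaces with chaotic logistic maps and adaptive parameters; P is the best-so-far destination point.

```python
import numpy as np

def sca_step(X, best, t, T, a=2.0, rng=None):
    """One standard SCA iteration.
    X    : (n_agents, dim) current positions
    best : (dim,) best solution found so far (destination P)
    t, T : current iteration and iteration budget
    a    : controls the linearly decreasing amplitude r1."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = a - t * (a / T)                     # shifts exploration -> exploitation
    r2 = rng.uniform(0, 2 * np.pi, X.shape)  # MFSCA draws these coefficients
    r3 = rng.uniform(0, 2, X.shape)          # from chaotic logistic maps instead
    r4 = rng.uniform(0, 1, X.shape)
    step = np.abs(r3 * best - X)
    return np.where(r4 < 0.5,
                    X + r1 * np.sin(r2) * step,
                    X + r1 * np.cos(r2) * step)
```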

69 pages, 7515 KB  
Review
Towards an End-to-End Digital Framework for Precision Crop Disease Diagnosis and Management Based on Emerging Sensing and Computing Technologies: State over Past Decade and Prospects
by Chijioke Leonard Nkwocha and Abhilash Kumar Chandel
Computers 2025, 14(10), 443; https://doi.org/10.3390/computers14100443 - 16 Oct 2025
Viewed by 514
Abstract
Early detection and diagnosis of plant diseases is critical for ensuring global food security and sustainable agricultural practices. This review comprehensively examines the latest advancements in crop disease risk prediction, onset detection through imaging techniques, machine learning (ML), deep learning (DL), and edge computing technologies. Traditional disease detection methods, which rely on visual inspections, are time-consuming and often inaccurate. While chemical analyses are accurate, they can be time-consuming and leave less flexibility to promptly implement remedial actions. In contrast, modern techniques such as hyperspectral and multispectral imaging, thermal imaging, and fluorescence imaging, among others, can provide non-invasive and highly accurate solutions for identifying plant diseases at early stages. The integration of ML and DL models, including convolutional neural networks (CNNs) and transfer learning, has significantly improved disease classification and severity assessment. Furthermore, edge computing and the Internet of Things (IoT) facilitate real-time disease monitoring by processing and communicating data directly in/from the field, reducing latency and reliance on in-house as well as centralized cloud computing. Despite these advancements, challenges remain in multimodal dataset standardization and in integrating the individual sensing, data processing, communication, and decision-making technologies into complete end-to-end solutions for practical implementation. In addition, the robustness of such technologies under varying field conditions, and their affordability, have received little review. To this end, this review paper focuses on the broad areas of sensing, computing, and communication systems to outline the transformative potential of end-to-end solutions for effective crop disease management in modern agricultural systems. This review also highlights the critical potential of integrating AI-driven disease detection with predictive models capable of analyzing multimodal data on environmental factors such as temperature and humidity, as well as visible-range and thermal imagery, for early disease diagnosis and timely management. Future research should focus on developing autonomous end-to-end disease monitoring systems that incorporate these technologies, fostering comprehensive precision agriculture and sustainable crop production. Full article

22 pages, 51772 KB  
Article
On a Software Framework for Automated Pore Identification and Quantification for SEM Images of Metals
by Michael Mulligan, Oliver Fowler, Joshua Voell, Mark Atwater and Howie Fang
Computers 2025, 14(10), 442; https://doi.org/10.3390/computers14100442 - 16 Oct 2025
Viewed by 174
Abstract
The functional performance of porous metals and alloys is dictated by pore features such as size, connectivity, and morphology. While methods like mercury porosimetry or gas pycnometry provide cumulative information, direct observation via scanning electron microscopy (SEM) offers detailed insights unavailable through other means, especially for microscale or nanoscale pores. Each scanned image can contain hundreds or thousands of pores, making efficient identification, classification, and quantification challenging due to the processing time required for pixel-level edge recognition. Traditionally, pore outlines on scanned images were hand-traced and analyzed using image-processing software, a process that is time-consuming and often inconsistent for capturing both large and small pores while accurately removing noise. In this work, a software framework was developed that leverages modern computing tools and methodologies for automated image processing for pore identification, classification, and quantification. Vectorization was implemented as the final step to utilize the direction and magnitude of unconnected endpoints to reconstruct incomplete or broken edges. Combined with other existing pore analysis methods, this automated approach reduces manual effort dramatically, reducing analysis time from multiple hours per image to only minutes, while maintaining acceptable accuracy in quantified pore metrics. Full article
(This article belongs to the Section Human–Computer Interactions)

22 pages, 964 KB  
Article
Multi-Modal Emotion Detection and Tracking System Using AI Techniques
by Werner Mostert, Anish Kurien and Karim Djouani
Computers 2025, 14(10), 441; https://doi.org/10.3390/computers14100441 - 16 Oct 2025
Viewed by 330
Abstract
Emotion detection significantly impacts healthcare by enabling personalized patient care and improving treatment outcomes. Single-modality emotion recognition often lacks reliability due to the complexity and subjectivity of human emotions. This study proposes a multi-modal emotion detection platform integrating visual, audio, and heart rate data using AI techniques, including convolutional neural networks and support vector machines. The system outperformed single-modality approaches, demonstrating enhanced accuracy and robustness. This improvement underscores the value of multi-modal AI in emotion detection, offering potential benefits across healthcare, education, and human–computer interaction. Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)

18 pages, 354 KB  
Article
Implementation of Ring Learning-with-Errors Encryption and Brakerski–Fan–Vercauteren Fully Homomorphic Encryption Using ChatGPT
by Zhigang Chen, Xinxia Song, Liqun Chen and Hai Liu
Computers 2025, 14(10), 440; https://doi.org/10.3390/computers14100440 - 16 Oct 2025
Viewed by 219
Abstract
This paper investigates whether ChatGPT, a large language model, can assist in the implementation of lattice-based cryptography and fully homomorphic encryption algorithms, specifically the Ring Learning-with-Errors encryption scheme and the Brakerski–Fan–Vercauteren FHE scheme. To the best of our knowledge, this study represents the first systematic exploration of ChatGPT’s ability to implement these cryptographic algorithms. Fully homomorphic encryption, despite its theoretical and practical significance, poses significant challenges due to its computational complexity and efficiency requirements. This study evaluates ChatGPT’s capability as a development tool from both algorithmic and implementation perspectives. At the algorithmic level, ChatGPT demonstrates a solid understanding of the Ring Learning-with-Errors lattice encryption scheme but faces limitations in comprehending the intricate structure of the Brakerski–Fan–Vercauteren FHE scheme. At the code level, ChatGPT can generate functional C++ implementations of both encryption schemes, significantly reducing manual coding effort. However, debugging and corrections remain necessary, particularly for the more complex Brakerski–Fan–Vercauteren scheme, where additional effort is required to ensure correctness. The findings highlight ChatGPT’s potential and limitations in supporting cryptographic algorithm development, offering insights into its application for advancing implementations of complex cryptographic systems. Full article
(This article belongs to the Special Issue Emerging Trends in Network Security and Applied Cryptography)
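
The Ring Learning-with-Errors scheme at issue can be illustrated with a toy encrypt/decrypt round-trip. The sketch below uses a standard LPR-style construction with deliberately tiny, insecure parameters; it is an illustration of the underlying algebra, not the code ChatGPT produced.

```python
import numpy as np

n, q = 16, 7681          # toy parameters; real schemes use n >= 1024
rng = np.random.default_rng(0)

def polymul(a, b):
    """Multiply in Z_q[x]/(x^n + 1): negacyclic convolution."""
    res = np.zeros(2 * n, dtype=np.int64)
    for i, ai in enumerate(a):
        res[i:i + n] += ai * b
    return (res[:n] - res[n:]) % q   # x^n = -1 wraps with a sign flip

small = lambda: rng.integers(-1, 2, n)   # ternary noise/secret polynomials

# Key generation: public key (a, b = a*s + e), secret key s
s = small()
a = rng.integers(0, q, n)
b = (polymul(a, s) + small()) % q

# Encrypt a binary message, scaled up to q//2
m = rng.integers(0, 2, n)
r, e1, e2 = small(), small(), small()
c1 = (polymul(a, r) + e1) % q
c2 = (polymul(b, r) + e2 + (q // 2) * m) % q

# Decrypt: c2 - c1*s = (q/2)*m + small noise; round to the nearest multiple
v = (c2 - polymul(c1, s)) % q
decoded = ((v > q // 4) & (v < 3 * q // 4)).astype(int)
print(np.array_equal(decoded, m))   # True
```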

39 pages, 9477 KB  
Article
Simulation Application of Adaptive Strategy Hybrid Secretary Bird Optimization Algorithm in Multi-UAV 3D Path Planning
by Xiaojun Zheng, Rundong Liu and Xiaoyang Liu
Computers 2025, 14(10), 439; https://doi.org/10.3390/computers14100439 - 15 Oct 2025
Viewed by 297
Abstract
Multi-UAV three-dimensional (3D) path planning is formulated as a high-dimensional multi-constraint optimization problem involving costs such as path length, flight altitude, avoidance cost, and smoothness. To address this challenge, we propose an Adaptive Strategy Hybrid Secretary Bird Optimization Algorithm (ASHSBOA), an enhanced variant of the Secretary Bird Optimization Algorithm (SBOA). ASHSBOA integrates a weighted multi-direction dynamic learning strategy, an adaptive strategy-selection mechanism, and a hybrid elite-guided boundary-repair scheme to enhance the ability to escape local optima and balance exploration and exploitation. The algorithm is tested on the benchmark suites CEC-2017 and CEC-2022 against nine classic or state-of-the-art optimizers. Non-parametric tests show that ASHSBOA consistently achieves superior performance and ranks first among competitors. Finally, we applied ASHSBOA to a multi-UAV 3D path planning model. In Scenario 1, the path cost planned by ASHSBOA decreased by 124.9 compared to the second-ranked QHSBOA. In the more complex Scenario 2, this figure reached 1137.9. Simulation results demonstrate that ASHSBOA produces lower-cost flight paths and more stable convergence behavior compared to comparative methods. These results validate the robustness and practicality of ASHSBOA in UAV path planning. Full article

33 pages, 1124 KB  
Review
Machine and Deep Learning in Agricultural Engineering: A Comprehensive Survey and Meta-Analysis of Techniques, Applications, and Challenges
by Samuel Akwasi Frimpong, Mu Han, Wenyi Zheng, Xiaowei Li, Ernest Akpaku and Ama Pokuah Obeng
Computers 2025, 14(10), 438; https://doi.org/10.3390/computers14100438 - 15 Oct 2025
Viewed by 341
Abstract
Machine learning and deep learning techniques integrated with advanced sensing technologies have revolutionized agricultural engineering, addressing complex challenges in food production, quality assessment, and environmental monitoring. This survey presents a systematic review and meta-analysis of recent developments by examining the peer-reviewed literature from 2015 to 2024. The analysis reveals computational approaches ranging from traditional algorithms like support vector machines and random forests to deep learning architectures, including convolutional and recurrent neural networks. Deep learning models often demonstrate superior performance, showing 5–10% accuracy improvements over traditional methods and achieving 93–99% accuracy in image-based applications. Three primary application domains are identified: agricultural product quality assessment using hyperspectral imaging, crop and field management through precision optimization, and agricultural automation with machine vision systems. Dataset taxonomy shows spectral data predominating at 42.1%, followed by image data at 26.2%, indicating preference for non-destructive approaches. Current challenges include data limitations, model interpretability issues, and computational complexity. Future trends emphasize lightweight model development, ensemble learning, and expanding applications. This analysis provides a comprehensive understanding of current capabilities and future directions for machine learning in agricultural engineering, supporting the development of efficient and sustainable agricultural systems for global food security. Full article

23 pages, 1409 KB  
Systematic Review
A Systematic Review of Machine Learning in Credit Card Fraud Detection Under Original Class Imbalance
by Nazerke Baisholan, J. Eric Dietz, Sergiy Gnatyuk, Mussa Turdalyuly, Eric T. Matson and Karlygash Baisholanova
Computers 2025, 14(10), 437; https://doi.org/10.3390/computers14100437 - 15 Oct 2025
Viewed by 750
Abstract
Credit card fraud remains a significant concern for financial institutions due to its low prevalence, evolving tactics, and the operational demand for timely, accurate detection. Machine learning (ML) has emerged as a core approach, capable of processing large-scale transactional data and adapting to new fraud patterns. However, much of the literature modifies the natural class distribution through resampling, potentially inflating reported performance and limiting real-world applicability. This systematic literature review examines only studies that preserve the original class imbalance during both training and evaluation. Following PRISMA 2020 guidelines, strict inclusion and exclusion criteria were applied to ensure methodological rigor and relevance. Four research questions guided the analysis, focusing on dataset usage, ML algorithm adoption, evaluation metric selection, and the integration of explainable artificial intelligence (XAI). The synthesis reveals dominant reliance on a small set of benchmark datasets, a preference for tree-based ensemble methods, limited use of AUC-PR despite its suitability for skewed data, and rare implementation of operational explainability, most notably through SHAP. The findings highlight the need for semantics-preserving benchmarks, cost-aware evaluation frameworks, and analyst-oriented interpretability tools, offering a research agenda to improve reproducibility and enable effective, transparent fraud detection under real-world imbalance conditions. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
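
The review's point about AUC-PR is easy to demonstrate: on data kept at its natural skew, average precision (the usual AUC-PR estimate) is far less forgiving than ROC-AUC. A sketch on synthetic data, with the imbalance preserved in both training and testing:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Keep the natural skew (~0.5% positives) in BOTH training and testing,
# as the reviewed studies do: no resampling anywhere.
X, y = make_classification(n_samples=100_000, weights=[0.995],
                           n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# ROC-AUC can look excellent while precision at useful recall is poor;
# AUC-PR exposes that gap on skewed data.
print("ROC-AUC:", roc_auc_score(y_te, scores))
print("AUC-PR :", average_precision_score(y_te, scores))
```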

19 pages, 1396 KB  
Article
Sparse Keyword Data Analysis Using Bayesian Pattern Mining
by Sunghae Jun
Computers 2025, 14(10), 436; https://doi.org/10.3390/computers14100436 - 14 Oct 2025
Viewed by 277
Abstract
Keyword data analysis aims to extract and interpret meaningful relationships from large collections of text documents. A major challenge in this process arises from the extreme sparsity of document–keyword matrices, where the majority of elements are zeros due to zero inflation. To address this issue, this study proposes a probabilistic framework called Bayesian Pattern Mining (BPM), which integrates Bayesian inference into association rule mining (ARM). The proposed method estimates both the expected values and credible intervals of interestingness measures such as confidence and lift, providing a probabilistic evaluation of keyword associations. Experiments conducted on 9436 quantum computing patent documents, from which 175 representative keywords were extracted, demonstrate that BPM yields more stable and interpretable associations than conventional ARM. By incorporating credible intervals, BPM reduces the risk of biased decisions under sparsity and enhances the reliability of keyword-based technology analysis, offering a rigorous approach for knowledge discovery in zero-inflated text data. Full article
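
The Bayesian treatment of an interestingness measure can be sketched directly: under a uniform Beta(1,1) prior, the posterior for confidence(A→B), given n_A transactions containing A of which n_AB also contain B, is Beta(n_AB+1, n_A−n_AB+1). The counts below are hypothetical; the paper applies the same idea to lift as well.

```python
from scipy import stats

def bayesian_confidence(n_a: int, n_ab: int, level: float = 0.95):
    """Posterior mean and credible interval for confidence(A -> B)
    under a Beta(1, 1) prior on P(B | A)."""
    post = stats.beta(n_ab + 1, n_a - n_ab + 1)
    return post.mean(), post.interval(level)

# On sparse keywords the point estimate 3/5 = 0.6 looks strong, but the
# credible interval shows how little the data actually pins it down.
print(bayesian_confidence(n_a=5, n_ab=3))      # mean ~0.571, wide interval
print(bayesian_confidence(n_a=500, n_ab=300))  # mean ~0.600, tight interval
```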

15 pages, 2005 KB  
Article
A Web-Based Digital Twin Framework for Interactive E-Learning in Engineering Education
by Peter Weis, Ronald Bašťovanský and Matúš Vereš
Computers 2025, 14(10), 435; https://doi.org/10.3390/computers14100435 - 14 Oct 2025
Viewed by 289
Abstract
Traditional engineering education struggles to bridge the theory–practice gap in the Industry 4.0 era, as static 2D schematics inadequately convey complex spatial relationships. While advanced visualization tools exist, their adoption is frequently hindered by requirements for specialized hardware and software, limiting accessibility. This study details the development and evaluation of a novel, web-based Digital Twin framework designed for accessible, intuitive e-learning that requires no client-side installation. The framework, centered on a high-fidelity 3D model of a historic radial engine, was assessed through a qualitative pilot case study with seven engineering professionals. Data was collected via a “think-aloud” protocol and a mixed-methods survey with a Likert scale and open-ended questions. Findings revealed an overwhelmingly positive reception; quantitative data showed high mean scores for usability, educational impact, and professional training potential (M > 4.2). Qualitative analysis confirmed the framework’s success in enhancing spatial understanding via features like dynamic cross-sections, improving the efficiency of accessing integrated documentation, and demonstrating high value as an onboarding tool. This work provides strong preliminary evidence that an accessible, web-based Digital Twin is a powerful and scalable solution for technical education that significantly enhances spatial comprehension and knowledge transfer. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))

25 pages, 3060 KB  
Article
Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games
by Sehar Shahzad Farooq, Hameedur Rahman, Samiya Abdul Wahid, Muhammad Alyan Ansari, Saira Abdul Wahid and Hosu Lee
Computers 2025, 14(10), 434; https://doi.org/10.3390/computers14100434 - 13 Oct 2025
Viewed by 396
Abstract
Games are considered a suitable and standard benchmark for checking the performance of artificial intelligence-based algorithms in terms of training, evaluating, and comparing the performance of AI agents. In this research, an application of the Intrinsic Curiosity Module (ICM) and the Asynchronous Advantage Actor–Critic (A3C) algorithm is explored using action games. Although this combination has proven successful in several gaming environments, its effectiveness in action games is rarely explored. This research aims to assess whether integrating ICM with A3C, which provides efficient learning and adaptation facilities, promotes curiosity-driven exploration and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent’s generalization capability in separate environments. The results demonstrate that ICM and A3C effectively promote curiosity-driven exploration in action games, with the agent learning exploration behaviors without relying solely on external rewards. Notably, we also observed improved efficiency and learning speed compared to baseline approaches. This research contributes to curiosity-driven exploration in reinforcement learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C in action games presents exciting opportunities for adaptive learning and efficient exploration in challenging real-world environments. Full article
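
The curiosity signal at the heart of ICM is the forward model's prediction error in feature space, r_t = (η/2)·‖f(φ(s_t), a_t) − φ(s_{t+1})‖². A minimal PyTorch sketch of that standard formulation follows; the layer sizes and η are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next state's features from current features + action."""
    def __init__(self, feat_dim=256, n_actions=8):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 256), nn.ReLU(),
            nn.Linear(256, feat_dim))

    def forward(self, phi_s, action):
        a_onehot = nn.functional.one_hot(action, self.n_actions).float()
        return self.net(torch.cat([phi_s, a_onehot], dim=-1))

def intrinsic_reward(fwd, phi_s, action, phi_next, eta=0.01):
    """Curiosity bonus: scaled prediction error of the forward model.
    This bonus is added to the (often sparse) extrinsic game reward
    before the A3C/PPO update; the forward model itself is trained
    separately, with gradients, on the same transitions."""
    with torch.no_grad():
        pred = fwd(phi_s, action)
    return 0.5 * eta * (pred - phi_next).pow(2).sum(dim=-1)
```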

20 pages, 1343 KB  
Article
Hybrid CDN Architecture Integrating Edge Caching, MEC Offloading, and Q-Learning-Based Adaptive Routing
by Aymen D. Salman, Akram T. Zeyad, Asia Ali Salman Al-karkhi, Safanah M. Raafat and Amjad J. Humaidi
Computers 2025, 14(10), 433; https://doi.org/10.3390/computers14100433 - 13 Oct 2025
Viewed by 417
Abstract
Content Delivery Networks (CDNs) have evolved to meet surging data demands and stringent low-latency requirements driven by emerging applications like high-definition video streaming, virtual reality, and IoT. This paper proposes a hybrid CDN architecture that synergistically combines edge caching, Multi-access Edge Computing (MEC) offloading, and reinforcement learning (Q-learning) for adaptive routing. In the proposed system, popular content is cached at radio access network edges (e.g., base stations) and computation-intensive tasks are offloaded to MEC servers, while a Q-learning agent dynamically routes user requests to the optimal service node (cache, MEC server, or origin) based on the network state. The study presents a detailed system design and a comprehensive simulation-based evaluation. The results demonstrate that the proposed hybrid approach significantly improves cache hit ratios and reduces end-to-end latency compared to traditional CDNs and simpler edge architectures. The Q-learning-enabled routing adapts to changing load and content popularity, converging to efficient policies that outperform static baselines. The proposed hybrid model has been tested against variants lacking MEC, edge caching, or the RL-based controller to isolate each component’s contributions. The paper concludes with a discussion of practical considerations, limitations, and future directions for intelligent CDN networking at the edge. Full article
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (2nd Edition))
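
The routing agent's learning rule is ordinary tabular Q-learning. A minimal sketch follows; the state discretization, reward shaping, and the single hard-coded transition are placeholder assumptions rather than the paper's exact design.

```python
import random
from collections import defaultdict

ACTIONS = ["edge_cache", "mec_server", "origin"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(lambda: [0.0] * len(ACTIONS))

def choose(state):
    """Epsilon-greedy choice among the three service nodes."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])

# One interaction with hypothetical quantities from a simulator:
# state = (load bucket, content-popularity bucket), reward = -latency_ms,
# so the agent learns to route hot content toward nearby caches.
s = ("low_load", "hot")
a = choose(s)
update(s, a, reward=-12.0, next_state=("low_load", "hot"))
```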

21 pages, 3148 KB  
Article
A Novel Multimodal Hand Gesture Recognition Model Using Combined Approach of Inter-Frame Motion and Shared Attention Weights
by Xiaorui Zhang, Shuaitong Li, Xianglong Zeng, Peisen Lu and Wei Sun
Computers 2025, 14(10), 432; https://doi.org/10.3390/computers14100432 - 13 Oct 2025
Viewed by 321
Abstract
Dynamic hand gesture recognition based on computer vision aims at enabling computers to understand the semantic meaning conveyed by hand gestures in videos. Existing methods predominantly rely on spatiotemporal attention mechanisms to extract hand motion features in a large spatiotemporal scope. However, they cannot accurately focus on the moving hand region for hand feature extraction because frame sequences contain a substantial amount of redundant information. Although multimodal techniques can extract a wider variety of hand features, they are less successful at utilizing information interactions between various modalities for accurate feature extraction. To address these challenges, this study proposes a multimodal hand gesture recognition model combining inter-frame motion and shared attention weights. By jointly using an inter-frame motion attention (IFMA) mechanism and adaptive down-sampling (ADS), the spatiotemporal search scope can be effectively narrowed down to the hand-related regions, based on the characteristic that hands exhibit obvious movement. The proposed inter-modal attention weight (IMAW) loss enables the RGB and Depth modalities to share attention, allowing each to adjust its distribution based on the other. Experimental results on the EgoGesture, NVGesture, and Jester datasets demonstrate the superiority of our proposed model over existing state-of-the-art methods in terms of hand motion feature extraction and hand gesture recognition accuracy. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

30 pages, 2870 KB  
Article
CourseEvalAI: Rubric-Guided Framework for Transparent and Consistent Evaluation of Large Language Models
by Catalin Anghel, Marian Viorel Craciun, Emilia Pecheanu, Adina Cocu, Andreea Alexandra Anghel, Paul Iacobescu, Calina Maier, Constantin Adrian Andrei, Cristian Scheau and Serban Dragosloveanu
Computers 2025, 14(10), 431; https://doi.org/10.3390/computers14100431 - 11 Oct 2025
Viewed by 363
Abstract
Background and objectives: Large language models (LLMs) show promise in automating open-ended evaluation tasks, yet their reliability in rubric-based assessment remains uncertain. Variability in scoring, feedback, and rubric adherence raises concerns about transparency and pedagogical validity in educational contexts. This study introduces CourseEvalAI, a framework designed to enhance consistency and fidelity in rubric-guided evaluation by fine-tuning a general-purpose LLM with authentic university-level instructional content. Methods: The framework employs supervised fine-tuning with Low-Rank Adaptation (LoRA) on rubric-annotated answers and explanations drawn from undergraduate computer science exams. Responses generated by both the base and fine-tuned models were independently evaluated by two human raters and two LLM judges, applying dual-layer rubrics for answers (technical or argumentative) and explanations. Inter-rater reliability was reported as the intraclass correlation coefficient (ICC(2,1)), Krippendorff’s α, and quadratic-weighted Cohen’s κ (QWK), and statistical analyses included Welch’s t-tests with Holm–Bonferroni correction, Hedges’ g with bootstrap confidence intervals, and Levene’s tests. All responses, scores, feedback, and metadata were stored in a Neo4j graph database for structured exploration. Results: The fine-tuned model consistently outperformed the base version across all rubric dimensions, achieving higher scores for both answers and explanations. After multiple-testing correction, only the Technical Answer contrast judged by GPT-4 (Generative Pre-trained Transformer) remains statistically significant; other contrasts show positive trends without passing the adjusted threshold, and no additional significance is claimed for explanation-level results. Variance in scoring decreased, inter-model agreement increased, and evaluator feedback for fine-tuned outputs contained fewer vague or critical remarks, indicating stronger rubric alignment and greater pedagogical coherence. Inter-rater reliability analyses indicated moderate human–human agreement and weaker alignment of the LLM judges to the human mean. Originality: CourseEvalAI integrates rubric-guided fine-tuning, dual-layer evaluation, and graph-based storage into a unified framework. This combination provides a replicable and interpretable methodology that enhances the consistency, transparency, and pedagogical value of LLM-based evaluators in higher education and beyond. Full article
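
The LoRA fine-tuning step can be sketched with the Hugging Face peft library. The base model name, rank, and target modules below are illustrative assumptions, not the paper's reported configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"          # placeholder base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Low-Rank Adaptation: freeze the base weights and train small rank-r
# update matrices injected into the attention projections.
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed; depends on architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # typically well under 1% of weights

# Supervised fine-tuning then proceeds on rubric-annotated
# (answer, explanation, score, feedback) pairs with a standard Trainer.
```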

54 pages, 6893 KB  
Article
Automated OSINT Techniques for Digital Asset Discovery and Cyber Risk Assessment
by Tetiana Babenko, Kateryna Kolesnikova, Olga Abramkina and Yelizaveta Vitulyova
Computers 2025, 14(10), 430; https://doi.org/10.3390/computers14100430 - 9 Oct 2025
Viewed by 422
Abstract
Cyber threats are becoming increasingly sophisticated, especially in distributed infrastructures where systems are deeply interconnected. To address this, we developed a framework that automates how organizations discover their digital assets and assess which ones are the most at risk. The approach integrates diverse public information sources, including WHOIS records, DNS data, and SSL certificates, into a unified analysis pipeline without relying on intrusive probing. For risk scoring we applied Gradient Boosted Decision Trees, which proved more robust with messy real-world data than other models we tested. DBSCAN clustering was used to detect unusual exposure patterns across assets. In validation on organizational data, the framework achieved 93.3% accuracy in detecting known vulnerabilities and an F1-score of 0.92 for asset classification. More importantly, security teams spent about 58% less time on manual triage and false alarm handling. The system also demonstrated reasonable scalability, indicating that automated OSINT analysis can provide a practical and resource-efficient way for organizations to maintain visibility over their attack surface. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
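
A minimal sketch of the two modeling stages, assuming discovered assets have already been reduced to a numeric feature matrix (e.g., certificate age, open-port counts, DNS flags); the features, labels, and thresholds here are synthetic stand-ins for the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# X: one row per discovered asset; y: 1 if a vulnerability was later
# confirmed (historical labels). Both are random stand-ins here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = rng.integers(0, 2, 500)

# Stage 1: supervised risk scoring with gradient-boosted decision trees.
risk_model = GradientBoostingClassifier().fit(X, y)
risk_scores = risk_model.predict_proba(X)[:, 1]

# Stage 2: unsupervised exposure outliers; DBSCAN labels sparse,
# off-pattern assets as -1 (noise), which get flagged for triage.
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(
    StandardScaler().fit_transform(X))
flagged = np.where((labels == -1) | (risk_scores > 0.8))[0]
print(f"{flagged.size} assets queued for analyst review")
```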

23 pages, 2198 KB  
Review
Security Requirements Engineering: A Review and Analysis
by Aftab Alam Janisar, Ayman Meidan, Khairul Shafee bin Kalid, Abdul Rehman Gilal and Aliza Bt Sarlan
Computers 2025, 14(10), 429; https://doi.org/10.3390/computers14100429 - 9 Oct 2025
Viewed by 345
Abstract
Security is crucial, especially as software systems become increasingly complex. Both practitioners and researchers advocate for the early integration of security requirements (SR) into the Software Development Life Cycle (SDLC). However, ensuring the validation and assurance of security requirements is still a major challenge in developing secure systems. To investigate this issue, a two-phase study was carried out. In the first phase, a literature review was conducted on 45 relevant studies related to Security Requirements Engineering (SRE) and Security Requirements Assurance (SRA). Nine SRE techniques were examined across multiple parameters, including major categories, requirements engineering stages, project scale, and the integration of standards involving 17 distinct activities. In the second phase, an empirical survey of 58 industry professionals revealed a clear disparity between the understanding of SRE and the implementation of SRA. While statistical analyses (ANOVA, regression, correlation, Kruskal–Wallis) confirmed a moderate grasp of SRE practices, SRA remains poorly understood and underapplied. Unlike prior studies focused on isolated models, this research combines practical insights with comparative analysis, highlighting the systemic neglect of SRA in current practices. The findings indicate the need for stronger security assurance in early development phases, offering targeted, data-driven recommendations for bridging this gap. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)

21 pages, 1410 KB  
Article
Measure Student Aptitude in Learning Programming in Higher Education—A Data Analysis
by João Pires, Ana Rosa Borges, Jorge Bernardino, Fernanda Brito Correia and Anabela Gomes
Computers 2025, 14(10), 428; https://doi.org/10.3390/computers14100428 - 9 Oct 2025
Viewed by 265
Abstract
Analyzing student performance in Introductory Programming courses in Higher Education is critical for early intervention and improved learning outcomes. This study explores the potential of a cognitive test to predict student success in an Introductory Programming course by analyzing data from 180 students, including Freshmen and Repeating Students, using descriptive statistics, correlation analysis, Categorical Principal Component Analysis, and Item Response Theory model analysis. Analysis of the cognitive test revealed that some reasoning questions presented a statistically significant correlation, albeit of weak magnitude, with the course grades, particularly for freshman students. The development of models for predicting student performance in Introductory Programming using cognitive tests is also explored. This study found that reasoning skills, namely logical reasoning and sequence completion, were more predictive of success in programming than general ability. The study also showed that a Programming Cognitive Test can be a useful tool for identifying students at risk of failure, particularly freshman students. Full article

20 pages, 1205 KB  
Review
LLMs for Commit Messages: A Survey and an Agent-Based Evaluation Protocol on CommitBench
by Mohamed Mehdi Trigui and Wasfi G. Al-Khatib
Computers 2025, 14(10), 427; https://doi.org/10.3390/computers14100427 - 7 Oct 2025
Viewed by 469
Abstract
Commit messages are vital for traceability, maintenance, and onboarding in modern software projects, yet their quality is frequently inconsistent. Recent large language models (LLMs) can transform code diffs into natural language summaries, offering a path to more consistent and informative commit messages. This paper makes two contributions: (i) it provides a systematic survey of automated commit message generation with LLMs, critically comparing prompt-only, fine-tuned, and retrieval-augmented approaches; and (ii) it specifies a transparent, agent-based evaluation blueprint centered on CommitBench. Unlike prior reviews, we include a detailed dataset audit, preprocessing impacts, evaluation metrics, and error taxonomy. The protocol defines dataset usage and splits, prompting and context settings, scoring and selection rules, and reporting guidelines (results by project, language, and commit type), along with an error taxonomy to guide qualitative analysis. Importantly, this work emphasizes methodology and design rather than presenting new empirical benchmarking results. The blueprint is intended to support reproducibility and comparability in future studies. Full article

32 pages, 12099 KB  
Article
Hardware–Software System for Biomass Slow Pyrolysis: Characterization of Solid Yield via Optimization Algorithms
by Ismael Urbina-Salas, David Granados-Lieberman, Juan Pablo Amezquita-Sanchez, Martin Valtierra-Rodriguez and David Aaron Rodriguez-Alejandro
Computers 2025, 14(10), 426; https://doi.org/10.3390/computers14100426 - 5 Oct 2025
Viewed by 399
Abstract
Biofuels represent a sustainable alternative that supports global energy development without compromising environmental balance. This work introduces a novel hardware–software platform for the experimental characterization of biomass solid yield during the slow pyrolysis process, integrating physical experimentation with advanced computational modeling. The hardware consists of a custom-designed pyrolyzer equipped with temperature and weight sensors, a dedicated control unit, and a user-friendly interface. On the software side, a two-step kinetic model was implemented and coupled with three optimization algorithms, i.e., Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Nelder–Mead (N-M), to estimate the Arrhenius kinetic parameters governing biomass degradation. Slow pyrolysis experiments were performed on wheat straw (WS), pruning waste (PW), and biosolids (BS) at a heating rate of 20 °C/min within 250–500 °C, with a 120 min residence time favoring biochar production. The comparative analysis shows that the N-M method achieved the highest accuracy (100% fit in estimating solid yield), with a convergence time of 4.282 min, while GA converged faster (1.675 min), with a fit of 99.972%, and PSO had the slowest convergence time at 6.409 min and a fit of 99.943%. These results highlight both the versatility of the system and the potential of optimization techniques to provide accurate predictive models of biomass decomposition as a function of time and temperature. Overall, the main contributions of this work are the development of a low-cost, custom MATLAB-based experimental platform and the tailored implementation of optimization algorithms for kinetic parameter estimation across different biomasses, together providing a robust framework for biomass pyrolysis characterization. Full article
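
The kinetic-parameter estimation can be sketched with SciPy's Nelder–Mead: integrate a kinetic model with an Arrhenius rate and minimize the misfit to the measured solid-yield curve. The single-step model and synthetic data below are simplified stand-ins for the paper's two-step scheme and logged sensor data.

```python
import numpy as np
from scipy.optimize import minimize

R = 8.314                                   # J/(mol K)
t = np.linspace(0.0, 120.0, 241)            # minutes
T = np.minimum(523.0 + 20.0 * t, 773.0)     # 20 C/min ramp to 500 C, then hold

def solid_yield(params):
    """First-order decay toward a residual char fraction with an
    Arrhenius rate k = A * exp(-E / (R T)); explicit Euler in time."""
    logA, E, y_char = params                # E in J/mol
    k = np.exp(logA) * np.exp(-E / (R * T))
    y = np.empty_like(t)
    y[0] = 1.0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        y[i] = y[i - 1] - dt * k[i - 1] * (y[i - 1] - y_char)
    return y

# Synthetic "measurement" from known parameters plus noise, standing in
# for the platform's logged weight signal.
rng = np.random.default_rng(1)
y_meas = solid_yield([12.0, 90_000.0, 0.35]) + rng.normal(0, 0.005, t.size)

sse = lambda p: np.sum((solid_yield(p) - y_meas) ** 2)
fit = minimize(sse, x0=[10.0, 80_000.0, 0.5], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 5000})
print(fit.x)   # should recover roughly [12.0, 9.0e4, 0.35]
```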

25 pages, 4460 KB  
Systematic Review
Rethinking Blockchain Governance with AI: The VOPPA Framework
by Catalin Daniel Morar, Daniela Elena Popescu, Ovidiu Constantin Novac and David Ghiurău
Computers 2025, 14(10), 425; https://doi.org/10.3390/computers14100425 - 4 Oct 2025
Viewed by 583
Abstract
Blockchain governance has become central to the performance and resilience of decentralized systems, yet current models face recurring issues of participation, coordination, and adaptability. This article offers a structured analysis of governance frameworks and highlights their limitations through recent high-impact case studies. It then examines how artificial intelligence (AI) is being integrated into governance processes, ranging from proposal summarization and anomaly detection to autonomous agent-based voting. In response to existing gaps, this paper proposes the Voting Via Parallel Predictive Agents (VOPPA) framework, a multi-agent architecture aimed at enabling predictive, diverse, and decentralized decision-making. Strengthening blockchain governance will require not just decentralization but also intelligent, adaptable, and accountable decision-making systems. Full article

24 pages, 637 KB  
Article
ZDBERTa: Advancing Zero-Day Cyberattack Detection in Internet of Vehicle with Zero-Shot Learning
by Amal Mirza, Sobia Arshad, Muhammad Haroon Yousaf and Muhammad Awais Azam
Computers 2025, 14(10), 424; https://doi.org/10.3390/computers14100424 - 3 Oct 2025
Viewed by 541
Abstract
The Internet of Vehicles (IoV) is becoming increasingly vulnerable to zero-day (ZD) cyberattacks, which often bypass conventional intrusion detection systems. To mitigate this challenge, this study proposes the Zero-Day Bidirectional Encoder Representations from Transformers approach (ZDBERTa), a zero-shot learning (ZSL)-based framework for ZD attack detection, evaluated on the CICIoV2024 dataset. Unlike conventional AI models, ZSL enables the classification of attack types not encountered during the training phase. Two dataset variants were formed: Variant 1, created through synthetic traffic generation using a mixture of pattern-based, crossover, and mutation techniques, and Variant 2, augmented with a Generative Adversarial Network (GAN). To replicate realistic zero-day conditions, denial-of-service (DoS) attacks were omitted during training and introduced only during testing. The proposed ZDBERTa incorporates a Byte-Pair Encoding (BPE) tokenizer, a multi-layer transformer encoder, and a classification head for prediction, enabling the model to capture semantic patterns and identify previously unseen threats. The experimental results demonstrate that ZDBERTa achieves 86.677% accuracy on Variant 1, highlighting the complexity of zero-day detection, while performance improves significantly to 99.315% on Variant 2, underscoring the effectiveness of GAN-based augmentation. To the best of our knowledge, this is the first study to explore ZD detection within CICIoV2024, contributing a novel direction toward resilient IoV cybersecurity. Full article
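A minimal sketch of the pipeline named above (BPE tokens → multi-layer transformer encoder → classification head) is shown below in PyTorch. All dimensions are illustrative, and the random token IDs stand in for a trained BPE tokenizer over IoV traffic records; none of this reproduces the paper's settings.

```python
# Hedged sketch of a ZDBERTa-style classifier: token embeddings feed a
# multi-layer transformer encoder whose pooled output goes to a linear
# classification head. Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ZDClassifier(nn.Module):
    def __init__(self, vocab_size=8000, d_model=128, nhead=4,
                 num_layers=4, num_classes=6, max_len=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, ids):                       # ids: (batch, seq_len)
        x = self.embed(ids) + self.pos[:, :ids.size(1)]
        x = self.encoder(x)
        return self.head(x.mean(dim=1))           # mean-pool, then classify

# Toy forward pass with dummy token IDs standing in for BPE output:
model = ZDClassifier()
logits = model(torch.randint(0, 8000, (2, 64)))
print(logits.shape)  # torch.Size([2, 6])
```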

33 pages, 9908 KB  
Article
Mapping the Chemical Space of Antiviral Peptides with Half-Space Proximal and Metadata Networks Through Interactive Data Mining
by Daniela de Llano García, Yovani Marrero-Ponce, Guillermin Agüero-Chapin, Hortensia Rodríguez, Francesc J. Ferri, Edgar A. Márquez, José R. Mora, Felix Martinez-Rios and Yunierkis Pérez-Castillo
Computers 2025, 14(10), 423; https://doi.org/10.3390/computers14100423 - 3 Oct 2025
Viewed by 1335
Abstract
Antiviral peptides (AVPs) are promising therapeutic candidates, yet the rapid growth of sequence data and the field’s emphasis on predictors have left a gap: the lack of an integrated view linking peptide chemistry with biological context. Here, we map the AVP landscape through interactive data mining using Half-Space Proximal Networks (HSPNs) and Metadata Networks (MNs) in the StarPep toolbox. HSPNs minimize edges and avoid fixed thresholds, reducing computational cost while enabling high-resolution analysis. A threshold-free HSPN resolved eight chemically and biologically distinct communities, while MNs contextualized AVPs by source, function, and target, revealing structural–functional relationships. To capture diversity compactly, we applied centrality-guided scaffold extraction with redundancy removal (90–50% identity), producing four representative subsets suitable for modeling and similarity searches. Alignment-free motif discovery yielded 33 validated motifs, including 10 overlapping with reported AVP signatures and 23 apparently novel. Motifs displayed category-specific enrichment across antimicrobial classes, and sequences carrying multiple motifs (≥4–5) consistently showed higher predicted antiviral probabilities. Beyond computational insights, scaffolds provide representative “entry points” into AVP chemical space, while motifs serve as modular building blocks for rational design. Together, these resources provide an integrated framework that may inform AVP discovery and support scaffold- and motif-guided therapeutic design. Full article
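For readers unfamiliar with HSPNs, the snippet below sketches the half-space proximal test as it is commonly described: each point links to its nearest remaining candidate and then discards every candidate lying closer to that neighbor than to itself, so no distance threshold is ever chosen. The Euclidean toy data stands in for the peptide descriptors and the distance actually used in the paper.

```python
# Hedged sketch of the half-space proximal (HSP) test behind HSPNs, over a
# generic pairwise distance; the paper's peptide representation and metric
# are not reproduced here.
import numpy as np

def hsp_neighbors(u, points, dist):
    """Return indices kept as HSP neighbors of point index u."""
    candidates = sorted((i for i in range(len(points)) if i != u),
                        key=lambda i: dist(points[u], points[i]))
    neighbors = []
    while candidates:
        v = candidates.pop(0)          # nearest remaining candidate
        neighbors.append(v)
        # discard candidates in v's half-space (closer to v than to u)
        candidates = [w for w in candidates
                      if dist(points[v], points[w]) >= dist(points[u], points[w])]
    return neighbors

# Toy usage with Euclidean distance over random 2-D points:
rng = np.random.default_rng(0)
pts = rng.random((20, 2))
euclid = lambda a, b: float(np.linalg.norm(a - b))
edges = {(u, v) for u in range(len(pts)) for v in hsp_neighbors(u, pts, euclid)}
print(len(edges), "directed edges built without any distance threshold")
```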

35 pages, 4926 KB  
Article
Hybrid MOCPO–AGE-MOEA for Efficient Bi-Objective Constrained Minimum Spanning Trees
by Dana Faiq Abd, Haval Mohammed Sidqi and Omed Hasan Ahmed
Computers 2025, 14(10), 422; https://doi.org/10.3390/computers14100422 - 2 Oct 2025
Viewed by 374
Abstract
The constrained bi-objective Minimum Spanning Tree (MST) problem is a fundamental challenge in network design, as it simultaneously requires minimizing both total edge weight and maximum hop distance under strict feasibility limits; however, most existing algorithms tend to emphasize one objective over the other, resulting in imbalanced solutions, limited Pareto fronts, or poor scalability on larger instances. To overcome these shortcomings, this study introduces a Hybrid MOCPO–AGE-MOEA algorithm that strategically combines the exploratory strength of Multi-Objective Crested Porcupines Optimization (MOCPO) with the exploitative refinement of the Adaptive Geometry-based Evolutionary Algorithm (AGE-MOEA), while a Kruskal-based repair operator is integrated to strictly enforce feasibility and preserve solution diversity. In extensive experiments on Euclidean graphs with 11–100 nodes, the hybrid consistently outperforms five state-of-the-art baselines: it generates Pareto fronts up to four times larger, achieves nearly 20% reductions in hop counts, and delivers order-of-magnitude runtime improvements with near-linear scalability. Importantly, the results reveal that allocating 85% of offspring to MOCPO exploration and 15% to AGE-MOEA exploitation yields the best balance between diversity, efficiency, and feasibility. The Hybrid MOCPO–AGE-MOEA therefore not only addresses critical gaps in constrained MST optimization but also establishes itself as a practical and scalable solution with strong applicability to domains such as software-defined networking, wireless mesh systems, and adaptive routing, where both computational efficiency and solution diversity are paramount. Full article
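As an illustration of the Kruskal-based repair idea mentioned above, the sketch below keeps whatever acyclic edges a candidate solution already has and completes it into a spanning tree in Kruskal order using a union-find structure. These repair rules are a plausible reading of the abstract, not the authors' exact operator, and the hop constraint is omitted for brevity.

```python
# Hedged sketch of a Kruskal-style repair operator: salvage the feasible
# part of a candidate edge set, then complete it into a spanning tree.
class DSU:
    """Union-find with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def repair(n, candidate_edges, all_edges):
    """Edges are (weight, u, v) tuples; returns a spanning tree edge list."""
    dsu, tree = DSU(n), []
    # 1) keep candidate edges that do not create a cycle
    for w, u, v in candidate_edges:
        if dsu.union(u, v):
            tree.append((w, u, v))
    # 2) fill the remaining gaps with the cheapest feasible edges
    for w, u, v in sorted(all_edges):
        if len(tree) == n - 1:
            break
        if dsu.union(u, v):
            tree.append((w, u, v))
    return tree  # spanning tree whenever the graph is connected

# Toy usage on a 4-node graph:
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]
print(repair(4, [(4, 0, 3)], edges))  # keeps (4,0,3), adds (1,0,1), (2,1,2)
```

The reported 85/15 exploration–exploitation split could then be realized simply by routing 85% of each offspring batch through MOCPO variation and the rest through AGE-MOEA refinement before repair.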

22 pages, 6620 KB  
Article
A Study to Determine the Feasibility of Combining Mobile Augmented Reality and an Automatic Pill Box to Support Older Adults’ Medication Adherence
by Osslan Osiris Vergara-Villegas, Vianey Guadalupe Cruz-Sánchez, Abel Alejandro Rubín-Alvarado, Saulo Abraham Gante-Díaz, Jonathan Axel Cruz-Vazquez, Brandon Areyzaga-Mendizábal, Jesús Yaljá Montiel-Pérez, Juan Humberto Sossa-Azuela, Iliac Huerta-Trujillo and Rodolfo Romero-Herrera
Computers 2025, 14(10), 421; https://doi.org/10.3390/computers14100421 - 2 Oct 2025
Viewed by 1064
Abstract
Because of the increased prevalence of chronic diseases, older adults frequently take many medications. However, adhering to a medication treatment tends to be difficult, and poor adherence can cause health problems or even patient death. This paper describes the methodology used in developing a mobile augmented reality (MAR) pill box that supports patients in adhering to their medication treatment. First, we explain the design and construction of the automatic pill box, which includes alarms and uses QR codes recognized by the MAR system to provide medication information. Then, we explain the development of the MAR system. A preliminary survey with 30 participants assessed the feasibility of the MAR app; one hundred older adults then participated in the main survey. After one week of using the proposal, each patient answered a survey regarding its functionality. The results revealed that 88% of the participants strongly agreed, and 11% agreed, that the app supports adherence to medical treatment. Finally, we compared the time elapsed between the scheduled time for taking each medication and the time it was actually consumed. The results from 189 records showed that, using the proposal, 63.5% of the patients took their medication with a maximum delay of 4.5 min. The alarm always sounded at the scheduled time, and the QR code displayed always corresponded to the medication to be consumed. Full article
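The QR-driven lookup and the delay metric lend themselves to a compact sketch, given below using OpenCV's QRCodeDetector. The payload format (a plain medication identifier) and the timestamp handling are assumptions rather than the authors' implementation.

```python
# Hedged sketch: decode a pill-box QR code and measure intake delay.
# The QR payload format and schedule representation are assumptions.
from datetime import datetime
import cv2

def read_medication_qr(image_path: str) -> str:
    """Decode the QR payload, assumed here to be a medication identifier."""
    img = cv2.imread(image_path)
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
    if not data:
        raise ValueError("no QR code found in image")
    return data

def intake_delay_minutes(scheduled: str, taken: str) -> float:
    """Delay between scheduled and actual intake times, in minutes."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(taken, fmt) - datetime.strptime(scheduled, fmt)
    return delta.total_seconds() / 60.0

# Toy usage mirroring the reported metric (delays up to 4.5 min):
print(intake_delay_minutes("2025-10-02 08:00", "2025-10-02 08:04"))  # 4.0
```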

21 pages, 2222 KB  
Article
Machine Learning-Driven Security and Privacy Analysis of a Dummy-ABAC Model for Cloud Computing
by Baby Marina, Irfana Memon, Fizza Abbas Alvi, Ubaidullah Rajput and Mairaj Nabi
Computers 2025, 14(10), 420; https://doi.org/10.3390/computers14100420 - 2 Oct 2025
Viewed by 347
Abstract
The Attribute-Based Access Control (ABAC) model makes access control decisions based on subject, object (resource), and contextual attributes. However, the use of sensitive attributes in access control decisions poses many security and privacy challenges, particularly in cloud environments where third parties are involved. To address this shortcoming, we present a novel privacy-preserving Dummy-ABAC model that obfuscates real attributes with dummy attributes before transmission to the cloud server. In the proposed model, only dummy attributes are stored in the cloud database, whereas real attributes and mapping tokens are stored in a local machine database. Only dummy attributes are used to evaluate access requests in the cloud; real data are retrieved in a post-decision mechanism using secure tokens. The security of the proposed model was assessed using simulated threat scenarios, including attribute inference, policy injection, and reverse mapping attacks. Experimental evaluation using machine learning classifiers (Decision Tree (DT) and Random Forest (RF)) demonstrated that inference accuracy dropped from ~0.65 on real attributes to ~0.25 on dummy attributes, confirming improved resistance to inference attacks. Furthermore, the model rejects malformed and unauthorized policies. Performance analysis of dummy generation, token generation, encoding, and nearest-neighbor search demonstrated minimal latency in both local and cloud environments. Overall, the proposed model ensures efficient, secure, and privacy-preserving access control in cloud environments. Full article
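A minimal sketch of the dummy-attribute flow is given below: real attribute values never leave the host, only random dummies are sent to the cloud, and a token lets the post-decision step recover the real values from the local mapping. The token scheme and data layout are assumptions, not the paper's exact design.

```python
# Hedged sketch of Dummy-ABAC attribute obfuscation. Only the dummies in
# cloud_view would be sent to the cloud; the token->mapping table stays on
# the local machine. Token format and storage are assumed, not the paper's.
import secrets

local_db = {}   # local machine only: token -> {dummy_value: real_value}

def obfuscate(real_attrs: dict) -> tuple[str, dict]:
    """Return (mapping token, dummy attributes safe to send to the cloud)."""
    token = secrets.token_hex(16)
    dummy_attrs, mapping = {}, {}
    for key, real_value in real_attrs.items():
        dummy = f"dummy_{secrets.token_hex(4)}"   # random, unlinkable value
        dummy_attrs[key] = dummy
        mapping[dummy] = real_value
    local_db[token] = mapping
    return token, dummy_attrs

def resolve(token: str, dummy_value: str):
    """Post-decision step: recover the real value locally via the token."""
    return local_db[token][dummy_value]

token, cloud_view = obfuscate({"role": "nurse", "department": "oncology"})
print(cloud_view)                          # only dummies leave the host
print(resolve(token, cloud_view["role"]))  # -> "nurse"
```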
