Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and human–computer interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Bridging the Gap: Enhancing BIM Education for Sustainable Design Through Integrated Curriculum and Student Perception Analysis
Computers 2025, 14(11), 463; https://doi.org/10.3390/computers14110463 - 25 Oct 2025
Abstract
Building Information Modeling (BIM) is a transformative tool in Sustainable Design (SD), providing measurable benefits for efficiency, collaboration, and performance in architectural, engineering, and construction (AEC) practices. Despite its growing presence in academic curricula, a gap persists between students’ recognition of BIM’s sustainability potential and their confidence or ability to apply these concepts in real-world practice. This study examines students’ understanding and perceptions of BIM and Sustainable Design education, offering insights for enhancing curriculum integration and pedagogical strategies. The objectives are to: (1) assess students’ current understanding of BIM and Sustainable Design; (2) identify gaps and misconceptions in applying BIM to sustainability; (3) evaluate the effectiveness of existing teaching methods and curricula to inform future improvements; and (4) explore the alignment between students’ theoretical knowledge and practical abilities in using BIM for Sustainable Design. The research methodology includes a comprehensive literature review and a survey of 213 students from architecture and construction management programs. Results reveal that while most students recognize the value of BIM for early-stage sustainable design analysis, many lack confidence in their practical skills, highlighting a perception–practice gap. The paper examines current educational practices, identifies curriculum shortcomings, and proposes strategies, such as integrated, hands-on learning experiences, to better align academic instruction with industry needs. Distinct from previous studies that focused primarily on single-discipline or software-based training, this research provides an empirical, cross-program analysis of students’ perception–practice gaps and offers curriculum-level insights for sustainability-driven practice. These findings provide practical recommendations for enhancing BIM and sustainability education, thereby better preparing students to meet the demands of the evolving AEC sector.
Full article
(This article belongs to the Special Issue The Digital Transformation of Education: Trends, Technologies, and Responsible Innovation)
Open Access Article
Decision Support for Cargo Pickup and Delivery Under Uncertainty: A Combined Agent-Based Simulation and Optimization Approach
by Renan Paula Ramos Moreno, Rui Borges Lopes, Ana Luísa Ramos, José Vasconcelos Ferreira, Diogo Correia and Igor Eduardo Santos de Melo
Computers 2025, 14(11), 462; https://doi.org/10.3390/computers14110462 - 25 Oct 2025
Abstract
This article introduces an innovative hybrid methodology that integrates deterministic Mixed-Integer Linear Programming (MILP) optimization with stochastic Agent-Based Simulation (ABS) to address the Pickup and Delivery Problem with Time Windows (PDP-TW). The approach is applied to real-world operational data from a luggage-handling company in Lisbon, covering 158 service requests from January 2025. The MILP model generates optimal routing and task allocation plans, which are subsequently stress-tested under realistic uncertainties, such as variability in travel and service times, using ABS implemented in AnyLogic. The framework is iterative: violations of temporal or capacity constraints identified during the simulation are fed back into the optimization model, enabling successive adjustments until robust and feasible solutions are achieved for real-world scenarios. Additionally, the study incorporates transshipment scenarios, evaluating the impact of using warehouses as temporary hubs for order redistribution. Results include a comparative analysis between deterministic and stochastic models regarding operational efficiency, time window adherence, reduction in travel distances, and potential decreases in CO2 emissions. This work contributes to the literature by proposing a practical and robust decision-support framework aligned with contemporary demands for sustainability and efficiency in urban logistics, overcoming the limitations of purely deterministic approaches by explicitly reflecting real-world uncertainties.
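The iterative optimize-then-simulate feedback loop described above can be sketched in a few lines; the greedy stand-in for the MILP stage, the lognormal noise model, and the buffer-inflation rule are all illustrative assumptions, not the authors' implementation:

```python
import random

random.seed(7)

def optimize(requests, buffer):
    # Toy stand-in for the MILP stage: schedule each stop as late as its
    # time window allows, minus a safety buffer fed back from simulation.
    return [dict(r, planned=r["tw_end"] - buffer)
            for r in sorted(requests, key=lambda r: r["tw_start"])]

def simulate(plan, trials=500):
    # Toy stand-in for the agent-based simulation: perturb arrivals with
    # lognormal noise and report the fraction of time-window violations.
    violations = 0
    for _ in range(trials):
        for stop in plan:
            actual = stop["planned"] + (random.lognormvariate(0, 0.4) - 1.0)
            violations += actual > stop["tw_end"]
    return violations / (trials * len(plan))

requests = [{"tw_start": s, "tw_end": s + 15} for s in (0, 10, 25, 40)]

buffer = 0.0
for it in range(10):  # feedback loop: re-optimize until the plan is robust
    plan = optimize(requests, buffer)
    rate = simulate(plan)
    print(f"iteration {it}: buffer={buffer:.1f}, violation rate={rate:.1%}")
    if rate < 0.05:
        break
    buffer += 1.0
```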
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
Open Access Article
Approaching Challenges in Representations of Date–Time Ambiguities
by Amer Harb, Kamilla Klonowska and Daniel Einarson
Computers 2025, 14(11), 461; https://doi.org/10.3390/computers14110461 - 24 Oct 2025
Abstract
Irregularities in Earth's rotation, changes in calendar systems, and similar phenomena require time to be represented accordingly. Date–time handling in programming involves specific challenges, including conflicts between calendars, time zone discrepancies, daylight saving time, and leap second adjustments, issues that other data types such as numbers and text do not encounter. This article identifies these challenges and investigates existing approaches to date–time representation. Limitations in current systems, including how leap seconds, time zone variations, and inconsistent calendar representations complicate date–time handling, are examined. Inconsistent date–time representations pose significant challenges, especially when considering the interplay of leap seconds and time zone shifts. This study highlights the need for a new approach to date–time data types that addresses these problems effectively. The article reviews existing date–time data types and explores their shortcomings, proposing a theoretical framework for a more robust solution. The study suggests that an improved date–time data type could enhance time resolution, support leap seconds, and offer greater flexibility in handling time zone shifts. Such a solution would provide a more reliable alternative to current systems. By addressing issues like leap second handling and time zone shifts, the proposed framework demonstrates the feasibility of a new date–time data type, with potential for broader adoption in future systems.
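A concrete instance of the ambiguities discussed: when daylight saving time ends, one wall-clock time denotes two distinct instants, which Python's standard library disambiguates only through the fold attribute (a minimal illustration of the problem, not the proposed data type):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

tz = ZoneInfo("America/New_York")

# 01:30 on 2 Nov 2025 happens twice: DST ends at 02:00 and clocks fall back.
first = datetime(2025, 11, 2, 1, 30, tzinfo=tz)           # fold=0: earlier instant (EDT, UTC-4)
second = datetime(2025, 11, 2, 1, 30, fold=1, tzinfo=tz)  # fold=1: later instant (EST, UTC-5)

print(first.utcoffset(), second.utcoffset())   # different offsets, same wall-clock time
print(second.timestamp() - first.timestamp())  # 3600.0: the two instants are an hour apart

# Leap seconds are worse still: POSIX timestamps cannot represent 23:59:60 at all.
```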
Full article
Open Access Article
Emotion in Words: The Role of Ed Sheeran and Sia’s Lyrics on the Musical Experience
by Catarina Travanca, Mónica Cruz and Abílio Oliveira
Computers 2025, 14(11), 460; https://doi.org/10.3390/computers14110460 - 24 Oct 2025
Abstract
Music plays an increasingly vital role in modern society, becoming a fundamental part of everyday life. Beyond entertainment, it contributes to emotional well-being by helping individuals express their feelings, process emotions, and find comfort during different life moments. This study explores the emotional impact of Ed Sheeran’s and Sia’s lyrics on listeners. Using an exploratory approach, it applies a text mining tool to extract data, identify key dimensions, and compare thematic elements across both artists’ work. The analysis reveals distinct emotional patterns and thematic contrasts, offering insight into how their lyrics resonate with audiences on a deeper level. These findings enhance our understanding of the emotional power of contemporary music and highlight how lyrical content can shape listeners’ emotional experiences. Moreover, the study demonstrates the value of text mining as a method for examining popular music, providing a new lens through which to explore the connection between music and emotion.
Full article
(This article belongs to the Special Issue Recent Advances in Data Mining: Methods, Trends, and Emerging Applications)
Open Access Article
Efficient Image Restoration for Autonomous Vehicles and Traffic Systems: A Knowledge Distillation Approach to Enhancing Environmental Perception
by Yongheng Zhang
Computers 2025, 14(11), 459; https://doi.org/10.3390/computers14110459 - 24 Oct 2025
Abstract
Image restoration tasks such as deraining, deblurring, and dehazing are crucial for enhancing the environmental perception of autonomous vehicles and traffic systems, particularly for tasks like vehicle detection, pedestrian detection, and lane line identification. While transformer-based models excel in these tasks, their prohibitive computational complexity hinders real-world deployment on resource-constrained platforms. To bridge this gap, this paper introduces a novel Soft Knowledge Distillation (SKD) framework, designed specifically for creating highly efficient yet powerful image restoration models. Our core innovation is twofold: first, we propose a Multi-dimensional Cross-Net Attention (MCA) mechanism that allows a compact student model to learn comprehensive attention relationships from a large teacher model across both spatial and channel dimensions, capturing fine-grained details essential for high-quality restoration. Second, we pioneer the use of a contrastive learning loss at the reconstruction level, treating the teacher’s outputs as positives and the degraded inputs as negatives, which significantly elevates the student’s reconstruction quality. Extensive experiments demonstrate that our method achieves a superior trade-off between performance and efficiency, notably enhancing downstream tasks like object detection. The primary contributions of this work lie in delivering a practical and compelling solution for real-time perceptual enhancement in autonomous systems, pushing the boundaries of efficient model design.
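The reconstruction-level contrastive idea, pulling the student's output toward the teacher's restoration while pushing it away from the degraded input, can be sketched as a ratio of distances; the L1 metric, image space, and absence of weighting are assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def reconstruction_contrastive_loss(student_out, teacher_out, degraded_in, eps=1e-7):
    """Pull the student's restoration toward the teacher's output (positive)
    and away from the degraded input (negative). Illustrative sketch only:
    the paper's exact feature spaces and weighting are not reproduced here."""
    d_pos = F.l1_loss(student_out, teacher_out)   # distance to the positive
    d_neg = F.l1_loss(student_out, degraded_in)   # distance to the negative
    return d_pos / (d_neg + eps)

# Toy usage with random tensors standing in for network outputs.
student = torch.rand(2, 3, 64, 64, requires_grad=True)
teacher = torch.rand(2, 3, 64, 64)
degraded = torch.rand(2, 3, 64, 64)
loss = reconstruction_contrastive_loss(student, teacher, degraded)
loss.backward()
print(loss.item())
```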
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Open Access Article
Generative Artificial Intelligence and the Editing of Academic Essays: Necessary and Sufficient Ethical Judgments in Its Use by Higher Education Students
by Antonio Pérez-Portabella, Mario Arias-Oliva, Jorge de Andrés-Sánchez and Graciela Padilla-Castillo
Computers 2025, 14(11), 458; https://doi.org/10.3390/computers14110458 - 24 Oct 2025
Abstract
The emergence of generative artificial intelligence (GAI) has significantly transformed higher education. As a linguistic assistant, GAI can promote equity and reduce barriers in academic writing. However, its widespread availability also raises ethical dilemmas about integrity, fairness, and skill development. Despite the growing debate, empirical evidence on how students’ ethical evaluations influence their predicted use of GAI in academic tasks remains scarce. This study analyzes the ethical determinants of students’ determination to use GAI as a linguistic assistant in essay writing. Based on the Multidimensional Ethics Scale (MES), the model incorporates four ethical criteria: moral equity, moral relativism, consequentialism, and deontology. Data were collected from a sample of 151 university students. For the analysis, we used a mix of partial least squares structural equation modeling (PLS-SEM), aimed at testing sufficiency relationships, and necessary condition analysis (NCA), to identify minimum acceptance thresholds or necessary conditions. The PLS-SEM results show that only consequentialism is statistically relevant in explaining the predicted use. Moreover, the NCA reveals that reaching a minimum degree in the evaluations of all ethical constructs is necessary for use to occur. While the necessary condition effect size of moral equity and consequentialism is high, that of relativism and deontology is moderate. Thus, although acceptance of GAI use in the analyzed context increases only when its consequences are perceived as more favorable, for such use to occur it must be considered acceptable, which requires surpassing certain thresholds in all the ethical factors proposed as explanatory.
Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
Open Access Article
Dealing with Class Overlap Through Cluster-Based Sample Weighting
by Patrick Thiam, Friedhelm Schwenker and Hans Armin Kestler
Computers 2025, 14(11), 457; https://doi.org/10.3390/computers14110457 - 24 Oct 2025
Abstract
The classification performance of an inference model trained in a supervised manner depends substantially on the size and quality of the labeled training data. The characteristics of the underlying data distribution significantly impact the generalization ability of a trained model, particularly in cases where some class overlap can be observed. In such cases, training a single model on the entirety of the labeled data can increase the complexity of the resulting decision boundary, leading to over-fitting and consequently to poor generalization performance. In the current work, a cluster-based sample weighting approach is proposed in order to improve the generalization ability of a classification model on such complex data distributions. The approach consists of first clustering the training data and subsequently optimizing cluster-specific classification models, using a loss weighted by the samples’ distances to the cluster centers. An unseen sample is first assigned to a cluster and subsequently classified by the model specific to that cluster. The proposed approach was evaluated on three different pain recognition datasets, and the evaluation showed that the approach not only attains state-of-the-art classification performance but also systematically outperforms its single-model counterpart.
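A minimal sklearn sketch of the described pipeline: cluster the training data, weight samples by distance to each cluster center, fit one model per cluster, and route unseen samples to the model of their nearest center. The Gaussian kernel used to turn distances into weights is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)

models = []
for c in range(k):
    # Weight every training sample by proximity to this cluster's center
    # (Gaussian kernel over the sample-to-center distance; width is a guess).
    d = np.linalg.norm(X_tr - km.cluster_centers_[c], axis=1)
    w = np.exp(-(d / d.std()) ** 2)
    models.append(LogisticRegression(max_iter=1000).fit(X_tr, y_tr, sample_weight=w))

# Route each test sample to the model of its nearest cluster center.
assign = km.predict(X_te)
pred = np.array([models[c].predict(x.reshape(1, -1))[0] for c, x in zip(assign, X_te)])
print("accuracy:", (pred == y_te).mean())
```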
Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
Open Access Article
Investigation of Cybersecurity Bottlenecks of AI Agents in Industrial Automation
by Sami Shrestha, Chipiliro Banda, Amit Kumar Mishra, Fatiha Djebbar and Deepak Puthal
Computers 2025, 14(11), 456; https://doi.org/10.3390/computers14110456 - 23 Oct 2025
Abstract
The growth of Agentic AI systems in industrial automation has brought new cybersecurity issues that put the reliability and integrity of these systems at risk. In this study we examine the cybersecurity issues in industrial automation in terms of the threats, risks, and vulnerabilities related to Agentic AI. We conducted a systematic literature review of present-day cybersecurity practices for industrial automation and Agentic AI, and used a simulation-based approach to study the security issues and their impact on industrial automation systems. Our results identify the key areas of focus and the mitigation strategies that may be put in place to secure the integration of Agentic AI in industrial automation. This research contributes findings that will inform the development of more secure and reliable industrial automation systems and, ultimately, improve their overall cybersecurity.
Full article
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
Open Access Article
Research on the Application of Federated Learning Based on CG-WGAN in Gout Staging Prediction
by Junbo Wang, Kaiqi Zhang, Zhibo Guan, Zi Ye, Chao Ma and Hai Huang
Computers 2025, 14(11), 455; https://doi.org/10.3390/computers14110455 - 23 Oct 2025
Abstract
Traditional federated learning frameworks face significant challenges posed by non-independent and identically distributed (non-IID) data in the healthcare domain, particularly in multi-institutional collaborative gout staging prediction. Differences in patient population characteristics, distributions of clinical indicators, and proportions of disease stages across hospitals lead to inefficient model training, increased category prediction bias, and heightened risks of privacy leakage. In the context of gout staging prediction, these issues result in decreased classification accuracy and recall, especially when dealing with minority classes. To address these challenges, this paper proposes FedCG-WGAN, a federated learning method based on conditional gradient penalization in Wasserstein GAN (CG-WGAN). By incorporating conditional information from gout staging labels and optimizing the gradient penalty mechanism, this method generates high-quality synthetic medical data, effectively mitigating the non-IID problem among clients. Building upon the synthetic data, a federated architecture is further introduced, which replaces traditional parameter aggregation with synthetic data sharing. This enables each client to design personalized prediction models tailored to their local data characteristics, thereby preserving the privacy of original data and avoiding the risk of information leakage caused by reverse engineering of model parameters. Experimental results on a real-world dataset comprising 51,127 medical records demonstrate that the proposed FedCG-WGAN significantly outperforms baseline models, achieving up to a 7.1% improvement in accuracy. Furthermore, by maintaining the composite quality score of the generated data between 0.85 and 0.88, the method achieves a favorable balance between privacy preservation and model utility.
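The gradient-penalty core of a conditional Wasserstein GAN follows the standard WGAN-GP recipe, penalizing the critic's gradient norm on label-conditioned interpolates; the toy critic, feature sizes, and penalty weight below are placeholders, and the paper's conditional penalty mechanism may differ in detail:

```python
import torch
import torch.nn as nn

# Toy critic over 20 clinical features plus a 4-class one-hot staging label.
critic = nn.Sequential(nn.Linear(20 + 4, 64), nn.ReLU(), nn.Linear(64, 1))

def gradient_penalty(critic, real, fake, labels_onehot, gp_weight=10.0):
    """Standard WGAN-GP penalty on label-conditioned interpolates; a sketch,
    not the exact conditional-gradient-penalty variant proposed in the paper."""
    alpha = torch.rand(real.size(0), 1)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    score = critic(torch.cat([interp, labels_onehot], dim=1))
    grads, = torch.autograd.grad(score.sum(), interp, create_graph=True)
    return gp_weight * ((grads.norm(2, dim=1) - 1) ** 2).mean()

real = torch.randn(8, 20)
fake = torch.randn(8, 20)
labels = torch.eye(4)[torch.randint(0, 4, (8,))]
print(gradient_penalty(critic, real, fake, labels).item())
```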
Full article
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
Open Access Article
HRCD: A Hybrid Replica Method Based on Community Division Under Edge Computing
by Shengyao Sun, Ying Du, Dong Wang, Jiwei Zhang and Shengbin Liang
Computers 2025, 14(11), 454; https://doi.org/10.3390/computers14110454 - 22 Oct 2025
Abstract
With the emergence of Industry 5.0 and explosive data growth, replica allocation has become a critical issue in edge computing systems. Current methods often focus on placing replicas on edge servers near terminals, yet this may lead to edge node overload and system performance degradation, especially in large 6G edge computing communities. Meanwhile, existing terminal-based strategies struggle with the time-varying nature of terminals. To address these challenges, we propose HRCD, a hybrid replica method based on community division. HRCD first divides time-varying terminals into stable sets using a community division algorithm. It then employs fuzzy clustering analysis to select terminals with strong service capabilities for replica placement, while using a uniform distribution to prioritize geographically local hotspot data as replica data. Extensive experiments demonstrate that HRCD effectively reduces data access latency and decreases edge server load compared to other replica strategies. Overall, HRCD offers a promising approach to optimizing replica placement in 6G edge computing environments.
Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Open Access Article
Optimization of Emergency Notification Processes in University Campuses Through Multiplatform Mobile Applications: A Case Study
by Steven Alejandro Salazar Cazco, Christian Alejandro Dávila Fuentes, Nelly Margarita Padilla Padilla, Rosa Belén Ramos Jiménez and Johanna Gabriela Del Pozo Naranjo
Computers 2025, 14(11), 453; https://doi.org/10.3390/computers14110453 - 22 Oct 2025
Abstract
Universities face continuous challenges in ensuring rapid and efficient communication during emergencies due to outdated, fragmented, and manual notification systems. This research presents the design, development, and implementation of a multiplatform mobile application to optimize emergency notifications at the Escuela Superior Politécnica de Chimborazo (ESPOCH). The application, developed using the Flutter framework, offers real-time alert dispatch, geolocation services, and seamless integration with ESPOCH’s Security Unit through Application Programming Interfaces (APIs). A descriptive and applied research methodology was adopted, analyzing existing notification workflows and evaluating agile development methodologies. MOBILE-D was selected for its rapid iteration capabilities and alignment with small development teams. The application’s architecture incorporates a Node.js backend, Firebase Realtime Database, Google Maps API, and the ESPOCH Digital ID API for robust and scalable performance. Efficiency metrics were evaluated using ISO/IEC 25010 standards, focusing on temporal behavior. The results demonstrated a 53.92% reduction in response times compared to traditional notification processes, enhancing operational readiness and safety across the campus. This study underscores the importance of leveraging mobile technologies to streamline emergency communication and provides a scalable model for educational institutions seeking to modernize their security protocols.
Full article
(This article belongs to the Section Human–Computer Interactions)
Open Access Article
Siyasat: AI-Powered AI Governance Tool to Generate and Improve AI Policies According to Saudi AI Ethics Principles
by Dabiah Alboaneen, Shaikha Alhajri, Khloud Alhajri, Muneera Aljalal, Noura Alalyani, Hajer Alsaadan, Zainab Al Thonayan and Raja Alyafer
Computers 2025, 14(11), 452; https://doi.org/10.3390/computers14110452 - 22 Oct 2025
Abstract
The rapid development of artificial intelligence (AI) and growing reliance on generative AI (GenAI) tools such as ChatGPT and Bing Chat have raised concerns about risks, including privacy violations, bias, and discrimination. AI governance is viewed as a solution, and in Saudi Arabia, the Saudi Data and Artificial Intelligence Authority (SDAIA) has introduced the AI Ethics Principles. However, many organizations face challenges in aligning their AI policies with these principles. This paper presents Siyasat, an Arabic web-based governance tool designed to generate and enhance AI policies based on SDAIA’s AI Ethics Principles. Powered by GPT-4-turbo and a Retrieval-Augmented Generation (RAG) approach, the tool uses a dataset of ten AI policies and SDAIA’s official ethics document. The results show that Siyasat achieved a BERTScore of 0.890 and Self-BLEU of 0.871 in generating AI policies, while in improving AI policies, it scored 0.870 and 0.980, showing strong consistency and quality. The paper contributes a practical solution to support public, private, and non-profit sectors in complying with Saudi Arabia’s AI Ethics Principles.
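The retrieval half of such a RAG pipeline can be sketched with TF-IDF similarity over reference passages feeding a prompt template; the corpus, query, and prompt here are invented stand-ins (the tool itself retrieves from SDAIA's ethics document and calls GPT-4-turbo):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpus standing in for SDAIA ethics passages and sample policies.
passages = [
    "AI systems must protect personal data and user privacy.",
    "AI outcomes shall be fair and free from bias and discrimination.",
    "Organizations are accountable for decisions made by AI systems.",
]

vec = TfidfVectorizer().fit(passages)
index = vec.transform(passages)

def retrieve(query, k=2):
    # Rank passages by cosine similarity to the query and keep the top k.
    sims = cosine_similarity(vec.transform([query]), index)[0]
    return [passages[i] for i in sims.argsort()[::-1][:k]]

query = "Draft a policy section on preventing bias in hiring models."
context = "\n".join(retrieve(query))
prompt = f"Using these principles:\n{context}\n\nTask: {query}"
print(prompt)  # this assembled prompt would then be sent to the LLM
```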
Full article
Open Access Article
Integrated Systems Ontology (ISOnto): Integrating Engineering Design and Operational Feedback for Dependable Systems
by Haytham Younus, Felician Campean, Sohag Kabir, Pascal Bonnaud and David Delaux
Computers 2025, 14(11), 451; https://doi.org/10.3390/computers14110451 - 22 Oct 2025
Abstract
This paper proposes an integrated ontological framework, Integrated Systems Ontology (ISOnto), for dependable systems engineering by semantically linking design models with real-world operational failure data. Building upon the recently proposed Function–Behaviour–Structure–Failure Modes (FBSFM) framework, ISOnto integrates early-stage design information with field-level evidence to support more informed, traceable, and dependable failure analysis. This extends the semantic scope of the FBSFM ontology to include operational/field feedback from warranty claims and technical inspections, enabling two-way traceability between design-phase assumptions (functions, behaviours, structures, and failure modes) and field-reported failures, causes, and effects. As a theoretical contribution, ISOnto introduces a formal semantic bridge between design and operational phases, strengthening the validation of known failure modes and the discovery of previously undocumented ones. Developed using established ontology engineering practices and formalised in OWL with Protégé, it incorporates domain-specific extensions to represent field data with structured mappings to design entities. A real-world automotive case study conducted with a global manufacturer demonstrates ISOnto’s ability to consolidate multisource lifecycle data into a coherent, machine-readable repository. The framework supports advanced reasoning, structured querying, and system-level traceability, thereby facilitating continuous improvement, data-driven validation, and more reliable decision-making across product development and reliability engineering.
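In OWL terms, the design-to-field bridge amounts to classes for design entities and field evidence linked by object properties; a minimal sketch using the owlready2 Python library, under invented names (the actual ISOnto vocabulary, built with Protégé, is far richer):

```python
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/isonto-sketch.owl")  # placeholder IRI

with onto:
    class FailureMode(Thing): pass    # design-phase entity (from FBSFM)
    class FieldFailure(Thing): pass   # operational evidence, e.g., a warranty claim

    class validatesFailureMode(ObjectProperty):
        # Two-way traceability hinges on links like this one.
        domain = [FieldFailure]
        range = [FailureMode]

# A field-reported failure validating a design-phase failure mode.
fm = FailureMode("SealLeak")
ff = FieldFailure("Claim_0425")
ff.validatesFailureMode = [fm]
print(list(onto.individuals()))
```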
Full article
(This article belongs to the Special Issue Recent Trends in Dependable and High Availability Systems)
Open Access Article
Cross-Domain Adversarial Alignment for Network Anomaly Detection Through Behavioral Embedding Enrichment
by Cristian Salvador-Najar and Luis Julián Domínguez Pérez
Computers 2025, 14(11), 450; https://doi.org/10.3390/computers14110450 - 22 Oct 2025
Abstract
Detecting anomalies in network traffic is a central task in cybersecurity and digital infrastructure management. Traditional approaches rely on statistical models, rule-based systems, or machine learning techniques to identify deviations from expected patterns, but often face limitations in generalization across domains. This study proposes a cross-domain data enrichment framework that integrates behavioral embeddings with network traffic features through adversarial autoencoders. Each network traffic record is paired with the most similar behavioral profile embedding from user web activity data (Charles dataset) using cosine similarity, thereby providing contextual enrichment for anomaly detection. The proposed system comprises (i) behavioral profile clustering via autoencoder embeddings and (ii) cross-domain latent alignment through adversarial autoencoders, with a discriminator to enable feature fusion. A Deep Feedforward Neural Network trained on the enriched feature space achieves 97.17% accuracy, 96.95% precision, 97.34% recall, and 97.14% F1-score, with stable cross-validation performance (99.79% average accuracy across folds). Behavioral clustering quality is supported by a silhouette score of 0.86 and a Davies–Bouldin index of 0.57. To assess robustness and transferability, the framework was evaluated on the UNSW-NB15 and the CIC-IDS2017 datasets, where results confirmed consistent performance and reliability when compared to traffic-only baselines. This supports the feasibility of cross-domain alignment and shows that adversarial training enables stable feature integration without evidence of overfitting or memorization.
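The enrichment step, pairing each traffic record with its most similar behavioral embedding by cosine similarity and concatenating the two, reduces to a few lines; the dimensions and random data are invented:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
traffic = rng.normal(size=(100, 16))   # network-traffic feature vectors
behavior = rng.normal(size=(30, 16))   # behavioral profile embeddings (e.g., autoencoder codes)

# For each traffic record, find the most similar behavioral profile...
nearest = cosine_similarity(traffic, behavior).argmax(axis=1)
# ...and concatenate it as contextual enrichment for the downstream detector.
enriched = np.hstack([traffic, behavior[nearest]])
print(enriched.shape)  # (100, 32): traffic features + matched behavioral embedding
```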
Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
Open Access Article
Optimizing Federated Scheduling for Real-Time DAG Tasks via Node-Level Parallelization
by Jiaqing Qiao, Sirui Chen, Tianwen Chen and Lei Feng
Computers 2025, 14(10), 449; https://doi.org/10.3390/computers14100449 - 21 Oct 2025
Abstract
Real-time task scheduling in multi-core systems is a crucial research area, especially for parallel task scheduling, where the Directed Acyclic Graph (DAG) model is commonly used to represent task dependencies. However, existing research shows that resource utilization and schedulability rates for DAG task set scheduling remain relatively low. Meanwhile, some studies have identified that certain parallel task nodes exhibit “parallelization freedom,” allowing them to be decomposed into sub-threads that can execute concurrently. This presents a promising opportunity for improving task schedulability. Building on this, we propose an approach that optimizes both node parallelization and processor core allocation under federated scheduling. Simulation experiments demonstrate that by parallelizing nodes, we can significantly reduce the number of cores required for each task and increase the percentage of task sets being schedulable.
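For context, classical federated scheduling sizes each heavy DAG task's dedicated cores from its total work C, critical-path length L, and deadline D; parallelizing nodes shortens the critical path and thus the core count. A sketch of that standard bound, not the paper's optimized allocation:

```python
import math

def cores_needed(C, L, D):
    """Classical federated-scheduling bound for a 'heavy' DAG task:
    n = ceil((C - L) / (D - L)), with total work C, critical-path
    length L, and deadline D. Shrinking L (e.g., by splitting a node
    into parallel sub-threads) directly reduces the core count."""
    assert D > L, "deadline must exceed the critical-path length"
    if C <= D:
        return 1  # light task: one core suffices
    return math.ceil((C - L) / (D - L))

# The same task before and after parallelizing a node on its critical path.
print(cores_needed(C=100, L=40, D=50))  # 6 dedicated cores
print(cores_needed(C=100, L=25, D=50))  # 3: shorter critical path, fewer cores
```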
Full article
Open Access Article
SAHI-Tuned YOLOv5 for UAV Detection of TM-62 Anti-Tank Landmines: Small-Object, Occlusion-Robust, Real-Time Pipeline
by Dejan Dodić, Vuk Vujović, Srđan Jovković, Nikola Milutinović and Mitko Trpkoski
Computers 2025, 14(10), 448; https://doi.org/10.3390/computers14100448 - 21 Oct 2025
Abstract
Anti-tank landmines endanger post-conflict recovery. Detecting camouflaged TM-62 landmines in low-altitude unmanned aerial vehicle (UAV) imagery is challenging because targets occupy few pixels, are low-contrast, and are often occluded. We introduce a single-class anti-tank dataset and a YOLOv5 pipeline augmented with a SAHI-based small-object stage and Weighted Boxes Fusion. The evaluation combines COCO metrics with an operational operating point (score = 0.25; IoU = 0.50) and stratifies by object size and occlusion. On a held-out test partition representative of UAV acquisition, the baseline YOLOv5 attains mAP@0.50:0.95 = 0.553 and AP@0.50 = 0.851. With tuned SAHI (768 px tiles, 40% overlap) plus fusion, performance rises to mAP@0.50:0.95 = 0.685 and AP@0.50 = 0.935: ΔmAP = +0.132 (+23.9% rel.) and ΔAP@0.50 = +0.084 (+9.9% rel.). At the operating point, precision = 0.94 and recall = 0.89 (F1 = 0.914), implying a 58.4% reduction in missed detections versus a non-optimized SAHI baseline and a +14.3 AP@0.50 gain on the small/occluded subset. Ablations attribute the gains to tile size, overlap, and fusion, which boost recall on low-pixel, occluded landmines without inflating false positives. The pipeline sustains real-time UAV throughput, supports actionable triage for humanitarian demining, and motivates RGB–thermal fusion and cross-season/-domain adaptation.
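The slicing stage of such a SAHI pipeline reduces to generating overlapping crops and mapping per-tile detections back to frame coordinates before fusion; the tile size and overlap below mirror the tuned values reported, while the rest is a schematic sketch:

```python
def make_tiles(width, height, tile=768, overlap=0.4):
    """Yield (x0, y0, x1, y1) crops covering the frame with the paper's tuned
    tile size (768 px) and 40% overlap. Per-tile detections are shifted back
    by (x0, y0) and merged across tiles with Weighted Boxes Fusion."""
    step = int(tile * (1 - overlap))

    def offsets(size):
        offs = list(range(0, max(size - tile, 0) + 1, step))
        if offs[-1] + tile < size:   # ensure the last tile reaches the edge
            offs.append(size - tile)
        return offs

    for y0 in offsets(height):
        for x0 in offsets(width):
            yield (x0, y0, min(x0 + tile, width), min(y0 + tile, height))

print(len(list(make_tiles(4000, 3000))))  # number of crops per 4000x3000 frame
```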
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
Open Access Article
Blockchain-Based Cooperative Medical Records Management System
by Sultan Alyahya and Zahraa Almaghrabi
Computers 2025, 14(10), 447; https://doi.org/10.3390/computers14100447 - 21 Oct 2025
Abstract
The effective management of electronic medical records is critical to delivering high-quality healthcare services. However, existing systems often suffer from issues such as fragmented data, lack of interoperability, and weak privacy protections, which hinder collaboration among healthcare stakeholders. This paper proposes a blockchain-based system to securely manage and share medical records in a decentralized and transparent manner. By leveraging smart contracts and access control policies, the system empowers patients with control over their data, ensures auditability of all interactions, and facilitates secure data sharing among patients, healthcare providers, insurance companies, and regulatory authorities. The proposed architecture is implemented using a private Ethereum blockchain and evaluated through a scenario-based comparison with the Prince Sultan Military Medical City system, as well as quantitative performance measurements of the blockchain prototype. Results demonstrate significant improvements in data security, access transparency, and system interoperability, with patients gaining the ability to track and control access to their records across multiple healthcare providers, while system performance remained practical for healthcare workflows.
Full article
(This article belongs to the Special Issue Revolutionizing Industries: The Impact of Blockchain Technology)
Open Access Systematic Review
Predicting Website Performance: A Systematic Review of Metrics, Methods, and Research Gaps (2010–2024)
by Mohammad Ghattas, Suhail Odeh and Antonio M. Mora
Computers 2025, 14(10), 446; https://doi.org/10.3390/computers14100446 - 20 Oct 2025
Abstract
Website performance directly impacts user experience, trust, and competitiveness. While numerous studies have proposed evaluation methods, there is still no comprehensive synthesis that integrates performance metrics with predictive models. This study conducts a systematic literature review (SLR) following the PRISMA framework across seven academic databases (2010–2024). From 6657 initial records, 30 high-quality studies were included after rigorous screening and quality assessment. In addition, 59 website performance metrics were identified and validated through an expert survey, resulting in 16 core indicators. The review highlights a dominant reliance on traditional evaluation metrics (e.g., Load Time, Page Size, Response Time) and reveals limited adoption of machine learning and deep learning approaches. Most existing studies focus on e-government and educational websites, with little attention to e-commerce, healthcare, and industry domains. Furthermore, the geographic distribution of research remains uneven, with a concentration in Asia and limited contributions from North America and Africa. This study contributes by (i) consolidating and validating a set of 16 critical performance metrics, (ii) critically analyzing current methodologies, and (iii) identifying gaps in domain coverage and intelligent prediction models. Future research should prioritize cross-domain benchmarks, integrate machine learning for scalable predictions, and address the lack of standardized evaluation protocols.
Full article
(This article belongs to the Section Human–Computer Interactions)
Open Access Article
Exploring the Moderating Role of Personality Traits in Technology Acceptance: A Study on SAP S/4 HANA Learning Among University Students
by Sandra Barjaktarovic, Ivana Kovacevic and Ognjen Pantelic
Computers 2025, 14(10), 445; https://doi.org/10.3390/computers14100445 - 19 Oct 2025
Abstract
The aim of this study is to examine the impact of personality traits on students’ intention to accept the SAP S/4HANA business software. Grounded in the Big Five Factor (BFF) model of personality and the Technology Acceptance Model (TAM), the research analyzed the role of individual differences in students’ learning performance using this ERP system. The study was conducted on a sample of N = 418 first-year students who underwent a quasi-experimental treatment based on realistic business scenarios. The results indicate that conscientiousness emerged as a positive predictor, while agreeableness demonstrated negative predictive value in learning SAP S/4HANA, whereas neuroticism did not exhibit a significant effect. Moderation analysis revealed that both Perceived Usefulness and Actual Usage of technology moderated the relationship between conscientiousness and SAP learning performance, enhancing its predictive strength. These findings underscore the importance of individual differences in the process of SAP S/4HANA acceptance within an educational context and suggest that instructional strategies should be tailored to students’ personality traits in order to optimize learning outcomes.
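Moderation of the reported kind, Perceived Usefulness strengthening the link between conscientiousness and learning performance, is conventionally tested with an interaction term; a synthetic-data sketch with statsmodels, with all variable names and coefficients invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 418  # matches the study's sample size; the data here are synthetic
df = pd.DataFrame({
    "consc": rng.normal(size=n),  # conscientiousness
    "pu": rng.normal(size=n),     # Perceived Usefulness (moderator)
})
# Simulate a moderated effect: conscientiousness matters more when PU is high.
df["performance"] = 0.4 * df.consc + 0.2 * df.pu + 0.3 * df.consc * df.pu + rng.normal(size=n)

# 'consc * pu' expands to both main effects plus their interaction term.
model = smf.ols("performance ~ consc * pu", data=df).fit()
print(model.params["consc:pu"], model.pvalues["consc:pu"])  # significant interaction => moderation
```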
Full article
Open Access Article
Improved Multi-Faceted Sine Cosine Algorithm for Optimization and Electricity Load Forecasting
by Stephen O. Oladipo, Udochukwu B. Akuru and Abraham O. Amole
Computers 2025, 14(10), 444; https://doi.org/10.3390/computers14100444 - 17 Oct 2025
Abstract
The sine cosine algorithm (SCA) is a population-based stochastic optimization method that updates the position of each search agent using the oscillating properties of the sine and cosine functions to balance exploration and exploitation. While flexible and widely applied, the SCA often suffers from premature convergence and entrapment in local optima due to a weak exploration–exploitation balance. To overcome these issues, this study proposes a multi-faceted SCA (MFSCA) incorporating several improvements. The initial population is generated using dynamic opposition (DO) to increase diversity and global search capability. Chaotic logistic maps generate random coefficients to enhance exploration, while an elite-learning strategy allows agents to learn from multiple top-performing solutions. Adaptive parameters, including inertia weight, jumping rate, and local search strength, are applied to guide the search more effectively. In addition, Lévy flights and adaptive Gaussian local search with elitist selection strengthen exploration and exploitation, while reinitialization of stagnating agents maintains diversity. The developed MFSCA was tested on 23 benchmark optimization functions and assessed using the Wilcoxon rank-sum and Friedman rank tests. Results showed that MFSCA outperformed the original SCA and other variants. To further validate its applicability, this study developed a fuzzy c-means MFSCA-based adaptive neuro-fuzzy inference system (MFSCA-ANFIS) to forecast energy consumption in student residences, using student apartments at a university in South Africa as a case study. The MFSCA-ANFIS achieved superior performance with respect to RMSE (1.9374), MAD (1.5483), MAE (1.5457), CVRMSE (42.8463), and SD (1.9373). These results highlight MFSCA’s effectiveness as a robust optimizer for both general optimization tasks and energy management applications.
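For reference, the canonical SCA update that MFSCA builds on oscillates each agent around the best-so-far solution with a decaying amplitude; a bare-bones sketch on the sphere function, with none of the paper's enhancements included:

```python
import math
import random

random.seed(3)

def sphere(x):
    return sum(v * v for v in x)

dim, pop, T, a = 5, 20, 200, 2.0
X = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(pop)]
best = min(X, key=sphere).copy()

for t in range(T):
    r1 = a - t * (a / T)  # amplitude decays: exploration -> exploitation
    for x in X:
        for j in range(dim):
            r2 = random.uniform(0, 2 * math.pi)
            r3 = random.uniform(0, 2)
            r4 = random.random()
            osc = math.sin(r2) if r4 < 0.5 else math.cos(r2)
            # Canonical SCA update: oscillate relative to the best-so-far position.
            x[j] += r1 * osc * abs(r3 * best[j] - x[j])
    cand = min(X, key=sphere)
    if sphere(cand) < sphere(best):
        best = cand.copy()

print("best fitness:", sphere(best))
```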
Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
Topics
Topic in Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Topic in Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026
Topic in AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 March 2026
Special Issues
Special Issue in Computers
Multimedia Data and Network Security
Guest Editor: Zahid Akhtar
Deadline: 31 October 2025
Special Issue in Computers
Adaptive Decision Making Across Industries with AI and Machine Learning: Frameworks, Challenges, and Innovations
Guest Editor: Samiul Islam
Deadline: 31 October 2025
Special Issue in Computers
Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition)
Guest Editors: Mariofanna Milanova, Friedhelm Schwenker
Deadline: 31 October 2025
Special Issue in Computers
AI for Humans and Humans for AI (AI4HnH4AI)
Guest Editors: Amit Kumar Mishra, Deepak Puthal
Deadline: 31 October 2025