Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, covering computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed in Scopus, ESCI (Web of Science), dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 15.5 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.6 (2023)
5-Year Impact Factor: 2.4 (2023)
Latest Articles
Employing Blockchain, NFTs, and Digital Certificates for Unparalleled Authenticity and Data Protection in Source Code: A Systematic Review
Computers 2025, 14(4), 131; https://doi.org/10.3390/computers14040131 - 2 Apr 2025
Abstract
In higher education, especially in programming-intensive fields like computer science, safeguarding students’ source code is crucial to prevent theft that could impact learning and future careers. Traditional storage solutions like Google Drive are vulnerable to hacking and alterations, highlighting the need for stronger protection. This work explores digital technologies that enhance source code security, with a focus on Blockchain and NFTs. Due to Blockchain’s decentralized and immutable nature, NFTs can be used to control code ownership, improving security and traceability and preventing unauthorized access. This approach effectively addresses existing gaps in protecting academic intellectual property. However, as Bennett et al. highlight, while these technologies have significant potential, challenges remain in large-scale implementation and user acceptance. Despite these hurdles, integrating Blockchain and NFTs presents a promising opportunity to enhance academic integrity. Successful adoption in educational settings may require a more inclusive and innovative strategy.
Full article
(This article belongs to the Section Blockchain Infrastructures and Enabled Applications)
Open Access Article
Artificial Intelligence in Neoplasticism: Aesthetic Evaluation and Creative Potential
by
Su Jin Mun and Won Ho Choi
Computers 2025, 14(4), 130; https://doi.org/10.3390/computers14040130 - 2 Apr 2025
Abstract
This research investigates the aesthetic evaluation of AI-generated neoplasticist artworks, exploring how well artificial intelligence systems, specifically Midjourney, replicate the core principles of neoplasticism, such as geometric forms, balance, and color harmony. The background of this study stems from ongoing debates about the legitimacy of AI-generated art and how these systems engage with established artistic movements. The purpose of the research is to assess whether AI can produce artworks that meet aesthetic standards comparable to human-created works. The research utilized Monroe C. Beardsley’s aesthetic emotion criteria and Noël Carroll’s aesthetic experience criteria as a framework for evaluating the artworks. A logistic regression analysis was conducted to identify key compositional elements in AI-generated neoplasticist works. The findings revealed that AI systems excelled in areas such as unity, color diversity, and overall artistic appeal but showed limitations in handling monochromatic elements. The implications of this research suggest that while AI can produce high-quality art, further refinement is needed for more subtle aspects of design. This study contributes to understanding the potential of AI as a tool in the creative process, offering insights for both artists and AI developers.
Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Article
Strengthening Cybersecurity Resilience: An Investigation of Customers’ Adoption of Emerging Security Tools in Mobile Banking Apps
by
Irfan Riasat, Mahmood Shah and M. Sinan Gonul
Computers 2025, 14(4), 129; https://doi.org/10.3390/computers14040129 - 1 Apr 2025
Abstract
The rise in internet-based services has raised risks of data exposure. The manipulation and exploitation of sensitive data significantly impact individuals’ resilience—the ability to protect and prepare against cyber incidents. Emerging technologies seek to enhance cybersecurity resilience by developing various security tools. This study aims to explore the adoption of security tools using a qualitative research approach. Twenty-two semi-structured interviews were conducted with users of mobile banking apps from Pakistan. Data were analyzed using thematic analysis, which revealed that biometric authentication and SMS alerts are commonly used. Limited use of multifactor authentication has been observed, mainly due to a lack of awareness or implementation knowledge. Passwords are still regarded as a trusted and secure mechanism. The findings indicate that the adoption of security tools is based on perceptions of usefulness, perceived trust, and perceived ease of use, while knowledge and awareness play a moderating role. This study also proposes a framework by extending TAM to include multiple security tools and introducing knowledge and awareness as a moderator influencing users’ perceptions. The findings inform practical implications for financial institutions, application developers, and policymakers to ensure standardized policies for including security tools in online financial platforms, thereby enhancing overall cybersecurity resilience.
Full article
Open Access Article
SMS3D: 3D Synthetic Mushroom Scenes Dataset for 3D Object Detection and Pose Estimation
by
Abdollah Zakeri, Bikram Koirala, Jiming Kang, Venkatesh Balan, Weihang Zhu, Driss Benhaddou and Fatima A. Merchant
Computers 2025, 14(4), 128; https://doi.org/10.3390/computers14040128 - 1 Apr 2025
Abstract
The mushroom farming industry struggles to automate harvesting due to limited large-scale annotated datasets and the complex growth patterns of mushrooms, which complicate detection, segmentation, and pose estimation. To address this, we introduce a synthetic dataset with 40,000 unique scenes of white Agaricus bisporus and brown baby bella mushrooms, capturing realistic variations in quantity, position, orientation, and growth stages. Our two-stage pose estimation pipeline combines 2D object detection and instance segmentation with a 3D point cloud-based pose estimation network using a Point Transformer. By employing a continuous 6D rotation representation and a geodesic loss, our method ensures precise rotation predictions. Experiments show that processing point clouds with 1024 points and the 6D Gram–Schmidt rotation representation yields optimal results, achieving a low average rotational error on synthetic data and surpassing current state-of-the-art methods in mushroom pose estimation. The model further generalizes well to real-world data, attaining a small mean angle difference on a subset of the M18K dataset with ground-truth annotations. This approach aims to drive automation in harvesting, growth monitoring, and quality assessment in the mushroom industry.
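The continuous 6D rotation representation and geodesic loss mentioned in this abstract are standard building blocks (Zhou et al.'s Gram–Schmidt construction); a minimal numpy sketch of the general technique, not the paper's own code, looks like this:

```python
import numpy as np

def rotation_from_6d(x):
    """Map a 6D vector to a 3x3 rotation matrix via Gram-Schmidt:
    normalize the first 3-vector, orthogonalize the second against it,
    and complete the basis with a cross product."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3])  # rows form an orthonormal, right-handed basis

def geodesic_loss(R_pred, R_true):
    """Rotation angle (radians) of the relative rotation R_pred^T R_true,
    i.e. the geodesic distance on SO(3)."""
    cos = (np.trace(R_pred.T @ R_true) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

Because the mapping is continuous everywhere (unlike Euler angles or quaternions with sign ambiguity), it is well suited as a regression target for networks such as the Point Transformer used here.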
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision—2nd Edition)
Open Access Article
Lossless Compression of Malaria-Infected Erythrocyte Images Using Vision Transformer and Deep Autoencoders
by
Md Firoz Mahmud, Zerin Nusrat and W. David Pan
Computers 2025, 14(4), 127; https://doi.org/10.3390/computers14040127 - 1 Apr 2025
Abstract
Lossless compression of medical images allows for rapid image data exchange and faithful recovery of the compressed data for medical image assessment. There are many useful telemedicine applications, for example in diagnosing conditions such as malaria in resource-limited regions. This paper presents a novel machine learning-based approach where lossless compression of malaria-infected erythrocyte images is assisted by cutting-edge classifiers. To this end, we first use a Vision Transformer to classify images into two categories: those cells that are infected with malaria and those that are not. We then employ distinct deep autoencoders for each category, which not only reduces the dimensions of the image data but also preserves crucial diagnostic information. To ensure no loss in reconstructed image quality, we further compress the residuals produced by these autoencoders using the Huffman code. Simulation results show that the proposed method achieves lower overall bit rates and thus higher compression ratios than traditional compression schemes such as JPEG 2000, JPEG-LS, and CALIC. This strategy holds significant potential for effective telemedicine applications and can improve diagnostic capabilities in regions impacted by malaria.
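The losslessness argument in this abstract rests on Huffman-coding the residual between the original image and the autoencoder's reconstruction, so that original = reconstruction + decoded residual, bit for bit. A stdlib-only sketch (the pixel values and the `huffman_code` helper are illustrative, not the paper's implementation):

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(symbols):
    """Build a Huffman prefix code from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    tie = count()  # tiebreaker so heap never compares dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

# Pipeline sketch: residuals are small and peaked around zero,
# so they compress far better than the raw pixels.
original = [12, 12, 13, 200, 12, 13, 13, 12]
reconstruction = [12, 12, 12, 198, 12, 12, 13, 12]  # stand-in for autoencoder output
residual = [o - r for o, r in zip(original, reconstruction)]
code = huffman_code(residual)
bitstream = "".join(code[r] for r in residual)
```

Decoding the bitstream and adding it back to the reconstruction recovers the original exactly, which is what makes the overall scheme lossless even though the autoencoder itself is lossy.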
Full article
(This article belongs to the Special Issue Applications of Machine Learning and Artificial Intelligence for Healthcare)
Open Access Article
Analyzing Digital Political Campaigning Through Machine Learning: An Exploratory Study for the Italian Campaign for European Union Parliament Election in 2024
by
Paolo Sernani, Angela Cossiri, Giovanni Di Cosimo and Emanuele Frontoni
Computers 2025, 14(4), 126; https://doi.org/10.3390/computers14040126 - 30 Mar 2025
Abstract
The rapid digitalization of political campaigns has reshaped electioneering strategies, enabling political entities to leverage social media for targeted outreach. This study investigates the impact of digital political campaigning during the 2024 EU elections using machine learning techniques to analyze social media dynamics. We introduce a novel dataset—Political Popularity Campaign—which comprises social media posts, engagement metrics, and multimedia content from the electoral period. By applying predictive modeling, we estimate key indicators such as post popularity and assess their influence on campaign outcomes. Our findings highlight the significance of micro-targeting practices, the role of algorithmic biases, and the risks associated with disinformation in shaping public opinion. Moreover, this research contributes to the broader discussion on regulating digital campaigning by providing analytical models that can aid policymakers and public authorities in monitoring election compliance and transparency. The study underscores the necessity for robust frameworks to balance the advantages of digital political engagement with the challenges of ensuring fair democratic processes.
Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
Open Access Article
Introducing a New Genetic Operator Based on Differential Evolution for the Effective Training of Neural Networks
by
Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
Computers 2025, 14(4), 125; https://doi.org/10.3390/computers14040125 - 28 Mar 2025
Abstract
Artificial neural networks are widely established models used to solve a variety of real-world problems in the fields of physics, chemistry, etc. These machine learning models contain a series of parameters that must be appropriately tuned by various optimization techniques in order to effectively address the problems that they face. Genetic algorithms have been used in many cases in the recent literature to train artificial neural networks, and various modifications have been made to enhance this procedure. In this article, the incorporation of a novel genetic operator into genetic algorithms is proposed to effectively train artificial neural networks. The new operator is based on the differential evolution technique, and it is periodically applied to randomly selected chromosomes from the genetic population. Furthermore, to determine a promising range of values for the parameters of the artificial neural network, an additional genetic algorithm is executed before the execution of the basic algorithm. The modified genetic algorithm is used to train neural networks on classification and regression datasets, and the results are reported and compared with those of other methods used to train neural networks.
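The operator the abstract describes, a differential-evolution-style mutation applied periodically to randomly selected chromosomes, can be sketched as a DE/rand/1 step (x' = a + F·(b − c)); the function and parameter names here are illustrative, not the paper's notation:

```python
import random

def de_operator(population, F=0.8, rng=random):
    """Replace a randomly selected chromosome with a
    differential-evolution trial vector built from three
    population members a, b, c: trial = a + F * (b - c)."""
    i = rng.randrange(len(population))
    a, b, c = rng.sample(population, 3)
    trial = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
    population[i] = trial
    return population
```

Within the genetic algorithm, such a step would be invoked every few generations alongside the usual crossover and mutation, injecting difference-vector moves that help the weight vectors of the neural network escape local minima.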
Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
Open Access Review
Advances in Federated Learning: Applications and Challenges in Smart Building Environments and Beyond
by
Mohamed Rafik Aymene Berkani, Ammar Chouchane, Yassine Himeur, Abdelmalik Ouamane, Sami Miniaoui, Shadi Atalla, Wathiq Mansoor and Hussain Al-Ahmad
Computers 2025, 14(4), 124; https://doi.org/10.3390/computers14040124 - 27 Mar 2025
Abstract
Federated Learning (FL) is a transformative decentralized approach in machine learning and deep learning, offering enhanced privacy, scalability, and data security. This review paper explores the foundational concepts and architectural variations of FL, prominent aggregation algorithms like FedAvg, FedProx, and FedMA, and diverse innovative applications in thermal comfort optimization, energy prediction, healthcare, and anomaly detection within smart buildings. By enabling collaborative model training without centralizing sensitive data, FL ensures privacy and robust performance across heterogeneous environments. We further discuss the integration of FL with advanced technologies, including digital twins and 5G/6G networks, and demonstrate its potential to revolutionize real-time monitoring and optimize resources. Despite these advances, FL still faces challenges, such as communication overhead, security issues, and non-IID data handling. Future research directions highlight the development of adaptive learning methods, robust privacy measures, and hybrid architectures to fully leverage FL’s potential in driving innovative, secure, and efficient intelligence for the next generation of smart buildings.
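Of the aggregation algorithms the review surveys, FedAvg (McMahan et al.) is the simplest: the server averages client model parameters weighted by local dataset size. A minimal sketch with flat parameter lists standing in for model weights:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: the server's new parameter vector is the
    average of the clients' vectors, weighted by each client's
    local dataset size. Raw data never leaves the clients."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
        for k in range(n_params)
    ]
```

FedProx and FedMA refine this step (a proximal term limiting client drift, and layer-wise matched averaging, respectively), but both keep the same privacy-preserving structure: only parameters, never data, are exchanged.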
Full article
Open Access Article
Cross-Dataset Data Augmentation Using UMAP for Deep Learning-Based Wind Speed Prediction
by
Eder Arley Leon-Gomez, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(4), 123; https://doi.org/10.3390/computers14040123 - 27 Mar 2025
Abstract
Wind energy has emerged as a cornerstone in global efforts to transition to renewable energy, driven by its low environmental impact and significant generation potential. However, the inherent intermittency of wind, influenced by complex and dynamic atmospheric patterns, poses significant challenges for accurate wind speed prediction. Existing approaches, including statistical methods, machine learning, and deep learning, often struggle with limitations such as non-linearity, non-stationarity, computational demands, and the requirement for extensive, high-quality datasets. In response to these challenges, we propose a novel neighborhood preserving cross-dataset data augmentation framework for high-horizon wind speed prediction. The proposed method addresses data variability and dynamic behaviors through three key components: (i) the uniform manifold approximation and projection (UMAP) is employed as a non-linear dimensionality reduction technique to encode local relationships in wind speed time-series data while preserving neighborhood structures, (ii) a localized cross-dataset data augmentation (DA) approach is introduced using UMAP-reduced spaces to enhance data diversity and mitigate variability across datasets, and (iii) recurrent neural networks (RNNs) are trained on the augmented datasets to model temporal dependencies and non-linear patterns effectively. Our framework was evaluated using datasets from diverse geographical locations, including the Argonne Weather Observatory (USA), Chengdu Airport (China), and Beijing Capital International Airport (China). Comparative tests using regression-based measures on RNN, GRU, and LSTM architectures showed that the proposed method improved the accuracy and generalizability of predictions, yielding an average reduction in prediction error.
Consequently, our study highlights the potential of integrating advanced dimensionality reduction, data augmentation, and deep learning techniques to address critical challenges in renewable energy forecasting.
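The cross-dataset augmentation idea in component (ii) can be illustrated with a simple neighborhood blend: for each target-dataset point, find its nearest source-dataset point in a shared low-dimensional embedding (UMAP in the paper; any embedding here) and mix the corresponding raw windows. This is an illustrative sketch under that assumption; the paper's exact mixing scheme may differ:

```python
import numpy as np

def neighborhood_augment(source_emb, target_emb,
                         source_series, target_series, alpha=0.5):
    """For each target point, locate its nearest source neighbor in the
    embedding space and emit a convex combination of the two raw
    time-series windows, preserving local (neighborhood) structure."""
    augmented = []
    for j, t in enumerate(target_emb):
        i = int(np.argmin(np.linalg.norm(source_emb - t, axis=1)))
        augmented.append(alpha * source_series[i] + (1 - alpha) * target_series[j])
    return np.array(augmented)
```

Because the blend pairs only embedding-space neighbors, the synthetic windows stay close to both datasets' local dynamics, which is what lets the RNNs in component (iii) generalize across sites.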
Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
Open Access Article
Learning Analytics to Guide Serious Game Development: A Case Study Using Articoding
by
Antonio Calvo-Morata, Cristina Alonso-Fernández, Julio Santilario-Berthilier, Iván Martínez-Ortiz and Baltasar Fernández-Manjón
Computers 2025, 14(4), 122; https://doi.org/10.3390/computers14040122 - 27 Mar 2025
Abstract
Serious games are powerful interactive environments that provide more authentic experiences for learning or training different skills. However, developing effective serious games is complex, and a more systematic approach is needed to create better evidence-based games. Learning analytics—based on the analysis of collected in-game user interactions—can support game development and the players’ learning process, providing assessment information to teachers, students, and other stakeholders. However, empirical studies applying and demonstrating the use of learning analytics in the context of serious games in real environments remain scarce. In this paper, we study the application of learning analytics throughout the whole lifecycle of a serious game, in order to assess the game’s design and players’ learning using a serious game that introduces basic programming concepts through a visual programming language. The game was played by N = 134 high school students in two 50-minute sessions. During the game sessions, all player interactions were collected, including the time spent solving levels, their programming solutions, and the number of replays. We analyzed these interaction traces to gain insights that can facilitate teachers’ use of serious games in their lessons and assessments, as well as guide developers in making possible improvements to the game. Among these insights, knowing which tasks students struggle with is critical for both teachers and game developers, and can also reveal game design issues. Among the results obtained through analysis of the interaction data, we found differences between boys and girls when playing. Girls played in a more reflective way and, in terms of acceptance of the game, a higher percentage of girls held neutral opinions. We also found the most repeated errors, the level each player reached, and how long it took them to reach those levels.
These data will help to make further improvements to the game’s design, resulting in a more effective educational tool in the future. The process and results of this study can guide other researchers when applying learning analytics to evaluate and improve the educational design of serious games, as well as supporting teachers—both during and after the game activity—in applying an evidence-based assessment of the players based on the collected learning analytics.
Full article
(This article belongs to the Special Issue Smart Learning Environments)
Open Access Article
Scalable Data Transformation Models for Physics-Informed Neural Networks (PINNs) in Digital Twin-Enabled Prognostics and Health Management (PHM) Applications
by
Atuahene Kwasi Barimah, Ogwo Precious Onu, Octavian Niculita, Andrew Cowell and Don McGlinchey
Computers 2025, 14(4), 121; https://doi.org/10.3390/computers14040121 - 26 Mar 2025
Abstract
Digital twin (DT) technology has become a key enabler for prognostics and health management (PHM) in complex industrial systems, yet scaling predictive models for multi-component degradation (MCD) scenarios remains challenging, particularly when transferring insights from predictive models of smaller systems developed with limited data to larger systems. To address this, a physics-informed neural network (PINN) framework that integrates a standardized scaling methodology, enabling scalable DT analytics for MCD prognostics, was developed in this paper. Our approach employs a systematic DevOps workflow that features containerized PINN DT analytics deployed on a Kubernetes cluster for dynamic resource optimization, a real-time DT platform (PTC ThingWorx™), and a custom API for bidirectional data exchange that connects the cluster to the DT platform. A key contribution of this paper is the scalable DT model, which facilitates transfer learning of degradation patterns across heterogeneous hydraulic systems. Three hydraulic system configurations were modeled, analyzing multi-component filter degradation under pump speeds of 700–900 RPM. Trained on limited data from a reference system, the scaled PINN model achieved 88.98% accuracy for initial degradation detection at 900 RPM—outperforming an unscaled baseline of 64.13%—with consistent improvements across various speeds and thresholds. This work advances PHM analytics by reducing costs and development time, providing a scalable framework for cross-system DT deployment.
Full article
(This article belongs to the Special Issue Generative Artificial Intelligence and Machine Learning in Industrial Processes and Manufacturing)
Open Access Article
FraudX AI: An Interpretable Machine Learning Framework for Credit Card Fraud Detection on Imbalanced Datasets
by
Nazerke Baisholan, J. Eric Dietz, Sergiy Gnatyuk, Mussa Turdalyuly, Eric T. Matson and Karlygash Baisholanova
Computers 2025, 14(4), 120; https://doi.org/10.3390/computers14040120 - 25 Mar 2025
Abstract
Credit card fraud detection is a critical research area due to the significant financial losses and security risks associated with fraudulent activities. This study presents FraudX AI, an ensemble-based framework addressing the challenges in fraud detection, including imbalanced datasets, interpretability, and scalability. FraudX AI combines random forest and XGBoost as baseline models, integrating their results by averaging probabilities and optimizing thresholds to improve detection performance. The framework was evaluated on the European credit card dataset, maintaining its natural imbalance to reflect real-world conditions. FraudX AI achieved a recall value of 95% and an AUC-PR of 97%, effectively detecting rare fraudulent transactions and minimizing false positives. SHAP (Shapley additive explanations) was applied to interpret model predictions, providing insights into the importance of features in driving decisions. This interpretability enhances usability by offering helpful information to domain experts. Comparative evaluations of eight baseline models, including logistic regression and gradient boosting, as well as existing studies, showed that FraudX AI consistently outperformed these approaches on key metrics. By addressing technical and practical challenges, FraudX AI advances fraud detection systems with its robust performance on imbalanced datasets and its focus on interpretability, offering a scalable and trusted solution for real-world financial applications.
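The combination step the abstract describes, averaging the two baseline models' probabilities and tuning a decision threshold, can be sketched as follows; the threshold-selection criterion (maximizing F1 on a validation split) is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def ensemble_predict(p_rf, p_xgb, threshold):
    """Average random-forest and XGBoost fraud probabilities and
    apply a tuned decision threshold."""
    p = (np.asarray(p_rf) + np.asarray(p_xgb)) / 2.0
    return (p >= threshold).astype(int), p

def best_threshold(p, y, grid=np.linspace(0.05, 0.95, 19)):
    """Scan a threshold grid and keep the value maximizing F1,
    a sensible criterion on heavily imbalanced data where
    accuracy is uninformative."""
    def f1(t):
        pred = (p >= t).astype(int)
        tp = int(((pred == 1) & (y == 1)).sum())
        fp = int(((pred == 1) & (y == 0)).sum())
        fn = int(((pred == 0) & (y == 1)).sum())
        denom = 2 * tp + fp + fn
        return 2 * tp / denom if denom else 0.0
    return max(grid, key=f1)
```

Tuning the threshold on the averaged probabilities, rather than keeping the default 0.5, is what lets the ensemble trade off recall against false positives on the naturally imbalanced dataset.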
Full article
Open Access Review
AI-Powered Software Development: A Systematic Review of Recommender Systems for Programmers
by
Efthimia Mavridou, Eleni Vrochidou, Theofanis Kalampokas, Venetis Kanakaris and George A. Papakostas
Computers 2025, 14(4), 119; https://doi.org/10.3390/computers14040119 - 24 Mar 2025
Abstract
Software engineering is a field that demands extensive knowledge and involves numerous challenges in managing information. The information landscapes in software engineering encompass source code and its revision history, a set of explicit instructions for writing, commenting on, and running the code, a set of procedures and routines, and the development environment. For software engineers who develop code, writing code documentation is also extremely important. Due to the technical complexity, vast scale, and dynamic nature of software engineering, there is a need for a specialized category of tools to assist developers, known as recommendation systems in software engineering (RSSE). RSSEs are specialized software applications designed to assist developers by providing valuable resources, code snippets, solutions to problems, and other useful information and suggestions tailored to their specific tasks. Through the analysis of data and user interactions, RSSEs aim to enhance productivity and decision-making for developers. To this end, this work presents an analysis of the literature on recommender systems for programmers, highlighting the distinct attributes of RSSEs. Moreover, it summarizes all related challenges regarding developing, assessing, and utilizing RSSEs, and offers a broad perspective on the present state of research and advancements in recommendation systems for the highly technical field of software engineering.
Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
Open Access Article
Enhancing CuFP Library with Self-Alignment Technique
by
Fahimeh Hajizadeh, Tarek Ould-Bachir and Jean Pierre David
Computers 2025, 14(4), 118; https://doi.org/10.3390/computers14040118 - 24 Mar 2025
Abstract
High-Level Synthesis (HLS) tools have transformed FPGA development by streamlining digital design and enhancing efficiency. Meanwhile, advancements in semiconductor technology now support the integration of hundreds of floating-point units on a single chip, enabling more resource-intensive computations. CuFP, an HLS library, facilitates the creation of customized floating-point operators with configurable exponent and mantissa bit widths, providing greater flexibility and resource efficiency. This paper introduces the integration of the self-alignment technique (SAT) into the CuFP library, extending its capability for customized addition-related floating-point operations with enhanced precision and resource utilization. Our findings demonstrate that incorporating SAT into CuFP enables the efficient FPGA deployment of complex floating-point operators, achieving significant reductions in computational latency and improved resource efficiency. Specifically, for a vector size of 64, CuFPSAF reduces execution cycles by 29.4% compared to CuFP and by 81.5% compared to vendor IP while maintaining the same DSP utilization as CuFP and reducing it by 59.7% compared to vendor IP. These results highlight the efficiency of SAT in FPGA-based floating-point computations.
Full article
Open Access Article
Enhancing Scalability and Network Efficiency in IOTA Tangle Networks: A POMDP-Based Tip Selection Algorithm
by
Mays Alshaikhli, Somaya Al-Maadeed and Moutaz Saleh
Computers 2025, 14(4), 117; https://doi.org/10.3390/computers14040117 - 24 Mar 2025
Abstract
The fairness problem in the IOTA (Internet of Things Application) Tangle network has significant implications for transaction efficiency, scalability, and security, particularly concerning orphan transactions and lazy tips. Traditional tip selection algorithms (TSAs) struggle to ensure fair tip selection, leading to inefficient transaction confirmations and network congestion. This research proposes a novel partially observable Markov decision process (POMDP)-based TSA, which dynamically prioritizes tips with lower confirmation likelihood, reducing orphan transactions and enhancing network throughput. By leveraging probabilistic decision making and the Monte Carlo tree search, the proposed TSA efficiently selects tips based on long-term impact rather than immediate transaction weight. The algorithm is rigorously evaluated against seven existing TSAs, including Random Walk, Unweighted TSA, Weighted TSA, Hybrid TSA-1, Hybrid TSA-2, E-IOTA, and G-IOTA, under various network conditions. The experimental results demonstrate that the POMDP-based TSA achieves a confirmation rate of 89–94%, reduces the orphan tip rate to 1–5%, and completely eliminates lazy tips (0%). Additionally, the proposed method ensures stable scalability and high security resilience, making it a robust and efficient solution for decentralized ledger networks. These findings highlight the potential of reinforcement learning-driven TSAs to enhance fairness, efficiency, and robustness in DAG-based blockchain systems. This work paves the way for future research into adaptive and scalable consensus mechanisms for the IOTA Tangle.
Full article
(This article belongs to the Special Issue The Internet of Things—Current Trends, Applications, and Future Challenges (2nd Edition))
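The paper's POMDP and Monte Carlo tree search machinery is far richer, but the core bias it introduces — favouring tips whose confirmation looks unlikely — can be sketched in a few lines (the `p_confirm` field and the weighting formula are assumptions of this toy, not the authors' model):

```python
import random

def select_tip(tips, rng):
    """Pick a tip with probability inversely related to its estimated
    confirmation likelihood, so neglected tips are favoured."""
    # Weight = 1 - confirmation likelihood, plus a floor to stay positive.
    weights = [1.0 - t["p_confirm"] + 1e-6 for t in tips]
    r = rng.random() * sum(weights)
    acc = 0.0
    for tip, w in zip(tips, weights):
        acc += w
        if r <= acc:
            return tip
    return tips[-1]
```

Over many selections, a tip with `p_confirm = 0.05` is chosen far more often than one with `p_confirm = 0.95`, which is exactly the pressure that shrinks the orphan-tip pool.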
Open Access Article
Computer-Driven Assessment of Weighted Attributes for E-Learning Optimization
by
Olga Ovtšarenko and Elena Safiulina
Computers 2025, 14(4), 116; https://doi.org/10.3390/computers14040116 - 23 Mar 2025
Abstract
Computer-driven assessment has revolutionized the way educational and professional assessments are conducted. Using artificial intelligence for data analytics, computer-based assessment improves efficiency, accuracy, and optimization of learning across disciplines. Optimizing e-learning requires a structured approach to analyzing learners’ progress and adjusting instruction accordingly. Although learning effectiveness is influenced by numerous parameters, competency-based assessment provides a structured and measurable way to evaluate learners’ achievements. This study explores the application of artificial intelligence algorithms to optimize e-learners’ studying within a generalized e-course framework. A competency-based assessment model was developed using weighted parameters derived from Bloom’s taxonomy. The key contribution of this work is an innovative method for calculating competency scores using weighted attributes and a dynamic assessment parameter, making the optimization process applicable to both learners and instructors. The results indicate that using the weighted attribute method with a dynamic assessment parameter can improve the structuring of e-courses, increase learner engagement, and provide instructors with a clearer understanding of learners’ progress. The proposed approach supports data-driven decision making in e-learning, ensuring a personalized learning experience, and improving overall learning outcomes.
Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
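A minimal sketch of the weighted-attribute idea, assuming hypothetical Bloom-level weights and a scalar dynamic assessment parameter (the actual weights and the parameter's definition are the paper's contribution and are not reproduced here):

```python
# Hypothetical weights: higher Bloom levels count for more.
BLOOM_WEIGHTS = {
    "remember": 1, "understand": 2, "apply": 3,
    "analyze": 4, "evaluate": 5, "create": 6,
}

def competency_score(results, dynamic=1.0):
    """Weighted average of per-task scores (each in [0, 1]), scaled by a
    dynamic assessment parameter (e.g. a recency or attempt factor)."""
    num = sum(BLOOM_WEIGHTS[level] * score for level, score in results)
    den = sum(BLOOM_WEIGHTS[level] for level, _ in results)
    return dynamic * num / den
```

For example, full marks on a "remember" task contribute far less to the score than full marks on a "create" task, which is what makes the aggregate competency-sensitive rather than a flat average.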
Open Access Article
ForensicTwin: Incorporating Digital Forensics Requirements Within a Digital Twin
by
Aymen Akremi
Computers 2025, 14(4), 115; https://doi.org/10.3390/computers14040115 - 22 Mar 2025
Abstract
The Digital Twin (DT) technology shifts the monitoring and control of physical assets into cyberspace through IoT, network, and simulation technologies. However, new challenges have arisen regarding the admissibility of evidence collected from Digital Twin environments. In this paper, we examine the features and challenges that the Digital Twin technology presents to digital forensic science. We propose a new architectural model to guide the implementation of a forensically sound environment. Additionally, we introduce a new knowledge model representation that encompasses all forensic requirements to ensure the admissibility of evidence replicas. We propose a new forensic adversary model to formally analyze the preservation of forensic requirements.
Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
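One standard building block for evidence admissibility — integrity of the record trail — can be illustrated with a hash-chained log (a generic sketch, not the paper's knowledge model or adversary model; the record fields are assumptions):

```python
import hashlib
import json

def append_record(chain, payload):
    """Append an evidence record whose digest covers the previous entry,
    so any later tampering breaks every subsequent link."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "digest": digest})
    return chain

def verify(chain):
    """Recompute every digest and check each link to its predecessor."""
    prev = "0" * 64
    for rec in chain:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        ok = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or ok != rec["digest"]:
            return False
        prev = rec["digest"]
    return True
```

Altering any stored payload after the fact makes `verify` fail, which is the kind of tamper-evidence a forensically sound DT environment has to provide for evidence replicas.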
Open Access Article
Efficient Orchestration of Distributed Workloads in Multi-Region Kubernetes Cluster
by
Radoslav Furnadzhiev, Mitko Shopov and Nikolay Kakanakov
Computers 2025, 14(4), 114; https://doi.org/10.3390/computers14040114 - 21 Mar 2025
Abstract
Distributed Kubernetes clusters provide robust solutions for geo-redundancy and fault tolerance in modern cloud architectures. However, default scheduling mechanisms primarily optimize for resource availability, often neglecting network topology, inter-node latency, and global resource efficiency, leading to suboptimal task placement in multi-region deployments. This paper proposes network-aware scheduling plugins that integrate heuristic, metaheuristic, and linear programming methods to optimize resource utilization and inter-zone communication latency for containerized workloads, particularly Apache Spark batch-processing tasks. Unlike the default scheduler, the presented approach incorporates inter-node latency constraints and prioritizes locality-aware scheduling, ensuring efficient pod distribution while minimizing network overhead. The proposed plugins are evaluated using the kube-scheduler-simulator, a tool that replicates Kubernetes scheduling behavior without deploying real workloads. Experiments cover multiple cluster configurations, varying in node count, region count, and inter-region latencies, with performance metrics recorded for scheduler efficiency, inter-zone communication impact, and execution time across different optimization algorithms. The obtained results indicate that network-aware scheduling approaches significantly improve latency-aware placement decisions, achieving lower inter-region communication delays while maintaining resource efficiency.
Full article
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (2nd Edition))
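Real scheduler plugins are written in Go against the kube-scheduler framework; the scoring idea, though, is easy to sketch (the node fields, the saturation constant, and the `alpha` trade-off are illustrative assumptions, not the paper's plugin):

```python
def score_node(node, pod_zone, latency_ms, alpha=0.5):
    """Blend a free-resource score with an inter-zone latency penalty.
    latency_ms maps (zone_a, zone_b) -> round-trip latency in ms."""
    free = node["free_cpu"] / node["total_cpu"]   # in [0, 1]
    lat = latency_ms.get((pod_zone, node["zone"]), 0.0)
    penalty = lat / (lat + 10.0)                  # saturating, in [0, 1)
    return (1 - alpha) * free - alpha * penalty

def pick_node(nodes, pod_zone, latency_ms):
    """Choose the highest-scoring node for a pod pinned to pod_zone."""
    return max(nodes, key=lambda n: score_node(n, pod_zone, latency_ms))
```

With equal free resources, the local-zone node wins; only a large resource imbalance can justify paying the inter-region latency, which is the locality-aware behaviour the default scheduler lacks.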
Open Access Article
Fine-Tuned RoBERTa Model for Bug Detection in Mobile Games: A Comprehensive Approach
by
Muhammad Usman, Muhammad Ahmad, Fida Ullah, Muhammad Muzamil, Ameer Hamza, Muhammad Jalal and Alexander Gelbukh
Computers 2025, 14(4), 113; https://doi.org/10.3390/computers14040113 - 21 Mar 2025
Abstract
In the current digital era, the Google Play Store and the App Store are major platforms for the distribution of mobile applications and games. Billions of users regularly download mobile games and provide reviews, which serve as a valuable resource for game vendors and developers, offering insights into bug reports, feature suggestions, and documentation of existing functionalities. This study showcases an innovative application of fine-tuned RoBERTa for detecting bugs in mobile phone games, highlighting advanced classification capabilities. This approach will increase player satisfaction, lead to higher ratings, and improve brand reputation for game developers, while also reducing development costs and saving time in creating high-quality games. To achieve this goal, a new bug detection dataset was created. Initially, data were sourced from four top-rated mobile games from multiple domains on the Google Play Store and the App Store, focusing on bugs, using the Google Play API and App Store API. Subsequently, the data were organized into two labeling schemes: binary and multi-class. The Logistic Regression, Convolutional Neural Network (CNN), and pre-trained Robustly Optimized BERT Approach (RoBERTa) algorithms were used to compare the results. We explored the strength of pre-trained RoBERTa, which demonstrated its ability to capture both semantic nuances and contextual information within textual content. The results showed that pre-trained RoBERTa significantly outperformed the baseline model (Logistic Regression), achieving superior performance with a 5.49% improvement in binary classification and an 8.24% improvement in multi-class classification, resulting in cross-validation scores of 96% and 92%, respectively.
Full article
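The paper's models (RoBERTa, CNN) need real training infrastructure, but the bag-of-words idea behind the Logistic Regression baseline can be sketched with a toy perceptron on invented review snippets (the data, labels, and tokenizer here are assumptions, not the paper's dataset):

```python
from collections import defaultdict

def tokenize(text):
    return text.lower().split()

def train_perceptron(samples, epochs=10):
    """Bag-of-words perceptron: label +1 = bug report, -1 = other."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in samples:
            pred = 1 if sum(w[t] for t in tokenize(text)) > 0 else -1
            if pred != label:          # mistake-driven weight update
                for t in tokenize(text):
                    w[t] += label
    return w

def predict(w, text):
    return 1 if sum(w[t] for t in tokenize(text)) > 0 else -1
```

A transformer like RoBERTa replaces these isolated word counts with contextual embeddings, which is where the reported accuracy gap over the linear baseline comes from.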
Open Access Article
Algorithmic Generation of Realistic 3D Graphics for Liquid Surfaces Within Arbitrary-Form Vessels in a Virtual Laboratory and Application in Flow Simulation
by
Dimitrios S. Karpouzas, Vasilis Zafeiropoulos and Dimitris Kalles
Computers 2025, 14(3), 112; https://doi.org/10.3390/computers14030112 - 20 Mar 2025
Abstract
Hellenic Open University has developed Onlabs, a virtual biology laboratory designed to safely and effectively prepare its students for hands-on work in the university’s on-site labs. This platform simulates key experimental processes, such as 10X TBE solution preparation, agarose gel preparation and electrophoresis, which involve liquid transfers between bottles. However, accurately depicting liquid volumes and their flow within complex-shaped laboratory vessels, such as Erlenmeyer flasks and burettes, remains a challenge. This paper addresses this limitation by introducing a unified parametric framework for modeling circular cross-section pipes, including straight pipes with a constant diameter, curved pipes with a constant diameter and straight conical pipes. Analytical expressions are developed to define the position and orientation of points along a pipe’s central axis, as well as the surface geometry of composite pipes formed by combining these elements in planar configurations. Moreover, the process of surface discretization with finite triangular elements is analyzed with the aim of optimizing their representation during the algorithmic implementation. Functions giving the filled length as a function of the contained volume are developed for each considered container shape. Finally, the methodology for handling and combining the analytical expressions during the filling of a composite pipe is explained, the filling of certain characteristic bottles is implemented and the results of the implementations are presented. The primary goal is to enable the precise algorithmic generation of 3D graphics representing the surfaces of liquids within various laboratory vessels and, subsequently, the simulation of their flow. By leveraging these parametric models, liquid volumes can be accurately visualized in a way that reflects the vessels’ geometries, improving the realism of the simulations, and the filling of various vessels can be realistically simulated.
Full article
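For the simplest of the three pipe elements, the straight conical pipe, the height-to-volume relation is the classic cone-frustum formula, and its inverse (volume to fill height, which the rendering needs) follows by bisection because the volume grows monotonically with height (the variable names are this sketch's, not the paper's notation):

```python
import math

def fill_volume(h, H, r0, r1):
    """Liquid volume in a straight conical pipe of axial length H,
    radius r0 at the bottom and r1 at the top, filled to height h."""
    r = r0 + (r1 - r0) * h / H            # radius at the liquid surface
    return math.pi * h * (r0 * r0 + r0 * r + r * r) / 3.0

def fill_height(V, H, r0, r1, tol=1e-9):
    """Invert fill_volume by bisection: the height that holds volume V."""
    lo, hi = 0.0, H
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if fill_volume(mid, H, r0, r1) < V:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Composite pipes would chain such per-element functions, subtracting each element's capacity from the remaining volume before inverting within the element the surface currently sits in.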
Highly Accessed Articles
Latest Books
E-Mail Alert
News
2 April 2025
MDPI INSIGHTS: The CEO's Letter #21 - Annual Report, Swiss Consortium, IWD, ICARS, Serbia
30 March 2025
Meet Us at the 52nd CAA International Conference (Digital Horizons: Embracing Heritage in an Evolving World), 5–9 May 2025, Athens, Greece

Topics
Topic in
AI, Buildings, Computers, Drones, Entropy, Symmetry
Applications of Machine Learning in Large-Scale Optimization and High-Dimensional Learning
Topic Editors: Jeng-Shyang Pan, Junzo Watada, Vaclav Snasel, Pei Hu
Deadline: 30 April 2025
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2025
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds, IJGI
Simulations and Applications of Augmented and Virtual Reality, 2nd Edition
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang
Deadline: 20 June 2025
Topic in
Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2025

Conferences
Special Issues
Special Issue in
Computers
Computational Science and Its Applications 2024 (ICCSA 2024)
Guest Editor: Osvaldo Gervasi
Deadline: 15 April 2025
Special Issue in
Computers
Edge and Fog Computing for Internet of Things Systems (2nd Edition)
Guest Editors: Luís Nogueira, Jorge Coelho
Deadline: 20 April 2025
Special Issue in
Computers
Smart Learning Environments
Guest Editor: Ananda Maiti
Deadline: 30 April 2025
Special Issue in
Computers
Future Trends in Computer Programming Education
Guest Editor: Stelios Xinogalos
Deadline: 31 May 2025