Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.3 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Integrative Federated Learning Framework for Multimodal Parkinson’s Disease Biomarker Fusion
Computers 2025, 14(9), 388; https://doi.org/10.3390/computers14090388 - 15 Sep 2025
Abstract
Accurate and early diagnosis of Parkinson’s Disease (PD) is challenged by the diverse manifestations of motor and non-motor symptoms across different patients. Existing studies largely rely on limited datasets and biomarkers. In this extended research, we propose a comprehensive Federated Learning (FL) framework designed to integrate heterogeneous biomarkers through multimodal combinations—such as EEG–fMRI pairs, continuous speech with vowel pronunciation, and the fusion of EEG, gait, and accelerometry data—drawn from diverse sources and modalities. By processing data separately at client nodes and performing feature and decision fusion at a central server, our method preserves privacy and enables robust PD classification. Experimental results show accuracies exceeding 85% across multiple fusion techniques, with attention-based fusion reaching 97.8% for Freezing of Gait (FoG) detection. Our framework advances scalable, privacy-preserving, multimodal diagnostics for PD.
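As a rough illustration only (not the authors' implementation), the sketch below shows how per-modality feature vectors computed at client nodes might be concatenated for feature-level fusion and how per-client predictions might be averaged for decision-level fusion at the server; all function and variable names are hypothetical.

```python
import numpy as np

# Hypothetical per-client feature extractors: each client holds one modality
# locally and only shares a fixed-length feature vector with the server.
def client_features(raw_signal: np.ndarray, dim: int = 8) -> np.ndarray:
    # Stand-in for a locally trained encoder (e.g., EEG, gait, or speech model).
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((raw_signal.shape[-1], dim))
    return raw_signal @ proj

def server_feature_fusion(client_vectors: list[np.ndarray]) -> np.ndarray:
    # Feature-level fusion: concatenate modality features into one vector.
    return np.concatenate(client_vectors, axis=-1)

def server_decision_fusion(client_probs: list[float]) -> float:
    # Decision-level fusion: average per-client PD probabilities.
    return float(np.mean(client_probs))

if __name__ == "__main__":
    eeg = np.random.rand(16)        # toy stand-ins for client-held modalities
    gait = np.random.rand(32)
    fused = server_feature_fusion([client_features(eeg), client_features(gait)])
    print(fused.shape)                           # (16,)
    print(server_decision_fusion([0.91, 0.78]))  # 0.845
```

Only the derived vectors and probabilities cross the network in this sketch, which is the privacy-preserving property the abstract emphasizes.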
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Integrating Large Language Models with Near Real-Time Web Crawling for Enhanced Job Recommendation Systems
by
David Gauhl, Kevin Kakkanattu, Melbin Mukkattu and Thomas Hanne
Computers 2025, 14(9), 387; https://doi.org/10.3390/computers14090387 - 15 Sep 2025
Abstract
This study addresses the limitations of traditional job recommendation systems that rely on static datasets, making them less responsive to dynamic job market changes. While existing job platforms address job search with an untransparent logic following their business goals, job seekers may benefit from a solution actively and dynamically crawling and evaluating job offers from a variety of sites according to their objectives. To address this gap, a hybrid system was developed that integrates large language models (LLMs) for semantic analysis with near real-time data acquisition through web crawling. The system extracts and ranks job-specific keywords from user inputs, such as resumes, while dynamically retrieving job listings from online platforms. User evaluations indicated strong performance in keyword extraction and system usability but revealed challenges in web crawler performance, affecting recommendation accuracy. Compared with a state-of-the-art commercial tool, user tests indicate a smaller accuracy of our prototype but a higher functionality satisfaction. Test users highlighted its great potential for further development. The results highlight the benefits of combining LLMs and web crawling while emphasizing the need for improved near real-time data handling to enhance recommendation precision and user satisfaction.
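As a simplified, hypothetical illustration of the ranking step described above (not the authors' system), the snippet below scores crawled job postings against keywords extracted from a resume; in the actual prototype the keywords come from an LLM and the postings from a near real-time crawler.

```python
def score_posting(resume_keywords: dict[str, float], posting_text: str) -> float:
    """Weighted keyword-overlap score between a resume and one job posting.

    resume_keywords maps each extracted keyword to an importance weight
    (in the paper's pipeline these would come from LLM-based extraction).
    """
    text = posting_text.lower()
    return sum(weight for kw, weight in resume_keywords.items() if kw.lower() in text)

def rank_postings(resume_keywords: dict[str, float],
                  postings: dict[str, str]) -> list[tuple[str, float]]:
    # Sort crawled postings by descending relevance to the resume keywords.
    scored = [(title, score_posting(resume_keywords, body)) for title, body in postings.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    keywords = {"python": 1.0, "nlp": 0.8, "docker": 0.5}   # hypothetical LLM output
    crawled = {
        "ML Engineer": "We need Python and NLP experience, Docker a plus.",
        "Accountant": "Bookkeeping and reporting duties.",
    }
    print(rank_postings(keywords, crawled))
```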
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling (2nd Edition))
Open Access Article
An Adaptive Steganographic Method for Reversible Information Embedding in X-Ray Images
by
Elmira Daiyrbayeva, Aigerim Yerimbetova, Ekaterina Merzlyakova, Ualikhan Sadyk, Aizada Sarina, Zhamilya Taichik, Irina Ismailova, Yerbolat Iztleuov and Asset Nurmangaliyev
Computers 2025, 14(9), 386; https://doi.org/10.3390/computers14090386 - 14 Sep 2025
Abstract
The rapid digitalisation of the medical field has heightened concerns over protecting patients’ personal information during the transmission of medical images. This study introduces a method for securely transmitting X-ray images that contain embedded patient data. The proposed steganographic approach ensures that the original image remains intact while the embedded data is securely hidden, a critical requirement in medical contexts. To guarantee reversibility, the Interpolation Near Pixels method was utilised, recognised as one of the most effective techniques within reversible data hiding (RDH) frameworks. Additionally, the method integrates a statistical property preservation technique, enhancing the scheme’s alignment with ideal steganographic characteristics. Specifically, the “forest fire” algorithm partitions the image into interconnected regions, where statistical analyses of low-order bits are performed, followed by arithmetic decoding to achieve a desired distribution. This process successfully maintains the original statistical features of the image. The effectiveness of the proposed method was validated through stegoanalysis on real-world medical images from previous studies. The results revealed high robustness, with minimal distortion of stegocontainers, as evidenced by high PSNR values ranging between 52 and 81 dB.
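As a loose illustration of the general idea behind interpolation-based reversible data hiding (a simplified toy, not the authors' Interpolation Near Pixels scheme and without the forest-fire statistical step), the sketch below embeds bits into interpolated samples of a 1-D signal and recovers both the payload and the original samples exactly.

```python
import numpy as np

def embed(original: np.ndarray, bits: list[int]) -> np.ndarray:
    """Upsample by inserting neighbour-mean samples, then hide one bit in each
    interpolated sample by adding it to the predicted value (toy scheme)."""
    stego = np.empty(2 * len(original) - 1, dtype=np.int64)
    stego[0::2] = original                       # reference samples kept intact
    predictions = (original[:-1] + original[1:]) // 2
    payload = np.array(bits[: len(predictions)], dtype=np.int64)
    stego[1::2] = predictions + payload          # interpolated samples carry bits
    return stego

def extract(stego: np.ndarray) -> tuple[np.ndarray, list[int]]:
    """Recover the original samples and payload without loss."""
    original = stego[0::2]
    predictions = (original[:-1] + original[1:]) // 2
    bits = (stego[1::2] - predictions).tolist()
    return original, bits

if __name__ == "__main__":
    pixels = np.array([100, 104, 110, 90], dtype=np.int64)
    marked = embed(pixels, [1, 0, 1])
    recovered, payload = extract(marked)
    assert np.array_equal(recovered, pixels) and payload == [1, 0, 1]
```

Because the reference samples are never modified, the receiver can recompute the predictions and strip the payload, which is what makes the scheme reversible.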
Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
Open Access Article
Transformer Models for Paraphrase Detection: A Comprehensive Semantic Similarity Study
by
Dianeliz Ortiz Martes, Evan Gunderson, Caitlin Neuman and Nezamoddin N. Kachouie
Computers 2025, 14(9), 385; https://doi.org/10.3390/computers14090385 - 14 Sep 2025
Abstract
Semantic similarity, the task of determining whether two sentences convey the same meaning, is central to applications such as paraphrase detection, semantic search, and question answering. Despite the widespread adoption of transformer-based models for this task, their performance is influenced by both the choice of similarity measure and the underlying embedding model. This study evaluated BERT (bert-base-nli-mean-tokens), RoBERTa (all-roberta-large-v1), and MPNet (all-mpnet-base-v2) on the Microsoft Research Paraphrase Corpus (MRPC). Sentence embeddings were compared using cosine similarity, dot product, Manhattan distance, and Euclidean distance, with thresholds optimized for accuracy, balanced accuracy, and F1-score. Results indicate a consistent advantage for MPNet, which achieved the highest accuracy (75.6%), balanced accuracy (71.0%), and F1-score (0.836) when paired with cosine similarity at an optimized threshold of 0.671. BERT and RoBERTa performed competitively but exhibited greater sensitivity to the choice of similarity metric, with BERT notably underperforming when using cosine similarity compared to Manhattan or Euclidean distance. Optimal thresholds varied widely (0.334–0.867), underscoring the difficulty of establishing a single, generalizable cut-off for paraphrase classification. These findings highlight the value of tuning both similarity metrics and thresholds alongside model selection, offering practical guidance for designing high-accuracy semantic similarity systems in real-world NLP applications.
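A minimal sketch of the evaluation setup described above, assuming the sentence-transformers package and the all-mpnet-base-v2 checkpoint named in the abstract (the 0.671 threshold is the value reported there): a pair is classified as a paraphrase when cosine similarity exceeds the threshold.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

THRESHOLD = 0.671  # cosine-similarity cut-off reported for MPNet in the abstract

def is_paraphrase(sent_a: str, sent_b: str, model: SentenceTransformer) -> bool:
    emb = model.encode([sent_a, sent_b])             # two embedding vectors
    cos = float(np.dot(emb[0], emb[1]) /
                (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1])))
    return cos >= THRESHOLD

if __name__ == "__main__":
    mpnet = SentenceTransformer("all-mpnet-base-v2")
    print(is_paraphrase("The cat sat on the mat.",
                        "A cat was sitting on the mat.", mpnet))
```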
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling (2nd Edition))
Open Access Article
Narrative-Driven Digital Gamification for Motivation and Presence: Preservice Teachers’ Experiences in a Science Education Course
by
Gregorio Jiménez-Valverde, Noëlle Fabre-Mitjans and Gerard Guimerà-Ballesta
Computers 2025, 14(9), 384; https://doi.org/10.3390/computers14090384 - 14 Sep 2025
Abstract
This mixed-methods study investigated how a personalized, narrative-integrated digital gamification framework (with FantasyClass) was associated with motivation and presence among preservice elementary teachers in a science education course. The intervention combined HEXAD-informed personalization (aligning game elements with player types) with a branching storyworld, teacher-directed AI-generated narrative emails, and multimodal cues (visuals, music, scent) to scaffold presence alongside autonomy, competence, and relatedness. Thirty-four students participated in a one-group posttest design, completing an adapted 21-item PENS questionnaire and responding to two open-ended prompts. Results, which are exploratory and not intended for broad generalization or causal inference, indicated high self-reported competence and autonomy, positive but more variable relatedness, and strong presence/immersion. Subscale correlations showed that Competence covaried with Autonomy and Relatedness, while Presence/Immersion was positively associated with all other subscales, suggesting that presence may act as a motivational conduit. Thematic analysis portrayed students as active decision-makers within the narrative, linking consequential choices, visible progress, and team-based goals to agency, effectiveness, and social connection. Additional themes included coherence and organization, fun and enjoyment, novelty, extrinsic incentives, and perceived professional transferability. Overall, findings suggest that narrative presence, when coupled with player-aligned game elements, can foster engagement and motivation in STEM-oriented teacher education.
Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)
Open Access Article
Supersampling in Render CPOY: Total Annihilation
by
Grigorie Dennis Sergiu and Stanciu Ion Rares
Computers 2025, 14(9), 383; https://doi.org/10.3390/computers14090383 - 12 Sep 2025
Abstract
This paper tackles a significant problem in gaming graphics: balancing visual fidelity with performance in real time. The article introduces CPOY SR (Continuous Procedural Output Yielder for Scaling Resolution), a dynamic resolution scaling algorithm designed to enhance both performance and visual quality in real-time gaming. Unlike traditional supersampling and anti-aliasing techniques that suffer from fixed settings and hardware limitations, CPOY SR overcomes these constraints by adapting resolution during gameplay based on system resources and user activity. The method is implemented and tested in an actual game project rather than only proposed theoretically. One strong feature is that it works across diverse systems, from low-end laptops to high-end machines. The algorithm utilizes mathematical constraints such as Mathf.Clamp to ensure numerical robustness during scaling and avoids manual reconfiguration. Testing was carried out across multiple hardware configurations and resolutions (up to 8K); the approach demonstrated consistent visual fidelity with optimized performance. The research integrates visual rendering, resolution scaling, and anti-aliasing techniques, offering a scalable solution for immersive gameplay. This article outlines the key components and development phases that contribute to the creation of this engaging and visually impressive gaming experience.
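The abstract mentions clamping (Mathf.Clamp in Unity) to keep the scale factor within safe bounds while it adapts to frame time. A language-agnostic sketch of that kind of control loop, under assumed target frame time and bounds (not the CPOY SR algorithm itself), could look like this:

```python
def clamp(value: float, lo: float, hi: float) -> float:
    # Equivalent of Unity's Mathf.Clamp: keep value inside [lo, hi].
    return max(lo, min(hi, value))

def update_resolution_scale(scale: float, frame_ms: float,
                            target_ms: float = 16.7,   # ~60 FPS budget (assumed)
                            step: float = 0.05,
                            lo: float = 0.5, hi: float = 2.0) -> float:
    """Raise the render scale when the frame finished early, lower it when the
    frame ran long, and clamp the result so scaling never leaves safe bounds."""
    if frame_ms > target_ms:
        scale -= step
    elif frame_ms < 0.8 * target_ms:
        scale += step
    return clamp(scale, lo, hi)

if __name__ == "__main__":
    s = 1.0
    for ms in (22.0, 21.0, 12.0, 11.0):   # simulated per-frame timings
        s = update_resolution_scale(s, ms)
        print(round(s, 2))
```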
Full article
(This article belongs to the Section Human–Computer Interactions)
Open Access Article
GraphTrace: A Modular Retrieval Framework Combining Knowledge Graphs and Large Language Models for Multi-Hop Question Answering
by
Anna Osipjan, Hanieh Khorashadizadeh, Akasha-Leonie Kessel, Sven Groppe and Jinghua Groppe
Computers 2025, 14(9), 382; https://doi.org/10.3390/computers14090382 - 11 Sep 2025
Abstract
This paper introduces GraphTrace, a novel retrieval framework that integrates a domain-specific knowledge graph (KG) with a large language model (LLM) to improve information retrieval for complex, multi-hop queries. Built on structured economic data related to the COVID-19 pandemic, GraphTrace adopts a modular architecture comprising entity extraction, path finding, query decomposition, semantic path ranking, and context aggregation, followed by LLM-based answer generation. GraphTrace is compared against baseline retrieval-augmented generation (RAG) and graph-based RAG (GraphRAG) approaches in both retrieval and generation settings. Experimental results show that GraphTrace consistently outperforms the baselines across evaluation metrics, particularly in handling mid-complexity (5–6-hop) queries and achieving top scores in directness during the generation evaluation. These gains are attributed to GraphTrace’s alignment of semantic reasoning with structured KG traversal, combining modular components for more targeted and interpretable retrieval.
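The modular stages named above (entity extraction, path finding, query decomposition, semantic path ranking, context aggregation, answer generation) suggest a pipeline structure; the skeleton below is only an interface-level sketch with stub implementations and hypothetical names, not the GraphTrace code.

```python
from dataclasses import dataclass

@dataclass
class RetrievalContext:
    entities: list[str]
    paths: list[list[str]]
    context: str

def extract_entities(query: str) -> list[str]:
    # Stub: in a real system this would link query mentions to KG entities.
    return [tok for tok in query.split() if tok.istitle()]

def find_paths(entities: list[str], max_hops: int = 6) -> list[list[str]]:
    # Stub: multi-hop traversal of the knowledge graph between linked entities.
    return [entities] if len(entities) > 1 else []

def rank_paths(query: str, paths: list[list[str]]) -> list[list[str]]:
    # Stub: semantic ranking of candidate paths against the (decomposed) query.
    return sorted(paths, key=len)

def aggregate_context(paths: list[list[str]]) -> str:
    # Stub: verbalise the top-ranked paths into a prompt context for the LLM.
    return "; ".join(" -> ".join(p) for p in paths)

def retrieve(query: str) -> RetrievalContext:
    entities = extract_entities(query)
    paths = rank_paths(query, find_paths(entities))
    return RetrievalContext(entities, paths, aggregate_context(paths))

if __name__ == "__main__":
    print(retrieve("How did the pandemic affect Germany exports to France?"))
```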
Full article
Open Access Article
Scaling Linearizable Range Queries on Modern Multi-Cores
by
Chen Zhang, Zhengming Yi and Xinghui Zhu
Computers 2025, 14(9), 381; https://doi.org/10.3390/computers14090381 - 11 Sep 2025
Abstract
In this paper we introduce Range Query Timestamp Counter (RQ-TSC), a general approach to provide scalable and linearizable range query operations for highly concurrent lock-based data structures. RQ-TSC is a multi-versioned building block that relies on hardware timestamps (e.g., obtained through hardware timestamp counter register on x86_64) to generate version timestamps, which greatly reduce a point of contention on a shared atomic counter. To evaluate the performance of RQ-TSC, we apply it to three data structures: a linked list, a skip list, and a binary search tree. Experiments show that our approach can improve scalability significantly. Moreover, in almost all cases, range queries on these data structures built from our design perform as well as or better than state-of-the-art concurrent data structures that support linearizable range queries.
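The abstract's key idea is to derive version timestamps from a hardware counter instead of a shared atomic counter. The sketch below illustrates just that versioning idea in simplified, single-threaded form, using Python's monotonic nanosecond clock as a stand-in for the x86_64 timestamp counter; the real RQ-TSC operates on lock-based concurrent data structures, which are not modeled here.

```python
import time
from bisect import bisect_right

class VersionedCell:
    """Keeps (timestamp, value) versions so a range query can read a snapshot."""

    def __init__(self, value: int):
        self.versions: list[tuple[int, int]] = [(time.monotonic_ns(), value)]

    def write(self, value: int) -> None:
        # Version timestamp comes from a hardware-like clock, so concurrent
        # writers never contend on one shared atomic counter.
        self.versions.append((time.monotonic_ns(), value))

    def read_at(self, ts: int) -> int:
        # Latest version whose timestamp is not newer than the snapshot.
        idx = bisect_right(self.versions, (ts, float("inf"))) - 1
        return self.versions[idx][1]

def range_query(cells: list[VersionedCell]) -> list[int]:
    snapshot_ts = time.monotonic_ns()      # linearization point of the query
    return [cell.read_at(snapshot_ts) for cell in cells]

if __name__ == "__main__":
    data = [VersionedCell(v) for v in (1, 2, 3)]
    data[1].write(20)                      # update made before the snapshot is visible
    print(range_query(data))               # [1, 20, 3]
```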
Full article
Open Access Article
Towards Navigating Ethical Challenges in AI-Driven Healthcare Ad Moderation
by
Abraham Abby Sen, Jeen Mariam Joy and Murray E. Jennex
Computers 2025, 14(9), 380; https://doi.org/10.3390/computers14090380 - 11 Sep 2025
Abstract
The growing use of AI-driven content moderation on social media platforms has intensified ethical concerns, particularly in the context of healthcare advertising and misinformation. While artificial intelligence offers scale and efficiency, it lacks the moral judgment, contextual understanding, and interpretive flexibility required to navigate complex health-related discourse. This paper addresses these challenges by integrating normative ethical theory with organizational practice to evaluate the limitations of AI in moderating healthcare content. Drawing on deontological, utilitarian, and virtue ethics frameworks, the analysis explores the tensions between ethical ideals and real-world implementation. Building on this foundation, the paper proposes a set of normative guidelines that emphasize hybrid human–AI moderation, transparency, the redesign of success metrics, and the cultivation of ethical organizational cultures. To institutionalize these principles, we introduce a governance framework that includes internal accountability structures, external oversight mechanisms, and adaptive processes for handling ambiguity, disagreement, and evolving standards. By connecting ethical theory with actionable design strategies, this study provides a roadmap for responsible and context-sensitive AI moderation in the digital healthcare ecosystem.
Full article
(This article belongs to the Section AI-Driven Innovations)
Open Access Article
Integrating Design Thinking Approach and Simulation Tools in Smart Building Systems Education: A Case Study on Computer-Assisted Learning for Master’s Students
by
Andrzej Ożadowicz
Computers 2025, 14(9), 379; https://doi.org/10.3390/computers14090379 - 9 Sep 2025
Abstract
The rapid development of smart home and building technologies requires educational methods that facilitate the integration of theoretical knowledge with practical, system-level design skills. Computer-assisted tools play a crucial role in this process by enabling students to experiment with complex Internet of Things (IoT) and building automation ecosystems in a risk-free, iterative environment. This paper proposes a pedagogical framework that integrates simulation-based prototyping with collaborative and spatial design tools, supported by elements of design thinking and blended learning. The approach was implemented in a master’s-level Smart Building Systems course, to engage students in interdisciplinary projects where virtual modeling, digital collaboration, and contextualized spatial design were combined to develop user-oriented smart space concepts. Analysis of project outcomes and student feedback indicated that the use of simulation and visualization platforms may enhance technical competencies, creativity, and engagement. The proposed framework contributes to engineering education by demonstrating how computer-assisted environments can effectively support practice-oriented, user-centered learning. Its modular and scalable structure makes it applicable across IoT- and automation-focused curricula, aligning academic training with the hybrid workflows of contemporary engineering practice. Concurrently, areas for enhancement and modification were identified to optimize support for group and creative student work.
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
Open Access Article
Pre-During-After Software Development Documentation (PDA-SDD): A Phase-Based Approach for Comprehensive Software Documentation in Modern Development Paradigms
by
Abdullah A. H. Alzahrani
Computers 2025, 14(9), 378; https://doi.org/10.3390/computers14090378 - 9 Sep 2025
Abstract
Persistent challenges in software documentation, particularly limitations in generality, simplicity, and efficiency of existing models, impede effective software development. To address these, this research proposes a novel phase-based and holistic software documentation model (PDA-SDD). This model was subsequently evaluated using a digital questionnaire distributed to 150 software development and documentation experts, achieving a 48% response rate (n = 72). The evaluation focused on assessing the proposed model’s generality, simplicity, and efficiency. Findings indicate that while certain sub-models (e.g., SRSD, RLD) were positively received across all criteria and the overall model demonstrated strong perceived generality and efficiency in specific aspects, areas for improvement were identified, particularly regarding terminological consistency and user-friendliness. This study contributes to the understanding of the complexities in achieving a universally effective software documentation model and highlights key considerations for future research and development in this critical area of software engineering.
Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
Open Access Article
The Learning Style Decoder: FSLSM-Guided Behavior Mapping Meets Deep Neural Prediction in LMS Settings
by
Athanasios Angeioplastis, John Aliprantis, Markos Konstantakis, Dimitrios Varsamis and Alkiviadis Tsimpiris
Computers 2025, 14(9), 377; https://doi.org/10.3390/computers14090377 - 8 Sep 2025
Abstract
Personalized learning environments increasingly rely on learner modeling techniques that integrate both explicit and implicit data sources. This study introduces a hybrid profiling methodology that combines psychometric data from an extended Felder–Silverman Learning Style Model (FSLSM) questionnaire with behavioral analytics derived from Moodle Learning Management System interaction logs. A structured mapping process was employed to associate over 200 unique log event types with FSLSM cognitive dimensions, enabling dynamic, behavior-driven learner profiles. Experiments were conducted across three datasets: a university dataset from the International Hellenic University, a public dataset from Kaggle, and a combined dataset totaling over 7 million log entries. Deep learning models including a Sequential Neural Network, BiLSTM, and a pretrained MLSTM-FCN were trained to predict student performance across regression and classification tasks. Results indicate moderate predictive validity: binary classification achieved practical, albeit imperfect accuracy, while three-class and regression tasks performed close to baseline levels. These findings highlight both the potential and the current constraints of log-based learner modeling. The contribution of this work lies in providing a reproducible integration framework and pipeline that can be applied across datasets, offering a realistic foundation for further exploration of scalable, data-driven personalization.
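To make the mapping step concrete, the fragment below shows one hypothetical way Moodle event names could be tallied into FSLSM dimension scores; the actual study maps over 200 event types, and both the event names and weights here are illustrative only.

```python
from collections import Counter

# Hypothetical mapping of Moodle log event types to FSLSM dimensions.
EVENT_TO_DIMENSION = {
    "mod_forum\\event\\post_created": "active",
    "mod_resource\\event\\course_module_viewed": "reflective",
    "mod_quiz\\event\\attempt_submitted": "sensing",
    "mod_page\\event\\course_module_viewed": "verbal",
    "mod_video\\event\\media_viewed": "visual",
}

def profile_from_logs(event_log: list[str]) -> dict[str, float]:
    """Turn a student's raw event stream into normalised FSLSM dimension scores."""
    counts = Counter(EVENT_TO_DIMENSION[e] for e in event_log if e in EVENT_TO_DIMENSION)
    total = sum(counts.values()) or 1
    return {dim: round(n / total, 2) for dim, n in counts.items()}

if __name__ == "__main__":
    log = ["mod_forum\\event\\post_created",
           "mod_video\\event\\media_viewed",
           "mod_video\\event\\media_viewed"]
    print(profile_from_logs(log))   # {'active': 0.33, 'visual': 0.67}
```

Profiles built this way can then be fed, alongside the questionnaire scores, into the predictive models the study describes.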
Full article
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
Open Access Article
Interoperable Semantic Systems in Public Administration: AI-Driven Data Mining from Law-Enforcement Reports
by
Alexandros Z. Spyropoulos and Vassilis Tsiantos
Computers 2025, 14(9), 376; https://doi.org/10.3390/computers14090376 - 8 Sep 2025
Abstract
The digitisation of law-enforcement archives is examined with the aim of moving from static analogue records to interoperable semantic information systems. A step-by-step framework for optimal digitisation is proposed, grounded in archival best practice and enriched with artificial-intelligence and semantic-web technologies. Emphasis is placed on semantic data representation, which renders information actionable, searchable, interlinked, and automatically processed. As a proof of concept, a large language model—OpenAI ChatGPT, version o3—was applied to a corpus of narrative police reports, extracting and classifying key entities (metadata, persons, addresses, vehicles, incidents, fingerprints, and inter-entity relationships). The output was converted to Resource Description Framework triples and ingested into a triplestore, demonstrating how unstructured text can be transformed into machine-readable, interoperable data with minimal human intervention. The approach’s challenges—technical complexity, data quality assurance, information-security requirements, and staff training—are analysed alongside the opportunities it affords, such as accelerated access to records, cross-agency interoperability, and advanced analytics for investigative and strategic decision-making. Combining systematic digitisation, AI-driven data extraction, and rigorous semantic modelling ultimately delivers a fully interoperable information environment for law-enforcement agencies, enhancing efficiency, transparency, and evidentiary integrity.
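To illustrate the final representation step (extracted entities converted to RDF and stored in a triplestore), a minimal sketch using the rdflib package is given below; the namespace, class names, and report fields are hypothetical and do not reflect the ontology used in the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/police-report/")   # hypothetical namespace

def report_to_triples(report_id: str, person: str, vehicle_plate: str) -> Graph:
    """Turn a few fields extracted by the LLM into RDF triples."""
    g = Graph()
    g.bind("ex", EX)
    report = EX[report_id]
    g.add((report, RDF.type, EX.IncidentReport))
    g.add((report, EX.mentionsPerson, Literal(person)))
    g.add((report, EX.mentionsVehicle, Literal(vehicle_plate)))
    return g

if __name__ == "__main__":
    graph = report_to_triples("report-042", "J. Doe", "ABC-1234")
    print(graph.serialize(format="turtle"))   # ready for ingestion into a triplestore
```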
Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
Open Access Article
Rule-Based eXplainable Autoencoder for DNS Tunneling Detection
by
Giacomo De Bernardi, Giovanni Battista Gaggero, Fabio Patrone, Sandro Zappatore, Mario Marchese and Maurizio Mongelli
Computers 2025, 14(9), 375; https://doi.org/10.3390/computers14090375 - 8 Sep 2025
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) are employed in numerous fields and applications. Even if most of these approaches offer a very good performance, they are affected by the “black-box” problem. The way they operate and make decisions is complex and difficult for human users to interpret, making the systems impossible to manually adjust in case they make trivial (from a human viewpoint) errors. In this paper, we show how a “white-box” approach based on eXplainable AI (XAI) can be applied to the Domain Name System (DNS) tunneling detection problem, a cybersecurity problem already successfully addressed by “black-box” approaches, in order to make the detection explainable. The obtained results show that the proposed solution can achieve a performance comparable to the one offered by an autoencoder-based solution while offering a clear view of how the system makes its choices and the possibility of manual analysis and adjustments.
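For orientation, a toy example of the kind of human-readable rule such a white-box detector can expose is sketched below; the features and thresholds are invented for illustration, whereas the paper derives its rules from an explainable analysis of the detection task rather than from these values.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    # Character-level entropy; encoded payloads tend to look random.
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_tunneling(query_name: str) -> bool:
    """Toy rule set over DNS query names: long, high-entropy, deeply nested
    names are flagged. Thresholds are illustrative, not from the paper."""
    labels = query_name.rstrip(".").split(".")
    longest = max(labels, key=len)
    return (len(query_name) > 60
            or shannon_entropy(longest) > 4.0
            or len(labels) > 5)

if __name__ == "__main__":
    print(looks_like_tunneling("www.example.com"))   # False
    print(looks_like_tunneling(
        "dGhpcyBsb29rcyBsaWtlIGJhc2U2NCBlbmNvZGVkIGV4ZmlsdHJhdGVkIGRhdGE."
        "c2Vjb25kY2h1bms.tunnel.example.com"))        # True (name exceeds length rule)
```

Rules of this form can be inspected and adjusted by an analyst, which is the manual-tuning capability the abstract contrasts with black-box autoencoders.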
Full article
(This article belongs to the Special Issue Recent Advances in Data Mining: Methods, Trends, and Emerging Applications)
Open Access Review
Bridging Domains: Advances in Explainable, Automated, and Privacy-Preserving AI for Computer Science and Cybersecurity
by
Youssef Harrath, Oswald Adohinzin, Jihene Kaabi and Morgan Saathoff
Computers 2025, 14(9), 374; https://doi.org/10.3390/computers14090374 - 8 Sep 2025
Abstract
Artificial intelligence (AI) is rapidly redefining both computer science and cybersecurity by enabling more intelligent, scalable, and privacy-conscious systems. While most prior surveys treat these fields in isolation, this paper provides a unified review of 256 peer-reviewed publications to bridge that gap. We examine how emerging AI paradigms, such as explainable AI (XAI), AI-augmented software development, and federated learning, are shaping technological progress across both domains. In computer science, AI is increasingly embedded throughout the software development lifecycle to boost productivity, improve testing reliability, and automate decision making. In cybersecurity, AI drives advances in real-time threat detection and adaptive defense. Our synthesis highlights powerful cross-cutting findings, including shared challenges such as algorithmic bias, interpretability gaps, and high computational costs, as well as empirical evidence that AI-enabled defenses can reduce successful breaches by up to 30%. Explainability is identified as a cornerstone for trust and bias mitigation, while privacy-preserving techniques, including federated learning and local differential privacy, emerge as essential safeguards in decentralized environments such as the Internet of Things (IoT) and healthcare. Despite transformative progress, we emphasize persistent limitations in fairness, adversarial robustness, and the sustainability of large-scale model training. By integrating perspectives from two traditionally siloed disciplines, this review delivers a unified framework that not only maps current advances and limitations but also provides a foundation for building more resilient, ethical, and trustworthy AI systems.
Full article
(This article belongs to the Section AI-Driven Innovations)
Open Access Review
Electromagnetic Field Distribution Mapping: A Taxonomy and Comprehensive Review of Computational and Machine Learning Methods
by
Yiannis Kiouvrekis and Theodor Panagiotakopoulos
Computers 2025, 14(9), 373; https://doi.org/10.3390/computers14090373 - 5 Sep 2025
Abstract
Electromagnetic field (EMF) exposure mapping is increasingly important for ensuring compliance with safety regulations, supporting the deployment of next-generation wireless networks, and addressing public health concerns. While numerous surveys have addressed specific aspects of radio propagation or radio environment maps, a comprehensive and unified overview of EMF mapping methodologies has been lacking. This review bridges that gap by systematically analyzing computational, geospatial, and machine learning approaches used for EMF exposure mapping across both wireless communication engineering and public health domains. A novel taxonomy is introduced to clarify overlapping terminology—encompassing radio maps, radio environment maps, and EMF exposure maps—and to classify construction methods, including analytical models, model-based interpolation, and data-driven learning techniques. In addition, the review highlights domain-specific challenges such as indoor versus outdoor mapping, data sparsity, and model generalization, while identifying emerging opportunities in hybrid modeling, big data integration, and explainable AI. By combining perspectives from communication engineering and public health, this work provides a broader and more interdisciplinary synthesis than previous surveys, offering a structured reference and roadmap for advancing robust, scalable, and socially relevant EMF mapping frameworks.
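Among the method families the review classifies, model-based interpolation is the simplest to illustrate; the sketch below builds a toy exposure map from sparse measurements with inverse-distance weighting, a generic technique used here only as an example of that category, not a method from the review itself.

```python
import numpy as np

def idw_map(sensor_xy: np.ndarray, sensor_vals: np.ndarray,
            grid_xy: np.ndarray, power: float = 2.0) -> np.ndarray:
    """Inverse-distance-weighted interpolation of sparse EMF measurements
    onto a set of query points (one generic 'model-based interpolation')."""
    est = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        d = np.linalg.norm(sensor_xy - p, axis=1)
        if np.any(d < 1e-9):                  # query point coincides with a sensor
            est[i] = sensor_vals[np.argmin(d)]
            continue
        w = 1.0 / d ** power
        est[i] = np.sum(w * sensor_vals) / np.sum(w)
    return est

if __name__ == "__main__":
    sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    values = np.array([0.8, 0.3, 0.5])        # e.g. field strength in V/m
    queries = np.array([[5.0, 5.0], [1.0, 1.0]])
    print(idw_map(sensors, values, queries))
```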
Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
Open Access Article
Explainable Deep Kernel Learning for Interpretable Automatic Modulation Classification
by
Carlos Enrique Mosquera-Trujillo, Juan Camilo Lugo-Rojas, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(9), 372; https://doi.org/10.3390/computers14090372 - 5 Sep 2025
Abstract
Modern wireless communication systems increasingly rely on Automatic Modulation Classification (AMC) to enhance reliability and adaptability, especially in the presence of severe signal degradation. However, despite significant progress driven by deep learning, many AMC models still struggle with high computational overhead, suboptimal performance under low-signal-to-noise conditions, and limited interpretability, factors that hinder their deployment in real-time, resource-constrained environments. To address these challenges, we propose the Convolutional Random Fourier Features with Denoising Thresholding Network (CRFFDT-Net), a compact and interpretable deep kernel architecture that integrates Convolutional Random Fourier Features (CRFFSinCos), an automatic threshold-based denoising module, and a hybrid time-domain feature extractor composed of CNN and GRU layers. Our approach is validated on the RadioML 2016.10A benchmark dataset, encompassing eleven modulation types across a wide signal-to-noise ratio (SNR) spectrum. Experimental results demonstrate that CRFFDT-Net achieves an average classification accuracy that is statistically comparable to state-of-the-art models, while requiring significantly fewer parameters and offering lower inference latency. This highlights an exceptional accuracy–complexity trade-off. Moreover, interpretability analysis using GradCAM++ highlights the pivotal role of the Convolutional Random Fourier Features in the representation learning process, providing valuable insight into the model’s decision-making. These results underscore the promise of CRFFDT-Net as a lightweight and explainable solution for AMC in real-world, low-power communication systems.
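The architecture's first stage is a random Fourier feature mapping; the snippet below shows the standard sin/cos construction that approximates an RBF kernel, a generic textbook form rather than the CRFFSinCos layer from the paper, whose parameters are learned within the network.

```python
import numpy as np

def random_fourier_features(x: np.ndarray, out_dim: int, gamma: float = 1.0,
                            seed: int = 0) -> np.ndarray:
    """Map inputs x (n_samples, n_features) to out_dim sin/cos features whose
    inner products approximate the RBF kernel exp(-gamma * ||a - b||^2)."""
    rng = np.random.default_rng(seed)
    n_features = x.shape[1]
    w = rng.normal(scale=np.sqrt(2.0 * gamma), size=(n_features, out_dim // 2))
    proj = x @ w
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(out_dim // 2)

if __name__ == "__main__":
    iq_samples = np.random.randn(4, 128)       # toy stand-in for I/Q signal frames
    phi = random_fourier_features(iq_samples, out_dim=64)
    print(phi.shape)                            # (4, 64)
```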
Full article
(This article belongs to the Special Issue AI in Complex Engineering Systems)
Open Access Article
The Complexity of eHealth Architecture: Lessons Learned from Application Use Cases
by
Annalisa Barsotti, Gerl Armin, Wilhelm Sebastian, Massimiliano Donati, Stefano Dalmiani and Claudio Passino
Computers 2025, 14(9), 371; https://doi.org/10.3390/computers14090371 - 4 Sep 2025
Abstract
The rapid evolution of eHealth technologies has revolutionized healthcare, enabling data-driven decision-making and personalized care. Central to this transformation is interoperability, which ensures seamless communication among heterogeneous systems. This paper explores the critical role of interoperability, data management processes, and the use of international standards in enabling integrated healthcare solutions. We present an overview of interoperability dimensions—technical, semantic, and organizational—and align them with data management phases in a concise eHealth architecture. Furthermore, we examine two practical European use cases to demonstrate the extent of the proposed eHealth architecture, involving patients, environments, third parties, and healthcare providers.
Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2025)
Open Access Article
Evaluating Interaction Capability in a Serious Game for Children with ASD: An Operability-Based Approach Aligned with ISO/IEC 25010:2023
by
Delia Isabel Carrión-León, Milton Paúl Lopez-Ramos, Luis Gonzalo Santillan-Valdiviezo, Damaris Sayonara Tanguila-Tapuy, Gina Marilyn Morocho-Santos, Raquel Johanna Moyano-Arias, María Elena Yautibug-Apugllón and Ana Eva Chacón-Luna
Computers 2025, 14(9), 370; https://doi.org/10.3390/computers14090370 - 4 Sep 2025
Abstract
Serious games for children with Autism Spectrum Disorder (ASD) require rigorous evaluation frameworks that capture neurodivergent interaction patterns. This pilot study designed, developed, and evaluated a serious game for children with ASD, focusing on operability assessment aligned with ISO/IEC 25010:2023 standards. A repeated-measures design involved ten children with ASD from the Carlos Garbay Special Education Institute in Riobamba, Ecuador, across 25 gameplay sessions. A bespoke operability algorithm incorporating four weighted components (ease of learning, user control, interface familiarity, and message comprehension) was developed through expert consultation with certified ASD therapists. The statistical analysis used linear mixed-effects models with Kenward–Roger correction, supplemented by thorough validation including split-half reliability and partial correlations. The operability metric demonstrated excellent internal consistency (split-half reliability = 0.94, 95% CI [0.88, 0.97]) and construct validity through partial correlations controlling for performance (difficulty: r_partial = 0.42, p = 0.037). Eighty percent of sessions achieved moderate-to-high operability levels (M = 45.07, SD = 10.52). In contrast to requirements, operability consistently improved with increasing difficulty level (Easy: M = 37.04; Medium: M = 48.71; Hard: M = 53.87), indicating that individuals with enhanced capabilities advanced to harder levels. Mixed-effects modeling indicated substantial difficulty effects (H = 9.36, p = 0.009, ε2 = 0.39). This pilot study establishes preliminary evidence for operability assessment in ASD serious games, requiring larger confirmatory validation studies (n ≥ 30) to establish broader generalizability and standardized instrument integration. The positive difficulty–operability association highlights the importance of adaptive game design in supporting skill progression.
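To make the scoring concrete, the fragment below computes a weighted operability score from the four components named in the abstract; the weights shown are placeholders, since the paper derives them through expert consultation and does not list them here.

```python
# Hypothetical weights; the study assigns them via consultation with ASD therapists.
WEIGHTS = {
    "ease_of_learning": 0.30,
    "user_control": 0.30,
    "interface_familiarity": 0.20,
    "message_comprehension": 0.20,
}

def operability_score(components: dict[str, float]) -> float:
    """Weighted sum of the four operability components (each scored 0-100)."""
    missing = set(WEIGHTS) - set(components)
    if missing:
        raise ValueError(f"missing components: {missing}")
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)

if __name__ == "__main__":
    session = {
        "ease_of_learning": 62.0,
        "user_control": 55.0,
        "interface_familiarity": 48.0,
        "message_comprehension": 40.0,
    }
    print(operability_score(session))   # 52.7
```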
Full article
(This article belongs to the Section Human–Computer Interactions)
Open Access Article
LightCross: A Lightweight Smart Contract Vulnerability Detection Tool
by
Ioannis Sfyrakis, Paolo Modesti, Lewis Golightly and Minaro Ikegima
Computers 2025, 14(9), 369; https://doi.org/10.3390/computers14090369 - 3 Sep 2025
Abstract
Blockchain and smart contracts have transformed industries by automating complex processes and transactions. However, this innovation has introduced significant security concerns, potentially leading to loss of financial assets and data integrity. The focus of this research is to address these challenges by developing a tool that can enable developers and testers to detect vulnerabilities in smart contracts in an efficient and reliable way. The research contributions include an analysis of existing literature on smart contract security, along with the design and implementation of a lightweight vulnerability detection tool called LightCross. This tool runs two well-known detectors, Slither and Mythril, to analyse smart contracts. Experimental analysis was conducted using the SmartBugs curated dataset, which contains 143 vulnerable smart contracts with a total of 206 vulnerabilities. The results showed that LightCross achieves the same detection rate as SmartBugs when using the same backend detectors (Slither and Mythril) while eliminating SmartBugs’ need for a separate Docker container for each detector. Mythril detects 53% and Slither 48% of the vulnerabilities in the SmartBugs curated dataset. Furthermore, an assessment of the execution time across various vulnerability categories revealed that LightCross performs comparably to SmartBugs when using the Mythril detector, while LightCross is significantly faster when using the Slither detector. Finally, to enhance user-friendliness and relevance, LightCross presents the verification results based on OpenSCV, a state-of-the-art academic classification of smart contract vulnerabilities, aligned with the industry-standard CWE and offering improvements over the unmaintained SWC taxonomy.
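As a rough sketch of the orchestration idea (running both detectors locally and merging their findings rather than launching one Docker container per tool), the snippet below shells out to the Slither and Mythril command-line tools; the flags, contract path, and merging logic are simplified assumptions rather than LightCross code, and both tools must already be installed.

```python
import json
import subprocess

def run_detector(cmd: list[str]) -> str:
    """Run one detector CLI and return its raw stdout."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

def analyse(contract_path: str) -> dict[str, str]:
    # Flags below are simplified assumptions about the two CLIs' JSON modes.
    return {
        "slither": run_detector(["slither", contract_path, "--json", "-"]),
        "mythril": run_detector(["myth", "analyze", contract_path, "-o", "json"]),
    }

if __name__ == "__main__":
    reports = analyse("contracts/Wallet.sol")     # hypothetical contract path
    for tool, raw in reports.items():
        try:
            findings = json.loads(raw) if raw.strip() else {}
        except json.JSONDecodeError:
            findings = {"raw_output": raw}
        print(tool, type(findings).__name__)
```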
Full article
News
3 September 2025
Join Us at the MDPI at the University of Toronto Career Fair, 23 September 2025, Toronto, ON, Canada
1 September 2025
MDPI INSIGHTS: The CEO’s Letter #26 – CUJS, Head of Ethics, Open Peer Review, AIS 2025, Reviewer Recognition
Topics
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026
Topic in
AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 March 2026

Special Issues
Special Issue in
Computers
Present and Future of E-Learning Technologies (2nd Edition)
Guest Editor: Antonio Sarasa Cabezuelo
Deadline: 30 September 2025
Special Issue in
Computers
Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities
Guest Editor: Lilatul Ferdouse
Deadline: 30 September 2025
Special Issue in
Computers
Applications of Machine Learning and Artificial Intelligence for Healthcare
Guest Editor: Elias Dritsas
Deadline: 30 September 2025
Special Issue in
Computers
Artificial Intelligence in Control
Guest Editors: Mads Sloth Vinding, Ivan Maximov, Christoph Aigner
Deadline: 30 September 2025