Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed in Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science, miscellaneous)
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 17.5 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Randomized Modality Mixing with Patchwise RBF Networks for Robust Multimodal Pain Recognition
Computers 2026, 15(2), 127; https://doi.org/10.3390/computers15020127 - 14 Feb 2026
Abstract
Pain recognition based on multimodal physiological signals remains a challenge, not only because of the limited training data, but also due to the varying responses of individuals. In this article, we present a randomized modality mixing technique (Modmix) for multimodal data augmentation and a patchwise radial basis function (RBF) network designed to improve robustness on limited and highly heterogeneous data. Modmix generates new samples by randomly swapping modalities between existing data points, creating new data in a very simple but effective way. The RBF patch network divides the input into randomly selected, overlapping patches that capture local similarities between modalities. Each patch network is trained end-to-end using stochastic gradient descent. Moreover, the model's performance is further improved by training multiple networks independently and combining them into a single decision. Experiments with the two pain datasets X-ITE and BioVid were performed under limited training data conditions, where only approximately 30% of the original datasets were used for training. On both datasets, the RBF patch network achieved significant improvements for a subset of subjects, resulting in a similar or even slightly better mean accuracy compared to competing models such as the random forest and support vector machine.
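The abstract outlines Modmix only at a high level. As a rough sketch of the idea (the function name, the 50% swap rate, and the 2-D modality arrays are assumptions rather than the paper's specification), swapping whole modalities between randomly paired samples could look like this:

```python
import numpy as np

def modmix(batch: dict, rng=None) -> dict:
    """Sketch of randomized modality mixing: build augmented samples by
    randomly swapping whole modalities between pairs of existing samples.
    batch maps a modality name to an array of shape (n_samples, n_features)."""
    rng = rng or np.random.default_rng()
    n = next(iter(batch.values())).shape[0]
    partner = rng.permutation(n)                  # random pairing of samples
    mixed = {}
    for name, x in batch.items():
        swap = rng.random(n) < 0.5                # each modality swapped independently
        mixed[name] = np.where(swap[:, None], x[partner], x)
    return mixed

# Toy usage: two physiological modalities for four samples.
batch = {"eda": np.arange(8.0).reshape(4, 2), "ecg": -np.arange(12.0).reshape(4, 3)}
augmented = modmix(batch)
```

In practice one would presumably restrict swaps to samples sharing the same pain label so that the augmented labels remain valid.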
Full article
(This article belongs to the Section Human–Computer Interactions)
Open Access Article
Computing, Electronics, and Health for Everybody: A Multi-Country Workshop on Low-Cost ECG Acquisition
by Orlando Pérez-Manzo, Denis Mendoza-Cabrera, Miguel Tupac-Yupanqui, Carla Angulo and Cristian Vidal-Silva
Computers 2026, 15(2), 126; https://doi.org/10.3390/computers15020126 - 14 Feb 2026
Abstract
A persistent interdisciplinary gap continues to hinder the development of Health 4.0 educational initiatives. Biomedical Engineering programs typically emphasize physiology and instrumentation while providing limited exposure to modern software ecosystems, whereas Informatics curricula often overlook the physical and physiological foundations of bio-instrumentation. To address this dual deficiency, this paper presents a low-cost, modular educational intervention aligned with the “Computing, Electronics, and Health for Everybody” philosophy. The proposed approach is a hands-on technical workshop that translates core biomedical signal-processing concepts into an accessible learning experience using the Arduino platform and the AD8232 ECG sensor. The intervention was implemented simultaneously across universities in Chile, Peru, and Ecuador with undergraduate engineering students. Learning outcomes were evaluated using a pre–post assessment design. The results demonstrate a statistically significant improvement in participants' conceptual understanding of ECG signal components, with mean scores increasing across all evaluated dimensions. In addition, students reported higher confidence in interpreting physiological signals and applying interdisciplinary reasoning. These findings indicate that the proposed intervention effectively supports interdisciplinary learning for software-oriented engineering students by introducing core biomedical acquisition and signal-processing concepts through an accessible and scalable educational framework.
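For readers unfamiliar with the pre–post design, the analysis reduces to a paired significance test on per-student scores. A minimal sketch follows; the scores and the choice of the Wilcoxon signed-rank test are illustrative, since the abstract does not state which test was used:

```python
import numpy as np
from scipy import stats

pre = np.array([4.1, 5.0, 3.8, 6.2, 4.7, 5.5, 3.9, 6.0])    # hypothetical pre-test scores
post = np.array([6.0, 6.8, 5.1, 7.9, 6.2, 7.0, 5.4, 7.5])   # hypothetical post-test scores

# Paired, non-parametric test of whether post-test scores exceed pre-test scores.
stat, p = stats.wilcoxon(post, pre, alternative="greater")
print(f"median gain = {np.median(post - pre):.2f}, p = {p:.4f}")
```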
Full article
Open Access Article
Leveraging Structural Symmetry for IoT Security: A Recursive InterNetwork Architecture Perspective
by Peyman Teymoori and Toktam Ramezanifarkhani
Computers 2026, 15(2), 125; https://doi.org/10.3390/computers15020125 - 13 Feb 2026
Abstract
The Internet of Things (IoT) has transformed modern life through interconnected devices enabling automation across diverse environments. However, its reliance on legacy network architectures has introduced significant security vulnerabilities and efficiency challenges—for example, when Datagram Transport Layer Security (DTLS) encrypts transport-layer communications to protect IoT traffic, it simultaneously blinds intermediate proxies that need to inspect message contents for protocol translation and caching, forcing a fundamental trade-off between security and functionality. This paper presents an architectural solution based on the Recursive InterNetwork Architecture (RINA) to address these issues. We analyze current IoT network stacks, highlighting their inherent limitations—particularly how adding security at one layer often disrupts functionality at others, forcing a detrimental trade-off between security and performance. A central principle underlying our approach is the role of structural symmetry in RINA’s design. Unlike the heterogeneous, protocol-specific layers of TCP/IP, RINA exhibits recursive self-similarity: every Distributed IPC Facility (DIF), regardless of its position in the network hierarchy, instantiates identical mechanisms and offers the same interface to layers above. This architectural symmetry ensures predictable, auditable behavior while enabling policy-driven asymmetry for context-specific security enforcement. By embedding security within each layer and allowing flexible layer arrangement, RINA mitigates common IoT attacks and resolves persistent issues such as the inability of Performance Enhancing Proxies to operate on encrypted connections. We demonstrate RINA’s applicability through use cases spanning smart homes, healthcare monitoring, autonomous vehicles, and industrial edge computing, showcasing its adaptability to both RINA-native and legacy device integration. Our mixed-methods evaluation combines qualitative architectural analysis with quantitative experimental validation, providing both theoretical foundations and empirical evidence for RINA’s effectiveness. We also address emerging trends including AI-driven security and massive IoT scalability. This work establishes a conceptual foundation for leveraging recursive symmetry principles to achieve secure, efficient, and scalable IoT ecosystems.
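To make the recursive self-similarity concrete, here is a toy rendering (my own illustration, not code from the paper): every DIF exposes the same interface, stacks on another DIF, and differs only in its policies.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DIF:
    """A Distributed IPC Facility: identical mechanism at every level."""
    name: str
    lower: DIF | None = None                      # the DIF this one recurses over
    policies: dict = field(default_factory=dict)  # per-DIF (asymmetric) policies

    def send(self, payload: bytes) -> None:
        protected = self.policies.get("encrypt", lambda p: p)(payload)
        if self.lower is not None:
            self.lower.send(protected)            # recursion = stacking identical layers
        else:
            print(f"{self.name}: {len(protected)} bytes on the wire")

# Usage: an application DIF over a home DIF over a provider DIF; only the home
# DIF applies an (obviously toy) encryption policy.
wire = DIF("provider-DIF")
home = DIF("home-DIF", lower=wire, policies={"encrypt": lambda p: p[::-1]})
app = DIF("app-DIF", lower=home)
app.send(b"sensor-reading")
```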
Full article
Open Access Article
ASD Recognition Through Weighted Integration of Landmark-Based Handcrafted and Pixel-Based Deep Learning Features
by Asahi Sekine, Abu Saleh Musa Miah, Koki Hirooka, Najmul Hassan, Md. Al Mehedi Hasan, Yuichi Okuyama, Yoichi Tomioka and Jungpil Shin
Computers 2026, 15(2), 124; https://doi.org/10.3390/computers15020124 - 13 Feb 2026
Abstract
Autism Spectrum Disorder (ASD) is a neurological condition that affects communication and social interaction skills, with individuals experiencing a range of challenges that often require specialized care. Automated systems for recognizing ASD face significant challenges due to the complexity of identifying distinguishing features from facial images. This study proposes an incremental advancement in ASD recognition by introducing a dual-stream model that combines handcrafted facial-landmark features with deep learning-based pixel-level features. The model processes images through two distinct streams to capture complementary aspects of facial information. In the first stream, facial landmarks are extracted using MediaPipe (v0.10.21), with a focus on 137 symmetric landmarks. The face's position is adjusted using in-plane rotation based on eye-corner angles, and geometric features along with 52 blendshape features are processed through Dense layers. In the second stream, RGB image features are extracted using pre-trained CNNs (e.g., ResNet50V2, DenseNet121, InceptionV3) enhanced with Squeeze-and-Excitation (SE) blocks, followed by feature refinement through Global Average Pooling (GAP) and Dense layers. The outputs from both streams are fused using weighted concatenation through a softmax gate, followed by further feature refinement for classification. This hybrid approach significantly improves the ability to distinguish between ASD and non-ASD faces, demonstrating the benefits of combining geometric and pixel-based features. The model achieved an accuracy of 96.43% on the Kaggle dataset and 97.83% on the YTUIA dataset. Statistical hypothesis testing further confirms that the proposed approach provides a statistically meaningful advantage over strong baselines, particularly in terms of classification correctness and robustness across datasets. While these results are promising, they show incremental improvements over existing methods, and future work will focus on optimizing performance to exceed current benchmarks.
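As an illustration of the fusion step, a softmax gate can produce per-stream weights before concatenation. This sketch reflects my reading of "weighted concatenation through a softmax gate"; the projection design and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim_landmark: int, dim_pixel: int, dim_out: int = 256):
        super().__init__()
        self.proj_l = nn.Linear(dim_landmark, dim_out)  # landmark-stream projection
        self.proj_p = nn.Linear(dim_pixel, dim_out)     # pixel-stream projection
        self.gate = nn.Linear(2 * dim_out, 2)           # one logit per stream

    def forward(self, f_landmark, f_pixel):
        a, b = self.proj_l(f_landmark), self.proj_p(f_pixel)
        w = torch.softmax(self.gate(torch.cat([a, b], dim=-1)), dim=-1)
        # Weight each stream by its gate value, then concatenate for the classifier.
        return torch.cat([w[..., :1] * a, w[..., 1:] * b], dim=-1)

# 137 landmarks x 2 coordinates + 52 blendshapes = 326 geometric features;
# 2048 stands in for a CNN feature dimension.
fused = GatedFusion(326, 2048)(torch.randn(8, 326), torch.randn(8, 2048))
```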
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
Open Access Article
Advanced Machine Learning Techniques for Predicting Inpatient Deterioration in General Medicine
by Said Al Jaadi, Laila Al Wahaibi, Mohammed Al-Hinai, Haneen Hafiz Gaffar and Abdullah M. Al Alawi
Computers 2026, 15(2), 123; https://doi.org/10.3390/computers15020123 - 12 Feb 2026
Abstract
Inpatient deterioration, marked by ICU transfer or mortality, remains a critical challenge in hospital settings. While traditional early warning systems (EWS) have limitations, machine learning (ML) offers a promising approach for the early identification of at-risk patients. This study aimed to develop and validate multiple ML models for predicting inpatient deterioration among general medical patients using electronic health record (EHR) data. A retrospective cohort study was conducted on 524 patients admitted between January 2022 and December 2023. The dataset included demographic, clinical, and laboratory variables, with time-stamped measurements treated as distinct features. After excluding variables with >15% missing data, standard imputation was performed. The training data was balanced using the Synthetic Minority Over-sampling Technique (SMOTE), and feature selection was performed using SelectKBest. A range of models—including Logistic Regression, Random Forest, Gradient Boosting, Support Vector Machines (SVMs), and Neural Networks—were trained and evaluated using AUC, accuracy, precision, recall, and F1-score. During 5-fold cross-validation, the models demonstrated high stability, with the Random Forest achieving a mean AUC of 0.980. On the final independent test set, the optimized Random Forest model yielded the highest performance with an AUC of 0.837 and an accuracy of 85.4%. Functional status, oxygen requirements, and urea levels were identified as key predictors. ML models, particularly Random Forest, can significantly enhance the early detection of inpatient deterioration. The contribution of this work lies in its systematic comparison of multiple algorithms and its robust methodology. Future research should focus on external validation, the integration of temporal data using recurrent neural network architectures, and the application of Explainable AI (XAI) to foster clinical trust and facilitate implementation.
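The described preprocessing chain maps naturally onto an imbalanced-learn pipeline, which applies SMOTE only inside training folds. A sketch with synthetic data standing in for the EHR cohort; k, the tree count, and the class imbalance are placeholders:

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 524-patient cohort with an imbalanced outcome.
X, y = make_classification(n_samples=524, n_features=40, weights=[0.85], random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),           # oversample only the training folds
    ("select", SelectKBest(f_classif, k=20)),   # univariate feature selection
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```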
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
Open Access Editorial
Editorial: The Contemporary Landscape of Smart Learning Environments
by Ananda Maiti
Computers 2026, 15(2), 122; https://doi.org/10.3390/computers15020122 - 12 Feb 2026
Abstract
Educational Technology (EdTech) has moved decisively beyond its early role as a digital substitute for textbooks and classrooms [...]
Full article
(This article belongs to the Special Issue Smart Learning Environments)
Open Access Article
Trust-Aware Federated Graph Learning for Secure and Energy-Efficient IoT Ecosystems
by Manuel J. C. S. Reis
Computers 2026, 15(2), 121; https://doi.org/10.3390/computers15020121 - 11 Feb 2026
Abstract
The integration of Federated Learning (FL) and Graph Neural Networks (GNNs) has emerged as a promising paradigm for distributed intelligence in Internet of Things (IoT) environments. However, challenges related to trust, device heterogeneity, and energy efficiency continue to hinder scalable deployment in real-world settings. This paper presents Trust-FedGNN, a trust-aware federated graph learning framework that jointly addresses reliability, robustness, and sustainability in IoT ecosystems. The framework combines reliability-based reputation modeling, energy-aware client scheduling, and dynamic graph pruning to reduce communication overhead and energy consumption during collaborative training, while mitigating the influence of unreliable or malicious participants. Trust evaluation is explicitly decoupled from energy availability, ensuring that honest but resource-constrained devices are not penalized during aggregation. In a proof-of-concept evaluation across 10 federated clients on benchmark IoT datasets, Trust-FedGNN achieves up to 5.8% higher accuracy, 3.1% higher F1-score, and approximately 22% lower energy consumption compared with state-of-the-art federated baselines, while maintaining robustness under partial adversarial participation. These results support Trust-FedGNN as a secure, robust, and energy-efficient federated graph learning solution for heterogeneous IoT networks.
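The abstract does not give the aggregation rule; purely as an illustration of trust-aware aggregation decoupled from energy state, client updates might be weighted by normalized reputation scores (the update rule below is invented):

```python
import numpy as np

def aggregate(updates, trust):
    """Weight client model updates by normalized trust scores."""
    w = np.asarray(trust, dtype=float)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

def update_trust(trust, agreement, lr=0.1):
    """Raise trust for clients whose updates agree with the aggregate, lower it
    otherwise; deliberately independent of each device's battery or energy state."""
    return np.clip(trust + lr * (np.asarray(agreement) - 0.5), 0.0, 1.0)

updates = [np.random.randn(4) for _ in range(5)]   # toy per-client model updates
trust = np.full(5, 0.8)
global_update = aggregate(updates, trust)
trust = update_trust(trust, agreement=[0.9, 0.8, 0.2, 0.85, 0.9])  # client 3 drifts
```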
Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
Open Access Article
Dual-Stream Hybrid Attention Network for Robust Intelligent Spectrum Sensing
by Bixue Song, Yongxin Feng, Fan Zhou and Peiying Zhang
Computers 2026, 15(2), 120; https://doi.org/10.3390/computers15020120 - 11 Feb 2026
Abstract
UAV communication, leveraging high mobility and flexible deployment, is gradually becoming an important component of 6G integrated air–ground networks. With the expansion of aerial services, air–ground spectrum resources are increasingly scarce, and spectrum sharing and opportunistic access have become key technologies for improving spectrum utilization. Spectrum sensing is the prerequisite for UAVs to perform dynamic access and avoid causing interference to primary users. However, in air–ground links, the channel time variability caused by Doppler effects, carrier frequency offset, and Rician fading can weaken feature separability, making it difficult for deep learning-based spectrum sensing methods to maintain reliable detection in complex environments. In this paper, a dual-stream hybrid-attention spectrum sensing method (DSHA) is proposed, which represents the received signal simultaneously as a time-domain I/Q sequence and an STFT time-frequency map to extract complementary features and employs a hybrid attention mechanism to model key intra-branch dependencies and achieve inter-branch interaction and fusion. Furthermore, a noise-consistent paired training strategy is introduced to mitigate the bias induced by noise randomness, thereby enhancing weak-signal discrimination capability. Simulation results show that under different false-alarm constraints, the proposed method achieves higher detection probability in low-SNR scenarios as well as under fading and CFO perturbations. In addition, compared with multiple typical baselines, DSHA exhibits better robustness and generalization; under Rician channels, its detection probability is improved by about 28.6% over the best baseline.
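The two input branches are straightforward to reproduce in outline: the same received samples as a time-domain I/Q sequence and as an STFT magnitude map. Signal parameters below are arbitrary; the paper's STFT settings are not given in the abstract:

```python
import numpy as np
from scipy.signal import stft

fs = 1_000_000                                    # sample rate (Hz), assumed
t = np.arange(4096) / fs
iq = np.exp(2j * np.pi * 50_000 * t)              # toy primary-user tone
iq += (np.random.randn(t.size) + 1j * np.random.randn(t.size)) / np.sqrt(2)  # noise

seq = np.stack([iq.real, iq.imag])                # branch 1: I/Q sequence, shape (2, 4096)
_, _, Z = stft(iq, fs=fs, nperseg=256, return_onesided=False)
tf_map = np.abs(Z)                                # branch 2: |STFT| time-frequency map
print(seq.shape, tf_map.shape)
```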
Full article
(This article belongs to the Special Issue Wireless Sensor Networks in IoT)
Open Access Article
Evaluating LLMs for Source Code Generation and Summarization Using Machine Learning Classification and Ranking
by Hussain Mahfoodh, Mustafa Hammad, Bassam A. Y. Alqaralleh and Aymen I. Zreikat
Computers 2026, 15(2), 119; https://doi.org/10.3390/computers15020119 - 10 Feb 2026
Abstract
The recent use of large language models (LLMs) in code generation and code summarization tasks has been widely adopted by the software engineering community. New LLMs emerge regularly with improved functionality, greater efficiency, and expanding training data that allow models to learn more effectively. The lack of guidelines for selecting the right LLM for coding tasks makes the selection a subjective choice by developers rather than one built on code complexity, code correctness, and linguistic similarity analysis. This research investigates the use of machine learning classification and ranking methods to select the best-suited open-source LLMs for code generation and code summarization tasks. This work conducts a comparison experiment on four open-source LLMs (Mistral, CodeLlama, Gemma 2, and Phi-3) and uses the MBPP coding question dataset to analyze code-generated outputs in terms of code complexity, maintainability, cyclomatic complexity, code structure, and LLM perplexity, collected as a set of features. An SVM classification experiment is conducted on the highest-correlated feature pairs, where the models are evaluated through performance metrics including accuracy, area under the ROC curve (AUC), precision, recall, and F1 scores. The RankNet ranking methodology is used to evaluate code summarization capabilities by measuring ROUGE and BERTScore accuracies between LLM-generated summaries and the coding questions from the dataset. The results show a maximum accuracy of 49% for the code generation experiment, with the highest AUC score reaching 86% among the top four correlated feature pairs; the highest precision score reached 90%, and recall reached up to 92%. In the code summarization experiment, Gemma 2 achieved the highest RankNet win-probability score (1.93), followed by Phi-3 with 1.66. The research highlights the potential of machine learning to select LLMs based on coding metrics and paves the way for future work on accuracy, dataset diversity, and other machine learning algorithms.
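For context, RankNet models the probability that one item outranks another as a logistic function of their score difference. A minimal illustration, treating the paper's reported values simply as ranking scores:

```python
import numpy as np

def ranknet_pair_prob(s_i: float, s_j: float) -> float:
    """P(i ranked above j) = sigmoid(s_i - s_j)."""
    return 1.0 / (1.0 + np.exp(-(s_i - s_j)))

scores = {"Gemma 2": 1.93, "Phi-3": 1.66}   # reported win-probability scores
p = ranknet_pair_prob(scores["Gemma 2"], scores["Phi-3"])
print(f"P(Gemma 2 ranked above Phi-3) = {p:.2f}")
```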
Full article
(This article belongs to the Special Issue AI in Action: Innovations and Breakthroughs)
Open Access Article
InfluEmo: Influence of Emotions on Instagram Influencers' Success
by Chiara Felicia Schettini and Giovanna Maria Dimitri
Computers 2026, 15(2), 118; https://doi.org/10.3390/computers15020118 - 10 Feb 2026
Abstract
Social networks have been shown to be a powerful tool for driving public opinion towards specific trends and interests. Yet what actually makes a profile successful? Are emotions responsible for driving the opinion of the public and of followers? We present a study of the influence of emotions on Instagram influencers' success. To do so, we created a novel dataset called InfluEmo, crawled from Instagram, with which we analyzed the impact of emotions on influencers' success. The InfluEmo dataset is novel and freely available. Automatic emotion extraction yielded promising results, supporting our hypothesis that specific emotional profiles in influencers' posted content are associated with measurable indicators of success, measured as the number of followers. These findings suggest that emotions might play a systematic and quantifiable role in shaping public opinion and influencing users' interactions on Instagram. Using the InfluEmo dataset (≈38,000 posts, ≈970 profiles, 4 domains: fashion, climate, AI, and journalism), the paper shows that more positive emotional language is consistently associated with higher engagement. Fashion influencers achieve the highest average likes (≈138,885/post) and the lowest emotional entropy, while AI, climate, and journalism content—characterized by more neutral or mixed emotions—exhibits lower likes (≈6761–19,544/post), weaker sentiment–likes correlations, and higher entropy, indicating that positivity and emotional predictability outperform informational complexity in driving Instagram success.
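"Emotional entropy" is presumably the Shannon entropy of a profile's emotion-label distribution; a small sketch under that assumption, with invented counts:

```python
import numpy as np

def emotion_entropy(counts) -> float:
    """Shannon entropy (bits) of an emotion-label frequency distribution."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

fashion = [80, 10, 5, 3, 2]        # positivity-skewed posts -> low entropy
journalism = [25, 20, 20, 18, 17]  # mixed emotions -> high entropy
print(emotion_entropy(fashion), emotion_entropy(journalism))
```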
Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
Open Access Article
VECTR: A Lightweight Requirements Prioritization Method for Software Startups Grounded in ROI, Time-to-Value, and Confidence
by Frédéric Pattyn, Xiaofeng Wang, Geert Poels, Yannick Dillen and Peter Goetz
Computers 2026, 15(2), 117; https://doi.org/10.3390/computers15020117 - 10 Feb 2026
Abstract
Startups face critical challenges in prioritizing requirements under severe resource constraints. Although methods such as RICE and ICE are well known, prior studies show that decisions often default to ad hoc judgments and stakeholder opinions. This paper introduces and validates VECTR, a lightweight prioritization framework that integrates three criteria highly relevant to startups: return on investment (ROI), Time-to-Value (TtV), and confidence. Rather than presenting these criteria as conceptually new, VECTR's contribution lies in their explicit operationalization for startup contexts and their integration into a single lightweight visualization. Seventeen practitioners—including founders, product managers, and technical leads—participated in semi-structured interviews. They first described their current prioritization practices and then evaluated VECTR through explanatory material and a proof-of-concept visualization. Results show that ROI, TtV, and confidence are already considered in practice, but inconsistently. Practitioners found VECTR intuitive, visually clear, and useful for structuring discussions, while noting limitations related to estimation effort and subjective assumptions. Most reported that it could support better data-driven decisions, though concerns remained around the effort of input estimation and the susceptibility of inputs to human bias. This study contributes a financially grounded prioritization method suited to software startups and offers practitioners a practical way to allocate scarce resources toward the requirements with the greatest impact, while acknowledging that the validation demonstrates perceived usefulness rather than measurable performance gains.
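The abstract does not publish a scoring formula; purely as a sketch of how the three criteria could combine, requirements might be ranked by confidence-weighted ROI per unit time-to-value (this rule and the example backlog are my assumptions, not VECTR's definition):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    roi: float         # expected return on investment (return / cost)
    ttv_weeks: float   # time until the value is realized
    confidence: float  # 0..1 belief in the estimates

    def score(self) -> float:
        return self.confidence * self.roi / self.ttv_weeks

backlog = [
    Requirement("SSO login", roi=3.0, ttv_weeks=2, confidence=0.8),
    Requirement("Analytics dashboard", roi=5.0, ttv_weeks=8, confidence=0.5),
]
for r in sorted(backlog, key=Requirement.score, reverse=True):
    print(f"{r.name}: {r.score():.2f}")
```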
Full article
Open Access Article
Benchmarking Post-Quantum Signatures and KEMs on General-Purpose CPUs Using a TCP Client–Server Testbed
by Jesus Algar-Fernandez, Andrea Villacís-Vanegas, Ysabel Amaro-Aular and Maria-Dolores Cano
Computers 2026, 15(2), 116; https://doi.org/10.3390/computers15020116 - 9 Feb 2026
Abstract
Quantum computing threatens widely deployed public-key cryptosystems, accelerating the adoption of Post-Quantum Cryptography (PQC) in practical systems. Beyond asymptotic security, the feasibility of PQC deployments depends on measured performance on real hardware and on implementation-level overheads. This paper presents an experimental evaluation of five post-quantum digital signature schemes (CRYSTALS-Dilithium, HAWK, SQISign, SNOVA, and SPHINCS+) and three key encapsulation mechanisms (Kyber, HQC, and BIKE) selected to cover multiple PQC design families and parameterizations used in practice. We implement a TCP client–server testbed in Python that invokes C implementations for each primitive—via standalone executables and, where provided, in-process dynamic libraries—and benchmarks key generation, encapsulation/decapsulation, and signature generation/verification on two Windows 11 commodity processors: an AMD Ryzen 7 4000 (8 cores, 16 threads, 1.8 GHz) and an Intel Core i5-1035G1 (4 cores, 8 threads, 1.0 GHz). Each operation is repeated ten times under a low-interference setup, and results are aggregated as mean (with 95% confidence intervals) timings over repeated runs. Across the evaluated configurations, lattice-based schemes (Kyber, Dilithium, HAWK) show the lowest computational cost, while code-based KEMs (HQC, BIKE), isogeny-based (SQISign), and multivariate (SNOVA) signatures incur higher overhead. Hash-based SPHINCS+ exhibits larger artifacts and higher signing latency depending on the parameterization. The AMD platform consistently outperforms the Intel platform, illustrating the impact of CPU characteristics on observed PQC overheads. These results provide comparative evidence to support primitive selection and capacity planning for quantum-resistant deployments, while motivating future end-to-end validation in protocol and web service settings.
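The measurement protocol (repeated timed runs aggregated as a mean with a 95% confidence interval) can be sketched as follows; `keygen` is a pure-Python stand-in, whereas the paper times C implementations:

```python
import time
import statistics as st

def bench(fn, runs=10):
    """Time fn repeatedly; return mean and 95% CI half-width."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    mean = st.mean(times)
    half = 2.26 * st.stdev(times) / len(times) ** 0.5  # t-quantile for n=10
    return mean, half

def keygen():                       # placeholder for e.g. a Kyber keygen call
    sum(i * i for i in range(50_000))

mean, half = bench(keygen)
print(f"{mean * 1e3:.2f} ms ± {half * 1e3:.2f} ms (95% CI)")
```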
Full article
Open Access Article
Cytological Image-Finding Generation Using Open-Source Large Language Models and a Vision Transformer
by Atsushi Teramoto, Yuka Kiriyama, Tetsuya Tsukamoto, Natsuki Yazawa, Kazuyoshi Imaizumi and Hiroshi Fujita
Computers 2026, 15(2), 115; https://doi.org/10.3390/computers15020115 - 8 Feb 2026
Abstract
In lung cytology, screeners and pathologists examine many cells in cytological specimens and describe the corresponding imaging findings. To support this process, our previous study proposed an image-finding generation model based on convolutional neural networks and a transformer architecture. However, further improvements are required to enhance the accuracy of these findings. In this study, we developed a cytology-specific image-finding generation model using a vision transformer and open-source large language models. In the proposed method, a vision transformer pretrained on large-scale image datasets and multiple open-source large language models were introduced and connected through an original projection layer. Experimental validation using 1059 cytological images demonstrated that the proposed model achieved favorable scores on language-based evaluation metrics and good classification performance when cells were classified based on the generated findings. These results indicate that a task-specific model is an effective approach for generating imaging findings in lung cytology.
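The projection-layer pattern for connecting a frozen vision encoder to an LLM's embedding space is commonly a small MLP; a sketch with assumed dimensions, since the paper's actual projector design is not given in the abstract:

```python
import torch
import torch.nn as nn

vit_dim, llm_dim, n_patches = 768, 4096, 197    # typical ViT-B / 7B-LLM sizes

projector = nn.Sequential(                      # one common two-layer design
    nn.Linear(vit_dim, llm_dim),
    nn.GELU(),
    nn.Linear(llm_dim, llm_dim),
)

patch_embeddings = torch.randn(1, n_patches, vit_dim)  # from a frozen ViT
visual_tokens = projector(patch_embeddings)            # prepended to text tokens
print(visual_tokens.shape)                             # torch.Size([1, 197, 4096])
```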
Full article
(This article belongs to the Special Issue Generative AI in Medicine: Emerging Applications, Challenges, and Future Directions)
Open Access Article
Multi-Step Forecasting of Weekly Dengue Cases Using Data Augmentation and Convolutional Recurrent Neural Networks
by Anibal Flores, Hugo Tito-Chura, Jose Guzman-Valdivia, Ruso Morales-Gonzales, Charles Rosado-Chavez and Carlos Silva-Delgado
Computers 2026, 15(2), 114; https://doi.org/10.3390/computers15020114 - 8 Feb 2026
Abstract
Dengue case forecasting is important for the prevention and early control of outbreaks, as well as for the optimization of healthcare resources, among other aspects. This study addresses the need to develop increasingly accurate forecasting models that can support informed decision-making before and during dengue epidemics. Accordingly, two new models based on convolutional and recurrent neural networks, namely ConvLSTM and ConvBiLSTM, combined with data augmentation based on linear interpolation, are proposed. As a case study, weekly dengue cases in Peru from 2000 to 2024 are used. The proposed models are compared with well-known recurrent neural network-based models such as LSTM, BiLSTM, GRU, and BiGRU, both with and without data augmentation. The results show that the proposed models with data augmentation achieve performance comparable or superior to that of the benchmark models, while also exhibiting a lower average computational cost.
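Linear-interpolation augmentation, as the abstract describes it in outline, inserts interpolated points between consecutive weekly observations; a minimal sketch (the paper's exact scheme, e.g. how many points are inserted, may differ):

```python
import numpy as np

def augment_linear(series: np.ndarray) -> np.ndarray:
    """Return the series with one interpolated midpoint between each pair."""
    mids = (series[:-1] + series[1:]) / 2.0
    out = np.empty(series.size + mids.size)
    out[0::2], out[1::2] = series, mids
    return out

weekly = np.array([12.0, 18.0, 30.0, 25.0])   # toy weekly dengue case counts
print(augment_linear(weekly))                 # [12. 15. 18. 24. 30. 27.5 25.]
```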
Full article
(This article belongs to the Section AI-Driven Innovations)
Open Access Article
Artificial Intelligence-Based Models for Predicting Disease Course Risk Using Patient Data
by Rafiqul Chowdhury, Wasimul Bari, M. Tariqul Hasan, Ziaul Hossain and Minhajur Rahman
Computers 2026, 15(2), 113; https://doi.org/10.3390/computers15020113 - 6 Feb 2026
Abstract
Nowadays, longitudinal data are common—typically high-dimensional, large, complex, and collected using various methods, with repeated outcomes. For example, the growing elderly population experiences health deterioration, including limitations in Instrumental Activities of Daily Living (IADLs), thereby increasing demand for long-term care. Understanding the risk of repeated IADL limitations and estimating the trajectory of that risk by identifying significant predictors will support effective care planning. Such data analysis requires a complex modeling framework. We illustrate a regressive modeling framework employing statistical and machine learning (ML) models on the Health and Retirement Study data to predict the trajectory of IADL risk as a function of predictors. Based on the accuracy measure, the regressive logistic regression (RLR) and Decision Tree (DT) models showed the highest prediction accuracy: 0.90 to 0.93 for follow-ups 1–6, and 0.89 and 0.90, respectively, for follow-up 7. The Area Under the Curve and Receiver Operating Characteristic curves showed similar findings. Depression, mobility, large-muscle, and Difficulties of Activities of Daily Living (ADL) scores showed a significant positive association with IADLs (p < 0.05). The proposed modeling framework simplifies the analysis and risk prediction of repeated outcomes from complex datasets and could be automated by leveraging Artificial Intelligence (AI).
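A regressive model in this sense conditions each follow-up outcome on the previous one by including the lagged outcome as a covariate. A synthetic-data sketch; the variable names echo the abstract's predictors, but the data and coefficients are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
depression = rng.normal(size=n)               # toy depression score
y_prev = rng.integers(0, 2, size=n)           # IADL limitation at follow-up t-1
logit = 0.8 * depression + 1.5 * y_prev - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # outcome at follow-up t

X = np.column_stack([depression, y_prev])     # predictors include the lagged outcome
model = LogisticRegression().fit(X, y)
print(dict(zip(["depression", "prev_IADL"], model.coef_[0].round(2))))
```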
Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Modeling Frameworks in Health Informatics and Related Fields)
Open Access Article
Multi-Objective Harris Hawks Optimization with NSGA-III for Feature Selection in Student Performance Prediction
by Nabeel Al-Milli
Computers 2026, 15(2), 112; https://doi.org/10.3390/computers15020112 - 6 Feb 2026
Abstract
Student performance is an important factor in the success of any educational process; as a result, early detection of students at risk is critical for enabling timely and effective educational interventions. However, most educational datasets are complex and do not have a stable number of features. We therefore propose a new algorithm called MOHHO-NSGA-III, a multi-objective feature-selection framework that jointly optimizes classification performance, feature-subset compactness, and prediction stability across cross-validation folds. The algorithm combines Harris Hawks Optimization (HHO), to obtain a good balance between exploration and exploitation, with NSGA-III, to preserve solution diversity along the Pareto front. Moreover, we introduce a diversity-management strategy that reduces premature convergence. We validated the algorithm on the Portuguese and Mathematics datasets from the UCI Student Performance repository. Selected features were evaluated with five classifiers (k-NN, Decision Tree, Naive Bayes, SVM, LDA) through 10-fold cross-validation repeated over 21 independent runs. MOHHO-NSGA-III consistently selected 12 out of 30 features (a 60% reduction) while achieving 4.5% higher average accuracy than the full feature set (Wilcoxon test, across all classifiers). The most frequently selected features were past failures, absences, and family support, aligning with educational research on student success factors. This suggests the proposed algorithm produces not just accurate but also interpretable models suitable for deployment in institutional early warning systems.
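The three objectives translate directly into a fitness function for a wrapper method to minimize; a sketch in which the classifier, the instability measure (fold standard deviation), and the demo data are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def objectives(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> tuple:
    """Return (error, subset compactness, instability) for a binary feature mask."""
    if not mask.any():
        return 1.0, 0.0, 1.0                  # empty subsets are worthless
    scores = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=10)
    return 1.0 - scores.mean(), mask.sum() / mask.size, scores.std()

# Example: evaluate one candidate subset on toy data.
X, y = load_iris(return_X_y=True)
print(objectives(np.array([True, False, True, True]), X, y))
```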
Full article
(This article belongs to the Section AI-Driven Innovations)
Open Access Review
Transparency Mechanisms for Generative AI Use in Higher Education Assessment: A Systematic Scoping Review (2022–2026)
by Itahisa Pérez-Pérez, Miriam Catalina González-Afonso, Zeus Plasencia-Carballo and David Pérez-Jorge
Computers 2026, 15(2), 111; https://doi.org/10.3390/computers15020111 - 6 Feb 2026
Abstract
The integration of generative AI in higher education has reignited debates around authorship and academic integrity, prompting approaches that emphasize transparency. This study identifies and synthesizes the transparency mechanisms described for assessment involving generative AI, recognizes implementation patterns, and analyzes the available evidence regarding compliance monitoring, rigor, workload, and acceptability. A scoping review (PRISMA 2020) was conducted using searches in Scopus, Web of Science, ERIC, and IEEE Xplore (2022–2026). Out of 92 records, 11 studies were included, and four dimensions were coded: compliance assessment approach, specified requirements, implementation patterns, and reported evidence. The results indicate limited operationalization: the absence of explicit assessment (27.3%) and unverified self-disclosure (18.2%) are predominant, along with implicit instructor judgment (18.2%). Requirements are often poorly specified (45.5%), and evidence concerning workload and acceptability is rarely reported (63.6%). Overall, the literature suggests that transparency is more feasible when it is proportionate, grounded in clear expectations, and aligned with the assessment design, while avoiding punitive or overly surveillant dynamics. The review protocol was prospectively registered in PROSPERO (CRD420261287226).
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
Open Access Article
A Macrocognitive Design Taxonomy for Simulation-Based Training Systems: Bridging Cognitive Theory and Human–Computer Interaction
by Jessica M. Johnson
Computers 2026, 15(2), 110; https://doi.org/10.3390/computers15020110 - 6 Feb 2026
Abstract
Simulation-based training systems are increasingly deployed to prepare learners for complex, safety-critical, and dynamic work environments. While advances in computing have enabled immersive and data-rich simulations, many systems remain optimized for procedural accuracy and surface-level task performance rather than the macrocognitive processes that underpin adaptive expertise. Macrocognition encompasses higher-order cognitive processes that are essential for performance transfer beyond controlled training conditions. When these processes are insufficiently supported, training systems risk fostering brittle strategies and negative training effects. This paper introduces a macrocognitive design taxonomy for simulation-based training systems derived from a large-scale meta-analysis examining the transfer of macrocognitive skills from immersive simulations to real-world training environments. Drawing on evidence synthesized from 111 studies spanning healthcare, industrial safety, skilled trades, and defense contexts, the taxonomy links macrocognitive theory to human–computer interaction (HCI) design affordances, computational data traces, and feedback and adaptation mechanisms shown to support transfer. Grounded in joint cognitive systems theory and learning engineering practice, the taxonomy treats macrocognition as a designable and computable system concern informed by empirical transfer effects rather than as an abstract explanatory construct.
Full article
(This article belongs to the Special Issue Innovative Research in Human–Computer Interactions)
Open Access Systematic Review
Wearable Technology in Pediatric Cardiac Care: A Scoping Review of Parent Acceptance and Patient Comfort
by Valentina La Marca, Tara Chatty, Animesh Tandon and Colin K. Drummond
Computers 2026, 15(2), 109; https://doi.org/10.3390/computers15020109 - 6 Feb 2026
Abstract
While wearable technology has advanced pediatric medical monitoring, home-based success in cardiology depends heavily on human-centered design. This scoping review synthesizes evidence on the human factors—specifically parental acceptance, child comfort, and usability—that determine the real-world adoption of pediatric cardiac wearables. By systematically searching PubMed, Scopus, Cochrane Library, and ClinicalTrials.gov, we mapped the evidence surrounding diverse technologies, including vital sign and ECG monitors. Our findings reveal a persistent “performance-usability gap”: while devices show high clinical efficacy in controlled settings, their long-term utility is frequently compromised by poor wearability, skin irritation, and a lack of alignment with family routines. The review identifies that current research structures and regulatory pathways reward quantifiable biomedical outcomes, such as sensor accuracy, while routinely sidelining difficult-to-measure factors like parental buy-in and child autonomy. Consequently, we highlight critical gaps in the design process that prioritize clinical specs over the lived experience of the patient. We conclude that a paradigm shift toward human-centered engineering is required to move beyond controlled study success. These results provide a necessary roadmap for developers and regulators to prioritize the “invisible” outcomes of comfort and compliance, which are essential for the effective, sustained home-based monitoring of pediatric patients.
Full article
Open Access Article
Design and Implementation of a Trusted Food Supply Chain Traceability System with Incentive Using Hyperledger Fabric
by Zhiyang Zhou, Yaokai Feng and Kouichi Sakurai
Computers 2026, 15(2), 108; https://doi.org/10.3390/computers15020108 - 5 Feb 2026
Abstract
Effective supply chain traceability is indispensable for ensuring food safety, which is a significant social issue. Traditional traceability systems are mostly based on centralized databases, relying on a single entity or organization and facing problems such as insufficient transparency and the risk of data tampering. To address these issues, many studies have adopted blockchain technology, which offers advantages such as decentralization and immutability. However, challenges such as data credibility and insufficient protection of private data remain. This study proposes a multi-channel architecture based on blockchain (Hyperledger Fabric in this study), in which data is partitioned and managed across dedicated channels to strengthen the protection of sensitive information. Furthermore, a trust and incentive design is implemented, featuring a trust-value calculation function and a reward–penalty mechanism that encourage participants to upload more truthful data and improve the reliability of data before it is recorded on the blockchain. In this paper, the design and implementation of the proposed system are explained in detail, and its performance is examined using Hyperledger Caliper, a blockchain performance benchmarking framework. Functional evaluations indicate that the proposed system is correctly implemented and supports supply chain traceability, trust- and incentive-related functions, privacy protection, and other functions as designed, while performance evaluations indicate that it maintains stable performance under higher workloads, suggesting that the proposed approach is practical and applicable to food supply chain traceability scenarios.
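Fabric chaincode is typically written in Go or Node.js; the trust and reward–penalty logic can still be sketched language-neutrally in Python. The update constants and threshold below are assumptions, not the paper's values:

```python
def update_trust(trust: float, data_verified: bool,
                 reward: float = 0.05, penalty: float = 0.15) -> float:
    """Reward truthful uploads, penalize falsified ones, keep trust in [0, 1]."""
    trust += reward if data_verified else -penalty
    return min(max(trust, 0.0), 1.0)

def may_record(trust: float, threshold: float = 0.5) -> bool:
    """Only participants above the trust threshold may write traceability data."""
    return trust >= threshold

t = 0.6
for verified in [True, True, False, True]:
    t = update_trust(t, verified)
print(f"trust = {t:.2f}, may write: {may_record(t)}")
```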
Full article
(This article belongs to the Special Issue Revolutionizing Industries: The Impact of Blockchain Technology)
Topics
Topic in Applied Sciences, Computers, JSAN, Technologies, BDCC, Sensors, Telecom, Electronics
Electronic Communications, IOT and Big Data, 2nd Volume
Topic Editors: Teen-Hang Meen, Charles Tijus, Cheng-Chien Kuo, Kuei-Shu Hsu, Jih-Fu Tu
Deadline: 31 March 2026
Topic in AI, Buildings, Computers, Drones, Entropy, Symmetry
Applications of Machine Learning in Large-Scale Optimization and High-Dimensional Learning
Topic Editors: Jeng-Shyang Pan, Junzo Watada, Vaclav Snasel, Pei Hu
Deadline: 30 April 2026
Topic in Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2026
Topic in Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 30 June 2026
Special Issues
Special Issue in Computers
Advances in Semantic Multimedia and Personalized Digital Content
Guest Editors: Phivos Mylonas, Christos Troussas, Akrivi Krouska, Manolis Wallace, Cleo Sgouropoulou
Deadline: 25 February 2026
Special Issue in Computers
Advanced Image Processing and Computer Vision (2nd Edition)
Guest Editors: Selene Tomassini, M. Ali Dewan
Deadline: 28 February 2026
Special Issue in Computers
Cloud Computing and Big Data Mining
Guest Editor: Rao Mikkilineni
Deadline: 28 February 2026
Special Issue in Computers
Wearable Computing and Activity Recognition
Guest Editors: Yang Gao, Yincheng Jin
Deadline: 28 February 2026