Informatics, Volume 12, Issue 3 (September 2025) – 44 articles

Cover Story: In a world where artificial intelligence (AI) is becoming increasingly common, it is essential to understand how people accept and trust these systems. This systematic review identifies and analyses the quantitative methods used to measure trust in AI. Following the PRISMA guidelines, we reviewed 1283 articles from three databases, ultimately selecting 45 empirical studies published before December 2023. Through the lenses of cognitive and affective trust, we analysed trust definitions and measurements, types of AI systems, and related variables. We found that definitions and measurements of trust vary considerably. Still, studies are consistent in how their elements (theoretical focus, experimental design, and the level of human-like characteristics of the AI) emphasise either the cognitive or the affective side of trust.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
31 pages, 336 KB  
Article
Talking Tech, Teaching with Tech: How Primary Teachers Implement Digital Technologies in Practice
by Lyubka Aleksieva, Veronica Racheva and Roumiana Peytcheva-Forsyth
Informatics 2025, 12(3), 99; https://doi.org/10.3390/informatics12030099 - 22 Sep 2025
Viewed by 255
Abstract
This paper explores how primary school teachers integrate digital technologies into their classroom practice, with a particular focus on the extent to which their stated intentions align with what actually takes place during lessons. Drawing on data from the Bulgarian SUMMIT project on digital transformation in education, the study employed a mixed-methods design combining semi-structured interviews, structured lesson observations, and analysis of teaching materials. The sample included 44 teachers from 26 Bulgarian schools, representing a range of educational contexts. The analysis was guided by the Digital Technology Integration Framework (DTIF), which distinguishes between three modes of technology use—Support, Extend, and Transform—based on the depth of pedagogical change. The findings indicated a strong degree of consistency between teachers’ accounts and observed practices in areas such as the use of digital tools for content visualisation, lesson enrichment, and reinforcement of knowledge. At the same time, the study highlights important gaps between teachers’ aspirations and classroom realities. Although many spoke of wanting to promote independent exploration, creativity, collaboration, and digital citizenship, these ambitions were rarely realised in observed lessons. Pupil autonomy and opportunities for creative digital production were limited, with extended and transformative practices appearing only occasionally. No significant subject-specific differences were identified: teachers across disciplines tended to rely on the same set of familiar tools, while more advanced or innovative uses of technology remained rare. Rather than offering a definitive account of progress, the study raises critical questions about teachers’ digital pedagogical competencies, contextual constraints and the depth of technology integration in everyday classroom practice. While digital tools are increasingly present, their use often remains limited to supporting traditional instruction, with extended and transformative applications still aspirational rather than routine. The findings draw attention to context-specific challenges in the Bulgarian primary education system and the importance of aligning digital innovation with pedagogical intent. This highlights the need for sustained professional development focused on learner-centred digital pedagogies, along with stronger institutional support and equitable access to infrastructure. Full article
29 pages, 3613 KB  
Article
CyberKG: Constructing a Cybersecurity Knowledge Graph Based on SecureBERT_Plus for CTI Reports
by Binyong Li, Qiaoxi Yang, Chuang Deng and Hua Pan
Informatics 2025, 12(3), 100; https://doi.org/10.3390/informatics12030100 - 22 Sep 2025
Viewed by 268
Abstract
Cyberattacks, especially Advanced Persistent Threats (APTs), have become more complex. These evolving threats challenge traditional defense systems, which struggle to counter long-lasting and covert attacks. Cybersecurity Knowledge Graphs (CKGs), enabled through the integration of multi-source CTI, introduce novel approaches for proactive defense. However, building CKGs faces challenges such as unclear terminology, overlapping entity relationships in attack chains, and differences in CTI across sources. To tackle these challenges, we propose the CyberKG framework, which improves entity recognition and relation extraction using a SecureBERT_Plus-BiLSTM-Attention-CRF joint architecture. Semantic features are captured using a domain-adapted SecureBERT_Plus model, while temporal dependencies are modeled through BiLSTM. Attention mechanisms highlight key cross-sentence relationships, while CRF incorporates ATT&CK rule constraints. Hierarchical agglomerative clustering (HAC), based on contextual embeddings, facilitates dynamic entity disambiguation and semantic fusion. Experimental evaluations on the DNRTI and MalwareDB datasets demonstrate strong performance in extraction accuracy, entity normalization, and the resolution of overlapping relations. The constructed knowledge graph supports APT tracking, attack-chain provenance, and proactive defense prediction. Full article
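To make the joint architecture above concrete, here is a minimal sketch of a BiLSTM-attention-CRF tagging head over precomputed encoder embeddings. The layer sizes, tag count, and the third-party pytorch-crf package are assumptions for illustration, not the authors' exact configuration.

```python
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf (assumed dependency)

class JointExtractionHead(nn.Module):
    def __init__(self, emb_dim=768, hidden=256, num_tags=21):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=8, batch_first=True)
        self.emissions = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, emb, tags=None, mask=None):
        h, _ = self.bilstm(emb)               # model temporal dependencies
        h, _ = self.attn(h, h, h)             # highlight cross-token relations
        e = self.emissions(h)                 # per-token tag scores
        if tags is not None:                  # training: negative log-likelihood
            return -self.crf(e, tags, mask=mask)
        return self.crf.decode(e, mask=mask)  # inference: best tag sequence
```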
42 pages, 2077 KB  
Systematic Review
From E-Government to AI E-Government: A Systematic Review of Citizen Attitudes
by Ioanna Savveli, Maria Rigou and Stefanos Balaskas
Informatics 2025, 12(3), 98; https://doi.org/10.3390/informatics12030098 - 16 Sep 2025
Viewed by 782
Abstract
Governments increasingly integrate artificial intelligence (AI) into digital public services, and understanding how citizens perceive and respond to these technologies has become essential. This systematic review analyzes 30 empirical studies published from early January 2019 to mid-April 2025, following PRISMA guidelines, to map the current landscape of citizen attitudes toward AI-enabled e-government services. Guided by four research questions, the study examines: (1) the forms of AI implementation most commonly investigated, (2) the attitudinal variables used to assess user perception, (3) key factors influencing attitudes, and (4) concerns and challenges reported by users. The findings reveal that chatbots dominate current implementations, with behavioral intentions and satisfaction serving as the main outcome measures. Perceived usefulness, ease of use, trust, and perceived risk emerge as recurring determinants of positive attitudes. However, widespread concerns related to privacy and interface usability highlight persistent barriers. Overall, the review underscores the need for transparent, citizen-centered AI design and ethical safeguards to enhance acceptance and trust. It concludes that future research should address understudied applications, include vulnerable populations, and explore perceptions across diverse public sector domains. Full article
15 pages, 545 KB  
Article
The Impact of the 2023 Wikipedia Redesign on User Experience
by Tyler Wilson, Prajjwal Gandharv and Karl Vachuska
Informatics 2025, 12(3), 97; https://doi.org/10.3390/informatics12030097 - 16 Sep 2025
Viewed by 505
Abstract
In January 2023, Wikipedia introduced its most significant user interface (UI) redesign in over a decade, aiming to improve readability, accessibility, and navigation across devices. Despite the scale of this change, little empirical work has assessed its actual impact on user behavior. This study employs a natural experiment framework, leveraging Wikipedia’s exogenous, site-wide redesign date and large-scale, publicly available data—including clickstream, pageview, and edit histories—to evaluate user experience before and after the change. Using a quasi-experimental design, we estimate an immediate jump of ~1.06 million monthly internal link clicks at launch, while average hourly pageviews in January rose 1.25% despite a one-time –1.79 million dip at rollout. These results highlight the potential of large-scale UI changes to reshape user interaction without broadly alienating users and demonstrate the value of quasi-experimental methods for Human–Computer Interaction (HCI) research. Our approach offers a replicable framework for evaluating real-world design interventions at scale. Full article
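For readers unfamiliar with the quasi-experimental setup, an interrupted time series regression with a level shift and slope change at the rollout date captures the basic idea. The sketch below is illustrative only; the file and column names are hypothetical, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_clicks.csv", parse_dates=["month"])  # hypothetical file
df["t"] = range(len(df))                                # linear time trend
df["post"] = (df["month"] >= "2023-01-01").astype(int)  # redesign rollout flag
df["t_post"] = df["t"] * df["post"]                     # post-rollout slope change

# 'post' estimates the immediate level shift; 't_post' the change in trend
model = smf.ols("clicks ~ t + post + t_post", data=df).fit()
print(model.summary())
```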
22 pages, 307 KB  
Article
Digital Cultural Heritage in Southeast Asia: Knowledge Structures and Resources in GLAM Institutions
by Kanyarat Kwiecien, Wirapong Chansanam and Kulthida Tuamsuk
Informatics 2025, 12(3), 96; https://doi.org/10.3390/informatics12030096 - 15 Sep 2025
Viewed by 929
Abstract
This study explores the digital organization of cultural heritage knowledge across national GLAM institutions (galleries, libraries, archives, and museums) in the ten ASEAN countries. By employing a qualitative content analysis approach, this study investigates the types, structures, and dissemination patterns of information resources available on 40 institutional websites. The findings reveal the diversity and richness of Southeast Asian cultural heritage, including national and local wisdom, history, significant figures, and material culture, collected and curated by these institutions. This study identifies key knowledge domains, content overlaps across GLAM sectors, and limitations in metadata and interoperability. Comparative analysis with international cultural knowledge infrastructures, such as the United Nations Educational, Scientific and Cultural Organization (UNESCO) framework, Europeana, and the World Digital Library, highlights both shared values and regional distinctions. While GLAMs in the ASEAN region have made significant strides in digital preservation and access, the lack of standardized metadata and cross-institutional integration impedes broader discoverability and reuse. This study contributes to the discourse on heritage informatics by providing an empirical foundation for enhancing digital cultural heritage systems in developing regions. The implications point toward the need for interoperable metadata standards, regional collaboration, and capacity building to support sustainable digital heritage ecosystems. This study offers practical insights for policymakers, digital curators, and information professionals seeking to improve cultural knowledge infrastructures in Southeast Asia and similar contexts. Full article
22 pages, 3012 KB  
Article
Deep Learning-Based Forecasting of Boarding Patient Counts to Address Emergency Department Overcrowding
by Orhun Vural, Bunyamin Ozaydin, James Booth, Brittany F. Lindsey and Abdulaziz Ahmed
Informatics 2025, 12(3), 95; https://doi.org/10.3390/informatics12030095 - 15 Sep 2025
Viewed by 386
Abstract
Emergency department (ED) overcrowding remains a major challenge for hospitals, resulting in worse outcomes, longer waits, elevated hospital operating costs, and greater strain on staff. Boarding count, the number of patients who have been admitted to an inpatient unit but are still in the ED waiting for transfer, is a key patient flow metric that affects overall ED operations. This study presents a deep learning-based approach to forecasting ED boarding counts using only operational and contextual features—derived from hourly ED tracking, inpatient census, weather, holiday, and local event data—without patient-level clinical information. Different deep learning algorithms were tested, including convolutional and transformer-based time-series models, and the best-performing model, Time Series Transformer Plus (TSTPlus), achieved strong performance at the 6-h prediction horizon, with a mean absolute error of 4.30 and an R2 score of 0.79. After identifying TSTPlus as the best-performing model, its performance was further evaluated at additional horizons of 8, 10, and 12 h. The model was also evaluated under extreme operational conditions, demonstrating robust and accurate forecasts. These findings highlight the potential of the proposed forecasting approach to support proactive operational planning and reduce ED overcrowding. Full article
(This article belongs to the Section Big Data Mining and Analytics)
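A sketch of the basic supervised setup implied above: an hourly boarding-count series is windowed so a model predicts the count six hours ahead. The lookback length and the synthetic series are assumptions for illustration, not the study's data pipeline.

```python
import numpy as np

def make_windows(series, lookback=24, horizon=6):
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i : i + lookback])            # past 24 hourly counts
        y.append(series[i + lookback + horizon - 1])  # boarding count 6 h ahead
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
hourly_boarding = rng.poisson(20, size=2000).astype(float)  # synthetic stand-in
X, y = make_windows(hourly_boarding)
```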
38 pages, 3221 KB  
Article
Simulating the Effects of Sensor Failures on Autonomous Vehicles for Safety Evaluation
by Francisco Matos, João Durães and João Cunha
Informatics 2025, 12(3), 94; https://doi.org/10.3390/informatics12030094 - 15 Sep 2025
Viewed by 1003
Abstract
Autonomous vehicles (AVs) are increasingly becoming a reality, enabled by advances in sensing technologies, intelligent control systems, and real-time data processing. For AVs to operate safely and effectively, they must maintain a reliable perception of their surroundings and internal state. However, sensor failures, whether due to noise, malfunction, or degradation, can compromise this perception and lead to incorrect localization or unsafe decisions by the autonomous control system. While modern AV systems often combine data from multiple sensors to mitigate such risks through sensor fusion techniques (e.g., Kalman filtering), the extent to which these systems remain resilient under faulty conditions remains an open question. This work presents a simulation-based fault injection framework to assess the impact of sensor failures on AVs’ behavior. The framework enables structured testing of autonomous driving software under controlled fault conditions, allowing researchers to observe how specific sensor failures affect system performance. To demonstrate its applicability, an experimental campaign was conducted using the CARLA simulator integrated with the Autoware autonomous driving stack. A multi-segment urban driving scenario was executed using a modified version of CARLA’s Scenario Runner to support Autoware-based evaluations. Faults were injected simulating LiDAR, GNSS, and IMU sensor failures in different route scenarios. The fault types considered in this study include silent sensor failures and severe noise. The results obtained by emulating sensor failures in our chosen system under test, Autoware, show that faults in LiDAR and IMU gyroscope have the most critical impact, often leading to erratic motion and collisions. In contrast, faults in GNSS and IMU accelerometers were well tolerated. This demonstrates the ability of the framework to investigate the fault-tolerance of AVs in the presence of critical sensor failures. Full article
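As a toy illustration of the two fault types studied (silent failures and severe noise), the sketch below perturbs a simulated sensor stream. The function and the synthetic GNSS track are hypothetical and do not reflect the framework's actual API.

```python
import random

def inject_fault(reading, mode="none", sigma=0.5):
    if mode == "silent":   # silent failure: the sensor stops reporting
        return None
    if mode == "noise":    # severe noise: Gaussian perturbation of the reading
        return reading + random.gauss(0.0, sigma)
    return reading         # fault-free passthrough

gnss_latitudes = [48.8566 + i * 1e-6 for i in range(100)]  # synthetic track
faulty = [inject_fault(lat, mode="noise", sigma=1e-4) for lat in gnss_latitudes]
```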
26 pages, 3423 KB  
Article
Federated Learning Spam Detection Based on FedProx and Multi-Level Multi-Feature Fusion
by Yunpeng Xiong, Junkuo Cao and Guolian Chen
Informatics 2025, 12(3), 93; https://doi.org/10.3390/informatics12030093 - 12 Sep 2025
Viewed by 465
Abstract
Traditional spam detection methodologies often neglect user privacy preservation, potentially incurring data leakage risks. Furthermore, current federated learning models for spam detection face several critical challenges: (1) data heterogeneity and instability during server-side parameter aggregation, (2) training instability in single neural network architectures leading to mode collapse, and (3) constrained expressive capability in multi-module frameworks due to excessive complexity. These issues represent fundamental research pain points in federated learning-based spam detection systems. To address this technical challenge, this study innovatively integrates federated learning frameworks with multi-feature fusion techniques to propose a novel spam detection model, FPW-BC. The FPW-BC model addresses data distribution imbalance through the FedProx aggregation algorithm and enhances stability during server-side parameter aggregation via a horse-racing selection strategy. The model effectively mitigates limitations inherent in both single and multi-module architectures through hierarchical multi-feature fusion. To validate FPW-BC’s performance, comprehensive experiments were conducted on six benchmark datasets with distinct distribution characteristics: CEAS, Enron, Ling, Phishing_email, Spam_email, and Fake_phishing, with comparative analysis against multiple baseline methods. Experimental results demonstrate that FPW-BC achieves exceptional generalization capability for various spam patterns while maintaining user privacy preservation. The model attained 99.40% accuracy on CEAS and 99.78% on Fake_phishing, representing significant dual improvements in both privacy protection and detection efficiency. Full article
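The FedProx aggregation mentioned above hinges on a proximal term added to each client's local objective, penalizing drift from the global model under heterogeneous (non-IID) data. A minimal PyTorch-style sketch, with mu and the surrounding training loop as assumptions rather than the FPW-BC configuration:

```python
def local_step(model, global_params, batch, loss_fn, opt, mu=0.01):
    # global_params: detached copies of the global model's weights
    x, y = batch
    loss = loss_fn(model(x), y)
    # Proximal term: (mu/2) * ||w - w_global||^2 keeps local weights near global
    prox = sum((w - wg).pow(2).sum()
               for w, wg in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    opt.step()
    opt.zero_grad()
```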
32 pages, 3323 KB  
Article
A Data-Driven Informatics Framework for Regional Sustainability: Integrating Twin Mean-Variance Two-Stage DEA with Decision Analytics
by Pasura Aungkulanon, Roberto Montemanni, Atiwat Nanphang and Pongchanun Luangpaiboon
Informatics 2025, 12(3), 92; https://doi.org/10.3390/informatics12030092 - 11 Sep 2025
Viewed by 323
Abstract
This study introduces a novel informatics framework for assessing regional sustainability by integrating Twin Mean-Variance Two-Stage Data Envelopment Analysis (TMV-TSDEA) with a desirability-based decision analytics system. The model evaluates both the efficiency and stability of economic and environmental performance across regions, supporting evidence-based policymaking and strategic planning. Applied to 16 Thai provinces, the framework incorporates a wide range of indicators—such as investment, population, tourism, industrial output, electricity use, forest coverage, and air quality. The twin mean-variance approach captures not only average efficiency but also the consistency of performance over time or under varying scenarios. A two-stage DEA structure models the transformation from economic inputs to environmental outcomes. To ensure comparability, all variables are normalized using desirability functions based on standardized statistical coding. The TMV-TSDEA framework generates composite performance scores that reveal clear disparities among regions. Provinces like Bangkok and Ayutthaya demonstrate a consistent high performance, while others show underperformance or variability requiring targeted policy action. Designed for integration with smart governance platforms, the framework provides a scalable and reproducible tool for regional benchmarking, resource allocation, and sustainability monitoring. By combining informatics principles with advanced analytics, TMV-TSDEA enhances transparency, supports decision-making, and offers a holistic foundation for sustainable regional development. Full article
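The desirability-based normalization can be illustrated with the standard larger-is-better transform, which maps a raw indicator onto [0, 1] between a lower and an upper bound. The bounds and exponent below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def desirability_larger_is_better(x, lo, hi, s=1.0):
    # d = ((x - lo) / (hi - lo))^s, clipped to [0, 1]
    d = ((np.asarray(x, dtype=float) - lo) / (hi - lo)) ** s
    return np.clip(d, 0.0, 1.0)

print(desirability_larger_is_better([35.2, 80.1, 95.0], lo=20.0, hi=90.0))
```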
15 pages, 1082 KB  
Article
Do Trusting Belief and Social Presence Matter? Service Satisfaction in Using AI Chatbots: Necessary Condition Analysis and Importance-Performance Map Analysis
by Tai Ming Wut, Stephanie Wing Lee, Jing (Bill) Xu and Man Lung Jonathan Kwok
Informatics 2025, 12(3), 91; https://doi.org/10.3390/informatics12030091 - 9 Sep 2025
Viewed by 597
Abstract
Research indicates that perceived trust affects both behavioral intention to use chatbots and satisfaction with the service they provide in customer service contexts. However, it remains unclear whether perceived propensity to trust impacts service satisfaction in this context. Thus, this research aims to explore how customers’ propensity to trust influences trusting beliefs and, subsequently, their satisfaction when using chatbots for customer service. Through purposive sampling, individuals in Hong Kong with prior experience using chatbots were selected to participate in a quantitative survey. The study employed Necessary Condition Analysis, Importance-Performance Map Analysis, and Partial Least Squares Structural Equation Modelling to examine factors influencing users’ trusting beliefs toward chatbots in customer service settings. Findings revealed that trust in chatbot interactions is significantly influenced by propensity to trust technology, social presence, perceived usefulness, and perceived ease of use. Consequently, these factors, along with trusting belief, also influence service satisfaction in this context. Social Presence, Perceived Ease of Use, Propensity to Trust, Perceived Usefulness, and Trusting Belief were all found to be necessary conditions. Importance-Performance Map Analysis further identified priority areas for managerial action. This research extends the Technology Acceptance Model by incorporating social presence, propensity to trust technology, and trusting belief in the context of AI chatbot use for customer service. Full article
24 pages, 2822 KB  
Article
Digitizing the Higaonon Language: A Mobile Application for Indigenous Preservation in the Philippines
by Danilyn Abingosa, Paul Bokingkito, Jr., Sittie Noffaisah Pasandalan, Jay Rey Gosnell Alovera and Jed Otano
Informatics 2025, 12(3), 90; https://doi.org/10.3390/informatics12030090 - 8 Sep 2025
Viewed by 1196
Abstract
This research addresses the critical need for language preservation among the Higaonon indigenous community in Mindanao, Philippines, through the development of a culturally responsive mobile dictionary application. The Higaonon language faces significant endangerment due to generational language shift, limited documentation, and a scarcity of educational materials. Employing user-centered design principles and participatory lexicography, this study involved collaboration with tribal elders, educators, and youth to document and digitize Higaonon vocabulary across ten culturally significant semantic domains. Each Higaonon lexeme was translated into English, Filipino, and Cebuano to enhance comprehension across linguistic groups. The resulting mobile application incorporates multilingual search capabilities, offline access, phonetic transcriptions, example sentences, and culturally relevant design elements. An evaluation conducted with 30 participants (15 Higaonon and 15 non-Higaonon speakers) revealed high satisfaction ratings across functionality (4.81/5.0), usability (4.63/5.0), and performance (4.73/5.0). Offline accessibility emerged as the most valued feature (4.93/5.0), while comparative analysis identified meaningful differences in user experience between native and non-native speakers, with Higaonon users providing more critical assessments particularly regarding font readability and performance optimization. The application demonstrates how community-driven technological interventions can support indigenous language revitalization while respecting cultural integrity, intellectual property rights, and addressing practical community needs. This research establishes a framework for ethical indigenous language documentation that prioritizes community self-determination and provides empirical evidence that culturally responsive digital technologies can effectively preserve endangered languages while serving as repositories for cultural knowledge embedded within linguistic systems. Full article
20 pages, 2103 KB  
Article
Tourist Flow Prediction Based on GA-ACO-BP Neural Network Model
by Xiang Yang, Yongliang Cheng, Minggang Dong and Xiaolan Xie
Informatics 2025, 12(3), 89; https://doi.org/10.3390/informatics12030089 - 3 Sep 2025
Viewed by 453
Abstract
Tourist flow prediction plays a crucial role in enhancing the efficiency of scenic area management, optimizing resource allocation, and promoting the sustainable development of the tourism industry. To improve the accuracy and real-time performance of tourist flow prediction, we propose a BP model based on a hybrid genetic algorithm (GA) and ant colony optimization algorithm (ACO), called the GA-ACO-BP model. First, we comprehensively considered multiple key factors related to tourist flow, including historical tourist flow data (such as tourist flow from yesterday, the previous day, and the same period last year), holiday types, climate comfort, and the search popularity index on online map platforms. Second, to address the tendency of the BP model to get stuck in local optima, we introduce the GA, which has excellent global search capabilities. Finally, to improve local convergence speed, we further introduce the ACO algorithm. The experimental results based on tourist flow data from the Elephant Trunk Hill Scenic Area in Guilin indicate that the GA-ACO-BP model achieves optimal values for key tourist flow prediction metrics such as MAPE, RMSE, MAE, and R2, compared to commonly used prediction models. These values are 4.09%, 426.34, 258.80, and 0.98795, respectively. Compared to the initial BP neural network, the improved GA-ACO-BP model reduced error metrics such as MAPE, RMSE, and MAE by 1.12%, 244.04, and 122.91, respectively, and increased the R2 metric by 1.85%. Full article
(This article belongs to the Topic The Applications of Artificial Intelligence in Tourism)
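For reference, the four reported metrics can be computed as below. This is a generic sketch with placeholder arrays, not the study's evaluation code.

```python
import numpy as np

def report(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    return {
        "MAPE%": np.mean(np.abs(err / y_true)) * 100,
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        "R2": 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2),
    }

print(report([1200, 1500, 900], [1150, 1580, 870]))  # placeholder flows
```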
23 pages, 2012 KB  
Article
Preliminary Design Guidelines for Evaluating Immersive Industrial Safety Training
by André Cordeiro, Regina Leite, Lucas Almeida, Cintia Neves, Tiago Silva, Alexandre Siqueira, Marcio Catapan and Ingrid Winkler
Informatics 2025, 12(3), 88; https://doi.org/10.3390/informatics12030088 - 1 Sep 2025
Viewed by 548
Abstract
This study presents preliminary design guidelines to support the evaluation of industrial safety training using immersive technologies, with a focus on high-risk work environments such as working at height. Although virtual reality has been widely adopted for training, few studies have explored its use for behavior-level evaluation, corresponding to Level 3 of the Kirkpatrick Model. Addressing this gap, the study adopts the Design Science Research methodology, combining a systematic literature review with expert focus group analysis to develop a conceptual framework for training evaluation. The results identify the key elements necessary for immersive training evaluations, and the resulting guidelines are organized into six categories: scenario configuration, ethical procedures, recruitment, equipment selection, experimental design, and implementation strategies. These guidelines represent a DSR-based conceptual artifact to inform future empirical studies and support the structured assessment of immersive safety training interventions. The study also highlights the potential of integrating behavioral and physiological indicators to support immersive evaluations of behavioral change, offering an expert-informed and structured foundation for future empirical studies in high-risk industrial contexts. Full article
24 pages, 1389 KB  
Article
Analysis and Forecasting of Cryptocurrency Markets Using Bayesian and LSTM-Based Deep Learning Models
by Bidesh Biswas Biki, Makoto Sakamoto, Amane Takei, Md. Jubirul Alam, Md. Riajuliislam and Showaibuzzaman Showaibuzzaman
Informatics 2025, 12(3), 87; https://doi.org/10.3390/informatics12030087 - 30 Aug 2025
Viewed by 1049
Abstract
The rapid rise in cryptocurrency prices has intensified the need for robust forecasting models that can capture their irregular and volatile patterns. This study aims to forecast Bitcoin prices over a 15-day horizon by evaluating and comparing two distinct predictive modeling approaches: the Bayesian State-Space model and Long Short-Term Memory (LSTM) neural networks. Historical price data from January 2024 to April 2025 is used for model training and testing. The Bayesian model provided probabilistic insights, achieving a Mean Squared Error (MSE) of 0.0000 and a Mean Absolute Error (MAE) of 0.0026 on the training data, and an MSE of 0.0013 and an MAE of 0.0307 on the testing data. The LSTM model, in turn, captured temporal dependencies and performed strongly on the training data, achieving an MSE of 0.0004, an MAE of 0.0160, an RMSE of 0.0212, and an R2 of 0.9924; on the testing data it achieved an MSE of 0.0007 with an R2 of 0.3505. These results indicate that while the LSTM model excels in training performance, the Bayesian model provides better interpretability with lower error margins in testing, highlighting the trade-offs between model accuracy and probabilistic forecasting in the cryptocurrency markets. Full article
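A minimal Keras sketch of the univariate LSTM forecaster described above; the 30-day lookback, layer width, and training settings are illustrative assumptions rather than the authors' exact setup.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(30, 1)),  # 30-day lookback window
    keras.layers.Dense(1),                       # next-day normalized price
])
model.compile(optimizer="adam", loss="mse")
# X: (samples, 30, 1) scaled price windows; y: (samples, 1) next-day targets
# model.fit(X, y, epochs=50, batch_size=32, validation_split=0.1)
```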
20 pages, 592 KB  
Review
The Temporal Evolution of Large Language Model Performance: A Comparative Analysis of Past and Current Outputs in Scientific and Medical Research
by Ishith Seth, Gianluca Marcaccini, Bryan Lim, Jennifer Novo, Stephen Bacchi, Roberto Cuomo, Richard J. Ross and Warren M. Rozen
Informatics 2025, 12(3), 86; https://doi.org/10.3390/informatics12030086 - 26 Aug 2025
Viewed by 651
Abstract
Background: Large language models (LLMs) such as ChatGPT have evolved rapidly, with notable improvements in coherence, factual accuracy, and contextual relevance. However, their academic and clinical applicability remains under scrutiny. This study evaluates the temporal performance evolution of LLMs by comparing earlier model outputs (GPT-3.5 and GPT-4.0) with ChatGPT-4.5 across three domains: aesthetic surgery counseling, an academic discussion of base-of-thumb arthritis, and a systematic literature review. Methods: We replicated the methodologies of three previously published studies using identical prompts in ChatGPT-4.5. Each output was assessed against its predecessor using a nine-domain Likert-based rubric measuring factual accuracy, completeness, reference quality, clarity, clinical insight, scientific reasoning, bias avoidance, utility, and interactivity. Expert reviewers in plastic and reconstructive surgery independently scored and compared model outputs across versions. Results: ChatGPT-4.5 outperformed earlier versions across all domains. Reference quality improved most significantly (a score increase of +4.5), followed by factual accuracy (+2.5), scientific reasoning (+2.5), and utility (+2.5). In aesthetic surgery counseling, GPT-3.5 produced generic responses lacking clinical detail, whereas ChatGPT-4.5 offered tailored, structured, and psychologically sensitive advice. In academic writing, ChatGPT-4.5 eliminated reference hallucination, correctly applied evidence hierarchies, and demonstrated advanced reasoning. In the literature review, recall remained suboptimal, but precision, citation accuracy, and contextual depth improved substantially. Conclusion: ChatGPT-4.5 represents a major step forward in LLM capability, particularly in generating trustworthy academic and clinical content. While not yet suitable as a standalone decision-making tool, its outputs now support research planning and early-stage manuscript preparation. Persistent limitations include information recall and interpretive flexibility. Continued validation is essential to ensure ethical, effective use in scientific workflows. Full article
32 pages, 362 KB  
Article
Human-AI Symbiotic Theory (HAIST): Development, Multi-Framework Assessment, and AI-Assisted Validation in Academic Research
by Laura Thomsen Morello and John C. Chick
Informatics 2025, 12(3), 85; https://doi.org/10.3390/informatics12030085 - 25 Aug 2025
Viewed by 1840
Abstract
This study introduces the Human-AI Symbiotic Theory (HAIST), designed to guide authentic collaboration between human researchers and artificial intelligence in academic contexts, while pioneering a novel AI-assisted approach to theory validation that transforms educational research methodology. Addressing critical gaps in educational theory and advancing validation practices, this research employed a sequential three-phase mixed-methods approach: (1) systematic theoretical synthesis integrating five paradigmatic perspectives across learning theory, cognition, information processing, ethics, and AI domains; (2) development of an innovative validation framework combining three established theory-building approaches with groundbreaking AI-assisted content assessment protocols; and (3) comprehensive theory validation through both traditional multi-framework evaluation and novel AI-based content analysis demonstrating unprecedented convergent validity. This research contributes both a theoretically grounded framework for human-AI research collaboration and a transformative methodological innovation demonstrating how AI tools can systematically augment traditional expert-driven theory validation. HAIST provides the first comprehensive theoretical foundation designed explicitly for human-AI partnerships in scholarly research with applicability across disciplines, while the AI-assisted validation methodology offers a scalable, reliable model for theory development. Future research directions include empirical testing of HAIST principles in live research settings and broader application of the AI-assisted validation methodology to accelerate theory development across educational research and related disciplines. Full article
25 pages, 2448 KB  
Article
Marketing a Banned Remedy: A Topic Model Analysis of Health Misinformation in Thai E-Commerce
by Kanitsorn Suriyapaiboonwattana, Yuttana Jaroenruen, Saiphit Satjawisate, Kate Hone, Panupong Puttarak, Nattapong Kaewboonma, Puriwat Lertkrai and Siwanath Nantapichai
Informatics 2025, 12(3), 84; https://doi.org/10.3390/informatics12030084 - 18 Aug 2025
Viewed by 1503
Abstract
Unregulated herbal products marketed via digital platforms present escalating risks to consumer safety and regulatory effectiveness worldwide. This study positions the case of Jindamanee herbal powder—a banned substance under Thai law—as a lens through which to examine broader challenges in digital health governance. Drawing on a dataset of 1546 product listings across major platforms (Facebook, TikTok, Shopee, and Lazada), we applied Latent Dirichlet Allocation (LDA) to identify prevailing promotional themes and compliance gaps. Despite explicit platform policies, 87.6% of listings appeared on Facebook. Medical claims, particularly for pain relief, featured in 77.6% of posts, while only 18.4% included any risk disclosure. These findings suggest a systematic exploitation of regulatory blind spots and consumer health anxieties, facilitated by templated cross-platform messaging. Anchored in Information Manipulation Theory and the Health Belief Model, the analysis offers theoretical insight into how misinformation is structured and sustained within digital commerce ecosystems. The Thai case highlights urgent implications for platform accountability, policy harmonization, and the design of algorithmic surveillance systems in global health product regulation. Full article
(This article belongs to the Section Health Informatics)
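A hedged sketch of LDA topic modeling over listing text with gensim; the toy listings, tokenization, and topic count are illustrative, not the study's corpus or settings.

```python
from gensim import corpora, models

listings = [
    "herbal powder fast pain relief joint ache",
    "traditional remedy pain relief big discount",
    "natural powder safe for elders no side effects",
    "joint pain gone in days order now free shipping",
]
docs = [text.split() for text in listings]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=42)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```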
10 pages, 477 KB  
Article
Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies
by Yifan Zhang and Kuzma Strelnikov
Informatics 2025, 12(3), 83; https://doi.org/10.3390/informatics12030083 - 15 Aug 2025
Viewed by 802
Abstract
Human language comprehension relies on predictive processing; however, the computational mechanisms underlying this phenomenon remain unclear. This study investigates these mechanisms using large language models (LLMs), specifically GPT-3.5-turbo and GPT-4. We conducted a comparison of LLM and human performance on a phrase-completion task under varying levels of contextual cues (high, medium, and low) as defined using human performance, thereby enabling direct AI–human comparisons. Our findings indicate that LLMs significantly outperform humans, particularly in medium- and low-context conditions. While success in medium-context scenarios reflects the efficient utilization of contextual information, performance in low-context situations—where LLMs achieved approximately 25% accuracy compared to just 1% for humans—suggests that the models harness deep linguistic structures beyond mere surface context. This discovery implies that LLMs may elucidate previously unknown aspects of language architecture. The ability of LLMs to exploit deep structural regularities and statistical patterns in medium- and low-predictability contexts offers a novel perspective on the computational architecture of the human language system. Full article
(This article belongs to the Section Human-Computer Interaction)
37 pages, 5086 KB  
Article
Global Embeddings, Local Signals: Zero-Shot Sentiment Analysis of Transport Complaints
by Aliya Nugumanova, Daniyar Rakhimzhanov and Aiganym Mansurova
Informatics 2025, 12(3), 82; https://doi.org/10.3390/informatics12030082 - 14 Aug 2025
Viewed by 1118
Abstract
Public transport agencies must triage thousands of multilingual complaints every day, yet the cost of training and serving fine-grained sentiment analysis models limits real-time deployment. The proposed “one encoder, any facet” framework therefore offers a reproducible, resource-efficient alternative to heavy fine-tuning for domain-specific sentiment analysis or opinion mining tasks on digital service data. To the best of our knowledge, we are the first to test this paradigm on operational multilingual complaints, where public transport agencies must prioritize thousands of Russian- and Kazakh-language messages each day. A human-labelled corpus of 2400 complaints is embedded with five open-source universal models. Obtained embeddings are matched to semantic “anchor” queries that describe three distinct facets: service aspect (eight classes), implicit frustration, and explicit customer request. In the strict zero-shot setting, the best encoder reaches 77% accuracy for aspect detection, 74% for frustration, and 80% for request; taken together, these signals reproduce human four-level priority in 60% of cases. Attaching a single-layer logistic probe on top of the frozen embeddings boosts performance to 89% for aspect, 83–87% for the binary facets, and 72% for end-to-end triage. Compared with recent fine-tuned sentiment analysis systems, our pipeline cuts memory demands by two orders of magnitude and eliminates task-specific training yet narrows the accuracy gap to under five percentage points. These findings indicate that a single frozen encoder, guided by handcrafted anchors and an ultra-light head, can deliver near-human triage quality across multiple pragmatic dimensions, opening the door to low-cost, language-agnostic monitoring of digital-service feedback. Full article
(This article belongs to the Special Issue Practical Applications of Sentiment Analysis)
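The anchor-matching idea can be sketched in a few lines: embed a complaint and a set of facet-describing anchor queries with one frozen encoder, then pick the facet with the highest cosine similarity. The model name and anchor texts below are illustrative assumptions, not the paper's exact anchors.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
anchors = {
    "driver_conduct": "the driver behaved rudely or unsafely",
    "schedule": "the bus was late or did not arrive",
    "fare": "problems paying the fare or being overcharged",
}

def classify(complaint):
    c = model.encode(complaint, convert_to_tensor=True)
    a = model.encode(list(anchors.values()), convert_to_tensor=True)
    scores = util.cos_sim(c, a)[0]               # cosine similarity to anchors
    return list(anchors)[int(scores.argmax())]

print(classify("The bus skipped my stop and arrived 40 minutes late"))
```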
21 pages, 1977 KB  
Article
A Flexible Profile-Based Recommender System for Discovering Cultural Activities in an Emerging Tourist Destination
by Isabel Arregocés-Julio, Andrés Solano-Barliza, Aida Valls, Antonio Moreno, Marysol Castillo-Palacio, Melisa Acosta-Coll and José Escorcia-Gutierrez
Informatics 2025, 12(3), 81; https://doi.org/10.3390/informatics12030081 - 14 Aug 2025
Viewed by 740
Abstract
Recommendation systems applied to tourism are widely recognized for improving the visitor’s experience in tourist destinations, thanks to their ability to personalize the trip. This paper presents a hybrid approach that combines Machine Learning techniques with the Ordered Weighted Averaging (OWA) aggregation operator to achieve greater accuracy in user segmentation and generate personalized recommendations. The data were collected through a questionnaire applied to tourists in the different points of interest of the Special, Tourist and Cultural District of Riohacha. In the first stage, the K-means algorithm defines the segmentation of tourists based on their socio-demographic data and travel preferences. The second stage uses the OWA operator with a disjunctive policy to assign the most relevant cluster given the input data. This hybrid approach provides a recommendation mechanism for tourist destinations and their cultural heritage. Full article
(This article belongs to the Topic The Applications of Artificial Intelligence in Tourism)
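The OWA operator referenced above aggregates a vector of scores by sorting them in descending order and applying position weights; a disjunctive policy places most weight on the highest scores. A minimal sketch with illustrative weights:

```python
def owa(scores, weights):
    # Weights apply to positions of the descending-sorted scores and sum to 1
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))

# Disjunctive-leaning weights emphasize the best-matching cluster signals
print(owa([0.2, 0.9, 0.5], weights=[0.6, 0.3, 0.1]))  # -> 0.71
```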
25 pages, 7900 KB  
Article
Multi-Label Disease Detection in Chest X-Ray Imaging Using a Fine-Tuned ConvNeXtV2 with a Customized Classifier
by Kangzhe Xiong, Yuyun Tu, Xinping Rao, Xiang Zou and Yingkui Du
Informatics 2025, 12(3), 80; https://doi.org/10.3390/informatics12030080 - 14 Aug 2025
Viewed by 1236
Abstract
Deep-learning-based multi-label chest X-ray classification has achieved significant success, but existing models still have three main issues: fixed-scale convolutions fail to capture both large and small lesions, standard pooling lacks attention to important regions, and linear classification lacks the capacity to model complex dependencies between features. To circumvent these obstacles, we propose CONVFCMAE, a lightweight yet powerful framework built on a partially frozen backbone (77.08% of the initial layers are fixed) in order to preserve complex, multi-scale features while decreasing the number of trainable parameters. Our architecture adds (1) an intelligent, learnable global pooling module with 1×1 convolutions that are dynamically weighted by spatial location, (2) a multi-head attention block dedicated to channel re-calibration, and (3) a two-layer MLP enhanced with ReLU, batch normalization, and dropout, which increases the non-linearity of the feature space. To further reduce label noise and the class imbalance inherent to the NIH ChestXray14 dataset, we utilize a combined loss that mixes BCEWithLogits and Focal Loss, as well as extensive data augmentation. On ChestXray14, the average ROC–AUC of CONVFCMAE is 0.852, which is 3.97 percent greater than the state of the art. Ablation experiments demonstrate the individual and collective effectiveness of each component. Grad-CAM visualizations show a superior capacity to localize pathological regions, which increases the interpretability of the model. Overall, CONVFCMAE provides a practical, generalizable solution to feature extraction from medical images. Full article
(This article belongs to the Section Medical and Clinical Informatics)
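The combined loss can be sketched as an interpolation of BCE-with-logits and its focal-weighted variant; alpha, gamma, and the mixing weight below are assumptions, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def bce_focal_loss(logits, targets, alpha=0.25, gamma=2.0, mix=0.5):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                     # probability of the true label
    focal = alpha * (1 - p_t) ** gamma * bce  # down-weight easy examples
    return (mix * bce + (1 - mix) * focal).mean()
```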
17 pages, 1210 KB  
Article
CAMBSRec: A Context-Aware Multi-Behavior Sequential Recommendation Model
by Bohan Zhuang, Yan Lan and Minghui Zhang
Informatics 2025, 12(3), 79; https://doi.org/10.3390/informatics12030079 - 4 Aug 2025
Viewed by 862
Abstract
Multi-behavior sequential recommendation (MBSRec) is a form of sequential recommendation. It leverages users’ historical interaction behavior types to better predict their next actions. This approach fits real-world scenarios better than traditional models do. With the rise of the transformer model, attention mechanisms are widely used in recommendation algorithms. However, they suffer from low-pass filtering, and the simple learnable positional encodings in existing models offer limited performance gains. To address these problems, we introduce the context-aware multi-behavior sequential recommendation model (CAMBSRec). It separately encodes items and behavior types, replaces traditional positional encoding with context-similarity positional encoding, and applies the discrete Fourier transform to separate the high and low frequency components and enhance the high frequency components, countering the low-pass filtering effect. Experiments on three public datasets show that CAMBSRec performs better than five baseline models, demonstrating its advantages in terms of recommendation performance. Full article
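The high-frequency enhancement step can be sketched with a discrete Fourier transform along the sequence axis: components above a cutoff are amplified before transforming back. The cutoff fraction and gain are illustrative assumptions, not CAMBSRec's learned parameters.

```python
import torch

def enhance_high_freq(x, cutoff=0.25, gain=2.0):
    # x: (batch, seq_len, dim) sequence of item embeddings
    spec = torch.fft.rfft(x, dim=1)
    k = int(spec.size(1) * cutoff)
    spec[:, k:, :] *= gain                       # amplify high-frequency bins
    return torch.fft.irfft(spec, n=x.size(1), dim=1)

out = enhance_high_freq(torch.randn(8, 50, 64))  # toy batch
```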
13 pages, 1520 KB  
Article
Designing a Patient Outcome Clinical Assessment Tool for Modified Rankin Scale: “You Feel the Same Way Too”
by Laura London and Noreen Kamal
Informatics 2025, 12(3), 78; https://doi.org/10.3390/informatics12030078 - 4 Aug 2025
Viewed by 759
Abstract
The modified Rankin Scale (mRS) is a widely used outcome measure for assessing disability in stroke care; however, its administration is often affected by subjectivity and variability, leading to poor inter-rater reliability and inconsistent scoring. Originally designed for hospital discharge evaluations, the mRS has evolved into an outcome tool for disability assessment and clinical decision-making. Inconsistencies persist due to a lack of standardization and cognitive biases during its use. This paper presents design principles for creating a standardized clinical assessment tool (CAT) for the mRS, grounded in human–computer interaction (HCI) and cognitive engineering principles. Design principles were informed in part by an anonymous online survey conducted with clinicians across Canada to gain insights into current administration practices, opinions, and challenges of the mRS. The proposed design principles aim to reduce cognitive load, improve inter-rater reliability, and streamline the administration process of the mRS. By focusing on usability and standardization, the design principles seek to enhance scoring consistency and improve the overall reliability of clinical outcomes in stroke care and research. Developing a standardized CAT for the mRS represents a significant step toward improving the accuracy and consistency of stroke disability assessments. Future work will focus on real-world validation with healthcare stakeholders and exploring self-completed mRS assessments to further refine the tool. Full article
24 pages, 756 KB  
Article
Designs and Interactions for Near-Field Augmented Reality: A Scoping Review
by Jacob Hobbs and Christopher Bull
Informatics 2025, 12(3), 77; https://doi.org/10.3390/informatics12030077 - 1 Aug 2025
Viewed by 948
Abstract
Augmented reality (AR), which overlays digital content within the user’s view, is gaining traction across domains such as education, healthcare, manufacturing, and entertainment. The hardware constraints of commercially available HMDs are well acknowledged, but little work addresses what design or interaction techniques developers can employ or build into experiences to work around these limitations. We conducted a scoping literature review with the aim of mapping the current landscape of design principles and interaction techniques employed in near-field AR environments. We searched for literature published between 2016 and 2025 across major databases, including the ACM Digital Library and IEEE Xplore. Studies were included if they explicitly employed design or interaction techniques with a commercially available HMD for near-field AR experiences. A total of 780 articles were returned by the search, but just 7 articles met the inclusion criteria. Our review identifies key themes around how existing techniques are employed and the two competing goals of AR experiences, and we highlight the importance of embodiment in interaction efficacy. We present directions for future research based on and justified by our review. The findings offer a comprehensive overview for researchers, designers, and developers aiming to create more intuitive, effective, and context-aware near-field AR experiences. This review also provides a foundation for future research by outlining underexplored areas and recommending research directions for near-field AR interaction design. Full article
23 pages, 1192 KB  
Article
Multi-Model Dialectical Evaluation of LLM Reasoning Chains: A Structured Framework with Dual Scoring Agents
by Catalin Anghel, Andreea Alexandra Anghel, Emilia Pecheanu, Ioan Susnea, Adina Cocu and Adrian Istrate
Informatics 2025, 12(3), 76; https://doi.org/10.3390/informatics12030076 - 1 Aug 2025
Viewed by 1088
Abstract
(1) Background and objectives: Large language models (LLMs) such as GPT, Mistral, and LLaMA exhibit strong capabilities in text generation, yet assessing the quality of their reasoning—particularly in open-ended and argumentative contexts—remains a persistent challenge. This study introduces Dialectical Agent, an internally developed modular framework designed to evaluate reasoning through a structured three-stage process: opinion, counterargument, and synthesis. The framework enables transparent and comparative analysis of how different LLMs handle dialectical reasoning. (2) Methods: Each stage is executed by a single model, and final syntheses are scored via two independent LLM evaluators (LLaMA 3.1 and GPT-4o) based on a rubric with four dimensions: clarity, coherence, originality, and dialecticality. In parallel, a rule-based semantic analyzer detects rhetorical anomalies and ethical values. All outputs and metadata are stored in a Neo4j graph database for structured exploration. (3) Results: The system was applied to four open-weight models (Gemma 7B, Mistral 7B, Dolphin-Mistral, Zephyr 7B) across ten open-ended prompts on ethical, political, and technological topics. The results show consistent stylistic and semantic variation across models, with moderate inter-rater agreement. Semantic diagnostics revealed differences in value expression and rhetorical flaws not captured by rubric scores. (4) Originality: The framework is, to our knowledge, the first to integrate multi-stage reasoning, rubric-based and semantic evaluation, and graph-based storage into a single system. It enables replicable, interpretable, and multidimensional assessment of generative reasoning—supporting researchers, developers, and educators working with LLMs in high-stakes contexts. Full article
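To make the three-stage protocol concrete, the following is a minimal Python sketch of one opinion → counterargument → synthesis run with dual rubric scoring. It is an illustration under stated assumptions, not the authors’ implementation: `llm_call` is a hypothetical adapter for whatever LLM backend is used, its canned reply only keeps the sketch executable, and the Neo4j storage and rule-based semantic analyzer described in the abstract are omitted.

```python
import re

# Rubric dimensions named in the abstract; everything else here is assumed.
RUBRIC = ("clarity", "coherence", "originality", "dialecticality")

def llm_call(model: str, prompt: str) -> str:
    # Hypothetical backend adapter: replace with a real client call
    # (e.g., a local or hosted model). The canned reply below just
    # keeps the sketch runnable end to end.
    return f"3 - stub reply from {model} to: {prompt[:40]}"

def dialectical_run(model: str, topic: str) -> dict:
    """One opinion -> counterargument -> synthesis pass by a single model."""
    opinion = llm_call(model, f"State and defend a position on: {topic}")
    counter = llm_call(model, f"Now argue against this position:\n{opinion}")
    synthesis = llm_call(
        model,
        "Reconcile the opinion and counterargument into a synthesis:\n"
        f"OPINION: {opinion}\nCOUNTER: {counter}",
    )
    return {"opinion": opinion, "counter": counter, "synthesis": synthesis}

def score(judge: str, text: str) -> dict:
    """Ask one judge model for a 1-5 rating per rubric dimension."""
    out = {}
    for dim in RUBRIC:
        reply = llm_call(judge, f"Rate the {dim} of this text from 1 to 5:\n{text}")
        match = re.search(r"[1-5]", reply)  # naive score extraction
        out[dim] = int(match.group()) if match else None
    return out

run = dialectical_run("mistral:7b", "Should AI systems explain their decisions?")
for judge in ("llama3.1", "gpt-4o"):  # the two evaluators named above
    print(judge, score(judge, run["synthesis"]))
```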
26 pages, 5535 KB  
Article
Research on Power Cable Intrusion Identification Using a GRT-Transformer-Based Distributed Acoustic Sensing (DAS) System
by Xiaoli Huang, Xingcheng Wang, Han Qin and Zhaoliang Zhou
Informatics 2025, 12(3), 75; https://doi.org/10.3390/informatics12030075 - 21 Jul 2025
Cited by 1 | Viewed by 1040
Abstract
To address the high false alarm rate of intrusion detection systems based on distributed acoustic sensing (DAS) for power cables in complex underground environments, an innovative GRT-Transformer multimodal deep learning model is proposed. The core of this model lies in its distinctive three-branch parallel collaborative architecture: two branches employ Gramian Angular Summation Field (GASF) and Recursive Pattern (RP) algorithms to convert one-dimensional intrusion waveforms into two-dimensional images, thereby capturing rich spatial patterns and dynamic characteristics, while the third branch utilizes a Gated Recurrent Unit (GRU) to focus directly on the temporal evolution of the waveform. A Transformer component is additionally integrated to capture the overall trend and global dependencies of the signals. Finally, a Bidirectional Long Short-Term Memory (BiLSTM) network performs a deep fusion of the multidimensional features extracted from the three branches, enabling a comprehensive understanding of the bidirectional temporal dependencies within the data. Experimental validation demonstrates that the GRT-Transformer achieves an average recognition accuracy of 97.3% across three typical intrusion events (illegal tapping, mechanical operations, and vehicle passage), significantly reducing false alarms, surpassing traditional methods, and exhibiting strong practical potential in complex real-world scenarios. Full article
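As a concrete illustration of the image-encoding branches, here is a minimal NumPy sketch of the GASF transform, assuming the standard definition (rescale the signal to [-1, 1], map samples to polar angles, then take pairwise cosine sums); the RP, GRU, Transformer, and BiLSTM components of the model are omitted, and libraries such as pyts provide a ready-made GramianAngularField if preferred.

```python
import numpy as np

def gasf(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D waveform as a 2-D Gramian Angular Summation Field."""
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    x = np.clip(x, -1.0, 1.0)                            # guard rounding drift
    phi = np.arccos(x)                                   # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])           # GASF[i,j] = cos(phi_i + phi_j)

# Toy DAS-like trace: a localized burst on top of noise becomes a 256x256
# image that a convolutional or Transformer branch can consume.
t = np.linspace(0.0, 1.0, 256)
trace = 0.1 * np.random.randn(256) + np.exp(-((t - 0.5) ** 2) / 0.001)
image = gasf(trace)
print(image.shape)  # (256, 256)
```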
15 pages, 2948 KB  
Review
A Comprehensive Review of ChatGPT in Teaching and Learning Within Higher Education
by Samkelisiwe Purity Phokoye, Siphokazi Dlamini, Peggy Pinky Mthalane, Mthokozisi Luthuli and Smangele Pretty Moyane
Informatics 2025, 12(3), 74; https://doi.org/10.3390/informatics12030074 - 21 Jul 2025
Viewed by 2779
Abstract
Artificial intelligence (AI) has become an integral component of various sectors, including higher education. AI, particularly in the form of advanced chatbots like ChatGPT, is increasingly recognized as a valuable tool for engagement in higher education institutions (HEIs). This growing trend highlights the potential of AI to enhance student engagement and, in turn, academic performance. Given this development, it is crucial for HEIs to examine more deeply how AI-driven chatbots can be integrated into educational practices. The aim of this study was to conduct a comprehensive review of the use of ChatGPT in teaching and learning within higher education. To offer a comprehensive viewpoint, the review pursued two primary objectives: to identify the key factors influencing the adoption and acceptance of ChatGPT in higher education, and to investigate the roles of institutional policies and support systems in its acceptance. A bibliometric analysis methodology was employed, and a PRISMA diagram documents the papers included in the analysis. The findings reveal the increasing adoption of ChatGPT within the higher education sector while also identifying the challenges faced during its implementation, ranging from technical issues to educational adaptations. Moreover, this review provides guidelines for various stakeholders to effectively integrate ChatGPT into higher education. Full article
26 pages, 2596 KB  
Article
DFPoLD: A Hard Disk Failure Prediction on Low-Quality Datasets
by Shuting Wei, Xiaoyu Lu, Hongzhang Yang, Chenfeng Tu, Jiangpu Guo, Hailong Sun and Yu Feng
Informatics 2025, 12(3), 73; https://doi.org/10.3390/informatics12030073 - 16 Jul 2025
Viewed by 808
Abstract
Hard disk failure prediction is an important proactive maintenance method for storage systems. Recent years have seen significant progress in hard disk failure prediction using high-quality SMART datasets. In industrial settings, however, data loss often occurs during SMART data collection, transmission, and storage, and existing machine learning-based prediction models perform poorly on the resulting low-quality datasets. This paper therefore proposes a hard disk failure prediction technique designed for low-quality datasets. First, based on the original Backblaze dataset, we construct a low-quality dataset, Backblaze-, by simulating the sector damage seen in real scenarios and deleting 10% to 99% of the data. Time-series features such as the Absolute Sum of First Difference (ASFD) are introduced to amplify the differences between positive and negative samples and to reduce the model’s sensitivity to SMART data loss. Because dataset quality affects time window selection, we propose a time window selection formula that chooses the window length based on the proportion of data loss; we find that the poorer the dataset quality, the longer the window should be. The proposed model achieves a True Positive Rate (TPR) of 99.46%, an AUC of 0.9971, and an F1 score of 0.9871, with a False Positive Rate (FPR) under 0.04%, even with 80% data loss, maintaining performance close to that on the original dataset. Full article
(This article belongs to the Section Big Data Mining and Analytics)
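For reference, here is a small sketch of the ASFD feature mentioned above, assuming the common definition (the sum of absolute consecutive differences, as in tsfresh’s absolute_sum_of_changes); the paper’s exact variant, prediction model, and time-window formula are not reproduced here.

```python
import numpy as np

def asfd(series: np.ndarray) -> float:
    """Absolute Sum of First Difference: sum over t of |x[t+1] - x[t]|.
    Flat, healthy SMART curves score near zero; erratic or degrading
    curves score high, and the value remains computable after dropping
    samples lost during collection, transmission, or storage."""
    x = np.asarray(series, dtype=float)
    x = x[~np.isnan(x)]  # discard missing SMART samples before differencing
    return float(np.abs(np.diff(x)).sum())

healthy = np.full(30, 100.0)                               # stable attribute
failing = 100.0 - np.cumsum(np.abs(np.random.randn(30)))   # steady degradation
print(asfd(healthy), asfd(failing))  # 0.0 vs. a clearly larger value
```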
24 pages, 1618 KB  
Review
Design Requirements of Breast Cancer Symptom-Management Apps
by Xinyi Huang, Amjad Fayoumi, Emily Winter and Anas Najdawi
Informatics 2025, 12(3), 72; https://doi.org/10.3390/informatics12030072 - 15 Jul 2025
Viewed by 1324
Abstract
Many breast cancer patients follow a self-managed treatment pathway, which may lead to gaps in the data available to healthcare professionals, such as information about patients’ everyday symptoms at home. Mobile apps have the potential to bridge this information gap, leading to more effective treatments and interventions, as well as helping breast cancer patients monitor and manage their symptoms. In this paper, we elicit design requirements for breast cancer symptom-management mobile apps through a systematic review following the PRISMA framework. We then evaluate existing cancer symptom-management apps found on the Apple App Store according to the extent to which they meet these requirements. We find that, whilst some requirements are well supported (such as functionality to record multiple symptoms and provision of information), others are currently not being met, particularly interoperability, functionality related to responses from healthcare professionals, and personalisation. Much work is needed before cancer patients and healthcare professionals can experience the full benefits of digital health innovation. The article demonstrates a formal requirements model, in which requirements are categorised as functional and non-functional, and presents a proposed conceptual design for future mobile apps. Full article
(This article belongs to the Section Health Informatics)
20 pages, 1550 KB  
Article
Strategy for Precopy Live Migration and VM Placement in Data Centers Based on Hybrid Machine Learning
by Taufik Hidayat, Kalamullah Ramli and Ruki Harwahyu
Informatics 2025, 12(3), 71; https://doi.org/10.3390/informatics12030071 - 15 Jul 2025
Viewed by 1178
Abstract
Data center virtualization has grown rapidly alongside the expansion of application-based services but continues to face significant challenges, such as downtime caused by suboptimal hardware selection, load balancing, power management, incident response, and resource allocation. To address these challenges, this study proposes a hybrid machine learning method that uses a Markov Decision Process (MDP) to choose which VMs to migrate, a Random Forest (RF) to rank the VMs by load, and NSGA-III to pursue multiple optimization objectives: reducing downtime, improving SLA compliance, and increasing energy efficiency. Evaluated on the GWA-Bitbrains dataset, the model achieved a classification accuracy of 98.77%, a mean absolute percentage error (MAPE) of 7.69% in predicting migration duration, and an energy efficiency improvement of 90.80%. Real-world experiments show that the hybrid strategy can significantly reduce the data center workload, increase the total migration time, and decrease the downtime. These results affirm the effectiveness of integrating the MDP, RF, and NSGA-III to provide holistic VM placement strategies for large-scale data centers. Full article
(This article belongs to the Section Machine Learning)
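To illustrate the NSGA-III leg of this hybrid in isolation, here is a toy sketch using the pymoo library; the ten-VM/five-host problem, the three objective proxies (peak host load for downtime, overloaded hosts for SLA breaches, powered-on hosts for energy), and all parameters are invented placeholders rather than the paper’s formulation, and the MDP and RF stages are omitted.

```python
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions

class VmPlacement(ElementwiseProblem):
    """Assign 10 VMs a (relaxed, continuous) host index in [0, 4]."""
    def __init__(self):
        super().__init__(n_var=10, n_obj=3, xl=0.0, xu=4.0)

    def _evaluate(self, x, out, *args, **kwargs):
        hosts = np.round(x).astype(int)
        load = np.bincount(hosts, minlength=5)   # VMs per host
        downtime = float(load.max())             # hot spots prolong migration stalls
        sla = float(np.sum(load > 3))            # overloaded hosts breach the SLA
        energy = float(np.count_nonzero(load))   # powered-on hosts draw energy
        out["F"] = [downtime, sla, energy]

# NSGA-III needs structured reference directions over the 3 objectives.
ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=6)
algorithm = NSGA3(pop_size=64, ref_dirs=ref_dirs)
result = minimize(VmPlacement(), algorithm, ("n_gen", 40), seed=1, verbose=False)
print(result.F[:5])  # a slice of the Pareto front over the three objectives
```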