Journal Description
Informatics
Informatics is an international, peer-reviewed, open access journal on information and communication technologies, human–computer interaction, and social informatics, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, and other databases.
- Journal Rank: CiteScore - Q1 (Communication)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 34.9 days after submission; acceptance to publication takes 4.7 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.8 (2024); 5-Year Impact Factor: 3.1 (2024)
Latest Articles
Simulating the Effects of Sensor Failures on Autonomous Vehicles for Safety Evaluation
Informatics 2025, 12(3), 94; https://doi.org/10.3390/informatics12030094 - 15 Sep 2025
Abstract
Autonomous vehicles (AVs) are increasingly becoming a reality, enabled by advances in sensing technologies, intelligent control systems, and real-time data processing. For AVs to operate safely and effectively, they must maintain a reliable perception of their surroundings and internal state. However, sensor failures, whether due to noise, malfunction, or degradation, can compromise this perception and lead to incorrect localization or unsafe decisions by the autonomous control system. While modern AV systems often combine data from multiple sensors to mitigate such risks through sensor fusion techniques (e.g., Kalman filtering), the extent to which these systems remain resilient under faulty conditions remains an open question. This work presents a simulation-based fault injection framework to assess the impact of sensor failures on AVs’ behavior. The framework enables structured testing of autonomous driving software under controlled fault conditions, allowing researchers to observe how specific sensor failures affect system performance. To demonstrate its applicability, an experimental campaign was conducted using the CARLA simulator integrated with the Autoware autonomous driving stack. A multi-segment urban driving scenario was executed using a modified version of CARLA’s Scenario Runner to support Autoware-based evaluations. Faults were injected simulating LiDAR, GNSS, and IMU sensor failures in different route scenarios. The fault types considered in this study include silent sensor failures and severe noise. The results obtained by emulating sensor failures in our chosen system under test, Autoware, show that faults in LiDAR and IMU gyroscope have the most critical impact, often leading to erratic motion and collisions. In contrast, faults in GNSS and IMU accelerometers were well tolerated. This demonstrates the ability of the framework to investigate the fault-tolerance of AVs in the presence of critical sensor failures.
Full article
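The two fault types the abstract names (silent sensor failures and severe noise) can be pictured as simple transformations on a sensor stream. A minimal sketch, with hypothetical 1-D readings and an illustrative noise scale, not the paper's actual injection mechanism:

```python
# Hedged sketch of the two fault models described in the abstract:
# a silent failure (the sensor stops reporting) and severe noise
# (large random perturbation of every sample). Values are illustrative.
import random

def inject_fault(readings, mode, noise_scale=5.0, seed=0):
    rng = random.Random(seed)
    if mode == "silent":
        # Silent failure: no valid updates reach the fusion/localization stack.
        return [None for _ in readings]
    if mode == "noise":
        # Severe noise: additive Gaussian perturbation on each sample.
        return [r + rng.gauss(0.0, noise_scale) for r in readings]
    return list(readings)  # no fault injected

clean = [1.0, 1.1, 1.2, 1.3]
print(inject_fault(clean, "silent"))  # [None, None, None, None]
print(inject_fault(clean, "noise"))
```

In a framework like the one described, such transformations would be applied to simulated LiDAR, GNSS, or IMU channels before they reach the driving stack under test.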
Open Access Article
Federated Learning Spam Detection Based on FedProx and Multi-Level Multi-Feature Fusion
by Yunpeng Xiong, Junkuo Cao and Guolian Chen
Informatics 2025, 12(3), 93; https://doi.org/10.3390/informatics12030093 - 12 Sep 2025
Abstract
Traditional spam detection methodologies often neglect user privacy preservation, potentially incurring data leakage risks. Furthermore, current federated learning models for spam detection face several critical challenges: (1) data heterogeneity and instability during server-side parameter aggregation, (2) training instability in single neural network architectures leading to mode collapse, and (3) constrained expressive capability in multi-module frameworks due to excessive complexity. These issues represent fundamental research pain points in federated learning-based spam detection systems. To address this technical challenge, this study innovatively integrates federated learning frameworks with multi-feature fusion techniques to propose a novel spam detection model, FPW-BC. The FPW-BC model addresses data distribution imbalance through the FedProx aggregation algorithm and enhances stability during server-side parameter aggregation via a horse-racing selection strategy. The model effectively mitigates limitations inherent in both single and multi-module architectures through hierarchical multi-feature fusion. To validate FPW-BC’s performance, comprehensive experiments were conducted on six benchmark datasets with distinct distribution characteristics: CEAS, Enron, Ling, Phishing_email, Spam_email, and Fake_phishing, with comparative analysis against multiple baseline methods. Experimental results demonstrate that FPW-BC achieves exceptional generalization capability for various spam patterns while maintaining user privacy preservation. The model attained 99.40% accuracy on CEAS and 99.78% on Fake_phishing, representing significant dual improvements in both privacy protection and detection efficiency.
Full article
(This article belongs to the Topic Recent Advances in Artificial Intelligence for Security and Security for Artificial Intelligence)
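The FedProx aggregation the abstract relies on can be sketched in miniature: each client minimizes its local loss plus a proximal term (mu/2)·||w − w_global||², and the server averages the returned weights by client data size. Scalar weights and quadratic toy losses below are illustrative, not the FPW-BC model itself:

```python
# Hedged FedProx sketch: local objective = local loss + (mu/2)*(w - w_global)^2.
# A quadratic toy local loss (1/2)*(w - local_optimum)^2 stands in for the
# real spam-detection loss; all values are illustrative.
def local_update(w_global, local_optimum, mu, lr=0.1, steps=100):
    w = w_global
    for _ in range(steps):
        # gradient of local loss plus gradient of the proximal term
        grad = (w - local_optimum) + mu * (w - w_global)
        w -= lr * grad
    return w

def server_aggregate(client_weights, client_sizes):
    # FedAvg-style weighted average of client parameters
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

clients = [local_update(0.0, opt, mu=1.0) for opt in (2.0, -1.0, 3.0)]
print(server_aggregate(clients, [100, 50, 50]))  # ≈ 0.75
```

The proximal term keeps heterogeneous clients from drifting far from the global model, which is how FedProx addresses the data-imbalance problem the abstract mentions.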
Open Access Article
A Data-Driven Informatics Framework for Regional Sustainability: Integrating Twin Mean-Variance Two-Stage DEA with Decision Analytics
by Pasura Aungkulanon, Roberto Montemanni, Atiwat Nanphang and Pongchanun Luangpaiboon
Informatics 2025, 12(3), 92; https://doi.org/10.3390/informatics12030092 - 11 Sep 2025
Abstract
This study introduces a novel informatics framework for assessing regional sustainability by integrating Twin Mean-Variance Two-Stage Data Envelopment Analysis (TMV-TSDEA) with a desirability-based decision analytics system. The model evaluates both the efficiency and stability of economic and environmental performance across regions, supporting evidence-based policymaking and strategic planning. Applied to 16 Thai provinces, the framework incorporates a wide range of indicators—such as investment, population, tourism, industrial output, electricity use, forest coverage, and air quality. The twin mean-variance approach captures not only average efficiency but also the consistency of performance over time or under varying scenarios. A two-stage DEA structure models the transformation from economic inputs to environmental outcomes. To ensure comparability, all variables are normalized using desirability functions based on standardized statistical coding. The TMV-TSDEA framework generates composite performance scores that reveal clear disparities among regions. Provinces like Bangkok and Ayutthaya demonstrate consistently high performance, while others show underperformance or variability requiring targeted policy action. Designed for integration with smart governance platforms, the framework provides a scalable and reproducible tool for regional benchmarking, resource allocation, and sustainability monitoring. By combining informatics principles with advanced analytics, TMV-TSDEA enhances transparency, supports decision-making, and offers a holistic foundation for sustainable regional development.
Full article
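The desirability normalization the abstract mentions maps each indicator onto [0, 1], with the direction depending on whether larger values are desirable (e.g., forest coverage) or undesirable (e.g., air pollution). A minimal sketch with illustrative ranges:

```python
# Hedged sketch of linear desirability normalization: indicator value x
# on range [lo, hi] is mapped to [0, 1], reversed when smaller is better.
# Ranges and values are illustrative, not the paper's calibration.
def desirability(x, lo, hi, larger_is_better=True):
    d = (x - lo) / (hi - lo)
    d = min(max(d, 0.0), 1.0)  # clip outside the stated range
    return d if larger_is_better else 1.0 - d

print(desirability(75, 0, 100))                          # forest coverage: 0.75
print(desirability(30, 0, 100, larger_is_better=False))  # pollution index: 0.7
```

Normalizing all indicators this way makes provinces with mixed units directly comparable before the DEA stage.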
Open Access Article
Do Trusting Belief and Social Presence Matter? Service Satisfaction in Using AI Chatbots: Necessary Condition Analysis and Importance-Performance Map Analysis
by Tai Ming Wut, Stephanie Wing Lee, Jing (Bill) Xu and Man Lung Jonathan Kwok
Informatics 2025, 12(3), 91; https://doi.org/10.3390/informatics12030091 - 9 Sep 2025
Abstract
Research indicates that perceived trust affects both behavioral intention to use chatbots and satisfaction with the service provided by chatbots in customer service contexts. However, it remains unclear whether perceived propensity to trust impacts service satisfaction in this context. Thus, this research aims to explore how customers’ propensity to trust influences trusting beliefs and, subsequently, their satisfaction when using chatbots for customer service. Through purposive sampling, individuals in Hong Kong with prior experience using chatbots were selected to participate in a quantitative survey. The study employed Necessary Condition Analysis, Importance-Performance Map Analysis, and Partial Least Squares Structural Equation Modelling to examine factors influencing users’ trusting beliefs toward chatbots in customer service settings. Findings revealed that trust in chatbot interactions is significantly influenced by propensity to trust technology, social presence, perceived usefulness, and perceived ease of use. Consequently, these factors, along with trusting belief, also influence service satisfaction in this context; Social Presence, Perceived Ease of Use, Propensity to Trust, Perceived Usefulness, and Trusting Belief were all found to be necessary conditions. Importance-Performance Map Analysis identified priority areas for managerial action. This research extends the Technology Acceptance Model by incorporating social presence, propensity to trust technology, and trusting belief in the context of AI chatbot use for customer service.
Full article
Open Access Article
Digitizing the Higaonon Language: A Mobile Application for Indigenous Preservation in the Philippines
by Danilyn Abingosa, Paul Bokingkito, Jr., Sittie Noffaisah Pasandalan, Jay Rey Gosnell Alovera and Jed Otano
Informatics 2025, 12(3), 90; https://doi.org/10.3390/informatics12030090 - 8 Sep 2025
Abstract
This research addresses the critical need for language preservation among the Higaonon indigenous community in Mindanao, Philippines, through the development of a culturally responsive mobile dictionary application. The Higaonon language faces significant endangerment due to generational language shift, limited documentation, and a scarcity of educational materials. Employing user-centered design principles and participatory lexicography, this study involved collaboration with tribal elders, educators, and youth to document and digitize Higaonon vocabulary across ten culturally significant semantic domains. Each Higaonon lexeme was translated into English, Filipino, and Cebuano to enhance comprehension across linguistic groups. The resulting mobile application incorporates multilingual search capabilities, offline access, phonetic transcriptions, example sentences, and culturally relevant design elements. An evaluation conducted with 30 participants (15 Higaonon and 15 non-Higaonon speakers) revealed high satisfaction ratings across functionality (4.81/5.0), usability (4.63/5.0), and performance (4.73/5.0). Offline accessibility emerged as the most valued feature (4.93/5.0), while comparative analysis identified meaningful differences in user experience between native and non-native speakers, with Higaonon users providing more critical assessments particularly regarding font readability and performance optimization. The application demonstrates how community-driven technological interventions can support indigenous language revitalization while respecting cultural integrity, intellectual property rights, and addressing practical community needs. This research establishes a framework for ethical indigenous language documentation that prioritizes community self-determination and provides empirical evidence that culturally responsive digital technologies can effectively preserve endangered languages while serving as repositories for cultural knowledge embedded within linguistic systems.
Full article
Open Access Article
Tourist Flow Prediction Based on GA-ACO-BP Neural Network Model
by Xiang Yang, Yongliang Cheng, Minggang Dong and Xiaolan Xie
Informatics 2025, 12(3), 89; https://doi.org/10.3390/informatics12030089 - 3 Sep 2025
Abstract
Tourist flow prediction plays a crucial role in enhancing the efficiency of scenic area management, optimizing resource allocation, and promoting the sustainable development of the tourism industry. To improve the accuracy and real-time performance of tourist flow prediction, we propose a BP model based on a hybrid genetic algorithm (GA) and ant colony optimization algorithm (ACO), called the GA-ACO-BP model. First, we comprehensively considered multiple key factors related to tourist flow, including historical tourist flow data (such as tourist flow from yesterday, the previous day, and the same period last year), holiday types, climate comfort, and search popularity index on online map platforms. Second, to address the tendency of the BP model to get easily stuck in local optima, we introduce the GA, which has excellent global search capabilities. Finally, to improve local convergence speed, we further introduce the ACO algorithm. The experimental results based on tourist flow data from the Elephant Trunk Hill Scenic Area in Guilin indicate that the GA-ACO-BP model achieves optimal values for key tourist flow prediction metrics such as MAPE, RMSE, MAE, and R2, compared to commonly used prediction models. These values are 4.09%, 426.34, 258.80, and 0.98795, respectively. Compared to the initial BP neural network, the improved GA-ACO-BP model reduced error metrics such as MAPE, RMSE, and MAE by 1.12%, 244.04, and 122.91, respectively, and increased the R2 metric by 1.85%.
Full article
(This article belongs to the Topic The Applications of Artificial Intelligence in Tourism)
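The four evaluation metrics quoted in the abstract (MAPE, RMSE, MAE, R2) have standard definitions that can be computed directly; a stdlib sketch with toy arrays standing in for the Guilin data:

```python
# Standard-definition implementations of the metrics named in the abstract.
# y_true must be nonzero for MAPE (division by the true value).
import math

def metrics(y_true, y_pred):
    n = len(y_true)
    mape = 100.0 / n * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred))
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot  # coefficient of determination
    return mape, rmse, mae, r2

print(metrics([100.0, 200.0, 300.0], [110.0, 190.0, 310.0]))
```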
Open Access Article
Preliminary Design Guidelines for Evaluating Immersive Industrial Safety Training
by André Cordeiro, Regina Leite, Lucas Almeida, Cintia Neves, Tiago Silva, Alexandre Siqueira, Marcio Catapan and Ingrid Winkler
Informatics 2025, 12(3), 88; https://doi.org/10.3390/informatics12030088 - 1 Sep 2025
Abstract
This study presents preliminary design guidelines to support the evaluation of industrial safety training using immersive technologies, with a focus on high-risk work environments such as working at height. Although virtual reality has been widely adopted for training, few studies have explored its use for behavior-level evaluation, corresponding to Level 3 of the Kirkpatrick Model. Addressing this gap, the study adopts the Design Science Research methodology, combining a systematic literature review with expert focus group analysis to develop a conceptual framework for training evaluation. The results identify the key elements necessary for immersive training evaluations, and the resulting guidelines are organized into six categories: scenario configuration, ethical procedures, recruitment, equipment selection, experimental design, and implementation strategies. These guidelines represent a DSR-based conceptual artifact to inform future empirical studies and support the structured assessment of immersive safety training interventions. The study also highlights the potential of integrating behavioral and physiological indicators to support immersive evaluations of behavioral change, offering an expert-informed and structured foundation for future empirical studies in high-risk industrial contexts.
Full article
(This article belongs to the Special Issue Real-World Applications and Prototyping of Information Systems for Extended Reality (VR, AR, and MR))
Open Access Article
Analysis and Forecasting of Cryptocurrency Markets Using Bayesian and LSTM-Based Deep Learning Models
by Bidesh Biswas Biki, Makoto Sakamoto, Amane Takei, Md. Jubirul Alam, Md. Riajuliislam and Showaibuzzaman Showaibuzzaman
Informatics 2025, 12(3), 87; https://doi.org/10.3390/informatics12030087 - 30 Aug 2025
Abstract
The rapid rise of cryptocurrency prices has intensified the need for robust forecasting models that can capture their irregular and volatile patterns. This study aims to forecast Bitcoin prices over a 15-day horizon by evaluating and comparing two distinct predictive modeling approaches: the Bayesian State-Space model and Long Short-Term Memory (LSTM) neural networks. Historical price data from January 2024 to April 2025 is used for model training and testing. The Bayesian model provided probabilistic insights, achieving a Mean Squared Error (MSE) of 0.0000 and a Mean Absolute Error (MAE) of 0.0026 on training data, and an MSE of 0.0013 and MAE of 0.0307 on testing data. The LSTM model, in turn, captured temporal dependencies and performed strongly, achieving an MSE of 0.0004, MAE of 0.0160, RMSE of 0.0212, and R2 of 0.9924 on training data, and an MSE of 0.0007 with an R2 of 0.3505 on testing data. These results indicate that while the LSTM model excels in training performance, the Bayesian model provides better interpretability with lower error margins in testing, highlighting the trade-offs between model accuracy and probabilistic forecasting in cryptocurrency markets.
Full article
Open Access Review
The Temporal Evolution of Large Language Model Performance: A Comparative Analysis of Past and Current Outputs in Scientific and Medical Research
by Ishith Seth, Gianluca Marcaccini, Bryan Lim, Jennifer Novo, Stephen Bacchi, Roberto Cuomo, Richard J. Ross and Warren M. Rozen
Informatics 2025, 12(3), 86; https://doi.org/10.3390/informatics12030086 - 26 Aug 2025
Abstract
Background: Large language models (LLMs) such as ChatGPT have evolved rapidly, with notable improvements in coherence, factual accuracy, and contextual relevance. However, their academic and clinical applicability remains under scrutiny. This study evaluates the temporal performance evolution of LLMs by comparing earlier model outputs (GPT-3.5 and GPT-4.0) with ChatGPT-4.5 across three domains: aesthetic surgery counseling, an academic discussion base of thumb arthritis, and a systematic literature review. Methods: We replicated the methodologies of three previously published studies using identical prompts in ChatGPT-4.5. Each output was assessed against its predecessor using a nine-domain Likert-based rubric measuring factual accuracy, completeness, reference quality, clarity, clinical insight, scientific reasoning, bias avoidance, utility, and interactivity. Expert reviewers in plastic and reconstructive surgery independently scored and compared model outputs across versions. Results: ChatGPT-4.5 outperformed earlier versions across all domains. Reference quality improved most significantly (a score increase of +4.5), followed by factual accuracy (+2.5), scientific reasoning (+2.5), and utility (+2.5). In aesthetic surgery counseling, GPT-3.5 produced generic responses lacking clinical detail, whereas ChatGPT-4.5 offered tailored, structured, and psychologically sensitive advice. In academic writing, ChatGPT-4.5 eliminated reference hallucination, correctly applied evidence hierarchies, and demonstrated advanced reasoning. In the literature review, recall remained suboptimal, but precision, citation accuracy, and contextual depth improved substantially. Conclusion: ChatGPT-4.5 represents a major step forward in LLM capability, particularly in generating trustworthy academic and clinical content. While not yet suitable as a standalone decision-making tool, its outputs now support research planning and early-stage manuscript preparation. Persistent limitations include information recall and interpretive flexibility. Continued validation is essential to ensure ethical, effective use in scientific workflows.
Full article
Open Access Article
Human-AI Symbiotic Theory (HAIST): Development, Multi-Framework Assessment, and AI-Assisted Validation in Academic Research
by Laura Thomsen Morello and John C. Chick
Informatics 2025, 12(3), 85; https://doi.org/10.3390/informatics12030085 - 25 Aug 2025
Abstract
This study introduces the Human-AI Symbiotic Theory (HAIST), designed to guide authentic collaboration between human researchers and artificial intelligence in academic contexts, while pioneering a novel AI-assisted approach to theory validation that transforms educational research methodology. Addressing critical gaps in educational theory and advancing validation practices, this research employed a sequential three-phase mixed-methods approach: (1) systematic theoretical synthesis integrating five paradigmatic perspectives across learning theory, cognition, information processing, ethics, and AI domains; (2) development of an innovative validation framework combining three established theory-building approaches with groundbreaking AI-assisted content assessment protocols; and (3) comprehensive theory validation through both traditional multi-framework evaluation and novel AI-based content analysis demonstrating unprecedented convergent validity. This research contributes both a theoretically grounded framework for human-AI research collaboration and a transformative methodological innovation demonstrating how AI tools can systematically augment traditional expert-driven theory validation. HAIST provides the first comprehensive theoretical foundation designed explicitly for human-AI partnerships in scholarly research with applicability across disciplines, while the AI-assisted validation methodology offers a scalable, reliable model for theory development. Future research directions include empirical testing of HAIST principles in live research settings and broader application of the AI-assisted validation methodology to accelerate theory development across educational research and related disciplines.
Full article
(This article belongs to the Special Issue Generative AI in Higher Education: Applications, Implications, and Future Directions)
Open Access Article
Marketing a Banned Remedy: A Topic Model Analysis of Health Misinformation in Thai E-Commerce
by Kanitsorn Suriyapaiboonwattana, Yuttana Jaroenruen, Saiphit Satjawisate, Kate Hone, Panupong Puttarak, Nattapong Kaewboonma, Puriwat Lertkrai and Siwanath Nantapichai
Informatics 2025, 12(3), 84; https://doi.org/10.3390/informatics12030084 - 18 Aug 2025
Abstract
Unregulated herbal products marketed via digital platforms present escalating risks to consumer safety and regulatory effectiveness worldwide. This study positions the case of Jindamanee herbal powder—a banned substance under Thai law—as a lens through which to examine broader challenges in digital health governance. Drawing on a dataset of 1546 product listings across major platforms (Facebook, TikTok, Shopee, and Lazada), we applied Latent Dirichlet Allocation (LDA) to identify prevailing promotional themes and compliance gaps. Despite explicit platform policies, 87.6% of listings appeared on Facebook. Medical claims, particularly for pain relief, featured in 77.6% of posts, while only 18.4% included any risk disclosure. These findings suggest a systematic exploitation of regulatory blind spots and consumer health anxieties, facilitated by templated cross-platform messaging. Anchored in Information Manipulation Theory and the Health Belief Model, the analysis offers theoretical insight into how misinformation is structured and sustained within digital commerce ecosystems. The Thai case highlights urgent implications for platform accountability, policy harmonization, and the design of algorithmic surveillance systems in global health product regulation.
Full article
(This article belongs to the Section Health Informatics)
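LDA itself requires a topic-modeling library, but the simpler compliance statistics the abstract reports (the share of listings making medical claims versus disclosing risks) can be sketched with the standard library alone. The listings and keyword sets below are hypothetical, not the Jindamanee dataset:

```python
# Hedged sketch of keyword-based compliance statistics over product listings,
# mirroring the kind of percentages reported in the abstract. Data and
# keyword lists are illustrative.
CLAIM_WORDS = {"pain", "cure", "relief"}          # medical-claim markers
RISK_WORDS = {"warning", "side effects"}          # risk-disclosure markers

def share_with(listings, words):
    # percentage of listings containing at least one marker phrase
    hits = sum(any(w in text.lower() for w in words) for text in listings)
    return 100.0 * hits / len(listings)

listings = [
    "Instant pain relief powder",
    "Traditional remedy, no warning needed",
    "Cure for joint aches",
    "Herbal powder, read side effects",
]
print(share_with(listings, CLAIM_WORDS))  # 50.0 (% with medical claims)
print(share_with(listings, RISK_WORDS))   # 50.0 (% with risk disclosure)
```

In the study, topic modeling plays the role these keyword lists play here, discovering the promotional themes rather than assuming them in advance.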
Open Access Article
Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies
by Yifan Zhang and Kuzma Strelnikov
Informatics 2025, 12(3), 83; https://doi.org/10.3390/informatics12030083 - 15 Aug 2025
Abstract
Human language comprehension relies on predictive processing; however, the computational mechanisms underlying this phenomenon remain unclear. This study investigates these mechanisms using large language models (LLMs), specifically GPT-3.5-turbo and GPT-4. We conducted a comparison of LLM and human performance on a phrase-completion task under varying levels of contextual cues (high, medium, and low) as defined using human performance, thereby enabling direct AI–human comparisons. Our findings indicate that LLMs significantly outperform humans, particularly in medium- and low-context conditions. While success in medium-context scenarios reflects the efficient utilization of contextual information, performance in low-context situations—where LLMs achieved approximately 25% accuracy compared to just 1% for humans—suggests that the models harness deep linguistic structures beyond mere surface context. This discovery implies that LLMs may elucidate previously unknown aspects of language architecture. The ability of LLMs to exploit deep structural regularities and statistical patterns in medium- and low-predictability contexts offers a novel perspective on the computational architecture of the human language system.
Full article
(This article belongs to the Section Human-Computer Interaction)
Open Access Article
Global Embeddings, Local Signals: Zero-Shot Sentiment Analysis of Transport Complaints
by Aliya Nugumanova, Daniyar Rakhimzhanov and Aiganym Mansurova
Informatics 2025, 12(3), 82; https://doi.org/10.3390/informatics12030082 - 14 Aug 2025
Abstract
Public transport agencies must triage thousands of multilingual complaints every day, yet the cost of training and serving fine-grained sentiment analysis models limits real-time deployment. The proposed “one encoder, any facet” framework therefore offers a reproducible, resource-efficient alternative to heavy fine-tuning for domain-specific sentiment analysis or opinion mining tasks on digital service data. To the best of our knowledge, we are the first to test this paradigm on operational multilingual complaints, where public transport agencies must prioritize thousands of Russian- and Kazakh-language messages each day. A human-labelled corpus of 2400 complaints is embedded with five open-source universal models. Obtained embeddings are matched to semantic “anchor” queries that describe three distinct facets: service aspect (eight classes), implicit frustration, and explicit customer request. In the strict zero-shot setting, the best encoder reaches 77% accuracy for aspect detection, 74% for frustration, and 80% for request; taken together, these signals reproduce human four-level priority in 60% of cases. Attaching a single-layer logistic probe on top of the frozen embeddings boosts performance to 89% for aspect, 83–87% for the binary facets, and 72% for end-to-end triage. Compared with recent fine-tuned sentiment analysis systems, our pipeline cuts memory demands by two orders of magnitude and eliminates task-specific training yet narrows the accuracy gap to under five percentage points. These findings indicate that a single frozen encoder, guided by handcrafted anchors and an ultra-light head, can deliver near-human triage quality across multiple pragmatic dimensions, opening the door to low-cost, language-agnostic monitoring of digital-service feedback.
Full article
(This article belongs to the Special Issue Practical Applications of Sentiment Analysis)
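The zero-shot anchor-matching step the abstract describes amounts to assigning each complaint embedding the facet class of its most similar "anchor" query embedding. A minimal sketch, with toy 3-D vectors standing in for real sentence-encoder outputs and hypothetical class labels:

```python
# Hedged sketch of zero-shot classification by cosine similarity to
# anchor-query embeddings. Vectors and labels are illustrative stand-ins
# for real encoder outputs and the paper's facet classes.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(embedding, anchors):
    # anchors: mapping from class label to its anchor-query embedding
    return max(anchors, key=lambda label: cosine(embedding, anchors[label]))

anchors = {
    "schedule": [1.0, 0.0, 0.1],
    "driver_conduct": [0.0, 1.0, 0.1],
}
print(classify([0.9, 0.2, 0.0], anchors))  # schedule
```

The "single-layer logistic probe" mentioned in the abstract would replace this argmax with a trained linear classifier over the same frozen embeddings.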
Open Access Article
A Flexible Profile-Based Recommender System for Discovering Cultural Activities in an Emerging Tourist Destination
by Isabel Arregocés-Julio, Andrés Solano-Barliza, Aida Valls, Antonio Moreno, Marysol Castillo-Palacio, Melisa Acosta-Coll and José Escorcia-Gutierrez
Informatics 2025, 12(3), 81; https://doi.org/10.3390/informatics12030081 - 14 Aug 2025
Abstract
Recommendation systems applied to tourism are widely recognized for improving the visitor’s experience in tourist destinations, thanks to their ability to personalize the trip. This paper presents a hybrid approach that combines Machine Learning techniques with the Ordered Weighted Averaging (OWA) aggregation operator to achieve greater accuracy in user segmentation and generate personalized recommendations. The data were collected through a questionnaire administered to tourists at different points of interest in the Special, Tourist and Cultural District of Riohacha. In the first stage, the K-means algorithm segments tourists based on their socio-demographic data and travel preferences. The second stage uses the OWA operator with a disjunctive policy to assign the most relevant cluster given the input data. This hybrid approach provides a recommendation mechanism for tourist destinations and their cultural heritage.
Full article
(This article belongs to the Topic The Applications of Artificial Intelligence in Tourism)
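The second stage described above, OWA aggregation with a disjunctive policy, can be illustrated with a short sketch. OWA applies its weight vector to the input values after sorting them in descending order; a disjunctive ("or-like") policy concentrates weight on the largest values, so a strong match on any single criterion is enough to rank a cluster highly. The weights and cluster scores below are hypothetical, as the paper's actual values are not reproduced here.

```python
def owa(values, weights):
    """Ordered Weighted Averaging: weights apply to values sorted descending."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Disjunctive policy: most of the weight goes to the highest scores.
disjunctive = [0.6, 0.3, 0.1]

# Hypothetical per-criterion membership scores of one tourist for each cluster.
cluster_scores = {"cultural": [0.9, 0.2, 0.1], "beach": [0.5, 0.5, 0.4]}
best = max(cluster_scores, key=lambda c: owa(cluster_scores[c], disjunctive))
```

Here "cultural" wins despite two weak criteria, because the disjunctive weights reward its single strong score.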
Open Access Article
Multi-Label Disease Detection in Chest X-Ray Imaging Using a Fine-Tuned ConvNeXtV2 with a Customized Classifier
by
Kangzhe Xiong, Yuyun Tu, Xinping Rao, Xiang Zou and Yingkui Du
Informatics 2025, 12(3), 80; https://doi.org/10.3390/informatics12030080 - 14 Aug 2025
Abstract
Deep-learning-based multi-label chest X-ray classification has achieved significant success, but existing models still have three main issues: fixed-scale convolutions fail to capture both large and small lesions, standard pooling pays no attention to important regions, and linear classifiers lack the capacity to model complex dependencies between features. To circumvent these obstacles, we propose CONVFCMAE, a lightweight yet powerful framework built on a partially frozen backbone (77.08% of the initial layers are fixed) in order to preserve complex, multi-scale features while decreasing the number of trainable parameters. Our architecture adds (1) a learnable global pooling module whose convolutions are dynamically weighted by spatial location, (2) a multi-head attention block dedicated to channel re-calibration, and (3) a two-layer MLP enhanced with ReLU, batch normalization, and dropout to enrich the non-linearity of the feature space. To further reduce label noise and the class imbalance inherent to the NIH ChestXray14 dataset, we utilize a combined loss that couples BCEWithLogits and Focal Loss, along with extensive data augmentation. On ChestXray14, the average ROC–AUC of CONVFCMAE is 0.852, 3.97 percent greater than the state of the art. Ablation experiments demonstrate the individual and collective effectiveness of each component. Grad-CAM visualizations localize pathological regions well, which increases the interpretability of the model. Overall, CONVFCMAE provides a practical, generalizable solution for feature extraction from medical images.
Full article
(This article belongs to the Section Medical and Clinical Informatics)
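The combined loss mentioned in the abstract above can be made concrete with a minimal scalar sketch of BCE-with-logits plus focal loss, written in plain Python rather than a deep-learning framework. The mixing weight `lam` is an assumption for illustration; the abstract does not state how the two terms are balanced.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce_with_logits(logit, target):
    """Binary cross-entropy on a raw logit and a 0/1 target."""
    p = sigmoid(logit)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def focal(logit, target, gamma=2.0, alpha=0.25):
    """Focal loss down-weights easy examples via the (1 - p_t)^gamma factor."""
    p = sigmoid(logit)
    p_t = p if target == 1 else 1 - p
    a_t = alpha if target == 1 else 1 - alpha
    return -a_t * (1 - p_t) ** gamma * math.log(p_t)

def combined(logit, target, lam=0.5):
    """Hypothetical 50/50 mix of the two terms (weight not given in the paper)."""
    return lam * bce_with_logits(logit, target) + (1 - lam) * focal(logit, target)

# A confident correct prediction contributes far less focal loss than BCE loss,
# which is what lets hard examples dominate training under class imbalance.
easy = focal(4.0, 1)
hard = focal(-2.0, 1)
```

The focal term's `(1 - p_t)^gamma` factor is what counteracts the imbalance: well-classified negatives, which dominate ChestXray14, are driven toward zero loss.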
Open Access Article
CAMBSRec: A Context-Aware Multi-Behavior Sequential Recommendation Model
by
Bohan Zhuang, Yan Lan and Minghui Zhang
Informatics 2025, 12(3), 79; https://doi.org/10.3390/informatics12030079 - 4 Aug 2025
Abstract
Multi-behavior sequential recommendation (MBSRec) is a form of sequential recommendation. It leverages users’ historical interaction behavior types to better predict their next actions. This approach fits real-world scenarios better than traditional models do. With the rise of the transformer model, attention mechanisms are widely used in recommendation algorithms. However, they suffer from low-pass filtering, and the simple learnable positional encodings in existing models offer limited performance gains. To address these problems, we introduce the context-aware multi-behavior sequential recommendation model (CAMBSRec). It separately encodes items and behavior types, replaces traditional positional encoding with context-similarity positional encoding, and applies the discrete Fourier transform to separate the high and low frequency components and enhance the high frequency components, countering the low-pass filtering effect. Experiments on three public datasets show that CAMBSRec performs better than five baseline models, demonstrating its advantages in terms of recommendation performance.
Full article

Figure 1
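The high-frequency enhancement that CAMBSRec uses to counter the attention mechanism's low-pass filtering can be illustrated with a toy DFT sketch: transform the sequence, amplify the bins above a cutoff, and transform back. This is a generic frequency-domain boost on a plain Python list, not the model's actual filter; the cutoff and gain values are assumptions.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def boost_high_freq(x, cutoff, gain=2.0):
    """Amplify DFT bins at or above `cutoff` (counting mirror bins), then invert."""
    X = dft(x)
    n = len(X)
    for k in range(n):
        freq = min(k, n - k)   # symmetric frequency index of bin k
        if freq >= cutoff:
            X[k] *= gain
    return idft(X)

# A pure Nyquist-frequency signal is doubled when the cutoff admits it.
filtered = boost_high_freq([1, -1, 1, -1], cutoff=2)
```

With `gain=1.0` the function is an identity (up to floating-point error), which makes the round trip easy to sanity-check.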
Open Access Article
Designing a Patient Outcome Clinical Assessment Tool for Modified Rankin Scale: “You Feel the Same Way Too”
by
Laura London and Noreen Kamal
Informatics 2025, 12(3), 78; https://doi.org/10.3390/informatics12030078 - 4 Aug 2025
Abstract
The modified Rankin Scale (mRS) is a widely used outcome measure for assessing disability in stroke care; however, its administration is often affected by subjectivity and variability, leading to poor inter-rater reliability and inconsistent scoring. Originally designed for hospital discharge evaluations, the mRS has evolved into an outcome tool for disability assessment and clinical decision-making. Inconsistencies persist due to a lack of standardization and cognitive biases during its use. This paper presents design principles for creating a standardized clinical assessment tool (CAT) for the mRS, grounded in human–computer interaction (HCI) and cognitive engineering principles. Design principles were informed in part by an anonymous online survey conducted with clinicians across Canada to gain insights into current administration practices, opinions, and challenges of the mRS. The proposed design principles aim to reduce cognitive load, improve inter-rater reliability, and streamline the administration process of the mRS. By focusing on usability and standardization, the design principles seek to enhance scoring consistency and improve the overall reliability of clinical outcomes in stroke care and research. Developing a standardized CAT for the mRS represents a significant step toward improving the accuracy and consistency of stroke disability assessments. Future work will focus on real-world validation with healthcare stakeholders and exploring self-completed mRS assessments to further refine the tool.
Full article

Open Access Article
Designs and Interactions for Near-Field Augmented Reality: A Scoping Review
by
Jacob Hobbs and Christopher Bull
Informatics 2025, 12(3), 77; https://doi.org/10.3390/informatics12030077 - 1 Aug 2025
Abstract
Augmented reality (AR), which overlays digital content within the user’s view, is gaining traction across domains such as education, healthcare, manufacturing, and entertainment. The hardware constraints of commercially available head-mounted displays (HMDs) are well acknowledged, but little work addresses what design or interaction techniques developers can employ or build into experiences to work around these limitations. We conducted a scoping literature review with the aim of mapping the current landscape of design principles and interaction techniques employed in near-field AR environments. We searched for literature published between 2016 and 2025 across major databases, including the ACM Digital Library and IEEE Xplore. Studies were included if they explicitly employed design or interaction techniques with a commercially available HMD for near-field AR experiences. A total of 780 articles were returned by the search, but just 7 articles met the inclusion criteria. Our review identifies key themes around how existing techniques are employed and the two competing goals of AR experiences, and we highlight the importance of embodiment in interaction efficacy. We present directions for future research based on and justified by our review. The findings offer a comprehensive overview for researchers, designers, and developers aiming to create more intuitive, effective, and context-aware near-field AR experiences. This review also provides a foundation for future research by outlining underexplored areas and recommending research directions for near-field AR interaction design.
Full article

Open Access Article
Multi-Model Dialectical Evaluation of LLM Reasoning Chains: A Structured Framework with Dual Scoring Agents
by
Catalin Anghel, Andreea Alexandra Anghel, Emilia Pecheanu, Ioan Susnea, Adina Cocu and Adrian Istrate
Informatics 2025, 12(3), 76; https://doi.org/10.3390/informatics12030076 - 1 Aug 2025
Abstract
(1) Background and objectives: Large language models (LLMs) such as GPT, Mistral, and LLaMA exhibit strong capabilities in text generation, yet assessing the quality of their reasoning—particularly in open-ended and argumentative contexts—remains a persistent challenge. This study introduces Dialectical Agent, an internally developed modular framework designed to evaluate reasoning through a structured three-stage process: opinion, counterargument, and synthesis. The framework enables transparent and comparative analysis of how different LLMs handle dialectical reasoning. (2) Methods: Each stage is executed by a single model, and final syntheses are scored via two independent LLM evaluators (LLaMA 3.1 and GPT-4o) based on a rubric with four dimensions: clarity, coherence, originality, and dialecticality. In parallel, a rule-based semantic analyzer detects rhetorical anomalies and ethical values. All outputs and metadata are stored in a Neo4j graph database for structured exploration. (3) Results: The system was applied to four open-weight models (Gemma 7B, Mistral 7B, Dolphin-Mistral, Zephyr 7B) across ten open-ended prompts on ethical, political, and technological topics. The results show consistent stylistic and semantic variation across models, with moderate inter-rater agreement. Semantic diagnostics revealed differences in value expression and rhetorical flaws not captured by rubric scores. (4) Originality: The framework is, to our knowledge, the first to integrate multi-stage reasoning, rubric-based and semantic evaluation, and graph-based storage into a single system. It enables replicable, interpretable, and multidimensional assessment of generative reasoning—supporting researchers, developers, and educators working with LLMs in high-stakes contexts.
Full article

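The abstract above reports moderate inter-rater agreement between the two scoring agents (LLaMA 3.1 and GPT-4o). One standard way to quantify agreement between two raters is Cohen's kappa, sketched generically below; the rubric scores are invented for illustration, and the abstract does not state which agreement statistic the authors used.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Expected agreement if each rater assigned labels independently
    # according to their own marginal label frequencies.
    expected = sum((ca[lbl] / n) * (cb[lbl] / n) for lbl in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Hypothetical 1-5 rubric scores from two scoring agents on six syntheses.
scores_a = [4, 3, 5, 2, 4, 3]
scores_b = [4, 3, 4, 2, 5, 3]
kappa = cohens_kappa(scores_a, scores_b)
```

On this toy data kappa is about 0.54, which falls in the conventional "moderate agreement" band (0.41 to 0.60).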
Open Access Article
Research on Power Cable Intrusion Identification Using a GRT-Transformer-Based Distributed Acoustic Sensing (DAS) System
by
Xiaoli Huang, Xingcheng Wang, Han Qin and Zhaoliang Zhou
Informatics 2025, 12(3), 75; https://doi.org/10.3390/informatics12030075 - 21 Jul 2025
Cited by 1
Abstract
To address the high false alarm rate of intrusion detection systems based on distributed acoustic sensing (DAS) for power cables in complex underground environments, an innovative GRT-Transformer multimodal deep learning model is proposed. The core of this model lies in its distinctive three-branch parallel collaborative architecture: two branches employ Gramian Angular Summation Field (GASF) and Recurrence Plot (RP) algorithms to convert one-dimensional intrusion waveforms into two-dimensional images, thereby capturing rich spatial patterns and dynamic characteristics, while the third branch utilizes a Gated Recurrent Unit (GRU) to focus directly on the temporal evolution of the waveform; additionally, a Transformer component is integrated to capture the overall trend and global dependencies of the signals. Finally, a Bidirectional Long Short-Term Memory (BiLSTM) network performs a deep fusion of the multidimensional features extracted from the three branches, enabling a comprehensive understanding of the bidirectional temporal dependencies within the data. Experimental validation demonstrates that the GRT-Transformer achieves an average recognition accuracy of 97.3% across three typical intrusion events (illegal tapping, mechanical operations, and vehicle passage), significantly reducing false alarms, surpassing traditional methods, and exhibiting strong practical potential in complex real-world scenarios.
Full article

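The GASF branch described above turns a 1-D waveform into a 2-D image by rescaling the series to [-1, 1], mapping each sample to an angle phi = arccos(x), and setting pixel (i, j) to cos(phi_i + phi_j). A minimal sketch of that transform follows; the toy input and the absence of windowing are simplifications, as the paper's preprocessing details are not reproduced here.

```python
import math

def gasf(series):
    """Gramian Angular Summation Field of a 1-D series.

    Rescales the series to [-1, 1], maps each value to an angle
    phi = arccos(x), and returns the image G[i][j] = cos(phi_i + phi_j).
    """
    lo, hi = min(series), max(series)
    x = [2 * (v - lo) / (hi - lo) - 1 for v in series]   # rescale to [-1, 1]
    phi = [math.acos(v) for v in x]
    return [[math.cos(pi_ + pj) for pj in phi] for pi_ in phi]

# A 4-sample waveform becomes a 4x4 "image" the CNN branch can consume.
img = gasf([0.0, 1.0, 2.0, 3.0])
```

The resulting matrix is symmetric, and its diagonal encodes cos(2*phi_i), so temporal correlations of the waveform appear as spatial texture.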

Topics
Topic in
Applied Sciences, Electronics, Informatics, Information, Software
Software Engineering and Applications
Topic Editors: Sanjay Misra, Robertas Damaševičius, Bharti Suri
Deadline: 31 October 2025
Topic in
Electronics, Healthcare, Informatics, MAKE, Sensors, Systems, IJGI
Theories and Applications of Human-Computer Interaction
Topic Editors: Da Tao, Tingru Zhang, Hailiang Wang
Deadline: 31 December 2025
Topic in
Applied Sciences, Electronics, Informatics, JCP, Future Internet, Mathematics, Sensors, Remote Sensing
Recent Advances in Artificial Intelligence for Security and Security for Artificial Intelligence
Topic Editors: Tao Zhang, Xiangyun Tang, Jiacheng Wang, Chuan Zhang, Jiqiang Liu
Deadline: 28 February 2026
Topic in
AI, Algorithms, BDCC, Computers, Data, Future Internet, Informatics, Information, MAKE, Publications, Smart Cities
Learning to Live with Gen-AI
Topic Editors: Antony Bryant, Paolo Bellavista, Kenji Suzuki, Horacio Saggion, Roberto Montemanni, Andreas Holzinger, Min Chen
Deadline: 31 August 2026

Special Issues
Special Issue in
Informatics
Practical Applications of Sentiment Analysis
Guest Editors: Patricia Anthony, Jing Zhou
Deadline: 30 September 2025
Special Issue in
Informatics
Generative AI in Higher Education: Applications, Implications, and Future Directions
Guest Editors: Amir Ghapanchi, Reza Ghanbarzadeh, Purarjomandlangrudi Afrooz
Deadline: 31 October 2025
Special Issue in
Informatics
Real-World Applications and Prototyping of Information Systems for Extended Reality (VR, AR, and MR)
Guest Editors: Kitti Puritat, Kannikar Intawong, Wirapong Chansanam
Deadline: 31 March 2026
Special Issue in
Informatics
Health Data Management in the Age of AI
Guest Editors: Brenda Scholtz, Hanlie Smuts
Deadline: 30 May 2026