Information, Volume 16, Issue 3 (March 2025) – 92 articles

Cover Story: Organizations often face a choice between comprehensive analytical methods that can stall decisions and agile heuristics that risk oversimplifying. Our approach bridges this gap with AI-enabled semantic analysis, linking structured frameworks to decision heuristics, such as the Thirty-Six Stratagems of ancient China. We use advanced NLP to measure textual similarities between each framework’s parameters and heuristic patterns, providing best-fit recommendations for real-world scenarios. Case studies in automotive innovation and historical computing show how this semantic bridge yields actionable strategies, enabling organizations to maintain both systematic rigor and operational agility in competitive environments.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • Papers are published in both HTML and PDF form; the PDF is the official version. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 10602 KiB  
Article
A Lightweight Network for UAV Multi-Scale Feature Fusion-Based Object Detection
by Sheng Deng and Yaping Wan
Information 2025, 16(3), 250; https://doi.org/10.3390/info16030250 - 20 Mar 2025
Viewed by 247
Abstract
To tackle the issues of small target sizes, missed detections, and false alarms in aerial drone imagery, alongside the constraints posed by limited hardware resources during model deployment, a streamlined object detection approach is proposed to enhance the performance of YOLOv8s. This approach introduces a new module, C2f_SEPConv, which incorporates Partial Convolution (PConv) and channel attention mechanisms (Squeeze-and-Excitation, SE), effectively replacing the previous bottleneck and minimizing both the model’s parameter count and computational demands. Modifications to the detection head allow it to perform more effectively in scenarios with small targets in aerial images. To capture multi-scale object information, a Multi-Scale Cross-Axis Attention (MSCA) mechanism is embedded within the backbone network. The neck network integrates a Multi-Scale Fusion Block (MSFB) to combine multi-level features, further boosting detection precision. Furthermore, the Focal-EIoU loss function supersedes the traditional CIoU loss function to address challenges related to the regression of small targets. Evaluations conducted on the VisDrone dataset reveal that the proposed method improves Precision, Recall, mAP0.5, and mAP0.5:0.95 by 4.4%, 5.6%, 6.4%, and 4%, respectively, compared to YOLOv8s, with a 28.3% reduction in parameters. On the DOTAv1.0 dataset, a 2.1% enhancement in mAP0.5 is observed. Full article
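To make the lightweight design above concrete, the sketch below shows one plausible way to combine a partial convolution with Squeeze-and-Excitation channel attention, in the spirit of the C2f_SEPConv bottleneck; the layer sizes, the 1/4 partial ratio, and the residual connection are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal PyTorch sketch of a PConv + Squeeze-and-Excitation block in the spirit of the
# C2f_SEPConv module described above; sizes and ratios are illustrative assumptions.
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Convolve only a fraction of the channels; pass the rest through untouched."""
    def __init__(self, channels: int, ratio: float = 0.25):
        super().__init__()
        self.conv_ch = max(1, int(channels * ratio))
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, kernel_size=3, padding=1)

    def forward(self, x):
        a, b = x[:, :self.conv_ch], x[:, self.conv_ch:]
        return torch.cat((self.conv(a), b), dim=1)

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)
        return x * w

class SEPConvBottleneck(nn.Module):
    """Lightweight bottleneck: partial convolution followed by SE attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.pconv = PartialConv(channels)
        self.se = SEBlock(channels)

    def forward(self, x):
        return x + self.se(self.pconv(x))

if __name__ == "__main__":
    feats = torch.randn(1, 64, 80, 80)         # a dummy feature map
    print(SEPConvBottleneck(64)(feats).shape)  # torch.Size([1, 64, 80, 80])
```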

27 pages, 2569 KiB  
Article
Cognitive Handwriting Insights for Alzheimer’s Diagnosis: A Hybrid Framework
by Shafiq Ul Rehman and Uddalak Mitra
Information 2025, 16(3), 249; https://doi.org/10.3390/info16030249 - 20 Mar 2025
Viewed by 336
Abstract
Alzheimer’s disease (AD) is a persistent neurologic disorder that has no cure. For a successful treatment to be implemented, it is essential to diagnose AD at an early stage, which may occur up to eight years before dementia manifests. In this regard, a new predictive machine learning model is proposed that works in two stages and takes advantage of both unsupervised and supervised learning approaches to provide a fast, affordable, yet accurate solution. The first stage involved fuzzy partitioning of a gold-standard dataset, DARWIN (Diagnosis AlzheimeR WIth haNdwriting). This dataset consists of clinical features and is designed to detect Alzheimer’s disease through handwriting analysis. To determine the optimal number of clusters, four Clustering Validity Indices (CVIs) were averaged, which we refer to as cognitive features. During the second stage, a predictive model was constructed exclusively from these cognitive features. In comparison to models relying on datasets featuring clinical attributes, models incorporating cognitive features showed substantial performance enhancements, ranging from 12% to 26%. Our proposed model surpassed all current state-of-the-art models, achieving a mean accuracy of 99%, mean sensitivity of 98%, mean specificity of 100%, mean precision of 100%, and mean MCC and Cohen’s Kappa of 98%, along with a mean AUC-ROC score of 99%. Hence, integrating the output of unsupervised learning into supervised machine learning models significantly improved their performance. In the process of crafting early interventions for individuals with a heightened risk of disease onset, our prognostic framework can aid in both the recruitment and advancement of clinical trials. Full article
(This article belongs to the Special Issue Detection and Modelling of Biosignals)
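The sketch below illustrates the two-stage idea on synthetic data: soft cluster memberships are derived in an unsupervised step (here a Gaussian mixture stands in for the fuzzy partitioning, and two common validity indices stand in for the four CVIs) and then fed to a supervised classifier; all dataset sizes, index choices, and model settings are assumptions for illustration, not the paper's configuration.

```python
# Two-stage sketch: unsupervised "cognitive features" feeding a supervised classifier.
# A Gaussian mixture (soft memberships) stands in for the fuzzy partitioning step, and
# synthetic data stands in for DARWIN; both are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import calinski_harabasz_score, silhouette_score
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=174, n_features=20, random_state=0)

# Stage 1: choose a cluster count with (crudely normalised, averaged) validity indices,
# then use the soft membership probabilities as the new "cognitive" feature set.
scores = {}
for k in range(2, 6):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(X)
    scores[k] = (silhouette_score(X, labels)
                 + calinski_harabasz_score(X, labels) / 1000.0) / 2.0
best_k = max(scores, key=scores.get)
gmm = GaussianMixture(n_components=best_k, random_state=0).fit(X)
cognitive_features = gmm.predict_proba(X)

# Stage 2: a supervised model trained only on the cluster-derived features.
clf = RandomForestClassifier(random_state=0)
acc = cross_val_score(clf, cognitive_features, y, cv=5).mean()
print("selected clusters:", best_k, "cross-validated accuracy:", round(acc, 3))
```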

32 pages, 1513 KiB  
Article
Making Better Decisions, Eschewing Conspiracy, Populism, and Science Denial: Analysing the Attributes of Individuals Who Engage Effectively with Ideas
by Chris Brown, Ruth Luzmore and Yin Wang
Information 2025, 16(3), 248; https://doi.org/10.3390/info16030248 - 19 Mar 2025
Viewed by 258
Abstract
How can we encourage individuals to engage with beneficial ideas, while eschewing dark ideas such as science denial, conspiracy theories, or populist rhetoric? This paper investigates the mechanisms underpinning individuals’ engagement with ideas, proposing a model grounded in education, social networks, and pragmatic prospection. Beneficial ideas enhance decision-making, improving individual and societal outcomes, while dark ideas lead to suboptimal consequences, such as diminished trust in institutions and health-related harm. Using a Structural Equation Model (SEM) based on survey data from 7000 respondents across seven European countries, we test hypotheses linking critical thinking, network dynamics, and pragmatic prospection (i.e., a forward-looking mindset) to the value individuals ascribe to engaging with ideas, their ability to identify positive and dark ideas effectively, how individuals subsequently engage with ideas, and with whom they engage. Our results highlight two key pathways: one linking pragmatic prospection to network-building and idea-sharing, and another connecting critical reasoning and knowledge acquisition to effective ideas engagement. Together, these pathways illustrate how interventions in education, network development, and forward-planning can empower individuals to critically evaluate and embrace positive ideas while rejecting those that might be detrimental. The paper concludes with recommendations for policy and future research to support an ideas-informed society. Full article
(This article belongs to the Section Artificial Intelligence)

17 pages, 1323 KiB  
Article
Digital Transformation in Governmental Public Service Provision and Usable Security Perception in Saudi Arabia
by Saqib Saeed
Information 2025, 16(3), 247; https://doi.org/10.3390/info16030247 - 19 Mar 2025
Viewed by 356
Abstract
Usable security and privacy in public services are critical considerations in today’s digital age, where governments increasingly rely on technology to deliver services efficiently while safeguarding sensitive information. Successful usage of these electronic services depends on citizens’ trust level in e-government channels. Therefore, the design of these public service organizations should consider the usability aspect of security controls. In this paper, we present the results of a quantitative study conducted in Saudi Arabia to understand end users’ perceptions regarding usable security and privacy in their public service usage. Based on the findings, we present a model to further improve the usable security and privacy aspects, which will help policymakers and practitioners improve public service provision by electronic means. The model can be further refined in different geographical contexts to improve cybersecurity in e-government service provision through the integrated efforts of citizens, service-providing organizations and government cybersecurity agencies. Full article

28 pages, 1266 KiB  
Systematic Review
Managing Food Waste Through Gamification and Serious Games: A Systematic Literature Review
by Ezequiel Santos, Cláudia Sevivas and Vítor Carvalho
Information 2025, 16(3), 246; https://doi.org/10.3390/info16030246 - 19 Mar 2025
Viewed by 711
Abstract
Household food waste poses significant environmental, social, and financial challenges. This systematic literature review examines the role of games and gamification in mitigating food waste, addressing four key research questions: how these interventions are applied, their impact on attitudes and behaviors, the specific mechanisms employed, and their measured outcomes. The analysis identifies a range of strategies, including mobile applications, serious games, educational platforms, and interactive installations. Theoretical frameworks such as the Theory of Planned Behavior, emotional engagement, systems thinking, and cognitive load theory underpin these interventions. Findings suggest that gamification can enhance awareness, knowledge, and behavioral change, with some interventions demonstrating measurable reductions in food waste. However, limitations such as the lack of long-term engagement data, varying effectiveness across socio-economic contexts, and inconsistencies in measurement frameworks remain challenges. Notable interventions—including the MySusCof and Exspiro apps and serious games like FoodFighters and PadovaGoGreen—show promising results but require further validation in diverse settings. This review highlights both the potential and limitations of gamified strategies, emphasizing the need for standardized measurement approaches and longitudinal studies to assess their sustained impact on food waste reduction. Full article

28 pages, 1162 KiB  
Article
AHP-Based Evaluation of Discipline-Specific Information Services in Academic Libraries Under Digital Intelligence
by Simeng Zhang, Tao Zhang and Xi Wang
Information 2025, 16(3), 245; https://doi.org/10.3390/info16030245 - 18 Mar 2025
Viewed by 299
Abstract
Over recent years, digital and intelligent technologies have been driving the transformation of discipline-specific information services in academic libraries toward user experience optimization and service innovation. This study constructs a quality evaluation framework for discipline-specific information services in academic libraries, incorporating digital-intelligence characteristics to provide theoretical references and evaluation guidelines for enhancing service quality and user satisfaction in an information-ubiquitous environment. Drawing on LibQual+TM, WebQUAL, and E-SERVQUAL service quality evaluation models and integrating expert interviews with the contextual characteristics of academic library discipline-specific information services, this study develops a comprehensive evaluation system comprising six dimensions—Perceived Information Quality, Information Usability, Information Security, Interactive Feedback, Tool Application, and User Experience—with fifteen specific indicators. The analytic hierarchy process (AHP) was applied to determine the weight of these indicators. To validate the practicality of the evaluation system, a fuzzy comprehensive evaluation method was employed for an empirical analysis using discipline-specific information services at Tsinghua University Library in China as a case study. The evaluation results indicate that the overall quality of discipline-specific information services at Tsinghua University Library is satisfactory, with Tool Application, Perceived Information Quality, and Information Usability identified as key factors influencing service quality. To further enhance discipline-specific information services in academic libraries, emphasis should be placed on service intelligence and precision-driven optimization, strengthening user experience, interaction and feedback mechanisms, and data security measures. These improvements will better meet the diverse needs of users and enhance the overall effectiveness of discipline-specific information services. Full article
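A minimal sketch of the AHP step described above: indicator weights are obtained from the principal eigenvector of a pairwise comparison matrix and checked with the consistency ratio; the 3x3 matrix is an invented example, not the study's actual expert judgements.

```python
# Minimal AHP sketch: weights from the principal eigenvector of a pairwise comparison
# matrix, with a consistency check (CR < 0.1). The matrix values are made up.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # pairwise comparisons for three criteria

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random consistency index
CR = CI / RI

print("weights:", np.round(weights, 3), "consistency ratio:", round(CR, 3))
```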

36 pages, 2748 KiB  
Article
A Comparative Study of Privacy-Preserving Techniques in Federated Learning: A Performance and Security Analysis
by Eman Shalabi, Walid Khedr, Ehab Rushdy and Ahmad Salah
Information 2025, 16(3), 244; https://doi.org/10.3390/info16030244 - 18 Mar 2025
Viewed by 542
Abstract
Federated learning (FL) is a machine learning technique where clients exchange only local model updates with a central server that combines them to create a global model after local training. While FL offers privacy benefits through local training, privacy-preserving strategies are needed since model updates can leak training data information due to various attacks. To enhance privacy and attack robustness, techniques like homomorphic encryption (HE), Secure Multi-Party Computation (SMPC), and the Private Aggregation of Teacher Ensembles (PATE) can be combined with FL. Currently, no study has combined more than two privacy-preserving techniques with FL or comparatively analyzed their combinations. We conducted a comparative study of privacy-preserving techniques in FL, analyzing performance and security. We implemented FL using an artificial neural network (ANN) with a Malware Dataset from Kaggle for malware detection. To enhance privacy, we proposed models combining FL with the PATE, SMPC, and HE. All models were evaluated against poisoning attacks (targeted and untargeted), a backdoor attack, a model inversion attack, and a man in the middle attack. The combined models maintained performance while improving attack robustness. FL_SMPC, FL_CKKS, and FL_CKKS_SMPC improved both their performance and attack resistance. All the combined models outperformed the base FL model against the evaluated attacks. FL_PATE_CKKS_SMPC achieved the lowest backdoor attack success rate (0.0920). FL_CKKS_SMPC best resisted untargeted poisoning attacks (0.0010 success rate). FL_CKKS and FL_CKKS_SMPC best defended against targeted poisoning attacks (0.0020 success rate). FL_PATE_SMPC best resisted model inversion attacks (19.267 MSE). FL_PATE_CKKS_SMPC best defended against man in the middle attacks with the lowest degradation in accuracy (1.68%), precision (1.94%), recall (1.68%), and the F1-score (1.64%). Full article
(This article belongs to the Special Issue Digital Privacy and Security, 2nd Edition)
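The sketch below illustrates one round of federated averaging in which client updates are additively secret-shared before aggregation, an SMPC-style mechanism similar in spirit to the combinations evaluated above; the tiny model, the three-client setup, and the stand-in local training step are all assumptions for illustration.

```python
# One federated-averaging round with additive secret sharing of client updates (an
# SMPC-style aggregation), so only masked shares are ever summed. Plain NumPy sketch.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 3, 4
global_w = np.zeros(dim)

def local_update(w, client_id):
    # Stand-in for local training: each client nudges the weights differently.
    return w + rng.normal(loc=client_id + 1, scale=0.1, size=w.shape)

# Each client splits its update into additive shares that sum back to the update.
updates = [local_update(global_w, c) for c in range(n_clients)]
shares = np.zeros((n_clients, n_clients, dim))
for c, u in enumerate(updates):
    masks = rng.normal(size=(n_clients - 1, dim))
    shares[c, :-1] = masks
    shares[c, -1] = u - masks.sum(axis=0)
    shares[c] = rng.permutation(shares[c])   # shuffling hides which share is the residual

# The aggregator only ever sums shares; no single client update is reconstructed alone.
global_w = shares.sum(axis=(0, 1)) / n_clients
print("averaged update:", np.round(global_w, 3))
```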

25 pages, 18192 KiB  
Article
AML-Based Multi-Dimensional Co-Evolution Approach Supported by Blockchain: Architecture Design and Case Study on Intelligent Production Lines for Industry 4.0
by Kai Ding, Detlef Gerhard and Liuqun Fan
Information 2025, 16(3), 243; https://doi.org/10.3390/info16030243 - 18 Mar 2025
Cited by 1 | Viewed by 291
Abstract
Based on Automation ML (AML), Intelligent Production Lines (IPLs) for Industry 4.0 can effectively organize multi-dimensional data and models. However, this process requires interdisciplinary and multi-team contributions, which often involve the dual pressures of private data encryption and public data sharing. As a transparent decentralized network, blockchain’s compatibility with the challenges of AML collaboration processes, data security, and privacy is not ideal. This paper proposes a new method to enhance the collaborative evolution of IPLs. Its innovations are, firstly, developing a comprehensive two-layer management model that combines blockchain with the Interplanetary File System (IPFS) to build an integrated solution for private and public hybrid containers based on a collaborative model; secondly, designing a version co-evolution management method that combines smart contract workflows with AML multi-dimensional modeling processes; thirdly, introducing a specially designed conflict resolution mechanism based on the graph model to maintain consistency in multi-batch version management; and, finally, using the test cases established in the lab’s I5Blocks for verification. Full article
(This article belongs to the Special Issue Blockchain and AI: Innovations and Applications in ICT)
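The toy sketch below illustrates the private/public hybrid-container idea: public AML fragments are content-addressed, private fragments are encrypted off-chain, and only hashes are recorded on a simulated ledger; every identifier, the XOR placeholder cipher, and the in-memory ledger are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of a private/public hybrid container: the public AML fragment is
# content-addressed (IPFS-style hash), the private fragment is "encrypted" off-chain,
# and only hashes land on a simulated ledger. Field names and cipher are placeholders.
import hashlib, json

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def toy_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    return bytes(b ^ key for b in data)   # placeholder, not real cryptography

public_part = json.dumps({"unit": "conveyor-1", "interface": "OPC-UA"}).encode()
private_part = json.dumps({"pid_gains": [1.2, 0.4, 0.05]}).encode()

ledger = []            # stand-in for on-chain storage
off_chain_store = {}   # stand-in for IPFS / private storage

pub_cid = content_address(public_part)
priv_blob = toy_encrypt(private_part)
priv_cid = content_address(priv_blob)
off_chain_store[pub_cid] = public_part
off_chain_store[priv_cid] = priv_blob
ledger.append({"version": 1, "public_cid": pub_cid, "private_cid": priv_cid})

# Any collaborator can verify the integrity of a shared version from the ledger entry.
entry = ledger[-1]
assert content_address(off_chain_store[entry["public_cid"]]) == entry["public_cid"]
print("version", entry["version"], "verified")
```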

16 pages, 247 KiB  
Article
Show Me All Writing Errors: A Two-Phased Grammatical Error Corrector for Romanian
by Mihai-Cristian Tudose, Stefan Ruseti and Mihai Dascalu
Information 2025, 16(3), 242; https://doi.org/10.3390/info16030242 - 18 Mar 2025
Viewed by 263
Abstract
Nowadays, grammatical error correction (GEC) plays a significant role in writing since even native speakers often face challenges with proficient writing. This research focuses on developing a methodology to correct grammatical errors in the Romanian language, a less-resourced language for which there are currently no up-to-date GEC solutions. Our main contributions include an open-source synthetic dataset of 345,403 Romanian sentences, a manually curated dataset of 3054 social media comments, a two-phased GEC approach, and a comparison with several Romanian models, including RoMistral and RoLama3, as well as LanguageTool, GPT-4o mini, and GPT-4o. We use the synthetic dataset to finetune our models, while we rely on two real-life datasets with genuine human mistakes (i.e., CNA and RoComments) to evaluate performance. Building an artificial dataset was necessary because of the scarcity of real-life mistake datasets, whereas introducing RoComments, a new genuine dataset, is motivated by the need to cover errors made by native speakers in social media comments. We also introduce a two-phased approach, in which we first identify the location of erroneous tokens in the sentence; next, the erroneous tokens are replaced by an encoder–decoder model. Our approach achieved an F0.5 of 0.57 on CNA and 0.64 on RoComments, surpassing LanguageTool, as well as an end-to-end version based on Flan-T5 and mT0, by a considerable margin in most setups. While our two-phased method did not outperform GPT-4o, arguably due to its smaller size and more limited language exposure, it obtained on-par results with GPT-4o mini and achieved higher performance than all Romanian LLMs. Full article
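The structural sketch below shows the two-phase control flow: a detector flags suspicious tokens, and a corrector rewrites only the flagged spans; the tiny English lexicon-based detector and dictionary corrector are stand-ins for the trained Romanian token classifier and encoder-decoder model.

```python
# Structural sketch of two-phased correction: phase 1 flags suspicious tokens,
# phase 2 rewrites only the flagged spans. The lookup-based detector/corrector are
# stand-ins for the trained token classifier and encoder-decoder model.
from typing import List

COMMON_FIXES = {"recieve": "receive", "teh": "the"}   # illustrative lexicon

def detect_errors(tokens: List[str]) -> List[int]:
    """Phase 1: return indices of tokens judged erroneous (here: simple lookup)."""
    return [i for i, tok in enumerate(tokens) if tok.lower() in COMMON_FIXES]

def correct_span(tokens: List[str], idx: int) -> str:
    """Phase 2: propose a replacement for one flagged token (stand-in for a seq2seq model)."""
    return COMMON_FIXES[tokens[idx].lower()]

def correct_sentence(sentence: str) -> str:
    tokens = sentence.split()
    for i in detect_errors(tokens):
        tokens[i] = correct_span(tokens, i)
    return " ".join(tokens)

print(correct_sentence("I did not recieve teh email"))
# -> "I did not receive the email"
```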

14 pages, 2024 KiB  
Article
An Enhanced Multi-Layer Blockchain Security Model for Improved Latency and Scalability
by Basem Mohamed Elomda, Taher Abouzaid Abdelaty Abdelbary, Hesham Ahmed Hassan, Kamal S. Hamza and Qasem Kharma
Information 2025, 16(3), 241; https://doi.org/10.3390/info16030241 - 18 Mar 2025
Cited by 1 | Viewed by 372
Abstract
The Multi-Layer Blockchain Security Model (MLBSM) proposed in 2024 was designed to safeguard Internet of Things (IoT) networks, as well as similar network architectures, against transaction privacy leakage in public blockchain systems. The MLBSM also addresses critical issues such as latency, ensuring faster transaction speeds through clustering and parallel processing. This paper presents an extension of the MLBSM, called the Enhanced Multi-Layer Blockchain Security Model (EMLBSM). The EMLBSM reduces latency by compressing the architecture and merging Layer 2 and Layer 3 of the MLBSM, thereby addressing the latency and scalability problems identified in the original model. Full article
(This article belongs to the Special Issue Blockchain and AI: Innovations and Applications in ICT)

20 pages, 1189 KiB  
Article
Human-Centered Artificial Intelligence in Higher Education: A Framework for Systematic Literature Reviews
by Thang Le Dinh, Tran Duc Le, Sylvestre Uwizeyemungu and Claudia Pelletier
Information 2025, 16(3), 240; https://doi.org/10.3390/info16030240 - 18 Mar 2025
Cited by 1 | Viewed by 618
Abstract
Human-centered approaches are vital to manage the rapid growth of artificial intelligence (AI) in higher education, where AI-driven applications can reshape teaching, research, and student engagement. This study presents the Human-Centered AI for Systematic Literature Reviews (HCAI-SLR) framework to guide educators and researchers in integrating AI tools effectively. The methodology combines AI augmentation with human oversight and ethical checkpoints at each review stage to balance automation and expertise. An illustrative example and experiments demonstrate how AI supports tasks such as searching, screening, extracting, and synthesizing large volumes of literature that lead to measurable gains in efficiency and comprehensiveness. Results show that HCAI-driven processes can reduce time costs while preserving rigor, transparency, and user control. By embedding human values through constant oversight, trust in AI-generated findings is bolstered and potential biases are mitigated. Overall, the framework promotes ethical, transparent, and robust approaches to AI integration in higher education without compromising academic standards. Future work will refine its adaptability across various research contexts and further validate its impact on scholarly practices. Full article

23 pages, 2804 KiB  
Article
Exploring Determinants and Predictive Models of Latent Tuberculosis Infection Outcomes in Rural Areas of the Eastern Cape: A Pilot Comparative Analysis of Logistic Regression and Machine Learning Approaches
by Lindiwe Modest Faye, Cebo Magwaza, Ntandazo Dlatu and Teke Apalata
Information 2025, 16(3), 239; https://doi.org/10.3390/info16030239 - 18 Mar 2025
Viewed by 259
Abstract
Latent tuberculosis infection (LTBI) poses a significant public health challenge, especially in populations with high HIV prevalence and limited healthcare access. Early detection and targeted interventions are essential to prevent progression to active tuberculosis. This study aimed to identify the key factors influencing LTBI outcomes through the application of predictive models, including logistic regression and machine learning techniques, while also evaluating strategies to enhance LTBI awareness and testing. Data from rural areas of the Eastern Cape, South Africa, were analyzed to identify key demographic, health, and knowledge-related factors influencing LTBI outcomes. The predictive models included logistic regression, decision trees, and random forests, used to identify key determinants of LTBI positivity. The models evaluated factors such as age, HIV status, and LTBI awareness, with random forests demonstrating the best balance of accuracy and interpretability. Additionally, a knowledge diffusion model was employed to assess the effectiveness of educational strategies in increasing LTBI awareness and testing uptake. Logistic regression achieved an accuracy of 68% with high precision (70%) but low recall (33%) for LTBI-positive cases, identifying age, HIV status, and LTBI awareness as significant predictors. The random forest model outperformed logistic regression in accuracy (59.26%) and F1-score (0.63), providing a better balance between precision and recall. Feature importance analysis revealed that age, occupation, and knowledge of LTBI symptoms were the most critical factors across both models. The knowledge diffusion model demonstrated that targeted interventions significantly increased LTBI awareness and testing, particularly in high-risk groups. While logistic regression offers more interpretable results for public health interventions, machine learning models like random forests provide enhanced predictive power by capturing complex relationships between demographics and health factors. These findings highlight the need for targeted educational campaigns and increased LTBI testing in high-risk populations, particularly those with limited awareness of LTBI symptoms. Full article
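A minimal sketch of the model comparison described above, training logistic regression and a random forest on tabular risk factors and reporting accuracy, precision, recall, and F1; synthetic data stands in for the Eastern Cape survey data, which is not available here.

```python
# Logistic regression vs. random forest on tabular risk factors, with standard metrics.
# Synthetic, imbalanced data stands in for the survey data used in the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, weights=[0.7, 0.3], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=1))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.2f} "
          f"prec={precision_score(y_te, pred):.2f} "
          f"rec={recall_score(y_te, pred):.2f} f1={f1_score(y_te, pred):.2f}")
```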

18 pages, 2558 KiB  
Article
Speech Emotion Recognition and Serious Games: An Entertaining Approach for Crowdsourcing Annotated Samples
by Lazaros Matsouliadis, Eleni Siamtanidou, Nikolaos Vryzas and Charalampos Dimoulas
Information 2025, 16(3), 238; https://doi.org/10.3390/info16030238 - 18 Mar 2025
Viewed by 269
Abstract
Computer games have emerged as valuable tools for education and training. In particular, serious games, which combine learning with entertainment, offer unique potential for engaging users and enhancing knowledge acquisition. This paper presents a case study on the design, development, and evaluation of two serious games, “Silent Kingdom” and “Job Interview Simulator”, created using Unreal Engine 5 and incorporating speech emotion recognition (SER) technology. Through a systematic analysis of the existing research in SER and game development, these games were designed to elicit a wide range of emotional responses from players and collect voice data for the enhancement of SER models. By evaluating player engagement, emotional expression, and overall user experience, this study investigates the effectiveness of serious games in collecting speech data and creating more immersive player experiences. The research also explores the technical limitations of real-time SER integration within game environments, as well as its impact on player enjoyment. Although there are some technological limitations due to the latency introduced by real-time SER analysis, the results reveal that a properly developed game with integrated SER technology could become a more engaging and efficient tool for crowdsourcing speech data. Full article
(This article belongs to the Special Issue Information Processing in Multimedia Applications)

13 pages, 197 KiB  
Article
Artificial Intelligence for Medication Management in Discordant Chronic Comorbidities: An Analysis from Healthcare Provider and Patient Perspectives
by Tom Ongwere, Tam V. Nguyen and Zoe Sadowski
Information 2025, 16(3), 237; https://doi.org/10.3390/info16030237 - 17 Mar 2025
Viewed by 301
Abstract
Recent advances in artificial intelligence (AI) have created opportunities to enhance medical decision-making for patients with discordant chronic conditions (DCCs), where a patient has multiple, often unrelated, chronic conditions with conflicting treatment plans. This paper explores the perspectives of healthcare providers (n = 10) and patients (n = 6) regarding AI tools for medication management. Participants were recruited through two healthcare centers, with interviews conducted via Zoom. The semi-structured interviews (60–90 min) explored their views on AI, including its potential role and limitations in medication decision making and management of DCCs. Data were analyzed using a mixed-methods approach, including semantic analysis and grounded theory, yielding an inter-rater reliability of 0.9. Three themes emerged: empathy in AI–patient interactions, support for AI-assisted administrative tasks, and challenges in using AI for complex chronic diseases. Our findings suggest that while AI can support decision-making, its effectiveness depends on complementing human judgment, particularly in empathetic communication. The paper also highlights the importance of clear AI-generated information and the need for future research on embedding empathy and ethical standards in AI systems. Full article
(This article belongs to the Special Issue 2nd Edition of Data Science for Health Services)
28 pages, 1670 KiB  
Article
Exploring Healthcare Professionals’ Perspectives on Electronic Medical Records: A Qualitative Study
by Reza Torkman, Amir Hossein Ghapanchi and Reza Ghanbarzadeh
Information 2025, 16(3), 236; https://doi.org/10.3390/info16030236 - 17 Mar 2025
Viewed by 512
Abstract
Electronic Medical Records (EMRs) have the potential to enhance decision-making in the healthcare sector. However, healthcare providers encounter various challenges when using computer-based systems such as EMRs in clinical decision-making. This study explores healthcare professionals’ experiences with EMR usage through a qualitative approach. A total of 78 interviews were conducted, leading to the identification of four key themes: (1) healthcare professionals’ engagement with EMR systems, (2) job performance, (3) collaboration among healthcare professionals, and (4) quality of care and patient satisfaction. The findings provide valuable insights for researchers and practitioners, including policymakers, senior management, and information technology professionals, to inform strategies for optimising EMR implementation and adoption. Full article
(This article belongs to the Section Information Systems)

25 pages, 4520 KiB  
Review
AI Chatbots in Education: Challenges and Opportunities
by Narius Farhad Davar, M. Ali Akber Dewan and Xiaokun Zhang
Information 2025, 16(3), 235; https://doi.org/10.3390/info16030235 - 17 Mar 2025
Cited by 1 | Viewed by 1942
Abstract
With the emergence of artificial intelligence (AI), machine learning (ML), and chatbot technologies, the field of education has been transformed drastically. The latest advancements in AI chatbots (such as ChatGPT) have proven to offer several benefits for students and educators. However, these benefits also come with inherent challenges that can impede students’ learning and create hurdles for educators. This study aims to explore the benefits and challenges of AI chatbots in educational settings, with the goal of identifying how they can address existing barriers to learning. The paper begins by outlining the historical evolution of chatbots along with the key elements that make up the architecture of an AI chatbot. The paper then delves into the challenges and limitations associated with the integration of AI chatbots into education. The research findings from this narrative review reveal several benefits of using AI chatbots in education. AI chatbots like ChatGPT can function as virtual tutoring assistants, fostering an adaptive learning environment by aiding students with various learning activities, such as learning programming languages and foreign languages, understanding complex concepts, assisting with research activities, and providing real-time feedback. Educators can leverage such chatbots to create course content, generate assessments, evaluate student performance, and utilize them for data analysis and research. However, this technology presents significant challenges concerning data security and privacy. Additionally, ethical concerns regarding academic integrity and reliance on technology are among the key challenges. Ultimately, AI chatbots offer endless opportunities by fostering a dynamic and interactive learning environment. However, to help students and teachers maximize the potential of this robust technology, it is essential to understand the risks, benefits, and ethical use of AI chatbots in education. Full article

2 pages, 137 KiB  
Editorial
Editorial to the Special Issue “The Resonant Brain: A Themed Issue Dedicated to Professor Stephen Grossberg”
by Birgitta Dresp-Langley and Luiz Pessoa
Information 2025, 16(3), 234; https://doi.org/10.3390/info16030234 - 16 Mar 2025
Viewed by 252
Abstract
This Special Issue offers a collection of research and model approaches to fundamental principles, mechanisms, and model architectures closely linked to the conceptual foundations of contemporary neural network research laid down by Stephen Grossberg [...] Full article
20 pages, 20407 KiB  
Article
VAD-CLVA: Integrating CLIP with LLaVA for Voice Activity Detection
by Andrea Appiani and Cigdem Beyan
Information 2025, 16(3), 233; https://doi.org/10.3390/info16030233 - 16 Mar 2025
Viewed by 476
Abstract
Voice activity detection (VAD) is the process of automatically determining whether a person is speaking and identifying the timing of their speech in audiovisual data. Traditionally, this task has been tackled by processing either audio signals or visual data, or by combining both modalities through fusion or joint learning. In our study, drawing inspiration from recent advancements in visual-language models, we introduce a novel approach leveraging Contrastive Language-Image Pretraining (CLIP) models. The CLIP visual encoder analyzes video segments focusing on the upper body of an individual, while the text encoder processes textual descriptions generated by a Generative Large Multimodal Model, i.e., the Large Language and Vision Assistant (LLaVA). Subsequently, embeddings from these encoders are fused through a deep neural network to perform VAD. Our experimental analysis across three VAD benchmarks showcases the superior performance of our method compared to existing visual VAD approaches. Notably, our approach outperforms several audio-visual methods despite its simplicity and without requiring pretraining on extensive audio-visual datasets. Full article
(This article belongs to the Special Issue Application of Machine Learning in Human Activity Recognition)
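The sketch below shows the general fusion idea: a CLIP image embedding of an upper-body crop and a CLIP text embedding of a LLaVA-style description are concatenated and passed through a small MLP that scores speaking activity; the checkpoint name, the blank stand-in image, the hand-written description, and the untrained fusion head are illustrative assumptions.

```python
# CLIP image + text embeddings fused by a small MLP to score speaking activity.
# The checkpoint, dummy image, and description are illustrative stand-ins.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))   # stand-in for an upper-body crop
description = "A person leaning forward with their mouth open, apparently talking."

with torch.no_grad():
    img_emb = clip.get_image_features(**processor(images=image, return_tensors="pt"))
    txt_emb = clip.get_text_features(**processor(text=[description], return_tensors="pt",
                                                 padding=True))

fusion_head = nn.Sequential(           # would be trained on VAD labels in practice
    nn.Linear(img_emb.shape[-1] + txt_emb.shape[-1], 256),
    nn.ReLU(),
    nn.Linear(256, 1),
)
speaking_logit = fusion_head(torch.cat([img_emb, txt_emb], dim=-1))
print("speaking probability (untrained head):", torch.sigmoid(speaking_logit).item())
```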

21 pages, 5950 KiB  
Article
A Contrastive Learning Framework for Vehicle Spatio-Temporal Trajectory Similarity in Intelligent Transportation Systems
by Qiang Tong, Zhi-Chao Xie, Wei Ni, Ning Li and Shoulu Hou
Information 2025, 16(3), 232; https://doi.org/10.3390/info16030232 - 16 Mar 2025
Viewed by 396
Abstract
The rapid development of vehicular networks has facilitated the extensive acquisition of vehicle trajectory data, which serve as a crucial cornerstone for a variety of intelligent transportation system (ITS) applications, such as traffic flow management and urban mobility optimization. Trajectory similarity computation has become an essential tool for analyzing and understanding vehicle movements, making it indispensable for these applications. Nonetheless, most existing methods neglect the temporal dimension in trajectory analysis, limiting their effectiveness. To address this limitation, we integrate the temporal dimension into trajectory similarity evaluations and present a novel contrastive learning framework, termed Spatio-Temporal Trajectory Similarity with Contrastive Learning (STT-CL), aimed at training effective representations for spatio-temporal trajectory similarity. The STT-CL framework introduces the innovative concept of spatio-temporal grids and leverages two advanced grid embedding techniques to capture the coarse-grained features of spatio-temporal trajectory points. Moreover, we design a Spatio-Temporal Trajectory Cross-Fusion Encoder (STT-CFE) that seamlessly integrates coarse-grained and fine-grained features. Experiments on two large-scale real-world datasets demonstrate that STT-CL surpasses existing methods, underscoring its potential in trajectory-driven ITS applications. Full article
(This article belongs to the Special Issue Internet of Everything and Vehicular Networks)
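The sketch below illustrates the contrastive objective behind frameworks of this kind: two views of each trajectory, encoded from spatio-temporal grid tokens, are pulled together with an InfoNCE loss while other trajectories in the batch act as negatives; the mean-pooling encoder and the shift-based augmentation are simplified stand-ins for the paper's STT-CFE encoder and training setup.

```python
# Contrastive (InfoNCE) training signal for trajectory embeddings built from
# spatio-temporal grid tokens; encoder and augmentation are simplified stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryEncoder(nn.Module):
    def __init__(self, n_cells: int = 1000, dim: int = 64):
        super().__init__()
        self.cell_embedding = nn.Embedding(n_cells, dim)   # spatio-temporal grid tokens
        self.proj = nn.Linear(dim, dim)

    def forward(self, cell_ids):                           # cell_ids: (batch, seq_len)
        tokens = self.cell_embedding(cell_ids)
        return F.normalize(self.proj(tokens.mean(dim=1)), dim=-1)

def info_nce(z1, z2, temperature: float = 0.1):
    logits = z1 @ z2.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))          # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

encoder = TrajectoryEncoder()
batch = torch.randint(0, 1000, (8, 32))         # 8 trajectories, 32 grid cells each
view1 = encoder(batch)
view2 = encoder(batch.roll(shifts=1, dims=1))   # crude augmentation: shifted view
print("contrastive loss:", info_nce(view1, view2).item())
```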

21 pages, 5371 KiB  
Article
From Pixels to Diagnosis: Implementing and Evaluating a CNN Model for Tomato Leaf Disease Detection
by Zamir Osmenaj, Evgenia-Maria Tseliki, Sofia H. Kapellaki, George Tselikis and Nikolaos D. Tselikas
Information 2025, 16(3), 231; https://doi.org/10.3390/info16030231 - 16 Mar 2025
Viewed by 545
Abstract
The frequent emergence of multiple diseases in tomato plants poses a significant challenge to agriculture, requiring innovative solutions to deal with this problem. The paper explores the application of machine learning (ML) technologies to develop a model capable of identifying and classifying diseases in tomato leaves. Our work involved the implementation of a custom convolutional neural network (CNN) trained on a diverse dataset of tomato leaf images. The performance of the proposed CNN model was evaluated and compared against the performance of existing pre-trained CNN models, i.e., the VGG16 and VGG19 models, which are extensively used for image classification tasks. The proposed CNN model was further tested with images of tomato leaves captured from a real-world garden setting in Greece. The captured images were carefully preprocessed and an in-depth study was conducted on how either each image preprocessing step or a different—not supported by the dataset used—strain of tomato affects the accuracy and confidence in detecting tomato leaf diseases. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)

16 pages, 5701 KiB  
Article
Generating Human-Interpretable Rules from Convolutional Neural Networks
by Russel Pears and Ashwini Kumar Sharma
Information 2025, 16(3), 230; https://doi.org/10.3390/info16030230 - 16 Mar 2025
Viewed by 434
Abstract
Advancements in the field of artificial intelligence have been rapid in recent years and have revolutionized various industries. Various deep neural network architectures capable of handling both text and images, covering code generation from natural language as well as producing machine translation and text summaries, have been proposed. For example, convolutional neural networks (CNNs) perform image classification at a level equivalent to that of humans on many image datasets. These state-of-the-art networks have reached unprecedented levels of success by using complex architectures with billions of parameters, numerous kernel configurations, weight initialization, and regularization methods. Unfortunately, in reaching this level of success, CNN models have become essentially black box in nature, with little or no human-interpretable information on the decision-making process. This lack of transparency in decision making gave rise to concerns among some sectors of the user community, such as healthcare, finance, justice, and defense, among others. This challenge motivated our research, in which we successfully produced human-interpretable influential features from CNNs for image classification and captured the interactions between these features by producing a concise decision tree that makes classification decisions. The proposed methodology makes use of a pretrained VGG-16 with fine-tuning to extract feature maps produced by learnt filters. On the CelebA image benchmark dataset, we successfully produced human-interpretable rules that captured the main facial landmarks responsible for distinguishing men from women with 89.6% accuracy, while on the more challenging Cats vs. Dogs dataset, the decision tree achieved 87.6% accuracy. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence, 2nd Edition)
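A sketch of the overall pipeline: activations from a pretrained VGG-16 layer are pooled into per-image feature vectors, a shallow decision tree is fitted on them, and its rules are printed; random images and labels replace CelebA and Cats vs. Dogs here, and the chosen layer and tree depth are assumptions.

```python
# VGG-16 feature maps -> pooled feature vectors -> decision tree rules.
# Random images/labels stand in for the benchmark datasets used in the paper.
import numpy as np
import tensorflow as tf
from sklearn.tree import DecisionTreeClassifier, export_text

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
feature_model = tf.keras.Model(inputs=base.input,
                               outputs=base.get_layer("block5_conv3").output)

rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(20, 224, 224, 3)).astype("float32")   # dummy images
labels = rng.integers(0, 2, size=20)                                     # dummy binary labels

maps = feature_model.predict(tf.keras.applications.vgg16.preprocess_input(images), verbose=0)
features = maps.mean(axis=(1, 2))   # global average pool: one value per learnt filter

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(features, labels)
print(export_text(tree, feature_names=[f"filter_{i}" for i in range(features.shape[1])]))
```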

27 pages, 38446 KiB  
Article
YOLOv8n-Al-Dehazing: A Robust Multi-Functional Operation Terminals Detection for Large Crane in Metallurgical Complex Dust Environment
by Yifeng Pan, Yonghong Long, Xin Li and Yejing Cai
Information 2025, 16(3), 229; https://doi.org/10.3390/info16030229 - 15 Mar 2025
Viewed by 397
Abstract
In the aluminum electrolysis production workshop, heavy-load overhead cranes equipped with multi-functional operation terminals are responsible for critical tasks such as anode replacement, shell breaking, slag removal, and material feeding. Real-time monitoring of these four types of operation terminals is of the utmost importance for ensuring production safety. High-resolution cameras are used to capture dynamic scenes of operation. However, the terminals undergo morphological changes and rotations in three-dimensional space according to task requirements during operations, lacking rotational invariance. This factor complicates the detection and recognition of multi-form targets in a 3D environment. Additionally, operations like striking and material feeding generate significant dust, often visually obscuring the terminal targets. The challenge of real-time multi-form object detection in high-resolution images affected by smoke and dust therefore demands both detection and dehazing algorithms. To address these issues, we propose the YOLOv8n-Al-Dehazing method, which achieves the precise detection of multi-functional material handling terminals in aluminum electrolysis workshops. To overcome the heavy computational costs associated with processing high-resolution images using YOLOv8n, our method refines YOLOv8n through component substitution and integrates real-time dehazing preprocessing for high-resolution images, thereby reducing the image processing time. We collected on-site data to construct a dataset for experimental validation. Compared with the YOLOv8n method, our approach increases inference speed by 15.54%, achieving 120.4 frames per second, which meets the requirements for real-time detection on site. Furthermore, compared with state-of-the-art detection methods and variants of YOLO, YOLOv8n-Al-Dehazing demonstrates superior performance, attaining an accuracy rate of 91.0%. Full article
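As a generic stand-in for the dehazing preprocessing stage, the sketch below applies the classic dark-channel-prior method (He et al.); the paper's actual dehazing module may differ, and the patch size, omega, and random test image are illustrative.

```python
# Dark-channel-prior dehazing as a generic preprocessing baseline (not necessarily the
# module used in the paper). Pure NumPy/SciPy on a random stand-in image.
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """img: float32 RGB array in [0, 1]; returns a dehazed image of the same shape."""
    dark = minimum_filter(img.min(axis=2), size=patch)              # dark channel
    # Estimate atmospheric light from the brightest dark-channel pixels.
    flat_idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[flat_idx].max(axis=0)
    # Transmission map and scene radiance recovery.
    norm = img / np.maximum(A, 1e-6)
    t = 1.0 - omega * minimum_filter(norm.min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

hazy = np.random.default_rng(0).uniform(0.4, 1.0, size=(240, 320, 3)).astype("float32")
print("dehazed image shape:", dehaze_dark_channel(hazy).shape)
```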

24 pages, 4386 KiB  
Article
A Method for Improving the Monitoring Quality and Network Lifetime of Hybrid Self-Powered Wireless Sensor Networks
by Peng Wang and Yonghua Xiong
Information 2025, 16(3), 228; https://doi.org/10.3390/info16030228 - 15 Mar 2025
Viewed by 307
Abstract
Wireless sensors deployed in large agricultural areas can monitor and collect data in real time, helping to achieve smart agriculture. But the complexity of the environment and the random deployment method seriously affect the coverage quality. The limited capacity of sensor batteries greatly limits the network lifetime. Therefore, how to extend the network lifetime while ensuring coverage quality is a highly challenging task. This paper proposes a node deployment optimization method to solve the problems of a poor coverage rate and a short network lifetime in hybrid self-powered sensor networks in obstacle environments. This method first optimizes the sensing direction of stationary nodes, expands the coverage range, and repairs coverage holes. Then, an improved bidirectional search A* algorithm is used to plan the obstacle avoidance moving path of mobile nodes, fill the remaining coverage holes, and improve the coverage quality of the network. Finally, a method based on an improved nutcracker optimizer algorithm is proposed to solve the optimal working sequence of nodes, schedule the “sleep or work” state of nodes, and extend the network lifetime. The simulation experiment verified the effectiveness of the proposed method, indicating that its performance in coverage quality, mobile energy consumption, and network lifetime is superior to other compared methods. Full article
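The sketch below shows plain grid-based A* path planning around obstacles, the building block that the paper's improved bidirectional A* extends for mobile-node repositioning; the 6x6 occupancy grid and unit step costs are made-up examples.

```python
# Plain grid-based A* path planning around obstacles (the baseline the paper's improved
# bidirectional A* builds on). The occupancy grid is a made-up example.
import heapq

def astar(grid, start, goal):
    """grid: list of rows, 1 = obstacle; returns a list of (row, col) cells or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and not grid[r][c]:
                heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 0, 0, 0, 1, 0],
        [1, 1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1, 0]]
print(astar(grid, (0, 0), (5, 5)))
```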

20 pages, 4390 KiB  
Article
Application of VGG16 Transfer Learning for Breast Cancer Detection
by Tanjim Fatima and Hamdy Soliman
Information 2025, 16(3), 227; https://doi.org/10.3390/info16030227 - 14 Mar 2025
Viewed by 541
Abstract
Breast cancer is among the primary causes of cancer-related deaths globally, highlighting the critical need for effective and early diagnostic methods. Traditional diagnostic approaches, while valuable, often face limitations in accuracy and accessibility. Recent advancements in deep learning, particularly transfer learning, provide promising solutions for enhancing diagnostic precision in breast cancer detection. Due to the limited capability of the BreakHis dataset, transfer learning was utilized to advance the training of our new model with the VGG16 neural network model, well trained on the rich ImageNet dataset. Moreover, the VGG16 architecture was carefully modified, including the fine-tuning of its layers, yielding our new model: M-VGG16. The new M-VGG16 model is designed to carry out the binary cancer/benign classification of breast samples effectively. The experimental results of our M-VGG16 model showed it achieved high validation accuracy (93.68%), precision (93.22%), recall (97.91%), and a high AUC (0.9838), outperforming other peer models in the same field. This study validates the VGG16 model’s suitability for breast cancer detection via transfer learning, providing an efficient, adaptable framework for improving diagnostic accuracy and potentially enhancing breast cancer detection. Key breast cancer detection challenges and potential M-VGG16 model refinements are also discussed. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
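A minimal Keras sketch of the transfer-learning recipe: an ImageNet-pretrained VGG16 backbone with a new binary classification head; the input size, head width, and the choice to freeze the entire backbone are illustrative, whereas the paper's M-VGG16 also fine-tunes selected layers.

```python
# Transfer learning with a frozen ImageNet-pretrained VGG16 backbone and a new binary
# head; head layout and freezing strategy are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False   # freeze the pretrained convolutional backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall(), tf.keras.metrics.AUC()])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # image datasets not included here
```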

21 pages, 6185 KiB  
Article
Automatic Reading Method for Analog Dial Gauges with Different Measurement Ranges in Outdoor Substation Scenarios
by Yueping Yang, Wenlong Liao, Songhai Fan, Jin Hou and Hao Tang
Information 2025, 16(3), 226; https://doi.org/10.3390/info16030226 - 14 Mar 2025
Viewed by 260
Abstract
In substation working environments, analog dial gauges are widely used for equipment monitoring. Accurate reading of dial values is crucial for real-time understanding of equipment operational status and enhancing the intelligence of substation equipment operation and maintenance. However, existing dial reading recognition algorithms face significant errors in complex scenarios and struggle to adapt to dials with different measurement ranges. To address these issues, this paper proposes an automatic reading method for analog dial gauges consisting of two stages: dial segmentation and reading recognition. In the dial segmentation stage, an improved DeepLabv3+ network is used to achieve precise segmentation of the dial scale and pointer, and the network is made lightweight to meet real-time requirements. In the reading recognition stage, the distorted image is first corrected, and PGNet is used to obtain scale information for scale matching. Finally, an angle-based method is employed to achieve automatic reading recognition of the analog dial gauge. The experimental results show that the improved Deeplabv3+ network has 4.25 M parameters, with an average detection time of 19 ms per image, an average Pixel Accuracy of 92.7%, and an average Intersection over Union (IoU) of 79.7%. The reading recognition algorithm achieves a reading accuracy of 92.3% across dial images in various scenarios, effectively improving reading recognition accuracy and providing strong support for the development of intelligent operation and maintenance in substations. Full article
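Once the scale's angular span and value range are known from the segmented scale, and the pointer angle has been measured, the angle-based reading reduces to a linear interpolation, as in the sketch below; the gauge range and angles are made-up example values.

```python
# Angle-based dial reading: linear interpolation between the scale's endpoints once the
# pointer angle is known. Example numbers are invented.
def dial_reading(pointer_deg, sweep_deg, min_value, max_value):
    """pointer_deg: pointer angle measured clockwise from the scale's zero mark;
    sweep_deg: total angular span of the scale."""
    fraction = min(max(pointer_deg, 0.0) / sweep_deg, 1.0)
    return min_value + fraction * (max_value - min_value)

# A 0-1.6 MPa gauge with a 270-degree scale, pointer 135 degrees past the zero mark:
print(dial_reading(pointer_deg=135, sweep_deg=270, min_value=0.0, max_value=1.6), "MPa")
```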

15 pages, 288 KiB  
Article
LLMs in Action: Robust Metrics for Evaluating Automated Ontology Annotation Systems
by Ali Noori, Pratik Devkota, Somya D. Mohanty and Prashanti Manda
Information 2025, 16(3), 225; https://doi.org/10.3390/info16030225 - 14 Mar 2025
Viewed by 407
Abstract
Ontologies are critical for organizing and interpreting complex domain-specific knowledge, with applications in data integration, functional prediction, and knowledge discovery. As the manual curation of ontology annotations becomes increasingly infeasible due to the exponential growth of biomedical and genomic data, natural language processing (NLP)-based systems have emerged as scalable alternatives. Evaluating these systems requires robust semantic similarity metrics that account for hierarchical and partially correct relationships often present in ontology annotations. This study explores the integration of graph-based and language-based embeddings to enhance the performance of semantic similarity metrics. Combining embeddings generated via Node2Vec and large language models (LLMs) with traditional semantic similarity metrics, we demonstrate that hybrid approaches effectively capture both structural and semantic relationships within ontologies. Our results show that combined similarity metrics outperform individual metrics, achieving high accuracy in distinguishing child–parent pairs from random pairs. This work underscores the importance of robust semantic similarity metrics for evaluating and optimizing NLP-based ontology annotation systems. Future research should explore the real-time integration of these metrics and advanced neural architectures to further enhance scalability and accuracy, advancing ontology-driven analyses in biomedical research and beyond. Full article
(This article belongs to the Special Issue Biomedical Natural Language Processing and Text Mining)
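To make the hybrid-metric idea concrete, the sketch below blends a structural similarity score (from graph embeddings such as Node2Vec) with a semantic score (from language-model embeddings) using a simple weighted average; the dictionary lookup, the term IDs, and the mixing weight are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def combined_similarity(term_a, term_b, graph_emb, text_emb, alpha=0.5):
    """Blend a structural (graph) and a semantic (language-model) score.

    graph_emb / text_emb map ontology term IDs to vectors, e.g., produced
    by Node2Vec and an LLM encoder respectively; alpha is an illustrative
    mixing weight, not a value from the paper.
    """
    structural = cosine(graph_emb[term_a], graph_emb[term_b])
    semantic = cosine(text_emb[term_a], text_emb[term_b])
    return alpha * structural + (1 - alpha) * semantic

# Toy usage with random placeholder vectors for two GO-like term IDs.
rng = np.random.default_rng(0)
graph_emb = {"GO:0008150": rng.normal(size=64), "GO:0009987": rng.normal(size=64)}
text_emb = {"GO:0008150": rng.normal(size=384), "GO:0009987": rng.normal(size=384)}
print(combined_similarity("GO:0008150", "GO:0009987", graph_emb, text_emb))
```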
19 pages, 3360 KiB  
Article
The Future of Higher Education: Trends, Challenges and Opportunities in AI-Driven Lifelong Learning in Peru
by Pablo Lara-Navarra, Antonia Ferrer-Sapena, Eduardo Ismodes-Cascón, Carlos Fosca-Pastor and Enrique A. Sánchez-Pérez
Information 2025, 16(3), 224; https://doi.org/10.3390/info16030224 - 14 Mar 2025
Viewed by 888
Abstract
This study analyses future trends in lifelong learning in the Peruvian context. Using the DeflyCompass model, an artificial intelligence tool, the main trends affecting the evolution of postgraduate studies were identified, including the impact of generative AI on the personalisation of education, the transformation of work and the growth of Generation Z as key players in the educational environment. The methodology combines a mixed qualitative and quantitative approach, based on the opinion of experts (45 participants from seven public and private universities in Peru), the technique of semantic projections, the use of generalist search engines and specialised databases, and other digital management resources such as Google Scholar profile analysis and online marketing campaign design tools. In particular, a total of 150 scientific papers and 300 articles from generalist sources were analysed. This approach made it possible to select, analyse and quantify the main trends in higher education in Peru and to assess their potential impact on the future development of graduate schools, specifically in the case of the Pontificia Universidad Católica del Perú (PUCP). The results highlight the importance of adapting postgraduate studies to new demands, such as the adoption of generative AI, personalised education and the integration of digital technologies to enhance the personal and professional growth of students. The study also highlights the need to incorporate strategies that address the transformation of work, with a focus on developing digital skills and preparing for an ever-changing work environment. The study thus provides a guide for Peruvian universities on how to adapt their graduate programmes to emerging trends, promoting a flexible and technologically advanced education that responds to the needs of future professionals. Full article
(This article belongs to the Special Issue Generative AI Technologies: Shaping the Future of Higher Education)
16 pages, 14380 KiB  
Article
Online Calibration Method of LiDAR and Camera Based on Fusion of Multi-Scale Cost Volume
by Xiaobo Han, Jie Luo, Xiaoxu Wei and Yongsheng Wang
Information 2025, 16(3), 223; https://doi.org/10.3390/info16030223 - 13 Mar 2025
Viewed by 515
Abstract
Online calibration of camera and LiDAR helps solve the problem of multi-sensor fusion and is of great significance for autonomous driving perception. Existing online calibration algorithms struggle to balance real-time performance and accuracy: high-precision algorithms demand substantial hardware resources, while lightweight algorithms have difficulty meeting accuracy requirements. Moreover, sensor noise, vibration, and changes in environmental conditions may reduce calibration accuracy, and the large domain differences between public datasets make existing online calibration algorithms unstable across datasets and poorly robust. To address these problems, we propose an online calibration algorithm based on multi-scale cost volume fusion. First, a multi-layer convolutional network downsamples and concatenates the camera RGB data and LiDAR point cloud data to obtain feature maps at three scales. These feature maps are then processed by feature concatenation and group-wise correlation to generate three sets of cost volumes at different scales. All cost volumes are subsequently concatenated and fed into the pose estimation module, and after post-processing, the translation and rotation between the camera and LiDAR coordinate systems are obtained. We evaluated the method on the KITTI odometry dataset, measuring an average translation error of 0.278 cm, an average rotation error of 0.020°, and a per-frame runtime of 23 ms, which is competitive with state-of-the-art approaches. Full article
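A minimal sketch of the group-wise correlation and multi-scale fusion steps, written with PyTorch, is given below; the channel counts, number of groups, and feature-map resolutions are illustrative assumptions rather than the paper's actual configuration.

```python
import torch

def groupwise_correlation(feat_cam, feat_lidar, num_groups=8):
    """Group-wise correlation between camera and LiDAR feature maps.

    feat_cam, feat_lidar: tensors of shape (B, C, H, W) at one scale.
    Channels are split into groups; each group contributes one correlation
    channel, giving a (B, num_groups, H, W) cost volume. Sketch only.
    """
    b, c, h, w = feat_cam.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    cam = feat_cam.view(b, num_groups, c // num_groups, h, w)
    lidar = feat_lidar.view(b, num_groups, c // num_groups, h, w)
    return (cam * lidar).mean(dim=2)  # average over channels within each group

def fuse_multiscale(cost_volumes):
    """Upsample per-scale cost volumes to a common size and concatenate them."""
    target = cost_volumes[0].shape[-2:]
    resized = [torch.nn.functional.interpolate(cv, size=target, mode="bilinear",
                                               align_corners=False)
               for cv in cost_volumes]
    return torch.cat(resized, dim=1)  # would be fed to a pose-estimation head

# Toy usage: three scales of camera/LiDAR features with made-up shapes.
scales = [(64, 80, 160), (128, 40, 80), (256, 20, 40)]
cvs = [groupwise_correlation(torch.randn(1, c, h, w), torch.randn(1, c, h, w))
       for c, h, w in scales]
print(fuse_multiscale(cvs).shape)  # torch.Size([1, 24, 80, 160])
```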
20 pages, 3504 KiB  
Article
Memristor-Based Neuromorphic System for Unsupervised Online Learning and Network Anomaly Detection on Edge Devices
by Md Shahanur Alam, Chris Yakopcic, Raqibul Hasan and Tarek M. Taha
Information 2025, 16(3), 222; https://doi.org/10.3390/info16030222 - 13 Mar 2025
Viewed by 520
Abstract
An ultralow-power, high-performance online-learning and anomaly-detection system has been developed for edge security applications. Designed to support personalized learning without relying on cloud data processing, the system employs sample-wise learning, eliminating the need for storing entire datasets for training. Built using memristor-based analog neuromorphic and in-memory computing techniques, the system integrates two unsupervised autoencoder neural networks—one utilizing optimized crossbar weights and the other performing real-time learning to detect novel intrusions. Threshold optimization and anomaly detection are achieved through a fully analog Euclidean Distance (ED) computation circuit, eliminating the need for floating-point processing units. The system demonstrates 87% anomaly-detection accuracy; achieves a performance of 16.1 GOPS—774× faster than the ASUS Tinker Board edge processor; and delivers an energy efficiency of 783 GOPS/W, consuming only 20.5 mW during anomaly detection. Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
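The detection rule itself (flagging a sample whose reconstruction error exceeds a threshold) can be sketched in a few lines; in the paper the Euclidean distance is computed by an analog circuit on memristor hardware, whereas the NumPy version below, with its placeholder autoencoder and threshold, is purely illustrative.

```python
import numpy as np

def euclidean_distance(x, x_hat):
    """Euclidean distance between an input sample and its reconstruction."""
    return float(np.linalg.norm(x - x_hat))

def is_anomaly(x, autoencoder, threshold):
    """Flag a sample as anomalous if its reconstruction error exceeds threshold.

    `autoencoder` is any callable mapping a sample to its reconstruction
    (in the paper, a memristor-crossbar autoencoder; here a placeholder).
    `threshold` would be tuned on benign traffic.
    """
    return euclidean_distance(x, autoencoder(x)) > threshold

# Toy usage: a stand-in "autoencoder" that reconstructs benign-looking samples well.
benign_profile = np.ones(16) * 0.5
fake_autoencoder = lambda x: benign_profile          # placeholder model
normal_sample = benign_profile + np.random.default_rng(1).normal(0, 0.02, 16)
attack_sample = np.random.default_rng(2).uniform(0, 1, 16)
print(is_anomaly(normal_sample, fake_autoencoder, threshold=0.5))  # expected: False
print(is_anomaly(attack_sample, fake_autoencoder, threshold=0.5))  # expected: True
```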
15 pages, 1619 KiB  
Article
Optimal Convolutional Networks for Staging and Detecting of Diabetic Retinopathy
by Minyar Sassi Hidri, Adel Hidri, Suleiman Ali Alsaif, Muteeb Alahmari and Eman AlShehri
Information 2025, 16(3), 221; https://doi.org/10.3390/info16030221 - 13 Mar 2025
Viewed by 304
Abstract
Diabetic retinopathy (DR) is the main ocular complication of diabetes. Because it remains asymptomatic for a long time, it is subject to annual screening using dilated fundus or retinal photography to look for early signs. Fundus photography and optical coherence tomography (OCT) are used by ophthalmologists to assess retinal thickness and structure, as well as to detect edema, hemorrhage, and scarring. Convolutional networks (ConvNets) have proven highly effective in medical imaging, overcoming barriers that older methods could not. In this study, a robust and optimized deep ConvNet is proposed to analyze fundus images and automatically distinguish between healthy, moderate, and severe DR. The proposed model combines a ConvNet architecture pretrained on ImageNet, data augmentation, class balancing, and transfer learning to establish a benchmark. A significant improvement is obtained for the intermediate class, which corresponds to the early stage of DR and was the main difficulty in previous studies. By reducing reliance on retina specialists and broadening access to retinal care, the proposed model enables substantially more robust and objective early staging and detection of DR. Full article
(This article belongs to the Section Artificial Intelligence)
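A sketch of the transfer-learning recipe described above follows; the ResNet-50 backbone, input size, augmentation parameters, class weights, and learning rate are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Backbone pretrained on ImageNet, with the classifier replaced for the
# three DR classes (healthy / moderate / severe) -- illustrative choices.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the backbone for initial training
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)   # new trainable classification head

# Data augmentation of the kind used to enlarge and diversify fundus datasets.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Class balancing via a weighted loss; weights would be derived from class counts.
class_weights = torch.tensor([1.0, 2.5, 3.0])   # placeholder values
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```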