Search Results (706)

Search Parameters:
Keywords = intelligent medical detection

22 pages, 4797 KB  
Article
Early Oral Cancer Detection with AI: Design and Implementation of a Deep Learning Image-Based Chatbot
by Pablo Ormeño-Arriagada, Gastón Márquez, Carla Taramasco, Gustavo Gatica, Juan Pablo Vasconez and Eduardo Navarro
Appl. Sci. 2025, 15(19), 10792; https://doi.org/10.3390/app151910792 - 7 Oct 2025
Abstract
Oral cancer remains a critical global health challenge, with delayed diagnosis driving high morbidity and mortality. Despite progress in artificial intelligence, computer vision, and medical imaging, early detection tools that are accessible, explainable, and designed for patient engagement remain limited. This study presents a novel system that combines a patient-centred chatbot with a deep learning framework to support early diagnosis, symptom triage, and health education. The system integrates convolutional neural networks, class activation mapping, and natural language processing within a conversational interface. Five deep learning models were evaluated (CNN, DenseNet121, DenseNet169, DenseNet201, and InceptionV3) using two balanced public datasets. Model performance was assessed using accuracy, sensitivity, specificity, diagnostic odds ratio (DOR), and Cohen’s Kappa. InceptionV3 consistently outperformed the other models across these metrics, achieving the highest diagnostic accuracy (77.6%) and DOR (20.67), and was selected as the core engine of the chatbot’s diagnostic module. The deployed chatbot provides real-time image assessments and personalised conversational support via multilingual web and mobile platforms. By combining automated image interpretation with interactive guidance, the system promotes timely consultation and informed decision-making. It offers a prototype for a chatbot, which is scalable and serves as a low-cost solution for underserved populations and demonstrates strong potential for integration into digital health pathways. Importantly, the system is not intended to function as a formal screening tool or replace clinical diagnosis; rather, it provides preliminary guidance to encourage early medical consultation and informed health decisions. Full article
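The evaluation metrics named in this abstract (accuracy, sensitivity, specificity, diagnostic odds ratio, and Cohen’s Kappa) all derive from a single 2×2 confusion matrix. A minimal sketch with illustrative counts (not the paper’s data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity, diagnostic odds ratio (DOR),
    and Cohen's kappa from a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (acc - p_chance) / (1 - p_chance)
    return acc, sens, spec, dor, kappa

# Hypothetical counts from a balanced test set of 200 images.
acc, sens, spec, dor, kappa = binary_metrics(tp=80, fp=20, fn=25, tn=75)
print(f"acc={acc:.3f} sens={sens:.3f} spec={spec:.3f} DOR={dor:.2f} kappa={kappa:.3f}")
```

DOR is the odds of a positive test in diseased versus non-diseased cases; values well above 1 (like the paper’s 20.67) indicate strong discriminative power.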

12 pages, 284 KB  
Article
AI-Enabled Secure and Scalable Distributed Web Architecture for Medical Informatics
by Marian Ileana, Pavel Petrov and Vassil Milev
Appl. Sci. 2025, 15(19), 10710; https://doi.org/10.3390/app151910710 - 4 Oct 2025
Abstract
Current medical informatics systems face critical challenges, including limited scalability across distributed institutions, insufficient real-time AI-driven decision support, and lack of standardized interoperability for heterogeneous medical data exchange. To address these challenges, this paper proposes a novel distributed web system architecture for medical informatics, integrating artificial intelligence techniques and cloud-based services. The system ensures interoperability via HL7 FHIR standards and preserves data privacy and fault tolerance across interconnected medical institutions. A hybrid AI pipeline combining principal component analysis (PCA), K-Means clustering, and convolutional neural networks (CNNs) is applied to diffusion tensor imaging (DTI) data for early detection of neurological anomalies. The architecture leverages containerized microservices orchestrated with Docker Swarm, enabling adaptive resource management and high availability. Experimental validation confirms reduced latency, improved system reliability, and enhanced compliance with medical data exchange protocols. Results demonstrate superior performance with an average latency of 94 ms, a diagnostic accuracy of 91.3%, and enhanced clinical workflow efficiency compared to traditional monolithic architectures. The proposed solution successfully addresses scalability limitations while maintaining data security and regulatory compliance across multi-institutional deployments. This work contributes to the advancement of intelligent, interoperable, and scalable e-health infrastructures aligned with the evolution of digital healthcare ecosystems. Full article
(This article belongs to the Special Issue Data Science and Medical Informatics)
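The hybrid pipeline described above chains PCA-based dimensionality reduction into K-Means clustering before CNN classification. A minimal sketch of the first two stages on synthetic stand-in data (real inputs would be preprocessed DTI feature vectors; the CNN stage is omitted):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for flattened DTI features: 200 scans x 64 features,
# two well-separated groups (e.g., normal vs. anomalous).
X = np.vstack([rng.normal(0, 1, (100, 64)), rng.normal(3, 1, (100, 64))])

X_red = PCA(n_components=10, random_state=0).fit_transform(X)  # reduce dimensionality
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_red)  # coarse grouping
# In the paper a CNN then classifies the imaging data; omitted here.
print(X_red.shape, np.bincount(labels))
```

PCA before K-Means is a common pattern: clustering in the reduced space is faster and less noise-sensitive than in the raw feature space.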

22 pages, 782 KB  
Article
Hybrid CNN-Swin Transformer Model to Advance the Diagnosis of Maxillary Sinus Abnormalities on CT Images Using Explainable AI
by Mohammad Alhumaid and Ayman G. Fayoumi
Computers 2025, 14(10), 419; https://doi.org/10.3390/computers14100419 - 2 Oct 2025
Abstract
Accurate diagnosis of sinusitis is essential due to its widespread prevalence and its considerable impact on patient quality of life. While multiple imaging techniques are available for detecting maxillary sinus abnormalities, computed tomography (CT) remains the preferred modality because of its high sensitivity and spatial resolution. Although recent advances in deep learning have led to the development of automated methods for sinusitis classification, many existing models perform poorly in the presence of complex pathological features and offer limited interpretability, which hinders their integration into clinical workflows. In this study, we propose a hybrid deep learning framework that combines EfficientNetB0, a convolutional neural network, with the Swin Transformer, a vision transformer, to improve feature representation. An attention-based fusion module is used to integrate both local and global information, thereby enhancing diagnostic accuracy. To improve transparency and support clinical adoption, the model incorporates explainable artificial intelligence (XAI) techniques using Gradient-weighted Class Activation Mapping (Grad-CAM). This allows for visualization of the regions influencing the model’s predictions, helping radiologists assess the clinical relevance of the results. We evaluate the proposed method on a curated maxillary sinus CT dataset covering four diagnostic categories: Normal, Opacified, Polyposis, and Retention Cysts. The model achieves a classification accuracy of 95.83%, with precision, recall, and F1 score all at 95%. Grad-CAM visualizations indicate that the model consistently focuses on clinically significant regions of the sinus anatomy, supporting its potential utility as a reliable diagnostic aid in medical practice. Full article
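Grad-CAM, as used here, weights each activation map of the final convolutional block by the spatially averaged gradient of the class score. The core computation is backbone-agnostic and can be sketched in a few lines (inputs are random stand-ins, not CT features):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (K, H, W) arrays -- feature maps of the
    last conv block and the gradient of the class score w.r.t. them."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over K maps -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for overlay
    return cam

rng = np.random.default_rng(1)
cam = grad_cam(rng.normal(size=(8, 7, 7)), rng.normal(size=(8, 7, 7)))
print(cam.shape)
```

The resulting heatmap is upsampled to the input resolution and overlaid on the CT slice, which is what lets radiologists check that the model attends to the sinus anatomy rather than artifacts.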
36 pages, 5130 KB  
Article
SecureEdge-MedChain: A Post-Quantum Blockchain and Federated Learning Framework for Real-Time Predictive Diagnostics in IoMT
by Sivasubramanian Ravisankar and Rajagopal Maheswar
Sensors 2025, 25(19), 5988; https://doi.org/10.3390/s25195988 - 27 Sep 2025
Abstract
The burgeoning Internet of Medical Things (IoMT) offers unprecedented opportunities for real-time patient monitoring and predictive diagnostics, yet the current systems struggle with scalability, data confidentiality against quantum threats, and real-time privacy-preserving intelligence. This paper introduces Med-Q Ledger, a novel, multi-layered framework designed to overcome these critical limitations in the Medical IoT domain. Med-Q Ledger integrates a permissioned Hyperledger Fabric for transactional integrity with a scalable Holochain Distributed Hash Table for high-volume telemetry, achieving horizontal scalability and sub-second commit times. To fortify long-term data security, the framework incorporates post-quantum cryptography (PQC), specifically CRYSTALS-Dilithium signatures and Kyber Key Encapsulation Mechanisms. Real-time, privacy-preserving intelligence is delivered through an edge-based federated learning (FL) model, utilizing lightweight autoencoders for anomaly detection on encrypted gradients. We validate Med-Q Ledger’s efficacy through a critical application: the prediction of intestinal complications like necrotizing enterocolitis (NEC) in preterm infants, a condition frequently necessitating emergency colostomy. By processing physiological data from maternal wearable sensors and infant intestinal images, our integrated Random Forest model demonstrates superior performance in predicting colostomy necessity. Experimental evaluations reveal a throughput of approximately 3400 transactions per second (TPS) with ~180 ms end-to-end latency, a >95% anomaly detection rate with <2% false positives, and an 11% computational overhead for PQC on resource-constrained devices. Furthermore, our results show a 0.90 F1-score for colostomy prediction, a 25% reduction in emergency surgeries, and 31% lower energy consumption compared to MQTT baselines. Med-Q Ledger sets a new benchmark for secure, high-performance, and privacy-preserving IoMT analytics, offering a robust blueprint for next-generation healthcare deployments. Full article
(This article belongs to the Section Internet of Things)
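The edge-based federated learning component aggregates locally trained models without moving raw patient data off the device. The canonical aggregation step (FedAvg-style weighted averaging, shown here on hypothetical flat parameter vectors) looks like:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of client model parameters (FedAvg).
    Each entry of client_weights is one edge node's flat parameter vector."""
    sizes = np.asarray(client_sizes, dtype=float)
    W = np.stack(client_weights)
    return (W * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three hypothetical edge nodes with different amounts of local data;
# the node with more samples contributes proportionally more.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_w = fedavg(clients, client_sizes=[10, 10, 20])
print(global_w)
```

In the paper’s setting the exchanged updates are additionally encrypted; this sketch shows only the plaintext averaging logic.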

72 pages, 4170 KB  
Systematic Review
Digital Twin Cognition: AI-Biomarker Integration in Biomimetic Neuropsychology
by Evgenia Gkintoni and Constantinos Halkiopoulos
Biomimetics 2025, 10(10), 640; https://doi.org/10.3390/biomimetics10100640 - 23 Sep 2025
Abstract
(1) Background: The convergence of digital twin technology, artificial intelligence, and multimodal biomarkers heralds a transformative era in neuropsychological assessment and intervention. Digital twin cognition represents an emerging paradigm that creates dynamic, personalized virtual models of individual cognitive systems, enabling continuous monitoring, predictive modeling, and precision interventions. This systematic review comprehensively examines the integration of AI-driven biomarkers within biomimetic neuropsychological frameworks to advance personalized cognitive health. (2) Methods: Following PRISMA 2020 guidelines, we conducted a systematic search across six major databases spanning medical, neuroscience, and computer science disciplines for literature published between 2014 and 2024. The review synthesized evidence addressing five research questions examining framework integration, predictive accuracy, clinical translation, algorithm effectiveness, and neuropsychological validity. (3) Results: Analysis revealed that multimodal integration approaches combining neuroimaging, physiological, behavioral, and digital phenotyping data substantially outperformed single-modality assessments. Deep learning architectures demonstrated superior pattern recognition capabilities, while traditional machine learning maintained advantages in interpretability and clinical implementation. Successful frameworks, particularly for neurodegenerative diseases and multiple sclerosis, achieved earlier detection, improved treatment personalization, and enhanced patient outcomes. However, significant challenges persist in algorithm interpretability, population generalizability, and the integration of healthcare systems. Critical analysis reveals that high-accuracy claims (85–95%) predominantly derive from small, homogeneous cohorts with limited external validation. Real-world performance in diverse clinical settings likely ranges 10–15% lower, emphasizing the need for large-scale, multi-site validation studies before clinical deployment. (4) Conclusions: Digital twin cognition establishes a new frontier in personalized neuropsychology, offering unprecedented opportunities for early detection, continuous monitoring, and adaptive interventions while requiring continued advancement in standardization, validation, and ethical frameworks. Full article

33 pages, 978 KB  
Article
An Interpretable Clinical Decision Support System Aims to Stage Age-Related Macular Degeneration Using Deep Learning and Imaging Biomarkers
by Ekaterina A. Lopukhova, Ernest S. Yusupov, Rada R. Ibragimova, Gulnaz M. Idrisova, Timur R. Mukhamadeev, Elizaveta P. Grakhova and Ruslan V. Kutluyarov
Appl. Sci. 2025, 15(18), 10197; https://doi.org/10.3390/app151810197 - 18 Sep 2025
Abstract
The use of intelligent clinical decision support systems (CDSS) has the potential to improve the accuracy and speed of diagnoses significantly. These systems can analyze a patient’s medical data and generate comprehensive reports that help specialists better understand and evaluate the current clinical scenario. This capability is particularly important when dealing with medical images, as the heavy workload on healthcare professionals can hinder their ability to notice critical biomarkers, which may be difficult to detect with the naked eye due to stress and fatigue. Implementing a CDSS that uses computer vision (CV) techniques can alleviate this challenge. However, one of the main obstacles to the widespread use of CV and intelligent analysis methods in medical diagnostics is the lack of a clear understanding among diagnosticians of how these systems operate. A better understanding of their functioning and of the reliability of the identified biomarkers will enable medical professionals to more effectively address clinical problems. Additionally, it is essential to tailor the training process of machine learning models to medical data, which are often imbalanced due to varying probabilities of disease detection. Neglecting this factor can compromise the quality of the developed CDSS. This article presents the development of a CDSS module focused on diagnosing age-related macular degeneration. Unlike traditional methods that classify diseases or their stages based on optical coherence tomography (OCT) images, the proposed CDSS provides a more sophisticated and accurate analysis of biomarkers detected through a deep neural network. This approach combines interpretative reasoning with highly accurate models, although these models can be complex to describe. To address the issue of class imbalance, an algorithm was developed to optimally select biomarkers, taking into account both their statistical and clinical significance. As a result, the algorithm prioritizes the selection of classes that ensure high model accuracy while maintaining clinically relevant responses generated by the CDSS module. The results indicate that the overall accuracy of staging age-related macular degeneration increased by 63.3% compared with traditional methods of direct stage classification using a similar machine learning model. This improvement suggests that the CDSS module can significantly enhance disease diagnosis, particularly in situations with class imbalance in the original dataset. To improve interpretability, the process of determining the most likely disease stage was organized into two steps. At each step, the diagnostician could visually access information explaining the reasoning behind the intelligent diagnosis, thereby assisting experts in understanding the basis for clinical decision-making. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
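The paper’s biomarker-selection algorithm weighs both statistical and clinical significance; its exact form is not given in the abstract, but the statistical half can be illustrated with a standard relevance ranking (mutual information on synthetic data; the clinical weighting is left as an assumption):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
# 300 patients x 6 candidate biomarkers; only the first two are
# informative about the (binary) stage label by construction.
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 6))
X[:, 0] += 1.5 * y
X[:, 1] -= 1.0 * y

# Statistical relevance of each biomarker to the stage label; a clinical
# significance weight (hypothetical) could then rescale these scores.
mi = mutual_info_classif(X, y, random_state=0)
ranked = np.argsort(mi)[::-1]
print("biomarkers ranked by relevance:", ranked[:3])
```

Mutual information is one of several reasonable relevance measures here; chi-squared or permutation importance would slot in the same way.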

17 pages, 8430 KB  
Article
Robust Audio–Visual Speaker Localization in Noisy Aircraft Cabins for Inflight Medical Assistance
by Qiwu Qin and Yian Zhu
Sensors 2025, 25(18), 5827; https://doi.org/10.3390/s25185827 - 18 Sep 2025
Abstract
Active Speaker Localization (ASL) involves identifying both who is speaking and where they are speaking from within audiovisual content. This capability is crucial in constrained and acoustically challenging environments, such as aircraft cabins during in-flight medical emergencies. In this paper, we propose a novel end-to-end Cross-Modal Audio–Visual Fusion Network (CMAVFN) designed specifically for ASL under real-world aviation conditions, which are characterized by engine noise, dynamic lighting, occlusions from seats or oxygen masks, and frequent speaker turnover. Our model directly processes raw video frames and multi-channel ambient audio, eliminating the need for intermediate face detection pipelines. It anchors spatially resolved visual features with directional audio cues using a cross-modal attention mechanism. To enhance spatiotemporal reasoning, we introduce a dual-branch localization decoder and a cross-modal auxiliary supervision loss. Extensive experiments on public datasets (AVA-ActiveSpeaker, EasyCom) and our domain-specific AirCabin-ASL benchmark demonstrate that CMAVFN achieves robust speaker localization in noisy, occluded, and multi-speaker aviation scenarios. This framework offers a practical foundation for speech-driven interaction systems in aircraft cabins, enabling applications such as real-time crew assistance, voice-based medical documentation, and intelligent in-flight health monitoring. Full article
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)
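The cross-modal attention mechanism described above lets spatially resolved visual features query directional audio cues. A minimal scaled-dot-product sketch (random stand-ins for both modalities; the real model’s shapes, projections, and multi-head structure will differ):

```python
import numpy as np

def cross_modal_attention(visual_q, audio_k, audio_v):
    """Scaled dot-product attention: visual tokens (Tq, d) query
    directional audio features (Tk, d) and pool their values (Tk, d)."""
    d = visual_q.shape[-1]
    scores = visual_q @ audio_k.T / np.sqrt(d)    # similarity of each visual token to each audio frame
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability before exp
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over audio positions
    return attn @ audio_v                         # audio-informed visual features

rng = np.random.default_rng(0)
out = cross_modal_attention(rng.normal(size=(4, 16)),
                            rng.normal(size=(6, 16)),
                            rng.normal(size=(6, 16)))
print(out.shape)
```

Each visual location thus receives a weighted summary of the audio stream, which is what anchors "who is speaking" to "where" without a separate face-detection stage.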

40 pages, 10210 KB  
Article
An Explainable Deep Learning-Based Predictive Maintenance Solution for Air Compressor Condition Monitoring
by Alexandru Ciobotaru, Cosmina Corches, Dan Gota and Liviu Miclea
Sensors 2025, 25(18), 5797; https://doi.org/10.3390/s25185797 - 17 Sep 2025
Abstract
Air compressors are vital across various sectors—automotive, manufacturing, buildings, and healthcare—as they provide pressurized air for air suspension systems in vehicles, power pneumatic machines throughout industrial production lines, and support non-clinical infrastructure within hospital environments, including pneumatic control systems, isolation room pressurization, and laboratory equipment operation. Ensuring that such components are reliable is critical, as unexpected failures can disrupt facility functions and compromise patient safety. Predictive maintenance (PdM) has emerged as a key factor in enhancing the reliability and operational efficiency of medical devices by leveraging sensor data and artificial intelligence (AI)-based algorithms to detect component degradation before functional failures occur. In this paper, a predictive maintenance solution for condition monitoring and fault prediction for the exhaust valve, bearings, water pump, and radiator of an air compressor is presented, comparing a hybrid model—a deep neural network (DNN) as a feature extractor with a support vector machine (SVM) for condition classification—against a pure DNN classifier as well as a standalone SVM model. Additionally, each model was trained and validated on three devices—NVIDIA T4 GPU, Raspberry Pi 4 Model B, and NVIDIA Jetson Nano—and performance reports in terms of latency, energy consumption, and CO2 emissions are presented. Moreover, three model agnostic explainable AI (XAI) methods were employed to increase the transparency of the hybrid model’s final decision: Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME) and partial dependence plots (PDP). The hybrid model achieves on average 98.71%, 99.25%, 98.78%, and 99.01% performance in terms of accuracy, precision, recall, and F1-score across all devices. Additionally, the DNN baseline and SVM model achieve on average 93.2%, 88.33%, 90.45%, and 89.37%, as well as 93.34%, 88.11%, 95.41%, and 91.62% performance in terms of accuracy, precision, recall, and F1-score across all devices. The integration of XAI methods within the PdM pipeline offers enhanced transparency, interpretability, and trustworthiness of predictive outcomes, thereby facilitating informed decision-making among maintenance personnel. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
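The hybrid design—DNN features feeding an SVM classifier—can be sketched end to end if the DNN output is replaced with synthetic feature vectors (the real extractor and the compressor sensor data are assumed away here):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in "extracted features": in the paper a DNN produces these from
# compressor sensor signals; here two synthetic condition classes suffice.
X = np.vstack([rng.normal(0, 1, (150, 32)),   # class 0: healthy
               rng.normal(2, 1, (150, 32))])  # class 1: degraded
y = np.repeat([0, 1], 150)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # SVM on top of the feature representation
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```

The appeal of this split is practical: the DNN learns a compact representation once, while the SVM decision boundary is cheap to retrain when operating conditions drift.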

17 pages, 1124 KB  
Review
The Role of Artificial Intelligence in Herpesvirus Detection, Transmission, and Predictive Modeling: With a Special Focus on Marek’s Disease Virus
by Haji Akbar
Pathogens 2025, 14(9), 937; https://doi.org/10.3390/pathogens14090937 - 16 Sep 2025
Abstract
Herpesvirus infections, including herpes simplex virus (HSV), Epstein–Barr virus (EBV), and cytomegalovirus (CMV), present significant challenges in diagnosis, treatment, and transmission control. Despite advances in medical technology, managing these infections remains complex due to the viruses’ ability to establish latency and their widespread prevalence. Artificial Intelligence (AI) has emerged as a transformative tool in biomedical science, enhancing our ability to understand, predict, and manage infectious diseases. In veterinary virology, AI applications offer considerable potential for improving diagnostics, forecasting outbreaks, and implementing targeted control strategies. This review explores the growing role of AI in advancing our understanding of herpesvirus infection, particularly those caused by MDV, through improved detection, transmission modeling, treatment strategies, and predictive tools. Employing AI technologies such as machine learning (ML), deep learning (DL), and natural language processing (NLP), researchers have made significant progress in addressing diagnostic limitations, modeling transmission dynamics, and identifying potential therapeutics. Furthermore, AI holds the potential to revolutionize personalized medicine, predictive analytics, and vaccine development for herpesvirus-related diseases. The review concludes by discussing ethical considerations, implementation challenges, and future research directions necessary to fully integrate AI into clinical and veterinary practice. Full article
(This article belongs to the Section Viral Pathogens)

20 pages, 847 KB  
Review
Artificial Intelligence in Clinical Medicine: Challenges Across Diagnostic Imaging, Clinical Decision Support, Surgery, Pathology, and Drug Discovery
by Eren Ogut
Clin. Pract. 2025, 15(9), 169; https://doi.org/10.3390/clinpract15090169 - 16 Sep 2025
Abstract
Aims/Background: The growing integration of artificial intelligence (AI) into clinical medicine has opened new possibilities for enhancing diagnostic accuracy, therapeutic decision-making, and biomedical innovation across several domains. This review is aimed to evaluate the clinical applications of AI across five key domains of medicine: diagnostic imaging, clinical decision support systems (CDSS), surgery, pathology, and drug discovery, highlighting achievements, limitations, and future directions. Methods: A comprehensive PubMed search was performed without language or publication date restrictions, combining Medical Subject Headings (MeSH) and free-text keywords for AI with domain-specific terms. The search yielded 2047 records, of which 243 duplicates were removed, leaving 1804 unique studies. After screening titles and abstracts, 1482 records were excluded due to irrelevance, preclinical scope, or lack of patient-level outcomes. Full-text review of 322 articles led to the exclusion of 172 studies (no clinical validation or outcomes, n = 64; methodological studies, n = 43; preclinical and in vitro-only, n = 39; conference abstracts without peer-reviewed full text, n = 26). Ultimately, 150 studies met inclusion criteria and were analyzed qualitatively. Data extraction focused on study context, AI technique, dataset characteristics, comparator benchmarks, and reported outcomes, such as diagnostic accuracy, area under the curve (AUC), efficiency, and clinical improvements. Results: AI demonstrated strong performance in diagnostic imaging, achieving expert-level accuracy in tasks such as cancer detection (AUC up to 0.94). CDSS showed promise in predicting adverse events (sepsis, atrial fibrillation), though real-world outcome evidence was mixed. In surgery, AI enhanced intraoperative guidance and risk stratification. Pathology benefited from AI-assisted diagnosis and molecular inference from histology. AI also accelerated drug discovery through protein structure prediction and virtual screening. However, challenges included limited explainability, data bias, lack of prospective trials, and regulatory hurdles. Conclusions: AI is transforming clinical medicine, offering improved accuracy, efficiency, and discovery. Yet, its integration into routine care demands rigorous validation, ethical oversight, and human-AI collaboration. Continued interdisciplinary efforts will be essential to translate these innovations into safe and effective patient-centered care. Full article

9 pages, 486 KB  
Proceeding Paper
A Comprehensive Remote Monitoring System for Automated Diabetes Risk Assessment and Control Through Smart Wearables and Personal Health Devices
by Jawad Ali, Manzoor Hussain and Trisiani Dewi Hendrawati
Eng. Proc. 2025, 107(1), 91; https://doi.org/10.3390/engproc2025107091 - 15 Sep 2025
Abstract
Diabetes, a chronic metabolic disease marked by elevated blood glucose levels, affects millions of people globally. A lower quality of life and a markedly higher chance of potentially deadly consequences, such as heart disease, renal failure, and other organ dysfunctions, are closely linked to it. In order to effectively manage diabetes and avoid serious consequences, early detection and ongoing monitoring are essential. Remote health monitoring has emerged as a viable and promising option for proactive healthcare due to the development of contemporary technology, particularly in the areas of wearables and mobile computing. In this work, we suggest a thorough and sophisticated framework for remote monitoring that is intended to automatically predict, identify, and manage diabetes risks. To facilitate real-time data collection, analysis, and tailored feedback, the system makes use of the integration of smartphones, wearable sensors, and specialized medical equipment. In addition to enhancing patient engagement and lowering the strain on conventional healthcare infrastructures, our suggested model aims to assist patients and healthcare providers in maintaining improved glycemic control. We employed a tenfold stratified cross-validation approach to assess the efficacy of our framework, and the results showed remarkable performance metrics. The system attained scores of 79.00 percent for specificity, 87.20 percent for sensitivity, and 83.20 percent for accuracy. The outcomes show how our framework can be a dependable and scalable remote diabetes management solution, opening the door to more intelligent and easily accessible healthcare systems around the world. Full article
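The tenfold stratified cross-validation protocol, with per-fold sensitivity and specificity, can be sketched on synthetic stand-in data (logistic regression here is an arbitrary placeholder for the paper’s model):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for wearable-derived diabetes-risk features.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

sens, spec, acc = [], [], []
# Stratification keeps the class ratio identical in every fold.
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    pred = LogisticRegression(max_iter=1000).fit(X[tr], y[tr]).predict(X[te])
    tn, fp, fn, tp = confusion_matrix(y[te], pred).ravel()
    sens.append(tp / (tp + fn))     # sensitivity: recall on positives
    spec.append(tn / (tn + fp))     # specificity: recall on negatives
    acc.append((tp + tn) / len(te))
print(f"sensitivity={np.mean(sens):.3f} "
      f"specificity={np.mean(spec):.3f} accuracy={np.mean(acc):.3f}")
```

Reporting the mean over folds, as the paper does, gives a less optimistic estimate than a single train/test split, which matters for imbalanced medical cohorts.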

26 pages, 3973 KB  
Article
ViT-DCNN: Vision Transformer with Deformable CNN Model for Lung and Colon Cancer Detection
by Aditya Pal, Hari Mohan Rai, Joon Yoo, Sang-Ryong Lee and Yooheon Park
Cancers 2025, 17(18), 3005; https://doi.org/10.3390/cancers17183005 - 15 Sep 2025
Abstract
Background/Objectives: Lung and colon cancers remain among the most prevalent and fatal diseases worldwide, and their early detection is a serious challenge. The data used in this study was obtained from the Lung and Colon Cancer Histopathological Images Dataset, which comprises five different classes of image data, namely colon adenocarcinoma, colon normal, lung adenocarcinoma, lung normal, and lung squamous cell carcinoma, split into training (80%), validation (10%), and test (10%) subsets. In this study, we propose the ViT-DCNN (Vision Transformer with Deformable CNN) model, with the aim of improving cancer detection and classification using medical images. Methods: The combination of the ViT’s self-attention capabilities with deformable convolutions allows for improved feature extraction, while also enabling the model to learn both holistic contextual information as well as fine-grained localized spatial details. Results: On the test set, the model performed remarkably well, with an accuracy of 94.24%, an F1 score of 94.23%, recall of 94.24%, and precision of 94.37%, confirming its robustness in detecting cancerous tissues. Furthermore, our proposed ViT-DCNN model outperforms several state-of-the-art models, including ResNet-152, EfficientNet-B7, SwinTransformer, DenseNet-201, ConvNext, TransUNet, CNN-LSTM, MobileNetV3, and NASNet-A, across all major performance metrics. Conclusions: By using deep learning and advanced image analysis, this model enhances the efficiency of cancer detection, thus representing a valuable tool for radiologists and clinicians. This study demonstrates that the proposed ViT-DCNN model can reduce diagnostic inaccuracies and improve detection efficiency. Future work will focus on dataset enrichment and enhancing the model’s interpretability to evaluate its clinical applicability. 
This paper demonstrates the promise of artificial-intelligence-driven diagnostic models in transforming lung and colon cancer detection and improving patient diagnosis. Full article
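The abstract describes an 80/10/10 train/validation/test partition of the five-class histopathology dataset. As a minimal illustrative sketch (the class labels follow the abstract; the sample counts and file representation are hypothetical, not from the paper), such a split can be produced like this:

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle and split a list of samples into train/validation/test
    subsets, mirroring the 80/10/10 split described in the abstract."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# The five classes named in the abstract; 100 samples per class
# is a made-up number for demonstration only.
classes = ["colon_aca", "colon_n", "lung_aca", "lung_n", "lung_scc"]
samples = [(c, i) for c in classes for i in range(100)]
train, val, test = split_dataset(samples)
```

Fixing the shuffle seed keeps the partition reproducible across runs, which matters when comparing against the baseline models listed in the abstract.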
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers: 2nd Edition)
22 pages, 518 KB  
Systematic Review
Governing Artificial Intelligence in Radiology: A Systematic Review of Ethical, Legal, and Regulatory Frameworks
by Faten M. Aldhafeeri
Diagnostics 2025, 15(18), 2300; https://doi.org/10.3390/diagnostics15182300 - 10 Sep 2025
Viewed by 1038
Abstract
Purpose: This systematic review explores the ethical, legal, and regulatory frameworks governing the deployment of artificial intelligence technologies in radiology. It aims to identify key governance challenges and highlight strategies that promote the safe, transparent, and accountable integration of artificial intelligence in clinical imaging. This review is intended for both medical practitioners and AI developers, offering clinicians a synthesis of ethical and legal considerations while providing developers with regulatory insights and guidance for future AI system design. Methods: A systematic review was conducted, examining thirty-eight peer-reviewed articles published between 2018 and 2025. Studies were identified through searches in PubMed, Scopus, and Embase using terms related to artificial intelligence, radiology, ethics, law, and regulation. The inclusion criteria focused on studies addressing governance implications, rather than technical design. A thematic content analysis was applied to identify common patterns and gaps across ethical, legal, and regulatory domains. Results: The findings reveal widespread radiology-specific concerns, including algorithmic bias in breast and chest imaging datasets, opacity in image-based AI systems such as pulmonary nodule detection models, and unresolved legal liability in cases where radiologists rely on FDA-cleared AI tools that fail to identify abnormalities. Regulatory frameworks vary significantly across regions with limited global harmonization, highlighting the need for adaptive oversight models and improved data governance. Conclusion: Responsible deployment of AI in radiology requires governance models that address bias, explainability, and medico-legal accountability while integrating ethical principles, legal safeguards, and adaptive oversight. 
This review provides tailored insights for medical practitioners, AI developers, policymakers, and researchers: clinicians gain guidance on ethical and legal responsibilities, developers on regulatory and design priorities, policymakers (especially in the Middle East) on regional framework gaps, and researchers on future directions. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

21 pages, 1623 KB  
Article
Time-Series-Based Anomaly Detection in Industrial Control Systems Using Generative Adversarial Networks
by Chungku Han and Gwangyong Gim
Processes 2025, 13(9), 2885; https://doi.org/10.3390/pr13092885 - 9 Sep 2025
Viewed by 721
Abstract
Recent advances in time-series anomaly detection have leveraged artificial intelligence (AI) to improve detection performance. In industrial control systems (ICSs), however, acquiring training data is challenging due to operational constraints and the difficulty of system shutdowns. To address this, many countries are developing ICS simulators and testbeds to generate training data. This study uses a publicly available ICS testbed dataset as a benchmark for the discriminator in a Semi-Supervised Generative Adversarial Network (SGAN). The goal is to generate large volumes of synthetic time-series data through adversarial training between generator and discriminator networks, thereby mitigating data scarcity in ICS anomaly detection. Comparative experiments were conducted using this synthetic data to evaluate its impact on existing detection models. Using the HAI 22.04 dataset from the National Security Research Institute, this study performed feature engineering and preprocessing to identify correlations and remove irregularities. Various models, including One-Class SVM, VAE, CNN-GRU-Autoencoder, and CNN-LSTM-Autoencoder, were trained and tested on the dataset. A synthetic dataset was then generated via SGAN and validated using PCA and t-SNE. The results show that applying SGAN-generated data to time-series anomaly detection yielded significant performance improvements in F1 score. Additional validation using the SWaT dataset from the National University of Singapore confirmed similar gains. These findings indicate that synthetic data generated by SGANs can effectively enhance semi-supervised learning for anomaly detection, classification, and prediction in data-constrained environments such as medical, industrial, transportation, and environmental systems. Full article
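The autoencoder-style detectors the abstract evaluates (e.g., CNN-LSTM-Autoencoder) typically consume fixed-length windows of the time series and flag a window as anomalous when its reconstruction error exceeds a threshold. The paper does not publish its preprocessing code, so the following is only a generic sketch of those two steps, with illustrative window sizes and error values:

```python
def sliding_windows(series, window, stride=1):
    """Segment a time series into fixed-length overlapping windows,
    the usual input format for autoencoder-based anomaly detectors."""
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, stride)]

def reconstruction_anomalies(errors, threshold):
    """Flag the indices of windows whose reconstruction error exceeds
    a threshold -- the standard autoencoder decision rule."""
    return [i for i, e in enumerate(errors) if e > threshold]

# Toy example: 10 time steps, window length 4, stride 2.
windows = sliding_windows(list(range(10)), window=4, stride=2)
# Hypothetical per-window reconstruction errors from a trained model.
flags = reconstruction_anomalies([0.1, 0.9, 0.2, 1.5], threshold=0.8)
```

In the study's setting, SGAN-generated synthetic windows would augment the training set before the autoencoder is fit, which is where the reported F1 gains come from.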
(This article belongs to the Special Issue Innovation and Optimization of Production Processes in Industry 4.0)

17 pages, 3935 KB  
Article
Markerless Force Estimation via SuperPoint-SIFT Fusion and Finite Element Analysis: A Sensorless Solution for Deformable Object Manipulation
by Qingqing Xu, Ruoyang Lai and Junqing Yin
Biomimetics 2025, 10(9), 600; https://doi.org/10.3390/biomimetics10090600 - 8 Sep 2025
Viewed by 446
Abstract
Contact-force perception is a critical component of safe robotic grasping. With the rapid advances in embodied intelligence technology, humanoid robots have enhanced their multimodal perception capabilities. Conventional force sensors face limitations, such as complex spatial arrangements, installation challenges at multiple nodes, and potential interference with robotic flexibility. Consequently, these conventional sensors are unsuitable for biomimetic robot requirements in object perception, natural interaction, and agile movement. Therefore, this study proposes a sensorless external force detection method that integrates SuperPoint-Scale Invariant Feature Transform (SIFT) feature extraction with finite element analysis to address force perception challenges. A visual analysis method based on the SuperPoint-SIFT feature fusion algorithm was implemented to reconstruct a three-dimensional displacement field of the target object. Subsequently, the displacement field was mapped to the contact force distribution using finite element modeling. Experimental results demonstrate a mean force estimation error of 7.60% (isotropic) and 8.15% (anisotropic), with RMSE < 8%, validated by flexible pressure sensors. To enhance the model’s reliability, a dual-channel video comparison framework was developed. By analyzing the consistency of the deformation patterns and mechanical responses between the actual compression and finite element simulation video keyframes, the proposed approach provides a novel solution for real-time force perception in robotic interactions. The proposed solution is suitable for applications such as precision assembly and medical robotics, where sensorless force feedback is crucial. Full article
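The finite element step the abstract describes maps a reconstructed displacement field to contact forces; in the linear-elastic case this is the relation f = K u with an assembled stiffness matrix K. The sketch below illustrates only that mapping and the mean relative error metric the abstract reports (7.60% / 8.15%); the 2-node stiffness matrix and displacement values are invented for demonstration, not taken from the paper:

```python
def forces_from_displacements(K, u):
    """Map a nodal displacement vector u to nodal forces f = K @ u,
    the linear-elastic relation underlying the FEM step. K is a
    hypothetical assembled stiffness matrix given as a list of rows."""
    return [sum(k_ij * u_j for k_ij, u_j in zip(row, u)) for row in K]

def mean_relative_error(estimated, reference):
    """Mean relative error in percent between estimated and reference
    forces, matching the error metric reported in the abstract."""
    return 100.0 * sum(abs(e - r) / abs(r)
                       for e, r in zip(estimated, reference)) / len(reference)

# Toy 2-node system with an assumed stiffness matrix (arbitrary units).
K = [[2.0, -1.0],
     [-1.0, 2.0]]
u = [0.01, 0.02]  # displacements recovered from the vision pipeline
f = forces_from_displacements(K, u)
```

In practice K comes from the finite element model of the grasped object, and u from the SuperPoint-SIFT displacement reconstruction; the error metric is then evaluated against the flexible pressure-sensor readings used for validation.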
(This article belongs to the Special Issue Bio-Inspired Intelligent Robot)
