Search Results (41)

Search Parameters:
Keywords = MET-CNN

17 pages, 571 KB  
Systematic Review
Artificial Intelligence in Predictive Healthcare: A Systematic Review
by Abeer Al-Nafjan, Amaal Aljuhani, Arwa Alshebel, Asma Alharbi and Atheer Alshehri
J. Clin. Med. 2025, 14(19), 6752; https://doi.org/10.3390/jcm14196752 - 24 Sep 2025
Viewed by 86
Abstract
Background/Objectives: Artificial intelligence (AI) and machine learning (ML) now significantly enhance predictive analytics in healthcare, enabling timely and accurate predictions that lead to proactive interventions, personalized treatment plans, and ultimately improved patient care. As healthcare systems increasingly adopt data-driven approaches, the integration of AI and data analysis has garnered substantial interest, as reflected in the growing number of publications highlighting innovative applications of AI in clinical settings. This review synthesizes recent evidence on application areas, commonly used models, metrics, and challenges. Methods: We conducted a systematic literature review using the Web of Science and Google Scholar databases, covering studies published from 2021 to 2025 that applied a diverse range of AI and ML techniques to disease prediction. Results: Twenty-two studies met the inclusion criteria. The most frequently used machine learning approaches were tree-based ensemble models (e.g., Random Forest, XGBoost, LightGBM) for structured clinical data, and deep learning architectures (e.g., CNN, LSTM) for imaging and time-series tasks. Evaluation most commonly relied on AUROC, F1-score, accuracy, and sensitivity. Key challenges remain regarding data privacy, integration with clinical workflows, model interpretability, and the need for high-quality, representative datasets. Conclusions: Future research should focus on developing interpretable models that clinicians can understand and trust, implementing robust privacy-preserving techniques to safeguard patient data, and establishing standardized evaluation frameworks to effectively assess model performance.
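As context for the metrics this review reports (AUROC, F1-score, accuracy, and sensitivity), here is a minimal sketch of how they are typically computed with scikit-learn; the labels and probabilities are invented placeholders, not data from any reviewed study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, accuracy_score, recall_score

# Hypothetical ground-truth labels and model outputs (illustration only).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_prob = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.55])  # predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)  # hard labels at a 0.5 threshold

print("AUROC:      ", roc_auc_score(y_true, y_prob))   # ranking quality of probabilities
print("F1-score:   ", f1_score(y_true, y_pred))
print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))     # recall of the positive class
```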

15 pages, 622 KB  
Review
Artificial Intelligence in the Diagnosis and Imaging-Based Assessment of Pelvic Organ Prolapse: A Scoping Review
by Marian Botoncea, Călin Molnar, Vlad Olimpiu Butiurca, Cosmin Lucian Nicolescu and Claudiu Molnar-Varlam
Medicina 2025, 61(8), 1497; https://doi.org/10.3390/medicina61081497 - 21 Aug 2025
Viewed by 639
Abstract
Background and Objectives: Pelvic organ prolapse (POP) is a complex condition affecting the pelvic floor, often requiring imaging for accurate diagnosis and treatment planning. Artificial intelligence (AI), particularly deep learning (DL), is emerging as a powerful tool in medical imaging. This scoping review aims to synthesize current evidence on the use of AI in the imaging-based diagnosis and anatomical evaluation of POP. Materials and Methods: Following the PRISMA-ScR guidelines, a comprehensive search was conducted in PubMed, Scopus, and Web of Science for studies published between January 2020 and April 2025. Studies were included if they applied AI methodologies, such as convolutional neural networks (CNNs), vision transformers (ViTs), or hybrid models, to diagnostic imaging modalities such as ultrasound and magnetic resonance imaging (MRI) in women with POP. Results: Eight studies met the inclusion criteria. In these studies, AI technologies were applied to 2D/3D ultrasound and static or stress MRI for segmentation, anatomical landmark localization, and prolapse classification. CNNs were the most commonly used models, often combined with transfer learning. Some studies used hybrid ViT-based models, demonstrating high diagnostic accuracy. However, all studies relied on internal datasets, with limited model interpretability and no external validation. Moreover, clinical deployment and outcome assessments remain underexplored. Conclusions: AI shows promise in enhancing POP diagnosis through improved image analysis, but current applications are largely exploratory. Future work should prioritize external validation, standardization, explainable AI, and real-world implementation to bridge the gap between experimental models and clinical utility.
(This article belongs to the Section Obstetrics and Gynecology)
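The abstract notes that CNNs combined with transfer learning were the most common approach in these studies. A minimal PyTorch sketch of that pattern, using a torchvision ResNet-18 backbone as an assumed stand-in (the reviewed studies' actual architectures and class counts vary):

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and swap in a new classification head.
# ResNet-18 and the 3-class grading head are illustrative assumptions.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, 3)  # new trainable head

# Only the new head's parameters are optimized during fine-tuning.
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```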

11 pages, 594 KB  
Review
Applications of Deep Learning Models in Laparoscopy for Gynecology
by Fani Gkrozou, Vasileios Bais, Charikleia Skentou, Dimitrios Rafail Kalaitzopoulos, Georgios Grigoriadis, Anastasia Vatopoulou, Minas Paschopoulos and Angelos Daniilidis
Medicina 2025, 61(8), 1460; https://doi.org/10.3390/medicina61081460 - 14 Aug 2025
Viewed by 610
Abstract
Background and Objectives: The use of Artificial Intelligence (AI) in the medical field is rapidly expanding. This review aims to explore and summarize all published research on the development and validation of deep learning (DL) models in gynecologic laparoscopic surgeries. Materials and Methods: MEDLINE, IEEE Xplore, and Google Scholar were searched for eligible studies published between January 2000 and May 2025. Selected studies developed a DL model using datasets derived from gynecologic laparoscopic procedures. The exclusion criteria included non-gynecologic datasets, non-laparoscopic datasets, non-Convolutional Neural Network (CNN) models, and non-English publications. Results: A total of 16 out of 621 studies met our inclusion criteria. The findings were categorized into four main application areas: (i) anatomy classification (n = 6), (ii) anatomy segmentation (n = 5), (iii) surgical instrument classification and segmentation (n = 5), and (iv) surgical action recognition (n = 5). Conclusions: This review emphasizes the growing role of AI in gynecologic laparoscopy, improving anatomy recognition, instrument tracking, and surgical action analysis. As datasets grow and computational capabilities advance, these technologies are poised to improve intraoperative guidance and standardize surgical training.
(This article belongs to the Section Obstetrics and Gynecology)

29 pages, 1397 KB  
Review
Artificial Intelligence Approaches for EEG Signal Acquisition and Processing in Lower-Limb Motor Imagery: A Systematic Review
by Sonia Rocío Moreno-Castelblanco, Manuel Andrés Vélez-Guerrero and Mauro Callejas-Cuervo
Sensors 2025, 25(16), 5030; https://doi.org/10.3390/s25165030 - 13 Aug 2025
Viewed by 1078
Abstract
Background: Motor imagery (MI) is defined as the cognitive ability to simulate motor movements while suppressing muscular activity. The electroencephalographic (EEG) signals associated with lower limb MI have become essential in brain–computer interface (BCI) research aimed at assisting individuals with motor disabilities. Objective: This systematic review aims to evaluate methodologies for acquiring and processing EEG signals within BCI applications to accurately identify lower limb MI. Methods: A systematic search in Scopus and IEEE Xplore identified 287 records on EEG-based lower-limb MI using artificial intelligence. Following PRISMA guidelines (non-registered), 35 studies met the inclusion criteria after screening and full-text review. Results: Among the selected studies, 85% applied machine or deep learning classifiers such as SVM, CNN, and LSTM, while 65% incorporated multimodal fusion strategies, and 50% implemented decomposition algorithms. These methods improved classification accuracy, signal interpretability, and real-time application potential. Nonetheless, methodological variability and a lack of standardization persist across studies, posing barriers to clinical implementation. Conclusions: AI-based EEG analysis effectively decodes lower-limb motor imagery. Future efforts should focus on harmonizing methods, standardizing datasets, and developing portable systems to improve neurorehabilitation outcomes. This review provides a foundation for advancing MI-based BCIs.
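To illustrate the kind of pipeline the included studies describe (band-pass filtering followed by a classifier such as an SVM), here is a hedged sketch on synthetic data; the 8–30 Hz band, channel count, and log band-power features are common MI-BCI choices assumed for illustration, not a protocol from any reviewed study.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs = 250  # assumed sampling rate (Hz)
X_raw = rng.standard_normal((40, 8, fs * 2))  # 40 synthetic trials, 8 channels, 2 s
y = rng.integers(0, 2, 40)                    # two MI classes (illustrative labels)

# Band-pass 8-30 Hz, the mu/beta range commonly used in motor imagery work.
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
X_filt = filtfilt(b, a, X_raw, axis=-1)

# Log band-power per channel as a simple per-trial feature vector.
features = np.log(np.mean(X_filt ** 2, axis=-1))

clf = SVC(kernel="rbf").fit(features, y)
print("training accuracy:", clf.score(features, y))
```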

21 pages, 510 KB  
Review
IoT and Machine Learning for Smart Bird Monitoring and Repellence: Techniques, Challenges, and Opportunities
by Samson O. Ooko, Emmanuel Ndashimye, Evariste Twahirwa and Moise Busogi
IoT 2025, 6(3), 46; https://doi.org/10.3390/iot6030046 - 7 Aug 2025
Viewed by 1374
Abstract
The activities of birds present increasing challenges in agriculture, aviation, and environmental conservation. This has led to economic losses, safety risks, and ecological imbalances. Attempts have been made to address the problem, with traditional deterrent methods proving to be labour-intensive, environmentally unfriendly, and ineffective over time. Advances in artificial intelligence (AI) and the Internet of Things (IoT) present opportunities for enabling automated real-time bird detection and repellence. This study reviews recent developments (2020–2025) in AI-driven bird detection and repellence systems, emphasising the integration of image, audio, and multi-sensor data in IoT and edge-based environments. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework was used, with 267 studies initially identified and screened from key scientific databases. A total of 154 studies met the inclusion criteria and were analysed. The findings show the increasing use of convolutional neural networks (CNNs), YOLO variants, and MobileNet in visual detection, and the growing use of lightweight audio-based models such as BirdNET, MFCC-based CNNs, and TinyML frameworks for microcontroller deployment. Multi-sensor fusion is proposed to improve detection accuracy in diverse environments. Repellence strategies include sound-based deterrents, visual deterrents, predator-mimicking visuals, and adaptive AI-integrated systems. Deployment success depends on edge compatibility, power efficiency, and dataset quality. The limitations of current studies include species-specific detection challenges, data scarcity, environmental changes, and energy constraints. Future research should focus on tiny and lightweight AI models, standardised multi-modal datasets, and intelligent, behaviour-aware deterrence mechanisms suitable for precision agriculture and ecological monitoring.
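As a sketch of the MFCC audio front end behind the "MFCC-based CNNs" the review highlights, here is a minimal librosa example on a synthetic tone; the 13-coefficient choice is a common default, and the signal is a stand-in, not a recording from any reviewed system.

```python
import numpy as np
import librosa

# Synthetic 1 s "call" at 22.05 kHz stands in for a real bird recording.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 3000 * t).astype(np.float32)

# 13 MFCCs per frame; the resulting (13, n_frames) matrix is the kind of
# input a lightweight CNN or TinyML classifier would consume.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)
```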

27 pages, 1326 KB  
Systematic Review
Application of Artificial Intelligence in Pancreatic Cyst Management: A Systematic Review
by Donghyun Lee, Fadel Jesry, John J. Maliekkal, Lewis Goulder, Benjamin Huntly, Andrew M. Smith and Yazan S. Khaled
Cancers 2025, 17(15), 2558; https://doi.org/10.3390/cancers17152558 - 2 Aug 2025
Viewed by 960
Abstract
Background: Pancreatic cystic lesions (PCLs), including intraductal papillary mucinous neoplasms (IPMNs) and mucinous cystic neoplasms (MCNs), pose a diagnostic challenge due to their variable malignant potential. Current guidelines, such as the Fukuoka and American Gastroenterological Association (AGA) guidelines, have moderate predictive accuracy and may lead to overtreatment or missed malignancies. Artificial intelligence (AI), incorporating machine learning (ML) and deep learning (DL), offers the potential to improve risk stratification, diagnosis, and management of PCLs by integrating clinical, radiological, and molecular data. This is the first systematic review to evaluate the application, performance, and clinical utility of AI models in the diagnosis, classification, prognosis, and management of pancreatic cysts. Methods: A systematic review was conducted in accordance with PRISMA guidelines and registered on PROSPERO (CRD420251008593). Databases searched included PubMed, EMBASE, Scopus, and Cochrane Library up to March 2025. The inclusion criteria encompassed original studies employing AI, ML, or DL in human subjects with pancreatic cysts, evaluating diagnostic, classification, or prognostic outcomes. Data were extracted on the study design, imaging modality, model type, sample size, performance metrics (accuracy, sensitivity, specificity, and area under the curve (AUC)), and validation methods. Study quality and risk of bias were assessed using PROBAST, and adherence to the TRIPOD reporting guidelines was evaluated. Results: From 847 records, 31 studies met the inclusion criteria. Most were retrospective observational (n = 27, 87%) and focused on preoperative diagnostic applications (n = 30, 97%), with only one addressing prognosis. Imaging modalities included computed tomography (CT) (48%), endoscopic ultrasound (EUS) (26%), and magnetic resonance imaging (MRI) (9.7%). Neural networks, particularly convolutional neural networks (CNNs), were the most common AI models (n = 16), followed by logistic regression (n = 4) and support vector machines (n = 3). The median reported AUC across studies was 0.912, with 55% of models achieving AUC ≥ 0.80. The models outperformed clinicians or existing guidelines in 11 studies. IPMN stratification and subtype classification were common focuses, with CNN-based EUS models achieving accuracies of up to 99.6%. Only 10 studies (32%) performed external validation. The risk of bias was high in 93.5% of studies, and TRIPOD adherence averaged 48%. Conclusions: AI demonstrates strong potential in improving the diagnosis and risk stratification of pancreatic cysts, with several models outperforming current clinical guidelines and human readers. However, widespread clinical adoption is hindered by a high risk of bias, lack of external validation, and limited interpretability of complex models. Future work should prioritise multicentre prospective studies, standardised model reporting, and development of interpretable, externally validated tools to support clinical integration.
(This article belongs to the Section Methods and Technologies Development)

28 pages, 11832 KB  
Article
On the Minimum Dataset Requirements for Fine-Tuning an Object Detector for Arable Crop Plant Counting: A Case Study on Maize Seedlings
by Samuele Bumbaca and Enrico Borgogno-Mondino
Remote Sens. 2025, 17(13), 2190; https://doi.org/10.3390/rs17132190 - 25 Jun 2025
Viewed by 924
Abstract
Object detection is essential for precision agriculture applications like automated plant counting, but the minimum dataset requirements for effective model deployment remain poorly understood for arable crop seedling detection on orthomosaics. This study investigated how much annotated data is required to achieve a standard counting accuracy (R² = 0.85) for maize seedlings across different object detection approaches. We systematically evaluated traditional deep learning models requiring many training examples (YOLOv5, YOLOv8, YOLO11, RT-DETR), newer approaches requiring few examples (CD-ViTO), and methods requiring zero labeled examples (OWLv2) using drone-captured orthomosaic RGB imagery. We also implemented a handcrafted computer graphics algorithm as a baseline. Models were tested with varying training sources (in-domain vs. out-of-distribution data), training dataset sizes (10–150 images), and annotation quality levels (10–100%). Our results demonstrate that no model trained on out-of-distribution data achieved acceptable performance, regardless of dataset size. In contrast, models trained on in-domain data reached the benchmark with as few as 60–130 annotated images, depending on architecture. Transformer-based models (RT-DETR) required significantly fewer samples (60) than CNN-based models (110–130), though they showed different tolerances to annotation quality reduction. Models maintained acceptable performance with only 65–90% of original annotation quality. Despite recent advances, neither few-shot nor zero-shot approaches met minimum performance requirements for precision agriculture deployment. These findings provide practical guidance for developing maize seedling detection systems, demonstrating that successful deployment requires in-domain training data, with minimum dataset requirements varying by model architecture.
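The R² = 0.85 counting benchmark compares per-plot predicted counts against manual counts; a minimal sketch of that check with scikit-learn (the count values here are invented for illustration):

```python
from sklearn.metrics import r2_score

# Hypothetical per-plot maize seedling counts: manual vs. detector output.
true_counts = [52, 61, 48, 70, 55, 64]
pred_counts = [50, 63, 45, 72, 57, 60]

r2 = r2_score(true_counts, pred_counts)
print(f"R2 = {r2:.3f}  (deployment benchmark in the study: R2 >= 0.85)")
```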

30 pages, 1869 KB  
Review
Clinical Applications of Artificial Intelligence in Periodontology: A Scoping Review
by Georgios S. Chatzopoulos, Vasiliki P. Koidou, Lazaros Tsalikis and Eleftherios G. Kaklamanos
Medicina 2025, 61(6), 1066; https://doi.org/10.3390/medicina61061066 - 10 Jun 2025
Viewed by 3252
Abstract
Background and Objectives: This scoping review aimed to identify and synthesize current evidence on the clinical applications of artificial intelligence (AI) in periodontology, focusing on its potential to improve diagnosis, treatment planning, and patient care. Materials and Methods: A comprehensive literature search was conducted using electronic databases including PubMed-MEDLINE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Scopus, and Web of Science™ Core Collection. Studies were included if they met predefined PICO criteria relating to AI applications in periodontology. Due to the heterogeneity of study designs, imaging modalities, and outcome measures, a scoping review approach was employed rather than a systematic review. Results: A total of 6394 articles were initially identified and screened. The review revealed a significant interest in utilizing AI, particularly convolutional neural networks (CNNs), for various periodontal applications. Studies demonstrated the potential of AI models to accurately detect and classify alveolar bone loss, intrabony defects, furcation involvements, gingivitis, dental biofilm, and calculus from dental radiographs and intraoral images. AI systems often achieved diagnostic accuracy, sensitivity, and specificity comparable to or exceeding that of dental professionals. Various CNN architectures and methodologies, including ensemble models and task-specific designs, showed promise in enhancing periodontal disease assessment and management. Conclusions: AI, especially deep learning techniques, holds considerable potential to revolutionize periodontology by improving the accuracy and efficiency of diagnostic and treatment planning processes. While challenges remain, including the need for further research with larger and more diverse datasets, the reviewed evidence supports the integration of AI technologies into dental practice to aid clinicians and ultimately improve patient outcomes.
(This article belongs to the Section Dentistry and Oral Health)

17 pages, 4366 KB  
Article
Quantitative Analysis of 3-Monochloropropane-1,2-diol in Fried Oil Using Convolutional Neural Networks Optimizing with a Stepwise Hybrid Preprocessing Strategy Based on Fourier Transform Infrared Spectroscopy
by Xi Wang, Siyi Wang, Shibing Zhang, Jiping Yin and Qi Zhao
Foods 2025, 14(10), 1670; https://doi.org/10.3390/foods14101670 - 9 May 2025
Viewed by 584
Abstract
As a ‘probable human carcinogen’ (Group 2B) compound classified by the International Agency for Research on Cancer, 3-MCPD is formed mainly during the thermal processing of food. Existing analytical methods for quantifying 3-MCPD require tedious pretreatment, so a nondestructive sensing technique offering low noise interference and high quantitative precision is needed. Accordingly, Fourier transform infrared spectroscopy combined with a convolutional neural network (CNN) model was employed in this investigation for the nondestructive quantitative measurement of 3-MCPD in oil samples. Before building the CNN model, NL-SGS-D2 preprocessing was used to enhance the model’s feature extraction capability by eliminating background noise. Under the optimal hyperparameter settings, the calibration model achieved a determination coefficient (R²C) of 0.9982 and a root mean square error (RMSEC) of 0.0181 during validation, along with a 16% performance enhancement enabled by the stepwise hybrid preprocessing strategy. The LOD (0.36 μg/g) and LOQ (1.10 μg/g) of the proposed method met the requirements for 3-MCPD detection in oil samples set by the EU Commission Regulation. The CNN model with hybrid preprocessing was superior to the traditional model and contributes to quality monitoring in the edible oil processing industry.
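The abstract's stepwise hybrid preprocessing (NL-SGS-D2) is not spelled out; assuming it denotes normalization followed by Savitzky-Golay smoothing with a second derivative, here is a hedged SciPy sketch on a mock spectrum (the window length and polynomial order are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
spectrum = np.sin(np.linspace(0, 6, 600)) + 0.05 * rng.standard_normal(600)  # mock FTIR trace

# Step 1: normalization (min-max here; the paper's exact variant is not stated).
norm = (spectrum - spectrum.min()) / (spectrum.max() - spectrum.min())

# Steps 2-3: Savitzky-Golay smoothing with a second derivative, a standard way
# to suppress baseline drift before feeding spectra to a regression model.
preprocessed = savgol_filter(norm, window_length=15, polyorder=3, deriv=2)
print(preprocessed.shape)
```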
(This article belongs to the Special Issue Application of Rapid Detection Technology of Lipids in Food)

38 pages, 1484 KB  
Review
Enhancing Radiologist Productivity with Artificial Intelligence in Magnetic Resonance Imaging (MRI): A Narrative Review
by Arun Nair, Wilson Ong, Aric Lee, Naomi Wenxin Leow, Andrew Makmur, Yong Han Ting, You Jun Lee, Shao Jin Ong, Jonathan Jiong Hao Tan, Naresh Kumar and James Thomas Patrick Decourcy Hallinan
Diagnostics 2025, 15(9), 1146; https://doi.org/10.3390/diagnostics15091146 - 30 Apr 2025
Cited by 2 | Viewed by 5275
Abstract
Artificial intelligence (AI) shows promise in streamlining MRI workflows by reducing radiologists’ workload and improving diagnostic accuracy. Despite MRI’s extensive clinical use, systematic evaluation of AI-driven productivity gains in MRI remains limited. This review addresses that gap by synthesizing evidence on how AI can shorten scanning and reading times, optimize worklist triage, and automate segmentation. On 15 November 2024, we searched PubMed, EMBASE, MEDLINE, Web of Science, Google Scholar, and Cochrane Library for English-language studies published between 2000 and 15 November 2024, focusing on AI applications in MRI. Additional searches of grey literature were conducted. After screening for relevance and full-text review, 66 studies met the inclusion criteria. Extracted data included study design, AI techniques, and productivity-related outcomes such as time savings and diagnostic accuracy. The included studies were categorized into five themes: reducing scan times, automating segmentation, optimizing workflow, decreasing reading times, and general time-saving or workload reduction. Convolutional neural networks (CNNs), especially architectures like ResNet and U-Net, were commonly used for tasks ranging from segmentation to automated reporting. A few studies also explored machine learning-based automation software and, more recently, large language models. Although most demonstrated gains in efficiency and accuracy, limited external validation and dataset heterogeneity may hinder broader adoption. AI applications in MRI offer potential to enhance radiologist productivity, mainly through accelerated scans, automated segmentation, and streamlined workflows. Further research, including prospective validation and standardized metrics, is needed to enable safe, efficient, and equitable deployment of AI tools in clinical MRI practice.
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
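U-Net-style encoder-decoders recur across the segmentation studies this review covers. A deliberately tiny PyTorch sketch of the pattern (one down/up level instead of U-Net's four, with invented channel sizes) to show the skip-connection idea, not any reviewed study's actual network:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One-level encoder-decoder with a skip connection (U-Net in miniature)."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, out_ch, 1))  # per-pixel mask logits

    def forward(self, x):
        e = self.enc(x)                       # high-resolution features
        m = self.mid(self.down(e))            # coarser, deeper features
        u = self.up(m)                        # upsample back to input resolution
        return self.dec(torch.cat([u, e], dim=1))  # skip connection

mask_logits = TinyUNet()(torch.randn(1, 1, 64, 64))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64])
```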

23 pages, 10404 KB  
Article
Steel Roll Eye Pose Detection Based on Binocular Vision and Mask R-CNN
by Xuwu Su, Jie Wang, Yifan Wang and Daode Zhang
Sensors 2025, 25(6), 1805; https://doi.org/10.3390/s25061805 - 14 Mar 2025
Viewed by 593
Abstract
To achieve automation at the inner corner guard installation station in a steel coil packaging production line and enable automatic docking and installation of the inner corner guard after eye position detection, this paper proposes a binocular vision method based on deep learning for eye position detection of steel coil rolls. The core of the method involves using the Mask R-CNN algorithm within a deep-learning framework to identify the target region and obtain a mask image of the steel coil end face. Subsequently, the binarized image of the steel coil end face is processed using the RGB vector space image segmentation method. The target feature pixel points are then extracted using Sobel edge detection, and ellipse parameters are fitted by the least-squares method to obtain the deflection angle and the horizontal and vertical coordinates of the center point in the image coordinate system. In the ellipse parameter extraction experiment, the maximum deviations of the center point in the u and v directions of the pixel coordinate system were 0.49 and 0.47 pixels, respectively, and the maximum error in the deflection angle was 0.45°. In the steel coil roll eye position detection experiments, the maximum deviations for the pitch angle, deflection angle, and centroid coordinates were 2.17°, 2.24°, 3.53 mm, 4.05 mm, and 4.67 mm, respectively, all of which met the actual installation requirements. The proposed method demonstrates strong operability in practical applications, and the steel coil end face position solving approach significantly enhances work efficiency, reduces labor costs, and ensures adequate detection accuracy.
(This article belongs to the Section Industrial Sensors)
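The geometric step the abstract describes (Sobel edge extraction followed by least-squares ellipse fitting for the center and deflection angle) maps closely onto standard OpenCV calls. A hedged sketch on a synthetic binarized end-face image, not the authors' implementation:

```python
import cv2
import numpy as np

# Synthetic binarized end-face image with an elliptical "roll eye".
img = np.zeros((480, 640), dtype=np.uint8)
cv2.ellipse(img, (320, 240), (120, 80), 15, 0, 360, 255, -1)

# Sobel gradients locate the edge pixels of the eye region.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
edges = (cv2.magnitude(gx, gy) > 0).astype(np.uint8)

# Least-squares ellipse fit over the largest edge contour yields the
# center coordinates and the in-image deflection angle.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
(cx, cy), (major, minor), angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))
print(f"center=({cx:.1f}, {cy:.1f}), deflection angle={angle:.2f} deg")
```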

15 pages, 1758 KB  
Article
The Extent to Which Artificial Intelligence Can Help Fulfill Metastatic Breast Cancer Patient Healthcare Needs: A Mixed-Methods Study
by Yvonne W. Leung, Jeremiah So, Avneet Sidhu, Veenaajaa Asokan, Mathew Gancarz, Vishrut Bharatkumar Gajjar, Ankita Patel, Janice M. Li, Denis Kwok, Michelle B. Nadler, Danielle Cuthbert, Philippe L. Benard, Vikaash Kumar, Terry Cheng, Janet Papadakos, Tina Papadakos, Tran Truong, Mike Lovas and Jiahui Wong
Curr. Oncol. 2025, 32(3), 145; https://doi.org/10.3390/curroncol32030145 - 2 Mar 2025
Cited by 3 | Viewed by 2502
Abstract
The Artificial Intelligence Patient Librarian (AIPL) was designed to meet the psychosocial and supportive care needs of Metastatic Breast Cancer (MBC) patients with HR+/HER2− subtypes. AIPL provides conversational patient education, answers user questions, and offers tailored online resource recommendations. This study, conducted in three phases, assessed AIPL’s impact on patients’ ability to manage their advanced disease. In Phase 1, educational content was adapted for chatbot delivery, and over 100 credible online resources were annotated using a Convolutional Neural Network (CNN) to drive recommendations. Phase 2 involved 42 participants who completed pre- and post-surveys after using AIPL for two weeks. The surveys measured patient activation using the Patient Activation Measure (PAM) tool and evaluated user experience with the System Usability Scale (SUS). Phase 3 included focus groups to explore user experiences in depth. Of the 42 participants, 36 completed the study, with 10 participating in focus groups. Most participants were aged 40–64. PAM scores showed no significant differences between pre-survey (mean = 59.33, SD = 5.19) and post-survey (mean = 59.22, SD = 6.16), while SUS scores indicated good usability. Thematic analysis revealed four key themes: AIPL offers basic wellness and health guidance, provides limited support for managing relationships, offers limited condition-specific medical information, and is unable to offer hope to patients. Despite showing no impact on the PAM, possibly due to high baseline activation, AIPL demonstrated good usability and met basic information needs, particularly for newly diagnosed MBC patients. Future iterations will incorporate a large language model (LLM) to provide more comprehensive and personalized assistance.
(This article belongs to the Section Breast Cancer)
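The pre/post PAM comparison in Phase 2 is a standard paired analysis; a minimal SciPy sketch with scores invented to mirror the reported means (pre ≈ 59.33, post ≈ 59.22), not the study's actual data:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
# Hypothetical paired PAM scores for 36 completers, centered near the
# reported means; illustration only, not the study's dataset.
pre = rng.normal(59.33, 5.19, 36)
post = pre + rng.normal(-0.11, 2.0, 36)

t, p = ttest_rel(pre, post)
print(f"paired t = {t:.2f}, p = {p:.3f}")  # large p => no significant change
```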

22 pages, 6282 KB  
Article
CropsDisNet: An AI-Based Platform for Disease Detection and Advancing On-Farm Privacy Solutions
by Mohammad Badhruddouza Khan, Salwa Tamkin, Jinat Ara, Mobashwer Alam and Hanif Bhuiyan
Data 2025, 10(2), 25; https://doi.org/10.3390/data10020025 - 18 Feb 2025
Cited by 1 | Viewed by 2444
Abstract
Crop failure is defined as crop production significantly lower than anticipated, resulting from plants that are harmed, diseased, destroyed, or affected by climatic circumstances. With rising global concern over food security, early detection of crop diseases has proven pivotal for the agriculture industry, and the accompanying need for on-farm data protection can be met with a privacy-preserving deep learning model. However, deep learning models are largely complex black boxes, making groundwork on model interpretability a prerequisite. Considering this, the aim of this study was to establish a robust custom deep learning model named CropsDisNet, evaluated on a large-scale dataset named the “New Bangladeshi Crop Disease Dataset (corn, potato and wheat)”, which contains a total of 8946 images. Integrating a differential privacy algorithm into our CropsDisNet model can deliver the benefits of automated crop disease classification without compromising on-farm data privacy, by reducing training data leakage. To classify corn, potato, and wheat leaf diseases, we used three representative CNN image classification models (VGG16, Inception ResNet V2, Inception V3) along with our custom model; classification accuracy across the three crops varied from 92.09% to 98.29%. In addition, demonstrating the model’s interpretability gave insight into its decision making and classification results, which can help farmers understand predictions and take appropriate precautions in the event of early widespread harvest failure and food crises.
(This article belongs to the Topic Decision-Making and Data Mining for Sustainable Computing)
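The abstract mentions integrating a differential privacy algorithm into training. One common way to do that for a CNN is DP-SGD; here is a hedged sketch with the Opacus library, where the model, data, and privacy parameters are placeholders, and Opacus itself is an assumption since the abstract does not name the DP implementation used.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Placeholder classifier and data; CropsDisNet's real architecture is not shown here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 3))
data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 3, (64,)))
loader = DataLoader(data, batch_size=16)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

# DP-SGD: per-sample gradient clipping plus calibrated noise bounds how much
# any single training image (i.e., one farm's data) can influence the model.
model, optimizer, loader = PrivacyEngine().make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.0, max_grad_norm=1.0,
)
```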

33 pages, 1112 KB  
Review
A Comprehensive Review of Vision-Based Sensor Systems for Human Gait Analysis
by Xiaofeng Han, Diego Guffanti and Alberto Brunete
Sensors 2025, 25(2), 498; https://doi.org/10.3390/s25020498 - 16 Jan 2025
Cited by 7 | Viewed by 5532
Abstract
Analysis of the human gait represents a fundamental area of investigation within the broader domains of biomechanics, clinical research, and numerous other interdisciplinary fields. The progression of visual sensor technology and machine learning algorithms has enabled substantial developments in the creation of human gait analysis systems. This paper presents a comprehensive review of the advancements and recent findings in the field of vision-based human gait analysis systems over the past five years, with a special emphasis on the role of vision sensors, machine learning algorithms, and technological innovations. The relevant papers were analyzed using the PRISMA method, and 72 articles that met the criteria for this research project were identified. The analysis details the most commonly used visual sensor systems, machine learning algorithms, human gait analysis parameters, optimal camera placements, and gait parameter extraction methods. The findings of this research indicate that non-invasive depth cameras are gaining increasing popularity within this field. Furthermore, deep learning algorithms, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, are being employed with increasing frequency. This review seeks to establish the foundations for future innovations that will facilitate the development of more effective, versatile, and user-friendly gait analysis tools, with the potential to significantly enhance human mobility, health, and overall quality of life. This work was supported by [GOBIERNO DE ESPANA/PID2023-150967OB-I00].
(This article belongs to the Special Issue Advanced Sensors in Biomechanics and Rehabilitation)
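For the LSTM models the review flags for gait sequences, a minimal PyTorch sketch: a sequence of per-frame pose features classified into gait categories. The feature dimension, sequence length, and class count are invented for illustration; this is a pattern sketch, not any surveyed system.

```python
import torch
import torch.nn as nn

class GaitLSTM(nn.Module):
    """Classify a sequence of per-frame gait features (e.g., joint angles)."""
    def __init__(self, n_features=24, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, frames, features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])       # classify from the final hidden state

logits = GaitLSTM()(torch.randn(4, 100, 24))  # 4 clips, 100 frames each
print(logits.shape)  # torch.Size([4, 2])
```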

22 pages, 863 KB  
Systematic Review
The Accuracy of Algorithms Used by Artificial Intelligence in Cephalometric Points Detection: A Systematic Review
by Júlia Ribas-Sabartés, Meritxell Sánchez-Molins and Nuno Gustavo d’Oliveira
Bioengineering 2024, 11(12), 1286; https://doi.org/10.3390/bioengineering11121286 - 18 Dec 2024
Cited by 2 | Viewed by 2546
Abstract
The use of artificial intelligence in orthodontics is emerging as a tool for localizing cephalometric points in two-dimensional X-rays. AI systems are being evaluated for their accuracy and efficiency compared to conventional methods performed by professionals. The main objective of this study is to identify the artificial intelligence algorithms that yield the best results for cephalometric landmark localization, along with their learning systems. A literature search was conducted across PubMed-MEDLINE, Cochrane, Scopus, IEEE Xplore, and Web of Science. Observational and experimental studies from 2013 to 2023 assessing the detection of at least 13 cephalometric landmarks in two-dimensional radiographs were included. Studies requiring advanced computer engineering knowledge or involving patients with anomalies, syndromes, or orthodontic appliances were excluded. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) and Newcastle–Ottawa Scale (NOS) tools. Of 385 references, 13 studies met the inclusion criteria (1 diagnostic accuracy study and 12 retrospective cohorts). Six were high-risk, and seven were low-risk. Convolutional neural network (CNN)-based AI algorithms showed point localization accuracy ranging from 64.3% to 97.3%, with mean errors of 1.04 ± 0.89 mm to 3.40 ± 1.57 mm, relative to the clinically accepted 2 mm threshold. YOLOv3 demonstrated improvements over its earlier version. CNNs have proven to be the most effective AI system for detecting cephalometric points in radiographic images. Although CNN-based algorithms generate results very quickly and reproducibly, they still do not achieve the accuracy of orthodontists.
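The accuracy figures quoted here are typically mean radial errors between predicted and ground-truth landmark positions, judged against the 2 mm clinical threshold; a minimal NumPy sketch with invented coordinates:

```python
import numpy as np

# Hypothetical predicted vs. ground-truth positions (mm) for 13 landmarks.
rng = np.random.default_rng(3)
truth = rng.uniform(0, 100, (13, 2))
pred = truth + rng.normal(0, 1.2, (13, 2))

radial_err = np.linalg.norm(pred - truth, axis=1)   # Euclidean error per landmark
print(f"mean radial error = {radial_err.mean():.2f} mm")
print(f"within 2 mm: {(radial_err <= 2).mean():.0%}")  # clinical acceptance rate
```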