Search Results (330)

Search Parameters:
Keywords = computer-aided diagnosis (CAD)

14 pages, 1542 KB  
Article
Comparative Analysis of Diagnostic Performance Between Elastography and AI-Based S-Detect for Thyroid Nodule Detection
by Jee-Yeun Park and Sung-Hee Yang
Diagnostics 2025, 15(17), 2191; https://doi.org/10.3390/diagnostics15172191 - 29 Aug 2025
Viewed by 151
Abstract
Background/Objectives: Elastography is a non-invasive imaging technique that assesses tissue stiffness and elasticity. This study aimed to evaluate the diagnostic performance and clinical utility of elastography and S-Detect in distinguishing benign from malignant thyroid nodules. S-Detect (RS85) is a deep learning-based computer-aided diagnosis (DL-CAD) software that analyzes grayscale 2D ultrasound images to evaluate the morphological characteristics of thyroid nodules, providing a visual guide to the likelihood of malignancy. Methods: This retrospective study included 159 patients (61 male and 98 female) aged 30–83 years (56.14 ± 11.35) who underwent thyroid ultrasonography between January 2023 and June 2024. All patients underwent elastography, S-Detect analysis, and fine needle aspiration cytology (FNAC). Malignancy status was determined from the FNAC findings, and the diagnostic performance of the elasticity contrast index (ECI), S-Detect, and the radiologist's interpretation was assessed. Based on the FNAC results, 101 patients (63.5%) had benign nodules and 58 patients (36.5%) had malignant nodules. Results: Radiologist interpretation demonstrated the highest diagnostic accuracy (area under the curve 89%), with a sensitivity of 98.28%, specificity of 79.21%, positive predictive value (PPV) of 73.1%, and negative predictive value (NPV) of 98.8%. The elasticity contrast index showed an accuracy of 85%, sensitivity of 87.93%, specificity of 81.19%, PPV of 72.9%, and NPV of 92.1%. S-Detect yielded the lowest accuracy at 78%, with a sensitivity of 87.93%, specificity of 68.32%, PPV of 61.4%, and NPV of 90.8%. Conclusions: These findings offer valuable insights into the comparative diagnostic utility of elastography and AI-based S-Detect for thyroid nodules in clinical practice. Although the single-center design and sample size may limit the generalizability of the results, the controlled environment ensured consistency and minimized confounding variables.
(This article belongs to the Special Issue The Role of AI in Ultrasound)
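
A minimal sketch (not the authors' code) of how the reported diagnostic metrics — sensitivity, specificity, PPV, NPV, and AUC — can be computed from FNAC-confirmed labels and a binary malignant/benign call (for example, an ECI threshold or an S-Detect reading). The arrays below are hypothetical and scikit-learn is assumed.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 0, 0, 1, 0, 1])   # 1 = malignant on FNAC, 0 = benign
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = called malignant by the reader / CAD

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                 # positive predictive value
npv = tn / (tn + fn)                 # negative predictive value
auc = roc_auc_score(y_true, y_pred)  # with binary calls this equals (sens + spec) / 2

print(f"Sens {sensitivity:.2%}  Spec {specificity:.2%}  PPV {ppv:.2%}  NPV {npv:.2%}  AUC {auc:.2f}")
```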

33 pages, 8494 KB  
Article
Enhanced Multi-Class Brain Tumor Classification in MRI Using Pre-Trained CNNs and Transformer Architectures
by Marco Antonio Gómez-Guzmán, Laura Jiménez-Beristain, Enrique Efren García-Guerrero, Oscar Adrian Aguirre-Castro, José Jaime Esqueda-Elizondo, Edgar Rene Ramos-Acosta, Gilberto Manuel Galindo-Aldana, Cynthia Torres-Gonzalez and Everardo Inzunza-Gonzalez
Technologies 2025, 13(9), 379; https://doi.org/10.3390/technologies13090379 - 22 Aug 2025
Viewed by 472
Abstract
Early and accurate identification of brain tumors is essential for determining effective treatment strategies and improving patient outcomes. Artificial intelligence (AI) and deep learning (DL) techniques have shown promise in automating diagnostic tasks based on magnetic resonance imaging (MRI). This study evaluates the performance of four pre-trained deep convolutional neural network (CNN) architectures for the automatic multi-class classification of brain tumors into four categories: Glioma, Meningioma, Pituitary, and No Tumor. The proposed approach utilizes the publicly accessible Brain Tumor MRI Msoud dataset, consisting of 7023 images, with 5712 provided for training and 1311 for testing. To assess the impact of data availability, subsets containing 25%, 50%, 75%, and 100% of the training data were used. A stratified five-fold cross-validation technique was applied. The CNN architectures evaluated include DeiT3_base_patch16_224, Xception41, Inception_v4, and Swin_Tiny_Patch4_Window7_224, all fine-tuned using transfer learning. The training pipeline incorporated advanced preprocessing and image data augmentation techniques to enhance robustness and mitigate overfitting. Among the models tested, Swin_Tiny_Patch4_Window7_224 achieved the highest classification Accuracy of 99.24% on the test set using 75% of the training data. This model demonstrated superior generalization across all tumor classes and effectively addressed class imbalance issues. Furthermore, we deployed and benchmarked the best-performing DL model on embedded AI platforms (Jetson AGX Xavier and Orin Nano), demonstrating their capability for real-time inference and highlighting their feasibility for edge-based clinical deployment. The results highlight the strong potential of pre-trained deep CNN and transformer-based architectures in medical image analysis. The proposed approach provides a scalable and energy-efficient solution for automated brain tumor diagnosis, facilitating the integration of AI into clinical workflows.
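
A minimal sketch, assuming the timm library (whose model names match those in the abstract), of fine-tuning a pre-trained Swin-Tiny backbone for 4-class brain-tumor MRI classification with stratified five-fold cross-validation. Dataset loading, augmentation, and the training loop are omitted; the labels below are placeholders.

```python
import numpy as np
import timm
import torch
from sklearn.model_selection import StratifiedKFold

labels = np.random.randint(0, 4, size=100)                 # placeholder class labels
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros((len(labels), 1)), labels)):
    # Transfer learning: ImageNet weights with a new 4-way head
    # (Glioma, Meningioma, Pituitary, No Tumor).
    model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True, num_classes=4)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    # ... training on train_idx and validation on val_idx would go here ...
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")
```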

15 pages, 1125 KB  
Systematic Review
Applications and Performance of Artificial Intelligence in Spinal Metastasis Imaging: A Systematic Review
by Vivek Sanker, Poorvikha Gowda, Alexander Thaller, Zhikai Li, Philip Heesen, Zekai Qiang, Srinath Hariharan, Emil O. R. Nordin, Maria Jose Cavagnaro, John Ratliff and Atman Desai
J. Clin. Med. 2025, 14(16), 5877; https://doi.org/10.3390/jcm14165877 - 20 Aug 2025
Viewed by 399
Abstract
Background: Spinal metastasis is the third most common site for metastatic localization, following the lung and liver. Manual detection through imaging modalities such as CT, MRI, PET, and bone scintigraphy can be costly and inefficient. Preliminary artificial intelligence (AI) techniques and computer-aided detection (CAD) systems have attempted to improve lesion detection, segmentation, and treatment response in oncological imaging. The objective of this review is to evaluate the current applications of AI across multimodal imaging techniques in the diagnosis of spinal metastasis. Methods: Databases including PubMed, Scopus, Web of Science Advance, Cochrane, and Embase (Ovid) were searched using keywords such as ‘spine metastases’, ‘artificial intelligence’, ‘machine learning’, ‘deep learning’, and ‘diagnosis’. The screening of studies adhered to the PRISMA guidelines. Relevant variables were extracted from each included article, such as primary tumor type, cohort size, and prediction model performance metrics: area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and internal and external validation. A random-effects meta-analysis model was used to account for variability between the studies. Quality assessment was performed using the PROBAST tool. Results: This review included 39 studies published between 2007 and 2024, encompassing a total of 6267 patients. The three most common primary tumors were lung cancer (56.4%), breast cancer (51.3%), and prostate cancer (41.0%). Four studies reported AUC values for model training, sixteen for internal validation, and five for external validation. The weighted average AUCs were 0.971 (training), 0.947 (internal validation), and 0.819 (external validation). The risk of bias was highest in the analysis domain, with 22 studies (56%) rated high risk, primarily due to inadequate external validation and overfitting. Conclusions: AI-based approaches show promise for enhancing the detection, segmentation, and characterization of spinal metastatic lesions across multiple imaging modalities. Future research should focus on developing more generalizable models through larger and more diverse training datasets, integrating clinical and imaging data, and conducting prospective validation studies to demonstrate meaningful clinical impact.
(This article belongs to the Special Issue Recent Advances in Spine Tumor Diagnosis and Treatment)
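
A minimal sketch of a cohort-size-weighted average AUC of the kind reported above. The review itself used a random-effects model, which is not reproduced here; this simpler weighted mean is only illustrative, and the per-study numbers are hypothetical.

```python
import numpy as np

aucs = np.array([0.93, 0.96, 0.88, 0.97])       # per-study internal-validation AUCs (hypothetical)
cohort_sizes = np.array([120, 450, 80, 300])    # per-study patient counts (hypothetical)

weighted_auc = np.average(aucs, weights=cohort_sizes)
print(f"weighted average AUC = {weighted_auc:.3f}")
```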

14 pages, 3502 KB  
Article
Deep Learning-Based Nuclei Segmentation and Melanoma Detection in Skin Histopathological Image Using Test Image Augmentation and Ensemble Model
by Mohammadesmaeil Akbarpour, Hamed Fazlollahiaghamalek, Mahdi Barati, Mehrdad Hashemi Kamangar and Mrinal Mandal
J. Imaging 2025, 11(8), 274; https://doi.org/10.3390/jimaging11080274 - 15 Aug 2025
Viewed by 403
Abstract
Histopathological images play a crucial role in diagnosing skin cancer. However, because digital histopathological images are very large (typically on the order of billions of pixels), manual image analysis is tedious and time-consuming. Therefore, there has been significant interest in developing Artificial Intelligence (AI)-enabled computer-aided diagnosis (CAD) techniques for skin cancer detection. Because cell boundaries are diverse and often uncertain, automated nuclei segmentation of histopathological images remains challenging. Automating the identification of abnormal cell nuclei and analyzing their distribution across multiple tissue sections can significantly expedite comprehensive diagnostic assessments. In this paper, a deep neural network (DNN)-based technique is proposed to segment nuclei and detect melanoma in histopathological images. To achieve a robust performance, a test image is first augmented by various geometric operations. The augmented images are then passed through the DNN and the individual outputs are combined to obtain the final nuclei-segmented image. A morphological technique is then applied on the nuclei-segmented image to detect the melanoma region in the image. Experimental results show that the proposed technique achieves Dice scores of 91.61% and 87.9% for nuclei segmentation and melanoma detection, respectively.
(This article belongs to the Section Medical Imaging)
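
A minimal sketch of the test-time augmentation idea described above: a test image is geometrically augmented, each variant is segmented, and the predictions are mapped back and averaged into a final nuclei mask. The `model` here is a trivial stand-in; the paper's actual DNN and post-processing are not reproduced.

```python
import torch

def tta_segment(model, image):                    # image: (1, C, H, W) tensor
    flips = [(), (-1,), (-2,), (-2, -1)]          # identity, h-flip, v-flip, both
    probs = []
    with torch.no_grad():
        for dims in flips:
            aug = torch.flip(image, dims) if dims else image
            pred = torch.sigmoid(model(aug))      # per-pixel nucleus probability
            probs.append(torch.flip(pred, dims) if dims else pred)  # undo the flip
    return torch.stack(probs).mean(dim=0)         # ensemble over augmented views

# Example with a trivial stand-in "model":
model = torch.nn.Conv2d(3, 1, kernel_size=1)
mask = tta_segment(model, torch.rand(1, 3, 64, 64))
print(mask.shape)   # torch.Size([1, 1, 64, 64])
```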

34 pages, 9273 KB  
Review
Multi-Task Deep Learning for Lung Nodule Detection and Segmentation in CT Scans: A Review
by Runhan Li and Barmak Honarvar Shakibaei Asli
Electronics 2025, 14(15), 3009; https://doi.org/10.3390/electronics14153009 - 28 Jul 2025
Viewed by 986
Abstract
Lung nodule detection and segmentation are essential tasks in computer-aided diagnosis (CAD) systems for early lung cancer screening. With the growing availability of CT data and deep learning models, researchers have explored various strategies to improve the performance of these tasks. This review focuses on Multi-Task Learning (MTL) approaches, which unify or cooperatively integrate detection and segmentation by leveraging shared representations. We first provide an overview of traditional and deep learning methods for each task individually, then examine how MTL has been adapted for medical image analysis, with a particular focus on lung CT studies. Key aspects such as network architectures and evaluation metrics are also discussed. The review highlights recent trends, identifies current challenges, and outlines promising directions toward more accurate, efficient, and clinically applicable CAD solutions. Overall, MTL frameworks significantly enhance efficiency and accuracy in lung nodule analysis by leveraging shared representations, although critical challenges such as task imbalance and computational demands warrant further research before clinical adoption.
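
A minimal sketch, not drawn from any specific reviewed paper, of the multi-task idea the review covers: one shared encoder feeding both a nodule-detection head (here a simple per-image presence score) and a segmentation head.

```python
import torch
import torch.nn as nn

class TinyMTL(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(16, 1, 1)                           # per-pixel nodule mask
        self.det_head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                      nn.Flatten(), nn.Linear(16, 1)) # nodule present?

    def forward(self, x):
        feats = self.encoder(x)                   # shared representation
        return self.det_head(feats), self.seg_head(feats)

det_logit, seg_logits = TinyMTL()(torch.rand(2, 1, 64, 64))
print(det_logit.shape, seg_logits.shape)          # (2, 1) and (2, 1, 64, 64)
```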

20 pages, 1606 KB  
Article
Brain Tumour Segmentation Using Choquet Integrals and Coalition Game
by Makhlouf Derdour, Mohammed El Bachir Yahiaoui, Moustafa Sadek Kahil, Mohamed Gasmi and Mohamed Chahine Ghanem
Information 2025, 16(7), 615; https://doi.org/10.3390/info16070615 - 17 Jul 2025
Viewed by 407
Abstract
Artificial Intelligence (AI) and computer-aided diagnosis (CAD) have revolutionised various aspects of modern life, particularly in the medical domain. These technologies enable efficient solutions for complex challenges, such as accurately segmenting brain tumour regions, which significantly aid medical professionals in monitoring and treating patients. This research focuses on segmenting glioma brain tumour lesions in MRI images by analysing them at the pixel level. The aim is to develop a deep learning-based approach that enables ensemble learning to achieve precise and consistent segmentation of brain tumours. While many studies have explored ensemble learning techniques in this area, most rely on aggregation functions like the Weighted Arithmetic Mean (WAM) without accounting for the interdependencies between classifier subsets. To address this limitation, the Choquet integral is employed for ensemble learning, along with a novel evaluation framework for fuzzy measures. This framework integrates coalition game theory, information theory, and Lambda fuzzy approximation. Three distinct fuzzy measure sets are computed using different weighting strategies informed by these theories. Based on these measures, three Choquet integrals are calculated for segmenting different components of brain lesions, and their outputs are subsequently combined. The BraTS-2020 online validation dataset is used to validate the proposed approach. Results demonstrate superior performance compared with several recent methods, achieving Dice Similarity Coefficients of 0.896, 0.851, and 0.792 and 95% Hausdorff distances of 5.96 mm, 6.65 mm, and 20.74 mm for the whole tumour, tumour core, and enhancing tumour core, respectively.
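
A minimal sketch of the discrete Choquet integral used to fuse per-pixel scores from an ensemble of segmentation models, which, unlike a weighted arithmetic mean, can account for interactions between classifier subsets. The fuzzy measure below is a hand-set toy example; the paper derives its measures from coalition game theory, information theory, and a Lambda fuzzy approximation, none of which is reproduced here.

```python
def choquet(scores, mu):
    # scores: {model_name: score in [0, 1]}; mu: fuzzy measure over frozensets of names.
    order = sorted(scores, key=scores.get)                 # ascending by score
    total, prev = 0.0, 0.0
    for i, name in enumerate(order):
        coalition = frozenset(order[i:])                   # models scoring >= current value
        total += (scores[name] - prev) * mu[coalition]
        prev = scores[name]
    return total

mu = {
    frozenset({"A"}): 0.4, frozenset({"B"}): 0.3, frozenset({"C"}): 0.5,
    frozenset({"A", "B"}): 0.6, frozenset({"A", "C"}): 0.8,
    frozenset({"B", "C"}): 0.7, frozenset({"A", "B", "C"}): 1.0,
}
print(choquet({"A": 0.9, "B": 0.2, "C": 0.6}, mu))   # fused tumour probability for one pixel
```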

39 pages, 2612 KB  
Article
A Deep Learning-Driven CAD for Breast Cancer Detection via Thermograms: A Compact Multi-Architecture Feature Strategy
by Omneya Attallah
Appl. Sci. 2025, 15(13), 7181; https://doi.org/10.3390/app15137181 - 26 Jun 2025
Viewed by 699
Abstract
Breast cancer continues to be the most common malignancy among women worldwide, presenting a considerable public health issue. Mammography, though the gold standard for screening, has limitations that catalyzed the advancement of non-invasive, radiation-free alternatives, such as thermal imaging (thermography). This research introduces a novel computer-aided diagnosis (CAD) framework aimed at improving breast cancer detection via thermal imaging. The suggested framework mitigates the limitations of current CAD systems, which frequently utilize intricate convolutional neural network (CNN) structures and resource-intensive preprocessing, by incorporating streamlined CNN designs, transfer learning strategies, and multi-architecture ensemble methods. Features are primarily obtained from various layers of MobileNet, EfficientNetB0, and ShuffleNet architectures to assess the impact of individual layers on classification performance. Following that, feature transformation methods, such as discrete wavelet transform (DWT) and non-negative matrix factorization (NNMF), are employed to diminish feature dimensionality and enhance computational efficiency. Features from all layers of the three CNNs are subsequently incorporated, and the Minimum Redundancy Maximum Relevance (MRMR) algorithm is utilized to determine the most prominent features. Ultimately, support vector machine (SVM) classifiers are employed for classification purposes. The results indicate that integrating features from various CNNs and layers markedly improves performance, attaining a maximum accuracy of 99.4%. Furthermore, combining features from all layers of the three CNNs, in conjunction with NNMF, attained a maximum accuracy of 99.9% with merely 350 features. This CAD system demonstrates the efficacy of thermal imaging combined with multi-layer feature integration and dimensionality reduction in enhancing non-invasive breast cancer diagnosis while reducing computational requirements.
(This article belongs to the Special Issue Application of Decision Support Systems in Biomedical Engineering)
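
A minimal sketch of the "reduce deep features, then classify with an SVM" stage: non-negative matrix factorization (NNMF) compresses a placeholder, non-negative CNN feature matrix before an SVM separates normal from abnormal thermograms. Feature extraction from the MobileNet/EfficientNetB0/ShuffleNet layers and the DWT and MRMR steps are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((60, 1280))                 # placeholder non-negative deep features
y = rng.integers(0, 2, size=60)            # placeholder labels (0 = normal, 1 = abnormal)

X_reduced = NMF(n_components=32, init="nndsvda", max_iter=500).fit_transform(X)
scores = cross_val_score(SVC(kernel="rbf"), X_reduced, y, cv=5)
print(f"CV accuracy on toy data: {scores.mean():.2f}")
```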

16 pages, 5302 KB  
Article
BREAST-CAD: A Computer-Aided Diagnosis System for Breast Cancer Detection Using Machine Learning
by Riyam M. Masoud, Ramadan Madi Ali Bakir, M. Sabry Saraya and Sarah M. Ayyad
Technologies 2025, 13(7), 268; https://doi.org/10.3390/technologies13070268 - 24 Jun 2025
Viewed by 570
Abstract
This research presents a novel Computer-Aided Diagnosis (CAD) system called BREAST-CAD, developed to support clinicians in breast cancer detection. Our approach follows a three-phase methodology. Initially, a comprehensive review of the literature published between 2000 and 2024 informed the choice of a suitable dataset and the selection of four Machine Learning (ML) algorithms: Naive Bayes (NB), K-Nearest Neighbors (KNN), Support Vector Machines (SVM), and Decision Trees (DT). Subsequently, the dataset was preprocessed and the four ML models were trained and validated, with the DT model achieving superior accuracy. We developed a novel, integrated client–server architecture for real-time diagnostic support, an aspect often underexplored in the current CAD literature. In the final phase, the DT model was embedded within a user-friendly client application, enabling clinicians to input patient diagnostic data directly and receive immediate, AI-driven predictions of cancer probability. Results are securely transmitted to and managed by a dedicated server, which facilitates remote access, centralizes data storage, and ensures data integrity.
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
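
A minimal sketch of the model-comparison phase: the four classifiers named in the abstract are trained and cross-validated on a breast-cancer dataset. The scikit-learn Wisconsin dataset is used here only as a stand-in for the study's own data, and the client–server deployment layer is not shown.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()   # 5-fold accuracy per classifier
    print(f"{name}: {acc:.3f}")
```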

21 pages, 1062 KB  
Article
Red-KPLS Feature Reduction with 1D-ResNet50: Deep Learning Approach for Multiclass Alzheimer’s Staging
by Syrine Neffati, Ameni Filali, Kawther Mekki and Kais Bouzrara
Technologies 2025, 13(6), 258; https://doi.org/10.3390/technologies13060258 - 19 Jun 2025
Viewed by 978
Abstract
The early detection of Alzheimer’s disease (AD) is essential for improving patient outcomes, enabling timely intervention, and slowing disease progression. However, the complexity of neuroimaging data presents significant obstacles to accurate classification. This study introduces a computationally efficient AI framework designed to enhance AD staging using structural MRI. The proposed method integrates discrete wavelet transform (DWT) for multi-scale feature extraction, a novel reduced kernel partial least squares (Red-KPLS) algorithm for feature reduction, and ResNet-50 for classification. The proposed technique, referred to as Red-KPLS-CNN, refines MRI features into discriminative biomarkers while minimizing redundancy. As a result, the framework achieves 96.9% accuracy and an F1-score of 97.8% in the multiclass classification of AD cases using the Kaggle dataset. The dataset was strategically partitioned into 60% training, 20% validation, and 20% testing sets, preserving class balance throughout all splits. The integration of Red-KPLS enhances feature selection, reducing dimensionality without compromising diagnostic sensitivity. Compared to conventional models, our approach improves classification robustness and generalization, reinforcing its potential for scalable and interpretable AD diagnostics. These findings emphasize the importance of hybrid wavelet–kernel–deep learning architectures, offering a promising direction for advancing computer-aided diagnosis (CAD) in clinical applications.
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
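
A minimal sketch of the first stage only: a 2-D discrete wavelet transform (via PyWavelets) extracts multi-scale features from an MRI slice. The Red-KPLS reduction and the ResNet-50 classifier are the paper's own contributions and are not reproduced; the input array is a placeholder.

```python
import numpy as np
import pywt

slice_2d = np.random.rand(128, 128)                    # placeholder MRI slice
cA, (cH, cV, cD) = pywt.dwt2(slice_2d, "db2")          # approximation + detail sub-bands
features = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
print(features.shape)                                  # multi-scale feature vector
```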

31 pages, 2654 KB  
Article
A Hybrid Model of Feature Extraction and Dimensionality Reduction Using ViT, PCA, and Random Forest for Multi-Classification of Brain Cancer
by Hisham Allahem, Sameh Abd El-Ghany, A. A. Abd El-Aziz, Bader Aldughayfiq, Menwa Alshammeri and Malak Alamri
Diagnostics 2025, 15(11), 1392; https://doi.org/10.3390/diagnostics15111392 - 30 May 2025
Cited by 1 | Viewed by 789
Abstract
Background/Objectives: The brain serves as the central command center for the nervous system in the human body and is made up of nerve cells known as neurons. When these nerve cells grow rapidly and abnormally, it can lead to the development of a brain tumor. Brain tumors are severe conditions that can significantly reduce a person’s lifespan. Failure to detect or delayed diagnosis of brain tumors can have fatal consequences. Accurately identifying and classifying brain tumors poses a considerable challenge for medical professionals, especially in terms of diagnosing and treating them using medical imaging analysis. Errors in diagnosing brain tumors can significantly impact a person’s life expectancy. Magnetic Resonance Imaging (MRI) is highly effective in the early detection, diagnosis, and classification of brain cancers due to its advanced imaging abilities for soft tissues. However, manual examination of brain MRI scans is prone to errors and heavily depends on radiologists’ experience and fatigue levels. Swift detection of brain tumors is crucial for ensuring patient safety. Methods: In recent years, computer-aided diagnosis (CAD) systems incorporating deep learning (DL) and machine learning (ML) technologies have gained popularity as they offer precise predictive outcomes based on MRI images using advanced computer vision techniques. This article introduces a novel hybrid CAD approach named ViT-PCA-RF, which integrates a Vision Transformer (ViT) and Principal Component Analysis (PCA) with a Random Forest (RF) classifier for brain tumor classification, providing a new method in the field. ViT was employed for feature extraction, PCA for feature dimension reduction, and RF for brain tumor classification. The proposed ViT-PCA-RF model helps detect brain tumors early, enabling timely intervention, better patient outcomes, and a streamlined diagnostic process that reduces patient time and costs. The model was trained and tested on the Brain Tumor MRI (BTM) dataset for multi-class classification of brain tumors. The BTM dataset was preprocessed using resizing and normalization methods to ensure consistent input. Subsequently, the model was compared against traditional classifiers, showcasing impressive performance metrics. Results: The proposed model exhibited outstanding accuracy, specificity, precision, recall, and F1 score, with rates of 99%, 99.4%, 98.1%, 98.1%, and 98.1%, respectively. Conclusions: This evaluation underlines the potential of our model, which leverages ViT, PCA, and RF techniques and shows promise for the precise and effective detection of brain tumors.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
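
A minimal sketch of the last two stages of the ViT-PCA-RF pipeline: PCA reduces a placeholder ViT feature matrix and a random forest performs the multi-class tumour prediction. The ViT feature extractor, preprocessing, and the BTM dataset are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_vit = rng.normal(size=(200, 768))        # placeholder ViT embeddings, one row per scan
y = rng.integers(0, 4, size=200)           # placeholder labels for four tumour classes

X_pca = PCA(n_components=50).fit_transform(X_vit)
acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                      X_pca, y, cv=5).mean()
print(f"toy cross-validated accuracy: {acc:.2f}")
```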

41 pages, 4591 KB  
Article
Context-Driven Active Contour (CDAC): A Novel Medical Image Segmentation Method Based on Active Contour and Contextual Understanding
by Suane Pires Pinheiro da Silva, Roberto Fernandes Ivo, Calleo Belo Barroso, João Carlos Nepomuceno Fernandes, Thiago Ferreira Portela, Aldísio Gonçalves Medeiros, Pedro Henrique F. de Sousa, Houbing Song and Pedro Pedrosa Rebouças Filho
Sensors 2025, 25(9), 2864; https://doi.org/10.3390/s25092864 - 30 Apr 2025
Viewed by 775
Abstract
Lung diseases, including chronic obstructive pulmonary disease (COPD) and pulmonary fibrosis, pose significant health challenges due to their high morbidity and mortality rates. Computed tomography (CT) scans play a critical role in early diagnosis and disease management, yet traditional segmentation methods often falter in addressing anatomical variability and pathological complexity. To overcome these limitations, this study introduces the context-driven active contour (CDAC), a new segmentation method that combines active contour models (ACMs) with contextual analysis. CDAC leverages contextual information from image embeddings and expert annotations to refine segmentation precision. The algorithm employs contextual attention force (CAF) as an external energy term and contextual balloon force (CBF) as an internal energy term, enabling robust contour adaptation. Evaluations were conducted on CT images of healthy lungs, as well as those affected by COPD and pulmonary fibrosis. CDAC achieved notable performance metrics, including a Dice coefficient of 96.8% for healthy lungs, an Accuracy of 94.5% for COPD, and a Jaccard Index of 92.3% for pulmonary fibrosis. These results demonstrate the method’s effectiveness and adaptability. By integrating contextual insights, CDAC offers a promising solution for enhancing computer-aided diagnostic (CAD) systems in the management of lung diseases.
(This article belongs to the Section Biomedical Sensors)
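
A minimal sketch of a classical active-contour baseline (a morphological geodesic active contour from scikit-image) with a balloon term, to illustrate the kind of energy-driven contour evolution that CDAC builds on. The paper's contextual attention force and contextual balloon force are its own contributions and are not implemented here; the image is synthetic.

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

image = np.zeros((128, 128))
image[40:90, 30:100] = 1.0                       # synthetic "lung" region
gimage = inverse_gaussian_gradient(image)        # edge-stopping image (small at boundaries)
init = disk_level_set(image.shape, center=(64, 64), radius=10)

# balloon > 0 inflates the contour outward, an internal energy term analogous
# in spirit (but not equivalent) to the paper's contextual balloon force.
mask = morphological_geodesic_active_contour(gimage, 200, init_level_set=init, balloon=1)
print(mask.sum(), "pixels inside the final contour")
```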

10 pages, 864 KB  
Review
Role of Artificial Intelligence in Thyroid Cancer Diagnosis
by Alessio Cece, Massimo Agresti, Nadia De Falco, Pasquale Sperlongano, Giancarlo Moccia, Pasquale Luongo, Francesco Miele, Alfredo Allaria, Francesco Torelli, Paola Bassi, Antonella Sciarra, Stefano Avenia, Paola Della Monica, Federica Colapietra, Marina Di Domenico, Ludovico Docimo and Domenico Parmeggiani
J. Clin. Med. 2025, 14(7), 2422; https://doi.org/10.3390/jcm14072422 - 2 Apr 2025
Cited by 1 | Viewed by 1405
Abstract
The progress of artificial intelligence (AI), particularly its core algorithms—machine learning (ML) and deep learning (DL)—has been significant in the medical field, impacting both scientific research and clinical practice. These algorithms are now capable of analyzing ultrasound images, processing them, and providing outcomes, such as determining the benignity or malignancy of thyroid nodules. This integration into ultrasound machines is referred to as computer-aided diagnosis (CAD). The use of such software extends beyond ultrasound to include cytopathological and molecular assessments, enhancing the estimation of malignancy risk. AI’s considerable potential in cancer diagnosis and prevention is evident. This article provides an overview of AI models based on ML and DL algorithms used in thyroid diagnostics. Recent studies demonstrate their effectiveness and diagnostic role in ultrasound, pathology, and molecular fields. Notable advancements include content-based image retrieval (CBIR), enhanced saliency CBIR (SE-CBIR), Restore-Generative Adversarial Networks (GANs), and Vision Transformers (ViTs). These new algorithms show remarkable results, indicating their potential as diagnostic and prognostic tools for thyroid pathology. The future trend points to these AI systems becoming the preferred choice for thyroid diagnostics.
(This article belongs to the Section Oncology)

24 pages, 2991 KB  
Article
Automatic Blob Detection Method for Cancerous Lesions in Unsupervised Breast Histology Images
by Vincent Majanga, Ernest Mnkandla, Zenghui Wang and Donatien Koulla Moulla
Bioengineering 2025, 12(4), 364; https://doi.org/10.3390/bioengineering12040364 - 31 Mar 2025
Viewed by 752
Abstract
The early detection of cancerous lesions is a challenging task given the underlying cancer biology and the variability in tissue characteristics, which render medical image analysis tedious and time-consuming. In the past, conventional computer-aided diagnosis (CAD) and detection methods have relied heavily on visual inspection of medical images, which is ineffective, particularly for large and visible cancerous lesions in such images. Additionally, conventional methods face challenges in analyzing objects in large images due to overlapping/intersecting objects and the inability to resolve their image boundaries/edges. Nevertheless, the early detection of breast cancer lesions is a key determinant for diagnosis and treatment. In this study, we present a deep learning-based technique for breast cancer lesion detection, namely blob detection, which automatically detects hidden and inaccessible cancerous lesions in unsupervised human breast histology images. Initially, this approach prepares and pre-processes data through various augmentation methods to increase the dataset size. Secondly, a stain normalization technique is applied to the augmented images to separate nucleus features from tissue structures. Thirdly, morphology operation techniques, namely erosion, dilation, opening, and a distance transform, are used to enhance the images by highlighting foreground and background pixels while removing overlapping regions from the highlighted nucleus objects in the image. Subsequently, image segmentation is handled via the connected components method, which groups highlighted pixel components with similar intensity values and assigns them to their relevant labeled components (binary masks). These binary masks are then used in the active contours method for further segmentation by highlighting the boundaries/edges of ROIs. Finally, a deep learning recurrent neural network (RNN) model automatically detects and extracts cancerous lesions and their edges from the histology images via the blob detection method. This proposed approach utilizes the capabilities of both the connected components method and the active contours method to resolve the limitations of blob detection. The detection method is evaluated on 27,249 unsupervised, augmented human breast cancer histology images and achieves an F1-score of 98.82%.
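
A minimal sketch of the morphology and connected-components portion of the pipeline: a synthetic binary nucleus image is cleaned with opening and dilation, then connected components are labelled as candidate blobs. Stain normalization, the distance transform, the active-contour refinement, and the RNN-based detector are not reproduced here.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import binary_dilation, binary_opening, disk

rng = np.random.default_rng(0)
binary = rng.random((128, 128)) > 0.8            # synthetic sparse foreground pixels

cleaned = binary_opening(binary, disk(1))        # remove isolated noise pixels
grown = binary_dilation(cleaned, disk(2))        # merge fragments of the same nucleus
labels = label(grown)                            # connected components -> candidate blobs

blobs = [r for r in regionprops(labels) if r.area > 10]
print(f"{labels.max()} components, {len(blobs)} candidate blobs above the area threshold")
```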

10 pages, 784 KB  
Article
An Effective and Fast Model for Characterization of Cardiac Arrhythmia and Congestive Heart Failure
by Salim Lahmiri and Stelios Bekiros
Diagnostics 2025, 15(7), 849; https://doi.org/10.3390/diagnostics15070849 - 27 Mar 2025
Viewed by 515
Abstract
Background/Objectives: Cardiac arrhythmia (ARR) and congestive heart failure (CHF) are heart diseases that can cause dysfunction of other body organs and possibly death. This paper describes a fast and accurate detection system to distinguish between ARR and normal sinus rhythm (NS), and between CHF and NS. Methods: The proposed automatic detection system uses the higher-amplitude coefficients (HACs) of the discrete cosine transform (DCT) of the electrocardiogram (ECG) as discriminant features to distinguish ARR and CHF signals from NS. The approach is tested with three statistical classifiers, including the k-nearest neighbors (k-NN) algorithm. Results: The DCT provides fast compression of the ECG signal, and statistical tests show that the obtained HACs differ between ARR and NS, and between CHF and NS. The k-NN achieved the highest accuracy under ten-fold cross-validation, compared with Naïve Bayes (NB) and a nonlinear support vector machine (SVM). It yielded 97% accuracy, 99% sensitivity, 90% specificity, and a 0.63 s processing time when classifying ARR against NS, and 99% accuracy, 99.7% sensitivity, 99.2% specificity, and a 0.27 s processing time when classifying CHF against NS. In addition to its fast response, the DCT-kNN system yields higher accuracy than recent works. Conclusions: Using the DCT-based HACs as biomarkers of ARR and CHF can lead to an efficient computer-aided diagnosis (CAD) system that is fast and accurate and does not require ECG signal pre-processing and segmentation. The proposed system is promising for applications in clinical settings.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
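
A minimal sketch of the feature idea in the abstract: take the discrete cosine transform of an ECG segment and keep its highest-amplitude coefficients (HACs) as a compact feature vector for a k-NN classifier. The signals and labels below are synthetic, and the study's ECG databases and evaluation protocol are not reproduced.

```python
import numpy as np
from scipy.fft import dct
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
ecg_segments = rng.normal(size=(100, 1000))        # placeholder: 100 segments x 1000 samples
labels = rng.integers(0, 2, size=100)              # placeholder: 0 = NS, 1 = ARR

def hac_features(signal, k=32):
    coeffs = dct(signal, norm="ortho")
    top = np.argsort(np.abs(coeffs))[-k:]          # indices of the k largest amplitudes
    return coeffs[np.sort(top)]                    # keep a fixed ordering of kept coefficients

X = np.array([hac_features(s) for s in ecg_segments])
acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, labels, cv=10).mean()
print(f"toy 10-fold accuracy: {acc:.2f}")
```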

23 pages, 6296 KB  
Article
Dynamic Patch-Based Sample Generation for Pulmonary Nodule Segmentation in Low-Dose CT Scans Using 3D Residual Networks for Lung Cancer Screening
by Ioannis D. Marinakis, Konstantinos Karampidis, Giorgos Papadourakis and Mostefa Kara
Appl. Biosci. 2025, 4(1), 14; https://doi.org/10.3390/applbiosci4010014 - 5 Mar 2025
Cited by 1 | Viewed by 1210
Abstract
Lung cancer is by far the leading cause of cancer death among both men and women, making up almost 25% of all cancer deaths. Each year, more people die of lung cancer than of colon, breast, and prostate cancer combined. The early detection of lung cancer is critical for improving patient outcomes, and automation through advanced image analysis techniques can significantly assist radiologists. This paper presents the development and evaluation of a computer-aided diagnostic system for lung cancer screening, focusing on pulmonary nodule segmentation in low-dose CT images, by employing HighRes3DNet. HighRes3DNet is a specialized 3D convolutional neural network (CNN) architecture based on ResNet principles, which uses residual connections to efficiently learn complex spatial features from 3D volumetric data. To address the challenges of processing large CT volumes, an efficient patch-based extraction pipeline was developed. This method dynamically extracts 3D patches during training with a probabilistic approach, prioritizing patches likely to contain nodules while maintaining diversity. Data augmentation techniques, including random flips, affine transformations, elastic deformations, and swaps, were applied in the 3D space to enhance the robustness of the training process and mitigate overfitting. Using a public low-dose CT dataset, this approach achieved a Dice coefficient of 82.65% on the testing set for 3D nodule segmentation, demonstrating precise and reliable predictions. The findings highlight the potential of this system to enhance efficiency and accuracy in lung cancer screening, providing a valuable tool to support radiologists in clinical decision-making.
(This article belongs to the Special Issue Neural Networks and Deep Learning for Biosciences)
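
A minimal sketch of the probabilistic patch sampling described in the abstract: during training, a 3-D patch is extracted either around a random nodule voxel (with a configurable probability) or at a random location, biasing batches toward nodules without excluding background. HighRes3DNet itself, the 3-D augmentation, and the study's public low-dose CT dataset are not reproduced; the volume and mask are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch(volume, mask, size=32, p_nodule=0.7):
    half = size // 2
    if rng.random() < p_nodule and mask.any():
        zs, ys, xs = np.nonzero(mask)                       # voxels inside a nodule
        i = rng.integers(len(zs))
        center = np.array([zs[i], ys[i], xs[i]])
    else:
        center = np.array([rng.integers(half, d - half) for d in volume.shape])
    center = np.clip(center, half, np.array(volume.shape) - half)  # keep patch in bounds
    z, y, x = center
    sl = (slice(z - half, z + half), slice(y - half, y + half), slice(x - half, x + half))
    return volume[sl], mask[sl]

ct = rng.normal(size=(128, 128, 128)).astype(np.float32)    # placeholder CT volume
nodule_mask = np.zeros_like(ct, dtype=bool)
nodule_mask[60:68, 60:68, 60:68] = True                     # placeholder nodule

patch, patch_mask = sample_patch(ct, nodule_mask)
print(patch.shape, patch_mask.sum(), "nodule voxels in the sampled patch")
```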