Search Results (367)

Search Parameters:
Keywords = computer-aided diagnosis (CAD)

17 pages, 5241 KB  
Article
DSF-BRNet: Dual-Gated Semantic Fusion and Boundary Refinement for Efficient Endoscopic Polyp Segmentation
by Botao Liu, Changqi Shi and Ming Zhao
Sensors 2026, 26(9), 2717; https://doi.org/10.3390/s26092717 - 28 Apr 2026
Abstract
Early detection and accurate segmentation of colorectal polyps during colonoscopy are crucial for the prevention of colorectal cancer. However, automated polyp segmentation remains challenging because of high inter-class variance, complex intestinal backgrounds, and blurred boundaries. To address these issues while maintaining computational efficiency, DSF-BRNet was developed for endoscopic polyp segmentation. In this framework, a Dual-Gated Semantic Fusion (DSF) module is introduced to reduce spatial misalignment between cross-level features and to provide a more reliable semantic basis for lesion localization. To further alleviate boundary ambiguity, a High-Frequency Boundary Refinement (HBR) module is used to sharpen segmentation contours under aligned semantic guidance. Together, these components form an Align-then-Refine framework in which semantic localization is strengthened before boundary refinement is performed. Experiments on four public benchmark datasets—Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, and ETIS-LaribPolypDB—showed competitive performance with favorable computational efficiency. Mean Dice scores of 0.943 on CVC-ClinicDB and 0.818 on ETIS-LaribPolypDB were achieved, with 25.55 M parameters and an inference speed of 80.08 FPS. These results indicate that accurate semantic localization and fine boundary preservation can be achieved simultaneously, suggesting that the method may be promising for real-time computer-aided diagnosis (CAD). Full article

48 pages, 30810 KB  
Article
Swin–YOLOv12: A Hybrid Transformer-Based Deep Learning Approach for Enhanced Real-Time Brain Tumor Detection in MRI Images
by Mubashar Tariq and Kiho Choi
Mathematics 2026, 14(9), 1447; https://doi.org/10.3390/math14091447 - 25 Apr 2026
Abstract
Brain tumors (BTs) arise from the abnormal growth of cells within brain tissue and may spread rapidly, making them a major cause of mortality worldwide. Early detection of BTs remains highly challenging due to the brain’s complex structure and the heterogeneous nature of tumors. Magnetic Resonance Imaging (MRI) provides detailed information about tumor size, location, and shape, thereby supporting clinical decision-making for treatments such as chemotherapy, radiation therapy, and surgery. Traditional machine learning (ML) approaches mainly rely on manual feature extraction, whereas recent advances in Computer-Aided Diagnosis (CAD) and deep learning (DL) have enabled more accurate detection of small and complex tumor regions. To improve automated tumor detection, we propose a hybrid Swin–YOLO framework that combines the Swin Transformer (ST) with the latest CNN-based YOLOv12 model. In this framework, the Swin Transformer serves as the main backbone for feature extraction, while the Feature Pyramid Network (FPN) and Path Aggregation Network (PANet) are employed in the neck to better capture multi-scale features. For training, we used the publicly available Br35H dataset and applied data augmentation to enhance the model’s robustness and generalization capability. The experimental results show that the proposed framework achieved 99.7% accuracy, 99.4% mAP@50, and 87.2% mAP@50:95. Furthermore, we incorporated Explainable Artificial Intelligence (XAI) techniques, including Grad-CAM and SHAP, to improve the interpretability of the model by visually highlighting the tumor regions that contributed most to the prediction. In addition, we developed NeuroVision AI, a web-based application designed to support faster and more accurate clinical decision-making. Although the proposed model demonstrated strong performance on the dataset, these results should be interpreted within the context of the current experimental setting. Full article

45 pages, 2083 KB  
Systematic Review
AI-Driven Breast Cancer Diagnosis: A Systematic Review of Imaging Modalities, Deep Learning, and Explainability
by Margo Sabry, Hossam Magdy Balaha, Khadiga M. Ali, Ali Mahmoud, Dibson Gondim, Mohammed Ghazal, Tayseer Hassan A. Soliman and Ayman El-Baz
Cancers 2026, 18(8), 1305; https://doi.org/10.3390/cancers18081305 - 20 Apr 2026
Abstract
Background: This article provides a comprehensive overview of recent advancements in artificial intelligence (AI) and deep-learning technologies for breast cancer (BC) diagnosis across various imaging modalities. Methods: A systematic review was conducted in strict adherence to the PRISMA guidelines, incorporating a comparative analysis of 65 peer-reviewed studies published between 2018 and 2024. The evaluation focused on diagnostic performance, architectural developments, and clinical integration strategies. Results: The review synthesizes primary findings on convolutional neural networks (CNNs), emerging architectures including graph neural networks, and hybrid models, with diagnostic accuracy, risk prediction, and personalized screening strategies identified as the leading research domains. Notable achievements include CNNs attaining up to 98.5% accuracy in mammography and Vision Transformers reaching 96% in histopathological analysis. Furthermore, the implementation of explainable AI methodologies, such as SHAP, LIME, and Grad-CAM, is emphasized for maintaining transparency, trust, and accountability in clinical decision-making. Conclusions: AI constitutes a pivotal factor in facilitating early BC diagnosis and optimizing treatment outcomes. Nevertheless, significant challenges persist, including dataset heterogeneity, model generalizability, standardization of imaging protocols, computational resource limitations, and the seamless integration of these technologies into established clinical workflows. Future research must prioritize robust multi-dataset validation and standardized implementation frameworks to overcome existing limitations and advance successful BC diagnostic practices. Full article
(This article belongs to the Section Methods and Technologies Development)

35 pages, 12420 KB  
Article
LUMINA-Net: Acute Lymphocytic Leukemia Subtype Classification via Interpretable Convolution Neural Network Based on Wavelet and Attention Mechanisms
by Omneya Attallah
Algorithms 2026, 19(4), 298; https://doi.org/10.3390/a19040298 - 10 Apr 2026
Abstract
Acute Lymphoblastic Leukemia (ALL) is a highly prevalent hematological malignancy, especially in children, for whom precise and prompt subtype identification is essential to establish suitable treatment protocols. Current deep learning-based computer-aided diagnosis (CAD) methods for identifying ALL are hindered by numerous drawbacks, such as a dependence on solely spatial feature depictions, elevated feature dimensions, computationally extensive deep learning architectures, inadequate multi-layer feature utilization, and poor interpretability. This paper introduces LUMINA-Net, a custom, lightweight, and interpretable deep learning CAD for the automated identification and subtype diagnosis of ALL using microscopic blood smear images. LUMINA-Net makes four principal contributions: first, it integrates a self-attention module within a lightweight custom Convolutional Neural Network (CNN) to effectively capture long-range spatial relationships across clinically pertinent cytological patterns while preserving a compact design. Second, it employs a Discrete Wavelet Transform (DWT)-based wavelet pooling layer that decreases feature dimensions by up to 96.875% while enhancing the obtained depictions with spatial-spectral information. Third, it utilizes a multi-layer feature fusion strategy that combines wavelet-pooled features from two deep layers with a third fully connected layer to create a discriminating multi-scale feature vector. Fourth, it incorporates Gradient-weighted Class Activation Mapping as a dedicated explainability process to furnish clinicians with clear visual explanations for each classification decision. Without the need for image enhancement or segmentation preprocessing, LUMINA-Net outperforms competing state-of-the-art methods on the same dataset, achieving a peak accuracy of 99.51%, specificity of 99.84%, and sensitivity of 99.51% on the publicly available Kaggle ALL dataset. This demonstrates that LUMINA-Net has the potential to be a dependable, effective, and clinically interpretable CAD tool for ALL diagnosis. Full article

22 pages, 882 KB  
Review
Artificial Intelligence for Tuberculosis Screening and Detection: From Evidence to Policy and Implementation
by Hien Thi Thu Nguyen, Vang Le-Quy, Anh Tuan Dinh-Xuan and Linh Nhat Nguyen
Diagnostics 2026, 16(8), 1127; https://doi.org/10.3390/diagnostics16081127 - 9 Apr 2026
Abstract
Artificial intelligence (AI) is increasingly used to support tuberculosis (TB) screening and diagnosis, particularly through computer-aided detection (CAD) applied to chest radiography (CXR). However, the programmatic value of AI depends not only on diagnostic accuracy but also on implementation context, threshold calibration, and integration into diagnostic pathways. We conducted a narrative, state-of-the-art review of AI applications across the TB diagnosis pathway. Evidence was synthesized from World Health Organization policy documents, independent validation initiatives, and peer-reviewed studies published between 2010 and 2026, with a structured selection process aligned with PRISMA principles. CAD for CXR is the most mature AI application and is recommended by WHO for TB screening and triage among individuals aged ≥15 years in specific contexts. Across studies, CAD-CXR demonstrates sensitivity comparable to human readers, although performance varies by product, population, and imaging conditions, necessitating local threshold calibration. Evidence from implementation studies suggests improvements in screening efficiency and potential cost-effectiveness in high-burden settings. Other AI modalities, including computed tomography (CT)-based imaging analysis, point-of-care ultrasound interpretation, cough or stethoscope sound analysis, clinical risk models, and genomic resistance prediction, show promising but heterogeneous results, with most requiring further independent validation and prospective evaluation. AI has the potential to strengthen TB screening and diagnostic pathways, but its impact depends on integration into health systems and on evaluation using patient- and program-level outcomes rather than accuracy alone. A differentiated approach is needed, with responsible scale-up of policy-endorsed tools alongside rigorous evaluation of emerging technologies to support effective and equitable TB care. Full article
(This article belongs to the Special Issue Innovative Approaches to Tuberculosis Screening and Diagnosis)

14 pages, 2429 KB  
Article
Identifying a Critical Blind Spot: How Commercial AI (CAD) Systems Fail to Detect Faint Ground-Glass Opacities at −730 HU on Low-Dose CT
by Shan Liang, Jia Wang, Wentao Fu and Yali Wang
Diagnostics 2026, 16(7), 1014; https://doi.org/10.3390/diagnostics16071014 - 27 Mar 2026
Abstract
Objective: The integration of artificial intelligence (AI) into computer-aided detection (CAD) is a major innovation in lung cancer diagnosis. However, its reliability in detecting the earliest radiographic sign—faint ground-glass opacities (GGOs) indicating pre-invasive adenocarcinoma—remains a critical, unquantified gap. This study aimed to perform a rigorous failure analysis to define the specific conditions under which commercial AI/CAD systems fail in a low-dose CT (LDCT) screening setting. Methods: In this retrospective diagnostic accuracy study, a primary cohort of 100 patients and an external validation cohort of 50 patients with moderate/low-risk nodules on LDCT were included. An expert reference standard was established by a consensus panel of three thoracic radiologists. Two independent, commercially deployed AI/CAD systems from different vendors (Vendor A & Vendor B) processed all cases. Nodules confirmed by experts but missed by AI were analyzed. Their morphology was categorized, and their mean CT attenuation (HU) was measured via manual region-of-interest placement. Results: The AI systems demonstrated significant and comparable false negative rates in the combined cohort: 12.7% for Vendor A and 14.7% for Vendor B. The vast majority of missed nodules were GGOs (92.3% and 78.6%, respectively, in the primary cohort). Crucially, quantitative analysis revealed a consistent density threshold for AI failure: the mean CT value of missed GGOs was −737 ± 51.50 HU for Vendor A and −727 ± 70.07 HU for Vendor B. This algorithmic blind spot was fully corroborated by the external validation cohort (−741 ± 48.2 HU and −733 ± 62.5 HU, respectively). Anatomical complexity (juxta-pleural/endobronchial location) was a secondary failure factor. Conclusions: This study identifies a quantifiable “−730 HU blind spot” as a common limitation of current commercial AI/CAD systems in diagnosing early lung adenocarcinoma. 
This finding represents a pivotal advancement in understanding AI’s role in diagnostics: it is not infallible. To innovate and safeguard screening efficacy, radiologists must adopt a human–AI collaborative model with mandated manual verification targeting low-attenuation opacities, ensuring this diagnostic innovation fulfills its promise while mitigating the risks of overdiagnosis. Full article
(This article belongs to the Special Issue Advancements and Innovations in the Diagnosis of Lung Cancer)

25 pages, 1948 KB  
Article
VDTAR-Net: A Cooperative Dual-Path Convolutional Neural Network–Transformer Network for Robust Highlight Reflection Segmentation
by Qianlong Zhang and Yue Zeng
Computers 2026, 15(3), 168; https://doi.org/10.3390/computers15030168 - 4 Mar 2026
Abstract
In medical endoscopic imaging, specular reflection (SR) frequently leads to local overexposure, obscuring essential tissue information and complicating computer-aided diagnosis (CAD). Traditional convolutional neural networks (CNNs) face difficulties in modeling global illumination phenomena due to their biased local receptive fields and the inherent “object assumption.” Conversely, pure transformer models often lose high-frequency boundary details and incur substantial computational costs. To tackle these challenges, this paper introduces VDTAR-Net, a specialized framework adapted to address the unique optical characteristics of specular reflections. Building upon hybrid architectures, our contribution focuses on two core mechanisms: (1) a Cross-architecture Fusion Module (CFM) that enables deep, bidirectional information flow, allowing the Transformer’s global illumination modeling to continuously correct the CNN’s local texture biases; and (2) a Reflective-Aware Module (RAM), which explicitly integrates the physical prior of high-intensity saturation into the attention mechanism. This task-specific design significantly enhances sensitivity to boundary details in overexposed regions. We also created the first large-scale, expert-labeled cervical white light segmentation dataset, Cervix-WL-900. High-quality ground truth labels were generated through rigorous double-blind annotation and arbitration by senior experts. Experimental results show that VDTAR-Net achieves a Dice score of 92.56% and a mean Intersection over Union (mIoU) score of 87.31% on Cervix-WL-900, demonstrating superior performance compared to methods like U-Net, DeepLabv3+, SegFormer, and PSPNet. Ablation studies further confirm the substantial contributions of dual-path collaboration, CFM deep fusion, and RAM task-specific priors. 
VDTAR-Net provides a robust baseline for precise highlight segmentation, laying a foundation for subsequent image quality assessment, restoration, and feature decoupling in diagnostic models. Full article
(This article belongs to the Special Issue AI in Bioinformatics)

32 pages, 9891 KB  
Article
Attention-Based Deep Learning Framework for Lung Nodule Classification in CT Images
by Vinayak K. Bairagi, Aparna Rajesh Lokhande, Shweta Sadanand Salunkhe, Ekkarat Boonchieng and Preeti Topannavar
Symmetry 2026, 18(3), 431; https://doi.org/10.3390/sym18030431 - 28 Feb 2026
Abstract
Lung cancer continues to be one of the leading causes of cancer-related deaths worldwide, as pulmonary nodules are often diagnosed at later stages. Therefore, accurate nodule classification is crucial for enabling early detection and supporting timely clinical decision-making. This study proposes a computer-aided diagnosis (CAD) system for lung nodule classification using computed tomography (CT) images, specifically focused on malignancy prediction and structural morphology analysis. The proposed framework is based on a novel attention-based Convolutional Neural Network (CNN) that incorporates both channel-wise and spatial attention mechanisms. This dual-attention structure enables the model to emphasize diagnostically relevant features while suppressing irrelevant background information, thereby improving interpretability and classification accuracy. For benchmarking purposes, CNN, CNN-SVM, and ResNet101 architectures were implemented for comparison. Experimental results on the LIDC-IDRI dataset for binary classification (benign vs. malignant) and on the IQ-OTH/NCCD dataset for both binary and three-class (normal, benign, malignant) classification tasks demonstrate that the proposed Attention-Based CNN outperforms all baseline models, achieving a maximum classification accuracy of 98% in the binary setting. In addition to accuracy, the proposed model achieves strong performance across multiple evaluation metrics, including precision, recall, F1-score, AUC, and separately reported confusion matrices for both binary and multiclass evaluations, indicating the robustness of the approach. The dual-attention mechanism enhances salient feature localization and discriminative representation learning, thereby contributing to improved performance in both binary and multiclass classification tasks. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Image Classification)

16 pages, 1036 KB  
Article
Breast Cancer Classification Using Feature Selection via Improved Simulated Annealing and SVM Classifier
by Maedeh Kiani Sarkaleh, Hossein Azgomi and Azadeh Kiani-Sarkaleh
Diagnostics 2026, 16(4), 637; https://doi.org/10.3390/diagnostics16040637 - 23 Feb 2026
Abstract
Background: Breast cancer is among the most common cancers in women, and early diagnosis is critical for better treatment outcomes and reduced mortality. Efficient computer-aided diagnostic (CAD) systems play a crucial role in enhancing diagnostic accuracy and facilitating timely clinical decisions. Methods: This study proposes an automated CAD system for detecting cancerous tumors in mammograms, consisting of four stages: preprocessing, feature extraction, feature selection, and classification. In preprocessing, the region of interest (ROI) is extracted, followed by noise suppression and contrast enhancement to improve image quality. Shape, histogram, and tissue-related features are then computed from each ROI. An Improved Simulated Annealing (ISA) algorithm is employed to adaptively select the most informative features through a flexible process and composite fitness function, effectively reducing dimensionality while preserving high classification accuracy. Finally, classification is performed using a Support Vector Machine (SVM) to distinguish between malignant and benign masses. Results: Evaluation on the CBIS-DDSM and MIAS datasets showed the system achieved accuracies of 99.67% and 98%, sensitivities of 99.33% and 98%, and F1-scores of 99.66% and 97.9%, respectively. These results indicate notable improvements over traditional SA and full-feature approaches. Conclusions: The findings confirm the effectiveness of the ISA algorithm in selecting relevant features, thereby enhancing the performance of breast cancer detection. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

33 pages, 12030 KB  
Article
An Interpretable Ensemble Transformer Framework for Breast Cancer Detection in Ultrasound Images
by Riyadh M. Al-Tam, Aymen M. Al-Hejri, Fatma A. Hashim, Sachin M. Narangale, Mugahed A. Al-Antari and Sarah A. Alzakari
Diagnostics 2026, 16(4), 622; https://doi.org/10.3390/diagnostics16040622 - 20 Feb 2026
Abstract
Background/Objectives: Early and accurate detection of breast cancer is essential for reducing mortality and improving patient outcomes. However, the manual interpretation of breast ultrasound images is challenging due to image variability, noise, and inter-observer subjectivity. This study aims to address these limitations by developing an automated and interpretable computer-aided diagnosis (CAD) system. Methods: We propose an automated and interpretable computer-aided diagnosis (CAD) system that integrates ensemble transfer learning with Vision Transformer architectures. The system combines the Data-Efficient Image Transformer (Deit) and Vision Transformer (ViT) through concatenation-based feature fusion to exploit their complementary representations. Preprocessing, normalization, and targeted data augmentation enhance robustness, while Gradient-weighted Class Activation Mapping (Grad-CAM) provides visual explanations to support clinical interpretability. The proposed model is benchmarked against state-of-the-art CNNs (VGG16, ResNet50, DenseNet201) and Transformer models (ViT, DeiT, Swin, Beit) using the Breast Ultrasound Images (BUSI) dataset. Results: The ensemble achieved 96.92% accuracy and 97.10% AUC for binary classification, and 94.27% accuracy with 94.81% AUC for three-class classification. External validation on independent datasets demonstrated strong generalizability, with 87.76%/88.07% accuracy/AUC on BrEaST, 86.77%/85.90% on BUS-BRA, and 86.99%/86.99% on BUSI_WHU. Performance decreased for fine-grained BI-RADS classification—76.68%/84.59% accuracy/AUC on BUS-BRA and 68.75%/81.10% on BrEaST—reflecting the inherent complexity and subjectivity of clinical subclassification. Conclusions: The proposed Vision Transformer-based ensemble demonstrates high diagnostic accuracy, strong cross-dataset generalization, and clinically meaningful explainability. 
These findings highlight its potential as a reliable second-opinion CAD tool for breast cancer diagnosis, particularly in resource-limited clinical environments. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Imaging and Signal Processing)

24 pages, 3288 KB  
Article
Multi-Task Deep Learning for Lung Nodule Detection and Segmentation in CT Scans
by Runhan Li and Barmak Honarvar Shakibaei Asli
Electronics 2026, 15(4), 736; https://doi.org/10.3390/electronics15040736 - 9 Feb 2026
Abstract
The early detection of pulmonary nodules in chest CT scans is critical for improving lung cancer outcomes. While existing computer-aided diagnosis (CAD) systems have shown promise, most treat detection and segmentation as separate tasks, leading to fragmented pipelines and limited representation sharing. This study proposes a 2.5D multi-task learning (MTL) framework that integrates both tasks within a unified Mask R-CNN architecture. The framework incorporates a tailored preprocessing pipeline—including Hounsfield Unit (HU) normalisation, CLAHE enhancement, and lung parenchyma masking—to improve input consistency and task-relevant contrast characteristics. To enhance sensitivity for small or ambiguous nodules, an auxiliary RoI classifier is introduced. Additionally, a nodule-level evaluation strategy aggregates slice-wise predictions across the z-axis, supporting a clinically meaningful assessment that approximates 3D diagnostic workflows. Experiments on the LUNA16 dataset demonstrate that the proposed framework achieves a favourable trade-off between detection and segmentation performance under a unified 2.5D multi-task setting. These results highlight the potential of integrated MTL approaches to advance CAD systems for early lung cancer screening. Full article
(This article belongs to the Special Issue Deep Learning for Computer Vision Application: Second Edition)

38 pages, 3458 KB  
Article
MERGE: Mammogram-Enhanced Representation via Wavelet-Guided CNNs for Computer-Aided Diagnosis of Breast Cancer
by Omneya Attallah
Mach. Learn. Knowl. Extr. 2026, 8(2), 40; https://doi.org/10.3390/make8020040 - 9 Feb 2026
Cited by 1
Abstract
The early and accurate identification of breast cancer is a significant healthcare issue, largely because traditional machine learning approaches rely on handcrafted features that are unable to fully capture the spatial and textural complexity found in mammograms. Even with the advancements made possible through deep learning and improvements in diagnostic performance, most computer-aided diagnosis (CAD) systems based on Convolutional Neural Networks (CNNs) still rely on single-domain features, normally spatial features, while neglecting important spectral and spatial–spectral features, leading to limitations in generalisability, redundancy, and loss of interpretability. Motivated by these limitations, this research proposes MERGE, a novel CAD framework that combines spatial, spectral, and spatial–spectral information within a single multistage architecture taking advantage of three fine-tuned CNN models (ResNet-50, Xception, and Inception). This system utilises Discrete Stationary Wavelet Transform (DSWT) to enhance spectral–spatial features; Discrete Cosine Transform (DCT) to fuse the features optimally, resulting in enhanced spatial and spatial–spectral representations; and, finally, Non-Negative Matrix Factorisation (NNMF) for dimensionality reduction. Finally, Linear Discriminant Analysis (LDA), support vector machine (SVM), and k-nearest neighbours (KNN) classifiers provide a robust diagnosis. In evaluations on the INBreast and MIAS datasets, accuracy, sensitivity, specificity, and AUC were around 99%, with performance surpassing state-of-the-art paradigms. These findings indicate that MERGE holds significant promise as a dependable and effective diagnostic tool, enhancing the consistency and interpretability of breast cancer screening results. Full article
(This article belongs to the Section Learning)

21 pages, 13845 KB  
Article
Semi-Automated Lung Segmentation Based on Region-Growing Methods in Interstitial Lung Disease
by Mădălin-Cristian Moraru, Cristiana-Iulia Dumitrescu, Suzana Măceș, Cătălin Ciobîrcă, Mihai Popescu, Luana Corina Lascu, Dragoș-Ovidiu Alexandru, Diana-Maria Trască, Diana Maria Ciobîrcă, Marian-Răzvan Bălan, Oana Sorina Tica, Radu Teodoru Popa and Daniela Dumitrescu
J. Clin. Med. 2026, 15(4), 1339; https://doi.org/10.3390/jcm15041339 - 8 Feb 2026
Viewed by 659
Abstract
Background: One of the main tools for investigating pulmonary disorders is computed tomography. Starting with a CT, analyses can be qualitative (e.g., direct interpretation of 2D slices, virtual bronchoscopy) or quantitative (e.g., fibrosis score). Qualitative analyses can be performed without segmentation, but quantitative analyses require lung segmentation. Methods: We present the concepts behind a class of lung segmentation methods based on region-growing algorithms, together with implementation and testing details and the results obtained in our software platform. Accurate segmentation of lung regions from medical images is a crucial step in computer-aided diagnosis (CAD) systems for pulmonary diseases such as chronic obstructive pulmonary disease (COPD), pneumonia, and lung cancer. Manual segmentation is time-consuming and subjective, while fully automated methods may fail under challenging imaging conditions. Results: This article presents a semi-automated lung segmentation approach, based on region-growing methods, that balances automation with user control. Conclusions: The proposed technique effectively delineates lung boundaries in computed tomography (CT), minimizing computational complexity and manual effort. Full article
(This article belongs to the Special Issue Advances in Pulmonary Disease Management and Innovation in Treatment)
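As a minimal illustration of the region-growing idea (not the authors' implementation), the sketch below grows a 4-connected region on a 2D slice from a user-supplied seed, accepting pixels whose intensity falls within a Hounsfield-unit interval. The function name and the default threshold range (a typical aerated-lung window on CT) are assumptions:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, lo=-1000.0, hi=-400.0):
    """Grow a region from `seed`, including 4-connected pixels whose
    value lies in [lo, hi]. Returns a boolean mask of the region."""
    mask = np.zeros(image.shape, dtype=bool)
    if not (lo <= image[seed] <= hi):
        return mask  # seed outside the intensity window: empty region
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc] and lo <= image[nr, nc] <= hi):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

In a semi-automated workflow, the user supplies the seed (and possibly adjusts the thresholds), which is exactly the balance between automation and user control the abstract describes.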
13 pages, 3297 KB  
Article
Effect of a Real-Time Artificial Intelligence-Assisted Ultrasound System on BI-RADS C4 Breast Lesions Based on Breast Density
by Jeeyeon Lee, Won Hwa Kim, Jaeil Kim, Byeongju Kang, Joon Suk Moon, Hye Jung Kim, Soo Jung Lee, In Hee Lee and Ho Yong Park
Cancers 2026, 18(3), 536; https://doi.org/10.3390/cancers18030536 - 6 Feb 2026
Viewed by 869
Abstract
Background: Artificial intelligence-based computer-aided diagnosis (AI-CAD) systems are increasingly used in breast ultrasonography; however, their diagnostic performance may vary with breast density. Given that dense breasts are highly prevalent among Asian women, understanding this relationship is essential for optimizing AI-assisted imaging strategies. Therefore, this study aims to evaluate the effect of breast density on the diagnostic accuracy of an AI-CAD ultrasound system in BI-RADS category 4 (C4) breast lesions. Methods: In total, 110 consecutive BI-RADS C4 lesions were reviewed between January and December 2023. An AI-CAD ultrasound system automatically assigned BI-RADS categories and calculated the probability of malignancy (POM) using static ultrasound images. Histopathology served as the reference standard, with atypia and malignancy combined into a non-benign category. Diagnostic performance—including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and overall accuracy—was analyzed based on breast density (BI-RADS B–D), determined using AI-assisted mammography. Results: Overall, the sensitivity and NPV were 81.3% and 87.5%, respectively, while the specificity and PPV were lower at 53.8% and 41.9%. All diagnostic performance metrics improved with increasing breast density. In the density D category, sensitivity (92.3%), specificity (61.5%), NPV (96.0%), and accuracy (69.2%) were highest. Additionally, concordance between AI-assigned BI-RADS categories and histopathologic diagnoses increased with density (B: 50.0%, C: 57.5%, D: 67.3%). Across all density groups, non-benign lesions consistently demonstrated higher POM values. Conclusions: Breast density significantly affects the diagnostic performance of AI-CAD ultrasound in BI-RADS C4 lesions. The AI system demonstrates higher accuracy and concordance in dense breasts, suggesting more consistent lesion interpretation in high-density environments. These findings highlight the potential utility of AI-assisted ultrasound as a diagnostic adjunct, particularly for Asian women, who commonly have dense breast composition. Further multicenter, real-time studies are warranted to validate these findings. Full article
(This article belongs to the Special Issue Application of Ultrasound in Cancer Diagnosis and Treatment)
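The diagnostic metrics reported in the abstract are straightforward functions of the confusion-matrix counts; a small helper (purely illustrative, not from the paper) makes the standard definitions explicit:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary diagnostic-test metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),                # true-positive rate
        "specificity": tn / (tn + fp),                # true-negative rate
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # overall agreement
    }
```

Note how PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the study sample, which is one reason these metrics can shift across breast-density subgroups.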
21 pages, 2169 KB  
Article
Enhancing Early Detection of Alzheimer’s Disease via Vision Transformer Machine Learning Architecture Using MRI Images
by Wided Hechkel, Marco Leo, Pierluigi Carcagnì, Marco Del-Coco and Abdelhamid Helali
Information 2026, 17(2), 163; https://doi.org/10.3390/info17020163 - 6 Feb 2026
Viewed by 1155
Abstract
Computer-aided diagnosis (CAD) systems based on deep learning have shown significant potential for Alzheimer’s disease (AD) stage classification from Magnetic Resonance Imaging (MRI). Nevertheless, challenges such as class imbalance, small sample sizes, and the presence of multiple slices per subject may lead to biased evaluation and statistically unreliable performance, particularly for minority classes. In this study, a Vision Transformer (ViT)-based framework is proposed for multi-class AD classification using a Kaggle dataset containing 6400 MRI slices across four cognitive stages. A subject-wise data-splitting strategy is employed to prevent information leakage between the training and testing sets, and the statistical unreliability of near-perfect scores in underrepresented classes is critically examined. An ablation study is conducted to assess the contribution of key architectural components, demonstrating the effectiveness of self-attention and patch embedding in capturing discriminative features. Furthermore, attention-based visualization maps are incorporated to highlight brain regions influencing the model’s decisions and to illustrate subtle anatomical differences between MildDemented and VeryMildDemented cases. The proposed approach achieves a test accuracy of 97.98%, outperforming existing methods on the same dataset while providing improved interpretability, thereby supporting early and accurate AD stage identification. Full article
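The subject-wise split described above — keeping all slices from one subject in the same partition so that no subject contributes to both training and testing — can be sketched with scikit-learn's group-aware splitter. The function and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def subject_wise_split(n_slices, subject_ids, test_size=0.2, seed=0):
    """Split slice indices so that every subject's slices fall entirely in
    either the training or the test partition (no cross-set leakage)."""
    gss = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    idx = np.arange(n_slices)
    train_idx, test_idx = next(gss.split(idx, groups=subject_ids))
    return train_idx, test_idx
```

A naive random split over slices would almost certainly place some slices of a subject in the training set and others in the test set, inflating accuracy; splitting by group is what prevents that.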