Deep Learning in Medical Image Segmentation and Diagnosis

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 31 January 2025 | Viewed by 8522

Special Issue Editor


Guest Editor
Department of Computer Engineering, Director of Applied Artificial Intelligence Research Centre, Near East University, Nicosia, Turkey
Interests: computer engineering; artificial intelligence; signal processing and algorithms; deep learning

Special Issue Information

Dear Colleagues,

Deep learning has achieved tremendous growth in signal processing, pattern recognition, natural language processing, forecasting, image and speech recognition, medicine, healthcare, computer vision, and data science applications. Deep learning techniques can efficiently process large volumes of data, automatically extract useful features from data sources and visual images, learn temporal relationships between the items of a time series, transfer knowledge from one system to another, and solve problems characterized by high-order nonlinearities. Various emerging directions in deep learning, such as attention mechanisms, transformers, generative adversarial networks, and residual networks, have attracted attention for their potential in solving such problems. Image segmentation and diagnosis are among the most active problems in medicine. The availability of sufficient data and images from medical devices has accelerated the development of computer-aided systems for the segmentation, diagnosis, and analysis of diseases. Systems based on MRI, CT, and X-ray images have recently attracted great interest for medical investigation. Deep learning is one of the emerging approaches that can exploit these data and images and deliver effective solutions. Because of its high computational power, deep learning can be used efficiently in medical diagnosis and medical image processing.

The goal of this Special Issue is to collect research articles on the application of emerging deep learning techniques to medical diagnostic problems.

Potential topics include, but are not limited to:

  • Deep learning in medical diagnosis;
  • Medical imaging and deep learning;
  • Data mining and deep learning;
  • Emerging deep learning techniques and diagnosis;
  • Deep learning for the classification of lesions and disease;
  • Deep learning for segmentation, denoising, and super-resolution;
  • Deep learning and healthcare;
  • Deep learning for signal analysis;
  • Learning mechanisms in deep neural networks;
  • Ensemble learning;
  • Stacked networks;
  • Medical informatics;
  • Computer-assisted diagnosis.

Authors are required to read the guidelines for the preparation of research papers. Prospective authors should submit their manuscripts through the manuscript submission system at https://www.mdpi.com/journal/diagnostics/sections.

Prof. Dr. Rahib Abiyev
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning in medical diagnosis
  • medical imaging and deep learning
  • data mining and deep learning
  • emerging deep learning techniques and diagnosis
  • deep learning for the classification of lesions and disease
  • deep learning for segmentation, denoising, and super resolution
  • deep learning for signal analysis
  • learning mechanisms in deep neural networks
  • ensemble learning
  • stacked networks
  • medical informatics
  • computer-assisted diagnosis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

16 pages, 4167 KiB  
Article
Magnetic Resonance Imaging Texture Analysis Based on Intraosseous and Extraosseous Lesions to Predict Prognosis in Patients with Osteosarcoma
by Yu Mori, Hainan Ren, Naoko Mori, Munenori Watanuki, Shin Hitachi, Mika Watanabe, Shunji Mugikura and Kei Takase
Diagnostics 2024, 14(22), 2562; https://doi.org/10.3390/diagnostics14222562 - 15 Nov 2024
Viewed by 329
Abstract
Objectives: To construct an optimal magnetic resonance imaging (MRI) texture model to evaluate histological patterns and predict prognosis in patients with osteosarcoma (OS). Methods: Thirty-four patients underwent pretreatment MRI and were diagnosed as having OS by surgical resection or biopsy between September 2008 and June 2018. Histological patterns and 3-year survival were recorded. Manual segmentation was performed on intraosseous, extraosseous, and entire lesions on T1-weighted, T2-weighted, and contrast-enhanced T1-weighted images to extract texture features and perform principal component analysis. A support vector machine algorithm with 3-fold cross-validation was used to construct and validate the models. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate diagnostic performance in evaluating histological patterns and 3-year survival. Results: Eight patients had a chondroblastic pattern and the remaining twenty-six patients had non-chondroblastic patterns. Twenty-seven patients were 3-year survivors, and the remaining seven patients were non-survivors. In discriminating chondroblastic from non-chondroblastic patterns, the model from extraosseous lesions on T2-weighted images showed the highest diagnostic performance (AUCs of 0.94 and 0.89 in the training and validation sets). The model from intraosseous lesions on T1-weighted images showed the highest diagnostic performance in discriminating 3-year non-survivors from survivors (AUCs of 0.99 and 0.88 in the training and validation sets), with a sensitivity, specificity, positive predictive value, and negative predictive value of 85.7%, 92.6%, 75.0%, and 96.2%, respectively. Conclusions: Texture models of extraosseous lesions on T2-weighted images can discriminate the chondroblastic pattern from non-chondroblastic patterns, while texture models of intraosseous lesions on T1-weighted images can discriminate 3-year non-survivors from survivors.
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
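The pipeline described in the abstract above (texture features reduced by principal component analysis, then classified by a support vector machine under 3-fold cross-validation) can be sketched with scikit-learn. The code below is an illustrative reconstruction on synthetic stand-in features, not the authors' implementation; the feature count, PCA dimensionality, and kernel are assumptions.

```python
# Illustrative sketch (not the authors' code): PCA-reduced texture features
# classified with an SVM under 3-fold cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for texture features extracted from segmented MRI lesions:
# 34 patients x 60 features, with a weak class-dependent shift.
X = rng.normal(size=(34, 60))
y = np.array([0] * 26 + [1] * 8)  # e.g. non-chondroblastic vs. chondroblastic
X[y == 1] += 0.8

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(scores.mean())  # cross-validated AUC on the synthetic data
```

Standardizing before PCA keeps high-variance texture features from dominating the components, and stratified folds preserve the 26:8 class ratio in each split.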

16 pages, 1888 KiB  
Article
Enhancing Breast Cancer Detection through Advanced AI-Driven Ultrasound Technology: A Comprehensive Evaluation of Vis-BUS
by Hyuksool Kwon, Seok Hwan Oh, Myeong-Gee Kim, Youngmin Kim, Guil Jung, Hyeon-Jik Lee, Sang-Yun Kim and Hyeon-Min Bae
Diagnostics 2024, 14(17), 1867; https://doi.org/10.3390/diagnostics14171867 - 26 Aug 2024
Viewed by 1310
Abstract
This study aims to enhance breast cancer detection accuracy through an AI-driven ultrasound tool, Vis-BUS, developed by Barreleye Inc., Seoul, South Korea. Vis-BUS incorporates Lesion Detection AI (LD-AI) and Lesion Analysis AI (LA-AI), along with a Cancer Probability Score (CPS), to differentiate between benign and malignant breast lesions. A retrospective analysis was conducted on 258 breast ultrasound examinations to evaluate Vis-BUS’s performance. The primary methods included the application of LD-AI and LA-AI to B-mode ultrasound images and the generation of a CPS for each lesion. Diagnostic accuracy was assessed using metrics such as the Area Under the Receiver Operating Characteristic curve (AUROC) and the Area Under the Precision-Recall curve (AUPRC). The study found that Vis-BUS achieved high diagnostic accuracy, with an AUROC of 0.964 and an AUPRC of 0.967, indicating its effectiveness in distinguishing between benign and malignant lesions. Logistic regression analysis identified that ‘Fatty’ lesion density had an extremely high odds ratio (OR) of 27.7781, suggesting potential convergence issues. The ‘Unknown’ density category had an OR of 0.3185, indicating a lower likelihood of correct classification. Medium and large lesion sizes were associated with lower likelihoods of correct classification, with ORs of 0.7891 and 0.8014, respectively. The presence of microcalcifications showed an OR of 1.360. Among Breast Imaging-Reporting and Data System categories, category C5 had a significantly higher OR of 10.173, reflecting a higher likelihood of correct classification. Vis-BUS significantly improves diagnostic precision and supports clinical decision-making in breast cancer screening. However, further refinement is needed in areas like lesion density characterization and calcification detection to optimize its performance.
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
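The AUROC reported above has a simple rank interpretation: it is the probability that a randomly chosen malignant lesion receives a higher score than a randomly chosen benign one, with ties counting half. A minimal pairwise implementation (illustration only; the scores below are invented, not Vis-BUS outputs):

```python
# AUROC as a pairwise rank statistic: P(score_malignant > score_benign),
# with ties counted as 0.5. Equivalent to the Mann-Whitney U statistic
# divided by the number of positive-negative pairs.
def auroc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical Cancer Probability Scores for malignant and benign lesions:
malignant = [0.91, 0.85, 0.77, 0.60]
benign = [0.40, 0.35, 0.22, 0.60, 0.10]
print(auroc(malignant, benign))  # 0.975; perfect separation would give 1.0
```

The quadratic pairwise loop is fine for small samples; production metrics libraries compute the same quantity from sorted ranks in O(n log n).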

12 pages, 2981 KiB  
Article
Differential Diagnosis of OKC and SBC on Panoramic Radiographs: Leveraging Deep Learning Algorithms
by Su-Yi Sim, JaeJoon Hwang, Jihye Ryu, Hyeonjin Kim, Eun-Jung Kim and Jae-Yeol Lee
Diagnostics 2024, 14(11), 1144; https://doi.org/10.3390/diagnostics14111144 - 30 May 2024
Viewed by 683
Abstract
This study aims to determine whether a deep learning algorithm can distinguish odontogenic keratocyst (OKC) from simple bone cyst (SBC) based solely on preoperative panoramic radiographs. (1) Methods: We conducted a retrospective analysis of patient data from January 2018 to December 2022 at Pusan National University Dental Hospital. This study included 63 cases of OKC confirmed by histological examination after surgical excision and 125 cases of SBC that underwent surgical curettage. All panoramic radiographs, which already carried diagnostic data, were obtained with the Proline XC system (Planmeca Co., Helsinki, Finland). The panoramic images were cropped to 299 × 299 pixels and divided into 80% training and 20% validation data sets for 5-fold cross-validation. The Inception-ResNet-V2 architecture was adopted to train for OKC and SBC discrimination. (2) Results: The classification network achieved 0.829 accuracy, 0.800 precision, 0.615 recall, and a 0.695 F1 score. (3) Conclusions: The deep learning algorithm demonstrated notable accuracy in distinguishing OKC from SBC, facilitated by CAM visualization. This progress is expected to become an essential resource for clinicians, improving diagnostic and treatment outcomes.
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
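The metrics reported above are internally consistent, which is easy to cross-check: the F1 score is the harmonic mean of precision and recall, so the reported precision (0.800) and recall (0.615) should reproduce the reported F1 (0.695).

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.800, 0.615), 3))  # 0.695, matching the reported value
```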

16 pages, 2220 KiB  
Article
Assessment of Deep Learning Models for Cutaneous Leishmania Parasite Diagnosis Using Microscopic Images
by Ali Mansour Abdelmula, Omid Mirzaei, Emrah Güler and Kaya Süer
Diagnostics 2024, 14(1), 12; https://doi.org/10.3390/diagnostics14010012 - 20 Dec 2023
Cited by 3 | Viewed by 1788
Abstract
Cutaneous leishmaniasis (CL) is a common illness that causes skin lesions, principally ulcerations, on exposed regions of the body. Although neglected tropical diseases (NTDs) are typically found in tropical areas, they have recently become more common along Africa’s northern coast, particularly in Libya. The devastation of healthcare infrastructure during the 2011 war and the following conflicts, as well as governmental apathy, may be causal factors associated with this catastrophic event. The main objective of this study is to evaluate alternative diagnostic strategies for recognizing amastigotes of cutaneous leishmaniasis parasites at various stages using Convolutional Neural Networks (CNNs). The study additionally tests different classification models on a dataset of ultra-thin skin smear images from people with cutaneous leishmaniasis. The pre-trained deep learning models EfficientNetB0, DenseNet201, ResNet101, MobileNetv2, and Xception are used for the cutaneous Leishmania parasite diagnosis task. To assess the models’ effectiveness, we employed a five-fold cross-validation approach to guarantee the consistency of the models’ outputs when applied to different portions of the full dataset. Following a thorough assessment and comparison of the various models, DenseNet-201 proved to be the most suitable choice. It attained a mean accuracy of 0.9914 along with outstanding results for sensitivity, specificity, positive predictive value, negative predictive value, F1-score, Matthew’s correlation coefficient, and Cohen’s Kappa coefficient. The DenseNet-201 model surpassed the other models based on a comprehensive evaluation of these key classification performance metrics.
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
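The five-fold cross-validation protocol used above partitions the dataset into five folds, each serving once as the held-out set, so every sample is validated exactly once. A generic index-splitting sketch of the protocol (not the authors' code; real pipelines would also shuffle and stratify by class):

```python
# Minimal k-fold cross-validation split over sample indices.
def k_fold_indices(n_samples, k=5):
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits

splits = k_fold_indices(10, k=5)
for train, val in splits:
    print(len(train), len(val))  # 8 train / 2 validation indices per fold
```

Averaging a metric over the five held-out folds gives the reported mean accuracy, with every image contributing to validation exactly once.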

14 pages, 2189 KiB  
Article
MRI-Based Radiomics Analysis of Levator Ani Muscle for Predicting Urine Incontinence after Robot-Assisted Radical Prostatectomy
by Mohammed Shahait, Ruben Usamentiaga, Yubing Tong, Alex Sandberg, David I. Lee, Jayaram K. Udupa and Drew A. Torigian
Diagnostics 2023, 13(18), 2913; https://doi.org/10.3390/diagnostics13182913 - 11 Sep 2023
Cited by 1 | Viewed by 1348
Abstract
Background: The exact role of the levator ani (LA) muscle in male continence remains unclear, and so this study aims to shed light on the topic by characterizing MRI-derived radiomic features of the LA muscle and their association with postoperative incontinence in men undergoing prostatectomy. Method: In this retrospective study, 140 patients who underwent robot-assisted radical prostatectomy (RARP) for prostate cancer and had preoperative MRI were identified. A biomarker discovery approach based on the optimal biomarker (OBM) method was used to extract features from MRI images, including morphological, intensity-based, and texture-based features of the LA muscle, along with clinical variables. Mathematical models were created using subsets of features and were evaluated based on their ability to predict continence outcomes. Results: Univariate analysis showed that the best discriminators between continent and incontinent patients were patient age and features related to LA muscle texture. The proposed feature selection approach found that the best classifier used six features: age, LA muscle texture properties, and the ratio between LA size descriptors. This configuration produced a classification accuracy of 0.84 with a sensitivity of 0.90, specificity of 0.75, and an area under the ROC curve of 0.89. Conclusion: This study found that certain patient factors, such as increased age and specific texture properties of the LA muscle, can increase the odds of incontinence after RARP. The results showed that the proposed approach was highly effective and could distinguish continent from incontinent patients with high accuracy.
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)

16 pages, 3338 KiB  
Article
Enhancing Cervical Pre-Cancerous Classification Using Advanced Vision Transformer
by Manal Darwish, Mohamad Ziad Altabel and Rahib H. Abiyev
Diagnostics 2023, 13(18), 2884; https://doi.org/10.3390/diagnostics13182884 - 8 Sep 2023
Cited by 3 | Viewed by 2057
Abstract
One of the most common types of cancer in women is cervical cancer. Incidence and fatality rates are steadily rising, particularly in developing nations, due to a lack of screening facilities, experienced specialists, and public awareness. Cervical cancer is screened by visual inspection after the application of acetic acid (VIA), the histopathology test, the Papanicolaou (Pap) test, and the human papillomavirus (HPV) test. The goal of this research is to employ a vision transformer (ViT) enhanced with shifted patch tokenization (SPT) to create an integrated and robust system for automatic cervix-type identification. The vision transformer with shifted patch tokenization is used in this work to learn the distinct features of the three cervical pre-cancerous types. The model was trained and tested on 8215 colposcopy images of the three types, obtained from the publicly available mobile-ODT dataset. The model was tested on 30% of the whole dataset and showed a good generalization capability of 91% accuracy. A comparison with the state of the art indicated that our model outperforms existing approaches. The experimental results show that the suggested system can be employed as a decision support tool in the detection of the cervical pre-cancer transformation zone, particularly in low-resource settings with limited experience and resources.
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
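Shifted patch tokenization augments each input image with diagonally shifted copies before patch embedding, so every token sees context beyond its own patch boundary. The NumPy sketch below follows the usual SPT formulation (concatenating four diagonal shifts along the channel axis, then flattening non-overlapping patches into tokens); the patch size, shift amount, and function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of shifted patch tokenization (SPT): the image is concatenated with
# four diagonally shifted copies (C -> 5C channels), then split into
# non-overlapping patches and flattened into tokens for a linear embedding.
import numpy as np

def shift(img, dy, dx):
    """Shift an HxWxC image by (dy, dx), zero-padding the vacated border."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    ys, yd = (slice(dy, h), slice(0, h - dy)) if dy >= 0 else (slice(0, h + dy), slice(-dy, h))
    xs, xd = (slice(dx, w), slice(0, w - dx)) if dx >= 0 else (slice(0, w + dx), slice(-dx, w))
    out[yd, xd] = img[ys, xs]
    return out

def spt_tokens(img, patch=4, s=2):
    # Stack the image with its 4 diagonally shifted copies along channels.
    stacked = np.concatenate(
        [img] + [shift(img, dy, dx) for dy, dx in [(s, s), (s, -s), (-s, s), (-s, -s)]],
        axis=-1,
    )
    h, w, c = stacked.shape
    # Split into non-overlapping patch x patch blocks and flatten each block.
    tokens = stacked.reshape(h // patch, patch, w // patch, patch, c)
    tokens = tokens.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    return tokens  # (num_patches, patch*patch*5C)

tokens = spt_tokens(np.ones((8, 8, 3)), patch=4, s=2)
print(tokens.shape)  # (4, 240): four 4x4 patches, each with 4*4*15 values
```

Compared with standard ViT tokenization, each token's dimensionality grows fivefold, which is what gives SPT its enlarged effective receptive field on small datasets.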
