Deep Learning Applications in Medical Imaging

A special issue of Computation (ISSN 2079-3197). This special issue belongs to the section "Computational Engineering".

Deadline for manuscript submissions: closed (30 June 2024) | Viewed by 10744

Special Issue Editor


Guest Editor
Harvard Medical School, Harvard University, Boston, MA 02114, USA
Interests: deep learning; machine learning; medical informatics; physics-informed data-driven methods

Special Issue Information

Dear Colleagues,

The success of deep learning (DL) and pattern recognition in many medical image analysis applications suggests that DL, or artificial intelligence (AI), can bring revolutionary changes to healthcare. Instead of subjective, experience-driven diagnosis and prognosis, the widespread accumulation of medical data offers unprecedented opportunities for data-driven DL algorithms to learn accurate and robust models. Despite the optimism in this new era of DL, the development and implementation of DL or AI tools in clinical practice face many challenges, including the segmentation of small lesions, performance drops across different scanners and populations, costly labeling, data privacy concerns, and the reliability and comprehensibility of AI models.

The aim of this Special Issue is to highlight the advances and technologies in deep learning and pattern recognition in medical image analysis, thereby helping to provide reliable intelligent aids for patient care. The topics of interest include, but are not limited to, the following:

  1. disease diagnosis and prognosis;
  2. medical image reconstruction;
  3. medical image processing;
  4. lesion or anatomical structure segmentation;
  5. uncertainty quantification in medical image analysis;
  6. transfer learning across modalities, scanners, and populations;
  7. image registration and atlas construction;
  8. explainable medical AI systems;
  9. privacy in medical data analysis;
  10. multimodality medical data analysis;
  11. novel public clinical datasets with baselines.

Dr. Xiaofeng Liu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computation is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • medical image analysis
  • medical imaging
  • image processing
  • computer-assisted interventions

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research


18 pages, 1556 KiB  
Article
Bayesian Optimized Machine Learning Model for Automated Eye Disease Classification from Fundus Images
by Tasnim Bill Zannah, Md. Abdulla-Hil-Kafi, Md. Alif Sheakh, Md. Zahid Hasan, Taslima Ferdaus Shuva, Touhid Bhuiyan, Md. Tanvir Rahman, Risala Tasin Khan, M. Shamim Kaiser and Md Whaiduzzaman
Computation 2024, 12(9), 190; https://doi.org/10.3390/computation12090190 - 16 Sep 2024
Viewed by 981
Abstract
Eye diseases are disorders that damage the tissues and related parts of the eye. They appear in various forms and can be minor and short-lived or can cause permanent blindness. Cataracts, glaucoma, and diabetic retinopathy are all eye illnesses that can cause vision loss if not discovered and treated early on. Automated classification of these diseases from fundus images can enable quicker diagnoses and interventions. Our research aims to create a robust model, BayeSVM500, for eye disease classification to enhance medical technology and improve patient outcomes. In this study, we develop models to classify images accurately. We start by preprocessing fundus images using contrast enhancement, normalization, and resizing. We then leverage several state-of-the-art pre-trained deep convolutional neural networks, including VGG16, VGG19, ResNet50, EfficientNet, and DenseNet, to extract deep features. To reduce feature dimensionality, we employ techniques such as principal component analysis (PCA), feature agglomeration, correlation analysis, variance thresholding, and feature importance rankings. Using these refined features, we train various traditional machine learning models as well as ensemble methods. Our best model, named BayeSVM500, is a support vector machine classifier trained on EfficientNet features reduced to 500 dimensions via PCA, achieving 93.65 ± 1.05% accuracy. Bayesian hyperparameter optimization further improved performance to 95.33 ± 0.60%. Through comprehensive feature engineering and model optimization, we demonstrate highly accurate eye disease classification from fundus images, comparable or superior to previous benchmarks.
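The pipeline the abstract describes (deep features → PCA → SVM with hyperparameter search) can be sketched as follows. This is a minimal illustration, not the paper's code: synthetic vectors stand in for EfficientNet embeddings, the dimensionality is scaled down from 500, and scikit-learn's grid search stands in for the Bayesian optimizer over the same SVM parameters.

```python
# Sketch of a BayeSVM500-style pipeline: deep features -> PCA -> SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 300 "images" with 256-dim stand-in features, 4 classes
# (e.g. cataract, glaucoma, diabetic retinopathy, normal).
X, y = make_classification(n_samples=300, n_features=256, n_informative=32,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=50)),  # the paper keeps 500 EfficientNet dimensions
    ("svm", SVC()),
])
# A small grid search stands in for Bayesian hyperparameter optimization.
search = GridSearchCV(pipe, {"svm__C": [1, 10], "svm__gamma": ["scale", 0.01]}, cv=3)
search.fit(X_tr, y_tr)
print(f"held-out accuracy: {search.score(X_te, y_te):.3f}")
```

A Bayesian optimizer (e.g. scikit-optimize's `BayesSearchCV`) exposes the same `fit`/`score` interface, so it could replace the grid search without changing the rest of the pipeline.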
(This article belongs to the Special Issue Deep Learning Applications in Medical Imaging)

25 pages, 7331 KiB  
Article
A Deep Learning Approach for Brain Tumor Firmness Detection Based on Five Different YOLO Versions: YOLOv3–YOLOv7
by Norah Fahd Alhussainan, Belgacem Ben Youssef and Mohamed Maher Ben Ismail
Computation 2024, 12(3), 44; https://doi.org/10.3390/computation12030044 - 1 Mar 2024
Cited by 4 | Viewed by 3592
Abstract
Brain tumor diagnosis traditionally relies on the manual examination of magnetic resonance images (MRIs), a process that is prone to human error and is also time consuming. Recent advancements leverage machine learning models to categorize tumors, such as distinguishing between “malignant” and “benign” classes. This study focuses on the supervised machine learning task of classifying “firm” and “soft” meningiomas, critical for determining optimal brain tumor treatment. The research aims to enhance meningioma firmness detection using state-of-the-art deep learning architectures. The study employs a YOLO architecture adapted for meningioma classification (Firm vs. Soft). This YOLO-based model serves as a machine learning component within a proposed CAD system. To improve model generalization and combat overfitting, transfer learning and data augmentation techniques are explored. Intra-model analysis is conducted for each of the five YOLO versions, optimizing parameters such as the optimizer, batch size, and learning rate based on sensitivity and training time. YOLOv3, YOLOv4, and YOLOv7 demonstrate exceptional sensitivity, reaching 100%. Comparative analysis against state-of-the-art models highlights their superiority. YOLOv7, utilizing the SGD optimizer, a batch size of 64, and a learning rate of 0.01, achieves outstanding overall performance with metrics including mean average precision (99.96%), precision (98.50%), specificity (97.95%), balanced accuracy (98.97%), and F1-score (99.24%). This research showcases the effectiveness of YOLO architectures in meningioma firmness detection, with YOLOv7 emerging as the optimal model. The study’s findings underscore the significance of model selection and parameter optimization for achieving high sensitivity and robust overall performance in brain tumor classification. Full article
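The metrics the abstract reports (sensitivity, specificity, precision, balanced accuracy, F1) all derive from the binary firm-vs-soft confusion matrix. A minimal sketch of those derivations; the counts below are illustrative, not the paper's:

```python
# Compute the evaluation metrics used to rank the YOLO variants from
# binary (firm vs. soft) confusion-matrix counts.
def classification_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)            # recall on the "firm" class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    balanced_acc = (sensitivity + specificity) / 2
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "balanced_accuracy": balanced_acc, "f1": f1}

m = classification_metrics(tp=98, fp=2, tn=95, fn=0)  # hypothetical counts
print({k: round(v, 4) for k, v in m.items()})
```

With zero false negatives, sensitivity is 100% even though precision and specificity are below it, which is why a model can top the sensitivity ranking while balanced accuracy still separates the candidates.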
(This article belongs to the Special Issue Deep Learning Applications in Medical Imaging)

14 pages, 2761 KiB  
Article
A 16 × 16 Patch-Based Deep Learning Model for the Early Prognosis of Monkeypox from Skin Color Images
by Muhammad Asad Arshed, Hafiz Abdul Rehman, Saeed Ahmed, Christine Dewi and Henoch Juli Christanto
Computation 2024, 12(2), 33; https://doi.org/10.3390/computation12020033 - 10 Feb 2024
Cited by 2 | Viewed by 2135
Abstract
The DNA virus responsible for monkeypox, transmitted from animals to humans, exhibits two distinct genetic lineages in central and eastern Africa. Beyond the zoonotic transmission involving direct contact with the infected animals’ bodily fluids and blood, the spread of monkeypox can also occur through skin lesions and respiratory secretions among humans. Both monkeypox and chickenpox involve skin lesions and can also be transmitted through respiratory secretions, but they are caused by different viruses: monkeypox by an orthopoxvirus and chickenpox by the varicella-zoster virus. In this study, the utilization of a patch-based vision transformer (ViT) model for the identification of monkeypox and chickenpox disease from human skin color images marks a significant advancement in medical diagnostics. Employing a transfer learning approach, the research investigates the ViT model’s capability to discern subtle patterns indicative of monkeypox and chickenpox. The dataset was enriched through carefully selected image augmentation techniques, enhancing the model’s ability to generalize across diverse scenarios. During the evaluation phase, the patch-based ViT model demonstrated substantial proficiency, achieving an accuracy, precision, recall, and F1 score of 93%. This positive outcome underscores the practicality of employing sophisticated deep learning architectures, specifically vision transformers, in the realm of medical image analysis. Through the integration of transfer learning and image augmentation, not only is the model’s responsiveness to monkeypox- and chickenpox-related features enhanced, but concerns regarding data scarcity are also effectively addressed. The model outperformed state-of-the-art studies and CNN-based pre-trained models in terms of accuracy.
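The "16 × 16 patch-based" front end named in the title refers to the ViT tokenization step: the image is cut into non-overlapping 16 × 16 patches, each flattened into a token vector before entering the transformer. A minimal NumPy sketch of that step (the downstream transformer and transfer-learning stages are omitted):

```python
# Split an (H, W, C) image into flattened 16x16 ViT patch tokens.
import numpy as np

def to_patches(img, p=16):
    """Return an (H//p * W//p, p*p*C) array of flattened patches."""
    h, w, c = img.shape
    assert h % p == 0 and w % p == 0, "image dims must be multiples of the patch size"
    # (H//p, p, W//p, p, C) -> (H//p, W//p, p, p, C) -> (num_patches, p*p*C)
    patches = img.reshape(h // p, p, w // p, p, c).swapaxes(1, 2)
    return patches.reshape(-1, p * p * c)

img = np.zeros((224, 224, 3), dtype=np.float32)  # a dummy 224x224 skin image
tokens = to_patches(img)
print(tokens.shape)  # (196, 768): a 14x14 grid of patches, each 16*16*3 values
```

A learned linear projection then maps each 768-dimensional patch vector to the transformer's embedding width, which is where the pre-trained (transfer-learned) weights enter.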
(This article belongs to the Special Issue Deep Learning Applications in Medical Imaging)

Review


27 pages, 4723 KiB  
Review
Methods for Detecting the Patient’s Pupils’ Coordinates and Head Rotation Angle for the Video Head Impulse Test (vHIT), Applicable for the Diagnosis of Vestibular Neuritis and Pre-Stroke Conditions
by G. D. Mamykin, A. A. Kulesh, Fedor L. Barkov, Y. A. Konstantinov, D. P. Sokol’chik and Vladimir Pervadchuk
Computation 2024, 12(8), 167; https://doi.org/10.3390/computation12080167 - 18 Aug 2024
Viewed by 3526
Abstract
In the contemporary era, dizziness is a prevalent ailment among patients. It can be caused by either vestibular neuritis or a stroke. Given the lack of diagnostic utility of instrumental methods in acute isolated vertigo, the differentiation of vestibular neuritis and stroke is primarily clinical. As a part of the initial differential diagnosis, the physician focuses on the characteristics of nystagmus and the results of the video head impulse test (vHIT). Instruments for accurate vHIT are costly and are often utilized exclusively in healthcare settings. The objective of this paper is to review contemporary methodologies for accurately detecting the position of pupil centers in both eyes of a patient and for precisely extracting their coordinates. Additionally, the paper describes methods for accurately determining the head rotation angle under diverse imaging and lighting conditions. Furthermore, the suitability of these methods for vHIT is evaluated. We assume a maximum allowable error of 0.005 radians per frame when detecting the pupils’ coordinates and 0.3 degrees per frame when detecting the head position. We found that, under such conditions, the most suitable approaches for head posture detection are deep learning (including LSTM networks), search by template matching, linear regression of EMG sensor data, and optical fiber sensor usage. The most relevant approaches for pupil localization for our medical tasks are deep learning, geometric transformations, decision trees, and RANSAC. This study might assist in the identification of a number of approaches that can be employed in the future to construct a high-accuracy system for vHIT based on a smartphone or a home computer, with subsequent signal processing and initial diagnosis.
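One of the approaches the review surveys, search by template matching, can be illustrated in a few lines. This is a toy sketch on a synthetic "eye": a dark pupil-like square is located by sliding a template over the image and minimizing the sum of squared differences (real systems use normalized cross-correlation on camera frames, plus subpixel refinement to meet the error budget above).

```python
# Toy template matching: locate a dark pupil-like region by exhaustive
# search minimizing the sum of squared differences (SSD).
import numpy as np

def match_template(image, template):
    """Return the (row, col) top-left corner with the smallest SSD."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

eye = np.ones((40, 60))      # bright sclera/background
eye[18:26, 30:38] = 0.0      # dark 8x8 "pupil" at row 18, col 30
pupil = np.zeros((8, 8))     # template: a dark square
print(match_template(eye, pupil))  # (18, 30)
```

The pupil center is then the corner plus half the template size; at video frame rates the exhaustive search would be restricted to a window around the previous frame's estimate.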
(This article belongs to the Special Issue Deep Learning Applications in Medical Imaging)
