Advances in Artificial Intelligence for Medical Image Analysis

A special issue of Life (ISSN 2075-1729). This special issue belongs to the section "Radiobiology and Nuclear Medicine".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 8061

Special Issue Editor

Dr. Xin Jin
School of Software, Yunnan University, Kunming, China
Interests: machine learning; artificial neural networks; image processing; bioinformatics; artificial-intelligence-based information security

Special Issue Information

Dear Colleagues,

Medical image analysis plays a vital role in diagnosing numerous pathologies, ranging from infectious diseases to cancer. At the same time, artificial intelligence (AI) is rapidly transforming medical imaging, with new AI-powered tools and techniques emerging continually. These advances make it possible to detect disease earlier, more accurately, and more efficiently than ever before.

One of the most promising directions for AI in medical imaging is the development of deep learning algorithms. Deep learning offers powerful feature extraction and pattern classification, learning directly from data, which makes it well suited to image classification, object detection, and segmentation, all central tasks in medical image analysis. As a result, much of the processing burden in medical imaging has shifted from the human to the computer, opening this important area to a broader community of researchers and ultimately improving the quality of patient care.

This Special Issue seeks high-quality research articles and comprehensive reviews on the application of artificial intelligence to medical image analysis.

Dr. Xin Jin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, navigate to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Life is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • medical imaging
  • deep learning for medical imaging
  • histopathological image analysis
  • medical diagnosis
  • visualization in biomedical imaging
  • magnetic resonance imaging (MRI)
  • X-ray computed tomography (CT)
  • positron emission tomography (PET)
  • medical image fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

11 pages, 1202 KiB  
Article
Contribution of an Artificial Intelligence Tool in the Detection of Incidental Pulmonary Embolism on Oncology Assessment Scans
by Samy Ammari, Astrid Orfali Camez, Angela Ayobi, Sarah Quenet, Amir Zemmouri, El Mehdi Mniai, Yasmina Chaibi, Angelo Franciosini, Louis Clavel, François Bidault, Serge Muller, Nathalie Lassau, Corinne Balleyguier and Tarek Assi
Life 2024, 14(11), 1347; https://doi.org/10.3390/life14111347 - 22 Oct 2024
Viewed by 546
Abstract
Introduction: The incidence of venous thromboembolism is estimated at around 3% in cancer patients. However, a majority of incidental pulmonary embolisms (iPEs) can be overlooked by radiologists in asymptomatic patients undergoing CT scans for disease surveillance, which may significantly impact the patient's health and management. Routine oncology imaging is usually reviewed hours after image acquisition. The advent of AI in radiology, however, could reduce the risk of delayed iPE diagnosis through optimal triage immediately at the acquisition console. This study aimed to determine the accuracy of an AI algorithm (CINA-iPE) in detecting iPE and the time to management of cancer patients in our center, and to describe the characteristics of patients with a confirmed pulmonary embolism (PE). Materials and Methods: This is a retrospective analysis of the role of Avicenna's CE-certified and FDA-cleared CINA-iPE algorithm in oncology patients treated at Gustave Roussy Cancer Campus. The results obtained from the AI algorithm were compared with the attending radiologist's report and were analyzed by both a radiology resident and a senior radiologist. Any discordant results were further investigated to identify the reason for the discrepancy. The interval between the CT scan and its analysis was assessed, as was the interval between the report and the start of active management. Results: Of 3047 patients, 104 alerts were raised for iPE (prevalence of 1.3%), while 2942 scans had negative findings. In total, 36 of the 104 alerts were confirmed PEs, while 68 were false positives. Only one patient reported as negative by the AI tool was deemed to have a PE by the radiologist. The sensitivity and specificity of the AI model were 97.3% and 97.74%, and the PPV and NPV were 34.62% and 99.97%, respectively. Most false positives were caused by artifacts (22 cases, 32.3%) and lymph nodes (11 cases, 16.2%). Seven patients experienced a delayed diagnosis and had to return to the ER for treatment after being sent home following their scan. The remaining patients received care immediately after testing, with a mean delay of 8.13 h. Conclusions: Adding an AI system for the detection of unsuspected PEs on chest CT scans in routine oncology care demonstrated promising efficacy compared with human performance. Despite a low prevalence, the sensitivity and specificity of the AI tool reached 97.3% and 97.7%, respectively, with detection of all reported clinical PEs except a single case. This study describes the potential synergy between AI and radiologists for optimal diagnosis of iPE in routine clinical cancer care. Clinical relevance statement: In oncology, iPEs are common, and missing them carries an increased risk of morbidity from delayed diagnosis. With the assistance of a reliable AI tool, the radiologist can focus on the challenging analysis of oncology findings while urgent diagnoses such as PE are handled by sending the patient straight to the emergency room (ER) for prompt treatment. Full article
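As a sanity check, the diagnostic metrics reported in this abstract follow directly from its confusion-matrix counts (36 true positives, 68 false positives, 1 false negative among 3047 scans). A minimal sketch in Python, illustrative only:

```python
# Confusion-matrix counts reported in the abstract (3047 scans in total)
TP = 36                    # AI alerts confirmed as iPE
FP = 68                    # false-positive alerts
FN = 1                     # PE missed by the AI tool, caught by the radiologist
TN = 3047 - TP - FP - FN   # 2942 true negatives

sensitivity = TP / (TP + FN)   # fraction of true PEs that were flagged
specificity = TN / (TN + FP)   # fraction of PE-free scans left alone
ppv = TP / (TP + FP)           # positive predictive value
npv = TN / (TN + FN)           # negative predictive value

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.2%}")
print(f"PPV {ppv:.2%}, NPV {npv:.2%}")
```

These counts reproduce the reported 97.3% sensitivity, 97.74% specificity, 34.62% PPV, and 99.97% NPV; the low PPV despite high specificity is a direct consequence of the 1.3% prevalence.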
(This article belongs to the Special Issue Advances in Artificial Intelligence for Medical Image Analysis)

14 pages, 3677 KiB  
Article
MRI-Based Machine Learning for Prediction of Clinical Outcomes in Primary Central Nervous System Lymphoma
by Ching-Chung Ko, Yan-Lin Liu, Kuo-Chuan Hung, Cheng-Chun Yang, Sher-Wei Lim, Lee-Ren Yeh, Jeon-Hor Chen and Min-Ying Su
Life 2024, 14(10), 1290; https://doi.org/10.3390/life14101290 - 11 Oct 2024
Viewed by 708
Abstract
A portion of individuals diagnosed with primary central nervous system lymphomas (PCNSL) may experience early relapse or refractory (R/R) disease following treatment. This research explored the potential of MRI-based radiomics in forecasting R/R cases in PCNSL. Forty-six patients with pathologically confirmed PCNSL diagnosed between January 2008 and December 2020 were included in this study. Only patients who underwent pretreatment brain MRIs and complete postoperative follow-up MRIs were included. Pretreatment contrast-enhanced T1WI, T2WI, and T2 FLAIR imaging were analyzed. A total of 107 radiomic features, including 14 shape-based, 18 first-order statistical, and 75 texture features, were extracted from each sequence. Predictive models were then built using five different machine learning algorithms to predict R/R in PCNSL. Of the included 46 PCNSL patients, 20 (20/46, 43.5%) patients were found to have R/R. In the R/R group, the median scores in predictive models such as support vector machine, k-nearest neighbors, linear discriminant analysis, naïve Bayes, and decision trees were significantly higher, while the apparent diffusion coefficient values were notably lower compared to those without R/R (p < 0.05). The support vector machine model exhibited the highest performance, achieving an overall prediction accuracy of 83%, a precision rate of 80%, and an AUC of 0.78. Additionally, when analyzing tumor progression, patients with elevated support vector machine and naïve Bayes scores demonstrated a significantly reduced progression-free survival (p < 0.05). These findings suggest that preoperative MRI-based radiomics may provide critical insights for treatment strategies in PCNSL. Full article
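The AUC of 0.78 reported for the support vector machine can be read as the probability that a randomly chosen R/R patient receives a higher model score than a randomly chosen non-R/R patient. A minimal rank-based sketch of that computation, using hypothetical scores rather than the study's data:

```python
def roc_auc(pos_scores, neg_scores):
    """Probability that a random positive outscores a random negative
    (ties count as half); equivalent to the area under the ROC curve."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores, for illustration only (not the study's data)
rr_scores     = [0.9, 0.8, 0.75, 0.6]   # patients with relapse/refractory disease
non_rr_scores = [0.7, 0.5, 0.4, 0.3]    # patients without R/R
print(roc_auc(rr_scores, non_rr_scores))  # 0.9375
```

An AUC of 0.5 corresponds to chance-level ranking, so the reported 0.78 indicates a moderately discriminative model on this cohort.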
(This article belongs to the Special Issue Advances in Artificial Intelligence for Medical Image Analysis)

26 pages, 1630 KiB  
Article
A Unified Pipeline for Simultaneous Brain Tumor Classification and Segmentation Using Fine-Tuned CNN and Residual UNet Architecture
by Faisal Alshomrani
Life 2024, 14(9), 1143; https://doi.org/10.3390/life14091143 - 10 Sep 2024
Viewed by 1257
Abstract
In this paper, I present a comprehensive pipeline integrating a Fine-Tuned Convolutional Neural Network (FT-CNN) and a Residual-UNet (RUNet) architecture for the automated analysis of MRI brain scans. The proposed system addresses the dual challenges of brain tumor classification and segmentation, which are crucial tasks in medical image analysis for precise diagnosis and treatment planning. Initially, the pipeline preprocesses the FigShare brain MRI image dataset, comprising 3064 images, by normalizing and resizing them to achieve uniformity and compatibility with the model. The FT-CNN model then classifies the preprocessed images into distinct tumor types: glioma, meningioma, and pituitary tumor. Following classification, the RUNet model performs pixel-level segmentation to delineate tumor regions within the MRI scans. The FT-CNN leverages the VGG19 architecture, pre-trained on large datasets and fine-tuned for the tumor classification task. Features extracted from MRI images are used to train the FT-CNN, which demonstrates robust performance in discriminating between tumor types. Subsequently, the RUNet model, inspired by the U-Net design and enhanced with residual blocks, effectively segments tumors by combining high-resolution spatial information from the encoding path with context-rich features from the bottleneck. My experimental results indicate that the integrated pipeline achieves high accuracy in both classification (96%) and segmentation (98%), showcasing its potential for clinical applications in brain tumor diagnosis. For the classification task, the metrics used are loss, accuracy, confusion matrix, and classification report; for the segmentation task, the metrics used are loss, accuracy, Dice coefficient, intersection over union, and Jaccard distance. To further validate the generalizability and robustness of the integrated pipeline, I evaluated the model on two additional datasets. The first dataset consists of 7023 images and expands the classification task to four classes. The second dataset contains approximately 3929 images for both classification and segmentation, including a binary classification scenario. The model demonstrated robust performance, achieving 95% accuracy on the four-class task and high accuracy (96%) in the binary classification and segmentation tasks, with a Dice coefficient of 95%. Full article
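The segmentation metrics named in this abstract (Dice coefficient, intersection over union, Jaccard distance) are overlap ratios between the predicted and ground-truth masks. A minimal sketch on toy binary masks, not the paper's implementation:

```python
def overlap_metrics(pred, truth):
    """Dice, IoU, and Jaccard distance for two binary masks given as
    flat 0/1 sequences of equal length (assumes some foreground pixels)."""
    inter = sum(p and t for p, t in zip(pred, truth))  # overlapping foreground
    total = sum(pred) + sum(truth)
    union = total - inter
    dice = 2 * inter / total
    iou = inter / union
    return dice, iou, 1 - iou   # Jaccard distance = 1 - IoU

# Toy 6-pixel masks: predicted tumor overlaps the true tumor in 2 pixels
pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dice, iou, jaccard_dist = overlap_metrics(pred, truth)
```

On these masks Dice is 2·2/6 ≈ 0.67 while IoU is 2/4 = 0.5; Dice is always at least as large as IoU, which is worth keeping in mind when comparing the paper's 95% Dice against IoU figures reported elsewhere.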
(This article belongs to the Special Issue Advances in Artificial Intelligence for Medical Image Analysis)

7 pages, 500 KiB  
Article
Assessment of Bone Age Based on Hand Radiographs Using Regression-Based Multi-Modal Deep Learning
by Jeoung Kun Kim, Donghwi Park and Min Cheol Chang
Life 2024, 14(6), 774; https://doi.org/10.3390/life14060774 - 18 Jun 2024
Viewed by 831
Abstract
(1) Objective: In this study, a regression-based multi-modal deep learning model was developed for bone age assessment (BAA), using hand radiographic images and clinical data, including patient gender and chronological age, as input. (2) Methods: A dataset of hand radiographs from 2974 pediatric patients was used to develop the model, which integrates hand radiographs processed by an EfficientNetV2S convolutional neural network (CNN) with clinical data (gender and chronological age) processed by a simple deep neural network (DNN). This approach enhances the model's robustness and diagnostic precision, addressing challenges related to imbalanced data distribution and limited sample sizes. (3) Results: The model performed well on BAA, with an overall mean absolute error (MAE) of 0.410, root mean square error (RMSE) of 0.637, and accuracy of 91.1%. Subgroup analysis revealed higher accuracy in females ≤ 11 years (MAE: 0.267, RMSE: 0.453, accuracy: 95.0%) and >11 years (MAE: 0.402, RMSE: 0.634, accuracy: 92.4%) compared to males ≤ 13 years (MAE: 0.665, RMSE: 0.912, accuracy: 79.7%) and >13 years (MAE: 0.647, RMSE: 1.302, accuracy: 84.6%). (4) Conclusion: The model showed generally good performance on BAA, performing better in female pediatric patients than in male pediatric patients, and was especially robust in females ≤ 11 years. Full article
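The MAE and RMSE above are in years of bone age. A minimal sketch of the three reported metrics on toy values; note that the tolerance-based "accuracy" used here (a prediction counts as correct within one year) is an assumed definition for illustration, since the abstract does not state the paper's exact criterion:

```python
import math

def bone_age_errors(predicted, actual, tolerance=1.0):
    """MAE, RMSE, and the fraction of predictions within `tolerance` years
    of the true bone age (the last is an assumed accuracy criterion)."""
    errors = [p - a for p, a in zip(predicted, actual)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    accuracy = sum(abs(e) <= tolerance for e in errors) / len(errors)
    return mae, rmse, accuracy

# Toy bone-age predictions in years, for illustration only
predicted = [10.2, 7.8, 13.5, 9.0]
actual    = [10.0, 8.0, 12.0, 9.4]
mae, rmse, accuracy = bone_age_errors(predicted, actual)
```

RMSE penalizes large errors more heavily than MAE, which may explain why the male >13 years subgroup shows an RMSE (1.302) roughly double its MAE (0.647): a few large misses likely dominate.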
(This article belongs to the Special Issue Advances in Artificial Intelligence for Medical Image Analysis)

20 pages, 10775 KiB  
Article
Generative-Adversarial-Network-Based Image Reconstruction for the Capacitively Coupled Electrical Impedance Tomography of Stroke
by Mikhail Ivanenko, Damian Wanta, Waldemar T. Smolik, Przemysław Wróblewski and Mateusz Midura
Life 2024, 14(3), 419; https://doi.org/10.3390/life14030419 - 21 Mar 2024
Cited by 2 | Viewed by 1496
Abstract
This study investigated the potential of machine-learning-based stroke image reconstruction in capacitively coupled electrical impedance tomography. The quality of brain images reconstructed using a conditional generative adversarial network (cGAN) was examined. The big data required for supervised network training were generated using a two-dimensional numerical simulation. The phantom, an axial cross-section of the head with and without lesions, was averaged over a three-centimeter-thick layer corresponding to the height of the sensing electrodes. Stroke was modeled using regions with electrical parameters characteristic of tissues with reduced perfusion. The head phantom included skin, skull bone, white matter, gray matter, and cerebrospinal fluid. The coupling capacitance was taken into account in the 16-electrode capacitive sensor model. A dedicated ECTsim toolkit for Matlab was used to solve the forward problem and simulate measurements. The cGAN was trained on a numerically generated dataset containing samples corresponding to healthy patients and to patients affected by either hemorrhagic or ischemic stroke. Validation showed that the quality of images obtained using supervised learning and the cGAN is promising: it is possible to visually distinguish when an image corresponds to a patient affected by stroke, with changes caused by hemorrhagic stroke being the most visible. Continuing this work toward image reconstruction from measurements of physical phantoms is justified. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Medical Image Analysis)

14 pages, 4374 KiB  
Article
Deep Learning Super-Resolution Technique Based on Magnetic Resonance Imaging for Application of Image-Guided Diagnosis and Surgery of Trigeminal Neuralgia
by Jun Ho Hwang, Chang Kyu Park, Seok Bin Kang, Man Kyu Choi and Won Hee Lee
Life 2024, 14(3), 355; https://doi.org/10.3390/life14030355 - 7 Mar 2024
Viewed by 1660
Abstract
This study aimed to implement a deep learning-based super-resolution (SR) technique to assist in the diagnosis and surgery of trigeminal neuralgia (TN) using magnetic resonance imaging (MRI). SR was applied to MRI data acquired with five sequences, including T2-weighted imaging (T2WI), T1-weighted imaging (T1WI), contrast-enhanced T1WI (CE-T1WI), T2WI turbo spin–echo volume isotropic turbo spin–echo acquisition (VISTA), and proton density (PD), in patients diagnosed with TN. Image quality was evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). High-quality reconstructed MRI images were assessed using the Leksell coordinate system in gamma knife radiosurgery (GKRS). The results showed that the PSNR and SSIM values achieved by SR were higher than those obtained by image postprocessing techniques, and the coordinates of the images reconstructed in the gamma plan showed no differences from those of the original images. Consequently, SR markedly improved image quality without discrepancies in the coordinate system, confirming its potential as a useful tool for the diagnosis and surgery of TN. Full article
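PSNR, the first of the two quality metrics named above, compares the squared pixel error of a reconstruction against the maximum possible pixel value on a logarithmic (decibel) scale. A minimal sketch on toy 8-bit pixel values, not the study's implementation:

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images given as flat
    pixel sequences of equal length; higher means closer to the reference."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Toy 4-pixel example: small pixel errors give a high PSNR
reference = [100, 120, 140, 160]
sr_output = [101, 119, 142, 158]
print(round(psnr(reference, sr_output), 2))  # 44.15
```

SSIM, the second metric, additionally accounts for luminance, contrast, and structural correlation; in practice a library implementation such as scikit-image's `structural_similarity` is typically used rather than hand-rolled code.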
(This article belongs to the Special Issue Advances in Artificial Intelligence for Medical Image Analysis)
