Machine-Learning-Driven Medical Image Analysis

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 January 2025 | Viewed by 24,648

Special Issue Editors


Guest Editor
Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: computational pathology; medical image processing; machine learning; artificial intelligence

Guest Editor
Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China
Interests: digital pathology; medical image analysis; cancer prognostic model

Guest Editor
Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: computational pathology; medical image analysis; computer-aided prevention, diagnosis, prognosis, and treatment of diseases; multi-modality medical data analysis

Special Issue Information

Dear Colleagues,

Recent advances in imaging instrumentation, biochemical assays, image analysis, and machine learning algorithms have provided new opportunities to interrogate previously intractable diseases, including cancer. The wide application of artificial intelligence (AI) to multi-modal medical images in recent years has greatly improved the objectivity and efficiency of repetitive tasks, such as tumor boundary delineation in radiology and cell quantification in pathology. The end goal of bringing AI into clinical settings, however, is to provide reproducible, quantitative second opinions for clinicians and medical practitioners. In other words, predicting prognosis at an early stage or suggesting the likely response to a targeted therapy would directly facilitate the personalization of the therapeutic regimen for individual patients.

In this regard, it is of great clinical significance to develop trustworthy AI tools for disease prognosis and treatment-response prediction based on various modalities of medical images. To fully depict the development and progression of a complex disease such as cancer, it is often necessary to integrate multi-scale and multi-modal images, enriching the information from the micro- to the macro-scale and ultimately unveiling the entire disease landscape.

We therefore invite you to submit high-quality original research articles and comprehensive reviews on the topic of “Machine-learning-driven medical image analysis”. Translational research with applications in clinical settings is also encouraged.

The topics of this Special Issue may include, but are not limited to, the following:

Medical image analysis (e.g., microscopy, histopathology, radiology);

Machine-learning-based medical image analysis;

Deep learning with medical images;

Disease prognosis;

Prediction of treatment response;

Image-based medical decision support systems;

Computer-aided detection and diagnosis systems;

Multi-modal data integration and fusion.

Dr. Xiangxue Wang
Dr. Cheng Lu
Prof. Dr. Jun Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image analysis
  • prognosis and prediction
  • computer-aided diagnosis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)

Research

16 pages, 1553 KiB  
Article
Outlier Handling Strategy of Ensembled-Based Sequential Convolutional Neural Networks for Sleep Stage Classification
by Wei Zhou, Hangyu Zhu, Wei Chen, Chen Chen and Jun Xu
Bioengineering 2024, 11(12), 1226; https://doi.org/10.3390/bioengineering11121226 - 4 Dec 2024
Viewed by 464
Abstract
The pivotal role of sleep has led to extensive research endeavors aimed at automatic sleep stage classification. However, existing methods perform poorly when classifying small groups or individuals, and these results are often considered outliers in terms of overall performance. These outliers may introduce bias during model training, adversely affecting feature selection and diminishing model performance. To address the above issues, this paper proposes an ensemble-based sequential convolutional neural network (E-SCNN) that incorporates a clustering module and neural networks. E-SCNN effectively ensembles machine learning and deep learning techniques to minimize outliers, thereby enhancing model robustness at the individual level. Specifically, the clustering module categorizes individuals based on similarities in feature distribution and assigns personalized weights accordingly. Subsequently, by combining these tailored weights with the robust feature extraction capabilities of convolutional neural networks, the model generates more accurate sleep stage classifications. The proposed model was verified on two public datasets, and experimental results demonstrate that the proposed method obtains overall accuracies of 84.8% on the Sleep-EDF Expanded dataset and 85.5% on the MASS dataset. E-SCNN can alleviate the outlier problem, which is important for improving sleep quality monitoring for individuals. Full article
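To make the weighting idea above concrete, the following minimal sketch (our own toy illustration, not the authors' code) clusters subjects by a summary of their feature distributions and turns cluster membership into per-sample training weights; the logistic regression merely stands in for the paper's sequential CNN, and all data, dimensions, and names are hypothetical.

```python
# Illustrative sketch: cluster subjects by the mean of their per-epoch features,
# then up-weight subjects from small ("outlier-like") clusters when fitting a
# downstream epoch-level classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 20 subjects, 100 epochs each, 32 features per epoch, 5 sleep stages.
n_subjects, n_epochs, n_feat, n_stages = 20, 100, 32, 5
X = rng.normal(size=(n_subjects, n_epochs, n_feat))
y = rng.integers(0, n_stages, size=(n_subjects, n_epochs))

# 1) Summarize each subject's feature distribution (here simply by its mean).
subject_profiles = X.mean(axis=1)                      # (n_subjects, n_feat)

# 2) Group subjects with similar feature distributions.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(subject_profiles)

# 3) Personalized weights: subjects in small clusters get larger weights so they
#    are not dominated by the majority during training.
cluster_sizes = np.bincount(clusters, minlength=3)
subject_weight = n_subjects / (len(cluster_sizes) * cluster_sizes[clusters])

# 4) Train an epoch-level classifier with per-sample weights (a CNN would take
#    the place of this linear model in the paper's setting).
X_flat = X.reshape(-1, n_feat)
y_flat = y.reshape(-1)
w_flat = np.repeat(subject_weight, n_epochs)
clf = LogisticRegression(max_iter=1000).fit(X_flat, y_flat, sample_weight=w_flat)
print("training accuracy:", clf.score(X_flat, y_flat))
```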

20 pages, 6271 KiB  
Article
Evaluation of Deep Learning Model Architectures for Point-of-Care Ultrasound Diagnostics
by Sofia I. Hernandez Torres, Austin Ruiz, Lawrence Holland, Ryan Ortiz and Eric J. Snider
Bioengineering 2024, 11(4), 392; https://doi.org/10.3390/bioengineering11040392 - 18 Apr 2024
Cited by 1 | Viewed by 1354
Abstract
Point-of-care ultrasound imaging is a critical tool for patient triage during trauma for diagnosing injuries and prioritizing limited medical evacuation resources. Specifically, an eFAST exam evaluates whether there are free fluids in the chest or abdomen, but this is only possible if ultrasound scans can be accurately interpreted, a challenge in the pre-hospital setting. In this effort, we evaluated the use of artificial intelligence-based eFAST image interpretation models. Widely used deep learning model architectures were evaluated, as well as Bayesian models optimized for six different diagnostic models: pneumothorax (i) B- or (ii) M-mode, hemothorax (iii) B- or (iv) M-mode, (v) pelvic or bladder abdominal hemorrhage, and (vi) right upper quadrant abdominal hemorrhage. Models were trained using images captured in 27 swine. Using a leave-one-subject-out training approach, the MobileNetV2 and DarkNet53 models surpassed 85% accuracy for each M-mode scan site. The different B-mode models performed worse, with accuracies between 68% and 74%, except for the pelvic hemorrhage model, which only reached 62% accuracy for all model architectures. These results highlight which eFAST scan sites can be easily automated with image interpretation models, while other scan sites, such as the bladder hemorrhage model, will require more robust model development or data augmentation to improve performance. With these additional improvements, the skill threshold for ultrasound-based triage can be reduced, thus expanding its utility in the pre-hospital setting. Full article
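The leave-one-subject-out protocol mentioned in the abstract can be sketched as follows. This is an illustrative outline with synthetic features and a stand-in classifier, not the study's pipeline; in the actual work, CNN architectures such as MobileNetV2 or DarkNet53 operating on ultrasound frames would take the classifier's place.

```python
# Minimal sketch of leave-one-subject-out evaluation with hypothetical data.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_subjects, scans_per_subject, n_feat = 27, 40, 128        # 27 animals, as in the abstract
X = rng.normal(size=(n_subjects * scans_per_subject, n_feat))
y = rng.integers(0, 2, size=n_subjects * scans_per_subject)      # injury present / absent
groups = np.repeat(np.arange(n_subjects), scans_per_subject)     # subject ID for each scan

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    # Every fold holds out all scans from one subject, so no subject appears
    # in both the training and the test set.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean leave-one-subject-out accuracy: {np.mean(accuracies):.3f}")
```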

13 pages, 725 KiB  
Article
Curriculum Consistency Learning and Multi-Scale Contrastive Constraint in Semi-Supervised Medical Image Segmentation
by Weizhen Ding and Zhen Li
Bioengineering 2024, 11(1), 10; https://doi.org/10.3390/bioengineering11010010 - 22 Dec 2023
Cited by 1 | Viewed by 1496
Abstract
Data scarcity poses a significant challenge in medical image segmentation, thereby highlighting the importance of leveraging sparse annotation data. In addressing this issue, semi-supervised learning has emerged as an effective approach for training neural networks using limited labeled data. In this study, we introduced a curriculum consistency constraint within the context of semi-supervised medical image segmentation, thus drawing inspiration from the human learning process. By dynamically comparing patch features with full image features, we enhanced the network’s ability to learn. Unlike existing methods, our approach adapted the patch size to simulate the human curriculum process, thereby progressing from easy to hard tasks. This adjustment guided the model toward improved convergence optima and generalization. Furthermore, we employed multi-scale contrastive learning to enhance the representation of features. Our method capitalizes on the features extracted from multiple layers to explore additional semantic information and point-wise representations. To evaluate the effectiveness of our proposed approach, we conducted experiments on the Kvasir-SEG polyp dataset and the ISIC 2018 skin lesion dataset. The experimental results demonstrated that our method surpassed state-of-the-art semi-supervised methods by achieving a 9.2% increase in the mean intersection over union (mIoU) for the Kvasir-SEG dataset. This improvement substantiated the efficacy of our proposed curriculum consistency constraint and multi-scale contrastive loss. Full article
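The two ingredients named in the abstract, a patch-size curriculum and a patch-versus-full-image consistency term, can be illustrated with the toy PyTorch sketch below. The linear schedule, crop logic, and tiny fully convolutional model are our own assumptions for illustration, not the authors' implementation.

```python
# Sketch of (1) a patch-size curriculum that shrinks patches as training
# progresses (easy -> hard) and (2) a consistency loss between the prediction
# on a patch and the matching crop of the full-image prediction.
import torch
import torch.nn.functional as F

def curriculum_patch_size(epoch, total_epochs, full=256, minimum=64):
    """Linearly shrink the patch edge length from `full` to `minimum`."""
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return int(full - frac * (full - minimum))

def consistency_loss(model, image, epoch, total_epochs):
    """Compare the prediction on a random patch with the same region of the
    full-image prediction (intended for unlabeled images)."""
    _, _, h, w = image.shape
    ps = curriculum_patch_size(epoch, total_epochs, full=min(h, w))
    top = torch.randint(0, h - ps + 1, (1,)).item()
    left = torch.randint(0, w - ps + 1, (1,)).item()

    full_pred = model(image)                                     # (B, C, H, W)
    patch_pred = model(image[:, :, top:top + ps, left:left + ps])
    full_crop = full_pred[:, :, top:top + ps, left:left + ps]
    return F.mse_loss(patch_pred, full_crop.detach())

# Toy usage with a tiny fully convolutional "segmenter".
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                            torch.nn.ReLU(),
                            torch.nn.Conv2d(8, 2, 1))
image = torch.rand(2, 3, 256, 256)
loss = consistency_loss(model, image, epoch=5, total_epochs=50)
loss.backward()
```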

17 pages, 2184 KiB  
Article
Multimodal Classification Framework Based on Hypergraph Latent Relation for End-Stage Renal Disease Associated with Mild Cognitive Impairment
by Xidong Fu, Chaofan Song, Rupu Zhang, Haifeng Shi and Zhuqing Jiao
Bioengineering 2023, 10(8), 958; https://doi.org/10.3390/bioengineering10080958 - 12 Aug 2023
Cited by 1 | Viewed by 1260
Abstract
Combined arterial spin labeling (ASL) and functional magnetic resonance imaging (fMRI) can reveal the spatiotemporal and quantitative properties of brain networks more comprehensively. Imaging markers of end-stage renal disease associated with mild cognitive impairment (ESRDaMCI) will be sought from these properties. Current multimodal classification methods often neglect to collect high-order relationships of brain regions and remove noise from the feature matrix. A multimodal classification framework is proposed to address this issue using hypergraph latent relation (HLR). A brain functional network with hypergraph structural information is constructed from fMRI data. The feature matrix is obtained through graph theory (GT). The cerebral blood flow (CBF) from ASL is selected as the second modal feature matrix. Then, the adaptive similarity matrix is constructed by learning the latent relation between feature matrices. Latent relation adaptive similarity learning (LRAS) is introduced to multi-task feature learning to construct a multimodal feature selection method based on latent relation (LRMFS). The experimental results show that the best classification accuracy (ACC) reaches 88.67%, at least 2.84% better than the state-of-the-art methods. The proposed framework preserves more valuable information between brain regions and reduces noise among feature matrices. It provides an essential reference value for ESRDaMCI recognition. Full article
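As background for the hypergraph terminology used above, the sketch below shows a standard k-nearest-neighbour construction of a hypergraph incidence matrix from a region-by-feature matrix. It is a generic illustration under assumed dimensions, not the paper's HLR/LRAS pipeline.

```python
# Toy sketch: build a k-NN hypergraph incidence matrix from a region-by-feature
# matrix, a common first step for encoding high-order relationships among
# brain regions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_regions, n_features, k = 90, 20, 5          # hypothetical sizes
features = rng.normal(size=(n_regions, n_features))

# One hyperedge per region: the region itself plus its k nearest neighbours.
nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
_, neighbor_idx = nn.kneighbors(features)      # (n_regions, k + 1); column 0 is the region itself

H = np.zeros((n_regions, n_regions))           # incidence matrix: nodes x hyperedges
for edge_id, members in enumerate(neighbor_idx):
    H[members, edge_id] = 1.0

# Node and hyperedge degrees, as used in hypergraph learning formulations.
node_degree = H.sum(axis=1)
edge_degree = H.sum(axis=0)
print(H.shape, node_degree.mean(), edge_degree.mean())
```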

14 pages, 6760 KiB  
Article
Information-Rich Multi-Functional OCT for Adult Zebrafish Intra- and Extracranial Imaging
by Di Yang, Weike Wang, Zhuoqun Yuan and Yanmei Liang
Bioengineering 2023, 10(7), 856; https://doi.org/10.3390/bioengineering10070856 - 19 Jul 2023
Viewed by 1633
Abstract
The zebrafish serves as a valuable animal model for both intra- and extracranial research, particularly in relation to the brain and skull. To effectively investigate the development and regeneration of adult zebrafish, a versatile in vivo imaging technique capable of showing both intra- and extracranial conditions is essential. In this paper, we utilized high-resolution multi-functional optical coherence tomography (OCT) to obtain rich intra- and extracranial imaging outcomes of adult zebrafish, encompassing pigmentation distribution, tissue-specific information, cranial vascular imaging, and the monitoring of traumatic brain injury (TBI). Notably, this is the first time that the channels passing through the zebrafish cranial suture, which may play a crucial role in maintaining the patency of the cranial sutures, have been observed. The rich imaging results demonstrate that a high-resolution multi-functional OCT system can provide a wealth of novel and interpretable biological information for intra- and extracranial studies of adult zebrafish. Full article

Review

17 pages, 1545 KiB  
Review
Recent Advancements in Deep Learning Using Whole Slide Imaging for Cancer Prognosis
by Minhyeok Lee
Bioengineering 2023, 10(8), 897; https://doi.org/10.3390/bioengineering10080897 - 28 Jul 2023
Cited by 9 | Viewed by 3303
Abstract
This review furnishes an exhaustive analysis of the latest advancements in deep learning techniques applied to whole slide images (WSIs) in the context of cancer prognosis, focusing specifically on publications from 2019 through 2023. The swiftly maturing field of deep learning, in combination with the burgeoning availability of WSIs, manifests significant potential in revolutionizing the predictive modeling of cancer prognosis. In light of the swift evolution and profound complexity of the field, it is essential to systematically review contemporary methodologies and critically appraise their ramifications. This review elucidates the prevailing landscape of this intersection, cataloging major developments, evaluating their strengths and weaknesses, and providing discerning insights into prospective directions. This paper aims to present a comprehensive overview of the field that can serve as a critical resource for researchers and clinicians, ultimately enhancing the quality of cancer care outcomes. This review’s findings accentuate the need for ongoing scrutiny of recent studies in this rapidly progressing field to discern patterns, understand breakthroughs, and navigate future research trajectories. Full article

Other

19 pages, 2509 KiB  
Systematic Review
Machine Learning for Medical Image Translation: A Systematic Review
by Jake McNaughton, Justin Fernandez, Samantha Holdsworth, Benjamin Chong, Vickie Shim and Alan Wang
Bioengineering 2023, 10(9), 1078; https://doi.org/10.3390/bioengineering10091078 - 12 Sep 2023
Cited by 4 | Viewed by 3935
Abstract
Background: CT scans are often the first and only form of brain imaging that is performed to inform treatment plans for neurological patients due to its time- and cost-effective nature. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies which use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. Methods: A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. Results: A total of 103 studies were included in this review, all of which were published since 2017. Of these, 74% of studies investigated MRI to CT synthesis, and the remaining studies investigated CT to MRI, Cross MRI, PET to CT, and MRI to PET. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. Conclusions: Considerably more research has been carried out on MRI to CT synthesis, despite CT to MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets for use. Finally, it is recommended that work be carried out to establish all uses of the synthesis of medical scans in clinical practice and discover which evaluation methods are suitable for assessing the synthesized images for these needs. Full article

44 pages, 1448 KiB  
Systematic Review
Emerging Trends in Fast MRI Using Deep-Learning Reconstruction on Undersampled k-Space Data: A Systematic Review
by Dilbag Singh, Anmol Monga, Hector L. de Moura, Xiaoxia Zhang, Marcelo V. W. Zibetti and Ravinder R. Regatte
Bioengineering 2023, 10(9), 1012; https://doi.org/10.3390/bioengineering10091012 - 26 Aug 2023
Cited by 13 | Viewed by 6556
Abstract
Magnetic Resonance Imaging (MRI) is an essential medical imaging modality that provides excellent soft-tissue contrast and high-resolution images of the human body, allowing us to understand detailed information on morphology, structural integrity, and physiologic processes. However, MRI exams usually require lengthy acquisition times. Methods such as parallel MRI and Compressive Sensing (CS) have significantly reduced the MRI acquisition time by acquiring less data through undersampling k-space. The state-of-the-art of fast MRI has recently been redefined by integrating Deep Learning (DL) models with these undersampled approaches. This Systematic Literature Review (SLR) comprehensively analyzes deep MRI reconstruction models, emphasizing the key elements of recently proposed methods and highlighting their strengths and weaknesses. This SLR involves searching and selecting relevant studies from various databases, including Web of Science and Scopus, followed by a rigorous screening and data extraction process using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It focuses on various techniques, such as residual learning, image representation using encoders and decoders, data-consistency layers, unrolled networks, learned activations, attention modules, plug-and-play priors, diffusion models, and Bayesian methods. This SLR also discusses the use of loss functions and training with adversarial networks to enhance deep MRI reconstruction methods. Moreover, we explore various MRI reconstruction applications, including non-Cartesian reconstruction, super-resolution, dynamic MRI, joint learning of reconstruction with coil sensitivity and sampling, quantitative mapping, and MR fingerprinting. This paper also addresses research questions, provides insights for future directions, and emphasizes robust generalization and artifact handling. Therefore, this SLR serves as a valuable resource for advancing fast MRI, guiding research and development efforts of MRI reconstruction for better image quality and faster data acquisition. Full article
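One recurring building block surveyed in this review is the data-consistency layer. The snippet below is a minimal single-coil Cartesian illustration of that operation, written with assumed toy data rather than any specific method from the review: the network's k-space estimate is kept where no samples were acquired, and the measured samples are re-inserted where they were.

```python
# Hard data consistency for undersampled single-coil Cartesian MRI (sketch).
import numpy as np

rng = np.random.default_rng(0)
H = W = 128

ground_truth = rng.normal(size=(H, W))                 # stand-in image
mask = rng.random((H, W)) < 0.3                        # 30% of k-space sampled
measured_kspace = np.fft.fft2(ground_truth) * mask     # undersampled acquisition

def data_consistency(network_image, measured_kspace, mask):
    """Enforce agreement with the acquired k-space samples."""
    k_est = np.fft.fft2(network_image)
    k_dc = np.where(mask, measured_kspace, k_est)      # trust measurements where sampled
    return np.real(np.fft.ifft2(k_dc))

network_output = rng.normal(size=(H, W))               # pretend CNN reconstruction
corrected = data_consistency(network_output, measured_kspace, mask)
print(corrected.shape)
```

In unrolled reconstruction networks this step is typically interleaved with learned regularization blocks and repeated for several iterations; a soft variant weights the measured and estimated samples according to an assumed noise level.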

11 pages, 3196 KiB  
Perspective
The Digital Twin: A Potential Solution for the Personalized Diagnosis and Treatment of Musculoskeletal System Diseases
by Tianze Sun, Jinzuo Wang, Moran Suo, Xin Liu, Huagui Huang, Jing Zhang, Wentao Zhang and Zhonghai Li
Bioengineering 2023, 10(6), 627; https://doi.org/10.3390/bioengineering10060627 - 23 May 2023
Cited by 8 | Viewed by 3029
Abstract
Due to the high prevalence and rates of disability associated with musculoskeletal system diseases, more thorough research into diagnosis, pathogenesis, and treatments is required. One of the key contributors to the emergence of diseases of the musculoskeletal system is thought to be changes in the biomechanics of the human musculoskeletal system. However, there are some defects concerning personal analysis or dynamic responses in current biomechanical research methodologies. Digital twin (DT) was initially an engineering concept that reflected the mirror image of a physical entity. With the application of medical image analysis and artificial intelligence (AI), it entered our lives and showed its potential to be further applied in the medical field. Consequently, we believe that DT can take a step towards personalized healthcare by guiding the design of industrial personalized healthcare systems. In this perspective article, we discuss the limitations of traditional biomechanical methods and the initial exploration of DT in musculoskeletal system diseases. We provide a new opinion that DT could be an effective solution for musculoskeletal system diseases in the future, which will help us analyze the real-time biomechanical properties of the musculoskeletal system and achieve personalized medicine. Full article
