Application of Machine Learning Using Ultrasound Images, Volume II

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 7854

Special Issue Editor


Prof. Dr. Aaron Fenster
Guest Editor
Imaging Research Laboratories, Robarts Research Institute, Western University, 1151 Richmond St. N., London, ON N6A 5B7, Canada
Interests: ultrasound imaging; image-guided intervention

Special Issue Information

Dear Colleagues,

Ultrasound imaging is an indispensable imaging tool found in almost all hospitals worldwide: it provides real-time images, does not use ionizing radiation, can be conducted with portable systems, and is inexpensive, with systems ranging in price from about USD 10,000 for phone-based systems to over USD 300,000 for high-end systems offering a wide range of capabilities. However, ultrasound images suffer from low tissue contrast, image speckle, shadowing, and various artifacts, making image interpretation difficult. Furthermore, both the use of ultrasound and the interpretation of its images suffer from user variability. Nevertheless, ultrasound imaging is used in disease diagnosis, assessing responses to therapy, guiding biopsies, and guiding surgical interventions. Its applications are very wide, spanning obstetrics, gynecology, oncology, cardiology, vascular medicine, urology, and musculoskeletal diseases and conditions.

Machine learning tools such as deep learning have primarily been applied to CT and MR images, and, owing partly to the limitations of ultrasound imaging noted above, applications of machine learning to ultrasound images have lagged behind. Over the past few years, however, applications of deep learning methods have grown rapidly, including applications using ultrasound images. Deep learning tools promise to make ultrasound imaging less variable and less user-dependent, shorten procedure times, and improve the guidance of biopsy and therapy applicators in image-guided interventions. Opportunities include pathology detection, classification of pathology as benign or malignant, segmentation of lesions to measure the size changes needed for monitoring response to therapy, quantification of changes in pathology in response to therapy, and guidance and tracking of tools in the body, among other applications.

We are seeking contributions that present machine learning algorithms, techniques, and applications that will help make ultrasound imaging more robust for disease detection, diagnosis, pathology quantification, image-guided minimally invasive interventions, and image-guided surgery.

Prof. Dr. Aaron Fenster
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

16 pages, 9641 KiB  
Article
Convolutional Neural Network Approaches in Median Nerve Morphological Assessment from Ultrasound Images
by Shion Ando and Ping Yeap Loh
J. Imaging 2024, 10(1), 13; https://doi.org/10.3390/jimaging10010013 - 05 Jan 2024
Viewed by 1430
Abstract
Ultrasound imaging has been used to investigate compression of the median nerve in carpal tunnel syndrome patients. Ultrasound imaging and the extraction of median nerve parameters from ultrasound images are crucial and are usually performed manually by experts. The manual annotation of ultrasound images relies on experience, and intra- and interrater reliability may vary among studies. In this study, two types of convolutional neural networks (CNNs), U-Net and SegNet, were used to extract the median nerve morphology. To the best of our knowledge, the application of these methods to ultrasound imaging of the median nerve has not yet been investigated. Spearman’s correlation and Bland–Altman analyses were performed to investigate the correlation and agreement between manual annotation and CNN estimation, namely, the cross-sectional area, circumference, and diameter of the median nerve. The results showed that the intersection over union (IoU) of U-Net (0.717) was greater than that of SegNet (0.625). A few images in SegNet had an IoU below 0.6, decreasing the average IoU. In both models, the IoU decreased when the median nerve was elongated longitudinally with a blurred outline. The Bland–Altman analysis revealed that, in general, both the U-Net- and SegNet-estimated measurements showed 95% limits of agreement with manual annotation. These results show that these CNN models are promising tools for median nerve ultrasound imaging analysis. Full article
(This article belongs to the Special Issue Application of Machine Learning Using Ultrasound Images, Volume II)
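For readers unfamiliar with the segmentation metric quoted above, the intersection over union (IoU) reported for U-Net and SegNet can be computed from binary masks as in this minimal sketch (toy masks for illustration, not the study's data):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union (Jaccard index) of two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

# Toy 4x4 masks: prediction covers 4 pixels, ground truth 5, with 4 overlapping
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1
truth = pred.copy()
truth[3, 1] = 1
print(iou(pred, truth))  # 4 / 5 = 0.8
```

An IoU threshold (the paper flags images below 0.6) is a common way to screen for segmentation failures before averaging.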

19 pages, 2904 KiB  
Article
Segmentation of Substantia Nigra in Brain Parenchyma Sonographic Images Using Deep Learning
by Giansalvo Gusinu, Claudia Frau, Giuseppe A. Trunfio, Paolo Solla and Leonardo Antonio Sechi
J. Imaging 2024, 10(1), 1; https://doi.org/10.3390/jimaging10010001 - 19 Dec 2023
Viewed by 1506
Abstract
Currently, Parkinson’s Disease (PD) is diagnosed primarily based on symptoms by expert clinicians. Neuroimaging exams represent an important tool to confirm the clinical diagnosis. Among them, Brain Parenchyma Sonography (BPS) is used to evaluate the hyperechogenicity of the Substantia Nigra (SN), found in more than 90% of PD patients. In this article, we exploit a new dataset of BPS images to investigate an automatic segmentation approach for the SN that can increase the accuracy of the exam and its practicability in clinical routine. This study achieves state-of-the-art performance in SN segmentation of BPS images. Indeed, it is found that the modified U-Net network scores a Dice coefficient of 0.859 ± 0.037. The results presented in this study demonstrate the feasibility and usefulness of SN automatic segmentation in BPS medical images, to the point that this study can be considered the first stage of the development of an end-to-end CAD (Computer-Aided Detection) system. Furthermore, the used dataset, which will be further enriched in the future, has proven to be very effective in supporting the training of CNNs and may pave the way for future studies in the field of CAD applied to PD. Full article
(This article belongs to the Special Issue Application of Machine Learning Using Ultrasound Images, Volume II)
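The Dice coefficient reported above is closely related to IoU (Dice = 2·IoU / (1 + IoU)); a minimal sketch of its computation on binary masks (toy data, not the study's BPS images):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy masks: 4 predicted pixels, 5 true pixels, 4 overlapping
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1
truth = pred.copy()
truth[3, 1] = 1
print(dice(pred, truth))  # 2 * 4 / (4 + 5) ≈ 0.889
```

Because Dice weights the overlap twice, it is always at least as large as the IoU of the same pair of masks.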

20 pages, 26287 KiB  
Article
A KL Divergence-Based Loss for In Vivo Ultrafast Ultrasound Image Enhancement with Deep Learning
by Roser Viñals and Jean-Philippe Thiran
J. Imaging 2023, 9(12), 256; https://doi.org/10.3390/jimaging9120256 - 23 Nov 2023
Cited by 2 | Viewed by 1429
Abstract
Ultrafast ultrasound imaging, characterized by high frame rates, generates low-quality images. Convolutional neural networks (CNNs) have demonstrated great potential to enhance image quality without compromising the frame rate. However, CNNs have been mostly trained on simulated or phantom images, leading to suboptimal performance on in vivo images. In this study, we present a method to enhance the quality of single plane wave (PW) acquisitions using a CNN trained on in vivo images. Our contribution is twofold. Firstly, we introduce a training loss function that accounts for the high dynamic range of the radio frequency data and uses the Kullback–Leibler divergence to preserve the probability distributions of the echogenicity values. Secondly, we conduct an extensive performance analysis on a large new in vivo dataset of 20,000 images, comparing the predicted images to the target images resulting from the coherent compounding of 87 PWs. Applying a volunteer-based dataset split, the peak signal-to-noise ratio and structural similarity index measure increase, respectively, from 16.466 ± 0.801 dB and 0.105 ± 0.060, calculated between the single PW and target images, to 20.292 ± 0.307 dB and 0.272 ± 0.040, between predicted and target images. Our results demonstrate significant improvements in image quality, effectively reducing artifacts. Full article
(This article belongs to the Special Issue Application of Machine Learning Using Ultrasound Images, Volume II)
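The paper's exact loss formulation is not reproduced here, but the Kullback–Leibler divergence it builds on can be sketched for discrete distributions such as normalized echogenicity histograms (the histogram values below are hypothetical toy data):

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Discrete Kullback–Leibler divergence D(p || q) between two distributions."""
    p = np.asarray(p, dtype=float) + eps  # eps avoids log(0) and division by zero
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()       # normalize to probability distributions
    return float(np.sum(p * np.log(p / q)))

# Toy echogenicity-value histograms of a predicted and a target image
hist_pred = np.array([10, 20, 40, 20, 10])
hist_target = np.array([12, 18, 42, 18, 10])
print(kl_divergence(hist_pred, hist_target))  # small positive value
```

KL divergence is zero only when the two distributions match, which is what makes it a natural penalty for preserving the distribution of echogenicity values.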

13 pages, 18698 KiB  
Article
Leveraging AI in Postgraduate Medical Education for Rapid Skill Acquisition in Ultrasound-Guided Procedural Techniques
by Flora Wen Xin Xu, Amanda Min Hui Choo, Pamela Li Ming Ting, Shao Jin Ong and Deborah Khoo
J. Imaging 2023, 9(10), 225; https://doi.org/10.3390/jimaging9100225 - 16 Oct 2023
Viewed by 1261
Abstract
Ultrasound-guided techniques are increasingly prevalent and represent a gold standard of care. Skills such as needle visualisation, optimising the target image and directing the needle require deliberate practice. However, training opportunities remain limited by patient case load and safety considerations. Hence, there is a genuine and urgent need for trainees to attain accelerated skill acquisition in a time- and cost-efficient manner that minimises risk to patients. We propose a two-step solution: First, we have created an agar phantom model that simulates human tissue and structures like vessels and nerve bundles. Moreover, we have adopted deep learning techniques to provide trainees with live visualisation of target structures and automate assessment of their user speed and accuracy. Key structures like the needle tip, needle body, target blood vessels, and nerve bundles, are delineated in colour on the processed image, providing an opportunity for real-time guidance of needle positioning and target structure penetration. Quantitative feedback on user speed (time taken for target penetration), accuracy (penetration of correct target), and efficacy in needle positioning (percentage of frames where the full needle is visualised in a longitudinal plane) are also assessable using our model. Our program was able to demonstrate a sensitivity of 99.31%, specificity of 69.23%, accuracy of 91.33%, precision of 89.94%, recall of 99.31%, and F1 score of 0.94 in automated image labelling. Full article
(This article belongs to the Special Issue Application of Machine Learning Using Ultrasound Images, Volume II)
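The metrics quoted above (sensitivity, specificity, accuracy, precision, recall, F1) all derive from the four confusion-matrix counts; a minimal sketch with hypothetical counts, not the study's data:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # a.k.a. recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "precision": precision,
            "recall": sensitivity, "f1": f1}

# Hypothetical counts for illustration
m = classification_metrics(tp=90, fp=10, tn=45, fn=5)
print(m)
```

Note that sensitivity and recall are the same quantity, which is why the paper reports 99.31% for both.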

17 pages, 505 KiB  
Article
Make It Less Complex: Autoencoder for Speckle Noise Removal—Application to Breast and Lung Ultrasound
by Duarte Oliveira-Saraiva, João Mendes, João Leote, Filipe André Gonzalez, Nuno Garcia, Hugo Alexandre Ferreira and Nuno Matela
J. Imaging 2023, 9(10), 217; https://doi.org/10.3390/jimaging9100217 - 10 Oct 2023
Viewed by 1795
Abstract
Ultrasound (US) imaging is used in the diagnosis and monitoring of COVID-19 and breast cancer. The presence of Speckle Noise (SN) is a downside to its usage since it decreases lesion conspicuity. Filters can be used to remove SN, but they involve time-consuming computation and parameter tuning. Several researchers have been developing complex Deep Learning (DL) models (150,000–500,000 parameters) for the removal of simulated added SN, without focusing on the real-world application of removing naturally occurring SN from original US images. Here, a simpler (<30,000 parameters) Convolutional Neural Network Autoencoder (CNN-AE) to remove SN from US images of the breast and lung is proposed. In order to do so, simulated SN was added to such US images, considering four different noise levels (σ = 0.05, 0.1, 0.2, 0.5). The original US images (N = 1227, breast + lung) were given as targets, while the noised US images served as the input. The Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were used to compare the output of the CNN-AE and of the Median and Lee filters with the original US images. The CNN-AE outperformed the use of these classic filters for every noise level. To see how well the model removed naturally occurring SN from the original US images and to test its real-world applicability, a CNN model that differentiates malignant from benign breast lesions was developed. Several inputs were used to train the model (original, CNN-AE denoised, filter denoised, and noised US images). The use of the original US images resulted in the highest Matthews Correlation Coefficient (MCC) and accuracy values, while for sensitivity and negative predictive values, the CNN-AE-denoised US images (for higher σ values) achieved the best results. Our results demonstrate that the application of a simpler DL model for SN removal results in fewer misclassifications of malignant breast lesions in comparison to the use of original US images and the application of the Median filter. This shows that the use of a less-complex model and a focus on clinical practice applicability are relevant and should be considered in future studies. Full article
(This article belongs to the Special Issue Application of Machine Learning Using Ultrasound Images, Volume II)
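The abstract's noise levels (σ = 0.05, 0.1, 0.2, 0.5) and PSNR comparison can be illustrated with a common multiplicative speckle model; this is a sketch under that assumption, not the authors' exact simulation pipeline:

```python
import numpy as np

def add_speckle(img: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Simulate multiplicative speckle: noisy = img + img * n, with n ~ N(0, sigma)."""
    rng = np.random.default_rng(seed)
    return img + img * rng.normal(0.0, sigma, size=img.shape)

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference - test) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(max_val ** 2 / mse))

# Toy grayscale image in [0, 1]; PSNR drops as sigma grows
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
for sigma in (0.05, 0.1, 0.2, 0.5):
    print(sigma, psnr(img, add_speckle(img, sigma)))
```

A denoiser trained on such pairs takes the noised image as input and the original as target, exactly the setup the abstract describes.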
