Review

Deep Learning for Point-of-Care Ultrasound Image Quality Enhancement: A Review

by Hilde G. A. van der Pol 1,2, Lennard M. van Karnenbeek 1, Mark Wijkhuizen 1, Freija Geldof 1 and Behdad Dashtbozorg 1,*

1 Image-Guided Surgery, Department of Surgery, Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
2 Technical Medicine, Faculty of Mechanical, Maritime, and Materials Engineering (3ME), Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7132; https://doi.org/10.3390/app14167132
Submission received: 31 July 2024 / Revised: 9 August 2024 / Accepted: 11 August 2024 / Published: 14 August 2024
(This article belongs to the Section Applied Biosciences and Bioengineering)

Abstract

The popularity of handheld devices for point-of-care ultrasound (POCUS) has increased in recent years due to their portability and cost-effectiveness. However, POCUS has the drawback of lower imaging quality compared to conventional ultrasound because of hardware limitations. Improving the quality of POCUS through post-image processing would therefore be beneficial, with deep learning approaches showing promise in this regard. This review investigates the state-of-the-art progress of image enhancement using deep learning suitable for POCUS applications. A systematic search was conducted from January 2024 to February 2024 on PubMed and Scopus. From the 457 articles that were found, the full text was retrieved for 69 articles. From this selection, 15 articles were identified addressing multiple quality enhancement aspects. A disparity in the baseline performance of the low-quality input images was seen across these studies, ranging between 8.65 and 29.24 dB for the Peak Signal-to-Noise Ratio (PSNR) and between 0.03 and 0.71 for the Structural Similarity Index Measure (SSIM). In six studies, where both the PSNR and the SSIM metrics were reported for the baseline and the generated images, mean differences of 6.60 (SD ± 2.99) and 0.28 (SD ± 0.15) were observed for the PSNR and SSIM, respectively. The reported performance outcomes demonstrate the potential of deep learning-based image enhancement for POCUS. However, variability in the extent of the performance gain across datasets and articles was notable, and the heterogeneity across articles makes quantifying the exact improvements challenging.

1. Introduction

The use of handheld devices suitable for point-of-care ultrasound (POCUS) has been on the rise in recent years. This increase in popularity can be attributed to some key characteristics of these devices. Firstly, their portability makes them more convenient compared to conventional cart-based devices. Moreover, these handheld POCUS devices are more affordable than traditional ultrasound machines [1,2,3,4,5], making ultrasound technology more accessible and expanding its application beyond radiology departments. This is particularly useful in situations where larger, more expensive ultrasound equipment is impractical, such as in bedside emergency settings, general practitioner offices, home care environments, and rural medicine facilities [6,7,8,9,10,11,12].
However, one of the primary drawbacks of ultrasound examination with a handheld device is the reduced imaging quality due to hardware limitations and the absence of sophisticated post-processing algorithms. These limitations can potentially lead to less accurate diagnoses [2,4]. Compared to conventional high-end ultrasound systems, handheld POCUS devices typically exhibit reduced resolution and contrast, less distinct texture or edges of structures, and increased noise levels [6,13,14,15]. Despite the advancements in POCUS technology in recent years, a tradeoff remains between imaging quality and the benefits of cost and portability [14,16,17].
Efforts to enhance the quality of POCUS can be categorized into three main approaches. The first approach involves advancements in hardware. However, this approach is constrained by rising costs or compromised portability. Another option for quality improvement involves refinements in the ultrasound beamforming algorithm [18,19]. Nevertheless, the accessibility of raw radio frequency (RF) signals required for these improvements is limited in most commercial ultrasound systems. Therefore, this systematic review opts to center its focus on a third alternative: modifications to the image post-processing methods, eliminating the need for hardware remodeling or operations on the raw RF signal.
Traditional post-processing techniques, such as filtering and deconvolution, have been employed for ultrasound image enhancement for some time, as described in the review by Ortiz et al. [20]. Over the last few years, deep learning has emerged as a powerful tool, achieving state-of-the-art performance in various image processing tasks, including image quality enhancement [21,22,23]. Lepcha et al. recently conducted a systematic survey on existing state-of-the-art image enhancement techniques, including deep learning [24]. However, to the best of the authors’ knowledge, there has been no recent literature review on the current status of deep learning-based image enhancement specifically focusing on ultrasound. This gap in the literature presents a compelling area for investigation, particularly given the affordability and flexibility of POCUS, alongside its inherent challenges related to image quality.
The aim of this systematic review is, therefore, to explore the current state-of-the-art progress in ultrasound image enhancement using deep learning for point-of-care ultrasound applications. In this review, we will categorize the quality enhancement methods used in the selected articles, provide an overview of the improvements in performance achieved by these methods, and assess the practical benefits of these deep learning algorithms in enhancing ultrasound image quality for clinical practice.

2. Materials and Methods

This systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [25].

2.1. Literature Search

2.1.1. Search Strategy

A literature search was conducted in PubMed and Scopus on 29 January 2024, covering publications from 2018 onwards. This decision was made due to the substantial growth of deep learning applications in medical contexts observed only in more recent years, especially in tasks related to image generation [21,26]. The search strategy was formulated to encompass three primary concepts: “Quality enhancement”, “Ultrasound”, and “Deep learning”. It should be noted that even though this review focuses on the POCUS applications, the search included methods developed for general ultrasound imaging to ensure a comprehensive coverage of relevant algorithms. Synonyms and related keywords for each concept were identified and included in the search string, such as specific aspects of quality enhancement like denoising or specific types of deep learning networks like a convolutional neural network. The complete search strings for both databases are reported in Appendix A. Additionally, a snowballing search was conducted to identify related articles, and duplicates were removed during the screening process.

2.1.2. Eligibility Criteria

Studies were included if they met the following criteria: (1) focused on medical image quality enhancement—defined as increased spatial resolution, contrast enhancement, denoising, or enhancement of structure boundaries; (2) used post-image processing techniques on B-mode ultrasound images, i.e., image-to-image methods; (3) proposed a deep learning algorithm; (4) proposed algorithm was specifically developed for ultrasound images; (5) full-text original article was written in English.
Exclusion criteria included: (1) other quality enhancement methods like restoration and inpainting or different study aims like domain conversion and 3D reconstruction; (2) hardware changes were required; (3) RF ultrasound data were used as input; (4) research on (microbubble) contrast-enhanced ultrasound, ultrasound computed tomography, elastography, color Doppler ultrasound, quantitative ultrasound, and high-intensity focused ultrasound; (5) non-journal publications (e.g., reviews, comments, dissertations, newspapers, and books); (6) non-accessible full-text publications.

2.1.3. Selection Procedure

The title and abstract of all studies were screened. Studies were excluded if they did not meet the eligibility criteria. For the remaining studies, the full text was retrieved and evaluated comprehensively. Each study was classified according to the quality enhancement aspects it addressed, as explained in more depth in the section below. This classification resulted in a final selection of articles for further assessment and quantification.

2.2. Categorization According to Quality Enhancement Aspects

Papers published on quality enhancement in ultrasound were further grouped based on the specific distortions addressed, which are particularly relevant to POCUS imaging, namely, (1) spatial resolution; (2) contrast; (3) texture or detail enhancement; and (4) noise. The definitions of these quality enhancement aspects as implemented in this review are further specified in Table 1.
Given the multifaceted nature of distortions in handheld ultrasound and the necessity for real-time quality enhancement, this review focused on deep learning algorithms simultaneously addressing multiple quality enhancement aspects. However, these quality enhancement aspects can be closely related. For instance, the presence of noise reduces image contrast and resolution, thereby affecting edges and fine details [27,28]. Therefore, improving the quality of the ultrasound image by addressing one or more of the quality enhancement aspects should be specifically described and evaluated through a suitable performance metric.
Furthermore, studies were also included if they reported on the process of mapping low-quality images to high-quality reference images. This had to be achieved by obtaining ultrasound images that naturally showed a disparity in quality as a result of differences in the capture process, such as a different number of piezoelectric elements or the number of plane waves used, and not by artificially inducing quality reduction or improvement. Consequently, this led to the identification of a final category: (5) general quality improvement. The articles addressing either multiple quality enhancement aspects or quality improvement in general were selected for further descriptive and quantitative assessment.
Table 1. Definitions of quality enhancement aspects.
Quality Enhancement Aspect | Definition
1. Spatial resolution | The ability to differentiate two adjacent structures as being distinct from one another: either parallel (axial resolution) or perpendicular (lateral resolution) to the direction of the ultrasound beam [29].
2. Contrast resolution | The ability to distinguish between different echo amplitudes of adjacent structures through image intensity variations [29].
3. Detail enhancement of structures | Enhancement of texture, edges, or boundaries between structures.
4. Noise | Minimization of random variability that is not part of the desired signal.
5. General quality improvement | Mapping low-quality images to high-quality reference images, where the quality disparities are inherent to differences in the capture process and not artificially induced.

2.3. Data Extraction

Two performance metrics were evaluated: the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM). The PSNR and SSIM are commonly used metrics for assessing image quality, which can quantitatively show the effectiveness of the proposed networks. Both are full-reference metrics, which evaluate the quality of an image by comparing it to a high-quality reference image. Data were extracted if the article reported either the PSNR or the SSIM, with standard deviation, for both the low-quality input images and the images generated by the proposed algorithm.
The PSNR is defined as the ratio between the maximum power of a signal and the power of the distorting noise [30]. It reflects the pixel-based similarity between the reconstructed image and the corresponding high-quality reference [14]. This ratio between two images is expressed in decibels (dB), thus following a log10 scale. A higher value indicates that the reconstructed image contains more details and provides a higher image quality [6,31], and vice versa: a small PSNR value implies large numerical differences between the images [31]. Given a reference image f and a test image g, both of size M × N, with maximum intensity $\mathrm{MAX}_I$ and the Mean Squared Error (MSE) between f and g, the PSNR is calculated as follows:

$$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}(f,g)}\right),$$

where the MSE is given by

$$\mathrm{MSE}(f,g) = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(f_{ij}-g_{ij}\right)^{2}.$$
The SSIM evaluates the perceived quality and assesses the perceptual similarity between paired images [14,30]. It is considered to be correlated with the quality perception of the human visual system. Instead of using traditional error summation methods, the SSIM models any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. The SSIM index ranges between −1 and 1, with a value of 1 indicating perfect correlation, 0 indicating no correlation, and −1 indicating anti-correlation between the images [31].
For a reference image f and a test image g, where $\mu_f$ and $\mu_g$ denote the means, $\sigma_f^2$ and $\sigma_g^2$ the variances, $\sigma_{fg}$ the covariance of f and g, and $C_1$ and $C_2$ two positive constants used to avoid a null denominator, the SSIM is defined as follows:

$$\mathrm{SSIM}(f,g) = \frac{(2\mu_f\mu_g + C_1)(2\sigma_{fg} + C_2)}{(\mu_f^{2} + \mu_g^{2} + C_1)(\sigma_f^{2} + \sigma_g^{2} + C_2)}.$$
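To make these definitions concrete, the following minimal Python/NumPy sketch computes the PSNR and a single-window (global) SSIM for 8-bit grayscale images (so MAX_I = 255). It is an illustration only: the constants C1 and C2 and the absence of local windowing are common simplifications, not choices prescribed by the reviewed studies, and published SSIM implementations typically average the index over local windows instead.

```python
import numpy as np


def psnr(f, g, max_i=255.0):
    """Peak Signal-to-Noise Ratio (dB) between reference image f and test image g."""
    f = np.asarray(f, dtype=np.float64)
    g = np.asarray(g, dtype=np.float64)
    mse = np.mean((f - g) ** 2)  # Mean Squared Error over all M x N pixels
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_i ** 2 / mse)


def ssim_global(f, g, max_i=255.0):
    """Single-window (global) SSIM following the formula above (illustrative only)."""
    f = np.asarray(f, dtype=np.float64)
    g = np.asarray(g, dtype=np.float64)
    c1 = (0.01 * max_i) ** 2  # small constants avoiding a null denominator
    c2 = (0.03 * max_i) ** 2
    mu_f, mu_g = f.mean(), g.mean()
    var_f, var_g = f.var(), g.var()
    cov_fg = np.mean((f - mu_f) * (g - mu_g))
    return ((2 * mu_f * mu_g + c1) * (2 * cov_fg + c2)) / (
        (mu_f ** 2 + mu_g ** 2 + c1) * (var_f + var_g + c2)
    )


# Example: compare a noisy copy of a synthetic image against the clean reference.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
degraded = np.clip(reference + rng.normal(0, 15, size=reference.shape), 0, 255)
print(f"PSNR = {psnr(reference, degraded):.2f} dB, SSIM = {ssim_global(reference, degraded):.3f}")
```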
Additionally, the extracted data were grouped by set type (in vivo, phantom, or simulation).

2.4. Statistical Analysis

Statistical analysis was performed using IBM SPSS Statistics, Version 29.0.2 (Released 2023; IBM Corp., Armonk, NY, USA), using a random effects model. Statistical heterogeneity was evaluated by calculating I2 statistics, with high heterogeneity defined as >75% and statistical significance defined as p < 0.05. Mean differences were calculated by subtracting the quality performance values of the low-quality input image from the values of the generated image by the proposed algorithm. The results were summarized in forest plots.
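Although the pooling was carried out in SPSS, the calculation can be sketched as follows: a minimal Python implementation of a DerSimonian-Laird random-effects model with the I2 heterogeneity statistic. The exact estimator used by SPSS may differ, and the study values in the example are placeholders rather than data from the included articles.

```python
import numpy as np


def random_effects_meta(mean_diffs, std_errs):
    """DerSimonian-Laird random-effects pooling of per-study mean differences.

    mean_diffs: per-study mean differences (generated minus low-quality input).
    std_errs:   standard errors of those mean differences.
    Returns the pooled mean difference, its standard error, tau^2, and I^2 (%).
    """
    d = np.asarray(mean_diffs, dtype=float)
    se = np.asarray(std_errs, dtype=float)

    w = 1.0 / se ** 2                               # fixed-effect (inverse-variance) weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)              # Cochran's Q
    df = len(d) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

    w_star = 1.0 / (se ** 2 + tau2)                 # random-effects weights
    d_random = np.sum(w_star * d) / np.sum(w_star)
    se_random = np.sqrt(1.0 / np.sum(w_star))
    return d_random, se_random, tau2, i2


# Placeholder values only, not the included studies' actual data:
pooled, se, tau2, i2 = random_effects_meta([6.0, 2.5, 1.9, 7.1], [0.3, 0.2, 0.25, 0.4])
print(f"Pooled mean difference = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.2f}, I2 = {i2:.0f}%")
```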

3. Results

3.1. Study Selection

The systematic literature search identified 457 articles from the two databases after duplicate removal. Snowballing did not identify any additional relevant articles. The initial screening based on the title and abstract resulted in 69 articles being selected for full-text review [6,14,15,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96]. Following an initial analysis focusing on quality enhancement aspects, 15 articles were identified that addressed multiple quality enhancement aspects. These articles were selected for further descriptive and quantitative assessment. Finally, six articles were selected for a meta-analysis based on their reported outcomes. The study selection process is illustrated in Figure 1.

3.2. Quality Enhancement Aspects

The distribution of the quality enhancement aspects across the 69 included articles was analyzed, and the findings are depicted in Figure 2. This figure shows a predominant focus on denoising, followed by resolution enhancement. Of the 69 articles, the majority (78%, n = 54) focused on a single quality enhancement aspect, while the remaining 22% (n = 15) addressed a combination of aspects or quality enhancement in general. Consequently, these 15 articles [6,14,15,32,33,34,35,36,37,38,39,40,41,42,43] were selected for further analysis.

3.3. Study Characteristics

The datasets used in the 15 selected articles for further assessment were categorized into three types: (1) in vivo data; (2) phantom data (including in vitro, ex vivo, and tissue-mimicking datasets, both self-made and commercial); and (3) simulation data. For studies comparing multiple deep learning algorithms or loss functions, the best-performing algorithm or loss function and its corresponding performance were reported. Characteristics of the included articles are shown in detail in Table 2. Most articles [33,37,38,41,42,43] used Plane Wave Imaging (PWI) for data collection by mapping low-quality images from one or a few angles to high-quality compounded images from multiple angles. The next most common ultrasound mode for data collection involved low-quality input data from handheld POCUS devices and high-quality reference images from high-end ultrasound devices [6,14,15,35]. Additionally, all articles used a CNN, a GAN, or a combination of the two as the deep learning network. More variety was observed in the loss functions, although the MSE and SSIM losses were frequently used, often in combination with other loss functions.

3.4. Study Outcomes

The outcomes of the selected studies are reported in Table 3, including the computation time, source code availability, number of images in the test set, performance metrics for low-quality input images, and performance metrics for enhanced generated images. The performance outcomes of the proposed algorithms were categorized by dataset type (in vivo, phantom, or simulation). Both the full-reference metrics and non-reference metrics are reported.
Additionally, the quantitative outcomes for the most commonly reported performance metrics (PSNR and SSIM) are visualized in Figure 3a,b. Both the baseline performance metrics for the low-quality input images and the obtained performance metrics for the images generated by the proposed algorithm are shown. In these figures, each bar represents a dataset, and the color represents the corresponding study. The baseline PSNR of the low-quality input images ranged from 8.65 to 29.24 dB, while the generated images had PSNR values ranging from 13.99 to 36.59 dB. Similarly, the SSIM values ranged from 0.03 to 0.71 for low-quality input images and from 0.30 to 0.93 for the enhanced images.
Table 2. Characteristics of selected studies.
Study | Aim | Dataset (Availability) | Ultrasound Specifications | Deep Learning Algorithm | Loss Function
Awasthi et al., 2022 [32] | Reconstruction of high-quality high-bandwidth images from low-bandwidth images. | Phantom: Five separate datasets, tissue-mimicking, commercial, and in vitro porcine carotid artery (private). | Verasonics: L11-5v transducer with PWs at range −25° to 25°. LQ: Limited bandwidth down to 20%, HQ: Full bandwidth. | Residual Encoder–Decoder Net | Scaled MSE
Gasse et al., 2017 [33] | Reconstruct high-quality US images from a small number of PW acquisitions. | In vivo: Carotid, thyroid, and liver regions of healthy subjects.
Phantom: Gammex (private).
Verasonics: ATL L7-4 probe (5.2 MHz, 128 elements) with range ±15°. LQ: 3 PWs. HQ: 31 PWs. | CNN | L2 loss
Goudarzi et al., 2020 [34] | Achieve the quality of multifocus US images by using a mapping function on a single-focus US image. | Phantom: CIRS phantom and ex vivo lamb liver.
Simulation: Field II software [34] (private).
E-CUBE 12 Alpinion machine: L3-12H transducer (8.5 MHz). LQ: image with single focal point. HQ: Multi-focus image with 3 focal points. | Boundary-Seeking GAN | Binary cross-entropy (discriminator), MSE + boundary-seeking loss (generator)
Guo et al., 2020 [35] | Improve the quality of handheld US devices using a small number of plane waves. | In vivo: Dataset provided by Zhang et al. [42] (carotid artery and brachioradialis images of healthy volunteers).
Phantom: PICMUS [97] dataset, CIRS phantom.
Simulation: US images from natural images using Field II software (only for pre-training LG-Unet) (private and public).
(Derived from dataset sources)
In vivo: Verasonics: L10-5 probe (7.5 MHz). LQ: 3 PWs, HQ: Compounded image of 31 PWs with range −15° to 15°.
Phantom data: Verasonics: L11 probe (5.2 MHz, 128 elements).
Local Global Unet (LG-Unet) + Simplified Residual Network (S_ResNet) | MSE + SSIM (LG-Unet) and L1 (S_ResNet)
Huang et al., 2018 [36] | Improve the quality of ultrasonic B-mode images from 32 to that from 128 channels. | Simulation: Field II software (private). | Simulation dataset at 5 MHz center frequency, 0.308 mm pitch, 71% bandwidth. LQ: 32-channel image. HQ: 128-channel image. | Context Encoder Reconstruction GAN | Not reported
Khan et al., 2021 [15] | Contrast and resolution enhancement of handheld POCUS images. | In vivo: Carotid and thyroid regions.
Phantom: ATS-539 phantom.
Simulation: Intermediate domain images generated by downgrading the in vivo and phantom images acquired from high-end system (private).
LQ: NPUS050 portable US system was used as low-quality input. HQ: E-CUBE 12R US, L3-12 transducer. | Cascade application of unsupervised self-consistent CycleGAN + supervised super-resolution network. | Cycle consistency + adversarial loss (CycleGAN), MAE + SSIM (super-resolution network)
Lu et al., 2020 [37] | High-quality reconstruction for DW imaging using a small number (3) of DW transmissions, competing with those obtained by compounding with 31 DWs. | In vivo: Thigh muscle, finger phalanx, and liver regions.
Phantom: CIRS and Gammex (private).
Verasonics: ATL P4-2 transducer. LQ: 3 DWs. HQ: Compounded image of 31 DWs. | CNN with Inception Module | MSE
Lyu et al., 2023 [38] | Reconstruct super-resolution high-quality images from single-beam plane wave images. | PICMUS 2016 dataset [97] modulated following the guidelines of CUBDL, consisting of
Simulation: Generated with Field II software.
Phantom: CIRS.
In vivo: Carotid artery of healthy volunteer (public).
(Derived from dataset source) Verasonics: L11 probe with range −16° to 16°. LQ: Single PW image. HQ: PW images synthesized from 75 different angles using CPWC. | U-Shaped GAN based on Attention and Residual Connection (ARU-GAN) | Combination of MS-SSIM, classical adversarial, and perceptual loss
Moinuddin et al., 2022 [39] | Enhance US images using a network where the tasks of noise suppression and resolution enhancement are carried out simultaneously. | In vivo: Breast US (BUS) dataset [98], for which high-resolution and low-noise label images were generated using NLLR normal filtration.
Simulation: Salient object detection (SOD) dataset [99] augmented using image formation physics information, divided into two datasets (public).
(Derived from dataset source) Siemens ACUSON Sequoia C512, 17L5 HD transducer (8.5 MHz). | Deep CNN | MSE
Monkam et al., 2023 [40] | Suppress speckle noise and enhance texture and fine details. | Simulation: Original low-quality US images of HC18 Challenge fetal dataset [100], from which high-quality target images and additional low-quality images were generated (for training and testing).
In vivo: publicly available datasets: HC18 Challenge (fetal) [100], BUSI (breast), CCA (common carotid artery) (for testing) (public).
(Derived from HC18 dataset source) Voluson E8 or the Voluson 730 US device. | U-Net with added feature refinement attention block (US-Net) | L1 loss
Tang et al., 2021 [41] | Reconstruct high-resolution, high-quality plane wave images from low-quality plane wave images from different angles. | PICMUS 2016 dataset [97] modulated following the guidelines of CUBDL, consisting of
Simulation: Generated with Field II software.
Phantom: CIRS.
In vivo: Carotid artery of healthy volunteer (public).
(Derived from dataset source) Verasonics: L11 probe with range −16° to 16°. LQ: PW image using 3 angles. HQ: PW images synthesized from 75 different angles using CPWC. | Attention Mechanism and U-Net-Based GAN | Cross-entropy + MSE + perceptual loss
Zhang et al., 2018 [42] | Reconstruct high-quality US images from a small number of PWs (3). | In vivo: Carotid artery and brachioradialis of healthy volunteers.
Phantom: CIRS phantom, ex vivo swine muscles (private).
Verasonics: L10-5 (7.5 MHz) with range −15° to 15°. LQ: 3 PWs. HQ: Coherent compounding using 31 PWs. | GAN with feedforward CNN as both generator and discriminator network | MSE + adversarial loss (generator), binary cross-entropy (discriminator)
Zhou et al., 2018 [43] | Improve the image quality of a single-angle PW image to that of a PW image synthesized from 75 different angles. | PICMUS 2016 dataset [97] synthesized by three different beamforming methods:
In vivo: (1) thyroid gland and (2) carotid arteries of human volunteers (public).
Phantom: CIRS phantom.
Simulation: (1) point image and (2) cyst images generated using Field II software.
(Derived from dataset sources) Verasonics: L11 probe with range −16° to 16°. LQ: Single PW image. HQ: PW images synthesized from 75 different angles. | Multi-scaled CNN | MSE
Zhou et al., 2020 [6] | Improve quality of portable US by mapping low-quality images to corresponding high-quality images. | Single-/multi-angle PWI simulation, phantom and in vivo data (only used for transfer learning). For training and testing:
In vivo: Carotid and thyroid images of healthy volunteers.
Phantom: CIRS and self-made gelatin and raw pork.
Simulation: Field II software (private).
LQ: mSonics MU1, L10-5v transducer. HQ: Verasonics, L11-4v transducer (phantom data) and Toshiba Aplio 500, 7.5 MHz (clinical data). | Two-stage GAN with U-Net and Gradual Learning Strategy | MSE + SSIM + Conv loss
Zhou et al., 2021 [14] | Enhance video quality of handheld US devices. | In vivo: Single- and multi-angle PW videos (only for training). Handheld and high-end images and videos of different body parts of healthy volunteers (for training and testing) (private). | PW videos: Verasonics, L11-4v transducer (6.25 MHz, 128-element) with range −16° to 16°. High-end US (HQ): Toshiba Aplio 500 device. Handheld US (LQ): mSonics MU1, L10-5 transducer. | Low-rank Representation Multi-pathway GAN | Adversarial + MSE + ultrasound-specific perceptual loss
US: ultrasound, LQ: low-quality, HQ: high-quality, MSE: Mean Squared Error, CNN: convolutional neural network, GAN: Generative Adversarial Network, DW: diverging wave, PICMUS: Plane Wave Imaging Challenge in Medical UltraSound, CUBDL: Challenge on Ultrasound Beamforming with Deep Learning, NLLR: non-local low-rank, PW: plane wave.
Table 3. Outcomes of selected studies.
Study | Computation Time (Source Code Availability) | Number of Images in Test Set | Performance (±SD) of Low-Quality Input Image | Performance (±SD) of Generated Image
Awasthi et al., 2022 [32] | “Light weight” (available) | Phantom:
dataset 1: n = 134
dataset 2: n = 90
dataset 3: n = 31
dataset 4: n = 70
dataset 5: n = 239
Phantom:
dataset 1: PSNR = 17.049 ± 1.107, RMSE = 0.141 ± 0.016, PC = 0.788
dataset 2: PSNR = 15.768 ± 1.376, RMSE = 0.165 ± 0.026
dataset 3: PSNR = 13.885 ± 1.276, RMSE = 0.204 ± 0.032
dataset 4: PSNR = 16.297 ± 1.212, RMSE = 0.155 ± 0.021
dataset 5: PSNR = 15.487 ± 1.876, RMSE = 0.172 ± 0.040
Phantom:
dataset 1: PSNR = 20.903 ± 1.189, RMSE = 0.091 ± 0.012, PC = 0.86
dataset 2: PSNR = 20.523 ± 1.242, RMSE = 0.095 ± 0.013
dataset 3: PSNR = 13.985 ± 1.120, RMSE = 0.201 ± 0.025
dataset 4: PSNR = 21.457 ± 1.238, RMSE = 0.085 ± 0.012
dataset 5: PSNR = 17.654 ± 1.536, RMSE = 0.133 ± 0.022
Gasse et al., 2017 [33] | Not reported (not available) | Mixed test set of in vivo and phantom data:
n = 1000
Only graphs given showing CR and LR reached by the proposed model with 3 PWs compared to the standard compounding of an increasingly larger number of PWs. | -
Goudarzi et al., 2020 [34] | Not reported (available) | Phantom
(CIRS):
n = -
Simulation:
n = 360
Phantom:
FWHM = 1.52, CNR = 9.6
Simulation: SSIM = 0.622 ± 0.02, PSNR = 23.27 ± 1, FWHM = 1.3, CNR = 7.2
Phantom:
FWHM = 1.44, CNR = 11.1
Simulation:
SSIM = 0.769 ± 0.017, PSNR = 25.32 ± 0.919, FWHM = 1.09, CNR = 8.02
Guo et al., 2020 [35] | Not reported (not available) | 225 (out of 9225) patch images from the in vivo, phantom, and simulation datasets (distribution between datasets not reported).
PSNR = 16.04
Phantom:
FWHM = 1.8 mm, CR = 0.36, CNR = 24.93
In vivo:
PSNR = 18.94
Phantom:
FWHM = 1.3 mm, CR = 0.79, CNR = 32.81
Huang et al., 2018 [36] | Not reported (not available) | Simulation:
n = 1
Simulation:
CNR: 0.939, PICMUS CNR: 2.381, FWHM: 13.34
Simulation:
CNR: 1.508, PICMUS CNR: 6.502, FWHM: 11.15
Khan et al., 2021 [15] | 13.18 ms (not available) | In vivo:
n = 43
Phantom:
n = 32
Not reportedGain compared to simulated intermediate quality images of in vivo and phantom data (only measuring fitness of super-resolution network):
PSNR = 13.58, SSIM = 0.63
Non-reference metrics for entire proposed method for in vivo and phantom data:
CR = 14.96, CNR = 2.38, GCNR = 0.8604 (which are 21.77%, 30.06%, and 44.42% higher than those of the low-quality input images).
Lu et al., 2020 [37] | 0.75 ± 0.03 ms (not available) | Mixed in vivo and phantom data:
n = 1000
Mixed in vivo and phantom data:
PSNR = 29.24 ± 1.57, SSIM = 0.83 ± 0.15, MI = 0.51 ± 0.16
Non-reference metrics are only shown in graph form for low-quality images.
Mixed in vivo and phantom data:
PSNR = 31.13 ± 1.47, SSIM = 0.93 ± 0.06, MI = 0.82 ± 0.20, CR (near field) = 19.54, CR (far field) = 14.95, CNR (near field) = 7.63, CNR (far field) = 5.21, LR (near field) = 0.90, LR (middle field) = 1.64, LR (far field) = 2.35
Lyu et al., 2023 [38] | Not reported (not available) | In vivo:
n = 150
Phantom:
n = 150
Simulation:
n = 150
No performance metrics available for low-quality images; only for other traditional deep learning methods for comparison.In vivo:
PSNR = 26.508, CW-SSIM = 0.876, NCC = 0.943
Phantom: FWHM = 0.424, CR = 26.900, CNR = 3.693
Simulation: FWHM = 0.277, CR = 39.472, CNR = 5.141
Moinuddin et al., 2022 [39] | Not reported | In vivo:
n = 33
Simulation:
SOD-1: n = 200
SOD-2: n = 200
Evaluated with 5-fold cross-validation approach.
In vivo:
PSNR = 26.0071 ± 2.3083, SSIM = 0.7098 ± 0.0761
Simulation:
SOD-1: PSNR = 12.1587 ± 0.7839, SSIM = 0.5570 ± 0.1205
SOD-2: PSNR = 12.5272 ± 0.8243, SSIM = 0.1556 ± 0.1451,
GCNR = 0.9936 ± 0.0039
In vivo:
PSNR = 26.9112 ± 2.3025, SSIM = 0.7522 ± 0.0635
Simulation:
SOD-1: PSNR = 25.5275 ± 2.9712, SSIM = 0.6946 ± 0.1267
SOD-2: PSNR = 32.4719 ± 2.6179, SSIM = 0.8785 ± 0.0766,
GCNR = 0.9966 ± 0.0026
Monkam et al., 2023 [40] | 52.16 ms (not available) | In vivo:
HC18: n = 30
BUSI: n = 30
CCA: n = 30
Simulation:
HC18: n = 335
No performance metrics available for low-quality images; only for other enhancement methods for comparison.In vivo:
HC18: SNR = 39.32, CNR = 1.10, AGM = 27.46, ENL = 15.71
BUSI: SNR = 34.54, CNR = 4.20, AGM = 39.88, ENL = 17.04
CCA: SNR = 40.87, CNR = 2.59, AGM = 35.92, ENL = 23.03
Simulation:
HC18: SSIM = 0.9155, PSNR = 32.87, EPI = 0.6371
Tang et al., 2021 [41] | Not reported (not available) | n = 360 (total number of images in test set for the in vivo, phantom, and simulation datasets, distribution not reported)
FWHM = 0.5635, CR = 8.718, CNR = 1.109, GCNR = 0.609
Simulation:
FWHM = 0.2808, CR = 13.769, CNR = 1.576, GCNR = 0.735
In vivo:
PSNR = 28.278, SSIM = 0.659, MI = 0.9980, NCC = 0.963
Phantom:
FWHM = 0.3556, CR = 24.571, CNR = 2.495, GCNR = 0.915
Simulation:
FWHM = 0.2695, CR = 39.484, CNR = 5.617, GCNR = 0.998
Zhang et al., 2018 [42] | Not reported (not available) | In vivo:
n = 500
phantom:
n = 30
Mixed in vivo and phantom test set: FWHM = 0.50, CR = 10.23, CNR = 1.30 | Mixed in vivo and phantom test set: FWHM = 0.53, CR = 19.46, CNR = 2.25
Zhou et al., 2018 [43] | Not reported (not available) | In vivo:
Thyroid dataset: n = 30
Simulation:
Point dataset: n = 30
Cyst dataset: n = 30
Evaluated with 5-fold cross-validation approach.
In vivo:
Thyroid dataset: PSNR = 14.9235, SSIM = 0.0291, MI = 0.3474
Simulation:
Point dataset: PSNR = 24.1708, SSIM = 0.1962, MI = 0.4124,
FWHM = 0.49
Cyst dataset: PSNR = 15.8860, SSIM = 0.5537, MI = 1.1976,
CR = 137.0473
In vivo:
Thyroid dataset: PSNR = 21.7248, SSIM = 0.3034, MI = 0.8856
Simulation:
Point dataset: PSNR = 36.5884, SSIM = 0.9216, MI = 0.4483,
FWHM = 0.196
Cyst dataset: PSNR = 24.0167, SSIM = 0.6135, MI = 1.5622,
CR = 184.0432
Zhou et al., 2020 [6] | Not reported (not available) | In vivo:
n = 94
Phantom:
n = 40
Simulation:
n = 56
Evaluated with 5-fold cross validation approach.
In vivo:
PSNR = 8.65 ± 1.32, SSIM = 0.18 ± 0.04, MI = 0.22 ± 0.13,
BRISQUE = 38.91 ± 4.99
Phantom:
PSNR = 15.26 ± 2.91, SSIM = 0.12 ± 0.03, MI = 0.20 ± 0.11,
BRISQUE = 24.61 ± 4.50
Simulation:
PSNR = 16.38 ± 2.35, SSIM = 0.19 ± 0.06, MI = 0.22 ± 0.16,
BRISQUE = 29.08 ± 3.45
In vivo:
PSNR = 18.08 ± 1.57, SSIM = 0.41 ± 0.05, MI = 0.68 ± 0.18,
BRISQUE = 35.25 ± 4.13
Phantom:
PSNR = 24.70 ± 1.11, SSIM = 0.64 ± 0.07, MI = 0.26 ± 0.09,
BRISQUE = 21.68 ± 3.36
Simulation:
PSNR = 28.50 ± 2.01, SSIM = 0.59 ± 0.02, MI = 0.42 ± 0.04,
BRISQUE = 23.30 ± 3.09
Zhou et al., 2021 [14] | Not reported (not available) | In vivo:
n = 40 videos
For full-reference metrics, a single frame of the handheld video was used, and the most similar frame in the high-end video was selected.
In vivo:
PSNR = 12.68 ± 3.45, SSIM = 0.24 ± 0.06, MI = 0.71 ± 0.09,
NIQE = 19.48 ± 4.66, ultrasound quality score = 0.06 ± 0.03
In vivo:
PSNR = 19.95 ± 3.24, SSIM = 0.45 ± 0.06, MI = 1.05 ± 0.07,
NIQE = 6.95 ± 1.97, ultrasound quality score = 0.89 ± 0.16
AGM: average gradient magnitudes, BRISQUE: blind referenceless image spatial quality evaluator, CNR: contrast-to-noise ratio, CR: contrast ratio, ENL: equivalent number of looks, EPI: edge preservation index, FWHM: full width at half maximum, GCNR: generalized contrast-to-noise ratio, LR: likelihood ratio, MI: mutual information, MSE: Mean Squared Error, MS-SSIM: Multi-scale Structural Similarity Index Measurement, NCC: normalized cross-correlation, NIQE: natural image quality evaluator, PC: Pearson correlation, PSNR: Peak Signal-to-Noise Ratio, RMSE: Root Mean Squared Error, SNR: Signal-to-Noise Ratio, SSIM: Structural Similarity Index Measurement.
Figure 3. Visualizations of the obtained (a) PSNR and (b) SSIM for each dataset in the included studies for both the low-quality input images and the generated images by the proposed algorithm. The color represents the dataset type (in vivo, phantom, or simulation data). Note that some studies are represented by multiple bars, since they evaluated multiple datasets.

3.5. Meta-Analysis Results

Six studies reported PSNR values [6,14,32,34,37,39] and five reported SSIM values [6,14,34,37,39], with standard deviations for both the low-quality input images and the generated images. Consequently, these studies were included in a meta-analysis. The meta-analysis revealed a mean increase in the PSNR between the generated and low-quality input images of 6.60 ± 2.99 (Figure 4a). The mean increase in the SSIM was 0.28 ± 0.15 (Figure 4b). Both increases were statistically significant (p = 0.00). However, high heterogeneity was observed in both meta-analyses (I2 = 100%), indicating substantial variability among the included studies.

4. Discussion

Point-of-care ultrasound (POCUS) is recognized for its affordability and convenience, but it suffers from a lower image quality compared to conventional high-end, cart-based ultrasound systems. Recent advances in deep learning have achieved state-of-the-art performance in various problems of image processing, including the enhancement of image quality. This systematic review provides an overview of research focused on ultrasound image enhancement using deep learning methods suitable for real-time POCUS applications. A comprehensive description of the methods used, as well as a further analysis of the performance of the proposed algorithms, is given.
It was observed that the majority of studies utilized GANs incorporating CNNs in both the generator and discriminator networks. The emergence of GANs in the medical imaging field, as described by Liu et al. [23], is noteworthy, as these models are capable of generating highly realistic medical images, effectively bridging the gap between supervised learning and image generation. Despite this trend, there was considerable variation in the GAN and CNN architectures, loss functions, and evaluation methods across studies. Some studies compared their proposed network with existing networks to benchmark quality enhancement, while others compared the generated images to the original low-quality input images and/or paired high-quality reference images. For the methods that quantitatively assessed the networks’ performance, a variety of image quality metrics was reported. In addition to metric-based assessments, some studies incorporated visual assessments or tested the effect of quality enhancement on downstream tasks such as segmentation or diagnosis. Often, a combination of these evaluation methods was utilized to provide a more comprehensive overview of the proposed algorithm’s performance.
Variability in evaluation methods and performance metrics poses a challenge for direct comparisons among all articles. However, the articles that reported the most common performance metrics (PSNR and SSIM) for the low-quality input images or the generated images allowed for some comparisons. Figure 3a,b reveal substantial disparities in low-quality input image performance, indicating varying baseline qualities across studies. These differences may be explained by the heterogeneity in ultrasound devices and dataset types. Nevertheless, consistent improvements in image quality were observed when comparing enhanced images to the original inputs, as shown in Figure 3a,b. The meta-analysis further supports these findings, showing a statistically significant increase in the PSNR and SSIM values. This indicates the potential of the proposed deep learning algorithms for enhancing the quality of ultrasound images. However, variability in the extent of performance gain across datasets and articles is notable. This is further supported by the I2 score of both meta-analyses (I2 = 100%), indicating high heterogeneity. This variability complicates the determination of achievable quality gain. Notably, simulated datasets generally exhibited higher performance gains compared to in vivo and phantom datasets, suggesting that simulation results may not fully represent clinical scenarios.
This review focused on ultrasound enhancement for POCUS applications but also included studies on ultrasound in general, to ensure comprehensive coverage of relevant algorithms. A key selection criterion was the simultaneous addressing of multiple distortion types, which led to the inclusion of 15 articles. Interestingly, despite the importance of computation time for real-time applications, most articles did not report this aspect. Furthermore, the lack of source code availability hinders the reproducibility of the conducted research. Studies focusing specifically on enhancing POCUS images commonly paired low-quality POCUS images with high-quality images from high-end machines. Although these image pairs are expected to contain the same locational information, they often suffer from locational differences due to acquisition challenges, which can only be partially mitigated by registration methods and consequently impact network training. In contrast, studies using Plane Wave Imaging (PWI) did not encounter this issue, as they used the same device with different numbers of angles, resulting in nearly identical locational information. Future research could benefit from developing more accurately paired datasets, particularly using ex vivo data, to improve image-to-image translation techniques for POCUS.
Several limitations of this review should be noted. First, the selection of articles that addressed multiple quality enhancement aspects might have excluded relevant studies focusing on single aspects. Second, the heterogeneous nature of the included studies, with varying datasets and ultrasound devices, complicates direct and fair comparisons. Although we attempted to group datasets into in vivo, phantom, and simulation categories, diversity remained within these subgroups. Lastly, performing a meta-analysis for machine learning-based research presents unique challenges, as this method was originally designed for comparing cases and controls in medical treatments. Aspects such as the number of images in “case” and “control” groups, the use of cross-validation, and dataset similarities due to augmentation were not consistently accounted for. Therefore, the meta-analysis should be seen primarily as an illustrative tool, and caution is needed when drawing firm conclusions about the precise effects of ultrasound image enhancement in terms of expected PSNR and SSIM gain.

5. Conclusions

This review thoroughly examined the progress in ultrasound image quality enhancement using deep learning, with a focus on applications suitable for POCUS. Ultrasound image enhancement through deep learning is a vibrant research field. However, the majority of the studies performed focus on a single aspect of quality enhancement, which is less effective for POCUS devices that suffer from multiple distortion types. Studies addressing multiple quality aspects demonstrate the potential for substantial image quality improvements across various ultrasound devices. The PSNR values for low-quality input images ranged from 8.65 to 29.24 dB, improving to 13.99 to 36.59 dB for the enhanced images. Similarly, the SSIM values ranged from 0.03 to 0.71 and 0.30 to 0.93 for the low-quality input images and the enhanced images, respectively. However, quantifying the expected performance gain precisely remains challenging due to the heterogeneous nature of the studies. It is important to note that studies often neglect to report computation times, which is a crucial factor for enabling real-time applications. Future research should prioritize the development of standardized evaluation metrics, report computational efficiency, and ensure reproducibility by sharing source code. Additionally, creating accurate paired datasets with POCUS devices and high-end US images is essential for advancing this field and achieving reliable real-time image enhancement.

Author Contributions

Conceptualization, F.G. and B.D.; methodology, H.G.A.v.d.P., L.M.v.K., M.W., F.G. and B.D.; software, H.G.A.v.d.P.; validation, H.G.A.v.d.P., L.M.v.K., M.W., F.G. and B.D.; formal analysis, H.G.A.v.d.P., F.G. and B.D.; investigation, H.G.A.v.d.P. and F.G.; resources, B.D.; data curation, H.G.A.v.d.P. and F.G.; writing—original draft preparation, H.G.A.v.d.P. and F.G.; writing—review and editing, H.G.A.v.d.P., L.M.v.K., M.W., F.G. and B.D.; visualization, H.G.A.v.d.P. and F.G.; supervision, F.G. and B.D.; project administration, B.D. All authors have read and agreed to the published version of the manuscript.

Funding

Research at The Netherlands Cancer Institute has been supported by institutional grants of the Dutch Cancer Society and of the Dutch Ministry of Health, Welfare and Sport.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Search Strings

PubMed:
(“Ultrasonography”[Mesh] OR “ultraso*”[tiab]) AND (“Deep Learning”[Mesh] OR “deep learning”[tiab] OR “deep-learning”[tiab] OR “neural network*”[tiab] OR “generative adversarial*”[tiab] OR “ANN”[tiab] OR “CNN”[tiab] OR “RNN”[tiab] OR “LSTM”[tiab] OR “DNN”[tiab]) AND (“Image Enhancement”[Mesh] OR “enhancement”[tiab] OR “quality improving”[tiab: 2] OR “quality improvement”[tiab: 2] OR “quality improved”[tiab: 2] OR “quality enhanced”[tiab: 2] OR “quality enhancing”[tiab: 2] OR “resolution”[tiab] OR “Reconstruction”[tiab] OR “denoising”[tiab] OR “noise”[tiab] OR “despeckling”[tiab]) AND (2017:2024[pdat]) NOT “segmentation”[tiab] NOT “classification”[tiab] NOT “detection”[tiab] NOT “quantification”[tiab] NOT “detection”[tiab] NOT “localization microscopy”[tiab] NOT “microvessel”[tiab] NOT “microbubble”[tiab] NOT “tomography”[ti] NOT “ultrasound comp* tomography”[tiab: 0] NOT “raw”[tiab] NOT “radio-frequency”[tiab] NOT “beamforming”[tiab] NOT “sparse”[tiab] NOT “photoacoustic”[tiab] NOT “veloc*”[tiab] NOT “elastograph*”[tiab] NOT “diagnos*”[ti] NOT “review”[ti]
Scopus:
((TITLE-ABS-KEY (“deep learning”) OR TITLE-ABS-KEY (“deep-learning”) OR TITLE-ABS-KEY (“neural network”) OR TITLE-ABS-KEY (“generative adversarial”) OR TITLE-ABS-KEY (“ANN”) OR TITLE-ABS-KEY (“CNN”) OR TITLE-ABS-KEY (“RNN”) OR TITLE-ABS-KEY (“LSTM”) OR TITLE-ABS-KEY (“DNN”) ) AND (TITLE-ABS-KEY (“ultrasound”) OR TITLE-ABS-KEY (“ultrasonography”) ) AND (TITLE-ABS-KEY (“enhancement”) OR (TITLE-ABS-KEY (“quality”) W/2 TITLE-ABS-KEY (“improv*”) ) OR (TITLE-ABS-KEY (“quality ”) W/2 TITLE-ABS-KEY (“enhanc*”) ) OR TITLE-ABS-KEY (“resolution”) OR TITLE-ABS-KEY (“reconstruction”) OR TITLE-ABS-KEY (“denoising”) OR TITLE-ABS-KEY (“noise”) OR TITLE-ABS-KEY (“despeckling”)) AND PUBYEAR > 2017 AND PUBYEAR < 2025 AND NOT TITLE-ABS-KEY (“segmentation”) AND NOT TITLE-ABS-KEY (“classification”) AND NOT TITLE-ABS-KEY (“detection”) AND NOT TITLE-ABS-KEY (“quantification”) AND NOT TITLE-ABS-KEY (“localization microscopy”) AND NOT TITLE-ABS-KEY (“microvessel*”) AND NOT TITLE-ABS-KEY (“microbubble*”) AND NOT TITLE (“tomography”) AND NOT TITLE-ABS-KEY (“ultrasound comp* tomography”) AND NOT TITLE-ABS-KEY (“raw”) AND NOT TITLE-ABS-KEY (“radio-frequency”) AND NOT TITLE-ABS-KEY (“beamforming”) AND NOT TITLE-ABS-KEY (“sparse”) AND NOT TITLE-ABS-KEY (“photoacoustic”) AND NOT TITLE-ABS-KEY (“veloc*”) AND NOT TITLE-ABS-KEY (“elastograph*”) AND NOT TITLE (“diagnos*”) AND NOT TITLE (“review*”)

References

  1. Hashim, A.; Tahir, M.J.; Ullah, I.; Asghar, M.S.; Siddiqi, H.; Yousaf, Z. The utility of point of care ultrasonography (POCUS). Ann. Med. Surg. 2021, 71, 102982. [Google Scholar] [CrossRef] [PubMed]
  2. Riley, A.; Sable, C.; Prasad, A.; Spurney, C.; Harahsheh, A.; Clauss, S.; Colyer, J.; Gierdalski, M.; Johnson, A.; Pearson, G.D.; et al. Utility of hand-held echocardiography in outpatient pediatric cardiology management. Pediatr. Cardiol. 2014, 35, 1379–1386. [Google Scholar] [CrossRef] [PubMed]
  3. Gilbertson, E.A.; Hatton, N.D.; Ryan, J.J. Point of care ultrasound: The next evolution of medical education. Ann. Transl. Med. 2020, 8, 846. [Google Scholar] [CrossRef] [PubMed]
  4. Stock, K.F.; Klein, B.; Steubl, D.; Lersch, C.; Heemann, U.; Wagenpfeil, S.; Eyer, F.; Clevert, D.A. Comparison of a pocket-size ultrasound device with a premium ultrasound machine: Diagnostic value and time required in bedside ultrasound examination. Abdom. Imaging 2015, 40, 2861–2866. [Google Scholar] [CrossRef]
  5. Han, P.J.; Tsai, B.T.; Martin, J.W.; Keen, W.D.; Waalen, J.; Kimura, B.J. Evidence basis for a point-of-care ultrasound examination to refine referral for outpatient echocardiography. Am. J. Med. 2019, 132, 227–233. [Google Scholar] [CrossRef] [PubMed]
  6. Zhou, Z.; Wang, Y.; Guo, Y.; Qi, Y.; Yu, J. Image Quality Improvement of Hand-Held Ultrasound Devices with a Two-Stage Generative Adversarial Network. IEEE Trans. Biomed. Eng. 2020, 67, 298–311. [Google Scholar] [CrossRef]
  7. Nelson, B.P.; Sanghvi, A. Out of hospital point of care ultrasound: Current use models and future directions. Eur. J. Trauma Emerg. Surg. 2016, 42, 139–150. [Google Scholar] [CrossRef] [PubMed]
  8. Kolbe, N.; Killu, K.; Coba, V.; Neri, L.; Garcia, K.M.; McCulloch, M.; Spreafico, A.; Dulchavsky, S. Point of care ultrasound (POCUS) telemedicine project in rural Nicaragua and its impact on patient management. J. Ultrasound 2015, 18, 179–185. [Google Scholar] [CrossRef]
  9. Stewart, K.A.; Navarro, S.M.; Kambala, S.; Tan, G.; Poondla, R.; Lederman, S.; Barbour, K.; Lavy, C. Trends in Ultrasound Use in Low and Middle Income Countries: A Systematic Review. Int. J. MCH AIDS 2020, 9, 103–120. [Google Scholar] [CrossRef]
  10. Becker, D.M.; Tafoya, C.A.; Becker, S.L.; Kruger, G.H.; Tafoya, M.J.; Becker, T.K. The use of portable ultrasound devices in low- and middle-income countries: A systematic review of the literature. Trop. Med. Int. Health 2016, 21, 294–311. [Google Scholar] [CrossRef]
  11. McBeth, P.B.; Hamilton, T.; Kirkpatrick, A.W. Cost-effective remote iPhone-teathered telementored trauma telesonography. J. Trauma Acute Care Surg. 2010, 69, 1597–1599. [Google Scholar] [CrossRef] [PubMed]
  12. Evangelista, A.; Galuppo, V.; Méndez, J.; Evangelista, L.; Arpal, L.; Rubio, C.; Vergara, M.; Liceran, M.; López, F.; Sales, C. Hand-held cardiac ultrasound screening performed by family doctors with remote expert support interpretation. Heart 2016, 102, 376–382. [Google Scholar] [CrossRef] [PubMed]
  13. Salimi, N.; Gonzalez-Fiol, A.; Yanez, N.D.; Fardelmann, K.L.; Harmon, E.; Kohari, K.; Abdel-Razeq, S.; Magriples, U.; Alian, A. Ultrasound Image Quality Comparison Between a Handheld Ultrasound Transducer and Mid-Range Ultrasound Machine. Pocus J. 2022, 7, 154–159. [Google Scholar] [CrossRef] [PubMed]
  14. Zhou, Z.; Guo, Y.; Wang, Y. Handheld Ultrasound Video High-Quality Reconstruction Using a Low-Rank Representation Multipathway Generative Adversarial Network. IEEE Trans. Neural Networks Learn. Syst. 2020, 32, 575–588. [Google Scholar] [CrossRef]
  15. Khan, S.; Huh, J.; Ye, J.C. Contrast and resolution improvement of pocus using self-consistent cyclegan. In Proceedings of the MICCAI Workshop on Domain Adaptation and Representation Transfer; Springer: Cham, Switzerland, 2021; pp. 158–167. [Google Scholar]
  16. Jafari, M.H.; Girgis, H.; Van Woudenberg, N.; Moulson, N.; Luong, C.; Fung, A.; Balthazaar, S.; Jue, J.; Tsang, M.; Nair, P. Cardiac point-of-care to cart-based ultrasound translation using constrained CycleGAN. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 877–886. [Google Scholar] [CrossRef] [PubMed]
  17. Henderson, R.; Murphy, S. Portability Enhancing Hardware for a Portable Ultrasound System. 2017. US Patent No. 9,629,606, 25 April 2017. [Google Scholar]
  18. Lockwood, G.R.; Talman, J.R.; Brunke, S.S. Real-time 3-D ultrasound imaging using sparse synthetic aperture beamforming. IEEE T-UFFC 1998, 45, 980–988. [Google Scholar] [CrossRef]
  19. Matrone, G.; Savoia, A.S.; Caliano, G.; Magenes, G. The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging. IEEE Trans. Med Imaging 2014, 34, 940–949. [Google Scholar] [CrossRef]
  20. Ortiz, S.H.C.; Chiu, T.; Fox, M.D. Ultrasound image enhancement: A review. Biomed. Signal Process. Control 2012, 7, 419–428. [Google Scholar] [CrossRef]
  21. Anaya-Isaza, A.; Mera-Jiménez, L.; Zequera-Diaz, M. An overview of deep learning in medical imaging. Inform. Med. Unlocked 2021, 26, 100723. [Google Scholar] [CrossRef]
  22. Zhang, H.M.; Dong, B. A review on deep learning in medical image reconstruction. J. Oper. Res. Soc. China 2020, 8, 311–340. [Google Scholar] [CrossRef]
  23. Liu, J.; Li, K.; Dong, H.; Han, Y.; Li, R. Medical Image Processing based on Generative Adversarial Networks: A Systematic Review. Curr Med Imaging 2023, 20, e15734056258198. [Google Scholar] [CrossRef] [PubMed]
  24. Lepcha, D.C.; Goyal, B.; Dogra, A.; Sharma, K.P.; Gupta, D.N. A deep journey into image enhancement: A survey of current and emerging trends. Inf. Fusion 2023, 93, 36–76. [Google Scholar] [CrossRef]
  25. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Bmj 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  26. Chakraborty, C.; Bhattacharya, M.; Pal, S.; Lee, S.S. From machine learning to deep learning: An advances of the recent data-driven paradigm shift in medicine and healthcare. Curr. Res. Biotechnol. 2023, 7, 100164. [Google Scholar] [CrossRef]
  27. Makwana, G.; Yadav, R.N.; Gupta, L. Analysis of Various Noise Reduction Techniques for Breast Ultrasound Image Enhancement. In Internet of Things and Its Applications: Select Proceedings of ICIA 2020; Springer: Berlin/Heidelberg, Germany, 2022; pp. 303–313. [Google Scholar]
  28. Michailovich, O.V.; Tannenbaum, A. Despeckling of medical ultrasound images. IEEE T-UFFC 2006, 53, 64–78. [Google Scholar] [CrossRef] [PubMed]
  29. Ng, A.; Swanevelder, J. Resolution in ultrasound imaging. Contin. Educ. Anaesthesia, Crit. Care Pain 2011, 11, 186–192. [Google Scholar] [CrossRef]
Figure 1. Flowchart visualizing the results of the PRISMA-based article selection process.
Figure 2. Overview of the distribution of quality enhancement aspects addressed in the included articles.
Figure 4. Forest plots of (a) the mean PSNR difference (95% CI) and (b) the mean SSIM difference (95% CI).