Review

Role of Four-Chamber Heart Ultrasound Images in Automatic Assessment of Fetal Heart: A Systematic Understanding

1 Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
2 Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India
3 Department of Obstetrics and Gynecology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
4 Department of Cardiology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
5 Department of Cardiology, National Heart Centre Singapore, Singapore 169609, Singapore
6 Department of Medicine—Cardiology, Columbia University Medical Center, New York, NY 10032, USA
7 School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
8 Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
9 School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
10 Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
11 Department of Electronics and Telecommunications, Politecnico di Torino, 10129 Torino, Italy
12 School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
13 Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
14 International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 8608555, Japan
* Author to whom correspondence should be addressed.
Informatics 2022, 9(2), 34; https://doi.org/10.3390/informatics9020034
Submission received: 15 January 2022 / Revised: 10 April 2022 / Accepted: 11 April 2022 / Published: 18 April 2022
(This article belongs to the Special Issue Feature Papers in Medical and Clinical Informatics)

Abstract

The fetal echocardiogram is useful for monitoring and diagnosing cardiovascular diseases in the fetus in utero. Importantly, it can be used for assessing prenatal congenital heart disease, for which timely intervention can improve the unborn child’s outcomes. In this regard, artificial intelligence (AI) can be used for the automatic analysis of fetal heart ultrasound images. This study reviews nondeep and deep learning approaches for assessing the fetal heart using standard four-chamber ultrasound images. The state-of-the-art techniques in the field are described and discussed. The compendium demonstrates the capability of automatic assessment of the fetal heart using AI technology. This work can serve as a resource for research in the field.

1. Introduction

Congenital heart disease (CHD) has a prevalence ranging from 2 to 6.5 per 1000 live births [1,2] and is a major cause of neonatal morbidity and mortality. The incidence of CHD is not related to the maternal risk status [3]. More than half of all CHD cases require early medical and/or surgical intervention at or soon after birth, which underscores the importance of antenatal screening [4]. Fetal echocardiography is a specialized detailed ultrasound (US) examination that evaluates cardiac structure and function in utero [5] for prenatal diagnosis of CHD as well as myocardial dysfunction [6]. Fetal echocardiography is typically performed between 18 and 22 weeks of gestation when the fetal heart has matured sufficiently to be feasibly imaged using transabdominal US. Prior to that, transvaginal US was utilized to detect the developing fetal heart with less granularity at 12 to 13 weeks of gestation, but the findings would usually have to be confirmed later with detailed fetal echocardiography. Fetal echocardiography is indicated in pregnancies where maternal or fetal factors are present that may increase the risk of cardiac anomaly. Prenatal diagnosis of CHD or myocardial dysfunction significantly impacts pregnancy course, the decision for termination, fetal therapy, delivery mode, and the need for tertiary care [4].
US imaging of the fetal heart is challenging, as the position of the fetus relative to the mother is highly variable, and the operator must be familiar with normal and variant cardiovascular anatomy to orient the US probe to image the heart in the correct view [7]. The standard four-chamber view is the most basic (Figure 1), on which 43 to 96% of fetal anomalies can be detected [5,8]. Test sensitivity can be improved by adding comprehensive views of the left and right ventricular outflow tracts [9]. Sequential segmental analysis [10,11] is a systematic approach to identify the morphologic atria, ventricles [12], and great vessels to deconstruct the veno-atrial, atrioventricular, and ventriculo-arterial connections to arrive at an anatomical diagnosis in CHD.
In addition to cardiac structure, fetal echocardiography allows for the assessment of heart function, including the quantification of chamber dimensions and ventricular ejection fraction. However, the latter is load-dependent and may not represent intrinsic myocardial contractility. Advanced techniques such as tissue Doppler and speckle tracking imaging may be used to detect subclinical systolic and diastolic myocardial dysfunction, which is especially relevant in high-risk pregnancies with intrauterine growth restriction or gestational diabetes mellitus [13]. On a cautionary note, impaired myocardial function or cardiac output in utero may be a sign of compromised umbilical venous return from placental insufficiency [14,15], which puts the fetus at risk of developing hypoxia and acidosis during parturition.
The interpretation of fetal echocardiographic images demands expertise and is time-consuming. Subtle myocardial dysfunction may be missed, even by skilled operators. Computer-aided diagnostic systems can assist physicians in screening out abnormalities for review to save time and potentially enhance diagnostic accuracy. In this regard, many artificial intelligence (AI) approaches have been developed for medical US image analysis [16,17,18,19,20]. This review aims to systematically assess AI methods for automated analysis of fetal US images. In particular, we focused on studies that utilized the standard four-chamber view, as it is the most basic, has the highest diagnostic yield, and will allow us to compare the different AI techniques using a uniform standard.
The remainder of the paper is structured as follows: Section 2 details our search methodology, Section 3 summarizes the included studies on AI in fetal echocardiography, and Section 4 analyzes four-chamber (4Ch) heart images. Section 5 presents the results and discussion, and Section 6 provides the conclusion.

2. Article Selection for Systematic Review

Following PRISMA guidelines [21], we searched for scientific articles published between January 2010 and November 2021 on Scopus, IEEE Xplore, PubMed, Google Scholar, and Web of Science using the search string: ((“segmentation” OR “classification” OR “identification” OR “categorization” OR “detection”) AND (“fetal heart”) AND (“echocardiography” OR “cardiac ultrasound”) AND (“deep learning” OR “machine learning” OR “artificial intelligence” OR “feature based”)). Other keywords, such as “fetal heart ultrasound”, “fetal echocardiography”, and “fetal heart segmentation”, were also used to retrieve technical articles. We included articles that used AI techniques for segmentation or diagnostic classification of standard four-chamber US images of the fetal heart and excluded non-English publications, view detection using fetal heart US, cardiac cycle detection using fetal heart US, abnormality detection other than in four-chamber US images, non-US studies, medical articles unrelated to AI content, case studies, adult studies, animal studies, articles on fetal growth and related abnormalities, and articles that used AI for segmentation of organs other than the heart. The search query returned 190 articles. After reading the articles, 145 were discarded as duplicates or because they met the exclusion criteria, leaving 45 articles: 40 technical and five review articles. The review articles were retained because they demonstrate the significance of fetal heart US images in automated assessment with AI technology. The complete selection procedure is shown in Figure 2.

3. AI in Fetal Echocardiography

Figure 3 depicts the pipeline for implementing AI-based computer-aided diagnosis for fetal heart echocardiography, which comprises distinct tasks completed automatically in sequence. First, the region of interest (ROI) containing the fetal heart image is localized, and extraneous signals are removed in order to reduce data dimensionality and preserve downstream computational efficiency. Second, structures of interest, including endocardial borders or US speckles, are segmented, i.e., identified and tracked spatially (and, where applicable, temporally), which facilitates the derivation of quantitative measures, e.g., indices of chamber function. These first two steps are collectively referred to as segmentation. Third, input samples are classified into prespecified diagnostic categories. Both fetal heart US image segmentation and classification tasks can be performed using deep learning (DL), non-DL, or hybrid methodology [16,18,20].
Traditional non-DL methods generally comprise distinct sequential steps of feature extraction, detection, dimensionality reduction, and classification [17,22,23,24,25,26], and they typically require design input at every step. For instance, established algorithms based on thresholding, graph theory, gradient vector flow, etc. would be used for the segmentation step [19]. In contrast, DL is a newer type of AI that incorporates neural networks to mimic human reasoning [20], and feature engineering is learned automatically within the model rather than being imposed. DL is modeled as a network of layers (a deep architecture) with a hierarchy of features in which features at the next higher level are informed and defined by features at the preceding lower level [27]. Supervised DL models are extensively used in US image segmentation and classification tasks. The most popular architectures are convolutional neural networks (CNNs) and recurrent neural networks (RNNs). A CNN consists of convolutional layers, pooling layers, and rectified linear unit (ReLU) activations, with batch normalization layers added if required; the last layer is a fully connected layer. These layers are stacked to form the deep model [28]. CNNs are widely used for static two-dimensional image analysis. In contrast, RNNs are more adept at analyzing time-varying input signals; they compute the hidden state vector and output vector iteratively and use the long short-term memory (LSTM) module to access long-range context [29].
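To make the stacked convolution/batch-normalization/ReLU/pooling blocks terminated by a fully connected layer concrete, a minimal sketch of such a CNN is given below, assuming PyTorch; the layer widths, input resolution, and two-class output are illustrative assumptions, not the architecture of any reviewed study.

```python
# Minimal CNN sketch for 2D ultrasound frame classification (illustrative only;
# layer sizes, input resolution, and class count are assumptions). Assumes PyTorch.
import torch
import torch.nn as nn

class SimpleUSClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.BatchNorm2d(16),                            # optional batch normalization
            nn.ReLU(inplace=True),                         # ReLU activation
            nn.MaxPool2d(2),                               # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)  # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)               # stacked conv/BN/ReLU/pool blocks
        x = torch.flatten(x, start_dim=1)  # flatten feature maps
        return self.classifier(x)          # class logits

# Usage: one dummy 128x128 grayscale frame -> logits for 2 classes.
model = SimpleUSClassifier(num_classes=2)
logits = model(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2])
```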

3.1. Segmentation of Fetal Heart Structures

To replace manual cropping, automatic identification of the ROI containing the fetal heart in a static image or a sequence of images in a video loop is an obligatory and challenging first step. This is followed by identifying and tracking relevant pixels corresponding to the structure of interest in each image. Segmentation techniques can be segregated into non-DL (Table 1) and DL approaches (Table 2).

3.1.1. Nondeep Learning (non-DL) Approaches

A US image, whether static or part of a video loop, becomes easier to analyze when the US speckle in the image is reduced. Techniques for reducing US speckle noise include the Rayleigh-trimmed filter and anisotropic diffusion [30,31], probabilistic patch-based maximum likelihood estimation (PPBMLE) [32], and a combination of sparse representation and nonlocal means [33]. Techniques to localize the ROI containing the fetal heart image include the superimposition of frames with connected component labelling [34] and the region growing (RG) method [35]. Once the ROI has been detected, structures of interest within it can be identified. The fetal heart structure was detected using an active appearance model (AAM) [30]. An improved AAM with sparse representation was employed by Guo et al. to segment the left ventricle (LV) [33]. In addition, simultaneous tracking of motion and structural information can improve the detection of the fetal heart [31]. A multitexture AAM was combined with the Hermite transform (HT) to segment the LV [36]. A feature-based approach using Fourier orientation histograms (FOH) and a support vector machine (SVM) classifier was used to detect the fetal heart [37], and optical flow has also been applied to find the heart region [38]. An improved region-based Chan–Vese (RCV) model was proposed [39], wherein energy minimization was carried out using the global-pollination-based CAT swarm optimizer with the flower pollination algorithm. Furthermore, possibilistic c-means (PCM) clustering was used to detect the LV and right ventricle (RV) [35]. The curvatures and borders of all four fetal heart chambers were preserved using the transverse dyadic wavelet transform (TDyWT) algorithm [40]. Table 1 lists the non-DL methods reviewed, along with their performance, for the segmentation of US fetal heart images.
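To make the region-growing idea concrete, a minimal sketch follows, assuming NumPy; the synthetic image, seed point, and intensity tolerance are illustrative assumptions, not parameters from the studies in Table 1.

```python
# Sketch of the region-growing (RG) idea used to localize the fetal heart ROI
# (a minimal illustration; seed point and threshold are assumptions). Pure NumPy.
from collections import deque
import numpy as np

def region_grow(image: np.ndarray, seed: tuple, tol: float = 20.0) -> np.ndarray:
    """Grow a region from `seed`, accepting 4-connected pixels whose intensity
    stays within `tol` of the seed intensity. Returns a boolean mask."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(image[rr, cc]) - seed_val) <= tol:
                    mask[rr, cc] = True
                    queue.append((rr, cc))
    return mask

# Usage on a synthetic image: a bright square "chamber" on a dark background.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200
roi = region_grow(img, seed=(30, 30), tol=30.0)
print(roi.sum())  # 400 pixels recovered
```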

3.1.2. Deep Learning (DL) Approaches

DL segmentation techniques are based on CNNs and RNNs. LV segmentation was performed using deep-tuning and shallow-tuning approaches that tracked the mitral valve base points [44]. Two models, the Visual Geometry Group (VGG)-16 and a modified region proposal with CNN (RCNN), were used to detect anatomical structures [45]. Multiple fine fetal heart structures (LV, epicardium (EP), thorax, descending aorta, right atrium, left atrium, and RV) were segmented with cascaded U-Nets (CU-Net), with the structural similarity index measure (SSIM) added as a loss function [46]. You Only Look Once (YOLOv3) was incorporated to detect the ROI and improve classification accuracy [47]. A fully convolutional network was applied to locate the fetal heart in [48]. VGG and U-Net modules were combined to segment the ventricular septum, yielding superior results when time-series information was used [49]. In [50], dilated convolutional chains (DCCs) and W-Net modules were combined, attaining excellent segmentation results. In [51], Mask-RCNN (MRCNN) with a ResNet50 backbone was utilized for multiclass segmentation. Table 2 lists the various DL approaches for the segmentation of US fetal heart images.
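As a concrete sketch of the encoder-decoder (U-Net-style) segmentation networks discussed above, a toy two-level model is shown below, assuming PyTorch; its depth, channel widths, and single foreground class are illustrative assumptions and do not reproduce any specific architecture from Table 2.

```python
# Minimal U-Net-style encoder-decoder sketch for 4Ch segmentation (illustrative
# only; depth and channel widths are assumptions). Assumes PyTorch.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes: int = 1):
        super().__init__()
        self.enc1 = double_conv(1, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = double_conv(32, 16)          # 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)                           # full-resolution features
        e2 = self.enc2(self.pool(e1))               # half-resolution features
        d1 = self.up(e2)                            # upsample back
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection
        return self.head(d1)

# Usage: one dummy 128x128 frame -> a 1-channel segmentation logit map.
net = TinyUNet(n_classes=1)
out = net(torch.randn(1, 1, 128, 128))
print(out.shape)  # torch.Size([1, 1, 128, 128])
```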

3.2. Classification of Fetal Abnormality

Fetal heart pathologies have been classified using deep learning as well as conventional approaches. The three main steps for image analysis using non-DL approaches are segmentation, feature extraction and reduction, and classification. First, noise and undesirable distortions in the US images were removed using a patch-based Wiener filter (WF) [60] and PPBMLE [61]. Then, morphological operations [60], fuzzy connectedness [61], image-and-spatial transformer networks (Atlas-ISTN) [62], and Faster-RCNN [63] were used to segment the ROI. Next, texture features of the US images were extracted using the gray-level co-occurrence matrix (GLCM) [60,61,64]. The generated features were further reduced by applying the Fisher discriminant ratio (FDR) [61] and local preserving class separation (LPCS) [64]. Moreover, scale-invariant feature transform (SIFT) descriptors and histogram of optical flow (HOF) descriptors were used, and a codebook was constructed using a bag of words (BoW) [65]. Finally, classifiers such as the back-propagation neural network (BPNN) [60], the adaptive neuro-fuzzy inference system (ANFIS) [61], SVM [64,65], and the Gaussian process [62] were deployed to distinguish normal from diseased fetal hearts. DL models were used to classify the US images in [66]. DANomaly and GACNN (WGAN-GP and CNN) were combined to form the DGACNN architecture [63]. An ensemble of neural networks achieved promising results in [67]. Table 3 summarizes the various state-of-the-art approaches to fetal heart disease categorization.
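To make the hand-crafted pipeline concrete, a minimal sketch of GLCM texture features feeding an SVM is given below, assuming scikit-image (version 0.19 or later, where the functions are named graycomatrix/graycoprops) and scikit-learn; the synthetic patches, GLCM parameters, and labels are illustrative assumptions, not those of the reviewed studies.

```python
# Sketch of a non-DL classification pipeline: GLCM texture features followed by
# an SVM. Illustrative only; the synthetic data and parameters are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(image_u8: np.ndarray) -> np.ndarray:
    """Contrast, homogeneity, energy, and correlation from a 1-pixel GLCM."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

# Synthetic stand-in data: smooth vs noisy 64x64 patches as two "classes".
rng = np.random.default_rng(0)
smooth = [np.full((64, 64), 120, dtype=np.uint8)
          + rng.integers(0, 5, (64, 64), dtype=np.uint8) for _ in range(10)]
noisy = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
X = np.array([glcm_features(p) for p in smooth + noisy])
y = np.array([0] * 10 + [1] * 10)

clf = SVC(kernel="rbf").fit(X, y)               # train the SVM classifier
print(clf.predict(X[:2]), clf.predict(X[-2:]))  # e.g. [0 0] [1 1]
```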

4. Analysis of CHD Using Four-Chamber US Images

To understand CHD, we analyzed the 4Ch US heart images and the different approaches used, as described in the subsequent sections.

4.1. Data Description

The fetal cardiac ultrasound images were acquired using a Vivid 7 echocardiography machine (GE Healthcare) with a linear convex transducer set at 3 to 4 MHz. The 4Ch view helps in the identification of cardiac chamber anatomy and the presence of any intracardiac shunt lesions. Structural anomalies of the cardiac chambers are well-diagnosed using lateral or apical four-chamber views of the fetal heart. The collected images are shown in Figure 4. In the healthy fetus, the apical 4Ch view demonstrates four well-developed chambers, a concordant atrioventricular (AV) connection, unobstructed AV valves (mitral and tricuspid valves), the foramen ovale flap opening into the left atrium (LA), and an intact interventricular septum. Additionally, the pulmonary venous opening can be visualized at the LA wall. Structural deviations from normal anatomy may indicate congenital heart disease that can be detected on the 4Ch view of the fetal heart ultrasound. We illustrate a few such examples that can be diagnosed from the 4Ch view of the fetal heart alone. Figure 4a displays normal cardiac structural anatomy, with four well-developed chambers, no dilatation or hypoplasia of any chamber, and nonobstructive AV valves. Figure 4b shows an atrioventricular septal defect, commonly termed an endocardial cushion defect, which mainly affects AV valve formation and results in an abnormal AV connection to the respective ventricle. Figure 4c shows an abnormal atrioventricular connection, where the anatomical LA is connected to the RV through the tricuspid valve, and the right atrium (RA) is connected to the LV through the mitral valve. This structural anomaly is one of the diagnostic criteria for L-posed transposition of the great arteries (L-TGA), or congenitally corrected transposition of the great arteries (CCTGA). The presence of an intracardiac tumor in the fetus can also be determined in the 4Ch view of the fetal heart, as shown in Figure 4d. Moreover, Figure 4e shows the tricuspid valve anomaly seen in Ebstein’s anomaly, characterized by apical displacement of the tricuspid valve leaflet insertion. Fetal echocardiography can also show complex congenital heart disease such as hypoplastic RV (Figure 4f), hypoplastic LV (Figure 4g), and tricuspid atresia (Figure 4h). Hypoplasia of a ventricle is characterized by a small-sized chamber with or without ventricular septal defects, whereas valve atresia is defined by a muscle bundle completely sealing the respective AV valve segment. An obstructive AV valve with congenital stenosis and thickening can be identified in the fetal cardiac 4Ch view by restricted movement of the AV valves during diastole. Significant AV valve stenosis will be associated with enlargement of the respective atrium (Figure 4i). Further, Figure 4j shows a 4Ch view of a fetus with total anomalous pulmonary venous connection (TAPVC). Although the anomalous pulmonary venous insertion cannot be identified in the 4Ch view alone, the appearance of a bald LA is considered the classic finding raising suspicion of TAPVC.

4.2. Analysis Using Various Approaches

The 4Ch view is a basic view in the cardiac examination for the structural and functional assessment of the heart. Most complex congenital heart diseases can be detected straightforwardly from the 4Ch view. Hence, we obtained sample 4Ch cardiac views from normal and CHD fetuses. Here, we used only a few CHD images for analysis purposes; the study lacked a larger sample of images for each individual CHD. We plan to pursue automated detection of individual CHDs using more such data in the future. In the present study, we used methods such as steerable filters [69,70], gist [71], and higher-order spectra (HOS) cumulants [72] to analyze the structure of the heart. The successful application of these methods to characterize image uncertainties in other domains [73,74,75,76,77,78,79,80,81] motivated us to apply them to 4Ch fetal heart images. These approaches can be further combined with non-DL and DL approaches to efficiently characterize CHDs.
Steerable filters are used to enhance various clinical features at different orientations. A set of basis filters is used to synthesize a filter of arbitrary orientation [69,70]; that is, a linear combination of the oriented basis functions generates the steered filter. The generated filters are then applied to a US image via convolution, a linear operator, and the resulting filter responses are themselves steerable [70] (refer to Figure 5).
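A minimal sketch of first-order steerable filtering is given below, assuming NumPy and SciPy; the Gaussian scale and the synthetic test image are illustrative assumptions. The two Gaussian-derivative basis responses are combined with cosine/sine weights to synthesize the response at any orientation, which is the steering property described above [69,70].

```python
# Sketch of first-order steerable filtering: two Gaussian-derivative basis
# responses (x and y) are combined linearly to steer the filter to any angle.
# Illustrative only; sigma and theta are assumptions. Assumes NumPy and SciPy.
import numpy as np
from scipy import ndimage

def steered_response(image: np.ndarray, theta_deg: float, sigma: float = 2.0) -> np.ndarray:
    """Response of a first-derivative-of-Gaussian filter steered to theta_deg."""
    # Basis responses: derivative of Gaussian along x (axis 1) and y (axis 0).
    gx = ndimage.gaussian_filter(image, sigma=sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(image, sigma=sigma, order=(1, 0))
    theta = np.deg2rad(theta_deg)
    # Steering property: interpolate the basis with cos/sin weights.
    return np.cos(theta) * gx + np.sin(theta) * gy

# Usage on a synthetic edge image: the 0-degree filter highlights vertical edges,
# the 90-degree filter does not.
img = np.zeros((64, 64), dtype=float)
img[:, 32:] = 1.0                            # vertical step edge
r0, r90 = steered_response(img, 0), steered_response(img, 90)
print(np.abs(r0).max() > np.abs(r90).max())  # True
```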
Gist is an efficient, holistic, low-dimensional representation of an image. The image is analyzed over a nonoverlapping four-by-four spatial grid. Initially, the power spectrum of the image is computed, and a set of Gabor filters is then applied [71,73]. The resulting feature maps are then averaged to accumulate the information over the subregions of the image. Figure 6 shows the gist descriptors computed from the four-chamber US images [74].
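A minimal sketch of a gist-style descriptor follows, assuming NumPy and scikit-image; the number of orientations and frequencies is an illustrative assumption (the descriptor used in this study had 512 features), and for simplicity the Gabor filtering is performed in the spatial domain rather than on the power spectrum.

```python
# Sketch of a gist-style descriptor: a small Gabor filter bank is applied to the
# image and each response is averaged over a nonoverlapping 4x4 grid [71,73].
# Illustrative only; filter-bank size is an assumption. Assumes scikit-image.
import numpy as np
from skimage.filters import gabor

def gist_descriptor(image: np.ndarray, n_orientations: int = 4,
                    frequencies=(0.1, 0.25)) -> np.ndarray:
    h, w = image.shape
    gh, gw = h // 4, w // 4                    # 4x4 nonoverlapping grid
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, _ = gabor(image, frequency=f, theta=theta)  # filter response
            mag = np.abs(real)
            for i in range(4):                 # average energy per grid cell
                for j in range(4):
                    feats.append(mag[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean())
    return np.array(feats)

# Usage: 2 frequencies x 4 orientations x 16 cells = 128-dimensional descriptor.
img = np.random.default_rng(0).random((64, 64))
print(gist_descriptor(img).shape)  # (128,)
```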
HOS are the spectral representations of moments and cumulants of third order and beyond. Cumulants are computed from certain nonlinear combinations of moments [72]. Although the underlying process is random, its cumulants are deterministic functions. In the spectral domain, cumulant spectra are determined from moment spectra [72]. Herein, the fetal heart US images are analyzed for order n = 3 and a step angle of 10 degrees (refer to Figure 7).
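A minimal sketch of a third-order (n = 3) cumulant estimate is given below, assuming NumPy and SciPy; the row-averaged profile extraction and the lag range are illustrative assumptions, while the 10-degree step angle follows the setting mentioned above.

```python
# Sketch of a third-order cumulant estimate: for a zero-mean 1D signal,
# C3(t1, t2) = E[x(k) x(k+t1) x(k+t2)]. Here the 1D signal is a row-averaged
# profile of the image rotated in 10-degree steps. Illustrative only.
import numpy as np
from scipy import ndimage

def third_order_cumulant(x: np.ndarray, max_lag: int = 10) -> np.ndarray:
    """Biased sample estimate of C3 over lags 0..max_lag in both dimensions."""
    x = x - x.mean()                           # cumulants need zero mean
    n = len(x)
    c3 = np.zeros((max_lag + 1, max_lag + 1))
    for t1 in range(max_lag + 1):
        for t2 in range(max_lag + 1):
            m = n - max(t1, t2)
            c3[t1, t2] = np.sum(x[:m] * x[t1:t1 + m] * x[t2:t2 + m]) / n
    return c3

# Usage: cumulant maps of 1D profiles taken at 10-degree rotation steps.
img = np.random.default_rng(0).random((64, 64))
maps = []
for angle in range(0, 180, 10):
    rotated = ndimage.rotate(img, angle, reshape=False)
    profile = rotated.mean(axis=0)             # 1D projection of the image
    maps.append(third_order_cumulant(profile))
print(len(maps), maps[0].shape)  # 18 (11, 11)
```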

5. Results and Discussion

In this work, we reviewed 45 articles, segregated into the following areas: segmentation/detection (31/40) and classification (9/40) using four-chamber fetal heart US images, plus review articles (5/45). The distribution of the articles based on non-DL and DL approaches is shown in Figure 8.
Non-DL and DL are the major approaches implemented with segmentation/detection and classification techniques for the automated assessment of the US fetal heart. From Table 1 and Table 2, most articles used the dice score/coefficient (DS/DC) [32,36,44,46,49,50,51,52,57,58] as a performance indicator for the detection scheme. For classification, accuracy (Acc.) [63,64,65], positive predictive value (PPV) [64], sensitivity (Sen.) [64,66,67], specificity (Spe.) [64,66,67], F-score [61], and area under the curve (AUC) [62,63,67,68] were the reported performance indicators. Among non-DL techniques, the AAM was commonly deployed [30,33,36,41] for the detection of the fetal heart. SVM [37] and BPNN [42] showed good accuracy for detecting the heart’s features. The combination of PPBMLE and fuzzy connectedness achieved a DC of 0.985 [32], while the improved RCV model achieved a segmentation performance measure (SPM) of more than 99.95% [39]. Further, a CNN-based U-Net approach achieved a DC of 96.02% for the segmentation of atrioventricular septal defects [58]. In Table 3, the classification of fetal heart abnormalities was posed as a two-class (07/09) [61,62,63,64,65,67,68] or three-class (03/09) [60,66] classification problem. An ensemble of neural networks achieved a sensitivity and specificity of 95% and 96%, respectively, on a large image dataset (over 100,000 images) for two-class classification [67].
From the literature, we observed that almost all methods were developed using private datasets, so it is difficult to compare the performance of non-DL and DL methods. From Table 1 and Table 2, MRCNN [51] and the U-Net-based approaches [58,59] achieved the best segmentation results. CNN-based approaches require high-end computers with GPUs [51]. The number of layers depends on the architecture used by each approach, and parameters such as the learning rate, batch size, and number of layers change from model to model; they are selected based on the performance measures obtained for a particular model. Consequently, it is difficult to generalize a particular model for segmentation or classification using US images. Further, DL methods might not perform well on test data collected from different centers, as such data may differ too much from the training data. Non-DL methods use feature-based learning, wherein hand-crafted features are used in the feature extraction stage; these features are computed manually by data scientists and hence require domain knowledge. From Table 3, the highest classification accuracy achieved using a hand-crafted feature learning-based approach is 98.15% [64]. That method applies a graph embedding approach to the generated texture features and uses only 28 features to obtain maximum accuracy. Hence, only a standalone personal computer is required for its execution, and such techniques can also be utilized to analyze other medical images.
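Since most of the detection papers above report the dice score/coefficient, the following minimal NumPy sketch (with synthetic masks that are not from any reviewed study) shows how DC is computed for a predicted and a ground-truth segmentation mask.

```python
# The dice score/coefficient (DS/DC) measures overlap between a predicted mask
# and the ground truth: DC = 2|A intersect B| / (|A| + |B|). Minimal NumPy sketch.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Usage: two partially overlapping square masks.
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True   # 900 pixels
b = np.zeros((64, 64), dtype=bool); b[20:50, 20:50] = True   # 900 pixels
print(round(dice_coefficient(a, b), 3))  # 0.444 (overlap of 400 pixels)
```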
On the other hand, segmentation combined with ensemble neural networks was used to obtain an AUC of 0.99 on a larger dataset [67]. In addition, all of the works used private datasets and the hold-out method instead of a k-fold cross-validation scheme. Hence, it would be difficult to generalize the results.
As far as we know, this research is the first to report a full range of 4Ch fetal heart US images with CHDs, including endocardial cushion defects, CCTGA, myocardial tumors, and mitral and tricuspid stenosis, to name a few. These images were analyzed via various approaches such as steerable filters, gist, and HOS cumulants. It is clear from Figure 5 that a steerable filter enhanced the edges, curvatures, and corners for improved structural identification. The filtered image can then be used to analyze the textures of an image [75,76,77]. It is noted from Figure 6 that gist was able to characterize the CHDs from the different patterns. Moving forward, the fusion of features obtained from steerable filters and gist can be used to improve the discrimination among different classes of CHDs. Since the collected dataset is very small, we grouped the images into normal and abnormal (various CHDs) classes (i.e., three normal and nine abnormal images) to test the significance of the generated features. Initially, 512 gist features were generated for each image. Then, Student’s t-test was performed, and the features were ranked by their t-values (highest first) and p-values (lowest first) [82,83]. Table 4 shows the ten highest-ranked features. The generated features are significant, as their p-values are <0.005. Further, these features can be used to identify the CHDs. It is difficult to generalize the system, as we have used a small dataset. The main goal of the proposed work is to show the structural changes in the fetal heart, and in the future, we intend to extend the work by collecting a larger number of samples.
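A minimal sketch of the t-test-based feature ranking is shown below, assuming NumPy and SciPy; the random matrices merely stand in for the 512 gist features of the 3 normal and 9 abnormal images, so the printed values are illustrative only.

```python
# Sketch of t-test feature ranking: each gist feature is scored with a two-sample
# Student's t-test between the normal and abnormal groups, then sorted by p-value
# (equivalently, by |t|) [82,83]. Illustrative synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(3, 512))    # 3 normal images x 512 features
abnormal = rng.normal(0.5, 1.0, size=(9, 512))  # 9 abnormal images x 512 features

t_vals, p_vals = stats.ttest_ind(normal, abnormal, axis=0)
order = np.argsort(p_vals)                      # smallest p (largest |t|) first
for idx in order[:10]:                          # ten highest-ranked features
    print(f"feature {idx:3d}: t = {t_vals[idx]:+.3f}, p = {p_vals[idx]:.4f}")
```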
To capture the subtle variations in the inherent structure of the fetal heart, researchers can also employ HOS cumulants. The HOS cumulant plots in Figure 7 show distinctly different patterns for the various CHDs. These significant differences could aid the analysis of CHDs by generating compact features. The extraction of texture features [84] and various entropies [85] based on these plots would further assist researchers in analyzing the fetal heart’s intrinsic characteristics. Moreover, it can be utilized to enhance the discriminative capability of the model by capturing the pixel organization of US images. The plots in Figure 7c,d differ only slightly, suggesting that CCTGA and the myocardial tumor may present similar structural abnormalities; this can be analyzed further by considering the intrinsic structure of CCTGA and the myocardial tumor. In addition, a hybrid model using gist features together with features derived from the HOS cumulant plots could help identify the different classes more efficiently. A greater number of US images of the various CHDs would enable a robust solution to the multiclass classification problem using US images.

5.1. Future Scope

Approximately equal numbers of statistical (non-DL) and DL-based methods have been developed. Table 1, Table 2 and Table 3 show that the former were developed using smaller datasets; for large datasets, DL-based methods are more suitable. In addition, feature extraction in the non-DL methods demands domain knowledge. To date, there is a lack of publicly available datasets for fetal heart analysis using US imagery. In addition, annotating the data into the various diagnostic categories is a challenge that requires support from expert radiologists. Image collection should be extended to multiple centers to ensure a balanced dataset. This will help the global effort to develop more accurate models to assess fetal hearts for CHDs.
In the future, the development of a smart healthcare system based on the Internet of Things (IoT) could enable early diagnosis and treatment of CHDs for expectant mothers living in rural areas of developing countries (refer to Figure 9). Here, the collected US images are analyzed using a cloud-based system. The resulting reports can then be sent to doctors and specialists in multispecialty hospitals in the city. Based on the specialists’ advice, obstetricians in the rural areas can make appropriate decisions regarding the baby’s birth. In addition, appropriate medication can be provided to the expectant mother for a better outcome.

5.2. Limitations of the Current Study

Some limitations of the proposed study include:
  • This review considers only manuscripts written in English.
  • Article retrieval was based on specific keywords; we may have overlooked potential non-DL and DL studies.
  • The study targets AI-based techniques for fetal heart assessment using only four-chamber US images and does not consider other views or other imaging modalities.

6. Conclusions

AI techniques have the potential to make a huge impact in the field of medical image processing and analysis. Computer-based diagnostic tools have shown significant growth in clinical and medical applications. In this study, 45 articles were selected from 190. These articles were thoroughly reviewed so as to provide a comprehensive insight into the AI techniques utilized for the characterization of fetal heart US images. These studies were summarized and analyzed in terms of different cutting-edge approaches. The survey showed that AI techniques are able to improve the assessment of the fetal heart and are likely to be valuable resources for medical decision support.

Author Contributions

Conceptualization, A.G. and R.U.; Methodology, A.G. and R.U.; Software, A.G. and R.U.; Validation, J.S., A.V., A.A.J. and K.N.; Writing—review and editing, A.G., R.U., J.S., R.-S.T., C.P.O., E.J.C., F.M. and U.R.A.; Visualization, U.R.A., A.G. and P.D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Manipal Academy of Higher Education (MAHE) for providing the required facility to carry out this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoffman, J.I. The global burden of congenital heart disease. Cardiovasc. J. Afr. 2013, 24, 141–145. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Dolk, H.; Loane, M.; Garne, E. European Surveillance of Congenital Anomalies Working Group: Congenital heart defects in Europe: Prevalence and perinatal mortality, 2000 to 2005. Circulation 2011, 123, 841–849. [Google Scholar] [CrossRef] [Green Version]
  3. Nayak, K.; Chandra, G.S.N.; Shetty, R.; Narayan, P.K. Evaluation of fetal echocardiography as a routine antenatal screening tool for detection of congenital heart disease. Cardiovasc. Diagn. Ther. 2016, 6, 4. [Google Scholar] [CrossRef]
  4. Rajiah, P.; Mak, C.; Dubinksy, T.J.; Dighe, M. Ultrasound of fetal cardiac anomalies. Am. J. Roentgenol. 2011, 197, W747–W760. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Stamm, E.R.; Drose, J.A. The fetal heart. In Diagnostic Ultrasound, 2nd ed.; Rumack, C.A., Wilson, S.R., Charboneau, W.J., Eds.; Mosby: St. Louis, MO, USA, 1998; pp. 1123–1159. [Google Scholar]
  6. Small, M.; Copel, J.A. Indications for fetal echocardiography. Pediatr. Cardiol. 2004, 25, 210–222. [Google Scholar] [CrossRef] [PubMed]
  7. International Society of Ultrasound in Obstetrics and Gynecology. Cardiac screening examination of the fetus: Guidelines for performing the “basic” and “extended basic” cardiac scan. Ultrasound Obstet. Gynecol. 2006, 27, 107–113. [Google Scholar]
  8. Dudnikov, O.; Quinton, A.E.; Alphonse, J. The detection rate of first trimester ultrasound in the diagnosis of congenital heart defects: A narrative review. Sonography 2021, 8, 36–42. [Google Scholar] [CrossRef]
  9. Charafeddine, F.; Hachem, A.; Kibbi, N.; AbuTaqa, M.; Bitar, F.; Bulbul, Z.; El-Rassi, I.; Arabi, M. The first fetal echocardiography experience for prenatal diagnosis of congenital heart disease in lebanon: Successes and challenges. J. Saudi Heart Assoc. 2019, 31, 125–129. [Google Scholar] [CrossRef]
  10. Sriraam, N. A Primitive Survey on Ultrasonic Imaging-Oriented Segmentation Techniques for Detection of Fetal Cardiac Chambers. Int. J. Biomed. Clin. Eng. 2019, 8, 69–79. [Google Scholar]
  11. Carvalho, J.S.; Ho, S.Y.; Shinebourne, E.A. Sequential segmental analysis in complex fetal cardiac abnormalities: A logical approach to diagnosis. Ultrasound Obstet. Gynecol. 2005, 26, 105–111. [Google Scholar] [CrossRef]
  12. Naderi, S.; McGahan, J.P. A primer for fetal cardiac imaging: A stepwise approach for 2-D imaging. Ultrasound Q. 2008, 24, 195–206. [Google Scholar] [CrossRef] [PubMed]
  13. Donofrio, M.T.; Moon-Grady, A.J.; Hornberger, L.K.; Copel, J.A.; Sklansky, M.S.; Abuhamad, A.; Cuneo, B.F.; Huhta, J.C.; Jonas, R.A.; Krishnan, A.; et al. American Heart Association Adults With Congenital Heart Disease Joint Committee of the Council on Cardiovascular Disease in the Young and Council on Clinical Cardiology, Council on Cardiovascular Surgery and Anesthesia, and Council on Cardiovascular and Stroke Nursing. Diagnosis and treatment of fetal cardiac disease: A scientific statement from the American Heart Association. Circulation 2014, 129, 2183–2242. [Google Scholar] [PubMed]
  14. Makikallio, K.; Rasanen, J.; Makikallio, T.; Vuolteenaho, O.; Huhta, J.C. Human fetal cardiovascular profile score and neonatal outcome in intrauterine growth restriction. Ultrasound Obstet. Gynecol. 2008, 31, 48–54. [Google Scholar] [CrossRef]
  15. Bahtiyar, M.O.; Copel, J.A. Cardiac changes in the intrauterine growth restricted fetus. Semin Perinatol. 2008, 32, 190–193. [Google Scholar] [CrossRef] [PubMed]
  16. Liu, S.; Wang, Y.; Yang, X.; Lei, B.; Liu, L.; Li, S.X.; Ni, D.; Wang, T. Deep Learning in Medical Ultrasound Analysis: A Review. Engineering 2019, 5, 261–275. [Google Scholar] [CrossRef]
  17. Garcia-Canadilla, P.; Sanchez-Martinez, S.; Crispi, F.; Bijnens, B. Machine learning in fetal cardiology: What to expect. Fetal Diagn. Ther. 2020, 47, 363–372. [Google Scholar] [CrossRef]
  18. de Siqueira, V.S.; Borges, M.M.; Furtado, R.G.; Dourado, C.N.; da Costa, R.M. Artificial intelligence applied to support medical decisions for the automatic analysis of echocardiogram images: A systematic review. Artif. Intell. Med. 2021, 120, 102165. [Google Scholar] [CrossRef] [PubMed]
  19. Rawat, V.; Jain, A.; Shrimali, V. Automated techniques for the interpretation of fetal abnormalities: A review. Appl. Bionics Biomech. 2018, 2018, 6452050. [Google Scholar] [CrossRef] [Green Version]
  20. Day, T.G.; Kainz, B.; Hajnal, J.; Razavi, R.; Simpson, J.M. Artificial intelligence, fetal echocardiography, and congenital heart disease. Prenat. Diagn. 2021, 41, 733–742. [Google Scholar] [CrossRef]
  21. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341. [Google Scholar] [CrossRef] [Green Version]
  22. Sudarshan, V.; Acharya, U.R.; Ng, E.Y.-K.; Meng, C.S.; Tan, R.S.; Ghista, D.N. Automated Identification of Infarcted Myocardium Tissue Characterization Using Ultrasound Images: A Review. IEEE Rev. Biomed. Eng. 2015, 8, 86–97. [Google Scholar] [CrossRef]
  23. Raghavendra, U.; Acharya, U.R.; Gudigar, A.; Shetty, R.; Krishnananda, N.; Pai, U.; Samanth, J.; Nayak, C. Automated screening of congestive heart failure using variational mode decomposition and texture features extracted from ultrasound images. Neural Comput. Appl. 2017, 28, 2869–2878. [Google Scholar] [CrossRef]
  24. Raghavendra, U.; Fujita, H.; Gudigar, A.; Shetty, R.; Nayak, K.; Pai, U.; Samanth, J.; Acharya, U. Automated technique for coronary artery disease characterization and classification using DD-DTDWT in ultrasound images. Biomed. Signal Processing Control. 2018, 40, 324–334. [Google Scholar] [CrossRef]
  25. Gudigar, A.; Raghavendra, U.; Devasia, T.; Nayak, K.; Danish, S.M.; Kamath, G.; Samanth, J.; Pai, U.M.; Nayak, V.; Tan, R.S.; et al. Global weighted LBP based entropy features for the assessment of pulmonary hypertension. Pattern Recognit. Lett. 2019, 125, 35–41. [Google Scholar] [CrossRef]
  26. Gudigar, A.; Raghavendra, U.; Samanth, J.; Gangavarapu, M.R.; Kudva, A.; Paramasivam, G.; Nayak, K.; Tan, R.-S.; Molinari, F.; Ciaccio, E.J.; et al. Automated detection of chronic kidney disease using image fusion and graph embedding techniques with ultrasound images. Biomed. Signal Process. Control 2021, 68, 102733. [Google Scholar] [CrossRef]
  27. Deng, L.; Yu, D. Deep learning: Methods and applications. Found. Trends Signal Process. 2014, 7, 197–387. [Google Scholar] [CrossRef] [Green Version]
  28. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  29. Yang, X.; Yu, L.; Wu, L.; Wang, Y.; Ni, D.; Qin, J.; Heng, P.-A. Fine-grained recurrent neural networks for automatic prostate segmentation in ultrasound images. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; AAAI Press: San Francisco, California USA, 2017; pp. 1633–1639. [Google Scholar]
  30. Deng, Y.; Wang, Y.; Chen, P. Automated detection of fetal cardiac structure from first-trimester ultrasound sequences. In Proceedings of the 2010 3rd International Conference on Biomedical Engineering and Informatics, Yantai, China, 16–18 October 2010; pp. 127–131. [Google Scholar] [CrossRef]
  31. Deng, Y.; Wang, Y.; Shen, Y.; Chen, P. Active cardiac model and its application on structure detection from early fetal ultrasound sequences. Comput. Med. Imaging Graph. 2012, 36, 239–247. [Google Scholar] [CrossRef] [PubMed]
  32. Sampath, S.; Sivaraj, N. Fuzzy Connectedness Based Segmentation of Fetal Heart from Clinical Ultrasound Images. In Advanced Computing, Networking and Informatics—Volume 1. Smart Innovation, Systems and Technologies; Kumar Kundu, M., Mohapatra, D., Konar, A., Chakraborty, A., Eds.; Springer: Cham, Switzerland, 2014; Volume 27. [Google Scholar] [CrossRef]
  33. Guo, Y.; Wang, Y.; Nie, S.; Yu, J.; Chen, P. Automatic segmentation of a fetal echocardiogram using modified active appearance models and sparse representation. IEEE Trans. Biomed. Eng. 2013, 61, 1121–1133. [Google Scholar] [CrossRef] [PubMed]
  34. Vijayalakshmi, S.; Sriraam, N.; Suresh, S.; Muttan, S. Automated region mask for four-chamber fetal heart biometry. J. Clin. Monit. Comput. 2013, 27, 205–209. [Google Scholar] [CrossRef] [PubMed]
  35. Punya Prabha, V.; Sriraam, N.; Suresh, S. Hybrid Segmentation Approach to Segment Fetal Cardiac Chambers of Ultrasound images. In Proceedings of the 2019 1st International Conference on Advanced Technologies in Intelligent Control, Environment, Computing & Communication Engineering (ICATIECE), Bangalore, India, 19–20 March 2019; pp. 331–334. [Google Scholar] [CrossRef]
  36. Vargas-Quintero, L.; Escalante-Ramírez, B.; Camargo Marín, L.; Guzmán Huerta, M.; Arámbula Cosio, F.; Borboa Olivares, H. Left ventricle segmentation in fetal echocardiography using a multi-texture active appearance model based on the steered Hermite transform. Comput. Methods Programs Biomed. 2016, 137, 231–245. [Google Scholar] [CrossRef]
  37. Bridge, C.P.; Noble, J.A. Object localisation in fetal ultrasound images using invariant features. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), Brooklyn, NY, USA, 16–19 April 2015; pp. 156–159. [Google Scholar] [CrossRef]
  38. Sardsud; Auephanwiriyakul, S.; Theera-Umpon, N.; Tongsong, T. Patch-Based Fetal Heart Chamber Segmentation in Ultrasound Sequences Using Possibilistic Clustering. In Proceedings of the 2015 Seventh International Conference on Computational Intelligence, Modelling and Simulation (CIMSim), Kuantan, Malaysia, 27–29 July 2015; pp. 43–48. [Google Scholar] [CrossRef]
  39. Femina, M.A.; Raajagopalan, S.P. Anatomical structure segmentation from early fetal ultrasound sequences using global pollination CAT swarm optimizer–based Chan–Vese model. Med. Biol. Eng. Comput. 2019, 57, 1763–1782. [Google Scholar] [CrossRef] [PubMed]
  40. Nageswari, C.S.; Prabha, K.H. Preserving the border and curvature of fetal heart chambers through TDyWT perspective geometry wrap segmentation. Multimed. Tools Appl. 2018, 77, 10235–10250. [Google Scholar] [CrossRef]
  41. Jacop, R.M.R.; Prabakar, S.; Porkumaran, D.R.K. Fetal cardiac structure detection from ultrasound sequences. Int. J. Instrum. Control Autom. 2013, 2, 12–16. [Google Scholar] [CrossRef]
  42. Yu, L.; Guo, Y.; Wang, Y.; Yu, J.; Chen, P. Determination of fetal left ventricular volume based on two-dimensional echocardiography. J. Healthc. Eng. 2017, 9, 4797315. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Prabha, V.P.; Sriraam, N.; Suresh, S. Ultrasonic imaging based fetal cardiac chambers segmentation using discrete wavelet transform. In Proceedings of the 2016 International Conference on Circuits, Controls, Communications and Computing (I4C), Bangalore, India, 4–6 October 2016; pp. 1–4. [Google Scholar] [CrossRef]
  44. Yu, L.; Guo, Y.; Wang, Y.; Yu, J.; Chen, P. Segmentation of Fetal Left Ventricle in Echocardiographic Sequences Based on Dynamic Convolutional Neural Networks. IEEE Trans. Biomed. Eng. 2017, 64, 1886–1895. [Google Scholar] [CrossRef] [PubMed]
  45. Patra, A.; Noble, J.A. Multi-anatomy localization in fetal echocardiography videos. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 1761–1764. [Google Scholar]
  46. Xu, L.; Liu, M.; Zhang, J.; He, Y. Convolutional-neural-network-based approach for segmentation of apical four-chamber view from fetal echocardiography. IEEE Access 2020, 8, 80437–80446. [Google Scholar] [CrossRef]
  47. Pu, B.; Zhu, N.; Li, K.; Li, S. Fetal cardiac cycle detection in multi-resource echocardiograms using hybrid classification framework. Future Gener. Comput. Syst. 2021, 115, 825–836. [Google Scholar] [CrossRef]
  48. Sundaresan, V.; Bridge, C.P.; Ioannou, C.; Noble, J.A. Automated characterization of the fetal heart in ultrasound images using fully convolutional neural networks. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging, Melbourne, Australia, 18–21 April 2017; pp. 671–674. [Google Scholar] [CrossRef]
  49. Dozen, A.; Komatsu, M.; Sakai, A.; Komatsu, R.; Shozu, K.; Machino, H.; Yasutomi, S.; Arakaki, T.; Asada, K.; Kaneko, S.; et al. Image Segmentation of the Ventricular Septum in Fetal Cardiac Ultrasound Videos Based on Deep Learning Using Time-Series Information. Biomolecules 2020, 10, 1526. [Google Scholar] [CrossRef]
  50. Xu, L.; Liu, M.; Shen, Z.; Wang, H.; Liu, X.; Wang, X.; Wang, S.; Li, T.; Yu, S.; Hou, M.; et al. DW-Net: A cascaded convolutional neural network for apical four-chamber view segmentation in fetal echocardiography. Comput. Med. Imaging Graph. 2020, 80, 101690. [Google Scholar] [CrossRef]
  51. Nurmaini, S.; Rachmatullah, M.N.; Sapitri, A.I.; Darmawahyuni, A.; Jovandy, A.; Firdaus, F.; Tutuko, B.; Passarella, R. Accurate detection of septal defects with fetal ultrasonography images using deep learning-based multiclass instance segmentation. IEEE Access 2020, 8, 196160–196174. [Google Scholar] [CrossRef]
  52. Philip, M.E.; Sowmya, A.; Avnet, H.; Ferreira, A.; Stevenson, G.; Welsh, A. Convolutional Neural Networks for Automated Fetal Cardiac Assessment using 4D B-Mode Ultrasound. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 824–828. [Google Scholar] [CrossRef]
  53. Baumgartner, C.F.; Kamnitsas, K.; Matthew, J.; Fletcher, T.P.; Smith, S.; Koch, L.M.; Kainz, B.; Rueckert, D. SonoNet: Real-time detection and localisation of fetal standard scan planes in freehand ultrasound. IEEE Trans. Med. Imaging 2017, 36, 2204–2215. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Qiao, S.; Pang, S.; Luo, G.; Pan, S.; Chen, T.; Lv, Z. FLDS: An Intelligent Feature Learning Detection System for Visualizing Medical Images Supporting Fetal Four-chamber Views. IEEE J. Biomed. Health Informatics 2021. [Google Scholar] [CrossRef] [PubMed]
  55. Shozu, K.; Komatsu, M.; Sakai, A.; Komatsu, R.; Dozen, A.; Machino, H.; Yasutomi, S.; Arakaki, T.; Asada, K.; Kaneko, S.; et al. Model-agnostic method for thoracic wall segmentation in fetal ultrasound videos. Biomolecules 2020, 10, 1691. [Google Scholar] [CrossRef] [PubMed]
  56. Komatsu, M.; Sakai, A.; Komatsu, R.; Matsuoka, R.; Yasutomi, S.; Shozu, K.; Dozen, A.; Machino, H.; Hidaka, H.; Arakaki, T.; et al. Detection of Cardiac Structural Abnormalities in Fetal Ultrasound Videos Using Deep Learning. Appl. Sci. 2021, 11, 371. [Google Scholar] [CrossRef]
  57. Yang, T.; Han, J.; Zhu, H.; Li, T.; Liu, X.; Gu, X.; Liu, X.; An, S.; Zhang, Y.; Zhang, Y.; et al. Segmentation of five components in four chamber view of fetal echocardiography. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1962–1965. [Google Scholar] [CrossRef]
  58. Sapitri, A.I.; Nurmaini, S.; Sukemi, M.; Rachmatullah, M.N.; Darmawahyuni, A. Segmentation atrioventricular septal defect by using convolutional neural networks based on U-NET architecture. IAES Int. J. Artif. Intell. 2021, 10, 553–562. [Google Scholar] [CrossRef]
  59. Rachmatullah, M.N.; Nurmaini, S.; Sapitri, A.I.; Darmawahyuni, A.; Tutuko, B.; Firdaus, F. Convolutional neural network for semantic segmentation of fetal echocardiography based on four-chamber view. Bull. Electr. Eng. Inform. 2021, 10, 1987–1996. [Google Scholar] [CrossRef]
  60. Athira, P.; Mathew, L. Fetal anomaly detection in ultrasound image. Int. J. Comput. Appl. 2015, 129, 8887. [Google Scholar]
  61. Sridevi, S.; Nirmala, S. ANFIS based decision support system for prenatal detection of Truncus Arteriosus congenital heart defect. Appl. Soft Comput. 2016, 46, 577–587. [Google Scholar] [CrossRef]
  62. Budd, S.; Sinclair, M.; Day, T.; Vlontzos, A.; Tan, J.; Liu, T.; Matthew, J.; Skelton, E.; Simpson, J.; Razavi, R.; et al. Detecting Hypo-plastic Left Heart Syndrome in Fetal Ultrasound via Disease-Specific Atlas Maps. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Strasbourg, France, 2021; pp. 207–217. [Google Scholar]
  63. Gong, Y.; Zhang, Y.; Zhu, H.; Lv, J.; Cheng, Q.; Zhang, H.; He, Y.; Wang, S. Fetal Congenital Heart Disease Echocardiogram Screening Based on DGACNN: Adversarial One-Class Classification Combined with Video Transfer Learning. IEEE Trans. Med. Imaging. 2020, 39, 1206–1222. [Google Scholar] [CrossRef] [PubMed]
  64. Gudigar, A.; Samanth, J.; Raghavendra, U.; Dharmik, C.; Vasudeva, A.; Padmakumar, R.; Tan, R.-S.; Ciaccio, E.J.; Molinari, F.; Acharya, U.R. Local preserving class separation framework to identify gestational diabetes mellitus mother using ultrasound fetal cardiac image. IEEE Access 2020, 8, 229043–229051. [Google Scholar] [CrossRef]
  65. Ji, L.; Gu, Y.; Sun, K.; Yang, J.; Qiao, Y. Congenital heart disease (CHD) discrimination in fetal echocardiogram based on 3D feature fusion. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3419–3423. [Google Scholar] [CrossRef]
  66. Arnaout, R.; Curran, L.; Chinn, E.; Zhao, Y.; Moon-Grady, A.J. Deep-learning models improve on community-level diagnosis for common congenital heart disease lesions. arXiv 2018, arXiv:1809.06993v1. [Google Scholar]
  67. Arnaout, R.; Curran, L.; Zhao, Y.; Levine, J.C.; Chinn, E.; Moon-Grady, A.J. An ensemble of neural networks provides expert-level prenatal detection of complex congenital heart disease. Nat. Med. 2021, 27, 882–891. [Google Scholar] [CrossRef] [PubMed]
  68. Chotzoglou, E.; Day, T.; Tan, J.; Matthew, J.; Lloyd, D.; Razavi, R.; Simpson, J.; Kainz, B. Learning normal appearance for fetal anomaly screening: Application to the unsupervised detection of Hypoplastic Left Heart Syndrome. J. Mach. Learn. Biomed. Imaging 2021, 12, 1–25. [Google Scholar]
  69. Freeman, W.T.; Adelson, E.H. The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 891–906. [Google Scholar] [CrossRef]
  70. Beil, W. Steerable filters and invariance theory. Pattern Recognit. Lett. 1994, 15, 453–460. [Google Scholar] [CrossRef]
  71. Oliva, A.; Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Comput. Vis. 2001, 42, 145–175. [Google Scholar] [CrossRef]
  72. Chua, K.C.; Chandran, V.; Acharya, U.R.; Lim, C.M. Application of higher order statistics/spectra in biomedical signals—A review. Med. Eng. Phys. 2010, 32, 679–689. [Google Scholar] [CrossRef] [Green Version]
  73. Oliva, A.; Torralba, A.B.; Guerin-Dugue, A.; Herault, J. Global semantic classification of scenes using power spectrum templates. In Proceedings of the 1999 International Conference on Challenge of Image retrieval, Swindon, UK, 25–26 February 1999; pp. 1–12. [Google Scholar]
  74. Siagian, C.; Itti, L. Rapid biologically-inspired scene classification using features shared with visual attention. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 300–312. [Google Scholar] [CrossRef]
  75. Raghavendra, U.; Acharya, U.R.; Fujita, H.; Gudigar, A.; Tan, J.H.; Chokkadi, S. Application of Gabor wavelet and locality sensitive discriminant analysis for automated identification of breast cancer using digitized mammogram images. Appl. Soft Comput. 2016, 46, 151–161. [Google Scholar] [CrossRef]
  76. Fujita, H.; Raghavendra, U.; Gudigar, A.; Vadakkepat, V.V.; Acharya, U.R. Automated Characterization of Breast Cancer Using Steerable Filters. In New Trends in Intelligent Software Methodologies, Tools and Techniques; Frontiers in Artificial Intelligence and Applications; IOS Press: Amsterdam, The Netherlands, 2017; Volume 297, pp. 321–327. [Google Scholar] [CrossRef]
  77. Ghasemzadeh, A.; Azad, S.S.; Esmaeili, E. Breast cancer detection based on Gabor-wavelet transform and machine learning methods. Int. J. Mach. Learn. Cyber. 2019, 10, 1603–1612. [Google Scholar] [CrossRef]
  78. Gudigar, A.; Raghavendra, U.; Samanth, J.; Dharmik, C.; Gangavarapu, M.R.; Nayak, K.; Ciaccio, E.J.; Tan, R.; Molinari, F.; Acharya, U.R. Novel Hypertrophic Cardiomyopathy Diagnosis Index Using Deep Features and Local Directional Pattern Techniques. J. Imaging 2022, 8, 102. [Google Scholar] [CrossRef]
  79. Raghavendra, U.; Bhandary, S.V.; Gudigar, A.; Acharya, U.R. Novel expert system for glaucoma identification using non-parametric spatial envelope energy spectrum with fundus images. Biocybern. Biomed. Eng. 2018, 38, 170–180. [Google Scholar] [CrossRef]
  80. Molinari, F.; Raghavendra, U.; Gudigar, A.; Meiburger, K.M.; Rajendra Acharya, U. An efficient data mining framework for the characterization of symptomatic and asymptomatic carotid plaque using bidimensional empirical mode decomposition technique. Med. Biol. Eng. Comput. 2018, 56, 1579–1593. [Google Scholar] [CrossRef] [PubMed]
  81. Gudigar, A.; Raghavendra, U.; Ciaccio, E.J.; Arunkumar, N.; Abdulhay, E.; Acharya, U.R. Automated categorization of multi-class brain abnormalities using decomposition techniques with MRI images: A comparative study. IEEE Access 2019, 7, 28498–28509. [Google Scholar] [CrossRef]
  82. Zhou, N.; Wang, L. A modified T-test feature selection method and its application on the HapMap genotype data. Genom. Proteom. Bioinform. 2007, 5, 242–249. [Google Scholar] [CrossRef] [Green Version]
  83. Glen, S. “T Test (Student’s T-Test): Definition and Examples” From StatisticsHowTo.com: Elementary Statistics for the Rest of Us! 2017. Available online: https://www.statisticshowto.com/probability-and-statistics/t-test/ (accessed on 31 January 2021).
  84. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  85. Gudigar, A.; Chokkadi, S.; Raghavendra, U.; Acharya, U.R. Local texture patterns for traffic sign recognition using higher order spectra. Pattern Recognit. Lett. 2017, 94, 202–210. [Google Scholar] [CrossRef]
Figure 1. Standard four-chamber view showing normal appearance and spatial relationships of the four heart chambers and aorta. This image was acquired from the subcostal window with an ultrasound beam perpendicular to the interventricular septum. A similar image can be acquired from the cardiac apex; in which case, the ultrasound beam would be parallel to the interventricular septum.
Figure 2. Article selection process based on PRISMA guidelines.
Figure 3. AI-based pipeline architecture for analysis of fetal echocardiography.
Figure 4. (a–j) Sample images with various CHDs.
Figure 5. Steered filters (second row) and their application to a normal fetal 4-chamber US image (third row).
Figure 6. Gist descriptor: (a) normal, (b) complete endocardial cushion defect, (c) CCTGA, (d) myocardial tumor, (e) Ebstein’s anomaly, (f) hypoplastic RV, (g) hypoplastic LV, (h) tricuspid atresia, (i) mitral and tricuspid stenosis, and (j) bald LA in TAPVC.
Figure 7. HOS cumulants: (a) normal, (b) complete endocardial cushion defect, (c) CCTGA, (d) myocardial tumor, (e) Ebstein’s anomaly, (f) hypoplastic RV, (g) hypoplastic LV, (h) tricuspid atresia, (i) mitral and tricuspid stenosis, and (j) bald LA in TAPVC.
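The higher-order spectra (HOS) cumulant maps in Figure 7 capture non-Gaussian, nonlinear structure that second-order statistics miss. For a zero-mean 1-D signal x(n), the third-order cumulant is C3(t1, t2) = E[x(n)·x(n + t1)·x(n + t2)]. The sketch below estimates this quantity over a lag window; applying it to an image row is only an illustrative choice of 1-D signal, not necessarily how the cited work derives its inputs (e.g., via Radon projections).

```python
import numpy as np

def third_order_cumulant(x, max_lag=10):
    """Estimate C3(t1, t2) = E[x(n) x(n+t1) x(n+t2)] for a zero-meaned 1-D signal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    lags = range(-max_lag, max_lag + 1)
    c3 = np.zeros((len(lags), len(lags)))
    for i, t1 in enumerate(lags):
        for j, t2 in enumerate(lags):
            lo = max(0, -t1, -t2)        # first index where n, n+t1, n+t2 are all valid
            hi = min(n, n - t1, n - t2)  # one past the last valid index
            if hi > lo:
                k = np.arange(lo, hi)
                c3[i, j] = np.mean(x[k] * x[k + t1] * x[k + t2])
    return c3

# Example: cumulant map of one row of a dummy image standing in for a US frame.
cum_map = third_order_cumulant(np.random.rand(256, 256)[128], max_lag=8)
```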
Figure 8. Distribution of the articles from 2010 to 2021.
Figure 9. Smart healthcare system.
Table 1. Nondeep learning segmentation techniques for fetal heart ultrasound images and videos.
Paper | Method | Goal | Dataset | Result
[30] | Rayleigh-trimmed anisotropic diffusion + AAM | Structure detection of the fetal heart | Images: 258 | Detection = 74
[31] | Active cardiac model | Detection of cardiac structure | 738 images | Point position error = 7.11 ± 6.77
[32] | PPBMLE + fuzzy connectedness | Fetal heart structure delineation | First image | DC = 0.985
[33] | Improved AAM + sparse representation | Segmentation of LV | Training: 23 images; testing: 23 images | AO = 84.39
[34] | Connected component analysis | Heart detection | 13 cine-loop sequences | -
[35] | RG + PCM clustering | Segmentation of fetal heart chambers | Images: 93 | -
[36] | Multitexture AAM with HT | Segmentation of LV | Training: 98 images; validation: 45 images | DC = 0.8631
[37] | FOH + circular basis functions + SVM | Heart detection | Videos: 63 | Acc. = 88
[38] | Horn–Schunck’s optical flow + PCM | Fetal heart chamber segmentation | 70 frames | Segmentation error = 2.17%
[39] | Improved RCV model | Segmentation of anatomical structure | Videos: 12 subjects | SPM > 99.95; HF = 2.5204 ± 1.2503
[40] | TDyWT | Preserving curvature and border of the chambers | Images: 100 normal and abnormal | Contrast = 85% improvement
[41] | k-means clustering + AAM | Detection of fetal cardiac structure | Three ultrasound sequences | -
[42] | 16 distances from border to center + back-propagation neural network (BPNN) | LV volume prediction | 50 cases | Highest intraclass correlation coefficient and concordance correlation coefficient
[43] | Discrete Haar wavelet transform | Chamber segmentation | 73 cine-loop sequences | LV/RV ratio = 0.97
Acc (%): accuracy; SPM (%): segmentation performance measure; detection (%); AO (%): area overlap; DS/DC: dice score/coefficient; HF: Hausdorff distance.
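For reference, two of the segmentation metrics reported throughout Tables 1 and 2, the dice coefficient (DC) and the Hausdorff distance (HF), can be computed from binary masks as sketched below. This is a minimal illustration with NumPy/SciPy; the symmetric Hausdorff distance is taken here as the maximum of the two directed distances, and the toy masks are made-up examples.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, gt):
    """Dice score between two binary masks (True = structure, False = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / max(pred.sum() + gt.sum(), 1)

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between the foreground pixel sets of two masks."""
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Toy example with two overlapping square masks:
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[15:45, 15:45] = True
print(dice_coefficient(a, b), hausdorff_distance(a, b))
```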
Table 2. Deep learning segmentation techniques for fetal heart ultrasound images and videos.
Paper | Method | Goal | Dataset | Result
[52] | CNN | Fetal annulus segmentation | 250 cases | DS = 0.78
[46] | CU-Net + SSIM | Fetal heart segmentation | Training: 1284 images; testing: 428 images | DS = 0.856; HF = 3.33; pixel Acc. = 92.9
[53] | CNN | Localization | 2694 examinations | Acc. = 77.8
[47] | Deep learning hybrid approach | Localization of end-systolic (ES) and end-diastolic (ED) frames | 350 pregnant women | Avg. Acc. = 94.84
[45] | VGG-16 + modified RCNN | Detection of anatomical structures | 91 videos from 12 subjects | Acc. = 82.31
[50] | DW-Net | Segmentation of anatomical structures | 895 views | DC = 0.827; PA = 93.3; AUC = 0.990
[54] | Feature learning detection system with multistage residual hybrid attention module | Detection of anatomical structures | 1250 views from 1000 healthy pregnant women | Precision = 0.919; recall = 0.971; F1 score = 0.944; mAP = 0.953
[44] | Dynamic CNN | LV segmentation | 51 sequences | DC = 94.5
[49] | Cropping–segmentation–calibration | Ventricular septum segmentation | 615 images from 211 pregnant women | mIoU = 0.5543; mDC = 0.6891
[55] | Multiframe + cylinder based on ensemble learning | Thoracic wall segmentation | 538 frames from 256 normal cases | mIoU = 0.493
[56] | Supervised object detection with normal data only based on CNN | Detection of structure abnormalities | 349 normal cases; 14 CHD cases | Area under ROC: heart = 0.787, vessel = 0.891
[57] | DeeplabV3 + U-Net | Multidisease segmentation | 602 frames from 301 patients | mIoU = 0.768 ± 0.035; DC = 0.926 ± 0.020 for Ebstein’s anomaly
[58] | CNN-based U-Net | Segmentation of atrioventricular septal defect | AVSD: 337 images; normal: 332 images | DC = 96.02%
[51] | MRCNN | Multiclass segmentation | Images: 764 | Hole detection: mIoU = 76; mAP = 99.48; DC = 87.78
[59] | CNN-based U-Net and Otsu threshold | Fetal heart segmentation | Images: 519 | Mean accuracy = 96.73; error rate = 0.21%
Acc (%): accuracy; SPM (%): segmentation performance measure; detection (%); DS/DC: dice score/coefficient; HF: Hausdorff distance; PA (%): pixel accuracy; mAP: mean average precision; mIoU: mean intersection over union.
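Several deep learning studies in Table 2 report mean intersection over union (mIoU) and pixel accuracy (PA) on multi-class label maps. A minimal sketch of these two metrics is given below; the number of classes, the toy label maps, and the choice to skip classes absent from both maps are illustrative assumptions.

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class label matches the ground truth."""
    return float((pred == gt).mean())

def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes; classes absent from both maps are skipped."""
    ious = []
    for c in range(num_classes):
        p, g = (pred == c), (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class not present in either map
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

# Toy 4-class label maps standing in for chamber segmentations:
gt = np.random.randint(0, 4, (64, 64))
pred = gt.copy()
pred[:8] = 0  # corrupt a band of rows to simulate segmentation errors
print(pixel_accuracy(pred, gt), mean_iou(pred, gt, 4))
```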
Table 3. State-of-the-art decision support systems using fetal heart ultrasound images and videos.
Paper | Method | Dataset | Result | Classes
[60] | Patch-based WF + morphological operation + features from GLCM + BPNN | From fetal US image gallery | Correctly classified: 30 images; not correctly classified: 9 images | 3 (normal, hole in the heart, and defect in the valve)
[61] | PPBMLE + fuzzy connectedness + statistical and texture features + FDR + ANFIS | Normal: 185 images; TA-CHD: 39 images | ROC: 0.8954; F-score: 0.9673 | 2 (normal and truncus arteriosus (TA))
[65] | SIFT + HOF + BoW + SVM | Normal: 240 cases; abnormal: 60 cases | Acc. (avg.): 95.1% | 2 (normal and abnormal)
[66] | Deep learning model | Normal: 493; TOF: 87; HLHS: 105 | Normal vs. TOF: Sen: 75, Spe: 76; normal vs. HLHS: Sen: 100, Spe: 90 | 3 (normal heart, tetralogy of Fallot (TOF), and hypoplastic left heart syndrome (HLHS))
[64] | Texture features based on shearlet + LPCS + SVM | Normal: 221 images; pre-GDM/GDM: 212 images | Acc.: 98.15; PPV: 97.22; Sen: 99.05; Spe: 97.28 | 2 (normal and pre-GDM/gestational diabetes mellitus (GDM))
[63] | DGACNN | 3596 images and video slices | Acc.: 85; AUC: 0.881 | 2 (normal and diseased)
[62] | Atlas-ISTN + area ratios + Gaussian process | Normal: 1560 images; HLHS: 68 images | AUC-ROC: 0.978 | 2 (normal and HLHS)
[68] | Auto-encoding generative adversarial network | Normal: 2224 cases; abnormal: 93 cases | AUC (avg.): 0.81 | 2 (normal and HLHS)
[67] | Ensemble of neural networks | 107,823 images | AUC: 0.99; Sen: 95; Spe: 96; NPV: 100 | 2 (normal and abnormal)
PPV (%): positive predictive value; Sen. (%): sensitivity; Spe. (%): specificity; NPV (%): negative predictive value.
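Several nondeep decision support systems in Table 3, such as [60], feed gray-level co-occurrence matrix (GLCM) texture features [84] to a classifier. The sketch below shows one way to obtain such Haralick-style features with scikit-image and train a simple SVM; the chosen distances, angles, texture properties, classifier, and dummy labels are illustrative assumptions rather than the settings of the cited studies, and the random "images" must be replaced with real labeled US frames.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19 naming
from sklearn.svm import SVC

def glcm_features(image_u8, distances=(1, 2), angles=(0, np.pi / 4, np.pi / 2)):
    """Contrast/homogeneity/energy/correlation computed from a GLCM of an 8-bit image."""
    glcm = graycomatrix(image_u8, distances=list(distances), angles=list(angles),
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Toy training loop on random patches; dummy labels alternate 0 = normal, 1 = abnormal.
rng = np.random.default_rng(0)
X = np.array([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(20)])
y = np.array([0, 1] * 10)
clf = SVC(kernel="rbf").fit(X, y)
```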
Table 4. Mean and standard deviation (SD) of ranked gist features.
Gist Feature | Normal Mean | Normal SD | CHD Mean | CHD SD | p-Value | t-Value
g163 | 0.01452 | 0.012148 | 0.04007 | 0.004682 | 0.000232 | 5.58766
g147 | 0.018778 | 0.016078 | 0.047104 | 0.003378 | 0.000282 | 5.447707
g435 | 0.025952 | 0.023764 | 0.067444 | 0.005714 | 0.000359 | 5.27778
g179 | 0.015138 | 0.014876 | 0.045054 | 0.005949 | 0.000364 | 5.267412
g19 | 0.011812 | 0.010194 | 0.030654 | 0.003365 | 0.000417 | 5.173703
g227 | 0.014975 | 0.010206 | 0.034879 | 0.003961 | 0.000421 | 5.167139
g432 | 0.053077 | 0.002696 | 0.043768 | 0.002758 | 0.000473 | 5.086565
g35 | 0.009664 | 0.007529 | 0.027647 | 0.004663 | 0.000512 | 5.032358
g291 | 0.019373 | 0.015107 | 0.045137 | 0.004374 | 0.000578 | 4.950172
g99 | 0.009573 | 0.00628 | 0.023637 | 0.003746 | 0.000697 | 4.825387
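Table 4 ranks individual gist features by two-sample (Student's) t-test statistics computed between the normal and CHD groups. A minimal sketch of such a ranking with SciPy is shown below; the dummy feature matrices, the 512-feature dimensionality, and the use of Welch's unequal-variance correction are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def rank_features_by_ttest(normal_feats, chd_feats, top_k=10):
    """Rank feature columns by two-sample t-statistic between the two groups.

    normal_feats, chd_feats: arrays of shape (n_subjects, n_features), e.g.,
    gist descriptors of normal and CHD four-chamber images.
    """
    t, p = stats.ttest_ind(normal_feats, chd_feats, axis=0, equal_var=False)
    order = np.argsort(-np.abs(t))  # largest |t| (and hence smallest p) first
    return [(int(i), float(t[i]), float(p[i])) for i in order[:top_k]]

# Dummy example: 40 normal vs. 40 CHD samples with 512 gist features each.
rng = np.random.default_rng(1)
normal = rng.normal(0.02, 0.01, (40, 512))
chd = rng.normal(0.04, 0.005, (40, 512))
for idx, t_val, p_val in rank_features_by_ttest(normal, chd)[:3]:
    print(f"g{idx}: t = {t_val:.3f}, p = {p_val:.2e}")
```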
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
